title | upvoted_answer
---|---
Do the finite subcovers of open coverings have to be open? | This is always the case automatically. The formulation is a bit unfortunate.
We say that a covering $(U_i)_{i \in I}$ is open, if each set $U_i$ is open.
We say that a covering $(V_j)_{j \in J}$ is finite, if the index set $J$ is finite.
Hence, the two adjectives "open" and "finite" refer to completely different things.
Finally, we say that $(V_j)_{j \in J}$ is a subcover of $(U_i)_{i \in I}$ if for each $j \in J$, there is some $i_j \in I$ with $V_j = U_{i_j}$, i.e. if
$$
\{V_j \mid j \in J\} \subset \{U_i \mid i \in I\}.
$$
If $(U_i)_i$ is an open cover and $(V_j)_j$ is a subcover of this cover, then the above implies that each set $V_j$ is automatically open (because it is one of the $U_i$). Hence, every subcover of an open cover is automatically open. |
Any bipartite graph has a matching that covers each vertex of maximum degree | Let $G$ be our bipartite graph with bipartition $X,Y$ and let the maximum degree be $k$.
Let $S_X$ be the set containing exactly all vertices of degree $k$ in $X$ and
$S_Y$ the set containing all vertices of degree $k$ in $Y$.
Let $A\subset S_X$ and $N(A)$ be the neighbours of $A$.
Exactly $k|A|$ edges are leaving vertices of $A$, so $k|A|$ edges must
be arriving at vertices of $N(A)$.
Since each vertex of $N(A)$ has degree at most $k$, $N(A)$ must have at least $|A|$ vertices.
So Hall's condition is satisfied and there is a matching $M_X$ of $S_X$.
In the same way we find a matching $M_Y$ of $S_Y$.
Now let $G'$ be the subgraph that consists of exactly the edges of $M_X$ and $M_Y$.
Let $T_X$ be the vertices of $X$ that are matched using $M_Y$ and
$T_Y$ vertices that are matched using $M_X$ (draw a picture or you will get confused!).
Take an arbitrary vertex $v$ in $S_Y-T_Y$. It has degree $1$ in $G'$, so it must start a path.
This path must start with an edge of $M_Y$.
The next vertex on that path, $w$, will certainly be in $T_X$.
If $w$ is not in $S_X$ the path ends (we just traveled an edge of $M_Y$ and we can only
continue along an edge of $M_X$ if $w$ is in $S_X$).
If $w$ is in $S_X$ we continue along an edge of $M_X$ that takes us to a vertex $u$ of $T_Y$.
And here we repeat the argument: if $u$ is not in $S_Y$ the path ends here, otherwise
we continue to a vertex in $T_X$. So we see that the endpoint $t$ of our path must be in $T_X-S_X$ or in $T_Y-S_Y$.
In the first case the last edge must have been an edge of $M_Y$ and we have an augmenting path for $M_X$.
If we swap the edges on this path from $M_X$ to $M_Y$ and v.v. our 'new' $M_X$ still matches
everything from $S_X$, and everything from $S_Y$ it matched before, but also $v$.
In the second case the last edge must have been an edge of $M_X$. If we swap edges on this path
our 'new' $M_X$ still matches $S_X$, one vertex less in $T_Y-S_Y$, but one vertex more ($v$) in $S_Y-T_Y$.
Now simply perform this operation for all vertices $v$ of $S_Y-T_Y$ and we end up with a
matching that matches all of $S_X$ and all of $S_Y$, as desired. |
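As an empirical sanity check of the Hall's-condition step in this proof (my own sketch, assuming the `networkx` library; the random-graph parameters and seed are arbitrary choices):

```python
import networkx as nx
from itertools import combinations

# Random bipartite graph: nodes 0..6 form X, nodes 7..13 form Y.
G = nx.bipartite.random_graph(7, 7, 0.5, seed=1)
X = {v for v, d in G.nodes(data=True) if d["bipartite"] == 0}
k = max(deg for _, deg in G.degree())  # maximum degree

for side in (X, set(G) - X):
    S = [v for v in side if G.degree(v) == k]  # maximum-degree vertices
    # Hall's condition for S, as in the proof: every A subset of S has |N(A)| >= |A|
    for r in range(1, len(S) + 1):
        for A in combinations(S, r):
            N = set().union(*(set(G[v]) for v in A))
            assert len(N) >= len(A)
print("Hall's condition holds for the maximum-degree vertices on both sides")
```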
Who has the upper hand in a generalized game of Risk? | For an actual game of Risk, where the attacker has 3 armies and the defender has 2, I worked it out this way. I think you can generalize easily for other values of $a$ and $n$. Other values of $b$ will make the counting slightly more difficult, but still within reach (I think). Other values of $k$ however will make my counting method a lot more difficult (I think).
If we view the three attacker's dice as distinguished, then the space of possible rolls for the attacker has size $6^3$. It's like a giant $6\times6\times6$ cube itself - one dimension for each die. For example, the attacker's roll might be a $(2,5,4)$ or a $(4,6,6)$. Each such roll has the same probability: $\frac{1}{6^3}$.
We can break this cube into shells that I will try to define. The smallest shell contains the singleton $(1,1,1)$. The next shell contains all possible rolls where the high die is a 2. (This shell therefore has $7$ elements - surrounding $(1,1,1)$.) Continue in this way up to the outermost shell, which will contain all possible rolls where the highest die is a 6. Geometrically, this last shell is three faces of the cube. Each previous shell can also be viewed as having three faces (with the possible exception of $S_1$.) To be more precise,
$$S_i = \{(a,b,c)|\max\{a,b,c\}=i\}$$
for $i =1\ldots6$.
The probability of tossing into shell $i$ is $$P(S_i)=\frac{i^3-(i-1)^3}{6^3}$$
Each shell $S_i$ has subsets $S_{i,j}$ where $j$ is the second highest roll (possibly equal to $i$). Geometrically, $S_{i,j}$ is composed of line segments running along the three faces of $S_i$. $S_{i,i}$ is composed of the three edges converging to the main corner in $S_i$. For $j$ less than $i$, $S_{i,j}$ is composed of three "arrows", one along each face of $S_i$, each pointing to the main corner of $S_i$. The image (omitted here) illustrates the cube of possible attacker rolls, $S_6$, $S_{6,6}$, and $S_{6,3}$.
We find that for the attacker's roll, with $j$ less than $i$,
\begin{align*}
P(S_{i,i}) & = \frac{3i-2}{i^3-(i-1)^3}\frac{i^3-(i-1)^3}{6^3}=\frac{3i-2}{6^3}\\\\
P(S_{i,j}) & = \frac{6j-3}{i^3-(i-1)^3}\frac{i^3-(i-1)^3}{6^3}=\frac{6j-3}{6^3}\\
\end{align*}
Now we begin to consider the defender's tosses, conditional upon what $S_{i,j}$ the attacker has rolled into. Similar to the cube that we envisioned for the attacker, the defender has a $6\times 6$ square of possible rolls.
Let's start counting the number of armies that the attacker loses. Let $X$ be this random variable, which can take values $0$, $1$, or $2$.
Suppose the attacker has rolled into $S_{i,i}$. If both the defender's dice are less than $i$ (which will happen with $P=\frac{(i-1)^2}{6^2}$), the attacker will lose zero armies. If both of the defender's dice are $\geq i$ (which will happen with $P=\frac{(7-i)^2}{6^2}$) the attacker will lose two armies. In all other situations ($P=\frac{2(i-1)(7-i)}{6^2}$), each player loses one army.
Now suppose that the attacker has rolled into $S_{i,j}$ with $j<i$. The attacker loses no armies if the defender's highest die is less than $i$ and the other die is less than $j$. This defines an "L" shaped region of the defender's square of possibilities. This region has $P=\frac{(i-1)^2-(i-j)^2}{6^2}$. Similarly, the attacker will lose two armies exactly when the defender has his high die at least $i$ and other die at least $j$. This defines an "L" shaped region on the other side of the square. ($P=\frac{(7-j)^2-(i-j)^2}{6^2}$). This leaves three rectangular regions on the square where each player loses one army ($P=\frac{2(j-1)(7-i)+(i-j)^2}{6^2}$).
Altogether now, the probabilities for each value of $X$ can be computed via summation:
\begin{align*}
P(X=0)&=\sum_{i=1}^6\left(P(S_{i,i})\frac{(i-1)^2}{6^2}+\sum_{j=1}^{i-1}P(S_{i,j})\frac{(i-1)^2-(i-j)^2}{6^2}\right)\\\\
&=\sum_{i=1}^6\left(\frac{3i-2}{6^3}\frac{(i-1)^2}{6^2}+\sum_{j=1}^{i-1}\frac{6j-3}{6^3}\frac{(i-1)^2-(i-j)^2}{6^2}\right)\\\\
P(X=1)&=\sum_{i=1}^6\left(P(S_{i,i})\frac{2(i-1)(7-i)}{6^2}+\sum_{j=1}^{i-1}P(S_{i,j})\frac{2(j-1)(7-i)+(i-j)^2}{6^2}\right)\\\\
&=\sum_{i=1}^6\left(\frac{3i-2}{6^3}\frac{2(i-1)(7-i)}{6^2}+\sum_{j=1}^{i-1}\frac{6j-3}{6^3}\frac{2(j-1)(7-i)+(i-j)^2}{6^2}\right)\\\\
P(X=2)&=\sum_{i=1}^6\left(P(S_{i,i})\frac{(7-i)^2}{6^2}+\sum_{j=1}^{i-1}P(S_{i,j})\frac{(7-j)^2-(i-j)^2}{6^2}\right)\\\\
&=\sum_{i=1}^6\left(\frac{3i-2}{6^3}\frac{(7-i)^2}{6^2}+\sum_{j=1}^{i-1}\frac{6j-3}{6^3}\frac{(7-j)^2-(i-j)^2}{6^2}\right)\\\\
\end{align*}
Each of these can be reduced using the sum formulas
\begin{align*}
\sum_{n=1}^m\ 1& =m\\\\
\sum_{n=1}^m\ n&=\frac{1}{2}m(m+1)\\\\
\sum_{n=1}^m\ n^2&=\frac{1}{6}m(m+1)(2m+1)\\\\
\sum_{n=1}^m\ n^3&=\frac{1}{4}m^2(m+1)^2\\\\
\sum_{n=1}^m\ n^4&=\frac{1}{30}m(m+1)(2m+1)(3m^2+3m-1)
\end{align*}
After applying these, we find:
\begin{align*}
P(X=0)&=\frac{2890}{6^5}\\\\
P(X=1)&=\frac{2611}{6^5}\\\\
P(X=2)&=\frac{2275}{6^5}
\end{align*}
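(For readers who want to double-check the three fractions above, here is a brute-force enumeration over all $6^5$ rolls — my own verification sketch, not the counting method used in this answer.)

```python
from fractions import Fraction
from itertools import product

counts = {0: 0, 1: 0, 2: 0}
for attack in product(range(1, 7), repeat=3):      # attacker's 3 dice
    for defend in product(range(1, 7), repeat=2):  # defender's 2 dice
        a = sorted(attack, reverse=True)[:2]       # attacker's two highest
        d = sorted(defend, reverse=True)
        # compare pairwise; the defender wins ties
        counts[sum(ai <= di for ai, di in zip(a, d))] += 1

for x in (0, 1, 2):
    # Fraction reduces, e.g. 2890/7776 prints as 1445/3888
    print(f"P(X={x}) =", Fraction(counts[x], 6**5))
```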
These agree with the decimal probabilities reported here. Thus the expected number of casualties is $$0\cdot\frac{2890}{6^5}+1\cdot\frac{2611}{6^5}+2\cdot\frac{2275}{6^5}=\frac{7161}{6^5}$$ |
When do quadratically integrable functions vanish at infinity? | A trivial sufficient condition is that $f$ is absolutely continuous with $f'\in L_1(\mathbb R)$. Indeed, the absolute continuity means that
$$f(x)=f(0)+\int_0^xf'(t)\,dt\text,$$
and $f'\in L_1(\mathbb R)$ implies that $$\lim_{x\to\pm\infty}\,\int_0^xf'(t)\,dt$$
exist which can be seen from a Cauchy-like criterion, observing that
$$\lim_{N\to\infty}\,\int_{\mathbb R\setminus[-N,N]}\lvert f'(t)\rvert\,dt=0.$$
(The latter follows for instance from Lebesgue's dominated convergence theorem.)
Note that even for continuous $f'$, the condition $f'\in L_1(\mathbb R)$ does not imply the boundedness of $f'$, and thus also does not imply that $f$ is uniformly continuous.
On the other hand, the assumption $f\in L_p(\mathbb R)$ would be used here only to verify that the limits are not different from zero... |
Is the space of pairs with addition $(x,y) + (a,b) = (x+a+1,y+b+1)$ a vector space? | Hint:
$$2(x,y) \stackrel{?}{=} (x,y)+(x,y).$$ |
Question regarding number congruences? | The author derives it from some basic properties of congruences.
(1)
$1000 \equiv -1 \pmod 7$
So it follows that:
$1000^k \equiv (-1)^k \pmod 7$
The same is true for modulo 11 and 13.
(2)
If $a \equiv b \pmod n$, and if
$a \equiv b \pmod m$
and if $(n,m)=1$, it follows that
$a \equiv b \pmod {nm}$
What the author did follows from the basic properties of congruences and of the operations one can do with them. There's nothing fancy here.
So the rule is the following: a number is divisible by 7/11/13 if and only if the sign-alternating sum of its digits (in base 1000) is divisible by 7/11/13.
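A small numeric test of this rule (my own illustration, not part of the original explanation):

```python
def alternating_block_sum(n, base=1000):
    # sign-alternating sum of the base-1000 digits of n: a0 - a1 + a2 - ...
    total, sign = 0, 1
    while n > 0:
        total += sign * (n % base)
        sign = -sign
        n //= base
    return total

for n in range(1, 10**6, 9973):          # an arbitrary sample of integers
    for m in (7, 11, 13):
        assert (n % m == 0) == (alternating_block_sum(n) % m == 0)
print("divisibility rule verified on the sample")
```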
EDIT: This part should read
This gives $n \equiv a_0 - a_1 + a_2-... + (-1)^ka_k \pmod m $ for $m = 7, 11, 13.$
I guess that's what confused you.
In fact (2) is not used by the author.
EDIT: When you have a congruence $ax \equiv b \pmod n$ (a,b,n are given, x is a variable), and if (a,n)=1, there's a theorem saying that this congruence is satisfied by exactly one number x from the set {0, 1, ... , n-1}. The author simply gave you this number saying that it's 1. So based on this theorem, there's no other one but x=1. AFAIK, one can only find the solution (for x) by trying all numbers from the set. Your approach seems nice too, but it won't work always, I think. |
Can derivatives be defined as anti-integrals? | Let $f(x)=0$ for all real $x$.
Here is one anti-integral for $f$:
$$ g(x) = \begin{cases} x &\text{when }x\in\mathbb Z \\ 0 & \text{otherwise} \end{cases} $$
in the sense that $\int_a^b g(x)\,dx = f(b)-f(a)$ for all $a,b$.
How do you explain that the slope of $f$ at $x=5$ is not $g(5)=5$?
The idea works better if we restrict all the functions we ever look at to "sufficiently nice" ones -- for example, we could insist that everything is real analytic.
Merely looking for a continuous anti-integral wouldn't suffice to recover the usual concept of derivative, because then something like
$$ x \mapsto \begin{cases} 0 & \text{when }x=0 \\ x^2\sin(1/x) & \text{otherwise} \end{cases} $$
wouldn't have a derivative on $\mathbb R$ (which it does by the usual definition). |
Primitive roots of unity in $\mathbb{Z}/p$ | The order of the group of units is $p-1$. The order of any element divides the order of the group.
Detail: Suppose that $a$ is a primitive $n$-th root of unity. Then $a^n\equiv 1\pmod{p}$, and there is no positive integer $m\lt n$ such that $a^m\equiv 1\pmod{p}$.
Let $p-1=qn +r$, where $0\le r\le n-1$. Because $a^{p-1}\equiv 1\pmod{p}$ (Fermat's Theorem), we have
$$1\equiv a^{p-1}=a^{qn}a^r=(a^n)^q a^r\equiv a^r\pmod{p}.$$
Thus $a^r\equiv 1\pmod{p}$. If $r\ne 0$, this contradicts the fact that $a$ is a primitive $n$-th root of unity, meaning that $a$ has order $n$. |
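A quick computational sketch of this divisibility (my own check, with an arbitrary small prime):

```python
def order_mod(a, p):
    # multiplicative order of a modulo a prime p (assumes a is not divisible by p)
    k, x = 1, a % p
    while x != 1:
        x = x * a % p
        k += 1
    return k

p = 31
for a in range(1, p):
    assert (p - 1) % order_mod(a, p) == 0  # the order of a divides p - 1
print(f"every order modulo {p} divides {p - 1}")
```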
Find the value of $n$ if $\frac{a^{n+1}+b^{n+1}}{a^n+b^n}=\frac{a+b}{2}$ | You have cancelled $(a-b)$ both sides assuming that $a\neq b$. What you have missed out is that $a=b$ is also a solution to the first method of solving. (Because then $0=0$)
Here's some more explanation:
Let's have a look where you've ended up after the first method:
$$a^n(a-b)=b^n(a-b)$$
Now if we have to cancel out $(a-b)$ on both sides, we must assume that $a\neq b$. This ends us up with:
$$a^n=b^n$$
Now $n$ cannot be $1$ because of the assumption we made to arrive here. Hence $n$ must be $0$
Now let's go back to the point before we cancelled $(a-b)$. Note that if $a=b$, the equality is respected:
$$0=0$$
Hence, $a=b$ is another solution to this. From here, If we consider it in the form of : $a^n=b^n$, we get $n=1$.
Let's walk over to method $2$ (again, just before the cancellation):
$$a(a^n-b^n)=b(a^n-b^n)$$
Again, assuming $a^n\neq b^n$, we end up in what we already found earlier:
$$a=b$$
Now looking at it in the eyes of : $a^n=b^n$, $n$ cannot be $0$ because of our underlying assumption that $a^n\neq b^n$. Hence $n$ should be $1$
And for the last time, if we don't cancel , but simply observe, $a^n=b^n$ is also a solution , leading us to $n=0$
Thus, both the methods yield the same result . (The beauty of Mathematics)
P.S : You can place the values $0$ and $1$ for $n$ in the question and see that everything checks out like it should.
P.P.S : I have assumed that : $a=b \implies n=1$ and $a^n=b^n\implies n=0$ . You can do it the other way round too. |
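Here is a minimal sketch of that plug-in check with exact rational arithmetic (the sample values of $a$ and $b$ are my own):

```python
from fractions import Fraction

def lhs(a, b, n):
    return Fraction(a**(n + 1) + b**(n + 1), a**n + b**n)

assert lhs(3, 5, 0) == Fraction(3 + 5, 2)   # n = 0 works even for a != b
assert lhs(4, 4, 1) == Fraction(4 + 4, 2)   # n = 1 works when a = b
print("both cases check out")
```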
Explaining why $\displaystyle {d \over dt}e^{\int_0^tA(s)\,ds} \ne A(t)e^{\int_0^tA(s)\,ds}$ by definition | Notice that, according to Leibniz's rule,
$$\frac {\Bbb d} {\Bbb d t} \left(\int \limits _0 ^t A(s) \ \Bbb d s \right) ^n = \frac {\Bbb d} {\Bbb d t} \underbrace {\int \limits _0 ^t A(s) \ \Bbb d s \dots \int \limits _0 ^t A(s) \ \Bbb d s} _{n \text{ times}} = \color{red} {A(t)} \underbrace{ \int \limits _0 ^t A(s) \ \Bbb d s \dots \int \limits _0 ^t A(s) \ \Bbb d s} _{n-1 \text{ times}} + \int \limits _0 ^t A(s) \ \Bbb d s \color{red} {A(t)} \underbrace{ \int \limits _0 ^t A(s) \ \Bbb d s \dots \int \limits _0 ^t A(s) \ \Bbb d s} _{n-2 \text{ times}} + \dots \underbrace{\int \limits _0 ^t A(s) \ \Bbb d s \dots \int \limits _0 ^t A(s) \ \Bbb d s} _{n-1 \text{ times}} \color{red} {A(t)} ,$$
where $A(t)$ occupies, in turn, all the positions from $1$ to $n$ in that product, and it is not at all obvious why $A(t)$ should commute with $A(s)$ for $0 \le s < t$.
If, on the other hand, these matrices do commute, then indeed, the result is as you have written, namely
$$\color{red} {A(t)} \ n \left( \int \limits _0 ^t A(s) \ \Bbb d s \right) ^{n-1} .$$ |
How does a vector space $V$ act on the polynomial ring $\text{Sym}(V^{\ast})$? | If $V$ is over the field $k$, you should compare your situation to the point $a \in k^n$ acting on a polynomial $p \in k[x_1, \ldots, x_n]$. Here the action is evaluation: replace each variable $x_i$ with the coordinate $a_i$. This should tell you that you're doing the right thing in the coordinates you picked, but I think choosing coordinates is unnecessary to answer this question (or at least you should answer it more abstractly before picking coordinates).
An arbitrary element of $\mathrm{Sym}(V^*)$ may be split into graded parts which are either a constant, or a product of linear functionals $f_1 \cdots f_d$. The action of $v \in V$ on $f_1 \cdots f_d$ should be $f_1(v) \cdots f_d(v)$. Now extend this action linearly. |
Finding the minimum number of machines working | Your result should be $n \ge 2.17,$ so you need $n=3$. For $n=2$ the chance of failure is $0.2^2=0.04,$ which is too high. |
Geometrical interpretation of $\frac{\sin(\alpha)}{\alpha}$ | Based on the previous hints, in a circle of radius $R$, take lines from the center, forming an angle $2\alpha$. The lines intersect the circle at $A$ and $B$. Then the length of the arc between them is $2R\alpha$, and the length of the chord is $2R\sin\alpha$. So $$\frac{\sin\alpha}{\alpha}=\frac{|AB|}{|\overparen{AB}|}$$ is the ratio of the chord length to the arc length for two points on the circle, forming an angle $2\alpha$ at the center.
**As requested** I've added an image. The ratio is the length of the blue line to the length of the red arc. |
Show that $f$ satisfies a Lipschitz condition on $E$ | So $f \in C^1(E)$. Hence, $f'$ exists and is continuous. By continuity of $f'$ and compactness of $E$, the function $f'$ is bounded on $E$, i.e. there exists a number $M$ such that $|f'| \leq M$ on $E$. Pick any two points $x$ and $y$. Because the line segment joining them lies inside $E$, by convexity, the mean value theorem applies, i.e. there exists a point $z$ on the line joining $x$ and $y$ such that $f(x)-f(y) = f'(z)(x-y)$. Since $|f'(z)| \leq M$, we get that $|f(x)-f(y)| \leq M|x-y|$. Hence, we are done. |
Variational Representation for Kullback-Leibler Divergence | The partial derivatives of the right-hand side's expression with respect to $a$ and $b$ are
$$q - \frac{pe^a}{pe^a + (1-p) e^b}$$
and
$$1-q - \frac{(1-p) e^b}{pe^a + (1-p)e^b}.$$
These derivatives are zero when
$$p(1-q) e^a = (1-p) q e^b$$
i.e.
$$b-a = \log \frac{p(1-q)}{q(1-p)}.$$
You can actually show that the value of $aq+b(1-q) - \log(e^a p + e^b (1-p))$ depends only on the value of $b-a$, so any choice with the right difference (such as $a = \log \frac{q}{p}$ and $b = \log \frac{1-q}{1-p}$) will work. |
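A numeric sketch for this Bernoulli case (my own illustration; it checks both the optimizer and the shift-invariance in $a$ and $b$):

```python
import numpy as np

p, q = 0.3, 0.7
kl = q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))  # KL(q || p)

def value(a, b):
    return a * q + b * (1 - q) - np.log(p * np.exp(a) + (1 - p) * np.exp(b))

a, b = np.log(q / p), np.log((1 - q) / (1 - p))
assert np.isclose(value(a, b), kl)               # the optimizer attains the KL value
assert np.isclose(value(a + 2.5, b + 2.5), kl)   # only the difference b - a matters
print("variational value equals KL:", kl)
```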
Infinitely many solutions of a linear system of equations | Adding the first and second equations and subtracting the third, we have $(k-4)(y-z) = 0$. Thus we require $k = 4$ to have infinitely many solutions. |
What's wrong with my example? | Yes, there is something wrong: the point $p=(i,-1)$ is in $V$ since $(-1)-(i)^2=0$, but $1+y(p)=1+(-1)=0$, so $f$ is not regular at $p$. |
Video lectures for Commutative Algebra | Here are A.V Jayanthan's lectures based on Atiyah-McDonald : http://nptel.ac.in/courses/111106098/ which builds upto the structure theorem of Artinian Rings and Hilbert's Nullstellensatz. |
Proof of "Singular values of a normal matrix are the absolute values of its eigenvalues" | Since you work over $\mathbb{C}$, if the singular values are distinct you can only conclude that $u = e^{i\theta}v$ but this is enough.
If the singular values are not distinct, I doubt you'll find a "completely elementary" proof which avoids something like the spectral theorem as you need to know something about the eigenvalues of $AA^H$ and their relation to the eigenvalues of $A$.
Using the spectral theorem, I can offer the following argument. If $A$ is normal and $v$ is an eigenvector of $A$ with eigenvalue $\lambda$ then $v$ is also an eigenvector of $A^H$ with eigenvalue $\overline{\lambda}$. Denote the eigenvalues of $A$ by $\lambda_1, \ldots, \lambda_n$. Since $A$ is diagonalizable, we can choose a basis $(v_1, \ldots, v_n)$ with $Av_i =\lambda_i v_i$. Then we have
$$ AA^H(v_i) = A(\overline{\lambda_i} v_i) = \lambda_i \overline{\lambda_i} v_i = |\lambda_i|^2 v_i $$
which shows that the eigenvalues of $AA^H$ are $|\lambda_1|^2, \ldots, |\lambda_n|^2$. Since you have shown that $\sigma^2$ is an eigenvalue of $AA^H$, we have $\sigma = |\lambda_i|$ for some $1 \leq i \leq n$. |
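A numerical sanity check of the statement (my own sketch; the normal matrix is manufactured as $Q\Lambda Q^H$ with a random unitary $Q$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# build a normal matrix A = Q diag(lambda) Q^H with Q unitary
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
lam = rng.normal(size=n) + 1j * rng.normal(size=n)
A = Q @ np.diag(lam) @ Q.conj().T

singular_values = np.sort(np.linalg.svd(A, compute_uv=False))
abs_eigenvalues = np.sort(np.abs(np.linalg.eigvals(A)))
print(np.allclose(singular_values, abs_eigenvalues))  # True
```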
How to solve system of equations? | We'll prove that $(2,1)$ is the only real solution. First note that if $y = 0$, the second equation is unsolvable, so $y\neq 0$.
Solving the second equation for $x$ via the quadratic formula gives \begin{align*} x &= \frac{-2y^2 \pm \sqrt{4y^4-4y(y^3-9)}}{2y}\\ &= \frac{-y^2\pm 3\sqrt{y}}{y} \\ &= -y \pm 3y^{-\frac{1}{2}}.\end{align*}
In particular, $y \geq 0$. Subbing this into the first equation gives \begin{align*}7 &= \left(-y \pm 3y^{-\frac{1}{2}} \right)^3 y - y^4 \\ &= -2y^4\pm 9y^{\frac{5}{2}}-27y\pm 27y^{-\frac{1}{2}}.\end{align*}
This is equivalent to $$ 0 = -2y^{\frac{9}{2}} \pm 9y^{\frac{6}{2}}-27y^{\frac{3}{2}} -7y^\frac{1}{2} \pm 27.$$
Now, substitute in $z = \sqrt{y}$, which is possible since we already know $y\geq 0$. This gives $$0 = -2z^9 \pm 9z^6 - 27z^3 -7z \pm 27$$
If we choose the $-$ sign, then clearly there is no solution with $z \geq 0$ (every term is negative), so we must have $$0 = -2z^9 + 9z^6 - 27z^3-7z+27.$$
Now, using the rational roots theorem, maple, direct inspection, etc, we see $z=1$ is a solution.
I claim that $f(z)=-2z^9 + 9z^6 - 27z^3 - 7z + 27$ is always decreasing, so $z=1$ is the only solution.
To see this, compute the derivative, getting $f'(z)=-18z^8 + 54z^5-81z^2 - 7 = 9(-2z^8 + 6z^5-9z^2 - \frac{7}{9})$. I claim this is always negative. The $9$ out in front doesn't affect this, so we'll ignore it in intermediate computations.
Why is $f'(z)$ always negative? Let's find its maximum. So, take another derivative (still ignoring the factor of $9$), getting $f''(z)=-16z^7+30z^4-18z = 2z(-8z^6 + 15z^3 - 9)$. Of course, this is $0$ when $z=0$, but the other part is quadratic in $z^3$ with discriminant $15^2-4\cdot8\cdot9=-63<0$, so we see that it's never $0$. Since $f'(z)$ goes to $-\infty$ as $z\rightarrow \pm \infty$, it follows that if it only has one critical point, this must be a max. So, the max of $f'(z)$ occurs when $z = 0$, where $f'(0) = -7 < 0$.
Since $f'(z) < 0$, $f(z)$ is always decreasing, so $z= 1$ must be its only solution.
When $z=1$, $\sqrt{y} = 1$ so $y=1$, and subbing back in shows that $x=2$. |
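As a numeric cross-check (mine, not part of the argument), `numpy.roots` confirms that $z=1$ is the only real root of $-2z^9+9z^6-27z^3-7z+27$:

```python
import numpy as np

# coefficients of -2z^9 + 9z^6 - 27z^3 - 7z + 27, highest degree first
coeffs = [-2, 0, 0, 9, 0, 0, -27, 0, -7, 27]
roots = np.roots(coeffs)
print(roots[np.abs(roots.imag) < 1e-9].real)  # [1.] -- so y = 1 and x = 2
```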
Confusion about the splitting field of $x^3-7$ | The splitting field $K$ of $x^3-7$ is $\mathbb Q(\sqrt[3]7, \omega)$, where $\omega=-\frac{1}{2}+i\frac{\sqrt 3}{2}$ is a primitive cubic root of the unity, a root of $x^2+x+1$.
There is no need to decompose $\omega$ into $i$ and $\sqrt 3$.
It is true that $L =\mathbb Q(\sqrt[3]7, i, \sqrt 3)$ contains all the roots of $x^3-7$, but $L$ is not the smallest such field. That is $K$. |
Proving height of water as ice melts in it is constant | Your error is here:
Change in mass of ice $=x^3\rho_i-\alpha^3 x^3\rho_i=x^3 (1-\alpha)^3\rho_i$
That last step is not true; in fact $x^3\rho_i-\alpha^3 x^3\rho_i=x^3 (1-\alpha^3)\rho_i$, and $(1-\alpha)^3\neq 1-\alpha^3$ in general. |
Continuously differentiable function (multivariable) on a compact set implies Lipschitz | If $a$, $b\in[-1,1]^n$ then the segment
$$\sigma:\quad t\mapsto (1-t)a+tb\qquad(0\leq t\leq1)$$
connecting $a$ with $b$ is contained in this cube as well. It follows that the auxiliary function
$$\phi(t):=f\bigl((1-t)a+tb\bigr)$$
is well defined and $C^1$ on $[0,1]$. We therefore have
$$f(b)-f(a)=\phi(1)-\phi(0)=\int_0^1\phi'(t)\>dt\ .$$
Since by the chain rule $\phi'(t)=df\bigl((1-t)a+tb\bigr).(b-a)$ we obtain
$$|f(b)-f(a)|\leq\int_0^1\|df\bigl((1-t)a+tb\bigr)\|\>|b-a|\>dt\leq M\,|b-a|\ ,$$
whereby $$M:=\max_{0\leq t\leq 1}\|df\bigl((1-t)a+tb\bigr)\|\leq\max_{x\in[-1,1]^n}\|df(x)\|\ .$$ |
Determining the Cartesian product of a set | Hint: First, figure out what $U$ is. Second, figure out what $B\times B$ is. Finally, use the first two pieces of information to figure out what $\overline{B\times B}$ is.
Each part is a matter of applying the relevant definition. |
Is it possible in a optimzation problem the function value converge but the variable does not converge? | Sure. Take Newton's method on the function $f(x) = \frac{1}{x}$, on the domain $(0, \infty)$. Take a first estimate $x_0 = 1$, and define
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = x_n - \frac{\frac{1}{x_n}}{-\frac{1}{x_n^2}} = 2x_n.$$
Inductively, we see $x_n = 2^n$. Note that $f(x_n) = \frac{1}{2^n} \to 0$, the infimal value of the function, but the iterates $x_n$ fail to converge. |
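A few iterations make the phenomenon explicit (a quick sketch; any starting point $x_0 > 0$ behaves the same way):

```python
f = lambda x: 1.0 / x
df = lambda x: -1.0 / x**2

x = 1.0
for n in range(1, 11):
    x = x - f(x) / df(x)   # simplifies to x_{n+1} = 2 * x_n
    print(n, x, f(x))      # f(x_n) = 2^{-n} -> 0 while x_n = 2^n diverges
```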
Proof of $\int_Mf\nabla_i\nabla_jh_{ij}\mathsf{dvol}_g=\int_M(\nabla_i\nabla_j f)h_{ij}\ \mathsf{dvol}_g.$ | Hint: The divergence theorem, in abstract index notation, reads
$$
\int_M\nabla_iX^idV_g=\int_{\partial M}N_iX^idV_{g|_{\partial M}}
$$
This means that on a compact manifold without boundary, you can integrate by parts exactly as you would in single variable calculus, by moving a $\nabla_i$ from one term to another and reversing the sign, regardless of how indices are contracted. |
Proof - for all integers $y$, there is integer $x$ so that $x^3 + x = y$ | Observe that $x^3+x=x(x^2+1)$ is a composite number if $x>1$, so can never represent a prime. |
What can be said about functions of constant Hessian determinant? | We can say that $f$ is a quadratic polynomial (same holds in any $\mathbb R^n$, not only in $\mathbb R^2$; but then convexity of $f$ must be added as an assumption). This is sometimes called a Liouville-type theorem (as it expresses the rigidity of global solutions) or Bernstein-type theorem (note the parallel with Bernstein's theorem that global solution of the minimal surface equation are affine). The result belongs to a long line of development involving the names of Jörgens, Calabi, Pogorelov, Cheng, Yau, Caffarelli... References:
The Monge-Ampère Equation by C. E. Gutiérrez, Chapter 4. Carefully written, but the result is stated for $u\in C^4$ only.
A Liouville theorem for solutions of the Monge–Ampère equation with periodic data by L. Caffarelli and Y. Li. Proof for $u\in C^2$, references to earlier literature.
On the improper convex affine hyperspheres by A. V. Pogorelov, Geometriae Dedicata 1 (1972), no. 1, 33–46. Distinctive geometric style of Pogorelov's writing is refreshing.
Some aspects of the global geometry of entire space-like submanifolds by
J. Jost and Y. L. Xin. Puts the result in the context of the aforementioned Bernstein's theorem.
Affine Bernstein Problems and Monge-Ampère Equations by An-Min Li, Chapter 4. Another book addressing the subject, but unfortunately the proof is more of an outline. |
Polygonal line in a square | Assume no segment in the polygonal line is vertical or horizontal, else the problem is trivial. Let $f(u)$ be the number of points of the polygonal line on the segment $x=u$; let $g(v)$ be the number of points of the polygonal line on the segment $y=v$. You should be able to get the lower bound
$$
\int_0^1 f(u)\,du + \int_0^1 g(v)\,dv > 1001
$$
by considering the contribution from a given segment of the polygonal line to both integrals. This means there is either a $u$ for which $f(u)>1001/2$ or a $v$ for which $g(v)>1001/2$. So indeed you get $501$ points, not just $500$. |
Show the eigenvalues of constant block symmetric minus special matrix | You are assuming $A$ is diagonalizable, i.e. has $c$ linearly independent eigenvectors.
Let $\lambda$ be any eigenvalue of $A$, corresponding to one of these eigenvectors $v$. Consider
vectors of the block form
$$ V(t) = \pmatrix{ t_1 v\cr t_2 v\cr \ldots \cr t_K v} $$
where $t \in \mathbb R^K$.
The linear map $t \mapsto U' V(t)$ has rank at most $p$, so there are at least $K-p$ linearly independent $t$ for which $U' V(t) = 0$. Then
$M V(t) = \lambda V(t)$, so $V(t)$ is an eigenvector of $M$ with eigenvalue $\lambda$.
Considering all $v$ and all such $t$ for each $v$, this gives you $c(K-p)$ linearly independent eigenvectors of $M$ whose eigenvalues are the eigenvalues of $A$. |
Definition of Critical Point at endpoints | What is the concrete definition of critical point?
Before giving the concrete definition, it's better to have a rough idea about critical points.
Critical points are points at which an extremum could possibly occur.
Source: © CalculusQuest™
endpoints: (illustrations omitted; image sources: Solomon Xie's story, http://tutorial.math.lamar.edu/Classes/CalcI/MinMaxValues_Files/image002.png)
Therefore, it makes sense to include three types of points in the domain:
points at which the derivative of $f$ vanishes (local min/max, points of inflection)
endpoints of domain
points at which the derivative of $f$ is undefined (corner, cusp, points of discontinuity)
On a math test, we were instructed to find critical points of the function $f(x) = x\sqrt{30-x^2}$.
The given domain of $f$ is not clearly stated in the question; the open interval $(-\sqrt{30},\sqrt{30})$ is just OP's perception. Taking OP's comment and the 3rd paragraph of the question body into account, it seems that OP has mistaken the domain of $f$, which should actually be the closed and bounded interval $[-\sqrt{30},\sqrt{30}]$.
Find the critical points type by type.
$f'(x) = \sqrt{30 - x^2} - x \, \dfrac{x}{\sqrt{30 - x^2}} = \dfrac{30 - 2x^2}{\sqrt{30 - x^2}}$, so $f'(x) = 0$ iff $x = \pm\sqrt{15}$
endpoints of the domain of $f$: $x = \pm\sqrt{30}$
The denominator of $f'(x)$ vanishes iff $x^2 = 30$, i.e. at the endpoints of the domain of $f$.
Conclusion: The critical points of $f$ are $x = \pm\sqrt{15}, \pm\sqrt{30}$.
Alternative solution 1
Observe that $f$ is an odd function, since it's a product of an odd function $x \mapsto x$ and an even function $x \mapsto \sqrt{30 - x^2}$. Therefore, it suffices to find the global maximum of $f$. Since $f$ vanishes at the endpoints and the midpoint of the domain, the global extrema are actually local extrema. The fact that $f$ is odd allows us to concentrate on nonnegative real numbers and apply the AM–GM inequality
$$\frac{a^2 + b^2}{2} \ge ab$$ with $a = x$ and $b = \sqrt{30 - x^2}$.
\begin{align}
\frac{x^2 + (30 - x^2)}{2} \ge& x \sqrt{30 - x^2} \\
x \sqrt{30 - x^2} \le& 15
\end{align}
Equality holds iff $a = b$.
\begin{align}
\text{i.e. } \quad x &= \sqrt{30 - x^2} \\
x^2 &= 30 - x^2 \\
x &= \pm\sqrt{15}
\end{align}
This elementary solution in algebra-precalculus is useful in contest-math since you don't need to use calculus.
Alternative solution 2
It's even simpler to make use of quadratics. Note that in the right half of the domain, everything is nonnegative, so squaring $f$ won't affect the answer. As a result, consider $$(f(x))^2 = x^2 (30 - x^2) = -(x^2 - 15)^2 + 15^2 \le 15^2.$$ This gives the same maximizer $x = \sqrt{15}$ as expected. |
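A quick numeric confirmation of the maximizer found above (my own sketch; the grid resolution is arbitrary):

```python
import numpy as np

x = np.linspace(-np.sqrt(30), np.sqrt(30), 2_000_001)
f = x * np.sqrt(np.clip(30 - x**2, 0, None))  # clip guards against tiny negatives
print(x[np.argmax(f)] ** 2)  # ~ 15, i.e. the maximizer is x = sqrt(15)
print(f.max())               # ~ 15, matching the bound from AM-GM
```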
How can I show that $f$ is continuous in $x_0$ iff $\mu(C_{x_0})=0$? | Lemma 1. There are at most countably many points of positive measure.
Proof. Let $A$ be the set of all points with positive measure and assume $A$ is uncountable. For a number $M\in\mathbb{R}$ define
$$A_M=\{x\in A\ |\ \mu(\{x\}) > M\}$$
If $A$ is uncountable then there exists $M$ such that $A_M$ is infinite. Indeed, otherwise
$$\bigcup_{n\in\mathbb{N}}A_{\frac{1}{n}} = A$$
would be countable as a countable union of finite sets. Let $M$ be such that $A_M$ is infinite. Then pick a sequence of distinct points $(a_n)\subset A_M$. You can do that since $A_M$ is infinite. Then
$$\mu(\bigcup\{a_n\})=\sum\mu(\{a_n\})\geq\sum M=\infty$$
and thus $\bigcup\{a_n\}$ has an infinite measure. Contradiction with finiteness of $\mu$. $\Box$
"$\Rightarrow$" Assume that $\mu(C_x)\neq 0$ for some $x\in\mathbb{R}^2$. Consider a sequence $x_n$ converging to $x$. Pick that sequence in such a way that $x_n\neq x_m$ for $n\neq m$ (it's quite simple to construct one).
Since $f$ is continuous, $f(x_n)$ converges to $f(x)=\mu(C_x)\neq 0$. In particular for almost all $n$ we have $f(x_n)\neq 0$. Without loss of generality assume that $f(x_n)\neq 0$ for all $n$.
As you've noted, $C_x$ and $C_y$ have at most two common points if $x\neq y$. We can refine our sequence $x_n$ even more: pick $x_n$ in such a way that $C_{x_n}\cap C_{x_m}$ is of measure $0$ (whenever $n\neq m$). This can be done since $C_x\cap C_y$ has at most 2 points and there are at most countably many points with positive measure (Lemma 1), as opposed to uncountably many points in $\mathbb{R}^2$. Put
$$P=\bigcup_{n\neq m} C_{x_n}\cap C_{x_m}$$
Since $P$ is a countable union of finite sets of measure $0$, it is countable and of measure $0$. Define
$$B_n=C_{x_n}-P$$
(the difference of sets). Note that each $B_n$ is actually $C_{x_n}$ minus a countable set of measure $0$. So it is Borel and its measure doesn't change:
$$\mu(B_n)=\mu(C_{x_n})=f(x_n)\neq 0$$
Also by definition $B_n\cap B_m=\emptyset$ for $n\neq m$ (that's because we've removed all common points). In particular countable additivity of $\mu$ applies and so
$$\mu(\bigcup B_n)=\sum\mu(B_n)=\infty$$
The last equality comes from the fact that $\mu(B_n)=f(x_n)$ converges to something positive (and thus the series is divergent to infinity).
Contradiction since $\mu$ is finite. $\Box$
"$\Leftarrow$" Assume that $\mu(C_x)=0$ for some $x$ and assume that $f$ is not continuous at $x$. Then there exists a sequence $(x_n)$ convergent to $x$ such that $f(x_n)$ is not convergent to $f(x)=0$. Since $f(x_n)\geq 0$ is not convergent to $0$, there exists a positive constant $M>0$ and a subsequence $f(x_{n_k})$ such that $f(x_{n_k})>M$. Without loss of generality we may assume that $f(x_n)$ is bounded from below by a positive constant $M$.
Now consider the set
$$P=\bigcup_{n\neq m} C_{x_n}\cap C_{x_m}$$
again. Now put
$$B_n=C_{x_n} - P$$
Obviously $B_n$ are pairwise disjoint and
$$\mu(B_n) \geq \mu(C_{x_n}) - \mu(P) \geq M-\mu(P)$$
Let $M'=M-\mu(P)$. Note that $M'$ is constant and does not depend on $n$. So due to countable additivity we get
$$\mu(\bigcup B_n)=\sum\mu(B_n)\geq \sum M'= \infty$$
EDIT: Actually the last equality $\sum M'=\infty$ does not have to hold, since $M'$ might be $0$. Uhhh... TODO. (A possible fix: as in the first part, refine the sequence $(x_n)$ so that the pairwise intersections avoid the countably many points of positive measure; then $\mu(P)=0$, so $M'=M>0$ and the sum does diverge.) |
Function Projection: Orthogonal Polynomials | After looking at the paper, I think that the author is simply abusing notation here. Simply by virtue of words "by projecting...", the author means that $f(x)$ is to be approximated by the expression on the right-hand-side. The point is that $f(x)$ can be some arbitrary smooth function, and we have a quadrature that works on polynomials with weight $w(x)$, so we wish to express $f(x)$ approximately in terms of a polynomial multiplied by $w(x)$.
In particular, suppose $p_0, \ldots, p_n$ are orthonormal polynomials with respect to the weight $w(x)$; that is, we have inner product $\langle f,g \rangle = \int f(x) g(x) w(x) \, dx$. Then the orthogonal projection of $f(x)/w(x)$ onto that polynomial space is given by
\begin{align}
\mathrm{Proj} (f(x)/w(x)) & = \sum_{j=0}^n \langle f/w, p_j \rangle p_j(x) \\
& = \sum_j p_j(x) \int [f(x')/w(x')] p_j(x') w(x') \, dx' \\
& = \sum_j p_j(x) \int f(x') p_j(x') \, dx'
\end{align}
Now, by virtue of being an orthogonal projection, this means that so long as $f(x)$ and its higher derivatives are well-behaved, $f(x)/w(x)$ is "close" to the above expression (in an appropriate norm). Therefore, we have, informally,
$$f(x)/w(x) \approx \sum_j p_j(x) \int f(x') p_j(x') \, dx' $$
Multiplying by $w(x)$ on both sides yields the "equality" (which is really an approximation) that the author derived. |
How do I find $\lim_{(x,y) \to (0,0)} \frac{\sqrt{xy + 1} -1}{x + y}$? | In the OP, it appears that you have analyzed the limit along straight-line and parabolic trajectories.
Let's analyze the case for which the limit is approached along the path $y=-x+x^\alpha$. Then, we have
$$\frac{xy}{x+y}=\frac{-x^2+x^{1+\alpha}}{x^\alpha}=x-\frac{1}{x^{\alpha-2}}\tag1$$
What can you conclude now?
For the record, if the limit is approached along the path $y=-x+x^2$, then with $\alpha=2$ in $(1)$, we find
$$\frac{xy}{x+y}=\frac{-x^2+x^3}{x^2}=x-1\to -1\ne 0$$
from which we can conclude the limit fails to exist. |
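Numerically, the original quotient behaves like half of $\frac{xy}{x+y}$ near the origin (since $\sqrt{u+1}-1\approx u/2$), so along $y=-x+x^2$ it tends to $-\frac12\neq 0$, which already rules out the limit. A sketch of that check (my own):

```python
import numpy as np

for x in (1e-2, 1e-3, 1e-4, 1e-5):
    y = -x + x**2                                 # the path y = -x + x^2
    print(x, (np.sqrt(x * y + 1) - 1) / (x + y))  # tends to -1/2, not 0
```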
what graph have exactly one positive eigenvalue | If we adjoin isolated vertices to a complete multipartite graph, there is still only one positive eigenvalue. So we aim to prove a connected graph $X$ with exactly one positive eigenvalue is complete multipartite. Observe that if there is an induced $P_4$ then, by interlacing, $X$ has at least two positive eigenvalues. Therefore $X$ has diameter at most two. If the diameter is one, we’re complete, and we’re done.
Suppose $a$ and $b$ are two non-adjacent vertices. If $a$ has a neighbour not adjacent to $b$ then this neighbour, along with an $ab$-path of length two gives an induced $P_4$. So any two non-adjacent vertices have exactly the same neighbours. Now it is easy to prove $X$ is complete multipartite. |
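A spot check of the forward direction — a complete multipartite graph has exactly one positive adjacency eigenvalue (my own sketch; the part sizes are arbitrary, and the tolerance guards against round-off):

```python
import networkx as nx
import numpy as np

G = nx.complete_multipartite_graph(2, 3, 4, 5)
eig = np.linalg.eigvalsh(nx.to_numpy_array(G))
print(sum(eig > 1e-9))  # 1 -- exactly one positive eigenvalue
```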
Trying to show that an analytic function maps open sets to open sets | Yes you can apply the inverse function theorem. There are however other methods.
$f^\prime$ is indeed continuous. Remember that an analytic function can be expanded locally to a power series. And a power series is indefinitely complex differentiable. |
Show $\large\sum\limits_{j=0}^{r}\binom{j+k-1}{k-1}=\binom{r+k}{k}$ | For $r$ balls in $m$ urns the number of possible distributions is $\left( \array{ r + m - 1 \\ m - 1 } \right)$, not $\left( \array{ r + m - 1 \\ m } \right)$. So we see that on the left side of the equation, $\left( \array{ j + k - 1 \\ k - 1 } \right)$ is the number of ways you can distribute $j$ balls into $k$ urns, and on the right side, $\left( \array{ r + k \\ k } \right)$ is the number of ways you can distribute $r$ balls into $k + 1$ urns.
Your instinct that you should be summing $b$ over the range $0$ to $r$ was a good one, but due to the error in the formula for $r$ balls in $m$ urns, it seems you were led into trying to set $m = k$. Using the correct formula, $m = k + 1$ becomes a more obvious tactic. You have set up enough parts of the proof already that I think this will enable you to complete it. |
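A brief sketch verifying the identity for small parameters (my own check):

```python
from math import comb

for k in range(1, 8):
    for r in range(12):
        assert sum(comb(j + k - 1, k - 1) for j in range(r + 1)) == comb(r + k, k)
print("identity verified for small k and r")
```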
Trying to derive the Laurent Expansion of $\wp$ (Weierstrass Elliptic function) | $$ \wp(z)=\frac{1}{z^2}+\sum_{\omega\neq 0}\Big[\frac{1}{(z-\omega)^2}-\frac{1}{\omega^2}\Big]=\frac{1}{z^2}+\sum_{\omega\neq 0}\Big[\frac{1}{\omega^2}\frac{1}{(1-\frac{z}{\omega})^2}-\frac{1}{\omega^2}\Big] $$
$$=\frac{1}{z^2}+\sum_{\omega\neq 0}\frac{1}{\omega^2}\sum_{m=1}^{\infty}(m+1)\Big(\frac{z}{\omega}\Big)^{m}=\frac{1}{z^2}+\sum_{m=1}^{\infty}\Big[\sum_{\omega\neq 0}\frac{m+1}{\omega^{m+2}}\Big]z^m$$
using the fact that
$$ -1+\frac{1}{(1-x)^2}=\sum_{m=1}^{\infty}(m+1)x^m $$
for $|x|<1$, and using the absolute convergence of the $\wp$ function to interchange the sums.
Finally, if $m$ is odd then $\omega^{m+2}=-(-\omega)^{m+2}$, hence
$$ \sum_{\omega\neq 0}\frac{m+1}{\omega^{m+2}}=0 $$
because for every non-zero element $\omega$ there is a corresponding term $-\omega$ in the sum. This shows that all the odd powers vanish in the Laurent expansion. |
Smallest integer such that $\dfrac{C_k}{n+k+1}\binom{2n}{n+k}$ is an integer | Think about it base $10$ rather than base $p$ for a moment. To say $n+k+1=p^lm$ is like saying $n+k+1$ is some number $m$ followed by $l$ zeros, e.g. 120000. Saying $k+1<\frac{p}{2}$ is like $k+1<5$, and it's saying that, for example, $120000 - (k+1) = 11999x$ where $x\geq\frac{11}{2}$ is a digit (not a multiplication). There's nothing about prime numbers here. |
Showing that two infinite series converge to the same value | $$\frac{1}{2n(2n-1)}=\frac{1}{2n-1}-\frac1{2n}$$ |
Bijection between $\mathbb R^\mathbb N$ and $\mathbb R$ | $\mathbb{R} \cong 2^{\mathbb{N}}$ and $\mathbb{N}\times \mathbb{N} \cong \mathbb{N}$.
So $\mathbb{R}^{\mathbb{N}} \cong (2^{\mathbb{N}})^{\mathbb{N}} \cong 2^{\mathbb{N} \times \mathbb{N}} \cong 2^{\mathbb{N}} \cong \mathbb{R}$ |
How does a complex power series behave on the boundary of the disc of convergence? | A series may converge (absolutely and therefore uniformly) at all points of the boundary of convergence, consider $\displaystyle \sum \frac{z^n}{n^2}$.
However, there is at least one point along which the holomorphic function the series defines cannot be analytically continued. Intuitively, think of $\sum z^n$. This series coincides with $1/(1-z)$, which is holomorphic everywhere (except at $z=1$), even though the series only has radius of convergence 1. What this result says is that the "except at $z=1$" is unavoidable.
Let me be more precise. A point on the boundary of the disk $D$ of convergence of a power series (say, about 0) is called regular iff there is an open neighborhood $U$ of that point and an analytic function that coincides with the power series on the intersection of $D$ and $U$. Otherwise, the point is singular.
The theorem your instructor was referring to is probably the following:
Theorem. If a power series has finite radius of convergence, then the set of singular points is a nonempty closed subset of its boundary of convergence.
The idea of the proof that there is at least one singular point is easy. For details, examples, and a discussion of analytic continuation, see for example Chapter 5 of Berenstein-Gay "Complex Variables. An introduction", or a similar textbook.
The thing is, calling $f$ the function defined by the power series, if there are no singular points, we can find around each point $p$ of the boundary, a little disk $D_p$ and an analytic function $f_p$ that coincides with $f$ on $D_p\cap D$.
But, by connectedness, if $D_p$ and $D_{p'}$ intersect, then on their intersection $f_p$ and $f_{p'}$ coincide. This is because they coincide (with $f$) on $D_p\cap D_{p'}\cap D$, which is nonempty if $D_p\cap D_{p'}\ne\emptyset$ to begin with.
Using compactness, we can then use the disks $D_p$ to see that there is a disk concentric with $D$ but slightly larger where there is an analytic function extending $f$. But then, the disk of convergence of $f$ was not $D$ to begin with. |
LU Decomposition Trouble | Your $L$ should be $$ \begin {bmatrix}1&0&0\\7&1&0\\-7&-2&1\end {bmatrix}$$
and your $U$ should be $$ \begin {bmatrix}9&7&1\\0&2&-5\\0&0&5\end {bmatrix}$$ |
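A quick way to check a candidate factorization is to multiply the factors back together; the product should reproduce the matrix you started from (a sketch using `numpy`):

```python
import numpy as np

L = np.array([[1, 0, 0], [7, 1, 0], [-7, -2, 1]])
U = np.array([[9, 7, 1], [0, 2, -5], [0, 0, 5]])
print(L @ U)  # should equal the original matrix A
```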
Can D'Alembert criterion apply for sequences? | Yes, it's true, and it can be proved the same way you prove the criterion for series (by taking a geometric sequence based on $L$ and comparing it to $a_n$).
You can also shorten it by using the criterion for series: if $\lim_{n \to \infty} \frac{a_{n+1}}{a_n} = L$ (with $L < 1$, for example) then by the d'Alembert criterion $\sum_{n = 1}^{\infty}a_n$ converges and therefore $a_n \to 0$ |
Evaluating a trigonometric integral by means of contour $\int_0^{\pi} \frac{\cos(4\theta)}{1+\cos^2(\theta)} d\theta$ | Note that $\sin 4\theta$ is an odd function, so we can simplify to
$$\begin{align}
I &= \frac12 \int_{-\pi}^\pi \frac{e^{4i\theta}}{1 + \cos^2\theta}\,d\theta\\
&= \frac{1}{2i}\int_{\lvert z\rvert = 1} \frac{z^4}{1 + \left(\frac{z+z^{-1}}{2}\right)^2}\,\frac{dz}{z}\\
&= \frac{1}{2i}\int_{\lvert z\rvert = 1} \frac{z^5}{z^2+\frac14(z^2+1)^2}\,dz\\
&= -2i \int_{\lvert z\rvert = 1} \frac{z^5}{z^4 + 6z^2+1}\,dz.
\end{align}$$
That looks a little simpler to my untrained eye.
Then we need the zeros of the denominator, which are $\pm \sqrt{-3\pm\sqrt{8}}$, where the inner square root is taken positive, and the outer can be either square root. The zeros inside the unit disk are $\zeta_\pm = \pm i\sqrt{3-\sqrt{8}}$; both are simple, so
$$\operatorname{Res}\left(\frac{z^5}{z^4+6z^2+1};\,\zeta_\pm\right) = \frac{\zeta_\pm^5}{4\zeta_\pm^3 + 12\zeta_\pm} = \frac{\zeta_\pm^4}{4\zeta_\pm^2 + 12} = \frac{(\sqrt{8}-3)^2}{4(\sqrt{8}-3)+12}=\frac{17-12\sqrt{2}}{8\sqrt{2}},$$
both residues are the same, and
$$I = 8\pi \frac{17-12\sqrt{2}}{8\sqrt{2}} = \pi\frac{17-12\sqrt{2}}{\sqrt{2}} = \frac{\pi}{24+17\sqrt{2}},$$
if I haven't miscalculated. |
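A numerical cross-check of the final value (my own sketch, assuming `scipy`):

```python
import numpy as np
from scipy.integrate import quad

numeric, _ = quad(lambda t: np.cos(4 * t) / (1 + np.cos(t) ** 2), 0, np.pi)
closed_form = np.pi / (24 + 17 * np.sqrt(2))
print(numeric, closed_form)  # both ~ 0.0654
```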
Find the sum of all values of $ f ( 2017 ) $ given $ f ^ { f ( a ) } ( b ) f ^ { f ( b ) } ( a ) = f ( a + b ) ^ 2 $. | It's easy to see that $ f ( n ) = n + 1 $ satisfies the functional equation
$$ f ^ { f ( a ) } ( b ) f ^ { f ( b ) } ( a ) = f ( a + b ) ^ 2 \text , \tag 0 \label 0 $$
since then for all positive integers $ m $ and $ n $, one will have $ f ^ m ( n ) = n + m $. It can be shown that this is the only injective solution to \eqref{0} that maps positive integers to positive integers, and thus the only possible value for $ f ( 2017 ) $ is $ 2018 $, which shows $ S = 2018 $ and hence $ S \mod 1000 = 18 $. To see this, first note that $ f $ cannot take the value $ 1 $, because if $ f ( n ) = 1 $ for some positive integer $ n $, then letting $ a = b = n $ in \eqref{0}, you get $ f ( 2 n ) = 1 $, which by injectivity of $ f $ yields $ n = 2 n $, which can't happen. Next, let $ b = a $ in \eqref{0} to get
$$ f ^ { f ( a ) } ( a ) = f ( 2 a ) \text , $$
and by injectivity,
$$ f ^ { f ( a ) - 1 } ( a ) = 2 a \text , \tag 1 \label 1 $$
as you've done yourself. We can use \eqref{1} to inductively prove
$$ f ^ { \sum _ { m = 1 } ^ n f \left( 2 ^ { m - 1 } a \right) - n } ( a ) = 2 ^ n a $$
for every nonnegative integer $ n $ (note that if $ n = 0 $, the left-hand side gives $ f ^ 0 ( a ) $, which means $ a $ itself), which in particular shows that $ \left\{ f ^ i ( a ) \right\} _ { i = 0 } ^ \infty $ is infinite. As a consequence, we can see that if for some nonnegative integers $ i $ and $ j $ we have $ f ^ i ( a ) = f ^ j ( a ) $, then we must have $ i = j $.
We can now show that $ f ( 1 ) = 2 $. If that's not the case, we must have $ f ( 1 ) = n + 2 $ for some positive integer $ n $. Then we have $ f ^ { n + 1 } ( 1 ) = 2 $ by putting $ a = 1 $ in \eqref{1}, and letting $ a = f ^ n ( 1 ) $ in \eqref{1} we get
$$ 2 f ^ n ( 1 ) = f ^ { f \left( f ^ n ( 1 ) \right) - 1 } \big( f ^ n ( 1 ) \big) = f ^ { f ^ { n + 1 } ( 1 ) - 1 } \big( f ^ n ( 1 ) \big) = f ^ { 2 - 1 } \big( f ^ n ( 1 ) \big) = f ^ { n + 1 } ( 1 ) = 2 \text . $$
But this means $ f \left( f ^ { n - 1 } ( 1 ) \right) = 1 $, which is impossible.
Then, we show that $ f ( 2 ) = 3 $. To see this, first note that if for some positive integer $ n $ and some positive integer $ k \ne 1 $ we have $ f ( n ) = 2 k $, letting $ a = k $ in \eqref{1} we get $ f ^ { f ( k ) - 1 } ( k ) = f ( n ) $, and as we must have $ f ( k ) > 2 $, we get $ f \left( f ^ { f ( k ) - 3 } ( k ) \right) = f ( n ) $. By injectivity of $ f $, we conclude that there is a positive integer $ m $ such that $ f ( m ) = n $. Since by putting $ a = 1 $ and $ b = 2 $ in \eqref{0} and using \eqref{1} we have
$$ f ( 3 ) ^ 2 = f ^ { f ( 1 ) } ( 2 ) f ^ { f ( 2 ) } ( 1 ) = f ^ 2 ( 2 ) f ^ { f ( 2 ) - 1 } \big( f ( 1 ) \big) = f ^ 2 ( 2 ) f ^ { f ( 2 ) - 1 } ( 2 ) = 4 f ^ 2 ( 2 ) \text , $$
it follows that there is a positive integer $ k $ such that $ f ( 3 ) = 2 k $. As $ f ( 3 ) \ne f ( 1 ) = 2 $, we must have $ k \ne 1 $, and thus there is a positive integer $ m $ with $ f ( m ) = 3 $. We know that $ m \ne 1 $. $ m $ cannot be equal to $ 3 $ either, because then by \eqref{1} we would have $ 6 = f ^ { f ( 3 ) - 1 } ( 3 ) = f ^ 2 ( 3 ) = 3 $. If $ m > 3 $, we can let $ a = m - 3 $ and $ b = 3 $ in \eqref{0} and see that
$$ 9 = f ( m ) ^ 2 = f ^ { f ( m - 3 ) } ( 3 ) f ^ { f ( 3 ) } ( m - 3 ) \text . $$
As none of $ f ^ { f ( m - 3 ) } ( 3 ) $ and $ f ^ { f ( 3 ) } ( m - 3 ) $ can be equal to $ 1 $, we must have $ f ^ { f ( m - 3 ) } ( 3 ) = f ^ { f ( 3 ) } ( m - 3 ) = 3 $. But
$ f ^ { f ( m - 3 ) } ( 3 ) = 3 = f ^ 0 ( 3 ) $ leads to $ f ( m - 3 ) = 0 $, which is impossible. Thus the only possible case is $ m = 2 $, as desired.
At last, we use strong induction on $ n $ to prove that $ f ( n ) = n + 1 $ for every positive integer $ n \ge 3 $. Note that as $ n \ge 3 $, if we let $ a = \left\lfloor \frac { n + 1 } 2 \right\rfloor $ and $ b = \left\lceil \frac { n + 1 } 2 \right\rceil $, then $ a $ and $ b $ are positive integers less than $ n $ such that $ a + b = n + 1 $. By induction hypothesis, assume that $ f ( m ) = m + 1 $ for every positive integer $ m $ less than $ n $. This shows that $ f ^ { n - m } ( m ) = n $ for every positive integer $ m $ less than $ n $. Thus, using \eqref{0} we have
$$ f ( n + 1 ) ^ 2 = f ( a + b ) ^ 2 = f ^ { f ( a ) } ( b ) f ^ { f ( b ) } ( a ) = f ^ { a + 1 } ( b ) f ^ { b + 1 } ( a ) \\
= f ^ { n + 2 - b } ( b ) f ^ { n + 2 - a } ( a ) = f ^ 2 \left( f ^ { n - b } ( b ) \right) f ^ 2 \left( f ^ { n - a } ( a ) \right) = \left( f ^ 2 ( n ) \right) ^ 2 \text . $$
Therefore we have $ f ( n + 1 ) = f \big( f ( n ) \big) $, and hence by injectivity of $ f $, $ f ( n ) = n + 1 $, as desired. |
equivalence of Hilbert spaces | It is sufficient to prove $C_1\|X\|_{\ell^2}\leq \|X\|_{W}\leq C_2\|X\|_{\ell^2}$ where $\|X\|_{\ell^2}$ is the common norm in $\ell^2$. Then $\|\cdot\|_{\ell^2}$ and $\|\cdot\|_W$ are equivalent. You can conclude that $\|\cdot\|_{W_1}$ and $\|\cdot\|_{W_2}$ are equivalent since both are equivalent to $\|\cdot\|_{\ell^2}$.
As Aweygan noticed, it is crucial to assume $\inf_i w_i>0$. For $W=\left(\frac1i\right)_i$, the sequence
$$
e_n=(0,\ldots,0,\underbrace{1}_{n\text{th}},0,\ldots)
$$
converges to $0$ in $\|\cdot\|_W$, while in $\|\cdot\|_{\ell^2}$ it doesn't even have a convergent subsequence.
Now if $W=(w_i)_i$ is a sequence with $0<\inf_i w_i$ and $\sup_i w_i<\infty$, you get
$$
\|X\|_W^2=\sum_{i=1}^\infty x_i^2w_i\geq\sum_{i=1}^\infty x_i^2(\inf_k w_k)=(\inf_k w_k)\sum_{i=1}^\infty x_i^2 =(\inf_k w_k)\|X\|_{\ell^2}^2
$$
and
$$
\|X\|_W^2=\sum_{i=1}^\infty x_i^2w_i\leq\sum_{i=1}^\infty x_i^2(\sup_k w_k)=(\sup_k w_k)\sum_{i=1}^\infty x_i^2 =(\sup_k w_k)\|X\|_{\ell^2}^2
$$
Taking square roots, defining $C_1=\sqrt{\inf_k w_k}>0$ and $C_2=\sqrt{\sup_k w_k}>0$ yields the claim. |
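A small numeric illustration of the two displayed bounds (my own; random data):

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.uniform(0.5, 3.0, size=1000)  # weights with 0 < inf w <= sup w < infinity
x = rng.normal(size=1000)

norm_w_sq = np.sum(x**2 * w)
norm_2_sq = np.sum(x**2)
assert w.min() * norm_2_sq <= norm_w_sq <= w.max() * norm_2_sq
print("squared-norm bounds hold")
```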
Confusion about a proof on Mycielski construction and chromatic number | The proof aims to show $\chi(G) = k$ and has two parts.
First, they show that $\chi(G) \leq k$.
The second part of the proof is showing that $\chi(G) \geq k$. This is the part that concerns the bolded argument in your post.
To prove this, the authors suppose, by way of contradiction, that $\chi(G) < k$. Then there is a $(k-1)$-coloring of $G$. The vertex $z$ can be colored using the color $k-1$, and the shadow vertices can then be colored using $k-2$ colors.
At this point the bolded statement comes in:
therefore, if for each vertex $y$ belonging to $F$, the color $c(y)$ is replaced by $c(y')$, we have a $(k-2)$-coloring of $F$.
The authors are saying to give each vertex $y$ in $F$ the same color as its shadow vertex $y'$. Why do we know this is a proper coloring?
Since every vertex that is adjacent to $y$ in $F$ is also adjacent to $y'$ in $G$, then $y'$ must have a different color to every neighbor of $y$.
This fact essentially says that if we have a proper $(k-1)$-coloring of $G$, we don't have to use the color of $z$ (i.e. $c(z)$) for any of the vertices of $F$. So we would have a proper $(k-2)$-coloring of $F$, which is the desired contradiction. |
When matrix $A$ is linear isometry in $\|\cdot\|_{\infty}$ norm? | It holds if and only if $A$ is an entrywise signed permutation matrix.
Since $\|Ae_j\|_\infty=\|e_j\|_\infty=1$, every $|a_{ij}|$ is bounded above by $1$ and each column of $A$ contains at least one entry whose value is $\pm1$.
On the other hand, as $\|v\|_\infty=\|Av\|_\infty$ for all $v$, we also have $\max_i\sum_j|a_{ij}|=\|A\|_\infty=1$. Therefore, each row of $A$ has at most one entry whose value is $\pm1$.
It follows that each column and each row of $A$ contains exactly one entry whose value is $\pm1$. Moreover, as $\max_i\sum_j|a_{ij}|=\|A\|_\infty=1$, all other entries must be zero. Therefore $A$ is a permutation matrix carrying signs on its entries. |
Inner Product of Bounded Self Adjoint Linear Operator | Note that for $v \neq 0$, we have
$$
|\langle Tv,v \rangle| =
\|v\|^2 \left\lvert\left \langle T\frac{v}{\|v\|},\frac{v}{\|v\|}\right \rangle\right\rvert \leq \|v\|^2 \cdot M
$$ |
Probability of last defective item detected at $12^{\text{th}} \text{ trial}$ | ${20 \choose 11}$ in the denominator is correct. $\frac{{18 \choose 10} {2 \choose 1}}{{20 \choose 11}}$ represents the probability that you draw 1 defective item in your first 11 draws.
The numerator is the number of ways you can draw 10 non-defective items and 1 defective. To get the probability, you need to divide by the number of ways you can draw 11 items, which is ${20 \choose 11}$.
If your numerator included cases where there are two defective items, then you would have a problem. |
Reasoning out the existence of a limit of a piecewise-defined function in order to prove discontinuity | By squeezing, $$\lim_{x\to0\\x\notin\mathbb Q}f(x)=0,$$
while $$f(0)=a,$$ and this is enough to conclude. |
How can I get the derivative of these two functions? | With the tip from above we get for the first derivative $$i\cdot t_n\cdot\left(-\frac{1}{2}\right)x^{-3/2}$$
Analogously we can write for the second equation
$$f(x)=-i^2\cdot t_nx^{-2}$$ Thus, we get $$f'(x)=2\cdot i^2\cdot t_nx^{-3}$$ by the power rule $$ (x^n)'=nx^{n-1} $$ |
Justifying differentiation under the integral sign with Dominated Convergence | One can check that $e^u\geq 1+u$ for all $u\in\mathbb{R}$, hence $1-e^u\leq -u$. Applying this with $u=-h_nx$, we obtain
$$ \Big|\frac{e^{-h_nx}-1}{h_n}\Big|=\frac{1-e^{-h_nx}}{h_n}\leq \frac{h_nx}{h_n}=x $$
if $h_n>0$. For $h_n<0$ this bound breaks down (dividing by $h_n<0$ flips the inequality), but the mean value theorem gives, for some $\theta_n$ between $0$ and $h_n$,
$$ \Big|\frac{e^{-h_nx}-1}{h_n}\Big|=xe^{-\theta_nx}\leq xe^{|h_n|x}\leq xe^{tx/2} $$
once $|h_n|\leq t/2$, which holds for all large $n$ since $h_n\to0$.
Therefore, for all large $n$,
$$ \Big|e^{-tx}\frac{e^{-h_nx}-1}{h_n}\Big|\leq xe^{-tx/2} $$
which is integrable on $(0,\infty)$ if $t>0$. |
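To see the justified interchange in action, here is a numeric sketch for the simplest instance $F(t)=\int_0^\infty e^{-tx}\,dx = 1/t$, whose $t$-derivative should be $-\int_0^\infty x e^{-tx}\,dx = -1/t^2$ (my own illustration, assuming `scipy`):

```python
import numpy as np
from scipy.integrate import quad

t, h = 1.3, 1e-6
F = lambda s: quad(lambda x: np.exp(-s * x), 0, np.inf)[0]  # = 1/s
finite_diff = (F(t + h) - F(t - h)) / (2 * h)
exact = -quad(lambda x: x * np.exp(-t * x), 0, np.inf)[0]   # = -1/t^2
print(finite_diff, exact, -1 / t**2)                        # all ~ -0.5917
```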
Finding the adjoint for a diagonal operator in $\ell^2$. | Your approach is indeed correct. |
Number of ways to choose, with repetitions, $K$ integers from a set $\{1...n\}$ having Longest Increasing Subsequence (LIS) of length $L$ | Seems like I've managed to find an interpretation.
I initially thought that a permutation defines a strict ordering of the $K$ numbers, but that led nowhere.
Let's denote $\{a_i\}$ as the sequence of chosen $K$ elements. Then the sequence corresponds to a permutation $p$ if $a_{p_i} \lt a_{p_j}$ for $i \lt j$, i.e. the position of the $k$-th smallest element is $p_k$. It's easy to see that the $LIS$ is preserved: $$i \lt j, p_i \lt p_j \implies a_{p_i} \lt a_{p_j}.$$ If we didn't allow repetition, then $\binom{N}{K}$ would count the number of ways to choose a sequence that corresponds to a given permutation, as we already know the order of the chosen numbers — assign the smallest of them to $a_{p_1}$, the second smallest to $a_{p_2}$, etc.
Now, if we have a descent, that is $p_i \gt p_{i + 1}$, we can ease the requirement of $a_{p_i} \lt a_{p_{i + 1}}$ to that of $a_{p_i} \le a_{p_{i + 1}}$ because the two elements will never belong to some increasing subsequence in both cases, and, on the other hand, $a_{p_{i + 1}}$ can still be in all increasing subsequences it used to be as it can only become equal to — and never less than, due to construction — the previous element in the subsequence only if we have a descending run, but any two elements in a descending run can never be in some increasing subsequence.
Now, suppose that we have only one descent $p_i \gt p_{i + 1}$. Then there are $$\binom{N}{K}+\binom{N}{K-1} = \binom{N+1}{K}$$ ways to choose a sequence — first summand corresponds to the case where all numbers distinct, and the second one corresponds to the case where $a_{p_i} = a_{p_{i + 1}}$.
If we had two descents, there would be $$\binom{N}{K}+2\binom{N}{K-1} + \binom{N}{K-2} = \binom{N+2}{K}$$ ways. Since there are two ways to choose either descent we count the second summand twice.
In general, for a permutation with $d$ descents the sum is $$\sum_{i=0}^{d}\binom{d}{i}\binom{N}{K-i}=\binom{N+d}{K}$$
The equality is precisely the Vandermonde's identity, thanks to @Phicar for pointing it out.
Since every possible subset of descent indices is contained in at least one permutation (not sure about it; it's a claim by intuition), we can indeed map a permutation to some subset of weak orderings of $K$ elements — a permutation with $d$ descents maps to some subset of cardinality $2^{d}$.
I've also checked programmatically that $$\sum_p 2^{d} = \text{Fubini number}$$. So it seems that a bijection I looked for is indeed established, that is, a bijection between the set of permutations of $K$ numbers and a partition of a set of weak ordering of $K$ elements that preserves $LIS$.
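Here is a sketch of that programmatic check (the Fubini/ordered Bell numbers are computed via the standard recurrence $a_n=\sum_{i=0}^{n-1}\binom{n}{i}a_i$, which I am assuming here):

```python
from itertools import permutations
from math import comb

def fubini(k):
    a = [1]  # a_0 = 1
    for n in range(1, k + 1):
        a.append(sum(comb(n, i) * a[i] for i in range(n)))
    return a[k]

for K in range(1, 8):
    total = sum(2 ** sum(p[i] > p[i + 1] for i in range(K - 1))
                for p in permutations(range(K)))
    assert total == fubini(K)  # sum over permutations of 2^(descents)
print("identity confirmed for K = 1..7")
```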
It's a pity that many claims in this post seem to rely on intuition and checking small cases with a computer program, for I lack knowledge and skills to make a rigorous explanation.
To sum up. I think the matter is closed only after there is a rigorous proof of the last two summation identities. |
Why is an empty set not a terminal object in categories $\mathsf{Top}$ and $\mathsf{Sets}$? | Is this because we cannot map a non-empty set to an empty set (since our mapping is total) although we can map an empty set to a non-empty set?
Yep. A function has to take values on inputs. A function with non-empty domain but empty codomain can't take any values. (By contrast, a function with empty domain but non-empty codomain doesn't need to take values because it has no inputs.) |
Convince me: limit of sum of a constant is infinity | If $c>0$ then $\sum_{i=1}^{\infty}c= \lim_{n\to \infty}\sum_{i=1}^{n}c=\lim_{n\to \infty}nc =\infty$.
If $c=0$ then $\sum_{i=1}^{\infty}c=0$.
If $c<0$ then $\sum_{i=1}^{\infty}c =-\infty$. |
Differentiating w. r. t. $x$ | Method $1:$
HINT:
Putting $x=\tan \theta \implies \theta=\arctan x$
$$\frac{1-x}{1+x}=\frac{1-\tan\theta}{1+\tan\theta}=\frac{\tan\frac\pi4-\tan\theta}{1+\tan\theta\tan \frac\pi4}\text{ as } \tan\frac\pi4=1$$
$$=\tan\left(\frac\pi4-\theta\right)=\cot\{\frac\pi2-\left(\frac\pi4-\theta\right)\}=\cot\left(\frac\pi4+\theta\right)=\cot\left(\frac\pi4+\arctan x\right)$$
Then, $$ \text{arccot} \left(\frac{1-x}{1+x}\right) =?$$
Method $2:$
Let $$ y= \text{arccot} \frac{1-x}{1+x}\implies \cot y=\frac{1-x}{1+x} $$
Applying componendo and dividendo, $$x=\frac{1-\cot y}{1+\cot y}=\frac{\tan y-1}{1+\tan y} \quad (\text{multiplying the numerator and the denominator by } \tan y)$$
$$\implies x=\frac{\tan y-\tan\frac\pi4}{1+\tan\frac\pi4\tan y}=\tan\left(y-\frac\pi4\right)\text{ using }\tan(A-B)=\frac{\tan A-\tan B}{1+\tan A\tan B}$$
Differentiating wrt $y,$
$$ \frac{dx}{dy}=\frac{d \tan\left(y-\frac\pi4\right)}{d(y-\frac\pi4)}\cdot\frac {d(y-\frac\pi4)}{dy}=\sec^2\left(y-\frac\pi4\right)=1+\tan^2\left(y-\frac\pi4\right)=1+x^2$$
Now use this |
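To spell the final step out: since $\frac{dx}{dy}=1+x^2$, taking reciprocals gives $$\frac{dy}{dx}=\frac{1}{\frac{dx}{dy}}=\frac{1}{1+x^2}.$$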
Greatest Integer Function Linear Equality. | Since
$\dfrac{x}{49}-1<\left[\dfrac{x}{49}\right]\leq \dfrac{x}{49}$
$\dfrac{x}{51}-1<\left[\dfrac{x}{51}\right]\leq \dfrac{x}{51}$
we have $$\dfrac{x}{49}-1< \dfrac{x}{51}\implies x<{49\cdot 51\over 2}\implies x\leq 1249$$ and
$$\dfrac{x}{51}-1< \dfrac{x}{49}\implies x>-{49\cdot 51\over 2}\implies x\geq -1249$$
So every solution lies among the integers from $-1249$ to $1249$, i.e. there are at most $2499$ integer solutions to this equation. |
How to find the values of $a$ that make $\det A= 0$? | Just solve the following equation.
$$a\cdot(-8)\cdot a+3\cdot6\cdot9+7\cdot a\cdot(-9)-3\cdot(-8)\cdot7-a\cdot(-9)\cdot9-a\cdot a\cdot6=0.$$
I got $$7a^2-9a-165=0,$$ which gives $$\left\{\frac{9+\sqrt{4701}}{14},\frac{9-\sqrt{4701}}{14}\right\}.$$
Now, if you substitute these values, you'll get that the determinant is equal to $0$.
You got $\det A\neq0$ because you used approximate values of $a$. |
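As a quick sanity check of the algebra (a sketch using sympy, not part of the original answer):

    import sympy as sp

    a = sp.symbols('a')
    det = (a*(-8)*a + 3*6*9 + 7*a*(-9)
           - 3*(-8)*7 - a*(-9)*9 - a*a*6)
    print(sp.factor(det))              # -2*(7*a**2 - 9*a - 165)
    print(sp.solve(sp.Eq(det, 0), a))  # [9/14 - sqrt(4701)/14, 9/14 + sqrt(4701)/14]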
Show that the subset $S$ in $\mathbb{R}_3$ is a subspace. | You want to show that
$$ S = \left\{ (a,b,c) \in \Bbb R^3 \; : \; a+b = c \right\} \subset \Bbb R^3$$
is a subspace of $\Bbb R^3$.
First, note that $(0,0,0) \in S$, since $0 + 0 = 0$, so $S \neq \emptyset$.
Next, let $v_1 := (a_1, b_1, c_1), \, v_2 := (a_2, b_2, c_2) \in S$. We have to show that $v_1 + v_2 = (a_1 + a_2, b_1 + b_2, c_1 + c_2) \in S$. Since $v_1 \in S$, we have $a_1 + b_1 = c_1$, and since $v_2 \in S$, we have $a_2 + b_2 = c_2$. So we see that
$$(a_1 + a_2) + (b_1 + b_2) = c_1 + c_2 \; ,$$
which means that
$$ v_1 + v_2 = (a_1 + a_2, b_1 + b_2, c_1 + c_2) \in S \; .$$
Finally, let $\alpha \in \Bbb R$ and $v := (a,b,c) \in S$. We need to show that $\alpha v \in S$. Since $a+b = c$, we have $\alpha a + \alpha b = \alpha c$, which means that
$$ \alpha v = (\alpha a, \alpha b, \alpha c) \in S \; .$$
This shows that $S$ is a subspace of $\Bbb R^3$. |
2 ways to show that a surface is smooth | If the partial derivatives
$$ \frac{\partial f}{\partial x},\frac{\partial f}{\partial y},\frac{\partial f}{\partial z}$$
are continuous, the function is smooth.
In Monge form $Z=f(X,Y)$, the normal to the surface given by the cross product
$$ Z_x \times Z_y $$
should be continuous and non-zero. |
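To make the second criterion concrete (a standard computation, stated here for completeness): parametrizing the Monge patch as $\mathbf r(X,Y)=(X,Y,f(X,Y))$ gives $$\mathbf r_X\times\mathbf r_Y=(1,0,f_X)\times(0,1,f_Y)=(-f_X,\,-f_Y,\,1),$$ which is never the zero vector, so the patch is regular wherever $f_X$ and $f_Y$ exist and are continuous.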
Help in understanding the logic expressions | The triple line means "congruent", and saying $$a\equiv b\mod c$$
simply means that $a$ and $b$ both have the same remainder when divided by $c$.
In your case, $3n^3 + 9n^2 + 15n + 9$ has the same remainder as $3n^3 + 6n$ when divided by $9$
For more detail, check http://mathworld.wolfram.com/Congruence.html
An important thing to know is that
$a\equiv b\mod c\iff a-b$ is divisible by $c$
This is easy to see, since we can always write $a=k_a\cdot c + n_a$ and $b=k_b\cdot c + n_b$, where $0\leq n_a,n_b<c$ and $k_a,k_b$ are integers. Then, $a-b = (k_a-k_b)\cdot c + (n_a-n_b)$
If $a-b$ is divisible by $c$, then $a-b=(k_a-k_b)\cdot c + (n_a-n_b)$ is divisible by $c$. Since $(k_a-k_b)c$ is divisible by $c$, so is $n_a-n_b$, and since $-c < n_a-n_b < c$, this is only possible if $n_a-n_b=0$, so $a\equiv b\mod c$.
If $a\equiv b\mod c$, then $n_a=n_b$, which means that $a-b=(k_a-k_b)c$ and $a-b$ is divisible by $c$.
Having this what you need for your claim is the following claim:
For any integer $a$ and any integer $k$, $a + k\cdot b\equiv a\mod b$
The claim is true because $(a+k\cdot b)- a = k\cdot b$ which is divisible by $b$.
Now, if you write $3n^3 + 9n^2 + 15n + 9$ as $$9n^2 + 9n + 9 + 3n^3 + 6n$$
everything should be clear. |
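Explicitly, the difference is $$(3n^3 + 9n^2 + 15n + 9)-(3n^3 + 6n) = 9n^2 + 9n + 9 = 9(n^2+n+1),$$ which is divisible by $9$, so the two expressions are congruent modulo $9$.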
Best strategy for this game. [Nintendo Wii game] | Well, I think this is some kind of prisoner's dilemma. The best strategy always depends on whom you are playing with. How many of the players can you control? Is it only humans, or also AIs? How experienced are your opponents?
Having said that.
Case 1, random opponents Assume your opponents are playing randomly. For any numbers $i,j\in\{1,3,5\}$, the probability that none of them chooses the number is then the same: $p_i=p_j$. (In fact $p=(\frac{2}{3})^3$, but that's actually not important.) So you are right: always choosing $5$ would be the best choice, assuming the opponents play randomly. This should also work with real human beings, I believe, for psychological reasons. People will not stick to the highest number if they don't know the game, or know someone will surely pick it, and cannot agree on a joint strategy against you.
Case 1.u, you and your random opponents The chance of your opponents playing strictly randomly and you always playing $5$ and winning after at most three moves is given as $\frac{8^2}{20^2}+2\frac{12}{20}\frac{8^2}{20^2}=0.352$. Meaning that in more than every third game you'd win, so you necessarily do have a serious advantage over the others :-) Your chance of winning actually is severely higher, but for now we are just looking for comparison of strategies.
Case 2, deterministic strategies for all players Now let's lift it one level. We don't require the opponents to play randomly anymore. Assume there is an optimal strategy, and thus obviously assume that all the players are aware of the optimal strategy and thus try to play this optimal strategy. We assume a determined and fair game, i.e. players make their choice at exactly the same time, have exactly the same interface and make their choice without any randomness. But then all four players will always play exactly the same number and thus nobody will ever move forward and thus the game will never end.
Case 3, everybody playing random Can the optimal strategy involve some randomness? Assume it to be purely random. Then your probability of winning is exactly $\frac{1}{4}$ which I guess is as good as it gets assuming everybody is aware of the "optimal strategy". However, as pointed out in Case 1.u everybody playing random is not optimal in terms of best strategy as it can be beaten by the always $5$ guy.
Case 4, adjusting randomness, what needs to be done An optimal strategy will involve randomness, and has to incorporate a high possibility of choosing $5$, and not so high but non-zero possibilities of choosing $1$ and $3$. Should we prefer $3$ over $1$? Yes, for the same reason we have to prefer $5$ over the others. On the one hand we need to move fast, on the other hand we don't want our opponents to move too fast. For the first move at least we will thus have probabilities $p_1<p_3<p_5$. But it gets trickier. In the progress of the game we will have to adjust the probabilities of choosing the numbers. For instance if one player achieves step $9$ then he won't prefer big numbers over smaller numbers for himself anymore, which means that the other players can't rely on him having a low probability for $1$ anymore. So...
Conclusion 1; Case 5, machine learning Feed some computer program with the question. Use as parameters the distances of every players position to $10$ and use some form of machine learning to come up with suitable probabilities. You don't want exact numbers anyway ;-)
Conclusion 2: answer the question "who are the opponents?" and then design a fitting strategy. That's probably the more practical solution. What's the chance of your opponents actually having implemented the machine learning approach? |
How to prove that the inversion $x \mapsto \frac x {\| x \|^2}$ in $\Bbb R^n$ sends circles to generalised circles? | The inverse of a vector $x$ is defined by
$$\frac1x = \frac{x}{x^2} = \Big(\frac1{x\cdot x}\Big)x$$
If the circle's centre is $c$, its radius is $\lVert a\rVert=\lVert b\rVert$, and its plane is spanned by vectors $a$ and $b$ (with $a\cdot b=0$), then it can be parametrized by
$$x = c + a\cos\theta + b\sin\theta$$
Without loss of generality, $a$ and $b$ can be rotated until $b$ is orthogonal to $c$. So now we have
$$a^2 = b^2,\qquad a\cdot b = b\cdot c = 0$$
and
$$\frac1x = \frac{c+a\cos\theta+b\sin\theta}{c^2+2c\cdot a\cos\theta+a^2}$$
This equation doesn't look like a circle, but of course it must be. The new circle's centre $c'$ should be the midpoint of the two extremal points at $\theta=0,\;\theta=\pi$ :
$$c' = \frac12\Big(\frac1{c+a} + \frac1{c-a}\Big) = \overset{\text{algebra}}{\cdots} = \frac{(c^2+a^2)c-2(c\cdot a)a}{(c+a)^2(c-a)^2}$$
And the new (directed) radius $a'$ should be half the displacement between these points:
$$a' = \frac12\Big(\frac1{c+a} - \frac1{c-a}\Big) = \cdots = \frac{(c^2+a^2)a-2(a\cdot c)c}{(c+a)^2(c-a)^2}$$
and $b' = \lVert a'\rVert\frac{b}{\lVert b\rVert}$ .
(These are undefined if $c+a=0$ or $c-a=0$, but then the result is a straight line, and the problem is 2-dimensional, which is easily solved.)
If these guesses are correct, then the new point must be in this plane, and have constant radius:
$$\Big(\frac1x - c'\Big)\wedge a'\wedge b' \overset{?}{=} 0,\qquad \Big(\frac1x - c'\Big)^2 \overset{?}{=} a'^2$$
(The wedge product $\wedge$ is a measure of linear independence; any vector $a$ has $a\wedge a=0$.)
To verify these equations, first note that the denominator is
$$(c+a)^2(c-a)^2 = (c^2+a^2+2c\cdot a)(c^2+a^2-2c\cdot a) = (c^2+a^2)^2-4(c\cdot a)^2$$
thus
$$c'^2 = \bigg(\frac{(c^2+a^2)c-2(c\cdot a)a}{(c+a)^2(c-a)^2}\bigg)^2 = \frac{(c^2+a^2)^2c^2-4(c^2+a^2)(c\cdot a)^2+4(c\cdot a)^2a^2}{(c+a)^4(c-a)^4}$$
$$= \frac{\big((c^2+a^2)^2-4(c\cdot a)^2\big)c^2}{(c+a)^4(c-a)^4} = \frac{c^2}{(c+a)^2(c-a)^2}$$
$$a'^2 = \cdots = \frac{a^2}{(c+a)^2(c-a)^2}$$
and
$$c'\wedge a' = \bigg(\frac{(c^2+a^2)c-2(c\cdot a)a}{(c+a)^2(c-a)^2}\bigg)\wedge\bigg(\frac{(c^2+a^2)a-2(a\cdot c)c}{(c+a)^2(c-a)^2}\bigg)$$
$$= \frac{(c^2+a^2)^2(c\wedge a)-2(c^2+a^2)(c\cdot a)(c\wedge c)-2(c^2+a^2)(c\cdot a)(a\wedge a)+4(c\cdot a)^2(a\wedge c)}{(c+a)^4(c-a)^4}$$
$$= \frac{\big((c^2+a^2)^2-4(c\cdot a)^2\big)(c\wedge a)}{(c+a)^4(c-a)^4} = \frac{c\wedge a}{(c+a)^2(c-a)^2}$$
Now we can calculate the two expressions:
$$\Big(\frac1x - c'\Big)\wedge a' = \frac{c+a\cos\theta+b\sin\theta}{c^2+2c\cdot a\cos\theta+a^2}\wedge\frac{(c^2+a^2)a-2(a\cdot c)c}{(c+a)^2(c-a)^2}-c'\wedge a'$$
$$= \frac{(c^2+a^2)(c\wedge a)-0+0-2(c\cdot a\cos\theta)(a\wedge c)}{(c^2+2c\cdot a\cos\theta+a^2)(c+a)^2(c-a)^2}+\frac{b\sin\theta}{x^2}\wedge a'-c'\wedge a'$$
$$= \frac{(c^2+2c\cdot a\cos\theta+a^2)(c\wedge a)}{(c^2+2c\cdot a\cos\theta+a^2)(c+a)^2(c-a)^2}+\frac{b\sin\theta}{x^2}\wedge a'-c'\wedge a'$$
$$= \frac{c\wedge a}{(c+a)^2(c-a)^2}-c'\wedge a'+\frac{b\sin\theta}{x^2}\wedge a'$$
$$= 0+\frac{b\sin\theta}{x^2}\wedge a'$$
$$\Big(\frac1x - c'\Big)\wedge a'\wedge b' = \frac{b\sin\theta}{x^2}\wedge a'\wedge\frac{b\lVert a'\rVert}{\lVert b\rVert} = 0$$
and
$$\Big(\frac1x - c'\Big)^2 = \frac1{x^2} - 2\frac{x\cdot c'}{x^2} + c'^2$$
$$= \frac1{x^2}-\frac2{x^2}\frac{(c+a\cos\theta+b\sin\theta)\cdot\big((c^2+a^2)c-2(c\cdot a)a\big)}{(c+a)^2(c-a)^2}+c'^2$$
$$= \frac{(c+a)^2(c-a)^2}{x^2(c+a)^2(c-a)^2}-2\frac{(c^2+a^2)c^2-2(c\cdot a)^2+(c^2+a^2)(c\cdot a\cos\theta)-2a^2(c\cdot a\cos\theta)+0-0}{x^2(c+a)^2(c-a)^2}+c'^2$$
$$= \frac{(c^2+a^2)^2-4(c\cdot a)^2}{x^2(c+a)^2(c-a)^2}-2\frac{(c^2+a^2)c^2-2(c\cdot a)^2+(c^2-a^2)(c\cdot a\cos\theta)}{x^2(c+a)^2(c-a)^2}+c'^2$$
$$= \frac{(c^2+a^2)(c^2+a^2-2c^2)-2(c^2-a^2)(c\cdot a\cos\theta)}{x^2(c+a)^2(c-a)^2}+c'^2$$
$$= \frac{(c^2+a^2+2c\cdot a\cos\theta)(a^2-c^2)}{(c^2+2c\cdot a\cos\theta+a^2)(c+a)^2(c-a)^2}+c'^2$$
$$= \frac{a^2-c^2}{(c+a)^2(c-a)^2}+\frac{c^2}{(c+a)^2(c-a)^2}$$
$$= \frac{a^2}{(c+a)^2(c-a)^2}$$
$$=a'^2$$ |
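A numeric sanity check of these formulas (a sketch; the vectors below are arbitrary choices subject to $a\cdot b = b\cdot c = 0$ and $\lVert a\rVert=\lVert b\rVert$):

    import numpy as np

    c = np.array([1.0, 2.0, 0.5])
    a = np.array([0.3, -0.4, 1.2])
    b = np.cross(a, c)                           # orthogonal to both a and c
    b *= np.linalg.norm(a) / np.linalg.norm(b)   # scale so |b| = |a|

    den = np.dot(c + a, c + a) * np.dot(c - a, c - a)
    c2, a2, ca = np.dot(c, c), np.dot(a, a), np.dot(c, a)
    cp = ((c2 + a2) * c - 2 * ca * a) / den      # new centre c'
    ap = ((c2 + a2) * a - 2 * ca * c) / den      # new radius vector a'

    for theta in np.linspace(0.0, 2 * np.pi, 7):
        x = c + a * np.cos(theta) + b * np.sin(theta)
        inv = x / np.dot(x, x)
        # |1/x - c'|^2 - |a'|^2 should vanish, and 1/x - c' should be
        # coplanar with a' and b (triple product ~ 0)
        print(np.dot(inv - cp, inv - cp) - np.dot(ap, ap),
              np.linalg.det(np.array([inv - cp, ap, b])))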
Finding an essential submodule | Here is an example where the submodule cannot be found (i.e., what
you are trying to prove is not true).
Let $R$ be the $\mathbb Q$-algebra with the presentation
$$
\langle x, y_1, y_2, \ldots\;|\;x^{i+1}y_i=0, i=1, 2, \ldots\rangle.
$$
Let $P$ be the ideal of $R$ generated by $\{y_i\;|\;i=1, 2, \ldots\}$.
$R/P\cong \mathbb Q[x]$, so $P$ is a prime ideal of $R$ that does
not contain $x$. Therefore multiplication by $x$ on $R/P$ is an injective
function.
The annihilator of $x$ in $R$ contains the ideal
generated by $\{x^iy_i\;|\;i=1, 2, \ldots\}$, which is essential in $R$,
so $x$ belongs to the singular ideal of $R$.
Let $M = R\oplus R/P$. $M$ is a faithful module, because $R$ is a summand.
The annihilator $A$ of $x$ in $R$ is essential, as noted in the previous
paragraph. The annihilator
of $x$ in $M$ is $A\oplus \{0\}$, which is not essential, since it
is disjoint from $\{0\}\oplus R/P$. Thus there
can be no essential submodule $N\leq M$ that is annihilated by $x$. |
Reasoning about products of reals | Irrelevant at this point, but I started this answer before the question was answered, so I might as well put it up.
Probably the easiest way to understand this is to take logs. Since everything is positive, and the logarithm is increasing, we have that $$ \prod\limits_{i=1}^n\left(x_i + k\right) > \prod\limits_{j=1}^n \left(y_j + k\right)$$ if and only if $$\log\left(\prod\limits_{i=1}^n\left(x_i + k\right)\right) > \log\left(\prod\limits_{j=1}^n \left(y_j + k\right)\right)$$ which in turn is equivalent to $$\sum\limits_{j=1}^n\log\left(x_j + k\right) > \sum\limits_{j=1}^n\log\left(y_j + k\right) $$
But you can see that this is actually not true, because of the behavior of the natural log. It has sort of diminishing returns, right? So, one idea is to construct a scheme where the LHS has inputs to the log that are all quite big, so that the collective impact of the greater increases on the right is enough to make the difference. Notice also that if the two sequences don't have to be the same length, this is quite easy. But, with the idea in mind, let's suppose that the RHS has a lot of numbers that are quite small.
So, let's take $x$ to be the sequence $1, 1, \ldots, 1$ with $10$ elements. Then we take $y$ to be the sequence $10^8$ followed by $0.1$ nine times. Both of these sequences have $10$ elements, and the product of $x$ is $1$, which is bigger than that of $y$, which is $\frac{1}{10}$. But if I add $1$ to every element of $x$, the product becomes just $1024$, while doing the same to $y$ gives $(10^8+1)(1.1)^9$, which is far larger. |
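A quick check of the counterexample (a sketch in Python):

    from math import prod

    x = [1.0] * 10
    y = [1e8] + [0.1] * 9

    print(prod(x), prod(y))        # 1.0 vs ~0.1: product of x is larger
    print(prod(v + 1 for v in x))  # 1024.0
    print(prod(v + 1 for v in y))  # ~2.36e8: now y's product is far larger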
Why $A_\varepsilon \subset \bigcup_{\sigma \in \mathfrak{F}} S_\sigma,\;?$ | Let $x\in A_\varepsilon$. Choose non-negative rational numbers $a_i$ and $b_i$ such that $$ 0\lt \left[\Re(f_i(x))\right]^2- a_i\lt \frac{ \varepsilon}{2d} \mbox{ and } 0\lt \left[\Im(f_i(x))\right]^2 -b_i \lt \frac{ \varepsilon}{2d} .$$
Let $\sigma:=\{a_1,b_1,\cdots,a_d,b_d\}$. Then $\sigma$ belongs to $\mathfrak{F}$ and $x$ belongs to $S_\sigma$. |
If $X$ and $Y$ are both linear combinations of the same random variables, will they have to be dependent? | For the $cov(X,Y)$ you would have
$$
\operatorname{cov}(c_1Z_1+c_2Z_2,\; c_3 Z_1+c_4Z_2) = c_1c_3\operatorname{cov}(Z_1,Z_1) + (c_1c_4+c_2c_3)\operatorname{cov}(Z_1,Z_2)+c_2c_4\operatorname{cov}(Z_2,Z_2) \\ = c_1c_3\sigma_{Z_1}^2 + (c_1c_4+c_2c_3)\sigma_{Z_1}\sigma_{Z_2}\operatorname{corr}(Z_1,Z_2)+c_2c_4\sigma_{Z_2}^2.
$$ |
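Covariance is bilinear, so the coefficients come out linearly (not squared). A Monte Carlo sanity check of the formula (a sketch; the numbers are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(0)
    cov_z = [[2.0, 0.7], [0.7, 1.5]]
    z = rng.multivariate_normal([0, 0], cov_z, size=1_000_000)

    c1, c2, c3, c4 = 1.3, -0.5, 0.8, 2.0
    X = c1 * z[:, 0] + c2 * z[:, 1]
    Y = c3 * z[:, 0] + c4 * z[:, 1]

    empirical = np.cov(X, Y)[0, 1]
    formula = c1*c3*2.0 + (c1*c4 + c2*c3)*0.7 + c2*c4*1.5
    print(empirical, formula)   # agree to ~2 decimal places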
Circular Permutations of a Multiset | The key observation here is the following. The problem becomes
interesting when there are duplicate counts in the multiset or when
the GCD of the counts is more than one.
Remark. The task that we are treating here represents the case
where the forbidden configurations are those that contain a block
where all letters of one type appear consecutively, e.g. in the
example from the OP this would be a block of two $a$s, three $b$s,
four $c$s or five $d$s.
Applying the Polya Enumeration Theorem (PET) we have the cycle
index of the cyclic group
$$Z(C_n) = \frac{1}{n}\sum_{d|n} \varphi(d) a_d^{n/d}.$$
Let the multiset be represented by the polynomial
$$A_1^{q_1} A_2^{q_2} \times\cdots\times A_p^{q_p}.$$
where $p$ is the number of different types of elements and the $q$
represent multiplicities. Introduce $A=\{1,2,\ldots p\}$ and define
for $B\subseteq A$
$$s(B) = |B|+\sum_{k\in A\setminus B} q_k.$$
Furthermore we introduce
$$t(B) = \prod_{k\in B} A_k \prod_{k\in A\setminus B} A_k^{q_k}.$$
The sets $B$ represent fused blocks where the types contained in $B$
appear consecutively. We then have by inclusion-exclusion the closed
form
$$\sum_{B\subseteq A} (-1)^{|B|}
\left[t(B)\right] Z(C_{s(B)})
\left(\sum_{k\in A} A_k\right).$$
where $\left[t(B)\right]$ denotes coefficient extraction.
Applying this to $A_1^2 A_2^3 A_3^4 A_4^5$ yields the answer
$$\bbox[5px,border:2px solid #00A000]{144029}.$$
The Maple code for this was as follows. (We have included a total
enumeration routine which is practicable only for small element counts
but does nonetheless confirm the results from PET in those cases that
were checked.)
with(numtheory);
with(combinat);
pet_varinto_cind :=
proc(poly, ind)
local subs1, subs2, polyvars, indvars, v, pot, res;
res := ind;
polyvars := indets(poly);
indvars := indets(ind);
for v in indvars do
pot := op(1, v);
subs1 :=
[seq(polyvars[k]=polyvars[k]^pot,
k=1..nops(polyvars))];
subs2 := [v=subs(subs1, poly)];
res := subs(subs2, res);
od;
res;
end;
pet_cycleind_cyclic :=
proc(n)
local d, s;
s := 0;
for d in divisors(n) do
s := s + phi(d)*a[d]^(n/d);
od;
s/n;
end;
mset_incl_excl :=
proc(mset)
option remember;
local res, pset, types, dist,
pos, gf, src, cind, slots;
types := nops(mset);
res := 0;
for pset in powerset([seq(q, q=1..types)]) do
dist :=
[seq(`if`(member(q, pset), 1, mset[q]),
q=1..types)];
slots := add(q, q in dist);
src := add(A[q], q=1..types);
cind := pet_cycleind_cyclic(slots);
gf := pet_varinto_cind(src, cind);
gf := expand(gf);
for pos to types do
gf := coeff(gf, A[pos], dist[pos]);
od;
res := res + gf*(-1)^nops(pset);
od;
res;
end;
mset_enum :=
proc(mset)
local perm, cperm, orbits, orbit, rot, pos,
src, kind, types, slots, runl;
types := nops(mset);
src := [seq(seq(q, p=1..mset[q]), q=1..types)];
slots := add(q, q in mset);
orbits := table();
for perm in permute(src) do
orbit := {};
for pos to slots do
rot :=
[seq(perm[q], q=pos..slots),
seq(perm[q], q=1..pos-1)];
orbit := orbit union {rot};
od;
cperm := [op(perm), op(perm)];
for kind to types do
runl := 0;
for pos to nops(cperm) do
if cperm[pos] = kind then
runl := runl + 1;
else
runl := 0;
fi;
if runl = mset[kind] then
break;
fi;
od;
if pos <= nops(cperm) then
break;
fi;
od;
if kind = types + 1 then
orbits[orbit] := 1;
fi;
od;
nops([indices(orbits)]);
end;
Addendum Nov 2 2016. The above formula admits radical
simplification. We effectively remove all symmetries as soon as we
fuse a block of similar elements. Therefore we only need the cycle
index in the first step when we compute the set of all circular
configurations. We may simply divide by the number of elements
thereafter. This gives the formula
$$[t(\emptyset)] Z(C_{s(\emptyset)})\left(\sum_{k\in A} A_k\right)
+ \sum_{B\neq\emptyset, B\subseteq A} (-1)^{|B|}
\frac{(s(B)-1)!}{\prod_{k\in A\setminus B} q_k!}.$$
The Maple code now becomes
mset_incl_excl2 :=
proc(mset)
option remember;
local res, pset, types, dist,
pos, gf, src, cind, slots;
types := nops(mset);
slots := add(q, q in mset);
src := add(A[q], q=1..types);
cind := pet_cycleind_cyclic(slots);
gf := pet_varinto_cind(src, cind);
gf := expand(gf);
for pos to types do
gf := coeff(gf, A[pos], mset[pos]);
od;
res := gf;
for pset in powerset({seq(q, q=1..types)})
minus {{}} do
dist :=
[seq(`if`(member(q, pset), 1, mset[q]),
q=1..types)];
slots := add(q, q in dist);
res := res +
(-1)^nops(pset)*(slots-1)!
/ mul(q!, q in dist);
od;
res;
end; |
Generalization of $\frac{x^n - y^n}{x - y} = x^{n - 1} + yx^{n - 2} + \ldots + y^{n - 1}$ | I think that this formula is what you are looking for. If $\mathbf x = (x_1,\dotsc,x_r)$, then
$$ \sum_{|I|=n} \mathbf{x}^I = \sum_i\frac{x_i^{n}}{\prod_{j\neq i}(1-\frac{x_j}{x_i})}. $$
With 1 variable, it gives
$$ x^n = x^n, $$
for two,
$$ \sum_{i+j = n} x^i y^j = \frac{x^{n+1}}{x-y} + \frac{y^{n+1}}{y-x}, $$
and for three, it gives
$$ \sum_{i+j+k=n} x^i y^j z^k= \frac{x^{n+2}}{(x-y)(x-z)} + \frac{y^{n+2}}{(y-x)(y-z)} + \frac{z^{n+2}}{(z-x)(z-y)}. $$ |
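A symbolic check of the three-variable identity for small $n$ (a sketch, enumerating all monomials with $i+j+k=n$):

    import sympy as sp

    x, y, z = sp.symbols('x y z')

    def lhs(n):  # sum of x^i y^j z^k over i + j + k = n
        return sp.Add(*[x**i * y**j * z**(n - i - j)
                        for i in range(n + 1) for j in range(n + 1 - i)])

    def rhs(n):
        return (x**(n + 2) / ((x - y) * (x - z))
                + y**(n + 2) / ((y - x) * (y - z))
                + z**(n + 2) / ((z - x) * (z - y)))

    for n in range(4):
        print(n, sp.simplify(lhs(n) - rhs(n)) == 0)  # True for each n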
Why is $gH=H=Hg$ trivial for $g\in H$? | Suppose $g\in H$. Then $g^{-1}$ is also in $H$. Therefore, if $h\in H$, $k=g^{-1}h\in H$, so $gk=g(g^{-1}h)=h\in gH$. Therefore, $H\subset gH$. On the other hand, an arbitrary element $k\in gH$ has the form $gh$ for some $h\in H$. Since both $g$ and $h$ are in $H$, $k\in H$. Therefore $gH\subset H$. We conclude that $gH=H$. A symmetric argument shows $Hg=H$, completing the proof. |
Borel $\sigma$-algebra on subset of $\mathbb{R}^n$ | $\mathcal{E}$ is a $\sigma$-algebra on $E$ meaning, for the record, that it is a family of subsets of $E$ that is closed under taking complements (with respect to $E$) and countable unions and it contains the empty set.
As for your other question, $(\mathbb{R}^n - A) \cap E = (\mathbb{R}^n \cap E) - (A \cap E) = E - (A \cap E)$. Do you see why this has to be in $\mathcal{E}$? |
Showing a recursive sequence is Cauchy | Try to prove the following: $|a_n-a_{n-1}|=1/2^{n-1}$.
If you prove that lemma, then for fixed $\epsilon>0$, choose an $N$ so that $1/2^{N-1}<\epsilon$.
Then, if $m,n\geq N$ (wlog say $m\geq n$), you have $$|a_m-a_n|\leq |a_m-a_{m-1}|+|a_{m-1}-a_{m-2}|+...+|a_{n+1}-a_n|$$
$$=\frac{1}{2^{m-1}}+...+\frac{1}{2^n}=\frac{1}{2^n}(\frac{1}{2^{m-1-n}}+...+1)<\frac{1}{2^n}(2)=\frac{1}{2^{n-1}}$$but since $n\geq N$, we have that $\frac{1}{2^{n-1}}\leq \frac{1}{2^{N-1}}<\epsilon$. |
calculate the characteristic function of a random variable X with E(X)=0 and variance = 1 | $$\begin{align}\Bbb{E}[e^{i t X}]&=\Bbb{E}\left[\sum_{j=0}^\infty \frac{(i t X)^j}{j!}\right]\\&=\sum_{j=0}^\infty \frac{(it)^j}{j!}\Bbb{E}\left[ X^j\right]\\&=1+it\,\Bbb{E}[X] - \frac{t^2}{2}\Bbb{E}[X^2] + \sum_{j=3}^\infty \frac{(it)^j}{j!} \Bbb{E}\left[ X^j\right]\end{align}$$
Just using linearity of expectation. Can you finish it off yourself from there? |
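For reference, once you plug in $\Bbb E[X]=0$ and $\Bbb E[X^2]=1$, the expansion becomes $$\Bbb E[e^{itX}] = 1 - \frac{t^2}{2} + \sum_{j=3}^\infty \frac{(it)^j}{j!}\,\Bbb E[X^j],$$ so (assuming the higher moments are finite) the characteristic function behaves like $1-\frac{t^2}{2}+o(t^2)$ as $t\to 0$.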
Closed unit square is connected. | With respect to your original proof: I agree with Step 1, and I agree that Step 3 follows from Step 2; I haven't checked Step 2. Once you've got Step 3, you've got a contradiction unless $\mu = 1$: since $U$ is open, there is an open neighbourhood of $(\mu, \mu)$ in $U$, and this open neighbourhood isn't allowed to contain any points to the top-right of $(\mu, \mu)$ because $\mu$ is a sup.
There is a simpler version if you're allowed access to "path connected implies connected", because $[0,1]^2$ is obviously path-connected (indeed, convex).
Alternatively, you can use that the product of connected spaces is connected. |
Rolling a die with Chebyshev's inequality | $μ=1/2$ and $σ^2=1/4$?
That's right.
But that's for a single roll, let's say $X_i$ and
$X=X_1+...+X_{360}$. Are $μ$ and $σ$ the same for $X$ or do I need to
divide/multiply them by 360?
The expected value of the sum of the die rolls is equal to the sum of the expected values: $$\mathbb E\left(\sum\limits_{i=1}^{360} X_i \right)=360\cdot \mu_{x_1} $$
And the variance of the sum of the die rolls is equal to the sum of the variances, due to the independence of the rolls: $$Var\left(\sum\limits_{i=1}^{360} X_i \right)=360\cdot \sigma^2_{x_1}$$ |
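Concretely, with $\mu_{x_1}=\frac12$ and $\sigma^2_{x_1}=\frac14$ this gives $\mathbb E(X)=360\cdot\frac12=180$ and $Var(X)=360\cdot\frac14=90$; these are the values to plug into Chebyshev's inequality.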
Order of an integral | Why do you think it should be $O(x^a)$? You can factor out the $x^a$ and the $x$ (did you miss the $x$ sitting there between the $i$ and the $e^{i\theta}$?) so it's $x^{a+1}$ times the integral of something not far from $1$ in modulus, so $O(x^{a+1})$. |
Unfamiliar Property of Modular Arithmetic | Example: $a = 10, b = 2, n = 8, c = 4$.
\begin{align}
10 &\equiv 2 \pmod{8}.
\end{align}
The equivalence guarantees
\begin{align}
a &\equiv b + dn \pmod{32}
\end{align}
for some $0 \leq d < 4$. Indeed,
\begin{align}
10 &\equiv 2 + 1 \cdot 8 \\
&\equiv 10\pmod{32}.
\end{align}
Justification: Here is a proof for the first direction. Let $c$ be a positive integer. Suppose
\begin{align}
a \equiv b \pmod{n}.
\end{align}
Then $a = b + d n$ for some integer $d$. If we reduce modulo $cn$ we have
\begin{align}
a \equiv b + dn \pmod{cn}.
\end{align}
for some integer $d$. The set $\{0,n,2n,\dots,(c-1)n\}$ contains all possible reductions of $dn$ modulo $cn$. Hence
\begin{align}
a \equiv b + dn \pmod{cn}.
\end{align}
for some integer $0 \leq d < c$. Moreover only one integer $d$ in this range will satisfy this congruence, since $jn \not\equiv kn \pmod{cn}$ for integers $j,k$ such that $0 \leq j < k < c$.
The reverse direction is much easier, just reduce modulo $n$. That is, suppose
\begin{align}
a \equiv b + dn \pmod{cn}.
\end{align}
for some integer $0 \leq d < c$. Then $a = b + dn + kcn$ for some integer $k$. Thus,
\begin{align}
a &\equiv b + dn + kcn \\
&\equiv b \pmod{n}.
\end{align} |
Show $\sigma(T)=\sigma{(\overline{T^{*}})}$ | Hint: $T-\lambda I$ is invertible with inverse $A \in \mathcal{B}(H)$ iff
$$
I = (T-\lambda I)A = A(T-\lambda I)
$$
The above holds iff
$$
I = A^{\star}(T^{\star}-\overline{\lambda}I) = (T^{\star}-\overline{\lambda}I)A^{\star}.
$$ |
probability of repeating values in a hashmap | If the size of the hash table is $m$, the size of your set is $n$, and you are picking the numbers from this set uniformly at random, this is the probability that one random element $z$ of $N$ lies in $M$, where $M \subseteq N$. This is simply $\frac{m}{n}$.
If you want to know a conditional probability, where you already know that $x$ is in a certain chain, you need the probability that the size of the full class of $x$ equals some value, given that $|class| \ge |chain|$. We have $P(|class| = C \mid |class| \ge |chain|) = \frac{P(|class| = C)}{P(|class| \ge |chain|)}$ if $C \ge |chain|$, and $0$ otherwise. Note that $|chain|$ is a constant here. The total probability that $x$ is already in the chain is $$\sum_{C = |chain|}^{n}P(x\text{ in chain} \mid |class|=C)\,P(|class|=C \mid |class| \ge |chain|) = \sum_{C = |chain|}^{n}\frac{|chain|}{C}\cdot\frac{P(|class| = C)}{P(|class| \ge |chain|)}$$
$$=\sum_{C = |chain|}^{n}\frac{|chain|}{C}\cdot\frac{\binom{n}{C}p^{C}(1-p)^{n-C}}{P(|class| \ge |chain|)} = \frac{|chain|}{P(|class| \ge |chain|)} \cdot \sum_{C = |chain|}^{n}\frac{\binom{n}{C}p^{C}(1-p)^{n-C}}{C}$$
Here $|class|$ is a binomial random variable, $|chain|$ is a constant you know, and $\frac1p$ is the number of classes. This sum can certainly be approximated. |
What's the point of the cross product? | Yes, $v\times w$ is orthogonal to both $v$ and $w$. But it also has the property that $\lVert v\times w\rVert=\lVert v\rVert\,\lVert w\rVert\sin\theta$, where $\theta$ is the angle between $v$ and $w$. In particular, it provides an easy way to find a unit vector which is orthogonal to two given unit vectors that are already orthogonal to each other. |
Expectation of the product of two random variables | Hint: If $X$ and $Y$ are independent, then $E[XY] = E[X]\,E[Y]$. |
Fixed points of contractions in metric spaces | HINT:
find $d(a_m,a_n)$ with $m>n$ and use the fact that the series $1+k+k^2+\cdots$ is convergent, hence its tail goes to $0$.
Now, if the map had $2$ fixed points, say $x,y$, find the relation between $d(x,y)$ and $d(f(x),f(y))$, where $f$ is the contraction. |
Binary to Hexadecimal number | In hexadecimal you need 16 digits. This is usually achieved by using $0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F$. That is, $F$ is fifteen and $10$ is sixteen.
When converting from binary to hexadecimal, each group of four binary digits becomes one hexadecimal digit. In the case you are asking, $1100_2$ corresponds to $C_{16}$. |
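In code, this grouping is exactly what built-in base conversion does (a sketch):

    print(hex(0b1100))         # 0xc  -- the four bits 1100 become the digit C
    print(f"{0b1101_1100:X}")  # DC   -- each group of four bits is one hex digit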
Concluding the proof of the Baire category theorem | It’s not true in general.
Consider $\Bbb R$ with the usual metric, let $\Bbb Q=\{q_k:k\in\Bbb N\}$, and for $k\in\Bbb N$ let $A_k=\{q_k\}$; clearly each $A_k$ is nowhere dense in $\Bbb R$. Let $\alpha\in\Bbb R\setminus\Bbb Q$, and for $k\in\Bbb N$ let $x_k=2^{-k}\alpha$. Then $x_k\notin\bigcup_{k\in\Bbb N}\operatorname{cl}A_k=\Bbb Q$, but $\langle x_k:k\in\Bbb N\rangle$ converges to $0\in\Bbb Q$. |
There exists an automorphism of algebraic closure of $\mathbb{Q}$ that fixed $i$ and does not fix $2^{1/3}$ | Let $K$ be the splitting field of $x^3 - 2$ over ${\mathbb Q}(i)$. The extension $K : {\mathbb Q}(i)$ is Galois and because $x^ 3 - 2$ is irreducible over ${\mathbb Q}(i)$, the Galois group $\text{Gal}(K : {\mathbb Q}(i))$ acts transitively on the roots of $x^3 - 2$. So, pick any element of that Galois group that sends $\sqrt[3]{2}$ to $\zeta_3 \sqrt[3]{2}$. Note, by the way, that every element of that Galois group fixes $i$.
Finally, extend this automorphism of $K$ to an automorphism of $\overline{\mathbb Q}$ (see extension of automorphism of field to algebraically closed field). |
Prove that a set is a subset of another | Let $x \in (A \cap C) \cup ( B \cap D)$. Then $x \in A \cap C$ or $x \in B \cap D$.
Suppose $x \in A \cap C$, then $x \in A$ and $x \in C$. Therefore $x \in A \cup B$ and $x \in C \cup D$, i.e. $x \in (A \cup B) \cap (C \cup D)$.
Suppose $x \in B \cap D$, then $x \in B$ and $x \in D$. Therefore $x \in A \cup B$ and $x \in C \cup D$, i.e. $x \in (A \cup B) \cap (C \cup D)$.
It follows that $(A \cap C) \cup ( B \cap D) \subseteq(A \cup B) \cap (C \cup D)$. |
Exact sequences of bundles and orientations | A cleaner way of thinking about an orientation of an $n$-dimensional vector bundle $V$ for the purposes of doing this is that it's a trivialization of the top exterior power $\Lambda^{\dim V} V$. This makes the argument for vector bundles almost identical to the one for vector spaces: show that if
$$0 \to U \to V \to W \to 0$$
is a short exact sequence of vector bundles then there is an isomorphism of line bundles
$$\Lambda^{\dim V} V \cong \Lambda^{\dim U} U \otimes \Lambda^{\dim W} W$$
which clearly shows that trivializations of any two of these line bundles induce trivializations of the third. |
can't find closed form of linear recurrence system | HINT: There are many ways to solve this particular type of recurrence; here’s one. Any recurrence of the form $x_{n+1}=ax_n+b$ can be reduced to the form $y_{n+1}=cy_n$ by a change of variable. Specifically, let $y_n=x_n-d$ for some constant $d$ that will be determined later. Then $x_n=y_n+d$, and your recurrence $x_{n+1}=0.9x_n-5$ becomes $y_{n+1}+d=0.9(y_n+d)-5$, or
$$y_{n+1}=0.9y_n-5-0.1d\;.\tag{1}$$
Setting $-5-0.1d=0$ and solving for $d$, we get $d=-50$; substituting $d=-50$ into $(1)$ leaves us with the very simple recurrence
$$y_{n+1}=0.9y_n\;,\tag{2}$$
with initial value $y_1=x_1-d=20-(-50)=70$.
Now solve $(2)$ to get a closed form for $y_n$, and use the relationship $x_n=y_n+d=y_n-50$ to get a closed form for $x_n$. |
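A quick check that the resulting closed form matches the recurrence (a sketch; it assumes you solved $(2)$ to get $y_n=70\cdot 0.9^{\,n-1}$, hence $x_n=70\cdot 0.9^{\,n-1}-50$):

    x = 20.0
    for n in range(1, 8):
        closed = 70 * 0.9 ** (n - 1) - 50
        print(n, x, closed)   # the two columns agree
        x = 0.9 * x - 5       # step the recurrence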
How long can the Enterprise's shields hold? | I don't understand the initial condition. It seems like the shields start at $100\%$ efficiency, so they would never be depleted. I think you need to start them at something close to $100\%$, but not exactly.
If we let $y(t)$ be the resistance of the shields at time $t$, where $0\leq y(t)\leq 1$ then the equation is $$y'= -k(1-y)\tag1$$ where $k$ represents the power of the phasers. In the example, $k=\frac13.$ When $y=\frac34$, we get $$y'=-\frac13\left(1-\frac34\right)=-\frac1{12},$$ so that the resistance decreases by $\frac1{12},$ as in the example.
The general solution to $(1)$ is $$y=ae^{kt}+1,\tag2$$ where $a$ is a constant to be determined from the initial condition; that is, $a=y(0)-1$. If we set $y(0)=1$ as you say, then of course $a=0$, and the shields stay at full resistance always.
If the shields start at $99.99\%$ efficiency, then we get $$y(t)=1-0.0001e^{t/3}$$ when $k=\frac13.$ |
Trig substitution for integral with no square root? | This looks like a straightforward $\arctan$ integral.
Try writing it in the form
$$
\int \frac{f'(x) } {1+[f(x)]^2} \mathrm{d} x.
$$
It should be clear to you, comparing the denominators, that $f(x)=\sqrt{3}x$. |
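For instance, assuming the integrand is $\dfrac{1}{1+3x^2}$ (which fits the form above with $f(x)=\sqrt3\,x$, so $f'(x)=\sqrt3$): $$\int \frac{dx}{1+3x^2}=\frac{1}{\sqrt3}\int\frac{\sqrt3\,dx}{1+(\sqrt3\,x)^2}=\frac{1}{\sqrt3}\arctan\left(\sqrt3\,x\right)+C.$$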
Proving the following set is a $G_\delta$ set $\{x\in \mathbb{R}, \limsup_{m\to\infty}\vert f_m(x)\vert = +\infty\}.$ | Yes, that is wrong. Taking $f_m(x)=\frac m 2$ for all $x$ we get a counter-example to your claim.
There is no reason why you should have $|f_m(x)| >m$. Correct proof: The given set is $\bigcap_N \bigcap_k\bigcup_{j\geq k} \{x: |f_j(x)| >N\}$ which is a countable intersection of open sets. |
Find a vector w that is not in the image of T. | $T(x,y)=(10x+2y,−10x+10y,−8x−6y)=x(10,-10,-8)+y(2,10,-6)$. So the image of $T$ is the plane spanned by $(10,-10,-8)$ and $(2,10,-6)$. Any vector not in this plane will not be in the image, for example the cross product of the spanning vectors $(10,-10,-8)\times(2,10,-6)$. |
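Computing that cross product explicitly (a quick numeric check, not part of the original answer):

    import numpy as np

    w = np.cross([10, -10, -8], [2, 10, -6])
    print(w)                                                  # [140  44 120]
    print(np.dot(w, [10, -10, -8]), np.dot(w, [2, 10, -6]))   # 0 0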
Quotient of the fundamental group | So as I mentioned in the comments, I think the punchline comes from the following fact:
Let $(X,x)$ be a connected based nice (*) space. Then the following assignment is an order-preserving bijection between connected based covering maps and subgroups of $\pi_1(X,x)$: to a connected based covering map $p :(\tilde X, b)\to (X,x)$, assign the subgroup $p_*\pi_1(\tilde X,b)\subset \pi_1(X,x)$. Here connected based covering maps are ordered as follows:
$p:(\tilde X,b)\to (X,x)$ is smaller than $q: (\overline X, c)\to (X,x)$ if there is a map $f:\tilde X\to \overline X$ such that $q\circ f = p$ (the fact that this indeed defines an order is itself a theorem)
Moreover, if $c\in p^{-1}(b)$ is another point, then $p: (\tilde X,c)\to (X,x)$ corresponds to a subgroup which is conjugate to the one corresponding to $p:(\tilde X,b)\to (X,x)$
(*) : such that usual covering theory applies, this is clearly the case of $\mathbb RP^2\vee \mathbb RP^2$
This theorem is the main story of covering space theory (although there are other, better ways to phrase it), as it essentially says that studying subgroups of $\pi_1(X,x)$ is the same as studying covering spaces of $X$.
Proving it essentially relies on the lifting theorem for covering spaces.
Once you have that, the computation you want to do is pretty straightforward : if you have an abelian covering $\tilde X\to X$ (say $X$ is based at $x$), which means that it is normal (i.e. the subgroup associated to $\tilde X$ does not depend on the choice of a basepoint $b\in p^{-1}(x)$) and that its group of automorphisms (here, $\pi_1(X)/p_*\pi_1(\tilde X)$) is abelian; then $\pi_1(X)/p_*\pi_1(\tilde X)$ is abelian, so $p_*\pi_1(\tilde X)$ contains $[\pi_1(X),\pi_1(X)]$, the commutator subgroup of $X$.
This is the smallest normal subgroup $H$ of $\pi_1(X)$ such that $\pi_1(X)/H$ is abelian.
In particular, since we have an order preserving bijection between based covering maps and subgroups, if we take a covering map $\rho : Y\to X$ which corresponds to the commutator subgroup itself (which is normal), then $\pi_1(Y)\subset \pi_1 (\tilde X)$, and so there is a map of based connected covering maps $(Y,b)\to (\tilde X, c)$ (for any choices of $b,c$, since we chose normal coverings)
This means that $(Y,b)$ is the universal abelian cover of $(X,x)$.
In particular, $\rho_*\pi_1(Y) = [\pi_1(X),\pi_1(X)]$ and so $\pi_1(X)/\rho_*\pi_1(Y) = \pi_1(X)/[\pi_1(X),\pi_1(X)] = \pi_1(X)^{ab}$, the abelianization of $\pi_1(X)$ (this is the largest abelian quotient of $\pi_1(X)$)
Now there are various ways of computing this.
If you know van Kampen's theorem, and if $S$ is nice enough (here, it's $\mathbb RP^2$, so that's the case), you can compute $\pi_1(S\vee S) = \pi_1(S)*\pi_1(S)$ (the free product of $\pi_1(S)$ with itself); and then you can check by hand that $(G*H)^{ab} = G^{ab}\times H^{ab}$ (using the definition as "largest abelian quotient" for instance). So here, you get $\pi_1(S)^{ab}\times \pi_1(S)^{ab}$; in the specific case $S= \mathbb RP^2$, where $\pi_1(\mathbb RP^2) =\mathbb Z/2$, you get $\mathbb Z/2\times \mathbb Z/2$.
If you know homology, then you'll recognize $\pi_1(X)^{ab}$ from another theorem : Hurewicz's theorem says that for connected $X$, $\pi_1(X)^{ab}= H_1(X)$, so here $\pi_1(S\vee S)^{ab} = H_1(S\vee S) = H_1(S)\oplus H_1(S) = \pi_1(S)^{ab}\times \pi_1(S)^{ab}$, and so you may conclude as above. |
Normal closures of finite dimensional field extensions (existence, uniqueness, example) | Existence: Looks pretty good.
Uniqueness: Suppose there are two, $G_1$ and $G_2$. You don't need to consider elements in one and not the other and derive a contradiction; an easier path would be to try to prove $G_1$ contains $G_2$ and vice versa, so they must be equal. Can you do this from the definition of normal closure?
Example: Since $\sqrt[3]{4} = \sqrt[3]{2}^2$, $\mathbb Q(\sqrt[3]2,\sqrt[3]4)$ is just the same as $\mathbb Q(\sqrt[3]2)$. So it can't be the normal closure unless $\mathbb Q(\sqrt[3]2)$ is already normal over $\mathbb Q$. Is it? If not, why not? This will help you find the normal closure. |
Find the matrix representing T and Find the Image of T (as a span of vectors) | Yes, finding the matrix representing $T$ is that straightforward. To find the image of $T$, consider the set that $T$ maps onto, or at least some representation of this set. A good idea is to find a basis for this image and use that basis to describe the vectors in the image. |
Check if $(2,\pi/2)$ lies on $r=2\cos(2\theta)$ | I assume the pair $(2, \pi/2)$ is given in polar, rather than cartesian, coordinates.
In this context, the typical convention is that $r < 0$ is permitted. In other words: $r$ and $\theta$ are both permitted to be any real value. As you know, distinct pairs $(r_1, \theta_1)$ and $(r_2, \theta_2)$ may map to the same point on the plane. So the question of whether $(2, \pi/2)$ "lies on the graph" should be interpreted like this: Does there exist some pair $(r, \theta)$ of real numbers—equivalent to $(2, \pi/2)$ in the sense that it represents the same point on the cartesian plane—which satisfies the equation $r = 2\cos(2\theta)$?
The point in question has the following polar representations, and no others: $(2, \pi/2 + 2\pi n), n \in \mathbb Z$, and $(-2, 3\pi/2 + 2\pi n), n \in \mathbb Z$.
The pairs in the first family do not satisfy the equation, but those in the second family do:
$$2\cos(2(\pi/2 + 2\pi n)) = -2 \ne 2,$$
$$2\cos(2(3\pi/2 + 2\pi n)) = -2.$$
Therefore, the point lies on the graph.
To elaborate a bit more on what's going on here: The cartesian graph of the polar relation $r = 2\cos(2\theta)$ is (by definition) the set of all points $(x,y) \in \mathbb R^2$ such that there exists a pair $(r, \theta) \in \mathbb R^2$, with $(x,y) = (r\cos\theta,r\sin\theta)$, which satisfies $r = 2\cos(2\theta)$. This is a bit of a mouthful, but the key thing to understand is this: Although the pair $(2, \pi/2)$ does not satisfy our relation, it is true that a pair equivalent to that pair satisfies the relation. So if you plot on the cartesian plane the equation $y = 2\cos(2\theta)$, and the point $(x=\pi/2, y=2)$, you will see that the point doesn't lie on the graph. But once we replace $x, y$ by $r, \theta$, multiple points on the graph "collapse" to a single point, so the answer becomes "yes".
To be fair, the question is slightly ambiguous, but I believe this is the usual interpretation in precalculus courses. |
Divide proportionnally to lowest possible value? | So you have $255\geq x\geq 120$. Subtract $120$ and you'll get:
$$135\geq x-120\geq 0$$
multiply with $\frac{255}{135}$ to obtain:
$$255\geq\frac{255}{135}(x-120)\geq 0$$
Now it's in the range that you want.
So to get the number between $255$ and $0$, subtract $120$ and then multiply by $\dfrac{255}{135}$. |
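As a one-liner (a sketch):

    def rescale(x):
        # maps the range [120, 255] onto [0, 255] proportionally
        return (x - 120) * 255 / 135

    print(rescale(120), rescale(187.5), rescale(255))   # 0.0 127.5 255.0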