In a convex quadrilateral each vertex is connected by two line segments with the midpoints of the two opposite sides.
Let the quadrilateral have vertices $A$, $C$, $E$ and $G$ and let the midpoints between consecutive vertices be $B$, $D$, $F$ and $H$, respectively. Let the unknown length be $\overline{AD}$. The triangles $ACF$, $EGB$ are isosceles. Hence the segment $BF$ is perpendicular to both $AC$ and $EG$ and divides those two triangles in half. Therefore the edges of the quadrilateral $AC$ and $EG$ are parallel. Since the lengths $\overline{GH} = \overline{HA}$, $H$ is also the midpoint of the straight line that goes through $H$ and is perpendicular to the straight line that passes through the edges $AC$ and $EG$. We have a similar result for point $D$ because the lengths $\overline{CD} = \overline{DE}$. Hence we have that the segment $DH$ is parallel to the edges $AC$ and $EG$. The triangle $CEH$ is also isosceles. Therefore, the edge $CE$ is perpendicular to the segment $DH$. Hence, the edge $CE$ is also perpendicular to the edges $AC$ and $EG$. The triangles $DEG$ and $EGH$ are congruent by the side-angle-side condition, which means that the lengths $\overline{DE} = \overline{GH}$. Therefore, since $DE$ is perpendicular to $EG$, so is $GH$. Hence, $HA$ is parallel to $CE$ and has the same length. Consequently, triangles $CDH$ and $ADH$ are congruent, leading to the conclusion that $\overline{AD} = \overline{CH} = a$. After reading all of this I wonder whether there is a simpler way of getting this result...
Probability that product will be faulty
Hint for a): You are on the right track. You have the expected value and the variance of the binomial distribution. Now use the cdf to provide a formula which calculates the probability of $2$ up to $200$ successes. After that you can apply the complementary probability: $P(X\geq 2)=1-P(X\leq 1)=1-P(X=1)-P(X=0)$.
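A minimal numerical sketch of the complementary-probability step, assuming $n=200$ trials as the hint suggests; the defect probability $p=0.01$ is a placeholder since the question's value is not shown here:

    # Hypothetical parameters: n = 200 trials, p = 0.01 defect probability.
    from scipy.stats import binom

    n, p = 200, 0.01
    prob = 1 - binom.pmf(0, n, p) - binom.pmf(1, n, p)  # P(X >= 2)
    print(prob)
    print(1 - binom.cdf(1, n, p))                       # same value via the cdf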
Find the extreme points of an integral function
Well, we can find that point by solving: $$\frac{\text{d}}{\text{d}x}\left(\int_0^x e^{t^2}\left(t^3-3t+2\right)\space\text{d}t\right)=0\tag1$$ Using the fundamental theorem of calculus: $$ \frac{\text{d}}{\text{d}x}\left(\int_0^x e^{t^2}\left(t^3-3t+2\right)\space\text{d}t\right) =e^{x^2}\left(x^3-3x+2\right)\tag2 $$ Now, solve: $$e^{x^2}\left(x^3-3x+2\right)=0\space\Longleftrightarrow\space x=-2\space\vee\space x=1\tag3$$ So.....
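A quick symbolic check of $(3)$ with sympy; note the cubic factors as $(x-1)^2(x+2)$, so the derivative does not change sign at $x=1$ and only $x=-2$ gives a genuine extremum (a minimum):

    import sympy as sp

    x = sp.symbols('x')
    Fprime = sp.exp(x**2) * (x**3 - 3*x + 2)
    print(sp.solve(Fprime, x))            # [-2, 1]
    print(sp.factor(x**3 - 3*x + 2))      # (x - 1)**2*(x + 2): double root at 1,
                                          # so F' keeps its sign there (no extremum)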
The geometry of a transformation given by a singular $2\times 2$ matrix?
$\newcommand{\Vec}[1]{\mathbf{#1}}$Briefly, if $A$ is a real $2 \times 2$ matrix that is non-zero and not invertible, the image of the associated linear transformation is a line $\ell$. The linear system $A\Vec{x} = \Vec{b}$ has infinitely many solutions if $\Vec{b}$ lies on $\ell$, and has no solutions if $\Vec{b}$ does not lie on $\ell$. Algebraically, there exist non-zero vectors $\Vec{v}$ (unique up to multiplication by a non-zero scalar) and $\Vec{u}$ (uniquely determined by $A$ and $\Vec{v}$) such that $A = \Vec{v} \Vec{u}^{\mathsf{T}}$. The line $\ell$ is spanned by $\Vec{v}$, and the null space of $A$ is the line orthogonal to $\Vec{u}$. (Proving this is a pleasant exercise.)
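A small numerical illustration of the rank-one factorization, with hypothetical vectors $\Vec{v}=(1,2)$ and $\Vec{u}=(3,1)$:

    import numpy as np

    # A = v u^T is a generic non-zero, non-invertible 2 x 2 matrix.
    v = np.array([[1.0], [2.0]])
    u = np.array([[3.0], [1.0]])
    A = v @ u.T
    print(np.linalg.matrix_rank(A))       # 1: the image is the line spanned by v
    print(A @ np.array([1.0, -3.0]))      # [0. 0.]: (1, -3) is orthogonal to u,
                                          # so it spans the null space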
How to prove that the equation $x^{2n}-2x^{2n-1}-\cdots-2nx+(2n+1)=0$ has no real roots?
Multiplying your function $$P(y)=\sum_{j=0}^{2n} (j+1)(-y)^j$$ by $(y+1)^2$ gives $$P_2(y)=(2n+1)y^{2n+2}+(2n+2)y^{2n+1}+1.$$ Its derivative is $$(2n+1)(2n+2)\left(y^{2n+1}+y^{2n}\right)=(2n+1)(2n+2)y^{2n}(y+1).$$ Can you see why this means that $P_2(y)\geq 0$ for all real $y$, with its minimum reached at $y=-1$ (the new root we have added and can check in $P$ instead of $P_2$ later)?
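A sanity check of the identity $(y+1)^2P(y)=P_2(y)$ for a small illustrative case, here $n=3$, with sympy:

    import sympy as sp

    y = sp.symbols('y')
    n = 3                                    # small illustrative case
    P  = sum((j + 1) * (-y)**j for j in range(2*n + 1))
    P2 = (2*n + 1)*y**(2*n + 2) + (2*n + 2)*y**(2*n + 1) + 1
    print(sp.expand((y + 1)**2 * P - P2))    # 0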
What is the generated set $\langle a,b\rangle$?
As ancient mathematician pointed out, $\langle a,b\rangle$ is the smallest subgroup containing the elements $a$ and $b$. This is the definition of the notation. Consider the set $\{g_1g_2...g_n : n \in \mathbb{N}, g_i \in \{a,b,a^{-1},b^{-1}\}\}$. Let's call this set $S$. Firstly note that this set is exactly all the combinations of products of $a,b$ and their inverses. It is not hard to see that $e \in S$, $S$ is closed with respect to the operation, and $S$ is closed with respect to inverses. This means that $S$ is a subgroup of our underlying group $G$ (and obviously $S$ contains $a,b$). Now consider $H$, which we define to be any subgroup of $G$ that contains $a$ and $b$. Note that any element of $S$ is also an element of $H$ (because subgroups are closed with respect to the operation and inverses). So this means that $S \subset H$. But this means that $S$ is a subset of any subgroup containing both $a$ and $b$, and $S$ is also a subgroup containing $a,b$! In other words, $S$ is the smallest subgroup of $G$ containing both $a,b$. This means we have proven that the set $\langle a,b \rangle$ is actually the set $S$. The point here is that the notation $\langle a,b\rangle$ was defined to be the smallest subgroup containing $a$ and $b$, and then we proved that such a subgroup exists, and in fact is equal to the set $S$. So now whenever we see $\langle a,b \rangle$, we can just replace it with the set $S$.
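For concreteness, here is the same idea in a specific group: sympy builds exactly the subgroup $S$ of all words in the generators and their inverses. The choice of $S_3$ and these two transpositions is just an illustration:

    from sympy.combinatorics import Permutation, PermutationGroup

    a = Permutation([1, 0, 2])       # the transposition (0 1) in S_3
    b = Permutation([0, 2, 1])       # the transposition (1 2)
    G = PermutationGroup([a, b])     # <a, b>: all products of a, b, a^-1, b^-1
    print(G.order())                 # 6, so here <a, b> is all of S_3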
Does this infinite geometric series diverge or converge?
If we apply your reasoning, $$ \sum_{n=1}^\infty 2^n=\frac{2}{1-2}=-2. $$ You should ask yourself how you get a negative result by adding all positive terms. The reason is that the formula for the geometric series $\sum_n r^n$ applies when the series is convergent, which requires $|r|<1$. On another note, the formula for your series (had it been convergent) would have been $$\sum_{n=2}^\infty ar^n=\frac{ar^2}{1-r}.$$
Evaluate the integral $ \ \ \int_C z \cos (z) \ dz \ $
Since the function is analytic over $\mathbb{C}$, you can apply the FTOC here $$ \begin{align} \int_2^{\pi/2+i} z\cos z \ dz &= [z\sin z + \cos z]\bigg|_2^{\pi/2+i} \\ &= \left(\frac{\pi}{2} + i\right)\cosh 1 - i \sinh 1 - 2\sin 2 - \cos 2 \\ &= \left[\frac{\pi}{4}\left(e + \frac{1}{e} \right) - 2\sin 2 -\cos 2 \right] + \frac{i}{e} \end{align} $$
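A numerical cross-check, parametrizing the straight segment from $2$ to $\pi/2+i$ (the value is path-independent because the integrand is entire):

    import numpy as np
    from scipy.integrate import quad

    z0, z1 = 2.0, np.pi/2 + 1j
    f = lambda t: (z1 - z0) * (z0 + t*(z1 - z0)) * np.cos(z0 + t*(z1 - z0))
    re, _ = quad(lambda t: f(t).real, 0, 1)
    im, _ = quad(lambda t: f(t).imag, 0, 1)
    print(re + 1j*im)
    print(np.pi/4*(np.e + 1/np.e) - 2*np.sin(2) - np.cos(2) + 1j/np.e)  # closed form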
Complexity of $T(n) = T(n-10) + \sqrt{n}$
Given a positive integer $n$, the recurrence easily provides $$T(n) = T(n') + \sum_{j=0}^{\frac{n-n'}{10}} \sqrt{n'+10j},$$ where $1\le n'\le 10$ and $n\equiv n'\pmod {10}$. We claim that the sum $S(n)$ is $\Theta (n^{3/2})$. Indeed, $$\int_{n'}^n \sqrt{x}\,dx=\frac 23 x^{3/2}\Big|_{n'}^n=\Theta (n^{3/2})$$ and $$S(n)- \frac 1{10}\int_{n'}^n \sqrt{x}\,dx=\sqrt{n'}+\sum_{j=1}^{\frac{n-n'}{10}} \left(\sqrt{n'+10j}-\frac 1{10}\int^{n'+10j}_{n'+10(j-1)}\sqrt{x}\,dx\right)= O(1)+O(n),$$ because for each $j$ $$0\le \sqrt{n'+10j}-\frac 1{10}\int^{n'+10j}_{n'+10(j-1)}\sqrt{x}\,dx= \frac 1{10}\int^{n'+10j}_{n'+10(j-1)}\left(\sqrt{n'+10j}-\sqrt{x}\right)dx=O(1),$$ because for each $x\in [n'+10(j-1), n'+10j]$ by Lagrange's Theorem there exists $y\in (x, n'+10j)$ such that $\sqrt{n'+10j}-\sqrt{x}=\frac 1{2\sqrt{y}}(n'+10j-x)=O(1)$. (I expect in fact the approximation of the sum by the integral is even $O(\sqrt{n})$.) Finally $T(n)=\Theta (n^{3/2})$.
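An empirical check that $T(n)=\Theta(n^{3/2})$; taking $T(n')=0$ for the base cases $1\le n'\le 10$ is an arbitrary choice that does not affect the asymptotics:

    from math import sqrt

    def T(n):
        # unroll T(n) = T(n - 10) + sqrt(n) down to the base cases
        total = 0.0
        while n > 10:
            total += sqrt(n)
            n -= 10
        return total

    for n in (10**3, 10**5, 10**7):
        print(n, T(n) / n**1.5)   # ratios approach 1/15 = (2/3)/10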
How to go about solving $x=2^{\frac1x}$
Note that $$x=2^{\frac1x}\iff x^x=2$$ and since $f(x)=x^x$ is strictly increasing (for $x>1/e$), with $f(x)\le1$ for $0<x\le 1$, $f(1)=1$ and $f(2)=4$, by IVT a unique solution exists, which can be found by numerical methods.
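For instance, a bracketing root-finder on $f(x)=x^x-2$ over $[1,2]$, where the sign change is guaranteed by the values above:

    from scipy.optimize import brentq

    root = brentq(lambda x: x**x - 2, 1, 2)   # f(1) < 0 < f(2)
    print(root)                # ~1.5596
    print(root ** root)        # ~2.0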
Show that: $V=U\oplus W \wedge \exists_{u\in U,w\in W}u\neq 0,w\neq 0\Rightarrow \exists \tilde{W}\neq W\subseteq V: \tilde{W}\oplus U=V$
Consider the vector subspace $U$. We know that $U$ is not the trivial subspace since there exists an element $u\in U$ with $u\neq 0$. Using Theorem 5.23 (d) we now have that $B=\{u\}$ (which is of course a set of linearly independent vectors) can be completed to a basis $B_U$ of $U$.
Turning a spiral into coordinates
The solution to your problem is fairly straightforward in complex variables. I hope that you're familiar with that because anything else would be ghastly. In fact, in Cartesian coordinates it would be more of a problem of accounting than mathematics. The first step in identifying the location of the $n^{th}$ point is to have an equation for the entire spiral. Let's begin with the idea that you have a sequence of numbers, in your case $S=\{0,1,1,2,2,3,3,4,4,5,...\}$ and that at each point there's a $-90^{\circ}$ (clockwise) turn. Now, we can build up the spiral by the method of linkages, so-called because of its similarity to the old-fashioned surveyor's chain, made up of articulated segments of unit length. Thus, $$\begin{align} & z_0=0\\ & z_1=-1e^{-i\pi/2}\\ & z_2=-1e^{-i\pi/2}-1e^{-i2\pi/2}\\ & z_3=-1e^{-i\pi/2}-1e^{-i2\pi/2}-2e^{-i3\pi/2}\\ & z_4=-1e^{-i\pi/2}-1e^{-i2\pi/2}-2e^{-i3\pi/2}-2e^{-i4\pi/2}\\ & z_5=-1e^{-i\pi/2}-1e^{-i2\pi/2}-2e^{-i3\pi/2}-2e^{-i4\pi/2}-3e^{-i5\pi/2}\\ \end{align}$$ and so on. So now the exact solution for the position of the $n^{th}$ point is given by $$z_n=-\sum_{k=0}^nS(k)e^{-ik\pi/2}$$ For your information, the minus signs are due to the fact that your spiral evolves clockwise, rather than the conventional anticlockwise. And if you wanted the complete collection of points, say for plotting, then you would use the cumulative sum rather than the sum. (It's aggravating that there is no mathematical symbol for the cumulative sum or product.)
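Here is the cumulative-sum version in code, using $S(k)=\lceil k/2\rceil$ to generate the sequence $\{0,1,1,2,2,3,3,\dots\}$:

    import numpy as np

    N = 12
    S = np.ceil(np.arange(N + 1) / 2)                        # 0, 1, 1, 2, 2, 3, ...
    steps = -S * np.exp(-1j * np.arange(N + 1) * np.pi / 2)  # -S(k) e^{-ik pi/2}
    z = np.cumsum(steps)                                     # z[n]: the n-th point
    print(np.column_stack([z.real, z.imag]))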
Show that the UMP test of size $\alpha = 1/3$ based on $X$ leads to rejecting $H_0$ when $X = 0$
Looks right. I'm not $100\%$ sure about the technical details. But you can check the answer. In order to keep the $\alpha$, i.e., the probability of erroneously rejecting $H_0$, equal to $1/3$, you must choose one of the values $\{-1,0,1\}$ as a rejection region. Now, you can simply check what would be the power for every value, i.e., the probability of getting the value under $H_1$, that is $\{1/4, 1/2, 1/4\}$ respectively, so the maximal power $(1/2)$ is attained for $C=\{0\}$.
Diagonalizability of a given matrix
Note that your matrix $A$ is a block upper triangular matrix of the form $$ A = \begin{pmatrix} 0_{d \times d} & B \\ 0 & D_{(d + 1) \times (d + 1)} \end{pmatrix} $$ where $D$ is also a block upper triangular matrix with $c_d$ on the diagonal. Hence, the characteristic polynomial of $A$ is $$ p_A(x) = x^d(x - c_d)^{d+1}. $$ We have two options: If $c_d = 0$ then the characteristic polynomial of $A$ is $x^{2d + 1}$ and so $A$ is nilpotent which implies that if $A$ is also diagonalizable we must have $A = 0$. Hence, $c_d = c_{d-1} = \dots = c_0 = 0$. If $c_d \neq 0$ then $A$ will be diagonalizable if and only if the minimal polynomial of $A$ is $x(x - c_d)$. Plugging $A$ into $x(x - c_d)$, we see that we must have $$ A(A - c_d I) = \begin{pmatrix} 0 & B \\ 0 & D \end{pmatrix} \begin{pmatrix} -c_d I & B \\ 0 & D - c_d I \end{pmatrix} = \begin{pmatrix} 0 & B(D - c_d I) \\ 0 & D(D - c_d I) \end{pmatrix} = 0. $$ In particular, $D(D - c_dI) = 0$ and so $D$ must be diagonalizable with a single eigenvalue $c_d$ which implies that in fact $D = c_dI$ and so $c_{d-1} = \dots = c_0 = 0$. In any case, we see that we must have $c_{d-1} = \dots = c_0 = 0$ for the matrix to be diagonalizable. An alternative, more elementary, solution that doesn't involve the minimal polynomial continues from the calculation of the characteristic polynomial as follows: If $c_d = 0$ then the characteristic polynomial of $A$ is $x^{2d+1}$ and so if $A$ is diagonalizable, we must have $\dim \ker(A) = 2d + 1$ so $A = 0$. If $c_d \neq 0$ then we must have $\dim \ker A = d$ and $\dim \ker (A - c_d I) = d + 1$. Since $c_d \neq 0$, the $d + 1$ non-zero columns of $A$ are linearly independent and so $\dim \ker A = d$. However, $$ A - c_dI = \begin{pmatrix} -c_dI & B \\ 0 & D - c_dI \end{pmatrix} $$ and so the first $d$ columns of $A - c_dI$ are linearly independent. In order that $\dim \operatorname{Im}(A - c_dI) = d$, the rest of the columns must belong to the span of the first $d$ columns. By looking at the last column, we see that this implies that $c_{d-1} = \dots = c_0 = 0$.
If n and m are sums of two squares so is $\frac{n}{m}$
As a hint to get you started: Consider the special case of $\frac 1m$ with $m=a^2+b^2$. Then we have $$\frac 1m=\frac {a^2}{m^2}+\frac {b^2}{m^2}=\left(\frac am\right)^2+\left(\frac bm\right)^2$$ Now try to adapt this to the general case.
These equations can generate integers that have the same total stopping time in the Collatz Conjecture. Has this been discovered?
All your numbers are multiples of $2^k$ and the well known numbers of the branch $\{3,13,53,213,853,3413,...\}$ of the Collatz tree. The numbers on this branch are $a_{n+1}=4\cdot a_n+1$ with $a_0=3$, or $\frac{10\cdot 4^n-1}{3}$, and all have the same ("odd") stopping time (since they are on the same branch). Multiplying by $2^k$ is the same as looking at the direct next branch of these numbers in the tree, which means they all have the same stopping time too (well, it depends on what you are looking at when determining the stopping time: odd only, odd/even, ... Looking at the "odd/even" stopping time is just a bit trickier; the exponent of 2 of the current/next branch plays a role in that case). Your formulation is a bit strange (you go backward, limiting the results), but you could generalize to any branch. You could also try to go further than the direct next branch.
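A quick check of the "odd" stopping time (the number of odd terms hit before reaching $1$), which is the notion under which the claim is cleanest:

    def odd_steps(n):
        # count the odd terms encountered before reaching 1
        count = 0
        while n != 1:
            if n % 2:
                count += 1
                n = 3*n + 1
            else:
                n //= 2
        return count

    branch = [(10 * 4**k - 1) // 3 for k in range(6)]   # 3, 13, 53, 213, 853, 3413
    print([odd_steps(m) for m in branch])               # all equal
    print([odd_steps(8 * m) for m in branch])           # multiplied by 2^k: still equal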
What are the units of $\mathbb Z/2\mathbb Z \times \mathbb Z/5\mathbb Z$
I'll clarify again the product of rings: For $R, S$ rings, the ring $R \times S$ is defined to be $$R \times S := \{(r,s) \mid r \in R, s \in S\}$$ with addition and multiplication defined pointwise, additive identity $(0_R, 0_S)$ and multiplicative identity $(1_R, 1_S)$ and additive inverses also defined pointwise. So, the product $\mathbb Z/2\mathbb Z \times \mathbb Z/5\mathbb Z$ looks like $$\{(0,0),(0,1),(0,2),(0,3),(0,4),(1,0),(1,1),(1,2),(1,3),(1,4)\}$$ with $10$ elements as you expected. Now, for $u=(u_1,u_2)$ to be a unit, we need to find a $v = (v_1, v_2)$ such that $uv = 1$. Looking at the definition of the product ring, this means $u_1 v_1 = 1$ and $u_2 v_2 = 1$, i.e. $u_1$ and $u_2$ are units in their respective rings. In $\mathbb{Z}/2\mathbb{Z}$, the only unit is $1$ and in $\mathbb{Z}/5\mathbb{Z}$, the non-zero elements $1,2,3,4$ are all units. So, $u = (1,1),(1,2),(1,3),(1,4)$ are the units of $\mathbb Z/2\mathbb Z \times \mathbb Z/5\mathbb Z$.
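A brute-force confirmation straight from the definition:

    from itertools import product

    ring = list(product(range(2), range(5)))     # elements of Z/2 x Z/5
    units = [u for u in ring
             if any(u[0]*v[0] % 2 == 1 and u[1]*v[1] % 5 == 1 for v in ring)]
    print(units)    # [(1, 1), (1, 2), (1, 3), (1, 4)]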
Transformation-matrix for square to circle?
A matrix represents a linear transformation. And a linear transformation maps lines onto lines. Hence no linear transformation will have the properties you ask for. You need to look at other kinds of transformations!
Prove the operator matrix is positive
$M_2(A)$ is a unital subalgebra of $M_2(\mathbb B(\mathcal H))$, thus for any $x\in M_2(A)$, the spectrum is independent of the algebra, that is, $\sigma_{M_2(A)}(x)=\sigma_{M_2(\mathbb B(\mathcal H))}(x)$. Since $\begin{bmatrix} 1& a\\ a^*& 1\end{bmatrix}$ is self-adjoint, and $$\sigma_{M_2(A)}\left(\begin{bmatrix} 1& a\\ a^*& 1\end{bmatrix}\right)=\sigma_{M_2(\mathbb B(\mathcal H))}\left(\begin{bmatrix} I& \pi(a)\\ \pi(a)^*& I\end{bmatrix}\right)\subset[0,\infty),$$ it follows that $\begin{bmatrix} 1& a\\ a^*& 1\end{bmatrix}$ is positive. For your second question: since $\|A\|>1$, there exists $y\in\mathcal H$ with $\|y\|=1$ such that $\|Ay\|>1$. Letting $x_0=\frac{1}{\|Ay\|}Ay$, we have $$\langle Ay,x_0\rangle=\|Ay\|>1.$$ Now let $x=-x_0$. Then $$\langle Ay,x\rangle=-\langle Ay,x_0\rangle<-1.$$
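A finite-dimensional numerical sanity check of the first claim, assuming (as in the question) that $\|a\|\le 1$:

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    a /= np.linalg.norm(a, 2)                    # scale to operator norm 1
    I = np.eye(3)
    block = np.block([[I, a], [a.conj().T, I]])  # the 2 x 2 operator matrix
    print(np.linalg.eigvalsh(block).min())       # >= 0 up to rounding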
Does saying "to negate a sign of a const term" have a meaning
You could say that; you could also say "to negate the constant term" -- both seem grammatically correct. In your expression the object of 'negate' is the sign itself, while in "to negate the constant term" the object is the term, and negating it changes the sign in front of it; it makes sense both ways.
Show $\lim_{a \to 0} a \cdot \mu (\{ x \in \mathbb{R} : |f(x)| > a \}) = 0$ for $f \in L^1 (\mathbb{R})$, $a>0$
It is a consequence of Lebesgue's dominated convergence theorem. Note that $$ a\cdot 1_{\{|f(x)|>a\}}\leq |f(x)| $$ for all $a>0$. Thus for any sequence $a_n$ such that $a_n \geq a_{n+1} \downarrow 0$, we have $$ \lim_{n\to\infty}a_n \cdot\mu(E_{a_n}) =\int \lim_{n\to\infty}\left(a_n\cdot 1_{\{|f(x)|>a_n\}}\right)d\mu =\int 0d\mu = 0. $$ This establishes $$\lim_{a\to 0} a\cdot\mu(E_a) =0.$$ Note: One can see that $a\cdot \mu(E_a)$ is the area of a rectangle below the graph $\{(x,|f(x)|) \;|\;x\in \mathbb{R}\}$ of $|f|$ which has finite area by the assumption. This observation is the motivation of this approach.
How to find the equation of the normal line to the surface S
Wiki: If a (possibly non-flat) surface $S$ is parameterized by a system of curvilinear coordinates $x(s, t)$, with $s$ and $t$ real variables, then a normal is given by the cross product of the partial derivatives $${\partial \mathbf{x} \over \partial s}\times {\partial \mathbf{x} \over \partial t}$$ $$\frac{\partial f} {\partial u}=(2,2u,3u^2)$$ $$\frac{\partial f} {\partial v}=(-1,2v,-3v^2)$$ The cross product is: $$(-6 u^2 v-6 u v^2, -3 u^2+6 v^2, 2 u+4 v)$$ Since the point $(3,5,7)$ lies on the surface, it must satisfy: $$2u-v=3$$ $$u^2+v^2=5$$ $$u^3-v^3=7$$ whose solution is $u=2$ and $v=1$. So the normal is $(-36,-6,8)$ and the equation of the line is: $$p(t)=(3,5,7)+t\cdot (-36,-6,8)$$
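The same computation in sympy, with the parametrization $f(u,v)=(2u-v,\,u^2+v^2,\,u^3-v^3)$ read off from the partial derivatives above:

    import sympy as sp

    u, v = sp.symbols('u v')
    f = sp.Matrix([2*u - v, u**2 + v**2, u**3 - v**3])
    normal = f.diff(u).cross(f.diff(v))
    print(normal.T)                       # the symbolic cross product
    print(normal.subs({u: 2, v: 1}).T)    # [-36, -6, 8] at the point (3, 5, 7)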
Prob. 22, Chap. 5 in Baby Rudin: Fixed Points of Real Functions
All but (d) look correct to me. You don't ever actually find a fixed point even with the simplest of contractions $f(x) = \frac{1}{2}x$ and starting x value $x_1 = 1$: you only converge to a fixed point.
If $a,b,c>0, a+b+c=3$, minimize $\frac{2-a^3}{a}+\frac{2-b^3}{b}+\frac{3-c^3}{c}$
According to Wolfram Alpha, this expression has a local minimum of approximately 3.87713 at a=0.865495, b=0.865495, c=1.26901, which is consistent with a+b+c=3. This should be the minimum value required in the question, but I am not sure how to obtain it without computational tools.
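The number is easy to reproduce with a constrained optimizer (note the asymmetric third term $\frac{3-c^3}{c}$):

    import numpy as np
    from scipy.optimize import minimize

    def obj(p):
        a, b, c = p
        return (2 - a**3)/a + (2 - b**3)/b + (3 - c**3)/c

    res = minimize(obj, x0=[1.0, 1.0, 1.0],
                   constraints={'type': 'eq', 'fun': lambda p: p.sum() - 3},
                   bounds=[(1e-9, None)] * 3)
    print(res.x, res.fun)    # ~[0.8655, 0.8655, 1.2690], ~3.87713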
Convergence in probability of random probability measures
Since $\Vert f \Vert < M$ for some $M>0$, we can find $k>0$ such that $B^k \geq M$, and $$f(x) \, I_{\vert x \vert > B} \leq M \, I_{\vert x \vert > B} \leq B^k \, I_{\vert x \vert > B} \leq \vert x \vert^k \, I_{\vert x \vert > B} \quad \mbox{ for any }x\in\mathbb R.$$ Thus, $$ P( \langle L_N , f \, I_{\vert x \vert > B} \rangle > \epsilon ) \leq P( \langle L_N , \vert x \vert^k \, I_{\vert x \vert > B} \rangle > \epsilon ),$$ and because $\langle \sigma, f \, I_{\vert x \vert > B} \rangle = 0$ we can write $$ P( \vert \langle L_N, f \rangle - \langle \sigma, f \rangle \vert > \epsilon ) \leq P( \vert \langle L_N, f \, I_{\vert x \vert \leq B} \rangle - \langle \sigma, f \, I_{\vert x \vert \leq B} \rangle \vert > \epsilon/2 ) + P( \vert \langle L_N, f \, I_{\vert x \vert > B} \rangle \vert > \epsilon/2 )$$ $$ \leq P( \vert \langle L_N, g \rangle - \langle \sigma, g \rangle \vert > \epsilon/2 ) + P( \langle L_N , \vert x \vert^k \, I_{\vert x \vert > B} \rangle > \epsilon/2 )$$ for $g$ supported in $[-B,B]$, and some $k>0$.
Surprising applications of topology
Francis Su described in 1999 ("Rental Harmony: Sperner's Lemma in Fair Division", Amer. Math. Monthly, 106, 1999, 930-42) how to apply Sperner's Lemma---which says that every so-called Sperner coloring of a triangulation of an $n$-simplex contains a cell colored with a complete set of colors---to produce a list of variously sized rents for rooms in a shared house that are fair in a certain sense that accounts for all roommates' preferences. See the column "To Divide the Rent, Start With a Triangle", (New York Times 2014 April 28) for an interesting interactive tool that illustrates the algorithm that exploits the lemma.
Why is $ab(\frac1a)b^2cb^{-3} = c$?
First off, associativity means we don't need parentheses, which is good, because there are none given to us. Now, let's use the definition of exponents to get $$ ab\frac1abbc\frac1b\frac1b\frac1b $$ Then we use commutativity to arrange things alphabetically. This gives us $$ a\frac1abbb\frac1b\frac1b\frac1bc $$ The definition of fraction gives $a\frac1a=1$ (and similarly for $b$). We get $$ 1\cdot1\cdot1\cdot1\cdot c $$ And finally, by definition of $1$, this simplifies to $c$.
radius of convergence of a power series
It seems mostly OK (at a quick glance), but there are a few minor issues. Don't write $|a_n/a_{n+1}|=(\dots)=\lim (\dots)$, because the elements of the sequence are not equal to the limit of the sequence. Either you write "lim" in each step, like $\lim (\dots) =\lim (\dots) = A$, or else use the arrow notation $(\dots) = (\dots) \to A$. (But don't mix the two notations as many people do; that is, don't write "$\lim(\dots) \to A$"!) Write $x\in [2,6]$, not $x=(2,6)$. Also: $x \in (-19,13]$ (the smallest number first). In the last problem, you've made a mistake: $(n+1)!/(n+2)! = 1/(n+2)$, not $(n+1)/(n+2)$.
Existence of invariant measures on cosets
You need $H$ to be closed in $G$. Even that is not enough, though, as there is a rather technical condition involving the modular functions of $G$ and $H$. In particular, you want $\Delta_G \restriction H = \Delta_H$. See here for more information online, or Theorem 2.51 in Folland's "A Course in Abstract Harmonic Analysis" (page 62 of my edition). That entire section (2.6, "Homogenous Spaces") will probably be of interest. I hope this helps ^_^
Inequality involving supremum and integral
For fixed $t\in[0,1]$ we have $f(x,t)\leq\sup_{t\in[0,1]}f(x,t)$; taking integrals on both sides we get $\displaystyle\int_{{\bf{R}}}f(x,t)dx\leq\int_{{\bf{R}}}\sup_{t\in[0,1]}f(x,t)dx$, and then taking the supremum on the left over all $t\in[0,1]$ we get the result. One should note that $\sup_{t\in[0,1]}f(x,t)$ is not necessarily measurable, so the assumption about this measurability should be made.
Rigorous proof that the monotone sequence diverges.
Hint: Try using $f(a_{n-1})$ instead of $f(0)$.
"For each given real number s, find a real number $t$ in the interval $[0,2\pi)$ so that the point on the unit circle $P(t) = P(s)$"
For each point $(x,y)$ on the unit circle in $\mathbb R^2$ you can find a unique $t\in [0,2\pi)$ such that $$ (x,y) = (\cos(t),\sin(t)). $$ This is what is meant by the point $P(t)$ on the unit circle. So, if you are given $s\in\mathbb R$, you find $k\in\mathbb Z$ such that $s + 2k\pi\in [0,2\pi)$. Call this number $t$. Since $\cos$ and $\sin$ are $2\pi$-periodic, you have $P(s) = P(t)$.
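In code this reduction is one line, since Python's % operator already returns a representative in $[0,2\pi)$:

    import math

    def reduce_angle(s):
        return s % (2 * math.pi)    # t in [0, 2*pi) with t = s + 2*k*pi

    s = -7.5
    t = reduce_angle(s)
    print(t)
    print(math.isclose(math.cos(s), math.cos(t)), math.isclose(math.sin(s), math.sin(t)))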
Minimizing univariate quadratic via gradient descent — choosing the step size
The way you choose $\alpha$ depends, in general, on the information you have about your function. For example, for the function in your example, it is $$ f'(x) = 4x - 5 $$ and $f''(x) = 4$, so $f'$ is Lipschitz continuous with Lipschitz constant $L=4$. You should then choose $\alpha$ to be smaller than $1/L$, so, in this case, $\alpha<0.25$. In general, you might not know $L$. Then you have to resort to a linesearch method (e.g., exact linesearch or Armijo's rule). You can read Chapter 3 in the book by Nocedal and Wright.
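A minimal sketch for the example above, where $f'(x)=4x-5$, $L=4$, and the minimizer is $x^*=5/4$:

    # fixed-step gradient descent; any alpha < 1/L = 0.25 converges here
    alpha, x = 0.2, 0.0
    for _ in range(100):
        x -= alpha * (4*x - 5)
    print(x)    # ~1.25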
Connectivity, Path Connectivity and Differentiability
Proposition: Let $X$ be a locally path-connected space (in particular, any open subset of $\mathbb{R}^n$ has this property). Then every path component of $X$ is a connected component of $X$. In particular, $X$ is connected if and only if it is path-connected. Proof. Let $U$ be a path component of $X$. If $x \in U$, then there is an open neighborhood $V$ containing $x$ which is path-connected, hence $V \subseteq U$. It follows that $U$ is open. If $x \in \bar{U}$, again choose an open neighborhood $V$ containing $x$ which is path-connected. By assumption, this neighborhood intersects $U$, so it follows that $V \subseteq U$. Hence $U$ is closed. It follows that $U$ and its complement in its connected component $C$ are disjoint open sets whose union is $C$, hence that $U = C$. Proposition: Let $X$ be an open subset of $\mathbb{R}^n$ (or a smooth manifold). If two points $a, b$ are connected by a path in $X$, then they are connected by a smooth path in $X$. Proof. Let $\alpha : [0, 1] \to X$ be such a path. Choose for each point $\alpha(t)$ an open ball $U_t$ containing $\alpha(t)$ and contained in $X$. By compactness, the $U_t$ have a finite subcover $U_{t_1}, ... U_{t_n}$. Now it is not hard to explicitly write down a smooth path from $a$ to $b$ going through the balls $U_{t_i}$. (For example, it is trivial to write down a piecewise-linear path with this property, and then one just has to deform this path slightly in a neighborhood of each of its points of nondifferentiability using a smooth bump function.)
Examples of short maps (Lipschitz functions with $k=1$) with exactly 2 fixed points.
No such $f:\mathbb{R}\to\mathbb{R}$ exists. Indeed, suppose $f:\mathbb{R}\to\mathbb{R}$ is short and $f(a)=a$ and $f(b)=b$ with $a<b$. Then for any $c\in (a,b)$, $f(c)=c$, since if $d<c$ then $|d-b|>|c-b|$ and if $d>c$ then $|d-a|>|c-a|$. So if $f$ fixes two points, it must also fix the entire interval between them. However, there are many examples in other metric spaces. For instance, taking $S^1\subset\mathbb{C}$ with the induced metric from $\mathbb{C}$ (or the arc length metric), $f(z)=\bar{z}$ is an isometry $S^1\to S^1$ which fixes only $1$ and $-1$. Or more trivially, if $X$ is a metric space with two points, then the identity map works.
Where are the formulas for frequency $\omega=\sqrt{\lambda_1 \lambda_2}$, period $T=2\pi/\sqrt{\lambda_1 \lambda_2}$
A matrix $$\pmatrix{0&-u\\v&0},~~~u,v>0,$$ has eigenvalues $λ_{1,2}=±iω$ where $ω=\sqrt{uv}$. As a conjugate pair on the imaginary axis, or as a consequence of Vieta's formulas for the characteristic equation, you also get back $λ_1λ_2=ω^2=uv$, but that should only be a secondary insight. The linearized system $$ \dot x = -uy\\ \dot y = vx $$ can be transformed into the scalar second order equation $\ddot x=-uvx=-ω^2x$, which is a harmonic oscillator with frequency $ω$ and thus period $2\pi/ω$.
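A numerical sanity check with illustrative values $u=2$, $v=3$: integrating over one predicted period $T=2\pi/\sqrt{uv}$ returns the trajectory to its start.

    import numpy as np
    from scipy.integrate import solve_ivp

    u, v = 2.0, 3.0
    T = 2*np.pi / np.sqrt(u*v)
    sol = solve_ivp(lambda t, s: [-u*s[1], v*s[0]], (0, T), [1.0, 0.0],
                    rtol=1e-10, atol=1e-12)
    print(sol.y[:, -1])    # ~[1, 0], the initial state again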
Lower bound for homogeneous polynomials
I'll add the answer to this post and accept it. The function $\xi\mapsto\vert P(\xi)\vert$ is continuous on the sphere, which is compact (finite dimension context). Furthermore, this function does not have any zero on the sphere. Thus, its infimum (and even minimum) $C$ is positive and if $\xi\in\mathbb{R}^n\setminus\{0\}, \vert P(\frac{\xi}{\Vert\xi\Vert})\vert\geqslant C$ (if $\xi=0$, we have the inequality as the right-hand side is $0$). As $P$ is $d$-homogeneous, we thus have $$ \begin{equation} \vert P(\xi)\vert\geqslant C\Vert\xi\Vert^d \end{equation} $$ for all $\xi\in\mathbb{R}^n$.
How do I prove this vector calculus relationship?
I think your expansions might not be quite correct. Following this answer here, we use the Levi-Civita symbol $\varepsilon_{ijk}$ with the following identities: \begin{align*} \varepsilon_{ijk}&=\varepsilon_{jki}=\varepsilon_{kij} \\ \varepsilon_{ijk}\,\varepsilon_{ilm}&=\delta_{jl}\delta_{km}-\delta_{jm}\delta_{kl}\\ (F\times G)_i&=\varepsilon_{ijk}F_jG_k. \end{align*} Here, we're using the Einstein summation convention all over the place. So, let's see what we have: \begin{align*} [\nabla\times(\vec{m}\times\nabla\phi)+\nabla(\vec{m}\cdot\nabla\phi)]_i&=\varepsilon_{ijk}\,\partial_j\,(\vec{m}\times\nabla\phi)_k+\partial_i\,m_n\,\partial_n\,\phi \\ &=\varepsilon_{ijk}\,\partial_j\,\varepsilon_{klm}\,m_l\,(\nabla\phi)_m+\partial_i\,m_n\,\partial_n\,\phi \\ &=\varepsilon_{ijk}\,\varepsilon_{klm}\,m_l\,\partial_j\,\partial_m\,\phi+m_n\,\partial_i\,\partial_n\,\phi \\ &=\varepsilon_{kij}\,\varepsilon_{klm}\,m_l\,\partial_j\,\partial_m\,\phi+m_n\,\partial_i\,\partial_n\,\phi \\ &=(\delta_{il}\delta_{jm}-\delta_{im}\delta_{jl})\,m_l\,\partial_j\,\partial_m\,\phi+m_n\,\partial_i\,\partial_n\,\phi \\ &=\delta_{il}\delta_{jm}\,m_l\,\partial_j\,\partial_m\,\phi-\delta_{im}\delta_{jl}\,m_l\,\partial_j\,\partial_m\,\phi+m_n\,\partial_i\,\partial_n\,\phi \\ &=m_i\,\partial_m\,\partial_m\,\phi-m_j\,\partial_j\,\partial_i\,\phi+m_n\,\partial_i\,\partial_n\,\phi \\ &=[m_i\,\partial_m\,\partial_m-m_j\,\partial_j\,\partial_i+m_n\,\partial_i\,\partial_n]\,\phi. \end{align*} Here's where we can see how we get zero out of it all. We note that any repeated indices here are summed over, and thus the summed indices are dummy. We rewrite using the same ones, therefore, to obtain $$[\nabla\times(\vec{m}\times\nabla\phi)+\nabla(\vec{m}\cdot\nabla\phi)]_i= [m_i\,\partial_j\,\partial_j-m_j\,\partial_j\,\partial_i+m_j\,\partial_i\,\partial_j]\,\phi.$$ By Clairaut's Theorem, $\partial_i\,\partial_j\,\phi=\partial_j\,\partial_i\,\phi,$ so the second two terms vanish, leaving $$[\nabla\times(\vec{m}\times\nabla\phi)+\nabla(\vec{m}\cdot\nabla\phi)]_i= m_i\,\partial_j\,\partial_j\,\phi.$$ But, because we were given that $\nabla^2\phi=0$, this last expression, also, is zero. QED.
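The identity is easy to spot-check with sympy's vector module, for a constant $\vec{m}$ (as assumed when $m_l$ was pulled through $\partial_j$ above) and a concrete harmonic $\phi$:

    from sympy.vector import CoordSys3D, gradient, curl

    N = CoordSys3D('N')
    phi = N.x**2 - N.y**2                 # harmonic: its Laplacian is 0
    m = 3*N.i + 5*N.j - 2*N.k             # a constant vector field
    expr = curl(m.cross(gradient(phi))) + gradient(m.dot(gradient(phi)))
    print(expr)                           # 0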
Solving an equation with floor function
Since $x$ must be a multiple of $m$, write $x=my$. Then the equation becomes $$y=\left\lfloor\sqrt{\frac {my}k}\right\rfloor$$ and equivalent to $$y\le \sqrt{\frac {my}k}<y+1,$$ i.e. $$y^2\le \frac {my}k <y^2+2y+1$$ or (using $y\ne 0$, though in fact $x=0$ is a trivial solution) $$y\le \frac {m}k <y+2+\frac1y\le y+3.$$ Therefore $y=\left\lfloor \frac mk\right\rfloor$, $y=\left\lfloor \frac mk\right\rfloor-1$, and in rare cases $y=\left\lfloor \frac mk\right\rfloor-2$ are the solutions - the latter only if $m\ge 3k$ and $\frac mk-\left\lfloor \frac mk\right\rfloor<\frac 1{\left\lfloor \frac mk\right\rfloor-2}$.
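A brute-force check with illustrative values $m=50$, $k=7$, chosen so that even the rare third candidate appears (since $50\ge 3\cdot 7$ and $\frac{50}{7}-7=\frac 17<\frac 15$):

    import math

    m, k = 50, 7
    sols = [y for y in range(1, 4*m) if y == int(math.sqrt(m*y / k))]
    print(sols)                  # [5, 6, 7]: floor(m/k)-2, floor(m/k)-1, floor(m/k)
    print([m*y for y in sols])   # the corresponding x = m*y (besides the trivial x = 0)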
Find $\iint_Ay\,dA$, where $A$ is defined by $z=x+y^2$,$0\le x\le 1$ and $0\le y\le2$
Running the following code in Mathematica:

    z[x_, y_] := x + y^2
    dA = Sqrt[1 + D[z[x, y], x]^2 + D[z[x, y], y]^2];
    Integrate[y*dA, {x, 0, 1}, {y, 0, 2}]

we can get the following result: (13 Sqrt[2])/3
Finding the order of the product of disjoint cycles in $S_n$.
I have started from the stage where I got stuck in proving the above lemma. It is easy to show, as I mentioned in the edit, that $\text {Ord}\ (ab)\ \big |\ \text {lcm}\ \left (\text {Ord}\ (a), \text {Ord}\ (b) \right ).$ To prove equality we need the other direction, which is not true for arbitrary finite groups even if $a$ and $b$ commute. Fortunately it is true in our case. Why? Let's discuss. Before proving the required result I noticed that if we can prove the following lemma we are through. Lemma $:$ Let $\sigma, \tau \in S_n$ be two disjoint cycles. Then $\text {Ord}\ (\sigma \tau ) = \text {lcm}\ \left (\text {Ord}\ (\sigma), \text {Ord}\ (\tau) \right ).$ For proving the equality in the lemma let us first introduce the following definition. Let $\rho = (a_1,a_2, \cdots , a_r) \in S_n$ be an $r$-cycle. Then the support of $\rho$ is denoted by $\text {Supp}\ (\rho)$ and it is defined as $\text {Supp}\ (\rho) = \{a_1,a_2, \cdots , a_r \}.$ So $\text {Supp}\ (\rho)$ consists of those points in $\{1,2, \cdots, n \}$ which are disturbed by the operation of $\rho.$ Observation $:$ If $\rho,\rho' \in S_n$ are two cycles inverse to each other then $\text {Supp}\ (\rho) = \text {Supp}\ (\rho').$ (Because inverse cycles fix the same points.) Now let us take two disjoint cycles $\sigma , \tau \in S_n.$ To the contrary let us assume that $\text {Ord}\ (\sigma \tau) = m < \text {lcm}\ \left (\text {Ord}\ (\sigma), \text {Ord}\ (\tau) \right ).$ Then it is easy to see that $m\ \bigg |\ \text {lcm}\ \left (\text {Ord}\ (\sigma), \text {Ord}\ (\tau) \right ).$ Let us assume that $\sigma^m \neq \text {id}$ and $\tau^m \neq \text {id}$, for otherwise $m = \text {lcm}\ \left (\text {Ord}\ (\sigma), \text {Ord}\ (\tau) \right ),$ a contradiction to our assumption. Since fixed points of $\sigma$ and $\tau$ are fixed points of $\sigma^m$ and $\tau^m$ respectively, it follows that $\text {Supp}\ (\sigma^m) \subseteq \text {Supp}\ (\sigma)$ and $\text {Supp}\ (\tau^m) \subseteq \text {Supp}\ (\tau).$ Since $\sigma$ and $\tau$ are disjoint cycles we have $\text {Supp}\ (\sigma) \cap \text {Supp}\ (\tau) = \varnothing.$ Hence $\text {Supp}\ (\sigma^m) \cap \text {Supp}\ (\tau^m) = \varnothing.\ \ \ \ (*)$ Now since $\text {Ord}\ (\sigma \tau) = m$ we have $$\begin{align*} (\sigma \tau)^m & = \text {id} \implies \sigma^m \tau^m = \text {id} \implies \sigma^m = (\tau^m)^{-1} \end{align*}$$ So $\sigma^m$ is the inverse of $\tau^m.$ So from our Observation it follows that $\text {Supp}\ (\sigma^m) = \text {Supp}\ (\tau^m).$ Since $\sigma^m \neq \text {id}$ and $\tau^m \neq \text {id}$ it follows that $\text {Supp}\ (\sigma^m) = \text {Supp}\ (\tau^m) \neq \varnothing$ and hence $\text {Supp}\ (\sigma^m) \cap \text {Supp}\ (\tau^m) \neq \varnothing,$ which contradicts $(*).$ That implies either $\sigma^m = \text {id}$ or $\tau^m = \text {id}.$ But if one of $\sigma^m$ or $\tau^m$ is the identity then by using the equation $\sigma^m \tau^m = \text {id}$ we find that the other is also the identity. So we must have $\sigma^m = \tau^m = \text {id}.$ This implies $\text {Ord}\ (\sigma)\ \big |\ m$ and $\text {Ord}\ (\tau)\ \big |\ m.$ But it means that $\text {lcm}\ \left ( \text {Ord}\ (\sigma),\text {Ord}\ (\tau) \right )\ \bigg |\ m,$ which is a contradiction to our assumption that $m < \text {lcm}\ \left (\text {Ord}\ (\sigma), \text {Ord}\ (\tau) \right ).$ Hence our assumption is false.
So $m \geq \text {lcm}\ \left (\text {Ord}\ (\sigma), \text {Ord}\ (\tau) \right ).$ But since $m\ \bigg |\ \text {lcm}\ \left (\text {Ord}\ (\sigma), \text {Ord}\ (\tau) \right )$ it follows that $m \leq \text {lcm}\ \left (\text {Ord}\ (\sigma), \text {Ord}\ (\tau) \right ).$ Hence combining these two inequalities it follows that $m = \text {lcm}\ \left (\text {Ord}\ (\sigma), \text {Ord}\ (\tau) \right ).$ QED
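A quick empirical confirmation of the lemma with sympy's permutation arithmetic, using a disjoint $3$-cycle and $4$-cycle in $S_{12}$:

    from math import lcm
    from sympy.combinatorics import Permutation

    sigma = Permutation([[0, 1, 2]], size=12)      # a 3-cycle
    tau   = Permutation([[3, 4, 5, 6]], size=12)   # a disjoint 4-cycle
    print((sigma * tau).order(), lcm(sigma.order(), tau.order()))   # 12 12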
Distribution equivalence of $X/Y$ and $X/|Y|$
$$\begin{array}{ccccc} P\left(\frac{X}{Y}\leq z\right) & = & P\left(Y>0\wedge X\leq zY\right) & + & P\left(Y<0\wedge X\geq zY\right)\\ & = & P\left(Y>0\wedge X\leq zY\right) & + & P\left(Y<0\wedge-X\geq zY\right)\\ & = & P\left(Y>0\wedge X\leq zY\right) & + & P\left(Y<0\wedge X\leq-zY\right)\\ & = & P\left(Y>0\wedge X\leq z|Y|\right) & + & P\left(Y<0\wedge X\leq z|Y|\right)\\ & = & P\left(\frac{X}{|Y|}\leq z\right) \end{array}$$ Note that only the symmetry of $X$ and $P(Y=0)=0$ are used. edit (inspired by comment of @Did): Crucial is of course the equality $P\left(Y<0\wedge X\geq zY\right)=P\left(Y<0\wedge-X\geq zY\right)$, which is actually true because $\langle X,Y\rangle$ and $\langle -X,Y\rangle$ have the same distribution. This follows from:$$P(X\leq x\wedge Y\leq y)=P(X\leq x)P(Y\leq y)=P(-X\leq x)P(Y\leq y)=P(-X\leq x\wedge Y\leq y)$$ Here it is used that $X$ and $Y$ are independent and that $X$ and $-X$ have the same distribution.
Correlation between three variables question
Here's an answer to the general question, which I wrote up a while ago. It's a common interview question. The question goes like this: "Say you have X,Y,Z three random variables such that the correlation of X and Y is something and the correlation of Y and Z is something else, what are the possible correlations for X and Z in terms of the other two correlations?" We'll give a complete answer to this question, using the Cauchy-Schwarz inequality and the fact that $\mathcal{L}^2$ is a Hilbert space. The Cauchy-Schwarz inequality says that if x,y are two vectors in an inner product space, then $$\lvert\langle x,y\rangle\rvert \leq \sqrt{\langle x,x\rangle\langle y,y\rangle}$$ This is used to justify the notion of an ''angle'' in abstract vector spaces, since it gives the constraint $$-1 \leq \frac{\langle x,y\rangle}{\sqrt{\langle x,x\rangle\langle y,y\rangle}} \leq 1$$ which means we can interpret it as the cosine of the angle between the vectors x and y. A Hilbert space is a complete inner product space, here infinite dimensional. The important thing for this post is that in a Hilbert space the inner product allows us to do geometry with the vectors, which in this case are random variables. We'll take for granted that the space of mean 0 random variables with finite variance is a Hilbert space, with inner product $\mathbb{E}[XY]$; we work with the unit vectors (variance 1) in it. Note that, in particular $$\frac{\langle X,Y\rangle}{\sqrt{\langle X,X\rangle\langle Y,Y\rangle}} = \text{Cor}(X,Y)$$ This often leads people to say that ''correlations are cosines'', which is intuitively true, but not formally correct, as they certainly aren't the cosines we naturally think of (this space is infinite dimensional), but all of the laws hold (like the Pythagorean theorem and the law of cosines) if we define them to be the cosines of the angle between two random variables, whose lengths we can think of as their standard deviations in this vector space. Because this space is a Hilbert space, we can do all of the geometry that we did in high school, such as projecting vectors onto one another, doing orthogonal decomposition, etc. To solve this question, we use orthogonal decomposition, which is often called the ''uncorrelation trick'' in statistics and consists of writing a random variable as a function of another random variable plus a random variable that is uncorrelated with the second random variable. This is especially useful in the case of multivariate normal random variables, when two components being uncorrelated implies independence. Okay, let's suppose that we know that the correlation of X and Y is $p_{xy}$, the correlation of Y and Z is $p_{yz}$, and we want to know the correlation of X and Z, which we'll call $p_{xz}$. Note that we don't lose generality by assuming mean 0 and variance 1 as scaling and translating vectors doesn't affect their correlations. We can then write that: $$X = \langle X,Y\rangle Y + O^X_Y$$ $$Z = \langle Z,Y\rangle Y + O^Z_Y$$ where $\langle \cdot,\cdot\rangle$ stands for the inner product on the space and the $O$ are uncorrelated with Y. Then, we take the inner product of $X,Z$ which is the correlation we're looking for, since everything has variance 1. We have that $$\langle X,Z\rangle = p_{xz} = \langle p_{xy}Y+O^X_Y,p_{yz}Y+O^Z_Y\rangle = p_{xy}p_{yz}+\langle O^X_Y,O^Z_Y\rangle$$ since the variance of Y is 1 and the other terms of this bilinear expansion are orthogonal and hence have 0 covariance.
We can now apply the Cauchy-Schwarz inequality to the last term above to get that $$p_{xz} \leq p_{xy}p_{yz} + \sqrt{(1-p_{xy}^2)(1-p_{yz}^2)}$$ $$p_{xz} \geq p_{xy}p_{yz} - \sqrt{(1-p_{xy}^2)(1-p_{yz}^2)}$$ where the fact that $$\langle O^X_Y,O^X_Y\rangle = 1-p_{xy}^2$$ comes from the equation setting the variance of X equal to 1 or $$1 = \langle X,X\rangle = \langle p_{xy}Y + O^X_Y,p_{xy}Y+O^X_Y\rangle = p_{xy}^2 + \langle O^X_Y,O^X_Y\rangle$$ and the exact same thing can be done for $O^Z_Y$. So we have our answer. Sorry this was so long.
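The same bounds can be double-checked numerically: $p_{xz}$ is feasible exactly when the $3\times 3$ correlation matrix is positive semidefinite. The values $p_{xy}=0.6$, $p_{yz}=0.3$ below are illustrative.

    import numpy as np

    p_xy, p_yz = 0.6, 0.3                      # illustrative values
    r = np.sqrt((1 - p_xy**2) * (1 - p_yz**2))
    lo, hi = p_xy*p_yz - r, p_xy*p_yz + r
    for p_xz in (lo, hi, lo - 0.01, hi + 0.01):
        R = np.array([[1, p_xy, p_xz], [p_xy, 1, p_yz], [p_xz, p_yz, 1]])
        print(round(p_xz, 4), np.linalg.eigvalsh(R).min() >= -1e-12)
    # the endpoints are feasible; stepping outside them is not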
What are the simplices?
I'm assuming that in $\Delta^n\otimes X_n$ the tensor product is the ordinary product, as it is on nLab. This functor is a colimit, and as a colimit, I think you'll struggle to say anything interesting about its simplices except by just taking the construction of the colimit as a coequalizer, and bashing it out. So let's do that. Taking the definition gives you that $i$-simplices of $|X|$ are the quotient of the set of all pairs $(a :i\to n, b_n)$ of $i$-simplices of $\Delta^n\times X_n$ by the smallest equivalence relation generated by $(a,f^*b_m)\sim (f\circ a,b_m)$ where $b_m\in X_m(i)$ and $f:n\to m$ is a morphism in $\Delta$. Note then that we always have $(a,b_n) \sim (\mathrm{id}_i,a^*b_n)$. Thus the $i$-simplices will be $X_i(i)$ for all $i$. I assume therefore that $|X|$ should be the composite of $X$ with the diagonal functor $D:\Delta^{\text{op}}\to \Delta^{\text{op}}\times \Delta^{\text{op}}$. To prove this, we could do the following $$ \newcommand\of[1]{\left({#1}\right)} \newcommand\Set{\mathbf{Set}} \newcommand\sSet{\mathbf{sSet}} \begin{align} \sSet(|X|,S) &= \sSet\of{ \int^{n\in\Delta} \Delta^n\times X_n ,S } \\ &= \int_{n\in\Delta} \sSet(\Delta^n\times X_n,S) \\ &= \int_{n\in\Delta} \int_{m\in\Delta} \Set\of{ X_n(m)\times \Delta(m,n), S(m) } \\ &= \int_{n\in\Delta} \int_{m\in\Delta} \Set\of{ \Delta(m,n), \Set\of{ X_n(m), S(m) } } \\ &= \int_{n\in\Delta} \Set\of{ X_n(n), S(n) } \\ &= \sSet(X\circ D,S). \end{align} $$
Show that there exists a basis $\{y_1,\dots,y_n\}$ of $U$ such that the projection of $y_i$ on $\langle u \rangle$ is $2u$ for $i=1,\dots,n$
Hint : If $U=\mathbb R^n$ and $\{y_1,...,y_n\}$ is the standard basis, then the projection of $y_i$ on $u=(1,1,...,1)$ is $$\frac{u\cdot y_i}{u\cdot u}u=\left(\frac{1}{n},...,\frac{1}{n}\right)$$ More generally, for any $\alpha \in \mathbb R$ and $u \in U$ (an inner product space) there exists a basis of $U$ such that the projection of every element of this basis on $u$ is $\alpha u$.
Show that $\psi(n)$ has finitely many roots
Yes, Rosser and Schoenfeld showed (formulas 3.41 and 3.42) that $\phi$ (which is, on average, linear) is never much worse than that: namely $$ \phi(n) \geq \frac{n}{e^\gamma \log \log n + \frac{2.50637}{\log \log n}} $$ Here the logarithm is base $e$ and the constant $2.50637$ is chosen to give equality at (and only at) $$ n = 2 \cdot 3 \cdot 5 \cdot 7 \cdot 11 \cdot 13 \cdot 17 \cdot 19 \cdot 23 = 223092870 $$ There are also upper bounds on $\pi(x),$ for example formula 3.2.
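A quick look at the claimed equality point with sympy (using $\gamma\approx 0.5772156649$); the two values agree up to the rounding in the constant $2.50637$:

    from math import exp, log
    from sympy import primorial, totient

    n = primorial(9)                    # 2*3*5*...*23 = 223092870
    gamma = 0.5772156649
    bound = n / (exp(gamma) * log(log(n)) + 2.50637 / log(log(n)))
    print(n, int(totient(n)), bound)    # phi(n) and the bound nearly coincide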
Certain algebraic dependence of two polynomials
Let $d=t^2$, $u=t+1$, and $v=t-1$. Then $U+V=2t^3$ and $U-V=2t^2$, so $U$ and $V$ satisfy $H(U,V)=0$ where $H(x,y)=2(x+y)^2-(x-y)^3$. In fact, such a nonzero $H$ always exists. Indeed, since $U$ and $V$ are algebraically dependent there exists $H\neq 0$ such that $H(U,V)=0$. Now just observe that the constant term of such an $H$ must be $0$, since otherwise $H(U,V)$ would be nonzero mod $d$.
Show that $\sum_{g \in G} \chi(g)=0$.
Relate $\sum_{g\in G}\chi(g) $ to $\sum_{g\in G}\chi(ag)$ for suitable $a\in G$.
Finding indicated trigonometric value in specified quadrant
$$\csc \theta = \frac 1{\sin\theta} = -\frac {10}{3} \implies \sin\theta = -\frac 3{10} = \frac{\text{opposite}}{\text{hypotenuse}}$$ $$\tan \theta = \dfrac{\sin\theta}{\cos\theta} = \frac{\text{opposite}}{\text{adjacent}}$$ All you need is to find $\cos\theta$ in the third quadrant to compute tangent. Use the right triangle that $\theta$ forms with the x-axis and the Pythagorean Theorem: $$3^2 + \text{adjacent}^2 = 10^2 \implies \text{adjacent} = \sqrt{91}$$ or else use the identity: $$\sin^2\theta + \cos^2\theta = 1$$ knowing that $\cos \theta < 0$ in the third quadrant. However, you are also correct having used your method: $$\tan\theta = \dfrac 3{\sqrt {91}} = \dfrac{3\sqrt{91}}{91}$$
Motivation behind the Archimedean norm on number fields.
Here is one motivation for considering Archimedean valuations on an ''equal'' level with the non-Archimedean ones, from a purely arithmetic viewpoint. When you do this, you get nice results like the following product formula: $$ \text{for }x\in \mathbb{Q},\qquad \prod_v |x|_v = |x|_\infty\prod_p |x|_p = 1.$$ The first product is taken over all ''places'' $v$ in $\mathbb{Q}$, where place means equivalence class of absolute values (choosing appropriate normalization), and the second product above uses Ostrowski's result that $\mathbb Q$ has a single Archimedean place, denoted $|x|_\infty$, and that all other places are the non-Archimedean ones corresponding to prime integers. One way to interpret this product formula is that you actually don't need embeddings into $\mathbb C$ to figure out the Archimedean norm on $\mathbb Q$ -- you just have to multiply together all the non-Archimedean norms, and then take the reciprocal. The ''inside'' and ''outside'' views actually mirror each other. For number fields $K$ of higher degree, the product formula $\prod_{v\in K} |x|_v = 1$ still holds, but since you generally have more than one Archimedean norm (sometimes denoted $v|\infty$) this does not allow you to express a single Archimedean norm in terms of the non-Archimedean ones. However, it is still true that taken together, the product over non-Archimedean norms gives you equivalent information (that is, the multiplicative inverse) to the product over Archimedean norms. You can think of this situation as closely analogous to the case of rational functions $k(T)$ over some algebraically closed ground field $k$. Assuming $k$ itself has no ''interesting'' norms, just the trivial one, then there are two types of norms on $k(T)$: the finite valuations $v_a((T-a)^n) = n$ for any $a\in k$, and the degree valuation $v_\infty(f(T)) = - \deg(f)$ for any polynomial $f\in k[T]$. (Here the valuations correspond to norms via $|\cdot| = \exp(-v(\cdot))$.) Note that the finite valuation $v_a$ has the geometric interpretation of picking out the order of vanishing, as a zero or pole, of some function on the affine line $\mathbb A^1$, at the closed point $\mathfrak m_a = (T-a)$. We chose to denote the degree valuation by $v_\infty$ because there is also a perfectly nice geometric interpretation here: if we consider the projective line $\mathbb P^1$ as containing $\mathbb A^1$ and one additional point $\infty$, then $v_\infty = -\deg$ picks out the order of vanishing of a function on $\mathbb P^1$ at $\infty$, under the identification of $k(T)$ as the rational function field of both $\mathbb P^1$ and $\mathbb A^1$. Given a rational function $f\in k(T)$, summing the orders of zeros and poles of $f$ over all points in $\mathbb A^1$ could give you any integer (e.g. when $f = T^n$). However, if we take the same sum over all points in the compact curve $\mathbb P^1$, then the sum is always $0$. This corresponds to the product formula for $K = k(T)$: $$ \sum_{v\in k(T)} v(f) = 0 \quad \Leftrightarrow\quad \prod_{v\in k(T)} |f|_v = 1.$$ The ''geometric'' case $K = k(T)$ has one important advantage over the ''arithmetic'' one $K = \mathbb Q$. In $k(T)$, the infinite valuation $v_\infty$ is isomorphic to any of the finite ones $v_a$ in the sense that there is an automorphism of $k(T)$ sending one to the other, namely $T-a \mapsto \frac1{T-a}$. In $\mathbb Q$, however, there is no such automorphism which exchanges $v_\infty$ with any $v_p$, since $v_p$ is a discrete valuation while $v_\infty$ is not.
The geometric analogy suggests that by considering the Archimedean (or ''infinite'') place $v_\infty \in \mathbb Q$ in addition to the $p$-adic places, we are somehow compactifying an arithmetic curve. But the resulting compactification is not smooth and homogeneous like $\mathbb P^1$; rather, the arithmetic curve is highly singular at $\infty$.
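The product formula for $\mathbb Q$ is easy to verify in code; here is a small sketch for one rational:

    from fractions import Fraction
    from sympy import factorint

    x = Fraction(-360, 7)
    val = dict(factorint(abs(x.numerator)))           # v_p from the numerator
    for p, e in factorint(x.denominator).items():
        val[p] = val.get(p, 0) - e                    # minus v_p of the denominator
    prod = abs(float(x))                              # the Archimedean place
    for p, e in val.items():
        prod *= float(p) ** (-e)                      # |x|_p = p^(-v_p(x))
    print(prod)                                       # 1.0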
Linear algebra mapping question
If $A$ exists, $A$ satisfies $$A\begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix}= \begin{pmatrix} 1&0&0&0\\ 0&1&1&0\\ 0&0&0&1\\ 0&0&0&0\end{pmatrix}$$ So $$A= \begin{pmatrix} 1&0&0&0\\ 0&1&1&0\\ 0&0&0&1\\ 0&0&0&0 \end{pmatrix}$$
silly question about convergent sequences
Yes, $a_{n+1}$ is the $(n+1)$th term of the sequence, just like $a_n$ is the $n$th term. To answer your other question, it's because if $n \to +\infty$ then $n+1 \to +\infty$ also. So then $$ \lim_{n\to+\infty} s_{n+1} = \lim_{n+1 \to+\infty} s_{n+1}.$$ Now just make the substitution $m=n+1$.
Calculate $\sum_{n \in A} 2^{-n}$
We can approach with something similar to inclusion-exclusion. Let $2\Bbb N,3\Bbb N,5\Bbb N,7\Bbb N$ be the sets of natural numbers divisible by $2,3,5,7$ respectively. For ease of notation, let us let $s(E,F,\dots)$ be the summation over $E\cap F\cap\dots$, so for example $s(2\Bbb N, 3\Bbb N)$ would be $\sum\limits_{n\in 2\Bbb N\cap 3\Bbb N}2^{-n}$. We can see that to get the total sum over the elements which are in exactly two of the sets, we will have a good start by looking at $s(2\Bbb N,3\Bbb N)+s(2\Bbb N,5\Bbb N)+\dots+s(5\Bbb N,7\Bbb N)$, but we will have included things we didn't intend in doing so: elements which belong to three or more of the sets, counted several times. Correcting our count, then correcting our count again, we get a final sum of: $s(2\Bbb N,3\Bbb N)+s(2\Bbb N,5\Bbb N)+\dots+s(5\Bbb N,7\Bbb N)-3s(2\Bbb N,3\Bbb N,5\Bbb N)-\dots-3s(3\Bbb N,5\Bbb N,7\Bbb N)+6s(2\Bbb N,3\Bbb N,5\Bbb N,7\Bbb N)$ Now, recognize that for coprime $a,b,\cdots$ you have $s(a\Bbb N,b\Bbb N,\dots) = \sum\limits_{n=1}^\infty 2^{-nab\cdots} = \sum\limits_{n=1}^\infty (2^{ab\cdots})^{-n}=\dfrac{1}{2^{ab\cdots}-1}$ We get then our sum as being: $$\dfrac{1}{2^6-1}+\dfrac{1}{2^{10}-1}+\dfrac{1}{2^{14}-1}+\dfrac{1}{2^{15}-1}+\dfrac{1}{2^{21}-1}+\dfrac{1}{2^{35}-1} - \dfrac{3}{2^{30}-1}-\dfrac{3}{2^{42}-1}-\dfrac{3}{2^{70}-1}-\dfrac{3}{2^{105}-1}+\dfrac{6}{2^{210}-1}$$ The result is $\approx 0.01694256444264848\cdots$
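Both the inclusion-exclusion formula and a direct partial sum are easy to evaluate exactly:

    from fractions import Fraction

    pairs, triples = [6, 10, 14, 15, 21, 35], [30, 42, 70, 105]
    total = (sum(Fraction(1, 2**e - 1) for e in pairs)
             - 3*sum(Fraction(1, 2**e - 1) for e in triples)
             + 6*Fraction(1, 2**210 - 1))
    # direct check: n divisible by exactly two of 2, 3, 5, 7
    brute = sum(Fraction(1, 2**n) for n in range(1, 200)
                if sum(n % p == 0 for p in (2, 3, 5, 7)) == 2)
    print(float(total), float(brute))    # both ~0.016942564...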
mathematical analysis
By MVT $$f\left({k\over n}\right)-f\left({k\over n+1}\right)={kf'(\xi_{k,n})\over n(n+1)},\qquad{k\over n+1}<\xi_{k,n}<{k\over n}\\ \Rightarrow \sum_{k=1}^{n-1}\left[f\left({k\over n}\right)-f\left({k\over n+1}\right)\right]-f\left({n\over n+1}\right)=\sum_{k=1}^{n-1} {kf'(\xi_{k,n})\over n(n+1)}-f\left({n\over n+1}\right)\\ \Rightarrow x(n)-x(n+1)=\sum_{k=1}^{n-1} {kf'(\xi_{k,n})\over n(n+1)}-f\left({n\over n+1}\right)\\ \Rightarrow \lim_{n\to\infty} (x(n)-x(n+1))=\lim_{n\to\infty}\sum_{k=1}^{n-1}{kf'(\xi_{k,n})\over {n+1}}{1\over n}-f(1)\\ =\lim_{n\to\infty}\sum_{k=1}^{n-1}\xi_{k,n}f'(\xi_{k,n}){1\over n}-f(1)\\ =\int_0^1xf'(x)dx-f(1)=-\int_{0}^1f(x)dx$$ So the required limit is $$\int_{0}^1f(x)dx$$
Prove by Induction that every term of the following sequence is irrational
Assume that for some $n$, $x_n$ is rational. We know that $x_n=(3x_{n-1}+1)^{1/2}$. By algebra we have that $\frac{1}{3}(x^2_n-1)=x_{n-1}$ and so we have that $x_{n-1}$ is also rational. By iterating this argument $n-1$ times we find out that $x_1$ is rational. However, you’ve already noted this isn’t true. Thus $x_n$ couldn’t have been rational. Since this applies for any $n$, there is no value of the sequence that is rational. To specifically phrase this as being induction, the base case is just noting that $44$ isn’t a perfect square. Now we need to prove the inductive hypothesis If $x_n$ is irrational then $x_{n+1}$ is irrational. This statement is logically equivalent to its contrapositive If $x_{n+1}$ is rational then $x_n$ is rational. This contrapositive version is proven by my first paragraph, so since the contrapositive is true the original statement is true. Therefore by induction the entire sequence is irrational.
Picking assigned items from sets without knowing exact contents of sets
I assume when you say "You draw a number of items from the sets at random" you in fact mean that they are in a particular order in each set which you do not know, and you can choose which ones from the order to pick; otherwise this might go on forever as you randomly fail to pick the right ones. I also assume that you are not told the detailed results of attempts; otherwise, if you were, then you could stick with what worked and cut down future searches. So to take an example of 2 items which may be in boxes $A$ or $B$, each containing two items, the possibilities are 1 way of choosing nothing ( $\{\}$ ), 4 ways of choosing one item from one box and none from the other ( $\{A_1\}$, $\{A_2\}$, $\{B_1\}$, $\{B_2\}$ ), plus 4 ways of choosing one item from each box ( $\{A_1, B_1\}$, $\{A_1, B_2\}$, $\{A_2, B_1\}$, $\{A_2, B_2\}$ ), and 2 ways of choosing two items from one box and none from the other ( $\{A_1, A_2\}$, $\{B_1, B_2\}$ ), giving 11 possibilities in all. If so then, perhaps surprisingly, the answer does not depend on $y$ or $z$ individually but only on the product $yz$, as having the sets ordered means you in effect have $yz$ individual sets of one desired or undesired element each. So the formula for the number of possibilities is $$ \sum_{i=0}^{x} {yz \choose i}$$ since there are ${yz \choose i}$ ways of putting $i$ of the original $x$ in the boxes. For your $x=5$, $y=3$, $z=9$ example this gives 101584 possibilities.
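The count is a one-liner once the $yz$ ordered slots are identified; both the worked example and the original numbers come out as stated:

    from math import comb

    def possibilities(x, y, z):
        # choose which i <= x of the items occupy the y*z ordered slots
        return sum(comb(y*z, i) for i in range(x + 1))

    print(possibilities(2, 2, 2))   # 11, the two-box example
    print(possibilities(5, 3, 9))   # 101584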
Number of positive divisors of a number
Hint: The number of divisors of primes is even (equal to $2$). The number of divisors of $4$ is odd ($\{1,2,4\}$). The number of divisors of $6$ is even ($\{1,2,3,6\}$). The number of divisors of $8$ is even ($\{1,2,4,8\}$). The number of divisors of $9$ is odd. The number of divisors of $10,12,14,15$ is even. The number of divisors of $16$ is odd. Can you see a pattern?
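If it helps the exploration, a few lines of code print the counts so the pattern can be spotted:

    from sympy import divisor_count

    for n in range(1, 26):
        parity = 'odd' if divisor_count(n) % 2 else 'even'
        print(n, divisor_count(n), parity)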
Limit of a Logarithm with Different Bases
Let us define the function $f(.)$ as follows: $$f(n)=\dfrac{2^{\log_3n}}{3^{\log_2n}}.$$ Using the base change formula, we can easily transform $f(.)$ into the following function (I used $\log$ and $\exp$ to denote the natural logarithm and the exponential function, respectively): $$f(n)=\exp(\log2\log n\log_3e-\log3\log n\log_2e)$$ $$\Leftrightarrow$$ $$f(n)=\exp(C\log n)=n^C,$$ where $C$ is a constant, $C=\log2\log_3e-\log3\log_2e=\dfrac{\log2}{\log3}-\dfrac{\log3}{\log2}$. Since $C$ is negative, i.e., $C<0$, then: $$\lim_{n \to +\infty}f(n)=0.$$
Optimization methods for probability distributions with respect to percentiles (for example medians)?
Not sure what you mean by "optimize with respect to." An obvious way to estimate the population median, sometimes denoted $\eta,$ is to find the sample median, sometimes denoted $H.$ For many symmetrical distributions, the population mean $\mu$ is the same as the population median $\eta.$ Then the 'best' estimator of $\eta$ from a sample $X_1, X_2, \dots, X_n$ may be the sample mean $\bar X.$ An example is a normal distribution. The unbiased estimator of the 'center' $\mu = \eta$ with the smallest variance is $\bar X.$ Without going into the distribution theory, here is a demonstration via simulation for samples of size $n = 10$ from a normal distribution, that both $\bar X$ (A in the R code) and $H$ are unbiased estimators of the center, but that the sample mean is a less-variable estimator of the center than is the sample median.

    m = 10^6;  n = 10
    x = rnorm(m*n, 100, 15)
    DTA = matrix(x, nrow=m)     # each of m rows is a normal sample of size n
    A = rowMeans(DTA)           # vector of m sample means
    H = apply(DTA, 1, median)   # vector of m sample medians
    mean(A);  mean(H)           # A and H both unbiased estimators of center
    ## 99.99282                 # aprx E(A) = 100
    ## 99.99174                 # aprx E(H) = 100
    sd(A);  sd(H)               # But A has less variability than H
    ## 4.747664                 # aprx SD(A) = 15/sqrt(10) = 4.743416
    ## 5.585213                 # indicates SD(H) > SD(A)

However, the Laplace (double exponential) distribution is also symmetrical and the best estimator of the center $\mu = \eta$ is the sample median $H.$ Here is an analogous demo that for a Laplace distribution, the sample median is a better estimator of the center than the sample mean. I generated a Laplace random variable $X$ as $X = U - V + 100,$ where $U$ and $V$ are independent exponential distributions with rate 1. Thus $E(X) = 100.$

    m = 10^6;  n = 10
    x = rexp(m*n) - rexp(m*n) + 100
    DTA = matrix(x, nrow=m)
    A = rowMeans(DTA);  H = apply(DTA, 1, median)
    mean(A);  mean(H)           # Both estimators unbiased
    ## 99.99987
    ## 99.99969
    sd(A);  sd(H)               # But median H has less variability
    ## 0.4466381
    ## 0.3809883

Also, there are cases in which neither $\bar X$ nor $H$ is best. An example is uniform distributions of the form $\mathsf{Unif}(0, \theta),$ for which the center is $\mu = \eta = \theta/2.$ An unbiased estimator of $\theta$ is $\hat \theta = \frac{n+1}{n}X_{(n)},$ where $X_{(n)} = \max X_i.$ Then the best estimator of the center is $\hat \theta/2.$ There are even some useful symmetrical distributions that do not have a population mean $\mu,$ and so one may use the median $\eta$ as the center and try to estimate it by $H$. An example is Student's t distribution with one degree of freedom. Finally, there are distributions in which $\mu \ne \eta,$ such as the family of gamma distributions, including the exponential. For these, it may be best to use $\bar X$ to estimate $\mu,$ find the relationship between $\mu$ and $\eta,$ and modify the estimator of $\mu$ to get an estimator of $\eta.$ In a mathematical statistics course one important topic of discussion is methods of finding optimal estimators for various parameters. This is not the place for a full discussion. If you can say something about your background in statistics and the situation(s) in which you want to estimate population medians and quantiles, perhaps I or someone else can give an answer targeted on your primary interests.
modular arithmetic with exponents
The important thing is that it goes from $19^3 \pmod {23}$ to $(-4)^3 \pmod {23}$. This works because $19 \equiv -4 \pmod {23}$. You lost that in going to the final question.
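A one-line check of the congruence driving the substitution:

    print(pow(19, 3, 23), pow(-4, 3, 23))   # both 5, since 19 = -4 (mod 23)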
The equivalent axiom of closure operator on a partial order set
$\newcommand{\cl}{\operatorname{cl}}\cl(\cl(z))\le \cl(z)$ iff $\cl(z)\le \cl(z)$ (why?) Take $x=\cl z$ and $y=z$.
Intuition of characteristic property of the free group
An analogy with vector spaces may be helpful. If $V$ is a vector space with basis $B$, then any set function on $B$ into another vector space $W$ can be extended to a linear map $V \to W$. So $V$ is a free vector space on the basis $B$. In the case of vector spaces this property does not characterise $V$ because every vector space has a basis. This is not true in the case of groups, even if they are abelian. Free groups are special in that they admit "basis" expansions, and group homomorphisms from a free group are determined completely by their action on "basis elements". In the same way that a vector space is determined (up to linear isomorphism) by its vector space dimension, which is the cardinality of any basis for that space, a free group is determined (up to group isomorphism) by the cardinality of its generating set, called its rank.
Can one prove by contraposition in intuitionistic logic?
Contraposition in intuitionism can sometimes be used, but it's a delicate situation. You can think of it this way. Intuitionistically, the meaning of $P\to Q$ is that any proof of $P$ can be turned into a proof of $Q$. Similarly, $\neg Q \to \neg P$ means that every proof of $\neg Q$ can be turned into a proof of $\neg P$. If $P\to Q$ is true, and you are given a proof of $\neg Q$, can you construct a proof of $\neg P$? The answer is yes, as follows. We are given a proof that there is no proof of $Q$. Suppose we had a proof of $P$; it could then be turned into a proof of $Q$, but then we would have a proof of $Q\wedge \neg Q$, which is impossible. We have thus shown that it is not the case that $P$ holds, i.e., that $\neg P$ holds. In other words, $(P\to Q)\to (\neg Q \to \neg P)$. In the other direction, suppose that $\neg Q \to \neg P$, and you are given a proof of $P$. Can you now construct a proof of $Q$? Well, not quite. The best you can do is as follows. Suppose I have a proof of $\neg Q$. I can turn it into a proof of $\neg P$, and then obtain a proof of $P\wedge \neg P$, which is impossible. This shows that $\neg Q$ cannot be proven; that is, that $\neg \neg Q$ holds. In other words, $(\neg Q \to \neg P)\to (P\to \neg \neg Q)$.
If the $m-1$ first derivatives of a rational function vanish at a point, does the function have a zero of order $m$ at that point?
The statement you mentioned is correct. The order of the zero can also be higher than $m$, because derivatives beyond the first $m-1$ may vanish at that point as well.
Interpretation of probability density greater than one
The density function of a continuous random variable is not an uncountably infinite 'list' of probabilities. A continuous random variable has no probability at any one point; it has positive probabilities only for intervals. (Intervals can be very short, but they cannot shrink to length $0$.) The density function of a random variable $X$ provides a way to find probabilities such as $P(0 < X < 0.1).$ By convention, one writes $P(X = 0.0300)=0,$ and similarly for any other individual value.

Example 1. Let $X \sim \mathsf{Unif}(-.2, .2),$ with density function $f_X(t) = 2.5$ for $-.2 \le t \le .2$ and $0$ elsewhere. The total area under the density curve is $1.$ In order for that to be true, the height of the density function must exceed $1$ for some values of $t.$ (See the plot below.) Then $$P(0 < X < 0.1) = \int_0^{0.1} f_X(t)\,dt = \int_0^{0.1} 2.5\, dt = 0.25.$$ Consider the plot produced by:

    curve(dunif(x,-.2,.2), -.5, .5, col="blue", lwd=2, n=10001,
          ylab="PDF", xlab="t", main="Density of UNIF(-.2, .2)")
    abline(h=0, col="green2")
    abline(v = c(0, .1), col="red")

The desired probability is the area beneath the (blue) density curve between the red vertical lines.

Example 2. Suppose that $Y \sim \mathsf{Norm}(\mu = 10, \sigma=0.1),$ with density function $f_Y(t).$ Then $P(Y \le 9.8) = \int_{-\infty}^{9.8} f_Y(t)\, dt = 0.02275.$ However, this integral cannot be evaluated using the ordinary methods of calculus. One must use numerical integration or printed tables (obtained by numerical integration). In R, one can evaluate this integral using the normal cumulative distribution function (CDF) pnorm (also obtained by numerical means).

    pnorm(9.8, 10, .1)
    [1] 0.02275013

In the figure below, the desired probability is the area under the density curve to the left of the vertical red line.

    curve(dnorm(x, 10,.1), 9.5, 10.5, col="blue",
          ylab="PDF", xlab="t", main="Density of NORM(10, .1)")
    abline(h=0, col="green2")
    abline(v = 9.8, col="red")
Inverting a Large Symmetric Matrix
You should use block matrix multiplication. Write $A=\begin{pmatrix}2&1\\1&1\end{pmatrix}$, $B=\begin{pmatrix}2&3\\3&5\end{pmatrix}$, $C=\begin{pmatrix}1&0\\0&\cfrac{1}{2}\end{pmatrix}$, $M=\begin{pmatrix}A&0&0\\0&B&0\\0&0&C\end{pmatrix}$. Then $M^{-1}=\begin{pmatrix}A^{-1}&0&0\\0&B^{-1}&0\\0&0&C^{-1}\end{pmatrix}$. To invert those $2\times 2$ blocks, you can use $X^{-1}=\cfrac{\operatorname{adj}(X)}{\det(X)}$, where $\operatorname{adj}(X)$ is the adjugate of $X$, i.e. the transpose of the matrix of cofactors (http://en.wikipedia.org/wiki/Adjugate_matrix).
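A minimal numeric sketch in R (using the blocks written above; the helper blockdiag is an illustrative assembly function, not from the original post) confirming that inverting the blocks separately agrees with inverting the whole matrix:

    A <- matrix(c(2, 1, 1, 1), 2, 2)
    B <- matrix(c(2, 3, 3, 5), 2, 2)
    C <- matrix(c(1, 0, 0, 0.5), 2, 2)
    blockdiag <- function(...) {      # assemble a block-diagonal matrix
      blocks <- list(...)
      n <- sum(sapply(blocks, nrow))
      M <- matrix(0, n, n); i <- 0
      for (Bk in blocks) { k <- nrow(Bk); M[i + (1:k), i + (1:k)] <- Bk; i <- i + k }
      M
    }
    M <- blockdiag(A, B, C)
    max(abs(solve(M) - blockdiag(solve(A), solve(B), solve(C))))  # ~ 0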
Why is $\cos(2x)=\cos^2(x)-\sin^2(x)$ and $\sin(2x)=2\sin(x)\cos(x)$?
Proof without words: [two images omitted: diagrams illustrating the double-angle identities $\cos(2x)=\cos^2(x)-\sin^2(x)$ and $\sin(2x)=2\sin(x)\cos(x)$]. Image credit: "Blue's Blog: The Bloog!". See also this Math.SE answer by Blue.
$n > 2$ implies $n < p < n!$ - Proof
Let $n$ be a natural number greater than 2. Let $p$ be any prime number that divides $n!-1$. Since $p\mid n!-1$, $p$ does not divide $n!$. It follows that $p$ is not any natural number less than or equal to $n$, and so $p$ is a natural number greater than $n$. Also, $p$ is less than or equal to $n!-1$, since $p$ divides $n!-1$. It follows that $n < p < n!$, so there is a prime between $n$ and $n!$.
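As a small empirical companion to the proof (a sketch using trial division; the helper function is illustrative and not part of the argument):

    smallest_prime_factor <- function(x) {
      d <- 2
      while (d * d <= x) { if (x %% d == 0) return(d); d <- d + 1 }
      x   # x itself is prime
    }
    for (n in 3:8) {
      p <- smallest_prime_factor(factorial(n) - 1)
      cat(n, ": prime", p, "satisfies", n, "<", p, "<", factorial(n), "\n")
    }

Every prime factor of $n!-1$ found this way indeed lies strictly between $n$ and $n!$.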
How many orthogonal matrices map one vector to another?
There are uncountably many. In particular, any Euler rotation whose first step rotates $u$ in the direction of $v$, and which then performs a rotation around $v$, will satisfy $Au=v$, and there are (at least in 3 dimensions or higher) infinitely many such Euler rotation matrices. Here is an example in Mathematica:

    u = {1, 2, 3}; v = {3, 1, 2};
    RotationMatrix[RandomReal[], v].RotationMatrix[{u, v}].u

The output is always {3,1,2}, regardless of the value of RandomReal[].
Proximal Operator / Mapping of $ g \left( x \right) = {\left\| x \right\|}_{1} $ ($ {L}_{1} $ Norm) in the Complex Domain
Let $\phi(u,x) = \|u\|_1 + {1 \over 2\lambda} \|x-u\|_2^2$. With a slight abuse of notation, note that $\phi(u,x) = \sum_k \phi(u_k,x_k)$, so we may as well assume that $u,x \in \mathbb{C}$. If $x=0$, we see that $\operatorname{prox}_{\lambda {|\cdot|}} (x) = 0$, so assume $x \neq 0$. Note that if $|\theta|=1$, then $\phi(u,x) = \phi(\theta u, \theta x)$ and also $\phi(u,x) \ge \phi(\operatorname{re} u, \operatorname{re} x)$. In particular, $\phi(u,x) = \phi({\bar{x} \over |x|} u, |x|) \ge \phi(\operatorname{re}({\bar{x} \over |x|} u), |x|)$, from which we see that (i) we need only optimise over real $u$ and (ii) $\operatorname{prox}_{\lambda {|\cdot|}} (x) = {|x| \over \bar{x}} \operatorname{prox}_{\lambda {|\cdot|}} (|x|)$, where the latter is computed over a real domain (we have $\operatorname{prox}_{\lambda {|\cdot|}} (|x|) = \max(0,|x|-\lambda)$).

Notes: (i) ${|x| \over \bar{x}} = { x \over |x| }$. (ii) The real (one or higher dimensional) problem $\min_u \|u\|_1 + {1 \over 2 \lambda } \|x-u\|_2^2$ has a nice solution via a minor extension of the von Neumann minimax theorem. We have $\|u\|_1 =\max_{\|h\|_\infty \le 1} \langle h, u \rangle$ and the problem can be written as $\min_u \max_{\|h\|_\infty \le 1} \langle h, u \rangle + {1 \over 2 \lambda } \|x-u\|_2^2$. Applying the aforementioned theorem (with a minor extension to deal with the non-compact domain for $u$) we have $\min_u \max_{\|h\|_\infty \le 1} \langle h, u \rangle + {1 \over 2 \lambda } \|x-u\|_2^2 = \max_{\|h\|_\infty \le 1} \min_u \langle h, u \rangle + {1 \over 2 \lambda } \|x-u\|_2^2$, and solving the inner quadratic problem (which gives $\lambda h - (x-u) = 0$) yields $\min_u \max_{\|h\|_\infty \le 1} \langle h, u \rangle + {1 \over 2 \lambda } \|x-u\|_2^2 = \max_{\|h\|_\infty \le 1} \langle h, x \rangle - {\lambda \over 2} \|h\|_2^2$. This is separable, and reduces to solving $\max_{|h_k| \le 1} h_k x_k - {\lambda \over 2} h_k^2$, which has solution $h_k = \operatorname{sgn} x_k \min(1,{|x_k| \over \lambda})$; substituting into $u_k= x_k-\lambda h_k$ gives the solution $u_k=\operatorname{sgn} x_k \max(0,|x_k|-\lambda)$.
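A minimal numeric sketch in R of the resulting complex soft-thresholding rule, checked against direct minimisation of $\phi$ (the helper prox_l1 and the test values are illustrative, not from the original post):

    prox_l1 <- function(x, lambda) (x / Mod(x)) * pmax(0, Mod(x) - lambda)
    x <- complex(real = 1.3, imaginary = -0.7); lambda <- 0.5
    u_closed <- prox_l1(x, lambda)
    # brute-force check: minimise |u| + |x-u|^2/(2 lambda) over (Re u, Im u)
    obj <- function(p) {
      u <- complex(real = p[1], imaginary = p[2])
      Mod(u) + Mod(x - u)^2 / (2 * lambda)
    }
    u_num <- optim(c(0, 0), obj)$par
    c(Re(u_closed), Im(u_closed)); u_num  # should agree to optimizer tolerance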
Universal coverings and fully faithful fiber functors?
This is never going to be full if the fibers aren't singletons, because the action is by automorphisms. It won't even be full onto the core of automorphisms in essentially any case. In the universal cover, the fiber is identified with $\pi_1,$ and the fiber functor is identified with the left multiplication action of $\pi_1$ on itself. So this is only going to be full if every set automorphism of $\pi_1$ is induced by left multiplication by some element, which is almost always impossible: for finite groups you can see this just by counting, since there are only $|\pi_1|$ left multiplications but $|\pi_1|!$ set automorphisms. The functor is faithful for the universal cover, though, since no nonidentity group element fixes the group under left multiplication. A more robust fiber functor lands in $\pi_1$-sets, and in that case your claim is true.
Constructing Unbounded Linear Maps
Proving the existence of a Hamel basis in general relies on the axiom of choice, but many important normed spaces have explicit Schauder bases. Suppose we know a (countable) Schauder basis for $X$, call it $\{x_i\}$, normalised so that $\|x_i\|=1$ for each $i$. The subspace $X_0$ for which $\{x_i\}$ is a Hamel basis is incomplete. Explicitly, $X_0=\{\sum_{j=1}^Nc_jx_{i_j}:N\in\mathbb{N},c_j\in\mathbb{C}\}$. To show $X_0$ is incomplete, notice that the sequence $\left\{\sum_{j=1}^Nx_j2^{-j}\right\}_{N=1}^\infty$ is Cauchy but does not converge in $X_0$. It shouldn't be hard to show $X$ is the completion of $X_0$ (basically that's how a Schauder basis is defined). You can construct an unbounded linear functional $f:X_0\to\mathbb{C}$ easily: $f(\sum_{j=1}^Nc_jx_{i_j})=\sum_{j=1}^Ni_jc_j$. To show $f$ is unbounded, note that $f(x_i)=i$, so $\{x_i\}$ is a sequence on the unit sphere with an unbounded image. If $0\neq y\in Y$, you can define an unbounded linear map $g:X_0\to Y$ by $g(x)=yf(x)$.
Does Mixed boundary conditions change Von Neumann Stability Analysis
von Neumann analysis assumes a plane-wave solution $u = \alpha e^{i(\kappa x - \omega t )}$, so strictly speaking it applies to an infinite domain (or a periodic one, where the discrete Fourier modes diagonalize a linear constant-coefficient scheme). However, because many numerical methods use basis functions with compact support, it does not matter how large the domain is until the boundary is encountered. When boundary conditions do come into play, you should use something different: Sengupta, Tapan K., Anurag Dipankar, and Pierre Sagaut. "Error dynamics: beyond von Neumann analysis." Journal of Computational Physics 226.2 (2007): 1211-1218.
Proof of L'Hôpital's Rule for $x \to$ finite $a^{+}$ (J Stewart pp A-46)
Question 4: The trick is exactly the Squeeze Theorem. Since $c$ is bounded above by $x$ and below by $a$, $c$ approaches $a^+$ as $x$ does. Question 5: This change of variables is quite intuitive if you think about it. We seek to understand why this is true: ${\lim_{c \to a^{+}}} \dfrac{ f'(c) }{ g'(c) } = {\lim_{x \to a^{+}}} \dfrac{ f'(x) }{ g'(x) }$. Think about it like this: $\lim_{x \to a^+} h(x) =\lim_{c \to a^+} h(c)$ for any function $h$, because the name of the variable in a limit is immaterial; this is simply the tautology $h(a^+)=h(a^+)$. Letting $h(x)=\frac{f'(x)}{g'(x)}$, we have shown our statement. For the first three questions, we can anticipate working with the interval $(a,x)$ by looking at the trick we are using: Cauchy's Mean Value Theorem. Knowing this, and that the two tricks we will be using are setting $f(a)=g(a)=0$ and renaming $c$ to $x$, we want to bound $c$ from above with our variable $x$, since we are approaching $a^+$, not $a^-$.
is relative velocity formula wrong?
You are not wrong. An object released from the top of a pole has slightly more angular momentum than the base of the pole, so it will land to the east of the base of the pole by a very small distance. The reason we do not need to worry about this in real life (most of the time) is that the effect is very small indeed. For a 5 meter high pole, the time taken to fall from the top to the bottom of the pole is approximately 1 second. The additional velocity at the top of the pole due to the Earth's rotation is $2\pi \times 5 = 10\pi$ meters per day, which is about 0.36 mm per second. So the displacement away from the base of the pole is about 1/3 of a mm. You would have to carry out a very accurate experiment to be able to detect such a small effect. However, effects due to the rotation of the Earth do become significant when calculating the trajectory of artillery shells, which can be in flight for tens of seconds.
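A back-of-envelope computation in R reproducing these numbers (a sketch; it simply multiplies the excess velocity at the top by the fall time, rather than integrating the full Coriolis deflection):

    h <- 5                        # pole height, m
    g <- 9.81                     # gravitational acceleration, m/s^2
    t_fall <- sqrt(2 * h / g)     # fall time, ~1.01 s
    day <- 86400                  # seconds per day
    v_extra <- 2 * pi * h / day   # extra eastward speed at the top, ~3.6e-4 m/s
    v_extra * t_fall * 1000       # eastward displacement in mm, ~0.37 mm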
How to solve $y^3=x(x+1)$ where $x$ and $y$ are integers?
Hint: notice that $\gcd(x,x+1)=1$, and thus $x=a^3$ and $x+1=b^3$ for some integers $a$ and $b$.
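A brute-force sanity check in R over a small range (a sketch; it supports the hint's conclusion that consecutive integers are both cubes only for $x=0$ and $x=-1$, giving $y=0$):

    is_cube <- function(v) { r <- round(sign(v) * abs(v)^(1/3)); r^3 == v }
    xs <- (-10^4):(10^4)
    xs[is_cube(xs * (xs + 1))]   ## -1  0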
How can I safely make statistical inferences on abnormal data?
This looks very much like $t-t_\min$ follows a log-normal distribution; that is, $\log(t-t_\min)$ is normally distributed, with some mean $\mu$ and some standard deviation $\sigma$. To estimate the values of $\mu$ and $\sigma$ from the set of points labelled $t_k$ you can compute $$\hat{\mu} = \frac1n \sum_k \ln(t_k-t_\min)$$ and $$\hat{\sigma}^2 = \frac1{n-1} \sum_k ( \ln(t_k-t_\min) - \hat{\mu})^2.$$ Then you can do all the usual manipulations that you would do with a normal distribution; for example, the interval $[\mu - \sigma, \mu+\sigma]$ contains about $68\%$ of the probability, and that translates to the interval $[t_\min + e^{\mu-\sigma},\,t_\min + e^{\mu+\sigma}]$ in $t$.
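A minimal R sketch of this recipe on simulated data (the parameter values here are made up for illustration):

    set.seed(1)
    t_min <- 2
    t <- t_min + rlnorm(500, meanlog = 0.5, sdlog = 0.8)  # fake observations
    z <- log(t - t_min)
    mu_hat <- mean(z); sigma_hat <- sd(z)                 # ~0.5 and ~0.8
    t_min + exp(mu_hat + c(-1, 1) * sigma_hat)            # ~68% interval in t
    mean(t >= t_min + exp(mu_hat - sigma_hat) &
         t <= t_min + exp(mu_hat + sigma_hat))            # close to 0.68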
Taking the limit involving gamma function
Hint: $f(a)=g(a)h(a)$ with $g(a)=\Gamma(a+1)$ and $h(a)=\sin(\pi a/2)/a$. Now use $$g(0)=1,\ h(0)=\pi/2,\ g'(0)=-\gamma,\ h'(0)=0.$$
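A quick numeric check in R of the constants the hint uses ($\gamma \approx 0.5772$ is the Euler-Mascheroni constant; the step size is an arbitrary small value):

    a <- 1e-6
    gamma(a + 1)                               # ~ 1       = g(0)
    sin(pi * a / 2) / a                        # ~ pi/2    = h(0)
    (gamma(1 + a) - gamma(1 - a)) / (2 * a)    # ~ -0.5772 = g'(0) = -gamma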
If $p>3$ what are two solutions of $x^2 ≡ 4 \pmod p$?
Since $2^2=4$, it's sure that $2^2\equiv 4\pmod p$ for any $p$. Thus, by the theorem you're citing, there must be another solution; since $$ (-2)^2=4, $$ $-2\equiv p-2\pmod p$ is also a solution. Note that $p-2\not\equiv 2\pmod{p}$, because $p-2\equiv 2\pmod p$ would force $p\mid 4$. The condition $p>3$ is irrelevant; it suffices that $p$ is an odd prime.
Recurrence or transience of the 1-3 tree
I realized that I can use the same technique that was used to show that the branching number is $1$ in order to show that the walk on the tree is actually recurrent. Thus it does not provide an answer to either exercise 3.4 or 3.5. Namely, any flow with finite energy, which we assume exists for the sake of contradiction, would have to send nonzero flow down the rightmost branch or nonzero flow somewhere else; either case is a contradiction.
Confused about how to show independent random variable $Y$ has the Poisson distribution with parameter $t\lambda$
Let $S_k:=\sum_{i=1}^k X_i$ (note that $S_k\sim \Gamma(k,\lambda^{-1})$). Then \begin{align} \mathsf{P}(Y=k)&=\mathsf{P}(S_k\le t, S_k+X_{k+1}>t)=\mathsf{E}\left[1\{S_k\le t\} \mathsf{P}(X_{k+1}>t-S_k\mid S_k)\right] \\ &=\mathsf{E}\left[1\{S_k\le t\}e^{-\lambda(t-S_k)}\right]=\int_0^t\frac{\lambda^k}{(k-1)!}x^{k-1}e^{-\lambda t}\,dx \\ &=\frac{(\lambda t)^ke^{-\lambda t}}{k!}. \end{align}
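A simulation sketch in R supporting this (the values $\lambda = 2$ and $t = 1.5$ are made up; $Y$ counts arrivals of a renewal process with exponential rate-$\lambda$ gaps by time $t$):

    set.seed(1)
    lambda <- 2; t <- 1.5; m <- 1e5
    Y <- replicate(m, {
      s <- 0; k <- 0
      repeat { s <- s + rexp(1, lambda); if (s > t) break; k <- k + 1 }
      k
    })
    rbind(table(Y)[1:6] / m,        # empirical P(Y = 0), ..., P(Y = 5)
          dpois(0:5, lambda * t))   # Poisson(lambda * t) pmf, for comparison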
Need help in pizza counting total number of possibilities problem.
For the first topping on the first pizza you have 7 choices. Since double and triple toppings are allowed, you also have 7 choices for the second and for the third topping. This yields $7\cdot7\cdot7$ choices for the first pizza. For the second pizza you have the same $7\cdot7\cdot7$ choices. Since you are ordering 2 pizzas, the answer is $(7\cdot7\cdot7)\cdot(7\cdot7\cdot7) = 7^6$. If one-topping and two-topping pizzas are also allowed, then it is probably $(7 + 7\cdot7 + 7\cdot7\cdot7)\cdot(7 + 7\cdot7 + 7\cdot7\cdot7)$.
Permutations of length $n$ in which the first ascent occurs in an even position
Let's first construct a generating function $g(x,y)$ such that $k![x^ky^l]g(x,y)$ counts the permutations of length $k$ without ascent before the $l$-th position. So $g(x,y)$ is exponential in $x$; $x$ keeps track of the length of the permutation and $y$ keeps track of the number of positions without ascent. There is exactly one permutation of given length with no ascent before the last position (namely the descending one), so these permutations are counted by $\mathrm e^{xy}$. A permutation without ascent before the $l$-th position is such a permutation without ascent before the last position, followed by an arbitrary permutation. The exponential generating function for arbitrary permutations is $\frac1{1-x}$. Concatenation of labeled objects corresponds to multiplication of their exponential generating functions, so $$ g(x,y)=\frac{\mathrm e^{xy}}{1-x}\;. $$ The number of permutations where the first ascent occurs after $l$ positions is the number of permutations without ascent before the $l$-th position minus the number of permutations without ascent before the $(l+1)$-th position, so we want $([y^l]-[y^{l+1}])g(x,y)$. (Note that this correctly handles the special case where a descending permutation is considered to have an ascent in the last position, since nothing is subtracted in this case.) Summing over all even $l=2i$ yields $$ g(x)=\sum_i([y^{2i}]-[y^{2i+1}])g(x,y)=\sum_j(-1)^j[y^j]g(x,y)=g(x,-1)=\frac{\mathrm e^{-x}}{1-x}\;. $$ This is the exponential generating function for derangements, so the number of permutations of length $n$ with the first ascent in an even position is the number $!n$ of derangements of length $n$.
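A brute-force check in R for small $n$ (a sketch; the recursive permutation generator is written inline to keep the example self-contained):

    perms <- function(v) {            # all permutations of the vector v
      if (length(v) <= 1) return(list(v))
      out <- list()
      for (i in seq_along(v))
        for (p in perms(v[-i])) out[[length(out) + 1]] <- c(v[i], p)
      out
    }
    first_ascent <- function(p) {     # descending permutations: position n
      i <- which(diff(p) > 0)
      if (length(i) == 0) length(p) else i[1]
    }
    sapply(1:6, function(n)
      sum(sapply(perms(1:n), function(p) first_ascent(p) %% 2 == 0)))
    ## 0 1 2 9 44 265  -- the derangement numbers !1, ..., !6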
How can we pass from $\dot z=Az+\varepsilon g(z,\dot z,t)$ to $\dot x=\varepsilon g(x,\dot x,t)$?
Define $$ \binom{z_1}{z_2}=e^{At}\binom{x_1}{x_2}$$ and substitute it into the original equation to get $$ Ae^{At}\binom{x_1}{x_2}+e^{At}\binom{\dot{x}_1}{\dot{x}_2}=Ae^{At}\binom{x_1}{x_2}+\binom{0}{\epsilon g(e^{At}x,Ae^{At}x+e^{At}\dot{x},t)}. $$ From this, it is easy to see that $$ \binom{\dot{x}_1}{\dot{x}_2}=\epsilon e^{-At}\binom{0}{g(e^{At}x,Ae^{At}x+e^{At}\dot{x},t)}.$$
Maximum likelihood estimation of $a,b$ for a uniform distribution on $[a,b]$
First, $ a\leq \min(X_1 , \ldots , X_n) $ and $ b\geq \max(X_1 , \ldots , X_n) $. That is because otherwise some observed sample $X_i$ would be less than $a$ or greater than $b$, which is impossible since the distribution is $$ X_i \sim \operatorname{Unif}(a,b) $$ and the minimum value $X_i$ can take is $a$, while the maximum value $X_i$ can take is $b$. The likelihood function is $$ \mathcal{L}(a,b)= \prod_{i=1}^n f(x_i;a,b) = \prod_{i=1}^n \frac{1}{(b-a)} = \frac{1}{(b-a)^n}. $$ Consider the log-likelihood function $$ \log\mathcal{L}(a,b) = \log{\displaystyle \prod_{i=1}^{n} f(x_i;a,b)} = \displaystyle \log\prod_{i=1}^{n} \frac{1}{(b-a)} = \log{\big((b-a)^{-n}\big)} = -n \cdot \log{(b-a)}. $$ Note that we are looking for the arguments $a$ and $b$ that maximize the likelihood (or the log-likelihood). Now, to find $ \hat{a}_{MLE} $ and $ \hat{b}_{MLE} $, take the derivatives of the log-likelihood function with respect to $a$ and $b$: $$ \frac{\partial}{\partial a} \log\mathcal{L}(a,b) = \frac{n}{(b-a)} \\ \frac{\partial}{\partial b} \log \mathcal{L}(a,b) = -\frac{n}{(b-a)} $$ We can see that the log-likelihood is monotonically increasing in $a$, so we take the largest $a$ possible, which is $$ \hat{a}_{MLE}=\min(X_1 , ... , X_n). $$ We can also see that the log-likelihood is monotonically decreasing in $b$, so we take the smallest $b$ possible, which is $$ \hat{b}_{MLE}=\max(X_1 , ... , X_n). $$
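A quick R illustration (with made-up true values $a = 2$, $b = 5$):

    set.seed(1)
    x <- runif(50, min = 2, max = 5)
    c(a_hat = min(x), b_hat = max(x))  # the MLEs hug the true endpoints from inside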
Inclusivity of the domain of the derivative of a function with a vertical tangent
$f'(a) = \lim\limits_{x\to a} \frac {f(x) - f(a)}{x-a}$. Here $f:\mathbb R_{\ge0} \to \mathbb R_{\ge 0},\ f(x) = \sqrt x$; that is, $f(x)$ is defined over the non-negative real numbers. Question 1: does $\lim\limits_{x\to 0} \sqrt{x}$ exist? Since there are no negative values of $x$ in the domain, we do not consider the left-hand limit when evaluating whether the limit exists. That is, in the definition of limits, we consider only those $x$ in the domain of $f(x)$ in the neighborhood of $0$. Hence $\lim\limits_{x\to 0} \sqrt{x} = 0$. Using similar logic, I am inclined to say: $f'(0) = \lim\limits_{x\to 0} \frac {\sqrt x}{x} = \lim\limits_{x\to 0} \frac {1}{\sqrt x} = \infty$.
Find the probability the sum of two dices equal to 4
To find $\Bbb P(N=2|S=4)$ using the law of total probability we need to find $\Bbb P(N=k\cap S=4)$ for $k=1,2,3,4$. (We need not consider $k>4$ because the sum of five or more dice is always greater than 4.) $\Bbb P(N=1)=\frac12$ and out of six ways to roll a die only one way results in a 4, so $\Bbb P(N=1\cap S=4)=\frac12×\frac16=\frac1{12}$. $\Bbb P(N=2)=\frac14$ and you have already calculated that only three rolls of the 36 possible with two dice sum to 4, so $\Bbb P(N=2\cap S=4)=\frac14×\frac1{12}=\frac1{48}$. $\Bbb P(N=3)=\frac18$. Out of $6^3$ possible rolls of three dice, only three (112, 121, 211) sum to 4. $\Bbb P(N=3\cap S=4)=\frac18×\frac3{216}=\frac1{576}$. $\Bbb P(N=4)=\frac1{16}$, but only one roll out of $6^4$ sums to 4: 1111. Thus $\Bbb P(N=4\cap S=4)=\frac1{16}×\frac1{1296}=\frac1{20736}$. The law of total probability then states that $$\Bbb P(N=2|S=4)=\frac{\frac1{48}}{\frac1{12}+\frac1{48}+\frac1{576}+\frac1{20736}}=\frac{432}{2197}$$
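A simulation sketch in R (assuming, as above, that $\Bbb P(N=k)=2^{-k}$, i.e. $N$ is geometric with success probability $1/2$):

    set.seed(1)
    m <- 2e5
    N <- rgeom(m, 0.5) + 1                      # P(N = k) = (1/2)^k
    S <- sapply(N, function(k) sum(sample(6, k, replace = TRUE)))
    mean(N[S == 4] == 2)                        # ~ 432/2197 = 0.1966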
Continuous function on a compact metric space with no fixed point
The function $g : X \to \mathbb{R}$ defined by $g(x) = d(f(x),x)$ is continuous because it is a composition of continuous functions. For every $\varepsilon > 0$ let $I_{\varepsilon} = \{r\in \mathbb{R} \mid r > \varepsilon\}$, which is an open set in $\mathbb{R}$. Then $g^{-1}(I_{\varepsilon})$ is open in $X$, and the family of sets $\{g^{-1}(I_{\varepsilon}) \mid \varepsilon > 0 \}$ covers $X$ because $f$ has no fixed point. Since $X$ is compact, any open cover of it has a finite subcover: $X = \bigcup_{i=1}^{n} g^{-1}(I_{\varepsilon_i})$. Let $\varepsilon_{\min} = \min\{\varepsilon_1,\dots,\varepsilon_n\}$. Then for every $x\in X$ we have $x \in g^{-1}(I_{\varepsilon_i})$ for some $i$, hence $d(f(x),x) = g(x) \in I_{\varepsilon_i}$, so $d(f(x),x) > \varepsilon_i \geq \varepsilon_{\min}$, and we are done.
Regularity of semilinear PDE
My suggestion would be: $-\Delta u = f - c(u)$. Now you are done, by elliptic regularity, if you can show that the right-hand side is in $L^2$. To do this it is of course enough to show that $c(u)$ is in $L^2(\mathbb{R}^n)$: $$\int |c(u)|^2 = \int_{\text{supp}(u)}|c(u)|^2 \le \|c\|^2_{L^{\infty}(\text{supp}(u))}|\text{supp}(u)| < \infty.$$ Do you think this works? Let me know! :)
Is it worth it to get better at contest math?
No, it is completely useless. The facts that they are timed, require no advanced mathematics, and often have ad-hoc or brute-force-ish solutions are good indicators. Its relevance for research is comparable to that of being able to recite the digits of Pi.
Distinguishing cryptographic properties: hiding and collision resistance
Pre-image resistance: given a hash value $h$, find $m$ such that $h=Hash(m)$. Consider storing the hashes of passwords on a server; an attacker will try to find your password. Second pre-image resistance: as Henno Brandsma said, this is also called weak collision resistance. In this case, given $m$, the attacker will try to find an $m'\neq m$ such that $Hash(m') = Hash(m)$. Hiding: this requirement is motivated by the time-memory trade-off attack on block ciphers and hash functions due to Hellman; its modern form, rainbow tables, is due to Oechslin. To mitigate these attacks, one solution is concatenating a random value to the password, called a salt, so that $h=Hash(r\|m)$ is stored in the database together with the salt $r$. Now the attacker must build a new table for every password in the database. Finding an $m'$ such that $Hash(m') = Hash(r\|m)$ will not solve the attacker's problem, because the server will calculate $Hash(r\|m') \neq Hash(r\|m)$. Adding a salt (or, in general, a random value) has many applications in cryptography. The answer to your question: yes, it is not hiding, because there is no randomness. For some applications non-hiding is enough, such as comparing the hash of a download with the hash published by the server to see that the download is complete. I assume that $g$ generates a cyclic group of order $m$; if the two exponents are distinct modulo $m$, yes, the hashes will be different. In general, however, one must support values larger than the group size, and any two values with $x_1 = x_2 + km$ will collide. To hide it, you must add a random value.
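A toy illustration of salting in R (a sketch: it assumes the third-party digest package, and the password and salts are made-up values):

    library(digest)
    pw <- "correct horse"
    salts <- c("r1", "r2")  # two different (here hard-coded) salts
    sapply(salts, function(r) digest(paste0(r, "|", pw), algo = "sha256"))
    # Same password, different salts => different stored hashes, so one
    # precomputed table no longer covers every entry in the database.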
Bayes Theorem Breast Cancer
I'm not so much a fan of probability trees. I like tables. In any case, always define your events precisely. Here I will use $$T = \text{A randomly selected person tests positive,} \\ C = \text{A randomly selected person has breast cancer.}$$ Furthermore, $\bar T$ and $\bar C$ represent the complementary events of testing negative and not having cancer, respectively. Then for the following table structure $$\begin{array}{c|c|c|c} & C & \bar C & \\ \hline T & n_{11} = T \cap C & n_{12} = T \cap \bar C & n_{1*} = n_{11} + n_{12} \\ \hline \bar T & n_{21} = \bar T \cap C & n_{22} = \bar T \cap \bar C & n_{2*} = n_{21} + n_{22} \\ \hline & n_{*1} = n_{11} + n_{21} & n_{*2} = n_{12} + n_{22} & n_{**}\end{array}$$ we are given the following frequencies: $$\begin{array}{c|c|c|c} & C & \bar C & \\ \hline T & 8 & 95 & ?\\ \hline \bar T & ? & ? & ? \\ \hline & 10 & 990 & 1000\end{array}$$ Filling in the blanks is trivial: $$\begin{array}{c|c|c|c} & C & \bar C & \\ \hline T & 8 & 95 & 103\\ \hline \bar T & 2 & 895 & 897 \\ \hline & 10 & 990 & 1000\end{array}$$ Now the probability of a false positive is simply $$\Pr[T \mid \bar C] = \frac{\Pr[T \cap \bar C]}{\Pr[\bar C]} = \frac{n_{12}}{n_{*2}} = \frac{95}{990}.$$ Indeed, this could be immediately stated from the question; no computation was needed. The probability of a false negative is $$\Pr[\bar T \mid C] = \frac{\Pr[\bar T \cap C]}{\Pr[C]} = \frac{n_{21}}{n_{*1}} = \frac{2}{10},$$ only slightly less trivial than the previous question. So we can see there's no real need to rigorously do any Bayesian calculations--the questions ask for information that is readily found from the given conditions. Be advised that the definitions of false positive and false negative are as follows: false positive: A test result is positive when the true condition is negative. false negative: A test result is negative when the true condition is positive. Do not confuse this with false discovery and false omission: false positive and false negative have to do with the chance of the test giving a result that is contradictory to the true condition, rather than the condition being actually present (absent) when the test is positive (negative). If the question had asked, "what is the positive predictive value (PPV) of the test; i.e., what is the chance that someone who tests positive actually has breast cancer," then this would be $$\Pr[C \mid T] = \frac{\Pr[T \cap C]}{\Pr[T]} = \frac{8}{103}.$$ This is a horrible rate; it means that less than $8\%$ of women with a positive mammogram actually have breast cancer; thus mammography should not be used as a diagnostic tool. What is the negative predictive value (NPV) of the test; i.e., if a woman tests negative, what is the probability she in fact does not have breast cancer? This is $$\Pr[\bar C \mid \bar T] = \frac{\Pr[\bar T \cap \bar C]}{\Pr[\bar T]} = \frac{895}{897},$$ which is very high: it means a woman testing negative can be fairly assured that she does not have breast cancer. So, mammography's utility, according to the problem, seems to be in providing reassurance for those who test negative.
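All four quantities can be read off in R directly from the counts in the completed table above:

    n11 <- 8; n12 <- 95; n21 <- 2; n22 <- 895
    c(false_positive = n12 / (n12 + n22),  # P(T | not C)      = 95/990
      false_negative = n21 / (n11 + n21),  # P(not T | C)      = 2/10
      PPV            = n11 / (n11 + n12),  # P(C | T)          = 8/103
      NPV            = n22 / (n21 + n22))  # P(not C | not T)  = 895/897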
Help on Complex Analysis Question
For Q1: If you are using the so-called natural branch of $\log{z}$, which requires $0<\arg{z}<2\pi$, then $\log(z-6)$ is analytic on and inside the circle $|z|=3$, so you can use Cauchy's integral theorem. The other "standard" branch of $\log{z}$ is the so-called principal branch, which requires $-\pi<\arg{z}<\pi$. This is the most commonly used branch, as it is defined for positive real numbers (and has $\log{x}=\ln{x}$ there). In your problem, though, that branch of $\log(z-6)$ is not defined along (or everywhere inside) the whole curve $|z|=3$, which makes it harder to compute the integral.
How to evaluate $ \lim_{x \to 0} [\frac{\sin x \tan x}{x^2}] $where [] is GIF
For small values of $x$ use $\sin x \approx x-x^3/6$ and $\tan x \approx x+x^3/3$; then $$\frac{\sin x \tan x}{x^2} \approx (1-x^2/6)(1+x^2/3) = 1+\frac{x^2}{6}-\frac{x^4}{18},$$ which for small $x \neq 0$ lies strictly between $1$ and $2$. Hence $$\lim_{x\rightarrow 0} \left[ \frac{\sin x \tan x}{x^2} \right]=1.$$
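A quick numeric check in R, evaluating the expression at a sequence of small $x$:

    x <- 10^-(1:6)
    floor(sin(x) * tan(x) / x^2)   # all equal to 1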
Why is the kernel of this strange polynomial homomorphism what it is?
I think part of the problem stems from your notation, which I will take the liberty to change. Recall that a matrix $M=(a_{ij})\in M_{l,n}(\mathbb C)$ corresponds to a linear map $\mu:\mathbb C^n\to \mathbb C^l$ and that $\mu$ will be of rank $\leq 1$ iff $M=a\cdot b^T$ for some column vectors $a\in \mathbb C^l$, $b\in \mathbb C^n$. Hence the map $f:\mathbb C^l\times \mathbb C^n \to M_{l,n}(\mathbb C): (a,b)\mapsto a\cdot b^T=(a_ib_j)$ has as its image exactly the set $S\subset M_{l,n}(\mathbb C)$ of matrices of rank $\leq 1$. We may see $f$ as the morphism of affine spaces corresponding to the $\mathbb C$-algebra morphism of polynomial rings $\phi: \mathbb C[z_{ij}] \to \mathbb C[x_i;y_j]: z_{ij} \mapsto x_i y_j$. The claim you want is then that the ideal $I(S)\subset \mathbb C[z_{ij}]$ of polynomials vanishing on $S=\operatorname{Im}(f)$ is $\ker(\phi)$, in conformity with the basic correspondence in algebraic geometry between morphisms of rings and the associated morphisms of varieties.
Rudin exercise 2.18: perfect set with only irrational numbers.
It's not really Rudin's style, but there are lots of general topological arguments, based on the topological structure of the irrationals and the Cantor set, to see that such a set exists. One way is to prove that the Cantor set is unique topologically: if $X$ is a compact, totally disconnected (i.e. no connected subset has more than one point) metric space without isolated points, then $X$ is homeomorphic to the standard "middle third" Cantor set $C \subset [0,1]$. Such an $X$ is also called "a Cantor set". They're clearly perfect if embedded in $\mathbb{R}$ (being compact, $X$ will be a closed subset, and it has no isolated points). This is a nice topological argument (not using decimal expansions etc.) to see that $\{0,1\}^\mathbb{N} \simeq C$, and also $\prod_n F_n \simeq C$, where all $F_n$ are finite discrete spaces. It also implies that $C \times C \simeq C$: if $X$ is a Cantor set, so is $X \times X$: still compact metric; still totally disconnected (if $A \subseteq X \times X$ is connected, so are $\pi_1[A]$ and $\pi_2[A]$, so these are singletons and hence so is $A$); and still without isolated points (if $\{(x,y)\}$ were open, so would $\pi_1[\{(x,y)\}] = \{x\}$ be, which is false, etc.). So the characterisation says that $C \times C \simeq C$. We can also reach the same conclusion if you already know that $C \simeq \{0,1\}^\mathbb{N}$ without the characterisation (using the ternary expansion argument), because it's clear that $\{0,1\}^I \times \{0,1\}^J \simeq \{0,1\}^{I \cup J}$ (disjoint union), and $\{0,1\}^I$ is homeomorphic to $\{0,1\}^\mathbb{N}$ whenever $I$ is countably infinite (powers only depend on the cardinality of the index set). So once you have convinced yourself that $C \times C \simeq C$, it's easy: clearly $\{x\} \times C \simeq C$ (for $x \in C$, as a subspace of $C \times C$), so the product shows us that a Cantor set is an uncountable disjoint union of homeomorphic copies of the Cantor set. As subsets of $\mathbb{R}$, only countably many of these disjoint copies can contain any rational number. So uncountably many consist of irrational numbers only. And such a Cantor set is what's wanted.
Can I build a finitely additive function on $\mathbb{Q}_p$?
Yes; under addition, $\mathbb{Q}_p$ forms a locally compact group, so it has a Haar measure. I.e. there is a measure on $\mathscr{B}$ which is translation invariant, positive on nonempty open sets, finite on compact sets, and satisfies some regularity conditions (see the page). It's unique up to scaling. If we normalize it so that $\mu(\mathbb{Z}_p)=1$, as you've done (which is standard), then we get what you've described above. Also, if we take what you've described above and just assume the regularity conditions in addition I think this should be enough to force you to get the Haar measure? In any case, I think it's safe to say the Haar measure is what you're looking for. :) Note, by the way, that if we consider p-adic integers as p-adic expansions, then Haar measure on $\mathbb{Z}_p$ is just the product measure of the uniform measures on the digits. (I.e. we can reason about randomly chosen p-adic integers in the same way we would reason about randomly chosen infinite strings of digits.)
Followup to question concerning $N_G(H) / C_G(H) \cong B \leq \mathrm{Aut}(H)$
$\textbf{Proof of Proposition 13}$: As $H$ is normal in $G$, $ghg^{-1}\in gHg^{-1}=H$ and so each $ghg^{-1}\in H$. If $h,k \in H$ then $$f_g(hk)=g(hk)g^{-1} \text{ and } f_g(h)f_g(k)=(ghg^{-1})(gkg^{-1})=g(hk)g^{-1}$$ so $f_g(hk)=f_g(h)f_g(k)$, so $f_g$ is a homomorphism. If $h \in H=gHg^{-1}$ then $h=gh'g^{-1}$ for some $h' \in H$ and so $$f_g(h')=gh'g^{-1}=h$$ so $f_g$ is a surjection. We have that \begin{align*} \mathrm{Ker}(f_g) &= \{ h \in H: f_g(h)=e_H \} \\ &= \{ h \in H: ghg^{-1}=e_H \} \\ &= \{ h \in H: gh=e_Hg \} \\ &= \{ h \in H: gh=ge_H \} \\ &= \{ h \in H: h=e_H \} \\ &= \{ e_H \} \end{align*} so $f_g$ is injective. We conclude that $f_g$ is an isomorphism, and as $f_g:H \rightarrow H$, we have that $f_g$ is an automorphism. Let $f$ be as above. If $g,k \in G$ then $$f(gk)=f_{gk} \text{ and } f(g) \circ f(k) = f_g \circ f_k.$$ For any $h \in H$, $f_{gk}(h)=(gk)h(gk)^{-1}$ and $$(f_g \circ f_k)(h)=f_g( f_k(h) ) = f_g(khk^{-1})=g(khk^{-1})g^{-1}=(gk)h(gk)^{-1}=f_{gk}(h)$$ so $f_{gk} = f_g \circ f_k$, meaning $$f(gk)=f(g) \circ f(k).$$ Therefore $f$ is a homomorphism. We have then \begin{align*} \mathrm{Ker}(f) &= \{ g \in G : f(g)=\mathrm{id}_H \} \\ &= \{ g \in G: f_g = \mathrm{id}_H \} \end{align*} and so if $g \in \mathrm{Ker}(f)$ then for all $h \in H$, $f_g(h)=ghg^{-1}=h$ and so $gh=hg$, meaning that $g \in C_G(H)$. Further, if $g \in C_G(H)$ then $f_g(h)=ghg^{-1}=hgg^{-1}=h$ for all $h \in H$, so $f_g=\mathrm{id}_H$ and $g \in \mathrm{Ker}(f)$. We have shown then that $$\mathrm{Ker}(f)=C_G(H).$$ Therefore, we can define the function $j: G \rightarrow \mathrm{Im}(f)$ by $j(g)=f(g)$. This function $j$ is a surjective homomorphism, and so by the first isomorphism theorem, since $\mathrm{Ker}(f)=\mathrm{Ker}(j)$, we have $$G/C_G(H) \cong \mathrm{Im}(f)$$ and we know that $\mathrm{Im}(f) \leq \mathrm{Aut}(H)$, proving what was needed. $\textbf{Proof of desired thing: }$ We have that $H$ is a normal subgroup of $N_G(H)$ and so, from the above, $N_G(H)/C_{N_G(H)}(H) \cong B \leq \mathrm{Aut}(H)$, and further $C_{N_G(H)}(H)=C_G(H)$, proving the statement.
Real roots of a polynomial with all its coefficients equal to 1 or -1
There can be as many roots in this interval as you like. Namely, start with the polynomial $f_1(x) = -x^2-x+1$, which has one root $\phi^{-1}$ strictly between $0$ and $1$ (here $\phi$ is the golden ratio). Next we can consider $$f_2(x) = f_1(x)f_1(x^3) = x^8+x^7-x^6+x^5+x^4-x^3-x^2-x+1,$$ which has all its coefficients $\pm 1$ and whose roots include $\phi^{-1}$ and $\phi^{-1/3}$. And in general $$ f_n(x) = \prod_{j=0}^{n-1} f_1(x^{3^j}) = \sum_{i=0}^{3^n-1} (-1)^{(\text{number of nonzero digits in base-3 rep of }i)}\, x^i$$ is a polynomial of degree $3^n-1$ with coefficients $\pm 1$ and $n$ roots strictly between $0$ and $1$. In order to meet your precise specification, negate all of the odd-degree coefficients to move all $n$ roots to $(-1,0)$.
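A numeric check of $f_2$ in R, with coefficients listed in increasing degree as read off from the expansion above:

    f2 <- c(1, -1, -1, -1, 1, 1, -1, 1, 1)  # 1 - x - x^2 - x^3 + x^4 + x^5 - x^6 + x^7 + x^8
    r <- polyroot(f2)
    re <- Re(r[abs(Im(r)) < 1e-8])          # keep the numerically real roots
    re[re > 0 & re < 1]                     # ~0.618 = 1/phi and ~0.852 = phi^(-1/3)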
On the symmetric labelled structure of 2-regular graphs
You need to begin by finding the generating function for the number of connected $2$-regular graphs. The connected $2$-regular graphs are the undirected cycle graphs. There are $(n-1)!$ labelled directed cycle graphs on $n$ vertices, so there are $\frac12(n-1)!$ labelled undirected cycle graphs on $n$ vertices: each undirected cycle graph can be directed in $2$ ways. Of course a cycle requires at least $3$ vertices, so if $c_n$ is the number of labelled undirected cycle graphs on $n$ vertices, we have $$c_n=\begin{cases} 0,&amp;n=0,1,2\\\\ \frac12(n-1)!,&amp;n\ge 3\;. \end{cases}$$ This yields the exponential generating function $$\begin{align*} C(x) &amp;= \sum_{n\ge 0}c_n\frac{x^n}{n!}\\ &amp;=\frac12\sum_{n\ge 3}\frac{(n-1)!}{n!}x^n\\ &amp;=\frac12\sum_{n\ge 3}\frac{x^n}n\\ &amp;=\frac12\left(\sum_{n\ge 1}\frac{x^n}n-x-\frac{x^2}2\right)\;. \end{align*}$$ You should recognize (or be able easily to discover) the generating function for the series $\sum_{n\ge 1}\frac{x^n}n$, and after that it’s just a matter of plugging it into the exponential formula to get $G(x)=e^{C(x)}$.
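A numeric check in R: expand $G(x)=e^{C(x)}$ with the standard recurrence $g_n = \frac1n\sum_{k=1}^n k\,c_k\,g_{n-k}$ for exponentiating a power series (from $G'=C'G$), then multiply by $n!$ to recover labelled counts (a sketch; the values 1, 3, 12, 70 for $n=3,\dots,6$ are the numbers of labelled 2-regular graphs):

    N <- 8
    ck <- numeric(N); ck[3:N] <- 1 / (2 * (3:N))  # coefficients of C(x)
    g <- numeric(N + 1); g[1] <- 1                # g[n+1] holds [x^n] e^{C(x)}
    for (n in 1:N)
      g[n + 1] <- sum((1:n) * ck[1:n] * g[n:1]) / n
    round(factorial(0:N) * g)                     # 1 0 0 1 3 12 70 465 3507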