How can I prove that this polynomial is irreducible in $\mathbb{Q}[x]$?
By the rational root theorem the only possible rational roots are $\pm 1, \pm 2$, and by inspection none of these are roots. If the polynomial is reducible, it therefore factors into the product of a quadratic and cubic factor (over $\mathbb{Z}$ by Gauss's lemma). $\bmod 2$ the polynomial factors as $x(x^4 + x + 1)$. The latter factor has no root $\bmod 2$, so if it is reducible it is the product of two irreducible quadratics. But the only irreducible quadratic $\bmod 2$ is $x^2 + x + 1$, and $(x^2 + x + 1)^2 = x^4 + x^2 + 1$. Hence $x^4 + x + 1$ is irreducible $\bmod 2$. But if the polynomial factored as the product of a quadratic and cubic factor over $\mathbb{Z}$, it would only have at most cubic irreducible factors $\bmod 2$; contradiction. Hence the polynomial is irreducible.
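The mod-2 computations above can be double-checked mechanically; here is a small sketch using bitmask-encoded polynomials over $\mathbb{F}_2$ (each bit is a coefficient):

```python
def gf2_mul(a, b):
    """Multiply two GF(2)[x] polynomials encoded as bitmasks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_mod(a, m):
    """Remainder of a modulo m in GF(2)[x]."""
    dm = m.bit_length() - 1
    while a and a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

P = 0b10011  # x^4 + x + 1
# a quartic is irreducible iff it has no divisor of degree 1 or 2
divisors = [d for d in range(2, 8) if gf2_mod(P, d) == 0]
# (x^2 + x + 1)^2 = x^4 + x^2 + 1, which differs from x^4 + x + 1
square = gf2_mul(0b111, 0b111)
```

The empty `divisors` list confirms the irreducibility claim, and `square` reproduces the computation $(x^2+x+1)^2 = x^4+x^2+1$.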
How does recursion evaluate in type theory
The correspondence of types to numerals in this way is a rather hand-wavy affair outside of finite types, so the correct answer to your question is: infinite. Now of course that's rather boring. In some specific cases we can still work with these values, though the degree to which it helps us isn't all that high. For a number of types, similar to the map you've proposed above (or standard lists), they can be represented as power series, including this map here. Your type corresponds to an infinite sum $\sum_0^\infty 6^i$ in some sense, but this isn't a valid power series, and there's little sense we can make out of an infinite sum this way. One nice solution is to add a term for the level of indirection, so that it's a correct power series $\sum_0^\infty 6^i x^i$. This is about the best we can do to assign a value (in a non-useless way), by expanding our domain of definition from $\mathbb{N}$ to $\mathbb{N}((x))$. To give you a bit more of what you're possibly looking for, we can consider it as a power series over $\mathbb{C}$ instead. For a few equations, we can use proofs with complex numbers, which we can carry back. So in this case, the series converges to $-\frac{1}{5}$ (as a standard geometric series). This is the origin of the famous "Seven Trees in One", and there's a good exposition on the generalizability in this paper. You may also want to take a look at another paper with some generalizations of the concept you're using here. You may also be interested in combinatorial species, for which there are a few standard resources, and which use power series for similar counting arguments (with symmetry). They're part of the structure types mentioned in the second paper. The book Combinatorial Species and Tree-Like Structures is a good exposition if you can find a copy of it.
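The value $-\frac{1}{5}$ is just the closed form $\frac{1}{1-z}$ of the geometric series evaluated at $z=6$, outside its disc of convergence (analytic continuation, not a literally convergent sum); a one-line check of that arithmetic:

```python
from fractions import Fraction

# 1/(1 - z) at z = 6: the analytic continuation of sum_{i>=0} z^i
value = 1 / (1 - Fraction(6))
```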
Number of solution(s) of a complex equation
Notice that $z=0$ is NOT a solution (see the denominator $|z|$). So $z\not=0$ and by multiplying the equation by $z^2$ we obtain $$z^5+\frac{3z^2(\bar z)^2}{|z|}=z^5+3|z|^3=0\Rightarrow |z|^5=3|z|^3\Rightarrow r=|z|=\sqrt{3}.$$ Hence, by letting $z=\sqrt{3}e^{i\theta}$, it remains to solve $$z^5+3(\sqrt{3})^3=0\Leftrightarrow \left(\frac{z}{\sqrt{3}}\right)^5=-1\Leftrightarrow e^{i5\theta}=\cos(5\theta)+i\sin(5\theta)=-1.$$ Can you take it from here?
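Assuming the original equation was $z^3+\frac{3\bar z^2}{|z|}=0$ (which is what multiplying by $z^2$ suggests; this reconstruction is an assumption), the resulting five roots $z=\sqrt3\,e^{i(\pi+2k\pi)/5}$ can be checked numerically:

```python
import cmath
import math

r = math.sqrt(3)
# 5*theta = pi + 2*k*pi, so theta = (pi + 2*k*pi)/5 for k = 0, ..., 4
roots = [r * cmath.exp(1j * (math.pi + 2 * math.pi * k) / 5) for k in range(5)]
# residual of z^3 + 3*conj(z)^2/|z| at each candidate root
residuals = [abs(z**3 + 3 * z.conjugate()**2 / abs(z)) for z in roots]
```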
Permutations of three points
Bear in mind that the elements of the permutation group are acting on the set of symbols, so $P_{(12)} P_{(13)}$ acts on a string of symbols, say $x$. It is sometimes difficult to interpret switching numbers, so what we could do is relabel the numbers as letters of corresponding order and then switch back at the end. Of course, our set need not be either numbers or letters, merely elements of a set. With this in mind we can see that we must work from right to left with permutations. As an example, let's see how that combination works on the string $x=(a,b,c,d,e,f,g)$: $$ P_{(12)} P_{(13)} (a,b,c,d,e,f,g) = P_{(12)} (c,b,a,d,e,f,g) = (b,c,a,d,e,f,g).$$ Let's look at a more complicated example: $$ P_{(124)} P_{(423)} (a,b,c,d,e,f,g).$$ For this look at $P_{(423)} (a,b,c,d,e,f,g)$ first: $$(a,b,c,d,e,f,g) \mapsto (a,d,c,b,e,f,g) \mapsto (a,c,d,b,e,f,g) \mapsto (a,c,b,d,e,f,g)$$ Then simply apply $P_{(124)}$ to the resulting string at the end: $$ P_{(124)} P_{(423)} (a,b,c,d,e,f,g) = P_{(124)} (a,c,b,d,e,f,g), $$ so $$ (a,c,b,d,e,f,g) \mapsto (c,a,b,d,e,f,g) \mapsto (c,d,b,a,e,f,g) \mapsto(a,d,b,c,e,f,g). $$
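The right-to-left rule is easy to check mechanically if we implement a transposition as a swap of (1-based) positions; the first example above becomes:

```python
def swap(seq, i, j):
    """Apply the transposition (i j) to a tuple, using 1-based positions."""
    s = list(seq)
    s[i - 1], s[j - 1] = s[j - 1], s[i - 1]
    return tuple(s)

x = ('a', 'b', 'c', 'd', 'e', 'f', 'g')
# P_(12) P_(13): apply (13) first, then (12)
result = swap(swap(x, 1, 3), 1, 2)
```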
Proving irreducibility; What is this method and what is the logic behind it?
First observe the beginning of the polynomial is the beginning of the expansion of $$(x-1)^5=\color{red}{x^5-5x^4+10x^3}-10x^2+5x-1,$$ so we rewrite $p(x)$ as $$(x-1)^5+3(x^2+x-1).$$ Now set $s=x-1$, and write everything with $s$: $$p(x)=p(s+1)=s^5+3(s^2+3s+1).$$ Eisenstein's criterion says $p(s+1)$ is irreducible, hence $p(x)$ is, since $x\mapsto x-1$ defines an automorphism of $\mathbf Q[x]$.
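Both the substitution and the Eisenstein condition at $p=3$ for $s^5+3s^2+9s+3$ are quick mechanical checks:

```python
# coefficients of s^5 + 3(s^2 + 3s + 1), constant term first
coeffs = [3, 9, 3, 0, 0, 1]
p = 3
eisenstein = (
    coeffs[-1] % p != 0                       # p does not divide the leading coefficient
    and all(c % p == 0 for c in coeffs[:-1])  # p divides every other coefficient
    and coeffs[0] % (p * p) != 0              # p^2 does not divide the constant term
)

# the substitution s = x - 1 turns 3(x^2 + x - 1) into 3(s^2 + 3s + 1):
# check the two forms agree at several integer points
diffs = [(t**2 + t - 1) - ((t - 1)**2 + 3 * (t - 1) + 1) for t in range(-3, 4)]
```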
Ratio of triangle's areas
Consider $\overline{AB}$ as the base of $\triangle ABC$. By dropping perpendiculars from $C$ and $E$ to $\overline{AB}$, and examining the similar triangles thus created from vertex $A$, we see that the altitude of $\triangle ABE$ is smaller than the altitude of $\triangle ABC$ by a factor of $s$, so the area of $\triangle ABE$ is also smaller than the area of $\triangle ABC$ by a factor of $s$. Similarly, the base $AD$ is smaller than the base $AB$ by a factor of $r$, so the area of $\triangle ADE$ is also smaller than the area of $\triangle ABE$ by a factor of $r$. Putting these together, we find that the area of $\triangle ADE$ is smaller than the area of $\triangle ABC$ by a factor of $rs$.
Definition of a permutation as product of cycles
$\rho$ is the original permutation. In this way you can decompose $\rho$ into component functions that move disjoint sets of elements. Take the example $\rho=(123)(45)$. There are two disjoint cycles, $\rho_1=(123)$ and $\rho_2=(45)$. Now if you take $1$, which is in the first cycle, then $\rho_1(1)=\rho(1)=2$, BUT as $\rho_2$ is disjoint from $\rho_1$, it fixes $1$; that is, $\rho_2(1)=1$. In this way you can decompose the original permutation into component permutations. Edit: I do not think your second statement is correct, as every cycle which does not contain $x$ fixes it. That is, if you have disjoint cycles, then only one cycle moves $x$; the others send $x$ to $x$ ("a function fixes $x$" means $f(x)=x$). And for statement 1: the cycle containing $x$ is only a part of the original permutation, so how could it be the whole permutation?
What's a proof that a set of disjoint cycles is a bijection?
If you want to use a minimum of machinery, I don’t think that you can get too much simpler than these: (2) Fix $d\in D$ and consider the sequence $\langle d,f(d),f^2(d),f^3(d),\dots\rangle$; $D$ is finite, so there must be distinct $i,k\in\Bbb N$ such that $i<k$ and $f^i(d)=f^k(d)$. Since $f$ is a bijection, it has an inverse $g$. Let $n=k-i$, so that $f^k(d)=f^i\big(f^n(d)\big)$. Then $$d=g^i\big(f^i(d)\big)=g^i\big(f^k(d)\big)=g^i\big(f^i\big(f^n(d)\big)\big)=f^n(d)\;.$$ (1) To show that $f$ is injective, suppose that $f(d)=f(e)$ for some $d,e\in D$. There are positive integers $m$ and $n$ such that $f^m(d)=d$ and $f^n(e)=e$. But then by an easy induction $f^{km}(d)=d$ and $f^{kn}(e)=e$ for all $k\in\Bbb N$, and therefore $d=f^{mn}(d)=f^{mn}(e)=e$. To show that $f$ is surjective, merely observe that if $d\in D$, then there is $n\in\Bbb Z^+$ such that $d=f^n(d)=f\big(f^{n-1}(d)\big)\in\operatorname{ran}f$.
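Argument (2) can be watched in action: for a bijection of a finite set, iterating $f$ from any starting point must return to it. A small sketch:

```python
def return_time(f, d):
    """Least n >= 1 with f^n(d) = d, for a bijection f given as a dict."""
    x, n = f[d], 1
    while x != d:
        x, n = f[x], n + 1
    return n

# the product of disjoint cycles (1 2 3)(4 5)
f = {1: 2, 2: 3, 3: 1, 4: 5, 5: 4}
times = {d: return_time(f, d) for d in f}
```

Each point returns after the length of the cycle containing it, matching the $f^n(d)=d$ used in both parts of the proof.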
what the central limit theorem says
Maybe this helps: take a random variable with finite second moment. Let's say that $\frac{S_n}{n}$ is the empirical mean of the random variable, and $\mu$ the theoretical mean. In this setting, $$\displaystyle \frac{S_n}{n} - \mu$$ is the deviation of the empirical mean from the theoretical one. What the CLT says is: with an appropriate scaling, the deviations are normally distributed, i.e. $$\mathbb P\left( \frac{\sqrt{n}}{\sigma}\left(\frac{S_n}{n} - \mu\right) \leq x \right) \to \Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x \exp\left(-\frac{y^2}{2}\right) dy.$$ Quoting Frank den Hollander (Large Deviations, AMS): "CLT quantifies the probability that $S_n$ differs from $\mu n$ by an amount of order $\sqrt{n}$. Deviations of this size are called "normal". [...] [Deviations of size $n$] are called "large"." An equivalent, informal way of stating the result above is: $$\frac{S_n}{n} \approx \mathcal N\left(\mu, \frac{\sigma^2}{n}\right),$$ so, I would say that you are right.
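A quick simulation illustrates the statement: for i.i.d. Uniform(0,1) draws ($\mu=1/2$, $\sigma^2=1/12$), the empirical probability that the scaled deviation is $\le 1$ should be close to $\Phi(1)\approx 0.841$:

```python
import math
import random

random.seed(0)
mu, sigma = 0.5, math.sqrt(1 / 12)
n, trials, x = 50, 4000, 1.0

count = 0
for _ in range(trials):
    s_over_n = sum(random.random() for _ in range(n)) / n  # empirical mean
    if math.sqrt(n) / sigma * (s_over_n - mu) <= x:
        count += 1
empirical = count / trials
phi_1 = 0.5 * (1 + math.erf(x / math.sqrt(2)))  # standard normal CDF at 1
```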
Is there a closed form expression for entropy on tanh transform of gaussian random variable?
For an invertible, differentiable transformation such as $\tanh$, the relation you state holds with equality; for general (non-injective) maps it weakens to an upper bound: $$ h(U) \leq h(X) + \int f(x) \log\left|\frac{d (\tanh(x))}{dx}\right| dx $$ To obtain the differential entropy of the transformed random variable we can use the definition: $$ h(U)=-\int _{\operatorname{supp}(U)}f_U(x)\log f_U(x)\,dx$$ First we need to compute the density of $U=\tanh(X)$. Note that the cdf of $U$ is: $$F_U(x)= \begin{cases} 1 & \text{if } x>1 \\ F_X(\tanh^{-1}(x)) & \text{if } x \in [-1,1] \\ 0 & \text{if } x<-1 \end{cases} $$ As it is continuous ($F_X(\tanh^{-1}(-1))=F_X(-\infty)=0$ and $F_X(\tanh^{-1}(1))=F_X(\infty)=1$), $U$ has a Lebesgue pdf, which by the chain rule picks up the Jacobian $\frac{d}{dx}\tanh^{-1}(x)=\frac{1}{1-x^2}$: $$ f_U(x)=\frac{f_X(\tanh^{-1}(x))}{1-x^2}\, I_{[-1,1]}(x) = \frac {1}{\sigma \sqrt{2 \pi}\,(1-x^2)} e^{-\frac{1}{2}\left(\frac{\tanh^{-1}(x) - \mu}{\sigma}\right)^2} I_{[-1,1]}(x)$$ Finally, using natural (base $e$) logarithmic units: $$h(U) = - \int_{-1}^{1} f_U(x) \left [-\ln(\sigma \sqrt{2\pi}) - \frac{1}{2} \left (\frac{\tanh^{-1}(x) - \mu}{\sigma} \right)^2 - \ln(1-x^2) \right]dx$$ $$h(U)=\frac{1}{2}\ln(2\pi \sigma^2) + \frac{1}{2} E\left[ \left (\frac{\tanh^{-1}(U) - \mu}{\sigma} \right)^2 \right] + E\left[\ln(1-U^2)\right] $$ Since $\tanh^{-1}(U)=X\sim N(\mu,\sigma^2)$, the middle expectation equals $1$, so $$h(U)=\frac{1}{2}\ln(2\pi e \sigma^2) + E\left[\ln(1-\tanh^2(X))\right] = h(X) + E\left[\ln\operatorname{sech}^2(X)\right],$$ and the remaining expectation has no elementary closed form.
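Because $\tanh$ is a smooth bijection onto $(-1,1)$, the change-of-variables identity for differential entropy gives $h(U)=h(X)+E[\log(1-\tanh^2 X)]$; the correction term has no elementary closed form, but it is easy to estimate by Monte Carlo (the parameter values below are just a sketch):

```python
import math
import random

random.seed(1)
mu, sigma = 0.3, 0.8

# h(X) for X ~ N(mu, sigma^2)
h_X = 0.5 * math.log(2 * math.pi * math.e * sigma**2)
samples = [random.gauss(mu, sigma) for _ in range(100000)]
# E[log |d tanh/dx|] = E[log(1 - tanh(X)^2)], which is always negative
correction = sum(math.log(1 - math.tanh(t) ** 2) for t in samples) / len(samples)
h_U = h_X + correction
```

Since $\tanh$ contracts everywhere, $h(U) < h(X)$, which the estimate reflects.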
Gradient of an implicitly defined function?
You can apply the derivative to identities only. For example, we know that $$(x-1)^2 = x^2 - 2x + 1 $$ right? Taking derivatives, we get $$2(x-1) = 2x - 2,$$ which is true. Now, this serves as an explanation for why, in general, we can't use derivatives to solve equations. Suppose I want to solve the equation $$\frac{x^3}{3} - \frac{5x^2}{2} + 6x = 0 $$ Third degree, too hard. Let's differentiate. We get: $$x^2 - 5x + 6 = 0,$$ which is easy and has $2$ and $3$ as solutions. But these are not solutions to the original equation. So, unless your $F(x,y,z) = 0 $ is an identity, $\nabla F(x,y,z)$ will hardly be $\bf 0 $.
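The failure is easy to exhibit numerically: the roots of the differentiated equation do not satisfy the original one.

```python
def original(x):
    """Left-hand side of x^3/3 - 5x^2/2 + 6x = 0."""
    return x**3 / 3 - 5 * x**2 / 2 + 6 * x

def derivative(x):
    """Its derivative, x^2 - 5x + 6."""
    return x**2 - 5 * x + 6

# 2 and 3 solve the differentiated equation but not the original one
root_checks = [(derivative(t), original(t)) for t in (2, 3)]
```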
Determine the Integral $\int_{-\infty}^{\infty} e^{-x^2} \cos(2bx)dx$
If we set: $$ f(b) = \int_{\mathbb{R}} e^{-x^2}\cos(2bx)\,dx \tag{1}$$ we have: $$ f'(b) = -\int_{\mathbb{R}} 2x\,e^{-x^2} \sin(2bx)\,dx \stackrel{\text{IBP}}{=}-2b\int_{\mathbb{R}}e^{-x^2}\cos(2bx)\,dx=-2b\,f(b).\tag{2}$$ So we have that $f$ is a solution of a separable differential equation and $$ f(b) = f(0)\, e^{-b^2}.\tag{3}$$ Since $f(0)=\sqrt{\pi}$, $$ \int_{\mathbb{R}} e^{-x^2}\cos(2bx)\,dx = \color{red}{\sqrt{\pi}\, e^{-b^2}}\tag{4}$$ follows.
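The closed form in (4) is easy to sanity-check against numerical quadrature (trapezoidal rule on a truncated domain, where $e^{-x^2}$ is negligible beyond $|x|=8$):

```python
import math

def f_numeric(b, n=40000, L=8.0):
    """Trapezoidal estimate of the integral of exp(-x^2) cos(2bx) over [-L, L]."""
    h = 2 * L / n
    total = 0.0
    for i in range(n + 1):
        x = -L + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-x * x) * math.cos(2 * b * x)
    return total * h

b = 0.7
closed_form = math.sqrt(math.pi) * math.exp(-b * b)
```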
Isometry group of $3$-sphere
Consider $S^3 = \{(x,y)^t\in\mathbb C^2|\|(x,y)\|=1\}\subset \mathbb C^2$. Then the matrix $A:=\left(\begin{matrix}x&-\overline{y}\\y&\overline x\end{matrix}\right)$ is unitary, has determinant $1$ and $A\cdot(1,0)^t = (x,y)^t$.
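The claimed properties of $A$ amount to a direct computation; a numeric check at a sample point of $S^3$:

```python
# a point (x, y) in C^2 with |x|^2 + |y|^2 = 1
x, y = complex(0.6, 0.0), complex(0.0, 0.8)
A = [[x, -y.conjugate()], [y, x.conjugate()]]

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
first_column = (A[0][0], A[1][0])              # A applied to (1, 0)^t
col_inner = (A[0][0].conjugate() * A[0][1]
             + A[1][0].conjugate() * A[1][1])  # orthogonal columns => A unitary
```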
Convergence in $L^p$ spaces.
$(\Rightarrow)$ By the reverse triangle inequality, $$ | \|g_n\| -\|g\| | \leq \| g_n -g \|. $$ Since $\| g_n -g \| \to 0$, clearly $\|g_n\| \to \|g\|$. The converse is also true as long as $g, g_n \in L ^1$ and $g_n \to g$ a.e. In that case you can use the following argument: $(\Leftarrow)$ First note that since $|g_n -g| \leq |g_n| + |g|$, we have $|g_n -g| \in L^1$. Also $ |g_n| + |g| \to 2|g|$ a.e., and by hypothesis $\int |g_n| + |g| \,dm \to \int 2|g| \,dm$; moreover, $|g-g_n| \to 0 $ a.e. Thus by a generalized version of the dominated convergence theorem we conclude that $$ \|g_n - g \| \to 0 $$
Wrapping a quadratic about a sphere
(This is really a long comment.) I think I understand what you are asking --- and I think it is a nice question! I am imagining "wrapping" as a function $w: \mathbf R^2 \rightarrow S^2$, where $S^2$ is the sphere, that tells us where each point in the plane (your "2-d sheet") should go to on the sphere. You also require that $w$ maps the $x$-axis in $\mathbf R^2$ to a circumference of the sphere. You are then asking about the image $w(P)$, where $P$ is the parabola $y=x^2$. However, there is a basic issue with this set-up, namely curvature. Have you ever tried to wrap a sheet of paper around a ball? You find that it is impossible to do so without crumpling the paper in some way. This is precisely because the sheet of paper is flat, but the surface of the ball is curved. (Fun question: why doesn't this happen when you wrap the toilet paper around the cardboard tube?) In fact, it is impossible to map even a tiny piece of the sheet onto the sphere in a way that preserves distances (i.e. without stretching or shrinking) and does not "crumple" the sheet. On the other hand, if you are willing to shrink as you wrap, then there is a very elegant wrapping function called stereographic projection. Rather than try to describe it in words, let me just point to the nice illustration in the Wikipedia article on stereographic projection (created by User:Mark.Howison). I will leave it as an exercise to imagine how a parabola in the plane will map to the surface of the sphere under this function.
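For the curious, the parabola can be pushed to the sphere pointwise with the inverse of stereographic projection from the north pole $(0,0,1)$ (the formulas below are the standard parametrization, assumed here rather than taken from the answer above):

```python
import math

def inverse_stereographic(x, y):
    """Send a plane point to the unit sphere, inverting projection from (0, 0, 1)."""
    d = 1 + x * x + y * y
    return (2 * x / d, 2 * y / d, (x * x + y * y - 1) / d)

# image of the parabola y = x^2: a curve on the sphere approaching the north pole
curve = [inverse_stereographic(t, t * t) for t in (-2, -1, 0, 1, 2)]
norms = [math.sqrt(px * px + py * py + pz * pz) for px, py, pz in curve]
```

The origin maps to the south pole, and points far out on the parabola land ever closer to the north pole.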
Help prove $f:X \rightarrow Y$ is an injection $\Leftrightarrow$ $f:X\rightarrow Y$ is a surjection when $|X|=|Y|$
Before throwing around the pigeonhole principle, just try to reason through a few simple cases. It may help to look at the situation with sets of 4 or 5 elements, and then generalize what you find. Here are some things to think about: For #3, note that it is impossible to have distinct $a, b \in X$ such that $f(a) = f(b)$. So what does that tell you about the number of elements of $Y$ that are in the image of $f$? For #4, suppose there are two distinct elements of $X$, say $a$ and $b$, that get mapped to the same $y \in Y$. Then use the surjectivity assumption to count how many elements $X$ must have. Note that the analogous statement is clearly not true when dealing with sets of the same infinite cardinality (e.g. take $X = \mathbb{N}$ and $Y = \mathbb{N}$, it's easy to come up with a counterexample).
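For finite sets the equivalence can even be verified exhaustively for small $n$, which is a good way to build confidence before proving it:

```python
from itertools import product

def injective_iff_surjective(n):
    """Check, for every f: {0..n-1} -> {0..n-1}, that f is 1-1 iff it is onto."""
    for f in product(range(n), repeat=n):
        injective = len(set(f)) == n
        surjective = set(f) == set(range(n))
        if injective != surjective:
            return False
    return True
```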
Real Analysis, Folland problem 3.3.23 Differentiation on Euclidean Space
First of all, the supremum in the definition of $H^*f(x)$ is taken over a set which contains the set in the definition of $Hf(x)$. Since suprema get larger as the sets get larger, this shows that $Hf(x)\leq H^*f(x)$. For the other inequality, let $B$ be any ball containing $x$, and suppose that $B$ has radius $r$. Then $B\subset B(2r,x)$, hence $$\frac{1}{m(B)}\int_B|f(y)|\;dy\leq \frac{m(B(2r,x))}{m(B)}\frac{1}{m(B(2r,x))}\int_{B(2r,x)}|f(y)|\;dy\leq 2^n Hf(x)$$ Since $B$ was any ball containing $x$, taking the supremum over all such balls shows that $H^*f(x)\leq 2^nHf(x)$.
Closure of the topologist's sine curve $Y = \lbrace (x,\sin(\frac{1}{x})) : x > 0 \rbrace $
The open set $(\leftarrow,0)\times\Bbb R$ (i.e., the open left half-plane) shows that $Y$ has no limit points to the left of the $y$-axis. $\Bbb R$ is a Hausdorff space, and the sine function is continuous on $(0,\to)$, so $Y$ is a closed subset of $(0,\to)\times\Bbb R$; this shows that the only limit points of $Y$ to the right of the $y$-axis are the points of $Y$. Thus, all of the limit points of $Y$ not in $Y$ must be on the $y$-axis. The open sets $\Bbb R\times(1,\to)$ and $\Bbb R\times(\leftarrow,-1)$ are disjoint from $Y$, so any limit points of $Y$ on the $y$-axis must lie on the segment $\{0\}\times[-1,1]$. To complete the argument, for each $y\in[-1,1]$ find a sequence $\langle x_n:n\in\Bbb N\rangle$ in $\Bbb R$ such that $\big\langle\langle x_n,y\rangle:n\in\Bbb N\big\rangle$ converges to $\langle 0,y\rangle$.
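The sequence asked for in the last step can be written down explicitly: for $y\in[-1,1]$, take $x_n = 1/(\arcsin y + 2\pi n)$, so that $\sin(1/x_n)=y$ and $x_n\to 0^+$. A numeric check:

```python
import math

def x_n(y, n):
    """Points x_n > 0 with sin(1/x_n) = y, tending to 0 as n grows."""
    return 1.0 / (math.asin(y) + 2 * math.pi * n)

y = 0.5
xs = [x_n(y, n) for n in range(1, 6)]
values = [math.sin(1.0 / x) for x in xs]
```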
Finding the Fourier series of an absolute value function.
In your formula, $L$ is the period of your function. In your case $f$ is $2\pi$-periodic, so you have $L=2\pi$. But since your function is $2\pi$-periodic, you could also integrate between $-L/2$ and $L/2$. And if you integrated between $0$ and $2L$, the factor in front of the integral would be $\frac{1}{L}$: since you double the size of the interval, you divide the integral by $2$. And you are only calculating the cosine terms as a consequence of the fact that $f$ is an even function. In the end you have: $$f(t)=\frac {2}{\pi} - \sum\limits_{n = 1}^\infty {\frac{4}{\pi((2n)^2-1)}\cos(2nt)}$$
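The coefficients shown are those of $f(t)=|\sin t|$; taking that as the function (an assumption, since the question's $f$ isn't reproduced here), the partial sums with the series starting at $n=1$ can be checked numerically:

```python
import math

def partial_sum(t, N=2000):
    """Partial Fourier sum 2/pi - sum_{n=1}^N 4 cos(2nt) / (pi (4n^2 - 1))."""
    s = 2 / math.pi
    for n in range(1, N + 1):
        s -= 4 * math.cos(2 * n * t) / (math.pi * (4 * n * n - 1))
    return s

errors = [abs(partial_sum(t) - abs(math.sin(t))) for t in (0.3, 1.0, 2.5)]
```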
Is it possible to prove this combinatorially?
You want to prove that there are $2^{n-1}$ ways to select an even number of elements out of a set with $n$ elements. First pick any subset of the first $n-1$ elements; there are $2^{n-1}$ ways to do this. Each of these ways extends uniquely to a way of selecting an even number of elements from $n$ elements, by choosing the last element if we have an odd number of elements and by not choosing the last element if the number of elements we have is already even. In other words, there is a bijection between the even subsets of an $n$-element set and all the subsets of an $(n-1)$-element set; as a result, in both cases there are $2^{n-1}$ such sets.
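The count can be spot-checked by brute force for small $n$:

```python
from itertools import combinations

def count_even_subsets(n):
    """Number of subsets of an n-element set with an even number of elements."""
    return sum(1 for k in range(0, n + 1, 2)
               for _ in combinations(range(n), k))
```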
Where is the symmetric group hidden in the Yoneda lemma?
$S(G)$ is the set of all bijections in $\mathcal{Set}$ from $|G|$ to itself, where I denote by $|G|$ the underlying set of $G$. This is exactly the permutation group on $|G|$. Now, the application of Yoneda that gets us Cayley is, as you say, taking $F=\hom_{\hat G}(B,-),$ where $\hat{G}$ denotes the one-object category with $G$ as its arrows. Call its only object $X$. So, for every $A,B,$ we get $F(A)=\hom(B,A)=\textrm{nat} (\hom(A,-),\hom(B,-))$. But the only candidate for $A$ or $B$ is $X$, so all we really have is $\hom(X,X)=\textrm{nat} (\hom(X,-),\hom(X,-))$. Now by the construction of $\hat{G}, \hom(X,X)=G,$ so now we just want to see why the natural transformations from $\hom(X,-)$ to itself are contained in the bijections on $|G|$. First, the objects of the image of $\hom(X,-)$ are just $\hom(X,X)=|G|.$ Let's interpret the images of morphisms in $\hat{G}$, which are the elements $g$ of $G$, by the right action: $\hom(X,g): h \in |G| \mapsto hg$. Now a natural transformation $\alpha$ needs a component morphism at each object in the image of the functor, but since our functors have singleton images let's identify $\alpha$ with $\alpha_{|G|}$. The naturality $\alpha$ needs is given by this diagram: $$ \newcommand{\ra}[1]{\kern-1.5ex\xrightarrow{\ \ #1\ \ }\phantom{}\kern-1.5ex} \newcommand{\ras}[1]{\kern-1.5ex\xrightarrow{\ \ \smash{#1}\ \ }\phantom{}\kern-1.5ex} \newcommand{\da}[1]{\bigg\downarrow\raise.5ex\rlap{\scriptstyle#1}} \begin{array}{ccc} |G|&\ra{\alpha}&|G|\\ \da{g}&&\da{g}\\ |G|&\ra{\alpha}&|G|\\ \end{array} $$ That is, we need $\alpha(hg)=(\alpha(h))g$ for each $h,g$. We can obviously accomplish this for a set of $\alpha$ isomorphic to $G$ by letting each $k$ in $G$ act on the left. Note that any natural $\alpha$ will have to be a bijection on $|G|,$ in short because the right action of $G$ on itself is transitive. So we see that the admissible bijections are some subset of $S(|G|)$; by Yoneda, they're exactly $G$, so that $G \leq S(|G|)$.
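Stripped of the categorical language, the conclusion is Cayley's theorem: left multiplication embeds $G$ into the permutations of $|G|$. A small sketch:

```python
def left_multiplications(elements, op):
    """Map each g to the left-multiplication function h -> g*h on the set."""
    return {g: {h: op(g, h) for h in elements} for g in elements}

# Z/4 under addition mod 4
Z4 = list(range(4))
emb = left_multiplications(Z4, lambda a, b: (a + b) % 4)
# each image is a bijection of the underlying set, and distinct g give distinct maps
bijections = all(sorted(emb[g].values()) == Z4 for g in Z4)
distinct = len({tuple(emb[g][h] for h in Z4) for g in Z4}) == len(Z4)
```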
Is it possible to create a function that's onto but not 1-1 between two sets with the same cardinality?
If the two sets have infinite cardinality, then yes; take for example the function that sends each natural number to its half, rounded down. The function is clearly onto, because we know the number $2k$ will be mapped to $k$, for any $k$ you pick. But it is not 1-1 because, for example, both $3$ and $2$ are mapped to $1$. In the case of finite sets, such a function is not possible: an onto function $f: S \to T$ can exist iff $|S| \geq |T|$, and a 1-1 function can exist iff $|T| \geq |S|$. If $|T| = |S|$ (finite), then a function that is onto is automatically 1-1.
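The example from the first paragraph, concretely:

```python
def half_floor(n):
    """n -> floor(n/2): onto the naturals, but not one-to-one."""
    return n // 2

# every k is hit (by 2k), yet 2 and 3 collide
hits = [half_floor(2 * k) for k in range(10)]
collision = (half_floor(2), half_floor(3))
```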
A question about meaning of a notation
It means that you take the sum over the cycles based on $(a \ b \ c)$ which are $$\begin{cases} (a \ b \ c) \to (a \ b \ c)\\ (a \ b \ c) \to (b \ c \ a)\\ (a \ b \ c) \to (c \ a \ b) \end{cases}$$
How can I prove that $A \setminus (B \setminus C) \subseteq (A \setminus B) \cup C$?
Let $x\in A\setminus(B\setminus C)$. By definition, we have that $x\in A$ and $x\notin B\setminus C$. If $x\in C$, then certainly $x\in (A\setminus B)\cup C$. So suppose $x\notin C$. Then, $x\notin C\cup (B\setminus C)$, so since $B\subseteq C\cup(B\setminus C)$, we have $x\notin B$. Thus, $x\in A\setminus B\subseteq (A\setminus B)\cup C$. In either case $x\in (A\setminus B)\cup C$; thus, $A\setminus (B\setminus C)\subseteq (A\setminus B)\cup C$.
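Set inclusions like this one can also be sanity-checked exhaustively over a small universe before (or after) proving them:

```python
from itertools import combinations

def inclusion_holds():
    """Check A \\ (B \\ C) is a subset of (A \\ B) | C for all subsets of {0, 1, 2}."""
    U = (0, 1, 2)
    subsets = [set(c) for k in range(len(U) + 1) for c in combinations(U, k)]
    return all((A - (B - C)) <= ((A - B) | C)
               for A in subsets for B in subsets for C in subsets)
```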
Evaluate $\lim_{n \rightarrow \infty} \int_0^1 \frac{nx^{n-1}}{2+x} dx$
The integrand strongly suggests a Gaussian hypergeometric function with a last argument of $\pm \frac x 2$ $$I_n=\int\frac{nx^{n-1}}{x+2} dx=\frac{x^n}{2}-\frac{n\, x^{n+1} }{4 (n+1)}\, _2F_1\left(1,n+1;n+2;-\frac{x}{2}\right)$$ Assuming $n>1$ $$J_n=\int_0^1\frac{nx^{n-1}}{x+2} dx=\frac 12-\frac n{4(n+1)}\, _2F_1\left(1,n+1;n+2;-\frac{1}{2}\right)$$ $$\lim_{n\to \infty } \, \, _2F_1\left(1,n+1;n+2;-\frac{1}{2}\right)=\frac 23$$ $$J_n \sim \frac{1}{2}-\frac{n}{6 (n+1)}\quad \to\quad \frac 13$$
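A much lower-tech check of the limit $\frac13$: the mass of $nx^{n-1}$ concentrates at $x=1$, where the denominator is $3$. Numerically:

```python
def J(n, steps=100000):
    """Midpoint-rule estimate of the integral of n x^(n-1)/(x+2) over [0, 1]."""
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += n * x ** (n - 1) / (x + 2) * h
    return total

estimates = [J(n) for n in (50, 200)]
```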
Does $\int_{-x}^{x}f(t)dt=0$ imply that $f$ is an odd function?
You can take an integrable odd function and change its value at a point $x > 0$ so that it's no longer odd but still satisfies all the integral conditions.
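Concretely: redefine an odd function at a single point. One point has measure zero, so every symmetric integral is unchanged, but oddness fails. A sketch with a midpoint-rule check (the grid below never samples $t = 1$ exactly):

```python
def f(t):
    """t -> t, except redefined at the single point t = 1."""
    return 5.0 if t == 1 else float(t)

# f(-1) != -f(1), so f is not odd ...
not_odd = f(-1) != -f(1)

# ... yet a symmetric midpoint sum over [-2, 2] still vanishes
steps = 1000
h = 4.0 / steps
integral = sum(f(-2.0 + (i + 0.5) * h) * h for i in range(steps))
```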
Multivariable polynomial in $R[X_1,..,X_n]$ whose roots are all elements in $R^n$ is zero.
Good question. Yes, $p$ is zero, and what we need to use is that $R$ is infinite (which is implied since it is algebraically closed in your case). The idea is to induct (because we understand polynomials in one variable, after all!). The result is folklore for $n = 1$. Let $$0 \neq p \in R[x_1, \dots, x_n] = R[x_1, \dots, x_{n-1}][x_n].$$ Since $p \neq 0$ and $R[x_1, \dots, x_{n-1}]$ is an infinite integral domain, there exists some $f \in R[x_1, \dots, x_{n-1}]$ such that substituting $f$ for $x_n$ gives $0 \neq p(f) \in R[x_1, \dots, x_{n-1}]$. By the induction hypothesis, there exist $a_1, \dots, a_{n-1} \in R$ such that $p(f)(a_1, \dots, a_{n-1}) \neq 0$. In particular, $p(a_1, \dots, a_{n-1}, f(a_1, \dots, a_{n-1})) \neq 0$.
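The infiniteness of $R$ is genuinely needed: over the finite field $\mathbb F_2$ the statement fails already in one variable, since $x^2+x$ vanishes at every point without being the zero polynomial. A one-line check:

```python
# x^2 + x over F_2: nonzero as a polynomial, zero as a function
values = [(x * x + x) % 2 for x in (0, 1)]
```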
Infinite series challenging problem
Your suspicions are well-founded : your “step” is not valid. But you can proceed as follows : if you put $$ \delta_{m,n}=\bigg(\sum_{k=m}^n \frac{a_k}{t_k}\bigg)- \bigg(1-\frac{t_n}{t_m}\bigg) $$ then $$ \begin{array}{lcl} \delta_{m,n} &=& \bigg(\sum_{k=m}^n \frac{a_k}{t_k}\bigg)- \frac{a_{m}+\ldots +a_{n-1}}{t_m} \text{ (as you noticed)} \\ &=& \frac{a_m}{t_m}+ \bigg(\sum_{k=m+1}^n \frac{a_k}{t_k}\bigg)- \frac{a_{m}+\ldots +a_{n-1}}{t_m} \\ &=& \bigg(\sum_{k=m+1}^n \frac{a_k}{t_k}\bigg)- \frac{a_{m+1}+\ldots +a_{n-1}}{t_m} \\ &=& \bigg(\sum_{k=m+1}^n \frac{a_k}{t_k}\bigg)- \frac{t_{n-1}-t_m}{a_m+t_{m+1}} \\ &=& \bigg(\sum_{k=m+1}^n \frac{a_k}{t_k}\bigg)- \frac{t_{n-1}-t_m}{a_m+t_{m+1}}+\frac{t_{n-1}-t_m}{t_{m+1}}- \frac{t_{n-1}-t_m}{t_{m+1}} \\ &=& \bigg(\sum_{k=m+1}^n \frac{a_k}{t_k}\bigg)+ \frac{a_m(t_{n-1}-t_m)}{t_{m+1}(a_m+t_{m+1})}- \frac{a_{m+1}+\ldots +a_{n-1}}{t_{m+1}} \\ &=& \frac{a_m(t_{n-1}-t_m)}{t_{m+1}(a_m+t_{m+1})} + \bigg(\sum_{k=m+1}^n \frac{a_k}{t_k}\bigg)- \frac{a_{m+1}+\ldots +a_{n-1}}{t_{m+1}} \\ &=& \frac{a_m(t_{n-1}-t_m)}{t_mt_{m+1}} + \bigg(\sum_{k=m+1}^n \frac{a_k}{t_k}\bigg)- \bigg(1-\frac{t_n}{t_{m+1}}\bigg) \\ &=& \frac{a_m(t_{n-1}-t_m)}{t_mt_{m+1}} + \delta_{m+1,n} \\ \end{array} $$ You can then argue by induction on $n-m$, starting with the base case $n-m=1$, where $$ \delta_{m,n}=\delta_{m,m+1}=\frac{a_{m+1}}{t_{m+1}} $$
Test for symmetric property of this ordered pair
You are correct in that if $R=\{(1,2),(1,3),(2,3)\}$ then it is not a symmetric relation; however, $R$ is not that relation. "$(a,b)\in R$ implies $(b,a)\in R$" means that for each pair $(a,b)$ in $R$ we need to check whether $(b,a)$ is in $R$. $(1,1) \in R$: we confirm that $(1,1)\in R$. $(1,3)\in R$: we confirm that $(3,1)\in R$. $(2,2)\in R$: we confirm that $(2,2)\in R$. $(3,1)\in R$: we confirm that $(1,3)\in R$. $(3,3)\in R$: we confirm that $(3,3)\in R$. Thus we have checked all pairs, and they all hold; hence the relation is symmetric. You do not need to check $(2,3)$, since $(2,3)$ is not in $R$.
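The check described above is mechanical; a small sketch:

```python
def is_symmetric(R):
    """R is symmetric iff (b, a) is in R for every (a, b) in R."""
    return all((b, a) in R for (a, b) in R)

R = {(1, 1), (1, 3), (2, 2), (3, 1), (3, 3)}
other = {(1, 2), (1, 3), (2, 3)}   # the relation the question mistakenly tested
```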
What's behind the function $g(x)=\operatorname{inf}\{f(p)+d(x,p):p\in X\}$?
Another way to describe $g_n$ is: $$ g_n = \sup\{ g : g \text { is $n$-Lipschitz and } g \le f\} \tag{1}$$ The key point is that the property of being Lipschitz with a fixed constant is preserved under taking supremum or infimum. Hence, we can take the supremum of all $n$-Lipschitz minorants of $f$, thus obtaining the greatest $n$-Lipschitz minorant (or $n$-Lipschitz lower envelope) of $f$. As $n\to\infty$, it converges to $f$. To connect (1) to your formula, observe that the requirement $g(x)\le f(x)$ together with $g$ being $n$-Lipschitz force $g(x)\le f(p)+nd(x,p)$ for all $p$. Thus, $g\le g_n$. On the other hand, $g_n$ belongs to the family; hence it is the supremum. The Lipschitz modulus of continuity is the simplest one to use, but one could equivalently take the supremum of all minorants that satisfy the Hölder condition of order $\alpha$ with constant $n$. Or use any sequence of moduli of continuity that grows to infinity. The idea stays the same: the set of all functions with a particular modulus of continuity is a lattice. One can also take infimum of $n$-Lipschitz majorants, arriving at a counterpart of $g_n$: $$\begin{split} h_n(x) &= \inf\{ h : h \text { is $n$-Lipschitz and } h \ge f\} \\ &= \sup\{f(p)-n d(x,p) : p\in X\}\end{split} \tag{2}$$ You may want to compare (1) to another construction: $$ \phi = \sup\{ h : h \text { is convex and } h \le f\} \tag{3}$$ where "convex" can also be replaced by "affine". The construction (3) makes sense only on linear spaces, since it refers to convexity. But the idea is similar: having a property that is preserved under taking supremum, we can build the greatest minorant with the given property. It's called the convex envelope of $f$, I believe.
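On a finite grid the formula for $g_n$ is a one-liner, and the two defining properties (it is a minorant of $f$, and it is $n$-Lipschitz) can be checked directly. A sketch with a step function:

```python
def lipschitz_envelope(xs, fs, n):
    """g_n(x) = min_p f(p) + n d(x, p): greatest n-Lipschitz minorant on the grid."""
    return [min(fp + n * abs(x - p) for p, fp in zip(xs, fs)) for x in xs]

xs = [i / 10 for i in range(11)]
fs = [0.0] * 5 + [1.0] * 6            # a discontinuous step function
g5 = lipschitz_envelope(xs, fs, 5)

minorant = all(g <= f + 1e-12 for g, f in zip(g5, fs))
slopes = [abs(g5[i + 1] - g5[i]) / 0.1 for i in range(10)]
```

The envelope replaces the jump by a ramp of slope $5$, the steepest a $5$-Lipschitz function allows.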
A guess about Jacobi
Up to a sign, the answer is "you are right". Let's have a look at the transformation formula for integrals, we have \begin{align*} \def\v{\operatorname{vol}}\v f[B_r(x)] &= \int_{f[B_r(x)]}\, dy\\ &= \int_{B_r(x)} |\det Df(\xi)| \, d\xi\\ &= |\det Df(\xi_{r,x})| \cdot \v B_r(x) \end{align*} where $\xi_{r,x} \in B_r(x)$ exists as the integrated function is continuous. Now $\lim_{r\to 0} \xi_{r,x} = x$, and hence $$ |J_f(x)| = |\det Df(x)| = \lim_{r\to 0} \frac{\v f[B_r(x)]}{\v B_r(x)} $$
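The limit can be watched numerically. Using the sample map $f(x,y)=(x^3/3,\,y)$ (an assumption chosen for illustration), $|\det Df(x,y)| = x^2$, and by the transformation formula the ratio $\operatorname{vol} f[B_r(x)]/\operatorname{vol} B_r(x)$ equals the average of $x^2$ over the ball, which tends to the value at the center as $r\to 0$:

```python
def avg_jacobian(cx, cy, r, n=400):
    """Average of |det Df| = x^2 over the disc B_r(cx, cy), on an n x n grid."""
    total, count = 0.0, 0
    for i in range(n):
        for j in range(n):
            u = -r + 2 * r * (i + 0.5) / n
            v = -r + 2 * r * (j + 0.5) / n
            if u * u + v * v <= r * r:
                total += (cx + u) ** 2
                count += 1
    return total / count

cx, cy = 0.5, 0.25
ratios = [avg_jacobian(cx, cy, r) for r in (0.5, 0.1)]
center_det = cx ** 2
```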
Prove That Every Simple Graph Has Two Vertices Of The Same Degree.
Your proof looks fine. It could probably be simplified a little bit, but it's fine. In particular, the last paragraph is redundant, and can be removed by slightly altering the end of the second-to-last: "...there must be a vertex $A$ of degree 0, and a different vertex $B$ of degree $n-1$. Since there are exactly $n-1$ non-$B$ vertices, and the graph is simple, $B$ must share an edge with every other vertex, including $A$. But that contradicts the fact that $\deg A = 0$."
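The statement itself can be verified exhaustively for small graphs, which makes a reassuring companion to the proof:

```python
from itertools import combinations

def every_graph_has_repeated_degree(n):
    """Check that no simple graph on n >= 2 vertices has all degrees distinct."""
    edges = list(combinations(range(n), 2))
    for mask in range(1 << len(edges)):   # each bitmask picks an edge set
        deg = [0] * n
        for i, (a, b) in enumerate(edges):
            if mask >> i & 1:
                deg[a] += 1
                deg[b] += 1
        if len(set(deg)) == n:
            return False
    return True
```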
How to derive the dual for LP like this?
Rewrite as: maximize 0y + 1z s.t. -3y + 1z <= -2 -1y + 0z <= -1 1y + 0z <= 2 Then the dual is: minimize -2t - 1u + 2v s.t. -3t - 1u + 1v = 0 1t + 0u + 0v = 1 t, u, v >= 0 Equivalently: minimize -u + 2v - 2 s.t. -u + v = 3 u, v >= 0 Equivalently: minimize u + 4 s.t. u >= 0 Hence $(t,u,v)=(1,0,3)$ is the unique optimal dual solution, and complementary slackness yields optimal primal solution $(y,z)=(2,4)$.
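The claimed primal and dual solutions can be certified without any solver: both must be feasible, and equal objective values imply optimality by weak duality. A check:

```python
# primal: max z  s.t.  -3y + z <= -2,  -y <= -1,  y <= 2
y, z = 2, 4
# dual:   min -2t - u + 2v  s.t.  -3t - u + v = 0,  t = 1,  t, u, v >= 0
t, u, v = 1, 0, 3

primal_feasible = (-3 * y + z <= -2) and (-y <= -1) and (y <= 2)
dual_feasible = (-3 * t - u + v == 0) and (t == 1) and min(t, u, v) >= 0
primal_objective = z
dual_objective = -2 * t - u + 2 * v
```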
Arrangements in a circle such that A is closer to F than B is to F
The number of ways to choose the positions for A and B is $5\cdot4=20$, so this gives $p=\frac{1}{2}(1-\frac{4}{20})=\frac{2}{5}$. Alternate solution: 1) If A is one place away from F, then there are 2 choices for A and 3 choices for B. 2) If A is two places away from F, then there are 2 choices for A and only 1 choice for B. Therefore $ p=\frac{2\cdot3+2\cdot1}{5\cdot4}=\frac{8}{20}=\frac{2}{5}$.
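Both counts are confirmed by enumerating all seatings of A and B, with F fixed at seat 0 of 6:

```python
from fractions import Fraction
from itertools import permutations

def closer_probability():
    """P(A is circularly closer to seat 0 than B is), over all seat choices."""
    def dist(i):
        return min(i % 6, (-i) % 6)   # circular distance to seat 0
    favorable = total = 0
    for a, b in permutations(range(1, 6), 2):
        total += 1
        if dist(a) < dist(b):
            favorable += 1
    return Fraction(favorable, total)
```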
Prove or disprove $\exp(AB)=\exp(BA)$ where $A$ and $B$ are square matrices
Consider a nonsingular matrix $X$. It will have various logarithms (i.e. matrices whose exponential is $X$): let two of them be $Y$ and $Z$, and suppose $Y$ and $Z$ are similar. Thus there is a nonsingular matrix $A$ such that $Z = A^{-1} Y A$. Take $B = A^{-1} Y$, so $Z = BA$ and $Y = AB$, and $\exp(AB) = \exp(BA)$. Conversely, any case where $A$ is invertible will arise in this way, taking $X = \exp(AB)$, $Y = AB$ and $Z = BA$.
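The construction above covers the invertible case; when both matrices are singular the identity can genuinely fail. A standard nilpotent counterexample (not from the answer above), where the exponentials are exact because the products square to zero:

```python
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def exp_nilpotent(N):
    """exp(N) = I + N exactly, valid here because N @ N = 0."""
    return [[(1 if i == j else 0) + N[i][j] for j in range(2)] for i in range(2)]

A = [[0, 1], [0, 0]]
B = [[1, 0], [0, 0]]
AB, BA = mat_mul(A, B), mat_mul(B, A)
```

Here $AB$ is the zero matrix while $BA$ is not, so $\exp(AB)=I$ but $\exp(BA)=I+BA\neq I$.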
Showing that $\sum\limits_{i=1}^n X_i^2$ is a sufficient statistic for $\theta$ from an $N(0,\theta)$ population
Your solution is perfect. Alternatively, you can solve this problem simply by noting that the normal distribution belongs to the exponential family. The proof is exactly what you did.
A detail in the proof that left invariant vector fields are smooth
I think the problem here is the canonical isomorphism $\mathbb{R}\simeq T_p\mathbb{R}$ for all $p\in\mathbb{R}$. Notice that if $f:G\to\mathbb{R}$, then $f_*|_x : T_x G\to T_{f(x)}\mathbb{R}\simeq \mathbb{R}$, and that is why you can identify $f_*|_x(X|_x)\in T_{f(x)}\mathbb{R}$ with $X|_x f\in \mathbb{R}$.
Solve for x in $x^y = y^x$ and prove continuity.
Take the logarithm. $$y\log x=x\log y,$$ or $$\frac{\log x}x=\frac{\log y}y.$$ For $x>1$, there are two solutions in $y$, one of course being $y=x$. The other can be expressed by means of the Lambert $W$ function. The asymptotes are justified by the fact that for $x\to1$, the other solution is $y\to\infty$. In the intervals $(0,e)$ and $(e,\infty)$, the function $\log x/x$ is continuous and strictly monotonic, hence continuously invertible, with codomains $(-\infty,1/e)$ and $(0,1/e)$.
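The second solution can be computed numerically from $\frac{\log x}{x}=\frac{\log y}{y}$, using that $\log y / y$ is strictly decreasing on $(e,\infty)$; the classic pair $(2,4)$ makes a good test:

```python
import math

def partner(x, iterations=200):
    """For x in (1, e), find y > e with log(y)/y = log(x)/x, by bisection."""
    target = math.log(x) / x
    lo, hi = math.e, 1e6
    for _ in range(iterations):
        mid = (lo + hi) / 2
        if math.log(mid) / mid > target:
            lo = mid        # log(y)/y is decreasing here: root lies to the right
        else:
            hi = mid
    return (lo + hi) / 2
```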
Transformation self adjoint proof
Remember the properties of the adjoint operation $S \mapsto S^*$. We know that for any $S_1, S_2 \in L(V)$, we have $$ (S_1 + S_2)^* = S_1^* + S_2^*, \quad (S_1S_2)^* = S_2^*S_1^*, \quad S_1^{**} = S_1 $$ To check whether, say, $U_1$ is self-adjoint, compute $U_1^*$. We have $$ U_1^* = (T+T^*)^* = T^* + T^{**} = T^* + T = T + T^* = U_1 $$ For $U_2$ we have $$ U_2^* = (TT^*)^* = T^{**}T^* = TT^* = U_2 $$
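In matrix form the adjoint is the conjugate transpose, so both claims can be spot-checked numerically:

```python
def conj_transpose(M):
    n = len(M)
    return [[M[j][i].conjugate() for j in range(n)] for i in range(n)]

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X))] for i in range(len(X))]

def deviation_from_self_adjoint(M):
    """Largest entrywise difference between M and its conjugate transpose."""
    Ms = conj_transpose(M)
    return max(abs(M[i][j] - Ms[i][j])
               for i in range(len(M)) for j in range(len(M)))

T = [[1 + 2j, 3 - 1j], [0 + 1j, 2 + 0j]]
U1 = mat_add(T, conj_transpose(T))   # T + T*
U2 = mat_mul(T, conj_transpose(T))   # T T*
```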
Scaling numbers cleverly to prevent arithmetic overflows or rounding errors
Disclaimer: This isn't a 100% complete answer, but for the case that I need it for, it works. Maybe someone with additional ideas can improve it... One can make some progress by looking at different possible special cases: If $P*r<2^{63}$, there is no problem with overflows to begin with, so we can just calculate the answer directly as $p=\lfloor P*r/R \rfloor$. The only thing we need to do is to perform the check as $P<\lfloor2^{63}/r \rfloor$ instead to allow for a false result without overflow. To cover the other case, one can do the following transformation: $$ p=\lfloor P*r/R \rfloor= \bigg\lfloor\frac{P}{R/r}\bigg\rfloor=\bigg\lfloor\frac{P}{\lfloor R/r \rfloor + (R\bmod{r})/r}\bigg\rfloor $$ $$ =\bigg\lfloor\frac{P}{\lfloor R/r \rfloor}-\frac{P(R\bmod{r})/r}{\lfloor R/r \rfloor(R/r)}\bigg\rfloor=\Bigg\lfloor\bigg\lfloor\frac{P}{\lfloor R/r\rfloor}\bigg\rfloor + \frac{P\bmod{\lfloor R/r\rfloor}}{\lfloor R/r\rfloor}-\frac{P(R\bmod{r})}{R\lfloor R/r \rfloor}\Bigg\rfloor $$ $$ =\bigg\lfloor\frac{P}{\lfloor R/r\rfloor}\bigg\rfloor + \Bigg\lfloor\frac{P\bmod{\lfloor R/r\rfloor}}{\lfloor R/r\rfloor}-\frac{P}{R}\frac{R\bmod{r}}{\lfloor R/r \rfloor}\Bigg\rfloor $$ The first term in the sum can be calculated directly without overflow issues. The second half after the '$+$' - sign is a little more difficult. But, looking at it, one realizes the following: The fraction to the left of the '$-$' - sign is always between $\ge 0$ and $<1$. Also: $0 \le P/R < 1$. So, we can look at two different special cases here: If $R\ge r^2$, the entire part to the right of the '$+$' - sign can only be either $0$ or $-1$. Otherwise, things could get more complicated... Luckily, in my application, I know for a fact, that $R\ge r^2$ is always true. 
Therefore, for me, the answer is: $$ p = \begin{cases}\lfloor (P*r)/R \rfloor & \text{if } P<\lfloor2^{63}/r \rfloor \\ \lfloor P/\lfloor R/r\rfloor \rfloor \;[-1] & \text{if } P\ge \lfloor2^{63}/r \rfloor \text{ and } R\ge r^2 \\ \text{here be dragons} & \text{if } P\ge \lfloor2^{63}/r \rfloor \text{ and } R<r^2 \end{cases} $$ Here, $[-1]$ indicates that I only subtract $1$ in some cases, which I check by first leaving it out, testing the value of $p$ I get, and then putting it in if I need it. In the third case, one could use an arbitrary-precision approach (cf. GMP) and calculate the answer by doing long division as suggested by MvG. But maybe someone can find a more elegant way to slay the dragons for completeness... ;) Otherwise, if I have time a little later, I (or someone else) might add the formula for implementing the long division.
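Python integers do not overflow, so the branch logic can be modeled and tested against exact arithmetic (the full-precision product below serves only as the "put the $-1$ in if I need it" check, which in C would have to be done overflow-free). A sketch of the second branch, assuming $P \le R$ and $R \ge r^2$:

```python
def scaled_floor(P, r, R):
    """floor(P * r / R) via the derivation above: floor(P / floor(R / r)),
    then subtract 1 if the result overshoots. The derivation shows the
    first guess is off by at most 1 when P <= R and R >= r*r."""
    q = R // r
    p = P // q
    if p * R > P * r:   # overshoot test; exact here thanks to Python big ints
        p -= 1
    return p
```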
Extracurricular ideas for UK GCSE level maths student
Read through (and work the problems in) "What is Mathematics" by Courant and Robbins. It won't detract from or interfere with her studies, it is quite well self-contained, and it is at once rigorous, deep, and broad. It is the book I wish I had read when I was 15. Seeing some of the other answers, I thought perhaps I might expound a bit on why I recommend this text over others. I don't mean to insult you, and please correct me if this is not the case, but I got the impression from your question that these are not topics in which you have a confident footing. One concern with approaching a book such as Fraleigh's Abstract Algebra is the possibility of misinterpreting some of the material. This is still possible with the book I recommend, but Courant and Robbins' book does not really cover a particular subject and is not meant to be a primer in, for instance, abstract algebra or some other course that your daughter may take later in her studies. A misinterpretation of the material covered in "What Is Mathematics" is, in my opinion, more likely to be corrected later by a rigorous course in college than a misinterpretation of one of the building blocks of Abstract Algebra. Courant and Robbins does an excellent job of encouraging intuition, and will definitely hit that "wow" spark for a young mind. I worry that your daughter's interest in mathematics might be squashed by a book like Rosenlicht's. Better, in my opinion, to build the motivation with interesting and amazing proofs and results - that way, the desire to struggle through some of the tedium of learning the basics will be all the more worthwhile. The book includes an excellent section on number theory in the beginning, including coverage of complex numbers and de Moivre's formula. This is followed by an excellent geometry section covering projective and hyperbolic geometry.
Topology is then covered (when I was a senior in college, I still thought the "topology" course would be about mapping, so to expose a 15 year old to this would be amazing). This also includes a proof of Brouwer's fixed-point theorem (or one of them, rather). It then moves on to limits and calculus concepts, including calculus of variations as it applies to minimal surfaces! The main reason I would recommend this book is that she will be excited while reading it. She will get an excellent selection of some of the most interesting and amazing results in mathematics. If she likes what she reads then she will be more interested in mathematics than before. If, on the other hand, she doesn't like it, at least she is forming a very well informed opinion about it.
Determining sample space and random variable for a construction of random maximal P-graph
The sample space is the set of subgraphs of $K_n$ (though some of these may have probability zero). More precisely, $\mathbf{M}_n(P)$ is concentrated on subgraphs of $K_n$ that satisfy property $P$ and to which no edge can be added while preserving property $P$. The process has not been completely described, in that it is not specified if or how the random choice at time $i+1$ is influenced by the previous random choices. However, I think it's reasonable to assume the choices are independent (i.e., enumerate the elements of $F^i$, choose a random number $N_i$ between $1$ and $|F^i|$ at time $i$ independent of all past random choices, and let $e$ be the $N_i$th element of $F^i$). The construction is not a Markov process, as illustrated below. Consider an example where the property $P$ is "there exists an isolated vertex" and $V=\{1,2,3,4\}$. We have: $E^0=\emptyset$ $F^0=\{\{1,2\},\{1,3\},\{1,4\},\{2,3\},\{2,4\},\{3,4\}\}$ $e=$ a random element from $F^0$, say $e=\{1,3\}$ $E^1 =\{\{1,3\}\}$ $F^1 = \{\{1,2\},\{1,4\},\{2,3\},\{3,4\}\}$. Note $\{1,3\}$ is removed because it has already been chosen, and $\{2,4\}$ is removed because choosing it would leave no isolated vertices. $e = $ a random element from $F^1$, say $e=\{2,3\}$ $E^2 = \{\{1,3\},\{2,3\}\}$. Note at this point $4$ is the only vertex that can be isolated in $\mathbf{M}_n(P)$. This is determined by both of our previous edge choices, not just the most recent choice, hence the process is not Markov. $F^2 =\{\{1,2\}\}$ since this is the only edge not yet in the random graph that is not incident to $4$. $e=$ a random element from $F^2$, i.e., $e=\{1,2\}$. $E^3 = \{\{1,2\},\{1,3\},\{2,3\}\}$ $F^3 = \emptyset$ and the construction terminates. In this example you should be able to convince yourself that $\mathbf{M}_n(P)$ is uniform on the triangle subgraphs of $K_4$. That is, for any given triangle subgraph, with probability $\frac{1}{4}$ the construction ends in that subgraph.
Note that $|F^0|=6$, $|F^1|=4$, and $|F^2|=1$ regardless of the choices we make in this example. So if edges $e_1$, $e_2$, and $e_3$ form a triangle subgraph, then they are chosen with probability $\frac{1}{6}\cdot \frac{1}{4}\cdot \frac{1}{1}\cdot 3!=\frac{1}{4}$ (since it doesn't matter in which order we choose the edges). In general it will be much more complicated to calculate the probability distribution of $\mathbf{M}_n(P)$. In my opinion it is best understood as being built by a random process over time.
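To see the uniformity empirically, here is a short Python simulation of the construction for this particular example (property $P$ = "there exists an isolated vertex", $V=\{1,2,3,4\}$); the function names and the $0.25$ target frequency are specific to this toy case:

```python
import random
from itertools import combinations

def has_isolated_vertex(edges, V):
    touched = {v for e in edges for v in e}
    return any(v not in touched for v in V)

def random_maximal(V, prop, rng):
    """Repeatedly add a uniformly random property-preserving edge, until stuck."""
    E = set()
    while True:
        F = [e for e in map(frozenset, combinations(V, 2))
             if e not in E and prop(E | {e}, V)]
        if not F:
            return frozenset(E)
        E.add(rng.choice(F))

rng = random.Random(0)
V = (1, 2, 3, 4)
counts = {}
for _ in range(20000):
    G = random_maximal(V, has_isolated_vertex, rng)
    counts[G] = counts.get(G, 0) + 1

# the four maximal graphs are the triangles avoiding one vertex, each ~ 1/4
assert len(counts) == 4
for c in counts.values():
    assert abs(c / 20000 - 0.25) < 0.02
```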
Representing the empty set on an Euler diagram
The condition $A\oplus A = \emptyset$, where $\oplus$ is symmetric difference, is satisfied by every set $A$ as $$A\oplus A = (A\setminus A)\cup(A\setminus A) = \emptyset\cup\emptyset = \emptyset.$$ Therefore, if you want to draw an Euler diagram to represent this condition, just draw any set $A$. The condition $\emptyset \subset A\cap B \subset A\cup B$, where $\subset$ allows for the possibility of equality, is satisfied by any two sets $A$ and $B$. Clearly $\emptyset \subset A\cap B$ as $\emptyset$ is a subset of every set, and any element which is in both $A$ and $B$ is in at least one of them, so $A\cap B \subset A\cup B$. Therefore, to draw an Euler diagram to represent this condition, just draw any two sets $A$ and $B$.
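Both identities are also easy to confirm mechanically; in Python, `^`, `&`, `|` and `<=` are exactly symmetric difference, intersection, union and inclusion of sets:

```python
# A ⊕ A = ∅ for every set A
for A in [set(), {1}, {1, 2, 3}, {"x", "y"}]:
    assert A ^ A == set()

# ∅ ⊆ A ∩ B ⊆ A ∪ B for any two sets A and B
for A, B in [({1, 2}, {2, 3}), (set(), {1}), ({1}, {1})]:
    assert set() <= (A & B) <= (A | B)
```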
Homogenous Matrix-valued ODE with left and right multiplication
As stated in a comment before: an ODE of the form $X'(t)=A(t)X(t)+X(t)B(t)$ is called a Sylvester ODE. It is treated in "Sylvester Matrix Differential Equations: Analytical and Numerical Solutions" by Laurene V. Fausett. I will quickly copy the general solution here: Let $Y$ be a solution of $T'=AT$ and $Z$ be a solution of $T'=TB$. Let $C$ be any constant square matrix. Then $X(t):=Y(t) C Z(t)$ satisfies $X'(t) =Y'(t) C Z(t)+ Y(t) C Z'(t) =A(t) Y(t) C Z(t) + Y(t) C Z(t) B(t) =A(t) X(t) + X(t) B(t)$, i.e. it is a solution to the above mentioned ODE. Given any initial data $X(0)=X_0$, we can choose $C$ to satisfy this data. By the Picard-Lindelöf theorem this solution to the IVP is unique.
Prove a graph be a planar graph
You are correct that the statement "If $e \leq 3v-6$, then the graph is planar" is false. Consider $K_{3,3}$, which satisfies $e = 9 \leq 3(6)-6 = 12 = 3v-6$, but is not planar. I don't read the statement this way, but if the statement is supposed to restrict to graphs that have at least one triangle, it still is not true (consider $K_{3,3}$ plus an edge). Your proof of the converse is fine. For the second question, your restatement is correct.
One well-known property of resultant
As per the comments, see the proof at this site. The author explicitly states that no assumption about distinct roots is made.
Associated elements in a ring
This is a non-trivial example taken from some notes I had lying around (I was being flippant earlier simply because it does not make sense to define a unit in a non-unital ring) : Consider $R = \mathbb{Q}[x,y,z]/(x-xyz)$, and denote by $\overline{f}$ the image of $f\in \mathbb{Q}[x,y,z]$ in $R$. Now note that $$ \overline{x} = \overline{xy}\overline{z} $$ and hence $$ \overline{x} \mid \overline{xy} \text{ and } \overline{xy} \mid \overline{x} \text{ in } R $$ I claim that there does not exist $\overline{f} \in R^{\ast}$ such that $$ \overline{f}\overline{x} = \overline{xy} $$ Suppose such an $f \in \mathbb{Q}[x,y,z]$ existed, then $fx - xy \in (x-xyz)$, whence $f-y \in (1-yz)$. So there exists $h \in \mathbb{Q}[x,y,z]$ such that $$ f = y + h(1-yz) $$ Now suppose $\overline{f}$ is a unit, then it must follow that $$ (y+h(1-yz),x-xyz) = \mathbb{Q}[x,y,z] $$ But, by setting $x=0, y=z$, one gets $$ (z+h(1-z^2)) = \mathbb{Q}[z] $$ Check that this is not possible.
Are determinants always real?
A determinant is always a member of the field (or ring) that the matrix entries come from -- for any given size of the matrix the determinant is a particular polynomial in the entries. Thus, if the matrix entries are all real, then so is the determinant. But with, say, complex entries in the matrix it is easy to find a matrix with a non-real determinant: $$\left|\begin{matrix}i & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{matrix}\right|=i $$ By the way, it doesn't really work well to define the determinant as the product of the eigenvalues, because some matrices have fewer different eigenvalues than the size of the matrix, and then you need to include some eigenvalues in the product several times, according to their (algebraic) multiplicity. But the way to define the multiplicity of eigenvalues is through the characteristic polynomial, which in itself is defined using determinants!
Short proof for the non-Hamiltonicity of the Petersen Graph
If you can use the symmetry (as Jernej suggests), the case argument has a lot going for it. There is a proof using interlacing. Observe that if $P$ has a Hamilton cycle then its line graph $L(P)$ contains an induced copy of $C_{10}$. Eigenvalue interlacing then implies that $\theta_r(C_{10}) \le \theta_r(L(P))$. But $\theta_7(C_{10}) \approx -0.618$ and $\theta_7(L(P))=-1$. [I have forgotten who this argument is due to. There are a number of variants of it too.]
minors and rank of a matrix
A minor is the determinant of a square submatrix. However the statement given is not valid. Consider a $1\times 2$ matrix, $[0\quad 1]$. Clearly this matrix has rank 1. The above assertion says this is so if and only if all $2\times 2$ minors vanish. There are none, so one might be tempted to say the criterion is satisfied "vacuously". However then it would also be true for rank $r=0$, which is inconsistent with the definition of rank of a matrix being the dimension of its row space (equiv. dimension of its column space). A correct statement would be that an $m\times n$ matrix has rank $r$ if and only if some $r\times r$ minor does not vanish and every $(r+1)\times (r+1)$ minor does vanish, i.e. $r$ is the largest number such that some $r\times r$ minor does not vanish (is not zero). For this to work we need the technical convention that a $0\times 0$ minor is $1$, i.e. that the $0\times 0$ matrix has determinant $1$. Such a convention is consistent with the notion of an empty product being $1$, though it may strike some as counterintuitive.
Investigating monotone and bounded nature of a function.
Hint Consider the derivative $f'(x)=3x^2+2bx+c$; solving $f'(x)=0$ for $x$ (a quadratic equation) shows that it vanishes at $$x_{\pm}=\frac{1}{3} \left(-b\pm\sqrt{b^2-3 c}\right)$$ But you are told that $0<b^2<c$; so, what about the roots in the real domain? I am sure that you can take it from here.
Properties of group extensions
[7.] Not sure what you mean here. Are you talking about Noetherian groups? If so, then you can show that any subgroup $H$ of $G$ is finitely generated, given that $K$ and $G/K$ are Noetherian for some $K\trianglelefteq G$. This follows from the facts that $H\cap K\leq K$ is finitely generated and that $H/(H\cap K)\cong HK/K\leq G/K$ is also finitely generated. [8.] Not sure what you mean here either. Are you talking about Artinian groups? If so, the answer is also yes. Consider the chain of normal subgroups $G=G_0\geq G_1\geq G_2\geq \ldots$. Suppose that a normal subgroup $K$ of $G$ satisfies $K$ and $G/K$ being Artinian. The image of this chain under $G\to G/K$ is given by $G/K=H_0\geq H_1\geq H_2\geq \ldots$. Then, by the assumption that $G/K$ is Artinian, we get $H_q=H_{q+1}=H_{q+2}=\ldots=:\bar{H}$ for some $q\in\mathbb{N}_0$. Now, we look at the sequence of subgroups of $K$, $K=K_0\geq K_1\geq K_2\geq \ldots$, where $K_i:=K\cap G_i$ for $i=0,1,2,\ldots$. Note that, for some integer $k\geq q$, $K_k=K_{k+1}=K_{k+2}=\ldots=:\bar{K}$. Thus, for $l=k,k+1,k+2,\ldots$, we have the exact sequence $1 \to \bar{K} \to G_l \to \bar{H}\to 1$. By the Five Lemma for groups, $G_k=G_{k+1}=G_{k+2}=\ldots$. [9.] If $1\to K \to G \to Q\to 1$ is exact with $K$ and $Q$ being periodic, then every $x\in G$ satisfies $x^q\mapsto 1_Q$ via $G\to Q$ for some integer $q>0$. Thus, $x^q$ is in the image of $K\to G$. As $K$ is periodic, $x^{qk}=\left(x^q\right)^k=1_G$ for some integer $k>0$. [10.] If $1\to K\to G\to Q\to 1$ is exact with $K$ and $Q$ being torsion-free, then $x\in G$ with $x^q=1_G$ for some integer $q>0$ implies $x^q\mapsto 1_Q$ via $G\to Q$. As $Q$ is torsion-free, $x\mapsto 1_Q$ via $G\to Q$. Hence, $x$ is in the image of $K\to G$. Since $K$ is torsion-free and the map $K\to G$ is injective, $x^q=1_G$ happens only when $x=1_G$.
Is there anyway to bound the $L^\infty$ norm by other $L^p$ norm?
Typically questions of this type can only possibly have one answer and you can find that answer based on a scaling argument. Unfortunately here, the scaling shows that it just can't happen. Here's how you can see that: Take a generic function $f \in L^\infty(\mathbb{R}^2)$ and consider the function $f_\lambda$ where $f_\lambda (x) = f(\frac{x}{\lambda})$. Let's assume that $\| f\|_{L^p} < \infty$ for whatever $p$ this inequality might hold for. Notice that $\| f_\lambda \|_{L^\infty} = \| f \|_{L^\infty}$ by construction. But if $0 < p < \infty$ then $ \| f_\lambda \|_{L^p} = \lambda^{\frac{2}{p}} \|f\|_{L^p}$ (the exponent is $2/p$ because we are in $\mathbb{R}^2$; in $\mathbb{R}^d$ it would be $d/p$). If this inequality holds for some $p$ we must have that for every $f \in L^\infty$ and every $\lambda$ $$ \| f \|_{L^\infty} = \| f_\lambda \|_{L^\infty} \leq c \|f_\lambda \|_{L^p} = \lambda^{\frac{2}{p}} c \|f\|_{L^p}$$ Taking $\lambda \to 0$ gives us a contradiction, since this inequality would imply that $L^\infty \cap L^p = \{0\}$, which is clearly not true by considering the indicator function of the set $[0,1]^2$.
convergence of a generalized Riemann integral
$$ \begin{align} \int_{0}^{\infty} \frac{\sin (x) \sin (2x)}{x^{\alpha}} \ dx &=\frac{1}{2} \int_{0}^{\infty} \frac{\cos (x) - \cos (3x)}{x^{\alpha}} \ dx \\ &= \frac{1}{2} \int_{0}^{1} \frac{\cos (x) - \cos(3x)}{x^{\alpha}} \ dx + \frac{1}{2} \int_{1}^{\infty} \frac{\cos (x) - \cos(3x)}{x^{\alpha}} \ dx \end{align}$$ Expanding $\cos (x)$ and $\cos (3x)$ in Maclaurin series, you can see that $\cos(x) - \cos(3x)$ behaves like $4x^{2}$ near $x=0$. So the first integral converges if $2-\alpha > -1$. That is, if $\alpha <3$. And $$ \begin{align} \int_{1}^{b} \Big( \cos (x) - \cos (3x) \Big) \ dx &= \sin(x) - \frac{1}{3} \sin(3x) \Bigg|_{1}^{b} \\ &= \frac{4}{3} \sin^{3}(x) \Big|^{b}_{1} \\ &= \frac{4}{3} \Big( \sin^{3}(b) - \sin^{3}(1)\Big) \end{align}$$ which remains bounded for any value of $b$ greater than $1$. So by Dirichlet's convergence test, the second integral converges if $\alpha >0$. See here for more information about the test. Therefore, the original integral converges if $0 < \alpha < 3$.
From Eisenbud, why is $g(\mathfrak{m}_x)=\mathfrak{m}_{g^{-1}x}$?
I observed it as follows: The key point is the condition that "$G$ acts by polynomial maps". I considered the example $X=k[u,v]/(v^2-u^3-u)$, where $g\in G$ acts on $X$ by sending $(a,b) \to (-a,ib)$. [More precisely, it sends the prime ideal $(u-a,v-b) \to (u+a,v-ib)$.] Then $g\in G$ acts on $A[X]$ by sending $f(u,v) \to f(-u,iv)$. Thus, $g$ sends $u-a \to -u-a$ and $v-b \to iv-b$. So it sends $m_x = (u-a,v-b) \to (-u-a,iv-b) = (u+a,v+ib)$, which corresponds to $m_{g^{-1}x}$.
Suppose $(a_n)$ is a sequence such that $a_n=\frac{1!+2!+\cdots+n!}{n!}$. Show that $\lim{a_n}=1$
Each of $1!,2!,...,(n-2)!$ is at most $(n-2)!$. Since there are $n-2$ such terms you have $$\frac{1! + 2! + \cdots + n!}{n!} \leq \frac{(n-2)(n-2)! + (n-1)! + n!}{n!}$$ $$= \frac{n-2}{n(n-1)} + \frac{1}{n} + 1$$ The limit of this as $n \rightarrow \infty$ is $1$, which is what you need.
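A quick numerical sanity check of the squeeze (my addition; exact rational arithmetic is used because $n!$ overflows floating point long before $n=500$):

```python
from math import factorial
from fractions import Fraction

def a(n):
    # exact value of (1! + 2! + ... + n!) / n!
    return Fraction(sum(factorial(k) for k in range(1, n + 1)), factorial(n))

# squeeze: 1 <= a_n <= (n-2)/(n(n-1)) + 1/n + 1, and the upper bound tends to 1
for n in [10, 100, 500]:
    assert 1 <= a(n) <= (n - 2) / (n * (n - 1)) + 1 / n + 1
assert abs(a(500) - 1) < 0.01
```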
Derivative of the median?
Let's look at an example: suppose $f_1(x) = 4x+12, f_2(x)=x^2$, and $f_3(x) = 2x^3+5x^2$, where $f_i$ is defined on $[-1,7]$. You can check that: $$ f_1(x) \geq f_3(x)\geq f_2(x) \phantom{NNN} \textrm{for }-1\leq x \leq \frac{3}{2} $$ $$ f_3(x) \geq f_1(x)\geq f_2(x) \phantom{NNN} \textrm{for }\frac{3}{2}\leq x \leq 6 $$ $$ f_3(x) \geq f_2(x)\geq f_1(x) \phantom{NNN} \textrm{for }6\leq x \leq 7 $$ (Note that $f_3(0)=f_2(0)$ but otherwise $f_3>f_2$ on $[-1,3/2]$.) So the median of these functions on $[-1,7]$ is: $$ M(x)=\begin{cases} f_3(x), & \textrm{if }-1\leq x \leq 3/2 \\ f_1(x), & \textrm{if }3/2 < x \leq 6 \\ f_2(x), & \textrm{if }6 < x \leq 7 \end{cases} $$ Thus $M$ is differentiable except at $x=3/2$ and $x=6$: $$ M'(x) = \begin{cases} 6x^2 + 10x, & \textrm{if }-1< x < 3/2 \\ 4, & \textrm{if }3/2 < x < 6 \\ 2x, & \textrm{if }6 < x < 7 \end{cases} $$ (Unfortunately $M'$ has non-removable discontinuities at $x=3/2$ and $x=6$, so this is the best we can do. And it's really pretty good!) So if your $f_i$ functions admit only finitely many pairwise intersections on your interval, then you should be able to measure the rate of change of the median at all but finitely many points. (It's not surprising that the left-hand and right-hand derivatives would not coincide at the intersection points: at those points you have one function "overtaking" another, so it would make sense that one function would be changing faster than the other.) If you happen to have an even number of functions, then you'll still want to find all the pairwise intersection points, but then you'll just replace the "middle two" by their mean $\frac{f_i+f_j}{2}$ on each subinterval.
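To double-check the piecewise formula numerically, one can compare central-difference quotients of the pointwise median against the claimed derivatives, away from the crossing points $x=3/2$ and $x=6$ (a quick sketch, not part of the argument above):

```python
import statistics

f1 = lambda x: 4 * x + 12
f2 = lambda x: x**2
f3 = lambda x: 2 * x**3 + 5 * x**2
M = lambda x: statistics.median([f1(x), f2(x), f3(x)])

def dM(x, h=1e-6):
    """Central-difference approximation of M'(x)."""
    return (M(x + h) - M(x - h)) / (2 * h)

assert abs(dM(0.0) - 0.0) < 1e-4    # f3 branch: M'(0) = 6*0^2 + 10*0 = 0
assert abs(dM(3.0) - 4.0) < 1e-4    # f1 branch: M'(x) = 4 on (3/2, 6)
assert abs(dM(6.5) - 13.0) < 1e-4   # f2 branch: M'(x) = 2x
```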
Finding subgroups of $G=\displaystyle\normalsize{\mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/4\mathbb{Z}}\LARGE_{/}\large_{\langle(1,0)\rangle}$
It looks like you're classifying subgroups of $\mathbb Z/2\mathbb Z \times \mathbb Z/4\mathbb Z$, but remember there's a quotient. For example $(1, 0)$ represents the identity element in your group, but you have it listed as having order $2$. The first thing you should do is consider what that quotient does. Show that there is an isomorphism $(\mathbb Z/2\mathbb Z \times \mathbb Z/4\mathbb Z)/\langle(1, 0)\rangle \simeq \mathbb Z/4\mathbb Z$, classify the subgroups of $\mathbb Z/4\mathbb Z$, and then use the isomorphism to transfer this information over to $(\mathbb Z/2\mathbb Z \times \mathbb Z/4\mathbb Z)/\langle(1, 0)\rangle$.
Bounded sequence in Hilbert space contains weak convergent subsequence
Suppose $M$ bounds the sequence. Then, if we think of $H$ as sitting inside $H^{**}$, then for any $T \in H^\ast$ with $\|T\| \leq 1$, we have $\|x_n(T)\| = \|Tx_n\| \leq \|x_n\| \leq M$, so the operator norms of the $x_n$ thought of as operators on $H^\ast$ are bounded by $M$. Apply Banach-Alaoglu. (To the unit ball of $H^{\ast \ast}$.)
Factorization of linear operator
Define $u\equiv \sum_k \sqrt{s_k} x_k\otimes e_k\in\mathcal X\otimes\mathcal Z$, where $e_k$ are an orthonormal basis for $\mathcal Z$. Define $v\equiv \sum_k \sqrt{s_k} e_k\otimes y_k\in\mathcal Z\otimes\mathcal Y$. Then, you have $$(I_{\mathcal X}\otimes v^*)(u \otimes I_{\mathcal Y}) =\sum_{jk}\sqrt{s_j s_k} (I_{\mathcal X}\otimes e_j^*\otimes y_j^*) (x_k\otimes e_k\otimes I_{\mathcal Y}) =\sum_k s_k x_k\otimes y_k^*, $$ which is $A$ thought of as an element of $\mathcal X\otimes\mathcal Y^*$. To define $u,v$ you need a number of $e_k$ elements equal to the number of nonvanishing singular values $s_k$, which equals the rank of $A$, and thus $\dim\mathcal Z=\operatorname{rank}(A)$ is enough to accommodate this number of orthonormal vectors and write the matrix in this form.
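Loosely translating to plain matrix language, this is the rank factorization one gets from the SVD: split $A = U\Sigma V^*$ as $(U_r\sqrt{\Sigma_r})(\sqrt{\Sigma_r}V_r^*)$, so $A$ factors through a space of dimension $\operatorname{rank}(A)$. A NumPy sketch (my own notation, finite-dimensional real case):

```python
import numpy as np

rng = np.random.default_rng(0)
# a rank-2 operator A : X -> Y with dim X = 4, dim Y = 5
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))

U, s, Vh = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))                 # rank(A) = dim Z
B = U[:, :r] * np.sqrt(s[:r])              # plays the role of the factor Z -> Y
C = np.sqrt(s[:r])[:, None] * Vh[:r]       # plays the role of the factor X -> Z

assert r == 2
assert np.allclose(A, B @ C)               # A factors through a rank(A)-dimensional Z
```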
When can an infinite matrix have uniformly bounded rows and columns in $\ell^2$ but not be bounded on $\ell^2$.
Let $T$ be such that the $n$-th component of $Tx$ is $$(Tx)_n=\frac1n\sum_{j=1}^{n^2}x_j.$$ Say $x_j=1/N$ for $j\le N^2$, $0$ for $j>N^2$, so $||x||_2=1$. If $n\le N$ then $$(Tx)_n=\frac{1}{nN}n^2=\frac nN.$$And $$\sum_{n=1}^{N}\left(\frac nN\right)^2\ge cN\ne O(1),$$so $T$ is not bounded on $\ell_2$. (Haven't looked at it carefully, but I bet that for $p>1$ just taking $(Tx)_n=\frac1n\sum_{j=1}^{n^p}x_j$ works...) Edit: Yes, if $p>1$ and you define $T$ that way then, taking $x$ to be the $N$-th row of the matrix, if I did my sums correctly, you get $||Tx||_p^p\ge c N^{(p-1)^2}$.
Variational Distance vs. maximum norm
They are not the same, and the comments show some simple counter-examples. In fact, while $D(x) \leqslant 2 \| x\|$, the gap between these two quantities might be as large as $\Omega(n)$: e.g., take $x$ to be the vector containing $+1$ in half the coordinates, and $-1$ in the remaining half. Then, $D(x) = 2$ while $\| x \| = n/2$. What is equivalent to the variational distance is a new quantity $D'(x)$ defined as $$ D'(x) = \max_{I, J \subseteq [n]} \ | x(I) - x(J) |, $$ where we define $x(I) = \sum \limits_{i \in I} x_i$ (and $x(J)$ similarly). Without loss of generality, we may assume that $I$ and $J$ are disjoint in the above definition. This satisfies the identity $$ D'(x) = 2 \| x \|. $$ Proof. $D'(x) \leqslant \sum_{i} |x_i|$ holds by the triangle inequality. For the other direction, take $I = \{ i \,:\, x_i \geqslant 0 \}$ and $J = [n] \setminus I$. For this choice of $I$ and $J$, it holds that $| x(I) - x(J) | = \sum_i |x_i|$.
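For small vectors the identity $D'(x)=\sum_i |x_i|$ can be brute-forced over all pairs of index sets:

```python
from itertools import chain, combinations

def Dprime(x):
    """max over pairs of index sets I, J of |x(I) - x(J)| (brute force)."""
    idx = range(len(x))
    subsets = list(chain.from_iterable(combinations(idx, k)
                                       for k in range(len(x) + 1)))
    return max(abs(sum(x[i] for i in I) - sum(x[j] for j in J))
               for I in subsets for J in subsets)

for x in [(1, -2, 3), (1, 1, -1, -1), (0, 0, 5)]:
    assert Dprime(x) == sum(abs(t) for t in x)   # D'(x) = sum_i |x_i|
```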
How to show that the given algorithm generates every subset with equal probability?
I will not provide a full answer here, but rather give some tips to help you make a start. An important observation is as follows: the permutation $\pi$ can also be used to define a map from the subsets of $\{1, \dots, n\}$ of size $m$ to themselves by $C \mapsto \pi(C) = \{\pi(a) \mid a \in C \}$. Moreover, for any fixed pair $C$ and $\pi(C)$ there are exactly $m!(n-m)!$ permutations that send $C$ to $\pi(C)$ (see if you can prove this!). Since this count depends only on the size of the subset, the probability of obtaining any particular subset of size $m$ is the same. As a result, $\{\pi(1), \dots, \pi(m)\}$ is an equiprobable subset of $\{1, \dots, n\}$ of size $m$. Moreover, $B = \{\pi(m+1), \dots, \pi(n)\}$ is an equiprobable subset of size $n - m$ (why?). One final hint to help you gain some insight into this problem: what does the algorithm look like for $m = 0$? Do you indeed get every subset with the same probability? Try $m = 1$ afterwards. Does it still hold? Then try general $m$.
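If you want empirical reassurance before writing the proof, here is a quick simulation (assuming the algorithm in question is: draw a uniform random permutation and keep the first $m$ images):

```python
import random
from math import comb

def random_subset(n, m, rng):
    """First m images of a uniformly random permutation of {1, ..., n}."""
    perm = list(range(1, n + 1))
    rng.shuffle(perm)
    return frozenset(perm[:m])

rng = random.Random(1)
n, m, trials = 5, 2, 50000
counts = {}
for _ in range(trials):
    S = random_subset(n, m, rng)
    counts[S] = counts.get(S, 0) + 1

assert len(counts) == comb(n, m)              # every 2-subset occurs
for c in counts.values():                      # each with frequency ~ 1/C(5,2) = 0.1
    assert abs(c / trials - 1 / comb(n, m)) < 0.01
```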
Easy papers on fundamental groups (for beginners)
There are many books in Algebraic Topology that discuss the fundamental group without talking about homology (singular/cellular or simplicial). Resources I have used: Hatcher - Algebraic Topology (used in last semester's MATH 4204 at ANU) Bredon - Geometry and Topology Rotman - Algebraic Topology There is no doing research without going through the basics and slogging it out in understanding the full theory first.
Completeness of continuous real valued functions with compact support
Take a particular example of a continuous function that goes to $0$ at $\pm \infty$, and a sequence of continuous functions of compact support that converges uniformly to it. This is a Cauchy sequence ...
$2^{n!}\bmod n$ if $n$ is odd
For the first question, note that $\varphi(n)$ divides $n!$, and use Euler's Theorem. The $n$ even problem is more interesting. Let $n=2^km$ where $m$ is odd. Then by the result for odd moduli, we have $2^{n!}\equiv 1\pmod{m}$. Also, $2^{n!}\equiv 0\pmod{2^k}$. Now use the Chinese Remainder Theorem. Added: In more detail, we want to find a $t$ such that $1+tm$ is divisible by $2^k$. So we are looking at the congruence $tm\equiv -1\pmod{2^k}$. This can be solved in the usual way, by multiplying both sides by the inverse of $m$ modulo $2^k$.
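Both cases are easy to verify computationally for small $n$; Python's three-argument `pow` handles the enormous exponent $n!$ without materializing $2^{n!}$:

```python
from math import factorial

# odd n: phi(n) <= n divides n!, so Euler's theorem gives 2^{n!} ≡ 1 (mod n)
for n in range(3, 40, 2):
    assert pow(2, factorial(n), n) == 1

# even n = 2^k * m with m odd: 2^{n!} ≡ 0 (mod 2^k) and ≡ 1 (mod m),
# and the Chinese Remainder Theorem pins down the answer mod n from these two
for n in range(2, 40, 2):
    k, m = 0, n
    while m % 2 == 0:
        k, m = k + 1, m // 2
    v = pow(2, factorial(n), n)
    assert v % 2**k == 0 and v % m == 1 % m
```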
Propositional logic problem about a conversation of four people who lie or tell the truth
Normally for problems like this you are expected to assume that each person consistently lies or tells the truth. Then you can just assume one is a specific kind and see where that leads. Unfortunately, the first three statements cannot be assigned a consistent set of truth values. If A lies,B is truthful, and so is C, so A must tell the truth, contradicting our assumption. If A tells the truth, B lies, so C lies, so A lies.
Evaluating $\int 7^{2x+3} dx$
You do not show detail, but you presumably found $\int 7^u\,du$ by looking up a formula for $\int a^u\,du$. That is perfectly correct. But I would use a slightly different approach, which is a bit more complicated but does not rely on remembering $\int a^u\,du$. Note that $$7^{2x+3}=(e^{\ln 7})^{2x+3}=e^{(\ln7)(2x+3)}. \qquad\qquad (\ast)$$ Let $v=(\ln 7)(2x+3)$. Then $dv=(\ln 7)(2) \,dx$. Substituting, we find that $$\int e^{(\ln7)(2x+3)}\,dx=\int \frac{1}{2\ln 7}e^v\,dv=\frac{1}{2\ln 7}e^v+C.$$ Finally, by $(\ast)$, $e^v=7^{2x+3}$ so our integral is $$\frac{1}{2\ln 7}7^{2x+3}+C.$$
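As a check (my addition), differentiating the result with SymPy recovers the integrand:

```python
import sympy as sp

x = sp.symbols('x')
F = 7**(2*x + 3) / (2 * sp.log(7))        # the antiderivative found above
assert sp.simplify(sp.diff(F, x) - 7**(2*x + 3)) == 0
```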
Why does dividing the definite integral by the width give the average value?
That's easy to see: the definite (Riemann) integral is the limit of Riemann sums : $$\int_a^b f(x)\,\mathrm d\mkern1mu x=\lim_{n\to\infty}\sum_{k=1}^nf(\xi_k)\frac{b-a}n,\quad\text{where}\quad\xi_k\in[x_{k-1},x_k],\quad x_k=a+k\,\frac{b-a}n.$$ Thus $$\frac1{b-a}\int_a^b f(x)\,\mathrm d\mkern1mu x=\lim_{n\to\infty}\frac{\sum_{k=1}^nf(\xi_k)}n$$ is the limit, as $n$ tends to $\infty$, of the averages of $n$ values of the function $f(\xi_k)$.
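The point is visible in code: a Riemann sum divided by $b-a$ is literally the arithmetic mean of the sampled values. A small sketch (the helper name is my own):

```python
def average_value(f, a, b, n=100000):
    # right-endpoint Riemann sum over [a, b], divided by (b - a):
    # this is exactly the average of the n sampled values f(xi_k)
    width = (b - a) / n
    return sum(f(a + k * width) for k in range(1, n + 1)) / n

# average value of x^2 on [0, 3] is (1/(3-0)) * Int_0^3 x^2 dx = 9/3 = 3
assert abs(average_value(lambda x: x * x, 0.0, 3.0) - 3.0) < 1e-3
```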
Rank of $A=BC$, when ranks of $B,C$ are given.
If both $B$ and $C$ have rank $3$, then $\operatorname{rank}A=3$. In order to see why, note that $C.\mathbb{R}^5=\mathbb{R}^3$. Therefore, since $\operatorname{rank}B=3$, $A.\mathbb{R}^5=B.(C.\mathbb{R}^5)=B.\mathbb{R}^3$, which has dimension $3$. On the other hand, if $\operatorname{rank}B=\operatorname{rank}C=2$, $\operatorname{rank}A$ may be equal to $2$, but it may be smaller, too.
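A concrete NumPy instance of the "may be smaller" case, with matrices (my own choice) arranged so that the image of $C$ meets the kernel of $B$:

```python
import numpy as np

B = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 0]])                  # rank 2; kernel contains e3

C = np.zeros((3, 5))
C[1, 0] = 1                                # image of C is span(e2, e3),
C[2, 1] = 1                                # which meets the kernel of B

assert np.linalg.matrix_rank(B) == 2
assert np.linalg.matrix_rank(C) == 2
assert np.linalg.matrix_rank(B @ C) == 1   # rank of the product drops below 2
```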
What is the value of a?
Note, $$\frac{8-i}{3-2i}=\frac{8-i}{3-2i}\cdot\frac{3+2i}{3+2i}=\frac{(24+2)+(16-3)i}{3^2-(2i)^2}=\frac{26+13i}{13}=2+i$$
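The arithmetic can be double-checked with Python's built-in complex numbers (`1j` is $i$):

```python
z = (8 - 1j) / (3 - 2j)
assert abs(z - (2 + 1j)) < 1e-12   # (8 - i)/(3 - 2i) = 2 + i
```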
Constants in a differential equation.
Generally speaking, the constants are real. We are typically looking for real solutions. This is true regardless of whether or not you are dealing with ODE's (Ordinary Differential Equations) or with PDE's (Partial Differential Equations). In addition, regardless of the order of the differential equation, or whether it's homogeneous or inhomogeneous, we are looking for real solutions! There are some cases where you study complex solutions, but this is not usually material that should be studied when you are first learning differential equations, work with real numbers only!
can you consider a series to be a sequence of sums?
Yes in fact that is what a series is considered to be. When you ask about convergence of a series all you are really asking about is the convergence of its sequence of partial sums.
Contour integrals using residues
It's a third order pole, so you need to use the higher order formula if you aren't doing a series expansion. $$Res(f,c)=\frac{1}{(n-1)!}\lim_{z\rightarrow c}\frac{d^{n-1}}{dz^{n-1}}((z-c)^{n}f(z))$$ So in this case, $$Res(f,e^{\frac{i\pi}{3}})=\frac{1}{2!}\lim_{z\rightarrow e^{\frac{i\pi}{3}}}\frac{d^{2}}{dz^{2}}\left(\frac{(z-e^{\frac{i\pi}{3}})^{3}}{(1+z^{3})^{3}}\right)\\=\frac{1}{2}\lim_{z\rightarrow e^{\frac{i\pi}{3}}}\frac{d^{2}}{dz^{2}}\left(\frac{1}{(1+z)^{3}(z-1+e^{\frac{i\pi}{3}})^{3}}\right)\\=\frac{1}{2}\lim_{z\rightarrow e^{\frac{i\pi}{3}}}\frac{-3+3i\sqrt{3}+42ze^{\frac{i\pi}{3}}+42z^{3}}{(1+z)^{5}(z-1+e^{\frac{i\pi}{3}})^{5}}\\=-\frac{5}{27}e^{\frac{i\pi}{3}}$$
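As a sanity check (my addition), the same second-derivative computation can be carried out in SymPy, using the factorization $1+z^3=(z+1)(z-e^{i\pi/3})(z-e^{-i\pi/3})$ implicit above (note $z-1+e^{i\pi/3} = z-e^{-i\pi/3}$):

```python
import sympy as sp

z = sp.symbols('z')
c = sp.exp(sp.I * sp.pi / 3)               # the third-order pole
cbar = sp.exp(-sp.I * sp.pi / 3)

# (z - c)^3 / (1 + z^3)^3 = 1 / ((z + 1)^3 (z - cbar)^3),
# using 1 + z^3 = (z + 1)(z - c)(z - cbar)
g = 1 / ((z + 1)**3 * (z - cbar)**3)

res = sp.simplify(sp.diff(g, z, 2).subs(z, c) / sp.factorial(2))
target = -sp.Rational(5, 27) * sp.exp(sp.I * sp.pi / 3)
assert abs(complex(sp.N(res - target))) < 1e-12
```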
What is the "conjugacy problem for differentiable maps"?
I think you'll get more results if you search for "topological conjugacy problem," or "conjugacy problem for diffeomorphisms." Why not have a look, and then let us know what you have found?
For compact Lie groups, is $[\mathfrak{g},\mathfrak{g}]=\mathfrak{g}$ equivalent to being semisimple?
Yes, a connected compact Lie group is the product of a semi-simple Lie group and a torus $T^n$, thus its Lie algebra $\mathfrak{g}$ is the sum of a semi-simple Lie algebra $\mathfrak{s}$ and a commutative algebra $\mathfrak{c}$ such that $[\mathfrak{s},\mathfrak{c}]=0$, and thus $[\mathfrak{s}+\mathfrak{c},\mathfrak{s}+\mathfrak{c}]=\mathfrak{s}$. https://en.wikipedia.org/wiki/Compact_Lie_algebra#Definition
Functions in the Structure Sheaf of an Affine Scheme on Arbitrary Open Sets
This is just a straightforward computation. Let's first identify what the localization of $A$ at the set of functions which do not vanish away from the common origin is. I claim that the collection of functions $S\subset A$ which do not vanish away from the common origin is just the units of $A$. To do the interesting inclusion, note that the vanishing locus of a function on an affine scheme is either empty (units), everything (nilpotents), or purely codimension one (nonunit nonnilpotents) by Krull's Hauptidealsatz. So $f$ vanishes nowhere, so it's a unit, and thus $S^{-1}A=A$ because all of $S$ is already invertible. On the other hand, our function which is $0$ and $1$ is not an element of $A$: such a function would have to simultaneously have the value $0$ and $1$ at the origin, which is obviously wrong. The reason this counterexample fails with $\Bbb A^1$ instead of $\Bbb A^2$ is that the point of intersection is pure codimension one inside two copies of $\Bbb A^1$ meeting at point, so the argument with Krull's Hauptidealsatz doesn't work and we really can just find functions which vanish just at the point of intersection (if we write the two copies of $\Bbb A^1$ glued together as $\operatorname{Spec} k[x,y]/(xy)$, we can take $x+y$, for instance).
Non-Abelian groups exact sequences, right split and left-split are different?
For examples that split on the right but not on the left, one can take any right split short exact sequence where $H$ is finitely generated but $G$ is not. For example, start with the free group $F\langle a, b \rangle$ with free basis $a,b$, and let $h : F\langle a, b \rangle \to \mathbb Z \oplus \mathbb Z = \langle a\rangle \oplus \langle b\rangle$ be the abelianization homomorphism, let $\pi_a : \langle a\rangle \oplus \langle b\rangle \to \langle a\rangle = \mathbb Z$ be the projection homomorphism, and consider the composition $h_a = \pi_a \circ h : F \langle a,b \rangle \to \mathbb Z$, which is surjective. Let $K = \text{Kernel}(h_a)$. We get a short exact sequence $$1 \to K \to F \langle a,b \rangle \to \mathbb Z \to 1 $$ However, $K$ is not finitely generated, so there is no retraction homomorphism $F \langle a,b \rangle \to K$. On the other hand, every short exact sequence whose quotient is $\mathbb Z$ does split on the right, since $\mathbb Z$ is free. There are also examples that split on the right but not on the left where $G$ and $H$ are both finitely generated. The key idea here is that if a subgroup injection $G \hookrightarrow H$ is split, i.e. if there is a retraction $H \to G$, then that injection is undistorted with respect to word metrics on $G$ and on $H$. This means that if I pick a generating set $S$ for $G$ and if I extend $S$ to a generating set $T$ for $H$, and if I pick $g \in G$, then the length of the shortest word in the generators $S$ that represents $g$ is comparable (up to a constant multiplicative bound) to the length of the shortest word in the generators $T$ that represents $g$. Now let's take this solvable group expressed as a semidirect product $$1 \to\mathbb Z \oplus \mathbb Z \to H \to \mathbb Z \to 1 $$ where the action of $\mathbb Z$ on $\mathbb Z \oplus \mathbb Z$ is generated by the matrix $\begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}$.
One can show without too much trouble that the normal subgroup injection $\mathbb Z \oplus \mathbb Z \hookrightarrow H$ is distorted, hence this example does not split on the left. But it does split on the right, because again the quotient group is infinite cyclic.
The integral of a limit of simple functions
Here's a sketch of what you should be doing: Consider the simple function $$ \phi_n(x)=\begin{cases} m/2^n,&m/2^n\leq f(x)<(m+1)/2^n \\ n,&f(x)\geq n \end{cases} $$ This function is a simple function with $\phi_n(x)\leq g_n(x)\leq f(x)$ for all $x$. It should be clear that for $x$ such that $f(x)<\infty$, for sufficiently large $n$, we have $|f(x)-\phi_n(x)|\leq 2^{-n}$ (eventually, $n>f(x)$ so $m/2^n\leq f(x)< (m+1)/2^n$ will hold). Hence, $\phi_n(x) \uparrow f(x).$ If $f(x)=\infty$, then $\phi_n(x)=n \uparrow f(x)$. Now, using the definition $\displaystyle \int f=\sup_{\substack{0\leq s \leq f \\ s \text{ simple}}}\int s,$ it is easy to prove that $\displaystyle \int f=\lim_{n\to\infty} \int \phi_n$. Without knowing dominated convergence theorem, monotone convergence theorem, etc. the aforementioned limit (which may be $+\infty$) still exists since $\int \phi_n$ is a monotone increasing sequence of real numbers. Finally, the trick is observing that $$\{x: f(x)\geq n\}=\cup_{m=n2^n}^\infty E_{n,m}.$$ Hence, although $\phi_n$ is simple, it has the representation $$ \phi_n=\sum_{m=0}^{n2^n-1}m/2^n \mathbb{1}_{E_{n,m}}+\sum_{m=n2^n}^\infty n \mathbb{1}_{E_{n,m}}. $$ See if you can fill in the details and apply a squeeze theorem argument.
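The dyadic approximation above can be tested numerically; here is a small sketch with the sample function $f(x)=x^2$ (my choice, not from the question), checking monotonicity and the $2^{-n}$ error bound:

```python
# phi_n(v): floor(v * 2^n) / 2^n, capped at n -- the dyadic
# simple-function approximation from the answer, applied to v = f(x).
import math

def phi(n, v):
    if v >= n:
        return n
    return math.floor(v * 2 ** n) / 2 ** n

f = lambda x: x * x
xs = [i / 100 for i in range(301)]        # sample points in [0, 3]
for n in range(1, 12):
    for x in xs:
        v = f(x)
        assert phi(n, v) <= phi(n + 1, v) <= v     # increasing, below f
        if v < n:
            assert v - phi(n, v) <= 2 ** -n        # dyadic error bound
```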
Alternating Recurrence relation $a_n = b_{n-1} + 5$ and $b_n = na_{n-1}$
Start with the recursion for $a_n$ : $$ a_n=(n-1)a_{n-2}+5 $$ For the even terms, let $c_n=\dfrac{a_{2n}}{(2n-1)!!}$ and we get $$ \begin{align} a_{2n}&=(2n-1)a_{2n-2}+5\\ (2n-1)!!\,c_n&=(2n-1)!!\,c_{n-1}+5\\ c_n&=c_{n-1}+\frac5{(2n-1)!!} \end{align} $$ Thus, $$ \begin{align} a_{2n} &=(2n-1)!!\left(1+5\sum_{k=1}^n\frac1{(2k-1)!!}\right)\\ &=\left\lfloor(2n-1)!!\,\left(1+5\sqrt{\frac{e\pi}{2}}\mathrm{erf}\left(\frac1{\sqrt2}\right)\right)\right\rfloor&&\text{for }n\ge3 \end{align} $$ For the odd terms, let $d_n=\dfrac{a_{2n+1}}{(2n)!!}$ and we get $$ \begin{align} a_{2n+1}&=2na_{2n-1}+5\\ (2n)!!\,d_n&=(2n)!!\,d_{n-1}+5\\ d_n&=d_{n-1}+\frac5{(2n)!!} \end{align} $$ Thus, $$ \begin{align} a_{2n+1} &=(2n)!!\left(6+5\sum_{k=1}^n\frac1{(2k)!!}\right)\\ &=\left\lfloor(2n)!!\,\left(1+5\sqrt{e}\right)\right\rfloor&&\text{for }n\ge2 \end{align} $$ Thus, for indices greater than $4$, we get the closed formulae $$ \begin{align} a_{2n}&=\left\lfloor c_{\text{even}}\,(2n-1)!!\right\rfloor&&\text{for }n\ge3\\ a_{2n+1}&=\left\lfloor c_{\text{odd}}\,(2n)!!\right\rfloor&&\text{for }n\ge2\\ \end{align} $$ where $$ \begin{align} c_{\text{even}}&=1+5\sqrt{\frac{e\pi}{2}}\mathrm{erf}\left(\frac1{\sqrt2}\right)&&=8.05343067321223998845412355710\\ c_{\text{odd}}&=1+5\sqrt{e}&&=9.24360635350064073424325393907\\ \end{align} $$
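The floor formulae can be checked against the recursion directly; the initial values $a_0=1$, $a_1=6$ below are an assumption consistent with the constants in the sums (the question's initial data are not restated here):

```python
# Verify a_{2n} = floor(c_even (2n-1)!!) and a_{2n+1} = floor(c_odd (2n)!!)
# against a_n = (n-1) a_{n-2} + 5, assuming a_0 = 1, a_1 = 6.
import math

c_even = 1 + 5 * math.sqrt(math.e * math.pi / 2) * math.erf(1 / math.sqrt(2))
c_odd = 1 + 5 * math.sqrt(math.e)

def dfact(n):
    """Double factorial, with dfact(0) = dfact(-1) = 1."""
    return 1 if n <= 0 else n * dfact(n - 2)

a = [1, 6]
for n in range(2, 20):
    a.append((n - 1) * a[n - 2] + 5)

for n in range(3, 10):                  # even indices, n >= 3
    assert a[2 * n] == math.floor(c_even * dfact(2 * n - 1))
for n in range(2, 9):                   # odd indices, n >= 2
    assert a[2 * n + 1] == math.floor(c_odd * dfact(2 * n))
```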
Two integers $a$ and $b$ are coprime, is it possible that $a \mid b$?
Assuming $a$ and $b$ are positive: If $a\mid b$, then, since $a\mid a$, $\gcd(a,b)\geqslant a$. So, unless $a=1$, no, you cannot have both $\gcd(a,b)=1$ and $a\mid b$.
Does there exist a vector field tangent to a given curve?
You can almost always do this, as long as you do not object to the vector field needing to become $0$ at times. One case where it becomes impossible is where $\gamma$ follows the $y$ axis along the segment with $-1 \leq y \leq 1,$ then $\gamma$ goes in a curved arc out to the positive $x$ axis, and returns along the curve $y = \sin \left( \frac{1}{x} \right)$ with $x > 0.$ The trouble then is that there are parts of the curve arbitrarily close to the $y$ axis where the tangent is pointing almost straight up, other parts of the curve arbitrarily close to the $y$ axis where the tangent is pointing almost straight down. So there is no continuous vector field extending $\gamma'$ in a neighborhood of that segment of the $y$ axis. It can always be done if, along a short arc of $\gamma,$ a very narrow tubular neighborhood contains no other points of $\gamma.$ In that case you can extend $\gamma'$ orthogonally times a bump function, as Sammy Black points out. Note that, even in very pleasant situations, the extended vector field may be forced to become $0$ somewhere anyway. This is the case if $\gamma$ is the unit circle, travelling counterclockwise. Somewhere inside, any vector field that extends $\gamma'$ becomes zero. This is essentially the Brouwer fixed point theorem: there is no continuous retraction of the unit disk onto the unit circle.
An interesting series to test convergence
$|n^{-3}+n^{-2}\cos n|\le 2n^{-2}$, and $|\sin x|\le |x|$; hence the terms of the series are bounded in absolute value by the respective terms of the convergent series $\sum 2n^{-2}$, and the series converges absolutely.
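If, as the bounds suggest, the series in question is $\sum_n \sin\left(n^{-3}+n^{-2}\cos n\right)$ (my reading; the question statement is not reproduced here), the comparison can be checked numerically:

```python
# Term-by-term domination by 2/n^2 and smallness of the tails.
import math

terms = [math.sin(n ** -3 + n ** -2 * math.cos(n)) for n in range(1, 20001)]
for n, t in enumerate(terms, start=1):
    assert abs(t) <= 2 / n ** 2          # |sin x| <= |x| <= 2/n^2
tail = sum(abs(t) for t in terms[10000:])
assert tail < 2e-4                       # tails of sum 2/n^2 vanish
```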
Prove $H_{\frac{1}{8}}=\int_{0}^{1}\frac{x^{\frac{1}{8}}-1}{x-1}dx$
Let $t=x^{\frac{1}{8}}$. Then, $$H_{\frac18}=\int_{0}^{1}\frac{x^{\frac{1}{8}}-1}{x-1}dx =8\int_{0}^{1}\frac{t^8-t^7}{t^8-1}dt =8-8\int_{0}^{1}\frac{t^7-1}{t^8-1}dt $$ Decompose the integrand $$\frac{t^7-1}{t^8-1} = \frac14\frac{1}{t+1}+\frac14\frac{t+1}{t^2+1}+\frac12\frac{t^3+1}{t^4+1}$$ and express the integral as $$H_{\frac18}=8-2\ln 2 - 2I_1 - 4I_2\tag 1$$ where $$I_1 = \int_{0}^{1} \frac{t+1}{t^2+1}dt=\left(\frac12\ln(t^2+1)+\tan^{-1}t\right)_0^1 =\frac12\ln2+\frac\pi4\tag 2$$ $$I_2 = \int_{0}^{1} \frac{t^3+1}{t^4+1}dt=\frac14\ln2+\int_{0}^{1} \frac{1}{t^4+1}dt$$ Integrate $$\int_{0}^{1} \frac{2}{t^4+1}dt= \int_0^1\frac{1+t^2}{t^4+1} dt + \int_0^1\frac{1-t^2}{t^4+1} dt$$ $$= \int_0^1\frac{\frac1{t^2}+1}{t^2+\frac1{t^2}} dt + \int_0^1\frac{\frac1{t^2}-1}{t^2+\frac1{t^2}} dt = \int_0^1\frac{d(t-\frac1{t})}{(t-\frac1{t})^2+2} - \int_0^1\frac{d(t+\frac1{t})}{(t+\frac1{t})^2-2}$$ $$=\left(\frac1{\sqrt2} \tan^{-1}\frac{t^2-1}{\sqrt2t} + \frac1{\sqrt2} \coth^{-1}\frac{t^2+1}{\sqrt2t} \right)\bigg|_0^1=\frac\pi{2\sqrt2}+\frac1{2\sqrt2}\ln\frac{\sqrt2+1}{\sqrt2-1}$$ Then, $$I_2 =\frac14\ln2+\frac\pi{4\sqrt2}+\frac1{4\sqrt2}\ln\frac{\sqrt2+1}{\sqrt2-1}\tag 3$$ Plug (2) and (3) into (1), $$H_{\frac{1}{8}}=8-\frac{\pi}{2}-4\ln\left(2\right)-\frac{1}{\sqrt{2}}\left(\pi+\ln\left(2+\sqrt{2}\right)-\ln\left(2-\sqrt{2}\right)\right)$$
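As a sanity check (numerics only, not part of the derivation), the closed form agrees with a direct midpoint-rule evaluation of the defining integral:

```python
# Compare the closed form for H_{1/8} against the integral
# of (x^{1/8}-1)/(x-1) over (0,1); the integrand extends
# continuously to the endpoints, so the midpoint rule suffices.
import math

closed = (8 - math.pi / 2 - 4 * math.log(2)
          - (math.pi + math.log(2 + math.sqrt(2)) - math.log(2 - math.sqrt(2)))
          / math.sqrt(2))

N = 100000
integral = sum((((k + 0.5) / N) ** 0.125 - 1) / ((k + 0.5) / N - 1)
               for k in range(N)) / N
assert abs(integral - closed) < 1e-4
```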
Free abelian subgroup of index 2.
No further information is needed, as this group, as noted in the comments, is (isomorphic to) the infinite dihedral group, and thus has a free abelian subgroup $F$ of rank one (isomorphic to the integers, that is) and index $2$, hence normal. This is $\langle x y \rangle$, as $(x y)^x = (x y)^y = y x = (x y)^{-1}$.
Let $\mathbb{F}$ be any field. Show that the number of cube-roots of unity in $\mathbb{F}$ is either $1$ or $3$.
First of all, there are at most $3$ cube roots of unity because the degree-$3$ polynomial $x^3-1$ has at most $3$ roots in a field. Clearly $1$ is one of them, so it remains to show that there cannot be exactly two. Suppose there are at least two cube roots of unity: $1$ (necessarily) and $\omega \neq 1$. Then $\omega^2$ is also a cube root of unity, since $(\omega^2)^3-1 = (\omega^3-1)(\omega^3+1) = 0$, and it is distinct from the other two: $\omega^2 \neq 1$, since together with $\omega^3=1$ that would force $\omega=1$; and $\omega^2 \neq \omega$, since that would also force $\omega = 1$. Hence there are three.
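An empirical check over small prime fields (the $p\equiv1\pmod 3$ refinement is an extra observation, not needed for the proof):

```python
# Count cube roots of 1 in F_p: always 1 or 3,
# and 3 exactly when p = 1 (mod 3).
def cube_roots_of_unity(p):
    return [x for x in range(p) if pow(x, 3, p) == 1]

for p in [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31]:
    k = len(cube_roots_of_unity(p))
    assert k in (1, 3)
    assert (k == 3) == (p % 3 == 1)
```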
Evaluating $\sum_{n=1}^{\infty} \frac{4(-1)^n}{1-4n^2}$
$$\frac{4}{1-4 n^2} = \frac{2}{1-2 n} + \frac{2}{1+2 n}$$ Thus, $$\begin{align}\sum_{n=1}^{\infty} \frac{4 (-1)^n}{1-4 n^2} &= 2 \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{2 n-1}+2 \sum_{n=1}^{\infty} \frac{(-1)^n}{2 n+1} \\ &= 2 \frac{\pi}{4} + 2 \left (\frac{\pi}{4}-1 \right ) \\ &= \pi-2 \end{align}$$
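A quick numerical check of the value:

```python
# Partial sums of the series converge to pi - 2; the terms decay
# like 1/n^2, so 200000 terms give the value to high accuracy.
import math

s = sum(4 * (-1) ** n / (1 - 4 * n * n) for n in range(1, 200001))
assert abs(s - (math.pi - 2)) < 1e-9
```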
Probability of no Ace but at least one King
Answer to the edited version of the question: Find the probability that a five card hand contains no aces but at least one king. As you know, the number of five card hands is $\binom{52}{5}$ since we are selecting five cards from a deck with fifty-two cards. To calculate the number of hands that contain no aces but at least one king, we subtract the number of hands that contain neither aces nor kings from the number of hands that contain no aces. Since there are four aces in a deck of $52$ cards, the number of cards that are not aces is $52 - 4 = 48$. Therefore, the number of five card hands that contain no aces is $\binom{48}{5}$ since we must select five cards from the $48$ cards that are not aces. Since there are four aces and four kings in a $52$ card deck, the number of cards that are neither aces nor kings is $52 - 2 \cdot 4 = 44$. Therefore, the number of ways of selecting five cards from the deck that are neither aces nor kings is $\binom{44}{5}$ since we must select five cards from the $44$ cards that are neither aces nor kings. Hence, the number of five card hands that contain no aces but at least one king is $$\binom{48}{5} - \binom{44}{5}$$ from which we obtain the probability $$\frac{\dbinom{48}{5} - \dbinom{44}{5}}{\dbinom{52}{5}}$$ that a hand contains no aces but at least one king. Answer to the original version of the question: Your approach to both problems is correct, but there are some mistakes in the execution. Find the probability that a five card hand contains no ace but at least one king.
In your first approach, as JMoravitz pointed out in the comments, the last term in the numerator should be $\binom{4}{4}\binom{44}{1}$ since you are selecting four kings and one additional card from the $52 - 2 \cdot 4 = 44$ cards that are neither aces nor kings, so the probability is $$\frac{\dbinom{4}{1}\dbinom{44}{4} + \dbinom{4}{2}\dbinom{44}{3} + \dbinom{4}{3}\dbinom{44}{2} + \dbinom{4}{4}\dbinom{44}{1}}{\dbinom{52}{5}}$$ Your second approach is correct. It also, as JMoravitz pointed out in the comments, is more efficient. It has the added virtue of containing fewer steps and, consequently, fewer opportunities to make an error than your first approach. Find the probability that a five card hand contains no aces and no queens but at least one king. Your approach of subtracting the number of hands that contain no aces, no queens, and no kings from the number of hands that contain no aces and no queens is correct. However, there are $52 - 3 \cdot 4 = 40$ cards that are neither aces, kings, nor queens. Hence, the probability is $$\frac{\dbinom{44}{5} - \dbinom{40}{5}}{\dbinom{52}{5}}$$
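Both counts can be confirmed to agree (using $\binom{4}{3}\binom{44}{2}$ for the three-king case):

```python
# The complement count C(48,5) - C(44,5) equals the casewise sum
# over the number of kings in the hand.
from math import comb

by_complement = comb(48, 5) - comb(44, 5)
by_cases = (comb(4, 1) * comb(44, 4) + comb(4, 2) * comb(44, 3)
            + comb(4, 3) * comb(44, 2) + comb(4, 4) * comb(44, 1))
assert by_complement == by_cases == 626296
```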
Is the number 0.251 reachable in the binary number system (with decimals)?
A finite decimal number can be expressed as $x/10^m$, for some integers $x$ and $m\ge0$. Similarly, a (rational) number has a finite $b$-adic expansion if and only if it can be expressed as $y/b^n$, for some integers $y$ and $n\ge0$. Now, suppose $$ \frac{251}{1000}=\frac{y}{2^n} $$ Then $$ 251\cdot2^n=1000y $$ which is a contradiction, because the right-hand side is divisible by $5$ and the left-hand side isn't. The converse, however, is true: every finite binary expansion can be converted to a finite decimal expansion because $$ \frac{y}{2^n}=\frac{y\cdot5^n}{10^n} $$ A rational number always has a repeating (or finite) expansion in every (integer) basis.
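The criterion generalizes: a reduced fraction terminates in base $b$ iff every prime factor of its denominator divides $b$. Here is a small sketch (the helper name is my own):

```python
# Decide whether a fraction has a terminating base-b expansion by
# stripping from the reduced denominator every factor shared with b.
from fractions import Fraction
from math import gcd

def finite_in_base(q, b):
    d = q.denominator            # Fraction reduces automatically
    g = gcd(d, b)
    while g > 1:
        while d % g == 0:
            d //= g
        g = gcd(d, b)
    return d == 1

assert finite_in_base(Fraction(251, 1000), 10)     # 0.251 in decimal
assert not finite_in_base(Fraction(251, 1000), 2)  # but not in binary
assert finite_in_base(Fraction(3, 8), 10)          # 2^n always works in base 10
```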
Inequality with a (finite) sequence of numbers
First off, rearrange the first inequality to $$(1-C_2)a_n\leq(C_1+C_2)a_0+C_2\sum_{m=1}^{n-1}a_m$$ Then, by induction, replace $a_1$ to $a_{n-1}$ with their version of the second inequality.
Find the 3rd degree polynomial having a trigonometric number as a root
MORE DETAILS You want a relation involving only $\;c:=\cos\left(\frac{\pi}7\right)$, the real part of $\,x=e^{i\pi/7}$, which satisfies: $$x^6-x^5+x^4-x^3+x^2-x+1 = 0$$ Divide by $x^3$ to get : \begin{align} \tag{1}&\bigl(x^3+x^{-3}\bigr)-\bigl(x^2+x^{-2}\bigr)+\bigl(x^1+x^{-1}\bigr)=1\\ \end{align} But \begin{align} \tag{2}(2\;c)=\left(x+\frac 1x\right)^1&=\bigl(x^1+x^{-1}\bigr)\\ \tag{3}(2\;c)^2=\left(x+\frac 1x\right)^2&=\bigl(x^2+x^{-2}\bigr)+2\\ \tag{4}(2\;c)^3=\left(x+\frac 1x\right)^3&=\bigl(x^3+x^{-3}\bigr)+3\,\bigl(x^1+x^{-1}\bigr)\\ \end{align} so that everything may be written as a function of $\,c\,$ only : from $(2)$ and $(4)$ deduce $\,\bigl(x^3+x^{-3}\bigr)=(2\;c)^3-3\,(2\;c)\,$, from $(3)$ deduce $\,\bigl(x^2+x^{-2}\bigr)$ from $(2)\ \cdots$ Conclude!
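Carrying out the substitution (this completion is mine; the answer leaves it as an exercise) yields the cubic $8c^3-4c^2-4c+1=0$, which a quick numerical check confirms:

```python
# Verify that c = cos(pi/7) satisfies 8c^3 - 4c^2 - 4c + 1 = 0.
import math

c = math.cos(math.pi / 7)
assert abs(8 * c ** 3 - 4 * c ** 2 - 4 * c + 1) < 1e-12
```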
Limit of complex function ${z^2\over |z|}$ as z tends to $z_0$, $z$ not equal to 0.
One obvious way is to use the fact that the limit of a quotient is the quotient of the limits, as long as the limit of the denominator is not $0$; here the denominator $|z|$ tends to $|z_0|\neq 0$ since $z_0\neq 0$.
$\ x^4-7x^3+\left(13+m\right)x^2-\left(3+4m\right)x+m=0 $
Since $2+\sqrt3$ is a root (and the coefficients are rational, so its conjugate $2-\sqrt3$ is a root too), the polynomial is divisible by $x^2-4x+1$: $$m(x^2-4x+1)+x(x^2-4x+1)(x-3)=0,$$ that is, the quartic factors as $(x^2-4x+1)(x^2-3x+m)$. Now, in the equation $x^2-3x+m=0$ we need $x_3+x_4=3$, which gives $x_3=2$ and $x_4=1$ and $m=2$.
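The factorization and the claimed roots for $m=2$ can be verified by direct computation:

```python
# Check (x^2-4x+1)(x^2-3x+m) = x^4-7x^3+(13+m)x^2-(3+4m)x+m for all m,
# then check the roots 1, 2, 2±sqrt(3) when m = 2.
import math

def poly_mul(p, q):                  # coefficient lists, constant term first
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

for m in range(-5, 6):
    lhs = [m, -(3 + 4 * m), 13 + m, -7, 1]
    assert poly_mul([1, -4, 1], [m, -3, 1]) == lhs

quartic = lambda x: x ** 4 - 7 * x ** 3 + 15 * x ** 2 - 11 * x + 2   # m = 2
for r in (1, 2, 2 + math.sqrt(3), 2 - math.sqrt(3)):
    assert abs(quartic(r)) < 1e-9
```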
Showing that the function is measurable.
HINT: Let's do a simple case: $\mu( A) = \int_{A} f(t)\, dt$ for some bounded Borel function $f$ and $E=[0,1]$. We have $$x\mapsto \phi(x):= \mu(E+x) = \int_x^{x+1} f(t)\, dt$$ If $f$ is continuous it is easy to show that $\phi(x)$ is continuous (in fact $C^1$). Now for every bounded Borel function $f$ there exists a sequence of continuous functions $f_n{\longrightarrow } f$ a.e. with moreover $|f_n| \le M$ (the same constant bounding $f$). By the dominated convergence theorem $$\int_x^{x+1} f_n(t)\, dt \to \int_x^{x+1} f(t)\, dt$$ for every $x$. We may cover in this way the case of absolutely continuous measures. We'll now give a proof that works in general. Let $\mu$ be a sigma-finite Borel measure on $\mathbb{R}$. Let $K$ be a compact set. Let's show that the function $x\mapsto \mu(K+x)$ is upper semicontinuous. Indeed, let $m > \mu(K)$. There exists an open set $U \supset K$ such that $\mu(U) < m$. Now, for $|x|$ small we'll have $K+x\subset U$. It follows that $\mu(K+x) < m$. It follows that the function is also Borel measurable. Let now $U$ be open. There exists a sequence of compact subsets $K_n$ so that $K_n \subset U$ and the $K_n$ exhaust $U$. It follows that the functions $x \mapsto \mu(K_n + x)$ converge pointwise to $x \mapsto \mu (U+x)$. Hence this function is also Borel measurable. Let's consider now the set of all Borel sets $E$ so that the corresponding function is Borel. This forms a Dynkin system (not hard to show). It follows that this set contains the sigma algebra generated by the open sets, thus all the Borel sets.
Prove that $\mathbb{F}_5[X]/(X^2+3)$ is a field with 25 elements
You probably saw a theorem that states that for any field $F$ and irreducible polynomial $P(X)\in F[X]$, the quotient ring $F[X]/(P(X))$ is a field. So, you need to establish that your polynomial $X^2+3$ is irreducible over the field $F=\mathbb F_5$. Since the polynomial has degree $2$, all you need to do is verify that it has no roots in the field. Since there are only $5$ elements in the field, it's quite easy to check them one by one. For the count: the elements of the quotient are represented by $a+bX$ with $a,b\in\mathbb F_5$, giving $5^2=25$ elements.
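A direct check, together with a brute-force verification that the $25$-element quotient really is a field (modeling the relation $X^2\equiv-3\equiv2 \pmod 5$):

```python
# x^2 + 3 has no root mod 5, hence is irreducible over F_5.
roots = [x for x in range(5) if (x * x + 3) % 5 == 0]
assert roots == []

# Model F_5[X]/(X^2+3) as pairs (a, b) <-> a + b*X with X^2 = 2 mod 5,
# and check every nonzero element has a multiplicative inverse.
def mul(u, v):
    (a, b), (c, d) = u, v
    return ((a * c + 2 * b * d) % 5, (a * d + b * c) % 5)

elems = [(a, b) for a in range(5) for b in range(5)]
assert len(elems) == 25
for u in elems:
    if u != (0, 0):
        assert any(mul(u, v) == (1, 0) for v in elems)
```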
Unbounded linear operator
Take $A=B$ to be the set of complex sequences with finitely many nonzero terms: $$ A=B=\{\,\{x_n\}\,:\ \exists m\in\mathbb N\ \text{ with }x_n=0\ \forall n\geq m\}. $$ Equip both with the supremum norm ($\|x\|=\max\{|x_1|,|x_2|,\ldots\}$). Define $$ T(x_1,x_2,\ldots,x_n,0,0,\ldots)=(x_1,2x_2,3x_3,\ldots,nx_n,0,0,\ldots). $$ Then $T$ is linear. And, if $e$ is the sequence $(0,\ldots,0,1,0,0,\ldots)$ (the 1 in the $n^{\rm th}$ position), then $\|e\|=1$ and $$ \|Te\|=n. $$ As we can do this for every $n$, $\|T\|=\infty$. As you can see, here $\|Tx\|<\infty$ for all $x$. Finally, you ask about $T(0)$; for a linear operator, $T(0)=0$ always (bounded or unbounded, it doesn't matter). For a different and maybe more natural example, consider $A=B$ the set of polynomials as a subset of $C[0,1]$, and let $$ Tp=p' $$ be the differentiation operator. This is an unbounded operator, since $\|x^n\|=1$ and $\|T(x^n)\|=n$ for all $n$.
Diagonal power series is holonomic
Hint: Using the coefficient-of operator $[z^n]$ to denote the coefficient of $z^n$ in a series, we can extract the diagonal from $F(x,y)$ via \begin{align*} G(t)=[y^0]F\left(\frac{t}{y},y\right)=\sum_{n} a_{n,n}t^n \end{align*} Related information can be found in this paper. Interesting information about diagonalisation is also given in this MO post.
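To see the coefficient extraction in action, take the standard example $F(x,y)=\frac1{1-x-y}$ (my choice, not from the hint), whose diagonal is $\sum_n\binom{2n}{n}t^n=1/\sqrt{1-4t}$. The operator $[y^0]$ can be realized numerically as a mean over a circle $|y|=r$ lying in the annulus where the Laurent expansion converges:

```python
# [y^0] F(t/y, y) as a discretized contour mean over |y| = r;
# for F = 1/(1-x-y) this should reproduce 1/sqrt(1-4t).
import cmath, math

def diagonal(t, r=0.5, N=4096):
    s = 0j
    for k in range(N):
        y = r * cmath.exp(2j * math.pi * k / N)
        s += 1 / (1 - t / y - y)
    return (s / N).real

t = 0.1                         # need |t|/r + r < 1 for convergence
assert abs(diagonal(t) - 1 / math.sqrt(1 - 4 * t)) < 1e-9
```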
$\forall x \in \mathbb{R}$, there exits $\delta$ such that $(x-\delta,x+\delta) \cap A$ is countable. Prove that $A$ is countable.
For every $x\in \mathbb{R}$, let $\delta_x$ be such that $(x-\delta_x,x+\delta_x)\cap A$ is countable. Set $B_x=(x-\delta_x,x+\delta_x)$. Since we have that $\mathbb{R}=\bigcup_{x\in \mathbb{R}}B_x$ and since $\mathbb{R}$ is separable (hence Lindelöf), there's a countable subcover, that is, there's a sequence of points $(x_n)_{n\in\mathbb{N}}$ such that $\mathbb{R}=\bigcup_{n=1}^\infty B_{x_n}$. Can you conclude what you want from here?
Radius of convergence of random power series
If $|z|\lt 1$, then $$\mathbb E\left[\sum_{n=1}^N \left|X_n\right| \left|z\right|^n \right] = \mathbb E\left[ \left|X_1\right|\right]\sum_{n=1}^N \left|z\right|^n\leqslant \mathbb E\left[ \left|X_1\right|\right]\sum_{n=1}^\infty \left|z\right|^n$$ and by monotone convergence, the random variable $\sum_{n=1}^\infty \left|X_n\right| \left|z\right|^n$ has a finite expectation, hence is finite almost everywhere. Consequently, the series $\sum_{n=1}^{\infty} X_n z^n$ converges almost everywhere. The radius of convergence is thus at least $1$. Now, let $r\gt 1$. The series $\sum_{n=1}^{\infty} \left|X_n\right| r^n$ cannot converge with positive probability (unless $X_n=0$ a.s.): by the $0$-$1$ law, it would then be almost everywhere convergent, hence $\left|X_n\right| r^n\to 0$ almost surely, hence in probability, hence $\mathbb P\left\{\left|X_1\right|\gt r^{-n}\right\}\to 0$, which entails $\mathbb P\left\{\left|X_1\right|\gt 0\right\}=0$. In conclusion: if $X_1=0$ a.s., the radius of convergence is almost surely infinite; otherwise, it is almost surely $1$.
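A small simulation illustrating the almost-sure radius $1$ (standard normal $X_n$, my choice of distribution):

```python
# Cauchy-Hadamard: 1/R = limsup |X_n|^{1/n}; approximate the limsup
# by the max over a tail window of indices.
import random

random.seed(0)
N = 5000
xs = [abs(random.gauss(0, 1)) for _ in range(N)]
est = max(x ** (1 / n) for n, x in enumerate(xs, start=1) if n > N // 2)
assert abs(est - 1.0) < 0.01            # radius of convergence ~ 1
```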
Maclaurin series $\ln \left(\frac{1+x^2}{1-x^2}\right)$
Since the function is defined for $|x|<1$, you can write it as $$ f(x)=\ln(1+x^2)-\ln(1-x^2) $$ You surely know the Maclaurin series $$ \ln(1+t)=\sum_{n\ge1}\frac{(-1)^{n+1}t^n}{n} \qquad \ln(1-t)=-\sum_{n\ge1}\frac{t^n}{n} $$ and so $$ \ln(1+x^2)=\sum_{n\ge1}\frac{(-1)^{n+1}x^{2n}}{n} $$ and $$ \ln(1-x^2)=-\sum_{n\ge1}\frac{x^{2n}}{n} $$ Hence $$ f(x)= \sum_{n\ge1}\frac{(-1)^{n+1}x^{2n}}{n}+ \sum_{n\ge1}\frac{x^{2n}}{n} $$ What are the terms that “survive”? Only those with odd $n$, so you get $$\sum_{\substack{n\ge1\\n\text{ odd}}}\frac{2x^{2n}}{n}=\sum_{n\ge1}\frac{2x^{4n-2}}{2n-1}=\sum_{n\ge0}\frac{2x^{4n+2}}{2n+1}$$
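A numerical spot check of the final series:

```python
# Compare sum 2 x^{4n+2}/(2n+1) with ln((1+x^2)/(1-x^2)) at x = 0.5.
import math

x = 0.5
exact = math.log((1 + x * x) / (1 - x * x))
series = sum(2 * x ** (4 * n + 2) / (2 * n + 1) for n in range(50))
assert abs(series - exact) < 1e-12
```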