What is the domain of $f(x,y)=(-1)^{x+y}$?
This depends on whether you are working in $\mathbb R$ or $\mathbb C$. In $\mathbb R$, one notes that for integer $n$, $$(-1)^{2n+1} = -1 \implies -1=(-1)^{1/(2n+1)}$$ and further, for integer $m$, $$(-1)^m = \boxed{(-1)^{m/(2n+1)}} = -1\text{ or }+1$$ We cannot generalize the exponent to be all of rational numbers, since $(-1)^{1/(2n)}$ is not real. In $\mathbb C$: $$(-1)^a := e^{a\log(-1)} = e^{a(i\pi+2\pi i n)}$$ for integer $n$. This is defined for all $a\in \mathbb C$.
How many ways can you split $11$ people into $2$ groups
Suppose one of the students is Fred. Each of the other students must be placed in his group or the other group. That gives us $2^{10}$ choices. However, we cannot place all ten of the other students in Fred's group, otherwise the other group would be empty, so there are $2^{10} - 1$ ways to distribute the students to two non-empty groups. We could correct your solution. Start with Fred. We must place either one, two, three, four, five, six, seven, eight, nine, or ten of the other ten students in the other group, giving $$\sum_{k = 1}^{10} \binom{10}{k} = \sum_{k = 0}^{10} \binom{10}{k} - 1 = 2^{10} - 1$$ ways to distribute the students to two non-empty groups.
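For a quick sanity check, here is a brute-force count (a sketch in Python; it simply enumerates, for each of the ten students other than Fred, which of the two groups they join):

```python
from itertools import product

n_others = 10  # students other than Fred
count = 0
for assignment in product([0, 1], repeat=n_others):
    # 1 = placed in Fred's group, 0 = placed in the other group
    if sum(assignment) < n_others:  # the other group must be non-empty
        count += 1

print(count, 2**n_others - 1)  # both print 1023
```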
Proof that $\lim_{x\to+\infty} (1+1/x)^x$ is finite
Take the limit along integer $x$. $$\left(1+\frac{1}{x}\right)^x=\sum_{h=0}^x{x \choose h}\frac{1}{x^h}=\sum_{h=0}^x\frac{x(x-1)\dotsb (x-h+1)}{h!x^h}=\sum_{h=0}^x \frac{1}{h!}\left(1-\frac{1}{x}\right)\left(1-\frac{2}{x}\right)\dotsb \left(1-\frac{h-1}{x}\right)$$ This is monotonically increasing in $x$ and is bounded above by $\sum_{h=0}^\infty \frac{1}{h!}$, which converges, so the limit exists.
How to deal with the different powers of the same variable here?
Factorize: $$ 0 = 6Ar_0^{-7} - 12Br_0^{-13} = 6r_0^{-13} ( Ar_0^6 - 2B ) $$ so $Ar_0^6 = 2B$. I think you can take it from there.
Continuous Probability - Bus Arriving
You're looking for the waiting time distribution for either bus, which is exponential. This is because you know the bus has equal probability of arriving at any given time in an hour, and that the distribution is memoryless. So the waiting time for a bus has density $f(t)=\lambda e^{-\lambda t}$, where $\lambda$ is the rate. To understand the rate, you know that $f(t)dt$ is a probability, so $\lambda$ has units of $1/[t]$. Thus if your bus arrives $r$ times per hour, the rate would be $\lambda=r$. Since the expectation of an exponential distribution is $1/\lambda$, the higher your rate, the quicker you'll see a bus, which makes intuitive sense. So define $X=\min(B_1,B_2)$, where $B_1$ is exponential with rate $3$ and $B_2$ has rate $4$. It's easy to show the minimum of two independent exponentials is another exponential with rate $\lambda_1+\lambda_2$. Thus you want: $$P(X>\mbox{20 minutes}) = P(X> 1/3)=1-F(1/3),$$ where $F(t)=1-e^{-t(\lambda_1+\lambda_2)}$.
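If you want to check this numerically, here is a small Monte Carlo sketch in Python (the rates $3$ and $4$ buses per hour are taken from the setup above, and $20$ minutes is expressed as $1/3$ of an hour):

```python
import random, math

trials = 10**6
rate1, rate2 = 3, 4   # buses per hour
t = 1 / 3             # 20 minutes, in hours

# X = min(B1, B2) with B1, B2 independent exponentials
hits = sum(1 for _ in range(trials)
           if min(random.expovariate(rate1), random.expovariate(rate2)) > t)

print(hits / trials)                    # simulated P(X > 1/3)
print(math.exp(-(rate1 + rate2) * t))   # exact value e^{-7/3} ≈ 0.097
```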
An example of an exists-sentence such that the sentence is true on an infinite model M, yet on every submodel, the sentence is false
As Noah and Eric pointed out, the statement of the problem is missing the word "proper" (the sentence should be false only on the proper substructures of $M$, since $M$ is always a substructure of itself). And the problem can be solved vacuously by considering a structure $M$ with no proper substructures. The solution as you described it makes no sense. Here's an example which does have proper substructures and which I believe is similar in spirit to the intention of the proposed solution (but simpler). Consider the language $\{P,f\}$, where $P$ is a unary relation symbol and $f$ is a unary function symbol. Let $M = \mathbb{N}$, where $P^M$ holds only of $0$ and $f^M$ is the successor function $f^M(n) = n+1$. The substructures of $M$ are of the form $\{k,k+1,k+2,\dots\}$ for any $k$. Consider the sentence $\exists x\, P(x)$. This sentence is true in $M$ (witnessed by $0$), but false in every proper substructure of $M$ (since no proper substructure of $M$ contains $0$).
How to tell if I'm good enough for graduate school?
At my university we use baby Rudin, Dummit and Foote, and Munkres in the first year analysis, algebra, and topology courses (respectively), which is the average schedule for an incoming grad student. You can also test out of these and move to the 2nd year if you happen to be good in one of those fields (or two, if you're really exceptional). My opinion is that most math majors could handle this schedule given an appropriate amount of studying. Worst case you can opt out at a master's if you decide you don't want to do research. As far as being stupider than other people goes: of course you are. Unless you're the smartest person on earth, there are smarter people, and if you spend your time sitting around thinking about who's smarter than who, they're all you're going to pay attention to. If you're getting a math degree, you're probably pretty smart, and that's good enough. I embraced my own stupidity long ago, and in doing so, I gained the ability to ask questions without losing face. Instead of worrying about other students knowing I'm not a genius, I can talk my problems out with them, gain their insight, and learn from my own mistakes. For whatever reason, people in the mathematical field have this tendency to try to convince you that you're too dumb to do anything. Don't listen to them. If I miss a proof, I'll get mad, reread the chapter, work book problems, harass the professor, and stop sleeping until I understand everything. Be stubborn, not smart. You've got to have grit.
Cardinality of the set of continuous functions
Let $A=\{f\mid f:(0,1)\rightarrow\mathbb{R}\mbox{ is continuous}\}.$ Let $B=\mathbb{R}^{\mathbb{Q}\cap(0,1)}=\{g\mid g:\mathbb{Q}\cap(0,1)\rightarrow\mathbb{R}\}$ be the set of all real-valued functions defined on $\mathbb{Q}\cap(0,1)$. Clearly $|B|=c^{\omega}=(2^{\omega})^{\omega}=2^{\omega\times\omega}=c$. Define $\theta:A\rightarrow B$ by $\theta(f)=f|_{\mathbb{Q}\cap(0,1)}$ (i.e., $f$ restricted on $\mathbb{Q}\cap(0,1)$). Note that $\theta$ is injective. For, let $f_{1},f_{2}\in A$ such that $\theta(f_{1})=\theta(f_{2})$. Let $x\in(0,1)$ be arbitrary. Choose a sequence $(r_{n})$ in $\mathbb{Q}\cap(0,1)$ such that $r_{n}\rightarrow x$. By continuity, we have \begin{eqnarray*} f_{1}(x) & = & \lim_{n}f_{1}(r_{n})\\ & = & \lim_{n}\theta(f_{1})(r_{n})\\ & = & \lim_{n}\theta(f_{2})(r_{n})\\ & = & \lim_{n}f_{2}(r_{n})\\ & = & f_{2}(x). \end{eqnarray*} Therefore, $f_{1}=f_{2}$. It follows that $|A|\leq|B|=c$. For each $a\in\mathbb{R}$, define $\bar{a}:(0,1)\rightarrow\mathbb{R}$ by $\bar{a}(x)=a$. (i.e., $\bar{a}$ is a constant function). Let $A_{0}=\{\bar{a}\mid a\in\mathbb{R}\}$. Clearly, the map $\mathbb{R}\rightarrow A_{0}$, $a\mapsto\bar{a}$ is injective, so $c=|\mathbb{R}|\leq|A_{0}|$. Obviously, $A_{0}\subseteq A$. Therefore, we have $c\leq|A_{0}|\leq|A|\leq c$. This shows that $|A|=c$.
The difference between pointwise convergence and uniform convergence of functional sequences
Look at this as a game between two people (practically, this can't happen but let our imagination flow). Each game actually is a "translation" of the corresponding definition. You want to prove pointwise convergence. Let's make a game for it. Step 1. You choose some $x$ from the domain. Step 2. The opponent chooses some $\epsilon >0$. Step 3. You try to find an $N\in\mathbb{N}$ such that $\forall n\geq N$, $|f_n(x)-f(x)|<\epsilon$. Additional Rules: Steps 2 and 3 must repeat until your opponent is convinced that for the particular $x$ you picked, whatever $\epsilon$ he tells you, you'll be able to find such an $N$. You will be able to select another $x$ from the domain only after you have convinced your opponent. The game is over when you have done this for all $x$ in the domain. As you may have already noticed, there is a precise correspondence between the definition and the game. Step 1 corresponds to binding the variable $x$. When $x$ takes a value, this value becomes fixed, so that we can range over $\epsilon$, which corresponds to Step 2. After a value is chosen for $\epsilon$, this value remains fixed. In Step 3 you find the proper $N$ within the context of already chosen values for $x$ and $\epsilon$. This is why in pointwise convergence $N$ depends on both $x$ and $\epsilon$. Now, you want to prove uniform convergence. The game here changes. Step 1. Your opponent chooses an $\epsilon > 0$. Step 2. You try to find an $N$ such that $\forall n\geq N$, $|f_n(x)-f(x)|<\epsilon, \forall x$. The difference here is that you don't check each $x$ separately. On the contrary, your opponent gives you an $\epsilon$ and the $N$ you have to find refers to all $x$ in the domain, not just one you picked. It is like considering the range of $x$ all at once. The game ends when your opponent is convinced that you'll find such an $N$, whatever $\epsilon$ he tells you.
How is the t-statistic computed in this context?
When you go for the t statistic, it is assumed that you do not know the population standard deviation. To answer your question, the sample standard deviation is calculated using the formula below: $$s = \sqrt{\frac{\sum_{1}^{n}(x_i -\bar x)^2}{n-1}}$$
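As a quick illustration, the same formula in Python (the data values below are made up; `statistics.stdev` uses the same $n-1$ denominator):

```python
import math, statistics

data = [2.1, 2.5, 1.9, 2.4, 2.2]   # hypothetical sample
n = len(data)
xbar = sum(data) / n
s = math.sqrt(sum((x - xbar) ** 2 for x in data) / (n - 1))

print(s)                        # hand-rolled sample standard deviation
print(statistics.stdev(data))   # same value
```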
Let $R$ be a relation, $R: A \to A$ where $R$ is both symmetric and transitive. Prove the following:
Let $\;a\in A\;$ . Then there exists $\;y\in A\;$ s.t. $\;aRy\;$ , but then also $\;yRa\;$ , and since the relation is transitive $$aRy\;\;\text{and}\;\;yRa\;\implies aRa$$ and the relation is reflexive.
What is the radius of convergence of a power series in two variables?
A power series in several complex variables doesn't have a radius of convergence. In general, the domain of convergence is a logarithmically convex Reinhardt domain, but not necessarily a ball. Consider for example the series $$\sum_{k=0}^\infty z^k w^k \qquad\text{and}\qquad \sum_{j=0}^\infty \sum_{k=0}^\infty z^j w^k$$ respectively.
Reverse a simple equation of value to scale
If you want to undo the original function, let $f(x)=v=\displaystyle\frac{\sqrt x}{x}$. Then, to calculate $f^{-1}(x)$, all you need to do is solve for $x$: start from $v=\displaystyle\frac{\sqrt{x}}{x}$, square both sides to get $v^2=\displaystyle\frac{1}{x}$, take the reciprocal to get $\displaystyle\frac{1}{v^2}=x$, and then swap the variables: $\displaystyle\frac{1}{x^2}=v$. Hence $f^{-1}(x)=\displaystyle\frac{1}{x^2}$. To confirm that what we have written is true, we can use the rule that $f(f^{-1}(x))=x$; let's substitute in $x=2$. $f^{-1}(2)=\displaystyle\frac{1}{2^2}=\frac{1}{4}$ and $\displaystyle f\left(\frac{1}{4}\right)=\frac{\sqrt\frac{1}{4}}{\frac{1}{4}}=\frac{\frac{1}{2}}{\frac{1}{4}}=\frac{1}{2}\cdot4=2.$ More generally, one can check algebraically that $f(f^{-1}(x))=x$ for every $x>0$.
Why $\mathbb{E}[X\mid \sigma(\mathcal{H},\mathcal{E})]=\mathbb{E}[X\mid \mathcal{H}]$?
This is a well-known result by Doob. Theorem: Let $\mathscr{A}$, $\mathscr{B}$ and $\mathscr{C}$ be sub-$\sigma$-algebras of $\mathscr{F}$. $\mathscr{A}\perp_\mathscr{C} \mathscr{B}$ iff $$ \begin{align} \Pr[A|\sigma(\mathscr{C},\mathscr{B})]=\Pr[A|\mathscr{C}]\tag{1}\label{doob-independence} \end{align} $$ for all $A\in \mathscr{A}$. Here is a short proof: Suppose that $\mathscr{A}$ and $\mathscr{B}$ are conditionally independent given $\mathscr{C}$, that is $$ \Pr[A\cap B|\mathscr{C}]=\Pr[A|\mathscr{C}] \Pr[B|\mathscr{C}] $$ for all $A\in \mathscr{A}$ and $B\in \mathscr{B}$. Then, for any $A\in\mathscr{A}$, $B\in\mathscr{B}$ and $C\in\mathscr{C}$ we have $$ \begin{align} \Pr\big[A\cap\big(C\cap B)\big]&=\Pr\big[ \mathbb{1}_C\Pr[A\cap B|\mathscr{C}]\big]= \Pr\big[\mathbb{1}_C\Pr[A|\mathscr{C}]\Pr[B|\mathscr{C}]\big]\\ &= \Pr\big[\Pr[A|\mathscr{C}]\Pr[B\cap C|\mathscr{C}]\big]= \Pr\Big[\Pr\big[\Pr[A|\mathscr{C}]\mathbb{1}_{B\cap C}\big|\mathscr{C}\big]\Big]\\ &= \Pr\big[\Pr[A|\mathscr{C}]\mathbb{1}_{B\cap C}\big] \end{align} $$ Since $\sigma(\mathscr{B},\mathscr{C})=\sigma\Big(\{B\cap C: B\in\mathscr{B}, C\in\mathscr{C}\}\Big)$, a monotone class argument shows that $$ \begin{align} \Pr[A\cap H]=\Pr\big[\Pr[A|\mathscr{C}]\mathbb{1}_H \big] \end{align} $$ for all $H\in\sigma(\mathscr{B},\mathscr{C})$. This means that $$ \Pr[A|\sigma(\mathscr{B},\mathscr{C})]=\Pr[A|\mathscr{C}] $$ Conversely, suppose that $\eqref{doob-independence}$ holds. For any $A\in\mathscr{A}$ and $B\in\mathscr{B}$ we have \begin{align*} \Pr[A\cap B|\mathscr{C}]=\Pr\Big[\mathbb{1}_{B}\Pr[A|\sigma(\mathscr{B},\mathscr{C})]\Big| \mathscr{C}\Big]= \Pr\Big[\mathbb{1}_B\Pr[A|\mathscr{C}]\Big|\mathscr{C}\Big] =\Pr[A|\mathscr{C}]\Pr[B|\mathscr{C}] \end{align*} This shows that $\mathscr{A}$ and $\mathscr{B}$ are conditionally independent given $\mathscr{C}$. The extension to random variables is done by extending first to simple functions and then by the usual monotone approximation by simple functions.
Feller's Probability - Counting problem of pairs of shoes.
(a) $\ddot\smile\checkmark$, you are correct, because you are selecting $2r$ distinct pairs and taking one shoe from each of them. $$P(\text{no completes}) = \dfrac{~2^{2r}\dbinom n {2r}~}{\dbinom{2n}{2r}}$$ (b) $\ddot{\frown}\times$ The book is correct, because you are selecting one complete pair from $n$, and from the remaining $n-1$ pairs selecting $2r-2$ distinct pairs and taking just one shoe from each of those. There is no order to these selections. $$P(\text{exactly one complete}) = \dfrac{2^{2r-2}\dbinom{n}{1}\dbinom{n-1}{2r-2}}{\dbinom{2n}{2r}}$$ [PS: Your logic of selecting one of $2n$ shoes, then its pair, overcounts cases, so is wrong right from the start.] (c) $\ddot\sim?$ Now can you apply the same principle to find the probability of selecting exactly two complete pairs?
Property of $L^1$ function
When you find the domain of integration changing with the variable, you can use indicator functions to make the domain of integration uniform. This allows us to then use the convergence theorems we know. In this case, $\int_{|z| > k} |f(z)| d \mu = \int_{\mathbb R^N} 1_{\{|z| > k\}} |f(z)| d\mu$. Now, let $g_k(z) = 1_{\{|z| > k\}} |f(z)|$. Where does $g_k(z)$ converge pointwise as $k \to \infty$? Is $g_k$ uniformly bounded by some integrable function? $f$ is given to be integrable: that should help. The dominated convergence theorem completes the argument. A comment above says that this holds for $L^p, p \geq 1$ as well. You can see if an argument similar to the above works out.
Non spatial isomorphisms
Fix an orthonormal basis $\xi_j$, and consider the associated matrix units $\{e_{kj}\}$, i.e. $e_{kj}\xi=\langle \xi,\xi_j\rangle\,\xi_k$. The matrix units are characterized by $$ e_{kj}e_{st}=\delta_{js}\,e_{kt}.$$ It is easy to check that $\{\phi(e_{kj})\}$ is also a system of matrix units. If we let $\eta_j$ denote a unit vector in the range of the projector $\phi(e_{jj})$, then $\{\eta_j\}$ is an orthonormal basis (because $\sum\phi(e_{jj})=1$). Let $u\in B(H)$ be the unitary that sends $\xi_j\longmapsto \eta_j$. Then, for any $\xi\in H$, $$ \phi(e_{kj})\xi=\langle\xi,\eta_j\rangle\,\eta_k =\langle\xi,u\xi_j\rangle\,u\xi_k=u\,\langle u^*\xi,\xi_j\rangle\,\xi_k =ue_{kj}u^*\xi. $$ So $\phi(e_{kj})=ue_{kj}u^*$. Now one can extend by linearity and continuity to all of $K(H)$ (and to all of $B(H)$ by using wot-continuity).
Jordan's Decomposition Theorem in Real Analysis
How about $h(x) = \sin x + 2x$ and $g(x) = 2x$. Both are increasing as $$h'(x) = \cos x + 2 \geq 1$$
Do those Hermitian and unitary matrices form a basis for the underlying complex vector space?
I worked it out myself. This is the link to my proof.
Calculate $\lim_{z\to0} \frac{z}{|z|}$ , if it exists.
Yes, those sequences work. Or you can just note that $\frac{z}{\lvert z\rvert}=1$ if $z$ is a real number greater than $0$ and it is equal to $-1$ when $z$ is a real number smaller than $0$.
Finding 'Corresponding generalized eigenvectors:'
Let's start by naming your given matrix: $$A=\begin{pmatrix}1&1&1\\0&1&1\\0&0&1\end{pmatrix}$$ To find the eigenvalues of $A$, you must solve the equation $\det(A-λI)=0$. So, we have: $$\det(A-λI)=0\Rightarrow \Bigg| \begin{matrix} 1-λ & 1 & 1 \\ 0 & 1-λ & 1\\ 0 & 0 & 1-λ\end{matrix} \Bigg| = 0 \Rightarrow \dots \Rightarrow -(λ-1)^3=0 \Leftrightarrow λ =1$$ Note that your eigenvalue $λ_{1,2,3}=λ_1=1$ is of multiplicity $3$. Now, to find the eigenvectors, you must solve the system $(A-λ_1I)x=0$. So: $$(A-λ_1I)x=0 \Rightarrow (A-I)x=0 \Rightarrow \begin{bmatrix} 0 & 1 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix}x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0\end{bmatrix}$$ $$\Leftrightarrow$$ $$\begin{cases} x_2 +x_3=0 \\ x_3=0 \end{cases}$$ This is to be expected: since $λ=1$ is the only eigenvalue and has algebraic multiplicity $3$, you will need $3$ linearly independent (generalized) eigenvectors in total. You can observe that $x_2=x_3=0$ but $x_1$ is undetermined. Note that an eigenvector must NOT be the zero vector, and the eigenvectors must be linearly independent, so you cannot just take $3$ different values for $x_1$. The fact that the equation $ 0x_1 + 0x_2 + 0x_3 =0 \Rightarrow 0x_1 =0$ holds $\forall x_1 \in \mathbb R$ enables you to get $1$ eigenvector for an arbitrary nonzero $x_1$. Taking $x_1=1$, we get the eigenvector: $$v_1= \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$$ To get, now, the other $2$ eigenvectors, you need to go through the Generalized Eigenvector calculation, which for $v_2$ amounts to solving the system $$(A-λ_1I)v_2 = v_1$$ where you will express $v_2$ with variables that you'll find through solving the system. Can you now solve this system and obtain $v_2$, and then proceed correctly for $v_3$?
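If you want to check your hand computation at the end, SymPy can produce the whole Jordan chain at once. This is only a sketch for verification, using the matrix from the question; the columns of $P$ below are an eigenvector followed by the two generalized eigenvectors:

```python
from sympy import Matrix, eye

A = Matrix([[1, 1, 1],
            [0, 1, 1],
            [0, 0, 1]])

# A = P*J*P**-1, with J a single Jordan block for the eigenvalue 1
P, J = A.jordan_form()
print(J)
print(P)

# Chain relations: (A-I)v1 = 0, (A-I)v2 = v1, (A-I)v3 = v2
N = A - eye(3)
v1, v2, v3 = P.col(0), P.col(1), P.col(2)
print(N * v1, N * v2 - v1, N * v3 - v2)   # three zero vectors
```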
If $X$ and $Y$ are i.i.d., and $N=X+Y$, is $\mathbb{P}(X, Y, X+Y) = \mathbb{P}(X, Y)$?
A few quibbles with the language and notation you're using: ultimately the probability of X, Y, and X+Y is the same as the probability of X and Y This doesn't quite make sense; we talk about the probabilities of events. You could discuss, for instance, "the probability that $X$ is $12$," but it doesn't make sense to just talk about "the probability that $X$." Second: be careful with your notation in expressions like $\mathbb P(X, Y, X+Y)$; here, you've made the notation version of the same error as above. What (I think) you're referring to is the joint probability function $p$ for all three variables: that is, $$p(x, y, n) = \mathbb P(X = x, Y = y, N = n).$$ Notice the distinction: capital letters represent the random variables (things that can change), and lower-case letters represent fixed values (things that don't change). On the other hand, another expression you wrote: $P(X=x,Y=y,N=x+y)$ is perfectly good and sensible. This discusses a probability that three random variables will do something all at the same time. Quibbles aside, the crux of your argument is good. The equation $\mathbb P(X=x,Y=y,N=x+y)=\mathbb P(X=x) \mathbb P(Y=y)$ holds because the event $\{X = x, Y = y\}$ implies (i.e. is a subset of) the event $\{N = x + y\}$, and anytime you are considering events $A$ and $B$ with $A \subset B$, it follows that $\mathbb P(A \cap B) = \mathbb P(A)$. Since $\mathbb P(X = x, Y = y, N = x+y) = \mathbb P(X = x, Y = y)$, and since $X, Y$ are independent, it does indeed follow that this is equal to $\mathbb P(X = x) \mathbb P(Y = y)$. The rest of your work follows from things you know about geometric random variables. One more thing you should be sure of, though: remember that a joint pmf is a function of three variables. That means you need to consider cases like, for instance, $p(3, 6, 8)$ -- that is, $\mathbb P(X = 3, Y = 6, N = 8)$. (Consider the trouble with such an expression.) This needs to be encoded into your pmf formula in some specific way, and this is why it's not quite right to say that the joint pmf of all three is just equal to the joint pmf of $X$ and $Y$; remember, that joint pmf is just a function of two input variables instead of three.
How to calculate limit of series
HINT: For the first problem, since $c<1$, define a number $x>0$ by $c=\frac{1}{1+x}$. Then, $$nc^n=\frac{n}{(1+x)^n}$$ Now expand the numerator using the binomial theorem. For the second problem, recognize the sum is the Riemann sum for $\int_0^2 e^x\,dx$.
Which of these constructions are left adjoints?
You have correctly described the right adjoint of $\mathsf{Gpd} \to \mathsf{Cat}$. It maps a small category to its core. It is a functor $\mathsf{Cat} \to \mathsf{Gpd}$. Given a functor between small categories, it maps isomorphisms to isomorphisms, hence restricts to a functor between the cores. The functor $\mathsf{Mon} \to \mathsf{Cat}$ has no right adjoint because it does not preserve colimits. In fact, already the initial object is not preserved (the initial monoid is $\{1\}$, whereas the initial category is empty). However, this functor has a left adjoint: We map a category $C$ to the free monoid generated by $\mathrm{Mor}(C)$ modulo the relations $[f] * [g] = [f \circ g]$ if $f,g \in \mathrm{Mor}(C)$ and $\mathrm{cod}(g)=\mathrm{dom}(f)$. In some sense this is the category with all objects contracted to a single object. The functor $\mathsf{PreOrd} \to \mathsf{Cat}$ has a left adjoint: It maps a small category $C$ to the preorder on $\mathrm{Ob}(C)$ given by $X \leq Y$ iff there is a morphism $X \to Y$. But it has no right adjoint.
Formal explanation for this change of integration
The integration is on the domain $$ \{ (x,y): x>0, y>x \} = \{ (x,y): y>0, 0<x<y \} $$ So this is just an application of Fubini's theorem with the function $$ f(x,y)1_{\{0<x<y\}} $$ and using: $$ \int f(x,y)1_{\{0<x<y\}} dx = \int_0^y f(x,y) dx\\ \int f(x,y)1_{\{0<x<y\}} dy = \int_x^\infty f(x,y) dy $$
Finding a general solution to $x \cdot y'(x)-y(x)=\log(y'(x))$
Actually, differentiating gives you $$ x y'' = \frac{y''}{y'}$$ If $y'' \ne 0$ this says $y' = 1/x$, but the other possibility is $y'' = 0$, which actually leads to the general solution: $$y = c x - \ln(c)$$
$p$-elements generate a $p$-group in Nilpotent groups?
If by $p$-elements you mean elements of order $p$ then yes, they generate a $p$-group in a (finite) nilpotent group, since they are all contained in the unique Sylow $p$-subgroup.
$n$ out of $ m$ theorems (some imply the rest)
Often the most obvious solution is the best solution. I would let $T(s)$ be $1$ if a statement $s$ is true and $0$ if it is false. I would then call the statements A, B and C and let $S=\{A, B, C\}$. Then I would write $$\sum_{s\in S} T(s)\ge 2 \implies \text{all } s\in S \text{ are true.}$$ You can make these statements more symbolic if you really like.
Elements as a product of unit and power of element in UFD
Here is a somewhat detailed account: Since $R$ is a UFD both $a$ and $b$ have unique factorizations into irreducible elements, say $a = p_1 p_2 \cdots p_k$ and $b = q_1 q_2 \cdots q_l$. But so does $c$, say $c = t_1\cdots t_d$. Then we know that $c^n = ab$ so writing out using the factorizations we have $t_1^n t_2^n \cdots t_d^n = p_1 \cdots p_k q_1 \cdots q_l$. By the uniqueness of factorizations we then get that $p_1 = ut_i$ for some $i = 1, \dots , d$, where $u$ is some unit. Suppose $p_1 = ut_1$ (if not then just reshuffle the indices). Now since $a$ and $b$ are coprime and since we have just shown that $t_1$ divides $a$, we know that $t_1$ does not divide $b$. So all the rest of the $t_1$'s must divide $a$ as well, i.e. $t_1^n$ divides $a$. This happens for each of the $t_j$'s so we find (with some reshuffling of indices) $a = ut_1^n \cdots t_j^n$ and $b = vt_{j+1}^n \cdots t_d^n$, where $u$ and $v$ are units (obtained by multiplying all the units together in the above process). Now $u$ and $v$ are inverses of each other, as is seen from the fact that $ab = c^n$, so they cancel each other in the product. As for $R = \mathbb{Z}$ and $R = \mathbb{Z}[i]$, well the above works if $R$ is a UFD. Is this the case for $\mathbb{Z}$ and $\mathbb{Z}[i]$?
Proof the golden ratio with the limit of Fibonacci sequence
$$F_{n+1}=F_{n}+F_{n-1} \Rightarrow \frac{F_{n+1}}{F_n}=1+\frac{F_{n-1}}{F_n}$$ Let $$x_n:= \frac{F_{n+1}}{F_n}$$ Then $$x_n=1+\frac{1}{x_{n-1}}$$ You can now prove that $1 \leq x_n \leq 2$ and by induction that $$ x_1 \leq x_3 \leq x_5 \leq .. \leq x_{2n+1} \leq x_{2n} \leq x_{2n-2} \leq .. \leq x_2$$ From here you get that $x_{2n-1}$ and $x_{2n}$ converge, and you can use their definitions to get their limits. You prove that both limits are the same, which yields the desired result.
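Numerically the ratios settle on the golden ratio very quickly; here is a short Python sketch (the seed $x_1=1$ assumes the usual convention $F_1=F_2=1$):

```python
x = 1.0                      # x_1 = F_2 / F_1
for n in range(2, 16):
    x = 1 + 1 / x            # x_n = 1 + 1/x_{n-1}
    print(n, x)

print("phi =", (1 + 5 ** 0.5) / 2)   # ≈ 1.6180339887
```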
Why must the plane go through the origin?
$\vec c$ is a constant vector in the context of the answer, and The Riddler is pointing out that the set of all vectors $\vec r$ for which $\vec r\cdot\vec c=0$ forms a plane passing through the origin (and the plane is perpendicular to $\vec c$ ; that is, $\vec c$ is a normal vector to the plane). So if the particle’s position vector $\vec r$ always satisfies $\vec r\cdot\vec c=0$, the particle is confined to some plane through the origin.
Other approach for the fundamental group
If you know the actual proof of Van Kampen's Theorem, it is not at all hard to use the technology of the proof to directly show that the map $i_*$ induced by inclusion is injective. What you have to prove is that if a closed loop $\gamma : [0,1] \to M$ has image contained in $M \setminus \{p\}$, and if $\gamma$ is path homotopic to a constant in the whole manifold $M$ then $\gamma$ is also path homotopic to a constant in $M \setminus \{p\}$ (where $p$ is distinct from the base point of $\gamma$). In brief outline, to do this, choose a path homotopy $H : [0,1] \times [0,1] \to M$. Choose a small open ball $U$ around $p$, and let $V = M \setminus \{p\}$. One obtains an open cover $H^{-1}(U),H^{-1}(V)$ of $[0,1] \times [0,1]$. Applying the Lebesgue Number Lemma to this open cover, one obtains a subdivision of $[0,1] \times [0,1]$ into little squares each of which has $H$-image contained in $U$ or $V$. Now is the tricky part: On each of those little squares whose image is contained in $U$, you have to change the function $H$ to avoid the point $p$, and this is best done in coordinates.
Matrix $A\in M_{3}(\mathbb{Q})$ meet the equation $A^{8} = I$. Prove, that $A^4=I$ and answer, if this matrix can be diagonalized over $\mathbb{Q}$
If $A^8=I$, the minimal polynomial $\mu_A(t)$ of $A$ divides $t^8-1$. This last polynomial factors as $(t^4-1)(t^4+1)$. Now $\mu_A(t)$ has degree at most $3$, so it must be coprime with $t^4+1$, which is irreducible. It follows that it divides the other factor, $t^4-1$. This tells us, precisely, that $A^4=I$. Now we know $\mu_A(t)$ divides $t^4-1=(t-1)(t+1)(t^2+1)$. It follows that $\mu_A(t)$ has all its roots simple, so $A$ is diagonalizable over $\mathbb C$. Over $\mathbb Q$, it is diagonalizable iff all its eigenvalues are rational, and this happens iff $\mu_A(t)$ divides $(t-1)(t+1)$. This is not always the case. For example, consider $A=\left(\begin{array}{ccc}0&1&0\\-1&0&0\\0&0&1\end{array}\right)$.
Stamp Induction problem
First of all, you can realize $24$ cents of postage: $2\times5+2\times7=24$. Now let $N\ge24$ be an integer for which there exists a solution: $N=n_5\times5+n_7\times7$. Let's try to realize a postage of $N+1$ cents. There are basically two ways to add $1$ cent: $3\times5-2\times7=1=3\times7-4\times5$. Problem is: you have to subtract some $5$ cent or $7$ cent stamps. Suppose now that $n_5<4$, so that we can't add three $7$ cent stamps and remove four $5$ cent stamps. As $N\ge24$ and $n_5\le3$, $N-n_5\times5\ge 24-3\times5=9$, so there are at least two $7$ cent stamps, and we can use the other solution: remove two $7$ cent stamps and add three $5$ cent stamps. In all cases, we can find a solution for the $N+1$ postage problem.
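A quick brute-force check of the conclusion (a sketch in Python; it verifies that every amount from $24$ cents up to some bound is reachable, and lists the amounts below $24$ that are not):

```python
def reachable(n):
    # is n = 5*a + 7*b for some nonnegative integers a, b?
    return any((n - 7 * b) % 5 == 0 for b in range(n // 7 + 1))

print(all(reachable(n) for n in range(24, 2000)))       # True
print([n for n in range(1, 24) if not reachable(n)])    # e.g. 23 is not reachable
```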
Is $SL_2(\mathbb Z)$ a maximal discrete subgroup in $SL_2(\mathbb R)$?
Yes. Here's a direct proof working in arbitrary dimension: $\mathrm{SL}_d(\mathbf{Z})$ is maximal among discrete subgroups of $\mathrm{SL}_d(\mathbf{R})$. Proof: Let $\Gamma\subset\mathrm{SL}_d(\mathbf{R})$ be a discrete subgroup containing $\Lambda=\mathrm{SL}_d(\mathbf{Z})$. Since $\Lambda$ is a lattice, the index $[\Gamma:\Lambda]$ is finite. Since $\mathbf{Z}^d$ is $\Lambda$-invariant, it has finitely many images by $\Gamma$, and hence $E=\bigcap_{\gamma\in\Gamma}\gamma\mathbf{Z}^d$ has finite index in $\Gamma$, and hence $\Gamma\subset\mathrm{SL}(E)$. Let $u\in\mathrm{GL}_d(\mathbf{Q})$ map $\mathbf{Z}^d$ onto $E$. Then $u\Lambda u^{-1}=\mathrm{SL}(E)$, so $$\Gamma\subset u\Lambda u^{-1}\subset u\Gamma u^{-1}.$$ Since the $\mathrm{GL}_d(\mathbf{R})$-action by conjugation on $\mathrm{SL}_d(\mathbf{R})$ preserves some Haar measure, it preserves the covolume of lattices. We deduce that these inclusions are both equalities. Hence $\Lambda=\Gamma$.
Prove $\text{Li}_2(e^{-2 i x})+\text{Li}_2(e^{2 i x})=\frac{1}{3} (6 x^2-6 \pi x+\pi ^2)$ when $0<x<\pi$
$$\operatorname{Li}_2(e^{-2 i x})+\operatorname{Li}_2(e^{2 i x})\\ =2\Re\operatorname{Li}_2(e^{2i x})\\ =2\operatorname{Sl}_2(2x)\\ =2\left(\frac{\pi^2}{6}-\pi x+x^2\right)=\frac{1}{3} (6 x^2-6 \pi x+\pi ^2)$$ Where $\operatorname{Sl}$ is the SL-type Clausen function.
How to calculate following integral?
I assume you mean that $x_0=0$ and $x_{n+1}=1$. So $$\int_{x_j}^{x_{j+1}}\frac{(\frac{j}{n}-x)^2}{x(x-1)}$$ $$=(j/n)^2\log(x_j/x_{j+1})+(j/n-1)^2\log\left(\frac{1-x_{j+1}}{1-x_j}\right)+(x_{j+1}-x_j)$$ This gives us three pieces when we take the sum. For the first piece, $$\sum_{j=0}^n(j/n)^2\log(x_j/x_{j+1})=\frac{1}{n^2}\log\left(\prod_{j=1}^n\frac{x_j^{j^2}}{x_{j+1}^{j^2}}\right)$$ $$=\frac{1}{n^2}\log\left(x_1\prod_{j=2}^nx_j^{2j-1}\right)$$ $$=\log\left(x_1\prod_{j=2}^nx_j^{(2j-1)/n^2}\right)$$ The middle piece is harder but mostly the same idea. I believe we will get this: $$\log\left(\prod_{j=0}^{n-1}\frac{(1-x_{j+1})^{(j/n-1)^2}}{(1-x_j)^{(j/n-1)^2}}\right)$$ $$=\log\left((1-x_n)^{1/n^2}\prod_{j=1}^{n-1}(1-x_j)^{(2j-2n+1)/n^2}\right)$$ The last piece is just $1$, aka $\log(e)$. So we could put it all together as $$\log\left(ex_1(1-x_1)^{(3-2n)/n^2}x_n^{(2n-1)/n^2}(1-x_n)^{1/n^2}\prod_{j=2}^{n-1}x_j^{(2j-1)/n^2}(1-x_j)^{(2j-2n+1)/n^2}\right)$$ I did this quickly so there might be some minor computational errors, but you get the gist.
Series representation of hyperbolic cotangent
First note that the sum can be written as $$\sum_{k=-\infty}^{\infty}\frac{z}{z^{2}+k^{2}\pi^{2}}=\frac{1}{z}+2z\sum_{k=1}^{\infty}\frac{1}{z^{2}+k^{2}\pi^{2}}$$ From Euler's formula for the trigonometric cotangent (see e.g. Find the sum of $\sum 1/(k^2 - a^2)$ when $0<a<1$ or http://mathworld.wolfram.com/Cotangent.html, formula (18)) $$\pi \cot \pi z=\frac{1}{z}+2z\sum_{k=1}^{\infty}\frac{1}{z^{2}-k^{2}}$$ you find $$\cot z=\frac{1}{z}+2z\sum_{k=1}^{\infty}\frac{1}{z^{2}-k^{2}\pi^{2}}$$ Now use $\cot z = i \coth(iz)$ to get $\begin{align*} \coth(z) &= -i\cot(-iz)\\ &= -i\left(\frac{1}{-iz}+2(-iz)\sum_{k=1}^{\infty}\frac{1}{(-zi)^{2}-k^{2}\pi^{2}}\right)\\ &=\frac{1}{z}-2z\sum_{k=1}^{\infty}\frac{1}{-z^{2}-k^{2}\pi^{2}}\\ &=\frac{1}{z}+2z\sum_{k=1}^{\infty}\frac{1}{z^{2}+k^{2}\pi^{2}} \end{align*}$
Log inequality- is $(\lceil\log x\rceil - \lfloor\log m\rfloor)\cdot m+2^{\lfloor\log m\rfloor+1}\leq m\cdot(\lceil\log\frac{x}{m}\rceil+2)$?
Take, for example, $x=33, m=6$. We then have $\lceil \log x \rceil =6$, $\lfloor \log m \rfloor=2$, $\lceil\log \frac{x}{m}\rceil=\lceil \log 5.5\rceil=3$. $x>m$ is obviously fulfilled as well. Putting that into your inequality, we would need $$ (\lceil\log x\rceil - \left\lfloor\log m\right\rfloor)\cdot m+2^{\lfloor\log m\rfloor+1}=(6-2)\cdot 6+2^3=32\leq 30=6\cdot(3+2)= m\cdot\left(\left\lceil\log\frac{x}{m}\right\rceil+2\right), $$ a contradiction. So no, your inequality does not hold. The way you can construct your counterexample is actually more or less working the proof of your mentioned previous question backwards. You find $x,m$ such that $\lceil\log \frac{x}{m}\rceil=\lceil\log x\rceil - \lfloor\log m \rfloor -1$ (namely by choosing $x$ just a bit above and $m$ just a bit below a power of $2$), and then utilize that $2^{\lfloor \log m \rfloor +1}>m$.
How to show that every group in the normal series of $G$ is normal in $G$
You used that $Z_i(G)$ is normal when you wrote $G/Z_i(G)$. Other than that, this result is true for any normal subgroup i.e., if $N$ is normal in $G$ then $H=\pi^{-1}(Z(G/N))$ is also normal in $G$.
Calculating the number of 3*3 Matrices whose trace of $A^t$*$A$=6
Let $A=\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$. Then by a matrix multiplication, one obtains that $$\operatorname{tr}(A^t A)=\sum_{i,j} a_{ij}^2=6.$$ Hence all entries must be between $-2$ and $2$. First, consider the case where one of the entries has absolute value $2$. Then there must be $2$ other entries with absolute value $1$, and all other entries are zero. The number of such matrices is $$\binom{9}{1}\cdot 2\cdot \binom{8}{2}\cdot 2^2,$$ where we have multiplied by $2^3$ to take into account that these entries may also be negative. Now consider the case where there does not exist an entry with absolute value $2$. Then there are $6$ entries with absolute value $1$ and $3$ are zero. The number of such matrices is $$\binom{9}{6}\cdot 2^6.$$ The sum of these is the result $$\binom{9}{1}\binom{8}{2}\cdot 2^3+\binom{9}{6}\cdot 2^6=7392.$$
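A brute-force count confirms the total (a sketch in Python; entries are restricted to $-2,\dots,2$, which is enough since any larger entry alone would already make the trace exceed $6$):

```python
from itertools import product

count = sum(1 for entries in product(range(-2, 3), repeat=9)
            if sum(e * e for e in entries) == 6)
print(count)   # 7392
```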
Let ${a_n}$ be the sequence of positive integers defined by $a_1=1$, $a_2=3$, $a_{n+2}=(n+3)a_{n+1} - (n+2)a_n$.
As you observed, if both $a_n$ and $a_{n+1}$ are divisible by $11$, then so is $a_{n+2}$. Thus if you find an index $n$ such that $a_n$ and $a_{n+1}$ are divisible by $11$, then all $a_m$ with $m \ge n$ are divisible by $11$. An easy computation shows that up to $n = 11$ precisely $a_4, a_8, a_{10}, a_{11}$ are divisible by $11$. This answers your question. Note that it suffices to explicitly compute $a_n$ up to $n = 8$. We have $a_8 = 46233 = 11\cdot 4203$. Since $a_7 = 5913$ is not divisible by $11$, also $a_9 = 10\cdot a_8 - 9\cdot a_7$ is not divisible by $11$. But $a_{10} = 11\cdot a_9 - 10\cdot a_8$ and thus $a_{11} = 12\cdot a_{10} - 11\cdot a_9$ are.
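The pattern is easy to confirm by iterating the recurrence (a short Python sketch):

```python
a = {1: 1, 2: 3}
for n in range(1, 14):
    a[n + 2] = (n + 3) * a[n + 1] - (n + 2) * a[n]

print([n for n in sorted(a) if a[n] % 11 == 0])
# [4, 8, 10, 11, 12, 13, 14, 15] -- from n = 10 on, every term is divisible by 11
```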
The natural map $V^* \times W \to \text{Hom}(V,W) $ Is bilinear.
This is written in a very cumbersome way. Nobody writes it like that. I present a way to write down a solution nicely: The issue is that $f(\varphi,w) \in \operatorname{Hom}(V,W)$ is a map, and showing that $$f(\varphi+ \mu,w) = f(\varphi, w) + f(\mu, w)$$ boils down to showing that two maps are equal! But two maps are equal if they are equal on every element of the domain, so for fixed $v \in V$ it suffices to show that $$f(\varphi+\mu,w) (v) = f(\varphi,w)(v) + f(\mu,w)(v)$$ But this last thing is obvious: $$f(\varphi+ \mu,w)(v) = (\varphi+\mu)(v)w = \varphi(v)w + \mu(v)w = f(\varphi,w)(v) + f(\mu,w)(v)$$ Similarly, you can check all the other axioms for bilinearity.
One-sided limits of $f'(x)$ at a point vs. one-sided limits of the difference quotient at that point
Yes, if a one-sided limit of $f'(x)$ exists at a point $a$, then the limit of the one-sided difference quotients exists at that point, and coincides with the one-sided limit of the derivative. That is a consequence of the mean value theorem. Consider the right limits, then by the mean value theorem we have $$\frac{f(x)- f(a)}{x - a} = f'(y)$$ for some $y \in (a,x)$. Thus if $L = \lim\limits_{x \to a^+} f'(x)$ and $$\lvert f'(y) - L\rvert < \varepsilon$$ for $y \in (a, a+\delta)$, it follows that $$\Biggl\lvert \frac{f(x) - f(a)}{x-a} - L\Biggr\rvert < \varepsilon$$ for $x \in (a,a+\delta]$, i.e. $$\lim_{x \to a^+} \frac{f(x) - f(a)}{x-a}$$ exists and equals $L$. The same argument holds for the left limit of course. And hence the existence of $\lim\limits_{x\to a} f'(x)$ implies the differentiability of $f$ at $a$ and the continuity of $f'$ at $a$. As you are aware, this is a sufficient, but not a necessary condition for differentiability at $a$.
Is trace of regular representation in Lie group a delta function?
As you implicitly observe, there is no well defined notion of the trace of this particular infinite-dimensional matrix. (There is a class of (infinite-dimensional) matrices to which one can assign traces, inventively called trace class (https://en.wikipedia.org/wiki/Trace_class), but there are no infinite-dimensional unitary matrices among them.) Notice that the statement that there is no trace is different to the statement that the trace is infinite. For example, the regular representation of $S^1$ on itself may be diagonalised so that the image of $z$ is the matrix $(z^n[n = m])_{m, n \in \mathbb Z}$, and there's no reasonable answer for the sum $\sum_{n \in \mathbb Z} z^n$ that doesn't make some assumptions that will destroy the nice invariance properties that we'd like a trace to have. What one may do instead is to remember that the scalar character is just the trace of the distribution character, in the sense that $\operatorname{tr} \int \pi(g)f(g)\mathrm dg = \int \Theta_\pi(g)f(g)\mathrm dg$ for $f \in C^\infty(G)$ and $\pi$ an irreducible (finite-dimensional) representation of the compact Lie group $G$, where $\mathrm dg$ is a Haar measure on $G$; and one may ask about the distribution character of the regular representation. This is the $\delta$ distribution at the identity, in the sense that $$\sum_{\pi \in \hat G} \operatorname{deg}(\pi)\operatorname{tr} \int \pi(g)f(g)\mathrm dg = f(e)$$ (if $\mathrm dg$ is normalised to give $G$ total mass $1$). This is known as the Plancherel theorem. EDIT: The original version of my second paragraph directly contradicted the first, by (inadvertently) not really speaking of the distribution character at all. The reason that I prefer to work via this roundabout approach of looking for a scalar function representing the distribution character $f \mapsto \int \pi(g)f(g)\mathrm dg$, rather than just taking the trace of $\pi(g)$ as one is used to doing, is precisely because it handles the infinite-dimensional case with élan. This is particularly important in the theory of non-compact groups, which may not even have any finite-dimensional representations.
binomial distributions and their transforming (6.37-6.39)
In going from 6.37 to 6.39 we need to prove that: $$\sum_{h=0}^{N-1} \binom{N-1}{h} (1-w)^h w^{N-h-1} \left(\frac{h}{h+1}\right) = 1 - \frac{1-w^N}{N(1-w)} \qquad\qquad\text{(*)}$$ \begin{eqnarray*} && \\ \text{LHS} &=& \dfrac{1}{N} \sum_{h=0}^{N-1} h\binom{N}{h+1} (1-w)^h w^{N-h-1} \qquad\qquad\qquad\qquad\qquad\qquad\text{(using 6.38)} \\ &=& \dfrac{1}{N(1-w)} \sum_{h=0}^{N-1} h\binom{N}{h+1} (1-w)^{h+1} w^{N-h-1} \\ &=& \dfrac{1}{N(1-w)} \left[ \sum_{h=0}^{N-1} (h+1)\binom{N}{h+1} (1-w)^{h+1} w^{N-h-1} - \sum_{h=0}^{N-1} \binom{N}{h+1} (1-w)^{h+1} w^{N-h-1} \right] \\ &=& \dfrac{1}{N(1-w)} \left[ N(1-w) - \left[ ((1-w)+w)^N-w^N \right] \right] \qquad\qquad\qquad\text{(see (a), (b) below)} \\ &=& 1- \dfrac{1-w^N}{N(1-w)} \\ &=& RHS. \end{eqnarray*} Notes: (a) $\sum\limits_{h=0}^{N-1} (h+1)\binom{N}{h+1} (1-w)^{h+1} w^{N-h-1}$ is the expansion of the expectation of a binomial random variable with parameters $N$ and $1-w$. Hence it has value $N(1-w)$. That is, if $X\sim Bin(n,p)$ then $np=E(X)=\sum\limits_{x=1}^{n}x\binom{n}{x}p^x(1-p)^{n-x}=\sum\limits_{x=0}^{n-1}(x+1)\binom{n}{x+1}p^{x+1}(1-p)^{n-x-1}$. (b) $\sum\limits_{h=0}^{N-1} \binom{N}{h+1} (1-w)^{h+1} w^{N-h-1}$ is the binomial expansion of $((1-w)+w)^N$ but the sum is missing the term for $h=-1$ which has value $w^N$. So $\sum\limits_{h=0}^{N-1} \binom{N}{h+1} (1-w)^{h+1} w^{N-h-1} = ((1-w)+w)^N-w^N$.
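A quick numerical check of identity (*) for a few values of $N$ and $w$ (a sketch in Python):

```python
from math import comb

def lhs(N, w):
    return sum(comb(N - 1, h) * (1 - w) ** h * w ** (N - h - 1) * h / (h + 1)
               for h in range(N))

def rhs(N, w):
    return 1 - (1 - w ** N) / (N * (1 - w))

for N in (2, 5, 10):
    for w in (0.1, 0.5, 0.9):
        print(N, w, abs(lhs(N, w) - rhs(N, w)) < 1e-12)   # all True
```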
To show that $\mathbb{Q}(c)=\mathbb{Q}(d)$
You need to show containment both directions. Showing $\mathbb{Q}(c)\subseteq \mathbb{Q}(d)$ should be completely straightforward, since $c = d^2 + d$. For the other direction, \begin{align*} c&= \sqrt[4]{5}+\sqrt{5} \\ c - \sqrt{5} &= \sqrt[4]{5} \\ c^2 - 2c \sqrt{5} + 5 &= \sqrt{5}\\ c^2 +5 &= (2c+1) \sqrt{5} \\ \frac{c^2+5}{2c+1} &= \sqrt{5} \end{align*} So we can conclude $$ c - \frac{c^2+5}{2c+1} = d $$ and hence $\mathbb{Q}(d)\subseteq \mathbb{Q}(c)$
Prove that if $x<0$ then $F_\mu^n(x)\rightarrow -\infty$ as $n\rightarrow \infty$, where $F_\mu=\mu x(1-x)$.
Fix some $\mu>1$. Since $x_0<0$, $\mu\left(1-x_0\right)>t_0$ for some $t_0>1$. Note that if $x_{n-1}<0$ then $\mu(1-x_{n-1})>\mu>1$, so $x_n=\mu x_{n-1}(1-x_{n-1})<x_{n-1}<0$ and hence $$\mu(1-x_n)>\mu(1-x_{n-1})>\dots>\mu(1-x_0)>t_0$$ for every $n$. In particular $\{x_n\}$ is decreasing, and since $x_n<t_0\,x_{n-1}$ we get $x_n<t_0^nx_0$ for every $n$. Since $t_0^nx_0\to-\infty$, this proves that $\left\{ x_{n}\right\} $ is strictly decreasing with no lower bound. (This argument is in @Jean-Claude Arbaut's comment).
What can we say of a group all of whose proper subgroups are abelian?
The finite case was settled by Miller–Moreno (1903) and described again in Redei (1950). The infinite case is substantially different, due to the existence of Tarski monsters. However, the case of non-perfect groups is reasonably similar to finite groups and is handled in Beljaev–Sesekin (1975), along with some more general conditions. Generally speaking, infinite groups are not very similar to finite groups in these sorts of questions, but if you restrict to (nearly) solvable groups, then things are a bit better. Nearly solvable in this case means "not perfect" and for quotient properties, often means "non-trvial Fitting/Hirsch-Plotkin radical". In other words, for subgroup properties you want an abelian quotient, and for quotient properties you want a (locally nilpotent or) abelian normal subgroup. More current research along the lines I enjoy generalize "abelian" rather than "finite"; a reasonable framework for this is given in Beidleman–Heineken (2009). Miller, G. A.; Moreno, H. C. Non-abelian groups in which every subgroup is abelian. Trans. Amer. Math. Soc. 4 (1903), no. 4, 398–404. MR1500650 JFM34.0173.01 DOI:10.2307/1986409 Rédei, L. Die Anwendung des schiefen Produktes in der Gruppentheorie. J. Reine Angew. Math. 188, (1950). 201–227. MR48432 DOI:10.1515/crll.1950.188.201 Beljaev, V. V.; Sesekin, N. F. Infinite groups of Miller-Moreno type. Acta Math. Acad. Sci. Hungar. 26 (1975), no. 3-4, 369–376. MR404457 DOI:10.1007/BF01902346 Beidleman, J. C.; Heineken, H. Minimal non-F-groups. Ric. Mat. 58 (2009), no. 1, 33–41. MR2507791 DOI:10.1007/s11587-009-0044-2
Probability of X or more simultaneous/overlapping events at any point over time period Y given Z agents issuing N events/second of average duration M?
There is something called a "Poisson Birth and Death Process" that might serve you here. A homework problem in a textbook (prob 18 on p.212 of S. Karlin, A first course in stochastic processes) gives a typical result: "A telephone exchange has $m$ channels. Calls arrive in the pattern of a Poisson process with parameter $\lambda$; they are accepted if there is an empty channel, otherwise they are lost. The duration of each call is a r.v. whose distribution is exponential with parameter $\mu$. The lifetimes of separate calls are independent random variables. Find the stationary probabilities of the number of busy channels." The solution: $$p_n =\frac{(\lambda/\mu)^n (1/n!) }{\sum_{k=0}^m (\lambda/\mu)^k (1/k!) }$$ for $n=0,\ldots,m$. In your problem, $\lambda$ and $\mu$ might be $1/180$ and $1/2$ per second each, so the ratio $\lambda/\mu=1/90$, and $m$ is $M$. If $m=3$, the fraction of time your server is saturated is $p_3 = (90^{-3}/6) / (1+90^{-1}+90^{-2}/2 + 90^{-3}/6)$, the fraction of time it's idle is $p_0 = 1/(1+90^{-1}+90^{-2}/2 + 90^{-3}/6)$, and so on. I have no idea if http servers obey the same models that telephone exchanges seem to have obeyed, but this is the kind of model phone companies seem to have used.
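Here is that stationary distribution computed for the numbers above (a sketch in Python; $\lambda = 1/180$, $\mu = 1/2$ per second and $m = 3$ channels are the assumed values from the answer):

```python
from math import factorial

lam, mu, m = 1 / 180, 1 / 2, 3
rho = lam / mu                      # = 1/90

Z = sum(rho ** k / factorial(k) for k in range(m + 1))
p = [rho ** n / factorial(n) / Z for n in range(m + 1)]

print(p[0])   # fraction of time the server is idle
print(p[m])   # fraction of time all m channels are busy (saturated)
```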
examples of continuous, bounded function
Pick any continuous function from $\mathbb{R}^3$ to $\mathbb{R}^2$ whose image is fully contained inside some disk and it will do. There are infinitely many such functions. As mentioned by David in the comments any constant function will do. For a slightly more complicated example consider the following continuous function $f:\mathbb{R}^2 \to \mathbb{R}^2$ $$f(x,y) = \begin{cases} \langle x,y \rangle & x^2+y^2\leq 1 \\ \frac{\langle x,y \rangle}{\sqrt{x^2+y^2}} & x^2+y^2>1 \end{cases} $$ Since $f$ maps the whole $\mathbb{R}^2$ into the unit disk, composing it with any continuous function from $\mathbb{R}^2\times\mathbb{R}$ to $\mathbb{R}^2$ will yield a continuous bounded function. For example if $g_1(\langle x,y \rangle,z)= \langle x,y \rangle$ then $f\circ g_1$ will be continuous and bounded. For other functions to compose you can try: \begin{align} g_2(\langle x,y \rangle,z) &= \langle x+z,y-z \rangle\\ g_3(\langle x,y \rangle,z) &= \langle xz,yz \rangle\\ g_4(\langle x,y \rangle,z) &= \langle y,-x \rangle\\ g_5(\langle x,y \rangle,z) &= \langle e^{x+y}\cos{z}, e^{x-y}\sin{z} \rangle\\\end{align}
Rigorous definition of characteristic function of random variable as Lebesgue integral
For a given complex-valued function $f:\Omega \to \mathbb{C}$ we can use the decomposition $$f(x) = \text{Re} \, f(x) + i \, \text{Im} \, f(x)$$ to define the integral of $f$. Indeed, since the real part and the imaginary part are both real-valued functions, we know how to integrate them, and therefore the following definition makes sense: $$\int f(x) \, \mu(dx) := \int \text{Re} f(x) \, \mu(dx) + i \int \text{Im} \, f(x) \, \mu(dx) \tag{1}$$ where $\mu$ is some measure on the measurable space $(\Omega,\mathcal{A})$ (e.g. the Lebesgue measure or the distribution of a random variable). Obviously, we need to ensure that the integrals on the right-hand side of $(1)$ are well-defined, i.e. we need $\text{Re} f \in L^1(\mu)$ and $\text{Im} \, f \in L^1(\mu)$. Note that the so-defined integral is linear - this is one of the most important properties of the integral. Moreover, if $f$ is a real-valued function, then we recover the "classical" integral for real-valued functions. The most common $\sigma$-algebra on $\mathbb{C}$ is the Borel-$\sigma$-algebra $\mathcal{B}(\mathbb{C})$, i.e. the $\sigma$-algebra induced by the metric $$d(z_1,z_2) := |z_1-z_2| := \sqrt{(\text{Re}(z_1-z_2))^2 + (\text{Im}(z_1-z_2))^2}.$$ It is possible to show that a function $f: (\Omega,\mathcal{A}) \to (\mathbb{C},\mathcal{B}(\mathbb{C}))$ is measurable if, and only if, $\text{Re}\, f: (\Omega,\mathcal{A}) \to (\mathbb{R},\mathcal{B}(\mathbb{R}))$ and $\text{Im} \, f: (\Omega,\mathcal{A}) \to (\mathbb{R},\mathcal{B}(\mathbb{R}))$ are measurable. This stems from the fact that the bijective mapping $$\iota: \mathbb{C} \to \mathbb{R}^2, z \mapsto (\text{Re} \, z, \text{Im} \, z)$$ is bi-measurable, i.e. both $\iota$ and its inverse are (Borel-)measurable.
Removing finitely many points from a connected manifold (dimension greater then 2) remains connected?
Hint: Things get much easier if you use that a manifold is connected if and only if it is path connected.
Best fit for 'puzzle' shapes inside a frame
This is the 2D version of the bin packing problem, which Wikipedia cites as 2D packing but doesn't have a page for. Certainly it is at least as hard: just make all your rectangles have one dimension as large as the larger dimension of the frame. A search on 2D packing turns up many references.
Show that $I-K$ has a closed range when $K$ is compact on a Hilbert space.
Well you are essentially done simply by proving that there is a $\beta $ such that $\| x-K(x)\| \geq \beta \|x\|$, because this proves that $I-K$ is bounded below and any operator that is bounded below has closed range: Indeed, suppose that $\|Tx\| \geq \beta \|x\|$ for all $x \in X$ (where $T:X\to Y$ is a bounded linear map between any two Banach spaces). If $Tx_n \to y \in Y$ we must have that $(x_n)_n$ is Cauchy and therefore $x_n \to x \in X$ and of course $y=Tx \in T(X)$, whence $T(X)$ is closed.
Proof without induction : $(n+1)! < n(1!+2!+\cdots +n!)$
$$n(1!+2!+...+n!)=n(n-1)!\left(\frac{1}{(n-1)!}+\frac{2}{(n-1)!}+...+1+n\right)>n(n-1)!(n+1)=(n+1)!$$ For the bound in the other direction, write $$(n-1)(1!+2!+...+n!)=(n-1)n!\left(\frac{1}{n!}+...+\frac{1}{n}+1\right)$$ and note that the $n-1$ terms before the final $1$ satisfy $$\frac{1}{n!}+...+\frac{1}{n}\le\frac{n-2}{n(n-1)}+\frac{1}{n}=\frac{2n-3}{n(n-1)}<\frac{2}{n-1}$$ (each of the first $n-2$ of them is at most $\frac{(n-2)!}{n!}=\frac{1}{n(n-1)}$), so $$(n-1)(1!+2!+...+n!)<(n-1)n!\left(1+\frac{2}{n-1}\right)=(n+1)n!=(n+1)!$$
O(p,q; C) isomorphic to the usual orthogonal group O(p + q; C) for complex field
Let $$Q_{p,q}(z,w) = \sum_{i = 1}^{p} z_i w_i - \sum_{i=p+1}^{p+q} z_i w_i.$$ This is the standard symmetric bilinear form of signature $(p,q)$ on $\Bbb C^{p+q}$ (note that the definition of indefinite orthogonal groups uses bilinear forms, not sesquilinear). The complex indefinite orthogonal group is then defined by $$\mathrm{O}(p,q; \Bbb C) = \{A \in \mathrm{GL}_{p+q}(\Bbb C) : Q_{p,q}(Az, Aw) = Q_{p,q}(z,w)\}.$$ If we take $\{e_i\}$ to be the standard basis of $\Bbb C^{p+q}$ and define $$\varphi(e_i) = \begin{cases} e_i, & 1 \leq i \leq p, \\ ie_i, & p+1 \leq i \leq p+q \end{cases}$$ then we have that $$Q_{p,q}(\varphi(z), \varphi(w)) = \sum_{i = 1}^p z_i w_i + \sum_{i = p+1}^{p+q} z_i w_i = \sum_{i = 1}^{p+q} z_i w_i = Q_{p+q,0}(z,w).$$ Hence $\varphi$ is an isomorphism between the symmetric bilinear forms $Q_{p,q}$ and $Q_{p+q,0}$. Then we get an induced isomorphism $$\mathrm{O}(p,q; \Bbb C) \xrightarrow{~\cong~} \mathrm{O}(p+q; \Bbb C),$$ $$A \mapsto \varphi^{-1} \circ A \circ \varphi.$$ This of course works for $\mathrm{SO}(p,q; \Bbb C)$ as well since we just need to take the determinant $1$ elements for that.
Continuous open maps and first countability
Make it better structured: Suppose $y \in f[X]$ and let $x \in X$ be such that $f(x)=y$. It suffices to show $y$ has a countable local base. Let $\mathcal{B}_x$ be a countable local neighbourhood base for $x$ ($X$ is first countable). I claim that $$\mathcal{B}'_y:= \{f[B]: B \in \mathcal{B}_x\}$$ is a (countable) local base at $y$. Proof: First note that the sets in $\mathcal{B}'_y$ are open by openness of $f$ (an assumption you mentioned in the title but not in the proof!) and all contain $y$. So that out of the way, let $O$ be open in $f[X]$ and $y \in O$. As again $f[X]$ is open in $Y$ we also know that $O$ is open in $Y$. And so $x \in f^{-1}[O]$ and the latter set is open by continuity of $f$, there is some $B \in \mathcal{B}_x$ such that $B \subseteq f^{-1}[O]$. Hence $f[B] \in \mathcal{B}'_y$ and $f[B] \subseteq O$ and so $\mathcal{B}'_y$ is a local base at $y$. Finally, as $\mathcal{B}_x$ is countable, so is $\mathcal{B}'_y$ as its image under a surjective map. The weird unions in the OP's proof are unnecessary. Just straightforwardly define a local base in $f[X]$ from one in $X$.
Prove that $\forall n\geq2,\;\exists \text{unique} \; t\in N $ such that $1+2+\ldots t < n \leq 1+2+3\ldots (1+t)$
Inducting on $n$ isn't your best option. It's easier to prove by induction on $t$ that $T_t:=\sum_{k=1}^tk$ is a strictly increasing sequence in $\Bbb N$. Since any such sequence contains arbitrarily large elements (in particular $T_t\ge t$), for any $n\in\Bbb N$ there is at least one $t\in\Bbb N$ with $n\le T_{t+1}$. Therefore, there is a least such $t$, whence $n\not\le T_{t-1+1}\implies T_t&lt;n\le T_{t+1}$.
Irrationality/Transcendentality of values of $e^{e^x}$
I see you have also asked a similar question here. It is a bit old, but nevertheless I believe I have results that might interest you, especially the latter one. Claim: $[2]$ is true, conditionally on Schanuel's conjecture Proof: Specifically, Schanuel's conjecture states that $$\text{trdeg}_{\mathbb{Q}} \,\! \mathbb{Q}\left(x_1, x_2, \cdots x_n, e^{x_1}, e^{x_2}, \cdots , e^{x_n}\right) \geq n$$ for $\mathbb{Q}$-linearly independent $x_1, x_2, \cdots, x_n \in \mathbb{C}$. One can verify that $(x_1 , x_2) = (x, e^x)$ is $\mathbb{Q}$-linearly independent for nonzero algebraic $x$ (since $e^x$ is then transcendental), thus the above gives that $\left(e^x, e^{e^x}\right)$ is a $\mathbb{Q}$-algebraically independent pair (the algebraic number $x$ contributes nothing to the transcendence degree), and in particular $e^{e^x}$ is transcendental. $\blacksquare$ Claim: At least one of $\large e^{e^x}, e^{e^{2x}}$ and $\large e^{e^{3x}}$ is transcendental Proof: This can be concluded from the lines of the proof of Schneider's conjecture done in Brownawell (1974). In general, it has been proved (unconditionally, in the same paper) that for $\alpha, \beta \in \mathbb{C} \! \setminus \!\!\mathbb{Q}$ and $\gamma \in \mathbb{C}$ if $e^{\alpha}, e^{\alpha \gamma} \in \mathbb{\bar Q}$ then at least two of $$\large \alpha, \beta, \gamma, e^{\beta\gamma}, e^{\alpha\beta\gamma}$$ are $\mathbb{Q}$-algebraically independent. The claim is then proved by setting $\alpha = \beta = \gamma = e^{x}$. $\blacksquare$ Note: $[1]$ in general is wide open currently, as far as I know. I recall though that transcendence of power towers of polynomials of $e$ over $\bar Q$ holds with some weaker condition than Schanuel. I'll link it to here as soon as I remember.
compact subset of real
$\bullet \ T$ compact subset of $\mathbb{R}$ implies that $T$ is closed and bounded. $\bullet$ Let $\ p \in \mathbb{R}-T$. $T$ compact implies that if we consider closure($T$) it is of the form $[-M,M]$. $\bullet$ If $p \in cl(T)$ then we are done since by assumption this will make $p$ a limit point of $T$. $\bullet$ If $p$ is not in the closure of $T$ then consider $(q_n) := -M+1/n$ or $q_n:=M-1/n$ which are taken this way depending on the sign of $p$. Thus the limit of any one of these sequence will give infimum of $|x-p|$.
Exercise 2.18 Brezis' Functional Analysis
$u\in R(A')^\perp$ means that for every $y'\in D(A')$ and every sequence $(u_n)\subset D(A)$ with $u_n\to u$ we have $y'(Au_n)\to 0$ as $n\to\infty$. Now, as you started, if $(u,0)\notin G(A)$, then there exists a bounded linear functional $f$ on $E\times F$ such that $f(x,Ax) = 0$ for all $x\in D(A)$ and $f(u,0) = 1$. Now, define the functional $y' : F\to\Bbb K$ by $y'(y) := f(0,y)$. Then, obviously, $y'\in F'$. But also for $x\in D(A)$ we have $$ |y'(Ax)| = |f(0,Ax)| = |f(x,Ax)-f(x,0)| = |f(x,0)|\le \|f\|\|x\|. $$ That is, $y'\in D(A')$. So, if $(u_n)\subset D(A)$ is a sequence with $\lim_{n\to\infty}u_n = u$, then \begin{align*} 0 &amp;= \lim_{n\to\infty}y'(Au_n) = \lim_{n\to\infty}f(0,Au_n) = \lim_{n\to\infty}f(u_n,Au_n)-f(u_n,0)\\ &amp;= -\lim_{n\to\infty}f(u_n,0) = -f(u,0) = -1, \end{align*} a contradiction.
Is $\omega_1^{CK}$ recursively enumerable?
There is no recursive enumeration of $\omega_1^{CK}$. Indeed, there is no hyperarithmetic (or even $\Sigma^1_1$) way to present $\omega_1^{CK}$! This was proved by Spector. More generally, if a Turing degree ${\bf a}$ computes a copy of an ordinal $\alpha$ and ${\bf a}$ is hyperarithmetic relative to ${\bf b}$, then ${\bf b}$ also computes a copy of $\alpha$. This general property is called "hyperarithmetic-is-recursive," and turns out to have surprising connections with model theory (at least, its more set-theoretic side). We say that a class of structures $\mathbb{K}$ satisfies hyperarithmetic-is-recursive if whenever a Turing degree ${\bf a}$ is hyperarithmetic in ${\bf b}$ and computes a copy of $\mathcal{A}$, then ${\bf b}$ also computes a copy of $\mathcal{A}$, for all $\mathcal{A}\in\mathbb{K}$. Montalban showed that a first-order theory $T$ is a counterexample to Vaught's conjecture if and only if $T$ satisfies "hyperarithmetic-is-recursive on a cone," a slight weakening of "hyperarithmetic-is-recursive." With regard to the first part of your question, however, it's not entirely clear to me what process you have in mind; specifically, how Replacement is allowed to be used. In the naive sense, we can automatically reach any countable ordinal which has a copy (= ordering of $\omega$ with appropriate ordertype) definable in set theory with a single application of replacement. $\omega_1^{CK}$, and much more, falls into this category. Here is an explicit definable well-ordering of $\omega$ with ordertype $\omega_1^{CK}$: We define Kleene's set $\mathcal{O}$ of ordinal notations as usual. We now define a relation $\triangleleft$ on $\mathcal{O}$ as follows: $m\triangleleft n$ iff $m$ is an index for an ordinal which is smaller than the ordinal $n$ is an index for. Finally, we let $$M=\{m\in\mathcal{O}: \forall n\in\mathcal{O}(n<m\implies (m\triangleleft n\vee n\triangleleft m))\}$$ be the set of minimal indices for ordinals. Then $M$, ordered by $\triangleleft$, has ordertype $\omega_1^{CK}$. Now, since $M$ doesn't actually consist of every natural number, we're not technically done, but this is easily fixed: let $M=\{m_0<m_1<m_2< ...\}$ and let $\triangleleft'$ be defined by $x\triangleleft'y\iff m_x\triangleleft m_y$. Then $\triangleleft'$ is a well-ordering of $\omega$ with ordertype $\omega_1^{CK}$. And this doesn't even scratch the surface of ordinals we can build. So in the process you describe, you probably want to limit attention somehow to "effective" applications of replacement, and it's not entirely clear to me what this means. That said, you may be tangentially interested in this paper of Arai, in which he essentially calculates a bound on the $L$-rank of reals (equivalently, hereditarily countable sets) which can be "proved to exist" in ZFC+V=L in a precise sense. I think this overshoots what you want by a huge margin, both because powerset is allowed and because there's no "finite construction" picture going on (as far as I can tell), but you might still find it neat.
data point on an exponential line in excel
I assume that 1E-07x is in the exponent (superscript). Then you should read $$y=41899\cdot \exp(10^{-7}x)$$ So $e$ is the base of the natural logarithm ($2.718\ldots$), and E is the power of $10$ in the scientific notation: $$1.23\text{E}-7=1.23\cdot10^{-7}$$
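As a quick sanity check, here is a minimal Python sketch of evaluating that trendline (the sample $x$ value is arbitrary and only illustrative):

```python
import math

def trendline(x):
    # 1E-07 is spreadsheet scientific notation for 1e-7 = 10**(-7);
    # y = 41899 * exp(1e-7 * x), where e is the base of the natural logarithm
    return 41899 * math.exp(1e-7 * x)

print(trendline(0))          # 41899.0
print(trendline(1_000_000))  # 41899 * exp(0.1), roughly 46306
```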
How to calculate an inner ellipse points that is always a set distance from an outer ellipse points
Start from the ellipse $\frac {x^2}{a^2} + \frac {y^2}{b^2} = 1$. Suppose we look at $z = \frac {x^2}{a^2} + \frac {y^2}{b^2}$, so that $\nabla z = \left(\frac {2x}{a^2}, \frac {2y}{b^2}\right)$. This vector is perpendicular to the ellipse. If $(x,y)$ is on your ellipse, then $(x,y) - 5\dfrac{(\frac {x}{a^2},\frac {y}{b^2})}{\sqrt{\frac {x^2}{a^4}+\frac {y^2}{b^4}}}$ is on the curve you are looking for.
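Here is a small numerical sketch of this construction (the offset distance $5$ is the one used above; the semi-axes $a=20$, $b=10$ are illustrative values, not taken from the question):

```python
import numpy as np

def inner_points(a, b, offset=5.0, n=100):
    """Points at a fixed inward normal distance from the ellipse x^2/a^2 + y^2/b^2 = 1."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x, y = a * np.cos(t), b * np.sin(t)   # points on the outer ellipse
    nx, ny = x / a**2, y / b**2           # gradient direction (outward normal, factor 2 dropped)
    norm = np.hypot(nx, ny)
    return x - offset * nx / norm, y - offset * ny / norm

xi, yi = inner_points(a=20.0, b=10.0)
print(xi[0], yi[0])   # at t = 0 this gives (15, 0): 5 units inside along the x-axis
```

Note that the resulting curve is an offset (parallel) curve of the ellipse, not another ellipse.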
How to compute $\log\binom {n_H + n_T}{n_H}$?
I don't think you need to care about what $\log \binom{n_H+n_T}{n_H}$ is, since to find the argmax you need to differentiate with respect to the parameter, and $\log \binom{n_H+n_T}{n_H}$ does not depend on it, so it vanishes after differentiation.
Why can't I reduce an integral divided by another integral?
You cannot, since you're basically assuming $$\frac{ \sum_i f(i) }{ \sum_i g(i) } = \sum_i \frac{f(i)}{g(i)}$$ This is not true in general. You're assuming this because each integral is basically its own, individual (limit of a) sum. For instance, it is generally not true that $$\frac{a+b}{c+d} = \frac a c + \frac b d$$ It is also generally untrue that $\int f / \int g = \int (f/g)$ as a result.
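A two-line numerical check of that last point (the numbers are arbitrary):

```python
a, b, c, d = 1.0, 2.0, 3.0, 4.0
print((a + b) / (c + d))  # 0.4285...
print(a / c + b / d)      # 0.8333...  -- not the same thing
```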
Independence of random variable?
It doesn't imply that $X$ and $Z$ are independent. They might be independent, but they need not be independent. As an extreme example let $X$ and $Y$ be independent, and let $Z=X$. One should probably give an explicit example. A fair dime and a fair quarter are tossed. Let $X=1$ if we get a head on the dime, and let $X=0$ otherwise. Let $Y=1$ if we get a head on the quarter, and $0$ otherwise. Let $Z=X$. It is clear (and easy to show) that $X$ and $Z$ are not independent. But in a standard model of coin tossing, $X$ and $Y$ are independent and, for the same reason, $Y$ and $Z$ are independent.
Kirchhoff's formula to Euler-Poisson-Darboux equation
The linear transformation $L(z) = x+tz$ has the following properties:
- $L(0)=x$
- $|L(a)-L(b)|=t|a-b|$ for all $a,b$

As a consequence, $L(\partial B(0,1))=\partial B(x,t)$. Also, since the transformation scales all distances by a factor of $t$, it scales the $k$-dimensional measure of sets by $t^k$. Lastly, $dS(y)=t^{n-1} dS(z)$. This factor of $t^{n-1}$ cancels out because $$|\partial B(x,t)|=t^{n-1}|\partial B(0,1)|$$
Formula to limit a number within a minimum and maximum value
$$f(x)={\rm median}(\{0,x,100\})$$
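In code this is the usual clamp operation; a minimal sketch (the bounds $0$ and $100$ are the ones from the formula above):

```python
def clamp(x, lo=0, hi=100):
    # median of {lo, x, hi}: lo if x < lo, hi if x > hi, otherwise x itself
    return min(max(x, lo), hi)

assert clamp(-17) == 0
assert clamp(42) == 42
assert clamp(250) == 100
```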
Way to aggregate percentiles
There is no math for meaningfully aggregating percentiles. Once you've summarized things as percentiles (and discarded the raw data or histogram distribution behind them) there is no way to aggregate the summarized percentiles into anything useful for the same percentile levels. And yes, this means that those "average percentile" legend numbers that show up in various percentile monitoring charts are completely bogus. A simple way to demonstrate why any attempt at aggregating percentiles by averaging them (weighted or not) is useless is to try it with a percentile that is simple to reason about: the 100%'ile (the max). E.g., if I had the following 100%'iles reported for each one minute interval, each with the same overall event count: [1, 0, 3, 1, 601, 4, 2, 8, 0, 3, 3, 1, 1, 0, 2] The (weighted or not) average of this sequence is 42. And it has as much relation to the overall 100%'ile as the phase of the moon does. No amount of fancy averaging (weighted or not) will produce a correct answer for "what is the 100%'ile of the overall 15 minute period?". There is only one correct answer: 601 was the 100%'ile seen during the 15 minute period. There are only two percentiles for which you can actually find math that works for accurate aggregation across intervals:
- the 100%'ile (for which the answer is "the max is the max of the maxes")
- the 0%'ile (for which the answer is "the min is the min of the mins")

For all other percentiles, the only correct answer is "The aggregate N%'ile is somewhere between the lowest and highest N%'ile seen in any interval in the aggregate time period". And that's not a very useful answer. Especially when the range for those can cover the entire spectrum. In many real world data sets, it often amounts to something close to "it's somewhere between the overall min and overall max". For more ranting on this subject:
http://latencytipoftheday.blogspot.com/2014/06/latencytipoftheday-q-whats-wrong-with_21.html
http://latencytipoftheday.blogspot.com/2014/06/latencytipoftheday-you-cant-average.html
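A short sketch reproducing the 100%'ile example above (same per-minute maxima as in the list):

```python
per_minute_max = [1, 0, 3, 1, 601, 4, 2, 8, 0, 3, 3, 1, 1, 0, 2]

# "Average of the per-interval 100%'iles" -- a meaningless aggregate
print(sum(per_minute_max) / len(per_minute_max))  # 42.0

# The only correct aggregate for the 100%'ile: the max of the maxes
print(max(per_minute_max))                        # 601
```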
Evaluating a summation of inverse squares over odd indices
Note that $$\sum_{n \text{ is even}} \dfrac1{n^2} = \sum_{k=1}^{\infty} \dfrac1{(2k)^2} = \dfrac14 \sum_{k=1}^{\infty} \dfrac1{k^2} = \dfrac{\zeta(2)}4$$ Also, $$\sum_{k=1}^{\infty} \dfrac1{k^2} = \sum_{k \text{ is odd}} \dfrac1{k^2} + \sum_{k \text{ is even}} \dfrac1{k^2}$$ Hence, $$\sum_{k \text{ is odd}} \dfrac1{k^2} = \dfrac34 \zeta(2)$$
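A quick numerical sanity check of the result (truncating the series at an arbitrary cutoff):

```python
import math

N = 10**6
odd_sum = sum(1.0 / k**2 for k in range(1, N, 2))  # 1/1^2 + 1/3^2 + 1/5^2 + ...
print(odd_sum)                # ~1.2337
print(0.75 * math.pi**2 / 6)  # (3/4) * zeta(2) = pi^2 / 8 ~ 1.2337
```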
Question About Primitive Root of Unity
All primitive $n$th roots of unity have the form $\zeta=\exp(\frac{2\pi ik}{n})$ with $k$ an integer such that $(k,n)=1$. So it is enough to observe that $\zeta^2=\exp(\frac{2\pi i\cdot 2k}{n})$ and that if $k$ is prime to $n$ then so is $2k$, since $n$ is odd.
How to plot this direction field in Maple.
restart:
Explore(
    DEtools[dfieldplot](diff(y(x),x) = a*y(x)+b, y(x), x= -2..2, y= -2..2),
    parameters= [a= 0.0..2.0, b= 0.0..2.0]
);

Then use your mouse to control the sliders for $a$ and $b$.
$f| _{dense\;set}=0$ and continuous on lines $\implies f \equiv 0$
Note: this answer is valid for uniform continuity on the lines, i.e. the $\delta_x,\delta_y$ below are independent of the choice of the horizontal/vertical line. Let $(x,y)\in I^2\setminus A$ and $\varepsilon>0$. By hypothesis, there exist $\delta_x,\delta_y>0$ such that $$|x-x_0|<\delta_x\Longrightarrow|f(x,y)-f(x_0,y)|<\varepsilon/2\quad\quad\forall y\in I$$ and $$|y-y_0|<\delta_y\Longrightarrow|f(x_0,y)-f(x_0,y_0)|<\varepsilon/2\quad\quad\forall x_0\in I.$$ Now, as $A$ is a dense set, we can find $(x_0,y_0)\in A$ such that $$|x-x_0|<\delta_x,\quad |y-y_0|<\delta_y.$$ Then we have $$|f(x,y)|=|f(x,y)-\underbrace{f(x_0,y_0)}_{=0}|\leq|f(x,y)-f(x_0,y)|+|f(x_0,y)-f(x_0,y_0)|<\varepsilon.$$ As $\varepsilon$ was arbitrary, we've shown that $f(x,y)=0$. As $(x,y)\in I^2\setminus A$ was arbitrary, we've shown that $f$ is identically zero on $I^2\setminus A$.
Let $\lVert\cdot\rVert_1,\lVert\cdot\rVert_2$ be norms on vector space $X$. Prove that they generate the same topology iff they are equivalent.
Let $B_{r,i}(x)$ denote the open ball of radius $r$ around $x$ according to the $i$th norm. Suppose the topologies are equivalent. $B_{1,1}(0)$ is open according to $||\cdot||_2$, so there is some $r>0$ s.t. $B_{r,2}(0)\subseteq B_{1,1}(0)$. Therefore, if $||x||_1 =1$ then $||x||_2\geq r$. So for every $x\neq 0$, $||\frac{x}{||x||_1}||_2\geq r$ hence $||x||_2\geq r||x||_1$. A similar argument yields the other inequality.
Understanding infinitely many primes proof.
Because $P$ is greater than the putative greatest prime, it must be composite, and is thus divisible by some $p_i$. Because $p_i$ is included in the product $p_1 \dots p_n$, it divides $p_1 \dots p_n$. Then, $p_i$ divides both $P$ and $p_1 \dots p_n$, so $p_i$ divides their difference.
Orthogonal 3D rotational matrices
You obtain your rotation matrix $R$ from the product of three matrices $$R=R_x(\alpha)\,R_y(\beta)\,R_z(\gamma).$$ If $R$ is orthogonal, then $R^T\,R=I_3$, where $I_3$ is the $3\times 3$ identity matrix: $$R^T\,R=(R_x\,R_y\,R_z)^T\,(R_x\,R_y\,R_z)=R_z^T\,R_y^T\,R_x^T\,R_x\,R_y\,R_z$$ with $R_x^T\,R_x=I_3$, $R_y^T\,R_y=I_3$ and $R_z^T\,R_z=I_3$. Thus $$R^T\,R=I_3\, ,\quad R^{-1}=R^T\, ,$$ and the determinant of $R$ is: $$\det(R)=\det(R_x)\,\det(R_y)\,\det(R_z)=1\times 1\times 1=1$$ The three factors are $$R_x(\alpha)=\left[ \begin {array}{ccc} 1&0&0\\ 0&\cos \left( \alpha \right) &-\sin \left( \alpha \right) \\ 0& \sin \left( \alpha \right) &\cos \left( \alpha \right) \end {array} \right] $$ $$R_y(\beta)= \left[ \begin {array}{ccc} \cos \left( \beta \right) &0&\sin \left( \beta \right) \\ 0&1&0\\ -\sin \left( \beta \right) &0&\cos \left( \beta \right) \end {array} \right] $$ $$R_z(\gamma)= \left[ \begin {array}{ccc} \cos \left( \gamma \right) &-\sin \left( \gamma \right) &0\\ \sin \left( \gamma \right) &\cos \left( \gamma \right) &0\\ 0&0&1\end {array} \right] $$
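A small numerical check of these identities (a sketch; the angle values are arbitrary):

```python
import numpy as np

def Rx(a):
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def Ry(b):
    return np.array([[ np.cos(b), 0, np.sin(b)],
                     [ 0,         1, 0        ],
                     [-np.sin(b), 0, np.cos(b)]])

def Rz(c):
    return np.array([[np.cos(c), -np.sin(c), 0],
                     [np.sin(c),  np.cos(c), 0],
                     [0,          0,         1]])

R = Rx(0.3) @ Ry(-1.1) @ Rz(2.4)
print(np.allclose(R.T @ R, np.eye(3)))  # True: R is orthogonal
print(np.linalg.det(R))                 # ~1.0
```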
Why $\cosh(y)$ cannot be zero in this example?
> Now here I thought this implied $\sin(x) = 0$ or $\cosh(y) = 0$. However the solutions give only $\sin(x) = 0$, why?

$$ \cosh x = \frac{e^x + e^{-x}}2$$ So if $\cosh x = 0$, then $e^x + e^{-x} = 0$. But this can't happen because both $e^x$ and $e^{-x}$ are positive, no matter what real value $x$ has. And the sum of two positive numbers can never be zero. Therefore $\cosh x$ can never be zero for real values of $x$.

> Also, when doing the second equation, you get $\cos(x)\sinh(y) = 0$, but this time the solutions split the case into $\cos(x)=0$ and $\sinh(y)=0$ as I did above. Why this?

$$ \sinh x = \frac{e^x - e^{-x}}2$$ So if $\sinh x = 0$, then $e^x - e^{-x} = 0$. And this can happen because the difference of two positive numbers can be zero. And the solution is \begin{align*} e^x - e^{-x} &= 0\\ e^x &= \frac1{e^x}\\ e^{2x} &= 1\\ 2x &= \ln 1 = 0\\ x &= 0 \end{align*}
An urn contains five balls numbered 1 to 5. We conduct a sample of two balls with replacement
Your E2 for C) (without replacement) is incorrect. It should still be $\frac{3}{5}$. Think of it like this: suppose you draw all five balls without replacement, i.e. you effectively shuffle them and then line them up. Then the probability of any of them (no matter what position they are in the line-up) being smaller than $4$ is $\frac{3}{5}$. If you still don't believe me: if the first ball is smaller than $4$ (which has a probability of $\frac{3}{5}$), then the probability of the second ball also being smaller than $4$ is $\frac{2}{4}$. But if the first ball is $4$ or greater (which has a probability of $\frac{2}{5}$), then the probability of the second ball being smaller than $4$ is $\frac{3}{4}$. So: $$P(E2) = \frac{3}{5} \cdot \frac{2}{4} + \frac{2}{5} \cdot \frac{3}{4} = \frac{3}{5}$$
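If you'd rather check this by brute force, enumerating all ordered draws without replacement gives the same $\frac{3}{5}$ (a minimal sketch):

```python
from itertools import permutations

draws = list(permutations(range(1, 6), 2))  # all ordered pairs of distinct balls
p_second_small = sum(1 for first, second in draws if second < 4) / len(draws)
print(p_second_small)  # 0.6 == 3/5
```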
What is the real analysis version of this complex analysis Weierstrass theorem?
We find in section 10.2.3 (Uniformly Convergent Series) of Concise Calculus by Sheng Gong the following theorem: Given a function series $\sum_{n=1}^\infty u_n(x)$ on $[a,b]$: if every term of the series is differentiable, every derivative function is continuous, and the series of derivative functions is uniformly convergent, then the series is termwise differentiable. That is, if the function series $\sum_{n=1}^\infty u_n(x)$ is convergent, the derivative function $u_n^{\prime}(x)$ of each term is continuous and the function series $\sum_{n=1}^\infty u_n^{\prime}(x)$ is uniformly convergent, then \begin{align*} \frac{d}{dx}\left(\sum_{n=1}^\infty u_n(x)\right)=\sum_{n=1}^\infty \frac{d}{dx}u_n(x)=\sum_{n=1}^\infty u_n^{\prime}(x). \end{align*} In other words, the order of the derivative $\frac{d}{dx}$ and the sum $\sum$ can be exchanged. Conclusion: Here we see how strong the concept of a holomorphic function is. In the Weierstrass theorem we consider functions $f_n$ that are holomorphic on a compactum. This also implies the existence of derivatives of the functions $f_n$ of arbitrary order. Armed with that, it is sufficient to require uniform convergence of the holomorphic functions $f_n$ on a compactum, and it follows not only that $f(z)=\sum_{n=1}^\infty f_n(z)$ is holomorphic on this compactum but also the uniform convergence of all $k$-th derivatives $f^{(k)}$ of $f$ on it. On the other hand, in the real case we have to require that all the functions $u_n(x)$ are $C^1$, i.e. differentiable with continuous derivatives. Contrary to the complex case, we have to require that the derivatives are uniformly convergent on $[a,b]$, and we cannot conclude anything about higher derivatives of the functions $u_n(x)$, since we don't even know if they exist.
Is it consistent that every set is the countable union of sets with smaller cardinality, or is it just alephs?
In the final part of his paper M. Gitik, All uncountable cardinals can be singular, Israel J. Math. 35 (1980), no. 1-2, 61--88. Gitik proves that in his model every set is the countable union of sets with a smaller cardinality, and moreover if we close all the countable sets under countable unions, we obtain every set. He then proves that it is consistent that every ordinal has countable cofinality, but there is a set which is not the countable union of smaller sets.
If $y(t)$ cannot be extended to a solution on $[t_0,\infty)$, then $|y(t)|\rightarrow\infty$
Yes, the uniqueness is useful here. In particular the fact that you can glue solutions (over overlapping time intervals) together. Note that the smoothness of $f$ implies that it satisfies a Lipschitz condition on any closed interval (in the second variable). We can thus use the Picard–Lindelöf theorem to bound the lifetime of any solution from below. More concretely, if we assume that the maximal solution $y$ lives on $[0,t_1)$ but stays bounded as $t\to t_1-$ we can find a rectangle $$[0,t_1)\times y([0,t_1))\subset R:=[0,t_2]\times I.$$ Picard–Lindelöf assures that for any fixed $t'\in [0,t_1)$ the problem of propagating $y(t')$ into the future admits a solution at least on $[t',t'+b]$, where $b$ is determined by the size of the rectangle and the supremum of the values of $f$ on it (the concrete formulation somewhat differs by source). Now we can simply move $t'$ closer to $t_1$ so that $t'+b>t_1$. This extension then glues with $y$ to give a solution beyond $t_1$, contradicting maximality.
Can a probability be a random variable?
The law of total probability is fine. Your mistake comes from (confusingly) assuming that $P(\textrm{white})$ is a random variable, in particular that it is $X$. In fact, it is not: it is a probability, which is just a constant, so it can be equal to $E(X)$, which it is. That also makes sense: the chance of a white ball is the mean of all proportions, which is "intuitive" to me at least. $X$ is just a randomly drawn "proportion", so we can assume from what you have told us that $X$ is a $U(0,1)$ uniform random variable (this, by the way, makes your sum notation wrong: you need an integral, since $X$ is a continuous random variable).
Isotropic Metric on $\mathbb{R}^{\mathbb{N}}$
If you want the topology induced by the metric to be the product topology, there is no such metric, because $e^i$ converges to $0$ as $i\to\infty$ in the product topology so $d(e^i,0)$ must go to $0$. If you don't care what the topology is, there are many reasonable metrics you could define. For instance, you could define $d((x_n),(y_n))=\min(1,\sup_n |x_n-y_n|)$. (Note that the topology induced by this metric has some properties that might be surprising; for instance, it is disconnected.)
Finding local extrema of $f(x) = x^2 e^{-x}$
Hint: $$\frac{df}{dx}=2x e^{-x}+x^2(-e^{-x})=(2x-x^2)e^{-x}$$ Extrema can occur only when $df/dx=0$, and $e^{-x}$ is always positive.
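For completeness, the critical points can also be checked symbolically (a sketch using sympy; this is not needed to use the hint itself):

```python
import sympy as sp

x = sp.symbols('x')
f = x**2 * sp.exp(-x)

critical = sp.solve(sp.diff(f, x), x)
print(critical)  # [0, 2]

# Sign of the second derivative classifies them:
# positive at x = 0 (local minimum), negative at x = 2 (local maximum)
print([sp.diff(f, x, 2).subs(x, c) for c in critical])
```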
What derivative rule is applied here and how to see that it is?
The revenue can be written as $R=P\cdot Q(\color{blue}P)=P\cdot (a-bP)$. Here you have to apply the $\textbf{product rule}$, because $Q$ depends on $P$. $\text{First summand}$: The first factor $P$ is differentiated w.r.t. $P$ and is multiplied by the second factor $Q(P)$: $$\frac{\partial P }{\partial P}\cdot Q(P)=1\cdot (a-bP)$$ $\text{Second summand}$: The second factor ($Q(P)$) is differentiated w.r.t. $P$ and is multiplied by the first factor $P$: $$\frac{\partial Q(P) }{\partial P}=-b\Rightarrow \frac{\partial Q(P) }{\partial P}\cdot P=-bP $$ The sum gives the marginal revenue: $$MR=\frac{\partial P }{\partial P}\cdot Q(P)+\frac{\partial Q(P) }{\partial P}\cdot P=1\cdot (a-bP)-bP=a-2bP$$
Invariant Probability Vector
If the transition matrix is $A$ and the probability vector is $\mu$, "invariant" means that $\mu A = \mu$. Another way of saying this is that $\mu$ is a left eigenvector of $A$ with eigenvalue 1. $\mu A = \mu$ is really just a system of linear equations. If we write $\mu = [\mu_1, \mu_2, \mu_3]$ then we have $$[\mu_1, \mu_2, \mu_3] \begin{bmatrix} .4&.2&.4 \\\\ .6&0&.4 \\\\ .2&.5&.3 \end{bmatrix}= [\mu_1, \mu_2, \mu_3]$$ or in other words $$\begin{align*} .4 \mu_1 + .6 \mu_2 + .2 \mu_3 &= \mu_1 \\ .2 \mu_1 + 0 \mu_2 + .5 \mu_3 &= \mu_2 \\ .4 \mu_1 + .4 \mu_2 + .3 \mu_3 &= \mu_3. \end{align*} $$ Since $\mu$ is to be a probability vector we also have to have $$\mu_1 + \mu_2 + \mu_3 = 1.$$ So you have a system of 4 linear equations in 3 unknowns. Now you just have to solve this system.
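Here is a small numerical sketch of solving that system (the matrix is the one from the question; appending the normalization row and using least squares is just one convenient way to handle the 4-equation, 3-unknown system):

```python
import numpy as np

A = np.array([[0.4, 0.2, 0.4],
              [0.6, 0.0, 0.4],
              [0.2, 0.5, 0.3]])

# mu A = mu  is  (A^T - I) mu^T = 0; add a row of ones for mu_1 + mu_2 + mu_3 = 1.
M = np.vstack([A.T - np.eye(3), np.ones((1, 3))])
rhs = np.array([0.0, 0.0, 0.0, 1.0])

mu, *_ = np.linalg.lstsq(M, rhs, rcond=None)
print(mu)      # the invariant probability vector
print(mu @ A)  # equals mu again, up to rounding
```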
Taylor series and integration
Let $M = \sup_{s \in [0,t]} |f''(s)|$. Suppose $t_{k+1}-t_k \le \delta$ for all $k$; then $|\sum_k f''(s_k)(t_{k+1}-t_k)^2| \le M\sum_k (t_{k+1}-t_k)^2 \le M\delta \sum_k (t_{k+1}-t_k) = M\delta t$. So given $\epsilon>0$, choose $\delta < {1 \over Mt} \epsilon$.
A nicer proof of Lagrange's 'best approximations' law?
Of course there is a nicer proof! In fact, it's almost obvious if one thinks about the geometric interpretation of continued fractions: consider the line $y=\alpha x$; then the best approximation (i.e. the approximation that minimizes $|\alpha q-p|=q|\alpha-\frac{p}{q}|$) is the point of the integer lattice nearest to this line; finally observe that convergents with even/odd numbers of the continued fraction give coordinates of [vertices of] the convex hull of [points of] the lattice lying over/under the line. One can also (as J. M. suggests) take a look at Lorentzen, Waadeland, Continued Fractions with Applications, I.2.1 (esp. figure 1 and the text near it; the words "convex hull" do not appear there, but implicitly everything is explained, more or less). Upd. One more reference is Davenport's "The Higher Arithmetic" (section IV.12). Finally, an illustration (from Arnold's book): the bold line $y=\alpha x$ (in the picture $\alpha=\frac{10}7$) is approximated by vectors $e_i$ corresponding to convergents (namely, $e_i$ ends at $(p_i,q_i)$); each vector (starting from $e_3$) is a linear combination of the two preceding ones: $e_{i+2}=e_i+a_{i-1}e_{i+1}$, where $a$ is the maximal integer s.t. the new vector still doesn't intersect the line; this is exactly the algorithm for representing $\alpha$ by a continued fraction: $\alpha=[a_0,a_1,\dots]$.
Equivalence relation of a group acting on a set
Hint: Intriguingly, the group axioms (neutral element, inverse element, associativity) are in one-to-one correspondence with the axioms of an equivalence relation (reflexivity, symmetry, transitivity).
Two questions about discrete valuation rings of varieties
$\def\OO{{\mathcal O}}\def\CC{{\mathbb C}}$So it turns out that the situation is actually much more complicated when your variety is of dimension greater than 1. First some background on Zariski-Riemann spaces. Let $K/k$ be a field extension (in our case $k(X)/\CC$). Then the Zariski-Riemann space $Z(K/k)$ is the space of all valuation rings $k \subset \OO_v \subset K$. I will sometimes denote the point in $Z$ by just the valuation $v$ where convenient though this is technically abuse of notation. We topologize this space by defining open sets to be of the form $$ \mathscr{V}(A) := \{\OO_v \in Z : A \subset \OO_v\} $$ where $k \subset A \subset K$ is a subalgebra finitely generated over $k$. Then this space inherits a sheaf of rings by $$ \mathcal{O}(U) := \bigcap_{v \in U} \mathcal{O}_v $$ which makes it a locally ringed space with local ring at each point the valuation ring that that point represents. In the case that $k$ is algebraically closed of characteristic $0$ (possibly unnecessary assumptions?) and the transcendence degree of $K/k$ is $1$, the Zariski-Riemann space is exactly the unique smooth projective curve over $k$ with function field $K$. For higher dimensions, the Zariski-Riemann space is not a scheme, just a locally ringed space. The point of all this is that in higher dimensions, there are many more points of the Zariski-Riemann space than valuations that come from divisors on some birational model of the field, which I will call divisorial valuations. Let's take the example of a surface $S$ over $\CC$ and look at the Zariski-Riemann space of $k(S)/\CC$. Then on top of the trivial valuation and the divisorial valuation, we have a valuation corresponding to the germ of a curve $C$ going through a fixed point $p$ on some birational variety $\tilde{S}$. This is a rank two discrete valuation with value group isomorphic to $\mathbb{Z} \times \mathbb{Z}$ lexicographically ordered that gives the order of vanishing of a function $f$ along the curve, and then at the point $p$ as a function on the curve. Even if we restrict to rank one discrete valuations (i.e. value group $\mathbb{Z}$) there are non-algebraic valuations given by the order of vanishing along some analytic germ like $y = e^x$ in the plane. This gives a rank one discrete valuation that can't come from a divisor on some birational model. In higher dimensions the situation gets even more complicated and in fact as far as I can tell, a complete classification of the points in the Zariski-Riemann space is not well understood. In general you will at least get a rank $k$ discrete valuation with value group the lexicographically ordered $\mathbb{Z}^k$ given by the order of vanishing along the germ of an inclusion of subvarieties $V_1 \supset V_2 \supset \ldots \supset V_k$ where $\operatorname{codim} V_i = i$. However, according to someone I asked, the divisorial discrete valuations are dense in the Zariski-Riemann space. So "most" valuations come from a divisor on some birational model. I have not been able to find a reference for this though as the literature on Zariski-Riemann spaces is sparse.
Show $I_b \subseteq I_a$ when $b \leq a$
In short, the answer is no. You also have to take care of the limit ordinal case to have a complete proof. And by the way, there is probably a typo in your question. I suspect that $$I_b = \bigcup_{b\leq a} I_a$$ in case of a limit ordinal.
$f(x^2*f(y)^2) = f(x^2) * f(y)$
Observe, we do the following manipulation: let $x = 1$: $$f(f(y)^2) = f(x^2 \cdot f(y)^2) = f(x^2) \cdot f(y) = f(1) \cdot f(y) = f(y \cdot f(1)^2) $$ Case: Assume the function is left-invertible (or simply invertible), which tends to be a desirable quality for solutions of functional equations. Then you get: $$f(y)^2 = y \cdot f(1)^2$$ Which, since $f(y) \geq 0$, becomes: $$f(y) = f(1) \sqrt{y}$$ However, in order for $f(y) \in \mathbb{Q}^{+}$, the only valid choice is $f(1) = 0$. That is, $$f(y) = 0$$ Note: The zero function isn't invertible, but I added it here; it effectively falls into the next case, as does your discovery that $f(x) = 1$ is a solution as well. Case: $f$ possesses no left inverse. So $f$ is not injective. This is where everything starts to break apart. You can start going down the rabbit hole of tossing away assumptions. Observe, let $x = 1$ and $y = 0$, then: $$f(1) \cdot f(0) = f(f(0)^2) = f(0 \cdot f(1)^2) = f(0) $$ And letting $x = 0 = y$: $$f(0) = f(0) \cdot f(0) \in \{ 0, 1 \}$$ Case: $f(0) = 1 = f(1)$. Then the equation becomes: $$f(f(y)^2) = f(y)$$ This is simply a recurrence relation over $\mathbb{Q}^{+}$. Given a value $y$, choose the value $f(y) = m \in \mathbb{Q}^{+}$. Then: $$f(m^2) = m$$ So this takes care of all the values $\{ x \in \mathbb{Q}^{+} : \exists y \in \mathbb{Q}^{+} \text{ such that } y^2 = x\}$. For the remainder of the values, we can make a variety of choices. Specifically, we can let those values go to one. That is, we can have the function: $$f(x) = \begin{cases} \sqrt{x} & \sqrt{x} \in \mathbb{Q}^{+} \\ 1 & \text{otherwise} \end{cases}$$ But we can also try many other potential functions. You might not be able to construct a function that is continuous, however. This is because if you have a sequence of numbers from the right that approach $y$ that are rationally square rootable, and a sequence from the left that are rationally square rootable, you can try to choose $f(y)$ such that $$x_n \rightarrow y \leftarrow z_n$$ $$\sqrt{x_n} \rightarrow f(y) \leftarrow \sqrt{z_n}$$ But due to the incompleteness of $\mathbb{Q}$, I don't remember if there is a way of choosing a satisfactory alternative $f(y)$. Case: $f(0) = 0$. I'll let you take this from here. It's very similar to the function we found in the previous case, except $f(0) = 0$ is stated explicitly. Edit: Totally assumed $0 \in \mathbb{Q}^{+}$. If not, replace with an $\epsilon \in \mathbb{Q}^{+}$.
Tangent line and $x$-intercept for exponential function?
We have $$y'(x)=-\frac A{x_0}e^{-\frac x{x_0}} \implies y'(0)=-\frac A{x_0}$$ then the tangent line at $x=0$ is $$y-A=-\frac A{x_0}(x-0) \iff y=-\frac A{x_0}\cdot x+A$$ which gives for $y=0$ $$0=-\frac A{x_0}\cdot x+A \implies x=x_0$$ and more in general the tangent at a point $x=\bar x$ is given by $$y-Ae^{-\frac {\bar x}{x_0}}=-\frac A{x_0}e^{-\frac {\bar x}{x_0}}(x-\bar x) \iff y=-\frac A{x_0}e^{-\frac {\bar x}{x_0}}\cdot x+\frac A{x_0}e^{-\frac {\bar x}{x_0}}\cdot \bar x+Ae^{-\frac {\bar x}{x_0}}$$
Five-coloring plane graphs
> For the first one I guess because in every cycle if we have an odd number of vertices it will take us 3 colors to color the cycle and if we have an even number of vertices it will take us 2 colors.

This is correct. We actually have a theorem dealing with outerplanar graphs: any graph with no interior vertices is at most three-colorable. You can read more about them here: http://en.wikipedia.org/wiki/Outerplanar_graph

> But this is not always true if the vertices of the cycle are connected to other vertices outside the cycle, for example:

This would eventually violate the outerplanar property if you add enough edges.

> Then the coloring of x,y can be extended to a proper coloring of G by choosing colors from the lists. In particular $\chi_\ell(G) \leq 5$.

We can easily construct a planar graph where $\chi(G) = 5$, so we assume $\chi(G) \geq 5$. We then construct conditions limiting us to at most $5$ colors. Are you looking for a more in-depth explanation of the proof, or just the explanation of the assumptions?

Edit: Think about this in terms of colors. I think you're getting confused in terms of cardinalities of sets. So we have a couple of cases. The first is that we have a cycle. We require three colors at most to color any cycle. So if we take a chord and connect two vertices of different colors, we are still doing good. The big part of the proof lies in this next step. It's a contradiction argument. We start by picking a vertex $v$ adjacent to five other vertices, and removing it (and all incident edges). This modified graph we will call $G^{\prime}$. This is an extremality argument: we assume an extreme case of planarity. We know this is possible by induction. Let the neighbors of $v$ be $v_{a}, v_{b}, v_{c}, v_{d}, v_{e}$ (labeled as they are colored). So now here's the tricky part. We pick two colors $\{a, b\}$, and we consider the subgraph of $G^{\prime}$ consisting only of vertices colored $a$ or $b$, keeping only the edges connecting these two colors. If two differently colored vertices are in the same component, there is a color-alternating path between them. Otherwise, we can swap the two colors on the component containing $v_{a}$, so that $v_{a}$ now uses color $b$, and then color $v$ with color $a$. We apply the same argument with colors $\{c, d\}$: if we can swap there, we free up color $c$ for $v$ and we're good. Otherwise, we can connect $v_{c}, v_{d}$ with a path using only $c,d$-colored vertices. That path would end up intersecting the $\{a, b\}$-colored path, though, which planarity forbids, a contradiction. I actually found a really nice (and short) proof of the Five-Color Theorem: http://math.byu.edu/~forcader/FiveColor.pdf And Wikipedia has a good proof.
$x^a - 1$ divides $x^b - 1$ if and only if $a$ divides $b$
Let $b=a \cdot q+r$, where $0 < r < a$. Then $$x^b-1=x^b-x^r+x^r-1=x^r(x^{aq}-1)+x^r-1 \,.$$ Use that $x^a-1$ divides $x^{aq}-1$.
Natural numbers inequality $na^{n-1}b\leq(n-1)a^n+b^n$ by induction
It's just AM-GM: $$(n-1)a^n+b^n\geq n\sqrt[n]{(a^{n})^{n-1}b^n}=na^{n-1}b.$$ Also, by the induction hypothesis and by the rearrangement inequality we obtain: $$na^{n+1}+b^{n+1}=(n-1)a^{n+1}+ab^n+a^{n+1}+b^{n+1}-ab^n\geq$$ $$\geq na^{n-1}ba+a^{n+1}+b^{n+1}-ab^n=$$ $$=(n+1)a^nb+a^{n+1}+b^{n+1}-a^nb-ab^n\geq(n+1)a^nb.$$ We can also get the last inequality in the following way: $$a^{n+1}+b^{n+1}-a^nb-ab^n=(a^n-b^n)(a-b)\geq0.$$
Does gradient descent converge in convex optimization problems? If so, how?
In https://stanford.edu/~boyd/papers/pdf/monotone_primer.pdf, section 5, the subsection "Gradient method" or "Forward step method" shows a proof for functions that are strongly convex with Lipschitz gradient, using constant step sizes. If the function is convex with Lipschitz gradient then https://www.stat.cmu.edu/~ryantibs/convexopt-F13/scribes/lec6.pdf shows a proof of convergence with constant step sizes. For strictly convex functions (without a Lipschitz gradient), gradient descent will not converge with constant step sizes. Try this very simple example: let $f(x) = x^{4}$. You will see that there is no constant step size for gradient descent that will converge to $0$ (for any initial condition). In this case, people use diminishing step sizes. I've never seen results about convergence of the average but it might be what happens when you use a constant step size on functions that are strictly convex and don't have Lipschitz gradient. Unless $x$ lives in an infinite-dimensional space, weak and strong convergence are the same and I wouldn't worry about it. Here is additional reading if you are interested, https://people.maths.bris.ac.uk/~mazag/fa17/lecture9.pdf
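The $f(x)=x^4$ example is easy to play with numerically; here is a small sketch (the step size and starting points are arbitrary illustrations). For any fixed step size, iterates started close enough to $0$ creep toward $0$ very slowly, while iterates started far enough away blow up, so no single constant step size works for every initial condition:

```python
def gradient_descent(x0, step, iters=200):
    x = x0
    for _ in range(iters):
        x = x - step * 4 * x**3   # gradient of x^4 is 4x^3
        if abs(x) > 1e12:         # iterates have clearly diverged
            return float('inf')
    return x

print(gradient_descent(x0=0.5, step=0.1))   # small start: slowly approaching 0
print(gradient_descent(x0=10.0, step=0.1))  # large start: inf (diverges)
```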
Find a sequence of whole numbers $n _ { 1 } , n _ { 2 } , \ldots$ such that $n _ { i - 1 } | n _ { i }$ for all $i \geq 2$
What about $n_i = i!$? Each term divides the next, since $(i+1)! = (i+1)\cdot i!$.