Number of ways to partition a set into subsets of given cardinality
We could order the numbers and say that the first $9$ are meant to be the elements of $B$, the following $8$ the elements of $C$, et cetera. There are $30!$ possible orders. Now look at some fixed result and realize that $9!\,8!\,7!\,6!$ of the $30!$ orders lead to that result. This multiple counting can be repaired by dividing.
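As a quick numerical cross-check (a Python sketch; the block sizes $9,8,7,6$ are as above), the divide-out-the-orderings count agrees with choosing the blocks one at a time:

```python
from math import comb, factorial

# Count ordered arrangements, then divide out the orderings
# within each block of sizes 9, 8, 7, 6 (which sum to 30).
sizes = [9, 8, 7, 6]
multinomial = factorial(30)
for s in sizes:
    multinomial //= factorial(s)

# Equivalent count: choose each block in turn.
sequential = comb(30, 9) * comb(21, 8) * comb(13, 7) * comb(6, 6)

print(multinomial == sequential)  # True
```

Both computations give the same integer, confirming that the division exactly cancels the multiple counting.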
Lagrange multipliers and compact sets
Closed sets include their own boundary. Informally, the boundary is the stuff that lies in every closed set containing your set but in no open set inside your set; precisely, it is the intersection of the closure of the set with the closure of its complement. One definition of closed sets that I find intuitive is that a set is closed iff it contains its own limit points. When you apply Lagrange multipliers, you find all the critical points, the points where, in a certain sense, the derivative is $0$. These are not necessarily maxima or minima (e.g. $0$ for $x^3$). A maximum/minimum is either nonexistent, on the boundary, or not on the boundary, and every non-boundary extremal point is a critical point. So you can consider the critical points, and then separately restrict to the boundary and optimize there. The nice thing about continuous functions on compact spaces is that the maxima/minima always exist. If the domain is not compact, there may be no maxima/minima. For example, the function $f(x)=x$ on the open unit interval $(0,1)$ has no maximum or minimum, since for any point inside the interval there is always a point with a smaller value (closer to $0$) and one with a larger value (closer to $1$). On the closed/compact set $[0,1]$, however, the maximum and minimum are attained, at $1$ and $0$.
Do these spaces have a name?
This could be considered as a weighted Lebesgue space (which are well studied). The space of sequences $a_n$ such that $n^2a_n\in l^\infty$ is a subset of $l^2$.
Book for studying Linear Algebra
I can heartily recommend Linear Algebra, as I am the author. The coverage is what you asked for, with an emphasis on improving students' mathematical maturity, including lots of examples using computations. It is totally free, and you can also get a physical copy from Amazon if you prefer that. There are completely worked answers for all exercises on the web page (click on the question for the answer and click on the answer for the question). In addition, the beamer slides do different examples from the book, so that's twice as many examples right there.
Basic number theory proofs
Since you have shown that $n!+i$ is composite for every $2\leq i\leq n$, it's time to combine this with the infinitude of the primes, giving the following. Claim: Let $n>2$ be a natural number, let $p$ be the largest prime with $p\leq n!+1$ and let $q$ be the least prime with $q>n!+1$. Then there is no prime number between $p$ and $q$, and $q-p\geq n$. Proof. The first part of the claim holds by the choice of $p$ and $q$: a prime strictly between them would either be a prime $\leq n!+1$ larger than $p$, contradicting how we chose $p$, or a prime $>n!+1$ smaller than $q$, contradicting how we chose $q$. For the second part, $n!+i$ is composite for every $2\leq i\leq n$, so $q\geq n!+n+1$, while $p\leq n!+1$, and therefore $q-p\geq n$.
Show that: $(E^{\circ})^{\circ}=E^{\circ}$ i.e. the interior of the interior of a set equals the interior of the set.
Your definition of $E^\circ$ will affect your proof that $E^\circ$ is open. One may define $$E^\circ=\bigcup\{O:O\subset E\text{ and } O \text{ is open}\}$$ That is, $E^\circ$ is the largest open set contained in $E$; it is open because the union of open sets is open. If instead you define $$E^\circ=\{x:x \text{ is an interior point of } E\}$$ then observe that if $x\in E^\circ$ then there is an open set $O$ such that $x\in O\subseteq E$. But if $y\in O$ then $O$ itself is an open set such that $y\in O\subseteq E$, i.e. $y\in E^\circ$. Thus for any $x\in E^\circ$ there is an open set $O$ such that $x\in O\subseteq E^\circ$, and $E^\circ$ is open.
${y \in \mathbb{N}}$ but, ${y}$ comes out to be zero, Why?
I don't really understand what $\mathbb{N}, \mathbb{A}, \mathbb{K}$ are, since usually they denote sets rather than values. Also, at the step where you multiply by $(-1)$ you have a mistake: it should be $\frac{y-x}{-x-y}$ rather than $\frac{x+y}{x-y}$.
Divergence equation on noncompact manifold.
If you assume that $M$ is connected (or at least that $M$ has no compact components), it's true. Let's assume first that $M$ is orientable. Let $\mu_g$ denote its Riemannian volume form. Because the top-degree de Rham cohomology on a connected, noncompact smooth manifold is trivial (see, e.g., my Introduction to Smooth Manifolds, 2nd ed., Theorem 17.32), there is a smooth $(n-1)$-form $\eta$ such that $d\eta=\mu_g$. Set $X = (-1)^{n-1}(\mathop{*}\eta)^\sharp$ (where $\ast$ is the Hodge star operator and $\sharp$ is the index-raising operator determined by the metric). It follows that $$ \operatorname{div} X = \mathop{*} d \ast X^\flat = \mathop{*} d \big((-1)^{n-1}\ast\mathop{*} \eta\big) = \mathop{*} d\eta = \mathop{*} \mu_g = 1. $$ Now if $M$ is nonorientable, let $\widehat \pi \colon \widehat M\to M$ be the oriented double cover of $M$, and let $\widehat g=\widehat\pi^*g$. Then by the previous argument, there is a vector field $\widehat X$ on $\widehat M$ satisfying $\operatorname{div} \widehat X = 1$. Let $\alpha\colon \widehat M\to \widehat M$ denote the nontrivial covering automorphism of $\widehat M$, and let $\widehat X_1 = \tfrac12 (\widehat X + \alpha_* \widehat X)$. Because $\alpha$ is an isometry of $\widehat g$, it follows that $\operatorname{div}\widehat X_1=1$. Moreover, since $\widehat X_1$ is invariant under $\alpha$, it can be "pushed down" to a well-defined vector field $X$ on $M$, which satisfies $\operatorname{div} X=1$ because $\widehat \pi$ is a local isometry from $(\widehat M,\widehat g)$ to $(M,g)$. If $M$ had a compact component, a vector field $X$ satisfying $\operatorname{div} X = 1$ would contradict the divergence theorem, which requires $\int_M (\operatorname{div} X)\, \mu_g = 0$.
System of equations solution step by step
One way to proceed is to solve the last equation for $z$ and substitute into the first to obtain, after rearrangement, $$\frac12(2a-c+1)x+\frac12(2b+c-1)y=0.$$ This equation must hold for all $x$ and $y$, which can only occur if both coefficients vanish. Together with the second equation, that gives you a system of three linear equations in the unknowns $a$, $b$ and $c$ to solve.
What can be computed by axiomatic summation?
I will assume that linearity is $$\sum(\alpha a_n+\beta b_n)=\alpha\sum a_n+\beta\sum b_n.$$ If linearity is only assumed for rational scalars, take the constant coefficients below to be rational. Denote by $S(a_1,a_2,...):=(0,a_1,a_2,...)$ the unilateral shift operator, and by $A$ the summation functional, which takes a series and returns its sum. We assume that $A$ is stable, linear, and regular. Stability gives $A\circ S=A$; in fact, stability is equivalent (assuming also linearity and regularity) to $$A\circ S=A.$$ Indeed, if $A\circ S=A$, then since $A(a_1,0,0,...)=a_1$ by regularity, $$\begin{align}A(a_1,a_2,...)&=A(a_1,0,0,...)+A(0,a_2,a_3,...)\\&=a_1+A(0,a_2,...)\\&=a_1+A(a_2,a_3,...),\end{align}$$ which is the axiom of stability. If $A(\vec{a})$, with $\vec{a}:=(a^{(1)},a^{(2)},...)$ a vector of sequences and $a^{(i)}:=(a_{i1},a_{i2},...)$, can be determined by the axioms, it means that there is a finite number of applications of the axioms to the sequence and to some convergent series that gives you an equation from which the sum is determined. Denote by $L_C:=C_kS^k+C_{k-1}S^{k-1}+...+C_1S+C_0I$ a linear finite-difference operator with constant coefficients, where the $C_i$ are vectors and $S$ is applied to vectors of sequences componentwise. Then we can translate the above statement on being determined by the axioms into: there exist finitely many vectors of convergent series $\sum f_n^{(1)},\sum f_n^{(2)},...,\sum f_n^{(r)}$, and linear operators $L_{C^{(1)}}, L_{C^{(2)}},...,L_{C^{(r)}}$, and $L_C$, such that $$L_C(\vec{a})=L_{C^{(1)}}(f^{(1)})+L_{C^{(2)}}(f^{(2)})+...+L_{C^{(r)}}(f^{(r)}),$$ and $d:=\det[C_k|C_{k-1}|...|C_0]\neq0$. This is what allows us to apply $A$ to both sides and obtain $$dA(\vec{a})=A(L_{C^{(1)}}(f^{(1)})+L_{C^{(2)}}(f^{(2)})+...+L_{C^{(r)}}(f^{(r)})),$$ from which the sum is computed. Clearly the sequence on the right-hand side is the term sequence of a convergent series. Therefore we get that $$L_C(\vec{a})=f$$ for some $f:=(f_1,f_2,...)$, with $\sum f_n$ convergent (in the usual sense).
This answers your first question (as I understand it) about which series' values are determined by the axioms: Answer: The solutions of linear recurrence equations with constant coefficients adding up to a non-zero number, and with independent term being the term of a convergent series (in the usual sense of convergence). Your second question, about different summations satisfying the axioms and defining sums in incompatible ways, seems to be answered in the positive on Wikipedia (as dani_s pointed out). Maybe later I (or someone) can give an explicit example of that. There remains, though, the following Question: Consider the series that have sums determined by the axioms (those solutions of recurrences). Are the values determined by the axioms consistent? In other words, is it possible to have a series satisfying two of these recurrence equations that then determine two different values for its generalized sum?
Taylor Series of $ \cfrac{x}{1+x} $ at $ x = -2 $
Hint. Note that $$\frac{x}{1+x}=\frac{x}{-1+x+2}=\frac{-(x+2)+2}{1-(x+2)}=1+\frac{1}{1-(x+2)}.$$ Now use the result: for $|z|<1$, $\displaystyle \cfrac{1}{1-z} = \displaystyle\sum_{n=0}^{\infty}{ z^n}$.
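A quick numerical check of the hinted expansion (a Python sketch): for $|x+2|<1$, the partial sums of $1+\sum_{n\ge0}(x+2)^n$ should approach $\frac{x}{1+x}$.

```python
def partial_sum(x, terms=60):
    # 1 + sum_{n=0}^{terms-1} (x+2)^n, the hinted expansion around x = -2
    return 1 + sum((x + 2) ** n for n in range(terms))

# Sample points inside the disk of convergence |x + 2| < 1
for x in (-2.0, -1.7, -2.4):
    exact = x / (1 + x)
    assert abs(partial_sum(x) - exact) < 1e-6
print("series matches f(x) = x/(1+x) near x = -2")
```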
Poisson distribution as limit of a binomial distribution
Because the application of the Poisson distribution is only an approximation. The quality of a tablet is binomially distributed as $X\sim \textrm{Bin}(500,0.01)$. Then $$P(X=0)=\binom{500}{0}\cdot 0.01^0\cdot 0.99^{500}=0.00657...=0.657\%$$ But your approximation is not so bad.
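As a quick check of the numbers (a Python sketch; $n=500$ and $p=0.01$ are the values used in the displayed computation, so $\lambda=np=5$):

```python
import math

n, p = 500, 0.01          # values from the displayed computation
lam = n * p               # Poisson parameter lambda = n*p = 5

binom_p0 = (1 - p) ** n           # exact P(X = 0) for Bin(n, p)
poisson_p0 = math.exp(-lam)       # Poisson approximation e^(-lambda)

print(f"binomial: {binom_p0:.5f}, Poisson: {poisson_p0:.5f}")
# The two values agree to about three decimal places.
```

The exact binomial value is about $0.00657$ while the Poisson value is about $0.00674$, which is why the approximation "is not so bad."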
Solve this logarithmic equation: $2^{2-\ln x}+2^{2+\ln x}=8$
We have $2^{2-\ln(x)} = \dfrac4{2^{\ln(x)}}$ and $2^{2+\ln(x)} = 4\cdot2^{\ln(x)}$. Hence, setting $2^{\ln(x)} = a$, we obtain $$\dfrac4a + 4a = 8 \implies a^2 + 1 =2a \implies a =1 \implies 2^{\ln(x)} = 1 \implies x = 1$$
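A small numerical sanity check (a Python sketch; the AM-GM remark in the comment is an extra observation, not part of the answer above):

```python
import math

def f(x):
    # Left-hand side of the equation, 2^(2 - ln x) + 2^(2 + ln x)
    return 2 ** (2 - math.log(x)) + 2 ** (2 + math.log(x))

assert abs(f(1.0) - 8) < 1e-12
# By AM-GM, 4/a + 4a >= 8 with equality only at a = 1, so x = 1 is the only root;
# values away from 1 overshoot:
assert f(2.0) > 8 and f(0.5) > 8
print("x = 1 solves the equation")
```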
Determine if the set $A=\{(x,y) \in \overline{B}\mid x \geqslant0 \}$ is open or closed.
Yes, this shows that $A$ is not open in $\bar B$. To show that it is closed, note that $\bar B\setminus A=\{(x,y)\in\Bbb R^2:x<0\}\cap\bar B$ which is open in $\bar B$ since the former set is open in $\Bbb R^2$.
Transform an AR(2) to MA($\infty$)
Your $\delta + \phi_2\delta + \phi_2^2\delta + \cdots$ is just a geometric series so let's call this $\mu$ and we have $\mu= \frac\delta{1-\phi_2}$, at least when $|\phi_2|<1$. Meanwhile your $\cdots +\phi_2^3z_{t-6} + \phi_2^2\epsilon_{t-4} + \phi_2\epsilon_{t-2} + \epsilon_{t}$ can be written as $\sum\limits_{i=0}^\infty \psi_i \epsilon_{t-i}$ where $\psi_{2n}=\phi_2^n$ and $\psi_{2n+1}=0$, noting $\psi_{0}=1$. You will have $\sum\limits_{i=0}^\infty |\psi_i| = \frac1{1-|\phi_2|} < \infty$ when $|\phi_2|<1$. So, providing that $|\phi_2|<1$, you can write $$z_t=\mu + \sum\limits_{i=0}^\infty \psi_i \epsilon_{t-i} = \frac\delta{1-\phi_2} + \sum\limits_{n=0}^\infty \phi_2^n \epsilon_{t-2n} $$ which is the form of an $\mathrm{MA}(\infty)$ process.
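Here is a small numerical sketch (Python; $\delta=1$ and $\phi_2=0.5$ are arbitrary choices, and I am assuming the recursion $z_t=\delta+\phi_2 z_{t-2}+\epsilon_t$ implicit in the expansions above):

```python
import random

random.seed(0)
delta, phi2 = 1.0, 0.5
N = 4000
eps = [random.gauss(0, 1) for _ in range(N)]

# Build z_t recursively: z_t = delta + phi2 * z_{t-2} + eps_t
z = [0.0, 0.0]
for t in range(2, N):
    z.append(delta + phi2 * z[t - 2] + eps[t])

# MA(inf) form: z_t = delta/(1 - phi2) + sum_n phi2^n * eps_{t-2n}
mu = delta / (1 - phi2)
t = N - 1
ma = mu + sum(phi2 ** n * eps[t - 2 * n] for n in range(50))

print(abs(z[t] - ma))  # tiny: the truncation error decays like phi2^n
```

After a long burn-in, the recursively built $z_t$ and the truncated MA representation agree to machine precision, since the omitted terms are $O(\phi_2^{50})$.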
Condition for positive function such that $f(t)\leq Ct^{2}$?
Not true; consider $f(t):=t^2\log t$. This $f$ is monotone increasing for $t\ge 1$, yet $$ \int_1^\infty \frac{tdt}{f(t)}=\int_1^\infty\frac{dt}{t\log t}=\int_0^\infty\frac{du}{u} $$ using the substitution $u:=\log t$. EDIT: To avoid the division by zero at $t=1$, an alternative is $f(t):=1+t^2\log t$, for which we have $$ \int_1^\infty \frac{tdt}{f(t)}=\int_1^\infty\frac{tdt}{1+t^2\log t}=\int_0^\infty\frac{e^{2u}du}{1+e^{2u}u} \ge\int_1^\infty\frac{e^{2u}du}{1+e^{2u}u}\ge\int_1^\infty\frac{du}{1+u}. $$
Do Contour Integrals and Infinite Series Commute? Fubini's Theorem
To elaborate on my comment, the Fubini-Tonelli theorem states the conditions for interchanging the limit operations, in particular for integrals. Summation is integration on a discrete space with counting measure, so these are exactly the conditions for applying the theorem. As always, there are ways to break this if you do not have absolute convergence. Beyond the general case, Robert Israel correctly notes that the usual situation in complex analysis is that the series of functions converges absolutely and uniformly on compact subsets of some domain; in particular, if the contour is a rectifiable curve in $\Bbb C$, the conditions are satisfied.
Help to solve the equation involving complicate fractions
Note that $$\frac{2}{x}-5=5x-\frac{5}{7} \implies 7x\left(\frac{2}{x}-5\right) = 7x\left(5x-\frac{5}{7}\right) \implies35x^2+30x-14=0.$$
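As a quick numeric check (a Python sketch), both roots of the resulting quadratic do satisfy the original equation:

```python
import math

# Roots of 35x^2 + 30x - 14 = 0 via the quadratic formula
a, b, c = 35, 30, -14
disc = math.sqrt(b * b - 4 * a * c)
roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]

for x in roots:
    lhs = 2 / x - 5
    rhs = 5 * x - 5 / 7
    assert abs(lhs - rhs) < 1e-9
print("both roots satisfy the original equation")
```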
continuity of monotonically increasing function
Let $f(x+)=\lim_{y\rightarrow x^{+}}f(y)$ and $f(x-)=\lim_{y\rightarrow x^{-}}f(y)$. Note that $f(x+)\geq f(x-)$ for all $x$, as $f$ is increasing. Let $D_{n}=\{x:f(x+)-f(x-)\geq\frac{1}{n}\}$. Since $f:[0,1]\rightarrow[0,1]$, the jumps in $D_n$ sum to at most $1$, so $|D_n|\leq n$. The set of all discontinuities is $D=\cup_{n=1}^{\infty}D_n$, a countable union of finite sets, hence countable. Since the Lebesgue measure of a countable set is $0$, $D$ is Lebesgue measurable with measure $0$.
Solving $\tan 2x=1+2\sin 4x$
With $t=\tan (2x)$, the double-angle identity $\sin 4x=\frac{2t}{1+t^2}$ gives $$\tan 2x=1+2\sin 4x \iff t=1+\frac{4t}{1+t^2} \iff t^3-t^2-3t-1=0$$ $$ \iff (t+1)(t^2-2t-1)=0$$
Integration of Vectors and improper integrals
You did very well for part a. Note that this is not the only parametrization you could have chosen. You made a small error for part b. From part a, you indicated a set of positions that produce the circle. For part b, you essentially want to rotate these points an angle $\alpha$ about the $z$-axis. You need to have the parametrization of $K$ with position vector $\vec{p}$ as follows: $ \vec{p}(t)= \begin{pmatrix} x(t) \\ y(t) \\ z(t) \end{pmatrix} = \begin{pmatrix} [r+r\cos(t)]\cos(\alpha) \\ [r+r\cos(t)]\sin(\alpha) \\ r+r\sin(t) \end{pmatrix} = \begin{pmatrix} r\cos(\alpha)+r\cos(t)\cos(\alpha) \\ r\sin(\alpha)+r\cos(t)\sin(\alpha) \\ r+r\sin(t) \end{pmatrix} $ This comes directly from the multiplication of the original parametrization of the position vector with a CCW rotation matrix about the $z$-axis. For part c, I think there might be an error because $x$ isn't a vector in the context of the problem. I think you meant $d\vec{x}$ as a differential path element which is $d\vec{p}$. Similarly, it should be $\vec{F}(\vec{p})$. You need to establish the function in terms of your parameter $t$ by substituting in for $x$, $y$, and $z$ from your parametrization of $K$. Then you need to account for $d\vec{p}$ by recognizing that $d\vec{p} = \frac{d\vec{p}}{dt}dt$. The function vector w.r.t. 
the parametrization is as follows: $\vec{F}(\vec{p}) = \begin{pmatrix} x - r\cos(\alpha) \\ y - r\sin(\alpha) \\ z - r \end{pmatrix} = \begin{pmatrix} r\cos(\alpha)+r\cos(t)\cos(\alpha) - r\cos(\alpha) \\ r\sin(\alpha)+r\cos(t)\sin(\alpha) - r\sin(\alpha) \\ r+r\sin(t) - r \end{pmatrix} = \begin{pmatrix} r\cos(t)\cos(\alpha) \\ r\cos(t)\sin(\alpha) \\ r\sin(t) \end{pmatrix} $ The differential path vector is as follows: $d\vec{p} = \frac{d\vec{p}}{dt}dt = \begin{pmatrix} -r\sin(t)\cos(\alpha) \\ -r\sin(t)\sin(\alpha) \\ r\cos(t) \end{pmatrix}dt$ Therefore, the following holds: $\vec{F}(\vec{p}) \cdot \frac{d\vec{p}}{dt} = \begin{pmatrix} r\cos(t)\cos(\alpha) \\ r\cos(t)\sin(\alpha) \\ r\sin(t) \end{pmatrix} \cdot \begin{pmatrix} -r\sin(t)\cos(\alpha) \\ -r\sin(t)\sin(\alpha) \\ r\cos(t) \end{pmatrix} \\ = -r^2\sin(t)\cos(t)\cos^2(\alpha)-r^2\sin(t)\cos(t)\sin^2(\alpha) + r^2\sin(t)\cos(t)\\ = -r^2\sin(t)\cos(t) + r^2\sin(t)\cos(t) = 0 $ The function vector is orthogonal to the path for all $t$, so the path integral is simply $0$.
Permutations and combinations and divisibility problem
The last two digits need to be divisible by 4. This means that given the conditions, the last two digits can be $\{12, 24, 32, 44, 52\}$ Since the repetition of digits is allowed, the remaining 3 places can be filled in $5^3$ ways for each of the 5 cases. The total number of ways is thus $5 \times 5^3 = 625$
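A brute-force cross-check (a Python sketch; I am assuming the digits allowed are $1$ through $5$ with repetition, as the count of $5$ choices per place suggests):

```python
from itertools import product

# Hypothetical reading of the problem: 5-digit strings over the digits
# 1..5 (repetition allowed), counting those divisible by 4.
count = sum(1 for digits in product("12345", repeat=5)
            if int("".join(digits)) % 4 == 0)
print(count)  # 625
```

Enumerating all $5^5$ candidates confirms the count of $625$.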
Second Stiefel-Whitney Class of a Five Manifold
I'm assuming that your end goal is the computation of $w_2(TS(E))$, rather than that of the Wu class. If so, here's how I'd proceed. First, I'll abuse notation and let $p:S(E)\rightarrow \mathbb{T}^2$ denote the projection map. First note that $\pi_1(\mathbb{T}^2)\cong\mathbb{Z}^2$ acts trivially on $H^\ast(S^3)$. This follows because the bundle is orientable and the only non-zero cohomology group of $S^3$ is in degree $3$, and is generated by a choice of orientation. Thus, in particular, the Serre spectral sequence machinery works easily (no extension problems!), and we get $H^\ast(S(E)) \cong H^\ast(S^3\times \mathbb{T}^2)$. We also see from the edge homomorphism that $p^\ast:H^2(\mathbb{T}^2)\rightarrow H^2(S(E))$ is an isomorphism. This implies that $H^\ast(S(E);\mathbb{Z}_2)\cong H^\ast(S^3\times \mathbb{T}^2; \mathbb{Z}_2)$ and that the induced map on $H^2$ with $\mathbb{Z}_2$ coefficients is also an isomorphism. Now, we use a nice trick. I claim that $TS(E)\oplus 1$ is isomorphic to $p^\ast(T \mathbb{T}^2)\oplus p^\ast(E)$. Believing this, it follows that $w(TS(E)\oplus 1) = p^\ast(w(T\mathbb{T}^2))\cup p^\ast (w(E))$. But $\mathbb{T}^2$ is parallelizable, so this reduces to $w(TS(E)) = p^\ast(w(E))$. In particular, since $w_1(E) = w_k(E) = 0$ for $k>2$ while $w_2(E)$ is a generator of $H^2(\mathbb{T}^2;\mathbb{Z}_2)$, we conclude that $w(TS(E))$ is only nonzero in degree $2$ (and degree $0$), where it is the unique nonzero element. So, why does the nice trick work? Well, it works for any oriented sphere bundle $S$ which is the sphere bundle of a vector bundle $S = S(E)$. The idea is to pick Riemannian metrics on the total space $E$ and on the base for which the projection is a Riemannian submersion. This allows you to canonically decompose $T_{(b,v)}E$ into three pieces: vectors tangent to $B$, those tangent to the sphere in $p^{-1}(b)$, and those vectors in $p^{-1}(b)$ pointing radially inward/outward. (The inward/outward direction is where the $\oplus 1$ comes from.)
Prove $\sum_{k=0}^n (k + 1)\binom{n}{k}= 2^{n - 1}(n + 2)$
$\sum\limits_{k=0}^n (k+1) \binom n k = \sum\limits_{k=0}^n \binom n k + \sum\limits_{k=0}^n k\binom n k = 2^n + n\sum\limits_{k=0}^{n-1} \binom {n-1} k $. The last "$=$" holds because $k \binom n k = \frac{k\cdot n!}{k!(n-k)!} = n\frac{(n-1)!}{(n-k)!(k-1)!} = n\binom{n-1}{k-1}$. Because $\sum\limits_{k=0}^{n-1} \binom {n-1} k = 2^{n-1}$, we get $\sum\limits_{k=0}^n (k+1) \binom n k = 2^n + n\,2^{n-1} = 2^{n-1}(n+2)$. Done.
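A quick brute-force verification of the identity for small $n$ (a Python sketch):

```python
from math import comb

# Check sum_{k=0}^{n} (k+1) * C(n,k) == 2^(n-1) * (n+2) for n = 1..19
for n in range(1, 20):
    lhs = sum((k + 1) * comb(n, k) for k in range(n + 1))
    assert lhs == 2 ** (n - 1) * (n + 2)
print("identity verified for n = 1..19")
```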
What's the theoretical basis for integration using partial fractions?
The reason we can break up a function of the form $\frac{P(x)}{Q(x)}$, where the degree of the polynomial $P(x)$ is less than that of $Q(x)$, into a sum of functions of the form $\frac {a} {x+b}$ is that such a sum, when recombined over the common denominator, again has a numerator of degree less than that of the denominator. To see this, let's consider the situation where $$f(x)=\frac{mx+n}{(x+r)(x+s)}$$ We can split this function up into $$f(x)=\frac{a_1}{x+r}+\frac{a_2}{x+s}=\frac{a_1(x+s)+a_2(x+r)}{(x+r)(x+s)}$$ which we arrive at by cross multiplying, just like a numerical fraction. We see then that as long as $a_1+a_2=m$ and $a_1s+a_2r=n$, this fraction decomposition works. In the event that $Q(x)$ doesn't factor neatly into linear factors, we place over each irreducible factor a numerator polynomial whose degree is one less than that of its denominator polynomial, so that the degree of the recombined numerator is guaranteed to stay less than that of $Q(x)$. Let's consider the situation where $$f(x)=\frac{mx^2+nx+p}{(b_1x^2+b_2x+b_3)(c_1x+c_2)}=$$ $$\frac{a_1x+a_2}{b_1x^2+b_2x+b_3}+\frac{a_3}{c_1x+c_2}=\frac{(a_1x+a_2)(c_1x+c_2)+a_3(b_1x^2+b_2x+b_3)}{(b_1x^2+b_2x+b_3)(c_1x+c_2)}$$ We see again that as long as the degree of $P(x)$ is less than that of $Q(x)$, this approach is promising. But if $Q(x)$ has 'redundant terms' in it (i.e. a factor $(g(x))^k$ for some polynomial $g(x)$ and some natural number $k>1$), then our partial fraction expansion will need 'redundant terms' as well. Can you see why?
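Here is a small numerical illustration (a Python sketch; the fraction $(3x+5)/((x+1)(x+2))$ is a made-up example, not from the question):

```python
# A hypothetical concrete instance of the decomposition described above:
#   (3x + 5) / ((x + 1)(x + 2)) = a1/(x + 1) + a2/(x + 2)
# The "cover-up" evaluations give a1 and a2 directly.
def numerator(x):
    return 3 * x + 5

a1 = numerator(-1) / (-1 + 2)   # evaluate at x = -1, covering (x + 1)
a2 = numerator(-2) / (-2 + 1)   # evaluate at x = -2, covering (x + 2)

# Spot-check the identity at a few points away from the poles
for x in (0.0, 1.5, -0.5, 3.0):
    original = numerator(x) / ((x + 1) * (x + 2))
    decomposed = a1 / (x + 1) + a2 / (x + 2)
    assert abs(original - decomposed) < 1e-12

print(a1, a2)  # 2.0 1.0
```

Note that $a_1+a_2=3=m$ and $a_1\cdot 2+a_2\cdot 1=5=n$, exactly the two conditions derived above.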
Show if $\lim_{x \rightarrow 0} \frac{\sin(\sin(x))}{x}$ exists, with product/quotient/substitution rules
Since the OP is not allowed to use L'Hôpital's rule, the following known limit can be used: $$ \lim_{x\to 0} \frac{ \sin x}{ x}=1 $$ If we multiply numerator and denominator by $\sin x$ we obtain: $$ \lim_{x\to 0} \frac{\sin (\sin x)}{x}=\lim_{x\to 0}\frac{\sin x \sin (\sin x)}{x \sin x} $$ Now compute the two limits: $$ \lim_{x\to 0} \frac{ \sin (\sin x)}{ \sin x}=\lim_{y\to 0} \frac{ \sin y}{ y}=1 $$ where we have called $y=\sin x$, and: $$ \lim_{x\to 0} \frac{ \sin x}{ x}=1 $$ and apply the product rule, which is now allowed because both limits exist.
Maclaurin series expansion
$$ \cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \frac{x^8}{8!} - \ldots $$ Thus, $$ 6\cos(5x^2) = 6\biggl[1 - \frac{(5x^2)^2}{2} + \frac{(5x^2)^4}{24} - \frac{(5x^2)^6}{720} + \frac{(5x^2)^8}{40320} - \ldots \biggr] $$
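A quick numerical check that the truncated series matches $6\cos(5x^2)$ near $0$ (a Python sketch):

```python
import math

def maclaurin_6cos5x2(x, terms=10):
    # 6 * sum_{n>=0} (-1)^n * (5x^2)^(2n) / (2n)!
    return 6 * sum((-1) ** n * (5 * x * x) ** (2 * n) / math.factorial(2 * n)
                   for n in range(terms))

for x in (0.0, 0.2, 0.4):
    assert abs(maclaurin_6cos5x2(x) - 6 * math.cos(5 * x * x)) < 1e-9
print("truncated series matches 6*cos(5x^2) for small x")
```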
Find a basis for the given subspace in $\Bbb R^4$
Look at your vector: it involves three different variables, suggesting that you will probably need $3$ vectors to span this subspace. Notice that it is in the form of a general equation. How can you write the variables $a, b, c$ in terms of some basis? Which basis is the easiest to pick? Note: the vectors of coefficients of $a$, $b$, $c$ in the entries are the first choice $$ \begin{bmatrix} a + c \\ a - b \\ b + c \\ -a + b \\ \end{bmatrix} = a\begin{pmatrix}1\\1\\0\\-1\end{pmatrix}+b\begin{pmatrix}0\\-1\\1\\1\end{pmatrix} + c\begin{pmatrix}1\\0\\1\\0\end{pmatrix}$$ Verify this solution by expressing the vector as a system of equations.
All real numbers in $[0,2]$ can be represented as $\sqrt{2 \pm \sqrt{2 \pm \sqrt{2 \pm \dots}}}$
Here is a possible explanation. Let $\alpha \in [0, \pi/2]$ and define $\epsilon_1, \epsilon_2, \cdots$ by $ \epsilon_i = \operatorname{sgn}( \cos ( 2^i \alpha )) \in \{-1, 1\}$. Here, we take the convention that $\operatorname{sgn}(0) =1 $. Then applying the identity $2\cos\theta = \operatorname{sgn}(\cos\theta) \sqrt{2 + 2\cos(2\theta)}$ repeatedly, we have $$ 2\cos \alpha = \sqrt{2 + \epsilon_1 \sqrt{2 + \epsilon_2 \sqrt{ \cdots + \epsilon_n \sqrt{2 + \smash[b]{2\cos(2^{n+1} \alpha)} }}}}. $$ This can be used to show that, with an appropriate definition of infinite nested radical, the following identity $$ 2\cos \alpha = \sqrt{2 + \epsilon_1 \sqrt{2 + \epsilon_2 \sqrt{ 2 + \cdots }}} $$ is true. This shows that any real number between $[0, 2]$ can be written as an infinite nested radical of the desired form. Moreover, if we denote $x = 2\cos\alpha$, then $\epsilon_1 = \operatorname{sgn}(2\cos (2\alpha)) = \operatorname{sgn}(x^2 - 2)$, $\epsilon_2 = \operatorname{sgn}(2\cos (4\alpha)) = \operatorname{sgn}((x^2 - 2)^2 - 2)$, and likewise. This explains why signs are determined by OP's algorithm.
Proof Critique - Atiyah-MacDonald Question 1.2
The proof is good, but perhaps the confusion between $f$ and its image modulo $\mathfrak{p}$ needs to be sorted out. If $\mathfrak{p}$ is a prime ideal, denote by $f_{\mathfrak{p}}$ the reduction of $f$ modulo $\mathfrak{p}$. Assuming $f$ is a unit, we have that $f_{\mathfrak{p}}$ is a unit in a polynomial ring over a domain, so it has degree $0$. Hence $a_1,\dots,a_n$ are zero modulo $\mathfrak{p}$, so they belong to it. Since $\mathfrak{p}$ is arbitrary, $a_1,\dots,a_n$ are nilpotent. Assuming $f$ is nilpotent, so is its reduction $f_{\mathfrak{p}}$, for every prime ideal $\mathfrak{p}$, which implies $f_{\mathfrak{p}}$ is zero. Therefore, as before, all coefficients of $f$ are nilpotent. The rest is well explained.
Is there a divergent series with "largest" terms?
Here is a non-rigorous strategy, which I think can be made rigorous quite easily: $$ \log(x) - \log (x-h) \approx \frac hx .$$ Hence $$ \frac{a_n}{r_n} \approx \log(r_{n}) - \log(r_{n+1}) .$$ Hence, using a telescoping sum, $$ \sum_{m=1}^n \frac{a_m}{r_m} \approx -\log(r_{n+1}) ,$$ which diverges since $r_{n+1} \to 0$. Similarly, for every $k>1$, $$ \sum_{m=1}^n \frac{a_m}{r_m^{1/k}} \approx \frac{k}{k-1}(r_1^{1-\frac1k}-r_{n+1}^{1-\frac1k})$$ which converges.
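The telescoping heuristic can be illustrated numerically (a Python sketch, using the concrete convergent series $a_n = 2^{-n}$, whose tails are $r_n = 2^{1-n}$):

```python
# Illustration with the convergent series a_n = 2^(-n), n >= 1,
# whose tails are r_n = sum_{m >= n} a_m = 2^(1-n).
N = 200
a = [2.0 ** -n for n in range(1, N + 1)]
r = [2.0 ** (1 - n) for n in range(1, N + 1)]

# sum a_n / r_n diverges: each term equals 1/2, so the partial sum grows like N/2.
harmonic_like = sum(an / rn for an, rn in zip(a, r))

# sum a_n / sqrt(r_n) converges (the k = 2 case above): partial sums stay bounded.
damped = sum(an / rn ** 0.5 for an, rn in zip(a, r))

print(harmonic_like, damped)
```

The first sum grows without bound as $N$ increases, while the second stabilizes below $2$, matching the $\log$ versus $r^{1-1/k}$ telescoping estimates.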
$\mathbb F$-Martingale indexed on $\mathbb Q$ and left/right limit on $\mathbb R$ along $\mathbb Q$
Answer to question 1: To prove the existence of the limits the so-called upcrossing lemma is a very useful tool. Definition Let $f: [0,\infty) \to \mathbb{R}$ be a function and $0 \leq t_1 < \ldots < t_n$. For $a<b$ and $D := \{t_1,\ldots,t_n\}$ we denote by $$U_D(f,[a,b]) := \sup\{k \in \mathbb{N}; \exists \tau_1 < \sigma_1 < \tau_2 < \ldots < \tau_k < \sigma_k: f(\tau_j)<a, f(\sigma_j)>b \, \, \text{for $j=1,\ldots,k$}\}$$ the number of upcrossings. Upcrossing lemma Let $(X_t,\mathcal{F}_t)_{t \in \mathbb{Q} \cap [0,\infty)}$ be a martingale and $D = (t_j)_{j \in \mathbb{N}} \subseteq \mathbb{Q} \cap [0,\infty)$. Then we have for all $a<b$ $$(b-a) \mathbb{E}(U_{D}(X,[a,b])) \leq \sup_{t \in D} \mathbb{E}((X_t-a)^+)$$ where $$U_D(X,[a,b]) := \sup_{n \in \mathbb{N}} U_{\{t_1,\ldots,t_n\}}(X,[a,b]).$$ For a proof see e.g. Revuz & Yor, p.61. Using the upcrossing lemma, we get the following statement. Theorem Let $(X_t,\mathcal{F}_t)_{t \in \mathbb{Q} \cap [0,\infty)}$ be a martingale. Then there exists $\Omega_0$, $\mathbb{P}(\Omega_0)=1$ such that the limits $$\lim_{\mathbb{Q} \ni r \uparrow t} X_r(\omega) \qquad \text{and} \qquad \lim_{\mathbb{Q} \ni r \downarrow t} X_r(\omega) \tag{1}$$ exist in $[-\infty,\infty]$ for all $t>0$ and $\omega \in \Omega_0$. Proof: Fix some $t \in I := [u,v] \subseteq [0,\infty)$. Applying the upcrossing lemma, we find for any $a<b$, $a,b \in \mathbb{Q}$, $$(b-a) \mathbb{E} U_{\mathbb{Q} \cap [u,v]}(X,[a,b]) \leq \sup_{t \in \mathbb{Q} \cap [u,v]} \mathbb{E}((X_t-a)^+) \leq \mathbb{E}((X_v-a)^+) \leq \mathbb{E}(|X_v|)+|a|$$ where we have used that $t \mapsto (X_t-a)^+$ is a submartingale. Consequently, we get $U_{\mathbb{Q} \cap [u,v]}(X,[a,b])<\infty$ almost surely. This, however, implies that the limits $(1)$ exist almost surely for any $t \in (u,v)$.
To prove this note that $$\begin{align*}&\quad \{\omega; \lim_{\mathbb{Q} \ni r \downarrow t} X_r(\omega) \, \text{does not exist in $[-\infty,\infty]$}\} \\ &= \bigcup_{a,b \in \mathbb{Q},a<b} \{\omega; \liminf_{\mathbb{Q} \ni r \downarrow t} X_r(\omega)<a<b< \limsup_{\mathbb{Q} \ni r \downarrow t} X_r(\omega)\} \\ &= \bigcup_{a,b \in \mathbb{Q},a<b} \{\omega; U_{\mathbb{Q} \cap [u,v]}(X(\omega),[a,b])=\infty\}. \end{align*}$$ A similar reasoning works for $\lim_{\mathbb{Q} \ni r \uparrow t} X_r(\omega)$. This finishes the proof. In fact, it is possible to show that the limits in $(1)$ exist in $(-\infty,\infty)$. To this end, recall that we have the following maximal inequality for (super)martingales: $$r \mathbb{P} \left( \sup_{t \in \mathbb{Q} \cap [0,T]} |X_t| \geq r \right) \leq \mathbb{E}(X_0) + 2 \mathbb{E}(X_T)$$ for all $r>0$. Since the right-hand side is bounded as $r \to \infty$, we get $$\lim_{r \to \infty} \mathbb{P} \left( \sup_{t \in \mathbb{Q} \cap [0,T]} |X_t| \geq r \right)=0$$ which is equivalent to $$\mathbb{P} \left( \sup_{t \in \mathbb{Q} \cap [0,T]} |X_t| = \infty \right)=0.$$ This implies that the limits in $(1)$ exist (almost surely) in $(-\infty,\infty)$ for all $t \geq 0$. Answer to question 2: From the first part of this answer, we know that $$\bar{X}_t := \lim_{\mathbb{Q} \ni r \downarrow t} X_r$$ is well-defined and takes values in $(-\infty,\infty)$ (up to a null set). The so-defined process is a modification of $(X_t)_{t \in \mathbb{Q} \cap [0,\infty)}$, i.e. $$\mathbb{P}(\forall t \in \mathbb{Q} \cap [0,\infty): X_t = \bar{X}_t)=1. \tag{2}$$ Indeed: Since $\mathbb{Q}$ is countable, it suffices to prove $\mathbb{P}(X_t = \bar{X}_t)=1$ for each $t \in \mathbb{Q} \cap [0,\infty)$. Fix $t \in \mathbb{Q} \cap [0,\infty)$. By the martingale property, we have $$\mathbb{E}(X_T \mid \mathcal{F}_s) = X_s$$ for all $s \leq T$, $s,T \in \mathbb{Q}$. If we let $s \downarrow t$, then the right-hand side converges to $\bar{X}_t$. 
On the other hand, the left-hand side converges by Lévy's convergence theorem and the right-continuity of the filtration to $\mathbb{E}(X_T \mid \mathcal{F}_t) = X_t$. Consequently, $X_t = \bar{X}_t$ almost surely. Finally, we can show that $$\mathbb{E}(X_t \mid \mathcal{F}_s) = \bar{X}_s \tag{3}$$ for all $t \in \mathbb{Q}$, $s \in \mathbb{R}$, $t > s$. To this end, note that $$\bar{X}_s = \lim_{\mathbb{Q} \ni r \downarrow s} X_r = \lim_{\mathbb{Q} \ni r \downarrow s} \mathbb{E}(X_t \mid \mathcal{F}_r) = \mathbb{E}(X_t \mid \mathcal{F}_s)$$ where we have used again Lévy's continuity theorem and the martingale property. Using $(2)$, we actually get $$\mathbb{E}(\bar{X}_t \mid \mathcal{F}_s) = \bar{X}_s.$$
What does "radial solution" for the wave equation mean?
Theo is correct: a radial solution of an evolution equation posed in $\mathbb{R}^d$ is a solution $u(t,x)$ whose spatial dependence is only on the magnitude of the spatial variable, i.e. $u(t,x) = u(t,|x|)$. In the particular example provided, since the spatial dimension is $1$, you are looking for a solution that is an even function.
Expanding a series with a constant
$$\sum_{j=1}^{6} 4=4+4+4+4+4+4=24$$
X red balls indistinguishable and Y green balls indistinguishable
Yes, indistinguishable means it doesn't matter if you swap two indistinguishable balls, since you can't distinguish the two results (but I assume that only in question 2 the balls of the same color are indistinguishable, and that for the rest the balls are distinguishable). Assuming you mean distinguishable balls (otherwise the formulation of the second question wouldn't make sense), some hints: Your answer is correct: for the first position you have 13 choices of balls (since they are distinguishable), for the second position 12, and so on. This yields your solution. Hint: When only the color matters, think of putting the 13 balls in one line. Now choose which three you paint yellow. So it's "choose 3 out of 13" (to be yellow) possibilities (I hope you know binomial coefficients). Hint: How many ways can you choose 3 balls out of 10? How many ways 2 out of 3? Now multiply these numbers (why?).
Non differentiable, continuous functions in metric spaces.
Your question is equivalent to asking: Is it possible to approximate a continuous function uniformly with a differentiable one? Or: Is the set of differentiable functions dense in $C([0,1])$? Both are true, and stay true if you replace "differentiable" by "infinitely differentiable", as can be seen by convolution with mollifiers.
If nonempty, finite $U,V \subseteq R^n$, then $\operatorname{span}(U)+\operatorname{span}(V) \subseteq \operatorname{span}(U \cup V)$.
I assume that by $\operatorname{span}(U) + \operatorname{span}(V)$ you mean the vector space $$ \{u+v : u \in \operatorname{span}(U), v \in \operatorname{span}(V)\}. $$ For notation's sake write $U = \{e_1,\dotsc,e_n\}$ and $V = \{e_{n+1},\dotsc,e_{n+m}\}$. Then $$ u+v = \sum_{i=1}^n a_i e_i + \sum_{i=n+1}^{n+m} a_i e_i = \sum_{i=1}^{n+m} a_i e_i \in \operatorname{span}(U \cup V). $$ Reading the equation from right to left shows that $\operatorname{span}(U \cup V) \subseteq \operatorname{span}(U) + \operatorname{span}(V)$, too.
$P(X>16|X>10)$ - normal distribution
Hint. Start with the usual conditional probability formula, $$P(A\mid B)=\frac{P(A\cap B)}{P(B)}\ .$$ In this case $A\subseteq B$, so $A\cap B=A$. See if you can finish the calculation from here.
Examples of theorems that haven't been proven without AC in practice but can be proven without it in principle
Here is a general theorem that you can prove.$\newcommand{\Ord}{\mathsf{Ord}}$ We write $(\exists^{\Ord}x)\varphi$ and $(\forall^{\Ord}x)\varphi$ to mean that we are bounding the quantifier on $x$ to range over sets of tuples of ordinals, meaning that $x\subseteq\Ord^{<\omega}$ and $\varphi$ holds. (You can observe that this doesn't increase the complexity of the formula in the Levy hierarchy more than a usual quantifier would.) If $\varphi(x)$ is a statement which is upwards absolute, then $(\forall^\Ord x)\varphi(x)$ is provable with the axiom of choice if and only if it is provable without the axiom of choice. The proof is almost trivial: given $x\subseteq \Ord^{<\omega}$, we can consider $L[x]$, which is a model of choice; $\varphi(x)$ holds there, and it is upwards absolute, so it holds in $V$. $\qquad\square$ This gives an immediate plethora of examples in the form of finite coloring theorems and properties, e.g. "every coloring of a subset of $[\kappa]^{<\omega}$ in $\lambda$ colors has a subset of order type/cardinality $\alpha$ which is homogeneous/anti-homogeneous"; these are all $\Pi_2$ statements whose universal quantifiers can be replaced by $\forall^\Ord$. You can also take a look at Herrlich's book The Axiom of Choice, which includes many examples of a less set-theoretical nature.
Difference between smooth structures being equivalent or diffeomorphic.
Double check that my definition agrees with yours: a smooth structure $\mathcal{U}$ on a paracompact, second-countable, Hausdorff topological space $M$ is a collection of chart neighborhoods $(U,\phi)$ such that: the $U$ cover $M$; each chart $\phi: U\to\Bbb{R}^n$ is a topological embedding; for every pair $(U,\phi), (V,\psi)$ the transition map $\phi\psi^{-1}$ is a smooth map with everywhere-invertible differential from $\psi(U\cap V)$ to $\phi(U\cap V)$; and $\mathcal{U}$ is maximal with respect to these conditions, so that given another smooth structure $\mathcal{V}$, if $\mathcal{U}\cup\mathcal{V}$ is a smooth structure, then $\mathcal{V}\subset\mathcal{U}$. I'm going to guess at the precise definitions of equivalent and diffeomorphic provided by your text. We might say two smooth structures $\mathcal{U}, \mathcal{V}$ are diffeomorphic provided there exists a homeomorphism $h:M\to M$ such that the pullback of $\mathcal{U}$, defined as the structure $$h^*\mathcal{U} = \{(h^{-1}U, h^*\phi)\ |\ (U,\phi)\in\mathcal{U}\},$$ is equal to $\mathcal{V}$. Note that $h$ need not be smooth with respect to $\mathcal{U}$ or $\mathcal{V}$. We might say two smooth structures are equivalent provided they are diffeomorphic via the identity map.
For $z\in\mathbb{C}, |z|=1$, Prove that $\Big(\frac{1+ia}{1-ia}\Big)^4=z$ has all roots real and distinct
For the reality of the roots, take the modulus of both sides: $$\left|\frac{i-a}{-i-a}\right|=|z|^{1/4}=1$$ so if $A$ is the point of affix $a$, $I$ of affix $i$ and $I'$ of affix $-i$, this says that $AI=AI'$, so $A$ belongs to the perpendicular bisector of segment $[II']$, therefore to the real axis. For distinct roots, take the argument: $(\overrightarrow{AI'};\overrightarrow{AI})=\frac14\arg(z)+k\frac\pi2$ with $k\in\{0,1,2,3\}$; this gives 4 distinct points. (There may be a better argument for this last point.) With some calculus (which doesn't explain why the roots are real): let $z=e^{i\theta}$, and for $k\in\{0,1,2,3\}$, $\theta_k=\theta/4+k\pi/2$. The equation solves as $$a=\frac{e^{i\theta_k}-1}{i(e^{i\theta_k}+1)} = \frac{e^{i\theta_k/2}-e^{-i\theta_k/2}}{i(e^{i\theta_k/2}+e^{-i\theta_k/2})} = \tan(\theta_k/2) = \tan(\theta/8+k\pi/4)$$ OK, so it's real, but why?
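A numerical sanity check of the closed form (not a substitute for the geometric argument): for a sample $z=e^{i\theta}$ on the unit circle, the four values $a_k=\tan(\theta/8+k\pi/4)$ should be real, distinct, and satisfy $\left(\frac{1+ia}{1-ia}\right)^4=z$.

```python
import cmath
from math import tan, pi

theta = 1.234                    # a sample argument for z on the unit circle
z = cmath.exp(1j * theta)
roots = [tan(theta / 8 + k * pi / 4) for k in range(4)]

# residual of the original equation ((1+ia)/(1-ia))^4 = z at each root
residuals = [abs(((1 + 1j * a) / (1 - 1j * a)) ** 4 - z) for a in roots]
```

The roots are real by construction (outputs of `tan`), so the only things left to check are distinctness and the residuals.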
question about convergence of a series
Note that $4 p q < 1$ (Why?). Let $4 pq = a < 1$. We then have $$\sum_{k=1}^{\infty} \dfrac{a^k}{\sqrt{k}} < \sum_{k=1}^{\infty} a^k = \dfrac{a}{1-a} < \infty$$ The series you have is the polylogarithm function, $\text{Li}_{1/2}(a)$.
$\lim_{t \to 0} \dfrac{f(g(1 + t, 2), h(1 + t, 2)) - f(g(1, 2), h(1, 2)) }{t} ?$
a) $$\dfrac{\partial f}{\partial g} (g(1,2),h(1,2)) \dfrac{\partial g}{\partial x} ((1,2)) + \dfrac{\partial f}{\partial h} (g(1,2),h(1,2)) \dfrac{\partial h}{\partial x} ((1,2)) \ $$ b) $$\dfrac{\partial f}{\partial x} (g(1,2),h(1,2)) $$
Recursive Definition for Logarithm
$$\log ab = \log a + \log b$$ Before the advent of digital computers, that was the formula used to build up log tables very rapidly.
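A tiny illustration of how the identity bootstraps a log table: starting from just $\log 2$ and $\log 3$, additions alone give the logs of every number of the form $2^a 3^b$. (This toy base-10 table is hypothetical, for illustration only.)

```python
from math import log10

log2, log3 = log10(2), log10(3)

# Each entry is built only by adding previously known logs.
table = {
    2: log2,
    3: log3,
    4: log2 + log2,          # log 4 = log 2 + log 2
    6: log2 + log3,          # log 6 = log 2 + log 3
    8: log2 + log2 + log2,
    9: log3 + log3,
    12: log2 + log2 + log3,
}
```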
Is it true that if $b_1, b_2, b_3 \in V$ then $\operatorname{Span}(B) \subseteq V$?
It's true. Recall that if $\mathbb{V}$ is a subspace over $\mathbb{R}$, then for all $u,v \in\mathbb{V}$ and $\alpha\in\mathbb{R}$: $u+v\in\mathbb{V}$ and $\alpha v\in\mathbb{V}$. Then $Sp(u,v)\subseteq\mathbb{V}$ follows from these properties, because $Sp(u,v)=\{\alpha_1u +\alpha_2v : \alpha_1,\alpha_2\in\mathbb{R}\}$.
Uniqueness of the maximum of a multi-dimensional function
A sum of concave functions is concave, and $\ln(1+\text{erf}(t))$ and $\ln(1-\text{erf}(t))$ are easily seen to be concave. So your function is concave. Moreover, since $\ln(1+\text{erf}(t))$ and $\ln(1-\text{erf}(t))$ are strictly concave, the only way for a maximum of your function (assuming it exists) to be non-unique would be for all the $x_0 + \sum_j a_{ij} x_j$ and all the $x_0 + \sum_j b_{ij} x_j$ to be equal at two different points, i.e. the matrix $\pmatrix{1 & A\cr 1 & B\cr}$ to have rank $< M+1$, where $A$ and $B$ are the matrices of coefficients $a_{ij}$ and $b_{ij}$, and the $1$'s are column vectors of all $1$'s.
Inverse Matrices and Infinite Series
This only works if the series converges. If $\|A\|<1$ and $\|\cdot\|$ is a submultiplicative norm, then the result follows from a geometric series argument. We have $\| \sum_{k=0}^n A^k \| \le \sum_{k=0}^n \|A\|^k \le {1 \over 1 - \|A\|}$, and the tails $\|\sum_{k=m}^n A^k\| \le \sum_{k=m}^n \|A\|^k$ are dominated by a convergent geometric tail, so the partial sums are Cauchy and the series converges. We have $(I-A) \sum_{k=0}^n A^k = I-A^{n+1}$, hence letting $n \to \infty$, we have $(I-A) C = (I-A) \sum_{k=0}^\infty A^k = I$.
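A concrete check of the argument in pure Python (no external libraries): truncate the series for a small matrix with $\|A\|<1$ and verify $(I-A)\sum_{k} A^k \approx I$.

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def neumann_sum(A, terms):
    """Approximate (I - A)^{-1} by sum_{k=0}^{terms} A^k."""
    n = len(A)
    I = [[float(i == j) for j in range(n)] for i in range(n)]
    S = [row[:] for row in I]   # running sum, starts at A^0 = I
    P = [row[:] for row in I]   # current power A^k
    for _ in range(terms):
        P = mat_mul(P, A)
        S = [[S[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return S

A = [[0.2, 0.1],
     [0.0, 0.3]]                # max row sum is 0.3, so the series converges
S = neumann_sum(A, 60)
M = [[float(i == j) - A[i][j] for j in range(2)] for i in range(2)]  # I - A
check = mat_mul(M, S)           # should be (approximately) the identity
```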
Struggling with the proof of Fermat's little theorem
If we had $ra\ne sa$, that would still leave the possibility that $ra-sa$ is a multiple of $p$, since $ra$ and $sa$ can have a difference greater than $p-1$. If that were true, then we would have $ra\equiv sa\pmod{p}$, which would mean the multiples of $a$ would not form a complete residue system. Do you see how that's a problem? The confusion here seems to be over notation. When we write: $$a\equiv b\pmod{p},$$ we are not saying that $a$ is what you get when you reduce $b$ modulo $p$. We're saying that, modulo $p$, the numbers $a$ and $b$ are the same. For example: $$12\equiv 27\pmod 5$$ is a true statement. The $\pmod5$ at the end doesn't apply to the $27$; it applies to the $\equiv$ sign. What that statement really means is that $12-27$ is a multiple of $5$. Thus, what you are meaning when you say "$ra\pmod{p}\ne sa\pmod{p}$" (which is not a generally accepted notation), is precisely what we mean when we write $ra\not\equiv sa\pmod{p}$.
Lim sup and Lim inf of real-valued functions
I have just formulated my solution to this problem.
$\int_{-\infty}^{\infty}{e^x+1\over (e^x-x+1)^2+\pi^2}\mathrm dx=\int_{-\infty}^{\infty}{e^x+1\over (e^x+x+1)^2+\pi^2}\mathrm dx=1$
We have that $\frac{e^x-1}{(e^x-x+1)^2+\pi^2}$ has a simple primitive, given by $\frac{1}{\pi}\arctan\frac{1+e^x-x}{\pi}$. It follows that: $$ I_1 = \int_{-\infty}^{+\infty}\frac{e^x+1}{(e^x-x+1)^2+\pi^2}\,dx= 2\int_{-\infty}^{+\infty}\frac{dx}{(e^x-x+1)^2+\pi^2}$$ and the residue theorem gives the following Lemma. If $a>0$ and $b\in\mathbb{R}$, $$ \int_{-\infty}^{+\infty}\frac{a^2\,dx}{(e^x-ax-b)^2+(a\pi)^2}=\frac{1}{1+W\left(\frac{1}{a}e^{-b/a}\right)} $$ where $W$ is Lambert's function. In our case, by choosing $a=1$ and $b=-1$ we get that $I_1$ depends on $W(e)=1$ and equals $\color{red}{\large 1}$. $I_2$ is easier to compute: $$ I_2 = \int_{-\infty}^{+\infty}\frac{e^x+1}{(e^x+x+1)^2+\pi^2}\,dx = \frac{1}{\pi}\,\left.\arctan\left(\frac{1+x+e^x}{\pi}\right)\right|_{-\infty}^{+\infty} = \color{red}{1}.$$ Addendum. Due to the identity $$\begin{eqnarray*} I_1 &=& \int_{0}^{+\infty}\frac{u+1}{u}\cdot\frac{du}{(u+1-\log u)^2+\pi^2}\\&=&\frac{1}{\pi}\int_{0}^{+\infty}\int_{0}^{+\infty}(u+1)u^{x-1}e^{-(u+1)x}\sin(\pi x)\,dx\,du\\&=&\frac{2}{\pi}\int_{0}^{+\infty} e^{-x} x^{-x}\Gamma(x)\sin(\pi x)\,dx\\&=&2\int_{0}^{+\infty}\frac{e^{-x} x^{-x}}{\Gamma(1-x)}\,dx\end{eqnarray*}$$ the previous Lemma also proves the highly non-trivial identity $$ \int_{0}^{+\infty}\frac{(ex)^{-x}}{\Gamma(1-x)}\,dx = \frac{1}{2} \tag{HNT}$$ equivalent to the claim $I_1=1$. It would be interesting to find an independent proof of $\text{(HNT)}$, maybe based on Glasser's master theorem, Ramanujan's master theorem or Lagrange inversion. There also is an interesting discrete analogue of $\text{(HNT)}$, $$ \sum_{n\geq 1}\frac{n^n}{n!(4e)^{n/2}}=1$$ that comes from the Lagrange inversion formula.
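The discrete analogue stated at the end, $\sum_{n\geq 1}\frac{n^n}{n!(4e)^{n/2}}=1$, can at least be checked numerically. Working in log space via `lgamma` avoids overflow of $n^n$ and $n!$; the partial sum through $n=400$ should already agree with $1$ far beyond the tolerance below.

```python
from math import exp, lgamma, log, e

def term(n):
    # n^n / (n! * (4e)^(n/2)), evaluated as exp of a log to avoid overflow
    return exp(n * log(n) - lgamma(n + 1) - 0.5 * n * log(4 * e))

total = sum(term(n) for n in range(1, 401))
```

The terms decay roughly like $(\sqrt{e}/2)^n$, so truncation at $n=400$ leaves a negligible tail.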
Why is the bit sequences of infinite length not the same size as bit sequences with finite length?
Why is it strange? The fact that you can approximate every infinite string by its finite initial segments doesn't mean that the string is there. The fact that $\pi$ is the limit of its decimal expansions, all of which are rational, doesn't mean that $\pi$ itself is rational. The fact that every natural number has only finitely many predecessors doesn't mean that the set of natural numbers itself is finite. Limits sit "beyond" the approximations. And this is exactly that. I also wrote about this exact issue here, both in my answer and in many comments throughout that page. I should also remark that sequences are indexed with ordinals, not with cardinals, since sequences are by nature objects of a sequential manner, and ordinals measure "length of a queue" whereas cardinals just measure "how large it is". While for the finite case this is the same, for the infinite case it's not the same anymore (there are queues of different lengths of the same cardinality). So in writing $\{0,1\}^k$ for all $|k|=\aleph_0$ you actually bite off a lot more than you can chew with finite sequences, since you allow all the countable ordinals. Instead you should probably write $\{0,1\}^\Bbb N$, which is what you meant, I suppose.
Family with five children- probability.
I think this is a problem of conditional probability. This means that instead of calculating $P(A\cap B \cap C)$, you would be calculating $P(C \mid A \cap B)$, the probability of $C$ given $A$ and $B$. You can calculate this by using the conditional probability formula. Let $D = A\cap B$. Then: $P(C|D) = \frac{P(C \cap D)}{P(D)}$. Note that this gets harder if $A$ and $B$ are not independent events: in that case you must use $P(A \cap B) = P(A)\,P(B|A) = P(B)\,P(A|B)$. You can find these with either the total probability theorem or complementary counting.
$\epsilon-\delta$ proof of $\lim_{ x\to 5} \frac{1}{x-4}=1$
We see $$\left \lvert \frac{1}{x-4} - 1\right \rvert = \left \lvert \frac{1}{x-4} - \frac{x-4}{x-4}\right \rvert = \left \lvert \frac{5-x}{x-4} \right \rvert.$$ For fixed $\epsilon > 0$, take $\delta = \min\{1/2, \epsilon / 2 \}$. Then for $\lvert x - 5 \rvert < \delta$ we have $\lvert x - 4 \rvert > 1/2$ so $\frac{1}{\lvert x -4 \rvert} < 2$. Thus $$\left \lvert \frac{1}{x-4} - 1\right \rvert =\frac{1}{\lvert x-4 \rvert} \lvert x - 5 \rvert < 2\delta \le \epsilon.$$
Does $a\in\mathbb Z$ such that $\gcd(n,a(m-a))=1$ exist for every $(m,n)\not=(\text{odd},\text{even})$?
If $n$ is prime, then there is $a$ such that $n\nmid a(m-a)$, since among the values $a=0,\dots, n-1$, at most two are such that $n\mid a(m-a)$: namely $0$ and $m\bmod n$. So if $n>2$, then there are at least $n-2$ values that fit, and if $n=2$, then, from your additional condition, $m$ is even, so the value $a=1$ fits. Now, if $n = p_1^{d_1}\dots p_k^{d_k}$, where the $p_i$ are prime, then there are $a_1, \dots, a_k$ such that $p_i\nmid a_i(m-a_i)$ for all $i = 1, \dots, k$. By the Chinese remainder theorem, there is $a$ such that $a\equiv a_i\pmod {p_i}$ for all $i$. Then $a(m-a)\equiv a_i(m-a_i) \pmod {p_i}$, i.e., $a(m-a)$ is not divisible by any $p_i$, i.e., $(n,a(m-a))=1$.
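The conclusion is easy to confirm by brute force for small cases. The helper below (a hypothetical name, for illustration) simply searches $a \in \{0,\dots,n-1\}$:

```python
from math import gcd

def find_a(m, n):
    """Return some a with gcd(n, a*(m-a)) == 1, or None if none exists."""
    for a in range(n):
        if gcd(n, a * (m - a)) == 1:
            return a
    return None
```

For $(m,n)=(\text{odd},\text{even})$ no such $a$ exists, since $a$ and $m-a$ then have opposite parity, making $a(m-a)$ even.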
What is the Cumulative Distribution Function of the following random variable?
@May: if they are independent (as in the first i of iid) then $\sum_i \sum_j X_i Y_j =\sum_i X_i \times \sum_j Y_j$. For example, if they are normally distributed, then you have the product of two normal random variables, which is in general not normally distributed. This extends to most other distributions, especially if the sums are over a large number of cases; though if the means are nonzero and the standard deviations are small compared with the means, this may be difficult to spot.
Is the surface of a cube a 2-dimensional differentiable manifold?
No, at least not in the sense that you want. If it were a manifold, then by the classification theorem for compact 2-manifolds, it'd have to be diffeomorphic to a sphere. So the question then becomes Is there an embedding $f$ of the sphere in 3-space whose image is exactly the unit cube? If there were, then (with a little bit of isotopy in the domain sphere) we could assume that $f(0, 1, 0) = (0, 1, 0)$ and $f(1, 0, 0) = (1, 0, 0)$, i.e., that the north pole and the "east pole" of the cube get transformed into the center of the top of the cube and the center of the "east" face of the cube, respectively. Now consider the arc $c$ defined by $$ t\mapsto (\sin t, \cos t, 0) $$ for $0 \le t \le \pi/2$. That's a smooth path on the sphere (it's part of a line of longitude!), so $b = f \circ c$ must be a smooth path on the cube. Because the cube with all its edges and vertices removed has six connected components, with the north and east points in different components, but the interval is connected, we see that the path $b$ must meet the set of edges and vertices at some point $P$. Let's say that $$ b(t_0) = P $$ where $t_0$ is neither $0$ nor $\pi/2$, i.e., it's in the interior of the domain of $b$. What is $b'(t_0)?$ On the one hand, it's $f'(c(t_0)) c'(t_0)$, and must therefore be nonzero, because $f$ is a diffeomorphism onto its image. On the other hand, it has to be a "tangent vector" at $b(t_0)$, a point on the edge of the cube. That pretty much requires that it lie in the direction of the edge. So a smooth curve from the north to the east pole of the sphere could become a smooth curve from the north to the east "pole" of the cube by heading towards an edge, running along the edge, and continuing off the edges into the eastern face. But now take another unit arc $d$ that meets $c$ orthogonally at $c(t_0)$, and let's say $d(t_0) = c(t_0)$ by a shift of parameter if necessary. Let $a(t) = f(d(t))$. What's $a'(t_0)$? 
It's got to be tangent to the cube at $a(t_0) = P$, and it's got to be nonzero and linearly independent from $b'(t_0)$, because $f$ is a diffeomorphism. So: take any arc on the cube that crosses an edge transversally, and with nonzero speed at the crossing. Can that arc be differentiable at the crossing? No. For just before crossing, the curve lies in (say) the north face, and hence has a tangent orthogonal to $(0,1,0)$; just after the crossing, it lies in the east face, and has a tangent orthogonal to $(1,0,0)$. Hence the tangent at the crossing must be orthogonal to both, and thus be in the direction of $(0,0,1)$. But that makes it NOT linearly independent from $b'(t_0)$. And that's a contradiction. I've been a little sloppy here, talking about "just before the crossing" and "just after the crossing"...how do I know that for some small time before $t_0$ we have $a(t)$ is on the "north face" and that for some small time after, it's on the "east face"? Answer: a little bit of careful analysis, and the fact that every nonconstant curve is well approximated by its tangent locally. But the gist here is clear: the tangent plane to $S^2$ at $c(t_0)$ would have to be transformed, by $f'$, isomorphically to a tangent plane to the cube along one of the edges. But that "tangent plane" can really only contain the edge itself, hence is 1-dimensional, and you've got a contradiction.
Show that if $Y$ is $\sigma (X)-$ measurable, there is a measurable function $f:\mathbb R\to \mathbb R$ s.t. $Y=f(X)$.
Let $\{Y_n\}_n$ be a sequence of $\sigma(X)$-measurable simple functions converging pointwise to $Y$. You know that the result holds for simple functions, so for each $n$ let $f_n$ be the corresponding Borel function such that $Y_n=f_n(X)$. Since $\{Y_n\}_n$ converges pointwise, $\{f_n\}_n$ converges pointwise on $X(\Omega)$. Now let $A\subset \mathbb{R}$ denote the Borel measurable set of points $s\in \mathbb{R}$ for which $\{f_n(s)\}_n$ converges (we just showed that $X(\Omega)\subset A$). Then we can define the Borel function $f:=\lim\limits_{n \rightarrow \infty}\mathcal{X}_Af_n$, and by simply taking pointwise limits on both sides of the equality $Y_n=f_n(X)$ we see that $Y=f(X)$.
$d_1, d_2$ are metrics on $X$; $(X,d_1)$ is complete. Let $i:(X,d_1)\to(X,d_2)$ be continuous and $i^{-1}$ unif. cont. Show $(X,d_2)$ is complete.
Assume $(x_n)$ is Cauchy in $d_2$. For any $\delta > 0$ : $d_2(x_n,x_m) < \delta$ for $n,m$ large enough. By uniform continuity of $i^{-1}$ it follows that $(x_n)$ is Cauchy in $d_1$, hence convergent. To be very precise: Let $\epsilon > 0$ and $\delta > 0$ such that $d_2(x,y) < \delta$ implies that $d_1(i^{-1}(x),i^{-1}(y)) = d_1(x,y) < \epsilon$ . Then, there exists an $N$ with $n,m \geq N$ implies $d_2(x_n,x_m) < \delta$ and hence $d_1(x_n,x_m) < \epsilon$ for $n,m \geq N$. Since this can be done for every $\epsilon > 0$ it follows that $(x_n)$ is Cauchy in $d_1$.
Ways of choosing $16$ integers from first $150$ integers such that there is no $(a,b,c,d)$ for which $a+b=c+d$
There is a solution! Despite my doubts for much of this time, expecting the answer to be that a solution could not be found without going above 150, it turns out that there is a valid solution. The solution that I have found is: $$ 1,4,6,7,33,50,60,69,94,107,119,127,131,135,142,149 $$ I have confirmed that all 120 sums of two values are unique. Note that a second solution is the same as this, except with each value replaced with 150 minus the value. The value was found by my code, after maybe 2 days of run time. The code is still running, so it may find more solutions. Previous observations... To assist in seeking an answer to this, I have investigated the equivalent questions for smaller numbers. In particular, what's the smallest number of "first $n$ integers" that you need to be able to choose $k$ integers satisfying the condition. For each of the numbers presented here, I have used code to confirm that it cannot be done with a smaller range of values. In the table below, the set of solutions are "up to reversal" - this is because, if you replace all values, $s$, in the solution with $n+1-s$, you get another solution, and thus all solutions come in equivalent pairs. For example, $[1,2,3,5,8]$ and $[1,4,6,7,8]$ are identical, up to reversal. 
$$\begin{array}{c|c|c} \hline\text{choose $k$ integers} & \text{from first $n$ integers} & \text{Solutions (up to reversal)} \\ \hline 5 & 8 & [1,2,3,5,8] \\ \hline 6 & 13 & \begin{array}{c}[1,2,3,5,8,13]\\ [1,2,3,7,10,13]\end{array} \\ \hline 7 & 19 & \begin{array}{c}[1,2,3,5,9,14,19]\\ [1,2,3,8,11,15,19]\end{array} \\ \hline 8 & 25 & [1,2,3,5,9,15,20,25] \\ \hline 9 & 35 & \begin{array}{c}[1,2,3,5,9,16,25,30,35]\\ [1,2,3,8,14,17,27,31,35]\\ [1,2,3,15,20,25,28,31,35]\end{array} \\ \hline 10 & 46 & [1,2,8,11,14,22,27,42,44,46] \\ \hline 11 & 58 & \begin{array}{c}[1,2,6,10,18,32,35,38,45,56,58]\\ [1,2,6,10,18,32,34,45,52,55,58]\end{array} \\ \hline 12 & 72 & \begin{array}{c}[1,2,3,8,13,23,38,41,55,64,68,72]\\ [1,2,12,19,22,37,42,56,64,68,70,72]\end{array} \\ \hline 13 & 87 & [1,2,12,18,22,35,43,58,61,73,80,85,87] \\ \hline 14 & 106 & \left[\begin{array}{c}1,2,7,15,28,45,55,67,\\70,86,95,102,104,106\end{array}\right] \\ \hline 15 & 127 & \left[\begin{array}{c}1,2,3,23,27,37,44,51,81,\\96,108,111,114,119,127\end{array}\right]\\ \hline \end{array}$$ Some insights: The increase in $n$ needed to increase $k$ by 1 seems to grow each time (with the only exception being 13->19->25 - the differences between $n$ values go 5,6,6,10,11,12,14,15,19 - this is not overly surprising, but worth noting. The density of sums within the possible range can be calculated relatively easily. There are $\frac{k(k-1)}2$ pairs within the set, and thus that many sums if no two are the same. There are $2n-3$ possible sums creatable from a pair of numbers between $1$ and $n$ (as the sum cannot be $1$, $2$, or $2n$). Therefore, the density is given by $\frac{k(k-1)}{2(2n-3)}$ - the resulting densities are (approximately) 76.9%, 65.2%, 60.0%, 59.6%, 53.7%, 50.6%, 48.7%, 46.8%, 45.6%, and 43.5%. All optimal solutions found so far (accounting for reversal) can be written with $[1,2...$ at the start. 
Interestingly, while a solution for $k=15$ can be found with $n=127$, no such solution can be found for $n=128$ if you require both 1 and 128 in the set. Approaches I have used to try to find solutions: A code that seeks the solution in an "additive" approach - find which values can be added to the existing set you have, and keep expanding until you run out of values, then backtrack and try a new value. With careful coding to ensure it minimises the search space reasonably well, this can find all optimal solutions for up to $k=14$ within a reasonable amount of time, if you input the correct $n$ in (and can similarly be used to confirm that no lower $n$ will work). However, for larger $n$, it becomes a lot more unwieldy. A greedy code that seeks the solution in a "subtractive" approach - start with all possible values in the same set (so, 1 through 150 for the actual question), then remove numbers based on which ones will reduce the number of "collisions" (sums that appear multiple times) by the most - with some randomness where multiple numbers can achieve the same reduction... and then add numbers back if they don't substantially increase the number of collisions. This can find many solutions quite quickly for "easy" cases, but due to the random aspect, is also pretty slow to find optimal values for larger $n$. A non-random approach is likely to be very, very slow, as you start with a lot of numbers and there are a lot of options for removal. Take a smaller solution, such as for $k=12$ ($n=72$), then rescale the numbers and try to fill the gaps. For example, multiplying the first $k=12$ solution by 2 gives $[2,4,6,16,26,46,76,82,110,128,136,144]$. From here, we can find that $[31,89,147]$ is capable of being added, for a total of $k=15$... but despite various attempts, I have not succeeded in reaching $k=16$ using this approach.
Real Analysis: function achieves minimum value
A very rough sketch of a proof: So $f$ is a function $A \rightarrow \mathbb{R}$. First, you want to show that $f$ is continuous. This will come in handy later. Define a subset $Y \subset A$ where $\displaystyle x \in Y \iff \inf_{b \in B}d(x, b) > c$ for some $c \in \mathbb{R}$. Since $f(Y) > f(A \setminus Y)$, it follows that the minimum value, if it exists, will exist somewhere in $A \setminus Y$. Well, $A \setminus Y$ is compact, and so what do we know about the continuous image of a compact set? It is compact! What can we conclude?
Prove |Re z| less than or equal to |z| and |Im z| less than or equal to |z|
It is almost correct. You should have added that, when you write $z$ as $x+yi$, your numbers $x$ and and $y$ are real (although, yes, this is usually implicit). And you also should have added that $\sqrt{x^2}=|x|=|\operatorname{Re}z|$.
Sketching a parametrised cone and a geodesic lying on it.
Start by noticing that $z=1-\sqrt{\alpha^2-1}\sqrt{x^2+y^2}$, which is the equation of a cone with vertex at $(0,0,1)$, whose intersection with the $(x,y)$ plane is a circle of radius $1/\sqrt{\alpha^2-1}$.
Wave equation: travelling solutions
If you select a certain value of $f$, say $f_*$ when $x=x_*$ and $t=t_*$, and then ask for what other combinations of $x$ and $t$ we have $f(x-ct)=f(x_*-ct_*)=f_*$, you'll see that this happens whenever $x-ct=x_*-ct_*$, i.e. along the line $x=x_*+c(t-t_*)$. So the point carrying the value $f_*$ moves to the right with speed $c$, and the whole profile travels without changing shape.
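Numerically, the level-set relation says the value of $f(x-ct)$ at $(x_*,t_*)$ reappears at $(x_*+c\,\Delta t,\ t_*+\Delta t)$, for any profile $f$; the Gaussian below is just an arbitrary choice.

```python
from math import exp

c = 2.0
f = lambda u: exp(-u * u)        # an arbitrary smooth profile

x0, t0, dt = 1.3, 0.7, 0.25
lhs = f(x0 - c * t0)                       # value at (x*, t*)
rhs = f((x0 + c * dt) - c * (t0 + dt))     # value at the shifted point
```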
Surface Integral of Sphere between 2 Parallel Planes
You should be able to find the distance between the planes in terms of the angle $\phi$ and the radius $r$
Every bounded sequence in the dual space contains a subsequence which is weak* convergent
The idea is to diagonalize, as mentioned earlier, but you have to do it carefully: Let $\{x_n\}$ be a countable dense subset of $X$. Now $\{T_n(x_1)\}$ has a convergent subsequence by Bolzano-Weierstrass, which you index with an increasing sequence $\{s(1,n) : n\in \mathbb{N}\} \subset \mathbb{N}$. Now $\{T_{s(1,n)}(x_2)\}$ has a convergent subsequence, which you index by an increasing $\{s(2,n) : n\in \mathbb{N}\}$. Proceed inductively to obtain strictly increasing sequences $\{s(j,n) : n\in \mathbb{N}\}$ for each $j \in \mathbb{N}$ such that: $\{s(j+1,n) : n \in \mathbb{N}\}$ is a subsequence of $\{s(j,n) : n \in \mathbb{N}\}$, and $\{T_{s(j,n)}(x_j)\}$ is convergent for each $j \in \mathbb{N}$. Now consider the diagonal subsequence $T_{s(n,n)}$: it converges pointwise at each $x_j$. As you mention, this completes your proof.
A limit with trigonometric functions: $\lim_{x \to 0} \frac{1-\cos2x}{x\sin(x+4\pi)}$
Using $1-\cos 2x=2\sin^2 x$ and $\sin(x+4\pi)=\sin x$, then dividing numerator and denominator by $\sin x$: $$\lim_{x \to 0} \frac{2\sin^2x}{x\sin x}=\lim_{x \to 0} \frac{2\sin x}{x}=2\lim_{x \to 0} \frac{\sin x}{x}=2$$
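A numerical cross-check of the limit: evaluating the original expression at small $x$ should approach $2$.

```python
from math import cos, sin, pi

def g(x):
    return (1 - cos(2 * x)) / (x * sin(x + 4 * pi))

values = [g(10.0 ** -k) for k in range(2, 6)]   # x = 1e-2 down to 1e-5
```

Pushing $x$ much smaller than this runs into floating-point cancellation in $1-\cos 2x$, so the tolerances below are deliberately loose.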
Green function of a boundary value problem
By visual inspection, esp. of the last two terms, $y(x)=x$ is a solution. Insert $y(x)=xu(x)$ to get $$ 0=x^2(\ln x-1)(xu''+2u')-x(xu'+u)+xu=x^3(\ln x-1)u''+x^2(2\ln x-3)u' $$ which now is first order and separable in $u'$ giving $$ \frac{u''}{u'}=-\frac1x\frac{2\ln x-3}{\ln x-1}\implies u'(x)=Cx^{-2}(\ln x-1) \\~\\ \implies u(x)=D-Cx^{-1}\ln x $$ so that $y(x)=\ln x$ is the second basis solution. Check that $y(x)=x$ satisfies the left boundary condition and $y(x)=\ln x$ the right one. Now compute the Wronskian and compose the Green function.
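Assuming the ODE is $x^2(\ln x - 1)y'' - xy' + y = 0$, as the displayed computation suggests, both basis solutions can be checked numerically:

```python
from math import log

def residual(y, dy, d2y, x):
    # left-hand side of the assumed ODE: x^2 (ln x - 1) y'' - x y' + y
    return x * x * (log(x) - 1.0) * d2y(x) - x * dy(x) + y(x)

xs = [0.5, 1.0, 2.0, 5.0]
res_x = [residual(lambda t: t, lambda t: 1.0, lambda t: 0.0, x) for x in xs]
res_ln = [residual(log, lambda t: 1.0 / t, lambda t: -1.0 / t ** 2, x)
          for x in xs]
```

For $y=\ln x$ the terms cancel symbolically: $x^2(\ln x-1)(-1/x^2) - x(1/x) + \ln x = -\ln x + 1 - 1 + \ln x = 0$.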
Can anyone help me calculate the probabilities of some UEFA champions league draws?
I will not answer your question, but I will make some comments. You say that "I've always been highly suspicious of some of the draws in the champions league in the past", so my answer will be centered around this comment. You ask: What is the probability of Arsenal playing Bayern four out of five seasons? I don't believe that this is the question you want to ask. I am assuming that you are interested in figuring out, by looking at the data of the last draws, whether we have reason to suspect that the draws have been fixed in any way. Furthermore, I assume that you would be equally suspicious if some pair of teams played each other at least 4 times out of the last 5 seasons, or if some pair of teams played each other at least 7 times out of the last 9 seasons. By this I mean that it seems somewhat arbitrary to look at the last $5$ years. Why are you not considering the last $3$ years or even $10$ years, and why not include all seasons since the Champions League started? It seems to me that you may be cherry-picking the number $5$: you would presumably be just as suspicious if this had happened in any $5$-year period, say if Ajax and Porto had played each other in 2010, 2011, 2013 and 2014. Furthermore, there is some trouble with the data: The same teams do not participate in the tournament each year. The Champions League is a knock-out tournament. In other words, we do not know how many games each team is expected to play each season. We could weight the teams by the number of games they have played historically, e.g. in recent years Real Madrid have played $7$ other teams each year, while let's say that Bayern has played $6$. A problem with this, however, is that if the tournament draws are already being fixed, we have no way of knowing how far a team would go in the tournament if the draws were not fixed. The tournament format has changed since the Champions League was created.
Further comments: Teams from different countries have a higher chance of playing each other, as the round of 16 draw does not allow teams from the same country to play each other. This means that calculating the probability that a given pair of teams qualified for the round of 16 play each other in a given year is in itself quite cumbersome. Draws in the group stages are seeded. The best teams in a fair tournament have a higher chance of playing each other, as they play more games. There are undoubtedly even more things one should consider. Conclusion: I think it would be more interesting to answer a question like this: "Since the start of the current format of the Champions League, is there reason to believe that an unusually high number of pairs of teams have played each other frequently?" Here we of course have to specify what we consider "unusual" and "frequent". From your question, it seems that you consider one pair of teams playing each other in 80% of the tournaments over a period of $5$ years, together with one pair playing each other in 100% of the tournaments over a period of $3$ years, to be frequent. Perhaps someone will do the calculations and reach a reasonable conclusion.
Solving $(f'(x))^2 = f(x)f''(x)$ with boundary conditions.
Hint: rewrite as $$\frac{f(x)}{f'(x)} = \frac{f'(x)}{f''(x)}$$
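Following the hint, $f'/f = f''/f'$, i.e. $(\ln f)' = (\ln f')'$, which leads to exponential solutions $f(x)=Ae^{kx}$ (the boundary conditions then fix $A$ and $k$). A quick numerical check that such functions satisfy the equation:

```python
from math import exp

def residual(A, k, x):
    """(f')^2 - f*f'' for f(x) = A*exp(k*x); should vanish identically."""
    f = A * exp(k * x)
    df = A * k * exp(k * x)
    d2f = A * k * k * exp(k * x)
    return df ** 2 - f * d2f

xs = [0.0, 0.5, 1.0, 2.0]
res = [residual(2.0, 0.7, x) for x in xs]
```

Both sides equal $A^2 k^2 e^{2kx}$, so the residual is zero up to rounding.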
Determining absolute convergence
For $|x| > 1$ simply consider that $\frac{|x|^{2k-1}}{2k-1} \to \infty$ so the series can't converge.
Conceptual question on graphs
Let the $2$ functions be $f(x)$ and $g(x)$. Then, we wish to find the point at which $f(x)-g(x)$ is maximum. If we define $h(x) = f(x) - g(x)$, then the question becomes - find the $x$ at which $h(x)$ is maximum. Now, we know that this occurs at the maxima of the function, which are one of the points where $h'(x) = 0$, or where $f'(x) = g'(x)$. Thus, the distance between the $2$ functions is maximum when their slopes are equal.
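A small illustration with hypothetical curves $f(x)=\sin x$ and $g(x)=x/2$ on $[0,\pi]$: the gap $h=f-g$ peaks where $f'(x)=g'(x)$, i.e. $\cos x = 1/2$, at $x=\pi/3$.

```python
from math import sin, cos, pi

f = lambda x: sin(x)
g = lambda x: 0.5 * x
h = lambda x: f(x) - g(x)

# grid search for the maximizer of the gap on [0, pi]
xs = [i * pi / 100_000 for i in range(100_001)]
x_best = max(xs, key=h)
```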
What does Wolfram Alpha mean by this
The reason for this starts with the two limits $$ \lim_{\substack{n \to \infty \\ n \text{ even}}} \frac{n!(-e)^n}{n^n} = +\infty \text{ and } \lim_{\substack{n \to \infty \\ n \text{ odd}}} \frac{n!(-e)^n}{n^n} = -\infty. $$ Notice that the $(-1)^n$ in the numerator causes it to flip between positive and negative. Now what I believe WolframAlpha is doing is treating $n$ as a real number. If this happens then $$ (-1)^n = (e^{\pi i})^n = e^{n \pi i}. $$ Now, if you look at the sequence $n = a, a + 2\pi, a + 4\pi, \dots, a+ 2k\pi,\dots$ this value is always equal to $$ e^{a \pi i}. $$ The rest of the limit, $n!e^n/n^n$ goes to infinity, so what you are left with is $$ e^{a \pi i} \infty $$ where $a$ can take on any real number. But because adding or subtracting $2\pi$ from $a$ doesn't change the value of the exponential, we can restrict our attention to $a$ in the interval $[0,2\pi)$. This gives the answer $$ e^{[0,2\pi)i} \infty $$ which is supposed to represent a value of $\infty$ but at every possible angle in the complex plane.
Squared triangular number
$$\sum_{k=1}^n k^2 = \frac{1}{6}n(n+1)(2n+1)$$ With $n=5$: $$\frac{1}{6}\cdot 5\cdot 6\cdot 11=55.$$ These are sometimes called square pyramidal numbers.
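The closed form agrees with direct summation; a one-line check:

```python
def sum_of_squares(n):
    """Closed form for 1^2 + 2^2 + ... + n^2."""
    return n * (n + 1) * (2 * n + 1) // 6

print(sum_of_squares(5))   # 55
```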
How come the columns of a matrix can form its nullspace?
It's a somewhat common mistake. Even though $\dim \ker A + \dim \operatorname{Im} A$ equals the dimension of the space, it doesn't mean at all that the two have a null intersection: just think of the fact that some nonzero matrices satisfy $A^2=0$ (then $\operatorname{Im} A \subset \ker A$, and it's possible to have equality). The correct statement is: there is a subspace $F$ such that $F \cap \ker A = 0$ (on which $A$ is injective) and such that $\operatorname{Im} A_{|F} = \operatorname{Im} A$. It's the key to proving the rank theorem.
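A minimal concrete instance of the pitfall, in pure Python: for the nilpotent matrix $A=\begin{pmatrix}0&1\\0&0\end{pmatrix}$ we have $A^2=0$, so $\operatorname{Im}A\subseteq\ker A$ even though $\dim\ker A+\dim\operatorname{Im}A=2$.

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(len(v))) for i in range(len(A))]

A = [[0, 1],
     [0, 0]]
A2 = mat_mul(A, A)

image_basis = mat_vec(A, [0, 1])   # A maps (0,1) to (1,0), spanning Im A
killed = mat_vec(A, image_basis)   # A applied to its own image
```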
Find all integer solutions to linear congruences
Using this, $(10,25)=5$ must divide $9$ for the second congruence to admit any solution, which it does not, so there is no solution. For the first congruence, $3x\equiv 24\pmod 6\implies 2\mid x$, i.e., any even value of $x$ (not only $8$) will satisfy it. So the system of linear congruences has no solution, as the second congruence is not solvable. Alternatively, for the second congruence, $10x-18=25y$ for some integer $y$, so $5(2x-5y)=18$, making $\frac{18}5=2x-5y$ an integer, which is a contradiction.
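Brute force over one full period confirms both claims, taking the system to be $3x\equiv 24\pmod 6$ and $10x\equiv 9\pmod{25}$ (the latter matching the divisibility argument above): every even residue solves the first congruence, and the second has no solutions at all.

```python
# all residues mod 6 solving 3x = 24 (mod 6): exactly the even ones
sols_1 = [x for x in range(6) if (3 * x) % 6 == 24 % 6]

# all residues mod 25 solving 10x = 9 (mod 25): there are none
sols_2 = [x for x in range(25) if (10 * x) % 25 == 9 % 25]
```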
generating permutations
Doing a Bing search for algorithm for generating permutations I came up with Heap's Algorithm.
Pontrjagin duality for profinite and torsion abelian groups
I've done some reading about Pontryagin duality and this is what I've come up with: Pontryagin duality says that $\hom(\hom(G,\mathbb{R}/\mathbb{Z}),\mathbb{R}/\mathbb{Z}) = G$ in case of $G$ being a locally compact abelian group, with $\hom$ standing for all continuous group homomorphisms. Now, in case of $G$ being a torsion abelian group or a profinite abelian group, we can replace $\mathbb{R}$ with $\mathbb{Q}$ (if $G$ is torsion, this is trivial, since every image of $g \in G$ has to have finite order; if $G$ is profinite, this follows from $$ \hom(G,\mathbb{R}/\mathbb{Z}) = \hom(\varprojlim G_i,\mathbb{R}/\mathbb{Z}) = \varinjlim\hom(G_i,\mathbb{R}/\mathbb{Z}) = \varinjlim\hom(G_i,\mathbb{Q}/\mathbb{Z})\\ = \hom(G,\mathbb{Q}/\mathbb{Z}),$$ since all $G_i$ are finite). Now, since $\varinjlim\hom(G_i,\mathbb{Q}/\mathbb{Z})$ is a quotient of $\oplus_i\hom(G_i,\mathbb{Q}/\mathbb{Z})$, which is clearly a torsion abelian group, we are done. Does this seem correct?
How to prove whether there exist positive integer solutions to this system of two-variable inequalities?
$$ 143 \cdot 71 - 141 \cdot 72 = 1 $$
A peculiar Euler sum
Here is my approach, with some definitions first: $\mathscr{H}_n$ is $\sum_{k=0}^{n}\frac{1}{2k+1}$; $D^{1/2}$ is the semi-derivative operator, i.e. a linear operator which acts on power series (centered at $0$ or $1$) by mapping $x^{\alpha}$ into $x^{\alpha-1/2}\frac{\Gamma(\alpha+1)}{\Gamma(\alpha+1/2)}$; $D^{1/2}_{\perp}$ is the adjoint operator of $D^{1/2}$ with respect to the standard inner product of $L^2(0,1)$, which is denoted through $\langle\cdot,\cdot\rangle$; $K$ is Catalan's constant; $K(x)$ and $E(x)$ are the complete elliptic integrals of the first and second kind, with the parameter being the elliptic modulus. $\tau$ is the involutive and self-adjoint operator bringing $g(x)$ into $g(1-x)$; $\text{IBP},\text{SBP}$ stand for integration/summation by parts; $\text{FL}$ stands for Fourier-Legendre expansion(s). $\newcommand{Li}{\operatorname{Li}}$ By semi-integration by parts, $\frac{\pi^3}{24}$ equals $$ \int_{0}^{1}\frac{\arcsin(\sqrt{x})\arcsin(\sqrt{1-x})}{\sqrt{x(1-x)}}\,dx=\frac{\pi}{2}\int_{0}^{1}D^{1/2}\log(1-x)\cdot D^{1/2}_{\perp}\log(x)\,dx. 
$$ where the LHS can be written both in terms of power series and Fourier-Legendre expansions, leading to: $$ \begin{eqnarray*}\frac{\pi^3}{24}&=&\sum_{m,n\geq 0}\frac{\left[\frac{1}{4^m}\binom{2m}{m}\right]\left[\frac{1}{4^n}\binom{2n}{n}\right]}{(2n+1)(2m+1)(m+n+1)\binom{m+n}{n}}\\ &=& \sum_{n\geq 0}\frac{(-1)^n}{2n+1}\left[\pi(-1)^n+\frac{2}{(2n+1)}-4(-1)^n\sum_{k=0}^{n}\frac{(-1)^k}{2k+1}\right]^2\end{eqnarray*}$$ or, equivalently: $$\begin{eqnarray*} \frac{\pi^3}{24}&=&\sum_{n\geq 0}\frac{(-1)^n}{2n+1}\left[\frac{2}{(2n+1)}+4(-1)^n\sum_{k>n}\frac{(-1)^k}{2k+1}\right]^2\\&=&\frac{\pi^3}{8}+16\sum_{n\geq 0}\frac{1}{(2n+1)^2}\sum_{k>n}\frac{(-1)^k}{2k+1}+16\sum_{n\geq 0}\frac{(-1)^n}{2n+1}\left[\sum_{k>n}\frac{(-1)^k}{2k+1}\right]^2\end{eqnarray*}$$ $$-\frac{\pi^3}{192}=-\sum_{n\geq 0}\frac{\mathscr{H}_n(2)(-1)^n}{(2n+3)}+\sum_{n\geq 0}\frac{(-1)^n}{2n+1}\left[\sum_{k>n}\frac{(-1)^k}{2k+1}\right]^2$$ $$\begin{eqnarray*}&& \sum_{n\geq 0}\frac{(-1)^n}{2n+1}\left[\sum_{k>n}\frac{(-1)^k}{2k+1}\right]^2\\ &=& \frac{5\pi^3}{192}-\frac{\pi K}{4}+K-\frac{\pi^2}{16}+\frac{\pi\log(2)}{4}-\int_{0}^{1}\sum_{m\geq 0}\frac{x^{2m}(-1)^m}{(2m+1)}\sum_{n\geq 0}\frac{x^{2n}(-1)^n}{(2n+1)^2}\,dx\end{eqnarray*} $$ $$\begin{eqnarray*} \sum_{m,n\geq 0}\frac{(-1)^{m+n}}{(2n+1)^2 (2m+1) (2m+2n+1)}&\stackrel{\text{sym}}{=}&\sum_{m,n\geq 0}\frac{(-1)^{m+n}(m+n+1)}{(2m+1)^2(2n+1)^2(2m+2n+1)}\\ &=&\frac{K^2}{2}+\frac{1}{2}\int_{0}^{1}\left(\sum_{n\geq 0}\frac{x^{2n}(-1)^n}{(2n+1)^2}\right)^2\,dx \end{eqnarray*}$$ via $$ \frac{\arcsin(\sqrt{x})}{\sqrt{x}}=\sum_{n\geq 0}P_n(2x-1)\left[\pi(-1)^n+\frac{2}{(2n+1)}-4(-1)^n\sum_{k=0}^{n}\frac{(-1)^k}{2k+1}\right].$$ By summation by parts and Fourier-Legendre expansions we have the following relations: $$ \sum_{n\geq 0}\frac{\mathscr{H}_n(2)(-1)^n}{(2n+1)}\stackrel{\text{SBP}}{\longleftrightarrow}\sum_{n\geq 0}\frac{\sum_{k=0}^{n}\frac{(-1)^k}{2k+1}}{(2n+1)^2} 
\stackrel{\text{FL}}{\longleftrightarrow}\int_{0}^{1}\frac{\arcsin(\sqrt{x})}{\sqrt{x}}K(1-x)\,dx=\left\langle D^{-1/2}\frac{\text{arctanh}{\sqrt{x}}}{x\sqrt{\pi}},K(1-x)\right\rangle $$ where the RHS equals $$\begin{eqnarray*} \left\langle \frac{\text{arctanh}{\sqrt{x}}}{x\sqrt{\pi}},(\tau D^{-1/2})K(x)\right\rangle &=& \int_{0}^{1}\frac{\text{arctanh}{\sqrt{x}}\arcsin(\sqrt{1-x})}{x}\,dx\\ &=& 2\int_{0}^{\pi/2}\theta\tan(\theta)\text{arctanh}{\cos\theta}\,d\theta\end{eqnarray*} $$ and $$ \int_{0}^{\pi/2}\theta \tan(\theta)\left(\cos\theta\right)^{2k+1}\,d\theta \stackrel{\text{IBP}}{=}\frac{4^k}{(2k+1)^2 \binom{2k}{k}} $$ allows us to state $$\begin{eqnarray*}\int_{0}^{1}\frac{\arcsin(\sqrt{x})}{\sqrt{x}}K(1-x)\,dx&=&2\sum_{k\geq 0}\frac{4^k}{(2k+1)^3 \binom{2k}{k}}=-2\int_{0}^{1}\frac{\arcsin(x)\log(x)}{x\sqrt{1-x^2}}\,dx\\&=&2\int_{0}^{\pi/2}\frac{-\theta\log(\sin\theta)}{\sin(\theta)}\,d\theta.\end{eqnarray*}$$ We may notice that $$ -\log(\sin \theta) = \log(2)+\sum_{n\geq 1}\frac{\cos(2n\theta)}{n},$$ $$ \int_{0}^{\pi/2}\frac{\theta}{\sin\theta}\cos(2n\theta)\,d\theta = 2K-2\sum_{k=0}^{n-1}\frac{(-1)^k}{(2k+1)^2}, $$ hence the integral $\int_{0}^{1}\frac{\arcsin(\sqrt{x})}{\sqrt{x}}K(1-x)\,dx$ boils down to the Euler sum $$ \sum_{n\geq 1}\frac{1}{n}\sum_{k\geq n}\frac{(-1)^k}{(2k+1)^2}\stackrel{\text{SBP}}{=}\sum_{n\geq 1}\frac{(-1)^n H_n}{(2n+1)^2}=\int_{0}^{1}\frac{\log(x)\log(1+x^2)}{1+x^2}\,dx $$ getting rid of some alternating Stirling numbers. Additionally the last integral is known (at least) since De Doelder and Flajolet: $$ \int_{0}^{1}\frac{\log(x)\log(1+x^2)}{1+x^2}\,dx = -\frac{\pi^3}{64}-K\log(2)-\frac{\pi}{16}\log^2(2)+2\operatorname{Im}\Li_3\left(\tfrac{1+i}{2}\right).$$ Now we just have to re-combine all the pieces to find a closed form for $\int_{0}^{1}\operatorname{Ti}_2(x)^2\,\frac{dx}{x^2}$ and the monstrosity $\sum_{n\geq 0}\frac{(-1)^n}{2n+1}\left(\sum_{k=0}^{n}\frac{(-1)^k}{(2k+1)}\right)^2$. 
$$ \sum_{k\geq 0}\frac{4^k}{(2k+1)^3 \binom{2k}{k}} =\phantom{}_4 F_3\left(\tfrac{1}{2},\tfrac{1}{2},1,1;\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{2};1\right) = -\frac{\pi^3}{32}-\frac{\pi}{8}\log^2(2)+4\operatorname{Im}\Li_3\left(\tfrac{1+i}{2}\right)$$ $$ \sum_{n\geq 0}\frac{1}{(2n+1)^2}\left[\pi+\frac{2(-1)^n}{(2n+1)}-4\sum_{k=0}^{n}\frac{(-1)^k}{2k+1}\right]=-\frac{\pi^3}{32}-\frac{\pi}{8}\log^2(2)+4\operatorname{Im}\Li_3\left(\tfrac{1+i}{2}\right)$$ $$ \sum_{n\geq 0}\frac{1}{(2n+1)^2}\sum_{k=0}^{n}\frac{(-1)^k}{2k+1}=\frac{7\pi^3}{128}+\frac{\pi}{32}\log^2(2)-\operatorname{Im}\Li_3\left(\tfrac{1+i}{2}\right)$$ $$ \sum_{n\geq 0}\frac{\mathscr{H}_n(2)(-1)^n}{2n+3} = \frac{3\pi^3}{128}+\frac{\pi}{32}\log^2(2)-\operatorname{Im}\Li_3\left(\tfrac{1+i}{2}\right) $$ $$ \sum_{n\geq 0}\frac{(-1)^n}{2n+1}\left[\sum_{k>n}\frac{(-1)^k}{2k+1}\right]^2=\frac{7\pi^3}{384}+\frac{\pi}{32}\log^2(2)-\operatorname{Im}\Li_3\left(\tfrac{1+i}{2}\right) $$ $$ \sum_{m,n\geq 0}\frac{(-1)^{m+n}}{(2n+1)^2 (2m+1) (2m+2n+1)}=\frac{\pi^3}{128}-\frac{\pi K}{4}+K-\frac{\pi^2}{16}+\frac{\pi\log(2)}{4}-\frac{\pi\log^2(2)}{32}+\operatorname{Im}\Li_3\left(\tfrac{1+i}{2}\right) $$ $$ \int_{0}^{1}\left(\sum_{n\geq 0}\frac{x^{2n}(-1)^n}{(2n+1)^2}\right)^2\,dx =\frac{\pi^3}{64}-\frac{\pi K}{2}+2K-K^2-\frac{\pi^2}{8}+\frac{\pi\log(2)}{2}-\frac{\pi\log^2(2)}{16}+2\operatorname{Im}\Li_3\left(\tfrac{1+i}{2}\right).$$
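As a sanity check, the first closed form above can be verified numerically (a Python sketch; $\operatorname{Im}\operatorname{Li}_3\left(\frac{1+i}{2}\right)$ is computed from the defining series $\operatorname{Li}_3(z)=\sum_{n\geq 1}z^n/n^3$, which converges since $\left|\frac{1+i}{2}\right|<1$):

```python
import math

# LHS: sum of 4^k / ((2k+1)^3 * C(2k, k)), accumulated iteratively
# via the ratio term_{k+1}/term_k = 2(k+1)(2k+1)^2 / (2k+3)^3.
lhs = 0.0
term = 1.0  # k = 0 term
for k in range(20000):
    lhs += term
    term *= 2 * (k + 1) * (2 * k + 1) ** 2 / (2 * k + 3) ** 3

# Im Li_3((1+i)/2) from the defining series; |z| = 1/sqrt(2),
# so 200 terms are far more than enough.
z = (1 + 1j) / 2
li3 = sum(z ** n / n ** 3 for n in range(1, 200))

rhs = -math.pi ** 3 / 32 - math.pi / 8 * math.log(2) ** 2 + 4 * li3.imag
```

Both sides come out to approximately $1.1227$.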
Determine all n element N, so that (1-i)^n is natural
Alright, I just found my mistake: of course condition 2) is wrong. It needs to be as follows: 2) $e^{-\frac{\pi}{4}in}$ needs to be an element of $\mathbb{N}$, so $\frac{\pi}{4}n = 2\pi k$, i.e. $n = 8k$ with $k \in \mathbb{N}$. Now 2) implies 1), therefore the only condition that needs to be accounted for is 2). Done.
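The conclusion can be double-checked directly (a quick Python sketch): among small exponents, $(1-i)^n$ is a natural number exactly when $8 \mid n$.

```python
# For which n in 1..32 is (1 - i)^n a natural number?
natural_exponents = []
for n in range(1, 33):
    w = (1 - 1j) ** n
    # w is natural iff its imaginary part vanishes and its real part
    # is a positive integer (tiny floating-point tolerance allowed).
    if abs(w.imag) < 1e-9 and w.real > 0 and abs(w.real - round(w.real)) < 1e-9:
        natural_exponents.append(n)
```

Note that $n=4$ is correctly excluded: $(1-i)^4 = -4$ is an integer but not a natural number.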
Measurable Set can be Written as Countable Union of Measurable Sets
I am working out of the 4th edition. Near the top of p. 41, the authors write Now consider the case that $m^{\ast}(E) = \infty$. Then $E$ may be expressed as the disjoint union of a countable collection $\{E_k\}_{k=1}^{\infty}$ of measurable sets, each of which has finite outer measure. Note that this passage is from the chapter on the Lebesgue measure on $\mathbb{R}$, and that the particular quote above is part of the proof of a theorem which asserts several equivalent characterizations of the measurability of a set of real numbers $E$. Hence the theorem (and the quoted passage) is about sets of real numbers. This is important, as the real numbers are $\sigma$-finite—the result does not hold in complete generality. As an example of such a decomposition into disjoint sets of finite measure, take $$ E_k = E \cap [k,k+1), $$ where $k$ ranges over the integers. Since each interval is measurable and $E$ is measurable (Royden and Fitzpatrick are seeking to show that each of four conditions is equivalent to measurability, hence $E$ is measurable by hypothesis), the intersection of each interval with $E$ is also measurable. Finiteness of the outer measure of $E_k$ follows from monotonicity: each set is contained in an interval of unit length.
Any real number has at most two decimal representations
$\pi$ does indeed have only one decimal representation. However, some numbers have two! For example, $$1.0000...=0.9999....$$ Based on this, it's reasonable to ask how many decimal representations a number can have. E.g., is there a number with three decimal representations? Tao's comment is that the answer is no: every number has either one or two decimal representations, that is, at most two.
How to sketch the first and second derivative of this curve?
At the left end, the slope is nearly $+\infty$. It decreases until it looks like it's about $1$. So I'd put a vertical asymptote at the left end of the interval, and draw a curve coming down from $+\infty$ to $1$ at the point where the function has the sharp point. After that, the slope looks like it's about $-2/3$, so I'd sketch a horizontal line at $y=-2/3$ over the interval between the two sharp points of the graph. Then it looks like the slope is about $1$ and it gets steeper, so I put another asymptote at the right end and draw a curve starting at $1$ and shooting up to $+\infty.$
Understanding the Beck-Chevalley condition
$\require{AMScd}$ When you do algebraic geometry, you can consider diagrams like $$ \begin{CD} X @>f>> A\\ @VgVV \\ B \end{CD} $$ where $X,A,B$ are spaces and $g,f$ maps thereof. It is fairly natural to think of these as "generalized functions" between $A$ and $B$. They are in fact called "correspondences" between $A$ and $B$, and they organize in a category that contains your category of spaces in (actually two) canonical way(s). Morally, these guys correspond to "pull-push" functors like $g_*f^* : D(A)\to D(B)$. But there is a subtlety: in order for this to be a well-defined (bi)category, you must specify coherence conditions on a composition law: the most natural thing to do, given a diagram $$ \begin{CD} @. X @>f>> A\\ @. @VgVV \\ Y @>h>> B\\ @VkVV\\ C \end{CD} $$ is to complete it to a pullback $$ \begin{CD} P @>q>> X @>f>> A\\ @VpVV @VgVV \\ Y @>h>> B\\ @VkVV\\ C \end{CD} $$ Notice that you have two seemingly different ways to read your composition now: the first, as $(kp)_*(fq)^* = k_* p_* q^* f^*$ and the second as $k_* h^* g_* f^*$. When will these two be equal (or rather, canonically isomorphic)?
How to find the value of $\lim_\limits{x \to 0}\frac{e^{x\cos{x^2}}-e^{x}}{x^5}$
$$\cos x^2=1-\frac{x^4}{2}+o(x^4)$$ so $$e^{x\cos x^2} = e^{x-\frac{x^5}{2}+o(x^5)} = e^xe^{-\frac{x^5}{2}+o(x^5)}$$ therefore $$\frac{e^{x\cos x^2}-e^x}{x^5} = e^x\frac{e^{-\frac{x^5}{2}+o(x^5)}-1}{x^5} \sim e^x\frac{-\frac{x^5}{2}}{x^5} \xrightarrow[x\to0]{}-\frac{1}{2}$$ using $e^u-1\sim u$ when $u\to0$.
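Numerically (a quick Python sketch), the ratio indeed approaches $-\tfrac12$; note that very small $x$ must be avoided because the numerator suffers catastrophic cancellation:

```python
import math

def ratio(x):
    """(e^{x cos x^2} - e^x) / x^5."""
    return (math.exp(x * math.cos(x * x)) - math.exp(x)) / x ** 5

# The values approach -1/2 as x shrinks (the e^x prefactor tends to 1).
values = [ratio(x) for x in (0.1, 0.05, 0.02)]
```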
Two limit questions involving radicals
This exercise seems "bad-looking" to me, but here it is: $$\lim_{x\rightarrow1^{-}}\frac{\sqrt{x^2+1}-2}{\sqrt{x-1}}=i\infty$$ $$\lim_{x\rightarrow1^{+}}\frac{\sqrt{x^2+1}-2}{\sqrt{x-1}}=-\infty$$ (the numerator tends to $\sqrt{2}-2<0$ while the denominator tends to $0^+$), so the two-sided limit does not exist. $$\lim_{x\rightarrow1^{-}}\frac{x^2+1}{\sqrt{2x+2}-2}=-\infty$$ $$\lim_{x\rightarrow1^{+}}\frac{x^2+1}{\sqrt{2x+2}-2}=\infty$$ so the two-sided limit does not exist.
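The behaviour of the first ratio can be seen numerically with Python's `cmath` (a sketch; for $x<1$ the principal square root of $x-1$ is purely imaginary):

```python
import cmath

def first_ratio(x):
    """(sqrt(x^2 + 1) - 2) / sqrt(x - 1), with the principal square root."""
    return (cmath.sqrt(x * x + 1) - 2) / cmath.sqrt(x - 1)

# Just left of 1 the denominator is purely imaginary, just right of 1
# it is real; in both cases the modulus blows up, so no two-sided
# limit can exist.
left = first_ratio(1 - 1e-6)
right = first_ratio(1 + 1e-6)
```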
Notation: is it proper to multiply matrix with a vector represented using an n-tuple $\begin{bmatrix} a_1 & a_2 \\ a_3 & a_4 \end{bmatrix}(x_1, x_2)$?
In at least one course that I've been to the $n$-tuple notation $(x_1,\dots,x_n)$ was used as a convenient way to write column vectors on one line. So in that course our convention was $$(x_1,\dots,x_n)=\begin{pmatrix}x_1\\ \vdots\\ x_n\end{pmatrix}$$ $$(x_1,\dots,x_n)\neq\begin{pmatrix}x_1& \dots& x_n\end{pmatrix}$$ In other words adding commas turns a row into a column (which is a bit confusing but saves space on the page). In this case multiplying by a matrix on the left would make perfect sense.
Ordinal multiplication property: $\alpha<\beta$ implies $\alpha\gamma\le\beta\gamma$
If you have already proved that inequality of ordinals is trichotomic, then the second part, which has the form of implication $$\gamma \alpha &lt; \gamma \beta \qquad\Rightarrow\qquad \alpha&lt;\beta$$ can be equivalently reformulated as $$\beta\le\alpha \qquad\Rightarrow\qquad \gamma\beta\le\gamma\alpha.$$ So now the two parts have very similar form: We have to prove for $\gamma&gt;0$ that $\alpha&lt;\beta \qquad\Rightarrow\qquad \gamma\alpha\le\gamma\beta$ $\alpha\le\beta \qquad\Rightarrow\qquad \alpha\gamma\le\beta\gamma$ (I have reversed the notation in the second parts so that the assumption in both parts is similar.) By definition $\alpha\le\beta$ means that the ordinal $\alpha$ is isomorphic to an initial segment of the ordinal $\beta$. But it can be shown that this is equivalent to the fact that $\alpha$ is isomorphic to a subset of $\beta$. (See for example Order-isomorphic with a subset iff order-isomorphic with an initial segment. This fact was also mentioned in an answer to another question of yours). So now we know that $\alpha\le\beta$ and we wonder whether $\gamma\alpha$ can be realized as a subset of $\gamma\beta$; and similarly for $\alpha\gamma$ and $\beta\gamma$. This is not very difficult: The ordinal $\gamma\alpha$ simply means that we have replaced each point of $\alpha$ by a copy of the ordinal $\gamma$. This can be clearly embedded into "$\beta$-many" copies of $\gamma$. Similarly, $\alpha\gamma$ means that we have "$\gamma$-many" copies of $\alpha$. If we take "$\gamma$-many" copies of $\beta$, we can embed each copy of $\alpha$ into a copy of $\beta$. The above is a rather informal argument, I assume you would be able to make it more formal and describe the embeddings between the well-ordered sets we are working with.
If $f ' (x ; y)=0$ for every $x$ in open convex set, then $f$ is constant on open convex set.
If $f'(x,y)=\frac{\partial}{\partial y}f(x,y)$ as you wrote, then $f$ does not need to be constant. Take for example $f:\mathbb R \times \mathbb R\rightarrow \mathbb R: (x,y) \mapsto x$. It is $\frac{\partial}{\partial y}f(x,y) = 0$ for each $(x,y) \in \mathbb R^2$, but $f$ is not constant...
Transformation of random variable and distribution function
Your first step is correct, but I do not agree with the second part of your answer. \begin{align*} F_Y(y) & = P(Y \leq y) \\ & = P( \exp(X) \leq y ) \\ & = P( X \leq \log y) \\ & = F_X(\log y) \\ & = 1 - \exp \left( - \lambda \log y \right) \\ & = 1 - y^{-\lambda} \end{align*}
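A quick simulation (a Python sketch; the rate $\lambda$ and the evaluation point $y$ are arbitrary choices for illustration) confirms that the empirical CDF of $Y = e^X$ matches $1 - y^{-\lambda}$:

```python
import math
import random

random.seed(0)
lam = 2.0    # rate of the exponential distribution of X
y = 1.5      # evaluation point (y >= 1)
n = 100_000

# Empirical P(e^X <= y) versus the derived CDF 1 - y^{-lambda}.
hits = sum(math.exp(random.expovariate(lam)) <= y for _ in range(n))
empirical = hits / n
theoretical = 1 - y ** (-lam)
```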
Integrating $\int_{a}^{\infty} \frac{x^4e^x}{(e^x-1)^2}dx$ for large values of $a$
We have that for $a>\ln(2)$, $$\int_{a}^{\infty}\frac{x^4e^x}{(e^x-1)^2}dx=\int_{a}^{\infty}\frac{x^4e^{-x/2}}{e^{x/2}(1-e^{-x})^2}dx< \frac{\int_{0}^{\infty} x^4 e^{-x/2}dx}{e^{a/2}(1-e^{-\ln(2)})^2}=\frac{2^2\cdot 2^5\cdot 4!}{e^{a/2}}$$ which implies that the LHS goes to zero exponentially as $a$ goes to $+\infty$.
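The bound can be checked numerically for a sample value of $a$ (a Python sketch using a simple midpoint Riemann sum; the truncation point and step size are ad hoc):

```python
import math

def integrand(x):
    return x ** 4 * math.exp(x) / (math.exp(x) - 1) ** 2

a = 2.0
# Midpoint rule on [a, 60]; the integrand decays like x^4 e^{-x},
# so the truncation error at 60 is negligible.
h = 0.001
integral = sum(integrand(a + (i + 0.5) * h)
               for i in range(int((60 - a) / h))) * h

# The bound 2^2 * 2^5 * 4! * e^{-a/2} from the estimate above.
bound = 2 ** 2 * 2 ** 5 * math.factorial(4) * math.exp(-a / 2)
```

For $a=2$ the integral is about $24$, comfortably below the (very generous) bound $3072\,e^{-1}\approx 1130$.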
What is this associative algebra with two generators and one defining relation?
Similar to this question, this algebra is the universal enveloping algebra of a Lie algebra, this time the unique nonabelian $2$-dimensional Lie algebra. One way to describe it is as the Borel subalgebra of $\mathfrak{sl}_2(\mathbb{C})$: explicitly, it is the Lie subalgebra of $\mathfrak{sl}_2(\mathbb{C})$ spanned by $$H = \left[ \begin{array}{cc} 1 &amp; 0 \\ 0 &amp; -1 \end{array} \right], E = \left[ \begin{array}{cc} 0 &amp; 1 \\ 0 &amp; 0 \end{array} \right].$$ You can check for yourself that $[H, E] = HE - EH = 2E$. If we write $\mathfrak{g} = \mathfrak{sl}_2(\mathbb{C})$ then it's common to denote the Borel subalgebra by $\mathfrak{b}$ and the universal enveloping algebra by $U(-)$, so that this algebra would be called $U(\mathfrak{b})$. It naturally occurs in the study of the representation theory of $\mathfrak{g}$, or equivalently the study of modules over $U(\mathfrak{g})$; see Verma module for some details.
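The bracket relation is easy to verify with a few lines of Python (plain nested lists, no external libraries):

```python
def matmul(M, N):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matsub(M, N):
    """Entrywise difference of two 2x2 matrices."""
    return [[M[i][j] - N[i][j] for j in range(2)] for i in range(2)]

H = [[1, 0], [0, -1]]
E = [[0, 1], [0, 0]]

# [H, E] = HE - EH should equal 2E.
bracket = matsub(matmul(H, E), matmul(E, H))
```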
how to describe $f_*: \pi(S^1,1)\rightarrow \pi(S^1,1)$ in terms of the isomorphism of $\pi(S^1,1) \cong Z$
I don't know if I am right, but I will give it a try. This is what I know: (1) $\displaystyle f_{*} (\langle\alpha\rangle) = \langle f \circ \alpha\rangle$ where $\alpha$ is a loop based at $1$ in $S^1$. (2) The exponential mapping $\pi: \mathbb{R} \rightarrow S^1, x \mapsto e^{2\pi ix}$ and the path $p_n(s) = ns, 0\leq s\leq1, n \in \mathbb{Z}$, joining $0$ to $n$ in $\mathbb{R}$, provide us with an isomorphism $\phi: \mathbb{Z} \rightarrow \pi(S^1,1), n\mapsto \langle\pi\circ p_n\rangle$. (3) Now say we have a loop $\beta$ based at $1$; then we can find $n_0\in \mathbb{Z}$ such that $\pi\circ p_{n_0}=\beta$, so $\beta= e^{2\pi i n_0 s}$ and then $f\circ \beta$ would be $e^{2\pi i n_0 k s}$, which is $\phi(n_0 k)$. So does it mean that, in terms of the isomorphism, $f_*$ just multiplies by $k$, or what?
How do we solve the system of equations? (18-1-2013)
Divide the first equation by $\sqrt[4]{x}$ and the second by $\sqrt[4]{y}$. Then adding and subtracting the two resulting equations gives us the new pair of simultaneous equations: $$ \frac{2}{\sqrt[4]{x}}+\frac{1}{\sqrt[4]{y}}=\frac{1}{2} \\ \frac{2}{\sqrt[4]{x}}-\frac{1}{\sqrt[4]{y}}=2\frac{2\sqrt{x}+\sqrt{y}}{x+y} \, . $$ Multiplying these two equations together, we have: $$ \frac{4}{\sqrt{x}}-\frac{1}{\sqrt{y}}=\frac{2\sqrt{x}+\sqrt{y}}{x+y} \, . $$ Clearing denominators yields: $$ (4\sqrt{y}-\sqrt{x})(x+y)=2x\sqrt{y}+y\sqrt{x} $$ which after some algebra can be reduced to $$ (x+2y)(2\sqrt{y}-\sqrt{x})=0 \, . $$ So either $x=-2y$ or $\sqrt{x}=2\sqrt{y}$. If $x$ and $y$ are positive and real the first is clearly impossible and the second is equivalent to $x=4y$. (If they're not positive and real, we have to worry more about branch cuts than I really want to.) Now, if $x=4y$ we have $$ \frac{\sqrt{2}}{\sqrt[4]{y}}+\frac{1}{\sqrt[4]{y}}=\frac{1}{2} \, , $$ from the top equation in this answer. So $\sqrt[4]{y}=2(1+\sqrt{2})$, yielding the solution $$ x=64(1+\sqrt{2})^4 \\ y=16(1+\sqrt{2})^4 \, , $$ which upon expanding is precisely what Robert Israel got from Maple in the other answer.
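One can confirm numerically (a quick Python sketch) that $x=64(1+\sqrt2)^4$, $y=16(1+\sqrt2)^4$ satisfy both equations of the divided system above:

```python
import math

r = (1 + math.sqrt(2)) ** 4
x, y = 64 * r, 16 * r

# First equation: 2/x^{1/4} + 1/y^{1/4} should be 1/2.
eq1 = 2 / x ** 0.25 + 1 / y ** 0.25

# Second equation: 2/x^{1/4} - 1/y^{1/4} should equal
# 2(2 sqrt(x) + sqrt(y)) / (x + y).
eq2 = 2 / x ** 0.25 - 1 / y ** 0.25
rhs2 = 2 * (2 * math.sqrt(x) + math.sqrt(y)) / (x + y)
```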
How many ways can we choose $20$ letters from a collection of $9$ b’s , $8$ h’s, $10$ s’s?
You are asked to find the number of sums $b+h+s=20$ where $b,h,s$ are nonnegative integers with $b\leq9$, $h\leq8$ and $s\leq10$. Define $b'=9-b$, $h'=8-h$ and $s'=10-s$. Then the question amounts to finding the number of sums $b'+h'+s'=7$ where again $b',h',s'$ are nonnegative integers with $b'\leq9$, $h'\leq8$ and $s'\leq10$. Fortunately the condition that $b',h',s'$ are nonnegative integers with $b'+h'+s'=7$ already implies that $b'\leq9$, $h'\leq8$ and $s'\leq10$, so that the last condition can be put aside now. Then with stars and bars we find $\binom{7+2}{2}=36$ possibilities. The trick I used here does not always work, but it is a good habit to check that out.
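The count is small enough to confirm by brute force (a quick Python sketch):

```python
# Count triples (b, h, s) with b + h + s = 20,
# 0 <= b <= 9, 0 <= h <= 8, 0 <= s <= 10.
count = sum(1
            for b in range(10)
            for h in range(9)
            if 0 <= 20 - b - h <= 10)
```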
A density proof not via Stone-Weierstrass
Consider using the power series for $e^{|x|} = \sum_{n\ge0} \frac{|x|^n}{n!}$ and use the fact that partial sums are polynomials and they converge uniformly in compact sets. Such a partial sum combined with $e^{-|x|}$ will be almost $1$ for a sufficiently large set.
Does $\mathbb{E}e^X=e^{\mathbb{E}X}$?
$e^{EX} \leq Ee^{X}$ by Jensen's inequality, and equality cannot hold unless $X$ is almost surely constant. Take any non-constant random variable (like the outcome of a coin toss) to get a counter-example. When you use the Taylor expansion and linearity, the problem you face is that $EX^{n} =(EX)^{n}$ is not true in general.
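A minimal counter-example in Python (a sketch): a fair coin toss $X \in \{-1, +1\}$ gives $\mathbb{E}e^X = \cosh(1) \approx 1.543$, strictly larger than $e^{\mathbb{E}X} = e^0 = 1$.

```python
import math

# X takes values -1 and +1 with probability 1/2 each.
E_X = 0.5 * (-1) + 0.5 * 1                        # = 0
E_expX = 0.5 * math.exp(-1) + 0.5 * math.exp(1)   # = cosh(1)
exp_EX = math.exp(E_X)                            # = 1
```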
How to formulate the sum of a sequence?
The recurrence is solved from the roots of the characteristic polynomial $$r^2=br+c.$$ The latter is quadratic, so that (assuming distinct roots) the general solution is $$a_n=pr_0^{n}+qr_1^n$$ where the constants $p,q$ are determined from the initial conditions. You easily deduce (for $r_0,r_1\ne1$) $$\sum_{k=0}^n a_k=p\frac{r_0^{n+1}-1}{r_0-1}+q\frac{r_1^{n+1}-1}{r_1-1}.$$
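As an illustration (a Python sketch; the recurrence $a_n = a_{n-1} + 2a_{n-2}$ with $a_0=a_1=1$ is an arbitrary choice), the characteristic roots are $r_0=2$, $r_1=-1$, the initial conditions give $p=\tfrac23$, $q=\tfrac13$, and the partial-sum formula matches a direct computation:

```python
from fractions import Fraction as F

# Recurrence a_n = b a_{n-1} + c a_{n-2} with b = 1, c = 2:
# r^2 = r + 2 has roots r0 = 2, r1 = -1.
# a_0 = a_1 = 1 forces p + q = 1 and 2p - q = 1, so p = 2/3, q = 1/3.
p, q = F(2, 3), F(1, 3)
r0, r1 = F(2), F(-1)

def closed_sum(n):
    """p (r0^{n+1} - 1)/(r0 - 1) + q (r1^{n+1} - 1)/(r1 - 1)."""
    return (p * (r0 ** (n + 1) - 1) / (r0 - 1)
            + q * (r1 ** (n + 1) - 1) / (r1 - 1))

# Direct computation of the terms and their partial sums.
a = [1, 1]
for n in range(2, 12):
    a.append(a[-1] + 2 * a[-2])
direct = [sum(a[: n + 1]) for n in range(12)]
formula = [closed_sum(n) for n in range(12)]
```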
Maximum value of the modulus of a holomorphic function
$$f(z) = (z-1)(z+1/2) = z^2 - z/2 - 1/2$$ Since you know that the maximum is hit on the boundary, $z = e^{i\theta}$, we get that $$F(\theta) = e^{2i \theta} - \dfrac{e^{i\theta}}2 - \dfrac12 = \left(\cos(2\theta) - \dfrac{\cos(\theta)}2 - \dfrac12 \right)+i \left(\sin(2 \theta) - \dfrac{\sin(\theta)}2\right)$$ Let $g(\theta) = \vert F(\theta) \vert^2$. \begin{align} g(\theta) &amp; = \vert F(\theta) \vert^2 = \left(\cos(2\theta) - \dfrac{\cos(\theta)}2 - \dfrac12 \right)^2 + \left(\sin(2 \theta) - \dfrac{\sin(\theta)}2\right)^2 \\ &amp; = \cos^2(2\theta) + \dfrac{\cos^2(\theta)}4 + \dfrac14 - \cos(2\theta) \cos(\theta) - \cos(2\theta) + \dfrac{\cos(\theta)}2\\ &amp; + \sin^2(2\theta) + \dfrac{\sin^2(\theta)}4 - \sin(2\theta) \sin(\theta)\\ &amp; = 1 + \dfrac14 + \dfrac14 - \dfrac{\cos(\theta)}2 - \cos(2\theta)\\ &amp; = \dfrac32 - \dfrac{\cos(\theta)}2 - \cos(2 \theta)\\ &amp; = \dfrac52 - \dfrac{\cos(\theta)}2 - 2 \cos^2(\theta)\\ &amp; = \dfrac52 - 2 \left( \cos(\theta) + \dfrac18\right)^2 + 2 \left(\dfrac18 \right)^2\\ &amp; = \dfrac52 + \dfrac1{32} - 2 \left( \cos(\theta) + \dfrac18\right)^2 \end{align} The maximum is at $\cos(\theta) = -\dfrac18$ and the maximum value of $g(\theta) = \dfrac{81}{32}$. Hence, the maximum value is $$\max_{\vert z \vert \leq 1}\vert f(z) \vert = \sqrt{\dfrac{81}{32}} = \dfrac98 \sqrt{2}$$
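A numerical scan over the boundary (a quick Python sketch) agrees with the value $\frac98\sqrt2 \approx 1.5910$:

```python
import cmath
import math

def f(z):
    return (z - 1) * (z + 0.5)

# Sample |f| at many points of the unit circle and take the maximum.
N = 100_000
max_mod = max(abs(f(cmath.exp(2j * math.pi * k / N))) for k in range(N))
```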
Find all values for cos(i)
$$\cos(i)=\frac{e^{i^2}+e^{-i^2}}{2}=\frac{e^{-1}+e}{2}=\frac{1+e^2}{2e}$$
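A quick check with Python's `cmath` (a sketch) confirms the value, which is just $\cosh(1)$:

```python
import cmath
import math

value = cmath.cos(1j)                       # cos(i)
expected = (1 + math.e ** 2) / (2 * math.e)  # = (e^{-1} + e)/2 = cosh(1)
```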