if $|az^2+bz+c|\le 1$, find the maximum of $|a|+|b|$
By replacing $f(z)$ with $$ \tilde f(z) = e^{i\phi} f(e^{i\psi} z) = e^{i(\phi+2\psi)}a z^2 + e^{i(\phi+\psi)}b z + c = \tilde a z^2 + \tilde b z + c $$ with suitably chosen $\phi, \psi \in \Bbb R$ we can assume that both $a$ and $b$ are real and $\ge 0$. Now let $\omega = e^{i\pi/3} = \frac 12 + \frac i2 \sqrt 3$ be a primitive $6^{\text{th}}$ root of unity. Then $\omega^2 = \omega - 1$ and $ \omega^{10} = \omega^5 - 1$. Therefore $$ f(\omega) = (a+b)\omega + c - a \\ f(\omega^5) = (a+b)\omega^5 + c - a $$ which implies $$ a + b = \frac{f(\omega)- f(\omega^5)}{\omega - \omega^5} = \frac{f(\omega)- f(\omega^5)}{i \sqrt 3} \, . $$ Now use that $|f(z)| \le 1$ on the unit circle; this gives the estimate $$ |a| + |b| = a + b \le \frac{2}{\sqrt 3} \, . $$ And this is the actual maximum. Credit for the following example goes to achille hui: Let $p(z) = 2z^2+4z - 1$, then $$ \begin{align}|p(e^{it})|^2 &= |2e^{it} - e^{-it} + 4|^2 = |\cos(t) + 4 + 3i\sin(t)|^2\\ &=(\cos(t)+4)^2 + 9\sin(t)^2 = 25 +8\cos(t)(1-\cos(t))\\ &\le 25 + \frac{8}{4} = 27 \, ,\end{align} $$ so that $$ f(z) = \frac{2z^2 + 4z - 1}{3\sqrt{3}} $$ satisfies $|f(z)| \le 1$ on the unit circle and therefore – due to the maximum principle – for all $z$ in the unit disk. Remark: This is how I came up with the above approach: In order to compute $a+b$ from two equations $$ f(z_1) = a(z_1^2-z_1) + (a+b)z_1 + c \\ f(z_2) = a(z_2^2-z_2) + (a+b)z_2 + c $$ we need different $z_1, z_2$ with $z_1^2-z_1 = z_2^2-z_2$, or $z_1 + z_2 - 1 = 0$. Also $z_1, z_2$ should be of absolute value $\le 1$, and their difference as large as possible. This eventually led to the choice $z_1 = \frac 12 + \frac i2 \sqrt 3$ and $z_2 = \frac 12 - \frac i2 \sqrt 3$.
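As a quick numerical sanity check (a sketch, not part of the original argument), one can sample $|f|$ on the unit circle for the extremal example and confirm both $|f| \le 1$ and $|a| + |b| = 2/\sqrt 3$:

```python
import cmath
import math

# The extremal example f(z) = (2z^2 + 4z - 1)/(3*sqrt(3)) from above.
def f(z):
    return (2 * z**2 + 4 * z - 1) / (3 * math.sqrt(3))

# Sample |f(e^{it})| on a fine grid of the unit circle.
ts = [2 * math.pi * k / 10000 for k in range(10000)]
max_mod = max(abs(f(cmath.exp(1j * t))) for t in ts)

# For this f, |a| + |b| = (2 + 4)/(3*sqrt(3)) = 2/sqrt(3).
a_plus_b = (2 + 4) / (3 * math.sqrt(3))
```

The sampled maximum comes out essentially equal to $1$ (it is attained at $t = \pm\pi/3$, where $\cos t = 1/2$).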
Probability of 5 cards drawn from shuffled deck
a) There are $ \binom{52}{5} = 2,598,960 $ ways of choosing 5 cards. There is $\binom{4}{4} = 1 $ way to select the 4 aces, and $\binom{48}{1}= 48 $ ways to select the remaining card. Thus there are a total of 48 ways to select 5 cards such that 4 of them are aces, and the probability is: $\frac{48}{2,598,960} = \frac{1}{54,145}$. b) There is $\binom{4}{4} = 1 $ way to choose the 4 aces, and there are $\binom{4}{1} = 4 $ ways to choose a king. So there are $1\times4 = 4 $ ways to choose 5 cards such that 4 are aces and the other is a king. The probability is: $\frac{4}{2,598,960} = \frac{1}{649,740} $. c) There are $\binom{4}{3} = 4 $ ways to choose 3 tens, and there are $\binom{4}{2} = 6 $ ways of choosing 2 jacks. So there are $ 4\times 6 = 24 $ ways to choose 5 cards such that 3 are tens and 2 are jacks. The probability for this case is: $\frac{24}{2,598,960} = \frac{1}{108,290} $. d) The probability of 5 non-ace cards is: $\frac{\binom{48}{5}}{\binom{52}{5}} = \frac{1,712,304}{2,598,960} = 0.6588 $, so the probability of getting a 5-card hand with at least one ace is: $1 - 0.6588 = 0.3412 \approx 0.34$.
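These counts are easy to reproduce with `math.comb` (a quick check of the arithmetic above):

```python
from math import comb

total = comb(52, 5)                                       # 2,598,960 hands
p_four_aces = comb(4, 4) * comb(48, 1) / total            # part a)
p_four_aces_one_king = comb(4, 4) * comb(4, 1) / total    # part b)
p_three_tens_two_jacks = comb(4, 3) * comb(4, 2) / total  # part c)
p_at_least_one_ace = 1 - comb(48, 5) / total              # part d)
```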
Why aren't these negative numbers solutions for radical equations?
This is due to notation. When we write $\sqrt{n}$ we mean only the positive square root of $n$. If we wished to include both negative and positive solutions, we would write $\pm\sqrt{n}$. I know this can be irritating, but it is the convention that is used, since square root would not be a function if it gave multiple values.
Projective bundle formula for sheaf cohomology
One has $$ Rp_*(\mathcal{O}_{\mathbb{P}(E)}(1) \otimes p^*L) \cong Rp_*(\mathcal{O}_{\mathbb{P}(E)}(1)) \otimes L \cong E^\vee \otimes L, $$ therefore $$ H^\bullet(\mathbb{P}(E), \mathcal{O}_{\mathbb{P}(E)}(1) \otimes p^*L) \cong H^\bullet(X, E^\vee \otimes L). $$
Limit $\lim\limits_{a \to 0}( a\lfloor\frac{x}{a}\rfloor)$
We know that $\lfloor x/a \rfloor \le x/a < \lfloor x/a \rfloor +1$, so $$ a > 0 \Rightarrow a\lfloor x/a \rfloor \le x < a\lfloor x/a \rfloor +a \\ a < 0 \Rightarrow a\lfloor x/a \rfloor +a < x \le a\lfloor x/a \rfloor $$ which means that $$ a > 0 \Rightarrow 0 \le x - a\lfloor x/a \rfloor < a \\ a < 0 \Rightarrow a < x - a\lfloor x/a \rfloor \le 0, $$ and so $$ | x - a\lfloor x/a \rfloor | \le |a|. $$ Thus $a \lfloor x/a \rfloor \to x$ as $a \to 0$ by the squeeze lemma.
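The inequality $|x - a\lfloor x/a\rfloor| \le |a|$ is easy to watch numerically; a small sketch (the sample values of $x$ and $a$ are arbitrary):

```python
import math

x = math.pi
errors = []
for a in [0.5, -0.3, 0.01, -0.001, 1e-6]:
    # a * floor(x/a) approximates x with error at most |a|
    err = abs(x - a * math.floor(x / a))
    errors.append(err)
    assert err <= abs(a)
```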
Determine the quadratic character of $293 \bmod 379$.
$\newcommand{kron}[2]{\left( \frac{#1}{#2} \right)}$ You are trying to determine the Legendre symbol $\kron{293}{379}$. There is a pretty general recipe when trying to compute the Legendre symbol $\kron{a}{p}$ when $p$ doesn't divide $a$ and $p$ is an odd prime. The tools available are:

1. Explicitly find all the squares mod $p$, or represent $a$ as a square. (This is only worthwhile for small $p$.)
2. Use Euler's Criterion to compute $a^{\frac{p-1}{2}} \bmod p$. This is $1$ if $a$ is a square mod $p$ and $-1$ if $a$ is not a square mod $p$.
3. Alter $a$ by adding a multiple of $p$.
4. Factor $a$ into a product of primes $p_i^{k_i}$. Then $\displaystyle \kron{a}{p} = \prod_i \kron{p_i}{p}^{k_i}$, and one can consider each Legendre symbol separately.
5. Use quadratic reciprocity and its supplemental laws. This means that: when $a$ is an odd prime $q$, you can use $\displaystyle \kron{q}{p} = (-1)^{\frac{p-1}{2}\frac{q-1}{2}}\kron{p}{q};$ $\kron{2}{p} = 1$ if $p \equiv \pm 1 \pmod 8$ and $-1$ otherwise; and $\kron{-1}{p} = 1$ if $p \equiv 1 \pmod 4$ and $-1$ otherwise.

One can always proceed algorithmically by factoring the numerator, using quadratic reciprocity to flip each Legendre symbol, reducing the new numerators mod the denominators, and repeating until the numbers are small enough to analyze by inspection. Here, $293$ and $379$ are prime. By quadratic reciprocity, $$ \kron{293}{379} = \kron{379}{293}.$$ Reducing mod $293$, this becomes $$ \kron{379}{293} = \kron{86}{293} = \kron{2}{293} \kron{43}{293}.$$ As $293 \equiv 5 \pmod 8$, this becomes $$\kron{2}{293} \kron{43}{293} = - \kron{43}{293}.$$ Applying quadratic reciprocity and reducing mod $43$, this becomes $$- \kron{43}{293} = - \kron{293}{43} = -\kron{35}{43}.$$ One could factor and repeat. But one might also write this as $$-\kron{35}{43} = - \kron{-8}{43} = -\kron{-1}{43} \kron{2}{43}^3.$$ As $43$ is $3 \bmod 4$, we know $\kron{-1}{43} = -1$. And as $43$ is $3 \bmod 8$, we know that $\kron{2}{43} = -1$. 
So this becomes $$-\kron{-1}{43} \kron{2}{43}^3 = -(-1)(-1)^3 = -1.$$ And so we conclude that $293$ is not a square mod $379$. $\diamondsuit$
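The conclusion can be cross-checked with Euler's criterion, using Python's three-argument `pow` for modular exponentiation:

```python
# Euler's criterion: a is a square mod an odd prime p iff a^((p-1)/2) ≡ 1 (mod p).
p, a = 379, 293
euler = pow(a, (p - 1) // 2, p)  # 378 ≡ -1 (mod 379): 293 is not a square mod 379
```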
Markov and Chebyshev Inequality for bounds =[0,n]
The trick is that $\Pr(X>1)\le K$ gives $\Pr(X\le 1)\ge 1-K$. Note that Markov's inequality only gives information when the number being compared to is greater than the mean; so $1>K$ gives some results, but otherwise only vacuous truths. Also, only positive numbers are allowed as thresholds: no $0$ or negatives.
Prove the following claim
We have, assuming $m \neq n$ implies $\lambda_n \neq \pm \lambda_m$ \begin{eqnarray*} I &=& \int_0^L \sin ( \lambda_n x ) \sin (\lambda_m x) \textrm{d} x \\ &=& - \frac{1}{\lambda_n} \cos ( \lambda_n x ) \sin (\lambda_m x) \big|_{x=0}^{x=L} + \frac{\lambda_m}{\lambda_n} \int_0^L \cos( \lambda_n x ) \cos(\lambda_m x) \textrm{d} x \\ &=& - \frac{1}{\lambda_n} \cos ( \lambda_n L ) \sin (\lambda_m L) + \frac{\lambda_m}{\lambda_n^2} \sin( \lambda_n x ) \cos(\lambda_m x) \big|_{x=0}^{x=L} + \frac{\lambda_m^2}{\lambda_n^2} I \\ &=& - \frac{1}{\lambda_n} \cos ( \lambda_n L ) \sin (\lambda_m L) + \frac{\lambda_m}{\lambda_n^2} \sin( \lambda_n L ) \cos(\lambda_m L) + \frac{\lambda_m^2}{\lambda_n^2} I \\ \end{eqnarray*} Now rearranging yields \begin{eqnarray} I &=& \frac{1}{\lambda_n} \frac{\lambda_n^2}{\lambda_m^2 - \lambda_n^2}\cos ( \lambda_n L ) \sin (\lambda_m L) - \frac{\lambda_m}{\lambda_n^2} \frac{\lambda_n^2}{\lambda_m^2 - \lambda_n^2} \sin( \lambda_n L ) \cos(\lambda_m L) \\ &=& \frac{\lambda_n}{\lambda_m^2 - \lambda_n^2}\cos ( \lambda_n L ) \sin (\lambda_m L) - \frac{\lambda_m}{\lambda_m^2 - \lambda_n^2} \sin( \lambda_n L ) \cos(\lambda_m L) \\ &=& \frac{\lambda_n \cos( \lambda_n L) \sin( \lambda_m L ) - \lambda_m \cos( \lambda_m L) \sin( \lambda_n L)}{\lambda_m^2 - \lambda_n^2}\\ &=&\frac{- h\sin( \lambda_n L) \sin( \lambda_m L ) + h \sin( \lambda_m L) \sin( \lambda_n L)}{\lambda_m^2 - \lambda_n^2}\\ &=&0 \end{eqnarray}
How to calculate the success rate against non-occurring events?
This is equivalent to evaluating $\frac{0}{0}$, which is undefined. Do not take a reading if this occurs.
How to write 3 matrix multiplications as a sum?
This is correct. There's not really any easier way of writing it down. There is some intuition, though: $A_{ik}B_{kl}C_{lj}$ can be thought of as a "walk" from $i$ to $j$ with three steps $i \rightarrow k \rightarrow l \rightarrow j$. Then $(ABC)_{ij}$ is given by taking the sum over all "walks" from $i$ to $j$, where each walk is "weighted" by the entries $A_{ik}B_{kl}C_{lj}$. Similarly, $$(ABCD)_{ij} = \sum_{i_1} \sum_ {i_2}\sum_{i_3} A_{i, i_1} B_{i_1, i_2} C_{i_2, i_3} D_{i_3, j}$$ is given by taking the sum over all walks from $i$ to $j$ with four steps. If you write it as a single sum, and generalize it to arbitrary products, the notation gets a little trickier. Let $A^1, A^2, \ldots, A^l$ be $n \times n$ matrices; note that the superscripts here are not powers, just indices on the tuple - we need them because we'll reserve subscripts for the row and column indices. Define a walk $\gamma$ from $i$ to $j$, of length $l$, as a tuple $(i, i_1, i_2, i_3, \ldots, i_{l-1}, j)$, with each entry between $1$ and $n$, and let $P_l$ be the set of all walks $\gamma$ of length $l$. Define the weight of $\gamma \in P_l$ to be $$w(\gamma) = A^1_{i,i_1} A^2_{i_1, i_2} A^3_{i_2, i_3} \cdots A^{l}_{i_{l-1}, j}.$$ Then $$(A^1 A^2 \cdots A^l)_{ij} = \sum_{\gamma \in P_l} w(\gamma).$$ This idea is used frequently in graph theory (powers of the adjacency matrix count walks) and probability (Markov chains).
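A small sketch (the example matrices are arbitrary) that checks the four-step walk formula against an ordinary chained matrix product:

```python
def matmul(X, Y):
    # plain textbook product of two square matrices given as lists of rows
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# arbitrary 2x2 example matrices
A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
C = [[2, 0], [1, 3]]
D = [[1, 1], [0, 2]]
n = len(A)

direct = matmul(matmul(matmul(A, B), C), D)
# sum over all 4-step walks i -> i1 -> i2 -> i3 -> j
walks = [[sum(A[i][i1] * B[i1][i2] * C[i2][i3] * D[i3][j]
              for i1 in range(n) for i2 in range(n) for i3 in range(n))
          for j in range(n)] for i in range(n)]
```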
Supremum and infimum for xyz = 1
Hint $$x=m, y=\frac{1}{m}, z=1 \Rightarrow xyz=1$$ Note For the inf, setting $x=y=z=1$ shows that $3$ is a minimum for your set.
Ring homomorphisms form an abelian group
To give a (not necessarily unital) ring homomorphism from $\mathbb{Z}_n$ to $\mathbb{Z}_m$ is the same as giving an idempotent in $\mathbb{Z}_m$ whose order divides $n$. Now, it is well known that the idempotents in any unital commutative ring form a Boolean ring, with the same multiplication and the "sum" of $e$ and $f$ being $e+f-2ef$. Also, it is easily seen that if $e$ and $f$ have order dividing $n$, then so does $e+f-2ef$. Finally, with this addition, the identity element is still zero and every element is its own inverse. Hence, there is an isomorphic abelian group structure on the set of ring homomorphisms from $\mathbb{Z}_n$ to $\mathbb{Z}_m$ with $(f +' g)(k)=k[f(1)+g(1)-2f(1)g(1)]$ (using the prime mark to avoid confusion with the pointwise sum).
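A brute-force sketch for small moduli (the choice $n=4$, $m=12$ is just an illustration): such a homomorphism is determined by $e = f(1)$ with $e^2 \equiv e \pmod m$ (multiplicativity) and $ne \equiv 0 \pmod m$ (the additive order of $e$ divides $n$), and the stated group law can be checked directly.

```python
def homs(n, m):
    # e = f(1) must be idempotent mod m and have additive order dividing n
    return [e for e in range(m) if (e * e - e) % m == 0 and (n * e) % m == 0]

def boolean_sum(e, f, m):
    # the "sum" e + f - 2ef of two idempotents
    return (e + f - 2 * e * f) % m

n, m = 4, 12
E = homs(n, m)  # for this choice the valid idempotents are 0 and 9
closed = all(boolean_sum(e, f, m) in E for e in E for f in E)
self_inverse = all(boolean_sum(e, e, m) == 0 for e in E)
```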
Solving a Fredholm integral equation with a logarithmic kernel
Ok, so, the way I found to the solution is the following. First, I asked myself, what would be a differential problem related to that integral equation? You know, if you have a differential problem: $$ D f(x) = y(x) $$ where $ D $ is a differential operator, and if you have a Green's function such that $ D G(x,t) = \delta(x-t) $, then $$ f(x) = \int G(x,t) y(t) $$ is a solution. You can make the identification $$ f(x) = \pi \left[ \pi + 2 \tan^{-1}(x) - \tan^{-1} \left(\frac{x}{a}\right) - \tan^{-1} \left(\frac{x}{c}\right) \right] $$ and $$ G(x,t) = \ln \left| \frac{ t - x }{ t + x } \right| $$ and then the difficult question is: what is the differential operator $ D $? Rather than obtaining it from calculation, I answered that question using some physics. The Green's function for the Laplacian in two dimensions is a logarithm. More precisely, consider the problem: $$ \nabla^2 g(x) = \delta(0) $$ By the spherical symmetry of the problem, I know $ g $ depends only on $ r $, the radial distance from the origin. Then I need to write the Laplacian in polar coordinates, which in two dimensions reads: $$ \frac{1}{r} \frac{\partial}{\partial r} \left( r \frac{\partial g(r)}{\partial r} \right) $$ For that to become 0 outside of the origin, I need the term inside the parentheses to be a constant, which is easily obtained if $ g(r) = k \ln r $, $k$ being a constant. Then, I need the integral of this expression over a circle surrounding the origin to be one, as to make the Laplacian equivalent to a delta. 
The integral over the disk of area $A$ reads, using the divergence theorem in two dimensions: $$ \int_A dA\, \nabla^2 (k \ln r) = \int_0^{2\pi} r \,d\theta\, \frac{\partial}{\partial r}(k \ln r) = 2\pi k $$ So that $$ k = \frac{1}{2\pi} $$ And the Green's function is: $$ g(r) = \frac{1}{2\pi} \ln r $$ Now, let's consider the problem: $$ \frac{1}{t} \frac{\partial}{\partial t} \left(t \frac{\partial}{\partial t} G(x,t) \right) = \delta(x-t) $$ Subject to the boundary condition $$ G(x,0) = 0 $$ This can be seen as the physical problem of a point charge in the point (x,0) in the plane generating an electrical potential $ \phi_x(t,u) $ which vanishes at the surface t=0 (because, say, of a plane conductor being placed there), and we want to know the value $ G(t,x) = \phi_x(t,0) $ of the potential in the point (t,0). That is, the field lines extend through the whole plane, but in your problem we only care about the line u=0. That is, we have a one-dimensional problem, but with the Laplacian that we want. The electrical potential satisfies the equation: $$ \nabla^2 \phi_x(t,u) = \delta((t,u) - (x,0)) $$ Or, using $ r = \sqrt{(t-x)^2 + u^2} $ $$ \frac{1}{r} \frac{\partial}{\partial r} \left(r \frac{\partial}{\partial r} \phi_x(r) \right)= \delta(0) $$ With the aforementioned boundary condition. The standard way to solve this problem is through the method of images: you consider that there is no boundary condition, and instead there are TWO charges, one at $ (x,0) $ and an image charge of opposite sign at $ (-x,0) $. Then, the potential generated by the two is the sum of the potentials which would be generated by each one: $$ \phi = \frac{1}{2\pi} \ln r - \frac{1}{2\pi} \ln r' $$ $r$ being the distance between the evaluation point and the first charge, and $r'$ being the distance to the second (image) charge; the minus sign makes $\phi$ vanish on the surface t=0, whose points are equidistant from the charge and its image. 
Well, if we consider the evaluation point as being $ (t,0) $, then $ r = |t-x| $ and $ r' = |t+x| $, and $$ G(t,x) = \frac{1}{2\pi} \ln \left| \frac{t-x}{t+x} \right| $$ Finally, we found out that this Green's function is the solution of the differential problem stated above, with that boundary condition. This means that the $ y(t) $ that we want is given by $$ \frac{1}{t} \frac{\partial}{\partial t} \left(t \frac{\partial}{\partial t} f(t) \right) = 2 \pi y(t) $$ Putting the expression for f, we have then $$ y(t) = \frac{1}{2t} \frac{\partial}{\partial t} \left(t \left[ \frac{2}{1 + t^2} - \frac{a}{a^2 + t^2} - \frac{c}{c^2 + t^2} \right] \right) $$ Which can then be calculated explicitly. Obs: I am thinking whether the fact that $ f(t) $ does not satisfy the boundary condition is a problem or not. In the differential problem it just amounts to an integration constant, or a change of ground potential. But I don't know if it affects the integral problem in an important way. Obs2: Reading the comment from Mark Viola, it seems to me that this $ \pi^2 $ really should not be in the integral problem at all.
Diagonalizablity and complex conjugates linear algebra proof problem
I guess $\overline{E_{\mu_j}}=\{\bar x:x\in E_{\mu_j}\}$, where $\bar x=\overline{(x_1,\dots,x_n)}= (\overline{x_1},\dots, \overline{x_n})$ for each $x=(x_1,\dots,x_n)\in\Bbb C^n$. Let $x=(x_1,\dots,x_n)\in\Bbb C^n$ be any vector. Using that all entries of $A$ are real, it is easy to check that $A\bar x=\overline{Ax}$. Thus for each $1\le j\le s$ we have $$x\in E_{\overline{\mu_j}}\Leftrightarrow Ax=\overline{\mu_j}x \Leftrightarrow A\bar x=\overline{Ax}=\overline{\overline{\mu_j}x}=\mu_j\bar{x} \Leftrightarrow \bar{x}\in E_{\mu_j} \Leftrightarrow x\in \overline{ E_{\mu_j}}.$$
How to find the equation of the 3D plane passing through three points?
Explicitly, the plane $\mathcal{P}$ through the point $P_0 = (x_0, y_0, z_0)$ is determined by a normal vector $\mathbf n = \langle a,b,c \rangle$ (unique up to a scalar multiple) according to the following: a point $P$ lies on $\mathcal{P}$ if and only if $\mathbf n$ and $\overrightarrow{P_0 P}$ are orthogonal if and only if $\mathbf n \cdot \overrightarrow{P_0 P} = 0$ if and only if $$a(x - x_0) + b(y - y_0) + c(z - z_0) = 0.$$ By setting $d = ax_0 + by_0 + cz_0,$ we have $ax + by + cz = d.$ Given three points $P = (x_0, y_0, z_0),$ $Q,$ and $R,$ the equation of the plane through $P,$ $Q,$ and $R$ can be determined by setting $\mathbf n = \overrightarrow{PQ} \times \overrightarrow{PR}$ and computing the dot product $$0 = \mathbf n \cdot \langle x - x_0, y- y_0, z - z_0 \rangle.$$
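A minimal sketch of this recipe (the function name is made up); it returns the coefficients $\mathbf n = (a,b,c)$ and $d$ of $ax+by+cz=d$:

```python
def plane_through(P, Q, R):
    # n = PQ x PR, then d = n . P
    u = [Q[i] - P[i] for i in range(3)]
    v = [R[i] - P[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    d = sum(n[i] * P[i] for i in range(3))
    return n, d

# The plane through (1,0,0), (0,1,0), (0,0,1) is x + y + z = 1.
n, d = plane_through((1, 0, 0), (0, 1, 0), (0, 0, 1))
```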
Show that a nonconstant monic must have modulus $\geqslant 1$ on the boundary of the unit disk.
If $\sup_{|z|=1} |p(z)|<1$, then by Cauchy's bounds, $|p^{(n)}(0)|<n!$. But $p$ is monic of degree $n$, so $p^{(n)}(0)=n!$, a contradiction.
Abelianization of free profinite group
This is not a complete answer but it's too long to be a comment. Profinite completion is a functor $C:\mathbf{Grp}\to \mathbf{Prof}$ (the latter being the category of profinite groups and continuous group morphisms) that has a very nice property: it is left adjoint to the forgetful functor $\mathbf{Prof}\to \mathbf{Grp}$. Therefore, as all left adjoints do, it commutes with colimits (in particular, this tells you how it behaves wrt exact sequences and coproducts: it is right exact, and commutes with coproducts -note however that this last bit is subtle : it takes coproducts in $\mathbf{Grp}$ to coproducts in $\mathbf{Prof}$ which may not look like coproducts in $\mathbf{Grp}$ !) Taking a quotient is a colimit, hence $C(\mathbb{Z}^{(k)})=C(F_k/[F_k,F_k])=C(F_k)/C([F_k,F_k])=G/C([F_k,F_k])$ (this quotient is meant as "the coequalizer of the diagram $C([F_k,F_k]) \to C(F_k)$" where the two arrows are $C($inclusion$[F_k,F_k]\to F_k)$ and $C($trivial morphism$) =$trivial morphism. It suffices now to see what the image of $C([F_k,F_k])$ in $G$ looks like, and what its normal closure looks like. If it happened to be $\overline{[G,G]}$, that would be great because it would imply that $G/\overline{[G,G]} = C(\mathbb{Z}^{(k)})$ Another way to look at this would be to say that abelianization is also a functor, here we're looking at profinite abelianization, so it's a functor $^{abProf}:\mathbf{Prof}\to \mathbf{AbProf}$ (the category of abelian profinite groups). This one is left adjoint to the inclusion $\mathbf{AbProf}\to \mathbf{Prof}$, so again it commutes with colimits. Therefore $G^{abProf} = C(F_k)^{abProf}= C(\displaystyle\coprod_{i\in k}\mathbb{Z})^{abProf} = (\displaystyle\coprod_{i\in k}C(\mathbb{Z}))^{abProf}$ (because $C$ commutes with coproducts -note that the coproducts are to be understood in the correct category; i.e. they need not be the same !), so $G^{abProf} = \displaystyle\coprod_{i\in k}C(\mathbb{Z})^{abProf}$. 
Now $C(\mathbb{Z})=\widehat{\mathbb{Z}}$ is already abelian, so $G^{abProf} = \displaystyle\coprod_{i\in k}C(\mathbb{Z})=\displaystyle\coprod_{i\in k}\widehat{\mathbb{Z}}$, where this coproduct is to be taken in $\mathbf{AbProf}$. Now what does the coproduct in $\mathbf{AbProf}$ look like? To answer this let's make a detour through $\mathbf{Ab}$: we have another profinite completion functor $K: \mathbf{Ab}\to \mathbf{AbProf}$ and it's again left adjoint to the forgetful functor, so it commutes with coproducts. Hence in $\mathbf{AbProf}$, $\displaystyle\coprod_{i\in k}\widehat{\mathbb{Z}}= \displaystyle\coprod_{i\in k}K(\mathbb{Z})=K(\mathbb{Z}^{(k)})$. Therefore $G^{abProf} = $ the profinite completion of $\mathbb{Z}^{(k)}$. For finite $k$ I expect this should be $\widehat{\mathbb{Z}}^{(k)}$. Hence computing what you call $G'_k$ is reduced to one of the following tasks:

- Computing the topological closure of the normal closure of the image of $C([F_k,F_k])$ in $G$ (under the induced morphism) and finding that this is $\overline{[G,G]}$ (I have very little intuition/knowledge about profinite groups, but this seems like it should be true?).
- Or describing the coproduct in the category $\mathbf{AbProf}$. I expect this shouldn't be complicated (I'm just not seeing it right now for some reason). If you have this, then $G'_k$ is the coproduct in this category of $k$ copies of $\widehat{\mathbb{Z}}$.
- Or computing the profinite completion of $\mathbb{Z}^{(k)}$. For finite $k$ this should be easy, for infinite $k$ I don't know. If you have this, then $G'_k$ is the profinite completion of $\mathbb{Z}^{(k)}$.

Edit: I can confirm that it works for a finite $k$. Indeed, the finite coproduct of profinite abelian groups (in $\mathbf{AbProf}$!) is their product: if $G_1,\dots,G_n$ are such groups then their product is Hausdorff, compact, totally disconnected, and it's a topological group, so it's an abelian profinite group, and then the universal property is immediate. 
So for finite $k$, we do have $G_k' = \widehat{\mathbb{Z}}^k$.
Evaluating $\lim_{x\to 0}\frac{x\sin x-2+2\cos x}{x\ln(1+x)-x^2}$ using L'Hôpital
In general applying L'Hospital's Rule directly is never a good idea (unless the limit problem is too simple). One must first rewrite the given expression under limit into a form which is suitable for the application of L'Hospital's Rule. Dividing the numerator and denominator of the expression under limit by $x^3$ we get $$\dfrac{\dfrac{x\sin x - 2+2\cos x} {x^3}} {\dfrac{\log(1+x)-x}{x^2}} $$ Now we can calculate the limit of the denominator above by a single application of L'Hospital's Rule and it will come out to be $-1/2$ and hence the desired limit equals $$2\lim_{x\to 0}\frac{2(1-\cos x) - x\sin x} {x^3}$$ We can now apply L'Hospital's Rule once more to get $$2\lim_{x\to 0}\frac{\sin x - x\cos x} {3x^2}$$ The above limit can be evaluated directly without any use of L'Hospital's Rule or series expansions, but it is simpler to deal with it via L'Hospital's Rule. Applying it again we get $$\frac{1}{3}\lim_{x\to 0}\frac{x\sin x} {x} =\frac{1}{3}\lim_{x\to 0}\sin x = 0$$ The desired limit is thus $0$. The rule of thumb for applying L'Hospital's Rule: use it only when the necessary differentiation is damn simple.
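A numerical sanity check (the sample points are chosen arbitrarily): near $0$ the ratio behaves roughly like $x/6$, consistent with the limit being $0$.

```python
import math

def ratio(x):
    # the expression under the limit; log1p avoids loss of precision in log(1+x)
    return (x * math.sin(x) - 2 + 2 * math.cos(x)) / (x * math.log1p(x) - x * x)

samples = [ratio(0.1), ratio(0.01), ratio(0.001)]  # shrinks toward 0
```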
integrability functions
I can answer the first question. Of course $f(x)$ is bounded, since its values can only be of the form $10^{-n}$ for $n \geq 0$, so $ 0 \leq f(x) \leq 1$. Note that $f$ would be a step function, except for the fact that there are infinitely many steps. But we can take care of this. To show that $f$ is Riemann integrable, it's enough to show that we can sandwich it between two step functions, which are definitely integrable. Then, these step functions would have integrals which converge to the same value. The first step function would look as follows: $S_n(x) = f(x) $, for $x \geq \frac 1{2^{n+1}}$, and zero before that. Similarly, $T_n(x) = f(x)$, for $x \geq \frac{1}{2^{n+1}}$, and $1$ before that. So, we have step functions $S_n$ and $T_n$ such that $S_n \leq f \leq T_n$. Now, I leave you to see that $S_n$ and $T_n$ have Riemann integrals which converge to the same number as $n \to \infty$. By definition, this would be the integral of $f$ as well. EDIT: You are saying that the question is: $\int_{0}^{x^2(1+x)}f(t)dt = x$ for all $x > 0$. I think they mean that the right-hand side is the function $x \mapsto x$. Define $g(y) = \int_0^y f(t)dt$. Then, we want to find $g(x^2(1+x)) = x$ for all $x$. Differentiating, we get $g'(x^2(1+x))(2x + 3x^2) = 1$, so $g'(x^2+x^3) = \frac{1}{2x+3x^2}$. However, note that $g'(x^2(1+x)) = f(x^2(1+x))$ by the fundamental theorem of calculus. Therefore, it follows that $f(x^2(1+x)) = \frac 1{2x+3x^2}$ for all $x$. Now, see that $1^2(1+1) = 2$, so putting $x=1$ in this statement, $f(2) = \frac{1}{2+3} = \frac 15$.
Finding coefficient of $x^{111}$ in $P=(1+x)+2(1+x)^2+3(1+x)^3+\ldots+1000(1+x)^{1000}$
You made a mistake when taking the derivative; it should be $$(1+x)\left[\frac{1000(1+x)^{1000}}{x}-\frac{(1+x)^{1000}}{x^2}+\frac{1}{x^2}\right].$$ Once this is fixed, you will get the right answer.
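The closed form displayed above can be checked exactly with rational arithmetic against the defining sum $\sum_{k=1}^{1000} k(1+x)^k$; evaluating both sides at the sample point $x=1$ (an arbitrary choice):

```python
from fractions import Fraction

x = Fraction(1)  # sample evaluation point; any nonzero rational works
y = 1 + x
direct = sum(k * y**k for k in range(1, 1001))
closed = y * (1000 * y**1000 / x - y**1000 / x**2 + 1 / x**2)
```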
What is the probability that a triangle is isoceles?
In order to state a probability question properly it is important to be clear about the universe from which you are selecting your special set. For example, you could ask: Select any three non-collinear points at random in a 1 by 1 square; what is the probability that the triangle they form is isosceles? There are other ways of asking the question which may or may not be equivalent. In almost all cases I suspect that the answer would be, as you surmised, $0$. However, the probability that they form a non-isosceles triangle would not be infinity; it would be $1$. This is a simple result of elementary probability theory.
Find area of region bounded by four graphs
The area between two curves can be understood as follows: Let $f(x)$ be the top curve, and let $g(x)$ be the bottom curve. Then the area under $f(x)$ is $\int_a^b f(x)\, dx$. The area under $g(x)$ is $\int_a^b g(x)\, dx$. If we remove the area under $g$, then we have the area between them. This is $\int_a^b f(x)\, dx - \int_a^b g(x)\, dx = \int_a^b [f(x) - g(x)]\, dx$ by linearity of the integral. So now what I would do is split up the region into two parts, since the 'top' function is piecewise defined. Find the intersection point between the green and yellow curves, call it maybe $c$. Then you want to integrate (green) - (blue) from $0$ to $c$, and (yellow) - (blue) from $c$ to $3$.
Two Questions about Sobolev inequalities and Lipschitz smooth functions
The answer to Question 2 is yes. Since $f$ is smooth and has bounded derivative, it belongs to $L_1^p$ for all $p\ge 1$. I'm not sure I understand Question 1. Does $M=\mathbb R^n$ work?
Set theory notation for intersection
It's shorthand notation for $B= \bigcap_{i=1}^{10} A_i = A_1\cap A_2\cap \ldots \cap A_{10}$.
How to prove the formula of altitude from this following triangle?
By the picture that you drew, we have $\frac{AD}{c}=\frac{b}{\sqrt{b^2+c^2}}$, which gives $AD=\frac{bc}{\sqrt{b^2+c^2}}$. This is hardly ever equal to $\sqrt{2}\frac{bc}{b+c}$. For if we had equality, we would have $\frac{\sqrt{2}}{b+c}=\frac{1}{\sqrt{b^2+c^2}}$, or equivalently $2(b^2+c^2)=(b+c)^2$. This simplifies to $b^2-2bc+c^2=0$, that is, $b=c$. So the proposed formula only gives the right answer if our triangle is right-angled and isosceles.
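A quick numerical check of this comparison (the sample leg lengths are chosen arbitrarily):

```python
import math

def altitude(b, c):
    # AD = b*c / sqrt(b^2 + c^2), as derived in the answer
    return b * c / math.sqrt(b * b + c * c)

def proposed(b, c):
    # the formula sqrt(2)*b*c/(b + c) from the question
    return math.sqrt(2) * b * c / (b + c)

agree_when_equal = abs(altitude(3, 3) - proposed(3, 3)) < 1e-12   # b = c: equal
differ_otherwise = abs(altitude(3, 4) - proposed(3, 4)) > 1e-3    # b != c: not equal
```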
Using hyperbolic substitution, wrong coefficient
I don't see anything wrong with what you did. As far as I can tell, the solution set you linked to has a mistake in their answer. Note for their answer to be correct means that $$\frac{\sinh^2(2y)}{2} = \sinh^2(y)\cosh^2(y) \; \implies \; \sinh(2y) = \pm\sqrt{2}\sinh(y)\cosh(y) \tag{1}\label{eq1}$$ which is not true. Also, it appears it wasn't just a typo in that one place, as this error is carried all the way through to their final answer.
Regarding Conditional Probability
Bayes' Theorem helps, but you can do it from first principles. How can Fred win? Well, two ways. Either his first goes in and he wins or his first serve misses, second serve goes in, and he wins. Thus the total probability that he wins is $$.6\times .7 +.4\times .9\times .4 = .564$$ That is your denominator. The numerator is that portion of that probability which is explained by Fred getting his first serve in. Thus your answer is $$\frac {.6\times .7}{.6\times .7 +.4\times .9\times .4}\approx \boxed {.745}$$
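The arithmetic, spelled out (all probabilities taken from the problem statement):

```python
p_first_in_and_win = 0.6 * 0.7               # first serve in, then wins the point
p_miss_second_in_and_win = 0.4 * 0.9 * 0.4   # first misses, second in, then wins
p_win = p_first_in_and_win + p_miss_second_in_and_win
p_first_in_given_win = p_first_in_and_win / p_win
```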
If $f'(x)\geq 1$ for all $x\in[0,1]$ and $f$ is twice differentiable which of the following is true?
Proof of (d): Using Lagrange's Mean Value Theorem, $\frac{ f(x_2)- f(x_1)}{x_2 - x_1} = f'(c) $ for some $c \in (x_1,x_2) \subseteq [0,1]$. Now, $f'(x) \geq 1\ \forall x \in [0,1]$, thus $f'(c)\geq 1$, and you get (d) after some trivial manipulations.
Find the integral of $\frac{x^5+x^2+4x+\sin(x)}{64+x^6} dx$ from $-2$ to $2$
$$\int^2_{-2}\dfrac{x^5+x^2+4 x+\sin(x)}{64+x^6} dx=\int^2_{-2}\dfrac{x^5}{64+x^6} dx+\int^2_{-2}\dfrac{x^2}{64+x^6} dx+\\ \int^2_{-2}\dfrac{4 x}{64+x^6} dx+\int^2_{-2}\dfrac{\sin(x)}{64+x^6} dx=\\ \left.\dfrac{1}{6}\ln|64+x^6|\right|^2_{-2}+\dfrac{1}{64}\int^2_{-2}\dfrac{x^2}{1+\left(\frac{x^3}{8}\right)^2} dx+\int^2_{-2}\dfrac{4 x}{64+x^6} dx+\int^2_{-2}\dfrac{\sin(x)}{64+x^6} dx=\\ 0+\left.\dfrac{1}{24}\arctan\left(\frac{x^3}{8}\right)\right|^2_{-2}+0+0\text{ (by properties of odd functions)}=\boxed{\dfrac{\pi}{48}}$$
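A numerical cross-check of the value $\pi/48 \approx 0.06545$ (composite Simpson's rule; the step count is arbitrary):

```python
import math

def integrand(x):
    return (x**5 + x**2 + 4 * x + math.sin(x)) / (64 + x**6)

def simpson(f, a, b, n=2000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

approx = simpson(integrand, -2.0, 2.0)
```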
Is this reasoning correct?
Almost complete! The gap in the proof is when you go from $(gx)^2=g^2$ to $gx=g$. Here you need to use the fact that the group is of odd order. Note that the order of an element must divide the order of the group. So a group of odd order has no elements of order $2$. Use that tool to finish. Added: We need to show that any square has a unique "square root." Consider the element $a$. Since the order of $a$ divides the order of the group, $a$ has odd order, say $2k+1$. So $a^{2k+1}=e$, and therefore $a^{2k+2}=a$. So $a=(a^2)^{k+1}$. In particular, $a$ is completely determined by $a^2$.
Number of invertible matrices modulo 26
The Chinese Remainder Theorem tells us that $\mathbb{Z}_{26}\cong\mathbb{Z}_2\times\mathbb{Z}_{13}$, which in turn is what tells us that $GL(n,\mathbb{Z}_{26})\cong GL(n,\mathbb{Z}_2\times\mathbb{Z}_{13})\cong GL(n,\mathbb{Z}_2)\times GL(n,\mathbb{Z}_{13})$. Then the result follows by the proof you have in your answer. It is definitely not the case that a matrix which is invertible modulo $13$ but not invertible modulo $2$ will be invertible modulo $26$. For example, take the matrix with all $2$'s on its diagonal and zeros everywhere else. Then this matrix is in $GL(n,\mathbb{Z}_{13})$, is not in $GL(n,\mathbb{Z}_2)$, and is not in $GL(n,\mathbb{Z}_{26})$, as $2$ does not have a multiplicative inverse in $\mathbb{Z}_{26}$.
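For $n=2$ the count can be verified by brute force (a sketch; it takes a moment but is feasible): $|GL(2,\mathbb{Z}_{26})| = |GL(2,\mathbb{Z}_2)|\cdot|GL(2,\mathbb{Z}_{13})| = 6 \cdot 168 \cdot 156 = 157248$.

```python
from math import gcd

def count_gl2(m):
    # invertible 2x2 matrices over Z_m: the determinant must be a unit mod m
    units = {u for u in range(m) if gcd(u, m) == 1}
    count = 0
    for a in range(m):
        for b in range(m):
            for c in range(m):
                for d in range(m):
                    if (a * d - b * c) % m in units:
                        count += 1
    return count

# |GL(2, Z_q)| = (q^2 - 1)(q^2 - q) for prime q
expected = (2**2 - 1) * (2**2 - 2) * (13**2 - 1) * (13**2 - 13)
```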
Binary relations: Can someone see if I have done this correctly?
In order to check if a relation is reflexive you need to know on which set the relation is defined. My guess would be that you are meant to use $\{a,b,c\}$ in all cases. This assumption makes $S_2$ not a reflexive relation, because it is missing $(c,c)$. Your second answer is correct, but just inserting $(b,a)$ is sufficient. Your third answer is correct, but just inserting $(b,c)$ is sufficient. Your sixth answer is wrong, only $S_4$ is reflexive under the assumption mentioned earlier. The other answers are correct.
Evaluating Convergence (Uniform)
For $ x\in (0,1)$ and $ n$ large enough, $$g_n(x)=|f_n(x)-\frac{1}{x^2}|=\frac{|x^2-1|}{(n^3x^2+1)x^2}$$ Now take the sequence $ (x_n) $ such that $$n^3x_n^2=1$$ or $$x_n=n^{-\frac 32}=\frac{1}{n^{\frac 32}}$$ Then, $ x_n\in (0,1)$ and $$|f_n(x_n)-f(x_n)|=\frac{|n^{-3}-1|}{2n^{-3}}$$ $$=\frac 12|1-n^{3}| \to +\infty$$ But $$|f_n(x_n)-f(x_n)|\le \sup_{(0,1)}|f_n-f|$$ thus $$\lim_{n\to+\infty}\sup_{(0,1)}|f_n-f|=+\infty$$ The convergence is not uniform on $(0,1)$. It is uniform on $(a,1) $ with $0<a<1$.
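Plugging $x_n = n^{-3/2}$ into $g_n$ numerically reproduces $(n^3-1)/2$, which diverges (a quick sketch):

```python
def g(n, x):
    # g_n(x) = |x^2 - 1| / ((n^3 x^2 + 1) x^2)
    return abs(x * x - 1) / ((n**3 * x * x + 1) * x * x)

values = [g(n, n ** -1.5) for n in [2, 5, 10, 100]]  # grows like (n^3 - 1)/2
```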
Convergence of Uniform random variables
Assuming that $U_n:=[nU]/n$, $$ \mathsf{P}(U_{n+1}>U_n)=\sum_{i=1}^{n-1}\mathsf{P}\!\left(\frac{i}{n+1}< U\le \frac{i}{n}\right)=\frac{1}{2}\times\frac{n-1}{n+1}. $$
Bijection between set of all binary sequences and power set of $\mathbb N$.
Yes, that is a bijection. Define, for $A \subseteq \Bbb N$, $\varphi(A)=\chi_A \in \{0,1\}^{\Bbb N}$ where $$\chi_A(n) = \begin{cases} 1 &\text{ if } n \in A\\ 0 &\text{ if } n \notin A\end{cases}$$ (Note that a binary sequence is just a function from $\Bbb N$ to $\{0,1\}$) and show $\varphi$ is a bijection: $\chi_A = \chi_B$ (as functions) iff $A=B$ (as sets) and for any $x \in \{0,1\}^{\Bbb N}$ there is some $A \subseteq \Bbb N$ with $\varphi(A)=x$ (as functions). Both are not hard, and are exercises in the definitions. This sort of justifies the notation $2^X$ for the powerset of $X$ one sometimes sees.
Analytic continuation of a power series 2
We have $f(z) = \sum_n a_n z^n$, where $a_n = \begin{cases} 1 & \text{if}\ \exists k\ n = 2^k \\ 0 & \text{otherwise} \end{cases}$ i) The radius of convergence is given by $\frac{1}{R} = \limsup_n \sqrt[n]{a_n}= \lim_n \sqrt[2^n]{1} = 1$. Hence $R=1$, and $f$ is defined on $D$. ii) $f(z) = \sum_{n=0}^\infty z^{2^n} = z+\sum_{n=1}^\infty z^{2^n} = z+\sum_{n=0}^\infty z^{2^{n+1}} = z+\sum_{n=0}^\infty (z^2)^{2^{n}} = z+f(z^2)$. It follows by induction that $f(z) = \sum_{k=0}^{n-1} z^{2^k} + f(z^{2^n})$ for any $n$. iii) Note that $\lim_{r \uparrow 1}f(r) = \infty$ (for $r$ real). If $w^{2^n} = 1$, ii) gives $f(rw) = \sum_{k=0}^{n-1} (rw)^{2^k} + f(r^{2^n})$, and we have $\lim_{r \uparrow 1}|f(rw)| = \infty$. Let $\Omega_n = \{w \mid w^{2^n} = 1\}$, and $\Omega = \cup_n \Omega_n$. It is easy to see that $\Omega$ is dense in $\partial D$, and hence the set $\{z \in \partial D \mid \lim_{r \uparrow 1}|f(rz)| = \infty \}$ is dense in $\partial D$. Hence $f$ cannot be analytically continued to any neighborhood of any point of $\partial D$.
Three numbers that multiply to ten (It gets harder)
You could try programming a search that factors integers that are close to powers of $10$, and then examines the factors for their digits. For example, $100002$ factors as $2\times3\times7\times2381$. Searching through possible triples of divisors, $6\times7\times 2381$ has unique digits (but is missing $4$, $5$, and $9$). If all digits were accounted for, this would lead to to $10.002=6\times7\times0.02381$. (I'm not sure from your post if you are allowed to repeat digits, like how $0$ is repeated in this example.) The search parameters would be which powers of $10$ you would examine, and also constraints on how far away from that power of $10$ you would roam. If digits aren't supposed to be repeated, I think $10^a$ with $a$ in $\left\{9,10,11\right\}$ would be appropriate. (And at this level, factoring shouldn't take the program too long.)
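A minimal sketch of such a search, run on the $100002$ example above (the function names and the "distinct digits" test are mine; adapt the digit condition to whatever the puzzle actually requires):

```python
def factor_triples(n):
    """All triples (a, b, c) with a <= b <= c and a*b*c == n.
    The smallest factor of any triple is at most the cube root of n,
    so it suffices to scan a up to there."""
    triples = set()
    for a in range(1, round(n ** (1 / 3)) + 2):
        if n % a:
            continue
        m = n // a
        b = a
        while b * b <= m:
            if m % b == 0:
                triples.add(tuple(sorted((a, b, m // b))))
            b += 1
    return sorted(triples)

def digits_distinct(triple):
    """One simple notion of 'unique digits' across the three factors."""
    s = "".join(str(t) for t in triple)
    return len(s) == len(set(s))
```

Running `factor_triples(100002)` recovers the triple $(6, 7, 2381)$ mentioned above, whose digits $6,7,2,3,8,1$ are all distinct.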
Does the following groupoid have an identity element/zero element?
Let $a \in \mathcal{R}$ and define $e :=0$. Then: $$a \cdot e = ae + e + a = 0+0+a = a.$$ You can easily check that $e \cdot a$ results in the same. Thus, $e$ is the identity element.
Show that the collection of cylinders form an algebra.
For the first question you only have to know what $B^{c}$ means. If an element does not belong to $B$ it belongs to $B^{c}$. For the second question you need the following fact about cylinder sets: $\{x:(x_{t_1},x_{t_2},\ldots,x_{t_n}) \in B\}$ can be written as $\{x:(x_{t_1},x_{t_2},\ldots,x_{t_n},x_{t_{n+1}}) \in B_1\}$ where $B_1 =B \times \mathbb R$ and $t_{n+1}$ is any point in the index set. Using this repeatedly you observe that we can always enlarge the indexing set $(t_1,t_2,...,t_n)$ in any cylinder set by suitably modifying the Borel set $B$. The idea now is to write the two given cylinder sets with the same indexing set (by taking the union of the given indexing sets) and this makes it obvious that their union/intersection is also a cylinder set.
Separable form by substitution
Let $u=4x+7y$, now we get: $du/dx=4+7dy/dx$
Dense subset in Hilbert space: a basic question
Yes. Suppose $z \in \operatorname{Span}(z_0)^\perp$, i.e. $\langle z, z_0 \rangle = 0$. There is a sequence $x_n \in H$ with $x_n \to z$. Let $u$ be any vector in $H$ with $\langle u, z_0 \rangle \ne 0$ (this $u$ must exist because $\operatorname{Span}(z_0)^\perp$ is not dense, so it cannot contain $H$). By rescaling, assume $\langle u, z_0 \rangle = 1$. Now set $y_n = x_n - \langle x_n, z_0 \rangle u$; clearly $y_n \in H$. By continuity, $\langle x_n, z_0 \rangle \to \langle z, z_0 \rangle = 0$, so $y_n \to z$. And $\langle y_n, z_0 \rangle = \langle x_n, z_0 \rangle - \langle x_n, z_0 \rangle \langle u, z_0 \rangle = 0$, so $y_n \in \operatorname{Span}(z_0)^\perp$.
Conjecture $\sum_{m=1}^\infty\frac{y_{n+1,m}y_{n,k}}{[y_{n+1,m}-y_{n,k}]^3}\overset{?}=\frac{n+1}{8}$, where $y_{n,k}=(\text{BesselJZero[n,k]})^2$
There is a rather neat proof of this. First, note that there is already an analogue for this: DLMF §10.21 says that a Rayleigh function $\sigma_n(\nu)$ is defined as a similar power series $$ \sigma_n(\nu) = \sum_{m\geq1} y_{\nu, m}^{-n}. $$ It links to http://arxiv.org/abs/math/9910128v1 among others as an example of how to evaluate such things. In your case, call $\zeta_m = y_{\nu,m}$ and $z=y_{\nu-1,k}$ ($\nu$ is $n$ shifted by $1$), so that after expanding in partial fractions your sum is $$ \sum_{m\geq1} \frac{\zeta_m z}{(\zeta_m-z)^3} = \sum_{m\geq1} \frac{z^2}{(\zeta_m-z)^3} + \frac{z}{(\zeta_m-z)^2}. $$ Introduce the function $$ y_\nu(z) = z^{-\nu/2}J_\nu(z^{1/2}). $$ By DLMF 10.6.5 its derivative satisfies the two relations $$\begin{aligned} y'_\nu(z) &= (2z)^{-1} y_{\nu-1}(z) - \nu z^{-1} y_\nu(z) \\&= -\tfrac12 y_{\nu+1}(z). \end{aligned} $$ It also has the infinite product expansion $$ y_\nu(z) = \frac{1}{2^\nu\nu!}\prod_{k\geq1}(1 - z/\zeta_k). $$ Therefore, each partial sum of $(\zeta_k-z)^{-s}$, $s\geq1$ can be evaluated in terms of derivatives of $y_\nu$: $$ \sum_{k\geq1}(\zeta_k-z)^{-s} = \frac{-1}{(s-1)!}\frac{d^s}{dz^s}\log y_\nu(z). $$ When evaluating this logarithmic derivative, the derivative $y'_\nu$ can be expressed in terms of $y_{\nu-1}$, going down in $\nu$, but the derivative $y'_{\nu-1}$ can be expressed in terms of $y_\nu$ using the other relation that goes up in the index $\nu$. So even higher-order derivatives contain only $y_\nu$ and $y_{\nu-1}$. I calculated your sum using this procedure with a CAS as: $$ -\tfrac12z^2(\log y)''' -z(\log y)'' = \tfrac18\nu + z^{-1} P\big(y_{\nu-1}(z)/y_\nu(z)\big), $$ where $P$ is the polynomial $$ P(q) = -\tfrac18 q^3 + (\tfrac38\nu-\tfrac18) q^2 + (-\tfrac14\nu^2 + \tfrac14\nu - \tfrac18)q. $$ When $z$ is chosen to be any root of $y_{\nu-1}$, $z=\mathsf{BesselJZero}[\nu-1, k]\hat{}2$, $P(q)=0$, your sum is equal to $$ \frac{\nu}{8}, $$ which is $(n+1)/8$ in your notation. 
It is possible to derive a number of such closed forms for sums of this type. For example, by differentiating $\log y$ differently (going $\nu\to\nu+1\to\nu$), one would get $$ \sum_{m\geq1} \frac{y_{\nu,m}y_{\nu+1,k}}{(y_{\nu,m}-y_{\nu+1,k})^3} = -\frac{\nu}{8}. $$ Some other examples, for which the r.h.s. is independent of $z$ ($\zeta_m=y_{\nu,m}, z=y_{\nu-1,l}$, $l$ arbitrary): $$ \begin{gathered} \sum_{k\geq1} \frac{\zeta_k}{(\zeta_k-z)^2} = \frac14,\\ \sum_{k\geq1} \frac{z^2}{(\zeta_k-z)^4} - \frac{1}{(\zeta_k-z)^2} + \frac1{24}\frac{5-\nu}{\zeta_k-z} = \frac{1}{48}, \\ \sum_{k\geq1} \frac{\zeta_k}{(\zeta_k-z)^4} + \frac1{96}\frac{z-\zeta_k-8+4\nu}{(\zeta_k-z)^2} = 0. \end{gathered} $$ or with $z=y_{\nu+1,l}$, $l$ arbitrary: $$ \begin{gathered} \sum_{k\geq1} \frac{z^2}{(\zeta_k-z)^3} = -\tfrac18\nu-\tfrac14, \end{gathered} $$ and they get messier with higher degrees.
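As a numerical sanity check of the $n=0$ case of the original conjecture, one can truncate the sum using tabulated Bessel zeros (the values below are truncated table values, assumed rather than computed here; the terms decay like $m^{-4}$, so ten of them already give about four correct digits):

```python
# First ten positive zeros of J_1 and the first zero of J_0
# (truncated values from standard tables).
j1 = [3.8317060, 7.0155867, 10.1734681, 13.3236919, 16.4706301,
      19.6158585, 22.7600844, 25.9036721, 29.0468285, 32.1896799]
j0_1 = 2.4048256

z = j0_1 ** 2                      # z = y_{0,k} with k = 1
# partial sum of  y_{1,m} * z / (y_{1,m} - z)^3  over the first ten zeros;
# the conjectured value is (n + 1)/8 = 1/8 for n = 0
partial = sum((j ** 2) * z / (j ** 2 - z) ** 3 for j in j1)
```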
A philosophical question on randomness
Practical example: In the absence of randomness, we can balance an egg standing up. This is unstable. The tiniest breeze or shake of the table will cause it to fall. Then it will be stable, lying on its side.
Is it true that $P(z\mid x,y)=P(z\mid x)P(z\mid y)$ if x and y are independent?
$$P(z|x,y)=\frac{P(z,x,y)}{P(x,y)}=\frac{P(z,x,y)}{P(x)P(y)}$$ while $$P(z|x)P(z|y)=\frac{P(x,z)}{P(x)}\frac{P(y,z)}{P(y)}=\frac{P(x,z)P(y,z)}{P(x)P(y)}$$ we can see that $P(z|x,y)= P(z|x)P(z|y)$ is only possible if $P(x,y,z)=P(x,z)P(y,z)$. It should be fairly easy to find examples where this is not true. For example, as the comments already mention, if $x,y,z$ are independent, then $P(x,z)P(y,z)=P(x)P(y)P(z)^2$, while $P(x,y,z)=P(x)P(y)P(z)$, and if $P(x), P(y), P(z)\neq 0$, these two numbers are not the same. For example, if $x,y,z$ are the results of three independent coin flips, then $$P(z=H|x=H,y=H)=\frac12$$ and $$P(z=H|x=H)P(z=H|y=H)=\frac12\cdot\frac12=\frac14\neq\frac12$$
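The coin-flip counterexample can be confirmed by brute-force enumeration with exact rationals (a small sketch; the helper name is mine):

```python
from fractions import Fraction
from itertools import product

# Enumerate three independent fair coin flips (x, y, z), each outcome
# having probability 1/8, and compare P(z=H | x=H, y=H) with
# P(z=H | x=H) * P(z=H | y=H).
outcomes = list(product("HT", repeat=3))

def P(pred):
    """Probability of an event given as a predicate on (x, y, z)."""
    return Fraction(sum(pred(o) for o in outcomes), len(outcomes))

p_joint = P(lambda o: o == ("H", "H", "H")) / P(lambda o: o[:2] == ("H", "H"))
p_zx = P(lambda o: o[0] == "H" and o[2] == "H") / P(lambda o: o[0] == "H")
p_zy = P(lambda o: o[1] == "H" and o[2] == "H") / P(lambda o: o[1] == "H")
```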
Discrete Math on Functions as bijection
Hint: Each $(a,b)\in\mathbb{Z}\times\mathbb{Z}$ is unique. Hence, each $(-b,a)\in\mathbb{Z}\times\mathbb{Z}$ is also unique. Use the definition of a well-defined bijection: each element of the domain $\mathbb{Z}\times\mathbb{Z}$ must be paired with exactly one element of the codomain $\mathbb{Z}\times\mathbb{Z}$ (well-defined), no two distinct elements of the domain may be paired with the same element of the codomain (injective), and every element of the codomain must be paired with some element of the domain (surjective). I can't say any more, or else the answer is obvious.
Sum to $n$ terms the series $\frac{1}{3\cdot9\cdot11}+\frac{1}{5\cdot11\cdot13}+\frac{1}{7\cdot13\cdot15}+\cdots$.
Let $$S_n=\sum_{k=1}^n\frac{1}{(2k+1)(2k+7)(2k+9)}.$$ By the partial fraction decomposition: $$\frac{1}{(2k+1)(2k+7)(2k+9)}=\frac{1/48}{2k+1} -\frac{1/12}{2k+7}+\frac{1/16}{2k+9}$$ Then, after letting $O_n=\sum_{k=1}^n\frac{1}{2k+1}$, we have that \begin{align} S_n&=\frac{O_n}{48}-\frac{O_{n+3}-O_3}{12}+\frac{O_{n+4}-O_4}{16}\\ &=\frac{4O_3-3O_4}{48}+\frac{O_n-4O_{n+3}+3O_{n+4}}{48}\\ &=\frac{1}{140}-\frac{1}{48}\left(O_{n+3}-O_n-3(O_{n+4}-O_{n+3})\right)\\ &=\frac{1}{140}-\frac{1}{48}\left(\frac{1}{2n+3}+\frac{1}{2n+5}+\frac{1}{2n+7}-\frac{3}{2n+9} \right). \end{align}
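The closed form can be verified against the direct sum with exact rational arithmetic (a quick sketch):

```python
from fractions import Fraction

def S_direct(n):
    """Direct partial sum of 1 / ((2k+1)(2k+7)(2k+9))."""
    return sum(Fraction(1, (2 * k + 1) * (2 * k + 7) * (2 * k + 9))
               for k in range(1, n + 1))

def S_closed(n):
    """Telescoped closed form derived above."""
    tail = (Fraction(1, 2 * n + 3) + Fraction(1, 2 * n + 5)
            + Fraction(1, 2 * n + 7) - Fraction(3, 2 * n + 9))
    return Fraction(1, 140) - Fraction(1, 48) * tail
```

For $n=1$ both give $\frac{1}{3\cdot9\cdot11}=\frac{1}{297}$, and as $n\to\infty$ the closed form tends to $\frac1{140}$.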
How to tell if its possible to build a graph based on given info.
Degree list $(3)$ has five vertices of odd degree; do you know why that’s impossible? There is a graph with degree list $(4)$. Let the vertices be labelled $1,2,\ldots,8$. The graph has the following adjacency matrix: $$\begin{array}{cc} &\begin{array}{c}1&2&3&4&5&6&7&8\end{array}\\ \begin{array}{c} 1\\2\\3\\4\\5\\6\\7\\8\end{array}& \begin{bmatrix} 0&1&1&1&1&1&1&0\\ 1&0&1&1&1&1&1&0\\ 1&1&0&1&1&1&1&0\\ 1&1&1&0&1&1&1&0\\ 1&1&1&1&0&1&0&1\\ 1&1&1&1&1&0&0&0\\ 1&1&1&1&0&0&0&0\\ 0&0&0&0&1&0&0&0 \end{bmatrix} \end{array}$$ Vertices $1,2,3,4$, and $5$ have degree $6$, vertex $6$ has degree $5$, vertex $7$ has degree $4$, and vertex $8$ has degree $1$. Here’s how I discovered this graph. If in fact the degree sequence were impossible, the vertex of degree $1$ was likely to be a sticking point, since there are so many vertices of relatively large degree. If we remove it, we’re left with $7$ vertices, of which at least $4$ have degree $6$. That means that every one of the remaining vertices must have degree at least $4$, so depending on which vertex was adjacent to the vertex of degree $1$, we could conceivably have either degree sequence $6,6,6,6,6,4,4$ or degree sequence $6,6,6,6,5,5,4$. I started sketching a graph with vertices $1,2,\ldots,7$, giving vertices $1,2,3$, and $4$ degree $6$. At that point vertices $5,6$, and $7$ had degree $4$. One of those would have to be the vertex of degree $4$ in the original graph; I made it vertex $7$. One of vertices $5$ and $6$ had to be the one adjacent to the vertex of degree $1$ in the original graph; I made it vertex $5$, and I made that degree $1$ vertex number $8$. And at that point it was clear that by adding an edge between vertices $5$ and $6$ I’d have a graph with the right degree sequence.
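For what it's worth, the claimed degree sequence of this adjacency matrix is easy to double-check mechanically (a small sketch):

```python
# The adjacency matrix from the answer, as a list of rows.
A = [
    [0, 1, 1, 1, 1, 1, 1, 0],
    [1, 0, 1, 1, 1, 1, 1, 0],
    [1, 1, 0, 1, 1, 1, 1, 0],
    [1, 1, 1, 0, 1, 1, 1, 0],
    [1, 1, 1, 1, 0, 1, 0, 1],
    [1, 1, 1, 1, 1, 0, 0, 0],
    [1, 1, 1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 0, 0, 0],
]

# A simple graph needs a symmetric matrix with a zero diagonal,
# and the degree of a vertex is its row sum.
symmetric = all(A[i][j] == A[j][i] for i in range(8) for j in range(8))
no_loops = all(A[i][i] == 0 for i in range(8))
degrees = [sum(row) for row in A]
```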
Set of linear transformations being a vector space
Since you already know that the set of linear transformations is closed under the addition and scalar multiplication you've been given, you only need to check the vector space axioms. Is addition associative? Is there a zero element? Are there additive inverses? and so on. Other than that, there's not much to check. Check out the Wikipedia page for the axioms: https://en.wikipedia.org/wiki/Vector_space
AI strategies for losing positions
I think your idea of maximizing the least number of moves your opponent needs to win is basically sound. However, I'd refine it a bit and suggest that you try to maximize the search depth your opponent needs to find a winning response to your next move, assuming that they use the same search strategy as you do. The basic idea is that, if your opponent knows the optimal strategy, they'll win and there's nothing you can do about it. Thus, you should assume that they don't know it. What should you assume they know, then? Well, in the absence of evidence otherwise, it's generally better to overestimate your opponent than to underestimate them, so a conservative assumption would be that your opponent just barely failed to see the winning response to at least one of your possible moves. This suggests that you should figure out which move that was and play it. The complication here, of course, is that your opponent might recognize certain moves as winning ones based on heuristics or prior information, without necessarily having to search all the way to the final victory. This is particularly likely when playing against a human: we make a lot of use of heuristic rules and prior experience. Thus, simply maximizing the number of turns to your opponent's victory may not be a very good strategy, as some of those turns might be very easily predictable even for a rather "dumb" opponent. On the other hand, assuming that you've already taken some effort to come up with a fairly efficient search algorithm tuned for the specific game you're playing, you might as well assume that your opponent is probably doing something similar. Thus, assuming that you're essentially playing against a just slightly weaker copy of yourself seems to give you a fairly good chance of winning, assuming that you have one at all.
Discrete Math - Combinatorics - Trinomial Coefficients question
The lines numbered (1) and (2) are the definition of $\binom{n}{k,\ell,m}$, so it means exactly what they say it means. It’s a recursive definition: it tells you how to compute trinomial coefficients with upper number $n$ if you already know how to compute them with upper number $n-1$. However, you’ve miscopied (2), unless there was a typo in your source: it should read $$\binom{n}{k,\ell,m}=\binom{n-1}{k-1,\ell,m}+\binom{n-1}{k,\ell-1,m}+\binom{n-1}{k,\ell,m-1}\;.$$ Here’s an example to illustrate how to use (1) and (2) to calculate a trinomial coefficient: $$\begin{align*} \binom4{1,2,1}&=\binom3{0,2,1}+\binom3{1,1,1}+\binom3{1,2,0}&\text{using (2)}\\\\ &=\binom32+\binom3{1,1,1}+\binom31&\text{using (1)}\\\\ &=\binom32+\binom2{0,1,1}+\binom2{1,0,1}+\binom2{1,1,0}+\binom31&\text{using (2)}\\\\ &=\binom32+\binom21+\binom21+\binom21+\binom31&\text{using (1)}\\\\ &=3+2+2+2+3\\ &=12\;. \end{align*}$$ The pyramid that you describe is exactly what’s wanted for (b). At the peak you have $\binom0{0,0,0}$, which is $1$. Below that you have a triangle of three $1$’s. The next layer down will have six entries forming a triangle, corresponding to $\binom1{1,0,0},\binom1{0,1,0}$, and $\binom1{0,0,1}$. The next layer will have ten entries in a triangle, corresponding to the ten possible trinomial coefficients $\binom2{k,\ell,m}$ with $0\le k,\ell,m$ and $k+\ell+m=2$. And so on.
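The recursion (2) is easy to code up; note that my handling of the base case (a zero lower index reduces the trinomial coefficient to an ordinary binomial coefficient, as in the worked example) is my reading of (1):

```python
from math import comb, factorial

def trinom(n, k, l, m):
    """Trinomial coefficient via the recursion (2).
    Base cases: a zero lower index reduces to a binomial coefficient
    (my reading of (1)); out-of-range indices contribute 0."""
    if min(k, l, m) < 0 or k + l + m != n:
        return 0
    if k == 0:
        return comb(n, l)
    if l == 0 or m == 0:
        return comb(n, k)
    return (trinom(n - 1, k - 1, l, m) + trinom(n - 1, k, l - 1, m)
            + trinom(n - 1, k, l, m - 1))
```

The worked example above gives `trinom(4, 1, 2, 1) == 12`, which also agrees with the factorial formula $\frac{4!}{1!\,2!\,1!}=12$.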
How to differentiate $ \exp \left\{ \int^t_sf(u)du \right\} $ w.r.t. $t$?
It is a composite of the function $h(t):= \exp(t)$ and $k(t):= \int_s^t f(u) \, \mathrm{d} u$. By the chain rule we have $g'(t) = k'(t) h'(k(t))$. On the other hand, the fundamental theorem of calculus says that $k'(t) = f(t)$. Both together imply that $$g'(t) = k'(t) h'(k(t)) = f(t) \exp(k(t)) = f(t) g(t).$$ Note that this argument also proves that $g$ is differentiable. These steps are valid provided that $f$ is continuous.
Showing inverse composed with function is $x$ for all $x$ in the domain.
Suppose that $f^{-1}(f(x)) \neq x$ for some $x \in D(f)$. Then $(f(x), x) \notin f^{-1}$. But since $$f^{-1} = \{(b,a): (a,b) \in f\}$$ this means $(x, f(x)) \notin f$, which is a contradiction. The proof of the second assertion is similar.
The height projected to the base of the isosceles triangle is equal to H and is twice as large as its projection on the side. Find.....
$|AD|=H,|AE|=H/2$. We have $AE/AD=\cos BAD$, so $\angle BAD=60^\circ$. $\angle ADB=90^\circ$. So $BD=\sqrt3\ AD$, and the base is $BC=2BD=2\sqrt3\ H$. Hence the area of the triangle ($\frac12\times$ base $\times$ height) is $\sqrt3\ H^2$.
Length of a vector $a$ using scalar product of $\langle u|v\rangle$
HINT: $\|a\|^2 = \langle a \vert a \rangle$
Find the kernel of a homomorphism
The kernel of the original map is $H$. Of course, the kernel of the restriction will be $H\cap[H\rtimes K,H\rtimes K]$; but this, in general, contains more than just $[H,H]$. Since $H\triangleleft H\rtimes K$, it also contains $[H,H\rtimes K]=\langle [h,h'k]\mid h,h'\in H,k\in K\rangle$, which is contained in $H$ by the normality of $H$, and certainly contained in $[H\rtimes K,H\rtimes K]$; this is nontrivial and strictly larger than $[H,H]$ unless the semidirect product is actually a direct product. In fact, the kernel is exactly $[H,H\rtimes K]$. Indeed, it is easy to see that every generator of this group is contained in the kernel. Conversely, suppose that $$[h_1k_1,h_2k_2]\cdots [h_{2m-1}k_{2m-1},h_{2m}k_{2m}]$$ is an element of the kernel. This means that $$[k_1,k_2]\cdots[k_{2m-1},k_{2m}]=1.$$ Using the identity $$[xy,zt] = [x,t]^y[y,t][x,z]^{yt}[y,z]^t$$ (note: my commutators are defined by $[a,b]=a^{-1}b^{-1}ab$) we can rewrite each $[h_{2i-1}k_{2i-1},h_{2i}k_{2i}]$ as $$[h_{2i-1},k_{2i}]^{k_{2i-1}}[k_{2i-1},k_{2i}][h_{2i-1},h_{2i}]^{k_{2i-1}k_{2i}}[k_{2i-1},h_{2i}]^{k_{2i}}.$$ Now, all terms except perhaps for $[k_{2i-1},k_{2i}]$ lie in $[H,H\rtimes K]$, and since $[H,H\rtimes K]$ is normal, we can rewrite $$[k_{2i-1},k_{2i}][h_{2i-1},h_{2i}]^{k_{2i-1}k_{2i}}[k_{2i-1},h_{2i}]^{k_{2i}}$$ as $$\alpha [k_{2i-1},k_{2i}]\text{ for some }\alpha\in [H,H\rtimes K].$$ Repeating this, starting from the rightmost factor and working towards the left, we can rewrite $$[h_1k_1,h_2k_2]\cdots [h_{2m-1}k_{2m-1},h_{2m}k_{2m}]$$ as $$x[k_1,k_2][k_3,k_4]\cdots[k_{2m-1},k_{2m}]$$ for some $x\in[H,H\rtimes K]$. And since $[k_1,k_2]\cdots[k_{2m-1},k_{2m}]=1$ by assumption, this proves that $$[h_1k_1,h_2k_2]\cdots [h_{2m-1}k_{2m-1},h_{2m}k_{2m}]\in [H,H\rtimes K]$$ as desired.
solve differential equation (separable)
I don't think you really need to do a substitution. With the final form you have there, you can simply divide by $(1+x)$ and $(1+y)$ to obtain: $$ \frac{x}{1+x} dx + \frac{y}{1+y} dy = 0 $$ Now, your equation is "separated" and you can just integrate to obtain an expression relating $y$ and $x$.
How to find circumference origin position?
You can find point $T$ as $(\frac 12(x_A+x_B),\frac 12(y_A+y_B))$ Then the slope of $TO$ is the negative reciprocal of the slope of $AB$, so is $-\frac {x_B-x_A}{y_B-y_A}$. The line TO is then $y-y_T=-\frac {x_B-x_A}{y_B-y_A}(x-x_T)$ The circle around $A$ with radius $r$ is $(x-x_A)^2+(y-y_A)^2=r^2$. Solve these two simultaneously and you have your answer. There are two solutions. The other will be the other side of AB (upper left in your diagram)
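Putting the steps together numerically (a sketch; the function name is mine, and it assumes $|AB|\le 2r$ so that the circle exists):

```python
from math import hypot, sqrt

def circle_centers(A, B, r):
    """Centers of the circles of radius r through points A and B.
    Returns both solutions, one on each side of the chord AB."""
    (xa, ya), (xb, yb) = A, B
    tx, ty = (xa + xb) / 2, (ya + yb) / 2      # midpoint T of AB
    d = hypot(xb - xa, yb - ya)                # |AB|
    h = sqrt(r * r - (d / 2) ** 2)             # distance from T to O
    ux, uy = -(yb - ya) / d, (xb - xa) / d     # unit vector perpendicular to AB
    return (tx + h * ux, ty + h * uy), (tx - h * ux, ty - h * uy)
```

For example, with $A=(0,0)$, $B=(2,0)$, $r=2$ the two centers are $(1,\pm\sqrt3)$, each at distance $2$ from both $A$ and $B$.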
Finding $\int\frac{\sqrt{1+\sqrt{1+\sqrt{1+\cos(2\sqrt{x+5})}}}}{\sqrt{x}} dx$
I'm not sure what the OP means by 'well posed'. It's an indefinite integral, so there is no question about convergence. As for the antiderivative, it exists, but most likely can't be found in closed form, as it was said in the comments. However, we can simplify the integral a little. First, make a change of variable: $$y=\sqrt{x+5}$$ $$x=y^2-5$$ $$dx=2y~dy$$ Then we obtain $$2 \int \sqrt{1+\sqrt{1+\sqrt{1+\cos(2y)}}} \frac{y ~dy}{\sqrt{y^2-5}}$$ Now use the double angle formula: $$|\cos y| = \sqrt{\frac{1+\cos 2y}{2}}$$ We get: $$2 \int \sqrt{1+\sqrt{1 + \sqrt{2} |\cos y|}} \frac{y ~dy}{\sqrt{y^2-5}}$$ Here $||$ denotes absolute value. We got rid of one of the nested roots. I don't think we can simplify it any further.
Write $\cos(2t) + \sqrt3 \sin(2t)$ in the form $C\cos(w(t-t_0))$
Hint: use the angle sum and difference identities for trigonometric functions: $$C\cos(w(t-t_0))=C(\cos(wt)\cos(wt_0)+\sin(wt)\sin(wt_0))$$ $$=C\cos(wt_0)\cos(wt)+C\sin(wt_0)\sin(wt)$$ Compare this expression (for $w=2$) with $$\cos(2t)+\sqrt{3}\sin(2t)$$ After comparing both representations, we can conclude $$1=C\cos(2t_0)$$ $$\sqrt{3}=C\sin(2t_0)$$ Square both equations and add them to get $$1+3=C^2(\cos^2(2t_0)+\sin^2(2t_0))=C^2$$ In the last step we used $\cos^2(2t_0)+\sin^2(2t_0)=1$. So $C=\pm2$. Now plug this result into $1=C\cos(2t_0)$ and solve for $t_0$.
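Taking $C=2$ gives $\cos(2t_0)=\frac12$ and $\sin(2t_0)=\frac{\sqrt3}{2}$, so $t_0=\frac{\pi}{6}$ works; a quick numerical check of the resulting identity $\cos(2t)+\sqrt3\sin(2t)=2\cos\!\big(2(t-\pi/6)\big)$:

```python
from math import cos, sin, pi, sqrt

# Both sides of the identity, sampled over a range of t values.
def lhs(t):
    return cos(2 * t) + sqrt(3) * sin(2 * t)

def rhs(t):
    return 2 * cos(2 * (t - pi / 6))
```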
Does the following limit exist? (a)$ \lim_{(x,y) \to (0,0)} f(x,y) = \frac{x^2+y}{x^2+y^2}$?
We use $$x=\frac{1}{n},\qquad y=\frac{\sqrt{n}}{n}$$ and plug this into the function $$f(x,y)=\frac{x^2+y}{x^2+y^2}$$ to get $${\frac {{n}^{2}+\sqrt {n}}{\sqrt {n} \left( n+1 \right) }}$$ and we obtain $$\lim_{n\to \infty}{\frac {{n}^{2}+\sqrt {n}}{\sqrt {n} \left( n+1 \right) }}=\infty$$ So $f$ is unbounded along this path approaching $(0,0)$, and therefore the limit doesn't exist.
Show a matrix is similar to a lower triangular matrix
That means find an invertible matrix $P$ with \begin{align} B = P^{-1} A P \iff \\ \left( \begin{array}{cc} \lambda & 0 \\ 1 & \lambda \end{array} \right) &= \frac{1}{ad-bc} \left( \begin{array}{cc} d & -b \\ -c & a \end{array} \right) \left( \begin{array}{cc} 2 & -1 \\ 0 & 2 \end{array} \right) \left( \begin{array}{cc} a & b \\ c & d \end{array} \right) \\ &= \frac{1}{ad-bc} \left( \begin{array}{cc} d & -b \\ -c & a \end{array} \right) \left( \begin{array}{cc} 2a-c & 2b-d \\ 2c & 2d \end{array} \right) \\ &= \frac{1}{ad-bc} \left( \begin{array}{cc} 2(ad-bc) - cd & -d^2 \\ c^2 & 2(ad-bc) + cd \end{array} \right) \end{align} Comparing components, we take $d=0$ and continue with \begin{align} \left( \begin{array}{cc} \lambda & 0 \\ 1 & \lambda \end{array} \right) &= \frac{1}{-bc} \left( \begin{array}{cc} -2bc & 0 \\ c^2 & -2bc \end{array} \right) \\ &= \left( \begin{array}{cc} 2 & 0 \\ -c/b & 2 \end{array} \right) \end{align} So we can pick any $a$, any $b \ne 0$ and then choose $c = -b$ and $d = 0$. $$ P = \left( \begin{array}{cc} a & b \\ -b & 0 \end{array} \right) \quad P^{-1} = \left( \begin{array}{cc} 0 & -1/b \\ 1/b & a/b^2 \end{array} \right) $$ For example $$ P = \left( \begin{array}{cc} 1 & 1 \\ -1 & 0 \end{array} \right) \quad P^{-1} = \left( \begin{array}{cc} 0 & -1 \\ 1 & 1 \end{array} \right) $$ should do the job of making $B$ similar to $A$. \begin{align} P^{-1} A P &= \left( \begin{array}{cc} 0 & -1 \\ 1 & 1 \end{array} \right) \left( \begin{array}{cc} 2 & -1 \\ 0 & 2 \end{array} \right) \left( \begin{array}{cc} 1 & 1 \\ -1 & 0 \end{array} \right) \\ &= \left( \begin{array}{cc} 0 & -1 \\ 1 & 1 \end{array} \right) \left( \begin{array}{cc} 3 & 2 \\ -2 & 0 \end{array} \right) \\ &= \left( \begin{array}{cc} 2 & 0 \\ 1 & 2 \end{array} \right) \\ &= B \end{align}
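The final verification is mechanical enough to script (a sketch with a hand-rolled $2\times2$ product):

```python
def matmul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A     = [[2, -1], [0,  2]]
P     = [[1,  1], [-1, 0]]
P_inv = [[0, -1], [1,  1]]

# B = P^{-1} A P should be the lower triangular matrix [[2, 0], [1, 2]]
B = matmul(matmul(P_inv, A), P)
```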
What is being maximised in the channel capacity formula?
Channel capacity computes the maximum of the mutual information $I(X,Y)$ between the input $X$ and output $Y$ for a channel. This maximization is done over all possible probability distributions $p(X)$ of the input signal.
Which of two distributions was sampled from?
For the first question: the expected value of the empirical mean is $$ E\left[ \frac 1N \sum_{i=1}^N X_i \right] = \frac 1N \sum_{i=1}^N E[X_i] $$ For the sample A it is $\hat\mu$. For the sample B it is $$ \frac 1N \sum_{i=1}^N E[X_i] = \frac 1N \sum_{i=1}^N E[E[X_i | \mu_i]] = E\left[\frac 1N \sum_{i=1}^N E[X_i | \mu_i] \right] = E\left[\frac 1N \sum_{i=1}^N \mu_i\right] = \hat\mu $$ using the same computation as for A. For the second question: Let us try with the second moment: $$ \text{var}\left[ \frac 1N \sum_{i=1}^N X_i \right] = \frac 1{N^2} \sum_{i=1}^N \text{var}[X_i] $$ For the case A: this is $\frac{\hat\sigma^2}N$. For the case B: it is $$ \frac 1{N^2} \sum_{i=1}^N \left( \text{var}[E[X_i|\mu_i, \sigma_i]] + E[\text{var}[X_i|\mu_i, \sigma_i]] \right) = \frac 1{N^2} \sum_{i=1}^N \left( \sigma_{\hat\mu}^2 + (\sigma_{\hat\sigma}^2 + \sigma^2) \right) = \frac 1N \left( \sigma_{\hat\mu}^2 + \sigma_{\hat\sigma}^2 + \sigma^2 \right) $$according to the conditional variance formula. So the sample B must be more scattered than the sample A.
Statistics. How are standard error and confidence intervals useful without knowing population size?
Here's the thing. Qualitatively, we all 'grok' the Law of Large Numbers: we all understand the intuitive idea that as the sample size increases, the observed percentage is more likely to approximate the actual percentage of the target group as a whole. For example, flipping an unbiased coin 10 times and getting 7 heads is not weird ... but flipping an unbiased coin 100 times and getting 70 heads is weird ... and getting 700 heads in 1000 flips makes it a near-certainty that we are dealing with a biased coin. Unfortunately, what we're not so good at is the quantitative side of things. When, for example, we poll 1000 people about something, but this is out of a population of hundreds of millions, we feel that our sample couldn't possibly be anywhere large enough to give us some kind of narrow confidence interval ... and yet that is exactly what is the case: with a sample size of 1000, it doesn't matter whether your target size is 1 million, 1 billion, 1 trillion or, for that matter, infinite, the margin of error will only be about 3%! That is, we are surprised how narrow the confidence interval is, given how far removed the sample size is from the target population size. But the thing is, the target population size really doesn't matter. Well, it matters if the target is close to the sample size, e.g. if the sample is 1000 and the target is 2000, then the margin of error will actually start to visibly decrease ... (i.e. it's getting even smaller yet!) ... but once the target is 'far enough' removed from the sample size, it really doesn't matter whether it's billions or quadrillions or infinite. One way to see this is to go back to the coins. Here, we actually did a good bit better: we know that with 1000 flips, we should get pretty close to the actual percentage with which this coin comes up heads. But what is the 'target population' here?
Well, it's effectively infinite: a biased coin that comes up heads 70% of the time, comes up heads 70% of the time when flipped infinitely many times. But if we flip it a 'mere' 1000 times, you and I know we should be getting pretty darn close to 70% as well. Or, more to the point, if we got 70% heads in 1000 flips, then we'll get somewhere near 70% for infinite flips as well.
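To make the "about 3% regardless of population size" claim concrete, here is the standard normal-approximation margin of error with an optional finite-population correction (the formula, the conservative $p=0.5$, and the $z=1.96$ quantile are standard textbook assumptions, not taken from the answer above):

```python
from math import sqrt

def margin_of_error(n, N=None, p=0.5, z=1.96):
    """95% margin of error for a proportion near p from a sample of n.
    If a finite population size N is given, apply the finite-population
    correction factor; otherwise treat the population as infinite."""
    moe = z * sqrt(p * (1 - p) / n)
    if N is not None:
        moe *= sqrt((N - n) / (N - 1))
    return moe
```

With $n=1000$ this gives about $0.031$ for an infinite population, essentially the same for $N=10^9$, and a visibly smaller value once $N$ is as close as $2000$.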
About the convergence of an integral
Yes. I was wrong before when I said no. (Thanks, Claude Leibovici.) $$\frac{d(\frac{\ln(y-1)}{\ln(y)})}{dy} = \frac{\frac{\ln y}{y-1} - \frac{\ln(y-1)}{y}}{\ln^2 y} = \frac{1}{(y-1)\ln y} - \frac{\ln(y-1)}{y\ln^2y}$$ $$\int_2^\infty \frac{y}{5\ln^2(y)} \frac{d(\frac{\ln(y-1)}{\ln(y)})}{dy} dy = \int_2^\infty \left(\frac{y}{5(y-1)\ln^3(y)} - \frac{\ln(y-1)}{5\ln^4(y)} \right)dy = \int_2^\infty \frac{y\ln(y) - (y-1)\ln(y-1)}{5(y-1)\ln^4(y)}dy = \int_2^\infty \frac{(y-1)(\ln(\frac{y}{y-1})) + \ln(y)}{5(y-1)\ln^4(y)}dy = \int_2^\infty\frac{\ln\left(1 +\frac{1}{y-1}\right)}{5\ln^4(y)}dy + \int_2^\infty\frac{1}{5(y-1)\ln^3(y)}dy < \int_2^\infty\frac{1.4}{5y\ln^4(y)}dy + \int_2^N\frac{1}{5(y-1)\ln^3(y)}dy + \int_N^\infty\frac{1}{5(y-1)\ln^3(y-1)}dy = \frac{1.4}{15\ln^{3}2} + \frac{1}{10\ln^{2}(N-1)} + \int_2^N\frac{1}{5(y-1)\ln^3(y)}dy$$ Since this is a fairly rough upper bound, I'm not going to bother to work it out further, but this shows it exists. (If you're wondering about the $1.4$, it's a number chosen to be slightly greater than $\sup_{y\ge2} y\ln\left(1+\frac{1}{y-1}\right)=2\ln 2\approx 1.386$, which justifies the bound $\ln\left(1+\frac{1}{y-1}\right)\le\frac{1.4}{y}$ used above.)
Generating function for partitions of a number in which no odd number appears twice
You'll have a factor of $(1+q^n+q^{2n}+q^{3n}+\cdots)$ for all even $n$, but just a factor of $1+q^n$ for odd $n$. This is because even numbers can occur any number of times.
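So the generating function is $\prod_{k\ge1}\frac{1+q^{2k-1}}{1-q^{2k}}$; a sketch comparing its coefficients against a brute-force count of the partitions (the function names are mine):

```python
def coeffs_from_product(max_n):
    """Coefficients up to degree max_n of
    prod over odd n of (1 + q^n)  *  prod over even n of 1/(1 - q^n),
    built by in-place polynomial multiplication."""
    c = [0] * (max_n + 1)
    c[0] = 1
    for n in range(1, max_n + 1):
        if n % 2:   # odd part: used at most once -> multiply by (1 + q^n)
            for d in range(max_n, n - 1, -1):
                c[d] += c[d - n]
        else:       # even part: any multiplicity -> multiply by 1/(1 - q^n)
            for d in range(n, max_n + 1):
                c[d] += c[d - n]
    return c

def count_partitions(n, largest=None, used_odd=frozenset()):
    """Brute force: partitions of n in which no odd part repeats."""
    if n == 0:
        return 1
    if largest is None:
        largest = n
    total = 0
    for part in range(min(largest, n), 0, -1):
        if part % 2 and part in used_odd:
            continue
        nxt = used_odd | {part} if part % 2 else used_odd
        total += count_partitions(n - part, part, nxt)
    return total
```

For instance the admissible partitions of $4$ are $4$, $3+1$, and $2+2$ (three of them), matching the coefficient of $q^4$.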
Number of paths on a (5,3) grid
Going from $(0,0)$ to $(5,3)$ will always require 8 moves, with $5$ being right moves, and $3$ being up moves. One such example is the sequence $UUURRRRR$. How many ways can you arrange this sequence? You can think about it as "how many ways can you change 3 $R$'s to $U$'s in the sequence $RRRRRRRR$". If you swap the different $R$'s around, will the sequence still be the same?
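For reference, the arrangement count agrees with a direct dynamic program over the grid (note this sketch gives the answer away):

```python
from math import comb

def grid_paths(right, up):
    """Paths from (0,0) to (right, up) using unit right/up steps,
    counted cell by cell: ways into a cell = ways from the left
    + ways from below."""
    ways = [[1] * (up + 1) for _ in range(right + 1)]
    for i in range(1, right + 1):
        for j in range(1, up + 1):
            ways[i][j] = ways[i - 1][j] + ways[i][j - 1]
    return ways[right][up]
```

The count for the $(5,3)$ grid equals $\binom{8}{3}$, the number of ways to place the $3$ up-moves among $8$ moves.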
About measurability of the interior of a measurable set.
Interiors are open, and hence measurable. However, your equality need not hold: for example, in $\Bbb{R}$ the irrationals have infinite measure but empty interior.
Is $f(x,y)$ = $e^\frac{1}{1-x^2-y^2}$ if $x^2+y^2<1$ and $0$ if $x^2+y^2\ge1$ differentiable?
It has been treated many times here that the function $$p(t):=\left\{\eqalign{e^{-1/t}&\qquad(t>0)\cr 0\quad&\qquad(t\leq0)\cr}\right.$$ behaves strangely at $t=0$, but is $C^\infty$ on all of ${\mathbb R}$ nevertheless. Your function $f$ of $(x,y)$ can be viewed as the composition of the smooth function $$g:\quad (x,y)\mapsto 1-x^2-y^2$$ with the $p$ above: $$f=p\circ g:\quad (x,y)\mapsto\left\{\eqalign{\exp\left({-1\over 1-x^2-y^2}\right)&\qquad(x^2+y^2<1)\cr 0\quad\qquad&\qquad(x^2+y^2\geq1)\ .\cr}\right.$$ This implies that $f$ is a smooth function of $(x,y)$ on the whole plane ${\mathbb R}^2$.
Transitivity of Equivalence Relation of Tangent Vectors on Manifold
Indeed, at this point the use of the chain rule is still illegal because the domains of charts are open subsets of the manifold and you still "don't know" what a derivative means there. Here's how to fix it. If $\gamma_1\sim_p\gamma_2$ and $\gamma_2\sim_p \gamma_3$, there are $(U,\varphi)$ and $(V,\psi)$ around $p$ with $(\varphi\circ \gamma_1)'(0) = (\varphi\circ \gamma_2)'(0)$ and $(\psi\circ \gamma_2)'(0)=(\psi\circ \gamma_3)'(0)$, alright. Compute $$\begin{align} (\varphi\circ \gamma_3)'(0) &= (\varphi\circ\psi^{-1}\circ \psi\circ\gamma_3)'(0) \\ &= {\rm d}(\varphi\circ \psi^{-1})_0((\psi\circ\gamma_3)'(0)) \\ &= {\rm d}(\varphi\circ \psi^{-1})_0((\psi\circ\gamma_2)'(0)) \\ &= (\varphi\circ \psi\circ \psi^{-1}\circ\gamma_2)'(0) \\ &= (\varphi \circ \gamma_2)'(0) \\ &= (\varphi\circ \gamma_1)'(0).\end{align}$$ My use of the chain rule here was legal because I made sure to apply it only for the transition between charts, which is a map between open subsets of Euclidean spaces, and composing a curve in the manifold with a chart gives a curve in Euclidean space.
Does the integral $\int\limits^{{\pi}}_{0} \frac{3\sin^2\left(2x\right)}{\sqrt{x}}\,\mathrm{d}x$ converge or diverge?
Hint. The integrand is continuous over $(0,\pi]$ and one has $$ \left|\int\limits^{{\pi}}_{0} \dfrac{3\sin^2\left(2x\right)}{\sqrt{x}}\,\mathrm{d}x\right|\le\int\limits^{{\pi}}_{0} \left|\dfrac{3\sin^2\left(2x\right)}{\sqrt{x}}\right|\,\mathrm{d}x\le3\int\limits^{{\pi}}_{0} \dfrac{1}{\sqrt{x}}\,\mathrm{d}x<\infty $$
Prove or refute that $\log_5(0.7-2^x)=\log_2(0.7-5^x)$ if and only if $x=-1$
It is sufficient to prove that $f'<g'$: \begin{align}f'(x)<g'(x)&\iff-\frac{2^x}{.7-2^x}\frac{\log2}{\log5}<-\frac{5^x}{.7-5^x}\frac{\log5}{\log2}\\ &\iff\frac{2^x}{.7-2^x}\frac{.7-5^x}{5^x}>\frac{\log^25}{\log^22}=:k\\ &\iff\frac{.7\cdot5^{-x}-1}{.7\cdot2^{-x}-1}>k\\ &\iff(.7\cdot5^{-x}-1)>(.7\cdot2^{-x}-1)k\\ &\iff2^{-x}k-5^{-x}<\frac{10}7(k-1)\\ \end{align} Thus we are showing that $h(x):=2^{-x}k-5^{-x}$ has a particular upper bound. Since it is a difference of exponentials, we already know it has one root and is upper bounded. The derivative $h'(x)=2^{-x}k\log 2-5^{-x}\log 5$, which is zero when $x=x^\circ:=-\log_{5/2}\log_25$. Then $$h(x)\le 2^{-x^\circ}k-5^{-x^\circ}\approx5.805<6.273\approx\frac{10}7(k-1).$$ (Sorry I had to resort to numerical evaluations, but this is a rigorous proof, since with a good enough evaluation of the two constants you can bound one above the other.)
Mahler volume of regular polygons
I think you may be misunderstanding the concept of duality, seeing that your picture shows $P^\circ$ inscribed in $P$. This is not what the duality of convex bodies means. Let's take a square of sidelength $2L$. It is the convex hull of the points $(\pm L,\pm L)$. So, the dual polygon is bounded by the lines $\pm Lx\pm Ly= 1$. This implies that the dual polygon is described by $|x|+|y|\le 1/L$. This is a square with diagonal $2/L$. Its sidelength is $\sqrt{2}/L$. Areas: the area of original square is $4L^2$. The area of its dual is $2/L^2$. The Mahler volume is $8$, independent of $L$.
Reciprocal of $7.5^{1-x}$
The reciprocal of $$7.5^{1-x}$$ is $$\frac 1{7.5^{1-x}}=\frac 1{7.5\cdot 7.5^{-x}}=\frac 1{7.5}\cdot\frac 1{7.5^{-x}}\approx 0.1333\cdot 7.5^x$$
Inner Product and Norms of vectors
First the inner product. Using the definition we find $$\langle p_0,p_1\rangle = p_0(-2)p_1(-2) + p_0(0)p_1(0) + p_0(2)p_1(2) = \\ =5(-14) + 1(-2) + (-3)(-6)= -70 -2 +18 = -54 . $$ The norms are computed in the same way: $$||p_0|| = \sqrt{\langle p_0,p_0\rangle} = \sqrt{5^2 + 1^2 + (-3)^2} = \sqrt{35} ,\\ ||p_1|| = \sqrt{\langle p_1,p_1\rangle} =\sqrt{14^2 + (-2)^2 + (-6)^2} = \sqrt{236} . $$
Area under a normal distribution: Why is my answer wrong?
(2) is just the square of (1). $$\int_{-\infty}^\infty \int_{-\infty}^\infty e^{-(x^2+y^2)/2} \, dx \, dy = \left(\int_{-\infty}^\infty e^{-x^2/2} \, dx\right) \left(\int_{-\infty}^\infty e^{-y^2/2} \, dy\right)$$
Soccer and probability distributions
I assumed it is equally likely to end up in Brazil's group or not, since each group contains 4 teams; this would make the probability of landing in Brazil's group $\frac{1}{2}$. (This is wrong: as Graham Kemp pointed out, it should be $\frac{3}{7}$.) Let $X$ be the number of points scored in the game. Since $X$ is a discrete random variable (it has finitely many outcomes), to find the pmf $f(x)$ you must find $P(X=x)$ for each $x\in\{3,4,6\}$. To do this it may be helpful to note that (letting $B$ be the event of ending up in Brazil's group) $$P(X=x)=P(X=x\mid B)P(B)+P(X=x\mid B^{c})P(B^{c}),$$ which is just an application of the law of total probability. Thus your pmf is a piecewise function. For the CDF you just have to find $P(X\leq x)$, which you get by summing the pmf over all values less than or equal to $x$.
Troubles with continuum hypothesis
Kaplansky's conjecture for Banach algebras is an example of an interesting statement with no obvious set-theoretic content whose truth depends on CH. The conjecture is that every algebraic homomorphism from $C(X)$ (where $X$ is some compact Hausdorff space) to a Banach algebra is continuous. Under CH, the conjecture is false for any infinite choice of $X$. Without CH, the conjecture may be true. In general there's a character to CH which makes pathological uncountable things more easily "constructible", because you can always build them out of countable pieces through a process of transfinite induction. I no longer remember (if I ever knew) how the construction of the discontinuous homomorphism works, but I'd expect it's something along these lines.
How to derive mean and variance for a Bayes estimator?
I get a slightly different value for the posterior variance for $\theta$. You have: $\pi (\theta | \mathbf{X}) \propto_\theta p(\mathbf{X} | \theta) \pi(\theta) = \frac{1}{\left(\sigma\sqrt{2\pi}\right)^n } \; e^{ -\sum_i\frac{(X_i-\theta)^2}{2\sigma^2} }\frac{1}{\sigma\sqrt{2\pi/\kappa_0} } \; e^{ -\frac{(\theta-\theta_0)^2}{2\sigma^2/\kappa_0} } \\ \propto \exp\left(-\frac{\sum_i X_i^2 - 2 \sum_i X_i \theta+ n \theta^2 + \kappa_0\theta^2-2 \kappa_0 \theta_0 \theta+\kappa_0\theta_0^2}{2\sigma^2}\right) \\ \propto \exp\left(-\frac{(\kappa_0+n)\theta^2 - 2(\kappa_0\theta_0+\sum_i X_i ) \theta }{2\sigma^2}\right) \\ \propto \exp\left(-\dfrac{\left(\theta - \dfrac{ \kappa_0\theta_0+\sum_i X_i }{\kappa_0+n} \right)^2 }{2\dfrac{\sigma^2}{\kappa_0+n}}\right) $ which, using $\overline{\mathbf{X}} =\frac1n \sum_i X_i$, is proportional to the density of a normal distribution with mean $ \dfrac{ \kappa_0\theta_0+n\overline{\mathbf{X}} }{\kappa_0+n} $ and variance $\dfrac{\sigma^2}{\kappa_0+n}$
Why does $E=\{r \in \mathbb{Q} : -2 \leq r \leq 2\}$ have no maximal or minimal?
I have no idea what you are saying. The set $\{r\in \mathbb{Q}\mid -2\le r\le 2\}$ certainly does have both a max and a min. The $\inf=\min$ is $-2$ and the $\sup=\max$ is $2$. I wonder if you mean something like $\{r\in \mathbb{Q}\mid -2\le r^2\le 2\}$ (although the "$-2$" would be peculiar, since a square is never negative). It is true that there is no rational $r$ such that $r^2= 2$, so the set $\{r\in \mathbb{Q}\mid 0\le r^2\le 2\}$ has no max or min.
Why $a_{-1}$ term of Laurent series may not be residue?
Thank you for your comment on my answer here https://math.stackexchange.com/a/845625/159855 I believe that you are absolutely correct. It was my fault for using the same letter $R$ in my first and second paragraphs there. I did not intend it to be the same $R$, so certainly I should have used two different symbols ! Please see the edit that I made to that answer, where I have gone into a little more detail.
Integral of $x^2\cos^n x$ over $[-\pi,\pi]$
Too long for a comment. I suppose that this would be given in terms of some hypergeometric functions. However, having a look at "Table of Integrals, Series, and Products" (seventh edition) by I.S. Gradshteyn and I.M. Ryzhik, at the bottom of page 214, there is an interesting recurrence relation for the antiderivative $$\int x^m\cos^n(x)\,dx=\frac{x^{m-1} \cos ^{n-1}(x) (m \cos (x)+n x \sin (x))}{n^2} +\frac{(n-1)}n\int x^m \cos ^{n-2}(x)\,dx-$$ $$\frac{m(m-1) }{n^2}\int x^{m-2} \cos ^n(x)\,dx$$ Applied to $m=2$, this would give $$I_n=(-1)^n\frac{4 \pi }{n^2}-\frac{2 \sqrt{\pi } \left(1+(-1)^n\right) \Gamma \left(\frac{n+1}{2}\right)}{n^2\, \Gamma \left(\frac{n+2}{2}\right)}+\frac{n-1}n I_{n-2}$$ with $I_0= \frac{2 \pi ^3}{3}$ and $I_1= -4\pi$. This means that $$I_{2n}=\frac{\pi }{n^2}-\frac{\sqrt{\pi }\, \Gamma \left(n+\frac{1}{2}\right)}{n^3 \Gamma (n)}+\frac{2n-1}{2n} I_{2n-2}$$ $$I_{2n+1}=-\frac{4 \pi }{(2 n+1)^2}+\frac{2n}{2n+1} I_{2n-1}$$ Edit: In terms of hypergeometric functions, at least $$I_{2n+1}=-\frac{\pi \, 2^{1-2 n}}{(2 n+1)^2}\,\,\, _3F_2\left(-2 n-1,-n-\frac{1}{2},-n-\frac{1}{2};\frac{1}{2}-n,\frac{1}{2}-n;-1\right)$$
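Here is a quick numerical sanity check of the even-index recurrence at $n=1$ (midpoint-rule integration; `I` is my own helper, not from the tables):

```python
import math

def I(n, steps=200000):
    # midpoint rule for I_n = ∫_{-π}^{π} x^2 cos^n(x) dx
    h = 2 * math.pi / steps
    total = 0.0
    for i in range(steps):
        x = -math.pi + (i + 0.5) * h
        total += x * x * math.cos(x) ** n
    return total * h

# even recurrence with n = 1:  I_2 = π - √π·Γ(3/2)/Γ(1) + (1/2)·I_0
I0 = 2 * math.pi ** 3 / 3
rhs = math.pi - math.sqrt(math.pi) * math.gamma(1.5) / math.gamma(1) + 0.5 * I0
print(I(2), rhs)   # both ≈ π/2 + π³/3 ≈ 11.906
```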
Prove that the Simson line of $P$ bisects the segment $HP$ from the orthocentre $H$ to $P$
Let $P_A,P_B,P_C$ be the projections of $P$ on the sides $BC,AC,AB$; let $Q$ be the intersection between $AH$ and the Simson line of $P$. We want to show that $HQ=PP_A$. If we call $H_A$ the reflection of $H$ over the side $BC$, we have that $H_A$ lies on the circumcircle of $ABC$, so, in order to prove that $HQ=PP_A$, it is sufficient to show that $PP_A QH_A$ is an isosceles trapezoid. Let $\theta=\widehat{P_C P_A B}=\widehat{P_B P_A C}$. Since $P P_A P_C B$ and $P P_A C P_B$ are cyclic quadrilaterals, we have $\theta=\widehat{P_C P B}=\widehat{P_B P C}$, too. Now: $$\widehat{P_A Q H_A}=\frac{\pi}{2}-\theta,$$ and: $$\widehat{P H_A Q}=\widehat{P H_A A}=\widehat{P B A}=\widehat{P B P_C}=\frac{\pi}{2}-\theta.$$ Since $PP_A$ and $QH_A$ are both orthogonal to $BC$, $PP_A Q H_A$ is an isosceles trapezoid and we have $PP_A=QH$, QED.
Euler's infinite product for the sine function and differential equation relation
The first sum is a known sum which I will not prove here: $$\sum_{k=1}^{\infty} \frac{1}{\pi^2 k^2-x^2} = \frac{1}{2 x} \left ( \frac{1}{x}-\cot{x}\right)$$ The second sum, on the other hand, I could not find in a reference. You can, however, evaluate it using residues. That is, $$\sum_{k=-\infty}^{\infty} \frac{1}{(\pi^2 k^2-x^2)^2} = -\sum_{\pm} \text{Res}_{z=\pm x/\pi} \frac{\pi \cot{\pi z}}{(\pi^2 z^2-x^2)^2}$$ I will spare you the residue calculation here; needless to say, the result for the sum is $$\sum_{k=-\infty}^{\infty} \frac{1}{(\pi^2 k^2-x^2)^2} = \frac{\cot^2{x}}{2 x^2} + \frac{\cot{x}}{2 x^3}+ \frac{1}{2 x^2}$$ which means that $$\sum_{k=1}^{\infty} \frac{1}{(\pi^2 k^2-x^2)^2} = \frac{\cot^2{x}}{4 x^2} + \frac{\cot{x}}{4 x^3}+ \frac{1}{4 x^2} - \frac{1}{2 x^4}$$ I also leave the algebra to the reader in plugging these expressions into the equation the OP has provided. In the end, yes, the relation is true.
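As a numerical sanity check of the two closed forms (partial sums at the arbitrary point $x=0.7$):

```python
import math

x = 0.7   # any x that is not a multiple of π
N = 200000

# first sum vs. (1/(2x))(1/x - cot x)
s1 = sum(1 / (math.pi ** 2 * k ** 2 - x ** 2) for k in range(1, N + 1))
closed1 = (1 / x - 1 / math.tan(x)) / (2 * x)

# second (one-sided) sum vs. cot²x/(4x²) + cot x/(4x³) + 1/(4x²) - 1/(2x⁴)
cot = 1 / math.tan(x)
s2 = sum(1 / (math.pi ** 2 * k ** 2 - x ** 2) ** 2 for k in range(1, N + 1))
closed2 = cot ** 2 / (4 * x ** 2) + cot / (4 * x ** 3) + 1 / (4 * x ** 2) - 1 / (2 * x ** 4)

print(s1, closed1)
print(s2, closed2)
```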
Probability married couple problem
Your argument is not correct. When they make choices in succession, the number of options of each person depends on the choices made before. E.g., if B chooses 4 then C can only choose between 2 and 5. We have to count the number of permutations $\pi:[5]\to[5]$ with exactly one fixed point. There are $5$ ways to choose the fixed point $f$. Given $f$ we have to count the permutations of the four-element set $[5]\setminus\{f\}$ without fixed points. Such permutations are called derangements; there are $9$ of them. As there are $120$ permutations in all, the requested probability comes to $$p={5\cdot 9\over 120}={3\over8}\ .$$
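Brute force over all $120$ permutations confirms the count:

```python
from itertools import permutations
from fractions import Fraction

# count permutations of {0,...,4} with exactly one fixed point
one_fixed = sum(1 for p in permutations(range(5))
                if sum(p[i] == i for i in range(5)) == 1)
print(one_fixed, Fraction(one_fixed, 120))   # 45 3/8
```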
Find intersection(s) between parametrized parabola and a line
If $P=(x_P,y_P)$ and $Q=(x_Q,y_Q)$ the line between $PQ$ has equation $\frac{x-x_P}{x_Q-x_P}=\frac{y-y_P}{y_Q-y_P}$ which we can rewrite as $$ ax+by+c=0, $$ where the coefficients $a$, $b$ and $c$ are very easily computed. Now, to get the values of the parameter $t$ for which your generic parabola point $(x(t),y(t))$ meets the line $PQ$ you just need to solve the equation (in $t$) $$ ax(t)+by(t)+c=0. $$ This is actually an easy task, since it's just a quadratic equation. Mind that the procedure is actually more general: it can be easily adapted to find the points where any curve given in parametric form meets a given line. Of course, in general one runs into the problem that the final equation may not be easily solvable.
Minimizing the criterion function $f(a) = \int_0^1 [g(x) - p(x)]^2\ dx$ by a polynomial
Since the goal is to approximate $g$ by the polynomial $p$, the optimal coefficients are those that minimize the given integral (so that $g\approx p$). The determinant of the Hessian matrix must then be $>0$. When the first derivatives are set to zero, the linearity of integration allows us to write the result as a system of $n$ equations with the coefficient of $a_k$ being $-\int_0^1 x^{k+j}\ dx$. This can be solved with Cramer's rule if desired.
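As a sketch (the target $g(x)=e^x$ and the degree are my own choices, since the problem statement isn't shown here), the normal equations for a linear fit on $[0,1]$ have the Hilbert-matrix entries $\int_0^1 x^{k+j}\,dx=\frac1{k+j+1}$ and can be solved by Cramer's rule:

```python
import math

# Hypothetical example: approximate g(x) = e^x on [0, 1] by p(x) = a0 + a1*x.
# Setting the first derivatives of the criterion to zero gives
#   sum_j a_j ∫ x^{k+j} dx = ∫ x^k g(x) dx,   k = 0, 1.
H = [[1.0, 1 / 2], [1 / 2, 1 / 3]]   # ∫_0^1 x^{k+j} dx = 1/(k+j+1)
b = [math.e - 1, 1.0]                 # ∫_0^1 e^x dx and ∫_0^1 x e^x dx
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]    # = 1/12 > 0
a0 = (b[0] * H[1][1] - H[0][1] * b[1]) / det   # Cramer's rule
a1 = (H[0][0] * b[1] - b[0] * H[1][0]) / det
print(a0, a1)   # 4·e − 10 ≈ 0.873  and  18 − 6·e ≈ 1.690
```

These are the known coefficients of the best linear $L^2$ approximation of $e^x$ on $[0,1]$.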
zero extension of positive currents are always positive
You are totally right: as for distributions, it is easier to define restrictions rather than extensions for currents. This is why the local boundedness of $\lVert T \rVert$ near $E$ is crucially required in this theorem. The key is that you can locally approximate your test functions by smooth functions supported outside $E$. I take the same notations as in the proof of Demailly. In particular, you have a smooth non-negative function $(\theta \circ v_k)$ such that the current $(\theta \circ v_k) T$ weakly converges to $\tilde T$ (I think this limit is the way to understand $\tilde T$). Let $f \in \mathcal D(X)$ with non-negative values, and let $u \in \mathcal C_{p,p}^{\infty}(X)$ be strongly positive. As $fu$ is a test form (smooth, compactly supported), weak convergence gives you the following: $$\langle \tilde T, fu \rangle = \lim_{k\to \infty} \langle (\theta \circ v_k) T, f u \rangle $$ You conclude with duality: $$ \langle \tilde T \wedge u , f \rangle = \lim_{k \to \infty} \langle T \wedge u, (\theta \circ v_k) f \rangle \geqslant 0 $$ since $(\theta \circ v_k) f$ is a non-negative function.
Prove that $n \in \mathbb{Z}^\star \Leftrightarrow n \mid 1$ and $n-1 \mid 1$ or $n+1 \mid 1$, and $(x-1)/(t-1) \equiv n \pmod {t-1}$
I had wanted to give some hints and then put the solution into the "hidden" mode. However, the command `>!` didn't seem to be working with my text (this might have something to do with LaTeX commands), so I will just post my solution. I would greatly appreciate it if anybody can suggest a way to hide my text below. First, suppose $n\in\mathbb{Z}\setminus\{0\}$. Clearly, $n\mid 1$, and either $n+1\mid 1$ or $n-1\mid 1$. Furthermore, if $n>0$, then $$\frac{t^n-1}{t-1}=\sum_{j=0}^{n-1}\,t^j=n+(t-1)\left(\sum_{j=0}^{n-1}\,\frac{t^j-1}{t-1}\right)=n+(t-1)\left(\sum_{j=1}^{n-1}\,\sum_{k=0}^{j-1}\,t^k\right)\equiv n\pmod{t-1}\,.$$ If $n<0$, then $$\frac{t^n-1}{t-1}=-t^{n}\left(\frac{t^{-n}-1}{t-1}\right)\equiv -t^n(-n)\equiv nt^n\equiv n\pmod{t-1}\,.$$ Now assume that $f(t)\in F\left[t,t^{-1}\right]$ is such that (a) $f(t)\mid 1$, (b) $f(t)+1\mid 1$ or $f(t)-1\mid 1$, and (c) there exists $n\in\mathbb{Z}$ such that $\frac{t^n-1}{t-1}\equiv f(t)\pmod{t-1}$. We shall prove that $f(t)=n \neq 0$. Condition (c) implies that $f(t)-n$ is divisible by $t-1$. Hence, $$f(t)=n+(t-1)\,g(t)$$ for some $g(t)\in F\left[t,t^{-1}\right]$. Since $f(t)\mid 1$, $f(t)$ is a unit in $F\left[t,t^{-1}\right]$, whence $f(t)=at^k$ for some $a\in F\setminus\{0\}$ and $k\in\mathbb{Z}$. If $f(t)\pm 1 \mid 1$, then $f(t)\pm 1$ is also a unit. That is, $at^k\pm 1$ is of the form $bt^l$, where $b\in F\setminus\{0\}$ and $l\in\mathbb{Z}$. If $k\neq 0$, we note that $at^k\pm1$ has a nonzero root in the algebraic closure of $F$, but $bt^l$ does not. Therefore, $k=0$. Hence, $f(t)=a$ is a nonzero, constant polynomial. Now, from $f(t)=n+(t-1)\,g(t)$, we have $0\neq a=f(1)=n$. Ergo, $f(t)=n \neq 0$ as desired.
Dimension of Cantor set with middle quarter removed
If you are talking about the Hausdorff dimension, the standard way of computing it for a Cantor-like set is to exploit self-similarity and rewrite the set as a union of two properly scaled and translated copies of itself. As an example, in your case $$C_1=\frac{3}{8}C_1\cup\left(\frac{3}{8}C_1+\frac{5}{8}\right).$$ From which, if $0<H^d(C)<\infty$, $$1=2\left(\frac{3}{8}\right)^d\implies d=\log_{\frac{8}{3}}2,$$ which is your result. Anyway, the subtle (and not so easy) point is to show that $H^d(C)$ is neither $0$ nor $\infty$.
Intuition of orthogonal polynomials?
Okay, here is a primer on the stuff you need to know to see why we like Orthogonal Polynomials. Vectors In physics, vectors are usually things that live in three-dimensional space. In computer science, they tend to be lists of numbers. In mathematics, a vector is a thing that lives in a vector space. A vector space is designed so that all the simple things you want to do to vectors work: you can add them, there's a zero and additive inverses, you can scale them by multiplying by numbers, which are things in the field of scalars (which we can assume is $\mathbb{R}$ for the purposes of this discussion, so we are talking about real vector spaces), and all the usual properties extend from the vector space $\mathbb{R}^3$ over the field $\mathbb{R}$ that physics uses (what does not extend at this stage is the notion of product). A vector space has a positive integer called the dimension associated to it, which is the smallest number of fixed vectors $v_i$ you need to generate the whole thing by using linear combinations $\sum_i \alpha_i v_i$, where $\alpha_i$ are scalars (e.g. $\mathbb{R}^n$ has dimension $n$). This number is the most important quantity associated to a vector space, because one can prove that over the same field, if two vector spaces have the same dimension, they are isomorphic. A set of $v_i$ of minimal size that generates the whole space in this way is called a basis. The sensible sort of map on vector spaces is one that keeps the addition and the scalar multiplication structure intact: $$ L\left(\sum_i \alpha_i u_i\right) = \sum_i \alpha_i L(u_i) $$ for any set of vectors $\{u_i\}$. These are called linear maps. The definition of vector space covers the spaces we expect, but includes many objects that we would not initially expect to be vectors: $\mathbb{R}^n$ is a vector space of dimension $n$, as it should be. Polynomials of degree $n$ or less are a vector space of dimension $n+1$ (addition is defined pointwise). 
An example of a linear map is sending a polynomial $p$ to its value at a point, $p(a)$. Polynomials of all degrees are also a vector space. Continuous functions on an interval $[a,b]$ (written $C[a,b]$) form a vector space using pointwise addition. Infinite sequences of real numbers $(a_1,a_2,\dotsc)$ are also a vector space. The last three of these are infinite-dimensional, so the previous discussion does not apply completely, but we'll need infinite-dimensional spaces. Worth remembering is that your ordinary basis is defined to generate the space using finite linear combinations, so an infinite-dimensional space can have a very weird (and very big) basis in this sense. (Polynomials aren't so bad because they are finite linear combinations already, so $\{1,x,x^2,\dotsc\}$ is a perfectly reputable basis.) Inner products and Orthogonality The easiest product on $\mathbb{R}^3$ to generalise is the dot product $u \cdot v = \sum_{i=1}^3 u_i v_i$. Over a real vector space $V$, an inner product is a function $\langle \cdot,\cdot\rangle : V \times V \to \mathbb{R}$ that is symmetric ($\langle u,v\rangle=\langle v,u\rangle$), nondegenerate ($\langle u,u\rangle>0$ for all $u \neq 0$), and bilinear. (Linear in each argument. Symmetry means we only need to check the first argument, $\langle \lambda u+\mu v,w\rangle = \lambda \langle u,w\rangle + \mu \langle v,w\rangle $ for any scalars $\lambda,\mu$). The ordinary dot product is one of these. The example we shall need to worry about is $$ \langle f,g \rangle = \int_a^b f(x) g(x) w(x) \, dx, $$ where $f,g \in C[a,b]$ and $w$ is a positive function. A vector space equipped with one of these is called an inner-product space. The inner product essentially allows us to generalise the idea of angles, so we borrow the terminology from geometry: two vectors $u,v$ are said to be orthogonal if $$ \langle u,v\rangle = 0. $$ A vector is normalised if $\langle u,u\rangle=1$. 
A set of vectors where each vector is normalised and each pair of vectors is orthogonal is called orthonormal. In a finite-dimensional inner product space, if I give you any old basis, there is an algorithm (called Gram–Schmidt) that will produce an orthonormal basis. One final thing: given a vector $u$ and an orthonormal basis $\{v_i\}$, there is a unique way to express $u$ as a linear combination $u=\sum_i \alpha_i v_i$: taking inner products of both sides with $v_j$ gives $$ \langle v_j,u \rangle =\sum_i \alpha_i \langle v_i,v_j \rangle = \alpha_j, $$ using linearity of the inner product and orthonormality of the $v_i$. So $$ u = \sum_i \langle v_i,u \rangle v_i. $$ This is where we have to take a short deviation into infinite-dimensional spaces and analysis, where things become rather more complicated. Hilbert space A Hilbert space is a complete inner-product space. Complete technically means that Cauchy sequences converge, but you can think of it as a space where every convergent sequence converges to something still in the space. Given an inner-product space, we can make a Hilbert space by adding the appropriate limits in. Here, convergence means that $v_n \to v$ if $\langle v_n-v,v_n-v \rangle \to 0$ (this is either called convergence in mean square or strong convergence, but it doesn't really matter since it's the only sort we consider). Any finite-dimensional inner-product space is a Hilbert space (no converging needs to happen), but this is not true in infinite dimensions. $C[a,b]$ is not a Hilbert space (one can construct continuous functions that converge to a function with a jump in it, for example), but we can turn it into a Hilbert space using the inner product given above; this new space is called $L^2[a,b]$, the space of square-integrable functions on $[a,b]$. 
Note that convergence is not pointwise: functions in $L^2[a,b]$ "look the same" if they differ on a finite set of points (more is true, but we shan't need the fully general statement). An orthonormal basis in a Hilbert space $H$ is a bit different: we mean an orthonormal set of vectors so that given any element $u$ of $H$, we can approximate $u$ as closely as we like by a finite linear combination of elements of the basis (we say that these finite linear combinations are dense in $H$). In the case of $L^2[0,2\pi]$ with $w=1$, the classic example is Fourier series: any element of $L^2[0,2\pi]$ can be written as $\sum_{n=-\infty}^{\infty} a_n e^{inx}$, so $\{e^{inx}\}$ form an orthonormal basis of $L^2[0,2\pi]$. Orthogonal Polynomials With the preliminaries finally out of the way, we can move on to what orthogonal polynomials are. Given $L^2[a,b]$ with a $w$ so that polynomials have finite integral (e.g. $1$ on $[-1,1]$, $e^{-x}$ on $[0,\infty)$), polynomials form a dense subspace of $L^2[a,b]$ (this is a consequence of something called the Stone–Weierstrass theorem, but never mind about that). Hence, one might want to approximate functions in $L^2[a,b]$ by polynomials. To do so, one requires an orthonormal set of polynomials, and this is where orthogonal polynomials come in. Definition: Let $\langle f,g \rangle = \int_a^b f(x) g(x) w(x) \, dx$ (i.e. the inner product on $L^2[a,b]$ with weight $w$). A sequence of polynomials $(P_n)_{n\geqslant 0}$ are called orthogonal polynomials for this space if (1) $P_n$ has degree $n$, and (2) $\langle P_n,P_m \rangle = 0 $ when $n \neq m$. One can show that each polynomial is unique up to an overall scaling. How? Apply Gram–Schmidt to the sequence $1,x,x^2,\dotsc$! For example, a simple computation of this kind gives the first few Legendre polynomials, $1,x,(3x^2-1)/2,\dotsc$. For various reasons, the scaling of orthogonal polynomials is not usually chosen so that $\langle P_n,P_n \rangle = 1$. 
The Legendre polynomials, for example, are normally defined to be scaled so that $P_n(1)=1$. Gram–Schmidt is rather inefficient when you want a lot of terms. More commonly, orthogonal polynomials are defined by generating functions of one sort or another, which is often where the funny scalings come from. Given a function in $L^2[a,b]$, we can expand it in terms of orthogonal polynomials in exactly the same way as one employs a Fourier series: it gives an expansion of the function in terms of simpler functions that are orthogonal to one another. The usefulness of this expansion over the Fourier expansion often lies in the polynomials being solutions to certain differential equations. For example, Legendre polynomials appear entirely naturally when solving Laplace's equation in spherical coordinates, and one can use them to expand an axisymmetric solution to Laplace's equation in what is called the multipole expansion. This ability to expand functions in terms of solutions of a differential equation is a specific case of a more general phenomenon, which goes under the general title of Sturm–Liouville theory. Orthogonal polynomials are a particularly nice example of this. Perhaps the best way to think of expansion in terms of orthogonal polynomials is that the coefficients $\alpha_i$ in $f(x) - \sum_{i=0}^n \alpha_i P_i(x) = R(x)$, where $R(x)$ is the error in the approximation, are chosen so that for a given $n$, $ \int_a^b (R(x))^2 w(x) \, dx $ is minimised: $\sum_{i=0}^n \alpha_i P_i$ is the best approximation of $f$ (in this sense) by a polynomial of degree at most $n$. The orthogonality makes it easy to derive what the coefficients must be, and it corresponds precisely with the $\alpha_i = \langle f,P_i \rangle / \langle P_i,P_i\rangle$ that we obtain by analogy with both the Fourier series idea and the calculation in finite-dimensional space. 
One may also understand $\sum_{i=0}^n \alpha_i P_i$ as the projection of $f$ onto the finite-dimensional space spanned by $P_0,P_1,\dotsc,P_n$; we note the analogy with orthogonal projection of a point onto a plane in three dimensions, which also gives the closest point in the plane to the original point.
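The Gram–Schmidt computation mentioned above can be done exactly in a few lines of Python with rational arithmetic (`mul`, `inner`, `axpy` are my own small helpers; the inner product is $\int_{-1}^{1} pq\,dx$, i.e. the Legendre case $w=1$):

```python
from fractions import Fraction

def mul(p, q):
    # product of polynomials given as coefficient lists (lowest degree first)
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def inner(p, q):
    # <p,q> = ∫_{-1}^{1} p(x) q(x) dx, exactly (odd powers integrate to zero)
    return sum(2 * c / (k + 1) for k, c in enumerate(mul(p, q)) if k % 2 == 0)

def axpy(p, c, q):
    # p - c*q, padding the shorter list with zeros
    m = max(len(p), len(q))
    p = p + [Fraction(0)] * (m - len(p))
    q = q + [Fraction(0)] * (m - len(q))
    return [a - c * b for a, b in zip(p, q)]

basis, legendre = [], []
for n in range(4):
    v = [Fraction(0)] * n + [Fraction(1)]          # the monomial x^n
    for u in basis:                                 # Gram–Schmidt step
        v = axpy(v, inner(v, u) / inner(u, u), u)
    basis.append(v)
    legendre.append([c / sum(v) for c in v])        # rescale so p_n(1) = 1
print(legendre[2])   # coefficients of P_2 = (3x^2 - 1)/2
```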
Existence of solution to Newton's equation on a manifold
I think the addition of the smooth potential function makes no difference. Look at what I wrote here: Is a geodesic always regular?
Nonaffines must base change to nonaffines
We can use this version 01XF of Serre's criterion. We have that $X_{0}$ is quasi-compact by e.g. 02KQ. The argument in our case is that the morphism $\pi_{X} : X \to X_{0}$ is a quasi-compact, surjective morphism and $X$ is quasi-compact so $X_{0}$ is quasi-compact (just point-set topology). One shows that $X_{0}$ is quasi-separated in the same way. The isomorphism $H^{1}(X_{0},\mathcal{F}) \otimes_{F} E \simeq H^{1}(X,\pi_{X}^{\ast}(\mathcal{F}))$ follows from flat base change 02KH. Here is a general result: 02L5 If $S' \to S$ is a faithfully flat morphism of affine schemes and $X \to S$ is an $S$-scheme such that $X' := X \times_{S} S'$ is affine, then $X$ is affine.
Vandermonde-like identities
Here are some almost "Vandermonde-like" identities that may be of interest. They're not exactly what you're asking for, but they are pretty close. $$\begin{align*} \sum_{k=0}^n \binom{p+k}{k} \binom{q+n-k}{n-k} &= \binom{n+p+q+1}{n} \\ 2 \sum_{k=0}^r \binom{n}{2k} \binom{n}{2r+1-2k} &= \binom{2n}{2r+1} \\ 2 \sum_{k=0}^r \binom{n}{2k} \binom{n}{2r-2k} &= \binom{2n}{2r} + (-1)^r \binom{n}{r} \\ 2 \sum_{k=0}^{r-1} \binom{n}{2k+1} \binom{n}{2r-2k-1} &= \binom{2n}{2r} - (-1)^r \binom{n}{r} \end{align*}$$ The first one is on p. 148 of Riordan's Combinatorial Identities, and the last three are on p. 144. There may be more in Riordan's book; I just flipped through until I found a few.
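A quick machine check of the first and third identities over small ranges (using `math.comb`, which returns $0$ when the lower index exceeds the upper):

```python
from math import comb

# first identity: sum_k C(p+k,k) C(q+n-k,n-k) = C(n+p+q+1,n)
for p in range(5):
    for q in range(5):
        for n in range(6):
            lhs = sum(comb(p + k, k) * comb(q + n - k, n - k) for k in range(n + 1))
            assert lhs == comb(n + p + q + 1, n)

# third identity: 2 sum_k C(n,2k) C(n,2r-2k) = C(2n,2r) + (-1)^r C(n,r)
for n in range(1, 8):
    for r in range(n + 1):
        lhs = 2 * sum(comb(n, 2 * k) * comb(n, 2 * r - 2 * k) for k in range(r + 1))
        assert lhs == comb(2 * n, 2 * r) + (-1) ** r * comb(n, r)

print("identities verified")
```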
Solutions of a system with parameter $a$ and three unknown $x,y,z$
Hint: That is such an awesome start. Now, either $a=1$ or not. If $a=1$, the equations reduce to $$5x+3y+3z=1\\3x+2y+2z=1,$$ which is very easy to solve. The case $a\ne 1$ is more interesting, for which $$(a-1)^2(y-z)=2(a-1)\implies (a-1)(y-z)=2,$$ therefore we have an answer only if $$a-1\mid 2\implies a\in\{-1,0,2,3\},$$ under which we have $$y-z={2\over a-1},$$ which gives us the answer by substitution.
Atlas for hyperbolic pair of pants
The basic fact is: in the hyperbolic plane, if $\gamma_1, \gamma_2$ are geodesics, $p_1, q_1 \in\gamma_1$ and $p_2,q_2\in\gamma_2$ with $d(p_1, q_1)=d(p_2, q_2)$, then there is a unique isometry that moves $\gamma_1$ to $\gamma_2$, $p_1$ to $p_2$, and $q_1$ to $q_2$. Therefore a domain with a geodesic boundary piece can be glued to another domain along a geodesic boundary piece of the same length, and the combined surface is smooth, with constant curvature $-1$. Now take $2$ identical right-angled hexagons, with all angles right angles and all sides equal to $L$. Label the 6 sides of the first hexagon $S_1, S_2, \dots, S_6$, going counterclockwise; label the 6 sides of the second hexagon $T_1, T_2, \dots, T_6$, going clockwise. Glue $S_1$ to $T_1$, $S_3$ to $T_3$, and $S_5$ to $T_5$; you get a surface that is a sphere with three holes: the first hole has boundary $S_2\cup T_2$, the second hole has boundary $S_4\cup T_4$, and the third hole has boundary $S_6\cup T_6$. Since the hexagon angles are all right angles, these three boundary pieces are all smooth circles. For further reading, see https://www.amazon.com/Lectures-Hyperbolic-Geometry-Universitext-Benedetti/dp/354055534X/ref=sr_1_16?keywords=hyperbolic+manifolds&qid=1578428703&sr=8-16
How to determine a deficit of base 10 exponent from a minimum unit
Let $10^{-k}$ be the minimum unit. If $\lfloor 10^k x\rfloor = 10^k x$, then $x$ is a multiple of the minimum unit and thus would pass the test. $\lfloor \cdot \rfloor$ is the floor function.
How to tell that $x^2 - x \sin(x) -\cos (x)$ has no real roots?
$f(0)=-1$ and $f(2)=4-2\sin(2)-\cos(2)\geq 4-2-1=1 \,.$ Then, by IVT it has a root between $0$ and $2$. Similarly, it has a second root between $-2$ and $0$.
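The IVT argument can be pushed to an actual root by bisection:

```python
import math

def f(x):
    return x * x - x * math.sin(x) - math.cos(x)

a, b = 0.0, 2.0
assert f(a) < 0 < f(b)      # the sign change used in the IVT argument
for _ in range(60):         # bisection: halve the bracketing interval
    m = (a + b) / 2
    if f(m) < 0:
        a = m
    else:
        b = m
print(m)   # ≈ 1.22; by symmetry (f is even) the other root is -m
```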
How many ways can he select 5 marbles containing at least one of each color?
This can be worked in a stars-and-bars manner because there are enough marbles to cover any scenario without running out. First, pick one of each color, as required, and restate the question as: "Dave has a collection of identical balls in which 4 are blue, 4 are red and 2 are black. In how many ways can he select 2 marbles?" You should search for stars-and-bars problems. In this case: 3 flavors, choose 2. A similar problem would be: how many two-scoop ice cream dishes can you make with three flavors? ETA: Here's an example: https://math.stackexchange.com/a/535792/115823
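A quick enumeration confirms the stars-and-bars count $\binom{2+3-1}{2}=6$:

```python
from itertools import combinations_with_replacement
from math import comb

# multisets of size 2 drawn from 3 colors (order irrelevant, repetition allowed)
colors = ["blue", "red", "black"]
selections = list(combinations_with_replacement(colors, 2))
print(len(selections), comb(2 + 3 - 1, 2))   # 6 6
```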
Probability: Unordered, with replacement
The number of possible outcomes is not $10^4/4!$; that's not even an integer. In order to count outcomes, we need to first define what we mean by an outcome, that is, we have to describe the sample space. Also, even though ultimately order does not matter, the number of outcomes is not $\binom{10}{4}$, for since drawing was done with replacement, it is possible to get repetition of digits. And if we did give a real count of possible outcomes, one that included for example the possibility of getting three $1$'s and one $4$, we would still have some work to do: the event "three $1$'s and one $4$" and the event "one each of $3,6,8,9$" are not equally likely. It is convenient (though not always possible) to set up a set of possible outcomes such that all outcomes are equally likely. Then we can find our probability by counting: if our counting is correct, we will get the right answer. In this particular problem, the sample space of $10^4$ strings of length $4$ consists of equally likely outcomes. Since we are counting ordered strings, for the numerator we have to use $4!$.
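A brute-force check over all $10^4$ ordered strings makes the unequal likelihood concrete: $4!=24$ orderings give four distinct digits, but only $4$ give three $1$'s and one $4$.

```python
from itertools import product
from fractions import Fraction

total = 10 ** 4
# ordered draws whose multiset is {3,6,8,9}: 4! of them
distinct = sum(1 for s in product(range(10), repeat=4)
               if sorted(s) == [3, 6, 8, 9])
# ordered draws whose multiset is {1,1,1,4}: only 4 of them
repeated = sum(1 for s in product(range(10), repeat=4)
               if sorted(s) == [1, 1, 1, 4])
print(Fraction(distinct, total), Fraction(repeated, total))   # 3/1250 1/2500
```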