The comparison between fiber and stalk of a locally free sheaf
The fibre is a quotient of the stalk and therefore "smaller". In fact, this follows from the isomorphism $$\mathcal F_x\otimes_{\mathcal R_{M,x}}\mathcal R_{M,x}/\mathfrak m \simeq \mathcal F_x/\mathfrak m \mathcal F_x.$$
Code $C$ over $\mathbb{Z}_3$ given by the solutions of a linear system of equations
The linear equations give you the parity check matrix directly. That is, for any codeword, $$ \begin{bmatrix}x_1&x_2&x_3&x_4&x_5&x_6\end{bmatrix}\begin{bmatrix}1&1\\ 0&2\\ 2&1\\ 1&2\\ 1&0\\ 2&2\end{bmatrix}=\begin{bmatrix}0&0\end{bmatrix} $$ This two-column matrix is the parity check matrix $H$. Finding the generator matrix is then just a matter of finding a basis for the space orthogonal to the subspace generated by the two columns of the parity check matrix above. The basis vectors will form the rows of a matrix $G$ such that $GH=0$. You will be able to obtain four linearly independent vectors this way. It's easiest to try using $1$'s and $0$'s. For example, with $x_1=x_6=1$ and $x_2=x_3=x_4=x_5=0$, the parity check identifies this as a codeword. Trying with $x_1=0$ and $x_2=1$, you can find that $x_3=x_4=0$ works, but we need to use $x_5=2$ and $x_6=2$ to get the resulting codeword to be orthogonal to both parity checks. Now trying with $x_1=x_2=0$ and $x_3=1$, it's safe to use $x_4=0$, and then $x_5=2$ and $x_6=1$. Finally we try $x_1=x_2=x_3=0$ and $x_4=1$, and it's clear that $x_5=1$ and $x_6=2$ does the trick. So a generator matrix would be \begin{bmatrix} 1&0&0&0&0&1\\ 0&1&0&0&2&2\\ 0&0&1&0&2&1\\ 0&0&0&1&1&2\\ \end{bmatrix} You can also use more sophisticated tricks to find $G$ based on $H$ so that $GH=0$, but this is a very elementary one that I think gives you the nuts-and-bolts experience of figuring it out by hand. It's all just a matter of understanding the basic relationship between the parity check matrix and the codewords.
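To check the hand computation mechanically, here is a short Python sketch (my own addition): for each unit-vector choice of the first four coordinates, it brute-forces the unique completion $(x_5,x_6)$ that is orthogonal to both columns of $H$ over $\mathbb{Z}_3$.

```python
# Parity check matrix H as a 6x2 array of residues mod 3
H = [[1, 1], [0, 2], [2, 1], [1, 2], [1, 0], [2, 2]]

def is_codeword(word):
    """True iff word . H == (0, 0) mod 3."""
    return all(sum(word[k] * H[k][c] for k in range(6)) % 3 == 0
               for c in range(2))

# For each unit-vector prefix (x1..x4), find all completions (x5, x6)
G = []
for lead in range(4):
    prefix = [1 if i == lead else 0 for i in range(4)]
    completions = [prefix + [x5, x6]
                   for x5 in range(3) for x6 in range(3)
                   if is_codeword(prefix + [x5, x6])]
    assert len(completions) == 1  # the completion is unique here
    G.append(completions[0])

print(G)
```

The uniqueness of each completion follows because the last two rows of $H$ form an invertible $2\times2$ system mod $3$.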
"Symmetric" connected k-regular bipartite graph
It is not clear from your question whether you want your graphs to be vertex transitive, or merely have the action transitive on each colour class. Nor is it clear whether you are interested in infinite graphs. I consider the finite case. If $k=1$ we have only $K_2$. If $k=2$ we have only cycles of even length. If $k\ge3$ however, there are infinitely many finite vertex-transitive bipartite graphs with valency $k$, and there is no hope of classifying them. Just to give one class, if we choose three involutions at random from the symmetric group, then the corresponding Cayley graph is cubic and vertex-transitive (and connected with probability one). If the graph is not bipartite, its direct product with $K_2$ is still cubic and vertex transitive, and is bipartite as well. Note that bipartiteness is no real restriction, because of the "direct product with $K_2$" trick. Weakening the requirement from vertex transitive to transitive on colour classes just allows more examples.
Quick question on DFA
Formally a DFA is a quintuple $M=\langle Q,\Sigma,q_0,\delta,F\rangle$, where $Q$ is a finite set of states, $\Sigma$ is a finite input alphabet, $q_0\in Q$ is the initial state, $\delta:Q\times\Sigma\to Q$ is the transition function, and $F\subseteq Q$ is the set of acceptor states. You’ve been given three of the five components: $Q=\{s_0,s_1\}$; $\Sigma=\{0\}$; and $q_0=s_0$. You’re also told that $F$ is either $\{s_0\}$ or $\{s_1\}$. Thus, the possible DFAs are of the forms $$\langle \{s_0,s_1\},\{0\},s_0,\delta,\{s_0\}\rangle$$ and $$\langle \{s_0,s_1\},\{0\},s_0,\delta,\{s_1\}\rangle\;,$$ where $\delta$ can be any function from $\{s_0,s_1\}\times\{0\}$ to $\{s_0,s_1\}$. There aren’t many possibilities for $\delta$: $\delta(s_0,0)$ can be either $s_0$ or $s_1$, and $\delta(s_1,0)$ can also be either $s_0$ or $s_1$. These choices are independent, so there are altogether $2\cdot2=4$ possible transition functions $\delta$. Each can be paired with either of the two possibilities for $F$, so there are altogether just $4\cdot2=8$ DFAs that meet the requirements of the problem. Consider, for example, the DFA with $\delta(s_0,0)=s_1$, $\delta(s_1,0)=s_0$, and $F=\{s_0\}$. You can easily show that a string $0^n$ of $n$ zeroes is accepted by this DFA if and only if $n$ is even, so the language accepted by this DFA is $\{0^{2n}:n\ge 0\}$. Now just do the same thing for the other seven possible automata.
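If you want to check the count mechanically, here is a small Python sketch (my own addition; the state names and representation are mine) that enumerates all eight DFAs and simulates the example above:

```python
from itertools import product

def accepts(delta, F, n):
    """Run the DFA on the string 0^n, starting from s0."""
    q = 's0'
    for _ in range(n):
        q = delta[q]
    return q in F

# All 8 DFAs: 2 choices for delta(s0,0), 2 for delta(s1,0), 2 for F
dfas = [({'s0': a, 's1': b}, {f})
        for a, b, f in product(['s0', 's1'], repeat=3)]
assert len(dfas) == 8

# The example from the answer accepts 0^n exactly when n is even
example = ({'s0': 's1', 's1': 's0'}, {'s0'})
assert [n for n in range(6) if accepts(*example, n)] == [0, 2, 4]
```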
Fourier inversion for $f \in L^p(\mathbb R^n)$ and $\hat f \in L^1(\mathbb R^n)$ , where $1<p<2$
For $\epsilon>0$ consider the function $$f_\epsilon(x) = e^{-\epsilon |x|^2}f(x)$$ By Hölder's inequality $f_\epsilon \in L^1$. Also, $\widehat{f_\epsilon} = \gamma_\epsilon * \hat f$ where $\gamma_\epsilon = \widehat{e^{-\epsilon |x|^2}}$ is a tightly localized (when $\epsilon$ is small) Gaussian with integral $1$. Hence $\widehat{f_\epsilon}\in L^1$ and $\widehat{f_\epsilon} \to \hat f$ in $L^1$ as $\epsilon\to 0$. Since $f_\epsilon, \widehat{f_\epsilon} \in L^1$, the $L^1$ Fourier inversion formula applies. As $\epsilon\to 0$, we have $f_\epsilon\to f$ pointwise and $\widehat{f_\epsilon} \to \hat f$ in $L^1$, where the latter suffices to pass to the limit in the integral $$ \int_{\mathbb R^n} \widehat {f_\epsilon} (\xi) e^{-2\pi i \xi\cdot x} d\xi \to \int_{\mathbb R^n} \widehat {f} (\xi) e^{-2\pi i \xi\cdot x} d\xi $$ Note that this applies to $f\in L^p$ with any $p>1$, not only $1<p<2$.
How many non-isomorphic ways can a convex polygon with $n + 2$ sides be cut into triangles?
This type of problem (counting orbits of objects under some symmetry group$~G$, here the dihedral group associated to the $(n+2)$-gon) can be handled using Burnside's lemma, which says that the number of orbits is equal to the average over all group elements $g\in G$ of the number of objects fixed by$~g$. You already know the number of objects (triangulations) fixed by the identity element, namely all $C_n$ of them. The remaining group elements can be partitioned into conjugacy classes since the number of objects fixed by$~g$ is constant as $g$ varies over a conjugacy class. Only a few conjugacy classes will admit any fixed objects at all, which makes counting this way feasible. To simplify, put $m=n+2$, so that we are considering triangulations of an $m$-gon, and $G=\operatorname{Dih}_m$, the dihedral group of order$~2m$. Among the rotations we need only consider those of order $2$ or $3$, if they exist, since the center of the $m$-gon must either be on an edge of the triangulation or in the interior of a triangle, and that edge/triangle alone limits the possible rotational symmetry of the triangulation to be at most twofold respectively threefold. The reflections in$~G$ form either one or two conjugacy classes (when $m$ is odd respectively even). When $m$ is even, the reflections$~g$ whose axis does not pass through any vertices (but rather through two midpoints of sides) do not admit any triangulations fixed by$~g$, since for either of the sides cut by the axis of the reflection, the third corner of the triangle on that side would have to be a $g$-fixed vertex, which doesn't exist. By the same reasoning, if $m$ is odd, any triangulation fixed by a reflection must contain the (isosceles) triangle defined by the side and the vertex on the axis of the reflection. Consider the rotation$~z$ of order$~2$, which occurs in$~G$ when $m$ is even. Any triangulation fixed by it must contain exactly one of the $m/2$ diagonals that pass through the center of the $m$-gon.
Once this diagonal is chosen, a $z$-symmetric triangulation is determined by a triangulation of one of the two $(m/2+1)$-gons into which the $m$-gon is cut, as the other one must be its image by$~z$; this can be done in $C_{m/2-1}$ different ways. So whenever $m$ is even, $z$ contributes $\frac m2C_{m/2-1}$ triangulations fixed by it. Similarly, whenever $m$ is divisible by$~3$ there are two rotations$~\rho$ of order$~3$; each one fixes the same set of triangulations, but we must not forget to count them twice. A $\rho$-fixed triangulation must contain exactly one of the $m/3$ equilateral triangles that share their center with the $m$-gon. Each such triangle leaves three $(m/3+1)$-gons to be triangulated, but after this has been done for one of them in one of the $C_{m/3-1}$ possible ways, the remaining ones are determined by the required $\rho$-symmetry. So the contribution of $3$-fold symmetry, when it exists, is $2\frac m3C_{m/3-1}$. We are left with the reflections. If $m$ is odd there are $m$ conjugate reflections, for each of which, as we have seen, the axis determines a triangle that must occur in any triangulation fixed by it, and there remain two $(m+1)/2$-gons, of which as before it will suffice to triangulate one, in one of $C_{(m-3)/2}$ possible ways. This contributes $mC_{(m-3)/2}$ whenever $m$ is odd. The final case, with $m$ even and $\sigma$ one of the $\frac m2$ reflections whose axis passes through two opposite vertices, is the most subtle one. On one hand the axis itself might occur in the triangulation, and this possibility accounts (for all such reflections combined) for $\frac m2C_{m/2-1}$ $\sigma$-fixed triangulations (the same number as contributed by $z$ alone).
However the axis of reflection need not occur in the triangulation: it may run through the interior of triangles, and since every edge of the triangulation that meets the axis must be perpendicular to it, one sees that there must be exactly two such (isosceles) triangles that cover the symmetry axis. To find the contribution of such symmetric configurations one might sum over all possibilities for this pair of triangles and discover that the result can be simplified by the quadratic recurrence satisfied by the Catalan numbers; one can however jump to the resulting expression by the following trick: the quadrilateral formed by the two triangles covering the axis can be re-triangulated in one other way, and the result is a $\sigma$-fixed triangulation in which the axis does occur. The correspondence is bijective, so we get one more contribution of $\frac m2C_{m/2-1}$. We summarise, using the Iverson bracket $[d\mid m]$ to denote $1$ when $d$ divides $m$ and $0$ otherwise. The number of configurations for $m=n+2$ is given by $$ f(m)= \frac1{2m}\left(C_{m-2} +[2\mid m]\,\frac{3m}2C_{m/2-1} +[2\not\mid m]\,mC_{(m-3)/2} +[3\mid m]\,\frac{2m}3C_{m/3-1}\right) $$ If you like you can combine the middle two terms in parentheses by $\Bigl(1+\frac{[2\mid m]}2\Bigr)\,mC_{\lfloor m/2\rfloor-1}$ for a somewhat more compact formula. The first few values of this expression, as a function of $n$, are $$\scriptstyle \begin{matrix} n & 1&2&3&4&5&6&7&8&9&10&11&12&13&14&15&16 \\ \hline f(n+2)&1&1&1&3&4&12&27&82&228&733&2282&7528&24834&83898&285357&983244 \end{matrix} $$ Once we've got these numbers, it is of course easy to look this up in OEIS.
Indeed it is sequence A000207 whose title is "Number of inequivalent ways of dissecting a regular $(n+2)$-gon into $n$ triangles by $n-1$ non-intersecting diagonals under rotations and reflections; also the number of planar $2$-trees".
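As a sanity check, the formula above can be evaluated directly; here is a short Python computation (my own addition) reproducing the table of values:

```python
from math import comb

def catalan(k):
    """Catalan number C_k."""
    return comb(2 * k, k) // (k + 1)

def f(m):
    """Triangulations of an m-gon up to rotations and reflections (Burnside)."""
    total = catalan(m - 2)                      # identity element
    if m % 2 == 0:
        total += 3 * m // 2 * catalan(m // 2 - 1)   # order-2 rotation + vertex reflections
    else:
        total += m * catalan((m - 3) // 2)          # reflections, m odd
    if m % 3 == 0:
        total += 2 * m // 3 * catalan(m // 3 - 1)   # the two order-3 rotations
    assert total % (2 * m) == 0                 # Burnside count is always integral
    return total // (2 * m)

values = [f(n + 2) for n in range(1, 17)]
assert values == [1, 1, 1, 3, 4, 12, 27, 82, 228, 733, 2282, 7528,
                  24834, 83898, 285357, 983244]
```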
On the definition of commutators
$[x,y,z]=[[x,y],z]$ is the standard “left normed bracket convention” used with the definition $[x,y] = x^{-1} y^{-1} xy$. See for instance Robinson's textbook A Course in the Theory of Groups, Section 5.1, page 119 in the first edition; Gorenstein's Finite Groups, Section 2.2, page 19; etc.
Extension of Fourier transform to $L^2(\mathbb{R})$
Once you have the Parseval (Plancherel) identity for the Schwartz class, the Fourier transform extends to $L^2$ as you noted in the first paragraph. For $f\in L^2\cap L^1$, it's not hard to show that this continuous extension to $L^2$ agrees with the classical integral definition: $$ \hat{f}(s)=\lim_{R\rightarrow\infty}\frac{1}{\sqrt{2\pi}}\int_{-R}^{R}f(x)e^{-isx}dx,\;\;\; f\in L^1(\mathbb{R})\cap L^2(\mathbb{R}). $$ The convergence of the above limit is pointwise everywhere, and it converges in $L^2(\mathbb{R})$ as well. Alternatively, one can start with the above classical definition of the Fourier transform as a pointwise limit, show that the Parseval identity holds, and use this to show that the above limit converges to $\hat{f}$ for any $f \in L^2(\mathbb{R})$, because $\chi_{[-R,R]}f \in L^1\cap L^2$ for any finite $R$, which gives the Parseval identity for $\chi_{[-R,R]}f$.
Centralizers of reflections in parabolic subgroups of Coxeter groups
I posted the question over at MathOverflow and just posted a proof. So as to not have the question without an answer on this site, here it goes: Surprisingly the proof only uses standard facts on Coxeter groups (exchange condition, solving the word problem via braid-moves, ...). Let me first make the notation a bit easier: Claim: Let $P\leq W$ be a special subgroup of $W$ generated by some subset $S'\subsetneq S$ of $S$ and $s \in S-S'$. Then the centralizer of $s$ in $P$ is generated by those involutions in $S'$ which commute with $s$, $C_P(s)=\langle s' \in S'~|~ s's=ss' \rangle$. Proof: Let $w \in C_P(s)$ and $w=s_1...s_r$ be a reduced expression. By induction it is enough to prove that $s_rs=ss_r$ since the elements of length $1$ in the centralizer are precisely the simple involutions commuting with $s$. We have $\ell(ws)=\ell(w)+1$ and since $\ell(wsw^{-1})=\ell(s)=1$ we conclude that $\ell(wss_r)<\ell(ws)$, so $\ell(wss_r)=\ell(w)$. By the exchange condition there is a reduced expression for $ws$ ending in $s_r$, and since $s_1...s_rs$ is already a reduced expression for $ws$ there exists a finite series of braid-moves connecting these two expressions. The expression $s_1...s_rs$ contains $s$ only once and no simple involution that does not commute with $s$ shows up to the right of $s$. Consider now any braid-move in this situation. If $s$ is not involved in the move, the two conditions obviously still hold afterwards. If $s$ is involved, the other simple involution involved must commute with $s$, since any braid-move involving $s$ and a non-commuting $s'$ requires either at least two occurrences of $s$ (one to the left and one to the right of $s'$) or an occurrence of $s'$ to the right of $s$, neither of which there is. Hence any braid-move preserves our two conditions, and after finitely many braid-moves there is still no simple involution to the right of $s$ which does not commute with $s$.
On the other hand there is, as noted above, a finite series of braid-moves after which the expression ends in $s_r$, so $s_r$ has to commute with $s$, as asserted.
Finding nth term when recurrence relation is given
The more general approach would be to write a characteristic equation $$x^k=2x^{k-1}-x^{k-2}$$ and simplify this to $$x^2=2x-1$$ which has the double solution $x=1$. If there had been two distinct solutions $\beta$ and $\gamma$, then a general solution to the recurrence would be $a_k= A\beta^k+B\gamma^k$ and you would find $A$ and $B$ by looking at the initial terms. But if there is a double solution $\beta$ to the characteristic equation, then a general solution to the recurrence would be $a_k= A\beta^k+Bk\beta^k$. With longer linear recurrences, there may be more roots, and a similar approach works. Here we are in the latter situation and $\beta=1$, so the solution is of the form $a_k=A+Bk$, and the initial terms $a_1=2, a_2=3$ tell us $A=1, B=1$ and thus $$a_k=1+k.$$
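A quick numerical check of the closed form (my own addition), using the initial terms $a_1=2$, $a_2=3$ from the question:

```python
# Iterate the recurrence a_k = 2 a_{k-1} - a_{k-2} and compare with a_k = k + 1
a = {1: 2, 2: 3}
for k in range(3, 20):
    a[k] = 2 * a[k - 1] - a[k - 2]

assert all(a[k] == k + 1 for k in a)
```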
Why is this expression returning NaN?
This expression gives NaN in each one of the following cases:

mz type | sz type | FPS type | Condition
--------|---------|----------|-----------------------------------------
int     | int     | float    | (FPS == inf || FPS == -inf) && mz == sz
int     | float   | int      | (FPS > 60 || FPS < -60) && mz == sz
float   | int     | int      | (FPS > 60 || FPS < -60) && mz == sz

In addition to that, if the value of any of these variables is NaN to begin with, then so will be the value of the entire expression.
Confusion with simple probability concepts
Yup, that's correct. Since it's impossible for all $7$ balls to be red - because there aren't $7$ or more red balls to begin with - the probability of that would be $0$.
Chain rule for a matrix derivative
Define the variables $V=A^{-1}\,$ and $\,M=yy^T$. Let's also use the inner/Frobenius product as a cleaner way of writing the trace, i.e. $$X:Y={\rm tr}(X^TY)$$ Now rewrite the function as $$f=M:V^TBV$$ Find the differential $$\eqalign{ df &= M:(dV^T\,BV+V^T\,dB\,V+V^TB\,dV) \cr &= M:(V^TB^T\,dV+V^T\,dB\,V+V^TB\,dV) \cr &= VMV^T:dB + (BVM+B^TVM):dV \cr &= VMV^T:dB - (BVM+B^TVM):(V\,dA\,V) \cr &= VMV^T:dB - V^T(BVM+B^TVM)V^T:dA \cr }$$ The derivative wrt the scalar $\theta$ has the same form, just replace $d$ by $\frac{d}{d\theta}$.
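The result can be spot-checked numerically; the following Python sketch (my own addition, with arbitrary $2\times2$ test matrices, kept dependency-free) compares one entry of the derived gradient $-V^T(BVM+B^TVM)V^T$ with a central finite difference:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

def inv2(X):
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / det, -X[0][1] / det],
            [-X[1][0] / det, X[0][0] / det]]

def trace(X):
    return X[0][0] + X[1][1]

def f(A, B, M):
    V = inv2(A)  # V = A^{-1}
    return trace(matmul(M, matmul(transpose(V), matmul(B, V))))

A = [[2.0, 0.3], [0.1, 1.5]]
B = [[1.0, 0.5], [-0.2, 2.0]]
y = [1.0, 2.0]
M = [[y[i] * y[j] for j in range(2)] for i in range(2)]  # M = y y^T

# Analytic gradient wrt A:  -V^T (B V M + B^T V M) V^T
V = inv2(A)
BVM = matmul(B, matmul(V, M))
BtVM = matmul(transpose(B), matmul(V, M))
S = [[BVM[i][j] + BtVM[i][j] for j in range(2)] for i in range(2)]
G = matmul(transpose(V), matmul(S, transpose(V)))
grad = [[-G[i][j] for j in range(2)] for i in range(2)]

# Central finite difference for the entry (0, 1)
h = 1e-6
Ap = [row[:] for row in A]; Ap[0][1] += h
Am = [row[:] for row in A]; Am[0][1] -= h
fd = (f(Ap, B, M) - f(Am, B, M)) / (2 * h)
assert abs(fd - grad[0][1]) < 1e-5
```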
How to prove that there does not exist a sequence of continuous functions that converge pointwise to $\chi_{\mathbb{Q}}$ (definition only)
I think I have an answer; I received some assistance from a friend of mine. Slight question: does it depend implicitly on any non-stated theorems (I mean, so-called "big theorems")? Suppose that $\{f_{n}\}_{n=1}^{\infty}$ is a pointwise convergent sequence of continuous functions that converges to $f$. Let $a,b \in [0,1]$ be such that $0<a<b<1$. We will show that for each non-degenerate segment $A \subseteq [0,1]$, there exist some $B \subseteq A$ and arbitrarily large $N \in \mathbb{N}$ such that $f_{N}(B) = [a,b]$. Let $A$ be a non-degenerate segment of $[0,1]$. Let $y \in (\mathbb{R}-\mathbb{Q})\cap A$ and $x \in \mathbb{Q} \cap A$ be such that $x < y$. Since $\{f_{n}\}_{n=1}^{\infty}$ converges pointwise to $f$, there exists $N \in \mathbb{N}$ such that $f_{N}(x) \geq b$ and $f_{N}(y) \leq a$. But since $f_{N}$ is continuous, by the Intermediate Value Theorem there exists some $B \subseteq [x,y] \subseteq A \subseteq [0,1]$ such that $f_{N}(B)=[a,b]$. Then consider the nested sequence $\{A_{n}\}$, where $A_{n+1} \subseteq A_{n}$. Clearly for each $A_{n}$ there exists a respective $K_{n} \in \mathbb{N}$ such that $f_{K_{n}}(B_{n})=[a,b]$ for some $B_{n} \subseteq A_{n}$. By the nested interval theorem, the intersection of the $A_{n}$ is non-empty. Then let $x \in \bigcap_{n \in \mathbb{N}}A_{n}$. But then $f(x)$ is the limit of $f_{n}(x) \in [a,b]$, which is a contradiction since $f(x)=0$ or $f(x)=1$ and $0<a<b<1$. Edit: I think this proof is incorrect since it makes little reference to the rationals (although I suppose it assumes denseness).
Find the Galois group of a quotient field extension
You're right that $(1,X)$ is a $K$-basis of $L$. First notice that $\mathbb{F}_2[X]\subset L$. For $P\in\mathbb{F}_2[X]$, one can write $$P=\sum_{k}p_{2k}X^{2k} + \left(\sum_{k}p_{2k+1}X^{2k}\right)X$$ Then remember that the Frobenius is $\mathbb{F}_2$-linear: this implies, among other things, that for $P\in\mathbb{F}_2[X]$, $P(X^2)=P(X)^2$. The result follows: for $(P,Q)\in\mathbb{F}_2[X]^2$ with $Q\neq 0$, just write $$\frac{P(X)}{Q(X)}=\frac{P(X)Q(X)}{Q(X)^2}=\frac{P(X)Q(X)}{Q(X^2)}$$ which is therefore in the $K$-vector space spanned by $(1,X)$ inside $L$.
Number of distinct decompositions
First form $N$, the product of all the numbers in $a$. We only care about the ordered ways to break $N$ into $n$ factors. Now factor $N$ into powers of primes: $N=\prod_ip_i^{b_i}$ We can consider each prime separately. For $p_i$ we want a weak composition of $b_i$ into $n$ parts, which can be done in ${b_i+n-1 \choose n-1}$ ways. In total there are then $$\prod_i {b_i+n-1 \choose n-1}$$ ways to write $N$ as $n$ factors.
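Here is a small Python sketch (my own addition) that implements the formula and checks it against brute force for a small case:

```python
from math import comb, prod
from itertools import product

def prime_factorization(N):
    """Return {prime: exponent} by trial division."""
    factors = {}
    d = 2
    while d * d <= N:
        while N % d == 0:
            factors[d] = factors.get(d, 0) + 1
            N //= d
        d += 1
    if N > 1:
        factors[N] = factors.get(N, 0) + 1
    return factors

def count_ordered_factorizations(N, n):
    """Product over primes of C(b_i + n - 1, n - 1)."""
    return prod(comb(b + n - 1, n - 1)
                for b in prime_factorization(N).values())

# Brute-force check for N = 12, n = 3
N, n = 12, 3
brute = sum(1 for t in product(range(1, N + 1), repeat=n) if prod(t) == N)
assert count_ordered_factorizations(N, n) == brute
```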
Why isn't the perfect closure separable?
The intuition I can provide is that in an algebraic extension $F\subset K$ of characteristic $p$, the field $K$ will be a separable extension of its purely inseparable extension $K_p$ over $F$ whenever $trdeg_{\mathbb F_p} F\leq 1$. This is why my counterexamples on MathOverflow here and here have $F=\mathbb F_p(u,v)$, a purely transcendental extension of $\mathbb F_p$ of degree two. I can't prove my intuition, but it implies that Zev's example won't work (he didn't say it did, so he made no mistake), and here is a proof that indeed it doesn't. The key point is that if $\mathbb F_q$ is a finite field of characteristic $p$, the Frobenius map $\mathbb F_q \to \mathbb F_q: x\mapsto x^p$ is surjective, and this implies that for $f(T)\in \mathbb F_q [T]$ we have $ (f(T))^{1/p}\in \mathbb F_q [T^{1/p}]$, because of the freshman's dream $(a+bT+\ldots)^{1/p}=a^{1/p}+b^{1/p}T^{1/p}+\ldots $ ! This implies that $$(\mathbb F_q(T))^{p^{-\infty}}=\mathbb F_q(T,T^{p^{-1}},T^{p^{-2}},\ldots)$$ And now the rest is easy. If $F= \mathbb F_p(T)$ and $K=(\mathbb F_{p^2}(T))^{p^{-\infty}}=\mathbb F_{p^2}(T,T^{p^{-1}},T^{p^{-2}},\ldots)$ we have $K_p=\mathbb F_p(T,T^{p^{-1}},T^{p^{-2}},\ldots)$ and $K$ is a separable extension of the perfect closure $K_p$ of $F$ in $K$. Indeed $K$ is just the simple separable extension $K=K_p(g)$ of $K_p$, where $g \in \mathbb F_{p^2}$ is any element such that $\mathbb F_{p^2}=\mathbb F_{p}(g)$. [The element $g$ is of course separable over the (perfect!) field $\mathbb F_p$, so it is a fortiori separable over $K_p$.] Edit: What I called my intuition above has now been proved by ulrich on MathOverflow here. He proves that if $trdeg_{\mathbb F_p} F\leq 1$, any algebraic extension $F\subset K$ has the property that $K$ is separable over its purely inseparable extension $K_p$ over $F$.
Does absolute continuity of probability measures imply absolute continuity of conditional probability measures almost everywhere?
Not only is this true, but one can also write the "conditional density" explicitly. Let us first try to identify $d\mu_y/d\mu_y'$ informally, and then prove the heuristics. Write $$ \mu_y(\cdot) = \mu(\cdot)/\mu(f^{-1}(\{y\})) $$ and similarly for $\mu'$, so that $$ \frac{d\mu_y}{d\mu_y'}(x)= \frac{d\mu}{d\mu'}(x) \frac{\mu'(f^{-1}(\{y\}))}{\mu(f^{-1}(\{y\}))}. $$ Now $$ \frac{\mu(f^{-1}(\{y\}))}{\mu'(f^{-1}(\{y\}))} = \frac{1}{\mu'(f^{-1}(\{y\}))} \int_{f^{-1}(\{y\})} \frac{d\mu}{d\mu'}(z)\mu'(dz), $$ which looks like the conditional expectation $\mathsf E_{\mu'}[\frac{d\mu}{d\mu'}\mid f = y]$. Turning to the formal part, let $\mathcal G$ be the sub-sigma-algebra of $\mathcal B(X)$ generated by $f$. Obviously, $\mu\ll \mu'$ on $\mathcal G$ as well, so there is a Radon-Nikodym density $Z = d\mu|_\mathcal G/d\mu'|_\mathcal G$ (it coincides with the above conditional expectation). But since $\mathcal G$ is generated by $f$, $Z(x) = h(f(x))$ for some measurable $h$. I claim that $$ \frac{d\mu_y}{d\mu_y'}(x)= \frac{d\mu}{d\mu'}(x) \frac{\mathbf{1}_{h(y)>0}}{h(y)} $$ a.e. This is equivalent to the equality $$ \mu(A\mid \mathcal G)(w) = \frac{\mathbf{1}_{Z(w)>0}}{Z(w)}\int_A \frac{d\mu}{d\mu'}(x)\mu'(dx\mid \mathcal G)(w)\tag{1} $$ for a.a. $w$. To show the last, write for any $A\in \mathcal B(X)$, $B\in \mathcal G$ $$ \mathsf E_{\mu}\left[\frac{\mathbf{1}_{Z>0}}{Z} \mathsf E_{\mu'}\left[\frac{d\mu}{d\mu'}\mathbf{1}_{A}\,\Big|\,\mathcal G\right]\mathbf{1}_{B} \right] = \mathsf E_{\mu'}\left[Z\frac{\mathbf{1}_{Z>0}}{Z} \mathsf E_{\mu'}\left[\frac{d\mu}{d\mu'}\mathbf{1}_{A}\,\Big|\,\mathcal G\right]\mathbf{1}_{B} \right] \\ = \mathsf E_{\mu'}\left[\mathbf{1}_{Z>0}\frac{d\mu}{d\mu'}\mathbf{1}_{A}\mathbf{1}_{B} \right] = \mathsf E_{\mu}\left[\mathbf{1}_{Z>0}\mathbf{1}_{A}\mathbf{1}_{B}\right]. $$ (The first equality holds since the expression inside the expectation is $\mathcal G$-measurable.)
This shows that $$ \mathsf E_{\mu}\left[\mathbf{1}_{A}\mid \mathcal G\right] = \frac{\mathbf{1}_{Z>0}}{Z} \mathsf E_{\mu'}\left[\frac{d\mu}{d\mu'}\mathbf{1}_{A}\,\Big|\,\mathcal G\right] $$ almost surely on $\{Z>0\}$, which is equivalent to (1) (obviously, $d\mu_y/d\mu'_y$ vanishes on $\{Z=0\}$).
Proof that $\frac{1}{2\pi i}\oint f'(z)/f(z) \, dz = n$
If $f$ has a zero of order $n$ at $z_0$, then there is a function $h$, holomorphic near $z_0$, such that $f(z)=(z-z_0)^nh(z)$ near $z_0$ and $h(z_0)\neq0$. Computing, $$\frac{f'}{f}=\frac{n}{z-z_0}+\frac{h'(z)}{h(z)}.$$ If you use the Cauchy formula now over a sufficiently small circle around $z_0$ you get (a corrected version of) what you want.
Working out the length of the 3rd side of an isosceles triangle- Pythagoras' theorem
If $$a^2 + b^2 = c^2$$ and $a=b$: $$a^2 + a^2 = 2 a^2 = c^2$$ You can indeed find the third side of the right triangle with the following formula: $$\sqrt{2 a^2} = \sqrt{c^2}$$ $$\sqrt{2}\, a = c$$ Both sides having the same length allows you to turn the sum into a product, whose square root simplifies exactly: the factor $\sqrt{2}$ comes out, so the result can be written down in closed form. If the angle between the two sides of equal length is not $90°$, you can start from the general formula for the third side (the law of cosines), which also includes the angle $\alpha$ between them: $$a^2 + b^2 -2ab\cos (\alpha)= c^2$$ Again with $a=b$: $$a^2 + a^2 -2aa\cos (\alpha)=2a^2(1-\cos (\alpha))= c^2$$ Unfortunately, the angle is still there, and the formula does not simplify as much as it does in the special case of $90°$.
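Both cases can be wrapped in one short Python helper (my own addition; the function name is mine):

```python
from math import sqrt, cos, radians, isclose

def third_side(a, alpha_deg):
    """Third side of an isosceles triangle with equal sides a and apex angle alpha."""
    return sqrt(2 * a * a * (1 - cos(radians(alpha_deg))))

# The right-angle special case reduces to c = sqrt(2) * a
assert isclose(third_side(5.0, 90.0), sqrt(2) * 5.0)
# A degenerate 180-degree "triangle" gives c = 2a, a quick sanity check
assert isclose(third_side(5.0, 180.0), 10.0)
```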
Evaluate $\lim_{x \rightarrow 0} \frac{(1-\cos x)^2}{\log (1 + \sin^4x)} $
$$ \lim_{x \rightarrow 0} \frac{(1-\cos x)^2}{\log (1 + \sin^4x)} = \lim_{x \rightarrow 0} \frac{(2\sin^2(x/2))^2}{\log (1 + \sin^4x)}=1\cdot\lim_{x \rightarrow 0} \frac{4\sin^4(x/2)}{\sin^4x} $$ and this behaves as $$ \frac{4x^4}{2^4x^4}, $$ which indeed tends to $1/4$. We used $1-\cos x=(\sin^2(x/2)+\cos^2(x/2))-(\cos^2(x/2)-\sin^2(x/2))$ and $\displaystyle\lim_{y\to0}\dfrac{\ln(1+y)}{y}=1$.
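A quick numerical sanity check (my own addition) that the ratio approaches $1/4$:

```python
from math import cos, log, sin, isclose

def g(x):
    """The ratio (1 - cos x)^2 / log(1 + sin^4 x)."""
    return (1 - cos(x)) ** 2 / log(1 + sin(x) ** 4)

# Near 0 the ratio is within O(x^2) of 1/4
assert isclose(g(1e-2), 0.25, rel_tol=1e-3)
```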
Help with understanding proof of the product rule
I don't know the idea behind the rewritten numerator. To prove $F'(x)=f'(x)g(x) + f(x)g'(x)$, you can use \begin{align*} \Delta F&=\Delta F+0\\ &=f(x+h)g(x+h)\color{red}{-f(x)g(x+h)}-f(x)g(x)+\color{red}{f(x)g(x+h)}\\ &=g(x+h)\left [ f(x+h)-f(x) \right ]+f(x)\left [ g(x+h)-g(x) \right ]\\ &=g(x+h)\Delta f+f(x)\Delta g. \end{align*} The expression $0=-f(x)g(x+h)+f(x)g(x+h)$ is used as a "trick" to simplify the proof. You could also use $0=-g(x)f(x+h) + g(x)f(x+h)$. Hence, since $g(x+h)\to g(x)$ as $h\to 0$ by continuity, $$F'(x)=\lim_{h\to 0}\frac{\Delta F}{h}=g(x)\lim_{h\to 0}\frac{\Delta f}{h}+f(x)\lim_{h\to 0}\frac{\Delta g}{h}=g(x)f'(x)+f(x)g'(x).$$
What does $\prod_{i<j} (j - i)$ mean where $i,j\in\{1,2,\ldots,8\}$?
It means that we're multiplying factors $j-i$ where $i&lt;j$ and $i,j\in\{1,2,3,4,5,6,7,8\}.$ Put another way, it is $$\prod_{j=2}^8\prod_{i=1}^{j-1}(j-i).$$
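For the concrete set $\{1,\ldots,8\}$, the two forms can be compared directly in Python (my own addition); the nested product collapses to $\prod_{j=2}^{8}(j-1)!$:

```python
from math import factorial

# Direct double product over all pairs i < j in {1, ..., 8}
prod_pairs = 1
for j in range(1, 9):
    for i in range(1, j):
        prod_pairs *= j - i

# Same value via the nested form: product over j of (j-1)!
expected = 1
for j in range(2, 9):
    expected *= factorial(j - 1)

assert prod_pairs == expected
```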
Proving inverse image of ideal is an ideal
Let $I$ any ideal in $L_2$. Of course $\varphi^{-1}(I)\neq\emptyset$ since $\varphi(0)=0\in I$. Now for all $x,y\in\varphi^{-1}(I)$ we have $\varphi(x), \varphi(y)\in I$ and, in particular, $\varphi(x+y)=\varphi(x)+\varphi(y)\in I$. Therefore $x+y\in\varphi^{-1}(I)$. Finally for every $r\in L_1$ and any $x\in\varphi^{-1}(I)$ we have $\varphi(r)\in L_2$ and $\varphi(x)\in I$; hence $\varphi([r,x])=[\varphi(r),\varphi(x)]\in I$. Therefore $[r,x]\in\varphi^{-1}(I)$. Answer to your additional doubt: YES!
Some questions regarding (relative) constructibility and the condensation lemma
Apostolos: About the first question, yes. The point is $(*)$: If $(M,\dot\in)$ is a model of set theory, $\phi$ is a sentence, and $(M,\dot\in)\models$"$(N,E)$ is a model of $\phi$", then $(N,E)\models\phi$. More precisely, $(N,E)^*\models\phi$, where $(N,E)^*$ is the model that $M$ thinks $(N,E)$ is. Here, $(N,E)^*$ has universe $\{a\in M\mid (M,\dot\in)\models a\dot\in N\}$ and its relation is $\{(a,b)\in M\times M\mid (M,\dot\in)\models a,b\dot\in N\land aE b\}$. $(*)$ is easily established by an induction on the complexity of formulas. Now, if $\phi$ is an axiom of ZFC, then $(M,\dot\in)$ thinks that $\phi$ is an axiom of ZFC (more precisely, if $n$ is a Gödel number for $\phi$, then in $M$, the numeral corresponding to $n$ is a Gödel number for a formula, and that formula is precisely $\phi$), and therefore, $(N,E)^*$ satisfies all ZFC axioms. Note that $M$ may fail to be an $\omega$-model, in which case there are "natural numbers" in $M$ that code what $M$ believes are ZFC axioms, and $M$ may believe that $(N,E)$ satisfies them. We do not care about these "fake formulas". Similarly, there may be an $(N,E)$ in $M$ that $M$ thinks is not a model of ZFC but $(N,E)^*$ is, in fact, a ZFC model. The reason is similar: $M$ may think that $(N,E)$ does not satisfy one of the fake axioms, but this does not matter. In the above, it doesn't matter whether $(N,E)$ is a proper class or a set in the sense of $M$. The argument by induction in the complexity of formulas is perfectly general, so in fact $((L,\in)^M)^*$ is a model of every $\phi$ that follows from ZFC$+V=L$. In particular, GCH, diamonds, squares, etc, hold in $(L^M)^*$. Of course, there are also additional properties this model has that are not provable from ZFC$+V=L$. About the second question: Suppose that CH fails, and let $A$ be a subset of $\omega_2$ that codes an injective $\omega_2$-sequence of reals, say the $\alpha$-th real is coded in $A\cap(\omega\cdot\alpha,\omega\cdot(\alpha+1))$. 
Then $L[A]$ cannot possibly be a model of GCH, because $A\in L[A]$, so $L[A]$ sees at least $\omega_2^V\ge\omega_2^{L[A]}$ many different reals. As you see, the usual proof of CH breaks down because we cannot ensure that for every real there is a countable $\alpha$ such that the real belongs to $L_\alpha[A]$. In fact, if we are careful we could even have a situation where $\omega_1=\omega_1^L$, ${\mathfrak c}=\omega_2$, $A$ codes all the reals, and $L_{\omega_1}[A]=L_{\omega_1}$, i.e., none of the "new" reals are visible in $L_{\omega_1}[A]$. The issue is that $A$ may not collapse correctly, meaning: what you do is take $\delta$ large and a countable elementary $X\prec L_\delta[A]$ that contains $r$. Then you collapse $X$. Its collapse may not be $L_\alpha[A]$ for any $\alpha$, because the collapse of $A$ need not be $A$. Of course, an initial segment of $A$ coincides with an initial segment of its collapse, but its collapse may "code information faster"; in particular, it will code $r$ by a countable stage.
MAX-CUT to integer programming
I guess what you want is max-cut in an undirected graph. In the given formulation, the vertices are partitioned into two sets based on whether $x_i$ is $1$ or $-1$. Then we can see that for all edges $(i,j)$ within the same block of vertices, $(1-x_ix_j)$ will be $0$. So only those edges which are across the blocks remain in the summation, and for these edges $(i,j)$, $(1-x_ix_j)=2$. Also each edge is counted twice, once as $(i,j)$ and once as $(j,i)$. So the required cut value is $1/4$ of the summation. For max-cut in directed graphs, look at this LP formulation for min-cut. We can modify it to get the ILP formulation for max-cut as (assuming the weights $w_{ij}$ are all non-negative) $$ \begin{array}{rcll} & & \max \sum_{(i,j) \in E} w_{ij}d_{ij} & \\ d_{ij} & \leq & p_i & (i,j) \in E\\ d_{ij} & \leq & 1-p_j & (i,j) \in E\\ p_i & \in & \{0,1\} & i \in V \\ d_{ij} & \in & \{0,1\} & (i,j) \in E \end{array} $$ Here the constraints force $d_{ij}=1$ only for edges leaving the block $\{i : p_i=1\}$, so maximizing the objective counts exactly the weight of the directed cut.
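The undirected formulation can be verified by brute force; here is a Python sketch (my own addition, with an arbitrary small weight matrix) checking that $\frac14\sum_{i,j} w_{ij}(1-x_ix_j)$ equals the cut weight for every assignment:

```python
from itertools import product

# Small symmetric weight matrix (a toy example of mine): w[i][j] = w[j][i], w[i][i] = 0
w = [[0, 2, 1, 0],
     [2, 0, 3, 1],
     [1, 3, 0, 2],
     [0, 1, 2, 0]]
n = 4

def objective(x):
    """(1/4) * sum over ordered pairs of w_ij (1 - x_i x_j)."""
    return sum(w[i][j] * (1 - x[i] * x[j])
               for i in range(n) for j in range(n)) / 4

def cut_weight(x):
    """Total weight of edges crossing the partition {x_i = 1} / {x_i = -1}."""
    return sum(w[i][j] for i in range(n) for j in range(i + 1, n)
               if x[i] != x[j])

# The two quantities agree for every +/-1 assignment
for x in product([1, -1], repeat=n):
    assert objective(x) == cut_weight(x)
```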
Find a function $f$ in $\mathcal S(\mathbb R)$, the Schwartz class, such that $\|f\|_{2} = 1$ and $\int x^k f(x)\,dx = 0$ for every $k$
If $f\in \mathcal S,$ then so is $\hat f,$ and $f$ is the inverse Fourier transform of $\hat f.$ It follows that $$D^kf(0) = i^k\int_{\mathbb R} x^k\hat f(x)\,dx,\,\, k=0,1,2,\dots,$$ modulo constants arising from the definition/normalization of the Fourier transform. So take $f(x) = \exp (-1/[x(1-x)])$ for $0<x<1,$ $f=0$ elsewhere. Then $f\in \mathcal S,$ and all derivatives of $f$ at $0$ vanish. Since $\mathcal S\subset L^2,$ $\hat f/\|\hat f\|_2$ will have the desired properties.
Log equation $\log(2x-1) = -x+3$ with two non log values
For an equation that is similar to this one, you will typically need the Lambert $W$ function. I assume that your $\log$ is $\ln$. $$\begin{align} \ln(2x-1)&amp;=-x+3\\ 2x-1&amp;=e^{-x+3}\\ (2x-1)e^{x-3}&amp;=1\\ \left(x-\frac12\right)e^{x-3}&amp;=\frac12&amp;&amp;\text{make coefficients of $x$ equal}\\ \left(x-\frac12\right)e^{x-1/2}&amp;=\frac12e^{5/2}&amp;&amp;\text{make subtractions from $x$ equal}\\ W\left(\left(x-\frac12\right)e^{x-1/2}\right)&amp;=W\left(\frac12e^{5/2}\right)\\ x-\frac12&amp;=W\left(\frac12e^{5/2}\right)\\ x&amp;=W\left(\frac12e^{5/2}\right)+\frac12\\ \end{align}$$ And $x\approx1.941304330652583916268007448815937227342114287347817503685\ldots$.
Show that $f(x) = {1\over x^n}$ is continuous in its domain, $n\in\Bbb N$
When $|x-y|&lt;\delta &lt;|x|/2$, then \begin{align*} \bigg|\frac{1}{x^n}-\frac{1}{y^n} \bigg|&amp;= \frac{|y-x||x^{n-1}+\cdots +y^{n-1}|}{|x^ny^n|}\\&amp; \leq \frac{\delta n( |x|+\delta)^{n-1} }{|x|^n (|x|-\delta)^n} \\&amp;&lt; \frac{n\delta 2\cdot 3^{n-1} }{ |x|^{n+1} }\end{align*}
Equivalence of two formulas for variance and covariance
Let $X, Y$ be random variables with means $\mu_X = \mathrm{E}[X]$ and $\mu_Y = \mathrm{E}[Y]$. Then $$\begin{align*} \mathrm{Cov}[X,Y] &amp;\equiv \mathrm{E}[(X - \mu_X)(Y - \mu_Y)] \\ &amp;= \mathrm{E}[XY - \mu_Y X - \mu_X Y + \mu_X \mu_Y] \\ &amp;= \mathrm{E}[XY] - \mathrm{E}[\mu_Y X] - \mathrm{E}[\mu_X Y] + \mathrm{E}[\mu_X \mu_Y] \\ &amp;= \mathrm{E}[XY] - \mu_Y \mathrm{E}[X] - \mu_X \mathrm{E}[Y] + \mu_X \mu_Y \\ &amp;= \mathrm{E}[XY] - \mu_X \mu_Y - \mu_X \mu_Y + \mu_X \mu_Y \\ &amp;= \mathrm{E}[XY] - \mu_X \mu_Y \\ &amp;= \mathrm{E}[XY] - \mathrm{E}[X]\mathrm{E}[Y]. \end{align*}$$ If $X = Y$, then $\mathrm{Cov}[X,Y] = \mathrm{Var}[X]$, $XY = X^2$, and the above becomes $$\mathrm{Var}[X] = \mathrm{E}[X^2] - \mathrm{E}[X]^2.$$ Now suppose $W = f(X)$; i.e., $W$ is a random variable that is some function $f$ of the random variable $X$. Then $$\mathrm{Var}[W] = \mathrm{E}[W^2] - \mathrm{E}[W]^2$$ is equivalent to $$\mathrm{Var}[f(X)] = \mathrm{E}[f^2(X)] - \mathrm{E}[f(X)]^2.$$
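These identities can be checked numerically on sampled data; here is a stdlib-only sketch (the distributions are arbitrary choices, and the means are sample means, for which the same algebra holds exactly):

```python
import random

random.seed(1)
n = 100_000
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [x + random.gauss(0, 0.5) for x in xs]   # correlated with xs

def mean(v):
    return sum(v) / len(v)

mx, my = mean(xs), mean(ys)

# Cov[X,Y]: definition vs E[XY] - E[X]E[Y]
cov_def = mean([(x - mx) * (y - my) for x, y in zip(xs, ys)])
cov_prod = mean([x * y for x, y in zip(xs, ys)]) - mx * my
assert abs(cov_def - cov_prod) < 1e-9

# Var[X]: definition vs E[X^2] - E[X]^2
var_def = mean([(x - mx) ** 2 for x in xs])
var_prod = mean([x * x for x in xs]) - mx ** 2
assert abs(var_def - var_prod) < 1e-9
```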
Let $S$ be a spanning set of a vector space $V$. If another vector of $V$ is added to $S$, can the new set still be a spanning set of $V$?
I will write $v_0$ for the new vector instead of $v_k$. You did not prove what you are supposed to prove. You have to prove that any vector $v \in V$ is a linear combination of vectors from $\overline S$. Since we can write $v=\sum\limits_{i=1}^{n} c_iv_i$, we can write $v=\sum\limits_{i=1}^{n} c_iv_i +0\cdot v_0$, and that finishes the proof.
Why do biadditive, balanced maps define a homomorphism of bimodules?
One needs $f(am,n)=af(m,n)$ and $f(m,na)=f(m,n)a$. Then those imply $\tilde f(am\otimes n)=a \tilde f(m\otimes n)$ and $\tilde f(m\otimes na)=\tilde f(m\otimes n)a$ by the definition of $\tilde f$.
Is this proof legit?
Your new proof looks OK, but in the last part you shouldn't say "Since $T$ is linear, $T(v)=w$". What you mean is: "$\gamma$ is a basis of $V$, and therefore I can define a linear map by specifying values on the basis vectors and extending linearly". Your example also looks fine. The main idea there is that the choice of $W'$ is not unique (unless $W=V$), so you will end up with different projections. Also, be careful with how you use $\forall$ (in fact, you shouldn't have to use it in this proof at all). You claim $\forall v \in V,w\in W,w' \in W'$ we have $v=w+w'$, but that is obviously not true if I take $v=w=0$ and $w'$ to be some nonzero element.
Find the area of the shaded section on a square.
Here is a hint with double integrals. Split the area in two. Then \begin{align} S_{CBED}&amp;= S_{CED}+S_{CBE} = \int_{x_1}^{x_2}\int_{f_2(x)}^{f_1(x)} dy dx + \int_{x_2}^{x_3}\int_{f_3(x)}^{f_1(x)} dy dx \end{align} where \begin{align} f_1(x)&amp;=\sqrt{6^2-(x-6)^2}, \\ f_2(x)&amp;=6-\sqrt{6^2-(x-6)^2}, \\ f_3(x)&amp;=x, \\ x_1&amp;=6-3\sqrt{3}, \\ x_2&amp;=6-3\sqrt{2}, \\ x_3&amp;=6. \end{align} $S_{CBED}=\tfrac{15}{2}\pi-9\sqrt{3}\approx 7.97348763$. Another way to split the area suggests a geometric solution: a sum of the sector $BCD$ and a difference between the sector $ABD$ and $\triangle ABD$. $\triangle ABD$ is equilateral (why?), so \begin{align} S_1&amp;=\tfrac12 \cdot 36(\tfrac\pi3-\tfrac\pi4)=\tfrac32 \pi \\ S_2&amp;=\tfrac12 \cdot 36\tfrac\pi3=6\pi \\ S_3&amp;=9\sqrt3 \end{align} And the answer is $\tfrac{15}2\pi-9\sqrt3$, as above.
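If you want to verify the number without a computer algebra system, a midpoint rule applied to the two strip integrals above reproduces it (function names mirror $f_1,f_2,f_3$):

```python
import math

def f1(x): return math.sqrt(36 - (x - 6) ** 2)
def f2(x): return 6 - math.sqrt(36 - (x - 6) ** 2)
def f3(x): return x

def midpoint(f, a, b, n=100_000):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

x1, x2, x3 = 6 - 3 * math.sqrt(3), 6 - 3 * math.sqrt(2), 6.0
area = (midpoint(lambda x: f1(x) - f2(x), x1, x2)
        + midpoint(lambda x: f1(x) - f3(x), x2, x3))
exact = 15 * math.pi / 2 - 9 * math.sqrt(3)
assert abs(area - exact) < 1e-6
print(round(area, 6))  # ≈ 7.973488
```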
How to prove that the slice category over a Quillen model category is also a Quillen model category?
It looks like you've got the idea. Working in the slice category $\mathcal{C}/X$, where $\mathcal{C}=\{\mathcal{C},F,\text{co}F,W\}$ is a model category with fibrations $F$, cofibrations $\text{co}F$, and weak equivalences $W$, we want to show $\mathcal{C}/X$ satisfies the axiom that retracts of morphisms in $F,\text{co}F,$ or $W$ are also in $F,\text{co}F,W$ respectively. So suppose we have $$\begin{matrix} A&amp;&amp;\stackrel{f}{\to}&amp;&amp;A'\\&amp;\searrow&amp; &amp;\swarrow&amp;\\&amp;&amp;X&amp;&amp;\end{matrix}$$ as a retract of $$\begin{matrix} B&amp;&amp;\stackrel{g}{\to}&amp;&amp;B'\\&amp;\searrow&amp; &amp;\swarrow&amp;\\&amp;&amp;X&amp;&amp;\end{matrix}$$ That means we have maps $i:A\to B, r:B\to A, j:A'\to B',s: B'\to A'$ such that $ri=\text{id}_A, sj=\text{id}_{A'}$, everything commutes with the maps from $A,A',B,B'$ to $X$, and the squares commute: $gi=jf, fr=sg$. But we can send this diagram to $\mathcal{C}$ just by forgetting the slice structure: ignore the bit about commuting with maps to $X$ in the previous sentence and it's expressing exactly that $f$ is a retract of $g$ in $\mathcal{C}$. Then $f$ is a weak equivalence (fibration, cofibration) between $A$ and $A'$ in $\mathcal{C}$, so that it'll be a weak equivalence (fibration, cofibration) between $A\to X$ and $A'\to X$ in $\mathcal{C}/X$ whenever this same diagram commutes: $$\begin{matrix} A&amp;&amp;\stackrel{f}{\to}&amp;&amp;A'\\&amp;\searrow&amp; &amp;\swarrow&amp;\\&amp;&amp;X&amp;&amp;\end{matrix}$$ And in the given case we required that it does commute, so we have a weak equivalence, fibration, or cofibration in $\mathcal{C}/X$.
Why does $\lim_{\lambda\to+\infty}\frac{\mathcal{L}\{v(x)\leq \lambda\}}{\mathcal{L}\{v(x)+|x|\leq \lambda\}}=1$? ($|x|^2\leq v(x)\leq 2|x|^2$)
This is false. Fix a large $N$ and define $v(x)=N^2$ for $N/\sqrt{2}\le |x|\le N$. Note that this is consistent with the bounds that are imposed. On $|x|&lt;N/\sqrt{2}$, we could set $v=2x^2$, and $v$ will be $&gt;N^2$ for $|x|&gt;N$. Now consider the quotient for $\lambda=N^2$. The set in the numerator is $[-N,N]$, but in the denominator we have at most $[-N/\sqrt{2},N/\sqrt{2}]$ in the set. Thus the quotient is $\ge \sqrt{2}$. We can give $v$ this behavior for a sequence $N_k\to\infty$, so the quotient need not go to $1$.
Can LP with bounded feasible region be converted to LP with unbounded feasible region after adding artificial variables?
This holds true for all feasible bounded problems. A constraint $a^Tx \geq b$ with $b &gt; 0$ becomes $a^Tx-s_i+ a_i = b$, with $s_i$ the surplus (excess) variable and $a_i$ the artificial variable. If $(x,s,a)$ is feasible, then $(x,s+\alpha,a+\alpha)$ is also feasible for any $\alpha&gt;0$, so the new feasible region is unbounded. Based on the comment it seems like you are interested in whether the projection of the feasible region of $p'$ onto the variables in $p$ is unbounded. That can also be the case, but not necessarily. As a simple example, consider the set $S_1=\{x\in\mathbb{R} : 1 \leq x \leq 2\}$. After adding slacks and an artificial variable, this becomes the set $S_2=\{(x,a,s_1,s_2) \in\mathbb{R}^4 : x-s_1+a=1, x+s_2=2\}$. The constraint $x \geq 1$ can be violated arbitrarily by increasing the value of $a$ (for example, $(0,1,0,2) \in S_2$), so the projection of $S_2$ onto $x$ is $x \leq 2$.
Weak net convergence in $\ell_p$, where $1 < p < \infty$.
As for (i) use the fact that the evaluation functionals are bounded. Weak convergence means convergence of the net when hit by any functional, so in particular you may take any of the evaluation functionals. For (ii), suppose that your net is norm-bounded, and point-wise convergent. By subtracting the point-wise limit, we may suppose that this net converges point-wise to 0. Pick $g\in \ell_{p^*}$ (Here we identify $\ell_{p^*}$ with $(\ell_p)^*$ in the canonical way, and so $p^*$ is the Hölder conjugate exponent of $p$.) As $p^*\neq \infty$, the finitely supported sequences are dense in $\ell_{p^*}$. Fix $\delta &gt; 0$ and choose a finitely supported sequence $g^\prime\in \ell_{p^*}$ such that $\|g-g^\prime\|&lt;\delta$. Let $M$ be a bound for our net. Then $$|\langle g, \beta^{(\alpha)} \rangle| \leqslant |\langle g - g^\prime, \beta^{(\alpha)} \rangle| + |\langle g^\prime, \beta^{(\alpha)} \rangle| \leqslant M\delta + |\langle g^\prime, \beta^{(\alpha)} \rangle|$$ As $g^\prime$ is finitely supported, we use our assumption of point-wise convergence to $0$, so the right-hand side tends to $\delta M$. As $\delta$ was arbitrary, we conclude that $|\langle g, \beta^{(\alpha)} \rangle|$ tends to $0$.
Does $W^{1,2}$ convergence on compact subsets imply convergence on the entire domain?
No, and here is a counterexample: take $\Omega=(0,1)^2$ and $f_{n}(x,y)$ to be the piecewise linear (in $x$) function that satisfies $f_{n}(0,y)=\frac{1}{n}$ and $f_{n}(x,y)=0$ for $x\ge \frac{1}{n^2}$. Then $||f_{n}||_{L^{2}(\Omega)}=\frac{1}{\sqrt{3}\,n^2}$ and $||f_{n}'||_{L^{2}(\Omega)}=1$, but $f_{n}'(x)=0$ for $x\ge \frac{1}{n}$ and subsequently $||f_{n}||_{H^{1}([\frac{1}{n},1)\times(0,1))} = 0$. Thus $f_{n} \rightarrow 0$ in $H^{1}(U)$ for every $U \subset \subset \Omega$, because any such $U$ will eventually be contained in $[\frac{1}{n},1)\times(0,1)$ for some sufficiently large $n$. But since $||f_{n}'||_{L^{2}(\Omega)} = 1$ for all $n$, $f_{n}$ does not converge to $0$ in $H^{1}(\Omega)$.
Why is $\sup_{x \in [a,b]}|\int_a^xk(x,t)(f(t)-g(t))dt| \le ||f-g|| \sup_{x \in [a,b]} \int_a^x |k(x,t)|dt$?
Because one has that $$\left| \int h(s)\,ds \right| \le \int |h(s)|\,ds$$ Applying this to your integrand, you get $$\left| \int k(x,t) (f(t)-g(t))\,dt\right| \le \int |k(x,t)||f(t)-g(t)|\,dt \le \int |k(x,t)|\,||f-g||_\infty\,dt$$ And since $||f-g||_\infty$ is a scalar, you can pull it out of the integral and the supremum.
$\sigma$-algebra generated by open balls
No. See, for example, this question here or this question on MO. Separability (or equivalently, for metric spaces, second countability) is sufficient to guarantee the ball $\sigma$-algebra is the same as the Borel $\sigma$-algebra, since the topology is then countably generated by balls. In nonseparable normed spaces, however, the ball $\sigma$-algebra can be strictly smaller than the Borel $\sigma$-algebra. The ball $\sigma$-algebra does have rather curious applications in nonparametric statistics and empirical process theory, mainly because many interesting functions (e.g. Banach-space-valued functions) are ball-measurable but not Borel measurable, and we still have a lot of nice things (regularity, weak convergence for measures whose support is separable, etc.). However, when the two $\sigma$-algebras don't coincide, continuous functions are no longer necessarily measurable, which does create some oddities.
Under invariant space
Yes: if $W$ is invariant under $P(T)$ for every polynomial $P$, take $P(X)=X$; then $P(T)=T$, and so $T(W)=P(T)(W)\subset W$.
Is this true about the inverse sine?
Let $\theta=\arcsin{(-x)}$. Then $$\sin\theta=-x$$ $$-\sin\theta=x$$ $$\sin(-\theta)=x$$ $$-\theta=\arcsin{x}$$ $$\theta=-\arcsin{x}$$ Therefore, $-\arcsin{x}=\arcsin{(-x)}$.
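The identity is easy to spot-check numerically (here with Python's `math.asin`):

```python
import math

# arcsin is odd: asin(-x) == -asin(x) for every x in [-1, 1]
for k in range(-10, 11):
    x = k / 10
    assert math.isclose(math.asin(-x), -math.asin(x), abs_tol=1e-12)
```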
Finding the max area for a rectangle inside of a circle
Referring to the figure below, you can see that the rectangle has a base of length $2r\cos\theta$ and a height of length $(h-r) + r\sin\theta$ so its area, as a function of $\theta$, is given by $A(\theta) = 2r\cos\theta\,(h-r + r\sin\theta)$ so $\displaystyle\frac{A(\theta)}{2rh} = a\,\sin\theta\cos\theta + (1-a)\,\cos\theta$ where $a \equiv \displaystyle\frac{r}{h}$. To maximise $A(\theta)$ we take the derivative with respect to $\theta$ and set it to zero, so the value of $\theta$ that maximises the area is the solution of the equation $\displaystyle\frac{\sin\theta}{\cos(2\theta)} = \frac{a}{1-a}$. $a$ is a known quantity so you can solve this numerically for $\theta$, from which you can get the width and height of the rectangle, as well as the maximum area. Edit: Actually, since $\cos(2\theta) = 1 - 2\sin^2\theta$, the equation above reduces to a quadratic equation for $\sin\theta$, whose solution is $\sin\theta = \displaystyle\frac{-1 + \sqrt{1+8b^2}}{4b}$ where $b \equiv \displaystyle\frac{a}{1-a}$.
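Here is a small sketch (with arbitrary illustrative values of $r$ and $h$) checking the closed form for $\sin\theta$ against a brute-force search over $\theta$:

```python
import math

def area(theta, r, h):
    # base 2 r cos(theta), height (h - r) + r sin(theta), as derived above
    return 2 * r * math.cos(theta) * (h - r + r * math.sin(theta))

def optimal_sin_theta(r, h):
    a = r / h
    b = a / (1 - a)
    return (-1 + math.sqrt(1 + 8 * b * b)) / (4 * b)

r, h = 1.0, 3.0
s = optimal_sin_theta(r, h)
theta_star = math.asin(s)

# stationarity condition: sin(theta) / cos(2 theta) = b
b = (r / h) / (1 - r / h)
assert abs(s / (1 - 2 * s * s) - b) < 1e-9

# the closed form matches a fine grid search over theta in [0, pi/2]
best = max(area(i * (math.pi / 2) / 10_000, r, h) for i in range(10_001))
assert abs(area(theta_star, r, h) - best) < 1e-5
```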
Algebraic groups with no small subgroups
The group $GL_n(\mathbb{C})$ does have small subgroups for the Zariski topology. Indeed, let $U$ be an open subset of $GL_n(\mathbb{C})$ containing the identity matrix. Consider the diagonal morphism $$ \Delta:\mathbb{C}^\times \to GL_n(\mathbb{C}): \lambda\to diag(\lambda, \ldots, \lambda).$$ Then $\Delta^{-1}(U)$ is a non-empty Zariski-open subset of $\mathbb{C}^\times$; in other words, it is a cofinite subset of $\mathbb{C}^\times$. In particular, there exists a natural number $m&gt;1$ such that all $m$-th roots of unity are in $\Delta^{-1}(U)$. Let $1, \zeta, \zeta^2, \ldots, \zeta^{m-1}$ be these roots of unity. Then the matrices $diag(\zeta^i, \ldots, \zeta^i)$ for $i=0, \ldots, m-1$ form a non-trivial finite subgroup of $GL_n(\mathbb{C})$ contained in $U$.
setting up the equation of area between two polar curves
Indeed, the two polar curves intersect at: $$2\cos(t)=1\to t=\pi/3$$ Note that by symmetry it suffices to consider $$0\leq t\leq\pi/3$$ and double the result, so $$A=2\times\left(\frac{1}2\int_0^{\pi/3}(4\cos^2(t)-1)dt\right)$$
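Since $4\cos^2 t-1=1+2\cos 2t$, the integral has the antiderivative $t+\sin 2t$; a quick midpoint-rule check confirms the value:

```python
import math

def g(t):
    return 4 * math.cos(t) ** 2 - 1   # integrand

a, b, n = 0.0, math.pi / 3, 100_000
h = (b - a) / n
numeric = 2 * 0.5 * h * sum(g(a + (i + 0.5) * h) for i in range(n))

# antiderivative of 1 + 2 cos 2t is t + sin 2t
exact = (b + math.sin(2 * b)) - (a + math.sin(2 * a))
assert abs(numeric - exact) < 1e-9
print(round(exact, 6))  # pi/3 + sqrt(3)/2 ≈ 1.913223
```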
Rating system for 2 vs 2, 2 vs 1 and 1 vs 1 game
I have a proposition, maybe you can find some weak points in it: It's based on Elo. First, all the players get 1000 rating points ($R$). Let's focus on 1 vs 1, in the end I will show how to deal with 2 vs 1 and 2 vs 2. Player's A expected score $E_A$ is calculated as follows: $$ E_A = \frac{1}{1+10^{(R_B - R_A)/500}} \in (0, 1)$$ Where $R_A$ is a point rating for player A. Player's B expected rating is $E_B = 1 - E_A$. New rating for player A: $$R_A' = R_A + 100(S_A - E_A)$$ Where $S_A$ is the actual score. In case of winning $10:i$ $(i \in {0, 1, ..., 9}$) it is1: $$S_A = \frac{10}{10+i}$$ In case of losing $i:10$ it is: $$S_A = 1-\frac{10}{10+i}$$ Example: Player A with $R_A = 1000$ points won 10:6 with player B with $R_B = 1000$ points. In such case, expected score $E_A = E_B = 0.5$. Actual score $S_A$ for player A is $\frac{10}{10 + 6}=0.625$. So, $R_A' = 1000 + 100(0.625 - 0.5) = 1000 + 12.5 = 1012.5$, $R_B' = 1000 - 12.5 = 987.5$. What to do with 2 players A and B in one team playing versus player C? We can treat player A and B as one player with rating $\frac{R_A + R_B}{2}$. So, $E_A = E_B = \frac{1}{1+10^{(R_C - \frac{R_A + R_B}{2})/500}}$. New rating will be calculated individually: $$R_A' = R_A + 100(S_{AB} - E_A), R_B' = R_B + 100(S_{AB} - E_B)$$ $S_{AB}$ is the actual game score for player A and B. Since $E_A = E_B$, rating change for players in the same team is the same. 2 vs 2 case analogically. 1Why $S_A = \frac{10}{10+i}$? Consider a winning $10 : i$. Let $p$ be the probability of the winning team scoring a goal. This will be our "actual score (performance)". Let's assume that this probability is constant for the whole game. After seeing the actual outcome, $10 : i$, we can ask: what's the most likely $p$? One can use maximum likelihood estimation. What's the probability that it was $10:i$, given that the probability of the winning team scoring a goal was $p$? It is: $P(10:i | p) = \binom{9+i}{9}p^{10} (1-p)^i$ There were $10 + i$ goals in total. 
The last one belongs to the winning team: it was their 10th goal, so its position is pinned. $\binom{9+i}{9}$ is the number of ways to arrange the other 9 winners' goals and the $i$ losers' goals. Multiply by the probability of the winners scoring 10 goals, $p^{10}$, and by the probability of the losers scoring $i$ goals, $(1-p)^i$; the goals' positions are already determined, so no further combinations here. Now we just need to find which $p$ maximizes $P(10:i \mid p)$. One could take the first derivative and set it to zero. However, the maximum is the same for $\log P(10:i \mid p)$, and it is easier to work with the log-likelihood: $\log P(10:i \mid p) = \log \binom{9+i}{9} + \log p^{10} + \log (1-p)^i = \log \binom{9+i}{9} + 10 \log p + i \log (1-p)$ Setting its first derivative with respect to $p$ to $0$: $\frac{d}{dp}\log P(10:i \mid p) = \frac{10}{p} + \frac{i}{p-1} = 0$ $\frac{10(p-1) + ip}{p(p-1)} = 0$ $10p-10 + ip = 0$ $p(10+i) = 10$ $p = \frac{10}{10+i}$
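A minimal implementation of the proposed scheme (the function names are mine; $k=100$, the $500$ in the expected-score formula, and the first-to-10 scoring all follow the proposal above):

```python
def expected_score(r_a, r_b):
    """Expected score of A against B, per the formula above."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 500))

def actual_score(goals_for, goals_against):
    """Actual score S_A; the winner scores 10, and the MLE gives 10/(10+i)."""
    if goals_for == 10:
        return 10 / (10 + goals_against)
    return 1 - 10 / (10 + goals_for)

def update_1v1(r_a, r_b, goals_a, goals_b, k=100):
    e_a = expected_score(r_a, r_b)
    s_a = actual_score(goals_a, goals_b)
    return r_a + k * (s_a - e_a), r_b + k * ((1 - s_a) - (1 - e_a))

def update_2v1(r_a, r_b, r_c, goals_ab, goals_c, k=100):
    """Team (A, B) is treated as one player rated at their average."""
    e_team = expected_score((r_a + r_b) / 2, r_c)
    s_team = actual_score(goals_ab, goals_c)
    d = k * (s_team - e_team)
    return r_a + d, r_b + d, r_c - d

# worked example from the text: two 1000-rated players, result 10:6
print(update_1v1(1000, 1000, 10, 6))  # (1012.5, 987.5)
```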
Calculate the number of terms in geometric sequence
Write it as $a_{n}=13\cdot 1.8^{n}$, $n\ge 0$. Then solve the inequality $a_{n}\ge 9.6\times 10^{13}$. That will tell you the first $n$ for which the terms aren't smaller than $9.6\times 10^{13}$, and it should be simple to find the answer from there
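For completeness, a short script carrying out exactly that (assuming the sequence is $13\cdot 1.8^{n}$, $n\ge 0$, and the bound is $9.6\times 10^{13}$):

```python
import math

a0, r, bound = 13, 1.8, 9.6e13

# smallest n with a0 * r**n >= bound, via logarithms
n = math.ceil(math.log(bound / a0) / math.log(r))
assert a0 * r ** (n - 1) < bound <= a0 * r ** n
print(n)  # first index at or above the bound
```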
Let $\epsilon$ be a topos and let $f:Y\to X$ be a morphism. Let $M,N$ be two subobjects of $X$. Is it true that $f^*(M\cap N)=f^*(M)\cap f^*(N)$?
This is true in any category whatsoever (as long as the category has binary pullbacks so the question even makes sense). This is a standard "pullback cube" argument, which is not to say that it's easy! At each of the following steps, you should verify that the diagram remains commutative. First, form the pullback square defining $M \cap N$: Next, paste on the pullback squares defining $f^*M$ and $f^* N$: Next, paste on the pullback square defining $f^*(M \cap N)$: Because right-hand face is a pullback, the maps $f^*(M \cap N) \to N$ and $f^*(M \cap N) \to Y$ induce a map $f^*(M \cap N) \to f^*N$: Likewise, because the bottom face is a pullback, the maps $f^*(M \cap N) \to M$ and $f^*(M \cap N) \to Y$ induce a map $f^*(M \cap N) \to f^*(M)$: Here is our completed "pullback cube": Now, using the fact that the back, bottom, right, and "diagonal" (red) squares are pullbacks, prove that the front square is a pullback. This is a direct diagram chase, which @MaliceVidrine did for you :) This setup was important, though, because it tells us that the diagram actually commutes!
Continuity and Uniform Continuity on half closed intervals
If $\lim\limits_{x\,\downarrow\,a} f(x)=y$, then the function that equals $y$ at $a$ and equals $f(x)$ at $x=$ anything besides $a$, is continuous on $[a,b]$; hence uniformly continuous. A restriction of a uniformly continuous function to a subset of its domain is uniformly continuous.
Given a simple pole, find the difference of two Laurents series expansion coefficients
Hint: $g(z) = f(z) - \frac1{z-1}$ has a removable singularity at $z=1$. It can be continued to an analytic function on the annulus $\{z \in \mathbb C:1/2&lt;|z|&lt;2\}$ with a Laurent expansion $$ g(z) = \sum_{n=-\infty}^\infty c_n z^{n} $$ valid for $1/2 &lt; |z| &lt; 2$. Now compute the Laurent expansion of $g$ in $ \{z : 1/2 &lt; |z| &lt; 1\}$ and $\{z : 1 &lt; |z| &lt; 2\}$, respectively, using the given Laurent expansions of $f$ in those annuli, and the geometric series for $|z| &lt; 1$ and $|z| &gt; 1$, respectively. Then use that the coefficients of the Laurent expansion are unique, to get a relationship between $a_n$, $b_n$ and $c_n$. Spoiler: For $n \ge 0$: $c_n = a_n + 1 = b_n$. For $n &lt; 0$: $c_n = a_n = b_n - 1$.
In the context of topology, what does the notation "$H_1(A)$" mean?
$H_1(A)$ is the first (singular) homology group of the annulus $A$. Homology is a homotopy invariant, and since an annulus deformation retracts onto one of the base circles $S^1$, we have $$H_1(A) \cong H_1(S^1) \cong \mathbb{Z}.$$ An element of this group may be understood as a winding number around the circle. A choice of generator of this group simply corresponds to an orientation of the circle, i.e., a full loop around the circle in either direction. Therefore, the two base circles are coherently oriented iff they represent the same generator in this group.
Ultralimit of an eventually constant generalized sequence
The claim is in fact false. For an explicit counterexample, let $I=\mathbb{Z}$ with the usual ordering, let $X$ be the two-point Hausdorff space $X=\{a, b\}$, let $x_i=a$ if $i&lt;0$ and $x_i=b$ if $i\ge 0$, and let $\mathcal{U}$ be a nonprincipal ultrafilter on $\mathbb{Z}$ such that $\mathbb{Z}_{&lt;0}\in\mathcal{U}$. Then the limit of $(x_i)_{i\in I}$ along $\mathcal{U}$ is $a$, even though it clearly "ought" to be $b$. From here it's an easy exercise to find the necessary-and-sufficient condition $\mathcal{U}$ needs to satisfy in order to make this statement work. (Note that as Eric Wofsey's answer shows, this condition will actually contradict non-principality unless $I$ satisfies "there is no largest element.")
Find $\sqrt[m]{\frac{\sqrt[m]{\frac{\sqrt[m]{\frac{\sqrt[m]{a}}{a}}}{a}}}{\begin{array}{c} a\\\vdots \end{array}}}$
As it stands the recurrence is \begin{eqnarray*} A_n= \sqrt[m]{ \frac{A_{n-1}}{a}}. \end{eqnarray*} Now assuming a limit $A$ exists, it satisfies $A^m a=A$; writing $A=a^{\alpha}$ then gives $ \alpha=1/(1-m)$ and we have the solution \begin{eqnarray*} A= a^{\frac{1}{1-m}} \end{eqnarray*} So in your example $a=128, m=6$ does indeed give $A=0.378929 \cdots$. Not sure what the recurrence would need to be in order to get the solution suggested by the author.
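Iterating the recurrence numerically (with the example values $a=128$, $m=6$) confirms the fixed point:

```python
a, m = 128, 6
A = a ** (1 / m)            # arbitrary positive starting value
for _ in range(100):
    A = (A / a) ** (1 / m)  # A_n = (A_{n-1} / a)^(1/m)

limit = a ** (1 / (1 - m))  # the fixed point a^{1/(1-m)} = 128^(-1/5)
assert abs(A - limit) < 1e-12
print(round(limit, 6))  # ≈ 0.378929
```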
Proof that two sub-sequences limits are the same $\iff$ sequence converges
For the $\implies$ direction. Note that you are given that $$b_{2n} \to a, b_{2n+1} \to a$$ We show that it will follow that $b_n \to a$ Let $\epsilon &gt; 0$. Choose $n_0$ such that $|b_{2n}-a| &lt; \epsilon$ if $n \ge n_0$ and choose $n_1$ such that $|b_{2n+1}-a| &lt; \epsilon$ if $n \geq n_1$. Put $N:= \max\{n_0,n_1\}$. If $n \geq 2N+1$, then $|b_n-a| &lt; \epsilon$. Indeed, if $n$ is even then $n= 2k$ for some $k$. Since $n \geq 2n_0$, we have $k \geq n_0$ and thus $|b_{2k}-a| &lt;\epsilon$. Similarly, one handles the case where $n$ is odd. Your proof of the other direction is correct. A subsequence of a convergent sequence always converges to the same limit and thus the subsequence of terms with odd index and the subsequence of terms with even index have the same limit.
On inversive geometry
Recall that $z\bar{z} = |z|^2$. It may be helpful to observe that $$ \bar{z} = \frac{|z|^2}{z} $$ Hyperbolic geometry, incidentally, doesn't have anything to do with the inversive plane.
Homeomorphism between $(0,1)^{\mathbb{R}}$ and $\mathbb{R}^{\mathbb{R}}$
Hint: Prove that for homeomorphic topological spaces $X,Y$ their product topologies $X^I, Y^I$ (for any index set $I$) are homeomorphic as well. (Extend the homeomorphism $\pi \colon X \to Y$ coordinatewise to a homeomorphism $\pi^I \colon X^I \to Y^I$.)
Even function I don't think is even
The function is even about the line $x=\frac{\pi}{2}$; that is, the part of the graph to the left of $x=\frac{\pi}{2}$ is the mirror image of the part to the right. In other words, shift the origin of the coordinate system to $(\frac{\pi}{2},0)$ and rewrite the function accordingly; you will then find that the function is even, i.e. $f(-x)=f(x)$.
A particular polynomial - 2
Let $P=\sum a_ix_i^2+\sum a_{i,j}x_ix_j$ and $Q$ similarly with $b_i$ and $b_{i,j}.$ Now provided you want all coefficients positive, and also want each of $x_i^4$ to appear in the product with positive coefficients, comparing coefficients shows each $a_i&gt;0,$ and each $b_i&gt;0.$ So from these terms alone there is already a positive contribution to each of the six cross-square terms, meaning products of two squares. If cross terms from $P$, i.e. $a_{i,j}x_ix_j$, are used and/or cross terms from $Q$ are used, this won't kill off any of the six cross-square terms we already have, since the coefficients of the factors have been required to be positive or zero. Now I did find an example in which we drop the positivity requirements for the coefficients of the factors and the result, but in the product all six products of two squares appear. This example is $$P=x_1^2+x_2^2+x_3^2+x_4^2+x_1x_2+x_3x_4,\\ Q=x_1^2+x_2^2+x_3^2+x_4^2-x_1x_2-x_3x_4.$$ I have not been able to find one with two of the products of two squares missing as your list implies. A better example: Define $$P(a)=w^2+x^2+y^2+z^2+awz+axy.$$ Then $P(\sqrt{2})\cdot P(-\sqrt{2})$ has all and only nonzero terms from your list. $$w^4+x^4+y^4+z^4+2w^2x^2+2w^2y^2+2x^2z^2+2y^2z^2-4wxyz.$$ [There are still negative coefficients in one of the factors and in the product, but this cannot be completely avoided anyway as noted above.]
Can we say Dirac delta function is zero almost surely?
Because that is a bad "definition" of the "Dirac function"! The "Dirac function" is not a function at all; it is a "distribution" or "generalized function", a functional that assigns a number to every test function. Specifically, the "Dirac function" assigns the number $f(0)$ to each function $f$.
Series Question: $\sum_{n=1}^{\infty}\frac{n+1}{2^nn^2}$
Consider Maclaurin series of natural logarithm $$ \ln(1-x)=-\sum_{n=1}^\infty\frac{x^n}{n}. $$ Taking $x=\dfrac12$ yields \begin{align} \ln\left(1-\frac12\right)&amp;=-\sum_{n=1}^\infty\frac{1}{2^n\ n}\\ \ln2&amp;=\sum_{n=1}^\infty\frac{1}{2^n\ n}. \end{align} Now, dividing the Maclaurin series of natural logarithm by $x$ yields \begin{align} \frac{\ln(1-x)}{x}&amp;=-\sum_{n=1}^\infty\frac{x^{n-1}}{n}, \end{align} then integrating both sides and taking the limit of integration $0&lt;x&lt;\dfrac12$. We obtain \begin{align} \int_0^{\Large\frac12}\frac{\ln(1-x)}{x}\ dx&amp;=-\int_0^{\Large\frac12}\sum_{n=1}^\infty\frac{x^{n-1}}{n}\ dx\\ &amp;=-\sum_{n=1}^\infty\int_0^{\Large\frac12}\frac{x^{n-1}}{n}\ dx\\ &amp;=-\left.\sum_{n=1}^\infty\frac{x^{n}}{n^2}\right|_{x=0}^{\Large\frac12}\\ -\int_0^{\Large\frac12}\frac{\ln(1-x)}{x}\ dx&amp;=\sum_{n=1}^\infty\frac{1}{2^n\ n^2}. \end{align} The integral in the LHS is $\text{Li}_2\left(\dfrac12\right)=\dfrac{\pi^2}{12}-\dfrac{\ln^22}{2}$, but since you are not familiar with dilogarithm function, we will evaluate the LHS integral using IBP. Taking $u=\ln(1-x)$ and $dv=\dfrac1x\ dx$, we obtain \begin{align} \int_0^{\Large\frac12}\frac{\ln(1-x)}{x}\ dx&amp;=\left.\ln(1-x)\ln x\right|_0^{\large\frac12}+\int_0^{\Large\frac12}\frac{\ln x}{1-x}\ dx\\ &amp;=\ln^22-\lim_{x\to0}\ln(1-x)\ln x-\int_1^{\Large\frac12}\frac{\ln (1-x)}{x}\ dx\ ;\\&amp;\color{red}{\Rightarrow\text{let}\quad x=1-x}\\ \int_0^{\Large\frac12}\frac{\ln(1-x)}{x}\ dx+\int_1^{\Large\frac12}\frac{\ln (1-x)}{x}\ dx&amp;=\ln^22-0\\ -\left.\sum_{n=1}^\infty\frac{x^{n}}{n^2}\right|_{x=0}^{\Large\frac12}-\left.\sum_{n=1}^\infty\frac{x^{n}}{n^2}\right|_{x=1}^{\Large\frac12}&amp;=\ln^22\\ \sum_{n=1}^\infty\frac{1}{2^n\ n^2}+\sum_{n=1}^\infty\frac{1}{2^n\ n^2}-\sum_{n=1}^\infty\frac{1}{n^2}&amp;=-\ln^22\\ 2\sum_{n=1}^\infty\frac{1}{2^n\ n^2}-\frac{\pi^2}{6}&amp;=-\ln^22\\ \sum_{n=1}^\infty\frac{1}{2^n\ n^2}&amp;=\frac{\pi^2}{12}-\frac{\ln^22}{2}. 
\end{align} Thus, \begin{align} \sum_{n=1}^\infty\frac{n+1}{2^n\ n^2}&amp;=\sum_{n=1}^\infty\left(\frac{1}{2^n\ n}+\frac{1}{2^n\ n^2}\right)\\ &amp;=\large\color{blue}{\ln2+\frac{\pi^2}{12}-\frac{\ln^22}{2}}. \end{align}
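A quick numerical check of the closed form:

```python
import math

partial = sum((n + 1) / (2 ** n * n ** 2) for n in range(1, 200))
closed = math.log(2) + math.pi ** 2 / 12 - math.log(2) ** 2 / 2
assert abs(partial - closed) < 1e-12
print(round(closed, 6))  # ≈ 1.275388
```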
No of pairs of elements whose XOR satisfies a condition
Let's make the problem a little more general and have two sets of input numbers $A$ and $B$, and ask for the number of pairs $(a,b)\in A\times B$ such that $a \oplus b \le K$. If $K=0$ then the answer is just $|A\cap B|$. Otherwise let $K=2^p+Q$ with $Q&lt;2^p$. (This is just finding the most significant set bit in $K$). Then partition both $A$ and $B$ into subsets according to the quotient $\lfloor a/2^p \rfloor$ for each number $a$. Whenever there's both a subset $A_i$ and a subset $B_i$ with the same quotient, that gives us $|A_i|\times|B_i|$ pairs that xor to something less than $K$ -- indeed, less than $2^p$. On the other hand, we can also pair up the partitions such that the subset of $A$ corresponding to the quotient $x$ is matched up to the subset of $B$ corresponding to the quotient $x\oplus 1$. When we xor numbers from two such subsets together we get something between $2^p$ and $2^{p+1}$, which will be $\le K$ if the xor of the remainders is $\le Q$. Count the numbers of relevant pairs between the two partitions recursively. If we start by setting $A=B=$ the input set, this algorithm gives the number of ordered pairs that xor to something $\le K$. We can get the number of unordered pairs instead by noting $(a,a)$ is a qualifying pair for every input number $a$ -- and these $n$ pairs are counted exactly once, whereas everything else was counted exactly twice. So to get the number of unordered pairs, add $n$ and divide by two at the very end. (Or subtract $n$ if you don't want to count $a\oplus a$ as "pairs" at all). Time complexity. If we sort the set of input numbers once and for all at the beginning, the order will be preserved each time we take remainders for a recursive invocation of the algorithm, so the partitioning in each step takes linear time. Each of the $n$ input numbers ends up in at most one recursive call for each $p$, and there are at most $\log K$ levels in the recursion, so the total time complexity in $O(n\log n + n\log K)$. 
It feels like it should be possible to prove that this running time is actually linear in the size of the input in bits (rather than the number of input numbers), but the details of how to dispose of the $\log K$ factor are a bit murky if $K$ is larger than most but not all of the input numbers. Perhaps it isn't true.
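Here is a sketch of the recursive count described above (Python for brevity; `count_pairs_leq` counts ordered pairs over two lists, and `count_unordered` applies the add-$n$-and-halve correction):

```python
import random
from collections import Counter, defaultdict

def count_pairs_leq(A, B, K):
    """Ordered pairs (a, b) in A x B with a XOR b <= K."""
    if not A or not B:
        return 0
    if K == 0:
        ca, cb = Counter(A), Counter(B)   # a XOR b == 0 means a == b
        return sum(ca[v] * cb[v] for v in ca if v in cb)
    p = K.bit_length() - 1                # K = 2^p + Q with Q < 2^p
    Q = K - (1 << p)
    mask = (1 << p) - 1
    ga, gb = defaultdict(list), defaultdict(list)
    for a in A:
        ga[a >> p].append(a & mask)
    for b in B:
        gb[b >> p].append(b & mask)
    total = 0
    for x, rems in ga.items():
        if x in gb:                       # same quotient: xor < 2^p <= K
            total += len(rems) * len(gb[x])
        if x ^ 1 in gb:                   # xor = 2^p + (xor of remainders)
            total += count_pairs_leq(rems, gb[x ^ 1], Q)
    return total

def count_unordered(nums, K):
    # each (a, a) is counted once above, every other pair twice
    return (count_pairs_leq(nums, nums, K) + len(nums)) // 2

# brute-force cross-check on random data
random.seed(7)
nums = [random.randrange(64) for _ in range(30)]
for K in (0, 5, 19, 63):
    brute = sum(1 for a in nums for b in nums if a ^ b <= K)
    assert count_pairs_leq(nums, nums, K) == brute
```

Quotients whose xor is $2$ or more are never visited, since those pairs xor to at least $2^{p+1} > K$.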
$L^1$-bounded quadratic variation of a continuous local martingale $\implies$ it is a true martingale, $L^2$-bounded
For completeness, I will include the proof from Revuz, Yor "Continuous martingales and Brownian Motion" here. $M^2-[M]$ is a local martingale, so there exist stopping times ${T_n},n\ge1$ such that $T_n\to\infty$ and $\mathbb{E}[M^2_{t\wedge T_n}]=\mathbb{E}([M]_{t\wedge T_n})&lt;K$. Apply Fatou: $\mathbb{E}[M^2_t]\leq \liminf\mathbb{E}[M^2_{t\wedge T_n}]\leq K$. So $M$ is $L^2$-bounded. We now need to show that $M$ is a true martingale. Let $T^{\prime}_n$ be stopping times reducing $M$. Then $\{M_{t\wedge T^{\prime}_n}\}$ is UI, since it is $L^2$-bounded; combined with the a.s. convergence $M_{t\wedge T^{\prime}_n} \to M_t$, this gives convergence in $L^1$, and so $\mathbb{E}[M_{t}|\mathcal{F_s}]=M_s$ a.s. Edit The omitted steps are: $\mathbb{E}[M_{t\wedge T^{\prime}_n}|\mathcal{F}_s]=M_{s\wedge T^{\prime}_n}$, so $\mathbb{E}[M_{t\wedge T^{\prime}_n}\mathbb{1}_A]=\mathbb{E}[M_{s\wedge T^{\prime}_n}\mathbb{1}_A]$ for all $A \in \mathcal{F}_s$. Now take $n\to \infty$ to conclude $\mathbb{E}[M_{t}\mathbb{1}_A]=\mathbb{E}[M_{s}\mathbb{1}_A]$ for all $A \in \mathcal{F}_s$, hence $\mathbb{E}[M_t|\mathcal{F}_s]=M_s$ a.s.
Proving binomial sum using mathematical induction
hint for $k\ge2$: $$\binom{2m+3}{k}=\binom{2m+1}{k}+\binom{2m+1}{k-1} + \binom{2m+1}{k-1}+\binom{2m+1}{k-2}$$ now sum using your induction hypothesis
Why does this perceptron algorithm work?
Note that for $w_1 = (0, -3)$, we have $$ w_1^T x \cdot y = (-6) \cdot (-1) = 6 &gt; 0 $$ Therefore, the perceptron algorithm will terminate with $w = (0, -3)$ and the resultant classifier would label $x$ as $\texttt{sign}(w^Tx) = -1$.
Expectation of functions of inner products of Gaussian vectors
I don't think your last paragraph is correct. $a^Tb = \frac{1}{\sqrt{p}}a^Ta$ with $a \sim N(0,I_p)$. This gives us: \begin{equation} a^Tb = \frac{1}{\sqrt{p}}a^Ta = \frac{1}{\sqrt{p}}\sum_{i=1}^pa^2_i = \frac{1}{\sqrt{p}}\sum_{i=1}^pZ_i^2 \sim \frac{1}{\sqrt{p}}\chi^2_p \overset{d}{=}\textrm{Gamma}(\tfrac{p}{2},\tfrac{2}{\sqrt{p}}) \end{equation} (shape and scale parameters). By symmetry of the real inner product $a^Tb = b^Tc$. Specifying $a^Tb = b^Tc = X \sim \textrm{Gamma}(\frac{p}{2},\frac{2}{\sqrt{p}})$. We have that your expectation is: \begin{equation} \int_{0}^\infty f(x)^2\, \frac{x^{\frac{p}{2}-1}}{\Gamma(\frac{p}{2})\left(\frac{2}{\sqrt{p}}\right)^{\frac{p}{2}}}\exp\Big(-\frac{\sqrt{p}\,x}{2}\Big)\,dx \end{equation} Which is: \begin{equation} E_X[f(X)^2], \hspace{2mm} X \sim \textrm{Gamma}(\frac{p}{2},\frac{2}{\sqrt{p}}) \end{equation} This is the second moment of $f(X)$ under the given Gamma distribution.
Is $\frac{\mathrm d}{\mathrm dx}$ an operation?
It all depends on your definitions. Wikipedia defines an operation in the following way. In mathematics, an operation is a function from zero or more input values (called operands) to an output value. Since functions aren't really values, I wouldn't consider differentiation to be an operation, since it's an association between input functions and output functions, not input values and output values. Operators are a generalization of the notion of an operation. They need not necessarily input or output values. More generally, they can input and output elements of any prespecified set. According to Wikipedia, the definition of an operator is as follows. In mathematics, an operator is generally a mapping that acts on elements of a space to produce elements of another space (possibly the same space, sometimes required to be the same space). There are numerous classes of operators that have been given names in mathematics. One class that differentiation falls under is that of linear operators, which are operators acting on vector spaces which satisfy a particular property (you can check out more information here). In the context of differentiation, the key property that differentiation satisfies in order to be able to regard it as a linear operator is the following one, which holds for any functions $f,g$ and values $a,b$. $$\frac{d}{dx}\left( af(x)+bg(x)\right) = a\frac{d}{dx}f(x) + b\frac{d}{dx}g(x)$$
Finding a positive definite function to apply Lyapunov's Stability Theorem
If you want to show that the equation $$\ddot{x}+ (1-x^2)\dot{x} + x=0$$ is stable by using Lyapunov's Stability Theorem: let $\dot{x}=y$. Then we have the system $$\dot{x}=y$$ $$\dot{y}=-(1-x^2)y-x$$ Let $V=x^2 + y^2$. Then $$\dot{V}=2x\dot{x} + 2y\dot{y}=2xy+2y[-(1-x^2)y-x]=-2(1-x^2)y^2$$ In a neighborhood of the equilibrium $(0,0)$ we have $1-x^2>0$, hence $\dot{V}=-2(1-x^2)y^2\le 0$, so the equilibrium $(0,0)$ is stable. Moreover $\dot{V}=0$ only on the line $y=0$, and no trajectory other than the equilibrium itself stays on that line (for $y=0$, $x\neq0$ we have $\dot{y}=-x\neq0$), so by LaSalle's invariance principle the equilibrium $(0,0)$ is asymptotically stable.
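As a numerical sanity check (not part of the original argument), one can integrate the system from a point near the origin and watch $V = x^2+y^2$ decay; the RK4 integrator, step size, and initial condition below are my own arbitrary choices:

```python
def f(x, y):
    # the system: x' = y, y' = -(1 - x^2) y - x
    return y, -(1 - x * x) * y - x

def rk4(x, y, dt):
    # one classical Runge-Kutta step
    k1x, k1y = f(x, y)
    k2x, k2y = f(x + dt / 2 * k1x, y + dt / 2 * k1y)
    k3x, k3y = f(x + dt / 2 * k2x, y + dt / 2 * k2y)
    k4x, k4y = f(x + dt * k3x, y + dt * k3y)
    return (x + dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            y + dt / 6 * (k1y + 2 * k2y + 2 * k3y + k4y))

x, y = 0.3, 0.3                 # initial condition near the origin
V0 = x * x + y * y
for _ in range(20000):          # integrate to t = 20
    x, y = rk4(x, y, 0.001)
V = x * x + y * y
assert V < V0 and V < 1e-3      # V = x^2 + y^2 has decayed
```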
Homology with local coefficients (question on Whitehead's proof)
It sounds like you might be missing the general lifting theorem for covering maps. Here's the statement. Let $p : Y \to X$ be a covering map, and let $f : Z \to X$ be a continuous map, where $X,Y,Z$ are path connected and locally path connected, and let $x \in X$, $y \in Y$, and $z \in Z$ be such that $f(z)=p(y)=x$. Question: Does a lift exist? Meaning, does there exist a continuous function $\tilde f : Z \to Y$ such that $\tilde f(z)=y$ and such that $p \circ \tilde f = f : Z \to X$? Answer: the lift $\tilde f$ exists if and only if the image of the induced homomorphism $f_* : \pi_1(Z,z) \to \pi_1(X,x)$ is a subgroup of the image of the induced monomorphism $p_* : \pi_1(Y,y) \to \pi_1(X,x)$. In your case, because the fundamental group of $\Delta^n$ is trivial, it follows that a lift of the function $\sigma : \Delta^n \to X$ does indeed exist.
ODE equilibrium point stable/unstable proof
Your equilibrium point is $y=-b/a$. If you write the equation as $$ \frac{dy}{dx}=a(y+\frac b{a}) $$ you can see that if you start at some $y_0>-\frac b{a}$, when $a<0$ the sign of the right hand side is negative, meaning that $y$ decreases while $y(x)>-\frac b{a}$, getting closer and closer to the equilibrium. If, on the other hand, you start at some $y_0<-\frac b{a}$, when $a<0$ the sign of the right hand side is positive, meaning that $y$ increases while $y(x)<-\frac b{a}$ towards the equilibrium. The equilibrium is thus stable by definition. The opposite happens if $a>0$, making the equilibrium unstable.
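A quick numerical illustration (my values, using the explicit solution $y(x) = -b/a + (y_0+b/a)e^{ax}$ of $y'=ay+b$, which is not stated in the answer above): with $a<0$, solutions started on either side of $-b/a$ approach it.

```python
import math

a, b = -1.0, 2.0                # a < 0: stable case
eq = -b / a                     # equilibrium y = -b/a = 2

def sol(y0, x):
    # explicit solution of y' = a*y + b with y(0) = y0
    return eq + (y0 - eq) * math.exp(a * x)

assert abs(sol(5.0, 10.0) - eq) < 1e-3    # started above, converges
assert abs(sol(-1.0, 10.0) - eq) < 1e-3   # started below, converges
# for a > 0 the deviation |y - (-b/a)| grows like e^{a x}: unstable
assert abs(3.0 * math.exp(1.0 * 10.0)) > 100
```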
Classify the singularities of the function
A function $f$ which is holomorphic for all $z$ near $z_0 \in \mathbb{C}$ (with $z \neq z_0$) has a pole of order $m>0$ at $z=z_0$ if and only if $$ \lim_{z \to z_0} (z-z_0)^m f(z) $$ is finite and nonzero. If $f$ has an isolated singularity at $z=z_0$, then the singularity is removable if and only if $$ \lim_{z \to z_0} f(z) $$ exists as a finite complex number.
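These criteria can be probed numerically (examples mine, not from the question): evaluate $(z-z_0)^m f(z)$ on a tiny circle around the singularity and see whether the values settle on a finite nonzero number.

```python
import cmath

def limit_probe(g, z0, radius=1e-4):
    # average g over a few points on a tiny circle around z0
    pts = [z0 + radius * cmath.exp(2j * cmath.pi * k / 8) for k in range(8)]
    return sum(g(z) for z in pts) / 8

# f(z) = 1/z^2 has a pole of order 2 at 0: z^2 f(z) -> 1
val_pole = limit_probe(lambda z: z**2 * (1 / z**2), 0)
assert abs(val_pole - 1) < 1e-8

# f(z) = sin(z)/z has a removable singularity at 0: f(z) -> 1
val_rem = limit_probe(lambda z: cmath.sin(z) / z, 0)
assert abs(val_rem - 1) < 1e-6
```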
Relation between jet and singularities of a curve
Let $\gamma$ be a regular plane curve, $p_{0}$ a point of the plane, and $$ f(t) = \|\gamma(t) - p_{0}\|^{2} = \bigl(\gamma(t) - p_{0}\bigr) \cdot \bigl(\gamma(t) - p_{0}\bigr). $$ Note that $$ f'(t) = 2\bigl(\gamma(t) - p_{0}\bigr) \cdot \gamma'(t), \tag{1} $$ and $$ f''(t) = 2\bigl[\bigl(\gamma(t) - p_{0}\bigr) \cdot \gamma''(t) + \gamma'(t) \cdot \gamma'(t)\bigr]. \tag{2} $$ Let $T$ be the unit tangent field and $N$ the normal field, and write $\gamma' = vT$, $T' = \kappa N$. By (1), $f$ has a critical point at $t_{0}$ if and only if $\gamma'(t_{0})$ is orthogonal to the displacement $\gamma(t_{0}) - p_{0}$, if and only if $\gamma(t_{0}) - p_{0}$ is a multiple of $N$; by (2), $f''(t_{0}) = 0$ in addition if and only if $$ \bigl(\gamma(t_{0}) - p_{0}\bigr) \cdot \gamma''(t_{0}) + \gamma'(t_{0}) \cdot \gamma'(t_{0}) = 0. $$ Since $\gamma''(t_{0}) \cdot N(t_{0}) = \kappa(t_{0}) v(t_{0})^{2}$, the preceding equation gives $$ \left\lVert \gamma(t_{0}) - p_{0} \right\rVert \kappa(t_{0}) v(t_{0})^{2} = \bigl(\gamma(t_{0}) - p_{0}\bigr) \cdot \gamma''(t_{0}) = -v(t_{0})^{2}, $$ or $\left\lVert \gamma(t_{0}) - p_{0} \right\rVert = -1/\kappa(t_{0})$. That is, $$ \gamma(t_{0}) - p_{0} = \left\lVert \gamma(t_{0}) - p_{0} \right\rVert N(t_{0}) = -\frac{1}{\kappa(t_{0})} N(t_{0}),\quad\text{or}\quad p_{0} = \gamma(t_{0}) + \frac{1}{\kappa(t_{0})} N(t_{0}), $$ which means $p_{0}$ is the center of curvature. The connection with $2$-jets is immediate: the $2$-jet of $f$ at $t_{0}$ is $$ f(t_{0}) + f'(t_{0})(t - t_{0}) + \tfrac{1}{2} f''(t_{0})(t - t_{0})^{2} = c_{0} + c_{1}(t - t_{0}) + c_{2} (t - t_{0})^{2}. $$ The curve $\bigl(c_{1}(t_{0}), c_{2}(t_{0})\bigr) = \bigl(f'(t_{0}), \frac{1}{2}f''(t_{0})\bigr)$ crosses the $c_{2}$-axis if and only if $c_{1}(t_{0}) = 0$, if and only if $f$ has a critical point at $t_{0}$; the curve touches the origin if and only if both components are zero, if and only if $p_{0}$ is the center of curvature of $\gamma$ at $\gamma(t_{0})$.
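A numerical check of the conclusion (my example, an ellipse): placing $p_0$ at the center of curvature makes both $f'(t_0)$ and $f''(t_0)$ vanish, estimated here by finite differences.

```python
import math

# my example: ellipse gamma(t) = (2 cos t, sin t), checked at t0 = 0.7
a, b, t0, h = 2.0, 1.0, 0.7, 1e-4
g   = lambda t: (a * math.cos(t), b * math.sin(t))
dg  = lambda t: (-a * math.sin(t), b * math.cos(t))
ddg = lambda t: (-a * math.cos(t), -b * math.sin(t))

xp, yp = dg(t0)
xpp, ypp = ddg(t0)
v = math.hypot(xp, yp)
kappa = (xp * ypp - yp * xpp) / v**3            # signed curvature
N = (-yp / v, xp / v)                           # unit normal
x0, y0 = g(t0)
p0 = (x0 + N[0] / kappa, y0 + N[1] / kappa)     # center of curvature

f = lambda t: (g(t)[0] - p0[0])**2 + (g(t)[1] - p0[1])**2
fp  = (f(t0 + h) - f(t0 - h)) / (2 * h)             # ~ f'(t0)
fpp = (f(t0 + h) - 2 * f(t0) + f(t0 - h)) / h**2    # ~ f''(t0)
assert abs(fp) < 1e-6 and abs(fpp) < 1e-4           # both vanish
```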
Simple Lie algebras have irreducible root systems?
The assumption is that any root will be in either $\Phi_1$ or $\Phi_2$. But if $\alpha + \beta$ was in $\Phi_1$ it would be orthogonal to $\beta$, and if it was in $\Phi_2$ it would be orthogonal to $\alpha$.
Exercise on Limits of Functions
We use the Cesàro mean (http://en.wikipedia.org/wiki/Ces%C3%A0ro_mean) to obtain the solution. Let $a_{n+1}=f(n+1)-f(n)$ for $n\in \mathbb{N}$. Then we have $$ f(n)=[f(n)-f(n-1)]+[f(n-1)-f(n-2)]+\ldots+[f(2)-f(1)]+[f(1)-f(0)]+f(0)=a_n+a_{n-1}+\ldots+a_1+f(0) $$ and so $$ \frac{f(n)}{n}=\frac{\displaystyle\sum_{i=1}^{n}a_i}{n}+\frac{f(0)}{n}. $$ Since $\lim_{n\rightarrow\infty} a_n=L$, by Cesàro's theorem $$ \lim_{n\rightarrow\infty}\frac{\displaystyle\sum_{i=1}^{n}a_i}{n}=L. $$ Hence, $\displaystyle\lim_{n\rightarrow\infty} \frac{f(n)}{n}=L$.
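A quick numerical illustration of the result (my example): take $f(n) = 3n + \sqrt{n}$, so the differences $f(n+1)-f(n)$ tend to $L=3$, and indeed $f(n)/n \to 3$ as well.

```python
import math

f = lambda n: 3 * n + math.sqrt(n)
# the differences a_{n+1} = f(n+1) - f(n) tend to L = 3 ...
assert abs((f(10**6 + 1) - f(10**6)) - 3) < 1e-3
# ... and so does the Cesàro-type average f(n)/n
assert abs(f(10**6) / 10**6 - 3) < 1e-2
```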
What is the proper substitution for this integration?
Your limits for $y$ are incorrect. As $xy \gt \sqrt3$, $y$ cannot be zero. See the shaded region in the sketch. Intersection of the circle $x^2+y^2=4$ and curve $xy = \sqrt3$ will be given by $x^2 + \frac{3}{x^2} = 4 \implies x = 1, y = \sqrt3, x = \sqrt3, y = 1$ (as $x\gt0$, we have only considered positive values of $x$). So the integral should be $\displaystyle \int_1^\sqrt3 \int_{\sqrt3/y}^\sqrt{4-y^2} \frac{x}{y} \ dx \ dy$
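Evaluating the corrected limits (my computation, not part of the original answer): the inner integral is $\frac{1}{2y}\left(4-y^2-\frac{3}{y^2}\right)$, and integrating over $y\in[1,\sqrt3]$ gives $\ln 3 - 1$. A crude midpoint-rule check:

```python
import math

def integral(n=800):
    # midpoint rule for  ∫_1^{√3} ∫_{√3/y}^{√(4-y²)} (x/y) dx dy
    y0, y1 = 1.0, math.sqrt(3)
    hy = (y1 - y0) / n
    total = 0.0
    for i in range(n):
        y = y0 + (i + 0.5) * hy
        x0, x1 = math.sqrt(3) / y, math.sqrt(4 - y * y)
        hx = (x1 - x0) / n
        total += hy * sum(hx * (x0 + (j + 0.5) * hx) / y for j in range(n))
    return total

assert abs(integral() - (math.log(3) - 1)) < 1e-4
```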
Solution verification: family of polynomials, $\gcd$ of polynomials
Since $p_n(x)=(x-n)^2(x+2)$ the answer to (a) is yes because you can take $f(x)=x+2$. Your answer of $d(x)=x-2$ does not work since this is not a factor of, say, $p_1(x)$. For (b) the answer is also yes because if $f(x)=(x+2)^3$, then $d(x)=x+2$ and $f(x)=(x+2)^2 d(x)$. [I hope this is what you are intending.]
Applying Bayes Rule to Cards
The three cards in your friend's hand and the burn cards don't matter unless the betting tells you something about the chance your friend has a Jack. On the turn, you have two favorable outcomes out of $47$ because you know five of the cards. Assuming you don't get a Jack on the turn, you have two favorable outcomes out of $46$ cards for the river. The chance you get no Jack is then $\frac {45}{47}\cdot \frac {44}{46}$ so the chance you get at least one Jack (wouldn't two be nice?) is $1-\frac {45}{47}\cdot \frac {44}{46}=\frac {91}{1081} \approx 8.4\%$
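The arithmetic can be confirmed with exact fractions (a quick check, not part of the original reasoning):

```python
from fractions import Fraction

p_no_jack = Fraction(45, 47) * Fraction(44, 46)   # miss on turn and river
p_at_least_one = 1 - p_no_jack
assert p_at_least_one == Fraction(91, 1081)
assert abs(float(p_at_least_one) - 0.084) < 0.001
```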
Look simple to prove: $\int_{0}^{\pi\over 2}(\sin{x}-\cos{x})\ln{\left({\sin{x}\over \sin{x}+\cos{x}}\right)}\mathrm dx=\color{blue}{\ln{2}}$
Let $$I=\int_{0}^{\pi\over 2}(\sin{x}-\cos{x})\ln{\left({\sin{x}\over \sin{x}+\cos{x}}\right)}\,\mathrm dx$$ and let $x\to\dfrac{\pi}{2}-x$ we get $$I=-\int_{0}^{\pi\over 2}(\sin{x}-\cos{x})\ln{\left({\cos{x}\over \sin{x}+\cos{x}}\right)}\,\mathrm dx$$ hence $$2I=\int_{0}^{\pi\over 2}(\sin{x}-\cos{x})\ln{\tan x}\,\mathrm dx=\int_{0}^{\pi\over 2}\sin{x}\ln{\tan x}\,\mathrm dx-\int_{0}^{\pi\over 2}\cos{x}\ln{\tan x}\,\mathrm dx$$ these two integrals are not hard to calculate, hope you can take it from here.
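A numerical check of the claimed value $\ln 2$ (my addition): the original integrand has only a mild logarithmic singularity at $x=0$, so a fine midpoint rule suffices.

```python
import math

def integrand(x):
    s, c = math.sin(x), math.cos(x)
    return (s - c) * math.log(s / (s + c))

n = 200000
h = (math.pi / 2) / n
I = h * sum(integrand((k + 0.5) * h) for k in range(n))
assert abs(I - math.log(2)) < 1e-3
```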
Can I find the distance?
This is a slightly different question to this one. But the answer is the same: it is not possible to determine $a$ or $b$ or $a+b$. Look at this figure: We can move the red lines anywhere in the $\theta$ cone without changing $\alpha$ and $k=a/b$. But $a$ and $b$ are changed.
Symbol to denote length of geometric vector
Yes, both are preferred. There are various considerations. For example, some people use $\vert$ for set notation like $\{x\in\mathbb{R}\,\vert\,x&lt;0\}$. Then $\Vert$ helps to avoid ambiguity. Sometimes $\vert$ is for an absolute value of an element of a simple algebraic structure such as a field, whereas $\Vert$ is then used for higher structures, like matrices, functions, operators etc. That's how you have used it. But a single symbol is generally unambiguous because the meaning is argument-class-dependent. You just need to look at the argument to see if it is a real number, complex number, function, matrix etc., and then you know how to interpret $\vert$ or $\Vert$. Thus the distinction between $\vert$ and $\Vert$ is largely a hint to help the reader see the meaning more easily.
The Rademacher functions for counting binary digits
The fact that $R_n(\omega)=1$ means that $\omega$ belongs to $\Delta_{k,n}$ for some odd integer $k$. That is, $k\leqslant2^n\omega\lt k+1$ and $k=2\ell+\color{red}{1}$ for some integer $\ell$. Hence $2^n\omega=2\ell+\color{red}{1}+s$ with $0\leqslant s\lt 1$. Since $0\leqslant k\lt2^n$, $0\leqslant\ell\lt2^{n-1}$ and $\ell$ is a binary integer $\ell=\sum\limits_{i=0}^{n-2}\ell_i2^i$ with $\ell_i=0$ or $\ell_i=1$. Thus, $$ \omega=\frac2{2^n}\sum\limits_{i=0}^{n-2}\ell_i2^i+\frac{\color{red}{1}}{2^n}+\frac{s}{2^n}=\sum\limits_{i=1}^{n-1}\frac{\ell_{n-i-1}}{2^i}+\frac{\color{red}{1}}{2^n}+\frac{s}{2^n}. $$ This is the binary expansion of $\omega$, up to and including its $n$th bit, and this $n$th bit is $\color{red}{1}$. To solve the case $R_n(\omega)=-1$, rewrite the whole paragraph replacing each $\color{red}{1}$ by $\color{blue}{0}$.
Drawing 2 marbles from a box with 3 red, 3 purple, 5 green, and 7 blue marbles.
You are correct. Another way to compute it: Let $A_i$ be the event in which the $i$-th marble you grab is red. For the first marble you choose, you have $3$ red marbles out of a total of $18$. Therefore, $$P(A_1) = \frac{3}{18}.$$ For the second marble you grab, given that you already took a red one, that is, given $A_1$, you have $2$ red marbles out of a total of $17$, so $$P(A_2 \mid A_1) = \frac{2}{17}.$$ Finally, $$P(A_1 \cap A_2) = P(A_1)P(A_2 \mid A_1) = \frac{3}{18}\frac{2}{17} = \frac{1}{51}.$$ You can use the same reasoning to compute the probability for the purple ones and then add those probabilities.
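Both the sequential conditioning and direct counting give the same number; with exact fractions (my check):

```python
from fractions import Fraction
from math import comb

# sequential: P(A1) * P(A2 | A1)
sequential = Fraction(3, 18) * Fraction(2, 17)
# counting: choose 2 of the 3 red marbles over all pairs of 18
counting = Fraction(comb(3, 2), comb(18, 2))
assert sequential == counting == Fraction(1, 51)
```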
probability of drawing two card simultaneously
Two cards drawn simultaneously is the same as drawing one card after the other without replacement. Specifically, if you draw two cards simultaneously, the two cannot be the same card.
An equation involving inner products being independent of the inner product space
You may need to add some condition on the dimension of the spaces. If both have dimension at least $n$ (or identical dimension, since then they are isometric anyway), then what you say should work and the proof should look something like this: Assume that the equation holds for $X$, and let $y_1$ to $y_n$ be in $Y$. Now find $x_1$ to $x_n$ in $X$ such that $$\left\langle x_k,x_l\right\rangle_X = \left\langle y_k,y_l\right\rangle_Y \quad \forall k,l \in \{1,...,n\}$$ (should be doable by some linear algebra; you may need to work out the details. Also, here the dimension of at least $n$ comes in). Then (I changed the last sum to a sum over $l$) by just the properties of the inner product $$\sum_{j=1}^m\left\langle \sum_{k=1}^n a_{j,k} y_k, \sum_{l=1}^n b_{j,l} y_l \right\rangle_Y = \sum_{j=1}^m \sum_{k=1}^n a_{j,k} \sum_{l=1}^n \overline{b_{j,l}} \left\langle y_k, y_l \right\rangle_Y $$ $$ = \sum_{j=1}^m \sum_{k=1}^n a_{j,k} \sum_{l=1}^n \overline{b_{j,l}} \left\langle x_k, x_l \right\rangle_X = \sum_{j=1}^m\left\langle \sum_{k=1}^n a_{j,k} x_k, \sum_{l=1}^n b_{j,l} x_l \right\rangle_X = 0$$ Which proves your statement. Concerning lower dimensions, I think there may be a counterexample, but I did not try to find one yet.
Gaussian estimate conditioned to stay positive
Suppose $f_{X,Y}(x,y; \sigma,c)$ is the bivariate normal pdf with mean $(0,0)$ and covariance $c = \rho\sigma^2$, and $f_{X}(x;\sigma)$ and $f_{Y}(y;\sigma)$ are the corresponding marginal normal pdf's with mean $0$. $$\begin{align}\color{red}{E(X,X<0|Y<0)} &= \int\limits_{-\infty}^{0}xf_{X|Y}(x|y<0)dx \\\\ &= \int\limits_{-\infty}^{0}x \frac{f_{X,Y}(x,y<0)}{f_{Y}(y<0)}dx \\\\ &= \int\limits_{-\infty}^{0} x \frac{\int\limits_{-\infty}^{0}f_{X,Y}(x,y)dy}{\int\limits_{-\infty}^{0}f_{Y}(y)dy}dx \\\\ &= \int\limits_{-\infty}^{0}x\frac{\int\limits_{-\infty}^{0}f_{X,Y}(x,y)dy}{0.5}dx \\\\ &= \int\limits_{-\infty}^{0}2xf_{X}(x)\Phi(\frac{-\rho x}{\sqrt{1-\rho^2}})dx\end{align}$$ where $\Phi$ is the normal cdf with parameters, mean $ = 0$ and variance $=\sigma^2$. Now, based on this answer, let the RHS be $h(\rho)$. Then using Leibniz rule, we get, $$\begin{align}\frac{\partial h(\rho)}{\partial \rho} &= \int\limits_{-\infty}^{0}2xf_{X}(x)f_{X}(\frac{-\rho x}{\sqrt{1-\rho^2}})\left(\frac{-x}{(1-\rho^2)^{3/2}}\right)dx \\\\ &= \frac{-2}{2\pi\sigma^2(1-\rho^2)^{3/2}} \int\limits_{-\infty}^{0} x^2\exp\left(\frac{-x^2}{2\sigma^2(1-\rho^2)}\right)dx \\\\ &= \frac{-\sigma}{\sqrt{2\pi}}\end{align}$$ So, $h(\rho) = \frac{-\sigma\rho}{\sqrt{2\pi}} + K$, where $h(0) = \frac{-\sigma}{\sqrt{2\pi}} \implies K = \frac{-\sigma}{\sqrt{2\pi}}$. $$E(X,X<0|Y<0) = h(\rho) = \frac{-\sigma(\rho+1)}{\sqrt{2\pi}}$$ $$E(X,X<0|Y<0) = E(X\mathbb{I}(X<0)|Y<0) = \frac{E(X\mathbb{I}(X<0)\mathbb{I}(Y<0))}{P(Y<0)}$$ $$E(X|X<0,Y<0) = \frac{E(X\mathbb{I}(X<0)\mathbb{I}(Y<0))}{P(X<0,Y<0)} = \frac{E(X,X<0|Y<0)P(Y<0)}{P(X<0,Y<0)}$$ $$\implies E(X|X<0,Y<0) = \frac{h(\rho)\frac{1}{2}}{\Phi_{X,Y}(0,0)}$$ where $\Phi_{X,Y}$ is the bivariate normal cdf with mean $(0,0)$, variances $\sigma_X^2 = \sigma_Y^2 = \sigma^2$ and covariance $c = \rho \sigma^2$.
Have a look at this answer to get the value of $\Phi_{X,Y}(0,0)$. $$\Phi_{X,Y}(0,0) = \frac{1}{2\pi}\arcsin\rho + \frac{1}{4}$$ Finally, $$E(X|X<0,Y<0) = \frac{\frac{-\sigma(\rho+1)}{\sqrt{2\pi}}}{\frac{1}{\pi}\arcsin\rho + \frac{1}{2}}$$
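A Monte Carlo sanity check of the final formula (my addition; the choices $\sigma = 1$, $\rho = 0.5$, the seed, and the sample size are arbitrary):

```python
import math
import random

random.seed(0)
rho, sigma = 0.5, 1.0
xs = []
for _ in range(400000):
    # correlated pair: Y = rho X + sqrt(1 - rho^2) Z
    x = random.gauss(0, sigma)
    y = rho * x + math.sqrt(1 - rho**2) * random.gauss(0, sigma)
    if x < 0 and y < 0:
        xs.append(x)
mc = sum(xs) / len(xs)                       # estimate of E(X | X<0, Y<0)

closed = (-sigma * (rho + 1) / math.sqrt(2 * math.pi) * 0.5
          / (math.asin(rho) / (2 * math.pi) + 0.25))
assert abs(mc - closed) < 0.02
```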
Prove that in every tree, any two paths with maximum length have a node in common.
If we have a path, we can measure its length (the number of vertices, or edges, it goes through); the length is a natural number. The question asks you to show that if two paths both have the longest possible length (i.e. their length is the maximum possible), then they have a common node. Note that if we only took maximal paths (paths which we just cannot extend any further), the claim might fail, since a maximal path need not have the maximum length from the first part.
Probability of having numbers '$1$' and '$2$' in a hand of $10$ cards from a deck of $30$ cards
Yes, you are correct. Note that since the cards are all distinct we can put it also in this way. The probability that the $1$ is in your hand of 10 cards is $10/30$. Given that, the probability that the $2$ is among the remaining 9 cards is $9/29$. Hence the probability that both the 1 and 2 are in your hand is $$\frac{10} {30}\cdot\frac{9}{29}=\frac{3}{29}$$ which coincides with your result because $${\binom{30}{10}}=\frac{30\cdot 29}{10\cdot 9}\cdot {\binom{28}{8}}.$$
What is the spectrum of the sequence operator $B: (x_1,x_2,\ldots) \rightarrow (0,x_1,\frac{1}{2}x_2,\ldots,\frac{1}{n}x_n,\ldots)$?
Note that $\|B^n\| \le 1/n!$. From this you can show that the spectral radius of $B$ is $0$.
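To see the hint concretely (my sketch): $B$ is a weighted shift sending $e_k \mapsto e_{k+1}/k$, so $B^n e_k = \frac{(k-1)!}{(k+n-1)!}\, e_{k+n}$ and $\|B^n\| = \sup_k \frac{(k-1)!}{(k+n-1)!} = 1/n!$, attained at $k=1$; hence $\|B^n\|^{1/n} \to 0$ and the spectral radius is $0$.

```python
from math import factorial

def norm_Bn(n, kmax=50):
    # B^n e_k = ((k-1)!/(k+n-1)!) e_{k+n}; a weighted shift's norm
    # is the supremum of the absolute values of its weights
    return max(factorial(k - 1) / factorial(k + n - 1)
               for k in range(1, kmax))

for n in range(1, 15):
    assert norm_Bn(n) == 1 / factorial(n)   # sup attained at k = 1
assert norm_Bn(30) ** (1 / 30) < 0.1        # (1/n!)^{1/n} -> 0
```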
Cluster point. Accumulation point.
You can imagine a bunch of concentric circles around the origin whose radii are $1-\frac 1n$ for $n \in \{ 1,2,\cdots \}$. If $(x,y) = (r \cos \theta, r \sin \theta) \in A$, you can choose a sequence $\theta_n \neq \theta$ such that $\theta_n \to \theta$, so that $(r \cos \theta_n, r \sin \theta_n) \to (x,y)$ by continuity of $\cos$ and $\sin$. So $A \subseteq A'$ (where $A'$ denotes the set of cluster points of $A$). Additionally, if $p = (x,y) = (\cos \theta, \sin \theta)$, then $p_n = ((1-1/n)x,(1-1/n)y)$ is a sequence of points of $A$ not equal to $p$ which converges to $p$, so the unit circle is also a subset of $A'$. (I assumed $0 \notin \mathbb N$ in the way you wrote the question; if you assume $0 \in \mathbb N$, notice that $(0,0) \notin A'$ because it is an isolated point of $A$). So so far any point with distance $1 - \frac 1n$ or $1$ from the origin belongs to $A'$. Otherwise, the set $$ \{ (r \cos \theta, r \sin \theta) \in \mathbb R^2 \, | \, \theta \in [0,2\pi], r > 0, r \neq 1, r \neq 1-1/n \} $$ is open and does not intersect $A$, so any point in there does not belong to $A'$. This characterizes $A'$. Hope that helps,
Are there infinitely many pairs of rational numbers such that...
Yes. Consider the two elliptic curves: $$7r^2 = a^3+1$$ and $$7s^2 = b^3+2.$$ One can check (using SAGE or MAGMA) that these two curves have infinitely many rational points (in fact they both have rank 1). For rational points $(a,r)$ and $(b,s)$ on these curves we have that $a$ and $b$ satisfy the requirements of your question.
$k$-space tensor integral in statistical physics
(This will not be a complete result so much as a strategy, since I don't recall all the ins-and-outs of this calculation.) While one can integrate directly, the more 'physical' strategy is to take advantage of the symmetry and tensorial form of the integral. First, we note that the integral is invariant under rotations about the origin; for convenience, then, we can orient our coordinate system so that $\mathbf{u}=u\hat{z}$ i.e. taking $\mathbf{u}$ to define the $z$-axis. Second, we note that $Q$ is a rank-two tensor, symmetric under rotations about the $\mathbf{u}$-axis. But the only such tensors available in the problem are the identity matrix $I$ and the outer product $\mathbf{u}\mathbf{u}=u^2\hat{z}\hat{z}$, so $Q$ must be a linear combination of these tensors. To this end it will be useful to introduce the longitudinal and transversal projection matrices $P_L:=\hat{u}\hat{u}=\hat{z}\hat{z}$ and $P_T:=I-P_L$ respectively. (Note that $P_L P_T=P_T P_L=0$). Then we assume that the $k$-space integral is of the form $Q(u) = Q_L(u)P_L+Q_T(u)P_T$ where $Q_L,Q_T$ are functions of $u$ which we need to find. This structure can be exploited by taking traces. Since $\text{tr}(P_L)=1$ and $\text{tr} (P_T)=2$, we have $$\text{tr}(P_L Q) = Q_L(u),\quad \text{tr}(P_T Q) = 2Q_T(u),$$ $$\implies Q(u) = \text{tr}(P_LQ)P_L+\frac{1}{2}\text{tr}(P_TQ)P_T$$ The problem is thereby reduced to evaluating the traces $$\text{tr}(P_L Q)=\int_{\text{all space}} \frac{\hbar \nu_g k_z^2}{\exp[(\hbar \nu_g |\mathbf{k}|-k_z u)/k_B T]-1}d\mathbf{k},\\ \text{tr}(P_T Q)=\int_{\text{all space}} \frac{\hbar \nu_g (k_x^2+k_y^2)}{\exp[(\hbar \nu_g |\mathbf{k}|-k_z u)/k_B T]-1}d\mathbf{k}.$$ We have therefore traded one tensorial integral for two scalar integrals, a task best left for spherical coordinates. For now, I'll leave the remainder of the calculation to the reader.
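The projector bookkeeping used above can be verified mechanically (a small check of the algebra, not of the physics; the direction and coefficients are arbitrary):

```python
# verify P_L = u u^T / |u|^2 and P_T = I - P_L satisfy the identities used:
# tr(P_L) = 1, tr(P_T) = 2, P_L P_T = 0, and for Q = a P_L + b P_T the
# traces tr(P_L Q) and (1/2) tr(P_T Q) recover a and b.
u = (0.3, -1.2, 2.0)                       # arbitrary direction
n2 = sum(c * c for c in u)
PL = [[u[i] * u[j] / n2 for j in range(3)] for i in range(3)]
I3 = [[float(i == j) for j in range(3)] for i in range(3)]
PT = [[I3[i][j] - PL[i][j] for j in range(3)] for i in range(3)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def tr(A):
    return sum(A[i][i] for i in range(3))

assert abs(tr(PL) - 1) < 1e-12 and abs(tr(PT) - 2) < 1e-12
assert all(abs(x) < 1e-12 for row in mul(PL, PT) for x in row)

a, b = 1.7, -0.4                           # arbitrary coefficients
Q = [[a * PL[i][j] + b * PT[i][j] for j in range(3)] for i in range(3)]
assert abs(tr(mul(PL, Q)) - a) < 1e-12     # recovers Q_L
assert abs(tr(mul(PT, Q)) / 2 - b) < 1e-12 # recovers Q_T
```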
How to find the value $k$ of the line $x=k$ that Bisects the Area Under any Real Curve
Let $a, b$ be the lower bounds of $x$ for the region that you want to bisect. You then have to solve $$\int_a^kf(x)dx = \int_k^bf(x)dx$$ Let the integral of $f(x)$ be $F(x)$. Then this is $$F(k)-F(a)=F(b)-F(k) \to F(k) = \frac{F(a)+F(b)}{2}$$ Assuming $f(x) > 0$ means that $F(x)$ is monotonically increasing, so $$k = F^{-1}\left(\frac{F(a)+F(b)}{2}\right)$$
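For instance (example of mine), with $f(x) = x^2$ on $[0,2]$: $F(x)=x^3/3$, so $k = F^{-1}\!\left(\frac{F(0)+F(2)}{2}\right) = 4^{1/3}$. Since $F$ is increasing, a simple bisection finds it:

```python
F = lambda x: x**3 / 3          # antiderivative of f(x) = x^2
a, b = 0.0, 2.0
target = (F(a) + F(b)) / 2

lo, hi = a, b
for _ in range(60):             # bisection works because F is increasing
    mid = (lo + hi) / 2
    if F(mid) < target:
        lo = mid
    else:
        hi = mid
k = (lo + hi) / 2
assert abs(k - 4 ** (1 / 3)) < 1e-9
```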
Exterior powers of tensor products
This fact can be deduced from Schur-Weyl duality. Consider the vector space $(E \otimes F)^{\otimes d}$, which carries commuting actions of the three groups $GL(E)$, $GL(F)$, and $S_d$. We can rearrange things by writing $(E \otimes F)^{\otimes d} = E^{\otimes d} \otimes F^{\otimes d}$, so now $GL(E)$ acts on the first $d$ factors, $GL(F)$ on the last $d$ factors, and this time the symmetric group acts along the diagonal: $\sigma$ acts by permuting the first $d$ and last $d$ factors simultaneously. It is helpful to think of this $S_d$ action as being diagonal: $S_d \hookrightarrow S_d \times S_d$. By Schur-Weyl duality, considering the pairs $GL(E) \times S_d$ and $GL(F) \times S_d$ independently, we get $$ \begin{aligned} E^{\otimes d} \otimes F^{\otimes d} &\cong \left( \bigoplus_{|\lambda| = d} L_\lambda E \otimes S^\lambda \right) \otimes \left( \bigoplus_{|\mu| = d} L_\mu F \otimes S^\mu \right) \\ & = \bigoplus_{|\lambda| = |\mu| = d} L_{\lambda} E \otimes L_{\mu} F \otimes S^\lambda \otimes S^\mu. \end{aligned}$$ Now take the sign-isotypic component of the $S_d$ action on both sides. On the left, we get the sign-isotypic component of $(E \otimes F)^{\otimes d}$, which is isomorphic to the exterior power $\bigwedge^d(E \otimes F)$ because we are in characteristic zero. On the right, the tensor product $S^\lambda \otimes S^\mu$ of Specht modules contains the sign irrep once if $\mu = \lambda'$, and does not contain the sign irrep otherwise. (This is hom-tensor-dual to the fact that tensoring with the sign representation sends $S^\lambda$ to $S^{\lambda'}$). We get $$ \bigwedge^d(E \otimes F) \cong \bigoplus_{|\lambda| = d} L_\lambda E \otimes L_{\lambda'} F.$$ This is essentially the argument given in 4.1 of Howe's "Perspectives on invariant theory", and he calls this theorem "skew $(GL_n, GL_m)$-duality" (others sometimes call it "skew Howe duality").
As for what the isomorphism looks like, Howe also offers some explanation. Fix bases $e_1, \ldots, e_n$ and $f_1, \ldots, f_m$ of $E$ and $F$ respectively, so that $E \otimes F$ has a basis $\{v_{ij} = e_i \otimes f_j\}$. The highest-weight vector corresponding to some partition $\lambda$ on the right can be written down by imagining the $v_{ij}$ in a rectangular matrix, overlaying the Young diagram of $\lambda$, and wedging everything together. For instance, the partition $(4, 2, 1)$ would have highest weight vector $$ v_{(4, 2, 1)} := (v_{11} \wedge v_{12} \wedge v_{13} \wedge v_{14}) \wedge (v_{21} \wedge v_{22}) \wedge (v_{31}),$$ which we can see has $GL(E)$-weight $(4, 2, 1)$ and $GL(F)$-weight $(3, 2, 1, 1)$.
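The dimension count implied by the isomorphism can be checked directly: $\binom{nm}{d} = \sum_{|\lambda|=d} \dim L_\lambda(\mathbb{C}^n)\,\dim L_{\lambda'}(\mathbb{C}^m)$, where $\dim L_\lambda(\mathbb{C}^n) = s_\lambda(1^n)$ is given by the hook content formula. A small verification sketch of mine:

```python
from math import comb

def partitions(d, maxpart=None):
    # all partitions of d with parts <= maxpart, as decreasing tuples
    if maxpart is None:
        maxpart = d
    if d == 0:
        yield ()
        return
    for first in range(min(d, maxpart), 0, -1):
        for rest in partitions(d - first, first):
            yield (first,) + rest

def conjugate(lam):
    return tuple(sum(1 for p in lam if p > j)
                 for j in range(lam[0] if lam else 0))

def dim_gl(lam, n):
    # hook content formula: s_lambda(1^n) = prod (n + j - i) / hook(i, j)
    if len(lam) > n:
        return 0
    conj = conjugate(lam)
    num, den = 1, 1
    for i, row in enumerate(lam):
        for j in range(row):
            num *= n + j - i
            den *= (row - j) + (conj[j] - i) - 1   # hook length
    return num // den

n, m, d = 2, 3, 3
total = sum(dim_gl(lam, n) * dim_gl(conjugate(lam), m)
            for lam in partitions(d))
assert total == comb(n * m, d) == 20
```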
How do I transform the left side into the right side of this equation?
There is a more conceptual explanation. The complex conjugation is an automorphism of the field $\mathbb{C}$ of complex numbers (easy to see for any construction of $\mathbb{C}$). It follows that the norm function $N : \mathbb{C} \to \mathbb{R}, z \mapsto z \cdot \overline{z}$ is also multiplicative, i.e. satisfies $N(z z')=N(z) N(z')$. When $z=a+ib$ and $z'=c+id$, this means $(ac-bd)^2+(ad+bc)^2=(a^2+b^2)(c^2+d^2)$. There is also such a formula for sums of four squares, using the quaternions $\mathbb{H}$, and the octonions $\mathbb{O}$ give a formula for sums of eight squares. But there is no such formula for sums of three squares, which corresponds to the fact there is no $3$-dimensional real normed algebra.
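The identity behind the multiplicativity of the norm, $(ac-bd)^2+(ad+bc)^2=(a^2+b^2)(c^2+d^2)$ (Brahmagupta–Fibonacci), is easy to spot-check:

```python
# N(z z') = N(z) N(z') for z = a + ib, z' = c + id
for a, b, c, d in [(1, 2, 3, 4), (5, -7, 2, 9), (0, 3, -4, 1)]:
    assert (a*c - b*d)**2 + (a*d + b*c)**2 == (a*a + b*b) * (c*c + d*d)
```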
Best books for combinatorics for beginner
The Art and Craft of Problem Solving by Paul Zeitz has a nice section on combinatorics with plenty of motivation.
Bounded Area between $y=x^2$, and $y=9$, and $y=k$
Yes, correct as you had found. $$\int\limits_{0}^9\sqrt{y}\, dy = 18$$ $$\dfrac{18}{2}=\int\limits_{k}^9\sqrt{y}\,dy \rightarrow k=\ldots $$
Direct use of conditional probability formula
You don't need induction to apply the multiplication rule $n$ times, i.e., $$ \mathbb{P}\left(\bigcap_{i=1}^n A_i\right) = \mathbb{P}\left(A_1 \cap\left( \bigcap_{i=2}^n A_i \right)\right) = \mathbb{P}\left(A_1\right) \mathbb{P}\left(\bigcap_{i=2}^n A_i | A_1\right), $$ and then apply the rule iteratively $n-1$ times to get $$ \mathbb{P}\left(\bigcap_{i=1}^n A_i\right) = \mathbb{P}\left( A_1 \right) \mathbb{P}\left( A_2 | A_1 \right)\cdots\mathbb{P}\left(A_n|\bigcap_{i=1}^{n-1} A_i\right) . $$
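The iterated rule in action (my example): the probability of drawing three hearts in a row from a standard deck, conditioning step by step, agrees with direct counting.

```python
from fractions import Fraction
from math import comb

# P(A1) P(A2 | A1) P(A3 | A1 ∩ A2) for "i-th card is a heart"
chain = Fraction(13, 52) * Fraction(12, 51) * Fraction(11, 50)
direct = Fraction(comb(13, 3), comb(52, 3))
assert chain == direct == Fraction(11, 850)
```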
||gm is formed by the lines $ax^2+2hxy+by^2=0$ and the lines through $(p,q)$ parallel to them. Find the equation of the diagonal
Useful fact: For hyperbola, the tangency point bisects the line segment of the tangent between the asymptotes. $h^2-ab>0$ Hyperbola passes through $\left( \frac{p}{2}, \frac{q}{2}\right)$ $$4(ax^2+2hxy+by^2)=ap^2+2hpq+bq^2$$ Asymptotes $$ax^2+2hxy+by^2=0$$ Tangent at $(x',y')$ $$4[ax'x+h(y'x+x'y)+by'y]=ap^2+2hpq+bq^2$$ Two diagonals bisect each other at $\left( \frac{p}{2}, \frac{q}{2}\right)$. The tangent at $\left( \frac{p}{2}, \frac{q}{2}\right)$ is the diagonal not passing through the origin, i.e. $$2[apx+h(qx+py)+bqy]=ap^2+2hpq+bq^2$$ $$2(ap+hq)x+2(hp+bq)y=p(ap+hq)+q(hp+bq)$$ $$(ap+hq)(2x-p)+(hp+bq)(2y-q)=0$$ $$\lambda=\phi=2$$ $$\fbox{$\lambda^3+\phi^3=16$}$$ See another answer here for your interest.
Show that $\int_0^1 \prod_{n\geq 1} (1-x^n) \, dx = \frac{4\pi\sqrt{3}\sinh(\frac{\pi}{3}\sqrt{23})}{\sqrt{23}\cosh(\frac{\pi}{2}\sqrt{23})}$
Note that $$\prod_{n=1}^\infty(1-x^n)=\sum_{m=-\infty}^\infty(-1)^mx^{m(3m+1)/2} $$ (Euler's pentagonal number formula), so the integral equals \begin{align} I&=\sum_{m=-\infty}^\infty(-1)^m\frac2{3m^2+m+2}\\ &=\frac{2}{i\sqrt{23}} \sum_{m=-\infty}^\infty(-1)^m\left(\frac{1}{m+(1-i\sqrt{23})/6} -\frac{1}{m+(1+i\sqrt{23})/6}\right), \end{align} using the factorization $3m^2+m+2=3\left(m+\frac{1-i\sqrt{23}}{6}\right)\left(m+\frac{1+i\sqrt{23}}{6}\right)$. Now you can attack this with identities for the digamma function.
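A numerical check (mine) that the pentagonal-number series matches the closed form in the title:

```python
import math

def series(N=100000):
    # I = sum_{m=-N}^{N} (-1)^m * 2 / (3 m^2 + m + 2)
    return sum((-1) ** m * 2.0 / (3 * m * m + m + 2)
               for m in range(-N, N + 1))

def closed_form():
    # 4 pi sqrt(3) sinh(pi sqrt(23)/3) / (sqrt(23) cosh(pi sqrt(23)/2))
    r = math.sqrt(23)
    return (4 * math.pi * math.sqrt(3) * math.sinh(math.pi * r / 3)
            / (r * math.cosh(math.pi * r / 2)))

assert abs(series() - closed_form()) < 1e-6
```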
The shortest distance from the parabola to the straight-line
Hint: Two possible ways to resolve it: as an extremal problem, minimizing the distance function; or by a geometric approach, as the distance between two parallel lines, one of which is the tangent to the parabola parallel to the given line.
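Both approaches on a concrete instance (my choice of parabola $y = x^2$ and line $y = x - 2$; the original problem's data isn't shown):

```python
import math

# approach 1: minimize the distance from (t, t^2) to the line x - y - 2 = 0
dist = lambda t: abs(t - t * t - 2) / math.sqrt(2)
t_best = min((k * 1e-4 for k in range(-20000, 20001)), key=dist)
d_min = dist(t_best)

# approach 2: the tangent to y = x^2 parallel to the line has slope 1,
# i.e. 2t = 1, so the closest point is (1/2, 1/4)
d_tan = abs(0.5 - 0.25 - 2) / math.sqrt(2)

assert abs(d_min - d_tan) < 1e-6
assert abs(d_tan - 7 / (4 * math.sqrt(2))) < 1e-12
```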