How to show that if $x^T x = 0$, then $x=0$ for any column vector $x$?
Let $$x=\left( \begin{matrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{matrix}\ \right). $$ Then $x^Tx = x_1^2+x_2^2+\cdots+x_n^2$. Since for all real $x_i$, $x_i^2 = 0$ if $x_i=0$ and $x_i^2>0$ otherwise, $x^Tx=0$ implies $x_i=0$ for all $i$, i.e., $x=\mathbf 0.$
Orthogonal Transformations and Eigenvalues
Hint: If $T$ is orthogonal, then $\langle Tx,Tx \rangle = \langle x,x \rangle$ for every $x$.
How to solve the inequality $(x-1)/(x-5)<0$
If $\dfrac{x-1}{x-5}<0$, that means $x-1$ and $x-5$ have opposite signs. So either $x-1<0$ and $x-5>0$, or $x-1>0$ and $x-5<0$. Which of those is possible?
Abstract Algebra: Homomorphism, Kernel, Image
$$ \ker(\phi) = \{f_{m,b} : m = 1\} = \{f_b : b\in \mathbb{R}\} $$ where $f_b$ denotes the map $x\mapsto x+b$. Define a function $\ker(\phi) \to \mathbb{R}$ given by $f_b \mapsto b$. Can you check that this is an isomorphism? Also, for any $m \in \mathbb{R}^{\times}$, define $f(x) = mx$; then what is $\phi(f)$? What can you conclude about the image of $\phi$?
Given is relation $R$. What is $R^T, R^2, R^+, h_{\text{sym}}(R)$?
$R^2 = \{(a,b) : \text{there is some } x \text{ with } aRx \text{ and } xRb\}$. For $R^+$, add as few pairs as possible to $R$ to make it transitive. For $h_{\text{sym}}(R)$, add as few pairs as possible to $R$ to make it symmetric.
Tiny arithmetic trigonometry anomaly
That's just rounding issues in your software ... The main point is that, given that all numbers are encoded using a fixed maximum number of bits, every system has a smallest encodable $\epsilon > 0$ (machine epsilon). Therefore all calculations involving numbers close to $\epsilon$ or whose difference is close to $\epsilon$ will generate noticeable rounding errors. Most often you just don't see them because your system is "smart enough" to hide them from you using nice-looking roundings.
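A quick way to see this in practice (a Python sketch; only the standard library is used):

    # The stored doubles for 0.1 and 0.2 are not exact, so their sum
    # differs from the stored double for 0.3 by one unit in the last place.
    print(0.1 + 0.2 == 0.3)        # False
    print(0.1 + 0.2)               # 0.30000000000000004

    import sys
    # Machine epsilon for float64: the gap between 1.0 and the next
    # representable double.
    print(sys.float_info.epsilon)  # 2.220446049250313e-16
    print(1.0 + 1e-17 == 1.0)      # True: differences far below eps vanish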
dimension of $U_{p,q}$ the pseudo unitary group
Let $U(p,q) = \lbrace M \in M_{p+q}(\mathbb{C}) \mid {}^t\overline{M}JM = J \rbrace$. One can compute the Lie algebra to be $\mathfrak{u}(p,q) = \lbrace M \in M_{p+q}(\mathbb{C}) \mid {}^t\overline{M}J + JM = 0 \rbrace$. Now we compute the explicit form as a block matrix $M = \begin{bmatrix} A & B \\ C & D\end{bmatrix}$. One can compute with the above equations that $M \in \mathfrak{u}(p,q)$ if and only if $A + {}^t\overline{A} = 0 = D + {}^t\overline{D}$ and $B = {}^t\overline{C}$. That is, $A \in \mathfrak{u}(p)$, $D \in \mathfrak{u}(q)$ and $B$ is any $p \times q$ matrix with complex entries. Now the only thing left is to add up the (real) dimensions of the spaces of matrices $A$, $B$ and $D$, which are $p^2$, $2pq$ and $q^2$ respectively. Thus $\dim U(p,q) = p^2 + 2pq + q^2 = (p+q)^2$. One can calculate the dimension of $SU(p,q)$ from here, since the determinant-one condition imposes one linear equation (trace zero) on the Lie algebra.
Bi-character for finite, commutative monoids?
There is always the trivial bi-character $\beta(m,n)=1$. There exist finite commutative monoids for which this is the only bi-character. Take for example the monoid with two elements $\{1,0\}$. It must be the case that $\beta(1,1)=1$. Suppose that $\beta(1,0)=x$ and $\beta(0,0)=y$. Then $xy=\beta(1,0)\beta(0,0)=\beta(0,0)=y$ and so $x=1$. Similarly, $\beta(0,1)=1$. Additionally, $y^2=\beta(0,0)\beta(0,0)=\beta(0,0)=y$ so $y=1$.
Is $F=(yx,x+z,yz)$ conservative?
$\operatorname{curl} F = \nabla \times F = \begin{vmatrix} i & j & k \\ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ yx & x+z & yz\end{vmatrix} = \, (z-1,0,1-x) \ne 0$ So no, it is not a conservative vector field. Your working is correct, and the way the vector field is set up, the $x$ and $z$ components will cancel out over any circle parallel to the $XZ$ plane. But that may not be the case if the circle were not parallel to the $XZ$ plane or if it were another closed curve, say an ellipse. Also be careful that the orientation of the curve matches the orientation of the surface. In this case it does not matter, as the integral is zero and the question does not state an orientation for the surface explicitly.
Is $a=\frac{1992!-1}{3449\times 8627}$ a prime number?
No, $\dfrac{1992!-1}{3449\times 8627}$ is divisible by $86544733151681393$, found using GMP-ECM. Moreover, $\dfrac{1992!-1}{3449 \times 8627 \times 86544733151681393}$ is also not prime; it fails the Fermat Test for bases $a \in \{2,3,5,7\}$, checked with OpenPFGW.
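Both claims are easy to replay (a sketch in Python; the large factor is copied from above, and each Fermat test on a roughly 5700-digit number takes a while):

    from math import factorial

    N = factorial(1992) - 1
    f1, f2, f3 = 3449, 8627, 86544733151681393
    assert N % (f1 * f2) == 0        # a is an integer, as in the question
    a = N // (f1 * f2)
    assert a % f3 == 0               # the ECM factor divides a, so a is not prime
    b = a // f3
    # A Fermat test: a prime p satisfies pow(base, p - 1, p) == 1.
    print([pow(base, b - 1, b) == 1 for base in (2, 3, 5, 7)])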
Chinese Remainder Theorem with non-pairwise coprime moduli
$$ x\equiv 1\pmod 2 \\ x\equiv 2\pmod 3 \\ x\equiv 3\pmod 4 \\ x\equiv 4\pmod 5 \\ x\equiv 5\pmod 6 \\ x\equiv 0\pmod 7 $$ $x\equiv 5\pmod 6$ holds if and only if $x\equiv 1\pmod 2$ and $x\equiv 2\pmod 3$, and both of those conditions are already on the list. This leaves us with $$ x\equiv 1\pmod 2 \\ x\equiv 2\pmod 3 \\ x\equiv 3\pmod 4 \\ x\equiv 4\pmod 5 \\ x\equiv 0\pmod 7 $$ $x\equiv 3\pmod 4$ implies that $x\equiv 1\pmod 2$ but not vice-versa, so we keep the first and remove the second. $$ x\equiv 2\pmod 3 \\ x\equiv 3\pmod 4 \\ x\equiv 4\pmod 5 \\ x\equiv 0\pmod 7 $$ The remaining moduli are pairwise coprime, so the solution is unique modulo $3\cdot4\cdot5\cdot7=420$: it is $x \equiv 119 \pmod{420}$, as checked below.
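A brute-force check of the reduced system (a sketch in Python; standard library only):

    sols = [x for x in range(420)
            if x % 3 == 2 and x % 4 == 3 and x % 5 == 4 and x % 7 == 0]
    print(sols)   # [119], i.e. x = 119 (mod 420)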
What's the definition of the inner product in the trace space $H^{1/2}(\partial\Lambda)$?
Let $W_0^{1,2}(\Lambda)$ be the kernel of the trace map $\gamma_0 \colon W^{1,2}(\Lambda) \to L^2(\partial \Lambda)$; this coincides with the closure of $C_c^\infty(\Lambda)$, if only $\partial \Lambda$ is smooth enough. The definition $(2)$ you have given is just the quotient space $W^{1,2}(\Lambda) / W_0^{1,2}(\Lambda)$. As you have already noticed, for general Banach spaces the quotient is again a Banach space. Here $W^{1,2}(\Lambda)$ is a Hilbert space and the quotient can be identified with the orthogonal complement: $$ \left(W_0^{1,2}(\Lambda)\right)^\perp = \bigg\{ u \in W^{1,2}(\Lambda) : u \perp v \text{ for any } v \in W_0^{1,2}(\Lambda) \bigg\}, $$ which of course is a Hilbert space with the inner product inherited from $W^{1,2}(\Lambda)$. This approach is quite abstract and has nothing to do with the fact that elements of $L^2(\partial \Lambda)$ are functions. It seems to me a bit unproductive unless you are able to define the norm (or the inner product) in an intrinsic way. And this is where fractional Sobolev spaces are needed. Note. I changed $H^1$ to $W^{1,2}$ to stress that the same can be done for Sobolev spaces $W^{1,p}$ and their trace spaces $W^{1-\frac{1}{p},p}$ (just not in the Hilbert space setting).
What is the error in my calculation of complex numbers?
The last step should be $xi= \pm 4$, because the square root of a number can be its positive or its negative value: note that $4 \times 4 = 16$ and $(-4) \times (-4) = 16$. However, you can now check your answer against the first line $x=4i$: $x \times i = 4i \times i = 4i^2 = -4$. Thus $xi=-4$.
Definition of the volume element
The parts right before the red underline are the justification for what follows. Consider a fixed orientation $\mu$ on $V$, and a given inner product $T$. We may then speak of orthonormal oriented bases. So let $\{v_1, \ldots, v_n\}$ be an orthonormal basis with orientation $\mu$. Then consider some alternating $n$-linear form $\omega$ on $V$ which does not vanish on this basis. Then, normalizing if necessary, we may assume $\omega(v_1, \ldots, v_n) = 1$. From page 82 of Spivak, if $A$ is the change of basis matrix from a basis $\{v_1, \ldots, v_n\}$ to a basis $\{w_1 = Av_1, \ldots, w_n = Av_n\}$, then $$ \omega(w_1, \ldots, w_n) = \det A \cdot \omega(v_1, \ldots, v_n). $$ Thus, if we single out a $\mu$-oriented orthonormal basis $\{v_i\}$ and $\{w_i\}$ is another $\mu$-oriented orthonormal basis, the change of basis matrix $A$ is orthogonal with positive determinant, hence $\det A = +1$, so $\omega(w_1,\ldots,w_n) = \omega(v_1,\ldots,v_n)$. Thus, $\omega$ is 1 on every $\mu$-oriented orthonormal basis. If $\eta$ is another alternating $n$-linear form such that $\eta(v_1,\ldots,v_n) = 1$, then $\omega = \eta$, since the space of alternating $n$-linear forms on an $n$-dimensional space is one-dimensional. The determinant is an example of an alternating $n$-linear form which satisfies this uniqueness criterion, since $\det(e_1, \ldots, e_n) = 1$.
integral proof equality
Let $c$ be a point in the interval where $f(c)$ is maximal (it exists since $f$ is continuous on a closed interval). Then $\int_a^b f(x)g(x)\,dx \le \int_a^b f(c)g(x)\,dx=f(c)\int_a^b g(x)\,dx$, since $g(x)\ge 0$.
Equilateral polygon inscribed within an ellipse
This is not a complete answer, but too long for a comment. Before investigating the general problem, let's look at some simple specific example. Consider for example a pentagon inscribed into the ellipse $x^2+4y^2=1$. Pick the points as $$A=(1,0)\quad B=(x_B,y_B)\quad C=(x_C,y_C)\quad D=(x_C,-y_C)\quad E=(x_B,-y_B)$$ Then solve for the variables in question and the resulting edge length $s$: $$x_B\approx0.304\quad y_B\approx0.476\quad x_C\approx-0.537\quad y_C\approx0.422\quad s\approx0.843$$ But approximations are unsatisfactory here, so let's look at the underlying minimal polynomials of these algebraic numbers, which I computed in Sage like this:

    PR.<xB,yB,xC,yC,s> = QQ[]   # declare multivariate polynomial ring
    vs = ideal([                # formulate conditions for these variables
        xB^2 + 4*yB^2 - 1,
        xC^2 + 4*yC^2 - 1,                  # B and C on ellipse
        (xB - 1)^2 + yB^2 - s^2,            # |AB| = s
        (xB - xC)^2 + (yB - yC)^2 - s^2,    # |BC| = s
        2*yC - s                            # |CD| = s
    ]).variety(AA)              # compute all real algebraic solutions
    v = [i for i in vs
         if i[s] > 0 and i[yB] > 0 and i[yC] > 0 and i[xB] > i[xC]][0]
    for g in PR.gens():
        print(v[g].minpoly().numerator()(g))

\begin{align*} 31293\,x_B^4 - 115344\,x_B^3 + 149974\,x_B^2 - 49648\,x_B + 4205 &= 0 \\ 979251849\,y_B^8 + 239544\,y_B^6 + 338986960\,y_B^4 - 211934720\,y_B^2 + 28037120 &= 0 \\ 10431\,x_C^4 - 20922\,x_C^3 + 3400\,x_C^2 + 8762\,x_C - 391 &= 0 \\ 108805761\,y_C^8 - 17105940\,y_C^6 - 4845200\,y_C^4 + 70720\,y_C^2 + 128000 &= 0 \\ 108805761\,s^8 - 68423760\,s^6 - 77523200\,s^4 + 4526080\,s^2 + 32768000 &= 0 \end{align*} If you wanted to, you could turn these into radical expressions, but the results I got wouldn't fit the screen width here. So my message is that even for a fairly simple example, the results look very complicated. The degrees of the minimal polynomials are already pretty large, and the coefficients even more so. This doesn't get easier with more vertices or more complicated ellipses. Also note that the Galois group of the polynomials for the $x$ coordinates is $S(4)$, so the coordinate is not something you can get from repeated quadratic extensions and therefore not a constructible number. For the $y$ coordinates $S(4)$ is included in the derived series as well. This confirms the assumption Jack D'Aurizio stated in a comment that a straightedge and compass construction of such a polygon would likely be impossible. You might wonder whether using Cartesian coordinates instead of the polar ones you suggested is to blame for the ugliness of these results. But the solutions are algebraic in nature, so adding a bunch of trigonometric functions into the mix will likely only make things even more complicated. This is no proof, of course, just a gut feeling that a nice description seems pretty unlikely to me.
What are the set of vectors that satisfies this quadratic form equality?
Let $x=\alpha b + y$, where $y \bot b$. Then $x^*bb^*x = |\alpha|^2 \|b\|^4$. Hence if $c \ge 0$, then all solutions of $x^*bb^*x =c$ are given by $e^{i\theta}{\sqrt{c} \over \|b\|^2}b + y $, where $y \bot b$ and $\theta$ is arbitrary. If $c <0$, there are no solutions.
Residue fields at points on $\mathbb{A}^n$
Let's take an example. Consider an elliptic curve, for argument's sake $$y^2=x^3+x$$ over $k=\Bbb C$. The residue field at its generic point is the fraction field of $$\frac{\Bbb C[x,y]}{(y^2-x^3-x)}.$$ This is $\Bbb C(x)[y]$ where $y$ is a square root of $x^3+x$, so a quadratic extension of $\Bbb C(x)$ and also it is $\Bbb C(y)[x]$ where $x$ is a root of the equation $x^3+x-y^2=0$. Thus the function field is a degree $3$ extension of $\Bbb C(y)$. This behaviour is quite typical. If you like consider the rational curve $y^2=x^3$: its function field is $\Bbb C(t)$ where $t=y/x$, $y=t^3$ and $x=t^2$. Then $|\Bbb C(t):\Bbb C(x)|=|\Bbb C(t):\Bbb C(t^2)|=2$ and $|\Bbb C(t):\Bbb C(y)|=|\Bbb C(t):\Bbb C(t^3)|=3$.
Use of "Coefficient of variation" to investigate cause and effect correlation?
I guess you want to quantify the association between variables $V$ and $P.$ By eight 'reactors', I suppose you mean you have eight values of $V$ and eight matching values of $P.$ First, I would make a plot of the eight $V_i$ against their corresponding $P_i.$ If the relationship is 'mainly' linear, then you should use the coefficient of correlation $r$ to assess the association. By "linear" I don't mean the points all need to lie exactly on a straight line; I mean that a straight line should 'fit' the data better than some kind of simple curve. The coefficient of correlation $r$ measures linear association. If $r > 0,$ then $V$ increases as $P$ increases. If $r < 0,$ then $V$ decreases as $P$ increases. If $r \approx 0,$ then there is no linear association. Always, $-1 \le r \le 1,$ where $r = \pm 1$ indicates perfect fit of the points to a line. There is a statistical test to check whether the data are consistent with $r = 0.$ If the relationship is steadily increasing or decreasing, but not mainly linear, then Spearman's rank correlation may be more useful. Here is a plot to illustrate these ideas. In the nearly linear plot at left $r = .995.$ In the curved plot at right the coefficient of correlation is $r = .964,$ but Spearman's rank coefficient is $r_s = 1.$ Regression methods might allow you to find the best equation for expressing $V$ in terms of $P$ (or $P$ in terms of $V$), or to predict the value of one variable given the value of the other. These are only vague statements for orientation. Without seeing your data and knowing what your objective is, it is very difficult to give specific advice. If you have data, please post them with $V$ on one line, separated by commas, and corresponding values of $P$ (same order) on another line, separated by commas. Also say what you mean by a 'significant' finding. Then maybe I or someone else on this site can give you further guidance.
Is it possible to have a norm in a vector space and a vector whose norm is smaller than the absolute value of one of its coordinates?
Bessel's inequality gives you something of that flavor. More or less it says that if you have an orthonormal basis (finite or countable) of a Hilbert space, then the sum of the squares of the moduli of the coefficients is less than or equal to the square of the norm of the element, i.e. $$\sum_{i=1}^{\infty} | \langle v, e_i \rangle |^2=\sum_{i=1}^{\infty} |a_i|^2 \leq \|v\|^2$$
Proving Bishop's Theorem using Krein-Milman Theorem
As for the first part of your question, recall that by the Riesz theorem, for a compact $S$ the space $C(S)^*$ is isometric to the space of regular complex-valued Borel measures on $S$, denoted by $M(S)$. Moreover, the duality between $C(S)$ and $M(S)$ is given by $$ \langle\cdot,\cdot\rangle:M(S)\times C(S)\to\mathbb{C}:(\mu,g)\mapsto \int_S gd\mu $$ The map $$ \mathrm{ev}_g:M(S)\to\mathbb{C}:\mu\mapsto \int_S gd\mu $$ is weak-$^*$ continuous, because it is continuous with respect to the seminorm $$ \Vert \mu\Vert_g=\left|\int_S gd\mu\right| $$ Indeed $$ |\mathrm{ev}_g(\mu)|=\left|\int_S gd\mu\right|\leq\Vert\mu\Vert_g $$ Now we proceed to the second part. As was proved earlier on the same page, $K$ is a weak-$^*$ compact convex subset of $C(S)^*=M(S)$. By the Krein–Milman theorem $K=\mathrm{cl}_{w^*}(\mathrm{conv}(\mathrm{extr}(K)))$. On the same page it was proved that for all $\mu\in\mathrm{extr}(K)$ we have $$ \int_S gd\mu=0 \tag{1} $$ Obviously this equality holds for all convex combinations of such points, so $(1)$ is valid for all $\mu\in\mathrm{conv}(\mathrm{extr}(K))$. In other words $$ \mathrm{conv}(\mathrm{extr}(K))\subset\mathrm{Ker}(\mathrm{ev}_g) $$ Since $\mathrm{ev}_g$ is weak-$^*$ continuous, $\mathrm{Ker}(\mathrm{ev}_g)$ is weak-$^*$ closed. Hence $$ K=\mathrm{cl}_{w^*}(\mathrm{conv}(\mathrm{extr}(K)))\subset \mathrm{cl}_{w^*}(\mathrm{Ker}(\mathrm{ev}_g))=\mathrm{Ker}(\mathrm{ev}_g) $$ This means that for all $\mu\in K$ equality $(1)$ holds.
Perceptron and gradient descent
Hint: check subdifferentiability in the context of convex analysis.
How to find length of vector in a particular direction?
You can find the angle $t$ between your $m$ and the unit vector $d$; it is given by $\cos t = \dfrac{m\cdot d}{|m|}$. The component of $m$ in the direction of $d$ then has length $|m|\cos t$, and the component in the orthogonal direction has length $|m|\sin t$.
Contraction Mapping and fixed point
Notice that there exists $ \epsilon >0$ such that $|f'(x_0)|<1-\epsilon$; for that $\epsilon > 0$ there exists $\delta_1 >0$ such that $|f'(x)-f'(x_0)|< \epsilon$ for all $x \in (x_0-\delta_1, x_0+\delta_1)$. Then $|f'(x)|\leq |f'(x_0)|+|f'(x)-f'(x_0)|<|f'(x_0)|+\epsilon~~~(<1-\epsilon+\epsilon=1)$ for all $x \in (x_0-\delta_1, x_0+\delta_1)$. Let $\delta=\frac{1}{2}\delta_1$; for any $x_1<x_2$ in $[x_0-\delta, x_0+\delta]$, by the MVT on $[x_1,x_2]$, $|f(x_1)-f(x_2)|=|f'(\xi)|\cdot|x_1-x_2|\leq(|f'(x_0)|+\epsilon)|x_1-x_2|$ since $\xi \in (x_0-\delta_1, x_0+\delta_1)$. Thus $f$ is a contraction on $[x_0-\delta, x_0+\delta]$.
How to calculate $\delta(x^4-\alpha^4)$?
If $\delta$ is the probability measure supported on the solution set to $x^4-a^4=0,$ then yes, the result has this support. The constant is $1/|a^3|$ and not $1/|a|$, though.
Revenue Equivalence in Auction Theory: how does an English auction generate the same revenue as First Price Sealed Bid?
At first sight, if one person bids 10\$, another 20\$, and a third one 30\$, then it seems that a first-price auction would result in 30\$ and a second-price auction in 20\$. However, under different rules - that are of course known in advance - the participants would make different bids. E.g., under a second-price auction, you tend to bid more than what you are willing to pay because that's not what you are going to pay. Then again, you won't bid that much more and risk that the second price ends up above your original valuation. This may explain how the "obvious" difference is not that obvious after all. But can we expect in this scenario that the second-highest price is much higher than what the second bidder estimates? Shouldn't they - by the same argument - bid more than 20, but only slightly more, perhaps 22\$ (so still way below 30\$)? The key part in the full theorem is also that it is about "privately-known signal[s] independently drawn from a common strictly-increasing, atomless distribution" (emphasis mine). So it is not the case that one participant inherently values the object more than the others. I didn't look up the proof though, but under these circumstances, it doesn't look so infeasible any more.
limit of an integral depending on n
Note that the p-norm of a 2D vector is defined as $$ \left\| {\bf x} \right\|_{\,p} = \left( {\left|x_{\,1}\right|^{\,p} + \left|x_{\,2}\right|^{\,p} } \right)^{\,1/p} $$ and it is known that $$ \mathop {\lim }\limits_{p\, \to \,\infty } \left\| {\bf x} \right\|_{\,p} = \left\| {\bf x} \right\|_{\,\infty } = \max \left\{ {\left| {x_{\,1} } \right|,\left| {x_{\,2} } \right|} \right\} $$ Therefore $$ \eqalign{ & \mathop {\lim }\limits_{n\, \to \,\infty } \,\;\int_{x = 0}^{\,4} {\root n \of {x^{\,n} + \left( {4 - x} \right)^{\,n} } dx} = \cr & = \,\;\int_{x = 0}^{\,4} {\left( {\mathop {\lim }\limits_{n\, \to \,\infty } \root n \of {x^{\,n} + \left( {4 - x} \right)^{\,n} } } \right)dx} = \cr & = \,\;\int_{x = 0}^{\,4} {\left( {\max \left\{ {\left| x \right|,\left| {4 - x} \right|} \right\}} \right)dx} = \cr & = \,\;2\int_{x = 0}^{\,2} {\left( {4 - x} \right)dx} = \,\;2\int_{x = 2}^{\,4} {x\,dx} = 12 \cr} $$ (Exchanging the limit and the integral is justified, e.g., by dominated convergence: on $[0,4]$ the integrand is bounded by $\left(2\cdot 4^{\,n}\right)^{1/n}\le 8$.)
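A numerical sanity check of both the convergence and the value $12$ (a sketch in Python with NumPy; the grid size and the sampled values of $n$ are arbitrary choices):

    import numpy as np

    m = 200000
    x = (np.arange(m) + 0.5) * (4.0 / m)      # midpoints of a grid on [0, 4]
    for n in (2, 10, 100):                    # much larger n would overflow 4**n
        y = (x**n + (4 - x)**n) ** (1.0 / n)
        print(n, y.mean() * 4)                # midpoint-rule estimate
    print('max', np.maximum(x, 4 - x).mean() * 4)   # the limit integrand: ~12.0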
Calculate probabilities from distribution function
You have to include the possibility that $X=1$, which you don't if what you subtract is $F(1)=P(X\leq1)$. It doesn't make a difference for continuous distribution functions, but this time it does.
Exchange limits of definite integrals of the form $\int_t^{T} f(u) \int_0^{u} g(s) ds du$
You can. The region of integration is the trapezoid with base the segment $[t,T]$ on the $u$-axis, vertical bases and the other leg on the diagonal $u=s$. To change the order of integration, you notice that the range of $s$ is $[0,T]$. The limits for $u$ given $s$ depend on whether $s\in[0,t]$ or $s\in[t,T]$. If you draw the trapezoidal region, you should easily see that you get $$ \int\limits_0^tg(s)[\int\limits_t^Tf(u)du]\,ds+\int\limits_t^Tg(s)[\int\limits_s^Tf(u)du]\,ds $$
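A symbolic spot-check with SymPy (the integrands $f$ and $g$ here are arbitrary test choices):

    import sympy as sp

    u, s, t, T = sp.symbols('u s t T', positive=True)
    f, g = sp.exp(-u), s**2                    # test functions f(u) and g(s)

    lhs = sp.integrate(f * sp.integrate(g, (s, 0, u)), (u, t, T))
    rhs = (sp.integrate(g * sp.integrate(f, (u, t, T)), (s, 0, t)) +
           sp.integrate(g * sp.integrate(f, (u, s, T)), (s, t, T)))
    print(sp.simplify(lhs - rhs))              # 0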
Algebraic proof that $\binom{n+k-1}{k} = \sum\limits_{i=0}^k \binom{(n-1)+(k-i)-1}{k-i}$
We have $$ \eqalign{ & \sum\limits_{0\, \le \,i\, \le \,k} {\left( \matrix{ n - 2 + k - i \cr k - i \cr} \right)} = \cr & = \sum\limits_{0\, \le \,j\, \le \,k} {\left( \matrix{ n - 2 + j \cr j \cr} \right)} = \quad \quad (1) \cr & = \sum\limits_{\left( {0\, \le } \right)\,j\,\left( { \le \,k} \right)} {\left( \matrix{ k - j \cr k - j \cr} \right)\left( \matrix{ n - 2 + j \cr j \cr} \right)} = \quad \quad (2) \cr & = \sum\limits_{\left( {0\, \le } \right)\,j\,\left( { \le \,k} \right)} {\left( { - 1} \right)^{\,k - j} \left( \matrix{ - 1 \cr k - j \cr} \right)\left( { - 1} \right)^{\,j} \left( \matrix{ - n + 1 \cr j \cr} \right)} = \quad \quad (3)\cr & = \left( { - 1} \right)^{\,k} \sum\limits_{\left( {0\, \le } \right)\,j\,\left( { \le \,k} \right)} {\left( \matrix{ - 1 \cr k - j \cr} \right)\left( \matrix{ - n + 1 \cr j \cr} \right)} = \quad \quad (4) \cr & = \left( { - 1} \right)^{\,k} \left( \matrix{ - n \cr k \cr} \right) = \quad \quad (5) \cr & = \left( \matrix{ n + k - 1 \cr k \cr} \right) \quad \quad (6) \cr} $$ where: (1) change of index (2) replacing sum bounds with a binomial (3) upper negation (4) factoring out $(-1)^{\,k}=(-1)^{\,k-j}(-1)^{\,j}$ (5) Vandermonde convolution (6) upper negation. Note to (2): The binomial $\binom{k-j}{k-j}$ equals $1$ for $j \le k$ and is null for $k<j$. Therefore we can use it to replace the upper bound for $j$ in the sum. On the other side, the second binomial intrinsically ensures the lower bound $0 \le j$. Therefore we can leave the index free to take all values: that's why I indicated the bounds in brackets. That's a "trick" useful many times in manipulating binomial sums, and I am indebted to the renowned "Concrete Mathematics" for teaching it.
Variance of VAR(2) time series
I am not completely sure about this. $Z_t=ALZ_t+BL^2Z_t+CE_t$, so $(1-AL-BL^2)Z_t=CE_t$ and $Z_t=(1-AL-BL^2)^{-1}CE_t$. Write $(1-AL-BL^2)^{-1}$ as an infinite geometric progression: $Z_t=CE_t+(AL+BL^2)CE_t+(AL+BL^2)^2CE_t+...$ Now take the variance of both sides. Lag operators don't matter because $Var(E_t)=Var(E_{t-j})$, so you can just get rid of them. $Var(Z_t)=CC^T+(A+B)CC^T(A+B)^T+...$ Wrap it back: $Var(Z_t)=(I-(A+B)(A+B)^T)^{-1}CC^T$
Complex Borel Measure and Bounded Variation Functions
Yes, this $F$ is NBV. The $\delta$'s are indeed Dirac measures; however, as such, they take measurable sets as arguments, rather than function values. $$\delta_a(A) := \begin{cases}1, &A \ni a\\ 0, & \text{otherwise} \end{cases} \qquad (A \in \mathcal{B}\mathbb{R})$$ $\mu_F$ is now the Lebesgue-Stieltjes measure based on the NBV function $F$. In order to show this, we need to show that $$\mu_F((c,d]) = \delta_a((c,d]) - \delta_b((c,d]) = F(d)-F(c) \qquad (c, d \in \mathbb{R}, c < d)$$ It is probably best to consider different cases: (i) $c < d < a < b \Rightarrow \mu_F((c,d]) = 0 - 0 = F(d)-F(c)$ (ii) $c < a \leq d < b \Rightarrow \mu_F((c,d]) = 1 - 0 = F(d)-F(c)$ (iii) $a \leq c < d < b \Rightarrow \mu_F((c,d]) = 1 - 1 = F(d)-F(c)$ (iv) $a \leq c < b \leq d \Rightarrow \mu_F((c,d]) = 0 - 1 = F(d)-F(c)$ (v) $a < b \leq c < d \Rightarrow \mu_F((c,d]) = 0 - 0 = F(d)-F(c)$ As the set of intervals $\{(c,d] : c, d \in \mathbb{R}\}$ is a semiring, we can now apply Carathéodory's extension theorem and conclude that $$\mu_F(A) = \delta_a(A) - \delta_b(A) \qquad (A \in \mathcal{B}\mathbb{R}).$$ Maybe one hint to give you an intuition, also towards the second question: if you have an NBV function, you can compute something like a generalised "derivative". "Derivative" refers to the measure-theoretic fundamental theorem of calculus. Like in the case above, this "derivative" may be represented by a measure rather than a function - especially in cases like in 1., where $F$ is clearly not differentiable. It actually is a function if $F$ is absolutely continuous (with respect to the Lebesgue measure). The latter is the case here. In particular, we have $$\mu_F(A) = \int_{A} F'(x)\,\mathrm{d}x \qquad (A \in \mathcal{B}\mathbb{R})$$ Also, $\arctan'(x) = 1/(1+x^2)$, so \begin{align*}F(x) &:= \begin{cases}\arctan(x), &x > 0\\ 0, & \text{otherwise,} \end{cases} \qquad (x \in \mathbb{R})\\ F'(x) &= \begin{cases}1/(x^2 +1), &x > 0\\ 0, & \text{otherwise} \end{cases} \qquad (x \in \mathbb{R}) \end{align*}
A formula for the roots of a solvable polynomial
Lagrange and Vandermonde (and others) knew how to treat various cases before "Galois theory", by "Lagrange resolvents". Some of my algebra notes show how this idea recovers the formula for the solution of cubics: http://www.math.umn.edu/~garrett/m/algebra/notes/23.pdf By the late 19th century such ideas were well known, and in many cases of solvable Galois groups (though people were not quite able to say things so simply) this device produces solutions in radicals. Expressing roots of unity in radicals is another example where Lagrange resolvents give expressions in radicals. This quickly becomes computation-heavy, however. A slightly more sophisticated implementation of "Lagrange resolvents" that does bigger cyclotomic examples before getting bogged down is worked out at http://www.math.umn.edu/~garrett/m/v/kummer_eis.pdf In that case, some additional ideas of algebraic number theory are used. van der Waerden's "Algebra" (based on notes from E. Noether) was the place I saw Lagrange resolvents, and it was a revelation. Indeed, many treatments of "Galois theory" give no inkling of how to do any computations.
Minimum polynomial transformation
$T$ is rotation by $120$ degrees, so $T^3=T\circ T\circ T=\mathrm{Identity}$. Determine $T^7$ and $T^4$. What is $P(T)=T^7-T^4+T^3$? Then what is $P(T)(x,y)$?
For which real values of $p$ is the following series convergent
Hint: Use equivalents: $$\Bigl(n^{\tfrac1n}-1\Bigr)^p=\Bigl(\mathrm e^{\tfrac{\ln n}n}-1\Bigr)^p\sim_\infty\Bigl(\frac{\ln n}n\Bigr)^p,$$ so the general term is equivalent to the general term of a Bertrand series: $$\frac1{n^{2-p}(\ln n)^{1+p}}.$$
Show that $\sum\nolimits_{d|n} \frac{1}{d} = \frac{\sigma (n)}{n}$ for every positive integer $n$.
$\displaystyle n\sum_{d|n} \frac{1}{d} = \sum_{d|n} \frac{n}{d} = \sum_{d|n} {d} = \sigma (n) $ or $\displaystyle \frac{\sigma (n)}{n} = \frac{1}{n} \sum_{d|n} {d} = \sum_{d|n} \frac{d}{n} = \sum_{d|n} \frac{1}{d} $
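A quick exact-arithmetic check (a Python sketch using the standard fractions module; the sampled values of $n$ are arbitrary):

    from fractions import Fraction

    def divisors(n):
        return [d for d in range(1, n + 1) if n % d == 0]

    for n in (1, 6, 12, 28, 360):
        assert sum(Fraction(1, d) for d in divisors(n)) \
               == Fraction(sum(divisors(n)), n)   # sigma(n) / n
    print('identity holds for the sampled n')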
$L^p$ Spaces, Young's Theorem, Convolutions, and Minkowski's Inequality
$$\Big(\int |(f*g)(x)|^p\ dx\Big)^\frac{1}{p} = \Big(\int \Big|\int f(x - y)g(y)dy\Big|^p\ dx\Big)^\frac{1}{p} \le \int \Big(\int |f(x - y)g(y)|^pdx\Big)^{\frac{1}{p}}dy = \int \Big(\int |f(x - y)|^p dx\Big)^{\frac{1}{p}}|g(y)|dy = \|f\|_p\|g\|_1.$$ Here the only inequality sign is given exactly by the Minkowski inequality (just to emphasize that there is nothing hidden somewhere :D )
Geometry. How to solve this problem?
Let $O_C$ be the center of the circumcircle of $APX$ and $O_B$ be the center of the circumcircle of $AQY$. $O_C$ has to lie on three different lines: the perpendicular to $CA$ through $A$, the angle bisector of $\widehat{ACB}$ and the perpendicular bisector of $AP$. Similarly, $O_B$ has to lie on the perpendicular to $BA$ through $A$, on the angle bisector of $\widehat{CBA}$ and on the perpendicular bisector of $AQ$. Since $O_C$ lies on an angle bisector and on a perpendicular, $O_C A$ is given by $b\tan\frac{C}{2}$. Since $O_C$ lies on the perpendicular bisector of $AP$, we also have $O_C A\sin A=\frac{4}{9}c$. A similar argument for $O_B$ leads to $O_B A = c\tan\frac{B}{2}$ and $O_B A\sin A=\frac{15}{32}b$. Summarizing: $$\left\{\begin{array}{rcl}b \tan\frac{C}{2}&=&\frac{4c}{9\sin A}\\c\tan\frac{B}{2}&=&\frac{15b}{32\sin A} \end{array}\right. $$ that can be written as $$\left\{\begin{array}{rcl}\frac{br}{a+b-c}&=&\frac{4c R}{9a}\\ \frac{cr}{a+c-b}&=&\frac{15b R}{32 a} \end{array}\right. $$ implying $$ c^2=18\cdot\frac{\Delta^2}{(a+b)^2-c^2},\qquad b^2=\frac{256}{15}\cdot\frac{\Delta^2}{(a+c)^2-b^2} $$ and $$ c^2=\frac{9}{8}(a-b+c)(-a+b+c),\qquad b^2=\frac{16}{15}(-a+b+c)(a+b-c) $$ due to Heron's formula. This implies $$ (a,b,c)=\lambda\cdot (13,8,15) $$ and $$ \cos\widehat{BAC} = \frac{b^2+c^2-a^2}{2bc} = \color{red}{\frac{1}{2}}. $$ $\widehat{B}$ is not a right angle but it is quite close to a right angle, since $8^2+13^2=15^2+8$.
The join of two convex sets is convex?
Let $p_1$ and $p_2$ belong to the join of $A$ and $B.$ $$\text {Let } q=x p_1+(1-x)p_2 \text { with } x\in [0,1].$$ There exist $a_1\in A$ and $b_1\in B$ and $r_1\in [0,1]$ with $$p_1=r_1 a_1+(1-r_1)b_1.$$ There exist $a_2\in A$ and $b_2\in B$ and $r_2\in [0,1]$ with $$p_2=r_2 a_2+(1-r_2)b_2.$$ Since $A$ and $B$ are convex, we have $$A\supset \{c a_1+(1-c)a_2 :c\in [0,1]\}$$ $$\text {and }\quad B\supset \{db_1+(1-d)b_2:d\in [0,1]\}.$$ So if we can find $c,d,e\in [0,1]$ such that $$\bullet \;q=e[c a_1+(1-c)a_2]+(1-e)[d b_1+(1-d)b_2],$$ then $q$ belongs to the join of $A$ and $B.$ $$\text {Now }\; q=x p_1+(1-x)p_2=x[r_1a_1+(1-r_1)b_1]+(1-x)[r_2a_2+(1-r_2)b_2].$$ I will leave it to you to show that there do exist $c,d,e\in [0,1]$ such that $$x r_1=e c.$$ $$(1-x)r_2=e(1-c).$$ $$x(1-r_1)=(1-e)d.$$ $$(1-x)(1-r_2)=(1-e)(1-d).$$ Then the equation $\bullet$ is satisfied.
$R$ a local ring, also a PID. $I,J$ ideals from $R$. Show that $I \subseteq J$ or $J \subseteq I$
If it is not a field, it has exactly one prime element $p$ (up to associates, of course) - explain why. It is a UFD, and every nonzero element has a factorization as a power of $p$ times a unit. From this, observe that the nontrivial ideals are just $(p^i)$ for integers $i>0$.
Literature recommendation on PDE
Partial differential equations by Evans is kind of a canonical answer to such questions.
If $f$ is analytic and $f^{(k)}(z_0) \neq 0$ for some k, then $f(z) \neq 0$ for all $z \neq z_0$ in a disk centered around $z_0$
Let $k$ be the smallest index for which $f^{(k)}(z_0)\neq 0$; then taking the Taylor expansion for $f$ centered at $z_0$ we have \begin{align*} f(z)&=f^{(k)}(z_0)\frac{(z-z_0)^{k}}{k!}\left(1+\frac{k!}{f^{(k)}(z_0)}\sum_{i\geq k+1} f^{(i)}(z_0)\frac{(z-z_0)^{i-k}}{i!}\right) \\ &=f^{(k)}(z_0)\frac{(z-z_0)^{k}}{k!}\,(1+g(z)) \end{align*} Now, $g$ is continuous (indeed analytic) with $g(z_0)=0$, so there is a disk around $z_0$ on which $|g(z)|<1$, hence $1+g(z)\neq 0$ there. Since $f^{(k)}(z_0)\neq 0$ and $(z-z_0)^k\neq 0$ for $z\neq z_0$, we get $f(z)\neq 0$ for all $z\neq z_0$ in that disk.
Assumptions necessary to justify the method of proof by contradiction.
The smallest assumption that you need to allow proof by contradiction is Peirce's law: $((P→Q)→P)→P$. From Peirce's law you can derive the law of the excluded middle, as well as do RAA proofs.
Matrix algebra: The "magical inverse" trick
There is a way to get from (1)-(3) to (5)-(7) without the trick of assembling the matrices $u,v,w,x$ into a block matrix. The special ingredient is the identity $$ (1+rs)^{-1} = 1 - r(1+sr)^{-1}s, \tag{*} $$ which is easy to verify. Admittedly, this too may seem "magical" if you're unfamiliar with it. At least, though, it's a universal kind of magic, in the sense that it's an identity in all rings in general, and not just for matrices. (For a reference about this identity, see mathoverflow.net/questions/31595, where it appears with a tiny difference in sign.) Using this identity, we will derive equation (5) from (1)-(3). First, note that $u$ and $x$ are invertible. (This follows from (1) and (2), although I'm glossing over the details.) Now write (3) as $w^\dagger (x^{-1})^\dagger = u^{-1} v$. Multiply that equation by its conjugate transpose $x^{-1} w = v^\dagger (u^{-1})^\dagger$ to get $$ w^\dagger (x^{-1})^\dagger x^{-1} w = u^{-1} v v^\dagger (u^{-1})^\dagger. $$ The right-hand side, applying (1), is \begin{align} u^{-1} v v^\dagger (u^{-1})^\dagger &= u^{-1} (u u^\dagger - I) (u^{-1})^\dagger \\ &= I - u^{-1} (u^{-1})^\dagger \\ &= I - (u^\dagger u)^{-1}, \end{align} while the left-hand side, applying (2) and the identity (*), is \begin{align} w^\dagger (x^{-1})^\dagger x^{-1} w &= w^\dagger (x x^\dagger)^{-1} w \\ &= w^\dagger (I + w w^\dagger)^{-1} w \\ &= I - (I + w^\dagger w)^{-1}. \end{align} This shows that $u^\dagger u = I + w^\dagger w$ (equation (5)). Equation (6) can be derived in the same way. Equation (7) is easy to derive once (1)-(6) are all available: \begin{align} u^\dagger v &= u^\dagger u u^{-1} v \\ &= (I + w^\dagger w) w^\dagger (x^{-1})^\dagger \\ &= w^\dagger (I + w w^\dagger) (x^{-1})^\dagger \\ &= w^\dagger x x^\dagger (x^{-1})^\dagger \\ &= w^\dagger x. \end{align} This way, we've proven (5)-(7) from (1)-(3), without going up to the higher dimensional "big picture" of the generalized unitary matrix $B$.
Which series it would be?
It is not an arithmetic series. For an arithmetic series, the difference between successive terms is constant, but $8-1 \neq 27-8$. It is also not a geometric series, for which the ratio of successive terms is constant: $\frac 81 \neq \frac {27}8$. What other choices have you got?
Show that the "calculation rule" is wrong for the given scalar product
You're making this much harder than necessary. To disprove a rule, you need merely show one counterexample. Here is one: $$\vec{u}= \begin{pmatrix} 1\\0\\0 \end{pmatrix}, \quad \vec{v}= \begin{pmatrix} 1\\0\\0 \end{pmatrix}, \quad \vec{w} = \begin{pmatrix} 0\\1\\0 \end{pmatrix}.$$
Logarithms of Negative Numbers
In the complex numbers, you can take the logarithm of negative numbers as you are thinking. Unfortunately, the answer is not unique because of the periodicity of $\sin$ and $\cos$. From $e^{i\theta}=\cos \theta + i\sin \theta$ you also get $e^{i(\theta+2\pi)}=\cos \theta + i\sin \theta$, so you can add any integral multiple of $2\pi i$ to the log and get another value. You can restrict the imaginary part of the log function to an interval of length $2 \pi$ (say to $[-\pi,\pi)$) to make it single-valued, similar to what we do with $\arcsin$ and $\arccos$.
How can Complex numbers be written in this way?
It is true for any complex number $z$ that $z - z^* = 2i\operatorname{Im}(z)$. This is easy to confirm, both geometrically in the plane, and algebraically by setting $z = x + yi$: then $z - z^* = (x+yi)-(x-yi) = 2yi$.
Proving the Division Algorithm using induction
Uniqueness doesn't need induction. Suppose $m=qn+r=q'n+r'$, where $0\le r,r'\le n-1$. It's not restrictive to assume $r\le r'$, so we have $0\le r'-r\le n-1$; but $r'-r=n(q-q')$, so $$ 0\le n(q-q')<n $$ As $n>0$, this implies $$ 0\le q-q'<1 $$ so $q=q'$ and therefore $r'=r$. The proof of existence can be conveniently split into the cases $m\ge0$ and $m<0$. The first case is done by induction. The case $m=0$ is obvious: take $q=0$ and $r=0$. Assume you know $m=qn+r$, with $0\le r<n$; then $$ m+1=qn+r+1 $$ If $r+1=n$, then $m+1=(q+1)n+0$; otherwise $r+1<n$ (using the hypothesis that $r\le n-1$, so $r+1\le n$) and the assertion is true. Now let's prove the case $m<0$. From the first case we get $$ -m=qn+r $$ with $0\le r<n$. If $r=0$, then $m=(-q)n+0$ and we're done. Otherwise $0<r<n$ and $$ m=(-q)n-r=(-q)n-n+n-r=(-q-1)n+(n-r) $$ where $0<n-r<n$ and we're done.
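The existence argument translates line by line into a (deliberately naive) recursive function; this Python sketch is purely illustrative, since divmod is of course built in:

    def div_alg(m, n):
        """Return (q, r) with m = q*n + r and 0 <= r < n, assuming n > 0."""
        assert n > 0
        if m == 0:                      # base case: 0 = 0*n + 0
            return 0, 0
        if m > 0:                       # inductive step, read as m-1 -> m
            q, r = div_alg(m - 1, n)
            return (q + 1, 0) if r + 1 == n else (q, r + 1)
        q, r = div_alg(-m, n)           # the case m < 0, reduced to -m > 0
        return (-q, 0) if r == 0 else (-q - 1, n - r)

    assert div_alg(17, 5) == divmod(17, 5) and div_alg(-17, 5) == divmod(-17, 5)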
$\mathbb Z_2$-equivariant cohomology of tori
Since in the fibration $X \hookrightarrow X \times_{\mathbb{Z}_2}E\mathbb{Z}_2 \to B\mathbb{Z}_2 \simeq \mathbb{RP}^{\infty}$ the fiber's cohomology groups are freely generated, your situation calls for the Leray–Hirsch theorem (in Hatcher it is p. 432). You want then the inclusion $X \hookrightarrow X \times_{\mathbb{Z}_2}E\mathbb{Z}_2$ to induce a surjection on cohomologies. For that, it is enough to show the surjectivity of $H_{\mathbb{Z}_2}^1(X) \to H^1(X)$ (use multiplicativity). I can't now think of a clever way to show that, but to investigate 1- and 2-dimensional cells seems to be enough. (Another way which might work is to replace absolute circles with pairs of intervals and their boundaries: in the relative case you won't have the first row in the spectral sequence, so the differential $d_2^{0,1}$ would vanish, which is equivalent to surjectivity.) UPD: The spectral sequence reference was unnecessary, deleted.
State Change in a Turing Machine(Computer of Integer Function)
You initially start with $0^m10^n$ on the tape, followed by infinitely many B's. In state $q_0$ you read the first $0$s and leave them unchanged. When you reach the $1$ you change it to a $0$ and move to $q_1$. At this point the tape is $0^m00^n$, so you have $m+1+n$ $0$s. Hence you have to remove one $0$ in order to have $m+n$; that is what is done in $q_1$. In $q_1$ you read all the $0$s until the end of the string; when you reach the end (reading a B) you go back one step and remove the last $0$, thus getting $0^m00^{n-1}=0^{m+n}$. I hope it is clearer.
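For concreteness, here is a direct simulation (a Python sketch; the state names and transition encoding are mine, following the description above, with B as the blank):

    def add_machine(m, n):
        tape = ['0'] * m + ['1'] + ['0'] * n + ['B', 'B']
        pos, state = 0, 'q0'
        while True:
            if state == 'q0':             # scan the first block of 0s
                if tape[pos] == '0':
                    pos += 1
                else:                     # the separator 1: overwrite with 0
                    tape[pos] = '0'
                    pos, state = pos + 1, 'q1'
            else:                         # q1: scan the second block of 0s
                if tape[pos] == '0':
                    pos += 1
                else:                     # read B: step back, erase one 0, halt
                    tape[pos - 1] = 'B'
                    return ''.join(tape).count('0')

    assert add_machine(3, 4) == 7 and add_machine(0, 0) == 0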
A basic question about sum of two subspaces
In a finite dimensional inner product vector space, yes, because for every subspace $W$ we have $(W^{\perp})^{\perp} = W$. In particular, if $W\neq V$, then $W^{\perp}\neq\mathbf{0}$, since $\mathbf{0}^{\perp}=V$. That $W$ is a sum is immaterial. In the infinite dimensional case, no. You can have a proper subspace whose orthogonal complement is trivial. E.g., in the vector space of all square summable real sequences, viewed as a subspace of $\mathbb{R}^{\infty}$, the span of the basis vectors $\mathbf{e}_i$ is a proper subspace that is dense, so its orthogonal complement is trivial. The same holds in any infinite dimensional Hilbert space, by taking the span of a Hilbert basis. Added. If your space does not have an inner product, then the very concept of "orthogonality" has no meaning, so the answer is Mu.
Conditional Statements
While there is not a single symbol, you can use the laws of logic to rewrite it in a couple of ways: $$\neg(p \implies q) \quad \iff \quad \neg(\neg p \, OR \, q) \quad \iff \quad p \,\, AND \,\, \neg q $$ In words, "$p \implies q$" is false if and only if $p$ is true and $q$ is false.
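An exhaustive check of the equivalence over all four truth assignments (a small Python sketch):

    from itertools import product

    for p, q in product((False, True), repeat=2):
        implies = (not p) or q
        assert (not implies) == (p and not q)
    print('not(p => q) is equivalent to p and not q')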
Modulus function, finding $2^{2014} \mod 11$
By Fermat's little theorem, $$2^{10}\equiv 1 \pmod{11}.$$ Since $2014 = 201\cdot 10+4$, $$2^{2014} = \left(2^{10}\right)^{201}\cdot 2^4 \equiv 2^4 \equiv 16 \equiv 5 \pmod{11}.$$
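One line in Python to confirm both congruences:

    print(pow(2, 10, 11), pow(2, 2014, 11))   # 1 5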
A question about Riemann Integral, the function is not integrable, but the limit of Riemann sum exists
Since you are always using left endpoints, the limit could exist even if $ f $ is not Riemann-integrable. One sometimes says that a function $ f $ is Cauchy-integrable on $ [ a , b ] $ if this limit exists, although the resulting notion of integral has some bad properties and is not really accepted as a kind of integral. (Cauchy himself only proposed this definition for functions known to be continuous, which he then proved are Riemann-integrable. In 1915, Gillespie proved that a function is Riemann-integrable if and only if it is bounded and Cauchy-integrable. But you have no reason to believe that $ f $ must be continuous or even bounded.) So you appear to have proved that the derivative of a differentiable function $ F $ must be Cauchy-integrable, in a way that satisfies the Fundamental Theorem of Calculus in that the value of the integral is $ F ( b ) - F ( a ) $. That said, I don't believe that you have really proved this! (Indeed, it is false; if $ F $ is Volterra's Function, then its derivative exists but is not Cauchy-integrable.) There is a lot hidden in this claim that $$ \lim _ { \lVert \pi \rVert \to 0 } \sum _ { i = 0 } ^ { n - 1 } \varphi _ i ( \Delta x _ i ) \Delta x _ i = 0 \text . $$ As you move from one partition to another (as you must for $ \lVert \pi \rVert $ to approach $ 0 $), the points $ x _ i $ change and so the functions $ \varphi _ i $ change. If you have a fixed function $ \varphi _ i $, then sure, $ \varphi _ i ( \Delta x _ i ) \Delta x _ i \to 0 $ as $ \Delta x _ i \to 0 $, and even quickly enough to overcome that there are more terms in the sum. But in general, $ \varphi _ i ( \Delta x _ i ) $ could change in an uncontrolled way as $ \varphi _ i $ changes. To look at this alleged limit more closely, you want to prove that for each $ \varepsilon > 0 $, for some $ \delta > 0 $, whenever $ \lVert \pi \rVert < \delta $, the absolute value of the sum is less than $ \varepsilon $. And you want to prove this by choosing $ \delta $ so that, whenever $ \Delta x _ i < \delta $ (which you'll have since $ \Delta x _ i \leq \lVert \pi \rVert $), $ \lvert \varphi _ i ( \Delta x _ i ) \rvert < \varepsilon / ( b - a ) $. Because then $$ \Bigg \lvert \sum _ { i = 0 } ^ { n - 1 } \varphi _ i ( \Delta x _ i ) \Delta x _ i \Bigg \rvert \leq \sum _ { i = 0 } ^ { n - 1 } \lvert \varphi _ i ( \Delta x _ i ) \rvert \Delta x _ i \leq \sum _ { i = 0 } ^ { n - 1 } \frac \varepsilon { b - a } \Delta x _ i = \frac \varepsilon { b - a } \sum _ { i = 0 } ^ { n - 1 } \Delta x _ i = \frac \varepsilon { b - a } ( b - a ) = \varepsilon \text . $$ But unfortunately you cannot pick $ \delta $ in this way for all partitions at once, only for a particular function $ \varphi _ i $ (which in turn is determined by the value of $ x _ i $). One thing that you can prove by this argument is that the derivative of a uniformly differentiable function is Cauchy-integrable, satisfying the FTC. To say that $ F $ is uniformly differentiable on an interval is to say that the same function $ \varphi $ can be used at every point in the interval; then you can drop the subscripts on $ \varphi $ and the argument goes through. Another thing that this argument proves is that the derivative of a differentiable function is Cauchy–Henstock–Kurzweil-integrable, also satisfying the FTC. This is a kind of integral that I just made up, a variation of the Henstock–Kurzweil integral that uses only left endpoints.
The Henstock–Kurzweil integral may be defined much like the Riemann integral, but the constant $ \delta > 0 $ such that $ \lVert \pi \rVert < \delta $ is replaced by a function $ \delta \colon [ a , b ] \to \{ d \mid d > 0 \} $, so that instead of requiring $ \Delta x _ i < \delta $, we require $ t _ i - x _ i < \delta ( t _ i ) $ and $ x _ { i + 1 } - t _ i < \delta ( t _ i ) $, where $ t _ i $ is the tag in the subinterval $ [ x _ i , x _ { i + 1 } ] $. In the left-endpoint version, this becomes $ \Delta x _ i < \delta ( x _ i ) $, and so we may let $ \varphi _ i $ give us $ \delta ( x _ i ) $. Neither of these results are new; in fact, the derivative of any uniformly differentiable function is Riemann-integrable (and in fact continuous), and the derivative of any differentiable function is Henstock–Kurzweil-integrable, with the Fundamental Theorem of Calculus applying in each case. So you have essentially proved weaker left-endpoint-only versions of these theorems. But actually, by applying the Fundamental Increment Lemma to the half-subintervals of an arbitrary tagged partition (so start by writing $ F ( b ) - F ( a ) $ as $ \sum _ i \big ( F ( t _ i ) - F ( x _ i ) \big ) + \sum _ i \big ( F ( x _ { i + 1 } ) - F ( t _ i ) \big ) $ instead of as just $ \sum _ i \big ( F ( x _ { i + 1 } ) - F ( x _ i ) \big ) $ and use both $ F ( x _ i ) - F ( t _ i ) = f ( t _ i ) ( x _ i - t _ i ) + \varphi _ i ( x _ i - t _ i ) ( x _ i - t _ i ) $ and $ F ( x _ { i + 1 } ) - F ( t _ i ) = f ( t _ i ) ( x _ { i + 1 } - t _ i ) + \varphi _ i ( x _ { i + 1 } - t _ i ) ( x _ { i + 1 } - t _ i ) $), you can prove the full theorems!
Between Bayesian and measure theoretic approaches
Kadane's book Principles of Uncertainty derives Bayesian probability from first principles and does use measure theory (in section 4.9). It is based on de Finetti's framework of coherence instead of Cox's axioms.
Blowing up at a point
A blowing-up morphism of integral algebraic varieties (more generally of integral schemes) is birational: it is an isomorphism outside of the center we blow up (under the obvious condition that the center is nowhere dense). As birational algebraic varieties have the same dimension (equal to the transcendence degree of the function field), blowing up doesn't change the dimension. The strict transform of $C$ in $X$ is regular (straightforward computations; see e.g. a solution here), so the second part of your question is irrelevant.
Does anyone know of any interesting proofs using complex analysis not covered in an intro sequence?
I've always liked covering the theory of complex infinite products. There are a lot of really cool applications to this, like Dedekind's $\eta$ function $$ \eta(\tau) = e^{i\pi\tau/12}\prod_{n=1}^{\infty}(1-e^{2\pi in\tau}) $$ for $\tau \in \mathcal{H} = \lbrace z \in \mathbb{C} \; | \; \Im(z) > 0 \rbrace$. This function is a modular form of half-integral weight, and proving its functional equation $$ \sqrt{\frac{\tau}{i}}\eta(\tau) = \eta\left(\frac{-1}{\tau}\right) $$ is a nice illustration of analytic continuation and logarithmic derivatives, as well as manipulating normal convergence. This function is also extremely useful because $\eta(\tau)^{24}$ is a cusp form of weight 12 of full level, which is the first nontrivial example of a full level cusp form one can find. These have applications in the theory of modular forms and in automorphic representation theory, and hence are extremely useful in number theory and the Langlands' Programme. Perhaps a more natural complex analytic answer would be to prove things about the Weierstrass $\Delta$ function, for $\gamma$ the Euler-Mascheroni constant, $$ \Delta(z) = ze^{\gamma z}\prod_{n=1}^{\infty}\left(1+\frac{z}{n}\right)e^{-z/n}, $$ like functional equations and whathaveyou, and then show that there is an equality $$ \Delta(z) = \frac{1}{\Gamma(z)}, $$ where $\Gamma$ is the Gamma function! This makes finding the residues and stuff about $\Gamma$ easier and is a nice illustration of using infinite products in complex analysis.
Is this a valid proof that the set of all finite subsets of $\mathbb{R}$ is uncountable?
Yes, the proof is okay. I'd phrase the last sentence a bit differently, though: The set of all finite subsets of $\Bbb R$ is a superset of the set of all the singletons of $\Bbb R$, and since a superset of an uncountable set is uncountable, the set of all finite subsets of $\Bbb R$ is uncountable. (And a minor point for improvement: in case you are using "the set of finite subsets of $\Bbb R$" often enough, it is worth adding a notation for it, e.g. $\operatorname{Fin}(\Bbb R)$.)
Given an n-sided polygon, how would you random sample points within it?
The idea to triangulate is not bad. You have to choose a triangle with a probability proportional to its area. For this, form a vector with the prefix sums of the areas and draw a random number in the range $[0,\text{total area}]$. Then the interval in which the value falls tells you the triangle. Next you need to draw a random point in the triangle, uniformly. Consider drawing two numbers in the range $[0,1]$; they define a point uniformly distributed over a unit square. Then if the point is located past the diagonal ($x+y>1$), you mirror it around the diagonal. Now you get a point uniformly distributed in a unit triangle, and finally you deform this unit triangle to the randomly chosen triangle. Update: A triangulation in $n$ pieces is not necessary. You can triangulate efficiently by sorting the vertices top to bottom and intersecting the polygon with horizontals through every vertex. It is a so-called sweepline process. This will partition the polygon into trapezoids, which you can further split into two triangles each. This method produces more triangles but with cost $O(n\log n)$, and the "penalty" of having more triangles will be very low.
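A sketch of the whole procedure in Python (it assumes a triangulation is already given as a list of vertex triples; only the standard library is used):

    import bisect, random

    def tri_area(a, b, c):
        return abs((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])) / 2

    def sample_point(triangles):
        # pick a triangle with probability proportional to its area
        prefix, total = [], 0.0
        for a, b, c in triangles:
            total += tri_area(a, b, c)
            prefix.append(total)
        a, b, c = triangles[bisect.bisect_left(prefix, random.uniform(0, total))]
        # uniform point in the unit square, folded across the diagonal
        u, v = random.random(), random.random()
        if u + v > 1:
            u, v = 1 - u, 1 - v
        # affine map of the unit triangle (0,0),(1,0),(0,1) onto (a,b,c)
        return (a[0] + u*(b[0]-a[0]) + v*(c[0]-a[0]),
                a[1] + u*(b[1]-a[1]) + v*(c[1]-a[1]))

    # example: the unit square split into two triangles
    square = [((0, 0), (1, 0), (1, 1)), ((0, 0), (1, 1), (0, 1))]
    print(sample_point(square))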
Is $n=2 $ the only solution of the below identity?
Note that for all $x\ne0$ we have $$0\le\left|\frac{\sin{(x)}}x\right|\lt1$$ and hence we have for $n\in\mathbb{N}$ with $n\ge3$ \begin{align} \int_{-\infty}^\infty\left(\frac{\sin{(x)}}x\right)^n\mathrm{d}x &\le\int_{-\infty}^\infty\left|\frac{\sin{(x)}}x\right|^n\mathrm{d}x\\ &\lt\int_{-\infty}^\infty\left|\frac{\sin{(x)}}x\right|^2\mathrm{d}x\\ &=\int_{-\infty}^\infty\left(\frac{\sin{(x)}}x\right)^2\mathrm{d}x\\ \end{align} So the equality you mention only holds for $n=2$.
Lagrange Multipliers where no solutions(s) satisfy the constraints
Lagrange multipliers work in the following situation. Suppose $g(x,y)=0$ defines a bounded level curve. Suppose also that $\nabla g \neq 0$, that is, the gradient is non-zero on this level set. Then the max and min of $f$ on $g=0$ occur at a point that satisfies $\nabla f = \lambda \nabla g$ and $g=0$. What if you just assume that $g(x,y)=0$ and allow for $\nabla g=0$? Then the min and max of $f$ will occur at a point where $\nabla g =0$ or where $\nabla f = \lambda \nabla g$, in addition to $g=0$. However, your problem is that your constraint does not define a bounded set. Think about it: if you have an unbounded set and you are trying to maximize the distance to the origin, is this possible? No. This is the sort of situation you have.
Factoring a multivariate polynomial.
The polynomial has the separated form $$P(x_1,\dots,x_n) = p_1(x_1)\cdots p_n(x_n), $$ if and only if the PDE $$P^{n-1}\frac{\partial^n}{\partial x_1 \cdots \partial x_n} P = \frac{\partial P}{\partial x_1} \cdots \frac{\partial P}{\partial x_n} $$ holds. (For a separated $P$, both sides equal $(p_1\cdots p_n)^{n-1}\,p_1'\cdots p_n'$.) Conditional on that fact, you may also check whether the polynomial is invariant under all permutations in $S_n$.
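The criterion is easy to test with a computer algebra system. A sketch with SymPy (the example polynomials are my own):

    import sympy as sp

    x, y, z = sp.symbols('x y z')

    def is_separated(P, variables):
        n = len(variables)
        lhs = P**(n - 1) * sp.diff(P, *variables)        # P^(n-1) d^n P/dx1...dxn
        rhs = sp.Mul(*[sp.diff(P, v) for v in variables])
        return sp.simplify(lhs - rhs) == 0

    print(is_separated((x + 1)*(y**2 + 2)*(z - 3), [x, y, z]))  # True
    print(is_separated(x*y + 1, [x, y]))                        # False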
Probability - chess problem. What's wrong with my approach.
Your problem in the last formulation there is that you are effectively allowing the probability that the person who loses the first round beats Al in the second round. The chance of there being a second round includes two victories by either player. Your first, correct, diagram for question 3 would be modified to have an additional node after the first two games, but obviously that doesn't happen.
An inequality involving norms.
Assuming that $u \in H^1(0, T; L^2(\Omega))$, you have $$u(T) = u(0) + \int_0^T u_t(t) \, dt.$$ By the triangle inequality, we find $$\| u(T) \|_{L^2(\Omega)} \le \| u(0)\|_{L^2(\Omega)} + \left\| \int_0^T u_t(t) \, dt\right\|_{L^2(\Omega)} \le \| u(0)\|_{L^2(\Omega)} + \int_0^T \| u_t(t) \|_{L^2(\Omega)} \, dt. $$
How to prove Bonferroni inequalities?
The main idea is that this is the integrated version of analogous pointwise inequalities and that, for every $k$, $$ S_k=\mathbb E\left({T\choose k}\right),\qquad T=\sum_{i=1}^n\mathbf 1_{A_i}. $$ Hence the result follows from the stronger inequalities asserting that, for every positive integer $N$, $$ \sum_{i=0}^k(-1)^ia_i,\qquad a_i={N\choose i}, $$ is nonnegative when $k$ is even and nonpositive when $k$ is odd. In turn, this fact follows from the properties that the sequence $(a_i)_{0\leqslant i\leqslant N}$ is unimodal and that $\sum\limits_{i=0}^N(-1)^ia_i=0$.
Generalized Euler Lagrange Equation with Integral of Action over a Compact Domain
The equation you wrote is the generalization of the usual Euler-Lagrange equation from classical mechanics to classical field theory. You can find the derivation of this in a lot of places; just try searching for the "classical field theory version" of the Euler-Lagrange equations. For instance, you can look at Section 5.3 of Nastase's Classical field theory. Also, there's no reason that these equations can only apply to products of intervals; the derivation doesn't assume anything like $\Omega = I_1 \times \cdots \times I_n$. Indeed, these equations hold for any compact manifold $\Omega$ with boundary (for instance, $\Omega = \mathsf{B}^n \subset \mathbb{R}^n$), as long as you place appropriate boundary conditions on the fields (i.e., the functions $f$) that you're considering. This is because the derivation involves some kind of "integration by parts" that introduces a boundary term, which you'd like to make zero in order to obtain the equations above. Edit. To be clear, maybe you're under the impression that we can only work with products $\Omega = I_1 \times \cdots \times I_n$ since for the $1$-dimensional Euler-Lagrange equations (i.e. where we replace $x_1,\dots,x_n$ by $t$), we work with a single interval $\Omega = I$. But this is just because any compact $1$-dimensional manifold (connected, with nonempty boundary) has to be (diffeomorphic to) a closed interval! However, in dimensions $n > 1$, there are way more compact manifolds (with boundary) than just products of intervals.
Are some of the Real number axioms redundant?
I think you're not asking the right question. For a typical analysis textbook, it is not about whether some axioms are necessary or what the minimal set of axioms should be. A good analysis textbook should pick a manageable set of axioms that are intuitive and from there proceed with a good exposition of the subject. You'll have forgotten the details of the axiom system very soon anyway. If you're into questions like whether certain axioms are necessary or how to minimize axiom systems (or whether two sets of axioms are equivalent), then foundations is the way to go.
If $a$ and $b$ are positive integers such that $a^n+n\mid b^n + n$ for all positive integers $n$, prove that $a=b$.
Assume the contrary and consider a prime $p$ that does not divide $b-a$. By the Chinese Remainder Theorem we can find a positive integer $n$ such that $$\begin{cases}n\equiv 1\pmod{p-1}\\ n\equiv -a\pmod p \end{cases}$$ Then by Fermat's little theorem $$a^n+n\equiv a+n\equiv a-a\equiv 0\pmod p$$ and $$b^n+n\equiv b+n\equiv b-a\pmod p$$ It follows that $p$ divides $a^n+n$ but does not divide $b^n+n$. Contradiction.
degree of a projective embedding
No, they are not the same; they are equal only in dimension one. There are lots of counterexamples. To name just one, if $X$ is an Abelian variety of dimension $n$ and $d=1$, then the degree of $X$ inside $\mathbb{P}^N$ equals $(\chi(L)-\chi(\mathcal{O}_X))\cdot n!$. There exists a formula relating $\chi(L^d)-\chi(\mathcal{O}_X)$ to the degree of the embedding, but it cannot be reduced to an equality as in the case of curves :) Cf. the Riemann-Roch-Hirzebruch Theorem.
Conditional probability ordering singers
It is $1/4$, because here the ordering does not matter at all, it's all just someone choosing one song over four possible. (Unless the song is Wonderwall, of course. Then it is more likely to be chosen.)
In Information Theory, what is the lower bound that minimizes the value of $2^{l_x}$
Notice that we are speaking here of "decipherable codes", so we must put that restriction in play somehow. Fortunately we have the Kraft-McMillan inequality, so we must have $\sum_x 2^{-l_x}\le 1$. Let's call $a_x = 2^{-l_x}$ and $b_x = p_x 2^{l_x}=p_x a_x^{-1}$. Then we want to bound $E=\sum b_x$ subject to $\sum a_x \le 1$. Now, for any non-negative $a_x,b_x$, we can write Cauchy-Schwarz as $$\sum_x \sqrt{a_x b_x} \le \sqrt{ \sum_x a_x}\sqrt{ \sum_x b_x} \tag{1}$$ In our case this gives $$\sum_x \sqrt{p_x} \le \sqrt{E} \sqrt{ \sum_x a_x} \le \sqrt{E} \tag{2}$$ which gives the desired bound $E \ge \left(\sum_x \sqrt{p_x}\right)^2$.
Why do we need the Well-Ordering Property in the proof of Fundamental Theorem of Arithmetic?
Without the well ordering property, each number with multiple factorizations could be the product of some prime and another number with multiple factorizations, and so on forever; well-ordering is exactly what rules out such an infinite descent.
Continuous mapping from n-sphere to (n+1)-sphere
No. Sard's Theorem. Short version: the image of $\mathbb S^n$ has measure zero in $\mathbb S^{n+1}$ unless the mapping is highly non-differentiable.
What is the probability that I have watched neither news?
Your first working is wrong because you have identified the wrong complement. The second working is correct, but it is unclear to me whether you are aware of the reasoning: it turns out that the events of watching each news programme are independent in this case. Alternatively, watching neither news programme is the complement of watching at least one: $$P( A' \cap B')=1-P(A \cup B) = 1-P(A)-P(B)+P(A \cap B)$$
Which grammar generates the language: $L = \{a^i b^j d^k | i, j, k ≥ 0 ∧ j < k\}$
It can't be the first, because the first grammar generates the word $ddbd\notin \mathcal L$: $$S\to SA \to Sbd \to Adbd \to ddbd$$
Calculate m and n of function g(x) knowing it admits a local minimum at the point (2;4)
We have $g(2)=4$ and $g'(2)=0$: two equations for the two unknowns $m$ and $n$. Can you proceed?
Binary constraint integer programming problem
The implication "if $X \le 2$ then $Y\le 3$" is equivalent to "$X \gt 2$ or $Y \le 3$", which can be modelled with the following constraints:

$X - 2 + M_1 \delta \gt 0$

$Y - 3 \le M_2 (1- \delta )$

$X \le 1000$

where $\delta$ is a binary variable, $M_1 = 2$ and $M_2 = 1003$. The values of $M_1$ and $M_2$ are chosen to allow the full possible range of values for $X$ and $Y$. The constraints become

$X - 2 + 2 \delta \gt 0$

$Y - 3 \le 1003 (1- \delta )$

$X \le 1000$

When $\delta = 0$ we get $X \gt 2$, and the second constraint becomes $Y - 3 \le 1003$, which imposes no restriction beyond $Y$'s natural range. When $\delta = 1$ the first constraint becomes $X \gt 0$, thus allowing $X \le 2$, and so we need $Y\le 3$, which is what the $Y$ constraint becomes when $\delta = 1$ is substituted in.
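A minimal sketch of this big-M trick in code, assuming the PuLP library (the bounds on $Y$ and the objective below are illustrative, not from the question); note that LP solvers have no strict inequalities, so with integer $X$ the condition $X \gt 2$ is written as $X \ge 3$:

```python
import pulp

prob = pulp.LpProblem("indicator", pulp.LpMaximize)
X = pulp.LpVariable("X", 0, 1000, cat="Integer")
Y = pulp.LpVariable("Y", 0, 1000, cat="Integer")
d = pulp.LpVariable("delta", cat="Binary")

M1, M2 = 2, 1003
prob += X + Y                    # any objective; here maximize X + Y
prob += X - 2 + M1 * d >= 1      # X >= 3 when delta = 0  (i.e. X > 2)
prob += Y - 3 <= M2 * (1 - d)    # Y <= 3 is forced when delta = 1
prob.solve()
print(pulp.value(X), pulp.value(Y), pulp.value(d))  # 1000, 1000, 0
```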
Determine Volumes of n cubes in 3D space
Each cube is bounded by six planes. The intersection has to satisfy all of these constraints, which will reduce to six active constraints. To find them, for each dimension take the greatest lower bound and the smallest upper bound. Once you determine the active constraints it is straightforward to find the volume.
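A sketch of this reduction, under the assumption that the cubes are axis-aligned and given by per-dimension bounds:

```python
# Each box is ((xlo, xhi), (ylo, yhi), (zlo, zhi)).
def intersection_volume(boxes):
    vol = 1.0
    for dim in range(3):
        lo = max(b[dim][0] for b in boxes)   # greatest lower bound
        hi = min(b[dim][1] for b in boxes)   # smallest upper bound
        if hi <= lo:                         # empty intersection
            return 0.0
        vol *= hi - lo
    return vol

cubes = [((0, 2), (0, 2), (0, 2)), ((1, 3), (1, 3), (1, 3))]
print(intersection_volume(cubes))  # 1.0: the overlap is a unit cube
```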
Gradient of Hadamard product
The inner/Frobenius product provides a nice infix notation for the trace operation $${\rm tr}(A^TB)=A:B$$ Use this to rewrite the function, then find the differential and gradient $$\eqalign{ f &= A:(X\circ X) - A:1 \cr\cr df &= A:d(X\circ X) - 0 \cr &= A:2(X\circ dX) \cr &= 2(A\circ X):dX \cr\cr \frac{\partial f}{\partial X} &= 2A\circ X \cr\cr }$$
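A quick finite-difference check of this gradient (a sketch, taking $f(X)=A:(X\circ X)-A:1$ with $1$ the all-ones matrix, as above):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
X = rng.standard_normal((3, 3))

f = lambda X: np.sum(A * (X * X)) - np.sum(A)   # f = A:(X∘X) - A:1

# central finite differences, entry by entry
G = np.empty_like(X)
h = 1e-6
for i in range(3):
    for j in range(3):
        E = np.zeros_like(X); E[i, j] = h
        G[i, j] = (f(X + E) - f(X - E)) / (2 * h)

print(np.allclose(G, 2 * A * X, atol=1e-5))  # True: gradient is 2 A∘X
```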
Derivative of Bezier Rectangle
Yes, it's true (almost). If the surface parameters are $u$ and $v$, the partial derivative wrt $u$ is a Bezier patch of degree $(M, N-1)$ and the partial derivative wrt $v$ is a Bezier patch of degree $(M-1, N)$. If you are using the de Casteljau algorithm to calculate points on your patch, then the partial derivatives will be natural by-products. Often the best way to handle a surface computation is to reduce it to a curve computation. So, suppose you want to calculate the partial derivative wrt $u$ at parameter values $(\bar u, \bar v)$ on the Bezier patch $(u,v) \mapsto \mathbf{S}(u,v)$. The curve $\mathbf{C}(u) = \mathbf{S}(u,\bar v)$ is a Bezier curve, whose control points are easy to obtain. The desired partial derivative is just the derivative of the Bezier curve $\mathbf{C}$ at the parameter value $u = \bar u$.
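Here is a sketch of that curve-reduction strategy in Python (Bernstein evaluation is used instead of de Casteljau for brevity; the control-net layout, with the first index along $u$, is an assumption):

```python
import numpy as np
from math import comb

def bezier_curve(Q, t):
    # evaluate a Bezier curve with control points Q, shape (m+1, dim), in Bernstein form
    m = len(Q) - 1
    B = np.array([comb(m, i) * t**i * (1 - t)**(m - i) for i in range(m + 1)])
    return B @ Q

def patch_du(P, u, v):
    # dS/du at (u, v): collapse the v-direction of the control net P
    # (shape (m+1, n+1, dim)), then differentiate the resulting curve
    Q = np.array([bezier_curve(row, v) for row in P])  # control points of C(u) = S(u, v-bar)
    m = len(Q) - 1
    D = m * (Q[1:] - Q[:-1])                           # control points of C'(u), degree m-1
    return bezier_curve(D, u)

# bilinear patch S(u,v) = (u, v, uv): dS/du at (u, v) should be (1, 0, v)
P = np.array([[[0, 0, 0], [0, 1, 0]],
              [[1, 0, 0], [1, 1, 1]]], dtype=float)
print(patch_du(P, 0.3, 0.7))   # [1.  0.  0.7]
```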
Closure of an infinite half-open interval
Note that every topological space is closed in itself. So is $\Bbb R=(-\infty, \, \infty)$. So, the closure of $(-\infty,\, a)$ is indeed $(-\infty, a]$. However, if we work on the extended real line $\bar{\Bbb R}=\Bbb R\cup\{-\infty, \, \infty\}$, then its closure will be $[-\infty, \, a] $.
Sequence of Functions Satisfying Conditions
Here's a reasonably general construction. Let $I_n$ be any pairwise disjoint sequence of nontrivial intervals (or more generally, measurable sets with positive measure) contained in $[0,1]$. Let $a_n$ be any sequence of nonnegative real numbers such that $a_n \to 0$ but $\sum a_n = \infty$. Define $$f_n(x) = \begin{cases} \displaystyle\frac{a_n}{|I_n|} & \text{ if }x \in I_n \\ 0 & \text{ otherwise} \end{cases}$$ Then $f_n(x) \to 0$ pointwise (indeed for a given $x$, $f_n(x)$ is nonzero for at most one $n$). Moreover, $$\int_0^1 f_n(x)\ dx = \int_{I_n}\frac{a_n}{|I_n|}\ dx = a_n \to 0$$ and, because $f_n \geq 0$ and the supports of $f_n$ are pairwise disjoint, $$\int_0^1 \sup_n f_n(x)\ dx = \int_0^1 \sum_n f_n(x)\ dx = \sum_n \int_0^1 f_n(x)\ dx = \sum_n a_n = \infty$$ For a concrete example, you can take $I_n = [1/(n+1), 1/n)$ and $a_n = 1/n$, which means that $|I_n| = 1/(n(n+1))$ and $$f_n(x) = \begin{cases} n+1 & \text{ if }x \in [1/(n+1), 1/n) \\ 0 & \text{ otherwise } \end{cases}$$
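For the concrete example, a small numerical illustration (a sketch; the grid resolution and cutoffs are arbitrary) of $\int f_n \to 0$ while the integral of the envelope $\sup_n f_n$ grows like the harmonic series:

```python
import numpy as np

x = np.linspace(1e-4, 1, 2_000_000)
dx = x[1] - x[0]

def f(n):
    # f_n = n+1 on [1/(n+1), 1/n), zero elsewhere
    return np.where((x >= 1 / (n + 1)) & (x < 1 / n), n + 1.0, 0.0)

print([round(f(n).sum() * dx, 3) for n in (1, 5, 50)])  # ≈ 1, 0.2, 0.02 -> 0

sup = np.zeros_like(x)
for n in range(1, 200):
    sup = np.maximum(sup, f(n))   # build the envelope sup_n f_n
print(sup.sum() * dx)             # ≈ sum_{n<200} 1/n ≈ 5.9, growing without bound
```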
What is a good estimate for this log sum?
Your sum is $\sum_{i=1}^{K}(\log n-i\log 3)=K\log n-\frac{K(K+1)}{2}\log 3$.
Building a telephone number with special digits [Combinatorics]
Once the first number is a $2$, we essentially have to figure out the remaining $6$ numbers out of the multiset $\{1,2,2,3,3,3\}$. Your intuition is correct that you can use multinomial coefficients, but let's demystify that: We have $\binom{6}{3}$ ways of choosing where the $3$'s go; from there, we just have to choose where to place the two remaining $2$'s in the remaining three spots, so we have to multiply by $\binom{3}{2}$. If we'd like, we can also multiply by $\binom{1}{1}$, the number of ways to place the $1$. Then we get $$\binom{6}{3}\binom{3}{2} = \binom{6}{1,2,3}\,.$$
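A two-line brute-force confirmation of the count (a sketch):

```python
from itertools import permutations

# distinct arrangements of the multiset {1,2,2,3,3,3}
print(len(set(permutations((1, 2, 2, 3, 3, 3)))))  # 60 = C(6,3) * C(3,2)
```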
Prove that $Tu(x)$ is a contraction. $Tu(x) = -\lambda\int_0^1g(x,y)\sin(u(y))\,dy$
It's just a grind... \begin{eqnarray} |Tu(x)-Tv(x)| &\le & |\lambda| \int_0^1 |g(x,y)| |\sin u(y) - \sin v(y)| dy \\ &\le& |\lambda| \int_0^1 |g(x,y) | | u(y) - v(y)| dy \\ &\le& |\lambda| \|u-v\|_\infty \int_0^1 |g(x,y) | dy \\ \end{eqnarray} Hence $ \|Tu-Tv\|_\infty \le |\lambda| \|u-v\|_\infty \sup_{x \in [0,1]}\int_0^1 |g(x,y) | dy $. Now compute $\int_0^1 |g(x,y) | dy = \int_0^x y(1-x)dy + \int_x^1 x(1-y)dy = \frac{1}{2}x(1-x)$, and this is maximized at $x=\frac{1}{2}$, and so $\int_0^1 |g(x,y) | dy \le \frac{1}{8}$, which gives $$\|Tu-Tv\|_\infty \le \frac{1}{8}|\lambda| \|u-v\|_\infty $$ (Of course, this only shows $T$ to be a contraction if $|\lambda| < 8$.)
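A numerical sanity check of the constant (a sketch, with $g(x,y)=\min(x,y)(1-\max(x,y))$ reconstructed from the two integrals above):

```python
import numpy as np

xs = np.linspace(0, 1, 1001)
ys = np.linspace(0, 1, 1001)
dy = ys[1] - ys[0]

def g(x, y):
    # y(1-x) for y <= x, x(1-y) for y > x
    return np.where(y <= x, y * (1 - x), x * (1 - y))

# Riemann-sum approximation of sup_x \int_0^1 |g(x,y)| dy
vals = [np.abs(g(x, ys)).sum() * dy for x in xs]
print(max(vals))  # ≈ 0.125, attained near x = 1/2
```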
$(-1)^{\sqrt{2}} = ? $
Using Euler's formula: $-1=\exp(\ln(-1))=\exp(\ln(1)+i\arg(-1))=\exp(0+i(2k+1)\pi) =\exp(i(2k+1)\pi)$ for $k\in \Bbb{Z}$ Thus: $(-1)^{\sqrt{2}}=\exp(i(2k+1)\pi\sqrt{2})=\cos((2k+1)\sqrt{2}\pi)+i\sin((2k+1)\sqrt{2}\pi)$
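The $k=0$ (principal) branch is easy to check numerically (a sketch; CPython itself evaluates a negative base with a non-integer exponent on the principal branch, matching $k=0$):

```python
import cmath, math

s = math.sqrt(2)
principal = cmath.exp(1j * math.pi * s)                           # k = 0: e^{i pi sqrt(2)}
formula = complex(math.cos(s * math.pi), math.sin(s * math.pi))   # cos + i sin form
print(principal, formula, (-1) ** s)                              # all three agree
```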
When does a real matrix have a real square root?
Let $A\in M_n(\mathbb{C})$; we consider the non-increasing sequence $d_i=\dim(\ker(A^i))-\dim(\ker(A^{i-1}))$. $A$ is a square IFF $A$ satisfies the following condition (P) about its iterated kernels (if $A$ is invertible, then there are no conditions!) (P) (cf. Cross, Lancaster, Square roots of complex matrices, Linear and Multilinear Algebra): $(d_i)_i$ does not contain two successive occurrences of the same odd integer. Note that, in Topics in Matrix Analysis, p. 472, Horn and Johnson add a condition which is useless. Let $A\in M_n(\mathbb{R})$. $A$ is a square over $\mathbb{R}$ IFF $A$ satisfies (P) and the following condition (Q) concerning the negative eigenvalues of $A$. (Q) $A$ has an even number of Jordan blocks of each size for every negative eigenvalue. (Use the real Schur decomposition; cf. Functions of matrices, Higham, p. 17, or Horn and Johnson, above.) EDIT. Answer to Aaron. 1. The supplementary condition by H&J can be rewritten: "if $\dim(\ker(A))$ is odd, then $\dim(\ker(A^2))<2\dim(\ker(A))$". Note that $\dim(\ker(A^2))\leq 2\dim(\ker(A))$ is always true. Then it suffices to assume that $\dim(\ker(A^2))=2\dim(\ker(A))$; then $d_1=d_2$ are odd and, according to the other conditions, the square root of $A$ does not exist. 2. The problem only stands for real negative eigenvalues; indeed, if $\lambda$ is an eigenvalue of $A$ then $\bar{\lambda}$ is too, and the dimensions of the Jordan blocks associated to $\lambda$ are the same as those of $\bar{\lambda}$; it is not difficult to find a square root of $diag(\lambda I+J_k,\bar{\lambda}I+J_k)$ when $\lambda\notin \mathbb{R}$, or of $\lambda I+J_k$ when $\lambda >0$, or of $diag(\lambda I+J_k,\lambda I+J_k)$ when $\lambda<0$.
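A quick numerical illustration of condition (Q), assuming SciPy: a single $2\times 2$ Jordan block for a negative eigenvalue (an odd number of blocks of that size) has square roots only over $\mathbb{C}$:

```python
import numpy as np
from scipy.linalg import sqrtm

# one Jordan block for the eigenvalue -1: condition (Q) fails
J = np.array([[-1.0, 1.0], [0.0, -1.0]])
R = sqrtm(J)
print(np.max(np.abs(R.imag)) > 1e-8)   # True: the computed root is genuinely complex
print(np.allclose(R @ R, J))           # True: it is a square root over C
```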
Does the series $\sum_{n=1}^{\infty} \frac{(-1)^n}{n(\sin(n)+2)}$ converge or diverge?
My claim is as follows: Claim. For any $\theta \in \Bbb{Q}$, we have $$ \sum_{n=1}^{\infty} \frac{(-1)^n}{n(2+\sin n\theta)} = -\frac{\log 2}{\sqrt{3}} - \frac{2}{\sqrt{3}} \Re \sum_{n=1}^{\infty} \frac{i^n}{(2+\sqrt{3})^n} \log(1+e^{in\theta}). \tag{*} $$ For example, if we take $\theta = 1$ as in the original problem and evaluate the first 10000 terms, we get $$ \sum_{n=1}^{\infty} \frac{(-1)^n}{n(2+\sin n)} \approx -0.27206008766393670467\cdots, $$ which agrees well with MathUser's observation. Before the proof (optional). My first trial was to use some equidistribution results, but this turned out to be daunting because I know nothing about asymptotic behavior of $$ \frac{1}{n}\sum_{k=1}^{n} \frac{1}{2+\sin k} = \frac{1}{2\pi}\int_{0}^{2\pi}\frac{dx}{2+\sin x} + \boxed{\epsilon_n : \text{error term}}, $$ since we can write $$ \sum_{n=1}^{N} \frac{(-1)^n}{n(2+\sin n)} = 2\sum_{n=1}^{N} (-1)^{n} \epsilon_n + (-1)^N \epsilon_N + \sum_{n=1}^{N} \frac{(-1)^n}{n} (C + \epsilon_n)$$ with the constant $C = \frac{1}{2\pi}\int_{0}^{2\pi}\frac{dx}{2+\sin x}$. So I gave up this approach and tried a new one, which indeed led me to a correct proof. I consider my proof a sledgehammer method, though, and I suspect that there is a simpler proof. Preliminary. We first introduce two big theorems. The first one is the following famous version of Tauberian theorems: Littlewood's Tauberian Theorem. If $a_n = \mathcal{O}(1/n)$ and $\sum a_n$ is Abel summable, i.e., $$ \lim_{s \to 0^+} \sum_{n=1}^{\infty} a_n e^{-ns} = S$$ converges, then $\sum a_n$ converges in the usual sense and $$ \sum_{n=1}^{\infty} a_n = S. $$ The next theorem deals with how $e^{in\theta}$ gets closer to $-1$ as $n$ grows. Theorem. The irrationality measure of $1/\pi$ is finite. In particular, there exist $c, \mu > 0$ such that $$ \mathrm{dist}(n, \pi \Bbb{Z}) \geq c n^{-\mu} $$ for any $n = 1, 2, \cdots $. Proof. The actual proof is quite straightforward. In view of Littlewood's Tauberian theorem, it suffices to show that (*) holds in the Abel summability sense. To this end, let $r = i(2-\sqrt{3})$ and notice that for $x \in \Bbb{R}$, \begin{align*} \frac{1}{2+\sin x} &= \frac{1}{\sqrt{3}} \left( \frac{1}{1 - re^{ix}} + \frac{\bar{r}e^{-ix}}{1 - \bar{r}e^{-ix}} \right) \\ &= \frac{1}{\sqrt{3}} + \frac{2}{\sqrt{3}} \Re \sum_{k=1}^{\infty} r^k e^{ikx}. \end{align*} So by Fubini's theorem, if $s > 0$, we have \begin{align*} \sum_{n=1}^{\infty} \frac{(-1)^n}{n(2+\sin n\theta)}e^{-ns} &= \sum_{n=1}^{\infty} \frac{(-1)^n}{n}e^{-ns} \left( \frac{1}{\sqrt{3}} + \frac{2}{\sqrt{3}} \Re \sum_{k=1}^{\infty} r^k e^{ikn\theta} \right) \\ &= -\frac{\log (1 + e^{-s})}{\sqrt{3}} - \frac{2}{\sqrt{3}} \Re \sum_{k=1}^{\infty} r^k \log(1 + e^{-s+ik\theta}). \end{align*} Now the second theorem shows that $$ \left| \log(1 + e^{-s}e^{ik\theta}) \right| \leq -\log|\sin k\theta| + \mathcal{O}(1) \leq C \log k + \mathcal{O}(1) \tag{1} $$ uniformly in $k$ and $s$. So we can take the termwise limit as $s \to 0^+$ and the resulting series $$ -\frac{\log 2}{\sqrt{3}} - \frac{2}{\sqrt{3}} \Re \sum_{k=1}^{\infty} r^k \log(1 + e^{ik\theta}) $$ still converges absolutely. This shows that (*) is true in the Abel summability sense and hence completes the proof of our claim. //// Addendum: Proof of (1). I admit that I omitted some detail when proving (1). 
So here is my derivation: Notice first that \begin{align*} \left| \log(1 + e^{-s}e^{ik\theta}) \right| &= \left| \log\left| \frac{1 + e^{-s}e^{ik\theta}}{2} \right| + i\arg(1 + e^{-s}e^{ik\theta}) + \log 2 \right| \\ &\leq - \log\left| \frac{1 + e^{-s}e^{ik\theta}}{2} \right| + \pi + \log 2 \\ &= - \log\left| 1 + e^{-s}e^{ik\theta} \right| + \pi + 2\log 2 \end{align*} where the intermediate inequality follows from the triangle inequality together with the observation that $|1 + e^{-s}e^{ik\theta}| \leq 2$. Now let us write $\theta = p/q$ for $p, q \in \Bbb{Z}$ and $q > 0$. Then \begin{align*} \left| 1 + e^{-s}e^{ik\theta} \right| &\geq \left| \Im (1 + e^{-s}e^{ik\theta}) \right| = e^{-s}\left| \sin(k\theta) \right| \geq e^{-s} \cdot \frac{2}{\pi}\,\mathrm{dist}(k\theta, \pi \Bbb{Z}) \\ &= \frac{2}{\pi q}e^{-s}\,\mathrm{dist}(kp, q \pi \Bbb{Z}) \geq \frac{2}{\pi q}e^{-s}\,\mathrm{dist}(k|p|, \pi \Bbb{Z}) \\ &\geq \frac{2c}{\pi|p|^{\mu}q}e^{-s} k^{-\mu}. \end{align*} Taking the negative log, we get $$ -\log \left| 1 + e^{-s}e^{ik\theta} \right| \leq \mu \log k + s + \mathcal{O}(1). $$ As we are interested only in the limit as $s \to 0^+$, by restricting the range of $s$ to $0 < s \leq 1$, the term $s$ can be absorbed into the $\mathcal{O}(1)$ term. Therefore this yields the desired inequality (1) with the constant $C = \mu$.
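A numerical check of the claim for $\theta=1$ (a sketch; both truncation depths are arbitrary, and $r^k=i^k(2+\sqrt3)^{-k}$ since $2-\sqrt3=(2+\sqrt3)^{-1}$):

```python
import numpy as np

n = np.arange(1, 200001)
lhs = np.sum((-1.0) ** n / (n * (2 + np.sin(n))))

k = np.arange(1, 201)
r = 1j ** k * (2 + np.sqrt(3.0)) ** (-k)          # r^k with r = i(2 - sqrt(3))
rhs = -np.log(2) / np.sqrt(3) \
      - (2 / np.sqrt(3)) * np.sum(r * np.log(1 + np.exp(1j * k))).real
print(lhs, rhs)   # both ≈ -0.2720...
```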
Is $\mathbb{R}^\infty$ connected?
Hint: Consider the set of all bounded sequences. Is it open? Is it closed?
Finding probability using Var
The problem here is that you are mistreating the distribution $F$. If $F$ is the distribution for the number of Heads resulting from tossing two fair coins then you want to write $$\text{Var}(F+B)=\text{Var}(F)+\text{Var}(B)$$ We get $\text{Var}(F)=.5$ and all is good. We get $$.5+\text{Var}(B)=.55$$ which makes perfect sense. But in your first expression, you appear to want $F$ to be the distribution for a single coin. In that case you should have written $$\text{Var}(F_1+F_2+B)=2\text{Var}(F)+\text{Var}(B)$$ where the $F_i$ are i.i.d. and both equal $F$. But in this case $\text{Var}(F)=.25$ and again we get $.5+\text{Var}(B)=.55$. In either case, you should not have written $2F$, as no variable is being multiplied by $2$.
How to use Holder inequality in PDE?
Actually, it's the Cauchy–Schwarz inequality, in the elementary form $2ab\leq a^2+b^2$, that is used here. \begin{align} 8|\eta D_i\eta D_j u D_{ij}u|&=|4 D_i\eta D_j u\cdot 2 \eta D_{ij}u|\\&\leq\frac{1}{2}\left(16 (D_i\eta)^2 (D_j u)^2+4\eta^2(D_{ij}u)^2\right)\\&=8 (D_i\eta)^2 (D_j u)^2+2 \eta^2(D_{ij}u)^2. \end{align} Hence we know $$8\eta\sum_{i,j=1}^n D_i\eta D_j u D_{ij}u\geq -8|D\eta|^2|D u|^2-2\eta^2\sum_{i,j=1}^n(D_{ij}u)^2.$$
Which is the fundamental group of $\mathbb{R}^2-\{(0,0),(0,1)\}?$
Your space is homotopy equivalent to the wedge of two circles (the figure eight), just as $\mathbb{R}^2\setminus\{0\}$ is homotopy equivalent to the circle. As a result, the fundamental group is $\mathbb{Z}*\mathbb{Z}$, the free product (the coproduct in the category of groups) of $\mathbb{Z}$ with itself, i.e. the free group on two generators.
Are all analytic continuations consistent with $1-1+1-1+\dots = \frac{1}{2}$?
Consider the function: $$f(s)=\sum_{n=1}^\infty\left(\frac1{(2n-1)^s}-\frac{1+\frac sn}{(2n)^s}\right)$$ Then it is obvious that we have $$f(s)=\eta(s)-\frac s{2^s}\zeta(s+1)$$ which, as $s\to0$, gives: $$\sum_{n=1}^\infty(-1)^{n+1}\sim-\frac12$$
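Numerically (a sketch assuming the mpmath library, with the Dirichlet eta function available as altzeta), $f(s)=\eta(s)-s2^{-s}\zeta(s+1)$ indeed approaches $-\frac12$ as $s\to0^+$:

```python
from mpmath import mp, altzeta, zeta, mpf

mp.dps = 25
f = lambda s: altzeta(s) - s * 2**(-s) * zeta(s + 1)   # eta(s) - s 2^{-s} zeta(s+1)
for s in (mpf("0.1"), mpf("0.01"), mpf("0.001")):
    print(s, f(s))   # ≈ -0.465, -0.497, -0.4997: tending to -1/2
```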
Further deriving the weak field limit in general relativity
A short answer: For your first question, here is a hint: consider that $\gamma = \frac{1}{\sqrt{1 - \beta^2}}$ where $\beta = v/c$, and approximate $\gamma$ when $v \ll c$ (slow-moving particle). For the second and third questions: yes, $\frac{dx^i}{d\tau}\frac{dx^j}{d\tau}=\gamma^2 u^i u^j$ and $\gamma^2 \approx 1$, so $\frac{dx^i}{d\tau}\frac{dx^j}{d\tau}=u^i u^j$; if the particle's velocity is sufficiently small, the product of these two vectors is negligible, so we consider only the "time component", i.e. $\Gamma_{00}^a\frac{dx^0}{d\tau}\frac{dx^0}{d\tau}$. But $\frac{dx^0}{d\tau}\approx 1$, so $\frac{d^2x^a}{d\tau^2}+\Gamma_{00}^a=0$. About the last point: when the spacetime is static, i.e. time-independent, all time derivatives of the metric vanish. Note: the $\gamma$ factor mentioned here is called the Lorentz factor; Wikipedia has an article on it.
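Spelling out the first hint, the binomial expansion gives $$\gamma=(1-\beta^2)^{-1/2}=1+\tfrac{1}{2}\beta^2+\tfrac{3}{8}\beta^4+\cdots\approx 1,\qquad \gamma^2=(1-\beta^2)^{-1}=1+\beta^2+\cdots\approx 1,$$ both to first order in $\beta^2$, which is exactly the approximation used above when $v\ll c$.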
What does this function do? (why do we xor?)
The elements of $GF(8)$ are presumably represented as $3$-bit numbers standing for vectors over $GF(2)$. Then XOR is the addition in the field. When $k=0$ the function returns $x$; When $k=1$ the function returns $x^2+x$; When $k=2$ the function returns $(x^2+x)^2+x^2+x$; and so on, where addition and multiplication are in $GF(8)$.
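A runnable sketch of this arithmetic (the choice of irreducible polynomial $x^3+x+1$ is an assumption; the original code may use a different one, but addition-as-XOR and the iteration $x\mapsto x^2+x$ are as described):

```python
def gf8_mul(a, b):
    # carry-less multiplication of 3-bit field elements...
    r = 0
    for i in range(3):
        if (b >> i) & 1:
            r ^= a << i
    # ...reduced modulo x^3 + x + 1 (0b1011)
    for i in (4, 3):
        if (r >> i) & 1:
            r ^= 0b1011 << (i - 3)
    return r

def step(x):
    return gf8_mul(x, x) ^ x     # x -> x^2 + x: squaring, then XOR (= addition)

x = 0b110                         # k = 0 returns x; each step applies x -> x^2 + x
for k in range(3):
    print(k, bin(x))
    x = step(x)
```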
How to find visible part of prism
If you see a face, you see all four points on it. Compute the centers $C_k$ of all the faces, and the outer normal $\bf n_k$ to each face. Compute the vectors $\bf{v_k} = \vec {AC_k}$. Only the faces for which $\bf {v_k} \cdot \bf{n_k} <0$ will be visible. Alternatively, associate to each vertex $P_k$ the normals $\bf n_{k,j}$ corresponding to each of the three concurrent faces. The visible points will be those for which at least one of the dot products $\vec {AP_k} \cdot \bf n_{k,j }$ is negative. In this way you can also classify the points as having $3,2,1,0$ negative dot products, and those with $0$ will be the obscured points.
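A sketch of the face test (the cube geometry below is just an example):

```python
import numpy as np

def visible_faces(A, centers, normals):
    # face k, with center C_k and outward normal n_k, is visible from
    # the eye point A exactly when (C_k - A) . n_k < 0
    A = np.asarray(A, float)
    return [k for k, (C, n) in enumerate(zip(centers, normals))
            if np.dot(C - A, n) < 0]

# unit cube centered at the origin, eye at (5, 0, 0): only the +x face is seen
normals = np.array([[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1]], float)
centers = 0.5 * normals
print(visible_faces([5, 0, 0], centers, normals))  # [0]
```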
Write down the matrix of T with respect to the ordered basis $\{v_1,v_2,v_3\}$
Hint: Use the fact that, by definition: $$ Tv_1=v_1 \qquad Tv_2=v_2 \qquad T v_3=0 $$
The preimage of $\triangle$ is a compact zero-dimensional manifold.
Jellyfish, you need to pay attention to details yourself! Aren't $X$ and $Z$ of complementary dimension in $Y$? Do the arithmetic. No, it doesn't assume transversality of the maps.
easy inequality to prove
Exponentiate both sides; the claim is equivalent to showing $xe^{\frac{1}{x}-1} > 1$. But $e^t = 1+t+\cdots > 1+t$ whenever $t$ is positive (here $t=\frac{1}{x}-1$, which is positive when $0<x<1$), and thus $xe^{\frac{1}{x}-1} > x(1+\frac{1}{x}-1) = 1$.