Please explain the algebraic manipulations between these two formulas for Stirling numbers of the second kind
We have $$S(n,r)=\frac1{r!}\sum_{i = 0}^r (-1)^i\binom{r}{i}(r-i)^n$$ Note that $i$ runs from $0$ to $r$, but we can also let it run from $r$ down to $0$, which is equivalent to replacing every $i$ with $r-i$. We get $$S(n,r)=\frac1{r!}\sum_{i = 0}^r (-1)^{r-i}\binom{r}{r-i}(r-(r-i))^n$$ Note that $\binom{r}{r-i}=\binom{r}{i}$, so this reduces to $$S(n,r)=\frac1{r!}\sum_{i = 0}^r (-1)^{r-i}\binom{r}{i}i^n$$ and that's the last line.
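As a quick sanity check, the two sums can be compared numerically. Here is a small Python sketch (mine, not part of the original answer; the helper names are made up) verifying that the first and last formulas agree:

```python
from math import comb, factorial

def stirling_forward(n, r):
    # S(n, r) with the index i running from 0 up to r
    return sum((-1)**i * comb(r, i) * (r - i)**n for i in range(r + 1)) // factorial(r)

def stirling_reversed(n, r):
    # the same sum after the substitution i -> r - i
    return sum((-1)**(r - i) * comb(r, i) * i**n for i in range(r + 1)) // factorial(r)

for n in range(1, 9):
    for r in range(1, n + 1):
        assert stirling_forward(n, r) == stirling_reversed(n, r)
```

For example, both give $S(4,2)=7$, the number of ways to partition a $4$-element set into $2$ blocks.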
Have I computed this definite integral correctly?
Before computing such an integral, you should first check if it converges. In your case, the value $x=\dfrac{\pi}{16}\in\left[0;\dfrac{\pi}{6}\right]$ is clearly a problematic point, as $$\lim_{\underset{x\neq\frac{\pi}{16}}{x\to\frac{\pi}{16}}}(\sec(8x))=+\infty$$ Let $f\colon x\mapsto\tan^3(8x)\sec^3(8x)$ and let us check whether $\displaystyle\int_0^\frac{\pi}{16}f(x)\,\mathrm{d}x$ converges. In fact, there are many ways to check the latter convergence. I chose to simply compute an antiderivative $F$ of $f$ and then find out its limit at point $x=\dfrac{\pi}{16}$. We can take for example $$F\colon x\mapsto-\frac{1}{240}\sec^5(8x)\left(5\cos(16x)-1\right)$$ One can now easily prove that the limit of $F$ when $x$ tends to $\dfrac{\pi}{16}$ is equal to $+\infty$, and it follows that $\displaystyle\int_0^\frac{\pi}{16}f(x)\,\mathrm{d}x$ diverges. As $$\int_0^\frac{\pi}{6}f(x)\,\mathrm{d}x=\underbrace{\int_0^\frac{\pi}{16}f(x)\,\mathrm{d}x}_{I_1}+\underbrace{\int_\frac{\pi}{16}^\frac{\pi}{6}f(x)\,\mathrm{d}x}_{I_2}$$ converges iff $I_1$ and $I_2$ converge (by definition), we can finally conclude that $$\int_0^\frac{\pi}{6}\tan^3(8x)\sec^3(8x)\,\mathrm{d}x$$ does not converge.
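To back this up numerically, one can check by finite differences that $F'=f$, and watch $F$ blow up as $x\to\frac{\pi}{16}$ from the left. A small Python sketch (my own, using the formulas from above):

```python
from math import cos, tan, pi

def sec(u):
    return 1.0 / cos(u)

def f(x):
    # the integrand tan^3(8x) sec^3(8x)
    return tan(8 * x)**3 * sec(8 * x)**3

def F(x):
    # the antiderivative proposed above
    return -sec(8 * x)**5 * (5 * cos(16 * x) - 1) / 240

# central difference: F'(x0) should agree with f(x0)
x0, h = 0.1, 1e-6
assert abs((F(x0 + h) - F(x0 - h)) / (2 * h) - f(x0)) < 1e-4

# F grows without bound as x -> pi/16 from the left
values = [F(pi / 16 - eps) for eps in (1e-2, 1e-4, 1e-6)]
assert values[0] < values[1] < values[2]
```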
Baby Rudin: Theorem 7.17 understanding how the mean value theorem is used.
Applying the Mean Value Theorem to $h:=f_n -f_m\colon [x,t] \to \mathbf R$ gives that there exists a $\xi \in (x,t)$ with $$h(x)-h(t)= h'(\xi)(x-t).$$ Now take the absolute value of this equality and use the inequality for $|h'|$. If $a\leq x,t \leq b$ we have $|x-t|\leq b-a$, which shows the last inequality.
Jacobian with right inverse
We assume that $f\in C^1$. $Jac(f)$ has a right inverse at $X_0$ iff $Jac(f)$ is surjective at $X_0$; in that case $Jac(f)$ is surjective on $U$, a neighborhood of $X_0$, and $f_{|U}$ is an open map. The converse also holds.
Does this density limit exist?
Well, "can we say anything" is fuzzy enough that one can't give a definitive no. But it seems to me that we can't say anything that's anything like the sort of thing you want to say, unless you strengthen the hypotheses. EDIT: No, two counterexamples, illustrating it seems to me different sorts of things that can go wrong. EDIT: No, three examples. The first two seem to me to indicate you need to assume some smoothness for $T$. The third shows that even if you assume $T$ is infinitely differentiable on $\mathbb R^n$ you may also want to assume that the derivative of $T$ at $a$ is non-singular (I suspect this is the case you have in mind). EDIT: Well, for 100 points the OP should get his or her money's worth. An example where $T$ is linear (and non-singular), showing that we also need to strengthen the hypotheses on $\mu$; the other examples were all about bad $T$. First example: Take $\mu=\lambda$, so $a=1$. There exists $T$ satisfying all your hypotheses such that $\lambda(T(B(0,r)))=0$ for every $r>0$ (making the second limit $0$, but only because of the escape clause you inserted in the definition; the second limit does not exist in any interesting sense). Write $\mathbb R^n\setminus\{0\}$ as the disjoint union of measurable sets $E_k$ which tend to $0$ in the sense that $\sup_{x\in E_k}|x|\to0$. And also so that $\operatorname{dist}(0,E_k)>0$, and in fact $$\frac{\operatorname{diam}(E_k)}{\operatorname{dist}(0,E_k)}\to0.$$Choose $x_k\in E_k$, and let $$T=\sum_kx_k\chi_{E_k}.$$Then $T(0)=0$, $T$ is differentiable at the origin, in fact $|x-T(x)|=o(|x|)$, but $\lambda(T(\mathbb R^n))=0$. That's pretty cheap. Could be modified to give more interesting badness I suspect, but it seems to me this version is enough to show that you really need more hypotheses to get an interesting positive result. Second example: Again we're going to have $a=1$. This time we're going to get $\lambda(T(B(0,r)))\sim\lambda(B(0,r))$ but $\mu(T(B(0,r)))=0$. 
Take $n=2$, and regard $\mathbb R^2$ as the same as $\mathbb C$ just for the sake of notation. Let $$S=\{re^{it}\,:\,0\le r\le 1,r^2\le t\le 2\pi-r^2\}.$$There exists a map $T:\mathbb C\to\mathbb C$ such that $T(0)=0$, $T(\mathbb C)\subset S$, and $|z-T(z)|=o(|z|)$, so $T$ is differentiable at the origin. For example map $re^{it}$ to $re^{i\phi_r(t)}$ for suitable $\phi_r$. (Or rather do that for $0\le r\le 1$ and do whatever you want for $r>1$.) If you define $T$ that way then $$\frac{\lambda(T(B(0,r)))}{\lambda(B(0,r))}\to1.$$ Now $\mu$ is going to be the measure supported on the positive real axis $[0,\infty)$, such that $$\mu((\alpha,\beta))=\pi(\beta^2-\alpha^2).$$Then $\mu(B(0,r))=\lambda(B(0,r))$ for all $r$, while $\mu(T(B(0,r)))=0$. Third example. $n=1$, $a=0$, $T(x)=x^2$, $$\mu(E)=\lambda(E\cap(0,\infty)).$$Now the first limit is $1/2$ while the second is $1$. Example the fourth: As in the second example consider $\mathbb R^2=\mathbb C$, and let $\mu$ be supported on the positive real axis with $\mu((\alpha,\beta))=\pi(\beta^2-\alpha^2)$. Let $T(x+iy)=2x+iy$. First limit is $1$, second is $2$.
Bijection between tetrahedron of side length $n$ and three objects from a list of length $n + 1$.
One way to interpret the bijection in this picture is to say that the orange cell is specified by which positive-slope diagonal and which negative-slope diagonal of the triangle it's in. But for generalizing to higher dimensions, a different system of coordinates is useful. In this picture, we can locate the orange cell by saying "It's $2$ steps away from the bottom, $2$ steps away from the left side of the triangle, and $1$ step away from the right side." In general, any cell in the triangle can be given coordinates by saying "It's $x$ steps away from the bottom, $y$ steps away from the left side of the triangle, and $z$ steps away from the right side" for some integers $x, y, z \ge 0$ with $x+y+z = n-1$. There are $\binom{n+1}{2}$ ways to choose such $x$, $y$, and $z$. (This is no longer as obvious, but it follows from the well-known stars and bars method.) A $k$-dimensional simplex has $k+1$ faces, and we can uniquely specify each cell of the simplex by saying "It's $x_0$ steps away from the $0^{\text{th}}$ face, $x_1$ steps away from the $1^{\text{st}}$ face, ..., $x_k$ steps away from the $k^{\text{th}}$ face." The coordinates of each cell are now integers $x_0, x_1, \dots, x_k \ge 0$ with $x_0 + x_1 + \dots + x_k = n-1$, and there are $\binom{n+k-1}{k}$ ways to find such a partition (again, by stars and bars). This coordinate system, by the way, is nothing more than identifying the $k$-simplex with the subset $$\{(x_0, x_1, \dots, x_k) \in \mathbb R^{k+1} : x_0 + x_1 + \dots + x_k = n-1, x_0, x_1, \dots, x_k \ge 0\}$$ (a scaled version of the standard $k$-simplex) and identifying the cells inside it with the lattice points contained in the simplex.
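The counting claims are easy to confirm by brute force; here is a short Python sketch (the function name is mine) that enumerates the lattice points of the scaled simplex and compares with the stars-and-bars counts:

```python
from itertools import product
from math import comb

def simplex_cells(n, k):
    # lattice points (x_0, ..., x_k) with all x_i >= 0 and sum = n - 1
    return [p for p in product(range(n), repeat=k + 1) if sum(p) == n - 1]

for n in (3, 5, 7):
    assert len(simplex_cells(n, 2)) == comb(n + 1, 2)  # triangle
    assert len(simplex_cells(n, 3)) == comb(n + 2, 3)  # tetrahedron
```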
What am I doing wrong calculating natural logarithm of this matrix?
Let's first look at $1\times 1$ matrices instead. Those are sometimes more intuitive. For instance, the natural logarithm of the matrix $[e]$ is $[1]+2\pi i n[1]$ because $e^{1+2\pi in}=e$ for any integer $n$. In your case it's exactly the same thing happening, only your matrices are $3\times 3$.
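The $1\times1$ case can even be checked numerically; this little Python snippet (mine) just confirms the multivaluedness of the scalar logarithm:

```python
import cmath
import math

# e^(1 + 2*pi*i*n) = e for every integer n, so every 1 + 2*pi*i*n
# is a logarithm of e: the (matrix) logarithm is multivalued.
for n in (-2, -1, 0, 1, 2):
    assert abs(cmath.exp(1 + 2j * math.pi * n) - math.e) < 1e-9
```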
When solving for angles using law of sines, how do you know if the angle is obtuse or acute?
The simple answer (in this problem and many others like it) is that you do not know. The "solution" presented in the video is incomplete. Given the known information (your friend is $40$ feet away, the distance from your friend to the kite is $30$ feet, and the angle between your line of sight to your friend and the line of sight to the kite is $40$ degrees), and accepting the unspoken assumption that all of these things are in one vertical plane (that is, the kite is directly in front of you when you face your friend, not off to the left or right), there are two places where the kite might possibly be, namely, the two intersections of the line from your position at $40$ degrees above with the circle of radius $30$ feet around your friend. At one of those intersections, the angle between the string and your line of sight is $$\arcsin\left(\frac{40}{30}\sin(40^\circ)\right) \approx 58.987^\circ,$$ and at the other the angle is $$180 - \arcsin\left(\frac{40}{30}\sin(40^\circ)\right) \approx 121.013^\circ.$$ In the first case the string makes an angle approximately $81.013^\circ$ with the ground, and in the other the string makes an angle approximately $18.987^\circ$ with the ground. Mathematically, either one of these angles is a possible solution of the given problem, so instead of a unique solution you have a solution set containing two possible angles. You could apply additional real-world information, such as noticing that the wind is relatively light and having experience that tells you your friend cannot fly a kite in such a wind at an angle of less than $20^\circ$ above horizontal, and therefore rule out the second answer, but you cannot do that using only the information given in the original problem. 
Moreover, if the numbers in the problem were different in such a way that both of the geometric solutions put the angle of the string at a plausible angle from the ground for the given wind conditions, you would not be able to identify the angle of the string uniquely even with this additional real-world information. One case where you could determine mathematically that the angle is acute is if the string is longer than the distance between you and your friend. For example, suppose the string were $45$ feet long. In that case, one of the two intersections of the line at an angle $40^\circ$ above the direction from you to your friend and the circle of radius $45$ around your friend would be below you. Of course in the real world kites cannot fly underground, but even without that knowledge, we can say the solution is mathematically unique, since if the kite were at that second intersection point we would have said you were standing at a $140^\circ$ angle of the triangle made by you, your friend, and the kite, not at a $40^\circ$ angle.
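The two candidate angles are quick to compute; here is a Python sketch of the arithmetic, using the numbers from the problem above:

```python
from math import asin, sin, radians, degrees

# law of sines: sin(theta) / 40 = sin(40 degrees) / 30
s = (40 / 30) * sin(radians(40))
acute = degrees(asin(s))   # angle between string and line of sight, acute case
obtuse = 180 - acute       # its supplement, the obtuse case

# corresponding angles between the string and the ground
ground_1 = 180 - 40 - acute
ground_2 = 180 - 40 - obtuse
print(acute, obtuse, ground_1, ground_2)
```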
General approach to solving problems with extending multivariable functions to be continuous, e.g. $f(x,y,z) = \frac{x^2-y^2+z^2}{x+y}$
There's a nice sufficient condition for nonexistence of a limit in situations similar to your examples. (By "limit" I mean finite limit). To keep things simple, let $g,h$ be continuous on $\mathbb R^2,$ and let $Z_h$ be the zero set of $h.$ Then $f=g/h$ is defined and continuous on $D_f = \mathbb R^2\setminus Z_h.$ I'll assume each point of $Z_h$ is a limit point of $D_f.$ Claim 1: Suppose $(x_0,y_0)\in Z_h$ and $g(x_0,y_0)\ne 0.$ Then $f$ is unbounded in $B((x_0,y_0),r) \cap D_f$ for every $r>0.$ Hence $f$ doesn't have a limit at $(x_0,y_0)$ within $D_f.$ Proof: Fix $r>0.$ Let $(x_n,y_n)\in D_f$ be a sequence converging to $(x_0,y_0).$ Then the tail end of this sequence lies in $B((x_0,y_0),r).$ Hence it is enough to show $f(x_n,y_n)$ is unbounded. Now $$f(x_n,y_n)=\frac{g(x_n,y_n)}{h(x_n,y_n)}.$$ But $g,h$ are both continuous at $(x_0,y_0),$ so the numerator $g(x_n,y_n)\to g(x_0,y_0)\ne 0,$ while the denominator $h(x_n,y_n) \to h(x_0,y_0)=0.$ This implies $f(x_n,y_n)$ is unbounded, and we're done. In your example $f(x,y) = \dfrac{e^{xy-1}}{x(x^2-y)},$ we see $Z_h$ equals the union of the curves $x=0, y=x^2.$ Since $e^{xy-1}$ is never $0,$ $f$ fails to have a limit at each point of $Z_h$ by Claim 1. Sometimes you can answer these questions quickly! Here is a companion to Claim 1: Claim 2: Suppose $(x_0,y_0)\in Z_h$ and it is the limit of a sequence $(x_n,y_n)$ in $Z_h$ such that $g(x_n,y_n)\ne 0$ for all $n.$ Then the conclusion of Claim 1 holds. Proof: By Claim 1, for each $n$ there exists $(x_n',y_n') \in D_f$ such that $|(x_n',y_n')-(x_n,y_n)| < 1/n$ and $|f(x_n',y_n')| > n.$ Because $(x_n,y_n)$ converges to $(x_0,y_0),$ the same is true of $(x_n',y_n').$ We have thus produced a sequence in $D_f$ converging to $(x_0,y_0)$ along which $f$ is unbounded. The conclusion follows. These ideas, and the claims, extend naturally to $\mathbb R^3.$ Going to your example $$f(x,y,z) = \dfrac{x^2-y^2+z^2}{x+y},$$ we have $Z_h = \{(x,-x,z): x,z\in \mathbb R\},$ as you found. 
If $z\ne 0,$ then $g(x,-x,z) \ne 0.$ Thus from Claim 1, $f$ has no limit at a point of $Z_h$ where $z\ne 0.$ What about the points $(x,-x,0)\in Z_h?$ We use Claim 2: If $(x_0,-x_0,0)$ is one of these, then $(x_0,-x_0,1/n)\to (x_0,-x_0,0),$ and each $(x_0,-x_0,1/n)$ satisfies the hypotheses of Claim 2. By Claim 2, $f$ does not have a limit at $(x_0,-x_0,0).$ We have shown $f$ fails to have a limit at each point in $Z_h.$ The strategy described above is not often mentioned in my experience, but as we've seen, it can be useful in the case where $f=g/h$ and $Z_h$ is "large" (say a curve or a surface). I'll stop here as this is the case I wanted to focus on.
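Claim 1 is easy to see numerically in the two-variable example above. This Python sketch (mine) follows a sequence approaching the point $(0,1)\in Z_h$ and watches $|f|$ blow up:

```python
from math import exp

def f(x, y):
    return exp(x * y - 1) / (x * (x**2 - y))

# approach (0, 1), a point of Z_h on the line x = 0, along (1/n, 1)
vals = [abs(f(1 / n, 1.0)) for n in (10, 100, 1000, 10000)]
assert all(a < b for a, b in zip(vals, vals[1:]))  # |f| grows without bound
```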
Proof for a graph has Euler tour iff each vertex has even degree
Notice that in order to obtain a contradiction, what we need to show is that there is no walk longer than $W$. So while making an assumption about $W'$, we just need to choose $W'$ such that the length of $W'$ is greater than that of $W$; we do not assume $W'$ is the longest walk. In other words, there might be another, shorter walk, say $v_i, a_1, a_2,\dots,a_m$, which makes the degree of $v_i$ even but is not part of the proof, since we are only assuming there is a longer walk. In this case, the longest walk might be $u,v_i,v_{i+1},\dots, v_k, v_1, v_2,\dots, v_i, a_1, a_2,\dots,a_m$ with $u = a_m$ (because we need a closed walk in order for it to be the longest). As long as we do not assume $W'$ is the longest walk, only a longer one, it may contain vertices of odd degree. That is the same reason why the proof had to state $v_k = v_0$ for $W$ (since we assumed it is the longest).
Is there a faithful two-dimensional representation for the Bianchi IV Lie algebra?
No. Indeed, after conjugation, the image would be exactly the Lie algebra of upper triangular matrices, which is not isomorphic to yours since it has a nontrivial center. Actually, over the complex numbers every Lie subalgebra of $\mathfrak{gl}_2$ is isomorphic to $\{0\}$, $\mathfrak{a}$ the 1-dimensional abelian Lie algebra, $\mathfrak{b}$ (the 2-dim non-abelian Lie algebra), $\mathfrak{sl}_2$, or the direct product of one of these three by $\mathfrak{a}$.
Finding all compact subsets of a given set
Hint: take $C \subset K$. If $C$ is finite, it is compact. Assume now that $C$ is infinite, so that $C$ contains some of the terms of the sequence $a_n = 1/n$, and possibly zero. Recall that since $C \subset K$ and $K$ is compact, the set $C$ will be compact if and only if it is closed. On the other hand, regardless of whether $0 \in C$, we know that $C \setminus \{0\}$ can be ordered as a subsequence of the former sequence, i.e. $C \setminus \{0\} = (a_{n_k})_k$. Try to extend what you know about $K$ and $K \setminus \{0\}$, which is the case $n_k = k$. If $0 \not \in C$, then $a_{n_k} \to 0$ is a sequence in $C$ but $0 \not \in C$, hence $C$ is not closed and in particular, it is not compact. On the other hand, if $0 \in C$, any convergent subsequence $(b_k)$ in $C$ either contains $0$ or is a convergent subsequence of $(a_n)_n$, and thus $\lim b_k = \lim a_n = 0 \in C$. This shows that an infinite $C \subset K$ is compact if and only if $0 \in C$. If you need to use covers, here's a possible argument: suppose that $0 \not \in C$. Then you should not be able to extract a finite subcover from $\{(a_{n_k},+\infty) \cap K\}_k$, since any finite subfamily covers only finitely many terms of the sequence (recall that $a_n = 1/n$). If $0 \in C$ and $\mathcal{U}$ is a covering of $C$ by open sets, then we have some open set $U \ni 0$ of the covering, which then has to contain all but finitely many elements of $C$. To extract a finite subcover, pick an element of $\mathcal{U}$ for each point in $C \setminus U$.
Graph with both slant and horizontal asymptotes
You cannot have, with a single rational expression $$y=\dfrac{P(x)}{Q(x)}=\dfrac{a_nx^n+\cdots}{b_px^p+\cdots} \ \ \ (1)$$ (with $n=p$ or $n=p+1$, the only cases of interest) a horizontal asymptote for $x\rightarrow +\infty$ and a slant asymptote for $x\rightarrow -\infty$. This is due to the fact that expression (1) has the same behaviour at $-\infty$ and $+\infty$: it is equivalent to $\dfrac{a_n}{b_p}x^{n-p}$; thus if $n=p$, there is the same horizontal asymptote with equation $y=\dfrac{a_n}{b_p}$ at both ends, and if $n=p+1$, there is the same slant asymptote at both ends (as you have explained in your question). A different behavior at $-\infty$ and $+\infty$ requires the introduction of non-rational functions, for example a square root as in $y=x-\sqrt{x^2+1}$ (which is a branch of a hyperbola) or an exponential as in $y=\dfrac{x}{1+e^{-x}}$ (see graphical representation below), etc., each of which has a horizontal asymptote at one end and a slant asymptote at the other, as you desire.
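For the two non-rational examples, the asymmetric behaviour is easy to check numerically; a small Python sketch (my own):

```python
from math import sqrt, exp

def y1(x):
    return x - sqrt(x**2 + 1)

def y2(x):
    return x / (1 + exp(-x))

# y1: horizontal asymptote y = 0 at +infinity, slant asymptote y = 2x at -infinity
assert abs(y1(1e6)) < 1e-5
assert abs(y1(-1e6) - 2 * (-1e6)) < 1e-5

# y2: slant asymptote y = x at +infinity, horizontal asymptote y = 0 at -infinity
assert abs(y2(100.0) - 100.0) < 1e-9
assert abs(y2(-100.0)) < 1e-9
```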
Solution of equation $\frac{x\cdot 2014^{\frac{1}{x}}+\frac{1}{x}\cdot 2014^x}{2} = 2014$
Applying AM-GM probably gives the most elegant solution (and indeed the presentation is quite suggestive of AM-GM), but this can also be solved by using an idea behind AM-GM. Noting first that $x>0$, we have: $$\begin{align} &\frac{x\cdot 2014^{\frac{1}{x}}+\frac{1}{x}\cdot 2014^x}{2} = 2014 \\\implies&x2014^{\frac{1}{x}-1}+\frac{1}{x}2014^{x-1}=2 \\\implies&\left(\sqrt{x2014^{\frac{1}{x}-1}}-\sqrt{\frac{1}{x}2014^{x-1}}\right)^2+2\sqrt{2014^{\frac{1}{x}+x-2}}=2 \\\implies&\left(\sqrt{x2014^{\frac{1}{x}-1}}-\sqrt{\frac{1}{x}2014^{x-1}}\right)^2+2\sqrt{2014^{\left(\sqrt{\frac{1}{x}}-\sqrt{x}\right)^2}}=2 \\\implies&2\sqrt{2014^{\left(\sqrt{\frac{1}{x}}-\sqrt{x}\right)^2}}\le2 \\\implies&2014^{\left(\sqrt{\frac{1}{x}}-\sqrt{x}\right)^2}\le1 \\\implies&\left(\sqrt{\frac{1}{x}}-\sqrt{x}\right)^2\le0 \\\implies&\left(\sqrt{\frac{1}{x}}-\sqrt{x}\right)^2=0 \\\implies&\sqrt{\frac{1}{x}}-\sqrt{x}=0 \\\implies&\frac{1}{x}=x \\\implies&x=1 \end{align}$$ where we have used the fact that $x> 0$ several times.
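A quick numeric cross-check of the conclusion (Python, mine): the left-hand side equals $2014$ at $x=1$ and exceeds it elsewhere on $(0,\infty)$, consistent with AM-GM.

```python
def lhs(x):
    # the left-hand side of the original equation
    return (x * 2014**(1 / x) + (1 / x) * 2014**x) / 2

assert abs(lhs(1.0) - 2014) < 1e-9
for x in (0.25, 0.5, 0.9, 1.1, 2.0, 4.0):
    assert lhs(x) > 2014
```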
To show a function is analytic
Hint: Use Morera's theorem for $f_n$. The details are below. For any compact $K$ of $G$ the inequality $|f'_n(z)|<|h(z)|$ shows that the $f'_n$ are uniformly bounded on $K$. Therefore the family $\{f'_n\}$ is normal (Montel's theorem). Therefore there is a subsequence $f'_{n_k}$ that converges uniformly. Hence $f_{n_k}$ converges uniformly (to $f$). Now your plan is applicable. $f$ is continuous for being the uniform limit of continuous functions (analytic, actually). And Morera is satisfied for $f$ because it is satisfied for $f_{n_k}$ and they converge uniformly to $f$.
Difference between arbitary collection and finite collection
An arbitrary collection of open sets is just that: any collection of open sets at all, finite or infinite, with no restriction save that the sets be open. If you change finite to arbitrary in the second statement, it becomes false. For example, in $\Bbb R$ the sets $(-x,x)$ for $x>0$ are all open, but their intersection is $\{0\}$, which is not open: $\bigcap_{x>0}(-x,x)=\{0\}$. Here are two more examples. For each $q\in\Bbb Q$ let $U_q=\Bbb R\setminus\{q\}=(\leftarrow,q)\cup(q,\to)$; this is an open set. But $\bigcap_{q\in\Bbb Q}U_q=\Bbb R\setminus\Bbb Q$, the set of irrational numbers, which is definitely not an open set. For each $n\ge 2$ let $U_n=\left(\frac1n,1+\frac1n\right)$; then $$\begin{align*}\bigcap_{n\ge 2}U_n&=\left(\frac12,\frac32\right)\cap\left(\frac13,\frac43\right)\cap\left(\frac14,\frac54\right)\cap\ldots\\ &=\left(\frac12,1\right]\;.\end{align*}$$ You may have to think a bit about that intersection to see that it really is $\left(\frac12,1\right]$.
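The third example can be probed mechanically; this Python sketch (mine) tests membership in the first ten thousand of the $U_n$:

```python
def in_all(x, N=10000):
    # is x in every U_n = (1/n, 1 + 1/n) for n = 2, ..., N?
    return all(1 / n < x < 1 + 1 / n for n in range(2, N + 1))

assert in_all(1.0)        # 1 belongs to every U_n
assert in_all(0.75)
assert not in_all(0.5)    # excluded already by U_2 = (1/2, 3/2)
assert not in_all(1.001)  # anything above 1 eventually falls out
```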
Help to find a proof in natural deduction
Your intuition is absolutely correct. Below I formalized it in a derivation in natural deduction. I assume that $\lnot P $ is a shorthand for $P \to \bot$, thus inference rules $\lnot_\text{intro}$ and $\lnot_\text{elim}$ are just special cases of $\to_\text{intro}$ and $\to_\text{elim}$. The following is a derivation (without assumptions) of the formula $(P \to \lnot P) \to (P \to Q)$ in natural deduction. Symbols $*$ and $\circ$ mark which assumptions are discharged by the corresponding instance of the rule $\to_\text{intro}$. The rule $\text{efq}$ (ex falso quodlibet or principle of explosion) is the special case of the rule $\text{raa}$ that does not discharge any assumption. \begin{equation} \dfrac{\dfrac{[P \to \lnot P]^\circ \qquad[P]^*}{\lnot P}\to_\text{elim} \qquad [P]^*}{\dfrac{\dfrac{\bot}{Q}\scriptsize{\ \text{efq}}}{\dfrac{P \to Q}{(P \to \lnot P) \to (P \to Q)}\to_\text{intro}^\circ}\to_\text{intro}^*}\lnot_\text{elim} \end{equation}
Question about determining whether vector field is conservative and about determining a potential function for said vector field.
If we manage to find a potential function $U$ for $\mathbf{F}$, we are done so let's try to do that. We need: $$ \frac{\partial U}{\partial x} = 2x + y \implies U(x,y,z) = \int (2x + y) \, dx + G(y,z) = x^2 + yx + G(y,z) $$ for some function $G = G(y,z)$ which depends only on $y,z$. Next, $$ \frac{\partial U}{\partial y} = z \cos(yz) + x \implies x + \frac{\partial G}{\partial y} = x + z \cos(yz) \implies \\ G(y,z) = \int z \cos(y z) \, dy + H(z) = \sin(yz) + H(z)$$ for some function $H = H(z)$ that depends only on $z$. Finally, $$ \frac{\partial U}{\partial z} = y \cos(yz) \implies y \cos(yz) + \frac{\partial H}{\partial z} = y \cos(yz) \implies H \equiv C $$ so a potential for $\mathbf{F}$ is given by $$ U(x,y,z) = x^2 + yx + \sin(yz) + C $$ where $C \in \mathbb{R}$ is arbitrary.
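One can double-check the potential numerically by comparing a finite-difference gradient of $U$ with $\mathbf{F}$; a short Python sketch (mine, with $C=0$ and made-up function names):

```python
from math import sin, cos

def U(x, y, z):
    # the potential found above, with C = 0
    return x**2 + y * x + sin(y * z)

def Fvec(x, y, z):
    # the components of F, read off from the partial derivatives above
    return (2 * x + y, z * cos(y * z) + x, y * cos(y * z))

x, y, z = 0.3, -1.2, 0.7
h = 1e-6
grad = (
    (U(x + h, y, z) - U(x - h, y, z)) / (2 * h),
    (U(x, y + h, z) - U(x, y - h, z)) / (2 * h),
    (U(x, y, z + h) - U(x, y, z - h)) / (2 * h),
)
assert all(abs(g - c) < 1e-6 for g, c in zip(grad, Fvec(x, y, z)))
```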
Let A be a real 3times 3 matrix. Which of the following conditions does not imply that A is invertible?
$A$ is invertible iff the rank is 3. The rank is the dimension of the image. So D) says that the rank is $3$. For E) there may be three vectors with nonzero image, but they may all be sent to the same vector so it doesnt guarantee that the image has dimension $3$.
Transformation of independent variables in regression (Measurement Error)
Let us take a look at the first case. The fact that we measure $X_i = X_i^*+3$ instead of $X_i^*$ won't affect the variance and covariance measures, so it will have no effect on $\hat{\beta}_1$. You can see this if you plug $X_i - 3$ in place of $X_i^*$ into $$ \hat{\beta}_1 = \frac{\sum (X^*_i - \bar{X}^*)(Y_i - \bar{Y})}{\sum(X^*_i-\bar{X}^*)^2}. $$ However, for the $\beta_0$ estimator, which measures location, you will get $$ \hat{\beta_0^*} = \bar{Y}_n - \hat{\beta_1}\bar{X}_n^* = \bar{Y}_n + 3\hat{\beta_1}-\hat{\beta}_1\bar{X}_n = \hat\beta_0 +3\hat{\beta}_1 . $$ Thus your $\hat{\beta}_0$ estimator will be a biased estimator with $$ b(\hat{\beta}_0^*)=E(\hat{\beta}_0^* - \beta_0) = \beta_0+3\beta_1 - \beta_0 = 3\beta_1. $$ As such, if $\beta_1 \neq 0$, then your $\hat\beta_0$ will be an inconsistent estimator. And as it is biased, there is no sense in talking about it being BLUE or efficient.
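A small simulation makes the point concrete. In this Python sketch (mine, pure-stdlib OLS on simulated data), shifting the regressor leaves the slope untouched and moves the intercept by exactly $3\hat\beta_1$:

```python
import random

random.seed(0)
beta0, beta1 = 5.0, 2.0
xs = [random.uniform(0, 10) for _ in range(1000)]
ys = [beta0 + beta1 * x + random.gauss(0, 1) for x in xs]

def ols(x, y):
    # simple-regression OLS estimates (intercept, slope)
    mx, my = sum(x) / len(x), sum(y) / len(y)
    b1 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx)**2 for xi in x))
    return my - b1 * mx, b1

b0_hat, b1_hat = ols(xs, ys)
b0_star, b1_star = ols([x - 3 for x in xs], ys)  # regress on X* = X - 3

assert abs(b1_star - b1_hat) < 1e-6                 # slope unchanged
assert abs(b0_star - (b0_hat + 3 * b1_hat)) < 1e-6  # intercept shifts by 3*b1
```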
almost sure convergence of expected value
By Markov's inequality, $\mathbb P(|X| > \varepsilon) \le \mathbb{E}[|X|]/\varepsilon = 0$ for all $\varepsilon > 0$. Therefore $|X| \le \varepsilon$ a.s. for all $\varepsilon > 0$ and therefore $|X| = 0$ a.s.
Shifting phase in Fourier frequency domain
Recall that translation in time (or space, if you prefer) corresponds to a modulation in the frequency domain; i.e. $$\mathcal{F} (T_\alpha f(x)) = \mathcal{F}( f(x-\alpha)) = E_\alpha \mathcal{F}(f)(\xi) = e^{-2\pi i \alpha \xi} \mathcal{F}(f)(\xi)$$ where I have denoted $T_\alpha$ to be the translation operator, $T_\alpha f(x) = f(x-\alpha)$, and $E_\alpha$ to be the modulation operator, $E_\alpha g(\xi) = e^{-2\pi i \alpha \xi}g(\xi)$, and the Fourier transform $\mathcal{F}(f) = \int_\mathbb{R} e^{-2\pi i \xi x}f(x)dx$. So if you want the signals to be offset in time, modulate in the frequency domain.
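For discrete signals the analogous DFT identity holds: a circular shift of $m$ samples corresponds to multiplying bin $k$ by $e^{-2\pi i k m/N}$. A self-contained Python sketch (mine, using a plain $O(N^2)$ DFT rather than any FFT library):

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

x = [1.0, 2.0, 0.5, -1.0, 0.0, 3.0, -2.0, 1.5]
N, m = len(x), 3

# modulate in the frequency domain: bin k gets the phase exp(-2*pi*i*k*m/N)
X = dft(x)
Y = [X[k] * cmath.exp(-2j * cmath.pi * k * m / N) for k in range(N)]
y = idft(Y)

shifted = x[-m:] + x[:-m]  # the circularly shifted signal x[n - m]
assert all(abs(yi - si) < 1e-9 for yi, si in zip(y, shifted))
```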
Easy probability question
If we interpret $H \implies C$ as $H \subseteq C$, then $H \cap C = H$ so $$P(H \vert C) = \frac{P(H \cap C)}{P(C)} = \frac{P(H)}{P(C)} \le 1$$ while $$P(C \vert H) = \frac{P(H \cap C)}{P(H)} = \frac{P(H)}{P(H)} = 1$$
Finding a simple spline-like interpolating function
I constructed a quadratic Bezier curve (somewhat like your b-spline idea), and then converted it to $y = f(\alpha, x)$ form. The result is: $$y = \frac{1 + 4 \alpha (x-1)-2 x+\sqrt{(1-4 \alpha)^2+8 (1-2 \alpha) x}}{2-4 \alpha}$$ My guess is that this isn't very good for your purposes because of the square root. Also, this formula only works for values of $\alpha$ with $0.25 \le \alpha \le 0.75$. Here are the graphs of these functions for $\alpha = 0.25, 0.35, 0.45, 0.55, 0.65, 0.75$ They are all (rotated) parabolas. The $0.25 \le \alpha \le 0.75$ restriction could be lifted by using a rational quadratic curve, instead. But the square root won't go away, so I want to know if you can live with that before I spend time working out the details of the rational case.
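If it helps, the closed form is easy to sanity-check numerically; this Python sketch (mine, with a made-up function name) verifies that the curve passes through $(0,0)$ and $(1,1)$ for the allowed values of $\alpha$:

```python
from math import sqrt

def bezier_y(alpha, x):
    # the y = f(alpha, x) form of the quadratic Bezier curve above
    return (1 + 4 * alpha * (x - 1) - 2 * x
            + sqrt((1 - 4 * alpha)**2 + 8 * (1 - 2 * alpha) * x)) / (2 - 4 * alpha)

for a in (0.25, 0.35, 0.45, 0.55, 0.65, 0.75):
    assert abs(bezier_y(a, 0.0)) < 1e-12        # passes through (0, 0)
    assert abs(bezier_y(a, 1.0) - 1.0) < 1e-12  # passes through (1, 1)
```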
continuous rational value function in R
No. Assume there were such an $\,f \,:\, [a,b] \to \mathbb{Q}$. If it's non-constant, there are $x,y \in [a,b]$ with $f(x) \neq f(y)$. You can then pick $c$ from $[f(x),f(y)] \setminus \mathbb{Q}$, since that set is non-empty. Now, can you show that there's a $z$ between $x$ and $y$ for which $f(z)=c$? If so, you have a contradiction, since $f(z) = c \notin \mathbb{Q}$ per the choice of $c$. Hint: Use that $f$ is continuous for the missing part.
The empty function and constants
An $n$-ary function on a set $X$ is a function $X^n\to X$, where $X^n$ is the $n$-fold cartesian product of $X$ with itself. Thus, a $0$-ary function is a function $X^0\to X$, not $\emptyset \to X$. So the question is to figure out what $X^0$ is. One way to reason about what it should be is to note that $X^n\times X^m$ is essentially the same as $X^{n+m}$ (as most people will agree is true for all $m,n>0$). To make this true also for $n=0$, we need, e.g., that $X^0\times X^m$ is essentially the same as $X^m$. Which set $Y$ has the property that $Y\times X^m$ is essentially just $X^m$ (for $X\ne \emptyset$)? Well, a moment's thought should reveal that the answer is that $Y$ can be any singleton set. So, to preserve some basic realizations about the cartesian product of sets, it makes sense to define $X^0$ (for nonempty $X$) to be a singleton set (whichever one you want). Another way to argue is categorically. The cartesian product of sets is a special case of the notion of categorical product, and $X^0$ corresponds to an empty product. The universal property for the empty product is just a terminal object in the category. The terminal objects in the category of sets are precisely the singletons. I just saw your edit: notice that the conventions agree: $X^0=X^\emptyset =\{\emptyset \to X\}$ is a singleton set.
Is there an analytically continued function of $z^p$ at zero?
If $p\in\mathbb N$, then yes, obviously. Otherwise, the answer is negative. So, you have $p=\frac mn$, with $m,n\in\mathbb N$, $n>1$ and $\gcd(m,n)=1$. Suppose that there were an analytic continuation to a larger set containing $0$. Let $f$ be that continuation. Then $f^n(z)=z^m$. Therefore, $f(0)=0$. Let $a_1z+a_2z^2+\cdots$ be the Taylor series of $f$ at $0$ (there is no $a_0$ here, since $a_0=f(0)=0$). Then $$(a_1z+a_2z^2+\cdots)^n=z^m.$$ But if you expand $(a_1z+a_2z^2+\cdots)^n$, you get a power series which begins with ${a_1}^nz^n$, and therefore $n=m$. This cannot happen, since $n>1$ and $\gcd(m,n)=1$.
Annihilator of maximal ideals in a finite dimensional algebra
I think I found a counterexample, very simple: the algebra of $2 \times 2$ upper triangular matrices over a field $k$. The subspace $$\begin{pmatrix} 0 & k \\ 0 & k\end{pmatrix}$$ is a two-sided maximal ideal. But its left annihilator is zero.
Probability of finding a fit for bin packing
A partial answer: If you have $n$ space and $k$ bins into which you wish to place an object of size $j$, you need to determine the number of ways in which there is at least one box of size at least $j$. For the "at least one box of size $j$" portion, we have $$\binom{k+n-j-2}{n-j-1}$$ since we are holding one box of size $j$ fixed and so finding the number of solutions to $x_1+x_2+ \cdots +x_{k-1} = n-j$. And to find the number of ways to have a box of size at least $j$, you could sum up $$\sum_{i=0}^{n-j}{\binom{n-(j-i)+k-2}{n-(j-i)-1}}$$ but this would be overcounting and you'd have to take out the duplicates.
Banach space, $(c,||\cdot||_\infty)$
Your proof that $\|\cdot\|_\infty$ is a norm is fine. Your proof that $c$ is complete, however, is not quite correct. You need to show that the Cauchy sequence $(f_n)$ converges to some $f\in c$ (in the metric induced by the norm). That is, given the Cauchy sequence you need to find some $f:\mathbb N\to\mathbb C$ such that $\lim_nf(n)$ exists and for every $\varepsilon>0$ there is some $N\in\mathbb N$ such that $\|f_n-f\|_\infty<\varepsilon$ whenever $n\geq N$.
Distribution of $Z = \sin(X) \sin(Y)$ where $X$ and $Y$ are independent and uniform in $[-\pi,\pi]$?
Hint: Just find the distribution of the difference of two arcsine distributions (which is much easier than a product): $$Z= \sin(X)\sin(Y)=\frac{1}{2}\left(\cos(X-Y) -\cos(X+Y)\right) $$ $$\sim \frac{1}{2}\left(\sin(X-Y) -\sin(X+Y)\right) $$ $$\sim \frac{1}{2}\left(\sin(X) -\sin(Y)\right) $$ R code:

n <- 100000
x <- runif(n, -pi, pi)
y <- runif(n, -pi, pi)
plot(density(sin(x)*sin(y)))
lines(density(.5*(sin(x)-cos(y))), col=2)

If $U$ and $V$ are uniform, then all of the following random variables have the same distribution: $\cos(U)$, $\cos(2U)$, $\sin(U)$, $\sin(2U)$, $\cos(U+V)$, $\cos(U-V)$, $-\cos(U)$. R code:

n <- 100000
x <- runif(n, -pi, pi)
y <- runif(n, -pi, pi)
plot(density(cos(x)))
lines(density(sin(y)), col=2)
lines(density(sin(2*x)), col=3)
lines(density(cos(2*x)), col=4)
lines(density(cos(x+y)), col=5)
lines(density(sin(x+y)), col=6)
Finding $[T_{|W_i}]_{C_i}$
$ \{v_1 + 2v_3\}=\left( \begin{array}{c} 2\\ 0\\ 4\\ \end{array} \right)\\ \{v_1+v_2,v_3\} =\left(\begin{array}{c} 3\\ 3\\ 5\\ \end{array}\right),\left(\begin{array}{c} -2\\ -2\\ -3\\ \end{array}\right)$ and $T\left( \begin{array}{c} 2\\ 0\\ 4\\ \end{array} \right)\ = 2\left( \begin{array}{c} 2\\ 0\\ 4\\ \end{array} \right)$, thus your first matrix comes out as $(2)$. Now, $T\left(\begin{array}{c} 3\\ 3\\ 5\\ \end{array}\right)=3\left(\begin{array}{c} 3\\ 3\\ 5\\ \end{array}\right)+5\left(\begin{array}{c} -2\\ -2\\ -3\\ \end{array}\right)$ and $T\left(\begin{array}{c} -2\\ -2\\ -3\\ \end{array}\right)=-2\left(\begin{array}{c} 3\\ 3\\ 5\\ \end{array}\right) -3\left(\begin{array}{c} -2\\ -2\\ -3\\ \end{array}\right)$, hence the second matrix comes out as $\left(\begin{array}{cc} 3&-2\\5&-3 \end{array}\right)$
Placing $L$-shaped tiles on $2\times n$ checkerboard
These are all the possible indecomposable ways to extend a tiling: We see that $(A,B,C,D)=(1,4,2,2)$ and can work out $a_8=207$. This is A337492 in the OEIS.
Proof Using the Monotone Convergence Theorem for the sequence $a_{n+1} = \sqrt{4 + a_n}$
It's easy to see that $a_n>0$ for all $n\geq 1$. Use mathematical induction to prove that $a_n\leq 4$ for all $n\geq1$ and that $(a_n)$ is increasing. Hence $(a_n)$ is convergent; say its limit is $A$. Then $ A=\sqrt{A+4}$ and $A>0$, which gives $ A=\frac{1+\sqrt{17}}{2}$.
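The claimed limit is easy to confirm by iterating; a tiny Python check (mine):

```python
from math import sqrt

a = 1.0
for _ in range(60):
    a = sqrt(4 + a)  # a_{n+1} = sqrt(4 + a_n)

A = (1 + sqrt(17)) / 2  # positive root of A^2 = A + 4
assert abs(a - A) < 1e-12
```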
Let $H$ and $N$ be normal subgroups of a group $G$ with $H \cap N = \{e\}$. Prove that $hn = nh$ for all $h \in H$ and $n \in N$.
Hint: look at the element $(h^{-1}n^{-1}h)n$ and (the same) $h^{-1}(n^{-1}hn)$, where $h \in H$ and $n \in N$. To which subgroup(s) does it belong?
How tall should my Christmas tree be?
Nice question! I learned a few things as I was looking for the solution. I assume that the length of your light cord is $0.3\text{ meters} \times 99 = 29.7$m (explanation: in the picture you posted, it seems that the cord starts and ends with a light bulb, so there are $99$ segments in between, each with a length of $0.3$m). I also assume (as you state in the comments) that you want each twist to be $0.3$m apart. Note that this is not the same as a bulb being equidistant from the bulbs around it, as the bulbs on different twists will be somewhat further apart than $0.3$m, but it is a good enough approximation. Besides, I think that your original restriction (exactly equidistant bulbs) might not be possible with a conical spiral. In any case, since you are fine with each twist being $0.3$m apart, we will work with this assumption, as it makes the problem easier to solve. The general parametric equations that define a conical spiral are: $$\begin{array}{rl} x =& t\cdot r\cdot \cos(\alpha \cdot t)\\ y =& t\cdot r\cdot \sin(\alpha \cdot t) \\ z =& t \end{array} $$ Where $t$ is a variable that expresses the vertical distance from the tip of the cone, $r$ is the radius of the cone at $t=1$ and $\alpha$ is a parameter that affects how densely the twists are wound around the cone: the bigger the $\alpha$, the denser the winding. What is $r$ in our problem? We want the height to be double the diameter, so at distance $1$m from the cone tip we simply want $r = \frac14$ meters (all distance units will be expressed in meters). What should $\alpha$ be? Setting $\alpha \cdot t = 2\pi$ means a full turn/twist around the cone, and since we want the starting point of the twist and the ending point of the twist to be $0.3$m apart, this means that $\alpha = \frac{2\pi}{0.3}$. Edit: no, this means that they are $0.3$m apart in the vertical direction ($t$ is vertical distance). What we need is that the spirals are $0.3$m apart on the surface of the cone.
So, how much is $t$ if the distance on the surface of the cone is $0.3$? If we take a cross section of the cone we can form a right triangle, where the hypotenuse is $0.3$, one side (the vertical distance) is $t$, and the other side is $t/4$. Applying the Pythagorean theorem we find that $t = 0.3\cdot \frac{4}{\sqrt{17}}$. So we want $\alpha \cdot \left( 0.3\cdot \frac{4}{\sqrt{17}}\right) = 2\pi \iff \alpha = \frac{2\pi}{0.3} \cdot \frac{\sqrt{17}}{4}$ We have established parameters $\alpha$ and $r$, so our conical spiral is fully defined. But how do we find the height of the cone/tree? The arc length of a conical spiral is: $$\text{length}(t) = \frac12t \sqrt{1+r^2(1+\alpha^2t^2)}+\frac{1+r^2}{2\alpha r}\text{sinh}^{-1}\left( \frac{\alpha\cdot r\cdot t}{\sqrt{1+r^2}}\right)$$ Plugging in $\text{length}(t)=29.7$, $r=\frac14$, $\alpha = \frac{2\pi}{0.3}\cdot \frac{\sqrt{17}}{4}$ we can solve for $t$ to get $t \approx \bbox[5px,border:2px solid red]{3.295}$ meters. So if you make your tree about $3.3$ meters tall and make your twists about $0.3$ meters apart, then you will have the coverage that you want. Here's how your light bulb spiral might look (it was a bit tricky to place the $100$ bulbs on the graph, I was happy that I succeeded in the end): $\hspace{2cm}$ And here's a side view of the spiral. As you can see there are about $11.5$ twists. $\hspace{3cm}$ You can find the Python code I wrote to create the graphs here. I hope this answer can help you with your lights installation. Merry Christmas! :)
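If you want to reproduce the numerical solution yourself, the arc-length equation can be solved by bisection, since the length is increasing in $t$ (a sketch mirroring the formula above; variable names are mine):

```python
import math

r = 0.25                                          # cone radius at t = 1
alpha = (2 * math.pi / 0.3) * math.sqrt(17) / 4   # twists 0.3 m apart on the surface
cord = 29.7                                       # total cord length in meters

def length(t):
    # Arc length of the conical spiral from the tip down to vertical distance t
    return (0.5 * t * math.sqrt(1 + r ** 2 * (1 + alpha ** 2 * t ** 2))
            + (1 + r ** 2) / (2 * alpha * r)
            * math.asinh(alpha * r * t / math.sqrt(1 + r ** 2)))

# Bisection on [0, 10] for length(t) = 29.7
lo, hi = 0.0, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if length(mid) < cord:
        lo = mid
    else:
        hi = mid
height = lo   # about 3.295 meters
```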
Trigonometric Integration Problem:
The first equality comes from the fact that if $g$ is an even function, then $$ \int_{-a}^a g(t)\,dt=2\int_0^a g(t)\,dt. $$ Either draw a figure or write the integral as $\int_{-a}^0+\int_0^a$ and make the substitution $t\mapsto -t$ in the first integral to see that. The second one is just calculating the integral using the fundamental theorem of calculus, using the facts that $$ \cos 0 = 1\quad\text{and}\quad \cos(k\pi)=(-1)^k\ \text{if}\ k\in\mathbb N. $$
Variables and square number problem
Hint: Let, $$N+2000=a^2$$ $$N-17=b^2$$ So that, $$(N+2000)-(N-17)=a^2-b^2$$ $$2017=(a-b)(a+b)$$ $2017$ is prime, so there aren't many ways to break it apart into a product of integers.
Calculating a weighted average for different pay rates and different hours.
Let's say there are $n$ days, for which $h_i$ is the number of hours worked and $r_i$ is the number of dollars per hour for that day for $i \in \Bbb{N}$. $h_i$ is the weight on each day because days with more hours affect the average rate of pay more. Therefore, to find the weighted sum, we simply need to sum up all of the $r_i$s with a weight of $h_i$, which can be expressed as: $$\sum_{i=1}^n r_ih_i$$ Then, to find the weighted average, we need to divide this weighted sum by the total number of hours. The total number of hours is the sum of all $h_i$, or: $$\sum_{i=1}^n h_i$$ Thus, we just need to divide the first part by the second part: $$\frac{\sum_{i=1}^n r_ih_i}{\sum_{i=1}^n h_i}$$ Notice that this is exactly the same as total money divided by total hours. However, we are just looking at this process differently by looking at the $r_i$s as our objects and the $h_i$s as our weights.
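A minimal sketch in Python (the daily figures are made up for illustration):

```python
# Hypothetical week: (hours worked, dollars per hour) for each day
days = [(8, 15.0), (4, 22.5), (10, 15.0), (6, 18.0)]

weighted_sum = sum(h * r for h, r in days)      # total money earned
total_hours = sum(h for h, r in days)           # sum of the weights
weighted_average = weighted_sum / total_hours   # average rate of pay
```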
Expansion of ${n+3-1 \choose 3}$
By definition: $${n\choose k}=\frac{n!}{k!(n-k)!}$$ So: $${n+3-1\choose 3}={n+2\choose 3}=\frac{(n+2)!}{3!(n+2-3)!}=\frac{(n+2)!}{3!(n-1)!}\;\;\;\;\;\left(=\frac{(n+2)(n+1)n}{6}\right)$$ Like MathyMatherson already said in the comments!
Show a connection $\nabla$ is compatible with a metric $\langle \cdot, \cdot \rangle$ of $\mathbb{R}^3$
Since $\nabla\langle \cdot,\cdot\rangle$ is a tensor, it suffices to check that $(\nabla_{\partial_k}\langle\cdot,\cdot\rangle)(\partial_i,\partial_j) = 0$ for all $i,j,k \in \{1,2,3\}$. In other words, we must show that $$\partial_k \langle \partial_i,\partial_j\rangle = \langle \nabla_{\partial_k}\partial_i,\partial_j\rangle + \langle \partial_i, \nabla_{\partial_k}\partial_j\rangle$$for all $i,j,k \in \{1,2,3\}$. The left side is obviously zero, so the situation doesn't look so bad. By the definition of $\nabla$, we have that $$\begin{align} \langle \nabla_{\partial_k}\partial_i,\partial_j\rangle + \langle \partial_i, \nabla_{\partial_k}\partial_j\rangle &= \langle \omega \varepsilon^r_{~ki}\partial_r,\partial_j\rangle + \langle \partial_i, \omega \varepsilon^r_{~kj}\partial_r\rangle \\ &= \omega \varepsilon^r_{~ki} \delta_{rj} + \omega \varepsilon^r_{~kj}\delta_{ir} \\ &= \omega(\varepsilon^j_{~ki} + \varepsilon^i_{~kj}) \\ &= 0,\end{align}$$because since $(j,k,i) \mapsto (i,k,j)$ is an odd permutation, we have that $\varepsilon^j_{~ki} =- \varepsilon^i_{~kj}$.
What does "coproduct of $\Bbb{Z}*\Bbb{Z}$ of $\Bbb{Z}$ by itself" mean?
You know what $A * B$ is for groups $A,B$? Now put $A=B=\mathbb{Z}$. Then you know what $\mathbb{Z} * \mathbb{Z}$ is. If $S,T$ are sets, then $F(S \sqcup T) = F(S) * F(T)$. This is formal (left adjoints preserve colimits). If $S = \{x\}$, then $F(S) = \mathbb{Z}$. It follows that $F(\{x,y\})=\mathbb{Z} * \mathbb{Z}$.
Hausdorff partial metric
Is $h_p$ a partial metric on $K(X)$? Not necessarily. Let $X$ be the first of the examples of partial metric spaces from the site you linked, that is, $X=\Bbb R^+$ with $p(x,y)=\max\{x,y\}$. Let $A=\{1,2,3\}$ and $B=\{1,3\}$ be elements of $K(X)$. Then $$h_p(A,A)= h_p(A,B)= h_p(B,B),$$ but $A\ne B$, which violates the separation axiom for the partial metric $h_p$ (namely, that $h_p(A,A)=h_p(A,B)=h_p(B,B)$ forces $A=B$).
Convergence of the series $ \sum_0^{\infty} 1/(1+x^n)$
It converges: note that $\frac{1}{|1+x^n|}\leq\frac{1}{|x|^n-1}\leq \frac{C}{|x|^n}$ for a sufficiently large constant $C$ when $|x|>1$. Now comparison with the geometric series completes the proof.
Continuity of $x\log(x^2)$ when $x\neq 0$
HINT I don't know if it helps, but you can do this: $$|x \log{x^2}-c \log{c^2}|=|x \log{x^2}+c\log{x^2}-c\log{x^2}-c \log{c^2}|$$ $$\leq |x-c||\log{|x|^2}|+|c||\log{\frac{|x|^2}{|c|^2}}|$$ $$=2|x-c||\log{|x|}|+2|c||\log{\frac{|x|}{|c|}}|$$ Note that $\log{a^2}=2\log{a}$ for $a>0$
Show that $\mathbb{Z_3 x Z_4}$ is a cyclic group
You can calculate the order of an element without listing every element. For example, you can show that $(1,0)$ has order $3$ and $(0,1)$ has order $4$. So $(1,1)$ has order divisible by $3$, since the first element has order $3$. Similarly its order must be divisible by $4$. Hence, it must have order $12$ since $3,4$ are coprime and the order of the group is $12$. This is a group theoretic way of phrasing the Chinese Remainder theorem.
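For a quick computational check, one can find the order of $(1,1)$ by brute force (a sketch; the helper name is mine):

```python
def order(element, modulus):
    # Order of an element of Z_{m1} x Z_{m2}: least k >= 1 with k*element == (0, 0)
    a, b = element
    m1, m2 = modulus
    k = 1
    while (k * a) % m1 != 0 or (k * b) % m2 != 0:
        k += 1
    return k
```

Here `order((1, 1), (3, 4))` returns $12 = |\Bbb Z_3\times\Bbb Z_4|$, so $(1,1)$ generates the group.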
Solution of a equation of matrices
Define a new matrix variable $$A=R\otimes R$$ Vectorize each side of the equation to obtain $$\eqalign{ {\rm vec}(M) &= \sum_{k=1}^\infty A^k {\,\rm vec}(S) \cr m &= \sum_{k=1}^\infty A^k\,s = \Big(\frac{A}{I-A}\Big)\,s \cr (I-A)\,m &= As \cr m &= A(s+m) \cr }$$ De-vectorizing the last equation yields $$\eqalign{ M &= R(S+M)R^T \cr }$$ Now, can you solve that equation?
Autocovariance function of stationary process
The autocovariance $\gamma(h):=\Bbb E[X_t X_{t+h}]-[\Bbb E[X_0]]^2$ admits a spectral representation $$ \gamma(h)=\int_{\Bbb R} e^{2\pi ihz}G(dz),\qquad h\in\Bbb R, $$ where $G$ is a positive measure on $\Bbb R$ with total mass $\gamma(0)$. If the measure $G$ admits a density with respect to Lebesgue measure, then $\lim_{|h|\to \infty}\gamma(h) =0$ by the Riemann-Lebesgue lemma. The absolute continuity of $G$ is known to hold if and only if the process $X$ admits a moving-average representation.
Prove that the only ring homomorphism from $\mathbb Z$ to $\mathbb Z$ is Identity mapping
Let $f : \mathbb{Z} \to \mathbb{Z}$ be a function such that it verifies every condition of a ring morphism, except $f(1) = 1$ (i.e. a non-unital ring morphism). Now, since $f(a+b) = f(a) + f(b)$, using Robert Shore's hint, for $n \geq 1$ we get $$ f(n) = f(1 + \cdots+ 1) = f(1) + \cdots + f(1) = nf(1) $$ and $f(-n) = f( (-1) + \cdots + (-1)) = nf(-1)$. Moreover, $$ f(-1) + f(1) = f(-1+1) = f(0) = 0 $$ and thus, by definition of additive inverse, we see that $f(-1) = -f(1)$. Hence, $f$ is completely determined by $f(1)$ via $$ f(k) = kf(1) \ (\forall k \in \mathbb{Z}). $$ Also, one should have $f(1) = f(1^2) = f(1)^2$ which in $\mathbb{Z}$ means that either $f(1) = 1$ or $f(1) = 0$. If $f$ ought to be unital, then necessarily $f \equiv id$.
Representation of $U(n)$ group as a permutation group
Just use the proof of Cayley's theorem: $u$ is identified with the permutation $x \mapsto ux$ of $U(10)=\{1,3,7,9\}$. $1 \leftrightarrow (1,3,7,9)=(1)(3)(7)(9)$ $3 \leftrightarrow (3,9,1,7)=(1397)$ $7 \leftrightarrow (7,1,9,3)=(1793)$ $9 \leftrightarrow (9,7,3,1)=(19)(37)$ If you do the same for $U(5)$, you'll find that the representation is essentially the same, that is, $U(5)$ and $U(10)$ are isomorphic. There isn't much hope to be able to say anything for all $U(n)$ because even for $n$ prime there is no formula for the elements of order $n-1$ (aka primitive roots).
How do I find the Levy triplet of a Levy process
Let $(N_t)_{t \geq 0}$, $(D_t)_{t \geq 0}$ be (arbitrary) one-dimensional Lévy processes with Lévy triplets $(\ell_1,q_1,\nu_1)$ and $(\ell_2,q_2,\nu_2)$, respectively. Note that we can determine the Lévy triplet of $(D_t,N_t)$ (uniquely) from the characteristic function of $(D_1,N_1)$, i.e. $$\mathbb{E}\exp \left( \imath \, (\xi,\eta) \cdot (D_1,N_1) \right)$$ By the independence of the processes, we have $$\mathbb{E}\exp \left( \imath \, (\xi,\eta) \cdot (D_1,N_1) \right) = \mathbb{E}\exp(\imath \, \xi \cdot D_1) \cdot \mathbb{E}\exp(\imath \, \eta \cdot N_1)$$ Using the Lévy-Khinchine formula we conclude $$\begin{align*} & \mathbb{E}\exp \left( \imath \, (\xi,\eta) \cdot (D_1,N_1) \right) \\ &= \exp \left( - \imath \, \ell_1 \xi - \frac{1}{2} q_1 \xi^2 - \int (e^{\imath \, y \xi}-1-\imath \, y \xi 1_{|y|<1}) \, d\nu_1(y) \right) \\ & \quad \cdot \exp \left( - \imath \, \ell_2 \eta - \frac{1}{2} q_2 \eta^2 - \int (e^{\imath \, y \eta}-1-\imath \, y \eta 1_{|y|<1}) \, d\nu_2(y) \right) \\ &= \exp \left(- \imath \, \ell \cdot (\xi,\eta) - \frac{1}{2} (\xi,\eta) Q (\xi,\eta) - \int (e^{\imath \, (y_1,y_2) \cdot (\xi,\eta)}-1-\imath \, (\xi,\eta) \cdot (y_1,y_2) \cdot 1_{|y|<1}) \, d\nu(y_1,y_2) \right) \end{align*}$$ where $$\begin{align*} \ell &:= \begin{pmatrix} \ell_1 \\ \ell_2 \end{pmatrix} \\ Q &:= \begin{pmatrix} q_1 & 0 \\ 0 & q_2 \end{pmatrix} \\ \nu &:= \nu_1 \otimes \delta_{0} + \delta_0 \otimes \nu_2 \end{align*}$$ Consequently, $(\ell,Q,\nu)$ equals the Lévy triplet of $(D_t,N_t)$. In your case, we have $$\ell_1 = 0 \quad q_1 = 0 \quad \nu_1 = \lambda \delta_1 \\ \ell_2 = \alpha \Gamma(1-\alpha) \quad q_2 = 0 \quad d\nu_2(y) = \alpha \Gamma(1-\alpha) y^{-\alpha-1} \, dy$$ Thus, $$\begin{align*} \ell&=\begin{pmatrix} 0 \\ \alpha \Gamma(1-\alpha) \end{pmatrix} \\ Q &= 0 \\ d\nu(y_1,y_2) &= \lambda \, d\delta_1(y_1) \, d\delta_0(y_2) + \alpha \Gamma(1-\alpha) {y_2}^{-\alpha-1} \, dy_2 \, d\delta_0(y_1) \end{align*}$$
What functions $f_n(k)$ have the property $\sum_{k=r}^n k(k - 1)\ldots(k - r + 1)f_n(k) = n!$
Only an idea, to be verified: Put $\displaystyle S_n(x)=\sum_{k=0} f_n(k)x^k$. Then your hypothesis is $S_n^{(r)}(1)=n!$ for $r=0,\cdots,n$. Hence $\displaystyle S_n(x)=n!\sum_{r=0}^n \frac{(x-1)^r}{r!}$.
What is number theory today?
It's not really as black and white as that. A lot of Algebraic Geometry is studied in Number Theory to look at Elliptic Curves. Erdös even used probabilistic techniques to study the primes back in the 30s by essentially looking at the primes as random variables. Elementary Number Theory is also the study of integers without the use of Complex Analysis (as in Analytic NT) or algebraic structures (As in Algebraic NT). Algebraic Number Theory uses techniques from abstract algebra to study the properties of the integers ($\Bbb Z$) and the rationals $\Bbb Q$, and extensions thereof. Other structures that are similar (for instance the Gaussian integers $\Bbb Z[i]$) are also studied to see HOW they are similar and why. Analytic Number Theory uses techniques from Complex Analysis to infer results about the integers (strangely, since Complex Analytic objects tend to be continuous in nature). There is a lot of overlap though; the theory of modular forms and elliptic curves merge in and out of either of the two fields.
If $\gamma_n$ are roots of $\tan x = x$ can every function be expanded in form of $\sum_n a_n \sin(\gamma_n x)$?
The functions $x$ and $\sin(\gamma_j x)$ with $\tan(\gamma_j) = \gamma_j$, are orthogonal on the interval $[0,1]$. Suitably normalized, I think we get an orthonormal basis of $L^2[0,1]$ (and I think this can be obtained from Sturm-Liouville theory). Thus for any square-integrable function $f$ on $[0,1]$, we should have the expansion $$ f(x) = c_0 x + \sum_{j=1}^\infty c_j \sin(\gamma_j x)$$ (converging in $L^2$) where $$ c_0 = 3 \int_0^1 f(x) x\; dx,\ c_j = \frac{2}{\sin^2(\gamma_j)} \int_0^1 f(x) \sin(\gamma_j x)\; dx$$
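A quick numerical check of the orthogonality underlying this expansion (a sketch; $\gamma_1 \approx 4.4934$ is the first positive root of $\tan x = x$, and the inner product $\int_0^1 x\sin(\gamma_1 x)\,dx$ should vanish):

```python
import math

# First positive root of tan(x) = x, i.e. of sin(x) - x*cos(x) = 0 on (pi, 3*pi/2),
# where that function is strictly decreasing, so bisection applies
lo, hi = math.pi, 1.5 * math.pi
for _ in range(200):
    mid = (lo + hi) / 2
    if math.sin(mid) - mid * math.cos(mid) > 0:
        lo = mid
    else:
        hi = mid
gamma1 = lo   # about 4.4934

# Simpson's rule for the inner product  int_0^1 x sin(gamma1 x) dx
n = 1000
h = 1.0 / n
inner = h / 3 * sum((1 if k in (0, n) else 4 if k % 2 else 2)
                    * (k * h) * math.sin(gamma1 * k * h) for k in range(n + 1))
```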
How to complete a primitive vector to a unimodular matrix
This may not be what you want, but the completion can be done as follows. Let $x=x_1$ be the given integer vector. We look for $2n-1$ integer vectors $x_2,\ldots,x_n,y_1,\ldots,y_n$ such that $$ Y^TX=\pmatrix{y_1^T\\ y_2^T\\ \vdots\\ y_n^T}\pmatrix{x_1&x_2&\cdots&x_n} =\pmatrix{1&\ast&\cdots&\ast\\ &1&\ddots&\vdots\\ &&\ddots&\ast\\ &&&1}. $$ Since both $X$ and $Y$ have integer determinants and $\det(Y)\det(X)=\det(Y^TX)=1$, if $X$ and $Y$ do exist, we must have $\det(X)=\pm1$. We can construct the columns of $X$ and $Y$ by mathematical induction. In the base case, we pick a $y_1$ such that $y_1^Tx_1=1$. This is possible because the entries of $x_1=x$ are coprime, i.e. their GCD is $1$. In the inductive step, suppose $1\le k<n$ and $x_1,\ldots,x_k,y_1,\ldots,y_k$ are such that $y_i^Tx_i=1$ for each $i\le k$ and $y_i^Tx_j=0$ whenever $j<i\le k$. Since the rank of $A=\pmatrix{x_1&\cdots&x_k}$ is smaller than $n$, the equation $y_{k+1}^TA=0$ has a nontrivial solution $y_{k+1}\in\mathbb Q^n$. By clearing the denominators in its entries, $y_{k+1}$ can be chosen as an integer vector. Then by pulling out common factors in its entries, $y_{k+1}$ can be chosen to be primitive. Thus there exists an integer vector $x_{k+1}$ such that $y_{k+1}^Tx_{k+1}=1$ and our proof is complete.
Using the def. of integrability with Darboux sums
To show that $f$ is Darboux integrable you need to show that $\sup_P L(f,P) = \inf_P U(f,P)$, where $P$ are the partitions of $[a,b]$. Since we always have $L(f,P) \le U(f,P)$, it is sufficient to show that we can make $U(f,P)-L(f,P)$ as small as we like. If $f$ was continuous, we could use uniform continuity to show this quickly. For general non increasing $f$ we need to find a similar uniformity while taking care around points where $f$ 'drops' too much. The idea of the proof is straightforward but the notation is cumbersome. Let $o(x) = \lim_{y \downarrow x} f(y) - \lim_{y \uparrow x} f(y)$. Since $f$ is non increasing, we have $o(x) \ge 0 $. It is not hard to show that the set of points for which $o(x) >0$ is at most countable, and furthermore, $f(a)-f(b) \ge \sum_k o(x_k)$ where $x_k$ are the points at which $o(x_k) >0$. Note that $|f|$ is bounded by some $B$. Also note that if $I$ is an interval, and $f$ is non increasing, $\sup_{y \in I} f(y) -\inf_{y \in I} f(y) \le f(\inf I)-f(\sup I)$. Let $\epsilon>0$ and choose $N$ such that $\sum_{k >N} o(x_k) < \epsilon$. Choose $\delta>0$ such that $4 \delta N B < \epsilon$ and the intervals $[x_k-\delta,x_k+\delta]$ do not intersect for $k=1,...N$. Let $I_k = [x_k-\delta,x_k+\delta]$, then we have $(\sup_{y \in I_k} f(y)-\inf_{y \in I_k} f(y)) l(I_k) < {1 \over N}\epsilon$. Let $P_1 $ be the partition consisting of the points $x_k-\delta,x_k+\delta$, for $k =1,...N$. Let $J_k$ be the remaining intervals. We may take the $J_k$ to be closed (and hence compact). Note that $o(x) < \epsilon$ for all $x \in \cup_k J_k$ (this is the 'uniformity' condition that we need). By definition, for each such $x$ we may find an open interval $U_x$ containing $x$ such that $f(\inf U_x) -f(\sup U_x) < \epsilon$. Since the $J_k$ are compact, they are covered by a finite number of such sets. 
In particular, we find a partition $\Pi_k$ of each $J_k$ such that for any neighbouring pair of points $x,y \in \Pi_k$ ($x<y$), we have $f(x)-f(y) < \epsilon$. Now let $P$ be the partition consisting of the union of $P_1$ and the $\Pi_k$. Let ${\cal I}$ be the collection of intervals $I$ in the partition $P$, and let ${\cal U} \subset {\cal I}$ (u for 'uniform') be the intervals $I$ contained in the $\Pi_k$. Note that $|{\cal I} \setminus {\cal U}| = N$. Then we have \begin{eqnarray} U(f,P)-L(f,P) &=& \sum_{I \in {\cal I}} (\sup_{y \in I} f(y) - \inf_{y \in I} f(y)) l(I) \\ &=& \sum_{I \in {\cal U}} (\sup_{y \in I} f(y) - \inf_{y \in I} f(y)) l(I) + \sum_{I \notin {\cal U}} (\sup_{y \in I} f(y) - \inf_{y \in I} f(y)) l(I) \\ &\le& \sum_{I \in {\cal U}} (\sup_{y \in I} f(y) - \inf_{y \in I} f(y)) l(I) + N {1 \over N} \epsilon \\ &\le& \epsilon \sum_{I \in {\cal U}} l(I) + N {1 \over N} \epsilon \\ &\le& \epsilon (b-a) + \epsilon \end{eqnarray} Hence we can make $U(f,P)-L(f,P)$ as small as we like and so we conclude that $f$ is Darboux integrable.
number of pairs of integers whose sum is even
Your answer is correct (though I don't understand parts of your reasoning; for $X$ we have $2^4-1\ne 31$ and neither of the two numbers makes sense) as can be seen by a different reasoning: Each subset of the nine-element set of given numbers gives rise to a sum. Since all summands are positive, the sum is positive as soon as the set is not empty. There are $2^9$ subsets; minus the empty set, we obtain $2^9-1=511$.
Limit Theorem applicability
The probability is $1/99$. Denote by $P(a,b)$ the probability that after $a+b$ shots, exactly $a$ were in and $b$ were out. Then $P(1,1)=1$ and $P(a,b) = \frac{a-1}{a+b-1} P(a-1,b) + \frac{b-1}{a+b-1} P(a,b-1)$ We prove by induction on $a+b$ that $P(a,b) = 1/(a+b-1)$. When $a+b = 2$, this is true. Otherwise $P(a,b) = \frac{a+b-2}{a+b-1} \cdot \frac{1}{a+b-2} = \frac{1}{a+b-1}$
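The inductive claim $P(a,b)=1/(a+b-1)$ can be verified mechanically with exact rational arithmetic (a sketch):

```python
from fractions import Fraction

# P(a, b) built from the recurrence, starting from P(1, 1) = 1;
# terms whose coefficient (a-1) or (b-1) vanishes are simply absent
P = {(1, 1): Fraction(1)}
for s in range(3, 12):                 # s = a + b
    for a in range(1, s):
        b = s - a
        val = Fraction(0)
        if a >= 2:
            val += Fraction(a - 1, s - 1) * P[(a - 1, b)]
        if b >= 2:
            val += Fraction(b - 1, s - 1) * P[(a, b - 1)]
        P[(a, b)] = val
```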
number of permutations in which $k$ numbers come after each other
Just ignore all the other numbers and there are $k!$ possible orders of the $k$ numbers, one of which is acceptable. That means $\frac 1{k!}$ of the permutations are acceptable. As there are $n!$ total permutations, the number of acceptable ones is $$\frac {n!}{k!}$$
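A small brute-force confirmation for, say, $n=6$ and $k=3$ (interpreting "acceptable" as the $k$ numbers appearing in one fixed relative order, as in the argument above):

```python
from itertools import permutations
from math import factorial

# Count permutations of {0,...,5} in which 0, 1, 2 appear in increasing order
# (one fixed relative order out of the k! possible ones)
n, k = 6, 3
special = set(range(k))
count = sum(1 for p in permutations(range(n))
            if [v for v in p if v in special] == sorted(special))
```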
What does it mean for a solution to be regular (Legendre equation)?
Regular means that the function is not singular at the point mentioned, singular points being those where you divide by zero. Of course some singular points may be resolved, for instance $f(x)=\frac{x}{x}$ at $x=0$, but others cannot, like $f(x)=\frac{1}{x}$ at $x=0$, or indeed the other solutions of the Legendre differential equation, for example $Q_0(x)=\frac{1}{2}\ln\left(\frac{1+x}{1-x}\right)$, which is singular (not regular) at $x=1$.
Gradient of least-squares cost — how to compute it?
Yes, you can indeed perform matrix calculus, which is exactly what it sounds like. In this case, we're differentiating a scalar value (since it's a norm, which is just a number) by the vector $x$, and the way you do that is surprisingly simple - you differentiate by each of the vector's components, and make a vector out of that. In other words, if $x = \left( x_1, x_2, \ldots, x_n \right)$ then $\frac{\partial}{\partial x} = \left(\frac{\partial}{\partial x_1}, \frac{\partial}{\partial x_2}, \ldots, \frac{\partial}{\partial x_n} \right)$. It might not be entirely clear how that's going to work when you've got matrix operations happening under the hood there, but because matrices and derivatives are both linear operators (i.e. they behave nicely when you add things together or multiply them by constants) stuff mostly follows sensible rules, and in particular if you go to the section in that Wikipedia article on "Identities" you'll find the bit you're looking for under "Scalar-by-vector identities", in particular: $\mathbf{A}$ is not a function of $\mathbf{x}$, $\mathbf{A}$ is symmetric: $\frac{\partial \mathbf{x}^\top\mathbf{A}\mathbf{x}}{\partial \mathbf{x}} = 2\mathbf{x}^\top\mathbf{A}$
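The resulting gradient of the least-squares cost, $\nabla_x \|Ax-b\|^2 = 2A^\top(Ax-b)$, can be checked against central finite differences; a sketch with plain Python lists on a random instance (names are mine):

```python
import random

# Finite-difference check of the gradient of f(x) = ||Ax - b||^2
random.seed(0)
m, n = 4, 3
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(m)]
b = [random.uniform(-1, 1) for _ in range(m)]
x = [random.uniform(-1, 1) for _ in range(n)]

def cost(v):
    return sum((sum(A[i][j] * v[j] for j in range(n)) - b[i]) ** 2 for i in range(m))

# Analytic gradient: 2 * A^T * (A x - b)
residual = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
grad = [2 * sum(A[i][j] * residual[i] for i in range(m)) for j in range(n)]

# Central differences, one coordinate at a time
eps = 1e-6
numeric = [(cost(x[:j] + [x[j] + eps] + x[j + 1:]) -
            cost(x[:j] + [x[j] - eps] + x[j + 1:])) / (2 * eps) for j in range(n)]
```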
What is probability $P(X<x |X <Y)$
Let $Z(x) = \mathbb P\left[X < x \Big | X < Y\right] = \mathbb E\left[\mathbf 1_{X < x} \Big | X < Y\right]$; this is a random variable, the projection of $\mathbf 1_{X < x}$ on $\sigma (\{X < Y\})$ in $L^2$, so $$Z (x) = a(x) + b(x)\mathbf 1_{X < Y}$$ such that $$(a(x), b(x)) = \underset{(a,b) \in \mathbb R^2}{\text{argmin }} \mathbb E \left[\left(\mathbf 1_{X<x} - a - b\mathbf 1_{X < Y}\right)^2\right]$$ $$\mathbb E \left[\left(\mathbf 1_{X<x} - a - b\mathbf 1_{X < Y}\right)^2\right] = \mathbb P[X < x] + a^2 + b^2\mathbb P[X < Y] + 2ab \mathbb P[X < Y] - 2a\mathbb P[X < x] - 2b\mathbb P[X < x,X < Y]$$ then at the minimum the derivative is zero, so $$\left\{\begin{array}{ccr} a(x) + b(x)\mathbb P[X < Y] - \mathbb P[X < x] &=& 0 \\ b(x)\mathbb P[X < Y] +a(x)\mathbb P[X < Y] - \mathbb P[X < x, X < Y] &=& 0\end{array}\right.$$ $$\left\{\begin{array}{ccr} a(x) &=& \frac{\mathbb P[X < x] - \mathbb P[X < x, X < Y]}{1-\mathbb P[X < Y]} \\ b(x) &=& \frac {\mathbb P[X < x, X < Y] - \mathbb P[X < x]\mathbb P[X < Y]}{\mathbb P[X < Y] (1-\mathbb P[X < Y])}\end{array}\right.$$ Now you have to compute $P[X < x]$, $P[X < Y] = P[X-Y < 0]$ and $P[X < x, X-Y < 0]$ to finish the answer. In the case where $X$ and $Y$ are independent, let $\Phi$ be the cdf of $\mathcal N (0, 1)$: $$P[X < x] = \Phi \left(\frac {x-\mu_1}{\sigma_1}\right)$$ $$P[X-Y < 0] = P\left[\mathcal N \left(\mu_1 - \mu_2, (\sigma_1^2 + \sigma_2^2)^\frac12\right) < 0\right] = \Phi \left(\frac {\mu_2 - \mu_1}{(\sigma_1^2 + \sigma_2^2)^\frac12}\right)$$ $$P[X<x, X<Y] = \int_{-\infty}^x \left(1-\Phi \left(\frac {t-\mu_2}{\sigma_2}\right)\right)\frac {1}{\sqrt{2\pi}\sigma_1}e^{-\frac {(t-\mu_1)^2}{2\sigma_1^2}} \mathrm d t$$ I do not think that in the case $\mu_1 \neq \mu_2$ you can have a closed form for the last one.
An approximation for $\left(1-{1\over x}\right)^{2y} - \left(1-{2\over x}\right)^y$
We have $$\left( 1+ \frac{a}{x}\right)^x = e^a \left( 1 - \frac{a^2}{2x} +O(x^{-2}) \right)$$ Using this, assuming $y/x = a$ (constant) we get $$\left(1-{1\over x}\right)^{2y} - \left(1-{2\over x}\right)^y \approx e^{-2 a} \frac{a}{x}=e^{-2 y/x} \frac{y}{x^2} $$
$\iint_A\nabla\times\textbf{u}\cdot \textbf{n}\ dS$ with $\nabla\times\textbf{u}$ known.
The hint essentially means that you should replace the surface you currently have with another surface having the same boundary curve. In other words, $\displaystyle \iint_{S} (\nabla\times\textbf{u})\cdot \textbf{n}\ dS = \int_{C} \textbf{u}\cdot d\textbf{r} = \iint_{S_1} (\nabla\times\textbf{u})\cdot \textbf{n}\ dS$. The boundary curve of the elliptic paraboloid surface, as you correctly found, is $\displaystyle (\frac{x+0.5}{3})^2+(y+2.5)^2 = 1, z = 9 - x^2 - 9y^2$. Now using Stokes' Theorem a second time, we can equate the line integral of the vector field along the boundary curve to the surface integral of the curl of $\textbf{u}$ over the elliptic disk given by $\displaystyle (\frac{x+0.5}{3})^2+(y+2.5)^2 \leq 1, z = x + 45y + \frac{113}{2}$. $\nabla \times \textbf{u}= (-2+43x-86y+43z, 3-2x+4y-2z, 2-47x+94y-47z)$ For the dot product, use the fact that a normal vector to the plane is $(-1, -45, 1)$; the dot product is simply $-131$. So your task is just finding the area of the ellipse. You can either use the formula $\pi ab$ for the area of the ellipse and multiply by $-131$, which gives you the answer $-393 \pi$, or you can do the integral as below. Use the substitution $x = - 0.5 + 3r \cos\theta, y = -2.5 + r \sin\theta$ for the surface integral, $0 \leq r \leq 1, 0 \leq \theta \leq 2\pi$, and plug in the Jacobian $3r$ instead of the usual $r$. That gives $3$ times the area of a unit circle ($= 3 \pi$).
Q: Passing to a circle function to a square function
The notation $x^\infty$ and $y^\infty$ means nothing. The best point of view is norms. The classical Euclidean norm $\|\cdot \|_2$ is defined by (for $\overrightarrow x=(x_1,x_2)$) $$\| \overrightarrow x \|_2 = \sqrt{x_1^2+x_2^2}$$ You can check that the set of $\overrightarrow x$ such that $\| \overrightarrow x \|_2 = 1$ is exactly the standard unit circle. At the opposite extreme, you can define another norm $\|\cdot \|_\infty$ by $$\| \overrightarrow x \|_\infty = \max(\vert x_1 \vert,\vert x_2 \vert)$$ and you can check that the set of $\overrightarrow x$ such that $\| \overrightarrow x \|_\infty = 1$ is your square. You can also define intermediate norms $\|\cdot \|_p$ for $p \geq 1$ by $$\| \overrightarrow x \|_p = (\vert x_1\vert^p+\vert x_2\vert^p)^{1/p}$$ The set of $\overrightarrow x$ such that $\| \overrightarrow x \|_p = 1$ looks like a rounded square, sharper the larger $p$ is. Actually one can show that $$\| \overrightarrow x \|_p \underset{p \to +\infty}{\longrightarrow} \| \overrightarrow x \|_\infty.$$ This justifies your affirmation.
Prove with mathematical induction (or with a combinatorics) that $E(x)=\sum\limits_{n=0}^{\infty}{(D_n x_n)/n!} = e^{-1}/(1-x)-1$?
Use the identity $D_n=nD_{n-1}+(-1)^n$, which is easily proved by induction on $n$: multiply by $\frac{x^n}{n!}$ and sum over $n$ to get $$\begin{align*} \sum_{n\ge 0}D_n\frac{x^n}{n!}&=\sum_{n\ge 1}D_{n-1}\frac{x^n}{(n-1)!}+\sum_{n\ge 0}(-1)^n\frac{x^n}{n!}\\ &=x\sum_{n\ge 0}D_n\frac{x^n}{n!}+\sum_{n\ge 0}(-1)^n\frac{x^n}{n!} \end{align*}$$ and solve for $E(x)=\sum_{n\ge 0}D_n\frac{x^n}{n!}$. You can find a derivation of $E(x)$ starting from the identity $D_{n+1}=nD_n+nD_{n-1}$ here and a proof of that identity here.
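Solving that equation gives $E(x)(1-x)=e^{-x}$, i.e. $E(x)=e^{-x}/(1-x)$, whose $n$-th coefficient is $D_n/n!=\sum_{k=0}^n(-1)^k/k!$. A quick check of this against the recurrence (a sketch):

```python
from math import factorial

# D_n from the recurrence D_n = n*D_{n-1} + (-1)^n ...
D = [1]
for n in range(1, 11):
    D.append(n * D[-1] + (-1) ** n)

# ... against n! * sum_{k=0}^n (-1)^k / k!, the n-th coefficient of e^{-x}/(1-x)
for n in range(11):
    assert D[n] == sum((-1) ** k * factorial(n) // factorial(k) for k in range(n + 1))
```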
If a closed linear operator is one to one on a dense set, then is it one to one on the whole set?
You have $$D(\lambda - A)=\{x\in L:\lambda x - Ax \in L\} = \{x\in L : Ax\in L\}=D(A)$$ because $\lambda x - Ax$ lies in $L$ iff $Ax$ lies in $L$. So you only need it to be injective on $D(A)$. $U_{\lambda}$ takes care of both its surjectivity and injectivity. Finally, once that's done it is automatically invertible. This applies to a slightly more general case: Lemma: Suppose that $A$ is a closed linear operator on a Banach space $X$; then it is invertible iff it is bijective onto $X$. Clearly if it is invertible then it is bijective onto $X$, so the converse is the interesting one. Let $G(A)$ denote its graph, and suppose that $A$ is bijective onto $X$. Then its algebraic inverse exists, call it $B$. The map $(x,y)\mapsto (y,x)$ takes $G(A)$ bijectively onto $G(B)$ and is a homeomorphism, so $G(A)$ closed implies $G(B)$ is closed. Now as $X$ is complete, and $B$ is a bijection from $X$ onto $D(A)$, by the closed graph theorem $B$ is bounded. So $A$ is invertible.
need to know the topological property of the set
Compactness: Arzelà-Ascoli. Connectedness: the closure of a connected set is connected (note that $B$ is convex, hence so is $A$). Since you didn't say dense in what, that part is impossible to answer.
boolean simplification with k map
I think you made a typo when you said you got to: $CD+ABC'+ABC'D$ Because that should really be: $CD+ABC'+ABCD'$ This can be simplified more using: Nested Reduction $PQ+PQ'R=PQ+PR$ Applied to your statement, given $CD$, you can reduce $ABCD'$ to just $ABC$. And using Adjacency, that combines with $ABC'$ to just $AB$ Final result: $CD+AB$ Of course, you could have gotten this immediately from the original expression as well. Indeed, you made the 'mistake' of thinking that you can use terms only once when combining with other terms. This is why you combined the first four terms to $CD$, the fifth and sixth term to $ABC'$, and were left with the last one. However, please realize that you can use terms more than once, because of: Idempotence $P+P=P$ Applied to your original expression, we can use Idempotence to duplicate the third term $ABCD$, and now you have: $A'B'CD + A'BCD + ABCD + AB'CD + ABC'D' +ABC'D + ABCD'+ ABCD $ $=A'CD+ACD+ABC'+ABC$ $=CD+AB$
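The final result can also be confirmed by brute force over all $16$ truth assignments (a sketch; the function names are mine):

```python
from itertools import product

def original(A, B, C, D):
    # The seven minterms of the original expression
    return ((not A and not B and C and D) or (not A and B and C and D) or
            (A and B and C and D) or (A and not B and C and D) or
            (A and B and not C and not D) or (A and B and not C and D) or
            (A and B and C and not D))

def simplified(A, B, C, D):
    return (C and D) or (A and B)
```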
Summation in subsets
Consider the matrix $X = (x_{i,j})_{i,j \in V}$. The first expression says that, for every column (which are indexed by $j \in V$), the sum of its elements is 1. In other words, the $j$-column is fixed and the row index $i$ runs: so it's a sum over a column. The second expression is the same but for every row.
Independence of two sequences of 1's and 0's.
For $n=3$, I think the events $A_3$ and $B_3$ are independent: $P(A_3)=\frac12$; $P(B_3)=\frac34$: $P(A_3\cap B_3)=\frac38=P(A_3)\cdot P(B_3)$ . For this last one, $A_3\cap B_3$ occurs if you have one girl and two boys.
Using the residue theorem
Use the integral $$\int_R \frac{z^2}{(z^2 + 1)^2} dz$$ where $R$ is a semicircle with base on the real interval $(-r, r)$ for some $r$. Then take $r \rightarrow \infty$ to get your integral (the integral over the part not on the real line goes to $0$ as $r \rightarrow \infty$).
What is doubling time of tumor? (Using exponential growth)
If it's exponential growth it'd be $$N(t)=N(0)e^{rt}$$ So we can solve for $r$: $$r={\ln(N(t))-\ln(N(0))\over t}$$ We know $N(0)=5$ and $N(3)=8$, so now we have $$r={{\ln(8)-\ln(5)}\over3}$$ Now we can say that $$2=e^{{{\ln(8)-\ln(5)}\over3}t}$$We can take the natural log of each side and simplify a little to get:$$\ln2={\ln(8)-\ln(5)\over 3}t$$ solving for $t$ gives us that:$$t={3\ln(2)\over {{\ln(8)-\ln(5)}}}=4.42430954207$$
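In code, the same computation reads (a sketch; $N(0)=5$, $N(3)=8$ as in the problem):

```python
import math

# Doubling time for N(t) = N(0) * exp(r*t)
N0, N3, elapsed = 5, 8, 3
r = (math.log(N3) - math.log(N0)) / elapsed   # growth rate per unit time
doubling_time = math.log(2) / r               # about 4.424
```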
Prove the inequality $4+xy+yz+zx \ge 7xyz$
Hint Use AM-GM to show $$1\ge xyz, \quad (x+y+z)(xy+yz+zx)\ge9xyz$$
Taylor's theorem and evaluating limits when x goes to infinity
$$e^t \left(t^2-2 t+2\right)=2+\frac{t^3}{3}+O\left(t^4\right)$$ $$2\sqrt{t^6+1}=2+O\left(t^4\right)$$ thus $$\frac{e^t \left(t^2-2 t+2\right)-2 \sqrt{t^6+1}}{2 t^3}\sim \frac{2+\frac{t^3}{3}-2}{2 t^3},\text{ as }t\to 0^+$$ Therefore $$\underset{t\to 0^+}{\text{lim}}\frac{e^t \left(t^2-2 t+2\right)-2 \sqrt{t^6+1}}{2 t^3}=\frac{1}{6}$$
Proving a necessary and sufficient condition for a finite group being nilpotent
... by a proposition in the book, every proper subgroup of a nilpotent group is a proper subgroup of its normalizer. [...] Is my proof wrong? What group do you suppose is nilpotent, so that you can apply this to it? It's not like you know $G$ is nilpotent yet: that's what you're trying to prove. Suppose you believe the hint. Then in the case that $N_G(P)\neq G$, you argue that it's contained in a maximal subgroup of $G$, which is equal to its own normalizer. But that contradicts the hypotheses...
How to perform modulo operation on a fraction?
You need to find the multiplicative inverse of $4$ modulo $23$, that is, an integer $q$ with $4\cdot q \equiv 1 \pmod{23}.$ For this, use the extended Euclidean algorithm to find integers $p$ and $q$ such that $23\cdot p+4\cdot q =1,$ which is possible since $\gcd(23,4)=1.$ It follows that $q \equiv 4^{-1} \pmod{23}.$
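A sketch of the extended Euclidean algorithm for this concrete case (the function name is mine; Python 3.8+ can also compute the inverse directly with three-argument `pow`):

```python
def extended_gcd(a, b):
    """Return (g, p, q) with a*p + b*q == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, p, q = extended_gcd(b, a % b)
    return g, q, p - (a // b) * q

g, p, q = extended_gcd(23, 4)
assert 23 * p + 4 * q == 1
inv = q % 23                     # the inverse of 4 mod 23, reduced to [0, 23)
print(inv, (4 * inv) % 23)       # 6 1
assert inv == pow(4, -1, 23)     # built-in modular inverse, Python 3.8+
```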
How to show from the definition of topological space that the topology $\tau$ is the family of open sets of a topology with a filter base?
It’s immediate from the definition: if $U\in\tau$ and $x\in U$, then $U\in\mathscr{B}(x)\subseteq\mathscr{N}(x)$, and certainly $U\subseteq U$, so $U$ is open in the sense of (3). Of course to use Theorem 1 to prove Theorem 2 you still have to show that each $\mathscr{B}(x)$ really is a filter base, but that’s very straightforward.
Number theory problem and Diophantine Equations
From a) and b), $m$ must be odd and hence so must be $n^4-4=(n^2+2)(n^2-2)$. Since $\gcd(n^2+2,n^2-2)\mid 4$ and both factors are odd, the gcd is $1$, so $n^2+2$ and $n^2-2$ are coprime; hence each of them must be a cube. Hence we have two cubes that differ by $4$.
generating function for distribution of 12 balls into 10 cells
There are $10^{12}$ ways to assign the balls to the cells, each of which we assume is equally likely. (It's easy to get off on the wrong foot in this problem by overlooking this assumption, for example by assuming that all non-negative integer solutions to $x_1+x_2+ \dots +x_{10} = 12$ are equally likely, which is a very unlikely assumption.) We want to count all the assignments in which exactly one cell is empty. To do this, we will apply a variation of the Principle of Inclusion/Exclusion (PIE). Let's say an assignment of balls to cells has "property $i$" if cell $i$ is empty, for $i = 1,2,3,\dots ,10$, so our goal is to find the number of arrangements with exactly one of the properties. Further, we define $S_j$ as the total number of assignments which have $j$ of the properties (with over-counting), for $j = 1, 2, 3, \dots ,9$. Then we have $$S_j = \binom{10}{j} (10-j)^{12}$$ for $1 \le j \le 9$, since there are $\binom{10}{j}$ ways to pick the empty cells and $(10-j)^{12}$ ways to assign the balls to the remaining cells. The variation of PIE we want to apply is that if there are $n$ properties, then the number of arrangements with exactly $m$ of the properties is $$N_m = S_m - \binom{m+1}{m} S_{m+1} + \binom{m+2}{m} S_{m+2} - \dots + (-1)^{n-m} \binom{n}{m}S_n$$ (Reference: Applied Combinatorics, Second Edition by Alan Tucker, Section 8.2, Theorem 2; or An Introduction to Probability Theory and Its Applications, Volume I, Third Edition, by William Feller, Section IV.3.) The case we are interested in is $m=1$, $n=9$, so $$N_1 = S_1 - \binom{2}{1} S_2 + \binom{3}{1} S_3 - \dots + \binom{9}{1}S_9$$ which yields $N_1 \approx 8.08315 \times 10^{10}$. Therefore the probability of having exactly one cell empty is $$\frac{N_1}{10^{12}} \approx \boxed{0.0808315}$$
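The alternating sum is easy to evaluate exactly, and it can be cross-checked against a direct count (choose the empty cell, then count surjections of the 12 balls onto the other 9 cells). A sketch:

```python
from math import comb

# S_j: assignments with a chosen set of j cells empty, summed over all choices
S = lambda j: comb(10, j) * (10 - j) ** 12

# exactly one empty cell, via the inclusion-exclusion variant quoted above (m = 1, n = 9)
N1 = sum((-1) ** (j - 1) * comb(j, 1) * S(j) for j in range(1, 10))

# cross-check: pick the empty cell, then count surjections of 12 balls onto 9 cells
surjections = sum((-1) ** k * comb(9, k) * (9 - k) ** 12 for k in range(10))
assert N1 == 10 * surjections

p = N1 / 10 ** 12
print(N1, p)    # 80831520000  0.08083152
```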
Number Theory: Finding specific new square-triangular numbers given that (m, n) satisfied n^2=m(m+1)/2
What you are expected to do is pretty similar to induction. You start with a pair that satisfies $ m^2 + m = 2n^2.$ Your task is to find four positive integers $A,B,C,D$ such that $(1 + Am +Bn, \; 1 + C m + D n)$ is also a solution; in the order I wrote it, this means $$ (1 + Am+Bn)^2 + (1 + Am + Bn) = 2 (1 +Cm+Dn)^2. $$ You just need to multiply it out, apply $m^2 + m = 2 n^2$ wherever applicable, and finally find $A,B,C,D.$ This problem, done from the start, goes through Pell-type equations and the automorphisms of quadratic forms. They want you to just jump to the end.
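So you can check your own answer once you've done the algebra: one choice that works is the classical square-triangular recurrence $A,B,C,D = 3,4,2,3$, i.e. $(m,n)\mapsto(3m+4n+1,\;2m+3n+1)$. A quick verification sketch:

```python
def step(m, n):
    # candidate recurrence: (1 + 3m + 4n, 1 + 2m + 3n)
    return 3 * m + 4 * n + 1, 2 * m + 3 * n + 1

m, n = 1, 1                       # 1 = 1*2/2 is both square and triangular
for _ in range(6):
    assert m * m + m == 2 * n * n  # i.e. n^2 = m(m+1)/2
    print(m, n)                    # (1,1), (8,6), (49,35), (288,204), ...
    m, n = step(m, n)
```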
Solution to $AX=XB$ for $3\times3$ rotation matrices.
If $A,B$ are $3\times 3$ real matrices representing rotations ("3D rotation matrices"), the following are equivalent: (1) $tr(A) = tr(B)$ (2) There exists real orthogonal $X$ such that $A = XBX^{-1}$. (3) There exists rotation matrix $X$ such that $A = XBX^{-1}$. Since rotation matrices are those orthogonal matrices with determinant 1, it's evident that (3) implies (2), and by replacing $X$ by $-X$ if necessary, also (2) implies (3). Since similar matrices have equal traces, (2) implies (1). It remains only to show (1) implies (2), which we will do in a constructive fashion. Setting aside the trivial case of zero rotation (identity matrix), every rotation in three dimensions is characterized by an axis of points fixed by the rotation and an angle by which points rotate around that axis. In the case of a special orthogonal linear transformation (rotation represented by a $3\times 3$ matrix), the axis passes through the origin and may be identified by a unit vector $u$ that generates it. If we adopt a convention that positive angles of rotation are counterclockwise about the axis pointed in direction $u$, then changing the sign of the angle is equivalent to changing instead the vector $u$ into $-u$ (and keeping the same angle as before). As the Wikipedia article points out, if rotation matrix $A$ has angle of rotation $\theta$ about an axis, then the trace of $A$ is $1 + 2 \cos \theta$. For example, by our convention the matrix which represents counterclockwise rotation by $\theta$ about the positive x-axis is: $$R_x(\theta) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos \theta & -\sin \theta \\ 0 & \sin \theta & \cos \theta \end{pmatrix} $$ and the trace is the sum of diagonal entries. If $A$ is a rotation matrix with $tr(A) = 1 + 2 \cos \theta$, let $u$ be a unit vector that generates the axis of rotation, i.e. $Au = u$. Extend that to an orthonormal basis $\{u,v,w\}$ for $\mathbb{R}^3$, and obtain an orthogonal matrix $P$ using these basis vectors as columns.
Then the similar rotation matrix $P^{-1} A P$ is either $R_x(\theta)$ or $R_x(-\theta)$ in the notation above. If the latter, replace $w$ by $-w$, so that: $$R_x(\theta) = P^{-1} A P$$ Do the same with rotation matrix $B$, which also has $tr(B) = 1 + 2 \cos \theta$: $$R_x(\theta) = Q^{-1} B Q$$ Now $X = P Q^{-1}$ satisfies (2), which proves that (1) implies (2), as desired.
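The construction can be carried out numerically. A sketch in plain Python (the helper names are mine; for rotation angles in $(0,\pi)$ the axis can be read off the skew-symmetric part of the matrix, which sidesteps an eigenvector computation):

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def close(A, B, tol=1e-9):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(3) for j in range(3))

def Rx(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def Rz(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def axis(A):
    # for rotation angles in (0, pi), the axis comes from the skew part of A
    u = [A[2][1] - A[1][2], A[0][2] - A[2][0], A[1][0] - A[0][1]]
    n = math.sqrt(sum(c * c for c in u))
    return [c / n for c in u]

def conjugator(A, theta):
    """Orthogonal P with P^T A P = R_x(theta), following the construction above."""
    u = axis(A)
    e = [0.0, 0.0, 0.0]
    e[min(range(3), key=lambda i: abs(u[i]))] = 1.0    # a vector not parallel to u
    d = sum(e[i] * u[i] for i in range(3))
    v = [e[i] - d * u[i] for i in range(3)]            # Gram-Schmidt step
    n = math.sqrt(sum(c * c for c in v))
    v = [c / n for c in v]
    w = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    P = transpose([u, v, w])                           # columns u, v, w
    if not close(matmul(transpose(P), matmul(A, P)), Rx(theta)):
        for i in range(3):                             # flip w to negate the angle
            P[i][2] = -P[i][2]
    return P

theta = 0.7
A, B = Rz(theta), Rx(theta)        # equal traces: both are 1 + 2 cos(theta)
P, Q = conjugator(A, theta), conjugator(B, theta)
X = matmul(P, transpose(Q))        # orthogonal, and A = X B X^{-1}
```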
Use $\sum_{n=0}^{\infty} \frac {2n +1} {2^n} = 6 $ to show $\sum_{n=0}^{\infty} \frac {2n +1} {2^n} i^n = \frac 4 {25} + i \frac {22} {25}$.
Hints: $$i^n=\begin{cases}\;\;\,i&,\;\;n=1\pmod4\\-1&,\;\;n=2\pmod4\\-i&,\;\;n=3\pmod4\\\;\;\,1&,\;\;n=0\pmod4\end{cases}$$ Added on request: Using the above and the sum of a geometric series with ratio $\;r\;,\;\;|r|<1\;$ $$\sum_{n=0}^\infty\frac{2n+1}{2^n}i^n=2\sum_{n=0}^\infty \frac {n\,i^n}{2^n}+\sum_{n=0}^\infty\left(\frac i2\right)^n=$$ $$=2i\sum_{n=0}^\infty\frac{4n+1}{2^{4n+1}}-2\sum_{n=0}^\infty\frac{4n+2}{2^{4n+2}}-2i\sum_{n=0}^\infty\frac{4n+3}{2^{4n+3}}+2\sum_{n=0}^\infty\frac{4n}{2^{4n}}+\frac1{1-\frac i2}=$$ Now use that (yes, again geometric, but this time power, series): $$\frac1{1-x}=\sum_{n=0}^\infty x^n\implies\frac1{(1-x)^2}=\sum_{n=1}^\infty nx^{n-1}\;,\;\;|x|<1\;\;\;(**)$$ so for example, using $2^n=2\cdot2^{n-1}$, $$\sum_{n=0}^\infty\frac{4n+1}{2^n}=2\sum_{n=1}^\infty n\left(\frac12\right)^{n-1}+\frac1{1-\frac12}=2\frac1{\left(1-\frac12\right)^2}+2=10$$ $\color{red}{\text{But now I'm realizing there's a much easier way to do this...!}}$ Using (**) above: $$\sum_{n=0}^\infty\frac{2n+1}{2^n}i^n=i\sum_{n=0}^\infty n\left(\frac i2\right)^{n-1}+\sum_{n=0}^\infty\left(\frac i2\right)^n=i\frac1{\left(1-\frac i2\right)^2}+\frac1{1-\frac i2}=$$ $$=\frac{4i}{3-4i}+\frac2{2-i}=\frac{-16+12i}{25}+\frac{20+10i}{25}=\frac4{25}+\frac{22}{25}i$$
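The closed form is easy to check against partial sums (my own verification sketch; the terms decay like $n/2^n$, so 200 terms is far more than enough):

```python
# partial sums of sum_{n>=0} (2n+1)/2^n * i^n; powers of i cycle with period 4
i_pow = [1, 1j, -1, -1j]
s = sum((2 * n + 1) / 2 ** n * i_pow[n % 4] for n in range(200))
print(s)   # close to 4/25 + 22i/25 = 0.16 + 0.88i
```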
Maximum ( or minimum ) of two functions on arbitrarily small interval.
That can't happen. Consider $d(x):=f(x)-g(x)$. If it is identically zero, the two functions $f$ and $g$ are the same. So let's assume it has a non-zero value: $$d(x_0)=y_0 \neq 0.$$ Now since $f$ and $g$ are continuous, their difference $d$ is as well. So there exists a $\delta_0 > 0$ such that $\lvert x - x_0 \rvert < \delta_0$ implies $\lvert d(x)-d(x_0) \rvert < \frac12 \lvert y_0 \rvert$, which implies $d(x) \neq 0$ (otherwise the LHS of the last inequality would be $\lvert 0 - d(x_0)\rvert = \lvert y_0 \rvert$). Because $d$ is continuous and non-vanishing there, by the intermediate value theorem it can't change its sign inside the interval $(x_0-\delta_0, x_0+\delta_0)$. So $d(x)=f(x)-g(x)$ is either always positive or always negative in that interval, which is the same as saying that $f(x)$ is always bigger or always smaller than $g(x)$ in that interval.
Topology and Measures
I would try to challenge a statement made in another answer: For a completely general pair of measures, there is definitely nothing topological that can be said. Recall that (for finite measures) $\mu \ll \nu$ iff for all $\epsilon > 0$ there exists $\delta>0$ such that $\mu(F)\leq\epsilon$ whenever $\nu(F) \leq \delta$. Now, let us endow the $\sigma$-algebra $\mathcal F$ with the $L^1(\nu)$ pseudometric given by $d_\nu(A,B):=\nu(A\triangle B)$ where $A\triangle B = (A\setminus B)\cup(B\setminus A)$ denotes the symmetric difference between two sets. Note that $d_\nu$ is a pseudometric only since $d_\nu(A,B) =0 $ does not imply $A = B$. Note that $|\mu(A) - \mu(B)|\leq \mu(A\triangle B)$ so that $\mu$ is a continuous function (on $\mathcal F$) with respect to the pseudometric $d_\nu$, and hence the corresponding topology. I think it is a rather good justification to say that $\mu$ is continuous w.r.t. $\nu$. If you are more comfortable thinking about spaces of functions, just consider $\mu$ as a functional on functions endowed with the $L^1(\nu)$ distance.
Attaching a disk $D^2$ along the boundary circle to a circle $S^1.$
Let's start from the beginning. $S^1$ is given and another, distinct $D^2$ is given. The boundary $\partial D^2$ of $D^2$ is $S^1$ as well, but since it is distinct I will denote it as $\partial D^2$. 1- I do not understand the statement: "by attaching a disk $D^2$ along the boundary circle". What does the question mean by "along the boundary"? Does it mean tangentially? The concept is the same as in CW complex construction. You start with a map $f:\partial D^2\to S^1$ (in your case the triple winding) and then you glue $D^2$ and $S^1$ along this map, i.e. you take the quotient space $$(D^2\sqcup S^1)/\sim$$ where "$\sim$" is generated by $x\sim f(x)$ for $x\in\partial D^2$. In particular note that if $f(x)=f(y)$ then $x\sim y$. Also, are there other ways of attaching a disk? Of course. If you glue along say the identity $f(x)=x$ then the result is simply $D^2$. The same goes for the antipodal map $f(x)=-x$. But in your case this is something different. Note that if you attach along a double winding you obtain the real projective space $\mathbb{R}P^2$. 2- I feel like I should use Van Kampen theorem but I do not know how to divide my space $Y$ into a union of path-connected open sets each containing the basepoint $y_{0} \in Y$? So let's generalize this a bit and assume that the attaching map winds $n$ times. Calculating the fundamental group for general $n$ is very similar to calculating it for $\mathbb{R}P^2$. Here is the answer that goes through the process in detail: An intuitive idea about fundamental group of $\mathbb{RP}^2$ The core idea there is that they use the path lifting property of coverings instead of Van Kampen. Try to generalize it (the quotient is no longer $x\sim -x$ but $x$ is now related to $n-1$ other points on $\partial D^2$) and note that the result should be $\mathbb{Z}_n$.
Percentage based on Profit and Loss
From the question's first statement you can find the cost price, but not the marked price, and to work out the offered discount you need the marked price. Now, placing things in order: From statement I: after the discount, the SP is 12350, and from it we can also calculate CP = 10000. From statement II: MP = 13000 (which would be the SP if there were no discount). Thus, I and II together give the answer. II and III cannot give the answer, because we require both the profit percentage with the discount and the profit percentage without the discount; so II and III are not sufficient. Since III gives C.P. = Rs. 10,000, I and III also give the answer. Therefore, I and II [or] I and III give the answer, so option 5 is the answer.
Vector notation for sum over elementwise product of 3 vectors
You can write the above expression in the following way $$\left(\begin{array}{lll}A_1\\A_2\\ \vdots \\A_N\end{array}\right)^T\left(\begin{array}{llll}B_1\\ &amp; B_2\\ &amp;&amp; \ddots\\ &amp;&amp;&amp;B_N\end{array}\right)\left(\begin{array}{lll}C_1\\C_2\\ \vdots\\C_N\end{array}\right)$$ The Matrix in the middle is diagonal.
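A small numerical sketch of the identity $\sum_i A_iB_iC_i = A^{\mathsf T}\,\mathrm{diag}(B)\,C$ (the sample numbers are arbitrary; with numpy this would be `A @ np.diag(B) @ C` or `np.einsum('i,i,i->', A, B, C)`):

```python
A = [0.5, -1.25, 2.0, 0.75]
B = [2.0, 1.0, -0.5, 4.0]
C = [1.5, 3.0, 2.5, -1.0]
N = len(A)

elementwise = sum(A[i] * B[i] * C[i] for i in range(N))

# A^T diag(B) C: diag(B) C is just the elementwise product (B_i C_i),
# which the row vector A^T then dots with
diagBC = [B[i] * C[i] for i in range(N)]
matrix_form = sum(A[i] * diagBC[i] for i in range(N))

print(elementwise, matrix_form)   # equal
```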
Ackermann Function primitive recursive
You seem to be conflating primitive recursive functions with total recursive functions. Actually, primitive recursive functions are a subset of total recursive functions. The hierarchy can be described informally as follows: Partial recursive functions are defined by an algorithm which only works on some inputs. Total recursive functions are defined by an algorithm which works on all inputs but can take an arbitrarily long time to compute. Primitive recursive functions are defined by an algorithm that completes in a knowable time (“not too long” for some theoretic definition of long). More precisely (though I'll refer you to Wikipedia or some other reference for a complete definition), a primitive recursive function can be computed by a sequence of elementary computations where there is a way to bound the running time at every step. The only form of recursion allowed in the definition is primitive recursion, where a function calls itself on a smaller argument. All primitive recursions can be reduced to giving definitions for $f(0)$ (not involving $f$) and $f(n+1)$ as a function of $f(n)$ only. General recursive definitions allow an extra operation: looking for the smallest $n$ that makes some property come true. In general, it's impossible to know how far you'll have to go to find such an $n$. The definition of the Ackermann function contains the clause $A(m+1,n+1) = A(m, A(m+1, n))$. The “next” value of the function (going from $n$ to $n+1$) depends on calls with larger parameters (starting with $A(m+1,n)$). So the definition is not in primitive recursive form. It takes some work which I am not going to reproduce here, but you can show that there is no alternate presentation of the Ackermann function that uses only primitive recursion. Yet the function is well-defined, because computing $A(?,n)$ only requires computations of $A(?,n')$ with $n' < n$.
If you prefer a programming approach, recursive functions can be computed with dynamic memory allocation and while loops. Primitive recursive functions can be computed with only for loops, where the loop bound may be computed at run time but must be known when the loop starts.
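As a sketch (this memoized implementation is standard, not taken from any particular source), here is the Ackermann definition written with the general recursion described above; note the nested call with a larger first argument, which is exactly what the primitive recursive schema forbids:

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(10_000)   # the call tree is deep even for tiny inputs

@lru_cache(maxsize=None)
def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    # the "next" value depends on the result of a call with a larger
    # first argument, breaking the primitive recursive form
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3), ackermann(3, 3))   # 9 61
```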
Intuition of section of a hermitian line bundle
$\newcommand{\Reals}{\mathbf{R}}\newcommand{\Cpx}{\mathbf{C}}\newcommand{\Proj}{\mathbf{P}}$If you're seeking intuition about how a line bundle can be non-trivial, the simplest example is the Möbius strip viewed as the total space of a real line bundle over a circle. In detail, view $\Reals^{2} \to \Reals$ as the trivial real line bundle via projection to the first factor. Divide $\Reals^{2}$ by the glide reflection $\gamma(x, v) = (x + 1, -v)$, and view the result, by projection to the first factor, as a line bundle over the circle $\Reals/\mathbf{Z}$. A fundamental domain for the action is the strip $[0, 1] \times \Reals$, and $(0, v) \sim (1, -v)$; that is, the edges of the strip are glued with a half-twist. It's easy (intermediate value theorem) to show this line bundle has no continuous, non-vanishing section, so the bundle is non-trivial. Your intuition about sections of Hermitian line bundles being local complex-valued functions is correct. The simplest Hermitian line bundles (over complex curves) are harder to visualize than real line bundles because their total spaces are complex surfaces, hence real four-manifolds. Take $M = \Cpx\Proj^{1}$, the complex projective line (real two-sphere). Removing many details: In the trivial complex line bundle, the set of unit vectors is (diffeomorphic to) $S^{2} \times S^{1}$. In the tangent bundle $TM = TS^{2}$, the set of unit vectors is (diffeomorphic to) the special orthogonal group $SO(3)$: A point $p$ of $S^{2}$ is a unit vector in $\Reals^{3}$, a unit tangent vector $v$ at $p$ is an element of $\Reals^{3}$ orthogonal to $p$, and the ordered triple $(p, v, p \times v)$ may be viewed as an element of $SO(3)$.
This association is clearly bijective (because reading the columns from an element of $SO(3)$ recovers a point of the sphere and a unit tangent vector, and these uniquely determine the third column) and smooth (write either map in coordinates coming from the entries of $3 \times 3$ real matrices). A possibly less familiar example is the complement $L$ of a point $q$ in the complex projective plane $\Cpx\Proj^{2}$, which is the total space of a non-trivial Hermitian line bundle in which the set of unit vectors is (diffeomorphic to) $S^{3}$. (Pick a complex projective line $M$ not containing $q$. For each point $x$ of $L$, let $\ell$ be the unique line through $q$ and $x$, and let $p = M \cap \ell$. The mapping $x \mapsto p$ is a line bundle projection. The bundle of unit vectors is (essentially) a sphere in $\Cpx\Proj^{2}$ centered at $q$.) The preceding examples are distinct because their sets of unit vectors are mutually non-diffeomorphic. There are infinitely many Hermitian line bundles over the projective line, classified by an integer, the Chern number. It turns out the preceding examples have Chern number $0$, $2$, and $1$ respectively.
If $A$ is not a regular language, $B$ is a regular language, and $B \neq \varnothing$, is $AB$ necessarily not regular?
Okay, I asked a senior student for help; here is the argument. Let $$L = \{\,0^11^2\cdots 0^{n-1}1^n0^{n-1}\cdots 1^20^1 : n \ge 1\,\},$$ so the word $W$ with parameter $n$ has $|W| = n^2$. Suppose $L$ were regular with pumping length $n$, and write $W = XYZ$ with $1\le|Y|\le n$. Then $$n^2 \lt n^2 + 1 \le |XY^2Z| \le n^2 + n \lt n^2 + 2n \lt (n + 1)^2,$$ so $|XY^2Z|$ lies strictly between consecutive squares, and hence $XY^2Z \notin L$. By the pumping lemma we can conclude that $L$ is not regular.
How many moves are at least needed to find a ball that is from the majority?
We suppose you have a strategy. I will be the person who can recognize the balls. What I will do is I will tell you what my answers are (based on your strategy). Then I will show that (a) there is an assignment of materials to the balls so that my answers are truthful and (b) you can't identify any iron ball after three questions. If you ask me about two balls neither of which you have asked about before, I will say they are the same. If you ask me about two balls at least one of which you have asked me about before, I will say they are different (except if you could already deduce what the correct answer was, in which case I give that answer, but it would be silly for you to ask such a question, because it's a wasted question). Now, let's draw a graph whose vertices are the seven balls, and whose edges are the three pairs of balls you asked about. For each of the connected components other than the isolated vertices, based on my answers, you know how the vertices are divided within that component into two materials. These components have two, three, or four vertices (because there are only three edges). Because of how I answered the questions, for each component, one material accounts for exactly two balls. We split into cases: either there is one component of size 4 split 2 and 2, together with 3 isolated vertices; or there is a component of size 3 split 2 and 1, a component of size 2 with both vertices the same, and 2 isolated vertices; or three components of size 2, each the same, and one isolated vertex. In each of these cases, you don't have enough information to determine an iron ball, and clearly, my answers were correct for some assignment of materials to balls.
Detect when two edges make a "inner" angle or an "outer" angle
Let $P_1=(x_1,y_1),\;P_2=(x_2,y_2),\;P_3=(x_3,y_3)$. Then we have two vectors (arrows, if you like): $${\bf v_1} = \overrightarrow{P_1P_2} = (x_2-x_1,\; y_2-y_1) \\ {\bf v_2} = \overrightarrow{P_2P_3} = (x_3-x_2,\; y_3-y_2).$$ The angle between them is found via their dot product: $$\theta_0 = \cos^{-1}\left(\dfrac{{\bf v_1}\cdot {\bf v_2}}{\|{\bf v_1}\|\;\|{\bf v_2}\|} \right).$$ This gives a value $\theta_0\in [0,\pi]$ and it assumes that ${\bf v_2}$ starts at $P_1$ instead of at $P_2$, which we must adjust for. We also need to consider whether, in moving from ${\bf v_1}$ to ${\bf v_2}$, we are "turning left" or "turning right". We can determine this by using the cross product of these two vectors. If we are turning left, the cross product vector ${\bf v_1}\times {\bf v_2}$ is in the positive $z$ direction, and if we are turning right it is in the negative $z$ direction. The $z$-coordinate, $z_c$, of this cross product is: $$z_c = (x_2-x_1)(y_3-y_2) - (x_3-x_2)(y_2-y_1).$$ Finally, we consider the side of the lines we move on. As we move along from $P_1$ to $P_2$ to $P_3$ this could be either the left or the right side of the lines. In the new examples provided, we are on the left in rows $1$ and $4$, and on the right in rows $2$ and $3$. So you could equate "being on the left of the lines" with either: moving left on the upper side, or moving right on the lower side. And you could equate "being on the right of the lines" with either: moving left on the lower side, or moving right on the upper side.
So, putting this all together, we have the required angle, $\theta_1$, as follows: \begin{eqnarray*} && \text{if $\theta_0 = 0$ then $\qquad\theta_1 = \pi$} \\ && \text{else if $\theta_0 = \pi$ then $\qquad\theta_1 = 0$} \\ && \text{else if we are on the LEFT of the lines and $z_c \gt 0$ then $\qquad\theta_1 = \pi - \theta_0$} \\ && \text{else if we are on the LEFT of the lines and $z_c \lt 0$ then $\qquad\theta_1 = \pi + \theta_0$} \\ && \text{else if we are on the RIGHT of the lines and $z_c \gt 0$ then $\qquad\theta_1 = \pi + \theta_0$} \\ && \text{else if we are on the RIGHT of the lines and $z_c \lt 0$ then $\qquad\theta_1 = \pi - \theta_0$}. \end{eqnarray*}
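The two ingredients, $\theta_0$ and the sign of $z_c$, can be sketched as follows (the function names are mine):

```python
import math

def turn(p1, p2, p3):
    """z_c, the z-component of v1 x v2:
    > 0 means turning left, < 0 turning right, 0 collinear."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return (x2 - x1) * (y3 - y2) - (x3 - x2) * (y2 - y1)

def angle_between(p1, p2, p3):
    """theta_0 in [0, pi] between v1 = P1P2 and v2 = P2P3, via the dot product."""
    v1 = (p2[0] - p1[0], p2[1] - p1[1])
    v2 = (p3[0] - p2[0], p3[1] - p2[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))  # clamp for safety

print(turn((0, 0), (1, 0), (2, 1)))                  # positive: left turn
print(angle_between((0, 0), (1, 0), (2, 1)))         # pi/4
```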
Conversion into contour integral and poles
As shown in many many places throughout this site, one can limit the number of residues to evaluate by using the symmetry of the integrand and making a wise choice of contour that exploits this symmetry. In this case, consider $$\oint_C dz \frac{z^2}{z^6+1} $$ where $C$ is a wedge-shaped contour in the upper-half plane of radius $R$ and of angle $\pi/3$. The reason why the wedge has an angle of $\pi/3$ will become apparent as we carry out the calculation. The contour integral is equal to $$\int_0^R dx \frac{x^2}{1+x^6} + i R \int_0^{\pi/3} d\theta \, e^{i \theta} \frac{R^2 e^{i 2 \theta}}{1+R^6 e^{i 6 \theta}}+ e^{i \pi/3} \int_R^0 dt \, \frac{(e^{i \pi/3})^2 t^2}{1+(e^{i \pi/3})^6 t^6}$$ As $R \to \infty$, the magnitude of the second integral is bounded by $$\left | i R \int_0^{\pi/3} d\theta \, e^{i \theta} \frac{R^2 e^{i 2 \theta}}{1+R^6 e^{i 6 \theta}} \right | \le \frac{\pi}{3} \frac{R^3}{R^6-1}$$ which clearly vanishes in this limit. In this limit, the first integral becomes what we are looking for and the third integral becomes a multiple of what we are looking for. The sum of these two is equal to the contour integral which is $$2 \int_0^{\infty} dx \frac{x^2}{1+x^6} = \int_{-\infty}^{\infty} dx \frac{x^2}{1+x^6} $$ By the residue theorem, this is equal to $i 2 \pi$ times the sum of the residues of the poles inside $C$. Note that we have the advantage of having only one pole inside $C$ at $z=e^{i \pi/6}$ instead of three had we carried out the computation with a semicircle. Thus, $$\int_{-\infty}^{\infty} dx \frac{x^2}{1+x^6} = i 2 \pi \frac{(e^{i \pi/6})^2}{6 (e^{i \pi/6})^5}= \frac{\pi}{3}$$ Note that the wedge angle of $\pi/3$ made the integrands of the first and third integrals essentially the same within a constant factor.
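A quick numerical check of the final value, as a sketch with a hand-rolled Simpson's rule (the tail of the integrand beyond $x=r$ is under $\tfrac{1}{3r^3}$, so a modest cutoff suffices):

```python
import math

def f(x):
    return x * x / (1 + x ** 6)

def simpson(f, a, b, n):
    # composite Simpson's rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

half_line = simpson(f, 0.0, 100.0, 200_000)   # tail beyond 100 is < 1/(3*100^3)
full_line = 2 * half_line                     # the integrand is even
print(full_line, math.pi / 3)
```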
Krull dimension of $A[x]/\langle x^2 + 1 \rangle$
The ring extension $A\subset A[x]/(x^2+1)$ is integral ($x$ is a root of the monic polynomial $t^2+1\in A[t]$), so $\dim A=\dim A[x]/(x^2+1)$. Remark. You can replace $x^2+1$ by any monic polynomial.
Dimension of a vector space of sequences and if it is complete?
The sequence that you created does not converge in that norm. The answer has been presented by @Mindlack in the comment. But I would like to point out a way to see the completeness of $V$. Actually $V$ is $L^{1}(\mu)$, where $d\mu(x)=\omega(x)d\nu(x)$, here the variable $x$ is discrete, it is just $n$, and $\omega(x)=2^{x}$, $\nu$ being the counting measure, and that \begin{align*} \sum_{n}2^{n}|a_{n}|=\int_{\mathbb{N}}|f(x)|\omega(x)d\nu(x)=\int_{\mathbb{N}}|f(x)|d\mu(x), \end{align*} where $f(x)=a_{x}$. And we know that $L^{1}(\mu)$ is complete for every measure $\mu$.
Baire functions question
Do you understand why $F(x)$ is discontinuous? Compare $F(0)$ to $F(x)$ for $x\neq 0$. This proves $F(x)$ is Baire class 1, at least according to the Wikipedia definition.
exactly one bilinear form
Can I write it like this? Let $A$ be any bilinear form with $A(e_{i},e_{j})=a_{ij}$. Since $B(e_{i},e_{j}) = A(e_{i},e_{j})$, we get $$A(x,y)=\sum_{i, j = 1}^{n}a_{ij}x_{i}y_{j}=B(x,y),$$ which proves the uniqueness of the bilinear form $B:V \times V \rightarrow K$.
Difficulty in computing double integral over rectangle
Over the rectangle, the parabola $y=x^2$ passes through the corner $(1,1)$; the other parabola remains above it. You are first integrating in $x$ and then in $y$, which is correct: in the other order you would have to split the region into two integrals. Can you try it the other way, with the order of integration interchanged?
Perturbation problem
We're looking for a perturbative expansion of the solution $y(x;\epsilon)$ to $$ y(x) = x-\epsilon \sin(2y)$$ I don't know if you're asking for an expansion to all orders. If so, I have no closed form to offer. But, for illustration, the first four terms in the series may be found as follows by use of the addition theorem (this procedure may be continued to any desired order). Expand $$y(x;\epsilon)=y_o(x)+\epsilon y_1(x)+\epsilon^2 y_2(x)+\epsilon^3 y_3(x) + o(\epsilon^3).$$ Then clearly, $y_o(x)=x$ and $$\epsilon \,y_1(x) +o(\epsilon)=-\epsilon\sin\bigl(2x+2\epsilon y_1+o(\epsilon)\bigr)=-\epsilon\sin(2x)\underbrace{\cos(2\epsilon y_1+o(\epsilon))}_{\sim 1}-\epsilon\cos(2x)\underbrace{\sin(2\epsilon y_1+o(\epsilon))}_{\sim 2\epsilon y_1 = O(\epsilon)}.$$ This implies $y_1(x)= -\sin(2x)$. Similarly, at the next two orders we find $y_2(x)=2\sin(2x)\cos(2x)=\sin(4x)$ and $y_3(x)=2\sin^3(2x)-2\cos(2x)\sin(4x)$, if I haven't made a mistake in the algebra. Hence $$y(x;\epsilon) = x - \epsilon \sin(2x) + \epsilon^2 \sin(4x)+\epsilon^3 \left(2\sin^3(2x)-2\cos(2x)\sin(4x)\right) +o(\epsilon^3). $$
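The coefficients can be spot-checked numerically: for small $\epsilon$, fixed-point iteration on $y = x - \epsilon\sin(2y)$ converges (the map contracts when $2|\epsilon| < 1$), and the truncated series should agree with it to $O(\epsilon^4)$. A verification sketch:

```python
import math

def y_exact(x, eps, iters=200):
    # fixed-point iteration for y = x - eps*sin(2y); contracts for 2|eps| < 1
    y = x
    for _ in range(iters):
        y = x - eps * math.sin(2 * y)
    return y

def y_series(x, eps):
    y1 = -math.sin(2 * x)
    y2 = math.sin(4 * x)
    y3 = 2 * math.sin(2 * x) ** 3 - 2 * math.cos(2 * x) * math.sin(4 * x)
    return x + eps * y1 + eps ** 2 * y2 + eps ** 3 * y3

x, eps = 0.9, 0.01
print(abs(y_exact(x, eps) - y_series(x, eps)))   # O(eps^4): tiny
```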