On a symplectic manifold $(M,\omega)$, is it true for any $2$-form $\mu$, that $\mu=0$ iff $\omega^{n-1}\wedge\mu=0$?
It is a mistake in the paper, as the counterexamples in the comments section demonstrate.
How to prove that my CFG grammar generates $L_1$
It's easy to see that no string in $L$ is generated by the grammar: a string in $L$ starts with $a$ and ends in $b$, so the first production used would have to be $S\to aSb$, and then there can't be a counterexample of minimal length. Now we must show that the grammar generates every string in $L_1$. We proceed by induction on the length of strings in $L_1$. The grammar generates the strings of length $1$ (and $\lambda\notin L_1$). Suppose that $\omega\in L_1$, $|\omega|=n+1$, and that the grammar generates all strings in $L_1$ of length $\leq n$. If $\omega$ starts with $a$ and ends in $a$, we can generate $\omega$ starting with $S\to aSa$, since we can generate $a^{k-1}b^k$ from $S$. If $\omega$ starts with $b$ and ends in $b$, we can generate $\omega$ starting with $S\to bSb$, unless $\omega=ba^kb^kb$, in which case we can generate $\omega$ starting with $S\to bS$. If $\omega$ starts with $a$ and ends in $b$, then we can generate $\omega$ starting with $S\to aSb$, since $\omega$ cannot be of the form $aa^kb^kb$.
Group law on invertible fractional ideals on a scheme
$\mathcal{K}_X$ is a sheaf of $\mathcal{O}_X$-algebras. Thus for sections of submodules of $\mathcal{K}_X$, the notion of product is naturally defined over any open set. So for any open $U$ there is a well-defined product $\mathcal{I}(U)\mathcal{J}(U)$. This gives you the desired sheaf.
If the line tangent to the graph of $f(x)=e^{3x^2}$ at the point $(a,f(a))$ is parallel to the line $y=3x-5$, then what is the value of $a$?
In terms of the Lambert W function: \begin{align} 2a\exp(3a^2)&=1 ,\\ 4a^2\exp(6a^2)&=1 ,\\ 6a^2\exp(6a^2)&=\tfrac32 ,\\ \operatorname{W}(6a^2\exp(6a^2))&=\operatorname{W}(\tfrac32) ,\\ 6a^2&=\operatorname{W}(\tfrac32) . \end{align} Since $\tfrac32>0$, there is only one real value of the right-hand side, $\operatorname{W_0}(\tfrac32)$. The answer then is \begin{align} a=\sqrt{\tfrac16\operatorname{W_0}(\tfrac32)} \approx0.3478173 . \end{align}
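A quick numerical sanity check (a sketch, assuming SciPy's `lambertw`):

```python
import numpy as np
from scipy.special import lambertw

# a = sqrt(W_0(3/2)/6); the slope condition is f'(a) = 6a*exp(3a^2) = 3
a = np.sqrt(lambertw(1.5).real / 6)
print(a)                          # ~0.3478173
print(6 * a * np.exp(3 * a**2))   # ~3.0, matching the slope of y = 3x - 5
```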
Quick calculation of modulo with large exponent
We have $$2^{10}=1024\equiv89\pmod{187}$$ $$89^2=7921\equiv67\pmod{187}$$ $$67^2=4489\equiv1\pmod{187}$$ so $$2^{107}=2^{100}\cdot2^{7}=(2^{10})^{10}\cdot128\equiv89^{10}\cdot128\equiv67^5\cdot128\equiv67\cdot128=8576\equiv161\pmod{187}$$ as desired.
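For anyone who wants to double-check the arithmetic, Python's built-in three-argument `pow` does modular exponentiation directly:

```python
print(pow(2, 107, 187))  # 161, matching the hand computation
```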
Show that $x^5+x+1=0$ has a solution in $\mathbb{R}$
Yes, your work appears to be correct. To prove this solution is unique, note that $$f\left(x\right) = x^5 + x + 1$$ means that $$f'\left(x\right) = 5x^4 + 1$$ which is always $\ge 1$ as $x^4 \ge 0$ for all real $x$. Thus, $f$ is a strictly increasing function. You could also graph the function to see this.
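For a concrete interval, note $f(-1)=-1<0<1=f(0)$, so the intermediate value theorem places the root in $(-1,0)$; a numerical check (a sketch, assuming NumPy) confirms there is exactly one real root:

```python
import numpy as np

# Roots of x^5 + x + 1 (coefficients in decreasing degree)
roots = np.roots([1, 0, 0, 0, 1, 1])
real_roots = roots[np.abs(roots.imag) < 1e-9].real
print(real_roots)  # one real root, ~ -0.7549, inside (-1, 0)
```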
What is the equation of the following polar curve?
Probably not what you are looking for, but, with $$f(t) = \frac{1}{3}t+\frac{3}{2}\cos\left(\frac{1}{2}t\right)-\frac{1}{5}\sin t,$$ the curve $$ x(t)=\int_{0}^t \cos(f(u)) \, du, \,\, y(t) = \int_{0}^t \sin(f(u)) \, du $$ looks like this (plot omitted), which looks like your curve if you squint. UPDATE: With $$f(t) = \frac{1}{3}t+\cos\left(\frac{1}{2}t\right)-\frac{1}{5}\sin t,$$ the curve is this (plot omitted).
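For reference, a short script reproducing the first plot by integrating the tangent-angle parametrization numerically (a sketch, assuming NumPy, SciPy and Matplotlib):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import cumulative_trapezoid

# Trace (x(t), y(t)) by accumulating (cos f, sin f)
t = np.linspace(0, 60, 6000)
f = t / 3 + 1.5 * np.cos(t / 2) - 0.2 * np.sin(t)
x = cumulative_trapezoid(np.cos(f), t, initial=0)
y = cumulative_trapezoid(np.sin(f), t, initial=0)
plt.plot(x, y)
plt.axis("equal")
plt.show()
```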
Is $\|x\| = \| \overline{x} \|$ in an inner product space?
I believe I have a counterexample to your statement about the equality $\|x\|=\|\bar{x}\|$. Let $S=\{a, b\}$ be a set with two elements. Define a complex-valued function $f$ on $S$ by $$f(a)=1+i, \quad f(b)=1$$ and consider the two-dimensional space $V=\operatorname{Span}\{f, \bar{f}\}$ over $\mathbb{C}$ equipped with the inner product $$(\phi, \psi)=[\phi]_{\mathcal{B}}^*P[\psi]_{\mathcal{B}}$$ where $\mathcal{B}=\{f, \bar{f}\}$ and $*$ denotes the conjugate transpose and $P=\begin{bmatrix}1&1\\1&2\end{bmatrix}$. Now consider $g=3if+\bar{f}\in V$. Then $$\|g\|^2= \begin{bmatrix}-3i&1\end{bmatrix} \begin{bmatrix}1&1\\1&2\end{bmatrix} \begin{bmatrix}3i\\1\end{bmatrix}=11 $$ whereas $$\|\bar{g}\|^2= \begin{bmatrix}1&3i\end{bmatrix} \begin{bmatrix}1&1\\1&2\end{bmatrix} \begin{bmatrix}1\\-3i\end{bmatrix}=19. $$ The same example shows that the function defined by $\theta(x, y)=(\bar{x}, y)$ is not symmetric in general, which is a counterexample to the statement of the problem linked to this question.
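The two norms are easy to verify numerically (a sketch, assuming NumPy):

```python
import numpy as np

P = np.array([[1, 1], [1, 2]], dtype=complex)
g = np.array([3j, 1])      # coordinates of g = 3i*f + conj(f) in the basis {f, conj(f)}
gbar = np.array([1, -3j])  # coordinates of conj(g) in the same basis
print((g.conj() @ P @ g).real)        # 11.0
print((gbar.conj() @ P @ gbar).real)  # 19.0
```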
$d$-fold Veronese Embedding
First of all, \begin{equation*} N=\binom{n+d}{d}-1. \end{equation*} a) You prove that $\nu_d$ is a continuous injective map (use the canonical affine open covering of $\mathbb{P}^N$); so its image is an irreducible closed subset of $\mathbb{P}^N$; equivalently, $\ker\nu_d^{*}=I$ is a homogeneous prime ideal of $\mathbb{K}[Y_0,\dots,Y_N]$ (mind the dimension of the codomain). b) From (a), $\nu_d$ is a bijection between $\mathbb{P}^n$ and $V_+(I)$; using again the canonical affine open covering of $\mathbb{P}^{N}$, you construct the explicit inverse map, and you see that it is a regular map as well. Hint: On $U_0$ you can define the local inverse morphism $\left(1:\frac{z_1z_0^{d-1}}{z_0^d}:\dots:\frac{z_n^d}{z_0^d}\right)\mapsto\left(1:\frac{z_1}{z_0}:\dots:\frac{z_n}{z_0}\right)$. c) Let $f$ be a homogeneous polynomial of degree $d$; then $f$ is the image via $\nu_d^{*}$ of a degree $1$ polynomial $g$ in $\mathbb{K}[Y_0,\dots,Y_N]$. It's easy to prove that \begin{equation*} \nu_d(V_+(f))=V_+(I)\cap V_+(g), \end{equation*} where $V_+(g)$ is a hyperplane in $\mathbb{P}^N$. Update. By the canonical open affine covering of $\mathbb{P}^m$ I mean the covering given by the following open affine sets: \begin{equation*} \forall i\in\{0,\dots,m\},\,U_i=\{[z^0:\dots:z^m]\in\mathbb{P}^m\mid z^i\neq0\}. \end{equation*} $\nu_d^{*}$ is the morphism of rings associated to $\nu_d$.
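To make the map concrete, here is a tiny numerical sketch (the helper `veronese` is hypothetical) for $n=1$, $d=2$, where $N=\binom{3}{2}-1=2$ and the image is the conic $Y_0Y_2-Y_1^2=0$:

```python
import numpy as np
from itertools import combinations_with_replacement

# nu_d sends (z0 : z1) to all degree-d monomials: (z0^2 : z0 z1 : z1^2) for d = 2
def veronese(z, d):
    monomials = combinations_with_replacement(range(len(z)), d)
    return np.array([np.prod([z[i] for i in m]) for m in monomials])

v = veronese(np.array([2.0, 3.0]), 2)
print(v)                    # [4. 6. 9.]
print(v[0]*v[2] - v[1]**2)  # 0.0: the image satisfies Y0*Y2 - Y1^2 = 0
```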
N-nacci Identities: The Final Question (Generalizing Time!)
Here is the generalization they are probably looking for (with a few details left to be filled in): Pseudo-theorem: Suppose that $f_n$ is a sequence defined by a recurrence relation with a generating function $g(z)$. Then if $g(z)$ satisfies certain properties (what properties are required to make the proof go through?), one can derive an explicit formula for $f_n$ via $f_n + \sum_{z\neq 0} \operatorname{Res}_z \frac{g(z)}{z^{n+1}}=0$. As a first attempt at figuring out what properties $g$ should satisfy, try seeing what is necessary if you assume that $g(z)$ is a rational function. Note that you will need to be a little more explicit about part (4) before you can get a satisfactory answer here. Also, if you assume that the denominator of $g$ only has simple roots, you should be able to come up with a rather explicit formula in terms of the factorization. For this, the following result (which you should try to prove; it uses nothing more than the product rule) is useful: Lemma: If $a$ is a simple root of a polynomial $p(z)$, then $\displaystyle \lim_{z\to a} \frac{p(z)}{z-a}=p'(a)$. To see why this lemma is useful, try converting what you have so far for (4) into the Binet formula. Edit: Adding an answer to the newly updated question (really, it's bad etiquette to continually change what you are asking in a question, as it makes previous answers look irrelevant). The radius of convergence of a function $f(z)=\displaystyle\sum_{i\geq 0} a_i z^i$ is $\inf \{ \left| z \right| \mid z \in \mathbb C \text{ and } f \text{ has a pole/singularity at } z\}$. The statement can be found on Wikipedia. It is easy to prove that the radius of convergence is at most this value, but proving the other direction requires a bit of complex analysis. The Wikipedia page links to another article which (in the remarks section) has a brief proof sketch. Therefore, for a rational function, as long as the denominator is not a multiple of $z$, you have a non-zero radius of convergence. There is a slightly subtler question to be asked with generating functions. Given a sequence $(f_n)$, you can always form the generating function $\sum f_n z^n$ as a formal power series, and if the sequence satisfies a recurrence relation, the generating function will often satisfy some sort of algebraic or differential equation, and in this case you can often use this equation to solve for the generating function (as you did in this problem). However, there is an important caveat. At least a priori, since the ring of formal power series, the ring of power series with non-zero radii of convergence, and the ring of continuous/smooth functions defined on a neighborhood of $0$ are all different, a solution existing in one of these settings does not mean there is a solution in another, and even if there are, it doesn't mean they correspond. To give an analogy, you can formally consider the sum $s=1+2+4+8+\ldots$, ignoring issues of convergence. Formally speaking $s=2s+1$ (because multiplying by 2 shifts every term over), and so if the sum converged to a number in $\mathbb R$, it would have to converge to $-1$, because that is the only solution to our constraining equation over $\mathbb R$. A sum of positive numbers can't be negative, though. The problem is that, to the extent which working formally with sums of this kind makes any sense, there are multiple solutions to the equation $s=2s+1$ when working formally.
The good news is that everything works out quite nicely for the n-Fibonacci numbers, or more generally for linear constant-coefficient recurrences (I think).
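As a concrete instance of the pseudo-theorem for a rational generating function whose denominator has simple roots, here is a sketch (assuming NumPy) for the tribonacci numbers, with $g(z)=z/(1-z-z^2-z^3)$:

```python
import numpy as np

q = np.poly1d([-1, -1, -1, 1])  # denominator 1 - z - z^2 - z^3 (highest degree first)
dq = q.deriv()

def trib(n):
    # f_n = -sum over the nonzero poles r of Res_r g(z)/z^(n+1) = -r/(q'(r) r^(n+1))
    return round(sum(-r / (dq(r) * r**(n + 1)) for r in q.roots).real)

print([trib(n) for n in range(10)])  # [0, 1, 1, 2, 4, 7, 13, 24, 44, 81]
```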
Show $\int_{\mathbb{R}^3} \frac{1}{\vert{\eta -v\vert}^2} \frac{1}{(1+\vert \eta \vert)^4} d\eta \leq \frac{C}{(1+\vert v \vert)^2}$
We can compute the integral by rotating the $\sigma$-coordinate system in such a way that $v$ points towards the north pole of the sphere (i.e. in $e_3$-direction) and introducing polar coordinates $(\theta,\phi)$. Then the norm becomes $\lvert v + \rho \sigma \rvert = \sqrt{\lvert v \rvert^2 + \rho^2 + 2 \lvert v \rvert \rho \cos(\theta)}$ and the integral turns into $$ \int \limits_0^\infty \int \limits_0^\pi \int \limits_0^{2\pi} \frac{\sin(\theta) \, \mathrm{d} \phi \, \mathrm{d} \theta \, \mathrm{d} \rho}{\left(1 + \sqrt{\lvert v \rvert^2 + \rho^2 + 2 \lvert v \rvert \rho \cos(\theta)}\right)^4} = 2 \pi \int \limits_0^\infty \int \limits_{-1}^1 \frac{\mathrm{d} t \, \mathrm{d} \rho }{\left(1 + \sqrt{\lvert v \rvert^2 + \rho^2 + 2 \lvert v \rvert \rho t}\right)^4} \equiv f(\lvert v \rvert) \, .$$ Clearly, $f(0) = \frac{4 \pi}{3}$. For $r > 0$ we can let $s = \sqrt{r^2 + \rho^2 + 2 r \rho t}$ to obtain $$ f(r) = 2 \pi \int \limits_0^\infty \int \limits_{\lvert r - \rho \rvert}^{r + \rho} \frac{s}{r \rho (1+s)^4} \, \mathrm{d} s \, \mathrm{d} \rho \, .$$ The remaining integrals are not particularly hard, albeit slightly tedious, and the final result looks rather nice: $$ f(r) = \frac{4\pi}{3} \begin{cases} 1, & r = 0 \\ \frac{1}{3}, & r = 1 \\ \frac{1-r^4 + 4 r^2 \log(r)}{(1-r^2)^3} , & r \in (0,\infty) \setminus \{1\}\end{cases} \, .$$ We want to find some $C > 0$ such that $f(r) \leq \frac{C}{(1+r)^2}$ holds for every $r \geq 0$. If any constant is fine, we can simply use the inequality $\frac{-\log(r)}{1-r} \geq \frac{2}{1+r}\, ,r > 0,$ which follows from $\frac{\tanh(t)}{t} \leq 1\, , t \in \mathbb{R}$, with $t = \frac{1}{2} \log(r)$. It leads to $$ \frac{3}{4\pi} f(r) = \frac{1 + r + r^2 + r^3 - 4 r^2 \frac{-\log(r)}{1-r}}{(1+r)^3 (1-r)^2} \leq \frac{1 + r + r^2 + r^3 - 4 r^2 \frac{2}{1+r}}{(1+r)^3 (1-r)^2} = \frac{1 + \frac{2 r}{(1+r)^2}}{(1+r)^2} \leq \frac{3}{2 (1+r)^2} $$ and therefore $C = 2 \pi$. In order to find the optimal constant we need to use the more complicated inequality $$\frac{-\log(r)}{1-r} \geq \frac{-1 + 7r + 7r^2-r^3}{12 r^2} \, , \, r > 0,$$ which is a consequence of $$ \frac{\tanh(s)}{s} \leq \frac{1}{4} \left[\frac{\sinh(s)}{s} + \frac{3}{\cosh(s)}\right]\, , \, s \in \mathbb{R},$$ with $s = \log(r)$ (the latter can be proven by rewriting it as $1 - \frac{\sinh(s) [4- \cosh(s)]}{3 s} \geq 0$ and noting that the Maclaurin series of the left-hand side contains only non-negative terms). It follows that $$ \frac{3}{4\pi} f(r) \leq \frac{1 + r + r^2 + r^3 - \frac{-1+7r+7r^2-r^3}{3}}{(1+r)^3 (1-r)^2} = \frac{4}{3 (1+r)^2} $$ and the corresponding constant $C = \frac{16 \pi}{9}$ is the smallest possible, since $f(1) = \frac{4\pi}{9} = \frac{16 \pi}{9 (1+1)^2}$. It would be nice to obtain the estimate without evaluating the integral explicitly, but I have not found a way to do that yet.
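A numerical spot-check of both the closed form and the final bound (a sketch, assuming SciPy):

```python
import numpy as np
from scipy.integrate import dblquad

def f_closed(r):
    return 4*np.pi/3 * (1 - r**4 + 4*r**2*np.log(r)) / (1 - r**2)**3

def f_numeric(r):
    integrand = lambda t, rho: (1 + np.sqrt(r**2 + rho**2 + 2*r*rho*t))**(-4)
    val, _ = dblquad(integrand, 0, np.inf, lambda rho: -1, lambda rho: 1)
    return 2*np.pi*val

for r in (0.5, 2.0):
    # numeric ~ closed form, and both stay below 16*pi/(9*(1+r)^2)
    print(f_numeric(r), f_closed(r), 16*np.pi/(9*(1 + r)**2))
```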
Is $A$ a hereditary subalgebra of $A\otimes\mathcal{K}$
If $A$ is $\sigma$-unital you can do the following. Let $h_A \in A$ be a strictly positive element. Then you can consider the corner $$ \overline{h(A \otimes \mathcal K)h} \ , $$ where $h = h_A \otimes e_{11}$.
Maximising and minimising $f(x,y)$ on $x^2+y^2\leqslant 9$
You can use a very elementary argument to show that the maximum and minimum must occur on the boundary. Suppose, if possible, that the maximum occurs at a point $(a,b)$ with $a^{2}+b^{2} <9$. By increasing or decreasing $a$ slightly you can stay within the disk but make the first term in $f$ larger. This contradiction shows that the maximum occurs on the boundary. Similarly for the minimum.
Galois groups of polynomials and explicit equations for the roots
One of the fundamental techniques for doing this is to compute the (Lagrange) resolvents associated to the equation. Assuming the group is solvable, the resolvents will be factorizable and amenable to lower-degree auxiliary equations. To get a first idea of all this, I would advise you to look at Harold Edwards's splendid book "Galois Theory" (ISBN 038790980X), where he does this for the elementary cases and for an example of the cyclotomic equation. With a modern Computer Algebra System one can explore the idea further more easily, but considerable ingenuity is needed because the degree of the auxiliary equations rises quickly. Some CAS (such as GAP, which is a free and open-source academic research tool) are able to give you the Galois groups of low-degree polynomials, as well as properties of splitting fields, and you can look at their tutorial for examples of their use. Effective and Inverse Galois Theory is still an active research subject. Some of the works by Annick Valibouze, N. Durov, Klueners, etc. might help you to go further.
Convolution - Heaviside
If $t<1$ then $t-u-1<0$ for all $u$ between $0$ and $t$ so $H(t-u-1)=0$ for all $u$ and hence the integral is $0$. If $t>1$ then $H(t-u-1)=0$ for $u >t-1$ and $H(t-u-1)=1$ for $u <t-1$ so the integral is effectively from $0$ to $t-1$.
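A quick numerical confirmation (a sketch, assuming NumPy; the integral $\int_0^t H(t-u-1)\,du$ should be $0$ for $t<1$ and $t-1$ for $t>1$):

```python
import numpy as np

H = lambda s: np.heaviside(s, 1.0)

def conv(t, n=100000):
    # left Riemann sum of H(t - u - 1) over u in [0, t]
    u = np.linspace(0, t, n, endpoint=False)
    return H(t - u - 1).sum() * (t / n)

print(conv(0.5))  # ~0.0
print(conv(3.0))  # ~2.0 = t - 1
```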
Should every group be a monoid, or should no group be a monoid?
In my opinion Definition 2' is the correct definition (and Definition 2 is not a definition, but rather a characterization), because it follows the general principles of how algebraic structures are defined in universal algebra (by operations satisfying equations), and therefore it also directly generalizes to other categories, for example topological spaces. A topological group is not just a topological monoid such that every object has an inverse. We also need that the inverse map is a continuous map (it is an interesting fact that for Lie groups this is automatic). So it is useful when the inverse map belongs to the data. There are several other reasons why this is important: The subgroup generated by a subset $X$ of a group has underlying set $\{a_1^{\pm 1} \cdots a_n^{\pm 1} : a_i \in X\}$. When you view groups as special monoids, you may forget the $\pm 1$ here. Different categories should always be considered as disjoint. This helps to organize mathematics a lot. Unfortunately, forgetful functors between these categories are usually ignored, treated as if they were identities, but of course they are not. For example, we have the forgetful functor $U : \mathsf{Grp} \to \mathsf{Mon}$. It turns out that it is fully faithful (in many texts group homomorphisms are defined that way; of course, again, this is not conceptually correct). It has a left adjoint (the Grothendieck construction) as well as a right adjoint (the group of units). In particular it preserves all limits and colimits. These properties of $U$ show that often (but not always!) there is no harm when you identify $G$ with $U(G)$. By the way, Definition 2' is conceptually not complete yet. You have to assume $x \cdot i(x) = i(x) \cdot x = e$. This becomes important when you study group objects in non-cartesian categories, aka Hopf monoids, for example Hopf algebras. The axiom for the antipode then contains two diagrams.
Show that $E=\mathbb{R}u\oplus\mathbb{R}v$
The statement is false. On $E=\mathbb R^4$ let $$ \varphi(u,v) = u^t \begin{pmatrix} 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \end{pmatrix} v. $$ Now $\varphi(e_1, e_2) = 1$ but $e_1$ and $e_2$ do not span $\mathbb R^4$.
Evaluation of $dx$ in trigonometric substitution
\begin{align} x&=2\sin\theta\\ \frac{d}{d\theta}(x)&=\frac{d}{d\theta}(2\sin\theta)\\ \frac{dx}{d\theta}&=2\cos\theta\\ dx&=2\cos\theta\ d\theta \end{align} Treating the $d\theta$ like a denominator is almost always permissible when working with single variable calculus.
Derivative of a trigonometric function $\arctan$
Let the function $f(x)$ be given by $$f(x)=\arctan\left(\sqrt{\frac{1-\cos(x)}{1+\cos(x)}}\right)=\arctan\left(|\tan(x/2)|\right)$$ If $x\in (2n\pi,(2n+1)\pi)$, then $\tan(x/2)> 0$ and we have $$\arctan\left(|\tan(x/2)|\right)=\arctan\left(\tan(x/2)\right)=\frac x2-n\pi$$ If $x\in ((2n-1)\pi,2n\pi)$, then $\tan(x/2)< 0$ and we have $$\arctan\left(|\tan(x/2)|\right)=-\arctan\left(\tan(x/2)\right)=-\frac x2+n\pi$$ Differentiating, we find that $$f'(x)=\begin{cases}\frac12&,x\in(2n\pi,(2n+1)\pi)\\\\-\frac12&,x\in((2n-1)\pi,2n\pi)\end{cases}$$
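A finite-difference check of the two slopes (a sketch, assuming NumPy):

```python
import numpy as np

f = lambda x: np.arctan(np.sqrt((1 - np.cos(x)) / (1 + np.cos(x))))
h = 1e-6
print((f(1.0 + h) - f(1.0 - h)) / (2*h))    # ~ 0.5 on (0, pi)
print((f(-1.0 + h) - f(-1.0 - h)) / (2*h))  # ~ -0.5 on (-pi, 0)
```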
Showing that $\log(n)^{\log(\log(n))} \in \mathcal{O}(n)$
Set $e^t = \log n$, i.e. $t=\log\log n$. Then the LHS is $(\log n)^{\log\log n}=(e^t)^t=e^{t^2}$ and the RHS is $n=e^{e^{t}}$; since $t^2=o(e^t)$ as $t\to\infty$, the claim follows.
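The ratio indeed collapses quickly (a sketch, assuming NumPy):

```python
import numpy as np

for n in (1e3, 1e6, 1e12):
    print(np.log(n)**np.log(np.log(n)) / n)  # tends to 0 as n grows
```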
What's the multiplication in the continuation monad?
Given $f\in [[[[X,S],S],S],S]$ and $g\in [X,S]$, define $\mu_X(f)(g) = f(\eta_{[X,S]}(g))$. Why is this the right thing to do? Well, the functor $[-,S]$ is (contravariantly) self-adjoint, with the unit equal to the counit (in the opposite category), and the monad $[[-,S],S]$ arises from this adjunction. If you work out what the definition of the monad induced by an adjunction says in this case, you get the formula for the unit map given in your question and the formula for the multiplication given above. A little searching on this subject led me to this question, where it's pointed out that Hayo Thielecke's PhD thesis "Categorical Structure of Continuation Passing Style" might be a relevant reference.
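In programming terms the two formulas look like this (a sketch in Python, with $S$ left implicit as the ambient result type):

```python
# Continuation monad T(X) = [[X,S],S]: values are functions expecting a continuation k
def unit(x):   # eta_X : X -> T(X)
    return lambda k: k(x)

def mu(f):     # mu_X : T(T(X)) -> T(X), defined by mu(f)(g) = f(eta_{[X,S]}(g))
    return lambda g: f(unit(g))

m = unit(unit(42))             # a doubly wrapped value
print(mu(m)(lambda x: x + 1))  # 43: mu collapses the two layers
```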
Sum of a stochastic process
Let $X_1$, $X_2$ be independent (and discrete, for the time being) random variables, and let $Z=X_1+X_2$. Then, to obtain the distribution of $Z$ one needs to compute $P(Z=z)$ where $z$ is an arbitrary value: $$P(Z=z) = \sum_{k=-\infty}^{\infty} P(X_1=k) P(X_2=z-k)$$ This is convolution. It is easy to do the same for the continuous versions of $X_1$ and $X_2$. So, the distribution of the sum of two independent random variables is the convolution of their distributions. This can be generalized to $N$ variables (by adding them one by one). When $X_1$ and $X_2$ are not independent, the above is not true. To see this, think about how to compute $P(Z=z)$: you would need to use the conditional probabilities $P(X_1|X_2)$ or $P(X_2|X_1)$. When the $X_i$ are independent normally distributed random variables, $Z$ is also normally distributed, with mean $\sum_i \mu_{X_i}$ and variance $\sum_i \sigma^2_{X_i}$. There are other distributions where the distribution of the sum can be expressed in closed form; for example, the Cauchy distribution.
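For instance, the pmf of the sum of two independent fair dice is the convolution of their pmfs (a sketch, assuming NumPy):

```python
import numpy as np

p_die = np.full(6, 1/6)            # pmf on {1, ..., 6}
p_sum = np.convolve(p_die, p_die)  # pmf of the sum on {2, ..., 12}
print(np.round(p_sum * 36))        # [1. 2. 3. 4. 5. 6. 5. 4. 3. 2. 1.]
```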
Formally express all possible 2-tuples of a set
It looks like you want the set of two-element subsets of $S$. (I'd avoid saying "pair", since that suggests "ordered pair," which isn't what you want based on your last sentence.) The notation for this is "$[S]^2$" or "$[S]^{(2)}$" (I've seen both); similarly, we use "$[S]^{<\omega}$" for the set of finite subsets of $S$.
Minimize interpolation error for $\sin(x)$ on $[0,\pi]$
Your idea in general fails because $\xi$ is a function of $x$. The equioscillation theorem tells you that the minimizer is uniquely characterized by the requirement that there be three points $a,b,c$ where the error attains the same absolute value with alternating sign. By symmetry this can be achieved by simply taking $p$ to be the constant $1/2$, so that the requirement is satisfied at $0,\pi/2,\pi$.
Solve the stochastic differential equation $dY_t = rdt + \alpha Y_tdB_t$ where $B_t$ is a Brownian motion.
Since $F_t = \exp(-\alpha B_t + \alpha^2 t / 2)$, we can write $F_t = f(t,B_t)$, where $f(t,x) = \exp(-\alpha x + \alpha^2 t/2)$. Thus, by Itô's Lemma, $$ \mathrm d F_t = \frac 12 \alpha^2 F_t \,\mathrm d t -\alpha F_t \, \mathrm d B_t + \frac 12 \alpha^2 F_t \,\mathrm d t=\alpha^2 F_t \,\mathrm d t -\alpha F_t\, \mathrm d B_t.$$ Then, by the product rule, \begin{align*} \mathrm d(FY)_t &= F_t\,\mathrm d Y_t + Y_t\, \mathrm dF_t + \mathrm d [F,Y]_t \\ &= r F_t\,\mathrm d t + \alpha F_t Y_t \,\mathrm d B_t +\alpha^2 F_t Y_t \,\mathrm dt -\alpha F_t Y_t\,\mathrm d B_t -\alpha^2 F_t Y_t \,\mathrm d t \\ &=r F_t \,\mathrm dt. \end{align*} This means that \begin{align*} F_tY_t &= F_0 Y_0 + r\int_{[0,t]} F_s \,\mathrm ds \\ &= Y_0 +r\int_{[0,t]} \exp\left( -\alpha B_s + \frac 12 \alpha^2 s \right) \,\mathrm d s. \end{align*} Since $F_t \neq 0$ for all $t\ge 0$, we have that $$ Y_t = \exp\left( \alpha B_t - \frac 12 \alpha^2 t \right)\left(Y_0 + r\int_{[0,t]} \exp\left( -\alpha B_s + \frac 12 \alpha^2 s \right) \,\mathrm d s\right). $$ Let's check that this satisfies the SDE $$ \mathrm d Y_t = r\,\mathrm d t + \alpha Y_t \,\mathrm d B_t.\label{*}\tag{*} $$ To this end, define the processes $$ X_t = \exp\left( \alpha B_t - \frac 12 \alpha^2 t \right) $$ and $$ Z_t = Y_0 + r\int_{[0,t]} \exp\left( -\alpha B_s + \frac 12 \alpha^2 s \right) \,\mathrm d s. $$ It's easy to check via Itô's lemma that $X$ satisfies $$ \mathrm d X_t = \alpha X_t \,\mathrm d B_t. $$ Moreover, $$ \mathrm d Z_t = rF_t \,\mathrm dt. $$ Thus, as $Z$ has finite variation, and $FX = 1$, the product rule gives us that \begin{align*} \mathrm d (XZ)_t &= \alpha X_t Z_t \,\mathrm d B_t + r \,\mathrm d t \end{align*} which means that $Y = XZ$ satisfies the SDE \eqref{*}.
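As a sanity check, one can run Euler–Maruyama and compare against the closed form on the same Brownian path (a sketch, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
r, alpha, y0, T, n = 0.5, 0.3, 1.0, 1.0, 200000
dt = T / n
t = np.linspace(0, T, n + 1)
B = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))

# Euler-Maruyama for dY = r dt + alpha Y dB
Y = np.empty(n + 1)
Y[0] = y0
for i in range(n):
    Y[i + 1] = Y[i] + r*dt + alpha*Y[i]*(B[i + 1] - B[i])

# Closed form Y = X Z on the same path (trapezoidal rule for the time integral)
X = np.exp(alpha*B - 0.5*alpha**2*t)
g = np.exp(-alpha*B + 0.5*alpha**2*t)
Z = y0 + r*dt*np.concatenate(([0.0], np.cumsum(0.5*(g[1:] + g[:-1]))))
print(Y[-1], (X*Z)[-1])  # agree up to discretization error
```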
When is a proper map closed?
This is not true if you merely assume $Y$ is Hausdorff (or even if you assume both $X$ and $Y$ are Hausdorff). For instance, let $S$ be an uncountable set and fix an element $a\in S$. Let $Y$ be $S$ with the topology such that a set is open iff it is either cocountable or does not contain $a$. Let $X$ be $S$ with the discrete topology. Then $X$ and $Y$ are Hausdorff, and the identity map $f:X\to Y$ is proper since every compact subset of $Y$ is finite. However, $f$ is not closed. (More generally, you could take $Y$ to be any Hausdorff space which is not compactly generated and let $f:X\to Y$ be its $k$-ification. So, combining this with Stefan Hamcke's answer at the linked question, a necessary and sufficient condition for this theorem to be valid for a Hausdorff space $Y$ is that $Y$ is compactly generated.)
Attractivity and Lyapunov Stability
There is an easy example for "stability $\not\Rightarrow$ attractivity": $$ x'' + \omega^2x = 0, \quad x(0) = x_0, \; x'(0) = 0 $$ for some $\omega \neq 0$. The solution is $x(t, x_0) = x_0\cos(\omega t)$, $t \in \mathbb{R}$, and an equilibrium is given by $(0,0)$. This equilibrium is certainly stable, since (taking $\delta := \epsilon$ for any $\epsilon > 0$ in the definition of stability) $|x_0|\le\delta$ implies $$ |x(t,x_0)| = |x_0| |\cos(\omega t)| \leq |x_0| \leq \delta = \epsilon.$$ However, it is not attractive, as for $x_0 \neq 0$ the limit $\lim_{t \rightarrow \infty} x_0\cos(\omega t)$ does not even exist. Examples for "attractivity $\not\Rightarrow$ stability" are more difficult. I will only post one equation system and a link to a phase portrait as it requires some enhanced techniques: \begin{cases} x' = x + xy - (x+y)(x^2+y^2)^\frac{1}{2} \\ y' = y - x^2 + (x-y)(x^2+y^2)^\frac{1}{2} \end{cases} which is in polar coordinates $$ r' = r(1-r), \quad \varphi' = r(1-\cos(\varphi)).$$ $(1,0)$ is a (global) attractor which is not stable (see the phase portrait; a numerical sketch follows below). You might also want to search for Vinograd (1957), who provided the first counterexample (according to one of my professors).
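A numerical sketch of that phase portrait (assuming SciPy and Matplotlib):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

# r' = r(1 - r), phi' = r(1 - cos(phi)): every trajectory ends at (1, 0),
# but orbits starting near (1, 0) with small phi > 0 make a full excursion first
def rhs(t, u):
    r, phi = u
    return [r*(1 - r), r*(1 - np.cos(phi))]

for phi0 in np.linspace(0.2, 2*np.pi - 0.2, 8):
    sol = solve_ivp(rhs, (0, 60), [0.5, phi0], max_step=0.05)
    r, phi = sol.y
    plt.plot(r*np.cos(phi), r*np.sin(phi))
plt.plot(1, 0, "ko")
plt.axis("equal")
plt.show()
```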
Solution of a PDE using price of a european put option
At expiration, the put option has value $p(T,y)=\max\left(A^{-1}(F-S) - y,0 \right)$ where $y$ is the underlying price at that time. When the underlying price is $0$ the payoff is $p(T,0) = A^{-1}(F-S)$. As is usual for equities when bankruptcy occurs, the state $y=0$ is assumed to be an absorbing barrier. If $y = 0$ is attained at an earlier time $t < T$, then the value of the put must be the present value of the payoff at expiration to avoid arbitrage. Hence, $$p(t,0) = A^{-1}(F-S)e^{-\beta^2(T-t)}$$ With $g(t,y)$ as given, the corresponding condition for $g(t,0)$ is $$g(t,0) = Ae^{(\beta^2-q_1)(T-t)}p(t,0)+Se^{-q_1(T-t)} = Ae^{(\beta^2-q_1)(T-t)}A^{-1}(F-S)e^{-\beta^2(T-t)}+Se^{-q_1(T-t)} \\ = (F-S)e^{-q_1(T-t)}+ Se^{-q_1(T-t)}= Fe^{-q_1(T-t)}$$ It appears you are correct and the condition $g(t,0) = F$ in the article is a typographical error.
Question about balls in urns
The probability that at least $k$ balls are red is equal to the probability of exactly $k$ red balls, plus the probability of exactly $k+1$ red balls, ..., plus the probability of $\min\{m,r\}$ red balls: $$ p=p_k+p_{k+1}+\cdots+p_{\min\{m,r\}}. $$ But $$ p_j=\frac{\displaystyle\binom{r}{j}\binom{n-r}{m-j}}{\displaystyle\binom{n}{m}}. $$ Thus the sought probability is $$ p=\frac{\displaystyle\sum_{j=k}^{\min\{m,r\}}\binom{r}{j}\binom{n-r}{m-j}}{\displaystyle\binom{n}{m}}. $$
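This is the upper tail of a hypergeometric distribution, so it is easy to cross-check numerically (a sketch, assuming SciPy; the values $n=20$, $r=8$, $m=5$, $k=2$ are just for illustration):

```python
from math import comb
from scipy.stats import hypergeom

n, r, m, k = 20, 8, 5, 2
manual = sum(comb(r, j) * comb(n - r, m - j)
             for j in range(k, min(m, r) + 1)) / comb(n, m)
print(manual)
print(hypergeom.sf(k - 1, n, r, m))  # P(X >= k); agrees with the formula
```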
What is the sum of all solutions satisfying the equation $\cos(x)-4\sin(x)=1$?
$$\cos x-4 \sin x=1$$ Use the substitution $t=\tan\frac{x}{2}$: $$\sin x = \frac{2t}{1+t^2};\;\cos x=\frac{1-t^2}{1+t^2}$$ The equation becomes $$\frac{1-t^2}{1+t^2}- \frac{8t}{1+t^2}=1$$ $$1-t^2-8t=1+t^2$$ $$2t^2+8t=0$$ $$2t(t+4)=0$$ Two solutions: $t_1=0\to \tan\frac{x}{2}=0\to \frac{x}{2}=k\pi\to x=2k\pi,\;k\in\mathbb{Z}$ $t_2=-4\to \tan\frac{x}{2}=-4\to \frac{x}{2}=-\arctan 4+k\pi\to x=-2\arctan 4+2k\pi,\;k\in\mathbb{Z}$ (The substitution is undefined at $x=\pi+2k\pi$, but those values give $\cos x-4\sin x=-1\neq1$, so no solutions are lost.)
Demonstrate the following ....
Let $AD=a$ (the smaller parallel side), $BC=b$, and let the height of the trapezoid be $h$. Then $$[ABC]+[BDC]-A_2+A_1=[ABD]+[ACD]-A_1+A_2=\text{Area of trapezoid}.$$ Let the height of $AOD$ be $y$; then the height of $BOC$ is $h-y$, so $A_1=ay/2$ and $A_2=b(h-y)/2$. Find the areas of the other triangles using $\tfrac12\cdot\text{base}\cdot\text{height}$, find $y$, then compute $(\sqrt{A_1}+\sqrt{A_2})^2$ and simplify to the well-known formula for the area of a trapezoid.
What's the application of Lipschitz smoothness and how to use it?
It seems that the term Lipschitz smooth is used here to indicate that $f'$ is Lipschitz continuous (not $f$ itself). The usual notation for this property is $f\in C^{1,1}$. This property appears in obstacle problems where it is often the best regularity one can hope for. For a simple example of an obstacle problem, imagine a heavy rope hanging over a cylindrical pulley: the shape of the rope is partly a circular arc (where it touches the cylinder) and partly a line (where it hangs freely). This union of a circular arc and a line is a curve of class $C^{1,1}$. It is not in class $C^2$ because the curvature has a jump discontinuity (it jumps from 1/(radius of cylinder) to zero). The Wikipedia article gives more sophisticated examples of this kind. OK, let's prove something. Suppose $f\colon\mathbb R\to\mathbb R$ is differentiable and $f'$ is Lipschitz. I claim that $|f(x+h)-f(x)-hf'(x)|\le \frac{L}{2}h^2$ for all $x,h\in\mathbb R$, where $L$ is the Lipschitz constant of $f'$. Proof: $f(x+h)-f(x)=\int_x^{x+h} f'(t)\,dt = hf'(x)+\int_x^{x+h} (f'(t)-f'(x))\,dt$ where $\left|\int_x^{x+h} (f'(t)-f'(x))\,dt\right|\le \int_x^{x+h} L|t-x|\,dt = Lh^2/2$. $\quad \Box$
Find x so three points form a right-angled triangle
The vectors connecting the three points are $\pmatrix{2 \\ 2}$, $\pmatrix{1 \\ x+1}$, and $\pmatrix{-3 \\ -x-3}$. Since two of the vectors need to be perpendicular, we need to find when $\vec{u} \cdot \vec{v} = 0$. In the first case, $2 + 2(x+1) = 0$, which leads to $x = -2$. In the second case, $-3 + (x+1)(-x-3) = 0$, i.e. $x^2+4x+6=0$; this quadratic has discriminant $16-24<0$, so there are no real solutions and the right angle cannot occur at that vertex. In the third case, with $\pmatrix{-3 \\ -x-3}$ and $\pmatrix{2 \\ 2}$, we get $-6 + 2(-x-3) = 0$, so $x=-6$.
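A brute-force check of which $x$ give a perpendicular pair (a sketch, assuming NumPy):

```python
import numpy as np

def is_right(x):
    v = [np.array([2.0, 2.0]), np.array([1.0, x + 1]), np.array([-3.0, -x - 3])]
    return any(abs(v[i] @ v[j]) < 1e-9 for i in range(3) for j in range(i + 1, 3))

print(is_right(-2), is_right(-6))  # True True
print(is_right(1 + np.sqrt(7)))    # False: no right angle at the remaining vertex
```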
What to teach first: Riemann sums or anti-derivative?
The historical order is the pedagogical order.
1. Areas and any other relevant ideas from geometry, treated informally.
2. $d(\text{Area under graph of } f) = f(x)\,dx$, by the visual geometric argument. Antiderivatives are therefore useful. Polynomials etc.
3. Riemann sums as discrete approximation, as formalization of the "area" concept, as motivation for "$dx$", and as a method to compute limits of some finite sums by taking integrals.
Showing given map is a ring homomorphism
Hint: Recall ring homs $f : \mathbb{Z}/(n) \to R$ are in bijective correspondence with ring homs $\tilde{f} : \mathbb{Z} \to R$ with the bonus property that $\tilde{f}(n) = 0$. This is sometimes called a Universal Property, and if your book didn't prove it, it's a good exercise to do yourself. In light of this fact, to show that $f$ is a ring hom, it suffices to show that the map $\tilde{f} : \mathbb{Z} \to \mathbb{Z}/(20)$ which is defined by the same formula ($\tilde{f}(x) = \overline{16}\overline{x}$) is a ring hom with $\tilde{f}(10) = 0$. Is this problem more approachable? If not, notice $\tilde{f}$ is really a composite of two functions: first we have $\pi : \mathbb{Z} \to \mathbb{Z}/(20)$ the natural quotient map (i.e. $\pi(x) = \overline{x}$). Then we have $m : \mathbb{Z}/(20) \to \mathbb{Z}/(20)$ with $m(\overline{x}) = \overline{16} \overline{x}$. Can you show that $\pi$ and $m$ are both ring homs? Then, since the composition of ring homs is a ring hom, you will have that $\tilde{f} = m \circ \pi$ is a ring hom too, as needed. I hope this helps ^_^
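A brute-force verification of the claim (a sketch; here ring homs are not required to preserve $1$):

```python
# f : Z/(10) -> Z/(20), f(x) = 16*x mod 20
def f(x):
    return (16 * x) % 20

add_ok = all(f((a + b) % 10) == (f(a) + f(b)) % 20
             for a in range(10) for b in range(10))
mul_ok = all(f((a * b) % 10) == (f(a) * f(b)) % 20
             for a in range(10) for b in range(10))
print(add_ok, mul_ok, f(1))  # True True 16 (and 16*16 = 256 is 16 mod 20: idempotent)
```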
Vector field on Riemannian manifold
Yes, this is true. Let $p :TM\vert_N \to TN$ be the orthogonal projection. By uniqueness of the Levi-Civita connection, it is sufficient to show that the connection $\nabla^N$ on $N$ defined by $\nabla^N_X Y = p(\nabla^M_X Y)$ where $X$ and $Y$ are vector fields on $N$ (technically to take $\nabla^M_X Y$ we need $Y$ to be a vector field on $M$, but $\nabla^M_X Y$ depends only on $Y$ along $X$, so we can extend it to $M$ any way we want and this will be well-defined) is compatible with the metric and torsion-free. For compatibility with the metric, let $X, Y, Z$ be vector fields on $N$ with $\tilde X, \tilde Y, \tilde Z$ denoting extensions to $M$. Then on $N$ we have $$ X \langle Y, Z\rangle = \tilde X \langle \tilde Y ,\tilde Z\rangle = \langle \nabla^M_{\tilde X} \tilde Y, \tilde Z\rangle + \langle \tilde Y, \nabla^M_{\tilde X} \tilde Z\rangle = \langle \nabla_{\tilde X}^M \tilde Y, Z\rangle + \langle Y, \nabla_{\tilde X}^M \tilde Z\rangle$$ $$ = \langle p(\nabla_{\tilde X}^M \tilde Y), Z\rangle + \langle Y, p(\nabla_{\tilde X}^M \tilde Z)\rangle = \langle \nabla^N_{X} Y, Z \rangle + \langle Y, \nabla_X^N Z\rangle.$$ Torsion-freeness comes from the fact that $p([\tilde X, \tilde Y]) = [X,Y]$ (which can be easily checked in local slice coordinates).
How to prove these triangle relations?
Assuming the second, you can prove your first result by vectors. Take the origin at the circumcentre; hence $\vec {OH}=\vec a+\vec b+\vec c$ and $\vec {OG}=\vec{OH}/3$. Then $|\vec a+\vec b+\vec c|^2=\vec{OH}\cdot\vec{OH}$. Note that $|\vec a|=R$, and likewise for the other two, and the angle between any two of them is twice the angle subtended at the corresponding vertex (the inscribed angle theorem). Hence $OH^2=R^2(3+2\sum\cos 2A)=R^2(1-8\prod \cos A)$. You should be able to simplify now. If you want a proof of the centroid one, google Ceva's theorem.
Transition between field representation
I try to describe the passage between $GF(16)$ and $GF(4^2)$. The same principles can be applied in going from $GF(256)$ to $GF(16^2)$ and back. Let us begin with $GF(4)$. Its elements are $0$, $1$, an element called $\alpha$ (aka the generator of the field), and $\alpha+1$. The arithmetic follows from the rule that $\alpha$ is a zero of the polynomial $x^2+x+1$, in other words $\alpha^2=\alpha+1$. In a program we represent the element $a_1\alpha+a_0$ with the pair of bits $(a_1,a_0)$. So the individual bits are viewed as coefficients of an (at most linear) polynomial that we imagine to be evaluated at $\alpha$. Similarly elements of $GF(16)$ are represented as sequences of bits $(a_3,a_2,a_1,a_0)$. This sequence represents the element $a_3\beta^3+a_2\beta^2+a_1\beta+a_0$, where $\beta$ is the (possibly, but hopefully not, mysterious) generator. This time the arithmetic follows from the fact that $\beta$ is a zero of the polynomial $x^4+x+1$, in other words $\beta^4=\beta+1$. For example, we then see that $$ \beta^5=\beta\cdot\beta^4=\beta(\beta+1)=\beta^2+\beta, $$ and $$ \beta^{10}=(\beta^5)^2=(\beta^2+\beta)^2=\beta^4+2\beta^3+\beta^2 =\beta^4+\beta^2=\beta^2+\beta+1, $$ using the fact that in all fields of characteristic two we have $2=1+1=0$. This piece of information is systematically used in a lot of software and hardware in the sense that field addition is implemented as bitwise XOR of the sequences representing the elements of the field. We make the important observation that $\beta^2+\beta$ is a root of the same equation $x^2=x+1$ as $\alpha$. This allows us to identify $GF(4)$ as a subset of $GF(16)$. We can choose to equate $\alpha=\beta^2+\beta$. Transformation from $GF(16)$ to $GF(4^2)$: Consider an arbitrary element $z=a_3\beta^3+a_2\beta^2+a_1\beta+a_0$ of $GF(16)$. We can get rid of the high powers of $\beta$ here using the equation $\beta^2+\beta=\alpha$, or rather the equivalent form $\beta^2=\beta+\alpha$, and its consequence $\beta^3=\beta\cdot\beta^2=\beta(\beta+\alpha)=\beta^2+\alpha\beta=\alpha\beta+\beta+\alpha.$ Substituting these allows us to rewrite $$ \begin{aligned} z&=a_3(\alpha\beta+\beta+\alpha)+a_2(\beta+\alpha)+a_1\beta+a_0 =(a_3\alpha+[a_3+a_2+a_1])\beta+([a_3+a_2]\alpha+a_0)\\ &=a_h\beta+a_\ell, \end{aligned} $$ where $a_h=(a_3\alpha+[a_3+a_2+a_1])$ and $a_\ell=([a_3+a_2]\alpha+a_0)$ are elements of $GF(4)$. Internally we represent elements of $GF(4)$ with the sequence of coefficients of $\alpha^j,j=0,1$, e.g. $a_h=a_{h1}\alpha+a_{h0}$, so here $$ a_{h1}=a_3,\quad a_{h0}=a_3+a_2+a_1,\quad a_{\ell1}=a_3+a_2,\quad a_{\ell0}=a_0. $$ Transformation from $GF(4^2)$ to $GF(16)$: The other direction is also straightforward with the identification $\alpha=\beta^2+\beta$ in place. If we are given an element of $GF(4^2)$ in the form $$ z=a_h\beta+a_\ell, $$ where $a_h=a_{h1}\alpha+a_{h0}$ and $a_\ell=a_{\ell1}\alpha+a_{\ell0}$, then $$ \begin{aligned} z&=(a_{h1}[\beta^2+\beta]+a_{h0})\beta+(a_{\ell1}[\beta^2+\beta]+a_{\ell0})\\ &=a_{h1}\beta^3+(a_{\ell1}+a_{h1})\beta^2+(a_{h0}+a_{\ell1})\beta+a_{\ell0}\\ &=z_3\beta^3+z_2\beta^2+z_1\beta+z_0, \end{aligned} $$ where $z_3=a_{h1}$, $z_2=a_{\ell1}+a_{h1}$, $z_1=a_{h0}+a_{\ell1}$ and $z_0=a_{\ell0}$. Inversion in $GF(4^2)$: Here the idea is to take advantage of the fact that inversion is easier in $GF(4)$ than it would be in $GF(16)$. Namely $1^{-1}=1$, $\alpha^{-1}=\alpha+1$ and $(\alpha+1)^{-1}=\alpha$. Equivalently $x^{-1}=x^2$ for any non-zero $x\in GF(4)$.
The idea is similar to inverting a complex number $z=x+yi$ where $x,y$ are real, at least one non-zero. There we make the observation that $$ (x+yi)(x-yi)=x^2+y^2=Q(x,y), $$ where we can deduce that the real number $Q(x,y)\neq0$. As we know how to compute the reciprocal of $Q(x,y)$ we can take advantage and compute $$ \frac1{x+yi}=\frac1{Q(x,y)}(x-yi). $$ The key to the success of such a calculation is (this is bread and butter in field theory, but does require a more extensive background in abstract algebra) that $i$ and $-i$ are roots of the same polynomial $x^2+1$ with real coefficients. We can do something similar here. Remember that $\beta$ is a root of the polynomial $x^2+x+\alpha$ with coefficients in $GF(4)$. The other root of that polynomial is $\beta'=\beta+1$, because $$ \beta'^2+\beta'=(\beta+1)^2+(\beta+1)=\beta^2+1+\beta+1=\beta^2+\beta=\alpha. $$ So given an element $z=a_h\beta+a_\ell$ of $GF(4^2)$ let's calculate $$ z(a_h\beta'+a_\ell)=a_h^2\beta\beta'+a_ha_\ell(\beta+\beta')+a_\ell^2. $$ Here $\beta\beta'=\beta(\beta+1)=\beta^2+\beta=\alpha$ and $\beta+\beta'=1$, so we get $$ z(a_h\beta'+a_\ell)=Q(a_h,a_\ell), $$ where $Q(x,y)=\alpha x^2+xy+y^2$. Here we can prove that $Q(x,y)\neq0$ if at least one of $x,y$ is non-zero. If $x=0$, then $Q(x,y)=Q(0,y)=y^2\neq0$, because then $y\neq0$. OTOH, if $x\neq0$, then $$ Q(x,y)=x^2\left(\alpha+\frac{y}{x}+\left(\frac{y}{x}\right)^2\right). $$ Here the factor in round parens is $P(y/x)$, where $P(T)=T^2+T+\alpha$. We have seen that $P(T)=0$ when $T=\beta$ or $T=\beta'$. Because $P(T)$ is a quadratic polynomial it can have at most two zeros in the field $GF(16)$. Because neither of those zeros is in the smaller field $GF(4)$, we can deduce that here $P(y/x)\neq0$, because $y/x\in GF(4)$. Putting all this together gives us, at long last, $$ z^{-1}=\frac{1}{Q(a_h,a_\ell)}(a_h\beta'+a_\ell) =\frac{1}{Q(a_h,a_\ell)}(a_h\beta+[a_h+a_\ell]). $$ In other words, using the representation of $GF(16)$ as $GF(4^2)$ allows us to calculate the inverse in $GF(16)$ with a single inversion and a few additions and multiplications in $GF(4)$. I guess that may be worth the engineers' while in this setting. What is needed to extend this to conversion between $GF(256)$ and $GF(16^2)$: AES specifies that the field $GF(256)$ be given by using a generator $\theta$ that is a zero of the polynomial $x^8+x^4+x^3+x+1$ [EDIT: Earlier I had another polynomial here. Corrected.]. IOW the arithmetic of $GF(256)$ follows from the rule $$ \theta^8=\theta^4+\theta^3+\theta+1. $$ We need to identify a copy of $GF(16)$ inside that field. A little bit of tinkering allowed me to find that $$ \theta^{16}+\theta=\theta^2+\theta^3+\theta^4+\theta^6 $$ is a solution of $x^4+x+1=0$. So let us declare $$\beta=\theta^2+\theta^3+\theta^4+\theta^6.$$ Then we can calculate that $${\bf\{e\}}=(1,1,1,0)_2=\beta^3+\beta^2+\beta=\theta^2+\theta^3+\theta^5+\theta^6+\theta^7.$$ It is possible to prove that the polynomial $q(x)=x^2+x+{\bf\{e\}}$ has no solutions in $GF(16)$, and hence two in $GF(256)$. The simplest proof of this relies on the properties of the trace mapping. A little bit of searching reveals that the roots of $q(x)$ in $GF(256)$ are $$ \gamma=\theta+\theta^2+\theta^3+\theta^4+\theta^5+\theta^6+\theta^7 $$ and $$\gamma+1=\gamma^{16}.$$ We can then represent elements $z\in GF(256)$ in "$GF(16^2)$" format as $$ z=a_h\gamma+a_\ell, $$ where this time $a_h,a_\ell\in GF(16)$.
The formula for converting the 8-tuple of bits $(z_0,z_1,\ldots,z_7)$ to the pair $(a_h,a_\ell)\in GF(16)^2$ such that $$ z_0+z_1\theta+z_2\theta^2+\cdots+z_7\theta^7=z=a_h\gamma+a_\ell $$ is quite messy, and I haven't calculated it.
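To make the bit-level transformations above concrete, here is a small sketch (the helper names are hypothetical) of the $GF(16)\leftrightarrow GF(4^2)$ conversions:

```python
# GF(16) elements are 4-bit ints (a3 a2 a1 a0) with beta^4 = beta + 1;
# GF(4) elements are 2-bit ints (c1 c0) with alpha^2 = alpha + 1.

def gf16_to_gf4sq(z):
    a3, a2, a1, a0 = (z >> 3) & 1, (z >> 2) & 1, (z >> 1) & 1, z & 1
    ah = (a3 << 1) | (a3 ^ a2 ^ a1)  # a_h = a3*alpha + (a3+a2+a1)
    al = ((a3 ^ a2) << 1) | a0       # a_l = (a3+a2)*alpha + a0
    return ah, al

def gf4sq_to_gf16(ah, al):
    ah1, ah0 = (ah >> 1) & 1, ah & 1
    al1, al0 = (al >> 1) & 1, al & 1
    z3, z2, z1, z0 = ah1, al1 ^ ah1, ah0 ^ al1, al0
    return (z3 << 3) | (z2 << 2) | (z1 << 1) | z0

# round trip over all 16 elements
print(all(gf4sq_to_gf16(*gf16_to_gf4sq(z)) == z for z in range(16)))  # True
```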
If there is a cycle containing edges $a, b$ and another one containing edges $b, c$, then there is a cycle containing $a, c$
Using the OP's notation: Consider cycle $C_1$ including edges $a$ and $b$. Without loss of generality, we can say there is a chain (linear graph, Part $A$) from vertex $v_a$ to $u_b$ and a chain (linear graph, Part $B$) from vertex $u_a$ to $v_b$. For simplicity we include the end vertices $v_a$ and $u_b$ in Part $A$ and vertices $u_a$ and $v_b$ in Part $B$. Because there is a cycle $C_3$ including edge $c$ and edge $b$, that cycle must pass through $u_b$ and $v_b$. Thus the cycle must pass through at least one vertex in Part $A$ and one vertex in Part $B$. Call $u_1$ the vertex of $C_3$ on Part $A$ that is closest to $v_a$. One path from $u_1$ leads to edge $c$ without passing through $b$. Call that path Part $C$. (Without loss of generality we can assume Part $C$ includes $u_c$.) Call $u_2$ the vertex of $C_3$ on Part $B$ that is closest to $u_a$. One path from $u_2$ leads to the other vertex of edge $c$ (i.e., $v_c$). Call the path from $u_2$ to $v_c$ Part $D$. A cycle containing $a$ and $c$ is thus:
- edge $a$
- from $v_a$ to $u_1$ along a portion of Part $A$
- from $u_1$ to $u_c$ along a portion of Part $C$
- edge $c$
- from $v_c$ to $u_2$ along a portion of Part $D$
- from $u_2$ to $u_a$ along a portion of Part $B$
Informally, we're finding the vertex in $C_3$ on Part $A$ closest to $a$ on "one side" (i.e., to $v_a$) and likewise the vertex in $C_3$ closest to $a$ on the "other side" (i.e., to $u_a$), and then replacing the path from $u_1$ to $u_2$ in $C_3$ with the linear path from $u_1$ to $v_a$ to $a$ to $u_a$ to $u_2$.
Given that $\cos a= 24/25$ and $\sin a<0$, find $\cos(a+\pi/6)$
Since $\sin A<0$, $$\sin A=-\sqrt{1-\cos^2A}=\cdots$$ I think you meant $\cos\left(A+\dfrac\pi6\right)$; then $$\cos\left(A+\frac\pi6\right)=\cos A\cdot\cos\frac\pi6-\sin A\cdot\sin\frac\pi6$$
Minimization of Sum of Squares Error Function
Let's, as the cited passage suggests, look at the derivative of $E$. First we note that $y$ is linear in $w$, as $$ y(x,w+\mu w') = \sum_i (w_i +\mu w_i')x^i = \sum_i w_ix^i +\mu \sum_i w'_ix^i= y(x,w) + \mu y(x,w') $$ Now we have for $w, h \in \mathbb R^{m+1}$ that \begin{align*} E(w+ h) &= \frac 12\sum_{n=1}^N \bigl(y(x_n, w) + y(x_n, h) - t_n\bigr)^2\\ &= E(w) + \sum_{n=1}^N \bigl(y(x_n, w) - t_n\bigr)y(x_n, h) + \frac 12\sum_{n=1}^N y(x_n, h)^2\\ &= E(w) + \sum_{n=1}^N \bigl(y(x_n, w) - t_n\bigr)y(x_n, h) + o(h) \end{align*} so $E'(w)h = \sum_{n=1}^N \bigl(y(x_n, w) - t_n\bigr)y(x_n, h)$. The second derivative is $$ E''(w)[h,k] = \sum_{n=1}^N y(x_n, k)y(x_n, h) $$ Now for $h \in \mathbb R^{m+1} \setminus \{0\}$ $$ E''(w)[h, h] = \sum_{n=1}^N y(x_n, h)^2 $$ and this is positive if $N \ge m+1$ and all $x_i$ are different (as a polynomial of degree $m$ cannot have $N \ge m+1$ zeros). So $E''(w)$ is positive definite for every $w$, and as $E''$ is constant, every zero of $E'$ is a minimum. Now let's look at $E'$: we have $E'(w) = 0$ iff $E'(w)e_i = 0$ for each $i$ ($e_i$ denoting the $i$th standard basis vector), and $$ E'(w)e_i = \sum_{n=1}^N \bigl(y(x_n, w) - t_n\bigr) x_n^i $$ That is, we want $w$ to be such that $y(x,w)- t$ is orthogonal to $(x_1^i, \ldots, x_N^i)$ for all $i$. Projection of $t$ onto the subspace generated by these vectors gives us a unique point, and as $w \mapsto y(x,w)$ is injective, $w$ is unique.
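In practice the unique minimizer is obtained by solving exactly this orthogonality (normal-equation) condition for the Vandermonde design matrix; a sketch (assuming NumPy, with made-up data):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 10)                       # N = 10 distinct sample points
t = np.sin(2*np.pi*x) + rng.normal(0, 0.1, 10)  # noisy targets

m = 3
V = np.vander(x, m + 1, increasing=True)        # columns are x^0, x^1, ..., x^m
w, *_ = np.linalg.lstsq(V, t, rcond=None)       # unique since N >= m+1, x_i distinct
print(w)  # coefficients w* minimizing E(w)
```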
Find all primes $p$ and $q$ such that $p^2-2q^2=1.$
Since $p^2-2q^2=1$, $p$ must be odd. Furthermore, $q$ must be even; if $q$ were odd, then $p^2-2q^2$ would be $3$ mod $4$. Thus, $q=2$ and then $p=3$.
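A small brute-force search agrees that $(3,2)$ is the only solution in a reasonable range (a sketch, assuming SymPy):

```python
from sympy import primerange

primes = list(primerange(2, 1000))
print([(p, q) for p in primes for q in primes if p*p - 2*q*q == 1])  # [(3, 2)]
```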
When does the underlying map of a polynomial induce a permutation on $\mathbb{Z}/p\mathbb{Z}$?
Such polynomials are called permutation polynomials. There is a lot of literature on this, a starting point can be the Wikipedia article. For a finite field $F_q$, one does not have to check all possible values. There is a polynomial-time algorithm known to check whether a given rational function (hence in particular polynomial) induces a permutation, see e.g. Recognizing permutation functions in polynomial time, Neeraj Kayal.
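For small moduli, a brute-force test is immediate (a sketch; the point of the cited algorithm is to avoid this enumeration for large fields):

```python
# Does the polynomial with coefficients coeffs (coeffs[i] for x^i) permute Z/pZ?
def is_permutation_poly(coeffs, p):
    values = {sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
              for x in range(p)}
    return len(values) == p

print(is_permutation_poly([0, 0, 0, 1], 5))  # True:  x^3 permutes, gcd(3, 4) = 1
print(is_permutation_poly([0, 0, 0, 1], 7))  # False: gcd(3, 6) = 3
```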
Getting all positive integer solutions (all possible states of a chemical system) to an underdetermined linear system (conservation law from stoichiometry)
Finding all the solutions to a given system of $\mathbb{Z}$-linear equations is a hard mathematical problem. In your application, is $A$ always $\begin{bmatrix}1&1&1\end{bmatrix}$? If so, then for an initial state $X\in\mathbb{Z}_+^3$, the solutions to your system are precisely the (ordered) partitions of $X_1+X_2+X_3\in\mathbb{N}$ into $3$ parts. For general matrices $A$ and right-hand sides $b$, you can use the software LattE which counts the number of solutions.
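For that special case, direct enumeration is trivial (a sketch; here $\mathbb{Z}_+$ is taken to include $0$):

```python
from math import comb

# All states X in Z_+^3 with X1 + X2 + X3 = N
def states(N):
    return [(i, j, N - i - j) for i in range(N + 1) for j in range(N - i + 1)]

N = 5
print(len(states(N)), comb(N + 2, 2))  # 21 21: C(N+2, 2) weak compositions into 3 parts
```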
Have I applied the metric continuity definition correctly?
Even when proving continuity under the taxicab metric, this doesn't work. Given $\epsilon > 0$, the thing you are trying to prove is: There is a $\delta > 0$ such that, for all $(x,y)$, if $d((x,y),(x_0,y_0)) < \delta$, then $d(f(x,y), f(x_0,y_0)) < \epsilon$. There is an order to this statement. You pick a $\delta$ first, then for an otherwise arbitrary $(x,y)$ about which you only know that they satisfy the condition $d((x,y),(x_0,y_0)) < \delta$, you use that condition to prove $d(f(x,y), f(x_0,y_0)) < \epsilon$. That is the way the proof must flow. If it doesn't flow that way, you have not proved the statement. Your "proof" does not flow this way. So let's try to fix that. The first step is to pick a delta, and this part is good. Since $f,g$ are continuous, there do exist $\delta_f, \delta_g > 0$ such that: For all $x, y$, if $|x - x_0| < \delta_f$ and $|y - y_0| < \delta_g$, then $|f(x) - f(x_0)| < \frac \epsilon 2$ and $|g(y) - g(y_0)| < \frac \epsilon 2$. Note that we do not know yet that $|x-x_0| < \delta_f$ or $|y - y_0| < \delta_g$. In fact, we haven't even introduced values of $x$ or $y$ to put in these statements. We just know that later, when we do have such values $x,y$ that satisfy $|x-x_0| < \delta_f$ and $|y - y_0| < \delta_g$, then we will also know $|f(x) - f(x_0)| < \frac \epsilon 2$ and $|g(y) - g(y_0)| < \frac \epsilon 2$. You pick $\delta = \delta_f + \delta_g$. Next, we introduce a point $(x,y) \in \Bbb R^2$. This point must be arbitrary to satisfy the "for all" quantifier. We cannot assume anything about it other than the condition in the statement to be proven, the hypothesis $d((x,y), (x_0,y_0)) < \delta$, or for our taxicab metric: $$|x - x_0| + |y - y_0| < \delta = \delta_f + \delta_g$$ From this you need to show that $|f(x) - f(x_0)| + |g(y) - g(y_0)| < \epsilon$. It would be easy if we knew that $|f(x) - f(x_0)| < \frac \epsilon 2$ and $|g(y) - g(y_0)| < \frac \epsilon 2$. But we don't know that yet. What we know is that these hold when $|x - x_0| < \delta_f$ and $|y - y_0| < \delta_g$. But we don't know that, and in fact by your choice of $\delta$, they don't even have to be true. We know $$|x - x_0| + |y - y_0| < \delta_f + \delta_g$$ But any shortfall of $|x - x_0|$ below $\delta_f$ allows $|y - y_0|$ to be greater than $\delta_g$, and since $x,y$ are arbitrary, $y$ for which this is true will be included. So $|x - x_0| + |y - y_0| < \delta_f + \delta_g$ is not sufficient to guarantee that $|x - x_0| < \delta_f$ and $|y-y_0|<\delta_g$, and the proof is stuck. Now consider what would have happened if you had chosen $\delta = \min\{\delta_f, \delta_g\}$ instead.
Why is the phase shift -c/b instead of -c
A phase shift of a function $f(x)$ by $c$ units is given by $f(x-c)$ (so if we're given a function $f(x)$ and we shift it to the right 5 units, we'll have $f(x-5)$). For $\sin(2x+3)$, we no longer have a function of $x$ alone, but one of $2x$ (ignoring the shift for the moment). To "fix" this, factor out the coefficient $2$ and you have $\sin\left(2\left(x+\dfrac{3}{2}\right)\right)$.
For every integer $n$ , find how many elements of order $n$ there are in $S_5$
Write an element of $S_5$ in its cycle form. If there are cycles of length $a_1,a_2,\ldots,a_n$, then it is not hard to show that the element has order $\text{lcm}(a_1,a_2,\ldots,a_n)$. The different combinations of cycles which can appear are precisely the partitions of $5$, i.e. $$ \begin{align*} 1+1+1+1+1\\ 2+1+1+1\\ 2+2+1\\ 3+1+1\\ 3+2\\ 4+1\\ 5 \end{align*} $$ Elements of the resulting form have orders $1,2,2,3,6,4,5$ (by taking lcms). Do you see how to finish from here?
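If you want to check your final counts, here is a brute-force tally (a sketch; Python 3.9+ for `math.lcm`):

```python
from itertools import permutations
from math import lcm
from collections import Counter

def order(p):
    # the order of a permutation is the lcm of its cycle lengths
    seen, lengths = set(), []
    for s in range(len(p)):
        if s not in seen:
            n, cur = 0, s
            while cur not in seen:
                seen.add(cur)
                cur = p[cur]
                n += 1
            lengths.append(n)
    return lcm(*lengths)

counts = Counter(order(p) for p in permutations(range(5)))
print(dict(sorted(counts.items())))  # {1: 1, 2: 25, 3: 20, 4: 30, 5: 24, 6: 20}
```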
What is the flaw of this proof (largest integer)?
In fact, you have given a valid proof of a true theorem. Theorem. If $n$ is the largest positive integer, then $n=1$. You would start a proof of this theorem exactly the way you did start: "Let $n$ be the largest positive integer." So your proof is perfectly valid. But it doesn't prove that $1$ is the largest positive integer; that would be a different theorem: There exists a largest positive integer, and it is equal to $1$. To prove that stronger theorem, you'd first have to prove existence, which of course you cannot do. The theorem statement you did prove is an example of a mathematical statement that is "vacuously true." This means it is true because its hypothesis is always false. If you look at the truth table for the implication $P\implies Q$, you'll see that in all cases in which $P$ is false, the implication is true. So you proved a true (but entirely uninteresting) result!
A problem of definite integral inequality?
$$\left|\int_{10}^{19} \frac{\sin x \,dx}{1+x^{a}}\right|\le \int_{10}^{19}\left|\frac{\sin x }{1+x^{a}}\right|\,dx\le \int_{10}^{19}\frac{1 }{1+x^{a}}\,dx<\int_{10}^{19}\frac{1}{x^a}\,dx= \frac{19^{1-a}-10^{1-a}}{1-a}.$$ Since the denominator contains $1-a$, we will assume that $a > 1$. For $a=2$, $$ \frac{19^{-1}-10^{-1}}{-1}=\frac{9}{190}<\frac19.$$ For $a =3$, $$\frac{19^{-2}-10^{-2}}{-2}=\frac{261}{72200}<\frac19.$$ Therefore, the smallest odd value of $a$ which satisfies the inequality is $a=3$.
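A numeric check for $a=3$ (a sketch, assuming SciPy):

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: np.sin(x) / (1 + x**3), 10, 19)
print(abs(val), 1/9)  # |integral| is far below 1/9
```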
Unclear Leibnitz's(?) rule, differential
Sure. For the first, $\gamma\colon (t_0-\epsilon,t_0+\epsilon)\to M$ is a curve, and by definition $\dot\gamma(t_0)$ is the tangent vector to this curve at $t=t_0$, which is $d\gamma_{t_0}\big(\frac d{dt}\big)$ (i.e., the image under $d\gamma_{t_0}$ of the standard basis vector for $\Bbb R = T_{t_0}\Bbb R$). The second is exactly how we interpret tangent vectors as differentiation along curves. If $v$ is the tangent vector to the curve $\gamma$ at $t=t_0$, then $v(f) = \frac d{dt}\big|_{t_0} f(\gamma(t)) = (f\circ\gamma)'(t_0)$.
Is there one-tailed version of Vysochanskiï–Petunin inequality, like Chebyshev?
We are talking about continuous random variables with unimodal densities. The Wikipedia article you link to says that for any $\lambda > \sqrt{8/3}=1.63299\ldots$, you have $\Pr(\left|X-\mu\right|\geq \lambda\sigma)\leq\dfrac{4}{9\lambda^2}.$ This, which I wrote many years ago and have not checked recently, says (trying to translate to a similar notation): Unimodal two-tailed case: If $\lambda \ge B$, then $\Pr(|X-\mu|\ge \lambda\sigma) \le \dfrac{4 }{\; 9 \lambda^2}$; if $\lambda \le B$, then $\Pr(|X-\mu|\ge \lambda\sigma) \le 1-\left(\dfrac{4 \lambda^2}{3(1+\lambda^2)}\right)^2$, where $B$ is the largest root of $7x^6-14x^4-x^2+4=0$, about $1.38539\ldots$, which seems similar but slightly stronger. It also says: Unimodal one-tailed case: $\Pr(X-\mu \ge \lambda\sigma) \le \max \left\{ \dfrac{4}{9(1+\lambda^2)}, \dfrac{3-\lambda^2}{3(1+\lambda^2)} \right\}$, taking the first term if $\lambda \ge \sqrt{\frac{5}{3}}$ and the second if $0 \le \lambda \le \sqrt{\frac{5}{3}}$.
The countable product of Fréchet spaces is a Fréchet space
Let $d_n$ be a metric on each $E_n$ (inducing its topology). Then $$d(\tilde{x}, \tilde{y}):= \sum_{n=1}^{+\infty} \frac{1}{2^n}\,\frac{d_n(x_n,y_n)}{1+d_n(x_n,y_n)}, $$ where $\tilde{x}, \tilde{y}\in E$, is a metric on $E$; one checks that it induces the product topology, and that it is complete and translation-invariant whenever each $d_n$ is.
Prove $e^{T^{-1}AT} = T^{-1}e^AT$.
Hint: first prove by induction $(T^{-1}AT)^n=T^{-1}A^nT$, then use $e^M=\sum_{n\ge0}\tfrac{M^n}{n!}$.
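A numerical spot-check of the identity (a sketch, assuming SciPy):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
T = rng.normal(size=(3, 3))  # generically invertible
Tinv = np.linalg.inv(T)
print(np.allclose(expm(Tinv @ A @ T), Tinv @ expm(A) @ T))  # True
```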
Given a matrix $B$, what is $\det(B^4)$?
I don't know how you have obtained $$\begin{pmatrix} 1 & 0 & 1\\ 0 & 1 & 2\\ 0 & 0 & -4 \end{pmatrix}$$ But making elementary operations ($R2\leftarrow R2-R1$, $R3\leftarrow R3-R1$) I obtain $$\begin{pmatrix} 1 & 0 & 1\\ 0 & 1 & 1\\ 0 & 2 & 0 \end{pmatrix}$$ and then ($R3\leftarrow R3-2\cdot R2$): $$\begin{pmatrix} 1 & 0 & 1\\ 0 & 1 & 1\\ 0 & 0 & -2 \end{pmatrix}$$
Using expressions like $ \langle x,y \rangle$ in predicate logic formulas
Not all authors of books on set theory. See Patrick Suppes, Axiomatic Set Theory (1960; Dover reprint), page 31. The pair set is defined in: Definition 8. $\{ x,y \} = w \leftrightarrow (\forall z) (z \in w \leftrightarrow z = x \lor z = y)$ & $w$ is a set. Then we have: Definition 9. $\{ x \} = \{ x,x \}$. And finally: Definition 10. $\langle x,y \rangle = \{ \{ x \}, \{ x,y \} \}$. Technically, $\langle ..., --- \rangle$ is a term-forming operator, i.e. a (binary) function symbol (like $+$ in arithmetic) which receives two terms as inputs and gives a term as output.
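Definition 10 is easy to play with computationally (a sketch using frozensets for the inner sets):

```python
# Kuratowski pair <x,y> = {{x}, {x,y}}
def pair(x, y):
    return frozenset([frozenset([x]), frozenset([x, y])])

# characteristic property: <x,y> = <u,v> iff x = u and y = v
print(pair(1, 2) == pair(1, 2))  # True
print(pair(1, 2) == pair(2, 1))  # False: order matters
```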
If $b+(a)$ is not a zero divisor in $R/(a)$, does it follow that $(a,b)=R$
Consider $R=S[a,b]$ for any commutative ring $S$ and indeterminates $a$ and $b$. Alternatively, notice $ax+by=1$ in $R$ implies $b$ is invertible in $R/(a)$, and invertibility doesn't follow from simply not being a zero divisor (as the $b\in S[b]$ example is an illustration of).
The boundary of the convex hull of squares of skew-symmetric matrices
I will prove below that $C$ is identical to $$ L=\left\{S: S\preceq0,\ 2\lambda_\min(S)\ge\operatorname{tr}(S)\right\}. $$ Note that the conditions $S\preceq0$ and $2\lambda_\min(S)\ge\operatorname{tr}(S)$ together imply that any $S\in L$ cannot be a rank-one matrix, and when $S$ has rank two, its two nonzero eigenvalues must be the same. It follows that if $S\in L$ has rank $\le2$, it must be the square of some skew-symmetric matrix. If $C$ is indeed equal to $L$, then $C$ is closed in the cone of negative semidefinite matrices, in the set $\mathcal S$ of all symmetric matrices, or in $M_n(\mathbb R)$, and $\partial C=C$ in $M_n(\mathbb R)$ (as we can always perturb every $S\in C$ to some non-symmetric matrix, so that $C$ has an empty interior), $\partial C=\{S\in C: 2\lambda_\min(S)=\operatorname{tr}(S)\text{ or } S \text{ is singular}\}$ in the set of all symmetric matrices, and $\partial C=\{S\in C: 2\lambda_\min(S)=\operatorname{tr}(S)\}$ in the cone of negative semidefinite matrices. To prove the inclusion $C\subseteq L$ is easy. The square of every (possibly zero) skew-symmetric matrix is clearly a member of $L$. It follows that every sum of squares of skew-symmetric matrices $S=K_1^2+\cdots+K_m^2$ is a member of $L$ too, because $$ 2\lambda_\min(S) =2\lambda_\min\left(\sum_iK_i^2\right) \ge2\sum_i\lambda_\min\left(K_i^2\right) \ge\sum_i\operatorname{tr}\left(K_i^2\right) =\operatorname{tr}(S). $$ To prove $L\subseteq C$, we prove by mathematical induction that $$ L^{(r)}=\left\{S\in L: \operatorname{rank}(S)\le r\right\}\subseteq C $$ for $r=2,3,\ldots$. The base case $r=2$ is true because every $S\in L^{(2)}$ is necessarily the square of some (possibly zero) skew-symmetric matrix. In the induction step, suppose $r\ge3$ and $L^{(r-1)}\subseteq C$. Pick any $S\in L^{(r)}$. By orthogonal diagonalisation, we may assume that $S=\operatorname{diag}(s_1,s_2,\ldots,s_r,0,\ldots,0)$ where $s_1\le s_2\le\cdots\le s_r<0$. The condition $2\lambda_\min(S)\ge\operatorname{tr}(S)$ is equivalent to $s_1\ge\sum_{i>1}s_i$. There are three cases, and we will prove in each case that $S\in C$. Case 1: $s_1=s_2$. Then $$ \begin{aligned} S&=(s_2-s_3)(I_2\oplus0)+(s_3-s_4)(I_3\oplus0)+\cdots+(s_{r-1}-s_r)(I_{r-1}\oplus0)+s_r(I_r\oplus0)\\ &=|s_2-s_3|(-I_2\oplus0)+|s_3-s_4|(-I_3\oplus0)+\cdots+|s_{r-1}-s_r|(-I_{r-1}\oplus0)+|s_r|(-I_r\oplus0). \end{aligned} $$ Since $-I_2\oplus0=(E_{12}-E_{21})^2$ and $$ -I_k\oplus0 =\frac12\left[(E_{12}-E_{21})^2+(E_{23}-E_{32})^2 +\cdots+(E_{k-1,k}-E_{k,k-1})^2+(E_{k1}-E_{1k})^2\right] $$ for each $k\ge3$, we have $S\in C$. Case 2: $s_1-s_2\ge s_3$. Let $A=\operatorname{diag}\left(s_3,\,s_2+s_3-s_1,\,s_3,\,s_4,\ldots,s_r,0,\ldots,0\right)$. Then $A\preceq0$ and the two most negative eigenvalues of $A$ are both equal to $s_3$. Therefore $A\in C$ by Case 1. But $S-A=(s_1-s_3)\operatorname{diag}\left(1,1,0,\ldots,0\right)=|s_1-s_3|(E_{12}-E_{21})^2$ also belongs to $C$. Hence $S\in C$. Case 3: $s_1-s_2<s_3$. Let $B=\operatorname{diag}(s_1-s_2,0,s_3,\ldots,s_r,0,\ldots,0)$. Then $B\preceq0$ and $$ 2\lambda_\min(B)=2(s_1-s_2)\ge(s_1-s_2)+s_3+\cdots+s_r=\operatorname{tr}(B). $$ Hence $B\in L^{(r-1)}$ and, by the induction hypothesis, $B\in C$. But $S-B=\operatorname{diag}(s_2,s_2,0,\ldots,0)$ also belongs to $C$. Therefore $S\in C$.
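The inclusion $C\subseteq L$ is easy to confirm numerically (a sketch, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
n, S = 4, np.zeros((4, 4))
for _ in range(3):
    K = rng.normal(size=(n, n))
    K = K - K.T          # skew-symmetric
    S += K @ K           # add a square of a skew-symmetric matrix

eig = np.linalg.eigvalsh(S)
print(eig.max() <= 1e-9)                  # True: S is negative semidefinite
print(2*eig.min() >= np.trace(S) - 1e-9)  # True: 2*lambda_min(S) >= tr(S)
```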
Binary subtraction with borrowing
The correct result is 0x1B. 0x44 - 0x29 = 68 - 41 = 27 = 0x1B. Even without doing the calculation, you can see that 0x29 + 0x20 = 0x49, which is greater than 0x44, so the difference has to be less than 0x20. The procedure looks fine, the only small thing I see is that the number with just the high bit set (i.e. 1000 0000) is 128, not 127. Otherwise the procedure is just the same as in base 10, except you borrow 2, not 10 (and in base n, you'd borrow n) (or to be more correct, you always borrow 10, except it's "one zero" in the base you're working in).
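A one-line check of both the arithmetic and the high-bit remark:

```python
print(hex(0x44 - 0x29))  # 0x1b
print(0b10000000)        # 128: the byte with only the high bit set
```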
Quotient topology and continuity
Remember that if $\pi : X \to Y$ is a quotient map then it is surjective, and $U \subset Y$ is open if and only if $\pi^{-1}(U) \subset X$ is. With this in mind, let's take an open set $B$ of $Z$. Consider $g^{-1}(B)$. Because $g \circ \pi$ is continuous, $\pi^{-1}(g^{-1}(B))$ is open. So by the above, $g^{-1}(B)$ is open. Therefore $g$ is continuous.
If $X_0 \sim \text{Poisson}(\lambda)$, find the distribution of $X_1$
If $x\ge 1$ then \begin{align} & \Pr(X_1 = x) \\[8pt] = {} & \Pr\Big((X_0=x-1\ \&\ X_1=x) \text{ or } (X_0=x\ \&\ X_1 = x) \text{ or }(X_0=x+1\ \&\ X_1=x)\Big) \\[8pt] = {} & \frac{\lambda^{x-1} e^{-\lambda}}{(x-1)!} \cdot p_{x-1} + \frac{\lambda^x e^{-\lambda}}{x!} \cdot r_x + \frac{\lambda^{x+1} e^{-\lambda}}{(x+1)!} \cdot q_{x+1} \\[8pt] = {} & \frac{\lambda^{x-1} e^{-\lambda}}{(x-1)!} \left( p_{x-1} + \frac \lambda x\cdot r_x + \frac{\lambda^2}{x(x+1)} \cdot q_{x+1} \right). \end{align} However, since you have given no information about the way in which $p_x,r_x,q_x$ depend on $x,$ I am inclined to doubt that more can be said.
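For what it's worth, the formula is easy to check by simulation once one picks concrete values for $p_x,r_x,q_x$; below I take them constant ($p=r=q=1/3$) purely as an illustrative assumption:

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(0)
lam, N = 2.0, 10**6
p = r = q = 1/3                          # assumed constant just for this check

x0 = rng.poisson(lam, N)
step = rng.choice([1, 0, -1], size=N, p=[p, r, q])
x1 = x0 + step

x = 3
pois = lambda k: lam**k * exp(-lam) / factorial(k)
formula = pois(x - 1)*p + pois(x)*r + pois(x + 1)*q
print((x1 == x).mean(), formula)         # the two agree to ~3 decimals
```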
Is it possible to formulate category theory without set theory?
From a formal viewpoint it is possible to study category theory within category theory, using the notion of a topos. Topos theory does many things, but one thing it provides is an alternative, category-theoretic foundation for mathematics. By augmenting topos theory with sufficient additional axioms, it is theoretically possible to re-construct all of ZFC within topos theory. So, if someone can study category theory in ZFC, they can do the same thing by studying category theory within ZFC within topos theory! Or they can just study category theory using topos theory without using ZFC as an intermediate step. A practical challenge to doing this is that the axioms for a topos are arguably more complicated than the axioms for ZFC, which apart from replacement can all be justified in terms of relatively basic properties of sets. One other way to look at some issues raised in this thread is to look at the notion of type. There is a nice analogy for the difference between ZFC and some categorical foundations: it is like the difference between an untyped programming language (such as Scheme) and a strongly typed language (such as Java or C++). In Scheme and other untyped languages, there is no separation between code and data: given any two objects, we can treat the first as a function and the second as an input, and (attempt to) compute the corresponding output. So, for example, we could define natural numbers using Church numerals, treat "$5$" as a function, and compute its value on the ordered pair $(0,17)$. Of course, nobody really does this seriously in practice. Similarly, in ZFC, we can ask whether $\pi$ is a member of the ordered pair $(8, \mathbb{R})$, although in practice nobody does this seriously. In Java and C++, there are strict definitions of each data type. For example, if I have a "natural number" object, and I want a "real number" object, I need to convert ("cast") the original object to make it have the appropriate type. Thus I cannot directly add $1_\mathbb{N}$ and $\pi_\mathbb{R}$. This is similar to the way that some categorical foundations handle things. Instead of speaking of "casting", these foundations focus on the "natural inclusion map" from $\mathbb{N}$ to $\mathbb{R}$, etc. It is worth knowing that there are many other type theories, apart from the ones inspired by topos theory. There is intuitionistic type theory, which is very powerful, and classical second-order arithmetic, which is much weaker but which is still able to formalize almost all undergraduate mathematics. I believe, as do many working in foundations of math, that the naive informal mathematics found in practice is done in some sort of complicated (and informal) type theory. This makes type-theoretic foundations much more natural for many mathematicians -- many of the objections raised against ZFC rest on the lack of typing in set theory. Simple type-theoretic foundations would arguably be a more natural formal system than ZFC for many practical purposes, just as Java is a more practical language than Scheme for many purposes. On the other hand, the lack of typing in ZFC, like the lack of typing in Scheme, is useful for many theoretical purposes, and so it is good for mathematicians to be aware of untyped systems as well. For example, to make a model of ZFC we only need to define one undefined relation, $\in$. To make a model of type theory we have to lay out the system of types, then lay out a domain for each type, and also lay out all the maps between types and operations on each individual type. 
This is much more complicated. Analogously, it is a common exercise in computer science classes to ask students to write a Scheme interpreter in Scheme, or even to write a Scheme compiler in assembly language, but it is not common to ask students to write a full Java interpreter in Java, much less in assembly language.
How small is an infinitesimal quantity?
There are, I think, three main senses. The first sense (sometimes called "true" infinitesimals) is when you have an easily identified collection of things that are not infinitesimal, and some sense of comparing "size". The infinitesimals are those objects that are smaller than every non-infinitesimal. A typical example is the hyperreals from nonstandard analysis: an infinitesimal hyperreal is a number whose magnitude is smaller than the magnitude of every nonzero (standard) real number. The second sense is somewhat metaphorical: you have objects that represent some infinitesimal-like notion. So while you don't have any "true" infinitesimals, you can still use the metaphor to do many of the things you wanted to use infinitesimals for anyways. For example, the real line has no (nonzero) infinitesimals, but we can talk about its tangent bundle: the set of pairs of real numbers $(x,y)$ where $x$ denotes a point on the real line and $y$ is imagined as the scale of some infinitesimal displacement from $x$. Then, to do calculus with these, we say that if $f$ is a differentiable function, we also treat it as a function on the tangent bundle too, with $f(x,y) = (f(x), f'(x) y)$. This sort of thing is very important to differential geometry. We can actually make the tangent bundle into an algebraic structure called the dual numbers in a similar fashion to how the complex numbers are defined: we interpret a real number $x$ as the point $(x,0)$, let $\epsilon = (0,1)$. Addition is defined in the obvious way, and multiplication by setting $\epsilon^2 = 0$ (rather than $i^2 = -1$ as we do with complex numbers). Repeating the above, if $f$ is differentiable, we set $f(x + y \epsilon) = f(x) + f'(x) y \epsilon$. Note the appealing similarity to the notion of a "differential approximation". In this sort of algebraic setup, we say that $\epsilon$ is a "nilpotent infinitesimal" (to distinguish it from "true" infinitesimals). Nilpotent is an adjective that means you get zero by raising it to a sufficiently large power. The third sense is approximate; "infinitesimal" is used as a shorthand for the idea of being "approximately infinitesimal", which means something is small enough for whatever purpose you need.
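The dual-number arithmetic described above is exactly what forward-mode automatic differentiation implements. Here is a minimal sketch (the class name and the sample function are my own choices; only `+` and `*` are implemented):

```python
class Dual:
    """Numbers a + b*eps with eps**2 = 0; the eps-part carries derivatives."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps, since eps^2 = 0
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def f(x):
    return 3*x*x + 2*x + 1      # f'(x) = 6x + 2

y = f(Dual(2.0, 1.0))           # evaluate at x = 2 + 1*eps
print(y.a, y.b)                 # 17.0 14.0, i.e. (f(2), f'(2))
```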
How to prove that the system of ODE's won't have any negative solutions?
There are in fact negative solutions: if your initial condition has a negative value, it will stay negative (at least for a while). The correct statement is that if the initial values $S(0)$ and $I(0)$ are positive, the solutions will stay positive for all future time. One way to see this is that if a solution ever becomes negative, it must pass through $0$. But the solution with $I(t_0) = 0$, $S(t_0) = s_0$ is constant $I(t) = 0$, $S(t) = s_0$, while the solution with $I(t_0) = i_0$, $S(t_0) = 0$ is $I(t) = i_0 e^{-\beta (t-t_0)}$, $S(t) = 0$. Thus if $I(t)$ is ever $0$, it must always be $0$, and if $S(t)$ is ever $0$, it must always be $0$ (and have always been $0$).
How to prove matrix Q is equal to $E_{m}+\lambda e_{p}e^{T}_{q}$?
If we were to represent $\delta_{ip}$ as a vector, we have $$\begin{bmatrix}\delta_{1p} \\ \vdots \\ \delta_{p-1,p}\\ \delta_{pp} \\ \delta_{p+1,p} \\ \vdots \\\delta_{mp}\end{bmatrix} $$ Note that $\delta_{ip} = 1$ only when $i = p$; hence the only nonzero entry is $\delta_{pp}$, and we get $$\begin{bmatrix}\delta_{1p} \\ \vdots \\ \delta_{p-1,p}\\ \delta_{pp} \\ \delta_{p+1,p} \\ \vdots \\\delta_{mp}\end{bmatrix}=\begin{bmatrix}0\\ \vdots \\ 0\\ 1 \\ 0 \\ \vdots \\ 0\end{bmatrix}=e_p $$ Likewise, representing the term $\delta_{jq}$ gives us $e_q$. Hence the term $\lambda e_pe_q^T$ should be clear. As for the term $\delta_{ij}$, we have that $\delta_{ij} = 1$ only when $i=j$ and is zero otherwise; hence we get the identity matrix $E_m$.
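To see the formula concretely, here is a small numpy sketch; the sizes $m,p,q$ and the value of $\lambda$ are arbitrary choices:

```python
import numpy as np

m, p, q, lam = 4, 2, 3, 5.0            # 1-based indices p, q as in the formula
e_p = np.zeros((m, 1)); e_p[p - 1] = 1
e_q = np.zeros((m, 1)); e_q[q - 1] = 1

Q = np.eye(m) + lam * (e_p @ e_q.T)    # E_m + lambda * e_p e_q^T
print(Q)

# entrywise: Q[i, j] = delta_ij + lam * delta_ip * delta_jq
i, j = np.indices((m, m))
Q2 = (i == j) + lam * (i == p - 1) * (j == q - 1)
print(np.array_equal(Q, Q2))           # True
```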
Real Solutions to Delayed Neutral Differential Equations
Note that $u(t)=y(t)−y(t−τ)$ satisfies $u'=au$ and thus has the solution $u(t)=Ce^{at}$. $$y(t)=y(t−τ)+Ce^{at}=y(t−2τ)+Ce^{a(t−τ)}+Ce^{at}=\cdots=y(t−nτ)+Ce^{at}\frac{1-e^{-anτ}}{1-e^{-aτ}}.$$ Thus you can select any differentiable function for the restriction of $y$ to $[0,τ]$, determine $C=e^{-aτ}(y(τ)-y(0))$ and then for any $t=nτ+s$, $s\in[0,τ)$, the value of the solution function is $$y(t)=y(s)+\frac{e^{at}-e^{as}}{e^{aτ}-1}(y(τ)-y(0)).$$
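A quick numerical check of the final formula; the data $\varphi$ on $[0,\tau]$ and the constants $a,\tau$ below are arbitrary choices:

```python
import numpy as np

a, tau = 0.7, 1.0
phi = lambda s: np.sin(3*s) + 0.5        # arbitrary choice of y on [0, tau]

def y(t):
    # y(t) = y(s) + (e^{at} - e^{as})/(e^{a tau} - 1) * (y(tau) - y(0)), t = n*tau + s
    s = t % tau
    return phi(s) + (np.exp(a*t) - np.exp(a*s)) / (np.exp(a*tau) - 1) * (phi(tau) - phi(0))

C = np.exp(-a*tau) * (phi(tau) - phi(0))
t = np.linspace(tau, 5*tau, 400)
# u(t) = y(t) - y(t - tau) should equal C e^{at}
print(np.max(np.abs(y(t) - y(t - tau) - C*np.exp(a*t))))   # ~ 0 (rounding only)
```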
Number of Bits Required to Store Every 32-bit/sample Audio File at 44100 Samples/Second up to 20 minutes
In general, $$ \sum_{t=1}^n t r^t = \frac{r (n r^{n + 1} - (n + 1) r^n + 1)}{(r - 1)^2}. $$ Plug in $n = 1200\times 44100$ and $r = 2^{32}$, then multiply the result by $32.$ For such a large $r,$ however, the numerator is dominated by the term $nr^{n+2}$ and the denominator is dominated by the term $r^2.$ You introduce a very small percentage error by ignoring all the other terms. This gives a total number of bits \begin{align} S \approx 32 nr^n &= 32 \times (1200\times 44100) \times 2^{32\times1200\times 44100} \\ &= 1693440000 \times 2^{1693440000}, \end{align} including the outermost factor $32.$ Equivalently, almost all (something like $99.9999\%$) of the space is occupied by files of the maximum length, so we can just count the space for files that are a full $20$ minutes long: the length of each file is $L = 32 \times 1200\times 44100$ bits and there are $2^L$ of these files, resulting in the same approximation for the total size $S$ as given above. Considering that the estimated number of protons, neutrons, and electrons in the observable universe is something on the order of $10^{80}\approx 2^{266},$ with maybe a few more orders of magnitude if we count all neutrinos, photons, and whatever is in dark matter, it is evident that even if we could take every particle in the observable universe and replace it with an entire observable universe's worth of particles, the number of particles we would have would still be only a negligible fraction of the number of bits required. This doesn't bode well for any kind of useful comparison to anything meaningful, except to say "it's bigger than that."
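The closed form and the quality of the dominant-term approximation are easy to check with small parameters (the real $n$ and $r$ are far too large to sum directly); the values below are arbitrary:

```python
def exact(n, r):
    return sum(t * r**t for t in range(1, n + 1))

def closed(n, r):
    # sum_{t=1}^n t r^t = r (n r^{n+1} - (n+1) r^n + 1) / (r-1)^2
    return r * (n*r**(n + 1) - (n + 1)*r**n + 1) // (r - 1)**2

n, r = 10, 2**8
assert exact(n, r) == closed(n, r)
approx = n * r**n             # keep only n r^{n+2} upstairs and r^2 downstairs
print(closed(n, r) / approx)  # ~1.0035: within about 0.4% even for r = 256
```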
$X \to Y$ flat $\Rightarrow$ the image of a closed point is also a closed point?
This is just a consequence of the Nullstellensatz (flatness is not needed). If $x\in X$ is a closed point, then $k(x)$ is a finite extension of $k$ and so is $k(f(x))$ (being a subextension of $k(x)$). This is enough to show that $f(x)$ is a closed point. If $X, Y$ are not of finite type over a field, this is false even if $f$ is of finite type (and faithfully flat if you like): consider a DVR $R$ with uniformizing element $\pi$, and let $f : X=\mathbb A^1_R \to Y=\mathrm{Spec}(R)$ be the canonical morphism. It is of finite type and faithfully flat. The polynomial $1-\pi T\in R[T]$ defines a closed point of $X$ whose image by $f$ is the generic point of $Y$.
Algebra books for olympiad preparation
For number theory I would suggest "An introduction to the theory of numbers" by Niven, Zuckerman and Montgomery. It has very good theory and problems, and everything is nicely explained with elegant proofs. An easier book would be "Elementary Number Theory" by Burton, though it doesn't have any difficult problems. For olympiad preparation, however, you learn by solving problems, so I would recommend "104 Number Theory Problems" by Titu Andreescu. It has some basic theory and then progressively harder problems, all with solutions. Similarly, for algebra try "101 problems in algebra" by Titu Andreescu.
Explanation Needed: $(\sqrt{a}+\sqrt{b})(\sqrt{a}+\sqrt{b}) = (a-b)$
This is, no doubt, a typo. $(\sqrt{a}+\sqrt{b})(\sqrt{a}-\sqrt{b})=a-b$. Your computation is correct, in that $(\sqrt{a}+\sqrt{b})^2\ne a-b$.
Show that $|\int_X f d \mu|= \int_X|f|d\mu$ if and only if there is a constant $\alpha$ such that $|f| = \alpha f$ a.e. on $X$.
$\begin{cases} f=f^+-f^- \\ |f|=f^++f^-\end{cases}$ and $\begin{cases}I^+=\int_X f^+d\mu\\ I^-=\int_X f^-d\mu\end{cases}$ $\displaystyle \int_X |f|d\mu=I^++I^-=\bigg|\int_X fd\mu\bigg|=\pm(I^+-I^-)$ Positive case: $I^++I^-=I^+-I^-\iff 2I^-=0\iff f^-=0\quad\mu$-a.e. on $X$. Negative case: $I^++I^-=-I^++I^-\iff 2I^+=0\iff f^+=0\quad\mu$-a.e. on $X$. And this can be rewritten $|f|=\alpha f$ with $\alpha=\pm 1$. Beware that the equality is only true $\mu$-a.e. on $X$.
Is every submodule of a projective module projective?
Since any projective module is a submodule (even a summand) of a free module, the two formulations of your question are equivalent. A ring $R$ which has the property that any submodule of a projective $R$-module 'inherits' projectivity is called hereditary. This is equivalent to saying that the global dimension of $R$, $$\text{gldim}(R\text{-Mod}) := \text{sup}\{k\ |\ \text{Ext}^k_R(-,-)\not\equiv 0\},$$ is at most $1$, so $\text{Ext}^k_R(M,N)=0$ for all $k\geq 2$ and all $R$-modules $M,N$. I will try to give some intuition for this in the contexts of geometry and representation theory: Geometry In commutative algebra and algebraic geometry, the finiteness of the global dimension is a homological way to express regularity/smoothness: Serre's Theorem If $(R,{\mathfrak m},k)$ is a local Noetherian ring, then the following are equivalent: $R$ is regular, i.e. $\text{dim}_k({\mathfrak m}/{\mathfrak m}^2)=\text{dim}(R)$. $\text{gldim}(R\text{-Mod})<\infty$. In case the two equivalent conditions are met, $\text{gldim}(R\text{-Mod})=\text{dim}(R)$. Hence, you will get lots of examples by looking at non-regular local rings, of which there are plenty, e.g. any local ring with zero divisors will do: Example Let $k$ be a field, and put $R := k[\varepsilon]/(\varepsilon^2)$. Then $\text{dim}(R)=0$, but $\text{dim}_k{\mathfrak m}/{\mathfrak m}^2=1$, so $R$ is not regular. We therefore must have infinite global dimension, and in particular non-projective submodules of free modules. And indeed, $k\cong k\cdot\varepsilon\subset R$ is not projective, since the quotient map $R\to R/{\mathfrak m}=k$ does not split. Representation Theory In the representation theory of finite-dimensional algebras, hereditary algebras occur as path-algebras of acyclic quivers without relations (this means that you fix a finite graph without oriented cycles and look at the category of configurations of vector spaces where at each vertex of the graph you put a vector space and at each oriented edge you put a homomorphism of vector spaces - similar to the representation theory of the commutative square below). Adding relations, however, will produce non-hereditary algebras. Example Consider the representation theory of the commutative square, i.e. pick $R$ such that $R\text{-Mod}$ is equivalent to the category of diagrams of vector spaces $$\begin{array}{ccc} V_{00} & \stackrel{\alpha}{\to} & V_{01}\\ {\scriptsize\beta}\downarrow & & \downarrow{\scriptsize\gamma} \\ V_{10} & \stackrel{\delta}{\to} & V_{11}\end{array}$$ satisfying $\gamma\alpha=\delta\beta$. Then the module $$\begin{array}{ccc} k & \stackrel{1}{\to} & k\\ {\scriptsize 1}\downarrow & & \downarrow{\scriptsize 1} \\ k & \stackrel{1}{\to} & k\end{array}$$ is projective (it represents the exact functor $V_{\ast\ast}\mapsto V_{00}$), but its submodule $$\begin{array}{ccc} 0 & \stackrel{0}{\to} & k\\ {\scriptsize 0}\downarrow & & \downarrow{\scriptsize 1} \\ k & \stackrel{1}{\to} & k\end{array}$$ is not: it represents the pullback functor $$V_{\ast\ast}\mapsto \{(v,v^{\prime})\in V_{01}\oplus V_{10}\ |\ \gamma(v)=\delta(v^{\prime})\}$$ which is not exact (not even the kernel functor, which is obtained by restriction to representations with $V_{00}=V_{10}=0$, is exact; see http://en.wikipedia.org/wiki/Snake_lemma)
Solving matrices equations using invert matrices properties and determinant properties.
We need to solve $$\begin{bmatrix} 3 & 4 \\ 5&-2\end{bmatrix}\cdot P\cdot\begin{bmatrix}3\\2\end{bmatrix}=\begin{bmatrix} 5 \\ 1 \end{bmatrix}\iff P\cdot\begin{bmatrix}3\\2\end{bmatrix}=\begin{bmatrix} 3 & 4 \\ 5&-2\end{bmatrix}^{-1}\cdot \begin{bmatrix} 5 \\ 1 \end{bmatrix}$$ For the inverse use the formula here, and then, denoting by $v_1$ and $v_2$ the columns of $P$, we have $$P\cdot\begin{bmatrix}3\\2\end{bmatrix}=3v_1+2v_2$$
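A quick numerical sketch of the right-hand side (so the answer is any $P$ whose columns satisfy $3v_1+2v_2$ equal to this vector):

```python
import numpy as np

A = np.array([[3., 4.], [5., -2.]])
b = np.array([5., 1.])
w = np.linalg.solve(A, b)    # w = A^{-1} b, so we need 3*v1 + 2*v2 = w
print(w)                     # [0.53846... 0.84615...], i.e. [7/13, 11/13]
```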
How to Solve It: A Rate Problem
We have that $V=\frac{1}{3}\pi r^2h$. By similar triangles, $\frac{h}{r}=\frac{b}{a}$, so this is just $V=\frac{1}{3}\pi\frac{a^2h^3}{b^2}$. Differentiating, $\frac{dV}{dt}=\frac{\pi a^2h^2}{b^2}\frac{dh}{dt}$, as desired.
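If you want to check the differentiation symbolically, a short sympy sketch of the same computation:

```python
import sympy as sp

t = sp.symbols('t')
a, b = sp.symbols('a b', positive=True)
h = sp.Function('h')(t)

V = sp.Rational(1, 3) * sp.pi * (a*h/b)**2 * h   # r = a*h/b by similar triangles
print(sp.simplify(sp.diff(V, t)))
# -> pi*a**2*h(t)**2*Derivative(h(t), t)/b**2
```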
New Year Maths 2016: $\sum_{r=3}^{\; 3^2}r^3=2016$
$$\large\begin{align} &\color{darkblue} {\sum_{\qquad\qquad r={\sum_{m=0}^\infty\left(\frac{n-1}n\right)^m }}^{\quad \qquad\qquad \sum_{m=0}^\infty\left(\frac{n^2-1}{n^2}\right)^m}}\color{purple}{r^n}\\\\ =&\color{darkblue}{\sum_{\qquad\qquad r={\sum_{m=0}^\infty\left(1-\frac 1n\right)^m }}^{\quad \qquad\qquad \sum_{m=0}^\infty\left(1-\frac 1{n^2}\right)^m}}\color{purple}{r^n}\\\\ =&\color{darkblue}{\sum_{\qquad \qquad r=1\big/\left[1-\left(1-\frac 1n\right)\right]}^{\quad \qquad \qquad 1\big/\left[1-\left(1-\frac 1{n^2}\right)\right]}}\color{purple}{r^n}\\\\ =&\color{darkblue}{\qquad\qquad \; \sum_{r=n}^{\; \;n^2} }\color{purple}{r^n}&&=\color{red}{(n-1)^{n+2}}\color{orange}{n^{n-1}}\color{green}{(2n+1)} \end{align}$$ By inspection, equality holds when $n=3$, giving the interesting summation result $$\large\begin{align}\color{darkblue}{\sum_{r=3}^{\; 3^2}} \color{purple}{r^3} &=\color{red}{2^5\cdot}\color{orange}{3^2\cdot}\color{green}{7}\\ \large\color{red}{\sum_{r=3}^{\; 3^2}}\color{red}{r^3} &\color{red}{=2016} \end{align}$$ Happy New Year, everyone!!
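And for the skeptical, the punchline takes two lines of Python to confirm:

```python
print(sum(r**3 for r in range(3, 3**2 + 1)))   # -> 2016
print(2**5 * 3**2 * 7)                         # -> 2016
```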
Finding modulus and argument of z³ - 4√3 + 4i = 0
Your mistake is in calculating the absolute value: $$|w|=\sqrt{(4\sqrt3)^2+(-4)^2}=\sqrt{48+16}=\sqrt{64}=8.$$ The general rule is $$ |a+bi|=\sqrt{a^2+b^2}. $$ Think of the absolute value as distance from the origin. The complex number $a+bi$ corresponds to the point $(a,b)$, and the distance of the point $(a,b)$ from the origin is given by Pythagoras' theorem.
Books of Random Numbers. How were the numbers generated?
The book is A Million Random Digits with 100,000 Normal Deviates. The introduction, freely available here, explains in detail how the numbers were generated and tested.
Integral $\int_0^1\arctan(x)\arctan\left(x\sqrt3\right)\ln(x)dx$
Following the method outlined in another answer, and simplifying the resulting expression, we get the following closed form: $$\frac{5 G}{6 \sqrt{3}}-\frac{\Im\operatorname{Li}_3(1+i)}{\sqrt{3}}+\Im\operatorname{Li}_3\left(i \sqrt{3}\right)-\frac{\Im\operatorname{Li}_3\left(i \sqrt{3}\right)}{4 \sqrt{3}}-\frac{1}{2} \Im\operatorname{Li}_3\left(1+i \sqrt{3}\right)\\ -3 \Im\operatorname{Li}_3\left(\left(-\frac{1}{2}+\frac{i}{2}\right) \left(-1+\sqrt{3}\right)\right)+\sqrt{3} \Im\operatorname{Li}_3\left(\left(-\frac{1}{2}+\frac{i}{2}\right) \left(-1+\sqrt{3}\right)\right)\\ +\frac{1}{\sqrt{3}}\Im\operatorname{Li}_3\left(\tfrac{(1+i) \sqrt{3}}{1+\sqrt{3}}\right)-3 \Im\operatorname{Li}_3\left(\left(\frac{1}{2}+\frac{i}{2}\right) \left(1+\sqrt{3}\right)\right)+\frac{2}{\sqrt{3}}\Im\operatorname{Li}_3\left(\left(\frac{1}{2}+\frac{i}{2}\right) \left(1+\sqrt{3}\right)\right)\\ -\frac{1}{288} \pi \left[-2 \left\{\vphantom{\Large|}3 \left(4+\sqrt{3}\right) \cdot \operatorname{Li}_2\left(\tfrac{1}{3}\right)+6 \ln ^23-6 \left(7 \sqrt{3}-24\right) \cdot \ln ^2\left(1+\sqrt{3}\right)\\+24 \ln 3 -4 \left(9+4 \sqrt{3}\right) \cdot \ln \left(1+\sqrt{3}\right)\right\}+3 \left(5 \sqrt{3}-36\right) \cdot \ln ^22\\ -4 \left\{\vphantom{\Large|}9+7 \sqrt{3}-6 \ln 3+3 \left(7 \sqrt{3}-24\right) \cdot \ln \left(1+\sqrt{3}\right)\right\}\cdot\ln 2\right]\\ -\frac{1}{216} \left(18+5 \sqrt{3}\right) \pi ^2+\left(\frac{5}{36}-\frac{31}{384 \sqrt{3}}\right) \pi ^3+\frac{5 \psi ^{(1)}\left(\frac{1}{3}\right)}{48 \sqrt{3}},$$ that might be possible to simplify further. Mathematica expression is here.
If $2^n=3^m$ and both $n,m$ are integers, which one is greater?
$3^m$ is never an even number. Since $2^n=3^m$, that means that $2^n$ is not even. The only way for that to happen would be for $n=0$ and $2^n=1$. Since $2^n=3^m$, we have that $3^m=1$ and $m=0$. Thus the only solution is $n=0$ and $m=0$. Also, neither is bigger. This all assumes that $n$ and $m$ are whole numbers. Without that, this all goes out the window.
Show that the integral of a bounded measurable function of finite support is properly defined.
From what I understand, your prerequisites are: you know how to define the integral of functions on measure spaces with finite measure; you know the additivity of the integral with respect to the domain; and you know that the integral of a zero function is zero. Then yes, your proof is perfectly correct. On the other hand, I don't really understand the point, since if you want to define the integral of a measurable function $f:E\to \mathbb{R}$ with finite support, then since $f$ is measurable the support $E'$ is measurable, and has finite measure by hypothesis, so you may directly define $$\int_E f = \int_{E'} f.$$ PS: you should probably not use $E \sim F$ for set complementation; it's very likely confusing for most people (me included).
Fundamental Theorem of Calculus on line integrals
You need to build a parametric equation for $y$ and $z$ as functions of $t$ (or constant values). You aren't able to integrate $y$ and $z$ that easily, because they are functions of $t$.
Calculating the Limit . $\lim_{x \rightarrow 0}\space \left[ \int_0^{1}\ [ by+a(1-y)]^{x}dy\right]^{1/x}$
By L'Hopital's Rule, one has \begin{eqnarray} &&\lim_{x\to0^+}\ln\bigg\{\left[ \int_0^{1}\ [ by+a(1-y)]^{x}dy\right]^{1/x}\bigg\}\\ &=&\lim_{x\to0^+}\frac{\ln\bigg\{\int_0^{1}\ [ by+a(1-y)]^{x}dy\bigg\}}{x}\\ &=&\frac{1}{\int_0^{1}\ [ by+a(1-y)]^{x}dy}\int_0^{1}\frac{d}{dx} [ by+a(1-y)]^{x}dy\bigg|_{x=0}\\ &=&\int_0^1\ln[(b-a)y+a]dy\\ &=&\frac{1}{b-a}\int_a^b\ln t\,dt\\ &=&\frac{b\ln b-a\ln a}{b-a}-1. \end{eqnarray} (The substitution $t=(b-a)y+a$ has $dt=(b-a)\,dy$, which is where the factor $\frac{1}{b-a}$ comes from.) Exponentiating, the original limit equals $e^{-1}\left(b^b/a^a\right)^{1/(b-a)}$.
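A numerical sanity check, with the arbitrary choice $a=1$, $b=2$ (so the limit should be $4/e\approx1.4715$):

```python
import numpy as np

a, b, x = 1.0, 2.0, 1e-5
y = (np.arange(200000) + 0.5) / 200000          # midpoint rule on [0, 1]
integral = np.mean((b*y + a*(1 - y))**x)
print(integral**(1/x))                          # ~ 1.47151
print(np.exp((b*np.log(b) - a*np.log(a))/(b - a) - 1))   # 4/e = 1.47151...
```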
Prove that the space of sequences with limit $0$ is complete.
To avoid confusing yourself with sequences of sequences, I recommend thinking of the elements of $C_0$ as functions $f:\mathbb{N} \to\mathbb{R}$ such that $\lim_{n\to\infty}f(n)=0$. Then the claim becomes: if $(f_k)_{k=1}^\infty$ is a Cauchy sequence of functions with respect to the uniform norm, there is $f\in C_0$ such that $f_k\to f$ uniformly. The proof consists of two steps: For each $n\in\mathbb{N}$ the sequence $(f_k(n))_{k=1}^\infty$ is a Cauchy sequence of numbers, hence has a limit. Denote this limit by $f(n)$, and you have a function $f:\mathbb{N}\to \mathbb{R}$. (Check, using the Cauchy property, that $f_k\to f$ uniformly, not just pointwise.) To show that $f\in C_0$: given $\epsilon>0$ pick $n$ such that $\|f-f_n\|<\epsilon/2$, and then pick $K$ such that $|f_n(k)|<\epsilon/2$ whenever $k\ge K$. Conclude that $|f(k)|<\epsilon$ whenever $k\ge K$.
Does This Condition Characterize $e^z$?
When you want to find out whether a condition uniquely determines an entire function, it is often fruitful to assume one has two (not necessarily different) functions satisfying the condition and find out what one can determine about their difference or quotient. In particular, if one function satisfying the condition is explicitly known. Here, we have a condition that is satisfied by the exponential function, a function that is rather well known and has several very convenient properties. One of the convenient properties is that it has no zeros, hence the quotient $\dfrac{f(z)}{e^z}$ is entire with $f$. The fact that the exponential function satisfies the differential equation $y' = y$ is also often very useful. So suppose $f$ is an entire function with $$f(0) = 1\quad \text{and }\quad f'(n) = f(n),\; n \in \mathbb{Z}^+\tag{1}$$ To find out whether necessarily $f(z) = e^z$, let us consider $h(z) = f(z)e^{-z}$ and see what we can say about that. First, since $f(0) = e^0 = 1$, we have $h(0) = 1$. Differentiating, we find $$h'(z) = f'(z)e^{-z} + f(z)\bigl(-e^{-z}\bigr) = \bigl(f'(z) - f(z)\bigr)e^{-z},$$ so $h'(n) = 0$ for all positive integers $n$. On the other hand, if $g(0) = 1$ and $g'(n) = 0$ for all positive integers $n$, then we find for $f(z) = g(z)e^z$ that $f(0) = 1$ and since $f'(z) = g'(z)e^z + g(z)e^z = \bigl(g'(z) + g(z)\bigr)e^z$, that $f'(n) = \bigl(g'(n) + g(n)\bigr)e^n = g(n)e^n = f(n)$ for positive integers $n$. So the solutions to $(1)$ are in correspondence to the entire functions $h$ with $h(0) = 1$ and $h'(n) = 0,\; n \in \mathbb{Z}^+$. Given only $h'(n) = 0,\; n \in \mathbb{Z}^+$, we can always achieve $h(0) = 1$ by adding a constant, so the question is: Are there entire functions $g$ with $g(n) = 0,\, n \in \mathbb{Z}^+$ other than $g \equiv 0$? Any primitive of such a function (since $\mathbb{C}$ is simply connected, all entire functions have a primitive) gives rise to a solution of $(1)$. A non-constant function that vanishes in all $n \in \mathbb{Z}$ (not only in the positive integers, but that doesn't hurt of course) is $\sin (\pi z)$ - of course all (nonzero) multiples of that have the same property. Choosing the factor $-\pi$, we see that $h(z) = \cos (\pi z)$ gives rise to the solution $f(z) = \cos (\pi z)e^z$ of $(1)$ which evidently is different from $e^z$. There are many more solutions to $(1)$, but few can be expressed as simply as the above.
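One can let a computer confirm that $f(z)=\cos(\pi z)e^z$ really does satisfy $(1)$; a short sympy sketch:

```python
import sympy as sp

z = sp.symbols('z')
f = sp.cos(sp.pi*z) * sp.exp(z)
print(f.subs(z, 0))                          # 1, so f(0) = 1
d = sp.simplify(sp.diff(f, z) - f)           # -> -pi*exp(z)*sin(pi*z)
print(d)
print([d.subs(z, n) for n in range(1, 5)])   # [0, 0, 0, 0]: f'(n) = f(n)
```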
Are these two mathematical objects the same from a practical standpoint, or literally identical mathematical objects?
So, if you want to be really exact, we should think carefully about the condition $x\in\Bbb Z$ for an element $x\in\Bbb R$. Does this mean that you have defined $\Bbb R$ in such a way that $\Bbb Z$ is actually a subset of $\Bbb R$? The way it is often done, $\Bbb Z$ only embeds into $\Bbb R$ by mapping a number $x\in\Bbb Z$ to the equivalence class of the constant Cauchy sequence $(\frac{x}1,\frac{x}1,\ldots)$. So if you have defined $\Bbb R$ as a set of equivalence classes of Cauchy sequences in $\Bbb Q$, then $\Bbb R$ does not truly contain the set $\Bbb Z$, it "only" contains something that is isomorphic to $\Bbb Z$ in some very strong ways. If mathematicians were unable to get over this fact, then we'd have to conclude that $\{ \mathbf{x}\in\Bbb R^n \mid x_1,\ldots,x_n\in\Bbb Z \}=\emptyset$. Of course, this would be madness. The fact of the matter is that there is always a strongly intuitive, vastly structure-preserving, injective map $i:\Bbb Z\to\Bbb R$, no matter how you construct $\Bbb R$. This map allows you to think of $i(\Bbb Z)$ as $\Bbb Z$ itself. In fact, it means that you could have constructed $\Bbb R$ in such a way that $\Bbb Z$ is truly a subset of it. And if you are willing to accept that $\Bbb Z$ is actually a subset of $\Bbb R$, then indeed $$ \{ x\in\Bbb R^n \mid x_1,\ldots,x_n\in\Bbb Z \} = \Bbb Z^n. $$ Proof. In set theory, we usually define the tuple $(x_1,\ldots,x_n)$ recursively by the following rules: \begin{align*} (x_1,x_2) &:= \{ \{x_1\}, \{x_1,x_2\} \}, \\ (x_1,\ldots,x_k,x_{k+1}) &:= ((x_1,\ldots,x_k),x_{k+1}). \end{align*} The set $\Bbb Z^n$ simply denotes the set of all tuples $(x_1,\ldots,x_n)$ with $x_1,\ldots,x_n\in\Bbb Z$. Since $\Bbb Z\subseteq \Bbb R$, we have \begin{align*} \{ (x_1,\ldots,x_n)\in\Bbb R^n \mid x_1,\ldots,x_n\in\Bbb Z \} &= \{ (x_1,\ldots,x_n) \mid x_1,\ldots,x_n\in(\Bbb Z\cap\Bbb R) \} \\&= \{ (x_1,\ldots,x_n) \mid x_1,\ldots,x_n\in\Bbb Z \} = \Bbb Z^n. \end{align*}
Algorithms for symbolic manipulation
The algorithms behind symbolic integration (due to Liouville, Ritt, Risch, Bronstein et al.) are discussed in prior questions here, e.g. the transcendental case and algebraic case. For general references on symbolic computation see any of the standard textbooks, e.g. Geddes et al.: Algorithms for computer algebra, Grabmeier et al.: Computer algebra handbook, von zur Gathen: Modern computer algebra, and Zippel: Effective polynomial computation, as well as many other books. See also the Journal of Symbolic Computation and various conferences: SIGSAM's ISSAC, EUROCAL, etc.
Locus of vertex of a rectangle
First, the coordinates of the outside corner point can be written simply as $$(\frac{4a}{m^2}+4am^2, \frac{4a}{m}-4am)$$ by using vector addition. Yours is correct too; it simplifies to the same value. Squaring the $y$-value gives $y^2=16a^2(\frac{1}{m^2}+m^2-2)$. Now notice that the $x$-value contains the factor $\frac{1}{m^2}+m^2$. You can then substitute to eliminate $m$.
1-form on $S^n$ with non-degenerate differential.
The answer is no, there is no such $\omega$ for any $n$. To see this, assume for a contradiction that such an $\omega$ exists. Then, since $d^2 \omega= 0$, $d\omega$ is closed, and it is non-degenerate by assumption. This implies $(S^n, d\omega)$ is a symplectic manifold. Much is known about symplectic manifolds. For example, it immediately follows that $n$ is even. Further, the $n$-form $(d\omega)^\frac{n}{2}$ is known to be a volume form. In particular, $[d\omega]^\frac{n}{2} \neq 0\in H_{\text{de Rham}}^n(S^n)$. Of course, this is absurd, because, by definition of de Rham cohomology, $[d\omega] = 0 \in H_{\text{de Rham}}^2(S^n)$. This contradiction shows that no such $\omega$ exists.
Show that these events are independent (dyadic expansions)
To show independence we may assume WLOG that $i_1< \dots<i_n$. For each $n$ consider the grid of points in $[0,1]$ defined by $$P_n:=\bigg\{\frac{k}{2^n}:k=0,1,\dots,2^n\bigg\}$$ Clearly, $P_{n}\subset P_{n+1}$, i.e. the grids are getting finer when $n$ gets larger. Each grid $P_{n}$ determines a set of $2^n$ half-open disjoint intervals in $[0,1)$ of the form $I^k_n=[\frac{k}{2^n},\frac{k+1}{2^n})$ $k=0,1,\dots,2^n-1$. We see that $A_n$ is the disjoint union of those intervals having $k$ even, i.e. $A_n$ leaves out every other interval $I^k_n$ $k=0,1,\dots,2^n-1$. Hence each $A_n$ is the disjoint union of $2^{n-1}$ intervals of length $1/2^n$, and therefore has measure $1/2$. Note also that each point $\frac{k}{2^n} \in P_n$ with $k$ even becomes $\frac{2k}{2^{n+1}} \in P_{n+1}$, and so points having an even numerator in $P_n$ will still have an even numerator in subsequent grids. What happens if we intersect $A_{i_1}$ and $A_{i_2}$ with $i_1<i_2$? Each of the $2^{i_1-1}$ half-open intervals of $A_{i_1}$ will be sub-divided into $2^{i_2-i_1}$ half-open intervals by the points in $P_{i_2}$, and $A_{i_2}$ leaves out every other such interval. So the result is a disjoint union of $2^{i_1-1} \cdot 2^{i_2-i_1-1}=2^{i_2-2}$ half-open intervals of the form $[\frac{k}{2^{i_2}},\frac{k+1}{2^{i_2}})$ with $k$ even. If we intersect $A_{i_1}\cap A_{i_2}$ with some $A_{i_3}$ with $i_1<i_2<i_3$, then the same reasoning gives that $A_{i_1}\cap A_{i_2}\cap A_{i_3}$ is a disjoint union of $2^{i_2-2} \cdot 2^{i_3-i_2-1}=2^{i_3-3}$ half-open intervals of the form $[\frac{k}{2^{i_3}},\frac{k+1}{2^{i_3}})$ with $k$ even. Inductively we get that $A_{i_1}\cap \dots \cap A_{i_n}$ is a disjoint union of $2^{i_n-n}$ half-open intervals of the form $[\frac{k}{2^{i_n}},\frac{k+1}{2^{i_n}})$ with $k$ even. Therefore the formula $$P(A_{i_1}\cap \dots \cap A_{i_n})=\frac{1}{2}P(A_{i_1}\cap \dots \cap A_{i_{n-1}}) $$ is verified for $i_1< \dots<i_n$. Then induction gives us independence. Finally $x\in[0,1)$ has a terminating dyadic expansion $(\varepsilon_j)$ with $n$th digit equal to zero if and only if $x\in\bigg[\sum_{j=1}^{n-1} \frac{\varepsilon_j}{2^j},\sum_{j=1}^{n-1} \frac{\varepsilon_j}{2^j}+\frac{1}{2^n} \bigg)=\big[\frac{k}{2^{n}},\frac{k+1}{2^{n}}\big)$ with $k$ even.
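The independence is also easy to see by Monte Carlo; a throwaway sketch (the digit extractor below ignores the measure-zero set of dyadic rationals):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(10**6)                     # uniform on [0, 1)

def digit(n):                             # n-th binary digit of x
    return np.floor(x * 2**n).astype(int) % 2

A1, A2, A3 = (digit(n) == 0 for n in (1, 2, 3))
print((A1 & A2 & A3).mean())              # ~ 0.125 = (1/2)^3
print(A1.mean(), A2.mean(), A3.mean())    # each ~ 0.5
```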
Intuitively, what is the difference between Eigendecomposition and Singular Value Decomposition?
Consider the eigendecomposition $A=P D P^{-1}$ and the SVD $A=U \Sigma V^*$. Some key differences are as follows. The vectors in the eigendecomposition matrix $P$ are not necessarily orthogonal, so the change of basis isn't a simple rotation. On the other hand, the vectors in the matrices $U$ and $V$ in the SVD are orthonormal, so they do represent rotations (and possibly flips). In the SVD, the nondiagonal matrices $U$ and $V$ are not necessarily the inverse of one another. They are usually not related to each other at all. In the eigendecomposition the nondiagonal matrices $P$ and $P^{-1}$ are inverses of each other. In the SVD the entries in the diagonal matrix $\Sigma$ are all real and nonnegative. In the eigendecomposition, the entries of $D$ can be any complex number - negative, positive, imaginary, whatever. The SVD always exists for any sort of rectangular or square matrix, whereas the eigendecomposition can only exist for square matrices, and even among square matrices it sometimes doesn't exist.
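These differences are easy to see numerically; a small numpy sketch with a random matrix (the size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

w, P = np.linalg.eig(A)        # A = P diag(w) P^{-1}; w may be complex
U, s, Vt = np.linalg.svd(A)    # A = U diag(s) Vt; s real and nonnegative

print(np.allclose(P.conj().T @ P, np.eye(3)))  # generally False: P not orthonormal
print(np.allclose(U.T @ U, np.eye(3)),
      np.allclose(Vt @ Vt.T, np.eye(3)))       # True True: rotations/reflections
print(s)                                       # real, nonnegative, sorted
```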
If $f,g$ are linear forms of a finite dimensional vector $V$ that are not proportional then $ker (F) \cap ker (G)$ has dimension $(n-2)$
Hint: Use Grassmann's formula for linear subspaces: $$\dim U + \dim W = \dim(U \cap W) + \dim(U + W)$$
Cartesian coordinates for vertices of a regular 16-simplex?
Hint: You could start to define the vertices $q_i^{(n)}$ and midpoints $m^{(n)}$ of a regular $n$-simplex (edge length equalling 1) recursively, e.g. $$ q_0^{(1)} = (0),\\ q_1^{(1)} = (g_1), \quad\text{with}\quad g_1 = 1,\\ m^{(1)} = (h_1), \quad\text{with}\quad h_1 = \frac{1}{2},\\ q_i^{(n+1)} = (q_i^{(n)},0), \quad\text{with}\quad 0 \le i \le n,\\ q_{n+1}^{(n+1)} = (m^{(n)},g_{n+1})\\ m^{(n+1)} = (m^{(n)},h_{n+1}). $$ One then imposes the conditions $(*)$: $$ \|q_{n+1}^{(n+1)} - q_{n}^{(n+1)} \|^2 = 1\\ \|q_{n+1}^{(n+1)} - m^{(n+1)}\|^2 = \|q_{n}^{(n+1)} - m^{(n+1)}\|^2. $$ This gives $$ \|q_{n+1}^{(n+1)} - q_{i}^{(n+1)} \|^2 = 1$$ and then $$ \|q_{j}^{(n+1)} - q_{i}^{(n+1)} \|^2 = 1.$$ It follows from the conditions $(*)$ that we have $(2*)$: $$ (g_{n+1}-h_{n+1})^2 = (1-g_{n+1}^2) + h_{n+1}^2,\\ (g_{n+1}-h_{n+1})^2 = (g_n-h_n)^2 + h_{n+1}^2. $$ From these one can deduce $$ g_{n+2}^2 = \frac{4g_{n+1}^2-1}{4g_{n+1}^2}, $$ which can be used to show by induction that $$g_n^2 = \frac{n+1}{2n}$$ and hence $$h_n^2 = \frac{1}{2n(n+1)}.$$ I hope there is no typo; the details can be filled in by the reader. The definition of $q_i^{(n)}$ and the determinant formula for the simplex volume additionally give, for this simplex, $$ Vol(S_n) = \frac{(n+1)^{1/2}}{n!} \left(\frac{1}{2}\right)^{n/2} $$
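Putting the recursion and the closed forms for $g_n,h_n$ together gives a short program for explicit coordinates (a sketch; it checks that all $\binom{17}{2}$ edges of the 16-simplex have length 1):

```python
import numpy as np
from itertools import combinations

def simplex(n):
    """Vertices of a regular n-simplex with unit edge, built recursively."""
    q = [np.zeros(1), np.ones(1)]          # q_0^{(1)} = (0), q_1^{(1)} = (1)
    m = np.array([0.5])                    # m^{(1)} = (1/2)
    for k in range(1, n):
        g = np.sqrt((k + 2) / (2*(k + 1)))     # g_{k+1}^2 = (k+2)/(2(k+1))
        h = np.sqrt(1 / (2*(k + 1)*(k + 2)))   # h_{k+1}^2 = 1/(2(k+1)(k+2))
        q = [np.append(v, 0.0) for v in q]     # embed old vertices with a 0
        q.append(np.append(m, g))          # q_{k+1}^{(k+1)} = (m^{(k)}, g_{k+1})
        m = np.append(m, h)                # m^{(k+1)}      = (m^{(k)}, h_{k+1})
    return np.array(q)

V = simplex(16)
d = [np.linalg.norm(u - v) for u, v in combinations(V, 2)]
print(V.shape, np.allclose(d, 1.0))        # (17, 16) True
```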
On finding the order of an infinitely small quantity
I think (I'm no expert here at all!) that you want to say that $f$ is infinitely small with respect to some polynomial (given your definition), like $(x-a)^n$. For instance, I think that as you approach $0$, the function $$f(x) = x$$ is infinitely small compared to $$g(x) = 1,$$ but not compared to $$g(x) = x,$$ etc. If $f$ is infinitely small with respect to $$g(x) =(x-a)^n,$$ we say it has order at least $n+1$ (at $a$). If it's also not infinitely small with respect to $g(x) = (x-a)^{n+1}$, we say it has order exactly $n+1$ at $a$. In your case, your Taylor series seems to show that your function is small to order 3.
Combinatorics- monotonic subsequence
HINT: $n=4$:

* * * *
* * * *
* * * *
* * * *
Checking if $f: \Phi_1 \to \Phi_2 \; ; f_1(\theta, v) \mapsto f_2(\theta, \sinh v)$ is an isometry.
Let's try to make the question a bit more precise. Define $$ S_1 = \{ f_1(\theta, v) \, | \, (\theta, v) \in (0,2\pi) \times \mathbb{R} \} \subseteq \mathbb{R}^3_{x,y,z}, \\ S_2 = \{ f_2(\phi, u) \, | \, (\phi, u) \in (0,2\pi) \times \mathbb{R} \} \subseteq \mathbb{R}^3_{a,b,c}. $$ Then $S_1,S_2$ are both parametric surfaces in $\mathbb{R}^3$. In order to make things less confusing, it is convenient to think of each $S_i$ as living in a different copy of $\mathbb{R}^3$. To emphasize this point I give different names to the coordinates of the $\mathbb{R}^3$ in which each $S_i$ lives (this is what my non-standard notation $\mathbb{R}^3_{x,y,z},\mathbb{R}^3_{a,b,c}$ means). The functions $f_i \colon (0,2\pi) \times \mathbb{R} \rightarrow S_i$ given by the formulas above are global parametrizations for the surfaces $S_i$. Let's write $$ f_1(\theta,v) = (x(\theta,v), y(\theta,v),z(\theta,v)). $$ The function $f_1$ is one-to-one and onto $S_1$, so given a point $p = (x_0,y_0,z_0) \in S_1$, we have a unique point $$f_1^{-1}(p) = f_1^{-1}(x_0,y_0,z_0) = (\theta(p), v(p)) = (\theta(x_0,y_0,z_0), v(x_0,y_0,z_0))$$ such that $$ f_1(\theta(p),v(p)) = f_1(\theta(x_0,y_0,z_0),v(x_0,y_0,z_0)) = p, $$ and similarly for $f_2$. Now, let's define a map $F \colon S_1 \rightarrow S_2$ by the formula $$ F(p) = f_2(\theta(p), \sinh v(p)). $$ This is related to your definition, for if we write $p = f_1(\theta,v)$ then $$ F(f_1(\theta,v)) = f_2(\theta, \sinh v). $$ The local representation of the map $F$ between the surfaces is the map $$\tilde{F} = f_2^{-1} \circ F \circ f_1 \colon (0,2\pi) \times \mathbb{R} \rightarrow (0,2\pi) \times \mathbb{R}$$ and is given by $$ \tilde{F}(\theta,v) = (\theta, \sinh v). $$ You are asked whether $F$ is an isometry between $S_1$ and $S_2$. For $F$ to be an isometry, it needs to satisfy two conditions: The map $F$ needs to be one-to-one and onto. For all $p \in S_1$ and $v,w \in T_p(S_1)$, we should have $\left< v, w \right> = \left< dF|_p(v), dF|_p(w) \right>$. That is, $F$ should infinitesimally preserve the length of tangent vectors. Here, $dF \colon T_p(S_1) \rightarrow T_{F(p)} S_2$ is the differential of the map $F$. Now, it turns out that instead of checking this directly from the definitions for $F$, you can deduce everything from the representation $\tilde{F}$ of the map $F$. Namely, $F$ will be an isometry if and only if: The map $\tilde{F}$ is one-to-one and onto. The map $\tilde{F}$ preserves the (local representations of the) first fundamental form; that is, for each $(\theta,v) \in (0,2\pi) \times \mathbb{R}$, we should have $$ E_1(\theta,v) = E_2(\tilde{F}(\theta,v)), \quad F_1(\theta,v) = F_2(\tilde{F}(\theta,v)), \quad G_1(\theta,v) = G_2(\tilde{F}(\theta,v)) $$ where $E_i,F_i,G_i$ are the coefficients of the first fundamental form of $S_i$ with respect to the parametrization $f_i$. How do we calculate the $E_i,F_i,G_i$? 
By the formulas $$ E_1(\theta,v) = \left< \frac{\partial f_1}{\partial \theta}, \frac{\partial f_1}{\partial \theta} \right>, \,\, F_1(\theta, v) = \left< \frac{\partial f_1}{\partial \theta}, \frac{\partial f_1}{\partial v} \right>, \,\, G_1(\theta, v) = \left< \frac{\partial f_1}{\partial v}, \frac{\partial f_1}{\partial v} \right> $$ and similarly $$ E_2(\phi,u) = \left< \frac{\partial f_2}{\partial \phi}, \frac{\partial f_2}{\partial \phi} \right>, \,\, F_2(\phi, u) = \left< \frac{\partial f_2}{\partial \phi}, \frac{\partial f_2}{\partial u} \right>, \,\, G_2(\phi, u) = \left< \frac{\partial f_2}{\partial u}, \frac{\partial f_2}{\partial u} \right>. $$ For example, $$ E_1(\theta,v) = \left< \frac{\partial f_1}{\partial \theta}, \frac{\partial f_1}{\partial \theta} \right> = \| \left( -\sin \theta \cosh v, \cos \theta \cosh v, 0 \right) \|^2 = \cosh^2(v), \\ E_2(\phi, u) = \left< \frac{\partial f_2}{\partial \phi}, \frac{\partial f_2}{\partial \phi} \right> = \| \left( -u\sin \phi , u\cos \phi , 1 \right) \|^2 = 1 + u^2$$ and we indeed see that $$\cosh^2(v) = E_1(\theta,v) = E_2(\tilde{F}(\theta,v)) = E_2(\theta, \sinh v) = 1 + \sinh^2 v. $$ I'll leave the rest of the calculations for you.
General formula for nth element of the sequence 1, 1, 0, 1, 1, 0, 1, 1, 0, ...
How about $a_n=\dfrac43\sin^2\left(\dfrac{n\pi}3\right)$?
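A quick numerical check (a throwaway sketch):

```python
import math
print([round(4/3 * math.sin(n*math.pi/3)**2, 9) for n in range(1, 10)])
# -> [1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 0.0]
```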
How to find sum of $\sum_{n=1}^{\infty}(-1)^n \frac{1}{n^2}$
It's well known that $$\sum_{n=1}^\infty \frac{1}{n^2}=\frac{\pi^2}{6}$$ We have that: $$\sum_{n=1}^\infty\frac{(-1)^n}{n^2}=\sum_{n=1}^\infty\frac{1}{(2n)^2}-\sum_{n=0}^\infty\frac{1}{(2n+1)^2}$$ and that $$\sum_{n=1}^\infty\frac{1}{n^2}=\sum_{n=1}^\infty\frac{1}{(2n)^2}+\sum_{n=0}^\infty\frac{1}{(2n+1)^2}$$ See if you can use these to obtain your result
Logical equivalence implication between Kleene and Classical logic
I assume you are talking about Kleene's three-valued logic of indeterminacy. If so, the equivalence you are trying to prove does not hold, because classical logic proves the law of excluded middle $\phi \lor \lnot \phi$, but Kleene's logic does not: if $U$ is the undefined truth value, $U \lor \lnot U$ is $U$, not $\top$. [This needs to be combined with the trick in David C. Ullrich's answer as discussed below.]
Prove that $\|(D-L_1-U_1)^{-1} (L_2+U_2)\|_{\infty} < 1$
I suppose $A$ is strictly diagonally dominant. The problem boils down to proving this: let $A=D-F-G$ be strictly diagonally dominant, where $D$ is diagonal and $F,G$ are two entrywise nonnegative matrices with zero diagonals. Then $\|(D-F)^{-1}G\|_\infty<1$. Suppose on the contrary that $y=(D-F)^{-1}Gx$ has norm $\ge1$ for some $x$ of supremum norm $1$. Without loss of generality, let $y_1$ (with $|y_1|=\|y\|_\infty\ge1$) be the largest-sized element among all entries of $y$. Then in the first row of $Dy-Fy-Gx=0$, you may use the triangle inequality $|a-b|\ge|a|-|b|$ and the strict diagonal dominance of $A$ to obtain a contradiction: $$ 0=\left|\,d_{11}y_1-\sum_{j=2}^n(f_{1j}y_j+g_{1j}x_j)\,\right| \ge|y_1|\left(|d_{11}|-\sum_{j=2}^n(f_{1j}+g_{1j})\right) >0. $$
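A numerical illustration of the lemma; the size, seed, and the particular split of the off-diagonal part into $F,G$ are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
B = np.abs(rng.standard_normal((n, n)))
F = np.tril(B, -1)                      # entrywise nonnegative, zero diagonal
G = np.triu(B, 1)
D = np.diag((F + G).sum(axis=1) + 0.1)  # D - F - G strictly diagonally dominant

T = np.linalg.solve(D - F, G)           # (D - F)^{-1} G
print(np.abs(T).sum(axis=1).max())      # infinity norm: strictly less than 1
```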
Number of real embeddings $K\to\overline{\mathbb Q}$
We note that an embedding is a non-trivial homomorphism (therefore injective, as the domain is a field) $$\varphi:\Bbb Q[x]/((x^2-1)^2-2)\to\Bbb C.$$ Here we call it "real" if the image is contained in $\Bbb R$. However, if $\beta$ denotes a choice of root of $f(x)=(x^2-1)^2-2$, then as $$\Bbb Q(\beta)=\{a+b\beta+c\beta^2+d\beta^3 : a,b,c,d\in\Bbb Q\}$$ we see that this is a set of real numbers iff $\beta\in\Bbb R$, so that the number of ways to map our field into $\Bbb R$, i.e. the number of real embeddings, is exactly the number of real roots of $f(x)$. Factoring, we see $$f(x)=(x^2-1-\sqrt{2})(x^2-1+\sqrt{2}).$$ Clearly the first factor has two real roots, and they are exactly as you found them, $\pm\sqrt{1+\sqrt{2}}$, and the second has none--its roots are the others you found, $\pm i\sqrt{\sqrt{2}-1}$--so the total number of real roots is $2$, and therefore the number of real embeddings is $2$.
Prove the approximate identity from the unitization
The norm in $A^+$ is the same as in $A$. So if your sequence $\{x_n\}$ converges to $1$, this means that $1\in A$, since $A$ is complete (as it sits as a closed ideal in $A^+$). To think about this stuff it is often useful to think of $C_0(\mathbb R)$ as your prototype of non-unital C$^*$-algebra.