Semivariance of a normal distribution
Let's just deal with the standard normal distribution. Then, as I understand it, you are disregarding negative data values and taking the variance of only positive values. Intuitively, I anticipate the mean and variance of such values to be the same as for the 'half-normal' distribution: $\sqrt{2/\pi}$ and $1 - 2/\pi,$ respectively. (Given what we know about the related integrals of ordinary standard normal distributions, I believe these values are not difficult to prove.) A simulation in R with a million standard normal samples, each of size 100, gives the following results, verifying my guess (to 2 or 3 decimal places):

m = 10^6; sv = sm = numeric(m)
for (i in 1:m) {
  x = rnorm(100)
  x.pos = x[x >= 0]   # truncate to non-negative values only
  sv[i] = var(x.pos)
  sm[i] = mean(x.pos)
}
mean(sm); mean(sv)
## 0.7977938   # aprx mean of the truncated values
## 0.3633463   # aprx expected semivariance
sqrt(2/pi); 1 - 2/pi
## 0.7978846
## 0.3633802

Here is a histogram of the simulated distribution of semivariances, with a vertical red line at its mean.
Find the differential equation of family of circles each of which touch the lines $ y=x $ and $y=-x$
To derive the circle equation, begin from your general circle form: $$ x^2+y^2 + 2 f x + 2 g y + h = 0 \tag {1} $$ Put $y= x$ and simplify; you get $$ 2 x^2 + (2 f+2 g)x +h =0 , \quad x^2+ (f+ g) x + h/2 =0 \tag{2}$$ The discriminant should vanish when we have tangency (a secant, by contrast, would cut the circle at two points): $$ (f+g)^2 = 4 \cdot h/2 = 2 h \tag{3} $$ Similarly for $ y= -x $ we have $$ (f-g)^2 = 4 \cdot h/2 = 2 h \tag{4} $$ Subtracting (3) and (4) gives $fg=0$. Taking the circles centered on the $y$-axis, $$ f=0,\, g = -\sqrt {2 h} = -\sqrt {2}\, u, \quad h = u^2 \tag {5} $$ Plug back into (1) and re-write $$ x^2 + y^2 - 2 \sqrt 2\, u\, y + u^2 =0 \tag{6} $$ $$ x^2 + ( y - \sqrt 2 \,u)^2 = u^2 \tag{7}$$ Similarly for the other set of circles, centered on the $x$-axis. These two sets share the same tangency loci $ y= \pm x$, also called envelopes, with full symmetry about both the $x$- and $y$-axes, so a change of sign before $u$ or $v$ is also admissible. $$x^2+(y-u \sqrt2)^2=u^2,\qquad (x-v \sqrt2)^2+y^2=v^2 \tag{8} $$ Differentiate and eliminate the constant ($u$ or $v$) in either case. The latter part I leave to you; others also indicated it in their answers. EDIT1: The radius of the circle is $u$: $$x^2 +(y-\sqrt 2 u)^2= u^2 $$ Differentiate once and cancel 2 $$ x+y'(y-\sqrt 2 u) =0 $$ Differentiate once again to obtain a second order DE $$ 1 +(y-\sqrt 2 u) y^{''}+y'^2 =0 $$ $$ y +\frac{1+y'^2}{y''}= \sqrt2 u $$ To remove the RHS, differentiate a third time and simplify $$ y' + \frac{y''(2 y'y'') -(1+y'^2) y'''}{y''^2} =0 $$ $$ y'''= \frac{ 3 y'y''^2}{1+y'^2}$$
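For anyone who wants a numerical sanity check of the final ODE (not part of the derivation): the sketch below takes the lower arc $y(x)=\sqrt2\,u-\sqrt{u^2-x^2}$ of one circle from the family, computes its derivatives in closed form, and confirms $y'''=3y'y''^2/(1+y'^2)$ at sample points. The helper name `derivs` is mine.

```python
from math import sqrt, isclose

def derivs(x, u):
    # lower arc of x^2 + (y - sqrt(2) u)^2 = u^2:  y = sqrt(2) u - sqrt(u^2 - x^2)
    r = sqrt(u * u - x * x)
    y1 = x / r                    # y'
    y2 = u * u / r ** 3           # y''
    y3 = 3 * x * u * u / r ** 5   # y'''
    return y1, y2, y3

for u in (1.0, 2.5):
    for x in (-0.3 * u, 0.1 * u, 0.6 * u):
        y1, y2, y3 = derivs(x, u)
        # the claimed third-order ODE
        assert isclose(y3, 3 * y1 * y2 ** 2 / (1 + y1 ** 2), rel_tol=1e-9)
```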
How can I show that arc length $L(\gamma)$ of a curve is unchanged after reparametrization?
Let $\gamma = \gamma(t)$, $t \in [0,T]$. Then $$L(\gamma) = \int_0^T \left|\frac{d\gamma}{dt}\right|dt$$ If $\gamma$ is reparametrized so $\gamma = \gamma(s(t))$ then $\displaystyle\gamma' = \frac{d\gamma}{ds}\frac{ds}{dt}$ by the chain rule, so $$L(\gamma) = \int_0^T \left|\frac{d\gamma}{ds}\frac{ds}{dt}\right|dt$$ Can you show that $L(\gamma)$ is the same computed either way? (Hint: the substitution rule, assuming $s(t)$ is monotonic.) Hopefully this helps; this is my first post, so go easy on me! Edit: Be careful with the limits of integration; they will depend on the domain of the curve.
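If you want to see the invariance numerically, here is a small sketch (names are mine, not from the post): it approximates arc length by chord sums for a quarter circle under two different parametrizations.

```python
from math import cos, sin, pi, hypot

def polyline_length(g, a, b, n=100000):
    # approximate arc length by the total length of a fine inscribed polyline
    pts = [g(a + (b - a) * i / n) for i in range(n + 1)]
    return sum(hypot(p[0] - q[0], p[1] - q[1]) for p, q in zip(pts, pts[1:]))

quarter = lambda t: (cos(t), sin(t))           # t in [0, pi/2]
reparam = lambda u: (cos(u * u), sin(u * u))   # same curve with t = u^2

L1 = polyline_length(quarter, 0.0, pi / 2)
L2 = polyline_length(reparam, 0.0, (pi / 2) ** 0.5)
assert abs(L1 - pi / 2) < 1e-6 and abs(L1 - L2) < 1e-6
```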
Brownian motion and second derivatives.
Let's do this slowly. We start with $$ \frac{\mathbb{E}^x[f(B_t)]-f(x)}{t}=\mathbb{E}^x\left[\frac{f(B_t)-f(x)}{t}\right] =\int_{\mathbb{R}}\frac{f(x+\sqrt{t}\cdot y)-f(x)}{t}\frac{e^{-y^2/2}\,\mathrm{d}y}{\sqrt{2\pi}} $$ and we expand $f(x+\sqrt{t}\cdot y)$ (Taylor with Lagrange remainder, $\xi\in(0,1)$) to get $$ \dots=\int_{\mathbb{R}}\frac{f'(x)\sqrt{t}\cdot y}{t}\frac{e^{-y^2/2}\,\mathrm{d}y}{\sqrt{2\pi}} +\int_{\mathbb{R}}\frac{\frac12f''(x+\xi\sqrt{t}\cdot y)ty^2}{t}\frac{e^{-y^2/2}\,\mathrm{d}y}{\sqrt{2\pi}}. $$ The first integrand is an odd function of $y$, so its integral over $\mathbb{R}$ is $0$, and we are left with the second integral $$ \frac{\mathbb{E}^x[f(B_t)]-f(x)}{t}=\int_{\mathbb{R}}\frac12f''(x+\xi\sqrt{t}\cdot y)\frac{y^2e^{-y^2/2}\,\mathrm{d}y}{\sqrt{2\pi}}. $$ Now comes the DCT part. Since $f\in C^2_c(\mathbb{R})$, we have $\sup\lvert f''\rvert=M<\infty$, and so the integrand on the RHS is dominated by the integrable function $\frac12 M y^2 e^{-y^2/2}/\sqrt{2\pi}$. Hence we have $$ \lim_{t\to 0}\int_{\mathbb{R}}\frac12f''(x+\xi\sqrt{t}\cdot y)\frac{y^2e^{-y^2/2}\,\mathrm{d}y}{\sqrt{2\pi}} =\int_{\mathbb{R}}\lim_{t\to 0}\frac12f''(x+\xi\sqrt{t}\cdot y)\frac{y^2e^{-y^2/2}\,\mathrm{d}y}{\sqrt{2\pi}} $$ i.e., $$ \lim_{t\to 0}\frac{\mathbb{E}^x[f(B_t)]-f(x)}{t} =\int_{\mathbb{R}}\frac12f''(x)\frac{y^2e^{-y^2/2}\,\mathrm{d}y}{\sqrt{2\pi}}=\frac12f''(x), $$ since $\int_{\mathbb{R}}y^2\,e^{-y^2/2}\,\mathrm{d}y/\sqrt{2\pi}=1$.
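A quick Monte Carlo illustration of the limit (my own sketch, not part of the proof; here $f=\sin$ is merely bounded rather than compactly supported, which is still enough for domination):

```python
import math, random

random.seed(0)
f = math.sin                  # test function with f''(x) = -sin(x)
x, t, N = 1.0, 0.01, 400000

# sample E^x[f(B_t)] with B_t = x + sqrt(t) Z, Z standard normal
mean_f = sum(f(x + math.sqrt(t) * random.gauss(0.0, 1.0)) for _ in range(N)) / N
estimate = (mean_f - f(x)) / t

# should be close to f''(x)/2 = -sin(1)/2
assert abs(estimate - (-math.sin(1.0) / 2)) < 0.1
```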
Legendre polynomials and Rodrigues' formula
$$\frac{d^n}{dx^n}[(1-x^2)f_n''+2(n-1)xf_n'+2nf_n]=0 $$ $$\frac{d^n}{dx^n}[(1-x^2)f_n'']+2(n-1)\frac{d^n}{dx^n}[xf_n']+2nf_n^{(n)}=0 \tag{1}$$ We carry out the $n$-fold differentiation by use of Leibniz's formula, $$\frac{d^n}{dx^n}A(x)B(x)=\sum^n_{k=0}\frac{n!}{k!(n-k)!}\frac{d^kA}{dx^k}\frac{d^{n-k}B}{dx^{n-k}} \tag{2}$$ Applying Leibniz's formula to each term in (1) separately: $$\frac{d^n}{dx^n}[(1-x^2)f_n'']=(1-x^2)f_n^{(n+2)}+n(1-x^2)'f_n^{(n+1)}+\frac{n(n-1)}{2}(1-x^2)''f_n^{(n)} $$ $$\frac{d^n}{dx^n}[(1-x^2)f_n'']=(1-x^2)f_n^{(n+2)}-2nxf_n^{(n+1)}-n(n-1)f_n^{(n)} \tag{3}$$ $$\frac{d^n}{dx^n}[xf_n']=xf_n^{(n+1)}+n(x)'f_n^{(n)} $$ $$\frac{d^n}{dx^n}[xf_n']=xf_n^{(n+1)}+nf_n^{(n)} \tag{4}$$ Substituting the results (3) and (4) into (1), you get the desired result $$(1-x^2)f_n^{(n+2)}-2xf_n^{(n+1)}+n(n+1)f_n^{(n)}=0$$
Using diagonal argument to prove that H(x)=μyT(x,x,y) has no total computable extension
This is very easy, because you already have your diagonal function. Hence let $$\delta:x\mapsto BIG(x)+1$$ If $BIG$ is total and computable, then so is $\delta$. Let $y$ be an index of $\delta$. Then $$\delta(y)=BIG(y)+1=\{y\}(y)+1=\delta(y)+1$$ Do you see the contradiction?
Find all functions such that $0<|f(x)-f(y)|<2|x-y|$
$$0<|f(n+1)-f(n)|<2\implies |f(n+1)-f(n)|=1\;(\because f:\mathbb N^{+}\to\mathbb N^{+})\\|f(n+1)-f(n)|=1\implies \color{blue}{f(n+1)=f(n)\pm 1}$$ Claim: $f(n+1)=f(n)+1\implies f(n+2)=f(n+1)+1$. Proof: $$0<|f(n+2)-f(n)|<4\implies |f(n+2)-f(n)|=1,2,3\;(\because f:\mathbb N^{+}\to\mathbb N^{+})\\f(n+2)=\begin{cases}f(n)\pm1=f(n+1)-1\pm1&\color{blue}{\text{contradiction}}\\ f(n)\pm2=f(n+1)-1\pm2=\color{blue}{f(n+1)-1+2}\\ f(n)\pm3=f(n+1)-1\pm3&\color{blue}{\text{contradiction}}\end{cases}$$ So the only possible choice is $f(n+2)=f(n)+2=f(n+1)+1$. Claim: $f(n+1)=f(n)-1\implies f(n+2)=f(n+1)-1$. Proof: similar. Conclusion: $f(n)=f(1)+n-1$ or $f(n)=f(1)-(n-1)$. The latter doesn't work, as $f:\mathbb N^{+}\to\mathbb N^{+}$, as pointed out by almagest.
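A brute-force check of the conclusion (my sketch): with $f(n)=f(1)+n-1$, taking for instance $f(1)=4$, the constraint holds for all pairs in a range.

```python
f = lambda n: 4 + (n - 1)   # f(1) = 4, i.e. f(n) = f(1) + n - 1
for x in range(1, 60):
    for y in range(1, 60):
        if x != y:
            # |f(x) - f(y)| = |x - y|, so the strict double inequality holds
            assert 0 < abs(f(x) - f(y)) < 2 * abs(x - y)
```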
On the intersection of the closure of direct images of nested sets
Note that 2. implies 4. Also note that if the spaces $E$, $F$ are discrete, then you have 3. and 5. for free, and your question reduces to whether $T(\bigcap_{\alpha < \beta} B_\alpha) = \bigcap_{\alpha < \beta} T(B_\alpha)$. So try to solve this reduced situation (if you find a counterexample here, you will be done; if you prove the equality here, then you will have a partial answer and you can try to generalize your proof). The point is that looking at a simplified special situation is a good idea in general when you have no idea how to prove a general theorem.
Summation Simplification Confusion
You are not quite right. You get $\frac 12\frac{(\frac n2 -1 )\frac n2}2 - \frac n2 = \frac 12( \frac {n^2}8 - \frac n4) - \frac n2$
Calculating transmission and reception delays
Following from my comment, writing the problem in matrix form $$ \begin{pmatrix} 1 & & & & 1 & \\ 1 & & & & & 1 \\ & 1 & & 1 & & \\ & 1 & & & & 1 \\ & & 1 & 1 & & \\ & & 1 & & 1 & \end{pmatrix} \begin{pmatrix} TD_1 \\ TD_2 \\ TD_3 \\ RD_1 \\ RD_2 \\ RD_3 \end{pmatrix} = \begin{pmatrix} T_{1,2}\\ T_{1,3}\\ T_{2,1}\\ T_{2,3}\\ T_{3,1}\\ T_{3,2} \end{pmatrix}\,, $$ we see that the matrix is not invertible since the vector $(1,1,1,-1,-1,-1)$ is in its kernel.
Calculate $\sum_{A,B \subset X} |A \cup B|$ for $|X|=n$
A different answer (see my comment for how you can adapt your solution): each element $x\in X$ is counted $2^{n-1}2^n+2^n2^{n-1}-2^{n-1}2^{n-1}$ times (the number of pairs with $x\in A$, plus those with $x\in B$, minus those with $x\in A\cap B$). The answer is then $n\, 2^{2(n-1)}(2+2-1)= 3 n 4^{n-1}$ (your formula simplifies to $(3^n-1)/2$, which is different).
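The count can be confirmed by brute force for small $n$ (a quick sketch; the function names are mine):

```python
from itertools import chain, combinations

def powerset(xs):
    # all subsets of xs, as sets
    return [set(s) for s in chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

def total_union_size(n):
    # sum of |A ∪ B| over all ordered pairs of subsets of an n-element set
    subs = powerset(range(n))
    return sum(len(A | B) for A in subs for B in subs)

for n in range(1, 5):
    assert total_union_size(n) == 3 * n * 4 ** (n - 1)
```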
Proving that the pullback map commutes with the exterior derivative
You're missing only one item. Why is $d(\phi^*(dx^{\mu_{1}}\wedge\cdots\wedge dx^{\mu_{r}}))=0$?
Identity of $\frac{n!}{x(x+1)(x+2)...(x+n)}$
A proof by induction is possible, using the identity $$\binom{n}{k} + \binom{n}{k+1} = \binom{n+1}{k+1}.$$ To this end, let $$P_n(x) = \sum_{k=0}^n \binom{n}{k} \frac{(-1)^k}{x+k}.$$ Then $$\begin{align*} P_{n+1}(x) &= \sum_{k=0}^{n+1} \left(\binom{n}{k-1} + \binom{n}{k}\right) \frac{(-1)^k}{x+k} \\ &= \sum_{m=0}^n \binom{n}{m} \frac{(-1)^{m+1}}{x+1+m} + \sum_{m=0}^n \binom{n}{m} \frac{(-1)^m}{x+m} \\ &= -P_n (x+1) + P_n (x). \end{align*}$$ Now let $Q_n(x) = \frac{n!}{x(x+1)\ldots(x+n)}$. Then if $P_n(x) = Q_n(x)$ for some positive integer $n$, we have \begin{align} P_{n+1}(x) &= -Q_n(x+1) + Q_n(x) \\ &= -\frac{n!}{(x+1)(x+2)\ldots(x+n+1)} + \frac{n!}{x(x+1)\ldots(x+n)} \\ &= \frac{n!}{(x+1)(x+2)\ldots(x+n)} \left( \frac{1}{x} - \frac{1}{x+n+1} \right) \\ &= \frac{n! (n+1)}{x(x+1)\ldots(x+n+1)} \\ &= Q_{n+1}(x), \end{align} completing the induction step.
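The identity is easy to spot-check with exact rational arithmetic (my sketch; function names are mine):

```python
from fractions import Fraction
from math import comb, factorial

def P(n, x):
    # sum_{k=0}^n C(n,k) (-1)^k / (x+k)
    return sum(Fraction(comb(n, k) * (-1) ** k) / (x + k) for k in range(n + 1))

def Q(n, x):
    # n! / (x (x+1) ... (x+n))
    denom = Fraction(1)
    for j in range(n + 1):
        denom *= x + j
    return Fraction(factorial(n)) / denom

for n in range(7):
    for x in (Fraction(1, 3), Fraction(5, 2), Fraction(7)):
        assert P(n, x) == Q(n, x)
```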
Solving the two ODEs $\frac {dv}{dt}+\frac va=F$ and $c-\frac 12 \frac {du}{dt}-\frac ua=0$
The first one is $$\dot{v}+\frac{1}{a}v=F$$ We find the integrating factor, which is $\exp\left(\displaystyle \int\dfrac{1}{a}\text{d}t\right)=\exp\left(\dfrac{t}{a}\right)$, i.e. we can write $$\exp\left(\frac{t}{a}\right)\dot{v}+\exp\left(\frac{t}{a}\right)\frac{1}{a}v=F\exp\left(\frac{t}{a}\right)\iff \frac{\text{d}}{\text{d}t}\left(\exp\left(\frac{t}{a}\right)v\right)=F\exp\left(\frac{t}{a}\right)$$ Integrating both sides we get $$\exp\left(\frac{t}{a}\right)v=aF\exp\left(\frac{t}{a}\right)+C\iff v=aF+C\exp\left(-\frac{t}{a}\right)$$ Using the initial condition $v(0)=0$, we find that $C=-aF$, and all in all $$v(t)=aF\left(1-\exp\left(-\frac{t}{a}\right)\right)$$ The second equation can be solved using the same method. Edit: as suggested in the comments, the equation is separable, which suggests a simpler solution.
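A quick numeric check of the solution (the constants $a=2$, $F=3$ are my own choice): the closed form satisfies $v'+v/a=F$ and $v(0)=0$.

```python
from math import exp

a, F = 2.0, 3.0
v = lambda t: a * F * (1.0 - exp(-t / a))

h = 1e-6
for t in (0.0, 0.5, 1.0, 4.0):
    dv = (v(t + h) - v(t - h)) / (2 * h)   # central-difference derivative
    assert abs(dv + v(t) / a - F) < 1e-5   # v' + v/a = F
assert v(0.0) == 0.0
```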
On the existence of a certain real measure
If $\mu$ is supposed to be positive, then $\langle f,g\rangle=\int fgd\mu$ defines a pre-inner product on the real vector space of polynomial functions on $[0,1]$. Gramian matrices with respect to this inner product must be positive semidefinite. For example, $\begin{bmatrix} \langle x,x\rangle & \langle x,x^2\rangle \\ \langle x^2,x\rangle & \langle x^2,x^2\rangle \end{bmatrix}$ should be positive semidefinite.
Solve by using substitution method $T(n) = T(n-1) + 2T(n-2) + 3$ given $T(0)=3$ and $T(1)=5$
Suppose we look for a constant solution $t_n = a$. Then $a$ must satisfy $a = a+2a + 3$, which gives $a = -3/2$. Make a change of variable $$a_n = t_n + 3/2, \qquad t_n = a_n - 3/2.$$ Then $a_n$ satisfies the recurrence equation $$a_n= a_{n-1}+ 2a_{n-2}, \qquad a_0 = 9/2, \ a_1=13/2.$$ Now look for solutions $$a_n = \lambda^n \text{ where }\lambda^2-\lambda - 2 = 0\to \lambda = 2, -1$$ The solution is $$a_n = c2^n + d(-1)^n, \qquad c+d = 9/2, \ 2c-d = 13/2$$ which gives you $$c = 11/3, \quad d = 5/6.$$ Finally $$t_n = \frac{11}3\, 2^n +\frac 56 \, (-1)^n -\frac32. $$
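Checking the closed form against the recurrence with exact arithmetic (a small sketch):

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # the original recurrence T(n) = T(n-1) + 2 T(n-2) + 3
    if n == 0:
        return 3
    if n == 1:
        return 5
    return T(n - 1) + 2 * T(n - 2) + 3

def closed(n):
    # t_n = (11/3) 2^n + (5/6) (-1)^n - 3/2
    return Fraction(11, 3) * 2 ** n + Fraction(5, 6) * (-1) ** n - Fraction(3, 2)

for n in range(20):
    assert closed(n) == T(n)
```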
Conceptual question about solutions to system of linear equations when $\det(A) = 0$.
You've got two $n$-dimensional spaces, the domain and the codomain. The column space lives in the codomain. The kernel lives in the domain. Every solution of $A\mathbf{v} = \mathbf{b}$ for $\mathbf{b}$ fixed is of the form $\mathbf{v}_0 + \mathbf{u}$ where $\mathbf{v}_0$ is a fixed particular solution and $\mathbf{u}$ is in the kernel of $A$. Hence the number of free parameters, when there is a solution at all, is the dimension of the kernel. So, you're asking why the dimension of the kernel plus the dimension of the column space is $n$. You know the kernel gets smashed down to the origin in the codomain. You also know the whole domain gets smashed down to the column space. Okay, now take the domain and imagine collapsing the kernel. For instance, if the kernel was the $z$-axis, that would result in just the $xy$-plane. In any case, we've now got an injective map from the collapsed domain to the codomain. But really it only hits the column space, so we've got a bijective map from the collapsed domain to the column space. The only way that ever happens is if the dimensions of the collapsed domain and the column space are the same, say $r$, the (column) rank of the matrix. But the dimension of the collapsed domain is $n-k$ where $k$ is the dimension of the kernel. Hence $r = n-k$, so $r+k=n$. This is a standard proof of the rank-nullity theorem, phrased less algebraically than usual.
Arc Length and Differential Forms
Use Pythagoras. The arc length between the points $A$ and $B$ is approximated by the length of the straight segment $AB$. If $A=(x_1,y_1,z_1)$ and $B=(x_2,y_2,z_2)$ then $AB=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2+(z_2-z_1)^2}$. When $A$ and $B$ are close together then $x_2-x_1\sim (F\circ\gamma_1)'d\theta=F_1d\theta$. Similarly for $y_2-y_1$ and $z_2-z_1$.
Basic questions about pythagorean triples and "n-lets"
Question $1$ appears to follow from the two-variable equivalent expression: $$a^2+b^2=c^2, \ b= 2k, \ a,b,c,k \in \Bbb Z $$ $$\to (p^2-q^2)^2 + (2pq)^2=(p^2+q^2)^2, \qquad p,q\in \Bbb Z $$
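The parametrized identity is trivially machine-checkable (a quick sketch):

```python
for p in range(2, 10):
    for q in range(1, p):
        a, b, c = p * p - q * q, 2 * p * q, p * p + q * q
        # (p^2 - q^2)^2 + (2pq)^2 = (p^2 + q^2)^2, with the even leg b = 2pq
        assert a * a + b * b == c * c and b % 2 == 0
```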
Inscribing a sphere in a pyramid
SHORT ANSWER: It is possible to inscribe a sphere inside a square pyramid whose edge lengths are all equal. If the edge lengths are $1$ then the center of the inscribed sphere is above the center of the square at a distance of $\frac{1}{\sqrt 6+\sqrt 2}$ from that square, and the radius of the sphere is $\frac{1}{\sqrt 6+\sqrt 2}$. If the sphere is to be inside the pyramid, this is the only configuration. If the edge lengths are not equal then a sphere inside the pyramid that is tangent to all five faces may not exist. LONG ANSWER: The problem is equivalent to finding a point that is equidistant from all five faces of the square pyramid. Let's use 3D analytic geometry to see if such a point exists for a general pyramid with a square base. To be particular, let's place the square base $ABCD$ in the first quadrant of the $xy$-plane with corner $A$ at the origin and sides aligned with the axes. Let's say the apex of the pyramid is at point $E(f,g,h)$. Here is a diagram looking at the pyramid from above. We want to find the distance of point $P(x,y,z)$ to each of the side faces of the pyramid, labelled $F_1$ through $F_4$. The distance from $P$ to the square base is obviously $z$, but how do we find the distances to the faces defined by their vertices? Here is one way. Consider face $ABE$. If we take the cross product of two vectors along two sides of that triangular face, $\overrightarrow{AE}\times\overrightarrow{AB}$, we get a vector perpendicular to the face. Taking the right order of those vectors guarantees that the vector points from the face to the interior of the pyramid. Normalize that vector to one $\mathbf{F_1}$ in the same direction but with unit length. Then the distance of point $P$ to face $ABE$ is $$(\mathbf{P}-\mathbf{A})\cdot\mathbf{F_1}$$ which uses the dot product.
For face $F_1$ we use $\overrightarrow{AE}\times\overrightarrow{AB}$ normalized to get the row vector $$\mathbf{F_1}=\left[0,\ \frac{h}{\sqrt{g^2+h^2}},\ -\frac{g}{\sqrt{g^2+h^2}} \right]$$ For face $F_2$ we use $\overrightarrow{AD}\times\overrightarrow{AE}$ normalized to get the row vector $$\mathbf{F_2}=\left[\frac{h}{\sqrt{f^2+h^2}},\ 0,\ -\frac{f}{\sqrt{f^2+h^2}} \right]$$ For face $F_3$ we use $\overrightarrow{DC}\times\overrightarrow{DE}$ normalized to get the row vector $$\mathbf{F_3}=\left[0,\ -\frac{h}{\sqrt{(1-g)^2+h^2}},\ -\frac{1-g}{\sqrt{(1-g)^2+h^2}} \right]$$ For face $F_4$ we use $\overrightarrow{BE}\times\overrightarrow{BC}$ normalized to get the row vector $$\mathbf{F_4}=\left[-\frac{h}{\sqrt{(1-f)^2+h^2}},\ 0,\ -\frac{1-f}{\sqrt{(1-f)^2+h^2}} \right]$$ Since the distance from $P(x,y,z)$ to face $F_1$ must equal $z$, we get the equation $$(\mathbf{P}-\mathbf{A})\cdot\mathbf{F_1}=z$$ This can be written out and put into standard linear form. Doing this for all four faces we get these simultaneous linear equations. $$\begin{array}{rrrr} 0\,x \ + &\frac{h}{\sqrt{g^2+h^2}}\,y \ + &\left(-1-\frac{g}{\sqrt{g^2+h^2}}\right)z= &0 \\ \frac{h}{\sqrt{f^2+h^2}}\,x \ + &0\,y \ + &\left(-1-\frac{f}{\sqrt{f^2+h^2}}\right)z= &0 \\ 0\,x \ + &\frac{-h}{\sqrt{(1-g)^2+h^2}}\,y \ + &\left(-1-\frac{1-g}{\sqrt{(1-g)^2+h^2}}\right)z= &\frac{-h}{\sqrt{(1-g)^2+h^2}} \\ \frac{-h}{\sqrt{(1-f)^2+h^2}}\,x \ + &0\,y \ + &\left(-1-\frac{1-f}{\sqrt{(1-f)^2+h^2}}\right)z= &\frac{-h}{\sqrt{(1-f)^2+h^2}} \\ \end{array}$$ If we use $f=g=\frac 12$, $h=\frac 1{\sqrt 2}$ we get the square pyramid with all edge lengths equal to $1$. Using those values in those four linear equations in three variables does give us a unique solution, namely $$\left(\frac 12,\ \frac 12,\ \frac 1{\sqrt 6+\sqrt 2}\right)$$ That gives the first part of my short answer.
However, if we use the apex point $f=\frac 12$, $g=h=1$ we get four inconsistent equations with no solution. We can find a sphere to be tangent to any three of the side faces as well as the square base, but none that fits all four side faces and the base. That is the last part of my short answer.
Convert polar to rectangular equation
First of all, Desmos Graphing Calculator (online) is a great tool. You can enter the exact expression and it'll graph it for you. Analytically, here's how you can do this (it's actually not that bad). First, multiply both sides by $2\sin(\theta)-\cos(\theta)$ to get $2r\sin(\theta)-r\cos(\theta)=1$. Now substitute $y=r\sin(\theta)$ and $x=r\cos(\theta)$ to get $2y-x=1$.
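A numerical spot check of the conversion (my sketch; the angles are restricted so the denominator $2\sin\theta-\cos\theta$ stays positive):

```python
from math import sin, cos
import random

random.seed(1)
for _ in range(100):
    theta = random.uniform(0.5, 1.4)
    r = 1.0 / (2 * sin(theta) - cos(theta))
    x, y = r * cos(theta), r * sin(theta)
    # every point of the polar curve satisfies the line 2y - x = 1
    assert abs(2 * y - x - 1) < 1e-9
```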
Are all Nash equilibrium pure strategies also Nash equilibrium mixed strategies.
It's like tonic, which isn't considered a mixed drink ($0$ parts gin and $1$ part tonic). That's degenerate for you: "In mathematics [as in mixology], a degenerate case is a limiting case in which a class of object changes its nature so as to belong to another, usually simpler, class."
Expected number of targets hit with paired shooters
The probability that a given target is hit is $$1-\frac{\binom{2n-2}{k}}{\binom{2n}{k}}=1-\frac{(k-2n)(k-2n+1)}{2n(2n-1)}=\frac{k^2-4kn+k}{2n-4n^2}$$ The expected number of hit targets is $n$ times this probability. So, $$E(X)=\frac{k^2-4kn+k}{2-4n}$$ It would be much more difficult to calculate the probabilities that exactly $m$ targets are hit.
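For small $n$ the formula can be verified exactly by enumerating all choices of $k$ shooters (a sketch; I am assuming shooters $2i$ and $2i+1$ form the pair aiming at target $i$):

```python
from itertools import combinations
from fractions import Fraction

def expected_hits(n, k):
    # average number of distinct targets over all ways to pick k of the 2n shooters
    total, count = Fraction(0), 0
    for chosen in combinations(range(2 * n), k):
        total += len({s // 2 for s in chosen})   # shooter s aims at target s // 2
        count += 1
    return total / count

for n in range(1, 4):
    for k in range(2 * n + 1):
        assert expected_hits(n, k) == Fraction(k * k - 4 * k * n + k, 2 - 4 * n)
```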
Area between curves $y=x^3$ and $y=x$
The area between two curves is always positive. See the graph below. The green and orange regions together form the area you are finding. It is always going to be positive because it is the area of an actual region. When you subtract the two curves, you are finding the area between them, regardless of their position relative to the $x$-axis. Also, since the functions are odd, you could have just multiplied one of your integrals by $2$ to get the answer :P
Inequality involving negative powers of positive definite matrices
You can't, because it isn't true. Counterexample: $(A+B)^{-2}<A^{-2}$ is equivalent to $(A+B)^2-A^2>0$, but \begin{aligned} &\left[\pmatrix{1&5\\ 5&26}+\pmatrix{2&10\\ 10&51}\right]^2-\pmatrix{1&5\\ 5&26}^2\\ =&\pmatrix{3&15\\ 15&77}^2-\pmatrix{1&5\\ 5&26}^2\\ =&\pmatrix{234&1200\\ 1200&6154}-\pmatrix{26&135\\ 135&701}\\ =&\pmatrix{208&1065\\ 1065&5453} \end{aligned} is not positive definite because its determinant is $-1$.
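The counterexample is easy to verify with a few lines of $2\times2$ integer arithmetic (my sketch):

```python
def mul(P, Q):
    # 2x2 matrix product
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

det = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 5], [5, 26]]
B = [[2, 10], [10, 51]]
S = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

SS, AA = mul(S, S), mul(A, A)
D = [[SS[i][j] - AA[i][j] for j in range(2)] for i in range(2)]

assert det(A) == 1 and det(B) == 2      # A and B are positive definite
assert D == [[208, 1065], [1065, 5453]]
assert det(D) == -1                     # so (A+B)^2 - A^2 has a negative eigenvalue
```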
Proving equivalence relation
$\sim$ is reflexive: For all $(a,b) \in Q$ we have $ab = ba$, that is, $(a,b) \sim (a,b)$. $\sim$ is symmetric: For all $(a,b),(c,d) \in Q$ such that $(a,b) \sim (c,d)$ we have $ad=bc$ and then $cb = da$, that is, $(c,d) \sim (a,b)$. $\sim$ is transitive: For all $(a,b),(c,d),(e,f) \in Q$ such that $(a,b) \sim (c,d)$ and $(c,d) \sim (e,f)$ we have $ad=bc$ and $cf = de$. Thus, $$(af)\color{red}{d} = (ad)f = (bc)f = b(cf) = b(de) = (be)\color{red}{d}$$ and since $d\neq0$, it follows that $af = be$, that is, $(a,b) \sim (e,f)$.
Prove: There is a $g \in G$ such that $\forall$ $x \in X: g \circ x \neq x$
Since $G$ acts transitively on $X$, we have $|X/G|=1$. Now suppose there is no such $g \in G$; then $|X^g| \geq 1$ for every $g\in G$, while $|X^e|=|X|>1$. By Burnside's lemma (http://en.wikipedia.org/wiki/Burnside%27s_lemma), $$|X/G|=\frac{1}{|G|}\sum_{g\in G}|X^g|>1,$$ which is a contradiction. (Here we use $|X|>1$; for $|X|=1$ the claim fails trivially.)
Finding the sum of the infinite series
If your series is $$ S_1=11+11\times\sum_{n=1}^{\infty}\frac{2^n}{11^{n}} $$ then you may use $$ x+x^2+\cdots+x^n+\cdots=\frac{x}{1-x}, \quad |x|<1, \tag1 $$ with $x=\dfrac2{11}$. If your series is $$ S_2=11+2+\sum_{n=1}^{\infty}\frac{4n}{11^{n}} $$ then differentiating $(1)$ termwise and multiplying by $x$ you get $$ x+2x^2+3x^3+\cdots+nx^n+\cdots=\frac{x}{(1-x)^2}, \quad |x|<1, \tag2 $$ giving, with $x=\dfrac1{11}$, $$ \sum_{n=1}^{\infty}\frac{n}{11^{n}}=\frac{11}{100} $$ It is now pretty easy to obtain $S_2$.
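Both series identities are easy to confirm numerically (a quick sketch):

```python
x = 1.0 / 11.0
# (1): x + x^2 + ... = x / (1 - x)
geom = sum(x ** n for n in range(1, 60))
assert abs(geom - x / (1.0 - x)) < 1e-12
# (2): sum n x^n = x / (1 - x)^2, which equals 11/100 at x = 1/11
weighted = sum(n * x ** n for n in range(1, 60))
assert abs(weighted - 11.0 / 100.0) < 1e-12
```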
Using the Gram -schmidt procedure to find the orthonormal set (Linear Algebra)
Suppose that you've found the orthonormal basis $e_1,e_2,e_3$ from the Gram–Schmidt process, so for $1\le i,j\le 3$ we have $\langle e_i,e_j\rangle=0$ for $i\neq j$ and $\langle e_i,e_i\rangle=1$, because the basis is an orthonormal one. So we have: $$\langle w_{3}, {e_2}\rangle=\langle v_{3}-\langle v_{3}, e_{2}\rangle e_{2}-\langle v_{3}, e_{1}\rangle e_{1},e_2\rangle=\langle v_3,e_2\rangle-\langle v_3,e_2\rangle\langle e_2,e_2\rangle-\langle v_3,e_1\rangle\langle e_1,e_2\rangle=\langle v_{3}, e_{2}\rangle-\langle v_{3}, e_{2}\rangle=0$$ and the same procedure proves that $\langle w_{3}, e_{1}\rangle=0$. Now if you don't know how to calculate $e_1,e_2,e_3$ from the Gram–Schmidt process, tell me and I'll write it down for you, but in order to answer part (b) you don't need the values of $e_1,e_2,e_3$; you don't even need to find the value of $w_3$ as you did in your solution. You just have to use the definition of an orthonormal basis: if $e_1,e_2,\dots,e_n$ is an orthonormal basis for an $n$-dimensional vector space, then for $1\le i,j\le n$ with $i\neq j$ we have $\langle e_i,e_j\rangle=0$ and $\langle e_i,e_i\rangle=1$, meaning that the vectors of an orthonormal basis are pairwise perpendicular to each other and have norm $1$.
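If it helps, here is a bare-bones Gram–Schmidt sketch (function names and the sample vectors are mine) showing that the output really is pairwise orthogonal with norm $1$:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    es = []
    for v in vectors:
        w = list(v)
        for e in es:                        # subtract projections onto earlier e's
            c = dot(w, e)
            w = [wi - c * ei for wi, ei in zip(w, e)]
        norm = dot(w, w) ** 0.5
        es.append([wi / norm for wi in w])  # normalize
    return es

e1, e2, e3 = gram_schmidt([[1.0, 1, 0], [1, 0, 1], [0, 1, 1]])
for u in (e1, e2, e3):
    assert abs(dot(u, u) - 1) < 1e-12       # unit norm
assert abs(dot(e1, e2)) < 1e-12 and abs(dot(e1, e3)) < 1e-12 and abs(dot(e2, e3)) < 1e-12
```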
How can we know the answer to 1-1+1-1+1...?
The series does not converge. But if you treated it as a formal geometric series $1+r+r^2+\ldots= \dfrac{1}{1-r}$ and then let $r=-1$, you would get $\dfrac{1}{2}$. Similarly, if $S=1-1+1-1+\cdots$ then you might set up

S =     1 - 1 + 1 - 1 + ...
    S = 1 - 1 + 1 - ...

and adding the two vertically gives $2S = 1$, i.e. $S = \dfrac12$.
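The oscillation of the partial sums, and the fact that their averages (the Cesàro means) tend to $\tfrac12$, can be seen in a couple of lines:

```python
from itertools import accumulate

partial = list(accumulate((-1) ** k for k in range(10000)))  # 1, 0, 1, 0, ...
assert set(partial) == {0, 1}              # the partial sums never settle down
cesaro = sum(partial) / len(partial)       # average of the partial sums
assert abs(cesaro - 0.5) < 1e-3            # the Cesàro mean tends to 1/2
```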
Martingale: show switching of two supermartingales with a stopping time is also a supermartingale
Hint: Observe that because $X^1_N\ge X^2_N$ you have $$ Z_{n+1} \le X^1_{n+1}1_{\{N>n\}}+X^2_{n+1}1_{\{N\le n\}}. $$ Now take conditional expectations on both sides with respect to $\mathcal F_n$, use the fact that $1_{\{N>n\}}$ is $\mathcal F_n$-measurable, and the supermartingale property of $X^1_n$ and of $X^2_n$.
Isomorphism between Homomorphism rings of rings and Homomorphism rings of localized rings
Your putative natural isomorphism isn't well-defined. If I let $\theta$ denote your homomorphism, then observe these two examples: $$ \theta(f/s)(m/s) = f(m)/s $$ $$ \theta(g/1)(n/1) = g(n)/1 $$ Now, if you let $f = sg$ and $m = sn$, the first equation simplifies to $$ \theta(g/1)(n/1) = f(n) = s g(n)$$ What you're missing, I think, is the idea to have division work inversely to multiplication. $\theta(sf/1) = s \theta(f/1)$, so conversely, you want $\theta(f/s) = (1/s) \theta(f/1)$. It would be most natural to let your natural isomorphism have the property that $$ \theta(f/1)(m/1) = f(m)/1$$ that is, $\theta$ just acts as the identity operator whenever it makes sense to think of it that way. Then, you just fill everything else in by multiplicativity: $$ \theta(f/s)(m/s') = f(m)/(ss')$$ Hopefully you can take it from here. (Don't forget to verify that $\theta(f/s)$ is a well-defined homomorphism, and also that $\theta$ is a well-defined homomorphism!)
Let $K$ be compact, if $\{f_n\}$ is point wise bounded and equicontinuous on $K$, then $\{f_n\}$ contains a uniformly convergent subsequence.
Sure, you can do that. To flesh it out, given $\epsilon > 0$, choose a $\delta$ from the assumption of equicontinuity. Take a $\delta$ net and find a subsequence $f_{n_k}$ that converges at each point of your net. Conclude that $\limsup_{j,k \to \infty} |f_{n_j} - f_{n_k}| < \epsilon$. But the subsequence you produced depended on $\epsilon$, and you want a single subsequence that works for all $\epsilon$. So you need a "diagonal subsequence" or Tychonoff type argument now. That will give you a single subsequence $f_{m_k}$ for which $\limsup_{j,k \to \infty} |f_{m_j} - f_{m_k}| \to 0$, i.e. the subsequence is uniformly Cauchy. Now you have to invoke the uniform completeness of the space of continuous functions, and you are done. It didn't really save a lot. You still had to use the same "diagonal subsequence" argument that gets used in proving that you can find a subsequence converging at every point of your countable dense subset, so that wasn't avoided. And here you had to use the uniform completeness theorem to assert the existence of a limiting function. With the dense subset approach, you would avoid that. Once you have a subsequence $f_{n_k}$ converging at every rational, you have a fairly explicit definition of the limiting function $f$: it's the unique continuous function which is given at the rationals $q$ by $f(q) = \lim_{k \to \infty} f_{n_k}(q)$. Note that you also didn't strengthen the result: every compact metric space has a countable dense subset (proof: for each $n$ take the points of a $1/n$ net), so the existence of such a subset isn't an extra assumption.
Show $\exists x \in \Bbb R$ that is **unique**, such that $ \forall a \in A$ and $ \forall b \in B$ $a\leq x\leq b$.
Let $x=\sup A$. Since $a \leq b$ for all $a \in A$ and all $b \in B$, every $b \in B$ is an upper bound for $A$, so $x \leq b$ for all $b \in B$. Of course $a \leq x$ for all $a \in A$ by definition of supremum. If $y$ is another real number with the same properties, then $a\leq y$ for all $a \in A$, so $x \leq y$. Suppose, for contradiction, that $x<y$. Let $r$ be a rational number in $(x,y)$. Then $r \in A$ or $r \in B$. In the first case $x<r$ with $r \in A$ contradicts the definition of $x$. In the second case there is a member of $B$ (namely $r$) less than $y$, which is again a contradiction.
Number theory: Can we reach from $(x_0,y_0)$ to $(x_1,y_1)$ with the following transitions?
The gcd is invariant under operations a, b, c. The gcd may double under operation d. So the transition is possible only if either $\gcd(x_s,y_s)=\gcd(x_d,y_d)$ or $2^n\gcd(x_s,y_s)=\gcd(x_d,y_d)$ for some $n\in\mathbb N$, where we set $\gcd(x,y)=\gcd(|x|,|y|)$. Note that $$\gcd(x,y)\mid ax+by\;\forall\;a,b\in\mathbb Z$$ We can justify our claim that the gcd is invariant under c using Bezout's lemma. If $\gcd(x,y)=d$, then $$\exists a,b\in\mathbb Z: ax+by=d\implies a(x+y)+(b-a)y=d\implies\gcd(x+y,y)\mid d$$ Again $d\mid x+y$ and $d\mid y$, so $d\mid\gcd(x+y,y)$. As both are positive, $d=\gcd(x+y,y)$. For operation d, note that $x/d$ and $y/d$ can't have any common factor other than $1$; that would violate the fact that $d=\gcd(x,y)$. So if $y/d$ is odd, $\gcd(2x,y)=d\gcd(2x/d,y/d)=d$. If $y/d$ is even, $\gcd(2x,y)=d\gcd(2x/d,y/d)=2d$.
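A randomized spot check of the two gcd facts (my sketch; the operation letters follow the question):

```python
from math import gcd
import random

random.seed(0)
for _ in range(2000):
    x, y = random.randint(1, 500), random.randint(1, 500)
    d = gcd(x, y)
    assert gcd(x + y, y) == d             # adding one coordinate to the other preserves the gcd
    assert gcd(abs(x - y), y) == d        # so does subtracting
    assert gcd(2 * x, y) in (d, 2 * d)    # doubling at most doubles it
```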
Why is the function unique?
The comments lead to a proof that $f(m)=m$ for all $m$: By induction on $k$ you find from the given functional equation that $f(m^k)=f(m)^k$ for all $m\in\Bbb N$, $k\in\Bbb N$. For $m&gt;1$, the sequence $m, m^2,m^3,\ldots$ is a subsequence of $1,2,3,\ldots$, hence $$\lim_{k\to\infty}\frac{\log f(m^k)}{\log (m^k)}=\lim_{n\to\infty}\frac{\log f(n)}{\log n}$$ But also $$ \frac{\log f(m^k)}{\log (m^k)}=\frac{\log(f(m)^k)}{\log(m^k)}=\frac{k\log f(m)}{k\log m}=\frac{\log f(m)}{\log m},$$ which does not depend on $k$, hence $\frac{\log f(m)}{\log m}=1$ and so $f(m)=m$. For $m=1$, the above cannot be applied (both because $1,1^2,1^3,\ldots$ is constant instead of a subsequence and because we must not divide by $\log 1=0$). But we get immediately that $f(2)=f(1\cdot 2)=f(1)f(2)$ and hence from $f(2)=2\ne 0$ (by the preceding result), $f(1)=\frac{f(2)}{f(2)}=1$.
Area growth of harmonic functions
This question was asked and answered at MathOverflow. The point of this answer is to provide the link and remove this from the list of unanswered questions. (I made this answer community wiki so that I don't gain reputation when I have not earned it and anyone can freely add details.)
Simple limit, wolframalpha doesn't agree, what's wrong? (Just the sign of the answer that's off)
Others have already pointed out a sign error. One way to avoid such errors is to first simplify the problem by changing variables. Let $\rm\ z = \sqrt{4+x}\ $ so $\rm\ x = z^2 - 4\:.\:$ Then $$\rm \frac{\frac{1}{\sqrt{4+x}}-\frac{1}{2}}{x}\ =\ \frac{\frac{1}z - \frac{1}2}{z^2-4}\ =\ \frac{-(z-2)}{2\:z\:(z^2-4)}\ =\ \frac{-1}{2\:z\:(z+2)}$$ In this form it is very easy to compute the limit as $\rm\ z\to 2\:$.
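Numerically (a quick sketch): the raw quotient and the simplified form agree, and both approach $-\tfrac1{16}$ as $x\to0$ (i.e. $z\to2$).

```python
f = lambda x: (1.0 / (4 + x) ** 0.5 - 0.5) / x
g = lambda z: -1.0 / (2 * z * (z + 2))      # simplified form in z = sqrt(4 + x)

for x in (1e-3, 1e-5, -1e-5):
    z = (4 + x) ** 0.5
    assert abs(f(x) - g(z)) < 1e-6          # the two expressions agree
assert abs(g(2.0) - (-1.0 / 16.0)) < 1e-15  # the limit is -1/16
```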
existence and uniqueness of Initial value problem
No solution: $$ \dot x =\begin{cases}-1&x\ge 0\\1&x<0\end{cases} $$ with $x(0)=0$. In some weakened generalized context, $x\equiv0$ could be called a solution, but since $\dot x=0$ it does not satisfy this ODE in the strong sense. Note that the Peano theorem guarantees local solutions if the right side is continuous, which covers all the "nice" cases. More than one solution: The classical example is $$ \dot x=2\sqrt{|x|} $$ with $x(0)=0$ which has solutions $$ x(t)=\begin{cases}0&0\le t<c\\\pm(t-c)^2&t\ge c\end{cases} $$ for any $c>0$ including $c=\infty$.
Is there a way to express matematically that B is closer to A than C is to A?
If you accept the possibility that the distance could be the same: $|A-B|\le |C-A|$ otherwise, $|A-B|\lt |C-A|$
Finding the volume when a parabola is rotated about the line $y = 4$.
$$\frac{144 \pi}{15}$$ dividing the numerator and denominator by three: $$\frac{48 \pi}{5}$$ In the future, if you don't feel like simplifying, you can just divide the two answers and see if it lines up $$\frac{144 \pi}{15} \approx 30.1593$$ $$\frac{48 \pi}{5} \approx 30.1593$$ So, yes, you're right.
Addition of fractions with different powers of variables
We have, essentially, $$\dfrac 3{4x} - \dfrac{2}{5x^2}$$ Find the common denominator: $$\dfrac {3\cdot 5x - 4\cdot 2}{20x^2} = \dfrac{15x - 8}{20x^2}$$ Note: there may have been a misprint in the problem statement. To obtain an answer of $\dfrac 7{20x}$ would require that we subtract $\dfrac 2{5x}$ from $\dfrac 3{4x}$. But in that case, the principle is the same: find the common denominator and subtract: $$\dfrac 3{4x} - \dfrac 2{5x} = \dfrac{3\cdot 5 - 4\cdot 2}{20x} = \dfrac{7}{20x}$$
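A spot check of the first simplification with exact rationals (my sketch):

```python
from fractions import Fraction

# check 3/(4x) - 2/(5x^2) = (15x - 8)/(20x^2) at a few rational points
for x in (Fraction(7), Fraction(3, 2), Fraction(-5)):
    lhs = Fraction(3) / (4 * x) - Fraction(2) / (5 * x ** 2)
    rhs = (15 * x - 8) / (20 * x ** 2)
    assert lhs == rhs
```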
Is must conditional or biconditional?
It is only conditional; i.e. to use the network, it is necessary to pay the fee or subscribe. However, this may not be sufficient as one might also need to set up some sort of account or whatever the case may be.
Covariant derivative geometric interpretation
I'll say a few words about how I think about covariant derivatives, which is really just expanding on janmarqz's comment (hopefully others will contribute their own viewpoints as well).

For me, the most important geometric idea behind a covariant derivative $\nabla$ is that given a curve $\gamma$ in a manifold $M$, $\nabla$ gives you an isomorphism between the tangent spaces $T_{\gamma(t_1)}M$ and $T_{\gamma(t_2)}M$ for any two points on the curve. Mathematically, this isomorphism $$ P : T_{\gamma(t_1)}M \to T_{\gamma(t_2)}M $$ is the unique isomorphism with the property that for any $v \in T_{\gamma(t_1)}M$, there exists a vector field (which I'll call $v(t)$) along $\gamma$ such that $v(t_1) = v$, $v(t_2) = P(v)$, and $\nabla_{\gamma'(t)} v(t) = 0$ for all $t \in [t_1, t_2]$.

This isomorphism is called "parallel transport"; I like to picture a surface embedded in $\mathbb{R}^3$, such as the 2-sphere, and think of parallel transport along a curve $\gamma$ as "dragging" vectors along that curve. (Important remark: the isomorphism obtained depends on the choice of curve $\gamma$ in general.) Of course, once you have an isomorphism of vector spaces, you get an isomorphism of any of the associated tensor spaces as well. So if $T$ is a $(k,l)$-tensor on $T_{\gamma(t_1)}M$, then we get a $(k,l)$-tensor $PT$ on $T_{\gamma(t_2)}M$.

Now the point is that once you have this "parallel transport" isomorphism, the covariant derivative $\nabla_X \mathcal{T}$ is a literal derivative in the following precise sense: given a vector $X \in T_pM$, let $\gamma$ be any curve with $\gamma'(0) = X$, and let $P_t$ be the "parallel transport along $\gamma$" isomorphism $$ P_t : T_{\gamma(t)}M \to T_{\gamma(0)}M \quad (= T_pM). $$ Then for any tensor field $\mathcal{T}$ on $M$, $$ \nabla_X \mathcal{T} = \frac{d}{dt}\Big|_{t=0} \Big( P_t \big( \mathcal{T}(\gamma(t)) \big) \Big). $$ This is a very precise interpretation of the idea that $\nabla_X \mathcal{T}$ gives you the derivative of $\mathcal{T}$ in the direction of $X$.
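This picture can be made concrete numerically. As a rough sketch (the latitude circle, step count, and frame conventions here are illustrative choices, not part of the discussion above), one can integrate the parallel-transport equation around a circle of colatitude $\theta_0$ on the round 2-sphere; in the orthonormal frame $(e_\theta,\, e_\phi/\sin\theta)$ the transported vector simply rotates with angular velocity $\cos\theta_0$, so after one full loop it comes back rotated by the holonomy angle $2\pi\cos\theta_0$:

```python
import math

def transport_around_latitude(theta0, n_steps=200_000):
    """Parallel-transport the frame vector (1, 0) once around the latitude
    circle of colatitude theta0 on the unit sphere.

    In the orthonormal frame (e_theta, e_phi/sin(theta)) the transport
    equations along the latitude reduce to a plane rotation:
        u1' =  cos(theta0) * u2
        u2' = -cos(theta0) * u1
    """
    u1, u2 = 1.0, 0.0
    dt = 2 * math.pi / n_steps
    c = math.cos(theta0)
    for _ in range(n_steps):
        u1, u2 = u1 + c * u2 * dt, u2 - c * u1 * dt  # forward Euler step
    return u1, u2

# At colatitude pi/3 the holonomy rotation angle is 2*pi*cos(pi/3) = pi,
# so the starting vector (1, 0) should return as approximately (-1, 0).
u1, u2 = transport_around_latitude(math.pi / 3)
```

The dependence on the curve is visible directly: a different latitude gives a different rotation of the returned vector.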
Nested Tetration properties
No, this does not simplify. It is notable that the specific case of $k=n$ is given by the Steinhaus–Moser notation by definition, but it does not give any other nice forms. For the general case, though, we have the bounds: $${}^{a+b-1}n\le{}^a({}^bn)\le{}^{a+b}n$$ as an extended version of Knuth's arrow theorem, and in your specific case: $${}^{k+1}n\le f^k(n)\le{}^{k+2}n$$ for sufficiently large $n$. The lower bound is easy to deduce by noting that: $$f^{k+1}(n)=f^k(n)^{f^k(n)}\ge n^{({}^{k+1}n)}={}^{k+2}n$$ while the upper bound is deduced by instead proving the even tighter bound: $$f^k(n)\le\underbrace{n\uparrow n\uparrow\dots\uparrow n\uparrow n}_k\uparrow(n+k)\le{}^{k+2}n$$ which gives \begin{align}f^{k+1}(n)&=f^k(n)^{f^k(n)}\\&\le\underbrace{n\uparrow n\uparrow\dots\uparrow n}_k\uparrow(n^{n+k}+n+k)\tag{*}\\&\le\underbrace{n\uparrow n\uparrow\dots\uparrow n}_k\uparrow(n^{n+k}+n^{n+k})\\&=\underbrace{n\uparrow n\uparrow\dots\uparrow n}_k\uparrow(n^{n+k}\cdot2)\\&\le\underbrace{n\uparrow n\uparrow\dots\uparrow n}_k\uparrow(n^{n+k}\cdot n)\\&=\underbrace{n\uparrow n\uparrow\dots\uparrow n}_k\uparrow(n^{n+k+1})\\&=\underbrace{n\uparrow n\uparrow\dots\uparrow n\uparrow n}_{k+1}\uparrow(n+k+1)\end{align} where $(*)$ comes from pushing all of the exponents upwards using $$a^bc\le(a^b)^c=a^{bc}$$
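The bounds ${}^{k+1}n\le f^k(n)\le{}^{k+2}n$ can be spot-checked in exact integer arithmetic for the few cases whose towers still fit in memory (a quick sketch; the list of cases is just what is computationally affordable):

```python
def tetr(n, k):
    """Tetration {}^k n: a k-fold exponential tower of n."""
    result = 1
    for _ in range(k):
        result = n ** result
    return result

def f_iter(n, k):
    """k-fold iterate of f(x) = x**x starting at n."""
    x = n
    for _ in range(k):
        x = x ** x
    return x

# Check {}^{k+1}n <= f^k(n) <= {}^{k+2}n on affordable cases
cases = [(2, 1), (2, 2), (2, 3), (3, 1)]
bounds_hold = all(tetr(n, k + 1) <= f_iter(n, k) <= tetr(n, k + 2)
                  for n, k in cases)
```

For instance $f^2(2)=256$ sits between ${}^32=16$ and ${}^42=65536$, and even $f^3(2)=2^{2048}$ sits between ${}^42$ and ${}^52=2^{65536}$.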
Prove $\alpha$ is well defined
It seems to me that your proof is quite wrong. All you have really done is to state that since $m=n$, we have $\alpha(m)=\alpha(n)$. This is what is meant by $\alpha$ being well-defined, but you have not actually given any reasons. Also, since $m=n$, there is really no point in having both $m$ and $n$. In fact, I wouldn't bother having either. What you need to prove is: if $\frac ab=\frac cd$, then $\frac{a+3b}{2b}=\frac{c+3d}{2d}$. So, assume that $$\frac ab=\frac cd\ ;$$ then $$\frac12\frac ab+\frac32=\frac12\frac cd+\frac32\ ;$$ and simplifying this gives the result you need.
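The required identity can be sanity-checked with exact rational arithmetic; here is a quick sketch using Python's `fractions` (the representatives of $1/2$ are arbitrary choices):

```python
from fractions import Fraction

def alpha(a, b):
    """alpha(a/b) = (a + 3b) / (2b), computed from a chosen representative."""
    return Fraction(a + 3 * b, 2 * b)

# Different representatives of the same rational 1/2 give the same value:
pairs = [(1, 2), (2, 4), (-3, -6), (5, 10)]
values = {alpha(a, b) for a, b in pairs}
```

All four representatives give $\frac74$, consistent with the simplification $\alpha(x)=\frac12 x+\frac32$ evaluated at $x=\frac12$.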
Plane curve with curvature that tends to zero
Yes, I think your intuition is correct. HINT: Recall that if $\gamma \colon I \to \mathbb{R}^2$ is parametrized by arc length (that is, $\|\gamma'(t)\| = 1$ for all $t$), then the curvature satisfies $$|\kappa(t)| = \|\gamma''(t)\|$$ Now the intuition is this: if $|\kappa|$ is small then $\gamma'$ varies little, so the direction of $\gamma$ changes little. This, coupled with the fact that the speed is constant, implies that $\gamma$ stays close to a line. Therefore, in some finite time it should get out of a given disk. Let's formalize that: we have, for any $a$, $b \in I$, $$\gamma(b) = \gamma(a) + \int_a^b \gamma'(t)\, dt $$ Assume that on the interval $[a,b]$ we have $|\kappa(t)|\le \epsilon$. Then we get $$\|\gamma'(t) - \gamma'(a)\| \le \epsilon(t-a) $$ Therefore $$\|(\gamma(b) - \gamma(a)) - (b-a)\gamma'(a) \|\le \epsilon \frac{(b-a)^2}{2}$$ Since $\|\gamma'(a)\| = 1$, we conclude: $$\|\gamma(b) - \gamma(a)\| \ge (b-a)\left( 1 - \frac{(b-a)\epsilon}{2}\right)$$ Now assume that the interval $[a,b]$ has length $\frac{1}{\epsilon}$. Then we get $$\| \gamma(b) - \gamma(a) \| \ge \frac{1}{\epsilon} \cdot \left( 1 - \frac{1}{2}\right) = \frac{1}{2 \epsilon}.$$ Hence $\gamma(I)$ cannot be contained in a disk of radius $< \frac{1}{4 \epsilon}$. Notice the proof works in any dimension $\ge 2$.
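As a quick numerical sanity check of the final estimate (an illustrative sketch, using the extremal example of a circle of radius $1/\epsilon$, which has constant curvature exactly $\epsilon$):

```python
import math

# Extremal test case: an arc-length-parametrized circle of radius R = 1/eps,
# which has constant curvature |kappa| = eps everywhere.
eps = 0.01
R = 1 / eps

def gamma(t):
    return (R * math.cos(t / R), R * math.sin(t / R))

# Over a parameter interval of length 1/eps the endpoints should be at
# distance >= 1/(2*eps), per the estimate above.
xa, ya = gamma(0.0)
xb, yb = gamma(1 / eps)
chord = math.hypot(xb - xa, yb - ya)
```

Here the chord works out to $2R\sin\frac12\approx 0.959\,R$, comfortably above the guaranteed $\frac{1}{2\epsilon}=R/2$.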
Equivalence of functionals $L^\infty \to \mathbf{R}$
(In general it's not true. Assume there exist $A, B\in \Sigma$ such that $0<\mu(A),\mu(B)<\infty$ and $A\cap B=\emptyset$. Then the functionals $T(f)=\int_A f \,d\mu$ and $T'(f)=\int_B f \,d\mu$ fulfill all your conditions except the desired inequality.) Take $X=\mathbb{N}$, $\Sigma=2^{\mathbb{N}}$, $\mu(\{ n \})= 2^{-n}$. Define for $0<a<1$ $$ T_a (f)= \sum_{n\in \mathbb{N}} a^n f(n).$$ Then $T_a$ is a bounded linear functional with $T_a(f)\geq 0$ for $f\geq 0$, and $T_a(f)\neq 0$ whenever $f\geq 0$ and $f\neq 0$. However, for $f=\chi_{\{ n\}}$ we get $$ T_a(f)= a^n. $$ Hence, taking $T=T_{1/2}$ and $T'=T_{1/3}$ gives you a counterexample.
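A short sketch of why no constant $C$ with $T\le C\,T'$ can exist: on the indicators, the ratio $T_{1/2}(\chi_{\{n\}})/T_{1/3}(\chi_{\{n\}})=(3/2)^n$ is unbounded (the helper `T` below is a hypothetical stand-in for evaluating $T_a$ on $\chi_{\{n\}}$):

```python
# T_a applied to the indicator of {n} is just a**n, per the formula above.
def T(a, n):
    return a ** n

# The ratio T_{1/2} / T_{1/3} on indicators is (3/2)**n -> infinity,
# so no constant C can give T_{1/2} <= C * T_{1/3}.
ratios = [T(0.5, n) / T(1 / 3, n) for n in range(1, 60)]
```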
Does $n^2-\sin(n)\sqrt{n}$ go to $+\infty$ as $n \to +\infty$?
You are almost correct: if $n>M>2$ then $$n^2-\sin(n)\sqrt{n}\geq n^2-\sqrt{n}>n^2-n=n(n-1)>M.$$ You may also say that $$n^2-\sin(n)\sqrt{n}=n^2\left(1-\frac{\sin(n)}{n\sqrt{n}}\right)$$ which goes to $+\infty$ because $n^2\to +\infty$ and $\frac{\sin(n)}{n\sqrt{n}}\to 0$ because $$0\leq \frac{|\sin(n)|}{n\sqrt{n}}\leq \frac{1}{n}.$$
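Both estimates are easy to spot-check numerically (an illustrative sketch; the tested range is arbitrary):

```python
import math

def a(n):
    return n * n - math.sin(n) * math.sqrt(n)

# n^2 - sin(n) sqrt(n) >= n^2 - sqrt(n) > n(n-1) for n > 2
growth_ok = all(a(n) > n * (n - 1) for n in range(3, 10_000))
```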
Is this iteration involving primes known?
This doesn't answer your question, but you may find it helpful. Here's some more Python 2 code to calculate terms of your sequence. It uses a deterministic form of the Miller-Rabin primality test, so it's a little slower than my earlier 5 minute hack at producing the small terms of the sequence, but it can go much higher, and because it doesn't build a huge table of primes it's much more RAM-friendly. In theory it could go even higher, given an appropriate set of witnesses.

    #!/usr/bin/env python

    ''' Prime sequence maker

    See http://math.stackexchange.com/q/1635263/207316

    Written by PM 2Ring 2016.02.03
    miller-rabin primality test written 2015.04.29
    '''

    # miller-rabin primality test.
    def is_prime_mr(n,
            # This set of witnesses is sufficient to prove primality
            # for all n < 3317044064679887385961981
            witnesses=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41),
            range=range, pow=pow):
        # Test small prime factors.
        for p in witnesses:
            if n % p == 0:
                return n == p

        m = n - 1
        s, d = -1, m
        while d % 2 == 0:
            s, d = s + 1, d // 2
        srange = range(s)

        for a in witnesses:
            x = pow(a, d, n)
            if x == 1 or x == m:
                # Looks good, check next witness
                continue
            for _ in srange:
                x = x * x % n
                if x == 1:
                    # previous (x+1)(x-1) == 0 mod n
                    return False
                if x == m:
                    # Looks good, check next witness,
                    # but previous x may be a fake root of -1 mod n
                    break
            else:
                # x**m != 1
                return False
        return True

    hi = 3317044064679887385961981
    x = 3
    lst = [x]
    i = 1
    print 'i: x + y + 1 = s, y-x'
    while True:
        y = x
        while True:
            y += 2
            if not is_prime_mr(y):
                continue
            s = x + y + 1
            if is_prime_mr(s):
                break
        if s >= hi:
            break
        print '%2d: %d + %d + 1 = %d, %d' % (i, x, y, s, y-x)
        x = s
        lst.append(x)
        i += 1

    print
    print ', '.join([str(u) for u in lst])

output

    i: x + y + 1 = s, y-x
     1: 3 + 7 + 1 = 11, 4
     2: 11 + 17 + 1 = 29, 6
     3: 29 + 31 + 1 = 61, 2
     4: 61 + 89 + 1 = 151, 28
     5: 151 + 179 + 1 = 331, 28
     6: 331 + 359 + 1 = 691, 28
     7: 691 + 761 + 1 = 1453, 70
     8: 1453 + 1499 + 1 = 2953, 46
     9: 2953 + 2969 + 1 = 5923, 16
    10: 5923 + 5939 + 1 = 11863, 16
    11: 11863 + 11897 + 1 = 23761, 34
    12: 23761 + 23801 + 1 = 47563, 40
    13: 47563 + 47639 + 1 = 95203, 76
    14: 95203 + 95267 + 1 = 190471, 64
    15: 190471 + 190529 + 1 = 381001, 58
    16: 381001 + 381047 + 1 = 762049, 46
    17: 762049 + 762227 + 1 = 1524277, 178
    18: 1524277 + 1524401 + 1 = 3048679, 124
    19: 3048679 + 3048737 + 1 = 6097417, 58
    20: 6097417 + 6097439 + 1 = 12194857, 22
    21: 12194857 + 12194909 + 1 = 24389767, 52
    22: 24389767 + 24389861 + 1 = 48779629, 94
    23: 48779629 + 48779903 + 1 = 97559533, 274
    24: 97559533 + 97559597 + 1 = 195119131, 64
    25: 195119131 + 195119207 + 1 = 390238339, 76
    26: 390238339 + 390238367 + 1 = 780476707, 28
    27: 780476707 + 780477179 + 1 = 1560953887, 472
    28: 1560953887 + 1560954053 + 1 = 3121907941, 166
    29: 3121907941 + 3121908089 + 1 = 6243816031, 148
    30: 6243816031 + 6243816911 + 1 = 12487632943, 880
    31: 12487632943 + 12487633439 + 1 = 24975266383, 496
    32: 24975266383 + 24975266753 + 1 = 49950533137, 370
    33: 49950533137 + 49950534041 + 1 = 99901067179, 904
    34: 99901067179 + 99901067291 + 1 = 199802134471, 112
    35: 199802134471 + 199802134667 + 1 = 399604269139, 196
    36: 399604269139 + 399604269587 + 1 = 799208538727, 448
    37: 799208538727 + 799208539709 + 1 = 1598417078437, 982
    38: 1598417078437 + 1598417082629 + 1 = 3196834161067, 4192
    39: 3196834161067 + 3196834161293 + 1 = 6393668322361, 226
    40: 6393668322361 + 6393668323331 + 1 = 12787336645693, 970
    41: 12787336645693 + 12787336646477 + 1 = 25574673292171, 784
    42: 25574673292171 + 25574673292991 + 1 = 51149346585163, 820
    43: 51149346585163 + 51149346586409 + 1 = 102298693171573, 1246
    44: 102298693171573 + 102298693172747 + 1 = 204597386344321, 1174
    45: 204597386344321 + 204597386346407 + 1 = 409194772690729, 2086
    46: 409194772690729 + 409194772691297 + 1 = 818389545382027, 568
    47: 818389545382027 + 818389545382499 + 1 = 1636779090764527, 472
    48: 1636779090764527 + 1636779090764621 + 1 = 3273558181529149, 94
    49: 3273558181529149 + 3273558181529171 + 1 = 6547116363058321, 22
    50: 6547116363058321 + 6547116363059369 + 1 = 13094232726117691, 1048
    51: 13094232726117691 + 13094232726118211 + 1 = 26188465452235903, 520
    52: 26188465452235903 + 26188465452236177 + 1 = 52376930904472081, 274
    53: 52376930904472081 + 52376930904473327 + 1 = 104753861808945409, 1246
    54: 104753861808945409 + 104753861808945467 + 1 = 209507723617890877, 58
    55: 209507723617890877 + 209507723617890899 + 1 = 419015447235781777, 22
    56: 419015447235781777 + 419015447235784361 + 1 = 838030894471566139, 2584
    57: 838030894471566139 + 838030894471567241 + 1 = 1676061788943133381, 1102
    58: 1676061788943133381 + 1676061788943136391 + 1 = 3352123577886269773, 3010
    59: 3352123577886269773 + 3352123577886270323 + 1 = 6704247155772540097, 550
    60: 6704247155772540097 + 6704247155772540173 + 1 = 13408494311545080271, 76
    61: 13408494311545080271 + 13408494311545080659 + 1 = 26816988623090160931, 388
    62: 26816988623090160931 + 26816988623090161187 + 1 = 53633977246180322119, 256
    63: 53633977246180322119 + 53633977246180325303 + 1 = 107267954492360647423, 3184
    64: 107267954492360647423 + 107267954492360651273 + 1 = 214535908984721298697, 3850
    65: 214535908984721298697 + 214535908984721301833 + 1 = 429071817969442600531, 3136
    66: 429071817969442600531 + 429071817969442602191 + 1 = 858143635938885202723, 1660
    67: 858143635938885202723 + 858143635938885203273 + 1 = 1716287271877770405997, 550
    68: 1716287271877770405997 + 1716287271877770406121 + 1 = 3432574543755540812119, 124
    69: 3432574543755540812119 + 3432574543755540813821 + 1 = 6865149087511081625941, 1702
    70: 6865149087511081625941 + 6865149087511081625987 + 1 = 13730298175022163251929, 46
    71: 13730298175022163251929 + 13730298175022163252467 + 1 = 27460596350044326504397, 538
    72: 27460596350044326504397 + 27460596350044326504983 + 1 = 54921192700088653009381, 586
    73: 54921192700088653009381 + 54921192700088653009667 + 1 = 109842385400177306019049, 286
    74: 109842385400177306019049 + 109842385400177306020187 + 1 = 219684770800354612039237, 1138
    75: 219684770800354612039237 + 219684770800354612041011 + 1 = 439369541600709224080249, 1774
    76: 439369541600709224080249 + 439369541600709224080463 + 1 = 878739083201418448160713, 214
    77: 878739083201418448160713 + 878739083201418448165697 + 1 = 1757478166402836896326411, 4984

    3, 11, 29, 61, 151, 331, 691, 1453, 2953, 5923, 11863, 23761, 47563, 95203, 190471, 381001, 762049, 1524277, 3048679, 6097417, 12194857, 24389767, 48779629, 97559533, 195119131, 390238339, 780476707, 1560953887, 3121907941, 6243816031, 12487632943, 24975266383, 49950533137, 99901067179, 199802134471, 399604269139, 799208538727, 1598417078437, 3196834161067, 6393668322361, 12787336645693, 25574673292171, 51149346585163, 102298693171573, 204597386344321, 409194772690729, 818389545382027, 1636779090764527, 3273558181529149, 6547116363058321, 13094232726117691, 26188465452235903, 52376930904472081, 104753861808945409, 209507723617890877, 419015447235781777, 838030894471566139, 1676061788943133381, 3352123577886269773, 6704247155772540097, 13408494311545080271, 26816988623090160931, 53633977246180322119, 107267954492360647423, 214535908984721298697, 429071817969442600531, 858143635938885202723, 1716287271877770405997, 3432574543755540812119, 6865149087511081625941, 13730298175022163251929, 27460596350044326504397, 54921192700088653009381, 109842385400177306019049, 219684770800354612039237, 439369541600709224080249, 878739083201418448160713, 1757478166402836896326411

Note that $y-x$ appears to be less than $i^2$, and often much smaller, which makes me hopeful that it will always be easy to find a $y$ that works. But of course, relying on the behaviour of "small" numbers when making conjectures in number theory is not a Good Idea. :)
sum of two Dice game
Presumably your opponent picks either $6$ or $8$. Assuming it is $6$ ($8$ is symmetric) your opponent wins if the roll is $2,3,4,5,6$ and you win otherwise. Can you find the chance of each of those?
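Spelling the hint out by brute-force enumeration (a sketch; the choice of $6$ follows the answer above, and $8$ is symmetric):

```python
from fractions import Fraction
from itertools import product

# Opponent calls 6: they win if the sum is 2..6, you win on anything else.
wins_opponent = sum(1 for d1, d2 in product(range(1, 7), repeat=2)
                    if d1 + d2 <= 6)
total = 36
p_you = Fraction(total - wins_opponent, total)   # your winning probability
```

So with the call of $6$ you win with probability $\frac{21}{36}=\frac{7}{12}$; the $8$ case is identical by symmetry.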
Geometry- power center or something?
Extend $AO$ to meet the circumcircle of $\triangle APQ$ at a point $C$, and let $K$ be the intersection of $CM$ with the circle. Since $O$ is the circumcentre of $\triangle APQ$ and $OM\perp PQ$, if $H$ is the orthocentre of $\triangle APQ$ then $AH=2OM$. Extend $MK$ and $AD$ to meet at some point $R$. Observe that $AR=2OM$, and since $R$ lies on the perpendicular from $A$ to $PQ$, $R$ is $H$, the orthocentre of $\triangle APQ$. Thus, points $H$, $M$ and $K$ are collinear. Now observe that, in $\triangle HAM$, $MD\perp HA$ and $AK\perp HM$, and hence $S$, the intersection of those two perpendiculars, is the orthocentre of $\triangle HAM$. Next, in quadrilateral $HFSK$, $\angle KFS=\angle KFB=\angle KAB=\angle KAM=\angle MHS=\angle KHS$, so it is cyclic. Therefore, $\angle HFB=90^{\circ}$, and hence $HF$ passes through the intersection of $BO$ with the circumcircle of $\triangle APQ$, a fixed point.
What's the point of creating discrete control laws for analog processes?
Usually when actuating a continuous system with a digital controller, the control input $u$ is held constant between sample times (this is also what c2d assumes if the method of discretization is not specified). So even though at the start of each sampling interval the same state information is available to both controllers, the discrete controller effectively accounts for the effect of keeping that input constant during one sampling period, while the continuous state feedback assumes that the control input also changes continuously. But if during one sample time the state does not change much, the two are roughly equivalent. This happens when the sample frequency is significantly above the closed- and open-loop pole frequencies. This also holds for other discretization methods. For example, when considering your continuous system and cost matrices $Q$ and $R$, then in the figure below you can see the 2-norm of the difference between the $K$ matrices obtained from continuous and discrete LQR for different discretization methods, as a function of the sample frequency. From that figure you can see that the 2-norm goes to zero as the sample frequency goes to infinity (or the sample time goes to zero) for all used methods, with the exception of the matched-poles method. Another way of comparing continuous and discrete systems is to use the same $K$ matrix and quadratic Lyapunov function for both and compare the resulting derivative of the Lyapunov function. So find a stabilizing $K$ for the continuous system, for example with LQR or pole placement, and a Lyapunov function $V(x) = x^\top P\,x$ which satisfies $$ P\,(A - B\,K) + (A - B\,K)^\top P = -Q, $$ with $Q$ semi-positive definite. The derivative of the Lyapunov function for the continuous system is then simply $\dot{V}(x) = -x^\top Q\,x$.
For the discretized system the derivative of the Lyapunov function can be approximated with $$ \dot{V}(x) \approx x^\top \underbrace{\left[\frac{(A_d - B_d\,K)^\top P\,(A_d - B_d\,K) - P}{h}\right]}_{-Q_d}\,x. $$ Using again your system then the resulting 2-norm of $Q - Q_d$ for different discretization methods as a function of the sample frequency can be seen in the figure below. From that figure you can again see that the 2-norm goes to zero as the sample frequency goes to infinity for all used methods with again the exception of the matched poles method.
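The convergence of the discrete LQR gain to the continuous one can be sketched without any control toolbox for a scalar plant (all numbers here are hypothetical; the cost is discretized with the simple Riemann-sum weights $Q_d=qh$, $R_d=rh$, and the discrete Riccati equation is solved by plain value iteration):

```python
import math

# Scalar plant xdot = a x + b u with cost integral of q x^2 + r u^2.
a, b, q, r = -1.0, 1.0, 1.0, 1.0

# Continuous LQR: scalar Riccati equation 2 a P - b^2 P^2 / r + q = 0
# (positive root), with gain Kc = b P / r.
Pc = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)
Kc = b * Pc / r

def discrete_gain(h):
    """LQR gain for the zero-order-hold discretization with sample time h."""
    Ad = math.exp(a * h)
    Bd = (math.exp(a * h) - 1) / a * b
    Qd, Rd = q * h, r * h              # Riemann-sum cost discretization
    P = Qd
    for _ in range(200_000):           # value iteration on the scalar DARE
        Pn = Ad * P * Ad - (Ad * P * Bd) ** 2 / (Rd + Bd * Bd * P) + Qd
        if abs(Pn - P) < 1e-14:
            P = Pn
            break
        P = Pn
    return Ad * P * Bd / (Rd + Bd * Bd * P)

err_coarse = abs(discrete_gain(0.1) - Kc)
err_fine = abs(discrete_gain(0.001) - Kc)
```

As the sample time shrinks, the discrete gain approaches the continuous one, mirroring the behaviour of the 2-norm plots described above.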
How is the Ornstein-Uhlenbeck process stationary if the mean and variance are time dependent?
The O-U process with a delta initial condition is not stationary in this sense. But that's the wrong initial condition. The O-U process is a Markov process which admits a stationary distribution, so if you want a stationary process, you should start it in the stationary distribution. Here it's a Gaussian distribution, and it isn't hard to work out what the mean and variance of that distribution ought to be. This corresponds to a time-independent solution of the Fokker-Planck equation, which you can easily verify is of the form $p(v) \propto e^{-v^2/(2 \sigma^2)}$, and you can work out the right value for $\sigma$ in terms of your parameters.
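A quick simulation sketch (all parameters hypothetical): an Euler-Maruyama discretization of $dv=-\gamma v\,dt+\sigma\,dW$ forgets its initial condition and settles into the stationary Gaussian law $N(0,\sigma^2/2\gamma)$:

```python
import math
import random

random.seed(12345)
gamma, sigma = 1.0, 1.0            # hypothetical parameters
dt, n_steps, burn_in = 0.01, 200_000, 20_000

v = 5.0                            # deliberately start far from equilibrium
samples = []
for i in range(n_steps):
    # Euler-Maruyama step for dv = -gamma*v*dt + sigma*dW
    v += -gamma * v * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
    if i >= burn_in:               # discard the transient
        samples.append(v)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# Stationary law is N(0, sigma^2 / (2*gamma)): mean 0, variance 0.5 here.
```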
Checking if one "special" kind of block matrix is Hurwitz
I assume that by a Hurwitz matrix, you mean a stable matrix, i.e. a matrix such that all of its eigenvalues have negative real parts. Please correct me if I am wrong. I did some numerical experiments too, but found that your conjecture is wrong. Here is a counterexample: $$ J=\begin{pmatrix} -1.4268 & -0.6777 & 0.7134 & 0.4497\\ -0.6777 & -0.3444 & 0.2280 & 0.1722\\ -3.7490 & 0 & 0 & 0\\ 0 & -8.6780 & 0 & 0 \end{pmatrix}. $$ You may verify that the eigenvalues of $A$ are $-1.7529$ and $-0.0183$, the eigenvalues of $B$ are $0.8620$ and $0.0236$, and the eigenvalues of $J$ are $-0.9401\pm1.8348i$ and $\color{red}{0.0545\pm0.3906i}$. As to the stability of $$M = \begin{bmatrix}A & B \\ -B^T &0\end{bmatrix}$$ for negative definite $A$, I assume that the $A$ and $B$ here are real matrices. The proof is easy. Suppose $v^T=(x^T,y^T)$ is a unit eigenvector of $M$ corresponding to the eigenvalue $\lambda$. Then \begin{align} \lambda = v^\ast Mv &=(x^\ast,y^\ast)\begin{bmatrix}A & B \\ -B^T &0\end{bmatrix}\begin{bmatrix}x\\ y\end{bmatrix}\\ &=x^\ast Ax + x^\ast By - y^\ast B^Tx\\ &=x^\ast Ax + x^\ast By - (y^\ast B^Tx)^T\\ &=x^\ast Ax + x^\ast By - x^T B\bar{y}\\ &=x^\ast Ax + x^\ast By - \overline{x^\ast By}\\ &=x^\ast Ax + 2i\,\operatorname{Im}(x^\ast By). \end{align} Therefore the real part of $\lambda$ is given by $x^\ast Ax$, which is negative because $A$ is negative definite. Edit: In the modified question, you ask whether $J=\begin{bmatrix}A & B \\ -cI &0\end{bmatrix}$ is always stable for sufficiently small $c>0$, given that $A=-(B+B^T)$ is negative definite. After some rough calculations, the answer seems to be negative. (OK, I was wrong. For the 2-by-2 case, actually $J$ is stable when $c$ is small.) Consider $A=\mathrm{diag}(-2a,-2b)$ and $B=\begin{pmatrix}a&-w\\w&b\end{pmatrix}$ with $a,b>0$ and $w\in\mathbb{R}$.
It can be shown that $J$ is nonsingular and its characteristic equation is given by $\det(x^2 I - xA + cB)=0$, or equivalently, $$f(x) := x^4 + 2(a+b)x^3 + (ca+cb+4ab)x^2 + 4cabx + c^2(ab+w^2) = 0.$$ One can further show that $f$ has no purely imaginary root. So, we may employ the Routh–Hurwitz theorem (see also here) to test the stability of $f$. I have done some calculations, but they are too long to fit here (uh, ... I've started to sound like Fermat). The result seems to show that when $w$ is large and $c>0$ is small, the related Cauchy index is always zero (thus $J$ is not stable). I have also done some computer experiments. The results apparently point to the same conclusion. For instance, when $B=\begin{bmatrix}1 & -320\\ 320&1\end{bmatrix}$, the resulting $J$ always seems to have an eigenvalue with nonnegative real part when $0<c\le\frac12$. Another edit: I may have messed up something, but it is too tedious to double-check the calculations.
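The Routh test mentioned above is easy to run mechanically; here is a sketch for the quartic $f$ (the parameter values just reproduce the $w=320$, $c=\frac12$ example from the experiments, plus a tame $w=0$ case for contrast):

```python
def routh_stable_quartic(b1, b2, b3, b4):
    """Routh test for x^4 + b1 x^3 + b2 x^2 + b3 x + b4 (assumes nonzero pivots)."""
    c1 = (b1 * b2 - b3) / b1
    d1 = (c1 * b3 - b1 * b4) / c1
    return b1 > 0 and c1 > 0 and d1 > 0 and b4 > 0

def coeffs(a, b, w, c):
    """Coefficients of f(x) above, for A = diag(-2a, -2b)."""
    return (2 * (a + b),
            c * a + c * b + 4 * a * b,
            4 * c * a * b,
            c * c * (a * b + w * w))

stable_w320 = routh_stable_quartic(*coeffs(1.0, 1.0, 320.0, 0.5))  # large skew part
stable_w0 = routh_stable_quartic(*coeffs(1.0, 1.0, 0.0, 0.5))      # no skew part
```

With $w=320$ and $c=\frac12$ the test fails, matching the reported eigenvalue with nonnegative real part, while $w=0$ passes.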
Two riflemen A and B shoot at a target simultaneously. A has a 0.8 chance of hitting and B has a 0.9 chance
Yes, that's a correct argument. (Assuming independence, which is not quite right in the real world because their bullets could collide in midair!)
Why is "mathematical induction" called "mathematical"?
About question 1: Who coined the expression "mathematical induction"? The qualifier "mathematical" was introduced in order to separate this method of proof from the inductive reasoning used in empirical sciences (the "all ravens are black" example); it is also common to call it complete induction, as opposed to the "incomplete" induction used in empirical science. The reason is straightforward: the mathematical method of proof establishes a generality ("all odd numbers are not divisible by two") that holds without exception, while an "inductive generalization" established by observation of empirical facts can subsequently be falsified by finding a new counter-example. Note: induction (the non-mathematical one) was already discussed by Aristotle: Deductions are one of two species of argument recognized by Aristotle. The other species is induction (epagôgê). He has far less to say about this than deduction, doing little more than characterize it as "argument from the particular to the universal". However, induction (or something very much like it) plays a crucial role in the theory of scientific knowledge in the Posterior Analytics: it is induction, or at any rate a cognitive process that moves from particulars to their generalizations, that is the basis of knowledge of the indemonstrable first principles of sciences. For the history of the name "mathematical induction", see Florian Cajori, Origin of the Name "Mathematical Induction" (1918): The process of reasoning called "mathematical induction" has had several independent origins. It has been traced back to the Swiss Jakob (James) Bernoulli [Opera, Tomus I, Genevae, MDCCXLIV, p. 282, reprinted from Acta eruditorum, Lips., 1686, p. 360. See also Jakob Bernoulli's Ars conjectandi, 1713, p. 95], the Frenchmen B. Pascal [Œuvres complètes de Blaise Pascal, Vol. 3, Paris, 1866, p.
248] and P. Fermat [Charles S. Peirce in the Century Dictionary, Art. "Induction," and in the Monist, Vol. 2, 1892, pp. 539, 545; Peirce called mathematical induction the "Fermatian inference"], and the Italian F. Maurolycus [G. Vacca, Bulletin Am. Math. Soc., Vol. 16, 1909, pp. 70-73]. The process of Fermat differs somewhat from the ordinary mathematical induction; in it there is a descending order of progression, leaping irregularly over perhaps several integers from $n$ to $n - n_1, n - n_1 - n_2$, etc. Such a process was used still earlier by J. Campanus in his proof of the irrationality of the golden section, which he published in his edition of Euclid (1260). John Wallis, in his Arithmetica infinitorum (Oxford, 1656), page 15, [uses] phrases like "fiat investigatio per modum inductionis" [...]. He speaks, p. 33, of "rationes inductione repertas" and freely relies upon incomplete "induction" in the manner followed in natural science. Thus, his method has been criticized by Fermat as being "conjectural", i.e. based on a perceived regularity or repeated schema in a group of formulae. Wallis states (page 306) that Fermat "blames my demonstration by Induction, and pretends to amend it. . . . I look upon Induction as a very good method of Investigation; as that which doth very often lead us to the easy discovery of a General Rule." For about 140 years after Jakob Bernoulli, the term "induction" was used by mathematicians in a double sense: (1) "Induction" used in mathematics in the manner in which Wallis used it; (2) "Induction" used to designate the argument from $n$ to $n + 1$. Neither usage was widespread.
The former use of "induction" is encountered, for instance, in the Italian translation (1800) of Bossut and Lalande's dictionary, article "Induction (term in mathematics)." The binomial formula is taken as an example; its treatment merely by verification, for the exponents $m = 1, m = 2, m = 3$, etc., is said to be by "Induction." We read that "it is not desirable to use this method, except for want of a better method." H. Wronski (1836) in a similar manner classed "méthodes inductionnelles" among the presumptive methods ("méthodes présomptives") which lack absolute rigor. The second use of the word "induction" (to indicate proofs from $n$ to $n + 1$) was less frequent than the first. More often the process of mathematical induction was used without the assignment of a name. In Germany A. G. Kästner (1771) uses this new "genus inductionis" in proving Newton's formulas on the sums of powers of the roots of an equation; he points out the weakness of Wallis's induction, then explains Jakob Bernoulli's proof from $n$ to $n + 1$, but gives it no name. In England, Thomas Simpson [Treatise of Algebra, London, 1755, p. 205] uses the $n$ to $n + 1$ proof without designating it by a name, as does much later also George Boole [Calculus of Finite Differences, ed. J. F. Moulton, London, 1880, p. 12]. A special name was first given by English writers in the early part of the nineteenth century. George Peacock, in his Treatise on Algebra, Cambridge, 1830, under permutations and combinations, speaks (page 201) of a "law of formation extended by induction to any number," using "induction," as yet, in the sense of "divination." Later he explains the argument from $n$ to $n + 1$ and calls it "demonstrative induction" (page 203). The next publication is one of vital importance in the fixing of names; it is Augustus De Morgan's article "Induction (Mathematics)" in the Penny Cyclopedia, London, 1838.
He suggests a new name, namely "successive induction," but at the end of the article he uses incidentally the term "mathematical induction." This is the earliest use of this name that we have seen.
Question on Radicals
Maple rationalizes it as $$ {\frac { \left( -1+\sqrt {2}-\sqrt [3]{3} \right) \left( 9+6\,\sqrt { 2}\;\sqrt [3]{3}+9\,{3}^{2/3}+4\,\sqrt {2}+3\,\sqrt [3]{3} \right) \left( 28+11\,\sqrt {2} \right) }{1084}} $$ $$ = -{\frac{337}{542}}-{\frac {42\,\sqrt {2}}{271}}-{\frac {49\,\sqrt {2} \sqrt [3]{3}}{271}}-{\frac {77\,\sqrt [3]{3}}{542}}-{\frac {135\,{3}^{ 2/3}}{542}}-{\frac {12\,\sqrt {2}\;{3}^{2/3}}{271}} $$ Once you know that you want something of the form $Q = a + b \sqrt{2} + c 3^{1/3} + d \sqrt{2}\; 3^{1/3} + e 3^{2/3} + f \sqrt{2}\; 3^{2/3}$, expand out $1 - \sqrt{2} + 3^{1/3} = (1 + 2 \sqrt{2} + 3 \cdot 3^{1/3}) Q$, equate coefficients of $1$, $\sqrt{2}, ..., \sqrt{2}\cdot 3^{2/3}$, and solve.
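The factored and expanded forms above can be checked against each other in floating point (a quick numerical sketch):

```python
import math

s2 = math.sqrt(2.0)        # sqrt(2)
c3 = 3.0 ** (1.0 / 3.0)    # 3^(1/3)
c3sq = 3.0 ** (2.0 / 3.0)  # 3^(2/3)

# First Maple expression: product of three factors over 1084
factored = ((-1 + s2 - c3)
            * (9 + 6 * s2 * c3 + 9 * c3sq + 4 * s2 + 3 * c3)
            * (28 + 11 * s2)) / 1084

# Second Maple expression: the expanded sum
expanded = (-337.0 / 542 - 42 * s2 / 271 - 49 * s2 * c3 / 271
            - 77 * c3 / 542 - 135 * c3sq / 542 - 12 * s2 * c3sq / 271)
```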
Show function is continuous using sequences
Note that $|g(x_{n}) - 0| = |g(x_{n})| = 2|x_{n}| \to 0$ if $x_{n} \to 0$.
Using induction to prove a regular expression belongs to the language generated by a grammar (well, half-proving anyway)
I think that your regular expression may make the proof harder than necessary. Since, as you say, $L(B)=b^*$, the production $S\to aBSBBa\mid\epsilon$ is effectively $S\to ab^*Sb^*a\mid\epsilon$. Suppose that we apply the first alternative $n$ times for some $n\ge 1$ and then apply the second; since $b^*b^*=b^*$, we get $$(ab^*)^n(b^*a)^n=\underbrace{ab^*ab^*\ldots a}_{n\;a\text{’s}}b^*\underbrace{a\ldots b^*ab^*a}_{n\;a\text{’s}}\;.\tag{1}$$ This is a string of $2n$ $a$s, beginning and ending with $a$, with arbitrary numbers of $b$s between adjacent $a$s. We can think of it as $$(ab^*ab^*)^{n-1}ab^*a\;.$$ Finally, $$L(S)=\sum_{n\ge 1}(ab^*ab^*)^{n-1}ab^*a+\epsilon=(ab^*ab^*)^*ab^*a+\epsilon\;.$$ This gives you a regular expression that is built up in a slightly simpler way. A slightly less obvious alternative is to rewrite $(1)$ as $$a(b^*ab^*ab^*)^{n-1}a\;,$$ taking advantage of the fact that $b^*b^*=b^*$. Then we have $$L(S)=\sum_{n\ge 1}a(b^*ab^*ab^*)^{n-1}a+\epsilon=a(b^*ab^*ab^*)^*a+\epsilon\;;$$ considering the form of the $S$ production, this is probably easier to work with. See if you can manage the proof with one or the other of these regular expressions.
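The claimed equality of languages can be stress-tested by generating strings from the production $S\to aBSBBa\mid\epsilon$ (with $B\to b^*$) and matching them against the simpler expression (a sketch; the bound on the $b$-runs and the random seed are arbitrary choices):

```python
import random
import re

random.seed(0)
# The simplified expression (ab*ab*)*ab*a, plus the empty string.
REGEX = re.compile(r"(ab*ab*)*ab*a|")

def bs():
    """A random b-run, standing in for an expansion of B -> b*."""
    return "b" * random.randint(0, 3)

def derive(n):
    """Apply S -> a B S B B a exactly n times, then S -> epsilon."""
    if n == 0:
        return ""
    return "a" + bs() + derive(n - 1) + bs() + bs() + "a"

samples = [derive(n) for n in range(6) for _ in range(20)]
all_match = all(REGEX.fullmatch(s) is not None for s in samples)
# A few strings outside the language should all be rejected.
rejects = [s for s in ("ab", "ba", "aaa", "abb") if REGEX.fullmatch(s)]
```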
Group completion of a particular monoid
There is a two-step process to compute the Grothendieck group that is usually a bit faster. Let $S$ be any abelian monoid, and define an equivalence relation $\approx$ on $S$ by $$ x \approx y \quad\Leftrightarrow\quad (\exists z\in S)(x+z = y+z). $$ Then $\approx$ is a congruence relation, so the quotient $S/{\approx}$ is a monoid. It is not hard to prove the following theorem: Theorem. The quotient $S/{\approx}$ is isomorphic to the image of $S$ in $G(S)$. It follows from this theorem that $G(S)$ is isomorphic to $G(S/{\approx})$, but the latter is usually easier to compute. In particular, if you can find any embedding of $S/{\approx}$ into an abelian group, then $G(S/{\approx})$ is isomorphic to the subgroup generated by the elements of $S/{\approx}$. In the example you gave, the equivalence relation $\approx$ is obviously defined by $$ a_{n,m} \approx a_{n',m'} \quad\Leftrightarrow\quad n=n'\text{ and }m \equiv m'\:(\mathrm{mod}\;2) $$ Then the quotient consists of elements $A_{n,m}$, where $n\in\mathbb{N}$ and $$ \begin{cases}m=0 & \text{if }n = 0\text{ or }1, \\ m\in\mathbb{Z}/2\mathbb{Z} & \text{if } n\geq 2,\end{cases} $$ with operation defined by $$ A_{n,m} + A_{n',m'} = A_{n+n',m+m'} $$ where $m+m'$ is always computed in $\mathbb{Z}/2\mathbb{Z}$. This is obviously isomorphic to the submonoid of $\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}$ consisting of the elements $\{(0,0),(1,0),(2,0),(2,1),(3,0),(3,1),\ldots\}$. These elements generate $\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}$, so $G(S)$ is isomorphic to $\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}$.
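The description of the quotient can be checked mechanically (a small sketch; the range bounds are arbitrary): the pairs $(n,m)$ form a submonoid of $\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}$, and every group element is a difference of two of them, which is exactly why $G(S)\cong\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}$:

```python
# Concrete model of the quotient described above: pairs (n, m) with
# n >= 0, m in Z/2Z, and m forced to 0 when n < 2 (range bounds arbitrary).
elems = {(n, m) for n in range(8) for m in (0, 1) if n >= 2 or m == 0}

def add(x, y):
    """Addition inherited from Z (+) Z/2Z."""
    return (x[0] + y[0], (x[1] + y[1]) % 2)

# Closure: sums of elements (that stay within range) are again elements,
# so this really is a submonoid of Z (+) Z/2Z.
closed = all(add(x, y) in elems
             for x in elems for y in elems if x[0] + y[0] < 8)

def as_difference(k, e):
    """Write (k, e) in Z (+) Z/2Z as a difference of two monoid elements."""
    return (2 + max(k, 0), e % 2), (2 + max(-k, 0), 0)

diffs_ok = all(
    p in elems and q in elems
    and (p[0] - q[0], (p[1] - q[1]) % 2) == (k, e % 2)
    for k in range(-3, 4) for e in (0, 1)
    for p, q in [as_difference(k, e)]
)
```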
The proofs of limit laws and derivative rules appear to tacitly assume that the limit exists in the first place
You're correct that it doesn't really make sense to write $\lim\limits_{h\to 0}\frac{f(x+h)-f(x)}{h}$ unless we already know the limit exists, but it's really just a grammar issue. To be precise, you could first say that the difference quotient can be re-written $\frac{f(x+h)-f(x)}{h}=2x+h$, and then use the fact that $\lim\limits_{h\to 0}x=x$ and $\lim\limits_{h\to 0}h=0$, as well as the constant-multiple law and the sum law for limits. Adding to the last sentence: most of the familiar properties of limits are written "backwards" like this. I.e., the "limit sum law" says $$\lim\limits_{x\to c}(f(x)+g(x))=\lim\limits_{x\to c}f(x)+\lim\limits_{x\to c}g(x)$$ as long as $\lim\limits_{x\to c}f(x)$ and $\lim\limits_{x\to c}g(x)$ exist. Of course, if they don't exist, then the equation we just wrote is meaningless, so really we should begin with that assertion. In practice, one can usually be a bit casual here, if for no other reason than to save word count. In an intro analysis class, though, you would probably want to be as careful as you reasonably can.
What are the combination outcomes to the following questions based on information given
(i) $$ {20\times19\times18\times17 \over 4\times3\times2\times1} = {116280 \over 24} = 4845 $$ (ii) Alaska must be chosen. Therefore you are excluding one from the original 20. It leaves 19 and you have to find many combinations of 3 you can get from the 19. $$ {19\times18\times17 \over 3\times2\times1} = {5814 \over 6} = 969 $$ Alaska then goes in with all these 969 combinations. (iii) 3 warm countries must be chosen and only 1 cold.  So warm countries has the most and we will calculate that first. Same formula again $$ {12\times11\times10 \over 3\times2\times1} = {1320 \over 6} = 220 $$ So with 220 combinations of 3 warm countries from a pick of 12. We have 8 cold countries and only need to pick 1 of them for each combination. Therefore 1 cold country for each one of the 220. $$ =220\times8 = 1760 $$ Can someone please confirm or deny this?
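The three computations can be confirmed directly with built-in binomial coefficients (a quick check of the arithmetic above):

```python
from math import comb

total = comb(20, 4)                       # (i)   choose any 4 of the 20
with_alaska = comb(19, 3)                 # (ii)  Alaska fixed, 3 more from 19
warm_and_cold = comb(12, 3) * comb(8, 1)  # (iii) 3 of 12 warm, 1 of 8 cold
```

All three agree with the hand calculations: $4845$, $969$ and $1760$, so the reasoning is confirmed.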
Solving a series in the proof of the expectation of the binomial distribution
$$\left(p+q\right)^{n}=\sum_{k=0}^{n}\binom{n}{k}p^{k}q^{n-k}$$ Taking the derivative w.r.t. $p$ on both sides we find: $$n\left(p+q\right)^{n-1}=\sum_{k=0}^{n}\binom{n}{k}kp^{k-1}q^{n-k}$$ Multiplying with $p$ on both sides we find: $$np\left(p+q\right)^{n-1}=\sum_{k=0}^{n}\binom{n}{k}kp^{k}q^{n-k}$$ If $p+q=1$ then this equality can be written as: $$np=\mathbb{E}X$$ where $X$ is binomially distributed with parameters $n$ and $p$. Edit: I cannot resist pointing out an alternative route. It is a very good thing to keep in mind that a random variable $X$ binomially distributed with parameters $n$ and $p$ can be written as: $$X=X_{1}+\cdots+X_{n}$$ where the $X_{i}$ are iid and Bernoulli-$p$ distributed. There are $n$ experiments. $X_i$ takes value $1$ if there is a 'success' and value $0$ if there is a 'failure'. Then: $$\mathbb{E}X_{i}=1\cdot\mathbb{P}\left(X_{i}=1\right)+0\cdot\mathbb{P}\left(X_{i}=0\right)=\mathbb{P}\left(X_{i}=1\right)=p$$ and making use of the linearity of expectation we find: $$\mathbb{E}X=\mathbb{E}X_{1}+\cdots+\mathbb{E}X_{n}=p+\cdots+p=np$$
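Both routes give the same number, which can be verified exactly with rational arithmetic (a quick sketch; the choices of $n$ and $p$ are arbitrary):

```python
from fractions import Fraction
from math import comb

def binomial_mean(n, p):
    """Direct evaluation of sum_k k C(n,k) p^k (1-p)^(n-k)."""
    q = 1 - p
    return sum(comb(n, k) * k * p**k * q**(n - k) for k in range(n + 1))

n, p = 10, Fraction(1, 3)
mean = binomial_mean(n, p)   # exact rational arithmetic: equals n*p
```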
Instructive examples of elegant, clear, rigorous, terse, but "non-dull" mathematical prose
I nominate Halmos. All of his writing is good, but you might look in particular at "Finite dimensional vector spaces".
Principal fiber bundles and invariant differential forms
$\omega\rightarrow \pi^*\omega$ is linear. Suppose that $\pi^*\omega=0$. Since $\pi$ is a submersion, for every $x\in G/X$, $u_1,...,u_k\in T_x(G/X)$ and $y\in\pi^{-1}(x)$, there exist $v_1,...,v_k\in T_yX$ such that $d\pi_y(v_i)=u_i$; then $\pi^*\omega_y(v_1,...,v_k)=\omega_x(u_1,...,u_k)=0$ implies that $\omega=0$, so $\omega\rightarrow \pi^*\omega$ is injective. Consider $X\times G$ and take any non-zero $1$-form $\beta$ on $G$ invariant by the right translations. Then $\alpha_{(x,g)}(u,v)=\beta_g(v)$ defines a form invariant by $G$. You cannot have $\alpha=\pi^*\omega$, since $\pi^*\omega$ vanishes on the fibre.
Classic $2n$ people around a table problem
a) The probability that a particular wife has her husband sitting next to her is $\dfrac{2}{2n-1}$ since she has two neighbours. b) You can regard couple $i$ together as breaking the circle so that the question now involves a row of $2n-2$ people. These can be arranged in $(2n-2)!$ ways. But the number of ways they can be arranged if couple $j$ sit together is $2(2n-3)!$ since we could treat couple $j$ as a single person, but doubling the number as they can sit either way round. So the probability is $\dfrac{2}{2n-2}$. c) Going back to (a), the expected number of couples sitting together is $\tfrac{2n}{2n-1}$ which for large $n$ approaches $1$. Using your Poisson approximation [which also uses the almost independence between couples illustrated by the answer to (b) being close to the answer for (a)] with an expectation of $1$, the limit of the probability of no couples together is $e^{-1}\approx 0.3678794$. For a similar question (couples in a row rather than a circle) see Showing probability no husband next to wife converges to $e^{-1}$
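The limiting value $e^{-1}$ in (c) can be checked by simulation. The sketch below is my own (with an arbitrary seed and $n=25$ couples, both hypothetical choices): it seats $2n$ people uniformly at random around a circle and estimates the probability that no couple sits together.

```python
import random
from math import exp

random.seed(1)

def no_couple_adjacent(n):
    """One random circular seating of 2n people; person i's partner is i ^ 1."""
    seats = list(range(2 * n))
    random.shuffle(seats)
    # check every adjacent pair around the circle exactly once
    return all(seats[i] ^ 1 != seats[(i + 1) % (2 * n)] for i in range(2 * n))

trials = 20000
est = sum(no_couple_adjacent(25) for _ in range(trials)) / trials
# Poisson(1) approximation: P(no couple together) -> 1/e as n grows
assert abs(est - exp(-1)) < 0.03
```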
Poisson process question (in reverse?)
Let $N(t)$ be a Poisson process with rate $m$. For (a), we have for fixed $T>0$ $$ \mathbb P(N(T)= 0) = e^{-mT}. $$ For (b), let $P$ be the time the pedestrian takes to cross the road, then $$ \mathbb P(T_1 > P) = e^{-mP}, $$ where $T_1=\inf\{t>0:N(t)=1\}$ is the first arrival time of the Poisson process. For (c), we have $$ \mathbb P(T_1>P, T_2-T_1>P) = \mathbb P(T_1>P)\mathbb P(T_2-T_1>P) = e^{-mP}e^{-mP} = e^{-2mP}. $$ For (d), let $P^*$ be the time the pedestrian takes to cross. Then \begin{align} \mathbb P(N(P^*)=0)\geqslant \frac 9{10} &\iff e^{-mP^*}\geqslant \frac9{10}\\ &\iff -mP^* \geqslant \log\frac9{10}\\ &\iff P^* \leqslant -\frac1m\log\frac9{10} = \frac1m\log\frac{10}9. \end{align}
Upper semicontinuity in real analysis
Let $\varepsilon > 0$ and $g(x) = \inf_{k \in \mathbb N} f_k(x)$. Therefore, there exists $k_0 \in \mathbb N$ with $$f_{k_0}(x_0) - g(x_0) < \frac{\varepsilon}{2}.$$ Moreover, for such a $k_0$, there is $\delta_{k_0}> 0$ with the property that $$|x - x_0| < \delta_{k_{0}} \quad \Rightarrow \quad f_{k_0}(x) < f_{k_0}(x_0) + \frac{\varepsilon}{2}.$$ We deduce that, if $|x - x_0| < \delta_{k_0}$, $$g(x) \le f_{k_0}(x) < f_{k_0}(x_0) + \frac{\varepsilon}{2} < g(x_0) + \varepsilon.$$
Minimal geodesic on the real projective $n$-space
Lift to $S^2$. Suppose that your geodesic in $\mathbb RP^2$ has length $L$ larger than $\pi/2$. Lift to a geodesic segment in $S^2$ between points $A$ and $B$. Let $A' = -A$. Then the extension of the great-circle arc from $A$ to $B$ onwards to $A'$ will give you a segment $BA'$ that's shorter than $\pi/2$. Its projection to $\mathbb RP^2$ will be a (shorter) geodesic between your original two points. The other half of this -- that if it's not minimal, then its length is greater than $\pi/2$ -- follows a similar argument: lift to $S^2$, and you've got a non-minimal "short" geodesic between two points $A$ and $B$; the "shorter" geodesic might run from $B$ to $A'$ rather than $A$, but even if it does, you end up with a split geodesic between $A$ and $A'$ whose length is less than $\pi$, which is impossible. If the "shorter" geodesic ran between $A$ and $B$, you'd have a contradiction, because the injectivity radius of the exponential map on $S^2$ is $\pi$ (i.e., all geodesic paths of length less than $\pi$ on $S^2$ are in fact shortest paths).
How to invert the derivative of the logistic function?
Following the advice from Bernard Masse's comment, I am able to invert the function by substituting $w = e^x$ and solving the resulting quadratic equation with the quadratic formula. Dividing both sides by $\sigma'(x)$ along the way is allowed, since $\sigma'(x) > 0$ for all $x$.
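Here is a sketch of that computation (my reconstruction, assuming the standard logistic $\sigma(x)=1/(1+e^{-x})$, so $\sigma'(x)=e^x/(1+e^x)^2$; the function names are mine). Setting $y=\sigma'(x)$ and $w=e^x$ gives $y(1+w)^2=w$, i.e. $yw^2+(2y-1)w+y=0$, hence $w=\frac{(1-2y)\pm\sqrt{1-4y}}{2y}$ for $0<y\le 1/4$:

```python
from math import exp, log, sqrt

def dsigma(x):
    """Derivative of the logistic function: sigma'(x) = e^x / (1 + e^x)^2."""
    return exp(x) / (1 + exp(x)) ** 2

def invert_dsigma(y):
    """Solve sigma'(x) = y for x via w = e^x; valid for 0 < y <= 1/4."""
    disc = (2 * y - 1) ** 2 - 4 * y * y        # simplifies to 1 - 4y
    w1 = ((1 - 2 * y) + sqrt(disc)) / (2 * y)
    w2 = ((1 - 2 * y) - sqrt(disc)) / (2 * y)
    return log(w1), log(w2)                    # the two symmetric solutions +-x

x1, x2 = invert_dsigma(dsigma(1.0))
assert abs(max(x1, x2) - 1.0) < 1e-9
assert abs(min(x1, x2) + 1.0) < 1e-9
```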
Canonical form of a Quadratic form.
When you don’t have any squared terms, a common trick is to pick one of the cross terms $z_iz_j$ and make the change of variables $z_i=\frac12(y_1+y_2)$, $z_j=\frac12(y_1-y_2)$. This change of variables comes from a polarization identity for quadratic forms. You then have a difference of squares with which you can continue. Here, we can try $z_1=\frac12(y_1+y_2)$, $z_2=\frac12(y_1-y_2)$, obtaining $\frac14y_1^2-\frac14y_2^2+y_1z_3-y_2z_3-3z_3z_4$. After completing the squares a couple of times, you’ll once more be left with only a cross term, so apply another change of variables to it. When you’re all done, substitute for the $y_i$. (The factor of $\frac12$ in the change of variables is there to make this final substitution for the original variables “nicer.”)
Convergence of the series: $\sum_{n=2}^{\infty}\frac{1}{n^2(\ln n)^2\left | \sin(n\pi\sqrt{2}) \right |}$
Easier than it looks. Let $m$ be the closest integer to $n \sqrt 2.$ We have $$ | 2 n^2 - m^2 | \geq 1, $$ since $2n^2 - m^2$ is an integer, and it is nonzero because $\sqrt 2$ is irrational. Hence $$ | n \pi \sqrt 2 - m \pi | = \left| \frac{\pi (2 n^2 - m^2)}{n \sqrt 2 + m} \right| \geq \frac{\pi }{n \sqrt 2 + m} > \frac{\pi}{3n\sqrt 2} > \frac{1}{2n}. $$ Therefore, for $n$ sufficiently large, $$ \left| \sin \left(n \pi \sqrt 2 \right) \right| > \frac{1}{3n} $$ and the thing you are summing is smaller than $$ 3 \; \; \left( \frac{1}{n \log^2 n} \right), $$ whose sum converges by the integral test.
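A numerical spot-check of the bound (my addition; plain double precision is accurate enough in this range, since the rounding error in $n\pi\sqrt2$ stays far below the size of the sine values involved):

```python
from math import sin, pi, sqrt, log

root2 = sqrt(2)
for n in range(2, 100_000):
    s = abs(sin(n * pi * root2))
    assert s > 1 / (3 * n)                      # the lower bound on |sin(n pi sqrt2)|
    # hence each summand is below 3/(n log^2 n), the term of a convergent series
    assert 1 / (n**2 * log(n)**2 * s) < 3 / (n * log(n)**2)
```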
Relation between $a$ & $d$ in the ratio of arithmetic progressions $ (S_n/S_{k.n})$
Yours will be $$\frac{dn+2a-d}{dk^2n+2ak-dk}=\frac{1}{k^2}+\frac{(2a-d)\left(1-\frac 1k\right)}{dk^2n+2ak-dk}.$$ Hence, since this value is independent of $n$, the numerator has to be zero. Hence, we have $$2a-d=0\ \text{or}\ 1-\frac 1k=0\iff\ 2a=d\ \text{or}\ k=1.$$ Now $k=1$ means $\frac{S_n}{S_n}=1$ is independent of $n$. This is obvious. Hence, we have $2a=d$ as the relation between $a$ and $d$. This is what you want.
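A quick check with exact arithmetic that $2a=d$ does make $S_n/S_{kn}$ independent of $n$ (the values $a=3$, $d=6$, $k=4$ are hypothetical, chosen to satisfy the derived relation):

```python
from fractions import Fraction

def S(n, a, d):
    """Sum of the first n terms of an AP with first term a, common difference d."""
    return Fraction(n * (2 * a + (n - 1) * d), 2)

a, d, k = 3, 6, 4                      # d = 2a, the relation derived above
ratios = {S(n, a, d) / S(k * n, a, d) for n in range(1, 50)}
assert ratios == {Fraction(1, k * k)}  # independent of n, and equal to 1/k^2
```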
Context Free Grammar Equal Lengths
No. By the definition of this language we can generate '#0#1#', but your grammar cannot generate it. First way: change the definition of the language to $$L=\{X\#Y \mid X,Y\in\{0,1\}^* \text{ and } |X|=|Y| \}$$ Second way: change the grammar to $$S\rightarrow0S0|1S1|0S1|1S0|0S\#|\#S0|1S\#|\#S1$$ Of course, I think your goal is the first way.
Why do I get the wrong solution? Difference equation
Plugging $n=1$ should give you $-B = 4$ so that $B=-4$ and the correct result follows.
Example of $f \in K[x]$ solvable by radicals but having a root inexpressible only by coefficients of $f$ and +, -, *, /, $\sqrt[n]{...}$
I think this is more of a confusion of language and nothing else. If $f(x) \in K[x] $ is a specific polynomial then the coefficients of $f$ are nothing but specific members of $K$. And then if you have a formula for roots of $f$ which involves a combination of some members of $K$ along with operations like $+, -, \times, /, \sqrt[n] {. } $ then the coefficients of $f$ themselves being members of $K$ can not be visually located in the formula. Any member of $K$ can for example easily be written as a combination of any given number of members of $K$ using just the field operations. You are perhaps trying to think of an example where the coefficients are literals like in the case of $x^2+ax+b$ and $K=\mathbb{Q} $, but again this is wrong. In that case the field should be $K=\mathbb{C} (a, b) $. Let us then assume that we have a literal polynomial $$f(x)= x^n+a_1x^{n-1}+\dots+a_{n-1}x+a_n$$ over field $K=\mathbb{C} (a_1,a_2,\dots,a_n)$. If $f$ is solvable by radicals over $K$ then the formula for roots involves arithmetic operations and radicals (nested if needed) applied on members of $K$ and it does include the literal coefficients of $f$ because they are what $K$ is made of. This is easily seen to be the case for quadratic or cubic equations, which are known to be solvable. Thus the coefficients always enter the formula for roots if there is a formula available. Also note the well known fact (established by Abel well before Galois) that the polynomials with literal coefficients are solvable over their field of coefficients ($K=\mathbb{C} (a_1,a_2,\dots,a_n)$) if and only if $n<5$. To summarize, such an example which you are seeking does not exist. I have tried to discern the meaning of the comment by reuns and it appears related to the treatment of the solvable quintic given by Dummit and Foote in their Abstract Algebra. They describe a criterion to check whether a given quintic $$f(x) =x^5+ax^4+bx^3+cx^2+dx+e\in\mathbb{Q}[x]$$ is solvable over $\mathbb{C} $. 
The idea is to form a complicated polynomial of degree 6 in $\mathbb{Q} [x] $ with coefficients made using coefficients of $f$ and checking whether it has a rational root or not. And if the polynomial of degree 6 mentioned above does have a rational root then $f$ is solvable by radicals over $\mathbb{C} $. You perhaps want to check (for this case) if there is a formula for roots based on elements of $K=\mathbb {C} (a, b, c, d, e) $. I think there is such a formula but I am not sure. Usually when we consider the problem of solvability of a polynomial $f(x) \in K[x] $, the field $K$ is the smallest field containing the coefficients of $f$. In this case if the polynomial is solvable by radicals over $K$ then the roots can be expressed in terms of coefficients of $f$ via arithmetic operations and radicals. Enlarging the field $K$ to some extension $L$ and checking solvability over $L$ makes the problem simpler (trivial if $L$ is splitting field of $f$). Also if we consider the scenario where $f(x) \in K[x] $ is solvable by radicals over $K$ and $F\subset K$ is the smallest field containing the coefficients we need to investigate the problem of solvability of $f$ over $F$ separately and one can't deduce anything from its solvability over $K$. Thus your problem makes sense only in the usual setting where the solvability is checked over the field of coefficients and then (to repeat what I said earlier) the kind of example you seek does not exist.
Fibonacci recursive algorithm yields interesting result
The function nfib that returns the number of calls of fib (which is roughly proportional to the time of execution) follows a similar recursive scheme. You end up with a similar growth behaviour. See also A001595.

nfib(0) = 1
nfib(1) = 1
nfib(n) = 1 + nfib(n-1) + nfib(n-2)

nfib: 1, 1, 3, 5, 9, 15, 25, 41, 67, ..
fib:  0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ..

Using $$ \mathtt{nfib}(n) = 2 \, \mathtt{fib}(n+1) - 1 $$ one sees that $O(\mathtt{nfib}) = O(\mathtt{fib})$.
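The identity $\mathtt{nfib}(n) = 2\,\mathtt{fib}(n+1) - 1$ can be checked directly in Python (a small sketch of mine, not part of the original answer):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

@lru_cache(maxsize=None)
def nfib(n):
    """Number of calls the naive recursive fib(n) makes, counting the root call."""
    return 1 if n < 2 else 1 + nfib(n - 1) + nfib(n - 2)

for n in range(30):
    assert nfib(n) == 2 * fib(n + 1) - 1
```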
When is the ring of continuous functions absolutely flat?
These are exactly the finite spaces. Completely regular topological spaces $X$ with the property that every prime ideal in $C(X)$ is maximal are called $P$-spaces. A topological characterization is: every $G_{\delta}$-set (countable intersection of open sets) is itself open. It is not hard to show that countable subsets of $P$-spaces are closed and discrete. It follows that countably compact and in particular compact $P$-spaces are finite. For a long list of equivalent characterizations and basic properties of $P$-spaces consult exercises 4J and 4K on pages 62 and 63 of Gillman-Jerison, Rings of continuous functions, Springer GTM 43, 1976. In the MO-thread on prime ideals in $C[0,1]$ you'll also find some relevant information and constructions.
Help thinking about sample space probability
It's just the intersection $A\cap B$. Often the sample space is not as simple as you have shown, even for simple experiments. For example, think about tossing a coin. For a single toss, the sample space is $$\mathscr S_1 = \{H,T\}.$$ But for multiple tosses, say two, the sample space becomes $$ \mathscr S_2 = \mathscr S_1 \times \mathscr S_1 =\{ (H,H),(H,T),(T,H),(T,T)\}$$ If $A$ is the event "same result on both tosses", then $$A=\{(H,H),(T,T)\}$$ while if $B$ is the event "at least one head on any toss", then $$B=\{(H,H),(H,T),(T,H)\}.$$ This means the intersection of $A$ and $B$ is "head on both tosses", and indeed $$A\cap B = \{(H,H)\}.$$ To make matters more complicated, it doesn't have to be a single repeated experiment, e.g., you could be tossing a coin and spinning a spinner that can select a number from 1 to 9.
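The two-toss example can be enumerated directly; a tiny sketch (mine) treating events as sets of outcomes:

```python
from itertools import product

S2 = set(product("HT", repeat=2))         # sample space for two tosses
A = {s for s in S2 if s[0] == s[1]}       # same result on both tosses
B = {s for s in S2 if "H" in s}           # at least one head
assert A & B == {("H", "H")}              # the intersection: head on both tosses
```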
Eigenvalues of the sum of a symmetric and a anti-symmetric matrix
To summarize the discussion in the comments: It is true that the eigenvalues of $M$ must lie in the left half of the complex plane because the symmetric matrix $M + M^T = 2M_S$ is negative definite. The fact that $M + M^T$ is negative definite implies that $M$ has eigenvalues with negative real part is explained in this question/answer.
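A concrete $2\times 2$ instance (my own example, not from the discussion): take the negative definite $M_S=\begin{pmatrix}-2&0.5\\0.5&-1\end{pmatrix}$ and antisymmetric $M_A=\begin{pmatrix}0&1.5\\-1.5&0\end{pmatrix}$, and check via the characteristic polynomial that $M=M_S+M_A$ has eigenvalues with negative real part:

```python
import cmath

# M = M_S + M_A = [[-2, 2], [-1, -1]]
M = [[-2.0, 2.0], [-1.0, -1.0]]
tr = M[0][0] + M[1][1]                       # trace = sum of eigenvalues
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]  # determinant = product of eigenvalues
disc = cmath.sqrt(tr * tr - 4 * det)
eigs = [(tr + disc) / 2, (tr - disc) / 2]    # roots of x^2 - tr*x + det
assert all(e.real < 0 for e in eigs)         # both in the left half-plane
```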
Find recurrence relation with general solution $a_n=A+Bn+C2^n+\frac{1}{3}n2^{n-1}$
The answer given by @user84413 is slightly simpler than the initial relationship but still has two non constant coefficients. It is possible to get a third order constant coefficients affine recurrence relationship (relationship (6) below) generating sequence $(a_n)$. Here is a method for obtaining it. Consider the two following auxiliary sequences: $$b_n:=a_{n+1}-a_n=B+C 2^n+\dfrac{1}{6}(n+2)2^n \ \ \ (1)$$ thus, $$c_n:=b_n-B \ \ \ (2)$$ is of the form $c_n=(D+En)2^n \ \ \ (*)$. Thus sequence $c_n$ is governed by the following second order recurrence relationship $$c_{n+2}=4c_{n+1}-4c_n \ \ \ \ (3) $$ Remark 1: (3) is a direct consequence of (*) by combining particular relationships : $$\begin{cases} c_{n+2} & = & (D+En+2E)2^{n+2}\\ c_{n+1} & = & (D+En+E)2^{n+1}\\ c_{n} & = & (D+En)2^n \end{cases}$$ Remark 2: Relationship (3) is not at all unexpected: it is connected to results governing the characteristic equation of second order linear recurrences having a double root (see for example here). Remark 3: We have not yet taken initial conditions into account. This will be done in a last step. Plugging in relationship (2) into (3), one gets: $$b_{n+3}=4b_{n+2}-4b_{n+1} + B\ \ \ \ (4) $$ Using (1), (4) gives: $$a_{n+3}-a_{n+2}=4(a_{n+2}-a_{n+1})-4(a_{n+1}-a_{n}) + B\ \ \ \ (5) $$ yielding final recurrence relationship: $$\displaystyle\color{red}{a_{n+3}=5a_{n+2}-8a_{n+1}+4a_{n} + B}\ \ \ \ (6) $$ with initial values $$\begin{cases}a_1 & = & A+B+2C+1/3\\a_2 & = & A+2B+4C+4/3\\ a_3 & = & A+3B+8C+4\end{cases}$$
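Relationship (6) can be verified with exact rational arithmetic; the constants $A$, $B$, $C$ below are arbitrary hypothetical values:

```python
from fractions import Fraction

A, B, C = Fraction(5), Fraction(-2), Fraction(7, 3)   # arbitrary constants

def a(n):
    """The general solution a_n = A + Bn + C*2^n + (1/3) n 2^(n-1), for n >= 1."""
    return A + B * n + C * 2**n + Fraction(1, 3) * n * 2**(n - 1)

# relationship (6): a_{n+3} = 5 a_{n+2} - 8 a_{n+1} + 4 a_n + B
for n in range(1, 30):
    assert a(n + 3) == 5 * a(n + 2) - 8 * a(n + 1) + 4 * a(n) + B

# the stated initial values
assert a(1) == A + B + 2 * C + Fraction(1, 3)
assert a(2) == A + 2 * B + 4 * C + Fraction(4, 3)
assert a(3) == A + 3 * B + 8 * C + 4
```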
Exponential decay of Heat equation solution
The factor $e^{bx}$ seems to be irrelevant for the theorem. I haven't a reference, but it's straightforward to get an estimate, using an explicit formula for the solution via Green's function: $$ W(t,x)=\int_0^\infty G(x,y,t)F(y)\,dy. $$ It can be assumed that $D=1$. For the Dirichlet condition, say, $$ G(x,y,t)=\Gamma(x-y,t)-\Gamma(x+y,t), $$ where $$ \Gamma(x,t)=\frac{e^{-\frac{x^2}{4 t}}}{\sqrt{4 \pi t}}, \quad t>0, $$ is a fundamental solution for the heat equation. Let $\alpha,\beta$ and $\varepsilon>0$ be s.t. $F(x)\ge \varepsilon$ on $[\alpha,\beta]$. Since $G(x,y,t)$ is positive for $x,y,t>0$, $$ W(t,x)\ge \int_\alpha^\beta G(x,y,t)F(y)\,dy\ge \varepsilon\int_\alpha^\beta (\Gamma(x-y,t)-\Gamma(x+y,t))\,dy= $$ $$ \frac\varepsilon2 \left(\text{erf}\left(\frac{x-\alpha }{2 \sqrt{t}}\right)+\text{erf}\left(\frac{\alpha +x}{2 \sqrt{t}}\right)-\text{erf}\left(\frac{x-\beta }{2 \sqrt{t}}\right)-\text{erf}\left(\frac{\beta +x}{2 \sqrt{t}}\right)\right)= $$ $$ \frac{\varepsilon x \left(\beta^2-\alpha^2\right)}{4 \sqrt{\pi }t^{3/2}}+O\left(\left(\frac{1}{t}\right)^{5/2}\right),\quad t\to+\infty, $$ where $O$ is uniform on compact subsets of $(0,\infty)$. So the solution decreases no faster than $t^{-3/2}$, and it's enough to multiply it by a power function $t^a$, $a>3/2$, to obtain growth when $t\to+\infty$.
n empty balls and an observer
There is no inconsistency between the claimed result and the value you think you get in the case $r=1$, since the result says "at most", and $\frac{n-1}{2}$ is certainly at most $n-1$. Also, $\frac{n-1}{2}$ isn't correct for $r=1$. For example, if $n=2$ the expected value is precisely $1$ (there is one uncoloured circle, and the observer can see both circles), and for $n=3$ the expected value is $\frac 53$, since if the black circle is in the middle (probability $1/3$) the observer can only see one uncoloured circle, but otherwise (s)he can see both.
Finding a point on a polynomial function where there are 3 points where the line y=mx +c is touched
No idea if this problem has a unique solution. So let us see. The line $y = m x + c$ is a tangent line to $f$, so it intersects $f$ at $x_0$ and shares the slope at $x_0$: $$ y(x_0) = m x_0 + c = f(x_0) \\ m = f'(x_0) $$ When I set this up in GeoGebra and vary the tangential location $x_0$, I get one, three or five intersections of the tangent line with the graph of $f$. So the problem does not seem to have a unique solution. The equation of the tangent line at $x_0$ is $$ t(x) = f'(x_0) x + (f(x_0) - f'(x_0) x_0) = f(x_0) + f'(x_0) (x - x_0) $$
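To illustrate the multiple-intersection behaviour numerically, here is a sketch with a hypothetical quartic of my choosing (not the asker's $f$): the tangent to $f(x)=x^4-3x^2$ at $x_0=0$ is $y=0$, and $f-t=x^2(x^2-3)$ meets it at the tangency point plus two transversal crossings at $x=\pm\sqrt 3$, three intersection points in total.

```python
def f(x):
    return x**4 - 3 * x**2        # hypothetical quartic, just for illustration

def fp(x):
    return 4 * x**3 - 6 * x       # its derivative

def tangent(x0):
    """Tangent line t(x) = f(x0) + f'(x0)(x - x0)."""
    return lambda x: f(x0) + fp(x0) * (x - x0)

t = tangent(0.0)
g = lambda x: f(x) - t(x)
# count transversal crossings of f and t via sign changes on a fine grid
xs = [-3 + 6 * i / 600 for i in range(601)]
sign_changes = sum(1 for u, v in zip(xs, xs[1:]) if g(u) * g(v) < 0)
assert sign_changes == 2          # two crossings besides the tangency at x0 = 0
```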
Is it true that if a graph is n-regular that it must have n+1 vertices?
It seems from your last sentence that you're asking whether an $n$-regular graph must have exactly $n+1$ vertices (rather than at least $n+1$ vertices). If so, as Gregor commented, the answer is no. For the proof you're trying to find, try counting the number of incidences in two different ways.
Equivalence between two definitions of infinitary logic
I don't see any reason to assume they would be the same. If I understand you correctly, the formulas of Chang and Keisler are the same as the formulas of ordinary arithmetic; they just have a more permissive consequence relation. In contrast, what you describe as $\mathcal L_{\omega_1,\omega}$ has formulas that are more expressive than those of ordinary first-order formulas. For example, you can use diagonalization at the metalevel to produce a set of integers that isn't definable by any (ordinary finite) arithmetic formula, and then $\mathcal L_{\omega_1,\omega}$ would still allow you to claim that such-and-such holds about all of those integers and perhaps even prove it. Chang and Keisler can't even express that. A more interesting example may be that $\mathcal L_{\omega_1,\omega}$ has a formula expressing "$x$ is a standard integer" (namely the infinite disjunction $x=0 \lor x=S(0) \lor x=S(S(0)) \lor \cdots$), so you can write down a theory that has $\mathbb N$ as its only model, up to isomorphism. In contrast it looks like $\omega$-logic in the Chang/Keisler sense doesn't give a way to rule out models that are elementarily equivalent to $\mathbb N$, such as an ultrapower of $\mathbb N$s.
Leibniz's rule application
Well, since the sequence $\,\displaystyle{\frac{\sqrt n}{n+100}}\,$ is eventually monotone converging to zero (why?), the Leibniz theorem/test applies here and thus your series converges. Oh, and it really doesn't matter, for convergence purposes, whether you begin with a positive or a negative summand in the series.
What is an effective and practical means to teach about natural logarithms and log laws to high school students?
For introducing logarithm, perhaps something like starting with exponential population growth and then noting that the graph is linear in log-space? You can then change back and forth between representations and see how things like base-change affect the plot and the interpretation. This is also probably representative of where less-mathematical scientists run into logarithms the most.
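The "linear in log-space" observation in code form, with hypothetical population numbers of my choosing: if $p(t)=p_0 r^t$, then $\log p(t)=\log p_0 + t\log r$, so consecutive differences of the logs are constant.

```python
from math import log

p0, r = 100.0, 1.5                       # hypothetical population and growth factor
pop = [p0 * r**t for t in range(10)]     # exponential growth
logs = [log(p) for p in pop]
diffs = [b - a for a, b in zip(logs, logs[1:])]
# constant slope log(r): the graph is a straight line in log-space
assert all(abs(d - log(r)) < 1e-9 for d in diffs)
```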
Part of simple proof of nontrivial center in p-group
$|G|=p^k$ for some $k$, as it is a $p$-group; we are only talking about finite groups here, and this statement may not hold for infinite groups. Now, as $C_G(x)<G$, $|C(x)|$ divides $p^k \implies |C(x)|=p^i$ for some $0 \le i < k$, so $|G:C(x)|=p^{k-i}$, and $k-i>0$ implies $p$ divides $|G:C(x)|$.
Can I find a power of a closest diffusional operator uniquely?
Yes, this is doable; even quite impressive results can be had by simplifying to $$\min_{\bf G}\left\{\sum_k \|{\bf [G,-I,0,\cdots,0]C}^k {\bf f}\|_F^2\right\}$$ for the cyclic permutation operator $\bf C$, encouraging $\bf G$ to be Toeplitz with an extra cost term and of course other regularizing terms like punishing "filter taps" too far off the diagonal. This we can achieve by adding a cost on the convolution of $\left[\begin{array}{rr}1&0\\0&-1\end{array}\right]$ with $\bf G$, as if it were a discretized 2D function fulfilling a differential equation. We use only one naive guess of a sequence of $$f_k(t) = \frac k N f_N(t) + \left(1-\frac k N\right)f_0(t)$$ and can get results like this, even without any iteration, solving for better $f_k(t)$ candidate functions. (The goal is that purple, which is the result of the diffusion of blue, should reach all the way to red.)
Show $(1+x_1)(1+x_2)...(1+x_n)\geq2^n$ given $x_1x_2...x_n=1$
By AM-GM inequality, $$(1+x_i) \geq 2 \sqrt x_i$$ Taking product from $i=1$ to $n$, we get the desired result.
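A quick numerical confirmation (my addition): generate random positive values, append one factor that forces $\prod x_i = 1$, and check $\prod(1+x_i)\ge 2^n$:

```python
from math import prod
import random

random.seed(0)
for _ in range(100):
    xs = [random.uniform(0.1, 10) for _ in range(7)]
    xs.append(1 / prod(xs))              # force the product to equal 1
    n = len(xs)
    # (1 + x_i) >= 2 sqrt(x_i) for each i, so the product is >= 2^n
    assert prod(1 + x for x in xs) >= 2**n * (1 - 1e-12)
```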
Does $x^2+x+1 \equiv 0 \pmod {997}$ have solutions? Why or why not?
From this, using $\displaystyle\left(\frac{ab}p\right)=\left(\frac ap\right)\left(\frac bp\right)$ $$\left(\frac{-3}p\right)=\left(\frac{-1}p\right)\left(\frac 3p\right)$$ As $\displaystyle997\equiv1\pmod4,\left(\frac{-1}p\right)=1 $ (See $-1$ is a quadratic residue modulo $p$ if and only if $p\equiv 1\pmod{4}$) Now use Quadratic Reciprocity Theorem, $$\left(\frac 3{997}\right)\left(\frac{997}3\right)=1$$ As $997\equiv1\pmod3,$ $$\left(\frac{997}3\right)=\left(\frac13\right)$$ Now $\displaystyle\left(\frac1p\right)=(1)^{\dfrac{p-1}2}=1$ for all odd prime $p$
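Brute force confirms that solutions exist, consistent with $\left(\frac{-3}{997}\right)=1$. Completing the square (my addition), $x^2+x+1\equiv 0$ is equivalent to $(2x+1)^2\equiv -3 \pmod{997}$:

```python
p = 997
roots = [x for x in range(p) if (x * x + x + 1) % p == 0]
assert len(roots) == 2                  # a quadratic with -3 a QR has two roots
# completing the square: 4(x^2 + x + 1) = (2x + 1)^2 + 3, so (2x+1)^2 = -3 mod p
assert all(pow(2 * x + 1, 2, p) == (p - 3) % p for x in roots)
```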
Finding an area of a triangle inside of a triangle, given certain areas of other triangles, and area ratios.
Your conclusion that $\triangle PVY$ has area $40$ is correct, and your reasoning — that $\triangle PVY$ and $\triangle PYW$ have the same height and bases in ratio $4:3$ — is correct. (A detail: $PY$ is not necessarily the height of these triangles, since it is not necessarily perpendicular to $VW$. But the perpendicular distance from $P$ to $VW$ is the same for both triangles anyway.) So at this point you have the following areas known and unknown: Here is a small further step: The same idea also applies to $\triangle UVY$ and $\triangle UYW$: they have the same height and bases in ratio $4:3$, so their areas are also in ratio $4:3$. We don't know what their areas are, but this does tell us that $$ a+b+40 = \tfrac43 (c+35+30) $$ It would be nice to apply this same idea to some more triangles, to get some more algebraic relations among $a,b,c$, hoping that eventually we'll have enough relations to actually solve for them (or at least for $b$, which is what we were asked for). For example, if we knew the ratio $UX:XV$, then we could draw conclusions about the ratio of the areas $a$ and $b$, and about the ratio of the areas $a+40+30$ and $b+c+35$. Of course, we don't know the ratio $UX:XV$. Can we do something anyway?
Allegation or Mixture Proportion
Final ratio is 16:9, so the total volume can be taken as $25x$. So the final concentration of milk $= 25x*[1- \frac{15}{25x}] *[1- \frac{15}{25x}]$ $16x = 25x*[1- \frac{15}{25x} ] *[1- \frac{15}{25x} ]$ $\frac{16}{25} = [1- \frac{15}{25x} ]^2$ $\frac{4}{5} = [1- \frac{15}{25x}]$ $\frac{1}{5} = \frac{15}{25x}$ Total volume, $ 25x=15*5=75$. Answer.
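The replacement process is easy to check numerically (a sketch of mine, assuming two successive removals of 15 units each replaced by water):

```python
V = 75                                    # total volume found above
milk = float(V)
for _ in range(2):                        # two replacement operations
    milk *= (V - 15) / V                  # each removal keeps fraction (V-15)/V of the milk
assert abs(milk / V - 16 / 25) < 1e-12    # final milk : water = 16 : 9
```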
Probability to reach a space
To see that your method is not correct, change $440$ to $9$. If you apply your reasoning, you would end up with a probability of zero, which is clearly incorrect. For each integer $n$, let $p(n)$ be the probability that a sum of $n$ can be achieved in some number of steps. Then $p$ satisfies the recursion \begin{align*} p(n) = & \frac{1}{36}p(n-2) + \frac{2}{36}p(n-3) + \frac{3}{36}p(n-4) + \frac{4}{36}p(n-5) \\[4pt] & + \frac{5}{36}p(n-6) + \frac{6}{36}p(n-7) + \frac{5}{36}p(n-8) + \frac{4}{36}p(n-9) \\[4pt] & + \frac{3}{36}p(n-10) + \frac{2}{36}p(n-11) + \frac{1}{36}p(n-12) \end{align*} for all $n > 0$, together with the initial conditions $p(0)=1$ and $p(n)=0$ for all $n < 0$. Here are the values of $p(n)$ for $0\le n\le 12$ . . . \begin{array}{c|c} n&p(n)\\ \hline 0&1\\ \hline 1&0\\ \hline 2&{\large{\frac{1}{36}}}\approx .02777777778\\ \hline 3&{\large{\frac{1}{18}}}\approx .05555555556\\ \hline 4&{\large{\frac{109}{1296}}}\approx .08410493827\\ \hline 5&{\large{\frac{37}{324}}}\approx .1141975309\\ \hline 6&{\large{\frac{6841}{46656}}}\approx .1466263717\\ \hline 7&{\large{\frac{1417}{7776}}}\approx .1822273663\\ \hline 8&{\large{\frac{279397}{1679616}}}\approx .1663457600\\ \hline 9&{\large{\frac{32653}{209952}}}\approx .1555260250\\ \hline 10&{\large{\frac{8935921}{60466176}}}\approx .1477837957\\ \hline 11&{\large{\frac{4271189}{30233088}}}\approx .1412753140\\ \hline 12&{\large{\frac{292122973}{2176782336}}}\approx .1341994412\\ \hline \end{array} Given that $A$ starts at position $1$, the probability that $A$ reaches position $440$ is equal to $p(439)$. Applying the recursion using a CAS such as Maple or Mathematica, $p(439)$ evaluates to a rational number with huge numerator and denominator, but it is approximately $1/7$.
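One does not even need a CAS; iterating the recursion in floating point reproduces the tabulated values and the approximation $p(439)\approx 1/7$ (a sketch of mine):

```python
# two-dice sum distribution: weights[s] out of 36 ways to roll a sum of s
weights = {2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 5, 9: 4, 10: 3, 11: 2, 12: 1}

N = 439
p = [0.0] * (N + 1)
p[0] = 1.0                                # p(0) = 1; p(n) = 0 for n < 0 (skipped below)
for n in range(1, N + 1):
    p[n] = sum(w * p[n - s] for s, w in weights.items() if n - s >= 0) / 36

assert abs(p[2] - 1 / 36) < 1e-12         # matches the table
assert abs(p[439] - 1 / 7) < 1e-3         # matches the quoted approximate value
```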
Fundamental group of the grid in $\mathbb{R}^2$ isomorphic with commutator subgroup of $\mathbb{Z}*\mathbb{Z}$
Let $X$ be the wedge of two circles, and note that $\pi_1(X) \cong F_2$. First note that $\mathbb Z^2$ acts transitively on the grid, and that in fact the grid is just $\tilde{X}/[F_2,F_2]$; indeed, this is the Cayley graph for $\mathbb Z^2$. Then, we know that $[F_2,F_2] \cong \pi_1(\tilde{X}/[F_2,F_2]) \cong F_{\infty}$. The previous fact can be checked by noting that we have an isomorphism $\pi_1(Y)/p_*(\pi_1(\tilde{X})) \cong\pi_1(Y)\cong G$, where the first isomorphism follows since the universal cover is contractible, and the second follows from basic covering space theory, with reference here.
Lie group action and foliation
I'll be using the definition of foliation that appears in Lee's book (page 501, it should be equivalent to the definition that you are using). Essentially, a foliation of dimension $k$ is a partition of $M$ into connected $k$-dimensional immersed submanifolds which locally look like the union of hyperplanes. Also, I'll be assuming that $G$ is connected and $k$-dimensional. The proof that I propose uses quite a lot of machinery from distributions, so if you're not familiar with those feel free to ask! First, there is a correspondence between foliations and (involutive) distributions: if $\mathcal{F}=\{A_{p}\colon p\in M\}$ is a $k$-dimensional foliation of $M$ (and $A_{p}$ denotes the leaf which contains $p$), then the subset $\Delta=\bigcup_{p\in M}T_{p}(A_{p})$ of $T(M)$ must be an involutive rank $k$ distribution. Conversely, if $\Delta=\bigcup_{p\in M}\Delta_{p}$ is an involutive distribution, for every $p\in M$ there is a unique maximal connected integral manifold of $\Delta$ passing through $p$, and the collection of all integral manifolds is a $k$-dimensional foliation of $M$. These results are due to the Frobenius Theorem, which is wonderfully explained in "Foundations of Differentiable Manifolds and Lie groups", by Warner (pages 41-49). Also, it's important to know that integral manifolds of an involutive distribution are weakly embedded (Warner, Theorem 1.62). So, suppose $\{G\cdot p \colon p\in M\}$ is a foliation. Then, $\Delta=\bigcup_{p\in M}T_{p}(G\cdot p)$ must be an involutive distribution. For this to be possible, we need to guarantee that $\dim T_{p}(G\cdot p)$ is independent of $p$ (that is, all orbits must have the same dimension, which is equivalent to saying that all isotropy subgroups must have the same dimension). Let's go now to the question at hand: we're assuming that $G_{p}$ is zero-dimensional for all $p$. 
This implies that $\dim G\cdot p = \dim G$ for all $p\in M$, so $\Delta=\bigcup_{p\in M}T_{p}(G\cdot p)$ is a union of $k$-subspaces. We need it to be a distribution. For this, take any basis $\{X_{1},...,X_{k}\}$ of the Lie algebra $\mathfrak{g}$ of $G$ and define $$ X_{i}^{*}(p)=\dfrac{d}{dt}\Big|_{t=0}\operatorname{Exp}(tX_{i})\cdot p, \quad p\in M. $$ The vector fields $X_{i}^{*}$ are well defined and smooth in $M$, and they form a basis for $T_{p}(G\cdot p)$ at every point $p\in M$ (notice that this is still true locally if we suppose that the orbits have constant dimension. We'd have that the fields span those subspaces). From this, we get that $\Delta$ is a distribution. Also, it is involutive because the orbits are integral manifolds. The Frobenius Theorem (global version) now implies that the collection $\mathcal{F}=\{A_{p}\colon p\in M\}$ of maximal integral manifolds of $\Delta$ is a $k$-dimensional foliation of $M$. In general, since $G\cdot p$ is a connected integral manifold through $p$, $G\cdot p\subseteq A_{p}$ (because all connected integral manifolds are contained in the maximal integral manifold, c.f. Warner Theorem 1.64), so if we can guarantee that $G\cdot p=A_{p}$, we are done. By hypothesis, $G$ is compact, so the action is proper. In particular, the orbits are closed in $M$ (c.f. Lee, Proposition 21.7). Now, let $p\in M$. We know that $G\cdot p\subseteq A_{p}$ and it is closed in $M$, hence $G\cdot p$ is closed in $A_{p}$. This is because the inclusion $A_{p}\hookrightarrow M$ is continuous, but not necessarily an embedding. Now, consider the inclusion $G\cdot p \hookrightarrow A_{p}$. Since $A_{p}$ is weakly embedded, this inclusion is a smooth immersion. Also, $G\cdot p$ and $A_{p}$ have the same dimension, so the inclusion is a submersion and $G\cdot p$ is open in $A_{p}$. Now, $A_{p}$ is connected, so we get that $G\cdot p=A_{p}$, and we conclude. Hope this helps!
Determine the dimension of the range and determine the kernel
Hint: $g$ sends a basis of $\mathbb{R}^3$ to linearly independent vectors in $V$. Hence $g$ must be one-one. (Verify this fact!) If $g$ is one-one, what is $\ker g$? If you've determined this, what is $\text{im } g$?
Which ZFC axiom schemes are reducible to a single axiom?
First of all, this depends greatly on your axiomatization. Since Replacement implies Separation, you can reduce the Separation schema to $\exists x(x=x)$ in the presence of Replacement. So we're really just left with Replacement. Now here's the kicker: you cannot have a finite axiomatization of $\sf ZFC$, unless of course the theory itself is inconsistent. The reason is that $\sf ZFC$ proves the consistency of any finite number of its consequences. So you can't quite reduce Replacement to a single axiom in the presence of $\sf ZF-Rep$.