Proof that $|\sin z|<|z\cos z|$ for $z=x+iy$ such that $\max\{|x|,|y|\}=n\pi$.
We have to prove $$2\sin z\,\sin\bar z<z\,\bar z\cdot 2\cos z\,\cos\bar z\ ,$$ or $$\cos(z-\bar z)-\cos(z+\bar z)<z\bar z\bigl(\cos(z-\bar z)+\cos(z+\bar z)\bigr)\tag{1}$$ for $z=\alpha+in\pi$ with $n\geq1$ and $|\alpha|\leq n\pi$. Now $(1)$ amounts to $$-\cos(z+\bar z)<{z\bar z-1\over z\bar z+1}\,\cos(z-\bar z)\ ,$$ or $$-\cos(2\alpha)<{|z|^2-1\over|z|^2+1}\,\cosh(2n\pi)\ .\tag{2}$$ Since $|z|^2\geq n^2\pi^2>9$ and $\cosh(2\pi)>267$ the inequality $(2)$ is obviously true for all real $\alpha$ and $n\geq1$.
Solving the equation find the value of $u(2,t)$
Looks to me like you made some mistake in the limits of the integral. The bounds on $t$ mean we will always have $|2+2t| > 2$, but $|2-2t| > 2$ only when $t\in (2,3)$. Thus we get $$\int_{x-2t}^{x+2t} g = \int_{2-2t}^{2+2t} {\bf1}_{|\theta|\le 2}\,d\theta = \begin{cases}\int_{2-2t}^{2}d\theta & \textrm{if }t\le2\\\int_{-2}^2d\theta & \textrm{if }t> 2\end{cases}.$$ Does this fix your answer?
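As a sanity check, here is a tiny Python sketch of that case split (assuming $t>0$): the integral is just the length of the overlap of $[2-2t,\,2+2t]$ with the support $[-2,2]$ of $\mathbf 1_{|\theta|\le 2}$.

```python
def g_integral(t):
    """Length of [2-2t, 2+2t] ∩ [-2, 2], i.e. the integral of 1_{|θ|<=2}."""
    lo, hi = 2 - 2 * t, 2 + 2 * t
    return max(0.0, min(hi, 2.0) - max(lo, -2.0))
```

For $t\le 2$ this returns $2t$ (the interval $[2-2t,2]$), and for $t>2$ it saturates at $4$.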
Galois Theory by Rotman, Exercise 60, a field of four elements by using Kronecker's theorem and adjoining a root of $x^4-x$ to $\Bbb Z_2$
You are not required to adjoin a complex root to $\mathbb{Z}_2$. You can't do that even if you try because $\mathbb{C}$ and $\mathbb{Z}_2$ have different characteristic. Instead, the hint is to use Kronecker's theorem, so let's do that. First, what are the obvious roots of $f(x) = x^4 - x$ in $\mathbb{Z}_2$? Clearly, $\bar{0}$ and $\bar{1}$ are both roots of $f$. So, we can factor out $x$ and $x - 1$ to get $f(x) = x(x-1)(x^2+x+1)$. Note that $x^2 + x + 1$ is irreducible over $\mathbb{Z}_2$. By Kronecker's theorem, there exists an extension field of $\mathbb{Z}_2$ that contains a root of $x^2+x+1$, which is also a root of $f(x)$. The degree of the extension is equal to the degree of the polynomial $x^2 + x + 1$, since it is irreducible over $\mathbb{Z}_2$. A degree two extension of $\mathbb{Z}_2$ has to contain $4$ elements (do you see why?) and so we are done.
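To make Kronecker's construction concrete, here is a small Python sketch of $\mathbb{Z}_2[\omega]/(\omega^2+\omega+1)$, representing $a+b\omega$ as a bit pair $(a,b)$; the names `add` and `mul` are mine, purely for illustration:

```python
from itertools import product

# GF(4) = Z_2[w]/(w^2 + w + 1); represent a + b*w as the bit pair (a, b)
def add(p, q):
    return (p[0] ^ q[0], p[1] ^ q[1])

def mul(p, q):
    a, b = p
    c, d = q
    # (a + b w)(c + d w) = ac + (ad + bc) w + bd w^2, and w^2 = w + 1 here
    return ((a & c) ^ (b & d), (a & d) ^ (b & c) ^ (b & d))

elements = list(product((0, 1), repeat=2))   # exactly four elements
w = (0, 1)                                   # the adjoined root of x^2 + x + 1
```

One can check directly that $\omega^2=\omega+1$ and that all four elements are roots of $x^4-x$, exactly as the factorization predicts.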
Unitary transformation: order of $U^{\dagger}$ and $U$
Counterexample: $$ U=\pmatrix{0&-1&0\\ 1&0&0\\ 0&0&1}, A=\pmatrix{1&0&0\\ 0&0&1\\ 0&1&0}, U^\ast AU=\pmatrix{0&0&1\\0&1&0\\ 1&0&0}, UAU^\ast=\pmatrix{0&0&-1\\0&1&0\\ -1&0&0}. $$
Get area edges with a point, a vector and 2 distances
With the clarification, what you need to use is easy. If the length of Dir is the distance from LL to LR (that is, $(\text{Dir.x})^2 + (\text{Dir.y})^2 + (\text{Dir.z})^2 = (\text{LLLR})^2$), you need to define just two new points: NewLR = LL + Dir and NewUR = UL + Dir. That is, NewLR = (LL.x + Dir.x, LL.y + Dir.y, LL.z + Dir.z) and NewUR = (UL.x + Dir.x, UL.y + Dir.y, UL.z + Dir.z). And your 4 corners are: LL, UL, NewLR, NewUR.
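In code this is a single component-wise addition; a minimal Python sketch with made-up coordinates for LL, UL and Dir:

```python
def vadd(p, q):
    """Component-wise sum of two 3D points/vectors."""
    return tuple(a + b for a, b in zip(p, q))

# hypothetical input data, just for illustration
LL, UL, Dir = (0.0, 0.0, 0.0), (0.0, 0.0, 2.0), (3.0, 4.0, 0.0)
NewLR = vadd(LL, Dir)   # NewLR = LL + Dir
NewUR = vadd(UL, Dir)   # NewUR = UL + Dir
corners = [LL, UL, NewLR, NewUR]
```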
Finding an unbiased estimator for $\mu^2$
Hint: For any $k\in\{1,\dots,n\}$, $$E[X_k^2]=\mathrm{Var}(X_k)+E[X_k]^2=1+\mu^2.$$ Can you use this to find an unbiased estimator for $\mu^2$?
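Following the hint, $\frac1n\sum_k X_k^2 - 1$ has expectation $\mu^2$ when the $X_k$ have unit variance. A Monte Carlo sketch (the normal distribution and the value $\mu=2$ are my own choices, purely for illustration):

```python
import random

random.seed(0)                   # reproducible illustration
mu, n, reps = 2.0, 10, 200_000   # hypothetical mean; unit variance assumed
total = 0.0
for _ in range(reps):
    xs = [random.gauss(mu, 1.0) for _ in range(n)]
    total += sum(x * x for x in xs) / n - 1.0   # mean of X_k^2, minus 1
estimate = total / reps          # should be close to mu^2 = 4
```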
If $M,N$ are local martingales, then so is $N+M$, but not necessarily $NM$.
Hint: For the sum, notice that if $(M_t)$ is a right-continuous martingale, then so is $(M_{t\wedge \tau})$ for any stopping time $\tau$. Therefore, $\kappa_n=\sigma_n\wedge \tau_n$ indeed does the job.
$l^{1}$-distance and projection in Euclidean space
Consider $H = \mathbb{R}^2$ and $S = \langle(u,v)\rangle$ with $u^2 + v^2 = 1$. Define $h = (a,b)$ and $g = (u,v)$. Then our criterion will be $$ \|(a-u,b-v)\|_1 < |au+bv-1|\,\|(u,v)\|_1,\quad\text{i.e.}\quad|a-u|+|b-v|<|au+bv-1|(|u|+|v|). $$ By setting $s = a- u$, $t = b - v$, this becomes $$ |s|+|t|<|su+tv|(|u|+|v|). $$ Now, assume $s = 1$. Moreover, since $u^2 + v^2 = 1$, set $u = \cos\theta$ and $v = \sin\theta$: $$ 1+|t|<|\cos\theta+t\sin\theta|(|\cos\theta|+|\sin\theta|). $$ You can check that $(t,\theta) = (7,\,1\ \text{rad})$ satisfies the above inequality. Equivalently, $g = (u,v) = (0.5403,0.8415)$ and $h = (a,b) = (1.5403,7.8415)$.
Is this a correct definition of a line integral?
The first is a line integral over a vector field (presented quite horribly), defined as $$\int_a^bF(r(t))\cdot r'(t)\, dt$$ where $F$ represents the formula for the field, and $r$ represents the path. If we write $r$ as $(x(t),y(t))$, then we get that $r'(t)=(\frac{dx}{dt},\frac{dy}{dt})$. If $F(x,y)$ represents the equation defining the vector field, we can write it in component form as $F(x,y)=(P(x,y),Q(x,y))$, so $P$ defines the $x$-component of the vector field at each point $(x,y)$, and $Q$ does the same for the $y$-component. Expanding the dot product: $$\int_a^b P(x,y)\frac{dx}{dt}\, dt + Q(x,y)\frac{dy}{dt}\, dt$$ $$\int_a^b P(x,y)dx + Q(x,y)dy$$ which gives your version. This is useful for determining the work done by a "field type" force (an electromagnetic field, for example) on a moving object. The second is a line integral over a scalar field. This is just the normal integral really, just extended so it can be defined over any curve, and not just the $x$-axis in the 2D case, which is rather limiting. We call them both line integrals (since we integrate over a curve), but one is over a vector field and the other is over a scalar field, leading to different definitions.
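Numerically, the vector-field definition is just a 1D integral in $t$. A short Python sketch; the field $F=(-y,x)$ traversing the unit circle is a standard example (not from the question) whose exact value is $2\pi$:

```python
import math

def line_integral(F, r, rprime, a, b, steps=10_000):
    """Midpoint-rule approximation of the integral of F(r(t)) . r'(t) dt."""
    h = (b - a) / steps
    total = 0.0
    for i in range(steps):
        t = a + (i + 0.5) * h
        Fx, Fy = F(*r(t))
        dx, dy = rprime(t)
        total += (Fx * dx + Fy * dy) * h
    return total

# F = (-y, x) around the unit circle r(t) = (cos t, sin t)
val = line_integral(lambda x, y: (-y, x),
                    lambda t: (math.cos(t), math.sin(t)),
                    lambda t: (-math.sin(t), math.cos(t)),
                    0.0, 2.0 * math.pi)
```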
Approximating a cosine
There isn't any simple approximation of the sort you seek. Simply defining an angle $\theta_{k\ell}$ such that $\cos(\theta_{k\ell}) = \tfrac12 \bigl( \cos(\tfrac{2\pi k}n) + \cos(\tfrac{2\pi \ell}n) \bigr)$ is not sufficient. There are infinitely many different angles $\theta_{k\ell}$ which satisfy this equation, after all, and they are very different from one another. So what you want is to obtain some single value $\theta_{k\ell}$ which you can approximate well using rational multiples of a full rotation, as you describe. This will require more than just a definition in terms of a periodic function such as cosine.
List of properties of the Syracuse sequence
The authoritative reference on the Collatz conjecture is the book The Ultimate Challenge: the 3x+1 problem, edited by Jeffrey C. Lagarias, the leading expert in the subject. There is also the web page On The 3x + 1 Problem maintained by Eric Roosendaal.
Continuous Invertible Function Satisfying $ f ( x ) + f ^ { - 1 } ( x ) = x $
We can write that $$ x = f(f^{-1}(x)) = f(x - f(x)). $$ Notice that the left hand side can assume any value on $\mathbb{R}$, so the image of $f$ is $\mathbb{R}$. Also, since $f$ is invertible it must be injective. In particular, there is a point $x_0$ such that $f(x_0) = 0$. Substituting in the previous equation, we conclude that $x_0 = f(x_0 - 0) = 0$. We have then $f(0) = 0$. We claim that $0$ is the only fixed point of $f$. Indeed, let $x_1$ be some fixed point of the function. Then $x_1 = f(x_1 - f(x_1)) = f(0) = 0$. Since $f$ is continuous and injective, it is monotone, as a consequence of the intermediate value theorem. We claim that $f$ cannot be increasing. Indeed, assume that $f$ is increasing. Then, $f^{-1}$ is also increasing. Then, for $x > 0$, since $f(x) \neq x$ we have two possible cases: $0 < x < f(x)$, implying that $0 < f^{-1}(x) < x$ and thus $x < f(x) < f(x) + f^{-1}(x) = x$, contradiction; or $0 < f(x) < x$, implying that $0 < x < f^{-1}(x)$ and thus $x < f^{-1}(x) < f^{-1}(x) + f(x) = x$, contradiction. We claim that $f$ cannot be decreasing. If it were, then for $x > 0$ we would have $f(x) < 0 < x$ and applying the also decreasing function $f^{-1}$, one would get $x > 0 > f^{-1}(x)$. Thus, we conclude that $x = f(x) + f^{-1}(x) < 0 < x$, contradiction. Since $f$ cannot be decreasing nor increasing, we conclude that there is no $f$ satisfying the equation.
Why does the solution set for this inequality turn out this way?
Up until this point $$(x-7)(x+1)\ge 0$$ your reasoning is correct. But now look at the expression. It is a product of two numbers. Ask yourself: when is a product of two real numbers nonnegative? The product is nonnegative only if both factors have the same sign (or one of them is zero). Hence, A) both expressions are nonnegative: $x-7\geq 0$ and $x+1\geq0$; or B) both expressions are nonpositive: $x-7\leq 0$ and $x+1\leq0$. Now, look at A): $x\geq7$ and $x\geq-1$. Both conditions are satisfied only if $x\geq7$. Now try to do the same for B).
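A quick Python check of the case analysis: case A gives $x\ge 7$, case B gives $x\le -1$, and nothing in between qualifies.

```python
def satisfies(x):
    """The inequality after factoring: (x - 7)(x + 1) >= 0."""
    return (x - 7) * (x + 1) >= 0

# integers in [-10, 10] satisfying the inequality
solution = [x for x in range(-10, 11) if satisfies(x)]
```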
Is this 7-th degree polynomial solvable?
Since you mention Galois theory in the question, I presume you are asking whether $$ f(x) = x^7 - \frac{1}{2} x^6 - \frac{3}{2} $$ is solvable in radicals. If we factor the polynomial modulo various primes (not dividing the discriminant), and look at the degrees of the resulting irreducible factors, we can determine which cycle types must appear in the Galois group over $ \mathbb{Q} $. This is a theorem of Dedekind. More formally: let $ f(x) \in \mathbb{Q}[x] $ be an irreducible polynomial of degree $ n = \deg(f) $. Suppose $ p $ is a prime not dividing $ \operatorname{disc}(f) $, and that modulo $ p $ the polynomial $ f(x) $ factors as $$ f(x) \equiv h_1(x) \cdots h_k(x) \pmod{p} $$ where the $ h_i(x) $ are monic irreducibles modulo $ p $, with degrees $ d_i = \deg(h_i) $. Then there is an element in the Galois group $ G < S_n $ of $ f $ over $ \mathbb{Q} $ which has cycle type $ (d_1 \, \cdots \, d_k) $. Conrad has a short note about this, and about using it to recognise the Galois groups $ S_n $ and $ A_n $. For this example, if we reduce modulo $ p = 1019 $, the polynomial $ f(x) = x^7 - \frac{1}{2} x^6 - \frac{3}{2} $ factors as $$ f(x) \equiv (53 + x) (348 + x) (542 + x) (961 + x) (1014 + x) (20 + 648 x + x^2) \pmod{1019} \, . $$ This means the Galois group of $ f(x) $ over $ \mathbb{Q} $ must contain a cycle of type $ (1,1,1,1,1,2) $, i.e. a transposition. From user21820's answer, we also know that it must contain a 7-cycle (or get this by factoring modulo 5). [Edit: Actually, rather than reducing mod $ p = 1019 $, a better approach is to reduce modulo $ p = 23 $. We find that $$ f(x) \equiv (3 + x) (18 + x) (12 + 22 x + x^2) (14 + 20 x + 14 x^2 + x^3) \pmod{23} $$ This shows that the Galois group of $ f(x) $ over $ \mathbb{Q} $ contains a cycle of type $ (1,1,2,3) $. Taking the third power of this element gives a cycle of type $ (1,1,1,1,1,2) $, so we can continue as above.] By Theorem 2.1 in Conrad's note, a 7-cycle and a transposition generate all of $ S_7 $.
So we conclude the Galois group of $ f(x) $ over $ \mathbb{Q} $ is $ S_7 $. And since $ S_7 $ is not a solvable group, the polynomial $ f(x) $ is not solvable in radicals.
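One can verify the quoted factorisation modulo $23$ with a few lines of Python. Clearing denominators, $2f(x) = 2x^7 - x^6 - 3$; multiplying the leading coefficient $2$ against the monic irreducible factors quoted above should reproduce it mod $23$ (coefficient lists are lowest degree first):

```python
def polymul_mod(p, q, m):
    """Multiply two coefficient lists (lowest degree first) modulo m."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] = (out[i + j] + a * b) % m
    return out

# leading coefficient 2, then the factors (x+3)(x+18)(x^2+22x+12)(x^3+14x^2+20x+14)
factors = [[2], [3, 1], [18, 1], [12, 22, 1], [14, 20, 14, 1]]
prod = [1]
for f in factors:
    prod = polymul_mod(prod, f, 23)

# 2x^7 - x^6 - 3 reduced mod 23, lowest degree first
target = [(-3) % 23, 0, 0, 0, 0, 0, (-1) % 23, 2]
```

The factor degrees $1,1,2,3$ are exactly the cycle type $(1,1,2,3)$ used in the argument.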
Propositional Logic Puzzle
Presumably the "we" in question refers to the Blue creature and its interlocutor (i.e., you), as opposed to Blue and Red. Assuming you are human and not a cannibal, then this suggests that Blue is Dragos. In some languages, there are different words for "we including you", "we not including you", etc., whereas English has no such distinction and relies on context. This kind of problem is really a language interpretation problem, not so much a mathematics problem. It might be more appropriate to ask it at puzzling.stackexchange.com.
$\mathbb{S}^n$ without two points
As per the comments, $S^n\setminus\{a\}$ is homeomorphic to $\mathbb{R}^n$. Now it suffices to see that $\mathbb{R}^n\setminus\{0\}$ (strongly) deformation retracts onto $S^{n-1}$ (if you know that, you just have to move things around a bit to get $b$ to align with $0$ and $S^{n-1}$ to align with the image of the equator). For this, consider $r:x\mapsto \frac{x}{\|x\|}$, which is indeed a retraction. Moreover, define $H(x,t)= r(x)t+(1-t)x$. Clearly it is continuous, it is easy to check that it lands in $\mathbb{R}^n\setminus\{0\}$ (and not just $\mathbb{R}^n$), and it is clearly a homotopy between $\mathrm{id}$ and $r$ (I should really write $i\circ r$, where $i$ is the inclusion). Moreover (although it is not necessary to get a deformation retract, which we already have) you can check that $H(\cdot,t)$ restricted to $S^{n-1}$ is the identity for every $t$.
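A quick numerical illustration of $H$ in the plane (so $n=2$): $H(\cdot,0)$ is the identity, $H(\cdot,1)$ is the radial retraction $r$, and $H(x,t)$ never passes through the origin along the way.

```python
import math

def H(x, t):
    """H(x, t) = t * x/|x| + (1 - t) * x, for x in R^2 minus the origin."""
    nx = math.hypot(x[0], x[1])
    return tuple(t * c / nx + (1 - t) * c for c in x)
```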
polynomial of degree n and its divisor
HINT: Let $\omega $ be a primitive complex cube root of unity. We need $f(\omega )=0$, that is, $$(-1)^n \omega^{2n}+\omega^n+1=0.$$ First, suppose $n\equiv 1\pmod{3}$; then this reads $(-1)^n\omega^2+\omega +1=0$, and since $\omega+1=-\omega^2$ this says $(-1)^n\omega^2=\omega^2$, hence $n$ is even. Similarly $\omega^2$ is also a root of $f$ when $n$ is even. Hence $p\mid f$ when $n$ is even and $n\equiv 1\pmod{3}$. Similarly check the cases $n\equiv 0,2\pmod{3} $.
Question about when a projection from a fiber product is open
Reminder: if we have a map of rings $\varphi:R\to S$, the corresponding map on spectra $\operatorname{Spec} S\to \operatorname{Spec} R$ is determined by sending the prime ideal $\mathfrak{p}\subset S$ to the prime ideal $\varphi^{-1}(\mathfrak{p})\subset R$. Saying the ring homomorphism factors as $B\to A'\otimes_kB\to A\otimes_k B$ is the same as saying that the map on spectra factors as $\operatorname{Spec} A\otimes_k B\to \operatorname{Spec} A'\otimes_k B \to \operatorname{Spec} B$. So for each prime ideal $\mathfrak{p}\subset A\otimes_k B$ with preimage $\mathfrak{q}\subset B$, its preimage $\mathfrak{p}'$ inside $A'\otimes_k B$ is a prime ideal which has preimage $\mathfrak{q}\subset B$ as well. This shows that the image of $D(f)\subset \operatorname{Spec} A\otimes_k B$ in $\operatorname{Spec} B$ is contained inside the image of $D(f) \subset \operatorname{Spec} A'\otimes_k B$ in $\operatorname{Spec} B$. Now we want to show that the image of the second $D(f)$ can't be any bigger. Pick a prime ideal $\mathfrak{q}\subset B$ in the image of the second $D(f)$. This means that $(A'\otimes_k B)_f\otimes_B \kappa(\mathfrak{q})=(A'\otimes_k\kappa(\mathfrak{q}))_f$ is nonzero, so $A'\otimes_k\kappa(\mathfrak{q})$ is nonzero and the image of $f$ in it isn't nilpotent. By injectivity of $A'\to A$, we get that $A'\otimes_k\kappa(\mathfrak{q})\to A\otimes_k\kappa(\mathfrak{q})$ is also injective, so $(A\otimes_k B)\otimes_B \kappa(\mathfrak{q})=A\otimes_k\kappa(\mathfrak{q})$ is also not the zero ring, nor is the image of $f$ in it nilpotent. So there is some point in $\operatorname{Spec} A\otimes_kB$ which maps to $\mathfrak{q}\in\operatorname{Spec} B$, and we're done. The images of $D(f)$ under the two maps are the same.
How to solve linear, second order ODE with Frobenius method with a difficult recurrence relation?
For every $c$, $$n+c=\frac{\Gamma(n+c+1)}{\Gamma(n+c)},$$ hence the recursion you arrived at can be rewritten as $$a_{n+1}=\frac{-a_n}{4(n+r+1)(n+r+\frac12)}=\frac{(-1)^n4^n\Gamma(n+r+1)\Gamma(n+r+\frac12)}{(-1)^{n+1}4^{n+1}\Gamma(n+r+2)\Gamma(n+r+\frac32)}a_n,$$ which immediately leads to $$ a_n=\frac{(-1)^n}{4^n}\frac{\Gamma(r+1)\Gamma(r+\frac12)}{\Gamma(n+r+1)\Gamma(n+r+\frac12)}a_0.$$ A basis of the space of solutions is given by these series when $r=0$ and when $r=\frac12$, that is, one can choose $\{y_0,y_{1/2}\}$, with $$y_r(x)\propto x^r\sum_{n\geqslant0}\frac{(-1)^nx^n}{4^n\Gamma(n+r+1)\Gamma(n+r+\frac12)}.$$ Legendre duplication formula reads$$4^{n+r}\,\Gamma(n+r+\tfrac12)\Gamma(n+r+1)=\sqrt{\pi}\,\Gamma(2n+2r+1),$$ which leads to a more familiar formulation of the basis of the space of solutions, namely, $$y_0(x)=\sum_{n\geqslant0}\frac{(-1)^nx^n}{(2n)!},\qquad y_{1/2}(x)=\sqrt{x}\sum_{n\geqslant0}\frac{(-1)^nx^n}{(2n+1)!},$$ also known as $$y_0(x)=\cos\sqrt{x},\qquad y_{1/2}(x)=\sin\sqrt{x}.$$
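One can sanity-check the closed forms numerically; here is a short Python sketch summing the series for $y_0$ and comparing against $\cos\sqrt x$:

```python
import math

def y0(x, terms=40):
    """Partial sum of sum_{n>=0} (-1)^n x^n / (2n)!, which should equal cos(sqrt(x))."""
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term
        # ratio of consecutive terms: (-x) / ((2n+1)(2n+2))
        term *= -x / ((2 * n + 1) * (2 * n + 2))
    return s
```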
Isometry group of $\Bbb RP^3$
John's argument establishes that the collection of orientation preserving isometries of $\mathbb{R}P^3$ has the structure of an $SO(3)$ bundle over $SO(3)$. But, as mentioned in the comments, it does not immediately follow that such a bundle is Lie isomorphic to $SO(3)\times SO(3)$. However, I claim that this is, nonetheless, true. That is, if $G$ is a Lie group diffeomorphic to the total space of some $SO(3)$ bundle over $SO(3)$, then $G$ must, in fact, be isomorphic as a Lie group to $SO(3)\times SO(3)$. To see this, begin with the bundle $SO(3)\rightarrow G\rightarrow SO(3)$; the long exact sequence of homotopy groups, together with the fact that $\pi_2 = 0$ for any Lie group, gives an exact sequence $$0\rightarrow \pi_1(SO(3))\rightarrow \pi_1(G)\rightarrow \pi_1(SO(3))\rightarrow 0.$$ From this we conclude that either $\pi_1(G) \cong C_4$ or $C_2\times C_2$ (with $C_k$ being the cyclic group of order $k$). In either case, the fundamental group has order $4$. Now, the universal cover $\tilde{G}$ of $G$ is a compact simply connected Lie group of dimension $6$. Simply connected compact Lie groups are completely classified: they are all products of simply connected simple Lie groups, which are themselves classified. The only simply connected compact simple Lie group of dimension at most $6$ is $SU(2)$. It follows that $\tilde{G}$ is isomorphic as a Lie group to $SU(2)\times SU(2)$. Now, every Lie covering, including $\tilde{G}\rightarrow G$, is given by modding out $\tilde{G}$ by a subgroup of its center. Since $Z(SU(2)) = C_2$ (generated by $\pm I$), $Z(\tilde{G}) = C_2\times C_2$. Since $\pi_1(G)$ has order $4$, we must therefore conclude that $G\cong \tilde{G}/Z(\tilde{G}) = SU(2)/\{\pm I\} \times SU(2)/\{\pm I\}\cong SO(3)\times SO(3).$
Convergence in Probability and in Quadratic Mean for a sequence of random variables
For convergence in m.s. check that $E[|X_n - 0|^2] = n^2\cdot\frac{1}{n^2} + \frac{1}{n^2}\left(1 - \frac{1}{n^2}\right)$, which tends to $1$ as $n \rightarrow \infty$. Therefore, it does not converge to $0$ in m.s. For convergence in probability, note that for any $\epsilon > 0$ and $n > \frac{1}{\epsilon}$, $Pr(|X_n - 0| > \epsilon) = \frac{1}{n^2}$. This goes to $0$ as $n \rightarrow \infty$. Therefore $X_n \rightarrow 0$ in probability.
Can we take a derivative with respect to $y$?
We can differentiate $y=f(x)$ with respect to $y$ if we also know that $x$ is a function of $y$, in which case we have $$ 1=f'(x) \frac {dx}{dy}$$ For example, for $y=\tan (x)$, we have $1=( 1+\tan ^2 x) \frac {dx}{dy}$, which implies $\frac {dx}{dy} =\frac {1}{1+y^2}$. That is $$\frac {d}{dy} \tan ^{-1} (y)= \frac {1}{1+y^2}$$
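A numerical check of the worked example, using a central difference to approximate $dx/dy$ for $x=\tan^{-1}(y)$:

```python
import math

def dxdy(y, h=1e-6):
    """Central-difference derivative of x = arctan(y), the inverse of y = tan(x)."""
    return (math.atan(y + h) - math.atan(y - h)) / (2.0 * h)
```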
can not find root of the equation
You can get it by solving a constrained optimization problem. You want the point $(u,v)$ that minimizes $f(u,v)=(u-x)^2+(v-y)^2$ subject to the constraint $\frac{u^2}{a^2} + \frac{v^2}{b^2} = 1$. The point $(x,y)$ doesn't need to be inside the ellipse. The Lagrangian is given by $$ L(u,v,\lambda) = (u-x)^2+(v-y)^2-\lambda\left(\frac{u^2}{a^2} + \frac{v^2}{b^2}-1\right) $$ Now you just need to compute the critical points of $L$ and choose the ones that yield the smallest distance to $(x,y)$. You can solve this by hand but, out of laziness, I give you Wolfram's solution for the two critical points. WARNING: Wolfram is not giving the full set of solutions. After computing $u,v$ in terms of $\lambda$ from the first two equations of the system $\nabla L = 0$ and substituting them into the last equation, we get a fourth degree polynomial equation in $\lambda$. The solution must be computed in an alternative way if this polynomial has four real roots. The two critical points are $$ \left( -\frac{a \sqrt{b^2-y^2}}{b}, \frac{-a^2 b^4 y+a^2 b^2 y^3+a b^3 x y \sqrt{b^2-y^2}+b^6 y-b^4 y^3}{a^4 b^2-a^4 y^2-2 a^2 b^4-a^2 b^2 x^2+2 a^2 b^2 y^2+b^6-b^4 y^2}\right) $$ and $$ \left( \frac{a \sqrt{b^2-y^2}}{b}, \frac{-a^2 b^4 y+a^2 b^2 y^3-a b^3 x y \sqrt{b^2-y^2}+b^6 y-b^4 y^3}{a^4 b^2-a^4 y^2-2 a^2 b^4-a^2 b^2 x^2+2 a^2 b^2 y^2+b^6-b^4 y^2}\right) $$ One will correspond to the minimum distance and the other will correspond to the maximum distance. This is not relevant to your question, but the Lagrange multipliers are $$ \frac{a^2 b^2-a^2 y^2+a b x \sqrt{b^2-y^2}}{b^2-y^2} $$ and $$ \frac{a^2 b^2-a^2 y^2-a b x \sqrt{b^2-y^2}}{b^2-y^2}, $$ respectively.
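Here is a Python/NumPy sketch of this approach (all names are mine, for illustration): stationarity gives $u=a^2x/(a^2-\lambda)$, $v=b^2y/(b^2-\lambda)$, the constraint turns this into a quartic in $\lambda$, and we take the candidate among all its real roots closest to $(x,y)$. The degenerate roots $\lambda=a^2$ or $\lambda=b^2$, which appear when $(x,y)$ lies on a symmetry axis, need separate handling; this is one way the root structure can be richer than the two Wolfram solutions.

```python
import numpy as np

def closest_point_on_ellipse(a, b, x, y, tol=1e-5):
    """Point (u, v) on u^2/a^2 + v^2/b^2 = 1 nearest to (x, y).
    Assumes a != b and (x, y) != (0, 0); a sketch, not production code."""
    # constraint + stationarity give the quartic
    # (a^2-lam)^2 (b^2-lam)^2 - a^2 x^2 (b^2-lam)^2 - b^2 y^2 (a^2-lam)^2 = 0
    pa = np.polymul([-1.0, a * a], [-1.0, a * a])        # (a^2 - lam)^2
    pb = np.polymul([-1.0, b * b], [-1.0, b * b])        # (b^2 - lam)^2
    quartic = np.polysub(np.polymul(pa, pb),
                         np.polyadd(a * a * x * x * pb, b * b * y * y * pa))
    candidates = []
    for lam in np.roots(quartic):
        if abs(lam.imag) > tol:
            continue
        lam = lam.real
        if abs(lam - b * b) < tol:        # degenerate root (occurs when y = 0)
            u = a * a * x / (a * a - b * b)
            if abs(u) <= a:
                v = b * np.sqrt(1.0 - (u / a) ** 2)
                candidates += [(u, v), (u, -v)]
        elif abs(lam - a * a) < tol:      # degenerate root (occurs when x = 0)
            v = b * b * y / (b * b - a * a)
            if abs(v) <= b:
                u = a * np.sqrt(1.0 - (v / b) ** 2)
                candidates += [(u, v), (-u, v)]
        else:
            candidates.append((a * a * x / (a * a - lam),
                               b * b * y / (b * b - lam)))
    return min(candidates, key=lambda p: np.hypot(p[0] - x, p[1] - y))
```

For the point $(1/2,0)$ with $a=2$, $b=1$ the nearest point actually comes from the degenerate root $\lambda=b^2$, which is exactly the situation the WARNING is about.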
How to find the area vector of a shape?
For a flat surface with area $A$, one can define its corresponding area vector as $\mathbf A = A\mathbf n$ where $A$ is the total area of the surface, and $\mathbf n$ is a normal unit vector to the surface. If you have two surfaces joined together as in your example, then one can define area vectors $\mathbf A_1$ and $\mathbf A_2$ for each surface independently. However, since the area vectors of these surfaces will be orthogonal to one another, I don't personally know a reasonable physical interpretation of combining them into a single area vector, say by adding them. Of course you can certainly make whatever definition you want, but it wouldn't be a standard one as far as I am aware.
Let me know your opinion about simple formula to find $\begin{pmatrix} n \\ k \end{pmatrix}$
I often say to my students: there is a simple way to find the binomial coefficient, and it is: $$ \binom n k =\dfrac{\overbrace{n\cdot (n-1)\cdot (n-2)\cdots}^{\text{k terms, decreasing}}}{\underbrace{1\cdot 2\cdot 3\cdots k}_{\text{k terms, increasing}}}$$ Sure, but to be clearer try it like this: $$ \dbinom n k ~=~\dfrac{n^{\underline k}}{k^{\underline k}}~=~\dfrac{\overbrace{n\cdot (n-1)\cdot (n-2)\cdots (n-k+1)}^{\text{k terms, decreasing}}}{\underbrace{k\cdot (k-1)\cdot (k-2)\cdots 1}_{\text{k terms, decreasing}}\qquad\qquad}$$ where $n^{\underline k}$ is called the $k$-th falling factorial of $n$. This is excellent if you want to calculate by hand, and will give the students a practical feel for the numbers. However, modern personal computers (or phones, even) can easily handle most factorials that you'll want to use, so using the factorial function will be quicker (that is, unless your package has access to a binom[,] function, which probably uses the falling factorial to calculate it anyway). However, there is a reason to have the students think of the binomial coefficient as $\frac{n!}{k!(n-k)!}$: when used in formulas that multiply or divide binomial coefficients expressed like so, there is often a lot of convenient cancelling. Still, knowing the falling factorial expression will prove useful if you cover generalised binomial expansions, such as $$\dbinom{1/2}{5}=\dfrac{0.5\cdot (-0.5)\cdot(-1.5)\cdot(-2.5)\cdot(-3.5)}{5!}=\dfrac 7{256}$$ So knowing the falling-factorial form cannot hurt, and can help, but should be taught in addition to, rather than instead of, the factorial form.
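A direct transcription of the falling-factorial formula into Python, using exact rational arithmetic so the generalised case works too (a sketch, not a classroom-ready library):

```python
from fractions import Fraction

def binom(n, k):
    """n choose k via the falling factorial n(n-1)...(n-k+1) divided by k!.
    Works for any rational n, including generalised cases like n = 1/2."""
    result = Fraction(1)
    for i in range(k):
        result = result * (Fraction(n) - i) / (i + 1)   # multiply and divide as we go
    return result
```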
Help understand canonical isomorphism in localization (tensor products)
Yes, the map you construct is an isomorphism. It might be easiest to verify this by first using the canonical isomorphism $A_P\otimes_A M \cong M_P$, so that one then has the following simple chain of canonical isomorphisms: $$ M_P \otimes_{A_P} N_P \cong (A_P\otimes_A M)\otimes_{A_P} (A_P\otimes_A N) \qquad$$ $$\cong (A_P\otimes_A M) \otimes_A N \cong A_P\otimes_A (M\otimes_A N) \cong (M\otimes_A N)_P.$$
there is at most one point at distance exactly $1$ from three distinct points in the plane
Take any pair of the three points. The set of points that are equidistant from those two points forms a line: the perpendicular bisector of the segment joining those two points. Now use the other point and one of the first two points to form an equidistant line from those two points also. If these two lines intersect, they do so at only one point. This is the unique point that is equidistant from all three points. (It is also the circumcenter of the triangle created by those three points.) If the three points are collinear, the two lines formed will not intersect and there is no equidistant point. Note that the two equidistant set lines must be distinct as the original three points are distinct. The point that is equidistant from the three specified points may or may not be at distance $1$, of course. But there is at most one such point.
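The two perpendicular bisectors give a small linear system; here is a Python sketch (function name is mine) that returns the circumcenter, or None in the collinear case:

```python
def equidistant_point(p1, p2, p3):
    """Intersection of the perpendicular bisectors of p1p2 and p2p3
    (the circumcenter), or None when the three points are collinear."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # |P - A|^2 = |P - B|^2 expands to a linear equation in P = (x, y)
    a1, b1, c1 = x2 - x1, y2 - y1, (x2**2 - x1**2 + y2**2 - y1**2) / 2.0
    a2, b2, c2 = x3 - x2, y3 - y2, (x3**2 - x2**2 + y3**2 - y2**2) / 2.0
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None                # parallel bisectors: collinear points
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```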
Ring homomorphisms from $A[X]$
But $a_ix^i$ is not just a formal product, it is actually a product of two polynomials, viz., $$a_i= a_i +0x+0x^2+\cdots$$ and $$x^i=0+0x+\cdots+0x^{i-1}+1x^i+0x^{i+1}+\cdots.$$ So indeed for a ring homomorphism $f$ from $A[X]$, $$f(a_ix^i)=f(a_i)f(x^i).$$
natural logarithm of a complex number $\ln(a+bi)$
When you have $z^u$ it is convenient to have $z=a+ib=re^{i\theta}$ in polar form and keep $u=c+id$ in cartesian form. The polar formula is $\begin{cases}r=|z|=\sqrt{a^2+b^2}\\\theta=\operatorname{atan2}(b,a)\end{cases}$ You got it ok, but for the angle it is more convenient to use atan2, which deals with the proper quadrants for the angle (this function is defined on most systems that have a math library): https://fr.wikipedia.org/wiki/Atan2 We have $\ln(z)=\ln(r)+i\theta+2ik\pi\quad k\in\mathbb Z$, and it is important to keep this $k$ because the complex logarithm is multivalued (i.e. it has branches). It ensures that formulas like $z^{u}\times z^v=z^{u+v}$ and $(z^u)^v=z^{uv}\ $ stay true at least for some value of $k$. $\begin{align}z^u &=\exp\bigg(u\ln(z)\bigg)\\ &=\exp\bigg((c+id)\big(\ln(r)+i\theta+2ik\pi\big)\bigg)\\ &=\exp\bigg((c\ln(r)-d\theta-2kd\pi)+i(c\theta+d\ln(r)+2kc\pi)\bigg) \end{align}$ So we have the formula $\boxed{z^u\in\bigg\{z_{[k]}=z_{[0]}\ w^k\mid k\in\mathbb Z\bigg\}}$ where: $$\begin{cases}z_{[0]}&=r^c\ e^{-d\theta}\ \bigg(\cos\big(c\theta+d\ln(r)\big)+i\sin\big(c\theta+d\ln(r)\big)\bigg)\\\\w&=e^{-2d\pi}\ \bigg(\cos(2c\pi)+i\sin(2c\pi)\bigg)\end{cases}$$ Comparatively to your formula (i.e. $z_{[0]}$), you still have a complex multiplication remaining, while I separated real part and imaginary part, but this is basically the same. But I also have the multiplicative factor $w^k$ which corresponds to the branches of the complex log. The value $z_{[0]}$ obtained for $k=0$ is called the principal value. Note: the principal value is not necessarily the "simplest" one; it happens that some other $z_{[k]}$ is simpler (e.g. real) for a value of $k\neq 0$.
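A small Python sketch of the branch formula, with $\theta$ computed by `atan2` as recommended; `k` selects the branch, and `k = 0` (the principal value) matches Python's built-in `**`:

```python
import cmath
import math

def complex_power(z, u, k=0):
    """z**u on the k-th branch: exp(u * (ln r + i*(theta + 2*pi*k)))."""
    r = abs(z)
    theta = math.atan2(z.imag, z.real)   # atan2 picks the correct quadrant
    return cmath.exp(u * (math.log(r) + 1j * (theta + 2 * math.pi * k)))
```

Successive branches differ by exactly the factor $w=\exp(2\pi i\,u)$ from the boxed formula.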
Is there no solution to the blue-eyed islander puzzle?
Argument 1 is clearly wrong. Consider the island with only two blue-eyed people. The foreigner arrives and announces "how unusual it is to see another blue-eyed person like myself in this region of the world." The induction argument is now simple, and proceeds for only two steps; on the second day both islanders commit suicide. (I leave this as a crucial exercise for the reader.) Now, what did the foreigner tell the islanders that they did not already know? Say that the blue-eyed islanders are $A$ and $B$. Each already knows that there are blue-eyed islanders, so this is not what they have learned from the foreigner. Each knows that there are blue-eyed islanders, but neither one knows that the other knows this. But when $A$ hears the foreigner announce the existence of blue-eyed islanders, he gains new knowledge: he now knows that $B$ knows that there are blue-eyed islanders. This is new; $A$ did not know this before the announcement. The information learned by $B$ is the same, but mutatis mutandis. Analogously, in the case that there are three blue-eyed islanders, none learns from the foreigner that there are blue-eyed islanders; all three already knew this. And none learns from the foreigner that other islanders knew there were blue-eyed islanders; all three knew this as well. But each of the three does learn something new, namely that all the islanders now know that (all the islanders know that there are blue-eyed islanders). They did not know this before, and this new information makes the difference. Apply this process 100 times and you will understand what new knowledge was gained by the hundred blue-eyed islanders in the puzzle.
Use CLT to find the probability
Based on your calculations (I think they are correct) you can find the mean and variance of $\sum_{i=1}^{20}V_i$. Denoting these by $\mu$ and $\sigma^2$, the probability to be found is: $$P\left(20<\sum_{i=1}^{20}V_i<30\right)=P\left(\frac{20-\mu}{\sigma}<\frac{\sum_{i=1}^{20}V_i-\mu}{\sigma}<\frac{30-\mu}{\sigma}\right)\approx P\left(\frac{20-\mu}{\sigma}<U<\frac{30-\mu}{\sigma}\right)$$ where $U$ has a standard normal distribution.
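Once $\mu$ and $\sigma$ are computed, the normal approximation is one evaluation of the standard normal CDF; a Python sketch (the values $\mu=25$, $\sigma=5$ are made up for illustration, since the distribution of the $V_i$ is not given here):

```python
import math

def clt_prob(lo, hi, mu, sigma):
    """P(lo < S < hi) for S approximately N(mu, sigma^2), via the normal CDF."""
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return Phi((hi - mu) / sigma) - Phi((lo - mu) / sigma)

# hypothetical mu = 25, sigma = 5: P(20 < S < 30) = Phi(1) - Phi(-1)
p = clt_prob(20.0, 30.0, 25.0, 5.0)
```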
Discretization of Unit Vector in 3D
Why don't you try this? Extend your unit vector $v$ to a ray from its origin $o$, $r=o + \alpha v$ where $\alpha \ge 0$. Now iterate over the $26$ neighboring cells, computing the closest approach of $r$ to the center of each cell. (I am assuming you know how to perform this computation. E.g., see Wikipedia's Distance from a point to a line.) Select the cell with the minimum distance to the ray.
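A Python sketch of this procedure (names are mine; I take the ray with $\alpha\ge 0$ from the cell's origin so that opposite directions map to opposite cells):

```python
import itertools
import math

def nearest_cell(v):
    """Offset in {-1,0,1}^3 minus the origin whose center lies closest
    to the ray {alpha * v : alpha >= 0} (v need not be a unit vector)."""
    norm = math.sqrt(sum(c * c for c in v))
    v = tuple(c / norm for c in v)
    best, best_d2 = None, float("inf")
    for cell in itertools.product((-1, 0, 1), repeat=3):
        if cell == (0, 0, 0):
            continue
        # foot of the perpendicular from the cell center, clamped to the ray
        alpha = max(sum(p * q for p, q in zip(cell, v)), 0.0)
        d2 = sum((p - alpha * q) ** 2 for p, q in zip(cell, v))
        if d2 < best_d2:
            best, best_d2 = cell, d2
    return best
```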
Terminology for the representation of a rational number using a tuple {integer,scaling-factor}
$N$ is known as the mantissa, while $D$ (or more accurately, $-D$) is the exponent.
Linear recurrence relation
There is a fixed point at $x = 80$: solving $x = (3/2)x - 40$ gives $x/2 = 40$, i.e. $x=80$. Now put $y_n = x_n - 80$ for any $n$. Then you have $$ y_{n+1} + 80 = (3/2)(y_n + 80) - 40$$ so $$ y_{n+1} = {3\over 2} y_n.$$ The equilibrium at $80$ is unstable and repels.
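A two-line numerical check, assuming the recurrence in question is $x_{n+1}=\tfrac32 x_n-40$:

```python
def step(x):
    return 1.5 * x - 40.0        # assumed recurrence x_{n+1} = (3/2) x_n - 40

fixed = 80.0                     # solves x = 1.5 x - 40
y = 1.0                          # small deviation from the fixed point
for _ in range(5):
    y = step(fixed + y) - fixed  # each step multiplies the deviation by 3/2
```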
Finding eigenvalues of a linear transformation
The transformation matrix of $T$ is $$ A=\pmatrix{1&\cdots&1\\\vdots&&\vdots\\1&\cdots&1}=\pmatrix{1\\\vdots\\1}\pmatrix{1&\cdots&1}=vv^T $$ Then the characteristic polynomial of $A$ is $$ p(\lambda)=|\lambda I-A|=|\lambda I-vv^T|=\lambda^n-\mathrm{Tr}(vv^T)\lambda^{n-1}=\lambda^n-n\lambda^{n-1}=\lambda^{n-1}(\lambda-n) $$ since $vv^T$ is a rank-$1$ matrix, so all its principal minors of size $2$ or larger vanish. Thus the eigenvalues of $A$ are $0$ with multiplicity $n-1$ and $n$ with multiplicity $1$.
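A quick check of both eigenvalues in plain Python for $n=5$ (no linear-algebra library needed): $v=(1,\dots,1)^T$ is an eigenvector for $n$, and the differences $e_i-e_{i+1}$ give $n-1$ independent eigenvectors for $0$.

```python
n = 5
A = [[1] * n for _ in range(n)]     # the matrix of T: every entry 1, i.e. vv^T

def matvec(M, x):
    return [sum(row[j] * x[j] for j in range(n)) for row in M]

v = [1] * n                         # eigenvector for the eigenvalue n
kernel_basis = []
for i in range(n - 1):              # e_i - e_{i+1}: eigenvectors for 0
    w = [0] * n
    w[i], w[i + 1] = 1, -1
    kernel_basis.append(w)
```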
Why is $\mathfrak{c}$ the cardinality of the lower limit topology on $\mathbb{R}$?
There are open sets in the lower limit topology that are not of the form $[a,b)$: the lower limit topology consists of the subsets of $\Bbb R$ that are unions of sets of the form $[a,b)$, so, for instance, every set of the form $(a,b)$ is open in the lower limit topology: $$(a,b)=\bigcup_{a<x<b}[x,b)\;.$$ Let $\tau$ be the lower limit topology on $\Bbb R$; there are several ways to see that $|\tau|\le\mathfrak{c}$. One is to observe that $\langle\Bbb R,\tau\rangle$ is hereditarily Lindelöf. Now let $U\in\tau$, and let $\mathscr{U}=\{[a,b):[a,b)\subseteq U\}$; clearly $\mathscr{U}$ is an open cover of $U$, so it has a countable subcover $\mathscr{U}_0$. Thus, every open set in the lower limit topology is the union of countably many sets of the form $[a,b)$. There are $\mathfrak{c}$ such sets, so there are $\mathfrak{c}^{\aleph_0}=(2^{\aleph_0})^{\aleph_0}=2^{\aleph_0\cdot\aleph_0}=2^{\aleph_0}=\mathfrak{c}$ countable families of them and hence at most $\mathfrak{c}$ members of $\tau$.
What algebraic manipulations of these sequences are needed to apply the squeeze theorem?
Rewrite your first term as $2^{\frac{1}{2n}}\cdot n^{\frac{1}{2n}}\left(\frac{5}{n}+1\right)^{1/(2n)}$ and rewrite your second term in the form $\left(\sqrt{4n^2+7}-2n\right)\frac{\sqrt{4n^2+7}+2n}{\sqrt{4n^2+7}+2n}$.
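A numerical sanity check, assuming the two sequences in question are $(2n+10)^{1/(2n)}$ (the product the first factorisation comes from) and $\sqrt{4n^2+7}-2n$; the first should tend to $1$ and the second to $0$:

```python
import math

def t1(n):
    """(2n+10)^(1/(2n)) = 2^(1/(2n)) * n^(1/(2n)) * (5/n + 1)^(1/(2n))."""
    return (2 * n + 10) ** (1.0 / (2 * n))

def t2(n):
    """sqrt(4n^2+7) - 2n = 7 / (sqrt(4n^2+7) + 2n) after rationalizing."""
    return 7.0 / (math.sqrt(4.0 * n * n + 7.0) + 2.0 * n)
```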
Is this proof correct? Show $\mathbb{Q}$ is dense in $\mathbb{R}$
The problem is, the proof is proceeding by contradiction. That is why it says "Then ${m_1+1\over N}>b$" -- because it's assumed there's no number of the form $m/N$ between $a$ and $b$. It then proves that this leads to a contradiction, just as you proved that in fact $(m_1+1)/N$ in your situation was actually $<b$ and not $>b$ as assumed. The contradiction is what establishes the theorem. This is a case where perhaps a proof by contradiction was not necessary.
Eigenvector of a square matrix whose rank is 1.
Yes, I agree with your considerations: indeed, since $A\vec x$ is a linear combination of the columns of $A$, we have that $$A\vec x=\lambda \vec x$$ where possibly $\lambda=0$.
Functions - Inverses of graphs.
You are right: the function defined by your equation is its own inverse. That is not uncommon, and I have my class deal with several similar functions in homework and a test. Some examples are $y=\frac 1x$ and $y=\sqrt{1-x^2},\ 0\le x\le 1$. The line of symmetry for all self-inverse functions is the line $y=x$. Your graph also has another line of symmetry, namely itself, but that will not hold for all self-inverse functions. The lines $y=x+c$ for constant $c$ are also lines of symmetry, but again only the one for $c=0$ is important for your question.
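The defining property of a self-inverse function is $f(f(x))=x$ on its domain; a quick numerical check for the two examples:

```python
import math

def check_self_inverse(f, xs, tol=1e-9):
    """A function is its own inverse exactly when f(f(x)) = x for all x tested."""
    return all(abs(f(f(x)) - x) <= tol for x in xs)
```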
Convergence of a Series involving $\cos$ and $\log$
Today this is a fashionable argument: see this question and this other one, too. We have that $\log(2\sin u)$ has a nice Fourier series: $$\log(2\sin u)=-\sum_{n=1}^{+\infty}\frac{\cos(2nu)}{n}\tag{1}$$ hence: $$\begin{eqnarray*} I_k &=& \int_{0}^{\pi}\int_{0}^{\pi}\cos(2k(x-y))\log\sin\left|\frac{x-y}{2}\right|\,dx\,dy\\ &=& 2\int_{0}^{\pi}\int_{0}^{x}\cos(2k(x-y))\log\sin\frac{x-y}{2}\,dy\,dx\\&=&2\int_{0}^{\pi}\int_{0}^{1}x\cos(2k(x-xt))\log\sin\frac{x-xt}{2}\,dt\,dx\\&=&2\int_{0}^{\pi}\int_{0}^{1}x\cos(2kxz)\log\left(2\sin\frac{xz}{2}\right)\,dz\,dx\\&=&-2\sum_{n=1}^{+\infty}\frac{1}{n}\int_{0}^{\pi}\int_{0}^{1}x\cos(2kxz)\cos(nxz)\,dz\,dx\end{eqnarray*}$$ Now the last double integral equals $\frac{\pi^2}{4}$ if $n=2k$, $$4\frac{n^2+4k^2}{(n^2-4k^2)^2}$$ if $n$ is odd, and zero otherwise, hence the contribution given by $n=2k$ makes the original double integral not summable over $k$: $$\sum_{k=1}^{K}\int_{0}^{\pi}\int_{0}^{\pi}\cos(2kx)\log\sin\left|\frac{x-y}{2}\right|\,dx\,dy< -\frac{\pi^2}{4}\sum_{k=1}^{K}\frac{1}{k}<-\frac{\pi^2}{4}\log K.$$
How can we express "induction is the same as recursion", formally?
In a type system with propositions-as-types, such as Homotopy Type Theory (I will write what follows with that in mind) it is not something you would prove, but something that is built in. Indeed in Homotopy Type Theory, the type of natural numbers, $\mathbb N$, is given ("freely") by $\mathbb N: \mathcal U, 0: \mathbb N, \mathrm{succ}: \mathbb{N\to N}$ and the following "recursion" principle (I'm using quotes here, because it is as relevant for recursion as it is for induction and since your question is specifically about the distinction between those two, one has to be careful): given a family of types $C: \mathbb N\to \mathcal U$, $p:C(0)$ and $f: \displaystyle\prod_{n:\mathbb N}(C(n)\to C(\mathrm{succ}(n)))$, we have $rec(C,p,f): \displaystyle\prod_{n:\mathbb N} C(n)$, together with the following "rules" (defining equations): $rec(C,p,f)(0) \equiv p$, $f(n)(rec(C,p,f)(n)) \equiv rec(C,p,f)(\mathrm{succ}(n))$. Now say you have a type $X$, $p:X$ and a map $h:\mathbb N\to X\to X$ and want to define a map $H:\mathbb N\to X$ such that $H(0)\equiv p$ and $H(\mathrm{succ}(n)) \equiv h(n)(H(n))$; then this recursion principle allows you to do that, by putting $C(n):\equiv X$ and $f(n):\equiv h(n)$, so it recovers what you seem to call recursion. Now to recover induction, say you have a proposition $P(n)$ that you want to prove by induction. But in this setting, $P(n)$ is a type, and proving $P(n)$ is finding $p:P(n)$, so all in all you want an element of $\displaystyle\prod_{n:\mathbb N}P(n)$. Now if you have $p:P(0)$ (read "I have a proof of $P(0)$") and $f: \displaystyle\prod_{n:\mathbb N}(P(n)\to P(\mathrm{succ}(n)))$ (read "I have a proof of $\forall n, P(n)\implies P(\mathrm{succ}(n))$"), then the induction principle gives you $q: \displaystyle\prod_{n:\mathbb N}P(n)$, that is, a proof of "$\forall n, P(n)$". So in this context the two principles aren't just "formally similar" or "isomorphic", they're literally the same. This can be seen category-theoretically as well.
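For readers more comfortable with code, here is a minimal Python sketch of the recursor (Python is of course not dependently typed, so the family $C$ stays implicit, and the function names are mine):

```python
def rec(p, f):
    """Recursor for N: rec(p, f)(0) = p and
    rec(p, f)(succ(n)) = f(n)(rec(p, f)(n))."""
    def h(n):
        acc = p
        for k in range(n):
            acc = f(k)(acc)
        return acc
    return h

# "Recursion": take C(n) := int for every n and define the factorial.
factorial = rec(1, lambda n: lambda acc: (n + 1) * acc)

# "Induction": take C(n) := evidence for "sum_{k<=n} k = n(n+1)/2";
# evidence is modelled here, crudely, as a checked boolean.
gauss = rec(0 == 0,
            lambda n: lambda prev: prev
            and sum(range(n + 2)) == (n + 1) * (n + 2) // 2)

assert factorial(5) == 120
assert gauss(10) is True
```

The point the sketch makes is that both uses go through the single function `rec`; only the (implicit) choice of `C` differs.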
Say you have for simplicity a topos $T$ with a natural numbers object $N$, that is, $N$ has a global element $0: 1\to N$ and a map $s: N\to N$ which together have the obvious universal property. This universal property is literally what encodes recursion, but induction can be found from it too. Indeed say you have a property $P$ on $N$, which really means (category-theoretically) a map $N\to \Omega$ ($\Omega$ being the subobject classifier). Then saying that "$P(0)$ holds" means essentially "the composite $1\overset{0}{\to} N \overset{P}{\to} \Omega$ is the $\mathrm{true}$ morphism". Saying "for all $n$, $P(n)\implies P(s(n))$" can be interpreted as "the map $\{n:P\}\to N\overset{s}{\to} N$ factors through $\{n:P\}$" where $\{n:P\}\to N$ is the pullback of $\mathrm{true}: 1\to \Omega$ along $P:N\to \Omega$ (the subobject of $N$ defined by $P$). This means that we have a commutative square $\require{AMScd} \begin{CD}\{n:P\} @>>> \{n: P\} \\ @VVV @VVV \\ N @>{s}>> N \end{CD}$ which can be extended with a triangle with $1\to N$ and $1\to \{n:P\}$. But now by the NNO property of $N$, this map $\{n:P\}\to N$ has a section (it is a section because the map to $N$ is unique): however it was a monomorphism, therefore it having a section implies that it is an isomorphism, so $\{n:P\} = N$ and $N\overset{P}{\to} \Omega = N\overset{\mathrm{true}_N}\to \Omega$, in other words "for all $n:N, P(n)$", so we get back the induction principle as an easy corollary of the NNO-property, which is what defines recursion.
I don't know enough about topoi and specifically their internal language, but it is easy to see how the recursion principle can be found from the induction principle even set-theoretically, and I don't think one needs the law of excluded middle (although one may need some more assumptions on $N$, like that $0$ is not in the image of $\mathrm{succ}$, which doesn't follow from NNO-ness; these conditions would be needed to make sure, for instance, that $N$ is decidable even without the LEM), so there should be a way to do that internally to a topos, but someone with more knowledge should say something about that. If that's true (it definitely is in $\mathbf{Set}$ with LEM), then this shows that (in a topos) the two principles are again similar. Unfortunately what I did was only give systems where the two principles are similar or can be derived from one another or are the same; I didn't answer the question of "how to state that they are somewhat isomorphic", so you'll have to tell me if that's fine by you!
Notation for Expressing Sets of Functions
In general, if $A$ and $B$ are any sets, $A^B$ denotes the set of all functions from $B$ to $A$. Here, $B^{\{1,2,...,n\}}$ denotes the set of all functions from $\{1,2,...,n\}$ to $B$. This shouldn't be too weird; a sequence of length $n$ of elements from $B$ is essentially a way to assign an element of $B$ to each $k$ such that $1\leq k \leq n$. This notation is used because, in the finite case, if $A$ has $n$ elements and $B$ has $m$ elements, then there are $n^m$ functions from $B$ to $A$. Since for each of the $m$ elements of $B$ we have $n$ choices of what to define as the image of that element, there are $n^m$ total ways to "build" a function from $B$ to $A$.
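To make the counting concrete, here is a small Python enumeration (the particular sets are illustrative sample values):

```python
from itertools import product

def all_functions(domain, codomain):
    """Enumerate every function domain -> codomain, each one
    represented as a dict assigning a codomain element to each
    domain element."""
    domain = list(domain)
    for images in product(codomain, repeat=len(domain)):
        yield dict(zip(domain, images))

A = ['x', 'y', 'z']                   # n = 3 elements
B = [1, 2]                            # m = 2 elements
fns = list(all_functions(B, A))       # the set A^B of functions B -> A
assert len(fns) == len(A) ** len(B)   # n^m = 3^2 = 9 functions
```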
Kernel of the tangent map
As Wikipedia says: "The inverse function theorem (and the implicit function theorem) can be seen as a special case of the constant rank theorem, which states that a smooth map with locally constant rank near a point can be put in a particular normal form near that point." See http://en.wikipedia.org/wiki/Derivative_rule_for_inverses#Constant_rank_theorem The Constant Rank Theorem is stated as Theorem (7.1) p. 47 of An Introduction to Differentiable Manifolds and Riemannian Geometry, Revised Second Edition, William M. Boothby, Academic Press. (This is the reference given by Wikipedia.) Here is, for the reader's convenience, a statement of the Constant Rank Theorem. Let $k,n$ and $r$ be positive integers, let $a$ be in $\mathbb R^n$, let $b$ be in $\mathbb R^k$, let $f$ be a smooth map from a neighborhood of $a$ to $\mathbb R^k$ sending $a$ to $b$, and let $\ell$ be the linear map from $\mathbb R^n$ to $\mathbb R^k$ sending $x$ to $(x_1,\dots,x_r,0,\dots,0)$. Assume that the rank of the tangent map to $f$ at $x$ is equal to $r$ for all $x$ in our neighborhood of $a$. Then there is a diffeomorphism $g$ from a neighborhood of $a$ to a neighborhood of 0 in $\mathbb R^n$, and a diffeomorphism $h$ from a neighborhood of 0 in $\mathbb R^k$ to a neighborhood of $b$, such that the equality $f=h\circ\ell\circ g$ holds in some neighborhood of $a$. EDIT OF MARCH 19, 2011 Here is a statement and a proof of the Constant Rank Theorem. Theorem. Let $U$ be open in $\mathbb{R}^n$, let $a$ be a point in $U$, and let $f$ be $C^p$ map ($1\le p\le\infty$) of rank $r$ from $U$ to $\mathbb{R}^k$. Then there are open sets $U_1,U_2\subset\mathbb{R}^n$, $U_3\subset\mathbb{R}^k$ and $C^p$ diffeomorphisms $\varphi:U_1\to U_2$, $\psi:U_3\to U_3$ such that $a\in U_1$ and $(\psi\circ f\circ\varphi^{-1})(x)=(x_1,\dots,x_r,0,\dots,0)$ for all $x$ in $U_2$. Proof. 
For $$x\in\mathbb{R}^r,\quad y\in\mathbb{R}^{n-r},\quad(x,y)\in U$$ write $$f(x,y)=(f_1(x,y),f_2(x,y)),\quad f_1(x,y)\in\mathbb{R}^r,\quad f_2(x,y)\in\mathbb{R}^{k-r}.$$ We can assume that $\partial f_1(x,y)/\partial x$ is invertible for all $(x,y)\in U$. Define $$\varphi:U\to\mathbb{R}^n,\quad(x,y)\mapsto(f_1(x,y),y).$$ By the Inverse Function Theorem, there are open sets $$U_1\subset\mathbb{R}^n,\quad U_4\subset\mathbb{R}^r,\quad U_5\subset\mathbb{R}^{n-r}$$ such that $a\in U_1\subset U$, $\varphi$ is a $C^p$ diffeomorphism from $U_1$ onto $U_2:=U_4\times U_5$, and $U_5$ is connected. Then $f(\varphi^{-1}(x,y))=(x,g(x,y))$ for some $C^p$ map $g$ from $U_2$ to $\mathbb{R}^{k-r}$. As $\partial g/\partial y=0$, we can write $g(x)$ for $g(x,y)$, and it suffices to set $U_3:=U_4\times\mathbb{R}^{k-r}$ and $\psi(u,v):=(u,v-g(u))$ for $ u\in U_4$ and $v\in\mathbb{R}^{k-r}.$
Using that $1 + z + z^{2} + ... + z^{n} = \frac{1-z^{n+1}}{1-z}$ and taking the real parts, prove that:
Hint: Factor out $\;\mathrm e^{\tfrac{(n+1)i\theta}2}$ in the numerator and $\;\mathrm e^{\tfrac{i\theta}2}$ in the denominator, and use Euler's formulæ.
A question about numbers with a certain property
Let $n$ be any nonnegative integer. Using the base $4$ expansion of $n$, we have $$n=\sum_{i=0}^k d_i\cdot 4^i,\text{ where each $d_i$ is one of $0,1,2$, or $3$. }$$ Now define $a_i=d_i\% 2$ (here $\%$ is the mod (remainder) operator), and define $b_i=\lfloor\frac{d_i}{2}\rfloor$. Note: (1) For all $i$, $a_i+2b_i=d_i$; and (2) For all $i$, $a_i$ and $b_i$ are each either $0$ or $1$. Define $a=\displaystyle\sum_{i=0}^k a_i\cdot 4^i$ and $b=\displaystyle\sum_{i=0}^k b_i\cdot 4^i$. We have that $n=a+2b$. Thus we may take $X$ to be the set of nonnegative integers that can be written as a sum of distinct powers of $4$. Note: this includes the empty sum, $0$. That is, $0\in X$. So $X=\{0,1,4,5,16,17,20,21,64,65,68,69,80,\dots\}$ For example, let's look at $n=147$. We have $147=3+0\cdot 4+1\cdot 4^2+2\cdot 4^3$. We take $a=1+4^2=17$ and $b=1+4^3=65$. And, as expected, $n=a+2b$.
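The digit-wise construction can be checked mechanically; a small Python sketch (function names are mine):

```python
def split_base4(n):
    """Write n = a + 2*b with a, b sums of distinct powers of 4,
    by splitting each base-4 digit d as d = (d % 2) + 2*(d // 2)."""
    a = b = 0
    p = 1
    while n:
        d = n % 4            # base-4 digit, one of 0, 1, 2, 3
        a += (d % 2) * p     # a_i = d_i mod 2
        b += (d // 2) * p    # b_i = floor(d_i / 2)
        n //= 4
        p *= 4
    return a, b

def in_X(k):
    """k is a sum of distinct powers of 4 iff every base-4 digit is 0 or 1."""
    while k:
        if k % 4 > 1:
            return False
        k //= 4
    return True

assert split_base4(147) == (17, 65)
assert all(a + 2 * b == n and in_X(a) and in_X(b)
           for n in range(10000)
           for a, b in [split_base4(n)])
```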
area of a convex quadrilateral
Here are two quadrilaterals with the specified sides: The areas are 261 for the brown quadrilateral, while the blue quadrilateral at 522 is twice as big. And there are many other possibilities.
Finding Truth Values Of Nested Quantifiers
You want to prove: $\exists x \forall y(x \geq y + 1)$, i.e., that there is an $x$ that is greater than or equal to the successor of every $y$. Assuming that $x,y$ range over nonnegative integers, then you can't prove it: Fact. There is no $x \in \mathbb N$ s.t. $x \geq y + 1$ for all $y \in \mathbb N$. Proof. We want to prove: $\lnot\exists x \forall y(x \geq y + 1) \equiv\forall x \exists y(x < y + 1)$. Let $\varphi(x) \equiv \exists y(x < y + 1)$. The property holds for $0$: let $y := 0$, obtaining $0 < 0 + 1 = 1$. Our inductive hypothesis is that $\varphi(n) \equiv \exists y(n < y + 1)$ is true; we want to derive $\varphi(n+1)$. Suppose, for contradiction, that $\varphi(n+1)$ is false, i.e., that $\lnot\exists z(n+1 < z + 1) \equiv \forall z(n + 1 \geq z + 1)$. Since successor is order-preserving, it'll suffice to reduce $\forall z(n \geq z)$ to absurdity. Instantiate that with $y + 1$ for $z$ to get: $n \geq y + 1$. That contradicts the inductive hypothesis $n < y + 1$, so $\varphi(n + 1)$ must be true. Since the property $\varphi(x)$ holds for $0$ and is closed under successor, it holds for all natural numbers. The fact is trivial, but as Mauro said, your approach is wrong, so take the proof above as an illustration of how you might go about actually proving properties of natural numbers by induction.
Number of irreducible divisors in a UFD
Hint: The shortest way uses induction on the number $m$ of prime factors of $b$: If $m=1$, as anyway $n\ge 1$, there's nothing to prove. Suppose the assertion is true for an element with $m$ prime factors and consider the case $b=q_1\,q_2\dotsm q_m\,q_{m+1}$. The last prime $q_{m+1}$ divides $b$, hence divides $a=p_1\,p_2\dots p_n$. By Euclid's lemma, $q_{m+1}$ is associated to one of the $p_i$s, say $p_n$. Set $a'=\frac a{p_n}$, $b'=\frac{b}{q_{m+1}}$ and check that $b'$ divides $a'$.
Integration of $\sqrt{\sec x}$.
From the trigonometric half-angle formula we have $\cos x=1-2\sin^2\left(\dfrac x2\right)$ and we get $$\int\sqrt{\sec x}\ dx=\int\dfrac{1}{\sqrt{\cos x}}\ dx=\int\dfrac{1}{\sqrt{1-2\sin^2\left(\dfrac x2\right)}}\ dx = 2F\bigg(\dfrac x2\bigg|2\bigg )+C$$ where $ F(\phi|m) $ is the incomplete elliptic integral of the first kind.
$f$ satisfies the Cauchy Functional Equation if and only if it is $\mathbb{Q}$-linear?
If $f$ is $\mathbb Q$-linear, then $f(x+y)=f(1\cdot x+1\cdot y)=1\cdot f(x)+1\cdot f(y)=f(x)+f(y)$. Conversely, if $f(x+y)=f(x)+f(y)$ for all $x,y$, then all we need do is show that $f(qx)=qf(x)$ for all rational $q$. First, the equation $f(0)=f(0+0)=f(0)+f(0)$ shows that $f(0)=0$. Second, $f(-x)+f(x)=f(0)=0$, so $f(-x)=-f(x)$ for all $x$. Third, mathematical induction on $n$ shows that $f(nx)=nf(x)$ for all natural numbers $n$. Fourth, $f(-mx)=-f(mx)=-mf(x)$ for all natural numbers $m$, so $f(zx)=zf(x)$ for all integers $z$. Finally, if $a$ and $b$ are integers and $b$ is positive, then $f(b(a/b)x)=bf((a/b)x)$ and also $f(b(a/b)x)=f(ax)=af(x)$, so $f((a/b)x)=af(x)/b$. Since any rational $q$ has the form $a/b$, we are finished.
Reverse Engineer from digits sum
I don't think I can put this into a formula (I'll reply back if I can come up with a simple one), but the way I would solve a problem like this is to just think it through. Let's take $10$ as an example. Here are all the possible combinations of single digit, positive integers that add up to $10$: $1, 9$ $2, 8$ $3, 7$ $4, 6$ $5, 5$ Remember that each set of digits could be joined in two ways to make two numbers, so all of the possible two digit numbers are as follows: $1, 9 \Rightarrow 19, 91$ $2, 8 \Rightarrow 28, 82$ $3, 7 \Rightarrow 37, 73$ $4, 6 \Rightarrow 46, 64$ $5, 5 \Rightarrow 55$ Technically, we could list many more numbers of 3-10 digits (e.g. $127$, $523$), but if we have two digit possibilities, then there's no reason to check three digits and up, since we're looking for the smallest number. From the list of numbers above, you can see that $19$ is the smallest. However, we could have found this without exhausting all of the two digit options. Unfortunately, I am suggesting a guess-and-check method, but a strategic, informed version. In this case, we check if 1 could be a possible first digit, since we're looking for the smallest possible number. $10-1 = 9$, so $19$ is a valid result, and we can not find a smaller answer (since there is no one-digit number that would equal $10$ and there are no other two digit numbers in the 10s that would add up to $10$). Now, let me take 16 to show you the proper approach. You don't just want to start with $16-1$ and then $16-2$, since these return a two-digit number ($1+15 = 16$, etc.), so you want to make an informed guess to try and find the lowest pair of single digit numbers that add up to $16$. Since $6+10 = 16$, anything $\le 6$ would have a two-digit partner, so we try $7$ and find that $7+9 = 16$, so $79$ is our smallest value that adds up to 16. Let's do another example that is $< 20$: $15$.
Since $5+10 = 15$, try $6$, since this should be the first with another single digit (you're technically adding $1$ to $5$ and subtracting $1$ from $10$), so you find $6$ and $9$, or $69$. Once you get to $19$, it's not possible to find a two-digit answer that adds up to $19$, so we need to move up to 3 digits. Try using $1$ as the first digit (this only works for $19$ by the way, just like it only worked for $10$ for 2 digits), so $19-1 = 18$ and now we need to find two single digits that add up to $18$, which is $9+9$, so $1+9+9 = 19$, so our answer is $199$. One-digit answers will only get you as far as $9$, two-digit answers will only get you as far as $9+9=18$, three digits will only go up to $9+9+9=27$, and so on. Sorry there isn't a formula that I can think of to solve this quickly, but with some informed tries, you should be able to find the answer rather quickly.
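For what it's worth, the informed-guessing procedure above can be mechanized: since you always want as few digits as possible, every digit after the first ends up being a $9$. A Python sketch (my own packaging of the strategy; the function name is mine):

```python
def smallest_with_digit_sum(s):
    """Smallest positive integer whose decimal digits sum to s:
    use the minimal number of digits, which forces trailing 9s."""
    assert s >= 1
    k = -(-s // 9)                # ceil(s / 9): minimal digit count
    first = s - 9 * (k - 1)       # whatever is left for the leading digit
    return int(str(first) + '9' * (k - 1))

assert smallest_with_digit_sum(10) == 19
assert smallest_with_digit_sum(15) == 69
assert smallest_with_digit_sum(16) == 79
assert smallest_with_digit_sum(19) == 199
```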
Ellipse radius from center and point
Assuming that the green point is angle $\alpha$ from the positive $x$-axis, measured counterclockwise, and that we place $(x_2, y_2)$ at the origin, the equation of the lower-right ellipse is $$ \left( \frac{x}{2r_2}\right)^2 + \left( \frac{y}{r_2}\right)^2 = 1 $$ and the coordinates of the green point are $$ (u, v) = (2r_2 \cos \alpha, r_2 \sin \alpha) $$ Since you claim to know the distance from this point to the center (I think you've labelled that "d", but it could be an "$\alpha$" -- hard to tell!), we then know that \begin{align} d^2 &= (2 r_2 \cos \alpha)^2 + (r_2 \sin \alpha)^2\\ &= r_2^2\left[ (2 \cos \alpha)^2 + (\sin \alpha)^2\right]\\ &= r_2^2\left[ 4 \cos^2 \alpha + \sin^2 \alpha\right]\\ \end{align} hence \begin{align} r_2^2 &= \frac{d^2}{4 \cos^2 \alpha + \sin^2 \alpha}\\ r_2 &= \frac{d}{\sqrt{ 4 \cos^2 \alpha + \sin^2 \alpha}}\\ \end{align} That answers the question you asked in words; it doesn't seem to match what you've drawn in the picture, however, which involves both $r_1$ and $r_2$, and ... well, frankly, I have no idea what it could be about.
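A quick numeric round-trip check of that formula (sample values of mine; note $\alpha$ here is the ellipse parameter used in the derivation above, not the polar angle of the point):

```python
import math

def r2_from_distance(d, alpha):
    """Recover r2 for the ellipse (x/(2 r2))^2 + (y/r2)^2 = 1 from the
    distance d of the point (2 r2 cos(alpha), r2 sin(alpha)) to the center."""
    return d / math.sqrt(4 * math.cos(alpha) ** 2 + math.sin(alpha) ** 2)

# build the point from a known r2, then recover r2 from d and alpha
r2, alpha = 3.0, 0.7
u, v = 2 * r2 * math.cos(alpha), r2 * math.sin(alpha)
d = math.hypot(u, v)
assert math.isclose(r2_from_distance(d, alpha), r2)
```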
Can we determine the set of all angles that can be trisected
Yes. See Wikipedia on constructible angles. You can construct any angle that is $2\pi$ divided by a Fermat prime, $2^{2^n}+1.$ We know of $3,5,17,257,65537$, but there may be others. Then you can bisect and add these as much as you want.
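As a sketch, the Gauss-Wantzel condition (the angle $2\pi/n$ is constructible iff $n$ is a power of $2$ times a product of distinct Fermat primes) can be tested against the Fermat primes known today:

```python
KNOWN_FERMAT_PRIMES = (3, 5, 17, 257, 65537)   # all known to date

def angle_constructible(n):
    """Is 2*pi/n constructible?  True iff n = 2^k times a product of
    distinct known Fermat primes.  If further Fermat primes exist,
    this test would wrongly reject the corresponding n."""
    while n % 2 == 0:
        n //= 2
    for p in KNOWN_FERMAT_PRIMES:
        if n % p == 0:
            n //= p
            if n % p == 0:        # repeated Fermat prime: not allowed
                return False
    return n == 1

assert angle_constructible(17)
assert angle_constructible(15)        # 3 * 5
assert not angle_constructible(7)
assert not angle_constructible(9)     # 3^2 repeats a Fermat prime
```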
Prove that the Language $L= \{ 0^n1^m \;|\; n,m \ge 0 \}$ is regular
Note that $w\in L$ if and only if $w$ consists of a sequence of zeros followed by a sequence of ones (the length of each sequence can be zero). So to build a DFA for the language you can check: If the first letter in $w$ is $1$, then all of $w$ must be ones. If the first letter in $w$ is $0$, then after we see the letter $1$, every subsequent letter must be $1$. Note that counting is not required! In your examples you needed to know something about the number of times each letter ($1$ or $0$) appears, but here we just care about the order.
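The two checks above translate directly into a two-state DFA; a quick Python simulation (no counting anywhere):

```python
def in_L(w):
    """Two-state DFA for L = { 0^n 1^m : n, m >= 0 } over {0,1}.
    State 0: still reading leading zeros; state 1: a 1 has been seen,
    so only 1s may follow.  Both states accept."""
    state = 0
    for ch in w:
        if ch == '1':
            state = 1
        elif ch == '0':
            if state == 1:
                return False      # a 0 after a 1: reject
        else:
            return False          # symbol outside the alphabet
    return True

assert in_L('') and in_L('000') and in_L('111') and in_L('0011')
assert not in_L('10') and not in_L('0101')
```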
A problem about minimizing the area of a triangle within the coordinate plane.
So we have point $P(b,c)$ and $M(m,0)$. The equation of the line between them is $$\frac{x-b}{m-b}=\frac{y-c}{0-c}$$ The intersection of this line with $l$ is $$y=ax$$ When we put these together we get $$\frac{x-b}{m-b}=\frac{ax-c}{0-c}$$ or $$-cx+bc=amx-cm-abx+bc$$ Extracting $x$ (the x coordinate of the $Q$ point): $$x_Q=\frac{cm}{am-ab+c}$$ The corresponding $y$ coordinate is $$y_Q=\frac{acm}{am-ab+c}$$ The area of the triangle $OMQ$ is $$A=\frac 12\frac{am^2c}{am-ab+c}$$ since $y_Q$ is the height and $m$ is the base. Take the derivative with respect to $m$ and equate it to $0$. One solution is $m=0$. The other is $$m=\frac{2}{a}(ab-c)$$ Now just plug in this $m$ into the equations for $x_Q$, $y_Q$, then calculate $|PM|$ and $|PQ|$ and you get the answer. If you calculate $PM$ and $PQ$ as vectors, you should get $PM=-PQ$.
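A quick numeric sanity check of the critical point $m=\frac2a(ab-c)$, using sample values $a=1$, $b=3$, $c=1$ (for which the formula gives $m=4$):

```python
a, b, c = 1.0, 3.0, 1.0

def area(m):
    # A = (1/2) * a * m^2 * c / (a*m - a*b + c), as derived above
    return 0.5 * a * m * m * c / (a * m - a * b + c)

m_star = 2 * (a * b - c) / a          # = 4 for these sample values
# grid-search the area over an interval where the denominator is positive
grid = [2.01 + 0.001 * k for k in range(10000)]
m_best = min(grid, key=area)
assert abs(m_best - m_star) < 0.01
```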
Working with average numbers and finding unknown variables?
Hint: Sum the left-hand sides and the right-hand sides of the 4 equations. What do you get?
Burkholder's inequality for elementary stochastic integral
By assumption, $$P[|H\cdot M|^*_t\ge c]\le P[|H\cdot U|^*_t\ge c/2]+P[|H\cdot V|^*_t\ge c/2]\le \frac{18}{c}\left(\|U_t\|_1+\|V_t\|_1\right)$$ The decomposition $M=U-V$, as described in "Writing a martingale as the difference of two non-negative martingales", satisfies $\|M_t\|_1=\|U_t\|_1+\|V_t\|_1$.
Circle geometry: The gear system on a bicycle
Hint: draw the two wheels (circles), one line tangent to both circles at $A$ and $B$ (part of this represents the chain) and two radii joining centers $P$ and $Q$ to their tangency points $A$ and $B$. You have then a right trapezoid $ABQP$, of which you know the length of two bases (radii of circles) and the height (36 cm). It is not difficult then to compute (with the help of Pythagoras' theorem) the length of the fourth side, which is the distance between centers.
Simplifying sum equation. (Solving max integer encoded by n bits)
The thing you've asked to show isn't too hard: \begin{align} \sum_{i=0}^{n-2} \left(2^{-i+n-2} + 2^i\right) &= \sum_{i=0}^{n-2} \left(2^{-i+n-2}\right) + \sum_{i=0}^{n-2}\left(2^i\right) \\ &= \sum_{i=0}^{n-2} \left(2^{-i+n-2}\right) + \left(1 + 2 + \ldots + 2^{n-2}\right) \\ &= \sum_{i=0}^{n-2} \left(2^{-i+n-2}\right) + \left(2^{n-1} - 1\right) \\ \end{align} where that last step comes from the formula for the sum of a geometric series, which I think you probably know. Now let's simplify the left-hand term... \begin{align} \sum_{i=0}^{n-2} \left(2^{-i+n-2} + 2^i\right) &= \sum_{i=0}^{n-2} \left(2^{-i+n-2}\right) + \left(2^{n-1} - 1\right) \\ &= \sum_{i=0}^{n-2} \left(2^{(n-2)-i}\right) + \left(2^{n-1} - 1\right) \\ &= \left(2^{n -2} + 2^{n-3} + \ldots + 2^0\right) + \left(2^{n-1} - 1\right) \\ \end{align} which we recognize as another geometric series, written in reverse order; the sum there gives us \begin{align} \sum_{i=0}^{n-2} \left(2^{-i+n-2} + 2^i\right) &= \left(2^{n -2} + 2^{n-3} + \ldots + 2^0\right) + \left(2^{n-1} - 1\right) \\ &= \left(2^{n -1} - 1\right) + \left(2^{n-1} - 1\right) \\ &= 2 \cdot 2^{n -1} - 2 \\ &= 2^{n} - 2.
\end{align} Quick proof for the geometric series: If we expand $$ U = (1 - a) (1 + a + a^2 + \ldots + a^k) $$ with the distributive law, and then gather like terms via the commutative law for addition, we get this: \begin{align} U &= (1 - a) (1 + a + a^2 + \ldots + a^{k-1} + a^k)\\ &= 1 \cdot (1 + a + a^2 + \ldots + a^{k-1} + a^k) - a \cdot (1 + a + a^2 + \ldots + a^{k-1} + a^k)\\ &= (1 + a + a^2 + \ldots + a^{k-1} + a^k) - (a + a^2 + a^3 + \ldots + a^k + a^{k+1})\\ &= 1 + (a + a^2 + \ldots + a^k) - (a + a^2 + a^3 + \ldots + a^k) - a^{k+1}\\ &= 1 - a^{k+1} \end{align} so we have that $$ 1-a^{k+1} = (1-a) (1 + a + \ldots + a^k) $$ hence (for $a \ne 1$), $$ 1 + a + \ldots + a^k = \frac{1-a^{k+1}}{1-a}, $$ which is the formula for the sum of a finite geometric series whose ratio is not 1. For an infinite series whose ratio has absolute value less than 1, the infinite sum turns out to be $\frac{1}{1-a}$, by the way, but this requires a careful definition of a sum for an infinite series.
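Both closed forms are easy to spot-check numerically:

```python
def lhs(n):
    # the original sum, for i = 0 .. n-2 (the exponent n-2-i is >= 0 here)
    return sum(2 ** (n - 2 - i) + 2 ** i for i in range(n - 1))

# the simplified closed form 2^n - 2
for n in range(2, 25):
    assert lhs(n) == 2 ** n - 2

# and the finite geometric series formula itself, with sample a, k
a, k = 3, 7
assert sum(a ** i for i in range(k + 1)) == (1 - a ** (k + 1)) // (1 - a)
```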
Renyi entropy (zeroth order)
Using the definition, the formula you want to prove turns into: $\log(Tr\rho_{AB})\leq\log(Tr\rho_A)+\log(Tr\rho_B)$ Now, using the property of sums of logarithms, this is equivalent to: $\log(Tr\rho_{AB})\leq\log(Tr\rho_A\cdot Tr\rho_B)$ which finally comes down to $Tr\rho_{AB}\leq Tr\rho_A\cdot Tr\rho_B$. This last inequality holds as all the $\rho$ are density matrices.
Initial value problem (solve for velocity)
The velocity decreases by 10 meters per second every second; this is what it means for the acceleration to be $-10\,\mathrm{m/s^2}$. So $v(0) = 10$, and to determine $v(t)$, just consider how much time has elapsed and how the acceleration would affect the velocity during that time.
What is the series expansion of reciprocal of theta function $\frac{1}{\theta(z;q)}$?
Andrews in [1] (equation (2.1)) cites the partial fraction expansion $${\prod_{n\gt0}\frac{(1-q^n)^2}{(1-zq^n)(1-z^{-1}q^{n-1})}}=\sum_{m\ge0}\left(\frac{(-1)^mq^{m(m+1)/2}}{1-z^{-1}q^m}-z\frac{(-1)^mq^{m(m+1)/2}}{1-zq^m}\right)\tag{*}$$ where $1\lt|z|\lt|q|^{-1}$ and $|q|\lt1$. Making the substitution $z{\to}z/q$, one obtains an expression for $1/\theta(z,q)$. The Laurent series expansion of the right-hand side of $(*)$ is given in Lemma 1 of the paper and proved in Section 2. To wit, one has $${\prod_{n\gt0}\frac{(1-q^n)^2}{(1-zq^n)(1-z^{-1}q^{n-1})}}=\sum_{\substack{(N,r)\in\mathbb{Z}^2\\r\ge|N|}} {(-1)^{r+N}z^Nq^{(r^2-N^2)/2+(r+N)/2}}\tag{**}$$ References: [1] G. E. Andrews, Hecke modular forms and the Kac-Peterson identities.
Proving the limit of a function by using epsilon-delta
Prove that $\;\lim\limits_{x\to16}\left(3-\dfrac12 x\right)=-5\;$ using an $\;\varepsilon,\delta\;$ argument. My attempt: For any $\varepsilon>0\;$ there exists $\;\delta=2\varepsilon>0\;$ such that $\forall\;x\in\big]16-\delta,16+\delta\big[\setminus\{16\}\;$ it follows that $\left|\left(3-\dfrac12 x\right)-\big(-5\big)\right|=\left|-\dfrac12 x+8\right|=\left|-\dfrac12\big(x-16\big)\right|=$ $=\dfrac12\big|x-16\big|<\dfrac12\delta=\varepsilon\;.$
Finding area of $y=\tan x$ and $y=2\sin x$
Edit: Completely rewritten to correct a misreading of the original problem. You know that the graphs of $y=\tan x$ and $y=2\sin x$ cross at $x=0$. Where else do they cross? You need to solve $$2\sin x=\tan x=\frac{\sin x}{\cos x}\;,$$ or $$2\sin x\cos x=\sin x\;.$$ One solution is $\sin x=0$, which in the interval from $-\pi/3$ to $\pi/3$ means that $x=0$; we already knew about that. If $\sin x\ne 0$, we can divide through by it to find that $2\cos x=1$, or $\cos x=1/2$. That happens at $x=\pi/3$ and $x=-\pi/3$, so the curves actually cross at the ends of your interval as well as in the middle. Now, which one is on top where? A really close look at the graphs with a graphing calculator will probably tell you, but you don’t need that tool. Just look at what happens at $x=\frac{\pi}4$: $2\sin\frac{\pi}4=\sqrt2$, and $\tan\frac{\pi}4=1$, so at $x=\frac{\pi}4$, the $y=2\sin x$ curve is above the $y=\tan x$ curve. We know that the curves don’t cross between $x=0$ and $x=\frac{\pi}3$, so the $y=2\sin x$ curve must be above the $y=\tan x$ curve over the entire interval from $x=0$ to $x=\frac{\pi}3$. This means that the length of the vertical strip at some $x$ between $0$ and $\frac{\pi}3$ is top $y$-coordinate minus bottom $y$-coordinate, or $2\sin x-\tan x$. For $-\pi/3\le x\le 0$ you can either notice that the strips now run from a low $y$ value of $2\sin x$ up to a high $y$ value of $\tan x$ and therefore have length $\tan x-2\sin x$, or you can use your observation that the functions are odd to say that the lefthand area must be equal to the righthand area. Either way, your final result is just the sum of the left- and righthand areas. Added: In view of your last comment, I thought that I’d finish the calculation; perhaps you’ll easily be able to track down where you went astray. (If you did: sometimes the answer at the back of the book is wrong, even in a well-established text like Stewart’s.) 
$$\begin{align*}\int_0^{\pi/3}(2\sin x-\tan x)dx&=\Big[-2\cos x+\ln|\cos x|\Big]_0^{\pi/3}\\ &=-2\cos\frac{\pi}3-(-2\cos 0)+\ln\left|\cos\frac{\pi}3\right|-\ln|\cos 0|\\ &=-1-(-2)+\ln\frac12-\ln 1\\ &=1+\ln\frac12+0\\ &=1+(\ln 1-\ln 2)\\ &=1-\ln 2\;. \end{align*}$$ This is the area of the righthand half, so by symmetry the total area is twice this, or $2-2\ln 2$.
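As a numeric cross-check of that antiderivative, Simpson's rule on the right half reproduces $1-\ln 2$:

```python
import math

def f(x):
    return 2 * math.sin(x) - math.tan(x)

# composite Simpson's rule on [0, pi/3]
N = 2000                       # even number of subintervals
a, b = 0.0, math.pi / 3
h = (b - a) / N
s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, N))
right_half = s * h / 3

assert math.isclose(right_half, 1 - math.log(2), abs_tol=1e-9)
total = 2 * right_half         # symmetry doubles it: 2 - 2 ln 2
```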
Derivative of a piecewise function with x=0
Hint: you can prove that the derivatives of $f$ can always be written as $f(x)\frac{P(x)}{Q(x)}$ where $P$ and $Q$ are polynomials. Then taking limits shows that all the derivatives at $x=0$ are equal to $0$. It's the typical example used to show that a function's Taylor expansion is not always equal to the function itself.
A countable elementary submodel of $L_{\omega_1}$
One of the main points of the argument is that every object in $L_{\omega_1}$ is countable internally. (Of course, $\omega_1=\omega_1^L$ in this problem.) That is, $L_{\omega_1}$ sees that every set in it is countable. Since $N\prec L_{\omega_1}$, $N$ also thinks every object is countable. Thus this argument does not work if we replace $\omega_1$ with other ordinals, like $\omega_2$: every countable elementary submodel of $L_{\omega_2}$ would think there is an uncountable set, although it could not be a genuine uncountable set. I explained why $f$ is definable in the comment, but let me summarize the point: $L_{\omega_1}$ thinks there is a definable global well-order (namely $<_L$.) Thus $N$ also thinks there is a global well-order which is definable. Hence the formula $$\phi(f) :\equiv [f\text{ is onto from $\omega$ to $X$ and $f$ is the $<_L$-least function among them}]$$ is a first-order formula which defines an object (which is the function we desire.)
Does $n|\phi(m^n-1)$?
Mod $m^n-1$, $m$ has multiplicative order $n$. By Lagrange the conclusion follows.
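This is easy to sanity-check for small cases with a brute-force totient (names and ranges are my own choices):

```python
from math import gcd

def phi(k):
    """Euler's totient by brute force; fine for small k."""
    return sum(1 for i in range(1, k + 1) if gcd(i, k) == 1)

for m in range(2, 6):
    for n in range(1, 7):
        # the claimed divisibility n | phi(m^n - 1)
        assert phi(m ** n - 1) % n == 0
        # and m^n ≡ 1 (mod m^n - 1), consistent with m having order n
        if m ** n - 1 > 1:
            assert pow(m, n, m ** n - 1) == 1
```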
Ideals and the Chinese Remainder Theorem
No. The generalised Chinese remainder theorem is an abstract version in the context of commutative rings, which states this: Let $R$ be a commutative ring, $I_1,\dots, I_n$ pairwise relatively prime ideals (i.e. $I_k+I_\ell=R\;$ for any $k\ne \ell$). Then $I_1\cap\dots\cap I_n=I_1\dotsm I_n$. The canonical homomorphism: \begin{align} R&amp;\longrightarrow R/I_1\times\dotsm \times R/I_n,\\ x&amp;\longmapsto (x+I_1,\dots ,x+I_n), \end{align} induces an isomorphism: $$R/I_1\cap\dots\cap I_n=R/I_1\dotsm I_n\simeq R/I_1\times\dotsm \times R/I_n.$$
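A concrete instance in $R=\Bbb Z$ with the comaximal ideals $I_1=(4)$ and $I_2=(9)$ (comaximal since $\gcd(4,9)=1$):

```python
# x -> (x mod 4, x mod 9) induces the isomorphism Z/36 ≅ Z/4 x Z/9
images = {(x % 4, x % 9) for x in range(36)}
assert len(images) == 36          # injective on Z/36, hence bijective

# and (4) ∩ (9) = (4)(9) = (36), as the theorem predicts:
assert all((x % 4 == 0 and x % 9 == 0) == (x % 36 == 0)
           for x in range(1000))
```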
Compute iterated kernels for symmetric kernel $K(x,t)=\sum_{n=1}^\infty \frac{\sin(n\pi x) \sin(n \pi t)}{n}.$
Your intuition is correct, kernel $K$ can be written as $$K(x,t)=\sum_{n=1}^\infty \frac{\phi_n(x)\phi_n(t)}{\lambda_n}$$ with $\phi_n(x)=\sin(n\pi x)$ and $\lambda_n=n$. This is no special theorem, we're just writing the definition of $K$. So, assuming the integration interval is $[0, 1]$, and using the following notation for the inner product in $L^2([0,1])$: $$\langle f, g\rangle = \int_0^1 f(x)g(x)dx$$ the square of $K$ is $$K^2(x,t)=\sum_{n=1}^\infty \sum_{m=1}^\infty \frac{\phi_n(x)\phi_m(t)}{\lambda_n\lambda_m}\langle \phi_n, \phi_m\rangle$$ You can verify that $$\langle \phi_n, \phi_m\rangle=\int_0^1 \sin(n\pi u)\sin(m \pi u)du$$ is equal to $\frac 1 2$ if $n=m$, and is equal to $0$ otherwise. Thus, $$K^2(x,t)=\frac 1 2 \sum_{n=1}^{\infty}\frac{\phi_n(x)\phi_n(t)}{\lambda_n^2}$$ and, iterating, the $p$-th power of $K$ is given by $$\boxed{K^p(x,t)=\frac 1 {2^{p-1}} \sum_{n=1}^\infty\frac{\phi_n(x)\phi_n(t)}{\lambda_n^p}=\frac 1 {2^{p-1}} \sum_{n=1}^\infty\frac{\sin(n\pi x) \sin(n \pi t)}{n^p}}$$ Edit: Following the comment from @Jean-Marie (merci!), an alternative form can be obtained by using the polylogarithm: $$Li_p(z)=\sum_{n\geq 1}\frac {z^n}{n^p}$$ With this, $$\begin{split} K^p(x,t)&=\frac 1 {2^{p-1}} \sum_{n=1}^\infty\frac{\sin(n\pi x) \sin(n \pi t)}{n^p}\\ &= \frac 1 {2^p}\sum_{n=1}^\infty\frac{\cos(n\pi(x-t))-\cos(n\pi(x+t))}{n^p}\\ &= \frac 1 {2^p}\Re \left( \sum_{n=1}^\infty\frac{e^{in\pi(x-t)}-e^{in\pi(x+t)}}{n^p} \right) \end{split}$$ $$\boxed{K^p(x,t) = \frac 1 {2^p} \Re \left( Li_p(e^{i\pi(x-t)}) - Li_p(e^{i\pi(x+t)}) \right)}$$ I'm not sure if that can be simplified further. With $p=1$, $Li_1(z) = -\ln(1-z)$, you get $$K(x,t)=\dfrac12\ln\left|\dfrac{\sin(\pi(x+t)/2)}{\sin(\pi(x-t)/2)}\right|$$
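The boxed formula can be verified numerically for $p=2$ against the iterated integral, truncating the series at the same order on both sides (truncation level, grid size, and sample points $x,t$ are my choices):

```python
import numpy as np

N_TERMS = 60
n = np.arange(1.0, N_TERMS + 1)
u = np.linspace(0.0, 1.0, 20001)
x, t = 0.3, 0.7

S = np.sin(np.pi * np.outer(u, n))          # sin(n*pi*u) on the grid
K_xu = S @ (np.sin(np.pi * n * x) / n)      # truncated K(x, u)
K_ut = S @ (np.sin(np.pi * n * t) / n)      # truncated K(u, t)

# trapezoid rule for K^2(x, t) = ∫_0^1 K(x,u) K(u,t) du
h = u[1] - u[0]
prod = K_xu * K_ut
numeric = h * (prod.sum() - 0.5 * (prod[0] + prod[-1]))

closed = 0.5 * np.sum(np.sin(np.pi * n * x) * np.sin(np.pi * n * t) / n ** 2)
assert abs(numeric - closed) < 1e-4
```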
In a T1 space, derived set = closure
The lemma is not true. Let $S=\{x\}$. Then $S'=\varnothing$, so it cannot equal the non-empty set $\overline S$.
Combinations & probability: dividing 9 people into 3 groups
I think this can be done in a straight-forward way. The order in which groups are formed does not matter, so we can assume that the process begins by randomly drawing people from the pool to add to the first group. This makes it easier because now you simply have to compute {M is drawn} * {M is drawn again} * {M is drawn a third time}. This is $$\frac39 \cdot \frac28 \cdot \frac17 = \frac1{84}$$ Then the probability that any group consists of only men is 3 times that, so $\frac{1}{28}$. Alternatively, begin after the first man has been assigned, and just multiply $\frac28 \cdot \frac17 = \frac{1}{28}$. For the second, again we can just fix one order. Assume we first draw people for just group one, then the probability to have exactly one man in there (which is necessary to have one men in each group) is $$3 \cdot \frac39 \cdot \frac68 \cdot \frac57$$ (Compute the probability for MWW, then multiply it by 3 to account for the permutations WMW and WWM.) Similarly for group two: $$3 \cdot \frac26 \cdot \frac45 \cdot \frac34$$ and group three: $$3 \cdot \frac13 \cdot \frac22 \cdot \frac11$$ This one just yields 1, which makes sense because after the first two groups have one man in each, the last one will inevitably also have one. In total, we get $$3 \cdot \frac39 \cdot \frac68 \cdot \frac57 \cdot 3 \cdot \frac26 \cdot \frac45 \cdot \frac34 = \frac9{28}$$
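Both answers can be confirmed by exact enumeration of all labeled splits of the nine people into three groups of three (my own verification, using exact rational arithmetic):

```python
from itertools import combinations
from fractions import Fraction

men = {0, 1, 2}                  # people 0-2 are men, 3-8 are women
splits = []
for g1 in combinations(range(9), 3):
    rest = set(range(9)) - set(g1)
    for g2 in combinations(sorted(rest), 3):
        splits.append((set(g1), set(g2), rest - set(g2)))

# some group consists of only men
p_all_men = Fraction(sum(any(g == men for g in s) for s in splits),
                     len(splits))
# each group contains exactly one man
p_one_each = Fraction(sum(all(len(g & men) == 1 for g in s) for s in splits),
                      len(splits))

assert p_all_men == Fraction(1, 28)
assert p_one_each == Fraction(9, 28)
```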
Deriving a function from an integral
$t = \int \frac 1{20-v}\, dv$, so you want to differentiate both sides with respect to $v$: $\frac {dt}{dv} = \frac {1}{20-v}$, and invert it: $\frac {dv}{dt} = 20-v$, giving you a linear differential equation. Its general solution is $v = C_1 e^{-t} + 20$; with $v(0) = 0$ this gives $v = -20 e^{-t} + 20$. That will work just as well.
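As a numerical cross-check (my own sketch, not from the answer), integrating $\frac{dv}{dt}=20-v$, $v(0)=0$ with Euler's method agrees with the closed form $v(t)=20(1-e^{-t})$:

```python
import math

def euler_v(t_end, steps=100000):
    """Forward-Euler integration of dv/dt = 20 - v with v(0) = 0."""
    h = t_end / steps
    v = 0.0
    for _ in range(steps):
        v += h * (20.0 - v)
    return v

closed = 20.0 * (1.0 - math.exp(-2.0))
print(euler_v(2.0), closed)  # both ≈ 17.29
```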
Using $\epsilon-\delta$ arguments, find the limit of $(e^{xy} -1)/y$ when $(x,y)\to(0,0)$
Then $|e^{xy}-1|\leq|xy|e^{|xy|}$, so $\left|\dfrac{e^{xy}-1}{y}\right|\leq|x|e^{|xy|}$. If we let $|x|^{2}+|y|^{2}\leq 1$, then both $|x|,|y|\leq 1$, hence $|xy|\leq 1$, and so $|x|e^{|xy|}\leq e|x|\leq e\sqrt{|x|^{2}+|y|^{2}}$. Now let $\delta=\min\{1,\epsilon/e\}$ to finish the job.
Is $\forall x(P(x)\to Q(x)) \to (\forall x(P(x)) \to \forall x(Q(x)))$ valid?
It is correct. Let me add more details. First: if for every $x$, $P(x)$ implies $Q(x)$, then, whenever all $P(x)$ are true, all $Q(x)$ are obviously true too. To prove the implication, negate the conclusion and look for a contradiction: ¬(∀x(P(x))→∀x(Q(x))) = ¬(¬∀x(P(x)) or ∀x(Q(x))) = (∀x(P(x)) and ¬∀x(Q(x))) = (∀x(P(x)) and ∃x(¬Q(x))) So we have that every $P(x)$ is true, and that $Q(x)$ is false for some $x$. But this contradicts the premise ∀x(P(x)→Q(x)), because there is at least one $x$ for which the implication fails. Since negating the conclusion produces a contradiction, the formula is valid. Also, I'd suggest adding parentheses for clarity (you have some extra ones): ∀x(P(x)→Q(x))→(∀x(P(x))→∀x(Q(x)))
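Since the formula involves only one-place predicates, one can also brute-force check it over small finite domains (my own sketch; exhausting finite domains illustrates validity but is not a general proof, which the argument above supplies):

```python
from itertools import product

def valid_on(domain_size):
    """Check ∀x(P(x)→Q(x)) → (∀x P(x) → ∀x Q(x)) for every pair of
    predicates P, Q on a finite domain, encoding each predicate as a
    tuple of truth values."""
    dom = range(domain_size)
    for P in product([False, True], repeat=domain_size):
        for Q in product([False, True], repeat=domain_size):
            premise = all((not P[x]) or Q[x] for x in dom)
            conclusion = (not all(P)) or all(Q)
            if premise and not conclusion:
                return False  # counterexample found
    return True

print(all(valid_on(n) for n in range(1, 5)))  # True
```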
How is direct sum of von Neumann algebras defined?
Digging through the book, it appears that Murphy intends to define the direct sum of von Neumann algebras to be $$\oplus_\lambda A_\lambda=\{(a_\lambda):\sup_\lambda\|a_\lambda\|<\infty\}.$$ His definition of direct sum of Banach algebras can be found in Exercise 1.1.
Proving a matrix $A$ is of certain form
The polynomial $x^3-x=x(x-1)(x+1)$ annihilates $A$ so it's diagonalizable. Hence there is an invertible matrix $P$ such that $$A=P\operatorname{diag}(I_p,-I_q,0)P^{-1}$$ with $p+q=r$ so $$A^2=P\operatorname{diag}(I_r,0)P^{-1}$$
Maximal real subfield of $\mathbb{Q}(\zeta )$
As Dylan points out, parts (1) and (2) are clear. Moreover, $\mathbb{Z}[\zeta + \zeta^{-1}]$ contains $\zeta^j + \zeta^{-j}$ for all $j \ge 1$ (by induction using the binomial theorem); these include all the conjugates of $\zeta + \zeta^{-1}$, so (4) implies (3). Thus it suffices to prove (4), which follows from the corresponding fact for the full cyclotomic field $\mathbb{Q}(\mu_\ell)$ (which is well known), as follows: Let $u \in A_0$. Because $u$ is an algebraic integer in $\mathbb{Q}(\zeta)$, we can write $u = \sum_{i = 0}^{\ell - 1} u_i \zeta^i$ for some $u_i \in \mathbb{Z}$. But since $u = \overline{u}$, we have $u = \sum u_i \zeta^{-i} = \sum u_i \zeta^{\ell - i}$. Hence $u_i = u_{\ell - i}$. Thus we have $u = \sum_{i = 0}^{(\ell - 1)/2} u_i (\zeta^i + \zeta^{-i})$, and (4) is proved.
Is there a site to draw a curve and get the equivalent equation?
Your curve appears to be the cubic spline parametrized by $$ p(t) = (1 - t)^{3}\, p_{1} + 3(1 - t)^{2}t\, p_{2} + 3(1 - t)t^{2}\, p_{3} + t^{3}\, p_{4},\quad 0 \leq t \leq 1, $$ with $p_{1} = (0, 1)$, $p_{2} = (\frac{8}{3}, 1)$, $p_{3} = (\frac{4}{3}, 0)$, $p_{4} = (4, 0)$. Dmitry Baranovskiy's Raphaël JavaScript library has an interactive demo for drawing splines.
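If you want to evaluate or plot that spline yourself, the cubic Bézier parametrization above is easy to code (a sketch of mine; the control points are the ones read off in the answer):

```python
def bezier(t, p1, p2, p3, p4):
    """Evaluate the cubic Bezier curve with control points p1..p4 at t in [0, 1]."""
    s = 1 - t
    return tuple(s**3 * a + 3 * s**2 * t * b + 3 * s * t**2 * c + t**3 * d
                 for a, b, c, d in zip(p1, p2, p3, p4))

P1, P2, P3, P4 = (0, 1), (8/3, 1), (4/3, 0), (4, 0)
print(bezier(0.0, P1, P2, P3, P4))  # (0.0, 1.0) -- the curve starts at p1
print(bezier(1.0, P1, P2, P3, P4))  # (4.0, 0.0) -- and ends at p4
```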
A very interesting geometry problem
$A+4B+4C=1,\; A+3B+2C=\dfrac{\pi}{4},\; A+2B+C=\dfrac{1}{2}\left(\dfrac{2\pi}{3}-\dfrac{\sqrt{3}}{2}\right).$ The last one is the area of a circular segment.
Prove that a separable metric space is Lindelöf without proving it is second-countable
Let $\mathcal{U}$ be an open cover of $(X,d)$. Let $\{d_n: n \in \mathbb{N}\}$ be a dense subset of $X$. For every $n\in \mathbb{N}, q \in \mathbb{Q}$, if there exists some member $U$ of $\mathcal{U}$ that contains $B(d_n, q)$, pick some $U(n,q) \in \mathcal{U}$ that does (there could be plenty of such $U$, so we use AC to pick definite ones). Otherwise set $U(n,q) = U_0$ for some fixed $U_0 \in \mathcal{U}$, for definiteness. Claim: $\{ U(n,q): n \in \mathbb{N}, q \in \mathbb{Q}\}$ is a countable subcover of $\mathcal{U}$. To see this, let $x \in X$ and find some $U_x \in \mathcal{U}$ that contains $x$ (as we have a cover). Then for some $r>0$ we have $B(x,r) \subset U_x$; we can also find some $d_{n(x)}$ from the dense subset inside $B(x,\frac{r}{2})$, and next a $q(x) \in \mathbb{Q}$ such that $d(x, d_{n(x)}) < q(x) < \frac{r}{2}$. Note that $x \in B(d_{n(x)}, q(x))$ and $B(d_{n(x)}, q(x)) \subset B(x,r) (\subset U_x)$ by the triangle inequality. Then $U_x$ witnesses that some member of $\mathcal{U}$ contains $B(d_{n(x)},q(x))$, so we know that $x \in B(d_{n(x)}, q(x)) \subset U(n(x), q(x))$; hence $x$ is indeed covered by the subcover, as required. This proof is quite direct, but it is essentially the same argument one needs to see that the sets $B(d_n, q)$ for $n \in \mathbb{N}, q \in \mathbb{Q}$ form a (countable) base for $X$. So it's not really a simplification per se, but it shows that a direct proof is possible. BTW, it's not too hard to get hereditarily Lindelöf as well. Essentially the same proof holds: $\mathcal{U}$ is then not necessarily a cover of $X$, and the $x$ is chosen to be in $\bigcup \mathcal{U}$ instead of $X$. We use the formulation of hereditarily Lindelöf as: every family of open sets has a countable subfamily with the same union. The proof goes through the same way, really.
Given an M x N matrix, is there a way to produce an orthogonal set of N vectors of length M, where M < N?
Hint: A set of orthogonal vectors is linearly independent. Think about a basis for the space.
Prove by contradiction that there is an $i \in [n]$ such that $x_i \geq 2$ if $x_1,\ldots,x_n \in \mathbb{N} \cup \{0\}$
If $x_i\leq 1$ for all $i$, then $x_1+\ldots+x_n\leq 1+\ldots +1 = n\cdot 1 = n$. Contraposition: If $x_1+\ldots+x_n> n$, there is an $i$ with $x_i\geq 2$. $P\Rightarrow Q \Leftrightarrow \neg Q\Rightarrow\neg P$.
Numerical method for finding the square-root.
Essentially, if you are interested in evaluating $\sqrt{a}$, the idea is to first find the greatest perfect square less than or equal to $a$. Say this is $b^2$, i.e. $b = \lfloor \sqrt{a} \rfloor \implies b^2 \leq a < (b+1)^2$. Then consider the function $$f(x) = b + \dfrac{a-b^2}{x+b}$$ $$f(b) = b + \underbrace{\dfrac{a-b^2}{2b}}_{\in [0,1]} \in [b,b+1]$$ $$f(f(b)) = b + \underbrace{\dfrac{a-b^2}{f(b) + b}}_{\in [0,1]} \in [b,b+1]$$ In general $$f^{(n)}(b) = \underbrace{f \circ f \circ f \circ \cdots f}_{n \text{ times}}(b) = b + \dfrac{a-b^2}{f^{(n-1)}(b)+b}$$ Hence, $f^{(n)}(b) \in [b,b+1]$ always. If $\lim\limits_{n \to \infty}f^{(n)}(b) = \tilde{f}$ exists, then $$\tilde{f} = b + \dfrac{a-b^2}{\tilde{f}+b}$$ Hence, $$\tilde{f}^2 + b \tilde{f} = b \tilde{f} + b^2 + a - b^2 \implies \tilde{f}^2 = a$$ To prove the existence of the limit, look at $$(f^{(n)}(b))^2 - a = \left(b + \dfrac{a-b^2}{f^{(n-1)}(b)+b} \right)^2 - a = \dfrac{(a-b^2)(a-(f^{(n-1)}(b))^2)}{(b+f^{(n-1)}(b))^2} = k_{n-1}(a,b)((f^{(n-1)}(b))^2-a) $$ where $\vert k_{n-1}(a,b) \vert < 1$. Hence, convergence is also guaranteed. EDIT Note that $k_{n-1}(a,b) = \dfrac{(a-b^2)}{(b+f^{(n-1)}(b))^2} \leq \dfrac{(b+1)^2 - 1 - b^2}{(b+b)^2} = \dfrac{2b}{(2b)^2} = \dfrac1{2b}$. This can be interpreted as: the larger the number, the faster the convergence. Comment: This method works only when you want to find the square root of a number $\geq 1$. EDIT To complete the answer, I am adding @Hurkyl's comment. Functions of the form $$g(z) = \dfrac{c_1z+c_2}{c_3z+c_4}$$ are termed Möbius transformations. With each of these Möbius transformations, we can associate a matrix $$M = \begin{bmatrix} c_1 & c_2\\ c_3 & c_4\end{bmatrix}$$ Note that the function $$f(x) = b + \dfrac{a-b^2}{x+b} = \dfrac{bx + a}{x+b}$$ is a Möbius transformation.
One major advantage of the associated matrix is that the matrix associated with the composition $$g^{(n)}(z) = \underbrace{g \circ g \circ \cdots \circ g}_{n \text{ times}} = \dfrac{c_1^{(n)} z + c_2^{(n)}}{c_3^{(n)} z + c_4^{(n)}}$$ is nothing but the matrix power $$M^n = \begin{bmatrix}c_1 & c_2\\ c_3 & c_4 \end{bmatrix}^n = \begin{bmatrix}c_1^{(n)} & c_2^{(n)}\\ c_3^{(n)} & c_4^{(n)} \end{bmatrix}$$ (Note that $c_k^{(n)}$ denotes the coefficient $c_k$ at the $n$th level and is not the $n$th power of $c_k$.) Hence, function composition is nothing but raising the matrix $M$ to the appropriate power. This can be done quickly, since $M^n$ can be computed in $\mathcal{O}(\log_2(n))$ matrix multiplications. Thereby we can compute $g^{(2^n)}(b)$ in $\mathcal{O}(n)$ operations.
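Both versions of the method are easy to try out; here is a sketch of mine implementing the plain iteration and the matrix-power shortcut (for $f$, the associated matrix is $M=\begin{bmatrix}b & a\\ 1 & b\end{bmatrix}$):

```python
from math import isqrt

def sqrt_iter(a, n=40):
    """Iterate f(x) = b + (a - b^2)/(x + b) starting from b = floor(sqrt(a))."""
    b = isqrt(a)
    x = b
    for _ in range(n):
        x = b + (a - b * b) / (x + b)
    return x

def sqrt_mobius(a, n=40):
    """Same limit via the associated matrix M = [[b, a], [1, b]] raised to
    the n-th power by repeated squaring, then applied to the start value b."""
    b = isqrt(a)

    def matmul(A, B):  # 2x2 integer matrix product
        return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
                [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

    M = [[b, a], [1, b]]
    R = [[1, 0], [0, 1]]
    k = n
    while k:
        if k & 1:
            R = matmul(R, M)
        M = matmul(M, M)
        k >>= 1
    # Apply the Möbius transformation encoded by R to the start value b
    return (R[0][0] * b + R[0][1]) / (R[1][0] * b + R[1][1])

print(sqrt_iter(10))    # ≈ 3.16227766...
print(sqrt_mobius(10))  # same value
```

Note that the matrix entries stay integers, so Python's arbitrary-precision arithmetic keeps the fast version exact until the final division.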
The law of syllogism example
If you are smart you like Math. $$(P\to Q)$$ If you like Math you get a good job.$$(Q\to R)$$ Thus If you are smart you get a good job.$$(P\to R)$$
To show $X$ and $|X|$ are not jointly continuous
Just for ease of notation, let $Y=|X|$. If $X,Y$ are jointly continuous, then integrating over all possible values for $X,Y$, we have \begin{eqnarray*} 1 &=& \int_{x=-\infty}^{0}\int_{y=-x}^{-x}{f_{X,Y}(x,y)\;dy\;dx} + \int_{x=0}^{\infty}\int_{y=x}^{x}{f_{X,Y}(x,y)\;dy\;dx} \\ && \\ &=& 0+0 \qquad\qquad\text{both inner integrals are $0$ due to their limits of integration.} \end{eqnarray*} This is a contradiction, so $X,Y$ are not jointly continuous.
Analysis limit proof
Hint: Note that $$x^3-8=(x-2)(x^2+2x+4)$$ and $$x^2-4=(x-2)(x+2).$$ This should let you use the given fact.
Determine degree of polynomial given as black box
We can take the values $$P_0(i)=P(i), \quad i=0,\dots, n,$$ and calculate the differences $$P_k(j)=P_{k-1}(j+1)-P_{k-1}(j), \quad j=0,\dots, n-k,$$ until all the $P_k(j)$ become constant. It is easy to prove that $P_k$ is constant exactly when $P$ is a polynomial of degree $k$.
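The difference scheme above can be sketched in a few lines (my own sketch; `max_deg` is an assumed cap on the degree the black box may have):

```python
def degree(black_box, max_deg=50):
    """Detect the degree of a polynomial given only as a callable, by taking
    successive finite differences of its values at 0, 1, 2, ... until they
    become constant. Exact for integer-coefficient polynomials."""
    vals = [black_box(i) for i in range(max_deg + 2)]
    d = 0
    while len(set(vals)) > 1:          # not yet constant
        vals = [b - a for a, b in zip(vals, vals[1:])]
        d += 1
    return d

print(degree(lambda x: 3 * x**3 - x + 7))  # 3
print(degree(lambda x: 5))                 # 0
```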
Spectral radius Volterra operator with an arbitrary kernel from $L^2$
$$ |V_K^nf|^2= \\ = \left|\int_{a}^{x}K(x,x_{n-1})\cdots\int_{a}^{x_2}K(x_2,x_1)\int_{a}^{x_1}K(x_1,x_0)f(x_0)dx_0 dx_1\cdots dx_{n-1}\right|^2 \\ \le \left[\int_{a}^{x}\cdots\int_{a}^{x_2}\int_{a}^{x_1}|K(x,x_{n-1})\cdots K(x_2,x_1)K(x_1,x_0)||f(x_0)|dx_0dx_1\cdots dx_{n-1}\right]^{2} \\ \le\left(\int_a^x\cdots\int_a^{x_1}|K(x,x_{n-1})\cdots K(x_1,x_0)|^2dx_0dx_1\cdots dx_{n-1}\right) \\ \times \left(\int_{a}^{x}\cdots\int_a^{x_1}|f(x_0)|^2dx_0dx_1\cdots dx_{n-1}\right) \\ \le \|f\|_{L^2}^2 \int_a^x\cdots\int_{a}^{x_3}\int_a^{x_2}dx_1dx_2\cdots dx_{n-1} \\ =\|f\|_{L^2}^2\frac{(x-a)^{n-1}}{(n-1)!} $$ Therefore, after integrating in $x$ on $[a,b]$, one obtains $$ \|V_K^n\| \le \sqrt{\frac{(b-a)^{n-1}}{(n-1)!}} $$ That's enough to give an infinite radius of convergence for the exterior resolvent expansion.
Non-commuting coprime elements in finite non-abelian groups
The answer to the question as it stands, is no. A non-Abelian nilpotent group $G$ of order $p^{a}q^{b}$ where $p,q$ are distinct primes and $a,b$ are positive integers has only two Sylow subgroups (one for $p$ and one for $q$) say $P$ and $Q$ respectively. These commute with each other in the strongest sense that every element of $P$ commutes with every element of $Q$. Perhaps a more natural condition for subgroups is whether they are permutable. Two subgroups $A$ and $B$ are said to be permutable if $AB = BA$. This does not necessarily mean that all elements of $A$ commute with all elements of $B$. In a finite nilpotent group, any two Sylow subgroups are permutable, and this condition characterizes nilpotent groups (I do not know who was the first to observe this). If $G$ is a finite group which is not nilpotent, $G$ has a Sylow $p$-subgroup $A$ which is not normal for some prime $p.$ Then $G$ must have another Sylow $p$-subgroup $B.$ The set $AB$ has cardinality a power of $p$ greater than $|A|$, so $AB$ is not a a subgroup of $G$ by Lagrange's theorem, so it is not the case that $AB = BA$, and $A$ and $B$ are not permutable. Later edit: In response to an expansion of the original question, it is possible to go further. If $G$ is a finite group which is not nilpotent, then there are two primes $p$ and $q$ and an element $x \in G$ of order a power of $p$ and an element $y \in G$ of order a power of $q$ such that $x$ and $y$ do not commute. There may be a simpler way to see this, but one way to do it is to use Frobenius's normal $p$-complement theorem. Since $G$ is not nilpotent, there is a prime $p$ such that $G$ has no normal $p$-complement. By the mentioned theorem of Frobenius, there is a $p$-subgroup $P$ of $G$ (which need not be Sylow) such that $N_{G}(P)/C_{G}(P)$ is not a $p$-group. For some prime $q \neq p,$ there is then an element $y$ of $q$-power order in $N_{G}(P) \backslash C_{G}(P)$. 
There must be an element $x \in P$ such that $xy \neq yx,$ and we have the desired two elements.
Countable weighted shift has no invariant subspace.
False. The closed span of $e_k$ for $k \ge n$ is invariant.
Differentiability of a supremum of a family of functions with respect to a parameter
Define $M=[-1,1]$ and $$ f(t,x) = xt $$ Then $$ g(t) = \sup_{x\in M} (xt) = |t|, $$ which is non-smooth. It can be shown that $g$ is continuous: Take a sequence $(t_n)$ with $t_n\to t$. Then for each $n$ there is $x_n$ with $g(t_n)=f(t_n,x_n)$ and $f(t_n,x_n) \ge f(t_n, y)$ for all $y\in M$. Since $M$ is compact, $x_{n_k}\to x$ for some subsequence. Then passing to the limit in the inequality gives $f(t,x)\ge f(t,y)$ for all $y\in M$, hence $g(t)=f(t,x)=\lim_{k\to\infty}f(t_{n_k},x_{n_k})=\lim_{k\to\infty}g(t_{n_k})$. The value $g(t)$ does not depend on the chosen subsequence and limit point $x$, hence the whole sequence $g(t_n)$ converges, and $g$ is continuous at $t$.
Matrix notion for a double summation
We can start with a quadratic form for vector $\mathbf v = (v_1, v_2, \ldots, v_N)$ and array $A$, which already has some resemblance to what you want: $$ \mathbf v^T A\, \mathbf v =\sum_{i=1}^N \sum_{j=1}^N A_{i,j} v_i v_j.$$ All that is missing is the factor of $\cos\left(\sigma_i + \sigma_j\right)$. An obvious way to deal with that is to set $A_{i,j} = X_{i,j} \cos\left(\sigma_i+\sigma_j\right)$ for $i = 1, \ldots, N$ and $j = 1, \ldots, N$, in which case $$ \sum_{i=1}^N \sum_{j=1}^N b_i b_j X_{i,j} \cos\left(\sigma_i+\sigma_j\right) = \mathbf b^T A\, \mathbf b, $$ though I suspect that's probably not very satisfying. Alternatively, use the identity $$ \cos\left(\sigma_i + \sigma_j\right) = \cos\sigma_i \cos\sigma_j - \sin\sigma_i \sin\sigma_j $$ to write \begin{align} \sum_{i=1}^N \sum_{j=1}^N b_i b_j X_{i,j} \cos\left(\sigma_i+\sigma_j\right) &= \sum_{i=1}^N \sum_{j=1}^N b_i b_j X_{i,j} \cos\sigma_i \cos\sigma_j - \sum_{i=1}^N \sum_{j=1}^N b_i b_j X_{i,j} \sin\sigma_i \sin\sigma_j \\ &= \sum_{i=1}^N \sum_{j=1}^N X_{i,j} (b_i \cos\sigma_i) (b_j \cos\sigma_j) \\ & \qquad - \sum_{i=1}^N \sum_{j=1}^N X_{i,j} (b_i \sin\sigma_i)(b_j \sin\sigma_j) \\ &= \mathbf u^T X\, \mathbf u - \mathbf v^T X\, \mathbf v \end{align} where $u_i = b_i \cos\sigma_i$ and $v_i = b_i \sin\sigma_i$, $i = 1, \ldots, N$.
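A quick numerical check of the identity (my own sketch, with arbitrary random data; in practice one would use NumPy rather than plain lists):

```python
import math
import random

random.seed(0)
N = 5
b = [random.uniform(-1, 1) for _ in range(N)]
sigma = [random.uniform(-3, 3) for _ in range(N)]
X = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]

# Direct double sum
direct = sum(b[i] * b[j] * X[i][j] * math.cos(sigma[i] + sigma[j])
             for i in range(N) for j in range(N))

# Quadratic-form version: u^T X u - v^T X v
u = [b[i] * math.cos(sigma[i]) for i in range(N)]
v = [b[i] * math.sin(sigma[i]) for i in range(N)]
quad = (sum(u[i] * X[i][j] * u[j] for i in range(N) for j in range(N))
        - sum(v[i] * X[i][j] * v[j] for i in range(N) for j in range(N)))

print(abs(direct - quad) < 1e-12)  # True
```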
Integral $\int_{-\infty}^\infty J^3_0(x) e^{i\omega x}\mathrm dx $
It turns out that the Fourier transform of $J_0^3$ can still be expressed in terms of complete elliptic integrals, but it's considerably more complicated than the formula for ${\cal FT}(J_0^2)$: for starters, it involves the periods of a curve $E$ defined over ${\bf C}$ but (except for a few special values of $\omega$) not over ${\bf R}$. Assume $|\omega| < 3$, else $I(\omega) = 0$. Then the relevant curve is $$ E : Y^2 = X^3 - \bigl(\frac{3}{4} f^2 + \frac{27}{2} f - \frac{81}{4}\bigr) X^2 + 9 f^3 X $$ where $$ f = \frac12 \bigl( e + 1 + \sqrt{e^2-34e+1} \bigr) $$ and $$ e = \bigl( |\omega| + \sqrt{\omega^2-1} \, \bigr)^2. $$ Let $\lambda_1, \lambda_2$ be generators of the period lattice of $E$ with respect to the differential $dx/y$ (note that these are twice the periods that gp reports, because gp integrates $dx/2y$ for reasons coming from the arithmetic of elliptic curves). Then: if $|\omega| \leq 1$ then $$ I(\omega) = \left|\,f\,\right|^{5/2}\, \left|\,f-1\right| \frac{\Delta}{(2\pi)^2}, $$ where $\Delta = \bigl|{\rm Im} (\lambda_1 \overline{\lambda_2}) \bigr|$ is the area of the period lattice of $E$. If $1 \leq |\omega| \leq 3$ then $$ I(\omega) = \left|\,f\,\right|^{-4}\, \left|\,f-1\right|^5 (3/2)^{13/2} \frac{\Delta'}{(2\pi)^2}, $$ where $\Delta' = \bigl| {\rm Re}(\lambda_1 \overline{\lambda_2}) \bigr|$ for an appropriate choice of generators $\lambda_1,\lambda_2$ (these "appropriate" generators satisfy $|\lambda_1|^2 = \frac32 |\lambda_2|^2$, which determines them uniquely up to $\pm$ except for finitely many choices of $\omega$). The proof, alas, is too long to reproduce here, but here's the basic idea. The Fourier transform of $J_0$ is $(1-\omega^2)^{-1/2}$ for $|\omega|<1$ and zero else. Hence the Fourier transforms of $J_0^2$ and $J_0^3$ are the convolution square and cube of $(1-\omega^2)^{-1/2}$.
For $J_0^2$, this convolution square is supported on $|\omega| \leq 2$, and in this range equals $$ \int_{t=|\omega|-1}^1 \left( (1-t^2) (1-(|\omega|-t)^2) \right)^{-1/2} \, dt, $$ which is a period of an elliptic curve [namely the curve $u^2 = (1-t^2) (1-(|\omega|-t)^2)$], a.k.a. a complete elliptic integral. For $J_0^3$, we likewise get a two-dimensional integral, over a hexagon for $|\omega|<1$ and a triangle for $1 \leq |\omega| < 3$, that is a period of the K3 surface $$ u^2 = (1-s^2) (1-t^2) (1-(|\omega|-s-t)^2). $$ (The phase change at $|\omega|=1$ was already noted here in a now-deleted partial answer.) In general, periods of K3 surfaces are hard to compute, but this one turns out to have enough structure that we can convert the period into a period of the surface $E \times \overline E$ where $\overline E$ is the complex conjugate. Now to be honest I have only the formulas for the "correspondence" between our K3 surface and $E \times \overline E$, which was hard enough to do, but didn't keep track of the elementary multiplying factor that I claim to be $\left|\,f\,\right|^{5/2}\, \left|\,f-1\right|$ or $\left|\,f\,\right|^{-4}\, \left|\,f-1\right|^5 (3/2)^{13/2}$. I obtained these factors by comparing numerical values for the few choices of $\omega$ for which I was able to compute $I(\omega)$ to high precision (basically rational numbers with an even numerator or denominator); for example $I(2/5)$ can be computed in gp in under a minute as intnum(x=0,5*Pi,2*cos(2*x/5) * sumalt(n=0,besselj(0,x+5*n*Pi)^3)) There were enough such $c$, and the formulas are sufficiently simple, that they're virtually certain to be correct.
Here's gp code to get $e$, $f$, $E$, and generators $\lambda_1,\lambda_2$ of the period lattice: e = (omega+sqrt(omega^2-1))^2 f = (sqrt(e^2-34*e+1)+(e+1)) / 2 E = ellinit( [0, -3/4*f^2-27/2*f+81/4, 0, 9*f^3, 0] ) L = 2*ellperiods(E) lambda1 = L[1] lambda2 = L[2] NB the last line requires use of gp version 2.6.x; earlier versions did not directly implement periods of curves over $\bf C$. For $\omega=0$ we have $e=1$, $f=3$, and $E$ is the curve $Y^2 = X^3 - 27 X^2 + 243 X = (X-9)^3 + 3^6$, so the periods can be expressed in terms of beta functions and we recover the case $\nu=0$ of Question 404222, How to prove $\int_0^\infty J_\nu(x)^3dx\stackrel?=\frac{\Gamma(1/6)\ \Gamma(1/6+\nu/2)}{2^{5/3}\ 3^{1/2}\ \pi^{3/2}\ \Gamma(5/6+\nu/2)}$? .
A query while showing that the Gamma function $\Gamma$ is logarithmically convex for $x \gt 0.$
Show that the infinite product $$\prod_{n=1}^\infty (1+n^{-1}z)^{-1}\exp(z/n)$$ converges uniformly on compact subsets away from the poles $-1,-2,\dots$. Hence $\Gamma$, being a local uniform limit of holomorphic functions on $\mathbb{C}-\{0,-1,-2,\dots\}$, is holomorphic there by Morera.
Find the equation of the plane, in $\bf{r}\cdot n = d$ form
Presumably, what you are looking for is an equation in the form $$ax + by + cz = d$$ with $a,b,c,d\in\mathbb{R}$, which can be expressed in the form $$(x,y,z)\cdot\mathbf{n} = d$$ with $\mathbf{n}=(a,b,c)$ the normal vector of the plane. To find the normal vector of the plane, construct two vectors that are in the plane and are not collinear, and take their cross product. Since you have three non-collinear points that are in the plane, note that $\mathbf{p}_1-\mathbf{p}_2$, $\mathbf{p}_1-\mathbf{p}_3$, and $\mathbf{p}_2-\mathbf{p}_3$ are all vectors that (after suitable translation) lie in the plane. Once you have the normal vector, finding $d$ is a matter of figuring out what value is required by plugging in at least one point in the plane. Example. Suppose you want to find the plane that contains $\mathbf{p}_1=(1,1,1) = \mathbf{i}+\mathbf{j}+\mathbf{k}$, $\mathbf{p}_2=(-3,-2,0) = -3\mathbf{i}-2\mathbf{j}$, and $\mathbf{p}_3 = (0,3,1) = 3\mathbf{j}+\mathbf{k}$. The plane contains vectors parallel to $$\mathbf{p}_1-\mathbf{p}_2 = (1,1,1) - (-3,-2,0) = (4,3,1)$$ and to $$\mathbf{p}_1 - \mathbf{p}_3= (1,1,1)-(0,3,1) = (1,-2,0).$$ So then the vector $(4,3,1)\times(1,-2,0)$ is perpendicular to the plane: $$(4,3,1)\times(1,-2,0) = \left|\begin{array}{rrr} \mathbf{i} & \mathbf{j} & \mathbf{k}\\ 4 & 3 & 1\\ 1 & -2 & 0 \end{array}\right| = \Bigl( 0+2, -(0-1), -8-3\Bigr) = (2,1,-11).$$ Therefore, the plane that contains the three points will have an equation of the form $$2x + y - 11z = d$$ for some $d$. Plugging in $(1,1,1)$, which we know is in the plane, we get $$d = 2(1) + 1 - 11(1) = -8$$ so the plane in question is the plane with equation $$2x + y - 11z = -8.$$ Indeed, you can verify that $(1,1,1)$, $(-3,-2,0)$, and $(0,3,1)$ all satisfy the equation. To write it in "$\mathbf{r}\cdot \mathbf{n}=d$ form", we simply write $$(x,y,z)\cdot(2,1,-11) = -8.$$
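The cross-product recipe above is mechanical enough to automate; here is a sketch of mine (the helper name `plane_through` is my own, not from the answer):

```python
def plane_through(p1, p2, p3):
    """Return (n, d) with n . r = d the plane through three non-collinear points."""
    v1 = [p1[i] - p2[i] for i in range(3)]  # p1 - p2, lies in the plane
    v2 = [p1[i] - p3[i] for i in range(3)]  # p1 - p3, lies in the plane
    # Cross product v1 x v2 gives the normal vector
    n = [v1[1] * v2[2] - v1[2] * v2[1],
         v1[2] * v2[0] - v1[0] * v2[2],
         v1[0] * v2[1] - v1[1] * v2[0]]
    d = sum(n[i] * p1[i] for i in range(3))  # plug in p1 to get d
    return n, d

n, d = plane_through((1, 1, 1), (-3, -2, 0), (0, 3, 1))
print(n, d)  # [2, 1, -11] -8, matching the worked example
```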
Conjugacy classes of $Sym(6)$
In $\text{Aut}(S_6)$ the $4$-cycles don't map to permutations of the shape $(42)$ since the former are odd permutations, and the latter even permutations. The only non-trivial fusions of conjugacy classes are $(2^3)\leftrightarrow(21^4)$, $(3^2)\leftrightarrow(31^3)$ and $(6)\leftrightarrow(321)$. To see $(3^2)\leftrightarrow(31^3)$, consider the permutation $\sigma=(ABC)(DEF)$. This fixes the $1$-factorisation formed of $AB|CF|DE$, $AC|BE|DF$, $AD|BC|EF$, $AE|BF|CD$ and $AF|BD|CE$. As the image of $\sigma$ under an outer automorphism has order $3$ and a fixed point, it has shape $(31^3)$.
Is a subfunctor of a representable functor also representable?
Here is my answer to your first question. Consider the non-trivial category $\mathcal{C}=\{0\circlearrowleft\circlearrowleft\},$ where $0$ has a non-identity morphism $f$ (of order $2$) to itself. Now consider the representable functor $G:\mathcal{C}^{\text{op}}\to\mathbf{Set}$ given by $G(x)=\operatorname{hom}_{\mathcal{C}}(x, 0).$ This has exactly one proper sub-functor, the empty functor, and it is clearly not representable. Added: The full description of $G$ is $0\mapsto \{\operatorname{id}_0, f\}$, $\operatorname{id}_0\mapsto\operatorname{id}_{\{\operatorname{id}_0, f\}}$, $f\mapsto f_*=(\operatorname{id}_0\mapsto f, f\mapsto \operatorname{id}_0)$, and the only proper sub-functor that makes the naturality squares commute is the empty functor.
Two related but opposite variables, how to calculate a score for them?
Use the inverse of your measures:

- deaths / damage taken
- kills / damage dealt

Edit based on OP comment: You can modify the above expressions so that they satisfy your requirements as such:

- damage taken / deaths
- kills / damage dealt

If you want to prevent dividing by zero in the first one, you can use damage taken / (deaths + 1), or assign an otherwise max_value to this scenario.
Convergence of the series $\sum_n\sqrt{a_n\cdot a_{n+1}}$ given that $\sum_na_n$ converges
Try using the inequality $\sqrt{xy}\leq \frac{x+y}{2}$ for $x,y\geq 0$ to compare $\sum\sqrt{a_na_{n+1}}$ to $\sum_n a_n$ and $\sum_{n}a_{n+1}$.