What is the mathematical notation for being in a range above a certain value? | One might simply say "Measure_x is zero on the interval $(1.56,5]$, so we have not shown it on this table."
Notice we use "$($" next to the $1.56$, which indicates that we do not include $1.56$ itself. |
Compute the integral $\int_0^\pi \frac{\ln(1+\cos(t))}{\cos(t)}dt$ using a double integral. | Evaluate $K=\int_0^1dx\int_0^\pi\frac{dt}{1+x\cos t}$ by Fubini's theorem with $u=\tan(t/2)$ viz.$$\int_0^\pi\frac{dt}{1+x\cos t}=\int_0^\infty\frac{2du}{1+x+(1-x)u^2}=\frac{\pi}{\sqrt{1-x^2}}$$so$$K=\int_0^1\frac{\pi dx}{\sqrt{1-x^2}}=\tfrac12\pi^2.$$ |
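As a quick numerical sanity check of the closed form above (a sketch assuming SciPy; the integrand's singularity at $t=\pi/2$ is removable, with limit $1$):
import numpy as np
from scipy.integrate import quad

def integrand(t):
    c = np.cos(t)
    if abs(c) < 1e-12:       # removable singularity: log(1+c)/c -> 1 as c -> 0
        return 1.0
    return np.log1p(c) / c   # log1p keeps precision as c -> -1 near t = pi

val, err = quad(integrand, 0, np.pi, limit=200)
print(val, np.pi**2 / 2)     # both ~4.9348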
Use L'Hospital's rule to prove a certain limit | Yes, $f(x)$ is just a constant with respect to $h$.
But shouldn't it be $h^2$ instead of $2h$ in the denominator of the original limit? |
Attaching map of $(k+l)$-cell of $S^k \times S^l$ with its usual $CW$ structure. | I went and asked for an explanation of this, and the map is given by
viewing the $(k+l)$-disk as $D^k \times D^l$ and then looking at the boundary, $$\partial(D^k \times D^l) = S^{k-1} \times D^l \cup D^k \times S^{l-1}$$
where $S^{k-1} \hookrightarrow S^k$ is the inclusion of the equator and $D^{l} \to S^l$ is the quotient map on the pair $(D^l, \partial D^l)$, collapsing the boundary, for the first portion of the union. Similarly, $S^{l-1} \hookrightarrow S^l$ is the inclusion of the equator and again we use the quotient map on $(D^k, \partial D^k)$. |
Number of iterations to achieve the desired convergence accuracy | Since $a_{i+1}=\left(\dfrac{a_i}{1-a_i}\right)^2$, we have $\dfrac{a_{i+1}}{a_i}=\dfrac{a_i}{(1-a_i)^2}$.
Moreover, $\dfrac1{1-x}\le 1+2x$ whenever $1 \le 1+x-2x^2$, i.e. whenever $0\le x \le \frac12$.
So, if $a_i \le\frac15 < \frac12$,
$$\begin{array}{ll}
a_{i+1}
&=\dfrac{a_i^2}{(1-a_i)^2}\\
&\le a_i^2(1+2a_i)^2 \\
&= a_i^2(1+4a_i+4a_i^2)\\
&= a_i^2(1+4a_i(1+a_i))\\
&\le a_i^2(1+5a_i)\qquad\text{(since $4a_i(1+a_i)\le 5a_i$ when $a_i\le\tfrac14$)}\\
&\le 2a_i^2,
\end{array}$$
so
$$a_{i+k}\le (2a_i)^{2^{k-1}}.$$
Note:
the exponent of $2$ may not be quite right, but the exponent of $a_i$ probably is correct. |
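A quick numerical check of the recurrence and the claimed bound (a sketch; the starting value $a_0=0.2=\frac15$ is an assumption):
a0 = 0.2
a = a0
for k in range(1, 7):
    a = (a / (1 - a))**2            # the recurrence a_{i+1} = (a_i/(1-a_i))^2
    bound = (2 * a0)**(2**(k - 1))  # the claimed (possibly off-by-one) bound
    print(k, a, bound, a <= bound)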
The hypotenuse of an isosceles... | Because you are given two points where the hypotenuse starts and ends, the first thing you should do is find the slope:
$$
m = \frac{y_2-y_1}{x_2-x_1} = \frac{3-1}{1-(-4)} = \frac{2}{5}
$$
With the generic linear equation form $y = mx+b$, plug in $m$, which was derived in the previous step. So, the equation is now $y = \frac{2}{5}x + b$.
In order to find $b$, plug in $(x,y)$ that satisfies the equation. You can choose one of the points of the hypotenuse because one of those two points obviously lies on the line.
$$
1 = \frac{2}{5}(-4) + b \\
1 = \frac{-8}{5} + b \\
b = \frac{13}{5}
$$
Plug $b$ back into the equation, giving you $y = \frac{2}{5}x + \frac{13}{5}$. For a sanity check, I plotted the two points and the line that goes through them – it's a perfect fit! |
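A minimal sketch of the same computation, using the two given endpoints:
(x1, y1), (x2, y2) = (-4, 1), (1, 3)
m = (y2 - y1) / (x2 - x1)   # slope: 2/5
b = y1 - m * x1             # intercept: 13/5
print(m, b)                 # 0.4 2.6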
How find prime numbers $p_{i}$ such $p_{1}+p_{2},p_{2}+p_{3},p_{3}+p_{4},\cdots,p_{n}+p_{1}$ is square number | If they are all odd, then their remainders mod 4 must alternate 1 and 3, and it is impossible for an odd cycle to alternate.
So $p=2$ must appear. It can appear $n$ times; or $n-1$ times, with $7$ appearing once.
If they must all be different, then the neighbours of 2 must both be $3\mod 4$, and again alternation is impossible. |
Prerequisites for Generating functions | The main prerequisite for ordinary generating functions would be a concrete understanding of sequences, sigma notation and its manipulation, the various closed-form expressions for common sums, and basic differential calculus, including Taylor series and knowing what a power series is.
Assuming you have a decent Maths background you shouldn't have too much trouble in covering / refreshing these topics in a day or so.
I would also add that while not a prerequisite, an understanding of Recurrences (Divide & Conquer and Linear) may help you understand why Generating Functions are used.
MIT have a great 25 page document on Ordinary Generating Functions that you may find helpful. It is part of their wider Discrete Mathematics course.
Link here |
Is the field extension $\mathbb{Q}(\sqrt{5 + \sqrt{7}})$ over $\mathbb{Q}$ a Galois extension? | Hint: $K$ is Galois over $F=\Bbb{Q}[\sqrt{7}]$. Suppose $\sqrt{5-\sqrt{7}}$ belonged to $K$. What would the action of $\operatorname{Gal}(K/F)$ on $\sqrt{5+\sqrt{7}}\cdot\sqrt{5-\sqrt{7}}$ be? |
Guillemin-Pollack: "Moebius strip is not orientable" | The pennies are meant as metaphors for those local parametrizations. You start with heads up, and continue to align the next with the last: orientations are preserved. So they all have heads up. After the whole trip, you see the first one and it is tails up. You can't align the last one with the first one. That is the contradiction. |
Calculate $\lim_{x\to 1^{-}} {e^{1 \over \ln{x}} \over \ln^2{x} }$ | Hint: change variable: $u = \frac{1}{\ln x}$ |
Convergence to a finite random variable | In this context, "finite" usually means "almost surely finite", i.e. $|X(\omega)|<\infty$ for almost all $\omega \in \Omega$. Note that this does not imply the boundedness of $X$. |
How and why does one come up with inequalities such as $(n!)^2\leq n^n(n!)<(2n)!\;$? | Inequalities like these are usually discovered by playing with strings of numbers. That's also a good way to discover the intuition behind the inequality. For example take $n = 6$. The three expressions are:
$A = \left(6!\right)^2 = \left(1\cdot2\cdot3\cdot4\cdot5\cdot6\right)^2$
$B = 6^6 \left(6!\right) = \left(6\cdot6\cdot6\cdot6\cdot6\cdot6\right)\left(1\cdot2\cdot3\cdot4\cdot5\cdot6\right)$
$C = \left(2\cdot6\right)! = 1\cdot2\cdot3\cdot4\cdot5\cdot6\cdot7\cdot8\cdot9\cdot10\cdot11\cdot12$
We can go from $A$ to $B$ by replacing one of the $1$s with $6$, and one of the $2$s with $6$, ... and one of the $6$s with $6$. Each of these makes the number bigger, so $A \leq B$.
Similarly, we can go from $B$ to $C$ by replacing one of the $6$s in the first group with $7$, and one of the $6$s in the first group with $8$, ... and the last one of the $6$s in the first group with $12$. Each of these makes the number bigger, so $B \lt C$.
THIS IS NOT A PROOF but it shows WHY the inequalities are true. |
Prove continuity and find norm | (i). The absolute value of $\lim_na_n$ cannot be more than $\sup_n|a_n|,$ so $|h(a)|\leq \|a\|.$
(ii). If $a_n=1$ for all $n$ then $h((a_n)_n)=1=\|(a_n)_n\|.$ |
algebraic word problem involving algebraic manipulation | It looks like you accidentally divided 550 in half instead of doubling it. That should fix your problem, because $2(550)=1100$ and $1100-200=900$. |
Poisson Approximation to Normal Distribution | A useful tool to measure whether your data is normally distributed is the coefficient of variation, which is $\frac{\text{standard deviation}}{\text{average}}$. This uses the first two moments, and in practice, if this ratio stays below $0.5$, you can assume a normal distribution, which is the case in your question. Just use the normal distribution and not the Poisson distribution. Because negative patient counts can't happen, maybe you can also go for the gamma distribution. |
Show that $h(x) = pf(x) + (1 − p)g(x)$ is a PDF. | Good job for part $(a)$.
For part $(b)$,
Let $$Z= \begin{cases} X, & \text{with probability } p\\ Y, & \text{with probability } 1-p\end{cases}$$
By the law of total probability,
then \begin{align}
P(Z \le z) &= pP(X \le z)+(1-p)P(Y \le z)
\end{align}
Differentiating the equation above with respect to $z$, we obtain $h$. |
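A Monte Carlo sketch of this mixture construction (assuming NumPy/SciPy, with an arbitrary illustrative choice $X\sim N(0,1)$, $Y\sim\mathrm{Exp}(1)$, $p=0.3$):
import numpy as np
from scipy.stats import norm, expon

rng = np.random.default_rng(0)
p, n = 0.3, 10**6
pick = rng.random(n) < p          # choose X with probability p, else Y
z = np.where(pick, rng.normal(size=n), rng.exponential(size=n))
t = 0.7
print((z <= t).mean(), p * norm.cdf(t) + (1 - p) * expon.cdf(t))  # ~equal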
Prove $D_x(\int_a^x f(t)dt) = f(x)$ | Your first line has a serious problem: $\int f(t)dt$ is not a function of $t$; the indefinite integral is a family of functions, not a single function. So saying "Let $\int f(t)dt= F(t)$" does not make sense. So as soon as you try to get started, you are already in trouble.
Now, you can say, "well, pick some antiderivative instead of taking the whole family." But if you try to define $F$ as "pick some antiderivative of $f$", then your problem is that you have no way to guarantee that there is such a thing in the first place! The whole point of this result is showing that there is an antiderivative, so you cannot assume there is one to start with.
Of course, if you assume the First Fundamental Theorem of Calculus and that $f(t)$ has an antiderivative, then this result is very easy, exactly as you do it: $\int_a^x f(t)dt = F(x)-F(a)$, so the derivative with respect to $x$ is $F'(x)=f(x)$. But this assumes that there is an antiderivative for $f$ and that the FTC (Part 1) holds; but since this theorem is often used to prove Part 1, that could also make your argument circular.
But, really, the major flaw is that you are assuming that there is an antiderivative for $f$ in the first place (you can prove FTC (Part 1) without this result, so the circularity problem is not fatal). |
Prove $|h(x)|\leq (Mx^2)/2$ | Define
$$
g:[0,a]\rightarrow \mathbb{R}, g(x) = h(x) - M\frac{x^2}{2} \, .
$$
Then $g(0) = g'(0) = 0$ and $g''(x) \le 0$ in $[0, a]$.
Apply the MVT twice to obtain $g(x) \le 0$ in $[0, a]$.
For the other direction, use $\tilde g(x) = h(x) + M\frac{x^2}{2}$. |
Changing of variables from Cartesians to polars | To change from Cartesian to polar coordinates, write $$x = r\cos\theta$$ $$y = r\sin\theta$$ and then calculate $dx$ and $dy$ in terms of $dr$ and $d\theta$ the way we usually do in calculus, i.e. $$dx =dr\cos\theta - r\sin\theta \, d\theta$$ Then you may calculate $$dx^2 + dy^2 = dr^2 + r^2 d\theta^2$$ and you then have your expression of the metric tensor in polar coordinates, i.e. $$ \frac{1}{g(r)^2}(dr^2 + r^2d\theta^2)$$ |
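A symbolic check of this computation (a sketch assuming SymPy; dr and dtheta are treated as formal symbols):
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
dr, dth = sp.symbols('dr dtheta')
x, y = r * sp.cos(th), r * sp.sin(th)
dx = sp.diff(x, r) * dr + sp.diff(x, th) * dth
dy = sp.diff(y, r) * dr + sp.diff(y, th) * dth
print(sp.trigsimp(sp.expand(dx**2 + dy**2)))   # dr**2 + dtheta**2*r**2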
How can I compute an expression for $P(X_1>X_2>X_3>X_4)$ if $X_1,X_2,X_3,X_4$ are normal and mututally independent? | \begin{align}
&\mathbb{P}\left(X_1>X_2>X_3>X_4\right)\\
&=\int_{x_1>x_2>x_3>x_4}f(x_1)f(x_2)f(x_3)f(x_4){\rm d}x_1{\rm d}x_2{\rm d}x_3{\rm d}x_4\\
&=\int_{-\infty}^{\infty}f(x_1){\rm d}x_1\int_{-\infty}^{x_1}f(x_2){\rm d}x_2\int_{-\infty}^{x_2}f(x_3){\rm d}x_3\int_{-\infty}^{x_3}f(x_4){\rm d}x_4\\
&=\int_{-\infty}^{\infty}f(x_1){\rm d}x_1\int_{-\infty}^{x_1}f(x_2){\rm d}x_2\int_{-\infty}^{x_2}F(x_3)f(x_3){\rm d}x_3\\
&=\int_{-\infty}^{\infty}f(x_1){\rm d}x_1\int_{-\infty}^{x_1}f(x_2){\rm d}x_2\int_{-\infty}^{x_2}F(x_3){\rm d}F(x_3)\\
&=\int_{-\infty}^{\infty}f(x_1){\rm d}x_1\int_{-\infty}^{x_1}\frac{1}{2}F^2(x_2)f(x_2){\rm d}x_2\\
&=\int_{-\infty}^{\infty}f(x_1){\rm d}x_1\int_{-\infty}^{x_1}\frac{1}{2}F^2(x_2){\rm d}F(x_2)\\
&=\int_{-\infty}^{\infty}\frac{1}{6}F^3(x_1)f(x_1){\rm d}x_1\\
&=\int_{-\infty}^{\infty}\frac{1}{6}F^3(x_1){\rm d}F(x_1)\\
&=\frac{1}{24}F^4(x)|_{-\infty}^{\infty}\\
&=\frac{1}{24}.
\end{align} |
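A Monte Carlo sanity check of the $1/24$ answer (a sketch assuming NumPy):
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(10**6, 4))
hits = (x[:, 0] > x[:, 1]) & (x[:, 1] > x[:, 2]) & (x[:, 2] > x[:, 3])
print(hits.mean(), 1 / 24)   # both ~0.0417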
Analysis of calls to a call center using poisson distribution | A standard 95% CI for $\lambda$ for a particular time period based on
counting $X$ events in that time period is $X \pm 1.96\sqrt{X}.$
This is sometimes called a Wald interval; it uses a normal
approximation, and so works best for fairly large $X$.
A slight improvement is the CI $X + 2 \pm 2\sqrt{X + 1}.$
Its coverage probability tends to be closer to 95%. This interval
is based on the test of $H_0: \lambda = \lambda_0$ against
$H_a: \lambda \ne \lambda_0$, using the test statistic
$Z = (X - \lambda_0)/\sqrt{\lambda_0},$ rejecting $H_0$ when $|Z| > 1.96.$
If you 'invert the test' by solving for values $\lambda_0$
with $|Z| \le 2,$ you get the interval $X + 2 \pm 2\sqrt{X + 1}$
of values of $\lambda_0.$
Suppose $X = 25.$ Then in R, a simple 'grid search' gives the
interval $(16.8, 37.2):$
lam = seq(0, 50, by=.001); z = (25 - lam)/sqrt(lam)
int = (abs(z) <= 2)
min(lam[int]); max(lam[int])
## 16.802
## 37.198
while the formula gives
(25 + 2) + c(-2,2)*sqrt(25 + 1)
## 16.80196 37.19804
Note: The relatively simple form of the 'improved' 95%
confidence interval depends on conflating 1.96 and 2. For
other confidence levels, the algebra is very slightly messier. |
Multiple Integration, polar coordinates | $$\cos^2\theta-\sin^2\theta=\cos2\theta\;.$$ |
Evaluate the least significant decimal digit of $109873^{7951}$ | $\varphi(10)$ is $4$ because $4$ of the integers from $1$ to $10$ are coprime to $10$, namely $\{1,3,7,9\}$.
We can also compute this by the rules that $\varphi(ab)=\varphi(a)\varphi(b)$ when $a$ and $b$ are coprime, and $\varphi(p)=p-1$ when $p$ is prime, so
$$ \varphi(10)=\varphi(2)\varphi(5)=(2-1)(5-1)=1\cdot 4 = 4$$
Since $10$ is square-free, a generalization of Euler's theorem states that $a^k\equiv a^m\pmod{10}$ when $k\equiv m\pmod{\varphi(10)}$ (for $k,m\ge 1$).
So since $7951\equiv 3\pmod 4$ we have
$$ 109873^{7951} \equiv 109873^3 \pmod{10} $$
And it is easy to compute the last digit of this: just ignore everything to the left of the ones column when multiplying, so the last digit of $109873^3$ is the same as the last digit of $3^3$. |
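Python's built-in three-argument pow confirms this directly:
print(pow(109873, 7951, 10))   # 7, matching the last digit of 3**3 = 27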
Co-variance of two dependent Gaussian variables | $$E[xy]=E[x(cx+v)]=E[cx^2+xv]$$
and $x$ and $v$ are independent, so $E[xv]=E[x]\,E[v]$. |
The base-10 integers 36, 64, and 81 can be converted into other bases so that their values are represented by the same digits | Hint:
$$
36=6^2 \qquad 64=8^2 \qquad 81=9^2
$$ |
What is the number of $4$-tuples $(a,b,c,d)$ where $a,b,c,d\in \mathbb{Z^+}$ satisfy the following $3$ equations? | Big Hint: For integers $a,b$, having $a^3=b^2$ means that $a$ is a perfect square and $b$ is a perfect cube. Likewise for $c$ and $d.$ Then if $a=a_0^2,$ you get $b=a_0^3,$ and $c=c_0^2,d=c_0^3.$ Now you need:
$$c-a=(c_0-a_0)(c_0+a_0)=64.$$
There are only so many ways to factor $64,$ and for each you can solve for $c_0$ and $a_0$ and get a solution $(a,b,c,d)=(a_0^2,a_0^3,c_0^2,c_0^3)$ when $a_0,c_0$ are both positive integers.
Note that my $c_0=\frac{d}{c}$ and $a_0=\frac{b}{a}$ from your beginnings.
You are assuming the factorization of $64$ is $(\pm 8)(\pm 8).$ But there are other factorizations of $64,$ and just because $UV=8\cdot 8$ doesn't mean that $U=\pm 8,V=\pm 8.$ Indeed, given that $U=c_0-a_0\neq c_0+a_0=V,$ you probably want other factorizations. When $U=V$ here, you get $a_0=0$ and hence $a=0,$ which violates the condition that $a$ is positive. |
trigonometry law of cosines | There's no way you should get two solutions. Set $c=725$, $a=650$, $b=575$ and use the law of cosines:
$$c^2 = a^2 + b^2 - 2ab \cos \gamma$$
And solve for $\cos\gamma$:
$$\cos\gamma = {a^2+b^2-c^2\over 2ab}\approx 0.304$$
Solving then for $\gamma$ gives:
$$\gamma = \pm\cos^{-1}{a^2+b^2-c^2\over 2ab} + 2n\pi \approx \pm 1.262+2n\pi \approx \pm72.28^\circ + n 360^\circ$$
Obviously the negative of $\pm$ and $n$ other than $0$ are out of range.
Also geometrically we know that the angles are uniquely determined by the length of the sides due to congruence.
The error you probably made was to confuse the general solutions of equations of the form $\sin v=A$ and $\cos v=A$. |
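A quick numerical check that the angle is unique (using Python's math module):
import math

a, b, c = 650, 575, 725
cos_gamma = (a*a + b*b - c*c) / (2*a*b)
print(cos_gamma)                           # ~0.3043
print(math.degrees(math.acos(cos_gamma)))  # ~72.28 degrees, a single value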
How to show that the gradient descent for unconstrained optimization can be represented as the argmin of a quadratic? | Define $$g(x) = f(x_{k-1}) + \langle x - x_{k-1}, \nabla f(x_{k-1})\rangle + \tfrac{1}{2t_k}\|x-x_{k-1}\|_2^2$$
Note that $x_{k-1}$ and $t_k$ are constants in this context; the only variable is $x$. This is obviously a strongly convex quadratic function, so it is guaranteed to have a unique minimum. And it is differentiable, so we can determine that minimum by finding the point with zero gradient:
$$\nabla g(x) = \nabla f(x_{k-1}) + \tfrac{1}{t_k}(x-x_{k-1}) = 0$$
Isolate $x$ and you do indeed get
$$x = x_{k-1} - t_k \nabla f(x_{k-1})$$
In fact, with a bit of care, you can show that
$$g(x) = f(x_{k-1}) + \tfrac{1}{2t_k}\|x-(x_{k-1}-t_k\nabla f(x_{k-1}))\|_2^2 - \tfrac{t_k}{2}\|\nabla f(x_{k-1})\|_2^2$$
which obviously obtains its minimum at the given point, and that minimum value is $$g(x_k) = f(x_{k-1}) - \tfrac{t_k}{2}\|\nabla f(x_{k-1})\|_2^2$$ |
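A numerical sketch of the claim (assuming SciPy, with a hypothetical smooth $f$, iterate $x_{k-1}$ and step $t_k$): the minimizer of $g$ coincides with the gradient step.
import numpy as np
from scipy.optimize import minimize

f = lambda x: np.sum(x**2) + np.sin(x[0])               # hypothetical f
grad = lambda x: 2 * x + np.array([np.cos(x[0]), 0.0])  # its gradient
xk1, tk = np.array([1.0, -2.0]), 0.1                    # assumed iterate, step

g = lambda x: f(xk1) + (x - xk1) @ grad(xk1) + np.sum((x - xk1)**2) / (2 * tk)
print(minimize(g, xk1).x)      # numerical argmin of the quadratic model
print(xk1 - tk * grad(xk1))    # the gradient step -- the same point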
Not following what's happening with the exponents in this proof by mathematical induction. | Perhaps writing shortly a justification for each step can help. Let us denote
$$\begin{align*}&2^k>k^2=\color{red}{I.H.} =\text{ Inductive Hypothesis. This is what you assume after showing for k = 5}\\&k^2=k\cdot k>4k=\color{blue}{B.A.}=\text{Basic Assumption. From here we start}\\&4k\ge 2k+1=\color{green}{S.A.}=\text{Simple Algebra:}\;\;4k\ge 2k+1\Longleftrightarrow 2k\ge 1\end{align*}$$
So now your proof can look as follows:
$$2^{k+1}\stackrel{\text{exponents law}}=2\cdot 2^k\stackrel{\color{red}{I.H.}}>2\cdot k^2=k^2+k^2\stackrel{\color{blue}{B.A.}}>k^2+4k\stackrel{\color{green}{S.A.}}\ge k^2+2k+1=(k+1)^2$$ |
Are regular Hausdorff $G_\delta$ spaces normal? | The Moore plane is an example. It is Tychonoff and closed subsets are $G_\delta$, but it is not normal.
That this space is Tychonoff and not normal is mentioned in the wiki article linked above. For the fact that it has closed sets $G_\delta$, see this post: Moore plane / Niemytzki plane and the closed $G_\delta$ subspaces. |
A conjecture related to Viviani's theorem | I suppose you refer to three segments $\alpha$, $\beta$ and $\gamma$ originating from a point $P$ inside an equilateral triangle and perpendicular to its three sides. Polygon $A$ is a parallelogram having $\alpha$ and $\beta$ as sides and $B$ is an equilateral triangle with side $\gamma$.
If you intend those properties to hold for any $P$ inside the triangle, then they are certainly false. To check, take for instance $P$ on the left side: then $\alpha=0$ and $\gamma>0$, whereas $\alpha>0$ and $\gamma=0$ if $P$ is on the right side.
EDIT.
Asking for $\alpha$, $\beta$, $\gamma$ to be positive doesn't change the situation: you can place $P$ so that $\gamma$ is almost vanishing while $\alpha$ and $\beta$ are not (diagram below, left), or vice versa (diagram below, right).
EDIT 2.
If you are asking, instead, for the locus of points such that $\gamma^2≷2\alpha\beta$, then it is not difficult to find that $\gamma^2=2\alpha\beta$ along an arc of ellipse, passing through two vertices of the triangle and tangent there to two sides.
To prove that, it is convenient to use an oblique system of coordinates with $x$ axis along side $AB$ of the triangle and $y$ axis along side $AC$ (see diagram below).
For simplicity I also set $AB=AC=1$.
Coordinates $(x,y)$ of point $P$ have a simple relation with $\alpha$ and $\beta$:
$$
\alpha={\sqrt3\over2}y,\quad \beta={\sqrt3\over2}x
\quad\text{and}\quad
\gamma={\sqrt3\over2}-\alpha-\beta\quad
\text{by Viviani's theorem.}
$$
Plugging these into $\gamma^2=2\alpha\beta$ gives a simple equation for $P$:
$$
(x-1)^2+(y-1)^2=1.
$$
In orthogonal coordinates that would be the equation of a circle with center $O=(1,1)$ and unit radius. In oblique coordinates that is instead the equation of an ellipse, whose properties can be deduced from the equation:
the ellipse is tangent to coordinate axes at $B$ and $C$ and has center $O$;
the coordinates of its vertices $R$, $S$, $U$, $V$ can be found from the intersection of the ellipse with lines $AO$ and $UV$ having equations $y=x$ and $x+y=2$; it turns out that those vertices have coordinates ${1\pm 1/\sqrt2}$;
from that it follows that $RO=\sqrt{3/2}$ and $UO=1/\sqrt2$.
Once you know the length of the axes of the ellipse and their direction, the ellipse is completely defined. |
Definition of group action. | An action is a way of using the elements of a group (the "actors") to "move" each element of a set (the "acted upon") to an element of the same set. Therefore, an action is first and foremost a map $\mathcal{A}\colon G\times X \to X$. The usual pair of conditions qualifying such a map as an action is nothing other than the generalization to abstract groups of the basic properties enjoyed by the natural action of the "prototypical" group $\mathrm{Sym}(X)$, namely:
$Id_X(x)=x, \forall x \in X$
$(\sigma\tau)(x)=\sigma(\tau(x)), \forall \sigma,\tau\in \mathrm{Sym}(X), \forall x\in X$
You can prove that, given an action $\mathcal{A}$ as above, the assignment $\varphi(g)(x):=\mathcal{A}(g,x)$ defines a group homomorphism $\varphi\colon G\to \mathrm{Sym}(X)$. Conversely, given a group homomorphism $\varphi\colon G\to \mathrm{Sym}(X)$, the assignment $\mathcal{A}(g,x):=\varphi(g)(x)$ defines a $G$-action on $X$. |
Extreme points of polytopes | The claim is false for all $N \geq 2$.
Let $e_1, \dots, e_N$ be the standard basis of $\mathbb{R}^N$. Let $F$ be the convex hull of $\{e_1, \dots, e_N\} \cup \{-e_N\}$. Then $X(F) = \{e_1, \dots, e_N\} \cup \{-e_N\}$. Let $S = \{(x_1, \dots, x_N) \in \mathbb{R}^N : x_N = 0\}$. Then $X(F) \cap S = \{e_1, \dots, e_{N-1}\}$ spans $S$, but $0 \notin X(F)$ is an extreme point of $F \cap S$.
The case $N=2$ is easy to visualize; there $F$ is the triangle with vertices $(1,0), (0,1), (0,-1)$ and $S$ is the $x$-axis. |
Why is a polynomial with infinitely many zeros the zero polynomial? | A polynomial $f(x)$ has by definition a finite degree $n$, given by the highest power of the variable $x$ appearing in it. If you multiply two polynomials, the degrees add (if the underlying coefficient ring has no zero divisors, as in the case of a field). Only the zero polynomial has all elements (in the coefficient ring or an extension ring) as zeros. A polynomial as described cannot exist. |
Did Russell correct his proof of Peano Postulates as was in the second edition of Principia Mathematica? | Regarding the title's question:
Did Russell correct his proof of the Peano Postulates in the second edition of Principia Mathematica?
the answer is no.
We have to look at the detailed treatment in:
Bernard Linsky, The Evolution of Principia Mathematica : Bertrand Russell’s Manuscripts and Notes for the Second Edition (2011), Ch.6 : Induction and types in Appendix B, page 138-on.
The topic of Appendix B is the principle of mathematical induction which, together with the definition of numbers as classes of equinumerous classes, forms the heart of the logicist account of arithmetic [138].
Theorem ∗89·12 is important for the rest of the appendix, but also problematic,
as it seems to violate the theory of types. On the surface it asserts that every
inductive or finite class of order 3 is identical with some class of order 2 [...] The argument seems to be that since each finite class can be seen as built up from one individual by a finite number of uses of this operation, that class must itself be definable by a function of order 2. That itself can be seen to be fallacious, using the models later developed by Myhill, but what’s more, the claim of identity of these classes of differing types rather than mere co-extensiveness raises issues about what notion of types is in force in the Appendix. These issues arise from the technical objections to Appendix B by
Gödel and Myhill, to be considered below. [147].
The crucial lemmas needed for this most important theorem of Appendix B are
∗89·12 and ∗89·17. The rest, despite the great amount of complication, and the
difficulty of the notation, seems to follow directly. It is clearly a sign of the degree to which Gödel and Myhill studied and understood the content of Appendix B that it is precisely those two theorems that drew their attention. As we shall see, the first requires that there be a new use of types in Appendix B, and the second is simply mistaken [148].
The “non-standard” models of arithmetic used in Myhill’s proof should also cast
doubt on Russell’s belief, as evidenced in ∗89·16, that sets of finite numbers can always be defined using limited logical resources [...] Reflection on Gödel’s theorems about systems of arithmetic led Hao Wang to assert that the project of Appendix B was impossible already in 1963, well before Myhill’s paper [150].
Myhill’s argument, however, used an obvious and natural semantic interpretation. It is understandable, then, that it became common knowledge that Appendix B had been vitiated. But the reputation of Appendix B had already suffered worse [151].
Kurt Gödel found an error in the proofs of Appendix B in his “Russell’s mathematical logic” (1944,pp.145–6) [151].
Gödel took this objection, and it appears that he wrote to Russell about it. In the only known letter from Gödel to Russell, which survives in draft form among Gödel’s papers but was likely typed and then sent, he expresses the hope that Russell’s decision not to reply to his article is not due to the mistaken impression that nothing in it would be controversial. He writes:
I am advocating in some respects the exact opposite of the development inaugurated by Wittgenstein and therefore suspect that many passages will contradict directly your present opinion. Furthermore I am criticizing the vicious circle principle and the appendix B of Principia, which I believe contains formal mistakes that make the proof invalid. The reader would probably find it very strange if there is no reply.
There is no record of a response from Russell. Indeed, in My Philosophical Development (p.89), written in 1959, Russell still refers to the second edition of PM as successful in showing that the axiom of reducibility is not “indispensible ... in all uses of mathematical induction”. So there is no published reply from Russell [152-53].
It appears, then, that despite the allegations of errors throughout Appendix B,
and the proof by Myhill that the main result does not hold, there is really only one definite example of an error in a proof, namely, line (3) in the proof of ∗89·16. That error was first identified by Gödel, then Myhill singled it out as well as one of only two examples of what he suggests are many [153-54].
Gödel says that (3) is “evidently false”. Myhill, as we have seen, seems to follow Gödel, and quotes (3) in full, presenting it as one of the two examples he actually gives of the “many superficial” errors that “can be corrected in various ways”. While Myhill says that he gave up on Appendix B because of these many mistakes, Gödel responds differently. He was interested enough in the argument to still seek some explanation from Russell for what he describes as “formal mistakes”.
What is the mistake? Line (3) of the proof of ∗89·16 is stated without justification, and does not follow from an earlier line in the proof. [...] However, (3) is in fact only true in particular cases. It is easy to find counterexamples to (3) [155].
The history of the Appendix B manuscript is consistent with this assessment of
the mistake as being a simple oversight. ∗89·16 is on page 9 of the manuscript
of Appendix B. [...] Is this mistake easily corrected? As (3) constitutes the inductive step in a proof by induction, it is crucial to the argument. Could a different inductive property be used? [156].
It is tempting to hypothesize that Gödel realized that the project of Appendix B was impossible, precisely because of the impossibility of constructing a categorical theory of arithmetic with only limited orders of quantifiers; in other words, that something like Myhill's result would be provable. It would be no accident, then, that Gödel found the mistake at line (3) in ∗89·16. There would have to be a mistake somewhere, and someone with the notion of non-standard numbers in mind would have the counter-example to (3) ready to mind.
Russell, on the other hand, working before Gödel’s results, assumed that he
could capture the structure of the numbers with limited resources. One can imagine him checking the validity of (3) with a model of initial segments of the numbers, and finding it to be an obvious truth. Line (3) would be correct in the case where it is applied if the goal of the project of Appendix B were true. Seen this way, the proof of ∗89·16 fails because it begs the question. Gödel was not trying to get Russell to confess to having made an error of elementary set theory when he wrote asking Russell to respond to his paper. Instead he pointed to an exact point in a technical proof around which to crystalize the larger question of whether Russell's project was bound to fail. [157-58]
Gödel’s paper was published in 1944, Myhill’s in 1974. It then took until 1991
for the next discussion of the proof in Appendix B to appear in print. This was in Davoren and Hazen (1991), an abstract of a talk to the Association for Symbolic Logic [166].
In his (1996) and (2007), Gregory Landini also proposes that the logic of the
second edition is modified [...] and consequently relaxes the conditions on well-formed formulas. He adds to this a certain axiom of extensionality, and so produces a proof of ∗89·17, thus finally “rectifying induction”, as he describes Russell’s project. [...] Landini is indeed able to “rectify” induction by proving it without using the axiom of reducibility, and with an axiom of extensionality, in the new theory of types that he, Gödel, and Hazen each found in Appendix B. Unfortunately in that system of types the new axiom of extensionality Landini proposes is strong enough to prove the axiom of reducibility. [...] Yet a principle stronger than the axiom of reducibility, which so directly entails the axiom, would hardly seem “less objectionable”, however well it is justified. This does not seem to be a genuine “rectification” of induction in the system of the second edition [166-68].
Gödel was right that there is a mistake at line (3) in the proof of ∗89·16. Russell did not notice the mistake, which survived several revisions of the appendix intact. The error was to cite without proof an assertion of elementary set theory which is not true in general, but which would hold in the situation of the hypothesis of the argument if the theorem were correct. Russell’s error is to beg the question in his proof. Myhill was right to focus on ∗89·12 as problematic. He says that it is either ill formed, or trivial. It is indeed intended in the sense that he finds ill formed, as identifying a class of one order with a class of another order. Other theorems, and a line of comment, in the manuscripts, but not the published version, verify that ∗89·12 is also intended and not a result of a slip or inattention. The explanation is
as Gödel suggested, there must be a new theory of types in Appendix B, one which
makes ∗89·12 well formed.
Myhill’s result about the system of ramified types in the first edition of PM, with his proposed semantic interpretation, is quite correct. It is impossible to derive a principle of induction using a principle of extensionality but not an axiom of reducibility, in the system of the first edition. It is an open question whether some other plausible extensional system of ramified type theory (or so-called “predicative analysis”) will have such a result [168].
So Gödel is right to say that the issue concerning induction is
still open, and that Appendix B as it stands cannot be easily repaired. In particular, the difficulties extend beyond the elementary mistake in line 3 of the demonstration of ∗89·16. [...] The errors in Appendix B were serious, and not easily remediable, but not because they were based on superficial confusions and slipshod notations. Rather, it seems, they represented the state of logic in 1924. The notions needed to understand the logical resources necessary for defining the structure of the natural numbers were not yet developed. It was the notions of model theory and the study of the power of deductive theories of arithmetic in the 1930s that would finally allow a better understanding of these issues. As well, it seem clear that Russell had new ideas about the hierarchy of types, ideas that were not fully worked out. The ideas of extensionality and the more thorough-going atomism from his own earlier “Philosophy of logical atomism”, and Wittgenstein’s Tractatus, had influenced Russell’s ideas of the structure of the hierarchy of types [169]. |
A question of sequences | Hint
$$\frac{a_1+a_2+...+a_n}{n}=n \rightarrow S=a_1+a_2+...+a_n=n^2$$
So,
$$a_{17}=(a_1+a_2+...+a_{17})-(a_1+a_2+...+a_{16})$$
Can you finish?
For a general approach you can write:
$$a_n=(a_1+a_2+...+a_{n})-(a_1+a_2+...+a_{n-1})=n^2-(n-1)^2=2n-1$$
And the sum is of course given by:
$$S=a_1+a_2+...+a_{n}=n^2$$ |
Find differential. Is it the same as finding dy? | Yes. The definition of a differential, at least according to Stewart's calculus text, is
$$dy= f'(x)dx,$$
where $dx$ and $x$ are independent variables. What this provides is the best linear approximation you can get, namely $$f(a+dx)\approx f(a)+dy.$$ |
Plot $Im(\zeta(s))=0$ in complex plane | Sure, the Mathematica command
ComplexPlot[Im[Zeta[s]], {s, -10 - 10 I, 10 + 10 I},
PlotLegends -> Automatic]
does it, and produces the plot below (notice that the color corresponds to the argument, so red is positive, and blue(?) is negative). I am not sure how much of the complex plane you want, but this is in the square with vertices at $-10 - 10 i$ and $10+10 i.$ |
Each node has three edges, proof that we can make 4 pairs of nodes which are not connected | Hint
Your computation will work, but it is indeed a bit tedious, and would quickly become infeasible with a higher number of nodes. If you want to do something "neat":
Call $G=(V,E)$ your graph, on 8 nodes, 3-regular, hence on 12 edges. Note that your problem is equivalent to
Find a perfect matching in the complement of $G$
Indeed, if you find a perfect matching in $G^c$ (the complement of $G$), then you will have a way to pair all nodes, with no edges (in $G$) between the elements of a pair.
Now your graph $G^c$ has 8 nodes and is 4-regular. Do you have notions of perfect matchings, and of how to prove they exist?
To finish you could
Say that you have a 4-regular graph with $n=8$ nodes; hence the graph must be connected. Hence the graph contains an Eulerian cycle. Select every other edge in this cycle and you end up with a 2-factor, made up of a collection of even cycles. Select every second edge in these cycles and you obtain the desired perfect matching. |
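A sketch of this reduction using NetworkX (assuming the networkx package; on an unweighted graph, max_weight_matching returns a maximum-cardinality matching):
import networkx as nx

G = nx.random_regular_graph(3, 8, seed=0)   # a 3-regular graph on 8 nodes
Gc = nx.complement(G)                       # its 4-regular complement
M = nx.max_weight_matching(Gc)
print(M, len(M) == 4)                       # 4 pairs, none adjacent in G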
Total differential of weighted average | The formula that you calculate using derivatives gives you the instantaneous change at a certain time. When you calculate with differences of $y$, you get the average change. Also note that $x_1,x_2,w_1,w_2$ might not change linearly, so $\Delta x_1=x_1(t_2)-x_1(t_1)$ is just an approximation to $dx_1$. The same goes for the other variables. You need to get more information about how your variables behave. Are you trying to solve a differential equation? |
5th order multivariable taylor polynomial | Expand $e^t=P_5(t)+O(t^6)$ and $\cos(t)=Q_5(t)+O(t^6)$ and $\dfrac{1}{3-6t}=\dfrac{1}{3} \sum\limits_{k=0}^\infty (2t)^k=R_5(t)+O(t^6)$. Also, $\arctan(t)=\int \dfrac{1}{1+t^2} dt=\int \sum\limits_{k=0}^{\infty}(-t^2)^k dt=\sum\limits_{k=0}^\infty \int (-1)^kt^{2k} dt=S_5(t) + O(t^6).$ The rest is algebra. |
For iid random variables $Y_1, \ldots, Y_n$ with cdf $F$, what is $P(Y_n > \max(Y_1, \ldots, Y_{n-1}))$? | Your equality is not correct. The condition $Y_n > \max\{Y_1, \cdots, Y_{n-1}\}$ is not equivalent to $Y_n > Y_{n-1} > \cdots > Y_1$, and even if it were, you could not split the probability in your way because the events are not independent.
The idea is to use the symmetry. By the assumption, the probability that any two of $Y_1, \cdots, Y_n$ is equal is zero. Thus we have
$$ \Bbb{P}(Y_n > \max\{Y_1, \cdots, Y_{n-1}\}) = \Bbb{P}(Y_n = \max\{Y_1, \cdots, Y_n\})$$
Moreover, since $Y_1, \cdots, Y_n$ are i.i.d, the joint distribution does not change if we relabel them. Thus
$$ \Bbb{P}(Y_n = \max\{Y_1, \cdots, Y_n\}) = \Bbb{P}(Y_k = \max\{Y_1, \cdots, Y_n\}), \qquad k = 1, \cdots, n.$$
Therefore
$$ \Bbb{P}(Y_n > \max\{Y_1, \cdots, Y_{n-1}\}) = \frac{1}{n}\underbrace{\sum_{k=1}^{n} \Bbb{P}(Y_k = \max\{Y_1, \cdots, Y_n\})}_{=1} = \frac{1}{n}. $$ |
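A Monte Carlo check of the $1/n$ answer (a sketch assuming NumPy, with $n=5$):
import numpy as np

rng = np.random.default_rng(2)
n, trials = 5, 10**6
y = rng.normal(size=(trials, n))
print((y[:, -1] > y[:, :-1].max(axis=1)).mean(), 1 / n)   # both ~0.2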
understanding the homogeneous space $SL(2,\mathbb C)\times \mathbb C\cong SU(2)\times S^1$? | This is false. $SU(2)$ is $S^3$ topologically, so its product with $S^1$ is compact. $SL(2, \mathbb{C})$ and $\mathbb{C}$ are both not compact, so neither is their product. Nor do the dimensions match. |
Computing $\ E[Y^2] $ when $Y$ is a piecewise function of $X \sim Pois(2) $ | \begin{align}
E[Y^2]
&= \sum_{x=0}^3 (2x)^2 Pr(X=x)+ \sum_{x=4}^\infty x^2 Pr(X=x)\\
&=\sum_{x=0}^3 4x^2 Pr(X=x)+ \sum_{x=4}^\infty x^2 Pr(X=x)\\
&=\sum_{x=0}^3 3x^2 Pr(X=x)+ \sum_{x=0}^\infty x^2 Pr(X=x)\\
&=3\sum_{x=1}^3 x^2 Pr(X=x)+ E[X^2]\\
&= 3\left( Pr(X=1) + 4Pr(X=2)+9Pr(X=3) \right) + E[X^2]
\end{align} |
Outer Measure of a countable union of disjoint intervals | I think there is more to say than what you have when you wrote down the list of equalities.
I will use the fact that if $(I_n)$ and $(J_k)$ are sequences of intervals such that $\bigcup\limits_{n=1}^{\infty} I_n = \bigcup\limits_{k=1}^{\infty} J_k$ and the $(I_n)$ are pairwise disjoint, then $\sum_{n=1}^{\infty} l(I_n) \leq \sum_{k=1}^{\infty} l(J_k).$ (Where $l(I_n)$ means the length of the interval.) A proof of this can be found in Carothers's Real Analysis on page 268.
By assumption $E = \bigcup\limits_{n=1}^{\infty} I_n$, so $m^{*}(E) = m^{*}\left(\bigcup\limits_{n=1}^{\infty} I_n\right) \leq \sum_{n=1}^{\infty} l(I_n) = \sum_{n=1}^{\infty} m^{*}(I_n)$.
Now by definition of Outer Measure given $\epsilon > 0 $ there is a sequence of intervals $(J_k)$ such that $E \subseteq \bigcup\limits_{k=1}^{\infty} J_k$ and $\sum_{k=1}^{\infty} l(J_k) \leq m^{*}(E) + \epsilon$.
Then because the $I_n$'s are pairwise disjoint we have $\sum_{n=1}^{\infty} l(I_n) \leq \sum_{k=1}^{\infty} l(J_k) \leq m^{*}(E) + \epsilon$.
Letting $\epsilon$ go to zero we have the other inequality; thus
$\sum_{n=1}^{\infty} l(I_n) = m^{*}(E)$. |
Is my evaluation of complex trigonometric expression $\sin\left(\frac{\pi}{4}+2i\right) -\sin(2i)$ correct? | The result is not correct. Note,
$$\begin{align}
& \sin\left(\frac{\pi}{4}+2i\right) -\sin(2i) \\
& =\frac{1}{2i}\left( e^{-2}\left( \cos\frac{\pi}{4}+i\sin\frac{\pi}{4}\right) -e^2\left( \cos\frac{\pi}{4}-i\sin\frac{\pi}{4} \right) \right)
-\frac1{2i}\left(e^{-2}-e^{2}\right) \\
&=\frac{e^{-2}}{2i} \left( \frac{\sqrt{2}-2}{2} +\frac{\sqrt{2}}{2}i \right) +\frac{e^{2}}{2i}\left( \frac{2-\sqrt{2}}{2} +\frac{\sqrt{2}}{2}i \right) \\
&=\frac{1}{2i} \left( \frac{2-\sqrt{2}}{2} (e^2-e^{-2})+i\frac{\sqrt{2}}{2}(e^2+e^{-2}) \right)\\
& = \frac{\sqrt{2}}{2}\cosh(2)-i\frac{2-\sqrt{2}}{2}\sinh(2)
\end{align}$$ |
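The value can be confirmed numerically with Python's cmath:
import cmath, math

lhs = cmath.sin(math.pi / 4 + 2j) - cmath.sin(2j)
rhs = (math.sqrt(2) / 2) * math.cosh(2) - 1j * ((2 - math.sqrt(2)) / 2) * math.sinh(2)
print(lhs, rhs)   # the two agree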
existence/uniqueness of solution and Ito's formula | You should distinguish between an equation and the 'direct' definition of something. For example, whenever you have an expression of the form
$$
x = g(x) \tag{1}
$$
where $g$ is a certain function/operator that is given to you, and you may be asked to find an $x$ which satisfies this expression. You can never be sure whether such an $x$ exists, or whether there is only one such $x$. Indeed, if you change the $x$ on the RHS, the LHS changes as well, since it depends on $x$. Anyway, suppose we found such an $x$ and it is unique.
Now, imagine that you also have an expression
$$
y = h(x) \tag{2}.
$$
This is not an equation, but rather a definition of $y$. Indeed, to find the value of $y$ the only thing we need to do is to substitute $x$ (found on the previous step) as an argument of $h$ - that's it.
What you have in your original post (OP) is that $x$ is the process $(X_t)_{t\geq 0}$ which satisfies
$$
X_T = X_0 + \int_0^Ta(X_t,t)\mathrm dt + \int_0^Tb(X_t,t)\mathrm dB_t \tag{$1^\prime$}
$$
which can be compactly written through differentials as in your case. Here the operator $g$ as in $(1)$ is just these integrals and the functions $a,b$ applied to $X$. Again, the LHS and the RHS both contain $X$, so it's not obvious whether there exists such an $X$ which makes both sides equal. Thus, we need conditions on $a$ and $b$ to ensure such existence, or uniqueness.
Now, let $Y_t = f(X_t,t)$ be another process. For $f\in C^2$ we can use an alternative definition of $Y$:
$$
Y_T = Y_0 + \int_0^T\frac{\partial f}{\partial t}(X_t,t)\mathrm dt + \int_0^T \frac{\partial f}{\partial X}(X_t,t)\mathrm dX_t + \frac12\int_0^T \frac{\partial^2 f}{\partial X^2}(X_t,t)\mathrm dX_t^2 \tag{$2^\prime$}
$$
which yet again can be written in compact symbolic form via differentials, as you did. Here you can think of the RHS as a function of the process $X$: $h((X_t)_{t\geq 0})$. It is only used to define $Y$. The only thing that you need to take care of is that $h$ is defined for the particular value of the argument, that is, all the integrals in the RHS of $(2')$ are well-defined. Actually, here the only Ito integral is the middle one (the one against $\mathrm dX_t$) and you indeed should check that
$$
Z_t:=b(X_t,t)\frac{\partial f}{\partial X}(X_t,t)
$$
satisfies for example $(iii)$ in Definition 3.1.4 or $(iii)'$ in Definition 3.3.2 of Oksendal: "Stochastic Differential Equations".
As a result, you actually should not talk about solutions of $(2')$, just as you don't talk about the solutions of $y = 3$. Instead, you talk about solutions of $x = x^2+1$. |
Unequal circles within circle with least possible radius? | Clearly radius 10 is not quite enough; once you have an arrangement like this, it may be possible to find the exact outer radius, but in any case it can be estimated fairly well. Reminds me of these: http://en.wikipedia.org/wiki/Sangaku
I drew in some trial outer circles, $r = 11$ worked with room to spare, so I split the difference, $r=10.5 = 21/2$ also worked with just a little extra room.
EDIT: did it in coordinates, I thought it was going to be a degree four polynomial but there was cancellation and it became linear, the best outer radius is
$$\frac{637}{61} \approx 10.4426 $$
EEDDIITT: did it over with symbols. If the larger given radius, now 7, is called $A,$ and the smaller given radius, now 3, is called $B,$ then the radius of the circumscribed circle is $$ R = \frac{A^2 (A+2B)}{A^2 + AB - B^2}. $$ |
Urgent - Find the equation of the lines tangent to a circle | The equation of any line passing through $(0,6)$ can be written as $\frac{y-6}{x-0}=m$ where $m$ is the gradient, So, $y=mx+6$
If $(h,k)$ is the point of contact, then $k=mh+6$ and $h^2+k^2-4h-4=0$
Replacing $k$ in the 2nd equation, $h^2+(mh+6)^2-4h-4=0$
or $(1+m^2)h^2+2h(6m-2)+32=0$, it is a quadratic equation in $h,$
For tangency, both roots should coincide, so $4(6m-2)^2=4\cdot(1+m^2)32$
$(6m-2)^2=(1+m^2)32, (3m-1)^2=8(1+m^2) \implies m^2-6m-7=0$
So, $m=7,-1$
If $m=7,\frac{y-6}{x-0}=7, 7x-y+6=0$
If $m=-1,\frac{y-6}{x-0}=-1, x+y-6=0$ |
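A symbolic double-check of the tangency condition (a sketch assuming SymPy):
import sympy as sp

x, m = sp.symbols('x m')
expr = x**2 + (m*x + 6)**2 - 4*x - 4   # y = m x + 6 substituted into the circle
print(sp.solve(sp.Eq(sp.discriminant(expr, x), 0), m))   # [-1, 7]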
How to simplify $f(x)=\sum\limits_{i=0}^{\infty}\frac{x^{i \;\bmod (k-1)}}{i!}$? | Group the terms by $i \bmod (k-1) = 0,1,2$ and so on. Your formula is
$$\sum_{j=0}^{k-2} \left( x^j \sum_{n=0}^\infty \frac{1}{(n(k-1)+j)!} \right) \,.$$
You should be able to get a closed formula for inside bracket by calculating the Taylor series of $e^z$ at the $(k-1)$-roots of unity.
P.S. For the last part: adding $e^{w_1}+...+e^{w_{k-1}}$ yields the coefficient of $x^0$. Now if you add $\sum w_ie^{w_i}$ you get the coefficient of $x^1$, and so on. |
Minimal polynomial of $\alpha+i$ over $\mathbb{Q}$ | First find the polynomial that has the root $\beta=\alpha+ i$ over $\Bbb{Q}(\alpha)\subset\Bbb{R}$ in-terms of $\alpha.$
Then use the fact that $\alpha^3-\alpha+1=0$ to get rid of $\alpha$ in previous polynomial.
OR:
Note that $$(\beta-i)^3-(\beta-i)+1=0$$ |
Convergence epsilon - check my proof please | The last step is not good: you have $\le$ and then you just turn it into $<$. The two are not equivalent. Start with $<$ from the very start and all will be fine. Meaning: choose $N$ such that $N > 2/\epsilon - 1$. |
Is there a way to know when $(a+l_n)^n+(b+k_n)^n$ is an integer for integer $a$, $b$ and rational $l_n$, $k_n$ with $l_n+k_n=1$? | Let's start with a simple form:
$A=(a+\frac{c}{d})+(b+\frac{e}{d})$
It can be seen that if $d \big|c+e$ then $A$ is an integer. For example:
$(13+\frac{7}{15})+(17+\frac{8}{15})=\frac{465}{15}=31$
$(7+\frac{3}{5})+(3+\frac{2}{5})=\frac{55}{5}=11$
Here the denominators of the fractions are equal. We can make the denominators equal in the case they are not. Let $l_n=\frac{c}{d}$ and $k_n=\frac{e}{f}$; then we may write:
$A=(a+\frac{c}{d})+(b+\frac{e}{f})=(a+\frac{cf}{df})+(b+\frac{ed}{fd})$
Then the condition is:
$df\,\big|\,cf+ed$
For example, suppose we have:
$c=2$, $d=5$, $e=3$, and we want to find $f$ such that the condition holds; then we must have:
$ed+cf=15+2f=t(5f)$
The only solution is:
$t=1$, $15=5f-2f=3f$ ⇒ $f=5$
Now let $a=8$ and $b=11$ we have:
$(8+\frac{2}{5})+(11+\frac{3}{5})=\frac{100}{5}=20$
For the general form $A=(a+\frac{c}{d})^n+(b+\frac{e}{f})^n$,
$n$ must be odd; let $n=2t+1$, so that $A$ can be factored, but the condition will be the same, i.e. in the following relation:
$A=(a+\frac{c}{d})^{2t+1}+(b+\frac{e}{f})^{2t+1}=\big[(a+\frac{c}{d})+(b+\frac{e}{f})\big]\big((a+\frac{c}{d})^{2t}-(a+\frac{c}{d})^{2t-1}(b+\frac{e}{f})+\cdots+(b+\frac{e}{f})^{2t}\big)$
if $(a+\frac{c}{d})+(b+\frac{e}{f})$ is an integer, then $A$ is an integer, and for that we must have:
$df\,\big|\,cf+ed$
c, d, e and f can be functions of n, for example $c=2^n$, $d=5^n$ etc. |
components of normal field are Jacobi fields? | The other answer has given a good intuitive explanation. Let me just give a computational one. The Jacobi operator is (1.147 in the book)
$$L u = \Delta u + |A|^2 u + \text{Ric}_M(N, N)u = \Delta u + |A|^2 u .$$
as $M = \mathbb R^3$ has $\text{Ric}_M = 0$. To see that $u = \langle N, v\rangle$ satisfies $Lu = 0$ (where $v$ is a fixed vector in $\mathbb R^3$), note that if we fix a normal coordinate at $x$ in the minimal surface, then (note that the $e_i$'s form an orthonormal basis of the tangent space of the minimal surface, not of $\mathbb R^3$)
$$\begin{split}
\Delta u &= \nabla_{e_i} \nabla_{e_i} u \\
&= \nabla_{e_i} \nabla_{e_i} \langle N, v\rangle \\
&= \nabla_{e_i} \langle \nabla_{e_i}N, v\rangle \\
&= - \nabla_{e_i}\langle A_{ij} e_j, v\rangle \\
&= -\langle \nabla_{e_i}A_{ij} e_j + A_{ij} \nabla^{\mathbb R^3} _{e_i } e_j, v\rangle \\
&= -\langle \nabla_{e_j}A_{ii} e_j + A_{ij}A_{ij} N, v\rangle
\end{split}$$
Note that in the last equality we used the Codazzi equation $\nabla_{e_k} A_{ij} = \nabla_{e_i} A_{kj}$ and the fact that $\nabla^{\mathbb R^3} _{e_i } e_j = A_{ij} N$. Then the first term is zero as the surface is minimal, thus
$$\Delta u = - A_{ij}^2 \langle N, v\rangle = -|A|^2 u.$$
This is the same as $Lu= 0$. |
How to simplify basic mathematical expression? | We know that
$$
(2 - 2^{-g}) + 2^{-(g+1)} = 2 - 2^{-g} + 2^{-g} \cdot 2^{-1} = 2 - 2^{-g} + 2^{-g} \cdot \frac{1}{2} = 2 - 2^{-g}\left(1 - \frac{1}{2}\right).
$$
Hence
$$
(2 - 2^{-g}) + 2^{-(g+1)} = 2 - 2^{-g} \cdot \frac{1}{2} = 2 - 2^{-g} \cdot 2^{-1} = 2 - 2^{-g-1} = 2 - 2^{-(g+1)},
$$
which is the result you were looking for. |
Proof of the irrationality of $\sqrt{2}$. | Your calculation shows that, if $a^2=2b^2$ then, with $x=a-b$, you also have $(a-2x)^2=2x^2$, i.e. $(2b-a)^2=2(a-b)^2$. This last equation could be checked more simply by expanding both sides, cancelling a few terms, and remembering that $a^2=2b^2$. So, in particular, you've shown that, from any purported rational representation $a/b$ of $\sqrt2$, you can produce another one, $(2b-a)/(a-b)$, with a smaller denominator. So no such rational representation can be in lowest terms, and therefore no rational representation exists. (The last step could be replaced by an appeal to the least number principle: If there's a denominator that works, then there's a smallest one.)
This proof is correct, and I've seen it a few times before, but I don't remember where. |
integrate the following function | It is $$\sqrt{x}(x^2+4x+4)=x^2\cdot x^{1/2}+4x\cdot x^{1/2}+4x^{1/2}=x^{5/2}+4x^{3/2}+4x^{1/2}$$ and now you can integrate term by term. |
Evaluate the integral $\int_{0}^{\pi}\frac{\log(1+a\cos x)}{\cos x}\,dx$ | Let us assume $|a|<1$ to avoid any inconvenience in the definition of $\log(1+a\cos x)$. A useful pre-processing is to reduce the integration problem to the $(0,\pi/2)$ interval, through:
$$\begin{eqnarray*} I(a) &=& \int_{0}^{\pi}\frac{\log(1+a\cos x)}{\cos x}\,dx \\&=& \int_{0}^{\pi/2}\frac{\log(1+a\cos x)}{\cos x}\,dx - \int_{0}^{\pi/2}\frac{\log(1-a\cos x)}{\cos x}\,dx\\&=& \int_{0}^{\pi/2}\frac{dx}{\cos x}\,\log\left(\frac{1+a\cos x}{1-a\cos x}\right)\\(\text{Taylor series})\qquad&=&2\sum_{n\geq 0}\frac{a^{2n+1}}{2n+1}\int_{0}^{\pi/2}\cos(x)^{2n}\,dx\end{eqnarray*}\tag{1}$$
And since $\cos(x)=\frac{e^{ix}+e^{-ix}}{2}$, due to the extended binomial theorem and the Taylor series of $\frac{1}{\sqrt{1-x^2}}$
$$ I(a) = \pi a\sum_{n\geq 0}\frac{(a/2)^{2n}}{2n+1}\binom{2n}{n}=\color{red}{\pi\arcsin(a)}.\tag{2}$$ |
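A numerical sanity check of $(2)$ (a sketch assuming SciPy; the integrand's singularity at $x=\pi/2$ is removable, with limit $a$):
import numpy as np
from scipy.integrate import quad

a = 0.5
def integrand(x):
    c = np.cos(x)
    return np.log1p(a * c) / c if abs(c) > 1e-12 else a

val, _ = quad(integrand, 0, np.pi, limit=200)
print(val, np.pi * np.arcsin(a))   # both ~1.6449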
Finding the minimum of $\frac pq + \frac rs$ for distinct integers $p, q, r, s$ from $\{1,2,3,4,5,\ldots,16,17\}$ | Obviously, we want the minimum, so as small $p,r$ as possible, and as large $q,s$ as possible. There are 2 choices:
$1/17+2/16=50/(16\cdot17)$ and $1/16+2/17=49/(16\cdot17)$. The second one is smaller, the fraction is $a/b=49/272$ and hence $a+b=321$.
I don't know what other answer you want. In the general case of choosing from $\{1,2,\dotsc,n\}$ you will get $a+b=(n+2(n-1))+(n(n-1))=n^2+2n-2$; the reasoning is the same. |
Find the Distribution that corresponds to the given MGF | The MGF of a Poisson r.v. features a double exponential, and that of an exponential r.v. is a rational function (no exponential), so it looks unlikely at first glance these two would be candidates (also, unclear what you mean by "separate" here.)
However, eyeballing the Wikipedia table showing a list of standard MGFs, you can spot one that looks quite similar, that of the geometric distribution. Namely, if $X$ follows a Geometric distribution with parameter $p$, then the MGF of X is
$$
M_x(t) = \frac{p e^t}{1-(1-p)e^t}, \quad t < \log\frac{1}{1-p} \tag{1}
$$
So let's see. What you have is
$$
M_X(t) = \frac{2e^t}{3-e^t} = \frac{\frac{2}{3}e^t}{1-\frac{1}{3}e^t}
= \frac{\frac{2}{3}e^t}{1-\left(1-\frac{2}{3}\right)e^t} \tag{2}
$$
which should allow you to conclude. |
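A simulation check that the geometric distribution with $p=2/3$ matches the given MGF (a sketch assuming NumPy; rng.geometric is supported on $\{1,2,\dots\}$):
import numpy as np

rng = np.random.default_rng(3)
t, p = 0.2, 2 / 3                       # need t < log(1/(1-p)) = log 3
x = rng.geometric(p, size=10**6)
print(np.exp(t * x).mean())             # empirical E[e^{tX}]
print(2 * np.exp(t) / (3 - np.exp(t)))  # the given MGF -- ~same value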
Skew coordinate systems do not allow separation of variables - proof? | As far as I know, this is a nontrivial theorem.
For a similar statement in a related but slightly different setting, see Section 12 in the paper Separability in Riemannian Manifolds by Sergio Benenti (2016). |
sum of fractional powers | The sum is
$$\sum_{k=0}^n\cos\left(\frac\theta{2^k}\right)+i\sum_{k=0}^n\sin\left(\frac\theta{2^k}\right).$$
Using Taylor to the first order, this yields
$$\sum_{k=0}^n1+i\sum_{k=0}^n\frac\theta{2^k}=n+1+i\theta\frac{1-\frac1{2^{n+1}}}{1-\frac12}.$$
The second order terms add
$$-\frac12\sum_{k=0}^n\frac{\theta^2}{4^k}=-\frac{\theta^2}2\frac{1-\frac1{4^{n+1}}}{1-\frac14}.$$
The third order
$$-i\frac16\sum_{k=0}^n\frac{\theta^3}{8^k}=-\frac{i\theta^3}6\frac{1-\frac1{8^{n+1}}}{1-\frac18}.$$
And so on.
$$n+1+\sum_{k=1}^\infty\frac{(i\theta)^k}{k!}\frac{1-\frac1{2^{k(n+1)}}}{1-\frac1{2^k}}.$$ |
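A quick check that the all-orders closed form reproduces the direct sum (a sketch; 40 terms of the order sum are plenty):
import cmath, math

theta, n = 0.7, 6
direct = sum(cmath.exp(1j * theta / 2**k) for k in range(n + 1))
series = (n + 1) + sum((1j * theta)**k / math.factorial(k)
                       * (1 - 2**(-k * (n + 1))) / (1 - 2**(-k))
                       for k in range(1, 40))
print(direct, series)   # the two agree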
Maximal ideals in certain affine algebras | $R$ is a quotient of $A=\mathbb{C}[y_1,y_2,\ldots,y_m]$ and maximal ideals of $A$ are of the form you require and then so are the maximal ideals of $R$. |
Why do we define the $\mathfrak{p}$-adic logarithm on a $\mathfrak{p}$-adic number field such that $\log(p) = 0$? | The main goal is to construct a continuous function $log_p: \mathbf C_p ^* \to \mathbf C_p$ s.t. $log_p (xy) = log_p + log_p (y)$. Since $ \mathbf C_p ^* = p^\mathbf Q \times W \times U_1$, where $U_1$ is the group of principal units and $W$ the group of roots of $1$ of order prime to $p$, it suffices to define $log_p$ on each of the direct factors. On $U_1$ one has already the usual power series $log_p (1+x)$ whose radius of convergence is $1$. On $W$, one must have the nullity of $log_p$, since for any root of unity $w$ of order $n$, necessarily $n.log_p (w)= log_p (1) = 0$. It remains only to adjust the value $log_p (p)$.
The choice is not quite arbitrary, because any $\sigma \in G_\mathbf {Q_p} $can be extended to a continuous automorphism of $\mathbf C_p$, and it follows that $log_p (p) \in \mathbf Q_p$. Your suggested choice $log_p (p)=e$ is not good either because it depends on the ambient field $K$. Actually, most of the ramification problems in CFT are concentrated in $U_1$, as well as most of the calculations about $L_p$-functions , so the definitely most natural (which is also the most simple) choice is $log_p (p)=0$. It follows that Ker $log_p = p^\mathbf Q \times \mu$, where $\mu$ is the group of all roots of unity (of arbitrary order). |
number of paths of length $k$ in a random directed graph | Based on the specified graph construction scheme, a path of length $k$ requires $k+1$ vertices, in ascending vertex number order.
It follows that there are exactly ${\large{\binom{n}{k+1}}}$ potential paths of length $k$.
For each such potential path, the probability that it's an actual path is $p^k$.
Therefore, the expected number of paths of length $k$ is ${\large{\binom{n}{k+1}}}p^k$. |
Give $\epsilon$-$\delta$ proofs of the following | Hint:
\begin{align}
(a^2 - a)(x+1) - (x^2 - x)(a+1) &= a^2 x + a^2 - ax - a - ax^2 + ax - x^2 + x\\
&= ax(a - x) + (a + x)(a - x) - (a - x)
\end{align} |
Question about non-degenerate polynomials, and a proof | Just an example. We begin with an isotropic ternary quadratic form,
$$ f(x,y,z) = 24 x^2 + 24 y^2 + 24 z^2 -43 yz - 43 z x - 43 x y. $$
Its Hessian matrix of second partial derivatives is
$$
H =
\left(
\begin{array}{rrr}
48 & -43 & -43 \\
-43 & 48 & -43 \\
-43 & -43 & 48
\end{array}
\right)
$$
Isotropic means that there is at least one triple $(x,y,z)$ of integers, not all equal to zero, with $f(x,y,z) = 0.$ Nondegenerate means $\det H \neq 0.$
The theorem in question guarantees an integer matrix,
$$
R =
\left(
\begin{array}{rrr}
58 & 61 & 18 \\
18 & -25 & 15 \\
15 & 55 & 58
\end{array}
\right)
$$
Back to that in a minute. The indicated form, $y^2 - z x,$ has Hessian matrix
$$
G =
\left(
\begin{array}{rrr}
0 & 0 & -1 \\
0 & 2 & 0 \\
-1 & 0 & 0
\end{array}
\right)
$$
Alright, the relationship is $R^T H R = 157339 G.$ That is,
$$
\left(
\begin{array}{rrr}
58 & 18 & 15 \\
61 & -25 & 55 \\
18 & 15 & 58
\end{array}
\right)
\left(
\begin{array}{rrr}
48 & -43 & -43 \\
-43 & 48 & -43 \\
-43 & -43 & 48
\end{array}
\right)
\left(
\begin{array}{rrr}
58 & 61 & 18 \\
18 & -25 & 15 \\
15 & 55 & 58
\end{array}
\right) =
\left(
\begin{array}{rrr}
0 & 0 & -157339 \\
0 & 314678 & 0 \\
-157339 & 0 & 0
\end{array}
\right)
$$
The rows of $R$ tell us that, for any integers $(u,v),$ we always have
$$ f( 58 u^2 + 61 uv + 18 v^2, 18 u^2 -25 uv + 15 v^2, 15 u^2 + 55 uv + 58 v^2 ) = 0 $$ |
Show that the autocovariance function depends on $s$ and $t$ only through their difference $\left|s-t\right|$ | We use the fact that $(w_s,w_t)$ has the same distribution as $(w_0,w_{t-s})$ (which is the same as that of a couple of independent centered random variables with variance $\sigma_w^2$). Therefore,
$$
\mathbb E\left[w_sw_t\right]=\mathbb E\left[w_0w_{t-s}\right].
$$
Apply similar reasoning to the other three terms in the expression of $\gamma(s,t)$. |
Find roots or splitting field of a polynomial given its Galois group | I seem to be giving lots of answers that depend on very special properties of the supplied example. Here’s an argument, tailored to your polynomial $f(x)=x^3-x-1$. Not the general method you were hoping for at all.
First set $\alpha$ to be a root of your polynomial, which we all know is irreducible over $\Bbb Q$. It's not too hard to calculate the discriminant of the ring $\Bbb Z[\alpha]$ as the norm down to $\Bbb Q$ of $f'(\alpha)=3\alpha^2-1$; this number is $23$, surprisingly small for a cubic extension. The fact that it's square-free implies that $\Bbb Z[\alpha]$ is the ring of integers in the field $k=\Bbb Q(\alpha)$.
Our field $k$ clearly is not totally real, since $f$ has only one real root. So in the jargon of algebraic number theory, $r_1=r_2=1$, one real and one (pair of) complex embedding(s). We can apply the Minkowski Bound
$$
M_k=\sqrt{|\Delta_k|}\left(\frac4\pi\right)^{r_2}\frac{n!}{n^n}\,,
$$
which for $n=3$ gives a bound less than $2$, so that $\Bbb Z[\alpha]$ is automatically a principal ideal domain.
Let’s factor the number $23$ there: we certainly know that it’s not prime, since $23$ is ramified.
Now, we already know a number of norm $23$, necessarily a prime divisor of the integer $23$: it's $3\alpha^2-1$, and indeed $23/(3\alpha^2-1)=4 + 9\alpha - 6\alpha^2$. But better than that, $23/(3\alpha^2-1)^2=3\alpha^2-4$. This number has norm $23$ (because the norm of $23$ itself is $23^3$). So we've found the complete factorization of $23$.
Now let's look more closely at $f(x)=(x-\alpha)g(x)$, for a polynomial $g$ that we can discover by Euclidean Division to be $g(x)=x^2+\alpha x+\alpha^2-1$. And the roots of $g$ are the other roots of $f$; the Quadratic Formula tells you what they are, and the discriminant of $g$ is $\alpha^2-4(\alpha^2-1)=4-3\alpha^2$, which we already know as $-23/(3\alpha^2-1)^2$. Going back to the Quadratic Formula, our other roots are
$$
\rho,\rho'=\frac{\alpha\pm\sqrt\delta}2\>,\>\delta=\frac{-23}{(3\alpha^2-1)^2}\>,\>\sqrt\delta=\frac{\sqrt{-23}}{3\alpha^2-1}\>.
$$
And that gives you your roots of this one very special cubic polynomial in terms of one root $\alpha$ and $\sqrt{-23}$. |
What is the period and phase shift of $f(x)=3\sin 4\left(x+\frac{\pi}{4}\right)-1$? | I'm not sure why you say that your attempt is wrong. That's how I learned it.
Edit: As you've mentioned in the comments below, the options provided to you are
(A) period $4$ and phase shift $-\cfrac\pi4$
(B) period $\cfrac\pi2$ and phase shift $-\cfrac\pi4$
(C) period $-\cfrac\pi2$ and phase shift $-\cfrac\pi4$
(D) period $-\cfrac\pi2$ and phase shift $\cfrac\pi4$
Using the method that you were given to find the period, the only possible option is (B). Remember that a phase shift of $-\cfrac\pi4$ is the same thing as a phase shift of $\cfrac\pi4$ to the left, so this is the same as your answer.
Edit: Let me give you some alternate (hopefully clearer) rules. Suppose you are given an equation $$y=a\sin\bigl(b(x-h)\bigr)+k$$ or $$y=a\cos\bigl(b(x-h)\bigr)+k$$ for some real $a,b,h,k$ where $a\ne0$ and $b>0$. (If $a=0$ or $b=0,$ there's really nothing interesting to say. Do you know why we may assume that $b$ isn't negative?) Then:
The amplitude is $|a|$. Graphically, this will be the vertical distance from the max (or min) value to the midline, or half the vertical distance from the max value to the min value.
The phase shift is $h$. Graphically, this is a shift by $|h|$ to the right if $h$ is positive; by $|h|$ to the left if $h$ is negative.
The midline shift is $k$. Graphically, this is a shift by $|k|$ upward if $k$ is positive; by $|k|$ downward if $k$ is negative. |
differential equation for the susceptible infection susceptible model | You have a mistake, in that the $\mu idi$ term should really be $\mu i dt$ (I assume it comes from the second term of the r.h.s. of the original equation). So instead, start with the original equation, and just divide both sides by the r.h.s. to get:
$$\frac{di}{\beta(k)i(1-i)-\mu i}=dt,$$
and integrate this (e.g. by partial fractions). |
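For completeness, here is a sketch of the partial-fraction step, writing $\beta$ for $\beta(k)$ and assuming $\beta\neq\mu$ (my addition, not from the original answer). Since $\beta i(1-i)-\mu i=i(\beta-\mu-\beta i)$,
$$\frac{1}{i(\beta-\mu-\beta i)}=\frac{1}{\beta-\mu}\left(\frac{1}{i}+\frac{\beta}{\beta-\mu-\beta i}\right),$$
so integrating both sides of the separated equation gives
$$\frac{1}{\beta-\mu}\ln\left|\frac{i}{\beta-\mu-\beta i}\right|=t+C,$$
which can then be solved for $i(t)$.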
Value of the the sum of reciprocals of combinators | Yours is a telescoping sum. Write
$$\frac{1}{\binom{n}{2009}} = \frac{2009}{2008} \left( \frac{1}{\binom{n-1}{2008}} - \frac{1}{\binom{n}{2008}}\right)$$
so that your sum is
$$\frac{2009}{2008} \left( 1- \frac{1}{\binom{2009}{2008}} + \frac{1}{\binom{2009}{2008}} - \frac{1}{\binom{2010}{2008}} + \frac{1}{\binom{2010}{2008}} - \dots \right) = \frac{2009}{2008}$$ |
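To see where the prefactor comes from, note that $\binom{n}{2009}=\frac{n}{2009}\binom{n-1}{2008}$ and $\binom{n}{2008}=\frac{n}{n-2008}\binom{n-1}{2008}$, so
$$\frac{1}{\binom{n-1}{2008}}-\frac{1}{\binom{n}{2008}}=\frac{1}{\binom{n-1}{2008}}\left(1-\frac{n-2008}{n}\right)=\frac{2008}{n\binom{n-1}{2008}}=\frac{2008}{2009}\cdot\frac{1}{\binom{n}{2009}}.$$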
show norm of self-adjoint operator is maximum of abs value of eigenvalue | Recall that any self-adjoint matrix is diagonalizable in some orthonormal basis, i.e. there exists an orthonormal basis $e_1,...,e_n$ of $\mathbb{C}^n$ such that $Ae_i=\lambda_ie_i$ for $i=1,2,...,n$. Denote the linear transformation associated with matrix $A$ by $T$. Let $\lambda=\text{max} \, \{|\lambda_1|,...,|\lambda_n|\}$. Then for any $x \in \mathbb{C}^n$ such that $\|x\| \leq 1$, $x=a_1e_1+...+a_ne_n$ for some $a_1,...,a_n \in \mathbb{C}$, and thus we get that
\begin{equation}
\begin{split}
\|Tx\|
& =\|T(a_1e_1+...+a_ne_n)\|\\
& =\|a_1 T(e_1)+...+a_nT(e_n)\|\\
& =\|a_1 \lambda_1e_1+...+a_n\lambda_ne_n\|\\
& = \sqrt{|a_1 \lambda_1|^2+...+|a_n\lambda_n|^2}\\
& \leq \sqrt{|a_1 \lambda|^2+...+|a_n\lambda|^2}\\
& = \lambda \sqrt{|a_1|^2+...+|a_n|^2}\\
& = \lambda \|x\|\\
& \leq \lambda\\
\end{split}
\end{equation}
Since $x$ was an arbitrary element in $\mathbb{C}^n$ such that $\|x\| \leq 1$, we conclude that
$$\|T\|=\sup_{\|x\| \leq 1} \|Tx\| \leq \lambda$$
Conversely, notice that $|\lambda_i|=\lambda$ for some $i$. Then, for that $i$, we have
\begin{equation}
\begin{split}
\|Te_i\|
& =\|\lambda_i e_i\|\\
& =|\lambda_i|\|e_i\|\\
& =\lambda\\
\end{split}
\end{equation}
Since $\|e_i\|=1$, this implies that
$$\|T\|=\sup_{\|x\| \leq 1} \|Tx\| \geq \lambda$$
Thus, we conclude that
$$\|T\|=\sup_{\|x\| \leq 1} \|Tx\| = \lambda$$ |
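As a quick numerical sanity check of the result (my own addition, not part of the proof), one can compare the operator $2$-norm of a random Hermitian matrix with its largest eigenvalue magnitude:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = (B + B.conj().T) / 2                 # symmetrize: A is self-adjoint

op_norm = np.linalg.norm(A, 2)           # operator 2-norm = sup over ||x|| <= 1
max_eig = np.abs(np.linalg.eigvalsh(A)).max()
print(op_norm, max_eig)                  # the two printed values agree
```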
Conditions for loops to be homotopic | There is a classical combinatorial algorithm to decide if two loops on a surface are homotopic, originally described by Max Dehn in 1912. For closed walks in arbitrary surface graphs, there is an efficient implementation of Dehn's algorithm that runs in $O(n+L+L')$ time, where $n$ is the number of edges in the graph and $L$ and $L'$ are the lengths of the walks. For details, see my SODA 2013 paper with Kim Whittlesey: "Transforming Curves on Surfaces Redux". |
Bit of help gaining intuition about conditional expectation and variance | I'm a bit unsure of my answer above. In my head it makes sense that the mean of $N$ is $λ$ because it is the property of the poisson distribution - however, does the fact that $λ$ is uniformly distributed change anything?
Yes. What you have is that the conditional expectation of $N$ for a given $\lambda$ is $\lambda$, as is its conditional variance, since the distribution is Poisson for a given $\lambda$. Since $\lambda$ is itself continuously uniformly distributed over $[0;3]$, we know its expectation and variance too.
$$\begin{split}\mathsf E(N\mid \lambda)&=\lambda\\\mathsf{Var}(N\mid \lambda)&=\lambda\\\mathsf E(\lambda)&=\frac 32\\\mathsf {Var}(\lambda)&=\frac{9}{12}\end{split}$$
What you are required to use is the Tower Property, also known as the Law of Iterated Expectation (sometimes "Law of Total Expectation").
$$\mathsf E(N) = \mathsf E(\mathsf E(N\mid \lambda))$$
The Law of Iterated Variance follows from the same principle.
$$\mathsf {Var}(N) = \mathsf E(\mathsf {Var}(N\mid \lambda))+\mathsf {Var}(\mathsf E(N\mid \lambda))$$
The rest is just substitution.
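Carrying out the substitution with the values above gives
$$\mathsf E(N)=\mathsf E(\lambda)=\frac 32,\qquad \mathsf{Var}(N)=\mathsf E(\lambda)+\mathsf{Var}(\lambda)=\frac 32+\frac 34=\frac 94.$$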
$\bigcup \alpha$ where $\alpha$ is a finite ordinal. | Assume $\alpha=\beta+1=\beta\cup\{\beta\}$ where $\beta$ is an ordinal (note that this is applicable for all nonzero finite ordinals, but also for many infinite ones). Then $x\in\bigcup\alpha$ iff $x\in y$ for some $y\in \alpha$. And this is equivalent to $x\in\beta$ or $x\in y$ for some $y\in \beta$. Since $x\in y\in \beta$ implies $x\in \beta$ as well, we ultimately have
$$ x\in\bigcup\alpha\iff x\in\beta$$
which means $$ \bigcup\alpha=\beta.$$ |
Find an initial condition for the ODE $xy'+3y=6x^3$ | Since $y(x_0)=y_0$ and $y=x^3+\frac{C}{x^3}$, plugging the initial condition yields to
$$y_0=y(x_0)=x_0^3+\frac{C}{x_0^3}.$$
From this,
$$C=(y_0-x_0^3)\;x_0^3.$$
Then, the solution of your ODE is
$$y=x^3+\frac{(y_0-x_0^3)\;x_0^3}{x^3}.$$ |
Stalks at points in the fibre of scheme morphisms | Let $\mathfrak m_y\subset \mathcal O_{Y,y}$ be the maximal ideal. Then the local ring you are looking for is
$$\mathcal O_{X_y,x}=\mathcal O_{X,x}\otimes_{\mathcal O_{Y,y}}k(y)=\frac {\mathcal O_{X,x}}{\mathfrak m_y \mathcal O_{X,x}} $$
The proof consists in reducing to the affine case.
The affine case is then handled in Matsumura, Commutative Ring Theory, pages 47-48. |
Please simplify $((p \land f) \lor q) \land t$ and prove correctness of your simplification. | I'm trying to simplify this expression, and I got the following:
$$\begin{aligned}((p \land f)\lor q) \land t &= (((p\land f)\lor q)\land t)\\ &= (t \land (p\land f))\lor (q\land t)\end{aligned}$$
would this be the simplified version. I'd appreciate some explaining. Thanks
That is okay. Whether this is simpler or not is debatable, as you have increased the count of literals and operators. However, you have reduced the depth of nesting and arrived at the Disjunctive Normal Form: $(p\wedge f\wedge t)\vee (q\wedge t)$
You can also easily put it into the Conjunctive Normal Form: $(p\vee q)\wedge (f\vee q)\wedge t$
It is a matter of perspective, but there is a certain elegance to the Normal Forms, and they are at least easy to compare and evaluate. |
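If you want to check such equivalences mechanically, a brute-force truth table over the four variables settles it; here is a small Python sketch (my own illustration):

```python
from itertools import product

# The three forms discussed above, as Boolean functions of (p, f, q, t).
forms = {
    "original": lambda p, f, q, t: ((p and f) or q) and t,
    "DNF":      lambda p, f, q, t: (p and f and t) or (q and t),
    "CNF":      lambda p, f, q, t: (p or q) and (f or q) and t,
}

# Check all 16 truth assignments: every form must give the same value.
for vals in product([False, True], repeat=4):
    values = {name: fn(*vals) for name, fn in forms.items()}
    assert len(set(values.values())) == 1, (vals, values)
print("all 16 assignments agree")
```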
Use the Green's Theorem to calculate the work and the flux | The work is correct, but the flux is not. Notice that it is conceptually incorrect to talk about the flux for a closed counter-clockwise circuit. Flux is not related to the circulation of the field. What makes sense is the flux through a surface. Therefore, the theorem that has some relevance to what you are trying to calculate is
$\displaystyle \int_S \vec{F} \cdot d\vec{s} = \int_V \left(\vec{\nabla}\cdot\vec{F}\right) dV$,
where $d\vec{s}$ represents the differential surface vector perpendicular to the surface and $dV$ is the differential volume. However, this is valid for a closed surface that encloses a volume. This is not the case for your exercise, where you have just a flat surface.
To evaluate your flux you have to integrate the field $\vec{F}$ over the surface. Here, you are dealing with the simplest case, because your $d\vec{s}$ vector is always perpendicular to your field $\vec{F}$, so the integral is zero. |
What is the growth rate of $\sum_{n=1}^N \frac{a_n^2}{ \sum_{i=1}^n a_i } $? | For the first inequality we do not really need convexity:
let $a_0=0$ and $A_N=\sum_{n=0}^{N}a_n$. We have
$$\begin{eqnarray*}\sum_{n=1}^{N}\frac{a_n}{\sqrt{A_n}}=\sum_{n=1}^{N}\frac{A_{n}-A_{n-1}}{\sqrt{A_n}}&=&\sum_{n=1}^{N}\left(\sqrt{A_{n}}-\sqrt{A_{n-1}}\right)\frac{\sqrt{A_{n}}+\sqrt{A_{n-1}}}{\sqrt{A_n}}\\&\color{red}{\leq}& 2\sum_{n=1}^{N}\left(\sqrt{A_{n}}-\sqrt{A_{n-1}}\right)=2\sqrt{A_N}\end{eqnarray*} $$
since the last sum is telescopic and $A_n$ is increasing. On the other hand if the sequence $\{a_n\}_{n\geq 1}$ is rapidly increasing (say $a_n=2^{n^2}$) we have
$$\sum_{n=1}^{N}\frac{a_n^2}{A_n}=\sum_{n=1}^{N}a_n\frac{a_n}{A_n}\sim \sum_{n=1}^{N}a_n=A_N $$
so in order to produce tight bounds we need more information on the rate of growth of $\{a_n\}_{n\geq 1}$.
At the beginning of the post it is stated that $a_n\leq\sqrt{n}$, but in the middle $a_n=O(1)$ is assumed.
Which one should we actually take into consideration?
After the clarification occurred in the comment, we may notice that $a_n\sim n^c$ implies $A_n\sim \frac{1}{c+1} n^{c+1}$ and $\frac{a_n^2}{A_n}\sim (c+1)n^{c-1}$, such that
$$ \sum_{n=1}^{N}\frac{a_n^2}{A_n}\sim(c+1)\sum_{n=1}^{N}n^{c-1} \sim \left(1+\frac{1}{c}\right) N^{c}\sim K_c A_N^{\frac{c}{c+1}} $$
and there is no way to go beyond the $O\left(A_N^{\frac{c}{c+1}}\right)$ bound. Computations can be performed in explicit terms for a lot of sequences, like
$$ a_n = \frac{n}{4^n}\binom{2n}{n}\sim\sqrt{\frac{n}{\pi}},\qquad A_n = \frac{n(2n+1)}{3\cdot 4^n}\binom{2n}{n}\sim \frac{2n}{3}\sqrt{\frac{n}{\pi}}$$
$$ \frac{a_n^2}{A_n}=\frac{3n}{(2n+1)4^n}\binom{2n}{n}\sim\frac{3}{2\sqrt{n\pi}},\qquad \sum_{n=1}^{N}\frac{a_n^2}{A_n}\sim 3\sqrt{\frac{N}{\pi}}.$$
Finding invertible polynomials in polynomial ring $\mathbb{Z}_{n}[x]$ | Lemma 1. Let $R$ be a commutative ring. If $u$ is a unit and $a$ is nilpotent, then $u+a$ is a unit.
Proof. It suffices to show that $1-a$ is a unit when $a$ is nilpotent. If $a^n=0$ with $n\gt 0$, then
$$(1-a)(1+a+a^2+\cdots+a^{n-1}) = 1 - a^n = 1.$$
QED
Lemma 2. If $R$ is a ring, and $a$ is nilpotent in $R$, then $ax^i$ is nilpotent in $R[x]$.
Proof. Let $n\gt 0$ be such that $a^n=0$. Then $(ax^i)^n = a^nx^{ni}=0$. QED
Lemma 3. Let $R$ be a commutative ring. Then
$$\bigcap\{ \mathfrak{p}\mid \mathfrak{p}\text{ is a prime ideal of }R\} = \{a\in R\mid a\text{ is nilpotent}\}.$$
Proof. If $a$ is nilpotent, then $a^n = 0\in\mathfrak{p}$ for some $n\gt 0$ and all prime ideals $\mathfrak{p}$, and $a^n\in\mathfrak{p}$ implies $a\in\mathfrak{p}$.
Conversely, if $a$ is not nilpotent, then the set of ideals that do not contain any positive power of $a$ is nonempty (it contains $(0)$) and closed under increasing unions, so by Zorn's Lemma it contains a maximal element $\mathfrak{m}$. If $x,y\notin\mathfrak{m}$, then the ideals $(x)+\mathfrak{m}$ and $(y)+\mathfrak{m}$ strictly contain $\mathfrak{m}$, so there exists positive integers $m$ and $n$ such that $a^m\in (x)+\mathfrak{m}$ and $a^n\in (y)+\mathfrak{m}$. Then $a^{m+n}\in (xy)+\mathfrak{m}$, so $xy\notin\mathfrak{m}$. Thus, $\mathfrak{m}$ is prime, so $a$ is not in the intersection of all prime ideals of $R$. QED
Theorem. Let $R$ be a commutative ring. Then
$$p(x) = a_0 + a_1x + \cdots + a_nx^n\in R[x]$$
is a unit in $R[x]$ if and only if $a_0$ is a unit of $R$, and each $a_i$, $i\gt 0$, is nilpotent.
Proof. Suppose $a_0$ is a unit and each $a_i$ is nilpotent. Then $a_ix^i$ is nilpotent by Lemma 2, and applying Lemma 1 repeatedly we conclude that $a_0+a_1x+\cdots+a_nx^n$ is a unit in $R[x]$, as claimed.
Conversely, suppose that $p(x)$ is a unit. If $\mathfrak{p}$ is a prime ideal of $R$, then reduction modulo $\mathfrak{p}$ of $R[x]$ maps $R[x]$ to $(R/\mathfrak{p})[x]$, which is a polynomial ring over an integral domain; since the reduction map sends units to units, it follows that $\overline{p(x)}$ is a unit in $(R/\mathfrak{p})[x]$, hence $\overline{p(x)}$ is constant. Therefore, $a_i\in\mathfrak{p}$ for all $i\gt 0$.
Therefore, $a_i \in\bigcap\mathfrak{p}$, the intersection of all prime ideals of $R$. The intersection of all prime ideals of $R$ is precisely the set of nilpotent elements of $R$, which establishes the result. QED
For $R=\mathbb{Z}_n$, let $d$ be the squarefree root of $n$ (the product of all distinct prime divisors of $n$). Then a polynomial $a_0+a_1x+\cdots+a_nx^n\in\mathbb{Z}_n[x]$ is a unit if and only if $\gcd(a_0,n)=1$, and $d|a_i$ for $i=1,\ldots,n$. In particular if $n$ is squarefree, the only units in $\mathbb{Z}_n[x]$ are the units of $\mathbb{Z}_n$. |
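For a concrete illustration: in $\mathbb{Z}_4[x]$ we have $d=2$, and $1+2x$ is a unit because its constant term is a unit and $2x$ is nilpotent; indeed
$$(1+2x)^2 = 1+4x+4x^2 = 1 \quad\text{in }\mathbb{Z}_4[x],$$
so $1+2x$ is its own inverse.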
Show $x_n=\frac{\sin1}{2}+\frac{\sin(2)}{2^2}+...+\frac{\sin(n)}{2^n}$, converges | One simple way is using that $|\sin(n)|\leq 1$, so
$$\left|\sum_{n=1}^N \frac{\sin(n)}{2^n}\right|\leq \sum_{n=1}^N \frac{|\sin(n)|}{2^n} \leq \sum_{n=1}^N \frac{1}{2^n},$$
which clearly converges. |
Proving a group $H$ is a subgroup of $G$ - proving the associativity? | Let $H \subseteq G$, where $G$ is a group under the operation *.
Yes, in short, your argument is sufficient to conclude that the group operation on $G$, and hence on $H\subseteq G$, is associative:
Since $H$ is a subset of $G$, all elements $a,b,c \in H$ are elements of the group $G$. Since $G$ is a group by hypothesis, associativity of the group operation on $G$ holds also for $H$ because $H\subseteq G$.
To prove $H$ is a subgroup of $G$, of course, you must also show the identity element of $G$ is in $H$. And, you must also show that the inverse of any $a\in H$ is also in $H$.
Depending on what you've learned about groups, subgroups, etc., it never hurts, also, until you learn more concise tests for groups/subgroups, to ensure that $H\subset G$ has closure under the group operation. That is, for any $a, b\in H$, we must have $a*b \in H$. |
where to find a proof of the Lebesgue Density Theorem | This is called the Lebesgue Density Theorem. With that knowledge in hand it should be easy to search for and find a proof. I have made a nice proof, due to C.-A. Faure, from a recent Monthly article available here. |
Can someone explain to a calculus student what "dual space is the space of linear functions" mean? | Fix a vector space $V$, finite-dimensional, say. The dual space is defined by $$V^\ast = \{ f: V \to \Bbb R \mid f \text{ is linear} \}.$$
my understanding is that a function sends a number to a real number, when I write $f(4)=5$, I am not sending a vector to a number
If $f: \Bbb R \to \Bbb R$ is linear, and you're viewing the first $\Bbb R$ as the vector space, and the second $\Bbb R$ as the underlying field, then $f \in \Bbb R^\ast$. You're applying $f$ to a vector: $4$ (an element of the first $\Bbb R$).
functions are already vectors that satisfies all axioms of the vector space $V$, why would be also belong in the dual space of the vector space $V^∗$?
Vectors are elements of a vector space. So functions are vectors, because they're elements of a vector space (the dual space of the initial space). Functions don't necessarily verify the axioms for $V$. They will do so for $V^\ast$, with the operations defined pointwise.
what does "linear function" mean in this case? Why is the dual space not the space of non-linear functions such as $f(x)=\sin(x)$
Given $V,W$ vector spaces over the same field, say, $\Bbb R$, we say that $T: V \to W$ is linear if $T(x+\lambda y) = T(x)+\lambda T(y)$ for all $x,y \in V$ and for all $\lambda \in \Bbb R$. The dual space only considers the linear functions by definition, since they have special properties and applications (such as linear approximations, derivatives, etc), and are easier to work with than arbitrary functions.
If $A$ and $B$ are nonempty, bounded subets of $\mathbb R$, then $ \boldsymbol{\sup (A\cap B) \leq \sup A}$. | Hint: $A \cap B \subset A$. What does this tell you about the sup? |
Can all positive integers of the form $4n+1$ be expressed as the sum of two squares? | An odd integer greater than $1$ is the sum of two squares if and only if every prime factor of the form $4k+3$ in the factorization occurs with a power with even exponent.
Since the product of two distinct prime numbers of the form $4k+3$ results in a number of the form $4k+1$, such a product is a counterexample. For instance, $21 = 3\cdot 7 \equiv 1 \pmod 4$, yet $21$ is not a sum of two squares.
Determine the sets on which $f$ is continuous and discontinuous. | As usual, I'm going to suggest that you draw a picture. It's impossible in this case to actually draw a precise graph of the function, but you can definitely draw the general gist of what it looks like if you squint. That should help you get an intuitive grasp of where it should be continuous and where it should not be, after which you can prove it by various means (I personally would go for a plain old $\epsilon$-$\delta$ approach, but you can use sequences if you like). |
Basic trigonometry with popsicle sticks | By half cycle, I assume $[0, \pi]$. The height of the $n$th popsicle stick is $\sin\!\left(\frac{n\pi}{\text{total}+1}\right)$.
Kuhn-Tucker condition for $\max _{0 \leq x \leq a}f(x)$? | The KKT conditions require
$$f^\prime (x) = \mu-\lambda$$
$$ \lambda x = 0 $$
$$ \mu(x-a) = 0 $$
where $\lambda$ and $\mu$ are non-negative KKT multipliers for the constraints $x\ge 0$ and $x \le a$ respectively.
If there were no upper bound $a$ (or, equivalently, if the constraint $x\le a$ did not bind), then the KKT conditions described above reduce to the first-order conditions you state.
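Spelling out the complementary-slackness cases (assuming $a>0$, so the two constraints cannot bind simultaneously): if $0<x<a$ then $\lambda=\mu=0$ and $f'(x)=0$; if $x=0$ then $\mu=0$ and $f'(0)=-\lambda\le 0$; if $x=a$ then $\lambda=0$ and $f'(a)=\mu\ge 0$.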
Calculating equilibrium point of non-linear ODE with free parameter | We have:
$$
\dot{x}=1+x^{2}y-(1+A)x
$$
$$
\dot{y}=Ax-yx^{2}
$$
So, to find the equilibrium points, we need to simultaneously find $x' = y' = 0$.
We have:
$\dot{x}=1+x^{2}y-(1+A)x = 0$ and $\dot{y}=Ax-yx^{2} = 0$.
From the second equation, we get that $x^2 y = Ax$ and substitute that back in to the first equation, yielding:
$\dot{x}=1+x^{2}y-(1+A)x = 1 + Ax -(1+A) x = 1 + Ax -Ax -x = 0$, so
$1 - x = 0$, yielding $x = 1$.
So, we have a fixed point at $(1, A)$.
Yes, your method is correct.
If you wanted to analyze the behaviors, you would use this fixed point and see if it provides information on the behavior with the Jacobian evaluated at it (if these are not borderline points).
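For reference, here is a sketch of that computation. The Jacobian of the system is
$$
J(x,y)=\begin{pmatrix}2xy-(1+A) & x^2\\ A-2xy & -x^2\end{pmatrix},
\qquad
J(1,A)=\begin{pmatrix}A-1 & 1\\ -A & -1\end{pmatrix},
$$
so $\operatorname{tr}J(1,A)=A-2$ and $\det J(1,A)=1$. Since the determinant is positive, the fixed point is stable precisely when $A<2$, with $A=2$ the borderline case.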
Here are the direction fields and phase portraits for $A = -1$, $A = 0$ and $A=1$.
Regards |
Maximum order of integers coprime to a prime $p$ | Below is the key theorem as it applies to arbitrary finite abelian groups. See below for an example of how it is applied to deduce the more general result that a finite multiplicative subgroup of a domain is cyclic. The lemma is famous as "Herstein's hardest problem" - see the note below.
$\!\begin{align}{\bf Theorem}\quad\rm maxord(G)\ &\rm =\ expt(G)\ \ \text{for a finite abelian group $\rm\, G,\, $ i.e.}\\[.5em]
\rm \max\,\{ ord(g) : \: g \in G\}\ &=\rm\ \min\, \{ n>0 : \: g^n = 1\ \:\forall\ g \in G\} \end{align}$
Proof $\:$ By the lemma below, $\rm\: S = \{ ord(g) : \:g \in G \}$ is a finite set of naturals closed under$\rm\ lcm$.
Hence every elt $\rm\ s \in S\:$ is a
divisor of the max elt $\rm\: m\: $ [else $\rm\: lcm(s,m) > m\:$],$\ $ so $\rm\ m = expt(G)\:$.
Lemma $\ $ A finite abelian group $\rm\:G\:$ has an lcm-closed order set, i.e. with $\rm\: o(X) = $ order of $\rm\: X$
$\rm\quad\quad\quad\quad\ \ X,Y \in G\ \Rightarrow\ \exists\ Z \in G:\ o(Z) = lcm(o(X),o(Y))$
Proof$\ \ $ By induction on $\rm o(X)\: o(Y)\:.\ $ If it's $\:1\:$ then trivially $\rm\:Z = 1\:$. $\ $ Otherwise
write $\rm\ o(X) =\: AP,\: \ o(Y) = BP',\ \ P'|P = p^m > 1\:,\ $ prime $\rm\: p\:$ coprime to $\rm\: A,B$
Then $\rm\: o(X^P) = A,\ o(Y^{P'}) = B\:.\ $ By induction there's a $\rm\: Z\:$ with $\rm \: o(Z) = lcm(A,B)$
so $\rm\ o(X^A\: Z)\: =\: P\ lcm(A,B)\: =\: lcm(AP,BP')\: =\: lcm(o(X),o(Y))\:$.
Note $ $ This lemma was presented as problem 2.5.11, p. 41 in the first edition of Herstein's popular textbook "Topics in Algebra". In the 2nd edition Herstein added the following note (problem 2.5.26, p.48)
"Don't be discouraged if you don't get this problem with what you know of group theory up to this stage. I don't know anybody, including myself, who has done it subject to the restriction of using material developed so far in the text. But it is fun to try. I've had more correspondence about this problem than about any other point in the whole book."
Below is excerpted from my sci.math post on Apr 29, 2002, as is the above Lemma.
Theorem $ $ A subgroup $G$ of the multiplicative group of a field is cyclic.
Proof $\ X^m = 1\,$ has $\#G$ roots by the above Lemma, with $\,m = {\rm maxord}(G) = {\rm expt}(G).\,$ Since a polynomial $\,P\,$ over a field satisfies
$\,\#{\rm roots}(P) \le \deg(P)\,$ we have $\,\#G \le m.\,$ But $\,m \le \#G\,$ because the maxorder $\,m <= \#G\,$ via $\,g^{\#G} = 1\,$ for all $\,g \in G\,$ (Lagrange's theorem). So $\,m = \#G = {\rm maxord}(G) \Rightarrow G\,$ has an elt of order $\,\#G,\,$ so $G$ is cyclic. |
Prove or disprove $2abc(a+b+c)\ge 3(a^2b^2c^2+1)$ | What follows is not so much an answer to the question
(it was already fully answered in comments)
as an essay intended to bring consolation to, perhaps even cheer up,
the inequality stated in the question, which must feel somewhat mistreated, poor thing,
and justifiably so.
The counterexample provided by "Macavity" is elegant $($albeit not fully spelled out$)$:
decrease $c$ towards $0$ $($$c$ never reaches $0$ since it must be strictly positive$)$,
fix $a>0$, and let $b:=(3-ac)/(a+c)$ so that the constraint $ab+ac+bc=3$ is satisfied
$($also $b>0$ when $c$
is small enough$)$. Then $abc\to a\cdot(3/a)\cdot 0 = 0$,
the LHS of the inequality converges to $0$, the RHS converges to $3$,
thus for all small enough $c$ the inequality LHS${}\geq{}$RHS fails.
The given inequality, which we were asked to prove or disprove, is definitely disproved.
And that's it, problem solved.
But is it?
Have you gained any mental image of why the inequality fails to hold?
Have you learned anything useful, anything that you may reuse, perhaps in a modified form,
when working on a similar, or a not-so-similar problem?
Have you at least acquired a couple of ways of looking for counterexamples,
or of ways of making preliminar probes into a problem, trying to find its soft spots?
My guess is that the answers are, respectively, "no", "certainly not", and "very probably not".
My experience as a budding mathematician $($oh so long ago$)$ was
that the so called exercises are mere starting points, not ultimate goals.
When doing an exercise I always kept asking additional questions
which were as interesting, if not more interesting, than the exercise itself.
Also, trying this or that approach I sometimes veered away from the theme of the exercise
into an uncharted new territory;
or during the work on the exercise there emerged an object with special properties,
which made me analyze the object, perhaps study the class of all objects with those properties;
or I took
a second look at some failed attempt and noticed that
the faulty reasoning I used in some situation may become a legitimate reasoning
in a slightly different situation.
And so on, and so forth.
Now and then a work on an exercise developed into a miniature research project.
And that's how
I learned doing mathematics,
working through some thousands of exercises in this peculiar way.
Let us play around with the inequality in the present question and see where it leads us.
The user "geromty", who asked the question, first gave the inequality
\begin{equation*}
2abc(a+b+c) \,\geq\, 3(a^2b^2c^2+3)
\end{equation*}
$($assuming $a,b,c>0$ and $ab+ac+bc=3$$)$,
which after the easy counterexample $a=b=c=1$
he changed to the present inequality
\begin{equation*}
2abc(a+b+c) \,\geq\, 3(a^2b^2c^2+1)~.
\end{equation*}
Both versions of the inequality can be written as
\begin{equation*}
2abc(a+b+c) \,\geq\, 3a^2b^2c^2 + \mathit{const}~,
\end{equation*}
the first one with $\mathit{const}=9$, the second one with $\mathit{const}=3$.
Let us make another try with $\mathit{const}=0\,$:
something tells us that third time round we might be lucky.
Since $abc>0$, the inequality $2abc(a+b+c)\geq 3a^2b^2c^2$ is equivalent to the simpler inequality
\begin{equation*}
2(a+b+c) \,\geq\, 3abc~.
\end{equation*}
Before we proceed, we define the function
\begin{equation*}
f_1(a,b,c) \,:=\, 2(a+b+c)-3abc~, \qquad a,b,c>0\,,~ ab+ac+bc = 3\,,
\end{equation*}
and ask Mathematica to minimize $f_1(a,b,c)$, within the given constraints:
Wow! This is even better than what we expected: we have the inequality
\begin{equation}
2(a+b+c) \,\geq\, 3(abc+1) \qquad \text{if $a,b,c>0$ and $ab+ac+bc = 3$}\,. \tag{1}
\end{equation}
If we multiply this inequality by $abc>0$, we obtain the $($valid$)$ inequality
\begin{equation*}
2abc(a+b+c) \,\geq\, 3abc(abc+1)~,
\end{equation*}
with the same constraints on $a$, $b$, $c$ as inequality $(1)$,
which suspiciously closely resembles the inequality in the question asked by "geromty":
it is as if someone too hastily multiplied $abc+1$
by $abc$ and obtained $a^2b^2c^2+1$,
never noticing the error.
Mathematica told us that the inequality $(1)$ is true,
but that does not absolve us from actually proving it.
We define the function
\begin{equation*}
f_2(a,b,c) \,:=\, 2(a+b+c) - 3(abc+1)\,, \qquad a,b,c>0\,,~ ab+ac+bc = 3~, \tag{2}
\end{equation*}
that is, $f_2(a,b,c)=f_1(a,b,c)-3$.
Let us express, from the constraint $ab+ac+bc=3$,
the variable $c$ as a function of the variables $a$ and $b$,
\begin{equation*}
c \,=\, \frac{3-ab}{a+b}~; \tag{3}
\end{equation*}
since $a+b>0$, the condition $c>0$ is equivalent to $ab<3$.
We feed $c$ expressed by $a$ and $b$
as in $(3)$ to $f_2(a,b,c)\,$:
\begin{equation*}
g_2(a,b) \,:=\, f_2\Bigl(a,\,b,\,\frac{3-ab}{a+b}\Bigr)~, \qquad a,b>0\,,~ab<3\,.
\end{equation*}
We want to get a 'feel' for the behavior of the function $g_2$.
The awkward thing about the funtion $g_2$ is that it is defined on an unbounded domain,
namely on the region between the positive halves of the axes $(a)$ and $(b)$
and the branch of the rectangular hyperbola $ab=3$ in the first quadrant,
and besides that it can attain arbitrarily large positive values.
We want to actually see the function $g_2$
in its entirety,
over all of its domain and over its complete range of values.
This can be done.
The homeomorphism $\theta\colon\mathbb{R}_{\geq0}\to[\mspace{2mu}0,1)$ defined by
\begin{equation*}
\theta(x) \,:=\, \frac{x}{1+x}~, \qquad x \geq 0\,,
\end{equation*}
has the inverse
\begin{equation*}
\theta^{-1}(y) \,=\, \frac{y}{1-y}~, \qquad 0 \leq y < 1~.
\end{equation*}
We can extend the homeomorphism $\theta$ to a homeomorphism $[0,\infty]\to[0,1]$,
still denoted $\theta$, by setting $\theta(\infty):=1$.
To the points $0$, $1$, $\infty$ of the nonnegative half $[0,\infty]$ of the extended real axis
there correspond, via the homeomorphism $\theta$,
the points $0$, $1/2$ and $1$ of its "foreshortened"
image $[0,1]$.
Now we can squeeze the infinite extents of the positive half-axes and of the range of $g_2$
into unit intervals and obtain a function
\begin{equation*}
h_2(u,v) \,:=\, \theta\bigl(g_2\bigl(\theta^{-1}(u),\,\theta^{-1}(v)\bigr)\bigr)~,
\qquad\quad 0<u,v<1\,,\quad\frac{u}{1-u}\,\frac{v}{1-v}<3\,,
\end{equation*}
whose diagram within the confines of the unit cube represents, point by point,
the entire diagram of the function $g_2$:
Though the diagram is of the function $h_2$ of arguments $u$ and $v$,
we read it as a diagram representing the function $g_2$ of arguments $a$ and $b$,
with the tick's labels on the axes indicating the values of $g_2$ and its arguments.
The function $h_2$ is defined on the set of all $(u,v)$ satisfying the inequalities
$0<u<1$, $0<v<1$ and $v<(3-3u)/(3-2u)$;
the part of the boundary of the domain of $h_2$ given by $v=(3-3u)/(3-2u)$ for $0\leq u\leq 1$
is an arc of the rectangular hyperbola $(u-3/2)(v-3/2)=3/4$.
The two crossed lines drawn in the domain of $h_2$ intersect at the point $(u,v)=(1/2,1/2)$
that correponds to the one and only point $(1,1)$
at which the function $g_2$ attains its global minimum $0$.
It is high time we start thinking about actually proving the inequality
\begin{equation*}
g_2(a,b) \,\geq\, 0 \qquad \text{if $a,b>0$ and $ab<3$}\,, \tag{4}
\end{equation*}
which is equivalent to the inequality $(1)$ we intend to prove.
We multiply the inequality $(4)$ by $a+b>0$,
obtaining another inequality $G_2(a,b)\geq 0$ equivalent to $(1)$,
where $G_2(a,b):=(a\!+\!b)\mspace{-2mu}\cdot\mspace{-2mu}g_2(a,b)$.
In the expanded form the inequality $G_2(a,b)\geq 0$ reads
\begin{equation*}
\begin{split}
6 - 3a - 3b + 2a^2 - 7ab + 2b^2 + 3a^2b^2 &\,\geq\, 0 \\[.5ex]
&\text{if $a,b>0$ and $ab<3$}\,.
\end{split}
\tag{5}
\end{equation*}
Let $\alpha$ be a real parameter in the range $0<\alpha<2$.
We rewrite the left hand side of the inequality $(5)$ as
\begin{align*}
&6 \,+\, \alpha(a^2\mspace{-2mu}-\mspace{-2mu}2ab\mspace{-2mu}+\mspace{-2mu}b^2) \\[.5ex]
&\qquad\quad {}\,+\, \bigl((2\!-\!\alpha)a^2\mspace{-2mu}-\mspace{-2mu}3a\bigr)
\,+\, \bigl((2\!-\!\alpha)b^2\mspace{-2mu}-\mspace{-2mu}3b\bigr)
\,+\, \bigl(3(ab)^2\mspace{-2mu}-\mspace{-2mu}(7\!-\!2\alpha)ab\bigr) \\[1ex]
&\quad\,=\, 6 \,-\, \frac{9}{4(2\!-\!\alpha)} \,-\, \frac{9}{4(2\!-\!\alpha)}
\,-\, \frac{(7\!-\!2\alpha)^2}{12} \,+\, \alpha(a\!-\!b)^2\\
&\qquad\quad
\,+\, (2\!-\!\alpha)\Bigl(a\mspace{-1mu}-\mspace{-1mu}\frac{3}{2(2\!-\!\alpha)}\Bigr)^{\!2}\!
\,+\, (2\!-\!\alpha)\Bigl(b\mspace{-1mu}-\mspace{-1mu}\frac{3}{2(2\!-\!\alpha)}\Bigr)^{\!2}\!
\,+\, 3\Bigl(ab\mspace{-1mu}-\mspace{-1mu}\frac{7\!-\!2\alpha}{6}\Bigr)^{\!2}~.
\end{align*}
The constant part of the second formula
$($it being constant means it does not depend on $a$ or $b$$)$, which is
\begin{equation*}
C(\alpha) \,:=\, 6 \,-\, 2\cdot\frac{9}{4(2\!-\!\alpha)} \,-\, \frac{(7\!-\!2\alpha)^2}{12}~,
\end{equation*}
cannot be $>0$ for any $\alpha$ in the range $0<\alpha<2$ since this would imply
that there is an $\varepsilon>0$ such that $g_2(a,b)\geq\varepsilon$ over all of the domain of $g_2$,
which we know is not true because the global minimum of $g_2$ is $0$ $($we trust Mathematica in this$)$.
The best we may hope for is that $C(\alpha)=0$ for some $\alpha$ within its prescribed range.
Well, $C(\alpha)$ is a quotient of polynomials,
concretely it is
a cubic polynomial in $\alpha$ $($with rational coefficients$)$ divided by $2-\alpha$.
If the cubic polinomial in
the numerator has a zero, it must have a double zero,
so it will factor into linear factors.
So let's see
if our hunch is worth its salt:
\begin{equation*}
C(\alpha) \,=\, \frac{(\alpha-8)(2\alpha-1)^2}{12(2-\alpha)}~.
\end{equation*}
We are through, since we have $C(\alpha)=0$ by choosing $\alpha=1/2$,
and with this value of $\alpha$ the left hand side $G_2(a,b)$ of $(5)$ gets rewritten as
\begin{equation*}
G_2(a,b) \,=\,
\tfrac{1}{2}(a\mspace{-2mu}-\mspace{-2mu}b)^2
+\, \tfrac{3}{2}(a\mspace{-2mu}-\mspace{-2mu}1)^2
+\, \tfrac{3}{2}(b\mspace{-2mu}-\mspace{-2mu}1)^2
+\, 3(ab\mspace{-2mu}-\mspace{-2mu}1)^2~.
\end{equation*}
Observe that $G_2(a,b)\geq 0$ for all real $a$ and $b$, not just for $a,b>0$ satisfying $ab<3$,
and that $G_2(a,b)=0$ if and only if $(a,b)=(1,1)$.
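If you want to confirm the algebra independently, a one-off SymPy check (my own addition, not part of the original derivation) verifies that the sum of squares expands back to $G_2$:

```python
import sympy as sp

a, b = sp.symbols("a b", real=True)
G2 = 6 - 3*a - 3*b + 2*a**2 - 7*a*b + 2*b**2 + 3*a**2*b**2
sos = (sp.Rational(1, 2)*(a - b)**2 + sp.Rational(3, 2)*(a - 1)**2
       + sp.Rational(3, 2)*(b - 1)**2 + 3*(a*b - 1)**2)
print(sp.expand(G2 - sos))  # prints 0: the two expressions agree
```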
The fact that $G_2(a,b)\geq 0$ for all $a,b\in\mathbb{R}$
does not mean that $g_2(a,b)>0$ for all $a,b\in\mathbb{R}$.
First of all, if $a+b=0$, then $g_2(a,b)$ is not defined.
And if $a+b<0$, then $g_2(a,b)=G_2(a,b)/(a\!+\!b)<0$ because $G_2(a,b)>0$.
Now we are going to turn the tables:
starting with the inequality $(1)$, which we have just proved,
we will concoct one truly remarkable inequality.
You will have the rare opportunity to observe at close quarters
the working of a devious mind of the kind that produces all those
bothersome inequalities that you are then given to prove $($or disprove$)$.
Let us introduce the set $H_3:=\{(a,b,c)\in\mathbb{R}^3\mid a b+a c+b c=3\}$
$($we should have done this the moment we first mentioned the constraint $ab+ac+bc=3\,$;
but, as it is said, better late than never$)$.
If $(a,b,c)\in H_3$, then $a+b\neq 0$, and by symmetry also $a+c\neq 0$ and $b+c\neq 0\,$;
indeed, $a+b=0$ implies $-a^2 = ab+ac+bc = 3$, which cannot be.
Thus we can have either $a+b>0$ or $a+b<0\,$.
This result is slightly unsettling: the condition $(a,b,c)\in H_3$ is symmetric in $a$, $b$, $c$
while the condition $a+b>0$ is slanted towards the pair $a$, $b$.
However, contrary to the appearance, the condition $a+b>0$ is actually symmetric in $a$, $b$, $c$
since it implies
\begin{equation*}
a+c \,=\, a+(3\!-\!ab)/(a\!+\!b) \,=\, (3\!+\!a^2)/(a+b) \,>\, 0~;
\end{equation*}
by symmetry the three conditions $a+b>0$, $a+c>0$, $b+c>0$ imply each other
$($supposing $(a,b,c)$ is a point in $H_3$, do not forget about this$)$.
Similarly the conditions $a+b<0$, $a+c<0$, $b+c<0$ imply each other.
Let us visualize the set $H_3$, over the whole of its extent.
As with visualizing the function $g_2(a,b)$
$($for $a,b>0$ and $ab<3$$)$,
we have to squeeze an infinite extent into a finite interval,
but now we have in each of the three cooordinate directions
a two-way infinite extent, that of the whole real line, to squeeze.
We achieve this by extending our 'squeeze' map $\theta\colon[0,\infty]\to[0,1]$
to a map $[-\infty,\infty]\to[-1,1]$, still denoted $\theta$,
by setting $\theta(x):=-\theta(-x)$ for $-\infty\leq x\leq 0$.
That is, for
a real $x\leq 0$ we have $\theta(x)=x/(1-x)$, and $\theta(-\infty)=-1$.
The inverse of the extended $\theta$ gives $\theta^{-1}(y)=y/(1+y)$ for $-1<y\leq 0$
and $\theta^{-1}(-1)=-\infty$.
The extended 'squeeze' map $\theta$ is constructed as a non-smooth spline of two smooth halves.
This would be a serious shortcoming for theoretical 'squeezing' of, say,
smooth fields or smooth differential forms,
but it is good enough for our visualizations.
If you insist on using an overall smooth homeomorphism $(-\infty,\infty)\to(-1,1)$,
and one that is an odd function to boot,
then there is no shortage of such maps.
Here are two maps for you to choose from:
one is $x\mapsto (2/\pi)\arctan(x)$ with the inverse $y\mapsto\tan(\pi y/2)$,
another one is $x\mapsto (4/3)x/\bigl(1\!+\!\sqrt{1\!+\!((4/3)x)^2}\,\bigr)$
with the inverse $y\mapsto (3/2)y/\bigl(1\!-\!y^2\bigr)\,$.
The curious multipliers $4/3$ and $3/2$ appearing in the formulas for the second map
are there to ensure that
the points $-1$ and $1$ of the real line
correspond to the points $-1/2$ and $1/2$ in the interval $(-1,1)$.
From the defining equation $ab+ac+bc=3$ of the set $H_3$
we express $c\in\mathbb{R}$
as a function $c(a,b)=(3\!-\!ab)/(a\!+\!b)$ of $a,b\in\mathbb{R}$, $a+b\neq 0$.
In order to see the set $H_3$ using our infini-vision,
we draw the diagram of
$w(u,v):=\theta\bigl(c\bigl(\theta^{-1}(u),\theta^{-1}(v)\bigr)\bigr)\in(-1,1)$,
where $u,v\in(-1,1)$, $u+v\neq 0$.$\,$
$($Because $\theta$ is an odd function, the points $a$ and $b$ of the real line such that $a+b=0$
correspond to the points $u:=\theta(a)$ and $v:=\theta(b)$ in the interval $(-1,1)$
such that $u+v=0$.$)\,$
Here it is, the diagram:
Note that the points on the diagonals forming the boundary of the diagram,
such as the diagonal from $(-\infty,\infty,\infty)$ to $(\infty,-\infty,\infty)$,
do not belong to the diagram proper.
We clearly see
that the parallel projection of the set $H_3$ to the coordinate plane $(a,b)$
along the axis (c) is the whole plane $\mathbb{R}^2$
minus the 'cross-diagonal' $\{(x,-x)\mid x\in\mathbb{R}\}$,
and that the same is true of the projection to the plane $(a,c)$ along the axis (b)
and of the projection to the plane $(b,c)$ along the axis (a).
And here is the part of the set $H_3$ in the positive octant $\mathbb{R}_{>0}^3$,
with which we began the discussion of the inequality $(1)$:
Now we extend the function $f_2$,
which was defined in $(2)$ only on the part of $H_3$ in the positive octant, to the whole $H_3$:
\begin{equation*}
f_2(a,b,c) \,:=\, 2(a+b+c) - 3(abc+1)~, \qquad ab+ac+bc = 3~. \tag{6}
\end{equation*}
Let us also recall the definitions of the elementary symmetric polynomials
\begin{align*}
s_1 \,=\, s_1(a,b,c) &\,:=\, a+b+c~, \\[.5ex]
s_2 \,=\, s_2(a,b,c) &\,:=\, ab+ac+bc~, \\[.5ex]
s_3 \,=\, s_3(a,b,c) &\,:=\, abc~,
\end{align*}
which we at once employ to recast the definition $(6)$ of the extended function $f_2$,
\begin{equation*}
f_2(a,b,c) \:=\, 2s_1(a,b,c) - 3s_3(a,b,c) - 3~, \qquad s_2(a,b,c) = 3~. \tag{7}
\end{equation*}
We introduce the function $\overline{f}_2(a,b,c):=-f_2(-a,-b,-c)$ for $s_2(a,b,c)=3$
$($note that $s_2(a,b,c)=3$ implies $s_2(-a,-b,-c)=s_2(a,b,c)=3$$)$, so that
\begin{equation*}
\overline{f}_2(a,b,c) \,=\, 2s_1(a,b,c) - 3s_3(a,b,c) + 3~, \qquad s_2(a,b,c)=3~. \tag{8}
\end{equation*}
We have $\overline{f}_2(a,b,c)\geq 6$ when $s_2(a,b,c)=3$ and $a+b>0$.
The function
\begin{equation*}
f_3(a,b,c) \,:=\, f_2(a,b,c)\,\overline{f}_2(a,b,c)~, \qquad s_2(a,b,c) = 3~, \tag{9}
\end{equation*}
which is defined on $H_3$,
satisfies the condition that $f_3(-a,-b,-c)=f_3(a,b,c)$ for every $(a,b,c)\in H_3$,
and it follows that
\begin{equation*}
f_3(a,b,c)\geq 0 \qquad \text{everywhere on $H_3$}\,,
\end{equation*}
where $f_3(a,b,c)=0$ if and only if $(a,b,c)=(1,1,1)$ or $(a,b,c)=(-1,-1,-1)$.
Using $(7)$ and $(8)$ we rewrite $(9)$ as
\begin{equation*}
f_3 \,=\, (2s_1-3s_3)^2 - 9~, \qquad \text{defined when $s_2=3$}\,. \tag{10}
\end{equation*}
We introduce yet another function,
\begin{equation*}
f_4 \,:=\, (2s_1s_2-9s_3)^2 - 3s_2^3~, \tag{11}
\end{equation*}
which is a homogeneous polynomial in $a$, $b$, $c$ of degree $6$ defined everywhere on $\mathbb{R}^3$.
We claim that
\begin{equation*}
f_4(a,b,c) \,\geq\, 0 \qquad \text{for all $a,b,c\in\mathbb{R}$}~. \tag{12}
\end{equation*}
This is the truly remarkable inequality we have promised a while ago.
Fully expanded the inequality $(12)$ reads
\begin{equation*}
\begin{aligned}
&4\bigl(a^4 b^2 + a^2 b^4 + a^4 c^2 + a^2 c^4 + b^4 c^2 + b^2 c^4\bigr) \\
&\qquad {} \,+\, 8\bigl(a^4 b c + a b^4 c + a b c^4\bigr)
\,+\, 5\bigl(a^3 b^3 + a^3 c^3 + b^3 c^3\bigr) \,+\, 15 a^2 b^2 c^2 \\
&\qquad\qquad {} \,-\, 13\bigl(a^3 b^2 c + a^3 b c^2 + a^2 b c^3
+ a^2 b^3 c + a b^3 c^2 + a b^2 c^3\bigr) ~\geq~ 0~.
\end{aligned}
\end{equation*}
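One can again let a computer algebra system confirm that $(11)$ expands to the displayed polynomial; a short SymPy sketch (my own addition):

```python
import sympy as sp

a, b, c = sp.symbols("a b c", real=True)
s1, s2, s3 = a + b + c, a*b + a*c + b*c, a*b*c
f4 = (2*s1*s2 - 9*s3)**2 - 3*s2**3
displayed = (4*(a**4*b**2 + a**2*b**4 + a**4*c**2 + a**2*c**4 + b**4*c**2 + b**2*c**4)
             + 8*(a**4*b*c + a*b**4*c + a*b*c**4)
             + 5*(a**3*b**3 + a**3*c**3 + b**3*c**3) + 15*a**2*b**2*c**2
             - 13*(a**3*b**2*c + a**3*b*c**2 + a**2*b*c**3
                   + a**2*b**3*c + a*b**3*c**2 + a*b**2*c**3))
print(sp.expand(f4 - displayed))  # prints 0
```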
Proving the inequality $(12)$ is simple.
If $s_2(a,b,c)\leq0$, then it is evident from $(11)$ that $f_4(a,b,c)\geq 0$.
Now the case $s_2(a,b,c)>0$.
We first observe that $s_2=3$ implies that $f_3$ is defined,
and that $f_4=9f_3$ which is clear from $(10)$ and $(11)$.
Then with $\kappa:=\sqrt{3/s_2(a,b,c)}$
we have $s_2(\kappa a,\kappa b,\kappa c) = \kappa^2 s_2(a,b,c) = 3$,
and
\begin{equation*}
\kappa^6 f_4(a,b,c)
\,=\, f_4(\kappa a,\kappa b,\kappa c)
\,=\, 9 f_3(\kappa a,\kappa b,\kappa c)
\,\geq\, 0~,
\end{equation*}
therefore $f_4(a,b,c)\geq 0$.$~$ Done.
The zeros of $f_4$ are easy to determine.
If $s_2(a,b,c)>0$, then $f_4(a,b,c)=0$ iff $f_3(\kappa a,\kappa b,\kappa c) = 0$
(with $\kappa$ as above), iff $(\kappa a,\kappa b,\kappa c)=\pm(1,1,1)$,
iff $a=b=c$ $($where the common value of $a$, $b$, $c$
is $\neq 0$ since $s_2(0,0,0)=0$$)$.
Otherwise, if $s_2(a,b,c)\leq 0$, then $f_4(a,b,c)=0$
iff $s_2(a,b,c)=0$ and $-9s_3(a,b,c)=2s_1(a,b,c)s_2(a,b,c)-9s_3(a,b,c)=0$,
iff at least two of $a$, $b$, $c$ are $0$.
Here's why: because of $abc = s_3(a,b,c)=0$ at least one of $a$, $b$, $c$ is $0\,$;
if, say, $a=0$, then $bc=s_2(0,b,c)=0$, hence $b=0$ or $c=0$.
Conversely, if at least two of $a$, $b$, $c$ are $0$, then $s_2(a,b,c)=0$ and $s_3(a,b,c)=0$,
whence $f_4(a,b,c)=0$.
The set of zeros of $f_4$ is therefore
the union of the one-dimensional subspaces $\mathbb{R}u$ of $\mathbb{R}^3$
for $u\in \{(1,1,1),\,(1,0,0),\,(0,1,0),\,(0,0,1)\}$.
In other words, the homogeneous polynomial $f_4$ has besides the trivial 'affine' zero $(0,0,0)$
$($which is a zero of every nonconstant homogeneous polynomial$)$
the set $Z_4=\{(1\!:\!1\!:\!1),\,(1\!:\!0\!:\!0),\,(0\!:\!1\!:\!0),\,(0\!:\!0\!:\!1)\}$
of four 'projective' zeros.
Now we of course want to see what the function $f_4$ looks like.
It helps that $f_4(a,b,c)$ is a homogeneous polynomial in $(a,b,c)$;
if we know its values on the unit sphere $S^2 = \{(a,b,c)\mid a^2+b^2+c^2=1\}$
$($the "$2$" in $S^2$ is a superscript, not an exponent$)$,
then we know its values on a sphere $rS^2$ of any radius $r\,$:
every point in $rS^2$ is of the form $ru$ for some $u\in S^2$
and therefore $f_4(ru) = r^6 f_4(u)$ (where the exponent $6$ is the degree of $f_4$).
An attractive way to represent values of some nonnegative function $h$ defined on the unit sphere
is the one used for the visualization of the Earth's geoid:
at each point $u\in S^2$ we represent the value $h(u)$
by the point $(1+\rho\mspace{1mu}h(u))\mspace{1mu}u$,
that is, by the point at the elevation $\rho\mspace{1mu}h(u)$
vertically above the point $u$ on the sphere,
where "vertical" means "in the direction of the vector $u$" $($which points straight up$)$.
The positive constant $\rho$ is a suitably chosen scaling factor:
usually we will want the average value of $\rho\mspace{1mu}h$ on the unit sphere
to be a fraction of its radius $1$, that is, some value between $0$ and $1$,
not too small and not too large, and we choose a factor $\rho$ accordingly.
Of course, if you like the 'geoid' extending halfway to the Moon,
or, at the other extreme, to be barely discernible from the unit sphere,
with microscopic maximum heights,
go forth and choose a scaling factor to achieve the appearance your heart desires.
Let us ask Mathematica to tell us the maximum value of $f_4$ on the unit sphere:
With the maximum slightly over $2$ the scaling factor $\rho=1/3$ seems reasonable.
Here is the corresponding geoid of $f_4$:
We oriented the geoid so that the vector $(1,1,1)$ points upwards.
The blue dot at the top marks the position of the zero $(1,1,1)/\sqrt{3}$ of $f_4$ on the unit sphere.
The polynomial $f_4$ has eight zeros on the unit sphere
$($each of the four projective zeros of $f_4$ is represented on the unit sphere by
a pair of antipodal affine zeros$)$,
but only three of them are visible.
Here's a thought: why not make an anti-geoid of a function $h$ defined on the unit sphere,
representing the value of $h$ at a point $u\in S^2$ by the point
$(1-\rho\mspace{1mu}h(u))\mspace{1mu}u$ below the point $u$ on the sphere.
If we choose the scaling factor $\rho$
so that the maximum value of $\rho\mspace{1mu}h(u)$ will be less than $1$,
then we will obtain a non-self-intersecting surface in the interior of the unit sphere.
Let's do this
for $f_4$.
We choose $\rho=1/3$ as for the geoid;
but this time we draw only the upper hemisphere and the part of the anti-geoid of $f_4$ it contains,
and make everything, the hemisphere and the anti-geoid inside it, transparent:
As before, the blue dots mark the positions of zeros of $f_4$ on the unit sphere.
The result is slightly reminiscent of snow globes
$($well, of half-globes, without the snow$)$.
We shall use the inequality $(12)$ to produce another remarkable inequality
\begin{equation*}
f_5(a,b,c) \,\geq\, 0 \qquad \text{for all $a,b,c\in\mathbb{R}$}~, \tag{13}
\end{equation*}
where $f_5$ will be a symmetric homogeneous polynomial of degree $6$.
To this end we first restrict the inequality $(12)$ to the strictly positive $a$, $b$, $c\,$:
\begin{equation*}
f_4(a,b,c) \,\geq\, 0 \qquad \text{for all $a,b,c>0$}~. \tag{14}
\end{equation*}
Into this restricted inequality we introduce new strictly positive variables
$x:=bc$, $y:=ac$, $z:=ab$.
$($There was an answer which employed this change of variables,
but it was later removed by its author.
With my reputation far below 10,000 I am not able to peek into the deleted answer
in order to find out and tell you who posted it.$)$
The transformation $(a,b,c)\mapsto(x,y,z)$ is a bijection $\mathbb{R}_{>0}^3\to\mathbb{R}_{>0}^3$;
the inverse transformation is $(x,y,z)\mapsto\bigl(\sqrt{yz/x},\,\sqrt{xz/y},\,\sqrt{xy/z}\,\bigr)$.
From the inequality $(14)$ we therefore obtain the inequality
\begin{equation*}
f_4\Bigl(\sqrt{yz/x},\,\sqrt{xz/y},\,\sqrt{xy/z}\,\Bigr) \,\geq\, 0
\qquad \text{for all $x,y,z>0$}~. \tag{15}
\end{equation*}
The left hand side of this inequality is not a polynomial,
it is a quotient $f_5(x,y,z)/(xyz)$,
where $f_5(x,y,z)$ is a symmetric homogeneous polynomial of degree $6$.
So we multiply the inequality $(15)$ by $xyz>0$,
then rename the variables $x$, $y$, $z$ back to $a$, $b$, $c$,
and thus obtain the promised inequality $(13)$, although restricted to $a,b,c>0$.
It is a fact that $f_5$ satisfies the unrestricted inequality $(13)$, for any $a,b,c\in\mathbb{R}\,$:
Mathematica assures us that it is so (as we shall see shortly).
At this point I have no idea how to go about proving the inequality $(13)$.
Do you? Have an idea, I mean, how to prove $(13)$?
The symmetric polynomial $f_5$ of $a$, $b$, $c$
can be expressed as a polynomial in the elementary symmetric polynomials $s_1$, $s_2$, $s_3$, thus:
\begin{equation*}
f_5 \,=\, (2s_1s_2-9s_3)^2 - 3s_1^3s_3~. \tag{16}
\end{equation*}
The formula for $f_5$ differs very little from the formula $(11)$ for $f_4$:
instead of the term $-3s_2^3$ in $(11)$ there is the term $-3s_1^3s_3$ in $(16)$.
Since the polynomial $f_5$ is symmetric and homogeneous of even degree,
we only need to verify that $f_5(a,b,1)\geq 0$ for $-1\leq a,b\leq 1$
to be assured that $f_5(a,b,c)\geq 0$ for all $a,b,c\in\mathbb{R}$.
Indeed, if $(a,b,c)\neq(0,0,0)$,
one of the absolute values $|a|$, $|b|$, $|c|$ will be the largest and nonzero,
and because of the symmetry it suffices to consider the case where $|c|$ is the largest;
but then $f_5(a,b,c)=c^6f_5(a/c,\,b/c,\,1)$, where $-1\leq a/c,\,b/c\leq 1$.
So we make Mathematica minimize $f_5(a,b,1)$ on the square $-1\leq a,b\leq 1$:
Therefore the inequality $(13)$ holds $($though we do not know how to prove it$)$.
The point $(a,b)=(0,0)$ may not be the only point in the square $-1\leq a,b\leq 1$
at which $f_5(a,b,0)$ attains the minimum $0$,
so we ask Mathematica to find all zeros of $f_5$ in the square:
It follows that the set of all zeros of $f_5$
is the union of the seven one-dimensional subspaces $\mathbb{R}u$
for $u\in \{(1,1,1),\,(1,0,0),\,(0,1,0),\,(0,0,1),\,(1,-1,0)\,,(0,1,-1)\,,(-1,0,1)\}$.
We also request the maximum value of $f_5$ on the unit sphere,
which is only a little larger than the maximum value of $f_4$ on the unit sphere,
so we again choose the scaling factor $\rho=1/3$ with which to construct the geoid of $f_5$,
and also the anti-geoid of $f_5$ inside the unit hemisphere:
As before, the blue dots mark the positions of zeros of $f_5$ on the unit sphere.
What do we get if we apply to the function $f_5$ the same change of variables
that we applied to the function $f_4$ to obtain the function $f_5$?
We are not very surprised when we find that we get back the function $f_4$.
This is not a mere lucky coincidence:
Lemma 1.
$~$Let $h(a,b,c)$ be a symmetric homogeneous polynomial of degree $6$
in real variables $a$, $b$, $c$,
with the individual degree $\deg_a h$ at most $4$
$($and hence also $\deg_b h\leq 4$ and $\deg_c h\leq 4$$)$.
Then the function $h^*(a,b,c)$ of real variables $a$, $b$, $c$,
which is defined by
\begin{equation*}
h^*(a,b,c) \,:=\, abc\mspace{-1mu}\cdot\mspace{-2mu}
h\bigl(\sqrt{b}\mspace{1mu}\sqrt{c}/\mspace{-1mu}\sqrt{a},\,
\sqrt{a}\mspace{1mu}\sqrt{c}/\mspace{-1mu}\sqrt{b},\,
\sqrt{a}\mspace{1mu}\sqrt{b}/\mspace{-1mu}\sqrt{c}\,\bigr)~,
\end{equation*}
is a symmetric homogeneous polynomial of degree $6$ in $a$, $b$, $c$
such that $\deg_a h^* \leq 4\,$.
Moreover, $h^{**}=h\,$.
$($The polynomial $h^*(a,b,c)$ is computed in the field
$\mathbb{R}\bigl(\sqrt{a},\sqrt{b},\sqrt{c}\,\bigr)$,
where $a$, $b$, $c$ are formal indeterminates
and $\sqrt{a}$, $\sqrt{b}$, $\sqrt{c}$
are symbols whose squares are, respectively, $a$, $b$, $c$.$)$
The proof
of Lemma 1 is straightforward:
just observe how the monomials are transformed by the change of variables
-- there are only five monomials to consider.
The fact that $f_4^{**} = f_4$ leads to a simple proof of the inequality $(13)$.
This is a special instance of
a more general result:
Lemma 2.
$~$If $h$ and $h^*$ are as in Lemma 1,
then $h(a,b,c)\geq 0$ for all $a,b,c\in\mathbb{R}$
if and only if $h^*(a,b,c)\geq 0$ for all $a,b,c\in\mathbb{R}$.
Proof.\, Suppose that $h^*(a,b,c)\geq 0$ for all $a,b,c\in\mathbb{R}\,$.
Applying the change of variables backwards we get
$h(a,b,c)=h^*(bc,ac,ab)/(abc)^2$ for all $a,b,c\neq 0$,
hence $h(a,b,c)\geq 0$ for all $a,b,c\neq 0\,$.
Since the polynomial function $\mathbb{R}^3\to\mathbb{R} : (a,b,c)\mapsto h(a,b,c)$ is continuous,
and the set of all $(a,b,c)\in\mathbb{R}^3$ such that $a,b,c\neq 0$
is a dense (open) subset of $\mathbb{R}^3$,
it follows that $h(a,b,c)\geq 0$ for all $(a,b,c)\in\mathbb{R}^3$.
The converse follows because $h^{**}=h$ by Lemma 1.$\,$ Done.
This is not the end of the story.
The present essay is intended to be just the first one in a series of essays
dedicated to the inequality stated in the question.
I'll be back.
Later. $~$Because of the unanimous protests (of one) against long and unwanted answers I decided that enough is enough and so won't be back, anytime soon, anywhere on StackExchange Mathematics. This is no web site for old mathematicians with a didactic leaning. |
Transform SE3 pose covariance | Indeed, after mapping a covariance from quaternion space to axis angle space
and back you will not get the same covariance, because these mappings are not
linear. Perhaps you would expect that $\frac{\partial y}{\partial x}
\frac{\partial x}{\partial y} = 1$, but this is in general not the case, see
Manipulating Partial Derivatives of Inverse Function.
The formulas for $f$ and $f^{-1}$ are correct, but I think there is a small error in
one of the jacobian matrices, namely in $\partial r_k / \partial q_w$. Here is the correct version (change highlighted in red):
$$
\frac {\partial r_k}{\partial q_w}(q_0) =
\frac {2 q_k} {\sqrt{1-q_w^2}}\Big[ \frac 1 {\sin(\phi/2)} - \frac{\phi\cos(\phi/2)} {\color{red}{2} \sin^2(\phi/2)}\Big]
$$
For reference, below are perhaps more concise versions of the jacobian matrices for the rotation part:
$$
\frac{\partial q}{\partial r} =
\left[\begin{array}{c}
\frac{\frac{1}{2} \cos(\phi / 2) - \alpha} {\phi^2} r r^T + \alpha I_{3\times3} \\
- \frac{\alpha}{2} r^T
\end{array}\right]_{4\times3},
$$
with $\phi = \|r\|_2$ and $\alpha = \frac{\sin(\phi / 2)}{\phi}$.
$$
\frac{\partial r}{\partial q} =
\left[\begin{array}{c}
\frac{\beta \cos(\psi) - 2}{\sin^2(\psi)} q_v
&
\beta I_{3\times3}
\end{array}\right]_{3\times4},
$$
with $\psi = \arccos(q_w)$, $\beta = \frac{2 \psi}{\sin(\psi)}$ and
$ q_v =
\left(\begin{array}{c}
q_x \\
q_y \\
q_z
\end{array}\right).
$ |
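To see the non-invertibility concretely, here is a rough NumPy sketch (my own illustration, not from the original answer). The maps `exp_map`/`log_map` follow the $f$, $f^{-1}$ formulas above with $(x,y,z,w)$ quaternion ordering, and the Jacobians are built by finite differences: the $3\times3$ product is the identity, while the $4\times4$ product is not.

```python
import numpy as np

def exp_map(r):                      # axis-angle -> quaternion (x, y, z, w)
    phi = np.linalg.norm(r)
    return np.concatenate([np.sin(phi / 2) * r / phi, [np.cos(phi / 2)]])

def log_map(q):                      # quaternion (x, y, z, w) -> axis-angle
    psi = np.arccos(q[3])
    return 2 * psi / np.sin(psi) * q[:3]

def num_jac(f, x, eps=1e-7):         # forward-difference Jacobian of f at x
    fx = np.asarray(f(x))
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.asarray(f(x + dx)) - fx) / eps
    return J

r0 = np.array([0.3, -0.2, 0.5])
q0 = exp_map(r0)
J_q = num_jac(exp_map, r0)           # dq/dr, 4 x 3
J_r = num_jac(log_map, q0)           # dr/dq, 3 x 4
print(np.round(J_r @ J_q, 5))        # identity, since log(exp(r)) = r
print(np.round(J_q @ J_r, 5))        # NOT the identity: rank 3 at most
```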
Evaluating sum of binomial coefficients | This is a special case of the result
$$\sum_{k=0}^m(-1)^k{n\choose k}=(-1)^m{n-1\choose m}$$
($0\le m\le n-1$) which can be proved by induction on $m$. |
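For the inductive step, Pascal's rule ${n-1\choose m}={n\choose m}-{n-1\choose m-1}$ gives
$$\sum_{k=0}^{m}(-1)^k{n\choose k}=(-1)^{m-1}{n-1\choose m-1}+(-1)^m{n\choose m}=(-1)^m\left[{n\choose m}-{n-1\choose m-1}\right]=(-1)^m{n-1\choose m}.$$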
Example of a non finitely generated module such that Hom doesn't preserve coproducts. | Hint: For an infinite set $I$ and a non-trivial module $X$, consider the $I$-fold coproduct $X^{(I)}$: Its covariant Hom-functor $\text{Hom}(X^{(I)},-)$ does not perserve coproducts. For the proof, look at $\text{id}\in\text{Hom}(X^{(I)}, X^{(I)})$. |
Physics Related Watt Question | Let $P$ be the average power output. The total amount of work done is $P \cdot (7.3 \,\mbox{s})$. But the total amount of work done is also the sum of these two quantities:
1. The work done to overcome friction.
2. The kinetic energy of the dragster at the end of the 400 m.
Part 1 is just force times distance, or if the force is not constant, you can use force averaged over distance. You don't say how the force was averaged, so I'll assume it's averaged over distance, since the answer to the problem is indeterminate otherwise. Then the first part of the work done is
$$(1200 \,\mbox{N}) \cdot (400 \,\mbox{m}) = 4.8 \times 10^5 \mbox{J}.$$
Part 2 is given by the kinetic energy formula,
$$\frac{1}{2} (550 \,\mbox{kg}) (110 \,\mbox{m/s})^2 = 3.3275 \times 10^6 \mbox{J}$$
Taken together, that's $3.8075 \times 10^6 \mbox{J}$. Divide by the time $(7.3 \,\mbox{s})$ and you have average power output of about $521575.3 \, \mbox{W}$.
Although I liked solving physics problems when I took first-year calculus, I share your confusion about what this problem is doing in a calculus course. The solution is just algebra, and there isn't enough information to compare the results with what you would get by integrating functions. |
Is there an analytical solution to this integral? | Let's have a look. First, binomial expansion:
$$(1 - a\sin^2(x'))^n = \sum_{k =0}^n \binom{n}{k}(-a \sin^2 x')^k$$
Second, let's expand the sine of the difference
$$\sin(x-x') = \sin x \cos x' - \cos x \sin x' $$
All together, arranging:
$$\sum_{k = 0}^n \binom{n}{k}(-a)^k \int_0^x (\sin^2 x' )^{n+1}(\sin x \cos x'- \cos x \sin x')\ \text{d}x'$$
This can be split into two terms, as you can easily see. Their integrals are rather simple (just some tricks with substitutions and integration by parts). Also consider that $\sin x $ and $\cos x$ are taken outside since you are integrating in $\text{d}x'$
$$\sin x \int_0^x (\sin ^2 x')^{n+1}\cos x' \ \text{d}x' = \frac{\sin ^{2 n+4}(x)}{2 n+3}$$
This is true provided that either the imaginary part of $x$ is zero, or the real part of $x$ lies between $0$ and $\pi$.
The other is a mess, I had to invoke Mathematica:
$$-\cos x \int_0^x (\sin ^2 x')^{n+2} \ \text{d}x' = -\frac{\pi ^{3/2} \sec (\pi n)\cos(x)}{2 \Gamma \left(-n-\frac{3}{2}\right) \Gamma (n+3)}-\cos^2 (x) \, _2F_1\left(\frac{1}{2},-n-\frac{3}{2};\frac{3}{2};\cos ^2(x)\right)$$
The same conditions as before are needed.
Thence, in the end, we got a solution by series:
$$\sum_{k = 0}^n \binom{n}{k}(-a)^k\left(\frac{\sin ^{2 n+4}(x)}{2 n+3} -\frac{\pi ^{3/2} \sec (\pi n)\cos(x)}{2 \Gamma \left(-n-\frac{3}{2}\right) \Gamma (n+3)}-\cos^2 (x) \, _2F_1\left(\frac{1}{2},-n-\frac{3}{2};\frac{3}{2};\cos ^2(x)\right)\right)$$
Where also here $_2F_1(\star)$ is the Hypergeometric Special Function.
Not an analytic solution but you can go on from here, taking the first terms of the sum or asking mathematica if the sum can be written in another way. |