Largest eigenvalue of series of real symmetric matrices | I think you can try to derive the general form of $\det(\pmb{A} - \lambda \pmb{I})$, with $\pmb{A}$ being a $3\times3$ matrix, in terms of determinants and traces.
If
$$
\pmb{A} =
\begin{pmatrix}
a & b & c \\
d & e & f \\
g & h & i
\end{pmatrix}
$$
and
$$
\pmb{X} = \begin{pmatrix}
e & f \\
h & i
\end{pmatrix},
\pmb{Y} = \begin{pmatrix}
d & f \\
g & i
\end{pmatrix},
\pmb{Z} = \begin{pmatrix}
d & e \\
g & h
\end{pmatrix}
$$
then, making use of the Laplace (cofactor) expansion along the first row (note that only the entries coming from the diagonal of $\pmb A$ pick up a $-\lambda$, so the last two minors are not simply $\pmb Y-\lambda\pmb I$ and $\pmb Z-\lambda\pmb I$):
$$
\textrm{det}(\pmb{A} - \lambda \pmb{I}) = (a - \lambda) \cdot \textrm{det}(\pmb{X} - \lambda \pmb{I}) - b \cdot \textrm{det}\begin{pmatrix} d & f \\ g & i-\lambda \end{pmatrix} + c \cdot \textrm{det}\begin{pmatrix} d & e-\lambda \\ g & h \end{pmatrix}
$$
Carrying out all the calculations, you can arrive at the following general form:
$$-\lambda^3 + \textrm{tr}(\pmb{A})\cdot\lambda^2 + \big(bd + cg - \textrm{det}(\pmb{X}) - a\cdot\textrm{tr}(\pmb{X})\big)\cdot\lambda + a\cdot\textrm{det}(\pmb{X}) - b\cdot\textrm{det}(\pmb{Y}) + c\cdot\textrm{det}(\pmb{Z})$$
Probably you can further simplify the polynomial with the substitution $\lambda = t - \frac{\textrm{tr}(\pmb{A})}{3}$ (see here)
In this way you might only need to calculate traces, determinants and simple multiplications to estimate the largest positive eigenvalue. If not enough, I hope this will at least give you some ideas on how to continue. Good luck!
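As a quick sanity check of this approach, here is a minimal Python sketch (the symmetric matrix $\pmb A$ below is made up for illustration) that assembles the characteristic polynomial of a $3\times3$ matrix from its traces and determinant and reads off the largest eigenvalue from the roots:

```python
import numpy as np

# Illustrative 3x3 symmetric matrix (not from the original question).
A = np.array([[2.0, 1.0, 0.5],
              [1.0, 3.0, 1.5],
              [0.5, 1.5, 4.0]])

trA  = np.trace(A)
trA2 = np.trace(A @ A)
detA = np.linalg.det(A)

# For a 3x3 matrix, the sum of the principal 2x2 minors is
# c2 = (tr(A)^2 - tr(A^2)) / 2, and the eigenvalues are the roots of
# t^3 - tr(A) t^2 + c2 t - det(A) = 0.
c2 = (trA**2 - trA2) / 2.0
roots = np.roots([1.0, -trA, c2, -detA])
largest = max(r.real for r in roots if abs(r.imag) < 1e-9)

print(largest, np.linalg.eigvalsh(A).max())  # the two values agree
```

Only traces, a determinant, and a cubic root-finder are needed, as described above. |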
Trigonometric ratios of a triangle | The short answer:
We use these functions because after many years of trying to do trigonometry in different ways, these are the definitions that people found convenient.
The long answer:
For most of the history of (Western) trigonometry, people did not use sine,
cosine, tangent, or the other functions that we learn today.
Instead, they had tables of chords of angles.
The chord of an angle $\alpha$ is the straight-line distance
between the two endpoints of an arc of angle $\alpha$.
That is,
$$ \mathrm{crd\ } \alpha = 2r \sin\frac\alpha2, $$
where $r$ is the radius of your reference circle.
This was the way trigonometry apparently was done for many hundreds of years.
In the 2nd century CE, the famous reference work, the Almagest by Ptolemy, had a table of chords of angles from $\frac12$ degree to $180$ degrees in $\frac12$-degree increments.
But no tables of sines, cosines, or tangents.
(And of course there were no calculators that could compute trig functions for you--the only practical way to use trig functions until a few decades ago was to read their values from tables.)
In India, the story was a little different;
Aryabhata computed tables of the sine function and the versed sine or versine
in the 6th century CE.
Eventually some clever people realized that just having tables of chords of angles (or even sines and versines) was not always the most convenient way to solve a trigonometry problem,
and they started to come up with tables of new functions that were more convenient to use, depending on what problem it was that you needed to solve.
They came up with a lot of functions that you will not see in a typical trigonometry textbook.
These new functions also were defined in terms of measurements on a circle, in particular a circle of radius $1$
called the unit circle.
(For some reason, many educators seem to like to start with the definition you have seen, $\sin \alpha = \frac ph,$ although it is only useful for angles less than a right angle; the circle-based definition is more general.)
But the idea that the circle should be a unit circle is apparently a
relatively modern one;
Ptolemy's circle had radius $r = 60$.
Eventually the published tables of trigonometric functions settled (mostly) on the sine, cosine, and tangent.
Apparently those were the most-demanded tables, or at least the publishers of tables thought they were.
So in the end it all comes down to (perceived) convenience
and usefulness. |
Finding minimum value of $\tan A+\tan B$, given $A+B=30^\circ$ | Bill's idea may be this:
consider $f(x)=\tan x$; then $f''(x)=2\sec^2 x \tan x>0$ for $x\in (0,\pi/3)$, so $f$ is convex there.
By Jensen's inequality, $$f\left(\frac{x+y}{2}\right)\le \frac{f(x)+f(y)}{2}\Rightarrow \frac{\tan A+\tan B}{2}\ge \tan\left(\frac{A+B}{2}\right)=\tan 15^\circ$$
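A brute-force numerical check of this bound (a small Python sketch):

```python
import numpy as np

# With A + B = 30 degrees and A, B > 0, the minimum of tan A + tan B
# should be 2*tan(15 degrees), attained at A = B = 15 degrees.
A = np.linspace(1e-4, np.pi/6 - 1e-4, 100001)
B = np.pi/6 - A
print((np.tan(A) + np.tan(B)).min(), 2*np.tan(np.pi/12))  # both ~0.5359
```

The sampled minimum matches $2\tan 15^\circ = 2(2-\sqrt3)$. |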
Applications of $f(x_0)=f'(x_0)$ | If I'm interpreting this correctly, I believe you are asking if there is anything special about this happening locally, i.e at a certain point $x_0$, does the fact that $f(x_0)=f'(x_0)$ mean anything? I would say no; note that by just translating the function appropriately, you can make this happen at any point. To see this, for any differentiable $f$, and any point $x_0$ define $g_{x_0}(x)=f(x)+(f'(x_0)-f(x_0))$, and this function will have that property at the point $x_0$. |
If $f(g(x))$ is $1$-$1$, then must $f$ and $g$ be $1$-$1$? | You will have to figure out what you mean. Let
\begin{align*}
f(x) &= \begin{cases}\log x & \text{if }x>0; \\
44 & \text{if }x\leq 0
\end{cases} \\
g(x) &= e^x,\qquad x \in \mathbb R
\end{align*}
Which of these is $1$-$1$? What is $f(g(x))$? |
Find the area enclosed by $2|x| + 3|y| < 6$ | Yes, of course. We have a rhombus with diagonals $4$ and $6$ and the area is $\frac{4\cdot6}{2}=12$.
We can see that it is a rhombus by the following reasoning.
Our figure is symmetric with respect to the $x$-axis and symmetric with respect to the $y$-axis.
For the vertices we need $x=0$ or $y=0$, and the rest is easy. |
if T is normal, then $\sigma(T)=\sigma_{ap}(T)$ | We show that $\sigma_{ap}(T)^c \subset \sigma(T)^c$:
Suppose $\lambda \notin \sigma_{ap}(T)$, then we want to show that $\lambda \notin \sigma(T)$. Note that $\exists c > 0$ such that
$$
\|(T-\lambda I)x\| \geq c\|x\| \quad \forall x\in H
$$
and hence $(T-\lambda I)$ is injective, so it suffices to prove that $R(T-\lambda I) = H$.
Also, $R(T-\lambda I)$ is complete (why?), and so it is closed in $H$. It suffices to show that $R(T-\lambda I)^{\perp} = \{0\}$. So choose $x \in R(T-\lambda I)^{\perp}$, then
$$
\begin{align*}
0 &= \langle x,(T-\lambda I)(T^{\ast} - \overline{\lambda} I)x \rangle \\
&= \langle x, (T^{\ast} - \overline{\lambda}I)(T-\lambda I)x\rangle \quad\text{ (since $T$ is normal)} \\
&= \langle (T-\lambda I)x, (T-\lambda I)x \rangle \geq c^2\|x\|^2
\end{align*}
$$
and hence $x=0$. Thus $R(T-\lambda I) = H$, and so $T-\lambda I$ is invertible. |
The number of ways to divide 5 people into three groups | Two add-ons to the information already given.
The factor $\frac{1}{2!}$ occurs in fact twice in your example, since we have
\begin{align*}
&(^5C_3) (^2C_\color{blue}{1})(^1C_{\color{blue}{1}})\color{blue}{\frac{1}{2!}}
+\, (^5C_\color{blue}{2})(^3C_\color{blue}{2})(^1C_1)\color{blue}{\frac{1}{2!}}\\
&\quad=10\cdot2\cdot1\cdot\frac{1}{2}+10\cdot 3\cdot 1\cdot \frac{1}{2}\\
&\quad=25
\end{align*}
We can reformulate the problem and ask for the number of ways to partition a set consisting of $5$ elements into $3$ non-empty subsets. These numbers are known as Stirling numbers of the second kind ${n\brace k}$.
Here we are looking for
\begin{align*}
{5\brace 3}=25
\end{align*}
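For cross-checking such counts, here is a minimal Python sketch of the standard recurrence ${n\brace k} = k\,{n-1\brace k} + {n-1\brace k-1}$:

```python
from functools import lru_cache

# Stirling numbers of the second kind via the standard recurrence.
@lru_cache(maxsize=None)
def stirling2(n: int, k: int) -> int:
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

print(stirling2(5, 3))  # 25, matching the count above
```

It confirms ${5\brace 3}=25$. |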
Prove $a^2\cos B\cos C+b^2\cos C\cos A+c^2\cos A\cos B\leq2S.$ | Let $H$ be the orthocenter of $ABC$. Then
$$ d(H,BC) = 2R\cos B\cos C $$
hence:
$$ \sum_{cyc} 2R a\cos B\cos C = 2S. $$
If $ABC$ is an acute triangle, then $\cos A,\cos B,\cos C>0$ and $a,b,c < 2R$, hence:
$$ \sum_{cyc} a^2\cos B\cos C < \sum_{cyc} 2R a\cos B\cos C = 2S. $$
However, if $ABC$ is an obtuse triangle, the given inequality does not hold: consider, for instance, $(a,b,c)=(\sqrt{3},1,1)$, so that $(A,B,C)=(120^\circ,30^\circ,30^\circ)$,
$$\sum_{cyc}a^2\cos B\cos C = \frac{9}{4}-\frac{\sqrt{3}}{2},\qquad 2S=\frac{\sqrt{3}}{2}.$$
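A quick numerical confirmation of this counterexample (a small Python sketch):

```python
import numpy as np

# The obtuse counterexample (a, b, c) = (sqrt(3), 1, 1), angles (120, 30, 30).
A, B, C = np.radians([120.0, 30.0, 30.0])
a, b, c = np.sqrt(3), 1.0, 1.0

lhs = a**2*np.cos(B)*np.cos(C) + b**2*np.cos(C)*np.cos(A) + c**2*np.cos(A)*np.cos(B)
S = 0.5*b*c*np.sin(A)   # area of the triangle
print(lhs, 2*S)         # ~1.384 vs ~0.866
```

The left side is about $1.384$ while $2S\approx 0.866$, so the inequality indeed fails for this obtuse triangle. |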
Suggestions for the optimal estimator in "one-shot" prediction problems? | This question brings to the surface a fact about prediction that tends to be forgotten: a prediction is "optimal" only relative to the specific objective function being optimized, and this objective function must represent the real-world situation that the "users" of the prediction are in, and how the prediction error will affect them.
The ubiquitous "mean squared error" criterion is not so much an objective function conceived to represent the case of "repeated predictions"; mainly, it reflects a situation where the costs of prediction error are quadratic. In obvious notation, minimizing $E(x-\hat x)^2$ is the same as minimizing $E\left[A(x-\hat x)^2\right], \; A>0$, which means that we implicitly assume that our error cost function is $A(x-\hat x)^2$. If, for example, the error cost function is linear, zero at zero error, but not symmetric around zero, then the optimal predictor is not the expected value; the median is. If we move away from quadratic functions, then in general we need the error cost function to be symmetric around zero and the distribution to be symmetric around its mean for the expected value to remain the optimal predictor (essentially because it then equals the median).
That said, you need to think about which error cost function best represents your situation. Does under-prediction have "equal costs" for you compared to over-prediction by the same amount? (If "prediction accuracy" is the only thing you are after, then you could argue that your error cost function is symmetric around zero, since the direction of the error is not important to you, only its magnitude.)
After that, and since the error cost function will contain the unknown quantity of the actual future value of the variable you want to predict, you need to consider how you will nevertheless end up with a predictor that can be calculated/estimated (considering the expected value is one way to do that).
I am willing to expand on this question if you provide feedback on these matters.
EXPANSION (after OP's own answer and comments).
Since we assume independent r.v.'s, maximizing the sum of probabilities is indeed equivalent to maximizing each probability separately. So
$$\max_{\hat x_i} P(|X_{i}-\hat x_{i}|\leq \epsilon_{i}) = \max_{\hat x_i}\left\{F_i(\hat x_i+\epsilon_i) - F_i(\hat x_i-\epsilon_i)\right\} =\max_{\hat x_i}\int_{\hat x_i-\epsilon_i}^{\hat x_i+\epsilon_i}f(x_i)\,dx_i $$ and the first order condition will give us
$$\hat x_i^* : f(\hat x_i^*+\epsilon_i) - f(\hat x_i^*-\epsilon_i) =0 $$
What about the 2nd-order conditions? They are
$$f'(\hat x_i^*+\epsilon_i) - f'(\hat x_i^*-\epsilon_i) <0 $$
In the nice case of a unimodal and symmetrical density, the above conditions lead to $\hat x_i^*$ being the mode.
Assume now that the distribution is unimodal but not symmetric (as are many well-known distributions of non-negative r.v's, like the Gamma family). Then can you find a point in its support that will satisfy the first order condition?
Next, assume that the distribution is symmetric but not unimodal, but has one central global mode and (being symmetric) also two other local modes to the left and to the right of the global mode, placed symmetrically; assume also that the pre-determined error includes the two local modes in the permissible set of values. Then?
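Here is a small numerical sketch of the asymmetric unimodal case (the Gamma shape-$3$ density and $\epsilon=1$ are made-up illustration values): solving the first order condition $f(\hat x+\epsilon)=f(\hat x-\epsilon)$ directly shows the optimal predictor is not the mode.

```python
from scipy import optimize, stats

# Gamma(shape=3) density: unimodal, right-skewed, mode at 2.
eps = 1.0
f = stats.gamma(a=3).pdf

# First order condition f(x + eps) = f(x - eps); sign change on [2, 3].
xhat = optimize.brentq(lambda x: f(x + eps) - f(x - eps), 2.0, 3.0)
print(xhat)  # ~2.164
```

For this density the root works out exactly to $(e+1)/(e-1)\approx 2.164$, strictly to the right of the mode $2$. |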
Function converging in both $L^1$ and $L^2$ norm | As Nate Eldredge noted: since $|f_m|\rightarrow |f|$ pointwise, $|f_m|$ is dominated by the integrable function $|f|$, and $|f_m|^2$ is dominated by the integrable function $|f|^2$, the conclusion follows from the dominated convergence theorem. |
Embedding as a subgroup | There are several ways you could attempt this, and finding the most efficient method for the types of groups you are interested in might require some experimentation. One way would be to start by finding all (conjugacy classes of) subgroups of $G$ of order dividing $|H|$ and then test them in decreasing order of size for being quotients of $H$.
My inclination would be to try first the simple-minded method of just computing all homomorphisms from $H$ to $G$. I tried that in Magma with a couple of groups and it worked very quickly. There is a Magma function ${\mathtt {Homomorphisms}}$, which computes the homomorphisms from a finitely presented group to a finite group, up to conjugacy of the image. It finds surjective homomorphisms by default, but there is an option to turn that off. Here is a randomish example with $G$ a simple group of order $20160$ and $H$ a group of order $120$.
I am sure you can do a similar computation in GAP, but I am slightly less familiar with the relevant functions, so I expect someone else could help you with that. Or you could write to the GAP forum mailing list.
> G:=PSL(3,4);
> H:=SmallGroup(120,30);
> H:=FPGroup(H);
> time homs:=Homomorphisms(H,G:Surjective:=false);
Time: 0.220
> [ Order(Kernel(h)) : h in homs ];
[ 120, 60, 60, 20, 60, 30, 30, 30, 30, 30, 30, 30, 30, 12, 12, 15, 15,
15, 15, 15, 15 ]
So in this example, computing the homomorphisms took $0.220$ seconds, and the smallest kernels have order $12$.
Here is another example with the same $G$:
> H:=SmallGroup(120,5);
> H:=FPGroup(H);
> time homs:=Homomorphisms(H,G:Surjective:=false);
Time: 0.040
> [ Order(Kernel(h)) : h in homs ];
[ 120, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2 ]
The group $H$ there was ${\rm SL}(2,5)$ and it has found homomorphisms to $A_5$.
In fact I see now that $G$ has no subgroups of order $120$, so perhaps this was a bad choice!
Added later: I have experimented some more, and I see now that the approach above is too naive and works very badly for some types of groups $G$. I tried
> G:=DirectProduct(SmallGroup(100,2),SmallGroup(100,5));
> H:=SmallGroup(100,4);
and the homomorphism computation did not finish in ten minutes. However, the alternative approach of first finding the subgroups of $G$ of order dividing $|H|$ worked fine. Here is how I did that.
> S := [s`subgroup : s in Subgroups(G : OrderDividing := Order(H)) ];
> S := Reverse(S);
> H:=FPGroup(H);
> for s in S do
homs := Homomorphisms(H,s);
if #homs ne 0 then "Found homomorphism, kernel size", #Kernel(homs[1]);
break;
end if; end for;
Found homomorphism, kernel size 25 |
Covariance Matrix in Weighted Least Square Estimation | This is a very basic and important topic.
Regarding the deterministic and stochastic least square estimation, I highly recommend the book "Linear estimation" by Thomas Kailath.
First, I think we need to clarify the problem statement. The system here should be $y=Ax+v$ where $y$ is the measurement (I prefer to use $y$ instead of $b$), and $v$ is a zero-mean random noise who is uncorrelated to $x$. In this system, $x$ is deterministic and $v$ and $y$ are stochastic.
Second, the estimator $\hat{x}=Ky$ with $K=(A^T\Sigma^{-1}A)^{-1}A^T\Sigma^{-1}$ is a linear minimum variance unbiased estimator of $x$. Since $\mathbb{E}(v)=0$, it is easy to check that $\mathbb{E}(\hat{x})=x$ and $\mathbb{E}[(x-\hat{x})(x-\hat{x})^T]=(A^T\Sigma^{-1}A)^{-1}$. The proof that the estimation variance is the minimum can be found on page 97 of "Linear estimation".
Third, I think $\mathbb{E}(ee^T)=K\Sigma K^T$ instead of $\mathbb{E}(ee^T)=\Sigma$. Hint: $KA=I$ because $\mathbb{E}(\hat{x})=KAx=x$.
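Here is a small numpy sketch (dimensions and values made up) checking both points; note that, since $KA=I$, $K\Sigma K^T$ simplifies to $(A^T\Sigma^{-1}A)^{-1}$, so the two expressions for the error covariance agree:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 2))                        # illustrative system matrix
Sigma = np.diag(rng.uniform(0.5, 2.0, size=6))     # noise covariance
Si = np.linalg.inv(Sigma)

K = np.linalg.inv(A.T @ Si @ A) @ A.T @ Si         # WLS gain; K A = I

x = np.array([1.0, -2.0])
trials, err_cov = 20000, np.zeros((2, 2))
for _ in range(trials):
    v = rng.multivariate_normal(np.zeros(6), Sigma)
    e = K @ (A @ x + v) - x                        # estimation error
    err_cov += np.outer(e, e)

print(err_cov / trials)
print(np.linalg.inv(A.T @ Si @ A))                 # matches up to sampling noise
```

The empirical error covariance matches $(A^T\Sigma^{-1}A)^{-1}$ up to Monte Carlo error. |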
Help in Modular Arithmetic | Note that $999=27\times 37$
Now Fermat-Euler Theorem tells us that $2^{18}\equiv 1 \bmod 27$ and $2^{36}\equiv 1 \bmod 37$ whence $2^{36}\equiv 1 \bmod 999$
So, for a start, we can reduce $499$ modulo $36$ before doing any of the detailed calculations.
Or notice that $2^{10}=1024\equiv 25 = 5^2\bmod 999$ and use Fermat-Euler with $5$ instead of $2$.
Or combine both approaches and notice some other things too.
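Assuming the exponent in question is $499$ (as the reduction above suggests), a Python one-liner checks the arithmetic:

```python
# 499 = 13*36 + 31, so 2^499 = (2^36)^13 * 2^31 = 2^31 (mod 999).
print(pow(2, 499, 999), pow(2, 499 % 36, 999))  # both print 281
```

Both the direct modular exponentiation and the order-$36$ reduction give $281$. |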
Every transcendental number satisfies a power series | Edit: There's a straightforward proof below for the result exactly as mentioned in the OP. However, the paper linked to cites a stronger result, specifying that the power series should define an entire function of exponential growth. I don't see how the argument below gives that; see Comments at the bottom.
Say $\alpha\in\Bbb C$, $\alpha\ne0$.
First, if $\alpha$ is real it's trivial that it's a zero of some power series with rational coefficients: If $r_0,\dots,r_n\in\Bbb Q$ have been chosen, there exists $r_{n+1}\in\Bbb Q$ with $$|r_0+\dots+r_{n+1}\alpha^{n+1}|<1/n;$$hence $$r_0+r_1\alpha+\dots=0.$$
Now say $\alpha=\rho e^{it}$, $\rho>0$, $t\in\Bbb R$. If $t/\pi$ is rational then there exists a positive integer $N$ so that $\beta=\alpha^N\in\Bbb R$; so $\beta$ is a root of some rational power series, hence so is $\alpha$.
Finally, suppose $t/\pi$ is irrational. Then $\{e^{ikt}:k=1,2\dots\}$ is dense in the unit circle. Hence for every $n$ the set $\{r\alpha^k:r\in\Bbb Q, k=n+1,n+2,\dots\}$ is dense in $\Bbb C$ (to approximate $z$ by $r\alpha^k$, first choose $k$ so as to get the argument approximately right, then choose $r$ to fix up the modulus). So as above we can recursively construct a sequence $r_j$ of rationals and a strictly increasing sequence $n_j$ of positive integers so that $$\sum r_j\alpha^{n_j}=0.$$
Comments. Now what about getting an entire function of exponential growth?
If $\alpha$ is real this is no problem: Say wlog $\alpha>1$ to keep the inequalities clean and replace the main inequality above by $$|r_0+\dots+r_{n+1}\alpha^{n+1}|<1/(n+2)!;$$it follows that $r_n=O(1/n!)$, hence the power series is an entire function of exponential growth.
And so we're done if $\alpha^N$ is real. But the case $t/\pi$ irrational is not so simple, as far as I can see. We can make $\left|\sum_{j=0}^k r_j\alpha^{n_j}\right|$ as small as we want as a function of $k$, but that doesn't help; saying for example $$\left|\sum_{j=0}^k r_j\alpha^{n_j}\right|\le1/k!!!$$ says nothing about the radius of convergence. The problem is that in order to make $\left|\sum_{j=0}^k r_j\alpha^{n_j}\right|$ small, by the trivial argument above, we may be forced to take $n_k$ large, so we don't get anything analogous to$$\left|\sum_{j=0}^k r_j\alpha^{n_j}\right|\le1/(n_k)!,$$which is what we need.
Well, the paper cited in the OP calls this an elementary result, so it can't be that hard. Probably there's a simple proof that's nothing like the one above; the argument above is after all a simple brute-force sort of thing, so surely one can do something more subtle?
Perhaps one can fix the argument above, showing that you don't need to take $n_k$ too large to make $\left|\sum_{j=0}^k r_j\alpha^{n_j}\right|$ small.
Or something I just thought of. The proof that the sum of two algebraic numbers is algebraic is fairly simple if you look at it right, but it's not a priori obvious. Maybe some extension of that argument shows that if $\alpha=x+iy$ then the existence of suitable power series for $x$ and for $y$, proved above, implies the same for $\alpha$? |
Convergence of $s_{n+1}=\sqrt{1+s_n}$ | Let $\varphi = \dfrac{1+\sqrt{5}}{2}$. Note that $L = \sqrt{1+L}$ forces $L \ge 0$, so $\varphi$ is the only fixed point of the iteration: the other root $\dfrac{1-\sqrt{5}}{2}$ of $L^2 = 1+L$ is negative, while $\sqrt{1+s_n} \ge 0$ always, so it is not a fixed point of $s_{n+1}=\sqrt{1+s_n}$. Also, $f(x)=\sqrt{1+x}$ satisfies $f'(x)=\dfrac{1}{2\sqrt{1+x}}>0$, so $f$ is increasing and the sequence is monotone from $s_2$ on. For the case $s_1 = 3$: $s_2 = 2 < 3 = s_1$, so $s_n$ is strictly decreasing and bounded below by $0$ (as $s_n > 0$ for all $n \ge 1$), hence convergent to $L = \varphi$, as claimed. In general, we need $s_1 \ge -1$ to begin with. If $s_1 > \varphi$, then $s_1^2 > 1+s_1$, so $s_2 < s_1$ and the sequence is strictly decreasing and bounded below by $0$, hence convergent to $\varphi$. If $s_1 = \varphi$, then $s_n = \varphi$ for all $n \ge 1$. Finally, if $-1 \le s_1 < \varphi$, then $s_2 = \sqrt{1+s_1} \in [0, \varphi)$; and whenever $0 \le s_n < \varphi$ we have $s_n^2 < 1+s_n$, so $s_{n+1} > s_n$, and $s_{n+1} = \sqrt{1+s_n} < \sqrt{1+\varphi} = \varphi$. Thus from $s_2$ on the sequence is strictly increasing and bounded above by $\varphi$ (which can also be shown by induction on $n$), hence convergent to $\varphi$ again. So for every initial value $s_1 \ge -1$ the sequence converges to $\varphi = \dfrac{1+\sqrt{5}}{2}$. This completes the analysis regarding the possible values of the initial term $s_1$.
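A quick numerical sketch of this behavior (plain Python; the starting values cover all the cases above):

```python
import numpy as np

# Iterate s_{n+1} = sqrt(1 + s_n) from several starting values s_1 >= -1.
for s in [-1.0, (1 - np.sqrt(5))/2, 0.0, (1 + np.sqrt(5))/2, 3.0]:
    for _ in range(100):
        s = np.sqrt(1 + s)
    print(s)
```

Every run prints $\varphi\approx 1.6180339887$, including the start at $\frac{1-\sqrt5}{2}$. |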
Second order condition for convexity | For functions $\Bbb R\to \Bbb R$ the condition $\nabla^2_xf\succcurlyeq 0$ reduces to $f''(x)\geqslant0$.
$x^3$ is in point of fact convex on $[0,\infty)$ (because $f''\geqslant 0$ there) and not convex in any larger interval (because $f''$ has some negative values there). |
Poisson and conditional probability | You're right about a) and b).
I suspect you mean $P(B_x\cap A_y)$ in b)? If you do mean $P(B_x\cup A_y)$, you'll need $P(B_x)$ from c) to calculate $P(B_x\cup A_y)=P(B_x)+P(A_y)-P(B_x\cap A_y)$.
No, the answer to c) isn't just $p^x$. Particles are observed with rate $\lambda p$, so $P(B_x)=\frac{(\lambda p)^x}{x!}\mathrm e^{-\lambda p}$. |
PDE: How to show that this function is the zero function? | Multiply both sides by $u$ and integrate over $\Omega$. Then use integration by parts and the fact that $u=0$ on $\partial \Omega$ to get $$\int\limits_{\Omega}{|\nabla u|^2dx}+\int\limits_{\Omega}{u^4dx}=0$$
From here it is clear that $u$ must be zero everywhere in $\Omega$. |
Setting up Iterated Integral to polar form | The region $D$ is a triangle with vertices $(0,0)$, $(3,3)$, and $(-3,3)$. So you should have $\frac{\pi}{4}\leq \theta \leq \frac{3\pi}{4}$. For each such $\theta$, $r$ ranges from $0$ to whatever the value of $r$ is at the line $y=3$. To solve for this $r$ in terms of $\theta$, set $3=y=r\sin(\theta)$, so $r=3 \csc \theta$. The integral should be
$$\int_{\frac{\pi}{4}}^{\frac{3\pi}{4}}\int_0^{3\csc\theta} f(r\cos\theta,r\sin\theta)\,r\,dr\,d\theta.$$ |
The set of all continuous functions on a compact group $G$ form a ring (without unit unless the group is finite)? | (I will assume $G$ is Hausdorff.)
You can't just show $\delta$ is identically zero because that is false in finite groups! But you can show the following in any compact group: if $\delta$ is a unit element then $\delta(x)=0$ for all $x \ne e$. Since zero is not the unit element, we must have $\delta(e) \ne 0$, meaning $e$ is an isolated point, meaning $G$ is discrete, meaning (by compactness) that $G$ is finite.
To show the claim, take $x \ne e$ and suppose without loss of generality $\delta(x) > 0$. Using continuity, choose $\epsilon > 0$ and a neighborhood $V$ of $e$ such that $\delta(xy^{-1}) > \epsilon$ for all $y \in V$. Using Urysohn's lemma, produce a continuous nonnegative function $f$ supported inside $V \setminus \{x\}$ with $\int f(y)\,dy = 1$. Now verify that $(\delta \ast f)(x) \ge \epsilon \ne 0 = f(x)$ so that $\delta$ is not a unit. |
fundamental group of $GL^{+}_n(\mathbb{R})$ | The Gram-Schmidt process shows that $\text{GL}_n^{+}(\mathbb{R})$ deformation retracts onto $\text{SO}(n)$. There is a natural fiber bundle
$$\text{SO}(n-1) \to \text{SO}(n) \to S^{n-1}$$
given by considering the action of $\text{SO}(n)$ on the unit sphere in $\mathbb{R}^n$, and the corresponding long exact sequence in homotopy shows that $\pi_1(\text{SO}(n)) \cong \pi_1(\text{SO}(3))$ for $n \ge 3$. But $\text{SO}(3) \cong \mathbb{RP}^3$ has fundamental group $\mathbb{Z}/2\mathbb{Z}$ (or more explicitly its double cover is $\text{SU}(2) \cong S^3$, which is simply connected), hence so does $\text{SO}(n)$ for $n \ge 3$, hence so does $\text{GL}_n^{+}(\mathbb{R})$ for $n \ge 3$. The cases $n = 1, 2$ are straightforward.
The corresponding double covers of $\text{SO}(n), n \ge 3$ are the spin groups. |
$(x+2y+z)\cdot \left( \frac{x}{y} +\frac{2y}{z}+\frac{z}{x}\right) > 12$ for $x^2+y^2+z^2=3$ | Let
$$x=u^2,\quad y=v^2,\quad z=w^2,\qquad(1)$$
then the inequality becomes
$$(u^2+2v^2+w^2)\left(\dfrac{u^2}{v^2}+\dfrac{2v^2}{w^2}+\dfrac{w^2}{u^2}\right) > 12$$
for
$$u^4+v^4+w^4=3.$$
Using Cauchy-Schwarz inequality, we have
$$(u^2+2v^2+w^2)\left(\dfrac{u^2}{v^2}+\dfrac{2v^2}{w^2}+\dfrac{w^2}{u^2}\right)\geq\left(\dfrac{u^2}v+\dfrac{2v^2}w+\dfrac{w^2}u\right)^2.\qquad (2)$$
Therefore, it suffices to find the conditional minimum of $$\dfrac{u^2}v+\dfrac{2v^2}w+\dfrac{w^2}u$$
for $u,v,w>0$ and $u^4+v^4+w^4=3.$
Apply the method of Lagrange multipliers, which reduces the problem to a system of equations.
For this, we search for the unconditional minimum of the function
$$f(u,v,w,\lambda) = \dfrac{u^2}v+\dfrac{2v^2}w+\dfrac{w^2}u + \lambda\left(u^4+v^4+w^4-3\right).$$
The necessary conditions for an extremum are
$$f'_u=0,\quad f'_v=0,\quad f'_w=0,\quad f'_\lambda=0,$$
so
$$\begin{cases}
\dfrac{2u}v-\dfrac{w^2}{u^2} + 4\lambda u^3 = 0\\
\dfrac{4v}w-\dfrac{u^2}{v^2} + 4\lambda v^3 = 0\\
\dfrac{2w}u-\dfrac{2v^2}{w^2} + 4\lambda w^3 = 0\\
u^4+v^4+w^4-3 = 0.
\end{cases}$$
Given $u,v,w>0,$
$$\begin{cases}
\dfrac{2u^2}v-\dfrac{w^2}u + 4\lambda u^4 = 0\\
\dfrac{4v^2}w-\dfrac{u^2}v + 4\lambda v^4 = 0\\
\dfrac{2w^2}u-\dfrac{2v^2}w + 4\lambda w^4 = 0\\
u^4+v^4+w^4-3 = 0.
\end{cases}$$
Now we can eliminate $\lambda:$
$$\begin{cases}
\left(\dfrac{2u^2}v-\dfrac{w^2}u\right)v^4=\left(\dfrac{4v^2}w-\dfrac{u^2}v\right)u^4\\
\left(\dfrac{2u^2}v-\dfrac{w^2}u\right)w^4=\left(\dfrac{2w^2}u-\dfrac{2v^2}w\right)u^4\\
u^4+v^4+w^4-3 = 0.
\end{cases}$$
For $u,v,w>0$
$$\begin{cases}
2u^3v^4w-v^5w^3=4u^5v^3-u^7w\\
2u^3w^5-vw^7=2u^4vw^3-2u^5v^3\\
u^4+v^4+w^4-3 = 0.
\end{cases}$$
$$\begin{cases}
u^3w(u^4+2v^4) = 4v^3u^5+v^5 w^3\\
2u^3(u^2v^3+w^5) = vw^3(2u^4+w^4)\\
u^4+v^4+w^4-3 = 0.
\end{cases}$$
The first two equations are homogeneous, and we can use the substitution
$$s=\dfrac vu,\quad t=\dfrac wu,$$
$$\begin{cases}
s^5t^3 - (2s^4+1)t + 4s^3 = 0 \\
st^7 - 2t^5 + 2st^3 - 2s^3 = 0.
\end{cases}\qquad (3)$$
This system has a unique solution in the region of interest:
$$s\approx 0.739672,\quad t\approx 1.36409.$$
Using the third equation, we obtain the extremum
$$(u_0,v_0,w_0,f_0)=(0.890923, 0.65899, 1.215299, 3.576929).$$
Note that on the boundary of the region the target function tends to $+\infty$; since there is only one extremum inside, it is the global minimum, with $f_0^2\approx 12.794424.$
Using $(1)$ and $(2)$ and taking into account the computation accuracy, this means that
$$\boxed{(x+2y+z)\left(\dfrac{x}y+\dfrac{2y}z+\dfrac{z}x\right) > 12}$$
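A numerical cross-check of the critical point of system $(3)$, and of the resulting minimum (a scipy sketch):

```python
import numpy as np
from scipy.optimize import fsolve

# System (3) in the unknowns (s, t).
def eqs(p):
    s, t = p
    return [s**5*t**3 - (2*s**4 + 1)*t + 4*s**3,
            s*t**7 - 2*t**5 + 2*s*t**3 - 2*s**3]

s, t = fsolve(eqs, [0.7, 1.4])
print(s, t)  # ~0.739672, ~1.364091

# Recover (u, v, w) from u^4 + v^4 + w^4 = 3 and evaluate the target function.
u = (3 / (1 + s**4 + t**4))**0.25
v, w = s*u, t*u
f0 = u**2/v + 2*v**2/w + w**2/u
print(f0, f0**2)  # ~3.576929 and ~12.794424 > 12
```

This reproduces the values $f_0\approx 3.576929$ and $f_0^2\approx 12.794424$ quoted above.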
Addition
Considering the unknown $s$ as a parameter, it is possible to reduce the degree of the system $(3)$ with respect to the unknown $t$. Finally, we can obtain an equation in the single unknown $s$.
In fact, instead of the second equation of the system we can take a linear combination of the first and second equations. So, it is possible to multiply the second equation of system (3) by the leading coefficient of the first equation and subtract from it the first equation multiplied by the leading coefficient of the second equation, which lowers the degree of the second equation in $t$. By repeating this step as necessary, the second equation can be brought to a lower degree in $t$ than the first.
For $s\not=0,$ we obtain
$$\begin{cases}
s^5t^3 - (2s^4+1)t + 4s^3 = 0 \\
C_{2,2}t^2 + C_{2,1}t +C_{2,0}= 0,\\
\end{cases}$$
where
$$C_{2,2} = 8s^{12}+8s^8,$$
$$C_{2,1} = -4s^{14}-16s^{11}-2s^{10}-4s^8-4s^4-1,$$
$$C_{2,0} = 2s^{17}+8s^{13}+8s^7+4s^3.$$
Similarly, it is possible to reduce the degree of the first equation using the second, obtaining the system in the form
$$\begin{cases}
C_{1,1}t + C_{1,0} = 0\\
C_{2,2}t^2 + C_{2,1}t +C_{2,0}= 0,\\
\end{cases}$$
where
$$C_{1,1} = -16s^{29} +16s^{28} +48s^{25} +16s^{24} -128s^{23} +288s^{22} +4s^{20} -256s^{19} +48s^{18} +16s^{16} -224s^{15} +24s^{14} +32s^{12} -64s^{11} +4s^{10} +24s^8 +8s^4 +1,$$
$$C_{1,0} = 256s^{22} -32s^{28} -36s^{27} -8s^{25} -128s^{24} -16s^{23} -8s^{31} -72s^{21} +384s^{18} -66s^{17} -32s^{15} +192s^{14} -16s^{13} -48s^{11} -24s^7-4s^3.$$
Again lowering the degree of the second equation, we can obtain
$$\begin{cases}
C_{1,1}t + C_{1,0} = 0\\
C_{0,0} = 0,\\
\end{cases}$$
where
$$C_{0,0} = 32s^{40} -56s^{43} -64s^{41} -4s^{47} -292s^{39} -144s^{37} +112s^{36} -944s^{35} +2112s^{34} -472s^{33} +152s^{32} -1856s^{31} +8192s^{30} -1472s^{29} +132s^{28} -10880s^{27} +14240s^{26}-2040s^{25} +176s^{24} -19616s^{23} +12384s^{22} -1216s^{21} +1300s^{20} -11056s^{19} +4260s^{18} -256s^{17} +2328s^{16} -1600s^{15} +40s^{14} +1194s^{12} -496s^{11} +4s^{10} +61s^8-64s^7+12s^4+1$$
The transformation of the system was carried out in vector form using the Mathcad program. The $polyroots()$ function allows us to find all the roots by the Laguerre method, and among them there is a unique real root
$$s_{38}\approx 0.73967226,$$
wherein
$$t_{38} = \left.-\dfrac{C_{1,0}}{C_{1,1}}\right|_{s=s_{38}} \approx 1.36409053.$$
It is easily seen that the obtained solution satisfies $(3)$.
Another modification of the method used here. |
Every finite group of congruences of the $n$-dimensional Euclidean space has a fixed point | I am assuming a "congruence" of $\mathbb{R}^n$ is an invertible affine transformation, i.e. a map of the form $v \mapsto Tv + a$ where $T \in \operatorname{GL}_n(\mathbb{R})$ and $a \in \mathbb{R}^n$. Let's denote by $\text{Aff}(\mathbb{R}^n)$ the group of all affine transformations of $\mathbb{R}^n$, and suppose $G \le \text{Aff}(\mathbb{R}^n)$ is a finite subgroup.
Consider $x:= \frac{1}{|G|} \sum_{f \in G} f(0)$. For any $g \in G$, the map $y \mapsto g(y) - g(0)$ is linear; hence, we may compute $$g(x) = g(x) - g(0) + g(0) = \frac{1}{|G|} \sum_{f \in G} (g(f(0)) - g(0)) + g(0) = \frac{1}{|G|} \sum_{f \in G} g(f(0)) = x.$$ Thus, $x$ is a fixed point of $G$. |
Is a group homomorphism a module homomorphism? | In general there is no reason for $f$ to be an $R$-module homomorphism just because it is an abelian group homomorphism. Consider complex conjugation $\bar{\cdot} : \mathbb{C} \to \mathbb{C}$. It is clearly an abelian group morphism since $\overline{z+z'} = \bar{z} + \bar{z}'$. But if you take $R = \mathbb{C}$, it's clearly not an $R$-module homomorphism, since e.g. $\overline{i \cdot 1} = - i \neq i \cdot \bar{1} = i$.
If $R = \mathbb{Z}$ then as quid mentions it would actually be true. If $n \ge 0$ is an integer then
$$f(n \cdot x) = f(x + \dots + x) = f(x) + \dots + f(x) = n \cdot f(x),$$
because $f$ is a group morphism, and then $f((-n) \cdot x) = -f(n \cdot x)$ (still because $f$ is a group morphism) thus $f((-n) \cdot x) = (-n) \cdot f(x)$. But this is a very special situation. |
Prove or disprove : there exist at most two root of $f(x)=f'(x)$. | Suppose that there exists $0 < a < b$ such that $f(a) = f'(a)$ and $f(b) = f'(b)$.
Since $f'$ is concave we have
$$\frac{f'(b) - f'(a) }{b-a} < \frac{f'(b)-f'(0)}{b-0}< \frac{f'(a)-f'(0)}{a-0}=\frac{f'(a)}{a}$$
Since $f'$ is increasing, $f$ is convex and
$$\frac{f(b)-f(a)}{b-a} > \frac{f(a)-f(0)}{a-0}= \frac{f(a)}{a}$$
Thus, we arrive at a contradiction
$$\frac{f(a)}{a} < \frac{f(b)-f(a)}{b-a} = \frac{f'(b) - f'(a) }{b-a} < \frac{f'(a)}{a} = \frac{f(a)}{a}$$
Edit: The original question included the assumption that $f'$ is concave. (The chains of inequalities above also use $f(0)=f'(0)=0$.) |
Generalizing Newton's identities: Trace formula for Schur functors | Put $m = \dim V$, and let $\{e_1,\ldots,e_m\}$ be a basis of $V$ over $\mathbb{C}$. Let $\pi : \operatorname{GL}(V) \to \operatorname{GL}(V^{\otimes n})$ and $\rho : S_n \to \operatorname{GL}(V^{\otimes n})$ denote the commuting representations of $\operatorname{GL}(V)$ and $S_n$ on $V^{\otimes n}$, respectively. In particular, for $\sigma \in S_n$ and $1 \leq i_1,\ldots,i_n \leq m$ we have
$$ \rho(\sigma)(e_{i_1} \otimes \cdots \otimes e_{i_n}) = e_{i_{\sigma^{-1}(1)}} \otimes \cdots \otimes e_{i_{\sigma^{-1}(n)}}.$$
Fix a partition $\lambda \vdash n$. The operator $P_\lambda \in \operatorname{End}(V^{\otimes n})$, given by
$$ P_\lambda = \frac{\dim M_\lambda}{n!}\sum_{\sigma\, \in\, S_n}\overline{\chi_\lambda(\sigma)}\;\rho(\sigma), \tag{1}$$
is the projection onto the $\lambda$-isotypic component of $V^{\otimes n}$ as a module over $S_n$ (cf. e.g. Cor. 4.3.11 on pp. 213-214 in Symmetry, representations, and invariants (2009) by Goodman and Wallach). By Schur-Weyl duality, the $\lambda$-isotypic component of $V^{\otimes n}$ is isomorphic to $\mathbb{S}_\lambda(V)\otimes_{\mathbb{C}} M_\lambda$, as a module over $\operatorname{GL}(V)\times S_n$. Therefore, one can relate the character of $\mathbb{S}_\lambda(V)$ to the character of $V^{\otimes n}$ as follows:
$$ \operatorname{tr}_{\mathbb{S}_\lambda(V)}\pi(A) = \frac{1}{\dim M_\lambda}\operatorname{tr}_{V^{\otimes n}}\big(\pi(A)\,P_\lambda\big). \tag{2}$$
For our purposes, it will be more convenient to rewrite $(1)$ with $\sigma$ replaced by $\sigma^{-1}$. Thus,
$$P_\lambda = \frac{\dim M_\lambda}{n!}\sum_{\sigma\, \in\, S_n}\chi_\lambda(\sigma)\;\rho(\sigma^{-1}). \tag{3}$$
In view of $(2)$ and $(3)$, in order to prove formula $(\circ)$ in the question above, we have to show that
$$\operatorname{tr}_{V^{\otimes n}}\big(\pi(A)\,\rho(\sigma^{-1})\big) = \prod_{k=1}^n \operatorname{tr}_V\big(A^k\big)^{c_k(\sigma)} \tag{4}$$
for $A \in \operatorname{GL}(V)$ and $\sigma \in S_n$. Since the semi-simple elements are everywhere dense in $\operatorname{GL}(V)$ and the characters are polynomial class functions, it suffices to consider only the case when $A$ is diagonal with respect to the chosen basis $\{e_1,\ldots,e_m\}$ of $V$. Thus, let $A(e_i) = x_ie_i$ with $x_i \in \mathbb{C}^\times$. If $\{e_1^*,\ldots,e_m^*\}$ is the dual basis of $V^*$, then we have
$$ \begin{multline}\operatorname{tr}_{V^{\otimes n}}\big(\pi(A)\,\rho(\sigma^{-1})\big) = \sum_{1\leq i_1,\ldots,i_n \leq m} \langle e_{i_1}^*\otimes \cdots \otimes e_{i_n}^*,\, A(e_{i_{\sigma(1)}})\otimes \cdots \otimes A(e_{i_{\sigma(n)}})\rangle =\\ \sum_{1\leq i_1,\ldots,i_n\leq m} \delta_{i_1,i_{\sigma(1)}}\cdots \delta_{i_n,i_{\sigma(n)}}\cdot x_{i_1}\cdots x_{i_n},\end{multline} \tag{5}$$
where $\delta_{ij}$ denotes Kronecker's symbol. Fix $\sigma \in S_n$, and let $\sigma = \zeta_1 \cdots \zeta_s$ be its decomposition into a product of disjoint cycles (including those of length $1$). For $1\leq j\leq s$, let $\ell_j$ denote the length of the cycle $\zeta_j$, and fix $p_j \in \{1,\ldots,n\}$ so that $\zeta_j = \left(p_j,\sigma(p_j),\ldots,\sigma^{\ell_j-1}(p_j)\right)$. The coefficient in front of $x_{i_1}\cdots x_{i_n}$ in $(5)$ is non-zero if and only if $i_q = i_{\sigma(q)}$ for all $1 \leq q \leq n$, i.e. if and only if $i_{\sigma^k(p_j)} = i_{p_j}$ for all $1\leq j\leq s$, $0< k <\ell_j$. Thus, the non-zero terms of $(5)$ are of the form
$$ x_{i_1} \cdots x_{i_n} = \prod_{j=1}^s \left(\prod_{k=0}^{\ell_j-1}x_{i_{\sigma^k(p_j)}}\right) = x_{i_{p_1}}^{\ell_1}\cdots x_{i_{p_s}}^{\ell_s}.$$
This means that in $(5)$, the values of the indices $i_{p_1},\ldots,i_{p_s}$ can be chosen arbitrarily and independently of each other in $\{1,\ldots,m\}$, while the values of the remaining indices are determined by those of the former ones, according to the cycle structure of $\sigma$. Hence, putting $r_j = i_{p_j}$ for $1\leq j \leq s$, we obtain
$$ \operatorname{tr}_{V^{\otimes n}}\big(\pi(A)\,\rho(\sigma^{-1})\big) = \sum_{1\leq r_1,\ldots,r_s\leq m} x_{r_1}^{\ell_1} \cdots x_{r_s}^{\ell_s}.$$
The right hand side of the above formula is nothing but
$$ \prod_{j=1}^s\left(\sum_{r=1}^m x_r^{\ell_j}\right) = \prod_{j=1}^s \operatorname{tr}_V\left(A^{\ell_j}\right).$$
Grouping together the contributions from cycles of the same length in this product, we obtain the right hand side of $(4)$.
The idea for this calculation was taken from http://mathnt.mat.jhu.edu/zelditch/LargeNseminar/archive/SchurWeyl.pdf, slides 53-54.
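As a sanity check of formula $(\circ)$ in the smallest cases $\lambda=(2)$ and $\lambda=(1,1)$, where it reduces to the classical identities $\operatorname{tr}\operatorname{Sym}^2(A) = \frac12\big(\operatorname{tr}(A)^2+\operatorname{tr}(A^2)\big)$ and $\operatorname{tr}\Lambda^2(A) = \frac12\big(\operatorname{tr}(A)^2-\operatorname{tr}(A^2)\big)$, here is a small numpy sketch:

```python
import numpy as np
from itertools import combinations, combinations_with_replacement

rng = np.random.default_rng(1)
x = rng.uniform(0.5, 2.0, size=4)   # eigenvalues of a diagonal A (diagonal suffices)
t1, t2 = x.sum(), (x**2).sum()      # tr(A) and tr(A^2)

# Sym^2(A) has eigenvalues x_i x_j for i <= j; Lambda^2(A) for i < j.
sym2 = sum(x[i]*x[j] for i, j in combinations_with_replacement(range(4), 2))
ext2 = sum(x[i]*x[j] for i, j in combinations(range(4), 2))

print(sym2, (t1**2 + t2)/2)   # agree
print(ext2, (t1**2 - t2)/2)   # agree
```

Both pairs of printed numbers agree, as the cycle-type formula predicts for $n=2$. |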
Are compact subsets of the topologist's sine circle, equipped with the Hausdorff distance, path connected? | $\mathcal{K}(C)$ is path connected.
Let $d$ be the usual Euclidean metric on $C$ and let $d_H$ denote the corresponding Hausdorff distance on $\mathcal{K}(C)$. Let $K_\epsilon = \{x : d(x,K) \le \epsilon\}$ denote the closed $\epsilon$-fattening of a compact set $K$, and recall that if $K' \subset K_\epsilon$ and $K \subset K'_\epsilon$ then $d_H(K, K') \le \epsilon$.
Define $f : (0,1] \to C$ by $f(t) = (t, \sin(1/t))$ for $0 < t \le 1/2$, and for $1/2 \le t \le 1$ let it cover the rest of $C$ in any continuous manner (it won't be injective but that's okay). Let $x_1 = f(1)$.
Let $K \subset C$ be a nonempty compact set.
First, let us show there is a continuous path from $K$ to $K \cup \{x_1\}$. Choose any $t_0$ with $f(t_0) \in K$, and define $\sigma(t) = K \cup \{f(t)\}$ for $t_0 \le t \le 1$. Clearly $\sigma(t_0) = K$ and $\sigma(1) = K \cup \{x_1\}$, and the continuity of $\sigma$ follows immediately from continuity of $f$.
Now let $\gamma(t) = K \cup f([t,1])$ for $0 < t \le 1$, and set $\gamma(0) = C$. Clearly $\gamma(t)$ is compact for each $t$, and $\gamma(1) = K \cup \{x_1\}$. I claim $\gamma$ is continuous.
To show it is continuous at 0, fix $\epsilon >0$ and assume without loss of generality that $\epsilon < 1/2$. I claim that if $t < \epsilon$ we have $\gamma(t)_\epsilon = C$ and thus $d_H(\gamma(t), \gamma(0)) = d_H(\gamma(t), C) \le \epsilon$. For if $p \in C \setminus \gamma(t)$ then $p = f(s) = (s, \sin(1/s))$ for some $0 < s < t < \epsilon$. Now the point $q = (0, \sin(1/s))$ is contained in $f([1/2,1])$ by assumption, hence contained in $\gamma(t)$, and $d(p,q) = s < \epsilon$, so $p \in \gamma(t)_\epsilon$.
Now fix any $t_0 > 0$; we will show $\gamma$ is continuous at $t_0$. By continuity of $f$ we can find $\delta$ such that if $|t-t_0| < \delta$ then $d(f(t), f(t_0)) < \epsilon$. Fix such a $t$ and suppose that $t \ge t_0$. We have $\gamma(t) \subset \gamma(t_0) \subset \gamma(t_0)_{\epsilon}$ by construction. If $p \in \gamma(t_0) \setminus \gamma(t)$ then $p = f(s)$ for some $s \in [t_0, t]$. This means $|s-t| < \delta$, so letting $q = f(t)$, we have $d(p,q) = d(f(s), f(t)) < \epsilon$. Hence $q \in \gamma(t)_\epsilon$ so $\gamma(t_0) \subset \gamma(t)_\epsilon$, and we have shown $d_H(\gamma(t), \gamma(t_0)) < \epsilon$. By symmetry, the same holds if $t \le t_0$.
Here is an animation to help visualize this path.
https://youtu.be/WzjJFCj2whM |
What does it mean that the probability is an integral in continuous probability? | Note that the two methods result in the same number.
$$\int_{260}^{360} f(x)\,dx=$$
$$\int_{260}^{360} \frac {1}{360}\,dx =$$
$$\frac {1}{360}x\bigg|_{260}^{360}= $$
$$\frac {1}{360}(360-260) =$$
$$\frac {100}{360}= \frac {5}{18}$$ |
Projector matrices and properties | What you need is that your $P$ has no nilpotent part. That is, its Jordan form should be diagonal. In terms of blocks, your $P$ will be of the form $SXS^{-1}$ with $S$ invertible and
$$
X=\begin{bmatrix}Y&0\\0&0\end{bmatrix},
$$
with $Y\in\mathbb C^{m\times m}$, $m\leq n$.
At its simplest form, for instance, here is an example of such $P$:
$$
P=\begin{bmatrix}1&0&0\\ 0&2&0\\0&0&0\end{bmatrix}.
$$
The column space is the span of the first two vectors in the canonical basis, while the kernel is the span of the third vector in the canonical basis. |
How to prove divergence of this series | Hint for showing series divergence:
Define $a_n = n^{1/n}-1$. Then $\ln n = n \ln(1+a_n) \leqslant na_n$, and
$$\frac{\ln n}{n} \leqslant a_n= n^{1/n} -1.$$
Apply the binomial theorem to show that the sequence $(n^{1/n}-1)^n$ converges to $0$ (without using L'Hospital's rule).
Note that for $n \geqslant 2$,
$$n^{1/n} = 1 + a_n \geqslant 1 \implies n = (1 + a_n)^n \geqslant \frac{n(n-1)}{2}a_n^2 \implies a_n^2 \leqslant \frac{2}{n-1}.$$
Hence,
$$0 \leqslant a_n \leqslant \frac{2^{1/2}}{(n-1)^{1/2}},$$
and for $n \geqslant 4$
$$0 \leqslant (n^{1/n}-1)^n \leqslant \frac{2^{n/2}}{(n-1)^{n/2}} < \left(\frac{2}{3}\right)^{n/2}.$$
Taking the limit and applying the squeeze principle we find $(n^{1/n}-1)^n \to 0$. |
Bernoulli String of Luck Question | You seem to be using a wrong notion of $\limsup$. Usually one has
$$\limsup_{n \to \infty} A^l_n = \bigcap_{n = 1}^{\infty} \bigcup_{m=n}^{\infty} A^l_m$$
According to this definition, $\limsup A^l_n$ is the set of elementary events that occur in infinitely many $A^l_n$. In this exercise it would be the set of runs that contain infinitely many strings of luck of length $l$.
Hence,
$$A^1 = \{ X_n = 1 \text{ for infinitely many } n \in \mathbb{N}\}$$
and
$$\mathbb{P}(A^1) = 1.$$
EDIT: Maybe I should explain in detail how one can compute $\mathbb{P}(A^1)=1$. First off, I use continuity from above of $\mathbb{P}$, i.e. for sets $B_n$ such that $B_{n+1}\subseteq B_n$ we have
$$\mathbb{P}(\bigcap_{n=1}^{\infty} B_n) = \lim_{n \to \infty} \mathbb{P}(B_n).$$
If we set $B_n:= \bigcup_{m=n}^{\infty} A_m^1$ we get $B_n = (A_n^1 \cup B_{n+1}) \supseteq B_{n+1}$ and hence
$$\mathbb{P}(A^1) = \mathbb{P}( \bigcap_{n=1}^{\infty} B_n ) = \lim_{n \to \infty} \mathbb{P}(B_n).$$
Now let's look at $\mathbb{P}(B_n)$. We have $A_m^1 = \{X_m = 1\}$ and
\begin{align*}
\mathbb{P}(B_n) &= \mathbb{P}(\bigcup_{m=n}^{\infty} A_m^1 ) = \mathbb{P}(\{ X_m = 1 \text{ for some } m \geq n\})\\
& = 1 - \mathbb{P}(\{X_m = 0 \text{ for all } m \geq n\}).
\end{align*}
We can use either intuition or a similar argument as before to see that $$\mathbb{P}(\{X_m = 0 \text{ for all } m \geq n\}) = 0$$
for all $n$. But this means that $\mathbb{P}(B_n) =1$ for all $n$ and hence
$$\mathbb{P}(A^1) = \lim_{n \to \infty} \mathbb{P}(B_n) = 1.$$
I hope this makes it a little bit clearer for you.
EDIT #2: My hint regarding the Borel-Cantelli Lemma was aimed at a statement that is often regarded as the lemma's 2nd statement:
If $(E_n)$ is a sequence of pairwise independent events such that $\sum_{n=1}^{\infty} \mathbb{P}(E_n) = \infty$, then
$$\mathbb{P}(\limsup_n E_n) = 1$$ |
How many ways to seat 4 couple and 2 single around a round table | Edit: Sorry for the confusion!
Viewing each couple as a unit, you're arranging six units around a round table, for which there are $5! = 120$ ways, since we can fix one of the singles to a chair. Then, you can change the order within each couple, so there should be $120 \times 2^4 = 1920$ ways. |
Transform into product: $\sin (3x)- \cos x - \sin x$ | I totally agree with the user @Somos, because
$$\sin(3x)-\cos(x)-\sin(x) = \sin(3x)-(\cos(x)+\sin(x))$$
But
$$\cos(x)+\sin(x)=\sqrt2 \sin (x+\varphi)$$
with $\varphi=\pi/4$ (auxiliary angle method).
Hence
$$\sin(3x)-\cos(x)-\sin(x) = \sin(3x)-\sqrt2 \sin (x+\varphi)$$
and you cannot apply the prosthaphaeresis (sum-to-product) formulas because of the factor $\sqrt 2$. |
Understanding the definition of a tensor product of chain complexes of abelian groups | I believe you have some notation issues.
It should be $g_{n,p}(x,y) = (\partial^C_p (x) \otimes y)+ (-1)^p ( x \otimes \partial^D_{n-p}(y))$.
The first term of this lives in $C_{p-1}\otimes D_{n-p}$ and the second term lives in $C_{p} \otimes D_{n-p-1}$, so this sum should be thought of as happening in the giant direct sum $(C \otimes D)_{n-1}$ (but note that it lands in two pieces of it, rather than a single one as you described in the question).
To check that the composition is zero, it's enough to do it on homogeneous pieces. The induced map isn't a problem: it's still exactly what you think it should be (i.e. $\partial_{n,p}(x\otimes y) = (\partial^C_p (x) \otimes y)+ (-1)^p ( x \otimes \partial^D_{n-p}(y))$ on homogeneous pieces, extended by linearity). When you do the double composition, you'll get four terms: one dies by virtue of being a composition of $C$'s differentials, another dies by similar reasoning for $D$'s differentials, and the final two cancel out thanks to the clever choice of sign in $g_{n,p}$.
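Spelling out the sign check on a homogeneous element $x\otimes y \in C_p\otimes D_{n-p}$ (writing $\partial$ for all the differentials):
$$\begin{align*}
\partial\big(\partial x\otimes y + (-1)^p\, x\otimes \partial y\big) &= \partial^2 x\otimes y + (-1)^{p-1}\,\partial x\otimes\partial y + (-1)^p\,\partial x\otimes\partial y + x\otimes\partial^2 y\\
&= \big((-1)^{p-1}+(-1)^{p}\big)\,\partial x\otimes\partial y = 0,
\end{align*}$$
since $\partial x$ lives in degree $p-1$ (hence the $(-1)^{p-1}$ sign) and $\partial^2=0$ in both $C$ and $D$. |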
Upper and lower bound in distributive lattice | Let $\rho : L \to \mathbb N$ be the rank function of $L$.
Then the interval $[a,b]$ has finite length, given by $\rho(b) - \rho(a)$.
Since $[a,b]$ is a sublattice of $L$, it is distributive.
A distributive lattice is finite iff it has finite length.
Notice that the elements which cover $a$, in $L$, are the atoms of $[a,b]$;
analogously the elements covered by $b$ which are above $a$ are co-atoms of $[a,b]$.
So we're left with the task of proving that in a finite distributive lattice, if the join of the atoms is $1$, then the meet of the co-atoms is $0$.
That just follows from the fact that if the join of the atoms is $1$, then those are the only join-irreducible elements of the lattice, which is then Boolean, and it is clear that in a Boolean lattice the meet of the co-atoms is $0$. |
Convergent sequence in Lp has a subsequence bounded by another Lp function | Since $\|f_n-f\|_p\to 0$, for every $k\in \Bbb N$, there exists $n_k\in \Bbb N$, such that $\|f_{n_k}-f\|_p\le \frac{1}{2^k}$. Let
$$g:=|f|+\sum_{k=1}^\infty |f_{n_k}-f|.$$
By definition, $g$ is measurable, and $g\ge |f|+|f_{n_k}-f|\ge|f_{n_k}|$ for every $k\in \Bbb N$ . Moreover, by Minkowski's inequality,
$$\|g\|_p\le \|f\|_p+\sum_{k=1}^\infty \|f_{n_k}-f\|_p\le \|f\|_p+1.$$ |
Compute the homology of the CW complex directly from the cell structure | Let $X$ be a CW-complex whose cellular chain complex $C_{\bullet}(X)$ has all zero differentials. Note that this necessarily occurs when $X$ does not have cells in consecutive dimensions, e.g. $S^n$ for $n \geq 2$, $\mathbb{C} \mathbb{P}^n$ for $n \geq 1$. It also occurs for $S^1$. Then $C_n(X) \cong H_n(X,\mathbb{Z})$ for all $n$. Such a complex must be "minimal" in your sense: if you had fewer cells in any given dimension that would give rise to a smaller Betti number.
It follows from the Eilenberg-Zilber Theorem that a finite product of CW-complexes with all zero differentials has all zero differentials. This shows for instance that the $n$-dimensional torus has this property for all $n \in \mathbb{Z}^+$. In particular this explains all of your examples.
Conversely, let $X$ be a finite CW-complex whose $n$th Betti number $b_n$ is equal to the number of $n$-cells for all $n$. Let $d_n$ be the $n$th differential in the cellular chain complex.
Let $K_n$ be the kernel of $d_n$ and $I_{n}$ be the image of $d_{n+1}$. Then
$K_n \subset C_n(X) \cong \mathbb{Z}^{b_n}$, so $K_n \cong \mathbb{Z}^d$ for some $d \leq b_n$. Further $K_n/I_n \cong H_n(X) \cong \mathbb{Z}^{b_n} \oplus T$ for a finite abelian group $T$, so $\mathbb{Z}^{b_n} \oplus T$ is a homomorphic image of $\mathbb{Z}^d$. We must then have $d = b_n$ and $T = 0$, so the map $K_n \rightarrow K_n/I_n$ is a surjective map from $\mathbb{Z}^{b_n}$ to itself. Such a map is known to be an isomorphism, e.g. by the structure theory of finitely generated modules over a PID. In other words $I_n = 0$ and all the differentials are trivial. In particular all of the homology groups are free abelian. |
Is the distribution of the orders of the cyclic groups generated by elements of $S_n$ known? | The question can be factored into two questions: what is the joint distribution of the cycle lengths of a random permutation, and what is the lcm of all cycle lengths? The second question is an annoyingly delicate arithmetic question (e.g. the lcm depends delicately on whether a large cycle happens to have prime length) so I am tempted to ignore it to concentrate on the first question.
Here is what I know about cycle lengths of random permutations. First, it's a nice exercise to show that the expected number of cycles of length $k \le n$ in a permutation of $n$ elements is $\frac{1}{k}$, and hence the total expected number of cycles is
$$H_n = \sum_{k=1}^n \frac{1}{k} \approx \log n.$$
When $k \ll n$ one can make much stronger statements: as $n \to \infty$, the number of cycles of length $k \ll n$ is asymptotically Poisson with parameter $\frac{1}{k}$. Moreover, for various $k \ll n$ these Poisson random variables are asymptotically jointly independent. See this blog post for some details.
Unfortunately, I don't know much about large $k$.
Edit: An upper bound for the lcm of the cycle lengths is their product, and we can say something about that. The product is just $\prod k^{c_k}$ where $c_k$ is the number of cycles of length $k$, so the logarithm of the product is
$$\sum_{k=1}^n c_k \log k.$$
If we assume that the $c_k$ are, for all $k$, jointly Poisson with parameter $\frac{1}{k}$, then we can compute the moments of this logarithm. For example, its expected value is
$$\sum_{k=1}^n \frac{\log k}{k} \approx \frac{(\log n)^2}{2}.$$
Plugging in $n = 52$ gives that the exponential of the expected value of the logarithm of the product is about $2400$, which is at least within an order of magnitude of what you got.
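For the lcm itself one can at least simulate. Here is a Monte Carlo sketch in Python estimating $E[\log \operatorname{lcm}(\text{cycle lengths})]$ for a $52$-card deck, to compare with the product-based value $(\log 52)^2/2 \approx 7.8$:

```python
import math
import random
from functools import reduce

def cycle_lengths(perm):
    # Decompose a permutation (given as a list) into cycle lengths.
    seen, out = set(), []
    for i in range(len(perm)):
        if i not in seen:
            n, j = 0, i
            while j not in seen:
                seen.add(j); j = perm[j]; n += 1
            out.append(n)
    return out

trials, total = 20000, 0.0
for _ in range(trials):
    p = list(range(52)); random.shuffle(p)
    total += math.log(reduce(math.lcm, cycle_lengths(p)))
print(total / trials)
```

Since the lcm never exceeds the product, the printed estimate comes out below $7.8$; the gap quantifies the delicate arithmetic losses mentioned above. |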
Irreducibility of $f(x)=x^n-p^m$ | Hats off to Lord Shark the Unknown for providing the key that unlocks the puzzle-door on this one; taking a tip from his comment on the question, I have tried below to flesh out an answer. I wanted to see how things work out in detail, so . . .
First of all, let's consider the case
$\gcd(n, m) = d > 1; \tag 1$
then since
$d \mid m, \; d \mid n, \tag 2$
we may write
$n = n_1 d, \; m = m_1 d, \tag 3$
and we have
$X^n - p^m = X^{n_1 d} - p^{m_1 d} = (X^{n_1})^d - (p^{m_1})^d$
$= (X^{n_1} - p^{m_1}) \displaystyle \sum_0^{d - 1} (X^{n_1})^{d - 1 - i} (p^{m_1})^i, \tag4$
which demonstrates that $X^n - p^m$ is in fact reducible in the event that $\gcd(n, m) > 1$; thus the condition $\gcd(n, m) = 1$ is not in fact inconsequential, insofar as it distinguishes between the reducible and irreducible cases of $X^n - p^m$, provided of course the question of the problem is answerable in the affirmative, which we shall indeed herein establish.
We begin with $m = 1$; here we trivially have $\gcd(n, m) = 1$, and the Eisenstein criterion with prime $p$ directly applies to show $X^n - p$ is irreducible over $\Bbb Q$. However, Eisenstein does not apply to $X^n - p^m$ when $m \ge 2$; therefore we seek another line of analysis.
We consider the field extension of $\Bbb Q$ formed by adjoining $\sqrt [n] p$; that is, the field $\Bbb Q(\sqrt [n] p)$; since
$X^n - p$ is irreducible over $\Bbb Q$, we have from elementary considerations that
$\Bbb Q(\sqrt[n] p) \simeq \Bbb Q[X] / \langle X^n - p \rangle, \tag 5$
where
$\langle X^n - p \rangle = (X^n - p) \Bbb Q[X] \tag 6$
is the principal ideal generated by $X^n - p$ in $\Bbb Q[X]$. Furthermore it follows from (5) that
$[\Bbb Q(\sqrt[n] p): \Bbb Q] = n; \tag 7$
that is, the dimension of $\Bbb Q(\sqrt[n] p)$ as a vector space over $\Bbb Q$ is $n$.
We next observe that
$\sqrt[n]{p^m} = (\sqrt[n] p)^m \in \Bbb Q(\sqrt[n] p), \tag 8$
from which it is seen that
$\Bbb Q(\sqrt[n] {p^m}) \subset \Bbb Q(\sqrt[n] p); \tag 9$
we now show that under the hypothesis
$\gcd(n, m) = 1 \tag{10}$
we also have
$\sqrt[n] p \in \Bbb Q(\sqrt[n]{p^m}); \tag{11}$
for, as is well-known, (10) implies that there are $a, b \in \Bbb Z$ such that
$an + bm = 1; \tag{12}$
then
$\sqrt[n] p = (\sqrt[n] p)^1 = (\sqrt[n] p)^{an + bm} = (\sqrt[n] p)^{an} (\sqrt[n] p)^{bm}$
$= ((\sqrt[n] p)^n)^a ((\sqrt[n] p)^m)^b = p^a (\sqrt[n] {p^m})^b \in \Bbb Q(\sqrt[n]{p^m}); \tag{13}$
since, then
$\sqrt[n] p \in \Bbb Q(\sqrt[n]{p^m}), \tag{14}$
it follows that
$\Bbb Q(\sqrt[n] p) \subset \Bbb Q(\sqrt[n]{p^m}), \tag{15}$
and thus, via (9),
$\Bbb Q(\sqrt[n] p) = \Bbb Q(\sqrt[n]{p^m}), \tag{16}$
and now from (7) we see that
$[\Bbb Q(\sqrt[n]{p^m}): \Bbb Q] = n \tag{17}$
as well.
Now if $X^n - p^m$, $m > 1$, were reducible over $\Bbb Q$, we could write
$X^n - p^m = r(X) s(X), \tag{18}$
with
$r(X), s(X) \in \Bbb Q[X], 0 < \deg r, \deg s < n, \tag{19}$
then
$r(\sqrt[n]{p^m}) s(\sqrt[n]{p^m}) = (\sqrt[n]{p^m})^n - p^m = p^m - p^m = 0, \tag{20}$
so that at least one of
$r(\sqrt[n]{p^m}), s(\sqrt[n]{p^m}) = 0; \tag{21}$
then in the light of (19), we may conclude that
$[\Bbb Q(\sqrt[n]{p^m}): \Bbb Q] < n, \tag{22}$
since $\sqrt[n]{p^m}$ satisfies a polynomial of degree less than $n$. This of course stands in contradiction to (17);
therefore,
$X^n - p^m \tag{23}$
is irreducible over $\Bbb Q$.
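A quick computational illustration of both cases with sympy (here $p=5$; $n$ and $m$ are chosen arbitrarily):

```python
import sympy as sp

x = sp.symbols('x')
p = 5

print(sp.factor(x**7 - p**3))   # gcd(7, 3) = 1: stays x**7 - 125, irreducible
print(sp.factor(x**6 - p**4))   # gcd(6, 4) = 2: splits as (x**3 - 25)*(x**3 + 25)
```

The second factorization is exactly the pattern of $(4)$ with $d=2$. |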
combinatorics -- assigning jobs to children when jobs matter | There are $3$ options for each job and $10$ jobs. The answer is thus $3^{10}$ |
How to prove the following inequality? | The end points of the chord are $(x_1,f(x_1))$ and $(x_2,f(x_2))$. The coordinates of the midpoint of the chord are $\left[\frac{x_1+x_2}{2}, \frac{f(x_1)+f(x_2)}{2}\right]$.
Since the chord is above the curve then $$\frac{f(x_1)+f(x_2)}{2} \ge f\left(\frac{x_1+x_2}{2}\right).$$ |
Changing the order of summation in a triple sum | Let us examine the set over which the summation is taken. The values of $i$ and $j$ are independent of each other, but both depend on $k$.
\begin{align}
1\leq& k\leq N\\
1\leq &i\leq k\\
1\leq &j\leq k
\end{align}
Next, we change the bounds on $k,i,j$ such that $k$ is bounded in terms of $i$ and $j$, and bounds on $i$ and $j$ are independent of $k$. It is easy to see that $1\leq i,j\leq N$. Further, we have
\begin{align}
1\leq &i\leq k \text{ and } 1\leq k\leq N \implies i\leq k\leq N\\
1\leq &j\leq k \text{ and } 1\leq k\leq N \implies j\leq k\leq N.
\end{align}
Thus, we get
\begin{equation}
\max\{i,j\}\leq k\leq N.
\end{equation}
Therefore, the result follows. If $i$ and $j$ were dependent, we would have to do this cyclically, i.e., first find a bound on $i$ which is independent of $j$ and $k$, then bound $j$ in terms of $i$, and finally bound $k$ using $i$ and $j$.
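A brute-force check of this reindexing in Python (arbitrary summand, small $N$):

```python
# Both orders of summation over the same index set must agree.
N = 7
a = lambda i, j, k: (i + 2*j)**2 * k   # arbitrary test summand

lhs = sum(a(i, j, k) for k in range(1, N+1)
                     for i in range(1, k+1)
                     for j in range(1, k+1))
rhs = sum(a(i, j, k) for i in range(1, N+1)
                     for j in range(1, N+1)
                     for k in range(max(i, j), N+1))
print(lhs == rhs)  # True
```

It prints True, confirming the bound $\max\{i,j\}\leq k\leq N$. |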
Grammar Mistakes in Math Writing | A typical statement concerning finite fields would be statement $B$:
Consider two elements $\alpha$ and $\beta$ in $\Bbb F_q$.
Here the two elements are arbitrary. With an article "the two elements" they would be specific elements, such as $0$ and $1$ for example. Usually such a specification would then be given. "Let $\alpha$ and $\beta$ be the elements in $\Bbb F_q$ given by the sum of all squares respectively of all cubes." |
How to solve equation $ x=W(a+bx^{n})+1 $? | I don't think it will be easy to solve. Since $x = W(y)$ if and only if $y \ge - \dfrac 1e$ and $y = xe^{x}$, you will have to solve $$a+bx^n = (x-1)e^{x-1},\quad a + bx^n \ge - \dfrac 1e.$$
Are you looking for an exact solution? |
How do I find the time elapsed from a given point in a parabolic trajectory until the impact to the ground? | In this case you can split the motion: by time symmetry, the rise from $H/2$ up to the apex at $H$ takes the same time as a drop from $H$ to $H/2$, so you can see the problem as a drop from $H$ to $0$ plus a drop from $H$ to $H/2$.
So $$t=\sqrt{\frac{2H}{g}}+\sqrt{\frac{H}{g}}.$$ |
Need help with simplifying a radical expression | Noting that $$\sqrt{5+2\sqrt 6}=\sqrt{(2+3)+2\sqrt{2\times 3}}=\sqrt{(\sqrt 2+\sqrt 3)^2}=\sqrt 2+\sqrt 3,$$
we have
$$\begin{align}(\sqrt 2+\sqrt 3)(49-20\sqrt 6)&=49\sqrt 2-20\cdot 2\sqrt 3+49\sqrt 3-20\cdot 3\sqrt 2\\&=-11\sqrt 2+9\sqrt 3.\end{align}$$
Hence, we have
$$\begin{align}(-11\sqrt 2+9\sqrt 3)(9\sqrt 3+11\sqrt 2)&=(9\sqrt 3-11\sqrt 2)(9\sqrt 3+11\sqrt 2)\\&=(9\sqrt 3)^2-(11\sqrt 2)^2\\&=243-242\\&=1.\end{align}$$
Hence, $$\sqrt{(5+2\sqrt 6)}\ (49-20\sqrt 6)(9\sqrt 3+11\sqrt 2)=1.$$
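A one-line numerical confirmation in Python:

```python
from math import sqrt

print(sqrt(5 + 2*sqrt(6)) * (49 - 20*sqrt(6)) * (9*sqrt(3) + 11*sqrt(2)))
```

It prints $1.0$ up to floating-point rounding. |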
Differentiability of the sum of the series $\sum_k \sin(kx)/k^2$ | To show differentiability, show that $\displaystyle\sum\frac{\cos(kx)}{k}$ converges uniformly on any compact subset of $(0,1)$. |
Using approximation to find a value of theta | Define
$$f(\theta)=\sum_{i=1}^{N}\frac{e^{\alpha_i(\theta-\beta_i)}}{1+e^{\alpha_i(\theta-\beta_i)}}-60$$
You want to find a $\theta$ for which $f(\theta)=0$. This is a common task, called root finding. There are numerous ways to achieve it, and Python (scipy) has a built-in function for exactly this: fsolve.
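A minimal sketch of that approach, with made-up $\alpha_i, \beta_i$ standing in for the real item parameters:

```python
import numpy as np
from scipy.optimize import fsolve

rng = np.random.default_rng(0)
N = 100
alpha = rng.uniform(0.5, 2.0, size=N)   # illustrative values only
beta = rng.normal(0.0, 1.0, size=N)

def f(theta):
    z = alpha * (theta - beta)
    return np.sum(1.0 / (1.0 + np.exp(-z))) - 60.0   # same logistic summand

theta0 = fsolve(f, x0=0.0)[0]
print(theta0, f(theta0))   # root and a residual ~0
```

fsolve returns the $\theta$ at which the sum of the $N$ logistic terms equals $60$. |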
Understanding the definition of compactness. | The definition is that EVERY open cover has a finite subcover. Those open covers which have $X$ itself as an element can be handled in that way, but there are many other open covers that do not contain $X$! If for all those covers we can find finite subcovers, then that set is called a compact set. |
Maximizing compound interest with fee per compound | Presuming that both $p$ and $t$ are measured in years, the number of compoundings per year is $\frac 1p$, and the interest rate for each period is the yearly interest rate divided by the number of compounding per year, or $r_p = \dfrac r{\frac 1p} = pr$.
At the end of the first period, an amount equal to $Ppr - x$ is added to the account (where $r$ is expressed as a fraction out of $1$, not as a percent), totalling
$$P_1 = P + Ppr - x = P(1+pr) - x$$
To make this a little easier, let $a = (1+pr)$, so $P_1 = Pa - x$.
At the end of the second period, the calculation is the same, except the beginning principal $P$ is replaced by the amount $P_1$ in the account at the beginning of the period.
$$\begin{align}P_2 &= P_1a - x\\&= (Pa - x)a - x\\&= Pa^2 - xa - x\\&=Pa^2 - x(a +1)\end{align}$$
Continuing we see that
$$P_3 = Pa^3 - x(a^2 + a + 1)\\P_4 = Pa^4 - x(a^3 + a^2 + a + 1)\\\vdots$$
Now this sum of powers is well known. If you multiply it by $a-1$ you get
$$\begin{align}(a-1)(a^{n-1} + a^{n-2} + \cdots + a + 1) &= a(a^{n-1} + a^{n-2} + \cdots + a + 1)\\&\qquad-(a^{n-1} + a^{n-2} + \cdots + a + 1)\\&=(a^n + a^{n-1} + \cdots + a^2 + a)\\&\qquad - (a^{n-1} + a^{n-2} + \cdots + a + 1)\\&=a^n - 1\end{align}$$
so
$$a^{n-1} + a^{n-2} + \cdots + a + 1 = \dfrac{a^n - 1}{a - 1}$$
And we can write
$$P_n = Pa^n - x\dfrac{a^n - 1}{a - 1}$$
The number $n$ of periods is going to be $n = \frac tp$. And noting that $a - 1 = pr$, this becomes
$$\begin{align}A &= Pa^{t/p} - x \frac{a^{t/p} - 1}{pr}\\&=Pa^{t/p}-\frac x{pr}a^{t/p}+\frac x{pr}\\&=\left(P-\frac x{pr}\right)a^{t/p} + \frac x{pr}\end{align}$$
$$\bbox[5px,border:2px solid] {A =\left(P-\frac x{pr}\right)\left(1+pr\right)^{t/p} + \frac x{pr}}$$ |
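As a sanity check, the boxed closed form agrees with simply iterating $P \mapsto P(1+pr)-x$ once per period; a minimal Python sketch with made-up numbers ($P=10000$, $r=0.05$, monthly compounding $p=1/12$, $t=10$ years, fee $x=2$):

```python
def balance_closed_form(P, r, p, t, x):
    """A = (P - x/(p r)) (1 + p r)^(t/p) + x/(p r)."""
    a = 1 + p * r
    return (P - x / (p * r)) * a ** (t / p) + x / (p * r)

def balance_iterative(P, r, p, t, x):
    """Apply one period at a time: P -> P(1 + p r) - x, n = t/p times."""
    for _ in range(round(t / p)):
        P = P * (1 + p * r) - x
    return P

args = (10_000, 0.05, 1 / 12, 10, 2)
print(balance_closed_form(*args))   # both print the same balance,
print(balance_iterative(*args))     # up to floating-point noise
```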
How to find equivalence class of this relation? | https://www.youtube.com/watch?v=rFexPRbJLlw
After seeing this video, the answer is:
$[a] = [d] = \{a,d\}$
$[b] = [c] = \{b,c\}$ |
Conditional probabilities in a context | Rather than giving a straight answer I leave something for you to think about.
First, write down formally all the probabilities that the exercise provides, for example $$P(Recovered \mid Untreated)=1/20.$$
Second, look at the two basic theorems that will get you through almost all exercises like this: the law of total probability and Bayes' theorem. Part one is there to help you use these theorems.
I hope this will get you started. |
What is a Hilbert space? | Since you are a programmer, here is an example involving $1$'s and $0$'s.
Schrödinger's cat is arguably the "hard"-science thought experiment that has invaded pop culture the most.
The state of the cat lives in a Hilbert space. The following comic is pretty illustrative (from abstrusegoose):
In this comic, several characteristics of Hilbert spaces are shown (not all though, for some mathematical facts are hard to interpret without rigorous formality).
A vector in a vector space does not have to be a finite list of numbers: a "vector" can be a very abstract function $\psi(x)$. Here it is the state of the cat, with basis states $|0\rangle$ (dead) and $|1\rangle$ (alive); later the author adds a newly discovered basis state in this space, $|\mathrm{LOL}\rangle$.
Linearity: A member of a Hilbert space can be linearly combined with another member of the same Hilbert space:
$$
|\psi\rangle = \frac{1}{\sqrt{2}}|0\rangle + \frac{1}{\sqrt{2}}|1\rangle,
$$
or rather for $\alpha^2+\beta^2 = 1$:
$$
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle.
$$
We can get members in the same Hilbert space in a more abstract way.
Inner product structure: this "inner product" can also be viewed more abstractly than $a\cdot b$. In this case, it can be interpreted as observation collapsing the state: take the inner product of the $0$ state with an arbitrary state $\psi$,
$$
\langle 0 | \psi \rangle = \alpha \langle 0 |0\rangle = \alpha .
$$
Once we observe the cat's status, the probability that we observe the cat in $|0\rangle$ is $\alpha^2$. The inner product gives us the square root of the probability that we observe an arbitrary state $\psi$ in a fixed known state $0$. |
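For a programmer's sanity check, here is a minimal NumPy sketch of this two-dimensional case (the amplitudes $0.6$ and $0.8$ are arbitrary choices satisfying $\alpha^2+\beta^2=1$):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])        # |0> : dead
ket1 = np.array([0.0, 1.0])        # |1> : alive

alpha, beta = 0.6, 0.8             # any real pair with alpha**2 + beta**2 == 1
psi = alpha * ket0 + beta * ket1   # linearity: the combination is again a state

amplitude = np.dot(ket0, psi)      # inner product <0|psi> = alpha
print(amplitude, amplitude ** 2)   # 0.6, and probability 0.36 of observing "dead"
```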
Limit of a line equation | If we need the limit, we first need to find whether it exists and then calculate it. So we solve the intersection problem and then look at the limit as $c \rightarrow 1$.
Intersection of the lines $3x + 5y = 1$ and $(c + 2)x + 5c^2 y = 1$
$$\begin{bmatrix} 3 & 5 & \big| & 1\\ c+2 & 5c^2 & \big| & 1 \end{bmatrix}
\xrightarrow{\ 3R_2-(c+2)R_1\ }
\begin{bmatrix} 3 & 5 & \big| & 1\\ 0 & 15c^2-5(c+2) & \big| & 3-(c+2) \end{bmatrix}
=
\begin{bmatrix} 3 & 5 & \big| & 1\\ 0 & 15c^2-5c-10 & \big| & 1-c \end{bmatrix}$$
At the point of intersection $ y = \frac {1-c}{15c^2 -5c - 10} = \frac{1-c}{5(3c^2 - c - 2)}= \frac{1-c}{5(3c+2)(c-1)}$
Noting that the author carefully designed this question so that the factoring would work out, at the intersection $y= \frac{1-c}{5(3c+2)(c-1)}= \frac{-1(c-1)}{5(3c+2)(c-1)}$
As long as $c \neq 1$ we can take the limit and say that as $c$ approaches 1, the value of y at the intersection is $y = \frac{-1}{5(3c+2)}$ which approaches $-1/25$ in the limit.
Substituting into the first equation, $3x + 5(-1/25) = 1$ so $3x - 1/5 = 1$, $3x = 6/5$, $x = 2/5$
The limiting point of intersection is therefore $(2/5, -1/25)$ and you can find the circle and finish off from there. |
Show that this limit is equal to $\liminf a_{n}^{1/n}$ for positive terms. | The typical textbook proof goes as follows.
If the $\lim \frac{a_{n+1}}{a_n} = q \gt 0$, then given an $\epsilon \gt 0$ such that $q \gt \epsilon$, there is some $n_0$ such that for all $n \ge n_0$
$$q - \epsilon \lt \frac{a_{n+1}}{a_n} \lt q+\epsilon$$
Multiplying gives us
$$a_{n_0}(q - \epsilon)^{n-n_0} \lt a_{n} \lt (q+\epsilon)^{n-n_0} a_{n_0}$$
and so
$$a_{n_0}^{1/n}(q - \epsilon)^{1-n_0/n} \lt a_{n}^{1/n} \lt (q+\epsilon)^{1-n_0/n} a_{n_0}^{1/n}$$
And thus (by taking the limit as $n \to \infty$)
$$ q - \epsilon \le \liminf (a_n)^{1/n} \le \limsup (a_n)^{1/n} \le q + \epsilon $$
Since $\epsilon$ was arbitrary, we have that $q = \lim a_n^{1/n}$.
In the case $q=0$, we replace the left-hand side by $0$, and the proof carries through. |
The min of $x + k/x$ | If you don't like differentiating: for $x,k>0$, by AM-GM
$$\sqrt{k}=\sqrt{x\cdot\dfrac{k}{x}}\leq\dfrac{x+\dfrac{k}{x}}{2}$$
When will you have equality? |
Holomorph of a group $G$, then the automorphism of $G$ are inner automorphisms | The precise wording would be that 'every automorphism of $G$ extends to an inner automorphism of $A$'.
Namely, conjugation by $(1,\theta)$ restricted to $G\subseteq A$ will give back just $\theta\in Aut(G)$. |
Homogeneous Wave Equation with Non-Homogeneous Boundary Condition: using Separation of variables | Hint: If
\begin{cases}
u(0,t)=p\\
u(\pi,t)=q
\end{cases}
we first make $p$ and $q$ vanish with another ansatz. Let $u(x,t)=v(x,t)+w(x,t)$ with $w(x,t)=\alpha(t) x+\beta(t)$, so from
\begin{cases}
u(0,t)=2\sin t\\
u(\pi,t)=0
\end{cases}
we have $w(x,t)=\left(-\dfrac{2}{\pi}\sin t\right)x+2\sin t$, and from $u(x,0)=v(x,0)+w(x,0)$ we know $v(x,0)=x$; also, with $u_t(x,0)=v_t(x,0)+w_t(x,0)$, then $v_t(x,0)=\dfrac{2}{\pi}x-2$, so the new problem takes the form
\begin{cases}
v_{tt}=v_{xx}+\left(-\dfrac{2}{\pi}x+2\right)\sin t\\
v(x,0)=x \\
v_t(x,0)=\dfrac{2}{\pi}x-2 \\
v(0,t)=0\\
v(\pi,t)=0
\end{cases} |
For what functions $f$ does the following integral equation hold? | If I have well understood your question,
I assume that you are using Lebesgue theory. Then your function $f$ is in $L^1(\mathbb{R})$.
Put $g(u,x)=f(u)\chi_{[x-\alpha,x+\alpha]}(u)$. We have
$$\int_{-\infty}^{+\infty}(\int_{-\infty}^{+\infty}|g(u,x)|dx)du=\int_{-\infty}^{+\infty}2\alpha|f(u)| du<+\infty$$
Hence $g(u,x)$ is integrable on $\mathbb{R}^2$, and you can use Fubini:
$$\int_{-\infty}^{+\infty}(\int_{-\infty}^{+\infty}g(u,x)du)dx=\int_{-\infty}^{+\infty}(\int_{-\infty}^{+\infty}g(u,x)dx)du=\int_{-\infty}^{+\infty}2\alpha f(u) du=2\alpha\int_{-\infty}^{+\infty}f(u)du$$ |
Solving an equation with radicals in the exponent | By the AM-GM inequality (we use it twice) we have $$2\cdot2^{x^{\frac{1}{6}}} =2^{x^{\frac{1}{12}}}+2^{x^{\frac{1}{4}}}\geq 2\sqrt{2^{x^{\frac{1}{12}}+x^{\frac{1}{4}}}}\geq 2 \sqrt{2^{2{\sqrt{x^{\frac{1}{12}+\frac{1}{4}}}}}}=2\cdot2^{{x^{\frac{1}{6}}}}$$
So we have equality case which is achieved iff $$ x^{\frac{1}{12}}=x^{\frac{1}{4}}\implies x^3=x\implies x\in\{0,1,-1\}$$
Since the radicand $x$ must be nonnegative, we have $x=0$ or $x=1$. |
Find the minimum p: for q > p cubic $ x^3 -7x^2 + qx + 16 =0 $ has only one real root using algebraic arguements only | If $q=8$ then $(x-4)^2(x+1)=0$ has three roots so $p\ge 8$. Now consider $q>8$ and suppose there is more than 1 real root. Then all three (counting multiplicity) roots are real and, by Vieta, their product is $-16$.
However the equation can be written as $$(x-4)^2(x+1)=(8-q)x.$$
The cubic $y=(x-4)^2(x+1)$ has no points in the fourth quadrant and only has points in the second quadrant for $-1\le x\le0$. However, $y=(8-q)x$ only has points in the second and fourth quadrants.
Therefore all three roots satisfy $-1\le x\le0$ and so the product of the roots has magnitude at most $1$, a contradiction.
Therefore $p=8$. |
What are the polynomial quotients $R/x$ and $R/(R/x)$ for $R = (\mathbb{R}[x]/x^n)$? | I'd prefer to use the common parenthesized notation for ideals, i.e., $R=\Bbb R[x]/(x^n)$ etc.
Strictly speaking, $x\notin R$ and hence $(x)$ is not an ideal of $R$. Instead of $x$, we should use the image of $x$ (an element of $\Bbb R[x]$) under the canonical projection to $R$, a.k.a. the residue class of $x$, or $x+(x^n)$.
With these caveats, yes, we have
$$ R/(x+(x^n))\cong \Bbb R$$
per the obvious homomorphism.
However, there is no good way to view $R/(x+(x^n))$ as an ideal of $R$ (even on the level of mere sets, viewing the elements of $R/(x+(x^n))$ as the constant polynomials in $R$ or $\Bbb R[x]$ is not natural), hence we cannot speak of a quotient $R/(R/(x+(x^n)))$. |
Finding Mean Value and Standard Deviation | You're given $X$, which is normally distributed with an unknown mean $\mu$ and unknown variance $\sigma^2$ satisfying
$$
P(X>10.256)=0.1,\quad\text{and}\quad P(X<9.671)=0.05.\tag{1}
$$
Recall that if $X\sim\mathcal{N}(\mu,\sigma^2)$ then $Z=\frac{X-\mu}{\sigma}\sim\mathcal{N}(0,1)$. Thus $(1)$ can be rewritten as
$$
P\left(Z>\frac{10.256-\mu}{\sigma}\right)=0.1,\quad\text{and}\quad P\left(Z<\frac{9.671-\mu}{\sigma}\right)=0.05,\tag{2}
$$
where $Z\sim\mathcal{N}(0,1)$ is standard normally distributed.
Now, which point satisfies that $P(Z>z)=0.1$ or equivalently $P(Z\leq z)=0.9$? This is exactly the $90\%$-percentile of the standard normal distribution which is approximately $1.28$, and hence $(2)$ becomes
$$
\frac{10.256-\mu}{\sigma}=1.28,\quad\text{and}\quad \frac{9.671-\mu}{\sigma}=?
$$
Now you have two equations with two unknowns, which you can easily solve. |
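If you want to check the hand computation numerically (assuming SciPy), `norm.ppf` produces both percentiles, and the linear system solves in two lines:

```python
from scipy.stats import norm

z90 = norm.ppf(0.90)   # ≈  1.2816, the 90%-percentile used above
z05 = norm.ppf(0.05)   # ≈ -1.6449, the missing "?"

# Solve  10.256 = mu + z90*sigma  and  9.671 = mu + z05*sigma:
sigma = (10.256 - 9.671) / (z90 - z05)
mu = 10.256 - z90 * sigma
print(mu, sigma)       # ≈ 10.0 and 0.2
```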
problem defining distribution and probability | Recall that a sum of Bernoulli-distributed random variables is a binomial random variable, so considering the Bernoulli variables $X_i$ independently or the sum of them as binomial, is the same.
On the other hand, since you are adding 2400 (a large number in this context) of those variables, by the central limit theorem you may safely approximate your binomial variable by a normal variable.
Finally, on the question regarding your probability $P(\sum X_i < 1000)$, yes, this is what you are being asked to calculate. |
Complex solutions for Fermat-Catalan conjecture | I found the following complex solution! :)
$$(238+72i)^3+(7+6i)^8=(7347-1240i)^2$$
(There is no new method here. Just small contribution to the problem.) |
Arithmetic or Geometric sequence? | The sequence you gave is called the Harmonic sequence. It is neither geometric nor arithmetic. Not all sequences are geometric or arithmetic. For example, the Fibonacci sequence $1,1,2,3,5,8,...$ is neither.
A geometric sequence is one that has a common ratio between its elements. For example, the ratio between the first and the second term in the harmonic sequence is $\frac{\frac{1}{2}}{1}=\frac{1}{2}$. However, the ratio between the second and the third elements is $\frac{\frac{1}{3}}{\frac{1}{2}}=\frac{2}{3}$ so the common ratio is not the same and hence this is NOT a geometric sequence.
Similarly, an arithmetic sequence is one where its elements have a common difference. In the case of the harmonic sequence, the difference between its first and second elements is $\frac{1}{2}-1=-\frac{1}{2}$. However, the difference between the second and the third elements is $\frac{1}{3}-\frac{1}{2}=-\frac{1}{6}$ so the difference is again not the same and hence the harmonic sequence is NOT an arithmetic sequence. |
Is this function nowhere analytic? | This is not a solution, but rather some history about the problem I uncovered while looking through my personal "library" this morning and from some brief on-line searches just now. [2] appears to give a rigorous proof of a slightly more general result, and I can email anyone interested a .pdf copy of [2]. See my stackexchange profile for my email address. In particular, I encourage someone to use this paper to write up a careful proof of the result, preferably a proof for the specific case that s.harp asked about (which will decrease the notational clutter involved in dealing with additional generalizations given in [2]).
As I mentioned in a comment above, this is basically Problem 1 on p. 2 of Bishop/Crittenden's 1964 book Geometry of Manifolds.
Maury Barbato asked how to do the Bishop/Crittenden problem in a 29 October 2009 sci.math post. That sci.math thread has 16 other posts in it, including posts by several very respectable sci.math participants, and no one was able to come up with a rigorous proof. Maury Barbato made a follow-up post on 12 November 2009 where he said that the problem remains unsolved for him.
This morning I found the following two relevant items in my folders of papers on this topic.
[1] Editorial Note, Infinitely differentiable functions which are nowhere analytic [Solution to Advanced Problem #5061], American Mathematical Monthly 70 #10 (December 1963), 1109.
Editorial Note. I. N. Baker proves that the following function $F(x)$ is infinitely differentiable but not analytic anywhere on the real axis: $\sum_{n=1}^{\infty}2^{-n}f_n(x),$ where $f(x) = \exp(-1/x^2),$ $x \neq 0,$ $f(0)=0,$ and $f_n(x) = f(x - p_n),$ with $p_n$ the rational numbers in a sequence. Many examples are in the literature. The following references are cited by readers: $[[\cdots]]$
[2] Paweł Grzegorz Walczak, A proof of some theorem on the $C^{\infty}$-functions of one variable which are not analytic, Demonstratio Mathematica 4 #4 (1972), 209-213. MR 49 #504; Zbl 253.26011
(from top of p. 212) As a corollary of the theorem we prove that a well known function defined by formula (2) when $r$ is the set of all rational numbers and
$$\varphi(x)=\begin{cases} e^{-\frac{1}{x}} & \text{ when } x > 0 \\ 0 & \text{ when } x \leq 0 \end{cases}$$
is a $C^{\infty}$-function which is not analytic at any point of $R.$
Walczak's paper has only two references --- the 1950 Russian edition of Markushevich's book "Theory of Analytic Functions" and Bishop/Crittenden's 1964 book "Geometry of Manifolds". Markushevich's book is only cited for a standard fact about bounds on the magnitudes of the derivatives of a function that is analytic on a specified bounded open interval. Bishop/Crittenden's book is not cited anywhere as far as I can tell, which I suspect was an editing oversight in the final draft of the paper. My guess is that Walczak's paper arose from a student project to give a rigorous proof of the claim made in Problem 1 on p. 2 of Bishop/Crittenden's book, although the paper gives no explicit mention of its purpose (aside from stating what is to be proved). I have sent an email to Walczak asking him what led him to write the paper, and I will give an update if/when I get a reply from him.
I am fairly certain that Bishop/Crittenden underestimated the difficulty of their Problem 1. Indeed, when their book was reprinted (with corrections) by AMS Chelsea in 2001, Problem 1 was replaced with
$$f(x) \; = \; \sum_{n=1}^{\infty} 2^{-2^{n}}\exp\left(-\csc^2\left(2^{n}x\right)\right)$$
along with the comment "This replacement of the problem given in the first edition was formulated by Eric Bedford of Indiana University."
(NEXT DAY UPDATE) I have heard back from Paweł Walczak and my guess about the origin and context of his paper was correct. For those who might be interested, below is a description of what is proved in the paper. In what follows I have tried to convey exactly what is done mathematically, but the wording is my own and it differs quite a bit from the original wording.
Walczak's paper proves the following result and then gives a specific illustration of this result. (Here I use ${\mathbb R},$ $Q,$ ${\delta},$ where Walczak uses $R,$ $r,$ ${\delta}_{0}$ but otherwise the notation is essentially the same.) Let $Q = \{r_1,r_2,r_3,\ldots \}$ be an injectively-indexed countably infinite subset of ${\mathbb R}$ (i.e. $i \neq j$ implies $r_i \neq r_{j})$ and let $\{a_n \}$ be a sequence of nonzero real numbers such that $\sum_{n=1}^{\infty}|a_n| < \infty.$ Let $\varphi : {\mathbb R} \rightarrow {\mathbb R}$ be bounded on $\mathbb R$ and $C^{\infty}$ on $\mathbb R$ and real-analytic on ${\mathbb R} - \{0\}.$ Assume there exist $\delta > 0$ and $A > 0$ and $L > 0$ such that, for each $x \in \mathbb R$ with $|x| > A$ and for each $k \in \{0,1,2,\ldots\},$ we have $|\varphi^{(k)}(x)| < L \cdot k! \cdot {\delta}^{-k}.$ Finally, define $f: {\mathbb R} \rightarrow {\mathbb R}$ by $f(x) = \sum_{n=1}^{\infty}a_{n}\varphi(x-r_{n}).$ Then $f$ is $C^{\infty}$ at each $x \in {\mathbb R},$ and $f$ is real-analytic at each $x \in {\mathbb R} - \overline{Q},$ and $f$ is NOT real-analytic at each $x \in \overline{Q},$ where $\overline{Q}$ is the topological closure of $Q$ in ${\mathbb R}.$
As a corollary Walczak shows that the assumptions above hold if we let $Q = \mathbb Q$ and $\varphi(x)=\begin{cases} e^{-\frac{1}{x}} & \text{ when } x > 0 \\ 0 & \text{ when } x \leq 0. \end{cases}$ Doing this gives us a function $f:{\mathbb R} \rightarrow {\mathbb R}$ that is $C^{\infty}$ and nowhere real-analytic. Indeed, as Walczak mentions at the bottom of p. 211, given any (infinite) closed set $E \subseteq \mathbb R$ (the case for finite closed sets is easy without Walczak's result) and letting $Q$ be a countable dense subset of $E,$ we can get a $C^{\infty}$ function $f: {\mathbb R} \rightarrow {\mathbb R}$ that is real-analytic at each $x \in {\mathbb R} - E$ and NOT real-analytic at each $x \in E.$
(2 DAYS AFTER LAST UPDATE) A couple of days ago, shortly after my last update, I sent an email to Eric Bedford in which I mentioned this stackexchange web page and asked if he had anything to add to what I have written. Bedford said that during Fall 1969 or Spring 1970 (in Fall 1970 he began graduate school at University of Michigan) he worked through a large portion of Bishop/Crittenden's 1964 book in a reading course with Bishop, and at this time he came up with a replacement function for Problem 1. He did not actually say whether the function he came up with then is the same function that appears in the 2001 edition of the book. However, I strongly suspect it was the same function, because the function in the 2001 book is essentially the same function I have written on a piece of paper (which took me over an hour to locate this morning, by the way) that he gave me in his office in Fall 1982, a day or two after I had asked him in a class meeting (a first semester graduate complex analysis course I was taking at that time from him) whether there exists a function that is $C^{\infty}$ and nowhere analytic. He had just given us the $\exp(-1/x^2)$ example in class, and it seemed natural to me to wonder whether a $C^{\infty}$ function can actually be nowhere analytic (in analogy with the fact that a continuous function can be nowhere differentiable). I seem to recall that he said, when I asked the question, something to the effect that he was pretty sure he had an example, but he didn't remember the exact formulation and needed to look through his stuff in his office for it.
For what it's worth, here is the exact formulation---the exact same symbols and grouping symbols and such---of what is on this piece of paper from Fall 1982:
$$f(x) \; = \; \sum_{n=0}^{\infty} 2^{-2^{n}}\exp\left[\frac{-1}{\left(\sin\left(2^{n}x\right)\right)^{2}}\right]$$ |
The $\sigma$-algebra of subsets of $X$ generated by a set $\mathcal{A}$ is the smallest sigma algebra including $\mathcal{A}$ | Let me make a general comment rather than a specific one, because the construction that you are having trouble with is one that is very common and very useful (though it does have its limitations; see below) so it is important and good to have it "down" properly.
You have the following situation: you are considering a certain type of object of interest. For simplicity, let's look at the earliest example that most students encounter, which is vector spaces. So, you are looking at vector spaces. Specifically, you are looking at a particular vector space $\mathbf{V}$.
The objects have sub-objects (subspaces). These are subsets of your original $\mathbf{V}$, which are also objects (vector spaces) in their own right. Not every subset is a subobject, but every subobject is a subset.
In this situation, it is often fruitful to consider the following problem:
Given a subset $S$ of $\mathbf{V}$, what is the smallest subspace of $\mathbf{V}$ that contains $S$?
That is, we want to find a $\mathbf{W}$ with the following properties:
$\mathbf{W}$ is a subspace of $\mathbf{V}$;
$S$ is contained in $\mathbf{W}$ ("...that contains $S$");
If $\mathbf{Z}$ is any subspace of $\mathbf{V}$ that contains $S$, then $\mathbf{W}\subseteq\mathbf{Z}$ ("... smallest ...")
This is the situation you have at hand, and it's also a very common situation that we encounter over and over again. Some examples:
Given a group $G$ and a subset $S$, to find the smallest subgroup of $G$ that contains $S$ (the "subgroup generated by $S$");
Given a group $G$ and a subset $S$, to find the smallest normal subgroup of $G$ that contains $S$;
Given a subset $S$ of the plane $\mathbb{R}^2$, to find the smallest convex set that contains $S$ (the "convex hull of $S$");
Given a set $X$ and a collection of subsets $\mathcal{S}\subseteq \mathcal{P}(X)$, find the smallest $\sigma$-algebra on $X$ that contains $\mathcal{S}$ (the case you have);
Given a set $X$ and a relation $R$ on $X$, find the smallest transitive relation on $X$ that extends $R$ (the "transitive closure");
Given a topological space $X$ and a subset $S$, find the smallest closed subset of $X$ that contains $S$ (the "closure of $S$").
and so on and so forth.
Now, in general, such a thing may not exist; or there may be minimal objects but no minimum object. For example, if in the last example above you replace "closed" with "open", there may be no such object: if $X=\mathbb{R}$ and $S=[0,1]$, there is no "smallest open set that contains $S$".
But in many situations, there is one single observation that lets you conclude that such as "smallest subobject" must exist. Namely, if you can show that the intersection of any collection of "subobjects" is again a "subobject". For the example with vector spaces: is the intersection of an arbitrary family of subspaces of $\mathbf{V}$ itself a subspace of $\mathbf{V}$? For the above examples:
Is the intersection of an arbitrary family of subgroups of $G$, itself a subgroup of $G$?
Is the intersection of an arbitrary family of normal subgroups of $G$ itself a normal subgroup of $G$?
Is the intersection of an arbitrary family of convex subsets of $\mathbb{R}^2$ itself a convex subset of $\mathbb{R}^2$?
Is the intersection of an arbitrary family of $\sigma$-algebras on $X$ itself a $\sigma$-algebra on $X$?
Is the intersection of an arbitrary family of transitive relations on $X$ itself a transitive relation on $X$?
Is the intersection of an arbitrary family of closed subsets of $X$ itself a closed subset of $X$?
When the answer is "yes", then the following construction will always show that there is such a thing as the "smallest subobject that contains $S$":
Take the family of all subobjects that contain $S$; then take the intersection of the family. That's the smallest subobject that contains $S$.
Why does this work?
Because:
(i) There is at least one subobject that contains $S$, (namely the original object itself; for $\sigma$-algebras, this would be $\mathcal{P}(X)$; for the transitive closure example, you would take the "total relation" $X\times X$).
(ii) Since the intersection of an arbitrary family of subobjects is a subobject (this is our assumption), then this intersection is a subobject.
(iii) Since each thing being intersected contains $S$, the intersection contains $S$.
This means that the intersection is indeed a subobject that contains $S$. Finally:
(iv) The intersection is always contained in each and every element of the family being intersected. So if $\mathbf{Z}$ is any subobject that contains $S$, then it is a member of the family being intersected, so the intersection is contained in $\mathbf{Z}$. This shows the intersection is indeed the "smallest subobject" with the desired properties.
So:
To find the smallest subspace of $\mathbf{V}$ that contains $S$, intersect all subspaces that contain $S$.
To find the smallest subgroup of $G$ that contains $S$, intersect all subgroups that contain $S$.
To find the smallest normal subgroup of $G$ that contains $S$, intersect all normal subgroups that contain $S$.
To find the smallest convex set that contains $S$, intersect all convex subsets of $\mathbb{R}^2$ that contain $S$.
To find the smallest $\sigma$-algebra that contains $S$, intersect all $\sigma$-algebras that contain $S$.
To find the smallest transitive relation that contains $R$, intersect all transitive relations that contain $R$.
To find the smallest closed subset that contains $S$, intersect all closed subsets that contain $S$.
And this works like magic. Voilà! You have shown that this object exists. It necessarily has the properties you want.
This is a "top-down" approach. Imagine yourself looking at the "big object", and you are "paring it down" until you get "just enough" for the object you want (intersections make things smaller; you are paring down stuff that may not be needed).
The problem? Like most magic spells, it doesn't really tell you much about the end product. The fact that the end product appeared "as if by magic" means that you are likely to be as clueless about the actual nature of the "smallest object" in question as you were when you started. You now know that there is such a thing, but you don't really know what it "looks like".
That is why in almost every situation like this, you also want a "bottom-up" description of this "smallest subobject that contains $S$". You want an explicit description of what it actually looks like. For the examples above:
The smallest subspace of $\mathbf{V}$ that contains $S$ is the set of all linear combinations of vectors in $S$.
The smallest subgroup of $G$ that contains $S$ is the set of all finite products of elements of $S$ and their inverses.
The smallest normal subgroup of $G$ that contains $S$ is the set of all finite products of conjugates of elements of $S$ and their inverses.
The smallest convex subset of $\mathbb{R}^2$ that contains $S$ is the set of all convex combinations of elements of $S$.
Asaf gives an explicit description of the smallest $\sigma$-algebra on $X$ that contains $S$ in his answer, described by starting from $S$.
The smallest transitive relation on a set $X$ that contains a given relation $R$ is the set of all pairs $(a,b)$ such that there exist a finite sequence $x_0,x_1,\ldots,x_n$ of elements of $X$ such that $x_0=a$, $x_n=b$, and $(x_i,x_{i+1})\in R$ for $i=0,\ldots,n-1$.
The smallest closed subset of a topological space $X$ that contains a given set $S$ is equal to $S\cup\partial S$ or to $S\cup S'$.
In each of these cases, one would have to show that the given description actually has the desired properties. This is a "bottom-up" approach.
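For instance, here is a minimal Python sketch of the bottom-up transitive-closure construction from the list above, as a naive fixed-point iteration on a relation over a finite set (correct, though not the most efficient algorithm):

```python
def transitive_closure(R):
    """Smallest transitive relation containing R: keep adding the pairs
    (a, d) forced by (a, b) and (b, d) until nothing new appears."""
    closure = set(R)
    while True:
        forced = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        if forced <= closure:
            return closure
        closure |= forced

print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```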
The "top-down" description has the benefit of simplicity, that the "universal properties" that define the object are very clearly satisfied, and that they make proving results about how the "smallest object" relates to other objects easy. However, the "top-down" description is usually very hard to actually use to prove things about the specific smallest object. The "bottom-up" construction has the benefit of (usually) being a very concrete way of getting your hands on the object itself, making it easy to prove things about the object itself, but proving the universal properties is usually difficult. Thus, for example, the top-down definition of "subspace $\mathbf{V}$ generated by $S$" in the linear algebra setting makes it very hard to figure out things like the dimension of the subspace, or a basis, while the "bottom-up" approach makes that very easy, but then proving that the collection of all linear combinations forms a subspace is more difficult than simply taking an intersection of subspaces.
In most books or presentations, when discussing "the smallest X that contains S", you will see one of two approaches:
Define it as a big intersection, then prove a theorem that gives the "bottom-up" description; or
Give a "bottom-up" description; then prove the object described has the desired properties of being a subobject, containing S, and being the "smallest".
Whenever possible, you want both descriptions because they have complementary strengths and weaknesses. |
For every closed $F$ and $x \in X \setminus F,$ there exists an open neighbourhood $U$ of $x$ such that $\overline{U} \cap F = \emptyset.$ | Since $X$ is a metric space, it is regular. By regularity, as $x$ is not in the closed set $F$, there are disjoint open sets $U,V$ separating $x$ and $F$, i.e.
$x \in U$, $F \subset V$. Thus $x \in U \subset X \setminus V \subset X \setminus F$.
As $X \setminus V$ is closed, $\overline{U} \subset X \setminus V \subset X \setminus F$.
Thus $\overline{U}$ and $F$ are disjoint. |
How can I quickly get $\sin\beta\sec\beta\cot\beta$ from $(1-\sin^2\beta)(1+\tan^2\beta)$? | Recall that
$$\begin{align}
\cos\beta &= \dfrac{\sin\beta}{\tan\beta} \\
\sec\beta &= \dfrac1{\cos\beta} \\
\cot\beta &= \dfrac1{\tan\beta}
\end{align}$$
Then, your expression can be re-written as
$$\begin{align}
\cos^2\beta\sec^2\beta &= \cos\beta\cos\beta\sec\beta\dfrac1{\cos\beta} \\
&= \require{cancel}\cancel{\cos\beta}\cos\beta\sec\beta\dfrac{1}{\cancel{\cos\beta}} \\
&= \cos\beta\sec\beta \\
&= \dfrac{\sin\beta}{\tan\beta}\sec\beta \\
&= \sin\beta\sec\beta\cot\beta
\end{align}$$ |
Finding the average value of following function on given curve | We could use $x=t, y=t^{3/2}$ to parametrize the curve, but let's use $x=t^2, y=t^3$ instead.
Then $r^{\prime}(t)=\langle x^{\prime}(t), y^{\prime}(t)\rangle=\langle2t, 3t^2\rangle$, so $\big|r^{\prime}(t)\big|=\sqrt{(2t)^2+(3t^2)^2}=\sqrt{4t^2+9t^4}$.
Then $\displaystyle L=\int_{C}ds=\int_{C}\big|r^{\prime}(t)\big|dt=\int_0^{\sqrt{5}}\sqrt{4t^2+9t^4}dt=\int_0^{\sqrt{5}}t\sqrt{4+9t^2}dt$
and $\displaystyle\int_{C}f(x,y)ds=\int_{0}^{\sqrt{5}}\sqrt{4+9t^2}\sqrt{4t^2+9t^4}dt=\int_0^{\sqrt{5}}t(4+9t^2)dt=\int_0^{\sqrt{5}}(4t+9t^3)dt$
and, as you indicated, you want to calculate $\displaystyle \frac{1}{L}\int_{C}f(x,y)ds$. |
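A numerical cross-check of both integrals (assuming SciPy):

```python
import numpy as np
from scipy.integrate import quad

T = np.sqrt(5)
L, _ = quad(lambda t: t * np.sqrt(4 + 9 * t ** 2), 0, T)   # arc length
I, _ = quad(lambda t: 4 * t + 9 * t ** 3, 0, T)            # line integral of f
print(L, I, I / L)   # L = 335/27 ≈ 12.407, I = 66.25, average ≈ 5.34
```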
Number theory in sequence $x_{n+1}=x_n^3-2x_n^2+2$ | I found this to be a quite interesting (and challenging to solve) question. It's asking about the primes $p$ where
$$p \mid x_n^2 - 3x_n + 3 \tag{1}\label{eq1A}$$
As you showed, multiplying by $4$ gives
$$p \mid 4x_n^2 - 12x_n + 12 = (2x_n - 3)^2 + 3 \tag{2}\label{eq2A}$$
Since $p \neq 3$ (note $x_n \equiv 5 \pmod{72}$ for all $n \ge 1$, with this giving $x_n^2 - 3x_n + 3 \equiv 13 \pmod{72}$), this shows $-3$ is a quadratic residue modulo $p$.
Next, multiplying \eqref{eq1A} by $x_n - 3$ gives
$$\begin{equation}\begin{aligned}
(x_n - 3)(x_n^2 - 3x_n + 3) & = x_n^3 - 3x_n^2 + 3x_n - 3x_n^2 + 9x_n - 9 \\
& = x_n^3 - 6x_n^2 + 12x_n - 8 - 1 \\
& = (x_n - 2)^3 - 1
\end{aligned}\end{equation}\tag{3}\label{eq3A}$$
This means
$$(x_n - 2)^3 \equiv 1 \pmod{p} \tag{4}\label{eq4A}$$
If $n \gt 1$, substituting $x_n = x_{n-1}^3 - 2x_{n-1}^2 + 2$ gives
$$\begin{equation}\begin{aligned}
(x_{n-1}^3 - 2x_{n-1}^2)^3 & \equiv (x_{n-1}^2(x_{n-1} - 2))^3 \\
& \equiv x_{n-1}^6(x_{n-1} - 2)^3 \\
& \equiv 1 \pmod{p}
\end{aligned}\end{equation}\tag{5}\label{eq5A}$$
Repeating the substitutions for each $x_i - 2 = x_{i-1}^2(x_{i-1} - 2)$ for $i$ from $n - 1$ down to $2$ gives a product of $0$ or more $x_{i-1}^6$ values multiplied by $(x_1 - 2)^3 = 3^3$, i.e.,
$$\left(\prod_{i=1}^{n-1}x_i\right)^6(3^3) \equiv 1 \pmod{p} \tag{6}\label{eq6A}$$
Multiplying both sides by $3$ gives
$$\left(\left(\prod_{i=1}^{n-1}x_i\right)^3 3^2\right)^2 \equiv 3 \pmod{p} \tag{7}\label{eq7A}$$
This shows $3$ is also a quadratic residue. Thus, $3^{-1}(-3) \equiv -1 \pmod{p}$ is a quadratic residue, so there's an integer $x$ with $x^2 \equiv -1 \pmod{p} \implies x^4 \equiv 1 \pmod{p}$. Since $p$ is odd, $x^2 \equiv -1 \not\equiv 1 \pmod{p}$, so $4$ is the multiplicative order of $x$ modulo $p$, and thus $4 \mid p - 1 \implies p \equiv 1 \pmod{4}$. This proves that $p$ cannot be of the form $4k + 3$. |
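A quick empirical check of the conclusion (assuming SymPy; the terms grow quickly, so only the first few are factored):

```python
from sympy import factorint

x = 5                                # x_1 = 5, matching (x_1 - 2)^3 = 3^3 above
for _ in range(3):
    val = x * x - 3 * x + 3
    factors = factorint(val)
    print(val, factors)              # 13, 5701, ... all primes ≡ 1 (mod 4)
    assert all(p % 4 == 1 for p in factors)
    x = x ** 3 - 2 * x ** 2 + 2      # x_{n+1} = x_n^3 - 2 x_n^2 + 2
```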
Open sets and the Cauchy-Schwarz inequality | As pointed out in the comments, it suffices to show that $x \mapsto \langle x, u\rangle$ is continuous, and so it suffices to show that $|\langle y, u\rangle|$ is small whenever $y$ is close enough to $0$ - this follows from the linearity of the inner product in the first coordinate (if $z \approx x$, then $x - z \approx 0$). Now this can be done with the Cauchy-Schwarz inequality by noting that
$$|\langle y, u \rangle| \le \|y\| \|u\|$$
Hence if $\|y\| < \epsilon / \|u\|$, then $|\langle y, u\rangle| < \epsilon$, as desired. Since $u$ was fixed, this works. |
Proof involving quantifiers | First consider the meaning of $(\forall z\in\Bbb N)(x+y<z)$: this says that $x+y<0$ and $x+y<1$ and $x+y<2$ and so on. Since each of these inequalities implies the next, we can say the same thing by saying just that $x+y<0$. Thus, we can rewrite the original simply as
$$(\forall x\in\Bbb Z)(\exists y\in\Bbb Z)(x+y<0)\;.\tag{1}$$
(If for you $\Bbb N$ does not include $0$, replace $0$ by $1$.) In words $(1)$ says that no matter what integer $x$ you choose, I can find an integer $y$ such that $x+y$ is negative. Can you find a simple function of $x$ that will produce such a $y$? |
Limit of permuted numbers when each of them is approaching $0$ | Consider $k$ with $\sigma(k)\ne k$. Then along the path $$x_i=\begin{cases}t&i=k\\t^{n+1}&i\ne k\end{cases}\qquad\text{with } t>0,$$ the numerator is between $t^k$ and $t^k+nt^{n+1}$ for $0<t<1$; in fact it is between $t^k$ and $2t^k$ for small enough $t$. Similarly, the denominator is between $t^{\sigma(k)}$ and $2t^{\sigma(k)}$ for small enough $t$. Thus the quotient is between $\frac12t^{k-\sigma(k)}$ and $2t^{k-\sigma(k)}$. If $k>\sigma(k)$ this tends to $0$ as $t\to 0^+$; if $k<\sigma(k)$ it tends to $+\infty$ instead. But if $\sigma$ is not the identity, we can find both kinds of $k$, i.e., the limit does not exist. |
Convergence of the series $\sum_{n=0}^\infty \frac{n^n z^n}{n!}$ when $e|z|=1$ | As is quite frequently the case on the boundary of convergence, this is a job for Dirichlet's test:
If
$(a_n)$ is a real sequence with $a_n \downarrow 0 $
$(b_n)$ is a complex sequence with bounded partial sums, $ \left\lvert \sum_{k=0}^n b_k \right\rvert < M$ for some fixed $M$,
then $ \sum_{n} a_n b_n $ converges.
Namely, since
$$ \frac{n^n e^{-n}}{n!} \downarrow 0 , $$
(the ratio of successive terms is $ (1+1/n)^n/e < 1$, see below for a better estimate) and
$$ \left\lvert \sum_{k=0}^n e^{ik\theta} \right\rvert = \left\lvert \frac{1-e^{i(n+1)\theta}}{1-e^{i\theta}} \right\rvert < \csc{(\theta/2)} $$
is bounded for $\theta \neq 0$, we do indeed have convergence for all points on the circle apart from $\theta=0$.
As for divergence for $\theta=0$, there are various ways to do this, but probably the simplest is to derive a lower bound for $n^n e^{-n}/n!$ and apply the comparison test. The graph of $\log{x}$ is concave, so the Trapezium Rule gives an underestimate of its area; this can be massaged to give the following lower bound:
$$ \frac{1}{2}\log{1} + \log{n!} - \frac{1}{2}\log{n} < \int_1^n \log{x} \, dx = 1-n + n\log{n} , $$
so
$$ n! < e\sqrt{n} n^n e^{-n} . $$
Therefore
$$ \frac{n^n e^{-n}}{n!} > \frac{1}{e\sqrt{n}} , $$
and finally $\sum_n n^{-1/2}$ diverges by comparison with the harmonic series, for example. |
How to find the spectrum of the hypercube? | This is just a data point:
Computing characteristic polynomials of their adjacency matrices, one finds the roots for the $d$-dimensional hypercube are $d$, $d-2$, $d-4$, $\dots$, $-d$, with multiplicities $\binom{d}{0}$, $\binom{d}{1}$, $\binom{d}{2}$, $\dots$, $\binom{d}{d}$.
Since this is extraordinarily regular, it suggests that one find a recursive formula. If $A_n$ is the adjacency matrix of the hypercube on $2^{n-1}$ vertices, then $A_n=\begin{pmatrix}A_{n-1}&I_{2^{n-2}}\\I_{2^{n-2}}&A_{n-1}\end{pmatrix}$, so we have something to work with. |
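A minimal NumPy sketch of that recursion, confirming the data point above (the function name is mine):

```python
import numpy as np

def hypercube_adjacency(d):
    """Adjacency matrix of the d-dimensional hypercube via the
    block recursion A -> [[A, I], [I, A]], starting from a single vertex."""
    A = np.zeros((1, 1))
    for _ in range(d):
        n = A.shape[0]
        A = np.block([[A, np.eye(n)], [np.eye(n), A]])
    return A

for d in range(1, 5):
    eigs = np.linalg.eigvalsh(hypercube_adjacency(d))
    print(d, np.round(eigs).astype(int))
# d = 3 prints [-3 -1 -1 -1  1  1  1  3]: eigenvalues d - 2j, multiplicity C(d, j)
```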
Complexity of calculating the determinant of a $n \times n$ matrix using Laplace expansion | By definition, you have that
\begin{align}
\det A = \sum_{\sigma \in S_N}(-1)^\sigma a_{1,\sigma(1)}a_{2,\sigma(2)}\ldots a_{N, \sigma(N)}
\end{align}
where $S_N$ is the group of permutations of $N$ elements. Since $|S_N| = N!$, this means that to compute $\det A$ you need to sum up $N!$ products.
Edit: Use induction. Suppose the statement holds for $k=n$, i.e. $u_n = \mathcal{O}(n!)$. Then for the case $k=n+1$ we see that
\begin{align}
u_{n+1}= (n+1) u_n+(2n+1) = \mathcal{O}(n!)\cdot(n+1) + 2n+1 =\mathcal{O}((n+1)!)+2n+1 = \mathcal{O}((n+1)!).
\end{align} |
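For concreteness, here is a naive Python sketch of the Laplace expansion whose cost the recurrence $u_{n+1}=(n+1)u_n+(2n+1)$ counts (an $\mathcal{O}(n!)$ method, so only usable for tiny matrices):

```python
def det_laplace(M):
    """Determinant by Laplace expansion along the first row: an
    (n+1) x (n+1) determinant costs (n+1) size-n determinants plus
    O(n) extra operations, i.e. u_{n+1} = (n+1) u_n + (2n+1)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det_laplace(minor)
    return total

print(det_laplace([[1, 2], [3, 4]]))                   # -2
print(det_laplace([[2, 0, 1], [1, 3, 2], [1, 1, 4]]))  # 18
```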
Are there spheres of non-constant curvature? | are there any metrics on $n$-sphere of non-constant curvature?
Let's assume it's about intrinsic curvature and $n\geqslant 2$. Then you can get such a metric quite easily; here is an example in two dimensions:
Take a potato and map its surface 1:1 and smoothly to $S^2 = \{x\in\mathbb R^3 : \|x\|_2=1\}$. Then define the distance of two points on $S^2$ to be the distance of their preimages on the potato. This defines a metric, and the curvature is non-constant. |
Using $\int_{0}^{1} x^n dx =\frac{1}{n+1}$ find the sums of the series | Transform
$$\frac{1}{3n-2} = \int_0^1 x^{3n-3} \mathrm d x$$
and
$$(-1)^{n+1}=(-1)^{3n-3}$$
Thus
$$\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{3n-2} =
\sum_{n=1}^{\infty} (-1)^{n+1} \int_0^1 x^{3n-3} \mathrm d x =
\int_0^1 \sum_{n=1}^{\infty} (-1)^{n+1} x^{3n-3} \mathrm d x =$$
$$=\int_0^1 \sum_{n=1}^{\infty} (-x)^{3n-3} \mathrm d x = \int_0^1 \frac{1}{1+x^3}\mathrm d x$$
note that we can swap sum and integral because the geometric series converges uniformly on compact sets inside $(-1, 1)$; strictly speaking, one integrates over $[0,1-\varepsilon]$ and lets $\varepsilon\to0^+$, with Abel's theorem handling the endpoint $x=1$.
Now, you have to compute this integral, which is not that easy.
WA says it evaluates $$\frac{\pi}{3 \sqrt 3} + \frac{\log 2}{3} \approx 0.835$$ |
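A numerical sanity check (assuming NumPy/SciPy) that the partial sums, the integral, and the closed form agree:

```python
import numpy as np
from scipy.integrate import quad

partial = sum((-1) ** (n + 1) / (3 * n - 2) for n in range(1, 200001))
integral, _ = quad(lambda x: 1 / (1 + x ** 3), 0, 1)
closed = np.pi / (3 * np.sqrt(3)) + np.log(2) / 3
print(partial, integral, closed)   # all ≈ 0.8356
```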
If there's no relation between $c_1$ and $c_2$, can I say that $c_1= r\sin \theta,c_2=r\cos \theta$? | If $c_1=c_2=0$ then $r=0$ and $\theta$ can be anything.
For other values of $c_1,c_2$ there will be many solutions, but it is often convenient to assume
$$r>0\ ,\qquad -\pi<\theta\le\pi\ ,$$
in which case there will be exactly one solution. We have
$$r=\sqrt{c_1^2+c_2^2}$$
and
$$\tan\theta=\frac{c_1}{c_2}$$
provided $c_2\ne0$. By finding the sign of $\cos\theta$ and $\sin\theta$ you can determine which quadrant $\theta$ is in, then this last equation gives you one definite value for $\theta$. Finally, in the case $c_2=0$ we have
$$\theta=\begin{cases}\frac\pi2&\hbox{if}\ c_1>0\\ -\frac\pi2&\hbox{if}\ c_1<0.\end{cases}$$ |
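In code, this case analysis is exactly what the two-argument arctangent does; a minimal Python sketch (mind the argument order, since here $c_1$ carries the sine):

```python
import math

def to_polar(c1, c2):
    """Return (r, theta) with c1 = r*sin(theta), c2 = r*cos(theta),
    r >= 0 and -pi < theta <= pi."""
    r = math.hypot(c1, c2)
    theta = math.atan2(c1, c2)   # atan2 picks the correct quadrant
    return r, theta

print(to_polar(1.0, 0.0))    # (1.0, 1.5707...): the c2 = 0, c1 > 0 case
print(to_polar(-1.0, -1.0))  # (1.4142..., -2.3561...): third quadrant
```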
Asymptotics of binomial coefficients and the entropy function | Let $q=1-p$.
The Stirling approximation asserts $n! \sim (\frac{n}{e})^{n} \sqrt{2\pi n}$. This gives:
$$\binom{n}{pn} \sim \frac{(n/e)^{n}}{(np/e)^{np}(nq/e)^{nq}}\frac{\sqrt{2\pi n}}{\sqrt{2\pi np}\sqrt{2\pi nq}}$$
The $(n/e)^n$ cancels with $(n/e)^{np}(n/e)^{nq}$, so this simplifies to:
$$\binom{n}{pn} \sim \frac{1}{p^{np}q^{nq}}\frac{1}{\sqrt{2\pi npq}}$$
Upon taking logarithm in base 2, we get:
$$\log_2\binom{n}{pn} \sim -n (p\log_2 p + q \log_2 q) - \log_2 (2\pi n pq)/2$$
Dividing by $n$, it transforms to:
$$\frac{1}{n}\log_2\binom{n}{pn} \sim -(p\log_2 p + q \log_2 q) - \log_2 (2\pi n pq)/(2n) = H(p) + O(\ln n/ n)$$
And when $n$ tends to infinity, we get the desired limit.
EDIT: You don't need the full strength of Stirling approximation. $n! = (n/e)^{n} g(n)$ where $n^{\alpha} \le g \le n^{\beta}$ is enough. |
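A quick numerical look at the limit (a sketch using the log-gamma function so that $pn$ need not be an integer; natural logs throughout, which only rescales $H$):

```python
import math

def log_binom(n, k):
    # log of the (generalized) binomial coefficient via log-gamma
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

def H(p):  # entropy in nats
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

p = 0.3
for n in (10, 100, 1000, 10000):
    print(n, log_binom(n, p * n) / n, H(p))
# the first column approaches H(0.3) ≈ 0.6109, with the O(log(n)/n) gap visible
```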
Infinitely many transcendental numbers over Q | Yes, using the Lindemann-Weierstrass theorem.
Let $S$ be any infinite set of algebraic real numbers linearly independent over $\mathbb{Q}$, for example $\{\sqrt p : p \text{ is prime}\}$. (A maximal such set is a basis for $\overline{\mathbb{Q}}$ over $\mathbb{Q}$.)
Then $\{e^s : s \in S\}$ is algebraically independent. |
Are these statements of my professor about periodicity of harmonic processes in time series analysis correct? | As a preliminary, let us make a slight change of notation: I will denote the discrete time by $k \in \mathbb{Z}$, the $k$ present in the summation by $K$, and the period (if it exists) by $N \in \mathbb{N}_0$.
Firstly, let us refresh the following concepts. Let $\theta \in \mathbb{R}_+$, $k \in \mathbb{Z}$, $i = \sqrt{-1}$, and consider the following (complex valued) function: $f(\theta) = e^{i \theta k}$. It is easy to check that this function is periodic with period $2\pi$, in fact
$
\begin{align}
f(\theta + 2\pi) = e^{i (\theta +2\pi) k} = e^{i \theta k} e^{i 2\pi k} = e^{i\theta k} = f(\theta)
\end{align}
$
since $e^{i 2 \pi k}=1$. Now, consider instead (with the same meaning of the symbols) the following (complex valued) function
$
\begin{align}
f(k) = e^{i \theta k}
\end{align}
$
Is it periodic? For discrete time functions, the period must be an integer, since the argument of the function must be an integer (the domain of $f(k)$ is $\mathbb{Z}$). A discrete time function can't be, e.g., periodic with period $\pi$. Thus, we must find, if it exists, an integer $N \in \mathbb{N}_0$ such that $f(k+N) = f(k)$. Repeating the same reasoning as above,
$
\begin{align}
f(k+N) = e^{i\theta(k+N)} = e^{i \theta k} e^{i \theta N}
\end{align}
$
Now, $e^{i \theta N}$ is equal to $1$ if and only if $\ \theta N = 2 \pi n$, with $n \in \mathbb{N}$, which means:
$
\begin{align}
(*) \quad \theta = 2 \pi \frac{n}{N}, \quad n \in \mathbb{N}, N \in \mathbb{N}_0
\end{align}
$
The key concept is the following
The function $e^{i \theta k}$ is periodic (with respect to $k \in \mathbb{Z}$) if and
only if the pulsation $\theta \in \mathbb{R}_+$ satisfies $(*)$, i.e., the pulsation is a rational multiple of $\pi$.
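This is easy to see numerically; a minimal NumPy sketch comparing a rational and an irrational multiple of $2\pi$:

```python
import numpy as np

k = np.arange(20)
theta_rat = 2 * np.pi * 3 / 10   # theta = 2*pi*n/N with n = 3, N = 10
theta_irr = 1.0                  # not a rational multiple of pi

f_rat = np.exp(1j * theta_rat * k)
f_irr = np.exp(1j * theta_irr * k)
print(np.allclose(f_rat[:10], f_rat[10:]))  # True: integer period N = 10
print(np.allclose(f_irr[:10], f_irr[10:]))  # False: no integer period
```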
Now, back to your question. Your example is unfortunately ill-worded, since the period of discrete time functions/processes can't be $2 \pi$ ("the last part is periodic with period $2 \pi$" is, strictly speaking, wrong): as explained above, only integers are allowed. In fact, since $k$ is used as an (integer) index in the formalism $X_k$, what would, e.g., $X_{\pi}$ mean? There exist only things like $X_1, X_2, X_3, \dots$, because your process is discrete time (this is the reason why I asked and, as is now clear, it is a crucial fact).
Luckily enough, this means that it is very easy to prove that, in general, $X(k)$ (i.e., the process viewed as a function of discrete time) is not periodic, since the phasors are in general not periodic (with respect to $k$). However, I doubt your professor was referring to this: he probably was referring to something along the lines of $f(\theta)$ is $2 \pi$-periodic, i.e., viewing the process as a function not of time, but of the pulsations $\lambda_j$.
At this point, it is useless to describe in detail the fact about the spectral distribution. Thus, I will just give you a sketch. Using the Wiener-Khinchin theorem, we know that the spectral density of a random process is equal to the Fourier transform of the ACF (AutoCorrelation Function) of the process. But the ACF of a periodic function is periodic (and with the same period), thus the information on periodicity is "carried on" the spectral density.
This is the same as what happens deterministically. Let $f(t)$, $t \in \mathbb{R}$, be a function of (continuous) time; the Fourier transform of $f(t)$, which we can denote with $F(\omega)$, where $\omega \in \mathbb{R}$ is the pulsation, contains all the information present in $f(t)$. In short, this is simply the usual time domain - frequency domain duality.
Example: This is a trivial example to show that the process is in general not periodic. Starting with
$
\begin{align}
X_k = \sum_{j=-K}^{K} A_j e^{i \lambda_j k}
\end{align}
$
Now, choose $\lambda_j = \lambda \ \forall j$ (single frequency process), from which
$
\begin{align}
X_k = \sum_{j=-K}^{K} A_j e^{i \lambda k} = e^{i \lambda k} \underbrace{\sum_{j=-K}^{K} A_j}_{=A} = A e^{i \lambda k}
\end{align}
$
The periodicity of $X_k$ depends on the periodicity of the phasor $e^{i \lambda k}$, which depends on $\lambda$, remember the condition $(*)$. Thus, we can easily conclude that $X_k$ is, in general, not periodic. |
Coordinates of a point when it is known the distance and the coordinates of another point | That would be any point that satisfies the equation $$\sqrt{ \left(x-55.0001\right)^2+\left(y-32.6789\right)^2}=10$$ |
Theorem stating that raising to $\rm n^{th}$ power both sides of an equation *is ok*. | If you have a polynomial with $n$ solutions $a_1, a_2, \ldots , a_n$ then you have a factorisation
$$
(x-a_1)(x-a_2)\ldots(x-a_n) = 0
$$
Raising each side to the same power doesn't give you any more roots, just increases the multiplicity of each root. This works when you have $0$ on the right hand side because that is the factorisation that gives you the roots.
I think the question you linked is slightly different because it isn't a polynomial (unless you count the infinite Taylor series). In any case the first answer explained it nicely. https://math.stackexchange.com/q/2024377
If you have an equation $f(x) = g(x)$ then
$$
f(x)^2 = g(x)^2 \implies f(x)^2 - g(x)^2 = 0\implies (f(x) - g(x))(f(x)+g(x))=0$$
so that you have the solution set of $f(x)=g(x)$ (as intended) but also you've obtained solutions for $f(x)=-g(x)$. As you said though, the original solution set is clearly contained in the new equation
edit: if you use other powers then you still have the $f(x)-g(x)$ term in the complete factorisation, same conclusion. I think it's just a natural consequence of your theorem (and would therefore explain it as such), not aware of any other theorem stating that as a result. |
Minimum number of locks and keys | Any group of two people should not be able to open the safe; this means that for each of the $\binom 52$ groups of two, there must be at least one key they don't own. The missing key must be different for any two groups of two (if two different pairs lacked the key to the same lock, their union would contain three people who could not open that lock). The minimal solution is then $\binom 52=10$. For each of the 10 groups of two, there is a key they don't possess. |
Injectivity of a function $\frac{x}{1+|x|}$ | You obtained that $|x|=|y|$; replacing this in the first equality, you get
$$\frac{x}{1+|x|} =\frac{y}{1+|y|} \Longleftrightarrow\frac{x}{1+|x|} =\frac{y}{1+|x|}\Longleftrightarrow x=y$$ |
A term of $f(x)$ which has a root being a generator of an extension over a finite field | Well, consider the field $GF(16)$. It corresponds to the quotient field ${\Bbb Z}_2[x]/\langle f(x)\rangle$, where $f(x)$ is an irreducible polynomial of degree $4$ over ${\Bbb Z}_2$. These polynomials are $x^4+x+1$, $x^4+x^3+1$ and $x^4+x^3+x^2+x+1$. The first two are primitive and conjugate to each other, while the latter is not primitive. Indeed, it divides $x^5-1$, and so each of its roots is a 5th root of unity.
What you refer to is the notion of primitive polynomial, where each root generates the full cyclic group of the field. |
If $a_1a_2 = 1, a_2a_3 = 2, a_3a_4 = 3 \cdots$ and $ \lim_{n \to \infty} \frac{a_n}{a_{n+1}} = 1$, find $|a_1|$ | Note that
$$
\frac{a_3}{a_1} = \frac{a_2a_3}{a_1a_2} = \frac21,\quad
\frac{a_5}{a_3} = \frac{a_4a_5}{a_3a_4} = \frac43,\quad
\frac{a_7}{a_5} = \frac{a_6a_7}{a_5a_6} = \frac65,\quad\dots
$$
and therefore
$$
\frac{a_{2k+1}}{a_1} = \frac{a_3}{a_1} \frac{a_5}{a_3}\cdots \frac{a_{2k+1}}{a_{2k-1}} = \frac21 \frac43 \cdots \frac{2k}{2k-1} = \frac{4^k(k!)^2}{(2k)!}.
$$
Similarly,
$$
\frac{a_{2k+2}}{a_2} = \frac{a_4}{a_2} \frac{a_6}{a_4} \cdots \frac{a_{2k+2}}{a_{2k}} = \frac32 \frac54 \cdots \frac{2k+1}{2k} = \frac{(2k+1)!}{4^k(k!)^2}.
$$
Therefore
$$
1 = \lim_{k\to\infty} \frac{a_{2k+1}}{a_{2k+2}} = \lim_{k\to\infty} \frac{4^k(k!)^2 a_1/(2k)!}{(2k+1)!a_2/4^k(k!)^2} = \lim_{k\to\infty} \frac{a_1}{a_2} \frac{16^k(k!)^4}{(2k)!(2k+1)!} = \frac{a_1}{a_2} \frac\pi2
$$
(using Stirling's formula),
which forces $a_1/a_2 = 2/\pi$; together with $a_1a_2=1$ this yields $a_1=\pm\sqrt{2/\pi}$. (One can reality check $\lim_{k\to\infty} \frac{a_{2k}}{a_{2k+1}}$ as well if desired.) |
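One can check this numerically: the recursion $a_{n+1}=n/a_n$ started at $a_1=\sqrt{2/\pi}$ produces ratios $a_n/a_{n+1}$ that creep up to $1$, while other starting values make the even- and odd-indexed ratios approach two different limits (a small Python sketch):

```python
import math

a = [math.sqrt(2 / math.pi)]   # a_1
for n in range(1, 60):
    a.append(n / a[-1])        # enforce a_n * a_{n+1} = n
print(a[9] / a[10], a[29] / a[30], a[49] / a[50])   # ratios approach 1
```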
Prove: $\theta(n^2)+O(n^3)\subset O(n^3)$ | Is it safe to assume that $n^2\in\theta(n^2)$ and $n^3\in O(n^3)$.
Thus, $f(n) = n^3 + n^2 $
is as valid as
It is safe to assume that the murderer lives in California. You live in California, therefore you are the murderer.
I hope that if you served on a jury and heard the above argument, you'd find a flaw in it.
Anyway, suppose $f\in \theta(n^2)$ and $g\in O(n^3)$. The proof can proceed in two steps.
Show that $f \in O(n^3)$
From $f\in O(n^3)$ and $g\in O(n^3)$ obtain that $f+g\in O(n^3)$
You don't need to know any formula for $f$ or $g$ to run the above argument. |
$A$-linear maps between $A$-modules where $A$ is a $K$-algebra and $K$ is a commutative ring | Note first that $M$ is a $K$-module by the action
$$k.m:= (k1_A)m$$
Similarly $N$ is a $K$-module.
Let $\alpha: M \to N$ be an $A$-module homomorphism. Let $k \in K, m \in M$
$$\alpha(k.m)=\alpha((k1_A)m) = (k1_A)\alpha(m) = k. \alpha(m)$$
so $\alpha$ is $K$-linear.
In short, this comes down to saying that $A$ is a $K$-algebra implies $K \subseteq Z(A) \subseteq A$ so a map that is $A$-linear is automatically also $K$-linear. |
How many even numbers of four distinct digits greater than 5000 are possible | There might be a prettier solution to this problem, but I would use some variation of a decision tree:
The layers are the digits chosen left to right, starting with the thousands digit, and the beige and blue circles are the odd and even options respectively. Along each branch the counts are multiplied, and the different (mutually exclusive) alternatives are summed to give 1288.
edit
fleablood's answer gives a far better order of choosing the digits: first, then last, then others. A diagram for this might be: |
Derive $\tan(3x)$ in terms of $\tan(x)$ using De Moivre's theorem | Don't “normalize” the triplication formulas to only sines and cosines.
We have
$$
\cos3x+i\sin3x=\cos^3x+3i\cos^2x\sin x-3\cos x\sin^2x-i\sin^3x
$$
so
\begin{align}
\sin3x&=3\cos^2x\sin x-\sin^3x \\[6px]
\cos3x&=\cos^3x-3\cos x\sin^2x \\[12px]
\tan3x&=\frac{3\cos^2x\sin x-\sin^3x}{\cos^3x-3\cos x\sin^2x}
\end{align}
Now divide numerator and denominator by $\cos^3x$. |
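Carrying out that division gives $\tan 3x=\dfrac{3\tan x-\tan^3x}{1-3\tan^2x}$, which a quick numerical check confirms:

```python
import math

x = 0.3
t = math.tan(x)
print((3 * t - t ** 3) / (1 - 3 * t ** 2))  # 1.26015...
print(math.tan(3 * x))                      # 1.26015...
```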
The Mystery of the Número Cabalístico | The theory is in your last line, $142857 \cdot 7 = 999999$, which means that $\frac 17 = 0.\overline{142857}$ Any prime which has full period will do the same. The next is $\frac {1}{17}=0.\overline{0588235294117647}$ which you can multiply by any number from $2$ through $16$ and get a cyclic shift. |
Maximum winner matches | I'm going to assume here that the maximum number of matches played by the winner is monotonic in the total number of players - I haven't thought about a proof, but I believe it. Let $f(n)$ be the minimum number of players necessary so that the maximum number of matches is $n$. So $f(0) = 1$, $f(1) = 2$, $f(2)=3$, $f(3) = 5$. This suggests that $f(n)$ is Fibonacci.
To prove this, assume that players 1 and 2 play in the last match of the tournament. Then they haven't played each other before that point, so essentially, 1 and 2 have played two entirely separate tournaments to get there. If player 1 wins and has played $n$ games once the final is included, the most efficient way to arrange that (by monotonicity) is for player 1 to have played $n-1$ games and player 2 to have played $n-2$ games going into the final game. The minimum number of players needed to make that happen is $f(n-1) + f(n-2)$, which is thus equal to $f(n)$.
So, if we count 1 as the 0th Fibonacci number, 2 as the 1st, etc., then the answer to your original question comes from counting those numbers. Round $n$ down to the largest Fibonacci number not exceeding it. Whatever the index of that Fibonacci number is, that's your answer. So, for example, 100 would round down to 89, which is the 9th Fibonacci number, giving your answer above. |
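In code, "the index of the largest Fibonacci number not exceeding $n$" is a few lines (a sketch with my own function name, counting 1 as the 0th and 2 as the 1st Fibonacci number, as above):

```python
def max_winner_matches(players):
    """Largest m with f(m) <= players, where f(0) = 1, f(1) = 2,
    f(m) = f(m-1) + f(m-2)."""
    a, b, m = 1, 2, 0
    while b <= players:
        a, b = b, a + b
        m += 1
    return m

print(max_winner_matches(100))  # 9, since 89 <= 100 < 144
```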
Suppose $V$ is a vector space over field $F$, $\operatorname{char}(F)\neq 2$. Show that $V=V^+ \oplus V^-$, details below | use $v=\frac{v+T(v)}{2}+\frac{v-T(v)}{2}$ since $T(\frac{v+T(v)}{2})=\frac{v+T(v)}{2}$ and $T(\frac{v-T(v)}{2})=(\frac{T(v)-T(T(v))}{2})=\frac{T(v)-v}{2}=\color{red}- \frac{v-T(v)}{2}$
And if $v_0\in V^+ \cap V^-$ we have $T(v_0)=v_0=-v_0$, which means $2v_0=0$; since $\operatorname{char}(F)\neq 2$, this forces $v_0=0$, so the sum is direct. |
Reducing a 2nd order ODE to a system of 1st order ODEs: chain rule issue | Here, $a(f(x))$ is just a composite function. The chain rule for composite functions is
$$ \frac{d}{dx} \Big(a \big( f(x) \big) \Big) = \frac{da}{df}\big(f(x)\big)\cdot \frac{df}{dx}(x)= a'(f)\cdot f'(x). $$
Necessarily,
$$g(x) = - (a(f))' = -a'(f)\cdot f'(x).$$ |
Weyl group of $\mathfrak{sl}(2,\mathbb{C})$ | The centralizer $Z(\mathfrak{t})$ is a subgroup of the normalizer $N(\mathfrak{t})$.
Note that
$$Z(\mathfrak{t})=\left\{\begin{pmatrix} a&\\&a^{-1}\end{pmatrix} : a\in\mathbb{R}^\times_{+}\right\}
$$
and
$$N(\mathfrak{t})=\left\{\begin{pmatrix} a&\\&a^{-1}\end{pmatrix},\ \begin{pmatrix}&-b\\b&\end{pmatrix} : a,b\in\mathbb{R}^\times_{+}\right\}.
$$
Thus obviously
$$N(\mathfrak{t})/Z(\mathfrak {t})=\left\{\begin{pmatrix} 1&\\&1\end{pmatrix}Z(\mathfrak{t}),\quad\begin{pmatrix} &-1\\1&\end{pmatrix}Z(\mathfrak{t})\right\}$$
contains only two elements.
Note that $Z(\mathfrak{t})$ acts on $A$ trivially. Thus one can choose the following two elements
$$
I=\begin{pmatrix} 1&\\&1\end{pmatrix},\quad\omega=\begin{pmatrix} &-1\\1&\end{pmatrix}
$$
in the cosets, respectively.
The Adjoint action of $I$ on elements $\begin{pmatrix} ia_1\\&ia_2\end{pmatrix}$ gives
$$
\mathrm{Ad} (I)\begin{pmatrix} ia_1\\&ia_2\end{pmatrix}=\begin{pmatrix} ia_1\\&ia_2\end{pmatrix},
$$
and
$$
\mathrm{Ad} (\omega)\begin{pmatrix} ia_1\\&ia_2\end{pmatrix}=\omega\begin{pmatrix} ia_1\\&ia_2\end{pmatrix}\omega^{-1}=\begin{pmatrix} ia_2\\&ia_1\end{pmatrix}.
$$ |