Bounds on $\lim_{m\to\infty}\sum_{s=x}^m\frac{s!}{s^n(s-x)!}$? | We have $$\frac{s!}{(s-x)!} < s^x,$$ so the infinite sum is bounded by
$$\sum_{s=x}^\infty s^{x-n}=\zeta (n-x,x),$$ where the zeta function is the Hurwitz zeta. |
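A quick numerical sanity check of this bound, as a Python sketch (the choice $x=3$, $n=6$ is arbitrary, with $n-x>1$ so both sides converge; mpmath's zeta with two arguments is the Hurwitz zeta):

from mpmath import zeta

x, n = 3, 6

def falling(s, x):                  # s!/(s-x)! = s(s-1)...(s-x+1)
    out = 1
    for i in range(x):
        out *= s - i
    return out

partial = sum(falling(s, x) / s**n for s in range(x, 200000))
bound = float(zeta(n - x, x))       # Hurwitz zeta(n-x, x) = sum_{s>=x} s^(x-n)
print(partial, bound, partial <= bound)   # ..., ..., True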
Spectral theory for compact normal operators. | The statements are immediate consequences of what is known as the spectral theorem in its compact, normal version. Conway's book is the place to look for this theorem.
Theorem (spectral theorem, normal, compact version)
Let $T$ be a compact, normal operator in $\mathbb{B}(H)$. Then $T$ has at most countably many distinct eigenvalues $\{\lambda_n\}$ and if they are countably many then $\lambda_n\to0$. If $P_n$ denotes the projection onto the eigenspace $\ker(T-\lambda_n I)$, then the projections $\{P_n\}$ are pairwise orthogonal and
$$T=\sum_n\lambda_nP_n$$
in the sense that
$$\|T-\sum_{k=1}^n\lambda_kP_k\|_{\mathbb{B}(H)}\xrightarrow[n\to\infty]{}0. $$
The claims follow directly from this theorem. (1) follows trivially and for (2) note that $T-T_n=\sum_{k\geq n+1}\lambda_kP_k$, so $T-T_n$ is a compact, normal operator and its only eigenvalues are $\{\lambda_k\}_{k=n+1}^\infty$, thus $\sigma(T-T_n)=\{\lambda_k\}_{k=n+1}^\infty\cup\{0\}$. |
Continuous function taking each of its values twice | Examples of functions which take each of their values three times are
$$f(x)=\cot\left(\frac{3\pi}{1+\exp(-x)}\right)$$
as well as
$$f(x)=2\left\lfloor\frac{x}{3\pi}\right\rfloor-\cos\left(3\pi\left\{\frac{x}{3\pi}\right\}\right).$$ |
Least Square Estimators of a Linear Regression Model | To start, you differentiate $S=\sum_{i=1}^n \left(y_i-a_0-a_1x_i-a_1\overline x \right)^2$ w.r.t. $a_1$.
$\frac{\partial S}{\partial a_1}=2\cdot \sum_{i=1}^n \left(y_i-a_0-a_1x_i-a_1\overline x \right)\cdot (x_i-\overline x)=0$
Multiplying out the brackets and dropping the factor $2$:
$\sum_{i=1}^n x_iy_i-\overline x \sum_{i=1}^n y_i-a_0\sum_{i=1}^n x_i+a_0\overline x \sum_{i=1}^n 1-a_1\sum_{i=1}^n x_i^2 + \overline x\cdot a_1 \sum_{i=1}^n x_i-a_1 \overline x \sum_{i=1}^n x_i+a_1 \overline x ^2 \sum_{i=1}^n 1=0$
Isolating $a_1$ and using $\sum_{i=1}^n x_i=n\cdot \overline x$ and $\sum_{i=1}^n y_i=n\cdot \overline y$:
$a_1 \cdot \left[ \overline x \sum_{i=1}^n x_i+n\cdot \overline x ^2- \overline x \cdot n \cdot \overline x- \sum_{i=1}^n x_i ^2 \right]+\sum_{i=1}^n y_ix_i-\overline x \sum_{i=1}^n y_i=0$
Now solve for $a_1$. $a_0$ has disappeared.
Differentiating $S$ w.r.t. $a_0$:
$\frac{\partial S}{\partial a_0}=2\cdot \sum_{i=1}^n \left(y_i-a_0-a_1x_i-a_1\overline x \right)\cdot (-1)=0$ |
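As a numerical sanity check of these estimators, here is a small Python sketch (the data are made up; it confirms that the slope from the isolated equation, together with the intercept from $\partial S/\partial a_0=0$, minimizes $S$):

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2.0 + 0.7 * x + rng.normal(scale=0.1, size=50)
xbar, ybar = x.mean(), y.mean()

# slope from the isolated equation, intercept from dS/da0 = 0
a1 = (np.sum(x * y) - xbar * np.sum(y)) / (np.sum(x**2) - xbar * np.sum(x))
a0 = ybar - 2 * a1 * xbar

S = lambda p: np.sum((y - p[0] - p[1] * x - p[1] * xbar) ** 2)
print(np.allclose([a0, a1], minimize(S, x0=[0.0, 0.0]).x, atol=1e-4))  # True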
Are linear operator just projections in projective geometry? | This was too long for a comment.
Let's consider a simple example. A generic $2 \times 2$ matrix $A$ has $2$ distinct eigenvalues, so let's say $Av=\lambda v$ and $Aw=\sigma w$ with $\lambda \ne \sigma$, hence $v \ne w$. That means that $A$ dilates the eigenlines, that is, $A$ scales the span of $v$ by $\lambda$ and it scales the span of $w$ by $\sigma$. In projective space $\mathbb{P}^1$, these lines are now two fixed points. If $|\lambda|>|\sigma|$ then the span of $v$ is an attracting fixed point, while if $|\lambda|<|\sigma|$ the span of $v$ is a repelling fixed point.
In general the dynamics are a bit more subtle, because the matrix may not be diagonalizable and because the eigendirections sort of stratify. The largest eigenvalue corresponds to an attracting point (or region), while the next largest becomes a saddle point (or region). |
Why is Implicit Runge-Kutta implicit? | By IVP you surely mean Initial Value Problems?
"Implicit" in general means that to get to the desired value you have to solve a , in general non-linear, system of equations instead of just evaluating a function, which would give you an explicit method to get at the desired value. |
How do I compute the rank of this matrix involving trigonometric functions? | I think it is easier to compute the kernel of the matrix. So, let $v_1 = \begin{pmatrix} -\sin u + \sin(u+v) \\ \cos u - \cos(u+v) \\ 1 \end{pmatrix}$ and $v_2 = \begin{pmatrix} \sin(u+v) \\ -\cos(u+v) \\ 0 \end{pmatrix}$ and consider the equation
$$\lambda_1v_1 + \lambda_2v_2 = 0.$$
The matrix has maximal rank (rank $2$ in this case) if and only if this equation only has the trivial solution $\lambda_1 = \lambda_2 = 0$. From the third row we get
$$\lambda_1 + 0 = 0,$$
hence $\lambda_1 = 0$ and $\lambda_2v_2 = 0$. As $v_2$ is never $0$, this implies that $\lambda_2 = 0$ as well. Thus your matrix always has rank $2$. |
convert DNF to CNF for SAT solver | Okay, so what you actually want is to convert the given formula to an equi-satisfiable CNF, not to solve it, nor to simply convert it to an equivalent CNF.
Just create new boolean variables $p_{1..5}$.
Note that $a_k \land b_k \to p_k \equiv \neg a_k \lor \neg b_k \lor p_k$ and $p_k \to a_k \land b_k \equiv ( \neg p_k \lor a_k ) \land ( \neg p_k \lor b_k )$.
Thus by appending these two CNFs for each $k$ you get $p_k \equiv a_k \land b_k$ in any satisfying assignment.
After that the original clause can be replaced by $p_1 \lor \cdots \lor p_5$. |
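A minimal Python sketch of this construction (my own variable numbering: $a_k,b_k$ are variables $1..10$, the auxiliaries $p_k$ start at $11$; positive/negative integers encode literals, DIMACS-style):

def dnf_to_cnf(pairs, first_aux):
    cnf, aux = [], []
    for k, (a, b) in enumerate(pairs):
        p = first_aux + k
        aux.append(p)
        cnf.append([-a, -b, p])     # (a_k AND b_k) -> p_k
        cnf.append([-p, a])         # p_k -> a_k
        cnf.append([-p, b])         # p_k -> b_k
    cnf.append(aux)                 # p_1 OR ... OR p_5
    return cnf

print(dnf_to_cnf([(1, 2), (3, 4), (5, 6), (7, 8), (9, 10)], 11))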
An identity about Vandermonde determinant. | The difference of the RHS and the LHS is a degree $\binom{k}{2}+1$ polynomial in $x_1, x_2, \ldots, x_k, t$.
First, note that if $x_i=x_j$ for any $i, j$, then the equation is true. Indeed, the RHS would be 0, and all terms on the LHS would be 0 other than $x_i\Delta(x_1, x_2, \ldots, x_i+t, \ldots, x_k)$ and $x_j\Delta(x_1, x_2, \ldots, x_j+t, \ldots, x_k)$, which cancel out (all factors not involving $x_i$ and $x_j$ are the same; factors involving only one can be paired up as $(x_i+t-x_k)(x_k-x_j)$ which is the same as $(x_i-x_k)(x_k-(x_j+t))$(recall $x_i=x_j$), and the only factors left are $x_i+t-x_j$ and $x_i-(x_j+t)$, causing one to be the opposite of the other).
Therefore, $x_i-x_j$ is a factor of the difference for any $i, j$, giving $\binom{k}{2}$ factors.
When $t=0$, the equation is true by the distributive law. So $t$ is also a factor of the difference of the LHS and the RHS.
Therefore, the difference LHS-RHS must be of the form $Ct\prod_{i<j}{(x_i-x_j)}$ for some constant $C$. To prove that $C=0$, it suffices to show that the equation is true for some values for $x_1,x_2,\ldots,x_k,t$ where no two $x_i$ are equal and $t\neq0$.
I will choose $x_i=i$ and $t=-1$.
$$\textrm{LHS}=\sum_{i=1}^{k}i\Delta(1,\ldots, i-1,i-1,i+1,\ldots,k),$$
and all terms are 0 except the first, which is $\Delta(0,2,\ldots,k).$
$$\textrm{RHS}=\left(1+2+\ldots+k+\binom{k}{2}(-1)\right)\cdot\Delta(1,2,\ldots,k)=k\Delta(1,2,\ldots,k).$$
So, the goal is to show
$$\Delta(0,2,\ldots,k)=k\Delta(1,2,\ldots,k).$$
To do this, note that all the factors in the $\Delta$ part not involving the first term are the same in both. For the LHS, the factors involving the first term are $-2,-3,\ldots,-k$, and for the RHS, the factors involving the first term are $-1,-2,\ldots,-(k-1)$, so we are done. |
Find points of relative extrema, the intervals of increase & decrease for $k(x)=x^4+2x^2-4$ | Essentially, what you do with "old calculus" is this: notice that $f'(x)$ is continuous. By the Intermediate Value Theorem, it can only change signs if it goes through $0$; since the only place it is equal to $0$ is at $0$, it always has the same sign on $(-\infty,0)$, and it always has the same sign on $(0,\infty)$. Evaluating at $-1$ tells you the sign on $(-\infty,0)$, evaluating at $1$ tells you the sign on $(0,\infty)$.
You can now verify that if you pick $c=0$ and $d=1$ (or $d$ any positive value), you will satisfy the condition. The point $c$ must be the critical point; the value $d$ is just any positive value small enough that $(c-d,c+d)$ does not get you "out of" the interval $I$, and where the derivative does not change signs except at $c$. Here, your derivative only changes signs in one place and your interval is the entire real line, so both of these "requirements" become vacuous: any $d\gt 0$ will work. |
Choosing interpolation points | If you are expected to choose interpolation points only, but not the weights, then for this problem the hint would be: Chebyshev interpolation (or Tschebyscheff, depending on whom you ask).
The Chebyshev polynomial $T_n$ of degree $n$ has the property that it has minimal $L_\infty$-norm among all polynomials of degree $n$ on $[-1,1]$ with leading coefficient $a_n=2^{n-1}$.
The roots $x_0,\dots,x_n$ of $T_{n+1}$ minimize $\max_{x\in[-1,1]}|\omega_{n+1}(x)|$, where $\omega_{n+1}(x)=\prod_{i=0}^n (x-x_i)$. |
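For reference, a short Python sketch computing these nodes from the closed form for the roots of $T_{n+1}$ (degree $n=5$ is an arbitrary choice):

import numpy as np

n = 5
k = np.arange(n + 1)
nodes = np.cos((2 * k + 1) * np.pi / (2 * (n + 1)))    # roots of T_{n+1}
print(np.sort(nodes))
# sanity check: T_{n+1}(x) = cos((n+1) arccos x) vanishes at the nodes
print(np.allclose(np.cos((n + 1) * np.arccos(nodes)), 0))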
Motivation behind Ito integral | There is no problem integrating a Brownian motion pathwise when it appears as the integrand (against, say, $ds$). This is because Brownian motion has, almost surely, continuous sample paths.
In the Itô integral the Brownian motion is instead used as the integrator, i.e. the integral has the form $\int_{[0,T]} f(s,\omega) dW(s,\omega)$. One can show that if every continuous function is Riemann–Stieltjes integrable with respect to some integrator $g$, then $g$ has to be of bounded variation. But it is a well known fact that the sample paths of Brownian motion do not have bounded variation. |
Proving sets to be (not) recursive or r.e. | The first set is r.e., but not recursive. It isn't recursive by Rice's theorem (one can also reduce from the halting problem by letting $\phi_{f(e)}(n) = \phi_e(e)$; then $f(e)$ is in your set if and only if $e \in H$). To see that it is r.e., dovetail all computations $\phi_i(n)$ together and enumerate $i$ whenever you see $\phi_i(n)\downarrow$ and $\phi_i(n+1)\downarrow$ for some $n$.
The second set is not r.e. (and not recursive). It is not recursive by Rice's theorem (one can also use the same reduction as above, and obtain that $f(e)$ is in your set if and only if $e \not\in H$). On the other hand, the set is co-r.e., as one can enumerate the complement by dovetailing all computations $\phi_i(n)$ together and enumerating $i$ if we see both $\phi_i(n)\downarrow$ and $\phi_a(n)\downarrow$ for some $n$. Since any set which is both r.e. and co-r.e. is recursive, the set cannot be r.e..
The third set is all of $\omega$ since every set is disjoint from the empty set, so it is recursive. |
Implementing 1D Discrete Wavelet Transform in Matlab | The problem lies with the convolution. Standard Matlab conv function will use full convolution. So the length will always be dependent on the longest of the 2 arguments. This can be resolved by using the 'same' parameter for convolution.
'same': Central part of the convolution of the same size as u.
So the correct code is:
function R=myDWT(sig, count)
[Lo_D, Hi_D] = wfilter_bior44();
input = sig;
while(count ~= 0) % While count not equal to 0
% Pass through filters by using convolution
Ca = conv(input, Lo_D, 'same');
Cd = conv(input, Hi_D, 'same');
% Downsample by 2
Ca = downsample(Ca, 2);
Cd = downsample(Cd, 2);
% TODO: Save Ca and Cd somewhere
count = count - 1;
input = Ca;
end
R = input;
end |
Computing Legendre symbol value using quadratic reciprocity | The Question seems to be wrong
$$\left(\frac{27}{31}\right)=\left(\frac{-4}{31}\right)=\left(\frac{2^2}{31}\right)\left(\frac{-1}{31}\right)=\left(\frac{-1}{31}\right)\text{ as } (\pm2)^2\equiv 4\pmod {31}$$
Again, we know, $\left(\frac{-1}p\right)=1\iff $ prime $p\equiv1\pmod 4$ |
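These symbols can be double-checked by Euler's criterion $\left(\frac ap\right)\equiv a^{(p-1)/2}\pmod p$; a small Python sketch:

def legendre(a, p):
    r = pow(a, (p - 1) // 2, p)     # Euler's criterion
    return -1 if r == p - 1 else r

print(legendre(27, 31), legendre(-1, 31))   # both -1, since 31 ≡ 3 (mod 4)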
Blocks and simple modules | Restrict $S$ to $D$. What you get is semisimple by Clifford ($D$ is normal in $N_G(D)$). Now, $D$ is a $p$-group, hence its only simple module (in characteristic $p$) is the trivial one. Thus $D$ acts trivially on $S$, which is all you need. |
A ring which contains a nonprime maximal ideal | Suppose that $a=ra^2+na^2$. Then, $a(ra+na)=a$, so $ae=a$. Then, for any $x\in R$, $x=ta+ma$ for some $t\in R$, $m\in \mathbb{Z}$. So $xe= (ta+ma)e= tae+mae= ta+ma=x$. Therefore, $e$ is the identity element.
To prove that $M$ is not a prime ideal, it is enough to check that there are two ideals $A$, $B$ with $AB\subseteq M$, and $A$ and $B$ are not contained in $M$. By taking $A=B=(a)$, $AB=(a)(a)\subseteq (a^2)\subseteq M$, but $(a)=R$ is not contained in $M$ because it is a maximal ideal, so by definition $M\neq R$. |
Find all the values of $y$ so that $\min\limits_{[1, 2]}\left | x^{3}- 3x+ y \right |= 6$ . | Let $f(x)=x^3-3x$. Note that $f'(x)=3x^2-3$ is positive on the interval $x\in[1,2]$, so $f(x)$ is monotonically increasing. Thus, the minimum value of $|f(x)+y|$ either occurs at $x=1$, or at $x=2$ (because if it were somewhere in the middle, then the minimum value has to be zero, but the question needs it to be $6$). So we are seeking solutions to either
$$|f(1)+y|=|y-2|=6$$
or to
$$|f(2)+y|=|y+2|=6.$$
Clearly, the solutions are $y=\pm4,\pm8$.
Now we need to check that these values actually work, which I'll leave to you. You should come to the conclusion that, out of these four values, only $y=\pm8$ are solutions. |
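The final check can also be done numerically; a Python sketch over a fine grid of $[1,2]$:

import numpy as np

x = np.linspace(1, 2, 100001)
f = x**3 - 3*x
for y in (4, -4, 8, -8):
    print(y, np.abs(f + y).min())   # only y = ±8 give a minimum of 6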
What are the odds of rolling a 1 and a 6 in four dice throws? | There are $6^4$ rolls, of which $5^4$ have no $1$'s and $5^4$ have no $6$'s. However, we've counted the rolls with neither $1$'s nor $6$'s twice. The probability of both a $1$ and a $6$ is $$\frac{6^4-2\cdot5^4+4^4}{6^4}=\frac{151}{648}\approx.23302469$$ |
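A brute-force check of this count over all $6^4$ rolls (a Python sketch):

from itertools import product
from fractions import Fraction

rolls = list(product(range(1, 7), repeat=4))
hits = sum(1 for r in rolls if 1 in r and 6 in r)
print(Fraction(hits, len(rolls)))   # 151/648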
deduce that $\frac{S_4}{V_4}$ is isomorphic to $S_3$ | Suppose
$$ h_{1} V_{4} = h_{2} V_{4} ,$$
with $h_{1}, h_{2} \in H$.
Then
$h_{2} = h_{1} v$ for some $v \in V_{4}$. Therefore
$$
4 = 4 h_{2} = 4 h_{1} v = 4 v.
$$
But the only element $v \in V_{4}$ that fixes $4$ is $v = e$, so $h_{2} = h_{1} e = h_{1}$.
Now you know that there are $6$ distinct cosets $h V_{4}$, for $h \in H$. Since $\lvert S_{4} : V_{4} \rvert = 6$, all cosets of $V_{4}$ are of this form, and thus $S_{4} = V_{4} H$. It follows (or you can check) that $V_{4} \cap H = \{ e \}$. Therefore
$$
\frac{S_{4}}{V_{4}} = \frac{V_{4} H}{V_{4}} \cong \frac{H}{V_{4} \cap H} = \frac{H}{\{ e \}} \cong H.
$$
Here I have used the second isomorphism theorem. |
Determine all the positive integers $n\geq 3$, such that $1+{n\choose 1}+{n\choose 2}+{n\choose 3}\vert 2^{2000}$ | HINT:
$$1+\binom n1+\binom n2+\binom n3=n+1+\dfrac{(n+1)n(n-1)}6=\dfrac{(n+1)(n^2-n+6)}6$$
Now $n^2-n+6=n(n+1)-2(n+1)+8$
$\implies(n+1,n^2-n+6)\mid8$
So $n+1$ and $n^2-n+6$ share no odd factor. Since every divisor of $2^{2000}$ is a power of $2$, the condition forces $(n+1)(n^2-n+6)=6\cdot2^k=3\cdot2^{k+1}$ for some $k\ge0$
$\implies$ each of $n+1$ and $n^2-n+6$ must be of the form $2^a$ or $3\cdot2^a$, with the factor $3$ appearing in exactly one of them |
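A brute-force search supporting the hint (a Python sketch; it should report the known solutions $[3, 7, 23]$ in the range searched):

from math import comb

target = 2 ** 2000
print([n for n in range(3, 10**5)
       if target % (1 + comb(n, 1) + comb(n, 2) + comb(n, 3)) == 0])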
Number theory proof from AoPS | Let's define a $k$-digit number $n$ to have the decimal (base-10) digits $d_0, d_1, \ldots d_{k-1}$. We can then write $n$ as:
$$n = 10^0d_0 + 10^1d_1 + \cdots + 10^{k-1}d_{k-1}$$
Note that we can re-arrange this as:
$$\begin{align}
n &= d_0 + 10^1d_1 + \cdots + 10^{k-1}d_{k-1}\\
&= d_0 + (9d_1+d_1) + \cdots + (10^{k-1}-1)d_{k-1} + d_{k-1}\\
&= (d_0 + d_1 + \cdots + d_{k-1}) + \underbrace{(9d_1 + 99d_2 + \cdots + (10^{k-1}-1)d_{k-1})}_{\text{all this is a multiple of 9}}
\end{align}$$
Because anything that is a multiple of $9$ is congruent to $0 \pmod{9}$, we have:
$$\begin{align}
n &\equiv (d_0 + d_1 + \cdots + d_{k-1}) + 0 &\pmod{9}\\
&\equiv d_0 + d_1 + \cdots + d_{k-1} &\pmod{9}
\end{align}$$
An example may help. Let's try $d_0=1, d_1=6,d_2=7,d_3=3,d_4=4$, which makes our $5$-digit number $43761$. Then, we have:
$$\begin{align}
n &= 1 + 6\cdot10 + 7\cdot10^2 + 3\cdot10^3 + 4\cdot10^4\\
&= 1 + (6 + 6\cdot9) + (7 + 7\cdot99)+(3+3\cdot999) + (4+4\cdot9999)\\
&= (1+6+7+3+4) + (6\cdot9 + 7\cdot99 +3\cdot999+4\cdot9999)\\
&\equiv (1+6+7+3+4) + 0 &\pmod{9}\\
&\equiv 21 \equiv 3 &\pmod{9}
\end{align}$$ |
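The same check in two lines of Python (a sketch):

n = 43761
print(n % 9, sum(int(d) for d in str(n)) % 9)   # both 3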
Show that $z=w$ iff $r_1=r_2$ and $\theta_1=\theta_2$. | $z=w \to |z|=|w|\to r_1=r_2$. Now, $z=w\to \arg(z)=\arg(w)\to \theta_1=\theta_2$. The converse direction is immediate: equal moduli and equal arguments give equal numbers. |
Proof explaining opposite of what is observed | You've done pretty well in your proof, which requires just one extra step.
If $\ m+n \le g-1\ $, then $$\ 2G>2g\ge 2g-n-m \ge g+1= G .$$
Thus, since $\ 2G>2g-n-m\ge G\ $ in this case, you have $\ \lfloor\frac{2g-n-m}{G}\rfloor=1\ $, $\ \lfloor\frac{n+m}{G}\rfloor=0\ $, and
$\lfloor\frac{2g-n-m}{G}\rfloor+\lfloor\frac{n+m}{G}\rfloor=1\ .$
On the other hand, if $\ m+n>g-1\ $, then (because $\ m,n < g\ $):
$$0<2g-n-m<g+1=G\ ,$$
and so $\ \lfloor\frac{2g-n-m}{G}\rfloor=0\ $, $\ \lfloor\frac{n+m}{G}\rfloor=1\ $, and
$\lfloor\frac{2g-n-m}{G}\rfloor+\lfloor\frac{n+m}{G}\rfloor=1\ .$
One thing you should be made aware of, however, is that when writing out proofs, you should avoid just writing down the equation you're trying to prove and manipulating it into something which is purportedly equivalent. The main reason why this is inadvisable is that the flow of logic is wrong:
$$\
\left[\matrix{\mbox{Proposition}\\ \mbox{to be proved}}\right]\implies P_2\implies P_3\implies \dots \implies\left[\matrix{\mbox{True}\\ \mbox{Proposition}}\right]\ ,$$
and the final proposition will be equivalent to the original only if every step in your manipulation is reversible. If that does happen to be true, then you will be able to reverse the implications in the above chain of propositions to obtain a valid proof:
$$\
\left[\matrix{\mbox{True}\\ \mbox{proposition}}\right]\implies \dots \implies P_3\implies P_2 \implies\left[\matrix{\mbox{Proposition}\\ \mbox{to be proved}}\right]\ ,$$
and this is the form which any proof you give should take.
As it turns out, your addition, $\ "➕"\ $, is essentially the same thing as addition modulo $\ b^r-1\ $:
$$ a➕c \equiv a+c \pmod{b^r-1}\ .$$
Your negation is also the same as negation modulo $\ b^r-1\ $:
$$ m'\equiv -m \pmod{b^r-1}\ .$$
So your observation that $\ n➕m= (n'➕m')'\ $ is equivalent to the equation:
$$ m+n\equiv -\left(\left(-m\right)+\left(-n\right)\right) \pmod{b^r-1}\ ,$$
which is not difficult to prove using modular arithmetic. |
Probability of a sequence of characters within a random sequence of characters | It depends on how well the string can overlap with itself. You can model this with a Markov chain, with states (Start) and the possible prefixes of your target string. Thus for ABA the states are (Start), A, AB and ABA, and the transition matrix is
$$ P = \pmatrix{25/26 & 1/26 & 0 & 0\cr
24/26 & 1/26 & 1/26 & 0\cr
25/26 & 0 & 0 & 1/26\cr
0 & 0 & 0 & 1\cr}$$
where e.g. the second row entries mean that in state A, if the next character is A you stay in state A, if it is B you go to state AB, and anything else returns you to (Start).
The probability you're looking for is the probability, starting in the (Start) state, of absorption by round $n$, which is
$ ( P^n)_{1,4}$.
The characteristic polynomial is $C(\lambda) = \left( \lambda-1 \right) \left({\lambda}^{3}-{\lambda}^{2}+\tfrac{26}{17576}\,\lambda-\tfrac{25}{17576} \right)$. If the matrix is diagonalized as $S \Lambda S^{-1}$ where $\Lambda$ is a diagonal matrix with diagonal entries $\lambda_i$ (the roots of $C(\lambda)$), then $$(P^n)_{1,4} = \sum_{i=1}^4 S_{1,i} S_{i,4} \lambda_i^n.$$ |
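Here is a Python sketch of this computation with numpy (the state order is (Start), A, AB, ABA, as above):

import numpy as np

p = 1 / 26
P = np.array([[1 - p,   p, 0, 0],
              [1 - 2*p, p, p, 0],
              [1 - p,   0, 0, p],
              [0,       0, 0, 1]])
for n in (100, 10_000, 1_000_000):
    print(n, np.linalg.matrix_power(P, n)[0, 3])   # P(ABA seen within n chars)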
Repeating digits in difference of squares of 8-digit numbers | Looking at the digits modulo $4$: since the only possible squares modulo $4$ are $0$ or $1$, the only possible differences are $0,1,$ or $3 \pmod 4$. In particular, $m^2-n^2$ is never $2 \pmod 4$. This implies that, since any number is congruent modulo $4$ to its final two digits, if the final two digits are the same repeated digit they cannot be $22$ or $66$, as these are $2 \pmod 4$.
From here, it remains to show that each of the others is possible. Indeed:
$$18518520^2-18518517^2=111111111$$
Ending in twos is not possible
$$18518523^2-18518514^2=333333333$$
$$11111112^2-11111110^2=44444444$$
$$18518526^2-18518511^2=555555555$$
Ending in sixes is not possible
$$18518529^2-18518508^2=777777777$$
$$11111113^2-11111109^2=88888888$$
$$18518532^2-18518505^2=999999999$$
$$m^2-m^2=0$$
These numbers were constructed by looking at the prime factorization $111111111=3^2\cdot37\cdot 333667=3\cdot 37037037$ and recognizing that we could express $37037037$ as the sum of two numbers very close to its half. |
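A quick Python check of the exhibited identities (a sketch):

checks = [(18518520, 18518517, 111111111), (18518523, 18518514, 333333333),
          (11111112, 11111110, 44444444),  (18518526, 18518511, 555555555),
          (18518529, 18518508, 777777777), (11111113, 11111109, 88888888),
          (18518532, 18518505, 999999999)]
print(all(m*m - n*n == d for m, n, d in checks))   # True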
If a $12\times 16$ sheet of paper is folded on its diagonal, what is the area of the region of the overlap? | The overlapping region is triangle $AEC$, with base $AC=20$ and altitude $EH$. To find $EH$, observe that $GH=BF=48/5$ and $DB'=AC-2CF=28/5$. By similitude one then gets $EH=15/2$. |
Quadratic surd continued fraction convergent ratio limit | I'll show how to tackle the first question on an example with $r=0$. For concreteness' sake, let's look at the continued fraction for $\sqrt{7}$: $\sqrt{7}=[2; \overline{1, 1, 1, 4}]$. Then the critical insight here is that if $c_{4n}$ is the $4n$'th convergent, then the $4(n+1)$th convergent is $\displaystyle c_{4(n+1)}=2+\frac1{1+}\frac1{1+}\frac1{1+}\frac1{2+c_{4n}}$. Now, we can unwind this term-by-term:
$$
\begin{align}
c_{4(n+1)} &= 2+\frac1{1+}\frac1{1+}\frac1{1+}\frac1{2+c_{4n}} \\
&= 2+\frac1{1+}\frac1{1+}\frac{2+c_{4n}}{3+c_{4n}} \\
&= 2+\frac1{1+}\frac{3+c_{4n}}{5+2c_{4n}} \\
&= 2+\frac{5+2c_{4n}}{8+3c_{4n}} \\
&= \frac{21+8c_{4n}}{8+3c_{4n}}. \\
\end{align}
$$
Finding an expression for the $(4(n+1)-1)$th convergent in terms of the $(4n-1)$th is a little bit trickier, but it's still possible; the trick is that if $c_{4i-1}$ is the $4i-1$th convergent to $\sqrt{7}$, then $d_i=1+\dfrac1{2+c_{4i-1}}$ is the $4i$th convergent to $1+\dfrac{1}{2+\sqrt{7}} = [\overline{1, 4, 1, 1}]$. Now it's all just algebra (which I will hopefully get right):
$$
\begin{align}
d_{4(n+1)} &=1+\frac1{4+}\frac1{1+}\frac1{1+}\frac1{d_{4n}} \\
&= 1+\frac1{4+}\frac1{1+}\frac{d_{4n}}{1+d_{4n}} \\
&= 1+\frac1{4+}\frac{1+d_{4n}}{1+2d_{4n}} \\
&= 1+\frac{1+2d_{4n}}{5+9d_{4n}} \\
&= \frac{6+11d_{4n}}{5+9d_{4n}}.
\end{align}
$$
And since $d_{4n}=\dfrac{3+c_{4n-1}}{2+c_{4n-1}}$, $c_{4n-1}=\dfrac{3-2d_{4n}}{-1+d_{4n}}$; so $c_{4(n+1)-1}=\dfrac{3-2\frac{6+11d_{4n}}{5+9d_{4n}}}{-1+\frac{6+11d_{4n}}{5+9d_{4n}}}$ $=\dfrac{3(5+9d_{4n})-2(6+11d_{4n})}{(-5-9d_{4n})+6+11d_{4n}}$ $=\dfrac{3+5d_{4n}}{1+2d_{4n}}$ $=\dfrac{3+5\frac{3+c_{4n-1}}{2+c_{4n-1}}}{1+2\frac{3+c_{4n-1}}{2+c_{4n-1}}}$ $=\dfrac{3(2+c_{4n-1})+5(3+c_{4n-1})}{(2+c_{4n-1})+2(3+c_{4n-1})}$ $=\dfrac{21+8c_{4n-1}}{8+3c_{4n-1}}$. Notably, this is the same formula that holds for $c_{4n}$; this should generically be the case (but I haven't proved it).
Finally, let's look at $r_{4(i+1)-1}$ (to use the original notation). Then $r_{4(i+1)-1} = \dfrac{c_{4(i+1)}-\sqrt{7}}{c_{4(i+1)-1}-\sqrt{7}}$. Let's look first at the denominator here: $c_{4(i+1)-1}-\sqrt{7}$ $=\dfrac{21+8c_{4i-1}}{8+3c_{4i-1}}-\sqrt{7}$ $= \dfrac1{8+3c_{4i-1}}\left(21+8c_{4i-1}-\sqrt{7}(8+3c_{4i-1})\right)$ $= \dfrac1{8+3c_{4i-1}}(8c_{4i-1}-8\sqrt{7}+21-3\sqrt{7}c_{4i-1})$ $=\dfrac1{8+3c_{4i-1}}(8(c_{4i-1}-\sqrt{7})+3\sqrt{7}(\sqrt{7}-c_{4i-1}))$ $=\dfrac1{8+3c_{4i-1}}(8-3\sqrt{7})(c_{4i-1}-\sqrt{7})$. Since we have a similar formula in the numerator, we get ultimately $r_{4(i+1)-1} = \dfrac{8+3c_{4i-1}}{8+3c_{4i}}r_{4i-1}$. Now, $\dfrac{8+3c_{4i-1}}{8+3c_{4i}} = 1+\dfrac{3(c_{4i-1}-c_{4i})}{8+3c_{4i}}$; but it's well-known that the difference between successive convergents $c_{4i-1}=\dfrac{a_{4i-1}}{b_{4i-1}}$ and $c_{4i}=\dfrac{a_{4i}}{b_{4i}}$ has magnitude $|c_{4i-1}-c_{4i}|=\dfrac1{b_{4i-1}b_{4i}}$ and that the denominator of convergents $b_i$ grows exponentially with $i$; this means that $r_{4(i+1)-1}=(1-O(K^{-i}))r_{4i-1}$ for some constant $K$ and guarantees convergence of the infinite product (and thus existence of the limit).
It would take a bit more knowledge of continued fractions (or a lot more digging and effort) to fill in the gaps here in the general case, but it shouldn't be too complicated: note that in the "magic formula" $c_{4n+1}=\dfrac{21+8c_{4n}}{8+3c_{4n}}$ we have $\left(\begin{smallmatrix}21\\8\end{smallmatrix}\right)=\left(\begin{smallmatrix}0&7\\1&0\end{smallmatrix}\right)\left(\begin{smallmatrix}8\\3\end{smallmatrix}\right)$ and the $\langle8, 3\rangle$ here are the components of the 'fundamental solution' $8^2-7\cdot 3^2=1$ of the Pell equation $x^2-7y^2=1$, which of course is intimately related to the continued fraction expansion of $\sqrt{7}$. I strongly suspect that a similar argument should work canonically for all quadratic surds, and that you'll find $c_{(n+1)p}=\dfrac{D\hat{b}+\hat{a}c_{np}}{\hat{a}+\hat{b}c_{np}}$ where $(\hat{a}, \hat{b})$ are the fundamental solution to the Pell equation $a^2-Db^2=1$. |
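One can at least spot-check the "magic formula" with exact rational arithmetic; a Python sketch using the expansion $\sqrt7=[2;\overline{1,1,1,4}]$:

from fractions import Fraction

def convergent(terms):              # evaluate a finite continued fraction
    val = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        val = a + 1 / val
    return val

cf = [2] + [1, 1, 1, 4] * 2
c4, c8 = convergent(cf[:5]), convergent(cf[:9])
print(c8 == (21 + 8 * c4) / (8 + 3 * c4))   # True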
Combination of Binary sequences | Following lulu's comment, a word in the symbols '0' and '11' consisting of $10-k$ symbols, $k$ of them '11', spells out a valid string of length $10$, giving
$$\sum_{k=0}^5 \binom{10-k}{k} = 89$$
valid strings. |
"WLOG" when studying Schwarzschild geodesics | You can check (explicitly) that $\theta=\pi/2$ satisfies that equation for $\theta$. Since the solution for the given initial conditions of a geodesic should be unique, it must be that $\theta=\pi/2$ over the entire geodesic. (So the geodesic will necessarily lie in a plane.)
If you have a geodesic with $\theta(0)\neq\pi/2$ or $\dot\theta(0)\neq0$, you can rotate your system of coordinates to get $\theta(0)=\pi/2$ and $\dot\theta(0)=0$, for which $\theta=\pi/2$ will then solve your equation of motion. |
criteria if given irrational number is or not transcedental | A number $x$ is irrational if there are no integers $a_0, a_1$ such that $a_1x + a_0 = 0$. That is, if there is no integer polynomial $P$ of degree 1 with $P(x)=0$.
A number $x$ is transcendental if there is no positive integer $n$ and no integers $a_0, \ldots a_n$ such that $a_nx^n + a_{n-1}x^{n-1} + \ldots + a_0 = 0$. That is, if there is no integer polynomial $P$ of any degree $n$ with $P(x)=0$.
All transcendental numbers are irrational, because we can take $n=1$. Not all irrational numbers are transcendental. Non-transcendental numbers are called algebraic. $\sqrt2$ is irrational, but not transcendental, because $(\sqrt2)^2 - 2 = 0$. (That is, $n=2, a_2 = 1, a_1 = 0, a_0=-2$.)
Nobody knows methods that work in general to show that a particular number is rational, irrational, or transcendental. (Many methods are known that work in particular cases.) $\pi$ and $e$ are known to be transcendental, but nobody knows the answer even for simple combinations of $\pi$ and $e$ such as $\pi+e$ or $\pi e$. The important constant $\gamma$ has been studied for hundreds of years, but nobody has yet proved that it is not rational.
Historically the first example of a specific number known to be transcendental was Liouville's number, which is:
$$
\sum_{i=1}^\infty {1\over 10^{i!}} = \frac1{10^{1}} + \frac1{10^2} + \frac1{10^6} + \frac1{10^{24}} +\cdots = 0.1100010000000000000000010\ldots
$$
The proof that Liouville's number is transcendental is particularly simple. If you want to see a proof that a number is transcendental, that is a good place to start. |
Is there an intuitive way to see why $\lim_{n \to \infty} \frac{e^{x/n}-1}{x/n} = 1$ | The limit in your post is zero, since $e^{x/n} \to 1$ as $n \to \infty$.
However, you are probably thinking of something like
$$\lim_{n \to \infty} \frac{e^{x/n}-1}{x/n} = 1,$$
which can be viewed roughly as "$e^{x/n} - 1 \approx x/n$" as $n \to \infty$.
Indeed, by replacing $y=x/n$ and taking $y \to 0$, the above is the same as
$$\lim_{y \to 0} \frac{e^y-1}{y} = 1.$$
This can be verified by writing out the definition of the derivative of $f(y)=e^y$ at zero: $f'(0) = 1$.
Alternatively, this can be verified by looking at the Taylor series $e^y = 1 + y + O(y^2)$ as $y \to 0$, as mentioned in the comments. |
Optimization Technique for Step function in constraints and objective function | No, it can not be done as an LP, but easily written as a mixed-integer LP through big-M modelling.
With $u_1$ defined as a binary variable, your first logic constraint can for instance be written as $Mu_1 \geq v_1 - a \geq -M(1-u_1)$, where $M$ is a sufficiently large constant (kept as small as possible in practice) such that no optimal solutions are cut off, the so-called big-M constant.
Your bilinear constraints can be handled by replacing each product $u_iv_i$ with a new variable $w_i$ and introducing the big-M constraints $-M(1-u_i) \leq w_i-v_i \leq M(1-u_i), \; -Mu_i \leq w_i \leq Mu_i$. |
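A tiny Python check that, for fixed $u_i\in\{0,1\}$, these constraints pin $w_i$ down to the product $u_iv_i$ (a sketch with $M=100$ and arbitrary test values):

M = 100.0

def feasible(u, v, w):
    return (-M*(1 - u) <= w - v <= M*(1 - u)) and (-M*u <= w <= M*u)

print(feasible(1, 7.0, 7.0), feasible(1, 7.0, 7.1))   # True False: u=1 forces w=v
print(feasible(0, 7.0, 0.0), feasible(0, 7.0, 0.1))   # True False: u=0 forces w=0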
Expected Value of Normal CDF | Interpret $\Phi$ as the CDF of another standard normal random variable $Y$, independent of $X$, and simply rewrite the problem like so:
$$E\left(\Phi\left( \frac{a-bX}{c}\right)\right)=P(Y<\frac{a-bX}{c})=P\left( Y+\frac{bX}{c}<\frac{a}{c}\right)$$
Now, since $X \sim N(0,1)$, we have $\frac{bX}{c} \sim N\left(0,\frac{b^2}{c^2}\right)$, so $Y+\frac{bX}{c} \sim N\left(0,1+\frac{b^2}{c^2}\right)$.
So,
$$ P\left( Y+\frac{bX}{c}<\frac{a}{c}\right) = P\left( \frac{Y+\frac{bX}{c}}{\sqrt{1+\frac{b^2}{c^2}}}<\frac{a}{c\sqrt{1+\frac{b^2}{c^2}}}\right)=\Phi\left( \frac{a}{c\sqrt{1+\frac{b^2}{c^2}}}\right) $$
Therefore: $E\left(\Phi\left( \frac{a-bX}{c}\right)\right)=\Phi\left( \frac{a}{c\sqrt{1+\frac{b^2}{c^2}}}\right)$ |
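A Monte Carlo check of this identity (a Python sketch; the values of $a,b,c$ are arbitrary):

import numpy as np
from scipy.stats import norm

a, b, c = 1.3, 0.8, 2.0
x = np.random.default_rng(1).normal(size=1_000_000)
lhs = norm.cdf((a - b * x) / c).mean()
rhs = norm.cdf(a / (c * np.sqrt(1 + b**2 / c**2)))
print(lhs, rhs)   # agree to a few decimals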
What is a periodic module | For the definition see, for example, the introduction here. If the definition with syzygy here is not what you want, then it may be easier to understand some properties of periodic modules. For example, every periodic module over a principal ideal ring is a direct sum of cyclic modules. |
Criteria for dense open subset of schemes | We need a Lemma first (which is exercise $ 2.5.2 $ in Qing Liu's book).
Lemma. Let $ X $ be a scheme and $ x \in X $. Then, $ \text{codim} ( \overline { \left \{ x \right \} } , X ) = \dim \mathcal{O}_{X,x} $. If $ Z $ is a closed subset of $ X $, then $ \text{ codim} ( Z, X ) = \min _ { z \in Z } \dim \mathcal{O}_{X,z} $.
Proof. If $ \left \{ x \right \} \subseteq Z_{0} \subsetneq Z_{1} \subsetneq \ldots \subsetneq Z_{n} = X $ is a sequence of irreducible closed subsets of $ X $, then, for any open $ U \subset X $ containing $ x $, we have $ \left \{ x \right \} \subseteq U \cap Z_{0} \subsetneq U \cap Z_{1} \subsetneq \ldots \subsetneq U \cap Z_{n} = U $. We may thus assume that $ U $ is the spectrum of a ring $ A $, and $ x = \mathfrak{p} $ is a prime. Then, $ \overline { \left \{ x \right \} } = V ( \mathfrak{p} ) $ and $ \text { codim } ( V( \mathfrak{p} ) , X ) = \text{ht} ( \mathfrak{p} ) = \dim A_{ \mathfrak{p} } = \dim \mathcal{O} _ { X , x } $.
Let $ Z $ be a closed subset of $ X $. Let $ C_{i} $ be the irreducible closed subsets of $ Z $. Then, by definition, $ \text{codim} ( Z, X ) = \inf _{ i } \text{ codim } ( C_{i} , X ) $. It therefore suffices to assume that $ Z $ is irreducible. Moreover, if $ U_{j} $ is an open covering of $ Z $, then $ \text{codim} ( Z, X ) = \inf _{ j } \text{codim } ( Z \cap U _{ j } , U _{ j } ) $ for similar reasons as above. We may thus also assume that $ X $ is affine and $ Z $ is an irreducible closed subset i.e. $ X = $ Spec $ A $ and $ Z = V ( \mathfrak { p } ) $ for some prime $ \mathfrak { p } $ of $ A $. Then, $ \text {codim} ( Z, X ) = \text{ht} ( \mathfrak{p} ) = \dim A_ { \mathfrak { p } } = \min _ { \mathfrak{q} \supset \mathfrak{p} } \dim A_{\mathfrak{q} } = \min _ { z \in Z } \dim \mathcal{ O } _ { X , z } $. $ \quad \quad \quad \quad $ $ \square $
We now turn to the proof of the proposition.
$ \boxed { 3) \Longleftrightarrow 2) } $. By the Lemma, we see that $ \text{codim} ( X - U , X ) \geq 1 $ iff $ U $ contains all the points of codimension $ 0 $, which are the same as the generic points.
$ \boxed { 2) \implies 1 ) } $ If $ U $ contains all generic points, then any closed subset of $ X $ that contains $ U $ must contain all the generic points, and thus, must contain all the irreducible components of $ X $, and thus must equal $ X $.
Suppose now that $ X $ is locally Noetherian.
$ \boxed { 1) \implies 2) } $ Suppose that $ \eta $ is a generic point of $ X $ outside $ U $. Then, $ U \cap \overline { \left \{ \eta \right \} } = \emptyset $. Consider the set $ S = \left \{ Z | Z \text{ irreducible component of } X \right \} \setminus \overline { \left \{ \eta \right \} } $. Let $$ C = \bigcup _ { Z \in S } Z . $$
We claim that $ C $ is closed. This will result in a contradiction since $ C \supset U $ and $ C \neq X $. Let $ V $ be an open affine subset of $ X $. Then, since $ V $ is Noetherian, the union $ \bigcup _ { Z \in S } V \cap Z $ is a finite union of closed subsets of $ V $, and is thus closed in $ V $. Thus, $ C $ is closed in $ X $. |
Equation of dot product and cross product | (1) is a special case of the Binet–Cauchy identity described HERE
(2) has the opposite sign on the right side so it's definitely wrong.
For (3), let's start with the formula for the triple product proved HERE:
$$\mathbf{a}\times(\mathbf{b}\times\mathbf{c})=\mathbf{b}(\mathbf{a}\cdot\mathbf{c})-\mathbf{c}(\mathbf{a}\cdot\mathbf{b})$$
Swap places of operands on the left and the whole expression changes sign:
$$(\mathbf{b}\times\mathbf{c})\times\mathbf{a}=\mathbf{c}(\mathbf{a}\cdot\mathbf{b})-\mathbf{b}(\mathbf{a}\cdot\mathbf{c})$$
Now replace $\mathbf{b}$ with $\mathbf{a}$, $\mathbf{c}$ with $\mathbf{b}$ and $\mathbf{a}$ with $\mathbf{c}\times\mathbf{d}$ and you get:
$$(\mathbf{a}\times\mathbf{b})\times(\mathbf{c}\times\mathbf{d})=\mathbf{b}((\mathbf{c}\times\mathbf{d})\cdot\mathbf{a})-\mathbf{a}((\mathbf{c}\times\mathbf{d})\cdot\mathbf{b})$$
...which is exactly (3). |
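A numeric spot-check of (3) with random vectors (a Python sketch):

import numpy as np

rng = np.random.default_rng(0)
a, b, c, d = rng.normal(size=(4, 3))
lhs = np.cross(np.cross(a, b), np.cross(c, d))
rhs = b * np.dot(np.cross(c, d), a) - a * np.dot(np.cross(c, d), b)
print(np.allclose(lhs, rhs))   # True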
Why/When we need the axiom schema of replacement? | That depends very much on your axiomatization of $\sf ZF$.
Specifically, the axiom of pairing is a consequence of replacement, power set, and infinity (one could use "empty set" instead of infinity, but that would be redundant, since infinity implies the empty set exists directly).
If minimality is what your heart desires, then pairing is a theorem, not an axiom, and then I don't see why things work out in the finite case either.
But let's just put this aside, and let pairing be part of our system. Indeed, then in that case you need the axiom of replacement for infinite collections.
The classic example is $V_{\omega+\omega}$. We start with $V_0=\varnothing$, and for $n$, $V_{n+1}=\mathcal P(V_n)$. When we reach $\omega$, $V_\omega=\bigcup\{V_n\mid n<\omega\}$, and so we continue again with power sets and unions.
It is not hard to check that $V_{\omega+\omega}$ actually satisfies all the axioms of $\sf ZF$ except Replacement. Including pairing, just to be clear. But now consider the function given by $f(0)=V_\omega$ and $f(n+1)=\mathcal P(f(n))=V_{\omega+n+1}$. The range of $f$ is exactly $\{V_{\omega+n}\mid n<\omega\}$, and it is easy to see that this is not an element of $V_{\omega+\omega}$.
So indeed Replacement fails, and since pairing and union hold, it fails for infinite sets. |
Complex matrices with null trace | You can read the following short, nice paper
http://www.cs.berkeley.edu/~wkahan/MathH110/trace0.pdf
Please note the gist of the paper for you is Corollary 4: any square matrix over the complex numbers is similar to a matrix all of whose diagonal elements are the same element, and of course this is all you need. |
Equality between matrix and vector norms 2 | You are almost done; it remains to remark that $|b^T x|\le \|b\|_2\cdot \|x\|_2$ by the Cauchy–Schwarz inequality, and that equality is attained for $x=\lambda b$, where $\lambda\in\Bbb R$. |
Exercise 3.4.3 in David Marker's "Model Theory" | Let me explain what the question is asking, and then give a hint as to how to approach it.
The theories in question are the theories of the structures. A theory in a language $\Sigma$ is, as you say, just a set of $\Sigma$-sentences (some authors also require that it be "deductively closed"). But each structure $\mathcal{M}$ in a language $\Sigma$ has a particular theory associated to it - its "full" theory, the set of all sentences true in the structure: $$Th(\mathcal{M})=\{\varphi: \mathcal{M}\models\varphi\}.$$ The theory of a structure is always complete (since for each structure $\mathcal{M}$ and each sentence $\varphi$ we either have $\mathcal{M}\models \varphi$ or $\mathcal{M}\models\neg\varphi$). In fact, theories of structures are exactly the complete theories: if $T$ is complete, then for each $\mathcal{M}\models T$ we have $Th(\mathcal{M})=T$.
Now let's think about the two structures in question here, $\mathcal{Z}=(\mathbb{Z}; s)$ and $\mathcal{N}=(\mathbb{N}; s)$. The first thing to note is that their theories are not too different - e.g. both $Th(\mathcal{Z})$ and $Th(\mathcal{N})$ contain the sentence $$\forall x(\neg(s(x)=x)).$$ On the other hand, there is one striking difference between the two structures: $\mathcal{Z}$ "goes to infinity in both directions" but $\mathcal{N}$ "has a beginning."
Can you go from that observation to finding a specific sentence $\varphi$ which is in $Th(\mathcal{Z})$ but not in $Th(\mathcal{N})$? HINT: what makes $0$ special, in terms of the successor operation?
You can turn that sentence into a formula which defines $\{0\}$ in $\mathcal{N}$; this formula will have a quantifier in it. To show that $\mathcal{N}$ doesn't have quantifier elimination, it will be enough to show that $\{0\}$ is not definable by a quantifier-free formula.
Finally, you'll need to show that $Th(\mathcal{Z})$ does have quantifier elimination. But once you understand what $Th(\mathcal{Z})$ is, per the above section, this should fit the pattern of the examples of quantifier elimination you've already seen. |
Proof of "$p$-groups are nilpotent" theorem in Rotman's Theory of Groups | I don't have the book here but he probably considers the action of $N_G(H)$ on the conjugates of $H$, this is very different from the action of $G$ on the conjugates of $H$: the latter has (as you noticed) exactly one orbit, but if you restrict yourself to $N_G(H)$ this orbit falls apart into a number of orbits, one of which will be $\{H\}$. |
How can I determine the type of bifurcation I have found? | The equilibrium points satisfy $x^2 + x - 1/r = 0$, with solutions
$$
x_\pm = -\frac{1}{2}\pm\sqrt{\frac{1}{4}+\frac{1}{r}}\, .
$$
The Jacobian $J(x) = r (2x + 1)$ evaluated at the equilibrium points reads
\begin{aligned}
J(x_\pm) &= \pm r\sqrt{{1}+{4}/{r}} \\ &= \pm \text{sgn}(r)\sqrt{r(r+{4})}\, ,
\end{aligned}
where $\text {sgn}$ denotes the sign function.
Therefore,
if $r>0$, $x_+$ is an unstable equilibrium while $x_-$ is asymptotically stable;
if $-4<r<0$, there is no equilibrium;
if $r<-4$, $x_-$ is an unstable equilibrium while $x_+$ is asymptotically stable.
Two bifurcations occur: one at $r=0$ and one at $r= -4$ (the original answer illustrated this with a bifurcation diagram, not reproduced here).
One can remark that these bifurcations are of saddle-node type. |
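The classification can be reproduced numerically; a Python sketch tabulating the equilibria and the sign of the Jacobian for a few sample values of $r$:

import numpy as np

def equilibria(r):
    disc = 0.25 + 1.0 / r
    return [] if disc < 0 else [-0.5 + s * np.sqrt(disc) for s in (1, -1)]

for r in (2.0, -2.0, -6.0):
    for x in equilibria(r):
        J = r * (2 * x + 1)
        print(f"r={r}: x*={x:.3f}  J={J:.3f}  " +
              ("unstable" if J > 0 else "stable"))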
Prove $\mathbb{R}^2$ is regular | The easiest way is to let $s\in(0,r)$ and use the open sets $B_s(p)$ and $\Bbb R^2\setminus\operatorname{cl}B_s(p)$: $\operatorname{cl}B_s(p)\subseteq B_r(p)$, so $\Bbb R^2\setminus\operatorname{cl}B_s(p)\supseteq\Bbb R^2\setminus B_r(p)\supseteq A$.
Your definition of $U_a$ makes it the infimum of a set of non-negative real numbers, so it is simply a real number, not a subset of $\Bbb R$. I suspect that you wanted to do something like this: let $s=\inf\{d(a,p):a\in A\}$, and consider $\bigcup_{a\in A}B_s(a)$. That gives you an open nbhd of $A$ that does not contain $p$, but it does not give you one that is disjoint from $B_r(p)$. You could, however, let $s=\frac{r}2$ and set $U=\bigcup_{a\in A}B_s(a)$; then it’s an easy exercise in the triangle inequality to show that $B_s(p)\cap U=\varnothing$. |
Is a holomorphic function analytic in a ‘real’ sense? | For the sake of simplicity, assume that $ f $ is entire. Then
\begin{align}
\forall (x,y) \in \mathbb{R}^{2}: \quad
f(x + i y)
& = \sum_{m = 0}^{\infty} a_{m} (x + i y)^{m} \\
& = \sum_{m = 0}^{\infty}
\left[ a_{m} \sum_{n = 0}^{m} \binom{m}{n} x^{n} (i y)^{m - n} \right] \\
& = \sum_{m = 0}^{\infty}
\left[ \sum_{n = 0}^{m} a_{m} i^{m - n} \binom{m}{n} x^{n} y^{m - n} \right]\\
& = \sum_{p,q = 0}^{\infty} a_{p + q} i^{q} \binom{p + q}{p} x^{p} y^{q}.
\end{align}
The transition from Line $ 3 $ to Line $ 4 $ in the derivation is justified because the sum in the last line is absolutely convergent, which in turn is true because
$$
\forall (x,y) \in \mathbb{R}^{2}: \quad
\sum_{m = 0}^{\infty} |a_{m}| (|x| + |y|)^{m} < \infty.
$$ |
Finding number with prime number of divisors? | While prime factorization doesn't exactly happen instantly for large numbers, you can determine the number of divisors of a number using the exponents from the prime factorization.
For every distinct prime in a number's prime factorization, there is an integer exponent. Add $1$ to each of these exponents, multiply the resulting numbers, and you will obtain the number of divisors that divide evenly into the original number.
Some examples are called for to demonstrate the power of this fact (of which I can't recall a rigorous proof).
Example 1: 60. The prime factorization of 60 is $2^23^15^1$. Just looking at each of the exponents, we have 2,1,1. Add one to each of them to get 3,2,2 and multiply them to get 3*2*2 = 12. This means that 60 has 12 divisors (including 1 and 60 itself). Indeed, the positive divisors of 60 are, in ascending order, 1,2,3,4,5,6,10,12,15,20,30,60.
Example 2: 17. 17 is prime, so its prime factorization is (explicitly with exponents) 17^1. Take the exponent 1, add one to it to get 2, and now you have the number of positive divisors of 17: 1,17.
Example 3: $p$, where $p$ is any prime. Needless to say, the prime factorization is $p^1$, and the positive divisors of $p$ are 1,$p$. Since the number of divisors is always 2 (which is prime), we can conclude that all primes have a prime number of divisors.
Now we can consider composites. Every natural number $n\geq 2$ has at least $1$ and $n$ in its list of divisors. For every other divisor $d$ of $n$, there is a corresponding divisor $n/d$ that keeps the number of divisors even. The only special case is when $d=\frac{n}{d} \implies d^2 = n$. But then $d$ is just the square root of $n$ (!) and will only occur once in the list of $n$'s divisors, thus producing an odd number of divisors. So the only composite numbers with an odd number of divisors are perfect squares. There is no need to check any non-square composite number because they will have an even number of at least $4$ divisors.
Computational Pro-Tip: when searching for all numbers on the interval $[a,b]$, "simply" list the primes in the interval. Then assign $m = \lfloor \sqrt{a} \rfloor$, and $n = \lceil \sqrt{b} \rceil$. Find the prime factorization for every natural number on the interval $[m,n]$, double each of the exponents in said prime factorization, add one to every exponent, multiply all of them as before and determine if this product is prime. If so, just append the square of the corresponding integer to the list that keeps track of the numbers with a prime number of positive divisors. |
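A Python sketch of the divisor-count rule and the resulting test (sympy's factorint returns the map from primes to exponents):

from sympy import factorint, isprime

def num_divisors(n):
    count = 1
    for exp in factorint(n).values():   # exponents in the factorization
        count *= exp + 1
    return count

print(num_divisors(60))                                        # 12
print([n for n in range(2, 50) if isprime(num_divisors(n))])   # primes, p^2, p^4, ...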
finding the piecewise linear function in a way that the results can be replicated or I can carry it forward to other problems | The answer to the direct question
Did I do this correctly?
is simply "yes", if what matters is the correct answer.
But it's rather longwinded. In each part of the problem you first found a second point on the graph using the given slope and then used that second point to calculate the slope you already knew.
Here is the first version of your question:
I can't find the piecewise linear function given the following set of dependent and independent variables. |
Show that $\mathbb{R} ^ {\mathbb{R}} = M \oplus L.$ | The comments suggest simple formulas, but suppose no formulas came to your mind and you wanted to somehow "brute force" this problem.
Consider an arbitrary $f \in \mathbb R^{\mathbb R}$. For each $t$, consider $f(t)=A$ and $f(-t)=B$. Let's write down everything that we need to be true:
\begin{align*}
L(t) + M(t) &= A \\
L(-t) + M(-t) &= B \\
L(t) - L(-t) &= 0 \\
M(t) + M(-t) &= 0
\end{align*}
Let's suppose for the moment that $t \ne 0$, so we can think of $L(t)$, $M(t)$, $L(-t)$, and $M(-t)$ as four independent unknown quantities that we need to solve for. So this is a system of four linear equations in four unknowns, and we can solve it.
The case $t=0$ is special: we must have $M(0)=0$ and we can just take $L(0)=f(0)$. |
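For the $t\neq0$ system above, the brute-force solve can be done symbolically; a sympy sketch (the variable names are mine):

from sympy import symbols, linsolve

Lt, Lmt, Mt, Mmt, A, B = symbols('Lt Lmt Mt Mmt A B')
print(linsolve([Lt + Mt - A, Lmt + Mmt - B, Lt - Lmt, Mt + Mmt],
               Lt, Lmt, Mt, Mmt))
# {((A+B)/2, (A+B)/2, (A-B)/2, (B-A)/2)}: the even/odd parts of f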
Which one is correct option for significant correlation meaning | By a significant correlation we mean that the observed correlation in the sample is not due to chance, at that level of significance. The observed correlation in the sample is a representative of the correlation in the population. In other words, the correlation observed in the sample can be generalized to the population. Option (c) is correct. |
Why is the Weil restriction of the multiplicative group a torus? | Recall the definition of a torus over $k$ is an algebraic group $T$ defined over $k$ so that after base-changing up to the algebraic closure $\overline{k}$, $T_{\overline{k}}$ is isomorphic to a finite product of $\Bbb G_m$ defined over $\overline{k}$.
The central issue here for you is that over fields that are algebraically closed, there is exactly one torus of each dimension up to isomorphism. On the other hand, if you are working over a field which is not algebraically closed, there can be multiple non-isomorphic tori of a given dimension. Your second example shows that you've found two different tori of dimension 2 over $\Bbb R$.
To answer your original question, you may compute that after base changing to $\overline{\Bbb Q}$, the torus $Res(K)$ is isomorphic to $d$ copies of $\Bbb G_m$. |
How can one interpret a module as a vector space? (in a specific circumstance) | Let $M$ be a left $A$-module. Then $M$ is also a module over the subring of constant polynomials, which is isomorphic to $k$. This makes $M$ a vector space over $k$. Now let $T:M\to M$ be defined by $T(m)=x\cdot m$. Then $T$ is a linear transformation on $M$. The action of $A$ on $M$ is given by $p(x)\cdot m =p(T)(m)$.
Conversely, if $V$ is a $k$-vector space and $T$ is a linear transformation on $V$, then defining $p(x)\cdot v=p(T)(v)$ makes $V$ an $A$-module. |
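A tiny numeric illustration of the correspondence (a Python sketch with $k=\mathbb R$, $V=\mathbb R^2$ and a rotation-by-$90°$ matrix $T$, so that $T^2=-I$):

import numpy as np

T = np.array([[0., 1.], [-1., 0.]])   # T^2 = -I

def act(coeffs, v):                   # p(x).v = p(T)v, with p given by [a0, a1, ...]
    out, Tk = np.zeros_like(v), np.eye(2)
    for a in coeffs:
        out = out + a * (Tk @ v)
        Tk = T @ Tk
    return out

v = np.array([1., 0.])
print(act([1, 0, 1], v))              # (1 + x^2).v = v + T^2 v = 0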
How to show that a root of the equation $x (x+1)(x+2) ....... (x+2009) = c $ can have multiplicity at most 2? | Let denote by
$$P(x)=x (x+1)(x+2) ....... (x+2009) - c .$$
We have
$$P(0)=P(-1)=\cdots=P(-2009)=-c$$
and by Rolle's theorem there's
$$0>t_0>-1>\cdots>t_{2008}>-2009$$
s.t.
$$P'(t_0)=P'(t_1)=\cdots=P'(t_{2008})=0.$$
Since the polynomial $P$ has the degree $2010$ then the degree of $P'$ is $2009$ and then the $t_i$ are the roots of $P'$.
If $c=0$ then the roots of $P$ are different from the roots of $P'$ and there is not a multiple root.
If $c\neq 0$, we apply Rolle's theorem to $P'$ and we find the roots $s_i$ of $P''$ s.t.
$$t_0>s_0>t_1>\cdots>s_{2007}>t_{2008}$$
and if we have $P(\alpha)=P'(\alpha)=0$ then $\alpha=t_{i_0}$ for some $i_0$ (since $\alpha$ is a root of $P'$), and then $\alpha\neq s_i$ for all $i=0,\ldots,2007$, hence $P''(\alpha)\neq 0$. |
Best constant for Lipschitz continuous gradient function | Let $x,y\in\mathbb{R}^n$, and let $x(t) := t x + (1-t)y$.
We have that
$$
\begin{split}
f(x) - f(y) & = \int_0^1 \langle\nabla f(x(t)), x-y\rangle\, dt
\\ & = \int_0^1 \langle\nabla f(x(t)) - \nabla f(y), x-y\rangle\, dt
+ \langle\nabla f(y), x-y\rangle.
\end{split}
$$
On the other hand
$$
\begin{split}
& \left|\int_0^1 \langle\nabla f(x(t)) - \nabla f(y), x-y\rangle\, dt\right|
\leq \int_0^1 \| \nabla f(x(t)) - \nabla f(y)\| \, \|x-y\|\, dt
\\ & \leq L \int_0^1 \|x(t) - y\|\, \|x-y\|\, dt
= L \|x-y\|^2 \int_0^1 t\, dt = \frac{L}{2}\|x-y\|^2,
\end{split}
$$
so that
$$
f(x) - f(y) \geq \langle\nabla f(y), x-y\rangle
- \frac{L}{2}\|x-y\|^2\,.
$$ |
If $e^{2\pi i}=1$ then is $2\pi i=0$? | In $(0,\infty)$, every number has one and only one real logarithm. So, in $(0,\infty)$, it makes sense to assert that $e^x=y\iff x=\log y$.
However, every nonzero complex number has infinitely many logarithms. In particular every number of the form $2k\pi i$ (with $k\in\mathbb Z$) is a logarithm of $1$. So, your approach does not work here.
Another way of seeing this is: in $\mathbb R$, the exponential function is injective, but not in $\mathbb C$. |
If $\sup \limits_{x\in M}f(x)<\infty$, $f$ has a global maximum in $M$. | Take a sequence $(x_n)_n$ such that $f(x_n)\to\sup f(x)$.
As the supremum is finite, $(f(x_n))_n$ is bounded, so $(x_n)_n$ is also bounded. By the Bolzano–Weierstrass theorem, there is a convergent subsequence $(x_{n_k})_k$, that is, $x_{n_k}\to x$, and, because $M$ is closed, $x\in M$.
Now, the continuity of $f$ and the fact that $(f(x_n))_n$ is convergent, do the last job:
$$f(x)=f(\lim_k x_{n_k})=\lim_k f(x_{n_k})=\lim_n f(x_n)=\sup f.$$ |
Correlation is zero but with non-zero correlation coefficient | Yes, it is possible.
Following this very good paper on the topic, consider two random variables $X$ and $Y$ with the following realizations:
$$X = (1, -5, 3, -1), Y = (5, 1, 1, 3)$$
You have:
$$R_{XY} = E(XY') = \tfrac{1}{4}\left(1\times5 - 5\times1 + 3\times1 - 1\times3\right) = 0$$
but:
$$\mu_{X} = -\frac{1}{2}, \mu_{Y} = \frac{5}{2} \Rightarrow \rho_{X,Y} \neq 0$$
In general, recall that while both orthogonality and uncorrelation imply linear independence, there's no implication between orthogonality and uncorrelation themselves.
EDIT (to answer a comment):
When it comes to stochastic processes, following your notation, we say that $(X_{t})_{t \geq 1}$ and $(Y_{t})_{t \geq 1}$ are uncorrelated if:
$$\forall t_{1}, t_{2}, COV_{X,Y}(t_{1}, t_{2}) = R_{XY}(t_{1}, t_{2}) - \mu_{X}(t_{1})\mu_{Y}(t_{2}) = 0
$$
while we say that they're orthogonal if:
$$\forall t_{1}, t_{2}, R_{XY}(t_{1}, t_{2}) = E[X(t_{1})Y(t_{2})'] = 0$$
so the same reasoning as before applies. Here for a broader analysis. |
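The example can be reproduced numerically (a Python sketch; np.cov subtracts the sample means, so a nonzero value signals correlation):

import numpy as np

X = np.array([1, -5, 3, -1])
Y = np.array([5, 1, 1, 3])
print(np.mean(X * Y))        # 0.0 -> orthogonal
print(np.cov(X, Y)[0, 1])    # nonzero -> correlated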
Generic construction of a probability measure on a random variable X | $X^{-1}(-\infty,x]=\{\omega \in \Omega: X (\omega) \leq x\}$. The notation $[X \leq x]$ is used in Probability theory as an abbreviation for $\{\omega \in \Omega: X (\omega) \leq x\}$. |
Is there a picture of the tangent lines through a point inside a circle? | The picture would have to be in $\mathbb C^2\cong\mathbb R^4$, thus not easy to visualize. Note that a variety in $\mathbb C$ is just a finite set of points (zeroes of a complex polynomial) - no circles, no lines.
If you do the math with $C$ the unit circle, you'll find that $P=(x_P, y_P)$ leads to the line given by $x x_P+y y_P = 1$. This may be derived from the tangent construction for $x_P^2+y_P^2>1$, and it produces the tangent through $P$ in the limiting case $x_P^2+y_P^2=1$. As a pure algebraic transformation, it remains valid for $P$ inside the circle, but as he says, you don't have real tangent points any more.
Mind you: the $x$ and $y$ we are talking about here are not to be seen as the real and imaginary parts of a complex number $z=x + i y$, but rather as two complex numbers themselves. That's why we'd have to visualize (real) four-dimensional space to appreciate the situation. If we write the circle equation with complex numbers, and substitute real and imaginary parts for these, we obtain (now with all variables taking only real values) $(x+i y)^2+(u+iv)^2=1$, i.e. the simultaneous (real) equations $x^2-y^2+u^2-v^2=1$ and $xy+uv=0$. I don't know if there is an easy way to visualize even this "circle" in four real dimensions - one could try to use one variable ($v$ say) as time and then the whole thing is a 3D movie. It just won't look as circular as you might expect. |
Verify that $\frac{\pi}{4} = 1-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+....$ can be found via a Fourier series for $x$ in $-\pi\lt x \le \pi$ | Fourier Series is defined as
$$f(x)=\frac{a_0}{2}+ \sum_{r=1}^{r=\infty}\left(a_r\cos\left(\frac{2\pi r x}{T}\right)+b_r\sin\left(\frac{2\pi r x}{T}\right)\right)\space\space\space\space\space\space\space\space\space\space\space\space\space\space\space\color{blue}{(1)}$$
For $r>0$ you can obtain the coefficients using the formulas you wrote down first. For the case $r=0$ you have to use the formula for $a_r$
$$a_r=\frac{2}{T}\int_{-\pi}^{\pi}f(x)\cos{(\frac{2\pi rx}{T})}\mathrm{d}x$$
and set $r = 0$. Applying this will lead to $f(x)\cos(\frac{2\pi x\cdot0}{T})=f(x)\cos(0)=f(x)$. Now integrate this from $-\pi$ to $\pi$ and multiply this by $\frac{2}{T}$ to get what is written in red.
Alternatively:
You can also conclude $a_n=0\space \forall n\geq0$ as $f(x)$ is odd.
Another way to see that $a_0=0$ is obtained by plugging $x=0$ into the series: all sine terms vanish, and since $a_r=0$ for $r\geq1$ the sum drops out, so we directly obtain:
$$f(0)=x|_{x=0}=0=\frac{a_0}{2}$$ |
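Once the series is in hand, the Leibniz value can also be checked numerically; a Python sketch of the partial sums of $1-\frac13+\frac15-\cdots$:

import math

s = sum((-1) ** k / (2 * k + 1) for k in range(100_000))
print(s, math.pi / 4)   # agree to about 5 decimals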
How do I evaluate this integral involving monomial and trig? | Hint: Integration by parts with $u=x$.
You can use the identity
$$ \cos(nx)=\frac{e^{inx}+e^{-inx} }{2}. $$
But, still you have to use integration by parts. |
Can a ring isomorphism change the structure of a module? | Consider a field $F$ and the ring $R=F\times F$.
Let $M=\{0\}\times F$ with both the ordinary right $R$-module structure $(0,m)(r,s)=(0,ms)$, and let $M'$ be the same set with another $R$-module structure given by $(0,m)(r,s)=(0,mr)$.
This second structure is just given by the involution on $R$ given by $(r,s)\mapsto(s,r)$.
The annihilator in $R$ of $M$ is $ F\times \{0\}$ while the annihilator in $R$ of $M'$ is $\{0\}\times F$. Since isomorphic modules share annihilators, this shows $M\ncong M'$. |
Compute $\int_0^{\infty}\frac{\cos(\pi t/2)}{1-t^2}dt$ | First, decompose the denominator into partial fractions and perform a change of variable, which leads to the integral of Cos[x] / x.
Please, have a look at http://en.wikipedia.org/wiki/Trigonometric_integral#Sine_integral
Then the antiderivative of the integrand is just
1/2 (SinIntegral[Pi (1 + t) / 2] - SinIntegral[Pi (1 - t) / 2])
and the integration between 0 and infinity leads to Pi / 2. |
Bernstein polynomial: function to approximate | It is impossible to figure out the explicit formula of the red function using only a picture of its graph. However, this description says:
This animation was created with Maple 9.5 and Jasc Animation Shop. It shows Bernstein polynomials approximating $x \sin(\frac 72\pi {\sqrt {x}})$ (chosen for purely aesthetic reasons).
That said, it appears that the true formula is $-x \sin(\frac 72\pi {\sqrt {x}})$. |
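For completeness, a Python sketch of the Bernstein approximants of that candidate function on $[0,1]$ (the uniform error shrinks, slowly, as the degree grows):

import numpy as np
from math import comb

def f(x):
    return -x * np.sin(3.5 * np.pi * np.sqrt(x))

def bernstein(f, n, x):
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

x = np.linspace(0, 1, 201)
for n in (10, 50, 200):
    print(n, np.abs(bernstein(f, n, x) - f(x)).max())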
Merging 2 ordered sets preserving order of each set | Let there be $n$ elements in the first list and $m$ in the second. The combined list will have $m+n$ elements and is completely determined by the positions of the elements of the first list. So there are $$\binom{n+m}{n}$$ possibilities. |
Continuous uniform distribution over a circle with radius R | Let the pair $(X,Y)$ of random variables have uniform distribution on the disk with centre the origin and radius $R$. Then the joint density function $f_{X,Y}$ is $\frac{1}{\pi R^2}$ on the disk, and $0$ elsewhere.
For the density function of $x$, "integrate out" $y$. So we are integrating a constant from $-\sqrt{R^2-x^2}$ to $\sqrt{R^2-x^2}$. The result is $\frac{2\sqrt{R^2-x^2}}{\pi R^2}$ (for $-R\le x\le R$, and $0$ elsewhere).
The density function of $Y$ can be read off by symmetry.
For the mean of $X$, we don't need to calculate at all. By symmetry it is $0$.
The mean of $XY$ is also $0$. The second and fourth quadrant parts cancel the first and third quadrant parts. |
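A Monte Carlo confirmation by rejection sampling (a Python sketch with $R=2$):

import numpy as np

rng = np.random.default_rng(0)
R = 2.0
pts = rng.uniform(-R, R, size=(1_000_000, 2))
pts = pts[(pts**2).sum(axis=1) <= R**2]   # keep points inside the disk
X, Y = pts[:, 0], pts[:, 1]
print(X.mean(), (X * Y).mean())           # both close to 0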
are the integrals and bounds correct for $P(X \gt Y)$ and $P(Y \gt \frac12 | X \lt \frac12)$ of the joint probability function | $$P(Y \gt \frac12 , X \lt \frac12) = \int_0^\frac12\int_\frac12^2 \frac67(x^2+\frac{xy}{2})dy dx$$
$$P(Y \gt \frac12 | X \lt \frac12) = \frac{\int_0^\frac12\int_\frac12^2 \frac67(x^2+\frac{xy}{2})dy dx}{P(X \lt \frac12)}$$
To compute $P(X \lt \frac12)$, we must get the marginal distribution of $X$:
$$f_X(x) = \int_0^2 f(x,y) dy$$
Now we have
$$P(X \lt \frac12) = \int_0^1 f_X(x)\, 1_{(-\infty, 1/2)}(x)\, dx$$
$$ = \int_0^{1/2} f_X(x) dx$$
$$ = \int_0^{1/2} \int_0^2 f(x,y) dy dx$$ |
$\text{Prove That,}\;f(n) = \prod\limits_{i=1}^{n}(4i - 2) = \frac{(2n)!}{n!}$ | To prove that it is true for $n+1$:
$$\prod_{i=1}^{n+1} (4i-2) = \prod_{i=1}^{n} (4i-2)(4(n+1)-2) = \frac{(2n)!}{n!} (4n+2) =\frac{(2n)!}{n!} 2(2n+1) =\frac{(2n)!}{n!}\frac{2n+2}{n+1}(2n +1) = \frac{(2n+2)!}{(n+1)!} = \frac{(2(n+1))!}{(n+1)!}$$ which was to be proved. Hope it helps. |
Finding the complex eigenvectors in a matrix | $x_2$ being free implies that it is arbitrary. It should have been $2+2i=x_1$ though.
If $x_1=(1+i)$, then you are correct, and we can deduce that $x_2=1$. |
Is it possible to solve a problem with norm constraints using only linear programming? | Unless you can get rid of the two-norm, I doubt it's possible.
You can however get it to work with the $||\bullet||_1$ or $||\bullet||_\infty$ norm. |
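For reference (my addition, not part of the original answer), the standard linearizations that make those two norms LP-representable:
$$\|x\|_\infty\le t \iff -t\le x_i\le t\ \ \forall i,\qquad \|x\|_1\le t \iff \exists s:\ -s_i\le x_i\le s_i\ \forall i,\ \sum_i s_i\le t.$$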
Why the sum of the sample autocovariance function of a stationary process is zero? | $\newcommand{\hgamma}{\hat{\gamma}}$
This is Exercise $7.3$ from the classic text Time Series: Theory and Methods. We can prove it by a series of reductions of the original equality.
First note by $\hgamma(h) = \hgamma(-h)$, the equality to be proven is
$$0 = \hgamma(0) + 2\sum_{h = 1}^{n - 1}\hgamma(h). $$
Next note by definition of $\hgamma$, without loss of generality we can assume $\bar{x} = 0$. Therefore, it suffices to prove that under the condition of $x_1 + \cdots + x_n = 0$, it holds that
\begin{align*}
0 = \sum_{t = 1}^n x_t^2 + 2\sum_{h = 1}^{n - 1}\sum_{t = 1}^{n - h}x_{t + h}x_t.
\end{align*}
But this is just the result of expanding the quadratic form
\begin{align*}
0 = (n\bar{x})^2 = (x_1 + x_2 + \cdots + x_n)^2.
\end{align*}
This completes the proof. |
Computational Topology and Lie Group Theory | I haven't studied Computational Topology or Lie Groups, but I can maybe recommend some stuff to get you started in that direction.
First off, you will need a good understanding of Algebra and Point-Set Topology. Here are some topics you will probably need to know:
Abstract Algebra
Groups: actions, homomorphisms, isomorphisms, subgroups, quotient groups, Fundamental Theorem of Finitely Generated Abelian Groups
Rings: ring homomorphism, quotient rings, ideals
Modules
Fields
a little bit about Vector Spaces maybe
Point-Set Topology
Definition of Topology
Continuous Functions
Homeomorphisms
Product Topology
Subspace Topology
Quotient Space
Compactness and Connectedness
For books I used Dummit and Foote for Algebra and Munkres for Topology. The topics in algebra correspond to parts I, II, and III in D&F. The topics in Point-Set Topology correspond to chapters 1-3 of Munkres.
From there you can study Algebraic Topology (you should be able to start studying this after finishing the initial Topology subjects and knowing groups and rings). I hear Hatcher is pretty good (and it's free online). I'm guessing that you will mostly need to study homology (I believe that's an entire chapter).
You will also need to study Differential Geometry for Lie Groups, but I don't know enough about that to suggest a book. But the books above should keep you busy for a while anyway.
But like I said, I don't know too much about the subjects of Computational Topology and Lie Groups, so take this advice with a grain of salt. |
Prove that $\sin^2{x}+\sin{x^2}$ isn't periodic by using uniform continuity | We proceed from the definition of uniform continuity. (One can get the result more simply by considering the derivative of $\sin(x^2)$.)
It is enough to show that $\sin(x^2)$ is not uniformly continuous. For if $\sin^2 x+\sin(x^2)$ were uniformly continuous, then since $\sin^2 x$ is uniformly continuous, the difference $\sin^2 x+\sin(x^2)-\sin^2 x$ would be uniformly continuous.
Let $\epsilon=\frac{1}{2}$. We show that there does not exist a $\delta\gt 0$ such that for all $x,y$, if $|x-y|\lt \delta$, then $|\sin(x^2)-\sin(y^2)|\lt \epsilon$.
Let $\delta$ be a fixed positive quantity. Let $x=\sqrt{2n\pi}$, where $n$ is a large integer to be chosen later. Then $\sin(x^2)=0$. Let $n$ be such that $2\delta\sqrt{2n\pi}\gt \frac{\pi}{2}$. Note that
$$(x+\delta)^2=x^2+2x\delta+\delta^2\gt x^2+2x\delta\gt 2n\pi +2\delta\sqrt{2n\pi}\gt 2n\pi+\frac{\pi}{2}.$$
It follows that there is a $y$ such that $x\lt y\lt x+\delta$ and $\sin(y^2)=1$; explicitly, $y=\sqrt{2n\pi+\frac{\pi}{2}}$ works. In particular, $|\sin(x^2)-\sin(y^2)|=1\gt \frac{1}{2}$. This completes the proof.
If $f$ is Lebesgue integrable on $[0,2]$ and $\int_E fdx=0$ for all measurable set E such that $m(E)=\pi/2$. Prove or disprove that $f=0$ a.e. | Hint: let $A$ and $B$ be small enough (let's say $m(A)=m(B)<0.001$). Then there exists $C$ disjoint from $A \cup B$ such that $m(C)=\pi/2 - m(A)$. Using your conditions on $A \cup C$ and $B \cup C$, show that $\int_Afdm=\int_Bfdm$. Now consider the positivity and negativity sets of $f$. Show that if one or both of them have measure zero, you are done. Show that otherwise this will contradict the previous argument.
Proof that for all Real Numbers $a \in \mathbb R$, $0 \leq a^2$ using certain Axioms | Let $a\in\mathbb{R}$ with $a\leq 0$. Then $(-a)\in\mathbb{R}$ and axiom (IV) gives us:
$a+(-a)\leq 0+(-a)$, which by the field axioms is $0\leq -a$, which is what you wanted to show.
If $x>0$ we have $(1+x^2)f'(x)+(1+x)f(x)=1$ and $g'(x)=f(x),\ f(0)=g(0)=0$. Prove: | Using an integrating factor $m$, we first solve the given ODE:
\begin{align*}
f'+\frac{1+x}{1+x^2}f&=\frac{1}{1+x^2}\\
m'&=m\frac{1+x}{1+x^2}\\
(\ln(m))'&=\frac{1+x}{1+x^2}\\
\ln(m)&=\int_0^x\frac{1+y}{1+y^2}dy=arctg(x)+\frac{1}{2}\int_{1}^{1+x^2}\frac{1}{z}dz=arctg(x)+\frac{1}{2}\ln(1+x^2)\\
m&=\exp(arctg(x))+\sqrt{1+x^2}\\
f&=\frac{1}{m}\int_0^x\frac{m}{1+y^2}dy=\frac{1}{m}\Big(\int_0^xarctg'(y)\exp(arctg(y))dy+\int_0^x\frac{1}{\sqrt{1+y^2}}dy\Big)=\frac{1}{m}\Big(\exp(arctg(x))-1+\ln(x+\sqrt{1+x^2})\Big)
\end{align*}
Thus
\begin{align*}
g(x)&=\int_0^xf(y)dy=\int_0^x\frac{\exp(arctg(y))-1+\ln(y+\sqrt{1+y^2})}{\exp(arctg(y))+\sqrt{1+y^2}}dy
\end{align*}
Using monotonicity of $f$ we get that
\begin{align*}
g(1)&\leq\int_0^1\frac{\exp(arctg(1))-1+\ln(1+\sqrt{1+1^2})}{\exp(arctg(1))+\sqrt{1+1^2}}dy=\frac{\exp(\pi/4)-1+\ln(1+\sqrt{2})}{\exp(\pi/4)+\sqrt{2}}<0.576
\end{align*} |
Polynomial division simple | You compare the highest powers to get the next term, $\frac{s^4}{2s^3} = \frac12 s$.
Then distribute that term across the denominator: $\frac12 s (2s^3 + 2s^2 + 3s + 1) = s^4+s^3+\frac32 s^2 + \frac12 s$.
Then subtract from the numerator to get $2s^3 + \frac52 s^2 + \frac72 s + 1$.
Repeat from the beginning until done. |
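Carrying the example to completion (the original numerator can be reconstructed as $s^4+3s^3+4s^2+4s+1$ by adding the subtracted product back): comparing highest powers again gives $\frac{2s^3}{2s^3}=1$; distributing gives $2s^3+2s^2+3s+1$; subtracting leaves $\frac12 s^2+\frac12 s$, which has smaller degree than the divisor, so the division stops with $$\frac{s^4+3s^3+4s^2+4s+1}{2s^3+2s^2+3s+1}=\frac12 s+1+\frac{\frac12 s^2+\frac12 s}{2s^3+2s^2+3s+1}.$$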
Trigonometric and exp limit | Most of the common limit problems can be solved without the use of L'Hospital's Rule and Taylor's series (i.e. without using differentiation in anyway). This limit is not one of them. I have avoided differentiation but at the cost of using the following limit: $$\lim_{x \to 0}\frac{\log(1 + x) - x}{x^{2}} = -\frac{1}{2}\tag{1}$$ apart from the following standard limits $$\lim_{x \to 0}\frac{e^{x} - 1}{x} = 1 = \lim_{x \to 0}\frac{\log(1 + x)}{x}\tag{2}$$ The limit $(1)$ is established at the end of this answer using a specific definition of $e^{x}$.
Let's put $\sin x = t$ so that as $x \to \pi/2$ we have $t \to 1$. Now the desired limit is evaluated as follows
\begin{align}
L &= \lim_{x \to \pi/2}\frac{\sin x - (\sin x)^{\sin x}}{1 - \sin x + \log(\sin x)}\notag\\
&= \lim_{t \to 1}\frac{t - t^{t}}{1 - t + \log t}\notag\\
&= \lim_{t \to 1}\frac{t - \exp(t\log t)}{1 - t + \log t}\notag\\
&= \lim_{h \to 0}\frac{1 + h - \exp((1 + h)\log(1 + h))}{\log(1 + h) - h}\text{ (putting }t = 1 + h)\notag\\
&= \lim_{h \to 0}\dfrac{1 + h - \exp((1 + h)\log(1 + h))}{\dfrac{\log(1 + h) - h}{h^{2}}\cdot h^{2}}\notag\\
&= -2\lim_{h \to 0}\dfrac{1 + h - \exp(\log(1 + h) + h\log(1 + h))}{h^{2}}\text{ (using (1))}\notag\\
&= -2\lim_{h \to 0}\dfrac{1 + h - (1 + h)\cdot\exp(h\log(1 + h))}{h^{2}}\notag\\
&= 2\lim_{h \to 0}(1 + h)\cdot\dfrac{\exp(h\log(1 + h)) - 1}{h^{2}}\notag\\
&= 2\lim_{h \to 0}\dfrac{\exp(h\log(1 + h)) - 1}{h\log(1 + h)}\cdot\frac{\log(1 + h)}{h}\notag\\
&= 2 \cdot 1 \cdot 1 = 2\notag
\end{align}
To establish $(1)$ we first establish $$\lim_{x \to 0}\frac{e^{x} - 1 - x}{x^{2}} = \frac{1}{2}\tag{3}$$ via definition $$e^{x} = \lim_{n \to \infty}\left(1 + \frac{x}{n}\right)^{n}\tag{4}$$ and then we have
\begin{align}
A &= \lim_{x \to 0}\frac{\log(1 + x) - x}{x^{2}}\notag\\
&= \lim_{t \to 0}\dfrac{t - e^{t} + 1}{(e^{t} - 1)^{2}}\text{ (putting }t = \log(1 + x))\notag\\
&= \lim_{t \to 0}\dfrac{t - e^{t} + 1}{\left(\dfrac{e^{t} - 1}{t}\right)^{2}\cdot t^{2}}\notag\\
&= \lim_{t \to 0}\dfrac{t - e^{t} + 1}{t^{2}}\notag\\
&= -\frac{1}{2}\text{ (using (3))}\notag
\end{align}
To prove $(3)$ note that using $(4)$ and binomial theorem for positive integral index we have $$e^{x} = \lim_{n \to \infty}1 + x + \dfrac{1 - \dfrac{1}{n}}{2!}x^{2} + \dfrac{\left(1 - \dfrac{1}{n}\right)\left(1 - \dfrac{2}{n}\right)}{3!}x^{3} + \cdots$$ and hence $$\frac{e^{x} - 1 - x}{x^{2}} = \lim_{n \to \infty}\dfrac{1 - \dfrac{1}{n}}{2!} + \dfrac{\left(1 - \dfrac{1}{n}\right)\left(1 - \dfrac{2}{n}\right)}{3!}x + \cdots = \lim_{n \to \infty}f(x, n)\text{ (say)}\tag{5}$$ where $f(x, n)$ is a finite sum. If $0 < x < 1$ then $$\frac{n - 1}{2n} \leq f(x, n) \leq \frac{1}{2} + \frac{x}{6} + \frac{x^{2}}{24} + \cdots + \frac{x^{n - 2}}{n!}$$ and hence $$\frac{n - 1}{2n} \leq f(x, n) \leq \frac{1}{2} + \frac{x}{6} + \frac{x^{2}}{18} + \cdots = \frac{3}{6 - 2x}$$ Taking limit of the above inequality as $n \to \infty$ we get using $(5)$ $$\frac{1}{2} \leq \frac{e^{x} - 1 - x}{x^{2}} \leq \frac{3}{6 - 2x}$$ for $0 < x < 1$. Now taking limit as $x \to 0^{+}$ we get $$\lim_{x \to 0^{+}}\frac{e^{x} - 1 - x}{x^{2}} = \frac{1}{2}$$ and the case for $x \to 0^{-}$ is handled by putting $x = -y$.
The above long proofs of limits $(1)$ and $(3)$ show that Taylor's series and L'Hospital's Rule are very powerful tools which can replace these long proofs to just one step evaluation. But there is an intrinsic interest in establishing limits without the use of differentiation and I hope the above presentation is a good attempt in that direction. |
Inequality real analysis about logarithm | Let $f(t)=(1+t)\ln(1+t)-t-\frac{t^2}{2\left(1+\frac{t}{3}\right)}.$
Then, for $t>-1$, $$f''(t)=\frac{t^2(t+9)}{(t+1)(t+3)^3}\geq0.$$
Thus $f'$ increases.
But $$f'(t)=\ln(1+t)-\frac{3t(t+6)}{2(t+3)^2}\rightarrow-\infty$$ for $t\rightarrow-1^+$ and $f'\rightarrow+\infty$ for $t\rightarrow+\infty$, which says that $f'$ has a unique root; since $f'(0)=0$, that root is $t=0$,
which gives that $f$ attains its minimum value at $t=0$.
Id est, $$f(t)\geq f(0)=0$$ and we are done! |
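Explicitly, $f(t)\geq 0$ is the inequality $$(1+t)\ln(1+t)\geq t+\frac{t^2}{2\left(1+\frac{t}{3}\right)}\qquad\text{for all } t>-1.$$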
Geometrically construct sides on cube | The question as posed isn't completely clear to me (e.g. what 'how many degrees' refers to), but I assume you need to know how a circle drawn on a side of a cube looks, if we observe it in the direction of the main diagonal of the cube.
Let $u,v,w$ be the Cartesian axes going from a corner along the edges of our cube. Projected onto an $x,y$ plane perpendicular to the main diagonal, they will look like this:
For the three circles on the sides of the cube the equations in $u,v,w$ would look like this:
$$(u-u_0)^2+(v-v_0)^2=R^2 \\ (u-u_0)^2+(w-w_0)^2=R^2 \\ (v-v_0)^2+(w-w_0)^2=R^2$$
From elementary geometry we can find for any point in the plane with (projected) $u,v,w$ coordinates the related $x,y$ coordinates, depending on the sector (side of the cube) the point belongs to:
$$\begin{cases} u=x+\frac{1}{\sqrt{3}}y \\ v= \frac{2}{\sqrt{3}}y \end{cases} \tag{1}$$
$$\begin{cases} u=x-\frac{1}{\sqrt{3}}y \\ w= -\frac{2}{\sqrt{3}}y \end{cases} \tag{2}$$
$$\begin{cases} v=-x+\frac{1}{\sqrt{3}}y \\ w= -x-\frac{1}{\sqrt{3}}y \end{cases} \tag{3}$$
Now simply substitute the above expressions into each circle equation, and we obtain for circles centered at their respective $u_0,v_0,w_0$ and of radius $R$:
$$ \left( x+\frac{1}{\sqrt{3}}y-u_0 \right)^2+\left( \frac{2}{\sqrt{3}}y-v_0 \right)^2=R^2$$
$$ \left( x-\frac{1}{\sqrt{3}}y-u_0 \right)^2+\left( \frac{2}{\sqrt{3}}y+w_0 \right)^2=R^2$$
$$ \left( x-\frac{1}{\sqrt{3}}y+v_0 \right)^2+\left(x+ \frac{1}{\sqrt{3}}y+w_0 \right)^2=R^2$$
Here's an example for $u_0=v_0=w_0=5$ and $R=3$: |
Characterization of Zero in a module | Let $R = \mathbb Z_6$ and let $M = \{0,3\}\cong\mathbb Z_2$, viewed as a submodule of $R$. Then $2m = 0$ $\ \forall m \in M$, but $2 \ne 0$ in $R$.
More generally, if $R$ is not an integral domain - say $xy = 0$ where $x \ne 0\ne y$, then if $M$ is the ideal generated by $y$ viewed as an $R$- module, then we will have $x\cdot m = 0$ $\ \forall m \in M$.
$R$ can even be an integral domain: $R = \mathbb Z$, $M = \mathbb Z_5$, then $x=5$ has this property.
If $M$ is a free module over $R$, then this cannot happen for $x \neq 0$: a basis element $m$ would satisfy $x\cdot m = 0$, contradicting its linear independence.
Reparametrize the curve problem in Differential geometry | After reparametrization you have a map $\beta: \mathbb R_{>0} \to \mathbb R^3$ defined by
$$\beta (s) = \alpha (h(s)) = (s, {1\over s}, \sqrt{2} \log s)$$
For the LHS of the lemma compute $\beta' (s)$ as
$$ \beta' (s) = {d \over ds} \beta (s) = (1, -{1 \over s^2}, {\sqrt{2}\over s})$$
For the RHS of the lemma compute
$$ {d \over ds} h(s) = {1 \over s}$$
and
$$ \alpha '(h(s)) = {d \over dh }\alpha (h) = (e^{h(s)}, -e^{-h(s)}, \sqrt{2})= (s, -{1\over s}, \sqrt{2})$$
and note that
$$ \alpha'(h(s)) \frac{dh(s)}{ds}= (s,-{1 \over s},{\sqrt{2}}) \cdot {1 \over s} = (1, -{1 \over s^2}, {\sqrt{2}\over s}) = \beta '(s)$$
as desired. |
Closed in $\mathbb{R}^{2}$ vs closed in $\mathbb{R}$ | The set $$ B=\left\{(x,\frac{1}{x}): x\in\mathbb{R} \smallsetminus \left\{0\right\} \right\}$$ is closed because its complement is open.
The set $$ A=\left\{\frac{1}{n}: n \in \mathbb{N}\right\}$$ is not closed, since it does not contain its limit point $0$; in particular it is not the same as the set $B$.
The two sets are not the same and obviously one is countable and the other is not. |
Show that there exists an element $a≠e$, the identity in $g$, such that $a^2 = e$. | 1) The above proof works for all groups, not just abelian ones.
2) Even order implies finite. Hint: Split $G$ into sets of the form $\{a, a^{-1}\}$. |
topologically equivalent spaces | Any homeomorphism $f : (-1, 1) \to \mathbb{R}$ yields a complete metric for $(-1, 1)$, because $\mathbb{R}$ is complete. Just put $d(x_0, x_1) = |f(x_0) - f(x_1)|$. Note that $d$ is obviously symmetric and satisfies the triangle inequality. The only point where you need to use the fact that $f$ is a bijection is to show that $d(x_0, x_1) = 0$ implies $x_0 = x_1$; and to show that $d$ is topologically equivalent to the standard metric on $(-1, 1)$, you need to use the fact that $f$ is a homeomorphism.
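Concretely, $f(x)=\tan\left(\frac{\pi x}{2}\right)$ is such a homeomorphism, giving the complete metric $$d(x_0,x_1)=\left|\tan\frac{\pi x_0}{2}-\tan\frac{\pi x_1}{2}\right|$$ on $(-1,1)$. Note that the sequence $x_n=1-\frac1n$, which is Cauchy in the standard metric without converging in $(-1,1)$, is not Cauchy with respect to $d$.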
Convergence in the discrete metric proof | It is really a matter of writing out the definitions of the notions involved.
For the contrapositive, assume there does not exist such an $N$ as described, i.e. assume that for every $N \in \mathbb{N}$ there exists some $n > N$ s.t. $x_n \neq y$. Then we want to show that $(x_n)_{n \in \mathbb{N}}$ does not converge to $y$, i.e. we want to show that there is some $\varepsilon> 0$ such that there does not exist an $N \in \mathbb{N}$ with the property that $\forall n > N: x_n \in B_{\varepsilon}(y)$. Take here $\varepsilon := 1/2$. Then indeed, the above gives us precisely that, since whenever $x_n \neq y$ we have $x_n \not\in B_{\varepsilon}(y)$.
We can also show this directly. Assume that $x_n \rightarrow y$. That means that for every $\varepsilon > 0$, there exists some $N \in \mathbb{N}$ s.t. for any $n > N$ we have $x_n \in B_{\varepsilon}(y)$. Now choose $\varepsilon := 1/2$. Then this gives you that after some index $N$, all the $x_n$ have distance strictly less than $1/2$ to the point $y$. But the only point in the entire space with that property is $y$ itself. Hence after this index, we have $x_n = y$. |
A matrix w/integer eigenvalues and trigonometric identity | Since (b) has been largely answered in the comments, I'll deal with (a) here.
It is the case here that there is a very simple (and otherwise useful and well-known) basis of eigenvectors. Let $\zeta$ be a primitive $(2n+1)$-th root of unity. Our eigenvectors for $\Lambda$ will be the “cyclic” vectors, whose coordinates are successive powers of a root of unity:
$$
C_s=(\zeta^s,\zeta^{2s},\zeta^{3s}, \ldots ,\zeta^{(2n)s},1) \ (0 \leq s \leq 2n)
\tag{1}
$$
Let us compute all the coordinates of $\Lambda C_s$ ; in other words, we must
compute the sum
$$
f(i,s)=\sum_{j=1}^{2n+1} \Lambda_{ij} \zeta^{sj} \tag{2}
$$
We have $f(i,s)=\frac{2}{3}n(n+1)\zeta^{si}+g(i,s)$, where
$$
g(i,s)=\sum_{j\neq i} \frac{\zeta^{sj}}{\frac{\zeta^{i-j}+\zeta^{j-i}}{2}-1}=
\sum_{t\neq 0} \frac{\zeta^{s(i+t)}}{\frac{\zeta^{-t}+\zeta^{t}}{2}-1}=
2\zeta^{si}\sum_{t=1}^{2n} \frac{(\zeta^t)^{s+1}}{(\zeta^{t}-1)^2}=
2\zeta^{si}\sum_{\eta\in U, \eta\neq 1} \frac{\eta^{s+1}}{(\eta-1)^2}
\tag{3}
$$
where $U$ is the set of all $(2n+1)$-th roots of unity. There are classical techniques to compute the RHS of (3) above ; see the answer to this question, where the final
formula obtained is
$$
\sum_{\eta\in U, \eta\neq 1} \frac{\eta^{s+1}}{(\eta-1)^2}
=\frac{6(2n+1-s)s-((2n+1)^2-1)}{12}=
\frac{s(2n+1-s)}{2}-\frac{n^2+n}{3}
\tag{4}
$$
Combining (3) with (4), the terms in $\frac{n^2+n}{3}$ cancel out and we thus obtain
$$
\sum_{j=1}^{2n+1} \Lambda_{ij} \zeta^{sj}=s(2n+1-s)\zeta^{si} \tag{5}
$$
Or, to express things vectorially,
$$
\Lambda C_s=s(2n+1-s) C_s \tag{6}
$$
So $C_s$ is an eigenvector associated to the eigenvalue $s(2n+1-s)$ as wished. |
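As a sanity check, the $2n+1$ eigenvalues $s(2n+1-s)$, $0\le s\le 2n$, sum to $$\sum_{s=0}^{2n}s(2n+1-s)=(2n+1)\frac{2n(2n+1)}{2}-\frac{2n(2n+1)(4n+1)}{6}=\frac{2n(n+1)(2n+1)}{3},$$ which agrees with the trace of $\Lambda$: $2n+1$ diagonal entries, each equal to $\frac{2}{3}n(n+1)$.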
Lagrange’s Remainder Theorem | Look at the Taylor polynomial of $f(x) = \ln x$ at $a = 1$ with the Lagrange form of the remainder. Since $f$ is infinitely differentiable on $(0,\infty)$, for any $n \ge 1$ and for any $x >0$ you have
$$f(x) = \sum_{k=0}^n \frac{f^{(k)}(1)}{k!}(x-1)^k + \frac{f^{(n+1)}(\xi_n)}{(n+1)!} (x-1)^{n+1}$$
for some $\xi_n$ in between $1$ and $x$. Since $f(1) = 0$ and
$$f^{(k)}(x) = (-1)^{k-1} \frac{(k-1)!}{x^k}$$ for all $x > 0$, the above expression becomes
$$\ln x = \sum_{k=1}^n\frac{(-1)^{k-1}}{k} (x-1)^k + \frac{(-1)^n}{(n+1) \xi_n^{n+1}}(x-1)^{n+1}.$$
If $x=2$ then $\xi_n \ge 1$ so that
$$ \left| \ln x - \sum_{k=1}^n\frac{(-1)^{k-1}}{k} (x-1)^k \right| \le \frac{1}{n+1}.$$
Now take $n \to \infty$. |
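In particular, taking $x=2$ and letting $n \to \infty$ recovers the alternating harmonic series: $$\ln 2 = \sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{k} = 1-\frac12+\frac13-\cdots.$$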
Topology on $\mathbb R$ induced by a strictly monotone function | Consider the strictly monotonic function $f$ given by
$$
f(x) = \begin{cases}
x & x \le 0\\
x +1 & x > 0\\
\end{cases}
$$
It's continuous except at $0$, but in the topology induced by it the sequence $x_n = 1/n$ no longer converges to the point $0$. So your statement about giving the same topology isn't correct.
I haven't proved it, but I think the topology induced by this function is a "separated union" of $(-\infty, 0]$ and $(0, +\infty)$, which you can think of taking regular $\mathbb{R}$ and breaking it at $0$ so that the point $0$ stays attached to the negatives. You can generalize this to multiple points of discontinuity, and think about what the difference would be if the function was right-continuous, or neither left- nor right- continuous, at the points of discontinuity. |
How to explain indeterminations, and some approaches to $+\infty$ or $-\infty$, for middle school students? | Here is what I would suggest as an informal explanation for some kinds of indeterminate forms, though it may be less helpful for others.
If you try to evaluate $0^0$ by concentrating on the exponent, you would probably say, "anything to the power $0$ is $1$, therefore the answer is $1$". On the other hand, if you concentrated on the base, you would probably say "$0$ to any power is $0$, therefore the answer is $0$". The fact that you can get contradictory answers in this way is what makes it an indeterminate form.
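Both answers are in fact realized as limits: $\lim_{x\to 0^+}x^0=1$, while $\lim_{x\to 0^+}0^x=0$; approaching "$0^0$" along different routes genuinely produces different values.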
Similarly, for "$\frac00$", concentrating on the numerator suggests an answer of $0$ while concentrating on the denominator suggests an answer of $\infty$. In this case however, I would be very careful not to let the students believe that $\infty$ is ever a sensible answer to an arithmetic question. |
Zeros of multivariate polynomials | Bezout's theorem is stronger than (1) because it tells you exactly how many points there are in $V(f,g)$.
In fact, one can show (1) directly. First note that it's enough to prove the claim for $k$ algebraically closed. This is obvious if your definition of $V(f)$ is $\{(a,b)\in k^2\colon f(a,b)=0\}$. If you don't know scheme theory, just skip the next paragraph.
If for you $V(f)=\{\mathfrak p\in \text{Spec } k[x,y]\colon f\in \mathfrak p\}$ then maybe you have to use the fact that the inclusion $k[x,y]\to \overline{k}[x,y]$ induces a surjective map on the spectra because the extension is integral. So from now on, $k=\overline{k}$. Note that $\text{Spec }k[x,y]$ has three types of elements: $(0)$, the maximal ideals $(x-a,y-b)$ for $a,b\in k$ and the principal prime ideals $(g)$ generated by an irreducible element $g$. Now $V(f)$ can contain only a finite number of this last type of ideal, because $k[x,y]$ is a UFD. So since $V(f,g)=V(f)\cap V(g)$, then $V(f,g)$ can contain only a finite number of nonmaximal ideals. Now a maximal ideal $(x-a,y-b)$ lies in $V(f,g)$ precisely when $f(a,b)=g(a,b)=0$.
We'll argue by contradiction. Suppose there are infinitely many elements $(a,b)\in k^2$ s.t. $f(a,b)=g(a,b)=0$. Now let's use the following trick: we can think of $f,g$ as elements of $k(y)[x]$, namely the ring of polynomials in the variable $x$ with coefficients in $k(y)$. I claim that $f,g$ are coprime in $k(y)[x]$. In fact, suppose $d(x)\in k(y)[x]$ is a common divisor. Then there exist $e(x),e'(x)\in k(y)[x]$ such that $f(x,y)=d(x)e(x)$, $g(x,y)=d(x)e'(x)$. Now clearing denominators we end up with polynomials $h,h'\in k[y]$, $d',l,l'\in k[x,y]$ such that $h(y)f(x,y)=d'(x,y)l(x,y)$ and $h'(y)g(x,y)=d'(x,y)l'(x,y)$. This equality holds in $k[x,y]$ which is a UFD. Therefore, if $r(x,y)\in k[x,y]$ is an irreducible component of $d'(x,y)$, it can't divide both $f,g$ because by hypothesis they don't have irreducible components in common. Therefore we can assume $r(x,y)\mid h(y)$ in $k[x,y]$ and this proves that $r(x,y)$ and so also $d'(x,y)$ is a polynomial in $k[y]$. But now $d'(x,y)=m(y)d(x)$ in $k(y)[x]$ for some $m(y)\in k[y]$. This proves that $d(x)$ has degree $0$ in $x$, and is therefore a constant in $k(y)[x]$. Thus $f,g$ are coprime in $k(y)[x]$.
Now since this ring is an Euclidean domain there are $r(x),s(x)\in k(y)[x]$ with $r(x)f(x,y)+s(x)g(x,y)=1$. Clearing denominators again we get an expression of the type $r(x,y)f(x,y)+s(x,y)g(x,y)=n(y)$ which holds in $k[x,y]$. By hypothesis, there are infinitely many $(a,b)\in k^2$ with $f(a,b)=g(a,b)=0$, so there are infinitely many $(a,b)$ with $n(b)=0$. Since $n(y)\neq 0$, this means that for some fixed $\bar{b}\in k$ there are infinitely many $a\in k$ such that $f(a,\bar{b})=0$. But then $f(x,\bar{b})=0$ for all $x$. This proves that $f(x,y)$ has degree $0$ in $x$, so it must lie in $k[y]$. To conclude, note that the whole construction is symmetric in $x,y$, so doing everything again replacing $x$ and $y$ tells us that $f(x,y)=0$, and we're done. |
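For a concrete instance of (1): $f=y-x^2$ and $g=y$ share no common irreducible factor, and their common zeros must satisfy $y=x^2$ and $y=0$, so $V(f,g)=\{(0,0)\}$ is finite. By contrast, $f=xy$ and $g=x$ share the factor $x$ and vanish together on the entire line $x=0$.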
Deriving multi-angle addition sin formulaes | The key idea is to use Euler's formula for one angle
$$ e^{i\theta}=\cos(\theta)+i\sin(\theta)=
\cos(\theta)(1+i\tan(\theta)). \tag{1} $$
Thus, for several angles let $\,\theta:=\theta_1+\theta_2+\dots+\theta_n.\,$
Then let $$ E:=e^{i\theta_1} e^{i\theta_2}\cdots e^{i\theta_n}=
e^{i\theta}=\cos(\theta)+i\sin(\theta). \tag{2}$$
From equations $(1)$ and $(2)$ get
$$ E \!=\! \cos(\theta_1)\!\cos(\theta_2)\cdots\cos(\theta_n)
(1\!+\!i\tan(\theta_1))(1\!+\!i\tan(\theta_2))\cdots(1\!+\!i\tan(\theta_n))
. \tag{3}$$
Expand equation $(3)$ and separate the real and imaginary parts to get
$$ \cos(\theta) = \cos(\theta_1)\cos(\theta_2)\cdots\cos(\theta_n)
(e_0-e_2+e_4-\dots) \tag{4} $$ and
$$ \sin(\theta) = \cos(\theta_1)\cos(\theta_2)\cdots\cos(\theta_n)
(e_1-e_3+e_5-\dots) \tag{5} $$
where $e_k$ denotes the $k$-th elementary symmetric polynomial
in $\tan(\theta_1),\tan(\theta_2),\dots,\tan(\theta_n)$.
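For example, with $n=2$ we have $e_0=1$, $e_1=\tan(\theta_1)+\tan(\theta_2)$ and $e_2=\tan(\theta_1)\tan(\theta_2)$, so $(4)$ and $(5)$ reduce to the familiar addition formulas $$\cos(\theta_1+\theta_2)=\cos\theta_1\cos\theta_2-\sin\theta_1\sin\theta_2,\qquad \sin(\theta_1+\theta_2)=\sin\theta_1\cos\theta_2+\cos\theta_1\sin\theta_2.$$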
Find the joint PMF of X and Y, are they independent? | $\newcommand{\rchi}{\raise{0.5ex}\chi}$
You have been given (effectively) that: $X\sim \mathcal U\{1,2,3,4,5,6\}$ and $Y\mid X\sim \mathcal{Bin}(X, p)$.
That is that $X$ is discrete uniformly distributed (the roll of an unbiased die), and $Y$ when conditioned on $X$ is binomially distributed (the count of successes in $X$ iid Bernoulli events).
So you know the marginal pmf of $X$ is $\mathsf P_X(k) = \frac 1 6 \;\rchi_{k\in\{1,\dots,6\}}$ and the conditional pmf of $Y$ is $\mathsf P_{Y\mid X}(h\mid k) = \binom{k}{h} p^h(1-p)^{k-h} \;\rchi_{h\in\{0,\dots,k\}} $
From this you can determine the joint pmf of $X,Y$, and from that the marginal pmf of $Y$.
$$\begin{array}{|c|c:c:c:c:c:c|c|} \hline
& \text{1} & \text{2} & \text{3} & \text{4} & \text{5} & \text{6} & f_Y(y) \\ \hline
\text{0} & (1-p)/6 & (1-p)^2/6 & (1-p)^3/6 & (1-p)^4/6 & (1-p)^5/6 & (1-p)^6/6 & \tfrac 1 6 \sum\limits_{k=1}^6 (1-p)^k \\ \hdashline
\text{1} & p/6 & p(1-p)/3 & & & & & \\ \hdashline
\text{2} & 0 & p^2/6 & & & & & \\ \hdashline
\text{3} & 0 & 0 & & & & & \\ \hdashline
\text{4} & 0 & 0 & 0 & & & & \\ \hdashline
\text{5} & 0 & 0 & 0 & 0 & & & \\ \hdashline
\text{6} & 0 & 0 & 0 & 0 & 0 & & \\ \hline
f_X(x) & 1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 1 \\ \hline
\end{array}$$ |
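For instance, the blank entry in column $3$, row $2$ is $$\mathsf P(X=3,\,Y=2)=\frac 1 6\binom{3}{2}p^2(1-p)=\frac{p^2(1-p)}{2},$$ and the remaining cells all follow the same pattern $\frac 1 6\binom{k}{h}p^h(1-p)^{k-h}$.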
Let p be a prime. If a group has more than $p-1$ elements of order $p$, then prove that the group can't be cyclic. | Here is a proof without using Lagrange's theorem. Suppose $p$ does not divide $|G|=n$. Since $G=\langle a\rangle $ we must have $|a|=n$. Now, by our assumption there is an element $x\in G$ of order $p$. Since it is an element of the group there must be some $0\leq t\leq n-1$ such that $x=a^t$. Then:
$a^{pt}=x^p=e$
$n$ is the order of $a$, so this implies $n|pt$. But by our assumption $\gcd(p,n)=1$, so we conclude that $n|t$. But since $0\leq t\leq n-1$ this implies $t=0$. So $x=a^0=e$. It is a contradiction because $e$ has order $1$, not $p$. |
$SL(2, \mathbb F_3)$ does not have a subgroup of order $12$ | Since $G := \text{SL}(2,\mathbb{F}_3)$ has order 24, any subgroup of order $12$ would be normal (because of index 2), and hence $G$ would have an abelian quotient isomorphic to $C_2$. In particular we obtain an epimorphism $G^{ab} \to C_2$, where $G^{ab} := G/G'$.
To get a contradiction, you could prove that $G^{ab}$ does not contain an element of order 2 (indeed, $G^{ab} \cong C_3$).
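Indeed, the commutator subgroup of $G$ is its (normal) Sylow $2$-subgroup, which is isomorphic to the quaternion group $Q_8$, so $G^{ab}$ has order $24/8=3$.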
Question regarding addition of order types | Ok so here goes the answer that I came up with.
Let $\bar1 = it\langle1,\epsilon_1\rangle$ and $\bar\omega=it\langle\omega,\epsilon_\omega\rangle$
By the definition of addition of order types, to compute $\bar\omega + \bar1$ we require $\langle A,R\rangle\in\bar\omega$ and $\langle B,S \rangle\in\bar1$, where $A$ and $B$ are disjoint sets.
We now define :
$$R\oplus S = R\cup S \cup (A\times B)$$
It is easily proved that the set so defined is a linear ordering on $A\cup B$.
Now notice that since $B$ is equinumerous to $1$ it must contain exactly one element, which is the maximum of $A \cup B$ in the $R \oplus S$ ordering.
Now let $\alpha$ be the single element of $B$. We can now see that $\langle \omega ,\epsilon_\omega \rangle \cong \langle seg\;\alpha,R\rangle$, so it cannot be isomorphic to $\langle A \cup B , R \oplus S\rangle$. To find an isomorphic ordinal we therefore consider the next largest one, namely the successor of $\omega$, and this one works, since it contains an additional element to pair with our element $\alpha$.
Piece of cake right? |
How can we prove that $\sqrt{ x^{2} }$ is equals to $|x|$? | All you need to show is that for all (real!) numbers $x$, the number $|x|$ is $\ge 0$ and its square equals $x^2$. |
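Indeed, $|x|\geq 0$ by definition, and $|x|^2=x^2$ in both cases ($|x|=x$ when $x\geq 0$, $|x|=-x$ when $x<0$); since $|x|$ is the unique nonnegative number whose square is $x^2$, it follows that $\sqrt{x^2}=|x|$.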
evaluating an integral - clarification | While I'm not sure what your book means by " a curve consisting of $|z|=1$ (anticlockwise) and $|z|=3$ (clockwise)", I believe this should clear things up for you as to why $|z| = C$ is a curve.
Recall that if $z=x+iy$ where $x,y \in \mathbb {R}$ and $C \in \mathbb {R}^+$, then
$$|z|=\sqrt{x^2+y^2}=C$$
So if we let $z = C\cos \theta + i C\sin \theta = C e^{i \theta}$
$$|z|=\sqrt{C^2(\cos^2 \theta+\sin^2 \theta)} = |C| = C$$
So we see that the curve $|z|=C$ is a circle of radius $C$ centred at the origin of the complex plane, which can be parametrized by $z(\theta) = Ce^{i\theta}$. |
What is the region in the coefficient plane such that $P(x)$ has no real roots? | But $Q(u)$ can have real roots. If both the roots of $Q(u)=0$
lie in the interval $(-2,2)$ then $x+1/x=u$ is insoluble over $\Bbb R$ and $P(x)=0$ will have no real roots. So you need to find all $p$, $q$ such that either $Q(u)=0$ has no real roots, or two real roots both in $(-2,2)$. |
Quaternionic general linear group is open | I suppose that by $\mathbb{H}$ you mean real quaternions, i.e. quaternions $q=x_{0}+x_{1}\mathbf{i}+x_{2}\mathbf{j}+x_{3}\mathbf{k}$ with coefficients $x_{i}$ real.
You can make the idea of the usual proof work, i.e. you want to show that $GL(n,\mathbb{H})=\phi^{-1}(\mathbb{C}-\{0\})$ for a suitable continuous map $\phi:M_n(\mathbb{H})\rightarrow \mathbb{C}$.
The steps below will lead you to $\phi$, which will turn out to be related to the determinant of a square matrix with complex entries:
Show that every quaternion $q$ can be uniquely expressed as $q=a_1 + a_2 \mathbf{j}$, where $a_1, a_2\in \mathbb{C}$.
Show that a matrix $A\in M_n(\mathbb{H})$ can be uniquely expressed as $A=A_{1}+A_{2}\mathbf{j}$, where $A_{1},A_{2}\in M_{n}(\mathbb{C})$.
If $A\in M_{n}(\mathbb{H})$ let $\overline{A}$ denote its conjugate (recall that the conjugate of a real quaternion $q=q_{0}+q_{1}\mathbf{i}+q_{2}\mathbf{j}+q_{3}\mathbf{k}$ is $\bar{q}=q_{0}-q_{1}\mathbf{i}-q_{2}\mathbf{j}-q_{3}\mathbf{k}$).
Show that the map $\varphi:M_n(\mathbb{H})\rightarrow M_{2n}(\mathbb{C})$ defined by
$$A=A_{1}+A_{2}\mathbf{j}\longmapsto\chi_{A}:=\begin{pmatrix}
A_{1} & A_{2}\\
-\overline{A_{2}} & \overline{A_{1}}
\end{pmatrix}.
$$
is continuous.
Show that if $A,B\in M_{n}(\mathbb{H})$ and $AB=I$, then $BA=I$.
Show that if $A,B\in M_{n}(\mathbb{H})$ then $\chi_{AB}=\chi_{A}\chi_{B}$. Thus $\chi_{A^{-1}}=\chi_{A}^{-1}$.
Show that $A\in GL(n,\mathbb{H})$ iff $\chi_{A}\in GL(2n,\mathbb{C})$.
Let $\phi=\det \circ \varphi$. Show that $A\in GL(n,\mathbb{H})$ iff $\phi(A)\neq 0$, and that $GL(n,\mathbb{H})=\phi^{-1}(\mathbb{C}-\{0\})$.
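As a sanity check in the case $n=1$: for $q=a_1+a_2\mathbf{j}$ one finds $\det\chi_q=a_1\overline{a_1}+a_2\overline{a_2}=|q|^2$, which vanishes exactly when $q=0$, matching $GL(1,\mathbb{H})=\mathbb{H}\setminus\{0\}$.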
Have fun! |
Product Spaces and Integrals of Indicator Functions | The function $1_{A\times B}(x, y)$ takes the value $1$ wherever $x\in A$ and $y\in B$; otherwise its value is $0$. If you are integrating the function over the second variable, just keep $x$ fixed.
In fact, note that you can express this function as a product of two single variable functions:
$$1_{A\times B}(x, y) = 1_{A}(x)1_{B}(y).$$
As you are only integrating in the $y$ variable, this should make your computation easier. |
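Concretely, integrating over $y$ with respect to a measure $\mu$ gives $$\int 1_{A\times B}(x,y)\,d\mu(y)=1_A(x)\int 1_B(y)\,d\mu(y)=1_A(x)\,\mu(B),$$ which is a function of $x$ alone.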