title | upvoted_answer
---|---
Orthogonal differentiable family of curves | It is possible to show that a surface which admits two orthogonal families of geodesics is isometric with a plane (i.e. the Gaussian curvature is zero) in three steps:
we show that the coordinate lines are orthogonal
we show that once the two families of geodesics are parametrized as coordinate curves, $E_v=0$ and $G_u=0$
we use the fact that when $F=0$, $E_v=0$ and $G_u=0$ the Gaussian curvature is zero
First step
Consider two plane curves $\alpha(t)$ and $\beta(t)$ in the domain of $X$, and let $\gamma(t)=X \circ \alpha$ and $\delta(t)=X \circ \beta$ be the corresponding curves on the regular surface S. Then at the point $p= X(\alpha(t_0))=X(\beta(t_0))$, the angle $\theta$ between the two curves is
\begin{equation}
\cos{\theta}= \frac{\gamma' \cdot \delta'}{|\gamma'||\delta'|}
\end{equation}
In the basis $\{X_u,X_v\}$, for the coordinate curves the above equation becomes
\begin{equation}
\cos{\theta}= \frac{X_u \cdot X_v}{|X_u||X_v|}= \frac{F}{\sqrt{E G}}
\end{equation}
Thus, all the coordinate curves of a parametrization are orthogonal if and only if $F=0$ for all $(u,v)$ in the domain of $X$.
Second step
Given the arc-length integrand $f$ (often loosely called the metric)
\begin{equation}
f=\sqrt{E(u,v)u'^{2}+2 u' v' F(u,v)+G(u,v) v'^{2}}
\end{equation}
we compute the following partial derivative:
\begin{equation}
\frac{\partial f}{\partial u}=\frac{1}{2 f} \big(u'^{2} E_u+2 u' v' F_u+v'^2 G_u \big)
\end{equation}
\begin{equation}
\frac{\partial f}{\partial u'}=\frac{1}{f} \big(u' E+v' F \big)
\end{equation}
\begin{equation}
\frac{\partial f}{\partial v}=\frac{1}{2 f} \big(u'^{2} E_v+2 u' v' F_v+v'^2 G_v \big)
\end{equation}
\begin{equation}
\frac{\partial f}{\partial v'}=\frac{1}{f} \big(u' F+ v' G \big)
\end{equation}
where $E_x$, $F_x$, and $G_x$ denote the derivatives of $E$, $F$, and $G$ with respect to the coordinate $x \in \{u,v\}$, while $u'$ and $v'$ are the derivatives with respect to the parameter $t$. On the curve $v=\text{constant}$, $u$ can be taken as the parameter, so the curve is $u=t$, $v=c$. We then have $u'=1$ and $v'=0$. By substituting these values in the previous equations we have:
\begin{equation}
\frac{\partial f}{\partial u}=\frac{1}{2 \sqrt{E}} E_u
\end{equation}
\begin{equation}
\frac{\partial f}{\partial u'}=\sqrt{E}
\end{equation}
\begin{equation}
\frac{\partial f}{\partial v}=\frac{1}{2 \sqrt{E}} E_v
\end{equation}
\begin{equation}
\frac{\partial f}{\partial v'}=\frac{F}{\sqrt{E}}
\end{equation}
According to the Euler–Lagrange equations of the calculus of variations, the parametric equations $(u(t),v(t))$ that extremize the arc-length functional with integrand $f$ must satisfy the differential equations:
\begin{equation}
\frac{\partial f}{\partial u}-\frac{d}{dt} \big (\frac{\partial f}{\partial u'} \big)=0
\end{equation}
\begin{equation}
\frac{\partial f}{\partial v}-\frac{d}{dt} \big (\frac{\partial f}{\partial v'} \big)=0
\end{equation}
By substituting the above values (the first equation is satisfied identically, since $\frac{d}{dt}\sqrt{E}=\frac{E_u}{2\sqrt{E}}$ along $u=t$, $v=c$), after some calculations we have that the curve $v=\text{constant}$ is a geodesic if
\begin{equation}
E (E_v- 2 F_u) + E_u F =0
\end{equation}
This condition is therefore necessary. Conversely, if the above condition is satisfied, it is possible to prove that the curve $v=\text{constant}$ is a geodesic. We can now repeat the same computation for a curve $u=\text{constant}$. In this case, $u=\text{constant}$ is a geodesic if and only if
\begin{equation}
G (G_u- 2 F_v) + G_v F =0
\end{equation}
Third step
We can then conclude that for an orthogonal parametrization whose two families of geodesics are the coordinate curves, the conditions $F=0$, $E_v=0$, and $G_u=0$ must hold. Thus the metric tensor becomes:
\begin{equation}
f=\sqrt{E(u)u'^{2}+G(v) v'^{2}}
\end{equation}
In the situation where one considers an orthogonal parametrization of a neighborhood of a surface, the Gaussian curvature can be written as
\begin{equation}
K=-\frac{1}{\sqrt{E G}} \big[\frac{\partial }{\partial u} \big(\frac{1}{\sqrt{E}} \frac{\partial \sqrt{G}}{\partial u} \big)+\frac{\partial }{\partial v}\big(\frac{1}{\sqrt{G}} \frac{\partial \sqrt{E}}{\partial v} \big)\big]
\end{equation}
Since $E=E(u)$ and $G=G(v)$, we have that the Gaussian curvature is zero. The formula I used for the Gaussian curvature can be derived from the usual definition of the Gaussian curvature as the ratio of the determinants of the second and first fundamental forms, by writing the determinant of the second fundamental form using the triple product. The computation is quite long.
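As a quick sanity check, here is a minimal sympy sketch (assuming sympy is available) verifying symbolically that this curvature formula vanishes when $E=E(u)$ and $G=G(v)$:

```python
import sympy as sp

u, v = sp.symbols('u v')
E = sp.Function('E')(u)   # E depends only on u
G = sp.Function('G')(v)   # G depends only on v

# K for an orthogonal parametrization, as in the formula above
K = -1/sp.sqrt(E*G)*(sp.diff(sp.diff(sp.sqrt(G), u)/sp.sqrt(E), u)
                     + sp.diff(sp.diff(sp.sqrt(E), v)/sp.sqrt(G), v))
print(sp.simplify(K))  # 0
```
|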
If E is measurable, inner and outer measure could differ? | Here is a simple counterexample: a non-measurable set $E$ such that $|E|_i=|E|_e=+\infty$.
Let $A$ be a non-measurable subset of $[0,1]$. Take $E=(-\infty,-1] \cup A$. Clearly $E$ is non-measurable. We also have that
$$ |E|_i \geqslant |(-\infty,-1] |_i =+\infty$$
and
$$ |E|_e \geqslant |(-\infty,-1] |_e =+\infty$$
So, we have $|E|_i=|E|_e=+\infty$. |
Definition of ordered pairs multiplication for complex numbers | Once upon a time, people started doing arithmetic with numbers that involved $i$ subject to the rule $i^2=-1$. So when they multiplied $a+bi$ times $c+di$, they got $ac+adi+bic+bidi=(ac-bd)+(ad+bc)i$ (where the minus sign comes from $i^2=-1$). But their rules for doing arithmetic didn't provide an answer for the obvious question "What exactly is this $i$?" If asked this question they would answer (as some people allegedly do nowadays when asked how to interpret quantum mechanics) "Shut up and compute."
Later, people noticed that the complex numbers correspond to ordered pairs of real numbers, with $a+bi$ corresponding to $(a,b)$. [So you could even plot complex numbers as points in the plane, but that's not relevant to the present question.] So they got the bright idea of saying that the complex number $a+bi$ is the ordered pair $(a,b)$, where $a$ and $b$ are real. Now they had an answer to "What is $i$?", namely $i=0+1i=(0,1)$.
All the old algebraic facts about complex numbers could be translated in terms of this new view of what complex numbers are. Instead of writing $(a+bi)\cdot(c+di)=(ac-bd)+(ad+bc)i$, they would write $(a,b)\cdot(c,d)=(ac-bd,ad+bc)$.
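For what it's worth, here is a tiny Python sketch of that ordered-pair arithmetic (the class name `OrderedPair` is just for illustration), checked against the built-in complex type:

```python
class OrderedPair:
    """A complex number represented as an ordered pair (a, b) of reals."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __add__(self, other):
        return OrderedPair(self.a + other.a, self.b + other.b)
    def __mul__(self, other):
        # (a,b)*(c,d) = (ac - bd, ad + bc), the rule derived from i^2 = -1
        return OrderedPair(self.a*other.a - self.b*other.b,
                           self.a*other.b + self.b*other.a)

p = OrderedPair(1, 2)*OrderedPair(3, -4)
print((p.a, p.b), (1 + 2j)*(3 - 4j))  # (11, 2) and (11+2j)
```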
They could cover up their trail by never mentioning all that stuff about $i$ and just writing definitions that say "A complex number is an ordered pair of real numbers" and "You add complex numbers by $(a,b)+(c,d)=(a+c,b+d)$" and "You multiply them by $(a,b)\cdot(c,d)=(ac-bd,ad+bc)$." They proved theorems about these, and they lived happily ever after, or at least until somebody like you came along and asked how they made up those formulas. |
Intuition for a Proof of the Fundamental Theorem of Algebra (According to 3Blue1Brown) | What Rouche's theorem says is that if two analytic functions $f,g$ are "relatively close" (given by the condition that $|f-g|<|f|$ along some closed curve contained in the domain of analyticity) then both have the same number of roots (according to multiplicity). So in the proof of the fundamental theorem of algebra, $Q(z)=z^d$ has $d$ roots ($0$ with multiplicity $d$) thus, by the fact that $P$ and $Q$ are relatively close (along a large circular path for example), then $P$ has also $d$ roots inside the circular path. (some could have multiplicity larger than one, but when you add the multiplicities of each of the roots you get $d$)
The proof of Rouche's theorem is indeed based on the winding number of curves, in particular on the following results.
Lemma: Let $\gamma_0$ and $\gamma_1$ be closed paths in $\mathbb{C}$
parameterized by the interval $[a,b]$. If there is
$\alpha\in\mathbb{C}$ such that
$$
|\gamma_1(t)-\gamma_0(t)|<|\alpha-\gamma_0(t)|,\qquad a\leq t\leq b
$$
then, $\operatorname{Ind}_{\gamma_0}(\alpha)=\operatorname{Ind}_{\gamma_1}(\alpha)$.
This result says roughly that if $\gamma_0$ and $\gamma_1$ are closed curves that are close relative to a point $\alpha$, then $\operatorname{Ind}_{\gamma_0}(\alpha)=\operatorname{Ind}_{\gamma_1}(\alpha)$.
The last ingredient is that the number of zeroes of the function can be directly connected to the winding (Index) number of a curve:
Theorem: Let $D\subset\mathbb{C}$ open and let $\gamma$ be a closed path (parameterized by the interval $[a,b]$) such that $\operatorname{Ind}_\gamma(z)=0$ for all $z\in D^c$.
Suppose that $\operatorname{Ind}_\gamma(z)\in\{0,1\}$ for all
$z\in D\setminus\gamma^*$. If $f$ is an analytic function on $D$ and
$f(\gamma(t))\neq0$ for all $a\leq t\leq b$, then the number of zeroes $N_f$ of
$f$ in $D_1=\{z\in D:
\operatorname{Ind}_\gamma(z)=1\}$, counted according to their multiplicity,
is finite and
\begin{aligned}
N_f=\frac{1}{2\pi i}\int_\gamma
\frac{f'(z)}{f(z)}\,dz=\operatorname{Ind}_{\gamma_f}(0)
\end{aligned}
where $\gamma_f:=f\circ\gamma$.
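To make the last identity concrete, here is a rough numerical sketch (plain numpy, with an arbitrary cubic as a hypothetical example) estimating $N_f=\frac{1}{2\pi i}\int_\gamma f'/f\,dz$ along a large circle; it recovers the degree, i.e. the number of enclosed roots:

```python
import numpy as np

coeffs = [1, 0, -2, 5]                 # f(z) = z^3 - 2z + 5, an arbitrary example
dcoeffs = np.polyder(coeffs)           # f'(z)
theta = np.linspace(0, 2*np.pi, 200001)
z = 10.0*np.exp(1j*theta)              # gamma: a circle of radius 10
dz = 1j*z                              # dz/dtheta
N = np.trapz(np.polyval(dcoeffs, z)/np.polyval(coeffs, z)*dz, theta)/(2j*np.pi)
print(N.real)                          # ~3.0, the degree of f
```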
I hope this gives you more context for the intuition you got from the video you mentioned. |
How do I evaluate the following integrals? | Hint:
$$\begin{align}
& I-J=\int{\frac{x^{2000}-x^{666}}{x^{2668}+1}\,dx}=\int{\frac{1}{x}\,\frac{x^{2001}-x^{667}}{x^{2668}+1}\,dx} \\
& =\int{\frac{1}{x}\,\frac{\left( x^{667} \right)^{3}-x^{667}}{\left( x^{667} \right)^{4}+1}\,dx}, \qquad \text{now use } u=x^{667}, \\
& =\frac{1}{667}\int{\frac{u^{2}-1}{u^{4}+1}\,du}=\frac{1}{667}\cdot\frac{\ln \left( u^{2}-\sqrt{2}\,u+1 \right)-\ln \left( u^{2}+\sqrt{2}\,u+1 \right)}{2\sqrt{2}}. \\
\end{align}$$
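A quick sympy check (assuming sympy) that the antiderivative above is correct:

```python
import sympy as sp

u = sp.symbols('u', real=True)
F = (sp.log(u**2 - sp.sqrt(2)*u + 1) - sp.log(u**2 + sp.sqrt(2)*u + 1))/(2*sp.sqrt(2))
print(sp.simplify(sp.diff(F, u) - (u**2 - 1)/(u**4 + 1)))  # 0
```
|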
How many pieces do $k$ random hyperplanes split space into? | The question of "how many" is answered in Theorem 1 of "On the number of regions in an $m$-dimensional space
cut by $n$ hyperplanes" here where it says (adapted to variable names in the OP):
$\textbf{Theorem 1 }$ The number of regions $C_n(k)$, cut by a $k$-cluster in $\mathbb{R}^n$ (or cut by $k$ great circles in $S^{n-1}$), is exactly twice $G_{n-1}(k-1)$, that is
\begin{equation}
C_n(k)=2\left[{k-1 \choose 0} + {k-1 \choose 1}+{k-1 \choose 2}+\cdots + {k-1 \choose n-1}\right],
\end{equation}
where ${a\choose b}$ is 0 if $b>a.$
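As a quick illustration, a small Python helper (hypothetical name) evaluating this count:

```python
from math import comb

def regions(n: int, k: int) -> int:
    """C_n(k): regions cut out of R^n by a k-cluster of hyperplanes."""
    return 2*sum(comb(k - 1, i) for i in range(n))

# Sanity check: k concurrent lines split the plane (n = 2) into 2k sectors.
print([regions(2, k) for k in range(1, 6)])  # [2, 4, 6, 8, 10]
```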
They define a $k$-cluster to be a collection of $k$ distinct $(n-1)$-dimensional affine sets that have a point in common. But if an affine set $A=a+\{Mx\mid x\in\operatorname{dom} M\}$, characterized by a matrix $M$ and offset $a$, contains $0$, then there is some $x_0$ with $a+Mx_0=0$, so that $A=a+\{M(x+x_0)\mid x\in\operatorname{dom}M\}=\{Mx\mid x\in\operatorname{dom}M\}$, and then $A$ is just a subspace. In particular, a $k$-cluster about $0$ is just a collection of $k$ distinct hyperplanes.
Trivially, with probability 1 the construction in the OP gives $k$ distinct hyperplanes (the collection of outcomes where one of the hyperplanes' spanning sets is a linear combination of another has volume 0), so the OP's hyperplanes are a $k$-cluster.
I've moved the discussion of the distribution of area to here: https://mathoverflow.net/questions/293151/what-is-the-shape-of-a-random-finitely-generated-convex-cone |
Cauchy-Schwarz Inequality | For any $x$, the sequence $x_n = \langle x,e_n \rangle$ is in the classic Hilbert space $\ell^2$, and $\sum_{n}|\langle x,e_n \rangle|^2 \le \|x\|^2$ follows from Bessel's inequality. Using Cauchy-Schwarz for $\ell^2$ and Bessel's inequality gives
$$
\sum_n |\langle x,e_n\rangle\langle e_n,y\rangle| \le \left(\sum_n |\langle x,e_n\rangle|^2\right)^{1/2}\left(\sum_n |\langle e_n,y\rangle|^2\right)^{1/2}
\le \|x\|\|y\|.
$$
Your statement after "where" doesn't add up. |
Concavity, convexity, quasi-concave, quasi-convex, concave up and down | Yes, convex and concave up mean the same thing.
The function $f(x)=\frac2x,x>0$ is strictly convex (or strictly concave up), because:
$$f(tx_1+(1-t)x_2)< tf(x_1)+(1-t)f(x_2), \ \ \ 0<x_1<x_2,\ \ t\in (0,1) \ \ \text{or}\\
f'(x_1)< \frac{f(x_2)-f(x_1)}{x_2-x_1}, \ \ 0<x_1<x_2, \ \text{or}\\
f''(x)=\frac4{x^3}>0,x>0,$$
where $f\in C^0$, $f\in C^1$ or $f\in C^2$, respectively.
The function $f(x)=\frac2x$ is both quasi-concave and quasi-convex, because:
$$f(tx_1+(1-t)x_2)\ge \min\{f(x_1),f(x_2)\}, t\in[0,1] \ \text{(quasi-concavity)}\\
f(tx_1+(1-t)x_2)\le \max\{f(x_1),f(x_2)\}, t\in[0,1] \ \text{(quasi-convexity)}$$ |
The net signed area between $t=0, y=0, t=x$, and $y=f(t)$ | I suppose you're assuming $f$ is continuous. The statement is easily seen to be true for $f(t)=t$, and by subtracting a multiple of this we can assume $f(0) = f(x)$. By the Stone-Weierstrass theorem, trigonometric polynomials with period $x$ are uniformly dense in the continuous functions with period $x$, and so it suffices to prove the result for $f(t) = \cos(2 \pi j t/x)$ and $\sin(2 \pi j t/x)$ for each integer $j$. But for those it can be done explicitly: $\sum_{k=1}^n \sin(2 \pi j k/n) = 0$ for all integers $j$, $\sum_{k=1}^n \cos(2 \pi j k/n) = n$ if $j$ is a multiple of $n$, $0$ otherwise... |
Equicontinuous problem: supremum of equicontinuous functions | We can prove lower and upper semicontinuity separately.
We know that supremum of any family of continuous functions is lower semicontinuous. In fact supremum of any family of lower semicontinuous functions is lower semicontinuous. See, for example, these posts: To show that the supremum of any collection of lower semicontinuous functions is lower semicontinuous or Show that the supremum of a collection of lower semicontinuous function is lower semicontinuous.
So it remains to show that $g(x)=\sup\limits_{f\in S} f(x)$ is upper semicontinuous. I.e., we want to show that for any $M$ the set
$$g^{-1}(-\infty,M)=\{x\in X; g(x)<M\}$$
is open.
So let $x_0$ be a point such that $g(x_0)<M$. Let us choose $\varepsilon=\frac{M-g(x_0)}2$. From equicontinuity we get that there is a neighborhood $U$ of $x_0$ such that for $x\in U$ and for any $f\in S$ we have $|f(x)-f(x_0)|<\varepsilon$.
Then for every $f\in S$ and $x\in U$ we get
$$f(x)=f(x_0)+(f(x)-f(x_0)) \le f(x_0) + |f(x)-f(x_0)| \le g(x_0)+\varepsilon = \frac{M+g(x_0)}2$$
which implies
$$g(x) = \sup_{f\in S} f(x) \le \frac{M+g(x_0)}2 < M.$$
We have shown that any point $x_0\in g^{-1}(-\infty,M)$ has a neighborhood $U$ such that $x_0\in U\subseteq g^{-1}(-\infty,M)$, which means that the set $g^{-1}(-\infty,M)$ is open. |
Induced topology on subset are equal | $U \subseteq W$ is open in the $X$ induced topology if and only if there is an open set $V \subseteq X$ such that
$$
U = V \cap W
$$
Note that
$$
V \cap W = (V \cap Y) \cap W
$$
since $W \subseteq Y$, and so if $U$ is open in the $X$ induced topology, it is open in the $Y$ induced topology and vice versa, since every open set in $Y$ can be written in the form $V \cap Y$ for some $V$ open in $X$. |
Determinant of the product of two matrices with different dimensions | Note that $\ker P$ is non-trivial because $\dim \ker P = 3-\dim \text{im}P\ge 1$ by the dimension theorem. Also, we have
$$
\ker (QP)\supseteq \ker P,
$$ which implies that $QP$ is not invertible. This yields $\det(QP)=0$. Determinant of $PQ$ is irrelevant. |
How to find the Limit of a Sequence, to find a power series radius of convergence | It is easier to use the ratio test; rewrite the sequence as: $a_n = \frac1{n(n-1)}$. Then,
$$\left| \frac{a_{n+1}}{a_n} \right| = \frac{n-1}{n+1}$$
(Edit):
So $$\lim \left| \frac{a_{n+1} x^{n+1}}{a_n x^n} \right| = |x|$$
If $|x| < 1$, then.. and if $|x| > 1$, then.. Conclusion $R = 1$. |
Power Series Approach to Real Functions | I don't know enough about complex variables to answer this, other than to offer that perhaps the other approaches are based on the idea of being complex differentiable and on the validity of certain integral representations. However, as someone who often pursues old literature, I have a suggestion that I often follow. Get ahold of several standard textbooks on the subject from the date the paper was written to approximately 30-40 years before the date the paper was written and look at what they have to say. In the past (before 2000 or so), at least for me, this meant checking out volumes from a university library (when I was at a university) or travelling to a university (often staying in a hotel one or two nights, especially when the nearest university library over 100 miles away) and photocopying appropriate sections from such textbooks. However, now this can mostly be done from any location with internet access. Further below are some examples of the books I would probably look at if I studying what you are studying and I had questions similar to your questions.
One way to find out about the existence of such books is to scan over the book reviews in old volumes of Bull. Amer. Math. Soc. and Amer. Math. Monthly. These used to be on (U.S.) university library shelves, so you could just sit down in front of them, pull the volumes off the shelves one-by-one, and quickly look through their table of contents. However, in the past 10-15 years many libraries have moved these volumes to remote storage locations that make this no longer possible, but the Bull. Amer. Math. Soc. volumes are now freely available on the internet and JSTOR offers those of Amer. Math. Monthly (sufficiently old volumes are freely available). Another way to find out about such books (quicker, but you get less information about the books) is to do various searches in the JFM site. I got the list of books below by searching this site for "words in title" (not phrase option) for the words "functions", "complex", and "variable".
Andrew Russell Forsyth, Theory Of Functions Of A Complex Variable (1918) [Or see the 1893 edition]
James Pierpont, Functions of a Complex Variable (1914)
Heinrich Durège, Elements of the Theory of Functions of a Complex Variable with Especial Reference to the Methods of Riemann (1896 English translation)
Thomas Scott Fiske, Functions of a Complex Variable (1907)
Heinrich Burkhardt, Theory of Functions of a Complex Variable (1913 English translation)
Edgar Jerome Townsend, Functions Of A Complex Variable (1915)
Thomas Murray MacRobert, Functions of a Complex Variable (1917) |
Prove: $\gcd(a,b) = \gcd(a, b + at)$. | HINT $\rm\ \ $ If $\rm\ c\ |\ a\ $ then $\rm\ c\ |\ b + a\ t\ \iff\ c\ |\ b\:.\ $ This implies that $\rm\ \{a\:,\:b+a\ t\}\ $ and $\rm\ \{a\:,\: b\}\ $ have the same set of common divisors $\rm\:c\:,\:$ hence they have the same greatest common divisor.
In congruence form: $\:$ if $\rm\ a\equiv 0\ $ then $\rm\ b+a\ t\equiv 0\: \iff\: b\equiv 0\ \ \ (mod\ c)$ |
What is the maximal value of the expression $\overline{abc}-(a^3+b^3+c^3)?$ | You can write the whole thing as $100a + 10b + c - a^3 - b^3 - c^3$.
Rearranging the terms gives $(100a - a^3) + (10b - b^3) + (c - c^3)$, which you can solve one summand at a time:
You maximize $100a - a^3$ for $a = 6$.
You maximize $10b - b^3$ for $b = 2$.
You maximize $c - c^3$ for $c = 0$ or $1$.
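A brute-force check of all three maximizations at once (plain Python):

```python
best = max((100*a + 10*b + c - a**3 - b**3 - c**3, a, b, c)
           for a in range(1, 10) for b in range(10) for c in range(10))
print(best)  # (396, 6, 2, 1); the digits (6, 2, 0) attain the same value 396
```
|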
Dimension of vector space (using matrices) | Yes, there are a difference, even both must to give the same result, you have these two algorithms for found a basis given a set (preferable ordered) of vectors:
Row space Algorithm:
Step 1. Form the matrix A whose rows are the given vectors.
Step 2. Row reduce A to an echelon form.
Step 3. Output the nonzero rows of the echelon matrix.
Casting-Out Algorithm:
Step 1. Form the matrix M whose columns are the given vectors.
Step 2. Row reduce M to echelon form.
Step 3. For each column $C_k$ in the echelon matrix without a pivot, delete (cast out) the corresponding vector from the given vectors.
Step 4. Output the remaining vectors (which correspond to columns with pivots).
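A minimal sympy sketch of the casting-out algorithm (hypothetical example vectors), using the pivot columns reported by `rref`:

```python
import sympy as sp

vectors = [sp.Matrix([1, 2, 3]), sp.Matrix([2, 4, 6]), sp.Matrix([1, 0, 1])]
M = sp.Matrix.hstack(*vectors)        # Step 1: columns are the given vectors
_, pivots = M.rref()                  # Step 2: row reduce to echelon form
basis = [vectors[i] for i in pivots]  # Steps 3-4: keep pivot columns only
print(pivots, len(basis))             # (0, 2) 2 -- the second vector is cast out
```
|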
Eisenstein's criterion help understanding proof | You've already reduced $f$ modulo $p$, a prime, so you have a monomial in a field, $\mathbb{Z}/p\mathbb{Z}$. Consequently, it must factor as a product of monomials having the stated properties of coefficients and degrees. |
If $\ln(1+x) \approx A+Bx+Cx^2$, differentiate twice both sides and show that $\ln(1+x) \approx x-\frac{1}{2}x^2$ | Other answers have offered the solution as expected. I wanted to highlight that such sort of questions tend to encourage more hand waving than doing any service to mathematics.
When you write $$\log (1 + x) \approx A + Bx + Cx^{2}\tag{1}$$ you actually mean that there is an error $R(x)$ involved so that $$\log(1 + x) = A + Bx + Cx^{2} + R(x)\tag{2}$$ for all values of $x$ with $|x| < 1$. It is not possible to differentiate both sides of a relation like $(1)$ which holds only approximately. Rather we can only differentiate both sides of identities like $(2)$ and then the further analysis depends on the nature and behavior of error term $R(x)$ and its derivatives. The differentiation of approximation cannot be justified because we can't ensure that if $R(x)$ is small then its derivative will also be small for the values of $x$ under consideration.
A proper way to obtain the approximation for $\log(1 + x)$ is to apply Taylor's theorem or perhaps directly start with the approximation $$\frac{1}{1 + t} \approx 1 - t$$ and then see this approximation as an identity with error term as $$\frac{1}{1 + t} = 1 - t + \frac{t^{2}}{1 + t}$$ and integrate it to obtain $$\log(1 + x) = \int_{0}^{x}\frac{dt}{1 + t} = x - \frac{x^{2}}{2} + \int_{0}^{x}\frac{t^{2}}{1 + t}\,dt$$ and then analyze the behavior of the error term $$\int_{0}^{x}\frac{t^{2}}{1 + t}\,dt$$ for $|x| < 1$ and get the approximation $$\log (1 + x) \approx x - \frac{x^{2}}{2}$$ when $x$ is small enough. |
Fundamental Theorem of Abelian Groups - intuition regarding Lemma | Here is some intuition.
Let $G$ be an abelian group of order $n$. If we can write $n=rs$ with $\gcd(r,s)=1$, then $G = G(r) \times G(s)$, where $G(m) = \{ g \in G : g^m =1 \}$.
Indeed, write $1 = ru + sv$. Then $g = g^1 = g^{ru + sv} = g^{ru}g^{sv}$ and $g^{ru} \in G(s)$ and $g^{sv} \in G(r)$. Finally, $G(r) \cap G(s)=1$, again because $\gcd(r,s)=1$.
Therefore, you can decompose $G= \prod_{p^k \mid\mid n} G(p^k)$. |
prove that $ -2 + x + (2+x)e^{-x}>0 \quad \forall x>0$ | We have that $f(0)=0$ and
$$f(x)=-2 + x + (2+x)e^{-x}\implies f'(x)=e^{-x}(e^x-1-x)$$
As an alternative, note that for $x\ge2$ the inequality is trivially satisfied, so consider $0<x<2$, where we have
$$-2 + x + (2+x)e^{-x}>0 \iff e^x<\frac{2+x}{2-x}$$
and by $x=\frac2y$ with $y>1$ we obtain
$$f(y)=\left(\frac{y+1}{y-1}\right)^y=\left(1+\frac{2}{y-1}\right)^y>e^2$$
which, to my knowledge, can't be easily proved for $y\in\mathbb{R}$ without derivatives, namely by showing that $f(y)$ is monotonic.
The monotonicity of $f(y)$ can be easily proved for $y\in\mathbb{N}$ and extended to the real case assuming that $f(y)$ is convex. We could also try to extend the result to reals through rationals. |
Connection between eigenvalues and maximas on a elliptic paraboloid restricted on a circle | If $A$ is a real symmetric matrix, $\max \{\langle x, Ax \rangle: \|x\| = r \} = r^2 \lambda$ where $\lambda$ is the greatest eigenvalue of $A$. This follows from the Min-max theorem.
EDIT:
It's not true for non-symmetric matrices. But note that $$\langle x, Ax \rangle = \left\langle x, \frac{A + A^T}{2} x \right\rangle$$
so instead of the greatest eigenvalue of $A$ you should take the greatest
eigenvalue of $(A + A^T)/2$.
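A numerical illustration with numpy (random example; the sampled maximum slightly undershoots the true one):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))            # generic non-symmetric matrix
S = (A + A.T)/2                            # its symmetric part
r = 2.0

x = rng.standard_normal((3, 200000))       # many random points on |x| = r
x *= r/np.linalg.norm(x, axis=0)
sampled = (x*(A @ x)).sum(axis=0).max()    # max of <x, Ax> over the samples

print(sampled, r**2*np.linalg.eigvalsh(S)[-1])  # nearly equal
```
|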
absolute value of logarithm | Disclaimer: I know nothing about this topic.
I think WolframAlpha is interpreting $|\log_{1/2}x|$ as the modulus of a complex logarithm of $x$.
How can we find such for $x$ negative? Let's start with natural logs:
$e^{\ln x} = x$, so $e^{-i\pi-2ki\pi+\ln x}=-x$, so $-\pi i-2k\pi i+\ln x=\ln(-x)$, so $\ln x=\ln(-x) + \pi i + 2k\pi i$. The modulus of that is
$\sqrt{(\ln(-x))^2+(\pi + 2k\pi)^2}$, for whatever value of $k$ is conventional. Raising $1/2$ to that unwieldy power does not look likely to give anything pretty. That said, when $|x|$ is very large, the value will be very close to $-x=|x|$. |
Polynomial relation between two complex numbers | You have $$y-1=\sqrt[3]{x}\left(\sqrt[3]{x}-2\right)=x^{\frac 23}-2x^{\frac 13}$$
Hence$$(y-1)^3=x^2-6x^{\frac 53}+12x^{\frac 43}-8x$$
$$\Rightarrow (y-1)^3-x^2+2x=-6x^{\frac 53}+12x^{\frac 43}-6x=-6x(x^{\frac 23}-2x^{\frac 13}+1)=-6xy$$ QED |
Projective resolution of $k$ as $k[x]/(x^n)$-module? | Let us write $R=k[x]/(x^n)$, and start as you did with the projection
$$ R\to k \to 0.$$
You correctly identified the kernel with $xR$. But instead of using the inclusion $xR\to R$, let us instead use the map $R\to R$ given by $a\mapsto xa$, which has $xR$ as it image. This leads to
$$R\xrightarrow{\cdot x} R\to k\to 0.$$
Now let us keep going: the kernel of the leftmost map is $x^{n-1}R$, since $xa=0$ means that $xa$ is a multiple of $x^n$, i.e. that $a$ is a multiple of $x^{n-1}$. Once again we can use the map $R\to R$ sending $1$ to $x^{n-1}$, which has the correct image. This gives
$$R\xrightarrow{\cdot x^{n-1}} R\xrightarrow{\cdot x} R\to k\to 0.$$
Now something special happens here: the kernel of the leftmost map is... $xR$ again. Can you see how to keep going? |
Standard Normal Distribution Problem | You want to solve for
$$P(|M_n| \ge 2) \le \frac1{100}$$
Note that mean of $M_n=0$,
Hence this reduces to
$$P(M_n \le -2) \le \frac1{200}$$
You might like to find the standard deviation of $M_n$ to proceed with the question. |
Different Definition of Differentiability | Consider e.g. $f(x,y) = y^2$ with $x_0 = 0$, $y_0 = 0$. Then you can take $a(x,y) = 0$ and $b(x,y) = y$. You never have $b(x_1,y_1) = y_1$ equal to $f_y(x_1,y_1) = 2 y_1$ unless $y_1 = 0$.
EDIT: OK, here's a counterexample. Take $$\begin{aligned}f(x,y) &= x^2 y + y^3\\
a(x,y) &= x y\\
b(x,y) &= y^2 \\
x_0 &= y_0 = 0
\end{aligned}$$
Then $f_y(x_1,y_1) = x_1^2 + 3 y_1^2 \ne b(x_1,y_1) = y_1^2$ unless $x_1=y_1=0$. |
Showing if $\lim_{n\to\infty} a_n=L$ then $\lim_{n\to\infty} -2a_n=-2L$ using definition | The general idea of your solution is very much correct. The only thing that is missing is that we have $L-\epsilon<a_n<L+\epsilon$ only for sufficiently large $n$. |
Clarification about combinations and permutation problems. | You are right that it's a combination since order doesn't matter. But remember you're choosing $10$ girls/boys from $25$ and not from $50$. So it's ${25\choose 10}{25\choose 10}.$ |
Is my answer of finding domain of $\frac{x^2}{2x-1}<1$ correct? | First of all $x \neq 1/2$.
Then if $x > 1/2$, you have $2x-1>0$ and $x^2<2x-1$, i.e. $(x-1)^2<0$, which is impossible.
And if $x < 1/2$, you have $2x-1<0$ and $x^2>2x-1$, i.e. $(x-1)^2>0$, which is always true.
Finally you find that $x < 1/2$. |
factor of Riemann Zeta product formula must be zero? | Even ignoring the analytic continuation issue, you've made a fundamental error: treating infinite products in the same way as finite products.
While it is true that if a product of finitely many terms is zero, then one of the terms must be zero, this is false when we consider infinite products. Consider the product $1\cdot {1\over 2}\cdot {1\over 3}\cdot ...$; it's easy to show that this equals zero, even though each term is positive.
Remember that - just like an infinite sum - an infinite product is defined as a limit: $$\prod_{i\in\mathbb{N}} a_i=\lim_{n\rightarrow\infty}\prod_{i\le n} a_i.$$ If each $a_i$ is nonzero, then $\prod_{i\le n}a_i$ is nonzero - but the limit of a sequence of nonzero terms can be zero, so this doesn't stop the whole product from being zero.
(Along the same lines, you might consider why $0.999999...=1$ even though any finite sequence of $9s$ after the decimal point yields a number $<1$.) |
What are some great topics to do my IB Maths IA (Exploration) on? | Modular elliptic curves. Lots of interesting material and this is an area of significant virgin territory. Also useful for future job prospects as it's important in cryptography and cybersecurity. |
What About The Converse of Lagrange's Theorem? | See these two links for additional information:
The converse of Lagrange's Theorem is true for finite supersolvable groups.
and
A kind of converse of Lagrange's Theorem. In the second one, there is a great classification of the problem done by @Arturo Magidin. |
In $u^{T}A^{-1}= v^{T}B$ with $B$ being a non-square matrix, why are there infinite solutions for $v$? | Given the matrix $X\in{\mathbb C}^{m\times n}$, use its pseudoinverse to construct orthoprojectors onto the nullspaces
$$P=I-X^+X\\Q=I-XX^+$$
The defining (and easily verified) properties of these projectors are
$$\eqalign{
XP &= 0,\quad P^2=P=P^* \\
QX &= 0,\quad Q^2=Q=Q^* \\
}$$
Set $\,X=B^T$ and check the proposed solution
$$\eqalign{
v &= \quad X^+A^{-T}u &+\; Pw \\
Xv &= XX^+A^{-T}u &+\; 0 \\
}$$
This satisfies the original matrix equation only if $\;XX^+=I_m$, in other words if $\;\operatorname{rank}(X) = m$.
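A small numpy sketch (hypothetical sizes, with $B$ tall so that $X=B^T$ is wide and full-rank almost surely) showing that every choice of $w$ yields a valid $v$:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 5
A = rng.standard_normal((m, m))
B = rng.standard_normal((n, m))          # v has n > m entries
u = rng.standard_normal(m)

X = B.T
Xp = np.linalg.pinv(X)
P = np.eye(n) - Xp @ X                   # orthoprojector onto the nullspace of X
rhs = np.linalg.solve(A.T, u)            # A^{-T} u

for _ in range(3):                       # infinitely many solutions: one per w
    v = Xp @ rhs + P @ rng.standard_normal(n)
    print(np.allclose(v @ B, u @ np.linalg.inv(A)))  # True
```
|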
CLT for triangular array of finite uniformly distributed variables | Is it known how the Lindeberg condition translates to a (hopefully simple) condition...
Yes. In your situation the assumption $$
\frac{1}{s_n^2}\max_i \mathbb{V}(X_{ni}) \rightarrow 0, n\to\infty, \tag{1}
$$ (equivalently, $\frac{\max_i a_{ni}^2}{\sum_{i} a_{ni}^2}\to 0,n\to\infty$) implies the Lindeberg condition. Try to prove this (hint: for any $\varepsilon>0$, the expression in question is zero for $n$ large enough).
Assuming $\frac{1}{s_n^2}\max_i \mathbb{V}(X_{ni}) \not\rightarrow 0$,
is it anyways possible for $X_{ni}$ to satisfy a central limit?
No. (Provided the meaning of "central limit" is the standard one, i.e. a normal limiting distribution.) You can write out the characteristic function to check that the uniform smallness assumption $(1)$ is necessary in your case. |
On the global section of tensor product of two vector bundles on a surface | Note that
$$
H^0(X, E_2 \otimes E_1^*) \cong \operatorname{Hom}(E_1,E_2).
$$
Now $E_1$ and $E_2$ are stable (by your assumption), and the condition on $c_1$ means that they have the same slope. Therefore, there are no morphisms between them, hence the space is zero. The case of the second space is analogous. |
$\omega_2$ is not a countable union of countable sets | Yes, this can be proved without the axiom of choice.
Assume $\omega_2 = \cup_n E_n$ with $E_n$ countable. Then each $E_n$ is well-ordered, hence is isomorphic to a unique ordinal $\alpha_n < \omega_1$ via a unique isomorphism.
This makes it easy to construct a surjection $\omega \times \omega_1 \to \omega_2$, a contradiction since $|\omega \times \omega_1| = \aleph_1$. |
How to prove that $a^x>0$ | $a^x=e^{x\ln(a)}>0$ is positive since the image of exponential is $>0$. |
Sketching $F(x)=\lim_{n\to\infty}\frac{x^{2n}\sin(\frac{nx}{2})+x^2}{x^{2n}+1}$ | We have that for $|x|<1$, $F(x)=x^2$, while for $|x|>1$
$$\frac{x^{2n}\sin\left(\frac{nx}{2}\right)+x^2}{x^{2n}+1}= \frac{x^{2n}}{x^{2n}}\frac{\sin\left(\frac{nx}{2}\right)+x^{2-2n}}{1+x^{-2n}}\sim \sin\left(\frac{nx}{2}\right) $$
therefore the limit doesn't exist, since for any $x\neq 2k\pi$ the $\sin$ term oscillates between $-1$ and $1$; the same happens in the special case $|x|=1$. |
Can we show $\int_{B_i}\left|f-\frac1{λ(B_i)}\int_{B_i}f\:{\rm d}λ\right|^2\:{\rm d}λ\le\int\left|f-\frac1{λ(B)}\int f\:{\rm d}λ\right|^2\:{\rm d}λ$? | It is easy to check that $\int_\Omega \big|f - \frac{1}{\mu(\Omega)} \int_\Omega f\big|^2 \, d \mu \leq \int_\Omega |f - \sigma|^2 \, d \mu$ for all $\sigma \in \Bbb{R}$: the mean minimizes the quadratic deviation.
Therefore, setting $\sigma_i := \frac{1}{\lambda(B_i)} \int_{B_i} f d \lambda$ and $\sigma := \frac{1}{\lambda(B)}\int_B f \, d \lambda$, we see
$$
\int_{B_i} |f - \sigma_i|^2 \, d \lambda
\leq \int_{B_i} |f - \sigma|^2 \, d \lambda
\leq \int_B |f - \sigma|^2 \, d \lambda,
$$
which proves your inequality with $c = 1$. |
Expository Books for Galois Theory | I know it is not a "big name", but Ian Stewart's Galois Theory served me well. I went into the class with some linear algebra and group theory, and came out with a good grasp of Galois Theory and some of its famous consequences.
A really nice feature is that he reaps some benefits of the theory before it is fully explained. Like after he explains the degree of a field extension, he then does some non-constructability proofs (one example: he shows that you can't trisect an angle of $\pi /3$ radians with a compass and straight-edge.) I'm not sure if this is standard, but it gives you a sense of satisfaction early on.
Also the introductory chapters are historically rich in a way I haven't seen before. |
Applying boundary conditions to counting combinatorial question | Big Hint:
Dividing 5 into 4 parts, none of which is larger than 2 (or negative), can be done in the following ways (ignoring permutations):
$2,2,1,0$
$2,1,1,1$
Consider $(x^0+x^1+x^2)^4$. Then $x^5$ would be formed from terms like $x^2\cdot x^2\cdot x^1\cdot x^0$ and $x^2\cdot x^1\cdot x^1\cdot x^1$ only. In how many ways? That count is exactly the coefficient of $x^5$.
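A one-line sympy confirmation (assuming sympy) of that coefficient:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.expand((1 + x + x**2)**4).coeff(x, 5))  # 16 = 12 + 4 arrangements
```
|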
Is there an infinite countable $\sigma$-algebra on an uncountable set | Suppose that $\lvert \Omega\rvert\ge\aleph_0$, and $\mathscr M\subset\mathscr P(\Omega)$ is a $\sigma-$algebra. We shall show that:
$$
\textit{either}\quad \lvert\mathscr M\rvert<\aleph_0\quad \textit{or}\quad \lvert\mathscr M\rvert\ge 2^{\aleph_0}.
$$
Define in $\Omega$ the following relation:
$$
a\sim b\qquad\text{iff}\qquad
\forall E\in\mathscr M\, (\,a\in E\Longleftrightarrow b\in E\,).
$$
Clearly, "$\sim$" is an equivalence relation in $\Omega$, and every $E\in\mathscr M$ is a union of equivalence classes. Also, for every two different classes $[a]$ and $[b]$, there are $E,F\in\mathscr M$, with $E\cap F=\varnothing$, such that $[a]\subset E$ and $[b]\subset F$.
Case I. If there are finitely many classes, say $n$, then each class belongs to $\mathscr M$, and clearly $\lvert \mathscr M\rvert=2^n$.
Case II. Assume there are $\aleph_0$ classes. Fix a class $[a]$, and let
$\{[a_n]:n\in\mathbb N\}$ be the remaining classes. For every $n\in\mathbb N$, there exist $E_n,F_n\in\mathscr M$, such that $[a]\subset E_n$, $[a_n]\subset F_n$ and $E_n\cap F_n=\varnothing$. Clearly, $[a]=\bigcap_{n\in\mathbb N} E_n\in\mathscr M$, and thus $\lvert \mathscr M\rvert=2^{\aleph_0}$.
Case III. If there are uncountably many classes, we can pick countably infinitely many of them, $[a_n]$, $n\in\mathbb N$, and disjoint sets $E_n\in\mathscr M$ with $[a_n]\subset E_n$ (using the Axiom of Choice), and then realise that the $\sigma$-algebra generated by the $E_n$'s has the cardinality of the continuum and is a subalgebra of $\mathscr M$. |
Why is there always a multiplicative function $g$ such that $\sum\limits_{k=1}^n f(\gcd(n,k)) = \sum\limits_{d\mid n} f(d)g(\tfrac{n}{d})$? | We have $$\sum_{k=1}^{n}f\left(\left(n,k\right)\right)=\sum_{d\mid n}f\left(d\right)\sum_{\underset{{\scriptstyle \left(k,n\right)=d}}{k=1}}^{n}1
$$ and now note that $$\sum_{\underset{{\scriptstyle \left(k,n\right)=d}}{k=1}}^{n}1=\sum_{\underset{{\scriptstyle \left(k/d,n/d\right)=1}}{k/d\leq n/d}}1=\phi\left(\frac{n}{d}\right)
$$ where $\phi\left(n\right)
$ is the Euler totient function. Hence $g=\phi
$ and $$\sum_{k=1}^{n}f\left(\left(n,k\right)\right)=\sum_{d\mid n}f\left(d\right)\phi\left(\frac{n}{d}\right).
$$
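A quick numerical check of the identity with an arbitrary test function $f$ (plain Python plus sympy's totient):

```python
from math import gcd
from sympy import totient

f = lambda d: d*d + 1   # an arbitrary test function

def lhs(n): return sum(f(gcd(n, k)) for k in range(1, n + 1))
def rhs(n): return sum(f(d)*totient(n//d) for d in range(1, n + 1) if n % d == 0)

print(all(lhs(n) == rhs(n) for n in range(1, 60)))  # True
```
|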
Prove that $q$ is a perfect square | Let $a=0$. Then
$$
q=\frac{a^2+b^2}{1+ab}=\frac{0+b^2}{1+0}=b^2
$$
Also, since $b\in\mathbb{Z}$, you know that $q\in\mathbb{Z}$ and that $q$ is a perfect square. Therefore, for any $x\in\mathbb{Z}$, the triplet $(0,x,x^2)$ satisfies the requirements. So does $(x,0,x^2)$.
There are others, too, but you don't need to find them, since there are infinitely many already.
Restricting $a,b,q>0$, as per your edit, makes things a bit trickier. I'm still working on it, but I've only found the rather trivial $(1,1,1)$ so far. |
For Riemann integral partitions, does it matter whether $f$ is evaluated at a point in $(t_{i-1},t_i)$ or $[t_{i-1},t_i]$? | It does not hurt anything to allow for $\xi$ to be in closed subintervals. A kind of manual way to see this is to connect the Riemann sum on a tagged partition with tags at endpoints to a Riemann sum on a tagged partition without tags at endpoints. One way to do that is to split into two cases depending on whether two neighboring $\xi$'s are the same or not.
If they are the same, merge those two subintervals; doing this all over results in a partition with mesh $2\delta$ which still goes to zero, and the Riemann sum on this tagged partition is exactly the same as the Riemann sum you already had.
If they are not the same, enlarge the subinterval which had $\xi$ at the endpoint by a little bit, at the expense of its neighbor. As long as the sum of all the enlargement lengths is less than $\varepsilon/4M$ where $M$ is the bound on $f$, the resulting change in the Riemann sum will be less than $\varepsilon$. |
Prove that $\lim_{t \rightarrow 0} t \int_{0}^{\infty} e^{-tx} f(x) dx =1$ | Assuming you left out the $t$, split the interval $[0,\infty)$ into $[0,A]$ and $[A,\infty)$ and choose $A$ so that $|f(x)-1|<\epsilon$ for all $x>A$. Write the integral as the sum of integrals over either interval. On the first interval, you can use the dominated convergence theorem and the fact that $f$ is integrable on $[0,A]$. On the infinite interval, use the fact that $f$ is close to 1. |
One's Complement to Decimal, Table Conversion Question | Explanation
One's complement is used to add negative numbers to positive ones.
But there is an off-by-one error when we compute with the one's complement.
For example, if the msb is used as the sign bit, the only positives we have are $0$ to $127$, where the sign (msb) bit is 0, that is: 0xxxxxxx.
Likewise we have $-0$ to $-127$, where the sign (msb) bit is 1: 1xxxxxxx. For these two representations we spot the off-by-one error, i.e. we don't need two negative zeros, only one. Read the wiki for more explanation.
So we add 1, producing a so-called 2's complement computation. $0$ is $00000000_2$ and $-1$ is $11111111_2$, not $-0$. Here we see one's complement in action. Say we want to make $1$, i.e. $00000001_2$, negative: we invert it using one's complement: $11111110_2$. But this is $-2$, so instead we add $1$ and get $-1$ in two's complement. Now the correct representation is $11111111_2$.
Example
We want to compute: $-5 + 27$. We know that the answer is: $22$.
First convert $5$ to $-5$ using 1's complement:
$00000101_2 \oplus 11111111_2 = 11111010_2.$ (i.e. invert all bits)
Now add $27$ to two's complement of $-5$:
$00011011_2 + 11111010_2 + 1_2 = 00010110_2.$
Result:
$00010110_2 = 22.$
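The same computation in a few lines of Python (8-bit words, masking to stay in range):

```python
MASK = 0xFF                      # 8-bit word

def ones_complement(x):
    return x ^ MASK              # invert all bits

# -5 + 27: one's complement of 5, plus 1 (two's complement), plus 27
result = (ones_complement(5) + 1 + 27) & MASK
print(result, format(result, '08b'))  # 22 00010110
```
|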
Determine all real numbers x for which there exist 3*3 real matrices AB-BA such that | Hint: What is the trace of the left hand side expression? |
What value in d makes this matrix diagonalisable over the field R? | The characteristic polynomial of your matrix is $-\lambda^3+2\lambda d+d^3$, whose discriminant is $32d^3-27d^6$. So, the discriminant is $0$ if and only if $d=0$ or $d=\frac23\sqrt[3]4$. For every value of $d$ strictly between those two numbers, your matrix will have $3$ distinct real eigenvalues and therefore will be diagonalizable over $\mathbb R$. If $d>\frac23\sqrt[3]4$ or $d<0$, the discriminant will be negative, and therefore your matrix will not be diagonalizable over $\mathbb R$. Now, check by hand the cases in which $d=0$ and $d=\frac23\sqrt[3]4$. |
Fibonacci numbers and perfect squares | You are dealing with Lucas numbers, defined by $L_0=2$, $L_1=1$, $L_n=L_{n-1}+L_{n-2}$. We have $L_n=\varphi^n+\psi^n$.
Observe that $L_n^2=\varphi^{2n}+2\varphi^n\psi^n+\psi^{2n}=L_{2n}+2(-1)^n$. |
Convergence of a sequence in a k- dimensional Euclidean metric space | No. Any non-empty set $X$ has the "trivial metric" which is defined as
\begin{align}
d(x,y)=
\begin{cases}
1&\text{if $x\neq y$}\\
0&\text{if $x= y$}
\end{cases}
\end{align}
A sequence $\{x_n\}_{n=1}^{\infty}$ converges to a point $x$ relative to this metric if and only if there exists an $N\in\Bbb{N}$ such that for all $n\geq N$, we have $x_n=x$. In other words, convergence relative to this metric requires the sequence eventually be constant.
So, a sequence in $\Bbb{R}^2$ (same idea works for any $k$) such as $x_n=\left(\frac{1}{n},\frac{1}{n}\right)$ does not converge relative to this metric, even though each "component" $x_{n,1}=\frac{1}{n}$ and $x_{n,2}=\frac{1}{n}$ converges to $0$ relative to the standard metric on $\Bbb{R}$.
Edit in Response to Comment
Yes, we can also construct counterexamples for the converse; here's the general idea. Let $X$ be a set, $\delta$ a metric on $X$ and $f:X\to X$ a bijection. We can consider the pull-back metric $d=f^*\delta$ on $X$ defined as $d(x,y):=\delta(f(x),f(y))$. We now ask whether it is possible to find a sequence $\{x_n\}_{n=1}^{\infty}$ such that
$\{x_n\}_{n=1}^{\infty}$ converges relative to $d=f^*\delta$ (or equivalently $\{f(x_n)\}_{n=1}^{\infty}$ converges relative to $\delta$);
$\{x_n\}_{n=1}^{\infty}$ does not converge relative to $\delta$.
Such examples are easy to construct in $\Bbb{R}$ (perhaps there are even simpler solutions, but this is the first which came to mind). So, we consider $X=\Bbb{R}$, $\delta$ to be the standard metric and $x_n=(-1)^nn$. This sequence alternates in sign and approaches $\infty$ in magnitude, so it is clear that $\{x_n\}_{n=1}^{\infty}$ does not converge relative to $\delta$ (the standard metric on $\Bbb{R}$).
Now, define $f:\Bbb{R}\to\Bbb{R}$ by setting $f(x)=\frac{1}{x}$ if $x\neq 0$ and $f(0)=0$. Then, $f$ is a bijection, and $f(x_n)=\frac{(-1)^n}{n}$ converges to $0$ relative to $\delta$, the standard metric. Thus, $x_n$ converges relative to $d=f^*\delta$, which is precisely what we want. |
Limit of maximum of $f_{n}(x)=\frac{1}{n}(\sin{x}+\sin{(2x)}+\cdots+\sin{(nx)})$ | [The proof has been completed following the strategy suggested in the earlier revisions of the answer.]
For $x = \frac{2u}{n}$ the limit of $a_n$ is $\frac{\sin^2 u}u$ and one can take $u$ to maximize this. Let $M$ be the maximum value of $(\sin^2{u})/u$. We have proved that $\limsup a_n \geq M$ and want to show the opposite inequality to prove $\lim a_n = M$.
Lemma 1. If $b_n = \max_y \sin^2 (ny) / (n \sin y)$, then $a_n \leq \max\big(b_n,\tfrac{n+1}{n}\, b_{n+1}\big)$.
Proof: $\sin (ny) \sin((n+1)y) \leq \max\big(\sin^2(ny), \sin^2((n+1)y)\big)$.
Lemma 2. $\limsup a_n \leq \limsup b_n$.
Proof. From lemma $1$, the definition of lim sup, and its invariance under multiplication by factors that converge to $1$.
Lemma 3. In the optimization problem, we can replace $f_n(x)$ by $g_n(x)=\sin^2(nx)/n \sin x$ (whose maxima are $b_n$ instead of $a_n$) if we can prove that $\limsup b_n \leq M$.
Proof. It would show that $\limsup a_n$ is also $\leq M$, by lemma 2.
Lemma 4. The maximum of $g_n(y) = \sin^2(ny)/(n \sin y)$ occurs with $ny \in (0,\pi)$
Proof. There are $n$ values of $y' \in (0,\pi)$ such that $|\sin(ny')|=|\sin ny|$ and $\sin y' > 0$. The smallest of these $n$ values is the unique one in $(0,\pi/n)$ and it also has the smallest value of $|\sin y'|$. For this best choice of $y'$, $g(y') \leq g(y)$ (it is a fraction where we have kept the numerator the same and increased the denominator).
Lemma 5. $\limsup b_n = M$
Proof. The maxima of $g_n(y)$ occur at values of $y$ less than $\pi/n$. But $g_n(y)=(\frac{y}{\sin y})(\frac{\sin^2 (ny)}{ny})$ where the first part converges to $1$ at the maxima and $M$ is defined as the largest possible value of the second part.
Conclusion. $\limsup a_n \leq M$, and therefore $\lim a_n = M$. |
Two Jordan normal form matrices | The matrices
\begin{align*}
J_1 &=\left[\begin{array}{rr|r|r}
1 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 \\
\hline
0 & 0 & 1 & 0 \\
\hline
0 & 0 & 0 & 1
\end{array}\right]
&
J_2 &=\left[\begin{array}{rr|rr}
1 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 \\
\hline
0 & 0 & 1 & 1 \\
0 & 0 & 0 & 1
\end{array}\right]
\end{align*}
are in Jordan canonical form and satisfy $\chi_{J_1}(t)=\chi_{J_2}(t)=(t-1)^4$. Their minimal polynomials are
$$
\mu_{J_1}(t)=\mu_{J_2}(t)=t^2-2\,t+1
$$
This shows that the minimal polynomial does not determine the Jordan canonical form of a matrix.
Now, let's consider your examples
\begin{align*}
A &= \left[\begin{array}{rrr}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 2
\end{array}\right]
& B &=\left[\begin{array}{rrr}
1 & 1 & 0 \\
0 & 1 & 0 \\
0 & 0 & 2
\end{array}\right]
\end{align*}
As you observe, these matrices have the same characteristic polynomials. Thus the eigenvalues of these matrices are $\lambda=1$ and $\lambda=2$. Moreover, the algebraic multiplicities of these eigenvalues are $\DeclareMathOperator{am}{am}\am_A(1)=\am_B(1)=2$ and $\am_A(2)=\am_B(2)=1$.
However, the geometric multiplicities are $\DeclareMathOperator{gm}{gm}\gm_A(1)=2\neq 1=\gm_B(1)$ and $\gm_A(2)=\gm_B(2)=1$.
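A sympy check (assuming sympy) that $A$ and $B$ share a characteristic polynomial but not a Jordan form:

```python
import sympy as sp

A = sp.Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 2]])
B = sp.Matrix([[1, 1, 0], [0, 1, 0], [0, 0, 2]])
print(A.charpoly() == B.charpoly())   # True: same characteristic polynomial
print(A.jordan_form()[1])             # diag(1, 1, 2)
print(B.jordan_form()[1])             # has a 2x2 Jordan block for eigenvalue 1
```
|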
How find this sum $\sum_{n=1}^{\infty}n\sum_{k=2^{n-1}}^{2^n-1}\frac{1}{k(2k+1)(2k+2)}$ | Throughout, let $H_k\equiv \sum_{j=1}^k \frac{1}{j}$ and let $\gamma$ be the Euler-Mascheroni constant.
First, note
\begin{eqnarray}
\sum_{n=1}^{\infty} n \sum_{k=2^{n-1}}^{2^n-1} \frac{1}{k(2k+1)(2k+2)}
&=& \sum_{j=0}^{\infty} \sum_{k=2^j}^{\infty} \frac{1}{k(2k+1)(2k+2)}.
\end{eqnarray}
Mathematica says
\begin{eqnarray}
\sum_{k=2^j}^{\infty} \frac{1}{k(2k+1)(2k+2)} = -\frac{1}{2}\psi(2^j) + \psi(2^j+\frac{1}{2}) - \frac{1}{2} \psi(2^j+1)
\end{eqnarray}
where $\psi$ is the digamma function. To compute this without as much help from software, use the original poster's expression for the summand and the fact that
$$
\psi(z+1) = -\gamma + \sum_{n=1}^{\infty} \frac{z}{n(n+z)}
$$
for any $z\in\mathbb{C}$ as long as $z\ne -1,-2,-3,\ldots$.
Using explicit formulas (see Wikipedia) for $\psi(m)$ and $\psi(m+\frac{1}{2})$ where $m$ is an integer,
\begin{eqnarray}
-\frac{1}{2}\psi(2^j) + \psi(2^j+\frac{1}{2}) - \frac{1}{2} \psi(2^j+1) &=&
-\frac{1}{2}H_{2^j-1} - \frac{1}{2} H_{2^j} - 2 \log 2 + \sum_{k=1}^{2^j} \frac{2}{2k-1} \\
&=& -\frac{1}{2}H_{2^j-1} - \frac{1}{2} H_{2^j} - 2 \log 2 + 2(H_{2^{j+1}} - \frac{1}{2} H_{2^j}) \\
&=& -\frac{1}{2}H_{2^j-1} - \frac{3}{2} H_{2^j} - 2 \log 2 + 2 H_{2^{j+1}} \\
&=& -\frac{1}{2}(H_{2^j} -\frac{1}{2^j}) - \frac{3}{2} H_{2^j} - 2 \log 2 + 2 H_{2^{j+1}} \\
&=& \frac{1}{2^{j+1}} - 2 \log 2 + 2 (H_{2^{j+1}} - H_{2^j}).
\end{eqnarray}
Observe
\begin{eqnarray}
\sum_{j=0}^m \frac{1}{2^{j+1}} - 2 \log 2 + 2 (H_{2^{j+1}} - H_{2^j})
&=& 1 - 2^{-(m+1)} + 2\left(H_{2^{m+1}}- \log 2^{m+1}\right) - 2H_1.
\end{eqnarray}
It follows
\begin{eqnarray}
\sum_{j=0}^{\infty} \sum_{k=2^j}^{\infty} \frac{1}{k(2k+1)(2k+2)} &=& \lim_{m\to\infty} 1 - 2^{-(m+1)} + 2\left(H_{2^{m+1}}- \log 2^{m+1}\right) - 2H_1 \\
&=& 2\gamma - 1.
\end{eqnarray}
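A quick numerical check of the closed form (truncating the outer sum, which converges geometrically):

```python
gamma = 0.5772156649015329   # Euler-Mascheroni constant
total = sum(n*sum(1.0/(k*(2*k + 1)*(2*k + 2)) for k in range(2**(n - 1), 2**n))
            for n in range(1, 21))
print(total, 2*gamma - 1)    # both ~0.1544313
```
|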
How to find primitive roots of big numbers modulo n, like 121? | In general, if $g$ is a primitive root mod $p$ then $g$ or $g+p$ is a primitive root mod $p^2$.
For $p=11$, we have that $g=2$ works.
It is enough to test that $2^{110/q}\not \equiv 1 \bmod 121$ for $q=2,5,11$, the prime divisors of $110=\phi(121)$.
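That test is a one-liner in Python:

```python
# None of these is 1, so 2 is a primitive root mod 121:
print([pow(2, 110//q, 121) for q in (2, 5, 11)])  # [120, 81, 56]
```
|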
find $\sum_{n=1}^\infty \frac{1}{n3^n}$ | Consider
\begin{align*}
\sum_{n=1}^\infty\frac{x^n}{n}.
\end{align*}
This series converges for all $|x|<1$. Differentiation wrt $x$ yields
\begin{align*}
\sum_{n=1}^\infty x^{n-1}=\sum_{n=0}^\infty x^n=\frac{1}{1-x},
\end{align*}
and integrating again gives
\begin{align*}
\sum_{n=1}^\infty\frac{x^n}{n}=\int\frac{1}{1-x}dx+c=-\log(1-x)+c.
\end{align*}
When we plug in $x=0$ we obtain $c=0$, so we eventually have
\begin{align*}
\sum_{n=1}^\infty\frac{x^n}{n}=-\log(1-x),\qquad |x|<1.
\end{align*}
Now, for $x=\frac{1}{3}$ we obtain
\begin{align*}
\sum_{n=1}^\infty\frac{1}{n3^n}=\log\left(\frac{3}{2}\right).
\end{align*} |
Classify the nonabelian groups of order $16p$, where $p$ is a prime number | It's a (lengthy) exercise. Denote by $C_p$ the cyclic group of order $p$. If $p>7$, then a simple verification (not based on the classification) shows that no 2-group of order $\le 16$ has an automorphism of order $p$. It follows that if $G$ has $C_p$ as a quotient, then $G$ is direct product of a group of order 16 and $C_p$. In particular, if $p>7$, then the intersection $G_2$ of subgroups of index 2 is a proper subgroup of $G$. This also applies to $G_2$ itself and it follows that the $p$-Sylow of $G$ is normal. Hence $G=C_p\rtimes D$ with $D$ a 2-Sylow. Since the case of direct products was already obtained, the remaining case is when $D$ comes with a homomorphism onto $C_p$, which provides the action on $C_p$. (Hence, for each $D$ we obtain as many groups as homomorphisms from $D$ to $C_2$ up to composition by an automorphism of $D$.)
For $p=3,5,7$, the above provides all examples in which the $p$-Sylow is normal, but there are also a few examples with a non-normal $p$-Sylow.
For $p=5$ the only 2-group of order $\le 16$ with an automorphism of order 5 is the elementary 2-abelian group $C_2^4$ of order 16, and this automorphism is unique up to conjugation. Hence for $p=5$ the only further example is the nontrivial semidirect product $C_2^4\rtimes C_5$.
For $p=2,3,7$ there are a few more examples; lists are available even if everything must be doable by hand with some patience. |
Prove that any two bases of a field extension have the same cardinality. | If $E/F$ is a finite field extension, then the field axioms imply that $E$ is a finite-dimensional $F$-vector space. Your definition of basis is exactly that of a vector space, and any two bases of a finite dimensional vector space have the same cardinality.
Theorem Let $V$ be a vector space and let $A,B\subseteq V$ be finite subsets such that $A$ is linearly independent and $B$ spans $V$. Then $\# A\le\# B$.
Proof. Let $A=\lbrace a_1,\dots,a_n\rbrace$ and $B=\lbrace b_1,\dots,b_m\rbrace$. Then $\lbrace a_1,b_1,\dots,b_m\rbrace$ also spans $V$, and we can write
$$
b_r=\lambda_1 a_1+\mu_1 b_1+\dots+\mu_{r-1} b_{r-1}+\mu_{r+1} b_{r+1}+\dots+\mu_m b_m,
$$
for some $1\le r\le m$ (since $a_1\in A\Rightarrow a_1\ne 0$). Hence $B_1=\lbrace a_1,b_1,\dots,b_{r-1},b_{r+1},\dots,b_m\rbrace$ also spans $V$, and $\# B=\# B_1$.
Now we repeat: $\lbrace a_1,a_2,b_1,\dots,b_{r-1},b_{r+1},\dots,b_m\rbrace$ spans $V$ and we can again remove one of the $b_j$'s and still keep a spanning set (some $b_j$ must occur with nonzero coefficient when expressing $a_2$, since the $a_i$'s are linearly independent).
After doing this $n$ times, we will have $B_n=\lbrace a_1,\dots,a_n,b_{m_1},\dots,b_{m_k}\rbrace$, with $k\ge 0$, $B_n$ spanning $V$, and $\#B = \#B_n$. But $A\subseteq B_n$, and so $\# A\le\# B$. QED.
Now apply to two bases of $E/F$. |
Lambda-Calculus - Alternative proof without fixed point combinator | Your proof is correct, except for a small detail. You defined $F \equiv \lambda fx.ffff$, which means that $F$ is nothing but a shorthand for $\lambda fx.ffff$. Hence $FF \equiv (\lambda fx.ffff)F$, which means that $FF$ is nothing but a shorthand for $(\lambda fx.ffff)F$ (and for $(\lambda fx.ffff)(\lambda fx.ffff)$ and $F(\lambda fx.ffff)$ as well), i.e. they are syntactically the same object. In particular, this implies that $FF = (\lambda fx.ffff)F$ by the first equality rule (Def. 2.7 in your notes), and not by the first compatibility rule.
Concerning a simpler solution of the problem that does not use $\bf Y$, you can repeat your proof by setting $F \equiv \lambda fx.ff$ and $G \equiv FF$. Then,
\begin{align}
G &\equiv FF = \lambda x.FF \equiv \lambda x.G \\
GG &\equiv (\lambda x.G)G = G = (\lambda x. G)X \equiv GX.
\end{align} |
What is the gödel number of 'SSS(0)? | $\mathbf{0}$ represents... $0$, not too surprisingly. $\mathbf{SSS0}$ is a constant term, not a primitive recursive function or even an expression denoting one.
Just what the Gödel number of the term is depends on the particular scheme of Gödel numbering you're using. In Gödel's original paper, $\mathbf{0}$ is assigned to $1$, $\mathbf{S}$ is assigned to $3$. In that system, the expression would be $\mathbf{SSS0}$ with no parentheses. Its Gödel number is:
$$
\#\mathbf{SSS0} = 2^3 3^3 5^3 7^1 = 8\cdot 27\cdot 125\cdot 7 = 189,000.
$$
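For concreteness, the computation in Python (Gödel's original scheme: symbol codes as exponents of successive primes):

```python
primes = [2, 3, 5, 7]
codes = [3, 3, 3, 1]     # S, S, S, 0  (S -> 3, 0 -> 1)
g = 1
for p, c in zip(primes, codes):
    g *= p**c
print(g)  # 189000
```
|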
Determining whether two arbitrary modules are isomorphic? | Short answer: no.
A little longer answer: in contrast to the case of vector spaces (i.e. modules over a field), there is no general invariant like dimension that one can check in order to see if two modules are isomorphic.
Nevertheless, there are some techniques which can be used to solve the problem for some subclasses of modules: for instance, you can still use the dimension if you are working with free modules, or you can compare presentations if you are working with finitely presented modules...
Anyway, that was to be expected: proving that things are isomorphic is a very hard problem in mathematics; usually it's easier to prove that two things aren't isomorphic. |
Volume of revolution of solid formed by $y=-2x^2+3x$ and $y=x^a$ about $y$-axis where $a$ is a constant > 1 | You say "volume of the solid," but the equations and the picture define a 2-dimensional figure not a solid. Are you rotating this about the x- or y-axis? Or are you really looking for the area between the curves? |
sequence of function $f_n(x)= \sin(n\pi x)$ | Here is a hint for your problem:
Note that given an $n$ you will always be able to find a $c \in [0, \frac{1}{n}]$ such that $f_n(c) = 1$. |
Dimensions of the co-domain of the graph defining a $k$-manifold in $\mathbb R^n$ | Thank you, Anthony Carapetis:
The graph of a function $f:\mathbb R^k\to \mathbb R^d$ is the $k$-dimensional subset $\{(x,f(x)):x\in \mathbb R^k\}\subset \mathbb R^{k+d};$ so in order to get a $k$-dimensional graph in $\mathbb R^n$ we need to choose $d=n−k.$
[My follow-up:] So $n$ is the dimension of the ambient space, which in the case of a 2-sphere would be $\mathbb R^3.$ The sphere is a $2$-dimensional manifold, and hence $k=2$, leaving $d=3−2=1.$ This means that there are $2$ free variables and $1$ pivot variable?
$2$ free variables, $1$ dependent variable. The example given by @rldias is exactly the case of the sphere in $\mathbb R^3$ - it's locally the graph of the function $f:\mathbb R^2\to \mathbb R:f(x,y)=\sqrt{1−x^2−y^2}.$
Just to make sure there is no lingering misunderstanding, I have tried to illustrate the concept in this plot of the $2$ sphere, which hopefully encapsulates the comments accurately:
I tend to think that the problem visualizing was in the surface ($2$D) nature of the open set around the point $P$ (in red), which doesn't preclude the function from the patch on the $k=2$ standard coordinate system in purple (on the $xy$ plane) to the neighborhood of $P$ from being a map $\mathbb R^2\to \mathbb R^1.$ |
How to isolate $\alpha$ in $\frac{\alpha}{\log_2(4\alpha)}$ | The solution of
$$\frac{\alpha}{\log_2(4\alpha)} = 96\,\varepsilon^{-1}$$ is given in terms of Lambert function
$$\alpha=-\frac{96 }{\varepsilon \log (2)}W\left(-\frac{\log (2)}{384} \varepsilon \right)$$
To evaluate it, since the argument is small $(\lt 0.002)$, use the series expansion
$$W(x)=x-x^2+\frac{3 x^3}{2}-\frac{8 x^4}{3}+O\left(x^5\right)$$ or, even better, the Padé Approximant
$$W(x)=\frac{x+\frac{4 }{3}x^2}{1+\frac{7 }{3}x+\frac{5 }{6}x^2}$$
Since $0 \leq \varepsilon\leq 1$, you can have a very good approximation using Taylor again to get
$$\alpha=\frac{1}{4}+\frac{\log (2)}{1536}\varepsilon+\frac{\log^2(2)}{393216}\varepsilon^2+\frac{\log^3(2)}{84934656}\varepsilon^3+O\left(\varepsilon ^4\right)$$
Update
Thinking more about the problem, I think that we could have avoided Lambert function. Let $\beta=4\alpha$ and rewrite the equation as
$$\varepsilon=\frac{384}{\log (2)}\frac{ \log (\beta )}{\beta }$$ Expanding the rhs as a Taylor series built at $\beta=1$, this would give
$$\varepsilon=\frac{384 (\beta -1)}{\log (2)}-\frac{576 (\beta -1)^2}{\log (2)}+\frac{704 (\beta-1)^3}{\log (2)}+O\left((\beta -1)^4\right)$$ and then, using series reversion
$$\beta=1+\frac{\log (2)}{384} \varepsilon +\frac{\log^2(2)}{98304}\varepsilon ^2+\frac{\log ^3(2)}{21233664}\varepsilon ^3 +O\left(\varepsilon ^4\right)$$ and, then, the same
$$\alpha=\frac{1}{4}+\frac{\log (2)}{1536}\varepsilon+\frac{\log^2(2)}{393216}\varepsilon^2+\frac{\log ^3(2)}{84934656}\varepsilon^3+O\left(\varepsilon ^4\right)$$
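A numerical cross-check with scipy's Lambert function (principal branch):

```python
import numpy as np
from scipy.special import lambertw

eps = 0.5
alpha = (-96/(eps*np.log(2))*lambertw(-np.log(2)*eps/384)).real
print(alpha)                            # ~0.250226
print(alpha/np.log2(4*alpha) - 96/eps)  # residual ~ 0
```
|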
Set Builder Notation for Powers of relations | In general, let $R \subseteq A \times A$ be a relation. Then
$$
R^{2} = \{(a,c) \mid \exists b \in A \colon (a,b) \in R \wedge (b,c) \in R \}.
$$
In your specific case we get, setting $Z := \{ (z, z+1) \mid z \in \mathbb Z \}$,
$$
\begin{align*}
Z^{2} &= \{ (a,c) \mid \exists b \in \mathbb Z \colon (a,b) \in Z \wedge (b,c) \in Z \} \\
&= \{ (z, z+2) \mid z \in \mathbb Z\}.
\end{align*}
$$
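The same composition, computed on a finite slice of the relation in Python:

```python
def square(R):
    """R^2 = {(a, c) : there is b with (a, b) in R and (b, c) in R}."""
    return {(a, c) for (a, b) in R for (b2, c) in R if b == b2}

Z = {(z, z + 1) for z in range(-5, 5)}  # a finite slice of {(z, z+1)}
print(sorted(square(Z)))                # pairs of the form (z, z+2)
```
|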
sum of rational terms in $\left(\sqrt{2}+\sqrt{27}+\sqrt{180}\right)^{10}$ | Note that the Galois group of $\mathbb{Q}(\sqrt{2},\sqrt{3},\sqrt{5})$ over $\mathbb{Q}$ is generated by three automorphism: one swaps $\sqrt{2}$ to $-\sqrt{2}$, the other swaps $\sqrt{3}$ and $-\sqrt{3}$, the last one swaps $\sqrt{5}$ and $-\sqrt{5}$.
Let $\alpha_1,\alpha_2,\alpha_3,\alpha_4$ be the orbits of $(\sqrt{2}+3\sqrt{3}+6\sqrt{5})^2$ under the action of Galois group.
Note that ${\alpha_1}^{5}+{\alpha_2}^{5}+{\alpha_3}^{5}+{\alpha_4}^{5}$ is a rational number, because it is fixed by the three automorphisms above. Your desired answer is $$
\frac{{\alpha_1}^{5}+{\alpha_2}^{5}+{\alpha_3}^{5}+{\alpha_4}^{5}}{4}$$
If your acumen is strong enough, you can also arrive at this result without any Galois theory. Also note that the $\alpha_i$ are roots of the polynomial $$510082225 - 19503140 x + 219894 x^2 - 836 x^3 + x^4 = 0 $$ Repeated application of Newton's identities gives the value of your sum.
As an alternative, a recurrence relation can be used to compute
$$S_n = {\alpha_1}^n + {\alpha_2}^n + {\alpha_3}^n + {\alpha_4}^n$$
which is given by
$$510082225 S_n- 19503140 S_{n+1} + 219894 S_{n+2} - 836 S_{n+3} + S_{n+4} = 0 $$
valid for $n\geq 0$, with $S_0=4$. You have to know $S_1,S_2,S_3$ to get this recurrence running. You might want to use this as an alternative to compute $S_5$ or $S_n$ for higher $n$. |
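As a sanity check, a short SymPy script (my own code) recovers the rational part directly by averaging over all eight sign patterns, which is just the Galois-averaging argument in computational form:
```python
# Sketch (mine): averaging over the 8 Galois conjugates kills every
# irrational term, leaving exactly the sum of the rational terms.
import itertools
import sympy as sp

total = sum((e2*sp.sqrt(2) + 3*e3*sp.sqrt(3) + 6*e5*sp.sqrt(5))**10
            for e2, e3, e5 in itertools.product([1, -1], repeat=3))
print(sp.expand(total / 8))  # an integer: the desired rational part
```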
Expectation of the Ratio of a Poisson Processes | The counts of events in disjoint regions are independently distributed.
$$\mathsf P(N_1=n, N_3=m)~=~\mathsf P(N_1=n)~\mathsf P(N_3-N_1=m-n)$$
The count of arrivals after time $1$ up to time $3$ is Poisson distributed with rate $(3-1)\lambda$.
$$N_3-N_1\sim\mathcal{Poiss}(2\lambda)$$
So putting it all together:
$$\mathsf P(N_1=n, N_3=m)~=~\dfrac{\lambda^m 2^{m-n}e^{-3\lambda}}{n!(m-n)!} \;\mathbf 1_{(n,m)\in\Bbb N^2,\ 0\leq n\leq m}$$
(NB: the support is $\{(n,m)\in\Bbb N^2: 0\leq n \leq m\}$ because there cannot be fewer arrivals in the whole three minutes than there were in the first minute.)
And in your equation
$$\begin{align}\mathsf E(N_1^2/N_3) & = \sum_{n=0}^\infty\sum_{m=n}^\infty \dfrac{n^2}{m}\dfrac{\lambda^m2^{m-n}e^{-3\lambda}}{n!(m-n)!} \\ & = \sum_{n=1}^\infty\sum_{m=n}^\infty \dfrac{n^2}{m}\dfrac{\lambda^m2^{m-n}e^{-3\lambda}}{n!(m-n)!} \tag{$\star$}\end{align}$$
This works if we allow, as mentioned in the post comments, that $\lim_{n\to 0}n^2/n = 0$.
However, yes, unfortunately the same convenience does not help avoid blowout for the other expectation.
$$\begin{align}\mathsf E(N_3^2/N_1) & = \sum_{m=0}^\infty\sum_{n=0}^m \dfrac{m^2}{n}\dfrac{\lambda^m2^{m-n}e^{-3\lambda}}{n!(m-n)!} \\ & = \sum_{m=1}^\infty\sum_{n=0}^m \dfrac{m^2}{n}\dfrac{\lambda^m2^{m-n}e^{-3\lambda}}{n!(m-n)!} \tag{$\star$}\end{align}$$ |
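A quick Monte Carlo sketch (my own code; $\lambda=1.5$ is an arbitrary choice) illustrates both points: the first expectation is finite under the $0/0=0$ convention, while the second suffers from the $N_1=0$ terms:
```python
# Sketch (mine): estimate E[N1^2 / N3] by simulation, with the
# convention 0/0 = 0 for the event N1 = N3 = 0.
import numpy as np

rng = np.random.default_rng(0)
lam, trials = 1.5, 10**6
n1 = rng.poisson(lam, trials)
n3 = n1 + rng.poisson(2 * lam, trials)   # independent increments
ratio = np.divide(n1.astype(float)**2, n3,
                  out=np.zeros(trials), where=n3 > 0)
print(ratio.mean())  # finite; no analogous rescue for E[N3^2 / N1]
```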
What's the difference between a contrapositive statement and a contradiction? | When one speaks of a contrapositive or proving a contrapositive, one is speaking about the contrapositive of an implication (an "if...then" statement), and as pointed out in the earlier answers, if one wants to prove that $$P \implies Q\tag{1}$$ one can choose, instead, to prove $$\lnot Q \implies \lnot P,\tag{2}$$ because both statements are equivalent (i.e., if one is true, so is the other...and if one is false, so is the other). Don't confuse the appearance of the $\lnot$ symbol on each side of (2) as being either a negation of (1) or a contradiction. To see what I mean, one can correctly state that (1) (which does not contain the "$\lnot$" symbol) is the contrapositive of (2), because (2) is equivalent to $$\lnot(\lnot P) \implies \lnot(\lnot Q) \equiv P \implies Q.\tag{3}$$
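A brute-force truth-table check of the equivalence of $(1)$ and $(2)$ (my own snippet, purely illustrative):
```python
# Sketch (mine): P -> Q and (not Q) -> (not P) agree on all four rows.
from itertools import product

implies = lambda p, q: (not p) or q
assert all(implies(P, Q) == implies(not Q, not P)
           for P, Q in product([False, True], repeat=2))
print("an implication and its contrapositive are equivalent")
```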
In contrast, a contradiction is obtained when one derives or asserts that both a statement $P$ and its negation $\lnot P\;$ hold, i.e., when one asserts or derives: $$P \land \lnot P\tag{4}$$ (E.g., $x \in A \land x \notin A$ is a contradiction, and as such, is false regardless of whether or not $x \in A$).
Another way of putting it is that a contradiction is any statement which is always false (i.e., a statement which is "inherently" false), and a contradiction can be thought of as the "opposite" of a tautology, which is always true (e.g., $P \lor \lnot P$ is a tautology, and as such is true without knowing whether $P$ is true or false). |
Check if a given set of vectors is the basis of a vector space | $\{1,X,X^{2}\}$ is a basis for your space. So the space is three dimensional. This implies that any three linearly independent vectors automatically span the space. |
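In practice you can test three candidate vectors by writing their coordinates in the basis $\{1,X,X^2\}$ as the rows of a matrix; a small SymPy sketch (my own, with made-up example vectors):
```python
# Sketch (mine): rows are 1+X, X+X^2, 1+X^2 in coordinates w.r.t.
# {1, X, X^2}; a nonzero determinant means they form a basis.
import sympy as sp

M = sp.Matrix([[1, 1, 0],
               [0, 1, 1],
               [1, 0, 1]])
print(M.det() != 0)  # True: three independent vectors, hence a basis
```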
How many different lottery tickets are there in which at least three numbers match numbers in the draw? | To have exactly three numbers correct, you choose three of the winning numbers which you can do in $6 \choose 3$ ways, then choose three losing numbers, which you can do in $43 \choose 3$ ways. As each choice of winning numbers can go with any choice of losing numbers you multiply them, getting ${6 \choose 3}{43 \choose 3}$. The other terms work the same way. To get exactly $n$ winning numbers you have ${6 \choose n}{43 \choose 6-n}$ ways because there are $6-n$ losing numbers. |
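Putting the terms together numerically (my own snippet, assuming the usual 6-of-49 lottery suggested by the $43$ losing numbers):
```python
# Sketch (mine): tickets with at least three matches in a 6-of-49 draw.
from math import comb

favourable = sum(comb(6, n) * comb(43, 6 - n) for n in range(3, 7))
print(favourable)                # 260624
print(favourable / comb(49, 6))  # probability, roughly 0.0186
```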
give an example of a cyclic group with 6 generators. | HINT: You know that the identity element can’t be a generator. The simplest possibility that could conceivably work is that it’s the only non-generator, in which case the group must have order $7$. Do you know a group of order $7$? What are its generators? |
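If you want to confirm your answer computationally afterwards, a tiny sketch (my own) lists the generators of the additive group $\mathbb Z_n$:
```python
# Sketch (mine): k generates Z_n exactly when gcd(k, n) = 1.
from math import gcd

def generators(n):
    return [k for k in range(1, n) if gcd(k, n) == 1]

print(generators(7))  # six generators, as the hint suggests
```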
One subgroup divisible by another | Frobenius introduces in Über endliche Gruppen (1895) the following terminology for two sets that are part of a larger group:
A set $B$ is divisible by $A$ iff $A \subset B$.
The lcm of $A$ and $B$ is the smallest group containing $A \cup B$ (or $A B$).
The gcd of $A$ and $B$ is therefore (the group) $A \cap B$.
(He then shows in Thm I, §.I, that $|A B| = |A| |B| / |A \cap B|$.) |
Closure of a set proof | We know that all $y_n\in\bar{A}$, and according to the definition of $\bar A$ this means there exists a sequence $\{x_i\}_{i=1}^\infty$ in $A$ such that $x_i\to y_n$ as $i\to\infty$.
This means in particular that for every $y_n$ we can find an $x_n$ such that $|x_n-y_n|<\frac{1}{n}$ (because we can come arbitrarily close). By choosing this $x_n$ for every $y_n$ we obtain
$$
|x_n-y_n|<\frac{1}{n}
$$
for all $n\in\mathbb{N}$. |
Is $\prod_{n=0}^\infty \left(1-\frac{1}{\cosh ^2((n+1/2)\pi)}\right)=\frac{1}{\sqrt[4]{2}}$ true? | If $q=e^{-\pi} $ then we can see that your product equals $$\prod_{n\geq 1}\frac{(1-q^{2n-1})^2}{(1+q^{2n-1})^2}$$ which equals $(g_1/G_1)^2$ where $g, G$ represent Ramanujan class invariants. This is indeed $2^{-1/4} $ as $G_1=1,g_1=2^{-1/8}$.
A little explanation of Ramanujan class invariants is necessary here.
Let $k\in(0,1)$ be the elliptic modulus and define the complete elliptic integral of the first kind $$K(k) =\int_{0}^{\pi/2}\frac{dx} {\sqrt{1-k^2\sin^2x}}\tag{1}$$ Further we define the complementary modulus $k'=\sqrt{1-k^2}$, and the expressions $K(k), K(k')$ are usually denoted by $K, K'$ if the value of $k$ is known from context.
A lot of magic is hidden in the elliptic modulus $k$, and the value of $k$ can be obtained when $K, K'$ are given, via the nome $q=\exp (-\pi K'/K)$.
We have $$k=\frac{\vartheta_2^2(q)}{\vartheta_3^2(q)}, k'=\frac{\vartheta_4^2(q)}{\vartheta_3^2(q)}\tag{2}$$ where
\begin{align}
\vartheta_2(q)&=\sum_{n\in\mathbb {Z}} q^{(n+(1/2))^2}\notag\\
&=2q^{1/4}\prod_{n=1}^{\infty}(1-q^{2n})(1+q^{2n})^2 \tag{3a}\\
\vartheta_3(q)&=\sum_{n\in\mathbb {Z}} q^{n^2}\notag\\
&=\prod_{n=1}^{\infty} (1-q^{2n})(1+q^{2n-1})^2\tag{3b}\\
\vartheta_4(q) &= \vartheta_3 (-q) \tag{3c}
\end{align}
are theta functions of Jacobi with one parameter being $0$. The equality of series and product expressions above is due to Jacobi Triple Product identity.
Ramanujan defined his class invariants $g_N, G_N $ using functions $g, G$ as \begin{align}
g(q) &=2^{-1/4}q^{-1/24}\prod_{n=1}^{\infty} (1-q^{2n-1})\tag{4a}\\
G(q)&=2^{-1/4}q^{-1/24}\prod_{n=1}^{\infty} (1+q^{2n-1})\tag{4b}\\
g_N &=g(\exp(-\pi\sqrt{N}))\tag{4c}\\
G_N &=G(\exp (-\pi\sqrt{N})) \tag{4d}
\end{align}
where $N$ is a positive rational number.
It can be proved using the product expressions for theta functions that $$g(q) =(2k/k'^2)^{-1/12},G(q)=(2kk')^{-1/12}\tag{5}$$ It can also be proved with some effort (say using the theory of modular equations) that if $N$ is a positive rational number and $q=\exp(-\pi\sqrt{N}) $ then the values of $k, k'$ are algebraic and hence $G_N, g_N $ are also algebraic numbers.
If $N=1$ we have $$q=e^{-\pi} =\exp (-\pi K'/K) $$ so that $K'=K$ and $k'=k=1/\sqrt{2} $ and from $(5)$ we get $g_1=2^{-1/8},G_1=1$. Using these values the product in question evaluates to $2^{-1/4}$. |
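For the skeptical reader, a direct numerical check (my own code, using mpmath) of the original product against $2^{-1/4}$; forty factors are plenty, since the factors approach $1$ like $e^{-(2n+1)\pi}$:
```python
# Sketch (mine): truncate the product at 40 factors and compare.
from mpmath import mp, cosh, pi, mpf

mp.dps = 30
prod = mpf(1)
for n in range(40):
    prod *= 1 - 1 / cosh((n + mpf(1)/2) * pi)**2
print(prod)                 # 0.8408964152537145...
print(mpf(2)**(mpf(-1)/4))  # matches
```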
Problem with Summation of series | You have solved it already. Recognise that you can factor out $\frac 13$ from all terms. You have infinitely many $\pm$ fractions, of which the first three positive ones ($\frac 12$, $\frac 13$, $\frac 14$) remain in the sum; every other one is cancelled out by a negative counterpart with a three-step gap. Therefore the end result is $\frac 13 \left(\frac 12 + \frac 13 + \frac 14\right)$. |
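If I have read the question right and the series is $\sum_{k\ge 2} \frac{1}{k(k+3)}$ (an assumption on my part, but it matches the cancellation pattern above), a quick numerical check (my own snippet):
```python
# Sketch (mine), assuming the series is sum_{k>=2} 1/(k(k+3)); its
# partial fractions (1/3)(1/k - 1/(k+3)) telescope as described.
partial = sum(1 / (k * (k + 3)) for k in range(2, 10**6))
print(partial)                # ~0.36111...
print((1/2 + 1/3 + 1/4) / 3)  # 13/36
```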
Find the derivative of $\arctan \left( \cos x\over1+\sin x \right)$. | It's $$\frac{1}{1+\left(\frac{\cos{x}}{1+\sin{x}}\right)^2}\cdot\left(\frac{\cos{x}}{1+\sin{x}}\right)'.$$
Can you end it now?
Also, you can use $$\frac{\cos{x}}{1+\sin{x}}=\frac{\sin\left(\frac{\pi}{2}-x\right)}{1+\cos\left(\frac{\pi}{2}-x\right)}=\tan\left(\frac{\pi}{4}-\frac{x}{2}\right).$$ |
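SymPy confirms the tidy constant that the second identity suggests (my own snippet):
```python
# Sketch (mine): the derivative collapses to the constant -1/2.
import sympy as sp

x = sp.symbols('x')
expr = sp.atan(sp.cos(x) / (1 + sp.sin(x)))
print(sp.simplify(sp.diff(expr, x)))  # -1/2
```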
Calculate determinant | I checked your work, and both the first process you used to compute the determinant and your first solution are correct.
Wolfram Alpha agrees with us, too. |
For any $R$-submodule $N$ of $M$, $N=Ma~\text{ if and only if}~ N=MaR,a\in R.$ | The question really is simply: if $M$ is a right $R$-module and $a\in R$ and $Ma$ is a right $R$-module, then does $Ma=MaR$?
Obviously yes, in any ring (not just a principal ideal ring.) Just as sets you have $Ma\subseteq MaR$. Since you have assumed $Ma$ is a right $R$ module, then by definition $Ma\supseteq (Ma)R$.
You noted the first containment as I did, but your explanation for the second half isn't convincing. Why would you say $MaR\subseteq Ma(1_R)$? You gave no reason other than to refer to the definition of a principal right ideal, which doesn't help. You need to appeal to the fact that $Ma$ is closed under multiplication by $R$ on the right. |
Probability of two events | To begin with, in order to satisfy
$$
\prod_{i=1}^{N}\epsilon_{i}^{n_{i}}=1
$$
we must have that, for $S=\left\{ i : \epsilon_{i}=-1\right\}$,
$$
\sum_{i\in S}n_{i}
$$
is even. Since
$$
\sum_{i=1}^{N}n_{i}=M
$$
we want to enumerate the partitions of $M$ into $N$ parts such that the sum of a subset $S$ of its parts is even. The subset is selected at random, selecting a part if $\epsilon_{i}=-1$ and leaving a part out if $\epsilon_{i}=1$. We don't care about the $n_{i}$ for which $\epsilon_{i}=1$.
Furthermore, in order to satisfy
$$
\sum_{i=1}^{N}\epsilon_{i}=0
$$
there need to be as many $\epsilon_{i}$ with $\epsilon_{i}=1$ as there are $\epsilon_{i}$ with $\epsilon_{i}=-1$, so that $N$ has to be even.
Therefore, the partitions of $M$ into $N$ parts (where $N$ is even) have to be such that a subset of their parts is a partition of an even number. If $M$ is odd, this subset must be strict. And we select parts from the partition of $M$ at random, with a coin toss.
The problem is not solved, but it is framed in terms that could lead to a solution. |
Exponential Distribution - Lifetime of Chips | Use the fact that $P[X>5000]=P[X<5000]$.
Deduce that the median is $5000$,
and use it to find the parameter $\lambda$ by integrating the density from $0$ to $5000$ and setting the result equal to $1/2$.
After this, just take the reciprocal to get the mean; you should get
$$5000/\log 2$$ as your mean. |
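A two-line numerical check of the claim (my own snippet):
```python
# Sketch (mine): rate log(2)/5000 gives median 5000 and mean 5000/log(2).
from math import log, exp

lam = log(2) / 5000
print(1 - exp(-lam * 5000))  # 0.5: the median condition holds
print(1 / lam)               # mean, about 7213.48
```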
The non-existence of a matrix $A\in M_3(\mathbb{R})$ such that $A^2-A+I=0$. | No such linear map exists. Suppose the contrary. Then $p(A)=0$ where $p(x)=x^2-x+1$. Pick any vector $v\ne0$. There are two possibilities:
$Av=cv$ for some $c\in\mathbb R$. Then $0=p(A)v=p(c)v$ and hence $p(c)=0$.
$V=\operatorname{span}\{v,Av\}$ is two-dimensional. Then $\mathbb R^3=V\oplus\operatorname{span}\{u\}$ for some $u\not\in V$ and
$$
Au=v_1+cu
$$
for some $v_1\in V$ and $c\in\mathbb R$. Since $A^2=A-I$, we have $A\{v,Av\}\subseteq V$. Hence $AV\subseteq V$ and in particular, $Av_1\in V$. It follows that
\begin{align}
A^2u=A(v_1+cu)=v_2+c^2u\tag{1}
\end{align}
for some $v_2\in V$. However, we also have
\begin{align}
A^2u=Au-u=v_1+(c-1)u.\tag{2}
\end{align}
By comparing the coefficients of $u$ in $(1)$ and $(2)$, we obtain $p(c)=0$.
Thus we arrive at contradictions in both cases, because $p(x)=0$ has no real roots. Hence $p(A)$ is never zero and $f$ does not exist. |
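As an aside (my own illustration, not part of the proof): the obstruction really is the odd dimension. In $M_2(\mathbb R)$ the rotation by $60^\circ$ does satisfy $A^2-A+I=0$, since its characteristic polynomial is exactly $x^2-x+1$:
```python
# Aside sketch (mine): a 60-degree rotation matrix annihilates p(x),
# by Cayley-Hamilton, since its characteristic polynomial is x^2 - x + 1.
import numpy as np

c, s = np.cos(np.pi / 3), np.sin(np.pi / 3)
A = np.array([[c, -s],
              [s,  c]])
print(np.allclose(A @ A - A + np.eye(2), 0))  # True
```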
Partial differentiation notation in thermodynamics | Whenever we take a partial derivative, we need to specify what we are keeping constant. We commonly just use the partial derivative notation (say $\frac{\partial }{\partial x_1}$ in $\mathbb{R}^n$) to mean "keeping the other $n-1$ coordinates constant" and if that's what is meant, we usually don't need to say anything more. But that is not the only meaning of the partial notation. You can keep other things constant. For example, in $\mathbb{R}^2$, you can ask, "what is $\frac{\partial f}{\partial x_1}$ keeping $(x_1+x_2)$ constant?".
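To make the $\mathbb R^2$ example concrete, here is a small SymPy sketch (my own; the choice $f=x_1x_2$ is arbitrary) showing that the two answers differ because different things are held constant:
```python
# Sketch (mine): df/dx1 holding x2 constant vs. holding s = x1 + x2 constant.
import sympy as sp

x1, x2, s = sp.symbols('x1 x2 s')
f = x1 * x2

print(sp.diff(f, x1))                              # x2 (x2 held constant)
print(sp.expand(sp.diff(f.subs(x2, s - x1), x1)))  # s - 2*x1, i.e. x2 - x1
```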
In the definitions you have given, $C_v$ is the partial derivative w.r.t. T and indeed P is allowed to vary while V is held constant. And in the case of $C_p$, V is allowed to vary as P is held constant. |
Product of Two subnormal subgroups. | In a finite $p$-group, every subgroup is subnormal, so this is a good place to look for counter-examples. Consider, for example, the respective groups generated by non-commuting involutions in a dihedral group of order 8... |
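To spell the hint out computationally, a SymPy sketch (my own) with two reflections of the square whose axes meet at $45^\circ$: each generates a subgroup of order $2$ (subnormal, as every subgroup of a finite $p$-group is), but their product set is not even closed:
```python
# Sketch (mine): two non-commuting reflections in the dihedral group
# of order 8; the product set of the two order-2 subgroups is not closed.
from sympy.combinatorics import Permutation, PermutationGroup

a = Permutation([1, 0, 3, 2])   # reflection swapping 0<->1 and 2<->3
b = Permutation([0, 3, 2, 1])   # reflection fixing vertices 0 and 2
A, B = PermutationGroup([a]), PermutationGroup([b])
AB = {x * y for x in A.elements for y in B.elements}
print(len(AB), all(x * y in AB for x in AB for y in AB))  # 4 False
```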
Check whether the following is a subgroup of S4 | Quick way: $((12) (13))^{2} \ne 1$, so it's not a subgroup.
But why did I choose those two elements? Well, if $a^{2} = b^{2} = 1$, then $(a b)^{2} = 1$ implies $1 = a b a b = a b a^{-1} b^{-1}$, whence $ab = ba$.
So I have chosen $a = (12)$ and $b = (13)$ because $ab \ne ba$. |
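For completeness, the quick way checked by machine (my own snippet; SymPy permutations are $0$-indexed):
```python
# Sketch (mine): ((12)(13))^2 is a nontrivial 3-cycle, not the identity.
from sympy.combinatorics import Permutation

a = Permutation([1, 0, 2, 3])   # the transposition (12)
b = Permutation([2, 1, 0, 3])   # the transposition (13)
print(((a * b) ** 2).is_identity)  # False, so the set is not closed
```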
Bouncing a point across a rectangle | Reflecting the rectangle repeatedly on its long side gives a straight line:
The vertical distance travelled after the first reflection is $W\tan\theta-x$. Let $q$ and $r$ be the quotient and remainder after dividing by $H$.
If $q$ is odd, as in the above picture, the top rectangle has the same orientation as the bottom one, giving $y=H-r$. If $q$ is even, the top rectangle has opposite orientation and $y=r$. |
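The unfolding recipe translates directly into code (my own sketch; `W`, `H`, `x`, `theta` as in the text, and the sample input is made up):
```python
# Sketch (mine) of the unfolded-rectangle computation described above.
from math import tan, pi

def exit_height(W, H, x, theta):
    d = W * tan(theta) - x   # vertical distance after the first reflection
    q, r = divmod(d, H)      # quotient and remainder on dividing by H
    return H - r if int(q) % 2 == 1 else r

print(exit_height(3.0, 1.0, 0.2, pi / 5))  # ~0.0204 for this input
```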
What is the intuition behind right-continuous filtration? | The idea is that you gain no additional information by taking an infinitesimal step forward in time.
Remember that an $\mathit{intersection}$ means that we are taking only the elements contained in EVERY set in the intersection. So, if we think of each $\mathcal{F}_t$ as the information contained in the system up to time $t$, the intersection $\cap_{\epsilon > 0} \mathcal{F}_{t+\epsilon}$ contains only the information in EVERY $\mathcal{F}_{t+\epsilon}$ for every possible value of $\epsilon > 0$. That is, only the information contained up until $t+\epsilon$ for every $\epsilon > 0$, in particular, any arbitrarily small $\epsilon$. So, in this intersection, we have added only the information gained by taking an infinitesimally small step forward in time.
Thus, the idea of right continuity, $\mathcal{F}_t=\cap_{\epsilon > 0} \mathcal{F}_{t+\epsilon}$ is that no information is added in this infinitesimal step. In other words, there are no instantaneous developments of the system, it evolves in a continuous fashion going forward in time.
(Much credit for this answer is due to Huyen Pham, whose book I'm currently using to review some of this material.) |
Most common naming for graph edges ("to $\to$ from" vs "in $\to$ out" vs "source $\to$ target", etc) | To me (with little formal graph theory education) those three things signify slightly different things. This is what my gut says:
To-From: For a pair of vertices, there can be an edge going from one and to the other
In-Out: For a given vertex, some edges go in, some edges go out
Source-Target: For a given edge, one vertex is the source, and one vertex is the target |
Prove that if two norms on V have the same unit ball, then the norms are equal. | Suppose that $p_1(v)< p_2(v) $ for some $v$. Let $w=v/p_1(v) $. Then $p_1 (w)=1$, so $w\in B(0,1)$. But $p_2 (w)> 1$, and this implies that $w\not\in B(0,1) $, a contradiction. |
Prove that roots are real | We look at the discriminant of the polynomial, which for a quadratic $ax^2 +bx +c$ is $b^2 -4ac$; plugging in the values for our polynomial gives
$$\Delta = 4(a+b)^2-4(a+b-c)(a+b+c)\\
= 4[(a+b)^2 - (a+b)^2+c^2]\\
= 4c^2$$
Since the square of a real number is nonnegative, we know that the roots must be real, by looking at the quadratic formula, which gives the solutions as
$$\frac{-b\pm\sqrt\Delta}{2a}$$
and noting that the square root of a nonnegative real number is real. We used the discriminant because it makes the computation much easier than carrying everything from the first step underneath the radical, which would be rather ugly. Inspection shows that if $\Delta > 0$ there are two distinct real roots, if $\Delta < 0$ there are two complex roots, which are conjugate, and if $\Delta = 0$ you have a real double root. |
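SymPy reproduces the collapse of the discriminant (my own snippet, assuming the quadratic in question is $(a+b-c)x^2+2(a+b)x+(a+b+c)$, which is what the terms above suggest):
```python
# Sketch (mine): the discriminant of the assumed quadratic is 4c^2.
import sympy as sp

a, b, c, x = sp.symbols('a b c x')
p = (a + b - c)*x**2 + 2*(a + b)*x + (a + b + c)
print(sp.expand(sp.discriminant(p, x)))  # 4*c**2
```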
In what ways is a Kalman-filter a filter? | A Kalman Filter is a filter in the same sense as that used by the air defence system for the UK in the WW2 "filter room" where radar data from multiple sources was combined to form a tactical picture.
From the relevant Wikipedia page:
Radar detection of objects at that time was at its early stages of development and there was a need for a method to combine the different radar information gathered from different stations.
Accurate details of incoming or outgoing aircraft were obtained by combining overlapping reports from adjacent radar stations and then collating and correcting them. This process of combining information was called "filtering" and took place in seven Filter Rooms. |
a question on hereditary $C^*$- subalgebras | What follows is an incomplete solution, but perhaps has some merit. One can apply Proposition 2.5 of this paper here, but there appears to be a problem I describe below.
Lemma: Let $X$ be a compact Hausdorff space, $f,g\in C(X)_+$ be two positive functions such that $\text{supp}(g) \subset \text{supp}(f)$. Then, for any $\epsilon >0, \exists h\in C(X)$ such that $\|g-hfg\| < \epsilon$
Now apply the above lemma to the function $f$ and $g(t) =t$. Let $h$ be as in the lemma, and let $b = h(a)f(a)a$. Then for any $x\in A$, $bxb \in f(a)Af(a)$, and
$$
\|axa - bxb\| \leq \|axa-axb\| + \|axb - bxb\| < \epsilon\|x\|(\|a\| + \|b\|)
$$
Now suppose one can control $\|b\|$ in terms of $\|a\|$ and $\|f(a)\|$; then this would imply $axa \in \overline{f(a)Af(a)}$, proving that $\overline{aAa} \subset \overline{f(a)Af(a)}$.
The problem then is to control $\|b\|$, which amounts to controlling $\|h\|$ in the above lemma. Going through the proof, one sees that
$$
\|h\| \leq \frac{1}{\delta}
$$
where $\delta > 0$ is obtained by the continuity of $f$. I am not sure if there is a way to control this quantity. However, for certain functions $f$, it is possible (for instance if $f(t) \geq t$ for all $t\in [0,\|a\|]$).
Hope this helps. |
What is the number of permutations of a $N \times N$ square grid where the rows of the grid contain $N$ different numbers? | Just to clarify, when you say
all possible permutations of a $6\times6$ grid where every row of the grid contained the numbers 1 through 6 once
do you mean all possible $6 \times 6$ grids such that each row contains the numbers $1$ through $6$ once? Using the word "permutation" makes it sound like we're moving around the entries of a preexisting grid of numbers.
If so, you're right. The idea is that there are $6!$ possibilities for each row: the first entry can be any of the numbers from 1 to 6, the second entry can be any of the five remaining numbers, the third entry can be any of the four remaining numbers, etc., for a total of $6 \cdot 5 \cdot 4 \cdot 3 \cdot 2 \cdot 1 = 6!$ possibilities. From there, we notice that each row can be chosen independently: there are $6!$ possibilities for each row, so there are $6! \cdot 6! \cdot 6! \cdot 6! \cdot 6!\cdot 6! = (6!)^6$ total possibilities for the grid. |
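Just to see the size of that number (my own one-liner):
```python
# Sketch (mine): (6!)^6 grids in total.
from math import factorial
print(factorial(6) ** 6)  # 139314069504000000
```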
Understanding relation between vector valued function and function objective in an multi objective optimization problem | I don't know much about optimization, but I believe your questions are mainly about math notations, which I will now try to explain.
Just looking at the name, a "vector valued function" is a function with values in a vector space. It is usually specified to point out that the function output is a multi-dimensional vector, which is the same as saying it has multiple numerical outputs (in your example, those are $x^2$ and $(x-2)^2$ for input $x$). Therefore, your function is still an objective function, but with multiple outputs, that's why it is called "vector valued". The best way to represent it would be as a parametrized curve: in your example, consider the variable $x$ as time. At each instant, the function gives a point in the plane $\mathbb R^2$, so we can represent all of them as a curve. Here is the curve for your particular example, when $-2\leq x\leq 4$:
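In case the plot does not come through, here is a tiny numeric sketch (my own) sampling the same curve at a few values of $x$:
```python
# Sketch (mine): sample the curve (f1(x), f2(x)) = (x^2, (x-2)^2).
import numpy as np

xs = np.linspace(-2, 4, 7)
curve = np.column_stack([xs, xs**2, (xs - 2)**2])  # columns: x, f1, f2
print(curve)
```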
You are free to use the notation you find the most easy to work with, as long as it is explicit that you have one input $x$ and two outputs $f_1(x)$ and $f_2(x)$.
The notations with $i$ and $j$ are rather ambiguous, unless you have defined those as basis of the vector space before. The arrow notation is nice if you want to remember that the function is vector valued, but might be cumbersome to carry everywhere.
The $(\cdot)^T$ notation stands for transpose, which means change rows to columns and columns to rows in a matrix. This is because vectors in $\mathbb{R}^n$ are usually represented as a column vector, while we usually write in rows. Specifically,
$$(f_1(x), f_2(x))^T =\left(\begin{array}{c}f_1(x)\\ f_2(x)\end{array}\right).$$
The $*$ notation is more ambiguous, and could mean many different things, according to context. You can find here an explanation which seems to suit your case (although I cannot be sure of it), as the vector of solutions to the optimization problems of each coordinates. Following this rule, in the example, $$x^*=(x_1^*,x_2^*) = (0,2)$$ (because $0$ is the optimal value of $x$ for $f_1$ and $2$ is the optimal value of $x$ for $f_2$). |
Find median of a combination of tables. | There are $50$ elements of $R$ that are $5$ or below: $44$ from $P$ and $6$ from $Q$. There are also $50$ elements of $R$ that are $6$ or above: $16$ from $P$ and $34$ from $Q$. The median is halfway between $5$ and $6$, or $5.5$. The statement that the items are distinct means that $R$ has $16$ items in the $5$ bin because we can add $10+6$. If some of the items were the same we would have to deduct the matches from $16$. |
How to extract the principal part of a Laurent series at an essential singularity? | If $f$ is analytic for $0<|z|<r$ then for $0<a<|z|<b<r$
$$f(z)= \frac1{2i\pi} \int_{|s|=b}f(s)( \frac{1}{s-z}-\frac1s)ds+\frac1{2i\pi} \int_{|s|=b} \frac{f(s)}{s}ds-\frac1{2i\pi} \int_{|s|=a} \frac{f(s)}{s-z}ds$$
Expanding $1/(s-z)$ in a power series in $s/z$ or $z/s$, this is how we prove that $f$ has a Laurent series. |
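Numerically, the coefficients of the principal part can be read off from the contour-integral formula $c_n=\frac1{2i\pi}\oint f(s)\,s^{-n-1}\,ds$; here is a sketch (my own code) for the essential singularity of $f(z)=e^{1/z}$ at $0$:
```python
# Sketch (mine): Laurent coefficients of exp(1/z) at 0 by numerical
# integration over the circle s = r e^{it}; c_{-n} should equal 1/n!.
from mpmath import mp, quad, exp, mpc, pi, factorial

mp.dps = 15
f = lambda z: exp(1 / z)

def laurent_coeff(n, r=0.5):
    g = lambda t: f(r * exp(mpc(0, 1) * t)) * (r * exp(mpc(0, 1) * t))**(-n)
    return quad(g, [0, 2 * pi]) / (2 * pi)

print(laurent_coeff(-3).real)  # ~0.166666..., i.e. 1/3!
print(1 / factorial(3))
```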
Sampling from a given pdf | Try this (see my latest question, whose +50 bounty I just posted): first sample $u=U[0,1]$ (think of it as your CDF $F(x)$ target value); then take a random value of $x$ from $0$ to $A$, to wit $x_1=A\,U[0,1]$; now compute $f(x_1)$ and multiply it by a sufficiently small $\Delta x$. Do this repeatedly, computing $z_j = \sum_{i=1}^{j}f(x_i)\Delta x$, and take as your sample the $x_j$ where $z_j > u$. |
Confusion on Lemma to the First Isomorphism Thm | By definition of the canonical map, for each $x\in G$, $\gamma(x)=xN$. Also $y\in \gamma^{-1}\bigl(\gamma(x)\bigr)$ means that $\gamma(y)=\gamma(x)$, i.e. $yN=xN$. In other words, $y\in xN$. Thus $\;\gamma^{-1}\bigl(\gamma(x)\bigr)=xN$, and finally
$$\gamma^{-1}\bigl(\gamma(L)\bigr)=\bigcup_{x\in L}\gamma^{-1}\bigl(\gamma(x)\bigr)=\bigcup_{x\in L}xN\subset L\quad\text{if }N\subset L.$$
The reverse inclusion is obvious. |
Matching polynomial is real rooted | Hint. Start by showing that for a tree $T$ one has $x^{|T|}M_T(-1/x^2)=\det(Ix-A_T)$, where $A_T$ is the adjacency matrix of $T$. Since $A_T$ is symmetric, it follows that the polynomial $\det(Ix-A_T)$ is real rooted.
A useful reference can be found HERE (although the definition of matching polynomial is a bit different). |
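A concrete instance of the hinted identity (my own SymPy snippet, using the matching generating polynomial $M_T(y)=\sum_k m_k y^k$ and the path on four vertices):
```python
# Sketch (mine): for the path P4, x^4 * M(-1/x^2) equals det(Ix - A).
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[0, 1, 0, 0],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [0, 0, 1, 0]])      # adjacency matrix of the path P4
char = (x * sp.eye(4) - A).det()   # x**4 - 3*x**2 + 1

M = lambda y: 1 + 3*y + y**2       # m0 = 1, m1 = 3 edges, m2 = 1
print(sp.expand(x**4 * M(-1 / x**2) - char))  # 0
```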