title | upvoted_answer
---|---
Check the integral of complex function, verification. Cauchy integral formula | Almost. It's $-\dfrac53$ that matters here, not $\dfrac53$, and therefore you have\begin{align}\oint_{|z|=2}\frac{2^z}{(3z+5)^7}\,dz &= \frac{2\pi i\left(1/3\right)^72^{-5/3} \log^6(2)}{6!}\\&=\frac{2\pi i\log^6(2)}{2^{5/3}3^76!}.\end{align}
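A quick numerical check of this value (a sketch using mpmath; the only assumptions are the parametrization $z=2e^{it}$ and the helper name `f`):
```python
from mpmath import mp, exp, log, pi, quad, factorial

mp.dps = 30

def f(z):
    return 2**z / (3*z + 5)**7

# contour |z| = 2 parametrized as z(t) = 2 e^{it}, so dz = 2i e^{it} dt
numeric = quad(lambda t: f(2*exp(1j*t)) * 2j*exp(1j*t), [0, 2*pi])

# closed form from the residue at z = -5/3
exact = 2j*pi*log(2)**6 / (2**(mp.mpf(5)/3) * 3**7 * factorial(6))
print(numeric, exact)
```
The two printed values should agree to working precision. |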
When to use the root test. Is this not a good situation to use it? | It doesn't cause any problems, because $\lim_{n\to\infty}\sqrt[n]{n^4}=1.$ Actually, the root test is stronger than the ratio test. Sometimes the root test limit exists, but the ratio test limit does not. However, if they both exist, then they are equal. That is why, if one of the limits is $1$, there is no point in trying the other, even though the root test is stronger. |
Trapezoid, find the sides | I just needed to combine these two formulas into a system:
$$
\begin{cases}
r=\frac{ab}{a+b} \\
P=2(a+b)
\end{cases}
$$ |
Integer Programming Conditional Constraints | Such constraints are called disjunctive constraints. You can proceed as follows (for your constraint 1.):
$$
y_2+y_3+y_4\le2+(1-x_1)\quad y_2+y_3+y_4\ge2-2(1-x_1),
$$
This way, if $x_1=1$, you have
$$
y_2+y_3+y_4\le2 \quad y_2+y_3+y_4\ge2,
$$
which is equivalent to $y_2+y_3+y_4=2$. And if $x_1=0$, you have
$$
y_2+y_3+y_4\le2+1=3\quad y_2+y_3+y_4\ge2-2=0,
$$
which is always satisfied since the variables are binary.
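A brute-force check of the linearization over all binary assignments (a sketch; the variable names follow the constraint above):
```python
from itertools import product

# verify: the two inequalities encode "x1 = 1  =>  y2 + y3 + y4 = 2"
for x1, y2, y3, y4 in product([0, 1], repeat=4):
    s = y2 + y3 + y4
    linearized = (s <= 2 + (1 - x1)) and (s >= 2 - 2*(1 - x1))
    intended = (x1 == 0) or (s == 2)
    assert linearized == intended
```
The assertion passes for all $2^4$ binary assignments. |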
Prove that $ 5^{2^{x}} \equiv 1 \pmod {17} $ for all $ x \in \mathbb{N}_{> 3} $. | HINT: $\phi(17) = 16$ divides $2^x$ for $x>3$ , where $\phi$ is the Euler Totient Function. |
Showing that a $X(u,v)$ is one-to-one | Probable hint: Let $U\subset\mathbb{R}^m$ be an open set and let $f:U\to \mathbb{R}^n$ be a map of class $C^1$. If, for $a\in U$, the derivative $f'(a): \mathbb{R}^m\to \mathbb{R}^n$ is one-to-one, then $f$ is locally one-to-one. |
Reduce the range of weighted sum | Suppose you are getting $x$ in the range $0.0000002$ to $1$. Let $y = (x - 0.0000002)\cdot 0.998 / 0.9999998 + 0.002$. Now $y$ is in the range $0.002$ to $1$. So just use $y$ in place of $x$ if you want values in that range.
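The same affine rescaling written as a general function (a sketch; the names and defaults are illustrative):
```python
def rescale(x, lo=0.0000002, hi=1.0, new_lo=0.002, new_hi=1.0):
    # affine map sending [lo, hi] onto [new_lo, new_hi]
    return (x - lo) * (new_hi - new_lo) / (hi - lo) + new_lo

print(rescale(0.0000002), rescale(1.0))  # 0.002 1.0
```
This reproduces the formula above, since `(new_hi - new_lo)/(hi - lo) = 0.998/0.9999998`. |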
If $\sum_{k=0}^n \frac{A_k}{x+k}=\frac{n!}{x(x+1)(x+2)(x+3)\cdots(x+n)}$ Then find $A_7$ | Note that:
$$\frac1x\frac1{\binom{x+n}n}=\frac{n!}{x(x+1)(x+2)\dots(x+n)}$$
And thus,
$$\frac1x\left[\frac1{\binom{x+n}n}-\frac1{\binom{x+{n-1}}{n-1}}\right]=\frac{A_n}{x+n}$$
Which gives the general solution immediately. |
Find the conjugacy classes of $D_6$ | Note that conjugating by $a^2$ is the same as conjugating by $a$ twice. And you know how to conjugate by $a$ in $D_6$. In general, it is useful to denote generators of the dihedral group $D_{2n}$ (symmetries of a regular $n$-gon) by $r$ and $s$, with $$D_{2n}=\langle r,s\mid r^n=s^2=1,srs=r^{-1}\rangle$$
These remind you that $r$ is a rotation (by an angle $2\pi/n$) and $s$ is a symmetry across the first vertex. Note $s=s^{-1}$, so $srs=srs^{-1}=r^{-1}$ tells you that conjugation by $s$ inverts $r$. You have $$D_{6}=\langle r,s\mid r^3=s^2=1,srs=r^{-1}\rangle=\{1,r,r^2,s,rs,r^2s\}$$
So, let's say you have to find the class of $s$. It is clear $s$ and $1$ fix $s$. Then
$$r^{-1}sr=r^{-1}sr(ss)=r^{-1}(srs)s=r^{-1}r^{-1}s=r^{-2}s=rs$$
Since conjugating $s$ by $r^2$ is the same as conjugating by $r$ twice, $r^{-2}sr^2=r^{-1}(rs)r=sr$. Then
$$(rs)^{-1}srs=sr^{-1}(srs)=sr^{-1}r^{-1}=sr$$
$$(r^2s)^{-1}sr^2s=(sr^{-2})(sr^2s)=(sr)(srs)(srs)=sr r^{-1}r^{-1}=sr^{-1}=(sr^{-1}s)s=rs$$ |
A general element of U(2) | Preliminary Note: For some reason, physicists and mathematicians have different notation for Lie groups and Lie algebras. For mathematicians, the Lie algebra corresponding to a matrix Lie group $G$ is the set
$$
\mathcal{L}_{\mathrm{math}} \;=\;\{A\in\mathbb{C}^{n\times n} \mid \exp(tA) \in G\text{ for all }t\in\mathbb{R}\}.
$$
For physicists, the corresponding Lie algebra is the set
$$
\mathcal{L}_{\mathrm{phys}} \;=\; \{A\in\mathbb{C}^{n\times n} \mid \exp(itA) \in G\text{ for all }t\in\mathbb{R}\}.
$$
These are related by the equation
$$
\mathcal{L}_{\mathrm{math}} \;=\; \{i A \mid A \in \mathcal{L}_{\mathrm{phys}} \}.
$$
I will follow the OP's lead and use physics notation.
Answer: The Lie algebra $\mathcal{L}_{\mathrm{phys}}$ associated to $U(2)$ is the set of all $2\times 2$ Hermitian matrices. Every such matrix has the form
$$
a I \,+\, b_1\sigma_1 \,+\, b_2\sigma_2 \,+\, b_3\sigma_3
$$
where $I$ is the $2\times 2$ identity matrix, $\sigma_1,\sigma_2,\sigma_3$ are the three Pauli matrices, and $a,b_1,b_2,b_3\in\mathbb{R}$. We can write this as
$$
a I + \textbf{b} \cdot \boldsymbol{\sigma}
$$
where $\textbf{b} = (b_1,b_2,b_3) \in \mathbb{R}^3$, and $\boldsymbol{\sigma}$ is the $3$-tuple $(\sigma_1,\sigma_2,\sigma_3)$ of matrices.
It is true that every element of $U(2)$ can be written as an exponential $e^{iH}$, where $H$ is a $2\times 2$ Hermitian matrix. (This sort of thing isn't always true for Lie groups, but it is true in this case.) It follows that
$$
U(2) \;=\; \{\exp(iaI + i\textbf{b}\cdot \boldsymbol{\sigma}) \mid a\in\mathbb{R}\text{ and }\textbf{b}\in\mathbb{R}^3\}.
$$
Now, it is not true in general that $\exp(A+B) = \exp(A)\exp(B)$ when $A$ and $B$ are matrices. However, it is true as long as $A$ and $B$ commute. Since $iaI$ is a multiple of the identity matrix, it commutes with anything, so
$$
U(2) \;=\; \{\exp(iaI) \exp(i\textbf{b}\cdot \boldsymbol{\sigma}) \mid a\in\mathbb{R}\text{ and }\textbf{b}\in\mathbb{R}^3\}.
$$
It turns out that $\exp(iaI) = \exp(ia)I$, so it follows that
$$
U(2) \;=\; \{\exp(ia) \exp(i\textbf{b}\cdot \boldsymbol{\sigma}) \mid a\in\mathbb{R}\text{ and }\textbf{b}\in\mathbb{R}^3\}.
$$
This is essentially option (1) in your question.
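As a quick numerical illustration (a sketch with NumPy/SciPy; the parameter values are arbitrary), one can check the factorization and the unitarity of the result:
```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices and the identity
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

a, b = 0.7, np.array([0.3, -1.2, 0.5])
H = a*I2 + b[0]*s1 + b[1]*s2 + b[2]*s3      # a Hermitian matrix
U = expm(1j*H)                              # an element of U(2)

# aI commutes with b.sigma, so the exponential factors
U_factored = np.exp(1j*a) * expm(1j*(b[0]*s1 + b[1]*s2 + b[2]*s3))

print(np.allclose(U, U_factored))           # True
print(np.allclose(U.conj().T @ U, I2))      # True: U is unitary
```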
It is possible to derive this result from a version of the reasoning you gave. In particular, let
$$
H \;=\; \left\{\left.\begin{bmatrix}u & 0 \\ 0 & u\end{bmatrix} \;\right|\; u\in U(1) \right\}.
$$
Then $H$ is a subgroup of $U(2)$ isomorphic to $U(1)$, and $U(2)$ is generated by the elements of $H$ together with the elements of $SU(2)$:
$$
U(2) \;=\; \langle SU(2) \cup H\rangle.
$$
Now, in general if you have two matrix groups it is not true that every element of $\langle G\cup H\rangle$ can be written as an element of $G$ multiplied by an element of $H$. (Instead, elements of $\langle G\cup H\rangle$ can be written as words involving elements of $G$ and elements of $H$.) However, it is true whenever every element of $G$ commutes with every element of $H$, which is why it works in this case. |
Numerical stability of Euler Forward for a differential equation. | The condition for stability is that for eigenvalues $λ$ with negative real part you have $$|1+hλ|<1.$$ For your problem these eigenvalues are $λ=\frac12(-1\pm i\sqrt{3})$ and $λ=-\frac12(1+\sqrt5)$, so that the condition is
$$
(1-\tfrac12h)^2+\tfrac34h^2<1\iff h^2-h<0~~\text{ and }~~h(1+\sqrt5)<4
$$
and as $h>0$, the bound for stability is $h<\min(1, \frac{4}{1+\sqrt5})=1$.
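To see the bound in action, here is a sketch that integrates a stand-in diagonal system with exactly these eigenvalues (the original problem's matrix is not reproduced here, but forward-Euler stability depends only on the spectrum):
```python
import numpy as np

lam = np.array([(-1 + 1j*np.sqrt(3))/2,
                (-1 - 1j*np.sqrt(3))/2,
                -(1 + np.sqrt(5))/2])
A = np.diag(lam)  # stand-in system with the stated eigenvalues

def max_norm_after_euler(h, T=60.0):
    x = np.ones(3, dtype=complex)
    for _ in range(int(T/h)):
        x = x + h*(A @ x)  # one forward Euler step
    return np.abs(x).max()

for h in [0.5, 0.75, 1.0, 1.01]:
    print(h, max_norm_after_euler(h))
# decays for h < 1; the complex pair sits exactly on the stability
# boundary at h = 1; grows (slowly at first) for h > 1
```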
However this only means that you avoid visibly, blatantly wrong behavior of the numerical solution, that is, the numerical solution exploding away from a fixed point that the exact solution moves towards. The overall global error of size $O(h)$, or at a fixed time $t$ of size $O((e^{Lt}-1)h)$, still remains. It may be advisable to take $h$ much smaller than the stability bound to get a sensible global error.
While the exact solution converges very rapidly towards the origin, with a bit more than one visible rotation about zero, the numerical solution with step size $h=0.5$ still shows some of that rapidity, while $h=0.75$ converges rather slowly and $h=1$ looks undecided as to whether it approaches the origin at all. |
Does $n^n \geq n$ and $n! \geq 1$ prove that $\frac{1}{n} \geq \frac{n!}{n^n}$? | You can use those inequalities but they are not sufficient (see stewbasic's comment).
Consider that
$$\frac{n!}{n^n}=\frac{1}{n}\cdot\left( \frac{2}{n}\cdot \frac{3}{n}\cdots \frac{n-1}{n}\cdot \frac{n}{n}\right).$$
Now recall that if $0<a\leq 1$ and $0<b\leq 1$ then also the product $0<a\cdot b\leq 1$.
Can you take it from here? |
Equivalent definitions in $\mathbb{R}$ topology | Sketches:
(i) $\implies$ (ii): Let $x_n \to a$ where $a \in A$. If there does not exist $n_0$ such that $\{x_n : n \ge n_0\} \subseteq A$, then the sequence $(x_n)$ steps outside of $A$ infinitely often. So there exists a subsequence of $(x_n)$ that lies outside of $A$ but converges to $a \in A$. This contradicts (i).
(ii) $\implies$ (i): Let $y_n \to b$ where $y_n \notin A$ for all $n$. If $b \in A$, then (ii) implies $y_n \in A$ for all sufficiently large $n$, which is a contradiction. |
Evaluate logarithmic integral | I hope this is correct.
The substitution $y=\ln(x)$ (so $x=e^y$ and $dx=e^y\,dy$) gives
$$\int_0^{\infty}\ln(x)^ndx=\int_{-\infty}^{\infty}y^ne^ydy=\int_{-\infty}^0y^ne^ydy+\int_0^{\infty}y^n e^ydy $$
where
$$\int_{-\infty}^0y^ne^ydy=\int^{\infty}_0(-y)^ne^{-y}dy=(-1)^n\int^{\infty}_0y^ne^{-y}dy=(-1)^nn!$$
but
$$
\int_0^{\infty}y^n e^ydy\geq\int_0^{\infty}y^ndy=\lim_{y\rightarrow \infty}\frac{1}{n+1}y^{n+1}=\infty$$
Hence $\int_0^{\infty}\ln(x)^ndx$ is divergent for every $n\geq0$. |
Understanding the solution of a probability question | If Toby stops after examining exactly $n+1$ policies, this means that the first $n$ policies would be non-claims, and the last (the $(n+1)$st policy) would be a claim. This has probability $0.8^n\times 0.2$. So $0.8^n$ is not the probability that Toby examines exactly $n+1$ policies.
The event that Toby examines more than $n$ policies is the same as the event that the first $n$ policies examined by Toby are all non-claims. This has probability $0.8^n$, as advertised. |
A hermitian positive semi-definite matrix with all entries on the complex unit circle has rank one | Let $A$ be $HPSD$ and $n\times n$
then $A$ has all ones on its diagonal (why?)
with eigenvalues in the usual ordering
$\lambda_1 \geq \lambda_2\geq ... \geq \lambda_n \geq 0$
$$\begin{aligned}
n^2 &= \Big(1+1+\cdots+1\Big)^2 \\
&= \Big(\text{trace}\big(A\big)\Big)^2 \\
&= \big(\lambda_1+ \lambda_2+\cdots+\lambda_n\big)^2 \\
&\geq \lambda_1^2+ \lambda_2^2+\cdots+\lambda_n^2 \\
&= \text{trace}\big(A^2\big) \\
&= \big\Vert A\big\Vert_F^2 \\
&= n^2
\end{aligned}$$
Since the two ends agree, the inequality is met with equality; as the $\lambda_i$ are nonnegative, all cross terms $\lambda_i\lambda_j$ with $i\neq j$ must vanish,
$\implies 0=\lambda_2=\cdots=\lambda_n$
$\implies \lambda_1 = n$
and $A$ is diagonalizable, hence it is rank 1. |
Consistent with the name of the coordinates for $\mathbb R^3$? | You don't have to, but you should.
Like all language, mathematical notation is all about communication. The point of writing mathematics down is to move ideas from your head to someone else's head (or possibly back into your own head at a later time). To achieve this, you write down your ideas using a mutually understood language. Some of it will use English, or whatever other human language you and your reader prefer to use, and some of it will use mathematical notation.
In order to fulfill this purpose, it is desirable that whatever you write is as easy as possible to read. Part of that is to make sure that the reader doesn't have to keep track of too much at the same time, and doesn't get overloaded with new stuff to keep track of at every turn, as well as to play along with the reader's expectations.
If you have written about a function $f:\Bbb R^3\to \Bbb R$, that takes the three variables $x_1, x_2, x_3$ as input, and you then introduce another function $g:\Bbb R^3\to \Bbb R$, then if you describe its inputs as $x, y, z$, well, that's another set of symbols that your reader now has to remember and keep track of as they read. At the same time, they will wonder whether there was any reason to switch, and what will happen with $x_1, x_2, x_3$. All this goes on in the back of their minds, and distracts them from being able to follow your actual argument. At least, it would do that to me. |
Jordan Decomposition Theorem Double Implication | The two terms $\phi(A_k)$ and $-\phi(E\setminus A_k)$ only differ by a constant.
Thus, any one of the two tends towards its supremum value iff the other does the same thing.
EDIT : here is a more detailed explanation :
There are three elementary properties you need to know :
(1) The minus sign reverses order, so in abstract nonsense
terms $-({\sf inf})$ is the same thing as ${\sf sup}(-)$.
(2) $\sf sup$ (or $\sf inf$) behaves well with respect to translation : in
abstract nonsense terms again, ${\sf sup}(c+)$ is the same thing as
$c+{\sf sup}()$.
(3) $\cal A$ is closed under taking complements, so that
$$
\bigg\lbrace E\setminus A \ \bigg| \ A\subseteq E, A\in {\cal A} \bigg\rbrace=
\bigg\lbrace A \ \bigg| \ A\subseteq E, A\in {\cal A} \bigg\rbrace
$$
Then, you have
$$
\begin{array}{lcl}
\underline{V}(E) &=& -{\sf inf}_{A\subseteq E, A\in {\cal A}} \phi(A) \\
&=& {\sf sup}_{A\subseteq E, A\in {\cal A}} -\phi(A) \ (\text{by} \ (1))\\
&=& {\sf sup}_{A\subseteq E, A\in {\cal A}} -\phi(E \setminus A) \ (\text{by} \ (3))\\
&=& {\sf sup}_{A\subseteq E, A\in {\cal A}} -(\phi(E)-\phi(A)) \\
&=& {\sf sup}_{A\subseteq E, A\in {\cal A}} (-\phi(E))+\phi(A) \\
&=& (-\phi(E))+{\sf sup}_{A\subseteq E, A\in {\cal A}} \phi(A) \ (\text{by} \ (2))\\
&=& (-\phi(E))+\overline{V}(E)
\end{array}
$$
Thus, if $\phi(A_k)$ converges to $x$, then $-\phi(E\setminus A_k)$ converges to $x-\phi(E)$. Taking $x=\overline{V}(E)$, your double implication follows. |
Does the following necessarily converge to a normal random variable in distribution? | For $c_n=\sqrt n$, observe that $\frac{\max_{1\le k\le n}c_k^2}{\sum_{k=1}^n c_k^2}=\frac{n}{n(n+1)/2}=\frac{2}{n+1}\to 0$ as $n\to \infty$.
As $X_i$'s are i.i.d, by Hajek-Sidak's CLT,
$$\frac{\sum_{k=1}^n c_k X_k}{\sqrt{\sum_{k=1}^n c_k^2}}=\sqrt{\frac{2}{n(n+1)}}\sum_{k=1}^n \sqrt k X_k \stackrel{d}\longrightarrow N(0,1)$$
That is, $$\sqrt{\frac{2n}{(n+1)}}S_n\stackrel{d}\longrightarrow N(0,1)$$
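A quick simulation sketch (assuming, as in Hajek-Sidak's setting, that the $X_k$ are i.i.d. with mean $0$ and variance $1$; the uniform distribution below is just one such choice):
```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 2000, 5000
k = np.arange(1, n + 1)

# X_k i.i.d. with mean 0, variance 1: uniform on [-sqrt(3), sqrt(3)]
X = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(reps, n))
T = np.sqrt(2/(n*(n + 1))) * (np.sqrt(k) * X).sum(axis=1)

print(T.mean(), T.std())  # both close to the N(0,1) values 0 and 1
```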
Hajek-Sidak's CLT can be shown using Lyapounov's condition (which implies Lindeberg's condition) under the additional assumption $E|X_1|^3<\infty$. But I am not aware of the general proof. |
Fundamental matrix of shifted linear periodic system | For a linear system
$$\dot x(t)=A(t)x(t)$$
the solution is given by $x(t)=\Phi(t)x(0)$, where
$$\Phi(t)=\exp\left({\int_0^tA(\tau)d\tau}\right)$$
(this closed form requires that $A(t)$ commute with its integral, as it does, e.g., for scalar equations or pairwise commuting $A(t)$; the computation below works in that setting).
So for the shifted system,
$$\frac{dx}{dt} = A(t+s)x(t)$$
the associated fundamental matrix is
$$\begin{align}\Phi(t)&=\exp({\int_0^tA(\tau+s)d\tau})=\exp({\int_s^{t+s}A(v)dv})\\
&=\exp({-\int_0^s A(v)dv+\int_0^{t+s}A(v)dv})=
\Phi(s)^{-1}\Phi(t+s)\end{align}$$
So the solution of the system is given by
$$x(t)=\Phi(s)^{-1}\Phi(t+s)x(0)$$
and then you can relate the Lyapunov exponents of the two systems. |
Number of sub-intervals for an array? | Let $i,j$ be indices marking the start and end of the sub-interval respectively. We can select $i,j\in\{1,2,...,n\}$ such that $i\ne j$ in $\binom n2$ ways, and assign $i$ as the smaller index and $j$ as the larger index of the selected indices. We can select $i,j$ such that $i=j$ in $n$ ways. The total is $\displaystyle\binom n2+n=\frac{n(n-1)}2+n=\frac{n(n+1)}2$
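A brute-force check of the count (a sketch):
```python
def count_subintervals(n):
    # sub-intervals [i, j] with 1 <= i <= j <= n
    return sum(1 for i in range(1, n + 1) for j in range(i, n + 1))

assert all(count_subintervals(n) == n*(n + 1)//2 for n in range(1, 10))
```
The formula checks out for small $n$. |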
find invariant subspace of polynomials | Let $V = \mathbb{R}[x]$ and let $V_n$ be its subspace of polynomials of degree at most $n$. Each $V_n$ is $L(\mathbb{R})$-stable. Note that $V_n = 0$ if $n$ is negative.
Let $\Delta : V \to V$ be the linear operator $L(0) - L(1)$. Then $(\Delta f)(x) = f(x) - f(x-1)$ so $\Delta(V_n) \subseteq V_{n-1}$ and $\Delta(x^n) \equiv n x^{n-1} \mod V_{n-2}$ for all $n \geq 1$.
Let $W$ be some $L(\mathbb{R})$-stable subspace. Then $\Delta(W) \subseteq W$. Let $n$ be the largest possible integer such that $W$ contains $V_n$ (if it exists). We will show that $W = V_n$ in this case.
Suppose for a contradiction that $f(x) \in W \backslash V_n$. Then $f(x) \equiv ax^m \mod V_{m-1}$ for some non-zero $a$ and some $m \geq n+1$, so
$(\Delta^{m-n-1} f)(x) = a m (m-1) \cdots (n+2) x^{n+1} + v \in W$
for some $v \in V_n$. Since $am(m-1) \cdots (n+2) \neq 0$ and $v \in V_n \subseteq W$ we get $x^{n+1}\in W$ and hence $V_{n+1} \subseteq W$. This contradicts the maximality of $n$.
Therefore either $W = V_n$ for some $n$, or no such $n$ exists in which case $W$ contains all $V_m$ and hence $W = V$.
Thus $0 = V_{-1} \subset V_0 \subset V_1 \subset \cdots \subset \bigcup V_n = V$ are the only possible $L(\mathbb{R})$-stable subspaces of $V$. |
Double Integral $\int_0^\infty \int_0^\infty \frac{\log x \log y}{\sqrt {xy}}\cos(x+y)\,dx\,dy=(\gamma+2\log 2)\pi^2$ | Consider
$$\int_0^{\infty} dx \, x^{\alpha} e^{i x}$$
We know from Cauchy's theorem that this integral is equal to (when it converges)
$$i \, e^{i \pi \alpha/2} \int_0^{\infty} du \, u^{\alpha} \, e^{-u} = i \, e^{i \pi \alpha/2} \, \Gamma(\alpha+1)$$
Differentiating both sides with respect to $\alpha$, we get
$$\int_0^{\infty} dx \, x^{\alpha} e^{i x}\, \log{x} = \Gamma(\alpha+1) e^{i \pi \alpha/2} \left [i \, \psi(\alpha+1)-\frac{\pi}{2} \right ] $$
Square both sides:
$$\begin{align}\int_0^{\infty} dx \, x^{\alpha} e^{i x}\, \log{x} \int_0^{\infty} dy \, y^{\alpha} e^{i y}\, \log{y} &= \Gamma(\alpha+1)^2 e^{i \pi \alpha} \left [\frac{\pi^2 }{4}-\psi(\alpha+1)^2-i \pi \psi(\alpha+1) \right ] \end{align}$$
Now plug in $\alpha=-1/2$ and consolidate; use the fact that $\Gamma(1/2)=\sqrt{\pi}$ and $\psi(1/2)=-\gamma-2 \log{2}$:
$$\int_0^{\infty} dx \, \int_0^{\infty} dy \frac{\log{x} \log{y}}{\sqrt{x y}} e^{i (x+y)} = -i \pi \left [\frac{\pi^2}{4} - (\gamma+2 \log{2})^2 + i \pi (\gamma+2 \log{2}) \right ]$$
Take the real part of both sides, and get
$$\int_0^{\infty} dx \, \int_0^{\infty} dy \frac{\log{x} \log{y}}{\sqrt{x y}} \cos{(x+y)} = \pi^2 (\gamma+2 \log{2}) $$
as was to be shown. |
Chapter 2 Sec. 2.6 Hoffman Kunze Linear Algebra exercise 1 | Note that the columns of the $s\times n$ matrix $A$ reside in $\mathbb{F}^s$. By theorem $4$, it follows that the set of columns of $A$ is linearly dependent (why?). Therefore there is some non-trivial linear combination of the columns, which I denote with $\mathbf{a}_i$, which sum to $\mathbf{0}$, i.e.
$$c_1\mathbf{a}_1 + \cdots + c_n\mathbf{a}_n = \mathbf{0}$$
for $c_i\in\mathbb{F}$. It follows that we must have
$$A\mathbf{c} = \mathbf{0}$$
where $\mathbf{c}=\begin{pmatrix}c_1 & c_2 & \cdots & c_n\end{pmatrix}^\mathrm{T}$. |
Why can’t “the empty set” be selected with a choice function and placed in a set of chosen sets? Is there an alternative Axiom of Choice? | Yes, you can always say that a choice function will return $\varnothing$ if you give it the empty set. There are some caveats, though.
If we think about $\prod_{i\in I}A_i$ as the collection of choice functions from $\{A_i\mid i\in I\}$, then we expect it to be empty when one of the $A_i$ is empty. Because that's the rule for products: if one of the sets is empty, the product is empty.
But now, if we modified the definition of a choice function, this would no longer be the case, since every collection of sets has a choice function (under AC), but $\prod_{i\in I}A_i$ can be $\varnothing$.
So like everything else, it's a choice of "where do you want to place your inconvenience" and the answer is very much context dependent.
The analogy with the bag, by the way, is misleading, since you can choose the empty set if it is an element of some set; but you cannot choose something from inside an empty bag, because it's empty. Try it: take an empty bag and offer your friend to choose whatever candy bar they want, see if they can do it. |
If $ f(n) = \sum_{i = 1}^{n} (n / i) \log(n / i) $ and $ g(n) = n ~ {\log^{2}}(n) $, then is $ O(f) = O(g) $? | Hint. One may recall that, by the Euler-Maclaurin formula, as $n \to \infty$, we have
$$
\begin{align}
\sum_{i=1}^n \frac 1i &=\ln n+\gamma+\mathcal{O}\left( \frac 1n\right)\\
\sum_{i=1}^n \frac {\ln i}i &=\frac 12\ln^2 n+\gamma_1+\mathcal{O}\left( \frac {\ln n}n\right)
\end{align}
$$ where $\gamma$ is the Euler-Mascheroni constant and $\gamma_1$ is the first Stieltjes constant.
Then we may write, as $n \to \infty$:
$$
\begin{align}
f(n) &=\sum_{i=1}^n \frac {n}i \ln\frac {n}i\\\\
&=n \ln n \sum_{i=1}^n \frac 1i -n \sum_{i=1}^n \frac {\ln i}i\\\\
&=n \ln n \left(\ln n+\gamma+\mathcal{O}\left( \frac 1n\right) \right) -n \left(\frac 12\ln^2 n+\gamma_1+\mathcal{O}\left( \frac {\ln n}n\right)\right)\\\\
&=\frac n2\ln^2 n+\gamma \:n \ln n-\gamma_1\:n+\mathcal{O}\left( \ln n \right)
\end{align}
$$ In particular, you have $f(n)=\mathcal{O}(n\ln^2 n)$ and since $g(n)=n\ln^2 n$, you obtain
$$
\mathcal{O}\left(f(n)\right) = \mathcal{O}\left(g(n)\right),
$$ as announced. |
Who was the first to use dual space? | It was Hahn (of Hahn-Banach Theorem) who, in 1927, first introduced the dual of a normed linear space ("polare Raum" was his term.) Hahn had extended previous arguments for separable spaces in order to obtain the existence of general continuous linear functionals on a normed linear space; Hahn did this by introducing methods of transfinite induction. Hahn introduced the idea of regular normed spaces (now called reflexive.) It makes sense that the existence of general classes of continuous linear functionals would lead to a proper notion of a dual space.
Two years after Hahn, Banach proved the same theorem with the same proof (he later acknowledged the primacy of Hahn's work). Banach introduced a convex functional and extensions of linear functionals bounded by convex functions, which paved the way for locally convex spaces.
It appears that the understanding gained in Functional Analysis came before a proper understanding of the dual for a finite dimensional vector space!
In History of Functional Analysis, J. Dieudonne writes that "before 1930 nobody had a correct conception of duality between finite dimensional vector spaces; even in van der Waerden's book (1931), such a vector space and its dual are still identified. All this was to weigh heavily on the evolution of a linear Functional Analysis; in particular it followed (over a shorter span of years) the same unfortunate succession of stages through which linear Algebra had to go; and it is only after it was realized that the current conception of vectors as "n-tuples" could not possibly be extended to infinite dimensional function spaces, that this conception was finally abandoned and that geometrical notions won the day." |
Proving that $M :=\{A\subseteq\mathbb N: A\text{ or }A^{\complement}\text{ is finite}\}$ is infinite and countable | Let $M_{Fin}:=\{A\in\wp(\mathbb N)\mid A\text{ is finite}\}$ and $M_{Cof}:=\{A\in\wp(\mathbb N)\mid A^{\complement}\text{ is finite}\}$. Then $M_{Fin}$ and $M_{Cof}$ have the same cardinality and $M=M_{Fin}\cup M_{Cof}$.
So for proving that $M$ is countable it is enough to prove that $M_{Fin}$ is countable.
We can write $M_{Fin}=\bigcup_{n=0}^{\infty} M_n$ where $M_n$ denotes the collection of all subsets $\mathbb N$ that have cardinality $n$.
This shows that it is enough to prove that for each $n$ the set $M_n$ is countable, since a countable union of countable sets is countable.
Can you do that yourself? |
Find the solution of $u_t + bu_x = 0$ | Your approach is correct provided that the solution is sufficiently smooth (which is the case here). However, you made some mistakes in the derivation of the solution.
Indeed, differentiating the linear advection equation $u_t + b u_x = 0$ w.r.t. $t$, we have
$u_{tt} = -b u_{xt}$.
Now, we use the equality of mixed derivatives $u_{xt} = u_{tx}$, and the initial PDE itself $u_{tx} = -b u_{xx}$ after differentiation w.r.t. $x$. Thus, the wave equation
$$
u_{tt} = b^2 u_{xx}
$$
is obtained.
D'Alembert's solution for the initial condition $u(x,0) = x^2$ and $$u_t(x,0) = -b u_x(x, 0 ) = -2b x$$ reads
\begin{aligned}
u(x,t) &= \tfrac{1}{2} \left[(x+b t)^2 + (x-b t)^2\right] - \int_{x-bt}^{x+bt} \xi \,\text d \xi \\
&= (x-bt)^2 .
\end{aligned}
This is exactly the solution obtained by the more straightforward method of characteristics (see answer by @JJacquelin). |
Induction: How to prove that $ab^n+cn+d$ is divisible by $m$. | $\bmod m\!:\ \color{#0a0}{f_{\large n+1}\!-b f_{\large n}} =\, \overbrace{(1\!-\!b)c}^{\large \equiv\ 0}n \,\overbrace{-\color{#c00}d\,b+\color{#c00}d+c}^{\large \equiv\ 0\ \ {\rm by}\ \ \color{#c00}{d}\ \equiv\ -a}\!\equiv\color{#0a0} 0,\ $ so $\ f_n\equiv 0\,\Rightarrow\,\color{#0a0}{f_{n+1}\equiv bf_n\equiv} 0$ |
$\dim_{\mathbb k} A= \dim_{\mathbb k_1}(_{\mathbb k_1} A)\dim_{\mathbb k} \mathbb k_1 $. | I think the best way to do this is to take a $k_1$-basis of $A$, $e_1,...,e_n$; a $k$-basis of $k_1$, $f_1,...,f_d$, and show that the family $(f_ie_j)$ is a $k$-basis of $A$. From this the equality follows. |
Analytical expression of a factorial related function | Observe that
$$
g(x) = \frac{\binom{N - m}{x}}{\binom{N}{x}} = \frac{N-m}{N} \cdot \frac{N-m - 1}{N-1} \cdot \cdots \cdot \frac{N - m - x + 1}{N - x + 1}
$$
is a decreasing function on $x$; that is, for $y > x$, we have $g(y) < g(x)$. Also note $0 \leq g(x) \leq 1$. We have then
$$
f(x) = (1 - g(x))^{N / x} < \left(1 - g(y)\right)^{N/x} < (1 - g(y))^{N / y} = f(y)
$$
for $y > x$. |
Proof of the existence of liftings in Introduction to Algebraic Topology by Rotman | The last term $f(x_n)$ remains in the product since its inverse is not in the product, and it says that $x_n = x$. |
Integer solutions to $\frac{d^3}{r}+r=a^2$ | This has an interesting connection to Pell equations. Let,
$$(a^2-r)r = d^3\tag1$$
then this has an infinite number of integer solutions given by,
$$d,\,r,\,a \;=\; \tfrac{n}{2}y^2,\;\tfrac{1}{4}y^2,\;\tfrac{1}{2}xy,\tag2$$
where,
$$x^2-2n^3y^2 = 1\tag3$$
Since for $n=1$ (and others) the $y$ are all even $y = 0, 2, 12, 70, 408, 2378, 13860,\dots$ as A001542, then $(2)$ are all integers. Its smallest $d,r,a$ are,
$$2,\,1,\,3\\72,\,36,\,102$$
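These can be generated straight from the Pell equation; a sketch for $n=1$, starting from the fundamental solution $(x,y)=(3,2)$ of $x^2-2y^2=1$:
```python
def pell_xy(count):
    x, y = 3, 2
    for _ in range(count):
        yield x, y
        x, y = 3*x + 4*y, 2*x + 3*y  # next solution of x^2 - 2y^2 = 1

for x, y in pell_xy(4):
    d, r, a = y*y//2, y*y//4, x*y//2  # formula (2) with n = 1
    assert (a*a - r)*r == d**3
    print(d, r, a)  # 2 1 3, then 72 36 102, ...
```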
Of course, $(2)$ does not give all solutions, but easily shows there is an infinity of them. |
What is the range of the function $f(x)=\log x+\sin x $? | $\log x$ is a strictly increasing function with range $(-\infty,+\infty)$ and domain $(0,+\infty)$ if we consider the real-valued logarithm. So, since we are adding a function unbounded above and below to a function oscillating in $[-1,1]$, the resultant $f(x)$ is also unbounded above and below; being continuous on $(0,+\infty)$, it attains every intermediate value, so the range is all of $\Bbb{R}$. |
Floquet Theory theorem clarification | Yes, you're missing something obvious.
With $\mu_0 = 1$, $x(t+\omega) = x(t)$.
With $\mu_0 = -1$,
$x(t+2\omega) = - x(t + \omega) = x(t)$. |
Multivariate Non-Differentiability | Discontinuous partial derivatives do not imply that the function is not differentiable. See this link for a counterexample: http://mathinsight.org/differentiable_function_discontinuous_partial_derivatives |
Show that this disk is convex: | Try applying the inequality
$$|a + b| \leq |a| + |b|$$ |
An equivalent condition for having finite length | Since $M$ is finite over a Noetherian $R$, it admits a filtration $M=M_0\supsetneq M_1\supsetneq\cdots\supsetneq M_n=0$ of $R$-submodules with successive quotients isomorphic to $R/\mathfrak{p}$ for some prime ideal $\mathfrak{p}$ of $R$. If $\mathfrak{p}$ appears in this way for the filtration, say $M_i/M_{i+1}\cong R/\mathfrak{p}$, then localizing at $\mathfrak{p}$ shows that $(M_i)_\mathfrak{p}/(M_{i+1})_\mathfrak{p}\cong R_\mathfrak{p}/\mathfrak{p}R_\mathfrak{p}\neq 0$, so $(M_i)_\mathfrak{p}$ is non-zero, and by exactness of localization $M_\mathfrak{p}$ is not zero.
Now, if your condition holds, it shows by the reasoning above that the quotients in the filtration are of the form $R/\mathfrak{m}$ with $\mathfrak{m}$ maximal. Thus the given filtration is a composition series, so $M$ has finite length.
Conversely, if $M$ has finite length, take a composition series $M=M_0\supsetneq M_1\supsetneq\cdots\supsetneq M_n=0$. Take a non-maximal prime $\mathfrak{p}$. Since $M_{n-1}\cong R/\mathfrak{m}$ for some maximal $\mathfrak{m}$, when we localize at $\mathfrak{p}$ we get $(M_{n-1})_\mathfrak{p}\cong R_\mathfrak{p}/\mathfrak{m}R_\mathfrak{p}=0$ because $\mathfrak{m}$ is not contained in $\mathfrak{p}$. Now do this with $M_{n-2}/M_{n-1}$ to get that it is zero after localizing at $\mathfrak{p}$, and hence so is $M_{n-2}$. Now keep going in this fashion and you'll get $M_\mathfrak{p}=0$.
For the existence of the filtration in the first paragraph, see Lemma 7.57.1 in the Stacks Project. |
I am looking for an example of a sequence of points with the following conditions. | $\{1,2,...\}$ is an example. Since no subsequence is convergent the hypothesis is vacuously satisfied. But the sequence is not convergent.
Another example: $a_n=n$ for $n$ even and $a_n=\frac 1 n$ for $n$ odd. All convergent subsequences converge to $0$ in this case.
You can take $K=\mathbb R$ for both.
Edit based on OP's comment below: if every subsequence of $(a_n)$ converges to the same limit then the sequence converges to that limit, simply because $(a_n)$ is itself a subsequence! |
Writing product $\prod_{i=1}^m (p_i^{n_i}-1)$ as a sum | Since the formula $$\prod_{i=1}^m (p_i^{n_i}-1) = n \prod_{i=1}^m \left(1-\frac1{p_i^{n_i}}\right)$$ resembles the totient function, I thought that this kind of generalization of totient function might have been studied somewhere. The book Sandor J., Crstici B. Handbook of number theory, vol.2, has a chapter on generalizations of totient function, so I tried to look there. I will copy the relevant part from this book below.
I read there that the product from your question is sometimes called unitary totient function and denoted $\varphi^*(n)$.
There are also other arithmetical functions related to unitary divisors.
Also a paper by Eckford Cohen was mentioned as a reference. (Exact reference is given below.) In this paper we can find the following:
Corollary 2.4.1. $$\varphi^*(n)=\sum_{\substack{d\delta=n\\(d,\delta)=1}} \mu^*(d)\delta.$$
Where $\mu^*(n)=(-1)^{\omega(n)}$ is the unitary Möbius function and
$\omega(n)$ denotes number of distinct prime factors of $n$.
This sum over unitary divisors can be rewritten as
$$\sum_{\substack{d\mid n\\(d,\frac nd)=1}} \mu^*(d) \frac nd$$
which seems to be exactly the sum from your post. (Notice that unitary divisors are precisely the divisors of the form $p_i^{n_i}$.)
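A quick computational check of this identity (a sketch; the helper names `factorize`, `phi_star_prod`, and `phi_star_sum` are illustrative):
```python
from math import gcd

def factorize(n):
    """Prime factorization of n as a dict {p: a}."""
    f, p = {}, 2
    while n > 1:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    return f

def phi_star_prod(n):   # product of (p^a - 1) over p^a exactly dividing n
    out = 1
    for p, a in factorize(n).items():
        out *= p**a - 1
    return out

def phi_star_sum(n):    # Cohen's sum over unitary divisors
    return sum((-1)**len(factorize(d)) * (n//d)
               for d in range(1, n + 1)
               if n % d == 0 and gcd(d, n//d) == 1)

assert all(phi_star_prod(n) == phi_star_sum(n) for n in range(1, 300))
```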
This is the part of Section 3.7.6 from Sandor-Crstici relevant to the question. (You can find much more facts about unitary versions of various arithmetical functions as well as further references in this book.)
The unitary analogue of $\varphi(n)$ was introduced by E. Cohen [83] as follows. Let $(a,b)^*$
denote the greatest divisor of $a$ which is a unitary divisor of $b$
(a divisor $r$ of $b$ is called unitary, if $\left(r,\frac br\right)=1$.)
If $(a,b)^*=1$, then $a$ is said to be semi-prime to $b$
Let $\varphi^*(n)$ be the number of positive integers $r\le n$, semi-prime to $n$. In fact,
$$\varphi^*(n)=\sum_{d\mid n} d\mu^*(n/d) = \prod_{p^\alpha\mathrel{\|} n}(p^\alpha-1),$$
where $\mu^*(n)=(-1)^{\omega(n)}$ is the unitary Möbius function. For unitary divisors see also
1.9 of Chapter 1, 2.2.2 of Chapter 2, and 3.6.1 of Chapter 3. For the corresponding
notions of bi-unitary divisors and convolution, see 1.9 of Chapter 1, and 2.2.3 of
Chapter 2.
[83] Eckford Cohen: Arithmetical functions associated with the unitary divisors of an integer, Math. Z. 74(1960), 66-80;
doi: 10.1007/BF01180473, eudml, MR 0112861, Zbl 0094.02601. |
compound proposition logically equivalent | It's quite simple; let start with :
$\lnot p \equiv p \downarrow p$.
Then :
$p \rightarrow q \equiv ((p \downarrow p) \downarrow q) \downarrow ((p \downarrow p) \downarrow q)$.
In order to check the definition, we have to use the truth-table for $\downarrow$ : it is true only when both $p$ and $q$ are false.
This fact justifies the definition of $\lnot p$ as $p \downarrow p$.
For the conditional, we will work by steps:
$((p \downarrow p) \downarrow q)$ is $\lnot p \downarrow q$ [see the definition of $\lnot$ above].
Thus, the complete formula is simply : $\lnot ( \lnot p \downarrow q)$.
Now we may check that the only case when it is false (i.e.$0$) is when $p=1$ and $q=0$.
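The truth-table check is easy to mechanize; a minimal sketch:
```python
from itertools import product

def nor(a, b):
    return not (a or b)

for p, q in product([False, True], repeat=2):
    t = nor(nor(p, p), q)             # (p NOR p) NOR q, i.e. (not p) NOR q
    formula = nor(t, t)               # NOR of t with itself = negation of t
    assert formula == ((not p) or q)  # agrees with p -> q on every row
```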
Note. You can see this paper on Adequate set of connectives for a general overview of the topic. |
Almost sure equality from equality in distribution and a.s. inequality | If they have finite mean then this is very easy: $E(Y-X)=EY-EX=0$ and $Y\geq X$, so $Y=X$ a.s.. For the general case note that $\tan^{-1} X \leq \tan^{-1}Y$ a.s. and $\tan^{-1} X$ has the same distribution as $\tan^{-1}Y$, so (by the first case) $\tan^{-1} X = \tan^{-1}Y$ a.s., which implies $X=Y$ a.s.. |
Poisson process on nonintersecting sets | For simplicity of notation let $X=N(A)$, $Y=N(B)$ and $Z=X+Y=N(A\cup B)=N(A)+N(B)$, let $a = EX = \lambda|A|$ and $b=EY$.
The probability that $(X,Y)=(k,l)$ can be found by conditioning on $Z$ as follows:
$$\begin{align*}
P((X,Y)=(k,l)) &= \sum_{n\ge0}P(Z=n)\times P((X,Y)=(k,l)|Z=n)\\
&=P(Z=k+l)\times P((X,Y)=(k,l)|Z=k+l)\tag{*}\\
&=\frac{(a+b)^{k+l}}{(k+l)!}e^{-(a+b)}\times P((X,Y)=(k,l)|Z=k+l)\\
&= \frac{(a+b)^{k+l}}{(k+l)!}e^{-(a+b)}\times \binom{k+l}k
\left(\frac a{a+b}\right)^k \left(\frac b{a+b}\right)^l\\
&= \frac{(a+b)^{k+l}}{(k+l)!}e^{-(a+b)}\times \frac{(k+l)!}{k!l!}
\left(\frac a{a+b}\right)^k \left(\frac b{a+b}\right)^l\\
&= \frac{a^k}{k!}e^{-a}\,\frac {b^l}{l!} e^{-b}\\
&= P(X=k)P(Y=l).
\end{align*}$$
Here (*) holds because all terms $P((X,Y)=(k,l)|Z=n)$ vanish, except when $n=k+l$.
In words: the variables $X,Y,Z$ are definitely not independent.
But when $Z$ is Poisson and $X$ conditional on $Z$ is binomial and conditional on $Z$ we have $Y=Z-X$, magically the right factorials cancel to make $X$ and $Y$ independent. |
The last (and the weirdest) problem from Chen`s "Brief Introduction to Olympiad Inequalities" | First of all, the solution: by weighted AM-GM we have
$$1 = \frac{a}{a+b+c} a^{-6/7} + \frac{b}{a+b+c} b^{-6/7} + \frac{c}{a+b+c} c^{-6/7} \ge (a^ab^bc^c)^{\frac{-6/7}{a+b+c}}. $$
Of course, from this solution it's clear that both the number of variables and the choice of the number $7$ is irrelevant. (Pedagogically and aesthetically, I'm of the opinion that it's usually better to include the specific, concrete instance if it's clear how to generalize it.)
Seeing as I'm the author of this problem (which was the second problem of ELMO 2013), I can comment on its creation too. I had long known that one could get $a^2+b^2+c^2 \ge a^ab^bc^c$, when $a+b+c=1$, using weighted AM-GM. I really wanted to see if I could get something more decent, since the problem is rather trivial in that formulation. After two hours of playing around with this weighted idea, I decided to put a condition of $a^2 + b^2 + c^2 = a + b + c$ so that the left-hand side of $\frac{a^2+b^2+c^2}{a+b+c} \ge a^{\frac{a}{a+b+c}} b^{\frac{b}{a+b+c}} c^{\frac{c}{a+b+c}}$ would be nice. Suddenly I realized I could just factor out the exponent, and the whole thing simplified super nicely as $a^ab^bc^c \le 1$. And so after some cosmetic changes it became the problem you see now. |
How to choose right and good notation and symbols? | Two pieces of advice based on your examples:
It often helps to use the same sort of letters for the same sort of things to speed up comprehension, e.g. $m$ and $n$ are always natural numbers or at least integers; $i$, $j$ and $k$ are indices. For groups, you usually see $g$ and then $h$ as group elements, which is why I slightly prefer $gHg^{-1}$ over $xHx^{-1}$.
Always be aware of when you are using the same symbols for things. $g \in C_g$ means something entirely different to $x \in C_g$. The first is a summary of the proof that $g(g)g^{-1}=g$ and therefore $g$ must be in its own centraliser. The second just picks some element $x$ in the centraliser of g. If you write as though those notations mean the same thing you are going to confuse your reader. |
Convert grammar into Chomsky Normal Form | Following exactly the steps on Wikipedia, we have the following:
$\textbf{START}$
Since the start symbol $S$ appears on the right hand side of a rule, we must introduce a new start symbol $S_0$, so we have the rules
$S_0\to S$
$S\to aSc\mid X$
$X\to aXb\mid\lambda$
$\textbf{TERM}$
Next, we replace each of the terminal symbols $a$, $b$, $c$, and $d$ with nonterminal symbols $A$, $B$, $C$, and $D$ and add the rules $A\to a$, $B\to b$, $C\to c$, and $D\to d$. Then we now have
$S_0\to S$
$S\to ASC\mid X$
$X\to AXB\mid\lambda$
$A\to a$
$B\to b$
$C\to c$
$D\to d$
$\textbf{BIN}$
Next, we want to split up the rules $S\to ASC$ and $X\to AXB$ into rules with only two nonterminals on the right hand side. To do this, we introduce new nonterminal symbols, $S_1$ and $X_1$ and replace $S\to ASC$ and $X\to AXB$ with the new rules $S\to AS_1$, $S_1\to SC$, $X\to AX_1$, and $X_1\to XB$. Then we now have
$S_0\to S$
$S\to AS_1\mid X$
$S_1\to SC$
$X\to AX_1\mid\lambda$
$X_1\to XB$
$A\to a$
$B\to b$
$C\to c$
$D\to d$
$\textbf{DEL}$
Next, we want to remove any $\lambda$-rules, namely $X\to\lambda$. In order to do this while making sure the grammar generates the same language, we need to determine the set of nullable nonterminals (see Wikipedia). It follows immediately from the definition that the nullable nonterminals are $X$, $S$, and $S_0$ (although $S_0$ does not appear on the right hand side of any rule, so it does not matter that $S_0$ is nullable). So, we introduce a new rule for every rule which has a nullable nonterminal on the right hand side by deleting the nullable nonterminal. This yields
$S_0\to S\mid\lambda$
$S\to AS_1\mid X\mid\lambda$
$S_1\to SC\mid C$
$X\to AX_1\mid\lambda$
$X_1\to XB\mid B$
$A\to a$
$B\to b$
$C\to c$
$D\to d$
After that, we can simply remove every rule of the form $Y\to\lambda$ for any nonterminal $Y$ with $Y\neq S_0$. So, we have
$S_0\to S\mid\lambda$
$S\to AS_1\mid X$
$S_1\to SC\mid C$
$X\to AX_1$
$X_1\to XB\mid B$
$A\to a$
$B\to b$
$C\to c$
$D\to d$
$\textbf{UNIT}$
Finally, we want to remove all the unit rules (i.e. rules of the form $Y\to Y'$ where $Y$ and $Y'$ are nonterminals). To do this, we first need to repeatedly add a new rule for every unit rule $Y\to Y'$ and every rule starting with $Y'$. In our case, the unit rules are $S_0\to S$, $S\to X$, $S_1\to C$, and $X_1\to B$. Since we have $S_0\to S$ and $S\to AS_1$ as rules, we need to add the rule $S_0\to AS_1$. By the same reasoning, we need to add the rules $S_0\to X$, $S\to AX_1$, $S_1\to c$, and $X_1\to b$. But now there is a new unit rule! Since the new unit rule $S_0\to X$ has been added, we repeat the process to get the new rule $S_0\to AX_1$. This time, there were no new unit rules, so we finish by deleting every unit rule, and get
$S_0\to AS_1\mid AX_1\mid\lambda$
$S\to AS_1\mid AX_1$
$S_1\to SC\mid c$
$X\to AX_1$
$X_1\to XB\mid b$
$A\to a$
$B\to b$
$C\to c$
$D\to d$
And hence, this is a new context-free grammar in Chomsky normal form which generates the same language as the original. I tried to make this as detailed as possible, but let me know if you need further explanation on any of the steps.
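One way to gain confidence in the result is to enumerate, by a simple fixed-point iteration, all short strings derivable in the final grammar and compare them with the original language $\{a^{m+n}b^nc^m\mid m,n\ge0\}$; a sketch (the empty string is covered separately by $S_0\to\lambda$, so it is excluded here):
```python
rules = {
    "S0": [("A", "S1"), ("A", "X1")],
    "S":  [("A", "S1"), ("A", "X1")],
    "S1": [("S", "C"), ("c",)],
    "X":  [("A", "X1")],
    "X1": [("X", "B"), ("b",)],
    "A": [("a",)], "B": [("b",)], "C": [("c",)], "D": [("d",)],
}

def derivable(max_len):
    """All strings of length <= max_len derivable from each nonterminal."""
    lang = {nt: set() for nt in rules}
    changed = True
    while changed:
        changed = False
        for nt, prods in rules.items():
            for prod in prods:
                if len(prod) == 1:                       # terminal rule
                    cand = {prod[0]}
                else:                                    # binary rule
                    cand = {u + v for u in lang[prod[0]] for v in lang[prod[1]]
                            if len(u) + len(v) <= max_len}
                if not cand <= lang[nt]:
                    lang[nt] |= cand
                    changed = True
    return lang

expected = {"a"*(m + n) + "b"*n + "c"*m for m in range(4) for n in range(4)}
assert derivable(6)["S0"] == {w for w in expected if 0 < len(w) <= 6}
```
The two sets agree, as they should. |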
Why does this distribution of polynomial roots resemble a collection of affine IFS fractals? | I can't give a complete answer, but I can make a few observations in the general direction of an answer. Note that most of this is based on memory from spending several hours looking at the behavior of these plots and observing how they relate to roots for individual polynomials. This is all "empirical math" (assuming that's even a thing, heh) but the patterns are pretty clear and while I doubt I have the mathematical background to make most of it rigorous I doubt it would be too difficult.
It's not limited to coefficients from $\{-1,1\}$; almost any collection of polynomials with coefficients chosen from a very limited set will produce similar patterns.
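This is easy to reproduce; a sketch plotting the roots of every degree-$12$ polynomial with coefficients in $\{-1,1\}$ (the leading coefficient is fixed to $1$, which loses nothing since negating all coefficients preserves the roots):
```python
import numpy as np
import matplotlib.pyplot as plt
from itertools import product

deg = 12
roots = []
for coeffs in product([-1.0, 1.0], repeat=deg):
    roots.extend(np.roots((1.0,) + coeffs))  # highest-degree coefficient first
roots = np.array(roots)

plt.figure(figsize=(6, 6))
plt.scatter(roots.real, roots.imag, s=0.05, alpha=0.3, color="k")
plt.gca().set_aspect("equal")
plt.show()
```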
The character of the pattern is directly tied to the location on the complex plane, specifically the effect of multiplication. So I suspect that it catalogs the IFSs only insofar as the complex plane catalogs (some) affine transformations. Note that the Julia set formula, in contrast, lets the transformation vary during iteration with only an additive constant to alter the shape, which is why it has the more elaborate chaotic behavior vs. the IFS-like structure here.
Each fragment of a pattern roughly resembles a Cantor set, with fragments overlapping heavily except at the outer fringes. Each fragment is twisted according to the nature of complex multiplication, and the combination creates the similarity you noticed.
The latter two points are fairly intuitive given the nature of complex numbers and mostly apparent from inspection of the plots, but don't really do much to explain why, which I imagine is why Baez didn't remark on it.
Also unexplained are the gaps, such as those around the roots of unity. I don't recall the exact placements, but I recall it also being obvious why the locations of the gaps were relevant, but not why root density drops off so sharply around them. I could probably make some guesses, but only with copious handwaving involved.
On the other hand, except for the gaps, I believe the thick ring around the unit circle simply consists of more variations on the same patterns, denser and heavily overlapped, until detail is no longer visible.
Regarding variations on plots like this, the same approximate pattern appears regardless of polynomial degree, and for any set of coefficients of the form $\{-N, -(N - 1) ... -1, 1 ... N - 1, N\}$. The density of roots around the unit circle, the prominence of the gaps around certain points, and the character of the fringe inside the gaps all vary with degree; this is hard to see in Derbyshire's plot rather than one limited to polynomials of a single degree.
The shape of the overall plot changes if the coefficients are not chosen as above, being either of different magnitudes or chosen from more than one set. Note the plots on this page, particularly those near the bottom. The coloring indicates sensitivity of the points to changes in coefficients, which indicates which parts of the plot may distort asymmetrically.
My observations, in general, were:
Large differences in magnitude between coefficients makes the patterns tighter and denser, without changing the symmetry if each individual coefficient is chosen from a set like the original plot uses.
Large differences in magnitude between the elements in each set makes the plot as a whole more asymmetric, while making the density variations more prominent and removing the fringe patterns around the edge.
Doing both in moderation creates... odd effects: |
Eigenvalues of adjoint, is this proof good? | It is a bit more complicated. Since we assume finite dimension, $\lambda$ being an eigenvalue of $T$ means that $Z=\ker (T -\lambda I)$ is non-trivial. So if $v\in Z\setminus\{0\}$ we have for all $w\in V$:
$$ 0 = ((T-\lambda)v, w) = (v, (T^*-\bar{\lambda})w) $$
showing that the image of $T^*-\bar{\lambda}$ is orthogonal to $v$. In other
words, the map is not surjective. Since we are in finite dimension, it must then also be non-injective, so $W=\ker(T^* -\bar{\lambda} I)$ is non-trivial. Whence $\bar{\lambda}$ is an eigenvalue of $T^*$. |
What can we deduce about $R$ in this graph of $I$ against $V$, where $R = \frac{V}{I}$? | First: The correct answer must be C or D because of the given formula $R = V / I$. Now the question is just which of the two
$$
\frac{V_1}{I_1} \quad\text{ and }\quad \frac{V_2}{I_2}
$$
is greatest.
Since the curve passes through the origin for any point $(V,I)$ on the curve the quantity $\frac{I}{V}$ gives the slope of the line that passes through the origin and the point $(V,I)$. It is clear then from the graph that
$$
\frac{I_2}{V_2} > \frac{I_1}{V_1}.
$$
Since taking reciprocals of positive quantities reverses the inequality, that means
$$
R_2 = \frac{V_2}{I_2} < \frac{V_1}{I_1} = R_1.
$$ |
What is the difference between the terms smooth, analytical and continuous? | A smooth function is a continuous function with a continuous derivative. Some texts use the term smooth for a continuous function that is infinitely many times differentiable (all the $n$-th derivatives are then continuous, since differentiability implies continuity).
An analytic function is a function that is smooth (in the sense that it is continuous and infinitely many times differentiable) whose Taylor series around a point converges to the original function in a neighbourhood of that point. The existence of all derivatives doesn't imply that the Taylor series converges. A famous example is the function
$$f(x)=\exp\left(\frac{-1}{x^2}\right) \text{ if } x \neq 0$$
$$f(0)=0$$
This function is continuous and infinitely many times differentiable at $x=0$. The Taylor series around this point is the constant function $T(x)=0$, so the Taylor series doesn't converge to the function $f(x)$ in a neighbourhood of $0$. |
$B$ is a subset of some $s(n)$ | HINT: Suppose that $B$ is not a subset of any $s(n)$. Then for each $n\in\omega$ there is an $x_n\in B\setminus s(n)$. Moreover, you can choose the $x_n$ recursively so that $x_n\notin\{x_k:k<n\}$ for each $n\in\omega$.
Let $B'=\{x_n:n\in\omega\}$, and show that $B'$ is infinite, but $B'\cap s(n)$ is finite for each $n\in\omega$. You’ll need to use the hypothesis that $s(n)\subseteq s(n^+)$ for each $n\in\omega$. |
Torsion elements of a group aren't necessarily a subgroup | the group of isometries of the real line is generated by translations and reflections, with, for $a \in \mathbb{R}$:
$$
T_a: x \to x + a \\
R_a: x \to 2a - x
$$
So $R_a$ and $R_b$ are both involutions (elements of order 2), but their product is $R_b \circ R_a: x \to 2b - (2a -x) = x + 2(b - a) = T_{2(b-a)}(x)$. If $a \ne b$, the translation $T_{2(b-a)}$ has infinite order, so the set of torsion elements is not closed under multiplication. |
Replace x with -1 in $ y' = \frac 32({1+x^\frac 23})^\frac 12 \ ({\frac 23 x^\frac {-1}3}) $ | HINT: Where you see $x$ on the RHS, put a $-1$ and evaluate.
e.g. if $f'(x) = x+1$ at $x = -1$ we get $f'(-1) = -1+1 = 0$
For your specific case: $f'(-1) = \frac{3}{2}\left(1+(-1)^{\frac{2}{3}}\right)^{\frac{1}{2}}\times\frac{2}{3}(-1)^{-\frac{1}{3}} = \frac{3}{2}\cdot\sqrt{2}\cdot\frac{2}{3}\cdot(-1) = -\sqrt{2}$, using the real cube root $(-1)^{-1/3}=-1$. |
Geometric interpretation of limit on a complex plane | Decomposing $n$-Dimensional Functions
In general, if you have a function...
$$\displaystyle f:\prod_{i=0}^m X_i\to\prod_{j=0}^n Y_j$$
...you can decompose it into $n$ (or fewer) functions...
$$\displaystyle f_k:\prod_{i=0}^m X_i\to Y_k$$
For example, if you have $f:\Bbb{R}^2\to\Bbb{R}^3$, where
$$f(x,y)=(x^2-y^2,2xy,x)$$
you can break $f$ into the three functions:
$$f_1(x,y)=x^2-y^2$$
$$f_2(x,y)=2xy$$
$$f_3(x,y)=x$$
Since every complex number can be represented as a pair of real numbers by $a+bi\mapsto(a,b)$, you can think of a function $\Bbb{C}\to\Bbb{C}$ instead as a function $\Bbb{R}^2\to\Bbb{R}^2$. In this sense, a complex function is indeed "four-dimensional." However, since you can decompose any function $\Bbb{R}^2\to\Bbb{R}^2$ into two functions $\Bbb{R}^2\to\Bbb{R}$, you can likewise decompose any complex function into two "three-dimensional" functions $\Bbb{C}\to\Bbb{R}$.
Be aware, however, that "dimension" is defined differently in different contexts. What you are referring to is the "real dimension" of a vector space (in this case the complex numbers are a vector space with real dimension $2$). The "complex dimension" of the complex numbers is still $1$.
Graphing Complex Functions
Generally, there are three ways to represent a complex function $f:\Bbb{C}\to\Bbb{C}$.
The first is as a vector field (no different from a vector-valued function $\Bbb{R}^2\to\Bbb{R}^2$). For a function $f:\Bbb{C}\to\Bbb{C}$, the vector field is given by $(\Re(z),\Im(z))\mapsto(\Re[f(z)],\Im[f(z)])$, where $\Re(z)$ is the real part of $z$ and $\Im(z)$ is the imaginary part. The equivalent vector-valued function $\Bbb{R}^2\to\Bbb{R}^2$ is given by $(x,y)\mapsto(\Re[f(x+yi)],\Im[f(x+yi)]):$
The second method is to use two surfaces, one representing the real value of the function at a point $x+yi$, the other representing the imaginary value. Each surface is defined by decomposing the function $f:\Bbb{C}\to\Bbb{C}$ into two functions $f_1:\Bbb{R}^2\to\Bbb{R}$ and $f_2:\Bbb{R}^2\to\Bbb{R}$.
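A minimal sketch of that two-surface plot, using $f(z)=z^2$ as an arbitrary example:
```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: registers the 3d projection

x = np.linspace(-2, 2, 200)
X, Y = np.meshgrid(x, x)
F = (X + 1j*Y)**2  # f(z) = z^2

fig = plt.figure(figsize=(10, 4))
for i, (part, name) in enumerate([(F.real, "Re f"), (F.imag, "Im f")], start=1):
    ax = fig.add_subplot(1, 2, i, projection="3d")
    ax.plot_surface(X, Y, part, cmap="viridis")
    ax.set_title(name)
plt.show()
```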
The third way is to use a surface representing the real value of the function at a point and to color the surface to indicate its imaginary value at that point (or vice-versa), or to plot the absolute value and color it according to the phase/complex argument. This might be the most common visualization of complex functions.
There are other ways to represent complex functions graphically, but I find that these are more artistic than they are informative (especially for the color-blind and/or visually impaired). Many excellent examples of different types of visualization can be found on the Wikimedia pages here and here.
Limits in the Complex Plane
The limit of a complex function $f:\Bbb{C}\to\Bbb{C}$ at a point $z_0\in\Bbb{C}$ can likewise be represented in a number of ways. If you are taking the limit visually (assuming you are using a computer), then I would suggest looking at a surface plot of the real part of $f$ and finding the apparent value at $z_0$, then doing the same with the imaginary part and adding them together.
This is the same as showing that $\displaystyle\lim_{z\to z_0}f(z)=\lim_{z\to z_0}\Re[f(z)]+i\lim_{z\to z_0}\Im[f(z)]$
If you're looking at the limit of a sequence, it's common to plot the points of the sequence in the complex plane (like you would a parametric curve in $\Bbb{R}^2$) rather than against an axis (like you would a sequence in $\Bbb{R}$).
Since a complex sequence is a function $\Bbb{N}\to\Bbb{C}$ you may think of it as "three-dimensional" - or as two functions $\Bbb{N}\to\Bbb{R}$. |
Serre-Swan theorem for Hilbert bundles? | There are different aspects here. Proving that $E\cong F$ if and only if $\Gamma(X,E)\cong\Gamma(X,F)$ as $C(X)$ modules seems to be much easier than a full version of the Serre-Swan theorem. As far as I can see, the only fact you need in order to prove the first statement is that a map $\Gamma(X,E)\to\Gamma(X,F)$ which is linear over $C(X)$ is induced by a bundle map between the bundles. It seems to me that the standard proof that if $\Phi$ is such a map then $\Phi(s(x))$ depends only on $s(x)$ extends to the setting of infinite rank bundles without problems. You'd probably have to add an appropriate continuity condition on the sections to ensure continuity of the induced bundle map, but this looks rather harmless to me.
To obtain a full version of the Serre-Swan theorem, you would need an algebraic characterization of the modules $\Gamma(X,E)$ (which replaces the standard condition "finitely generated projective"). This seems much harder to me unless you work in some restricted class of Hilbert bundles (say involving a $C^*$-algebra). |
Prime number between $n$ and $n!+1$ | Hint: $n!+1$ has some prime factor $p$. If $p \leq n$ then $p\mid n!$. |
Derivative of the complementary cdf | Hint: Look up Survival Function. |
Is this martingale constant 0? | Example: Let $N(t)$ be a unit rate Poisson process. Then $M(t):=N(t)-t$, $t\ge 0$, is a martingale with paths of finite variation (on each finite time interval) and $M(0)=0$. |
distortion to virtually rotate an orthographic sphere image | There are two distinct parts in your question 1) and 2).
Let us consider a rotation with angle $R$ to the right for the owner of the eye, therefore to the left for the observer. let us call $A$ the Azimuth and $E$ the Elevation.
Fig. 1: A small feature and its image by a rotation with $R=5 \pi/12$ using formulas (1).
We first have to make a mathematical computation in Cartesian coordinates, $(x,y) \to (x',y')=(x',y)$ (the $y$-coordinate is unchanged).
$$\begin{cases}x&=&\cos(A)\cos(E)\\y&=&\sin(E)\end{cases} \ \ \to \ \ \begin{cases}x'&=&\cos(A')\cos(E)\\y'&=&y\end{cases}\tag{1}$$
where $A$ and $A'$ are the old and new azimuth, $E$ the common elevation, and $A'=A+R$ where $R$ is the rotation angle. (1) can be done by using the following computations:
$$A=\arccos\left(\dfrac{x}{\sqrt{1-y^2}}\right) \ \ \to \ \ x'=\cos(A+R)\sqrt{1-y^2}\tag{2}$$
Please note that we have taken a radius equal to $1$.
But this is enough for the contour of shapes only. If one wants to take into account the image itself, with its pixels and their colors, it is not sufficient. We have to use the fact that the image of a group of $N$ horizontal pixels undergoing a rotation will give a group of $N'$ horizontal pixels with, in general, $N \ne N'$. The idea is to work backwards. Let us take the example of pixels in the iris; all of them will be more or less compressed by a $75°$ rotation. The pixel which is for example right in the center of the iris in its rotated position will correspond to, say, $5$ pixels in the initial position of the center. Therefore you will have to find them, make an average color and attribute this color to the new pixel. Saying that for $75°$ we must take a ratio $1:5$, or that for $45°$ the ratio is $2:3$, depends in general on the initial position.
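A sketch of this mapping in code (unit radius as above; the function names are illustrative, the point $(x,y)$ is assumed to satisfy $x^2+y^2\le1$, and $\arccos$ restricts the azimuth to the visible hemisphere, which is all an orthographic image shows):
```python
import math

def image_x(x, y, R):
    """Forward formula (2): the new x of the point (x, y) after rotation by R."""
    rho = math.sqrt(1.0 - y*y)  # radius of the horizontal circle at height y
    return math.cos(math.acos(x/rho) + R) * rho

def source_x(x_new, y, R):
    """Backward mapping: which source x lands at (x_new, y)? Used to pull
    pixel colors from the original image, averaging when several source
    pixels map to one target pixel."""
    rho = math.sqrt(1.0 - y*y)
    return math.cos(math.acos(x_new/rho) - R) * rho
```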
The precise rule of compression (or dilatation according to the case at hand) can be established as $N'/N = $
$$\text{relative distortion coefficient: } \ \ \dfrac{\sin(A)}{\sin(A')}\tag{3}$$
Please note that (3) is independent from Elevation. |
If $||\nabla f(x,y)||^2=2$, determine constants $a$ and $b$ such that $a(\frac{\partial g}{\partial u})^2-b(\frac{\partial g}{\partial v})^2=u^2+v^2.$ | Note that $ \left\|\nabla f \right\| ^2 = \left(\frac{\partial f}{\partial x}\right)^2 + \left(\frac{\partial f}{\partial y}\right)^2 = 2$.
Picking up on where you left, lets write down a system of equations:
$$
\begin{cases}
\left(\dfrac{\partial f}{\partial x}\right)^2 + \left(\dfrac{\partial f}{\partial y}\right)^2 = 2
\\
\left(\dfrac{\partial f}{\partial x}\right)^2 \left(av^2-bu^2\right)+\left(\dfrac{\partial f}{\partial y}\right)^2\left(au^2-bv^2\right)+2\dfrac{\partial f}{\partial x}\dfrac{\partial f}{\partial y}uv(a+b) = u^2 + v^2
\end{cases}
$$
Collecting coefficients in front of the different powers of $u$ and $v$ in the second equation, and assuming non-trivial case $u\not\equiv0$ and $v\not\equiv0$, we write
$$
\begin{cases}
f_x^2 + f_y^2 = 2
\\
f_x^2\left(av^2-bu^2\right)+
f_y^2\left(au^2-bv^2\right)+
2\,f_x \,f_y\, uv(a+b) = u^2 + v^2
\end{cases}
\iff
\\
\iff
\begin{cases}
f_x^2 + f_y^2 = 2
\\
u^2\left(a \,f_y^2 - b\,f_x^2 - 1 \right) + v^2\left(a \,f_x^2 - b\,f_y^2 - 1 \right) + 2uv\, f_x \, f_y \left( a+b\right) = 0
\end{cases}
\implies
\\
\implies
\begin{cases}
f_x^2 + f_y^2 = 2 \\
a \,f_y^2 - b\,f_x^2 = 1\\
a \,f_x^2 - b\,f_y^2 = 1 \\
a+b = 0
\end{cases}
$$
In this way you have a system of four equations for the four unknowns, which are $a,b,f_x$, and $f_y$. Let us sum the second and the third equations:
$$
\begin{cases}
f_x^2 + f_y^2 = 2 \\
a \,f_y^2 - b\,f_x^2 = 1\\
a \,f_x^2 - b\,f_y^2 = 1 \\
a+b = 0
\end{cases}
\implies
\begin{cases}
f_x^2 + f_y^2 = 2 \\
a \left( f_x^2 + f_y^2\right) - b\left( f_x^2 + f_y^2\right) = 2\\
a+b = 0
\end{cases}
\implies
\begin{cases}
a - b = 1\\
a+b = 0
\end{cases}
\implies
\begin{cases}
a = \frac{1}{2}\\
b = -\frac{1}{2}
\end{cases}
$$ |
Limit when an expoent goes to infinity | Looks good.
For case 1, the denominator grows faster and thus the limit goes to 0.
The second case is wrong however. It can be simplified like this:
$$\lim_{n \to \infty} \frac{y^n}{y^{n+1}} = \lim_{n \to \infty} \frac{1}{y} = \frac{1}{y}$$
In other words, you can use the same method as for case 1. |
Find the values of $k$ for which the equation $(f\circ g)(x) = x$ has two equal roots | You said that
$$(f\circ g)(x)=\frac{36}{2-x}-2k$$
But as both you and okrzysik pointed out, the composition $(f\circ g)(x)=x$ and so
$$(f\circ g)(x)=\frac{36}{2-x}-2k=x$$
So multiplying both sides by $2-x$ yields
$$(2-x)x=36-2k(2-x)\Rightarrow x^2+(2k-2)x+(36-4k)=0$$
So the solution by QF is
$$x=\frac{2-2k\pm\sqrt{4k^2-8k+4-4(36-4k)}}{2}$$
$$=1-k\pm\sqrt{k^2+2k-35}$$
For the problem at hand, we need the two roots to be equal, so
$$1-k+\sqrt{k^2+2k-35}=1-k-\sqrt{k^2+2k-35}$$
$$\sqrt{k^2+2k-35}=-\sqrt{k^2+2k-35}$$
$$k^2+2k-35=0 \Rightarrow (k+7)(k-5)=0...$$
and solve for $k$ from here. But to check whether this is valid, realize that if $(f\circ g)(x)=x$, then $g=f^{-1}$ and so
$$f^{-1}(x)=\frac{x+2k}{4}$$
and since $g=f^{-1}$, we get that
$$\frac{x+2k}{4}=\frac{9}{2-x}$$
Solving by cross multiplication yields the same quadratic as above. |
Grouping and counting methods, same answer for two different questions? | You need to multiply the second answer by 3!
Do you see why?
There are 3! ways to arrange the three disciplines. You didn't take that into account and calculated as though one specific discipline must come first, then another, then the third. |
Fourier transform of $L^2$ function | It depends on what you are after: what do you want to do with the transform? What you described is a reasonable thing to do; it amounts to extending $f$ by zero outside of the interval $[-\pi,\pi]$ and applying the usual formula on the line. A possible drawback: if $f$ is not zero at $\pm \pi $, the steep drop-off will result in low accuracy near the endpoints (Gibbs phenomenon).
Another approach is to expand $f$ into Fourier series. That would be
$$\hat f(n) = \frac{1}{2\pi}\int_{-\pi}^\pi f(t) \exp(- i n t) dt,\qquad n\in\mathbb Z$$
Essentially the same formula, but you only compute countably many coefficients, and reconstruct $f$ with the formula
$$f(t) = \sum_{n\in\mathbb Z} \hat f(n) \exp(i n t)$$
which is a convergent series in the $L^2$ sense (and also pointwise at almost every point).
The series works particularly well when $f(\pi)=f(-\pi)$. Otherwise the Gibbs phenomenon will show up again.
The placement of $2\pi$ is a matter of taste, by the way. |
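If you want to experiment numerically, here is a minimal sketch (Python; the test function, grid size, and truncation order are illustrative choices, not part of the question):

```python
import numpy as np

# Approximate the Fourier coefficients on [-pi, pi] by a Riemann sum,
# then reconstruct f by a truncated version of the inversion series.
f = lambda t: t**2                      # illustrative; f(-pi) = f(pi), so no Gibbs
t = np.linspace(-np.pi, np.pi, 4000, endpoint=False)
dt = t[1] - t[0]
N = 50                                  # truncation order

fhat = {n: np.sum(f(t) * np.exp(-1j * n * t)) * dt / (2 * np.pi)
        for n in range(-N, N + 1)}
recon = sum(c * np.exp(1j * n * t) for n, c in fhat.items()).real

print(np.max(np.abs(recon - f(t))))     # error shrinks as N grows
```

Replacing $f$ by one with $f(-\pi)\ne f(\pi)$ makes the Gibbs overshoot near the endpoints visible.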
What can I say about the continuity of the function $f(x)=\sin(x)$ if $x$ is rational, and $f(x)=0$ otherwise? | $f(x) \; \text{is continuous at} \; x \Longleftrightarrow x = n\pi, \; n \in \Bbb Z; \tag 1$
for suppose $x \ne n \pi, n \in \Bbb Z$; then
$\sin x \ne 0, \tag 2$
so if $x \in \Bbb Q \setminus \{n\pi \mid n \in \Bbb Z \} = \Bbb Q \setminus \{ 0 \}$ (since $n\pi \in \Bbb Q$ if and only if $n = 0$), then
$f(x) = \sin x \ne 0, \tag 3$
and since there are irrationals $r$ arbitrarily close to $x$ where $f(r) = 0$, we cannot have
$\displaystyle \lim_{y \to x} f(y) = f(x) = \sin x \ne 0; \tag 4$
more formally, set
$\epsilon = \dfrac{\vert \sin x \vert}{2}; \tag 5$
for any $0 < \delta \in \Bbb R$, no matter how small,
$\exists y \; [[\vert y - x \vert < \delta] \wedge [\vert f(y) - f(x) \vert > \epsilon]], \tag 6$
which can't be true if
$\forall \epsilon \exists \delta [\forall y [[\vert y - x \vert < \delta] \Longrightarrow [\vert f(y) - f(x) \vert < \epsilon]]]; \tag 7$
the reader will recognize (7) as a statement of the continuity of $f(x)$ at $x$; thus we have shown that $f(x)$ cannot be continuous at $x \in \Bbb Q \setminus \{0\}$; likewise, if $x \in (\Bbb R \setminus \Bbb Q) \setminus \{n\pi \mid n \in \Bbb Z \}$, that is, a non-rational real which is not of the form $n \pi, n \in \Bbb Z$, then $\sin x \ne 0$ but $f(x) = 0$; there are thus rationals $y$ arbitrarily close to $x$ where $\vert f(y) \vert = \vert \sin y \vert > \vert \sin x \vert / 2$; since $f(x) = 0$, $f(x)$ is not continuous at such $x$.
So we have seen that $f(x)$ is not continuous at any real not of the form $n \pi$, which by contraposition is equivalent to the statement that
$f(x) \; \text{continuous at} \; x \Longrightarrow x = n \pi; \tag 8$
now if $x = n \pi$, then $\sin x = 0$ whether or not $n \pi$ is rational; thus as $y \to n \pi$ through rational values $\sin y \to \sin n \pi$; for irrational $y$ close to $x$, $\sin y = 0 = \sin x$ and in either case $f(y)$ is arbitrarily close to $f(x)$ for $y$ sufficiently close to $x$; thus $f(x)$ is continuous at $n\pi$; this completes the proof of (1). |
How to determine lexicographically the smallest Prüfer-Code of a spanning tree? | Basically, we want as many $0$s in the beginning of our Prüfer code as we can. We can get at most 2. Thus, the minimal Prüfer code will occur when there are exactly $2$ 0s in the beginning slots of the code. After this, there are only 3 possible ways to construct the spanning tree from this point, so it is easy to try them all and determine the minimum. Take the edges $e_1 = (0,1);e_2 = (0,2); e_3= (0,6);e_4=(6,7);e_5=(4,6);e_6=(4,5);e_7 = (3,5).$ This yields the Prüfer code $s: 006546$.
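For reference, the standard encoding (repeatedly delete the smallest-labelled leaf and record its neighbour until two vertices remain) reproduces this code from the listed edges; a short Python sketch:

```python
# Pruefer code of a labelled tree: repeatedly delete the smallest leaf
# and record its unique neighbour, until only two vertices remain.
edges = [(0, 1), (0, 2), (0, 6), (6, 7), (4, 6), (4, 5), (3, 5)]
adj = {v: set() for e in edges for v in e}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

code = []
while len(adj) > 2:
    leaf = min(v for v in adj if len(adj[v]) == 1)
    (nbr,) = adj[leaf]          # the unique neighbour of the leaf
    code.append(nbr)
    adj[nbr].discard(leaf)
    del adj[leaf]

print(''.join(map(str, code)))  # 006546
```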
Operator between two Hilbert spaces that preserve inner product must be linear | It is possible to expand
$$
\|U(f+g)-U(f)-U(g)\|^{2}
$$
into 9 inner-product terms involving $U$ applied to one of $f+g$, $f$, $g$ in the first coordinate with another such term in the second coordinate. Assuming $(Uf,Ug)=(f,g)$ for all $f, g \in X$, then $U$ can be removed from both coordinates of all 9 terms, resulting in the trivial identity
$$
\|U(f+g)-U(f)-U(g)\|^{2}=\|(f+g)-f-g\|^{2}=0.
$$
So $U$ is automatically additive, meaning that $U(f+g)=U(f)+U(g)$. If $\alpha$ is a scalar and $f \in X$,
$$
\begin{align}
\|U(\alpha f)-\alpha U(f)\|^{2}
& =(U(\alpha f),U(\alpha f))-\overline{\alpha}(U(\alpha f),U(f)) \\
& -\alpha (U(f),U(\alpha f))+|\alpha|^{2}(U(f),U(f)) \\
& = (\alpha f,\alpha f)-\overline{\alpha}(\alpha f,f)-\alpha(f,\alpha f)+|\alpha|^{2}(f,f) = 0.
\end{align}
$$
So $U(\alpha f)=\alpha U(f)$. Therefore, $U$ is linear. (If $X$ and $Y$ are real spaces, ignore the conjugation, and the above continues to hold.) |
Maximal ideals of the ring of formal polynomials over a ring $R$ | How can I prove that $(x)$ is the unique maximal ideal of $F[[x]]$ without using the quoted statement above?
The fact you mentioned about $F[[x]]\setminus(x)\subseteq U(R)$ is enough to show $F[[x]]$ is local. In fact, that means $F[[x]]\setminus(x)= U(R)$, and therefore $(x)$ is the set of nonunits. A ring is local iff the set of nonunits is closed under addition.
Here's the collection of equivalences anyone studying local rings should prove, at some point:
TFAE for a ring $R$ (with identity):

1. $R$ has a unique maximal right ideal;
2. the nonunits of $R$ are closed under addition;
3. for every $x\in R$, either $x$ or $1-x$ is a unit.
They're very manageable for any abstract algebra student, but hints can be supplied if you ask.
If $R$ is a local ring, then so is $R[[x]]$.
Suppose $R$ is local and let $I$ be a maximal right ideal of $R[[x]]$. Then the projection $R[[x]]\to R$ carries $I$ to the unique maximal right ideal $M$ of $R$. Then the inverse image of $M$ under this projection is also a right ideal containing $I$, and therefore it's equal to $I$. Thus all maximal right ideals of $R[[x]]$ are equal to the preimage of $M$, so there is only a single maximal right ideal in $R[[x]]$.
Number of strings of size $k$ that do not have 'ab' | Substituting the second recurrence into the first,
$$(N_{k+1}-2N_k)=(N_k-2N_{k-1})+N_{k-1}$$
and so
$$N_{k+1}=3N_k-N_{k-1}\ .$$
This is the recurrence relation for "every second Fibonacci", and by checking some initial conditions you can show that
$$N_k=F_{2k+1}\ .$$
In a similar way you can prove that
$$A_k=F_{2k}\ ,$$
and the total is
$$T_k=A_k+N_k=F_{2k+2}\ .$$
In all of this I am assuming that the Fibonacci numbers start with $F_0=0$ and $F_1=1$.
BTW, you can get a recurrence for $T_k$ directly by using inclusion/exclusion to count the number of strings of length $k$ which DO contain $ab$. Exercise. See if you can explain why
$$3^k-T_k=3^{k-2}+3(3^{k-1}-T_{k-1})-(3^{k-2}-T_{k-2})\ .$$ |
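Both the closed form $T_k=F_{2k+2}$ and the recurrence above are easy to confirm by brute force; a Python sketch (assuming, as the $3^k$ count suggests, a three-letter alphabet $\{a,b,c\}$):

```python
from itertools import product

def fib(m):                     # F_0 = 0, F_1 = 1, as above
    a, b = 0, 1
    for _ in range(m):
        a, b = b, a + b
    return a

# T_k = number of length-k strings over {a,b,c} avoiding the substring "ab"
T = {k: sum('ab' not in ''.join(w) for w in product('abc', repeat=k))
     for k in range(1, 10)}

print(all(T[k] == fib(2 * k + 2) for k in T))      # T_k = F_{2k+2}
print(all(3**k - T[k] == 3**(k - 2) + 3 * (3**(k - 1) - T[k - 1])
          - (3**(k - 2) - T[k - 2]) for k in range(3, 10)))
```

Both checks print `True`.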
Explain why calculating this series could cause paradox? | It's because the series for $\ln 2$ is only conditionally convergent, so rearranging its terms can change the sum. (see also Riemann's rearrangement theorem)
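A numerical illustration (Python sketch; the limit $\ln 2+\frac12\ln\frac pq$ for a rearrangement taking $p$ positive terms then $q$ negative terms per block is the standard consequence of Riemann's theorem for this series):

```python
import math

def rearranged(p, q, blocks):
    # take p positive terms (1, 1/3, 1/5, ...) then q negative (1/2, 1/4, ...)
    s, pos, neg = 0.0, 1, 2
    for _ in range(blocks):
        for _ in range(p):
            s += 1.0 / pos
            pos += 2
        for _ in range(q):
            s -= 1.0 / neg
            neg += 2
    return s

print(rearranged(1, 1, 10**5), math.log(2))        # the usual order: ln 2
print(rearranged(2, 1, 10**5), 1.5 * math.log(2))  # same terms, different sum
```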
How to determine the distance from a point to a line | A general point on that line can be written as $$\left(x\,,\,-\frac{a}{b}x-\frac{c}{b}\right)$$ (If $\,b=0\,$ then this is a vertical line and the distance is just the abolute value of the difference of the abscissas and we don't need all this).
Then you want to minimize the function $$f(x):=\sqrt{\left(\alpha-x\right)^2+\left(\beta-\left(-\frac{a}{b}x-\frac{c}{b}\right)\right)^2}\,\,,\,\,p=(\alpha,\beta)$$Of course, you don't have to work with this function if you don't want to: you can work with its square $\,g(x):=f(x)^2\,$ (why?), so $$g'(x)=2(x-\alpha)+\frac{2a}{b^2}(ax+b\beta+c)=0\Longrightarrow x=\frac{b^2\alpha-ab\beta-ac}{a^2+b^2}...etc$$
How to show $\operatorname{ann}(M) = \operatorname{ann}(X)$. | The claim of part $(b)$ of the exercise is:
If $R$ is an artinian ring and $M$ is a finitely generated left $R$-module such that $\text{ann}(M)=0$, then $M$ has a submodule isomorphic to $R$.
However the claim, as stated, is false.
Before presenting a counterexample, we state and prove a lemma.
lemma:
If $R$ is an artinian ring, then no proper left ideal of $R$ is isomorphic, as a left $R$-module, to $R$.
Proof of the lemma:
Let $R$ be an artinian ring and suppose $A$ is a proper left ideal of $R$ which is isomorphic, as a left $R$-module, to $R$.
Our goal is to derive a contradiction.
Let $A_0=A$.
Since $A_0$ is isomorphic, as a left $R$-module, to $R$, it follows that $A_0$ has a proper $R$-submodule, $A_1$ say, which is isomorphic, as a left $R$-module, to $R$.
Note that $A_1$ is also a left ideal of $R$.
Iterating the process, we get a strictly descending infinite chain
$$
A_0\supset A_1\supset A_2\supset\cdots
$$
of left ideals of $R$, contradiction, since $R$ is an artinian ring.
This completes the proof of the lemma.
The counterexample:
Let $R=M_n(K)$ where $K$ is a field and $n\ge 2$.
Regarded as a vector space over $K$, $R$ is finite-dimensional (more precisely, it has dimension $n^2$).
Since every left or right ideal of $R$ is closed under multiplication by elements of $K$, it follows that every left or right ideal of $R$ is a vector subspace of $R$.
Since the subspaces of a finite-dimensional vector space satisfy the descending chain condition, it follows that $R$ is an artinian ring.
For $1\le i\le n$, let $a[i]\in R$ be the $n{\times}n$ matrix with all entries in the $i$-th row equal to $1$ and all other entries equal to $0$.
Let $A$ be the left ideal generated by $a[1]$.
Thus, regarding $A$ as an $R$-module, $A$ is finitely generated.
Note that $\det(a[1])=0$.
If $a\in A$, then $a=ra[1]$ for some $r\in R$, hence $\det(a)=\det(r)\det(a[1])=0$.
Thus all elements of $A$ are singular, so the inclusion $A\subset R$ is proper.
Applying the lemma, no $R$-submodule of $A$ is isomorphic to $R$.
It remains to show $\text{ann}(A)=0$.
Let $r\in\text{ann}(A)$.
For $1\le i,j\le n$, let $e[i,j]\in R$ be the $n{\times}n$ matrix with $(i,j)$-th entry equal to $1$ and all other entries equal to $0$.
By definition of $A$, we have $a[1]\in A$.
For $2\le i\le n$, we have $a[i]=e[i,1]a[1]$.
Thus we have $a[1],\dots,a[n]\in A$.
Then from $rA=0$ we get $ra[i]=0$ for all $i\in\{1,...,n\}$.
From $ra[i]=0$ it follows that the $i$-th column of $r$ is zero.
But then all columns of $r$ are zero, so $r=0$.
Thus we have $\text{ann}(A)=0$, so $A$ qualifies as a counterexample to the claim. |
Is this a known distribution? | This is the beta distribution with pdf
$$f(x,\alpha,\beta) = \frac{x^{\alpha - 1} (1-x)^{\beta - 1}}{C}$$ and parameters $\alpha = \frac{1}{2}$, $\beta = \frac{3}{2}$, where $C = B(\alpha,\beta)$ is the beta function (the appropriate normalizing constant).
Find a formula for $\cos(5x)$ in terms of $\sin(x)$ and $\cos(x)$ | Your way is right. But I think it should be
$$(\cos x + i\sin x)^5 = \cos^5 x + 5i\cos^4x\sin x - 10\cos^3x\sin^2 x - 10i\cos^2x\sin^3x + 5\cos x\sin^4x +i\sin^5x$$
So, you have
$$\cos 5x = \cos^5x-10\cos^3x\sin^2x+5\cos x\sin^4x$$
$$\sin 5x = \sin^5x-10\cos^2x\sin^3x+5\cos^4 x\sin x$$ |
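A quick symbolic double-check (a sympy sketch):

```python
import sympy as sp

x = sp.symbols('x')
c, s = sp.cos(x), sp.sin(x)

# both differences simplify to 0, confirming the two formulas
print(sp.simplify(c**5 - 10*c**3*s**2 + 5*c*s**4 - sp.cos(5*x)))   # 0
print(sp.simplify(s**5 - 10*c**2*s**3 + 5*c**4*s - sp.sin(5*x)))   # 0
```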
Find integer solution of a system of equations | Your equations are
\begin{cases}
12k= 839+ p^2\\
2k= 135+ q^2
\end{cases}
and we observe that $$p^2-q^2 = 10k - 704 = 5(2k-141)+1$$
so $p^2 - q^2 \equiv 1 \pmod{5}$, which tells us that either $p^2 \equiv 1$ and $q \equiv 0 \pmod 5$, or $p \equiv 0$ and $q^2 \equiv -1 \pmod 5$. We consider the first case here, and the second later. Since $q$ is prime and divisible by $5$, $q=5$. Then
$$
2k = 135+25 = 160 \implies k=80
$$
and
$$
12k = 960 = 839+p^2\\
p^2 = 121\\
p=11
$$
Finally,
$$
12 n - 21143 = 25p^2= 3025\\
12n = 24168\\
n=2014
$$
or
$$
2 n - 3403= 25q^2= 625\\
2n = 4028\\
n=2014
$$
So that is the one solution.
Now consider the second case: $p=0 \mod 5$ and $q^2 = -1 \mod 5$.
$$12k=839+p^2 = 839+25 = 864 \\
k=72$$
And the other equation becomes
$$2\cdot 72 = 135 + q^2\\
q=3$$
So that will give a second solution, with
$$
n = 14 + 25k = 14 + 25\cdot 72 = 1814 $$
Kudos to Leox for noting the second case, which I had overlooked in my original answer. |
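As a cross-check, a brute-force search using only the two displayed relations $12n-21143=25p^2$ and $2n-3403=25q^2$ with $p,q$ prime recovers exactly these two values (Python sketch; the search range is an arbitrary choice):

```python
from math import isqrt

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, isqrt(m) + 1))

def prime_root_over_25(m):
    # return r if m = 25 r^2 with r prime, else None
    if m <= 0 or m % 25:
        return None
    r = isqrt(m // 25)
    return r if r * r == m // 25 and is_prime(r) else None

for n in range(1762, 5000):
    p = prime_root_over_25(12 * n - 21143)
    q = prime_root_over_25(2 * n - 3403)
    if p and q:
        print(n, p, q)      # 1814 5 3  and  2014 11 5
```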
selecting two students(males) at random | I would have to disagree with Dirk. I actually think you are correct.
Suppose we had 2 boys Arthur and Bob and 1 girl Clelia and we pick 2 at random. Our possibilities are $AB,\ AC,\ BA,\ BC,\ CA,\ CB$. Now it is revealed to us that our second pick was a boy, leaving us with the options $AB,\ BA,\ CA,\ CB$. In two of these cases our first pick was a boy, in two of these cases our first pick was a girl, leaving us with chance $\frac{1}{2}$ that our first pick was a boy, instead of the initial $\frac{2}{3}$ chance. For bigger cases it is useful to use conditional probability, as you have correctly done.
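The sample space is small enough to enumerate directly (Python sketch, with boys $A,B$ and girl $C$ as above):

```python
from itertools import permutations

picks = list(permutations('ABC', 2))                 # all ordered 2-picks
second_boy = [p for p in picks if p[1] in 'AB']      # condition on the reveal
first_boy = [p for p in second_boy if p[0] in 'AB']
print(len(first_boy), '/', len(second_boy))          # 2 / 4, i.e. 1/2
```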
Bernoulli Number Sum: At B4 i get 1/180? | You wrote $4$ where you should have $4!$ from the binomial coefficients. That explains the factor $6$. Also, you're using the recursion for the $+$ convention but substituting the values for the $-$ convention. You should either use $B_2=+\frac12$ or drop the initial $1$ in the recursion. |
Why does $F(\sqrt{a+b+2\sqrt{ab}}) = F(\sqrt{a},\sqrt{b})$? | So you've proved that both fields have the same degree over $F$. If you can show that one of them is contained in the other, you're done. Note that $$a+b + 2 \sqrt{ab} = (\sqrt{a} + \sqrt{b})^2$$ |
Given a single die, what is the probability it takes an even number of rolls to get a 4? | $$\sum_{k=1}^{\infty}\frac{1}{6}\cdot\left(\frac{5}{6}\right)^{2k-1}$$
$$=\frac{1}{6}\sum_{k=1}^{\infty}\left(\frac{5}{6}\right)^{-1}\cdot\left(\frac{5}{6}\right)^{2k}$$
$$=\frac{1}{6}\sum_{k=1}^{\infty}\frac65\cdot\left(\left(\frac{5}{6}\right)^2\right)^k$$
$$=\frac{1}{6}\cdot\frac65\sum_{k=1}^{\infty}\left(\frac{25}{36}\right)^k$$
$$=\frac15\sum_{k=1}^{\infty}\left(\frac{25}{36}\right)^k$$
$$=\frac15\left(\sum_{k=0}^{\infty}\left[\left(\frac{25}{36}\right)^k\right]-1\right)$$
$$=\frac15\left(\frac1{1-\frac{25}{36}}-1\right)$$
$$=\frac15\left(\frac1{\frac{11}{36}}-1\right)$$
$$=\frac15\left(\frac{36}{11}-1\right)$$
$$=\frac15\cdot\frac{25}{11}$$
$$=\frac5{11}$$ |
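A Monte Carlo sanity check (Python sketch):

```python
import random

def rolls_until_four():
    # roll a fair die until a 4 appears; return the number of rolls
    n = 0
    while True:
        n += 1
        if random.randint(1, 6) == 4:
            return n

trials = 10**6
even = sum(rolls_until_four() % 2 == 0 for _ in range(trials))
print(even / trials, 5 / 11)   # both close to 0.4545...
```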
The differential of a smooth map on manifold at points of local maxima | Recall that $(f_*)_p(X_p)=X_p(f)$, where we treat $X_p\in T_pM$ as a derivation on the ring of smooth functions. Therefore, showing that $(f_*)_p(X_p)=0$ whenever $f$ reaches a local maximum is the same as showing that the directional derivative of $f$ in any given direction is $0$ at $p$. Think you can take it from there? |
Show $f(x) = 0$ given continuity and integrals. | You have that $$\int_0^x f=\int_x^1 f$$ for each $x\in[0,1]$. By FTC, upon differentiation, you get that $$f(x)=-f(x)$$ for each $x\in [0,1]$. That is $2f(x)=0$ or $f(x)=0$ over $[0,1]$. |
Square root of the max is the max of the square root? | It's a big assumption that $\max |f(x)|$ or $\max(|f(x)|^2)$ exist but if one or the other does, they both do $(\max(|f(x)|^2))^{\frac 12} = \max |f(x)|$.
Suppose $\max |f(x)|$ exist. That means there is $a\in \mathbb R$ so that for every $y\in \mathbb R$ we have $|f(y)| \le |f(a)|$ and $\max|f(x)| = |f(a)|$.
If $|f(y)|\le |f(a)|$ then $|f(y)|^2 \le |f(a)|^2$ so $\max(|f(x)|^2) = |f(a)|^2$.
And $(\max(|f(x)|^2)^{\frac 12} = (|f(a)|^2)^{\frac 12} = |f(a)| = \max (|f(x)|)$.
And the other direction is too similar to be worth dealing with.
Now a more subtle question is if $\max(|f(x)|)$ doesn't exist but $\sup(|f(x)|)$ does.
Does $\sup (|f(x)|^2)$ exist and if so does $(\sup(|f(x)|^2))^{\frac 12} = \sup |f(x)|$?
The answer is still yes.
$|f(y)| \le \sup |f(x)| \iff |f(y)|^2 \le (\sup |f(x)|)^2$ so $\{|f(x)|^2\}$ is bounded above by $(\sup |f(x)|)^2$. So $\sup(|f(x)|^2)$ exists.
If $0< k < (\sup |f(x)|)^2$ then $\sqrt k < \sup|f(x)|$ and so there is $y$ with $\sqrt k < |f(y)| \le \sup |f(x)|$, hence $k < |f(y)|^2$, so $k$ is not an upper bound. So $\sup(|f(x)|^2) = (\sup |f(x)|)^2$.
And so $(\sup(|f(x)|^2))^{\frac 12} =((\sup |f(x)|)^2)^{\frac 12} = \sup(|f(x)|)$.
For this probability question, should I consider him stepping back and then forward again? | Split it into disjoint events, and add up their probabilities:
$P(BBBB)=0.6\cdot0.6\cdot0.6\cdot0.6$
$P(BBBF)=0.6\cdot0.6\cdot0.6\cdot0.4$
$P(BBFB)=0.6\cdot0.6\cdot0.4\cdot0.6$
$P(BBFF)=0.6\cdot0.6\cdot0.4\cdot0.4$
$P(BFBB)=0.6\cdot0.4\cdot0.6\cdot0.6$
$P(BFBF)=0.6\cdot0.4\cdot0.6\cdot0.4$
UPDATE:
As soon as we have $2$ Bs, the man will not fall off the cliff in the first $4$ steps.
So we can simplify the solution above by splitting it into the following events:
$P(BB )=0.6\cdot0.6$
$P(BFB)=0.6\cdot0.4\cdot0.6$ |
Discrete LTI systems with complex inputs? | Exactly: complex exponentials are eigenfunctions of LTI systems, so if the input is $x[n]=e^{j\omega n}$ the output is simply $H(e^{j\omega})\,e^{j\omega n}$, where $H(e^{j\omega})=\sum_k h[k]e^{-j\omega k}$ is the frequency response. Writing an input as a linear combination of complex exponentials therefore replaces the convolution with the impulse response by a multiplication, and this simplification is indeed the main goal.
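A small numerical demonstration of the eigenfunction property (numpy sketch; the impulse response and frequency are arbitrary choices):

```python
import numpy as np

h = np.array([0.5, 0.3, 0.2])       # any FIR impulse response
w = 0.7                             # any frequency
n = np.arange(200)
x = np.exp(1j * w * n)              # complex exponential input

y = np.convolve(h, x)               # LTI output
steady = y[len(h) - 1 : len(x)]     # samples where the overlap is full

# Frequency response H(e^{jw}) = sum_k h[k] e^{-jwk}
H = np.sum(h * np.exp(-1j * w * np.arange(len(h))))

print(np.allclose(steady, H * x[len(h) - 1 :]))   # True
```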
Using Rank-Nullity Theorem, find dimension and basis | The Rank-Nullity Theorem doesn't do anything to help construct a basis. But you could use it to determine the dimension of $W$. Let $$T: \mathbb{Q}_4[x] \to \mathbb{Q}$$ be the transformation defined by $$T(a_0 + a_1x + a_2x^2 + a_3 x^3 + a_4 x^4) = a_1 + a_2 + a_3 + a_4.$$
Then $W = \ker(T)$ and $\operatorname{rank}(T) = 1$.
The Rank-Nullity Theorem says that $\dim(W)= \text{nullity}(T) = \dim(\mathbb{Q}_4[x]) - \operatorname{rank}(T) = 5-1 = 4$.
Now we need to construct a basis of $W$. That is, we need $4$ vectors in $W$ that span it.
If $f(x) = a_0 + a_1x + a_2 x^2 + a_3x^3 + a_4x^4 \in W$, then $a_1 = -a_2 - a_3 - a_4$. Therefore
\begin{align*}
f(x) &= a_0 + (-a_2 - a_3 - a_4)x + a_2 x^2 + a_3x^3 + a_4x^4 \\
&= a_0(1) + a_2(x^2 - x) + a_3(x^3 - x) + a_4(x^4-x) \in \text{span}(1, x^2-x,x^3-x,x^4-x)
\end{align*}
$\{1, x^2-x,x^3-x,x^4-x\}\subseteq W$ is a spanning set of $W$ of the correct cardinality and is thus a basis. |
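A quick independence check on the coordinate vectors of these four polynomials with respect to $1,x,x^2,x^3,x^4$ (numpy sketch):

```python
import numpy as np

# rows: coordinates of 1, x^2 - x, x^3 - x, x^4 - x
B = np.array([[1,  0, 0, 0, 0],
              [0, -1, 1, 0, 0],
              [0, -1, 0, 1, 0],
              [0, -1, 0, 0, 1]])
print(np.linalg.matrix_rank(B))   # 4, so the four polynomials are independent
```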
How would one find the number of spanning trees in a unlabled graph | For an arbitrary graph, you would use something like Kirchhoff's theorem. But with a graph like yours with lots of symmetry one may hope to find a clever shortcut specific to that graph.
For example, consider the 9 "outer" edges of the graph, and consider which of them are not in the spanning tree. It is clear that at most one of them can be missing in each of the squares, or some vertices would be cut off. So we can do an analysis by cases:
If none of the 9 outer edges are missing, then they form a cycle. Impossible.
If one of the 9 outer edges are missing, then the remaining 8 form a spanning tree together.
If two of the 9 outer edges are missing, they must be in different squares, and exactly one of the corresponding "inner" edges must be added to the spanning tree.
If three of the 9 outer edges are missing, then we must choose exactly two of the inner edges to complete the tree.
Alternatively: Exactly one of the three corners of the triangle must have the property that the path between the two other corners (along the spanning tree) passes through it.
Each of the squares adjacent to that corner can have a spanning tree selected in $4$ ways, and then there are $3$ ways to connect the two outer vertices in the last square. |
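If you do want the fully general route, Kirchhoff's theorem reduces everything to one determinant; a minimal sketch (Python, with a triangle as an illustrative sanity check):

```python
import numpy as np

def spanning_trees(adj):
    # Matrix-tree theorem: any cofactor of the Laplacian L = D - A
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    return round(np.linalg.det(L[1:, 1:]))

triangle = [[0, 1, 1],
            [1, 0, 1],
            [1, 1, 0]]
print(spanning_trees(triangle))   # 3 spanning trees, as expected
```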
Solve the following problem by using the Binomial formula | You have exchanged $p$ and $1-p$. As you said $p=0.6$. Therefore
$P(X\leq 2)={10 \choose 2} \cdot 0.6^2 \cdot 0.4^8 + {10 \choose 1} \cdot 0.6^1 \cdot 0.4^9 + {10 \choose 0} \cdot 0.6^0 \cdot 0.4^{10}=0.01229$
Thus $P(X\geq 3)=1-P(X\leq 2)=1-0.01229=0.98771$ |
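A quick numerical check of the arithmetic (Python):

```python
from math import comb

# P(X <= 2) for X ~ Binomial(10, 0.6)
p_le_2 = sum(comb(10, k) * 0.6**k * 0.4**(10 - k) for k in range(3))
print(round(p_le_2, 5), round(1 - p_le_2, 5))   # 0.01229 0.98771
```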
Why is this function a really good asymptotic for $\exp(x)\sqrt{x}$ | Repeated integrations by parts show that, for every positive $a$ and $x$, $$\int_0^x\mathrm e^{-t}t^{a-1}\mathrm dt=\Gamma(a)\mathrm e^{-x}\sum_{n\geqslant0}\frac{x^{n+a}}{\Gamma(n+a+1)}.$$ When $x\to\infty$, the LHS converges to $\Gamma(a)$, hence the series in the RHS is equivalent to $\mathrm e^x$. Now, $$\sum_{n\geqslant0}\frac{x^{n}}{\Gamma(n+a)}=\frac1{\Gamma(a)}+x^{1-a}\sum_{n\geqslant0}\frac{x^{n+a}}{\Gamma(n+a+1)}$$ hence $$\sum_{n\geqslant0}\frac{x^{n}}{\Gamma(n+a)}\sim x^{1-a}\mathrm e^x.$$ For $a=\frac12$, this is the result mentioned in the question.
An exact formula using the incomplete gamma function $\gamma(a,\ )$ (that is, the LHS of the first identity in this answer) is $$\sum_{n\geqslant0}\frac{x^{n}}{\Gamma(n+a)}=\frac{\gamma(a,x)}{\Gamma(a)} x^{1-a}\mathrm e^x+\frac1{\Gamma(a)}.$$
Edit: ...And this approach yields the more precise expansion, also mentioned in the question, $$\sum_{n\geqslant0}\frac{x^{n}}{\Gamma(n+a)}=x^{1-a}\mathrm e^x+\frac{1-a}{\Gamma(a)}\frac1x+O\left(\frac1{x^2}\right).$$ More generally, for every nonnegative integer $N$ and every noninteger $a$, $$\sum_{n\geqslant0}\frac{x^{n}}{\Gamma(n+a)}=x^{1-a}\mathrm e^x+\frac{\sin(\pi a)}{\pi}\sum_{k=1}^N\frac{\Gamma(k+1-a)}{x^k}+O\left(\frac1{x^{N+1}}\right).$$ |
Convergence of sequence of Bernoulli random variables | First, looking at your rightmost term (note that $p_iq_i = p_i(1-p_i) \le \frac 14$ for all $i$, we have
\[
1 - \frac{\sum_i p_iq_i}{\epsilon^2 n^2} \ge 1 - \frac 1{4\epsilon^2 n}
\]
and this indeed goes to 1.
For the part where $\mu_n \to l$, we observe that, by convergence, given $\epsilon > 0$, for $n$ large enough we will have $|\mu_n - l| \le \frac \epsilon 2$. For these $n$ it holds that
\[
\left\{|X_n' - \mu_n| < \frac \epsilon 2 \right\} \subseteq
\left\{|X_n' - l| < \epsilon\right\}
\]
by the triangle inequality. As the probability of the former set converges to $1$, as you proved, so does that of the latter.
If $\phi :G\rightarrow H$ is a group homomorphism and $G$ is soluble, then $Im(\phi)$ is also soluble | It is enough to show that $\phi(G')=\phi(G)'$.
Observe that $\phi(xyx^{-1}y^{-1})=\phi(x)\phi(y)\phi(x)^{-1}\phi(y)^{-1}$. Hence the result follows.
Edit: If $$\phi(G')=\phi(G)'$$ then, by induction,
$$\phi(G^r)=\phi(G)^r.$$ By $G^r$ I mean the $r$-th commutator subgroup of $G$.
Is there a natural ring structure on $\operatorname{Pic}(\mathbb{CP}^1)$? | Question: "Is there a natural ring structure on $Pic(P^1)$ such that the isomorphism between $Pic(P^1)$ and $Z$ becomes an isomorphism of rings?"
Answer: If $C$ is a non-singular algebraic curve there is an isomorphism
$$K_0(C) \cong \mathbb{Z}\oplus Pic(C)$$
where $K_0(C)$ is the Grothendieck group of finite rank locally free sheaves on $C$.
Since $C$ is non-singular there is a product on $K_0(C)$ induced by the tensor product.
Example: If $C$ is the projective line it follows $Pic(C) \cong \mathbb{Z}$. The projective bundle formula proves that there is an isomorphism of rings
$$K_0(C) \cong \mathbb{Z}[t]/(t^2)\cong \mathbb{Z}\{1,t\}.$$
In general if $G$ is an abelian group you may define $F(G):=\mathbb{Z}\oplus G$
with multiplicative unit $1:=(1,0)$ and the following multiplication: $(m,g)*(n,h):=(mn, mh+ng)$. It follows $F(G)$ is a commutative unital ring and this construction is functorial and satisfies a certain universal property. If you do this construction with the projective line $C$ and $Pic(C)$ you get an isomorphism
$$F(Pic(C)) \cong K_0(C)$$
hence there is in some sense a "functorial way" to introduce a ring structure on an abelian extension $F(Pic(C))$ of $Pic(C)$ recovering the Grothendieck ring $K_0(C)$. |
Monotone Convergence Property $\iff$ Least Upper Bound Property | Monotone Convergence Property $\implies$ Archimedean Property.
Let's assume $\mathbb{F}$ is not Archimedean; then $\exists$ $c \in \mathbb{F}$ with $n<c$
for all $n \in \mathbb{N}_{\mathbb{F}}$.
The increasing sequence $(1,2,...)$ is then bounded above by $c$, so by assumption it must converge, say to $r \in \mathbb{F}$.
$\implies$ $(0,1,2,...)$ also converges to $r$.
Subtracting the two sequences, we find that $(1,1,1,...)$ converges to $0$, which is absurd.
Therefore $\mathbb{F}$ must be Archimedean.
Let $(\phi \ne)A \subseteq \mathbb{F}$ which is bounded above in $\mathbb{F}$, with $U$ being the set
of upperbounds of $A$ in $\mathbb{F}$.
Want: to find a lub for $A$.
We now move on to build a few tools in order to prove the existence of $\operatorname{lub} A$.
Claim 1: $\{u-\varepsilon: u \in U\}=:U- \varepsilon \nsubseteq U$,
for all $\varepsilon (>0) \in \mathbb{F}$.
Let $\varepsilon>0$.
Suppose $U-\varepsilon \subseteq U$; now we use induction on $n \in \mathbb{N}_{\mathbb{F}}$.
If $U-n\varepsilon \subseteq U$ for some $n \in \mathbb{N}_{\mathbb{F}}$.
then $U-(n+1)\varepsilon=(U-\varepsilon)-n\varepsilon$ $\subseteq U-n\varepsilon$ $\subseteq U$.
$\implies$ $U-n\varepsilon \subseteq U$ for all $n \in \mathbb{N}_{\mathbb{F}}$.
Hence by Archimedean Property we have $\mathbb{F}= \bigcup_{n=1}^\infty U-n\varepsilon \subseteq U$,
which contradicts $A \neq \phi$.
Claim 2: $\bigcap_{n=1}^\infty U-\frac{1}{n} \subseteq U$.
Let $x \in \bigcap_{n=1}^\infty U-\frac{1}{n}$ and let $(x<)y \in \mathbb{F}$.
By archimedean property $\exists$ $n \in \mathbb{N}_{\mathbb{F}}$ such that $x+\frac{1}{n} <y$.
$x \in U-\frac{1}{n}$ $\Rightarrow$ $x+\frac{1}{n} \in U$ $\Rightarrow$ $(x+\frac{1}{n}<)\,y \notin A$.
$y$ is chosen arbitrarily, so $x$ is an upperbound of A, i.e $x \in U$.
Claim 3: $U-\frac1n$ $\subseteq$ $U-\frac1m$ for all $m \le n$.
$m \le n \Rightarrow \frac1n \le \frac1m$.
$x \in U-\frac1n$ $\Rightarrow$ $x+\frac1n \le x+\frac1m$ $\Rightarrow x+\frac1m \in U$ $\Rightarrow x \in U-\frac1m$.
Corollary:
Let $n_k$ be any increasing sequence in $\mathbb{N}_{\mathbb{F}}$ then
$\bigcap_{k=1}^\infty \left(U-\frac{1}{n_k}\right)$ =
$\bigcap_{n=1}^\infty \left(U-\frac{1}{n}\right) \subseteq U.$
Claim 4: If $x \in \mathbb{F}$ with $x \notin U-\frac1n$, then
$x<y$ for all $y \in U-\frac1n$.
Suppose $u-\frac1n<x$ for some $u \in U$.
Then $x-(u-\frac1n)>0$ $\implies u+x-(u-\frac1n) \in U$
$\implies x \in U-\frac1n$, contradiction.
Hence $x$ is less than every member of $U-\frac1n$.
Now we proceed in creating an increasing sequence so that we can apply Monotone Convergence Property.
Let $x_1\in (U-1) \setminus U$ ($\ne \phi$ by Claim 1)
$x_1\notin U$ $\Rightarrow$ $x_1 \notin \bigcap_{n=1}^\infty U-\frac{1}{n}$.(from Claim 2)
so $\exists$ $n_1>1$ such that $x_1 \notin U-\frac{1}{n_1}$.
Let $x_2 \in \left(U-\frac{1}{n_1}\right) \setminus U$.
Then $\exists$ $n_2>n_1$(from Claim 3)
such that $x_2 \notin U - \frac{1}{n_2}$.
Again consider $x_3 \in \left(U-\frac{1}{n_2}\right) \setminus U$ and so on.
This yields an increasing sequence $(1,n_1,n_2,...)$ in $\mathbb{N}_{\mathbb{F}}$(from Claim 3)
and an increasing sequence $(x_k)$ in $\mathbb{F}$
(from Claim 4,3) such that
$x_k \in \left(U-\frac{1}{n_{k-1}}\right) \setminus U.$
Then $(x_k)$ is bounded above by each element of U.
$\implies$ $(x_k)\rightarrow x \in \mathbb{F}$.(M.C.P)
$\implies$ $x \le u$ for all $u \in U$.
Remains to show that $x \in U$
Since $x_k \leq x$ and $x_k+\frac{1}{n_{k-1}} \in U$ for all $k$, also $x+\frac{1}{n_{k-1}} \in U$ for all $k$
$\implies x \in \bigcap_{k=1}^\infty \left(U-\frac{1}{n_k}\right) = \bigcap_{n=1}^\infty \left(U-\frac{1}{n}\right) \subseteq U$, i.e. $x \in U$.
Therefore $x$ is the smallest element in $U$, which means $x=\operatorname{lub}A$.
Hence for any nonempty bounded subset of $\mathbb{F}$, $\operatorname{lub} A \in \mathbb{F}$
i.e $\mathbb{F}$ is Order Complete. |
Ensure that for each number in specific space there is inverse | You need the number you want to invert and the modulus to be relatively prime, which means to have greatest common divisor of $1$. You can check that with the Euclidean algorithm. Having the modulus a prime greater than the serial number is a way to make sure the GCD is $1$, but it is not required. |
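A minimal sketch of the extended Euclidean route (Python; the numbers in the example are illustrative):

```python
def mod_inverse(a, m):
    # extended Euclidean algorithm; the inverse exists iff gcd(a, m) == 1
    old_r, r = a, m
    old_s, s = 1, 0
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("no inverse: gcd(a, m) != 1")
    return old_s % m

print(mod_inverse(3, 10))   # 7, since 3 * 7 = 21 = 1 (mod 10)
```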
Can we determine the sign of this product of complex numbers? | We can't know the sign. Assuming $a+b+c=abc$ we have that
$$(-1+ai)(-1+bi)(-1+ci)=ab+ac+bc-1.$$ Now, if $a=1,b=-1,c=0$ we get $-2<0$ and if $a=b=-2,c=-4/3$ we get $25/3>0.$ |
Closed subsets of $\beta \mathbb R$ | Is there an easier way to get my final conclusion?
Let $o(X)$ denote the cardinality of the topology of the space $X$ (i.e., it is the number of open subsets of $X$.) The same notation is used in
Juhasz' book Cardinal Functions in Topology - Ten Years Later.
Then we have inequality $o(X)\le 2^{w(X)}$, which follows from the fact that every open set can be obtained as a union of some system of basic sets.
If we use $w(\beta\mathbb R)\le\mathfrak c$, then we get
$$o(\beta\mathbb R)\le 2^{\mathfrak c}.$$
So there are at most $2^{\mathfrak c}$ open subsets in $\beta\mathbb R$. Clearly, the number of closed subsets is the same. And since we are working with subsets of a compact space, compact subsets are precisely closed subsets, so we get $|K(\beta\mathbb R)|\le 2^{\mathfrak c}$.
Since weight of a subspace is less or equal to the weight of the whole space, we also have $w(\mathbb R^*)\le w(\beta\mathbb R) \le \mathfrak c$ and we can use the same argument to get
$$|K(\mathbb R^*)|=o(\mathbb R^*) \le 2^{\mathfrak c}.$$
On the other hand, we have $o(X)\ge |X|$ for any $T_1$ space. Since $|\beta\mathbb R|=2^{\mathfrak c}$ we get $$o(\beta\mathbb R)=2^{\mathfrak c}.$$
For the proof of cardinality of $\beta\mathbb R$ see: Stone–Čech compactification of $\mathbb{N}, \mathbb{Q}$ and $\mathbb{R}$ |
Convergence test and remainders | The root test says that if
$$c := \limsup_{n\to\infty} \sqrt[n]{\lvert a_n\rvert} < 1,$$
then $\sum\limits_{n=0}^\infty a_n$ is absolutely convergent. By the definition of $\limsup$, it follows that for all $q > c$ there is an $N(q)\in\mathbb{N}$ such that $\sqrt[n]{\lvert a_n\rvert} \leqslant q$ for all $n > N(q)$. Of course, $\sqrt[n]{\lvert a_n\rvert} \leqslant q$ is equivalent to $\lvert a_n\rvert \leqslant q^n$. Now if we choose $q \in (c,1)$, we find
$$\lvert R_n\rvert = \left\lvert \sum_{k=n+1}^\infty a_k\right\rvert \leqslant \sum_{k=n+1}^\infty \lvert a_k\rvert \leqslant \sum_{k=n+1}^\infty q^k = \frac{q^{n+1}}{1-q}$$
for all $n \geqslant N(q)$.
Similarly, the ratio test asserts that if $a_n \neq 0$ for all $n$ and
$$d := \limsup_{n\to\infty} \left\lvert \frac{a_{n+1}}{a_n}\right\rvert < 1,$$
then $\sum\limits_{n=0}^\infty a_n$ is absolutely convergent. Like for the root test, for every $q \in (d,1)$ we find an $N(q)\in\mathbb{N}$ such that for $n > N(q)$ we have $\left\lvert \frac{a_{n+1}}{a_n}\right\rvert \leqslant q$, and then it follows that
$$\lvert a_{n+k}\rvert = \lvert a_n\rvert \cdot \prod_{m=1}^k \left\lvert \frac{a_{n+m}}{a_{n+m-1}}\right\rvert \leqslant \lvert a_n\rvert\cdot q^k.$$
Therefore, for $n \geqslant N(q)$ we have
$$\lvert R_n\rvert = \left\lvert \sum_{k=0}^\infty a_{n+1+k}\right\rvert \leqslant \sum_{k=0}^\infty \lvert a_{n+1+k}\rvert \leqslant \sum_{k=0}^\infty \lvert a_{n+1}\rvert\cdot q^k = \lvert a_{n+1}\rvert \sum_{k=0}^\infty q^k = \frac{\lvert a_{n+1}\rvert}{1-q}.$$
Here, all $a_n$ are assumed positive, so the absolute values can simply be dropped wherever they occur.
Is there a simple characterization of the functions for which the Fourier inversion theorem holds identically? | You don't necessarily get convergence everywhere for $\mathcal{F}^{-1}\mathcal{F}f$. So your choice of $f$ would not be defined everywhere using this methodology. It is true that $\mathcal{F}f$ and $\mathcal{F}g$ are identical if $f=g$ a.e. because these are defined through limits of integrals. So that means you're generally not going to be able to find an $f$ in the equivalence class for which $\mathcal{F}^{-1}\mathcal{F}f$ converges everywhere. |
Is the Incenter always "below" the middle point of an angle bisector segment in a triangle? | Because in the standard notation we obtain: $$\frac{CI}{ID}=\frac{BC}{BD}=\frac{a}{\frac{ac}{a+b}}=\frac{a+b}{c}>1$$ |
General Annuity practice question | You have not taken account of the delay in receiving payments. There should be another factor $(1+i)^{-5}$ to take account of the five months with no payments. Then solving for $i$ requires a numeric approach. A spreadsheet will do it for you or you can use bisection. $i=0$ is too low and you can easily find an $i$ that is too high. Then compute it at the center point and see if it is too low or too high. |
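For the numeric step, a generic bisection sketch (Python; the equation of value in the example is purely illustrative, not the one from your problem):

```python
def bisect(f, lo, hi, tol=1e-10):
    # assumes f(lo) and f(hi) have opposite signs
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# illustrative only: 12 monthly payments of 100, deferred 5 months, PV 1000
f = lambda i: 100 * (1 - (1 + i) ** -12) / i * (1 + i) ** -5 - 1000
print(bisect(f, 1e-9, 1.0))
```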