title | upvoted_answer
---|---|
Recover complex function from its imaginary part | $u = -2xy - 2x - 1$ is wrong !
From $u=-2xy+h(y)+C$ we get
$u_y=-2x+h'(y)=-G_x=-2x-1$, hence $u=-2xy-y+c$. |
Geometric interpretation of the dual cone of $l^1$ is $l^\infty$? | The key is the dual relationship $\|x\|_\infty = \max_{\|z\|_1 \le 1} z^T x$.
Note
\begin{eqnarray}
K^* &=& \{ (y,s) | x^T y + st \ge 0 \text{ for all } (x,t) \in K \} \\
&=& \{ (y,s) | -x^T y + s \ge 0 \text{ for all } (-x,1) \in K \}\\
&=& \{ (y,s) | x^T y \le s \text{ for all } \|x\|_1 \le 1 \}\\
&=& \{ (y,s) | \max_{\|x\|_1 \le 1 }x^T y \le s \}\\
&=& \{ (y,s) | \|y\|_\infty \le s \}\\
\end{eqnarray} |
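As a quick numerical sanity check of the key fact $\|y\|_\infty = \max_{\|x\|_1 \le 1} x^T y$ (not part of the derivation above, and assuming NumPy), note that the maximum over the $\ell_1$ ball is attained at a signed coordinate vector:

import numpy as np

# Check ||y||_inf = max_{||x||_1 <= 1} x^T y on a few random vectors:
# the maximizer is a signed standard basis vector.
rng = np.random.default_rng(0)
for _ in range(5):
    y = rng.normal(size=4)
    x = np.zeros(4)
    i = np.argmax(np.abs(y))
    x[i] = np.sign(y[i])          # feasible point with ||x||_1 = 1
    assert np.isclose(x @ y, np.linalg.norm(y, np.inf))
print("duality check passed")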
Computing $\int_{0}^{1\over 2}{\ln(1+x)\over x}dx$. | We have
$$\begin{align}
\int_0^{1/2}\frac{\log (1+x)}{x}dx&=\int_0^{-1/2}\frac{\log (1-x)}{x}dx\\\\
&=-\text{Li}_2(-1/2) \tag 1 \\\\
&=\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{2^k\,k^2} \tag 2
\end{align}$$
where $\text{Li}_2$ is the dilogarithm function in $(1)$.
We note that $(2)$ is the sum of an alternating series whose terms decrease monotonically in absolute value. Therefore, the error of the partial sum through $k=K-1$ is bounded by
$$\left|\sum_{k=1}^{K-1}\frac{(-1)^{k+1}}{2^k\,k^2}-\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{2^k\,k^2}\right|\le \frac{1}{2^K\,K^2}$$
Thus, if $\frac{1}{2^K\,K^2}<0.01$, then the sequence of partial sums from $k=1$ to $k=K-1$ will have the desired accuracy. Taking $K=4$, we have $\frac{1}{2^4\,4^2}<0.004<0.01$ and the partial sum is
$$\sum_{k=1}^{3}\frac{(-1)^{k+1}}{2^k\,k^2} \approx 0.451388888888889
$$
The value of the integral is approximately
$$\int_0^{1/2}\frac{\log (1+x)}{x}dx\approx 0.448414206923646$$
and thus the first three terms have an absolute error of roughly $0.00297468196524253<0.01$. |
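A short numerical check of the numbers quoted above, assuming mpmath is available:

from mpmath import mp, quad, log, polylog

mp.dps = 15
integral = quad(lambda x: log(1 + x) / x, [0, 0.5])                 # ~0.448414206923646
partial = sum((-1)**(k + 1) / (2**k * k**2) for k in range(1, 4))   # ~0.451388888888889
print(integral, partial, abs(partial - integral))                   # error ~0.0029747 < 0.01
print(-polylog(2, -0.5))                                            # -Li_2(-1/2), equals the integral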
Uniform Convergence of Maximum of Sequence of Functions | Hint.
Another proof that doesn't use Arzela-Ascoli theorem.
$(g_n)$ is an increasing sequence of uniformly bounded functions. Hence it converges to a function $g$ that is defined for all points of the compact $K$.
Using equicontinuity, you can prove that $g$ is continuous. You can finally use Dini's theorem or prove it for this specific case. |
Use of this condition on the instrumental density in importance sampling? | Because then $H(x)f(x)\neq 0$ implies $g(x)\neq 0$, and so we are allowed to multiply and divide by $g(x)$
$$
\begin{align*}
\int H(x)f(x)\,\mathrm dx&=\int_{\{H(x)f(x)\neq 0\}} H(x)f(x)\,\mathrm dx=\int_{\{H(x)f(x)\neq 0\}}\frac{H(x)f(x)}{g(x)}g(x)\,\mathrm dx\\
&=\int_{\{g(x)\neq 0\}}\frac{H(x)f(x)}{g(x)}g(x)\,\mathrm dx.
\end{align*}
$$ |
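To see the support condition in action, here is a minimal importance-sampling sketch; the target $f$, the function $H$, and the proposal $g$ are toy choices of mine, with $g>0$ wherever $Hf\neq 0$:

import numpy as np

# Estimate E_f[H(X)] with f = N(0,1), H(x) = x^2, sampling from g = N(0, 2).
rng = np.random.default_rng(1)
x = rng.normal(0.0, 2.0, size=200_000)                 # draws from g
f = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)             # target density f
g = np.exp(-x**2 / 8) / np.sqrt(2 * np.pi * 4)         # proposal density g, never 0
print(np.mean(x**2 * f / g))                           # close to the true value 1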
Why is the map $\mathrm{SL}_2(\mathbb{R})/\mathrm{SO}(2) \rightarrow \mathbb{H} : A \mapsto Ai$ injective? | If $X$ is any set and $G$ is a group acting transitively on it then we can identify $X$ with $G/G_x$ where $G_x$ is the stabilizer of some fixed point $x \in X$.
The upper triangular matrices act transitively on $\Bbb H$, so $\mathrm{SL}_2(\Bbb R)$ does too, and $\mathrm{SO}(2)$ is the stabilizer of $i$.
We can get injectivity explicitly too:
$Ai = Bi \iff B^{-1}A \in \mathrm{SO}(2) \iff A \cdot \mathrm{SO}(2) = B \cdot \mathrm{SO}(2)$, showing that $A=B$ modulo $\mathrm{SO}(2)$ as required. |
Equation to zero confused | You can subtract one side, so change $\cos(x-y)=xe^y$ to $\cos(x-y)-xe^y=0$. Is that what you are looking for? |
The value of a series . | By the Binomial Theorem, you know that $$\displaystyle\sum_{t = 0}^{n}\dbinom{n}{t}(q-1)^t = \left[(q-1)+1\right]^n = q^n.$$
Differentiating both sides with respect to $q$, you get $$\displaystyle\sum_{t = 0}^{n}t\dbinom{n}{t}(q-1)^{t-1} = nq^{n-1}.$$
To get the sum you wanted, just multiply by $(q-1)$, which gives you $$\displaystyle\sum_{t = 0}^{n}t\dbinom{n}{t}(q-1)^{t} = n(q-1)q^{n-1}.$$ |
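A quick symbolic spot check of the final identity for small $n$, assuming sympy:

import sympy as sp

q = sp.symbols('q')
for n in range(1, 7):
    lhs = sum(t * sp.binomial(n, t) * (q - 1)**t for t in range(n + 1))
    rhs = n * (q - 1) * q**(n - 1)
    assert sp.simplify(lhs - rhs) == 0
print("identity verified for n = 1..6")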
Matrix representation from a linear function | I've found that my first representation was wrong. The exact way is the following:
$\mathbf{p}^{(t)} = \mathbf{A} \mathbf{p}^{(0)} + \mathbf{B} \mathbf{p}^{(t-1)}$
where:
$A=diag(\alpha_1, \ldots, \alpha_n)$ and $\alpha_i= \frac{1}{1 + \sum_{k \in N(i)} w_{i,k}}$,
$B_{i,j} = \frac{w_{i,j}}{1 + \sum_{k \in N(i)} w_{i,k}}$, if $j \in N(i)$
$B_{i,j} = 0$, otherwise. |
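A small numerical sketch of this construction; the symmetric weight matrix W below is hypothetical, and $N(i)$ is read off from its nonzero entries:

import numpy as np

W = np.array([[0.0, 2.0, 0.0],
              [2.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
denom = 1.0 + W.sum(axis=1)        # 1 + sum_{k in N(i)} w_{i,k}, one value per row i
A = np.diag(1.0 / denom)           # A = diag(alpha_1, ..., alpha_n)
B = W / denom[:, None]             # B[i, j] = w_{i,j} / denom[i], and 0 if j not in N(i)

p0 = np.array([1.0, 0.0, 0.0])
p = p0.copy()
for _ in range(50):                # iterate p^(t) = A p^(0) + B p^(t-1)
    p = A @ p0 + B @ p
print(p)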
Are the plane and the third dimensional space homeomorphic? | — No, they are not homeomorphic (for the usual topologies). Suppose that $h : \Bbb R^2 \to \Bbb R^3$ is a homeomorphism.
Then $$h' : X=\Bbb R^2 \setminus \{(0,0)\} \to Y=\Bbb R^3 \setminus \{h(0,0)\}$$ would be also a homeomorphism.
[Indeed, it is well-defined: if $p \neq (0,0)$ then $h(p) \neq h(0,0)$ (because $h$ is injective), so that the image by $h$ of any point of $X$ lies in $Y$. As a restriction of a continuous map, $h'$ is also continuous. It is also injective, because $h$ is injective. Moreover, it is surjective onto $Y$ because $h$ is surjective. Finally the inverse of $h'$ is continuous because it is the restriction to $Y$ of the inverse $h^{-1}$ of $h$, which is continuous (because $h$ is a homeomorphism).]
$\\$
— However, $Y$ is simply connected, while $X$ is not. In $X$, the loop $$\gamma : t \mapsto (\cos(2\pi t), \sin(2\pi t))$$ can't be deformed continuously to get the constant path $c : t \mapsto (1,0)$.
The idea is that if you want to "shrink" the circle (which is the image of $\gamma$) into a point, you would have to pass through the origin $(0,0)$... which doesn't belong to our space $X$!
Another way would be to "cut" the circle... but then it is not a continuous deformation anymore.
— On the other hand, any path in $Y$ can be deformed continuously into a constant loop (or into any other path).
This argument shows that $X$ can't be homeomorphic to $Y$ (because simply connectedness is preserved under homeomorphisms).
In order to prove in details that $X$ is not simply connected, I think that you need to compute its fundamental group, which happens to be $\Bbb Z$. More precisely, $$\pi_1(X) = \{\underbrace{\gamma * \cdots * \gamma}_{n \text{ times}} \;|\; n \in \Bbb Z\},$$ where $*$ denotes the concatenation of paths ;
if $n<0$ we agree that we concatenate the opposite of $\gamma$.
You can actually show that $\Bbb R^n$ is homeomorphic to $\Bbb R^m$ if and only if $n=m$, but this is not obvious. You can do it using singular homology, for instance. (In fact, using homology you can prove that any non-empty open set of $\Bbb R^n$ is not homeomorphic to any non-empty open set of $\Bbb R^m$, unless $n=m$). |
Composition of Limits In The Most General Case | Well, $f \circ g$ is only defined on $X \cap Y$ and that is empty, but the definition of limit requires for $h(x)=f(g(x))$ to be defined in a neighbourhood of $c$, so there is no limit... |
Let a be a positive number. Then there exists exactly one natural number b such that b++=a. | So the lemma you want to prove is:
For every natural $k$ there is exactly one natural $a$ such that $a\text{++} = k\text{++}$.
This requires two axioms. One is axiom 2.4:
If $n\text{++} = m\text{++}$, then $n = m$.
The other axiom may not be directly stated. It is a consequence of the axiom of equality. The general form of it says that if $x = y$ and you apply the same process $P$ to both $x$ and $y$, then the result will also be equal: $P[x] = P[y]$. In this case the process is taking the successor:
if $n = m$, then $n\text{++} = m\text{++}$.
The proof of the lemma has two parts:
First, it proves that there is at least one $a$ such that $a\text{++} = k\text{++}$.
Second, it proves that there is at most one $a$ such that $a\text{++} = k\text{++}$.
The first part is proven by noting that $k$ itself is a natural number such that $k\text{++} = k\text{++}$. I.e., if we set $a = k$, then by the axiom of equality $a\text{++} = k\text{++}$. Hence there is at least one value for which this is true.
The second part is proven by assuming that $a$ and $b$ are both natural numbers such that $a\text{++} = k\text{++}$ and $b\text{++} = k\text{++}$. But by transitivity of equality, that means $a\text{++} = b\text{++}$, and by axiom 2.4, that means $a = b$.
So if you are given two solutions, in fact, they have to be the same solution hiding behind two noms de plume. |
Are all intermediate growth branch groups just-infinite? | All known branch groups of intermediate growth are just infinite. That means your question is still open. |
How to calculate : $\frac{1+i}{1-i}$ and $(\frac{1+\sqrt{3i}}{1-\sqrt{3i}})^{10}$ | What you calculated in the first part is not the same quantity as what you asked in the original question: is the quantity to be computed $$\frac{1-i}{1+i},$$ or is it $$\frac{1+i}{1-i}?$$ Even if it is the latter, you have made a sign error in the fourth line: it should be $$(1+i)(\tfrac{1}{2} + \tfrac{i}{2}) = \frac{(1+i)^2}{2} = \frac{1+i+i+i^2}{2} = \frac{2i}{2} = i.$$
For your second question, you need to be absolutely sure that what you mean is $\sqrt{3i}$, rather than $i \sqrt{3}$. They are not the same. I suspect the actual question should use the latter, not the former. |
What is the inverse Laplace transform of this? | Thank you all for the tip's.
I finally choose the @Moo advice.
H(s) = 1 - (s+0.5*10^6) / ((s+0.5*10^6)^2 + 0.75*10^12) + (0.5*10^6) / ((s+0.5*10^6)^2 + 0.75*10^12)
h(t) = δ(t) + (0.5*10^6/0.75*10^12)* exp(-0.5^6*t)*sin(sqrt(0.75*10^12)*t) - 0.5*10^6*exp(-0.5^6*t)*cos(sqrt(0.75*10^12)*t)
It is correct?
Thank you again.
Juan. |
Nominal Rates/Effective rate computation, confusion. | Finance, as practiced, is littered with legacy jargon and imprecise shortcuts. For an interest rate $r$ not too far from zero, we have the approximation:
$$ (1 + r)^n \approx 1 + r \cdot n $$
For example: $$(1+.01)^{12} = 1.1268 \approx 1.12$$
That's close enough for the marketing department (and not close enough for the accounting department)!
So a bank might say it's charging you "a nominal interest rate of 12 percent compounded monthly" instead of saying an "annual effective interest rate of 12.68 percent."
In this context, the effective interest rate is the mathematically relevant concept, and the nominal interest rate is all about marketing, back of the envelope shortcuts, legacy terminology etc...
In some sense, this is like measuring length in feet or meters, the important thing is not to mix up what numbers are in what units. |
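In code, with the numbers used above:

# Nominal 12% compounded monthly vs. the effective annual rate.
nominal = 0.12
monthly = nominal / 12
effective = (1 + monthly)**12 - 1
print(round(effective, 6))      # 0.126825, i.e. about 12.68 percent effective
print(1 + monthly * 12)         # 1.12, the (1 + r)^n ~ 1 + r*n shortcut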
Estimating a probability of head of a biased coin | $0.65$ is the maximum-likelihood estimate, but for the problem you describe, it is too simple. For example, if you toss the coin just once and you get a head, then that same rule would say "prob = 1".
Here's one way to get the answer. The prior density is $f(p) = 1$ for $0\le p\le 1$ (that's the density for the uniform distribution). The likelihood function is $L(p) = \binom{100}{65} p^{65}(1-p)^{35}$. Bayes' theorem says you multiply the prior density by the likelihood and then normalize, to get the posterior density. That tells you the posterior density is
$$
g(p) = \text{constant}\cdot p^{65}(1-p)^{35}.
$$
The "constant" can be found by looking at this. We get
$$
\int_0^1 p^{65} (1-p)^{35} \; dp = \frac{1}{101\binom{100}{65}},
$$
and therefore
$$g(p)=101\binom{100}{65} p^{65}(1-p)^{35}.
$$
The expected value of a random variable with this distribution is the probability that the next outcome is a head. That is
$$
\int_0^1 p\cdot 101\binom{100}{65} p^{65}(1-p)^{35}\;dp.
$$
This can be evaluated by the same method:
$$
101\binom{100}{65} \int_0^1 p\cdot p^{65}(1-p)^{35}\;dp = 101\binom{100}{65} \int_0^1 p^{66}(1-p)^{35}\;dp
$$
$$
= 101\binom{100}{65} \cdot \frac{1}{\binom{101}{66}\cdot 102} = \frac{66}{102} = \frac{11}{17}.
$$
This is an instance of Laplace's rule of succession (Google that term!). Laplace used it to find the probability that the sun will rise tomorrow, given that it's risen every day for the 6000-or-so years the universe has existed. |
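The two Beta integrals and the final value are easy to confirm symbolically, assuming sympy:

from sympy import Rational, binomial, integrate, symbols

p = symbols('p')
norm = integrate(p**65 * (1 - p)**35, (p, 0, 1))
assert norm == Rational(1, 101 * binomial(100, 65))
mean = integrate(p * 101 * binomial(100, 65) * p**65 * (1 - p)**35, (p, 0, 1))
print(mean)   # 11/17, i.e. (65 + 1)/(100 + 2), Laplace's rule of succession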
What makes proving the Riemann Hypothesis so difficult? | This is not an answer but it's too long for a comment: the reason why it is so difficult to prove the Riemann Hypothesis could be that you cannot prove something that is not true.
Here is an interesting quote from a wonderful book written on the subject, Prime Obsession: Bernhard Riemann and the Greatest Unsolved Problem in Mathematics, by John Derbyshire:
You can decompose the zeta function into different parts, each of
which tells you something about zeta's behavior. One of these
parts is the so-called $S$ function. For the entire range for which
zeta has so far been studied - which is to say, for arguments on the
critical line up to a height of around $10^{23}$ - $S$ mainly hovers
between -1 and +1. The largest value known is around 3.2. There are
strong reasons to think that if $S$ were ever to get up to around 100,
then RH might be in trouble. The operative word there is "might"; $S$
attaining a value near 100 is a necessary condition for the RH to be
in trouble but not a sufficient one.
Could values of the $S$ function ever get that big? Why, yes! As a
matter of fact, Atle Selberg proved in 1946 that $S$ is unbounded;
that is to say, it will eventually, if you go high enough up the
critical line, exceed any number you name! The rate of growth of $S$
is so creepingly slow that the heights involved are beyond imagining;
but certainly $S$ will eventually get up to 100. Just how far would we
have to explore up the critical line for $S$ to be that big? Probably
around $10^{10^{10000}}$. Way beyond the range of our current
computational abilities. |
Translation help (French): "Monoïde cycliste" | Welcome to the humour of (the late) Alain Lascoux, and Marco Schützenberger. In French a cycliste is one who rides a bicycle, and as an adjective it means in relation to (the sport of) riding bicycles. Since that monoid has two generators, they no doubt found the allusion to bicycles appropriate. It would seem to me there is no really pressing need to give the two-letter instance of the plactic monoid a special name, but as a translation "cyclist monoid" is as good as any. As an alternative option I could propose "bicyclic monoid", although that could possibly be confused with the free monoid with two generators. |
Find all solutions mod $19$ to $4x^2+6x+1 \equiv 0$ mod $19$ | Since the modulus $19$ is so small, you could just try all $19$ possibilities and see which of them give the result $0$. That would certainly be faster than trying something more sophisticated. I did that and got the solution $x\equiv 11$ or $x\equiv 16$.
If you want something more algebraic,
$$4x^2+6x+1\equiv 0 \pmod{19}$$
$$5\cdot(4x^2+6x+1)\equiv 0 \pmod{19}$$
$$x^2+11x+5\equiv 0 \pmod{19}$$
$$x^2-8x+5\equiv 0 \pmod{19}$$
$$x^2-8x\equiv -5 \pmod{19}$$
$$x^2-8x+16\equiv 16-5 \pmod{19}$$
$$(x-4)^2 \equiv 11\pmod{19}$$
$$x-4 \equiv \pm 7\pmod{19}$$
$$x\equiv 4\pm 7 \pmod{19}$$
$$x\equiv 11 \text{ or }x\equiv -3 \pmod{19}$$
$$x\equiv 11 \text{ or }x\equiv 16 \pmod{19}$$ |
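The brute-force approach mentioned in the first paragraph is a one-liner in Python:

# All residues x mod 19 with 4x^2 + 6x + 1 = 0 (mod 19).
print([x for x in range(19) if (4 * x * x + 6 * x + 1) % 19 == 0])   # [11, 16]
print(pow(7, 2, 19))   # 11, confirming that (x - 4)^2 = 11 has the roots x - 4 = +-7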
Help with a proof using energy method for PDE | The function $w(\cdot,0)$ is identically zero so it's gradient is also identically zero. |
Triangle with the area at most $\frac{7}{12}$. | Divide the unit cube into 27 cubes of size $\frac{1}{3} \times \frac{1}{3} \times \frac{1}{3}$.
By the pigeonhole principle, one of these cubes contains 3 out of the 75 points. From the given condition, these points are not collinear. So they form a triangle.
In a cube of side $a$, the maximum area of a triangle that can fit in it is $\frac{\sqrt{3}a^2}{2}$.
For side $\frac{1}{3}$, this is $\approx 0.0962 < \frac{7}{12}$
Therefore, these three points form a triangle of area less than $\frac{7}{12}$. |
Matrix such that $B^2 + B - I = 0$ | a) The size of the matrix $B$ must be the same as $I$ so it is $N \times N$.
b) $(I+B) B=IB +B^2=B+B^2=I$ using the equation and so $I+B$ is the inverse of $B$.
c) $B^3=B B^2=B(I-B)= B -B^2=B-(I-B)=2B-I$ using the equation $B^2+B-I=0$.
If you need more details just tell me. |
Determining $(A \times B) \cup (C \times D) \stackrel{?}{=} (A \cup C) \times (B \cup D)$ | Try $A=B=[0,1]$ and $C=D=[1,2]$ (intervals on the real line). The left-hand side is the union of two small squares, while the right-hand side is the large square $[0,2] \times [0,2]$. |
$X\sim\mathcal N(0,1);\ Y = \sqrt{|X|}$; find $f_Y(y)$ | \begin{align}
f_Y(y) & = \frac d {dy} \Pr(Y\le y) = \frac d{dy} \Pr( -y^2 \le X \le y^2) \\[10pt]
& = \frac d {dy}\int_{-y^2}^{y^2} \frac 1 {\sqrt{2\pi}} e^{-x^2/2} \,dx = 2 \frac d {dy} \int_0^{y^2} \frac 1 {\sqrt{2\pi}} e^{-x^2/2} \,dx \\[10pt]
& = 2 \frac d {dy} \int_0^u \cdots = 2 \frac{du}{dy} \cdot \frac d {du} \int_0^u \cdots =2 (2y) \cdot \frac d {du} \int_0^u \frac 1 {\sqrt{2\pi}} e^{-x^2/2} \, dx \\[10pt]
& = 4y \cdot \frac 1 {\sqrt{2\pi}} e^{-u^2/2} = 4y \cdot \frac 1 {\sqrt{2\pi}} e^{-y^4/2} \text{ for } y\ge 0.
\end{align} |
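A Monte Carlo sanity check of the resulting density (sample size and bin width are arbitrary choices):

import numpy as np

rng = np.random.default_rng(2)
y = np.sqrt(np.abs(rng.standard_normal(1_000_000)))
y0, h = 1.0, 0.02
empirical = np.mean((y0 - h / 2 < y) & (y < y0 + h / 2)) / h
theoretical = 4 * y0 / np.sqrt(2 * np.pi) * np.exp(-y0**4 / 2)
print(empirical, theoretical)   # both approximately 0.968, up to Monte Carlo noise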
Question from $p$-adic HodgeTheory, linear algebra data | I always find this notation also a bit confusing and incomplete. Let me spell out, regarding 1. and 2., the construction of linearization explicitly:
$\varphi: M \rightarrow M$ is a $\varphi_O$-semilinear map, meaning that the compatibility with the scalar multiplication is somewhat "twisted", i.e. one has $\varphi(am)=\varphi(a)\cdot \varphi(m), \;a \in O, m \in M$ (rather than the "usual" linearity "$\varphi(am)=a\varphi(m)$"). This can cause some problems, for example, the image $\varphi(M)$ is not an $O$-submodule of $M$ in general.
For that reason, it is good to associate some "actually linear" map to $\varphi,$ hence the name linearization. The way to do it is as follows:
Consider $_O O_\varphi$, that is, $O$ as the $O$-bimodule whose left multiplication is the usual one but whose right multiplication is given by viewing $O$ as an $O$-algebra via the ring homomorphism $\varphi: O \rightarrow O$. That is, one has, for $x\in O$ and a scalar $a \in O,$ $a\cdot x=ax$ but $x \cdot a=x\varphi(a)$. Then there is an actually $O$-bilinear map
\begin{align*}
_O O_\varphi\times M & \longrightarrow M \\
(a, m) &\longmapsto a \varphi(m),
\end{align*}
and so this induces an $O$-linear (with respect to scalars acting on the left, i.e. utilizing the left action of $_O O_{\varphi}$) map $_O O_{\varphi} \otimes_{O}M \rightarrow M$, given by $a \otimes m \mapsto a \varphi(m)$. This is the linearization of $\varphi$. From the explicit description, you can kind of see why the map is described by $1\otimes \varphi_M$. |
What is the integral of a function from $\infty$ to $\infty$? | $f(t)=e^{t^2}$ is an increasing function over $\mathbb{R}^+$, hence
$$\int_{x}^{x+\frac{1}{x}} e^{t^2}\,dt \geq \int_{x}^{x+\frac{1}{x}}e^{x^2}\,dt = \frac{e^{x^2}}{x} $$
and the limit is just $+\infty$. |
A non-noetherian ring with all localizations noetherian | Recall that a ring is Boolean if every element is idempotent: for all $x \in R$, $x^2 = x$. And in fact a Boolean ring is necessarily commutative. Here are two further rather easy facts about Boolean rings (for proofs see e.g. Section 9 of these notes):
A Boolean ring is Noetherian iff it is finite.
A Boolean ring is local iff it is a domain iff it is a field iff it is isomorphic to $\mathbb{Z}/2\mathbb{Z}$.
Combining these facts, one sees that any infinite Boolean ring -- e.g. the product of infinitely many copies of $\mathbb{Z}/2\mathbb{Z}$ -- will be non-Noetherian but everywhere locally Noetherian. |
Show that $\lambda$ is a repeated root of $p(z)$ if and only if $p(\lambda) = p'(\lambda) = 0$ | Let $$p(\lambda)=0\tag 1.$$
You already proved that $p'(\lambda)=q(\lambda)$ which implies $$q(\lambda)=p'(\lambda)=0\tag2.$$
For $(2)$ take $q(z)=(z-\lambda)t(z)$ where $t(\lambda)\ne 0$. On plugging this in
$p(z)=(z-\lambda)q(z)+r$
You get $p(z)=(z-\lambda)^2t(z)+r$. Now use $(1)$ to get $r=0$, and then $p(z)=(z-\lambda)^2t(z)$ shows the result. |
Connected Graph Proofs | I think you have the right idea but trying to do it inductively might be making the problem trickier to work with.
Try this: Let $G$ be a connected (simple) graph with $n$ vertices and $m$ edges. Since $G$ is connected, $m\geq n-1$, so let $m = n-1 +k$ for some $k\in \mathbb{Z}_{\geq 0}$. Recall that any connected graph contains a spanning tree $T_0$ as a subgraph, which necessarily has $n-1$ edges and 0 cycles. Let $\{e_1, e_2, \ldots e_k \}$ denote the set of edges not in $T_0$.
Now think about what happens when we add $e_1 = (v_1,v_2)$ into $T_0$. Since $T_0$ is a spanning tree there exists a path from $v_1$ to $v_2$ in $T_0$. Therefore adding $e_1$ to this path will produce a cycle. Let $T_1$ denote the subgraph consisting of $T_0$ with $e_1$.
Now think about adding in $e_2$. You'll see that by the same argument we will produce at least one new cycle, and the resulting graph $T_2$ will have at least 2 cycles. Continuing in this way we see that $G$ itself must contain at least $k$ cycles, but $k = m -(n-1) = m-n+1$ as desired.
For part (b), part (a) tells you that you have at least $(m-n+1)$ cycles. To form a spanning tree, you essentially just have to remove an edge from each of those cycles. Think about how many ways you can do that (remember a bipartite graph has no odd cycles) and you should get your answer. |
Contour Integral parametrization | Just using the definition:
$$\int_{\gamma} \frac{1}{z} dz=\int_{\frac{-\pi}{2}}^\frac{\pi}{2} f(\gamma(\theta))\gamma'(\theta)d\theta=\int_{\frac{-\pi}{2}}^\frac{\pi}{2}\frac{ie^{i\theta}}{e^{i\theta}} d\theta=\int_{\frac{-\pi}{2}}^\frac{\pi}{2}i d\theta=i\pi$$ |
Geometric progression of an $n$th root | The question is to determine the asymptotic behaviour of a sequence $(x_n)_{n\geqslant0}$ defined by $x_0\gt0$ and $x_{n+1}=u(x_n)$ for every $n\geqslant0$, where $u:x\mapsto a^{1/x}$ for some $a\gt1$.
The function $u$ is decreasing from $u(0^+)=+\infty$ to $u(+\infty)=1$. The function $v=u\circ u$ is increasing from $v(0)=1$ to $v(+\infty)=a$ hence $1\leqslant x_n\leqslant a$ for every $n\geqslant2$, for every $x_0\gt0$. Furthermore $(x_{2n})_{n\geqslant0}$ and $(x_{2n+1})_{n\geqslant0}$ are monotone hence both these sequences converge. If their limits $\ell$ and $\ell'$ coincide, then $\ell=\ell'$ is a fixed point of $u$ (note that $u$ always has a fixed point). Otherwise, $\ell'=u(\ell)$ for some fixed point $\ell$ of $v$ not a fixed point of $u$.
When $a=16$, both cases occur, namely $2.74537$ is a fixed point of $u$ and $(2,4)$ is a $2$-cycle. When $a=2$ for example, the $2$-cycle does not occur hence $x_n\to\ell=1.55961$. Likewise, when $a=4$, $x_n\to\ell=2$. But when $a=20$, the fixed point is $2.85531$ and the $2$-cycle is $(1.50907,7.28017)$.
One can show that $u$ and $v$ have the same unique fixed point when $a\leqslant a^*$ and that $v$ has two distinct fixed points additionally to the fixed point of $u$ when $a\gt a^*$, where $a^*=\mathrm e^\mathrm e=15.15426$ (and for $a=a^*$ the fixed point is $\mathrm e=2.71828$). |
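The two regimes are easy to observe numerically; the starting point and iteration counts below are arbitrary:

def orbit_tail(a, x0=1.0, burn=2000, length=4):
    # iterate x_{n+1} = a**(1/x_n) and return a few consecutive late iterates
    x = x0
    tail = []
    for n in range(burn + length):
        x = a**(1.0 / x)
        if n >= burn:
            tail.append(round(x, 5))
    return tail

for a in (2.0, 4.0, 16.0, 20.0):
    print(a, orbit_tail(a))
# a = 2 and a = 4 settle at the fixed points 1.55961 and 2, while a = 16 and
# a = 20 keep alternating on the 2-cycles (2, 4) and (1.50907, 7.28017).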
Expected value of a lognormal distribution | Standard method to find expectation(s) of lognormal random variable.
1)
Determine the MGF of $U$ where $U$ has standard normal distribution.
This comes to finding the integral:$$M_U(t)=\mathbb Ee^{tU}=\frac1{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{tu}e^{-\frac12u^2}du=e^{\frac12t^2}$$
2)
If $Y$ has lognormal distribution with parameters $\mu$ and $\sigma$ then it has the same distribution as $e^{\mu+\sigma U}$ so that: $$\mathbb EY^{\alpha}=\mathbb Ee^{{\alpha}\mu+{\alpha}\sigma U}=e^{{\alpha}\mu}\mathbb Ee^{{\alpha}\sigma U}=e^{{\alpha}\mu}M_U({\alpha}\sigma)=e^{{\alpha}\mu+\frac12{\alpha}^2\sigma^2}$$ |
Find the value of $\lim_{x\to 2}[(1+{5^{(x-2)}}^{-1})^{-1}]$ | I don't know where you're getting the $\frac12$ in your computations.
$$\lim_{x\to2+}\left\lfloor\frac1{1+5^{1/(x-2)}}\right\rfloor=\lim_{x\to\infty}\left\lfloor\frac1{1+5^x}\right\rfloor=0$$
$$\lim_{x\to2-}\left\lfloor\frac1{1+5^{1/(x-2)}}\right\rfloor=\lim_{x\to\infty}\left\lfloor\frac1{1+5^{-x}}\right\rfloor=0$$
I suspect that the error the author made was in the second calculation, putting $\lim_{x\to\infty}5^{-x}=0$ and getting the answer $1$. But when $x$ gets large and $5^{-x}$ small, we still have $5^{-x}>0$, so the denominator is greater than $1$, the fraction is less than $1$, and the floor function gives $0$. There isn't some kind of "jump discontinuity at $\infty.$" |
Proving $D(G) ≤ w(1 + (t − 1) \log w) $ | Here $G$ is the $t$-th Cartesian power of $C_w$, so $|G|=w^t$, $\exp (G)=w$, therefore
$$\exp(G)(1 + \log(|G|/\exp(G))) = w (1+\log (w^{t-1}))= w (1+(t-1)\log w)\le tw\log w,$$
for $w>2$, because in this case $t\log w\ge\log w\ge 1$. |
Independence of complement Independence events | Hint Use induction. Another hint for the basis ($n=2$):
Prove that the following theorem holds
Let $E_1,E_2$ be independent. Then $E_1$ and $E_2^c$ are independent.
Applying this theorem you can easily prove the independence of $E_1^c$, $E_2^c$
(If you are done with $n=2$, I can add some more hints...) |
Let $F$ a separable space and $T: E \to F$ a linear isometry. Is $E$ a separable space? | Hint: you can assume without loss of generality that $E \subseteq F$. As $F$ is separable, we can assume, as you did, that $F$ has a dense subset $\{y_0, y_1, \ldots\}$. Given $m, n \in \Bbb{N}$, if there is an $x \in E$ such that $\|x - y_n\| < 1/(m+1)$, define $x_{mn}$ to be some such $x$, otherwise define $x_{mn}$ to be $0$. Now show that the $x_{mn}$ form a countable dense subset of $E$. |
Find the solution space over $\mathbb{R}$ when $|x|=\max\{-2x+1,(x/2)+1\}$ | First, you can start to consider the maximum.
$$
-2x+1\leq (x/2)+1 \Leftrightarrow -2x\leq x/2 \Leftrightarrow x\geq \ldots
$$
Next, you consider the case $x\geq \ldots$ and $x<\ldots$ because for this cases you know what the maximum is.
Further, you have to get rid of $|x|$. The natural choice is to consider the cases $x\leq 0$ and $x>0$, so that you know the value of $|x|$.
Now you have two, three or four cases, each with a simple equation that you should be able to solve. |
Algebraic Riccati Inequality Solution via LMI | Your question is incorrect to begin with, as the continuous-time ARE is
$\quad A^TP + P A - PBR^{-1}B^TP + Q = 0$
Hence, it is in just as bad a form.
The LMI formulations of LQ in both discrete-time and continuous-time would typically be done in both the Riccati matrix and the feedback matrix.
Find minimum-trace $P$ such that
$ (A-BK)^T P (A-BK) - P + K^TRK + Q \preceq 0$
Multiply with $P^{-1}$ from left and right, define $Y = P^{-1}$ and $F = KP^{-1}$, and apply suitable Schur complements to arrive at linear form. |
Minimize euclidean distance in relation to horizontal distance for a series of points by only adjusting y axis | It sounds like you want to minimize the sum of Euclidean distances
$$\sum_{i=1}^{n-1} \sqrt{(x_i-x_{i+1})^2+(y_i'-y_{i+1}')^2}$$
subject to
$y_i' \in [y_i,y_i+2]$ for all $i\in\{1,\dots,n\}$.
The initial sum of distances is $88.097$, and by perturbing $y$ to
[22, 26, 28, 25, 22, 20, 22, 32, 30, 28, 22, 24]
you can reduce the sum of distances to $82.398$.
By request, here is the SAS code:
data indata;
input x y;
datalines;
10 20
20 24
25 28
30 24
35 20
40 18
45 20
50 32
55 30
60 28
70 20
80 24
;
proc optmodel;
set POINTS;
num x {POINTS};
num y {POINTS};
read data indata into POINTS=[_N_] x y;
/* C[i] is the upward adjustment of point i, limited to [0,2] */
var C {POINTS} >= 0 <= 2;
impvar Yprime {i in POINTS} = y[i] + C[i];
/* minimize the total Euclidean length of the path through the adjusted points */
min Z = sum {i in 1..card(POINTS)-1} sqrt((x[i]-x[i+1])^2+(Yprime[i]-Yprime[i+1])^2);
solve;
create data outdata from [i] x y C Yprime;
quit;
proc sgplot data=outdata noautolegend;
scatter x=x y=y;
vector x=x y=Yprime / xorigin=x yorigin=y lineattrs=(color=red);
run; |
Proving the existence of consecutive quadratic residues modulo $p>5$ by means of Pell's equation | That's a lovely idea, but there is an issue. Given a Pell equation:
$$ x^2-dy^2 = 1$$
its solutions $(x_0,y_0)=(1,0),(x_1,y_1),\ldots $ may be arranged in such a way that both the sequences $\{x_n\}_{n\geq 0}$ and $\{y_n\}_{n\geq 0}$ are Fibonacci-like sequences with the same characteristic polynomial, whose coefficients depend on the fundamental solution $(x_1,y_1)$. Any Fibonacci-like sequence is periodic $\pmod{p}$, hence there is for sure some $x_n\neq 1$ such that $p\nmid x_n$, since $x_0=1$. However,
$$ y_n = \frac{(x_1+y_1\sqrt{d})^n-(x_1-y_1\sqrt{d})^n}{2\sqrt{d}} $$
is always a multiple of $y_1$, hence if $p\mid y_1$, there is no way to find some $y_n$ such that $y_n\not\equiv 0\pmod{p}$. |
Diophantine equation: $7^x=3^y-2$ | Here is a bit ugly solution to this. First, note that $(x,y) = (1,2)$ is a solution. Now suppose $x \geq 2$. We have,
$$
3^y \equiv 2 \pmod{49}.
$$
Now, using Euler's theorem, we deduce
$$
3^{\phi(49)}\equiv 1 \pmod{49} \implies 3^{42}\equiv 1 \pmod{49}.
$$
Now, we will observe that $3^{21}\equiv (-1)\pmod{49}$, since $3^{21} \equiv (3^{5})^4 \times 3\equiv -1\pmod{49}$.Now with some little algebra we obtain that $3^{26} \equiv 2 \pmod{49}$. Hence,
$$
y\equiv 26\pmod{42}.
$$
Now, we will turn our attention to modulo $43$. With similar tricks again, we observe that
$$
3^{26}\equiv 15 \pmod{43} \implies 7^x \equiv 13 \pmod{43}.
$$
But the values $7^x$ can take modulo $43$ are given by:
$$
7,6,-1,-7,-6,1
$$
and repeats itself. Hence, $7^x \equiv 13 \pmod{43}$ is not satisfied.
Thus the equation does not have any solution other than the aforementioned one. |
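The modular computations are easy to confirm in Python, together with a small brute-force search over a range of exponents (the range is my choice):

# 3^26 mod 49 and mod 43, and the full cycle of powers of 7 mod 43 (13 never occurs).
print(pow(3, 26, 49), pow(3, 26, 43))                 # 2 15
print([pow(7, k, 43) for k in range(1, 7)])           # [7, 6, 42, 36, 37, 1]
print([(x, y) for x in range(1, 40) for y in range(1, 60) if 7**x == 3**y - 2])   # [(1, 2)]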
Finding an inner product for an orthonormal basis | Start by finding three vectors, each of which is orthogonal to two of the given basis vectors and then try and find a matrix $A$ which transforms each basis vector into the vector you've found orthogonal to the other two. This matrix gives you the inner product.
I would first work out the matrix representation $A'$ of the inner product with respect to the given basis, because then the columns of $A'$ are just the vectors that you've already found (since you want to transform each basis element into each of these vectors). It is important that you express the vectors you've found in terms of the basis given, and not the standard basis for this part.
Once you've done this, we can relate $A$ and $A'$ by $A = P^{\dagger}A'P$ where $P$ is the change of basis matrix from the given basis to the standard basis. |
What is the dimension of the "Julia set" generated by inverse iteration and why do I get numbers different from Hausdorff dimension | Lutz's comment on the density obtained via simple inverse iteration is certainly correct. There are, though, ways to modify the inverse iteration algorithm to obtain a more uniform distribution. You can find a description of these techniques with an implementation using Mathematica in this paper. The main algorithm described in that paper is built into Mathematica as of version 10 and can be accessed via a command like
JuliaSetPlot[z^2 - 1, z, ColorFunction -> None, PlotStyle -> Black]
The modified inverse iteration algorithm is also implemented in this web app, which I used to generate this image:
Performing box-counting on that image, I compute a fractal dimension of about 1.367. |
Jacobian in Levenberg-Marquardt for 4-Parameter equation | I realise now that I have made a basic mathematical error with this. I now have a working Levenberg-Marquardt algorithm with the following Jacobian format:
$$\begin{bmatrix} \frac{\partial x_1}{\partial p_1} & \frac{\partial x_1}{\partial p_2} & \frac{\partial x_1}{\partial p_3} & \frac{\partial x_1}{\partial p_4} \\ \frac{\partial x_2}{\partial p_1} & \frac{\partial x_2}{\partial p_2} & \frac{\partial x_2}{\partial p_3} & \frac{\partial x_2}{\partial p_4} \\ \vdots & \vdots & \vdots & \vdots \\ \frac{\partial x_{10}}{\partial p_1} & \frac{\partial x_{10}}{\partial p_2} & \frac{\partial x_{10}}{\partial p_3} & \frac{\partial x_{10}}{\partial p_4} \end{bmatrix}$$
As this algorithm uses the Jacobian of residuals, and the residuals would take the form:
$$ x - p_2 * [\frac{2p_1 + p_4 - y}{y - p_1}]^{(1/p_3)} $$
Differentiating with respect to the parameters causes the $x$ to "disappear", as it is a constant.
So the entry on the first row and second column would be:
$$ \frac{\partial}{\partial p_2} = - [\frac{2p_1 + p_4 - y}{y_1 - p_1}]^{(1/p_3)}$$
My confusion stemmed from the fact that the Jacobian is referred to using many names in texts such as the negative Gradient, change in residuals and determinant of the negative Gradient. |
Why does the definition of orthogonality use a weighting function? | If $f(x) = \sum a_i e_i(x)$ and the $e_i(x)$ are orthonormal, within the appropriate restrictions, then, for all $i$, $a_i = \langle f, e_i \rangle$.
In that sense, weighting functions allow you to more fully abstract the concept of orthogonality, making it possible to exploit the above property more often. |
Ackermann's valuation of well formed formulas | It says:
A wff $(x)A(x)$ has the value T by an assignment for the free variables, if $A(x)$ has the value T by the same assignment from the free variables different from $x$ and an arbitrary assignment for the variable $x$.
It means: consider a variable assignment $s : \text {Var} \to D$, where $D$ is the domain of the interpretation.
And consider a new assignment $s'$ such that $s'(y)=s(y)$ if $y$ is free in $A(x)$ and $y \ne x$, while $s'(x)$ is arbitrary.
If $A(x)$ has value T for every such $s'$, then $(x)A(x)$ has value T for $s$.
Intuitively, $(x)A(x)$ is true if $A(x)$ is true for every value that we can assign to $x$.
Regarding your example:
$(x)(Rx∧y)$ can't be F if $y$ is T ?
we have to consider that $y$ is a variable; thus, an assignment $s$ assigns to $y$ an "object" of the domain, and not a truth value.
About:
Is he asserting that all wff with no (free) variables evaluate to F ?
Of course no. A sentence (i.e. a wff with no free variables) is evaluated to T (F) by an assignment $s$ iff every assignment evaluates it to T (F).
Consider e.g. the formula $(x)(x=0)$ and interpret it in the domain $\mathbb N$ of natural numbers.
Consider $s(x)=0$; by the above definition, it is not true that $s$ satisfies the formula in $\mathbb N$, i.e. $\mathbb N, s \nvDash (x)(x=0)$.
This is so because if we consider e.g. $s'$ such that $s'(x)=1$, we have that $(x)(x=0)$ is not satisfied by $s'$ (we have that: $0 \ne 1$) and thus it is not true that every assignment that differs from $s$ only on the value assigned to $x$ satisfies the formula. |
Provably unprovable statement? | It turns out, perhaps surprisingly, that the answer to your question is no!
Note that - for an appropriate theory $T$ - if $T$ proves "$T$ doesn't prove $\varphi$" for some sentence $\varphi$, then $T$ proves "$T$ is consistent" (since $T$ proves that inconsistent theories prove everything). So by Godel's theorem, $T$ is inconsistent.
How do we reconcile this with the fact that there are statements $T$ clearly knows are false? Well, the point is that $T$ - in order to be consistent - can't "trust itself" too much. While $T$ proves (say) $\neg(0=1)$, and $T$ proves "$T$ proves $\neg(0=1)$," $T$ does not prove "$T$ doesn't prove $0=1$." As far as $T$ knows, $T$ might be inconsistent.
Statements of the form "Provable things are true" are called soundness principles. Soundness principles are related to consistency principles, but soundness is much stronger than mere consistency (e.g. PA+$\neg$Con(PA) is consistent but not sound). In fact, Lob's theorem shows that in a precise sense a reasonable theory $T$ won't prove any nontrivial amount of soundness about itself.
It's worth noting that these foundational theorems (Lob and Godel) have some subtle limitations, including:
Depending how we talk about provability, Lob's theorem can break down.
When we pass from axiomatizable theories to definable theories, Godel's theorem can break down. For example, there is a formula $\varphi$ defining a theory such that (1) PA proves that the theory defined by $\varphi$ is consistent but (2) the theory defined by $\varphi$ is in fact PA itself (the resolution of this apparent paradox being of course that PA can't prove point (2)); see here for details. |
$f'(x)=0$ implies $f$ constant, although finite or countable exceptions | The problem here is a common one in that the formulation of the result can be at odds with everyday usage of the language.
The only examples of such $f$ are in fact constant functions, which have $f'(x)= 0$ for all $x$.
What does
[...] $f'(x)=0$ for every $x \in \mathbb{R}$ except for a finite set $E$ or a countable set $E$, [...]
mean to say precisely?
It means to say that there is a set $E$ that is finite or countable such that $f'(x)= 0$ for all $x \notin E$.
Note that:
$E$ can be empty.
It is not uncommon that the author does not insist that $f'(x) \neq 0$ for $x \in E$.
Informally, the result says if for continuous $f$ you know $f'(x) = 0$ for all $x$ except possibly a few exceptions, then you can conclude that $f$ is constant (and after that you'd know in fact $f'(x) = 0$ for all $x$).
In particular, this result tells you that there cannot be any interesting examples: by the very result every example has to be constant! |
equation $x^2+ax+6a = 0$ has integer roots, Then integer values of $a$ is | You've done nothing wrong so far. And there's not "and many more", for we may assume wlog. that $\alpha\ge \beta$ and then your enumeration is complete at least for the cases with $\alpha+6,\beta+6\ge 0$. The same factorings with negative numbers are of course possible as well. All in all you get that $$\begin{align}a=-\alpha-\beta&\in\{0+0,-3+2,-12+4,-30+5, 15+10, 24+8, 42+7\}\\&=\{0,-1,-8,-25,25,32,49\}\end{align}$$ and in fact all these are correct solutions.
EDIT: Oops, your "and many more" indeed was justified - slightly - as you had left out the factorizations $12\times 3$ and $(-6)\times(-6)$. These add $-6+3=-3$, $18+9=27$ and $12+12=24$ (a double root at $x=-12$, just like the double root behind $a=0$) to the list of solutions. |
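A brute-force check over a range that contains all candidates (the discriminant $a^2-24a=(a-12)^2-144$ can only be a perfect square for finitely many $a$):

import math

# x^2 + a*x + 6a has integer roots iff a^2 - 24a is a perfect square
# (the parity of -a +/- sqrt(disc) then works out automatically).
found = [a for a in range(-100, 100)
         if a * a - 24 * a >= 0 and math.isqrt(a * a - 24 * a)**2 == a * a - 24 * a]
print(found)   # [-25, -8, -3, -1, 0, 24, 25, 27, 32, 49]; a = 24 is the double root x = -12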
Matrix Vector Multiplications | I'm not sure I know what you mean. But let me try to interpret the best way I can:
It depends on the matrix. It's perfectly possible to get two equal components, if the matrix is just right. You can always find a rotation (orthogonal matrix) that rotates a chosen vector to any direction you want.
Components depend on the choice of your coordinate system anyway; having them equal has no mathematical meaning at all. You can even choose two vectors and rotate in such a way that the second one is taken to the first one (and you can choose vectors with equal or different components, whatever you want). Just like grabbing a stick and rotating it in the direction you want. No surprise here - a rotation matrix rotates stuff. |
Is the Alternating group $A_5$ isomorphic to the external direct product $A_4 \oplus \mathbb{Z}_5$? | Without solvability/simplicity, one can use the center. The center of $A_5$ is the trivial group $\{1\}$ while the center of $A_4\times C_5$ is at least $C_5$. |
Applications of the Formal Laurent Lattice | It is the group algebra of the group $\mathbb Z^n$, so it shows up in lots of places. One place where you see it a lot is in toric geometry.
What exactly do you want to know? |
If $A^{4}=I$, does this imply $A$ is invertible? | The equation $A^4=I$ says precisely that $A^{-1}=A^3$. |
Volume of revolution - shell method | This picture helps. So the pond is between $y=2-(\cos{x})^2$ and $y=2-(\cos{\pi/4})^2=\frac{3}{2}$.
Then the volume should be
$$2\pi \int ^{\pi/4}_{-\pi/4} (1-x)(\frac{3}{2}-y)dx=2\pi \int ^{\pi/4}_{-\pi/4} (1-x)(-\frac{1}{2}+(\cos{x})^2)dx\\=2\pi \int ^{\pi/4}_{-\pi/4} (1-x)(-\frac{1}{2}+\frac{\cos{2x}+1}{2})dx=2\pi \int ^{\pi/4}_{-\pi/4} (1-x)\cos{2x}dx$$ |
Irreducible Polynomial Field Extensions with Root $\cos \frac{2\pi}{7}$ | Using Chebyshev polynomials, we can write $\cos 4\theta − \cos 3\theta =0$ as $T_4(\cos \theta) - T_3(\cos \theta)=0$.
Now,
$$
T_4(x)- T_3(x) = (8x^4-8x^2+1) - (4x^3-3x) = (x - 1) (8 x^3 + 4 x^2 - 4 x - 1)
$$
Finally, $8 x^3 + 4 x^2 - 4 x - 1$ is irreducible over $\mathbb Q$ because it has no rational root. (Check!) Alternatively, it has no root mod $3$. |
Homology of tori and the Universal Coefficient Theorem | This is all correct, but it is conceptually more complicated than it needs to be to compute the first homology group of the $n$-torus. The $n$-torus is a product of $n$ circles, and therefore has a cell structure consisting of one vertex, $n$ edges, etc., with $\displaystyle\binom{n}{k}$ cells in dimension $k$. (This is the $n$-fold product of the cell structure on the circle that has one vertex and one edge.) Each of these cells has trivial boundary, so we get the following chain complex:
$$
\mathbb{Z}_2 \;\;\xrightarrow{\;\;0\;\;}\;\; \cdots \;\;\xrightarrow{\;\;0\;\;}\;\; \mathbb{Z}_2^\binom{n}{2} \;\;\xrightarrow{\;\;0\;\;}\;\; \mathbb{Z}_2^n \;\;\xrightarrow{\;\;0\;\;}\;\; \mathbb{Z}_2
$$
Therefore, $H_1(\mathbb{T}^n,\mathbb{Z}_2) \cong \mathbb{Z}_2^n$, and more generally $H_k(\mathbb{T}^n,A) \cong A^\binom{n}{k}$ for any abelian group $A$. |
On $\Bbb R^2$, Are unit circle centred at the origin and the origin homotopic equivalent? | In $\mathbb{R}^2$, every curve is homotopic to the origin, since you can smoothly deform any curve to a point curve - there are no "snags" in the space that stop you from doing this. |
If $f+g$ is measurable, are $f$ and $g$ measurable? | Hint
Question 1 : Let $\mathcal N$ be a non-measurable set. What do you think about $$\boldsymbol 1_{\mathcal N}+\boldsymbol 1_{\mathcal N^c} \ \ ?$$
Question 2 : $$f=(f+g)+(-g).$$ |
Understanding the Argument Principle in Conway. | Question 1. This is a generally useful result which you should remember for the future: Let $D\subseteq\mathbb C$ a domain on which every holomorphic function has an antiderivative (for instance, convex domains, star domains, or most generally, simply connected domains) and let $f(z)\neq0$ for all $z\in D$. Then there exists a holomorphic function $L:D\to\mathbb C$ where $f(z)=\exp(L(z))$. $L$ is called a logarithm of $f$.
proof: Since $f$ has no zeros, $\frac{f'}{f}$ (called the logarithmic derivative, since $\frac{\mathrm d}{\mathrm dz}\operatorname{Log}(f)=\frac{f'}{f}$ if a suitable Logarithm exists) is holomorphic on $D$, so it has an antiderivative $F$ by assumption. We will show that $\exp F$ is equal to $f$ up to a multiplicative constant. Consider the derivative of $\frac{\exp F}{f}$ (this is holomorphic on $D$, since $f$ has no zeros). Its derivative is
$$\frac{f\frac{f'}{f}\exp F~-~f'\exp F}{f^2}=\frac{f'\exp F~-~f'\exp F}{f^2}=0,$$
so the function itself is a nonzero constant $c$:
$$\frac{\exp F}{f}=c.$$
Nonzero because $\exp$ is never $0$. Solving for $f$ we get
$$f=\exp(F-\log c),$$
where $\log c$ is some logarithm of the number $c$. And thus $F(z)-\log c$ is the function $L$ we were looking for.
Question 2. Depends. Some authors define it to be not just continuous, but also holomorphic. But on domains this is equivalent, since a continuous inverse of a holomorphic function whose derivative is never $0$ (like $\exp$) is also holomorphic.
Question 3. This is the so-called chordal metric, and the formulas are correct. We interpret $\bar{\mathbb C}$ as the Riemann sphere, which we can embed into $\mathbb R^3$ as the unit sphere, and then take the Euclidean distance between two points on this sphere. That is, we take two points on the sphere, draw a chord from one point to the other (the chord goes through the interior of the sphere, not along its surface!), and measure its length. You can convince yourself by plugging in some values: The poles $0$ and $\infty$ should be $2$ apart, since that's the sphere's diameter. You'll find that this is true. The same goes for any two points on opposite sides, like $1,-1$ and $\mathrm i,-\mathrm i$. Similarly, $0,1$ should be $\sqrt2$ apart due to the Pythagorean theorem, which is also true.
You can also derive the metric rigorously via the stereographic projection. |
For the $3\times 3$ matrix $A$ with eigenvalues $x_1$, $x_2$, $x_3$ find idempotents matrix $E_1,E_2.E_3$ so that $A=x_1E_1+x_2E_2+x_3E_3$ . | For each eigenvalue, find an eigenvector, call it $w,$ as a column vector. Next, define column vector $v = w /|w|,$ which is now a unit length vector. You should confirm unit length.
Now, if you write $v^T v$ you just get the one by one matrix with single entry $1;$ usually people just say $v^T v = 1.$
Instead, the idempotent matrix for that eigenvalue is
$$ E = v v^T $$
This is a symmetric matrix of rank one, positive semi-definite but not definite. It has trace exactly $1.$ Finally,
$E^2 = v v^T v v^T = v (v^T v) v^T = v (1) v^T = v v^T = E$ |
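A numerical illustration with a made-up symmetric $3\times 3$ matrix, where orthonormal eigenvectors make $E=vv^T$ work exactly as described (for a general diagonalizable matrix one would use left and right eigenvectors instead):

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
eigvals, eigvecs = np.linalg.eigh(A)          # columns are unit eigenvectors
E = [np.outer(v, v) for v in eigvecs.T]       # E_i = v v^T, rank one, trace 1
assert all(np.allclose(Ei @ Ei, Ei) for Ei in E)                    # idempotent
assert np.allclose(sum(x * Ei for x, Ei in zip(eigvals, E)), A)     # A = sum x_i E_i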
Prove that number $\underbrace{11 \ldots1}_{100} \underbrace{22 \ldots2}_{100}$ is product of two consecutive numbers | You are not done because the two factors you exhibited are not consecutive. For three digits, you have shown $111222=1002\cdot 111$. You can, however, make one more step and be there. lulu has given a good hint. |
$\frac{1}{a} = \frac{1}{b} + \frac{1}{c} - \frac{1}{abc}$ and $a^2 + b^2 = c^2$ | (This is not a solution. I made an error.)
Show that (If you get stuck, show your work.)
$(a, b, c)$ is a primitive pythagorean triplet.
If $(a, b, c ) = (m^2 - n^2, 2mn, m^2+n^2)$ with $m > n > 0$, then there are no solutions.
Substituting in, we get $n^4 = m^4 - 4mn^3$.
Viewing this as a quartic in $ x = \frac{n}{m}$, we get $x^4 + 4x^3 - 1 = 0$, which has no rational solutions.
Hence, there are no integer solutions to $(m, n)$.
If $(a, b, c) = ( 2mn, m^2 - n^2, m^2 + n^2)$ with $m > n > 0$, then the only solution is $(m, n) = (4, 1) $ which leads to $ (a, b, c) = (8, 15, 17)$.
(Hm, I thought I had a solution to this part, but it turns out that I made an error.)
Substituting in, we have $ m^4 - 4m^3 n= n^4 - 1$.
If $ n = 1$, we have $ m = 4n = 4$.
Otherwise, writing it as $ m^3 (m-4n) = n^4 - 1 > 0 \Rightarrow m -1 \geq 4n$. |
Is there a name for those commutative monoids in which the divisibility order is antisymmetric? | At least two different terms are used in the literature for a commutative monoid in which division is a partial order: holoid and naturally partially ordered. Another possibility would be $\mathcal{H}$-trivial since a commutative semigroup has the required property if and only if the Green's relation $\mathcal{H}$ is the equality in this monoid.
See Grillet's book Commutative Semigroups (2001), pages 120 and 201.
I was able to trace back the term "holoid" as early as 1942, but it might have been introduced long before. |
Showing a Functor is not Representable | Some Motivation
Recall that a (covariant) functor $F \colon \mathscr{C} \to \mathsf{Set}$ is representable by an object $A$ of $\mathscr{C}$ if and only if there exists a universal element $a \in F(A)$.
This means that there exists for every other object $X$ of $\mathscr{C}$ and every element $x \in F(X)$ a unique morphism $f \colon A \to X$ in $\mathscr{C}$ with $F(f)(a) = x$.
The natural bijection $\operatorname{Hom}_{\mathscr{C}}(A, -) \to F$ is then given by $f \mapsto F(f)(a)$.
A useful example to keep in mind in the functor $F \colon \mathsf{Ring} \to \mathsf{Set}$ given by $F(R) = R^n$.
This functor is represented by the ring $A = \mathbb{Z}\langle t_1, \dotsc, t_n \rangle$, the (non-commutative) polynomial ring in the variables $t_1, \dotsc, t_n$.
The (usual) universal element $a \in F(A)$ is given by $a = (t_1, \dotsc, t_n)$.
And indeed, that this is a universal element means precisely that there exists for every other ring $R$ and every element $x \in F(R)$ with $x = (x_1, \dotsc, x_n)$ a unique ring homomorphism $f \colon A \to R$ with $F(f)(a) = x$, i.e. with $f(t_i) = x_i$ for every $i = 1, \dotsc, n$.
And this is precisely how the ring $A$ represents the functor $F$.
The Problem
We show more generally that for every number of elements $n \geq 2$ the functor
\begin{align*}
F
\colon
\mathsf{Ring}
&\to
\mathsf{Set},
\\
R
&\mapsto
\{
(x_1, \dotsc, x_n) \in R^n
\mid
x_1 R + \dotsb + x_n R = R
\}
\end{align*}
is not representable.
Assume otherwise that the functor $F$ is representable by a ring $A$ and let $(a_1, \dotsc, a_n) \in F(A)$ be the universal element (corresponding to some choice of isomorphism $F \cong \operatorname{Hom}_{\mathsf{Ring}}(A,-)$).
We will consider some auxiliary functors which are representable:
We can consider for every index $i = 1, \dotsc, n$ the functor
\begin{align*}
E_i
\colon
\mathsf{Ring}
&\to
\mathsf{Set},
\\
R
&\mapsto
\{
(x_1, \dotsc, x_n) \in R^n
\mid
\text{$x_i$ is a unit in $R$}
\} \,.
\end{align*}
This functor is representable by the ring $U_i := \mathbb{Z}\langle t_1, \dotsc, t_n, t_i^{-1} \rangle$.
We can also consider for any two indices $i$, $j$ with $1 \leq i \neq j \leq n$ the functor
\begin{align*}
E_{ij}
\colon
\mathsf{Ring}
&\to
\mathsf{Set},
\\
R
&\mapsto
\{
(x_1, \dotsc, x_n) \in R^n
\mid
\text{$x_i$ and $x_j$ are units in $R$}
\} \,.
\end{align*}
This functor is representable by the ring $U_{ij} := \mathbb{Z}\langle t_1, \dotsc, t_n, t_i^{-1}, t_j^{-1} \rangle$.
We fix for the rest of this argumentation two such indices $i$, $j$.
(This is where we use that $n \geq 2$.)
We have inclusions of functors $E_{ij} \subseteq E_i \subseteq F$ and $E_{ij} \subseteq E_j \subseteq F$.
These inclusions correspond to ring homomorphisms between their representing objects:
The inclusion $E_{ij} \subseteq E_i$ corresponds to the canonical ring homomorphism $U_i \to U_{ij}$, and the inclusion $E_i \subseteq F$ corresponds to the ring homomorphism $f_i \colon A \to U_i$ with $f_i(a_k) = t_k$ for every index $k = 1, \dotsc, n$.
(Such a homomorphism exists because it is the unique homomorphism $f \colon A \to U_i$ with $F(f)( (a_1, \dotsc, a_n) ) = (t_1, \dotsc, t_n)$.
So here we use that $(a_1, \dotsc, a_n)$ is a universal element and that $(t_1, \dotsc, t_n)$ is an element of $F(U_i)$.)
Similar for $U_j$ instead of $U_i$.
The commutativity of the above diagram gives (by the faithfulness of the Yoneda embedding) the commutativity of the corresponding diagram:
This means that $f_i(a) = f_j(a)$ for every element $a \in A$.
It follows that $f_i$ and $f_j$ restrict to the same ring homomorphism $f \colon A \to U_i \cap U_j$, with the intersection $U_i \cap U_j$ taken in $U_{ij}$.
This intersection is precisely the usual polynomial ring $U := \mathbb{Z}\langle t_1, \dotsc, t_n \rangle$.
We have seen in the above explicit description of the homomorphism $f_i$ (and $f_j$) that the homomorphism $f \colon A \to U$ is given by $f(a_k) = t_k$ for every index $k = 1, \dotsc, n$.
The existence of such a homomorphism means that $(t_1, \dotsc, t_n) \in F(U)$, because the ring $A$ represents the functor $F$ via the universal element $(a_1, \dotsc, a_n) \in F(A)$.
But this would mean that $U = t_1 U + \dotsb + t_n U$, which is not the case.
We hence see that such a representing object $A$ cannot exist. |
Monotone Class Theorem Application | Ok let's go but I will only outline the solution (using theroem 2 of planetmath.org) :
Take $\mathcal{K}$ as of processes of the form $\sum_{i=1}^n C_i.1[({s_i},{t_i}]\times A_{s_i}]$ where $0\leq {s_i}<{t_i}$, $s_i$ are increasing and $A_{s_i}\in\mathcal{F}_{s_i}$ (those processes are bounded and predictable, and $\mathcal{K}$ is stable by multiplication).
Now take $\mathcal{H}$ as the set of processes that are bounded and predictable and which verify (*). So we have $\mathcal{K}\subset \mathcal{H}$ and $\mathcal{H}$ includes constants.
I'll let you check by use on dominated convergence (or monotone convergence) (which both work well even when conditioning with respect to $\mathcal{F}_t$), that the condition in the bullet point of theorem 2 on $\mathcal{H}$ is satisfied.
This allows you to conclude that $\mathcal{H}$ contains all bounded predictable processes generated by the set of function $\mathcal{K}$, let's call it $\sigma(\mathcal{K})^b$, this in turn contains (prove it) all bounded predictable processes and so we are done because this is what we wanted from the beginning.
Best regards |
higher level view of construction of tensor product | In the situation in your diagram, for $\tilde{f}$ to be uniquely determined, we want $f$ to be a bilnear map from $M \times N$ to $P$. This means that it is linear in each factor: $f(m + m',n) = f(m,n) + f(m',n)$, $f(m,n+n') = f(m,n) + f(m,n')$. and $f(rm,n) = rf(m,n) = f(m,nr)$, where $m,m' \in M$, $n,n' \in N$, and $r \in R$. The equalities expressed here are exactly the relations generating $B$, which is exactly what gives the tensor product this universal property. Let's see how we can use this to prove $\tilde{f}$ is uniquely determined.
Existence: Since $R^{M \times N}$ is free, we can define a morphism, call it $\phi$, to $P$ just by specifying where generators go. The natural thing to do is $(m,n) \mapsto f(m,n)$. To show this actually defines a map from $R^{M \times N}/B$, we need to show that $B$ is contained in the kernel. Again, this is a result of $f$ being bilinear: the generators of $B$ look like $(m+m',n) - (m,n) - (m',n)$, $(rm,n) - r(m,n)$, etc. The first of these is, by the definition of our map $\phi$, sent to $f(m+m',n) - f(m,n) - f(m',n)$, but $f$ being bilinear tells us that this is 0. Similarly, the second generator here is sent to $f(rm,n) - rf(m,n)$, and again, $f$ being bilinear tells us that this is zero. Therefore $B$ is in the kernel. Therefore by the universal properties of quotients, we get a unique map $\tilde{f}:R^{M \times N}/B \rightarrow P$ such that $\tilde{f}p = \phi$, where $p$ is the quotient map $R^{M \times N} \rightarrow R^{M \times N}/B$. Note though, that we defined $\phi$ such that $f = \phi \iota$, where $\iota$ is the obvious map $M \times N \rightarrow R^{M \times N}$. Note also that, as you've labelled it, $\pi = p\iota$. Therefore $f = \tilde{f}p\iota = \tilde{f}\pi$, as needed.
Uniqueness: suppose $g$ is another map like $\tilde{f}$ making the diagram commute. Then, for all $(m,n) \in M \times N$, we must have $g(m,n) = g(\pi(m,n)) = f(m,n)$, but this was also the value taken by $\tilde{f}(m,n)$. Since the pairs $(m,n)$ generate $R^{M \times N}$, they generate $R^{M \times N}/B$, so since $\tilde{f}$ and $g$ agree on all these pairs, they agree on the whole module, and are the same map. |
What does the notation $\{ \pm 1 \}^X$ in relation to functions and hypothesis classes means in the context of PAC learning over half spaces? | It seems my suspicion was correct. In $ \mathcal{H} \subset \{ \pm1\}^X$, the notation means:
$$ \{ \pm1\}^X = \{ h \mid h : X \rightarrow \{+1, -1\} \},$$
or more generally (as Rob Arthan suggested):
$$ Y^X = \{ f \mid f : X \rightarrow Y \},$$
i.e. it is shorthand for the set of functions from $X$ to $Y$. |
What value of $b$ makes the graph of $y^2 = x^3-x+b$ self-intersection? | For the curve to be self intersecting, we take the form
$y^2 = (x-p)^2 (x-q) = x^3 - (2p+q)x^2 + (2pq+p^2)x-p^2q \ $. Please note that the curve forms only for $x \geq q$
As $x^2$ term is zero, $2p+q = 0 \implies q = - 2p$.
So, $2pq + p^2 = - 3p^2, -p^2q = 2p^3$
We also note that if $p \lt 0, q \gt 0$ but $x$ must be $\geq q$ so there is no solution (self-intersecting curve) for $p \lt 0$. Also when $p = 0, q = 0$ and so there is no solution for $p = 0$ either.
That leads to
$y^2 = x^3 -3p^2 x +2p^3, \quad p \gt 0.$
Matching with $y^2=x^3-x+b$ gives $-3p^2=-1$, i.e. $p=\frac{1}{\sqrt 3}$, and hence $b=2p^3=\frac{2}{3\sqrt 3}=\frac{2\sqrt 3}{9}$. |
Roots of Complex Polynomial in Disc | Let $|z|>1,$ and by the Cauchy-Schwarz inequality
\begin{align}
|p(z)|&=|z|^n\left|1+\sum_{j=1}^{n}\frac{a_{n-j}}{z^j} \right|\geq |z|^n\left(1-\left|\sum_{j=1}^{n}\frac{a_{n-j}}{z^j}\right| \right)\\&\geq |z|^n\left[1-\left(\sum_{j=1}^{n}|a_{n-j}|^2\right)^{1/2}\left(\sum_{j=1}^{n}\frac{1}{|z|^{2j}} \right)^{1/2}\right]\\&>|z|^n\left[1-\left(\sum_{j=1}^{n}|a_{n-j}|^2\right)^{1/2}\left(\sum_{j=1}^{\infty}\frac{1}{|z|^{2j}} \right)^{1/2}\right]\\&=|z|^n\left[1-\left(\sum_{j=1}^{n}|a_{n-j}|^2\right)^{1/2}\left(\frac{1}{|z|^2-1} \right)^{1/2} \right]
\end{align}
Therefore, $|P(z)|>0$ if $1-\left(\sum_{j=1}^{n}|a_{n-j}|^2\right)^{1/2}\left(\frac{1}{|z|^2-1} \right)^{1/2}\geq 0$ i.e., if $|z|\geq \sqrt{1+|a_{n-1}|^2+|a_{n-2}|^2+\ldots+|a_0|^2}=R.$
Thus, all those zeros whose modulus is greater than 1 lies in $|z|\leq R,$ and those zeros whose modulus is less than or equal to 1 already lie in $|z|\leq R.$ |
How to construct such a function? | You can simply use $h_n=g_{n+1}-g_n$ and then have $f-g_N=\sum_{n\ge N} h_n$. |
What is the difference between weak topology and strong topology? | An affine subspace is $a+Y=\{a+y:y\in Y\}$ where $Y$ is a linear subspace of $X$.
There is exactly one Hausdorff topological vector space structure on a given finite-dimensional real vector space.
A basis for the weak topology consists of finite intersections $V=\bigcap_{j=1}^n l_j^{-1}(U_j)$, with $l_j$ linear functions and $U_j$ open subsets of $\Bbb R$. If nonempty, $V$ contains a subset $a+Y$ where $Y$ is the intersection of the kernels of the $l_j$, that is $Y$ is a vector subspace of finite codimension in $X$. So if $X$ is infinite-dimensional, every weakly open nonempty set contains an $a+Y$ with $Y$ an infinite-dimensional vector subspace. Thus an open ball in the metric topology cannot be weakly open. |
Notation query: matrix projection | I collect three common options, ranked by my own preference.
$G(\mathcal H_2)\subseteq \mathcal H_2$
$G(w) \in \mathcal H_2$ whenever $w\in \mathcal H_2$
$G:\mathcal H_2\to \mathcal H_2$ |
Possibility to simplify $\sum\limits_{k = - \infty }^\infty {\frac{{{{\left( { - 1} \right)}^k}}}{{a + k}} = \frac{\pi }{{\sin \pi a}}} $ | EDIT: The classical demonstration of this is obtained by expanding in Fourier series the function $\cos(zx)$ with $x\in(-\pi,\pi)$.
Let's detail Smirnov's proof (in "Course of Higher Mathematics 2 VI.1 Fourier series") :
$\cos(zx)$ is an even function of $x$ so that the $\sin(kx)$ terms disappear and the Fourier expansion is given by :
$$\cos(zx)=\frac{a_0}2+\sum_{k=1}^{\infty} a_k\cdot \cos(kx),\ \text{with}\ \ a_k=\frac2{\pi} \int_0^{\pi} \cos(zx)\cos(kx) dx$$
Integration is easy and $a_0=\frac2{\pi}\int_0^{\pi} \cos(zx) dx= \frac{2\sin(\pi z)}{\pi z}$ while
$a_k= \frac2{\pi}\int_0^{\pi} \cos(zx) \cos(kx) dx=\frac1{\pi}\left[\frac{\sin((z+k)x)}{z+k}+\frac{\sin((z-k)x)}{z-k}\right]_0^{\pi}=(-1)^k\frac{2z\sin(\pi z)}{\pi(z^2-k^2)}$
so that for $-\pi \le x \le \pi$ :
$$
\cos(zx)=\frac{2z\sin(\pi z)}{\pi}\left[\frac1{2z^2}+\frac{\cos(1x)}{1^2-z^2}-\frac{\cos(2x)}{2^2-z^2}+\frac{\cos(3x)}{3^2-z^2}-\cdots\right]
$$
Setting $x=0$ returns your equality :
$$
\frac1{\sin(\pi z)}=\frac{2z}{\pi}\left[\frac1{2z^2}-\sum_{k=1}^{\infty}\frac{(-1)^k}{k^2-z^2}\right]
$$
while $x=\pi$ returns the $\cot$ formula :
$$
\cot(\pi z)=\frac1{\pi}\left[\frac1{z}-\sum_{k=1}^{\infty}\frac{2z}{k^2-z^2}\right]
$$
(Euler used this one to find closed forms of $\zeta(2n)$)
The $\cot\ $ formula is linked to $\Psi$ via the Reflection formula :
$$\Psi(1-x)-\Psi(x)=\pi\cot(\pi x)$$
The $\sin$ formula is linked to $\Gamma$ via Euler's reflection formula :
$$\Gamma(1-x)\cdot\Gamma(x)=\frac{\pi}{\sin(\pi x)}$$ |
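As a quick sanity check of the first identity: at $z=\tfrac12$ it reads $1=\frac1{\pi}\left[2-\sum_{k=1}^{\infty}\frac{(-1)^k}{k^2-\frac14}\right]$, and writing $\frac1{k^2-\frac14}=\frac1{k-\frac12}-\frac1{k+\frac12}$ the sum splits into two Leibniz-type series $1-\frac13+\frac15-\cdots=\frac{\pi}{4}$ and evaluates to $2-\pi$, so the bracket is indeed $\pi$.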
Cohomology of degree $d$ line bundles on projective curves | For any $g$ one has
$$
H^0(C,L) =
\begin{cases}
k, & \text{if $L \cong \mathcal{O}_C$},\\
0, & \text{otherwise}.
\end{cases}
$$
So, for $d = 0$ the function $f_0$ is zero away from the origin of $Pic^0(C)$.
For arbitrary $d$ the function $f_0$ is upper semi-continuous. The corresponding stratification of $Pic^d(C)$ is not so easy to describe. The subvariety where $f_0 > 0$ is the image of the natural map
$$
\gamma \colon S^d(C) \to Pic^d(C).
$$
The subvariety where $f_0 > 1$ is the critical locus of this map, and more generally, the subvariety where $f_0 \ge i$ coincides with the locus of points over which the dimension of the fiber of $\gamma$ is at least $i$.
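For a concrete illustration: take $g=1$ and $d=1$. By Riemann-Roch every degree $1$ line bundle $L$ on an elliptic curve has $h^0(L)=1$, the map $\gamma \colon S^1(C)=C \to Pic^1(C)$ is an isomorphism (Abel-Jacobi), and the stratification is trivial.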
Lagrange Multipliers - Maxinising volume of a box with corner on origin and plane | The problem has solution if the point $(x,y,z)$ is in the first quadrant. If one allows negative $x,y$, for instance, then one gets arbitrarily high volume (look at the image provided by @Eff). On the other hand the volume of the box is $V(x,y,z)=xyz$. The conditions $x\ge0,y\ge0, z\ge0$ bound a compact region of the plane, hence there is indeed a maximum and a minimum volume. Of course the minimum is $0$ for points with one coordinate $0$, hence we assume $xyz\ne0$ for the sequel. Summing up, we must maximize $V$ subject to $f(x,y,z)=x+2y+3z-6=0$ and Lagrange multipliers say that
$\nabla V=\lambda\nabla f$. Thus we look for a point in the given plane such that the vector
$\nabla V=(yz,xz,xy)$ is proportional to $\nabla f=(1,2,3)$. This means $xz=2yz$ and $xy=3yz$. Since $xyz\ne0$, the two equations simplify to $x=2y$ and $x=3z$, giving the point $(x,x/2,x/3)$. But this point must be in the plane, hence $x+x+x-6=0$ and $x=2$. Thus the maximum volume is $V(2,1,2/3)=4/3$.
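As an independent cross-check: by AM-GM, $x\cdot 2y\cdot 3z\le\left(\frac{x+2y+3z}{3}\right)^3=8$, so $V=xyz\le\frac43$, with equality exactly when $x=2y=3z=2$, i.e. at the point $(2,1,2/3)$ found above.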
Is there an expression for getting number of times to divide an even number by 2 to get an odd number? | Appears
$$ f(i,j) = 2^i \cdot (2j+1) $$ |
Proof of characterization of measurable sets | In fact the given condition does not imply that $f$ is measurable. We're given that $f:X\to[-\infty,\infty]$, so $f$ is measurable if the inverse image of every Borel subset of $[-\infty,\infty]$ is measurable. This does not follow from the given condition; the suggested characterization is false.
Counterexample: Say $E$ is some non-measurable set, and let $F=X\setminus E$. Let $f=\infty\chi_E-\infty\chi_F$. Then $f^{-1}((a,\infty))=\emptyset$ for every $a\in\Bbb R$. But $\{\infty\}$ is a Borel subset of $[-\infty,\infty]$, and $f^{-1}(\{\infty\})=E$. |
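(For the record, the standard way to state the characterization for extended-real-valued $f$ is to require that $f^{-1}((a,\infty])$ be measurable for every $a\in\Bbb R$; the sets $(a,\infty]$ generate the Borel $\sigma$-algebra of $[-\infty,\infty]$, so that condition really is equivalent to measurability.)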
Rate of change of perimeter of segment in a circle. | The chord is $20\sin(\frac{\theta}{2})$, so considering $\theta=\theta(t)$ as function of time the perimeter is:
$$P(t)=10\left(2\sin\left(\frac{\theta(t)}{2}\right)+\theta(t)\right)$$
Obviously $\theta(t)=0.1t=\frac{t}{10}$:
$$P(t)=10\left(2\sin\left(\frac{t}{20}\right)+\frac{t}{10}\right)=20\sin\left(\frac{t}{20}\right)+t$$
The derivative is:
$$P'(t)=\cos\left(\frac{t}{20}\right)+1$$
If $\theta(t)=\frac\pi3$ then $t=\frac{10}{3}\pi$:
$$P'(\frac{10}{3}\pi)=\cos(\frac\pi6)+1=\frac{\sqrt{3}} 2+1$$ |
An injective map into a subset of an infinite set. | an example is the map from naturals to even numbers $f:N \rightarrow A$ as $f(n)=2∗n$. Infact If $X$ is infinite and $A$ is finite subset then there is a bijection from $X-A\rightarrow X$. |
Surface area of the unit ball in $R^{n}$ | Here's a better approach, which you can work out for yourself.
Find a formula for the volume of the ball of radius $1$ in $\Bbb R^n$. This can be done recursively, relating to the ball in $\Bbb R^{n-2}$. Then adjust for the volume $V_n(r)$ of the ball of radius $r$.
Now, the area $A_{n-1}(r)$ of the boundary sphere is, not surprisingly, given by differentiating: $A_{n-1}(r)=V_n'(r)$. So, in other words, $A_{n-1}(1) = V_n'(1) = nV_n(1)$. |
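(As a low-dimensional sanity check: $V_2(r)=\pi r^2$ gives $A_1(r)=2\pi r$, the circumference of a circle, and $V_3(r)=\frac43\pi r^3$ gives $A_2(r)=4\pi r^2$, the surface area of a sphere.)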
Why isn't the volume formula for a cone $\pi r^2h$? | What you need to look at is the second theorem of Pappus. When you rotate things in a circle, you have to account for the difference in the distance traveled by the point furthest from the axis (the bottom corner of our triangle) which goes the full $2\pi$ around. However, the vertical side doesn't actually move, so it doesn't get the full $2\pi$.
Pappus' theorem says you can use the radius of the centroid as your revolution radius, and it just so happens the the centroid of this triangle is a distance $r/3$ from the axis.
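Concretely, for a cone of radius $r$ and height $h$: the revolved triangle has area $\frac12 rh$ and its centroid travels a circle of radius $\frac r3$, so Pappus gives
$$V = 2\pi\cdot\frac r3\cdot\frac{rh}{2} = \frac{\pi r^2 h}{3},$$
which is the usual one-third factor.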
Minimal area ellipse containing a given circumference | The objective is simple:
minimize $ab$
The constraint is trickier.
By inspection we can guess
$b>2, b>a, a>1$
If the circle is inside the ellipse, the two curves do not intersect, so the system
$\frac {x^2}{a^2} + \frac {y^2}{b^2} = 1\\
x^2 +y^2 = 2y$
has no real solutions.
So, we try to find solutions, apply the quadratic formula, and then set the discriminant $\le 0.$
$x^2 + \frac {a^2}{b^2}y^2 = a^2\\
x^2 +y^2 = 2y\\
(1-\frac {a^2}{b^2})y^2 - 2y + a^2 = 0\\
$y = \dfrac{1 \pm \sqrt{1 - a^2\left(1-\frac {a^2}{b^2}\right)}}{1-\frac {a^2}{b^2}}$
$1 - a^2(1-\frac {a^2}{b^2})\le 0$
And, the minimal ellipse will be when the two curves are tangent.
$a^4 + b^2 - a^2b^2 = 0$
$f(a,b,\lambda) = ab + \lambda(a^4 + b^2 - a^2b^2)\\
\nabla f = 0$
$b + 4a^3\lambda - 2ab^2\lambda= 0\\
a + 2b\lambda - 2a^2b\lambda=0\\
b^2 = 2a^4$
$a^4 +2a^4 - 2a^6 = 0\\
a^4(3 -2a^2)= 0\\
a^2= \frac 32\\
b^2 = \frac {9}{2}$
$ab = \frac {3\sqrt 3}{2}$
$\pi ab = \frac {3\sqrt 3 \pi}{2}$ |
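As a check that this ellipse really is tangent to the circle: with $a^2=\frac32$ and $b^2=\frac92$ the quadratic above has the double root $y=\frac1{1-a^2/b^2}=\frac32$, the circle then gives $x^2=2y-y^2=\frac34$, and indeed $\frac{3/4}{3/2}+\frac{9/4}{9/2}=\frac12+\frac12=1$, so this point lies on both curves.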
Example of a compact subset that is not closed | The empty set is not a problem: it's finite and it's closed and it's compact.
But note that any subset of $\mathbb{R}$ in the cofinite topology is compact. The same argument that shows the whole space is compact also works for any subset: given an open cover, any single nonempty member of the cover already covers all but finitely many points of the subset, and each remaining point needs only one more set.
But the only closed subsets are the finite ones (and $\mathbb{R}$ itself), so take $A = \mathbb{N}$ as an example of a compact non-closed subspace. Your open interval is fine too.
$L^2$ Bounds for Markov Chains. | This is not a full answer, but maybe helpful anyway, as it clarifies, that an answer depends on the properties of $P$ itself.
The induced matrix norm of the $L_2$-vector norm is the spectral norm, which is the maximal singular value of the matrix under consideration, so in order to find out something about $||APA^{-1}||_2$, we should look at
$$(A^{-1})^TP^TA^TAPA^{-1} \, .$$
If $A$ is orthogonal, so $A^{-1} \, = \, A^T$, this expression simplifies to
$$ AP^TPA^{-1} \, ,$$
while if $A$ commutes with $P$, so $AP \, = \, PA$, it simplifies to
$$ P^TP \, .$$
Therefore, as $P^TP$ and $AP^TPA^{-1}$ have the same spectrum, we have to find out something about the Perron-eigenvalue of $P^TP$, given $P$ is row stochastic.
By now, the nice answer of user1551 (my own answer was really clumsy compared to that) to your question
Characterize stochastic matrices such that max singular value is less or equal one.
gives
$$||APA^{-1}||_2 \ \geq \ 1$$
for row stochastic $P$ and orthogonal or commuting $A$ and
$$||APA^{-1}||_2 \ = \ 1 \, ,$$
if and only if $P$ is doubly stochastic. Still, I leave my example here:
If $P$ is row stochastic, then $P^T$ is column stochastic, which implies that $P$ and $P^TP$ have the same column sums, because a column-stochastic matrix preserves the component sum of any column vector it is applied to.
Therefore, if $P$ is doubly stochastic, $P^TP$ is also doubly stochastic and the orthogonal matrices or matrices, which commute with $P$ are included in the class of matrices you are looking for in this case.
If $P$ is not doubly stochastic, there is at least one column sum larger than $1$, while the column sums sum to $n$. This always implies the Perron-eigenvalue of $P^TP$ to be larger than $1$, see the answer of user1551 mentioned above, example:
\begin{equation}
\mathbf{P} \ := \
\begin{pmatrix}
\frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\
1 & 0 & 0 \\
0 & 0 & 1
\end{pmatrix}
\end{equation}
\begin{equation}
\mathbf{P^TP} \ := \
\begin{pmatrix}
\frac{10}{9} & \frac{1}{9} & \frac{1}{9} \\
\frac{1}{9} & \frac{1}{9} & \frac{1}{9} \\
\frac{1}{9} & \frac{1}{9} & \frac{10}{9}
\end{pmatrix}
\end{equation}
The Perron eigenvalue of $P^TP$ is $ \frac{2+\sqrt{3}}{3} \, > \, 1$, so no matrix commuting with $P$ and no orthogonal matrix is contained in the sought class for this matrix $P$.
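(For completeness: the eigenvalues of this $P^TP$ are $1$ and $\frac{2\pm\sqrt3}{3}$, whose sum $\frac73$ matches the trace, so the spectral norm of $P$ itself is $\sqrt{\frac{2+\sqrt3}{3}}\approx 1.12>1$.)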
Solutions of $x^2=1$ | Hint: Let $g$ be the generator for the group, and write out all the elements of the group as $1, g, g^2, \cdots, g^{n-1}$. Then $x$ has to be one of these, so let $x = g^k$ for $0 \le k \le n-1$.
If the cyclic group is infinite, then write it out as $\ldots, g^{-2}, g^{-1}, 1, g, g^2, \ldots$ and do the same thing. |
Distinct non-negative integers $y<9$ such that $f(y) ≡ 0 (\bmod 9)$. | Hint $\ $ mod $\,9,\,$ the discriminant is a square iff it is $\equiv 0\,$ iff $\,f(x) = (x\!-\!a)^2,\,$ and then
$$ 9\mid (x\!-\!a)^2\iff 3\mid x\!-\!a\iff x\equiv a,\,a\!+\!3,\,a\!+\!6\!\!\pmod{9}$$
Alternatively: the only factorizations mod $3$ are $\,(x\pm1)^2.\,$ Examine how those lift mod $9$. |
How to find a minumum vertex cover from a maximum matching in a bipartite graph? | The proof in the article you linked gives such an explicit construction. In your example, if $L = \{1,2,3,4,5\}$ , $R = \{6,7,8,9,10\}$ are the partite sets, let $Z$ be the set of vertices that are either unmatched vertices of $L$
or connected to an unmatched vertex in $L$ by an alternating path.
$4$ is the only unmatched vertex in $L$ and the only vertices we can reach by alternating paths are $2$ and $8$, so $Z = \{2,4,8\}$. A minimum vertex cover is then given by $(L\setminus Z) \cup (R\cap Z) = \{1,3,5,8\}$. |
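As a consistency check, this cover has $4$ vertices, which equals the size of the maximum matching (every vertex of $L$ except $4$ is matched), exactly as König's theorem requires.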
How do I prove that the exponential function $e^x$ has gradient $e^x$ from first principles? | If you define $$\exp(x) = \sum_{n = 0}^{\infty} {x^n \over n!} = 1 + x + {x^2 \over 2!} + {x^3 \over 3!} + {x^4 \over 4!} + \cdots$$ then if you differentiate term by term you get back what you started with. |
How to construct a function which has 5-period point and has no 3-period point | Consider the basic map that has a 3-period point:
$$ x \mapsto 2|x - \frac12| $$
Iterating this map three times you see a 3-period point $x$ solves
$$ |||8x - 4|-2|-1| = x $$
This equation has 8 solutions, corresponding to each of the possible signs in the absolute values. These include the fixed points $x = 1/3$ and $x = 1$, as well as the two 3-cycles
$$ 1/7 \to 5/7 \to 3/7 \qquad \text{and} \qquad
1/9 \to 7/9 \to 5/9 $$
Consider now instead the map
$$ f(x) = \max(\frac5{32}, 2|x - \frac12| ) $$
First one checks easily that it has a period 5 point
$$ 5/32 \to 11/16 \to 3/8 \to 1/4 \to 1/2 \to 5/32 $$
It suffices to argue that $f$ doesn't have a 3-period point.
Claim 1 Any periodic orbit contains no points less than $5/32$.
Proof: $f(x) \geq 5/32$ for any input $x$.
Claim 2 Restricted to any periodic orbit of period $\neq 5$, $f$ acts the same as the $x\mapsto 2|x-\frac12|$ map.
Proof: Since the orbit has no points less than 5/32, we know that $f(x) \geq 5/32$ for any point $x$ in the orbit. If $f(x) > 5/32$ for all points, we are done. If $f(x) = 5/32$ for some $x$ in the orbit, we know this must be the period 5 orbit computed before.
Now suppose for contradiction that $f$ has a 3-period point, by the two claims above $f$ restricted to the 3-cycle is equal to the map $x\mapsto 2|x-\frac12|$. But the 3-cycles of this latter map we have solved explicitly, and both solutions pass through a point less than 5/32 (both 1/7 and 1/9 are less than 5/32). Therefore the 3-cycle is impossible. |
Why are simplicial sets contravariant functors and not covariant? | First, let's look at the Simplex Category $\Delta$. We can see that $\Delta$, rather than $\Delta^{op}$, is a natural category to look at, both because it has a nice combinatorial description (the objects are finite total orders and the morphisms are order preserving maps) and, more importantly for us, because it has a topological visualisation which goes as follows:
The objects are simplices.
The face maps $[n] \rightarrow [n+1]$ are given by the inclusion of an $n$-simplex as a face of an $n+1$ simplex.
The degeneracy maps $[n+1] \rightarrow [n]$ are given by the projection onto a face.
In particular, all the morphisms correspond to continuous maps.
Therefore, we expect that our functors will have source $\Delta$ and would prefer to make functors contravariant rather than switch to having $\Delta^{op}$ as the source.
Now let's look at what's going on with a simplicial set $X$ and see how we can construct it as a functor. The set $X_n$ is the set of all instances of $n$-simplices in the simplicial set. Now, for any $n$-simplex we can pick out each of its faces: for each choice of face we get a function $X_n \rightarrow X_{n-1}$. But this function was constructed from the face map $[n-1] \rightarrow [n]$: it is a function sending each $n$-simplex to the $(n-1)$-simplex that was included into it by that face map. So, face maps should be sent to functions going the other way round.
Similarly, for each $n$-simplex we can construct a degenerate instance of an $n+1$-simplex occupying the same space. We can view this constructed $n+1$-simplex as the map that first projects an $n+1$-simplex onto a face using a degeneracy map, then includes the original $n$-simplex into the simplicial set. Thus the degeneracy maps $[n+1] \rightarrow [n]$ are turned into functions $X_n \rightarrow X_{n+1}$.
Combining this, we see that $X$ is a covariant functor $\Delta^{op} \rightarrow \underline{Set}$. Since $\Delta$ is a more natural category to work with and can be viewed as a category consisting precisely of the simplices as topological spaces and continuous maps, we think of this as being a contravariant functor $\Delta \rightarrow \underline{Set}$. |
Prove that if $\mathrm D$ is integral domain, then $\mathrm D$is UFD iff these conditions hold. | You may follow these steps.
Let $a \in D$ be nonzero, and not a unit. Then $a$ is divisible by an irreducible element. (For this you use (1). The idea is that either $a$ is irreducible, and you're done, or it has a proper divisor $a_{1}$, that is, a divisor which is not a unit, nor associated to $a$. So you start constructing an AC of ideals $(a) \subset (a_{1}) \subset \dots$.)
Let $a \in D$ be nonzero, and not a unit. Then $a$ is a product of irreducible elements. (For this you use (1). The proof is similar to that of the previous point. If $a$ is irreducible, you're done. Otherwise $a = p_{1} a_{1}$, with $p_{1}$ irreducible. So you start constructing an AC of ideals $(a) \subset (a_{1}) \subset \dots$.)
Factorization is unique. (For this you use (2).) |
Quotient map not nullhomotopic | Consider the composite map
$$(D,\partial D) \rightarrow (M,M-D) \rightarrow (S^2, x_0)$$
It suffices to show that $(D,\partial D) \rightarrow (S^2, x_0)$ is not nullhomotopic.
For example it induces an isomorphism in homology $H_2(D,\partial D) \rightarrow H_2(S^2, x_0)$.
Show that $f(z)$ is not differentiable at $0$ | $d(z)={f(z)-f(0) \over z} = ({\overline{z} \over z})^2$.
Consider the paths $z_1(t)=t$ and $z_2(t)=t(1+i)$ for real $t$.
$d(z_1(t)) = 1$, $d(z_2(t)) = -1$.
Actually, it would have been simpler just to consider $z(t) = t e^{i\theta}$, then $d(z(t)) = e^{-4i \theta}$ and the result follows since this is not a constant (as a function of $\theta$). |
$h(x)={f(x)\over x}$ is decreasing or increasing or both over $[0,\infty)$ | Since $f(0)=0$ then for each $x>0$ there is $c\in(0,x)$ such that $f'(c)=\frac{f(x)}{x}$ by mean value theorem.
Let $x>0$ then we have that:
$h'(x)=\frac{xf'(x)-f(x)}{x^{2}}=\frac{f'(x)-\frac{f(x)}{x}}{x}=\frac{f'(x)-f'(c)}{x}=\frac{f'(x)-f'(c)}{x-c}\cdot\frac{x-c}{x}=f''(d)\frac{x-c}{x}>0$
since $c<x$, $x>0$, and $f''(d)>0$, where $d\in(c,x)$ is given by the mean value theorem applied to $f'$ on $[c,x]$. So $h$ is increasing on $[0,\infty)$.
Wirtinger derivatives $\mathbb{C}$-linear | Since $\partial_x (cf) = c\partial_x f$ and $\partial_y(cf) = c\partial_y f$ for all differentiable $f: dom(f) \subseteq \mathbb{C} \rightarrow \mathbb{C}$ and $c \in \mathbb{C}$ the $\mathbb{C}$-linearity of the Wirtinger derivatives follows by simply taking the appropriate complex-linear combinations of the identities I claim above. Likewise, the usual Leibniz rule for products is true for the real partial derivatives hence you can add those rules to obtain the Wirtinger Leibniz rules. For example:
$$ \partial_x(fg) = \partial_x(f)g+ f\partial_x(g) $$
$$ i\partial_y(fg) = i[\partial_y(f)g+ f\partial_y(g) ]$$
Then divide by two and add to obtain:
$$ \partial_z(fg) = \partial_z(f)g+ f\partial_z(g) $$
Similar comments apply to the $\bar{z}$-derivatives.
It is interesting to note that for $f: \mathbb{C} \rightarrow \mathbb{C}$ the differential $df = \partial_x f dx+ \partial_yf dy$ can be written as $df = \partial_z f dz+ \partial_{\bar{z}}f d\bar{z}$. The condition $df(ch) = cdf(h)$ is equivalent to the condition of complex-differentiability. You can think of $ch$ being fed into the $dz$ or $d\bar{z}$. The $c$ filters out of the $dz$ but it comes out as $\bar{c}$ if we feed it to $d\bar{z}$. Thus the condition that a function on the complex plane be antiholomorphic (which means it's just a function of $\bar{z}$) is that $\partial_z f=0$. In contrast, we get holomorphic (just a function of $z$) functions if $\partial_{\bar{z}}f=0$. Of course, if you're only interested in calculations then these comments about the differential may be tangential to your interests.
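A small worked example: for $f(z)=|z|^2=z\bar z$ the product rule above gives $\partial_z f=\bar z$ and $\partial_{\bar z} f=z$, so $\partial_{\bar z} f=0$ only at the origin, which is exactly the one point where $|z|^2$ is complex-differentiable.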
An isomorphism $f:G_1 \to G_2$ maps the identity of $G_1$ to the identity of $G_2$ | Your proof is correct, but you are explicitly using surjectivity. You don't actually need this, but instead all you need is for the map $f$ to be a homomorphism (so not necessarily a surjective or injective):
If $f: G_1\rightarrow G_2$ is a group homomorphism then $f(e_1)=e_2$.
Proof. Write $g:=f(e_1)$. Then $g^2=f(e_1)^2=f(e_1^2)=f(e_1)=g$. That is, $g^2=g$. Then:
$$\begin{align*}
g^2&=g\\
\Rightarrow
g^2\cdot g^{-1}&=g\cdot g^{-1}\\
\Rightarrow
g&=e_2
\end{align*}$$
Therefore, $f(e_1)=e_2$ as required. |
Computing the derivative of $\|Ax\|_2$ | It is useful to introduce the Frobenius inner product as:
$$ A:B = \operatorname{tr}(A^TB)$$
with the following properties derived from the underlying trace function
$$\eqalign{A:BC &= B^TA:C\cr &= AC^T:B\cr &= A^T:(BC)^T\cr &= BC:A \cr } $$
Then we work with differentials to find the gradient. Your problem becomes, with $u=Ax$
$$\eqalign{ f &= \|u\|_{F}^{2} = u:u \\
df &= 2u : du\\
&= 2Ax : A dx\\
&= 2A^TAx : dx}
$$
Thus
$$ \frac{\partial f}{\partial x} = 2 A^TAx$$
EDIT:
For $g = \|u\|_{2} $ this becomes:
$$\eqalign{ g&= f^{1/2} \\
dg &= \frac{1}{2} f^{-\frac{1}{2}} : df\\
dg &= \frac{1}{2} f^{-\frac{1}{2}} : 2u : du\\
&=f^{-\frac{1}{2}} u : du\\
&=\frac{1}{||Ax||} Ax : Adx\\
&=\frac{1}{||Ax||} A^TAx : dx\\
}$$
$$\frac{\partial g}{\partial x}= \frac{1}{||Ax||}A^T A x$$ |
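As a sanity check, take $A=I$: the two formulas reduce to $\nabla\|x\|^2=2x$ and $\nabla\|x\|_2=\frac{x}{\|x\|}$, the familiar results.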
Solve the following trigonometric exercise... | Hint In degree
$$\tan(90-x)=\frac{1}{\tan x}$$ |
Example of degenerate scalar product | On 2-space, define $\langle\langle x, y \rangle \rangle = x_1 y_1$. That's symmetric, bilinear, etc., but it's degenerate, as the vector $(0, 1)$ is in the kernel. |