title | upvoted_answer
---|---|
How to solve this Diophantine equation (involving natural logarithms)? | This is somewhat an ill-posed problem: unless $a = c = 1$, your real number will be irrational, and thus there will probably be no way to present the number other than writing it in terms of natural logarithms and rational numbers in the first place. In that case you can use a fairly straightforward algorithm to check whether an expression in terms of natural logarithms and rational numbers can be written in your given form. |
Evaluating $\sum^{7}_{k=1}\frac{1}{\sqrt[3]{k^2}+\sqrt[3]{k(k+1)}+\sqrt[3]{(k+1)^2}}$ | Hint:
$$(k+1)-k=((k+1)^{\frac{1}{3}}-k^{\frac{1}{3}})((k+1)^{\frac{2}{3}}+(k+1)^{\frac{1}{3}}k^{\frac{1}{3}}+(k)^{\frac{2}{3}})$$ |
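For completeness (this is where the hint leads): each summand becomes a telescoping difference,
$$\frac{1}{\sqrt[3]{k^2}+\sqrt[3]{k(k+1)}+\sqrt[3]{(k+1)^2}}=\sqrt[3]{k+1}-\sqrt[3]{k},\qquad\text{so}\qquad\sum_{k=1}^{7}\left(\sqrt[3]{k+1}-\sqrt[3]{k}\right)=\sqrt[3]{8}-1=1.$$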
Tower Property With Product of 2 Distributions | It might be more clear to write it out as: $E_Y[Y\mid Z = 0]=\operatorname{Var}_Y[Y\mid Z = 0]=0$ and $E_Y[Y\mid Z = 1]= E_X[X] = 8 $ and finally:
$$\operatorname{Var}_Y[Y\mid Z = 1]=\operatorname{Var}_X[X] = 64$$
What we really want is $\operatorname{Var}[Y]=E[\operatorname{Var}[Y\mid Z]]+\operatorname{Var}[E[Y\mid Z]]$. This is just $\mathbb{E}[64Z] + \operatorname{Var}[8Z]$, which can be evaluated with what you have above.
$Z$ appears in the conditional expectation since when you condition on $Z$, you treat it as a constant. |
Assuming without loss of generality that a real symmetric matrix is in fact diagonal | This will strongly depend on the property that you want to prove. For example, if you wanted to prove that all symmetric matrices have only zeroes outside of the diagonal (which is obviously false), you can't assume that the matrix is diagonal. |
Proving $\sqrt{2}$ is a real number when the set is bounded from below. | Proof:
Consider $S=\{s\in\mathbb{R}:s>0\text{ and }s^2>2\}$. Since $2^2>2$, we have $2\in S$, so $S\not=\emptyset$; and since every element of $S$ is positive, $0$ is a lower bound. By the Axiom of Completeness, there exists an $\alpha=\inf S$; we claim $\alpha^2=2$. We will proceed by contradiction by exhausting the cases $\alpha^2>2$ and $\alpha^2 <2$.
For the first case, we assume $\alpha^2>2$; then
$$
\begin{align*}
\left(\alpha-\frac{1}{n}\right)^2=&\alpha^2 -\frac{2\alpha}{n}+\frac{1}{n^2}\\
>& \alpha^2-\frac{2\alpha}{n}
\end{align*}
$$
Since $\alpha^2>2$, let $\alpha^2-2=\epsilon >0$, then we have $\displaystyle\alpha^2-\frac{2\alpha}{n}-2=\epsilon -\frac{2\alpha}{n}$
$\displaystyle\epsilon -\frac{2\alpha}{n}>0\Leftrightarrow\epsilon>\frac{2\alpha}{n}\Leftrightarrow\frac{\epsilon}{2\alpha}>\frac{1}{n}$
By the Archimedean property, such $n\in\mathbb{N}$ exists, thus $\left(\alpha-\frac{1}{n}\right)\in S$, but $\left(\alpha-\frac{1}{n}\right) < \alpha$, contradicting our assumption that $\alpha$ is a lower bound.
For the second case, we assume $\alpha^2<2$; then
$$
\begin{align*}
\left(\alpha+\frac{1}{n}\right)^2=&\alpha^2 +\frac{2\alpha}{n}+\frac{1}{n^2}\\
<& \alpha^2+\frac{2\alpha}{n}+\frac{1}{n}\\
=&\alpha^2+\frac{2\alpha+1}{n}
\end{align*}
$$
Since we assumed $\alpha^2<2$, we can make the $\displaystyle\frac{2\alpha+1}{n}$ term small enough that the whole expression stays below $2$: by the Archimedean property we can find an $n_0\in\mathbb{N}$ such that:
$$
\frac{1}{n_0}<\frac{2-\alpha^2}{2\alpha+1},\text{ which implies } \frac{2\alpha+1}{n_0}<2-\alpha^2
$$
Then $$ \left(\alpha+\frac{1}{n_0}\right)^2 < \alpha^2+(2-\alpha^2)=2$$
Thus $\displaystyle\left(\alpha+\frac{1}{n_0}\right)$ is a lower bound for $S$, but $\displaystyle \left(\alpha+\frac{1}{n_0}\right)>\alpha$, a contradiction to our assumption that $\alpha =\inf S$. |
How many ways can you choose 4 non empty subsets from a 10 element set | Are the 4 sets necessarily disjoint, with the whole set as their union? If so, the answer is just 818,520, by considering each surjective map's pre-image. It should be straightforward enough to see that there's a 1-1 correspondence between these pre-images and ordered choices of 4 non-empty disjoint subsets.
Then just divide by 4! and you're good. :) |
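A quick sanity check of that count by inclusion–exclusion (a sketch, not part of the original answer):

```python
from math import comb

# Surjections from a 10-element set onto a 4-element set, counted by
# inclusion-exclusion over which of the 4 targets are missed.
surjections = sum((-1)**i * comb(4, i) * (4 - i)**10 for i in range(5))
print(surjections)        # 818520
print(surjections // 24)  # 34105, after dividing by 4! as suggested
```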
Let $q_n$ be the smallest prime that is strictly greater than $P_n = p_1 p_2 \dots p_n + 1$, prove $q_n - p_1 p_2 \dots p_n$ is always prime. | The conjecture is called Fortune's conjecture, and the numbers in question, namely the smallest integers $m \gt 1$ such that $p_n\# + m$ is prime (where $p_n\#$ is the primorial), are called Fortunate numbers.
Note that, in addition, there are also currently at least $6$ related questions on this site, e.g., as shown in this search. |
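A sketch for computing the first few Fortunate numbers with SymPy (the loop bound is arbitrary):

```python
from sympy import primorial, nextprime, isprime

# For each n, the Fortunate number is q_n - p_1*...*p_n, where q_n is the
# smallest prime strictly greater than primorial(n) + 1.
for n in range(1, 9):
    P = primorial(n)         # product of the first n primes
    m = nextprime(P + 1) - P
    print(n, m, isprime(m))  # Fortune's conjecture: m is always prime
```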
In $\Delta ABC$, $\frac{AP}{AQ}=\frac{BP}{BQ}= \frac{CP}{CQ}=2$ find circumcentre given coordinates of $P,Q$. | The given conditions on $A,B,C$ show that they can be randomly chosen on the ellipsoid with equation
$$
(x-1)^2+(y-1)^2+(z-2)^2=4(x-4)^2+4(y-7)^2+4(z-8)^2\ .
$$
(It is a quadric in space; it is bounded, since for points $X$ going to "infinity" we have $XP/XQ\to 1$; it passes through the point on the segment $PQ$ at distance $\frac13\,PQ$ from $Q$, and through the reflection of $P$ in $Q$.)
In particular our random choice can use a "great variety" of planes $ABC$, including pairs of such triangle planes that do not intersect.
We can also make a choice of three "very near points" $A,B,C$ (at distance less than $\epsilon=10^{-10}$) to a given point $E$ on this ellipsoid, so that the circumcenter of $\Delta ABC$ (chosen to be with angles $<90^\circ$) should be interior and thus also "near" $E$.
We can also fix a normal direction, say one corresponding to the line $PQ$, and intersect the ellipsoid with a "moving plane" normal to the fixed normal direction. We obtain an ellipse (a circle for the specific choice), and the circumcenters of triangles with vertices on such an ellipse (or circle) are not determined.
From all this discussion, we cannot "find" the exact position of the circumcenter.
The geometric locus of the circumcenter is another problem. |
Find order of given factor group | Use Lagrange's Theorem, which states that if $G$ is a finite group and $H$ is a subgroup of $G$, then the number of left (right) cosets of $H$ in $G$ is $|G|/|H|$. |
Prove the limit of $(x+y)/(x^2+y^2)$ as $(x,y)$ approaches $(0,0)$ does not exist | It is very much correct. You've shown that close to $(0,0)$, the value of $\frac{x+y}{x^2 + y^2}$ does not approach one specific value. Not even $\infty$ or $-\infty$. |
Number of homomorphisms $\mathbb Z_3 \times \mathbb Z_3\to\mathbb Z_9$ | You can have two different homomorphisms $\mathbb Z_3\times\mathbb Z_3\to\mathbb Z_9$ which have kernels of equal size, for example $f$ defined by $(1,0)\mapsto 3$, $(0,1)\mapsto 0$ and $g$ defined by $(1,0)\mapsto 0$, $(0,1)\mapsto 3$ both have $|\ker f|=|\ker g|= 3$. So counting the number of possible kernel cardinalities is not enough to count all homomorphisms.
Since any homomorphism is defined by the images of a generating set, you need to figure out the possible images for $(1,0)$ and $(0,1)$ that define a homomorphism. Be aware that, for example, $(1,0)\mapsto 1$ cannot give you a homomorphism; do you know why?
Any homomorphism $\varphi:\mathbb Z_3\times\mathbb Z_3\to\mathbb Z_9$ satisfies
$$
0 = \varphi(0,0) = \varphi\left(3\cdot(1,0)\right) = 3\cdot\varphi(1,0),
$$
where $3\cdot g$ is just a notation for $g+g+g$.
Now assume $\varphi(1,0)=1$ then this equation becomes $0=3$, which is wrong even in $\mathbb Z_9$. In general the order of $\varphi(g)$ has to be a divisor of the order of $g$. |
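A brute-force check of the count these constraints suggest (my own sketch):

```python
from itertools import product

# A homomorphism Z_3 x Z_3 -> Z_9 is determined by a = phi(1,0) and
# b = phi(0,1); well-definedness forces 3a = 3b = 0 in Z_9.
homs = [(a, b) for a, b in product(range(9), repeat=2)
        if 3 * a % 9 == 0 and 3 * b % 9 == 0]
print(len(homs))  # 9
print(homs)       # all images drawn from {0, 3, 6}
```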
Define a set of arcs based on the arrangements of a permutation | The elements where $\pi$ takes the same value form an equivalence class. You can identify them with the union-find algorithm. Once you have identified the equivalence classes, you can sort them with the key being their original position in the list. You can then make the arcs according to whatever protocol you are using. The simplest of course would be the first with the second, the second with the third, and so on.
It may be true that the sorting step will be unnecessary, due to the order in which the elements are processed by union-find. You should check this. |
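A minimal union–find sketch along these lines (all names and the example permutation are hypothetical):

```python
def find(parent, i):
    # Path-compressing find: walk up to the class representative.
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def union(parent, i, j):
    # Merge the classes containing i and j.
    parent[find(parent, j)] = find(parent, i)

# Union together the indices where pi takes the same value.
pi = [2, 1, 2, 3, 1]
parent = list(range(len(pi)))
first_seen = {}
for i, v in enumerate(pi):
    if v in first_seen:
        union(parent, first_seen[v], i)
    else:
        first_seen[v] = i

# Collect the classes; sorting keys them by their first original position.
classes = {}
for i in range(len(pi)):
    classes.setdefault(find(parent, i), []).append(i)
print(sorted(classes.values()))  # [[0, 2], [1, 4], [3]]
```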
Is there a way to intuitively understand this? | Your guess is incorrect: you are looking at what people call compound growth.
If you had \$1 and received 100% interest in a year, you would expect to have \$2 by the end of the year. However, if you had 50% compounded twice a year, you would earn an additional 50% interest on the \$0.50 that you received at mid year (hence the term 'compound interest', where you earn interest on interest), so you would finish with \$2.25.
Thus compounding the system indeed increases the growth rate, but there's a limit to this: as you compound more frequently (say every second), the most you can have by the end of the year is about \$2.72. This number is called $e$, and one way we define this in mathematics is the limit as $n$ goes to infinity of $\left(1+\frac{1}{n}\right)^n$. |
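Numerically the compounding limit is easy to see (a quick sketch):

```python
# $1 at 100% annual interest, compounded n times per year: (1 + 1/n)^n.
for n in (1, 2, 12, 365, 10**6):
    print(n, (1 + 1/n)**n)  # 2.0, 2.25, ..., approaching e = 2.71828...
```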
Stopping criteria for gradient method | I will discuss the termination criteria for the simple gradient method $x_{k+1} = x_{k} - \frac{1}{L}\nabla f(x_k)$ for unconstrained minimisation problems. If there are constraints, then we would use the projected gradient method, but similar termination conditions hold (imposed on the norm of the difference $x_k-z_k$).
The third criterion, namely $\|\nabla f(x_k) \| < \epsilon$, is fine for strongly convex functions with $L$-Lipschitz gradient. Indeed, if $f$ is $\mu$-strongly convex, that is
$$\begin{aligned}
f(y) \geq f(x) + \nabla f(x)^\top (y-x) + \tfrac{\mu}{2} \|y-x\|^2
\end{aligned},\tag{1}
$$
then, for $x^*$ such that $\nabla f(x^*)=0$ (the unique minimiser of $f$), we have
$$\begin{aligned}
f(x) - f(x^*)\leq \tfrac{1}{2\mu}\|\nabla f(x) \|^2,
\end{aligned}\tag{2}
$$
so, if $\|\nabla f(x) \|^2 < 2\mu\epsilon$, then $f(x) - f(x^*) < \epsilon$, i.e., $x$ is $\epsilon$-suboptimal.
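As a concrete sketch of this stopping rule on a strongly convex quadratic (the test function and all names are mine, not from the question):

```python
import numpy as np

def gradient_method(grad, x0, L, eps, max_iter=100000):
    # x_{k+1} = x_k - (1/L) grad f(x_k); stop when ||grad f(x_k)|| < eps.
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < eps:
            break
        x = x - g / L
    return x

# f(x) = 0.5 x^T A x, so grad f(x) = A x and L = ||A||_2.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
x = gradient_method(lambda v: A @ v, np.array([5.0, -3.0]),
                    L=np.linalg.norm(A, 2), eps=1e-10)
print(x)  # close to the unique minimiser [0, 0]
```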
But termination is a mysterious thing... In general (under the assumptions you made) it is not true that we will have $\|x-x^*\|<\epsilon$ if $\| \nabla f(x) \| < \kappa \epsilon$ for some $\kappa > 0$ (not even locally). There might nevertheless be specific cases where such a bound holds. Unless you make some additional assumptions on $f$, this will not be a reliable termination criterion.
However, strong convexity is often too strong a requirement in practice. Weaker conditions are discussed in the article: D. Drusvyatskiy and A.S. Lewis, Error bounds, quadratic growth, and linear convergence of proximal methods, 2016.
Let $f$ be convex with $L$-Lipschitz gradient and define $\mathcal{B}_\nu^f = \{x: f(x) - f^* < \nu\}$. Let us assume that $f$ has a unique minimiser $x^*$ (e.g., $f$ is strictly convex). Then assume that $f$ has the property
$$\begin{aligned}
f(x) - f(x^*) \geq \tfrac{\alpha}{2} \|x-x^*\|^2,
\end{aligned}\tag{3}\label{3}$$
for all $x\in\mathcal{B}_\nu^f$, for some $\nu>0$. Functions which satisfy this property are not necessarily strongly convex; an example is $f = (\max\{|x|-1,0\})^2$. Of course the above holds if $f$ is strongly convex, and also if $f$ is of the form $f(x) = h(Ax)$ where $h$ is a strongly convex function and $A$ is any matrix.
Then, condition \eqref{3} is shown to be equivalent to
$$\begin{aligned}
\|x-x^*\| \leq \frac{2}{\alpha} \|\nabla f(x) \|,
\end{aligned}\tag{4}\label{4}$$
for all $x\in\mathcal{B}_{\nu}^f$ and with $\alpha < 1/L$.
Clearly in this case we may use the termination condition $\| \nabla f(x) \| < \epsilon\alpha/2$ which will imply that $\|x-x^*\| < \epsilon$.
In regard to the second condition, you may use it again for strongly convex functions or if \eqref{3} holds locally about $x^*$. The reason for that is that the following bound holds for the gradient method:
$$\begin{aligned}
\tfrac{1}{2L}\|\nabla f(x_k) \|^2 \leq f(x_k) - f(x_{k+1}).
\end{aligned}\tag{5}\label{5}$$
The right hand side of \eqref{5} is further upper bounded by $L_f \|x_k - x_{k+1}\|$, where $L_f$ is the Lipschitz constant of $f$ (we know that $f$ is Lipschitz continuous), so a condition on $\|x_{k+1}-x_{k}\|$ may potentially be used, but we may see that the basis for all this is the bound on $\|\nabla f(x_k) \|$. |
System of mean recurrence time | If we denote by $\mu_i = \mathbb{E}[T\,|\, X_0=i]$ the returning time to state $i$, then we have that $$\mu_i = 1+\sum_{j\in S} \mu_j p_{i,j}$$ where in the sum we set $\mu_j=0$ if and only if the last state seen is $j$. Thus, in my case, we have
$$\mu_C=1+(4/5)\mu_C + (1/5) \cdot 0$$
$$\mu_T=1+(3/4)\mu_C + (1/4) \cdot 0$$
hence $$\mu_C=1+(4/5)\mu_C$$
$$\mu_T=1+(3/4)\mu_C .$$ |
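Solving the reduced system (a step not written out above): the first equation gives
$$\tfrac{1}{5}\mu_C=1\;\Longrightarrow\;\mu_C=5,\qquad\text{and then}\qquad\mu_T=1+\tfrac{3}{4}\cdot 5=\tfrac{19}{4}.$$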
Conclusion about a holomorphic function which is not bounded | The function $\frac{\sin(\pi z)}{\sin(\pi \sqrt{2})}$ is equal to $0$ for $z = 1, 2, ...$ and equal to $1$ for $z=\sqrt{2}.$
Furthermore, it is holomorphic everywhere and unbounded. To see the latter, let $z=it$ and let $t$ be an arbitrarily large real number. |
A question on continuous functions | $\sin\pi x$ and $-\sin\pi x$. |
Sum of Stirling's number of first kind | This follows immediately from the definition.
The Stirling number of the first kind $s(n,i)$, also written $n\brack i$, is the number of permutations of $[n]=\{1,\ldots,n\}$ having exactly $i$ cycles. Every permutation of $[n]$ has some number of cycles, and clearly the maximum possible number of cycles is $n$ (for the identity permutation). How many permutations of $[n]$ are there altogether? |
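Spelling out the conclusion this points to: since every one of the $n!$ permutations of $[n]$ is counted exactly once,
$$\sum_{i=1}^{n}{n\brack i}=n!\,.$$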
$y''+6y'+c=0$ satisfies $\lim_{x\to \infty} y(x)= 0$ | Rewrite
$$y''e^{6t}+6y'e^{6t}+ce^{6t}=(y'e^{6t})'+ce^{6t}=0.$$
Then, integrating once (with $C$ a constant of integration),
$$y'e^{6t}+\frac c6e^{6t}=C,$$
$$y'+\frac c6=Ce^{-6t}.$$
For $y$ to keep a finite value at infinity, you need $c=0$. |
Linear Transformation $T\colon V\rightarrow V$ defined by $T(y)=y+e^{-2t}\frac{dy}{dt}$ | Since the determinant of $M$ is $1$, there can be only one solution. Equate $T(a+be^{2t}+ce^{4t})$ to $1+2e^{2t}+3e^{4t}$ and you will get $a=21$, $b=-10$, $c=3$. |
Uniform Distribution: Probability that $X$ is rational | Well, you're not arguing against the question, you merely have a (correct) answer for it: In the usual uniform probability measure on $[0,1]$, the measure of $\mathbb Q\cap[0,1]$ is zero.
But merely that you know an answer doesn't invalidate asking the question in the first place. It looks like the book you're quoting from poses that question as a way to motivate developing measure theory rigorously instead of just getting by with hand-wavy intuition.
As Hans Lundmark notes in a comment, the classical example of a non-measurable set would be a Vitali set. All such examples are strange beasts that cannot be completely specified in finite space, because proving they exist requires the axiom of choice. (There are, under certain plausible conditions, models of standard set theory except the axiom of choice where all the subsets of $[0,1]$ that the model knows are in fact measurable -- but they can then be pretty weird in other ways). |
Showing $\lim_{n\rightarrow\infty}\sqrt[3]{n^3+n^2}-\sqrt[3]{n^3+1}\rightarrow\frac{1}{3}$ | You should use that $a^3-b^3=(a-b)(a^2+ab+b^2)$. Take $a=\sqrt[3]{n^3+n^2}$, $b=\sqrt[3]{n^3+1}$ and then multiply your expression by $(a^2+ab+b^2)/(a^2+ab+b^2)$. Then use the trick you are trying to use. |
Sum of geometric series $\frac{7}{8} - \frac{49}{64} + \frac{343}{512}$ | Define
$$a=\frac78\;,\;\;q=-\frac78$$
Thus
$$|q|<1\implies \sum_{n=0}^\infty aq^n=\frac a{1-q}=\frac{\frac78}{1+\frac78}=\frac7{15}$$ |
Explicit scheme for the transport equation | Here we would rather write
$$
v_j^{n+1} = H(v_{j-2}^n\dots v_{j+2}^n)
$$
as the scheme has a five-point stencil. Hope you can take it from here. |
Algebra equations - how to solve? | For the second equation, multiply both sides by $x$; that will give you
$$0.83x = 123.$$
Now you are in the same situation as the first equation, which you know how to solve.
Alternatively, you can take reciprocals on both sides, and go from
$$0.83 = 123/x$$
to
$$\frac{1}{0.83} = \frac{x}{123}.$$
Since
$$\frac{x}{123} = \frac{1}{123}\times x,$$
you are again in the same situation as the first equation. |
Let $G$ be a group of order $360$. Let $A$ and $B$ be its Sylow-$2$ and Sylow-$3$ subgroups. Find the order of $A\cap B$ | Your proof is correct, but because you are using Lagrange's Theorem, which is a result about the cardinality of a subgroup, it will be better if you consider $|A\cap B|$ rather than $|a|$. Of course your proof is not wrong, because $|a|=|\langle a\rangle|$, where $\langle a\rangle$ is a subgroup.
Since $A\cap B$ is a subgroup of $A$ and $B$, by Lagrange's Theorem, $|A\cap B|$ divides $|A|$ and $|B|$ and hence divides $\gcd(|A|,|B|)$, which is 1 since $|A|$ and $|B|$ are relatively prime.
Hence $|A\cap B|=1$. |
Derivatives of function when the function is defined differently for different values of x | First of all, since $f'(x)=1+2x\sin\left(\frac1x\right)-\cos\left(\frac1x\right)$, you simply cannot plug in $0$.
Concerning the second question, suppose that you define$$\begin{array}{rccc}g\colon&\mathbb R&\longrightarrow&\mathbb R\\&x&\mapsto&\begin{cases}x&\text{ if }x\neq0\\0&\text{ if }x=0.\end{cases}\end{array}$$If the method that you suggested worked, we would have $g'(0)=0$. But $g$ is the identity function (that is, $g(x)=x$ for each $x$) and therefore $g'(0)=1$.
Finally, my answer to the second question also answers the third one. |
Does being a n x n homogeneous equation with a nontrivial solution mean A must have fewer than n pivots? | The number of pivots is the rank of the matrix.
Therefore, for an $n \times n$ matrix, having $n$ pivots is equivalent to having full rank, i.e. to being invertible, i.e. to having null space equal to $\lbrace 0 \rbrace$, i.e. to having only the trivial solution $X=0$ to the system $AX=0$.
In other words, if the number of pivots is strictly less than $n$, then the rank of the matrix is strictly less than $n$, so the null space has dimension at least $1$, so there exists at least one non-trivial vector $X$ such that $AX=0$. |
Show that the function $f: x \rightarrow x^2$ is uniformly continuous on the set $S = \bigcup \{[n,n + n^{-2}] ~|~n \in \mathbb N\}$ | Note that $$|f(x)-f(y)|=|x^2 -y^2|= |x-y||x+y|.$$
Thus for $x$ and $y$ in $[k,k+\frac {1}{k^2}]$ we have $$ |f(x)-f(y)|\le \frac {1}{k^2} (2k+ \frac {2}{k^2})$$
Let $\epsilon >0$ be arbitrary.
Pick a natural number $n_0$ such that $$ \frac {1}{n_0^2} (2n_0+ \frac {2} {n_0^2})<\epsilon$$
If $n\ge n_0$ then $$ \frac {1}{n^2} (2n+ \frac {2} {n^2}) < \frac {1}{n_0^2} (2n_0+ \frac {2} {n_0^2})<\epsilon$$
The first two intervals intersect at $x=2$; therefore their union, $[1,2\frac {1}{4}]$, requires special attention.
The function $ f(x) =x^2$ is uniformly continuous on $[1,2\frac {1}{4}],$ thus for our given $\epsilon$ there exists a $ \delta _1$ satisfying the condition of uniform continuity on $[1,2\frac {1}{4}].$
Note that $f(x)=x^2$ is also uniformly continuous on each of closed intervals $$ [k,k+\frac {1}{k^2}]$$ for $k=3,...,n_0-1$.
We can find $$\delta _3, \delta _4, \delta _5,...., \delta _{n_0-1}$$ to satisfy the condition of uniform continuity on $ [k,k+\frac{1}{k^2}]$.
Let $\delta_2 = \delta_1$ and choose $$\delta = min( 1/4, \delta _1, \delta _2, \delta_3,...., \delta _{n_0-1})$$.
We have $$ |x-y| < \delta \implies |x^2 -y^2| < \epsilon $$ on the given union.
Thus $f(x) = x^2$ is uniformly continuous on the given union. |
Finding all integral solutions to a particular problem | I am not sure what the author had in mind. But here is something that does use the suggested $(x^2-1)(y^2-1)=z^2+1$.
Imagine that we know, as in the post, that $x$ and $y$ must be even. Suppose one of them, say $x$, is non-zero. Then $x^2-1$ is congruent to $3$ modulo $4$, and is $\ne -1$, so it is divisible by a prime $p$ of the shape $4k+3$.
Then $p$ divides $z^2+1$, that is, $z^2\equiv -1\pmod{p}$. But it is a standard result of elementary number theory that the congruence $t^2\equiv -1\pmod{p}$ has no solution if $p$ is of the form $4k+3$. |
Show that if $S$ is a finite set with n elements, then $S$ has $2^n$ subsets by using mathematical induction | An example might help . . .
Suppose $k = 3$.
Let $T$ be a set of $4$ elements, say $T = \{a,b,c,d\}$.
Then $T = S \cup \{a\}$, where $S = \{b,c,d\}$ is a set of $3$ elements.
Suppose we've already shown that any set with $3$ elements has $2^3 = 8$ subsets. Thus, we know that $S$ has $8$ subsets.
Each subset of $S$ is also a subset of $T$, so $T$ has those $8$ subsets to begin with.
But for each of the $8$ subsets of $S$, there is a new subset of $T$ obtained by including the element $a$ as an additional element. That yields $8$ more subsets.
Thus, $T$ has $2^3 + 2^3 = 2\cdot 2^3 = 2^4$ subsets. |
Show equivalence class of equality | Since you never have any $(a,b) \in R$ with $a \not = b$, that means that every $a$ forms an equivalence class all by itself for any $a \in A$. Or, if you want: $[a] = \{ a \}$ for every $ a \in A$. |
Find the degree of the splitting field of $x^4 + 1$ over $\mathbb{Q}$ | Let $\alpha$ be a root of $x^4+1$. Then $\alpha^3,\,\alpha^5,\,\alpha^7$ are also (distinct!) roots of $x^4+1$. An easy way to see this is to consider 8th roots of unity, i.e. roots of $x^8-1 = (x^4+1)(x^4-1)$. If $\alpha$ is a primitive 8th root of unity, then every odd power $\alpha^{2k+1}$ is a root of $x^4+1$ (just draw a picture). Or you could just check it directly.
This means that $\mathbb Q[\alpha]$ is the splitting field of $x^4+1$. All you have to do now is to prove that $x^4+1$ is irreducible over $\mathbb Q$ to conclude that the degree of the splitting field is $4$.
EDIT: Perhaps a better way to show that $[\mathbb Q[\alpha]:\mathbb Q] = 4$ is to first notice that since $\alpha$ is a root of $x^4+1$ that $[\mathbb Q[\alpha]:\mathbb Q] \leq 4$. Now notice that $\alpha + \alpha^7 = \sqrt 2$, so $\mathbb Q[\sqrt 2]\subseteq \mathbb Q[\alpha]$. But, $\mathbb Q[\sqrt 2]\subseteq\mathbb R$, while $\alpha$ is complex, so $\mathbb Q[\sqrt 2]\neq \mathbb Q[\alpha]$, so it must be that $[\mathbb Q[\alpha]:\mathbb Q] = 4$.
In the graph below, the 8th roots of unity are shown. Red are roots of $x^4+1$ and blue are roots of $x^4-1$. |
Using the mgf to find the mean and variance. | The reason why this function is called the moment generating function is that you can obtain the moments of $X$ by taking derivatives of $M$ and evaluating at $t=0$.
$$\left.\frac{d^n}{dt^n} M(t) \right|_{t=0} = \left.\frac{d^n}{dt^n} E[e^{tX}] \right|_{t=0} = \left. E[X^n e^{tX}]\right|_{t=0} = E[X^n].$$
In particular, $E[X]=M'(0)$ and $E[X^2]=M''(0)$. This gives you the mean and the second moment. To obtain the variance, do $Var(X)=E[X^2]-E[X]^2$. |
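A quick symbolic check of this recipe on a made-up example, the mgf $M(t)=\frac{1}{1-t}$ of an Exponential($1$) variable:

```python
import sympy as sp

t = sp.symbols('t')
M = 1 / (1 - t)                      # mgf of Exponential(1), as an example
EX = sp.diff(M, t).subs(t, 0)        # first moment, M'(0)
EX2 = sp.diff(M, t, 2).subs(t, 0)    # second moment, M''(0)
print(EX, EX2 - EX**2)               # mean 1, variance 1
```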
Normal distribution sample | Look up a table for standard normal distribution. The random variable $X$ is distributed normally. Let $Z = \frac{X-\mu}{\sigma}$ be the standardisation. Then we're interested in
$$ \mathbb P\left (Z\leq \frac{t_0-\mu}{\sigma}\right ) = 0.9 $$
The table can be used to find the closest desirable value and one can then solve for $t_0$.
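In code, the table lookup can be replaced by the inverse CDF; a sketch with made-up parameters:

```python
from scipy.stats import norm

# Find t0 with P(X <= t0) = 0.9 for X ~ N(mu, sigma^2); mu, sigma hypothetical.
mu, sigma = 100.0, 15.0
z = norm.ppf(0.9)  # 90th percentile of the standard normal, ~1.2816
t0 = mu + sigma * z
print(z, t0)
```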
Also, verify quickly what happens when the percentile increases or decreases, or how changing the mean/standard deviation changes the result.
Here is a flexible table and graph for standard normal distribution to try |
Isometries of $\ell^p_n(\mathbb{C})$ | The definition of "isometry" here is inadequate. A norm-preserving surjection (which I take to mean a surjection satisfying $\|f(x)\| = \|x\|$ for all $x$) need not preserve distances, or be linear, or even be continuous. (Choose for each $r \geq 0$ an arbitrary surjection $f_r$ from $\{x: \|x\| = r\}$ to itself. Then the map on the whole space given by $x \mapsto f_{\|x\|}(x)$ is a norm-preserving surjection.) In particular, if "isometry" is interpreted in this sense, there are far more isometries of $\ell^p_n(\mathbb{R})$ than the ones listed in the question.
But one can take the list as a hint at a more appropriate definition of isometry. Here are two candidates:
A function on normed linear spaces $f: V \to W$ is an isometry if $f$ is linear and $\|f(x)\| = \|x\|$ for all $x \in V$.
A function on normed linear spaces $f: V \to W$ is an isometry if $f$ is real linear (that is, satisfies $f(x+y) = f(x) + f(y)$ and $f(tx) = t f(x)$ for all $x, y \in V$ and all $t \in \mathbb{R}$) and satisfies $\|f(x)\| = \|x\|$ for all $x \in V$.
These are the same thing when the field of scalars is taken to be $\mathbb{R}$, but the first is more restrictive than the second if the field of scalars is $\mathbb{C}$ (e.g. the conjugation $z \mapsto \overline{z}$ on $\mathbb{C}$ is not an isometry in the first sense as it is not complex linear). The main motivation for using the second definition with a complex vector space is that in strongly convex spaces at least, this condition completely characterizes the distance-preserving maps (functions $f$ satisfying $\|f(x) - f(y)\| = \|x - y\|$ for all $x,y$) that map $0$ to itself. If one wants an "isometry" to be as close to a "distance-preserving function" as possible, complex linearity is perhaps not essential. Offhand, I do not know a characterization of the isometries, in the second sense, of $\ell^p_n(\mathbb{C})$.
But if one uses the first definition and phrases things appropriately, the complex story is identical to the real story. Identify linear operators on $\ell^p_n$ (real or complex, your choice) with $n \times n$ matrices. Say a matrix is a generalized permutation matrix if it is a product of a diagonal matrix whose entries have modulus $1$ and a permutation matrix. Then when $1 < p < \infty$ and $p \neq 2$, the only isometries of $\ell^p_n$ are the generalized permutation matrices.
It is perhaps surprising that these matrices, which are rather obviously isometries of $\ell^p_n$, are the only isometries of $\ell^p_n$--- and that the set of isometries does not depend on $p$. (As Martin Argerami points out, the Hilbert space case $p=2$ is special; but the sets of isometries in this case, the so-called "orthogonal" and "unitary" matrices, are well understood.)
Assuming you know the basic duality theory of $\ell^p_n$ spaces, you can give a short and elementary proof of this fact. One approach is given (in the real case, but the complex case is almost the same) in Isometries of the $\ell^p$ norm by Chi-Kwong Li and Wasin So in the American Mathematical Monthly Vol. 101 No. 5, pp 452-53. (The authors note that their argument was previously and independently found by R. Mathias.)
This is an exceedingly special case of what is often called the Banach-Lamperti theorem, which like many results in the theory of normed linear spaces is really a family of results, varying in the generality of the hypotheses. The basic theme is that when $p \neq 2$, an isometry from one $L^p$ space to another (the measure spaces need not be finite or the same) must be the product of a suitably "nice" multiplication operator and a composition operator (ie, a map of the form $f \mapsto f \circ \sigma$, where $\sigma$ is a map on the underlying measure spaces). In general, the set of isometries does depend on $p$ and the measures used to define the $L^p$ spaces. Lamperti's original paper from 1958 is available at Project Euclid; there have been various generalizations and improvements since then, as you will find if you Google for them. |
is the limit as k approaches infinity of a Taylor Polynomial of order k, that approximates a function f, the same as the function itself? | In general, no. Take, for instance,$$\begin{array}{rccc}f\colon&\Bbb R&\longrightarrow&\Bbb R\\&x&\mapsto&\begin{cases}e^{-1/x^2}&\text{ if }x\ne0\\0&\text{ if }x=0.\end{cases}\end{array}$$You can check that $(\forall n\in\Bbb Z_+):f^{(n)}(0)=0$. So, $\sum_{n=0}^\infty\frac{f^{(n)}(0)}{n!}x^n=0\ne f(x)$ (unless $x=0$). |
finding the multiplicative inverse in a field | To put the comments in a full answer.
Two ways are presented here to show why $K[a]$ is a field for a given algebraic element $a$.
The direct way: since $K[a]$ is clearly an integral domain, it is enough to show that any non-zero element $p(a)\not = 0$ has a multiplicative inverse. Now since
$a$ is an algebraic element, it has a minimal polynomial $m$. It is clear that $m$ and $p$ are coprime, since $m$ is irreducible and $p$ cannot be a multiple of $m$ because $p(a)\not = 0$. It follows that there exist $\alpha $ and $\beta$ in $K[X]$ such that $\alpha .m+\beta . p=1$. Now evaluate this identity at $a$ to get $\beta(a) . p(a)=1$; hence $p(a)$ is invertible with inverse $\beta(a)$.
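A concrete instance of the direct way (my own example, with $K=\mathbb Q$, $a=\sqrt 2$, $m=X^2-2$ and $p=X+1$): the Bézout identity is
$$1=(X+1)(X-1)-(X^2-2),$$
so evaluating at $a=\sqrt 2$ gives $(\sqrt 2+1)^{-1}=\sqrt 2-1$.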
The indirect way: consider the evaluation map $$ev_a:K[X]\rightarrow L;\; p\mapsto p(a)$$
The map $ev_a$ is clearly a ring homomorphism with image $K[a]$ and kernel the principal ideal $(m)$ generated by the minimal polynomial $m$, which is a maximal ideal because $m$ is irreducible; therefore we have a field isomorphism $K[X]/(m) \cong K[a]$. |
solve for $x$: $9^x-6^x=4^{x+1/2}$ (hints only, please) | Try dividing by $9^x$ throughout, and substitute $(\frac {2}{3} )^x $ as $y $. Also, notice that $\frac {4}{9} = (\frac {2}{3})^2 $. |
Need good material on multifractal analysis | OK, I'm going to hijack this thread even though there's an answer as I haven't found any quality, localized information about multifractals.
As mentioned in the comments, I first heard about multifractals from a Google Tech Talk by Rogene M. Eichler West, which can be found, without sound, on YouTube, called "Multifractals: Theory, Algorithms, & Applications" . Unfortunately Google Video got discontinued after they bought out YouTube and I can't find the original video that had the sound included.
I still do not understand on a deep level what multifractals are doing, how and why they are better than other methods, or how they work, but from what I understand the idea is to generalize the concept of spectrum to include functions that have a scale symmetry, where the scale symmetry can be on many different scales (thus multi-fractal, instead of just being fractal). Just as the Fourier spectrum constructs a profile of the translation invariances of a function, the multifractal spectrum gives information about the scale invariances of a function.
The general methodology seems to be, for a given function $f(t)$:
Find the Hölder exponent, $h(t)$, as a function of time, $t$
Find the singularity spectrum, $D(\alpha)$
Where $D(\alpha) \stackrel{def}{=} D_F\{x, h(x) = \alpha\}$, and $D_F\{\cdot\}$ is the (Hausdorff?) dimension of a point-set.
I believe the idea is that for chaotic/fractal/discontinuous functions, at any point they can be characterized, locally, by the largest term of their Taylor expansion and the Hölder exponent is a way to characterize this. Once you have the function, $h(t)$, characterizing the Hölder exponent, you use that to construct the singularity spectrum. I believe the singularity spectrum is a synonym for the multi-fractal spectrum.
From what I can tell, the specifics of how to calculate $h(t)$ and $D(\alpha)$ in practice vary from approximating them outright by their definition or by using wavelets to approximate the Hölder exponent and then using a Legendre transform to approximate the multifractal spectrum.
From what I understand, $D(\alpha)$ tends to be (or is always?) concave. I have only the vaguest notion of why this is so. How one relates wavelet transforms to finding the Hölder exponent, how one uses the Legendre transform to find the multi-fractal spectrum, why the multi-fractal spectrum should be concave, what kind of intuitive feeling one should get about a function from viewing the spectrum, amongst many others, I still have no idea about.
The multiplicative cascade seems to be a canonical example of a multifractal process.
Online, "A Brief Overview of Multifractal Time Series" gives a terse run through of multifractals. They claim to be able to tell a healthy heart from one that is suffering from congestive heart failure (see here).
Here are some slides giving a brief overview of multifractals. Near the end of the slides, they give a wavelet transform of the Devil's staircase function and talk a bit about using Wavelet Transform Modulus Maxima Method (WTMM), which appears to be a standard tool when doing this type of analysis (anyone have any good links for this?).
Looking around, I found Wavelets in physics by J. C. van den Berg that had this section web accessible for a definition of the singularity spectrum.
Rudolf H. Riedi seems to have a few papers out there that describe multifractal processes. Here are a few:
"Multifractal Processes"
"Introduction to Multifractals"
Along with Jacques L. Véhel "TCP traffic is multifractal: a numerical study."
While focused on finance, Laurent Calvet and Adlai Fisher provide a lot of introductory terminology in "Multifractality in asset returns: Theory and evidence".
And of course Mandelbrot, along with other authors, has many papers, some of which are:
"Large Deviations and the Distribution of Price Changes"
"A Multifractal Model of Asset Returns"
Fractional Brownian Motion is also mentioned frequently, but I have no real idea of how they relate. Large Deviation Theory also seems to be mentioned, but I don't know how this relates to multifractals either. I believe I've also seen entropy, phase transitions and statistical mechanics mentioned here and there. I would be curious if and what the relation to these subjects and multifractals is.
I feel like I'm stumbling around trying to understand this subject and I have yet to find a cohesive text that brings together enough intuition, math and implementation details so that I feel like I have a firm grasp of what's going on. I would welcome any additional resources or corrections to this answer. |
A confusion on a Theorem | Yes. In other words, the factor group $G / Z(G)$ cannot be simultaneously cyclic and nontrivial. |
Convergence of an integral | First let's do a change of variables to get rid of the $x$ in the exponent. Let $y=xu$. Then we have
$$F(x)=\int\limits _{-\infty}^{\infty}f(y)\frac{1}{x\sqrt{2\pi}}\mathrm{e}^{-y^{2}/2x^{2}}\, dy=\int\limits _{-\infty}^{\infty}f(xu)\frac{1}{\sqrt{2\pi}}\mathrm{e}^{-u^{2}/2} du.$$
Next, what is the definition of continuity? Given $\epsilon>0$, we need to show that for fixed $x$
$$\biggr|F(x+\delta)-F(x)\biggr|=\biggr|\int\limits _{-\infty}^{\infty}\left(f((x+\delta)u)-f(xu)\right)\frac{1}{\sqrt{2\pi}}\mathrm{e}^{-u^{2}/2}\, du\biggr|<\epsilon.\ \ \ \ \ \ (1)$$
But if $f$ is bounded, say $|f|\leq M$, then the integral on the right hand side is uniformly bounded by $2M\int\limits _{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}\mathrm{e}^{-u^{2}/2} du=2M$ for all $\delta$. Hence when taking the limit as $\delta\rightarrow 0$ the dominated convergence theorem applies so we can switch the order of integration and the limit. Consequently since $f$ is continuous the limit on the right hand side is zero, and we have $$\lim_{\delta\rightarrow 0} F(x+\delta)-F(x)=0$$ so that $F$ is continuous.
Alternative:
We could split up the integral instead of using the dominated convergence theorem. For the given $\epsilon$ we could choose $N$ so large that the tails satisfy
$$\int_{-\infty}^{-N}\frac{2M}{\sqrt{2\pi}}\mathrm{e}^{-u^{2}/2}du+\int_N^\infty \frac{2M}{\sqrt{2\pi}}\mathrm{e}^{-u^{2}/2}du<\frac{\epsilon}{2}.$$ On the interval $[-Nx,Nx]$, $f$ will be uniformly continuous, so we can choose $\delta$ so small that $|f((x+\delta)u)-f(xu)|\leq \frac{\epsilon}{4N}$ for $|u|\le N$. This implies $$\biggr|\int\limits _{-N}^{N}\left(f((x+\delta)u)-f(xu)\right)\frac{1}{\sqrt{2\pi}}\mathrm{e}^{-u^{2}/2}\, du\biggr|<\frac{\epsilon}{2}.$$ Upon adding these inequalities we obtain equation $(1)$ and the proof is complete. |
How to evaluate $\lim\limits_{n\to +\infty}\frac{1}{n^2}\sum\limits_{i=1}^n \log \binom{n}{i} $ | We have:
$$ \prod_{i=1}^{n}\binom{n}{i} = \frac{n!^n}{\left(\prod_{i=1}^{n}i!\right)\cdot\left(\prod_{i=1}^{n-1}i!\right)}\tag{1}$$
hence:
$$ \frac{1}{n^2}\sum_{i=1}^{n}\log\binom{n}{i}=\frac{1}{n^2}\left((n-1)\log\Gamma(n+1)-2\sum_{i=1}^{n-1}\log\Gamma(i+1)\right)\tag{2}$$
and by Stirling's approximation:
$$ \log\Gamma(z+1) = \left(z+\frac{1}{2}\right)\log z-z+O(1) \tag{3}$$
and partial summation we get:
$$ 2\sum_{i=1}^{n-1}\log\Gamma(i+1) = n^2\left(\log n-\frac{3}{2}\right)+O(n)\tag{4}$$
so:
$$ \sum_{i=1}^{n}\log\binom{n}{i} = \frac{n^2}{2}+O(n\log n)\tag{5} $$
and our limit is just $\color{red}{\frac{1}{2}}.$
A simpler approach is given by the identity:
$$\begin{eqnarray*}\sum_{i=1}^{n}\log\binom{n}{i}&=&(n-1)\log n!-2\sum_{i=1}^{n-1}\sum_{k=1}^{i}\log k \\ &=&(n-1)\log n+\sum_{k=1}^{n-1}(2k-n-1)\log k\tag{6}\end{eqnarray*}$$
hence partial summation gives:
$$\begin{eqnarray*}\sum_{i=1}^{n}\log\binom{n}{i}&=&(1-n)\log\left(1-\frac{1}{n}\right)-\sum_{k=1}^{n-2}(k^2-kn)\log\left(1+\frac{1}{k}\right)\\&=&O(1)+\sum_{k=1}^{n-2}(n-k)+O\left(\sum_{k=1}^{n-2}\frac{n-k}{k}\right)\\&=&\frac{n^2}{2}+O(n\log n).\tag{7}\end{eqnarray*} $$
Still another approach through summation by parts:
$$\begin{eqnarray*}\sum_{i=1}^{n}\log\binom{n}{i}&=&-\sum_{i=1}^{n-1}i\cdot\log(i+1)+\sum_{i=1}^{n-1}(n-i)\log(i)\\&=&O(n)+\sum_{i=1}^{n-1}(2i-n)\log(n-i)\\&=&O(n)+\sum_{i=1}^{n-1}(2i-n)\log\left(1-\frac{i}{n}\right)\tag{8}\end{eqnarray*}$$
gives that the limit is provided by a Riemann sum:
$$\begin{eqnarray*}\lim_{n\to +\infty}\frac{1}{n^2}\sum_{i=1}^{n}\log\binom{n}{i}&=&\int_{0}^{1}(2x-1)\log(1-x)\,dx\\&=&\int_{0}^{1}\frac{x^2-x}{x-1}\,dx\\&=&\int_{0}^{1}x\,dx=\color{red}{\frac{1}{2}}.\tag{9}\end{eqnarray*}$$
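A quick numerical sanity check of the limit (my own sketch):

```python
from math import comb, log

# (1/n^2) * sum of log C(n, i) should approach 1/2, with O(log(n)/n) error.
for n in (10, 100, 1000):
    s = sum(log(comb(n, i)) for i in range(1, n + 1)) / n**2
    print(n, s)
```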
This limit can be related with the entropy of the binomial distribution through the Kullback-Leibler divergence, for instance. |
An infinite sum from Green function | Hint: Use
$$\sin \alpha \sin \beta = \frac12\left( \cos (\alpha- \beta) - \cos(\alpha +\beta) \right)$$
and
$$\sum_{n=1}^{\infty} \frac{\cos nx}{n^2} = \frac{x^2}{4} - \frac{\pi x}{2} + \frac{\pi^2}{6}, \quad 0 \le x \le \pi,$$
which can be obtained from the Fourier series of $\frac{\pi-x}{2}$. |
If $a_n \to z$, then does $\frac{{n \choose 1} a_1 + {n \choose 2} a_2 \dots {n \choose n} a_n}{2^n} \to z$? | Something far more general is true. Define the doubly half-infinite matrix $ c_{mn}$,
$$ c_{mn} = \frac1{2^m}\binom{m}{n}\mathbb 1_{n\le m} $$
then you're asking if
$$ t_m := \sum_{n=0}^\infty c_{mn} a_n \to z?$$
According to Hardy's Divergent Series, Theorem 2 (page 43) (apparently due to Toeplitz and Schur):
Theorem 2. (Paraphrased) For any such infinite matrix $(c_{mn})_{m,n\ge 0}$, the statement
$$\text{“$a_n \to z$ implies $t_m \to z$''}$$
is true iff the following conditions for $c_{mn}$ are satisfied:
$\sum_{n=0}^\infty |c_{mn} | < H $ where $H<\infty$ is uniform in $m$,
for each $n$, $c_{mn} \xrightarrow[m\to\infty]{} 0$,
$\sum_{n=0}^\infty c_{mn} \xrightarrow[m\to\infty]{} 1$.
For your specific $c_{mn}$, for each of these conditions:
$\sum_{n=0}^\infty |c_{mn} | =\sum_{n=0}^\infty c_{mn} = 2^{-m}\sum_{n=0}^m \binom{m}{n} = 1$.
$c_{mn} = \frac{m!}{2^m n!(m-n)!} \le \frac{m^n}{n!2^m} \xrightarrow[m\to\infty]{} 0.$
see 1.
So the assumptions for Theorem 2 are verified, and the result holds.
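A small numeric illustration with an example sequence of my own, $a_n = 3 + \frac{1}{n+1}\to 3$:

```python
from math import comb

# The binomially weighted averages t_m = 2^(-m) sum_n C(m,n) a_n
# should converge to the same limit as a_n.
z = 3.0
for m in (10, 100, 1000):
    t = sum(comb(m, n) / 2**m * (z + 1.0 / (n + 1)) for n in range(m + 1))
    print(m, t)  # approaches 3.0
```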
(This type of result is a "regularity" result for a summation method, and such methods are the main focus of Hardy's book.) |
Is the zero function the only function which satisfies... | Over any interval $(0,a)$,
$$f(a) \le \int_0^a \max\{f'(x):x\in(0,a]\}\, dx.$$
Since $f'(x)$ is non-decreasing, $\max\{f'(x):x\in(0,a]\} = f'(a)$, hence
$$f(a) \le af'(a).$$
But we have the restriction
$$xf'(x) \le c f(x)$$
with $c<1$, so
$$f(x) \le xf'(x) \le c f(x) < f(x)$$
whenever $f(x)>0$, a contradiction. Indeed the $0$ function is the only function that meets all the criteria. |
Why does the tensor product of rings correspond to the product of their spectra? | Let $C$ be a commutative ring and $A$ and $B$ be $C$-algebras. You have $C = k$, but I’ll be a bit more general.
As Tobias Kildetoft already said, $A \otimes_C B$ is not a quotient of $A × B$, but instead a quotient of the very much larger structure
$$F(A×B) = \bigoplus_{(a,b) ∈ A × B} C·(a,b)$$
as a $C$-module – not as a ring. That module is the free $C$-module on the set $A×B$ (hence the notation ‘$F(…)$’).
It then just turns out that this quotient $A \otimes_C B$ can also be endowed with a ring structure, turning it into a $C$-algebra. The ring structure comes from the universal property of the tensor product of $C$-modules, since multiplication in a $C$-algebra is a $C$-bilinear map.
So you can’t easily compare the ideals of $A \otimes_C B$ with the ideals of $A × B$, that’s why your intuition fails.
Another thing: The high-level reason that $\operatorname{Spec} \colon \mathrm{C\,Rings}^\mathrm{op} → \mathrm{Schemes}$ turns the tensor product into the fibre product is not that it “is a contravariant functor”, but that it is, as such, a right adjoint to the global sections functor $Γ\colon \mathrm{Schemes} → \mathrm{C\,Rings}^\mathrm{op}$, and category theory tells us that right adjoints always preserve limits. Since the tensor product of $C$-algebras is the coproduct in $\mathrm{C\, Rings}$, it’s the product in the opposite category, so it is preserved by $\mathrm{Spec}$.
To help your intuition, maybe think of the very intuitive statements
$$\mathbb A^m_C × \mathbb A^n_C = \mathbb A^{m+n}_C \quad\text{and}\quad C[X_1,…,X_m] \otimes_C C[Y_1,…,Y_n] = C[X_1,…,X_m,Y_1,…Y_n].$$ |
Expected values of a dice game with a 30-sided die and a 20-sided die. | Letting $D_k$ be the random value of a fair $k$-sided dice roll, the probability that $A$ will win is:
$$\begin{align}
\mathsf P(D_{30}>D_{20}) & = \sum_{b=1}^{20} \mathsf P(D_{20}=b)\; \mathsf P(D_{30}>b)
\\ & = \frac{1}{20}\sum_{b=1}^{20}\left(1- \frac{b}{30}\right)
\\ & =\frac{13}{20}
\end{align}$$
Allowing a reroll effectively means taking the maximum result of two rolls ($D_{20,1}, D_{20,2}$), so then the probability of A winning is:
$$\begin{align}
\mathsf P(D_{30}>\max(D_{20,1}, D_{20,2})) & =\frac{10}{30} + \sum_{a=1}^{20} \mathsf P(D_{30}=a)\;\mathsf P(D_{20,1}< a)\;\mathsf P(D_{20,2}< a)
\\ & = \frac{10}{30} + \frac{1}{30}\sum_{a=1}^{20} \left(\frac {a-1} {20}\right)^2
\\ & = \frac{647}{1200}
\end{align}$$
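Both fractions are easy to confirm by brute force (a sketch):

```python
from fractions import Fraction
from itertools import product

# P(D30 > D20), enumerated exactly.
win = sum(a > b for a, b in product(range(1, 31), range(1, 21)))
print(Fraction(win, 30 * 20))        # 13/20

# P(D30 > max of two d20 rolls), enumerated exactly.
win2 = sum(a > max(b1, b2) for a, b1, b2 in
           product(range(1, 31), range(1, 21), range(1, 21)))
print(Fraction(win2, 30 * 20 * 20))  # 647/1200
```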
From these probabilities you can calculate the expected returns.
Can you complete the rest? |
Fixed point of a line through two points on parabola. | Let $S$ be the focus, $T$ the intersection between line $AB$ and the axis, $H$ and $K$ the projections of $A$ and $B$ on the axis. We want to prove that $VT$ is of fixed length.
We'll repeatedly use Apollonius' definition of parabola, which entails:
$$
AH^2=4VS\cdot VH,\quad BK^2=4VS\cdot VK.
$$
From the similitude of triangles $AHT$, $BKT$ we have $TK:TH=BK:AH$, and
$(TK+TH):TH=(BK+AH):AH$, hence (supposing WLOG that $VK>VH$):
$$
\begin{align}
TH
&={TK+TH\over BK+AH}AH={VK-VH\over BK+AH}AH={1\over4VS}{BK^2-AH^2\over BK+AH}AH\\
&={BK-AH\over4VS}AH={BK\cdot AH\over4VS}-{AH^2\over4VS}={BK\cdot AH\over4VS}-VH.\\
\end{align}
$$
It follows that: $\displaystyle VT=TH+VH={BK\cdot AH\over4VS}$.
But from Pythagoras' theorem we also have:
$$
\begin{align}
AB^2
&=(AH+BK)^2+(VK-VH)^2=AH^2+VH^2+BK^2+VK^2+2AH\cdot BK-2VH\cdot VK\\
&=AV^2+BV^2+2AH\cdot BK-2VH\cdot VK=AB^2+2AH\cdot BK-2VH\cdot VK.
\end{align}
$$
(The last equality amounts to $AV^2+BV^2=AB^2$, i.e. to the hypothesis that the chord $AB$ subtends a right angle at the vertex $V$.) It follows that:
$$
AH\cdot BK=VH\cdot VK={1\over16VS^2}AH^2\cdot BK^2,
\quad\text{whence:}\quad
AH\cdot BK=16VS^2.
$$
Substituting that into our previous result for $VT$ gives then $VT=4VS$, and the proof is complete. |
Can the empty set be the image of a function on $\mathbb{N}$? | No. By the definition of $f:A \to B$, for every $a\in A$, $f(a)$ must exist and $f(a) \in B$. So if $A$ is not empty then $f(A)$ is not empty (although it can have as few as one element).
However it is possible that $A$ is empty in which case $f(A)$ is (obviously) also empty.
$f: \emptyset \to B$ is the empty function in this case.
=====
Or to put it really simple $f(1)$ has to be in the image so the image can't be empty. |
Show that Brownian motion on the unit circle is exponentially ergodic and has the uniform measure as its invariant distribution. | 1. The uniform distribution is invariant
Let $\theta_0$ have a uniform distribution in the circle, so that its pdf is $f(\theta)= \frac 1{2\pi}$ for $\theta \in [0,2\pi]$ and $0$ otherwise.
If $\theta_0$ moves according to the law of Brownian motion during a time interval $[0,t]$, then its new distribution over the reals can be calculated by applying the Brownian transition function to the initial density $f$.
Note that arguments $\theta$ and $\theta + 2k\pi$ represent the same point in the circle, so in order to get the probability density $f_t$ over the circle, their probability densities should be added. If we write $p_t(x,y)$ for the standard Brownian transition density
$$p_t(x,y) = \frac{1}{\sqrt{2\pi t}}e^{-\frac{(y-x)^2}{2t}}$$
between reals $x$ and $y$, then the density $q_t(\phi, \theta)$ between points in the circle would read $$q_t(\phi, \theta) =
\cases{
\sum_{k\in\mathbb Z} p_t(\phi, \theta + 2k\pi) & \text{if $\phi, \theta\in [0,2\pi]$,} \\
0 & \text{otherwise.}
}$$
Using this new transition density, the distribution of the point at time $t$ is
$$\begin{align}
f_t(\theta)
&= \int_{0}^{2\pi}q_t(\phi,\theta)f(\phi) \, d\phi\\
&= \sum_{k\in\mathbb Z}\int_{0}^{2\pi}p_t(\phi,\theta + 2k\pi)f(\phi) \, d\phi\\
&= \sum_{k\in\mathbb Z} \int_{0}^{2\pi} p_t(\phi - 2k\pi,\theta) \frac 1{2\pi} \, d\phi\\
&= \frac 1{2\pi}\sum_{k\in\mathbb Z} \int_{-2k\pi}^{-2(k-1)\pi} p_t(\phi,\theta) \, d\phi\\
&= \frac 1{2\pi} \int_{-\infty}^{\infty} p_t(\phi,\theta) \, d\phi = \frac 1{2\pi} = f(\theta).\\
\end{align}$$
(Changes in the order of summation are justified by the monotone convergence theorem, as the quantities are positive.)
In other words, the distribution doesn't change, as we wished to show.
2. Exponential ergodicity
Now we want to establish the exponential ergodicity of Brownian motion in the circle towards the uniform distribution. Writing $Q^t(x,A)$ for its transition function and $\mu$ for the uniform probability measure, this means that$^1$ $$\sup_{A\in\mathcal B[0,2\pi]}|Q^t(x,A)-\mu(A)| \le M(x)\rho^{t} $$ for some positive $\rho < 1$, finite $M(x)$. Here I used the definition of the total variation distance between probability measures.
We will actually prove that the pdf of $\theta_t = W_t \bmod 2\pi$ converges to the pdf of the uniform distribution over time. Indeed, the probability density of $\theta_t$ starting at $0$ is
\begin{align}
g_t(\phi)=\sum_{k\in \mathbb Z} p_t(0,\phi+2k\pi)
&= \sum_{k\in \mathbb Z} \frac 1 {\sqrt{2\pi t}}e^{-\frac 1{2t}(\phi+2k\pi)^2}\\
&= \frac 1 {\sqrt{2\pi t}}e^{-\phi^2/2t}\sum_{k\in \mathbb Z} e^{-2k\pi\phi/t}e^{ -2k^2\pi^2/t}.
\end{align}
We want to know the limit of this last expression as $t\to\infty$. Note that the terms of the series approach $1$ and it's difficult to see clearly how that compares to the decreasing factor outside the sum. Still, following the approach in this post, we can express it in terms of Jacobi's theta function $$\vartheta_3(z,q) = \sum_{k\in\mathbb Z}q^{k^2}e^{2kiz}$$ and apply Jacobi's imaginary transformation (more on that below) to get a more tractable expression.
Applying the transformation involves introducing a new variable $\tau$ defined by $q=e^{i\pi\tau}$. The same function expressed in terms of this variable reads $$\vartheta_3(z\mid \tau) = \sum_{k\in\mathbb Z}e^{i\pi\tau k^2}e^{2kiz}.$$ In our case we have
$$z =\frac{\phi\pi i}{t},\quad q = e^{-2\pi^2/t},\quad \tau=\frac{2\pi i}{t}$$
and using this notation our expression becomes
$$g_t(\phi) = \frac 1 {\sqrt{2\pi t}}e^{-\phi^2/2t} \vartheta_3(z\mid \tau).$$
Now we can apply the imaginary transformation $(6)$ in this link, and get the equivalent form
$$\frac 1 {\sqrt{2\pi t}}e^{-\phi^2/2t} \sqrt{\frac{t}{2\pi}} e^{-iz^2/\tau\pi} \vartheta_3\Bigr(-\frac{z}{\tau} \Bigr | \Bigl. -\frac{1}{\tau}\Bigl) = \frac 1{2\pi} \vartheta_3\Bigr(-\frac{\phi}{2} \Bigr | \Bigl. \frac{it}{2\pi}\Bigl) = \frac 1{2\pi} \vartheta_3(z', q'),$$
with $z' = -\phi/2,\,\, q'= e^{-t/2}$. Finally, notice that $q'$ approaches $ 0$ as $t\to\infty$, therefore only the constant term survives in the series $\vartheta_3(z', q')$. Thus, we have pointwise convergence of the density $g_t(\phi) \rightarrow \frac 1{2\pi}=f(\theta)$ for all $\phi\in[0,2\pi]$. Note that the same argument holds for the process started at another point $\theta_0$ if we replace $\phi$ by $\phi-\theta_0 $.
This establishes convergence in distribution of the Brownian motion on the circle to the uniform distribution. Moreover, for any Borel subset $A$ of $[0,2\pi]$ we have
$$|Q^t(\theta_0,A)-\mu(A)|
= \left |\int_A Q^t(\theta_0, d\phi) -\int_A f(\theta)\,d\phi\right|
\le \int_A |g_t(\phi-\theta_0)- f(\theta)|\,d\phi $$
$$= \int_A \frac 1{2\pi}|\vartheta_3(z', q')- 1|\,d\phi
= \int_A \frac 1{2\pi}\left|\sum_{k\ne 0}e^{-tk^2/2} e^{-ki(\phi-\theta_0)}\right|\,d\phi
\le \int_A \frac 1{2\pi}\sum_{k\ne 0 }e^{-tk^2/2}\,d\phi$$
$$ \le 2\sum_{k\ge 1}e^{-tk^2/2} \le 2\sum_{k\ge 1}e^{-tk/2} = 2\frac{e^{-t/2}}{1-e^{-t/2}} \le 4e^{-t/2} $$
for $t\ge \log 4$, so that the process is exponentially ergodic with $M=4, \rho=e^{-1/2}$, as we wished to show.
$^1$ Ergodicity definition taken from this article. |
Show that if $y-x,z-y,x-z\in U$ with $\dim(U)=1$ then one of $x,y,z$ is a convex combination of the other two | Choose a $v$ that spans $U$.
Write $y-x = \alpha_1 v$, $z-y = \alpha_2 v$, $z-x = \alpha_3 v$. Without loss of generality, suppose that $0 \leq \alpha_2 \leq \alpha_3$. We then have
$$
y = z - \alpha_2 v = z - \frac{\alpha_2}{\alpha_3}(z - x) = \left(1 - \frac{\alpha_2}{\alpha_3} \right)z + \frac{\alpha_2}{\alpha_3}x
$$
So, $y$ is a convex combination of $x$ and $z$, since $0 \leq \alpha_2/\alpha_3 \leq 1$. |
Implicit Differentiation of 3 variables | You are considering the equation:
\begin{equation*} (-5x+z)^{4}-2x^{3}y^{6}+3yz^{6}+6y^{4}z=10\end{equation*}
and you wish to calculate $\frac{dy}{dz}$. It follows that
\begin{align*} 0&=\frac{d}{dz}{10}=\frac{d}{dz}\left[(-5x+z)^{4}-2x^{3}y^{6}+3yz^{6}+6y^{4}z\right] \\ &=4(z-5x)^{3}\left(1-5\frac{dx}{dz}\right)-2\left[6x^{3}y^{5}\frac{dy}{dz}+3x^{2}\frac{dx}{dz}y^{6}\right]+3\left[6yz^{5}+z^{6}\frac{dy}{dz}\right]+6\left[y^{4}+4zy^{3}\frac{dy}{dz}\right]\end{align*}
Can you take it from here to verify whether or not your answer is correct? |
A question about a proof of the "Least Upper Bound Property" in the Tao's Real Analysis notes | We already know that $x_0-\dfrac{1}{n}$ is not a upper bound for $E$, while $x_0+\dfrac{K}{n}$ is an upper bound for $E$. Let's then prove that:
There exists a natural number $i$ with $0\leq i\leq K$ such that $x_0+\dfrac{i}{n}$ is an upper bound for $E$, but $x_0+\dfrac{i-1}{n}$ is not an upper bound for $E$.
Let's give two different proofs: One by induction (as in the text) and one using the well-order of natural numbers.
Proof 1 (Induction). Suppose no such $i$ existed, that is, for every natural number $0\leq i\leq K$, either both numbers $x_0+\dfrac{(i-1)}{n}$ and $x_0+\dfrac{i}{n}$ are upper bounds for $E$ or both of them are not upper bounds for $E$ (let's call this property $(*)$).
Let
$$B=\left\{i\in\mathbb{N}:i>K\text{ or }x_0+\dfrac{i}{n}\text{ is not an upper bound for }E\right\}.$$
Let's show, by induction, that $B=\mathbb{N}$. First, notice that $0\leq 0\leq K$, but $x_0+\dfrac{0-1}{n}=x_0-\dfrac{1}{n}$ is not an upper bound for $E$, so $x_0+\dfrac{0}{n}$ is also not an upper bound for $E$ (by $(*)$); thus $0\in B$.
Now, suppose that $i\in B$. We have two cases:
If $i\geq K$, then $i+1>K$, so $i+1\in B$.
The second case is $i<K$. Then $i+1\leq K$, and $x_0+\dfrac{(i+1-1)}{n}=x_0+\dfrac{i}{n}$ is not an upper bound for $E$ (because $i\in B$), so $(*)$ again implies that $x_0+\dfrac{i+1}{n}$ is not an upper bound for $E$, so $i+1\in B$.
By induction, we proved that $B=\mathbb{N}$. In particular, since $K$ is not strictly larger than $K$, we have that $x_0+\dfrac{K}{n}$ is not an upper bound for $E$, a contradiction. Thus, there exists an $i$ with the desired properties.
Proof 2 (Well-ordering). Let $A=\left\{i\in\mathbb{N}:x_0+\dfrac{i}{n}\text{ is an upper bound for }E\right\}$. Notice that $K\in A$, so $A$ is nonempty and hence has a minimum element (by the Well-Ordering Principle). Let $i=\min A$. In particular, $i\leq K$ and $x_0+\dfrac{i}{n}$ is an upper bound for $E$. It remains only to show that $x_0+\dfrac{(i-1)}{n}$ is not an upper bound for $E$. Again we have two cases:
If $i=0$, then $x_0+\dfrac{i-1}{n}=x_0-\dfrac{1}{n}$ is not an upper bound for $E$ (as we know), as we wanted.
If $i\neq 0$, then $i-1$ is a natural number which is lesser than the minimum of $A$, so $i-1\not\in A$, that is, $x_0+\dfrac{(i-1)}{n}$ is not an upper bound for $E$, as we wanted. |
Calculate $\sum\limits_{n\geq 1}\frac{1}{1+2+\ldots+n} $ | Since $1+2+\cdots+n=\frac{n(n+1)}{2}$, each term equals $\frac{2}{n(n+1)}=2\left(\frac{1}{n}-\frac{1}{n+1}\right)$, so the series 'telescopes', with the $r$-th partial sum being $2\left(1-\frac{1}{r+1}\right)=\frac{2r}{r+1}$, which tends to $2$. |
Is $\sum_{k=0}^n \left(k+1\right)\left(C^n_k\right)^2 = \frac{n+2}{2} C^{2n}_n$ for any positive integer $n$? | Using $[x^n]$ as the coefficient-extractor operator (it returns the coefficient of $x^n$ in the Maclaurin series of its argument) we have
$$\sum_{k=0}^{n}(k+1)\binom{n}{k}^2 =[x^n]\left[\left(\sum_{k=0}^{n}\binom{n}{k}(k+1)x^k\right)\cdot\left(\sum_{k=0}^{n}\binom{n}{k}x^k\right)\right]$$
hence the LHS can be written as
$$ [x^n]\left[(1+x)^n\cdot \frac{d}{dx}\sum_{k=0}^{n}\binom{n}{k}x^{k+1}\right]=[x^n]\left[(1+x)^n\cdot \frac{d}{dx}\left(x(1+x)^n\right)\right] $$
or as
$$ [x^n] \left[(1+x)^n\cdot\left((1+x)^n+nx(1+x)^{n-1}\right)\right]=[x^n](1+x)^{2n}+n[x^{n-1}](1+x)^{2n-1} $$
which, by the binomial theorem, equals
$$ \binom{2n}{n}+n\binom{2n-1}{n-1}=\left(1+\frac{n}{2}\right)\binom{2n}{n} $$
as was to be shown. |
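A quick numerical check of the identity for small $n$ (my own sketch):

```python
from math import comb

# Verify sum_k (k+1) C(n,k)^2 == (n+2)/2 * C(2n,n) for small n.
for n in range(1, 8):
    lhs = sum((k + 1) * comb(n, k)**2 for k in range(n + 1))
    rhs = (n + 2) * comb(2 * n, n) // 2  # C(2n,n) is even for n >= 1
    print(n, lhs, rhs, lhs == rhs)
```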
Is $R[X,Y]/(XY-1)$ a finitely generated $R[X]$-module? | If I'm not mistaken, this map is never finite, unless $R$ is the zero ring. Indeed, since $X$ is not a unit of $R[X]$, there is always some maximal ideal $M$ of $R[X]$ containing $X$. The prime ideals of $R[X]_{X}$ are in bijection with the prime ideals of $R[X]$ not containing $X$, and the map on spectra $\varphi^{\ast} \colon \mathrm{Spec}(R[X]_{X}) \to \mathrm{Spec}(R[X])$ induced by the canonical localization morphism $\varphi \colon R[X] \to R[X]_{X}$ realizes this inclusion of prime ideals. Since the set of prime ideals of $R[X]$ which contain $X$ is nonempty (it contains $M$), $\varphi^{\ast}$ cannot be surjective. But every integral ring extension induces a surjective map on spectra by Going Up, so we're done. |
Proving that $\int_0^{\pi/2} |\exp(ire^{it})|dt < \pi /2r$. | Since $| e^z|=e^{\Re(z)}$, your integral equals:
$$\int_{0}^{\pi/2}e^{-r \sin t}dt\leq \int_{0}^{\pi/2}e^{-2rt/\pi}dt\leq\int_{0}^{+\infty}e^{-2rt/\pi}dt=\frac{\pi}{2r},$$
as wanted.
Notice that a more careful estimation is given by:
$$\int_{0}^{\pi/2}e^{-r \sin t}dt\leq\int_{0}^{\pi/2}\frac{dt}{1+r\sin t}=\int_{0}^{1}\frac{2\, dt}{t^2+2rt+1}\leq\int_{0}^{1}\frac{2\,dt}{2rt+1}=\frac{\log(2r+1)}{r}.$$ |
How to solve $x^n - Ax + A - 1 = 0, n≥4$? | You may just start Newton's method at a point suitably to the right of $1$, like $x=2\sqrt[n-1]{\frac{A}{n}}-1$, chosen by noticing where the derivative of the given polynomial vanishes. |
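A sketch of that suggestion (the function name and test values are mine):

```python
def newton_root(n, A, tol=1e-12, max_iter=100):
    # Newton's method for f(x) = x^n - A*x + A - 1, started to the right of 1,
    # beyond the point where the derivative n x^(n-1) - A vanishes.
    x = 2 * (A / n) ** (1 / (n - 1)) - 1
    for _ in range(max_iter):
        f = x**n - A * x + A - 1
        fp = n * x**(n - 1) - A
        x, prev = x - f / fp, x
        if abs(x - prev) < tol:
            break
    return x

print(newton_root(5, 10.0))  # a root of x^5 - 10x + 9 besides the trivial x = 1
```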
Let $A \in \mathbb{C}^{3 \times 3}$ with $A^2(A^2 - 4 I) = 0$. What are possible minimal polynomials of $A^3$? | HINT
You can greatly reduce your work by noting a couple of simple results.
Since $A^4=4A^2$ you know that $A^{12}=64A^6$. If $B=A^3$, then $B^4=64B^2$ and so $B$'s minimal polynomial divides $x^4-64x^2$.
Also, since the matrices are $3\times 3$, the minimal polynomial is at most a cubic.
For each possibility check if you can find a suitable $A$ and $B$.
Addendum
In view of the comments below it might be helpful for me to add some general points about minimal polynomials.
The minimal polynomial $m(x)$ will divide any polynomial satisfied by the matrix. If you can find several polynomials the matrix has to satisfy then that will greatly reduce the possibilities.
Every root of the minimal polynomial is an eigenvalue. Again this could reduce the possibilities.
Every matrix satisfies the characteristic polynomial, $|A-xI|=0$. So $m(x)$ must divide this polynomial. (It might be considerably simpler than the characteristic polynomial.) |
Find point on Elliptic Curve | Step 1: pick $x_0 \bmod p$ and compute $f=x_0^3+ax_0+b$.
Step 2: compute the Legendre symbol $L(x_0)=\left(\frac{f}{p}\right)$. This should be "fast" using quadratic reciprocity and other such results about Legendre symbols. If $L(x_0)=0$ or $-1$, then return to Step 1 and pick a different value of $x_0$.
Step 3: compute a square root $y_0$ of $f$, i.e., find $y_0\bmod p$ such that $y_0^2 \equiv f \bmod p$, using a method such as Cipolla's algorithm or the Tonelli-Shanks algorithm.
The point $(x_0,y_0)$ is on $E$. |
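A sketch of the three steps with SymPy (the curve parameters here are made up):

```python
from sympy.ntheory import legendre_symbol, sqrt_mod

# Find a point on y^2 = x^3 + a*x + b over F_p; p, a, b are hypothetical.
p, a, b = 10007, 3, 7
for x0 in range(p):
    f = (x0**3 + a * x0 + b) % p
    if f == 0 or legendre_symbol(f, p) == 1:     # Step 2: f is a square mod p
        y0 = sqrt_mod(f, p)                      # Step 3: e.g. Tonelli-Shanks
        print((x0, y0), (y0 * y0 - f) % p == 0)  # the point, plus a check
        break
```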
Improper integral comparison test | For positive $x$, the top is $\ge 3x^2$. For $x\ge 1$, the bottom is $\le x^3+6x^3+x^3+4x^3$.
Edit: For your added question, the range of values of $x$ is not specified. However, if $x\ge 1$, then the top is $\le 4x$. The bottom is $\ge \sqrt{x^5}$. So for $x\ge 1$, the whole thing is $\le \frac{4x}{x^{5/2}}$, which simplifies to $\frac{4}{x^{3/2}}$. |
Find the closed form of the generating function | Notice that your sum is equivalent to $2 \sum_{n=0}^\infty nz^n$. Then consider the geometric series:
$$\sum_{n=0}^\infty z^n = \frac{1}{1-z}$$
(whenever $|z|<1$). Take the derivative of both sides with respect to $z$, then multiply both sides by $z$. You'll find an expression for the summation you have. I'll leave the calculations/justification up to you. |
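For reference, carrying out those steps gives:
$$\frac{d}{dz}\sum_{n=0}^\infty z^n=\sum_{n=1}^\infty nz^{n-1}=\frac{1}{(1-z)^2},\qquad\text{so}\qquad 2\sum_{n=0}^\infty nz^n=\frac{2z}{(1-z)^2}.$$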
Expected Value of "Double" Random Variable | That's not so wrong at all, actually. Not quite there, but you do have an intuitive grasp of the problem.
The Law of Iterated Expectation, also known as Law of Total Expectation, or the Tower Property, states:
$$\mathsf E(Y)~=~\mathsf E\big(\mathsf E(Y\mid X)\big)$$
Now you have been told that $Y\mid X ~\sim~\mathcal U[0;X]$ and $X\sim\mathcal U[0;n]$, so you can take it from here.
$$\begin{align}\mathsf E(Y) ~&=~ \mathsf E( X/2) \\[1ex] &= n/4\end{align}$$ |
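A quick Monte Carlo check of $\mathsf E(Y)=n/4$ (illustration only; $n=8$ is an arbitrary choice):

    import random

    n, trials = 8, 10 ** 6
    total = 0.0
    for _ in range(trials):
        x = random.uniform(0, n)        # X ~ U[0, n]
        total += random.uniform(0, x)   # Y | X ~ U[0, X]
    print(total / trials, n / 4)        # both should be close to 2.0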
Finding a set of points that minimize distance from any point in a sphere | Maybe you want to parameterize based on $$I_N = \int_{\Sigma = S^3} d(x,N) d\Sigma(x).$$ Because you have $N \subset \mathbb{R}^3$ and $|N|$, you have a cost function $f(y) : \mathbb{R}^{3|N|} \to \mathbb{R}$ defined as $$f(y) = \int_{\Sigma = S^3} d(x,N) d\Sigma(x).$$
The vector $y$ encodes, sorted, all the coordinates in the set $N$, and by definition you have a lot of symmetries, for example if $P$ is a permutation matrix (permuting points though, not single coordinates) it is easy to prove that $f(Py) = f(y)$. |
Inclusion-Exclusion Principle; certain intersection has to be empty | From "there is at least one $r$-combination of $S$", we can see that $r$ is smaller than or equal to $n_1 + n_2 + \cdots + n_k$. When trying to figure out the cases for the intersections, we need to assume there are $n_1+1, n_2+1, \ldots, n_k+1$ elements respectively, whose total is clearly bigger than $r$. Hence, the intersection is the empty set.
If $A$ is well ordered and $B$ is well ordered then $A\times B$ with the lexicographic order is well ordered | Your proof is okay, but I would slightly modify a couple of things:
The definition of $a_{i_0}$ is slightly cumbersome, and the reason is that you are trying to use the same index set for both the $a_i$ and the $b_j$. It's better to just say that $U$ is a non-empty set of pairs.
Then you can simply define $a_i=\min\{a\in A\mid\exists b(a,b)\in U\}$.
Similarly, the use of $i_0$ as the index for both $a$ and $b$ is confusing and unclear. It is better to define $b_j$ as $\min\{b\in B\mid (a_i,b)\in U\}$.
Note that in both the definition of $a_i$ and $b_j$ you omitted the condition that the pairs are taken from $U$. |
Highest and lowest, exercise. | For a counterexample, you could take $A = B$.
My minimal polynomial over this field of rational functions has two indeterminates, please help me understand my error | As Jyrki mentioned it is not hard to see that $G$ has order $6$. Indeed we have that $\sigma^3 = \tau^2 = 1$. Moreover $\tau^{-1} \sigma \tau = \sigma^2$ and hence we have that $G$ is the dihedral group of 6 elements.
Furthermore, as Jyrki also noted we have that $t$ is a primitive element of the extension $F \subset E$. Hence we have that $\sigma(t),\sigma^2(t), \tau(t),\sigma\tau(t),\sigma^2\tau(t)$ are roots of the minimal polynomial of $t$, too. In particular it is given by:
$$(x-t)(x-2t)(x-4t)\left(x - \frac 1t\right)\left(x - \frac 2t\right)\left(x - \frac 4t\right) =$$
$$(x^3 - 7tx^2 + 14t^2x - 8t^3)\left(x^3 - \frac{7x^2}{t} + \frac{14x}{t^2} - \frac{8}{t^3}\right) =$$
$$(x^3 - t^3)\left(x^3 - \frac{1}{t^3}\right) = x^6 - \left(t^3 + \frac 1{t^3}\right)x^3 + 1$$
Here the middle coefficients vanish because $\sigma^3(t)=8t=t$ forces $7=0$ in $F$, i.e. the ground field has characteristic $7$. You can indeed note that the result is a polynomial in $F[x]$.
Are the Euler-Lagrange equations equivalent to the functional having a stationary point? | Intuitively, yes... if $DS(q_0)$ were nonzero, there would be some nice function $r_0$, vanishing at $t=0$ and $t=1$, such that $S(q_0 + \varepsilon r_0) - S(q_0)$ was $\Theta(\varepsilon)$. But since the Euler-Lagrange equations are satisfied at $q_0$,
$$
S(q_0 + \varepsilon r_0)-S(q_0) = O(\varepsilon^2)+\varepsilon\int_{0}^{1}dt \left(r_0 \partial_1{\cal{L}}(q_0,\dot q_0, t)+\dot r_0\partial_2{\cal{L}}(q_0,\dot q_0, t)\right) \\ =O(\varepsilon^2)+\varepsilon\int_{0}^{1}dt \left(r_0 \frac{d}{dt}\partial_2{\cal{L}}(q_0,\dot q_0, t)+\dot r_0\partial_2{\cal{L}}(q_0,\dot q_0, t)\right) \\ =O(\varepsilon^2)+\varepsilon\, \big(r_0\,\partial_2{\cal{L}}(q_0,\dot q_0, t)\big)\Big\vert_0^1=O(\varepsilon^2).
$$ |
Solving $(1+x^2)y' - 2xy = (1+x^2)\arctan(x)$ | Take into account that
$$
\frac{d}{dx}\arctan(x) = \frac{1}{1+x^2}
$$
so that
$$
\int dx\; \frac{\arctan(x)}{1+ x^2} = \int dx\; \arctan(x) \frac{d}{dx}\arctan(x) = \int dx\; \frac{d}{dx}\left(\frac{1}{2}\arctan^2(x)\right) = \frac{1}{2}\arctan^2(x) + c
$$ |
Integration of $\int_\pi^{+\infty}e^{-st}(t-\pi)\,dt$ | Integration by parts will give you (choosing $f = t-\pi$ and $g' = e^{-st}$)
$${\displaystyle\int}\left(t-{\pi}\right)\mathrm{e}^{-st}\,\mathrm{d}t=-\dfrac{\left(t-{\pi}\right)\mathrm{e}^{-st}}{s}-{\displaystyle\int}-\dfrac{\mathrm{e}^{-st}}{s}\,\mathrm{d}t +C$$
which is
$${\displaystyle\int}\left(t-{\pi}\right)\mathrm{e}^{-st}\,\mathrm{d}t=-\dfrac{\left(t-{\pi}\right)\mathrm{e}^{-st}}{s}-\dfrac{\mathrm{e}^{-st}}{s^2} +C$$
Organizing, we get
$${\displaystyle\int}\left(t-{\pi}\right)\mathrm{e}^{-st}\,\mathrm{d}t=-\dfrac{\left(s\left(t-{\pi}\right)+1\right)\mathrm{e}^{-st}}{s^2}+C$$
Evaluating at the boundaries (with $s>0$, so the term at the upper limit vanishes), we get
$${\displaystyle\int_{\pi}^{\infty}}\left(t-{\pi}\right)\mathrm{e}^{-st}\,\mathrm{d}t=\dfrac{\mathrm{e}^{-{\pi}s}}{s^2}$$ |
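An optional symbolic check with sympy (note $s>0$ is needed for convergence):

    import sympy as sp

    t = sp.symbols('t')
    s = sp.symbols('s', positive=True)
    print(sp.integrate((t - sp.pi) * sp.exp(-s * t), (t, sp.pi, sp.oo)))
    # should print exp(-pi*s)/s**2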
Sextonion Cayley Table | Let $S$ be a real $6$-dimensional vector space with basis $1,i,j,k,r,s$. We can define the multiplication on this basis as we want and then extend bilinearly to $S$. This yields "sextonions", as you wish, with $i^2=j^2=k^2=-1, r^2 =s^2=1$. The question is which identities we would require from this bilinear product. We know that there is no real normed division algebra of dimension $6$, so necessarily "some" properties will get lost. However, there are many $6$-dimensional commutative algebras (but not necessarily associative), see for example here.
$\left|x^{s-1} \right|=x^{\left| s-1 \right|}$? | If $x$ is positive real and $s=\sigma+it$ with $\sigma,t\in\mathbb R$ then $x^{s-1}=x^{\sigma-1}\cdot x^{it}$. The factor $x^{it}=e^{it\ln x}$ has modulus $1$, so after taking absolute values we have $$\left |x^{s-1}\right|=x^{\sigma-1}$$instead. |
Example that the union of sigma algebra is not an algebra | If you just mean two sets that each carry a sigma algebra such that the union of the two sigma algebras is not an algebra, that's pretty trivial. Take the power sets of two completely different sets, say $P(\mathbb R)$ and $P(\text{set of all finite words generated from two letters } \{a,b\})$. Each is a sigma algebra under the usual set operations. Take the union of those two collections, and you don't even have an algebra, as $\{0\}\cup \{ab\}$ is not in the union of those two power sets.
Stinespring dilation | The partial trace is not a standard feature of the Stinespring decomposition. I don't think you can get the decomposition you want in general.
Write
$$
U=\begin{bmatrix}V_1\\ \vdots \\ V_n\end{bmatrix}.
$$
Then
$$
UMU^H=\begin{bmatrix}V_1MV_1^*&V_1MV_2^*&\cdots&V_1MV_n^*\\
V_2MV_1^*&V_2MV_2^*&\cdots&V_2MV_n^*\\
\vdots \\
V_nMV_1^*&V_nMV_2^*&\cdots &V_nMV_n^*
\end{bmatrix},
$$
and
$$
\text{tr}_B(UMU^H)=\begin{bmatrix}\text{tr}(V_1MV_1^*)&\text{tr}(V_1MV_2^*)&\cdots&\text{tr}(V_1MV_n^*)\\
\text{tr}(V_2MV_1^*)&\text{tr}(V_2MV_2^*)&\cdots&\text{tr}(V_2MV_n^*)\\
\vdots \\
\text{tr}(V_nMV_1^*)&\text{tr}(V_nMV_2^*)&\cdots &\text{tr}(V_nMV_n^*)
\end{bmatrix}.
$$
You want this last matrix to be equal to $\text{tr}(M)A$ for all $M$. That is,
$$\tag{1}
\text{tr}(MV_j^*V_k)=\text{tr}(M)A_{kj}
$$
for all $M$. Taking $M=E_{kj}$ (matrix unit) we see that $\text{tr}(V_j^*V_k)=0$ if $k\ne j$. But now, taking $M=I$, we deduce that $A_{kj}=0$ when $k\ne j$, i.e. $A$ has to be diagonal. Now we have $\text{tr}(MV_j^*V_k)=0$, when $j\ne k$, for all $M$; it follows that $V_j^*V_k=0$.
From $U^HU=I_n$ we get $$\tag{2}V_1^*V_1+\cdots+V_n^*V_n=I_n.$$ Now adding over $k=j$ on $(1)$ we obtain
$$
\text{tr}(M)=\text{tr}(M)\,\sum_kA_{kk},
$$
so $\text{tr}(A)=1$.
Fix $k$. Going back to $(1)$, we have
$$
\text{tr}(MV_k^*V_k)=\text{tr}(M)A_{kk}
$$
for all $M$. As the right-hand-side does not change as we choose $M=E_{11},\ldots,E_{nn}$, we get that the diagonal of $V_k^*V_k$ is constant, equal to $A_{kk}$. Taking $M=E_{st}$ for any $s\ne t$ we get
$$
(V_k^*V_k)_{st}=\text{tr}(E_{1t}V_k^*V_kE_{s1})=\text{tr}(E_{st}V_k^*V_k)=\text{tr}(E_{st})A_{kk}=0.
$$
Thus, $V_k^*V_k=A_{kk}I_n$. As $A_{kk}>0$, we deduce that the matrix $V_k/A_{kk}^{1/2}$ is a unitary. But now we get that, for $j\ne k$,
$$
0=V_j^*V_k
$$
is a scalar multiple of a unitary: a contradiction. In conclusion, no such $U$ can exist. |
How to find a counter-model for a formula with implication and only one variable. | $\forall x \ p(x)$ is a much stronger claim than $\exists x \ p(x)$, and so it is easy to come up with counterexamples to the claim that $\exists x \ p(x) \to \forall x \ p(x)$
For example, take as the domain all natural numbers, and let $p(x)$ be '$x$ is prime'.
Then $\exists x \ p(x)$ is true (yes, there are some numbers that are prime), but $\forall x \ p(x)$ is false (no, it is not true that all numbers are prime). Hence, with this interpretation, $\exists x \ p(x) \to \forall x \ p(x)$ is false.
Natural map from cokernel of a monad | A monad is in particular a complex, meaning that $\beta \circ \alpha = 0$. Thus, $\operatorname{im} \alpha \subseteq \ker \beta$. Use the fundamental theorem on homomorphisms. |
How would this (x,y) function look like? | Geogebra 3D formula:
a(x,y)=If(0<x ∧ x<y ∧ y≤1, (1)/(y^(2)), 1≥x ∧ x>y ∧ y>0, -((1)/(x^(2))), 0)
The input space is the blue/grey plane (more specifically, the square $[0,1] \times [0,1]$ on the grey plane), the output space is the blue axis. Your function is in 3 parts. The line $x=y$ maps to 0, the part above that line on the input plane maps to $1/y^2$ (the part above the grey plane), the part below that line on the input plane maps to $-1/x^2$ (the part below the grey plane)
(Screenshots of the GeoGebra plot omitted.)
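For readers without GeoGebra, here is a rough matplotlib equivalent of that surface (an illustrative sketch, not from the original answer; the domain starts slightly above $0$ to avoid the blow-up along the axes):

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(0.05, 1, 80)
    y = np.linspace(0.05, 1, 80)
    X, Y = np.meshgrid(x, y)
    # 1/y^2 above the line x = y, -1/x^2 below it, 0 on the line itself.
    Z = np.where(X < Y, 1 / Y ** 2, np.where(X > Y, -1 / X ** 2, 0.0))

    ax = plt.figure().add_subplot(projection='3d')
    ax.plot_surface(X, Y, Z)
    plt.show()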
For trees with $10$ vertices, consider those which have a vertex of degree $8$. What is the number of such trees? | The distinction is that there are $10 \times 9$ inequivalent ways of labeling the $K_{1,8}$ subgraph.
We choose one of the $10$ vertices to be the degree-$8$ vertex.
Then, one of the remaining $9$ vertices to be its non-neighbor.
Then, we attach the non-neighbor to one of the $8$ leaves of $K_{1,8}$. (Your step.)
This gives $10 \times 9 \times 8 = 720$.
Can $ \int \frac{1}{x^5} dx $ be used to approximate $ \int \frac{1}{x^5+0.001} dx $? | For indefinite integrals, no, not in general. Indefinite integration is a tricky business and you have to be much more precise about what type of result you're looking for. It's much easier to be precise with definite integration.
For definite integrals, yes, with some restrictions. For fixed constants $0 <a<b$ we can consider the integral
$$
I(\varepsilon) = \int_a^b \frac{1}{x^5 + \varepsilon}dx
$$
where $\varepsilon$ is a small positive constant. (The condition $a>0$ is to avoid the singularity of $1/x^5$ at $x=0$, but we could have done $a<b<0$ as well. For this problem we can also take $0<a<b=\infty$, though this isn't always possible for other integrals.) Then we would be interested in the value of $I(0.001)$.
We can get an approximate answer by looking for an asymptotic expansion of $I(\varepsilon)$ as $\varepsilon \to 0$. In general there are lots of techniques that go into computing an asymptotic expansion. In this case we can get the first few terms by computing the Taylor series of $f(x;\varepsilon) = 1/(x^5+\varepsilon)$ with respect to $\varepsilon$ near $\varepsilon = 0$, which looks like
$$
f(x;\varepsilon) = \frac{1}{x^5} - \frac{1}{x^{10}}\varepsilon + \frac{1}{x^{15}}\varepsilon^2 - \cdots
$$
We can then obtain an asymptotic series expansion of $I(\varepsilon)$ by integrating term by term:
$$
I(\varepsilon) \sim \int_a^b \frac{1}{x^5}dx - \varepsilon\int_a^b \frac{1}{x^{10}}dx + \varepsilon^2\int_a^b \frac{1}{x^{15}}dx - \cdots
$$
which is valid as $\varepsilon\to 0$. This means that if you take any finite number of terms on the right hand side, then the resulting expression will be a good approximation to $I(\varepsilon)$ for small values of $\varepsilon$. You can check with a computer that this is the case, as described below (the plot itself is omitted here; a script to reproduce it is sketched at the end of this answer):
Here the horizontal axis is $\varepsilon$, ranging over $\varepsilon\in[0,1]$. We set the limits of integration to $[a,b]=[1,2]$ just for illustration. We plot the values of $I(\varepsilon)$ in red, the two-term expansion
$$
\int_1^2\frac{1}{x^5}dx - \varepsilon\int_1^2\frac{1}{x^{10}}dx
$$
in blue, and the three-term expansion
$$
\int_1^2\frac{1}{x^5}dx - \varepsilon\int_1^2\frac{1}{x^{10}}dx + \varepsilon^2\int_1^2 \frac{1}{x^{15}}dx
$$
in green. We can see that as $\varepsilon\to 0$, the blue and green curves form better and better approximations to the red curve, and the green curve becomes a good approximation faster than the blue curve, as expected. |
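A short script reproducing that comparison (a sketch assuming scipy and matplotlib; the colors match the description above):

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.integrate import quad

    a, b = 1, 2
    # c[k] = integral of x^(-5(k+1)) over [a, b]: the expansion coefficients.
    c = [quad(lambda x, k=k: x ** (-5 * (k + 1)), a, b)[0] for k in range(3)]
    eps = np.linspace(0, 1, 200)
    exact = [quad(lambda x, e=e: 1 / (x ** 5 + e), a, b)[0] for e in eps]

    plt.plot(eps, exact, 'r', label='I(eps)')
    plt.plot(eps, c[0] - eps * c[1], 'b', label='two-term expansion')
    plt.plot(eps, c[0] - eps * c[1] + eps ** 2 * c[2], 'g', label='three-term expansion')
    plt.xlabel('eps')
    plt.legend()
    plt.show()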
Prove the following statements are equivalent | Let us prove the result.
Let $(X,\Sigma, \mu)$ be a measure space.
$(1 \Rightarrow 2)$. Suppose for every $f\geq 0$,$\int_{X}fd\mu<\infty\iff\int_{X}e^{f}d\mu<\infty$.
Take $f=0$. Clearly, $\int_{X}fd\mu=0<\infty$. So
$$ \mu(X)= \int_X 1 d\mu= \int_X e^0 d\mu= \int_X e^f d\mu < \infty$$
Now, let us prove that $X$ is not the union of infinitely many sets of positive measure.
Suppose that $X$ is the union of infinitely many sets of positive measure. Since $\mu(X)<+\infty$, it follows that for all $\delta >0$ there is $E \in \Sigma$ such that $0<\mu(E)<\delta$. It follows, from Lemma 1 and Lemma 2 in Remark 1, that $L^2(X) \subsetneq L^1(X)$. Take a function $f \in L^1(X)$ such that $f \notin L^2(X)$. But then we have that $|f|\geq 0$, $\int_{X}|f|d\mu<\infty$ and
$$ \int_X e^{|f|} d\mu \geqslant \int_X \left (1+|f| + \frac{1}{2}|f|^2 \right) d\mu \geqslant \frac{1}{2} \int_X |f|^2 d\mu =+\infty $$
Contradiction to $(1)$.
So $X$ is not the union of infinitely many sets of positive measure.
$(2 \Rightarrow 1)$ In fact, $(2)$ entails a stronger result than $(1)$. Let us assume $(2)$. Given any $f \geq 0$ and given any $n \in \mathbb{N}$ let
$$ A_n=\{x \in X: n \leq f(x) < n+1 \}$$
Clearly $\{A_n\}_n$ is an infinite family of disjoint sets and $X=\bigcup_nA_n$. Since $X$ is not the union of infinitely many sets with positive measure, it is easy to prove that there is $n_0 \in \mathbb{N}$ such that if $n \geq n_0$, then $\mu(A_n)=0$. So $f<n_0$ $a.e.$, hence $e^f<e^{n_0}$ $a.e.$. So we have $\int_X f d\mu \leqslant n_0\mu(X)<\infty$ and $\int_X e^f d\mu \leqslant e^{n_0}\mu(X)<\infty$. Then $(1)$ follows immediately.
Remark 1:
Lemma 1. Let $(X,\Sigma, \mu)$ be a measure space. If $\mu(X)<+\infty$ is finite then $L^2(X) \subseteq L^1(X)$.
Proof Since $\mu(X)<+\infty$, we have $\chi_X \in L^2(X)$. Then for any $f\in L^2(X)$, applying Hölder inequality, we have
$$ \|f\|_1= \|f\, \chi_X \|_1\leqslant \|\chi_X\|_2 \|f\|_2 =(\mu(X))^{1/2} \|f\|_2 $$
Lemma 2. Let $(X,\Sigma, \mu)$ be a measure space. If $L^1(X) \subseteq L^2(X)$, then there is $\delta >0$ such that, for any $E\in \Sigma$, if $\mu(E)>0$ then $ \mu(E) > \delta $.
Proof: Let $T: L^1(X) \rightarrow L^2(X)$ be the inclusion map, that is, $T(f)=f$. $T$ is obviously linear. Using the Closed Graph Theorem, we can prove $T$ is continuous (bounded).
In fact, let $\{f_n\}$ be a sequence in $L^1(X)$ which converges to $f$ in the $L^1$ norm, and to $g$ in the $L^2$ norm. We extract a subsequence $\{f_{n_j}\}$ which converges to $f$ $a.e.$; this subsequence still converges to $g$ in the $L^2$ norm; now extract from this subsequence a new subsequence which converges to $g$ $a.e.$.
So we have a subsequence which converges $a.e.$ to $f$ and $g$, hence $f=g$, and, since both $L^1(X)$ and $L^2(X)$ are Banach spaces, by the Closed Graph Theorem we conclude that $T$ is continuous (bounded).
Since $T \neq 0$, we have that $0<\|T\| <+\infty$.
Now, let us prove that, given any $E \in \Sigma$ such that $\mu(E)>0$, we have $\mu(E) \geqslant \frac{1}{\|T\|^2}$.
If $\mu(E)=+\infty$, then it is trivial that $\mu(E) \geqslant \frac{1}{\|T\|^2}$.
Suppose $0<\mu(E)<+\infty$, and let
$$f_E= \frac{1}{\mu(E)} \chi_E$$
It follows immediately that $\|f_E\|_1=1$ and $\|f_E\|_2=\frac{1}{\sqrt{\mu(E)}}$.
So
$$ \frac{1}{\sqrt{\mu(E)}}=\|f_E\|_2 =\|T(f_E)\|_2 \leqslant \|T\| \|f_E\|_1 =\|T\| $$
So
$$ \mu(E) \geqslant \frac{1}{\|T\|^2} $$
Taking $\delta = \frac{1}{2\|T\|^2}$, we have $ \mu(E) > \delta $.
Remark 2: In fact, we proved a slightly more general result. We proved that the following statements are equivalent:
1. for every $f\geq 0$, $\int_{X}fd\mu<\infty\iff\int_{X}e^{f}d\mu<\infty$;
2. $\mu$ is a finite measure and $X$ is not the union of infinitely many sets of positive measure;
3. for every $f\geq 0$, $\int_{X}fd\mu<\infty$ and $\int_{X}e^{f}d\mu<\infty$.
Related rates of a sphere being filled | It seems that you're using $V$ for two different things -- both the volume of the entire sphere, and the volume of water currently in the sphere.
What you need to do is choose a variable for the volume of water in the sphere, and then use the formula for a spherical cap to relate it to the depth of water. Only after you have this relation will it make sense to start doing calculus on the situation.
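For reference, the standard spherical-cap formula (with $r$ the sphere's radius and $h$ the depth of the water; this fact is assumed, not derived, here) gives exactly the relation needed:
$$V=\frac{\pi h^{2}}{3}(3r-h),\qquad \frac{dV}{dt}=\pi h(2r-h)\frac{dh}{dt}.$$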
Joint distribution of different sided dice | Then I took the products of P(S) and P(T) for each element and put everything together in a table;
No, the random variables are not independent; you cannot obtain the joint probability by multiplying the marginals. You have to use conditional probability.
For a start, those $1/104$ should be $1/16$. $$ \def\P{\operatorname{\mathsf P}}\begin{align}\P(S=4, T=t) ~&=~ \P(S=4)\P(T=t\mid S=4) \\[1ex] &=~ \tfrac 14\cdot\tfrac 14\cdot\mathbf 1_{t\in\{1,2,3,4\}} \\[1ex] & =~ \tfrac 1{16}\cdot\mathbf 1_{t\in\{1,2,3,4\}}\end{align}$$
And so forth. The rest should be recalculated likewise. |
A question regarding the proof of Riesz Representation Theorem for the dual of Lp. | Why is
$$g\mapsto S(g)-\int\limits_{X}{fg\text{ d}\mu}\text{ for all }g\in L^p$$
continuous?
By assumption, $S$ is a bounded linear functional on $L^p$. Since $f\in L^q$, as established in the previous line, by Holder's inequality we have
$$\left|\int\limits_{X}{fg\text{ d}\mu}\right|\le\|f\|_{L^q}\|g\|_{L^p}.$$
Hence, $g\mapsto\int\limits_{X}{fg\text{ d}\mu}$ is a bounded linear functional on $L^p$. Thus, $g\mapsto S(g) - \int\limits_{X}{fg\text{ d}\mu}$ is also a bounded linear functional on $L^p$, and hence continuous.
Right above equation (15), why is $|g_n-g|^p\le |g|^p$?
By construction, $g_n = g$ on $X_n$ and $g_n = 0$ on $X\backslash X_n$. Thus, $g_n-g = 0$ on $X_n$ and $g_n-g = -g$ on $X\backslash X_n$, so $|g_n-g|^p\le |g|^p$ on $X$, as $|g_n-g|^p$ at any point is either $0$ or $|g|^p$.
Where in the proof is it showing that the mapping is isometric and isomorphic?
Presumably, equation (8) says something like the following: for $f\in L^q(X,\mu)$, define $T_f\in (L^p(X,\mu))^*$ by
$$T_f(g) = \int\limits_{X}{fg\text{ d}\mu}\text{ for all }g\in L^p(X,\mu).$$
Let $T:L^q(X,\mu)\rightarrow(L^p(X,\mu))^*$ by $T(f) = T_f$. Clearly, $T$ is linear. Now, $T$ is surjective, since the above proof shows that for every $S\in (L^p(X,\mu))^*$, there exists $f\in L^q(X,\mu)$ such that $T_f = S$. To show that $T$ is an isometry from $L^q(X,\mu)$ to $(L^p(X,\mu))^*$, we must show that $\|T_f\|_{(L^p)^*} = \|f\|_{L^q}$ for every $f\in L^q$. Now by Holder's inequality
$$\|T_f\|_{(L^p)^*} = \sup\limits_{\|g\|_p=1}{\langle T_f, g\rangle} = \sup\limits_{\|g\|_p=1}{\int\limits_{X}{fg\text{ d}\mu}}\le\sup\limits_{\|g\|_p=1}{\|f\|_{L^q}\|g\|_{L^p}} = \|f\|_{L^q}$$
so $\|T_f\|_{(L^p)^*}\le \|f\|_{L^q}$ for all $f\in L^q$. However, the above proof also showed that if $S\in (L^p)^*$ and $S = T_f$, then $\|f\|_{L^q}\le\|S\|_{(L^p)^*}$. In particular, $\|f\|_{L^q} \le \|T_f\|_{(L^p)^*}$ for all $f\in L^q$. This implies $\|T_f\|_{(L^p)^*} = \|f\|_{L^q}$ for all $f\in L^q$, and so $T$ is indeed an isometry from $L^q(X,\mu)$ to $(L^p(X,\mu))^*$. I leave it to you to show why $T$ is an isomorphism, if it is not already clear by now. |
To show that a function defined by integral is absolutely continuous | Nice problem. Sketch: Define $$(1)\,\,\,\,g(x) = \int_0^x f(s,x)ds + \int_0^x f(x,t)dt.$$ The definition makes sense by Fubini: for a.e. $x$ we have both integrands in $L^1([0,1]).$ Check that both functions of $x$ on the right of (1) are in $L^1([0,1]).$ Hence $g\in L^1([0,1]).$ Now you want to verify that $\int_0^y g(x)\,dx = F(y)$ for all $y\in [0,1].$ |
Help phrasing (geometric?) probability distribution-like problem without rigorously invoking limit | If I understand what you're asking for correctly, I believe you want the $x$ value (with the $x$-axis corresponding to all possible distances $D$) where the maximum $y$ value occurs in the Continuous probability distribution graph.
The $x$-axis for your question is, in general, from $-\infty$ to $\infty$, but for distances this can be limited to $0$ to $\infty$ since distances are non-negative. However, in your specific case, you only need to be concerned with the range of allowable distances of $1$ to $3$, so if the $x$-axis extends outside this range, the $y$ value will just always be $0$ there. Note the total area under this type of graph is always $1$. As stated in the link, the $y$ value is the Probability density function, with this being an appropriately normalized value of the arc length (the normalization is such to ensure the total area under the graph is $1$) you mentioned in your question. Also, as indicated in the first link, the probability of the value of the distance $D$ being between $D_1$ and $D_2$ would be the area under the graph between those $2$ points on the $x$-axis.
If this is not what you're looking for, please give some more details. Thanks. |
Automorphisms of non-abelian groups of order 27 | The non-abelian group of order $p^3$ with no elements of order $p^2$ is the Sylow $p$-subgroup of $\operatorname{GL}(3,p)$. Its automorphism group can also be viewed as a group of $3\times3$ matrices, the affine general linear group,
$$\operatorname{AGL}(2,p) = \left\{ \begin{pmatrix}a & b& e\\ c& d& f\\ 0 & 0 & 1\end{pmatrix} : a,b,c,d,e,f \in \mathbb{Z}/p\mathbb{Z},\; ad-bc ≠ 0 \right\}, $$
which is the semi-direct product of $\operatorname{GL}(2,p)$ on its natural module.
This description is reasonably famous, especially when considering non-abelian groups of order $p^{2n+1}$ with no elements of order $p^2$ whose center and derived subgroup have order $p$. Instead of $\operatorname{GL}(2,p)$ you get a variation on $\operatorname{Sp}(2n,p)$, that simplifies to $\operatorname{GL}(2,p)$ when $n=1$.
The non-abelian group of order $p^3$ with an element of order $p^2$ and $p ≥ 3$ has as its automorphism group a semi-direct product of $\operatorname{AGL}(1,p)$ with the dual of its natural module, so you get all $3×3$ matrices
$$\left\{ \begin{pmatrix}a & b& 0\\ 0& 1& 0\\ c & d & 1\end{pmatrix} : a,b,c,d \in \mathbb{Z}/p\mathbb{Z},\; a ≠ 0 \right\}. $$
In both cases the "module part" of the semi-direct product is the group of inner automorphisms and the quotient ( $\operatorname{GL}(2,p)$ and $\operatorname{AGL}(1,p)$ ) are the outer automorphism groups.
You can read about some of this in section A.20 of Doerk–Hawkes, or Winter (1972).
Winter, David L.
“The automorphism group of an extraspecial p-group.”
Rocky Mountain J. Math. 2 (1972), no. 2, 159–168.
MR297859
Doerk, Klaus; Hawkes, Trevor.
Finite soluble groups.
de Gruyter Expositions in Mathematics, 4. Walter de Gruyter & Co., Berlin, 1992. xiv+891 pp. ISBN: 3-11-012892-6
MR1169099 |
Wikipedia Raabe's test proof doubt on second part | Since $$\log\left(\left(1+\frac RN\right)\cdots\left(1+\frac Rn\right)\right)=R\log n+\mathcal O(1),$$ taking exponentials gives $$\left(1+\frac RN\right)\cdots\left(1+\frac Rn\right)=e^{R\log n+\mathcal O(1)}=e^{\mathcal O(1)}\cdot n^R$$ and $\exp(\mathcal O(1))$ is always greater than some constant $c$. |
Monotonicity of vector fields | If $I$ is your interval, and for each $t\in I$ your vector field at $t$ is $\big(x(t), y(t), z(t)\big)$, all you need to do is check that for each $t\in I$
$$x(t) < y(t) < z(t),$$
or the analogous inequalities with $>$ instead of $<$.
These conditions can be met regardless of whether each individual coordinate function is increasing, decreasing or not monotonic at all. |
Differentiability of absolute function | Thanks to Dominique who pointed out a serious flaw in my previous argument.
Let $f(x) = |u(x)|$.
The map $x \mapsto |x|$ is differentiable except at $x=0$.
Let $Z= u^{-1} ( \{0 \} )$, and $C = (Du)^{-1} ( \{0 \} )$.
If $x \in Z \cap C$, then $f$ is differentiable at $x$.
If $x \in Z \setminus C$, the implicit function theorem shows that there
is a neighbourhood $U$ of $x$ such that $U \cap Z$ has measure zero.
Since $\mathbb{R}^n$ is second countable, it follows that $Z \setminus C$ has
measure zero. |
Laplace Transform initial value problem to solve | Hint:
OK, let's go slow.
$\displaystyle \mathcal{L} (y'') = s^2 y(s) -s y(0) - y'(0)$
$\displaystyle \mathcal{L} (y') = sy(s) - y(0)$
$\displaystyle \mathcal{L} (y) = y(s)$
Now, we substitute all of those into the DEQ and arrive at:
$$s^2 y(s) -s -2 +3(y(s)-1) + 2y(s) = \frac{6}{s+1}$$
$$y(s) = \frac{s^2+6 s+11}{(s+1)^2(s+2)} = \frac{3}{s+2}-\frac{2}{s+1}+\frac{6}{(s+1)^2}$$
Now, you can do the inverse of each term on the RHS.
$$y(t) = e^{-2 t} (e^t (6 t-2)+3)$$ |
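An optional sympy check of that solution (this assumes the IVP was $y''+3y'+2y=6e^{-t}$, $y(0)=1$, $y'(0)=2$, as read off from the transformed equation above):

    import sympy as sp

    t = sp.symbols('t')
    y = sp.exp(-2 * t) * (sp.exp(t) * (6 * t - 2) + 3)
    ode = sp.diff(y, t, 2) + 3 * sp.diff(y, t) + 2 * y - 6 * sp.exp(-t)
    print(sp.simplify(ode))                        # 0, so the ODE is satisfied
    print(y.subs(t, 0), sp.diff(y, t).subs(t, 0))  # 1, 2: the initial conditions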
Total angle within a closed surface... | Very true! There is such a function. If you have a convex $n$-gon, then the sum of its interior angles is $(180n-360)^{\circ}$. To prove this, just triangulate the $n$-gon by drawing the diagonals from one vertex (figure omitted):
The angles of each triangle add up to $180^{\circ}$, and the sum of all those angles equals the sum of all interior angles of the $n$-gon. (Note that an $n$-gon can be triangulated into $n-2$ triangles.)
From this we can also deduce a very interesting fact: whatever $n$ is, the sum of the exterior angles of a convex $n$-gon always adds up to $360^{\circ}$.
Multivariable Calc. Continuity at Origin | You first have to define a value of $f(x,y)$ at the origin in order to check continuity there.
If you take polar coordinates: $$x=r \cos{t}$$ $$y=r \sin{t}$$
Then $f(r,t)= \frac{r^{a+1}\cos{t}\sin^a{t}}{r^2}=r^{a-1}\cos{t}\sin^a{t}$.
If $a=1$ the limit does not exist.
If $a>1$ the limit is zero,as $r \to 0^+$
If $a<1$ the limit does not exist. |
Is the limit of a periodic function its range? | Assuming we are considering a function $f:\mathbb{R}\rightarrow\mathbb{R}$, and taking the limit $\lim_{x\rightarrow\infty} f(x)$, then if $f$ is periodic this limit exists only if $f$ is constant.
Given what I assume is the level you are working at, the limit of a function can never be a range of values: it is either a particular point, or it fails to exist.
Making sense of Goedel's First Incompleteness Theorem | You misunderstood: $G$ is not the biconditional $(F \vdash G_F) \leftrightarrow (\lnot Prov(⌈G_F⌉))$. Rather, $G$ is the metalogical claim that the biconditional $G_F \leftrightarrow \lnot Prov(⌈G_F⌉)$ (which is a logical claim) can be proven in $F$. And that metalogical claim we write as $F \vdash G_F \leftrightarrow \lnot Prov(⌈G_F⌉)$ (think of it as $F \vdash (G_F \leftrightarrow \lnot Prov(⌈G_F⌉))$).
Moreover, it is not claim $G$ that is what is typically referred to as the 'Godel sentence', but rather $G_F$. That is, Godel's proof shows that relative to any 'strong enough' (strong enough to prove elementary arithmetical truths, basically), consistent, and recursive formal system $F$, there must be some sentence $G_F$ such that $F \vdash G_F \leftrightarrow \lnot Prov(⌈G_F⌉)$ is the case. When you now assume that $G_F$ is false, you get a problem, because then, given the truth of the biconditional (it's true since $F$ derives it), it would have to be true that $G_F$ is provable, and thus true, which contradicts the assumption that it was false. So, it must be true. And, given that same biconditional, not provable.
The truth of $G$ itself is not under discussion: Godel's proof shows that $G$ has to be the case.
How do I fill in points in an equation? | The points are bold because they are representing vectors. So, the equation you are looking at actually represents a $B_x(t)$ and $B_y(t)$. You then need to graph these parametrically. So, for $B_x(t)$, take the x-coordinates of the points on your control polygon and plug it in for $P_x$ and then $B_y(t)$ is merely the same equation with the y-coordinates. Then, graph these as a $(B_x(t),B_y(t))$ curve, where $t \in [0,1]$. |
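A minimal sketch of that parametric evaluation for a cubic Bézier curve (the control points here are arbitrary placeholders):

    import numpy as np
    import matplotlib.pyplot as plt

    P = np.array([[0, 0], [1, 2], [3, 3], [4, 0]], dtype=float)  # control polygon
    t = np.linspace(0, 1, 200)[:, None]
    # Cubic Bernstein form: the same formula yields B_x and B_y from P's coordinates.
    B = ((1 - t) ** 3 * P[0] + 3 * (1 - t) ** 2 * t * P[1]
         + 3 * (1 - t) * t ** 2 * P[2] + t ** 3 * P[3])

    plt.plot(B[:, 0], B[:, 1])         # the curve (B_x(t), B_y(t))
    plt.plot(P[:, 0], P[:, 1], 'o--')  # the control polygon
    plt.show()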
Subspace Preserved Under Addition of Elements? | You have to show (among other things) that $Z=\{{\,a+b:a\in C,b\in D\,\}}$ is closed under addition. So, let $x$ and $y$ be in $Z$; you have to show $x+y$ is in $Z$. So, what is $x$? Well, it's in $Z$, so $x=a+b$ for some $a$ in $C$ and some $b$ in $D$. What's $y$? Well, it's also in $Z$, so $y=r+s$ for some $r$ in $C$ and some $s$ in $D$. Now what's $x+y$? Can you take it from here? |
Show that $S=\{(0,y):y\in \Bbb R-\{0\}\}$ is a subgroup of $G=(\Bbb R \times (\Bbb R-\{0\}),⊥)$ for $(x,y)⊥(x',y')=(x+x'y,yy').$ | You only need to show that $S\neq\varnothing$ and $a\,\bot\, b^{-1}\in S$ for all $a, b\in S.$ This is called the one-step subgroup test/lemma.
Here is the answer . . .
If $(0, a), (0, b)\in S$, then, since $(0,b)^{-1}=\left(0,\frac{1}{b}\right)$, $$(0,a)\,\bot\, \left(0,\frac{1}{b}\right)=\left(0+0\cdot a,\ \frac{a}{b}\right)\in S.$$ Since you have found $e\in S$, we have $S\neq \varnothing$. This is enough to prove $S\le G$.
For $k$-forms $\omega$ and $\eta$ there exists a $C^{1}$ function $f: \mathbb{R}^{3} \to \mathbb{R}$ such that $\eta = f\omega$. | Unfortunately, your $f$ so defined might not be $C^1$: for example, if $\eta = \omega$, then from your construction you would get $f=1$ where $\omega_3(p) \neq 0$. So extending it by $0$ actually makes it discontinuous.
To deal with this, we consider the following: Let $U_i$, $i=1, 2, 3$ be the open sets
$$ U_i = \{ p\in \mathbb R^3: \omega_i (p) \neq 0\}.$$
Then since $\omega$ is nonzero everywhere, we have $U_1\cup U_2\cup U_3 = \mathbb R^3$.
On each $U_i$, define $f_i = \frac{\eta_i}{\omega_i}$. Note that they are well defined and $C^1$ (Indeed, $C^\infty$). Note that we can show $f_i = f_j$ on the intersection $U_i \cap U_j$: for example, using
$$\omega_1 \eta_2 = \omega_2 \eta_1 $$
we have
$$ f_2 =\frac{\eta_2}{\omega_2} = \frac{\eta_1}{\omega_1} = f_1$$
on $U_1\cap U_2$. Thus the function
$$f(x) = \begin{cases} f_1(x) & x\in U_1 \\f_2(x) & x\in U_2 \\f_3(x) & x\in U_3 \end{cases}$$
is a well-defined $C^1$ function on $\mathbb R^3$ and $\eta = f\omega$.