title | upvoted_answer
---|---
A question on modules over Noetherian ring | Actually the answer is yes:
$R=\mathbb Z, G=\mathbb Q$ . |
How to calculate $\lim_{x \to 0} \frac{e^{\tan^3x}-e^{x^3}}{2\ln (1+x^3\sin^2x)}$? | This one looks nasty. I used Wolfram Mathematica to compute
$$
\tan^3 x -x^3 = x^5+\frac{11}{15}x^7 + O(x^8)
$$
and
$$
e^{\tan^3 x -x^3}=1+x^5+\frac{11}{15}x^7 + O(x^8).
$$
Therefore
$$
e^{\tan^3 x}-e^{x^3}=x^5+\frac{11}{15}x^7 + O(x^8).
$$
Since
$$
\log(1+x^3 \sin^2 x) = x^5 +O(x^6),
$$
the limit exists and is equal to $1/2$.
But these computations are very tedious to perform by hand: all the terms up to power $5$ cancel out, so you have to keep track of quite a few terms in each expansion, otherwise you'll get nothing useful. |
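As a sanity check on the value $1/2$, here is a quick numerical sketch in Python (not part of the original answer):

```python
import math

def ratio(x):
    """The expression whose limit as x -> 0 we want."""
    num = math.exp(math.tan(x)**3) - math.exp(x**3)
    den = 2 * math.log(1 + x**3 * math.sin(x)**2)
    return num / den

# the values should approach 1/2 as x shrinks
for x in (0.1, 0.05, 0.01):
    print(x, ratio(x))
```

At $x=0.01$ the ratio is already within about $10^{-4}$ of $1/2$; pushing $x$ much smaller runs into floating-point cancellation, which is exactly why the series approach is needed.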
$f(z)=\frac{1}{z}$ has an antiderivative on any simply connected domain | With your hypotheses, use the residue theorem to show that if you integrate $1/z$ over any simple closed curve (in your simply connected domain, which avoids the origin), it integrates to zero. Pick a point $c$ in your domain and set $F(z) = \text{integral of } (1/w)\,dw$ along any path from $c$ to $z$. The previous sentence implies this is independent of the path, hence well-defined. The fundamental theorem of calculus handles the rest. |
Joint PDF of dependent random variables | Draw a picture. Let $T$ be the triangle with corners $(0,0)$, $(2,0)$, $(0,2)$. We want to choose $c$ so that
$$\iint_T cxy \,dy\,dx=1.$$
To evaluate the integral, express it as the iterated integral
$$\int_{x=0}^2\left(\int_{y=0}^{2-x} cxy\,dy\right)\,dx.$$
For $E(XY)$, we want
$$\iint_T (xy)(cxy)\,dy\,dx.$$ |
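Both iterated integrals reduce to the form $\int_0^2 x^m(2-x)^n\,dx$, which the Beta-integral identity $\int_0^a x^m(a-x)^n\,dx = a^{m+n+1}\,m!\,n!/(m+n+1)!$ evaluates exactly. A small Python sketch of the resulting computation (helper name hypothetical; not part of the original answer):

```python
from fractions import Fraction
from math import factorial

def beta_int(m, n, a=2):
    # ∫_0^a x^m (a-x)^n dx = a^(m+n+1) · m! · n! / (m+n+1)!
    return Fraction(a**(m + n + 1) * factorial(m) * factorial(n),
                    factorial(m + n + 1))

# ∬_T c·x·y dA = c · ∫_0^2 x · (2-x)²/2 dx = 1 determines c
c = 1 / (beta_int(1, 2) / 2)
print(c)        # 3/2

# E(XY) = ∬_T x²y² · c dA, with inner integral ∫_0^{2-x} y² dy = (2-x)³/3
exy = c * beta_int(2, 3) / 3
print(exy)      # 8/15
```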
Contour Integral Question | Since the integrand is even,
$$
\int^{\infty}_{0} \frac{x-\sin(x)}{x^3}\,dx=\frac12\int^{\infty}_{-\infty} \frac{x-\sin(x)}{x^3}\,dx.
$$
Take $\epsilon>0$ small and $R>0$ large. The integral of $f(z)$ along the closed path formed by the segment $[\epsilon,R]$, the semi-circumference $\gamma_R$ counterclockwise, the segment $[-R,-\epsilon]$ and the semi-circumference $\gamma_\epsilon$ of radius $\epsilon$ (in the upper half-plane) clockwise is equal to $0$. Then
$$
\Bigl(\int_{-R}^{-\epsilon}+\int_{\epsilon}^{R}\Bigr)f(x)\,dx+\int_{\gamma_R}f(z)\,dz+\int_{\gamma_\epsilon}f(z)\,dz=0.
$$
As $R\to\infty$ and $\epsilon\to0$,
$$
\Bigl(\int_{-R}^{-\epsilon}+\int_{\epsilon}^{R}\Bigr)f(x)\,dx\to\int^{\infty}_{-\infty} \frac{1-\cos(x)}{x^3} dx+i\int^{\infty}_{-\infty} \frac{x-\sin(x)}{x^3}\,dx.
$$
You already know that $\lim_{R\to\infty}\int_{\gamma_R}f(z)\,dz=0$. All that is left is to find $\lim_{\epsilon\to0}\int_{\gamma_\epsilon}f(z)\,dz$. |
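For reference, the remaining limit can be read off from the Laurent expansion of $f(z)=\frac{1+iz-e^{iz}}{z^3}$ (the standard choice here, assumed from the original question; the answer above deliberately leaves this step to the reader):
$$
f(z)=\frac{1+iz-e^{iz}}{z^3}=\frac{1}{2z}+\frac{i}{6}+O(z),
\qquad
\lim_{\epsilon\to0}\int_{\gamma_\epsilon}f(z)\,dz=-\pi i\cdot\frac12=-\frac{\pi i}{2},
$$
the factor $-\pi i$ coming from the clockwise half-turn around the simple pole. Since $(1-\cos x)/x^3$ is odd, its principal-value integral vanishes, so $i\int_{-\infty}^{\infty}\frac{x-\sin x}{x^3}\,dx=\frac{\pi i}{2}$ and the original integral equals $\frac{\pi}{4}$.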
Does the composite function $h=f(g(x))$ share the discontinuity of $g(x)$? | In all your examples, you consider functions that are not defined at some point. If $g$ is not defined at $x=0$, then $f\circ g$ is not defined at $x=0$ either and therefore not continuous at $x=0$.
In more generality, if $g$ is defined over all $\Bbb R$, but not continuous, it may very well be that $f\circ g$ is continuous: this happens for example if $f$ is constant - but of course this is not necessary. |
Number of elements in $\{z \in \Bbb{C}: z^{60} = -1, z^k \neq -1 \:\text{ for }\: 0 < k < 60\}$ | We know $z=e^{\dfrac{in \pi}{60}}$ for some odd $n$.
Assume $\gcd(n, 60)=1$. Then if $\frac{nk}{60}$ is an odd integer (as is required for $z^k=-1$ to hold), then we know $60 \mid nk$. But since $n$ and $60$ are relatively prime, then $60 \mid k$, so in particular $k \geq 60$. Thus, $\{ e^{\dfrac{in \pi}{60}} \mid \gcd (n, 60)=1 \} \subseteq S$.
Conversely, suppose $d \gt 1$ is a common factor of $n$ and $60$. Then for some $a, b \in \Bbb Z, n=da$ and $60=db.$ Moreover, because $n$ is odd, $a$ must also be odd. Thus, $z=e^{\dfrac{ia \pi}{b}}$ and $z^b=-1$ where $b \lt 60$. Thus, $S=\{ e^{\dfrac{in \pi}{60}} \mid \gcd (n, 60)=1 \}$.
Here $n$ must range over a full period: the $60$ solutions of $z^{60}=-1$ correspond to the odd $n$ with $0 < n < 120$. There are $32$ such $n$ relatively prime to $60$ (equivalently, $S$ is the set of primitive $120$th roots of unity, and $\varphi(120)=32$), so $\vert S \vert = 32$. |
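A brute-force check (sketch, not in the original answer); note that $n$ must range over a full period $0 < n < 120$ to enumerate all $60$ solutions of $z^{60}=-1$:

```python
from math import gcd

# the solutions of z^60 = -1 are z = e^{i n π/60} with n odd, 0 < n < 120;
# z^k = e^{i n k π/60} = -1 exactly when n·k ≡ 60 (mod 120)
good = [n for n in range(1, 120, 2)
        if all((n * k) % 120 != 60 for k in range(1, 60))]
coprime = [n for n in range(1, 120, 2) if gcd(n, 60) == 1]
assert good == coprime
print(len(good))   # φ(120) = 32, the primitive 120th roots of unity
```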
Need help developing intuition for proving this geometric relationship | I second Toni’s answer about trying to figure out how the author may have come up with the idea of the proof. Some authors don’t bother with providing any intuition, and the proofs often look “magical” and sometimes rather opaque. Clearly, there are a few arguments for and against this. Such an approach may make things hard to read and unintuitive, but at the same time, maybe that is a challenge to the reader: find out and verify for yourself. George Polya said that math is not a spectator sport. Paul Halmos said “Don’t just read it; fight it! […]”.
This is not to say that intuition shouldn’t be provided; on the contrary, that is usually helpful (whatever works for you — so long as it is accurate!). But sometimes you have to do the hard work to build up some intuition — say by solving many problems, working through the omitted details in a proof, etc. There is something to be said about well-written books with great explanations and intuition, for sure. But no matter how great the book is, I believe that ultimately you still have to solve the problems by yourself if you want to really understand things well and be able to work with them comfortably (unless you are some genius/wizard…).
Speaking of Polya, you may enjoy his book How to Solve It if you haven’t already read it. It can help you with developing that “problem-solving mindset” you inquired about. I also second Toni's recommendation for Velleman's How to Prove It to learn more about proof strategies and proof writing.
Let's look at your example. I can show how one might approach this by asking a few general questions. Many of these suggestions overlap with what Polya has to say in his book, like asking questions like “What do we want to show?”, “What do we know?”, “Do you know a similar problem?”, etc. (And I am not at all an experienced problem solver — I’m just showing how asking these general questions can make life easier and help to organize things and break the problem into simpler parts.)
What do we want to show? We want to show that the diagonals of a parallelogram have equal lengths if and only if it is a rectangle.
What do we know? (Also: Draw a picture, introduce notation, rephrase things, etc.) I know that we can represent the two sides of the parallelogram by vectors $u$ and $v$. Then the diagonals correspond to the vectors $u + v$ and $u - v$. Therefore the phrase “diagonals of the parallelogram have equal lengths” is equivalent to “$\lVert u + v \rVert = \lVert u - v \rVert$”. Furthermore, if we think about the picture, we see that $u, v$ are perpendicular if and only if we have a rectangle, and I know that $u, v$ being perpendicular means $u \cdot v = 0$. Therefore the phrase “it is a rectangle” is equivalent to “$u \cdot v = 0$”. Combining these two observations, we can now rephrase our question as follows: Show that $\lVert u + v \rVert = \lVert u - v \rVert$ if and only if $u \cdot v = 0$.
In particular, the intuition for our question can in some sense be reduced to the intuition behind why perpendicular means $u \cdot v = 0$, of which there are many great answers that can be easily looked up.
Do you know a similar problem? (Or are there any related facts/knowledge you can use?) Well, maybe I don’t really know a similar problem specifically, but I know that the dot product of a vector with itself is the square of its norm. (Maybe I’ve solved many problems with the dot product already, and “moved symbols around” a lot, so I might be familiar with some of its properties.) Specifically, I am reminded of the formula $v \cdot v = \lVert v \rVert^2$.
Now note that the condition “$\lVert u + v \rVert = \lVert u - v \rVert$” means the same as “$\lVert u + v \rVert^2 = \lVert u - v \rVert^2$”, because two nonnegative numbers are equal if and only if their squares are equal. So I have converted this condition into an equality of dot products now! Maybe this gets me closer to trying to show how it is equivalent to the other condition “$u \cdot v = 0$”. So let’s write $\lVert u + v \rVert^2 = \lVert u - v \rVert^2$ as $(u + v) \cdot (u + v) = (u - v) \cdot (u - v)$. I happen to know the basic property of dot product, namely that it is “linear”, so I recognize this as saying that $u \cdot u + 2 (u \cdot v) + v \cdot v = u \cdot u - 2 (u \cdot v) + v \cdot v$. This is equivalent to $2 (u \cdot v) = -2 (u \cdot v)$, i.e. $4 (u \cdot v) = 0$, i.e. $u \cdot v = 0$. The problem is solved.
Polya also recommends looking back at your solution/attempt: once you have written out your answer, examine your solution and check your result. Perhaps you could do some sanity checks. Perhaps you could approach the problem another way or see it with a different perspective. Perhaps there were some important or general strategies you used that could be adopted in future problems. (I can think of a few in this example: (1) Perpendicular means dot product equals zero. (2) If $a, b \geq 0$, then $a = b$ if and only if $a^2 = b^2$. (3) Linearity of the dot product. Etc.) |
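As one such sanity check, the key identity $\lVert u+v\rVert^2 - \lVert u-v\rVert^2 = 4(u\cdot v)$ can be verified numerically on random vectors (a quick sketch, not part of the original answer):

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

random.seed(0)
for _ in range(1000):
    u = [random.uniform(-1, 1) for _ in range(3)]
    v = [random.uniform(-1, 1) for _ in range(3)]
    plus  = dot([a + b for a, b in zip(u, v)], [a + b for a, b in zip(u, v)])
    minus = dot([a - b for a, b in zip(u, v)], [a - b for a, b in zip(u, v)])
    # ‖u+v‖² − ‖u−v‖² = 4 u·v for every pair of vectors
    assert abs((plus - minus) - 4 * dot(u, v)) < 1e-9
print("identity verified on 1000 random pairs")
```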
Roots of Unity in fields | The first three are easy enough so I will tackle the last two. Recall that the degree of the extension $\Bbb{Q}(\zeta_n)/\Bbb{Q}$ is $\varphi(n)$ where $\varphi$ is the Euler Totient Function. Now it is not hard to see that the only values of $n$ for which $\varphi(n) \leq 2$ are $n = 1, 2, 3, 4$ and $6$.
Now when you look at $\Bbb{Q}(\sqrt{-2})$ and $\Bbb{Q}(\sqrt{-3})$, if you have an $n^{th}$ root of unity in there it can only be for those stipulated values of $n$ above, because otherwise you have a $\Bbb{Q}$ - subspace of dimension greater than 2 sitting inside of a $\Bbb{Q}$ - vector space of dimension 2 which is impossible. Now let us write out $\zeta_n$ for these values of $n$, we have: $\zeta_2 = - 1$, $\zeta_3 = \frac{-1 + \sqrt{3}i}{2}$, $\zeta_4 = i$ and $\zeta_6 = \frac{1 + \sqrt{3}i}{2}.$
Can you now complete your problem? I leave the rest for you since this is a homework problem. By applying degree arguments, etc. you should be able to eliminate cases. For example, you should be able to work out for yourself why $\pm i$ is not in $\Bbb{Q}(\sqrt{-3})$ say. |
How "bad" can presentation of the trivial group get? | Expanding on the comments, there is a very strong sense in which every "infinitary" description of the trivial group can be boiled down to a "finitary" one - namely, the compactness theorem for first-order logic. This asserts roughly that any set of axioms which proves some single sentence has a finite subset which proves that sentence.
Finitely generated group presentations fall into this setting: the sentence we're interested in is $$(a_1=e)\,\,\wedge\,\, ...\,\,\wedge\,\, (a_n=e)$$ (where "$\wedge$" is "and" and the $a_i$s are the generators of our group), and our set of axioms consists of the axioms of group theory together with axioms corresponding to each relation. By the compactness theorem, any set of relations which makes the finitely many generators each trivial has a finite subset which does the same thing.
Note that this argument breaks down for infinitely-generated groups - and indeed the claim itself is false, the simplest counterexample having infinitely many generators, each of which is trivialized directly - the point being that to say that each of infinitely many generators is trivial is no longer first-order, since we need an infinite conjunction.
The compactness theorem similarly applies to your more general $R_{fam}$ situation: some finitely many sets in $R_{fam}$ must enforce triviality, as long as $S$ is infinite. |
How to prove that if $n > 1$ is a natural number, then $n-1$ is also a natural number? | For each $n\in\Bbb N$, let $P(n)$ be the assertion “$n=1$ or $n-1\in\Bbb N$”. Clearly, $P(1)$ holds. Now let $n\in\Bbb N$ and suppose that $n=1$ or that $n-1\in\Bbb N$. Then $(n+1)-1=n\in\Bbb N$, and therefore $P(n+1)$ holds. |
Parametric Equations of an Oblique Circular Cone | Suppose we have a curve $C(u)$ and a point $P$, and we want a parametric equation for the cone that has its apex at $P$ and contains the curve $C$. A suitable equation is
$$
S(u,v) = (1-v)P + vC(u)
$$
You can see that $S(u,0) = P$ and $S(u,1) = C(u)$ for all $u$. Also, if we fix $u$, then the curve $v \mapsto S(u,v)$ is a straight line passing through the points $P$ and $C(u)$.
In your case, you can use any of the circles parallel to the $xy$ plane for $C$. For example, you could use the circle in the plane $z=h$, which is
$$
C(u) = \left(0, \tfrac{b}2, h\right) +
\left(\tfrac{b}2 \cos u, \tfrac{b}2 \sin u, 0\right) \quad (0 \le u \le 2\pi)
$$
I checked the case $b=4$, $h=7$. Here is the Mathematica code:
P = {0, 0, 0};
Cu = {0, b/2, h} + {(b/2)*Cos[u], (b/2)*Sin[u], 0};
Suv = (1 - v)*P + v*Cu;
fix = {b -> 4, h -> 7};
f = Suv /. fix;
ParametricPlot3D[f, {u, 0, 2*Pi}, {v, 0, 1}]
and the resulting plot shows the expected oblique circular cone. |
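The apex and base-circle conditions can also be checked in plain Python (hypothetical names, not part of the original answer):

```python
import math

b, h = 4.0, 7.0
P = (0.0, 0.0, 0.0)

def C(u):
    # base circle of radius b/2 centered at (0, b/2, h)
    return (b/2 * math.cos(u), b/2 + b/2 * math.sin(u), h)

def S(u, v):
    # ruling from the apex P (v = 0) to the circle point C(u) (v = 1)
    return tuple((1 - v) * p + v * c for p, c in zip(P, C(u)))

assert S(1.23, 0.0) == P           # every ruling starts at the apex
assert S(1.23, 1.0) == C(1.23)     # and ends on the base circle
```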
Struggling with a question at high school level | There are multiple solutions to this problem because the second constraint is the same as the first. |
Evaluating: $ \sum_{n=0}^{\infty}x^{n^{2}} $ | There is the Jacobi Theta function
$$
\vartheta_3(z,q) = \sum_{n=-\infty}^\infty e^{2 i n z} q^{n^2}
$$
so yours is
$$
\sum_{n=0}^\infty x^{n^2} = \frac{\vartheta_3(0,x)+1}{2}
$$
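A quick numerical check of this identity at a sample point (a sketch, not from the original answer), using $\vartheta_3(0,x)=\sum_{n\in\mathbb Z}x^{n^2}$:

```python
x = 0.3
lhs = sum(x**(n*n) for n in range(0, 50))                # Σ_{n≥0} x^(n²)
theta3 = sum(x**(n*n) for n in range(-50, 51))           # ϑ₃(0, x) = Σ_{n∈ℤ} x^(n²)
assert abs(lhs - (theta3 + 1) / 2) < 1e-12
print(lhs)
```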
Of course for $a=1$ it can also be evaluated using known functions. But not for other values of $a$. |
An abstract algebra teacher wanted to give his students a list of nine whole numbers that form a group under multiplication modulo 91. | The missing number is 29. The mentioned group of order 9 is $\mathbb Z_3^2$, with 9 and 16 as generators:
1  > 16 > 74 >
v    v    v
9  > 53 > 29 >
v    v    v
81 > 22 > 79 >
v    v    v |
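The group axioms and the grid above can be verified by brute force (a sketch, not part of the original answer):

```python
# Verify that these nine residues form a group under multiplication mod 91
G = {1, 9, 16, 22, 29, 53, 74, 79, 81}
assert all((a * b) % 91 in G for a in G for b in G)        # closure
assert all(any((a * b) % 91 == 1 for b in G) for a in G)   # inverses
# 9 and 16 both have order 3 and together generate everything: G ≅ Z3 × Z3
assert {(9**i * 16**j) % 91 for i in range(3) for j in range(3)} == G
print("group confirmed; the missing ninth element is 29")
```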
Tough Probability Question: Visiting 4 Friends, Finding Probability Mass Function | I agree with your answer to part (b), and note that this is also the probability that the traveler makes a total of exactly $\ i-1\ $ visits before returning home.
To answer part (a), note that if the traveller makes exactly $\ 2n\ $ visits before returning home then exactly $\ n\ $ of those must be to A or C and exactly $\ n\ $ of them must be to B or D. On the other hand, if he makes a total of $\ 2n+1\ $ visits then exactly $\ n+1\ $ of those must be to one of the two pairs A and C or B and D and exactly $\ n\ $ to the other pair. Thus, if $\ V_A\ $ and $\ V_C\ $ are the numbers of visits made to A and C, respectively, then
$$
P\left(V_A + V_C = n\right)=\frac{3}{5}\left(\frac{2}{5}\right)^{2n-1}\hspace{-0.7em}+\frac{1}{2}\cdot\frac{3}{5}\left(\frac{2}{5}\right)^{2n-2}\hspace{-0.7em}+\frac{1}{2}\cdot\frac{3}{5}\left(\frac{2}{5}\right)^{2n}\\
=\frac{147}{40}\left(\frac{2}{5}\right)^{2n}\ ,
$$
for $\ n\ge 1\ $, or $ P\left(V_A + V_C = 0\right)=\frac{3}{10}\ $.
A key observation now is that each visit to A and C is equally likely to be to A or C, and the house visited is independent of all previous visits. Thus, given $\ V_A + V_C = n\ $, $\ V_A\ $ will follow a binomial distribution with parameters $\ n\ $ and $\ p=\frac{1}{2}\ $:
$$
P\left(V_A=v\,\left\vert\,V_A + V_C = n\right.\right)=\frac{n\choose v}{2^n}\ .
$$
Therefore, for $\ v\ge 1 $,
\begin{align}
P\left(V_A = v\right)&=\sum\limits_{n=v}^\infty P\left(V_A=v\,\left\vert\,V_A + V_C = n\right.\right)P\left(V_A + V_C = n\right)\\
&=\frac{147}{40}\sum\limits_{n=v}^\infty {n\choose v}\left(\frac{2}{25}\right)^n\\
&= 735\cdot \frac{2^{v-3}}{23^{v+1}}\ ,
\end{align}
the final equation following from the identity
$$
\sum\limits_{n=v}^\infty{n\choose v}y^n=\frac{y^v}{(1-y)^{v+1}}
$$
for $\ \vert y\vert < 1\ $, and
$$
P\left(V_A = 0\right)=1-735\sum_{v=1}^\infty\frac{2^{v-3}}{23^{v+1}}=\frac{57}{92}\ .
$$ |
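The series identity used in the final steps can be sanity-checked numerically (a sketch, not in the original answer), here with $y=2/25$ as above:

```python
from math import comb

y = 2 / 25
for v in (1, 2, 3):
    # Σ_{n≥v} C(n, v) y^n should equal y^v / (1-y)^(v+1) for |y| < 1
    lhs = sum(comb(n, v) * y**n for n in range(v, 400))
    rhs = y**v / (1 - y)**(v + 1)
    assert abs(lhs - rhs) < 1e-12
print("identity confirmed for v = 1, 2, 3")
```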
How did Ramanujan find this formula? | The question of how Ramanujan found this formula is, in general, very mysterious. Nobody knows how he came up with his thousands of formulas.
He was a genius of the highest order and we can only speculate how he was able to do what he did. He was familiar with many special examples and formulas and was able to do elaborate symbolic and numerical calculations combined with extraordinary intuition. Hardy worked closely with Ramanujan in his last years and wrote Ramanujan: Twelve Lectures on Subjects Suggested by His Life and Work and gives his opinions on Ramanujan's methods.
From the theory of Fibonacci/Lucas sequences the ordinary generating function of such a sequence is the reciprocal of a quadratic. For example:
$\, \sum_{n=0}^\infty F_{n+1} T^n = 1/(1-T-T^2).\,$ Ramanujan may have asked: what is the o.g.f. of a product of two such sequences? The left side of equation $(1)$ is the o.g.f. and the right side is the rational function answer to that question. Expanding the coefficient of $\,T^n\,$ on the left side and splitting the infinite sum into four infinite sums of geometric series we get the left side of equation $(2)$. The happy, surprising result is how simple the numerator of the right side is.
Just to be clear, the key step is to expand the product
$$ (a^{n+1}\!-\!b^{n+1}) (c^{n+1}\!-\!d^{n+1}) \!=\!
(ac)^{n+1} \!-\! (ad)^{n+1} \!-\! (bc)^{n+1} \!+\! (bd)^{n+1}. $$
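The Fibonacci generating function quoted above can be checked numerically (a sketch, not from the original answer):

```python
# Σ_{n≥0} F_{n+1} T^n = 1/(1 - T - T²), checked at T = 0.3
F = [1, 1]                          # F_1, F_2, ...
for _ in range(60):
    F.append(F[-1] + F[-2])

T = 0.3
lhs = sum(F[n] * T**n for n in range(60))
assert abs(lhs - 1 / (1 - T - T*T)) < 1e-9
print(lhs)
```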
EDIT: The Wikipedia article on generating function transformations has a section on Hadamard products of rational generating functions with an example which is essentially the Ramanujan formula, just expressed in terms of the coefficients of the quadratics rather than their roots. |
Two Similar Looking Stochastic Integrals Have Different Expectations | Yes, you do know that $B_{t_k}$ and $B_{t_{k+1}}-B_{t_k}$ are independent, as a basic property of a standard Wiener process.
For your later question, while $B_{t_k}$ and $B_{t_{k+1}}-B_{t_k}$ are independent, $B_{t_{k+1}}$ and $B_{t_{k+1}}-B_{t_k}$ are not. So, also using $E[B_{t_{k+1}}]=E[B_{t_{k}}]=0$ to produce $E[B_{t_k}(B_{t_{k+1}}-B_{t_k})]=0$, you have
$$E[B_{t_{k+1}}(B_{t_{k+1}}-B_{t_k})]$$ $$=E[(B_{t_{k+1}}-B_{t_k}+B_{t_k})(B_{t_{k+1}}-B_{t_k})]$$ $$=E[(B_{t_{k+1}}-B_{t_k})^2]+E[B_{t_k}(B_{t_{k+1}}-B_{t_k})]$$ $$=E[(B_{t_{k+1}}-B_{t_k})^2]+0$$ |
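A Monte Carlo illustration of the difference (a sketch, not part of the original answer; the step size $t_{k+1}-t_k=0.5$ is an arbitrary choice): the first expectation is $0$, while the second equals the increment's variance $t_{k+1}-t_k$.

```python
import random

random.seed(42)
N, t, dt = 200_000, 1.0, 0.5
s_left = s_right = 0.0
for _ in range(N):
    B_t = random.gauss(0.0, t ** 0.5)     # B_{t_k}
    dB  = random.gauss(0.0, dt ** 0.5)    # B_{t_{k+1}} - B_{t_k}, independent of B_t
    s_left  += B_t * dB                   # E = 0
    s_right += (B_t + dB) * dB            # E = E[dB²] = dt
print(s_left / N, s_right / N)
```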
Trying to solve for x when you have sine and cosine in the function | We can do a few simplifications: Substitute $z=x^2$ so $x = \pm\sqrt z$. Then
$$\begin{align*}
2\cos x^2 - 4x^2 \sin x^2 & = 0\\
\Leftrightarrow \cos z - 2z\sin z & = 0 & z \ge 0
\end{align*} $$
Now since $z=0$ is no solution and $\cos z = 0 \Rightarrow \sin z \ne 0$ is also no solution, we can divide by $\cos z$ and get
$$1 = 2z\tan z$$
so
$$z = \frac12 \cot z$$
The solutions of the latter seem symbolically intractable, but we can solve it numerically and so obtain numerical values for $x = \pm\sqrt z$. |
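A numerical sketch of that last step (not part of the original answer), finding the first positive root of $\cos z - 2z\sin z = 0$ by bisection:

```python
import math

def g(z):
    return math.cos(z) - 2 * z * math.sin(z)

def bisect(f, a, b, tol=1e-12):
    # assumes f(a) and f(b) have opposite signs
    fa = f(a)
    while b - a > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return (a + b) / 2

z0 = bisect(g, 0.0, 1.5)     # first positive root of cos z = 2z sin z
x0 = math.sqrt(z0)
print(z0, x0)                # further roots can be bracketed the same way
```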
A country accepts $A$ people a year from other countries; we know that $x'(t) = -0.03x(t) + A$ with $x(0)=16m$ and $x(-10)=15m$; find $A$ | The provided information is
$$x'(t) = -0.03x(t) + A \tag{1}\label{eq1}$$
$$x(0) = 16m \; \; \text{ and } \; \; x(-10) = 15m \tag{2}\label{eq2}$$
Your attempted solution implies that the value of $x(t)$ only changes once per year. If so, I would assume something like a linear difference equation would have been provided. However, since \eqref{eq1} involves a derivative, it seems $x(t)$ is meant to be considered a continuous function instead. If this assumption is correct, then note that \eqref{eq1} shows an exponential decay along with (I assume) a fixed increase value. Note the equation implies using something where the derivative is a multiple of itself, which the exponential function satisfies. A general solution would be of the form
$$x(t) = ae^{bt} + c \tag{3}\label{eq3}$$
for real constants $a, b$ and $c$. Substituting \eqref{eq3} into \eqref{eq1} gives
$$abe^{bt} = -0.03ae^{bt} - 0.03c + A \tag{4}\label{eq4}$$
Since this must hold for all $t$, this means the coefficients of the $e^{bt}$ and constant terms must be the same on both sides of the equation, i.e.,
$$ab = -0.03a \; \; \Rightarrow \; \; a = 0 \; \text{ or } \; b = -0.03 \tag{5}\label{eq5}$$
$$0 = -0.03c + A \; \; \Rightarrow \; \; c = \frac{A}{0.03} \tag{6}\label{eq6}$$
Note that if $a = 0$, then \eqref{eq3} shows that $x(t)$ is a constant function, but \eqref{eq2} shows it's not, so $b = -0.03$ must be the case instead in \eqref{eq5}. Thus, \eqref{eq3} becomes
$$x(t) = ae^{-0.03t} + \frac{A}{0.03} \tag{7}\label{eq7}$$
Using the provided values, you can now substitute $t = 0$ and $t = -10$ into \eqref{eq7} to get $2$ equations in the $2$ unknowns of $a$ and $A$. You can then solve these equations to get $A$. I trust you can do these remaining calculations yourself. |
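If you want to check the result of those remaining calculations, here is a quick sketch (assuming $m$ stands for millions; not part of the original answer): subtracting the two conditions gives $1 = a(1 - e^{0.3})$, and then $A = 0.03\,(16 - a)$.

```python
import math

a = 1 / (1 - math.exp(0.3))    # from x(0) - x(-10) = a(1 - e^{0.3}) = 1
A = 0.03 * (16 - a)            # from x(0) = a + A/0.03 = 16
print(a, A)                    # a ≈ -2.86, A ≈ 0.566 (million people per year)

x = lambda t: a * math.exp(-0.03 * t) + A / 0.03
assert abs(x(0) - 16) < 1e-9 and abs(x(-10) - 15) < 1e-9
```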
How to formally use transfinite recursion to construct a sequence for a proof of Zorn's lemma | We can avoid the appeal to Hartogs numbers if we do an indirect proof instead. As a bonus, the transfinite recursion becomes a bit cleaner:
Assume that (a) $(P,\preceq)$ is a partially ordered set that satisfies the premises of Zorn's lemma, (b) $P$ has no maximal element, and (c) $C$ is a choice function on $P$. We then seek a contradiction.
Without loss of generality extend $C$ such that $C(\varnothing)=42$, no matter whether or not $42\in P$.
Apply Transfinite Recursion I to the class function
$$ G(X) = C(\{y\in P\mid \forall z\in \operatorname{Rng}(X)\cap P: z\prec y\}) $$
This gives a class function $F$ such that for every ordinal $\alpha$,
$$ F(\alpha) = C(\{y\in P \mid \forall \beta<\alpha : F(\beta)\in P \to F(\beta)\prec y\}) $$
Lemma. For all $\alpha$ it holds that $F(\alpha)\in P$ and $\forall \beta<\alpha : F(\beta)\prec F(\alpha)$.
Proof. By transfinite induction on $\alpha$. The induction hypothesis tells us that $\{F(\beta)\mid \beta<\alpha\}$ is a chain in $P$ (note that it is a set in any case thanks to Replacement). The Zorn premises tell us that this chain has an upper bound; because there is no maximal element it even has a strict upper bound. In other words, $$\{y\in P \mid \forall \beta<\alpha : F(\beta)\in P \to F(\beta)\prec y\},$$ the set of all the strict upper bounds, is not empty, so $F(\alpha)$ is in $P$ and is greater than all the $F(\beta)$s.
The lemma tells us that $F$ is an order-preserving function from ON to $P$. Since $F$ preserves order, it is in particular injective. Therefore the formula
$$ H(x)=\alpha \iff F(\alpha)=x \lor (\alpha=117 \land \forall\beta:F(\beta)\ne x) $$
is functional, and applying Replacement on $P$ and $H$ tells us that ON is a set. The Burali-Forti paradox now furnishes our desired contradiction.
Note that we never actually needed $C(\varnothing)=42$ except to make sure that $G$ was a class function so we could use the recursion theorem on it.
The proof structure here is typical: The definition by transfinite recursion is followed immediately by a transfinite induction that extracts useful facts from the recursion formula. This two-step approach is often necessary for using the general recursion machinery because we need to define $G$ in a defensive way such that we can prove that it makes sense and is functional without depending on its argument being produced by recursion. This defensive scaffolding -- i.e., intersecting $\operatorname{Rng}(X)$ with $P$ so $z\prec y$ makes sense, and making sure that $C(\{x\in P\mid \cdots\})$ always means something even if the condition ends up filtering everything away -- is then cleaned away by the lemma once the recursive magic has done its job and we know the input to $G$ comes from a well-behaved recursion.
As an additional note, one might argue that this is not "morally" an indirect proof. The way I prefer to think about it is that I would like to assume just (a) and (c), and then in the middle of the proof of the lemma say
... this chain has an upper bound. If that upper bound is a maximal element, then we're done; otherwise there's something larger, which is therefore a strict upper bound ...
Claiming that "we're done" with the entire Zorn's Lemma when we were in the middle of an inner induction step is not really done in polite company, though. Phrasing the entire proof as an indirect proof will allow us to follow this intuition anyway, in a simple yet formally acceptable way.
The part of the proof after the lemma is then just an argument that eventually we must hit one of the "we're done" conditions, because continuing forever would have absurd consequences. |
Independence with p.d.f | First define
$$
c_m= \int_\mathbb R g_m(x)\; dx.
$$
Then you have that
\begin{align}
P(X_m \in A) &= P( X_m \in A, \ X_i \in \mathbb R, \ i\neq m) = \prod_{i \neq m}c_i \cdot \int_A g_m(x) \; dx \\
&= \int_A \prod_{i \neq m}c_i \cdot g_m(x) \; dx.
\end{align}
So the density of $X_m$ is given by $ \prod_{i \neq m}c_i \cdot g_m(x)$ and everything works fine.
$ \prod_{i \neq m}c_i$ is the normalizing constant for $g_m$. |
multiplicity of schemes in positive characteristic | Let $L$ be a purely inseparable extension field of $K$ of degree $p$; then $L\cong K[X]/(X^p-a)$ for some $a\in K$. The scheme $X:=\mathrm{Spec} (L)$ is integral and thus has multiplicity $1$, but $L\otimes_K\overline{K}\cong \overline{K}[X]/((X-b)^p)$ for $b\in\overline{K}$ with $b^p=a$, whose spectrum is irreducible and non-reduced. A composition series for the local ring $O_{\overline{X},\overline{\eta}}=\overline{K}[X]/(X-b)^p\overline{K}[X]$ is given by
$$
0\subset (X-b)^{p-1}\overline{K}[X]/(X-b)^p\overline{K}[X]\subset (X-b)^{p-2}\overline{K}[X]/(X-b)^p\overline{K}[X]\subset\ldots\subset (X-b)\overline{K}[X]/(X-b)^p\overline{K}[X]\subset \overline{K}[X]/(X-b)^p\overline{K}[X].
$$
Hence the multiplicity of $\overline{X}$ equals $p$. |
Is $F_p (X,Y)$ a simple extension over $F_p (X^p,Y^p)$? | Your fact is indeed useful. From it you should be able to show that $[F_p(X,Y):F_p(X^p,Y^p)] = p^2$. Now take some $a \in F_p(X,Y)$. By applying the Frobenius morphism $\operatorname{Frob}_p$, you can see that $a^p \in F_p(X^p,Y^p)$. So we have $[F_p(X^p, Y^p)(a): F_p(X^p,Y^p)] \leq p < p^2$. This shows that no single element can generate the extension, so it is not simple. |
Unique Factorization Domains, is the product finite? | (To make sure this question doesn't hang around unanswered...)
In every definition of UFD that I've seen, the fact that the product is finite is always explicitly mentioned (although it is also implicit, because infinite products are not defined in general rings).
To state uniqueness of a decomposition, an author is usually obliged to write this:
If $a=p_1p_2\ldots p_n=q_1q_2\ldots q_m$ are two factorizations of $a$ where $p_i, q_i$ are irreducibles and $n,m\in \Bbb Z^+$, then $n=m$ and after permutation the $p_i$'s and $q_i$'s pair up as associates. |
If $S_0=1$ and $S_n=1-e^{-S_{n-1}}$ then prove that $0\le S_n\le1$ and $S_n$ converges | $e^{-x} \leq 1$ if $x \geq 0$. Hence, $S_n \geq 0$ implies $S_{n+1}=1-e^{-S_n} \geq 0$, and by induction $S_n \geq 0$ for all $n$. Moreover $S_{n+1}=1-e^{-S_n}\le 1$ since $e^{-S_n}> 0$, so $0\le S_n\le 1$ for all $n$. Finally, $1-e^{-x}\le x$ for $x \ge 0$ gives $S_{n+1}\le S_n$, so the sequence is decreasing and bounded below, hence convergent. |
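A short numerical sketch of the behaviour (not part of the original answer): the iteration stays in $[0,1]$, decreases, and creeps toward the fixed point $0$ of $x \mapsto 1-e^{-x}$.

```python
import math

S, prev = 1.0, None
for _ in range(200):
    assert 0.0 <= S <= 1.0         # the bound to be proved
    if prev is not None:
        assert S <= prev           # monotone decreasing
    prev, S = S, 1 - math.exp(-S)
print(S)   # slow convergence toward the fixed point 0
```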
If $|A|\leq|B|$, then $|\mathscr{P}(A)|\leq|\mathscr{P}(B)|$ | It is enough to know that $|\mathscr{P}(A)|=2^{|A|}$, so $$ |\mathscr{P}(A)|=2^{|A|}\leq 2^{|B|}=|\mathscr{P}(B)|$$ It remains to show $$ |\mathscr{P}(A)|=2^{|A|}$$
Define a bijection $$f\colon \mathscr{P}(A)\to 2^{A}$$ by $f(C)=\chi_{C}$, where $\chi_{C}$ is the characteristic function of $C$, that is, $\chi_{C} (x)=1$ for $x\in C$ and $\chi_{C} (x)=0$ for $x\in A\setminus C.$ I hope it is clear now. |
How to tell if a vector lies in a given vector space? | Let $H$ be a matrix whose columns span the vector space $V$; then $v \in V$ iff there exists a vector $a$ such that $v = H a$. Suppose that an orthonormal basis of $V$ is available, i.e. the columns of $H$ are orthonormal. Then
$$v = H a \Rightarrow H^Tv = H^THa = a$$
and $v \in V \Leftrightarrow v - HH^Tv = 0$. This is also the fastest test for $v \in V$ when an orthonormal basis of $V$ is available.
If an orthonormal basis is not available, then you can either:
create an orthonormal basis from $H$ by applying the Gram-Schmidt orthogonalization process to $H$ (or by computing a QR factorization of $H$) and apply the test above, or
use the fact that $v \in V \Leftrightarrow \operatorname{rank}(H) = \operatorname{rank}([H,v])$ and compute an LU factorization of $[H,v]$. If all diagonal elements of the $U$ factor are nonzero, then $v\not\in V$; otherwise $v\in V$. If $\dim(V) = n-1$, then this test is equivalent to $v \in V \Leftrightarrow \det([H,v]) = 0$.
The answer to question (b) depends on which modifications of $v$ are allowed; it is not clear what you mean. If you are looking for a scalar $\alpha$ such that $v + \alpha z \in V$ for a given vector $z$, then the problem has a solution iff $v = [H,z]\, b$ for some $b$. |
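A minimal sketch of the orthonormal-projection test in plain Python (hypothetical example data, not part of the original answer):

```python
def matvec(M, x):
    return [sum(mij * xj for mij, xj in zip(row, x)) for row in M]

# H has orthonormal columns spanning the xy-plane in R^3
H  = [[1.0, 0.0],
      [0.0, 1.0],
      [0.0, 0.0]]
Ht = [list(row) for row in zip(*H)]

def in_span(v, tol=1e-12):
    proj = matvec(H, matvec(Ht, v))      # H Hᵀ v
    return all(abs(vi - pi) < tol for vi, pi in zip(v, proj))

assert in_span([2.0, 3.0, 0.0])          # lies in the plane
assert not in_span([0.0, 0.0, 1.0])      # does not
```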
A morphism from $G$ to $\mathbb{C}^*$ (a character): what does it represent? | The fantastic thing about the dual group $G^\vee = \operatorname{Hom}(G, \mathbb{C}^\times)$ is that it is in fact a group, and so a lot of questions can be reduced to simply asking whether something is the identity or not. Here are some facts about linear characters:
The identity character is $\chi_{\mathrm{id}}(g) = 1$ for all $g \in G$.
If $\chi$ is a character, so is $\overline{\chi}$, and since characters are valued on the unit circle, $\overline{\chi(g)} = \chi(g)^{-1} = (\chi^{-1})(g)$.
After this we can prove the relation
$$\sum_{g \in G} \chi(g) =
\begin{cases}
|G| & \text{if } \chi = \chi_{\mathrm{id}} \\
0 & \text{otherwise}
\end{cases}$$
the first case is clear, so let $\chi \neq \chi_{\mathrm{id}}$. Then there is some $g_0 \in G$ such that $\chi(g_0) \neq 1$, and we have
$$\chi(g_0) \sum_{g \in G} \chi(g) = \sum_{g \in G} \chi(g_0 g) = \sum_{g \in G} \chi(g)$$
and hence $\sum_{g \in G} \chi(g) = 0$.
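As a concrete instance (an illustration, not from the original answer): for the cyclic group $\mathbb{Z}_n$ the linear characters are $\chi_k(g) = e^{2\pi i kg/n}$, and the sum relation can be checked directly:

```python
import cmath

n = 8
def chi(k, g):
    # the k-th character of Z_n
    return cmath.exp(2j * cmath.pi * k * g / n)

for k in range(n):
    s = sum(chi(k, g) for g in range(n))
    expected = n if k == 0 else 0     # |G| for the identity character, else 0
    assert abs(s - expected) < 1e-9
print("sum relation holds for all", n, "characters")
```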
After this, the orthogonality relations are clear, since
$\langle \chi_1, \chi_2 \rangle$ is just plugging in $\chi_1 \overline{\chi_2} = \chi_1 \chi_2^{-1}$ into that sum up above.
If you read further into the representation theory of finite groups, it will turn out that for any irreducible representation $V$ of $G$ (not just one-dimensional representations), there is a character $\chi_V$, and we have $\langle \chi_V, \chi_W \rangle = 1$ when $V$ and $W$ are isomorphic representations, and 0 when they are different. This is a good motivating example for defining the inner product. |
Probability that $25$ calls are received in the first $5$ minutes. | We weren't given a conditional probability question. It isn't previously known or given that $6$ calls happened in the first minute. Had it said something along those lines then your conditional probability approach would have been correct.
Instead we have two events $A$ and $B$, and we want the probability of $A\cap B$; because events over disjoint time intervals are independent in the Poisson process, we can compute $\mathbb{P}(A\cap B) = \mathbb{P}(A)\, \mathbb{P}(B)$. |
Prove that $\sqrt 5$ is irrational | It is, but I think you need to be a little bit more careful when explaining why $5$ divides $p^2$ implies $5$ divides $p$. If $4$ divides $p^2$ does $4$ necessarily divide $p$? |
How to prove this sequence is convergent? | Let $s_n=a_1+\cdots+a_n\to a$. Then
$$
\sum_{k=1}^n ka_k=\sum_{k=1}^n k(s_k-s_{k-1})=\sum_{k=1}^n ks_k-\sum_{k=1}^{n-1}(k+1)s_k=ns_n-\sum_{k=1}^{n-1}s_k.
$$
Hence
$$
\frac{1}{n}\sum_{k=1}^n ka_k=s_n-\frac{n-1}{n}\cdot\frac{1}{n-1}\sum_{k=1}^{n-1}s_k\to a-a=0.
$$
We have used the fact that: If $b_n\to b$, then $\,\,\dfrac{b_1+\cdots+b_n}{n}\to b$ as well.
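A quick numerical illustration of the result (a sketch, not from the original answer), with $a_k = 2^{-k}$ so that $s_n \to 1$:

```python
# (1/n) Σ_{k≤n} k·a_k should tend to 0, even though Σ k·a_k converges (to 2 here)
for n in (10, 100, 1000, 10000):
    cesaro = sum(k * 0.5**k for k in range(1, n + 1)) / n
    print(n, cesaro)   # decays roughly like 2/n
```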
Note. However, it is not in general true that $na_n\to 0$. |
Gauss curvature of a scaled metric (by a constant) | Since the Gaussian curvature is the product of two geodesic curvatures, which scale as $\ell^{-1}$ (where $\ell$ is length as measured by the metric), it scales as $\ell^{-2}$. The metric itself is of scale $\ell^2$, so you're right that $h=ag$ implies $K_h = a^{-1}K_g.$
If you want a computational proof, you could start with one of the explicit formulae for $K_g$ in terms of $g$ and it should fall out pretty quickly. |
Quadratic equation with different indices | There was a typo in the question. It should have been
$$729+3^{2x+1}=4\times3^{x+3}$$
Note the $+3$ in the exponent instead of $+2$.
(You are correct that the question as stated does not have integer solutions; in fact, it has no real solutions.)
There's actually some pretty mathematical content here though. We can rewrite $729=3^6$ and $4=1+3$ to make this expression:
$$3^6+3^{2x+1}=3^{x+3}+3^{x+4}$$
Now the solutions $x=2$ and $x=3$ boil down to adding the same powers of $3$ on both sides. The completing the square method you outlined in your work will show that these are the only solutions. |
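For a quick numerical sanity check of the corrected equation and the claimed solutions:

```python
# Check that x = 2 and x = 3 solve the corrected equation
# 729 + 3^(2x+1) = 4 * 3^(x+3).
def lhs(x):
    return 729 + 3 ** (2 * x + 1)

def rhs(x):
    return 4 * 3 ** (x + 3)

for x in (2, 3):
    print(x, lhs(x), rhs(x))  # both sides agree: 972 and 2916
```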
How to calculate this expected value | Note that the function $F$ is the cumulative distribution function of $X$. The usual notation is $F(x)$, or $F_X(x)$ if we want to be reminded of whch random variable we are working with. Very importantly, it should never be written $F(X)$.
By definition,
$$F_X(x)=\Pr(X\le x).$$
With continuous random variables, we can be pretty casual about the use of inequality symbols. With discrete random variables, we have to be much more careful, and cannot casually replace $\le $ by $\lt$.
Look at the first part of the specification of $F$. It tells us, among other things, that $F(-17)=0$. So $\Pr(X\le -17)=0$: there is no "weight" at $-17$ or to the left of it. We also have $F(-\pi)=0$: so $\Pr(X\le -\pi)=0$.
There is a sudden jump in the cumulative distribution function at $-3$. A tiny bit to the left of $-3$, it was $0$. But all of a sudden, at $-3$, $F$ has value $\dfrac{3}{8}$. The jump at $-3$ means that we must have $\Pr(X=-3)=\dfrac{3}{8}$.
Then things are steady until $0$, the cumulative distribution function is $\dfrac{3}{8}$ at $-2$, $-1$, $-0.2$. So $\Pr(X\le -0.2)=\dfrac{3}{8}$, no weight has been added. But at $x=0$, the cdf jumps to $\dfrac{1}{2}$. So $\Pr(X=0)=\dfrac{1}{2}-\dfrac{3}{8}=\dfrac{1}{8}$.
Similarly, $\Pr(X=3)=\dfrac{3}{4}-\dfrac{1}{2}=\dfrac{1}{4}$, and $\Pr(X=4)=\dfrac{1}{4}$.
Now we are at a simple expected value problem.
$$E(X)=(-3)\frac{3}{8}+(0)\frac{1}{8}+(3)\frac{1}{4}+(4)\frac{1}{4}=\frac{5}{8}.$$
Remark: For a non-negative integer-valued random variable $Y$, there is a useful way to compute $E(Y)$:
$$E(Y)=\sum_{i=1}^\infty \Pr(Y\ge i).$$
We can adapt this to our problem by adding $3$ to $X$, computing the expectation by using the above formula, and subtracting $3$ at the end. One needs to be careful in using the cdf to calculate $\Pr(X\ge i)$.
There is a useful analogue of the above formula for non-negative random variables with continuous distribution. For some details, please look at this. |
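As a sanity check (a sketch: the pmf values are the jump sizes read off above), one can verify the total mass and the mean with exact rational arithmetic:

```python
from fractions import Fraction as F

# The jumps of the cdf give the pmf; the probabilities must sum to 1,
# and the mean should come out to 5/8.
pmf = {-3: F(3, 8), 0: F(1, 8), 3: F(1, 4), 4: F(1, 4)}
assert sum(pmf.values()) == 1
mean = sum(x * p for x, p in pmf.items())
print(mean)  # 5/8
```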
Arbitrary constants in the general solution to the Schrödinger equation | Allow me to assume you care more about the physics than the algebra.
Given a potential (for example, a combination of wells and barriers), there are going to be regions (bands) depending on "total energy vs. potential", where the particle behaves differently in different regions.
When $E > V$, there is kinetic energy $T = E - V >0$ for the particle to travel around.
That is, the wave function here is in the form of waves (helical in complex space, sinusoidal when projected to the real) , either standing waves or traveling wave.
In this case, the coefficients of $\psi(x)=Ae^{ikx}+Be^{-ikx}$ are just real $\{A,B\}\in\mathbb{R}^2$ to set up the combination of left- and right-traveling waves to be nailed down by the boundary conditions.
When $E < V$, there is no kinetic energy and the particle is "trapped".
That is, the wave function here is in the form of exponential decay. Classically, this can be either a particle trapped in a well or a particle not having enough energy to go through the barrier. Quantum mechanically, we always at least have a "dying wave", which is exponential decay (in the real space) made up of $\cosh$ and $\sinh$.
In this case, the coefficients of $\psi(x)=Ae^{ikx}+Be^{-ikx}$ are complex $\{A,B\}\in\mathbb{C}^2$ to set up the combination of hyperbolic sine and hyperbolic cosine, which again shall be nailed down by the boundary conditions.
I shall emphasize again that there can be different levels of critical $V_i$ (or $V_i(x)$ if you prefer) depending on how the potential is set. You will have different regions (energy bands) accordingly.
Also note that in most cases, like the standard harmonic oscillator (which is infinitely wide), your total energy is never going to exceed the potential so the coefficients are complex. However, if you have a parabolic well of finite width (and flat outside, or lowering back down outside), then there exists the band where $E-V>0$, where the particle "escapes the well" and can travel (and the coefficients are real). |
Finding the second partial derivatives of $w=\sqrt{u^2+v^2}$ | Your second derivative is wrong:
$$
w_u=\frac{u}{\sqrt{u^2+v^2}} \Rightarrow w_{uu}=\frac{\sqrt{u^2+v^2}-u\frac{u}{\sqrt{u^2+v^2}}}{u^2+v^2}=\frac{u^2+v^2-u^2}{(u^2+v^2)\sqrt{u^2+v^2}}=\frac{v^2}{(u^2+v^2)\sqrt{u^2+v^2}}
$$ |
How to determine equivalence relation on a set of ordered pair | For 2, $X$ is the set of subsets of $\{1,2,3,\ldots,N\}$. $A$ and $B$ are two of those subsets. They are related if the complement of their union is all of $\{1,2,3,\ldots,N\}$. First use DeMorgan's law on the complement of the union to get ??? Then check RST. |
Computing Non-zero End Digits of Large Factorials | Hint: if you wanted just the last digit, a naive approach would be to do all the multiplication mod 10, skipping the 0's. $ (1*2*3*4*5*6*7*8*9)^{10^{11}}$. A second thought is that the $5$ swallows the $2$, so we can skip those. This works fine for all the factors besides $2$ and $5$, which need some more thought... |
Example of compact manifold with bdd such that there exists geodesic whose first time of hitting boundary is different form exit time of geodesic, | Consider a sphere with a small open disk removed.
Then it is a manifold with boundary, the boundary being a circle. Consider a geodesic starting from a point on this circle and going tangentially to the circle boundary.
Then this geodesic is defined on $\mathbb{R}$ but hits the boundary periodically.
The degenerate case is when the boundary circle is a great circle: in this case, the boundary is totally geodesic, and the geodesic constructed above stays forever in the boundary while being defined for all time.
Another example, this time with a finite exit time:
This is a flat torus with a little open disk removed. |
Continuity of a parametric integral (where the integrated function is discontinuous) | This should work,
Let
$$F(t):=\int_\mathbb{R}e^{-x^2/2}\log|t+e^x|\,dx \;.$$
Note first that $f(t,x):=e^{-x^2/2}\log|t+e^x|$ is continuous for almost all $x$. The only singularity occurs where $|t+e^x|=0$. But as $e^x$ is injective this happens in at most one point, namely when $t<0$. Since singletons are of Lebesgue measure zero this has no influence on the continuity of $F(t)$.
Furthermore the following holds,
$$|e^{-x^2/2}\log|t+e^x||\leq e^{-x^2/2}|t+e^x|\leq e^{-x^2/2}|t|+e^{-x^2/2}e^x$$
and since continuity is a local property it is sufficient to find a dominating function for every $t$ in an arbitrarily small neighborhood of $t$. Thus let $\epsilon>0$ be arbitrarily small and consider $(t-\epsilon,t+\epsilon)$ for some fixed $t\in\mathbb R$.
Then we have
$$e^{-x^2/2}|t|+e^{-x^2/2}e^x\leq|t+\epsilon|e^{-x^2/2}+e^{-x^2/2}e^x$$
where the last function is integrable on $\mathbb R$. Consequently by the dominated convergence theorem continuity follows at $t$, and since $t$ was arbitrary we conclude that $F(t)$ is indeed continuous for every $t\in\mathbb R$. |
Hints for proving some algebraic integer rings are Euclidean | HINT $\ \ $ It is norm Euclidean, i.e. the absolute value of the norm serves as a Euclidean function. For a nice survey see Lemmermeyer: The Euclidean algorithm in algebraic number fields. |
Resolve a system of equations | You want both $y_1=mx_1+p$ and $y_2=mx_2+p$. Solve these equations to find $m$ and $p$. Subtracting we get $y_1-y_2=m(x_1-x_2)$ so $m =\frac {y_1-y_2} {x_1-x_2}$. I will let you find $p$. |
BIBO stability with positive eigenvalue | BIBO stability states that when the system starts in the origin at $t=0$ and a bounded input $u(t)$ is applied, such that $|u(t)|<a\ \forall\, t>0$, with $a$ some positive constant, then the system output also remains bounded (there exists some constant $b$ such that $|y(t)|<b$). This basically comes down to that the impulse response of the system should always be bounded. This implies that all poles should have a negative real part for continuous LTI systems.
However if we consider a state space model representation of a system,
$$
\left\{ \begin{align}
\dot{x} & = A\, x + B\, u \\
y & = C\, x + D\, u
\end{align}\right.
$$
then the state matrix $A$ does not have to be Hurwitz. Namely if the system is controllable then all the eigenvalues of $A$ would correspond to the poles of the transfer function. But if unstable modes of the system are not controllable, then they can't be disturbed out of their equilibrium at the origin. For example,
$$
A = \begin{bmatrix}
-1 & 0 \\ 0 & 1
\end{bmatrix}, \quad B = \begin{bmatrix}
1 \\ 0
\end{bmatrix}, \quad C = \begin{bmatrix}
1 & 1
\end{bmatrix}, \quad D = \begin{bmatrix} 0 \end{bmatrix},
$$
is BIBO stable, even though $A$ has an eigenvalue of $1$. However I do have to note that only controllable (and observable) modes of a system are visible in transfer functions. So if the poles of a transfer function all have a negative real part then it will be BIBO stable; if not then it is not BIBO stable. |
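A crude forward-Euler simulation (my own sketch) of this example with the bounded input $u\equiv 1$ illustrates the point: the unstable mode $x_2$ is never excited because the second entry of $B$ is zero, so the output stays bounded.

```python
# Forward-Euler simulation of the example state-space system with a
# bounded step input u = 1.  The unstable mode x2 stays at 0 because
# B's second entry is 0 (it is uncontrollable), so y remains bounded.
def simulate(T=10.0, dt=1e-3):
    x1, x2 = 0.0, 0.0
    u = 1.0
    for _ in range(int(T / dt)):
        x1 += dt * (-x1 + u)    # dx1/dt = -x1 + u
        x2 += dt * (x2 + 0 * u)  # dx2/dt = +x2 + 0*u (unstable, uncontrollable)
    return x1 + x2, x2           # y = x1 + x2

y, x2 = simulate()
print(y, x2)  # y approaches 1, x2 stays exactly 0
```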
Probability for phones | a) You want to count the probability for selecting $5$ from $10$ places for cordless phones the when selecting $5$ from $15$.
[edit: You have the probability for selecting all of the cordless phones and five from the ten others when selecting any ten of the fifteen. That is okay too. ]
b) You have calculated probability that all five cordless phones have been serviced, which is also the probability that all five of each of the other two types have been serviced. However, if you just added you would be over counting common cases; so you must exclude the probability for selecting all five of two types at once.
(IE use the Principle of Inclusion and Exclusion) |
2-adic valuation of odd harmonic sums | Only a partial result. I note $v_2$ the $2$-adic valuation.
Let $\displaystyle Q_k(x)=\prod_{j=1}^{k}(x-j)=x^{k}+b_{k-1}x^{k-1}+\cdots+b_0$. We have $b_l\in \mathbb{Z}$. Note that $b_{k-1}=-\frac{k(k+1)}{2}$. Put $P_k(x)=2^kQ_k(\frac{x}{2})$. Then $P_k(x)=x^{k}+2b_{k-1}x^{k-1}+\cdots+2^{k-1}b_1 x+2^k b_0=\prod_{j=1}^{k}(x-2j)$.
Now
$$\frac{P_k^{\prime}(x)}{P_k(x)}=\sum_{j=1}^k \frac{1}{x-2j}$$ Putting $x=1$, we get that $v_2(\tilde{H_k})=v_2(P_k^{\prime}(1))$.
Now suppose that $k$ is odd. Then $P_k^{\prime}(1)=k+2q$ with $q\in \mathbb{Z}$. Hence your result is true in this case (as you have noted).
Suppose that $k=2m$ with $m$ odd. Then the first two terms in $P_k^{\prime}(1)$ give $2m-2m(2m+1)(2m-1)=4m(1-2m^2)$, and it is easy to see that $8$ divides the other terms. Hence your result is true in this case. |
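A quick exploration of the pattern with exact rational arithmetic (my own sketch): since all denominators of $\tilde H_k=\sum_{j=1}^k \frac{1}{2j-1}$ are odd, $v_2(\tilde H_k)$ equals $v_2$ of its numerator.

```python
from fractions import Fraction

# v_2 of the odd harmonic sum 1 + 1/3 + ... + 1/(2k-1).  All
# denominators are odd, so v_2 of the sum is v_2 of its numerator.
def v2(n):
    n, v = abs(n), 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

def v2_odd_harmonic(k):
    s = sum(Fraction(1, 2 * j - 1) for j in range(1, k + 1))
    return v2(s.numerator)

# k odd gives valuation 0, as proved above; k = 2m with m odd gives 2.
print([(k, v2_odd_harmonic(k)) for k in range(1, 7)])
```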
How does the usual properties of Hilbert adjoint operator follow from this definition? | This can be done by a straightforward calculation:
For all $x \in X$ and $y \in Y$ we have
\begin{align*}
\langle x, S(y) \rangle
&= \langle x, J_X^{-1}(T'(J_Y(y))) \rangle
= \langle x, J_X^{-1}(T'( \langle y, - \rangle)) \rangle
= \langle x, J_X^{-1}(\langle y, T(-) \rangle) \rangle \\
&= \overline{\langle J_X^{-1}(\langle y, T(-) \rangle), x \rangle}
= \overline{J_X(J_X^{-1}(\langle y, T(-) \rangle))(x)}
= \overline{\langle y, T(-) \rangle(x)} \\
&= \overline{\langle y, T(x)\rangle}
= \langle T(x), y \rangle.
\end{align*} |
Curved normal random variable and conjugate prior | In general I think this is going to be hard to say much in the general case, but when looking for conjugate priors lets first indicate the strategy with a popular example and then try your case
Finding a Conjugate Prior; the Gaussian precision
So in this instance the likelihood given a collection of $N$ observations is
$$
p(\mathbf{x}|\lambda) = \prod_{n=1}^{N}\mathcal{N}(x_n | \mu, \lambda^{-1} ) \propto \lambda^{N/2} \exp\left\{ -\frac{\lambda}{2}\sum_{n=1}^{N}(x_n-\mu)^2 \right\}
$$
Now we take that final term and write it in the form $f(\lambda)g(\lambda)$ where
\begin{align}
f(\lambda) &= \lambda^{N/2} \\
g(\lambda) &= \exp\left\{-\frac{\lambda}{2}\sum(x_n - \mu)^2 \right\}.
\end{align}
Now the step for constructing a conjugate prior $\pi(\lambda)$ is to chose its functional form such that
$$
p(\mathbf{x}|\lambda)\pi(\lambda) \propto\tilde{f}(\lambda) \tilde{g}(\lambda)
$$
where $\tilde{f}$ and $\tilde{g}$ are of the same function class as $f$ and $g$ respectively. This leads us to to consider a prior of the form
$$
\pi (\lambda) \propto \lambda^c e^{a\lambda + b}
$$
which after some normalisation we recognise as implying a Gamma distribution.
Curved Normal $\mathcal{N}\left(x | \mu, \mu\right)$
Using the same kind of strategy as above we have
$$
p(\mathbf{x} | \mu) \propto \frac{1}{\mu^{n/2}}\exp\left\{-\frac{1}{2}\left(n\mu + \frac{\sum_i x_i^2}{\mu} - 2\sum_i x_i\right) \right\}
$$
this suggests a good prior may be of the form
$$
\pi(\mu) \propto \frac{1}{\mu^{r_0+1}}\exp\left\{ -\frac{1}{2}\left(a_0\mu + \frac{b_0}{\mu} - 2c_0\right)\right\}
$$
the $c_0$ term is just going to get rolled into the normalising constant, indeed we find
$$
Z_{\pi} := e^{c_0}\int_{0}^{\infty} \frac{1}{\mu^{r_0+1}}e^{-\frac{1}{2}\left(a_0\mu + \frac{b_0}{\mu} \right)}d\mu = 2e^{c_0} \left(\frac{\sqrt{a_0b_0}}{b_0}\right)^{r_0} K_{r_0}(\sqrt{a_0 b_0})
$$
where $K_r(\cdot)$ is a modified Bessel function of the second kind. Since we are getting a clean closed form expression it would not be surprising if this example has been covered somewhere in the literature!
Finally then you get a posterior of the form
$$
p(\mu | x ) \propto \frac{1}{\mu^{r + 1}}\exp\left\{-\frac{1}{2}\left( a\mu + \frac{b}{\mu} - 2c \right) \right\}, \qquad \mu > 0,
$$
where the posterior parameters are
\begin{align*}
r &= r_0 + \frac{n}{2} \\
a &= a_0 + n \\
b &= b_0 + \sum_i x_i^2 \\
c &= c_0 + \sum_i x_i
\end{align*}
and the normalising constant is
$$
Z_{\mu|x} = 2e^{c} \cdot \left( \frac{\sqrt{ab}}{b} \right)^{r}K_{r}(\sqrt{ab}).
$$
For general curved exponential distributions I don't know how much can be said beyond examining specific cases. Having thought about it for a bit one would think it should be possible to take the conjugate prior for the full exponential family of interest and then apply the same parameter restriction - where I can see this getting harder is if enforcing these parameter constraints lead to awkward expressions for the hyper parameters of the prior.
I guess a reasonable place to start would be to see if the Gaussian-Gamma distribution which is the conjugate prior to the $\mathcal{N}(\mu,\sigma^2)$ distribution contains this distribution as a special subclass, in particular does the same parameter constraint $\mu = \sigma^2$ lead to the prior found above. |
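A numerical sanity check (my own sketch) of the normalising-constant formula, choosing $r=\tfrac12$ so the Bessel function has the elementary closed form $K_{1/2}(z)=\sqrt{\pi/(2z)}\,e^{-z}$; the parameter values below are arbitrary test values, not from the text.

```python
import math

# Compare a trapezoid-rule integral of mu^{-(r+1)} exp(-(a mu + b/mu)/2)
# over (0, infinity) against 2 (sqrt(ab)/b)^r K_r(sqrt(ab)), using
# r = 1/2 so K_{1/2}(z) = sqrt(pi/(2z)) e^{-z} in closed form.
def integrand(mu, r, a, b):
    return mu ** (-(r + 1)) * math.exp(-0.5 * (a * mu + b / mu))

r, a, b = 0.5, 2.0, 2.0
lo, hi, n = 1e-3, 40.0, 200_000   # integrand is negligible outside [lo, hi]
h = (hi - lo) / n
Z_num = h * (0.5 * integrand(lo, r, a, b) + 0.5 * integrand(hi, r, a, b)
             + sum(integrand(lo + i * h, r, a, b) for i in range(1, n)))

z = math.sqrt(a * b)
K_half = math.sqrt(math.pi / (2 * z)) * math.exp(-z)
Z_formula = 2 * (z / b) ** r * K_half
print(Z_num, Z_formula)  # should agree to several decimal places
```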
Is my proof for $f(0)=1$ for a specific continuous function correct? | The solution is not correct. By definition,
$$f'(0)=\lim_{x\to 0}\frac{f(x)-f(0)}{x}=\lim_{x\to 0}\frac{f(x)-1}{x}$$
so,
$$\lim_{x\to 0}\frac{f(x)-f(0)}{x}-\lim_{x\to 0}\frac{f(x)-1}{x}=0\to \lim_{x\to 0}\frac{f(0)-1}{x}=0\quad (1)$$
but
$$\lim_{x\to 0}\frac{c}{x}=c\cdot \lim_{x\to 0}\frac{1}{x}$$
Doesn't exist if $c\ne 0$ and it is a constant number.
Then if $(1)$ is true you must have $f(0)=1$.
P.S.: Your solution is not correct because when you write $c\cdot f'(0)=f(c)-1$ you are assuming that $c\ne 0$. |
Is there a linear transformation that would flip the elements of a matrix around its center? | Of course:
Note that $e_ie_j^T$ where $i,j \in \{1, \ldots, 3\}$ and $e_i$ is the standard unit basis in $\mathbb{R}^3$ is a basis for $\mathbb{R}^{3 \times 3}$.
The linear map is:
$$L\left( e_ie_j^T \right)=e_{4-i}e_{4-j}^T$$
For example:
$$L\left( e_1e_1^T \right)=e_{3}e_{3}^T$$
We can permute the entries of a matrix and it is linear, we just have to describe the image of each basis element. |
Find (describe) the range of the complex function: | If you think in polar co-ordinates you can stop at ${e^{2x}}{e^{2iy}}=r{e^{i\theta}}$
The first part is the distance from the origin, and the second part is the angle. So the distance from the origin ranges from ${e^{0}=1}$ to ${e^{2\ln(2)}}=4$, and the angle ranges from $\pi\over 2$ to $\pi$. The range is the entire "block arc" shape this covers. |
Is $\sqrt x$ locally Lipschitz continuous everywhere? | Hint: Consider any interval $[0,\varepsilon)$ where $\varepsilon > 0$. For $x,y\in [0,\varepsilon)$, with $x < y$, we have
$$\begin{align*}
\frac{|f(x)- f(y)|}{|x-y|} &= \frac{\sqrt{y} - \sqrt{x}}{y - x}\\
&= \frac{1}{\sqrt{y}+\sqrt{x}}.
\end{align*}$$ Can you show that this can be made arbitrarily large, and if so, what can you conclude? |
Can you help me subtract intervals? | You are looking at two different definitions of $A-B$:
Set difference: $A - B = \{ x\in A \, \mid \, x \notin B \}$ which in this case gives $$[3,6]−[4,8) = [3,4)$$
Interval arithmetic: $A - B = \{ x-y \in \mathbb{R} \, \mid \, x\in A, \,y \in B \}$ which in this case gives $$[3,6]−[4,8) = (-5,2]$$ |
Proving $\langle f,f \rangle =0 \implies f=0$ | Ah, but how do you know $||f||=<f,f>$ defines a norm? That requires a proof too!
Suppose $f\ne 0$. Then we can find $x \in [0,1]$ such that $f(x) \ne 0$. So $f(x)>0$ or $f(x)<0$. In either case, $f(x)^{2}>0$. But since $f$ is continuous at $x$, for any $\epsilon>0$ we can choose $\delta$ sufficiently small that if $|y-x|<\delta$, $|f(y)-f(x)|<\epsilon$. In particular, we can keep $f(y)^{2}>0$ on a small interval. Can you bound the integral of $f(y)^{2}$ on this small interval? What does it tell you about the integral over the whole of $[0,1]$? |
Expected square difference determines the distribution of a Gaussian random vector | The normal (Gaussian) distribution is fully characterized by it's mean and covariance matrix. In your case, the mean is $0$, the diagonal elements of $\operatorname{Var}(X)$ are $1$'s, and for $i\ne j$,
$$
\operatorname{Cov}(X_i,X_j)=\mathsf{E}X_iX_j=1-\frac{1}{2}[d(i,j)]^2.
$$ |
Divergence: Exterior vs Covariant | There are three definitions about divergence.
$$divX=tr\nabla X =L_{X} *1=d \circ i_{X}(*1)=\delta X^\flat$$
For reference, see Riemannian Geometry, GTM 171. |
Higher order derivatives of Composition of Dirac delta distributions | TL;DR: Formula (2) is correct, which can easily be checked$^1$ by using test functions. However, formula (1) is incorrect.
We calculate$^2$
$$ u_k(x)~:=~(-1)^k\delta^{(k)}(1\!-\!x^2)
~=~\delta^{(k)}(x^2\!-\!1)~=~\left.\delta^{(k)}(y) \right|_{y=x^2-1}$$
$$~=~\left.\left(\frac{d}{dy}\right)^k\delta(y)\right|_{y=x^2-1}
~=~\left(\frac{1}{2x}\frac{d}{dx}\right)^k\delta(x^2\!-\!1)
~=~\left(\frac{1}{2x}\frac{d}{dx}\right)^k\frac{1}{2}\sum_{\pm}\delta(x\!\mp\!1). \tag{A}$$
By anti-normal-ordering (=ordering derivatives to the left) of eq. (A) we get for the first few $k\in\mathbb{N}_0$:
$$\begin{align}u_{k=0}(x)&~=~\frac{1}{2}\sum_{\pm}\delta(x\!\mp\!1),\cr
u_{k=1}(x)&~=~\frac{1}{2x}\frac{d}{dx}\frac{1}{2}\sum_{\pm}\delta(x\!\mp\!1)\cr
&~=~\left(\frac{d}{dx}\frac{1}{2x}+\frac{1}{2x^2}\right)\frac{1}{2}\sum_{\pm}\delta(x\!\mp\!1)\cr
&~=~\frac{1}{4}\sum_{\pm}\pm \delta^{\prime}(x\!\mp\!1)+\frac{1}{4}\sum_{\pm}\delta(x\!\mp\!1),\cr
u_{k=2}(x)&~=~\left(\frac{1}{2x}\frac{d}{dx}\right)^2\frac{1}{2}\sum_{\pm}\delta(x\!\mp\!1)\cr
&~=~\left(\frac{d^2}{dx^2}\frac{1}{4x^2}+\frac{d}{dx}\frac{3}{4x^3}+\frac{3}{4x^4}\right)\frac{1}{2}\sum_{\pm}\delta(x\!\mp\!1)\cr
&~=~\frac{1}{8}\sum_{\pm}\delta^{\prime\prime}(x\!\mp\!1)+\frac{3}{8}\sum_{\pm}\pm \delta^{\prime}(x\!\mp\!1)+\frac{3}{8}\sum_{\pm}\delta(x\!\mp\!1),\cr
&~~~\vdots\end{align}\tag{B}$$
By normal-ordering (=ordering derivatives to the right) of eq. (A) we get for the first few $k\in\mathbb{N}_0$:
$$\begin{align}u_{k=0}(x)&~=~\frac{1}{2}\sum_{\pm}\delta(x\!\mp\!1),\cr
u_{k=1}(x)&~=~\frac{1}{2x}\frac{d}{dx}\frac{1}{2}\sum_{\pm}\delta(x\!\mp\!1)\cr
&~=~\frac{1}{4x}\sum_{\pm}\delta^{\prime}(x\!\mp\!1),\cr
u_{k=2}(x)&~=~\left(\frac{1}{2x}\frac{d}{dx}\right)^2\frac{1}{2}\sum_{\pm}\delta(x\!\mp\!1)\cr
&~=~\left(\frac{1}{4x^2}\frac{d^2}{dx^2}-\frac{1}{4x^3}\frac{d}{dx}\right)\frac{1}{2}\sum_{\pm}\delta(x\!\mp\!1)\cr
&~=~\frac{1}{8x^2}\sum_{\pm}\delta^{\prime\prime}(x\!\mp\!1)
-\frac{1}{8x^3}\sum_{\pm} \delta^{\prime}(x\!\mp\!1),\cr
&~~~\vdots\end{align}\tag{C}$$
which is different from eq. (1). It is interesting how anti-normal-ordering (B) and normal-ordering (C) generate very different-looking expressions for the same underlying mathematical distributions.
--
$^1$ In this answer, we make repeated use of the following distribution identities:
$$\delta(f(x))~=~\sum_{i}^{f(x_i)=0}\frac{1}{|f'(x_i)|}\delta(x\!-\!x_i), \tag{D}$$
$$ \{f(x)-f(y)\}\delta(x\!-\!y)~=~0, \tag{E}$$
and derivatives thereof:
$$ \left(\frac{d}{dx}\right)^k[\{f(x)-f(y)\}\delta(x\!-\!y)]~=~0. \tag{F}$$
$^2$ We have for convenience shifted OP's definition $k-1\to k$ and removed an over-all sign factor $(-1)^k$. This can of course easily be reinstalled. |
How can I know which angle will be given between two vectors using dot product | $A-B=(-1,-4,4)$, magnitude is $\sqrt{33}$
$C-B=(2,-12,5)$, magnitude is $\sqrt{173}$
dot product is $66$
$\therefore\cos(\theta)=\dfrac{66}{\sqrt{33}\sqrt{173}}$
$\therefore\theta\approx29.1^\circ$ |
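The computation can be verified directly from the two difference vectors:

```python
import math

# Verify the angle computation from the two difference vectors.
u = (-1, -4, 4)    # A - B
v = (2, -12, 5)    # C - B
dot = sum(a * b for a, b in zip(u, v))
nu = math.sqrt(sum(a * a for a in u))
nv = math.sqrt(sum(b * b for b in v))
theta = math.degrees(math.acos(dot / (nu * nv)))
print(dot, theta)  # 66, approximately 29.1 degrees
```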
How do I find the congruence of a polynomial modulo another polynomial? | $$x^4 \equiv 16 \pmod{x^4-16}$$
\begin{align}
7x^{13}-11x^9+5x^5-2x^3+3 &\equiv 7(x^4)^3x - 11(x^4)^2x+5(x^4)x-2x^3+3 \pmod{x^4-16}\\
&\equiv 7(16)^3 x-11(16)^2x+5(16)x-2x^3+3 \pmod{x^4-16}
\end{align} |
Solutions to Bertrand's Paradox in J. Neyman's Confidence Interval Paper | Neyman was sloppy with his variables. His answer can be obtained using basic geometry:
$$B < 2r\sin\left(\frac{1}{2}(y-x)\right) \Longrightarrow y > x + 2\arcsin\left(\frac{B}{2r}\right)$$
So the probability is:
$$P\{l > B\}= \frac{2\pi \left(\pi - 2\arcsin\left(\frac{B}{2r}\right)\right)}{2\pi^2} = 1 - 2\pi^{-1}\arcsin\left(\frac{B}{2r}\right)$$ |
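A Monte Carlo check (my own sketch) of the formula in the corresponding random-endpoints model, with $r=1$ and $B=\sqrt3$, where the formula gives exactly $1/3$:

```python
import math, random

# Random-endpoints model: pick two independent uniform points on a
# circle of radius r; the chord between them has length 2r|sin((t1-t2)/2)|.
random.seed(0)
r, B, trials = 1.0, math.sqrt(3), 200_000
hits = 0
for _ in range(trials):
    t1 = random.uniform(0, 2 * math.pi)
    t2 = random.uniform(0, 2 * math.pi)
    if 2 * r * abs(math.sin((t1 - t2) / 2)) > B:
        hits += 1
formula = 1 - (2 / math.pi) * math.asin(B / (2 * r))
print(hits / trials, formula)  # both near 1/3
```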
Theorem from Birkhoff and Rota -- method of undetermined coefficients(?) | The problem here is that I was trying to find a specific $q(t)$, rather than merely demonstrating that a suitable $q(t)$ exists. In other words, I was mistaking an existence proof for a construction proof. (Lesson: don't post when you're exhausted.)
So there are two cases:
$\lambda$ is not a root of $p_L$. Write $p_L(D)$ as $(D - \lambda_1) \cdots (D - \lambda_n)$ where all factors have a multiplicity of $1$ (i.e. some may be repeated.) Now apply Lemma 2 iteratively over all $n$ factors of $p_L(D)$. Since $\lambda \neq \lambda_i$ for all $i$, we successively demonstrate the existence of polynomials $q_i(t)$ of degree $s$ such that $(D - \lambda_1)\cdots(D-\lambda_i)(e^{\lambda t}q_i(t)) = e^{\lambda t}r(t)$. When we reach $q_n(t)$, we have a polynomial of degree $s$ such that $p_L(D)(e^{\lambda t}q_n(t)) = L[e^{\lambda t}q_n(t)] = e^{\lambda t}r(t)$, as desired.
$\lambda$ is a root of $p_L$; assume without loss of generality that $\lambda = \lambda_1$. Write $p_L(D)$ as $(D - \lambda_1)(D- \lambda_1)\cdots (D-\lambda_1)(D - \lambda_2) \cdots (D - \lambda_n)$, where each factor is repeated $k_i$ times, where $k_i$ is the multiplicity of the root $\lambda_i$. Now apply Lemma 1 $k_1$ times to $(D - \lambda_1)$; we obtain a polynomial $q_1$ of degree $s - k_1$ such that $(D - \lambda_1)^{k_1}(e^{\lambda t}q_1(t)) = e^{\lambda t}r(t)$. Finally, apply Lemma 2 to each remaining factor of $p_L(D)$; we eventually obtain a polynomial $q_n(t)$ of degree $s - k_1$ such that $p_L(D)(e^{\lambda t}q_n(t)) = L[e^{\lambda t}q_n(t)] = e^{\lambda t}r(t)$, which is what we want. |
Pontryagin Maximum Principle - Mayer form - Null adjoint | Yes, this is possible under a few conditions.
This is what you already stated:
$$
\frac{\partial \phi}{\partial x(T)} = 0
$$
The Hamiltonian cannot be an explicit function of $x$. This makes the equation of motion describing $p$ equal to 0:
$$
\dot{p} = -\frac{\partial H}{\partial x} = 0
$$
Boundary constraints on $x$ adhere to some conditions. If initial or terminal $x$ is fixed, the value of $p$ at either the initial or terminal point may be non-zero. With boundary constraints on $x$, augmented cost becomes:
$$
\psi = \phi + \nu_1(x(0) - x_0) + \nu_2(x(T) - x_T)
$$
You would need both $x_0 = x_T = 0$ in general. If boundary conditions on $x$ are free, then you discover you can choose pretty much any value for $p$, and then $0$ becomes the obvious choice given your goal. I'll admit, however, that I'm a little less familiar with point 3 and I probably missed some details.
Those are the conditions, now to find an optimal control problem where this is the case. Consider a spacecraft performing an orbit raising maneuver maximizing final radius:
$$
\begin{aligned}
\min_{\alpha} J &= -r^2 \\
\text{Subject to:} \; \dot{r} &= v_r \\
\dot{\theta} &= \frac{v_{\theta}}{r} \\
\dot{v_r} &= \frac{v_{\theta}^2}{r} - \frac{\mu}{r^2} + \frac{\text{Thrust}}{m}\cos(\alpha) \\
\dot{v_{\theta}} &= -\frac{v_r v_{\theta}}{r} + \frac{\text{Thrust}}{m}\sin(\alpha) \\
\dot{m} &= m_{rate} \\
r(0) &= r_0, \theta(0) = \theta_0, v_r(0) = v_{r0}, v_{\theta} = v_{\theta 0} \\
m(0) &= m_0, v_r(T) = v_{rf}, v_{\theta}(T) = \sqrt{\mu / r(T)}
\end{aligned}
$$
In this case, you can fix $\theta(0)$ to anything. Also, since $\theta$ doesn't appear in the Hamiltonian, then $p_\theta$ is 0 for all time. |
Distribution of Uniform Random Variable When Bounds Are Uniform | Note first that we have
$$
P(X\le x \vert \; Y=y) =
\begin{cases} 0 &:& x < -y\\
\frac{x+y}{2y} &:& x \in [-y, y)\\
1 &:& otherwise
\end{cases}
$$
and
$$
f_Y(y) =
\begin{cases} \frac{1}{b-a} &:& y \in [a,b]\\
0 &:& otherwise
\end{cases}
$$
Then, using the law of total probability we get:
$$\begin{split}P(X\le x) &= \int_{-\infty}^{\infty} P(X\le x \vert \; Y=y)\, f_Y(y)\, dy \\
&= \frac{1}{b-a} \Big( \int_{a}^{b} \mathbb{1}_{x < (- b)} \cdot 0 \;dy
+ \int_a^b \mathbb{1}_{x \geq b} \; dy \\
&+ \int_a^b \mathbb{1}_{-a \leq x < a} \; \frac{x+y}{2y} \; dy +
\int_{-x}^b \mathbb{1}_{-b \leq x < - a} \; \frac{x+y}{2y} \; dy \\
&+ \int_x^b \mathbb{1}_{a \leq x < b} \; \frac{x+y}{2y} \; dy +
\int_a^x \mathbb{1}_{a \leq x < b} \; dy \Big) \\\\
&= \frac{1}{b-a}
\begin{cases}
0 &:& x < -b \\
F_1(x) &:& -b \leq x < -a \\
F_2(x) &:& -a \leq x < a \\
F_3(x) + F_4(x) &:& a \leq x < b \\
(b-a) &:& b \leq x
\end{cases}
\end{split}$$
where
$$
F_1(x) = \frac{1}{2} (x (ln\vert b \vert - ln\vert x \vert) + b + x)
$$
$$
F_2(x) = \frac{1}{2} (x (ln\vert b \vert - ln\vert a \vert) + b - a)
$$
$$
F_3(x) = \frac{1}{2} (x (ln\vert b \vert - ln\vert x \vert) + b - x)
$$
and
$$
F_4(x) = (x-a).
$$ |
Finding change of variables to give linear homogeneous system | $$Y' = AY+b$$
Suppose $X=Y-c$, where $c$ is a constant vector.
then we have $$X'=Y'$$
and $$AY+b = A(X+c)+b=AX+(Ac+b)$$
Hence $$X' = AX+(Ac+b)$$
Hence if you pick $c$ such that $Ac+b=0$, then we have $X'=AX$, that is $B$ is just $A$. |
Do subgroup and quotient group define a group? | No.
Both $S_3$ and $C_6$ have normal subgroups isomorphic to $C_3$ with quotient isomorphic to $C_2$.
This is not even true of abelian groups: $C_4$ and $C_2 \times C_2$ both have subgroups isomorphic to $C_2$ with quotient isomorphic to $C_2$. |
Suppose $X,Y$ are independent and $X\sim N(1,4)$ and $Y\sim N(1,9)$. If $P(2X+Y\le a)=P(4X−2Y\ge 4a)$, then find $a$. | Recall if $X \sim N(\mu_X,\sigma_X^2)$,$Y \sim N(\mu_Y,\sigma_Y^2)$ and $X,Y$ are independent then $AX+BY \sim N(A \mu_X + B \mu_Y, A^2 \sigma_X^2+B^2 \sigma_Y^2)$.
Then manipulate and use the symmetrical properties of $\Phi$. |
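For concreteness, here is where the hint leads numerically (a sketch; note it gives away the final value): $2X+Y\sim N(3,25)$ and $4X-2Y\sim N(2,100)$, so the condition becomes $\Phi\!\left(\frac{a-3}{5}\right)=\Phi\!\left(\frac{2-4a}{10}\right)$, forcing $\frac{a-3}{5}=\frac{2-4a}{10}$, i.e. $a=4/3$.

```python
import math

# Standard normal cdf via erf; check both probabilities agree at a = 4/3.
def Phi(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

a = 4 / 3
lhs = Phi((a - 3) / 5)          # P(2X + Y <= a),   2X+Y ~ N(3, 25)
rhs = 1 - Phi((4 * a - 2) / 10)  # P(4X - 2Y >= 4a), 4X-2Y ~ N(2, 100)
print(lhs, rhs)  # equal
```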
Self-adjoint Operator Properties | The facts you state do not depend on $K$ being compact, but only on $K$ being selfadjoint. For example, suppose $K^2x=0$. Then $0=\langle K^2x,x\rangle=\langle Kx,Kx\rangle=\|Kx\|^2$ implies $Kx=0$. So,
$$
\mathcal{N}(K^2)=\mathcal{N}(K).
$$
Therefore, if $K^2=K^3$, then $K^2(I-K)=0$ implies $K(I-K)=0$ or $K^2=K$.
There are several ways to prove that either $\|K\|$ or $-\|K\|$ is in the spectrum of $K$. One way is to prove that the norm of $K$ is the same as the spectral radius, which can be done by showing that $\|K\|=\|K^2\|^{1/2}$ for any selfadjoint operator $K$. This is because the spectral radius is $r_{\sigma}(K)=\lim_{n}\|K^n\|^{1/n}$, and
$$
\|K\|=\|K^2\|^{1/2}=\|K^4\|^{1/4}=\cdots=\lim_{n}\|K^{2^n}\|^{1/2^n}=r_{\sigma}(K).
$$
To see that $\|K\|=\|K^2\|^{1/2}$ for any selfadjoint $K$, note that
$$
\|K^2\| \le \|K\|\|K\|=\|K\|^2,
$$
and
$$
\|K\|^2=\sup_{\|x\|=1}\|Kx\|^2 = \sup_{\|x\|=1}(K^2x,x) \le \sup_{\|x\|=1}\|K^2x\|\|x\|=\|K^2\|.
$$
Every non-zero point of the spectrum $\sigma(K)$ of a selfadjoint compact operator is an eigenvalue. So, either $\|K\|=0$ or at least one of $\|K\|,-\|K\|$ is such a non-zero point of the spectrum because $\|K\|=r_{\sigma}(K)$.
An alternative method for showing that either $\|K\|$ or $-\|K\|$ is in the spectrum relies on the following operator norm equality which holds for selfadjoint operators $K$:
$$
\sup_{\|x\|} |\langle Kx,x\rangle| = \|K\|.
$$
Either $\sup_{\|x\|=1}\langle Kx,x\rangle =\|K\|$ or $\inf_{\|x\|=1}\langle Kx,x\rangle =-\|K\|$. The first case may be assumed to hold by replacing $K$ with $-K$ if necessary. Let $\{ x_n \}$ be a sequence of unit vectors chosen so that $\langle Kx_n,x_n\rangle$ converges to $\|K\|$. Then
$$
\|\|K\|x_n-Kx_n\|^2=\|K\|^2-2\|K\|\langle Kx_n,x_n\rangle+\langle Kx_n,x_n\rangle^2\rightarrow 0.
$$
Because $K$ is compact, there is a subsequence $\{ x_{n_k} \}$ such that $\{ Kx_{n_k}\}$ converges to some $y\in \mathcal{H}$. Then $\{ x_{n_k} \}$ also converges:
$$
x_{n_k} = \frac{1}{\|K\|}(\|K\|x_{n_k}-Kx_{n_k})+\frac{1}{\|K\|}Kx_{n_k}\rightarrow 0+\frac{1}{\|K\|}y.
$$
Hence $\frac{1}{\|K\|}y$ is a unit vector and, by the continuity of $K$, one obtains $K\left(\frac{1}{\|K\|}y\right)=y$, which proves that $\|K\|$ is an eigenvalue. |
Proving $\tau $ is a topology. | Let $X-\sigma(A_i), i \in I$ be open sets from $\tau$ and we need that their union is in $\tau$ too. My claim is that
$$\bigcup_{i \in I} (X-\sigma(A_i)) = X - \sigma(\bigcap_{i \in I} \sigma(A_i)) \in \tau$$
Taking complements on both sides and using de Morgan we see that the first identity is equivalent to
$$\bigcap_{i \in I} \sigma(A_i) = \sigma(\bigcap_{i \in I} \sigma(A_i))$$
which can be shown thusly:
For any fixed $j \in I$ we have $\bigcap_{i \in I} \sigma(A_i) \subseteq \sigma(A_j)$ so taking $\sigma$ on both sides preserves this inclusion (this follows from axiom 4 of $\sigma$, see below) so that $\sigma(\bigcap_{i \in I} \sigma(A_i)) \subseteq \sigma(\sigma(A_j))=\sigma(A_j)$ and as $j \in I$ is arbitrary we have $\sigma(\bigcap_{i \in I} \sigma(A_i)) \subseteq \bigcap_{i \in I} \sigma(A_i)$. The other inclusion is trivial from 2.
The lemma from 4: $A \subseteq B \to \sigma(A) \subseteq \sigma(B)$ because
$A \subseteq B$ implies $A \cup B=B$ so $\sigma(A \cup B)=\sigma(A) \cup \sigma(B)=\sigma(B)$ which implies in turn $\sigma(A) \subseteq \sigma(B)$. |
Extension of Splitting Fields over An Arbitrary Field | If $\theta$ is a root of $x^4+1$, then so are $\theta,\theta^3,\theta^5,\theta^7$.
These are all different because $\theta$ has order $8$ in $E^\times$, since $\theta^4=-1\ne1$.
Therefore, $x^4+1$ splits in $F(\theta)$.
If $\operatorname{char}(F)=2$, then $x^4+1=(x+1)^4$ and $E=F=F(1)$, also a simple extension. |
how analytically (or with matlab) can i find the solution to the following algebraic problem? | If we are given $m$ linear functions
$$\phi_i:\quad{\mathbb R}^n\to{\mathbb R}, \qquad x\mapsto \phi_i(x):=\langle d_i,x\rangle +c_i\qquad(1\leq i\leq m)$$
and a real $\gamma>0$ the $2m$ inequalities
$$-\gamma\leq \phi_i(x)\leq \gamma\qquad(1\leq i\leq m)$$
define a convex polytope $P\subset{\mathbb R}^n$ (which may be empty, and need not be bounded). The set you are interested in is the boundary $\partial P$ of this polytope. This boundary can have a complicated combinatorial structure. The possible numbers of vertices, edges, $\ldots$, facets form an intensive area of research. This is to say that your problem is computationally hard as soon as $m$ and $n$ get larger.
A simple case is treated in the following first version of this answer:
Assume that the matrix $D=[d_{ki}]$ is square and nonsingular, and consider the affine map
$$A:\quad x\mapsto y:=D'x+c\ .$$
(Your indexing of the $d_{ki}$ is somewhat unfortunate, hence the transpose $D'$.) You are interested in the set $$S:=\{x\in{\mathbb R}^n\>|\>\max_i|y_i|=\gamma\}\ .$$
Now the set $\{y\in {\mathbb R}^n\>|\>\max_i|y_i|=\gamma\}$ is the surface $\partial C$ of the $n$-dimensional cube $$C:=\{y\in {\mathbb R}^n\>|\>-\gamma\leq y_i\leq\gamma \ (1\leq i\leq n)\}\ ,$$
and is the union of $2n$ such cubes of one dimension less (an ordinary $3$-dimensional cube has $6$ two-dimensional square faces).
It follows that
$$S=A^{-1}(\partial C)=(D')^{-1}(\partial C -c)\ .$$ |
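For the square nonsingular case, the relation $S=(D')^{-1}(\partial C - c)$ can be checked numerically in $n=2$: pick points $y$ on the cube surface $\max_i|y_i|=\gamma$, pull them back through the affine map, and confirm the pre-images satisfy $\max_i|\phi_i(x)|=\gamma$ (a sketch with made-up data $d_i$, $c$):

```python
# n = 2 toy data: two linear forms phi_i(x) = <d_i, x> + c_i (made-up values)
d = [(2.0, 1.0), (0.0, 3.0)]
c = [0.5, -1.0]
gamma = 1.0

def phi(x):
    return [d[i][0] * x[0] + d[i][1] * x[1] + c[i] for i in range(2)]

def solve(y):
    """Solve phi(x) = y, a 2x2 linear system, by Cramer's rule."""
    (a, b), (e, f) = d
    det = a * f - b * e
    r0, r1 = y[0] - c[0], y[1] - c[1]
    return ((f * r0 - b * r1) / det, (a * r1 - e * r0) / det)

# points y on the cube surface max_i |y_i| = gamma pull back to points of S
for y in [(gamma, 0.3), (-gamma, -0.7), (0.1, gamma), (0.9, -gamma)]:
    x = solve(y)
    assert abs(max(abs(v) for v in phi(x)) - gamma) < 1e-12
```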
Fencing the Group size,and its implication to Finiteness of Tate-Shafarevich Group | For any group $G$, if $M$ is a group, $N$ is a subgroup of $M$, and $G\cong M/N$, then $|G||N| = |M|$ in the sense of cardinality.
This is because the underlying set of $G$ is the set of equivalence classes of $M$ modulo $N$. These equivalence classes partition $M$, so if $\{m_i\}_{i\in I}$ are a complete set of coset representatives for $N$ in $M$, then
$$|M| = \left|\bigcup_{i\in I}m_iN\right| = \sum_{i\in I}|m_iN| = \sum_{i\in I}|N| = |I||N|,$$
the next-to-last equality because $mN$ is bijectable with $N$ for every $m\in M$. Since the number of equivalence classes is $|I|$, we have $|G|=|I|$, so we get $|G||N|=|M|$.
In particular, if $M$ is finite, then necessarily $G$ is finite.
Are you sure this is really what you wanted to ask? |
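A tiny numerical illustration of $|G|\,|N|=|M|$, using $M=\mathbb{Z}/12\mathbb{Z}$ and $N=\{0,4,8\}$ (Python):

```python
# M = Z/12Z, N = {0, 4, 8}: the cosets partition M, so |G| * |N| = |M|
M = set(range(12))
N = {0, 4, 8}
cosets = {frozenset((m + n) % 12 for n in N) for m in M}
assert len(cosets) * len(N) == len(M)   # 4 * 3 == 12
```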
Find unknown coordinates of two vectors | Directed opposite sides means that there exists $k$ such that $ka=b$ and $k<0$. You get the equations $kx=8; 3k=x$. From here, find $k$ and pick the negative solution. From there, find $x$. |
Prove that a matrix $\mathbf{A} \in M_n$ is similar to a Hermitian matrix if and only if it is diagonalizable and has real eigenvalues | If $A$ is diagonalizable and it has real eigenvalues, then it is similar to a diagonal matrix $D$ such that all the entries of the main diagonal are real. But then $D$ is Hermitian. |
Find a matrix-valued product, given the eigenvalues and eigenspaces of the matrix | The eigenspaces of $-2$ and $5$ are non-empty and coincide, which is not possible. Judging from your matrix $P$ I guess that the eigespace of $5$ is actually $\{(r,0,0) \mid r \in \mathbb{R}\}$.
Then we already have all we need: We have $M = P D P^{-1}$ for the matrices
$$
P =
\begin{pmatrix}
0 & 0 & 1 \\
1 & 0 & 0 \\
0 & 1 & 0 \\
\end{pmatrix}
\quad\text{and}\quad
D =
\begin{pmatrix}
-2 & 0 & 0 \\
0 & -2 & 0 \\
0 & 0 & 5 \\
\end{pmatrix}.
$$
Because $P^{-1}$ is given by
$$
P^{-1} =
\begin{pmatrix}
0 & 1 & 0 \\
0 & 0 & 1 \\
1 & 0 & 0 \\
\end{pmatrix}
$$
we get that
$$
M \begin{pmatrix} 1 \\ 1 \\ -2 \end{pmatrix}
= PDP^{-1} \begin{pmatrix} 1 \\ 1 \\ -2 \end{pmatrix}
= PD \begin{pmatrix} 1 \\ -2 \\ 1 \end{pmatrix}
= P \begin{pmatrix} -2 \\ 4 \\ 5 \end{pmatrix}
= \begin{pmatrix} 5 \\ -2 \\ 4 \end{pmatrix}.
$$ |
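The computation above is easy to verify mechanically; a minimal pure-Python check (no libraries assumed):

```python
# pure-Python 3x3 helpers to re-do the computation M v = P D P^{-1} v
def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

P    = [[0, 0, 1], [1, 0, 0], [0, 1, 0]]
Pinv = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
D    = [[-2, 0, 0], [0, -2, 0], [0, 0, 5]]

assert matmul(P, Pinv) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
M = matmul(P, matmul(D, Pinv))
assert matvec(M, [1, 1, -2]) == [5, -2, 4]
```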
Semigroup where the sum of any two elements is one of the two elements | Examples include the right zero semigroups, which satisfy the identity $xy = x$, and the left zero semigroups, which satisfy the identity $yx = x$.
In fact, your semigroups are chains of right or left zero semigroups.
Theorem. Let $S$ be a semigroup such that, for all $x, y \in S$, $xy \in \{ x, y \}$. Then there is a totally ordered set $(I, \leqslant)$ and for each $i \in I$ a semigroup $S_i$ such that:
for each $i \in I$, $S_i$ is either a right or a left zero semigroup,
$S$ is the disjoint union of the $S_i$,
If $s\in S_i$ and $t\in S_j$, the product on $S$ is given by
$st =
\begin{cases}
s & \text{if $i < j$ or $i = j$ and $S_i$ is a right zero semigroup}\\
t & \text{if $j < i$ or $i = j$ and $S_i$ is a left zero semigroup}
\end{cases}$
Proof. I let you verify that these conditions define a semigroup. For the rest of the proof, you need to know about Green's relations. First of all, since for all $x, y \in S$, $xy \in \{ x, y \}$, the relation $\leqslant_\mathcal{J}$ is a total order. Let $(S_i)_{i \in I}$ be the set of all $\mathcal{J}$-classes of $S$. Then (2) holds. Moreover, $I$ is totally ordered by the relation $\leqslant$ defined by $i \leqslant j$ if and only if $S_i \leqslant_\mathcal{J} S_j$. Since $S$ is idempotent, each $\mathcal{J}$-class $S_i$ is a subsemigroup of $S$. Moreover, since for all $x, y \in S$, $xy \in \{ x, y \}$, by Green's lemma, $S_i$ is necessarily either a right or a left zero semigroup, which proves (1). Condition (3) is now clear: if $s\in S_i$ and $t\in S_j$ with $i < j$, then since $st \leqslant_\mathcal{J} s <_\mathcal{J} t$ and $st \in \{s, t\}$, one has $st = s$. |
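The chain construction can be tested exhaustively on a small example. The sketch below uses the standard convention that a left zero semigroup satisfies $xy = x$ and a right zero semigroup satisfies $xy = y$, with a hypothetical two-level chain $S_0 \cup S_1$ (Python):

```python
from itertools import product

# chain I = {0, 1}: S_0 = {a, b} left zero (xy = x), S_1 = {c, d} right zero (xy = y)
level = {'a': 0, 'b': 0, 'c': 1, 'd': 1}
kind = {0: 'left', 1: 'right'}

def mul(s, t):
    i, j = level[s], level[t]
    # the lower level absorbs; at equal level use that semigroup's own product
    if i < j or (i == j and kind[i] == 'left'):
        return s
    return t

S = 'abcd'
for x, y, z in product(S, repeat=3):
    assert mul(mul(x, y), z) == mul(x, mul(y, z))   # associativity
for x, y in product(S, repeat=2):
    assert mul(x, y) in (x, y)                      # xy ∈ {x, y}
```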
inequality probability between order statistics of two independent distribution | Well, assuming the two distributions are for continuous random variables, and that they have well behaved probability density functions $f_1, f_2$, then:
$$\Pr(X_{(1)}>Y_{(1)}) = \displaystyle\int_\Bbb R \binom{k}{k}\, (1-F_1(y))^k\, \binom{k}{k-1}\,(1-F_2(y))^{k-1}\,f_2(y)\operatorname d y$$
This is because we want the probability that all $k$ of the type-1 samples and all but one of the $k$ type-2 samples are larger than the least of the type-2 samples, whatever its value happens to be.
Use similar logic to find the rest.
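The formula is straightforward to check by simulation. Below, hypothetical choices $X_i \sim U(0,1)$, $Y_i \sim U(0,2)$ and $k=2$ are assumed purely for illustration; the quadrature restricts to $[0,1]$, where $1-F_1$ is nonzero (Python):

```python
import random

random.seed(1)
k, trials = 2, 200_000
hits = 0
for _ in range(trials):
    min_x = min(random.uniform(0, 1) for _ in range(k))
    min_y = min(random.uniform(0, 2) for _ in range(k))
    hits += min_x > min_y
p_mc = hits / trials

# midpoint quadrature of  ∫ (1-F1)^k * k * (1-F2)^(k-1) * f2 dy  on [0, 1]
n = 100_000
p_int = sum((1 - y) ** k * k * (1 - y / 2) ** (k - 1) * 0.5 * (1 / n)
            for y in ((i + 0.5) / n for i in range(n)))

assert abs(p_int - 7 / 24) < 1e-6   # exact value for these particular choices
assert abs(p_mc - p_int) < 0.01
```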
Torus defined by relation | When constructing a quotient space of a topological space $X$, one is given a decomposition of $X$, which means a collection of pairwise disjoint subsets of $X$ whose union is $X$.
From set theory, we learn that there is a one-to-one correspondence as follows:
$$\{\text{decompositions of $X$}\} \leftrightarrow \{\text{equivalence relations on $X$}\}
$$
So, the relation $p$ in your post must be an equivalence relation (... and it is ...) in order to be able to use it to obtain a decomposition and thus a quotient space.
If you had instead defined $p$ without the last requirement, then $p$ would not be an equivalence relation.
However, there is a shortcut we often take. Another theorem of set theory says that for any set $X$ and for any relation $R$ on $X$, there exists a unique "smallest" equivalence relation that contains $R$, namely the "reflexive, symmetric, transitive closure" of $R$. One calls this the equivalence relation generated by $R$.
One can then define a quotient space using any relation $R$ on any topological space $X$: use the decomposition of $X$ that corresponds to the equivalence relation generated by $R$.
So, if in your example you had left out the last condition, you would obtain a relation $R$ which is not an equivalence relation; but then by taking the transitive closure you would be putting the last condition back in, and the result would be the equivalence relation $p$ written in your post. |
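The "equivalence relation generated by $R$" can be computed directly for finite toy examples by closing under reflexivity, symmetry, and transitivity (a naive Python sketch):

```python
def equivalence_closure(pairs, elements):
    """Reflexive-symmetric-transitive closure of a relation (naive, finite case)."""
    rel = set(pairs) | {(x, x) for x in elements} | {(b, a) for a, b in pairs}
    changed = True
    while changed:
        changed = False
        for (a, b) in list(rel):
            for (c, d) in list(rel):
                if b == c and (a, d) not in rel:
                    rel.add((a, d))
                    changed = True
    return rel

# toy: R relates 0~1 and 1~2 only; the closure restores 0~2, symmetry, reflexivity
R = [(0, 1), (1, 2)]
E = equivalence_closure(R, {0, 1, 2, 3})
assert (0, 2) in E and (2, 0) in E and (3, 3) in E
```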
Inequality involving sums of reciprocals and n-th root | Write this as
$$2^{1/n} \le \frac{\left(1 + \frac{1}{n}\right) + \dots + \left(1 + \frac{1}{2n-1}\right)}{n}.$$
The result follows from AM-GM once you show that (for example by induction)
$$\left(1 + \frac{1}{n}\right) \left(1 + \frac{1}{n+1}\right) \dots \left(1 + \frac{1}{2n-1} \right) = 2.$$ |
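Both the telescoping product identity and the resulting inequality are easy to confirm exactly with rational arithmetic (Python):

```python
from fractions import Fraction

for n in range(1, 40):
    prod = Fraction(1)
    for k in range(n, 2 * n):
        prod *= 1 + Fraction(1, k)       # telescoping: ∏ (k+1)/k = 2n/n = 2
    assert prod == 2
    # AM-GM then gives 2^(1/n) <= arithmetic mean of the n factors
    mean = sum(1 + 1 / k for k in range(n, 2 * n)) / n
    assert 2 ** (1 / n) <= mean + 1e-12
```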
The number 3750 satisfies ϕ(3750)=1000. Find a number a that has the following three properties | Since $7$ is coprime to $3750$, what can you say about $7^{1000}$? What does that tell you about $7^{3003}$?
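Following the hint numerically (Python; three-argument `pow` does fast modular exponentiation):

```python
from math import gcd

n = 3750
phi = sum(1 for k in range(1, n) if gcd(k, n) == 1)
assert phi == 1000                              # confirms φ(3750) = 1000
assert pow(7, 1000, n) == 1                     # Euler: gcd(7, 3750) = 1
assert pow(7, 3003, n) == pow(7, 3, n) == 343   # 3003 = 3·1000 + 3
```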
How does one use ML inequality with this equation? | $|1+e^{2iz}|\le1+|e^{2iz}|=2$.
So, we have
$$\lvert \frac{1+ e^{2iz}}{R^2 -1} \rvert \le \frac{2}{R^2-1} $$
Now,
$$\lvert \oint_Cf(z) \,dz \rvert \le \oint_C\frac{2}{R^2-1} dz \;,$$ where $C$ is semi-circle. So, $$\lvert \oint_Cf(z) \,dz \rvert \le \frac{2}{R^2-1}\oint_Cdz \;\le \frac{2\pi R}{R^2-1}$$
The statement 'I understand that $1+e^{2iz}$ should be equal to $2$, which means that $e^{2iz}$ should equal $1$' is not correct; the ML inequality only needs an upper bound, not equality.
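A numerical spot-check, assuming (as the question suggests) the integrand $f(z) = \frac{1 + e^{2iz}}{z^2 - 1}$ on the upper semicircle $|z| = R$; the midpoint-rule arc integral stays within the ML bound $\frac{2\pi R}{R^2 - 1}$ (Python):

```python
import cmath
import math

# assuming the integrand from the question is f(z) = (1 + e^{2iz}) / (z^2 - 1)
R, n = 3.0, 20000
total = 0j
for i in range(n):
    t0, t1 = math.pi * i / n, math.pi * (i + 1) / n
    zm = R * cmath.exp(1j * (t0 + t1) / 2)          # midpoint of the sub-arc
    dz = R * (cmath.exp(1j * t1) - cmath.exp(1j * t0))
    total += (1 + cmath.exp(2j * zm)) / (zm * zm - 1) * dz

bound = 2 * math.pi * R / (R * R - 1)               # the ML estimate
assert abs(total) <= bound
```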
Expanding and understanding the poison pills riddle | It's not true that you can solve this with $13$ pills; this page explains why.
You can get an upper bound on the number of pills for which this can be solved with $k$ weighings as follows. For $n$ pills, there are $2n$ different results to distinguish ($1$ out of $n$ pills and $1$ out of the $2$ possibilities "lighter" and "heavier"). From each weighing, you get one of $3$ results, so with $k$ weighings you can differentiate $3^k$ outcomes. Thus, if you can find an ideal weighing scheme such that the remaining possible results are distributed as equally as possible among the three outcomes of each weighing, you can solve the problem for $\lfloor3^k/2\rfloor$ pills with $k$ weighings. For $3$ weighings, this yields an upper bound of $13$ pills, so this bound isn't tight, since the problem can in fact only be solved for $12$ pills. For $4$ weighings, this yields $40$ pills, but a moment's reflection shows that this is in fact unsolvable for a similar reason as the $k=3,n=13$ case.
You may also be interested in this: Finding four numbers. |
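The counting bound in the paragraph above is just arithmetic: $n$ pills give $2n$ outcomes, $k$ weighings distinguish at most $3^k$ results, so $n \le \lfloor 3^k/2 \rfloor$ (Python):

```python
# 2n outcomes must fit into 3^k weighing results, so n <= floor(3^k / 2)
def max_pills(k):
    return 3 ** k // 2

assert max_pills(3) == 13   # the bound; only 12 is actually achievable
assert max_pills(4) == 40
```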
representing $(\frac{x}{2-x})^3$ as a power series | For this problem,
the generalized binomial theorem
is your friend.
It states that,
for any real $a$,
$(1+x)^a
=\sum_{n=0}^{\infty} \binom{a}{n} x^n
$
for $|x| < 1$.
In this,
$\binom{a}{n}
=\dfrac{a(a-1)...(a-n+1)}{n!}
$.
When $a$ is negative integer,
$a=-m$,
$\begin{array}\\
\binom{a}{n}
&=\dfrac{a(a-1)...(a-n+1)}{n!}\\
&=\dfrac{\prod_{k=0}^{n-1}(a-k)}{n!}\\
&=\dfrac{\prod_{k=0}^{n-1}(-m-k)}{n!}\\
&=(-1)^n\dfrac{\prod_{k=0}^{n-1}(m+k)}{n!}\\
&=(-1)^n\dfrac{\prod_{k=0}^{n-1}(m+n-1-k)}{n!}\\
&=(-1)^n\dfrac{(m+n-1)!}{(m-1)!n!}\\
&=(-1)^n\binom{m+n-1}{m-1}\\
\end{array}
$
Therefore
$(1+x)^{-m}
=\sum_{n=0}^{\infty} (-1)^n\binom{m+n-1}{m-1} x^n
$.
If we put $-x$ for $x$,
as your problem has,
we get
$(1-x)^{-m}
=\sum_{n=0}^{\infty} (-1)^n\binom{m+n-1}{m-1} (-x)^n
=\sum_{n=0}^{\infty} \binom{m+n-1}{m-1} x^n
$.
In your case,
starting as you have done,
but using this formula,
we get
$\begin{array}\\
f(x)
&=\left(\dfrac{x}{2-x}\right)^{3}\\
&=\dfrac{x^3}{8}(1-x/2)^{-3}\\
&=\dfrac{x^3}{8}\sum_{n=0}^{\infty} \binom{3+n-1}{2} (x/2)^n\\
&=\dfrac{x^3}{8}\sum_{n=0}^{\infty} \binom{n+2}{2} (x/2)^n\\
\end{array}
$ |
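The final expansion says the coefficient of $x^{n+3}$ in $f(x)$ is $\binom{n+2}{2}/2^{n+3}$. This can be confirmed with exact rational arithmetic by cubing the geometric series for $1/(2-x)$ (Python):

```python
from fractions import Fraction
from math import comb

N = 12
g = [Fraction(1, 2 ** (n + 1)) for n in range(N)]   # series of 1/(2 - x)

def mul(a, b):
    """Truncated Cauchy product of two power series (length N)."""
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

g3 = mul(mul(g, g), g)                               # series of 1/(2 - x)^3
# f(x) = x^3 / (2 - x)^3, so the coefficient of x^(n+3) is g3[n]
for n in range(N - 3):
    assert g3[n] == Fraction(comb(n + 2, 2), 2 ** (n + 3))
```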
Questions about the proof of Riesz-Fischer Theorem in Measure Theory | Since $\int_{\mathbb{R}}g < \infty$, $g$ is finite almost everywhere.
It means that $|f_{n_1}(x)| + \sum_{k = 1}^\infty (|f_{n_{k+1}}(x) -f_{n_{k}}(x)|)$ converges almost everywhere.
So, $f_{n_1}(x) + \sum_{k = 1}^\infty (f_{n_{k+1}}(x)
-f_{n_{k}}(x)) $ converges (absolutely) almost everywhere. |
Calculate speed of light beam cast onto sphere surface | Hence $$x=R\cos \theta$$
$$\theta=\cos^{-1} (\frac{x}{R})$$
Now we can get,
$$\frac{d\theta}{dt}=\frac{-1}{\sqrt{R^2-x^2}}\frac{dx}{dt}$$
Now using basic knowledge of circular motion,
$$v_{\text{surface}}=\color{red}{-}R\frac{d\theta}{dt}=\frac{R}{\sqrt{R^2-x^2}}\frac{dx}{dt}$$
$$v_{\text{surface}}=\frac{R}{\sqrt{R^2-x^2}}v_{\text{object}}$$
We have the $\color{red}{\text{minus}}$ because $\omega$ is anticlockwise, which would give us $v$ in opposite direction of what we want. |
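A quick numerical check of $v_{\text{surface}} = \frac{R}{\sqrt{R^2-x^2}}\,v_{\text{object}}$, differentiating $\theta(t)=\cos^{-1}(x(t)/R)$ by central differences (Python; the numbers are arbitrary):

```python
import math

R, v = 2.0, 0.7                       # arbitrary sphere radius and object speed
x = lambda t: 0.3 + v * t             # object position along the axis, |x| < R
theta = lambda t: math.acos(x(t) / R)

t, h = 0.5, 1e-6
dtheta = (theta(t + h) - theta(t - h)) / (2 * h)     # numeric dθ/dt
v_surf = -R * dtheta                                 # v_surface = -R dθ/dt
expected = R / math.sqrt(R ** 2 - x(t) ** 2) * v
assert abs(v_surf - expected) < 1e-5
```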
How to calculate the total width of an object with a radius | Suppose you have a small circle of radius $r$ and a large circle of radius $R$, centered around the origin of a coordinate system. From both you take a segment from angle $0$ to angle $\alpha$. Then the points touching the vertical extremal lines will be
$$A=\begin{pmatrix}r\cos\alpha\\r\sin\alpha\end{pmatrix}\qquad
B=\begin{pmatrix}R\\0\end{pmatrix}$$
so the width is the difference of $x$ coordinates, namely $R-r\cos\alpha$.
Now you know that $\alpha=30°$ resp $\alpha=45°$ and you also know $R-r=40$ is the width of the ring. But you don't seem to know $r$ or $R$ itself. So even assuming the corners are not rounded, you will need additional information besides what is given in your figure.
You could estimate either radius from measurements you perform on the figure, but in that case you might as well measure the width there, and obtain the scale from one of the given dimensions to convert from figure to real-world dimensions.
Update: If Connot Harris is correct in assuming that the $80$ in these sketches refers to the outer radius $R$, then you can solve this just as he said:
$$80 - 40\cos30°=80-20\sqrt3\approx45.36\qquad
80 - 40\cos45°=80-20\sqrt2\approx51.72$$ |
Two variable function in this ODE? (Peano theorem) | $\underline{\text{More general case}}$ :
$f(x,y)\quad$ is a two variables function : The variables are $x$ and $y$.
In the case where those variables are functions of another common variable, say $t$, this generates a new function of only one variable $t$:
$f\left(x(t),y(t)\right)= F(t)$
$F(t)$ and $f(x,y)$ are not the same function since they are functions of different variables and the form of those functions are different.
The functions are different but, of course, the values taken by $\quad F(t)\quad$ and by $\quad f(x,y)\quad$ are equal when $\quad x=x(t)\quad$ and when $\quad y=y(t)$.
$\underline{\text{This is the same in the case considered in the raised question}}$ :
$f(x,y)\quad$ is a function of two variables. If $y$ is function of $x$ this generates a new function of only one variable $x$ :
$f\left(x,y(x)\right)= F(x)$
$F(x)$ and $f(x,y)$ are not the same function since they are functions of different variables and the form of those functions are different.
The functions are different but, of course, the values taken by $\quad F(x)\quad$ and by $\quad f(x,y)\quad$ are equal when $\quad y=y(x)$. |
Prove there exists a constant $K>0$ such that $|e^z-1-z-\frac{z^2}{2}|<K|z^3|$ as $z \to 0$ | We have
$$e^z - 1 - z - \frac{z^2}{2} = \sum_{n = 3}^\infty \frac{z^n}{n!},$$
hence
$$\biggl\lvert e^z - 1 - z - \frac{z^2}{2}\biggr\rvert \leqslant \sum_{n = 3}^\infty \frac{\lvert z\rvert^n}{n!}.$$
If we require $\lvert z\rvert < 1$, then $\lvert z\rvert^n \leqslant \lvert z\rvert^3$ for all $n \geqslant 3$, and that gives you the explicit constant $K = \sum_{n=3}^\infty \frac{1}{n!} = e - \frac{5}{2}$.
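Numerically, the explicit constant $K=\sum_{n\ge3} 1/n! = e - 5/2 \approx 0.218$ works on the whole disk $|z|<1$ (a Python spot-check on circles of several radii):

```python
import cmath
import math

K = math.e - 2.5                    # sum_{n >= 3} 1/n! = e - 1 - 1 - 1/2
for r in (0.1, 0.5, 0.9):
    for k in range(12):
        z = r * cmath.exp(2j * math.pi * k / 12)
        lhs = abs(cmath.exp(z) - 1 - z - z * z / 2)
        assert lhs <= K * abs(z) ** 3 + 1e-15
```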
Show that $\phi(z) = i\frac{1-z}{1+z}$ is bijection between the open disk and upper half plane | Note that if we solve $w=\phi(z) = i {1-z \over 1+z} $ formally then we get $z = {i-w \over i+w}$.
Note that $-1$ in the domain and $-i$ in the range (codomain) are special. If we try to solve $\phi(z) = -i$ we get $1=-1$ hence the range of $\phi$ does not include $-i$.
In particular, it shows that $\phi$ is a bijection $\phi: \mathbb{C} \setminus \{ -1\} \to \mathbb{C} \setminus \{ -i\}$. (As an aside, as a map on the Riemann sphere, $\phi: C_\infty \to C_\infty$ is a bijection and $\phi(-1) = -i$.)
Hence if we can show that $\phi(D) = \mathbb{H}$ we are finished.
(Note that $-1 \notin D$ and $-i \notin \mathbb{H}$.)
We have $\phi(x+iy) = i{1-x-iy \over 1+x+iy} = i{(1-x-iy) (1+x-iy) \over (1+x)^2+y^2} = i{1-x^2-y^2 -2 iy \over (1+x)^2+y^2} = {-2y +i(1-x^2-y^2) \over (1+x)^2+y^2}$.
Hence we see that $\operatorname{im} \phi(x+iy)>0 $ iff $x^2+y^2 < 1$. |
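The claimed bijection and its formal inverse $z = \frac{i-w}{i+w}$ can be spot-checked numerically: random points of the open disk land in the upper half-plane and are recovered by the inverse (Python):

```python
import cmath
import math
import random

def phi(z):
    """The map φ(z) = i(1 - z)/(1 + z)."""
    return 1j * (1 - z) / (1 + z)

def phi_inv(w):
    """The formal inverse from the answer: z = (i - w)/(i + w)."""
    return (1j - w) / (1j + w)

random.seed(0)
for _ in range(500):
    # random point in the open unit disk (so z = -1 never occurs)
    r = 0.999 * random.random() ** 0.5
    t = random.uniform(0, 2 * math.pi)
    z = r * cmath.exp(1j * t)
    w = phi(z)
    assert w.imag > 0                      # φ(z) lies in the upper half-plane
    assert abs(phi_inv(w) - z) < 1e-9      # and the inverse recovers z
```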
Under which conditions a non-compact and connected space possesses a non-constant continuous real-valued function | Here is a easy example: An uncountable $X$ equipped with the cocountable topology is connected noncompact and the only continuous real-valued functions are constants. The proof is almost the same as the cofinite case.
Indeed, to admit a nonconstant $\mathbb{R}$-valued function means there is a nontrivial Hausdorff quotient $X\to f(X)$, and of course you can impose compactness there too since you can cut off $f$. |
Prove another matrix is positive definite given that A is a Hermitian matrix | Hint: $A$ is diagonalizable, meaning $A=PDP^{-1}$, $D$ diagonal, for some $P$. Now try to diagonalize $A^3+I$ |
How to avoid the undetermined form $0/0$ | Since $x\to -1$ we have that $|x|=-x$, so we have that
$$\lim_{x\to-1} \ln \left(\frac{x^2+1-2|x|}{x^2-1}\right)=\lim_{x\to-1} \ln \left(\frac{x^2+1+2x}{x^2-1}\right)=\lim_{x\to-1} \ln \left(\frac{(x+1)^2}{x^2-1}\right)$$
can you conclude? |
Derive Distance of skew lines with methods from analysis | Without loss of generality, let $\ c=0,\ $ but we do not let $\ |b|=|d|=1\ $
because we would lose the valuable homogeneous property.
We recall the standard scalar
Triple product identity:
$$ \Delta := (a \cdot (b \times d))^2 = \begin{vmatrix} a\cdot a & a \cdot b & a\cdot d \\
b \cdot a & b\cdot b & b\cdot d \\
d \cdot a & d\cdot b & d\cdot d
\end{vmatrix}. \tag{1} $$
$$ \textrm{Let }\quad b_v := \frac{b}{|b|} \!-\! \frac{b\cdot d}{|b|\,|d|} \frac{d}{|d|},
\quad d_v := \frac{d}{|d|} \!-\! \frac{b\cdot d}{|b|\,|d|} \frac{b}{|b|},
\quad \textrm{ and} \tag{2} $$
$$ a_v := a\ |b\times d|^2 - |b|^2|d|^2 \Big(
a\cdot b_v\frac{b}{|b|} + a\cdot d_v\frac{d}{|d|}\Big),
\quad D := a_v/|b\times d|^2. \tag{3} $$
A bit of simplification of equation $(2)$ gives us
$$ |b|\ |d|^2\ b_v = b\ |d|^2 - d\ (b\cdot d), \quad
|b|^2\ |d|\ d_v = d\ |b|^2 - b\ (b\cdot d). \tag{4} $$
Substituting equation $(3)$ in equation $(4)$ gives us
$$ a_v = (a_0)a + (b_0)b + (d_0)d \quad \textrm{ with } \tag{5} $$
$$ a_0 :=|b\!\times\!d|^2,\quad
b_0 := -a\!\cdot\!\big(b\ |d|^2 \!-\! d\ (b\!\cdot\! d)\big),\quad
d_0 := -a\!\cdot\!\big(d\ |b|^2 \!-\! b\ (b\!\cdot\! d)\big). \tag{6} $$
Taking the dot product of equation $(5)$ we get
$$ a_v\!\cdot\!a_v = (a\!\cdot\!a)a_0^2 \!+\! (b\!\cdot\!b)b_0^2
\!+\! (d\!\cdot\!d)d_0^2 \!+\! 2(a\!\cdot\!b)a_0b_0 \!+\!
2(a\!\cdot\!d)a_0d_0 \!+\! 2(b\!\cdot\!d)b_0d_0. \tag{7}$$
Using $\ |b\!\times\!d|^2 = |b|^2|d|^2\!-\!(b\!\cdot\!d)^2,\ $
and equations $(1)$ and $(6)$ and factoring equation $(7)$ we get
$$ |a_v|^2 = a_v\cdot a_v = |b\times d|^2 \Delta =
|b\times d|^2 |a\cdot (b\times d)|^2 \tag{8}$$
Solving for $\ |a_v|\ $ and using equation $(3)$ we get
$$ |a_v| = |b\times d|\ |a\cdot b\times d|,\quad
|D| = |a\cdot b\times d|/|b\times d| \tag{9} $$
which is what we wanted to prove. |
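The final formula $|D| = |a\cdot(b\times d)|/|b\times d|$ can be verified independently by minimizing $|a + s b - t d|$ via the normal equations (pure Python, with arbitrary sample vectors; $c=0$ as in the derivation):

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# lines p(s) = a + s b and q(t) = t d (so c = 0, as in the derivation)
a, b, d = (1.0, 2.0, 4.0), (1.0, 0.0, 1.0), (0.0, 1.0, 1.0)

n = cross(b, d)
D = abs(dot(a, n)) / dot(n, n) ** 0.5

# independent check: minimize |a + s b - t d|^2 by solving the normal equations
bb, bd, dd, ab, ad = dot(b, b), dot(b, d), dot(d, d), dot(a, b), dot(a, d)
det = bd * bd - bb * dd                      # determinant of [[bb, -bd], [bd, -dd]]
s = (ab * dd - bd * ad) / det
t = (bd * ab - bb * ad) / det
closest = tuple(a[i] + s * b[i] - t * d[i] for i in range(3))
assert abs(dot(closest, closest) ** 0.5 - D) < 1e-12
```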
Trigonometric identity expressing $\sec \theta+\text{cosec } \theta$ in terms of sine and cosine | Perhaps most algebraically natural is to go from right to left. We have
$$\frac{\sin\theta+\cos\theta}{\sin\theta\cos\theta}=\frac{\sin\theta}{\sin\theta\cos\theta}+\frac{\cos\theta}{\sin\theta\cos\theta}=\frac{1}{\cos\theta}+\frac{1}{\sin\theta}=\sec\theta+\csc\theta.$$
However, going from left to right is also in a certain sense natural. Express the left side in terms of sines and cosines. We have
$$\csc\theta+\sec\theta=\frac{1}{\sin\theta}+\frac{1}{\cos\theta}.$$
Now bring the expression on the right to a common denominator $\sin\theta\cos\theta$. |
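A quick numerical confirmation of the identity at a few angles (Python):

```python
import math

for theta in (0.3, 1.0, 2.0, 4.0):   # arbitrary angles away from multiples of π/2
    lhs = 1 / math.cos(theta) + 1 / math.sin(theta)
    rhs = (math.sin(theta) + math.cos(theta)) / (math.sin(theta) * math.cos(theta))
    assert abs(lhs - rhs) < 1e-9
```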
Conditional Mean and Variance - Random Walk | $$
\mathbf{E}\left[P_t|P_{t-1} = x\right] = \mathbf{E}\left[P_{t-1}+\varepsilon_t|P_{t-1} = x\right] = \mathbf{E}\left[x+\varepsilon_t\right] = \mathbf{E}\left[x\right] + \mathbf{E}\left[\varepsilon_t\right] = x + \mu.
$$
$$
\mathbf{Var}\left[P_t|P_{t-1} = x\right] = \mathbf{Var}\left[P_{t-1}+\varepsilon_t|P_{t-1} = x\right] = \mathbf{Var}\left[x+\varepsilon_t\right] = \mathbf{Var}\left[x\right] + \mathbf{Var}\left[\varepsilon_t\right] = 0 + \sigma^2 = \sigma^2.
$$ |
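A simulation sketch matching the two displays, with hypothetical parameters $\mu=0.5$, $\sigma=2$, and conditioning value $x=10$ (Python):

```python
import random

random.seed(0)
mu, sigma, x = 0.5, 2.0, 10.0        # hypothetical drift, volatility, P_{t-1}
samples = [x + random.gauss(mu, sigma) for _ in range(100_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
assert abs(mean - (x + mu)) < 0.05   # E[P_t | P_{t-1} = x] = x + mu
assert abs(var - sigma ** 2) < 0.15  # Var[P_t | P_{t-1} = x] = sigma^2
```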
Calculating the limit $\lim_{x \to \infty } \left( \frac{a-1+b ^{ \frac{1}{x} } }{a} \right) ^{x}$ for $a>0, b>0$ | The idea of setting $x=1/y$ is good; go ahead by taking the logarithm:
$$
\log\left(\frac{a-1+b^y}{a}\right)^{\!1/y}=\frac{\log(a-1+b^y)-\log a}{y}
$$
so you want to compute
$$
\lim_{y\to0^+}\frac{\log(a-1+b^y)-\log a}{y}
$$
This is the derivative at $0$ of $f(y)=\log(a-1+b^y)$; since
$$
f'(y)=\frac{b^y\log b}{a-1+b^y}
$$
we have
$$
f'(0)=\frac{\log b}{a}=\log(b^{1/a})
$$
Therefore your limit is
$$
\exp(\log(b^{1/a}))=b^{1/a}
$$ |
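A numerical sanity check of the limit $b^{1/a}$ for a few $(a,b)$ pairs (Python):

```python
# limit of ((a - 1 + b^(1/x)) / a)^x as x -> infinity should be b^(1/a)
for a, b in [(2.0, 5.0), (3.0, 0.5), (1.5, 7.0)]:
    x = 1e6
    val = ((a - 1 + b ** (1 / x)) / a) ** x
    assert abs(val - b ** (1 / a)) < 1e-3
```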
General Topology: pasting lemma problem | Let $F\subseteq Y$ be closed.
Then $f^{-1}(F)\cap A_i$ is closed in $A_i$ for every $i\in I$ because $f$ restricted to $A_i$ is continuous for every $i\in I$.
That means that we can write $f^{-1}(F)\cap A_i=A_i\cap G$ for some closed $G\subseteq X$.
But then $A_i\cap G$ is - as an intersection of closed sets - also closed (here it is used that $A_i$ is closed).
Then $f^{-1}(F)=\bigcup_{i\in I}(f^{-1}(F)\cap A_i)$ is a finite union of closed sets, hence is closed (here it is used that $I$ is finite).
This proves that $f$ is continuous. |
Prove or disprove $\mathbb{Q}[x] /(x^5-3) \cong \mathbb{Q}[x] /(x^5-9)$ | Let's see some of the numbers in $\mathbb{Q}(\sqrt[5]3)$. We clearly have $\sqrt[5] {3}$. So we must have its powers
$$ \sqrt[5] {3}, \sqrt[5] {9},\sqrt[5] {27},\sqrt[5] {81}.$$
And so in particular, we can see that
$$ \mathbb{Q}(\sqrt[5]{9}) \subset \mathbb{Q}(\sqrt[5] 3).$$
Since both are degree five extensions, this subset relation must actually be an equality. So
$$ \mathbb{Q}(\sqrt[5]{9}) = \mathbb{Q}(\sqrt[5] 3),$$
finishing your proof. $\diamondsuit$ |
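Concretely, the reverse inclusion also holds because $\sqrt[5]{3} = (\sqrt[5]{9})^3/3$ (since $9^{3/5}=3^{6/5}$); a one-line numeric check (Python):

```python
# (9^(1/5))^3 / 3 equals 3^(1/5), exhibiting 3^(1/5) inside Q(9^(1/5))
assert abs((9 ** (1 / 5)) ** 3 / 3 - 3 ** (1 / 5)) < 1e-9
```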
Lattice teory: joins of vectors containing lattice elements? | What you're asking is equivalent to the following question:
If $x_1,\ldots,x_n$ and $y_1,\ldots, y_n$ are elements of a lattice such that $x_i\leq y_i$, for $1\leq i \leq n$, then is it true that $\bigvee_{i=1}^n x_i \leq \bigvee_{i=1}^n y_i$?
The answer is yes; it is very clear that it is so.
To prove it, you can argue by stating that
$$x_1 \vee x_2 \leq x_1 \vee y_2 \leq y_1 \vee y_2$$
and proceed in the obvious way for bigger values of $n$.
Formally, you can use an induction process in a straightforward way.
More generally, if $A$ and $B$ are subsets of a lattice such that $\bigvee A,\bigvee B$ exist in that lattice, and if for every $a \in A$ there exist $b \in B$ with $a \leq b$, then $\bigvee A \leq \bigvee B$.
This is just because every upper bound of $B$ is also an upper bound of $A$.
Indeed, if $u$ is an upper bound of $B$ and $a \in A$, then there exists $b \in B$ such that $a \leq b$, whence $a \leq u$. |
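A concrete instance in the divisibility lattice of positive integers, where join is lcm (Python 3.9+ for multi-argument `math.lcm`):

```python
from math import lcm   # Python 3.9+

# divisibility lattice on positive integers: x ≤ y iff x | y, join = lcm
xs = [2, 3, 4]
ys = [6, 3, 8]
assert all(y % x == 0 for x, y in zip(xs, ys))     # x_i ≤ y_i componentwise
assert lcm(*ys) % lcm(*xs) == 0                    # join of xs ≤ join of ys (12 | 24)
```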
What does $$ notation mean in gradient? | $\langle x,y,z\rangle$ means nothing but an element of $\mathbb{R}^3$ in the context of your given link.
The author of that website explains here the distinction of $(\ )$ and $\langle\ \rangle$. |
Proving that the number of integer solutions of $x^2-Ny^2=1$ is infinite | I will sketch a classical approach.
Lemma 1. If $N$ is a square-free positive integer, the continued fraction of $\alpha=\sqrt{N}$ has the following structure: $\alpha=[M,\overline{s,2M}]$ where $s$ is a palindromic string of positive integers.
I believe this Lemma dates back to Lagrange and Brouncker. Stein outlines a proof here.
This Lemma is essentially equivalent to the fact that the class number of $\mathbb{Q}(\sqrt{N})$ is finite.
By exploiting the structure of continued fractions one may prove that $\frac{p}{q}=[M,s]$ is such that $p^2-Nq^2=1$. As an alternative, one may use the approach shown by Booher at page 8 here. There are an infinite number of convergents since $\sqrt{N}$ is irrational, and only a finite number of choices for $p^2- Nq^2$, so there must be an $r$ with an infinite number of convergents satisfying $p^2 − Nq^2= r$.
There are only a finite number of choices for $(p, q)$ to reduce to modulo $r$, and an infinite number of convergents satisfying the equation, so there are two distinct convergents $(p_0 , q_0 )$ and $(p_1 , q_1 )$ with
$p_0\equiv p_1\pmod{r}$ and $q_0\equiv q_1\pmod{r}$ and $p_0^2-Nq_0^2 =p_1^2-Nq_1^2 = r$. Thus we may consider the ratio
$$ u=\frac{p_0+q_0\sqrt{N}}{p_1+q_1\sqrt{N}}=\frac{p_0p_1-Nq_0q_1+\sqrt{N}(p_1 q_0-p_0 q_1)}{r} $$
and notice that $p_0 p_1 - N q_0 q_1 \equiv p_0^2-Nq_0^2 \equiv 0\pmod{r}$ and $ p_1 q_0 - q_1 p_0 \equiv 0 \pmod{r}$, hence $u$ is of the form $p+q\sqrt{N}$ with $p,q\in\mathbb{Z}$. Since the norm on $\mathbb{Q}(\sqrt{N})$ is multiplicative (aka Brahmagupta's identity), $u$ has norm $1$. Therefore Pell's equation has a non-trivial solution and $\mathbb{Z}[\sqrt{N}]$ has an infinite number of invertible elements.
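For small $N$ the non-trivial solution whose existence is proved above can be found by a naive search (Python; brute force over $q$, so only practical when the fundamental solution is small — the continued-fraction machinery is what makes large cases tractable):

```python
from math import isqrt

def pell(N):
    """Smallest positive (p, q) with p^2 - N q^2 = 1 (N square-free)."""
    q = 1
    while True:
        p2 = N * q * q + 1
        p = isqrt(p2)
        if p * p == p2:
            return p, q
        q += 1

assert pell(2) == (3, 2)
assert pell(7) == (8, 3)
assert pell(13) == (649, 180)
```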