Iterated Derivatives of function combinations? | The Lagrange inversion formula treats $(f^{-1})^{(n)}.$
Faà di Bruno's formula treats $(f\circ g)^{(n)}.$
In a sense, the latter is simpler if the $n$th derivative is $\dfrac{\partial^n}{\partial x_1\,\cdots\,\partial x_n}$ than if it is $\dfrac{d^n}{dx^n}.$
Start with an example:
\begin{align}
& \frac{\partial^3}{\partial x_1\,\partial x_2\,\partial x_3} f(g(x_1,x_2,x_3))\\[10pt]
= {} & f'(g(x_1,x_2,x_3)) \frac{\partial^3}{\partial x_1\,\partial x_2\,\partial x_3} g(x_1,x_2,x_3) & & (\text{with } f') \\[10pt]
& {} + f''( g(x_1,x_2,x_3)) \cdot \left( \begin{array}{r}
\dfrac{\partial^2}{\partial x_1\,\partial x_2} g(x_1,x_2,x_3) \cdot \dfrac{\partial}{\partial x_3} g(x_1,x_2,x_3) \\[5pt]
{} + \dfrac{\partial^2}{\partial x_1\,\partial x_3} g(x_1,x_2,x_3) \cdot \dfrac \partial {\partial x_2} g(x_1,x_2,x_3) \\[5pt]
{} + \dfrac{\partial^2}{\partial x_2\,\partial x_3} g(x_1,x_2,x_3) \cdot \dfrac \partial {\partial x_1} g(x_1,x_2,x_3)
\end{array} \right) & & (\text{with } f'') \\[10pt]
& {} + f'''(g(x_1,x_2,x_3)) \frac \partial {\partial x_1} g(x_1,x_2,x_3) \cdot \frac \partial {\partial x_2} g(x_1,x_2,x_3) \cdot \frac \partial {\partial x_3} g(x_1,x_2,x_3). & & (\text{with } f''')
\end{align}
There is one term for each of the five partitions of the set of three variables $x_1,x_2,x_3.$ The $k$th derivative of $f$ is multiplied by an expression involving all partitions of the set of independent variables into $k$ parts.
Similarly, if we had $\dfrac{\partial^6}{\partial x_1\,\partial x_2\,\partial x_3\,\partial x_4\,\partial x_5\,\partial x_6} f(g(x_1,x_2,x_3,x_4,x_5,x_6)),$ we would have listed all $203$ partitions of the set of six independent variables.
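These counts ($5$ for three variables, $203$ for six) are the Bell numbers $B_3$ and $B_6$. A quick sketch computing them via the Bell triangle (the function name is mine):

```python
def bell(n):
    """Bell number B_n: the number of partitions of an n-element set,
    computed with the Bell triangle."""
    row = [1]
    for _ in range(n):
        new_row = [row[-1]]             # each row starts with the previous row's last entry
        for v in row:
            new_row.append(new_row[-1] + v)
        row = new_row
    return row[0]

assert bell(3) == 5                     # partitions of {x1, x2, x3}
assert bell(6) == 203                   # partitions of {x1, ..., x6}
```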
But what if there's just one independent variable? $\dfrac{d^3}{dx^3} f(g(x)) = \text{what?}$
Then just let all three variables coalesce into one variable called $x:$
\begin{align}
& \frac{d^3}{dx^3} f(g(x)) \\[15pt]
= {} & f'(g(x)) \frac{d^3}{dx^3} g(x) \quad + \quad 3 f''(g(x)) \frac {d^2}{dx^2} g(x) \cdot \frac d {dx} g(x) \quad + \quad f'''(g(x)) \left( \frac d {dx} g(x) \right)^3
\end{align}
(where the three terms involving the second derivative of $f$ have become indistinguishable from each other and hence have become just one term with the coefficient $3$).
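This single-variable identity can be checked symbolically; a sketch with sympy, using the concrete choices $f=\sin$ and $g=\exp$:

```python
import sympy as sp

x = sp.symbols('x')
g = sp.exp(x)                                    # concrete inner function g
# outer function f = sin, with f', f'', f''' evaluated at g(x):
fp, fpp, fppp = sp.cos(g), -sp.sin(g), -sp.cos(g)
g1, g2, g3 = sp.diff(g, x), sp.diff(g, x, 2), sp.diff(g, x, 3)

lhs = sp.diff(sp.sin(g), x, 3)
rhs = fp * g3 + 3 * fpp * g2 * g1 + fppp * g1**3
assert sp.simplify(lhs - rhs) == 0
```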
Faà di Bruno will also treat $(1/f)^{(n)}(x)$ by regarding it as a derivative of a composite function: $ x \mapsto f(x) \mapsto 1/f(x).$ |
Field $K$ with $\operatorname{Gal}(\overline{K}/K)\simeq\widehat{F_2}$ | Yes, there is. In fact, for every projective profinite group $G$, there is a perfect and pseudo-algebraically closed field whose absolute Galois group is $G$.
You may want to check corollary 23.1.2 of Field Arithmetic by Fried and Jarden for a proof. |
Why is $O_{\mathbb{C}_p}/p$ not perfect? | Being perfect would mean that the map $x\mapsto x^p$ is bijective, or in other words, every element would have a unique $p$-th root. Raising to the $p$-th power also raises the (multiplicative) valuation $|x|$ of an element $x$ to the $p$-th power. Now with $0 < c := |p| < 1$, can you see a whole bunch of elements in $O_{\mathbb{C}_p}$ which are nontrivial mod $p$, but whose $p$-th powers become $0$ mod $p$? |
About two measurable set question | $$(A \cup B) \cap A^C=(A\cap A^C)\cup (B \cap A^C)=\emptyset\cup (B\setminus A)=B\setminus A$$ |
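A finite sanity check of the identity (a sketch; the universe and the sets are arbitrary choices):

```python
# Check (A ∪ B) ∩ A^c = B \ A on a small finite universe
U = set(range(10))
A = {1, 2, 3}
B = {3, 4, 5}
A_complement = U - A
assert (A | B) & A_complement == B - A
```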
Decompositions of open sets in $\mathbb{R}^n$ | The first property you ask for is fairly trivial to show; in particular, for any open set $O$ define an equivalence relation $$a\sim b$$ defined by the property that, if $S$ is a set of disjoint open sets covering $O$, then $a\in s\in S$ implies $b\in s$ - that is, $a$ and $b$ are in the same connected component. The equivalence classes of this can easily be shown to be disjoint open sets, whose union is $O$.
The second property is somewhat more difficult, but still doable. One approach is essentially to build an octree as a partition. In particular, let $S$ be the set of intervals of the form
$$\left[\frac{x_1}{2^n},\frac{x_1+1}{2^n}\right)\times \left[\frac{x_2}{2^n},\frac{x_2+1}{2^n}\right)\times \left[\frac{x_3}{2^n},\frac{x_3+1}{2^n}\right)$$
along with the set $\mathbb R^3$ itself. That is, we have axis-aligned cubes whose corners are at "adjacent" dyadic rationals - if we put $8$ of these cubes together, we get a bigger cube which is also in the set. Then, we can quickly come up with a partition $P\subseteq S$ of $O$ - in particular, let $P$ be the set of $s\in S$ such that $s\subseteq O$ but for which there does not exist any $s'\in S$ which (strictly) contains $s$ but is contained in $O$ - that is, $P$ is the set of cubes of maximal size fitting in $O$.
This is a partition since, for any $p\in O$, there is some ball around $p$ contained in $O$ - and therefore, as every point is contained in cubes of arbitrarily small size, there is some $s\in S$ such that $p\in s\subseteq O$. We can then argue that if two cubes $s_1,s_2\in S$ contain $p$, then either $s_1\subseteq s_2$ or $s_2\subseteq s_1$ (because of how the cubes nest) - and thus, the set of cubes containing $p$ forms a chain when ordered by inclusion - and $S$ has the property that the union over such a chain is still in $S$ - therefore, we conclude that there is a maximal cube containing $p$ and contained in $O$, and that this cube is in $P$, thus $P$ is a partition.
We can generalize this argument to say that, if $S$ is a set satisfying:
If $s_1,s_2\in S$ then $s_1\cap s_2$ is either $s_1$, $s_2$, or the empty set.
Every subset $S'\subseteq S$ which is totally ordered by inclusion (i.e. a chain) has the property that the union of all $s\in S'$ is a member of $S$. (Equivalently, every chain in $S$ has a least upper bound when ordered by inclusion.)
For any open set $O$ and $p\in O$, there exists a $s\in S$ which contains $p$ and which is contained wholly within $O$.
then there is a partition $P$ of any open set satisfying $P\subseteq S$. In our particular example, we chose a suitable $S$ composed of convex sets with non-empty interior. |
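A one-dimensional sketch of this maximal-dyadic-interval construction, for the open set $O=(0,3/4)$ (the function names and the depth cutoff are mine):

```python
from fractions import Fraction
from itertools import combinations

def dyadic_intervals(max_depth):
    """All dyadic intervals [k/2^n, (k+1)/2^n) inside [0, 1), up to a depth cutoff."""
    for n in range(max_depth + 1):
        for k in range(2 ** n):
            yield (Fraction(k, 2 ** n), Fraction(k + 1, 2 ** n))

def maximal_partition(a, b, max_depth):
    """Maximal dyadic intervals contained in the open interval O = (a, b)."""
    inside = [iv for iv in dyadic_intervals(max_depth)
              if a < iv[0] and iv[1] <= b]        # [l, r) subset of (a, b)
    return [iv for iv in inside
            if not any(j != iv and j[0] <= iv[0] and iv[1] <= j[1] for j in inside)]

P = maximal_partition(Fraction(0), Fraction(3, 4), 5)
# the maximal intervals are pairwise disjoint, and the largest is [1/2, 3/4)
assert all(p[1] <= q[0] or q[1] <= p[0] for p, q in combinations(P, 2))
assert (Fraction(1, 2), Fraction(3, 4)) in P
```

Here the partition is $[1/2,3/4), [1/4,1/2), [1/8,1/4), \ldots$: cubes of maximal size, shrinking toward the boundary point $0$.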
Slick and fast linear algebra treatment for finite field extensions? | Anthony W. Knapp, Basic Algebra (Digital Second Edition, 2016) is freely downloadable from the link just given, or from Project Euclid; so bulk and cost are not an issue, even though the book also covers many other topics.
However, if it is necessary to print any part of it:
Printing of the files is constrained legally. The reason is that Springer Science+Business Media, Inc. has a stake in printed copies of these books, as is explained in the brief history below, and Springer's permission may be required for printing.
I haven't yet read it myself, but it seems to be well worth a look.
The topic of quotients of vector spaces is treated in section 5 of Chapter II. The restriction to $\mathbb{Q}$, $\mathbb{R}$ and $\mathbb{C}$ in the chapter title does not seem to matter.
The topic of finite fields is treated in section 3 of Chapter IX. |
Can $q^2 \alpha \,(\text{mod } 1)$ be made arbitrarily small? | First of all, the usual phrasing of the Dirichlet approximation theorem replaces "$(\text{mod } 1)$" with $\| \cdot \|_\mathbb{T}$, which gives the distance to the nearest integer (i.e. the distance to $0$ in $\mathbb{T} = \mathbb{R}/\mathbb{Z}$), so $\| 1\frac34 \|_\mathbb{T} = \frac14$, whereas $1\frac34 \pmod{1}$ could mean $\frac34$. This is what is proved using the pigeonhole principle in the Wikipedia article you reference.
We can actually use a similar strategy to the proof of the Dirichlet approximation theorem.
Let $k \in \mathbb{N}$, and consider the sets
$$S_i := \left\{ q \in \{1, ..., n\} \, : \, \left \| \frac{q^2 \alpha}{2} \, \right\|_\mathbb{T} \in \left[\frac{i}{k}, \frac{i+1}{k} \right) \right\}$$
for $i = 0, ..., k-1$.
The sets, $S_i$, partition $\{1, ..., n\}$, so by the pigeonhole principle, there is a set, $S = S_j$, containing at least $\frac{n}{k}$ elements.*
If we take $n$ large enough (i.e. so that $\frac {C}{\log \log n} \leqslant \frac1k$), then Roth’s theorem gives us an arithmetic progression $\{x-d, x, x+d\} \subseteq S$.
Then
\begin{align}
\|d^2 \alpha\|_\mathbb{T} &= \left \|\frac{(x-d)^2 \alpha}{2} + \frac{(x+d)^2 \alpha}{2} - x^2 \alpha \right\|_\mathbb{T} \\
&\leqslant \left \| \frac{(x-d)^2 \alpha}{2} - \frac{x^2 \alpha}{2} \right \|_\mathbb{T} + \left \| \frac{(x+d)^2 \alpha}{2} - \frac{x^2 \alpha}{2} \right \|_\mathbb{T} \\
&\leqslant \frac1k + \frac1k = \frac2k
\end{align}
and so
$$\min_{1 \leqslant q \leqslant n} \| q^2 \alpha \|_\mathbb{T} \leqslant \frac2k$$
Since $k$ was arbitrary, we are done.
Note that $n$ only needed to be large enough in terms of $k$ - it was chosen independently of $\alpha$, so convergence is indeed uniform in $\alpha$.
*In the proof of the Dirichlet approximation theorem, at this point, as long as $n > k$, we have two distinct elements $q_1\alpha$, $q_2\alpha$ that are at most $2/k$ apart, modulo 1. Taking their difference, $Q = q_2 - q_1$, then gives $\|Q \alpha\|_{\mathbb T} \leqslant 2/k$ and $1 \leqslant Q \leqslant k+1$.
However, we can't quite do the same thing in this case. If we set $Q = q_1^2 - q_2^2$, then whilst $\|Q \alpha\|_{\mathbb T} \leqslant 2/k$, $Q$ need not be a square, so is not in the required form. This is why we need to use Roth's theorem - if we have three squares in arithmetic progression which are close together modulo 1, we can form a new square which is close to zero, modulo 1. |
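A quick numerical illustration (not part of the proof): for $\alpha=\sqrt2$, the minimum of $\|q^2\alpha\|_{\mathbb T}$ over $1\leqslant q\leqslant n$ shrinks as $n$ grows.

```python
import math

def min_square_dist(alpha, n):
    """min over 1 <= q <= n of || q^2 * alpha ||, the distance to the nearest integer."""
    best = 1.0
    for q in range(1, n + 1):
        frac = (q * q * alpha) % 1.0
        best = min(best, frac, 1.0 - frac)
    return best

alpha = math.sqrt(2.0)
# the minimum is small and can only decrease as the range grows
assert min_square_dist(alpha, 5000) <= min_square_dist(alpha, 1000) < 0.05
```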
Axis Angle to quaternion and quaternion to Axis angle question | Right off the bat, a reality check shows something amiss here.
Given ANY axis U = (xi, yj, zk) and a rotational angle alpha, the w (i.e., real) value of the resulting quaternion should be cos(alpha/2). But if alpha is 45 degrees, then w = cos(pi/8.0) should be around 0.924... and not somewhere about 0.963..., as you seem to expect when 'correctly' calculating ccc=quat([.3 .1 .6], 45).
Your error is that the axis rotation quaternion given axis U and rotation angle alpha assumes that U is already normalized (has length 1.0). So your formula should (in an ideal world, without any round-off error) be:
norm(U)*sin(alpha/2) + cos(alpha/2).
where norm(U) is U, 'normalized' to have length 1. What you have coded up is instead
norm(U*sin(alpha/2) + cos(alpha/2)).
As a test, try using your existing code to evaluate ccc = quat([300, 100, 600], 45). It will be much more 'off' than your example (where U is 'relatively' close to being normalized).
That should make sense: the quaternion that rotates 45 degrees around the axis (.3, .1, .6) should be exactly the same quaternion that rotates 45 degrees around the axis (300, 100, 600). In both cases, it's really the same axis: the latter is just a 'longer version' of the former.
Since we are not in an idealized world lacking round-off error, what you want to do is first, normalize the axis of rotation U; then construct the quaternion, and then normalize the quaternion:
norm(norm(U)*sin(alpha/2) + cos(alpha/2))
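Following this recipe (normalize the axis first, then build the quaternion), here is a sketch in Python; the original snippet's language is not identified, and the function name is mine:

```python
import math

def axis_angle_to_quat(axis, angle_deg):
    """Quaternion (w, x, y, z) for a rotation of angle_deg degrees about `axis`.

    The axis is normalized first, since the formula
    cos(alpha/2) + sin(alpha/2) * U requires a unit axis U."""
    x, y, z = axis
    n = math.sqrt(x * x + y * y + z * z)
    if n == 0.0:
        raise ValueError("rotation axis must be nonzero")
    half = math.radians(angle_deg) / 2.0
    s = math.sin(half) / n          # sin(alpha/2) applied to the normalized axis
    return (math.cos(half), x * s, y * s, z * s)

# (300, 100, 600) is just a longer version of (.3, .1, .6), so the results agree:
q1 = axis_angle_to_quat((0.3, 0.1, 0.6), 45.0)
q2 = axis_angle_to_quat((300.0, 100.0, 600.0), 45.0)
assert all(abs(a - b) < 1e-12 for a, b in zip(q1, q2))
assert abs(q1[0] - math.cos(math.pi / 8)) < 1e-12   # w = cos(22.5 degrees)
```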
BTW, while it's quite readable, I don't recognize the syntax of the language you are working in. What language is it? |
Continuous linear functional and weak convergence | Yes, $\varphi$ can be approximated by such sums.
First of all, $\varphi(f)=\int_{[0,1]}f\,d\mu,$ where $\mu\in\mathscr M[0,1]$ - the set of signed or complex Borel measures on $[0,1]$.
Next, let $n\in\mathbb N$, $\,P=\Big\{t_k=\frac{k}{n}: k=0,1,\ldots,n\Big\}$, and
$$
w_k=\mu\big((t_{k-1},t_k]\big),\,\, k=1,\ldots,n.
$$
Then
$$
\varphi(f)-\sum_{k=1}^n f(t_k)w_k=\varphi(f)-\varphi_n(f)=\sum_{k=1}^n
\int_{(t_{k-1},t_k]} \big(f(x)-f(t_k)\big)\,d\mu(x),
$$
and due to the uniform continuity of $f$, for every $\varepsilon>0$
there exists a sufficiently large $n$ such that
$$
\lvert \varphi(f)-\varphi_n(f)\rvert \le\varepsilon \|\mu\|.
$$
This shows that $\varphi_n\to\varphi$, weakly. |
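A concrete instance of this approximation, taking $\mu$ to be Lebesgue measure (so every weight $w_k = 1/n$) and $f=\cos$; a numerical sketch, not part of the proof:

```python
import math

def phi_n(f, n):
    """phi_n(f) = sum_k f(t_k) * w_k with t_k = k/n and w_k = mu((t_{k-1}, t_k]).

    For mu = Lebesgue measure on [0, 1], every weight w_k equals 1/n."""
    return sum(f(k / n) / n for k in range(1, n + 1))

exact = math.sin(1.0)                    # integral of cos over [0, 1]
err_coarse = abs(phi_n(math.cos, 100) - exact)
err_fine = abs(phi_n(math.cos, 10000) - exact)
assert err_fine < err_coarse < 1e-2      # phi_n(f) -> phi(f) as n grows
```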
Why these conclusions from the definition? Absolute value | There are two possible cases:
Case $1$:
$x \geq 0 \Rightarrow |x| = x$
Case $2$:
$x < 0 \Rightarrow -x > 0 \Rightarrow |x|=-x>x$
Therefore, by 'exhaustion' of all cases, $|x| \geq x$ |
Give an example of R over A so that: symmetric and transitive but not reflexive | Your answer is not correct because it is not symmetric: $(1,3)$ is in $S$ but $(3,1)$ isn’t. A correct answer is the empty relation $S=\varnothing$: symmetry ($(a,b)\in S$ implies $(b,a)\in S$) is vacuously true, transitivity holds by the same vacuity, and $S$ is not reflexive as long as $A$ is non-empty. You’re correct in saying that this question has many, many different solutions, however. |
Difference between $C^k$s and $C^\infty$ | There are huge differences between $C^0$ and $C^1$; from the perspective of manifolds and maps between them, there is not much difference between any of the $C^k$ with $1 \leq k \leq \infty$. I will be precise about how.
1) Every $C^k$ atlas on a manifold $M$ has a subordinate $C^\infty$ atlas. Given a $C^k$ structure on a manifold $M$, any two compatible smooth structures on $M$ are $C^\infty$ diffeomorphic. So from the perspective of classification of manifolds, there is no difference between $C^k$ and $C^\infty$, $k>0$.
2) The spaces of maps $C^k(M,N)$ (take $M$ to be compact, and use the Whitney topology) are homotopy equivalent for all $k$. This is more or less because of the smooth approximation theorem.
3) For $\infty \geq \ell \geq k > 0$, the inclusion $\text{Diff}^\ell(M) \hookrightarrow \text{Diff}^k(M)$ is a homotopy equivalence. This also follows from smooth approximation theorems; the point is that $\text{Diff}^k \cap C^\ell = \text{Diff}^\ell$ is open in $C^\ell$. This is the strongest form of 1) (especially if you replace this by $\text{Diff}^k(M,N)$); not only does it say that $C^\ell$ manifolds $M$ and $N$ are $C^k$ diffeomorphic iff they're $C^\ell$ diffeomorphic, it says that their spaces of symmetries are homotopy equivalent (which in my eyes means 'pretty much the same'!) |
Checking reflexive, symmetric and transitive properties of $\neq$ on $\mathbb{N}$ | A simple counter example will suffice. Take $x=z=1$, $y=2$. Then $x \neq y$ and $y\neq z$, but $x = z$.
As to your last question, we use $x,y \in \mathbb{N}$ to mean that both $x$ and $y$ are natural numbers. This differs from $(x, y) \in \mathbb{N^2}$ where $(x,y)$ is an ordered pair. |
Compute the following real integral using the residue theorem | Complex plane
In the complex plane we use the function
$$
f(z) = \frac{1}{z^{2}-2z+4}
$$
Find poles
Where does $z^{2}-2z+4=0$? When
$$
z = 1 \pm i\sqrt{3}
$$
Contour
The contour is the closed semicircle in the upper half-plane: the segment $\Omega_{R} = [-R, R]$ along the real axis together with the arc $\Gamma_{R}$ of radius $R$. Of the two poles, only $z_{1} = 1 + i\sqrt{3}$ lies inside this contour.
Jordan lemma
$$
\oint f(z)\, dz = \lim_{R\to\infty} \left(
\int_{\Gamma_{R}} f(z)\, dz +
\int_{\Omega_{R}} f(z)\, dz
\right)
$$
Therefore,
$$
\int_{-\infty}^{\infty} f(x)\, dx =
\oint f(z)\, dz
- \lim_{R\to\infty} \left(
\int_{\Gamma_{R}} f(z)\, dz
\right)
$$
Because
$$
\lim_{z\to\infty} |z\, f(z)| = \lim_{z\to\infty} \Bigg|\frac{z}{z^{2}-2z+4}\Bigg| = 0
$$
$$
\lim_{R\to\infty} \left(
\int_{\Gamma_{R}} f(z)\, dz
\right) = 0
$$
Residue theorem
$$
\oint f(z)\, dz = 2\pi i \sum_{k} \operatorname{Res} f(z_{k})
$$
where the points $z_{k}$ are the poles enclosed by the contour. In this instance, there is a single pole at
$$z_{1} = 1+i\sqrt{3}$$
The Laurent expansion about this point is
$$
f(z) =
-\frac{i}{2 \sqrt{3}}\, \frac{1}{z - z_{1}} +
\frac{1}{12} +
\frac{i}{24 \sqrt{3}} \left(z - z_{1}\right) +
\mathcal{O}\left(\left(z - z_{1}\right)^{2}\right)
$$
$$
\oint f(z)\, dz = 2\pi i \left( -\frac{i}{2 \sqrt{3}} \right) = \frac{\pi}{\sqrt{3}}
$$
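As a cross-check, sympy confirms the value of the integral directly (a sketch, independent of the contour argument):

```python
import sympy as sp

x = sp.symbols('x', real=True)
# the real integral computed above by the residue theorem
val = sp.integrate(1 / (x**2 - 2*x + 4), (x, -sp.oo, sp.oo))
assert sp.simplify(val - sp.pi / sp.sqrt(3)) == 0
```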
Conclusion
$$
\int_{-\infty}^{\infty} \frac{dx}{ x^{2}-2x+4 } = \frac{\pi}{\sqrt{3}}
$$ |
Recover $f$ if we know that $\frac{d}{dx} \log f(x)$ and $f(x) \to 0$ as $x \to \infty$ | Given one of your functions $f$, we can always write it as $f(x) = e^{g(x)}$ where $f$ and $g$ determine each other uniquely. The condition $\lim_{x \to \infty} f(x) = 0$ translates to $\lim_{x \to \infty} g(x) = -\infty$. Also, $\frac{d}{dx} \log(f(x)) = g'(x)$, so knowing the logarithmic derivative of $f$ equates to knowing the ordinary derivative of $g$.
Having made this translation, we see that the basic issue is that, given a function $g(x)$ with $\lim_{x \to \infty} g(x) = -\infty$, we cannot recover $g(x)$ from $g'(x)$. For example $g(x)=-x+c$ has $g'(x) = -1$ and $\lim_{x \to \infty} g(x) = -\infty$, regardless of the value of the constant $c$. Translating back to $f$, we can say that $f(x) = k e^{-x}$ has logarithmic derivative $-1$ and $\lim_{x \to \infty} f(x) = 0$, regardless of the value of the constant $k$. |
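A symbolic check that the constant is invisible to the logarithmic derivative (a sketch using sympy):

```python
import sympy as sp

x, k = sp.symbols('x k', positive=True)
f = k * sp.exp(-x)
# d/dx log(f) = d/dx (log k - x) = -1, independent of k
assert sp.simplify(sp.diff(sp.log(f), x)) == -1
# yet f satisfies the decay condition for every k
assert sp.limit(f, x, sp.oo) == 0
```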
Given sequence of nested intervals $I_n[a_n,b_n]$. Show that if $y=\inf\{b_{1},b_{2},b_{3},\ldots\}$ then $y\in[a_{n},b_{n}]$ $\forall n$ | You have made slight errors in the inequalities in both the cases.
If $k\geq n$, then the nested property gives $[a_n, b_n]\supseteq [a_k, b_k]$ so $a_n\leq a_k\leq b_k$ and therefore $a_n\leq b_k$.
If $k<n$, then $[a_k, b_k]\supseteq [a_n, b_n]$ so $a_n\leq b_n\leq b_k$ and so $a_n\leq b_k$ again. |
Indefinite integral of a rational function with linear denominator: $ \int \frac{ x^7}{(x+1)}{dx} $ | Let
$$I=\int\frac{x^7}{x+1}dx$$
Substitute $y = x+1$ to get
$$I=\int\frac{(y-1)^7}{y}dy$$
Using the binomial theorem
$$I=\int\frac{1}{y}\sum_{k=0}^{7}\binom{7}{k}y^k(-1)^{7-k}dy$$
$$=\int\sum_{k=0}^{7}\binom{7}{k}y^{k-1}(-1)^{7-k}dy$$
$$=\sum_{k=0}^{7}\int\binom{7}{k}y^{k-1}(-1)^{7-k}dy$$
$$=C-\ln|y|+\sum_{k=1}^{7}\frac{1}{k}\binom{7}{k}y^k(-1)^{7-k}$$
$$=C-\ln|x+1|+\sum_{k=1}^{7}\frac{1}{k}\binom{7}{k}(x+1)^k(-1)^{7-k}$$
At which point you have by most standards a satisfactory answer.
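The result can be verified by differentiation; a sketch using sympy (restricting to $x>0$ to avoid absolute values):

```python
import sympy as sp

x = sp.symbols('x', positive=True)      # x > 0, so |x + 1| = x + 1
F = -sp.log(x + 1) + sum(
    sp.Rational(1, k) * sp.binomial(7, k) * (x + 1)**k * (-1)**(7 - k)
    for k in range(1, 8)
)
# differentiating the antiderivative should recover the integrand
assert sp.simplify(sp.diff(F, x) - x**7 / (x + 1)) == 0
```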
EDIT
If you teacher did some geometric progression stuff this might have been what he did:
$$I=\int\frac{x^7}{x+1}dx$$
$$=\int\frac{y^7}{1-y}dy\qquad(x\to -y)$$
$$=-\int\frac{-1+1-y^7}{1-y}dy$$
$$=\int\frac{1}{1-y}dy-\int\frac{1-y^7}{1-y}dy$$
$$=-\ln|1-y|-\int\left(\sum_{n=0}^6y^n\right)dy$$
$$=-\ln|1-y|-\sum_{n=0}^6\int y^ndy$$
$$=-\ln|1-y|-\sum_{n=1}^7\frac{y^n}{n}$$
$$=-\ln|1+x|-\sum_{n=1}^7\frac{(-x)^n}{n}$$
That method there makes use of the partial sum formula for a geometric series. |
$\vDash (\forall x \forall y \forall z (R(x,y) \land R(y,z)\rightarrow R(x,z)))\land (\forall x \exists y R(x,y)) \rightarrow \exists x R(x,x) $ | First, some intuitions. The hypothesis $\forall x \forall y \forall z (R(x,y) \land R(y,z) \to R(x,z))$ means that $R$ represents a binary relation that is transitive. A typical example of a transitive relation is the strict order relation $<$.
The hypothesis $\forall x \exists y R(x,y)$ means that such a transitive relation $R$ is "unbounded upwards". If you keep in mind the intuition of $<$, this condition is satisfied by the strict order relation $<$ over an infinite totally ordered set without greatest element, for instance $\mathbb{N}$.
But the strict order relation $<$ is not reflexive, i.e. it does not satisfy the conclusion $\exists x R(x,x)$. This should convince you that the formula $\big( \forall x \forall y \forall z (R(x,y) \land R(y,z) \to R(x,z)) \land \forall x \exists y R(x,y) \big) \to \exists x R(x,x)$ is not valid, i.e. there is a structure that does not satisfy this formula.
Formally, let $\mathcal{N} = (\mathbb{N}, <)$ where the set $\mathbb{N}$ of natural numbers is the domain of $\mathcal{N}$ and the usual strict order relation over $\mathbb{N}$ is the interpretation in $\mathcal{N}$ of the binary symbol $R$. As we have seen above, $\mathcal{N} \not \vDash \big(\forall x \forall y \forall z (R(x,y) \land R(y,z) \to R(x,z)) \land \forall x \exists y R(x,y) \big) \to \exists x R(x,x)$. |
Positive and negative parts of a standard normal random variable | It does not matter which definition you use so long as the reader is clear. Similarly with the complex number $2+3i$, you could call the imaginary part $3i$ or $3$ but you need to be clear which you are using.
It may be more convenient in this case to have $X^-$ having the same distribution as $X^+$, which would justify using the non-negative version and $X=X^+ - X^-$
In either definition $X^+ \not=0 \implies X^- =0$ and $X^- \not=0 \implies X^+ =0$ and the product is always $0$
So $\mathbb P(X^+X^-=0)=1$ and $\mathbb E[X^+X^-]=0$ |
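A quick empirical check of these identities, using the non-negative version of $X^-$ (the sample values and seed are arbitrary):

```python
import random

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(1000)]
plus = [max(x, 0.0) for x in xs]        # X^+ = max(X, 0)
minus = [max(-x, 0.0) for x in xs]      # X^- = max(-X, 0), non-negative version
assert all(p * m == 0.0 for p, m in zip(plus, minus))        # X^+ X^- = 0 always
assert all(p - m == x for p, m, x in zip(plus, minus, xs))   # X = X^+ - X^-
```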
Transformation matrix from principal angles and vectors | I think I have worked it out. Start from two $n\times2$ matrices, $A$ and $B$, each with orthonormal columns (basis vectors of the two planes). Obtain the principal angles ($\theta_1, \theta_2$) and vectors ($\vec{u}_1, \vec{u}_2, \vec{v}_1, \vec{v}_2$) through SVD of $A^TB$. The vectors are the columns of the matrices $U$ and $V$ respectively (see this PDF).
Now, each pair $\vec{u}_i,\vec{v}_i$ forms a plane of rotation, corresponding to the angle $\theta_i$. Since all these planes are mutually orthogonal, in the subspace spanned by $\vec{u}_1,\vec{u}_2,\vec{v}_1,\vec{v}_2$ the rotation matrix is simple to write, but we must have an orthonormal basis for this subspace. $\vec{u}_i$ and $\vec{v}_i$ are in general not orthogonal (they form an angle $\theta_i$), but we can get a vector orthogonal to $\vec{v}_i$ in the same plane:
$$\vec{w}_i = \frac{\vec{u}_i-\cos\theta_i\vec{v}_i}{\sin\theta_i}$$
and now the matrix $X$ formed by the columns $\vec{w}_1,\vec{v}_1,\vec{w}_2,\vec{v}_2$ is an orthonormal basis for this subspace, where the transformation matrix is:
$$M = \begin{pmatrix}
\cos\theta_1 &-\sin\theta_1 & 0 & 0 \\
\sin\theta_1 & \cos\theta_1 & 0 & 0 \\
0 & 0 & \cos\theta_2 &-\sin\theta_2 \\
0 & 0 & \sin\theta_2 & \cos\theta_2 \\
\end{pmatrix}$$
Therefore, $XMX^T$ transforms $U$ directly into $V$. But it also projects out all components orthogonal to the planes of rotation, and I want to keep them intact. These orthogonal components can be obtained with $I-XX^T$, so I need: $R=I-XX^T+XMX^T = I + X(M-I)X^T$.
$M$ is $4\times4$, $X$ is $n\times4$, but $R$ is $n\times n$, and I don't want to store that if $n\gg2$, so to transform an arbitrary vector $\vec{c}$, I do:
$$\begin{align}
M'&=(M-I)X^T\\
\vec{c}' &= M'\vec{c} \\
R\vec{c} &= \vec{c} + X\vec{c}'
\end{align}$$
and I only need to store $M'$ and $X$ (of $n\times 4$ size).
Alright, this transforms the subspace spanned by $A$ into the subspace spanned by $B$, leaving the orthogonal complement unaltered. But it doesn't transform $A$ into $B$ (it transforms $U$ into $V$). There is an additional internal rotation within the subspace needed for that. This can be obtained by noting that $A$ will be transformed into $VU^TA$, so the additional rotation to bring this into $B$ is:
$$P=B(VU^TA)^T=BA^TUV^T$$
But this is an $n\times n$ matrix. It's easier to work in $V$ basis, where $B$ is $V^TB$ and $A$ transforms into $U^TA$, therefore:
$$P_V=V^TBA^TU$$
which is a $2\times2$ matrix. Analogously to the above, this additional internal rotation becomes:
$$R^I=I+V(P_V-I)V^T$$
But $V$ is contained in $X$, so it is trivial to express $P$ in the $X$ basis:
$$P_X=\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & P_V(1,1) & 0 & P_V(1,2) \\
0 & 0 & 1 & 0 \\
0 & P_V(2,1) & 0 & P_V(2,2) \\
\end{pmatrix}$$
and
$$R^I=I+X(P_X-I)X^T$$
To fully transform any vector in the $n$-dimensional space, the two transformations $R$ and $R^I$ can be applied consecutively, or the full transformation:
$$\begin{align}
R^F&=R^IR=(I+X(P_X-I)X^T)(I+X(M-I)X^T) \\
&=I+X(P_X-I+M-I+(P_X-I)(M-I))X^T \\
&=I+X(P_XM-I)X^T \\
&=I+X(F-I)X^T \\
F&=P_XM
\end{align}$$
which should give $R^FA=B$, and $R^F\vec{d}=\vec{d}$ if $\vec{d}$ is orthogonal to $A$ and $B$.
I'm sure this can be simplified, but hope it will be useful anyway. It is straightforward to extend this to $m$-dimensional subspaces, by using $m$ and $2m$ instead of 2 and 4. |
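A numerical sketch of the whole construction with numpy (the function name and test vectors are mine; it assumes both principal angles lie strictly in $(0,\pi/2)$, so the $\vec w_i$ are defined):

```python
import numpy as np

def subspace_rotation(A, B):
    """Rotation R with R @ A = B, acting as the identity on the orthogonal complement.

    A, B: n x 2 matrices with orthonormal columns; assumes both principal
    angles lie strictly between 0 and pi/2."""
    U2, c, V2t = np.linalg.svd(A.T @ B)           # cos(theta_i) = singular values
    s = np.sqrt(1.0 - np.clip(c, -1.0, 1.0) ** 2)
    U = A @ U2                                     # principal vectors u_i
    V = B @ V2t.T                                  # principal vectors v_i
    W = (U - V * c) / s                            # w_i = (u_i - cos(t_i) v_i)/sin(t_i)
    X = np.column_stack([W[:, 0], V[:, 0], W[:, 1], V[:, 1]])
    M = np.zeros((4, 4))
    M[:2, :2] = [[c[0], -s[0]], [s[0], c[0]]]      # rotation by theta_1 in (w1, v1)
    M[2:, 2:] = [[c[1], -s[1]], [s[1], c[1]]]      # rotation by theta_2 in (w2, v2)
    PV = V.T @ B @ A.T @ U                         # internal rotation P_V
    PX = np.eye(4)
    PX[np.ix_([1, 3], [1, 3])] = PV                # embed P_V on the v-coordinates
    F = PX @ M
    return np.eye(A.shape[0]) + X @ (F - np.eye(4)) @ X.T

# two 2-planes in R^5 with principal angles 0.7 and 1.1; B is also rotated internally
e = np.eye(5)
t1, t2, phi = 0.7, 1.1, 0.4
A = e[:, :2]
B0 = np.column_stack([np.cos(t1) * e[:, 0] + np.sin(t1) * e[:, 2],
                      np.cos(t2) * e[:, 1] + np.sin(t2) * e[:, 3]])
G = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])
B = B0 @ G

R = subspace_rotation(A, B)
assert np.allclose(R @ A, B)                       # maps A exactly onto B
assert np.allclose(R.T @ R, np.eye(5))             # R is orthogonal
assert np.allclose(R @ e[:, 4], e[:, 4])           # fixes the orthogonal complement
```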
Centre of the algebra $\mathbb{Z} [\hat{W}]$ of the affine Weyl group | I prefer thinking of the extended affine Weyl group in terms of affine transformations, rather than an abstract semidirect product. For any $\lambda \in P$, let $T_\lambda \colon P \to P$ be translation by $\lambda$, so that $T_\lambda(\mu) = \lambda + \mu$. This embeds the additive group $P$ inside the group $\operatorname{Aff}(P)$ of invertible affine transformations of $P$. The Weyl group $W$ sits naturally inside $\operatorname{Aff}(P)$ as a set of linear transformations, then $\hat{W}$ is the group generated by $P$ and $W$.
This makes the semidirect structure natural: since $w \in W$ is a linear transformation it is easy to see that $w T_\lambda = T_{w \lambda} w$, leading to the multiplication rule
$$ (T_\lambda w)(T_\mu x) = T_{\lambda + w \mu} wx, \quad \text{or } (\lambda, w)(\mu, x) = (\lambda + w\mu, wx).$$
The inversion rule is also straightforward to deduce:
$$ (T_\lambda w)^{-1} = w^{-1} T_{-\lambda} = T_{- w^{-1} \lambda} w, \quad \text{or } (\lambda, w)^{-1} = (- w^{-1} \lambda, w^{-1}).$$
I bring this up because I think the formula for the inverse in the conjugation in the question is incorrect, since it only has $-\lambda$ rather than $-w^{-1} \lambda$.
First let's see that elements of $\mathbb{Z}[P]^W$ are central in the group ring $\mathbb{Z}[\hat{W}]$. Let $\sum_{\lambda \in P} a_\lambda e^\lambda \in \mathbb{Z}[P]^W$ be any element; we must have that $a_\lambda$ is finitely supported, and that $a_\lambda = a_{w \lambda}$ for all $w \in W$. Insert this into the group ring $\mathbb{Z}[\hat{W}]$ to get $\sum_{\lambda} a_\lambda (T_\lambda)$. Now for a general element $T_\mu w$ we have
$$ T_\mu w \sum_\lambda a_\lambda (T_\lambda) = \sum_\lambda a_\lambda (T_{\mu + w\lambda} w) = \sum_\nu a_{w^{-1} \nu} (T_\nu T_\mu w) = \left(\sum_\nu a_\nu (T_\nu)\right) T_\mu w$$
where we used the substitution $\nu = w \lambda$ at the second equality, and the fact that $a_\lambda = a_{w \lambda}$ for the third. So these elements are indeed central.
Next, why is any central element of this form? Suppose $\sum_{\lambda, w} a(\lambda, w) (T_\lambda w)$ is central in $\mathbb{Z}[\hat{W}]$. To commute with a translation $T_\mu$, we must have
$$ \sum_{\lambda, w} a(\lambda, w) (T_\lambda w) = T_\mu \sum_{\lambda, w} a(\lambda, w) (T_\lambda w) T_{- \mu} = \sum_{\lambda, w} a(\lambda, w) (T_{\lambda + \mu - w \mu} w),$$
and hence $a(\lambda, w) = a(\lambda + \mu - w \mu, w)$ for all $\mu \in P$. If $w$ is not the identity then we can take some $\mu$ which is moved by $w$ so that $(1 - w)\mu \neq 0$. The same is true of $2 \mu$, $3 \mu$, and so on, so we get a chain of equalities
$$ a(\lambda, w) = a(\lambda + (1 - w)\mu, w) = a(\lambda + 2(1 - w)\mu, w) = \cdots$$
and since the coefficients $a(-, -)$ are finitely supported, sufficiently far out we find a zero, and so all these coefficients are zero. Therefore we have that $a(\lambda, w) \neq 0$ only for $w = \operatorname{id}$, so a central element must be of the form $\sum_\lambda a_\lambda T_\lambda$. You can easily check that requiring this sum to commute with the elements of $W$ forces the condition $a_\lambda = a_{w \lambda}$ for all $w \in W$, and so we are done. |
Limit point compactness | Suppose that $A$ is not closed. In particular $A$ is infinite, since finite sets in metric spaces are closed. Since $A$ is not closed, there is a point $x\in M$ that is a limit point of $A$ but is not contained in $A$. This means there exists an infinite subset $S=\{x_1,x_2,\ldots\}$ of $A$, with $x_i\neq x_j$ whenever $i\neq j$, such that $x_n\to x$ but $x\notin A$. I claim that $x$ is the only limit point of $S$. Can you prove this? (You have two options: either prove that if $y$ is a limit point of $S$, then $x=y$; or that if $x\neq y$, then $y$ is not a limit point of $S$.)
Note that if $y$ was a limit point of $S$, for each $\varepsilon >0$ there'd exist infinitely many $x_i\in B(y,\varepsilon)$. I claim we can construct a subsequence of $(x_n)_{n\geqslant 1}$ that converges to $y$. Indeed, taking $\varepsilon=1$ there exists a least $n_1$ such that $x_{n_1}\in B(y,1)$. Removing the finite subset $S_1=\{x_1,\ldots,x_{n_1}\}$ from $S$ doesn't affect its limit points. Thus, we can find a least $n_2$ such that $x_{n_2}\in B(y,1/2)\cap (S-S_1)$. Note that since $x_{n_2}\in B(y,1)$, $n_1<n_2$. Continuing, we produce a sequence $n_1<n_2<n_3<\cdots$ for which $d(y,x_{n_k})<k^{-1}$. Evidently then $x_{n_k}\to y$. Since $x_n\to x$, $x_{n_k}\to x$ too, so $x=y$, as we wanted. |
Lipschitz continuity and integration. | You need to prove
$$
(b-c)^2 + (a-c)^2 \le (b-a)^2.
$$
A bit of algebra reduces this to
$$
c^2-bc-ac+ab \le 0.
$$
Factor by grouping:
$$
(c-a)(c-b)\le 0.
$$
This just says $c$ is between $a$ and $b$. It says that regardless of whether $a\le b$ or $b\le a$. |
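The factoring can be double-checked symbolically (a sketch; note the reduction in the text silently divides out a factor of $2$, which is harmless for the inequality):

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
lhs = (b - c)**2 + (a - c)**2 - (b - a)**2
# equals 2(c^2 - bc - ac + ab) = 2(c - a)(c - b)
assert sp.expand(lhs - 2 * (c - a) * (c - b)) == 0
```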
Function (set theory). | You mean a proper inclusion. Consider $A = B = \text{real numbers}$, $f(x) = x^{2}$, and $Y = B$.
We have that $f(f^{-1}(Y))$ only contains non-negative numbers, hence it is properly contained in $Y = B$.
More generally, any non-surjective $f$ will do, as $f(f^{-1}(B)) \subseteq f(A)$ will be properly contained in $B$.
As noted in a comment, you mean $f^{-1}(Y)$. |
Uniform integrability and weak L1 convergence | Actually, I made a mistake: The question is not an exercise in the book, but rather a complement and a rather famous result in functional analysis, known as the Dunford-Pettis theorem (see Uniform Integrability Wiki). The proof can be found in several textbooks and in a short research note here. |
Definition of Negative Integral - Apostol Calculus Vol.1 | If $f|_{[a,b]}$ is Riemann-integrable and such that $(\forall x\in[a,b]):f(x)\geqslant0$, then $\int_a^bf(x)\,\mathrm dx$ means the area below the graph of $f$ (and above the $x$-axis). Then, if $c\in[a,b]$, we have$$\int_a^bf(x)\,\mathrm dx=\int_a^cf(x)\,\mathrm dx+\int_c^bf(x)\,\mathrm dx.\tag1$$The definition made in Apostol's textbook is done so that $(1)$ always holds, even if $c<a$ or $c>b$. It has nothing to do with the computation of areas. |
Independence of two Brownian motions | By the scaling property, we have $B_s \stackrel{d}{=} \sqrt{s} B_1$ which implies
$$\mathbb{E}(|B_s|) = \sqrt{s} \mathbb{E}(|B_1|).$$
Note that $\mathbb{E}(|B_1|)$ is finite since $B_1$ is Gaussian. Consequently,
$$\mathbb{E} \left( \int_0^t \frac{|B_s|}{s} \, ds \right) = \int_0^t \frac{\mathbb{E}(|B_s|)}{s} \, ds = \mathbb{E}(|B_1|) \int_0^t \frac{1}{\sqrt{s}} \, ds < \infty.$$ |
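A Monte Carlo sanity check of the scaling identity $\mathbb{E}|B_s| = \sqrt{s}\,\mathbb{E}|B_1|$, using $\mathbb{E}|B_1| = \sqrt{2/\pi}$ (the sample size and seed are arbitrary):

```python
import math
import random

random.seed(1)
s, N = 2.5, 200_000
# B_s ~ N(0, s), so |B_s| can be sampled directly
est = sum(abs(random.gauss(0.0, math.sqrt(s))) for _ in range(N)) / N
exact = math.sqrt(2.0 * s / math.pi)   # sqrt(s) * E|B_1|, with E|B_1| = sqrt(2/pi)
assert abs(est - exact) < 0.02
```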
Identity matrix properties | Yes. It's due to the fact that matrix multiplication is associative. So
$${\bf a^* I a} = {\bf a^* (I a)} = {\bf a^*(a)} = {\bf a^*a}$$ |
On finite solvable groups | Here is an outline for a proof. To prove this, we can apply the following
Lemma: Suppose $G$ is a finite nonabelian group where intersections of distinct maximal subgroups are trivial. Then $G$ is not simple.
I will not prove the lemma here. Let $G$ be a non-nilpotent finite group such that every proper subgroup of $G$ is nilpotent. A group $G$ with this property is often called a minimal non-nilpotent group.
To prove that $G$ is solvable, it suffices to show that $G$ is not simple (prove this). Hoping to find a contradiction, we assume that $G$ is simple. We can see that there are at least two maximal subgroups of $G$, let $D = K \cap L$ be an intersection of two maximal subgroups $K \neq L$ such that $D$ has largest possible order.
Next you should show that $D$ is normal in $G$. Since $G$ is simple, this implies that $D$ is trivial. But this is in contradiction with the lemma, so $G$ cannot be simple.
To prove that $D$ is normal, show $K$ is the only maximal subgroup containing $N_K(D)$ and $L$ is the only maximal subgroup containing $N_L(D)$. Hence $N_G(D)$ is not contained in a maximal subgroup and $N_G(D) = G$ since $G$ is finite.
This proof is the one given in Derek Robinson's group theory book, which contains a detailed proof (the lemma is also proven) and a couple of more facts about "minimal non-nilpotent groups". For example, with a little more work you can show that the order of $G$ has exactly two prime divisors and that exactly one Sylow subgroup of $G$ is normal. These results were first proven by O. J. Schmidt in 1929. |
How do i prove that a dilation(?) of a Borel set is a Borel set? | Use the usual sigma-algebra trick. Show that the set
$ \{ A \in \mathscr{B}_{\mathbb{R}^n \setminus 0} | \mathbb{R}^+ \cdot A \in \mathscr{B}_{\mathbb{R}^n\setminus 0 } \}$ is a sigma-algebra.
Then, since the open sets are contained in this sigma-algebra, it has to be the entire Borel sigma-algebra. |
Einstein Summation Convention Minkowski Metric | The book you are using employs the Einstein summation convention. This means that something of the form $f_{\mu}g^{\mu}$ is actually the same as $\sum\limits_{\mu=0}^{3}f_{\mu}g^{\mu}$. So, in $\eta_{\mu \nu} (\Delta x^\mu)(\Delta x^\nu)$, everything is being written as a scalar. In the Minkowski metric, this yields the summation you have written above.
For the second part, writing $x'=\Lambda x$ (regular matrix times vector multiplication) out component-wise yields $$x'^{\mu'}=(\Lambda x)^{\mu'}=\sum_{\nu} \Lambda^{\mu'}_{\nu}x^{\nu}=\Lambda^{\mu'}_\nu x^\nu$$ |
How to solve $B = x^c - (1 - x)^c$ | For "most" values of $c$ and $B$, the Galois group of $x^c-(1-x)^c-B$ is not solvable, so in particular the equation $x^c-(1-x)^c=B$ can not (in general) be solved in radicals.
For a concrete example (suggested already in the comments), with $c=5$ and $B=2$, the Galois group is $S_5$. This is also the case for
$$(c,B) \in \{ (5,3), (5,4), (5,5), (6,2), \ldots \}$$
and many many other examples. |
Solution to the following diophantine equation | From $a = b + 30$ and $x + y = 12$ we get $y=12-x$.
Substituting into the second equation we have
$(b+30)x+b(12-x)=1200$
$bx +30x+12b-bx=1200$
$30x+12b=1200$
$5x+2b=200$
$x=\dfrac{200-2b}{5}$
$y=12-\dfrac{200-2b}{5}=\dfrac{2b-140}{5}$
and finally the solution
$$a= b+30,x= \frac{200-2b}{5},y= \frac{2b-140}{5}$$
Since $x$ must be positive we need $200-2b>0$, i.e. $b<100$, and since $y$ must be positive we need $2b-140>0$, i.e. $b>70$.
Thus $70<b<100$, and $b$ must be divisible by $5$ so that $x$ and $y$ are integers. The solutions are
$
\begin{array}{l|l|l|l}
a & b & x & y\\
\hline
105& 75 &10&2\\
110&80&8&4\\
115&85&6&6\\
120&90&4&8\\
125&95&2&10\\
\end{array}
$
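A brute-force search confirms that these are the only solutions in positive integers (a quick Python sketch; the variable names are mine):

```python
# Enumerate b with a = b + 30, x + y = 12, ax + by = 1200,
# requiring x and y to be positive integers.
solutions = []
for b in range(1, 200):
    a = b + 30
    if (200 - 2 * b) % 5 != 0:   # x = (200 - 2b)/5 must be an integer
        continue
    x = (200 - 2 * b) // 5
    y = 12 - x
    if x > 0 and y > 0:
        assert a * x + b * y == 1200  # sanity check against the 2nd equation
        solutions.append((a, b, x, y))
print(solutions)
```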
Hope this helps |
Which number base contains the most Palindromic Numbers? | We have:
$$b^3-2b^2+4b-2=(b-2)b^2+3b+(b-2)$$
and:
$$b^3-2b^2+4b-2=(b-1)^3+(b-1)^2+3(b-1)+1$$
For all $b\in\mathbb{N}_{\ge 5}$, let $f(b)$ be the amount of palindromes in base $b$ up to and including $b^3-2b^2+4b-2$ and let $g(b)$ be the amount of palindromes in base $b-1$ up to and including $b^3-2b^2+4b-2$. Now, from the very first 'factorization', we see that:
$$f(b)=(b-3)b+4+(b-1)=b^2-2b+3$$
Because, if a palindrome starts with a digit $1\le k\le b-3$, we can choose whatever digit we want for the middle one, and when the palindrome starts with $b-2$, we have $4$ options for the middle digit. Also, we can choose a total of $b-1$ two-digit palindromes.
Now for $g(b)$. We split this case up into $4$-digit and $3$-digit numbers. If we have a $4$-digit number, the first digit is a $1$ and the second one is $1$ or $0$, so the only two options are $1111_{b-1}$ and $1001_{b-1}$. All $3$-digit palindromes are possible, a total of $(b-2)(b-1)$ numbers (because we can't choose $0$ as the leading digit). So when we also include the two-digit palindromes:
$$g(b)=2+(b-2)(b-1)+(b-2)=b^2-2b+2=f(b)-1$$
And since $b^3-2b^2+4b-2$ is a palindrome in base $b$, but not in base $b-1$, base $b$ 'takes over' at $b^3-2b^2+4b-2$.
The thing I'm still trying to prove is that this is the first time $b$ 'takes over'. As soon as I know, I'll edit it in |
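The counts above can be checked by brute force. The sketch below (my own code, not from the original answer) counts palindromes of at least two digits, which is the convention the formulas for $f$ and $g$ implicitly use; for $b=5$ the bound is $N=93=333_5$ and we indeed get $f(5)=18$ and $g(5)=f(5)-1=17$:

```python
def is_palindrome(n, base):
    """Palindrome with at least two digits in the given base."""
    digits = []
    while n:
        digits.append(n % base)
        n //= base
    return len(digits) >= 2 and digits == digits[::-1]

b = 5
N = b**3 - 2*b**2 + 4*b - 2          # 93, which is 333_5
f = sum(is_palindrome(n, b) for n in range(1, N + 1))
g = sum(is_palindrome(n, b - 1) for n in range(1, N + 1))
print(f, g)   # 18 17
```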
Solve logical equivalence of $\neg (p \land \neg q)$ | One may use
$$
\neg (a \land b)=\neg a \lor \neg b
$$ giving
$$
\neg (p \land \neg q)=\neg p \lor \neg (\neg q)=\neg p \lor q.
$$ |
An Identity for Pell-numbers | You know $P_{n+2} = 2 P_{n+1} + P_{n}$ $\Rightarrow$ $P_{n+2}-(1+\sqrt{2})P_{n+1}= (1-\sqrt{2})(P_{n+1}-(1+\sqrt{2})P_{n})$. In other words, we have
\begin{align*}
\frac{P_{n+2}-(1+\sqrt{2})P_{n+1}}{P_{n+1}-(1+\sqrt{2})P_{n}} = (1-\sqrt{2}).
\end{align*}
Since $P_0 = 0$, $P_1 = 1$, $P_2 = 2$, we deduce that
\begin{align*}
P_{n} = \frac{(1+\sqrt{2})^n - (1-\sqrt{2})^n }{2\sqrt{2}}.
\end{align*}
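A quick numerical check of the closed form against the recurrence $P_{n+2}=2P_{n+1}+P_n$ with $P_0=0$, $P_1=1$ (a Python sketch):

```python
import math

sqrt2 = math.sqrt(2)

def pell_closed(n):
    # Binet-style closed form from the answer
    return round(((1 + sqrt2)**n - (1 - sqrt2)**n) / (2 * sqrt2))

pell = [0, 1]
for _ in range(10):
    pell.append(2 * pell[-1] + pell[-2])

print(pell)
print([pell_closed(n) for n in range(12)])
```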
See the Wikipedia article on Pell numbers. |
integral from 0 to infinity of limiting function | Very bad idea. You can't freely exchange limit and integral (without uniform convergence). Start by calculating
$$\lim_{n\to\infty}{\cos(x)\over1+(\arctan(x))^n}.$$ |
How do we geometrically multiply and divide circular arcs? | Use a thick disk and a thread. You can roll/unroll and straighten the thread to convert from arc to line segment.
Alternatively, use an Archimedes' spiral. The polar angle and the modulus perform the same transformation. |
Prove that $f\left ( x \right )- x^{2021}$ always has at least one root $x_{0}\in\left ( 0, 1 \right )$ | $f(0)>0$ is crucial for this. (Otherwise $\epsilon x^{2021}$ would be a counter-example for small enough $\epsilon$).
If $f(x) -x^{2021}$ is never zero then it is always positive or always negative (by IVP). If it is always negative we get a contradiction by letting $x \to 0$. If it is always positive we get a contradiction to $\int_0^{1} f(x) dx <\frac 1 {2022}$. |
Domain of a piecewise continuous function | The statement that the endpoints need to be well-defined doesn't mean that the function values are defined at the endpoints, just that we know what the endpoints are. To say that the function is piecewise continuous on $[a,b]$ in the first definition is not to say that the domain of the function is $[a,b]$. That would be contradictory, as you have pointed out.
I don't agree with either of the proposed answers. I don't agree with the first option because $2$ is not in the domain. I don't agree with the second option because $1$ is in the domain. My answer would be
$\left [ 1,2 \right )\cup \left ( 2,3 \right ) \cup \left ( 3,4 \right )\cup \left ( 4,6 \right ]$ |
Regular and irregular points | The definition of wikipedia :
For the ordinary differential equation: $$f''(x)+p_{1}(x)f'(x)+p_{0}(x)f(x)=0.\,$$
Point $a$ is an ordinary point when functions $p_1(x)$ and $p_0(x)$ are analytic at $x = a$.
Point $a$ is a regular singular point if $p_1(x)$ has a pole up to order $1$ at $x = a$ and $p_0$ has a pole of order up to $2$ at $x = a$.
Otherwise, point a is an irregular singular point.
For your example 1)
$$x^2 (1-x^2)y'' + \frac {2}{x}y'+4y =0$$
we divide by the coefficient function of $y''$ to obtain:
$$y'' + \frac {2}{x^3 (1-x^2)}y'+\dfrac{4}{x^2 (1-x^2)}y =0.$$
The singular points are $x=0$ and $x=\pm 1$. From the definition, we see that $x=0$ is an irregular singular point because the coefficient function of $y'$ has a pole of order $3$ there. $x=\pm 1$ are order $1$ poles of the coefficient function of $y'$ and also order $1$ poles of the coefficient function of $y$. Hence, we know that these are regular singular points.
Can you do the next example? |
Identity relating scalar triple product of basis | Each $e_i$ itself is a vector, so this is nothing else than the matrix with three column vectors $e_i,e_j,e_k$, so in total we have $A\in k^{3\times 3}$ with $a_{1i}=a_{2j}=a_{3k}=1$ and zeros everywhere else. The other matrix is the same just flipped, so instead of column vectors you have row vectors with $b_{i1}=b_{j2}=b_{k3}=1$ and zeros elsewhere.
Example for $i=3,j=l=1,k=m=1$:
$$
\det\left(\begin{array}{ccc} 0&1&1\\0&0&0\\1&0&0\end{array}\right) \cdot \det\left(\begin{array}{ccc} 0&0&1\\1&0&0\\1&0&0\end{array}\right) = \det\left(\begin{array}{ccc} 1&0&0\\0&1&1\\0&1&1\end{array}\right)
$$
The equality follows from the fact that $\det$ is multiplicative, that means $\det (A) \cdot \det (B) = \det (A\cdot B) = \det (B\cdot A)$ for $A,B$ square matrices.
If you multiply the given matrices you will see that the matrix on the RHS is nothing else than the product of the matrices on the LHS. |
$\sum_i x_i^2 +\sum_i\sum_{i\neq j}B_{ij}x_i x_j \geq 0$? | No.
$$
B =
\begin{bmatrix}
0 & 1 & 1 \\
1 & 0 & 0\\
1 & 0 & 0
\end{bmatrix}\\
x =
\begin{bmatrix}
1\\
-1\\
-1
\end{bmatrix}
$$
Then $x^t (I+B) x = -1$, if I've calculated correctly. |
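That arithmetic can be confirmed in a few lines of plain Python:

```python
B = [[0, 1, 1],
     [1, 0, 0],
     [1, 0, 0]]
x = [1, -1, -1]

# quadratic form x^T (I + B) x, computed entrywise
val = sum(x[i] * ((1 if i == j else 0) + B[i][j]) * x[j]
          for i in range(3) for j in range(3))
print(val)   # -1
```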
Generalizing $\sum_{n=1}^{\infty}\frac{H_n{2n \choose n}}{2^{2n}(2n-1)}=2$ | What about reindexing and induction? The terms $\frac{1}{(2n-1)\cdots(2n-2k-1)}$ have a nice telescopic structure: by the residue theorem
$$ \frac{1}{(2n-1)(2n-3)\cdots(2n-2k-1)}=(-1)^k\sum_{h=0}^{k}\frac{(-1)^h}{(2n-2h-1)}\cdot\frac{1}{2^{k+1}(2h)!!(2k-2h)!!} $$
equals
$$ \frac{(-1)^k}{2^{2k+1}k!} \sum_{h=0}^{k}\frac{(-1)^h}{(2n-2h-1)}\binom{k}{h}.$$
The natural temptation is now to compute
$$ \sum_{n\geq 1}\frac{H_n}{4^{n}}\binom{2n}{n}\frac{1}{2n-2h-1}$$
through $\frac{-\log(1-z)}{1-z}=\sum_{n\geq 1}H_n z^n$ and $\frac{1}{4^n}\binom{2n}{n}=\frac{2}{\pi}\int_{0}^{\pi/2}\left(\cos\theta\right)^{2n}\,d\theta$, multiply both sides by $(-1)^k \binom{k}{h}$, sum over $h=0,1,\ldots,k$ and finish by invoking Fubini's theorem (allowing to switch the integrals with respect to $d\theta$ and $dz$) and the Fourier series $\sum_{m\geq 1}\frac{\cos(m\varphi)}{m}$ and $\sum_{m\geq 1}\frac{\sin(m\varphi)}{m}$.
The only obstruction is that $\frac{1}{2n-2h-1}=\int_{0}^{1}z^n\left[\frac{1}{2z^{h+3/2}}\right]\,dz$ does not hold unconditionally: we would have been happier in having rising Pochhammer symbols rather than falling ones. On the other hand, reindexing fixes this issue. Since $\binom{2n+2}{n+1} = \frac{2(2n+1)}{n+1}\binom{2n}{n}$, the original series can be written as
$$ \sum_{n\geq 1}\frac{H_n \binom{2n}{n}}{4^n(2n-1)} = \sum_{n\geq 0}\frac{2H_{n+1}\binom{2n}{n}}{4^{n+1}(n+1)}=-\frac{1}{\pi}\int_{0}^{1}\sum_{n\geq 0}\int_{0}^{\pi/2}z^n\left(\cos\theta\right)^{2n}\log(1-z)\,d\theta\,dz $$
or
$$ -\frac{1}{\pi}\int_{0}^{1}\int_{0}^{\pi/2}\frac{\log(1-z)}{1-z\cos^2\theta}\,d\theta\,dz =-\frac{1}{2}\int_{0}^{1}\frac{\log(1-z)}{\sqrt{1-z}}\,dz,$$
clearly given by a derivative of the Beta function. This approach works also by replacing $(2n-1)$ with $(2n-1)\cdots(2n-2k-1)$, you just have to be careful in managing the involved constants depending on $k$.
On second thought, I have just applied the binomial transform in disguise. |
Finding an equation with related roots | An equation which has $\alpha\beta$, $\alpha\gamma$, and $\beta\gamma$ as roots is, of course $P(x)=0$, with$$P(x)=(x-\alpha\beta)(x-\alpha\gamma)(x-\beta\gamma).$$On the other hand, note that$$P(x)=x^3-(\alpha\beta+\alpha\gamma+\beta\gamma)x^2+(\alpha^2\beta\gamma+\alpha\beta^2\gamma+\alpha\beta\gamma^2)x-(\alpha\beta\gamma)^2.$$Since $\alpha$, $\beta$, and $\gamma$ are the roots of $x^3-8x+14$, you know that $\alpha+\beta+\gamma=0$, that $\alpha\beta+\alpha\gamma+\beta\gamma=-8$, and that $\alpha\beta\gamma=-14$. So,$$\alpha^2\beta\gamma+\alpha\beta^2\gamma+\alpha\beta\gamma^2=\alpha\beta\gamma(\alpha+\beta+\gamma)=0$$and therefore $P(x)=x^3+8x^2-196$.
Can you do the same thing in the other case? |
Existence of a global minimum of a set. | The axiom you are after is the "Well ordering principle". Every non-empty subset of the natural numbers has a least element.
See: http://en.wikipedia.org/wiki/Well-ordering_principle |
How many numbers between $0$ and $9999$ have either of the digits $2,5,8$ at least once - Check my answer | Your answer is correct, but there is a more straightforward method you could use.
Just like you did first, let's consider the number of invalid numbers. So what we want to do is figure out how many numbers can be made using only the $7$ invalid digits. Imagine $4$ slots, in which any of the $7$ digits can go. In the first slot there can be $7$ different digits, in the second slot there can also be $7$ different digits, and so on. We thus have $7\cdot 7 \cdot 7 \cdot 7$ different numbers consisting of invalid digits.
Your answer is $10000 - 7^{4} = 7599$. |
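A brute-force check agrees (a quick sketch):

```python
# count numbers 0..9999 containing at least one of the digits 2, 5, 8
good = sum(1 for n in range(10000)
           if any(d in str(n) for d in "258"))
print(good, 10000 - 7 ** 4)   # both 7599
```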
What makes a function of a matrix convex? | The function you've written is not a function of a matrix (even though it is written in terms of a matrix) unless you consider vectors as matrices.
To answer your second question, $\Sigma$ must be positive semidefinite so that the Hessian has only nonnegative eigenvalues, hence nonnegative curvature, and thus the function is convex. |
How to calculate the pullback of a $k$-form explicitly | Instead of thinking of $\alpha$ as a map, think of it as a substitution of variables:
$$
x = uv,\qquad y=u^2,\qquad z =3u+v.
$$
Then
$$
dx \;=\; \frac{\partial x}{\partial u}du+\frac{\partial x}{\partial v}dv \;=\; v\,du+u\,dv
$$
and similarly
$$
dy \;=\; 2u\,du\qquad\text{and}\qquad dz\;=\;3\,du+dv.
$$
Therefore,
$$
\begin{align*}
xy\,dx + 2z\,dy - y\,dz \;&=\; (uv)(u^2)(v\,du+u\,dv)+2(3u+v)(2u\,du)-(u^2)(3\,du+dv)\\[1ex]
&=\; (u^3v^2+9u^2+4uv)\,du\,+\,(u^4v-u^2)\,dv.
\end{align*}
$$
We conclude that
$$
\alpha^*(xy\,dx + 2z\,dy - y\,dz) \;=\; (u^3v^2+9u^2+4uv)\,du\,+\,(u^4v-u^2)\,dv.
$$ |
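One can sanity-check the coefficient functions numerically by evaluating both sides at sample points, here $(u,v)=(2,3)$ and $(1,1)$ (my own check, plain Python):

```python
def pullback_coeffs(u, v):
    # substitute x = uv, y = u^2, z = 3u + v and read off the
    # du- and dv-components of xy dx + 2z dy - y dz
    x, y, z = u * v, u * u, 3 * u + v
    du = x * y * v + 2 * z * (2 * u) - y * 3   # dx -> v du, dy -> 2u du, dz -> 3 du
    dv = x * y * u - y * 1                     # dx -> u dv, dz -> dv (dy has no dv part)
    return du, dv

u, v = 2, 3
print(pullback_coeffs(u, v))                  # (132, 44)
print(u**3 * v**2 + 9 * u**2 + 4 * u * v,     # 132, matching the du-coefficient
      u**4 * v - u**2)                        # 44, matching the dv-coefficient
```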
Show that $L$ is compact | Isn't that $L(\overline{B_{X}})$ being relatively compact (in fact, it is compact by the assumption)? Here $B_{X}$ is the closed unit ball in $X$. Then the compactness of the operator follows immediately by definition. |
Check if a given point is inside the convex hull of 4 points. | The convex hull can be constructed from only 2 triangles.
You could also try to evaluate the bilinear coordinates of $M$ with respect to it and verify that they lie in $[0,1]^2$. |
Find the Particular Solution of the following system using the Laplace Transform method: | Laplace transforming we have
$$
sX = X-2Y+x_0\\
sY = 3X-4Y + y_0
$$
Now solving for $X,Y$
$$
X = \frac{(s+4) x_0-2 y_0}{s^2+3 s+2},Y = \frac{(s-1) y_0+3 x_0}{s^2+3 s+2}
$$
so
$$
x = e^{-2 t} \left(e^t (3 x_0-2 y_0)-2 x_0+2 y_0\right)u(t)\\
y = e^{-2 t} \left(3 \left(e^t-1\right) x_0+\left(3-2 e^t\right) y_0\right)u(t)
$$
with $u(t)$ the unit step function
etc. |
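The solution can be sanity-checked numerically: with sample initial data, the formulas satisfy the original system $x'=x-2y$, $y'=3x-4y$ and the initial conditions up to finite-difference error (my own check, not part of the original answer):

```python
import math

x0, y0 = 1.0, 2.0

def x(t):
    return math.exp(-2 * t) * (math.exp(t) * (3 * x0 - 2 * y0) - 2 * x0 + 2 * y0)

def y(t):
    return math.exp(-2 * t) * (3 * (math.exp(t) - 1) * x0 + (3 - 2 * math.exp(t)) * y0)

t, h = 0.7, 1e-6
dx = (x(t + h) - x(t - h)) / (2 * h)   # central difference
dy = (y(t + h) - y(t - h)) / (2 * h)

print(abs(x(0) - x0), abs(y(0) - y0))    # initial conditions: both essentially 0
print(abs(dx - (x(t) - 2 * y(t))))        # close to 0
print(abs(dy - (3 * x(t) - 4 * y(t))))    # close to 0
```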
Existence of a smooth map to its embedded submanifold | There does not exist even a continuous map $f$ from $M = \mathbb S^2$ to its equator $N$ such that $f|_N = id$: if $f$ were such a map, then, writing $i : N \hookrightarrow M$ for the inclusion, $f_* \circ i_*$ would be the identity on $\pi_1(N) \cong \mathbb Z$; but it factors as
$$\pi_1(N) \xrightarrow{\,i_*\,} \pi_1(M) = \{0\} \xrightarrow{\,f_*\,} \pi_1(N),$$
through the trivial group, a contradiction. |
Calculate total number of iterations | Fast answer:
$$\text{number of items}^{\text{number of spaces}}$$
For example, with $5$ repetitions and entries $A,B$, the number of spaces is $5$ and the number of items is $2$.
Also, if your program has a part like:
    for variable1 in lista1:
        for variable2 in lista2:
            print(variable1+variable2)
Then you can do:
    count = 0
    for variable1 in lista1:
        for variable2 in lista2:
            count += 1
            print(variable1+variable2)
    print(count)
The math behind it is like this:
You have $X$ items and $Y$ slots; in the first slot, how many options do you have to put one item? Of course, you have $X$ options.
Then for the second slot, you ask how many items can be in that slot? Of course, the answer is again $X$.
Since you have $Y$ slots, and in every one of them you can put any of the $X$ items, you will have a total number of combinations of:
$$X\times X\times X\times \cdots X$$
How many times? The number of slots that you have, which is $Y$.
So the number of combinations is $$\underbrace{X\times X\times X\times \cdots \times X}_{Y \text{ times}} = X^Y$$ |
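The same count can be generated mechanically with `itertools.product` (a sketch; `items` and `slots` are my names):

```python
from itertools import product

items = ["A", "B"]   # X = 2 items
slots = 5            # Y = 5 slots

combos = list(product(items, repeat=slots))
print(len(combos), len(items) ** slots)   # 32 32
```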
A quadratic polynomial $f$ such that $f\circ f' = f'\circ f$ | HINT: we have $$f(x)=ax^2+bx+c\implies f'(x)=2ax+b$$ $$f(f'(x))=a(2ax+b)^2+b(2ax+b)+c$$ &
$$f'(f(x))=2a(ax^2+bx+c)+b$$
hence, $$a(2ax+b)^2+b(2ax+b)+c=2a(ax^2+bx+c)+b$$ |
Definition of proper morphism between schemes | You are correct that if $f : X \to Y$ is proper over some base $S$ then it has the property you described. If $Z \subset Y$ is a closed subscheme proper over $S$ then $f^{-1}(Z) \subset X$ is a closed subscheme proper over $S$.
Because $f^{-1}(Z)$ has a natural subscheme structure as $f^{-1}(Z) = Z \times_Y X$, which is not always reduced, it is probably better to require that $f^{-1}(Z)$ be proper with this standard subscheme structure.
However, it's also true if we give $f^{-1}(Z)$ the reduced subscheme structure because this is a closed subscheme of $f^{-1}(Z)$ with its standard structure and closed subschemes of proper schemes are proper.
Now let me prove the claim. The map $f^{-1}(Z) \to Z$ is proper because it is the base change of $f : X \to Y$. Therefore the composition $f^{-1}(Z) \to Z \to S$ is proper showing that $f^{-1}(Z)$ is proper over $S$.
However, I do not believe this property is enough to characterize proper morphisms even for varieties over an algebraically closed field. The problem is that "there aren't enough proper closed subvarieties". For example, $\mathbb{A}^1 \setminus \{ 0 \} \to \mathbb{A}^1$ is not proper but the only proper closed subvarieties of $\mathbb{A}^1$ are points whose preimages are also points.
In fact, we can make examples where $f : X \to Y$ is a closed surjective map of varieties over $\mathbb{C}$ with your property but not proper. For example, let $Y$ be the affine nodal curve $y^2 = x^2(x-1)$ and $X = \mathbb{A}^1 \setminus \{ i \}$. The normalization map $\nu : \tilde{Y} \to Y$ where $\tilde{Y} = \mathbb{A}^1$ sends $t \mapsto (t^2 + 1, t(t^2 + 1))$ so take $X \to \tilde{Y} \to Y$ i.e. we remove one of the two preimages of the node. Then $f$ is closed and surjective (even bijective!) pulling back proper closed subvarieties (points) to proper closed subvarieties. However, $f$ is not proper. To see this, consider $\tilde{f} : X \times \tilde{Y} \to Y \times \tilde{Y}$ and the closed set
$$\Delta = \{ (x,x) \mid x \in X \} \subset X \times \tilde{Y} $$
however $\tilde{f}(\Delta) = \{ (f(x), x) \mid x \in X \} \subset Y \times \tilde{Y}$ is the graph of $f$ minus one point and thus not closed.
However, for smooth varieties over $\mathbb{C}$, we can view an algebraic map $f : X \to Y$ as a map of complex manifolds $f^{\mathrm{an}} : X^{\mathrm{an}} \to Y^{\mathrm{an}}$ viewing $X, Y$ as complex manifolds with the analytic topology. Then it is a comforting fact that $f$ is proper iff $f^{\mathrm{an}}$ is proper in the usual topology sense. |
Proof Correction: $\cos ' (x) = -2 \sin (x)$ | You forgot to apply the chain rule and multiply by $(x/2)'$ when differentiating $(A^{-1})(x/2)$. |
planar embedding a graph | Draw the graph on a sphere. Declare a point next to the chosen edge to be the north pole and project the sphere stereographically to a plane. |
Why is the Skyscraper Sheaf defined as it is? | There's a simple pragmatic reason that you can't use the empty set in place of $\{e\}$, at least when $S$ is nonempty: If $i_{p,*}S(X) = S$, then for any open set $U$, there is a restriction map $\rho^X_U\colon S\to i_{p,*}S(U)$, so $i_{p,*}S(U)$ cannot be empty.
As for why the definition is as it is, the skyscraper sheaf satisfies a universal property: The stalk of $i_{p,*}S$ at $p$ is $S$, and for any other sheaf $F$ on $X$ such that $F_p = S$, there is a unique sheaf morphism $F\to i_{p,*}S$ such that the induced map $F_p \to (i_{p,*}S)_p$ is the identity map on $S$. That is, the skyscraper sheaf is terminal in the category of sheaves on $X$ with stalk $S$ at $p$ (where the morphisms are those which induce the identity map on $S$). The appearance of the singleton set $\{e\}$ can be explained by the fact that $\{e\}$ is the terminal object in $\mathsf{Set}$.
This universal property is a bit awkward to state, since the uniqueness of the map $F\to i_{p,*}S$ depends on an identification of the stalk of $F$ at $p$ with $S$. It's cleaner to view it as a special case of the fact (it's a good exercise, and I'd be surprised if it's not in Vakil's notes) that the skyscraper sheaf at $p$ functor $i_{p,*}\colon \mathsf{Set}\to \mathsf{Sh}(X)$ is right adjoint to the stalk at $p$ functor $(-)_p\colon \mathsf{Sh}(X)\to \mathsf{Set}$. Then the canonical map in the previous paragraph is the image of the identity map on $S$ under the natural isomorphism $\mathrm{Hom}_{\mathsf{Set}}(F_p,S)\to \mathrm{Hom}_{\mathsf{Sh}(X)}(F,i_{p,*}S)$. To put it another way, it's the component $\eta_F$ at $F$ of the unit of this adjunction $\eta\colon \text{id}_{\mathsf{Sh}(X)} \to i_{p,*}(-)_p$.
This answer was in terms of sheaves of sets, but the same things are true for sheaves of abelian groups, modules, etc. as mentioned in Qiaochu's comment. |
Coordinate of ends of a irregular polygon | In the diagram below, $A$, $B$ and $C$ are three consecutive vertices of your polygonal. Suppose you know angles $\theta_A$ (made by line $AB$ with the $x$ axis) and $\varphi_B$ (angle between $AB$ and $BC$), as well as the coordinates of $A$ and $B$ and the length $L_{BC}$ of ${BC}$. The angle $BC$ forms with the $x$ axis is $\theta_B=\varphi_B+\theta_A-\pi$ and the coordinates of $C$ can thus be found from
$$
x_C=x_B+L_{BC}\cos\theta_B,\quad
y_C=y_B+L_{BC}\sin\theta_B.
$$
You can then recursively compute the coordinates of all vertices, given the coordinates of the first two. |
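The recursion is easy to mechanize (my own code and names; angles in radians). For a unit square traversed clockwise, with interior angle $\varphi=\pi/2$ at every vertex, three steps return to the starting vertex:

```python
import math

def next_vertex(xB, yB, theta_A, phi_B, L_BC):
    """Given side AB's direction theta_A, the interior angle phi_B at B,
    and the length of BC, return C and the direction of BC."""
    theta_B = phi_B + theta_A - math.pi
    return (xB + L_BC * math.cos(theta_B),
            yB + L_BC * math.sin(theta_B),
            theta_B)

# unit square: A=(0,0), B=(1,0), all interior angles pi/2, all sides length 1
pt = (1.0, 0.0, 0.0)             # (x_B, y_B, direction of AB)
for _ in range(3):
    pt = next_vertex(pt[0], pt[1], pt[2], math.pi / 2, 1.0)
print(pt[:2])                    # back at A = (0, 0), up to rounding
```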
Proof of combinatorial set | Consider the number of ways to colour $n$ elements from $2n$ red and the others blue. This is clearly the RHS.
Now split the $2n$ into two equally sized groups. In the first group, choose $i$ elements to colour red and make the rest blue. In the second group, choose $i$ elements to colour blue and make the rest red. Summing over all $i$ gives the number of ways to colour $n$ objects red and we are done. |
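The identity being proven is the Vandermonde-type identity $\sum_{i=0}^{n}\binom{n}{i}^2 = \binom{2n}{n}$; a quick numerical check for one value of $n$ (a sketch):

```python
from math import comb

n = 8
lhs = sum(comb(n, i) ** 2 for i in range(n + 1))
print(lhs, comb(2 * n, n))   # 12870 12870
```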
Problem with friction | Hints:
For the bottom mass, you have the following force equation in x-axis (why?)
$$T - \mu_s m_{top}g - 0.24 (m_{top}+m_{bottom})g = (m_{top}+m_{bottom})a \tag{1}$$
where $a$ is the common acceleration if the top body does not slip.
Writing equations for the top mass you have:
$$\mu_s m_{top}g = m_{top}a \implies a \le \mu_sg \quad \text{for not slipping} \tag{2}$$
Use the bound from $(2)$ in $(1)$ to get the maximum force for not slipping. |
An analog of the Myhill-Nerode Theorem for context-free languages? | This is probably not really what you are looking for, but it's the best I know. It's a strengthening of the fairly well-known Parikh's Theorem.
There is a characterisation of the bounded context-free languages. A language $L$ is bounded if $L\subseteq w_1^* w_2^* \ldots w_n^*$ for some fixed words $w_1,\ldots,w_n$, in which case we can define a corresponding subset of $\mathbb{N}_0^n$:
$\Phi(L) = \{(m_1,m_2,\ldots,m_n) \mid w_1^{m_1} w_2^{m_2} \ldots w_n^{m_n}\in L\}.$
By a theorem of Ginsburg and Spanier (Thm 5.4.2 in Ginsburg's book 'The Mathematical Theory of Context-free Languages'), a bounded language $L$ is context-free if and only if $\Phi(L)$ can be expressed as a finite union of linear sets, each with a stratified set of periods. For the definitions of the terms in the last sentence, see my MO question.
This characterisation can be very useful for proving (not necessarily bounded) languages not to be context-free. If we can find words $w_1,\ldots,w_n$ such that $L\cap w_1^*\ldots w_n^*$ is not context-free, then $L$ is not context-free either, since $w_1^*\ldots w_n^*$ is a regular language, and the intersection of a context-free language with a regular language is context-free. |
How to compute 2-adic square roots? | Because the derivative of $x^2-17$, i.e. $2x$, is $0 \bmod{2}$, Hensel's Lemma doesn't work very cleanly. In this situation, when going from $p$ to $p^2$, either there is no lift, or every lift will work $\bmod p^2$. Let's look at what happens here:
$x^2\equiv 17 \bmod 2 \text{ has the solution }x\equiv 1 \bmod 2$
$(2y+1)^2 \equiv 17 \bmod 4 \text { is always true, telling us } x\equiv 1,3 \bmod 4 \text{ both work}$
When we lift to $\bmod 8$ we find $1$ and $5$ (the lifts of $1 \bmod 4$) both work $\bmod 8$, as well as $3$ and $7$ (the lifts of $3 \bmod 4$). Note that we seem to have 4 solutions! Let's look at $\bmod 16$ and beyond.
$$
\begin{array}{lll}
1,5\pmod 8 & 1^2 \equiv (1+16) \equiv 17 \pmod{16} & 5^2\equiv 9 \not \equiv 17 \pmod{16} \\
3,7\pmod{ 8} & 3^2 \equiv 9 \not\equiv 17 \pmod{16} & 7^2\equiv 49 \equiv 17 \pmod{16} \\
\end{array}
$$
So of our 4 solutions only $1$ and $7 \bmod 8$ will lift to $\bmod 16$. We lift those and try $\bmod 32$.
$$
\begin{array}{lll}
1,9\pmod{16} & 1^2 \not\equiv 17 \pmod{32} & 9^2\equiv 81 \equiv 17 \pmod{32} \\
7,15\pmod{16} & 7^2 \equiv 49 \equiv 17 \pmod{32} & 15^2\equiv 225 \not\equiv 17 \pmod{32} \\
\end{array}
$$
So of our 4 solutions only $9$ and $7 \bmod 16$ will lift to $\bmod 32$. We lift those and try $\bmod 64$.
$$
\begin{array}{lll}
9,25\pmod{32} & 9^2 \equiv 81 \equiv 17 \pmod{64} & 25^2\equiv 625 \not\equiv 17 \pmod{64} \\
7,23\pmod{32} & 7^2 \equiv 49 \not\equiv 17 \pmod{64} & 23^2\equiv 529 \equiv 17 \pmod{64}
\end{array}
$$
Fairly tedious stuff for humans, but nothing a computer algebra system won't whip out in no time. We have found 2 roots, $1 + 2^3 + O(2^5)$ and $1 + 2+ 2^2 + 2^4 + O(2^5)$.
When doing the calculations by hand it would probably make more sense to find only one root and multiply by $-1=\frac{1}{1-2}=1+2+2^2+\ldots$ for the other root. |
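A computer algebra system will indeed whip this out in no time; even a short plain-Python lifting loop suffices (a sketch, my own code, tracking all square roots of $17$ modulo $2^k$):

```python
def sqrt_mod_2k(a, k):
    """All x in [0, 2^k) with x^2 = a (mod 2^k), found by lifting bit by bit."""
    sols = [x for x in range(2) if (x * x - a) % 2 == 0]
    for j in range(2, k + 1):
        m = 1 << j
        sols = sorted({x for s in sols for x in (s, s + (m >> 1))
                       if (x * x - a) % m == 0})
    return sols

for j in range(3, 7):
    print(j, sqrt_mod_2k(17, j))
# mod 8 every odd residue works; mod 64 only 9, 23, 41, 55 survive
```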
Interior triangle bisects midpoints of exterior triangle. Find the perimeter of the exterior triangle? | You could also solve it by determining that all four triangles are congruent. Combined with the theorem you used for parallel lines opposing an interior angle, you could then determine congruent angles to show similarity of triangles (opposite angles of parallelograms, alternate angles, corresponding angles etc).
Then establish a congruent side or sides from the fact that the other three triangles share a side with the middle triangle, together with opposite sides of parallelograms ($XY = ZC$ for instance), to conclude congruence. |
Estimate variance in a nested study design | Let's say the true means are $\mu$ and $\mu_m$ for $m=1,\dots,M$ for the mother bacteria and the $m^{\text{th}}$ mutation, respectively. You want to estimate
$$\displaystyle\sum_{m=1}^M(\mu_m-\mu)^2=\sum_{m=1}^M\mu_m^2-2\mu\sum_{m=1}^M\mu_m+M\mu^2.$$
Let's now try to come up with unbiased estimators of each of the terms. By linearity of expectation, we will be done. Note that for any random variable $\text{Var}(X)=\mathbb{E}(X^2)-(\mathbb{E}(X))^2.$ With this in mind we define
$$V=\dfrac{1}{R-1}\displaystyle\sum_{i=1}^R(P_i-\bar{P})^2\hspace{1cm}\text{ and }\hspace{1cm}V_m=\dfrac{1}{R-1}\displaystyle\sum_{i=1}^R(P_{m,i}-\bar{P}_m)^2$$
where $P_i$ is the $i^{\text{th}}$ measurement from the mother bacteria, $i=1,\dots,R$ and $P_{m,i}$ is the $i^{\text{th}}$ measurement from the $m^{\text{th}}$ mutant, $m=1,\dots,M$ and $i=1,\dots,r.$ Also, $\bar{P}=\dfrac{1}{R}\displaystyle\sum_{i=1}^RP_i$ and $\bar{P}_m=\dfrac{1}{r}\displaystyle\sum_{i=1}^rP_{m,i}.$
Then $\mathbb{E}(V)=\text{Var}(P_{\text{mother}}),$ and if we define the estimator $T=\dfrac{1}{R}\displaystyle\sum_{i=1}^RP_i^2-V,$ we will get $\mathbb{E}(T)=\mathbb{E}(P_\text{mother}^2)-\text{Var}(P_\text{mother})=\mathbb{E}(P_\text{mother})^2=\mu^2.$
Similarly define $T_m=\dfrac{1}{r}\displaystyle\sum_{i=1}^rP_{m,i}^2-V_m$ to get $\mathbb{E}(T_m)=\mu_m^2.$ We then define the estimator
$$T_{*}=\displaystyle\sum_{m=1}^MT_m-2\bar{P}\sum_{m=1}^M\bar{P}_m+MT.$$
Check that by linearity of expectation we get the required result. |
Does every bounded sublattice of a bounded lattice have the same top and bottom? | This depends on what you mean with bounded lattice.
If you mean a lattice $\mathbf{L} = \langle L, \wedge, \vee \rangle$ which just happens to be bounded (meaning that it has a maximum and a minimum element), then a sublattice $\mathbf{M}$ of $\mathbf{L}$ doesn't even have to be bounded, but even a bounded lattice doesn't have to have the same bounds.
For example, each element of a lattice is a sublattice coinciding with its minimum and maximum.
If by a bounded lattice you mean $\mathbf{L} = \langle L, \wedge, \vee, 0, 1 \rangle$ (here the bounds are nullary operations), then every bounded sublattice of $\mathbf{L}$ must have the same $0$ and $1$ (precisely because these are fundamental operations of the algebra).
So this is essentially a matter of definition.
By the way, in your argument, you seem to be considering that if $\mathbf{L}$ is a bounded lattice and $\mathbf{M}$ a bounded sublattice of $\mathbf{L}$, then for any $B \subseteq M$, we have $\bigwedge_{M} B = \bigwedge_L B$.
This is not the case.
For example, consider the usual order on the set
$$L = \{ -1, 0 \} \cup \{ 1/n : n \in \mathbb{N} \}.$$
This is a bounded lattice with $\top = 1$ and $\bot = -1$.
Now consider $M = L \setminus \{0\}$; it is a bounded sublattice of $L$, but, taking $B = L \setminus \{-1,0\}$ we have
$\bigwedge_M B = -1 \neq 0 = \bigwedge_L B.$ |
Show a step function is measurable | It is not true in general.
Let $X$ be constant so that $\sigma(X)=\{\varnothing,\Omega\}$.
Then it is not difficult to find a sequence of step functions converging to $X$, but these functions are not necessarily constant.
However every function that is measurable wrt $\{\varnothing,\Omega\}$ is constant. |
Find the Bezout identity for $\,\gcd(14565695, 61489)$. | You need to use the Extended Euclidean Algorithm. This will give you exactly the decomposition you are looking for. Check http://en.wikipedia.org/wiki/Extended_Euclidean_algorithm for details.
The answer is $m=-19,793$ and $n=4,688,624$. |
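A standard recursive implementation of the Extended Euclidean Algorithm (a sketch) recovers these coefficients:

```python
def ext_gcd(a, b):
    """Return (g, m, n) with m*a + n*b = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, m, n = ext_gcd(14565695, 61489)
print(g, m, n)
assert m * 14565695 + n * 61489 == g
```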
Simple model of car-following as a linear system of ODE | You get $\mathbf {\dot x}=A\mathbf x+\mathbf b$ where $A=\lambda(-I+S)$ with $S$ the sub-diagonal matrix with entries $1$ on the first sub-diagonal. And $\mathbf b=\lambda x_0(t)\mathbf e_1$.
Set $n=4$, write out the equations, especially the first one, and transcribe into matrix form to confirm that pattern. |
The elevator starts with seven passengers and stops at ten floors | It's because the $(5,2)$ discharge means that all passengers get off on one of two floors. So, that could be $5$ in floor $1$, and $2$ on floor $2$, or $5$ on floor $1$ and $2$ on floor $3$, or $5$ on floor $2$, and $2$ on floor $1$ ... etc.
There are $\frac{10!}{8!\,1!\,1!} = 90$ ways to pick those two floors: $8$ floors where no one gets off, $1$ floor where $5$ people get off, and $1$ floor where $2$ people get off. |
For a homomorphism $\phi:G\to H$, is the preimage of a coset $aN$ of $H$ is of the form $b\phi^{-1}(N)$, where $b\in \phi^{-1}(a)$? | Yes, I think you're right. Well done!
As is customary for a proof-verification question, a little more than a yes/no answer is polite. I would like to point out, then, that your current use of paragraphs is slightly strange. You cut through your proof of one side of the set inclusion, which starts directly after your proof of the other, where I believe the paragraph should start.
I can't fault your mathematics but note that $b$ might not be unique, which is to say that there could be some $b'$ with the same property. |
Is the following set dense in $L^2$? | Yes, the answer is affirmative. It is equivalent to
$\{p(x)e^{-|x|} : p\in\mathcal{P}\}$ is dense in $L^2(\Bbb{R},\Bbb{R})$.
In our context of Hilbert spaces, we need to show that
$$L_0 = \left\{f\in L^2(\Bbb{R}) : (\forall n\in\Bbb{Z}_{\geq 0})\ \int_{\Bbb{R}}x^n e^{-|x|}f(x)\,dx=0\right\}$$
consists of $f\equiv 0$ only. Here and below, all functions are complex-valued.
Let $\Lambda=\{\lambda\in\Bbb{C} : |\Re\lambda|<1\}$; for $f\in L^2(\Bbb{R})$, the function
$$B_f(\lambda)=\int_{\Bbb{R}}e^{-|x|+\lambda x}f(x)\,dx$$
is analytic in $\Lambda$ (differentiation is admissible under the integral sign). Further, for any $n\in\Bbb{Z}_{\geq 0}$ we have $B_f^{(n)}(0)=\displaystyle\int_{\Bbb{R}}x^n e^{-|x|}f(x)\,dx$ and, therefore, $f\in L_0$ if and only if $B_f\equiv 0$.
Now let $f\in L_0$ and $g\in L^1(\Bbb{R})$ (fixme... much less is enough, see comments). Then
$$0=\int_{\Bbb{R}}g(\lambda)B_f(i\lambda)\,d\lambda=\int_{\Bbb{R}}e^{-|x|}\hat{g}(x)f(x)\,dx,\quad\hat{g}(x)=\int_{\Bbb{R}}e^{i\lambda x}g(\lambda)\,d\lambda.$$
Thus, $e^{-|x|}f(x)$ is orthogonal to $\{\hat{g} : g\in L^1(\Bbb{R})\}$. This space is dense in $L^2(\Bbb{R})$ because, e.g., it contains all continuous piecewise linear finite functions obtained from
$$\hat{g}_0(x) = \max\{0,1-|x|\} \impliedby g_0(\lambda)=\frac{2}{\pi}\left(\frac{\sin\lambda/2}{\lambda}\right)^2$$
using linear combinations and shifts; $\hat{g}_1(x)=\hat{g}(x+a)\impliedby g_1(\lambda)=e^{i\lambda a}g(\lambda)$.
(I'm sure I've duplicated some known facts about integral transforms. Thus, it might be good to replace some parts of the above with references to these...) |
Long division differs from calculator | Long division:
         28.6
       ---------
    6 ) 172.000
        12
        --
         52
         48
         --
          4 0
          3 6
          ---
            4
At this point the remainder, 4, is the same as the remainder in the previous step, also 4, so everything will repeat as it did in the previous step, adding an infinite line of 6s to the quotient, which is 28.66666…, and not 28.4 nor 28.6 as you said.
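To see the repetition mechanically, here is a small Python sketch of the digit-by-digit long-division loop (the function name is mine):

```python
def long_division_digits(numerator, denominator, n_digits):
    """Integer part and decimal digits of numerator/denominator,
    produced exactly as in long division: multiply the remainder
    by 10 and divide again at each step."""
    integer_part, remainder = divmod(numerator, denominator)
    digits = []
    for _ in range(n_digits):
        remainder *= 10
        digit, remainder = divmod(remainder, denominator)
        digits.append(digit)
    return integer_part, digits

# 172 / 6: the remainder is 4 at every step, so the digit 6 repeats forever
print(long_division_digits(172, 6, 5))  # → (28, [6, 6, 6, 6, 6])
```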
The real number reached by the probability sequence | Work recursively. Let $E_n$ denote the expected value of the $n-$ digit entry. Of course $E_1=\frac 12$. Let $E$ denote $\lim_{n\to \infty}E_n$, so $E$ is the answer we seek.
We have $$E_n=E_{n-1}\times \left(E_{n-1}+\frac 1{2^n}\right)+(1-E_{n-1})\times \left(E_{n-1}\right)\implies E_n=E_{n-1}\times \left(1+\frac 1{2^n}\right)$$
In the limit we have $$E=\frac 12\times \prod_{n=2}^{\infty}\left(1+\frac 1{2^n}\right)$$
That can be evaluated using the Pochhammer Symbol and we get $$\boxed {E\approx 0.794744}$$
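The product converges very quickly (the factors approach $1$ geometrically), so a direct numerical evaluation confirms the boxed value; the snippet below is just a sanity check:

```python
# E = (1/2) * prod_{n>=2} (1 + 2^{-n}); truncating at n = 60 is far beyond
# double precision, since the omitted factors differ from 1 by < 2^{-61}.
E = 0.5
for n in range(2, 61):
    E *= 1 + 2.0 ** (-n)
print(E)  # ≈ 0.794744, matching the Pochhammer-symbol evaluation
```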
To justify the recursion, observe that $$E_n=\sum_w p(w)w$$ where $w$ spans the $n$-digit binary decimals and of course $p(w)$ is the probability of observing $w$. Of course we can write $w=w_{n-1}X$ where $w_{n-1}$ is a word of length $n-1$ and $X$ is $1$ or $0$ with probability $w_{n-1}, 1-w_{n-1}$ respectively. Splitting the sum into two parts, according to whether $X=1,0$ we see that
$$E_n=\sum_{w_{n-1}} p(w_{n-1})w_{n-1} \left(w_{n-1}+ \frac 1{2^n}\right)+\sum_{w_{n-1}} (1-w_{n-1})p(w_{n-1}) w_{n-1}=E_{n-1}\times \left(1+\frac 1{2^n}\right)$$as desired. |
Can this integral be computed in closed-form? | Since you have a full symmetry by exchanging $r_1$, $r_2$ and $r_3$, the integral can be computed as
$$3!\int_{-\infty}^{+\infty}\mathrm dx \int_{-\infty}^x\mathrm dy\int_{-\infty}^y\mathrm dz \mathrm e^{-(x^2+y^2+z^2)/2\sigma^2}\sinh\left(\frac{x-y}2\right)\sinh\left(\frac{x-z}2\right)\sinh\left(\frac{y-z}2\right).$$
Then writing the $\sinh$ in the exponential form, you end up with a sum of product integrals that are all of Gaussian type. |
How to prove this theorem using order axioms? | $ab\gt0$, so:
$$\frac{a}{ab}\lt\frac{b}{ab}$$
$$\frac{1}{b}\lt\frac{1}{a}$$ |
What is the correct name for relation like things such as $\in$ and = | Ultimately, use the shortest term you don't think will be misunderstood. I second @Arthur's suggestion: If you define relation too precisely (i.e. on sets) for that to be a legal description, go with class relation. (I prefer binary class relation over class binary relation, but neither is likely to be used often.) Having said that, @MauroALLEGRANZA noted such precision is skipped in Kunen 2009, and suggested calling them predicate symbols, which makes sense unless you want to emphasize that they're mathematical objects. For example in $\{\emptyset,\{\emptyset\}\}\in\mathord{\in}$ (I can't apologize enough for how horrible that looks!), the first $\in$ is a symbol for a binary predicate, but the second is a binary class relation. |
Can you prove that $\frac{a+b}{ab+1}$ is real if $|a|=1$, $|b|=1$, and $ab\ne-1$? | $$\dfrac{a+b}{ab+1}= \dfrac{(a+b)(1+\overline{a}\overline{b})}{(ab+1)(1+\overline{a}\overline{b})}=\dfrac{(a+b)(1+\overline{a}\overline{b})}{|ab+1|^2}=\dfrac{a+b+|a|^2\overline{b}+\overline{a}|b|^2}{|ab+1|^2}=\dfrac{a+b+\overline{b}+\overline{a}}{|ab+1|^2}=\dfrac{2Re(a+b)}{|ab+1|^2} \in \Bbb R$$ |
computing integrate of exponential divide by generalization trigonometry | Let $u = \dfrac{e^x}2\implies2\,\mathrm du = e^x\mathrm dx$. Therefore,
$$\int\dfrac{e^x}{\sqrt{4 - e^{2x}}}\,\mathrm dx\equiv\int\dfrac2{\sqrt{4 - 4u^2}}\,\mathrm du = \int\dfrac1{\sqrt{1 - u^2}}\,\mathrm du$$
This is a standard integral for $\arcsin(u)$. That is,
$$\int\dfrac1{\sqrt{1 - u^2}}\,\mathrm du = \arcsin(u) + \mathrm{constant}$$
Undo substitution.
$$\int\dfrac{e^x}{\sqrt{4 - e^{2x}}}\,\mathrm dx = \arcsin\left(\dfrac{e^x}2\right) + \mathrm{constant}$$ |
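A finite-difference check of the antiderivative (at an arbitrarily chosen point, valid wherever $e^x < 2$):

```python
import math

# Verify numerically that d/dx arcsin(e^x / 2) = e^x / sqrt(4 - e^{2x}).
x, h = 0.3, 1e-6
F = lambda t: math.asin(math.exp(t) / 2)
numeric = (F(x + h) - F(x - h)) / (2 * h)   # central difference
exact = math.exp(x) / math.sqrt(4 - math.exp(2 * x))
print(abs(numeric - exact))  # tiny
```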
p adic valuation strong triangle inequality | What does the triangle inequality tell you? The fact that $|x|_p\leq 1$ tells you that $v_p(x)\geq 0$. Then $v_p(1)=0 (Why?)$. We also know that $v_p(a+x)\geq \min \{v_p(a), v_p(x)\}$. Can you conclude the result from this? |
Proving translations in R and rotations in C are groups under composition | To rotate the plane through an angle $\theta$ around a point $A$ you translate the plane by $-A$, rotate by $\theta$ and then translate by $A$ to put the plane back to where it was.
Thus, $r_{\theta}(z) = e^{i\theta}(z-A)+A$
Now,
$r_{\phi}(r_{\theta}(z)) = r_{\phi}(e^{i\theta}(z-A)+A) = e^{i\phi}((e^{i\theta}(z-A)+A)-A)+A= e^{i\phi}(e^{i\theta}(z-A))+A=e^{i(\phi+\theta)}(z-A)+A=r_{\phi + \theta}(z)$
Associativity follows by noting that $r_{\theta_1 + \theta_2}(r_{\theta_3}(z))$ and $r_{\theta_1}(r_{\theta_2 + \theta_3}(z))$ both equal $r_{\theta_1+\theta_2+\theta_3}(z)$.

$r_0$ serves as the identity and $r_{-\theta}$ is the inverse of $r_{\theta}$.
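A quick numerical confirmation of the group laws, using Python's complex numbers (the sample point, center, and angles are arbitrary):

```python
import cmath

# Rotation about A by angle theta, as in the formula above.
def r(theta, z, A):
    return cmath.exp(1j * theta) * (z - A) + A

A, z = 1 + 2j, 3 - 1j
phi, theta = 0.8, 1.7

# composition adds angles
assert abs(r(phi, r(theta, z, A), A) - r(phi + theta, z, A)) < 1e-12
# r_0 is the identity and r_{-theta} inverts r_theta
assert abs(r(0.0, z, A) - z) < 1e-12
assert abs(r(-theta, r(theta, z, A), A) - z) < 1e-12
print("group laws verified")
```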
How continuity of $f$ and path-connectedness of $g$ results in $f\circ g$ to be path-connected? | You are confusing your terms:
- Topological spaces are connected and/or path connected. "Connected" is not an adjective that can be applied to a function.
- Functions can be continuous or not. "Continuous" is not an adjective that can be applied to a topological space.
That said, what you need to prove is that $f(X)$ (which is a set) is path connected. Therefore, you need to prove, for two points $p, q$ in $f(X)$, that there exists a path in $f(X)$ from $p$ to $q$ (i.e., a continuous function $\gamma$ from $[0,1]$ to $f(X)$ such that $\gamma(0) = p$ and $\gamma(1) = q$).
1. Do you understand that:
   - $f\circ g$ is continuous?
   - $f\circ g$ maps from $[0,1]$ to $f(X)$?
   - $(f\circ g)(0) = p$?
   - $(f\circ g)(1) = q$?
2. Do you understand that from these four points, it follows that $f\circ g$ is a path from $p$ to $q$?
3. Do you understand that from 2., it follows that $f(X)$ is path connected?
If you answer any of these questions with "no", please explain in the comments which part is confusing you. |
Is there an analytic continuation of the function, $P(n)$, that gives the $n$-th prime number? | Yes. In fact, any sequence of complex numbers is the sequence of values $f(n)$ of some entire function $f$ at the natural numbers $n$. This is a corollary of a theorem of Mittag-Leffler about existence of meromorphic functions with poles at exactly the natural numbers and with prescribed principal parts there. Taking such a function with only simple poles and multiplying it by $\sin(2\pi z)$, you get an entire function as desired. |
Line Intersection - Queries | Here is an approach that takes $O(\sqrt n)$ time for each query, so $O(n \sqrt n)$ time total.
Divide lines into lines with a gentle slope (a nonzero slope less than $\sqrt n$ in absolute value) and a steep slope (a slope greater than $\sqrt n$ in absolute value, or, awkwardly, slope $0$).
The idea is that there are very few gentle slopes, so we can afford to just store a count of how many times we've added each possible unique line with gentle slope. (We replace $k$ by $|k|$ and $b$ by $b\bmod k$, if necessary, since this doesn't affect anything.)
On the other hand, there are infinitely many lines with steep slopes, but each one intersects only a few lines $y=q$ at an integer point. So we can just have an array indexed by $0,1,\dots,n$, and at the $q^{\text{th}}$ position, record how many lines with steep slopes intersect $y=q$ at an integer coordinate. This can be updated in $O(\sqrt n)$ time when a line is added or removed: just compute $y=kx+b$ for $x=0,1,\dots, \lfloor \frac nk\rfloor$.
Finally, to check how many lines intersect $y=q$ at an integer coordinate, we handle all steep lines with a single array access, and handle each gentle slope separately. There are $O(\sqrt n)$ gentle slopes, so this takes $O(\sqrt n)$ time. |
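Here is a minimal Python sketch of the structure described above (class and method names are mine; I assume queries and intercepts lie in $[0, n]$ and that intersections may occur at any integer $x$):

```python
import math
from collections import defaultdict

class LineSet:
    """Maintain lines y = k*x + b; query how many pass through an
    integer point on the horizontal line y = q (0 <= q <= n)."""

    def __init__(self, n):
        self.n = n
        self.thr = max(1, math.isqrt(n))   # gentle/steep threshold ~ sqrt(n)
        self.gentle = defaultdict(int)     # (|k|, b mod |k|) -> multiplicity
        self.steep = [0] * (n + 1)         # per-q hit counts for steep lines

    def _apply(self, k, b, delta):
        if k != 0 and abs(k) <= self.thr:  # gentle: few distinct slopes
            k = abs(k)
            self.gentle[(k, b % k)] += delta
        elif k == 0:                       # slope 0 hits exactly one q
            if 0 <= b <= self.n:
                self.steep[b] += delta
        else:                              # steep: few reachable q in [0, n]
            k = abs(k)
            for q in range(b % k, self.n + 1, k):
                self.steep[q] += delta

    def add(self, k, b):
        self._apply(k, b, +1)

    def remove(self, k, b):
        self._apply(k, b, -1)

    def query(self, q):
        # O(sqrt(n)): one array read for steep lines, one lookup per gentle slope
        total = self.steep[q]
        for k in range(1, self.thr + 1):
            total += self.gentle.get((k, q % k), 0)
        return total

ls = LineSet(10)
ls.add(2, 1)   # gentle
ls.add(7, 3)   # steep: hits q = 3 and q = 10
ls.add(0, 4)   # slope 0: hits q = 4 only
print(ls.query(3))  # → 2
```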
Intuition for Absorption and Distribution Rules (of Replacement in Propositional Logic) | I abbreviate foods as follows in my explanation, where $P :=$ I have a burger, $Q :=$ I have fish, and $R :=$ I have chips. I'll just explain the $\Rightarrow$ direction of each equivalence for now. In each case, keep in mind that "or" in math means inclusive or, so there is no need to ever say the phrase "or both."
$\huge{1.}$ If I have a burger with (ketchup or mustard) then either I have a burger with ketchup, or I have a burger with mustard. My burger must have at least one of the two condiments on it.
$\huge{2.}$ If I have a burger or (fish and chips) then statements
(a) "I have a burger or fish" $\qquad \qquad $ and $\qquad \qquad$ (b) "I have a burger or chips"
must be true. To see this, let's break $(2)$ down by cases.
Case 1: I have a burger. Then (a) and (b) are both true because of the burger, regardless of the fish and chips.
Case 2: I have fish and chips. Again (a) and (b) are both true because of the fish and the chips respectively, regardless of the burger.
Alternatively, note that from "I have a burger or (fish and chips)" we can conclude "I have a burger or fish"; this just weakens the second disjunct (fish and chips) by forgetting about the chips. Formally, $P \vee (Q \wedge R) \implies P \vee Q$. Likewise we can conclude "I have a burger or chips", while forgetting about the fish. So we can conclude the conjunction: "(I have a burger or fish) and (I have a burger or chips)."
$\huge{3.}$ If I have a burger or (a burger with cheese) then in any case I must have a burger. (I can't say for certain whether it has cheese on it, though.)
$\huge{4.}$ If I have a burger and (a burger or fish) then I must have a burger. (The fish is just a red herring.)
This is not a complete explanation, so if something still doesn't make sense, please ask. |
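If the food analogies don't convince you, an exhaustive truth-table check settles all four equivalences at once:

```python
from itertools import product

# Check the two distribution laws and two absorption laws on all 8 rows.
for P, Q, R in product([False, True], repeat=3):
    assert (P and (Q or R)) == ((P and Q) or (P and R))  # 1. distribution
    assert (P or (Q and R)) == ((P or Q) and (P or R))   # 2. distribution
    assert (P or (P and Q)) == P                         # 3. absorption
    assert (P and (P or Q)) == P                         # 4. absorption
print("all 8 rows check out")
```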
Existential arithmetical formula independent of ZFC | No, any true existential sentence is provable even in fairly weak theories such as Robinson Arithmetic. So see why, take a true existential sentence $(\exists x) \phi(x)$, where $\phi$ is quantifier free. If such a sentence is true, there is an $n$ such that $\phi(n)$ is true. By induction on the construction of quantifier free sentences, one can easily show that Robinson Arithmetic proves $\phi(n)$. Thus Robinson Arithmetic proves $(\exists x) \phi(x)$. |
Dynamic optimisation / optimal control where control is a function of state | If the control is subject to dynamics then it can be treated as a state. Simply include a brand new control, for instance $v(t)$, and write the new dynamics as:
$$
\dot{x}_a =
\begin{bmatrix}
\dot{x} \\
\dot{u}
\end{bmatrix} =
\begin{bmatrix}
f(x, u, t) \\
v
\end{bmatrix} =
f_a(x_a, v, t)
$$
with $v \geq 0$.
For $u$ to be a function of $x$ is a matter of including a path constraint of the form:
$$
g_a(x_a,t) = u - g(x,t) = 0
$$
Similarly, if $x$ is a function of $u$ the path constraint is:
$$
g_a(x_a,t) = x - g(u,t) = 0
$$
If $u$ is a function of $x$, then $x$ cannot be a function of $u$! If both constraints were imposed and they are linearly independent, then the problem is infeasible, as no solution can satisfy the two constraints simultaneously.
Finally, if $x$ is not subject to dynamics, then $x$ is not a state! In this case simply switch the names of the variables in your second problem statement. |
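A toy sketch of the state augmentation (the dynamics $f$ here is an arbitrary example of mine, not from the question):

```python
def f(x, u, t):
    # example original dynamics: scalar system x' = -x + u
    return -x + u

def f_aug(xa, v, t):
    """Augmented dynamics: state xa = (x, u), new control v = u'."""
    x, u = xa
    return (f(x, u, t), v)

print(f_aug((1.0, 2.0), 0.5, 0.0))  # → (1.0, 0.5)
```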
Evaluating $ \int_0^{\infty}\frac{v}{\sqrt{v + c}}e^{-\frac{y^2}{2(v + c)} - \frac{(u-v)^2}{u^2v}}dv$ | For this integral, the best result seems to be a Taylor series in $c$ with coefficients in closed form.
In the typical case $c=1$, even specializing to the easiest case of $u=1$, $y=0$, we get the integrals $\int_0^\infty v^{\pm1}(v+1)^{-1/2}e^{-v-1/v}dv$,
for which neither Mathematica nor Gradshteyn and Ryzhik has any answer.
However, there is an explicit expression in the limit case $c=0$, which includes Gradshteyn and Ryzhik 3.471.15. Setting $z=\sqrt{4+2y^2}/u$, Mathematica gives the two integrals as:
$$\frac{u^3(1+z)}{2}\sqrt{π}\,e^{\,z(\sqrt{2}−1)} \text{ and }
\frac{2}{zu}\sqrt{π}\,e^{\,z(\sqrt{2}−1)}.$$
When we expand the integrands above as power series in $c$, the integrals of each term have similar closed-form expressions. |
Green Function in $\mathbb{R^3}$ (bounded) | The function $\frac{1}{4\pi |M-M_0|}$ satisfies the PDE and has the kind of singularity we want, so all is left is to enforce the boundary condition (which is the Dirichlet condition $G=0$). In a half-space this is done using the fact that an odd function of one variable is necessarily $0$ at the origin. So, we create an odd function (with respect to $x_3$) by taking
$$\frac{1}{4\pi |M-M_0|}-\frac{1}{4\pi |M-M_0'|} \tag1$$
where prime means reflection in the $x_3=0$ plane. Now we'd like to apply reflection to (1), this time about the plane $x_3=L$. This creates two more terms and unfortunately destroys the symmetry about $x_3=0$. We can restore the symmetry about $x_3=0$ by reflecting there again... This process leads to an infinite series: see xkcd555.
To make the long story short, observe the following: an odd function of $x_3$ which is periodic with period $2L$ will necessarily be zero at both $x_3=0$ and $x_3=L$. (Why?) So, all we need to do is to create a periodic function out of (1). Like this:
$$\sum_{n\in\mathbb Z}\left(\frac{1}{4\pi |M+2Ln e_3-M_0|}-\frac{1}{4\pi |M+2Ln e_3-M_0'|} \right)\tag2$$
where $e_3$ is the basis vector for $x_3$ axis. (This is what user8268 suggested in a comment.) |
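A quick numerical check (truncating the sum symmetrically at $|n| \le N$; the parameters below are arbitrary) that the series (2) vanishes on both boundary planes $x_3 = 0$ and $x_3 = L$:

```python
import math

def g_series(M, M0, L, N=2000):
    """Symmetric truncation of series (2): sum over n in [-N, N]."""
    (x1, x2, x3), (y1, y2, y3) = M, M0
    rho2 = (x1 - y1) ** 2 + (x2 - y2) ** 2
    s = 0.0
    for n in range(-N, N + 1):
        d  = math.sqrt(rho2 + (x3 + 2 * L * n - y3) ** 2)  # source term
        dp = math.sqrt(rho2 + (x3 + 2 * L * n + y3) ** 2)  # reflected term
        s += (1 / d - 1 / dp) / (4 * math.pi)
    return s

L_, M0 = 1.0, (0.2, -0.1, 0.3)
print(abs(g_series((0.5, 0.4, 0.0), M0, L_)))  # cancels exactly on x3 = 0
print(abs(g_series((0.5, 0.4, L_), M0, L_)))   # ~ 0 on x3 = L (truncation error)
```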
The difference between Taylor and Laurent expansions for Holomorphic functions | I'm not quite sure what's going on in your first statement, with a set $U$ and closed balls inside $U$. All that matters is that $f$ be holomorphic on an open disk centered at the point in question - this is the disk on which the power series representation of $f$ will converge. Maybe you are worried about the kind of convergence? For both Taylor and Laurent series of holomorphic functions, the convergence is uniform on compact subsets of the disk/annulus.
Laurent series exist when $f$ is holomorphic on the annulus. It does not have to be holomorphic on the disk removed from the interior, and if I remember correctly, that removed disk could even be a point, i.e. $r=0$ is okay, and that $R = \infty$ is okay also. |
If $\frac{a+1}{b}+\frac{b}{a}$ is an integer then it is $3$. | (The following answer is quite similar to the one given by @math110, but I
realized that only after arriving at my solution independently.
My formulation and the minimality condition is slightly different, so that I hope that it deserves
a separate answer.)
For fixed $k \in \mathbb N$, define the set of all solutions to
$\frac{a+1}{b}+\frac{b}{a} = k$ as
$$
S := \{ (a, b) \in \mathbb N^2 \mid a^2 + a + b^2 = kab \} \, .
$$
We want to prove that if $S$ is not empty then $k = 3$.
The proof is based on the following observations (similar to the
"Vieta Root Jumping" method):
(i) If $(a, b) \in S$ then $(b^2/a, b) \in S$.
(ii) If $(a, b) \in S$ then $(a, (a^2+a)/b) \in S$.
Proof of (i): $a$ is a solution of the quadratic equation
$$
P(x) = x^2 - (kb-1)x + b^2 = 0 \, .
$$
It follows from Vieta's formulas that $a' = kb-1 -a \in \mathbb Z$ is the
other solution, and $aa' = b^2$, so that actually $a' \in \mathbb N$.
Therefore $(a', b) \in S$.
The proof of (ii) is the same, now considering $b$ as a solution of the quadratic equation
$$
Q(y) = y^2 - (ka) y + (a^2 + a) = 0 \, .
$$
Finally we prove: If $S$ is not empty then $k = 3$.
The set $\{ a + b \mid (a, b) \in S \}$ is a non-empty set of positive
integers and therefore has a minimal element. Let $(a, b) \in S$ be such that $a + b$ is minimal.
Case 1: $a > b$. Then $a' = b^2/a < b < a$. From (i) we know that $(a', b)\in S$.
But $a' + b < a + b$ in contradiction to the minimality, so this case
cannot happen.
Case 2: $a < b$. Then $b' = (a^2+a)/b \le (a^2+a)/(a+1) = a < b$.
From (ii) we know that $(a, b') \in S$.
But $a + b' < a + b$, again in contradiction to the minimality, so this case
cannot happen either.
That leaves us with Case 3: $a = b$. From
$$
k = \frac{a+1}{b}+\frac{b}{a} = 2 + \frac 1b
$$
it follows that $b = 1$ and therefore $k = 3$ and we are done.
Remark: The proof has shown that if we start with an arbitrary solution $(a, b) \in S$
and apply the transformations (i), (ii) alternatingly, then we get a
sequence of solutions
$$
(a, b) = (a_0, b_0), (a_1, b_1), \ldots (a_{n-1}, b_{n-1}), (a_n, b_n) = (1,1)
$$
where alternatingly $a_i < b_i$ or $a_i > b_i$ as long as $i < n$.
Since the transformations (i) and (ii) are self-inverse,
we can reverse the process to get all solutions iteratively
by starting with $(1,1)$ and then applying (i) and (ii) alternatingly.
This gives the solutions
$$
(1, 1), (1, 2), (4, 2), (4, 10), (25, 10), (25, 65), \ldots
$$ |
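The reversed process is easy to run; this snippet (names mine) regenerates the solution list above and verifies the defining equation with $k = 3$:

```python
def solutions(count):
    """Solutions (a, b) of a^2 + a + b^2 = 3ab, generated from (1, 1)
    by alternating the two self-inverse Vieta jumps (ii) and (i)."""
    a, b = 1, 1
    out = [(a, b)]
    for i in range(count - 1):
        if i % 2 == 0:
            b = 3 * a - b        # (ii): other root of Q(y) = y^2 - 3ay + a^2 + a
        else:
            a = 3 * b - 1 - a    # (i):  other root of P(x) = x^2 - (3b-1)x + b^2
        out.append((a, b))
    return out

sols = solutions(6)
print(sols)  # → [(1, 1), (1, 2), (4, 2), (4, 10), (25, 10), (25, 65)]
assert all(a * a + a + b * b == 3 * a * b for a, b in sols)
```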
Some steps in Jech's presentation of Solovay-Tennenbaum Theorem | Disclaimer: This is a partial answer, as I don't quite see how to get equality in your first question.
You can apply the Mixing Lemma (Lemma 14.18 in Jech) with the antichain $\{b,-b\}$ to find $\dot Q_\alpha$ such that $\|\dot Q=\dot Q_\alpha\|\ge b$ and $\|\dot Q_\alpha=\{1\}\|\ge-b$. Note that this is sufficient to conclude that $\|\dot Q_\alpha \text{ has the ccc}\|=1$, which is what is needed for the proof to go through. For more information on the Mixing Lemma, see Bell's Set Theory: Boolean-Valued Models and Independence Proofs.
For your second question, let $B\supset P$ be the completion of $P$. For each $b\in B$, let $A_b\subset \{p\in P: p\le b\}$ be a maximal antichain (in the poset $\{p\in P: p\le b\})$. This antichain will be maximal in the poset $\{c\in B^+:c\le b\}$ by the density of $P$. Because $A_b\subset P$ and $A_b$ is an antichain in $P$ (because $P$ is dense in $B$), we have that $A_b$ is countable. The claim is that $A_b=A_c\to b=c$.
Suppose $A_b=A_c$ but $b\not\le c$. Put $d=b\cdot (-c)$, so that $d\le b$. By the maximality of $A_b$, it follows that $d\cdot a\neq 0$ for some $a\in A_b$. As $A_b=A_c$, we get that $a\in A_c$, so $a\le c$, so $d\cdot a\le d\cdot c=0$, contradiction. Thus, $b\ge c$. By symmetry, $b=c$.
Applying the above to each $\|\xi\in\dot X\|$, we get a countable antichain $A(\xi)$ for each $\xi<\lambda$. Thus, $A:=\bigcup_{\xi<\lambda}A(\xi)$ is a set of size at most $\lambda$. Since $\kappa>\lambda$ is regular, the set
$$
\{\max \text{spt}(p):p\in A\}
$$
is bounded in $\kappa$, and we're done.
I personally find the proof without Boolean algebras to be easier to follow. In particular, Kunen's presentation is very readable and pays close attention to the details, as it's the first iterated forcing proof done in the book (Theorem V.4.1 in the new edition). Another option for the poset-based approach is Section 3 of Baumgartner's Iterated Forcing.
If you want to see an alternate exposition using Boolean algebras, you can find it in chapter 6 of Bell's text, although I haven't read that one (I just know it's there). |
Fourier analysis questions | Once question a) is done, use (sesqui)linearity of the involved operators to do this when $f=T_ae^{-\pi x^2}$ and $\phi=T_be^{-\pi x^2}$.
For question a), what we have to show is that if $f\in L^2$ and for each $c$, we have $\langle f,T_ce^{-\pi x^2}\rangle=0$ then $f=0$.
To do that, use Plancherel identity and $\widehat{T_c(x\mapsto e^{-\pi x^2})}=e^{icx}e^{-\pi x^2}$. The function $x\mapsto \widehat f(x)e^{-\pi x^2}$ is integrable, and its Fourier transform is $0$, hence $\widehat f=0$ and $f=0$. |
$\lambda-z-e^{-z}=0$ has one solution in the right half plane | A hint.
If $\operatorname{Re} z > 0$ and $\lambda - z - e^{-z} = 0$ then
$$
|\lambda - z| = e^{-\operatorname{Re} z} < 1.
$$
In other words, if the equation has any solutions in the right half-plane then they lie in the open disc $|z-\lambda|<1$. |
Exponential decrease of amplitude with time | The graph is below. What doesn't look right about it? The peaks decrease exponentially like you want. As the period is $2$, each peak is $0.64$ of the one before of the same sign. |
Regarding proving a series result from Tom M Apostol Modular functions and Dirichlet series in number theory | For part (a) you need another function $H(x) $ defined by $$H(x) =\sum_{n=1}^{\infty}\frac{n^5x^n}{1+x^n}$$ and you should note that $$G(x) - H(x)=2G(x^2)$$ and $$H(x) =F(x) +32H(x^2)$$ For part (b) you need some idea about elliptic function theory.
Your function $G$ is related to Ramanujan's function $R(q) $ via $$R(q) =1-504G(q)$$ If $k$ is the elliptic modulus corresponding to nome $q$ and $K$ is the corresponding complete elliptic integral of first kind then
\begin{align}
R(q)&=\left(\frac{2K}{\pi}\right) ^6(1+k^2)(1-34k^2+k^4)\notag\\
R(q^2)&=\left(\frac{2K}{\pi}\right) ^6(1+k^2)(1-2k^2)\left(1-\frac{k^2}{2}\right)\notag\\
R(q^4)&=\left(\frac{2K}{\pi}\right) ^6\left(1-\frac{k^2}{2}\right)\left(1-k^2-\frac{k^4}{32}\right)\notag
\end{align}
If $q=e^{-\pi} $ then $k^2=1/2$ and $$R(q) =-\left(\frac{2K} {\pi} \right) ^6\cdot\frac{3}{2}\cdot\frac{63}{4},R(q^2)=0,64R(q^4)=\left(\frac{2K}{\pi}\right)^6\cdot\frac{3}{2}\cdot\frac{63}{4}$$ and therefore $$R(q) - 34R(q^2)+64R(q^4)=0$$ or $$31-504\{G(q)-34G(q^2)+64G(q^4)\}=0$$ It follows that $$F(q) =\frac{31}{504}$$ |
Linear Algebra Hoffman Kunze Chapter 3 example 5! | $Tg$ is not a class of functions of a specific form; we can evaluate $Tg$ explicitly:
$$(Tg)(x) = \int_0^x 0 \, dt = 0 \Big|_0^x = 0 - 0 = 0$$
Keep in mind that this is a definite integral, so there is not an arbitrary constant that needs to be added. |
Can we abstract over kinds in $\lambda\underline{\omega}$? | No, it isn't. The only terms of sort $\square$ in $\lambda\underline{\omega}$ are $*$ and $t_1 \to t_2$ where $t_1$ and $t_2$ are terms of sort $\square$. $\lambda k:\square.k$ is not a term of sort $\square$. In fact, you can easily formulate $\lambda\underline{\omega}$ without even talking about a sort $\square$.
It's trivial to add such terms if you want. Probably the easiest system in which to consider such a thing is a Pure Type System, aka generalized type systems, also introduced by Barendregt (in this paper, I believe). You could allow $(\lambda k:\square. k \to k)$ by having sorts $\{*,\square,\square_2\}$, axioms $\{(*:\square),(\square:\square_2)\}$, and rules $\{(*,*),(\square,\square),(\square_2,\square_2)\}$. You can continue on in the obvious manner for higher and higher towers. You can go off in other directions too, not just a vertical tower. The generalized type systems paper referenced above gives examples of systems with a variety of other arrangements of sorts. You could also consider the "settings" described in Bart Jacobs' PhD thesis "Categorical Type Theory" which provides a bit more flexibility than Pure Type Systems.
Question about an inequality with sequences | It is true.
For each $m \in \mathbb N$, we have $a_m \le \|a_n\|_{\infty}$ (by definition of $\sup$). So, $\lim_{m \to \infty} a_m \le \|a_n\|_{\infty}$. |