title | upvoted_answer |
---|---|
how the Snake Lemma has been applied here? | Note that in this case $f=\operatorname{id}_{F(A)}$ and $h=\operatorname{id}_{F(B)}$. Hence
$$\ker f=\ker h=0,\ \operatorname{coker}f=\operatorname{coker}h=0$$
This gives exact sequences
$$\ker f=0\rightarrow\ker g\rightarrow 0=\ker h,\ \operatorname{coker} f=0\rightarrow\operatorname{coker} g\rightarrow 0=\operatorname{coker} h$$
But it's a standard fact that $0\rightarrow M\rightarrow 0$ being exact implies that $M=0$. Hence $g$ is an isomorphism as well, since we have $\ker g=\operatorname{coker}g=0$. So the full strength of the snake lemma is not needed here (as these kernel and cokernel sequences are rather trivial to construct), but it is an important byproduct. |
How to evaluate GF(256) element | GF$(256)$ is small enough that you should construct an antilog table for it and save it for later reference rather than compute the polynomial form of $\alpha^{32}$ or $\alpha^{100}$ on the fly each time you need it. The
computer version of the antilog table is an array that stores the polynomial forms for $1 (= \alpha^0), \alpha, \alpha^2, \cdots, \alpha^{254}$ in locations $0, 1, 2, \cdots, 254$. For human use, the table
is constructed with two columns and looks something like this
$$\begin{array}{r|l}
\hline\\
i & \alpha^i \text{ equals}\\
\hline\\
0 & 00000001\\
1 & 00000010\\
2 & 00000100\\
3 & 00001000\\
4 & 00010000\\
5 & 00100000\\
6 & 01000000\\
7 & 10000000\\
8 & 00011101\\
9 & 00111010\\
10 & 01110100\\
11 & 11101000\\
12 & 11001101\\
\vdots & \vdots\quad \vdots \quad \vdots\\
254 & 10001110\\
\hline
\end{array}$$
The $i$-th entry in the second column is the polynomial representation
of $\alpha^i$ in abbreviated format. For example, $\alpha^8$ is stated to be equal to $00011101$ which is shorthand for $\alpha^4+\alpha^3+\alpha^2+1$.
The entry for $\alpha^i$ is obtained by shifting the entry
immediately above by one place to the left (inserting a $0$ on the
right) and if there is an $\alpha^8$ term thus formed, removing it
and adding $\alpha^4+\alpha^3+\alpha^2+1$ (i.e. XORing $00011101$)
into the
rightmost $8$ bits. This process is easy to mechanize to produce
the antilog table by computer rather than by hand (which can be tedious
and mistake-prone).
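For concreteness, here is a minimal Python sketch of that mechanized process, assuming the reduction rule read off from the table's row for $\alpha^8$ (i.e. the primitive polynomial $x^8+x^4+x^3+x^2+1$ used in Reed–Solomon coding):

    # Build the antilog table: location i holds the 8-bit polynomial form of alpha^i.
    def antilog_table():
        table = [0] * 255
        value = 1                      # alpha^0 = 00000001
        for i in range(255):
            table[i] = value
            value <<= 1                # multiply by alpha: shift left one place
            if value & 0x100:          # an alpha^8 term was formed...
                value ^= 0x11D         # ...remove it and XOR 00011101 into the low bits
        return table

    table = antilog_table()
    assert table[8] == 0b00011101      # matches the row for alpha^8
    assert table[254] == 0b10001110    # matches the last row shown above

The corresponding log table (exponent as a function of the polynomial form) is just the inverse array, which is what you need to multiply field elements by adding exponents mod $255$. |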
Probability of an edge in directed random configuration graph | We can choose a uniformly random matching in any order. So let's choose the $k_i$ edges out of node $i$ one at a time and see the probability none of them go to node $l$.
The first edge we choose has probability $\frac{j_l}{m}$ of going to $l$. If it doesn't, then the second edge has probability $\frac{j_l}{m-1}$. If that also doesn't happen, then the third edge has probability $\frac{j_l}{m-2}$, and so on. So the overall probability that none of the edges go to node $l$ is
$$
1-p_{il} = \left(1 - \frac{j_l}{m}\right)\left(1 - \frac{j_l}{m-1}\right) \dotsb \left(1 - \frac{j_l}{m-k_i+1}\right)
$$
and the probability you want is the complement of this one.
Asymptotically, however, assuming $k_i$ and $j_l$ are small relative to $m$, the answer is in fact close to $\frac{k_i \cdot j_l}{m}$ which is what you have, except that $m$ and not $2m-1$ is the number of options for the first edge. (Unlike the undirected configuration model, here there are $m$ out-stubs and $m$ in-stubs, which we distinguish.)
Additionally, because there are $k_i$ edges, each with probability $\frac{j_l}{m}$ of going to $l$, the expected number of edges from $i$ to $l$ is $\frac{k_i j_l}{m}$.
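As a quick sanity check of both the exact product and the approximation, here is a small Python sketch with made-up degrees (the variable names are mine):

    from math import prod

    def p_edge(k_i, j_l, m):
        # exact P(at least one of the k_i out-stubs of i pairs with an in-stub of l)
        return 1 - prod(1 - j_l / (m - r) for r in range(k_i))

    k_i, j_l, m = 3, 4, 1000
    print(p_edge(k_i, j_l, m))    # ~0.01196, the exact probability
    print(k_i * j_l / m)          # 0.012, the asymptotic approximation

For $k_i, j_l \ll m$ the two agree closely, as claimed. |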
Determinant Calculation Increasing entries | $$A = \left|\begin{array}{ccccccc}1 & 2 & 3 & \cdots & n-2 & n-1 & n \\ 2 & 3 & 4 & \cdots & n-1 & n & n \\ 3 & 4 & 5 & \cdots & n & n & n \\ \vdots & \vdots & \vdots & \cdots & \vdots & \vdots & \vdots \\ n & n & n & \cdots & n & n & n\end{array}\right|$$
$$ \downarrow (R_i \rightarrow R_i-R_{i-1},\ i = n, n-1, \ldots, 2)$$
$$A = \left|\begin{array}{ccccccc}1 & 2 & 3 & \cdots & n-2 & n-1 & n \\ 1 & 1 & 1 & \cdots & 1 & 1 & 0 \\ 1 & 1 & 1 & \cdots & 1 & 0 & 0\\ \vdots & \vdots & \vdots & \cdots & \vdots & \vdots & \vdots \\ 1 & 0 & 0 & \cdots & 0 & 0 & 0\end{array}\right|$$
$$ \downarrow (C_i \rightarrow C_i-C_{i+1},\ i = 1, 2, \ldots, n-1)$$
$$A = \left|\begin{array}{ccccccc}-1 & -1 & -1 & \cdots & -1 & -1 & n \\ 0 & 0 & 0 & \cdots & 0 & 1 & 0 \\ 0 & 0 & 0 & \cdots & 1 & 0 & 0\\ \vdots & \vdots & \vdots & \cdots & \vdots & \vdots & \vdots \\ 1 & 0 & 0 & \cdots & 0 & 0 & 0\end{array}\right|$$
Expanding along the last column, whose only nonzero entry is the $n$ in row $1$,
$=(-1)^{n+1}\,n\,{\left|\begin{array}{cccccc} 0 & 0 & \cdots & 0 & 0 & 1 \\ 0 & 0 & \cdots & 0 & 1 & 0 \\ 0 & 0 & \cdots & 1 & 0 & 0 \\ \vdots & \vdots & \cdots & \vdots & \vdots & \vdots \\ 1 & 0 & \cdots & 0 & 0 & 0 \end{array}\right|}=(-1)^{n+1}(n)\sigma(n)$
where $\sigma(n)=(-1)^{(n-1)(n-2)/2}$ is the determinant of the $(n-1)\times(n-1)$ anti-identity matrix, i.e. $\sigma(n)=1$ if $n\equiv 1,2 \pmod 4$ and $\sigma(n)=-1$ if $n\equiv 3,0 \pmod 4$.
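For reassurance, a quick numerical check (a numpy sketch, with the matrix built from $a_{ij}=\min(i+j-1,n)$):

    import numpy as np

    def sigma(n):
        # determinant of the (n-1) x (n-1) anti-identity matrix
        return (-1) ** ((n - 1) * (n - 2) // 2)

    for n in range(1, 9):
        A = np.fromfunction(lambda i, j: np.minimum(i + j + 1, n), (n, n))
        assert abs(np.linalg.det(A) - (-1) ** (n + 1) * n * sigma(n)) < 1e-6

The assertion passes for $n$ up to $8$ (the lambda uses $0$-based indices, hence $i+j+1$). |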
Absolute continuity for positive measures | You can do this by contradiction, but your assumption should be that, there exists $\varepsilon>0$ such that, for all $\delta>0$, there exists $E_{\delta}$ with $\nu(E_{\delta})\geq\varepsilon$, and $\mu(E_{\delta})<\delta$.
Then, for all $n\in\mathbb N$, there exists $E_n$ such that $\nu(E_n)\geq\varepsilon$ and $\mu(E_n)<\frac{1}{n^2}$. Since $\displaystyle\sum_{n\in\mathbb N}\mu(E_n)<\infty$, the Borel-Cantelli lemma shows that $$\mu\left(\limsup_{n\to\infty}E_n\right)=0.$$ Absolute continuity now shows that $\displaystyle\nu\left(\limsup_{n\to\infty}E_n\right)=0$. But, $$\limsup_{n\to\infty}E_n=\bigcap_{n\in\mathbb N}\bigcup_{m\geq n}E_m,$$ and also, for all $n\in\mathbb N$, $$\nu\left(\bigcup_{m\geq n}E_m\right)\geq\nu(E_n)\geq\varepsilon,$$ hence $$\nu\left(\limsup_{n\to\infty}E_n\right)\geq\varepsilon,$$ which is a contradiction. |
Reflexivity: How can something be related to itself? | In mathematics the term “relation” is defined for mathematical purposes. One could have named it differently. You must never conflate the mathematical notion a word carries with the non-mathematical notions it may also have. You shouldn't expect any relationship between what is mathematically called a “relation” and other everyday notions related to “relation.” (Pun intended.)
To give an example: for Nietzsche there is no longer any “absolute value,” whereas mathematicians hardly can live without one. |
Can we show $\text E\left[X\mid\mathcal F_\sigma\right]=\text E\left[X\mid\mathcal F_{\sigma\:\wedge\:\tau}\right]$ on $\{\sigma\le\tau\}$? | Proof of $\{\sigma \leq \tau\} \in \mathcal F_{\sigma \wedge \tau}$:
We have to show that $\{\sigma \leq \tau\} \cap \{\sigma \wedge \tau \leq t\} \in \mathcal F_t$ for all $t$. Write this as $\{\tau >t, \sigma \leq t\} \cup \{\sigma \leq t, \tau \leq t, \sigma \leq \tau\}$ (by considering the cases $\tau \leq t$ and $\tau >t$). Now $\{\sigma \leq t, \tau \leq t, \sigma \leq \tau\}=\{\sigma \leq t, \tau \leq t\} \setminus \{\sigma \leq t, \tau \leq t, \sigma > \tau\}$. Finally write $\{\sigma \leq t, \tau \leq t, \sigma > \tau\}$ as $\bigcup_{r \in \mathbb Q, r <t} \{\sigma \leq t, \tau \leq t, \sigma > r>\tau\}$. Since $\{\sigma \leq t, \tau \leq t, \sigma > r>\tau\}$ is the intersection of $\{\sigma \leq t\}, \{\tau \leq t\}, \{\sigma > r\}$ and $\{\tau <r\}$ we are done. [The last event is the union over $n$ of $\tau \leq r-\frac 1 n$]. |
Possible degree of a cover $p \colon S^{2n} \to X$ | If you already know that $\mathbb Z_2$ is the only nontrivial group that can act freely on $S^{2n}$ then you are done. Just like you said, the group of deck transformations acts freely on the covering space. Also since $S^{2n}$ is simply connected, the group of deck transformations is isomorphic to $\pi_1(X)$. (See for example Prop 1.39 of Hatcher's book Algebraic Topology.) So that means $\pi_1(X)$ is either trivial or $\mathbb Z_2$, corresponding to either a degree $1$ or degree $2$ cover. |
Solutions to Euler's Equations and Potential Flow | By Helmholtz' theorem, any continuously differentiable vector field that vanishes at infinity can be decomposed into irrotational and solenoidal parts of the form
$$\mathbf{u} = \nabla \phi + \nabla \times \mathbf{a}.$$
In potential flow we assume that the velocity field is irrotational, whence
$$\mathbf{u} = \nabla \phi.
$$
Applying the continuity condition we have
$$\nabla \cdot \mathbf{u} = \nabla^2 \phi = 0.$$
With suitable boundary conditions there always exists a solution of Laplace's equation for the velocity potential $\phi$ and hence for the velocity field. The Euler equation ensures conservation of momentum and closes the system of equations, so we can always solve for the pressure field.
The Euler equation applies to the general class of inviscid flows (incompressible or compressible) where incompressible potential flow is a special case. With respect to (b), there is no steady, incompressible, inviscid flow (irrotational or rotational) that does not satisfy the Euler equation. Only if the velocity field solves the viscous Navier-Stokes equations will it fail to satisfy the Euler equation.
Also, there is such a thing as unsteady potential flow. |
Integral over a simplex | Let's suppose that we are aware of the integral
$$\int_{C_k} \prod_{i=1}^k x_i^{\alpha_i-1}\, dx_i = \frac{\prod_{i=1}^k \Gamma(\alpha_i)}{\Gamma\left(\sum_{i=1}^k \alpha_i\right)} \tag{1}
$$
(See here for a proof.)
It turns out that the only thing we need is to rewrite the double product properly:
$$
P(k)=\prod_{i=1}^k\prod_{j=i}^k (x_i x_j)^{\alpha_{ij}-1} \text{ =?}
$$
Let's try the easy example $k=3$ (assuming the $\alpha_{ij}$ are symmetric); we get
$$
(x_1x_1)^{\alpha_{11}-1}(x_1x_2)^{\alpha_{12}-1}(x_1x_3)^{\alpha_{13}-1}(x_2x_2)^{\alpha_{22}-1}(x_2x_3)^{\alpha_{23}-1}(x_3x_3)^{\alpha_{33}-1}
$$
which equals
$$
x_1^{2\alpha_{11}+\alpha_{12}+\alpha_{13}-4}x_2^{2\alpha_{22}+\alpha_{12}+\alpha_{23}-4}x_3^{2\alpha_{33}+\alpha_{13}+\alpha_{23}-4}
$$
or
$$
P(3)=\prod_i^3 x_i^{2\alpha_{ii}+\sum_{j\neq i}^3\alpha_{ij}-4}
$$
It is obvious (or can be shown by induction) that one can generalize this to arbitrary $k$
$$
P(k)=\prod_{i=1}^k x_i^{\alpha_{ii}-1+\sum_{j=1}^k(\alpha_{ij}-1)}
$$
Plugging this expression into our integral (using $\prod_ib_i\prod_i a_i=\prod_i a_i b_i$) we see that it is given by
$$\int_{C_k} \prod_{i=1}^k dx_i\prod_{j=i}^k (x_i x_j)^{\alpha_{ij}-1}=\int_{C_k} \prod_{i=1}^k dx_ix_i^{\alpha_{ii}-1+\sum_{j=1}^k(\alpha_{ij}-1)}$$
which allows us to just use $(\textbf{1})$, replacing $\alpha_i$ by $\alpha_{ii}+\sum_{j=1}^k(\alpha_{ij}-1)$
$$
\int_{C_k} \prod_{i=1}^k dx_i\prod_{j=i}^k (x_i x_j)^{\alpha_{ij}-1}=\frac{\prod_{i=1}^k \Gamma\left(\alpha_{ii}+\sum_{j=1}^k(\alpha_{ij}-1)\right)}{\Gamma\left(\sum_{i=1}^k \left[\alpha_{ii}+\sum_{j=1}^k(\alpha_{ij}-1)\right]\right)}
$$
Edit:
It appears that in my original answer, I implicitly assumed that the $\alpha_{ij}$ are symmetric, $\alpha_{ij}=\alpha_{ji}$. If this is not the case, one has to modify the answer as follows:
Replace $\sum_{j\neq i}^k\alpha_{ij}$ by $\sum_{j\neq i}^k\left(\Theta(i-j)\alpha_{ij}+\Theta(j-i)\alpha_{ji}\right)$
where $\Theta(x)$ is Heaviside's step function.
Please note also that this approach is generalizable to integrals like
$$
\int_{C_k} \prod_{i=1}^k dx_i\prod_{1\le i_1\le i_2\le\cdots\le i_n\le k} (x_{i_1} x_{i_2}\cdots x_{i_n})^{\alpha_{i_1 i_2\cdots i_n}-1}
$$ |
Show that the subset of strictly monotonous functions in $C(I)$ is nowhere dense | This is based upon a common alternative characterization of convexity which is used to show that the pointwise limit of strictly convex functions is, at least, convex.
For a fixed function $f$ on an interval $I$, let's define
$$f^{[1]}(x,y)=\frac{f(x)-f(y)}{x-y}$$
for $x$ and $y$ distinct. Then you may verify that $f$ is strictly increasing if and only if
$$f^{[1]}(x,y)>0$$
for every $x$ and $y$ distinct. Now suppose that $\{f_n\}$ is strictly increasing for every $n$ and that this sequence converges pointwise to $f$ on $I$. Pick $x$ and $y$ distinct from $I$. Then
$$f^{[1]}(x,y)=\frac{f(x)-f(y)}{x-y}=\lim_{n\rightarrow\infty}\frac{f_n(x)-f_n(y)}{x-y}\geq 0\,,$$
which demonstrates that $f$ is (weakly) increasing. As a last remark, if $\{f_n\}$ converges in $C(I)$ (with the sup norm, I assume) then of course it converges pointwise to its limit function. |
Does there exist a $2×2$ matrix such that $A^2 \neq 0$ but $A^3=0$? | By the Cayley–Hamilton theorem, the minimal polynomial divides the characteristic polynomial. The characteristic polynomial of an $n\times n$ matrix has degree $n$, so the minimal polynomial cannot have degree $n+1$. Here $A^3=0$ with $A^2\neq 0$ would force the minimal polynomial to be $x^3$, of degree $3$, which cannot divide a characteristic polynomial of degree $2$. |
Help with understanding $V_\omega$ | $V_\omega$ consists of exactly the sets you can write down in finite space using only the symbols {, }, and ,.
It is in bijection with $\omega$, by the rule
$$f:\omega\to V_\omega \qquad f(n) = \{f(a_1),f(a_2),\ldots,f(a_{k_n})\}$$
where $n=2^{a_1}+2^{a_2}+\cdots+2^{a_{k_n}}$ and all the $a_i$s are different.
(This bijection provides the standard proof that Peano Arithmetic is equiconsistent with ZFC$-$Infinity).
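For what it's worth, the bijection is short enough to write down as a Python sketch (using frozensets for hereditarily finite sets):

    def f(n):
        # Ackermann's bijection: n = 2^{a_1} + ... + 2^{a_k}  ->  {f(a_1), ..., f(a_k)}
        return frozenset(f(a) for a in range(n.bit_length()) if n >> a & 1)

    print(f(0))    # frozenset(), i.e. the empty set
    print(f(3))    # {f(0), f(1)}, i.e. {emptyset, {emptyset}}

So, for example, $f(3)=\{\varnothing,\{\varnothing\}\}$. |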
The fair meat bazaar: Who has right? | Certainly a fair solution would be to pick $k$ numbers from $\{1,2,\ldots,2000\}$ and then
\begin{array}{c|c|c}
\text{number} & \text{book} & \text{person in the book} \\\hline
1-1000 & \mathrm{I} & n \\
1001-1600 & \mathrm{II} & n - 1000 \\
1601-2000 & \mathrm{III} & n - 1600
\end{array}
That is completely doable using pen and paper. If you don't have a way to pick a random number, throw a coin $11$ times: $2^{11} = 2048$ (if you happen to pick a number over $2000$, or a person picked previously, just try again; if you don't have a fair coin, throw the coin twice and interpret TH and HT as heads and tails respectively, disregarding any TT or HH as invalid throws).
As for whether your solution is fair: in general it is not. Assume there are to be two winners, and we would like to calculate the probability that some particular pair wins, which is $$\binom{2000}{2}^{-1} = \dfrac{2}{2000\cdot 1999}.$$
Observe that $1999$ is a prime number, so whatever calculation method we use, the denominator must have it as a factor. However, in your case the denominator can't have factors bigger than $1000$ (unless you put it there artificially), so the prime factor $1999$ never appears, and thus these numbers can't be equal.
I hope this helps $\ddot\smile$
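If you want to automate the coin procedure, here is a small Python sketch (the von Neumann trick for the unfair coin plus rejection for numbers over $2000$; all names are mine):

    import random

    def fair_bit(biased_flip):
        # von Neumann extractor: flip twice, keep the first flip of an unequal pair
        while True:
            a, b = biased_flip(), biased_flip()
            if a != b:
                return a

    def pick_person(already_chosen, flip=lambda: random.random() < 0.5):
        while True:
            n = 1 + sum(fair_bit(flip) << i for i in range(11))   # 1..2048
            if n <= 2000 and n not in already_chosen:
                return n

Repeated calls with the set of previous winners reproduce the pen-and-paper scheme exactly. |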
Finding out the minimum yield of a premium bond with a different redemption fee. ($F=100, r^{(2)}=10\%, i^{(2)}=8\%, C=110$) | By the premium/discount formula, which is
$ P = C + (Fr - iC)a_{n|i}$
you can see that, if $ Fr -iC >0 $, the bond is selling at premium so the earliest redemption date is the most favorable for the issuer. (Because he would like to stop repaying the premium via the coupon payments as soon as possible.)
For $ i^{(2)} = 0.08 $, the bond is effectively selling at premium (assuming $C = 110$, as in your computation).
The bond is callable from the $31$st coupon, and the price of the bond for $31 \le n \le 40$ is
$P = 110 + ( 5 - 4.4)a_{n|0.04} $
As you can see, the minimum price clearly occurs at $ n = 31 $.
For $n = 31$: $P = 120.55$.
For $n = 40$: $P = 121.88$.
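Those two prices are easy to reproduce; a minimal Python check of the premium/discount formula (values as above):

    def price(n, i=0.04, C=110.0, Fr=5.0):
        a_n = (1 - (1 + i) ** -n) / i      # annuity factor a_{n|i}
        return C + (Fr - i * C) * a_n

    print(price(31))   # 120.55...
    print(price(40))   # 121.87...

As expected, the minimum occurs at $n=31$. |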
Intuition of position vs time, and complex numbers | I think that's $i$ and $j$ as in the standard unit vectors in the $x$- and $y$-directions, respectively. Not as in complex numbers. The reason I think this is because the scenario involves finding a velocity vector from a position vector, and describing the position in terms of imaginary units $i$ and $j$ is extremely outlandish, if not completely nonsensical. I think this has nothing to do with complex numbers at all.
So we should really be saying $\mathbf i$ and $\mathbf j$ to indicate they're vectors (and therefore avoid this type of confusion).
Anyway, to differentiate a vector, simply differentiate its components individually.
$$
\mathbf r(t) = (2 + 3t)\mathbf i + (3-2t^2)\mathbf j
$$
Therefore
$$
\mathbf r'(t) = (2+3t)' \mathbf i + (3-2t^2)' \mathbf j = 3\mathbf i - 4t \mathbf j.
$$
Integrating works the same way. Integrate each component independently. And make sure to add an arbitrary constant vector (a vector with two arbitrary constant components) if your integral is indefinite, which in my experience is unusual for vectors. Unusual but not impossible, I guess. |
Order of a permutation, how to calculate | First you'll need to express $(123)(241)$ in terms of the product of disjoint cycles.
$(123)$ and $(241)$ are not disjoint cycles, as you note, since both share the elements $1, 2$.
To do so, you start from the right cycle, and compose with the left cycle. So, in the right-hand cycle, we have $1\mapsto 2$ and in the left-hand cycle, $2\mapsto 3$; hence $1\mapsto 3$. Now since $3\mapsto 3$ (right-hand) and $3\mapsto 1$ (left-hand), we have the cycle $(13)$.
In the end, you'll find $$\phi = (123)(241)=(13)(24)$$
Second, the order of a single cycle is its length; to find the order of a product of disjoint cycles, as is the case here, the order of $\phi$ is the $\operatorname {lcm}$ of the lengths of the cycles.
So we have the product of two $2$-cycles, and hence the order of $\phi$ is equal to $\operatorname {lcm}(2, 2) = 2$.
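If you want to check such computations mechanically, here is a small Python sketch (plain dictionaries for permutations; the helper names are mine):

    from math import lcm

    def cycle(c):
        # cycle notation -> mapping, e.g. (1,2,3) -> {1:2, 2:3, 3:1}
        return {c[i]: c[(i + 1) % len(c)] for i in range(len(c))}

    def compose(left, right):
        # apply `right` first, then `left`
        keys = set(left) | set(right)
        return {x: left.get(right.get(x, x), right.get(x, x)) for x in keys}

    def order(perm):
        lengths, seen = [], set()
        for x in perm:
            if x not in seen:
                n, y = 0, x
                while y not in seen:
                    seen.add(y); y = perm[y]; n += 1
                lengths.append(n)
        return lcm(*lengths)

    phi = compose(cycle((1, 2, 3)), cycle((2, 4, 1)))
    print(phi)          # maps 1->3, 3->1, 2->4, 4->2, i.e. (13)(24)
    print(order(phi))   # 2

This reproduces $\phi=(13)(24)$ and $\operatorname{lcm}(2,2)=2$. |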
Using this definition of ordinals, do I need foundation? | If you remove the axiom of foundation from ZF, then you cannot exclude the existence of, for instance, sets satisfying $x = \{x\}$. Such a set would be transitive, and have only transitive sets as elements, so it would fall under your definition of an ordinal. |
Questions about different formulations of the Taylor expansion terms | You can do something like that, though you need to be a bit careful with your notation. $\nabla f(a)^Tp$ no longer depends on $x$, so if you differentiate it again, you will just get the zero matrix. But I know what you mean: let's define $Jf(x)$ to be the Jacobian of $f$ at $x$ (a row vector, for $f:\mathbb{R}^n\to \mathbb{R}$). Then indeed you can write the second-order term as
$$\frac{1}{2}J\left[Jf(x)p\right](a)\,p.$$
Here the inside $Jf(x)p$ is again a function of $x$, and so can be differentiated again.
The proof is straightforward, in coordinates:
$$Jf(x)p = \sum_{i=1}^n \frac{\partial f}{\partial x_i}(x) p_i$$
$$J[Jf(x)p](a)p = \sum_{j=1}^n \left[\frac{\partial}{\partial x_j}\left(\sum_{i=1}^n \frac{\partial f}{\partial x_i}(x)p_i\right)\right]_{x=a}p_j = \sum_{i,j=1}^n \frac{\partial^2 f}{\partial x_i\partial x_j}(a)p_ip_j = p^T Hf(a)p$$
where we have used equality of mixed partials and the fact that $p$ is a constant that does not depend on the $x_i$. |
Quadratic formula -rocket height at given equation | STRONG HINT:
We must do $\frac{190}{60}$ which simplified equals $\frac{19}{6} \implies \frac{190}{19/6} = \frac{6 \times 190}{19} = 60$.
$$\therefore \frac{6(c^2 + 160c + 20)}{19} \ \ \text{determines the height of the rocket at 60m}$$
$$\begin{align} &= \frac{6c^2 + 960c + 120}{19} \\ &= \frac{6}{19}c^2 + \frac{960}{19}c + \frac{120}{19} \end{align}$$
Now use the formula:
$$x = \frac{-b \pm \sqrt {\Delta}}{2a}, \qquad \Delta = b^2 - 4ac \ \text{(the discriminant)}$$
And you will find the value for $c$. To calculate the time the rocket will take to travel $60$m high, we use the formula:
$$\text{Time} = \frac{\text{Distance}}{\text{Speed}}$$
$\because c =$ Time and $60 =$ Distance, you can calculate the speed. |
Why $99x^2 \equiv 1 \mod 5 \implies (-1)x^2 \equiv 1 \mod 5$ | Since we have $$99\equiv -1 \mod 5$$ this is the reason. |
Probability of getting split pill from bottle? | I have written a paper on this which will be published in the American Mathematical Monthly sometime in the next year or so. The title is "A drug-induced random walk."
The main theorem is this: Consider a bottle of $n$ pills. Every day, you remove a pill from the bottle at random (with each pill equally likely to be chosen). If it is a whole pill, you cut it in half, take half of the pill, and return the other half to the bottle. If it is a half pill, then you take it and nothing is returned to the bottle. At any time, let $x$ be the fraction of the original pills in the bottle that are still whole, and let $y$ be the fraction that are now half pills. ($x+y$ may be less than $1$, since some pills may have been used up completely.) Then the point $(x,y)$ executes a random walk in the plane, starting at the point $(1,0)$ (all pills whole) and ending at $(0,0)$ (no pills left). The theorem says that for large $n$, the random walk will approximately follow the curve $y = -x \ln x$. More precisely, the theorem says that for every $\epsilon > 0$, the probability that the walk stays within $\epsilon$ of the curve $y = -x \ln x$ approaches $1$ as $n$ approaches infinity.
The paper also answers the questions "What is the expected number of whole pills removed before the first half pill is removed?" and "What is the expected number of half pills removed after the last whole pill is removed?"
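The process is also easy to simulate; a short Python sketch illustrating the theorem (the parameters are mine):

    import random, math

    def pill_walk(n):
        whole, half, path = n, 0, []
        while whole + half > 0:
            if random.random() < whole / (whole + half):
                whole -= 1; half += 1      # drew a whole pill: take half, return half
            else:
                half -= 1                  # drew a half pill: take it
            path.append((whole / n, half / n))
        return path

    # At the moment x = 1/2, y should be close to -x ln x = 0.3466 for large n.
    for x, y in pill_walk(100000):
        if x == 0.5:
            print(y, -0.5 * math.log(0.5))
            break

For $n=100000$ the printed values typically agree to two or three decimal places. |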
Is every linear finite element space over a bounded domain a subspace of the sobolev space H^1? | I don't think this is the definition you want for $V_g^h$.
First of all, an unrelated technical point: the inclusion between $L^2(\Omega)$ and $H^1(\Omega)$ goes the other way: $H^1(\Omega) \subset L^2(\Omega)$. $H^1$ consists of the $L^2$ functions that have an $L^2$ weak derivative, so it is a strictly smaller space than $L^2$.
Second, $V_g^h$ is not even a vector space. The condition $v\big|_{\partial\Omega} = g$ is incompatible with a vector space structure unless $g = 0$.
Third, $V_g^h$ probably doesn't contain the functions you intend for it to contain. There are very few smooth functions that are piecewise linear over a triangulation: the only such functions are in fact globally affine. Smoothness implies that the restrictions of your functions to the triangles need to match up at the element boundaries in a smooth way, and if your functions are linear/affine on the triangles then this is an extremely restrictive condition. Also, it's not clear that the piecewise linear/affine condition is compatible with the boundary condition: the space could well be empty if $g$ is not itself piecewise affine. |
Real VS Complex for integrals: $\int_0^\infty \frac{dx}{1 + x^3}$ | Make the substitution $x = \frac{1}{t}$ and you get
$$ \int_{0}^{\infty} \frac{t}{1+t^3} \text{d}t$$
Write the one you want as
$$ \int_{0}^{\infty} \frac{1}{1+t^3} \text{d}t$$
Now you can add both and cancel that pesky $1+t$ factor.
btw, a straightforward approach using partial fractions also works.
You consider
$$F(x) = \int_{0}^{x} \frac{1}{1+t^3} \text{d}t$$
Using partial fractions you can find that (I used Wolfram Alpha, I admit)
$$F(x) = \frac{1}{6}\left(2\log(x+1) - \log(x^2 - x + 1) + 2\sqrt{3} \arctan\left(\frac{2x-1}{\sqrt{3}}\right)\right) + \frac{\pi}{6\sqrt{3}}$$
Now as $x \to \infty$, we have that $2\log(x+1) - \log(x^2 - x + 1) \to 0$.
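Either way, a quick numeric check of the value (a scipy one-liner; the closed form works out to $\frac{2\pi}{3\sqrt3}$):

    import numpy as np
    from scipy.integrate import quad

    val, err = quad(lambda t: 1 / (1 + t**3), 0, np.inf)
    print(val, 2 * np.pi / (3 * np.sqrt(3)))   # both ~1.2092

The two numbers agree to high precision. |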
A Probability Problem about the $Trypanosoma$ parasite | All three questions can be solved in a pretty similar fashion. The key to answering them is using the rule that, if $A$ and $B$ are independent events, then $P(A\cap B) = P(A)P(B)$ — in other words, multiply the probabilities of each event. So for example, the probability of selecting a parasite shorter than 20 micrometers is $0.01 + 0.34 = 0.35$. The probability of selecting two such parasites, then, is $(0.35)(0.35) = 0.1225$.
There is a twist in the final question. The second question dictates what order you select the parasites in. But the third question states simply that you pick two parasites. Therefore, you could pick the shorter parasite followed by the longer one or the longer one followed by the shorter one. If $A$ is the first event and $B$ the second, you're seeking $P(A\cup B)$. If $P(A\cap B) = 0$ — which is the case here; it's impossible to pick both the shorter parasite and the longer one first — then $P(A\cup B) = P(A) + P(B)$.
Does that make sense? |
Is it okay to say that a function that's non-continuous at a point is continuous on its domain? | Yes
An example illustrating this for an isolated point is $$f(x)=\sqrt{x^4-x^2}$$ which has domain $(-\infty,-1] \cup \{0\} \cup [1, \infty)$.
This $f(x)$ is continuous on its domain according to the definition of continuity, even at the isolated point $x=0$, as there are no other points near enough to $0$ to demonstrate discontinuity. |
Calculate the volume of $D=\{(x,y,z):x^2+y^2+z^2<1,ax+by+cz<d\}$ | I would not regard this as an algebraic question, but a geometrical one: $x^2+y^2+z^2<1$ is simply an open ball, $ax+by+cz<d$ is a half-space bounded by a plane. Either the plane intersects the ball or it doesn't.
In case they don't intersect, the result is either the volume of the unit ball, or it is zero.
In case they intersect, you can use the basic geometry formula for the volume of a sphere segment or sector.
By the way: who or what is Fubini? |
generalization of multisets | For fixed $k\in\Bbb N$ the expression
$$p(c,d)=\left(\!\!\binom{c+d}k\!\!\right)-\sum_{j=0}^k\left(\!\!\binom{c}j\!\!\right)\left(\!\!\binom{d}{k-j}\!\!\right)$$
is a polynomial in $c$ and $d$. If we fix $c\in\Bbb N$, it becomes a polynomial in $d$. Either this polynomial is identically $0$, or it has only finitely many zeroes. Since it is $0$ for each $d\in\Bbb N$, it must be identically $0$. Thus, $p(n,d)=0$ for each $n\in\Bbb N$ and $d\in\Bbb R$. But we can now hold $d$ fixed and view $p(c,d)$ as a polynomial in $c$, and by the same argument that polynomial must be identically $0$. Thus, $p(c,d)=0$ for all $c,d\in\Bbb R$.
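For the skeptical, the identity is easy to test numerically in Python, using $\left(\!\binom{n}{k}\!\right)=\binom{n+k-1}{k}$:

    from math import comb

    def multiset(n, k):
        # ((n k)) = C(n + k - 1, k), with the convention ((0 0)) = 1
        return comb(n + k - 1, k) if n + k > 0 else 1

    k = 4
    for c in range(6):
        for d in range(6):
            assert multiset(c + d, k) == sum(
                multiset(c, j) * multiset(d, k - j) for j in range(k + 1))

Of course this only re-checks the natural-number case on which the polynomial argument is built. |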
For any k-dim. subspace $ L$ of $T_p M$, can we find a sub manifold, say $R$, of $M$ containing $p$ s.t $T_p R = L$ | Yes, we can. Let $\varphi\colon\mathbb R^n\longrightarrow M$ be a chart of the manifold $M$ such that $\varphi(0)=p$. Then $D\varphi(0)^{-1}(L)$ is a $k$-dimensional subspace of the tangent space of $\mathbb R^n$ at $0$ (which is $\mathbb R^n$, of course). And in $\mathbb R^n$ it is clear that you can find a submanifold $N$ such that $0\in N$ and that its tangent space at $0$ is $D\varphi(0)^{-1}(L)$. So, take $R=\varphi(N)$. |
Infinite Probability | Suppose that two players, $A$ and $B$, have $a$ dollars and $b$ dollars respectively.
We can set up the following events:
$E_i$: player $B$ wins the total fortune $a+b$ starting with $i$ dollars
$F$ : player $B$ wins the first game
Let $p_i = \Bbb{P}(E_i)$. Therefore, $p_i$ is the probability that player $B$ wins the total fortune $a+b$ starting with $i$ dollars. (We will let $i = b$ eventually.) Note that $\{F,\bar{F}\}$ is a partition of the sample space. Let $p=\Bbb{P}(F)$ and $q=\Bbb{P}(\bar{F})=1-p$. Therefore, by the Law of Total Probability
$$p_i = \Bbb{P}(E_i|F)\Bbb{P}(F) + \Bbb{P}(E_i|\bar{F})\Bbb{P}(\bar{F})=pp_{i+1}+(1-p)p_{i-1}$$
or $$p_i = pp_{i+1} + qp_{i-1}.\tag 1$$
The $p_{i+1}$ appears in the formula since if player $B$ wins the first game, then he
or she has won a dollar from player $A$. Similarly, $p_{i-1}$ appears since if player
$B$ loses the first game, then he or she pays player $A$ one dollar, and therefore
has one less dollar. Clearly, $p_0 = 0$ since if player $B$ has no money, then he or
she is already ruined. Also, $p_{a+b} = 1$ since player $B$ cannot be ruined if he or
she has all the money.
Since $p+q=1$, equation (1) can be rewritten as $pp_i +qp_i= pp_{i+1} + qp_{i-1}$,
yielding
$$p_{i+1} - p_i =\gamma (p_i - p_{i-1})$$
where we put $\gamma=\frac{q}{p}$.
In particular,
$p_2-p_1=\gamma(p_1-p_0)=\gamma p_1$ (since $p_0 = 0$), so that $p_3 -p_2 = \gamma(p_2 - p_1) = \gamma^2p_1$; and more generally
$$
p_{i+1} - p_i =\gamma^i p_1\qquad \text{for}\; 0<i<a+b
$$
Thus
$$
p_{i+1} - p_1 =\sum_{k=1}^i (p_{k+1} - p_k)=\sum_{k=1}^i\gamma^k p_1
$$
yielding
$$
p_{i+1} =p_1+p_1\sum_{k=1}^i\gamma^k =p_1 \sum_{k=0}^i\gamma^k=
\begin{cases}
p_1\frac{1-\gamma^{i+1}}{1-\gamma} & \text{if }p\ne q\\
p_1(i+1) & \text{if }p= q
\end{cases} \tag 2
$$
(Here we are using the geometric sum $\sum_{k=0}^n a^k=\frac{1-a^{n+1}}{1-a}$ for any number $a\ne 1$ and any integer $n\ge 1$.)
Choosing $i = a+b - 1$ and using the fact that $p_{a+b} = 1$ yields
$$
1=p_{a+b}=\begin{cases}
p_1\frac{1-\gamma^{a+b}} {1-\gamma}& \text{if }p\ne q\\
p_1(a+b) & \text{if }p= q
\end{cases}
$$
from which we conclude that
$$
p_{1} =
\begin{cases}
\frac{1-\gamma}{1-\gamma^{a+b}} & \text{if }p\ne q\\
\frac{1}{a+b} & \text{if }p= q
\end{cases} \tag 3
$$
Combining equations (2) and (3) gives
$$
p_{i} =
\begin{cases}
\frac{1-\gamma^i}{1-\gamma^{a+b}} & \text{if }p\ne q\\
\frac{i}{a+b} & \text{if }p= q
\end{cases}
$$
or
$$
p_{i} =
\begin{cases}
\frac{1-(q/p)^i}{1-(q/p)^{a+b}} & \text{if }p\ne q\\
\frac{i}{a+b} & \text{if }p= q
\end{cases}
$$
So taking $i=b$, and for $a=2$ and $b=3$, we have
$$
\Bbb{P}(E_b)=p_b=
\begin{cases}
\frac{1-2^b}{1-2^{a+b}}=\frac{1-2^3}{1-2^{5}}=\frac{7}{31} & \text{if }p=\frac{1}{3},\, q=\frac{2}{3};\gamma=\frac{q}{p}=2\\
\frac{b}{a+b}=\frac{3}{5} & \text{if }p= q=\frac{1}{2}
\end{cases}
$$
observing that $\Bbb{P}(\text{Heads})=\Bbb P(F)$.
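A simulation confirms both numbers; a short Python sketch (parameters as in the problem):

    import random

    def ruin_prob(a, b, p, trials=200000):
        wins = 0
        for _ in range(trials):
            i = b                                  # player B's current fortune
            while 0 < i < a + b:
                i += 1 if random.random() < p else -1
            wins += (i == a + b)
        return wins / trials

    print(ruin_prob(2, 3, 1/3))   # ~7/31 = 0.2258...
    print(ruin_prob(2, 3, 1/2))   # ~3/5  = 0.6

Both estimates land within a percent or so of the closed-form answers. |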
Is the Completeness of $X$ really necessary here? | You are right. The result holds in every locally convex space (i.e., a vector space $X$ with a family $\mathcal P$ of semi-norms): $(X,\mathcal P)$ and $(X,\sigma(X,X'))$ have the same bounded sets. |
Relationship of camera matrices and real world units | It turns out that it doesn’t matter what units you use for the extrinsic matrix as long as they’re consistent with the units that you use for world coordinates.
For simplicity, let's assume that there's no rotation, as is the case for your matrices. That part of the extrinsic matrix isn't affected by the units of measurement, anyway. We have some intrinsic matrix $K$ and $$E=\pmatrix{1&0&0&t_x\\0&1&0&t_y\\0&0&1&t_z}$$ for the extrinsic. If we apply these to the world point $(x,y,z)$ we get $$K\cdot E\cdot\pmatrix{x\\y\\z\\1}=K\cdot\pmatrix{x+t_x\\y+t_y\\z+t_z}.\tag{1}$$ Changing the units of measurement amounts to multiplying the last column of $E$ by some constant scale factor $s$ and multiplying world coordinates by the same factor. With these new units, we have $$K\cdot\pmatrix{1&0&0&st_x\\0&1&0&st_y\\0&0&1&st_z}\cdot\pmatrix{sx\\sy\\sz\\1}=K\cdot\pmatrix{s(x+t_x)\\s(y+t_y)\\s(z+t_z)}.$$ Since $K$ is linear, that factor of $s$ remains after multiplication by $K$ and will cancel out when you divide through by the last coordinate to get the 2-D coordinates of the point in the image plane, giving the same result as $(1)$. (Remember that in homogeneous coordinates, $(x,y,1)$ and $(kx,ky,k)$ with $k\ne0$ represent the same point.)
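Here is the same cancellation in numbers, a small numpy sketch with made-up intrinsics and translation:

    import numpy as np

    K = np.array([[800.0, 0, 320],
                  [0, 800, 240],
                  [0,   0,   1]])              # made-up intrinsic matrix
    t = np.array([0.1, -0.2, 1.5])             # translation, metres
    X = np.array([0.3, 0.4, 2.0])              # world point, metres

    def project(K, t, X):
        x = K @ (X + t)
        return x[:2] / x[2]                    # divide by the homogeneous coordinate

    print(project(K, t, X))                    # pixel coordinates
    print(project(K, 1000 * t, 1000 * X))      # same pixels, units now millimetres

Both calls print the same pixel coordinates, as the algebra above predicts. |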
I cannot understand why the range of integral of $x, y$ are from $0$ to $1$ when $x > 0 $ and $y < 1$. | The integrals are "technically" not on $(0,1)\times(0,1)$ only, but on $\mathbb{R}^2$: you do have
$$
\mathbb{E}[Z] = \int_{\mathbb{R}^2} dxdy f(x,y) \sqrt{x^2+y^2}
$$
as... expected. However, by definition, $f(x,y)$ is non-zero only if $(x,y)\in (0,1)\times(0,1)$, so we end up having
$$
\mathbb{E}[Z] = \int_{\mathbb{R}^2} dxdy 4xy \mathbf{1}_{(0,1)\times(0,1)}(x,y) \sqrt{x^2+y^2}
= \int_{(0,1)\times(0,1)} dxdy 4xy \sqrt{x^2+y^2}
$$
This is because $\mathbf{1}_{(0,1)\times(0,1)}(x,y)$ is by definition $1$ when $(x,y)\in(0,1)\times(0,1)$, and zero otherwise (it is the indicator function of $(0,1)\times(0,1)$). |
Does $\int_R \cos(rx) f(x) = 0$ with $f(x)=f(-x)$ imply $f(x) = 0$ almost everywhere? | If $f$ is even, then
$$
\hat f(r)=\int_{\Bbb R}f(x)\,e^{irx}\,dx=\int_{\Bbb R}f(x)\,\cos(r\,x)\,dx=0\quad\forall r\in\Bbb R.
$$
By the uniqueness theorem for Fourier transforms, $f$ is equal to zero almost everywhere. |
Can anyone solve this ODE? | I can explain Mathematica's answer. An explicit solution is highly unlikely. If it exists, the integral we get at the end should be (a) computable by some method and (b) the antiderivative obtained should be invertible. Both are unlikely.
Multiply by $u'/u$:
$$u'' u' + au^2 u' = bu'/u$$
Integrate:
$$\frac{1}{2} (u')^2 + \frac{a}{3} u^3 = b \log u+C.$$
Rearrange:
$$\frac{u'}{\sqrt{2 b \log u + 2C - \frac{2a}{3} u^3}} = 1$$
Integrate each side:
$$\int \frac{1}{\sqrt{2 b \log u + 2C - \frac{2a}{3} u^3}} du = t+D$$ |
Constant-Coefficient Systems. Find a real general solution of the following system. | We are given:
$$A = \begin{bmatrix}9 & 27/2\\3/2 & 9\end{bmatrix}$$
The characteristic equation and eigenvalues are found by solving $\det(A-\lambda I) = 0$, which gives:
$$1/4 (2 \lambda-27) (2 \lambda-9) = 0 \rightarrow \lambda_1 = \dfrac{27}{2}~,\lambda_2 = \dfrac{9}{2}$$
We now find the eigenvectors for each distinct eigenvalue by solving $[A-\lambda_i I]v_i = 0$.
For $\lambda_1 = 27/2$, we have the RREF of:
$$\begin{bmatrix} 1 & -3 \\ 0 & 0 \end{bmatrix}v_1 = 0$$
This leads to:
$a - 3b = 0 \rightarrow a = 3b, ~\mbox{so let}~ b = 1 \rightarrow a = 3$, so our eigenvector is:
$$v_1 = \begin{bmatrix}3\\ 1 \end{bmatrix}$$
Repeating this process for $\lambda_2 = 9/2$, we have the RREF of:
$$\begin{bmatrix} 1 & 3 \\ 0 & 0 \end{bmatrix}v_2 = 0$$
This leads to:
$a + 3b = 0 \rightarrow a = -3b, ~\mbox{so let}~ b = 1 \rightarrow a = -3$, so our eigenvector is:
$$v_2 = \begin{bmatrix} -3 \\ 1 \end{bmatrix}$$
Here is one approach to finding the exponential matrix using the eigenvalues / eigenvectors.
We can write the solution to this system as:
$$x(t) = \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = c_1 e^{27t/2 }\begin{bmatrix} 3 \\ 1 \end{bmatrix}+ c_2 e^{9t/2 }\begin{bmatrix} -3 \\ 1 \end{bmatrix}$$
This gives us a fundamental matrix of:
$$\phi(t) = \begin{bmatrix} 3e^{27t/2} & -3e^{9t/2} \\ e^{27t/2} & e^{9t/2} \end{bmatrix}$$
From this, we can find the matrix exponential using:
$$e^{A t} = \phi(t)(\phi(0))^{-1} = \begin{bmatrix}1/2(e^{9t/2} + e^{27t/2}) & 3/2 (-e^{9t/2} + e^{27t/2})\\ 1/6(-e^{9t/2} + e^{27t/2}) & 1/2(e^{9t/2} + e^{27t/2}) \end{bmatrix}$$
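You can verify this closed form against a generic matrix-exponential routine; a short scipy sketch:

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[9.0, 27/2], [3/2, 9.0]])
    t = 0.37                                   # arbitrary test time
    e9, e27 = np.exp(9*t/2), np.exp(27*t/2)
    closed = np.array([[(e9 + e27)/2, 3*(e27 - e9)/2],
                       [(e27 - e9)/6, (e9 + e27)/2]])
    print(np.allclose(expm(A*t), closed))      # True

This prints True, confirming the fundamental-matrix computation. |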
Is a vector of weakly convergent sequences of random variables weakly convergent? | The statement about weak limits is false but the statement about weak sequential compactness is true. By Prohorov's theorem, weak sequential compactness is equivalent to tightness, and the vector of your sequences is tight because a product of compact sets is compact.
$X_n \to X$ weakly and $Y_n \to Y$ weakly does not imply $(X_n,Y_n) \to (X,Y)$ weakly. |
Why are the powers of an element in an integral domain infinite? | That's because there's something missing from the statement.
For $r=1$, the powers are trivially a finite set!
Furthermore, for any $n$-th root of unity in $\Bbb C$, the powers are a finite set, so that is a counterexample to the statement of the problem also.
You could make it a true statement this way: For any nonzero nonunit $r$ in a domain, the set of powers of $r$ is infinite. From your own work so far, you can prove this. |
Closed form for an infinite series involving lower incomplete gamma functions | $$Q(t) = \frac{e^{at}}{a}\Big[e^{b/a}-e^{-at}\sum_{k=0}^\infty \sum_{l=0}^k \frac{(at)^l(b/a)^k}{k!l!}\Big].$$
I'll blindly try
to reverse the order of summation
and see what happens.
$\begin{array}\\
S(u, v)
&=\sum_{k=0}^\infty \sum_{l=0}^k \frac{u^lv^k}{k!l!}\\
&=\sum_{l=0}^\infty\sum_{k=l}^\infty \frac{u^lv^k}{k!l!}\\
&=\sum_{l=0}^\infty\frac{u^l}{l!}\sum_{k=l}^\infty \frac{v^k}{k!}\\
&=\sum_{l=0}^\infty\frac{u^l}{l!}(e^v-\sum_{k=0}^{l-1} \frac{v^k}{k!})\\
&=\sum_{l=0}^\infty\frac{u^l}{l!}e^v-\sum_{l=0}^\infty\frac{u^l}{l!}\sum_{k=0}^{l-1} \frac{v^k}{k!}\\
&=e^ue^v-\sum_{l=0}^\infty\frac{u^l}{l!}\sum_{k=0}^{l-1} \frac{v^k}{k!}\\
&=e^{u+v}-\sum_{l=0}^\infty\frac{u^l}{l!}(\sum_{k=0}^{l} \frac{v^k}{k!}-\frac{v^l}{l!})\\
&=e^{u+v}-\sum_{l=0}^\infty\frac{u^l}{l!}\sum_{k=0}^{l} \frac{v^k}{k!}+\sum_{l=0}^\infty\frac{u^l}{l!}\frac{v^l}{l!}\\
&=e^{u+v}-\sum_{l=0}^\infty\sum_{k=0}^{l}\frac{u^l}{l!} \frac{v^k}{k!}+\sum_{l=0}^\infty\frac{(uv)^l}{l!^2}\\
&=e^{u+v}-S(v, u)+I_0(2\sqrt{uv})
\\
\end{array}
$
where
$I_0$
is the modified Bessel function
of the first kind.
So this isn't an evaluation
but we get the relation
$S(u, v)+S(v, u)
=e^{u+v}+I_0(2\sqrt{uv})
$.
Then
$\begin{array}\\
Q(t)
&= \frac{e^{at}}{a}\Big[e^{b/a}-e^{-at}\sum_{k=0}^\infty \sum_{l=0}^k \frac{(at)^l(b/a)^k}{k!l!}\Big]\\
&= \frac{e^{at}}{a}\Big[e^{b/a}-e^{-at}S(at, b/a)\Big]\\
&= \frac{1}{a}\Big[e^{at+b/a}-S(at, b/a)\Big]\\
&= \frac{1}{a}\Big[e^{at+b/a}-(e^{at+b/a}-S(b/a, at)+I_0(2\sqrt{(at)(b/a)}))\Big]\\
&= \frac{1}{a}\Big[S(b/a, at)-I_0(2\sqrt{tb})\Big]\\
\end{array}
$
Again,
not an evaluation,
but a possibly useful
alternative expression.
This reminds me
very much
of some work I did
over forty years ago
on the Marcum Q-function.
You might look that up
and follow the references.
You can start here:
https://en.wikipedia.org/wiki/Marcum_Q-function |
Probably simple, but i don't get it | Use
$$\frac{m_1 (v_1^2 - w_1^2)}{m_1 (v_1 + w_1)} = \frac{m_1 (v_1 - w_1)(v_1 + w_1)}{m_1 (v_1 + w_1)} = v_1 - w_1$$ |
Nondegenerate representation | A cyclic representation is nondegenerate. A direct sum of nondegenerate representations is nondegenerate. Hence, the theorem could only hold for nondegenerate representations.
You are right that in Conway's proof it can be seen that the algebra must be nondegenerate, as we have $\mathscr{H}_0$ defined to be a subset of $\mathrm{cl}[\pi(A)\mathscr{H}]$, and see later that $\mathscr{H_0}=\mathscr{H}$. If I had my copy of Conway handy I would check whether it states somewhere earlier that representations would be assumed nondegenerate unless stated otherwise. |
If $a_nx^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0=0$, must we have always $-\frac{a_0}{a_n} \in \mathbb{Z}$? | Yes. If a polynomial with all integer coefficients has only integer roots then it must be the case that it factors into: $p(x) = c(x-k_1)(x-k_2)(x-k_3)\dots(x-k_n)$ where $c$ and each $k$ are integers. So $a_n= c$ and $a_0 = \pm ck_1k_2k_3\dots k_n$. So $a_n$ divides $a_0$. |
Why n! equals sum of some expression? | Summary:
This formula is just a convoluted way of saying,
$$
n! = [n \cdot 1][(n-1) \cdot 2][(n-2) \cdot 3] \cdots
\left\{
\begin{array}{cc}
\left[ \frac{n+1}{2} \right] & n \text{ is odd} \\
\left[\frac{n}{2} \cdot \frac{n+2}{2} \right] & n \text{ is even}
\end{array}
\right.
$$
Explanation:
Note that by sum of an arithmetic sequence,
$$
\sum_{i = 0}^j (n - 2i) = (n - j)(j+1)
$$
Also,
$$
\frac{2n + \cos(\pi n) - 5}{4}
=
\frac{2n + (-1)^n - 5}{4}
=
\left\{
\begin{array}{cc}
\frac{n - 3}{2} & n \text{ is odd}\\
\frac{n - 2}{2} & n \text{ is even}
\end{array}
\right.
$$
Finally,
$$
{\left(\frac{n+1}{2}\right)}^{\left(\frac{\cos(\pi n + \pi)+1}{2}\right)}
=
{\left(\frac{n+1}{2}\right)}^{\left(\frac{(-1)^{n+1}+1}{2}\right)}
=
\left\{
\begin{array}{cc}
\frac{n + 1}{2} & n \text{ is odd}\\
1 & n \text{ is even}
\end{array}
\right.
$$
Thus, if $n$ is odd,
\begin{align*}
{\left(\frac{n+1}{2}\right)}^{\left(\frac{\cos(\pi n + \pi)+1}{2}\right)} \prod_{j=0}^\frac{2 n +\cos(\pi n) - 5}{4}\sum_{i=0}^j(n-2i)
&=
\frac{n+1}{2} \prod_{j=0}^{\frac{n - 3}{2}} (n - j)(j+1) \\
&= \frac{n+1}{2} [n \cdot 1][(n-1) \cdot 2] \cdots \left[ \left( \frac{n + 3}{2} \right) \left( \frac{n-1}{2} \right) \right] \\
&= n!
\end{align*}
And if $n$ is even,
\begin{align*}
{\left(\frac{n+1}{2}\right)}^{\left(\frac{\cos(\pi n + \pi)+1}{2}\right)} \prod_{j=0}^\frac{2 n +\cos(\pi n) - 5}{4}\sum_{i=0}^j(n-2i)
&=
1 \cdot \prod_{j=0}^{\frac{n - 2}{2}} (n - j)(j+1) \\
&= [n \cdot 1][(n-1) \cdot 2] \cdots \left[ \left( \frac{n + 2}{2} \right) \left( \frac{n}{2} \right) \right] \\
&= n!
\end{align*}
So indeed the formula holds for any $n$.
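A quick mechanical confirmation (Python, replacing $\cos(\pi n)$ by $(-1)^n$ to keep the arithmetic exact):

    import math

    def formula(n):
        upper = (2 * n + (-1) ** n - 5) // 4   # the upper limit of the product
        prod = 1
        for j in range(upper + 1):
            prod *= (n - j) * (j + 1)          # the inner arithmetic sum
        if n % 2 == 1:                         # the leading factor appears only for odd n
            prod *= (n + 1) // 2
        return prod

    assert all(formula(n) == math.factorial(n) for n in range(1, 15))

The assertion passes for all tested $n$. |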
Connected components of the complement of a connected component | Revised to match the revised question:
The answer to the revised question is yes. Let $C$ be a component of $X\setminus F$. There must be a component $D$ of $X\setminus E$ such that $D\cap C\ne\varnothing$. If $D\nsubseteq C$, then $C\cup D$ is a connected subset of $X\setminus F$ properly containing the component $C$, which is impossible. Thus, $D\subseteq C$. |
Equivalent descriptions of Sobolev spaces on compact manifolds | My question is: do the first two constructions above produce the same space?
Yes, these two constructions produce the same space.
Why isn't $H^k$ defined simply as $ \{ u \in L^2 \mid X_1 \dots X_i u \in L^2 \forall i \le k \forall X_j \in \Gamma(TM) \} $?
To be more precise, even this third construction you have cited will also produce the same space.
After pondering about it for a few days, I believe I have succeeded in cooking up a proof after considering the outline of a stronger version of it which is presented in the third chapter of the paper Sobolev spaces on Lie manifolds (...).
This equivalence is rather tricky.
We shall prove this result in a slightly more general setting, that of the Sobolev space $W^{k,p}(M)$ where $k$ is a non-negative integer and $p \in [1,\infty[$.
I'll first fix the notation.
Let $(M^n,g)$ be a compact Riemannian manifold with $\nabla$ its associated Levi-Cività connection.
Fix $\tilde{\mathcal{U}}$ a finite smooth atlas for $M$.
Fix $\ \mathcal{U}=\{ (U_r, \psi_r) : 1 \leq r \leq N\}$ a finite smooth atlas for $M$ such that
$$\forall r=1,...,N \ \exists (\tilde{U},\tilde{\psi}) \in \tilde{\mathcal{U}} \ \left( \overline{U_r} \subset \tilde{U} \ \& \ \psi_r = \left. \tilde{\psi} \right|_{U_r} \right)$$
Fix $\{ \rho_r : 1 \leq r \leq N\}$ a partition of unity strictly subordinate to $\mathcal{U}$.
Given $r=1,...,N$, we define $V_r=\psi_r(U_r) \subset \mathbb{R}^n$.
Given $u \in C^\infty(M)$, we define
$$
||u||^p_{W^{k,p}} = \sum_{l=0}^k \int_M |\nabla^l u|^p d\mu_g
$$
$$
\lambda(u)^p=\sum_{r=1}^N ||\rho_r u||^p_{W^{k,p}}
$$
$$
\nu_{k,p}(u)^p = \sum_{r=1}^N \lVert (\rho_r u) \circ \psi_r^{-1}\rVert^p_{W^{k,p}(V_r)}
$$
We shall prove that those norms are equivalent, hence the completion of $C^\infty(M)$ endowed with any of them yields the same space $W^{k,p}(M)$.
We shall proceed in two steps which employ (almost) the same tricks: first, we prove that $|| \cdot ||_{W^{k,p}}$ is equivalent to $\lambda$, then that $\lambda$ is equivalent to $\nu_{k,p}$.
First of all, $\{ |\nabla^l \rho_r| : 1 \leq r \leq N; \ 0 \leq l \leq k \}$ is a finite set of continuous functions in the compact space $M$. Therefore, there is $C>0$ such that
$$
\forall r=1,...,N \ \forall l=0,...,k \ \left( ||\nabla^l \rho_r||_{\infty} \leq C \right)
$$
Let $L>0$ be such that
$$
\forall l=1,...,k \ \left( || \cdot ||_{1, \mathbb{R}^{l}} \leq L || \cdot ||_{p, \mathbb{R}^{l}} \right)
$$
Let
$$
K = {{k}\choose{\lfloor k/2 \rfloor}}
$$
Fix $u \in C^\infty(M)$, $r \in \{1,...,N\}$ and $l \in \{0,...,k\}$.
\begin{align*}
\lvert \nabla^l (\rho_r u) \rvert
&=
\lvert \sum_{m=0}^l {{l}\choose{m}} \nabla^m \rho_r \otimes \nabla^{l-m}u \rvert
\\
&\leq
\sum_{m=0}^l {{l}\choose{m}} \lvert \nabla^m \rho_r \rvert \lvert \nabla^{l-m}u \rvert
\\
&\leq
CLK \left( \sum_{m=0}^l \lvert \nabla^m u \rvert^p \right)^{1/p}
\end{align*}
We considered a generic $l$ in $\{0,...,k\}$, hence
\begin{align*}
\lVert \rho_r u \rVert^p_{W^{k,p}}
&\leq
(CLK)^p \sum_{l=0}^k \int_M \sum_{m=0}^l \lvert \nabla^m u \rvert^p
\\
&\leq
(CLK)^p \sum_{l=0}^k \lVert u \rVert_{W^{l,p}}^p
\\
&\leq
(k+1) (CLK)^p \lVert u \rVert_{W^{k,p}}^p
\end{align*}
We considered a generic $r$ in $\{1,...,N\}$, hence
$$
\lambda(u)^p \leq A \lVert u \rVert_{W^{k,p}}^p
$$
where $A=(k+1) N (CLK)^p$.
We obtained our first inequality, so we're halfway there.
Let $T>0$ be such that $|| \cdot ||_{p, \mathbb{R}^N} \geq T || \cdot ||_{1, \mathbb{R}^N}$.
We then obtain
\begin{align*}
\lambda(u)
&=
\left( \sum_{r=1}^N \lVert \rho_r u \rVert^p_{W^{k,p}} \right)^{1/p}
\\
&\geq
T \sum_{r=1}^N \lVert \rho_r u \rVert_{W^{k,p}}
\\
&\geq
T \lVert \sum_{r=1}^N \rho_r u \rVert_{W^{k,p}}
\\
&\geq
T \lVert u \rVert_{W^{k,p}}
\end{align*}
That is, we have established our first equivalence of norms.
To repeat all that we have done in the first step, we need to analyse the local form of the covariant derivatives of functions $u \in C^\infty(M)$.
Let $k$ be a positive integer, $l \in \{1,...,k\}$ and $(\tilde{U}, \tilde{\varphi}) \in \tilde{\mathcal{U}}$.
Then there exists a set indexed by multi-indices $\alpha$
$$
\{ P_\alpha : 1 \leq |\alpha| \leq k \} \subset C^\infty(\tilde{U})
$$
such that the $l$th covariant derivative of $u \in C^\infty(M)$ can be locally written at $\tilde{U}$ as
$$
\nabla^l u = \sum_{1 \leq \lvert \alpha \rvert \leq l} (D^{\alpha} u) P_{\alpha} \ d x_{i_1} \otimes ... \otimes d x_{i_l}
$$
where $\alpha=(i_1,...,i_l)$ is a multi-index.
This remark can be easily proved with an induction.
For each chart $(U_r,\psi_r)$, the $P_\alpha$s are continuous functions which have bounded covariant derivatives, so we can repeat the arguments for the first equivalence of norms. |
The Hilbert space of Dirichlet Series, with square-summable coefficients is not an algebra | I don't see an explicit example of $f, g\in \mathcal{H}^2$ such that $f\cdot g \notin \mathcal{H}^2$, but we can prove that such $f,g$ must exist.
Suppose to the contrary that for all $f,g \in \mathcal{H}^2$ their product $f\cdot g$ also belongs to $\mathcal{H}^2$. Then the product is a (commutative) bilinear map $\mu \colon\mathcal{H}^2 \times \mathcal{H}^2 \to \mathcal{H}^2$. Since convergence with respect to $\lVert\,\cdot\,\rVert_2$ implies pointwise convergence of the coefficients and $\mathcal{H}^2$ is a Banach space, the closed graph theorem asserts that $\mu$ is separately continuous. For, if for a fixed $g\in \mathcal{H}^2$ we have $\lVert f_k - f\rVert_2 \to 0$ and $\lVert f_k\cdot g - h\rVert_2 \to 0$, it follows that $h = f\cdot g$, since the coefficients of $f_k\cdot g$ converge to those of $f\cdot g$. But a separately continuous bilinear map on a product of Banach spaces is continuous by the Banach-Steinhaus theorem, so there is a $C\in (0,+\infty)$ with
$$\lVert f\cdot g\rVert_2 \leqslant C\cdot \lVert f\rVert_2\,\lVert g\rVert_2\,.\tag{$\ast$}$$
However, no such $C$ exists: For $\varepsilon > 0$, let
$$f_{\varepsilon}(s) = \sum_{k = 0}^{\infty} \frac{1}{2^{k(\varepsilon+s)}}\,,$$
so the coefficients are
$$a_n = \begin{cases} n^{-\varepsilon} &\text{if } n = 2^k\,, \\\; 0 &\text{if } n \text{ is not a power of } 2\,.\end{cases}$$
Then the coefficients $b_n$ of $f_{\varepsilon}\cdot f_{\varepsilon}$ are $0$ if $n$ is not a power of $2$, and
$$\sum_{m = 0}^k 2^{-m\varepsilon}\cdot 2^{-(k-m)\varepsilon} = (k+1)2^{-k\varepsilon}$$
for $n = 2^k$. We compute
$$\lVert f_{\varepsilon}\rVert_2^2 = \sum_{k = 0}^{\infty} 2^{-2k\varepsilon} = \frac{1}{1 - 2^{-2\varepsilon}}$$
and
\begin{align}
\lVert f_{\varepsilon}^2\rVert_2^2
&= \sum_{k = 0}^{\infty} (k+1)^2\cdot 2^{-2k\varepsilon} \\
&= \sum_{k = 0}^{\infty} (k+2)(k+1) 2^{-2k\varepsilon} - \sum_{k = 0}^{\infty} (k+1)2^{-2k\varepsilon} \\
&= \frac{2}{(1 - 2^{-2\varepsilon})^3} - \frac{1}{(1 - 2^{-2\varepsilon})^2}\,,
\end{align}
so
$$\frac{\lVert f_{\varepsilon}^2\rVert_2^2}{\lVert f_{\varepsilon}\rVert_2^4} = \frac{2}{1-2^{-2\varepsilon}} - 1$$
tends to $+\infty$ as $\varepsilon \to 0$, contradicting $(\ast)$. |
Expected value of $x^TAx$ | $x^TAx=\sum_{i=1}^d \sum_{j=1}^d a_{ij}x_ix_j$ where $a_{ij}$ are elements of $A$.
Then use linearity of expectation and the given variance-covariance matrix.
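Carrying the hint out gives the standard identity $\mathbb{E}[x^TAx]=\operatorname{tr}(A\Sigma)+\mu^TA\mu$ for a random vector with mean $\mu$ and covariance $\Sigma$; here is a Monte Carlo sketch checking it (all parameters are made up):

    import numpy as np

    rng = np.random.default_rng(0)
    d = 3
    A = rng.standard_normal((d, d))
    mu = rng.standard_normal(d)
    L = rng.standard_normal((d, d))
    Sigma = L @ L.T                             # a valid covariance matrix

    x = rng.multivariate_normal(mu, Sigma, size=500000)
    mc = np.mean(np.einsum('ni,ij,nj->n', x, A, x))
    exact = np.trace(A @ Sigma) + mu @ A @ mu
    print(mc, exact)                            # agree to roughly two decimals

The two printed numbers agree up to Monte Carlo error. |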
Pursuit curves solution | You have some wrong signs in the last two equations.
Based on the equation of the derivative of the pursuit curve $y=f(x)$ described by the chaser object that you indicate
$$
\frac{dy}{dx}=\frac{ut-y}{p-x}\tag{1}
$$
I assume that the chased object moves along the straight line $x=p$, as I commented above. Assume further that it moves upwards. (See remark 2). Then
$$
t=\frac{y}{u}+\frac{p-x}{u}\frac{dy}{dx}.\tag{2}
$$
and from
$$
s=\int_{0}^{x}\sqrt{1+(f^{\prime }(\xi ))^{2}}d\xi =vt\tag{3}
$$
we conclude that
$$
t=\frac{1}{v}\int_{0}^{x}\sqrt{1+(f^{\prime }(\xi ))^{2}}d\xi .\tag{4}
$$
Equating the two equations for $t$ $(2)$ and $(4)$ we get
$$
\frac{1}{v}\int_{0}^{x}\sqrt{1+(f^{\prime }(\xi ))^{2}}d\xi =\frac{y}{u}+
\frac{p-x}{u}f^{\prime}(x).
$$
Differentiating both sides we obtain the equation (note that the LHS is
positive)
$$
\frac{1}{v}\sqrt{1+(f^{\prime }(x))^{2}}=\frac{p-x}{u}f^{\prime \prime }(x).\tag{5}
$$
If we let $w=\frac{dy}{dx}=f^{\prime }(x)$ this equation corresponds to the following one in $w$ and $w^{\prime }=\frac{dw}{dx}$
$$
\sqrt{1+w^{2}}=k(p-x)\frac{dw}{dx}\qquad w=f^{\prime }(x),k=\frac{v}{u},\tag{6}
$$
which can be rewritten as
$$
\frac{dw}{\sqrt{1+w^{2}}}=\frac{1}{k}\frac{dx}{p-x}\tag{7}
$$
by applying the method of separation of variables to $x$ and $w$. The integration is easy
$$
\begin{eqnarray*}
\int \frac{dw}{\sqrt{1+w^{2}}} &=&\int \frac{1}{k}\frac{dx}{p-x}+\log C \\
\text{arcsinh }w &=&-\frac{1}{k}\log \left( p-x\right) +\log C.
\end{eqnarray*}
$$
The initial condition $x=0,w=f^{\prime }(0)=0$ yields
$$
\begin{eqnarray*}
-\frac{1}{k}\log p+\log C &=&\text{arcsinh }0=0 \\
&\Rightarrow &\log C =\frac{1}{k}\log p ,
\end{eqnarray*}
$$
which means that
$$
\text{arcsinh }w=-\frac{1}{k}\log \left( p-x\right) +\frac{1}{k}\log p=-\frac{1}{k}\log \frac{p-x}{p}.\tag{8}
$$
Solving for $w$ we get
$$
\begin{eqnarray*}
\frac{dy}{dx} &=&w=\sinh \left( -\frac{1}{k}\log \frac{p-x}{p}\right) =\frac{1}{2}\left( e^{-\frac{1}{k}\log \frac{p-x}{p}}-e^{\frac{1}{k}\log \frac{p-x}{p}}\right) \\
&=&\frac{1}{2} \left( \frac{p-x}{p}\right) ^{-1/k}-\frac{1}{2}\left( \frac{p-x}{p}\right) ^{1/k}\tag{9}
\end{eqnarray*}
$$
To integrate this equation consider two cases.
(a) $k=\frac{v}{u}>1$
$$\begin{eqnarray*}
y &=&\frac{1}{2}\int \left( \frac{p-x}{p}\right) ^{-1/k}-\left( \frac{p-x}{p}\right) ^{1/k} dx \\
&=&-\frac{1}{2}\frac{pk}{k-1}\left( \frac{p-x}{p}\right) ^{1-1/k}+\frac{1}{2}\frac{pk}{k+1}\left( \frac{p-x}{p}\right) ^{1+1/k}+C.
\end{eqnarray*}$$
The constant of integration $C$ is defined by the initial condition $x=0,y=0$
$$\begin{eqnarray*}
0 &=&-\frac{1}{2}\frac{pk}{k-1}+\frac{1}{2}\frac{pk}{k+1}+C \\
&\Rightarrow &C=\frac{pk}{k^{2}-1}.
\end{eqnarray*}$$
Hence
$$y=-\frac{1}{2}\frac{pk}{k-1}\left( \frac{p-x}{p}\right) ^{1-1/k}+\frac{1}{2}\frac{pk}{k+1}\left( \frac{p-x}{p}\right) ^{1+1/k}+\frac{pk}{k^{2}-1}\tag{10}$$
The chaser overtakes the chased object at the point $(p,f(p))$, with $f(p)=\frac{pk}{k^{2}-1}$.
(b) $k=\frac{v}{u}=1$. We have
$$\frac{dy}{dx}=\frac{1}{2} \left( \frac{p-x}{p}\right) ^{-1}-\frac{1}{2}\left( \frac{p-x}{p}\right)$$
and
$$\begin{eqnarray*}
y &=&\frac{1}{2}\int \left( \frac{p-x}{p}\right) ^{-1}-\left( \frac{p-x}{p}\right) dx \\
&=&-\frac{1}{2}p\ln \left( p-x\right) -\frac{1}{2}x+\frac{1}{4p}x^{2}+C.
\end{eqnarray*}$$
The same initial condition $x=0,y=0$ yields now
$$\begin{eqnarray*}
C &=&\frac{1}{2}p\ln \left( p\right) \\
&& \\
y &=&-\frac{1}{2}p\ln \left( \frac{p-x}{p}\right) -\frac{1}{2}x+\frac{1}{4p}x^{2}.\tag{11}
\end{eqnarray*}$$
The chaser never overtakes the chased object.
Example for (a): graph of $y=f(x)$ for $k=2,p=50$
Example for (b): graph of $y=f(x)$ for $k=1,p=50$
Remarks:
This answer is similar to the answer of mine to the question Cat Dog problem using integration.
It was inspired by Helmut Knaust's The Curve of Pursuit.
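A numerical spot-check of equation $(6)$ against the solution $(9)$ (a short Python sketch with $k=2$, $p=50$, as in the first graph):

    from math import sqrt

    k, p = 2.0, 50.0
    w  = lambda x: 0.5 * (((p - x)/p) ** (-1/k) - ((p - x)/p) ** (1/k))   # eq. (9)
    dw = lambda x: (0.5/(k*p)) * (((p - x)/p) ** (-1/k - 1)
                                  + ((p - x)/p) ** (1/k - 1))             # its derivative

    x = 10.0
    print(sqrt(1 + w(x)**2), k * (p - x) * dw(x))   # both sides of eq. (6)

The two sides agree, as they should. |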
Computing a combined average (mean) of values | The mean of the new data set is
$$\frac1{m+n}\left(\sum_{i=1}^{k}f_ix_i+\sum_{i=1}^{j}g_iy_i\right)$$
where the $g_i$'s are the frequencies for the new data set and $\sum_{i=1}^k f_i+\sum_{i=1}^j g_i=n+m$. Also
$$n\bar{x}=\sum_{i=1}^kf_ix_i,\hspace{10mm}m\bar{y}=\sum_{i=1}^jg_iy_i$$
Therefore the new mean is
$$\frac{n\bar{x}+m\bar{y}}{m+n}$$
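In code this is a one-liner; a tiny Python example:

    def combined_mean(n, x_bar, m, y_bar):
        return (n * x_bar + m * y_bar) / (n + m)

    print(combined_mean(10, 4.0, 30, 8.0))   # 7.0

Note it is a weighted average, not the plain average of the two means. |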
Quadratic equation find the value of k | The discriminant is
$$25-4k^2.$$
The discriminant has to be positive so that there are two distinct solutions. That is, we have to solve the following inequality:
$$25>4k^2. $$
Taking the square root of both sides we get $5>2|k|.$ Or
$$|k|<\frac 52.$$
$k$ has to be negative because we are looking for solutions between $-6$ and $-5$. But the absolute value of the $k$s in this interval are greater than $\frac52$. So there are no such $k$s. |
question about conditional probability for continuous r.v | The conclusion from this example is that sometimes we need to pay special attention when conditioning on events of zero probability. Suppose, as in the example, that $X_1$ and $X_2$ are i.i.d. exponentials. Fix $a,b > 0$ with $a < b$, and consider the question what is ${\rm P}\big[ X_1+X_2 \in [a,b] \big| X_1=X_2 \big]$? You may (naturally) wish to interpret this as
$$
{\rm P}\big[X_1 + X_2 \in [a,b]\big|X_1 - X_2 = 0 \big] = \mathop {\lim }\limits_{h \to 0^ + } {\rm P}\big[ X_1 + X_2 \in [a,b]\big|0 < X_1 - X_2 < h \big],
$$
so in this case
$$
{\rm P}\big [ X_1+X_2 \in [a,b] \big| X_1=X_2 \big ] = \mathop {\lim }\limits_{h \to 0^ + } \frac{{{\rm P} [ X_1 + X_2 \in [a,b],0 < X_1 - X_2 < h]}}{{{\rm P} [0 < X_1 - X_2 < h]}}.
$$
On the other hand, you may (less likely) wish to interpret that as
$$
{\rm P} \big[ X_1 + X_2 \in [a,b] \big|X_1 / X_2 = 1 \big] = \mathop {\lim }\limits_{h \to 0^ + } {\rm P}\big[ X_1 + X_2 \in [a,b] \big|1 < X_1 / X_2 < 1+h \big],
$$
leading to
$$
{\rm P}\big [ X_1+X_2 \in [a,b] \big| X_1=X_2 \big ] = \mathop {\lim }\limits_{h \to 0^ + } \frac{{{\rm P} [ X_1 + X_2 \in [a,b],0 < X_1 - X_2 < X_2 h]}}{{{\rm P} [0 < X_1 - X_2 < X_2 h]}}.
$$
However, it should not be surprising that the two interpretations lead to different probabilities, since we have limits of the form $a_i (h) / b_i (h)$, $i=1,2$, with $a_i(h),b_i(h) \to 0$ as $h \to 0^+$. Clearly, ${\rm P} [0 < X_1 - X_2 < h]$ and ${\rm P} [0 < X_1 - X_2 < X_2 h]$ are not expected to have the same behavior as $h \to 0^+$. So, the only problem was how to interpret conditioning on $X_1 = X_2$, and this is up to you. In general, however, there is no such problem; you just use the formula $f_{Y|X} (y|x) = \frac{{f_{X,Y} (x,y)}}{{f_X (x)}}$, to find the conditional density function of $Y$ given $X=x$ (where ${f_{X,Y}}$ is the joint density function of $X$ and $Y$). The conditional distribution function of $Y$ given $X=x$ is obtained by integrating the conditional density.
EDIT: The conditional density function of $X_1+X_2$ given $X_1-X_2=0$ (where $X_1$ and $X_2$ are independent exponential($\lambda$) rv's), when interpreting the conditioning with respect to the random variable $X_1-X_2$, is, by Eq. (11) in the book, the exponential$(\lambda)$ density function, $\lambda e^{-\lambda y}$, $y \geq 0$. Hence, ${\rm P}\big[X_1 + X_2 \in [a,b]\big|X_1 - X_2 = 0 \big]$ is given by
$$
\mathop {\lim }\limits_{h \to 0^ + } {\rm P}\big[ X_1 + X_2 \in [a,b]\big|0 < X_1 - X_2 < h \big] = \int_a^b {\lambda e^{ - \lambda y} \,{\rm d}y} = e^{ - \lambda a} - e^{ - \lambda b}.
$$
On the other hand, the conditional density function of $X_1+X_2$ given $X_1/X_2=1$, when interpreting the conditioning with respect to the random variable $X_1/X_2$, is, by Eq. (10) in the book, the ${\rm Gamma}(2,\lambda)$ density function, $\lambda ^2 ye^{ - \lambda y}$, $y \geq 0$ (i.e., the density function of $X_1+X_2$; this is since $X_1+X_2$ and $X_1/X_2$ are independent). Hence, ${\rm P} \big[ X_1 + X_2 \in [a,b] \big|X_1 / X_2 = 1 \big]$ is given by
$$
\mathop {\lim }\limits_{h \to 0^ + } {\rm P}\big[ X_1 + X_2 \in [a,b] \big|1 < X_1 / X_2 < 1+h \big] = \int_a^b {\lambda ^2 ye^{ - \lambda y} \,{\rm d}y} =
(\lambda a + 1)e^{ - \lambda a} - (\lambda b + 1)e^{ - \lambda b}.
$$
Both results agree with numerical simulations (approximating the probabilities for small values of $h$).
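For instance, here is a Monte Carlo sketch of the two interpretations (with $\lambda=1$, $a=0.5$, $b=1.5$, $h=0.01$, all chosen by me):

    import numpy as np

    rng = np.random.default_rng(1)
    lam, a, b, h = 1.0, 0.5, 1.5, 0.01
    x1 = rng.exponential(1/lam, 2000000)
    x2 = rng.exponential(1/lam, 2000000)
    s, d, r = x1 + x2, x1 - x2, x1 / x2

    in_ab = (s >= a) & (s <= b)
    diff_cond  = np.mean(in_ab & (d > 0) & (d < h)) / np.mean((d > 0) & (d < h))
    ratio_cond = np.mean(in_ab & (r > 1) & (r < 1 + h)) / np.mean((r > 1) & (r < 1 + h))

    print(diff_cond,  np.exp(-lam*a) - np.exp(-lam*b))                          # ~0.383
    print(ratio_cond, (lam*a + 1)*np.exp(-lam*a) - (lam*b + 1)*np.exp(-lam*b))  # ~0.352

The two conditionings really do converge to different answers. |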
Is there a computer programm or CAS (maybe GAP?) that can calculate with projective (indecomposable) A-modules (A is a finite dimensional k-algebra)? | GAP's meataxe works with finite dimensional $k[G]$-modules for finite groups $G$ and finite fields $k$. However, the way you specify these modules $V$ does not actually keep track of $G$, only of the image of a generating set of $G$ in $\operatorname{Aut}_k(V) = \operatorname{GL}(V)$.
Conversely given any finite field $k$, finite dimensional $k$-vector space $V$, and matrices $g_i \in \operatorname{GL}(V)$, we can define a finite group $G=\langle g_i \rangle$ and $k[G]$-module $V$, such that the images of its generators in $\operatorname{GL}(V)$ are precisely the $g_i$.
Every $n$-dimensional $k$-algebra $A$ with identity, for $n < |k|$, is generated by invertible matrices, and so is a $k[G]$-module for $G=A^\times$.
If your field is infinite, or if you just happen to be studying some sort of very diagonal algebra over a small field, then the meataxe does not apply, but for most $k$-algebras, $k$ finite, the meataxe should be ok.
Given a generating set X of $A$ consisting of invertible matrices over the field k, just use m:=GModuleByMats(X,k);.
The projective indecomposables are given by SMTX.Indecomposition(m) and to check if two projective indecomposables are isomorphic it suffices to check their heads,
gap> h1 := SMTX.InducedActionFactorModule( m1, SMTX.BasisRadical( m1 ) );;
gap> h2 := SMTX.InducedActionFactorModule( m2, SMTX.BasisRadical( m2 ) );;
gap> m1_iso_m2 := SMTX.Isomorphism( h1, h2 ) <> fail;
true
If you really want GAP to try harder, you can ask it to find isomorphisms between the actual modules too:
gap> m1_iso_m2 := SMTX.Isomorphism( m1, m2 ) <> fail;
true |
I want to simplify the stochastic integral by change variable | Hint: Use Itô's isometry, i.e. $$\mathbb{E}\left[ \left( \int_0^t f(s) \, dB_s \right) \left( \int_0^t g(s) \, dB_s \right) \right] = \mathbb{E} \left( \int_0^t f(s) g(s) \, ds \right).$$
No, in general we cannot expect this. Just consider e.g. $f(s) = s$, then, by Itô's formula, $$\int_0^t s \, dB_s = t B_t - \int_0^t B_s \, ds.$$ We cannot expect to write the right-hand side as a function of $B_t$ since the right-hand side does depend on the sample path of the Brownian motion up to time $t$. |
How do I show the following modification of the Counting formula of zeros and poles? | The argument principle is based on the fact that the function $\;\frac {f'}f\;$ has a simple pole with residue $\;\eta\;$ at any zero of order $\;\eta\;$ of $\;f\;$ , and a simple pole with residue $\;-\eta\;$ at any pole of order $\;\eta\;$ of $\;f\;$ , and then the winding number and etc.
Well, the generalization is then very simple (following Ahlfors' "Complex Analysis") , since exactly the same argument as above goes for $\;g\frac{f'}f\;$ , namely: it has a simple pole of residue $\;\eta g(a)\;$ at any pole $\;a\;$ of order $\;\eta\;$ of $\;f\;$ and etc. for the poles of $\;f\;$ . |
$H_1(X,X-N,\mathbb Z_2)=\mathbb Z_2$, proof? | This might not be the easiest way to go around it and I'd be very happy to see a more elementary solution, but I'm sharing this since it is natural from the point of view of Thom spaces (and also proves a stronger result). Note that you cannot really hope for a "three-line proof" in the topological case, since your statement implies the separation theorem for circles in the plane.
I assume some regularity assumptions that provide the existence of a tubular neighbourhood $N \subseteq U \simeq N_{X/N}$; for example, it's enough to assume that $X, N$ are smooth and $N \hookrightarrow X$ is an embedding. By excision, we have
$H_{i}(X, X \setminus N) \simeq H_{i}(U, U \setminus N) \simeq H_{i}(N _{X/N}, N_{X/N} \setminus N) \simeq H_{i}(DN _{X/N}, DN _{X/N} \setminus N)$,
where $DN _{X/N}$ is the associated unit disk bundle. If $SN _{X/N}$ is the associated unit sphere bundle, then $(DN _{X/N}, SN _{X/N}) \rightarrow (DN _{X/N}, DN _{X/N} \setminus N)$ is a homotopy equivalence of pairs by radial retraction. Now, the inclusion $SN _{X/N} \hookrightarrow DN _{X/N}$ is a cofibration and so we have
$H_{i}(DN _{X/N}, SN_{X/N}) \simeq \tilde{H}_{i}(DN_{X/N} / SN_{X/N})$,
where $TN _{X/N} = DN_{X/N} / SN_{X/N}$ is by definition the Thom space of the normal bundle. Since you're working over $\mathbb{Z}_{2}$, every bundle is orientable and the Thom isomorphism theorem applies to tell us that
$H^{i}(N) \simeq \tilde{H}^{i+1}(TN _{X/N})$,
which immediately implies your statement, since $\mathbb{Z}_{2}$ is a field and so homology and cohomology are dual. |
Describe all subrings of the ring of integers | I am going to admit non-unital subrings of $\Bbb Z$ to my discussion; if every subring contains $1$, then every subring is equal to $\Bbb Z$ and there is not much more to be written.
I claim that if
$S \subseteq \Bbb Z \tag 1$
is an additive subgroup, then it is of the form
$S = s \Bbb Z, \; \tag 2$
for some $s \in \Bbb Z$; for if
$S \ne \{ 0 \}, \tag 3$
there is some $0 \ne s \in S$; since $S$ is a subgroup,
$s \in S \Longleftrightarrow -s \in S, \tag 4$
so we may without loss of generality assume that
$s > 0; \tag 5$
since $S$ has positive elements, it has a smallest such; we may take $s$ to be this smallest positive element. Then clearly
$ns \in S, \; \forall n \in \Bbb Z; \tag 6$
this may be seen by simply adding $s$ or $-s$ to itself $n$ times. Thus
$s\Bbb Z \subset S; \tag 7$
now if there is some
$t \in S \setminus s \Bbb Z, \tag 8$
we may let $m \in \Bbb Z$ be the largest integer with
$ms < t; \tag 9$
then
$0 < t - ms < s; \tag{10}$
we cannot have $t - ms = 0$ or $t - ms = s$ since then $t \in s \Bbb Z$; but since $t \in S$,
$t - ms \in S, \tag{11}$
which contradicts the hypothesis that $s$ is the smallest positive element of $S$. Therefore (2) holds.
Since every subring of $\Bbb Z$ is also an additive subgroup, we have shown that every subring of $\Bbb Z$ is of the form $s \Bbb Z$, which sets are clearly closed under addition and multiplication.
Also, every additive subgroup $S$, being of the form $S = s \Bbb Z$, is a subring as well since it is multiplicatively closed: if $as, bs \in S$, then
$(as)(bs) = abs^2 = (abs)s \in s \Bbb Z$; the other ring axioms, associativity, commutativity, etc., are simply inherited from the ring $\Bbb Z$.
For the same reasons, that every subring is an additive subgroup etc., it suffices to find the additive subgroups of $\Bbb Z$.
I guess it is worth pointing out that the sets $S = s \Bbb Z$ are also the ideals of the principal ideal domain $\Bbb Z$.
One of the terms in the open form of $(3x^2+2x+y+4z)^{10}$ is randomly chosen, what is the probability that the chosen term contains $x^7$? | Just enumerate the possibilities:
A term contains $x^7$ exactly when $2k_1+k_2=7$, where $k_1,k_2$ count the factors $3x^2$ and $2x$; this forces $(k_1,k_2)\in\{(0,7),(1,5),(2,3),(3,1)\}$.
$k_1=0, k_2=7$: There are 4 possibilities for $(k_3, k_4)$ (in order that $k_1+k_2+k_3+k_4=10$), i.e. $(0, 3), (1, 2), (2, 1), (3, 0)$.
$k_1=1, k_2=5$: There are 5 possibilities,
and so on (6 and 7 possibilities for the remaining two cases).
The above enumeration generates $4+5+6+7=22$ terms. Since there are $\binom{13}{3} = 286$ terms in total, we get $22/286 = 11/143 = 1/13$
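If you want to double-check the count by brute force, here is a short sympy sketch (my addition, not part of the original argument):

from sympy import symbols, Poly, Rational

x, y, z = symbols('x y z')
p = Poly((3*x**2 + 2*x + y + 4*z)**10, x, y, z)

monoms = p.monoms()                                 # exponent tuples (i, j, k) for x, y, z
total = len(monoms)                                 # 286 = C(13, 3); no coefficients cancel
with_x7 = sum(1 for (i, j, k) in monoms if i == 7)  # 22

print(total, with_x7, Rational(with_x7, total))     # 286 22 1/13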
Cardinality of the class of $G_\delta$ subset of $\mathbb{R}$ of Lebesgue measure zero | To show that there are at most $2^\omega$ such sets, it suffices to show that there are at most $2^\omega$ $G_\delta$-sets. $\Bbb R$ has a countable base $\mathscr{B}$ (e.g., the family of open intervals with rational endpoints), and every open set is the union of some subfamily of $\mathscr{B}$, so there are at most $|\wp(\mathscr{B})|=2^\omega$ open sets. Each $G_\delta$ is the intersection of a countable family of open sets, and there are only $(2^\omega)^\omega=2^{\omega\cdot\omega}=2^\omega$ such families, so there are at most $2^\omega$ $G_\delta$-sets in $\Bbb R$. |
Condition of concavity for a function | Firstly, $f(x)\ge\frac{1}{2}f(x-y)+\frac{1}{2}f(x+y)$ is no different from $f(\frac{x+y}{2})\ge\frac{1}{2}f(x)+\frac{1}{2}f(y)$.
Secondly, Midpoint-Convex and Continuous Implies Convex
Thirdly, A Lebesgue measurable function on an interval C is concave if and only if it is midpoint concave: https://en.wikipedia.org/wiki/Concave_function |
Why does orthogonalizing the monomials give Legendre polynomials? | The thing here is that "same orthogonality and completeness as ... does".
Orthogonality here always relates to some specified inner product. The main difference as far as I can see is in the definition of inner product for which these families of polynomials are designed. Those are different inner products.
You are orthogonalizing using the inner product with respect to which the Legendre polynomials are designed to be orthogonal:
$$\left<f|g\right> = \int_{-1}^1 f(x)g(x) {dx}$$
Try changing the scalar product to:
$$\left<f|g\right> = \int_{-1}^1 f(x)g(x) \cdot \frac{dx}{\sqrt{1-x^2}}$$
It is the factor $\frac 1{\sqrt{1-x^2}}$ which supposedly makes the big difference in the results.
I would not be surprised if the new choice would generate Chebyshev polynomials if you keep everything else the same in the Gram-Schmidt procedure. |
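If you want to test this concretely, here is a small sympy sketch of Gram-Schmidt under each inner product (my addition; normalization is ignored, so the outputs are the monic versions):

from sympy import symbols, integrate, sqrt, simplify

x = symbols('x')

def gram_schmidt(inner, n=4):
    basis = []
    for k in range(n):
        p = x**k
        for q in basis:
            p -= inner(p, q) / inner(q, q) * q   # subtract the projection onto q
        basis.append(simplify(p))
    return basis

legendre_ip  = lambda f, g: integrate(f * g, (x, -1, 1))
chebyshev_ip = lambda f, g: integrate(f * g / sqrt(1 - x**2), (x, -1, 1))

print(gram_schmidt(legendre_ip))   # [1, x, x**2 - 1/3, x**3 - 3*x/5]
print(gram_schmidt(chebyshev_ip))  # [1, x, x**2 - 1/2, x**3 - 3*x/4]

Up to scaling, these are exactly the Legendre and Chebyshev polynomials, confirming the guess above.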
Elementary Set Theory Proof/Identity Relation | Let $R$ be a relation on $A$, ie $R\subset A\times A$.
The identity relation on $A$ is $I = \{(x,x)~|~x\in A\}$.
"$R$ is reflexive" means exactly $\forall x\in A,~(x,x)\in R$.
"$I \subset R$ means exactly $\forall (x,y)\in I,~(x,y)\in R$. But everyone in $I$ is a $(x,x)$, so $I\subset R$ means $\forall x\in A,~(x,x)\in R$. |
How to construct the matrix of a kernel? | The map $\operatorname{ker} A$ is essentially the identity map $\operatorname{Id}:V\to V$ restricted to the subspace $\operatorname{Ker} A\subseteq V$.
If you want to represent it by a matrix, you need bases for the domain and the codomain. As a restriction of the identity map, its matrix is going to be $$\pmatrix{I_{\dim \ker A}\\0}$$ for the most natural choices of bases. Note that the $0$ submatrix here has dimensions $(\dim V-\dim \ker A)\times (\dim \ker A)$.
I'll give more details as required in the comments. First, in the usual way matrix multiplication works, the equation is not $\Bbb{K\cdot A}=0$ but rather $\Bbb{A\cdot K}=0$. Then, if the basis of $V$ is fixed (call it $\mathcal B$) and so is the matrix $\Bbb A$, then to find such a matrix $\Bbb K$ people usually solve the linear system where each row of $\Bbb A$ is an equation. Once you have a system of $k=\dim \ker A$ linearly independent solutions, let's call them $e_1,\ldots, e_k$, of which you computed the coordinates $e_{ij}$ in your original basis $\mathcal B$, then $(e_{ij})$ are the entries of the desired matrix $\Bbb K$, in the bases $e_1,\ldots , e_k$ of $\ker A$ and $\mathcal B$ of $V$. |
in type theory does (x:A) imply ((x:A):A) | This depends on the type-system, nevertheless, it is a common feature. However, in some type-systems it is possible to write something like:
$$(\lambda x.\ x) : (\alpha \to \mathtt{int})$$ (note that here the most general type for $\lambda x.\ x$ is $\alpha \to \alpha$) which would result in a value of type $\mathtt{int} \to \mathtt{int}$.
I hope this helps ;-) |
How do I calculate $(1+i)^{3-4i}$ in normal form $z = x + iy$ and in the exponential representation $ z =\left | z \right |\exp (i \arg (z))$? | You want to compute the logarithms of $1+i$; note that
$$
1+i=\sqrt{2}e^{i\pi/4}
$$
so the logarithms of $1+i$ are
$$
\frac{1}{2}\log2+\left(\frac{\pi}{4}+2k\pi\right)i
$$
for $k$ any integer.
Therefore the determinations of $(1+i)^{3-4i}$ are
$$
\exp\left((3-4i)\left(\frac{1}{2}\log2+\left(\frac{\pi}{4}+2k\pi\right)i\right)\right)
$$
I leave the final computations to you. The value for $k=0$ is called the principal value. |
Tennis match probability - is my logic incorrect? | Your working: $\displaystyle \small \left(\frac{2}{3}\right)^{3}+ {4\choose 1}\ \cdot \frac{1}{3} \cdot \left(\frac{2}{3}\right)^{3}+{5\choose2}\left(\frac{1}{3}\right)^{2}\left(\frac{2}{3}\right)^{3}$
Please see the second term. You are allowing any $1$ of the $4$ sets to be won by player B. But if the match lasts $4$ sets, player A cannot have won the first three sets (the match would already be over), so B's win must fall in the first $3$ sets. The same logic applies to the next term.
So it should be,
$\displaystyle \small \left(\frac{2}{3}\right)^{3}+ {3\choose 1}\ \cdot \frac{1}{3} \cdot \left(\frac{2}{3}\right)^{3}+{4\choose2}\left(\frac{1}{3}\right)^{2}\left(\frac{2}{3}\right)^{3}$ |
Reversing the Order of Integration and Summation | The more general question is about interchanging limits and integration. With infinite sums, this is a special case, because by definition $\sum_{n=1}^\infty f_n(x) = \lim_{N \to \infty} \sum_{n=1}^N f_n(x)$. So because one can always interchange finite sums and integration, the only question is about interchanging the limit and the integration.
Writing what I just said in symbols, we want conditions such that
$$\sum_{n=1}^\infty \int_X f_n(x) dx = \int_X \sum_{n=1}^\infty f_n(x) dx.$$
Expanding the definition:
$$\lim_{N \to \infty} \sum_{n=1}^N \int_X f_n(x) dx = \int_X \lim_{N \to \infty} \sum_{n=1}^N f_n(x) dx.$$
Now one interchange is free:
$$\lim_{N \to \infty} \sum_{n=1}^N \int_X f_n(x) dx = \lim_{N \to \infty} \int_X \sum_{n=1}^N f_n(x) dx.$$
The issue is with the last interchange, which is what most of the rest of this answer is about.
The most general result of this type is the Vitali convergence theorem. It says that if $f_n$ is a sequence of measurable functions, $f_n \to f$ pointwise, $f_n$ is uniformly integrable, and $f_n$ is tight, then $\int_X f_n(x) dx \to \int_X f(x) dx.$ (Here $X$ is the set over which we integrate.) You can look up the formal definitions of "uniformly integrable" and "tight" yourself. Roughly speaking they mean that you cannot "compress mass into a point" and that you can't "move mass to infinity". These intuitions are illustrated by the failure of the conclusion of the theorem for the sequences $f_n(x)=\begin{cases} n & x \in [0,1/n] \\ 0 & \text{otherwise} \end{cases}$ on $[0,1]$ and $g_n(x)=\begin{cases} 1 & x \in [n,n+1] \\ 0 & \text{otherwise} \end{cases}$ on the whole line.
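A quick numeric illustration of these two failure modes (my own sketch, assuming scipy is available): each sequence keeps total integral $1$ while converging pointwise to $0$, so the limit of the integrals is $1 \neq 0$.

from scipy.integrate import quad

def f(n):  # spike of height n on [0, 1/n]: mass compressed into a point
    return lambda x: n if x <= 1.0 / n else 0.0

def g(n):  # unit bump on [n, n+1]: mass escaping to infinity
    return lambda x: 1.0 if n <= x <= n + 1 else 0.0

for n in (1, 10, 100):
    print(quad(f(n), 0, 1, points=[1.0 / n])[0],
          quad(g(n), 0, n + 2, points=[n, n + 1])[0])  # both stay at 1.0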
The Vitali convergence theorem is general but it is not convenient. The result with perhaps the best balance between generality and convenience to check is the dominated convergence theorem. This says that if $f_n \to f$ pointwise and there is a fixed integrable function $g$ such that $|f_n(x)| \leq g(x)$ for all $n$ and $x$, then $\int_X f_n(x) dx \to \int_X f(x) dx.$
One relatively basic result is the monotone convergence theorem, which says that if $f_n$ is an increasing sequence of nonnegative functions and $f_n \to f$ pointwise, then $\int_X f_n(x) dx \to \int_X f(x) dx$. In particular this holds whether or not $f$ is actually integrable (if it isn't, then the limit of the integrals is $+\infty$). This is also applicable to the case when $f_n$ are nonpositive and decrease to $f$ (this is easy to prove, since $\int_X -g(x) dx = -\int_X g(x) dx$). This is useful for summation, because if $f_n(x) \geq 0$ then $g_N(x)=\sum_{n=1}^N f_n(x)$ is an increasing sequence of nonnegative functions.
Finally in the special case of interchanging summation and integration, one can apply the abstract version of the Fubini-Tonelli theorem. This is because summation can be identified as integration with respect to the counting measure. As a result, if either
$$\sum_{n=1}^\infty \int_X |f_n(x)| dx < \infty$$
or
$$\int_X \sum_{n=1}^\infty |f_n(x)| dx < \infty$$
then one may interchange summation and integration. (This requires a hypothesis about $X$; because this holds for the case of $\mathbb{R}^n$, I won't state it, since this is already a more advanced writeup than you wanted.) |
Formula and percentages | Your second question is easy to answer: You are more likely to roll a $1$ with $2$ $D20$'s. The probability of that is $1$ minus the probability of not getting a $1$, and so:
$P(1\mid 2D20) = 1-P(\text{no }1) = 1-\frac{19}{20}\cdot\frac{19}{20}=0.0975$
While the probability of rolling a $1$ with one $D16$ is:
$P(1|1D16) = \frac{1}{16} = 0.0625$
For the other question (which has better odds of getting the lower number): that's a little nastier, since you need to take the minimum of the two $D20$ rolls. Intuitively I would expect one of the $2$ $D20$s to 'beat' the $D16$, but you'll have to do the math; a quick simulation is sketched below.
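A Monte Carlo sketch of that comparison (my addition; I count a strict win for the pair of $D20$s, so ties go to the $D16$):

import random

random.seed(1)
n = 1_000_000
wins = sum(
    min(random.randint(1, 20), random.randint(1, 20)) < random.randint(1, 16)
    for _ in range(n)
)
print(wins / n)  # ~ 0.556: the min of two D20s is usually strictly below the D16

For reference, the exact probability of a strict win works out to $1-\frac{1}{6400}\sum_{m=5}^{20} m^2 = 1-\frac{2840}{6400} = 0.55625$.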
Issue with Proof by Contradiction | If it is NOT TRUE that all members of a set of numbers are $\ge15$, that's equivalent to saying at least one of them is less than $15$, not that all of them are less than $15$. |
When is $\int \int_D f(r, \theta) dA = \int f_{\theta} d \theta \int f_r dr$? | If $f(r, \theta)$ can be written as $f(r, \theta) = g(r) \cdot h(\theta)$ and if $D$ is a polar rectangle
$$
D = \left\{ (r, \theta) \, \big| a \leq r \leq b, \, \alpha \leq \theta \leq \beta \right\},
$$
then you have
$$
\iint\limits_D f(r, \theta) \, dA = \int_\alpha^\beta \int_a^b g(r) \cdot h(\theta) r \, dr \, d\theta = \int_\alpha^\beta h(\theta) \, d\theta \, \cdot \, \int_a^b g(r) r \, dr.
$$
Where you run into trouble in trying to write the integral like this is if the function can't be factored in this way (e.g., $f(r, \theta) = r + \theta$), or if the region you want to integrate over is more complicated than a polar rectangle and so your limits of integrations for $r$ are functions of $\theta$, or vice versa. |
When can $f(z)$ be extended to be analytic on $D$? | Holomorphic functions have quite a lot of rigidity. In particular, holomorphic functions cannot have arbitrary growth behaviour at an isolated singularity.
Let $f$ be holomorphic on a punctured disk $\{ z : 0 < \lvert z-a\rvert < R\}$. Consider the following conditions on $f$:
$f$ has a holomorphic extension to the disk $\{ z : \lvert z-a\rvert < R\}$.
$\lim\limits_{z \to a} f(z)$ exists (in $\mathbb{C}$).
There is an $r \in (0,R]$ such that $f$ is bounded on $\{ z : 0 < \lvert z-a\rvert < r\}$.
$\lim\limits_{z\to a}\: (z - a) f(z) = 0$.
Then it is almost trivial to see that each condition implies the following condition(s). But for holomorphic $f$, the weakest of these conditions is strong enough to imply the strongest. Let us prove that.
So assume $(z - a)f(z) \to 0$ as $z\to a$, and consider the function
$$F \colon z \mapsto \begin{cases}(z - a)^2 f(z) &, z \neq a \\ \qquad 0 &, z = a.\end{cases}$$
On the punctured disk, $F$ is the product of two holomorphic functions, and therefore holomorphic. Since already $(z-a)\cdot f(z) \to 0$ as $z \to a$, $F$ is clearly continuous at $a$. And
$$\lim_{z\to a} \frac{F(z) - F(a)}{z - a} = \lim_{z\to a} \frac{(z-a)^2 f(z)}{z-a} = \lim_{z\to a} (z-a)f(z) = 0,$$
so $F$ is complex differentiable at $a$ too, with $F'(a) = 0$. This means $F$ is holomorphic on the full disk $\{ z : \lvert z-a\rvert < R\}$. Therefore, $F$ has a power series expansion about $a$,
$$F(z) = \sum_{n = 0}^\infty a_n (z-a)^n.$$
In this power series expansion, we have $a_n = \frac{1}{n!} F^{(n)}(a)$, and we saw that $F(a) = F'(a) = 0$, so in fact
$$F(z) = \sum_{n = 2}^\infty a_n (z-a)^n = (z-a)^2 \sum_{n = 0}^\infty a_{n+2}(z-a)^n,$$ and therefore
$$f(z) = \frac{F(z)}{(z-a)^2} = \sum_{n = 0}^\infty a_{n+2}(z-a)^n\tag{$\ast$}$$
for $0 < \lvert z-a\rvert < R$.
But the right hand side of $(\ast)$ clearly defines a holomorphic function on the full disk $\{ z : \lvert z-a\rvert < R\}$, and thus yields the desired holomorphic extension of $f$.
Although for the holomorphic extensibility of $f$ the boundedness near $a$, and the existence of $\lim\limits_{z\to a} f(z)$ are clearly necessary, a formally weaker condition is in fact sufficient. Such is the power of complex analyticity. |
Weak$^*$ convergence on dense subspace of Hilbert space | No. Let $H=\ell^2(\mathbb N)$, and $V$ the subspace of sequences with finitely many nonzero elements. Denote by $\{e_n\}$ the canonical basis, and let
$$
T_n(x)=n\,x_n.
$$
Then, for any $x\in V$, eventually $x_n=0$, so $T_n(x)\to0$. On the other hand, if $x=(1/n)_n$, then $T_n(x)=1$ for all $n$.
The assertion becomes true if the norms of the $T_n$ are bounded. If $\|T_n\|\leq c$ for all $n$, then given $\varepsilon>0$ and $w\in H$ choose $v\in V$ with $\|v-w\|<\varepsilon$. Then
$$
|T_n(w)|\leq|T_n(v)|+|T_n(v-w)|\leq|T_n(v)|+c\varepsilon.
$$
Thus $\limsup_n|T_n(w)|\leq c\varepsilon$ for all $\varepsilon>0$, which shows that the limit exists and is zero. |
Parabolas Integration Question | You wish to solve
$$\int_0^{x_0} 1-(1+c)x^2 \ dx = \int_{x_0}^1 (c+1)x^2-1 \ dx$$
where
$$x_0=\dfrac{1}{\sqrt{c+1}}$$
which is found by solving $1-x^2=cx^2$ and noting $x_0>0$. |
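If the goal is to find the value of $c$ that balances the two areas, a sympy sketch finishes the algebra (my addition; it returns $c=2$):

from sympy import symbols, integrate, solve, sqrt

x, c = symbols('x c', positive=True)
x0 = 1 / sqrt(c + 1)

eq = (integrate(1 - (1 + c)*x**2, (x, 0, x0))
      - integrate((c + 1)*x**2 - 1, (x, x0, 1)))
print(solve(eq, c))  # [2]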
Iinearly independent solutions of homogeneous system of linear equations | If you reduce the coefficient matrix, you should find that the last 2 equations are a multiple of the first, so the system of equations reduces to $$ x + 2y + 4z=0 $$
As you noticed, if $y=z=0$, then $x=0$. So $(0,0,0)$ is a particular solution of the system.
Now we ask the question: what if $y$ and $z$ were not $0$? We can answer that question by taking first $y\neq0$ (with $z=0$), then $z\neq0$ (with $y=0$). You will notice that the vectors we end up with are NOT multiples of each other, so they are "linearly independent".
1) If $y=c\neq0\in\mathbb{R}$ and $z=0$, then $x=-2c$ and $(-2c,c,0)=c(-2,1,0)$ is a solution for any $c\in\mathbb{R}$.
2)If $z=d\neq0\in\mathbb{R}$ and $y=0$, then $x=-4d$ and $(-4d,0,d)=d(-4,0,1)$ is a solution for any $d\in\mathbb{R}$.
As mentioned earlier, there do not exist $c$ and $d$, not both zero, such that $c(-2,1,0)+d(-4,0,1)=0$. In particular the vectors are not multiples of each other. This condition is the definition of linear independence.
Recurrence Master Theorem Question with asymptotic Upper and Lower Bounds | Suppose we are trying to solve the recurrence
$$T(n) = 4 T(\lfloor n/2 \rfloor) + n^2+n$$
where $T(0)=0.$
Let the binary digits of $n$ be given by
$$n = \sum_{k=0}^{\lfloor \log_2 n \rfloor} d_k 2^k.$$
Now unroll the recurrence to get the following exact formula:
$$T(n) =
\sum_{j=0}^{\lfloor \log_2 n \rfloor}
4^j \sum_{k=j}^{\lfloor \log_2 n \rfloor}
\left(d_k 2^{k-j}\right)^2
+ \sum_{j=0}^{\lfloor \log_2 n \rfloor}
4^j \sum_{k=j}^{\lfloor \log_2 n \rfloor}
d_k 2^{k-j}.$$
To get an upper bound consider the string of one digits, which
gives
$$T(n) \le
\sum_{j=0}^{\lfloor \log_2 n \rfloor}
4^j \sum_{k=j}^{\lfloor \log_2 n \rfloor}
\left(2^{k-j}\right)^2
+ \sum_{j=0}^{\lfloor \log_2 n \rfloor}
4^j \sum_{k=j}^{\lfloor \log_2 n \rfloor}
2^{k-j}.$$
This simplifies to
$$\lfloor \log_2 n \rfloor
\times 4^{\lfloor \log_2 n \rfloor +1}
+ 2^{\lfloor \log_2 n \rfloor +1}.$$
For a lower bound consider a one digit followed by zeros, which gives
$$T(n) \ge
\sum_{j=0}^{\lfloor \log_2 n \rfloor}
4^j
\left(2^{\lfloor \log_2 n \rfloor-j}\right)^2
+ \sum_{j=0}^{\lfloor \log_2 n \rfloor}
4^j
2^{\lfloor \log_2 n \rfloor-j}.$$
This simplifies to
$$(3+\lfloor \log_2 n \rfloor) \times 4^{\lfloor \log_2 n \rfloor}
- 2^{\lfloor \log_2 n \rfloor}.$$
Joining the upper and the lower bound we see that the dominant term in
both is
$$\lfloor \log_2 n \rfloor \times 4^{\lfloor \log_2 n \rfloor}$$
and therefore the asymptotics are
$$\Theta
\left(\lfloor \log_2 n \rfloor \times 4^{\lfloor \log_2 n \rfloor}\right)
= \Theta\left(\log_2 n \times 2^{2\lfloor \log_2 n \rfloor} \right)
\\= \Theta\left(\log n \times 2^{2 \log_2 n } \right)
= \Theta\left(\log n \times n^2\right).$$
The exact result confirms precisely what the Master Theorem would have predicted.
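As an empirical sanity check (my addition, in Python), one can tabulate the recurrence directly and watch the ratio against $n^2 \log_2 n$ stay bounded:

from functools import lru_cache
from math import log2

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 4 T(floor(n/2)) + n^2 + n with T(0) = 0
    return 0 if n == 0 else 4 * T(n // 2) + n**2 + n

for n in (2**10, 2**14, 2**18, 2**22):
    print(n, T(n) / (n**2 * log2(n)))  # ratios drift slowly toward 1 for powers of 2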
This MSE link points to a series of similar calculations. |
Periodic cycles of the Poincare map | In simple terms n-cycle corresponds to a periodic orbit as well. It is easy to see because n-cycle gives you n fixed points of n-th power of the Poincare map.
Pictures like this are very common (here you have 2-cycle) |
Paper in Calculus of Variation: Question on why $\;\inf \{ E_1(u)\}\;$ can't be less than $\;\min \{E_1(u)\}\;$ | It is because of your definition of the minimzer $e_1$ by:
\begin{align}
e_1 = \min\{ E_1(v): v \in S(a,b) \}
\end{align}
Now if you assume that in the above corollary equality holds, then by the property of the infimum you can find a sequence $u_n$ such that:
\begin{align}
E_1(u_n) \longrightarrow e_1 = E_1(z) \quad (n \rightarrow \infty) \text{ and $d(u_n,z) \geq \beta$ for some $z \in \mathcal{Z}$ }
\end{align}
If you find a sequence $u_n$ which converges pointwise to $z$, then in particular the distance to $\mathcal{Z}$ tends to zero, because we have taken $z \in \mathcal{Z}$.
Probability for not finding a product | If we go into a store, let $S$ stand for success, the store had it, and let $F$ stand for failure. Note that the probability of success is $\frac{1}{4}$ and the probabiliy of failure is $\frac{3}{4}$
As you had it, $\Pr(X=1)=\Pr(S)=\frac{1}{4}$.
$\Pr(X=2)=\Pr(FS)=\frac{3}{4}\cdot \frac{1}{4}=\frac{3}{16}$.
$\Pr(X=3)=\Pr(FFS)=\frac{3}{4}\cdot\frac{3}{4}\cdot\frac{1}{4}=\frac{9}{64}$.
$\Pr(X=4)=\Pr(FFFS)=\frac{27}{256}$.
For $\Pr(X=5)$ things are a little different, because of the "give up" condition. There are two ways to find $\Pr(X=5)$. We could add up the probabilities previously obtained, and subtract the sum from $1$. Or else we can note that $X=5$ if we have four failures in a row. So $\Pr(X=5)=\frac{81}{256}$.
Mean and variance are somewhat tedious calculations. I will assume that, now that you have the distribution, you can carry them out.
We turn to the conditional probability problem. First we do it mechanically.
Let $A$ be the event "looked without success in two stores" and $B$ the event "won't find." We want $\Pr(B|A)$, which by definition is
$$\frac{\Pr(A\cap B)}{\Pr(A)}.$$
The top is just the probability of $FFFFF$. This is $(3/4)^5$. The bottom is the probability of $FF$. This is $(3/4)^2$. Divide. We get $(3/4)^3$.
Remark: The answer to the conditional probability problem is clear without all the symbols. Given that we failed twice, the probability we will strike out completely is $(3/4)^3$, since we now are going only to three stores. |
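A Monte Carlo sketch of the conditional probability (my addition; I model $5$ independent store visits with success probability $1/4$ each):

import random

random.seed(0)
hits = total = 0
for _ in range(1_000_000):
    visits = [random.random() < 0.25 for _ in range(5)]
    if not visits[0] and not visits[1]:   # failed in the first two stores
        total += 1
        hits += not any(visits[2:])       # ...and never finds it at all
print(hits / total, 0.75**3)              # both ~ 0.4219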
My LinearSystemPlot function keeps disappearing when using with(Student[LinearAlgebra]) | Your syntax for the second statement is invalid. You need to index infolevel on the left-hand-side. It should be either,
infolevel[Student:-LinearAlgebra] := 1:
or,
infolevel[Student[LinearAlgebra]] := 1:
You can generally use Student:-LinearAlgebra instead of Student[LinearAlgebra] in both. I prefer the first, and it's somewhat unfortunate that so many Help examples use the alternate syntax, IMO. Note that wasn't the reason why your infolevel attempt failed.
So your first statement would then be,
with(Student:-LinearAlgebra):
Note that if you intend on calling LinearSystemPlot(...) in that short form then you always need to reload the package after any restart. Your intermittent results might be due to your doing a restart without a reload of the package. You haven't given us code that reproduces your intermittent failure of the call to LinearSystemPlot, though, or said what was actually returned when it didn't work.
Also, make sure that you don't end a statement with a call to LinearSystemPlot with a full colon, if you expect to see a result. The full colon suppresses output from a statement from being displayed.
You should always be able to access the command with its fully qualified "long form", ie,
:-Student:-LinearAlgebra:-LinearSystemPlot(...);
or (if you haven't written your own Student package),
Student:-LinearAlgebra:-LinearSystemPlot(...);
You might have been having difficulty in reading and copying the examples from the Help page in the (default) of 2D-Math input form. Note that the Help window has an icon at its top which allows you to toggle most examples from Help pages from typeset 2D Math input form to 1D Maple Notation (plaintext) form. |
$A\in M_2(\mathbb C)$ and $A $ is nilpotent then $A^2=0$.How to prove this? | If $A$ is nilpotent, then this follows from Cayley-Hamilton: $A^2-\mathrm{tr}(A)A+\det(A)I_2=0$ , and $\mathrm{tr}(A)=\det(A)=0$ since $A$ is nilpotent. |
Prove $f(x)$ is constant. | Let $ I = [a, b] $ be any closed interval and assume, without loss of generality, that $ f $ attains its maximum value $ M $ in the interior; the case where it attains its minimum value instead is similar. Suppose $ f(a) < f(b) $ without loss of generality, I'll obtain a contradiction from this assumption.
Since $ f $ is continuous, $ [a, b] \cap f^{-1}(\{ M \}) $ is a closed and bounded set, therefore it has minimum and maximum elements $ x_{\textrm{min}}, \, x_{\textrm{max}} $. If $ x_{\textrm{min}} = a $ then $ f(a) = M \geq f(b) $, and this contradicts $ f(a) < f(b) $, so we can assume $ x_{\textrm{min}} > a $.
Now, since $ x_{\textrm{min}} $ is the minimum value for which $ f $ attains its maximum value $ M $, it can't attain the value $ M $ anywhere in the interval $ (a, x_{\textrm{min}}) $. Therefore it must instead attain its minimum value $ m $ on the interval $ [a, x_{\textrm{min}}] $ somewhere in the interior. Similar to the above, take the maximum value such that $ f $ attains its minimum $ m $ there, call it $ y_{\textrm{max}} $. If $ y_{\textrm{max}} = x_{\textrm{min}} $ then $ m = M $ and therefore $ f(a) = m = M $ once more, which yields the same contradiction as above with $ f(a) < f(b) $. Therefore we must have $ y_{\textrm{max}} < x_{\textrm{min}} $, in which case $ f $ attains neither its maximum nor its minimum on $ [y_{\textrm{max}}, x_{\textrm{min}}] $ in the interior, yielding a contradiction.
We conclude that $ f(a) < f(b) $ is impossible, and we can similarly show that $ f(a) > f(b) $ is impossible, so we must have $ f(a) = f(b) $. Since $ a, b $ were arbitrary, it follows that $ f $ is constant. |
Please help me to prove that $|f(x)| \le M \Vert x\Vert$ around $0$ when $|f(x)| \le \Vert x \Vert^\alpha$ around $0$ | Lemma: There exist $\delta > 0$ and $M>0$ such that whenever $\Vert x \Vert < \delta$, $|f(x)| \le M \Vert x \Vert$.
Warning: I have not mucked about with this sort of mathematics in a number of years, so please check my argument carefully.
Since $f$ has continuous first partial derivatives at $0$, it is differentiable at $0$.
That is,
$$\lim_{x \to 0} \frac {f(x) - f(0) - x \cdot \nabla f(0)}{\Vert x \Vert} = 0.$$
Since $|f(0)| \le \Vert 0 \Vert^\alpha = 0$, $f(0)=0$.
Thus $$\lim_{x \to 0} \frac {f(x) - x \cdot \nabla f(0)}{\Vert x \Vert} = 0.$$
That is, for any $\epsilon > 0$ there is a $\delta > 0$ such that whenever $\Vert x \Vert < \delta$, $$\left\vert \frac {f(x) - x \cdot \nabla f(0)}{\Vert x \Vert} \right\vert < \epsilon.$$
Rearranging the fraction,
$$\left\vert \frac {f(x)}{\Vert x \Vert} - \frac {x \cdot \nabla f(0)}{\Vert x \Vert} \right\vert < \epsilon.$$
Now the quantity $$q(x) := \frac{x\cdot\nabla f(0)}{\Vert x \Vert}$$ does not depend on the magnitude of $x$, so by a continuity and compactness argument, it is bounded.
Thus the quantity $$\frac {f(x)}{\Vert x \Vert}$$ is also bounded.
$\square$ |
how can I find the period of $f=\sin^4(x)$? | If $T$ is a period of $\sin^4{x},$ then $\forall x \in \mathbb{R}$
$$
\sin^4{(x+T)}-\sin^4{x}=0 ,\\
(\sin^2{(x+T)}-\sin^2{x})(\sin^2{(x+T)}+\sin^2{x})=0.
$$
Since $a^4=b^4$ with $a^2, b^2 \geq 0$ forces $a^2=b^2$, the first factor must vanish for every $x$:
$$
\dfrac{1-\cos(2(x+T))}{2}-\dfrac{1-\cos(2x)}{2}=0, \\
\cos(2x)-\cos{(2(x+T))}=0,\\
-2\sin( 2x+T)\sin(-T)=0. \\
$$
The last equation holds for all $x$ only if $\sin T = 0$, i.e. $T=k\pi$; the smallest positive period is $T=\pi.$
Prove that if $-2 \leq x_0 \leq 2$, then $-2 \leq 3x_0 - x_0^3 \leq 2$. | $$f(x)=3x-x^3 \implies f'(x)=3-3x^2, f'(x)=0 \implies x=0,\pm 1$$
So, evaluating $f$ at the critical points and at the endpoints of $[-2,2]$,
$$f(1)=f(-2)=2,\quad f(-1)=f(2)=-2,\quad f(0)=0 \implies -2\le f(x) \le 2$$
$\mathcal{F}_{T}$, when $T$ is constant. | The answer is yes, I think you got lost in notation. Let $j$ be such that $T=j$ in the whole probability space. As you said, $A\cap[T=n]\in \{\emptyset, A\}$. Moreover, to define $\mathcal{F}_T$, we assume that $\{\mathcal{F}_t\}_t$ is a filtration and take $\mathcal{F}_{\infty}= \vee_{t} \mathcal{F}_t$ (the $\sigma$-algebra generated by the union of the finite time filtrations), hence $ \mathcal{F}_\infty \cap \mathcal{F}_n= \mathcal{F}_n$. Therefore,
$$\mathcal{F}_{T}=\left\{A \in \mathcal{F}_{\infty}: A \cap \Omega \in \mathcal{F}_j, A\cap \emptyset \in \mathcal{F}_n, \forall n \neq j \right\} = \mathcal{F}_j.$$ |
Clearify I understood the question (best aproximation) | Let
$$e= (1,1)- \alpha(1,-1)$$
be the difference, or error, between the vector $(1,1)$ and a vector $\alpha(1,-1)\in \mbox{Span} \{(1,-1)\}$. For best approximation the length of the error should be minimal. That is, you have to find $\alpha$ such that
$$\|e\|_p = \|(1,1)-\alpha(1,-1)\|_p=\bigg(|1-\alpha|^p +|1+\alpha|^p\bigg)^{\frac{1}{p}}$$ is minimal.
Proving that the Gamma function(infinite product) satisfies $\Gamma(n+1)=n!$ | So the question is:
How do we know $\prod_{n=1}^{\infty}\frac{n}{n+1}e^{\frac{1}{n}}$ converges?
$\prod_{n=1}^\infty a_n $ converges $ \iff
\sum_{n=1}^\infty\ln(a_n)$ converges.
So apply the natural log to the product
$$ \begin{align}
&\ln\bigg(\prod_{n=1}^{\infty}\frac{n}{n+1}e^{\frac{1}{n}} \bigg) \\&=\sum_{n=1}^\infty \ln(n)-\ln(n+1)+1/n \\ &=\sum_{n=1}^\infty \bigg( \frac{1}{n} -\int_{n}^{n+1} dt/t \bigg)
\\& =\lim_{N\to \infty}\sum_{n=1}^N\frac{1}{n}-\int_1^{N+1} dt/t \\ & =\gamma
\end{align}$$
And this is the famous Euler Mascheroni Constant. You can find a proof that this converges over here. |
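A quick numeric sketch of the convergence (my addition): the partial products of $\prod \frac{n}{n+1}e^{1/n}$ approach $e^{\gamma} \approx 1.78107$.

from math import exp

gamma = 0.5772156649015329  # Euler-Mascheroni constant
p = 1.0
for n in range(1, 100_001):
    p *= n / (n + 1) * exp(1 / n)
print(p, exp(gamma))  # 1.78106..., 1.78107...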
Is there a convergence for the series $ \sum_{i=0}^{\infty} \frac{(x-y*i)^i}{i!} $? | Since
$$
\begin{align}
\lim_{n\to\infty}\left|\frac{(x-yn)^n}{n!}\right|^{1/n}
&=\lim_{n\to\infty}\left|\,\frac xn-y\,\right|\,\lim_{n\to\infty}\frac n{(n!)^{1/n}}\\
&=|y|\,e\tag{1}
\end{align}
$$
then the Root Test says that the series converges if $|y|\lt\frac1e$.
Regarding the Limit
By the Stolz-Cesàro Theorem,
$$
\begin{align}
\lim_{n\to\infty}\left(\frac{n^n}{n!}\right)^{1/n}
&=\lim_{n\to\infty}\frac{\frac{(n+1)^{n+1}}{(n+1)!}}{\frac{n^n}{n!}}\\
&=\lim_{n\to\infty}\left(1+\frac1n\right)^n\\[9pt]
&=e
\end{align}
$$ |
Show that the $n \times n$ identity matrix is commutative with any $n \times n$ martix using Suffix Notation | Maybe it would be better to write summations:
$$(MI)_{ik}=\sum_{j=1}^n M_{ij}\delta_{jk}=M_{ik}\delta_{kk}+\sum_{j\neq k} M_{ij}\delta_{jk}=M_{ik},$$
and similarly for the right-hand side. Implicitly you are using the fact that two matrices are equal (by definition) if all of their matching entries are equal.
Subgroup relations in $GL(3,\mathbb Z)$ | There is a natural homomorphism from $\operatorname{GL}(3,\mathbb{Z})$ to $\operatorname{GL}(3,\mathbb{F}_3)$ (reduction mod $3$), under which each finite subgroup of $\operatorname{GL}(3,\mathbb{Z})$ maps injectively to a subgroup of $\operatorname{GL}(3,\mathbb{F}_3)$. From the subgroup lattice of the latter, one can obtain the subgroup relations among the finite subgroups of $\operatorname{GL}(3,\mathbb{Z})$.
Measure is continuous and difference quotient is indicator function | It looks like your proof that $H$ is continuous is correct apart from some typographical errors - the key point is that $|H(y) - H(x)| = m(E \cap [x,y]) \leq m([x,y])$.
For the bit about the limit, it is certainly not the case that $E$ is the union of open sets (disjoint or otherwise) - $E$ could be the complement of $\mathbb{Q}$ in $[a,b]$, for instance. Also, the equation need not hold for all $x \in E$: for instance if $x$ is an isolated point of $E$ then $m(E \cap [x, x+h]) = 0$ for $h$ sufficiently small and hence the limit is $0$.
Unfortunately you aren't going to find a straightforward characterization of the points where the limit takes the desired value; no matter how you cut it you're going to have to use some tools. Perhaps the simplest approach is to apply the Lebesgue differentiation theorem to $H$ by expressing it as $H(x) = \int_a^x \chi_E \, dm$. The conclusion of the theorem is that $H'(x) = \chi_E(x)$ for almost every $x$, and this proves what you want since
$$H'(x) = \lim_{h \to 0} \frac{H(x+h) - H(x)}{h} = \lim_{h \to 0} \frac{m(E \cap [x,x+h])}{h}$$ |
Find a prime factor of $7999973$ without a calculator | The thing to notice here is that 7,999,973 is close to 8,000,000. In fact it is $8000000 - 27$. Both of these are perfect cubes. Differences of cubes always factor: $$a^3 - b^3 = (a-b)(a^2+ab+b^2)$$
Here we have $a=200, b=3$, so $a-b= 197$ is a factor. |
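A one-line check of the resulting factorization (my addition):

a, b = 200, 3
print((a - b) * (a**2 + a*b + b**2) == 7999973,  # True
      a - b, a**2 + a*b + b**2)                  # 197 40609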
Proof verification and advice needed for a measure on a semialgebra | Let it be that $A=\bigcup_{i=1}^{k}B_{i}=\bigcup_{j=1}^{m}C_{j}$
where $B_{1},\dots,B_{k}$ are pairwise disjoint elements of $\mathcal{C}$
and also where $C_{1},\dots,C_{m}$ are pairwise disjoint elements
of $\mathcal{C}$ .
$\mu$ is additive on $\mathcal{C}$ and as a semialgebra $\mathcal{C}$
is closed under finite intersections. So the sets $B_i\cap C_j$ are mutually disjoint elements of $\mathcal C$ and we find:
$$\sum_{i=1}^{k}\mu B_{i}=\sum_{i=1}^{k}\sum_{j=1}^{m}\mu\left(B_{i}\cap C_{j}\right)=\sum_{j=1}^{m}\sum_{i=1}^{k}\mu\left(B_{i}\cap C_{j}\right)=\sum_{j=1}^{m}\mu C_{j}$$
This proves that $\bar{\mu}$ is well defined, i.e. independent of the representation of $A$.
Suppose that $A=\bigcup_{r=1}^{n}A_{r}$ where $A_{1},\dots,A_{n}\in\mathcal{A}$
are mutually disjoint.
As elements of the algebra generated by semialgebra $\mathcal{C}$
every $A_{r}$ can be written as $A_{r}=\bigcup_{l=1}^{k_{r}}B_{r,l}$
where the $B_{r,1},\dots,B_{r,k_{r}}$ are mutually disjoint elements
of $\mathcal{C}$.
Then: $$A=\bigcup_{r=1}^{n}A_{r}=\bigcup_{r=1}^{n}\bigcup_{l=1}^{k_{r}}B_{r,l}$$
and: $$\bar{\mu}A=\sum_{r=1}^{n}\sum_{l=1}^{k_{r}}\mu B_{r,l}=\sum_{r=1}^{n}\bar{\mu}A_{r}$$
This proves that $\bar{\mu}$ is finitely additive on $\mathcal{A}$.
Determinant of a matrix corresponding to a linear mapping $T:V \to V$ is positive | One way to define a determinant of a linear transformation that is independent of a basis is by
$$\det(T) = \prod_i \lambda_i = \lambda_1 \lambda_2 \cdots \lambda_n$$
where $\lambda_i$ are the eigenvalues of $T$, counted with algebraic multiplicity.
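A quick numpy sanity check of this basis independence (my addition):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
P = rng.standard_normal((4, 4))   # generically invertible change of basis
B = np.linalg.inv(P) @ A @ P      # same linear map in another basis

print(np.prod(np.linalg.eigvals(A)).real,  # product of eigenvalues
      np.linalg.det(A), np.linalg.det(B))  # all three agree up to rounding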
Is integrability of the almost complex structure redundant in the definition of Kähler manifolds? | If $\omega$ is a non-degenerate two-form on $M$ (not necessarily closed), then there exists an almost complex structure $J$ such that $g(X, Y) := \omega(X, JY)$ is a Riemannian metric compatible with the almost complex structure (i.e. $g(JX, JY) = g(X, Y)$); see Theorem $1.17$ of these notes. We say that such an almost complex structure is compatible with the symplectic form.
If in addition $\omega$ is closed (i.e. $\omega$ is a symplectic form), then we call $(M, \omega, J, g)$ an almost Kähler manifold. So rephrasing the above, we see that for any symplectic manifold, there is a compatible almost complex structure $J$ such that $(M, \omega, J, g)$ is an almost Kähler manifold. So the only difference between a symplectic manifold and an almost Kähler manifold is a choice of compatible almost complex structure (there are many).
Your question can now be rephrased as follows: is there an almost Kähler manifold $(M, \omega, J, g)$ such that $J$ is not integrable? The answer to this question is yes. In the paper Compact Parallelizable Four Dimensional Symplectic and Complex Manifolds by Fernández, Gotay, and Gray, the authors construct examples of compact four-dimensional manifolds which admit symplectic structures but no complex structures (in particular, any choice of almost complex structure compatible with a fixed symplectic form will not be integrable).
In conclusion, the answer to your initial question is no, the integrability condition on the almost complex structure is not redundant in the definition of Kähler manifolds.
Although not necessary to answer the question, you may be interested in the following.
If $\omega(X, Y) = g(JX, Y)$, then "$\nabla J = d\omega\oplus N_J$" where $\nabla$ is the Levi-Civita connection with respect to $g$, and $N_J$ is the Nijenhuis tensor of $J$. The quotation marks are used to indicate this is not a literal equation, but rather a description of a relationship between the quantities involved. More precisely, $\nabla J = 0$ if and only if $d\omega = 0$ and $N_J = 0$. In particular, on an almost Kähler manifold, $J$ is integrable if and only if it is covariantly constant. |
When the integral of products is the product of integrals. | Why are we able to write the integral of products as the product of integrals here?
Assume you have two differentiable functions $f,g$ such that
$$
f'+g'=f'\cdot g' \tag1
$$ by multiplying by $\displaystyle e^{f+g}$ one gets
$$
(f'+g')\cdot e^{f+g}=\left(f'e^{f} \right)\cdot \left(g'e^{g} \right) \tag2
$$ then by integrating both sides
$$
e^{f+g}=\int\left(f'e^{f} \right)\cdot \left(g'e^{g} \right) \tag3
$$ since $\displaystyle e^f=\int\left(f'e^{f} \right) $ and $\displaystyle e^g=\int\left(g'e^{g} \right)$ (choosing the constants of integration to vanish), we have
$$
\int\left(f'e^{f} \right)\cdot \int\left(g'e^{g} \right) =\int\left(f'e^{f} \right)\cdot \left(g'e^{g} \right). \tag4
$$
By taking, $f'=-\dfrac1{x^2}$ and $g'=\dfrac1{1+x^2}$ we have
$$
f'+g'=-\frac1{x^2}+\frac1{1+x^2}=-\frac1{x^2(1+x^2)}=f'g'
$$ which leads to $(4)$ with the given example. |
Coupled Volterra Integral equation | $\displaystyle\int_{0}^{x}(x-t)^{-1/2}dt=\lim_{u\rightarrow x}\int_{0}^{u}(x-t)^{-1/2}dt=\lim_{u\rightarrow x}-(1/2)(x-t)^{1/2}\bigg|_{t=0}^{t=x}=(1/2)x^{1/2}$, and then doing differentiation is no problem.
Leibniz rule still goes through, but needs a proof. The idea is that, the integrand is nonnegative, so improper integrable and Lebesgue integrable are the same, by running the classical proof of Leibniz rule, one uses Lebesgue Dominated Convergence Theorem.
Let $F(x,t)$ be the integrand; note that it is defined only for $t<x$. Now let $G(x)=\displaystyle\int_{0}^{x}F(x,t)dt$; we are to consider $\dfrac{1}{-h}[G(x-h)-G(x)]$ for small $h>0$. Note that we cannot consider the right derivative, because $F(x,t)$ is not defined for $t\in(x,x+h)$. Now
\begin{align*}
\dfrac{1}{-h}[G(x-h)-G(x)]=\frac{1}{-h}\left\{\int_{0}^{x-h}F(x-h,t)-F(x,t)dt\right\}+\dfrac{1}{-h}\left\{\int_{x-h}^{x}F(x,t)dt\right\},
\end{align*}
now the partial derivative of $F(x,t)$ with respect to $x$ is locally bounded for $x$, and the interval $[0,x]$ is bounded, Lebesgue Dominated Convergence Theorem applies. |
Write each expression in the form $ca^pb^q$ | On what grounds did you move $b$ to the top on (c)? It's incorrect. You cannot just multiply by $\frac{b}{1}$ because it pleases you to do so. And, after you suddenly create a factor of $\frac{b}{1}$ ex nihilo, it would have cancelled with the denominator. So both the penultimate and antepenultimate equality signs are incorrect.
$$\begin{align*}
\frac{a(\frac{2}{b})}{\frac{3}{a}} &= \frac{2a}{b}\frac{a}{3}\\
&= \frac{2}{3}\frac{a^2}{b}\\
&= \frac{2}{3}a^2b^{-1}.
\end{align*}$$
(d) is almost correct, except that $1+\frac{1}{2}=\frac{3}{2}$, not $\frac{2}{3}$. So the exponent of $a$ is incorrect. |
Differentiation problem solving | Notice that
$$\frac{d}{dt}S=\frac{d}{dt}(4\pi r^2) = 8\pi r\frac{dr}{dt}$$
thus from $\frac{d}{dt}S=35$ you get
$$\frac{dr}{dt}=\frac{35}{8\pi r}$$
For $r=7$
$$\frac{dr}{dt}=\frac{35}{7\cdot8\pi} = \frac{5}{8\pi}$$
Through all this the units of length are $cm$, of area $cm^2$ and of time $years$. |
A simple question about vector and geometry | $OM = \frac{OA + OB + OC}3$ because the midpoint of a polyhedron is the "average" of those points, thus you sum them and divide by their quantity (which is here $3$).
The reason why you can put $OA' = 2 OM - OA$ is because you can write
$$
OA = OM - (OM - OA).
$$
Therefore "reflecting through the point" may be see as
$$
OA' = OM + (OM - OA) = 2 OM - OA.
$$
Since the tetrahedron is regular, the vector $OA - OM$ is orthogonal to the triangle $BCD$, so that the reflection of the point $A$ to the other side just corresponds to changing the $-$ for a $+$ in the writing of $OA = OM - (OM - OA)$.
Hope that helps, |
Trouble solving a second-order differential equation | Assuming, as you state, that you can solve first order equations, here is a way of converting one second order equation into two first order ones.
Note that:
$$x''-i\omega x'=-i\omega x'-\omega^2 x$$ so setting $y=x'-i\omega x$ this gives the linear equation $y'=-i\omega y$ with the solution $y=Ae^{-i\omega t}$
Also note:$$x''+i\omega x'=i\omega x'-\omega^2 x$$ and set $z=x'+i\omega x$ to obtain $z'=i\omega z$ and $z=Be^{i\omega t}$
Then note that $$z-y=2i\omega x = Be^{i\omega t}-Ae^{-i\omega t}$$
Incorporating $2i\omega$ into the constants this gives $$x=Ce^{i\omega t}+De^{-i\omega t}=E\sin \omega t+F\cos \omega t$$
(the last formulation arises from the formulae for sin and cos in terms of complex exponentials)
Note that in general the second order equation $$x''=(a+b)x'-abx$$ can be rewritten as $$x''-ax'=bx'-abx$$ and $$x''-bx'=ax'-abx$$
and the same technique can be applied. This does not work when $a=b$ (you get one linear equation repeated twice) and you might want to investigate this possibility further. |