title | upvoted_answer
---|---|
Mass-action differential equations for the Ivanova reaction system | The law of mass action says that the rate of reaction is proportional to the concentrations of reagents. This is how you can write your equation for $[X]$, concentration of $X$:
$$
\dot{[X]}=k_3[Z][X]+k_3[Z][X]-k_3[Z][X]-k_1[X][Y],
$$
where the first two terms come from the $2X$ expression in one of your reactions, the third term comes from the $Z+X$ part, and the last term comes from the $X+Y$ term in the first reaction. I also assumed that the rate constants are $k_1,k_2,k_3$ from top to bottom. Simplifying, you get
$$
\dot{[X]}=k_3[Z][X]-k_1[X][Y],
$$
and the rest should be easy. |
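The full mass-action system can also be integrated numerically. A minimal sketch, assuming the three Ivanova reactions are $X+Y\to 2Y$, $Y+Z\to 2Z$, $Z+X\to 2X$ with rate constants $k_1,k_2,k_3$ (the reaction set, rate constants, and initial concentrations here are illustrative assumptions, not given in the question):

```python
import numpy as np
from scipy.integrate import solve_ivp

def ivanova(t, u, k1, k2, k3):
    # Mass-action rates for the assumed reactions
    # X+Y -> 2Y, Y+Z -> 2Z, Z+X -> 2X with constants k1, k2, k3
    X, Y, Z = u
    return [k3*Z*X - k1*X*Y,   # d[X]/dt: the simplified equation above
            k1*X*Y - k2*Y*Z,   # d[Y]/dt
            k2*Y*Z - k3*Z*X]   # d[Z]/dt

sol = solve_ivp(ivanova, (0.0, 20.0), [1.0, 0.5, 0.5],
                args=(1.0, 1.0, 1.0), rtol=1e-8, atol=1e-10)
# each reaction converts one molecule into another, so X+Y+Z is conserved
print(sol.y[:, -1].sum())   # stays at 2.0 up to integration error
```

Since each reaction only converts one species into another, the total concentration is conserved, which gives a quick correctness check on the integration.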
Solve $y''-2y'+y=2\cos{x}$ | With the ansatz $y(x)=a\cos(x)+b\sin(x)$ for a particular solution, we have
$$y'(x)=-a\sin(x)+b\cos(x)$$
$$y''(x)=-a\cos(x)-b\sin(x)$$
then $$-a\cos(x)-b\sin(x)-2(-a\sin(x)+b\cos(x))+a\cos(x)+b\sin(x)=2\cos(x)$$
from here we get
$$\cos(x)(-a-2b+a)+\sin(x)(-b+2a+b)=2\cos(x)$$
thus $-2b=2$ and $2a=0$, i.e. $$b=-1$$ and $$a=0.$$ |
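A quick sanity check with SymPy (assuming SymPy is available) confirms that $y_p=-\sin x$, i.e. $a=0$, $b=-1$, satisfies the equation:

```python
import sympy as sp

x = sp.symbols('x')
yp = 0*sp.cos(x) - sp.sin(x)          # a = 0, b = -1 as found above
# plug y_p into y'' - 2y' + y - 2cos(x); the result should vanish
residual = yp.diff(x, 2) - 2*yp.diff(x) + yp - 2*sp.cos(x)
print(sp.simplify(residual))          # 0
```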
Definition of the joint spectrum of Hilbert space operators | The double commutant $A''$ is defined as $(A')'$, where
$$
A'=\{T\in B(H):\ TA_j=A_jT,\ j=1,\ldots,n\}
$$
and
$$
A''=\{S\in B(H):\ ST=TS\ \forall T\in A'\}.
$$
The double commutant is mostly interesting when the original set contains adjoints, because the commutant of a set that contains its adjoints is a von Neumann algebra. I find it weird the way it's used in the paper, but maybe that's just me.
The Double Commutant Theorem says that, if $M\subset B(H)$ is a unital $*$-algebra, then
$$
M''=\overline{M}^{\rm sot}.
$$ |
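In finite dimensions the commutant is the kernel of a linear map, which makes the double commutant easy to experiment with numerically. A sketch with NumPy (the example matrix is an illustrative assumption; vec is taken row-major, so $TA-AT=0$ becomes $(\mathbb I\otimes A^{\mathsf T}-A\otimes \mathbb I)\operatorname{vec}(T)=0$):

```python
import numpy as np

def commutant_basis(mats, n, tol=1e-9):
    """Basis (as n x n matrices) of {T : T A = A T for all A in mats}."""
    # row-major vec: vec(T A) = (I kron A^T) vec(T), vec(A T) = (A kron I) vec(T)
    I = np.eye(n)
    L = np.vstack([np.kron(I, A.T) - np.kron(A, I) for A in mats])
    _, s, Vt = np.linalg.svd(L)
    rank = int((s > tol).sum())
    return [v.reshape(n, n) for v in Vt[rank:]]   # kernel = the commutant

A = np.diag([1.0, 2.0])
first = commutant_basis([A], 2)        # {A}'  : the diagonal matrices
second = commutant_basis(first, 2)     # {A}'' : again the diagonal matrices
print(len(first), len(second))         # 2 2
```

Here $\{A\}''$ coincides with the span of powers of $A$, a toy instance of the bicommutant phenomenon (no topology is needed in finite dimensions).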
Homogeneous PDE, change of variables | Assume that $f:\textbf{R}^n\rightarrow\textbf{R}$, $f=f(x_1,x_2,\ldots,x_n)$, is a function of $n$ variables. If each $x_i=x_i(\xi)$ depends on a parameter $\xi\in\textbf{R}$, then $C:\overline{x}(\xi)=(x_1(\xi),x_2(\xi),\ldots,x_n(\xi))$ is a one-dimensional object in $\textbf{R}^n$ and hence $C$ is a curve in $\textbf{R}^n$. Then
$$
\frac{df}{d\xi}=\sum^{n}_{k=1}\frac{\partial f}{\partial x_k}\frac{dx_k}{d\xi}
$$
is the derivative of $f$ along $C$ (or the total derivative of $f$ along the curve $C$). You also have the equation:
$$
\frac{df}{d\xi}-\xi\frac{dx_1}{d\xi}=0\Leftrightarrow \sum^{n}_{k=1}\frac{\partial f}{\partial x_k}\frac{dx_k}{d\xi}=\xi\frac{dx_1}{d\xi} \tag 1
$$
If $\xi=u y$, then $\frac{d\xi}{dy}=u$. Hence
$$
\frac{df}{d\xi}-\xi\frac{dx_1}{d\xi}=0\Leftrightarrow \frac{df}{dy}\frac{dy}{d\xi}-\xi\frac{dx_1}{dy}\frac{dy}{d\xi}=0\Leftrightarrow \frac{df}{dy}\frac{1}{u}-\xi\frac{dx_1}{dy}\frac{1}{u}=0\Leftrightarrow
$$
$$
\frac{df}{dy}-\xi\frac{dx_1}{dy}=0.\tag 2
$$
This answers your first question about the change of variables.
About the homogeneity:
However, if $f$ is a homogeneous function, then we have even more.
Suppose the function $f$ is homogeneous of degree $\lambda$. Setting $x_i=uy_i$ in equation (1), and using that $f$ is homogeneous, i.e. $f(uy_1,uy_2,\ldots,uy_n)=u^{\lambda}f(y_1,y_2,\ldots,y_n)$, while each coordinate map $(x_1,x_2,\ldots,x_n)\mapsto x_1$ is homogeneous of degree 1, we have:
$$
\sum^{n}_{k=1}\frac{\partial f}{\partial x_k}(uy_1,uy_2,\ldots,uy_n)\left(u\frac{dy_k}{d\xi}\right)-\xi\left(u\frac{dy_1}{d\xi}\right)=0\Leftrightarrow
$$
$$
u^{\lambda-1}\sum^{n}_{k=1}\frac{\partial f}{\partial y_k}(y_1,y_2,\ldots,y_n)\left(u\frac{dy_k}{d\xi}\right)-\xi u\frac{dy_1}{d\xi}=0
$$
$$
u^{\lambda-1}\sum^{n}_{k=1}\frac{\partial f}{\partial y_k}(y_1,y_2,\ldots,y_n)\frac{dy_k}{d\xi}-\xi \frac{dy_1}{d\xi}=0.\tag 3
$$
(That is because when $f(x_1,x_2,\ldots ,x_n)$ is homogeneous of degree $\lambda$, then $\frac{\partial f}{\partial x_{j}}$ is homogeneous of degree $\lambda-1$ i.e. $\frac{\partial f}{\partial x_j}(uy_1,uy_2,\ldots,uy_j,\ldots,uy_n)=u^{\lambda-1}\frac{\partial f}{\partial x_j}(y_1,y_2,\ldots,y_n)$). Hence when $\lambda=1$, then (3) becomes:
$$
\sum^{n}_{k=1}\frac{\partial f}{\partial y_k}\frac{dy_k}{d\xi}-\xi\frac{dy_1}{d\xi}=0.\tag 4
$$
Hence if $f=f(x_1,x_2,\ldots,x_n)$ is homogeneous of degree 1, then equation (1) is homogeneous PDE (invariant under any transformation of variables of the form $x_i=uy_i$, $i=1,2,\ldots,n$). |
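The degree-shift used above ($\partial f/\partial x_j$ is homogeneous of degree $\lambda-1$ when $f$ has degree $\lambda$) can be checked symbolically. A quick SymPy sketch; the sample function $f=\sqrt{x_1^2+x_2^2}$ (degree 1) is an assumption for illustration:

```python
import sympy as sp

x1, x2, u = sp.symbols('x1 x2 u', positive=True)
f = sp.sqrt(x1**2 + x2**2)                       # homogeneous of degree 1

scaled_f = f.subs({x1: u*x1, x2: u*x2}, simultaneous=True)
deg1 = sp.simplify(scaled_f - u*f) == 0          # f(u y) = u^1 f(y)

df = f.diff(x1)
scaled_df = df.subs({x1: u*x1, x2: u*x2}, simultaneous=True)
deg0 = sp.simplify(scaled_df - df) == 0          # df/dx_1 has degree 1-1 = 0
print(deg1, deg0)                                # True True
```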
Taylor Series Expansion $ f(x)=\tan\left(\frac{\pi}{4}+x\right)^\frac{1}{x} $ | $$f=\tan\left(\frac{\pi}{4}+x\right)^\frac{1}{x}\implies \log(f)=\frac{1}{x}\log\left(\tan\left(\frac{\pi}{4}+x\right)\right)$$
$$\tan\left(\frac{\pi}{4}+x\right)=1+2 x+2 x^2+\frac{8 x^3}{3}+\frac{10 x^4}{3}+\frac{64 x^5}{15}+\frac{244
x^6}{45}+\frac{2176 x^7}{315}+O\left(x^8\right)$$
$$\log\left(\tan\left(\frac{\pi}{4}+x\right)\right)=2 x+\frac{4 x^3}{3}+\frac{4 x^5}{3}+\frac{488 x^7}{315}+O\left(x^8\right)$$
$$\log(f)=2+\frac{4 x^2}{3}+\frac{4 x^4}{3}+\frac{488 x^6}{315}+O\left(x^8\right)$$
$$f=e^{\log(f)}=e^2\left(1+\frac{4 x^2}{3}+\frac{20 x^4}{9}+\frac{10552 x^6}{2835}+O\left(x^8\right) \right)$$ |
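The intermediate expansion can be reproduced with a CAS; a quick check of the $\log(f)$ series with SymPy (SymPy availability is assumed):

```python
import sympy as sp

x = sp.symbols('x')
logf = sp.log(sp.tan(sp.pi/4 + x)) / x
series = sp.series(logf, x, 0, 6).removeO()
# difference from 2 + 4x^2/3 + 4x^4/3 should vanish, matching the expansion above
print(sp.expand(series - (2 + sp.Rational(4, 3)*x**2 + sp.Rational(4, 3)*x**4)))
```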
On $\delta_{0}$-continuity sets in the class of finite dimensional sets | It holds not only for finite-dimensional $\delta_0$-continuity sets, but even for all finite-dimensional sets. Those are of the form
$$
A = \{f(t_1)\in B_1,\dots, f(t_k)\in B_k\}
$$
for some distinct $t_1,\dots,t_k\in[0,1]$ and measurable $B_1,\dots,B_k\subset \mathbb{R}$.
For all $n$ large enough we have $(0,2/n)\cap\{t_1,t_2,\dots,t_k\} = \varnothing$. Then $$\delta_0(A) = \delta_{z_n}(A) = \mathbb{1}_{0\in B_i\text{ for all }i=1,\dots,k}.$$ |
$f(x)$ absolutely continuous $\implies e^{f(x)}$ absolutely continuous, for $x \in [a,b]$? | It is true. Suppose $g$ is Lipschitz with constant $L$ on $[a,b]$, and $f$ is AC. Then $g \circ f$ is also AC.
To see this, suppose $f$ is AC, and let $\epsilon>0$. Choose $\delta>0$ such that if $(y_k,x_k)$ are a finite collection of pairwise disjoint intervals in $[a,b]$ with $\sum |y_k-x_k| < \delta$, then $\sum |f(y_k)-f(x_k)| < \frac{\epsilon}{L}$.
Now consider $\sum |g \circ f(y_k)-g \circ f(x_k)| = \sum |g(f(y_k))-g ( f(x_k))| \leq L \sum |f(y_k)-f(x_k)| < \epsilon$. Hence $g \circ f$ is AC.
Since $x \mapsto e^x$ is smooth, it is Lipschitz on any compact interval, hence the function $ x \mapsto e^{f(x)}$ is AC. |
Axioms for atomless Boolean algebras | If we want to axiomatize Boolean algebras in terms of partial order, we want to specify that the order is dense in the usual sense. So a possible additional axiom is
$$\forall x \forall y(x<y\implies \exists z(x \lt z \land z \lt y)).$$
But this can be derived from the weaker-seeming axiom
$$\forall y(\lnot(y=0) \implies \exists z(0 \lt z \land z \lt y))$$
that you suggested: given $x<y$, the element $y\wedge\lnot x$ is nonzero, so the axiom gives $w$ with $0<w<y\wedge\lnot x$, and then $z=x\vee w$ satisfies $x<z<y$. Thus your axiomatization is perfectly correct and complete (pun intended). |
Complex Numbers and their relationship with higher Mathematics | For $x\in\Bbb R$ let $x^+$ be the positive part of $x$; that is, $x^+=x$ if $x\ge0$, $x^+=0$ if $x<0$. Calculate that for any fixed $\theta$ $$\frac1{2\pi}\int_0^{2\pi}(\cos(t+\theta))^+\,dt=\frac1{2\pi}\int_0^{2\pi}(\cos(t))^+\,dt=\frac1\pi.$$
For $z\in\Bbb C$ let $$\phi(z)=(\Re z)^+.$$It follows that $$\frac1{2\pi}\int_0^{2\pi}\phi(e^{it}z)\,dt=\frac1\pi|z|.$$(Write $z=re^{i\theta}$...)
So $$\frac1{2\pi}\int_0^{2\pi}\sum_j\phi(e^{it}z_j)\,dt=\frac1\pi\sum_j|z_j|=\frac1\pi,$$ using the normalization $\sum_j|z_j|=1$. Hence there exists $t$ with $$\sum_j\phi(e^{it}z_j)\ge\frac1\pi.$$Let $S=\{j\,:\,\Re(e^{it}z_j)\ge0\}$. Then the previous inequality shows that $$\Re\left( e^{it}\sum_{j\in S}z_j\right)\ge\frac1\pi,$$hence $$\left|\sum_{j\in S}z_j\right|\ge\frac1\pi.$$ |
Interquartile range to find out outlier & get perfect Standard deviation | The rule using the IQR is often used with boxplots. Using R statistical
software, we obtain the following:
x = c(200, 330, 675, 999, 1200, 3000, 25000)
IQR(x)
## 1597.5
sd(x)
## 9093.544
boxplot(x, horizontal=T)
The vertical line at 200 is the smallest data value above the lower 'fence'; the lower
end of the box is at 502.5; the heavy line inside the box is
the median 999; the upper end of the box is at 2100; the
vertical line at the right end of the upper whisker is at 3000
(the largest data point below the upper fence); and the dot at the
far right is the outlier 25,000.
I got these numbers from boxplot.stats (output shown below
with slight editing):
boxplot.stats(x)
$stats # lower whisker end, lower hinge, median, upper hinge, upper whisker end
[1] 200.0 502.5 999.0 2100.0 3000.0
$n
[1] 7 # sample size
...
$out # list of outliers (here only one)
[1] 25000
Various books and software may give different values for
the numbers at the ends of the box (lower and upper hinges,
or lower and upper quartiles). This also means that the IQR
can differ from one account to another.
The sample standard deviation is found as $S = \sqrt{\frac{\sum (X_i - \bar X)^2}{n-1}}.$ For a sample as highly skewed as this one, I would
not suppose you are asked to use $S$ to find outliers.
Note: If by the 'perfect' standard deviation, you mean the population
standard deviation $\sigma$, then one sometime says that
data values outside the interval $(\mu - 3\sigma, \mu + 3\sigma)$
is an outlier, where $\mu$ is the population mean. From the
information given, you have no way to know exact values of $\mu$ or
$\sigma.$ Maybe there is something missing from your question.
Addenda prompted by additional questions:
$S$ (computed from all the data) is a reasonably good estimate of $\sigma$ if the data are a truly random sample from the population. The main issue is whether the outlier really 'represents' the population, or whether it is some sort of mistake (e.g., recording error, equipment failure, etc.).
Example 1: The exponential distribution is strongly right-skewed, and samples from it typically show many outliers in the right tail. I just simulated a sample of size 1000 from an exponential population with population mean and SD both 5. There were 27 outliers in the right tail. The sample SD of all 1000 observations was 4.70 (reasonably close to 5). Omitting the outliers, the sample SD was only 3.6.
Example 2: I simulated a sample of size 1000 from a normal dist'n with pop mean 100 and SD 10. Sample mean 100.3; sample SD 10.2. A few moderate outliers in each tail. I added an outrageous mistaken value of 2000, obviously an outlier. With this bogus observation included the sample SD is 60.9 (far from 10).
The overall lesson is not to discard 'outliers' without good
reason. Check for data input errors. Check lab notes for
indications of measurement difficulty or equipment failure.
There are no general rules.
The question whether to discard an outlier is frequently a
judgment call. Does the 'outlier' represent the target population
or not? In practice, never discard an observation without making
a note that you did so, and why. (And maybe show how analysis differs if outliers are left in.) |
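The same fences can be computed outside R; a sketch with NumPy, whose default linear quantile interpolation agrees with R's type-7 default on this data:

```python
import numpy as np

x = np.array([200, 330, 675, 999, 1200, 3000, 25000])
q1, q3 = np.percentile(x, [25, 75])    # 502.5 and 2100.0, the hinges above
iqr = q3 - q1                          # 1597.5, matching R's IQR(x)
lo, hi = q1 - 1.5*iqr, q3 + 1.5*iqr    # the 1.5*IQR fences
outliers = x[(x < lo) | (x > hi)]
print(iqr, outliers)                   # 1597.5 [25000]
```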
Question concerning a proof of the Riesz Representation Theorem | I kinda overlooked something: Notice that $x - f(x)z \in \ker(f) = N$, i.e. $x - f(x)z \perp z$ as $z \in N^{\perp}$. But from this we can already conclude that $\langle (x-f(x)z)\,,\,||z||^{-2}z\rangle = ||z||^{-2}\langle (x-f(x)z)\,,\,z\rangle = 0.$ |
A conformal class is a connected subset | In fact $[g]$ is path-connected with respect to the $C^k$-, $C^\infty$-, or $C^{k,\alpha}$-topologies.
$C^\infty(M,\mathbb{R})$ is path-connected: if $f,h\in C^\infty(M,\mathbb{R})$, then $\gamma(t)=th+(1-t)f$ is a continuous path connecting them. The map
$$ C^\infty(M,\mathbb{R}) \ni f \mapsto e^{2f}g \in \mathcal{G} $$
is continuous. Hence the image of this map, namely $[g]$, is path-connected.
I usually just think of this argument as the observation that $t\mapsto e^{2th + 2(1-t)f}g$ is a continuous path from $e^{2f}g$ to $e^{2h}g$. |
Implicit Function Theorem in Higher Dimensions | Suppose you know that $x = 9/10$ (a representative number near $x = 1$, i.e., a candidate for an element of $U$). Then if $f(x, y, z) = 0$, what do you know about $y$ and $z$? From the second term, you know that
$$
(9/10)^2+e^{y-1} - 2y = 0\\
0.81 + e^{y-1} - 2y = 0
$$
If you're willing to guess that the solution for $y$ is near $1$, you can approximate $e^{y-1}$ with $1 + (y-1)$ (the first two terms of the Taylor series) to convert this to
$$
0.81 + (1 + (y-1)) - 2y \approx 0 \\
0.81 \approx y
$$
thus determining $y$ from a known value of $x$.
More generally, you can see that there's a unique solution for $y$: the one-dimensional implicit function theorem applied to $f_2(x, y)$ near the point $(x, y) = (1, 1)$ says so, since $\frac{\partial f_2}{\partial y} (1, 1) = -1$, as you already computed. So there's a function $h$, defined on a neighborhood $U$ of $x = 1$, with the property that
$$
f_2(x, h(x)) = 0
$$
for $x \in U$.
Now continuing with the example, knowing $x = 9/10$ and $y \approx 0.81$, look at the first term: from that, you can solve for $z$. It'll be a square root of some kind, and one of the two roots will be near $+1$ and the other near $-1$, so you pick the $+1$ root.
Continuing with the general analysis instead of the single instance, we have that
$h(x)$ is a number such that $x^2 + e^{h(x) - 1} - 2h(x) = 0$ (for $x$ near $1$); we can then build the required function $g$ via
$$
g(x) = \begin{bmatrix} h(x) \\ \sqrt{2x^2 - h(x)^2} \end{bmatrix}
$$
Does that help? The fact that you can't explicitly write $h$ isn't a problem -- you know from the 1D implicit function theorem that it exists. |
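Even without a closed form for $h$, its values can be computed numerically. A minimal Newton-iteration sketch for $f_2(x,y)=x^2+e^{y-1}-2y=0$ at $x=9/10$ (the starting guess $y_0=1$ and iteration cap are assumptions of the sketch):

```python
import math

def h(x, y0=1.0, tol=1e-12):
    """Solve x^2 + exp(y-1) - 2y = 0 for y near y0 by Newton's method."""
    y = y0
    for _ in range(50):
        f = x*x + math.exp(y - 1) - 2*y
        df = math.exp(y - 1) - 2       # df2/dy, nonzero (= -1) at (1, 1)
        y_new = y - f/df
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y

y = h(0.9)
print(y)   # about 0.8245, refining the linearized estimate 0.81 above
```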
Proof that $\bigcap_{n\in\mathbb{N}}[a_n,b_n]$ is a non-empty set | Let $A=\{a_n : n\in\mathbb{N}\}$. Every $b_n$ is an upper bound for $A$, so the supremum of $A$ exists; call it $x$. Thus $a_n \leq x$ for all $n$, and since every $b_n$ is an upper bound and $x$ is the least upper bound, $x \leq b_n$ for all $n$. Hence $x$ belongs to every $[a_n,b_n]$. |
Proof of inequality $x^2+3\sin x-3x \geq 0$ | If $x\leq 0$, $\sin x-x\geq 0$, so we just need to prove the inequality for $x>0$. Since in such a case $\sin x>-x$, we only need to prove the inequality for $x\in(0,6]$. Since $\sin x\geq-1$, we only need to prove it on $(0,4]$. Let $f(x)=x^2+3\sin x-3x$. Since $f(\pi)>0$ and $f'(x)>0$ on $[\pi,4]$, we only need to prove the inequality on $I=(0,\pi)$. Over such an interval, we can use the inequality:
$$ \sin(x)\geq \frac{1}{\pi}x(\pi-x) \tag{1}$$
and check that the second-degree polynomial given by replacing $\sin x$ with the RHS of $(1)$ is non-negative over $I$: indeed, $x^2+\frac{3}{\pi}x(\pi-x)-3x=\left(1-\frac{3}{\pi}\right)x^2\ge 0$. |
why $\mu(A_n)$ is written before $\int_{A_n} f d \mu ?$ | Since $f(x)>\frac{1}{n}$ on $A_n$, by monotonicity of the Lebesgue integral we have $\displaystyle\int_{A_n}f\,\text{d}\mu\ge\int_{A_n}\frac{1}{n}\,\text{d}\mu=\frac{1}{n}\mu(A_n)$. |
Commutator subgroup of group generated by two elements of order 2 | As rschweib mentions, the commutator subgroup of $G=\langle a,b\rangle$ is the smallest normal subgroup containing the commutator $[a,b]$.
For a general group generated by two elements $\langle a,b\rangle$, the commutator subgroup certainly need not be cyclic. For instance if $a=(1,3,2)$ and $b=(2,5)(3,4)$, then $\langle [a,b] \rangle = \langle (1,2,3,4,5) \rangle$ has order 5 and is not a normal subgroup of $\langle a,b \rangle$. Indeed the commutator subgroup of $\langle a,b \rangle$ is the entire $\langle a,b \rangle$ in this case and one certainly does not have that $\langle [a,b] \rangle \supseteq G'$.
Hence one needs to show that $\langle [a,b] \rangle$ is normal, which makes essential use of the fact that $a$ and $b$ have order 2.
When $a,b$ have order 2, one gets the very simple formula $x^a = axa$ and $[a,b] = abab$, so that $[a,b]^a = aababa = baba = [b,a] = [a,b]^{-1}$ and $[a,b]^b = bababb = baba = [b,a] =[a,b]^{-1}$. In particular, $\langle [a,b] \rangle$ is normal, and both $a$ and $b$ act as the automorphism $x \mapsto x^{-1}$. |
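The counterexample is easy to verify computationally. A small sketch with permutations as Python dicts under a left action, with $[a,b]=a^{-1}b^{-1}ab$ (the composition convention is an assumption of the sketch):

```python
def compose(p, q):
    """(p o q)(x) = p(q(x))."""
    return {x: p[q[x]] for x in q}

def inverse(p):
    return {v: k for k, v in p.items()}

a = {1: 3, 3: 2, 2: 1, 4: 4, 5: 5}            # (1 3 2)
b = {2: 5, 5: 2, 3: 4, 4: 3, 1: 1}            # (2 5)(3 4)

comm = compose(compose(inverse(a), inverse(b)), compose(a, b))  # [a,b]
# [a,b] is a 5-cycle: its 5th power is the identity
powers = [comm]
for _ in range(4):
    powers.append(compose(comm, powers[-1]))
print(powers[-1] == {i: i for i in range(1, 6)})   # True: order 5

# ...but <[a,b]> is not normal: the conjugate [a,b]^a is not a power of [a,b]
conj = compose(compose(inverse(a), comm), a)
print(any(conj == p for p in powers))              # False
```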
Every function in $L^1$ can be expressed as a product of functions in $L^p$ and $L^{P^*}$ | The cases $p=1$ and $p=\infty$ being trivial, assume $1 < p < \infty$.
Let $g(x) = |f(x)|^{1/p} \text{signum}(f(x))$ and $h(x) = |f(x)|^{1/p^*}$. Then $f = gh$, and $\int|g|^p = \int|h|^{p^*} = \int|f| < \infty$, so $g\in L^p$ and $h\in L^{p^*}$. |
Is the following function convex? | Take $f^p(x)=g^q(x)=2+\cos x$, then $g^q[f^pg^{-q}]^t$ is not convex. |
Integrability and Continuous function | Hint: There’s no reason to believe that $f(x)>f(c)$ for any $x$.
However, by continuity, if $f(c)>0$, then there’s a $\delta$ such that
$$|x-c|<\delta \implies |f(x)-f(c)|<\epsilon =\tfrac12f(c)$$
That is, $-\tfrac12 f(c)<f(x) - f(c)< \tfrac12 f(c)$, so in particular $0<\tfrac12 f(c)<f(x)$. This means that
$$\int_{c-\delta}^{c+\delta}f(x)\; dx\geq 2\delta\cdot
\tfrac12 f(c)>0$$
Now finish it. |
Suggestions for very challenging vector calculus problems | The following MSE link mentions these two books:
Vector Calculus by Jerold E. Marsden and Anthony J. Tromba
Vector Calculus by Peter Baxandall and Hans Liebeck
You can also try to check out Schaum's Outline of Vector Analysis.
I also found the following link which contains some challenging problems for vector calculus from Stewart. |
Showing that a map defined using the dual is a bounded linear operator from X' into X' | After you have shown that $A := \sup\limits_n \lVert T_n'\rVert < \infty$, you have a known bound on $S(f)(x)$ for every $x\in X$ and $f\in X'$, namely
$$\lvert S(f)(x)\rvert = \lim_{n\to\infty} \lvert T_n'(f)(x)\rvert \leqslant \limsup_{n\to\infty} \lVert T_n'(f)\rVert\cdot \lVert x\rVert \leqslant \limsup_{n\to\infty} \lVert T_n'\rVert\cdot \lVert f\rVert\cdot \lVert x\rVert.$$
From $\lvert S(f)(x)\rvert \leqslant A\lVert f\rVert\cdot \lVert x\rVert$, we deduce by the definition of the norm on $X'$ that $\lVert S(f)\rVert \leqslant A\cdot \lVert f\rVert$, and this in turn shows that $S$ is a continuous (bounded) operator on $X'$ with $\lVert S\rVert \leqslant A$. |
Confusion with $O$ function | $$\frac{1}{2}[x/d]([x/d]+1)=\frac12\left(\frac xd+O(1)+1\right)\left(\frac xd+O(1)\right)=\frac{x^2}{2d^2}+\frac xdO(1)+O'(1)\\
=\frac{x^2}{2d^2}+O''\left(\frac xd\right).$$ |
Packing of nodes in a circle | I doubt very much that there is a closed-form solution. Packing problems tend to be hard. You might be able to get upper and lower bounds. |
Probability of $y \leq x$ given $x \leq \frac 12$ | Don't forget $$P(A|B)=\frac{P(A\cap B)}{P(B)}$$You worked out the numerator to be $\frac18$. What about the denominator?
Another way to see it visually is this: when asked for a conditional probability, you are given some information. So you should take this information as something you know for sure. In this example, you should assume you already know $B$ is true. So you are definitely in the region to the left of the line $x=1/2$. I.e. you are essentially now looking at a new, adjusted probability distribution, a uniform one on $\tilde\Omega=[0,\tfrac12]\times[0,1]$. Now what's the probability $y\le x$? It is whatever proportion of your region has $y\le x$. From your image, it's easy to see that the shaded region is $1/4$ of the total region to the left of $x=1/2$. So your answer is $1/4$ again. |
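The $1/4$ can be confirmed with a quick Monte Carlo simulation (uniform sampling on the unit square is the model assumed by the problem):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(1_000_000)
y = rng.random(1_000_000)
given = x <= 0.5                        # the conditioning event B
est = (y[given] <= x[given]).mean()     # estimates P(y <= x | x <= 1/2)
print(est)                              # close to 0.25
```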
Noetherian module over noetherian ring | Hints for the main question: (under the assumption your ring has unity)
$M/N$ is a simple module when $N$ is a maximal submodule.
A simple module is isomorphic to $R/I$ for some maximal ideal $I\lhd R$.
You probably are aware of some relationship between maximal and prime ideals...
You should be able to reason out why the submodules of a Noetherian module $M$ over any ring are Noetherian. Consider an ascending chain in the submodule $N\subseteq M$ ... and remember it is a chain in $M$ too! |
Direct sum of metrizable spaces. | SKETCH: Suppose that you have metric spaces $\langle X_i,d_i\rangle$ for $i\in I$. First replace $d_i$ be $d_i'$, where
$$d_i'(x,y)=\min\{d_i(x,y),1\}$$
for $x,y\in X_i$. Observe that $d_i'$ generates the same topology as $d_i$ and is complete if $d_i$ is complete. Now define a metric $d$ on $\bigsqcup_{i\in I}X_i$ by
$$d(x,y)=\begin{cases}
d_i'(x,y),&\text{if }x,y\in X_i\text{ for some }i\in I\\
1,&\text{if }x\in X_i,y\in X_j,\text{ and }i\ne j\;.
\end{cases}$$
Show that $d$ is a metric that generates the right topology and that is complete if all of the $d_i$ are complete. |
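The construction can be prototyped directly. A Python sketch where each point carries its index $i$ as a tag and each $d_i'$ caps a Euclidean distance at 1 (the sample spaces and points are assumptions for illustration):

```python
def d(p, q, metrics):
    """Metric on the disjoint union of the X_i; points are pairs (i, x)."""
    (i, x), (j, y) = p, q
    if i == j:
        return min(metrics[i](x, y), 1.0)   # the capped metric d_i'
    return 1.0                              # points in different pieces

# two copies of R with the usual metric, as an illustration
metrics = {0: lambda x, y: abs(x - y), 1: lambda x, y: abs(x - y)}
pts = [(0, 0.0), (0, 0.3), (0, 7.0), (1, 0.0), (1, 2.5)]

# spot-check symmetry and the triangle inequality on the sample points
for p in pts:
    for q in pts:
        assert d(p, q, metrics) == d(q, p, metrics)
        for r in pts:
            assert d(p, r, metrics) <= d(p, q, metrics) + d(q, r, metrics)
print("metric axioms hold on the sample")
```

Capping at 1 is what makes the cross-piece distance 1 compatible with the triangle inequality: any path between pieces costs at least 1.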
Prove that if $C$ is a convex set containing $B(r)$, then $\sup\{d(y,0)\mid y\in C\}=\infty$ | Hint: denoting by $e_n$ the $n$-th vector of "canonical basis" of $\ell^p$, compute $d(x_N, 0)$ for each $N\in\mathbb N$, where $x_N=\frac 1N \sum_{n=1}^N e_n$. |
Finding limit of $\frac{a_n^3+5n}{a_n^2+n}$ for $(a_n)$ bounded. | HINTS
Since $a_n$ is bounded, what are $$\lim_{n \to \infty} \frac{a_n^3}{n} \text{ and } \lim_{n \to \infty} \frac{a_n^2}{n}?$$
Note that $$\frac{a_n^3+5n}{a_n^2+n} = \frac{5 + a_n^3/n}{1 + a_n^2/n}.$$ |
$f$ continuous on [a,b] and $|f|$ being of bounded variation implies that $f$ has bounded variation on $[a,b]$? | Hint: Use the intermediate value theorem to force each pair $f(x_k), f(x_{k+1})$ to both be either $\geq 0$ or $\leq 0$. |
Why are invertible matrices called 'non-singular'? | If you take an $n\times n$ matrix "at random" (you have to make this very precise, but it can be done sensibly), then it will almost certainly be invertible. That is, the generic case is that of an invertible matrix, the special case is that of a matrix that is not invertible.
For example, a $1\times 1$ matrix (with real coefficients) is invertible if and only if it is not the $0$ matrix; for $2\times 2$ matrices, it is invertible if and only if the two rows do not lie in the same line through the origin; for $3\times 3$, if and only if the three rows do not lie in the same plane through the origin; etc.
So here, "singular" is not being taken in the sense of "single", but rather in the sense of "special", "not common". See the dictionary definition: it includes "odd", "exceptional", "unusual", "peculiar".
The noninvertible case is the "special", "uncommon" case for matrices. It is also "singular" in the sense of being the "troublesome" case (you probably know by now that when you are working with matrices, the invertible case is usually the easy one). |
Justifying that a product $AB$ has equal rows $i$ and $j$ when the matrix $A$ has equal rows $i$ and $j$, $i \ne j$ | The $i$-th row of $AB$ is $$\sum_{l=1}^nA_{il}\cdot B_{l*}.$$ Here $B_{l*}$ denotes the $l$-th row of $B$. Similarly, the $j$-th row of $AB$ is $$\sum_{l=1}^nA_{jl}\cdot B_{l*}.$$ Now, $A_{i*}=A_{j*}\implies A_{il}=A_{jl}$ for all $l=1,\ldots,n$. So we are done. |
True meaning of negation of a proposition | If you postulate that any device is either excellent or terrible, then deducing the device is of terrible quality if and only if it is not of excellent quality is valid. In general, however, your intuition is right, and a semantically correct rendition of $\lnot p$ would be "The device is not of excellent quality".
As to your edit, that is not a mathematical question but a worldly one. Formally, mathematics cannot speak about non-mathematical things; devices (in their most general form) are not mathematical objects, so there simply is no convention (and formulating one doesn't even make sense). |
Estimative with kernels of the Riesz transform | First, by replacing $x$ by $x-t$ and $y$ by $y-t$, we can assume without loss of generality that $t=0$, and also $|x| \ge |y|$ (and so $|x| \ge |\tilde x|$).
Case 1: $|x| > 2|y|$. Then $|x-y| \approx |\tilde x| \approx |x|$, so the inequality is easy.
Case 2: $|x| \ge |y| \ge |x|/2$.
$$ \frac{x_1}{|x|^3} - \frac{y_1}{|y|^3} = \frac{|y|^3 x_1 - |x|^3 y_1}{|x|^3|y|^3} = \frac{|y|^3 x_1 - |y|^3 y_1}{|x|^3|y|^3} + \frac{x_1(|x|^3 - |y|^3)}{|x|^3|y|^3} $$
Now
$$ \left|\frac{|y|^3 x_1 - |y|^3 y_1}{|x|^3|y|^3} \right| = \frac{|x_1-y_1|}{|x|^3} \le \frac{|x-y|}{|\tilde x|^3} ,$$
and
$$ \left|\frac{x_1(|x|^3 - |y|^3)}{|x|^3|y|^3}\right| \le \frac{(|x|-|y|)(|x|^2 + |x||y|+|y|^2)}{|x|^2|y|^3} \le \frac{3(|x|-|y|)}{|y|^3} \le \frac{24|x-y|}{|\tilde x|^3} .$$
I have a feeling this argument is overcomplicated, but something like this will work. |
Calculating the geometric realization of a non-representable functor | If your goal is to read Demazure and Gabriel's book, then (as I explained in the comments) your question is based on false premises and the solution is to read the definitions carefully.
But let me address your question as written, since it will illuminate why Demazure and Gabriel use the definitions they do.
First, as you observe, the category of elements of an arbitrary functor $F : \textbf{CRing} \to \textbf{Set}$ is not always small.
Actually, it is almost never small, because $\textbf{CRing}$ itself is not small: as soon as $F (A)$ is non-empty for all rings $A$, then the category of elements of $F$ will be at least as big as $\textbf{CRing}$.
This is not actually fatal for the problem at hand (though it does introduce many complications).
It sometimes happens that the functor $F$ you are interested in is a colimit of a small diagram of representable functors, i.e. there is a small diagram $A : \mathcal{I}^\textrm{op} \to \textbf{CRing}$ such that $F (B) \cong \varinjlim_\mathcal{I} \textbf{CRing} (A, B)$ naturally in $B$.
In that case, you can compute the colimit $\left| F \right|$ you seek as $\varinjlim_\mathcal{I} \operatorname{Spec} A$.
The biggest complication is that $\left| F \right|$ is not well defined for arbitrary $F$, so you only get a partially defined functor $\left| - \right|$.
If you try to restrict to a full subcategory of functors $\textbf{CRing} \to \textbf{Set}$ on which $\left| - \right|$ is well defined everywhere, you then have the complication that the putative right adjoint may not have image contained in that subcategory.
I am not aware of any good way to resolve this dilemma; I think you have no choice but to settle for a partially defined adjoint.
Now for some good news: there is a clean necessary and sufficient condition for a functor $F : \textbf{CRing} \to \textbf{Set}$ to be a colimit of a small diagram of representable functors.
Definition.
Let $\kappa$ be an infinite regular cardinal.
A $\kappa$-accessible functor is a functor that preserves $\kappa$-filtered colimits.
Proposition.
Let $F : \textbf{CRing} \to \textbf{Set}$ be a functor.
The following are equivalent:
$F$ is $\kappa$-accessible.
$F$ is the left Kan extension of a functor $\textbf{CRing}_\kappa \to \textbf{Set}$ along the inclusion $\textbf{CRing}_\kappa \hookrightarrow \textbf{CRing}$, where $\textbf{CRing}_\kappa$ is the full subcategory of $\kappa$-presentable rings (i.e. rings presentable by $< \kappa$ generators and $< \kappa$ relations).
There is a small diagram $A : \mathcal{I}^\textrm{op} \to \textbf{CRing}$ such that $F \cong \varinjlim_\mathcal{I} \textbf{CRing} (A, -)$ and, for each $i$ in $\mathcal{I}$, $A (i)$ is a $\kappa$-presentable ring.
The functor $R \mapsto R^{\oplus \mathbb{N}}$ you mention is easily seen to preserve filtered colimits (i.e. be an $\aleph_0$-accessible functor).
It is just as easy to see that it is the colimit of a small (indeed, countable!) diagram of representable functors, namely,
$$\textbf{CRing} (\mathbb{Z}, -) \longrightarrow \textbf{CRing} (\mathbb{Z} [x_1], -) \longrightarrow \textbf{CRing} (\mathbb{Z} [x_1, x_2], -) \longrightarrow \cdots$$
where the maps are the ones induced by the homomorphisms $\mathbb{Z} [x_1, \ldots, x_n, x_{n+1}] \to \mathbb{Z} [x_1, \ldots, x_n]$ that send $x_i$ to $x_i$ for $1 \le i \le n$ and $x_{n+1}$ to $0$.
Thus, the geometric realisation of $R \mapsto R^{\oplus \mathbb{N}}$ is the colimit $\varinjlim_n \mathbb{A}^n$.
I suppose I owe you an example of a functor $\textbf{CRing} \to \textbf{Set}$ that is not accessible.
Choose an ordinal-indexed sequence of fields, $K_\alpha$, such that $K_\alpha$ is strictly smaller in cardinality than $K_\beta$ whenever $\alpha < \beta$.
Let $F (R) = \coprod_{\alpha} \textbf{CRing} (K_\alpha, R)$ for non-zero rings $R$ and let $F (\{ 0 \}) = 1$.
Since any ring homomorphism $K_\alpha \to R$ is injective when $R$ is non-zero, $\textbf{CRing} (K_\alpha, R)$ is empty for sufficiently large $\alpha$ (as soon as the cardinality of $K_\alpha$ exceeds that of $R$), so $F (R)$ is indeed a set.
On the other hand, it is clear that $F$ cannot be the left Kan extension of any functor $\textbf{CRing}_\kappa \to \textbf{Set}$: if it were, it would be impossible to distinguish between this $F$ and the one where we cut off the disjoint union at some ordinal $\beta$ such that $K_\beta$ is not $\kappa$-presentable. |
State space model with a constant disturbance term | This term could be considered as a shift in the input:
$$r(t)=r'(t)+a$$
$$
\dot x(t)=A x(t)+B(r'(t)+a)+c
$$
then
$$\dot x(t)=A x(t)+B r'(t)+ \underbrace{B a+c}_{=0}$$
Choose $a$ such that
$$Ba=-c.$$
If $B$ does not have enough degrees of freedom to achieve this (i.e. $-c$ is not in the range of $B$), then it is questionable whether the input can compensate the effect of $c$ at all. |
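Solving $Ba=-c$ is a least-squares problem, and the residual tells you whether the disturbance can be fully compensated. A small NumPy sketch (the particular $B$ and $c$ are assumptions for illustration):

```python
import numpy as np

B = np.array([[1.0], [2.0]])           # single input, two states
c = np.array([-2.0, -4.0])             # constant disturbance term

# choose the shift a with B a = -c (least squares; then check the residual)
a, *_ = np.linalg.lstsq(B, -c, rcond=None)
residual = B @ a + c
print(a, residual)                     # a = [2.], residual ~ 0
```

A nonzero residual would mean $-c$ is not in the range of $B$, the case flagged above where the input cannot compensate $c$.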
What is an advantage of defining $\mathbb{C}$ as a set containing $\mathbb{R}$? | In ordinary everyday mathematics we do indeed say that
$$ \mathbb N\subset \mathbb Z \subset \mathbb Q \subset \mathbb R \subset \mathbb C $$
are honest-to-Plato, true, ordinary set inclusions, such that, for example, any natural number is literally a member of each of the other sets.
This is completely unproblematic, uncontroversial and not even deserving of mention as long as we're talking about everyday ordinary mathematics.
The problem you sketch only arises when we want to translate everyday mathematics into formal, axiomatic set theory. People who first learn of the possibility of doing this very easily get the impression that we want to (or ought to want to) do this all the time -- or even that the set-theoretic formalization is "what mathematics really is" and that everyday mathematics is just some kind of imperfect sloppy approximation to the purely set-theoretic Eternal Truth.
This latter extreme is not really tenable if you think about it (just consider that people were doing mathematics for millennia before axiomatic set theory was invented, and it is absurd to claim that Cantor, Zermelo, et al. were for the first time able to see what had really been going on). And even the milder view that we should be looking to the set-theoretic formalization all of the time is not really how mathematics works. The formalization is something we do once, to convince ourselves that what we're doing doesn't entail any contradictions that aren't present in set theory, such that set theory contains all the foundational uncertainty we'll ever need to worry about. After convincing ourselves of that, we'll promptly forget about its detail and continue doing everyday mathematics we always did, treating $\mathbb N\subset \mathbb C$ as a true inclusion and so forth.
Now, for the real question. Usually the formalization is done by saying that there are homomorphic injections $\mathbb N\to\mathbb Z\to\mathbb Q\to\mathbb R\to \mathbb C$ which preserve all the structure we're interested in, and then whenever we translate (well, imagine translating, because one doesn't actually do this, you know) an everyday formula into formal set theory, we're supposed to insert "invisible" applications of these homomorphisms and their inverses at appropriate places that will keep things making sense.
This works well enough that it's what most people who explain the formalization imagine doing.
However, there's also the option of making the inclusions be actual inclusions at the formal set theory level, which it sounds like you have rediscovered. For reference, it would go something like this for the step from $\mathbb R$ to $\mathbb C$:
First define $\mathbb{\hat C}=\mathbb R\times \mathbb R$ and make it into a field in the usual way. Then notice that there's an injective field homomorphism $\phi: x\in\mathbb R\mapsto \langle x,0\rangle \in \mathbb{\hat C}$, and now define
$$ \mathbb C = \mathbb R \,\cup\, \{\mathbb R\}\times(\mathbb{\hat C}\setminus \phi(\mathbb R))$$
where the $\{\mathbb R\}$ factor just serves to make sure the union is disjoint at the "untyped" set theory level. Then define
$\psi : \mathbb C \to \mathbb{\hat C}$ by
$$ \psi(z) = \begin{cases} \hat z & \text{if }z = \langle\mathbb R,\hat z\rangle \\ \phi(z) & \text{if }z \in \mathbb R \end{cases}$$
then observe that $\psi$ is (obviously) a bijection, pull the field structure back along $\psi$ from $\mathbb{\hat C}$ to $\mathbb C$, and prove that the resulting field structure on $\mathbb C$ makes it into a true field extension of $\mathbb R$.
We could apply exactly the same technique at each step on the way from $\mathbb N$ up to $\mathbb R$, so that we get true inclusions everywhere.
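To make the case split in the definition of $\psi$ concrete, here is a toy executable sketch (Python floats standing in for $\mathbb R$, tuples for $\mathbb{\hat C}$; this is my own illustration, not part of the original argument):

```python
R = "R"  # a tag standing in for the set ℝ itself, to make the union disjoint

def phi(x):  # the embedding ℝ -> Ĉ, x ↦ <x, 0>
    return (x, 0.0)

def psi(z):  # the map C -> Ĉ, with the two cases from the definition above
    if isinstance(z, tuple) and z[0] == R:
        return z[1]      # z = <ℝ, ẑ> with ẑ a non-real element of Ĉ
    return phi(z)        # z is a real number

# elements of C: either a real number, or (R, ẑ) with ẑ not of the form (x, 0)
print(psi(3.0), psi((R, (2.0, 5.0))))  # (3.0, 0.0) (2.0, 5.0)
```

The field operations on $\mathbb C$ would then be pulled back along `psi`, exactly as described in the text.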
Your question is then, if I understand you correctly, whether this construction is "preferable" to the usual use of invisible injections everywhere.
Personally I like it better. It gives a set-theoretic formalization that is closer to how ordinary mathematics work, and I happen to care about such esthetic points.
It also seems to be useful to do things this way if we actually want to formalize mathematics in a computerized proof checking system. Then the gritty details of making the inclusions work can be isolated in the proof of the theorem "there exists something with the properties that $\mathbb C$ is usually taken to have", including the property of being a superset of $\mathbb R$ -- whereas the invisible-injection solution has to be supported either by special-casing the parsing of formulas all over ordinary mathematics, or by having a very complex generic solution to getting them inserted in the right places.
However, outside the computer-formalization exception, the actual mathematical answer is that it ultimately doesn't matter. Because no matter whether we're doing one thing or the other while arguing that ordinary mathematics can be reduced to formal set theory, what happens at the end of the day is still that we forget the details and continue doing ordinary mathematics as we've done it all the time anyway. |
How to prove that if p $\le$ q+1, then G is connected. | Your first claim is wrong, and this is a counterexample.
Your second claim (edited version) is also wrong, and this is a counterexample.
Solving linear inhomogeneous differential equation | First off, I presume you mean $y''$ instead of $y'''.$ Next, you must modify your guess because both your homogeneous solution and the right-hand side of your ODE contain $e^{-x}$. Instead, try to guess $A+Bxe^{-x}$ for the particular solution. |
Why demonstrations are important in mathematics? | If you are not
- creating your own proofs,
- working out examples and doing computations (with the goal of gaining intuition and understanding, to better create your own proofs), or
- reading other people's proofs (with the goal of gaining intuition and understanding, to better create your own proofs),
I would say you are not doing mathematics. Now, there are other things that people often confuse with mathematics, such as
- blindly following someone else's directions for doing a computation, or
- blindly giving directions to a calculator or computer to make it do a computation,
both of which have their purpose in the world, but they are not mathematics. Since proofs are an intrinsic part of mathematics, I would say they are of paramount importance (or, more accurately, the inextricable combination of intuition, understanding, and proofs is paramount).
To put it simply: math is really interesting, beautiful, and fun to think about! There's just a fundamental intellectual curiosity, a desire to understand. When we can condense that understanding into a statement which we are confident is true, and explain our justification for it, we have stated a theorem and given a proof. To me at least, and I expect to most other mathematicians, that is why proofs are important!
"Mathematicians create by acts of insight and intuition. Logic then sanctions the conquests of intuition." - Morris Kline
What is the importance of mathematical proofs in the world? Well, lots of the mathematics that humans do is, or originally was, motivated by our desire to understand the world. So far, mathematics has been very useful for this. To quote Paul Dirac:
The physicist, in his study of natural phenomena, has two methods of making progress:
- the method of experiment and observation, and
- the method of mathematical reasoning.
The former is just the collection of selected data; the latter enables one to infer results about experiments that have not been performed. There is no logical reason why the second method should be possible at all, but one has found in practice that it does work and meets with reasonable success.
So, we have observed that, when we use mathematical objects as a model for the world around us, theorems about the mathematical objects seem to correspond to true statements about the corresponding real-world objects. Very useful indeed!
It is also well-known that mathematics has a very close relationship with computers (e.g., via cryptography, information theory, computational complexity theory). Proving things in mathematics can give us information and ideas for how to get our computers to do things most efficiently, or send signals with minimal chance of garbling the message, etc. Also very useful!
It's true that not all of mathematics has direct applications like these right now. But many surprising connections have been made where computer scientists or physical scientists realized that some bit of mathematics, which had previously been considered just an academic curiosity, was just what they needed to model their science. So there's always a chance that any mathematical theorem will end up being useful in the world, and a theorem is nothing without a proof telling you it's true. |
A (non-artificial) example of a ring without maximal ideals | In this paper Mel Henriksen shows that a commutative ring $R$ has no maximal ideals iff (a) $J(R)=R$, where $J(R)$ is the Jacobson radical of $R$, and (b) $R^2+pR=R$ for every prime $p\in\Bbb Z$. He then gives three examples. One starts with a field $F$ of characteristic $0$ and forms the integral domain
$$S(F)=\left\{h(x)=\frac{f(x)}{g(x)}\in F(x):f(x),g(x)\in F[x]\text{ and }g(0)\ne 0\right\}\;;$$ its unique maximal ideal is $R(F)=xS(F)$, and $R(F)$, considered as a ring in its own right, has no maximal ideals.
This paper by Patrick J. Morandi also constructs some examples. |
Spectral Graph Theory | Suppose $G_1,G_2,\ldots$ is an infinite sequence of connected graphs such $|V(G_i)| < |V(G_{i+1})|$ and such that the set of the eigenvalues of $G_i$ is $X = \{\lambda_1,\ldots, \lambda_t\}$ for all $i$.
Let $\lambda$ be the largest eigenvalue in $X$. Then it is well known that
$$\Delta(G_i) \leq \lambda^2\,$$ and hence the graphs in our sequence have bounded degree. But then the sequence of graphs has unbounded diameter (see also this), and hence there exists an $i$ such that $\rm{diam}(G_i) \geq t$; but then $G_i$ has to have at least $t+1$ distinct eigenvalues, and we have derived a contradiction.
prove $A \sim B\implies 2^A \sim 2^B$. | HINT: Suppose that $f:A\to B$ is a bijection, and consider the map
$$F:2^A\to 2^B:X\mapsto f[X]=\{f(a):a\in X\}\;.$$ |
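For a concrete feel for the hint, here is a tiny finite check (the sets $A=\{1,2\}$, $B=\{a,b\}$ below are my own toy example, not from the question): the induced map $F$ really does match up the power sets bijectively.

```python
from itertools import combinations

f = {1: 'a', 2: 'b'}  # a bijection A -> B

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# F(X) = f[X] = {f(x) : x in X}
F = {X: frozenset(f[x] for x in X) for X in powerset({1, 2})}
print(len(F), len(set(F.values())))  # 4 4 -- F is injective between equal-size sets, hence bijective
```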
Evaluate the double integral. | Some thing like this
\begin{align}
I&=\int_{0}^{2}\!\int_{x}^{-{x}^{2}+3\,x}\!\left({x}^{2}-xy\right)\,{\rm d}y\,{\rm d}x\\
&=\int_{0}^{2}\!\left[{x}^{2}y-\tfrac12\,x{y}^{2}\right]_{y=x}^{\,y=-{x}^{2}+3\,x}\,{\rm d}x\\
&=\int_{0}^{2}\!\left({x}^{2}\left(-{x}^{2}+3\,x\right)-\tfrac12\,x\left(-{x}^{2}+3\,x\right)^{2}-{x}^{3}+\tfrac12\,{x}^{3}\right)\,{\rm d}x\\
&=\int_{0}^{2}\!\left(-\tfrac12\,{x}^{5}+2\,{x}^{4}-2\,{x}^{3}\right)\,{\rm d}x\\
&=-\tfrac{16}{3}+\tfrac{64}{5}-8\\
&=-\frac{8}{15}
\end{align}
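As a numerical sanity check (my own addition, not part of the computation above), a midpoint Riemann sum over the same region should reproduce the value $-8/15$:

```python
# Approximate I = ∫_0^2 ∫_x^{3x - x^2} (x^2 - x y) dy dx by a midpoint rule.
n = m = 300
total = 0.0
for i in range(n):
    x = 2.0 * (i + 0.5) / n            # midpoint of the i-th x-slice
    y_lo, y_hi = x, 3.0 * x - x * x    # inner limits of integration
    dy = (y_hi - y_lo) / m
    for j in range(m):
        y = y_lo + (j + 0.5) * dy
        total += (x * x - x * y) * (2.0 / n) * dy

print(abs(total - (-8 / 15)) < 1e-3)  # True
```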
How does index notation work in Hermitian spaces? | Let $(e_i)_{i\in I}$ be a basis in $E$ and $(e^{\ast j})_{j\in I}$ be the dual basis for $E^{\ast}$ such that $e^{\ast j}(e_i)=\delta^j_i $.
Then we can expand vectors $v=\sum_{i\in I} v^i e_i\in E$ and covectors $f=\sum_{j\in I} f_j e^{\ast j}\in E^{\ast}$.
The isomorphism $\Phi:E\to E^{\ast}$ is given by via the sesquilinear form$^1$ $\langle \cdot, \cdot\rangle: E\times E \to \mathbb{C}$ as
$$\Phi(v)~=~ \langle v, \cdot \rangle~=~\sum_{i,j\in I} \bar{v}^i g_{ij} e^{\ast j},$$
where $g_{ij}:=\langle e_i, e_j \rangle$.
Therefore OP's last suggestion (2) applies
$$\Phi(v)_j~=~\sum_{i\in I} \bar{v}^i g_{ij}.$$
--
$^1$In our convention the sesquilinear form is conjugated $\mathbb{C}$-linear in the first entry and $\mathbb{C}$-linear in the second entry. Be aware that the opposite convention also exists in the literature. |
Show that in an n-dimensional vector space, any collection of m vectors, where m>n, must be linearly dependent | If $m$ vectors are independent, then $\rm{dim}V\ge m \gt n$, a contradiction. |
Differentials squared - Divergence in general orthogonal curvilinear coordinates. | $\newcommand{\dv}{\mathop{\rm div}}$
From the divergence theorem $$\int_\Omega \dv B = \int_{\partial \Omega} B \cdot n$$ we see that if we want to define the divergence via flux, the correct quantity is (assuming the divergence is continuous)
$$ \dv B (x) = \lim_{r \to 0} \frac1{V(\Omega(x,r))}\int_{\Omega(x,r)} \dv B =\lim_{r \to 0} \frac1{V(\Omega(x,r))}\int_{\partial \Omega(x,r)}B \cdot n$$
where $\Omega(x,r)$ is a "box" around the point $x$ with side lengths scaled by $r$ and $V$ denotes volume. Since the volume of a 3-dimensional box $B(x,r)$ is proportional to $r^3$, the omitted terms in the Taylor series do not contribute to the divergence: since they will on the order of $r^4$ or higher, when we divide by $r^3$ they will vanish in the limit $r \to 0$. |
For give permutation $\sigma\in S_{13}$ solve equation $x^3=\sigma$ | The permutation with the cycles $(1\ 3)(2)(4\ 11)(5)(6\ 12\ 10\ 8)(9)(13\ 7)$ does the job.
You only have to invert the $4$-cycle. The other cycles remain. You can also take the cycle $(2\ 5\ 9)$ to get another solution. |
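Since $\sigma$ itself is not reproduced above, here is a quick check of the stated mechanism (my own verification sketch): cubing a transposition gives the same transposition back, and cubing the $4$-cycle $(6\ 12\ 10\ 8)$ gives its inverse $(6\ 8\ 10\ 12)$.

```python
def from_cycles(cycles, n=13):
    # build a permutation of {1,...,n} from a list of cycles
    p = {i: i for i in range(1, n + 1)}
    for c in cycles:
        for a, b in zip(c, c[1:] + c[:1]):
            p[a] = b
    return p

def compose(p, q):
    return {i: p[q[i]] for i in p}  # (p ∘ q)(i) = p(q(i))

x = from_cycles([(1, 3), (4, 11), (6, 12, 10, 8), (13, 7)])
x3 = compose(x, compose(x, x))
# x^3 fixes the 2-cycles and sends 6->8->10->12->6, the inverted 4-cycle
print([x3[i] for i in (6, 8, 10, 12)], x3[1], x3[13])  # [8, 10, 12, 6] 3 7
```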
Number of 3 digit numbers in AP or GP | Let us count. The list of geometric progressions is short, though not as short as I first believed! There are the obvious $1, 2, 4$, and $2, 4, 8$, and $1, 3, 9$, and their reverses. Then there is the less obvious $4,6,9$ and its reverse, which I had missed. And of course $a, a, a$ where $a$ is any of $1$ to $9$. These last also happen to be arithmetic progressions. What about "common ratio" $0$? We will go along with Wikipedia's definition and not allow that.
Now let's count the arithmetic progressions. There are the $9$ with common difference $0$.
There are $7$ increasing ones with common difference $1$, and $8$ decreasing ones, since $0$ can be the final digit in that case.
There are $5$ increasing ones with common difference $2$, and $6$ decreasing ones.
There are $3$ increasing ones with common difference $3$, and $4$ decreasing ones.
There is $1$ increasing one with common difference $4$, and there are $2$ decreasing ones.
If we decide to forget about the sequences $a, a, a$ we get a count of $44$, for we listed $8$ geometric progressions and $36$ arithmetic progressions. Adding in the sequences $a, a, a$, which we definitely should, since they are indeed in both categories, gives us $53$.
For whatever it is worth, Wikipedia does not allow common ratio $0$. If we accept that, the correct count is $53$. |
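A brute-force check confirms the count (my own verification, not part of the original answer). Here "AP" means $b-a=c-b$, and "GP" means all three digits are nonzero and $b^2=ac$, which allows rational ratios such as in $4,6,9$:

```python
count = 0
for a in range(1, 10):          # leading digit is nonzero
    for b in range(10):
        for c in range(10):
            ap = (b - a) == (c - b)
            gp = a * b * c != 0 and b * b == a * c
            if ap or gp:
                count += 1
print(count)  # 53
```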
How is $\frac{\sin(x)}{x} = 1$ at $x = 0$ | In an elementary book, they should define $\mathrm{sinc}$ like this
$$
\mathrm{sinc}\; x = \begin{cases}
\frac{\sin x}{x}\qquad x \ne 0
\\
1\qquad x=0
\end{cases}
$$
and then immediately prove that it is continuous at $0$.
In a slightly more advanced book, they will just say
$$
\mathrm{sinc}\;x = \frac{\sin x}{x}
$$
and the reader will understand that removable singularities should be removed. |
Finding the dimension of a subspace $U = \{p(x) \in V | p(3)=p(5)=0\}$ | Writing each $p(x)=x_1x^3+x_2x^2+x_3x+x_4\in V$ as its coefficient vector, $U$ is the set of all $4$-dimensional vectors such that
$$\begin{pmatrix}
27 & 9 & 3 & 1\\
125& 25 & 5 & 1
\end{pmatrix}\begin{pmatrix}
x_1\\x_2\\x_3\\x_4
\end{pmatrix}=0$$
Since this matrix is of rank $2$, its kernel is of dimension $2$ by the rank-nullity theorem.
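An explicit check consistent with $\dim U = 2$ (assuming $V$ is the space of polynomials of degree at most $3$, which matches the $2\times 4$ matrix above): the polynomials $(x-3)(x-5)$ and $x(x-3)(x-5)$ lie in $U$ and have different degrees, so they form a basis of the kernel.

```python
def p1(x):
    return (x - 3) * (x - 5)        # degree-2 element of U

def p2(x):
    return x * (x - 3) * (x - 5)    # degree-3 element of U

# both vanish at x = 3 and x = 5, so both lie in U
print(p1(3), p1(5), p2(3), p2(5))  # 0 0 0 0
```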
Prove that: If $n \in \mathbb{Z}_{m} \setminus \left\{0\right\}$ has a multiplicative inverse, then this is definitely unique | Hint If $k,l$ are multiplicative inverses of $n$ then what is
$$knl=?$$ |
Inhomogeneous differential equation | Just plug in $q(x)=ax+b$ in the equation and determine $a$ and $b$ by comparing coefficients.. The answer is $q(x)=\frac {\alpha} {a_0} x+\frac {\beta a_0 -\alpha a_1} {a_0^{2}}$. |
$h_n \to 0 \implies \int_{-\infty}^{+\infty} (\exp(itx)-1-itx/(1+x^2))\frac{1+x^2}{x^2} \nu(dx) \to 0$ | We first consider the case $|x| < 1$. Using the identity
$$ e^{i\theta} = 1 + i\theta - \theta^2 \int_{0}^{1} (1 - t) e^{i\theta t} \, \mathrm{d}t, $$
it follows that $e^{i\theta} = 1 + i\theta + \mathcal{O}(\theta^2)$ uniformly in $\theta \in \mathbb{R}$.$^{1)}$ From this,
\begin{align*}
\biggl( e^{ih_n x} - 1 - \frac{ih_n x}{1+x^2} \biggr) \frac{1+x^2}{x^2}
&= \biggl( ih_n x + \mathcal{O}(h_n^2 x^2) - \frac{ih_n x}{1+x^2} \biggr) \frac{1+x^2}{x^2} \\
&= \biggl( \frac{ih_n x^3}{1+x^2} + \mathcal{O}(h_n^2 x^2) \biggr) \frac{1+x^2}{x^2} \\
&= ih_n x + \mathcal{O}(h_n^2(1+x^2)).
\end{align*}
If $|x| \geq 1$, then
\begin{align*}
\left| \biggl( e^{ih_n x} - 1 - \frac{ih_n x}{1+x^2} \biggr) \frac{1+x^2}{x^2} \right|
&\leq 4 + |h_n|.
\end{align*}
Combining altogether, there exists a uniform constant $C > 0$ such that
\begin{align*}
\left| \biggl( e^{ih_n x} - 1 - \frac{ih_n x}{1+x^2} \biggr) \frac{1+x^2}{x^2} \right|
&\leq C(1 + |h_n| + |h_n|^2), \qquad \forall x \in \mathbb{R}.
\end{align*}
Now assuming that $h_n \to 0$, we can invoke the dominated convergence theorem to conclude:
\begin{align*}
&\lim_{n\to\infty} \int_{\mathbb{R}} \biggl( e^{ih_n x} - 1 - \frac{ih_n x}{1+x^2} \biggr) \frac{1+x^2}{x^2} \, \nu(\mathrm{d}x) \\
&= \int_{\mathbb{R}} \lim_{n\to\infty} \biggl( e^{ih_n x} - 1 - \frac{ih_n x}{1+x^2} \biggr) \frac{1+x^2}{x^2} \, \nu(\mathrm{d}x) \\
&= 0.
\end{align*}
$^{1)}$ For the purpose of proving the implication in question, this uniform estimate is a bit of an overkill. Instead, we may assume that $|h_n| < 1$ and then use the Taylor approximation $e^{iz} = 1 + iz + \mathcal{O}(z^2)$, which holds for any complex $z$ in a given bounded neighborhood of $0$, say the disk of radius $1$.
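The integral identity above gives the explicit uniform bound $|e^{i\theta} - 1 - i\theta| \le \theta^2/2$ for all real $\theta$; a quick numeric spot-check (my own illustration):

```python
import cmath

# |e^{iθ} - 1 - iθ| = θ² |∫_0^1 (1-t) e^{iθt} dt| <= θ²/2
ok = all(abs(cmath.exp(1j * t) - 1 - 1j * t) <= t * t / 2 + 1e-12
         for t in (-10.0, -1.0, -0.1, 0.0, 0.1, 1.0, 10.0))
print(ok)  # True
```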
Matrices restricted to a subspace | As a matrix (stochastic or not), $Q$ can be thought of as a mapping $$\Bbb R^n \to \Bbb R^n : v \mapsto Qv$$
Thus the domain and codomain of $Q$ are $\Bbb R^n$. Since $S \subset \Bbb R^n$, we can consider the map $$S \to \Bbb R^n : v \mapsto Qv$$
instead. The only difference from $Q$ is that the domain is just $S$. This map is the restriction $Q|_S$ of $Q$ to $S$.
Since $S$ is in fact a vector subspace of $\Bbb R^n$, $Q|_S$ is also linear over $S$. If you wanted to represent it as a matrix, first you would need to pick a basis $\{v_i\}_{i=1}^{n-1}$ for $S$ (since $S$ is $n-1$ dimensional), then you could express the matrix elements as $\left[Q|_S\right]_{ij} = \langle v_i, Qv_j\rangle$. What matrix you get depends on the basis elements you selected.
For example, if you simply drop any one of the standard basis elements of $\Bbb R^n$, the remaining elements will form a basis for this particular space $S$. The resulting matrix will be original matrix $Q$ with one column removed. The column removed corresponds to the standard basis element you dropped. |
$f,g,h\in \{0,1\}^{\mathbb{N}}$ is finite | Note that
$$
(\mathbb{N}\setminus S_1)\cap(\mathbb{N}\setminus S_2)\subseteq \mathbb{N}\setminus S_3
$$
Then,
$$
S_3\subseteq S_1\cup S_2.
$$ |
Can we apply Fundamental theorem of Algebra on entire, nonconstant functions? | As you noted, the fundamental theorem of algebra isn't directly applicable to entire functions, as shown by the example $f(z) = e^z$, but there is a substitute, namely
Picard's little theorem If $f$ is an entire, non-constant function, then the equation $f(z) = a$ has a solution for every $a \in \mathbb{C}$ with at most one exception.
The proof of this is fairly tricky though. See a good intermediate-level textbook in complex analysis. |
How to show that this integral diverges or converges? | Note that $e^{1/x}+(n-1)/n\geq e^{1/x}$ and so $\ln (e^{1/x}+(n-1)/n)\geq 1/x$. Hence the given integral is greater than or equal to $$\int_1^\infty\frac{1}{x}dx=\lim_{x\to\infty}\ln x=\infty$$ |
Any idea for sketching $y=g(x)$ where $\tan (g(x))=\frac {x}{1+x^2}$, $g(0)=\pi$ and $-\infty\lt x\lt\infty$? | $$\frac{d(\arctan(\frac{x}{1+x^2}))}{dx}=\frac{1}{1+\frac{x^2}{(1+x^2)^2}}.\frac{1+x^2-2x^2}{(1+x^2)^2}$$
$$f'(x)=\frac{1-x^2}{x^4+3x^2+1}$$
$$f'(x)=0,x=\pm1$$
Also $f(0)=0$
and $\lim_{x\to\pm\infty}{f(x)}=0$
Therefore the graph must be one of these:
But we know that $f(x)>0$ when $x>0$ and $f(x)<0$ when $x<0$
So it is the first one. |
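A quick numeric cross-check of the derivative computed above (my own addition): $f(x)=\arctan\frac{x}{1+x^2}$ has $f'(x)=\frac{1-x^2}{x^4+3x^2+1}$, vanishing exactly at $x=\pm 1$.

```python
import math

def f(t):
    return math.atan(t / (1 + t * t))

def fprime(t):
    return (1 - t * t) / (t ** 4 + 3 * t * t + 1)

h = 1e-5
central = (f(0.5 + h) - f(0.5 - h)) / (2 * h)   # numeric derivative at x = 0.5
print(fprime(1.0) == 0.0, fprime(-1.0) == 0.0, abs(central - fprime(0.5)) < 1e-8)
# True True True
```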
Isomorphism between two groups of order $p^6$ | This involves a bit of calculation, but one way to show that these groups are not isomorphic is to show that all non-central elements of $G$ (i.e. those with $a \ne 0$ or $c \ne 0$) have centralizers of order $p^4$ and conjugacy class of sixe $p^2$. Any such $g$ is conjugate to $gz$ for all $z \in Z(G)$.
However, the non-central elements in the direct factors of $H \times H$ have centralizers of order $p^5$ and conjugacy class of size $p$. |
Removable Singularities | The function won't be continuous, so it isn't a removable singularity:
$$ \lim_{z\to 1} h(z) \neq h(1)=2 $$
For a removable singularity the function must be bounded in a neighborhood of the point, which is clearly not the case here (nor in many similar examples).
An example for a removable singularity is $f(z)=\frac{\sin z}{z} $ at $z=0$. Naively, the function isn't defined at that point, but we can define $f(0)=1$ and the function will be continuous for every $z\in \mathbb{C}$ (and even analytic!) |
Conditional cases of uniformly distributed shapes of unknown area | Once you established a constant value $c$ of the variable $X$, you are considering a distribution of $Y$ over an intersection of your given space and a line $x=c$.
If your space is a convex set (Thanks, @Rahul!) or at least normal with respect to the $X$ axis, the intersection is guaranteed to be a line segment, and the actual shape and area of the space no longer matter – the initial distribution is uniform, so the resulting distribution is uniform too; hence the density is just one over the length of the segment (and the length in the described case is $2\sqrt{r^2-x^2}$).
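A Monte Carlo illustration for the disk case with $r=1$ (the concrete numbers here are my own sketch): conditioned on $X \approx c$, the samples of $Y$ should look uniform on the chord of length $L = 2\sqrt{1-c^2}$, so their variance should be close to $L^2/12$.

```python
import random

random.seed(0)
c, delta, ys = 0.3, 0.01, []
while len(ys) < 5000:
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    if x * x + y * y <= 1 and abs(x - c) < delta:   # point in disk, X near c
        ys.append(y)

L = 2 * (1 - c * c) ** 0.5                          # chord length at x = c
mean = sum(ys) / len(ys)
var = sum((y - mean) ** 2 for y in ys) / len(ys)
print(abs(var / (L * L / 12) - 1) < 0.1)  # True: matches the uniform variance L^2/12
```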
If you have two envelopes, and ... | The assumption that is unrealistic is that there is a $\frac12$ chance that the other envelope contains twice the money. Realistically, there is an underlying distribution of values and that distribution dictates the probability that a given amount is the smaller.
1. Analysis of the Two Envelope Paradox
Let the value of a pair of envelopes (POE) be the smaller of the values of the envelopes. Let the pdf of the value of the POE be $f(a)$. That is, the probability that the value of a POE is between $a$ and $a+\mathrm{d}a$ is $f(a)\,\mathrm{d}a$.
The expected value of a randomly chosen envelope is
$$
\begin{align}
E
&=\frac12\int f(a)\,a\,\mathrm{d}a+\frac12\int f(a)\,2a\,\mathrm{d}a\\
&=\frac32\int f(a)\,a\,\mathrm{d}a\tag{1}
\end{align}
$$
We will assume that this exists
2. Conditional Probabilities
The probability that the value of the POE was between $\frac a2$ and $\frac a2+\frac{\mathrm{d}a}2$ and that we chose the larger is $\frac12f\left(\frac a2\right)\frac{\mathrm{d}a}2$. The probability that the value of the POE was between $a$ and $a+\mathrm{d}a$ and that we chose the smaller is $\frac12f(a)\,\mathrm{d}a$. Thus, the probability that the value of the envelope we chose is between $a$ and $a+\mathrm{d}a$ is $\frac14\left(f\left(\frac a2\right) + 2 f(a)\right)\mathrm{d}a$. Therefore, we define
$$
P(a)=\frac14\left[f\left(\frac a2\right) + 2 f(a)\right]\tag{2}
$$
Furthermore, given that the value of the envelope we chose was between $a$ and $a+\mathrm{d}a$, the probability that we chose the envelope with the larger value is
$$
L(a)=\frac{f\left(\frac a2\right)}{f\left(\frac a2\right)+2f(a)}\tag{3}
$$
and the probability that we chose the envelope with the smaller value is
$$
S(a)=\frac{2f(a)}{f\left(\frac a2\right)+2f(a)}\tag{4}
$$
This is where the unrealistic assumption falls apart. Without knowledge of $f$, we cannot know the conditional probabilities $L$ and $S$; they are usually not $\frac12$ and $\frac12$.
3. Strategies
Always Switch
Suppose we switch all the time. Then our expected value is
$$
\begin{align}
&\int\left[L(a)\frac a2+S(a)2a\right]P(a)\,\mathrm{d}a\\
&=\frac14\int\left[f\left(\frac a2\right)\frac a2+2f(a)\,2a\right]\,\mathrm{d}a\\
&=\frac32\int f(a)\,a\,\mathrm{d}a\\[8pt]
&=E\tag{5}
\end{align}
$$
Always Stay
Suppose we stay all the time. Then our expected value is
$$
\begin{align}
&\int\left[\vphantom{\int}L(a)\,a+S(a)\,a\right]P(a)\,\mathrm{d}a\\
&=\frac14\int\left[f\left(\frac a2\right)\,a+2f(a)\,a\right]\,\mathrm{d}a\\
&=\frac32\int f(a)\,a\,\mathrm{d}a\\[8pt]
&=E\tag{6}
\end{align}
$$
Therefore, the expected value is $E$ whether we switch all the time or stay. This is comforting since intuition says that switching should not help.
Better Strategy
However, there is a strategy that does give us a better expected value. Choose any function $k:[0,\infty)\to[0,1]$ such that $k(2a)\gt k(a)$; a monotonic increasing function for example. If an envelope has value $a$, keep it with probability $k(a)$ and switch otherwise. Then the expected value is
$$
\begin{align}
&\int L(a)\left[k(a)a+(1-k(a))\frac a2\right]P(a)\,\mathrm{d}a\\
&+\int S(a)\left[\vphantom{\int}k(a)a+(1-k(a))2a\right]P(a)\,\mathrm{d}a\\[3pt]
&=\frac14\int f\left(\frac a2\right)\left[k(a)a+(1-k(a))\frac a2\right]\,\mathrm{d}a\\
&+\frac14\int2f(a)\left[\vphantom{\int}k(a)a+(1-k(a))2a\right]\,\mathrm{d}a\\[3pt]
&=\frac32\int f(a)\,a\,\mathrm{d}a
+\frac12\int f(a)\left[\vphantom{\int}k(2a)-k(a)\right]a\,\mathrm{d}a\\[3pt]
&=E+\frac12\int f(a)\left[\vphantom{\int}k(2a)-k(a)\right]a\,\mathrm{d}a\tag{7}
\end{align}
$$
which, if $k(2a)\gt k(a)$, is better than $E$. If $k(a)$ is constant, as it is in the previous strategies, the expected value is $E$. |
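A Monte Carlo sketch of $(6)$ and $(7)$ (the distribution $f$ and the keep-probability $k$ below are my own choices): with $f = \mathrm{Exp}(1)$ we have $E = \frac32$, and the increasing rule $k(a) = 1 - e^{-a}$ does better than always staying.

```python
import math
import random

random.seed(1)
n = 200_000
stay = keep_rule = 0.0
for _ in range(n):
    a = random.expovariate(1.0)                       # value of the pair of envelopes
    mine, other = (a, 2 * a) if random.random() < 0.5 else (2 * a, a)
    stay += mine                                      # "always stay" payoff
    if random.random() < 1 - math.exp(-mine):         # keep with probability k(mine)
        keep_rule += mine
    else:
        keep_rule += other

print(abs(stay / n - 1.5) < 0.05, keep_rule / n > stay / n + 0.03)  # True True
```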
Homotopy equivalence between $X/A$ and $X$? | First, an inclusion $A \subset X$ has HEP when for any $Y$, any map from cylinder $X \times \{0\} \cup A \times [0, 1] \to Y$ extends to a map from the whole $X \times [0, 1] \to Y$ -- this is pretty obviously equivalent to your definition.
So, let $H: A \times [0, 1] \to A$ be a homotopy contracting $A$. Glue $H$ with identity to obtain a map $C: X \times \{0\} \cup A \times [0, 1] \to X$. Extend it via HEP to $\tilde{H}: X \times [0, 1] \to X$. Let $f: X \simeq X \times \{1\} \to X$ be the restriction of $\tilde{H}$ to the top part of cylinder. Since $f$ is constant on $A$, it induces a map $\tilde{f}: X/A \to X$.
Let $p: X \to X/A$ be the quotient map. We'll show that $\tilde{f}$ is homotopy inverse to $p$.
The composition $\tilde{f}p: X \to X$ is precisely $f$, and $\tilde{H}$ gives a homotopy between $f$ and identity, so $\tilde{f}$ is left inverse to $p$.
Composing $\tilde{H}$ with $p$ gives us a homotopy $p \tilde{H}: X \times [0, 1] \to X/A$ between $pf$ and $p$. But, $p\tilde{H}$ is constant on $A \times [0,1]$ (because $\tilde{H}$ restricted to $A \times [0, 1]$ is by construction just $H$, a contraction of $A$). Thus, $p \tilde{H}$ induces a map $F: (X/A) \times [0, 1] \to X/A$, which is a homotopy between identity and $p \tilde{f}$. |
Reference for Poisson point process | Poisson Processes by J.F.C. Kingman (Oxford University Press, 1993) is an excellent introduction.
A deeper dive can be found in Lectures on the Poisson Process by G. Last and M. Penrose (Cambridge University Press, 2018). |
Does the condition ${(n +1)^2} |x_n - x_{n+1}| \to 0$ imply that $\lim x_n$ exists? | The series $\sum \frac{1}{(n+1)^2}$ is convergent. Let it converge to a number, say to $S$. Clearly $S > 0$.
Let $ε > 0$ be arbitrary. Then for $\frac{ε}{S}$, by our assumption, there exists an $n_0 \in \mathbb{N}$ such that
$|x_n - x_{n + 1}| < \frac{ε}{S}\cdot\frac{1}{(n + 1)^2}$ for all $n \geq n_0$.
Now by the triangle inequality, for $n > n_0$ we have
$$|x_n - x_{n_0}| \leq |x_n - x_{n - 1}| + |x_{n - 1} - x_{n - 2}| + \dots + |x_{n_0 + 1} - x_{n_0}|
\leq \frac{ε}{S}\sum_{k=n_0+1}^{n}\frac{1}{k^2} \leq \frac{ε}{S}\cdot S = ε.$$
Thus for an arbitrary $ε > 0$ there exists an $n_0 \in \mathbb{N}$ such that $|x_n - x_{n_0}| \leq ε$ for each $n > n_0$; hence $(x_n)$ is a Cauchy sequence and $\lim x_n$ exists.
Solving anagrams using math. | The number of substrings of a string of the form $x^n=\underbrace{xxx\cdots xx}_{n\text{ times}}$ is $\binom{n+1}{2}$ cause essentially you are choosing inclusively(they can be the same, a substring of length $1$) the initial position and the end position of your substring. Now, every substring is an anagram of any other substring iff the length is the same, as you have noticed so it is not enough to take pairs in the space counted by $\binom{n+1}{2,}$ notice that the number of substrings of length $k$ is $n-k+1,$ just go left to right and you will see that. Here, we can take any other string, either itself but we can not repeat the pair, so we will have,notice that the plus one comes from choosing the same substring, $$\binom{n-k+1+1}{2}.$$
For example, if $k=1$ we will have $\binom{n+1}{2}=\binom{n}{2}+\binom{n}{1}$ which means take $2$ positions or the same position.(So, in your example, you are missing taking $([0],[0]),([1],[1]),\cdots$ )
The problem when the string is not of this form I suspect, requires some kind of careful recursion.DP? |
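A brute-force check of the count for a concrete one-letter string (my own verification, with $n=6$): enumerate all unordered pairs of substrings, repeats allowed, that are anagrams of each other, and compare with $\sum_{k=1}^{n}\binom{n-k+2}{2}$.

```python
from itertools import combinations_with_replacement
from math import comb

s = "aaaaaa"
n = len(s)
subs = [s[i:j] for i in range(n) for j in range(i + 1, n + 1)]  # all n(n+1)/2 substrings
pairs = sum(1 for a, b in combinations_with_replacement(subs, 2)
            if sorted(a) == sorted(b))                          # anagram pairs
closed = sum(comb(n - k + 2, 2) for k in range(1, n + 1))
print(pairs, pairs == closed)  # 56 True
```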
How to prove that a smooth function is NOT analytic? | An explicit example might help:
$$f(x) =\begin{cases}e^{-1/x} \text{ for } x >0 \\
0 \text{ for } x \leq 0\end{cases}$$
This is smooth but not analytic at $x=0$. Note that $f^{(n)}(0)=0$ for all $n$, so the Taylor series at $x=0$ is just $0$, which clearly does not agree with $f(x)$ on any neighborhood of $0$.
However if you don't have smoothness, or even continuity, you don't have analyticity.
So ways you can tell is by if it's continuous/differentiable/smooth. If it IS smooth, you can check to see if it actually equals its Taylor series. |
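A numeric illustration of the flatness at $0$ (my own addition): $e^{-1/x}$ vanishes faster than any power of $x$ as $x \to 0^+$, which is why every derivative of $f$ at $0$ is $0$.

```python
import math

x = 1e-2
# e^{-1/x} / x^n stays tiny even for large n
ratios = [math.exp(-1 / x) / x ** n for n in (1, 5, 10)]
print(all(r < 1e-20 for r in ratios))  # True
```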
Central limit theorem for "Markov type" independent random variables | You can use the method of moments. According to this method, since your random variables are "nice" (in this case, they are uniformly bounded), it is enough to show that the individual moments of $\frac{X_1+\cdots+X_n}{\sqrt{n}}$ converge to the corresponding moments of $\mathcal{N}(0,1)$. A standard calculation shows that when all $X_i$ are independent then this indeed happens, see for example here. In your case there will be added error terms that arise from estimating products of variables that are too close, but fortunately these terms would be asymptotically nil. Here is an example using the third and fourth moments – I'll let you figure out how to generalize it. (For the second moment we are given that it converges to 1.)
For the third moment we have
$$
\mathbb{E}\left[ \frac{(X_1+\cdots+X_n)^3}{n^{3/2}} \right] = \frac{1}{n^{3/2}} \sum_i \mathbb{E}[X_i^3] + \frac{3}{n^{3/2}}\sum_{i\neq j} \mathbb{E}[X_i^2 X_j] + \frac{6}{n^{3/2}} \sum_{i<j<k} \mathbb{E}[X_i X_j X_k].
$$
The first term is $O(B^3/\sqrt{n})$, where $B$ is the uniform bound on all $X_i$. The second term we split into two parts: $|i-j| > L$ and $|i-j| \leq L$. The first part vanishes since the variables are centered. The second part is $O(LB^3/\sqrt{n})$. The third term we again split into two parts: $|i-j|,|i-k|,|j-k| \leq L$, and its complement. The second part vanishes since the variables are centered. The first part is $O(L^2B^3/\sqrt{n})$.
For the fourth moment we have
$$
\mathbb{E}\left[\frac{(X_1+\cdots+X_n)^4}{n^2}\right] = \frac{1}{n^2} \sum_i \mathbb{E}[X_i^4] + \frac{4}{n^2} \sum_{i \neq j} \mathbb{E}[X_i^3 X_j] + \frac{6}{n^2} \sum_{i < j} \mathbb{E}[X_i^2 X_j^2] + \frac{12}{n^2} \sum_{i,\,j<k} \mathbb{E}[X_i^2 X_j X_k] + \frac{24}{n^2} \sum_{i<j<k<l} \mathbb{E}[X_i X_j X_k X_l].
$$
The first term is $O(B^4/n)$. The second term is $O(B^4L/n)$. The third term we leave for now. The fourth term is $O(B^4L^2/n)$. The fifth term is $O(B^4L^3/n)$. Finally, we break the third term into two parts: $|i-j| \leq L$ and $|i-j| > L$. The contribution of the first part is $O(B^4L/n)$. The second part is
$$
\frac{3}{n^2} \sum_{|i-j|>L} \mathbb{E}[X_i^2] \mathbb{E}[X_j^2] = \frac{3}{n^2} \sum_{i,j} \mathbb{E}[X_i^2] \mathbb{E}[X_j^2] - O\left(\frac{B^4L}{n}\right) = \\ 3 \mathbb{E}\left[\frac{X_1^2 + \cdots + X_n^2}{n}\right]^2 - O\left(\frac{B^4L}{n}\right) = 3 + o(1).
$$
So the third moment tends to 0, and the fourth moments tends to 3. Similar calculations work for all other moments.
(My error estimates might contain mistakes. Please let me know and I will correct them.) |
Different way to do Product Operation in a Symmetric Group in different texts | This is just whether you act on the left or on the right. You have composed $\sigma$ first, then $\tau$ on the first line, and on the second line you have done $\tau$ first, then $\sigma$. So of course you obtain a different answer. However,
$$\tau\sigma(1)=\tau(\sigma(1))=\tau(2)=1.$$
So you obtain the same answer. |
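A concrete illustration of the two conventions (these particular $\sigma$, $\tau$ are hypothetical stand-ins, chosen only to be consistent with the values used above: $\sigma(1)=2$ and $\tau(2)=1$): take $\sigma = (1\ 2\ 3)$ and $\tau = (1\ 2)$, represented as dictionaries on $\{1,2,3\}$.

```python
sigma = {1: 2, 2: 3, 3: 1}  # the 3-cycle (1 2 3)
tau = {1: 2, 2: 1, 3: 3}    # the transposition (1 2)

sigma_first = {i: tau[sigma[i]] for i in sigma}  # apply sigma, then tau
tau_first = {i: sigma[tau[i]] for i in sigma}    # apply tau, then sigma
print(sigma_first[1], tau_first[1])  # 1 3 -- the two conventions disagree
```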
Books for learning math from primary school, for 62-year-old? | In general, Schaum's books are very suited for self study. Here is for example a book that contains quite a bit:
http://www.amazon.co.uk/Schaums-Outline-College-Mathematics-Edition/dp/0071626476 |
Algebra with Probabilities | If the inequality holds for all $b \in (0,1)$ then this is just a consequence of continuity. You have $f(b) = 5^{-bp_1} p_2 - 5^{-bq_1} q_2$ and $f(b) \ge 0$ for $b \in (0,1)$. Since $f$ is continuous, $f(1) \ge 0$ as well. |
What is the explanation for similar decimal digits in values of Riemann zeta function with certain arguments close to one? | There's nothing mysterious going on here, and it has nothing to do with digits or bases or powers of $2$. The Laurent series of $\zeta(z)$ at $z=1$ is
$$
\zeta(z)=\frac1{z-1}+\gamma + o(1)\;,
$$
and this is exactly what you're seeing. It's not that the digits appear in some strange place; it's $\gamma$ itself, and the digits appear somewhere in a decimal expansion only because the numbers are being displayed in scientific notation. |
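A numerical illustration (my own rough Euler-Maclaurin approximation of $\zeta$, not part of the original answer): $\zeta(1+h) - 1/h$ approaches the Euler-Mascheroni constant $\gamma$ as $h \to 0$.

```python
def zeta(s, N=10_000):
    # partial sum + integral tail + first Euler-Maclaurin correction term
    return (sum(k ** -s for k in range(1, N + 1))
            + N ** (1 - s) / (s - 1) - 0.5 * N ** -s)

gamma = 0.5772156649015329
vals = [zeta(1 + h) - 1 / h for h in (1e-2, 1e-3, 1e-4)]
print([abs(v - gamma) < 1e-2 for v in vals])  # [True, True, True]
```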
Particular solution after solving nonhomogeneous Euler equation | You have the correct homogeneous solution: $y_h = c_1 x+c_2x^3$.
Now, let $y_1=x$ and $y_2=x^3$. If we let $y_p=u_1(x)y_1(x) + u_2(x)y_2(x)$, our objective is to find $u_1(x)$ and $u_2(x)$ using Variation of Parameters. When I took differential equations, we used Cramer's rule to show that if $y_p = u_1y_1 + u_2y_2$ was a solution to $y^{\prime\prime}+P(x)y^{\prime}+Q(x)y=f(x)$, then it followed that
$$u_1^{\prime}(x) = \frac{\det\begin{bmatrix}0 & y_2\\ f(x) & y_2^{\prime}\end{bmatrix}}{\det\begin{bmatrix}y_1 & y_2\\ y_1^{\prime} & y_2^{\prime}\end{bmatrix}} = -\frac{y_2 f(x)}{W(y_1,y_2)}\tag{1}$$
and
$$u_2^{\prime}(x) = \frac{\det\begin{bmatrix}y_1 & 0\\ y_1^{\prime} & f(x)\end{bmatrix}}{\det\begin{bmatrix}y_1 & y_2\\ y_1^{\prime} & y_2^{\prime}\end{bmatrix}}= \frac{y_1f(x)}{W(y_1,y_2)}\tag{2}$$
where $W(y_1,y_2)=\det\begin{bmatrix}y_1 & y_2 \\ y_1^{\prime} & y_2^{\prime}\end{bmatrix}$ is the Wronskian of $y_1$ and $y_2$. We also note that we can rewrite our original ODE as follows:
$$x^2y^{\prime\prime}-3xy^{\prime}+3y = \ln x \implies y^{\prime\prime} - \frac{3}{x}y^{\prime}+\frac{3}{x^2}y = \frac{\ln x}{x^2}.$$
Thus, $f(x)=\dfrac{\ln x}{x^2}$. Plugging our $y_1$, $y_2$ and $f(x)$ into $(1)$ and $(2)$ gives us
$$u_1^{\prime}(x) = \frac{\det\begin{bmatrix}0 & x^3\\ \frac{\ln x}{x^2} & 3x^2\end{bmatrix}}{\det\begin{bmatrix}x & x^3\\ 1 & 3x^2\end{bmatrix}} = -\frac{\ln x}{2x^2}\qquad\text{ and } \qquad u_2^{\prime}(x) = \frac{\det\begin{bmatrix}x & 0\\ 1 & \frac{\ln x}{x^2}\end{bmatrix}}{\det\begin{bmatrix}x & x^3\\ 1 & 3x^2\end{bmatrix}} =\frac{\ln x}{2x^4}$$
We now integrate each one to see that $u_1(x)=\dfrac{1+\ln x}{2x}$
$$\begin{aligned}u_1(x) &= -\frac{1}{2}\int \frac{\ln x}{x^2}\,dx \\ &= -\frac{1}{2}\int te^{-t}\,dt;\quad(\text{sub: }t=\ln x)\\ &= -\frac{1}{2}\left[-te^{-t} + \int e^{-t}\,dt\right];\quad(\text{parts: }u=t,\,\,dv = e^{-t}\,dt)\\ &= \frac{1}{2}e^{-t}(1+t)\\ &= \frac{1+\ln x}{2x}\end{aligned}$$
and $u_2(x)=-\dfrac{1+3\ln x}{18x^3}$
$$\begin{aligned}u_2(x) &= \frac{1}{2}\int\frac{\ln x}{x^4}\,dx \\ &= \frac{1}{2}\int te^{-3t}\,dt;\quad (\text{sub: } t=\ln x)\\ &= \frac{1}{2}\left[-\frac{1}{3}te^{-3t} + \frac{1}{3}\int e^{-3t}\,dt\right];\quad(\text{parts: }u=t,\,\,dv = e^{-3t}\,dt)\\ &= -\frac{1}{18}e^{-3t}(3t+1)\\ &= -\frac{1+3\ln x}{18x^3}\end{aligned} $$
Note that I didn't add any constants of integration to $u_1(x)$ and $u_2(x)$; the reason for this is because when we consider the general solution $y = y_h + y_p$, the constants of integration from this process get "absorbed" into the respective arbitrary coefficients of the homogeneous solution.
Therefore, the particular solution you should get is $$\begin{aligned}y_p &= u_1(x)y_1(x) + u_2(x)y_2(x) \\ &= \frac{1+\ln x}{2} - \frac{1+3\ln x}{18} \\ &= \frac{4}{9} +\frac{1}{3}\ln x \\ &= \frac{1}{9}(4+3\ln x)\end{aligned}$$
and hence the general solution is
$$y=c_1x+c_2x^3+\frac{1}{9}(4+3\ln x)$$
which matches the solution given by WolframAlpha.
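As a finite-difference sanity check (a sketch, not part of the derivation) that $y=c_1x+c_2x^3+\frac19(4+3\ln x)$ really satisfies $x^2y''-3xy'+3y=\ln x$:

```python
import math

def y(x, c1=0.0, c2=0.0):
    return c1 * x + c2 * x**3 + (4 + 3 * math.log(x)) / 9

def residual(x, c1=0.0, c2=0.0, h=1e-4):
    # central-difference check of x^2 y'' - 3x y' + 3y = ln x
    yp  = (y(x + h, c1, c2) - y(x - h, c1, c2)) / (2 * h)
    ypp = (y(x + h, c1, c2) - 2 * y(x, c1, c2) + y(x - h, c1, c2)) / h**2
    return x**2 * ypp - 3 * x * yp + 3 * y(x, c1, c2) - math.log(x)

checks = [abs(residual(x, c1, c2))
          for x in (0.5, 1.0, 2.0, 5.0)
          for c1, c2 in ((0.0, 0.0), (1.0, -2.0))]
```

The residual stays near zero both for the particular solution ($c_1=c_2=0$) and with arbitrary homogeneous constants mixed in.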
Alternatively, you could have avoided Variation of Parameters completely by solving this equation using the method of undetermined coefficients; in particular, you could have guessed a particular solution of the form $y_p = A+B\ln x$ and then you could have solved for $A$ and $B$. You can read more about the method of undetermined coefficients for Cauchy-Euler equations here.
I hope this makes sense! |
An affine open neighborhood of a nonsingular point | If $X$ is irreducible this is true iff $X$ is birational to a projective space iff the function field of $X$ is $k(x_1, ... x_n)$ (since $X$ and any open $U$ share the same function field). Easy counterexamples include any smooth projective curve of positive genus (since genus is a birational invariant). |
Making sense of combinatorics-based marketing hyperboles | I think this is a fine approach. Some of the choices could be three-way, but to get a general idea of how many choices it takes to get that many total possibilities you are spot on. |
Finding Canonical form using given functionals | Note that the kernels (nullspaces) of $\psi,\phi$ have dimensions $\dim(V) - 1$ and $\dim(W) - 1$ respectively. Let $v_2,\dots,v_{n}$ and $w_2,\dots,w_{m}$ be bases of these respective kernels. Extend these bases so that $\{v_1,\dots,v_n\}$ is a basis for $V$ and $\{w_1,\dots,w_m\}$ is a basis of $W$; scale $v_1,w_1$ so that $\psi(v_1) = \phi(w_1) = 1$. These bases are such that
$$
f\left(\sum a_i v_i,\sum b_jw_j \right) = \pmatrix{a_1\\ \vdots \\ a_n}^T \pmatrix{1&0&\cdots & 0\\
0 & 0 & \cdots & 0\\
\vdots &\vdots & \ddots & \vdots\\
0 & 0 & \cdots & 0}\pmatrix{b_1 \\ \vdots \\ b_m} = a_1b_1
$$ |
Can a diffeomorphism between submanifolds be extended to a self-diffeomorphism of total manifold? | Consider a standard circle and a trefoil knot in $\mathbb{R}^3$. Any diffeomorphism of $\mathbb R^3$ to itself that took one to the other would have to be a diffeomorphism of the complements. But the fundamental groups of the complements are different, so such a diffeomorphism does not exist. |
Density of Lipschitz functions in the set of uniformly continuous functions | Hint/trick: take $$f_j(x)=\inf_{y\in E}\{f(y)+j\cdot d(x,y)\}.$$
$$f_j\overset{Uc}{\longrightarrow} f$$ |
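Here is a small numerical sketch of this inf-convolution trick (assumptions: $f(x)=\sqrt{x}$ on $E=[0,1]$, with the infimum approximated by a minimum over a grid); the sup-norm error should shrink as $j$ grows:

```python
import math

grid = [i / 500 for i in range(501)]
fvals = [math.sqrt(x) for x in grid]

def f_j(x, j):
    # inf-convolution f_j(x) = inf_y f(y) + j*|x - y|, approximated on the grid
    return min(fy + j * abs(x - y) for y, fy in zip(grid, fvals))

sup_errors = [max(abs(f_j(x, j) - math.sqrt(x)) for x in grid)
              for j in (1, 4, 16)]
```

Each $f_j$ is $j$-Lipschitz by construction, and for $\sqrt{x}$ the sup error works out to about $1/(4j)$, so it decays as $j\to\infty$.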
Ensemble of periodic functions orthogonal to their derivative | Actually, this is the case for any periodic function. Consider $f(x)$ being periodic with period $T$. That is, $f(x+T)=f(x)$. In particular, $f(T)=f(0)$. Then, integrating by parts,
$$\int\limits_0^Tf(x)f'(x)dx = f(x)f(x)\big|_0^T-\int\limits_0^Tf'(x)f(x)dx$$
Thus,
$$2\int\limits_0^Tf(x)f'(x)dx = f(x)f(x)\big|_0^T = f(T)^2-f(0)^2=0$$
Hence
$$\int\limits_0^Tf(x)f'(x)dx = 0$$ |
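A quick numerical illustration (a sketch with an arbitrary smooth $2\pi$-periodic function; an equispaced Riemann sum over one full period is essentially exact for trigonometric polynomials):

```python
import math

def f(x):
    return math.sin(x) + 0.5 * math.cos(2 * x)   # arbitrary 2*pi-periodic example

def fprime(x):
    return math.cos(x) - math.sin(2 * x)

T, N = 2 * math.pi, 1000
# equispaced Riemann sum of f*f' over one period: should vanish
integral = (T / N) * sum(f(k * T / N) * fprime(k * T / N) for k in range(N))
```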
Solutions to $1 - \frac{r'^2}{2c^2} + \frac{r r''}{c^2}=0$? | After some experimentation, I think I found a way to solve the problem. Start by multiplying both sides by $r'$.
$$r^{-2}r'+\frac1{2c^2}(-r^{-2}r'^3+2r^{-1}r'r'')=0$$
Note that the term in parentheses can be written as
$$(r^{-1}r'^2)'$$
Knowing this, we can integrate both sides to get
$$-r^{-1}+\frac{r^{-1}r'^2}{2c^2}=k_1,\frac{r'^2}{2c^2}=k_1r+1$$
$$r'^2=k_2r+2c^2,r'=\pm\sqrt{k_2r+2c^2}$$
$$\int\frac{dr}{\sqrt{k_2r+2c^2}}=\pm\int dt$$
$$\frac2{k_2}\sqrt{k_2r+2c^2}=\pm t+k_3$$
$$\sqrt{k_2r+2c^2}=\pm\frac{k_2t}2+k_4$$
$$k_2r+2c^2=\frac{k_2^2t^2}4\pm k_2k_4t+k_4^2$$ |
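As a sanity check on the final relation (a sketch with arbitrarily chosen constants): $k_2 r + 2c^2 = (k_2t/2 + k_4)^2$ makes $r$ a quadratic in $t$, for which central differences are exact up to rounding, so the residual of the original ODE should vanish:

```python
c, k2, k4 = 1.3, 2.0, 0.7   # arbitrary constants

def r(t):
    # from k2*r + 2c^2 = (k2*t/2 + k4)^2
    return ((k2 * t / 2 + k4)**2 - 2 * c**2) / k2

def residual(t, h=1e-3):
    # residual of 1 - r'^2/(2c^2) + r*r''/c^2 via central differences
    rp  = (r(t + h) - r(t - h)) / (2 * h)
    rpp = (r(t + h) - 2 * r(t) + r(t - h)) / h**2
    return 1 - rp**2 / (2 * c**2) + r(t) * rpp / c**2

checks = [abs(residual(t)) for t in (-1.5, 0.0, 1.0, 2.5)]
```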
some basic question about fibration | Fibration is not (in my experience) a precisely defined term in algebraic geometry; maybe particular authors give precise definitions in various contexts though.
To answer some of your other questions:
smooth implies flat, so the condition with smoothness is more stringent than the condition with flatness.
faithfully flat means flat and surjective.
If $f: X \to Y$ is proper and surjective with $X$ and $Y$ integral and $Y$ normal (e.g. smooth), and with its generic fibre both geometrically connected and geometrically reduced, then $f_* \mathcal O_X = \mathcal O_Y$. (By properness, $f_*\mathcal O_X$ is a coherent sheaf of $\mathcal O_Y$-algebras. Since $f$ is surjective, it is torsion-free over $\mathcal O_Y$. By the assumption on the generic fibre, we see that $f_*\mathcal O_X$ has rank one over $\mathcal O_Y$, and so by normality equals $\mathcal O_Y$. Note that geometric reducedness is important;
in characteristic $p$ Frobenius maps give examples of finite (in particular proper) maps with connected fibres where $f_*\mathcal O_X$ is not equal to $\mathcal O_Y$.
In char. zero, the geometric reducedness of the generic fibre is automatic,
given the assumption that $X$ is integral.)
Note that $f_*\mathcal O_X = \mathcal O_Y$ implies connected fibres when $f$ is proper, by the theorem on formal functions.
$f_*\mathcal O_X = \mathcal O_Y$ is not particularly related to flatness one way or another. E.g. it holds when $Y$ is normal and $f$ is birational (by the third point, since birational means that $f$ is generically an isomorphism), but birational morphisms are not flat if they are not the identity (since the fibre dimension
jumps from $0$ at most points to $> 0$ at certain points).
On the other hand, any map of smooth varieties with $0$ dimensional fibres
is flat (a special case of so-called miracle flatness), but the fibres are typically not connected; think about morphisms between smooth curves.
Note that a proper flat morphism will be surjective onto a union of connected components
of $Y$ (and in particular, will be surjective if $Y$ is connected). The reason
is that proper morphisms are closed (by definition), and flat morphisms are open (under mild finiteness conditions, e.g. locally of finite presentation, so certainly
for morphisms of varieties); so the image of a proper flat morphism is both open and closed. |
Prove that ${2^n-1\choose k}$ and ${2^n-k\choose k}$ ar always odd. | $$\binom{2^n-1}{k} = \frac{(2^n-1)(2^n-2)(2^n-3)\cdots(2^n-k)}{1\cdot 2\cdot 3\cdots k} $$
and $2^n-a$ is divisible by exactly as many factors of $2$ as $a$ is.
This means that every factor of $2$ in the numerator is matched by a factor of $2$ in the denominator. After cancelling those, what's left in the numerator must be an odd number, and you can't get an even integer by dividing an odd integer by any integer. |
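The first claim is easy to spot-check (a sketch using Python's exact integer `math.comb`):

```python
from math import comb

# C(2^n - 1, k) should be odd for every 0 <= k <= 2^n - 1
all_odd = all(comb(2**n - 1, k) % 2 == 1
              for n in range(1, 11)
              for k in range(2**n))
```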
dimension of fiber of variety | Thanks for asking this question. It is nice to know that I am not the only one who is stuck on this. Let me try to attempt answering this, but I hope that someone corrects it, if I make any wrong remark.
Following the remarks Vakil makes in 11.4.B in his notes, we are in the situation where we have $\pi : X = \text{Spec}(A) \rightarrow \text{Spec}(B) = Y$ induced by an inclusion $B \hookrightarrow A$ of integral domains that are finitely generated over a field $k$. Fix any $y \in Y$.
We use what Vakil proves later which is the existence of a nonempty distinguished open $U$ in $X$ such that $\pi^{-1}(U) \rightarrow U$ factors as $\pi^{-1}(U) \rightarrow \mathbb{A}^{m-n}_{k} \times_{k} U \rightarrow U$ where the first map is finite and surjective and the second one is the projection map. Since $U$ is distinguished open, we may write $U = \text{Spec}(B_{g})$. Then we can write down the given factorization as $\text{Spec}(A_{\phi(g)}) \rightarrow \mathbb{A}^{m-n}_{B_{g}} \rightarrow \text{Spec}(B_{g})$ where $\phi : B \rightarrow A$ denotes the corresponding ring map of $\pi : X \rightarrow Y$. Thus, the corresponding ring map $B_{g} \rightarrow A_{\phi(g)}$ must factor as $B_{g} \rightarrow B_{g}[t_{1}, \dots, t_{m-n}] \rightarrow A_{\phi(g)}$. Since $B_{g}$ has fraction field $B_{(0)}$, by considering transcendence degree, we have $\dim(B_{g}) = \dim(B)$. Similarly, we have $\dim(A_{\phi(g)}) = \dim(A)$. This lets us replace $B_{g}$ with $B$ and $A_{g}$ with $A$ in our proof. Hence, we are working with $B \rightarrow B[t_{1}, \dots, t_{m-n}] \rightarrow A$.
Fix $y \in U$. We want to show that $\pi^{-1}(y)$ is of pure dimension $m - n$ where $m = \dim(A)$ and $n = \dim(Y)$. Note that by convention we give $\pi^{-1}(y)$ a scheme structure by taking the spectrum of $R = \kappa(y) \otimes_{B} A$, and this is a finitely generated algebra over the residue field $\kappa(y)$. Thus, for any minimal prime $\mathfrak{h} \subset R$, we have $\dim(R) = \dim(R/\mathfrak{h})$, which shows that $\pi^{-1}(y)$ is of pure dimension, equal to $\dim(R)$. Thus, we only need to check $\dim(\pi^{-1}(y)) = m - n$.
Applying base change over $\kappa(y)$ to the factorization $\pi^{-1}(U) \rightarrow \mathbb{A}^{m-n}_{k} \times U \rightarrow U$, we get $\pi^{-1}(y) \rightarrow \mathbb{A}^{m-n}_{\kappa(y)} \rightarrow \text{Spec}(\kappa(y))$, and the first map is still finite and surjective by 9.4.B (d) and 9.4.D of Vakil's notes.
Step 1. We first show $\dim(\pi^{-1}(y)) \leq m - n$.
Let $Z = \overline{\{x\}} \cap \pi^{-1}(y) \subset \pi^{-1}(y) = \text{Spec}(\kappa(y) \otimes_{B} A)$ be any irreducible component. Then we may write $Z = \text{Spec}((\kappa(y) \otimes_{B} A)/\mathfrak{p}')$ where $\mathfrak{p}'$ represents $x$ in the fiber. Ring-wise, we have $\kappa(y) \hookrightarrow \kappa(y)[t_{1}, \dots, t_{m-n}]/\mathfrak{q}' \hookrightarrow (\kappa(y) \otimes_{B} A)/\mathfrak{p}'$, where $\mathfrak{q}'$ is the image of $x$ in $\mathbb{A}^{m-n}_{\kappa(y)}$ under $\pi^{-1}(y) \rightarrow \mathbb{A}^{m-n}_{\kappa(y)}$. Since the second inclusion is finite, it must preserve dimension, so
$\begin{align*}\dim(\pi^{-1}(y)) &= \dim(Z)\\ &= \dim((\kappa(y) \otimes_{B} A)/\mathfrak{p}')\\ &= \dim(\kappa(y)[t_{1}, \dots, t_{m-n}]/\mathfrak{q}')\\ &= m-n-\text{ht}(\mathfrak{q'}) \\&\leq m-n,\end{align*}$
as desired.
Step 2. It remains to show that $\dim(\pi^{-1}(y)) \geq m-n.$
We use the finite surjection $\pi^{-1}(y) \twoheadrightarrow \mathbb{A}^{m-n}_{\kappa(y)}.$ Consider $(0) \in \text{Spec}(\kappa(y)[t_{1}, \dots, t_{m-n}]) = \mathbb{A}^{m-n}_{\kappa(y)},$ the generic point. Since the map is surjective there must be some $x \in \pi^{-1}(y)$ mapping into $(0)$. Writing $\pi^{-1}(y) = \text{Spec}(R)$ and denoting $x = \mathfrak{p},$ the map corresponds to a finite ring map $\kappa(y)[t_{1}, \dots, t_{m-n}] \rightarrow R$ and this map pulls $\mathfrak{p}$ back to $(0)$. Hence we get a finite ring map $\kappa(y)[t_{1}, \dots, t_{m-n}] \rightarrow R/\mathfrak{p}$ that pulls $(0)$ to $(0)$, so we must have a finite extension $\kappa(y)[t_{1}, \dots, t_{m-n}] \hookrightarrow R/\mathfrak{p},$ and since finite extension does not change the dimension, we must have $m-n = \dim(R/\mathfrak{p}) \leq \dim(R) = \dim(\pi^{-1}(y)),$ as desired.
Remark. I am quite bothered by the fact that Step 2 does not use Vakil's hint in his book. I tried to work out that way, but I could not come up with any sound proof. If anyone finds a mistake in Step 2 or has a different answer, I would love to see it! |
If the following piecewise function is continuous | Since $|x|+|y| \geq \sqrt {x^{2}+y^{2}}$ we see that $f(x,y) \to \arctan(\infty)= \frac{\pi} 2 $ as $(x,y) \to (0,0)$. Hence $f$ is continuous iff $\alpha =\frac {\pi} 2$ |
Triple integral boundaries | A picture can help. $W$ is the tetrahedron with the grey plane as its base.
The three sets of boundaries:
$$\int_{y=0}^{2} \int_{x=0}^{y/2} \int_{z=0}^{2-y} {x\;dz\;dx\;dy} \\
\int_{x=0}^{1} \int_{z=0}^{2-2x} \int_{y=2x}^{2-z} {x\;dy\;dz\;dx} \\
\int_{z=0}^{2} \int_{y=0}^{2-z} \int_{x=0}^{y/2} {x\;dx\;dy\;dz}.$$
The reasoning for the first one, for example:
\begin{eqnarray*}
y &:& \text{over the whole region, $y$ has a min of $0$ and max of $2$} \\
x &:& \text{given $y$, then $x$ has a min of $0$ and max of $y/2$} \\
z &:& \text{given $y,x$, then $z$ has a min of $0$ and max of $2-y$ (regardless of $x$).} \\
\end{eqnarray*} |
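A numerical cross-check (a midpoint-rule sketch, not from the answer) that the three orderings describe the same region; the common value works out to $1/6$:

```python
def midpoints(a, b, n):
    h = (b - a) / n
    return [(a + (i + 0.5) * h, h) for i in range(n)]

def order_yxz(n):   # y outer, then x in [0, y/2], then z in [0, 2-y]
    return sum(x * hx * hy * hz
               for y, hy in midpoints(0, 2, n)
               for x, hx in midpoints(0, y / 2, n)
               for z, hz in midpoints(0, 2 - y, n))

def order_xzy(n):   # x outer, then z in [0, 2-2x], then y in [2x, 2-z]
    return sum(x * hx * hz * hy
               for x, hx in midpoints(0, 1, n)
               for z, hz in midpoints(0, 2 - 2 * x, n)
               for y, hy in midpoints(2 * x, 2 - z, n))

def order_zyx(n):   # z outer, then y in [0, 2-z], then x in [0, y/2]
    return sum(x * hx * hz * hy
               for z, hz in midpoints(0, 2, n)
               for y, hy in midpoints(0, 2 - z, n)
               for x, hx in midpoints(0, y / 2, n))

vals = [order_yxz(40), order_xzy(40), order_zyx(40)]
```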
Linear Algebra - Quadratic Forms | To find the quadratic form when you have the matrix, all you need to do is perform a couple of vector/matrix multiplications, i.e.
$$Q(\vec{x}) = \vec{x}^T A \vec{x}$$
In your case, $$Q(x, y, z) = \left(\begin{matrix}x & y &z \end{matrix}\right)
\left(\begin{matrix}1 & 3 & 2 \\ 3 & -4 & 3 \\ 2 & 3 & 1 \end{matrix}\right)
\left(\begin{matrix}x \\ y \\z \end{matrix}\right)
$$
To find characteristic polynomial, we need to find determinant of matrix $\lambda I - A$.
$$P(\lambda)=(\lambda-1)^2(\lambda+4)-22\lambda-34$$ The roots of $P(\lambda)=0$ are $-6, -1, 5$. Thus, there is an orthogonal change of coordinates that will transform the original quadratic form into: $$-6x'^2-y'^2+5z'^2=c$$ Now you need to find the extreme points by setting two variables to zero and solving for the remaining variable. After you find these points, determine which one is the closest to the origin. Finally, you need to do reverse transformation from the new coordinate system to the old one. |
indefinite integral of $e^{\frac{-x^2}{1+x}}$ | It doesn't look like there is an elementary integral, but we can find asymptotic forms.
For small $X$
Since $\frac{-x^2}{1+x}=1-x-\frac{1}{1+x}$, I will consider the integral
$$
I(b)=\int\limits_0^b dx \ e^{ -x-\frac{1}{1+x}}
$$
Which differs from the original by a multiplicative constant $e$, and investigate $b \to 0$. Integrating by parts
$$
I(b)=\int\limits_0^b dx \ (1+x)^2 e^{-x} \left[ \frac{d}{dx} e^{-\frac{1}{1+x}}\right]
$$
$$
I(b)=(1+x)^2 e^{ -x-\frac{1}{1+x}} \Bigg\vert_0^b - \int\limits_0^b dx \ (1-x^2) e^{ -x-\frac{1}{1+x}}
$$
The boundary term may be readily evaluated. Within the integral on the right we recognize $I(b)$
$$
2 I(b)=(1+b)^2 e^{ -b-\frac{1}{1+b}}-e^{-1}+\int\limits_0^b x^2e^{ -x-\frac{1}{1+x}}\,dx
$$
For $b \to 0$, the integral on the right is $O(b^3)$ and we have
$$
I(b)\gg\frac{1}{2} \int\limits_0^b x^2e^{ -x-\frac{1}{1+x}}\,dx \ \ , \ \ b \to 0
$$
Therefore
$$
I(b) \sim \frac{1}{2} \left( (1+b)^2 e^{ -b-\frac{1}{1+b}}-e^{-1} \right) \ \ , \ \ b \to 0
$$
Simplifying, I find the leading order term
$$
I(b) \sim \frac{b}{e} \ \ , \ \ b \to 0
$$
In principle, repeated integration by parts could produce the asymptotic series. Here is a plot of the first term versus the numerical integration for small $b$:
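In the same spirit, a numerical check of the leading-order behaviour (a Simpson-rule sketch): the ratio $I(b)/(b/e)$ should approach $1$ as $b\to0$:

```python
import math

def g(x):
    return math.exp(-x - 1 / (1 + x))

def simpson(f, a, b, n=200):
    # composite Simpson's rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

ratios = [simpson(g, 0, b) / (b / math.e) for b in (0.1, 0.01, 0.001)]
```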
For large $X$
I'll use the substitution in the comments: $t=x+1$ to write
$$
e^{-2} \int\limits_0^X \ dx \exp \left(-\frac{x^2}{1+x} \right) =\int\limits_1^a dt \ \exp(-t-t^{-1})
$$
And we're investigating $a \to \infty$. Let $f(t)=\exp(-t-t^{-1})$, rewrite the integral
$$
\int\limits_0^\infty dt \ f(t) = \int\limits_0^1 dt \ f(t) + \int\limits_1^a dt \ f(t) + \int\limits_a^\infty dt \ f(t)
$$
The integral on the left is exactly $2K_1(2)$, where $K$ is a modified Bessel function of the second kind. The first integral on the right is also constant, independent of $a$. We are left with the simpler problem of studying
$$
I(a)=\int\limits_a^\infty dt \ e^{-t-t^{-1}}
$$
For large $a$. Integrate by parts
$$
I(a)=\int\limits_a^\infty dt \ e^{-t^{-1}} \frac{d}{dt} \left[ -e^{-t} \right]
$$
$$
I(a)=-e^{-t-t^{-1}} \Big\vert_a^\infty + \int\limits_a^\infty dt \ \frac{1}{t^2} e^{-t-t^{-1}}
$$
The integral on the right differs from $I(a)$ by a factor of $1/t^2$, thus we have
$$
I(a)\gg \int\limits_a^\infty dt \ \frac{1}{t^2} e^{-t-t^{-1}} \ \ , \ \ a \to \infty
$$
Which leads to
$$
I(a) \sim e^{-a} \ \ , \ \ a \to \infty
$$
Finally,
$$
\int\limits_1^a dt \ \exp(-t-t^{-1}) \sim 2K_1(2) - C - e^{-a} \ \ , \ \ a \to \infty
$$
Where $C=\int_0^1 dt \ f(t)$, and can be found numerically to be about $0.072$; probably there is a nice way to estimate it, but I don't see it right now. Here is a plot of the approximation versus the exact numerical integral:
EDIT: In terms of the incomplete Bessel functions, defined as
$$
K_\nu(x,y)=\int\limits_1^\infty dt \ t^{-\nu-1} \exp(-xt -y/t)
$$
We have for $I(a)$, by changing variables $u = t/a$
$$
I(a)=a \int\limits_1^\infty du \ e^{-au-a^{-1} u^{-1}} = a K_{-1}(a,a^{-1})
$$
And for $C$, by changing variables $u= 1/t$
$$
C= \int\limits_1^\infty du \ u^{-2} e^{-u- u^{-1}} = K_{1}(1,1)
$$
Thus your original integral, with $a=X+1$ may be written
$$
\int\limits_0^X \ dx \exp \left(-\frac{x^2}{1+x} \right) = e^2 \left[ 2K_1(2) - K_{1}(1,1) - a K_{-1}(a,a^{-1}) \right]
$$ |
Why isn't this an homeomorphism? | Your map is a nice example of a continuous bijection which is not a homeomorphism.
You want to show that $\alpha^{-1} : S^1 \to [0,2\pi)$ is not continuous. This is equivalent to showing that $\alpha$ is not an open map (or alternatively, not a closed map).
The set $U = [0,\pi)$ is open in $[0,2\pi)$, but $M = \alpha(U)$ is not open in $S^1$. In fact, $(1,0) \in M$, but $M$ does not contain any open neighborhood of $(1,0)$ because any such set must contain some $U_\epsilon(1,0) \cap S^1$, where $U_\epsilon(1,0)$ denotes the open disk with radius $\epsilon$ and center $(1,0)$. |
Proving that if $s_1$ and $s_2$ are eigenvalues for a $2\times 2$ matrix $A$, then the columns of $A-Is_1$ is a eigenvector for $s_2$ | Well, let's see: since $\;s_1,s_2\;$ are the matrix's eigenvalues, the characteristic polynomial is $\;(x-s_1)(x-s_2)=x^2-(s_1+s_2)x+s_1s_2\;$ , and then the Cayley-Hamilton Theorem tells us
$$(*)\;\;A^2-(s_1+s_2)A+s_1s_2I=0\implies A\color{red}{(A-s_1I)}=A^2-s_1A\stackrel{(*)}=s_2A-s_2s_1I=$$
$$=s_2\color{red}{(A-s_1I)}$$
and we're done. |
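A concrete numerical check (a sketch with a hypothetical $2\times2$ matrix whose eigenvalues are $s_1=5$, $s_2=2$):

```python
A = [[4.0, 2.0], [1.0, 3.0]]   # hypothetical example: eigenvalues 5 and 2
s1, s2 = 5.0, 2.0

B = [[A[i][j] - s1 * (i == j) for j in range(2)] for i in range(2)]   # A - s1*I

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

columns = [[B[0][0], B[1][0]], [B[0][1], B[1][1]]]
# each column v of A - s1*I should satisfy A v = s2 v
errors = [max(abs(a - s2 * b) for a, b in zip(matvec(A, v), v)) for v in columns]
```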
Finding the smallest positive integer $n$ such that $S_n$ contains an element of order 12 | Any minimal-size permutation of a given order must have two properties:
It has no fixed points, which can be removed (both from the permutation and the set it acts on) without changing the order.
It has no pair of cycles with non-coprime lengths; the GCD of the lengths could then be factored out from one of the cycles without changing the order. This means that all multiple prime factors of the target order must be incorporated into one cycle.
Given the factorisation of 12, there are two blocks, the two factors of 2 and the factor of 3. They are either together, in which case a 12-cycle is obtained, or they are apart, in which case a 3-cycle and 4-cycle acting together on 7 letters is obtained. Thus the minimum size of the underlying set is 7. |
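This reasoning can be brute-forced (a sketch): the possible orders of elements of $S_n$ are exactly the lcms of partitions of $n$, so the minimal $n$ for order $12$ is the smallest $n$ having a partition with lcm $12$:

```python
from math import gcd
from functools import reduce

def partitions(n, max_part=None):
    # generate all partitions of n as tuples of parts
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for p in range(min(n, max_part), 0, -1):
        for rest in partitions(n - p, p):
            yield (p,) + rest

def lcm_of(parts):
    return reduce(lambda a, b: a * b // gcd(a, b), parts, 1)

def min_n_with_element_of_order(k):
    n = 1
    while not any(lcm_of(p) == k for p in partitions(n)):
        n += 1
    return n
```

For order $12$ this finds $n=7$, realized by the partition $(4,3)$.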
Find the essential spectrum $\sigma_e(T)$ | I suggest you try to compute the ordinary spectrum first. You can do that by calculating $(T-z)^{-1}$ explicitly. Then for $z$ in the spectrum try to construct an operator $S$ such that $S(T-z)-1$ and $(T-z)S-1$ are compact (you can think about finite rank first; it may be handy in the full solution that compact operator may be approximated by finite rank operators).
There exists also a more direct way which requires additional knowledge. The essential spectrum can be related to the notion of a Fredholm operator and its index. In your case it is fairly easy to figure out when $T-z$ is Fredholm, provided that you know required definitions. |
Proving homeomorphism between surface and $\mathbb{R}^2$ minus Cantor Set | The surface consists of infinitely many pairs of pants sewn together in the way shown: waist line to leg opening. A pair of pants is homeomorphic to the following domain in the plane:
Following the sewing procedure, you should insert a smaller copy of such a domain in each hole, and repeat ad infinitum. Sewing everything together, you get a disk minus a Cantor set. |
Dual of $div$ on spaces where the tangential value is fixed | It is a good idea to start with the standard approach presented, e.g.,
in Ch. 13 of Functional Analysis by Walter Rudin for the case
of one Hilbert space. The same approach is widely employed in the case
of two Hilbert spaces as well. The key idea of this approach is to consider
an unbounded operator
$$
L={\rm div}\colon\;D_L=H_{t,0}\subset\bigl(L^2(\Omega)\bigr)^3
\to L^2(\Omega)\tag{1}
$$
with its dual unbounded operator
$$
L^{\ast}=\nabla\colon\;D_{L^{\ast}}\subset L^2(\Omega)
\to\bigl(L^2(\Omega)\bigr)^3,\tag{2}
$$
the domain $D_{L^{\ast}}$ of which is defined as
$$
D_{L^{\ast}}=\{v\in L^2(\Omega)\colon\; |\Lambda_v(u)|
\leqslant C_v\|u\|_{L^2}\;\forall\, u\in D_L \},
$$
where $\Lambda_v$ denotes the linear functional
$$
\Lambda_v(u)=\int\limits_{\Omega}v\,{\rm div\,}u\,dx\quad\forall\,u\in D_L\,.
$$
Since $\bigl(C_0^{\infty}(\Omega)\bigr)^3\subset H_{t,0}\,$, domain $D_L$ is dense in
$\bigl(L^2(\Omega)\bigr)^3$, and hence there is a continuous linear extension
$\widetilde{\Lambda_v}$ of $\Lambda_v$ from $D_L$ to the whole $\bigl(L^2(\Omega)\bigr)^3$. By the Riesz representation theorem, there is $w\in\bigl(L^2(\Omega)\bigr)^3$ such that a linear continuous functional $\widetilde{\Lambda_v}$ can be represented in the form
$$
\widetilde{\Lambda_v}(u)=\int\limits_{\Omega}w\cdot u\,dx
\quad\forall\, u\in \bigl(L^2(\Omega)\bigr)^3.
$$
Thus we get an integral identity
$$
\int\limits_{\Omega}v\,{\rm div\,}u\,dx=\int\limits_{\Omega}w\cdot u\,dx
\quad\forall\, u\in H_{t,0}\; \Rightarrow \;
\forall\, u\in\bigl(C_0^{\infty}(\Omega)\bigr)^3\tag{3}
$$
with $v\in L^2(\Omega)$ and $w\in\bigl(L^2(\Omega)\bigr)^3$. Taking $u=(\varphi,0,0)$
in $(3)$ we find
$$
\int\limits_{\Omega}v\,\partial_{x_1}\varphi\,dx=\int\limits_{\Omega}w_1\varphi\,dx
\quad\forall\, \varphi\in C_0^{\infty}(\Omega)
$$
whence follows the existence of a weak derivative $\partial_{x_1}v=-w_1$. In a similar
manner, $(3)$ implies the existence of weak derivatives $\partial_{x_j}v=-w_j\,,\,j=2,3$.
Therefore $v\in H^1(\Omega)$ and $w=-\nabla v$, while
$$
\int\limits_{\Omega}v\,{\rm div\,}u\,dx=-\int\limits_{\Omega}\nabla v\cdot u\,dx
\quad\forall\, u\in H_{t,0}
$$
which immediately implies that
$$
\int\limits_{\partial\Omega}v\,u\cdot n\,ds=0\;\, \forall\,u\in H_{t,0}
\; \Rightarrow \;\int\limits_{\partial\Omega}v\,\psi\,ds=0\;\,
\forall\,\psi\in H^1(\Omega)\; \Rightarrow \;v|_{\partial\Omega}=0
$$
whence follows $v\in H^1_0(\Omega)$, i.e., domain $D_{L^{\ast}}=H^1_0(\Omega)$.
Such dual operator $L^{\ast}$ is sometimes referred to as a
"Hilbert space adjoint" of $L$. Now it is clear that for the operator
$L^{\ast\ast}=(L^{\ast})^{\ast}$, its domain $D_{L^{\ast\ast}}$ does contain
a Sobolev space $H^1(\Omega)$. More precisely, by definition,
$$
D_{L^{\ast\ast}}=\{u\in \bigl(L^2(\Omega)\bigr)^3:\,|\lambda_u(v)|
\leqslant c_u\|v\|_{L^2}\;\;\forall\,v\in D_{L^{\ast}}\}
$$
where $D_{L^{\ast}}=H^1_0(\Omega)$ while $\lambda_u$ denotes the linear functional
$$
\lambda_u(v)=\int\limits_{\Omega}u\cdot\nabla v\,dx
\quad\forall\,v\in D_{L^{\ast}}\,.
$$
Again, there is a continuous linear extension $\widetilde{\lambda_u}$ of $\lambda_u$ from $D_{L^{\ast}}$ to the whole $L^2(\Omega)$. By the Riesz representation theorem, there is $f\in L^2(\Omega)$ such that a linear continuous functional $\widetilde{\lambda_u}$ can be represented in the form
$$
\widetilde{\lambda_u}(v)=\int\limits_{\Omega}f\,v\,dx
\quad\forall\, v\in L^2(\Omega).
$$
Thus we get an integral identity
$$
\int\limits_{\Omega}u\cdot\nabla v\,dx=\int\limits_{\Omega}f\,v\,dx
\quad\forall\, v\in H^1_0(\Omega)\; \Rightarrow \;
\forall\, v\in C_0^{\infty}(\Omega)\tag{4}
$$
with $u\in\bigl(L^2(\Omega)\bigr)^3$ and $f\in L^2(\Omega)$, whence follows the existence of a weak divergence ${\rm div\,}u=-f$, as they say "in distribution sense".
The latter implies that $D_{L^{\ast\ast}}=H({\rm div};\Omega)$ where
$$
H({\rm div};\Omega)\overset{\rm def}{=}
\{u\in \bigl(L^2(\Omega)\bigr)^3\colon\;{\rm div\,}u\in L^2(\Omega)\}
$$
with a weak divergence in the sense $(4)$. The double adjoint $L^{\ast\ast}$ is known to coincide with the closure of $L$, i.e., with the minimal closed extension of $L$. Thus we conclude that the unbounded operator $(1)$ is not closed, while its closure is devoid of any prescribed boundary conditions.
And of course, this cannot be the case with the operator
$$
L={\rm div}\colon\;D_L=H_{n,0}\subset\bigl(L^2(\Omega)\bigr)^3
\to L^2(\Omega),
$$
the closure of which $\widetilde{L}$ has a domain
$$
D_{\widetilde{L}}=\{u\in H({\rm div};\Omega)\colon\;(u\cdot n)|_{\partial\Omega}=0\}
$$
where for every element $u\in H({\rm div};\Omega)$, its normal component $u\cdot n$ does possess a trace $(u\cdot n)|_{\partial\Omega}\in H^{-1/2}(\partial\Omega)$ — for details see, e.g., p. 129 in An Introduction to Navier-Stokes Equation and Oceanography by Luc Tartar. |
Axiomatic Foundations | Mathematics does not exist in a vacuum. It is strongly related, via applications, to the world around us. Mathematicians choose axioms according to what works well when we try to use the insights and results flowing from these axioms to better understand problems (usually from science) that we care about.
To draw an analogy with painting, a painter can surely mix colours in endless combinations and spread paint on canvas in equally endless possibilities. But, artists don't just randomly spread paint on canvas. The reason is that their art does not exist in a vacuum. It is strongly related to human culture, the physical world around us, and the predispositions of the human brain. These dictate what is considered good art, and so guide the artist in the creation of a good painting. |
Map from a square to a triangle | The corners of $[0,1]^2$ can be characterized by the metric: They are the only points such that there exists exactly one other point at the maximal distance $\sqrt 2$. Via an isometry, $T$ would also have to have exactly four points with that property. At least one of them would have to be a non-vertex. But then its partner point at distance $\sqrt2$ would certainly have points at distance $>\sqrt 2$. |
Example of the determination of a dual space | Suppose $T : V \to W$ is a linear transformation between finite-dimensional vector spaces. Then the image $\text{im}(T)$ is a subspace of $W$, and accordingly the dual of the image is naturally a quotient space of $W^{\ast}$.
Furthermore, suppose $V$ and $W$ are inner product spaces. Then we can identify the duals of subspaces of $W$ with their orthogonal complements, hence we can think of the "dual of $\text{im}(T)$" as the orthogonal complement $\text{im}(T)^{\perp} \subseteq W$.
An alternative characterization of this orthogonal complement is that it is the kernel $\text{ker}(T^{\dagger})$ of the adjoint of $T$. Concretely: the orthogonal complement of the column space of a real matrix $M$ is the null space of its transpose $M^T$. It's a nice exercise to show this. |
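A tiny concrete check of the exercise (a sketch with a hypothetical $3\times2$ matrix): the entries of $M^Ty$ are exactly the inner products of $y$ with the columns of $M$, so $M^Ty=0$ iff $y$ is orthogonal to the column space:

```python
M = [[1.0, 0.0],
     [2.0, 1.0],
     [0.0, 3.0]]          # hypothetical example matrix
y = [6.0, -3.0, 1.0]      # chosen so that M^T y = 0

# j-th entry of M^T y equals <y, j-th column of M>
MTy = [sum(M[i][j] * y[i] for i in range(3)) for j in range(2)]
```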
What is the closure of an open ball $B_X(\mathbf{a},r)$ in $X=\mathbb{R}^n$? | To show $\overline U \subseteq \overline B(\mathbf a,r)$ it suffices to show
$$\mathbf x\notin \overline B(\mathbf a,r) \implies \mathbf x\notin \overline U.$$
If $\mathbf x\notin \overline B(\mathbf a,r)$ then we have $d(\mathbf a,\mathbf x)>r$.
If we choose $0<\delta<d(\mathbf a,\mathbf x)-r$ then we can show (using triangle inequality) that
$$B(\mathbf x,\delta) \cap B(\mathbf a,r) = \emptyset.$$
This means that $\mathbf x$ has a neighborhood which does not intersect $U$ and, consequently, $\mathbf x$ does not belong to the closure of $U$.
A different argument to see this inclusion: Since the closed ball $\overline B(\mathbf a,r)$ is closed set and $U\subseteq\overline B(\mathbf a,r)$, we also have $\overline U\subseteq\overline B(\mathbf a,r)$.
Now let us have a look at $\overline B(\mathbf a,r)\subseteq \overline U$, i.e. that every point of the closed ball belongs to the closure.
If $d(\mathbf a,\mathbf x)<r$, then $x\in U$ and thus $x\in\overline U$. It only remains to look at the points such that $d(\mathbf x,\mathbf a)=r$.
Notice that for any $\alpha$ we can find a point $\mathbf p_\alpha= \alpha \mathbf a + (1-\alpha) \mathbf x$ such that $d(\mathbf a,\mathbf p_\alpha)=(1-\alpha)r$ and $d(\mathbf x,\mathbf p_\alpha)=\alpha r$. (This is the first time we are using that we work with the Euclidean metric. It might also be helpful if you try to draw a picture. You should see that for $\alpha\in(0,1)$ we get exactly the points on the line segment between $\mathbf a$ and $\mathbf x$.)
Indeed, if the $i$-th coordinate of $\mathbf p_\alpha$ is $p_i=\alpha a_i+(1-\alpha) x_i$
$$d(\mathbf x,\mathbf p_\alpha) = \sqrt{\sum_{i=1}^n (p_i-x_i)^2} = \sqrt{\sum_{i=1}^n \alpha^2(a_i-x_i)^2} = \alpha \sqrt{\sum_{i=1}^n (a_i-x_i)^2} = \alpha d(\mathbf x,\mathbf a).$$
The other equality is shown similarly.
So if we take $\mathbf x_n=\mathbf p_{\frac1n}$, then this is a sequence of points from $U$ which converges to $\mathbf x$.
Maybe it is useful to mention that this proof may become a bit clearer if you rewrite it using the norm $\|\mathbf v\|_2=\sqrt{v_1^2+\dots+v_n^2}$ rather than the metric $d_2(\mathbf x,\mathbf a)=\|\mathbf x-\mathbf a\|_2$. (But I wrote this down using metric, since I do not know whether you are familiar with the notion of norm.) |
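A quick numerical check of the key computation (a sketch in $\mathbb{R}^3$ with hypothetical points):

```python
import math

def dist(u, v):
    return math.sqrt(sum((a - b)**2 for a, b in zip(u, v)))

a = [0.0, 0.0, 1.0]     # hypothetical center
x = [3.0, 4.0, 1.0]     # a boundary point: d(a, x) = 5 = r
r = dist(a, x)

# p_alpha = alpha*a + (1-alpha)*x should satisfy
# d(a, p_alpha) = (1-alpha)*r  and  d(x, p_alpha) = alpha*r
deviations = []
for alpha in (0.1, 0.25, 0.5, 0.9):
    p = [alpha * ai + (1 - alpha) * xi for ai, xi in zip(a, x)]
    deviations.append(max(abs(dist(a, p) - (1 - alpha) * r),
                          abs(dist(x, p) - alpha * r)))
```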
Lebesgue sigma algebra | Yes, a $\sigma$-algebra has properties! Let
$ \mathcal{L}= \{E \subseteq \mathbb R: m^*(A)=m^*(A\cap E)+m^*(A\cap E^c) \quad \forall A \subseteq \mathbb R\}.$
$\mathcal{L}$ has the following properties (try a proof):
$ \mathbb R \in \mathcal{L}$;
$E\in \mathcal{L}$ implies $\mathbb R \setminus E \in \mathcal{L}$;
if $(E_j)$ is a sequence in $ \mathcal{L}$, then $\bigcup E_j \in \mathcal{L}.$
This shows that $\mathcal{L}$ is a $\sigma$-algebra. |
Complete/Connected space | The most straightforward approach is to figure out what topology on $\Bbb R^2$ is generated by $d$.
HINT: If $x\ne\langle 0,0\rangle$, and $0<r\le\|x\|$, what is $B(x,r)$? And what is $B(\langle 0,0\rangle,r)$? You should try to figure this out on your own, but I left a description, without proof, in a spoiler box in my answer to your previous question about this metric.
Once you have that, showing that the space is not connected is very easy. To show that it is complete, you need to figure out what the Cauchy sequences are.
HINT: Every Cauchy sequence in $\left\langle\Bbb R^2,d\right\rangle$ either converges to the origin or is eventually constant. |