How to solve "distance word problems" using quadratic equations | Using the formula $t=d/v$, you can write down two equations from the statements in the problem. The first says that by combining their speeds, the two pedestrians cover the 76 km in six hours: $${76 \text{ km}\over v_1+v_2} = 6\text{ hr}\cdot60\text{ min/hr}.$$ I’m measuring the speeds in km/min, so the time is converted into minutes. The second fact is that it takes the second pedestrian one more minute than the first to cover 1 km, so you have $$\frac1{v_1}+1=\frac1{v_2}.$$ Solve the two equations for $v_1$ and $v_2$ and then compute $1/v_1$ and $1/v_2=1/v_1+1$, or substitute $t_1=1/v_1$ and $t_2=1/v_2$ into the two equations and solve for the times directly. There will be two solutions to this system of equations, but one of them doesn’t make physical sense for this problem, so that one will be rejected. |
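The substitution $t_1=1/v_1$, $t_2=t_1+1$ reduces the system to a single quadratic; a quick numerical sanity check (a sketch in Python, assuming the 76 km / 6 hr / 1 min numbers above):

```python
import math

# Combined rate: v1 + v2 = 76/360 km/min (76 km in 6 hr = 360 min).
# With t1 = 1/v1 and t2 = t1 + 1 (minutes per km), this becomes
# 1/t1 + 1/(t1 + 1) = 76/360, i.e. 19*t1^2 - 161*t1 - 90 = 0.
a, b, c = 19.0, -161.0, -90.0
disc = b * b - 4 * a * c
roots = [(-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1)]
t1 = max(roots)                    # the negative root is rejected
t2 = t1 + 1
minutes = 76 / (1 / t1 + 1 / t2)   # trip time at the combined speed
print(t1, t2, minutes)
```

With these numbers the times come out to integers: $t_1=9$ and $t_2=10$ minutes per km, and the combined trip indeed takes $360$ minutes.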
Sign of a Particular Permutation | A matrix of the form
$$
{\displaystyle C={\begin{bmatrix}c_{0}&c_{n-1}&\dots &c_{2}&c_{1}\\c_{1}&c_{0}&c_{n-1}&&c_{2}\\\vdots &c_{1}&c_{0}&\ddots &\vdots \\c_{n-2}&&\ddots &\ddots &c_{n-1}\\c_{n-1}&c_{n-2}&\dots &c_{1}&c_{0}\\\end{bmatrix}}.}
$$
is called circulant. Obviously, $A=\begin{pmatrix}0 & I_{n-k} \\ I_{k+1} & 0\end{pmatrix}$ is circulant with $c_j =\begin{cases}1:& j=n-k\\0:&\text{else}\end{cases}$
It satisfies the determinant formula
$$ {\displaystyle \det(C)=\prod _{j=0}^{n-1}(c_{0}+c_{1}\omega _{j}+c_{2}\omega _{j}^{2}+\dots +c_{n-1}\omega _{j}^{n-1})=\prod _{j=0}^{n-1}f(\omega _{j}).}$$
where ${\displaystyle f(x)=c_{0}+c_{1}x+\dots +c_{n-1}x^{n-1}}$ and ${\displaystyle \omega _{j}=\exp \left(i{\tfrac {2\pi j}{n}}\right)}$ are the n-th roots of unity. Hence (mind the index shift $n\to n+1$):
$$\det(A) = \prod _{j=0}^{n} \exp \left(i{\tfrac {2\pi j}{n+1}}\right)^{n-k} = \exp\Big(2\pi i\frac{n-k}{n+1}\sum_{j=0}^{n}j \Big)
=\exp\Big(2\pi i\frac{n-k}{n+1}\frac{n(n+1)}{2}
\Big)$$
hence $\det(A) = \exp\Big(i\pi n(n-k)\Big) = \begin{cases}+1:&n(n-k) \text{ even} \\-1:&n(n-k)\text{ odd}\end{cases} = \begin{cases}-1:&n\text{ odd and } k \text{ even} \\+1:&\text{otherwise}\end{cases} $ |
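As a sanity check on the sign formula: $A$ sends $e_j\mapsto e_{(j+n-k)\bmod(n+1)}$, so $\det(A)$ is the sign of that cyclic shift, which can be compared against the case table by counting inversions (a sketch in Python):

```python
def cyclic_shift_sign(m, s):
    """Sign of the permutation i -> (i + s) mod m, via inversion count."""
    p = [(i + s) % m for i in range(m)]
    inversions = sum(1 for i in range(m) for j in range(i + 1, m) if p[i] > p[j])
    return -1 if inversions % 2 else 1

# det(A) = exp(i*pi*n*(n-k)): equals -1 exactly when n is odd and k is even
for n in range(1, 9):
    for k in range(n):
        expected = -1 if (n % 2 == 1 and k % 2 == 0) else 1
        assert cyclic_shift_sign(n + 1, n - k) == expected
print("case table verified")
```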
$\sigma$-algebra Generated by a Random Variable contains all of its null sets | It is not in general true that $\sigma(X)$ contains all of its null sets.
E.g. start with probability space $([0,1],\mathcal A,m)$ where $\mathcal A$ is the collection of Lebesgue measurable subsets of $[0,1]$ and $m$ denotes the Lebesgue measure restricted to $\mathcal A$.
Let random variable $X:[0,1]\to\mathbb R$ be prescribed by $x\mapsto x$.
Then $\sigma(X)$ is the collection of Borel subsets of $[0,1]$.
This collection does not contain all of its null sets: the Cantor set is a Borel set of measure zero with $2^{\mathfrak c}$ subsets, while there are only $\mathfrak c$ Borel sets, so some null set fails to be Borel. |
Interchanging Duals and Metrics for Riemann Curvature Definition | By the definition of the wedge product we have $\sigma(e_1,e_2)=e_1^*(e_1)e_2^*(e_2)-e_2^*(e_1)e_1^*(e_2)=1$; so just plugging this frame in to the equation $K\, \sigma \otimes \sigma = R$ yields the explicit formula $$R(e_1,e_2,e_1,e_2)=K \sigma(e_1,e_2)\sigma(e_1,e_2) = K.$$ Since $e_i$ is an orthonormal frame, this is indeed the sectional curvature; and also $\sigma$ is the Riemannian area form up to sign. (As an aside, unless your surface is genus $1$ there is no global orthonormal frame; so the equation $\sigma = e_1^* \wedge e_2^*$ should be understood locally.)
If you think of $R$ as a symmetric bilinear form on bivectors then this can be written $$K = R(e_1 \wedge e_2, e_1 \wedge e_2) = R(\sigma^*, \sigma^*).$$
Here the metric dual relating bivectors and two-forms can be simply defined by $v\wedge w \mapsto v^* \wedge w^*,$ or alternatively by $$(v \wedge w)^* = g_2(v \wedge w,\cdot)$$ where $$g_2(v\wedge w, x \wedge y) = g(v,x)g(w,y) - g(v,y)g(w,x)$$ is the inner product on bivectors induced by $g$.
Your proposed equation for $K\sigma$ is what we get by only plugging in the frame once instead of twice: $$R(e_1,e_2,X,Y) = K\sigma(e_1,e_2)\sigma(X,Y) = K\sigma(X,Y),$$ which in terms of bivectors becomes $$K \sigma = R(e_1\wedge e_2, \cdot).$$ Your $\omega$ is really the same thing as $R$, and the natural pairing is the same thing as the partial evaluation I have written. |
Number of ways to write a given even number as a sum of two odd numbers | Just consider that one number is one of $3,5,\dots,2\cdot k -3$, and the other one is forced by the choice of the first. Also, there is no difference between $(3,\,2k-3)$ and $(2k-3,\,3)$, so in total there are $\lceil{2k-3 - 3\over 4}\rceil + 1$ ways. Ok, I see the question is slightly different: the upper bound $m+1$ is given, so the formula can be changed to:
$\lceil{min(2k-3,\lceil {m+1 \over 2} \rceil \cdot 2 - 1) - 3 \over 4}\rceil + 1$ |
Does the existence of weak derivatives require the lower order derivatives also to exist? | That is correct. Here are some examples where a higher order weak derivative exists but a lower order weak derivative does not exist:
(p.15 remark 2.19(ii)) http://bolzano.iam.uni-bonn.de/~beck/seltop/topics_pde.pdf
http://math.7starsea.com/post/308
Crostul mentioned that if ALL weak derivatives of order $n$ exist, then lower order weak derivatives exist. Crostul, do you (or anyone else) have a reference for this? If this is true, then for functions of a single variable, higher order weak differentiability would imply lower order weak differentiability. |
How to find the area using integrals? | The area equals
$$\int_0^{\frac{\pi}4}\int_{-\cos(x)}^{\sin(x)}\ dy\ dx=\int_0^{\frac{\pi}4}\sin(x)+\cos(x)\ dx=$$
$$=\left[-\cos(x)\right]_{0}^{\frac{\pi}4}+\left[\sin(x)\right]_{0}^{\frac{\pi}4}=$$
$$=-\frac1{\sqrt 2}+1+\frac1{\sqrt 2}-0=1.$$ |
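A quick numerical cross-check of the antiderivative computation (a sketch in Python, using a midpoint rule):

```python
import math

# Midpoint-rule estimate of the integral of sin(x) + cos(x) over [0, pi/4],
# which the antiderivative computation above says equals exactly 1.
N = 100000
h = (math.pi / 4) / N
total = sum(math.sin((i + 0.5) * h) + math.cos((i + 0.5) * h)
            for i in range(N)) * h
print(round(total, 6))  # 1.0
```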
Differentiability at an end point of an open interval. | Not a hard problem. I say that because the hypotheses are very close to the conclusion ... provided, however, it occurs to you to use the mean-value theorem. The only other reason why you couldn't handle it is because you need more experience working with limits. In any case after you write up a solution do reflect for a while on why it caused some difficulty! A two-step problem shouldn't. No shame, but you do need to figure out why routine problems are causing you any grief. Things do get more complicated later on.
Here are the pieces of the puzzle that merely need to be assembled:
The definition of $f'(a)$ is $$f'(a) = \lim_{x\to a+} \frac{f(x)-f(a)}{x-a},$$
this because it is only the right-hand derivative at $a$ that can be
considered.
The mean value theorem says that $\frac{f(x)-f(a)}{x-a}= f'(\xi)$ for
some value $\xi$ between $a$ and $x$.
The hypothesis that $\lim_{x\rightarrow a^+}f'(x)=L$ says that
$f'(\xi)$ is as close as you please to $L$ if $\xi$ is close enough
to $a$.
Another poster suggested the mean-value theorem. Let's think about why. The connection between a function $f$ and its derivative $f'$ in one direction (i.e., obtaining information about $f'$ from $f$) is the definition itself; in the other direction (i.e., obtaining information about $f$ from $f'$) it is, for calculus students, the mean-value theorem. There are more sophisticated tools that you will learn later on, but for novices the mean-value theorem should always come immediately to mind. |
why $m^*(A)\leq m^*(A \cap E^c) + m^*(A\cap E)$ | You have
$$
m^*(A)
= m^*((A \cap E) \cup (A \cap E^C))
\leq m^*(A \cap E) + m^*(A \cap E^C)
$$
by the (finite) subadditivity of the outer measure $m^*$. |
Homogeneous ODE: $x' + 2x + 2x'' + x' = 0$ | Given the following ODE:
\begin{align}x' + 2x + 2x'' + x' &=0 \\ \therefore 2x''+2x'+2x&=0 \\ \therefore x''+x'+x&=0 \end{align}
We can find the characteristic equation
\begin{align}a^2 + a +1 &=0\end{align}
Which using the quadratic formula, gives us the following roots
\begin{align}a&= \frac{-1\pm\sqrt{-3}}{2} \\ &=-\frac{1}{2} \pm \frac{\sqrt{3}}{2}i\end{align}
Thus our general solution $x(t)$ is given by
\begin{align}x(t) &= e^{-\frac{1}{2}t}\bigg(c_1\cos{\Big(\tfrac{\sqrt{3}}{2}t\Big)}+c_2\sin{\Big(\tfrac{\sqrt{3}}{2}t\Big)}\bigg), \ c_1,c_2 \in \mathbb{R}\end{align}
Just an extra note: if the given ODE were non-homogeneous, the above would only be your complementary solution $x_c$. You would then have to find a particular solution $x_p$ as well and sum them to get the general solution $x = x_c + x_p$ |
How to prove that a function is $C^{\infty}$? | By induction show that for $t>0$ we have that $h^{(n)}(t)$ is a polynomial in $\frac1t$ times $\exp(-1/t)$ (and of course $h^{(n)}(t)=0$ for $t<0$). Conclude from this fact (for $n-1$) that $h^{(n)}(0)=0$ and $h^{(n)}$ is continuous.
In other words, show that there exists polynomials $f_n(X)\in\mathbb R[X]$, $n\in\mathbb N_0$ such that
$$ h^{(n)}(t)=\begin{cases}0&\text{if }t\le 0\\f_n(\tfrac 1t)\exp(-\tfrac 1t)&\text{if }t>0.\end{cases}$$ |
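A small numerical illustration of the first induction step (a sketch in Python; $f_1(X)=X^2$, since $h'(t)=t^{-2}e^{-1/t}$ for $t>0$):

```python
import math

def h(t):
    return math.exp(-1.0 / t) if t > 0 else 0.0

# Difference quotients at 0 shrink toward 0 (so h'(0) = 0 exists) ...
q = [h(t) / t for t in (1e-1, 1e-2, 1e-3)]

# ... and for t > 0 a central difference matches f_1(1/t) * exp(-1/t)
# with f_1(X) = X^2.
err = []
for t in (0.5, 1.0, 2.0):
    numeric = (h(t + 1e-6) - h(t - 1e-6)) / 2e-6
    exact = (1.0 / t**2) * math.exp(-1.0 / t)
    err.append(abs(numeric - exact))
print(max(q), max(err))
```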
Elementary row operation and change of matrix? | By definition, when you permute two rows $i,j$ of a matrix $A$ and the result is a matrix $B$, then you have
$$B=P_{i,j}^{\sigma}·A$$
where $P_{i,j}^{\sigma}$ is the permutation matrix of rows, i.e. the identity matrix $I$ with the two rows $i, j$ of the identity are permuted. |
Real Analysis proof, show there is a limit and find it. | You have
$$
x_{n+1} = \sqrt{3+x_n}
$$
so that
$$
x_{n+1}^2 = 3+x_n.
$$
If the sequence converges to a limit $\ell\in\mathbb{R}$, then $\ell$ must satisfy (by continuity)
$$
\ell^2 = 3+\ell
$$
so the only possible values for $\ell$ are $\frac{1\pm\sqrt{13}}{2}$. Since the limit of a positive sequence has to be non-negative, if the sequence has a limit then this limit is $\ell=\frac{1+\sqrt{13}}{2}$.
Now, even if this does not seem to be requested by the question, let us also prove there is such a limit (that is, that the sequence converges).
We can easily see that $x_n \geq x_{n-1} \geq \sqrt{3}$ for all $n\geq 2$ (e.g., by induction, from the recurrence relation: it holds that $x_2 \geq \sqrt{3} = x_1$, and if $x_n\geq x_{n-1}$ then $3+x_n \geq 3+x_{n-1}$, so taking square roots gives $x_{n+1}\geq x_n$).
We can show that it is bounded above by, say, $3$. Again, by induction: if $x_n \leq 3$, then $x_{n+1} = \sqrt{3+x_n} \leq \sqrt{3+3} < 3$.
A bounded non-decreasing sequence converges. |
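The monotonicity, boundedness, and the limit $\frac{1+\sqrt{13}}2\approx 2.3028$ can all be observed numerically (a sketch in Python):

```python
import math

x = math.sqrt(3)                   # x_1
for _ in range(80):
    x_next = math.sqrt(3 + x)
    assert x_next >= x - 1e-15     # non-decreasing (up to float noise)
    assert x_next < 3              # bounded above by 3
    x = x_next

limit = (1 + math.sqrt(13)) / 2
gap = abs(x - limit)
print(x, gap)
```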
Prove that $\lim_{N \rightarrow\infty}{\sum_{n=1}^N{\frac{\lvert\mu(n)\rvert}{n}}}=\infty$ | We make a very crude estimate of the proportion of square-free numbers in the interval $[2^N+1,2^{N+1}]$.
Note that $\le \frac{1}{4}$ of these numbers are divisible by $4$, and $\le \frac{1}{9}$ are divisible by $9$, and $\le \frac{1}{25}$ are divisible by $25$, and so on.
There is overlap, but even if we don't take account of that, the proportion of the numbers in the interval divisible by a square $\gt 1$ is
$$\le \frac{1}{4}+\frac{1}{9}+\frac{1}{25}+\frac{1}{49}\cdots.\tag{1}$$
We can find an upper bound for Sum (1), by using $\frac{1}{4}$ plus
$$\frac{1}{9}+\frac{1}{16}+\frac{1}{25}+\frac{1}{36}+\cdots,$$
which is less than
$$\frac{1}{2\cdot 3}+\frac{1}{3\cdot 4}+\frac{1}{4\cdot 5}+\frac{1}{5\cdot 6}+\cdots.\tag{2}$$
But Sum (2) is a telescoping series with sum $\frac{1}{2}$. That is because
$\frac{1}{2\cdot 3}=\frac{1}{2}-\frac{1}{3}$, and $\frac{1}{3\cdot 4}=\frac{1}{3}-\frac{1}{4}$, and so on.
When we add back the $\frac{1}{4}$ that we left out, we find that Sum (1) is $\le \frac{1}{4}+\frac{1}{2}=\frac{3}{4}$. So the proportion of numbers in $[2^N+1,2^{N+1}]$ that are square-free is at least $\frac{1}{4}$.
Thus, since there are $2^N$ numbers in the interval, and the reciprocal of each is $\ge \frac{1}{2^{N+1}}$, we have
$$\sum_{2^N+1}^{2^{N+1}}\frac{|\mu(n)|}{n}\ge 2^N\frac{1}{4}\frac{1}{2^{N+1}}=\frac{1}{8}.$$
Thus each of our infinitely many intervals makes a contribution of at least $\frac{1}{8}$ to the sum, and therefore our sum diverges. |
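The two bounds are easy to spot-check for small $N$ (a sketch in Python; trial division is enough at this size):

```python
def squarefree(n):
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

# In each interval (2^N, 2^(N+1)] at least 1/4 of the numbers should be
# square-free, so each interval contributes at least 1/8 to the sum.
proportions, contributions = [], []
for N in range(1, 14):
    lo, hi = 2**N + 1, 2**(N + 1)
    sf = [n for n in range(lo, hi + 1) if squarefree(n)]
    proportions.append(len(sf) / 2**N)
    contributions.append(sum(1.0 / n for n in sf))
print(min(proportions), min(contributions))
```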
Characterizations of simple connectivity of subsets of $\mathbb{C}$ | Let $G$ be nonempty, open and connected.
(1) Assume $G$ is simply connected.
Assume $\mathbb C\setminus G=K\cup A$, $K$ compact, $A$ closed, $K\cap A=\emptyset$.
Note that $A\cup\{\infty\}$ is closed in $\overline{\mathbb C}$ and by compactness $K$ is closed in $\overline{\mathbb C}$. Thus via $\overline{\mathbb C}\setminus G=(A\cup\{\infty\})\cup K$ we have written the complement of $G$ as a disjoint union of two closed sets. By connectedness of $\overline{\mathbb C}\setminus G$, one of them must be empty, and that can only mean $K=\emptyset$.
(2) Assume property 1 holds.
Assume $B$ is a (nonempty) bounded component of $\mathbb C\setminus G$.
Then the closure $K$ of $B$ is compact and is still disjoint from $G$.
Since $B$ is a component, $A:=\mathbb C\setminus G\setminus K$ is closed and we have $A\cap K=\emptyset$. By property 1, $K=\emptyset$, contradiction.
(3) Assume property 2 holds.
Write $\overline{\mathbb C}\setminus G=A\cup A'$ with $A,A'$ closed and disjoint. Wlog. $\infty\in A'$. Then $A$ is bounded. As $\mathbb C\setminus G$ has no bounded components, $A$ must be empty. Hence $\overline{\mathbb C}\setminus G$ is connected. |
Connection between universal Quantifier and implication | $\forall x \in D \colon P(x)$ is simply an abbreviation for the formula $\forall x ( x \in D \implies P(x))$ -- there really isn't more to it. You will find this convention in any decent textbook that covers the basics of first order logic.
And you can't replace it by $x \in D \implies P(x)$ for the simple reason that the latter is not a sentence -- it has a free (unquantified) variable, namely $x$. Hence it's not equivalent to the sentence $\forall x \in D \colon P(x)$. |
How do I solve this quadratic-intersection question? | Hint: Your idea of using the discriminant is good. Substitute for $y$ in the quadratic using the equation of the line. If there are solution(s) for $x$, there is an intersection, so to rule that out, require the discriminant to be negative.
--
Details: We need to ensure there are no solutions for $2x+2 = 2x^2+kx+9$. $\iff 2x^2+(k-2)x+7 \neq 0 \iff (k-2)^2<4\cdot2\cdot7$ $\iff |k-2|<2\sqrt{14} \iff k \in (2-2\sqrt{14}, 2+2\sqrt{14})$. |
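A quick check of the resulting interval (a sketch in Python; `disc` is a hypothetical helper for the discriminant of $2x^2+(k-2)x+7$):

```python
import math

def disc(k):
    # discriminant of 2x^2 + (k-2)x + 7 = 0, from 2x+2 = 2x^2 + kx + 9
    return (k - 2) ** 2 - 4 * 2 * 7

k_lo, k_hi = 2 - 2 * math.sqrt(14), 2 + 2 * math.sqrt(14)
inside = disc((k_lo + k_hi) / 2)                 # negative: no intersection
outside = (disc(k_lo - 0.1), disc(k_hi + 0.1))   # positive: intersection
print(inside, outside)
```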
Does $ M$ which is diffeomorphic to torus have vanishing $K$? | A smooth isometric embedding ($C^\infty$) of the flat torus in $\mathbb{R}^3$ is not possible, but according to the Nash-embedding-theorem it is possible to have a $C^1$ map. See this Math overflow question . |
Given $\phi (u)=\sin u, u\in E$, and $f(t)=(t^2,\sqrt{t})$, find an interval $E$ such that $f\circ \phi$ is defined and differentiable on $E$ | For $(f\circ\phi)(u)=(\sin(u)^2,\sqrt{\sin(u)})$ to be defined, both $(\sin(u))^2$ and $\sqrt{\sin(u)}$ need to be defined. The problem is of course with $\sqrt{\sin(u)}$, which is defined if and only if $\sin(u)\geq0$. Can you find an interval on which $\sin(u)\geq0$? Note that there is more than one correct answer. |
Prove sum of $k^2$ using $k^3$ | Hint: take the sum for $k=1$ to $n$ of both sides of the equation $(k+1)^3 - k^3 = 3k^2 + 3k + 1$. |
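The hint telescopes to $(n+1)^3-1 = 3\sum_{k=1}^n k^2 + 3\cdot\frac{n(n+1)}2 + n$, which solves to $\sum_{k=1}^n k^2=\frac{n(n+1)(2n+1)}6$; a brute-force check (a sketch in Python):

```python
# Summing (k+1)^3 - k^3 = 3k^2 + 3k + 1 over k = 1..n telescopes to
# (n+1)^3 - 1 = 3*S + 3*n(n+1)/2 + n, giving S = n(n+1)(2n+1)/6.
def sum_of_squares(n):
    return n * (n + 1) * (2 * n + 1) // 6

for n in range(1, 200):
    S = sum(k * k for k in range(1, n + 1))
    assert S == sum_of_squares(n)
    assert (n + 1) ** 3 - 1 == 3 * S + 3 * (n * (n + 1) // 2) + n
print("identity verified")
```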
Second order non-homogeneous differential equation where g(x) is a constant - how to determine particular solution? | $$y''+\frac{1}{x}y'=1$$
$$x^2y''+xy'=x^2$$
It's a Cauchy–Euler equation: you can transform it to a DE with constant coefficients and then apply undetermined coefficients (substitute $x=e^t$).
$$\implies y''(t)=e^{2t}$$
Then try $y_p(t)=Ae^{2t}$ or integrate directly
Another method:
$$x^2y''+xy'=x^2$$
try $y=x^m$ for the homogeneous equation
$$m(m-1)+m=0 \implies m=0$$
$$ \implies y=c_1+c_2 \ln x$$
For the particular solution try $y_p(x)=Ax^2$ |
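Substituting $y_p=Ax^2$ gives $4Ax^2=x^2$, so $A=\frac14$; a finite-difference spot-check of the full solution (a sketch in Python, with arbitrary constants $c_1,c_2$):

```python
import math

# Check that y(x) = c1 + c2*ln(x) + x^2/4 solves x^2 y'' + x y' = x^2
# (the particular solution y_p = A x^2 gives 4A x^2 = x^2, so A = 1/4).
def y(x, c1=2.0, c2=-3.0):       # arbitrary constants for the check
    return c1 + c2 * math.log(x) + x * x / 4

h = 1e-4
residuals = []
for x in (0.5, 1.0, 2.0, 5.0):
    yp = (y(x + h) - y(x - h)) / (2 * h)              # y'(x), central diff
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / (h * h)  # y''(x)
    residuals.append(abs(x * x * ypp + x * yp - x * x))
print(max(residuals))
```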
Trying to understand a proof for the automorphisms of a polynomial ring | I claim that the set of automorphisms of $\mathbb{Z}[x]$ are the ring homomorphisms $\phi$ that satisfy $\phi(x) = \phi_0+\phi_1 x$ where $\phi_0 \in \mathbb{Z}$ is arbitrary and $\phi_1 \in \{\pm 1\}$.
Suppose $\phi$ is an automorphism. As above, we have $\phi(n) = n$ for $n \in \mathbb{Z}$. Since $\phi$ is surjective, we can find a preimage of $x$.
Then for some $p_k$ we have $\phi(\sum_{k=0}^n p_k x^k) = x$ (with $p_n \neq 0$). That is, $\sum_{k=0}^n p_k \phi(x)^k = x$. It follows that we must have $\partial \phi(x) \ge 1$, and since $p_n \neq 0$, we must have $\partial \phi(x) = 1$ (and
also, $n=1$).
Hence $\phi(x)$ has the form $\phi(x) = \phi_0+\phi_1 x$ which
gives $p_0 + p_1 \phi_0 + p_1 \phi_1 x = x$. Then $p_1 = \phi_1 \in \{\pm 1\}$
(and $p_0+p_1 \phi_0 = 0$ which gives $p_0 = -p_1 \phi_0 = - \phi_0 \phi_1$).
Now for the other direction.
Suppose $\phi$ is a ring homomorphism and $\phi(x) = \phi_0+\phi_1 x$, where $ \phi_1 \in \{\pm 1\}$.
(Since $\phi$ is a ring homomorphism we have
$\phi(n) = n$ for $n \in \mathbb{Z}$.)
We need to show that $\phi$ has an inverse. By explicit computation, we have $\phi(\phi_1^{-1}( x-\phi_0)) = x$.
Pick some element $\sum_k a_k x^k \in \mathbb{Z}[x]$ and note that
$\phi ( \sum_k a_k (\phi_1^{-1}( x-\phi_0))^k) = \sum_k a_k x^k$, hence $\phi$ is invertible and so is an automorphism. |
How to prove $|a+b|^k \leq 2^{k-1} (|a|^k+|b|^k)$? | Method $1$:
The function $f(x) = x^k$ for $x\geq 0$ is a convex function for $k \geq 1$. Now apply Jensen's inequality.
Method $2$:
Let $\lvert a \rvert \geq \lvert b \rvert$. Let $t = \dfrac{b}a \implies 0 \leq \vert t \vert \leq 1$. We then want to prove that $$\vert 1 + t \vert^k \leq 2^{k-1} \left(1+\vert t \vert^k \right)$$ $$\left \vert \dfrac{1+t}2 \right \vert^k \leq \dfrac{1+\left \vert t \right \vert^k}2 $$
$$\left \vert \dfrac{1+t}2 \right \vert^k \leq \left(\dfrac{1+ \vert t \vert}{2} \right)^k$$
Setting $y = \vert t \vert \in [0,1]$, we want to prove that $$\left(\dfrac{1+ y}{2} \right)^k \leq \dfrac{1+y^k}2$$
$$f(y) = \dfrac{1+y^k}2 - \left(\dfrac{1+ y}{2} \right)^k$$
$$f'(y) = \dfrac{ky^{k-1}}2 - \dfrac{k}{2^k}(1+y)^{k-1} = \dfrac{k}2 \left(y^{k-1} - \left(\dfrac{1+y}2 \right)^{k-1} \right)$$
For $y \in [0,1]$, $y \leq \dfrac{1+y}2$, and hence $y^{k-1} \leq \left( \dfrac{1+y}2 \right)^{k-1}$. This means that $f'(y) \leq 0$ and hence $f(y)$ is non-increasing. Hence, $$f(y) \geq f(1) = 0,$$ which implies $$\left(\dfrac{1+ y}{2} \right)^k \leq \dfrac{1+y^k}2$$ |
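A Monte-Carlo spot-check of the original inequality (a sketch in Python; the seed and sample ranges are arbitrary):

```python
import random

# Spot-check |a+b|^k <= 2^(k-1) * (|a|^k + |b|^k) for k >= 1.
random.seed(0)
violations = 0
for _ in range(20000):
    a = random.uniform(-10, 10)
    b = random.uniform(-10, 10)
    k = random.uniform(1, 6)
    lhs = abs(a + b) ** k
    rhs = 2 ** (k - 1) * (abs(a) ** k + abs(b) ** k)
    violations += lhs > rhs * (1 + 1e-9)   # tolerance for float round-off
print(violations)
```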
Let $S^1$ denote the unit circle in the plane $\Bbb R^2$. Pick out the true statement(s) | HINTS:
(c) $S^1$ is compact, and continuous functions preserve compactness.
(a) After you’ve done (c), you should know what kind of subset of $\Bbb R$ the continuous image of $S^1$ must be, unless it’s just a single point. It’s a very nice kind of set: it’s connected (why?), but if you remove almost any of its points, what’s left is not connected. Show that if $x$ is one of these so-called cut points whose removal disconnects $f[S^1]$, there must be at least two distinct points $p,q\in S^1$ such that $f(p)=f(q)=x$.
(b) If you do (a) using the hint above, this one will come almost for free. |
Show $\sum \frac{xy}{xy+x+y} \le \frac{6+x^2+y^2+z^2}{9}$ | Equivalently we have to prove that
$$\frac{3xy}{xy+x+y}+ \frac{3yz}{yz+y+z}+ \frac{3zx}{zx+z+x}\leq 2+ \frac{x^2+y^2+z^2}{3}$$
Solution
\begin{align*}
2+ \frac{x^2+y^2+z^2}{3} &=\sum \frac{x^2+y^2+4}{6} \\
&\geq \sum \sqrt[6]{x^2y^2} \\
&=\sum \frac{xy}{\sqrt[3]{(xy)xy}} \\
&\geq \sum \frac{xy}{\left ( xy+x+y \right )/3}\\
&=\sum \frac{3xy}{xy+x+y}
\end{align*}
It only makes use of the AM-GM inequality. :)
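A quick numerical spot-check of the scaled inequality above for positive $x,y,z$ (a sketch in Python; seed and ranges arbitrary):

```python
import random

# Spot-check of 3xy/(xy+x+y) + 3yz/(yz+y+z) + 3zx/(zx+z+x)
#               <= 2 + (x^2 + y^2 + z^2)/3   for x, y, z > 0.
random.seed(1)
worst = -1.0
for _ in range(20000):
    x, y, z = (random.uniform(0.01, 10.0) for _ in range(3))
    lhs = sum(3 * a * b / (a * b + a + b)
              for a, b in ((x, y), (y, z), (z, x)))
    rhs = 2 + (x * x + y * y + z * z) / 3
    worst = max(worst, lhs - rhs)   # should never exceed 0
print(worst)
```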
difference between $f^{-1}([-\infty ,a))$ and $f^{-1}(-\infty ,a)$? | Sometimes, particularly in measure theory, people choose to consider the extended real line, or the real line with two points called $-\infty$ and $\infty$ adjoined. This is often written $[-\infty, \infty]$. This is a useful space to study since positive measures are functions from a $\sigma$-algebra to $[0,\infty]$, and integration by positive measures can yield $-\infty$ or $\infty$.
The extended real line is typically ordered by putting $\infty$ as the largest element and $-\infty$ as the smallest element. Once you have this order on the $[-\infty, \infty]$, the order topology can be put on the space. The open and closed sets are then the standard ones with respect to the order topology. It's a nice topology. The subspace topology of $(-\infty, \infty)$ with respect to it is the standard topology on $\mathbb{R}$, and $[-\infty, \infty]$ is compact and Hausdorff. In fact, it's homeomorphic to $[0,1]$.
However, $[-\infty, \infty]$ loses much of the algebraic niceness of $\mathbb{R}$. The expression $- \infty + \infty$ cannot be coherently defined. For this reason, attention is often restricted to functions on $[0,\infty]$ (like positive measures), where addition and multiplication can be nicely defined (put $0 * \infty = 0$).
In the context of your book, they are defining measurability of extended real-valued functions; in class, they are defining it for real-valued functions. Since the subspace topology of $(-\infty, \infty)$ in $[-\infty, \infty]$ is the standard topology on $\mathbb{R}$, these definitions coincide for all real-valued functions. Your book is just being a bit more general. |
Inverse of an infinite matrix with factorial entries | The LDU-decomposition gives dot-products of rows & columns of the following matrices, whose decomposition of entries is much obvious and simple even for the infinite case and allow Cesaro or Eulersummation, which seem to give always zero (I've checked this for a handful of leading dot-products).
The row-scaled left and column-scaled right matrices of the product look like
$$
\small \begin{array}{rrrrrrrrr|rrrrrrrrr|}
& & & & & & & & & & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \\
& & & & & & & & & & 3 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & \\
& &u^{-1} &\cdot _\mathfrak E&(d^{-1}&\cdot l^{-1})& = &\Large 0 & & & 5 & 5 & 1 & 0 & 0 & 0 & 0 & 0 & \\
& & & & & & & & & & 7 & 14 & 7 & 1 & 0 & 0 & 0 & 0 & \\
& & & & & & & & & & 9 & 30 & 27 & 9 & 1 & 0 & 0 & 0 & \\
& & & & & & & & & & 11 & 55 & 77 & 44 & 11 & 1 & 0 & 0 & \\
& & & & & & & & & & 13 & 91 & 182 & 156 & 65 & 13 & 1 & 0 & \\
& & & & & & & & & & 15 & 140 & 378 & 450 & 275 & 90 & 15 & 1 & \\
& & & & & & & & & & ... & ... & ... & ... & ... & ... & ... & .. & \\
\hline\\
1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 & ... & & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \\
0 & 1 & -3 & 6 & -10 & 15 & -21 & 28 & ... & & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \\
0 & 0 & 1 & -5 & 15 & -35 & 70 & -126 & ... & & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \\
0 & 0 & 0 & 1 & -7 & 28 & -84 & 210 & ... & & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \\
0 & 0 & 0 & 0 & 1 & -9 & 45 & -165 & ... & & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \\
0 & 0 & 0 & 0 & 0 & 1 & -11 & 66 & ... & & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & -13 & ... & & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & ... & & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \\
.. & .. & .. & .. & .. & .. & .. & .. & ... & & ... & ... & ... & ... & ... & ... & ... & .. & | \\
\hline
\end{array}$$
where $u,d,l$ are the similarity-scaled versions of $U,D,L$. Because the results of the dot products - taken by Euler summation - are all zero (supposedly), the required inverse similarity-scaling of the result matrix is irrelevant and doesn't change its character of being a zero matrix.
You can check some entries in the first column of the results using the sumalt() procedure in Pari/GP, which is a Cesàro-sum-related method of divergent summation for alternating series.
sumalt(k=1,(-1)^k*1*(2*k-1)) \\first row by first column
vector(12,k,-(-1)^k*binomial(k+0,2)) \\second row check only
sumalt(k=2,(-1)^k*binomial(k+0,2)*(2*k-1)) \\second row by first column
vector(12,k,-(-1)^k*binomial(k+1,4)) \\3rd row check only
sumalt(k=3,-(-1)^k*binomial(k+1,4)*(2*k-1))\\3rd row by first column
vector(12,k,-(-1)^k*binomial(k+2,6)) \\4th row check only
sumalt(k=4,-(-1)^k*binomial(k+2,6)*(2*k-1))\\4th row by first column
vector(12,k,-(-1)^k*binomial(k+3,8)) \\5th row check only
sumalt(k=5,-(-1)^k*binomial(k+3,8)*(2*k-1))\\5th row by first column
vector(12,k,-(-1)^k*binomial(k+4,10)) \\6th row check only
sumalt(k=6,-(-1)^k*binomial(k+4,10)*(2*k-1))\\6th row by first column |
Evaluating Summation of $5^{-n}$ from $n=4$ to infinity | The problem you have is that you do not know why the formula works to begin with. If you did the situation would be clear. Here's the thing:
$$\sum_{n=0}^{\infty}r^n=\frac{1}{1-r} \; \;\;\;\;\; |r|<1.$$
Let $r=\frac{1}{5}$, then you really have the following situation:
$$\left(\frac{1}{5}\right)^0+\left(\frac{1}{5}\right)^1+\left(\frac{1}{5}\right)^2+\left(\frac{1}{5}\right)^3+\left(\frac{1}{5}\right)^4+\left(\frac{1}{5}\right)^5+....$$
Let's call this infinite sum $S$ and proceed as follows,
$$S=\left(\frac{1}{5}\right)^0+\left(\frac{1}{5}\right)^1+\left(\frac{1}{5}\right)^2+\left(\frac{1}{5}\right)^3+\left(\frac{1}{5}\right)^4+\left(\frac{1}{5}\right)^5+....$$
then
$$ \left(\frac{1}{5}\right)S=\left(\frac{1}{5}\right)^1+\left(\frac{1}{5}\right)^2+\left(\frac{1}{5}\right)^3+\left(\frac{1}{5}\right)^4+\left(\frac{1}{5}\right)^5+\left(\frac{1}{5}\right)^6....\;\;\;\;$$
Subtract the second from the first,
$$S-\left(\frac{1}{5}\right)S=1$$
$$S(1-\left(\frac{1}{5}\right))=1$$
$$S=\frac{1}{1-\left(\frac{1}{5}\right)}$$
$$S=\frac{5}{4}.$$
Now recall what $S$ was and realize,
$$S=\left(\frac{1}{5}\right)^0+\left(\frac{1}{5}\right)^1+\left(\frac{1}{5}\right)^2+\left(\frac{1}{5}\right)^3+\left(\frac{1}{5}\right)^4+\left(\frac{1}{5}\right)^5+....=\frac{5}{4}$$
But, you want the powers to start at $n=4$, so subtract the first four terms ($n=0,1,2,3$) to get,
$$S-\left(1+\frac{1}{5}+\left(\frac{1}{5}\right)^2+\left(\frac{1}{5}\right)^3\right)$$
which means
$$\left(\frac{1}{5}\right)^4+\left(\frac{1}{5}\right)^5+....=\frac{5}{4}-\left(1+\frac{1}{5}+\left(\frac{1}{5}\right)^2+\left(\frac{1}{5}\right)^3\right)=\frac{1}{500}.$$ |
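Exact arithmetic confirms the tail value (a sketch in Python, using `fractions.Fraction`):

```python
from fractions import Fraction

r = Fraction(1, 5)
S = 1 / (1 - r)                           # the full geometric sum, 5/4
head = sum(r ** n for n in range(4))      # terms n = 0, 1, 2, 3
tail = S - head                           # should be 1/500
print(tail)                               # 1/500

# compare against a long partial sum of the tail itself
partial = sum(r ** n for n in range(4, 60))
print(float(partial))
```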
How to calculate limits of sequences? | For the first one, divide the numerator and the denominator by the dominant power of the numerator that is $n^5$ in this case:
$$\begin{align}
\lim a_n &= \lim\frac{n^2+4n^2-n^5}{n^3+3n^5-2n^4} \\
&= \lim \frac{\color{blue}{5/n^3} - 1}{\color{blue}{1/n^2} + 3 - \color{blue}{2/n}} \\
&=\frac{-1}{3}\end{align}$$
because all blue terms tend to $0$ as $n$ goes to infinity.
For the second one, here is the hint:
$$b_n = \sqrt{n^2+4} - \sqrt{n^2+2} = \frac{2}{\sqrt{n^2+4} +\sqrt{n^2+2}}$$ |
Can there exists a countable continuum? | Suppose $L=\{a_{i}:i\in\mathbb{N}\}$ is a countable linear continuum. Let $x_{1},y_{1}\in L$ such that $x_{1}<y_{1}$. For all $n$ we can pick $x_{n}$ such that $x_{n-1}<x_{n}<y_{n-1}$ and $a_{2n}\not\in (x_{n},y_{n-1})$ where
$$(x_{n},y_{n-1})=\{a\in L:x_{n}<a<y_{n-1}\}.$$
Furthermore we can pick $y_{n}$ such that $x_{n}<y_{n}<y_{n-1}$ and $a_{2n+1}\not\in(x_{n},y_{n})$.
Note that $X=\{x_{n}:n\in\mathbb{N}\}$ is a non-empty set in $L$ with upper bound $y_{1}$. Since $L$ has the least upper bound property there is an $m\in\mathbb{N}$ such that $a_{m}$ is the supremum of $X$. However either $a_{m}<x_{m}$ or $a_{m}>y_{m}>x_{m}$, so either $a_{m}$ is not an upper bound or it is not the least upper bound.
So a linear continuum can not be countable.
I am not sure about your first question, I will think about it and expand this answer when I know how to solve that part. |
Converting a $6 \times 3$ generator matrix into a check matrix | Let's take a quick tour of coding theory.
A generator matrix is a matrix whose rows form a basis for a linear code.
Let's inspect the $2$nd, $4$th, and $6$th rows of the generator matrix; they are $\begin{bmatrix} 1& 0 & 0\end{bmatrix},\begin{bmatrix} 0& 1 & 0\end{bmatrix}$, and $\begin{bmatrix} 0& 0 & 1\end{bmatrix}.$
Hence the codewords are every element of $F_2^3$.
A check matrix maps all codewords and only the codewords to the zero vector. To map every element of $F_2^3$ to the zero vector, the corresponding parity check matrix has to be the zero matrix.
Edit:
As OP uses $6$-bit codewords, the convention is different. Hence one strategy would be to take your matrix $G$, take its transpose $G^T$, and perform row operations to reduce it to RREF, i.e. to the form $(I\,|\,P)$. The parity check matrix is then of the form $(P^T\,|\,I)$ |
If $f(x)=\begin{cases} x-2 & x\le 0 \\ 4-x^2 & x>0 \end{cases}$, then number of points where $f(f(x))$ is discontinuous is | To solve these kind of problems, especially when we have a pretty simple function, we should always attempt to graph it and see what's going on. It's quite difficult to imagine composition of functions so we need to take it slowly.
If you graph $f(x)$ it's clear that we have three regions of interest. One when $x\leq 0$, another when $0<x<2$ and finally when $x\geq 2$. Why are these regions of interest? Well because this is when the graph (the function more precisely) admits values that are either positive or negative and remember that $f(x)$ is entirely dependent on whether $x>0$ (positive) or $x\leq 0$ (negative).
So, lets consider when $x\leq 0$. From our graph we have $f(x)\leq 0$ which is sufficient information to calculate $f(f(x))$. Because $x\leq 0$ we have $f(x)=x-2$ and since $f(x)\leq 0$ we conclude that $$f(f(x))=(x-2)-2=x-4.$$
Applying this sort of logic to the regions $0<x<2$ and $x\geq 2$ we conclude that,
$$ f(f(x)) =
\begin{cases}
x-4 & x\leq 0, \\
4-(4-x^2)^2 & 0< x< 2, \\
2-x^2 & x\geq 2.
\end{cases}
$$
Hence, it is obvious there are two discontinuities, one at $x=0$ and another at $x=2$. |
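The two jumps can be seen numerically by evaluating $f(f(x))$ just to either side of $x=0$ and $x=2$ (a sketch in Python):

```python
def f(x):
    return x - 2 if x <= 0 else 4 - x * x

def ff(x):
    return f(f(x))

eps = 1e-9
# the one-sided limits disagree at x = 0 and at x = 2
at0 = (ff(-eps), ff(eps))          # about (-4, -12)
at2 = (ff(2 - eps), ff(2 + eps))   # about (4, -2)
print(at0, at2)
```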
Why dosen't the author consider $\oint_{\gamma_{R}} f(z) = 2 \pi i \sum_{j} Ind{\gamma}(P{j}) \cdot Res_{f}(P_{j})$? | The integral around a closed contour, with the original $f$ rather than the modified $g$, can indeed be evaluated by using the residue theorem, but that doesn't seem to help with the original problem of calculating the integral along the real axis, which is not a closed contour. To connect the original problem with the residue calculation, you need to close the contour, which is usually done by taking a long segment of the real line (from $-R$ to $R$ in your question) and closing it with a big semicircle in the upper or lower half-plane. And then you have to show that the integral over the semicircle is small (or otherwise under good control) so that you can infer a result about the integral along just the real axis. With the original $f$, this kind of control seems to be lacking, because the cosine function, though nice and bounded along the real axis, blows up badly when you get far above or far below the real axis in the complex plane.
Fortunately, one can (and the authors do) dodge the problem by noticing that $e^{iz}$ is bounded in the upper half-plane. So the residue calculation works if we have $e^{iz}$ instead of $\cos z$ in the numerator, and then we can get the answer for $\cos z$ by noticing that, along the original domain of integration, the real axis, it's the real part of $e^{iz}$. |
Sheaf module and morphism to sheaf hom | Yes it is reasonable, except that you maybe mean $\mathscr{O}_X$ instead of $\mathscr{O}_x$? ($\mathscr{O}_x$ usually denotes the stalk of $\mathscr{O}_X$ at the point $x$, so it is not per se a sheaf on $X$.) |
Uniqueness of $n$'th roots $\bmod p\,$ when $n$ is coprime to $p-1$ | If $n$ is coprime to $p-1$, then there exists $k$ such that $nk\equiv 1\pmod{p-1}$, so, for some $t$, $nk+(p-1)t=1$.
You also know that $x^{p-1}=1$, for every $x\in\mathbb{F}_p$ (the $p$-element field), provided $x\ne0$.
Then, for $x\ne0$,
$$
x^{nk}=x^{nk}(x^{(p-1)})^t=x^{nk+(p-1)t}=x^1=x
$$
Obviously, $0^{nk}=0$.
Therefore the map $x\mapsto x^k$ is the inverse of $x\mapsto x^n$. |
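A concrete instance (a sketch in Python; $p=11$, $n=7$ chosen for illustration; `pow(n, -1, m)` needs Python 3.8+):

```python
# Example with p = 11, n = 7 (gcd(7, p-1) = gcd(7, 10) = 1):
p, n = 11, 7
k = pow(n, -1, p - 1)        # k with n*k ≡ 1 (mod p-1)
assert n * k % (p - 1) == 1

# x -> x^k inverts x -> x^n on all of F_p, so n-th roots are unique
images = sorted(pow(x, n, p) for x in range(p))
for x in range(p):
    assert pow(pow(x, n, p), k, p) == x
print("x -> x^%d mod %d is inverted by x -> x^%d" % (n, p, k))
```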
Find max and min of $2\sqrt{x+y} - \sqrt x - \sqrt y$ | Since $x+y$ is a constant, to maximize $2\sqrt{x+y}-\sqrt{x}-\sqrt{y}$, we need to minimize what we take away from $2\sqrt{x+y}$, so we need to minimize $\sqrt{x}+\sqrt{y}$.
Similarly, to minimize $2\sqrt{x+y}-\sqrt{x}-\sqrt{y}$, we need to maximize $\sqrt{x}+\sqrt{y}$.
So let us first minimize $\sqrt{x}+\sqrt{y}$. This is equivalent to minimizing its square, which is $x+y+2\sqrt{x}\sqrt{y}$. But $x+y$ is fixed. What is the minimum possible value of $2\sqrt{x}\sqrt{y}$?
Next we maximize $\sqrt{x}+\sqrt{y}$. Equivalently, we maximize its square $x+y+2\sqrt{x}\sqrt{y}$. So we want to maximize $4xy$.
Note that $(x+y)^2=(x-y)^2 +4xy$. So $4xy=(x+y)^2-(x-y)^2$. Given that $x+y$ is fixed, what does this say about the maximum value of $4xy$? |
How does one show using algebra or basic mathematical prowess to show that ψ = 1 - φ | The best answer (imho) is already posted. But whenever $\alpha$ and $\beta$ are the two roots of a quadratic polynomial $x^2+b\,x+c$, then two things happen:
$\alpha\cdot\beta=c$
$\alpha+\beta=-b$
Since $\varphi$ and $\psi$ are roots of $x^2-x-1$, the second fact implies $$\varphi+\psi=1$$ which implies $$\psi=1-\varphi$$ |
Explain which partitions of $n$ does the coefficient of $x^n$ count in the following... | From the product, it counts the number of partitions that use:
any number of 1's
0, 1, or 2 copies of 2
any number of 3's
no other terms. |
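The product itself isn't reproduced in the question, but the description corresponds to $\frac{1}{1-x}\,(1+x^2+x^4)\,\frac{1}{1-x^3}$; a brute-force check of that reading (all names below are my own):

```python
# Coefficient of x^n in (1/(1-x)) * (1 + x^2 + x^4) * (1/(1-x^3))
# (assumed to be the product in question) should count partitions of n
# using any number of 1's and 3's and at most two 2's.
N = 30

def series(factors, N):
    """Multiply truncated power series given as coefficient lists."""
    out = [1] + [0] * N
    for g in factors:
        new = [0] * (N + 1)
        for i, a in enumerate(out):
            for j, b in enumerate(g):
                if i + j <= N:
                    new[i + j] += a * b
        out = new
    return out

ones   = [1] * (N + 1)                                        # 1/(1-x)
twos   = [1 if d in (0, 2, 4) else 0 for d in range(N + 1)]   # 1 + x^2 + x^4
threes = [1 if d % 3 == 0 else 0 for d in range(N + 1)]       # 1/(1-x^3)
coeff = series([ones, twos, threes], N)

def count_partitions(n):
    return sum(1 for a in range(n + 1)      # number of 1's
                 for b in range(3)          # number of 2's: 0, 1, or 2
                 for c in range(n + 1)      # number of 3's
                 if a + 2 * b + 3 * c == n)

assert all(coeff[n] == count_partitions(n) for n in range(N + 1))
```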
How many ways are there to take 4 out of 7, 1-month courses in 6 months? | We cannot read two courses in the same month, so we have to choose four months out of 6 to read the courses in – the 6 months come in here. There are $\binom64$ ways to do this. Multiply that by the $\frac{7!}{3!}$ ways to choose and order the courses, which you have calculated, and there are 12600 ways to pick/read courses for the semester. |
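A one-line check of the arithmetic (`math.perm(7, 4)` is the number of ways to choose and order 4 of the 7 courses):

```python
import math

# choose 4 of the 6 months, times ordered choices of 4 of the 7 courses
ways = math.comb(6, 4) * math.perm(7, 4)   # 15 * 840
print(ways)  # 12600
```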
Probability problem gives different answers with different methods | Let $R$ be the number of reds and $G$ be the number of greens.
Then
\begin{align*}
P(R\geq 1, G\geq 1) &= 1-P(R=0 \cup G = 0) \\
&= 1-[P(R=0)+P(G=0)-P(R=0, G=0)]\\
&= 1 -\left[\frac{\binom{4}{0}\binom{11}{4}}{\binom{15}{4}}+\frac{\binom{6}{0}\binom{9}{4}}{\binom{15}{4}}-\frac{\binom{5}{4}\binom{10}{0}}{\binom{15}{4}}\right] \\&= \frac{914}{1365}\end{align*}
where the second line is true by inclusion-exclusion. This agrees with your first answer. |
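The inclusion-exclusion line can be verified exactly (assuming, as the binomial coefficients suggest, an urn of 15 balls with 4 red, 6 green, and 5 of other colors, from which 4 are drawn):

```python
from fractions import Fraction
from math import comb

total = comb(15, 4)
p_no_red   = Fraction(comb(11, 4), total)   # avoid the 4 reds
p_no_green = Fraction(comb(9, 4), total)    # avoid the 6 greens
p_neither  = Fraction(comb(5, 4), total)    # only the 5 other balls

p = 1 - (p_no_red + p_no_green - p_neither)
print(p)  # 914/1365
```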
Convergence of $ \int_0^{\infty} x^{-5a} \ln(1+x^{2a}) dx $ | In order to ensure integrability in a right neighbourhood of the origin, where the integrand function behaves like $x^{-3a}$, we must have $\text{Re}(a)<\frac{1}{3}$. In a similar way, to ensure integrability in a left neighbourhood of $+\infty$ we must have $\text{Re}(a)>\frac{1}{5}$.
It follows that the given integral is converging for $\text{Re}(a)\in\left(\frac{1}{5},\frac{1}{3}\right)$, and in such a case its value is given by
$$ I(a) = \frac{\pi}{(5a-1)\cos\frac{\pi}{2a}},\tag{1} $$
since integration by parts turns the integral into something of the form $\int_{0}^{+\infty}\frac{x^{\alpha}}{1+x^{\beta}}\,dx $, which can be computed through Euler's Beta function and the $\Gamma$ reflection formula, by setting $1+x^\beta = u^{-1}$. |
If $dX_1 = dX_2$ then curvatures of $\nabla^{X_1}$ and $\nabla^{X_2}$ agree | There is the following formula for the exterior derivative that you should get to know (it comes up a lot): http://en.wikipedia.org/wiki/Exterior_derivative#Invariant_formula.
In the case of a 1-form $\mu$, it says that
$$
d\mu(U,V) = U\cdot \mu(V) - V \cdot \mu(U) - \mu([U,V]).
$$
Therefore the last thing you wrote is exactly $dX^1(U,V)$.
This formula can be proved using Cartan's formula:
$$
d i_U \mu + i_U d \mu = L_U \mu.
$$
Rearranging gives
$$
i_U d\mu = L_U \mu - d(\mu(U)).
$$
Now apply to a vector field $V$ and get
$$
d\mu(U,V) = (L_U \mu)(V) - V\cdot \mu(U).
$$
Now we use the fact that the Lie derivative satisfies the product rule:
$$
U\cdot (\mu(V)) = L_U (\mu(V)) = (L_U \mu)(V) + \mu(L_U V) = (L_U \mu)(V) + \mu([U,V]).
$$
So $(L_U \mu)(V) = U\cdot(\mu(V)) - \mu([U,V])$, as desired. |
Find the value of $\int_{-\infty}^{\infty} \frac{e^{-x^2}}{1+x^2} dx$ . | Tags aside, you don't explicitly say the solution needs to be by complex analysis, so I'll suggest something else. Your integral is $ef(1)$ with $f(t):=\int_{\Bbb R}\frac{e^{-t(1+x^2)}}{1+x^2}dx$ so$$f^\prime(t)=-\int_{\Bbb R}e^{-t(1+x^2)}dx=-\sqrt{\pi}t^{-1/2}e^{-t},\,f(\infty)=0.$$Hence$$f(1)=-\int_1^\infty f^\prime(t)dt=\sqrt{\pi}\int_1^\infty t^{-1/2}e^{-t}dt=2\sqrt{\pi}\int_1^\infty e^{-u^2}du=\pi\operatorname{erfc}(1).$$So the original integral is $\pi e\operatorname{erfc}(1)$. |
Compactness in $\ell ^1$. Is $K := \{ (z_n)_n \in \ell^1 : |z_n| \leq |x_n| \}$ compact? | As you have pointed out, for each fixed $n$ the sequence of $n$th entries of our sequences is bounded and thus admits a convergent subsequence. Using a Cantor diagonal argument we can produce a subsequence $(\xi^{k_l})_l$ such that for every fixed $n$ the sequence of the $n$th entries converges. Set $y= (y_n)_n=(\lim_{l\rightarrow \infty} \xi^{k_l}_n)_n$.
Note that
$$ \Vert y -\xi^{k_l} \Vert
\leq \sum_{j=1}^N \vert y_j - \xi^{k_l}_j \vert + \sum_{j\geq N+1} 2\vert x_j\vert . $$
From this we see that our subsequence converges in $\ell^1$. |
Exercise 1.13 of chapter 1 of Revuz and Yor's | Re 1., note that the random variable $X=\limsup\limits_{t\to\infty}B_t/\sqrt{t}$ is asymptotic (that is, measurable with respect to the tail $\sigma$-algebra) hence, by Kolmogorov's zero-one law, if $P(X\gt0)\lt1$, then $P(X\gt0)=0$, that is, $X\leqslant0$ almost surely. If this holds, then, by symmetry, $\liminf\limits_{t\to\infty}B_t/\sqrt{t}\geqslant0$ almost surely, thus, $B_t/\sqrt{t}\to0$ almost surely.
In particular, this would imply that $B_t/\sqrt{t}\to0$ in distribution, which is absurd since every $B_t/\sqrt{t}$ is standard normal. Hence $X\gt0$ almost surely. |
One tailed or two tailed | There is only one scenario. The administrator's statement has to be taken at face value and is the null hypothesis.
$H_0: t=4.25$ hours
The board member has an alternative hypothesis.
$H_1: t>4.25$ hours
This is one-tailed. |
What is the remainder of ${{6457}^{76}}^{57}$ modulo $23$? | $6457\equiv-6\pmod{23}$
$\implies6457^{76^{57}}\equiv(-6)^{76^{57}}\pmod{23}\equiv6^{76^{57}}$
Now we need $76^{57}\pmod{\phi(23)}$
As $(22,76)=2,$ let us find $76^{57-1}\pmod{22/2}$
Now $76\equiv-1\pmod{11}\implies76^{56}\equiv(-1)^{56}\equiv1$
$76^{57}=76\cdot76^{56}\equiv76\cdot1\pmod{76\cdot11}$ (multiplying the previous congruence through by $76$); since $22\mid76\cdot11$, this gives $76^{57}\equiv76\pmod{22}\equiv10$
$\implies6^{76^{57}}\equiv6^{10}\pmod{23}$ |
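The answer stops at $6^{10}\pmod{23}$; evaluating it gives remainder $4$, and Python's three-argument `pow` confirms the whole computation directly (the exponent $76^{57}$ has only a little over 100 digits, so no reduction is even needed):

```python
# Check both the reduced exponent and the final remainder.
e = 76 ** 57
assert e % 22 == 10                  # 76^57 ≡ 10 (mod φ(23))
assert pow(6457, e, 23) == pow(6, 10, 23) == 4
print(pow(6457, e, 23))  # 4
```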
properties of a real analytic function | Fix $y\in U$, and let $M$, $C$, $r$ be as given in the problem. For each $m\geq 1$, let $P_m(x)$ be the $m$th Taylor polynomial $$P_m(x) = \sum_{|i| = 0}^m\frac{(\partial^if)(y)}{i!}(x - y)^i.$$ We want to show that $P_m(x)\to f(x)$ whenever $x$ is sufficiently close to $y$, because this is what analytic means. In order to do this, we will apply Taylor's theorem in several variables, which says that for $x\in \mathbb{B}_r(y)$ $$\tag{$*$}f(x) - P_m(x) = \sum_{|j| = m + 1}R_j(x)(x - y)^j.$$ The right hand side of this expression is the "remainder term" in Taylor's theorem. We want to show that it goes to $0$ as $m\to \infty$. Using the explicit formula for $R_j(x)$ in the Wikipedia link given, $$R_j(x) = \frac{m+1}{j!}\int_0^1(1-t)^m(\partial^jf)(y + t(x - y))\,dt.$$ Here's where we use the estimates given in the problem: $$|R_j(x)|\leq \frac{m+1}{j!}\int_0^1(1-t)^mM\cdot j!\cdot C^{m+1}\,dt = MC^{m+1}.$$ Plugging this into equation ($*$) gives $$|f(x) - P_m(x)|\leq \sum_{|j| = m+1}MC^{m+1}|x-y|^{m+1} = \binom{m+n}{n-1}MC^{m+1}|x-y|^{m+1}.$$ Thus we only have to show the right hand side of this expression goes to $0$ when $|x - y|$ is small enough. One has the very crude estimate $$\binom{m+n}{n-1}\leq (m+n)^{n-1},$$ which is $\leq 2m^{n-1}$ when $m$ is large. Thus for large $m$, $$|f(x) - P_m(x)|\leq 2Mm^{n-1}C^{m+1}|x-y|^{m+1}.$$ If $|x - y|<C^{-1}$, the right hand side of this expression $\to 0$ locally uniformly as $m\to \infty$. Hope this helps! |
Amazing isomorphisms | I don't know if this is really amazing, but I was quite surprised when I did discover these isomorphisms:
$\mathbb R^n$ and $\mathbb R^m$ are isomorphic as abelian groups (where $n,m \geq 1$). In fact, pick any Hamel basis of $\mathbb R^n$ and of $\mathbb R^m$ as $\mathbb Q$ vector spaces. Both bases are in bijection with $\mathbb R$, so any bijection between them gives an isomorphism of $\mathbb Q$ vector spaces, in particular of abelian groups.
A really surprising example (for me at least): the free group with countably many generator is isomorphic to a subgroup of $F_2$, the free group with two generators! This fact comes from the covering $\sin : X \to X'$ where $X = \mathbb C \setminus \{\frac{\pi}{2} + k\pi, k \in \mathbb Z\}, X' = \mathbb C \setminus \{-1,1\}$, and the more general fact that for any covering between nice space the induced maps on fundamental groups is injective.
$L^2(\mathbb S^1) \cong \ell^2(\mathbb Z)$ via Parseval equality.
The space $\mathcal M_k$ of modular forms of weight $2k$ is finite dimensional (which is already non-trivial) and moreover, if $\mathcal M = \bigoplus_{k \in \mathbb N} \mathcal M_k$, we have a graded algebra isomorphism $\mathcal M \cong \mathbb C[x,y]$ where $x$ has degree $2$ and $y$ has degree $3$ (they represent the Eisenstein series $G_4(z)$ and $G_6(z)$ respectively). It is quite surprising that the algebra of all modular forms is simply isomorphic to an algebra of polynomials!
Finally, a cute example: the group of direct isometries which preserve a cube are isomorphic to the symmetric group $\mathfrak S_4$. In fact, this is exactly the permutation group of the big diagonals $d_1,d_2,d_3,d_4$ of the cube. |
Can someone recommend for me a text-book about applications of Taylor series? | I don't know of any textbook that focuses on applications of Taylor series.
However, here is a list (in French) of theorems that can be proved as applications of Taylor's formula. The proofs are not given in the document, but reference is made to books where the proofs can be found. The material would be roughly at the upper undergraduate level (years 3 and 4 of university) in the U.S.
https://agreg-maths.fr/uploads/versions/596/218.pdf
Here are other similar documents.
https://www.imo.universite-paris-saclay.fr/~perrin/CAPES/analyse/fonctions/formulesdetaylor%2807%29.pdf
http://perso.eleves.ens-rennes.fr/people/emeline.luirard/Lecons/218.pdf
For simple applications to computing approximate values of functions that could be understood by a first-year student, have a look at Chapter 14 of A First Course in Calculus, 3rd ed. by Serge Lang. |
Find all solutions for $A=1$ where $A=\left(1+x+\frac{1}{2}x^{2}\right)\cdot e^{-x}$ | Your derivation is not mathematically rigorous when you suddenly decide to make the parenthesis equal to $1$. Why that? A priori, there could be solutions with $a=\ln(b)$ without $b$ being equal to $1$.
Instead, if you have a look at the graph, it shows you the way to follow:
In fact, this smooth function is strictly decreasing on $(-\infty,0)$ and $(0,+\infty)$ with $f(0)=1$ (you prove it rigorously by showing that its derivative is $< 0$ everywhere on these intervals), thus it crosses the level $y=1$ only once, giving the unique solution $x=0$.
Edit: The polynomial $\left(1+x+\frac{1}{2}x^{2}\right)$ consists of the first three terms of the series expansion of $e^x$. The more terms are taken, the flatter the resulting graph is around $0$. Here is for example the graph of the function defined by $f(x)=\left(1+x+\frac{1}{2}x^{2}+\frac{1}{6}x^3+\frac{1}{24}x^4\right)e^{-x}$ (roughly speaking, in the vicinity of $x=0$, it is $\approx e^{x}e^{-x}=1$):
using this “ ∀ · · · x, · · · .” No computer scientists are unemployed | Let $C(x)$ denote "x is a computer scientist," and let $U(x)$ denote "x is unemployed."
Then we have: $$\forall x(C(x) \rightarrow \lnot U(x))\tag{1}$$
ADDED: Note that if you are confused about why speaking of "no computer scientist x such that U(x)" translates to a statement about all computer scientists, note that we can start with a more literal translation, and work our way to $(1)$.
"(There are) no computer scientists (that) are unemployed" more literally translates to
"There does not exist an x such that x is a computer scientist AND x is unemployed." $$\lnot \exists x(C(x) \land U(x))\tag{2}$$
Recall that we can "push the negation inwards": $$\lnot \exists x P(x)\equiv \forall x \lnot P(x)$$
In our case, pushing negation inwards gives us $$\begin{align} \lnot \exists x(C(x) \land U(x)) &\equiv \forall x\Big(\lnot(C(x) \land U(x))\Big) \\ \\
&\equiv \forall x \Big(\lnot C(x) \lor \lnot U(x)\Big)\tag{DeMorgan's}\\ \\
&\equiv \forall x \Big(C(x) \rightarrow \lnot U(x)\Big)\tag{1}\end{align}$$ |
For every $\lambda \in \mathbb{R}$, find the rank and nullity of the matrix | Hint: Try subtracting the first row from the other rows. |
Getting different parameters of distribution when using different methods | Your derivations of the estimators are correct, but without the data, it is not possible to verify your numeric calculations. That said, one should not expect these two estimators to perform roughly equally--for example, even the corresponding estimators for a $\operatorname{Uniform}(0,\theta)$ distribution behave quite differently: $$\hat\theta_{MM} = 2\bar X, \quad \hat\theta_{ML} = X_{(n)},$$ and although both are consistent, the small sample behavior can be drastically different.
In your case, using the sample $$\boldsymbol x = (5,7,3,2,5,6,11,55,32,19),$$ I get $$(\hat a, \hat \lambda)_{MM} = (0.463719, 0.0319806), \\ (\hat a, \hat \lambda)_{ML} = (1.10246, 0.0760318).$$ One is more than twice the other. I would conjecture that the method of moments estimator is less robust, and I am sure an investigation of this has been performed, but I haven't bothered to search the literature at this time. |
What is $10 \times 10-10+10$? | The usual convention is to give addition and subtraction equal precedence, evaluating them from left to right. So we start with multiplication ($10 \cdot 10$), then do the addition and subtraction from left to right to get $(100 - 10) + 10 = 100$.
If we were to take the acronym PEMDAS (BODMAS, BIDMAS, etc.) literally, it would seem that addition should be evaluated before subtraction. In this case we would get $10 \cdot 10 - 10 + 10 = (10 \cdot 10) - (10 + 10) = 80$. But this would not be correct; despite A being listed before S in the acronym, the intention is that they be given equal precedence and evaluated left to right. |
Poisson distribution with exponential parameter | For every nonnegative integer $n$, $$\mathbb P(X=n\mid\Lambda)=\mathrm e^{-\Lambda}\frac{\Lambda^n}{n!}$$ hence
$$
\mathbb P(X=n)=\mathbb E(\mathbb P(X=n\mid\Lambda))=\int_0^{+\infty}\left(\mathrm e^{-\lambda}\frac{\lambda^n}{n!}\right)\,f_\Lambda(\lambda)\,\mathrm d\lambda=\int_0^{+\infty}\left(\mathrm e^{-\lambda}\frac{\lambda^n}{n!}\right)\,\mu\mathrm e^{-\mu\lambda}\,\mathrm d\lambda
$$
where the first equality comes from the law of total expectation (see also the question "Can we prove the law of total probability for continuous distributions?"). The change of variable $x=(1+\mu)\lambda$ in the rightmost integral yields
$$
\mathbb P(X=n)=\frac{\mu}{(1+\mu)^{n+1}}\int_0^{+\infty}\mathrm e^{-x}\frac{x^n}{n!}\mathrm dx=\frac{\mu}{(1+\mu)^{n+1}}
$$
To sum up,
$$
\mathbb P(X=n)=(1-p)p^n\qquad p=\frac1{1+\mu}
$$
That is, the distribution of $X$ is geometric with parameter $p$. |
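As a sanity check, one can evaluate the mixing integral numerically and compare it with the geometric pmf (a sketch with the arbitrary choice $\mu = 2$, so $p = 1/3$; a plain trapezoidal rule is accurate enough here):

```python
from math import exp, factorial

mu = 2.0                       # arbitrary choice; p = 1/(1+mu) = 1/3
p = 1.0 / (1.0 + mu)

def integrand(lam, n):
    # e^{-λ} λ^n / n!  times the exponential density μ e^{-μλ}
    return exp(-lam) * lam ** n / factorial(n) * mu * exp(-mu * lam)

def trapezoid(n, b=40.0, steps=100_000):
    """Trapezoidal rule on [0, b]; the tail beyond b is negligible."""
    h = b / steps
    s = 0.5 * (integrand(0.0, n) + integrand(b, n))
    s += sum(integrand(i * h, n) for i in range(1, steps))
    return s * h

for n in range(6):
    assert abs(trapezoid(n) - (1 - p) * p ** n) < 1e-4
```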
Applied first order differential equations | Obviously, the law is exponential decay: from $x(\frac34)=\frac12x_o$, and hence $x(k\cdot\frac34)=\frac1{2^k}\cdot x_o$, you get the formula
$$
x(t)=2^{-\frac43t}\cdot x_o
$$
where $t$ is measured in hours.
Now you have to solve
$$
2^{-\frac43t}=0.05\iff t=-\frac34\frac{\log 0.05}{\log 2}.
$$ |
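Numerically (a two-line check):

```python
from math import log

# 5% remaining: solve 2^(-4t/3) = 0.05 for t, with t in hours
t = -0.75 * log(0.05) / log(2)
print(round(t, 3))                         # ≈ 3.241 hours
assert abs(2 ** (-4 * t / 3) - 0.05) < 1e-12
```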
Let $H \triangleleft G$ such that $[G:H]=n$. Show that $g^n \in H$ for all $g \in G$. | Hint: apply Lagrange's Theorem (the order of every element divides the order of the group) to the quotient group $G/H$ which has order $n$.
Pullback metric of inversion map | By definition of the pullback by $I$ of the euclidean metric, $(I^*\langle\cdot,\cdot\rangle)_x(v,w) = \langle\mathrm{d}I(x)v,\mathrm{d}I(x)w \rangle_{{I(x)}}$. The computation shows that $\mathrm{d}I(x): T_x\mathbb{R}^n \to T_{I(x)}\mathbb{R}^n$ is just, while canonically identifying these two tangent spaces to $\mathbb{R}^n$, equal to $$\mathrm{d}I(x)v = \frac{{v}}{\|x\|^2}-2\langle x,v\rangle \frac{x}{\|x\|^4}$$
Let $u$ and $v$ be two tangent vectors to $x$. Then
\begin{align}
(I^*g)_x(u,v) &= \left\langle \dfrac{u}{\|x\|^2}-2\frac{\langle u,x\rangle x}{\|x\|^4},\dfrac{v}{\|x\|^2}-2\frac{\langle v,x\rangle x}{\|x\|^4} \right\rangle \\
&=\frac{\langle u,v\rangle}{\|x\|^4} -2\dfrac{\langle u,x\rangle \langle x,v\rangle}{\|x\|^6} -2 \dfrac{\langle v, x \rangle\langle x ,u\rangle}{\|x\|^6} + 4 \dfrac{\langle u,x\rangle \langle v,x \rangle \|x\|^2}{\|x\|^8} \\
&= \dfrac{\langle u,v\rangle}{\|x\|^4} - 4 \dfrac{\langle x , u \rangle \langle x , v \rangle}{\|x\|^6} + 4 \dfrac{\langle x,u\rangle \langle x,v\rangle}{\|x\|^6}\\
&= \dfrac{\langle u,v\rangle}{\|x\|^4}
\end{align}
So you are done. Remark that you do not have to compute in any coordinate system or in any orthonormal basis. |
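The coordinate-free result can still be checked numerically; here is a finite-difference sketch (the point $x$ and the vectors $u,v$ are arbitrary choices of mine):

```python
# Check that the pullback of the euclidean metric by I(x) = x/|x|^2
# equals <u,v> / |x|^4, using central differences for dI.
def norm2(x):
    return sum(c * c for c in x)

def I(x):
    n2 = norm2(x)
    return [c / n2 for c in x]

def dI(x, v, h=1e-6):
    """Central-difference directional derivative of I at x along v."""
    xp = [a + h * b for a, b in zip(x, v)]
    xm = [a - h * b for a, b in zip(x, v)]
    return [(p - m) / (2 * h) for p, m in zip(I(xp), I(xm))]

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

x = [1.0, 2.0, 0.5]
u = [0.3, -1.0, 2.0]
v = [1.5, 0.2, -0.7]

pullback = dot(dI(x, u), dI(x, v))
expected = dot(u, v) / norm2(x) ** 2
assert abs(pullback - expected) < 1e-6
```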
A problem on Expected value using the survival function | The system can be described by a Markov chain with 5 states, where the state is the number of consecutive sixes accumulated so far. The transition matrix is:
$$
P = \begin{bmatrix}
1-p & p & 0 & 0 & 0 \\
1-p & 0 & p & 0 & 0 \\
1-p & 0 & 0 & p & 0 \\
1-p & 0 & 0 & 0 & p \\
0 & 0 & 0 & 0 & 1
\end{bmatrix}
$$
where for the case at hand $p=\frac{1}{6}$ is the probability to get a six at the next rolling of the die.
The classic way to solve this is to consider the expected number of rolls $k_i$ given an initial state $i$. We are interested in computing $k_0 = \mathbb{E}(X)$.
Conditioning on a first move, the following recurrence equation holds true:
$$
k_i = 1 + p k_{i+1} + (1-p) k_0 \qquad \text{for} \qquad i=0,1,2,3
$$
with boundary condition $k_4 = 0$. Solving this linear system yields:
$$
k_0 = \frac{p^3+p^2+p+1}{p^4} \quad
k_1 = \frac{p^2+p+1}{p^4} \quad
k_2 = \frac{p+1}{p^4} \quad
k_3 = \frac{1}{p^4} \quad
k_4 = 0
$$
In[24]:= Solve[{k0 == 1 + k1 p + (1 - p) k0,
k1 == 1 + k2 p + (1 - p) k0, k2 == 1 + p k3 + (1 - p) k0,
k3 == 1 + p k4 + (1 - p) k0, k4 == 0}, {k1, k2, k3, k0,
k4}] // Simplify
Out[24]= {{k1 -> (1 + p + p^2)/p^4, k2 -> (1 + p)/p^4, k3 -> 1/p^4,
k0 -> (1 + p + p^2 + p^3)/p^4, k4 -> 0}}
Substituting $p=\frac{1}{6}$ gives $k_0 = \mathbb{E}(X) = 1554$.
Added A good reference on the subject is a book by J.R. Norris, "Markov chain" (Amazon). The chapter on discrete Markov chains is available on-line for free from the author. Section 1.3 discusses finding mean hitting times $k_i$. |
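The same linear system can also be solved exactly with rational arithmetic instead of Mathematica, by writing each $k_i$ as $a + b\,k_0$ and closing the loop at $i=0$ (a sketch; the `(a, b)` encoding is my own):

```python
from fractions import Fraction

# Represent k_i as a + b*k0 and iterate the recurrence
# k_i = 1 + p*k_{i+1} + (1-p)*k0 down from k_4 = 0.
p = Fraction(1, 6)
a, b = Fraction(0), Fraction(0)           # k_4 = 0
for _ in range(4):                        # produce k_3, k_2, k_1, k_0
    a, b = 1 + p * a, p * b + (1 - p)

k0 = a / (1 - b)                          # solve k0 = a + b*k0
assert k0 == 1554
print(k0)
```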
For which primes $p$ does $x^2\equiv3\pmod{p}$ have a solution? | In this related question I proved that for every prime $p>3$, $-3$ is a quadratic residue iff $p\equiv 1\pmod{3}$ by using only the Cauchy theorem for groups. Since for every odd prime we know that $-1$ is a quadratic residue iff $p\equiv 1\pmod{4}$, and:
$$\left(\frac{3}{p}\right)=\left(\frac{-3}{p}\right)\left(\frac{-1}{p}\right),$$
it follows that $3$ is a quadratic residue $\!\!\pmod{p}$ iff $\color{blue}{p\equiv\pm 1\pmod{12}}$. |
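The criterion is easy to test numerically via Euler's criterion, which says $3$ is a quadratic residue mod an odd prime $p\nmid 3$ iff $3^{(p-1)/2}\equiv1\pmod p$ (a sketch; the bound $1000$ is arbitrary):

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Compare Euler's criterion against p ≡ ±1 (mod 12) for all primes p > 3.
for p in range(5, 1000):
    if is_prime(p):
        euler = pow(3, (p - 1) // 2, p) == 1
        assert euler == (p % 12 in (1, 11))
```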
A seemingly wrong definition of convergence of spectral sequences in Bott & Tu? | You're right. One does not want that $E_r$ as a whole becomes stationary but usually $E_r$ comes with a grading $E_r = \bigoplus_{p,q\in \mathbb Z} E^{pq}_r$ and then you want that $E^{pq}_r$ becomes stationary. $E_\infty$ is then the graded abelian group $\bigoplus_{p,q\in \mathbb Z} E^{pq}_\infty$. If this graded abelian group is the same as the associated graded group of a filtered group $H$, we say that $E_r$ converges to $H$. |
Sine not a Rational Function Spivak | Hint: Use continuity of $\sin$ and rational functions. |
Conditions for an order-embedding wrt $\mathbb{R}$ | First let me restate the proposition (as I understand it) in human language, rather than try to imitate your notation.
Proposition. Let $X$ be a totally ordered set. The following statements are equivalent:
(1) there is a set $D\subseteq X$ which is (at most) countable, and which is dense in $X,$ in the sense that $D\cap[a,b]\ne\emptyset$ whenever $a,b\in X,\ a\lt b.$
(2) $X$ is order-isomorphic to a subset of $\mathbb R.$
$\underline{\text{(1)}\implies\text{(2)}}:$ Let $D=\{d_n:n\in\mathbb N\}.$ Define $f:X\to\mathbb R$ by setting
$$f(x)=\sum_{d_n\lt x}2^{-n}+\sum_{d_n\le x}2^{-n}.$$
It is easy to see that $x,y\in X,\ x\lt y\implies f(x)\lt f(y).$
$\underline{\text{(2)}\implies\text{(1)}}:$ (Axiom of choice needed here.) Without loss of generality, we assume that $X\subseteq\mathbb R.$ For each rational number $r,$ let $x_r$ be the greatest element of $X\cap(-\infty,r)$ if such exists, and let $y_r$ be the least element of $X\cap(r,\infty)$ if such exists. For each rational interval $(r,s)$ such that $X\cap(r,s)\ne\emptyset,$ choose an element $z_{r,s}\in X\cap(r,s).$ Let $D$ be the set of elements $x_r,y_r,z_{r,s}$ so chosen. Clearly $D$ is a countable dense subset of $X.$ |
exactly $k$ distinct values in $n$ $n$-sided dices | This is an extended Comment just to help you get started.
Let $n = 6$ and $k = 1.$ That means getting the same result on
all $6$ rolls of an ordinary fair cubical die: $6(1/6)^6 = 0.0001286.$
Let $n = 6$ and $k = 6.$ That means getting all different faces
in $6$ such rolls: $6!/6^6 = 0.0154321.$
From there it gets a little more complicated. Make sure you exclude
the results you don't want as well as providing for the ones you do.
And make sure the numerator counts ordered arrangements if the
denominator does.
Maybe try $k=2$ or $k=5$ next. Do it soon to learn something
before another Answer spoils the fun.
Below are results from a million simulations in R of the experiment
for $n = 6,$ giving approximate probabilities for various values of $k.$ (You can trust the first 2 or 3 decimal places, sometimes 4.) The results
are not far off for $k=1$ and $k=6$ above. You might want to use it as a reality check as you try
different combinatorial approaches.
m = 10^6; n = 6; k = 3; u = numeric(m)
for (i in 1:m) {
faces = sample(1:n, n, rep=T)
u[i] = length(unique(faces)) }
table(u)/m
## u
## 1 2 3 4 5 6
## 0.000114 0.019808 0.230756 0.501897 0.231939 0.015486 |
Elegant way to show $\frac{\cosh(z)}{4z^3-z}$ is holomorphic? | Since it is the quotient of two holomorphic functions, it is holomorphic wherever the denominator is nonzero, i.e. on $\mathbb C\setminus\{0,\pm\frac12\}$.
Markov Chains in a Casino | 0 and 5 are absorbing states. Once entered, the process stays there.
For states 1,2,3,4, there is some probability you will immediately win enough bets in a row to get to 5 or lose enough to get to 0 (the absorbing states). Since, for whichever one of these states you start at, there is a non-zero probability of never returning, the states are transient.
Periodicity is a more difficult matter.
For instance, how can we get from state 1 back to state 1? There is only one way. You have to go from 1 to 2 to 4 to 3 back to 1. This is the only way. It is periodic with period 4.
To get from state 2 back to state 2 you can only go from 2 to 4 to 3 to 1 to 2. Period 4.
To get from state 3 back to state 3 you can only go from 3 to 1 to 2 to 4 to 3. Period 4.
Finally (you guessed it). For state 4 you can only go from 4 to 3 to 1 to 2 to 4. Period 4. |
invertibility of spherical transformation | John, I will quote wolfram mathworld here:
The spherical coordinates $(r,\theta,\phi)$ are related to the Cartesian coordinates $(x,y,z)$ by
$$r = \sqrt{x^2+y^2+z^2}\tag{1}$$
$$\theta = \tan^{-1}(y/x)\tag{2}$$
$$\phi = \cos^{-1}(z/r)\tag{3}$$
where $r \in [0,\infty)$, $\theta \in [0,2\pi)$, and $\phi \in [0,\pi]$, and the inverse tangent must be suitably defined to take the correct quadrant of $(x,y)$ into account. |
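A round-trip check of the quoted convention (assuming the standard inverse map $x=r\sin\phi\cos\theta$, $y=r\sin\phi\sin\theta$, $z=r\cos\phi$), with `atan2` supplying the quadrant-aware inverse tangent:

```python
from math import sqrt, atan2, acos, sin, cos, pi, isclose

def to_spherical(x, y, z):
    r = sqrt(x * x + y * y + z * z)
    theta = atan2(y, x) % (2 * pi)       # in [0, 2*pi)
    phi = acos(z / r)                    # in [0, pi]
    return r, theta, phi

def to_cartesian(r, theta, phi):
    return (r * sin(phi) * cos(theta),
            r * sin(phi) * sin(theta),
            r * cos(phi))

for pt in [(1.0, 2.0, 3.0), (-1.0, 0.5, -2.0), (0.0, -1.0, 0.0)]:
    back = to_cartesian(*to_spherical(*pt))
    assert all(isclose(a, b, abs_tol=1e-12) for a, b in zip(pt, back))
```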
What is the definition if a distinct cycle in a graph? | (Expanding the comment by Brian M. Scott): being distinct is not a property of a cycle, but a relation between two cycles. Two cycles are distinct if they are not the same cycle.
Usage example: "For all $n\ge 3$, the number of distinct Hamilton cycles in the complete graph $K_n$ is $(n−1)!/2$."
Related story from MathOverflow:
Q: "Are the groups $G_1$ and $G_2$ isomorphic?"
A: "$G_1$ is, but $G_2$ isn't." |
$1\,997^{2^{n}}-1$ is divisible by $2^{n+2}$ - proof | Your mistake seems to stem from mistaking $1997^{2^n}$ for $1997^{2n}$.
To answer the question, besides from using Euler's theorem, induction will also work. For $n=0$ we have
$$1997^1-1=2^2\cdot499$$
Now assume that $2^{n+2}\mid1997^{2^n}-1$, that is, $1997^{2^n}-1=2^{n+2}k$ for some integer $k$. Then we have
$$1997^{2^{n+1}}-1=\left(1997^{2^n}\right)^2-1=\left(1997^{2^n}-1\right)\left(1997^{2^n}+1\right)=\left(2^{n+2}k\right)(2^{n+2}k+2)=2^{n+3}k(2^{n+1}k+1)$$
So $2^{n+3}|1997^{2^{n+1}}-1$ |
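The statement is easy to spot-check for small $n$, since Python handles the rapidly growing integers exactly:

```python
# Verify 2^(n+2) | 1997^(2^n) - 1 for the first few n.
for n in range(8):
    assert (1997 ** (2 ** n) - 1) % 2 ** (n + 2) == 0
```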
Number of elements in quotient ring of polynomials over a finite field | (1) Because in that field, $x^2$ and $-1$ are the same thing. That means that any element has a unique representative with only first-degree and constant term (any higher degree term may be reduced). Each of those places has three possible values (being elements of $\Bbb F_3$), resulting in a total of $3\cdot 3 = 9$ possible elements. Here they are:
$$
\begin{array}{ccc}
0x + 0&0x + 1 & 0x+2\\
1x + 0&1x + 1 & 1x+2\\
2x + 0&2x + 1 & 2x+2
\end{array}
$$
(2) Because if it were reducible, then its factors would be linear. Linear polynomials over fields always have roots. The same argument works for third degree expressions, but not for degree four or higher (the prime example being $(x^2 + 1)^2$). |
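Point (1) can be spot-checked by brute force: representing each class as a pair $(a,b)$ standing for $a+bx$, every nonzero element turns out to be invertible, so the 9-element quotient really is a field (a sketch; the encoding is mine):

```python
# The 9 elements a + b*x of F_3[x]/(x^2+1): multiply using x^2 = -1
# and check every nonzero element has a multiplicative inverse.
elements = [(a, b) for a in range(3) for b in range(3)]
assert len(elements) == 9

def mul(p, q):
    a, b = p
    c, d = q
    # (a + b x)(c + d x) = ac + (ad + bc) x + bd x^2,  with x^2 = -1
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)

for p in elements:
    if p != (0, 0):
        assert any(mul(p, q) == (1, 0) for q in elements)
```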
convergence test for improper integrals | Because one of the assumptions is that $k>1$. Besides, although you can indeed write $e^{-x^2}$ as $\frac{xe^{-x^2}}x$, since $\int_a^\infty\frac{\mathrm dx}x$ diverges, you deduce nothing from that. |
Good books written by great mathematicians | Indiscrete Thoughts by Gian-Carlo Rota.
I Want to be a Mathematician: An Automathography by Halmos. |
partial derivative of a function | Function:
$$z(x,y)=\frac{6xy^2}{x^2y^3+10}$$
Find:
$$\frac{\partial z(x,y)}{\partial y}=\frac{\partial}{\partial y}\left(\frac{6xy^2}{x^2y^3+10}\right)=6x\cdot\frac{\partial}{\partial y}\left(\frac{y^2}{x^2y^3+10}\right)$$
Now, use the quotient rule:
$$\frac{\text{d}}{\text{d}y}\left(\frac{u}{v}\right)=\frac{v\cdot\frac{\text{d}u}{\text{d}y}-u\cdot\frac{\text{d}v}{\text{d}y}}{v^2}$$
Then, use the power rule:
$$\frac{\text{d}}{\text{d}y}\left(y^n\right)=ny^{n-1}$$
Then, differentiate sums term by term and factor out constants. |
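For checking one's own work, carrying those steps through (with $u=y^2$ and $v=x^2y^3+10$) gives, after simplifying the numerator:

$$\frac{\partial z}{\partial y}=6x\cdot\frac{(x^2y^3+10)\cdot 2y-y^2\cdot 3x^2y^2}{\left(x^2y^3+10\right)^2}=\frac{6xy\left(20-x^2y^3\right)}{\left(x^2y^3+10\right)^2}$$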
'Plus' Operator analog of the factorial function? | As far as I know we haven't given that expression a name since we may write it explicitly as
$$
\sum_{i=1}^ni = {n+1 \choose 2} = \frac{n(n+1)}{2}.
$$ |
Show $\lim_{j\rightarrow\infty}\sum_{k=0}^\infty a_k z_j^k=\sum_{k=0}^\infty a_k(-1)^k$ | I think that this is a classic result. First, replace $(-1)^k a_k$ by $b_k$. We then have as hypothesis that the series $\sum b_k$ is convergent. Our $z_j$ then has to converge to $1$, and if $D$ is the domain defined by $D=\{z;|z|\leq 1, |1-z|\leq 4(1-|z|)\}$, we want to show that if $f(z)=\sum b_k z^k$ and if $z_j\in D$, $z_j\to 1$ and $|z_j|<1$, then $f(z_j)\to f(1)$. This is done by showing that the convergence of the series $f$ is uniform on $D$, hence $f$ is continuous on $D$ (Note that $1\in D$).
Let $\varepsilon>0$. There exists $N(\varepsilon)=N$ such that if $S_k$ is defined by $S_0=b_n$, $S_{k}=b_n+\cdots+b_{n+k}$, we have $|S_k|<\varepsilon$ for all $k$. Then for $m\geq n$
$$T_{m,n}(z)=\sum_{j=n}^{m}b_j z^j=S_0z^n+(S_1-S_0)z^{n+1}+\cdots+(S_{m-n}-S_{m-n-1})z^m$$
We rewrite this as
$$S_0(z^{n}-z^{n+1})+S_1(z^{n+1}-z^{n+2})+\cdots+S_{m-n-1}(z^{m-1}-z^{m})+S_{m-n}z^m$$
And then for $n\geq N$, $z\in D$, $|z|<1$:
$$|T_{m,n}(z)|\leq \varepsilon(|z|^{n}|1-z|+\cdots+|z|^{m-1}|1-z|)+\varepsilon |z|^m$$
We use that $\displaystyle 1+|z|+\cdots+|z|^{m-n-1}=\frac{1-|z|^{m-n}}{1-|z|}$, $|z|^{n}\leq 1$, $|z|^m\leq 1$ and $|1-z|\leq 4(1-|z|)$, and we see that $|T_{m,n}(z)|\leq 5\varepsilon$. For $z=1$, this inequality is obviously true, hence we have shown that the power series is uniformly convergent in $D$, and we are done.
Added: Let $\displaystyle S_n(z)=\sum_{k=0}^n b_k z^k$. Then, for $m>n$, we have $S_m(z)-S_n(z)=T_{m, n-1}(z)$. Hence by the above, if $n\geq N+1$ we get $|S_m(z)-S_{n}(z)|=|T_{m,n-1}(z)|\leq 5\varepsilon$. Now let $m\to +\infty$. As $S_m(z)\to f(z)$, we get $\displaystyle |f(z)-S_{n}(z)|\leq 5\varepsilon$ for $n\geq N+1$. This shows the uniform convergence on $D$ of the $S_n(z)$. |
Why is the closed surface integral $\iint_A \phi\cdot\nabla \phi$ equal to zero? | It's not exactly clear if $\phi$ is a scalar or a vector but regardless the answer will look something like this. Integration by parts says
$$
\int_A\phi\cdot\nabla\phi dA = \int_{\partial A}(\mathrm{stuff}) ds - \int_A\nabla\phi\cdot\phi dA.
$$
Since $A$ is the boundary of a closed surface, $\partial A=\emptyset$, so the middle integral is $0$ and
$$
\int_A\phi\cdot\nabla\phi dA=-\int_A\nabla\phi\cdot\phi dA=-\int_A\phi\cdot\nabla\phi dA,
$$
So $\int_A\phi\cdot\nabla\phi dA=0$.
The exact terms inside the middle integral depend on whether $\phi$ is a vector or scalar but the integral is $0$ regardless. |
How to obtain Grothendieck’s “Long March Through Galois Theory” | After digging through tons of Internet material, I have answered my own question: currently, as of August 3, 2015, there is no resource on the Internet that contains all of the work "La longue marche à travers la théorie de Galois". Not scanned, not in TeX, not in PDF. There is only that which has been made available by Leila Schneps.
"Long March Through Galois Theory" can either be obtained from Jean Malgoire directly, or from a university library, or from the Université de Montpellier (there are reportedly some 20,000 pages of material from Grothendieck stored somewhere in a box).
An interesting resource I found is the archived web page of Malgoire: https://web.archive.org/web/20070211071321/http://www.math.univ-montp2.fr/agata/malgoire.html . It's in French, but you can use Google translate. Basically it states that he has all of the work, and that he is working on retyping it in TeX and publishing it.
Edit: As of 2020, all pages of the manuscript are scanned at Montpellier's University and available online at Archives Grothendieck. |
Solving a basic differential equation | You can separate variables,
$$
\frac{dx}{a+(b-1)x}=dt.
$$
Then integrate. I leave that to you. |
to show $X$ is disconnected | The quickest route is probably to note that projection onto the first coordinate is a continuous surjection onto the rationals. Thereby the space cannot be connected because it has a disconnected continuous image. |
Uniqueness of Riesz Representation of $C^{*}[a,b]$ | If you start with $T \in C[a,b]^*$, then you can extend $T$ to $S \in L^{\infty}[a,b]^*$ in such a way that $\|S\| = \|T\|$. Then $S$ is defined on step functions such as $\chi_{[a,b]}$ and $\chi_{[a,b)}$, etc. For any positive integer $N$, let
$$
x_0 = a, \; x_1 = a+\frac{b-a}{N}, x_2=a+2\frac{b-a}{N},\cdots, x_N=b.
$$
For any $f\in C[a,b]$ define $f_N$ to be
$$
f_N = f(x_0)\chi_{[x_0,x_1)}+f(x_1)\chi_{[x_1,x_2)}+\cdots+f(x_{N-1})\chi_{[x_{N-1},x_N]}.
$$
Then $f_N\rightarrow f$ in $L^{\infty}[a,b]$ as $N\rightarrow\infty$ because $f\in C[a,b]$. Define $\alpha(t)=S(\chi_{[a,t)})$ for $a \le t < b$ and let $\alpha(b)=S(1)$. Then $\alpha$ is of bounded variation: for any partition $a=t_0<t_1<\cdots<t_N=b$ there are unimodular constants $c_n$ such that
$$
\sum_{n=1}^{N}|\alpha(t_n)-\alpha(t_{n-1})|=\sum_{n=1}^{N-1}c_nS(\chi_{[t_{n-1},t_n)})+c_NS(\chi_{[t_{N-1},t_{N}]}) \\
= S\left(\sum_{n=1}^{N-1}c_n\chi_{[t_{n-1},t_{n})}+c_N\chi_{[t_{N-1},t_N]}\right)
$$
Therefore,
$$
\sum_{n=1}^{N}|\alpha(t_n)-\alpha(t_{n-1})| \le \|S\|=\|T\|.
$$
So $\alpha$ is of bounded variation and $V_{a}^{b}(\alpha) \le \|T\|$. Furthermore,
$$
T(f)=\lim_{N} S(f_N)=\lim_N \sum_{n=1}^{N}f(x_n)\Delta\alpha_n = \int_{a}^{b}f(t)d\alpha(t) \\
|T(f)| \le \|f\|_{C[a,b]}V_{a}^{b}(\alpha) \\
\|T\| \le V_{a}^{b}(\alpha).
$$
So $\|T\|=V_a^b(\alpha)$. Modiffying $\alpha$ to be left- or right-continuous will not increase its variation, or the representation of $T(f)=\int_{a}^{b}fd\alpha$. So it still follows that $\|T\|=V_a^b(\tilde{\alpha})$, where $\tilde{\alpha}$ is left- or right- continuous. |
Share the beer fairly in a finite number of pours | We assume $A=B/2$, $p$, and $q$ are integers, with $1\le p\le q\le B$, and $q\ge A$. We adopt the protocol:

1. fill $q$ from $B$
2. fill $p$ from $q$ if possible; else, empty $q$ into $p$ and go to 1
3. empty $p$ into $B$ and go to 2
Let's see how this goes with $(B,q,p)=(20,15,7)$. We start at $(20,0,0)$ and go $(5,15,0)$, $(5,8,7)$, $(12,8,0)$, $(12,1,7)$, $(19,1,0)$, $(19,0,1)$, $(4,15,1)$, $(4,9,7)$, $(11,9,0)$, $(11,2,7)$, $(18,2,0)$, $(18,0,2)$, $(3,15,2)$, $(3,10,7)$, $(10,10,0)$ and we're done. Well, I didn't put a "stop" anywhere in the protocol, but it's clear that you stop when/as/if you have (some permutation of) $(A,A,0)$. The question is, what starting values of $B,q,p$ guarantee that you'll get to $(A,A,0)$?
Note that if $p$ and $q$ have a common divisor $d$, then the contents of the $p$- and $q$-containers are always multiples of $d$. It follows that if $A$ is not a multiple of $d$, you can't win.
But that's the only obstacle. If every common divisor of $p$ and $q$ divides $A$, then $\gcd(p,q)$ divides $A$, and we can divide through by it (think of measuring our beer in units $d$ times as large as usual), so now we have $\gcd(p,q)=1$. The contents of the $B$-container get decreased by $q$, then (repeatedly) increased by $p$, so they take on all the values of $B-(mq-np)$ (well, all the values satisfying $0\le B-(mq-np)\le B$). E.g., in the example, the contents of the $B$-container started at 20, went down by 15 to 5, then up by 7 to 12 and to 19, down by 15 to 4, up by 7 to 11 and to 18, down by 15 to 3, up by 7 to 10. Since $\gcd(p,q)=1$, elementary number theory tells us we can find $m$ and $n$ such that $mq-np=A$, so eventually the big container has $A$, and we're done. |
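The protocol is easy to machine-check. Below is a sketch in Python (the function name and the step encoding are mine, reconstructed from the worked trace above, in which step 3 empties the $p$-container back into $B$); it reproduces the $(20,15,7)$ run exactly.

```python
def share_beer(B, q_cap, p_cap, max_pours=10_000):
    """Run the pouring protocol; return the list of states (B, q, p),
    or None if (A, A, 0) in some order is never reached."""
    A = B // 2
    state = [B, 0, 0]                 # contents of the B-, q-, p-containers
    trace = [tuple(state)]
    step = 1
    for _ in range(max_pours):
        if sorted(state) == [0, A, A]:
            return trace
        b, q, p = state
        if step == 1:                 # fill q from B
            amt = min(q_cap - q, b)
            state, step = [b - amt, q + amt, p], 2
        elif step == 2:
            if q >= p_cap - p:        # fill p from q, if possible
                state, step = [b, q - (p_cap - p), p_cap], 3
            else:                     # else empty q into p and go to 1
                state, step = [b, 0, p + q], 1
        else:                         # empty p into B and go to 2
            state, step = [b + p, q, 0], 2
        trace.append(tuple(state))
    return None

print(share_beer(20, 15, 7))
# reproduces (20,0,0), (5,15,0), (5,8,7), (12,8,0), ..., ending at (10,10,0)
```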
Does this summation index make any sense? | Just think of it as $\displaystyle\sum_{\theta=k_0+m}^{k-1}$, with the same summand, interpreted as usual.
The only way this can cause a problem is if $k_0+m$ is greater than $k-1$.
In general, you can think of $\displaystyle\sum_{\theta=a}^b$ as the summation over all $\theta$ satisfying $a\leq \theta \leq b$. If $b<a$, however, there are no such $\theta$, in which case we are left with the "empty sum," which is defined to be $0$.
Remark. The only other way I could see interpreting this would be as a sum over all pairs $(\theta,m)\in \mathbb{Z}^2$ satisfying $k_0 \leq \theta-m \leq k-1$, but this is obviously an infinite sum, which is doubtfully what the context calls for. Besides, $\theta$ and $m$ are never separated in the summand, so it stands to reason they want a single sum (derived from the single sum in the previous line) with "variable" $\theta-m$, interpreted as above. |
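The empty-sum convention is mirrored by Python's `range` (a small illustration, not from the original answer; the helper name is mine):

```python
# Sum over all theta with a <= theta <= b; when b < a this is the empty sum, 0.
def bounded_sum(f, a, b):
    return sum(f(theta) for theta in range(a, b + 1))

print(bounded_sum(lambda t: t * t, 3, 5))   # 9 + 16 + 25 = 50
print(bounded_sum(lambda t: t * t, 5, 3))   # empty sum: 0
```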
Help understanding combinatorics problem involving restrictions on committees | To have a majority means that one party has at least $9$ members. As it is impossible for two parties to each have $9$ members, you can compute the number of ways to form a committee with at least $9$ Democrats and at least two of each other party, multiply by $3$, and subtract from your $78$ ways without the restriction. |
Out of $50$ consecutive numbers, what is the probability that the absolute difference between the two numbers is $10$ or less? | When the first ticket picked is 1, there are 10 choices for the second (2 through 11). When it is 2, again 10 (the ticket 1 is not counted, since the pair $\{1,2\}$ was already included). This continues: for each first ticket from 1 up to 40 there are 10 choices. When it is 41, there are 9; when it is 42, there are 8; and so on down to 1 choice when it is 49.

Total $= 10(40) + 9 + 8 + \cdots + 1 = 445$.

So the answer is $445/1225$, which is $89/245$.
Now, why both positive and negative differences won't be included? Because in your 50C2, you're only bothering with the selection of the tickets and not the order in which they came in. If you pick {1,2} or {2,1}, it will be counted only once in your sample space but twice in your event probability which will give us the wrong answer. |
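A brute-force count over all $\binom{50}{2}$ unordered pairs (a quick sanity check, not part of the original answer) confirms these numbers:

```python
from fractions import Fraction
from itertools import combinations

pairs = list(combinations(range(1, 51), 2))            # unordered, like 50C2
favourable = sum(1 for a, b in pairs if b - a <= 10)   # b > a by construction
print(len(pairs), favourable, Fraction(favourable, len(pairs)))
# 1225 445 89/245
```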
Definition of density in metric space. | No, the definition you gave is correct. Take for example $\mathbb N$ with the discrete metric. Of course $\mathbb N$ is dense in itself. If you used $\mathcal B(x_0,\varepsilon )\setminus \{x_0\}$ instead of $\mathcal B(x_0,\varepsilon )$, then for $\varepsilon <1$ and $n\in\mathbb N$ you would get $$\mathcal B(n,\varepsilon )\setminus \{n\}=\emptyset,$$ which contradicts density. |
Fréchet derivative of a smooth $ \mathbb{R}^3 \to \mathbb{R} $ mapping lifted to Banach spaces. | Let $ \delta w $ satisfy $ |h_0 +h| = |h_0| + \delta w $, then
$$ \delta w = |h_0 +h | - |h_0| = \frac{|h|^2 + 2(h_0,h)}{|h_0+h|+|h_0|}. $$
Let $ E $ be as in the article,
$$
\begin{aligned}
E &= F(f_0+f,g_0+g,\underbrace{|h_0+h|}_{|h_0|+\delta w})-F(f_0,g_0,|h_0|) -\nabla F(f_0,g_0,|h_0|)\cdot\left(f,g,\frac{(h_0,h)}{|h_0|}\right).
\end{aligned}
$$
Using the Taylor expansion, $ x = (f_0,g_0,|h_0|) $, $ \delta x = (f,g,\delta w) $,
$$ F(x+\delta x) = F(x) + \nabla F(x)\cdot\delta x + \delta x^T H_{F(\xi)}\delta x, $$
with $ \xi = \xi(\delta x) $, we obtain
$$
\begin{aligned}
E &= \partial_wF(f_0,g_0,|h_0|)\left(\delta w - \frac{(h_0,h)}{|h_0|}\right) + (f,g,\delta w)^T H_{F(\xi)}(f,g,\delta w).
\end{aligned}
$$
We have
$$
\begin{aligned}
\delta w - \frac{(h_0,h)}{|h_0|} &= \frac{|h|^2}{|h_0+h|+|h_0|} + \frac{2(h_0,h)}{|h_0+h|+|h_0|} - \frac{(h_0,h)}{|h_0|} \\
&=\frac{|h|^2}{|h_0+h|+|h_0|} + \left(\frac{h_0}{|h_0|}\left(\frac{|h_0|-|h_0+h|}{|h_0+h|+|h_0|}\right),h\right) \\
&=\frac{|h|^2}{|h_0+h|+|h_0|} - \left(\frac{h_0}{|h_0|}\left(\frac{|h|^2+2(h_0,h)}{(|h_0+h|+|h_0|)^2}\right),h\right) \\
&=\frac{|h|^2}{|h_0+h|+|h_0|} - \frac{|h|^2(h_0,h)}{|h_0|(|h_0+h|+|h_0|)^2} - \frac{2(h_0,h)^2}{|h_0|(|h_0+h|+|h_0|)^2}. \\
\left|\delta w - \frac{(h_0,h)}{|h_0|}\right|&\leq \frac{|h|^2}{|h_0+h|+|h_0|} + \frac{|h|^3}{(|h_0+h|+|h_0|)^2} + \frac{2|h_0|\cdot|h|^2}{(|h_0+h|+|h_0|)^2} \\
&= |h|^2\left(\frac{1}{|h_0+h|+|h_0|} + \frac{|h|+2|h_0|}{(|h_0+h|+|h_0|)^2}\right) \leq C_1|h|^2.
\end{aligned}
$$
The constant comes from $ |h(x)| < m $ on $ \Omega\backslash U $.
We note that since $ D^\alpha F $ is continuous and all of $ f_0,f,g_0,g,|h_0|,|h| $ are bounded on $ \Omega\backslash U $, $ H_{F(\xi)} $ is bounded, i.e. there is a constant $ C_2 $ such that $ |H_{F(\xi)}v| \leq C_2|v| $.
Finally we consider $ |\delta w|^2 $,
$$
\begin{aligned}
|\delta w| &\leq \left|\delta w - \frac{(h_0,h)}{|h_0|}\right| + \left|\frac{(h_0,h)}{|h_0|}\right| \leq C_1|h|^2 + |h| \\
|\delta w|^2 & \leq (C_1^2m^2 + 1 + 2C_1m)|h|^2 = C_3|h|^2.
\end{aligned}
$$
With all of this in place, let $ C $ denote an arbitrary positive constant, not necessarily the same every time
$$
\begin{aligned}
|E| &= \left|\partial_wF(f_0,g_0,|h_0|)\left(\delta w - \frac{(h_0,h)}{|h_0|}\right) + (f,g,\delta w)^T H_{F(\xi)}(f,g,\delta w)\right| \\
&\leq \left|\partial_wF(f_0,g_0,|h_0|)\right|\cdot\left|\delta w - \frac{(h_0,h)}{|h_0|}\right| + \left|(f,g,\delta w)^T H_{F(\xi)}(f,g,\delta w)\right| \\
&\leq \left|\partial_wF(f_0,g_0,|h_0|)\right|C_1|h|^2 + \left|(f,g,\delta w)\right|\cdot\left|H_{F(\xi)}(f,g,\delta w)\right| \\
&\leq C|h|^2 + C_2|(f,g,\delta w)|^2 \leq C|h|^2 + C_2(|f|^2+|g|^2+|\delta w|^2) \\
&\leq C|h|^2 + C_2(|f|^2+|g|^2+C_3|h|^2) \leq C(|f|^2+|g|^2+|h|^2).
\end{aligned}
$$ |
Find the Equation of the Plane Containing the Point $(2, 3, -2)$ and the Line $\frac{x-1}{6} =\frac{y+1}{2} = z-3$ | Note that the line contains the points $Q=(1, -1, 3)$ and $R=(7, 1, 4)$. Verify that $P=(2, 3, -2)$ does not lie on the given line, so using these three non-collinear points we can determine the normal vector to this plane. We have $$(\vec{Q}-\vec{P})\times(\vec{R}-\vec{P})=(-1, -4, 5)\times(5, -2, 6)=\det\begin{pmatrix}\mathbf{i} &\mathbf{j} &\mathbf{k} \\ -1 & -4 & 5 \\ 5 & -2 & 6 \end{pmatrix}=(-14,31,22).$$ This is the desired normal vector for your plane, due to the nature of the cross product. Hence, the desired plane is $-14(x-2) + 31(y-3) + 22(z+2) = 0$, or $-14x+31y+22z=21$, of course assuming I did my arithmetic correctly. |
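The arithmetic is easy to check numerically (a NumPy sketch; the variable names are mine):

```python
import numpy as np

P, Q, R = np.array([2, 3, -2]), np.array([1, -1, 3]), np.array([7, 1, 4])
n = np.cross(Q - P, R - P)         # normal vector to the plane through P, Q, R
d = n @ P                          # plane equation: n . x = d
print(n, d)                        # [-14  31  22] 21
assert n @ Q == d and n @ R == d   # Q and R lie on the same plane
```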
Monte Carlo method as a reference | The Monte Carlo method is a kind of brute force method, but instead of applying all possible inputs (like a typical brute force approach), input data is chosen randomly, and then the results are observed and counted. This is usually much easier than proving a complex mathematical system.
That works to reach an acceptable approximate result if:

- enough random data are applied to the system
- the random generator produces fair values according to the intended probability distribution
With today's fast computers, improved random generators, and being able to run the program on multiple computers at the same time, the system is provided with so much random input that the probability that the aggregated result diverges more than an acceptable value is very small.
Some very complex systems/formulas that require human mathematical expertise (which could be error-prone) can be corroborated by the output of a Monte Carlo algorithm, or called into question if MC diverges too much from the expected results.
The Monty Hall problem solution can be easily "proven" (corroborated) thanks to a Monte Carlo approach:

1. randomly assign the car and the 2 goats to the 3 doors, and let the player pick a door at random
2. have the host open one of the two remaining doors that hides a goat
3. count how many times you would have won the car by 1) keeping the same door 2) choosing the other one

Repeating that simple algorithm a few million times gives you a good $66.67\%$ chance to get the car if you choose the other door... |
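Here is one possible implementation of that simulation (the function name, seed, and structure are mine):

```python
import random

def monty_hall(trials=100_000, rng=random.Random(0)):
    stay = switch = 0
    for _ in range(trials):
        car = rng.randrange(3)        # door hiding the car
        pick = rng.randrange(3)       # contestant's initial choice
        # host opens a door that is neither the pick nor the car
        # (if two qualify, which one he opens does not affect the counts)
        host = next(d for d in range(3) if d != pick and d != car)
        other = next(d for d in range(3) if d != pick and d != host)
        stay += (pick == car)
        switch += (other == car)
    return stay / trials, switch / trials

stay_p, switch_p = monty_hall()
print(f"stay: {stay_p:.3f}  switch: {switch_p:.3f}")   # roughly 0.333 vs 0.667
```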
Find a unit vector from the direction (3, -1, 4) to (1, 3, 5) | Given two points $p_1, p_2$, the vector $p_2-p_1$ points in the direction from $p_1$ to $p_2$.
A unit vector in the same direction of a given (nonzero) vector can be obtained by scaling down the given vector (i.e. each component) by the magnitude of the vector.
Can you solve it now? |
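For reference, the two steps of the hint sketched in NumPy (only peek if stuck, since the answer above deliberately leaves the computation to you):

```python
import numpy as np

p1, p2 = np.array([3.0, -1.0, 4.0]), np.array([1.0, 3.0, 5.0])
v = p2 - p1                        # direction from p1 to p2
u = v / np.linalg.norm(v)          # scale down by the magnitude
assert np.isclose(np.linalg.norm(u), 1.0)   # u is a unit vector
```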
Help show the following isomorphism cannot exist. | Let $u=x+\langle f(x)\rangle$ and $v=x+\langle g(x)\rangle$, so we know that
$$
u^3+u^2+u+2=0,\qquad
v^3+2v+2=0
$$
If an isomorphism $\phi$ such that $\phi(u)=v$ exists, then we also know that
$$
v^3+v^2+v+2=0
$$
so
$$
2v+2=v^2+v+2
$$
and
$$
v^2-v=0
$$
In other words
$$
x^2-x\in\langle g(x)\rangle
$$
Since $\mathbb{Z}/3\mathbb{Z}$ is a domain, every non-zero multiple of $g(x)$ has degree at least $3$, so the degree-$2$ polynomial $x^2-x$ cannot lie in $\langle g(x)\rangle$: this is impossible.
Note that we don't really use the complete information: no ring homomorphism with the stated property exists. |
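The key reduction can be machine-checked: dividing $x^3+x^2+x+2$ by $g(x)=x^3+2x+2$ over $\mathbb{Z}/3\mathbb{Z}$ leaves exactly $x^2-x$. A small sketch (the helper name and coefficient convention are mine; lists hold coefficients in increasing degree):

```python
def rem_gf3(f, g):
    """Remainder of f modulo g over Z/3Z; coefficient lists, increasing degree."""
    f = [c % 3 for c in f]
    while len(f) >= len(g) and any(f):
        if f[-1] == 0:               # strip a zero leading coefficient
            f.pop()
            continue
        shift = len(f) - len(g)
        c = f[-1] * pow(g[-1], -1, 3) % 3
        for i, gc in enumerate(g):   # cancel the leading term
            f[i + shift] = (f[i + shift] - c * gc) % 3
    while f and f[-1] == 0:
        f.pop()
    return f

f = [2, 1, 1, 1]       # x^3 + x^2 + x + 2
g = [2, 2, 0, 1]       # x^3 + 2x + 2
print(rem_gf3(f, g))   # [0, 2, 1], i.e. x^2 + 2x = x^2 - x over Z/3Z: nonzero
```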
Joint moment generating function problem | Naturally, you can take derivative $\partial^2_{a,b} M_{X,Y}(0,0)$ to get the answer. Alternatively, you can interpret the joint distribution of $X$ and $Y$ as a mixture. Namely, a random variable $\Lambda$ takes value $1$ with probability $4/5$ and value $2$ with probability $1/5$. Given $\Lambda = \lambda$, $X$ and $Y$ are independent and exponentially distributed with parameter $\lambda$. Then
$$
E[XY] = E[E[XY|\Lambda]] = E[\Lambda^{-2}] = 1\cdot\frac{4}{5} + \frac{1}{4}\cdot\frac{1}{5} = 0.85,
$$
confirming your finding. |
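The mixture interpretation is also easy to check by simulation (a sketch; the seed, sample size, and names are mine):

```python
import random

rng = random.Random(1)

def sample_xy():
    lam = 1 if rng.random() < 0.8 else 2       # Lambda = 1 w.p. 4/5, else 2
    # given Lambda = lam, X and Y are independent Exp(lam)
    return rng.expovariate(lam) * rng.expovariate(lam)

n = 200_000
estimate = sum(sample_xy() for _ in range(n)) / n
print(round(estimate, 2))   # close to the exact value 0.85
```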
Sylow's Theorem Application. Prove $G$ is Abelian. | Do you know that if $p$ is a prime number, then every group of order $p^2$ is abelian? Use that and finish the proof with this one weird tip:
If $M \trianglelefteq G$ and $N \trianglelefteq G$ and $M \cap N = 1$, then $mn = nm$ for all $m \in M$ and $n \in N$.
(you should prove this if you haven't already) |
Suppose $AB=0$ for some non-zero matrix $B$. Can $A$ be invertible? | If $A$ were invertible, it would be a square matrix, and each column $y$ of $B$ would satisfy $Ay=0$, which implies $y=A^{-1}(Ay)=0$. So $B$ would have to be the zero matrix, contradicting $B\neq 0$; hence $A$ cannot be invertible. |
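A quick numerical illustration (the example matrix is mine): for an invertible $A$, every column $y$ solving $Ay=0$ comes out zero.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])                 # invertible: det = 1
B = np.linalg.solve(A, np.zeros((2, 2)))   # each column y solves A y = 0
assert np.allclose(B, 0)                   # so B must be the zero matrix
```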
Notation for a function from all members of a tuple minus one. | If you mean an ordered tuple, then a common notation for
$$(a_1,a_2,\ldots,a_{i-1},a_{i+1},\ldots,a_n)$$
is
$$(a_1,a_2,\ldots,\widehat{a_i},\ldots,a_n).$$
But it is usually specified in text what it means the first time it is used. |
Is $G/G'$ an abelian group? | Instead of proving that $XY=YX$, prove that $XYX^{-1}Y^{-1}=1_{G/G'}$, to do so notice that:
$$XYX^{-1}Y^{-1}=[x,y]G'.$$ |
Smooth Vector Bundle is a Submersion | We need to show locally $U$ of $E$ submerged to $V$ of $B$. But note $U\cong V\times F$, this should be automatic. |
Use generating functions to solve the non-homogenous recurrence relation | You made a mistake somewhere in the generating function derivation (hard to tell where, since you did not include this part); I get
\begin{align}
A(x)&=2x+5x^2+\sum_{n \geq 3}a_{n}x^n\\
&=2x+5x^2+5\sum_{n \geq 3}a_{n-1}x^n-7\sum_{n \geq 3}a_{n-2}x^n+3\sum_{n \geq 3}a_{n-3}x^n+\sum_{n \geq 3}2^{n-3}x^n\\
&=2x+5x^2+5x\sum_{n \geq 2}a_{n}x^n-7x^2\sum_{n \geq 1}a_{n}x^n+3x^3\sum_{n \geq 0}a_{n}x^n+x^3\sum_{n \geq 0}2^{n}x^n\\
&=2x+5x^2+5x(A(x)-2x)-7x^2(A(x)-0)+3x^3A(x)+x^3\cdot \frac{1}{1-2x}
\end{align}
which solves to
\begin{align}
A(x)&=\frac{x(11x^2-9x+2)}{(1-2x)(1-3x)(x-1)^2}\\
&=\frac{2}{(x-1)^2}-\frac{3}{2}\frac{1}{1-x}-\frac{1}{1-2x}+\frac{1}{2}\frac{1}{1-3x}.
\end{align}
Check your solution, hopefully you can finish it from here. |
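The closed form can be sanity-checked with SymPy by expanding $A(x)$ and testing the recurrence coefficient-by-coefficient (a sketch; I read off $a_0=0$, $a_1=2$, $a_2=5$ from the derivation above):

```python
from sympy import symbols, series

x = symbols('x')
A = x*(11*x**2 - 9*x + 2) / ((1 - 2*x)*(1 - 3*x)*(x - 1)**2)
poly = series(A, x, 0, 9).removeO()        # terms up to x^8
a = [poly.coeff(x, n) for n in range(9)]
print(a[:6])                               # [0, 2, 5, 12, 33, 100]
# the recurrence a_n = 5 a_{n-1} - 7 a_{n-2} + 3 a_{n-3} + 2^{n-3} holds:
assert all(a[n] == 5*a[n-1] - 7*a[n-2] + 3*a[n-3] + 2**(n-3) for n in range(3, 9))
```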
Are most vector fields conservative? | Polynomials/trigonometric functions/etc. can't be conservative, only vector fields can. Vector fields involving nice functions may or may not be conservative. It definitely isn't the case that these vector fields are usually conservative (as a side note, I don't know if "most vector fields are not conservative" is true in a mathematically precise sense). If we have $v = f(x,y)dx+ g(x,y)dy$, then at the very least, in order for $v$ to be conservative, we must have $f_y=g_x$, which usually isn't the case. For example, I just randomly came up with the example $v = (2x-2y)dx+(4x^2y^2)dy$, which is not conservative.
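For the random example just given, the cross-partials test fails, confirming non-conservativity (a SymPy sketch):

```python
import sympy as sp

x, y = sp.symbols('x y')
f, g = 2*x - 2*y, 4*x**2*y**2         # the field v = f dx + g dy from above
print(sp.diff(f, y), sp.diff(g, x))   # -2 and 8*x*y**2: unequal, so v is not conservative
```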
As for your second question. A vector field being conservative always means that its curl is zero. However, the converse does not always hold. For example, consider
$$v(x,y,z) = \frac{-y}{x^2+y^2}dx+\frac{x}{x^2+y^2}dy$$
You can verify that its curl is zero, but the integral around the unit circle on the $x-y$ plane will yield $\pm2\pi$. On the bright side, having zero curl does imply a vector field is conservative when the domain of the vector field is simply connected (meaning that if you take any loop on the domain, then you can shrink it to a point without leaving the domain). In the above example, the domain of $v$ is $\mathbb{R}^3$ minus the $z$ axis. If you have a loop around the $z$ axis, it cannot be shrunk to a point, and so the domain is not simply connected.
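One way to see the $\pm2\pi$ concretely is to parametrize the unit circle and integrate (a SymPy sketch):

```python
import sympy as sp

t = sp.symbols('t')
x, y = sp.cos(t), sp.sin(t)           # unit circle in the x-y plane
# pull back v = (-y dx + x dy)/(x^2 + y^2) along the parametrization
integrand = sp.simplify((-y*sp.diff(x, t) + x*sp.diff(y, t)) / (x**2 + y**2))
print(sp.integrate(integrand, (t, 0, 2*sp.pi)))   # 2*pi, despite zero curl
```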
If I recall correctly, the simple connectedness condition can actually be weakened to requiring that the abelianization of the fundamental group of the domain not have elements of infinite order. |
Shifted laplace transform derivative? | The Laplace transforms applies to functions that are assumed to be zero on $t < 0$. The shifted function $f(t-t_0)$ would be zero on $t < t_0$. The time-shifting property says
$$ \int_0^\infty f^{(n)}(t-t_0)e^{-st}dt = \int_{-t_0}^\infty f^{(n)}(t)e^{-s(t+t_0)}dt = e^{-t_0s} \int_0^\infty f^{(n)}(t)e^{-st}dt $$
where the second equality uses that $f^{(n)}(t)=0$ for $t<0$, and the last integral is just the Laplace transform of $f^{(n)}(t)$ |
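The identity is easy to verify symbolically for a sample causal function, say $f(t)=e^{-t}$ with $n=0$, $t_0=2$, $s=3/2$ (a SymPy sketch; the sample values are mine — the shifted integrand vanishes on $t<t_0$, which is why the first integral starts at $t_0$):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s, t0 = sp.Rational(3, 2), 2
# left side: integral of f(t - t0) e^{-st}, nonzero only for t >= t0
lhs = sp.integrate(sp.exp(-(t - t0)) * sp.exp(-s*t), (t, t0, sp.oo))
# right side: e^{-t0 s} times the Laplace transform of f
rhs = sp.exp(-t0*s) * sp.integrate(sp.exp(-t) * sp.exp(-s*t), (t, 0, sp.oo))
assert sp.simplify(lhs - rhs) == 0    # both sides equal (2/5) * exp(-3)
```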