title | upvoted_answer |
---|---|
Simple divisibility problem in elementary number theory | If $n+2009$ divides $n^2+2009$, it also divides $(n^2+2009)-(n+2009)=n^2-n=n(n-1)$. Similarly, $n+2010$ divides $n(n-1)$. If $n \gt 1$, one of them would have to divide $n$ and the other $n-1$, but that is not possible, so $n=1$. |
Chinese Remainder Theorem with coprime congruences | Suppose to the contrary that $(c,mn)\gt 1$. Then there is a prime $p$ that divides $c$ and $mn$.
Since $p$ divides $mn$, it divides at least one of $m$ or $n$, say $m$.
Since $c\equiv a\pmod{m}$, $c=a+qm$ for some $q$. If $p$ divides $m$ and $c$, then $p$ divides $a$.
Thus $p$ divides $a$ and $m$, contradicting the fact that $(a,m)=1$. |
Proof that Rel is a Category | What you've quoted is a definition of all the data needed to have a category: objects, morphisms, identity morphisms, and composition operation. To verify you have a category you then just have to check that this data satisfies the axioms for a category: that the identity morphisms are actually identities for the composition operation, and that composition is associative. |
Finding $\frac1{1+1^2+1^4}+\frac2{1+2^2+2^4}+\cdots+\frac n{1+n^2+n^4}$. | Let $a_k = k/(1+k^2+k^4)$.
Then:
$$a_k =
\underbrace{\frac{k(1-k)}{2(1-k+k^2)}}_{b_k}
+
\underbrace{\frac{k(1+k)}{2(1+k+k^2)}}_{c_k}.$$
Now observe that
$$b_{k+1} = \frac{(k+1)(-k)}{2(-k+(k+1)^2)}
=
\frac{-k(1+k)}{2(1+k+k^2)} = -c_k.$$
Can you conclude? |
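A quick sanity check of the telescoping above. Since $b_1=0$ and $b_{k+1}=-c_k$, the partial sum should collapse to $c_n$ (that closed form is my own conclusion from the hint, so worth double-checking):

```python
from fractions import Fraction

def a(k):
    # a_k = k / (1 + k^2 + k^4)
    return Fraction(k, 1 + k**2 + k**4)

def c(k):
    # c_k = k(1 + k) / (2(1 + k + k^2))
    return Fraction(k * (1 + k), 2 * (1 + k + k**2))

# b_1 = 0 and b_{k+1} = -c_k, so the partial sum telescopes to c_n
n = 20
partial = sum(a(k) for k in range(1, n + 1))
print(partial == c(n))  # True
```

Using `Fraction` keeps the arithmetic exact, so the check is an equality rather than a floating-point approximation.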
Purely relational structures in 0-1 law | We can consider asymptotic properties even when we have function symbols. However in the general case where we consider all such finite structures there are no $0-1$ laws.
To be specific: In the case where we have unary function symbols we have only a convergence law, i.e. not 0-1 but still convergence in the probabilities, while in the case of at least one binary function symbol we do not even have convergence. See this article for reference:
K.J. Compton, C.W. Henson, S. Shelah, Nonconvergence, Undecidability, and
intractability in asymptotic problems, Annals of Pure and Applied Logic 36 (1987) 207-224.
The general statements, especially about binary and larger symbols, are quite involved; however, we can easily see that we do not have a $0-1$ law in the case of a single unary function symbol by looking at the following nice example:
Consider the formula $\varphi$ which stands for $\exists x (f(x)= x)$. What is the asymptotic probability of $\varphi$? If we assume that we have a finite structure $M$ with universe $\{1,\ldots, n\}$, then the probability that $f(i)\neq i$ is $1-\frac{1}{n}$, independently for each $i$. Thus the probability that $\varphi$ is false in $M$ is $(1-\frac{1}{n})^n$. Hence the asymptotic probability of $\varphi$ being false is
$$\lim_{n\to\infty} \left(1-\frac{1}{n}\right)^n = \frac{1}{e}$$
Thus $\varphi$ has asymptotic probability $1-\frac{1}{e}$, and thus we clearly do not have a $0-1$ law. |
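The finite-$n$ probability can be checked exactly by brute force: the number of functions on an $n$-element set with at least one fixed point is $n^n-(n-1)^n$, which matches $1-(1-\frac1n)^n$:

```python
from itertools import product

n = 6
total = n ** n
# count functions f: {0,...,n-1} -> {0,...,n-1} having at least one fixed point
with_fixed = sum(any(f[i] == i for i in range(n))
                 for f in product(range(n), repeat=n))
print(with_fixed == total - (n - 1) ** n)  # True: P(fixed point) = 1 - (1 - 1/n)^n
```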
function has a solution : Derivative at a point has rank $n$ | Since $Df(a)$ has rank $n$ there are indices $i_j$ ($j = 1,...,n$) such that
the corresponding columns of $Df(a)$ are linearly independent. Let $I$ be all the
other indices.
Let
$L:\mathbb{R}^n \to \mathbb{R}^{n+k}$ be the
map defined by $L(e_j) = e_{i_j}$, and
define $\phi(x,c) = f(L(x)+\sum_{l\in I} a_l e_l) -c$.
Let $x_0 = (a_{i_1},...,a_{i_n})$, and note
that $\phi(x_0, 0) = 0$.
Also, ${\partial \phi(x_0,0) \over \partial x}$ is invertible.
The implicit function theorem tells us that there is a $C^1$ function
$\xi$ defined on a neighbourhood of $0$ such that $\xi(0) = x_0$ and
$\phi(\xi(c),c) = 0$ for $c$ in this neighbourhood.
Expanding shows that $f(L(\xi(c))+\sum_{l\in I} a_l e_l) = c$ as required. |
Existence of certain set | Hint for (ii): Enumerate the rationals as $\{r_n\}$ and put little intervals of really quickly decreasing length around each rational. This gives density and openness.
For (i), try modifying the construction of the Cantor set to remove less at each step.
Search term for (i):
Fat Cantor set |
Logic behind the combo of cards in a hand that contain only clubs | You said that you wanted to find
all combos of a 5-card hand that contain only clubs
There are $13$ clubs to choose from, so this should be simply $$\binom{13}{5}$$
But then you calculated the number of hands that contain at least one club. Which do you want to find?
For example, a hand with 4 clubs and 1 spade would be counted in the second calculation, but not in the first. |
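For concreteness, the two counts compared in this answer:

```python
from math import comb

only_clubs = comb(13, 5)                  # hands where all five cards are clubs
at_least_one = comb(52, 5) - comb(39, 5)  # complement of "no clubs at all"
print(only_clubs)    # 1287
print(at_least_one)  # 2023203
```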
Example of continuous transient Markov chain in detailed balance? | According to the comments you are asking for an example of a transient Markov chain with a reversible distribution. In particular, this distribution ought to be stationary and the Markov chain should be positive recurrent, hence it could not be transient. |
Proof verification: Prove that a tree with n vertices has n-1 edges | Consider the tree with n vertices.
To Prove: The number of edges will be n-1.
Assume P(n): for a tree with n vertices, the number of edges is n - 1,
where n is a natural number.
P(1): For one node, there will be zero edges, since there is no other node to connect with.
P(2): For two nodes, Number of edges = n-1 = 2-1 = 1, since one edge is sufficient to connect two nodes in a tree.
...
Assume P(n) is true, i.e. for n number of vertices in a tree, number of edges = n-1.
Now, For P(n+1),
the number of edges will be (n-1) + number of edges required to add (n+1)th node.
Every vertex that is added to the tree contributes one edge to the tree.
Thus, the number of edges required to add (n+1)th node = 1.
Thus the total number of edges will be (n - 1) + 1 = n = (n + 1) - 1.
Thus, P(n+1) is true.
So, Using Principle of Mathematical Induction, it is proved that for given n vertices, number of edges = n - 1. |
Showing $(1 + z\omega)(1 + \overline{z\omega}) \leq (1 + |z|^2)(1 + |\omega|^2)$. | We have
\begin{align}
(1 + z\omega)(1 + \overline{z\omega}) &= 1 + z\omega + \overline{z\omega} + z\omega\overline{z\omega}\\
& = 1 + 2\operatorname{Re}(z\omega) + |z\omega|^2\\
& \le 1 + 2|z\omega| + |z\omega|^2\\
&= 1 + 2|z||\omega| + |z|^2 |\omega|^2\\
&\le 1 + (|z|^2 + |\omega|^2) + |z|^2 |\omega|^2\\
&= (1 + |z|^2)(1 + |\omega|^2).
\end{align} |
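A random spot check of the inequality; note the left side equals $|1+z\omega|^2$, so it is real:

```python
import random

random.seed(0)
for _ in range(1000):
    z = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    w = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    lhs = ((1 + z * w) * (1 + (z * w).conjugate())).real  # = |1 + zw|^2
    rhs = (1 + abs(z) ** 2) * (1 + abs(w) ** 2)
    assert lhs <= rhs + 1e-9
print("inequality holds on all samples")
```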
Define Matlab function depending from another function | Your code will only work if phi(x,t) returns a scalar. If it returns a non-scalar array, you'll get an error. And presumably if x and/or t is non-scalar then phi(x,t) will have the same dimensions? With that assumption, here is one way you can adapt your function:
function g = f(x,t)
pxt = phi(x,t);          % evaluate phi once and reuse the result
g = zeros(size(pxt));    % preallocate the output to phi's shape
i0 = (pxt < 0);          % logical mask of entries where phi < 0
g(i0) = 6*pxt(i0)+1;     % branch for phi < 0
g(~i0) = 2*pxt(~i0);     % branch for phi >= 0 |
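For comparison, the same vectorized masking pattern in NumPy. The `phi` below is only a stand-in, since the actual `phi` isn't shown in the question, so treat this as a sketch:

```python
import numpy as np

def phi(x, t):
    # stand-in for the actual phi(x, t); assumed to return an array
    return np.sin(x) * t

def f(x, t):
    pxt = phi(x, t)
    g = np.empty_like(pxt)
    i0 = pxt < 0              # boolean mask, like MATLAB's logical indexing
    g[i0] = 6 * pxt[i0] + 1   # branch for phi < 0
    g[~i0] = 2 * pxt[~i0]     # branch for phi >= 0
    return g

print(f(np.linspace(-1.0, 1.0, 5), 1.0))
```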
Convergence of $\sum_{n=1}^{\infty} (n^{n^\alpha}-1)$ | Since $\mathrm e^x\ge1+x$ for all $x$, we have
$$
n^{n^\alpha}-1=\mathrm e^{n^\alpha\log n}-1\ge n^\alpha\log n\;.
$$
Since the terms are non-negative, the series can only converge if the series $\sum_nn^\alpha\log n$ converges. This is the case when $\alpha\lt-1$.
Conversely, let $\alpha\lt-1$. Then
$$
n^{n^\alpha}-1=\mathrm e^{n^\alpha\log n}-1=O\left(n^\alpha\log n\right)\quad\text{for $n\to\infty$}\;,
$$
and the series converges. |
Superharmonic functions, Barriers - Complex Analysis (Conway) | I'm Dongryul, preparing my presentation on the Dirichlet problem for a Complex Function Theory course, also based on Conway's book.
I was concerned with the same question as you, and I think I can answer it as follows.
To show that $\hat{\psi}$ is superharmonic, it suffices to prove that $\hat{\psi} - u$ satisfies the minimum principle for any harmonic function $u$ on $G_1 \subseteq G$.
Denote $G^1 = \{z \in G : |z - a| > r\}$ and $G^2 = \{ z \in G : |z-a| < r\}$, and suppose to the contrary that $\hat{\psi} - u$ is non-constant and attains minimum at $z_0 \in G_1$.
If $z_0 \in G^1$, then $$1 - u(z_0) \le \hat{\psi}(z) - u(z) \le 1 - u(z)$$ so $u$ is constant.
If $z_0 \in G^2$, then $\hat{\psi} - u$ is constant on a connected component of $G_1 \cap G^2$ containing $z_0$ since it is superharmonic there. Since $\hat{\psi} - u$ is non-constant, $G^1 \cap G_1 \neq \emptyset$ or there is another connected component, but connectedness of $G_1$ implies $G^1 \cap G_1 \neq \emptyset$ even in this case. Then by continuity we may assume that $|z_0 - a| = r$. Now $$1 - u(z_0) \le \hat{\psi}(z) - u(z) \le 1 - u(z)$$ so $u$ is constant.
Finally if $|z_0 - a| = r$, $$\hat{\psi}(z) - u(z) \ge \hat{\psi}(z_0) - u(z_0) = 1 - u(z_0)$$ and thus $$u(z_0) \ge 1 - \hat{\psi}(z) + u(z) \ge u(z).$$ Since $u$ is harmonic, it means that $u$ is constant.
In either case, $u$ is constant, and the minimum of $\hat{\psi} - u$ is $1 - u(z_0)$. However, $\hat{\psi} - u$ is assumed to be non-constant, so $\hat{\psi}(w) < 1$ for some $w \in G_1$, which contradicts minimality.
Therefore, we conclude that $\hat{\psi}$ is superharmonic.
I think that I'm late, but I hope that your presentation was successful. |
Solving differential equations with Fourier Transformation | You probably know how a function and its Fourier transform are related. Choosing a certain convention, we can write
$$f(x) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{ikx} \hat{f}(k)\,\mathrm{d}k = \mathfrak{F}^{-1}[\hat{f}].$$
The RHS only depends on $x$ through the exponential factor; in particular,
$$\frac{\mathrm{d}f}{\mathrm{d}x} = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{ikx} (ik) \hat{f}(k)\,\mathrm{d}k = \mathfrak{F}^{-1}[ik\hat{f}].$$
Similarly, for the second derivative, we find
$$\frac{\mathrm{d}^2f}{\mathrm{d}x^2} = \mathfrak{F}^{-1}[-k^2\hat{f}].$$
You probably also know that
$$\delta(x) = \mathfrak{F}[\frac{1}{\sqrt{2\pi}}].$$
Now apply an FT to your entire differential equation, which yields:
$$\hat{e}(k) + \frac{1}{2}l^2 k^2 \hat{e}(k) = \frac{1}{\sqrt{2\pi}},$$
which directly implies that
$$\hat{e}(k) = \frac{1}{\sqrt{2\pi} (1 + \frac{1}{2}l^2 k^2)}.$$
Applying the inverse FT yields your answer. However, it's non-trivial. You'll need to calculate the integral
$$\int_{\mathbb{R}} \mathrm{d}k \frac{e^{ikx}}{{1 + \frac{1}{2}l^2 k^2}}.$$
The way most people would solve this integral is using complex analysis: you'd interpret $k$ as a complex variable, identify the poles of the integrand (at $k = \pm i \sqrt{2}/l$) and use Cauchy's theorem in combination with the residue theorem in order to find its value. If you haven't studied complex analysis, it's wisest to look up the Fourier transform of the above function in a textbook on Fourier transforms. It's a Fourier transform that comes up often, so you won't have trouble finding it (here, for example). |
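For what it's worth, carrying out that residue computation (my own algebra, so worth double-checking) gives $e(x)=\frac{1}{\sqrt{2}\,l}e^{-\sqrt{2}|x|/l}$ with the $\frac{1}{2\pi}$ normalization implied by the conventions above. A brute-force quadrature check:

```python
import numpy as np

l, x = 1.0, 1.0
k = np.linspace(-2000.0, 2000.0, 400001)
# the integrand is even in k, so only the cos(kx) part contributes
y = np.cos(k * x) / (1 + 0.5 * l**2 * k**2)
dk = k[1] - k[0]
numeric = np.sum((y[:-1] + y[1:]) / 2) * dk / (2 * np.pi)  # trapezoid rule
closed = np.exp(-np.sqrt(2) * abs(x) / l) / (np.sqrt(2) * l)
print(numeric, closed)  # agree to roughly 3 decimals (tail truncated at |k| = 2000)
```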
Curve equation - help in understanding | In order to find the midpoint you have to average those two expressions for $x$ and for $y$:
$$x_M=\frac{\frac{m}{2}+\sqrt{\frac{m^2}{4}+2}+\frac{m}{2}-\sqrt{\frac{m^2}{4}+2}}{2}=\frac{m}{2}$$
$$y_M=\frac{m(\frac{m}{2}+\sqrt{\frac{m^2}{4}+2})+m(\frac{m}{2}-\sqrt{\frac{m^2}{4}+2})}{2}=\frac{m^2}{2}$$
Now we may write:
$y_M=2\frac{m^2}{4}=2(\frac{m}{2})^2=2x_M^2$, so $y=2x^2$ is the equation of the curve that contains all the midpoints. |
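A numeric check that the midpoint always lands on $y = 2x^2$:

```python
import math

def midpoint(m):
    # the two x-values are m/2 ± sqrt(m^2/4 + 2); both points lie on y = m x
    r = math.sqrt(m * m / 4 + 2)
    x1, x2 = m / 2 + r, m / 2 - r
    return (x1 + x2) / 2, (m * x1 + m * x2) / 2

for m in (-3.0, 0.5, 2.0):
    xm, ym = midpoint(m)
    assert abs(ym - 2 * xm ** 2) < 1e-12
print("midpoints lie on y = 2x^2")
```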
Baffled with $\lim\limits_{x\to 0}{e^x-e^{\sin x} \over x-\sin x}$ | Using the standard limit $\frac{e^t-1}{t}\to 1$ as $t\to 0$, applied with $t=x-\sin x\to 0$, together with $e^{\sin x}\to 1$, we have
$${e^x-e^{\sin x} \over x-\sin x}=e^{\sin x}{e^{x-\sin x}-1 \over x-\sin x}\to 1\cdot 1=1$$ |
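Numerically, the ratio does approach $1$ as $x \to 0$:

```python
import math

def q(x):
    # (e^x - e^{sin x}) / (x - sin x)
    return (math.exp(x) - math.exp(math.sin(x))) / (x - math.sin(x))

for x in (0.5, 0.1, 0.01):
    print(x, q(x))   # tends to 1 as x shrinks
```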
Brauer group of a field of rational numbers | $\DeclareMathOperator{\br}{Br}$
This can be done using the Brauer-Hasse-Noether theorem. One statement of this theorem is that for $k$ a global field, there is a canonical exact sequence
$$
0 \to \br(k) \to \bigoplus_v \br(k_v) \to \mathbb Q/\mathbb Z \to 0 .
$$
where $v$ ranges over all places (finite and infinite) of $k$. It is known (see for example Serre's Local Fields, chapter 13) that $\br(k_v)\simeq \mathbb Q/\mathbb Z$. So for $k=\mathbb Q$, our sequence is
$$
0 \to \br(\mathbb Q) \to \mathbb Z/2\times \bigoplus_p \mathbb Q/\mathbb Z \to \mathbb Q/\mathbb Z\to 0.
$$
So we have
$$
\br(\mathbb Q) = \left\{(a,x):a\in \{0,1/2\}\text{ , }x\in \bigoplus_p \mathbb Q/\mathbb Z\text{ and }a+\sum x_p=0\right\}.
$$
We have similar descriptions of $\br(k)$ for any global field $k$. |
Is "Functional Analysis" by "Yosida" a good book for self study? | Yosida's book is excellent, but certainly not easy reading. It was absolutely cutting edge when it was first published and the style is demanding. I would only recommend it as second or third reading in functional analysis after having achieved a pretty solid background. In addition, some people complain about difficulties understanding the sometimes slightly outdated terminology.
On the positive side, Yosida contains many results and examples that are otherwise hard to find. This is one of the reasons that it is still a widely used reference book which has stood the test of time. Familiarity with it can't hurt...
Since your previous questions indicate that you are relatively new to functional analysis, I would recommend to read something easier going and maybe more modern. You might enjoy Lax's book or Stein and Shakarchi or Reed and Simon. See also
Good book for self study of functional analysis
An introductory textbook on functional analysis and operator theory
for further recommendations. |
The misuse of Dirac's $\delta$ | What the mathematicians are saying is not that the Dirac delta function fails to be continuous or integrable (claims that would first require the object under discussion to be an $\mathbf R\to\mathbf R$ function), but that it is not an $\mathbf R\to\mathbf R$ function at all. The Dirac delta is nevertheless rigorously defined, only not as an $\mathbf R\to\mathbf R$ function but as a linear functional, that is, a linear map from a function space into the set of real (or complex) numbers, called a distribution or generalized function. |
Evaluate $ \binom{n}{0}+\binom{n}{2}+\binom{n}{4}+\cdots+\binom{n}{2k}+\cdots$ | Binomial expansion allows one to find both $\binom n0+\binom n1+\binom n2+\dots=(1+1)^n$ and $(\binom n0-\binom n1+\binom n2+\dots)=(1-1)^n$. Combining these two yields desired result: $\binom n0+\binom n2+\binom n4+\dots=\frac12\bigl((1+1)^n+(1-1)^n\bigr)=2^{n-1}$. |
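A brute-force check of the identity for small $n$ (note it requires $n \geq 1$; for $n = 0$ the sum is $1$, not $2^{-1}$):

```python
from math import comb

for n in range(1, 13):
    even_sum = sum(comb(n, k) for k in range(0, n + 1, 2))
    assert even_sum == 2 ** (n - 1)
print("verified for n = 1..12")
```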
Span of a set of matrices | Yes, I would accept that argument. No errors that I see. |
How to use Gershgorin disc theorem to determine location of eigenvalues of $M = \begin{bmatrix}I_k&A\\A^T&-I_l\end{bmatrix}$? | The essential case is this one
$\textbf{Proposition.}$ Assume that $k<l$ and that $A$ has full rank $k$. Let $(\sigma_i)_{i\leq k}$ be the $k$ positive singular values of $A$. Then
$\mathrm{spectrum}(M)=\{\pm\sqrt{1+\sigma_i^2}\;;\;i=1,\dots,k\}\cup\{-1\}$, where $-1$ occurs with multiplicity $l-k$.
$\textbf{Proof.}$ Note that $AA^T$ is invertible and $\mathrm{spectrum}(AA^T)=\{\sigma_i^2\}_{i\leq k}$.
If $\lambda\not= -1$, then, by a Schur complement computation,
$\det(M-\lambda I_{k+l})=\det\left((1-\lambda)I_k-(-1-\lambda)^{-1}AA^T\right)\det\left((-1-\lambda)I_l\right),$
which vanishes if and only if $\det((1-\lambda^2)I_k+AA^T)=0$.
Thus $\lambda$ is of the form $\pm\sqrt{1+\sigma_i^2}\notin[-1,1]$. These are the $2k$ eigenvalues of $M$ (with multiplicity) which are $\not= -1$. Clearly, the $l-k$ remaining eigenvalues are equal to $-1$. In particular, we can verify that $tr(M)=-(l-k)$. $\square$ |
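The proposition is easy to test numerically with a random full-rank $A$ (here $k=3$, $l=5$):

```python
import numpy as np

rng = np.random.default_rng(0)
k, l = 3, 5
A = rng.standard_normal((k, l))   # full rank k almost surely
M = np.block([[np.eye(k), A], [A.T, -np.eye(l)]])

sigma = np.linalg.svd(A, compute_uv=False)   # the k positive singular values
predicted = np.concatenate([np.sqrt(1 + sigma ** 2),
                            -np.sqrt(1 + sigma ** 2),
                            -np.ones(l - k)])
actual = np.linalg.eigvalsh(M)   # M is symmetric
print(np.allclose(np.sort(actual), np.sort(predicted)))  # True
```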
Countable Dense Set of Discontinuities and Riemann-Integrability (Rudin Question) | Consider the graph of the fractional part of $x$: a sawtooth that ramps linearly from $0$ to $1$ on each unit interval.
Notice how it's discontinuous when $x$ is an integer, and continuous everywhere else. To prove this formally, notice that on $[n, n + 1)$, $n \in \mathbb{N}$, this function is equal to $x - n$. Only at integer values does this function jump.
$(nx)$ is only slightly different from $(x)$: instead of being discontinuous at the integers $\mathbb{Z}$, it's discontinuous at $\frac{1}{n}\mathbb{Z}$.
For a fixed $n$ and bounded interval, $f_n = \dfrac{(nx)}{n^2}$ has a finite number of such discontinuities. Thus, it's Riemann integrable. It follows that the uniform limit of $\displaystyle \sum_{n=1}^\infty f_n$ is also Riemann integrable on a bounded interval. |
Is there a numerical solution for a system of three 1st order nonlinear ODE? | Only to make more obvious what was already obvious, as said in several comments :
$$z=y'-\sin(x)$$
$$z'=y''-\cos(x)x'$$
$$z'=y-z=y''-\cos(x)x'=y-(y'-\sin(x))$$
$$y''+y'-y-\cos(x)x'-\sin(x)=0$$
$$x'''+x''-x'-\cos(x)x'-\sin(x)=0$$
Numerical computation of $x(t)$ , $y(t)$ , $z(t)$ requires to state a third condition, for example $x''(0)$ , or $z(0)$, any numerical method used. |
D.E. with integrating factor question | The point is that $\int \ldots\ dx$ is an antiderivative, which always is defined only up to an arbitrary constant. So if you get one solution with a certain choice $g(x)$ for the antiderivative, you will get others by replacing $g(x)$ by $g(x)+C$, where $C$ is any constant. And you will need the freedom to choose that constant, because you also have an initial condition $y(0)=1$ to satisfy. |
how to defined subset of highest n values from a set | How about "Let $B=A\cap[a,\infty)$ where $a$ is chosen such that $|B|=n$, in other words: $B$ consists of the $n$ largest elements of $A$." |
Evaluate $\int_0^{2\pi}\frac{\sin^2(x)}{a + b\cos(x)}\ dx$ using a suitable contour | Note that the integral diverges for $a\le b$. Therefore, we assume throughout the development that $a>b$.
We can simplify the task by rewriting the integrand as
$$\begin{align}
\frac{\sin^2(x)}{a+b\cos(x)}&=\frac{a}{b^2}-\frac{1}{b}\cos(x)-\left(\frac{a^2-b^2}{b^2}\right)\frac{1}{a+b\cos(x)}
\end{align}$$
Then, the integral of interest reduces to
$$\int_0^{2\pi}\frac{\sin^2(x)}{a+b\cos(x)}\,dx=\frac{2\pi a}{b^2}-\frac{a^2-b^2}{b^2}\int_0^{2\pi}\frac{1}{a+b\cos(x)}\,dx \tag 1$$
We enforce the substitution $z=e^{i x}$ in the integral on the right-hand side of $(1)$ and obtain
$$\begin{align}
\int_0^{2\pi}\frac{1}{a+b\cos(x)}\,dx& =\oint_{|z|=1}\frac{1}{a+b\left(\frac{z+z^{-1}}{2}\right)}\frac{1}{iz}\,dz\\\\
&=\frac2{ib}\oint_{|z|=1}\frac{1}{(z+(a/b)-\sqrt{(a/b)^2-1})(z+(a/b)+\sqrt{(a/b)^2-1})}\,dz\\\\
&=2\pi i \frac2{ib} \frac{1}{2\sqrt{(a/b)^2-1}}\\\\
&=\frac{2\pi}{\sqrt{a^2-b^2}}
\end{align}$$
Putting it all together, the integral of interest is
$$\bbox[5px,border:2px solid #C0A000]{\int_0^{2\pi}\frac{\sin^2(x)}{a+b\cos(x)}\,dx=\frac{2\pi}{b^2}\left(a-\sqrt{a^2-b^2}\right)}$$
as was to be shown! |
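A quadrature check of the boxed result, e.g. with $a=3$, $b=2$ (the midpoint rule converges extremely fast for smooth periodic integrands):

```python
import math

def lhs(a, b, n=4096):
    # midpoint rule for ∫_0^{2π} sin^2 x / (a + b cos x) dx
    h = 2 * math.pi / n
    xs = (h * (i + 0.5) for i in range(n))
    return h * sum(math.sin(x) ** 2 / (a + b * math.cos(x)) for x in xs)

a, b = 3.0, 2.0
closed = 2 * math.pi / b ** 2 * (a - math.sqrt(a * a - b * b))
print(abs(lhs(a, b) - closed) < 1e-9)  # True
```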
Subgroups of $G\times H$ | If you set $G = H = S_3$, the symmetric group of order 6, then consider the subgroup $K$ of elements $(x,y)$ where either both $x$ and $y$ are even or both $x$ and $y$ are odd. It's not hard to see that $K$ is a subgroup of $S_3 \times S_3$ of order 18.
Now if $K$ was to have a decomposition $K \cong G_1 \times G_2$ where $G_1, G_2$ are isomorphic to subgroups of $S_3$, then by consideration of order one of $|G_1|,|G_2|$ would be equal to 6 and the other would be equal to 3. Without loss of generality set $|G_1| = 6, |G_2| = 3$. Let $x$ be an element of $G_1$ of order 2. Then $(x,1)$ must commute with all elements of $G_1 \times G_2$ of form $(1,y)$ for all $y$ in $G_2$. This includes the two non-identity elements of $G_2$ that are of order 3. This provides two commuting elements $(x,1)$ and $(1,y)$ of $G_1 \times G_2$ of orders 2,3, respectively.
However, it is straightforward to check that the elements of order 2 in $K$ do not commute with any of the elements of order 3 in $K$. The elements of $K$ of order two are all of the form $(t_1, t_2)$ where $t_1$ and $t_2$ are both transpositions. Furthermore, all nine such elements are conjugate (as any two transpositions in $S_3$ are conjugate), meaning that for any given element $(t_1,t_2)$, the centralizer is exactly given by $\langle t_1 \rangle \times \langle t_2\rangle$, which is a Klein four subgroup containing no elements of order 3.
Thus, by contradiction, $K$ does not decompose as a direct product of form $G_1 \times G_2$. |
Stuck on this proof that $ord(f) = ord(g)$ | Let's begin with properly defining the function ${\rm ord}$. Given a sufficiently differentiable function $f: \>{\mathbb R}\to{\mathbb R}$ one defines
$${\rm ord\,}_a(f)=\inf\{n>0\>|\>f^{(n)}(a)\ne0\}\ .$$
Note that ${\rm ord}$ needs two inputs. In your question you introduced ${\rm ord\,}_a(f)$ but forgot to pin down the point $a_*$ at which ${\rm ord\,}_{a_*}(g)$ should be computed. This has to be the point $\sigma(a)$, because otherwise it is easy to give counterexamples, e.g., the following:
Assume $a'\ne a$ and put $$f(x):=(x-a)^2, \qquad g(x):=(x-a)^2-(a'-a)^2,\qquad \sigma:={\rm id},\qquad \tau (x):=x-(a'-a)^2\ .$$
Then one has $\tau\circ f=g\circ\sigma$, $f(a)=g(a')=0$, but
$${\rm ord\,}_a(f)=2,\qquad {\rm ord\,}_{a'}(g)=1\ ,$$
since $g'(a')=2(a'-a)\ne0$.
(For this reason in my original answer I tacitly assumed $a'=\sigma(a)$ and then $a=a'=0$, which made $\sigma$ and $\tau$ diffeomorphisms with $\sigma(0)=\tau(0)=0$.)
Now the positive result: The function ${\rm ord}$ satisfies a sort of chain rule:
$${\rm ord\,}_a(g\circ f)={\rm ord\,}_{f(a)}(g)\cdot{\rm ord\,}_a(f)\ .$$
Proof. We may assume
$$a=f(a)=0,\qquad {\rm ord\,}_a(f)=p\geq1,\qquad {\rm ord\,}_{f(a)}(g)=q\geq1\ .$$
The following principle is an immediate consequence of Taylor's theorem:
$${\rm ord\,}_0(f)=m<\infty\quad\Longleftrightarrow\quad f(x)=x^m\bigl(\alpha+r(x)\bigr)\quad {\rm with} \quad \alpha\ne0, \ \lim_{x\to0}r(x)=0\ .$$
So there are constants $\alpha\ne0$, $\beta\ne0$, and functions $x\mapsto r_1(x)$, $y\mapsto r_2(y)$ with $\lim_{x\to0}r_1(x)=\lim_{y\to0}r_2(y)=0$ such that
$$f(x)=x^p\bigl(\alpha+r_1(x)\bigr),\qquad g(y)=y^q\bigl(\beta+r_2(y)\bigr)\ .$$
It follows that
$$g\bigl( f(x)\bigr)=x^{pq}\bigl(\alpha+r_1(x)\bigr)^q\bigl(\beta+r_2\bigl(f(x)\bigr)\bigr)=x^{pq}\bigl(\alpha^q\beta+r(x)\bigr)$$
for some function $x\mapsto r(x)$ with $\lim_{x\to0}r(x)=0.\qquad\square$
Since diffeomorphisms $\sigma:\>{\mathbb R}\to{\mathbb R}$ have ${\rm ord\,}_a(\sigma)=1$ at all points $a$, your original conjecture, interpreted correctly, follows. |
Find the Remainder when $24242424$.... upto $300$ digits is divided by $999$? | Since $1000\equiv1$ mod $999$, you have $$424\cdot1000^0+242\cdot1000^1+424\cdot1000^2+\cdots\equiv424+242+424+\cdots$$ So it's the same as $50$ copies of $424+242=666$.
Now reduce $50\cdot666=48\cdot666+2\cdot666\equiv2\cdot666\equiv333$. |
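Python's big integers make this easy to confirm directly:

```python
N = int("24" * 150)   # the digit string 2424...24, 300 digits long
print(N % 999)        # 333
```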
Find the equation of the two tangent planes to the sphere $x^2+y^2+z^2-2y-6z+5=0$ which are parallel to the plane $2x+2y-z=0$ | There are several methods. I'll outline two here
Find the center of the sphere. Take the line in the direction $(2,2,-1)$ (the normal vector of the plane) that goes through the center. It will intersect the sphere in the two points you are after.
Given any surface defined by an equation $f(x,y,z)=0$ (where $f$ is suitably differentiable), and any point on the surface, the gradient of $f$ at that point is a normal vector to the surface at that point. We want places on the sphere where the normal vector is parallel to $(2,2,-1)$ (note that this is the gradient of the LHS of the plane equation, same deal there). Which is to say, we want places where the gradient of $x^2+y^2+z^2-2y-6z+5$ is parallel to $(2,2,-1)$. |
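A sketch of the first method in code, with the center $(0,1,3)$ and radius $\sqrt5$ obtained by completing the square (my computation, worth double-checking):

```python
import math

c = (0.0, 1.0, 3.0)            # center: x^2 + (y-1)^2 + (z-3)^2 = 5
r = math.sqrt(1 + 9 - 5)       # radius^2 = 1 + 9 - 5 = 5
d = (2.0, 2.0, -1.0)           # normal direction of the given plane
nd = math.sqrt(sum(v * v for v in d))
for s in (1, -1):
    p = tuple(ci + s * r * v / nd for ci, v in zip(c, d))
    x, y, z = p
    # each point lies on the sphere, as the method predicts
    assert abs(x * x + y * y + z * z - 2 * y - 6 * z + 5) < 1e-9
print("both tangency points lie on the sphere")
```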
Highest power contained in the expression. | The integer parts do not include fractions, so for example
$$\lfloor N/p\rfloor=a_1+a_2p+...+a_np^{n-1}\\ \lfloor N/p^2\rfloor=a_2+a_3p+...+a_np^{n-2}$$
When you add them up, try collecting the $a_i$ instead of $p$. For example, $a_3$ is multiplied by $p^2+p+1$.
Multiply the full sum by $p-1$ and try to make the result equal $N-s$. |
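The identity this hint leads to is Legendre's formula, $\sum_i \lfloor N/p^i \rfloor = \frac{N-s}{p-1}$ with $s$ the base-$p$ digit sum of $N$, which is easy to verify:

```python
def vp_factorial(N, p):
    # sum of floor(N / p^i) over i >= 1
    total, q = 0, p
    while q <= N:
        total += N // q
        q *= p
    return total

def digit_sum(N, p):
    s = 0
    while N:
        s += N % p
        N //= p
    return s

for N, p in ((100, 3), (999, 7), (12345, 2)):
    assert vp_factorial(N, p) == (N - digit_sum(N, p)) // (p - 1)
print("Legendre's formula verified")
```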
Sequence of continuous functions which converges to a continuous limit | Hint: Consider $f_n: [0,1] \to \Bbb R$ defined by
$$f_n(x) := \begin{cases}
2n^2x & \text{if $0 \le x \le \frac1{2n}$}\\
2n-2n^2x & \text{if $\frac1{2n} \le x \le \frac 1n$}\\
0 & \text{if $x \ge \frac1n$}
\end{cases}$$
What is the pointwise limit of the $f_n$? |
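Playing with the tent functions numerically suggests the answer: for any fixed $x$ the values $f_n(x)$ are eventually $0$, while each $\int_0^1 f_n$ stays $\frac12$ (I'm spelling out the hint here, not the original answer):

```python
def f(n, x):
    # the tent function f_n from the hint
    if 0 <= x <= 1 / (2 * n):
        return 2 * n * n * x
    if x <= 1 / n:
        return 2 * n - 2 * n * n * x
    return 0.0

x = 0.3
print([f(n, x) for n in (1, 2, 5, 10)])   # eventually 0 for fixed x > 0

m = 100000
h = 1.0 / m
integral = h * sum(f(4, h * (i + 0.5)) for i in range(m))
print(integral)   # each tent has area 1/2
```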
How to calculate $\sin(37°)$ with a Taylor approximation? | The easiest approach may be to use the Taylor series $\sin x=x-{1\over6}x^3+{1\over120}x^5-\cdots$, since $37^\circ=37\pi/180\approx0.646\lt1$:
$$\sin(37^\circ)=\sin\left(37\pi\over180\right)\approx\left(37\pi\over180\right)-{1\over6}\left(37\pi\over180\right)^3+{1\over120}\left(37\pi\over180\right)^5$$
(noting that the next term in the alternating sum is considerably less than $1/5040\approx0.0002$). I wouldn't want to complete the decimal calculation by hand, but it's relatively straightforward with a calculator, even if you have to use an approximation like $\pi\approx3.1416$ on a pocket calculator that lacks a button for $\pi$ and only does arithmetic. |
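Checking the three-term approximation against a library sine:

```python
import math

x = 37 * math.pi / 180
approx = x - x ** 3 / 6 + x ** 5 / 120
print(approx, math.sin(x))
# the error is about x^7/5040, i.e. on the order of 1e-5
```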
How to show $\partial A = \varnothing \Rightarrow A=R^n$ | As $\dim A=n$, then the relative boundary of course coincides with the usual boundary of $A$ as a subset of $\mathbb R^n$. Then $\mathbb R^n$ is the union of the interior of $A$ and the exterior of $A$, two open sets. As $\mathbb R^n$ is connected, one of the two is empty, and $A$ is not. |
How to prove this a(n) is bounded above | Idea (partial answer): You have $a_k>0$ for all $k$ and
$$\frac{1}{a_{n+1}}=\frac{1}{a_n}-\frac{1}{n^2+a_n}$$ hence
$$\frac{1}{a_{n+1}}=\frac{1}{a_1}-\sum_{k=1}^{n}\frac{1}{k^2+a_k}$$
but one has also to show that $\frac{1}{a_1}-\sum_{k=1}^{+\infty}\frac{1}{k^2+a_k} >0$ to finish... |
Hardest question IMO I had ever seen! | The statement is false. For $\alpha=1$, $\beta=2$, $\gamma=3$ there is only one polynomial of degree at most $2$ that satisfies the evaluations, namely $P(x)=x$; it is not of degree$~2$. There are many similar counterexamples. |
Finding the number of continuous functions | You should apply the same substitution, but in reverse:
$$\int ^{1}_{0} f(x) dx = \int ^{1}_{0} f(t^2) 2tdt = \int ^{1}_{0} 2x f(x^2) dx$$
So we have:
$$\int ^{1}_{0} 2x f(x^2) dx = \frac{1}{3} +\int ^{1}_{0} f^{2}\left( x^{2}\right) dx \Longrightarrow \int ^{1}_{0} f^{2}\left( x^{2}\right) dx - \int ^{1}_{0} 2x f(x^2) dx + \frac{1}{3} = 0$$
$$\Longrightarrow 0 = \int ^{1}_{0} \left(f^{2}\left( x^{2}\right) - 2x f(x^2) \right)dx + \frac{1}{3} = \int ^{1}_{0} \left(\left(f\left( x^{2}\right) - x\right)^2 -x^2 \right)dx + \frac{1}{3} = $$
$$ = \int ^{1}_{0} \left(f\left( x^{2}\right) - x\right)^2 dx - \int ^{1}_{0}x^2dx + \frac{1}{3} = \int ^{1}_{0} \left(f\left( x^{2}\right) - x\right)^2 dx - \left[\frac{x^3}{3}\right]_0^1 + \frac{1}{3}$$
$$\Longrightarrow \int ^{1}_{0} \left(f\left( x^{2}\right) - x\right)^2 dx = 0$$
Since the integrand on the left side is non-negative and continuous, the integral vanishes only if it is zero everywhere:
$$\left(f\left( x^{2}\right) - x\right)^2 = 0 \Longrightarrow f\left( x^{2}\right) - x = 0 \Longrightarrow f\left( x^{2}\right) = x \Longrightarrow f(x) = \sqrt x $$ |
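A numeric check that $f(x)=\sqrt{x}$ satisfies the displayed relation (midpoint rule on $[0,1]$):

```python
import math

n = 100000
h = 1.0 / n
xs = [h * (i + 0.5) for i in range(n)]
f = math.sqrt
lhs = h * sum(2 * x * f(x * x) for x in xs)       # ∫ 2x f(x^2) dx = 2/3
rhs = 1 / 3 + h * sum(f(x * x) ** 2 for x in xs)  # 1/3 + ∫ f(x^2)^2 dx = 2/3
print(lhs, rhs)  # both ≈ 2/3
```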
Reconciling two statements of Ramsey's Theorem. | Note that Wikipedia is talking about colouring $n$-element subsets of $X$ and Shelah is talking about colouring $n$-tuples in $X$. Shelah thus needs to pick a canonical way of viewing a finite subset as a tuple and an increasing enumeration is one such way.
To get from Shelah's version to Wiki's version, we view an $n$-element subset $A\subseteq X$ as an increasing $n$-tuple (in a unique way) and colour it accordingly. To get from Wiki's version to Shelah's, we colour each tuple according to its underlying set. |
Inhomogeneous ODE system with singular Matrix | It is not the matrix $M$ that is used in the variation of constant formula. Let $Φ(t)$ be a fundamental matrix, i.e., a matrix solution of
$$
\dot Φ(t)=M·Φ(t)
$$
with $Φ(0)$ non-singular. Then $Φ(t)$ is non-singular everywhere and the particular solution is
$$
x_p(t)=Φ(t)·\int_0^tΦ(s)^{-1}g(s)\,ds
$$ |
This is a question about elementary number theory | We have that $3b+1$ and $b^2-5b+3$ are two consecutive odd integers, hence $b^2-8b+2=\pm2$; the minus sign gives $b^2-8b+4=0$, which has no integer roots, so $b^2-8b+2=2$ and the only possibility is $b=8$, and it fits. |
Calculate points of a tesseract (hypercube) | The coords of the hypercube of edge length 2, centered at the origin, and with facets being parallel to the coord planes, would be $(\pm1, \pm1, \pm1, ...)$. (Just continue up to the dimension you'd like.)
--- rk |
Find the maximum and minimum value of $|A|$ | Let's be a bit more general: Let $n \in \mathbb{N} \setminus\{0, 1\}$ and $\theta_1,...,\theta_n \in \mathbb{R}$ and let
$$A:=\frac{1}{n}\sum_{k=1}^{n} \exp(i \theta_k)$$
You've already shown that
$$|A| \leqslant 1$$
with the triangle inequality. We can easily show an example for $|A|=1$ with $\theta_i=0$ for all $i=1,...,n$. It's also clear that $|A| \geqslant 0$ from the properties of the absolute value, so we just need to prove that $|A|=0$ is possible. Let
$$\omega_k=\exp\left(\frac{2ki \pi}{n}\right)$$
I claim that
$$\sum_{k=1}^{n} \omega_k = 0$$
Can you prove it? |
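Numerically the claim is easy to believe: the $n$-th roots of unity sum to zero.

```python
import cmath

for n in (3, 5, 8, 12):
    total = sum(cmath.exp(2j * cmath.pi * k / n) for k in range(1, n + 1))
    assert abs(total) < 1e-12
print("sum of n-th roots of unity is 0")
```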
Show that spectral decomposition has same eigenvalues than the matrix its decomposed from. | Hint: you know what the eigenvectors/eigenvalues are supposed to be, so why not check to see if they work? |
Using numerical methods to calculate integral | Since we want to approximate:
$$ I = \frac{1}{10}\int_{0}^{+\infty}e^{-x^2}\,dx $$
it is enough to have an approximation for:
$$ J = \int_{0}^{4}e^{-x^2}\,dx,\tag{1} $$
since:
$$ \int_{4}^{+\infty}e^{-x^2}\,dx = e^{-16}\int_{0}^{+\infty}e^{-8x-x^2}\,dx \leq \frac{1}{8e^{16}}.$$
To achieve a good approximation for $(1)$, it is sufficient to integrate termwise the Taylor series of $e^{-x^2}$:
$$e^{-x^2} = \sum_{k=0}^{+\infty}\frac{(-1)^k}{k!}x^{2k}, $$
$$\int_{0}^{4}e^{-x^2}\,dx = \sum_{k=0}^{+\infty}\frac{(-1)^k 4^{2k+1}}{(2k+1)k!}.\tag{2}$$
By Leibniz' rule, we just need to find a $k$ such that:
$$\frac{4^{2k+1}}{(2k+1)k!}<10^{-4}.$$
With some rather crude bound for $k!$, it is not difficult to prove that $k=50$ is enough, so:
$$ I \approx \frac{1}{10}\sum_{k=0}^{50}\frac{(-1)^k 4^{2k+1}}{(2k+1)k!}.\tag{3}$$ |
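The truncated series $(3)$ can be compared against the exact value $\frac1{10}\int_0^\infty e^{-x^2}\,dx = \frac{\sqrt\pi}{20}$:

```python
import math

S = sum((-1) ** k * 4 ** (2 * k + 1) / ((2 * k + 1) * math.factorial(k))
        for k in range(51))   # k = 0..50, as in (3)
approx = S / 10
exact = math.sqrt(math.pi) / 20
print(approx, exact)  # agree to well within 10^-4
```

The alternating terms grow to about $10^5$ before shrinking, so the floating-point cancellation here costs roughly $10^{-11}$ of absolute accuracy, well inside the target.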
Is taking the lim of the series the same to taking the series of the lims? | Consider
$$a_{nk} = \frac{k}{k+n}-\frac{k-1}{k-1+n}$$
Then
$$\lim_{n \to \infty}\sum_{k=1}^\infty a_{nk} = \lim_{n \to \infty}\lim_{K \to \infty}\sum_{k=1}^K a_{nk} =\lim_{n \to \infty}\lim_{K \to \infty}\frac{K}{K+n} = 1,$$
but
$$\sum_{k=1}^\infty \lim_{n \to \infty}a_{nk} = 0$$
For some conditions where the equality holds, study the monotone and dominated convergence theorems for series. |
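The telescoping of the inner sums is easy to confirm numerically:

```python
def a(n, k):
    return k / (k + n) - (k - 1) / (k - 1 + n)

n, K = 1000, 100000
# the inner sums telescope: sum_{k=1}^K a_{nk} = K / (K + n)
partial = sum(a(n, k) for k in range(1, K + 1))
print(partial, K / (K + n))
# for fixed n, letting K -> ∞ gives 1; for fixed k, a_{nk} -> 0 as n -> ∞
```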
The probability that $x$ is divisible by $p$ where $x\in\mathbb{Z}$ and $p\in\mathbb{P}$ | It's safe to say that this is a heuristic and that your criticism of the heuristic is reasonable. However, another reasonable interpretation of "the probability that a random natural number is divisible by $p$" is
$$
\lim_{N \to \infty} \frac{\#\{\text{numbers less than } N \text{ divisible by } p\}}{\#\{\text{numbers less than } N\}} = \frac{1}{p}
$$
which is of course true. Since the heuristic can be useful, it's worth entertaining it, keeping in mind that its meaningfulness depends on some pretty deep questions in philosophy of math. |
Problem proving: $V = \ker T \oplus \operatorname{im}T$ | From your response you seem to think that $$\dim(\operatorname{im}(T)+\ker(T))=\dim(\operatorname{im}(T))+\dim(\ker(T));$$ but as pointed out in the comments this is not necessarily the case (consider for example the linear map $T:\mathbb{R}^2\to \mathbb{R}^2$ defined by the matrix $\begin{pmatrix} 0 & 1\\0 &0 \end{pmatrix}$, where
$$\operatorname{im}(T)=\ker(T)=\mathbb{R}\cdot\begin{pmatrix} 1 \\ 0 \end{pmatrix}$$
To prove that $\operatorname{im}(T)\cap \ker(T)=\{0\}$, consider an element $x\in \operatorname{im}(T)\cap \ker(T)$; then $T(x)=0$ and $x=T(y)$ for some $y\in V$. Thus $T^2(y)=T(x)=0$, which means $y\in \ker(T^2)$; since $\ker(T^2)\subset \ker(T)$, $y\in \ker(T)$, and thus $x=T(y)=0$. |
Continuous function with certain characteristics | How about $\sin(x^2)$? This doesn't converge, but the integral does, as can be seen by substituting $u=x^2$. |
problem on union of connected sets | Hint: the last part of the union is a line (going through the origin) that could connect the two disks. Think of what is the slope needed so it will touch the disks (in fact, you could only assert that one of them is intersected, the second will follow). |
argument principle with a polynomial | We are going to prove that there is exactly one zero in the first quadrant. Since the polynomial has real coefficients, it will have another zero in the fourth quadrant and therefore two zeros in the right half-plane. Let
$$
\gamma _1 \left( t \right):\left\{ \begin{gathered}
x(t) = t \hfill \\
y(t) = 0 \hfill \\
\end{gathered} \right.\,\,\,\,\,\,\,0 \leqslant t \leqslant R
$$
Let $C=\gamma_1+\gamma_2-\gamma_3$, where
$$
\gamma _2 \left( t \right):\left\{ \begin{gathered}
x(t) = R\cos (t) \hfill \\
y(t) = R\sin (t) \hfill \\
\end{gathered} \right.\,\,\,\,\,\,\,0 \leqslant t \leqslant \frac{\pi }
{2}
$$
and
$$
\gamma _3 \left( t \right):\left\{ \begin{gathered}
x(t) = 0 \hfill \\
y(t) = t \hfill \\
\end{gathered} \right.\,\,\,\,\,\,\,0 \leqslant t \leqslant R
$$
We notice that there are no zeros on $\gamma_1$, since the coefficients are positive. Moreover, there are no zeros on $\gamma_3$ either. Indeed, we have that
$$
f\left( {it} \right) = t^4 - 3t^2 + 1 + it
$$
which is never zero for any $t\in \mathbb R$: the imaginary part vanishes only at $t=0$, where the real part equals $1$. Now, applying the argument principle to the contour $C$ (which we also write as $\gamma=\gamma_1+\gamma_2-\gamma_3$), we have that
$$
N = \frac{1}
{{2\pi i}}\int\limits_C {\frac{{f'(z)}}
{{f(z)}}} dz = \frac{{\Delta _\gamma \left( {\arg \left( {f(z)} \right)} \right)}}
{{2\pi }}
$$
where $N$ is the number of zeros inside $\gamma$. We have that
$$
\frac{{\Delta _\gamma \left( {\arg \left( {f(z)} \right)} \right)}}
{{2\pi }} = \frac{{\Delta _{\gamma _1 } \left( {\arg \left( {f(z)} \right)} \right)}}
{{2\pi }} + \frac{{\Delta _{\gamma _2 } \left( {\arg \left( {f(z)} \right)} \right)}}
{{2\pi }} - \frac{{\Delta _{\gamma _3 } \left( {\arg \left( {f(z)} \right)} \right)}}
{{2\pi }}
$$
Now,
$$
\frac{{\Delta _{\gamma _1 } \left( {\arg \left( {f(z)} \right)} \right)}}
{{2\pi }} = 0
$$
because $f(x)>0$ on $\gamma_1$.
We notice that we can write $\gamma_2=Re^{it}$ with $0 \leq t \leq \pi/2$ and therefore we have that
$$
\begin{gathered}
\frac{{\Delta _{\gamma _2 } \left( {\arg \left( {f(z)} \right)} \right)}}
{{2\pi }} = \frac{1}{2\pi }\,\Delta _{\gamma _2 } \left( {\arg \left[ {R^4 e^{4it} \left( {1 + \frac{{3R^2 e^{2it} + Re^{it} + 1}}
{{R^4 e^{4it} }}} \right)} \right]} \right) = \hfill \\
= \frac{{4\left( {\frac{\pi }
{2}} \right) + o\left( {\frac{1}
{R}} \right)}}
{{2\pi }}\,,\,\,\,\,\,\,\left( {R \to + \infty } \right) = \hfill \\
\hfill \\
= 1 + o\left( {\frac{1}
{R}} \right),\,\,\,\,\,\left( {R \to + \infty } \right) \hfill \\
\hfill \\
\end{gathered}
$$
Finally,
$$
\begin{gathered}
\frac{{\Delta _{\gamma _3 } \left( {\arg \left( {f(z)} \right)} \right)}}
{{2\pi }} = \frac{{\arg \left( {f(iR)} \right) - \arg \left( {f(0)} \right)}}
{{2\pi }} = \hfill \\
\hfill \\
= \frac{{\arg \left( {R^4 - 3R^2 + 1 + iR} \right)}}
{{2\pi }} = \hfill \\
\hfill \\
= \frac{1}
{{2\pi }}\arg \left[ {R^4 \left( {1 - \frac{{3R^2 - 1 - iR}}
{{R^4 }}} \right)} \right] = 0 + o\left( {\frac{1}
{R}} \right)\,,\,\,\,\,\,\,\left( {R \to + \infty } \right) \hfill \\
\end{gathered}
$$
Therefore we have that $N=1$. Thus there are two zeros in the right half-plane. |
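As a numerical sanity check, the zeros of $f(z)=z^4+3z^2+z+1$ (the polynomial read off from $f(it)=t^4-3t^2+1+it$ above) can be located directly (a sketch using numpy):

```python
import numpy as np

# f(z) = z^4 + 3z^2 + z + 1, recovered from f(it) = t^4 - 3t^2 + 1 + it.
roots = np.roots([1, 0, 3, 1, 1])

first_quadrant = [z for z in roots if z.real > 0 and z.imag > 0]
right_half = [z for z in roots if z.real > 0]

print(np.sort_complex(roots))
print(len(first_quadrant), len(right_half))  # 1 and 2, matching N = 1
```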
When is the tensor product commutative? | The tensor product's commutativity depends on the commutativity of the elements. If the ring is commutative, the tensor product is as well. If the ring R is non-commutative, the tensor product will only be commutative over the commutative sub-ring of R. There will always be tensors over the ring that will not commute if R is non-commutative. |
Why $\lim_{z\to\infty}\frac{\sin(z)}z$ doesn't exist? | Note that $z$ and hence your $t$ is not necessarily real. Investigate what happens when you let $z$ approach $\infty$ along the imaginary axis. |
The center of a tree is a vertex or an edge. | The proof is fine, provided that you restore the two observations whose necessity you’re questioning.
The observation that the eccentricity of a vertex $v$ in $T'$ is one less than the eccentricity of $v$ in $T$ (in symbols: $\epsilon_{T'}(v)=\epsilon_T(v)-1$) is needed in order to show that the vertices of minimal eccentricity in $T'$ are the same as those in $T$: we need to know that $\epsilon_T(u)<\epsilon_T(v)$ if and only if $\epsilon_{T'}(u)<\epsilon_{T'}(v)$, so that the ordering of vertices by increasing eccentricity doesn’t change when we go from $T$ to $T'$.
The observation that any vertex at maximal distance from some vertex $v$ of $T$ is a leaf is needed to prove the previous observation. Removing the leaves of $T$ removes the vertices at maximum distance from a given interior vertex $v$, thereby reducing the eccentricity of $v$ by at least $1$. It does not, however, remove the neighbors of those vertices, which are at distance $\epsilon_T(v)-1$ from $v$, so it reduces the eccentricity only by $1$. |
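Both observations can be tested on a small example (a sketch; the particular tree and the BFS-based eccentricity computation are my own choices, not part of the proof):

```python
from collections import deque

def eccentricities(adj):
    """Map each vertex to its eccentricity (maximum BFS distance)."""
    ecc = {}
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        ecc[s] = max(dist.values())
    return ecc

# A small tree: the path 1-2-3-4-5 with an extra leaf 6 attached at 2.
edges = [(1, 2), (2, 3), (3, 4), (4, 5), (2, 6)]
adj = {v: set() for e in edges for v in e}
for u, v in edges:
    adj[u].add(v); adj[v].add(u)

leaves = {v for v in adj if len(adj[v]) == 1}
adj2 = {v: adj[v] - leaves for v in adj if v not in leaves}  # T' = T minus its leaves

ecc_T, ecc_T2 = eccentricities(adj), eccentricities(adj2)
# Every interior vertex loses exactly 1 from its eccentricity.
assert all(ecc_T[v] - ecc_T2[v] == 1 for v in adj2)
print(ecc_T, ecc_T2)
```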
Find a nontrivial proper ideal of $\mathbb{Z}\times\mathbb{Z}$ that is not prime | It's pretty clear that $a = (2, 0) \not \in 4\mathbb{Z}\times\mathbb{Z}$, while $a^2 = (4, 0) \in 4\mathbb{Z}\times\mathbb{Z}$.
There's a general method of finding whether a given ideal $I \subset \mathbb{Z}^n$ is prime. Let $I = (a_1, \ldots, a_k)$. Then, $I$ is the image of an obvious map $\mathbb{Z}^k \to \mathbb{Z}^n$. This map is given by a matrix with the $a_i$ as columns. Now, by a process quite similar to Gaussian elimination, we can transform this matrix to a diagonal matrix, the image of which in $\mathbb{Z}^n$ will also be $I$ -- this is just finding another set of generators of $I$. However, with diagonal generators it is now clear whether the ideal is prime -- in your problem above, the diagonal generators are $(4, 0), (0, 1)$, and it's pretty clear $(2, 0)$ is not generated by them, while $(2, 0)^2$ belongs to the ideal.
The procedure I mentioned is basically a classification of finitely generated modules over a PID, so look for how this classification works. |
Multipling / Dividing with Significant Figures | If you use what I said...which is almost certainly not what you "should" do...then we would have:
$$
f(x) = \frac{x + y}{xy} = \frac{1}{x} + \frac{1}{y}\\
\frac{\partial f}{\partial x} = -\frac{1}{x^2} \\
\frac{\partial f}{\partial y} = -\frac{1}{y^2}
$$
This would give (assuming that each last significant digit is correct to $\pm 0.5$ of the next):
$$
\Delta f \approx \frac{1}{10.3^2}(0.05) + \frac{1}{0.01345^2}(0.000\ 005) \approx 0.028\ 110\ 495\ 86
$$
The actual result would then be:
$$
f(x) \approx \frac{10.3 + 0.01345}{10.3*0.01345} \approx 74.446\ 529\ 757\ 8
$$
This would suggest that the result is accurate to the "second" decimal:
$$
f(x) \approx 74.4(4)
$$
This notation means that the results is approximately $74.44$ but that the final digit is uncertain. If, instead, you use the "rules", you will find that:
$$
10.3 + 0.01345 \approx 10.3
$$
Then that $10.3 * 0.01345 \approx 0.139$ and finally that:
$$
\frac{10.3}{0.139} \approx 74.1
$$
If, instead, we were to use: $\frac{1}{x} + \frac{1}{y}$, we would find that:
$$
\frac{1}{10.3} \approx 0.0971\\
\frac{1}{0.01345} \approx 74.35
$$
We would then go ahead and add them, to get:
$$
74.35 + 0.0971 \approx 74.45
$$ |
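The propagated-uncertainty computation above is easy to reproduce numerically (a sketch; the input uncertainties 0.05 and 0.000005 are the ±0.5-of-the-next-digit assumption stated above):

```python
x, y = 10.3, 0.01345
dx, dy = 0.05, 0.000_005   # half of the next significant digit of each input

f = 1 / x + 1 / y          # same value as (x + y) / (x * y)
# First-order error propagation: |df/dx| * dx + |df/dy| * dy
df = dx / x**2 + dy / y**2

print(f)    # ≈ 74.4465...
print(df)   # ≈ 0.0281..., so the second decimal is uncertain
```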
For positive integer N the numbers 1,2,3,...,2N are arranged in two adjacent column. In how many ways they can be arranged that: | Try putting in the numbers in order from $1$ to $2N$. Label it $a$ if it goes into the first column and $b$ if it goes into the second column. Check that you can recover the original solution from the string of labels, so that it forms a bijection. What are the restrictions on the possible strings of labels? It might now be familiar to you. |
KS test: how to test if dist1 is on average larger than dist2 | For small p-values, the p-value of a 2-sided test tends to be larger than the p-value for the one-sided test (in the correct direction) for the same data. About double for roughly symmetrical distributions.
Example with fake data, analysis using R statistical software:
x = round(rnorm(10, 120, 15),2) # generate x's
x
## 99.98 108.48 120.31 121.97 98.17 133.71 112.28 82.91 117.22 120.44
summary(x)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 82.91 102.10 114.80 111.50 120.40 133.70
y = round(rnorm(10, 100, 15),2)
y
## 94.18 112.39 104.15 112.87 115.82 95.81 94.43 111.53 111.55 108.80
summary(y)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 94.18 97.90 110.20 106.20 112.20 115.80
xy = c(x, y); gp = as.factor(rep(c("x","y"), each=10))
boxplot(xy ~ gp, horizontal=T)
ks.test(x, y, alternative="two.sided")
## Two-sample Kolmogorov-Smirnov test
## data: x and y
## D = 0.5, p-value = 0.1678
## alternative hypothesis: two.sided
ks.test(x, y, alternative="less")
## Two-sample Kolmogorov-Smirnov test
## data: x and y
## D^- = 0.5, p-value = 0.08208
## alternative hypothesis: less
In the example above, the p-value changed from about 17% (2-sided) to about 8% (1-sided).
This is why dishonest researchers sometimes make rationalizations
to use a one-sided test (after they have seen the data), when a
two-sided test is really required. The one-sided test has a
smaller p-value--maybe small enough to claim one can reject. |
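For readers using Python rather than R, a similar comparison can be run with `scipy.stats.ks_2samp` (a sketch; the data are freshly simulated, and since scipy's `alternative` naming follows its own convention, I simply take the smaller of the two one-sided p-values):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(120, 15, 10)
y = rng.normal(100, 15, 10)

p_two = stats.ks_2samp(x, y, alternative="two-sided").pvalue
p_one = min(stats.ks_2samp(x, y, alternative="less").pvalue,
            stats.ks_2samp(x, y, alternative="greater").pvalue)

# The one-sided p-value in the correct direction is smaller --
# roughly half the two-sided one when p is small.
print(p_two, p_one)
```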
An "obvious" statement about a nonincreasing supremum | Counterexample provided by Daniel Fischer in a comment:
$$f(t,x) = \begin{cases} e^{-t} ,\quad &x = 0\\ t/x ,\quad &x > 0\end{cases}$$
Observe that $\sup_{x\in [0,1]}f(t,x) = 1$ when $t=0$ and $\sup_{x\in [0,1]}f(t,x) = \infty$ when $t>0$.
Indeed, for each fixed $x$, $f(t,x)$ is a very nice function of $t$. The condition of having negative $t$-derivative holds at $t=0$, where only $e^{-t}$ is involved in it. It also holds for positive $t$, simply because the supremum is infinite while the functions are finite; so none of the functions have $f(t,y)>\frac12 \sup_{x\in [0,1]}f(t,x)$. |
Ampleness of a line bundle on the generic fibre | In EGA IV part three we find the following result:
Corollary 9.6.4: Let $f:X \to S$ be a proper and finitely presented morphism of schemes and $\mathcal{L}$ a line bundle on $X$. Then the set $U \subset S$ of $s \in S$ such that $\mathcal{L}_s$ is relatively ample for $f_s$ is open in $S$, and the restriction of $\mathcal{L}$ to $f^{-1}(U)$ is relatively ample for $f:f^{-1}(U) \to U$.
This answers your second question because $\mathcal{L}_s$ being relatively ample for $f_s$ is equivalent to $\mathcal{L}_s$ being ample on $X_s$. Your first question follows by taking an affine open $V \subset U$ and applying part 3 of https://stacks.math.columbia.edu/tag/01VJ.
Random walk on a finite square grid: probability of given position after 15 or 3600 moves | An exact computation to answer question 1. is simple in principle, difficult to do exactly with no computer, and the result is frankly not very interesting. Question 2. is more interesting since, after 3600 steps on a graph of 25 vertices with diameter 10, the random walk is close to its stationary distribution.
The random walk is equivalent to the simple reversible random walk on the augmented graph where one adds 3 supplementary loops to each corner, 2 to each side vertex not a corner, and 1 to each other vertex. Every vertex in the augmented graph has degree 5 hence the stationary distribution is uniform. The Markov chain is aperiodic hence, at any large time, the probability to be at any of the vertices of this graph with 25 vertices is approximately 1/25.
The probability to be in any set S of vertices is approximately the size of S divided by 25. There are 4 corners hence the probability to be at one corner is approximately 4/25. There are 16 side squares hence the probability to be on one side is approximately 16/25. |
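The approximation can be checked exactly by powering the transition matrix of the lazy walk described above (a sketch using numpy; every vertex gets enough loops to have degree 5):

```python
import numpy as np

idx = lambda r, c: 5 * r + c
P = np.zeros((25, 25))
for r in range(5):
    for c in range(5):
        nbrs = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < 5 and 0 <= c + dc < 5]
        for rr, cc in nbrs:
            P[idx(r, c), idx(rr, cc)] = 1 / 5
        # loops bring every vertex up to degree 5
        P[idx(r, c), idx(r, c)] = (5 - len(nbrs)) / 5

dist = np.linalg.matrix_power(P, 3600)[0]   # start at the corner (0, 0)
print(dist.min(), dist.max())               # both ≈ 1/25 = 0.04
```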
Limit and continuity | I'm not sure what you mean by "differentiation or integration method" but:
Do you know that your note can be slightly generalized to $\lim\limits_{x\rightarrow\infty} (1+{a\over x})^{ x} =e^a$? If so, then just write $\lim\limits_{x\rightarrow\infty} \bigl((1+{2\over x})^{ x} \bigr)^{1/8}=\Bigl(\lim\limits_{x\rightarrow\infty} (1+{2\over x})^{ x} \Bigr)^{1/8} $.
If not you could use L'Hôpital's rule to first evaluate $\lim\limits_{x\rightarrow\infty}\ln({x\over x+2})^{x/8}=\lim\limits_{x\rightarrow\infty}\Bigl({x\over 8}\ln({x\over x+2}) \Bigr) $. |
Time derivative of polar coordinates | from $x(t)=r(t)\cos\theta(t)$
you derive once wrt $t$
$$x'(t)=r'(t) \cos (\theta (t))-r(t) \theta '(t) \sin (\theta (t))$$
then differentiating again gives
$$x''(t)=\cos (\theta (t)) \left(r''(t)-r(t) \theta '(t)^2\right)-\sin (\theta (t)) \left(2 r'(t) \theta '(t)+r(t) \theta ''(t)\right)$$
The same for $y(t)=r(t)\sin\theta(t)$:
$$y'(t)=r'(t) \sin (\theta (t))+r(t) \theta '(t) \cos (\theta (t))$$
and
$$y''(t)=(2 r'(t) \theta '(t) +r(t) \theta ''(t)) \cos (\theta (t))+\sin (\theta (t)) \left(r''(t)-r(t) \theta '(t)^2\right)$$ |
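These formulas can be verified symbolically (a sketch using sympy):

```python
import sympy as sp

t = sp.symbols('t')
r = sp.Function('r')(t)
theta = sp.Function('theta')(t)

x = r * sp.cos(theta)
y = r * sp.sin(theta)

# The claimed second derivatives from the answer above.
xdd = ((sp.diff(r, t, 2) - r * sp.diff(theta, t)**2) * sp.cos(theta)
       - (2 * sp.diff(r, t) * sp.diff(theta, t) + r * sp.diff(theta, t, 2)) * sp.sin(theta))
ydd = ((2 * sp.diff(r, t) * sp.diff(theta, t) + r * sp.diff(theta, t, 2)) * sp.cos(theta)
       + (sp.diff(r, t, 2) - r * sp.diff(theta, t)**2) * sp.sin(theta))

assert sp.simplify(sp.diff(x, t, 2) - xdd) == 0
assert sp.simplify(sp.diff(y, t, 2) - ydd) == 0
```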
Build a rotation matrix that rotates 30 degrees along the axis (1,1,1)? | You can solve this by a direct application of the Rodrigues' rotation formula.
In order to apply the formula, we need a unit vector along the rotation axis, $(1,1,1)$. In other words, we need a vector $\mathbf{k}$ which is parallel to $(1,1,1)$, but with the property that $|\mathbf{k}|=1$. This is the vector $(x,y,z)$ in the solution; note that
\begin{align}
\left|\left(\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}}\right)\right| = \sqrt{\left(\frac{1}{\sqrt{3}}\right)^2+\left(\frac{1}{\sqrt{3}}\right)^2+\left(\frac{1}{\sqrt{3}}\right)^2}=\sqrt{\frac{1}{3}+\frac{1}{3}+\frac{1}{3}}=\sqrt{1}=1.
\end{align}
If we denote the aforementioned unit vector by $\mathbf{k}$, and the rotation matrix which we are trying to find by $\mathbf{R}$, we have by Rodrigues' formula that
\begin{align}
\mathbf{R} = \mathbf{I} + (\sin\theta) \mathbf{K} + (1-\cos\theta)\mathbf{K}^2,
\end{align}
where
\begin{align}
\mathbf{K} = \left(\matrix{ 0 & -z & y \\
z & 0 & -x \\
-y & x & 0}\right),
\end{align}
and $\mathbf{I}$ denotes the identity matrix as usual. |
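A direct numerical implementation of the formula (a sketch using numpy; the 30° angle and the axis $(1,1,1)$ are the ones from the question):

```python
import numpy as np

theta = np.deg2rad(30)
k = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)   # unit vector along the axis

K = np.array([[    0, -k[2],  k[1]],
              [ k[2],     0, -k[0]],
              [-k[1],  k[0],     0]])

R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
print(R)

# Sanity checks: R is a rotation and it fixes the axis.
assert np.allclose(R @ R.T, np.eye(3))
assert np.allclose(R @ k, k)
assert np.isclose(np.trace(R), 1 + 2 * np.cos(theta))
```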
Law of large numbers and a product of random variables | Well, what is $E(\log(Y))$? I compute $$ E(\log(Y)) = \frac{1}{2}(\log(1.35) + \log(.7)) \approx -0.03.$$
Then, from what you said about SLLN, we have, almost surely, $\log(C_n)/n \to E(\log(Y)) \approx -0.03$, which means $\log(C_n)\to -\infty$, which means $C_n\to 0$ almost surely.
On the other hand, you've calculated $E(C_n) =1.025^n C \to \infty.$
The point of the problem is to get comfortable with this seemingly paradoxical result.
Here's how I think about it: imagine an ensemble of millions of gamblers each playing this game starting with $\$ 1$. It is perfectly consistent for the following two things to both be true with high probability at some very large time $n$
A vanishingly small fraction of the gamblers have more than a penny left.
The average wealth of the gamblers is more than 100 trillion dollars.
It's just that a very small fraction of the gamblers hold comically large amount of wealth thanks to the exponential growth that occurs in rare long streaks of good luck. |
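The ensemble picture can be simulated directly (a sketch; the 100,000 gamblers and the seed are arbitrary choices, and counting winning rounds with a binomial draw is just a shortcut for the round-by-round product):

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamblers = 1000, 100_000

# Number of winning rounds per gambler is Binomial(n, 1/2);
# wealth after n rounds is 1.35^wins * 0.7^(n - wins), starting from 1.
wins = rng.binomial(n, 0.5, size=gamblers)
wealth = 1.35 ** wins * 0.7 ** (n - wins)

print(np.median(wealth))   # essentially 0: a typical gambler is ruined
print(wealth.mean())       # a much larger average, driven by a lucky few
print(1.025 ** n)          # the theoretical E(C_n) with C = 1
```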
Prove that there exists a point $x_0∈ [a,b]$, where $f(x)$ is continuous and $f(x_0)>0$, then $∫^a_bf(x)dx>0$ | Your question is a little messy in terms of its statements and phrasing, so I'll demonstrate two separate and possible cases :
First case :
If you have the facts : $$f(x) \space\text{continuous and integrable on} \space [a,b] \space \text{and} \space f(x_0) > 0 \space \text{for} \space x_0 \in [a,b]$$
then you're missing out on one fact, that $b-a >0$.
To see that, take the function :
$$g(t) = \int_a^t f(x)dx$$
The function $f(x)$ is continuous, which means that $g(x)$ will be continuous on such an interval, and of course differentiable too. Taking the interval $[a,b]$, which is the one that is nicely set and defined for $f(x)$ via the exercise, the Mean Value Theorem can be applied, yielding some $x_0 \in (a,b)$ :
$$g'(x_0)=\frac{g(b)-g(a)}{b-a} = \frac{\int_a^bf(x)dx-\int_a^af(x)dx}{b-a} = \frac{\int_a^bf(x)dx}{b-a}$$
$$ \Rightarrow $$
$$\bigg[\int_a^tf(x)dx\bigg]'_{t=x_0} = \frac{\int_a^bf(x)dx}{b-a} \Rightarrow f(x_0) = \frac{\int_a^bf(x)dx}{b-a} \Leftrightarrow \int_a^bf(x)dx=f(x_0)(b-a)$$
Now, $f(x_0)>0$ from the facts given. If also $b-a>0$, then we have proved that :
$$\int_a^b f(x)dx>0$$
Second case :
If you have the facts : $$f(x) \space\text{continuous and integrable on} \space [a,b] \space \text{and} \space \int_a^b f(x)\,dx > 0$$
then one can prove by taking the same function as above :
$$g(t) = \int_a^t f(x)dx$$
and following a similar path by applying the Mean Value Theorem, that there exists $x_0 \in [a,b] : f(x_0) > 0$.
Note : ("Shout out" to @DougM as he also mentioned the Mean Value Theorem approach in the comments as I was writing the answer) |
Give a minimum sized structure such that this sentence is invalid. | Minimum-sized is asking you to find a structure with the smallest possible number of elements in which the sentence is not valid. Just because the relation is given the name $<$ doesn't mean that it has to have the usual properties of an ordering relation. Take a model with 2 elements $a$ and $b$ say and define $x < y$ to hold iff $x \neq y$. Then $a < b$ and $b < a$, but $\lnot a < a$. I'll leave it to you to show that the sentence holds for any binary relation on a singleton set (there are only two such relations to check) (and hence that 2 is the minimum size). |
How to prove an inequality | $$\frac{a+c}{b+d}-\frac ab=\frac{b(a+c)-a(b+d)}{(b+d)b}=\frac{bc-ad}{b(b+d)}$$
Similarly, $$\frac{a+c}{b+d}-\frac cd=\cdots=\frac{ad-bc}{(b+d)d}$$
Observe that the signs of the terms are opposite as $a,b,c,d>0$ |
Suppose $X$,$Y$ are sets, $A,B ⊆ X$ and $C,D ⊆ Y$. Compare $(A \times B) − (C \times D)$ with $((A − C) \times B) ∪ (A \times (B − D))$ | If $(x,y)$ is in $((A-C) \times B) \cup (A \times (B-D))$, if in the LHS, then $x \in A, x \notin C, y \in B$ and then $(x,y) \in A \times B$ and $(x,y) \notin C \times D$... The same can be said when it is in the RHS (check this..).
This shows at least one inclusion. The other also holds:
Now if $(x,y) \in (A \times B) - (C \times D)$, we know $x \in A, y \in B$ and $x\notin C$ or $y \notin D$. What can you say about these two last cases? |
Error function etymology: Why the name? | http://en.wikipedia.org/wiki/Errors_and_residuals_in_statistics
An "error" is the difference between a measurement and the value it would have had if the process of measurement were infallible and infinitely accurate. If one uses a single observed value as an estimate of the average of the population of values from which it was taken, then that observed value minus the population average is the error.
Sometimes (often) errors are modeled as being distributed normally, with probability distribution
$$
\varphi_\sigma(x)\,dx = \frac 1 {\sqrt{2\pi}} \exp\left( \frac{-1} 2 \left(\frac x \sigma\right)^2 \right) \, \frac{dx} \sigma
$$
with expected value $0$ and standard deviation $\sigma$.
The cumulative probability distribution function is
$$
\Phi_\sigma(x) = \int_{-\infty}^x \varphi_\sigma(t)\,dt.
$$
Up to a rescaling of $x$, this is the error function. The usual definition of the "error function" omits the factor of $1/2$, and thus the standard deviation of the distribution whose cumulative distribution function is the "error function" is not $1$. I am far from convinced that it ought to be rescaled in that way. |
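The rescaling relation between the normal CDF and the error function, $\Phi_\sigma(x)=\tfrac12\bigl(1+\operatorname{erf}(x/(\sigma\sqrt2))\bigr)$, can be checked numerically (a sketch; `math.erf` is the usual-convention error function):

```python
from math import erf, sqrt

def Phi(x, sigma=1.0):
    """Normal CDF expressed through the error function via the rescaling."""
    return 0.5 * (1 + erf(x / (sigma * sqrt(2))))

print(Phi(0))      # 0.5
print(Phi(1))      # ≈ 0.8413, the familiar standard-normal CDF value
print(Phi(2, 2))   # rescaling: equals Phi(1) with sigma = 1
```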
How to have a negative exponential function whose asymptote is 0? | Choose the function $$f(x)=0.3+0.7e^{-x}$$ |
Virtues of Presentation of FO Logic in Kleene's Mathematical Logic | I think you are right to be a bit puzzled by Kleene's mode of presentation of FOL in his Mathematical Logic. He gives a Hilbert-style axiomatic proof system with an overlay of derived rules which look rather natural-deduction-like. That strikes us, now, as an odd way of proceeding.
Kleene did the same back in his wonderful 1952 Introduction to Metamathematics. Now, immense credit to him for recognizing -- relatively early, and ahead of the crowd -- that Gentzen's work on deductive systems was deeply important (and natural!). But my sense is that Kleene didn't fully appreciate the change of perspective that is involved in moving from a Hilbertian logistic system [regarding logic as a body of truths, to be systematised axiomatically] to a Gentzen deductive system [regarding logic as a body of inferential rules, so the key notion is not that of logical truth but of valid deduction]. And because he didn't fully appreciate the change of perspective, we get what now seems to be Kleene's strange device of starting with an axiomatic system and giving it a Gentzen-like superstructure.
Of course, it is all too easy to sound patronising with historical hindsight! So it is worth noting that e.g. John Corcoran can write "Three Logical Theories" as late as 1969 (Philosophy of Science, Vol. 36, No. 2 (Jun., 1969), pp. 153-177), finding it still novel and necessary to stress the distinctions between different types of logical theory. So don't get me wrong: I immensely admire Kleene 1952 in particular, which is still hugely worth reading. But yes, the way he presents FOL perhaps reflects a transitional stage in our understanding of the relations between different types of logical theory. |
Why are morphisms of finite type defined in the usual way? | For question one, consider the map from $\mathbb{P}^1$ to a point. This is locally of finite type (and of finite type) but there's no way to cover the base with affines such that the pre-images are affine.
For question two, for most applications (to classical algebraic geometry, number theory, etc.) we indeed do not much care about the difference. (EDIT: I had a nonsense counterexample earlier... here is a good one: example of locally finite type not finite type)
As a word of caution, despite the seeming similarity between the two definitions, they're really quite different. "Of finite type" includes any morphism of polynomial rings over a fixed base ring, so basically anything you care about in classical algebraic geometry and more-or-less anything you care about in non-classical algebraic geometry as well. |
Is the absolute max/min and the local max/min an $x$ or $y$ value? | Yes, that is correct. You distinguish absolute maximums and minimums concern the entire range of y values. Local maximums and minimums concern whether the y is the greatest or smallest as $x$ approaches $c$ from the left and the right. |
Nonempty intersection between approximate point spectrum and residual spectrum | Assume $X$ is a Banach space and $A$ is a bounded linear operator on $X$. $\lambda$ is in the point spectrum iff $\mathcal{N}(A-\lambda I) \ne \{0\}$. $\lambda$ is in the continuous spectrum iff $\mathcal{N}(A-\lambda I)=\{0\}$ and $\overline{\mathcal{R}(A-\lambda I)}=X$. Everything else is the residual spectrum.
You want $\lambda$ to be in the residual spectrum and in approximate point spectrum. So, $\mathcal{N}(A-\lambda I)=\{0\}$ is required, $\overline{\mathcal{R}(A-\lambda I)}\ne X$ is required, and there must exist a sequence of unit vectors $\{ x_n \}$ such that $(A-\lambda I)x_n \rightarrow 0$. Let $X=\ell^2$ and define
$$
A(x_1,x_2,x_3,\cdots) = (0,x_1,\frac{1}{2}x_2,\frac{1}{3}x_3,\cdots).
$$
Clearly $\mathcal{N}(A)=\{0\}$, $(1,0,0,0,\cdots)\in\mathcal{R}(A)^{\perp}$, and $\{(1,0,0,\cdots),(0,1,0,\cdots),(0,0,1,\cdots),\cdots\}$ is a sequence of unit vectors whose images under $A$ converge in norm to $0$. |
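A finite truncation of this weighted shift makes the approximate-point-spectrum behaviour at $\lambda=0$ visible (a sketch using numpy; the truncation size 50 is an arbitrary choice):

```python
import numpy as np

N = 50
# Truncation of A(x1, x2, x3, ...) = (0, x1, x2/2, x3/3, ...): a weighted shift.
A = np.zeros((N, N))
for j in range(N - 1):
    A[j + 1, j] = 1 / (j + 1)

e = np.eye(N)
norms = [np.linalg.norm(A @ e[n]) for n in range(N - 1)]
print(norms[:5])   # 1, 1/2, 1/3, ... -> 0, so 0 is an approximate eigenvalue
print(A[0])        # first row is zero: e_1 is orthogonal to the range
```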
Show that $f(n) = \left(\frac{\sqrt 2}{2}(1+i)\right)^n$ diverges | $$\frac{\sqrt2}2(1+i)=\frac1{\sqrt2}+\frac1{\sqrt2}i=e^{\frac\pi4i}\implies$$
$$f(n)=e^{\frac{n\pi}4i}=\begin{cases}1&,\;\;n=0\pmod 8\\\frac1{\sqrt2}(1+i)&,\;\;n=1\pmod8\\i&,\;\;n=2\pmod8\\\frac1{\sqrt2}(-1+i)&,\;\;n=3\pmod 8\\-1&,\;\;n=4\pmod 8\\-\frac1{\sqrt2}(1+i)&,\;\;n=5\pmod 8\\-i&,\;\;n=6\pmod 8\\\frac1{\sqrt2}(1-i)&,\;\;n=7\pmod 8\end{cases}$$
Thus, as $\;n\to\infty\;,\;\;f(n)\;\;$ approaches nothing: it keeps on attaining all the eight values above. |
Sum of 'inverse' Normal (1/X) random variables. Equivalent resistance calculation | The Question
Let $(R_1, \dots, R_n)$ denote an IID sample of size $n$, where $R_i \sim N(\mu, \sigma^2)$, and let:
$$Z = \frac{1}{R_1} + \frac{1}{R_2} + \dots + \frac{1}{R_n}$$
Find the asymptotic distribution of $R_{eq} = \large\frac{1}{Z}$.
OP asks
From a statistical analysis, it seems that
$$ R_{eq} \overset{n}{\rightarrow} \mathcal{N}\left(\frac{\mu}{n},\frac{\sigma^2}{n^2}\right) $$
... Can you see any way of deriving this last relationship analytically ?
Answer:
No, because the relationship is wrong, and does not hold.
Theoretically, even if one could apply the Central Limit Theorem, it would be the pdf of $Z$ that would be asymptotically Normal ... not the pdf of $1/Z$.
To illustrate that it does not work, here is a one-line Monte Carlo simulation of $Z$ (in Mathematica), as a function of $n$, when say $\mu = 300$ and $\sigma = 5$:
Zdata[n_] := Plus@@Table[RandomReal[NormalDistribution[300,5], 10^5]^-1, {n}];
The following plot compares:
the empirical pdf of $R_{eq} = \large\frac{1}{Z}$ (squiggly BLUE curve)
the OP's proposed fit model (dashed red curve)
Plainly, the fit does not work.
A better fit
Suggested better fit ...
As above, the asymptotic Normal model is not the correct model ... however, if $\mu$ is large relative to $\sigma$, then a Normal fit of form: $\mathcal{N}\left(\frac{\mu}{n} - blah,\frac{\sigma^2}{n^3}\right)$ appears to perform reasonably well.
For the same example as above, with $n = 100$ (and blah = 0), the fit is:
For $n = 800$ (and blah again 0), the fit is worse:
Plainly, as $n$ increases, a mean adjustment of some function $blah(\mu, \sigma, n)$ is also required. |
i.i.d. random variables' questions. | Minor mistake: $F(1)=1-2^{-1}=1-1/2=1/2$. |
Suppose that $a$ and $b$ belong to a commutative ring $R$. If $a$ is a unit of $R$ and $b^{2}=0$ . Show that $a+b$ is a unit of $R$ | No, the product of two units is not $1$, but you are on the right track.
In fact, a more general statement is true: if $a$ is a unit, and $b$ is nilpotent, then $a+b$ is a unit.
The hint by Mark gives $(a+b)(a-b)=a^2$. How can you modify the second factor such that product is equal to $1$? (or said more simply: if the product of two elements is a unit, then both elements are actually units) |
Finding the closed form of the determinant of the Hilbert matrix | If you are willing to accept the use of special functions, then there is an explicit closed form for the determinant of the $n\times n$ Hilbert matrix $\mathbf H_n$. Using the expression for the $\mathbf D$ factor of the $\mathbf L\mathbf D\mathbf L^\top$ decomposition shown in this MO answer (and originally derived by Hitotumatu), one can multiply together the diagonal elements of $\mathbf D$ (and then recall that $\det \mathbf L=1$) to get
$$\begin{align*}
\det \mathbf H_n&=\det\mathbf D=\prod_{k=1}^n \frac1{(2k-1)\tbinom{2k-2}{k-1}^2}\\
&=\frac{G(n+1)^4}{G(2n+1)}
\end{align*}$$
where $G(n)=\prod\limits_{k=1}^{n-1}\Gamma(k)$ is the Barnes $G$-function. |
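The product formula can be confirmed in exact rational arithmetic for small $n$ (a sketch; the determinant is computed by naive Gaussian elimination over fractions, which is safe here because the Hilbert matrix is positive definite, so no pivot vanishes):

```python
from fractions import Fraction
from math import comb

def det(M):
    """Determinant by Gaussian elimination in exact rational arithmetic."""
    M = [row[:] for row in M]
    n, d = len(M), Fraction(1)
    for i in range(n):
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

def hilbert(n):
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

def formula(n):
    p = Fraction(1)
    for k in range(1, n + 1):
        p /= (2 * k - 1) * comb(2 * k - 2, k - 1) ** 2
    return p

for n in range(1, 6):
    assert det(hilbert(n)) == formula(n)
print(formula(3))   # 1/2160
```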
Partial derivative of a composite function | If we identify the functions of $ \ x \ $ and $ \ y \ $ involved in the definition of the function $ \ \varphi \ $ by
$$f(x,y) \ = \ \varphi (\ \underbrace{\frac yx}_u \ , \ \underbrace{ x^2-y^2}_v \ , \ \underbrace{y-x}_w \ ) \ , $$
we can use the multivariate extension of the Chain Rule to write
$$ \frac{\partial f}{\partial x} \ = \ \frac{\partial \varphi}{\partial u}\frac{\partial u}{\partial x} \ + \ \frac{\partial \varphi}{\partial v}\frac{\partial v}{\partial x} \ + \ \frac{\partial \varphi}{\partial w}\frac{\partial w}{\partial x} \ \ , $$
$$ \frac{\partial f}{\partial y} \ = \ \frac{\partial \varphi}{\partial u}\frac{\partial u}{\partial y} \ + \ \frac{\partial \varphi}{\partial v}\frac{\partial v}{\partial y} \ + \ \frac{\partial \varphi}{\partial w}\frac{\partial w}{\partial y} \ \ .$$
We can find from the available information that
$$ \frac{\partial u}{\partial x} \ = \ -\frac{y}{x^2} \ , \ \frac{\partial v}{\partial x} \ = \ 2x \ \ , \ \frac{\partial w}{\partial x} \ = \ -1 \ \ , $$
$$ \frac{\partial u}{\partial y} \ = \ \frac{1}{x} \ , \ \frac{\partial v}{\partial y} \ = \ -2y \ \ , \ \frac{\partial w}{\partial y} \ = \ 1 \ \ . $$
However, since we know nothing else about the function $ \ \varphi (u,v,w) \ , $ we cannot develop the partial derivatives for $ \ f \ $ any further. |
Given a language $L$, is there a name/notation for the set $\{w^{n}\mid w\in L, n\in\mathbb N\}$? | I would write it as follows
$$
\bigcup_{w \in L} w^*
$$ |
Probability : Venn diagrams; independent | Edit: as @Henry commented, in general Venn diagrams do not rely on surface in any way, the only thing that matters are the intersections. If you consider your diagrams are pure Venn diagrams, then it is impossible to tell apart your two diagrams, and so it is to deduce independence.
However, if you impose your diagram to respect the rule "the area is proportional to the probability", then the following applies. From what I've seen around the web, such diagrams are called scaled Euler diagram or, more informally, proportional Venn diagrams.
Think of probabilities as areas. For instance, $P(A)$ is the area of the red rectangle, divided by the total area $\Omega$. Computing $P(A|B)$ is the same, but here you only consider your universe to be the area delimited by $B$.
In the first diagram, it can be seen that the area of the red rectangle divided by the total surface and the area of the small rectangle $AB$ divided by the area of the rectangle $B$ are equal. If we put it into mathematical terms, we get $P(A) = P(A \cap B) / P(B) = P(A|B)$, so by definition $A$ and $B$ are independent.
In the second diagram, it is not the case (you may want to prove it with an exact computation of the areas of all surfaces, but I guess it's not the point of the example you read): here, the ratio of the area of $A \cap B$ to the area of $B$ and the ratio of the area of $A$ to the area of $\Omega$ are different, so $A$ and $B$ are not independent.
However, note that using Venn diagrams to deduce independence of variables is not a good idea; Venn diagrams are not made for such purposes, so proving independence using them is not advised.
Closed form for $\prod_{k=0}^n\binom{n}{k}x^ky^{n-k}$. | With a bit of algebra $$\prod_{k=0}^n\binom{n}{k}=\frac{(n!)^{n+1}}{\prod_{k=0}^n(k!)^2}=\frac{(n!)^{n+1}}{(\prod_{k=1}^nk^{n-k+1})^2}=\prod_{k=1}^nk^{-n+2k-1}=\frac1{(n!)^{n+1}}\left(\color{red}{\prod_{k=1}^nk^k}\right)^2$$
where the expression in red doesn't seem easy to simplify. Of course, as @Thomas showed in the comment, $$\prod_{k=0}^n x^ky^{n-k}=(xy)^{\binom{n+1}{2}}$$
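The identity above is easy to confirm in exact integer arithmetic (a quick sketch):

```python
from math import comb, factorial
from fractions import Fraction

def lhs(n):
    """prod_{k=0}^{n} C(n, k)"""
    p = 1
    for k in range(n + 1):
        p *= comb(n, k)
    return p

def rhs(n):
    """(prod_{k=1}^{n} k^k)^2 / (n!)^{n+1} -- the form with the 'red' term."""
    hyper = 1
    for k in range(1, n + 1):
        hyper *= k ** k
    return Fraction(hyper ** 2, factorial(n) ** (n + 1))

for n in range(1, 10):
    assert lhs(n) == rhs(n)
print(lhs(5))   # 2500
```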
EDIT: it seems that we can get an analytic form (but not closed due to the periodicity of the $\tilde B_k$ functions) for the red term using the Euler-Maclaurin formula, something like
$$\left(\prod_{k=1}^nk^k\right)^2=\exp\left(\sum_{k=1}^n 2k\ln k\right)=\exp\left(\int_1^n \left(2t\ln t+\frac{\tilde B_3(t)}{3t^2}\right)\mathrm dt+ \left(n+\frac16\right)\ln n\right)$$
where $\tilde B_3$ is the $1$-periodic extension of the Bernoulli polynomial on $[0,1]$
$$B_3(x)=x^3-\frac32x^2+\frac12x$$ |
is this solution of $\int_0^\infty \sin(z^2) dz $ valid? | The method works, but in present form it lacks justification. The key step is to prove that
the integral of $\exp(-z^2)$ along the circular arc from $R$ to $Re^{\pi i/4}$ tends to $0$ as $R\to \infty$.
Once you have this, the Cauchy integral theorem tells you that
$$\int_0^R \exp(-(re^{\pi i /4})^2)\,e^{\pi i/4}dr - \int_0^R \exp(-r^2)\,dr \to 0$$
as $ R\to\infty $. It follows that
$$\int_0^\infty \exp(-r^2 i )\,dr = \frac{\sqrt{\pi}}{2}e^{-\pi i /4}$$
which is the desired result.
It remains to prove the statement emphasized above. First,
$$\left| \int_0^{\pi/4} \exp(-R^2 e^{2i\theta} )\,R\,d\theta \right| \le
R\int_0^{\pi/4} \exp(-R^2 \cos 2\theta ) \,d\theta
\tag2$$
Next, split the interval of integration into two subintervals at $\pi/4-R^{-3/2}$. The short interval contributes at most $R\cdot R^{-3/2}=R^{-1/2}$. On the long one the integrand is bounded by $\exp(-R^2 \cos (\pi/2-2R^{-3/2}) )$ where $$-R^2 \cos (\pi/2-2R^{-3/2})= - R^2 \sin (2R^{-3/2})\sim -2R^{1/2}$$
and of course $R\exp(-2R^{1/2})\to 0$ as $R\to \infty$. |
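The emphasized arc estimate can also be observed numerically (a sketch; the arc integral is approximated with a plain trapezoid rule on a fine grid, and the computed values decay as $R$ grows, consistent with the claim that they tend to $0$):

```python
import numpy as np

def arc_integral(R, m=200_001):
    """Trapezoid approximation of the integral of exp(-z^2)
    along the arc z = R e^{it}, 0 <= t <= pi/4."""
    t = np.linspace(0.0, np.pi / 4, m)
    z = R * np.exp(1j * t)
    f = np.exp(-z**2) * 1j * z          # dz = i R e^{it} dt = i z dt
    dt = t[1] - t[0]
    return (f[:-1] + f[1:]).sum() * dt / 2

for R in (5, 10, 20):
    print(R, abs(arc_integral(R)))      # shrinks as R grows
```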
$L^2$ mapping is necessarily onto or not? | This map is not onto.
The image consists of mean zero functions in the sense that $\int_{\mathbb{R}} Sf(x) dx=0$. Roughly speaking this is because
$$\int_\mathbb{R} Sf(x)dx=\int_\mathbb{R}\int_0^1 (f(x)-f(x+y)) dy dx = \int_0^1 \left( \int_\mathbb{R} f(x) dx - \underbrace{\int_\mathbb{R} f(x+y) dx}_{=\int_\mathbb{R} f(x) dx} \right) dy = 0. $$
Of course I cheated because the integral $\int_\mathbb{R} f(x) dx$ does not necessarily exist for a general $L^2$ function $f$.
However, you can make this argument precise by considering truncated integrals $\int_{-N}^M Sf(x) dx$ and using the Cauchy-Schwarz inequality to show that the integral $\int_{\mathbb{R}} Sf(x) dx$ does exist and is equal to zero. |
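The cancellation can be seen numerically (a rough sketch of my own) for the Gaussian $f(x)=e^{-x^2}$, using $Sf(x)=\int_0^1\big(f(x)-f(x+y)\big)\,dy$ and truncating the outer integral at $\pm 8$:

```python
from math import exp

def f(x):
    return exp(-x * x)

def Sf(x, m=200):
    # Sf(x) = ∫_0^1 (f(x) - f(x+y)) dy, midpoint rule
    h = 1.0 / m
    return h * sum(f(x) - f(x + (j + 0.5) * h) for j in range(m))

def integral_Sf(N=8.0, n=2000):
    # truncated integral ∫_{-N}^{N} Sf(x) dx, midpoint rule
    h = 2.0 * N / n
    return h * sum(Sf(-N + (i + 0.5) * h) for i in range(n))

result = integral_Sf()
assert abs(result) < 1e-4  # ≈ 0, up to quadrature and truncation error
```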
Matrix Norm Inequality Implies Invertibility | Note that from $\|B-A\|<\frac{1}{\|A^{-1}\|}$, we know
$$
\|I-BA^{-1}\|=\|(I-BA^{-1})AA^{-1}\|=\|(A-B)A^{-1}\|\le\|A-B\|\,\|A^{-1}\| <1$$
So the geometric series of real numbers
$$\sum_{j=0}^{\infty}\|I-BA^{-1}\|^{j}$$
converges, hence the corresponding matrix series converges as well:
$$M:=\sum_{j=0}^{\infty}(I-BA^{-1})^{j}$$
Furthermore let us do a direct computation showing $(I-BA^{-1})M = M-I$:
$$(I-BA^{-1})M = (I-BA^{-1})\sum_{j=0}^{\infty}(I-BA^{-1})^{j} =\sum_{j=1}^{\infty}(I-BA^{-1})^{j}\\
= (I-BA^{-1})+ (I-BA^{-1})^2 + (I-BA^{-1})^3 +\cdots\\
= \left(I + (I-BA^{-1})+ (I-BA^{-1})^2 + (I-BA^{-1})^3 +\cdots\right) -I\\
= \left(\sum_{j=0}^{\infty}(I-BA^{-1})^{j}\right)-I = M-I$$
Thus $M-BA^{-1}M=M-I$, that is, $BA^{-1}M=I$, which shows that $A^{-1}M$ is a right inverse of $B$, and hence (for matrices) the inverse of $B$.
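As a sanity check, here is a small numeric sketch of the Neumann-series construction in Python (the example matrices, with $A=I$, are my own choice, not from the question):

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matadd(X, Y):
    return [[a + b for a, b in zip(r, s)] for r, s in zip(X, Y)]

def inverse_of_B(A_inv, B, terms=200):
    # M = sum_{j=0}^{terms} (I - B A^{-1})^j, then B^{-1} = A^{-1} M
    n = len(B)
    I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    E = matadd(I, [[-v for v in row] for row in matmul(B, A_inv)])  # I - B A^{-1}
    M, P = I, I
    for _ in range(terms):
        P = matmul(P, E)
        M = matadd(M, P)
    return matmul(A_inv, M)

# example: A = I (so ||A^{-1}|| = 1) and ||B - A|| < 1
A_inv = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.2, 0.1], [-0.1, 0.9]]
B_inv = inverse_of_B(A_inv, B)
BBinv = matmul(B, B_inv)
assert all(abs(BBinv[i][j] - (1.0 if i == j else 0.0)) < 1e-9
           for i in range(2) for j in range(2))
```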
How can I rewrite the Taylor series in order to find the interval of convergence? | $$f(x) = \ln(x)$$
$$f'(x)=\frac1x, c_1 = \frac1{8\cdot 1!}$$
$$f''(x)=-\frac1{x^2}, c_2 = -\frac1{8^2\cdot 2!}$$
$$f^{(3)}(x)=\frac2{x^3}, c_3 = \frac{2}{8^3\cdot 3!}=\frac{1}{8^3\cdot 3}$$
$$f^{(4)}(x) = -\frac{3!}{x^4}, c_4 = -\frac{3!}{8^4\cdot 4!}=-\frac1{8^4\cdot 4}$$
For $n\ge 1$,
we have $c_n=\frac{(-1)^{n-1}}{8^n \cdot n}$
Hence $$f(x) = \ln (8) + \sum_{n=1}^\infty \frac{(-1)^{n-1}}{8^n \cdot n}\cdot (x-8)^n$$
and it converges if $$\lim_{n\to \infty}\left|\frac{\frac{(-1)^n}{8^{n+1}(n+1)}(x-8)^{n+1}}{\frac{(-1)^{n-1}}{8^nn}(x-8)^n} \right|<1$$
$$\frac{|x-8|}8<1,$$ that is, $0<x<16$.
Also, remember to check the endpoints: at $x=16$ the series becomes the convergent alternating series $\sum\frac{(-1)^{n-1}}{n}$, while at $x=0$ it becomes $-\sum\frac1n$, which diverges. So the interval of convergence is $(0,16]$.
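A quick Python check (my own sketch) that the series, summed from $n=1$, reproduces $\ln x$ inside the interval:

```python
from math import log

def ln_series(x, terms=200):
    # ln(8) + sum_{n=1}^{terms} (-1)^(n-1) (x-8)^n / (8^n n)
    s = log(8)
    r = (x - 8) / 8
    p = 1.0
    for n in range(1, terms + 1):
        p *= r                     # p = r^n
        s += p / n if n % 2 == 1 else -p / n
    return s

for x in (3.0, 5.0, 8.0, 12.0, 14.0):
    assert abs(ln_series(x) - log(x)) < 1e-9
```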
Using induction more than once in a proof | It is certainly possible to use induction more than once in a proof. Perhaps one of the more interesting applications of this idea is Cauchy induction.
To perform Cauchy induction, one first proves a base case, $P(1)$, and then proves $P(n)$ implies $P(2n)$. This inductively implies $P(2^n)$. Finally, you use decreasing induction, $P(n)$ implies $P(n-1)$, to show $P(n)$ for every natural number $n$. |
non-intersecting lines inside a projective quadric | Two lines in the projective plane intersect. This is a quadric surface in $\Bbb P^3$. (Think of the saddle surface $z=xy$ in $\Bbb R^3$. Note that if you fix $x=c$, you get a line $(c,y,cy)\subset\Bbb R^3$. For different values of $c$, these lines are disjoint.) |
calculating probability of 3 successful rolls of 4+ on six 6 sided dice | The probability of exactly two 4+'s is $p_2 = \binom{6}{2}\cdot (1/2)^6$, and the probability of exactly three 4+'s is $p_3 = \binom{6}{3}\cdot (1/2)^6$. If you want there to be at least two 4+'s, for instance, the probability is $p_2+p_3+p_4+p_5+p_6$. |
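These values can be confirmed by brute force (my own check), enumerating all $6^6$ equally likely outcomes:

```python
from itertools import product
from math import comb

# exact count over all 6^6 outcomes; a face of 4, 5, or 6 is a "success"
counts = [0] * 7
for roll in product(range(1, 7), repeat=6):
    counts[sum(face >= 4 for face in roll)] += 1

total = 6 ** 6
assert counts[2] / total == comb(6, 2) / 2 ** 6   # exactly two successes
assert counts[3] / total == comb(6, 3) / 2 ** 6   # exactly three successes
# at least two successes
assert sum(counts[2:]) / total == sum(comb(6, k) for k in range(2, 7)) / 2 ** 6
```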
Left multiplication operator in Banach Algebra | $M_x$ invertible implies that $M_x$ is surjective, so there exists $y$ such that $M_x(y)=xy=1$. Thus $x$ has a right inverse.
$M_y$ is an open map, since $M_x$ is invertible: for an open set $U$ we have $M_x(M_y(U))=U$ (because $xy=1$ gives $M_x\circ M_y=\operatorname{id}$), so $M_y(U)=M_x^{-1}(U)$ is open. Since $M_y(0)=0$ and $M_y$ is open, $M_y(U_0)$ is a neighborhood of $0$ for every neighborhood $U_0$ of $0$; hence for every $z$ there exists a nonzero scalar $c$ with $cz\in M_y(U_0)$, and since the range of $M_y$ is a linear subspace, $z$ lies in the range. Thus $M_y$ is surjective. It is also injective: $yz=0$ implies $z=(xy)z=x(yz)=0$. So $M_y$ is bijective and bounded, hence invertible, and there exists $z$ such that $M_y(z)=yz=1$. Finally, $x=x(yz)=(xy)z=z$, so $xy=yx=1$ and $x$ is invertible.
compute sums of x,y given a condition | It is not possible. The function $f:\mathbb{R}\to\mathbb{R}^+$ given by:
$$f(x)=x+\sqrt{x^2+1}$$
is bijective. So, assume that $p=f(2)\cdot f(3)=6+5 \sqrt{2}+3 \sqrt{5}+2 \sqrt{10}$.
Obviously $x=2,y=3$ is a solution of $f(x)\cdot f(y)=p$, giving $x+y=5$.
But another solution exists, given by:
$$x=y=f^{-1}\left(\sqrt{6+5 \sqrt{2}+3 \sqrt{5}+2 \sqrt{10}}\right).$$
Since $f^{-1}(x)=\frac{x^2-1}{2x}$, the solution:
$$ x = y = \sqrt{\frac{5}{2} \left(1+\sqrt{2}\right)} $$
gives:
$$ x+y = \sqrt{10(1+\sqrt{2})} \neq 5. $$ |
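A quick floating-point check (my addition) that the symmetric solution really satisfies the same equation but gives a different sum:

```python
from math import sqrt, isclose

def f(x):
    return x + sqrt(x * x + 1)

p = f(2) * f(3)                      # the target value of f(x) f(y)

x = sqrt(2.5 * (1 + sqrt(2)))        # the symmetric solution x = y
assert isclose(f(x) * f(x), p)       # satisfies f(x) f(y) = p
assert abs(2 * x - 5) > 0.05         # but x + y ≈ 4.91, not 5
```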
Context Free Grammer (CFG) for a language | HINT: With $S$ as the initial symbol, start with these productions:
$$\begin{align*}
&S\to aL\mid bL\mid Ra\mid Rb\\
&L\to aL\mid bL\mid X
\end{align*}$$
Have $R$ do something similar to what $L$ does, and have $X$ generate the language
$$\big\{x\#y:x,y\in\{a,b\}^*\land |x|=|y|\big\}\;.$$
In other words, generate excess length on one side, then generate equal lengths on both sides from that point on. |
Are there sets of zero measure and full Hausdorff dimension? | For any $r<1$, you can construct a Cantor set with Hausdorff dimension $r$ by varying the lengths of the intervals in the usual Cantor set construction. In particular, you can let $C_n\subset[0,1]$ be a Cantor set of Hausdorff dimension $1-1/n$ for each $n$. The union $C=\bigcup C_n$ then has Lebesgue measure $0$ because each $C_n$ does, but Hausdorff dimension $1$. |
Prove that memorylessness of a discrete distribution defines geometric distribution | Hint:
For any given $k\ge 2,$ we have $$P(X=1)= P(X=k+1\mid X\ge k)\\ P(X=2)=P(X=k+1\mid X\ge k-1).$$ Dividing these equations and using the definition of conditional probability gives $$\frac{P(X\ge k)}{P(X\ge k-1)}=\frac{P(X=2)}{P(X=1)}.$$
Note that the RHS is independent of $k.$ |
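As a sanity check (my addition): the identities above hold exactly for the geometric distribution with $P(X=k)=(1-p)^k p$ on $k=0,1,2,\dots$, the convention matching this conditioning; here verified with exact rational arithmetic:

```python
from fractions import Fraction

p = Fraction(1, 3)     # an arbitrary success probability

def P(k):              # P(X = k) = (1-p)^k * p, for k = 0, 1, 2, ...
    return (1 - p) ** k * p

def P_ge(k):           # P(X >= k) = (1-p)^k
    return (1 - p) ** k

for k in range(2, 30):
    assert P(k + 1) / P_ge(k) == P(1)        # P(X=k+1 | X>=k)   = P(X=1)
    assert P(k + 1) / P_ge(k - 1) == P(2)    # P(X=k+1 | X>=k-1) = P(X=2)
    assert P_ge(k) / P_ge(k - 1) == P(2) / P(1)   # the constant ratio
```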
how to translate the question into graph theory terms? | Once you draw the graph in accordance to my instructions from the comments, you will quickly see that this is impossible. If there is a route without repeating rooms, then clearly 8 should be the start and 9 the end. Then you see that you have to get to room 2 from room 3, but from room 2 you can only get to 10 and since 10 is the only way to get to 9, the end of the route must be 3-2-10-9. But given that from 11 you can only get to 10 you have a problem. |
Exercise about of the closure of an Set | Equality does not hold in (b). To show that $\subset$ holds, consider $x\in\overline{\bigcap_{\alpha\in I}A_\alpha}.$ It is in every closed set that contains $\bigcap_{\alpha\in I}A_\alpha$. So what does that say about closed sets containing $A_\alpha$ for some particular $\alpha$?
Equality also doesn't hold for (c). To show that $\supset$ holds, consider $x\in\overline A\setminus\overline B$. Every closed set containing $A$ contains $x$, but there is a closed set containing $B$ which does not contain $x$. In particular, $x$ is not contained in $B$. So what does that say about every closed set that contains $A\setminus B$?
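For concreteness, standard counterexamples (my additions, not from the original exercise) showing that the inclusions can be strict:

```latex
% (b): A_1 = (0,1),\ A_2 = (1,2) in \mathbb{R}:
\overline{A_1\cap A_2}=\overline{\varnothing}=\varnothing
\subsetneq \{1\} = \overline{A_1}\cap\overline{A_2}

% (c): A = [0,1],\ B = (0,1) in \mathbb{R}:
\overline{A\setminus B}=\overline{\{0,1\}}=\{0,1\}
\supsetneq \varnothing = \overline{A}\setminus\overline{B}
```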
Rate of convergence conditions | If you assume $x_k\to0$, then the rate of convergence can be estimated as follows:
$$
\lim \frac{|x_{k+1}|}{|x_k|}
\le \lim \frac{C|x_k|+\mathrm o(|x_k|)}{|x_k|}
= C+\underbrace{\lim \frac{\mathrm o(|x_k|)}{|x_k|}}_{\to\,0\,\text{by assumption}}=C
$$
So you can conclude that it is less than or equal to $C$. You cannot do better, because $x_k=C^k$ is a counter-example (here the $\mathrm o(|x_k|)$ term is $0$).
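A tiny simulation (my own example, taking $x_{k+1}=Cx_k+x_k^2$ with $C=\tfrac12$, so the $\mathrm o(|x_k|)$ term is $x_k^2$) illustrating the limit:

```python
C = 0.5

def step(x):
    # x_{k+1} = C x_k + x_k^2 : the o(|x_k|) term is x_k^2
    return C * x + x * x

x = 0.2           # small enough that x_k -> 0
ratios = []
for _ in range(40):
    x_next = step(x)
    ratios.append(x_next / x)
    x = x_next

assert abs(ratios[-1] - C) < 1e-6   # the ratio |x_{k+1}|/|x_k| tends to C
assert all(r > C for r in ratios)   # ...from above: never better than C
```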