Math question please: Rolle's theorem? I have to prove that the equation $$x^5 +3x- 6=0$$ can't have more than one real root. So the function is continuous and has a derivative (both on $R$). In $R$ there must be a point $c$ where $f'(c)=0$, and if I prove this, then the equation has at least one real root. So $5x^4+3 =0$... this equation is only true for $x=0$. How do I prove that this is the only root?
|
So you want to prove that $5x^4+3=0$ has $0$ as its only root? That's not true: $5x^4+3>0$ for every real $x$, so the derivative never vanishes. That is exactly what you need: if $f$ had two real roots, Rolle's theorem would give a point between them where $f'$ vanishes, so $f$ can have at most one real root.
|
Combinatorics alphabet Say I want to arrange the letters a, b, c, d, e, f such that e and f cannot be next to each other.
I would think the answer was $6\times4\times4\times3\times2$, as there are first 6 letters and then 4, since e cannot be next to f.
Thanks.
|
The $6$ letters can be arranged in $6!$ ways without any restriction.
If we force $e,f$ to be together, we can treat them as a single block, so the $6$ letters can be arranged in $2!(5!)$ ways, since $e,f$ can be ordered within the block in $2!$ ways.
So, the required number of arrangements is $6!-2(5!)=720-240=480$.
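As a quick brute-force sanity check, here is a minimal Python sketch (not part of the original answer):

```python
from itertools import permutations

# Count arrangements of a..f in which e and f are not adjacent.
letters = "abcdef"
count = sum(
    1
    for p in permutations(letters)
    if abs(p.index("e") - p.index("f")) != 1
)
print(count)  # 480, matching 6! - 2*5! = 720 - 240
```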
|
Average limit superior Let $\mathcal{l}_\mathbb{R}^\infty$ be the space of bounded sequences in $\mathbb{R}$. We define a map $p: \mathcal{l}_\mathbb{R}^\infty\to\mathbb{R}$ by
$$p(\underline x)=\limsup_{n\to\infty} \frac{1}{n}\sum_{k=1}^n x_k.$$
My notes claim that
$$\liminf_{n\to\infty} x_n\le p(\underline x)\le \limsup_{n\to\infty} x_n.$$
I haven't found a neat way to show that this holds (only a rather complicated argument). Is there an easy, intuitive way?
|
Let $A = \liminf_{n \to \infty} x_n$ and $B = \limsup_{n \to \infty} x_n$. For any $\epsilon > 0$, there is $N$ such that for all $k > N$, $A - \epsilon \le x_k \le B + \epsilon$. Let $S_n = \displaystyle \sum_{k=1}^n x_k$. Then for $n > N$,
$$ S_N + (n-N) (A - \epsilon) \le S_n \le S_N + (n-N) (B + \epsilon) $$
and so
$$ \eqalign{ A - \epsilon &= \lim_{n \to \infty} \frac{S_N + (n-N) (A - \epsilon)}{n}
\le \liminf_{n \to \infty} \dfrac{S_n}{n}\cr
\limsup_{n \to \infty} \dfrac{S_n}{n} &\le \lim_{n \to \infty} \frac{ S_N + (n-N) (B + \epsilon)}{n} = B + \epsilon} $$
Now take $\epsilon \to 0+$.
|
Cohen–Macaulayness of $R=k[x_1, \dots,x_n]/\mathfrak p$ I'm looking for some help with the following question:
Let $k$ be a field and $R=k[x_1, \dots ,x_n]$.
Show that $R/\mathfrak p$ is Cohen–Macaulay if $\mathfrak p$ is a prime ideal
with $\operatorname{height} \mathfrak p \in\lbrace 0,1,n-1,n \rbrace$.
My proof:
If $\operatorname{height} \mathfrak p=0$, then for all $\mathfrak q\in \operatorname{Max}(R)$ we have $\operatorname{height} \mathfrak pR_{\mathfrak q}=0$, therefore $\mathfrak pR_{\mathfrak q}\in \operatorname{Min}(R_{\mathfrak q})=\lbrace 0\rbrace$, so $\mathfrak pR_{\mathfrak q}=(0)$ and we are done. If $\operatorname{height} \mathfrak pR_{\mathfrak q}=1$, then $\operatorname{grade} \mathfrak pR_{\mathfrak q}=\operatorname{height} \mathfrak pR_{\mathfrak q}=1$. Thus there is a regular element $z\in \mathfrak pR_{\mathfrak q}$, and we have $\dim R_{\mathfrak q}/zR_{\mathfrak q}=\dim R_{\mathfrak q}-\operatorname{height}zR_{\mathfrak q}=n-1$. Since $R_{\mathfrak q}/zR_{\mathfrak q}$ is Cohen–Macaulay, $\operatorname{depth}R_{\mathfrak q}/zR_{\mathfrak q}=\dim R_{\mathfrak q}/zR_{\mathfrak q}=n-1$. Hence there exist $y_1,\dots,y_{n-1}\in \mathfrak qR_{\mathfrak q}$ such that $y_1,\dots,y_{n-1}$ is an $R_{\mathfrak q}/zR_{\mathfrak q}$-sequence. Now, if we can produce an $R_{\mathfrak q}/\mathfrak pR_{\mathfrak q}$-sequence of length $n-1$, we are done, because $\dim R_{\mathfrak q}/\mathfrak pR_{\mathfrak q}=\dim R_{\mathfrak q}-\operatorname{height}\mathfrak pR_{\mathfrak q}=n-1$.
A similar argument works for n,n-1.
|
Hint. $R$ integral domain, $\operatorname{ht}(p)=0\Rightarrow p=(0)$. If $\operatorname{ht}(p)=1$, then $p$ is principal. If $\operatorname{ht}(p)=n−1,n$, then $\dim R/p=1,0$.
|
an equality related to the point measure For any positive measure $\rho$ on $[-\pi, \pi]$, prove the following equality:
$$\lim_{N\to\infty}\int_{-\pi}^{\pi}\frac{\sum_{n=1}^Ne^{in\theta}}{N}d\rho(\theta)=\rho(\{0\}).$$
Remark:
It is easy to check that for any fixed positive number $0<\delta<\pi$, $$\left|\int_{\delta}^{\pi}\frac{\sum_{n=1}^Ne^{in\theta}}{N}\,d\rho(\theta)\right|\leq \int_{\delta}^{\pi}\frac{2}{2\sin(\frac{\delta}{2})N}\,d\rho(\theta)\to 0\text{ as }N\to\infty,$$
so
I think we have to show that
$$\int_{-\delta}^{\delta}\frac{\sum_{n=1}^Ne^{in\theta}}{N}d\rho(\theta)\sim \int_{-\delta}^{\delta}d\rho(\theta)\sim \rho(\{0\})?$$
Maybe we also have to choose $\delta=\delta(N)$ etc..
|
Try to show that $\int_{-\delta}^\delta e^{in\theta}\,d\rho(\theta)\rightarrow \rho(\{0\})$, a kind of generalized Riemann–Lebesgue lemma. Then your result will follow from the fact that you are taking a Cesàro average of a convergent sequence. I believe you need some kind of $\sigma$-finiteness condition on your $\rho$ for this to work, though.
|
Is the tangent function (like in trig) the same as tangent lines? So, a 45 degree angle in the unit circle has a tan value of 1. Does that mean the slope of a tangent line from that point is also 1? Or is it something different entirely?
|
The $\tan$ function can be described in four different ways that I can describe, and each adds to a fuller understanding of the tan function.
*
*First, the basics: the value of $\tan$ is equal to the value of $\sin$ over $\cos$.
$$\tan(45^\circ)=\frac{\sin(45^\circ)}{\cos(45^\circ)}=\frac{\frac{\sqrt{2}}{2}}{\frac{\sqrt{2}}{2}}=1$$
*So, the $\tan$ function for a given angle does give the slope of the radius, but only on a unit circle, i.e. only when the radius is one. For instance, when the radius is 2, $2\tan(45^\circ)=2$, but the slope of the 45-degree radius is still 1.
*The value of the $\tan$ for a given angle is the length of the line, tangent to the circle at the point on the circle intersected by the angle, from the point of intersection (A) to the $x$-axis (E).
[figure from Wikipedia: the tangent segment from the point of intersection $A$ to the $x$-axis point $E$]
*The value of the tangent line can also be described as the length of the line $x=r$ (which is a vertical line intersecting the $x$-axis where $x$ equals the radius of the circle) from $y=0$ to where the vertical line intersects the angle.
[figure from Wikipedia: the vertical line $x=r$ cut off between $y=0$ and the angle]
The explanations in examples 3 and 4 might seem counterintuitive at first, but if you think about it, you can see that they are really just reflections across a line of half the specified angle. Image to follow.
The images included are both from Wikipedia.
|
Who are the most inspiring communicators of math for a general audience? I have a podcast series (http://wildaboutmath.com/category/podcast/ and on iTunes https://itunes.apple.com/us/podcast/sol-ledermans-podcast/id588254197) where I interview people who have a passion for math and who have inspired others to take an interest in the subject.
I've interviewed Alfred Posamentier, Keith Devlin, Ed Burger, James Tanton, and other math popularizers I know. I'm trying to get an interview set up with Ian Stewart and I'll see if I can do interviews with Steven Strogatz and Cliff Pickover in 2013.
Who do you know, famous or not, who I should try to get for my series? These people don't need to be authors. They can be game designers, teachers, toy makers, bloggers or anyone who has made a big contribution to helping kids or adults enjoy math more.
|
I recommend Art Benjamin. He's a dynamic speaker, has given lots of math talks to general audiences (mostly on tricks for doing quick mental math calculations, I think), and is an expert on combinatorial proof techniques (e.g. he's coauthor of Proofs That Really Count). Benjamin is a math professor at Harvey Mudd College.
|
Question With Regards To Evaluating A Definite Integral When Evaluating the below definite integral $$\int_{0}^{\pi}(2\sin\theta + \cos3\theta)\,d\theta$$
I get this.$$\left [-2\cos\theta + \frac{\sin3\theta}{3} \right ]_{0}^{\pi} $$
In the above expression I see that $-2$ is a constant which was taken outside the integral sign while integrating. Now the question is: should the $-2$ be distributed throughout, or does it apply only to $\cos\theta$? This is what I mean. Is it $$-2\left[ \cos(\pi) + \frac{\sin3(\pi)}{3} - \left ( \cos(0) + \frac{\sin3(0)}{3} \right ) \right]?$$
Or does the $-2$ stay only with $\cos\theta$?
|
$$\int_{0}^{\pi}(2\sin\theta + \cos3\theta)\,d\theta=\left [-2\cos\theta + \frac{\sin3\theta}{3} \right ]_{0}^{\pi}, $$ as you noted, so the $-2$, as you can see in @Nameless's answer, belongs only to the cosine term, not to all terms.
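For concreteness, evaluating the bracket with the $-2$ attached only to the cosine term gives
$$\left[-2\cos\theta + \frac{\sin3\theta}{3}\right]_{0}^{\pi} = \left(-2\cos\pi + \frac{\sin 3\pi}{3}\right)-\left(-2\cos 0 + \frac{\sin 0}{3}\right) = (2+0)-(-2+0) = 4.$$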
|
Problem from "Differential topology" by Guillemin I am strugling one of the problems of "Differential Topology" by Guillemin:
Suppose that $Z$ is an $l$-dimensional submanifold of $X$ and that $z\in Z$. Show that there exsists a local coordinate system $\left \{ x_{1},...,x_{k} \right \}$ defined in a neighbourhood $U$ of $z$ such that $Z \cap U$ is defined by the equations $x_{l+1}=0,...,x_{k}=0$.
I assume that the solution should be based on the Local Immersion theorem, which states that "If $f:X\rightarrow Y $ is an immersion at $x$, then there exist a local coordinates around $x$ and $y=f(x)$ such that $f(x_{1},...,x_{k})=(x_{1},...,x_{k},0,...,0)$".
I would really appreciate any pointers on how to attack this problem.
|
I found the answers from Henry T. Horton to the questions Why the matrix of $dG_0$ is $I_l$. and Augument, and injectivity. very helpful for solving this question.
To repeat your question, which is found in Guillemin & Pollack's Differential Topology on page 18, Problem 2:
Suppose that $Z$ is an $l$-dimensional submanifold of $X$ and that $z \in Z$. Show that there exists a local coordinate system {$x_1, \dots, x_k$} defined in a neighborhood $U$ of $z$ in $X$ such that $Z \cap U$ is defined by the equations $x_{l+1}=0, \dots, x_k=0$.
Here is my attempt
Consider the following diagram:
$$\begin{array}{ccc}
X & \stackrel{i}{\longrightarrow} & Z\\
\uparrow{\varphi} & & \uparrow{\psi} \\
C & \stackrel{(x_1, \dots, x_l) \mapsto (x_1, \dots, x_l, 0, \dots, 0)}{\longrightarrow} & C^\prime
\end{array}
$$
Since
$$i(x_1, \dots, x_l) = (x_1, \dots, x_l,0,\dots, 0) \Rightarrow di_x(x_1, \dots, x_l) = (I_x, 0,\dots, 0).$$Clearly, $di_x$ is injective, thus the inclusion map $i: X \rightarrow Z$ is an immersion.
Then we choose parametrization $\varphi: C \rightarrow X$ around $z$, and $\psi: C^\prime \rightarrow Z$ around $i(z)$. The map $C \rightarrow C^\prime$ sends $(x_1, \dots, x_l) \mapsto (x_1, \dots, x_l, 0, \dots, 0)$.
The points of $X$ in a neighborhood $\varphi(C)$ around $z$ are those in $Z$ such that $x_{l+1} = \cdots = x_k=0$, and this concludes the proof.
|
Proving the stabilizer is a subgroup of the group to prove the Orbit-Stabiliser theorem I have to prove the OS theorem. The OS theorem states that for some group $G$, acting on some set $X$, we get
$$
|G| = |\mathrm{Orb}(x)| \cdot |G_x| $$
To prove this, I said that this can be written as
$$ |\mathrm{Orb}(x)| = \frac{|G|}{|G_x|}$$
In order to prove the RHS, I can say that we can use Lagrange's theorem, assuming that the stabiliser is a subgroup of the group $G$, which I'm pretty sure it is. I don't really know how I'd go about proving this, though. Also, I was thinking: would this prove the whole theorem or just the RHS?
I didn't just want to Google a proof, I wanted to try and come up with one myself because it's easier to remember. Unless there is an easier proof?
|
You’re on the right track. By definition $G_x=\{g\in G:g\cdot x=x\}$. Suppose that $g,h\in G_x$; then $$(gh)\cdot x=g\cdot(h\cdot x)=g\cdot x=x\;,$$ so $gh\in G_x$, and $G_x$ is closed under the group operation. Moreover, $$g^{-1}\cdot x=g^{-1}\cdot(g\cdot x)=(g^{-1}g)\cdot x=1_G\cdot x=x\;,$$ so $g^{-1}\in G_x$, and $G_x$ is closed under taking inverses. Thus, $G_x$ is indeed a subgroup of $G$. To finish the proof, you need only verify that there is a bijection between left cosets of $G_x$ in $G$ and the orbit of $x$.
Added: The idea is to show that just as all elements of $G_x$ act identically on $x$ (by not moving it at all), so all elements of a left coset of $G_x$ act identically on $x$. If we can also show that each coset acts differently on $x$, we’ll have established a bijection between left cosets of $G_x$ and members of the orbit of $x$.
Let $h\in G$ be arbitrary, and suppose that $g\in hG_x$. Then $g=hk$ for some $k\in G_x$, and $$g\cdot x=(hk)\cdot x=h\cdot(k\cdot x)=h\cdot x\;.$$ In other words, every $g\in hG_x$ acts on $x$ the same way $h$ does. Let $\mathscr{G}_x=\{hG_x:h\in G\}$, the set of left cosets of $G_x$, and let
$$\varphi:\mathscr{G}_x\to\operatorname{Orb}(x):hG_x\mapsto h\cdot x\;.$$
The function $\varphi$ is well-defined: if $gG_x=hG_x$, then $g\in hG_x$, and we just showed that in that case $g\cdot x=h\cdot x$.
It’s clear that $\varphi$ is a surjection: if $y\in\operatorname{Orb}x$, then $y=h\cdot x=\varphi(hG_x)$ for some $h\in G$. To complete the argument you need only show that $\varphi$ is injective: if $h_1G_x\ne h_2G_x$, then $\varphi(h_1G_x)\ne\varphi(h_2G_x)$. This is perhaps most easily done by proving the contrapositive: suppose that $\varphi(h_1G_x)=\varphi(h_2G_x)$, and show that $h_1G_x=h_2G_x$.
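To see the count in a concrete case, here is a minimal Python sketch (purely illustrative) using $G=S_3$ acting on $\{0,1,2\}$ and checking $|G| = |\operatorname{Orb}(x)|\cdot|G_x|$:

```python
from itertools import permutations

# G = S_3, acting on {0, 1, 2}; a permutation g is the map i -> g[i].
G = list(permutations(range(3)))
x = 0
orbit = {g[x] for g in G}                    # points reachable from x
stabilizer = [g for g in G if g[x] == x]     # elements fixing x

print(len(G), len(orbit) * len(stabilizer))  # 6 6
```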
|
Given a ratio of the height of two similar triangles and the area of the larger triangle, calculate the area of the smaller triangle Please help, I've been working on this problem for ages and I can't seem to get the answer.
The heights of two similar triangles are in the ratio 2:5. If the area of the larger triangle is 400 square units, what is the area of the smaller triangle?
|
It may help you to view your ratio as a fraction in this case. Right now your ratio is for one-dimensional measurements, like height, so if you were to calculate the height of the large triangle based on the height of the small triangle being (for example) 3, you would write:
$3 \times \frac52 =$ height of the large triangle
Or, to go the other way (knowing the height of the large triangle to be, say, 7) you would write:
$7 \times \frac25 = $ height of the small triangle
But this is for single-dimensional measurements. For a two-dimensional measurement like area, simply square the ratio (the scale factor):
area of small triangle $\times (\frac52)^2 =$ area of large triangle.
This can be extended to three-dimensional measurements by cubing the ratio/fraction.
(you'll know which fraction to use because one increases the quantity while the other decreases. So if you find the area of the large triangle to be smaller than the small triangle, you've used the wrong one!)
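With the numbers from the question: the larger triangle has area $400$ and the small-to-large ratio of heights is $2:5$, so
$$\text{area of small triangle} = 400 \times \left(\tfrac{2}{5}\right)^2 = 400 \times \tfrac{4}{25} = 64 \text{ square units}.$$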
|
Simple integral help How do I integrate $$\int_{0}^1 x \bigg\lceil \frac{1}{x} \bigg\rceil \left\{ \frac{1}{x} \right\}\, dx$$
Where $\lceil x \rceil $ is the ceiling function, and $\left\{x\right\}$ is the fractional part function
|
Split the integral up into segments $S_m=[1/(m+1),1/m]$ with $[0,1]= \cup_{m=1}^\infty S_m$. On the segment $S_m$, we have $\lceil 1/x \rceil=m+1$ and $\{1/x\} = 1/x- \lfloor 1/x\rfloor = 1/x - m$ (apart from values of $x$ on the boundary, which do not contribute to the integral).
This yields
$$\begin{align}\int_0^1 x \bigg\lceil \frac{1}{x} \bigg\rceil \left\{ \frac{1}{x} \right\}\, dx &= \sum_{m=1}^\infty \int_{S_m}x \bigg\lceil \frac{1}{x} \bigg\rceil \left\{ \frac{1}{x} \right\}\, dx \\
&= \sum_{m=1}^\infty \int_{1/(m+1)}^{1/m} x (m+1)\left(\frac1x -m \right)\, dx\\
&= \sum_{m=1}^\infty \frac{1}{2m(1+m)}\\
&=\frac{1}{2}.
\end{align}$$
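A numerical sanity check in Python (a sketch, with arbitrary truncation levels) agrees with the value $\tfrac12$:

```python
import math

# Partial sum of the derived series sum_{m>=1} 1/(2m(m+1)), which telescopes to 1/2.
print(sum(1.0 / (2 * m * (m + 1)) for m in range(1, 100_000)))  # ~0.499995

# Direct midpoint Riemann sum of x * ceil(1/x) * frac(1/x) on (0, 1].
n = 1_000_000
total = 0.0
for i in range(n):
    x = (i + 0.5) / n
    total += x * math.ceil(1 / x) * (1 / x - math.floor(1 / x)) / n
print(total)  # ~0.5
```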
|
Algebraic manipulation of normal, $\chi^2$ and Gamma probability distributions If $X_1, \ldots, X_n \sim N(\mu, \sigma^2)$, then
$$
\frac{n - 1}{\sigma^2}S^2 \sim \chi^2_{n - 1}
$$
where $S^2 = \frac{1}{n-1}\sum_{i=1}^n (x_i- \bar{x})^2$, and there's a direct relationship between the $\chi^2_p$ and Gamma($\alpha, \beta$) distributions:
$$
\chi^2_{n - 1} = \text{Gamma}(\tfrac{n-1}{2}, 2).
$$
But then why is
$$
S^2 \sim \text{Gamma}(\tfrac{n-1}{2}, \tfrac{2\sigma^2}{n-1}) \,?
$$
And why do we multiply the reciprocal with $\beta$ and not $\alpha$? Is it because $\beta$ is the scale parameter?
In general, are there methods for algebraic manipulation around the "$\sim$" other than the standard transformation procedures? Something that uses the properties of location/scale families perhaps?
|
Suppose $X\sim \operatorname{Gamma}(\alpha,\beta)$, so that the density is $cx^{\alpha-1} e^{-x/\beta}$ on $x>0$, and $\beta$ is the scale parameter. Let $Y=kX$. The density function of $Y$ is
$$
\frac{d}{dx} \Pr(Y\le x) = \frac{d}{dx}\Pr(kX\le x) = \frac{d}{dx} \Pr\left(X\le\frac x k\right) = \frac{d}{dx}\int_0^{x/k} cu^{\alpha-1} e^{-u/\beta} \, du
$$
$$
= c\left(\frac x k\right)^{\alpha-1} e^{-(x/k)/\beta} \cdot\frac1k
$$
$$
=(\text{constant})\cdot x^{\alpha-1} e^{-x/(k\beta)}.
$$
So it's a Gamma distribution with parameters $\alpha$ and $k\beta$.
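As an empirical illustration, here is a minimal NumPy sketch (the values $n=10$, $\mu=5$, $\sigma=2$ are arbitrary) checking that the sample variance has the moments of $\text{Gamma}(\tfrac{n-1}{2}, \tfrac{2\sigma^2}{n-1})$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu, sigma = 10, 5.0, 2.0
samples = rng.normal(mu, sigma, size=(200_000, n))
s2 = samples.var(axis=1, ddof=1)            # sample variance S^2 of each row

# Gamma(shape, scale) has mean shape*scale and variance shape*scale^2.
shape, scale = (n - 1) / 2, 2 * sigma**2 / (n - 1)
print(s2.mean(), shape * scale)             # both ~ sigma^2 = 4
print(s2.var(), shape * scale**2)           # both ~ 2*sigma^4/(n-1) ~ 3.56
```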
|
Definition of $L^0$ space From Wikipedia:
The vector space of (equivalence classes of) measurable functions on $(S, Σ, μ)$ is denoted $L^0(S, Σ, μ)$.
This doesn't seem connected to the definition of $L^p(S, Σ, μ), \forall p \in (0, \infty)$ as being the set of measurable functions $f$ such that $\int_S |f|^p d\mu <\infty$. So I wonder if I miss any connection, and why use the notation $L^0$ if there is no connection?
Thanks and regards!
|
Note that when we restrict ourselves to the probability measures, then this terminology makes sense: $L^p$ is the space of those (equivalence classes of) measurable functions $f$ satisfying
$$\int |f|^p<\infty.$$
Therefore $L^0$ should be the space of those (equivalence classes of) measurable functions $f$ satisfying
$$\int |f|^0=\int 1=1<\infty,$$
that is the space of all (equivalence classes of) measurable functions $f$. And it is indeed the case.
|
Summing elements of a sequence Let the sequence $a_n$ be defined as $a_n = 2^n$ where $n = 0, 1, 2, \ldots $
That is, the sequence is $1, 2, 4, 8, \ldots$
Now assume I am told that a certain number is obtained by taking some of the numbers in the above sequence and adding them together (e.g. $a_4 + a_{19} + a_5$); I may be able to work out which numbers from the sequence were used to arrive at the number. For example, if the number given is 8, I know that the sum was simply $a_3$, since this is the only way to get 8. If I'm told the number is 12, I know that the sum was simply $a_3 + a_2$.
If I'm told the resulting number, can you prove that I can always determine which elements of the above sequence were used in the sum to arrive at that number?
|
Fundamental reason: The division algorithm.
For any number $a\geq 0$, we know there exists a unique $q_0\geq 0$ and $0\leq r_0<2$ such that
$$a=2q_0+r_0.$$
Similarly, we know there exists a unique $q_1\geq 0$ and $0\leq r_1<2$ such that
$$q_0=2q_1+r_1.$$
We can define the sequences $q_0,q_1,\ldots$ and $r_0,r_1,\ldots$ in this way, i.e. $q_{n+1}\geq 0$ and $0\leq r_{n+1}< 2$ are the unique solutions to
$$q_n=2q_{n+1}+r_{n+1}.$$
Note that $0\leq r_n<2$ is just equivalent to $r_n\in\{0,1\}$.
Also note that if $q_n=0$ for some $n$, then $q_k=0$ and $r_k=0$ for all $k>n$.
Because $\frac{1}{2} q_n\geq q_{n+1}$ for any $n$, and because every $q_n$ is a non-negative integer, we will eventually have $q_n=0$ for some $n$. Let $N$ be the smallest index such that $q_N=0$.
Then
$$\begin{align}
a&=2q_0+r_0\\
&=2(2q_1+r_1)+r_0\\
&\cdots\\
&=2(2(\cdots (2q_N+r_N)+r_{N-1}\cdots )+r_1)+r_0\\
&=2^Nr_N+2^{N-1}r_{N-1}+\cdots+r_0\\
\end{align}$$
is a sum of terms from the sequence $a_n=2^n$, specifically those $a_n$'s for which the corresponding $r_n=1$ instead of 0.
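The argument above is just binary expansion; here is a small Python sketch of the same division-algorithm procedure (the function name is illustrative) that recovers which $a_n$'s were used:

```python
# Repeatedly apply the division algorithm with divisor 2; the remainders r_n
# that equal 1 mark which terms a_n = 2^n appear in the sum.
def decompose(a):
    terms, n = [], 0
    while a > 0:
        a, r = divmod(a, 2)
        if r == 1:
            terms.append(n)
        n += 1
    return terms

print(decompose(12))  # [2, 3], i.e. 12 = 2^2 + 2^3 = a_2 + a_3
```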
|
$a_{n}$ converges and $\frac{a_{n}}{n+1}$ too? I have a sequence $a_{n}$ which converges to $a$, then I have another sequence which is based on $a_{n}$: $b_{n}:=\frac{a_{n}}{n+1}$, now I have to show that $b_{n}$ also converges to $a$.
My steps:
$$\frac{a_{n}}{n+1}=\frac{1}{n+1}\cdot a_{n}=0\cdot a=0$$ But this is wrong, why am I getting this? My steps seem to be okay according to what I have learned till now; can someone show me the right way please?
And then I am asked to find another two sequences like $a_n$ and $b_n$ but where $a_n$ diverges and $b_n$ converges based on $a_n$. I said: let $a_n$ be a diverging sequence then
$$b_n:=\frac{1}{a_n}$$ the reciprocal of $a_n$ should converge. Am I right?
|
For $a_n=1$, clearly $a_n \to 1$ and $b_n \to 0$. So the result you are trying to prove is false.
In fact, because the limit of a product is the product of the limits, $\lim \frac{a_n}{n+1} = (\lim a_n) \cdot (\lim \frac{1}{n+1})= a\cdot 0 = 0$.
|
L'Hospital's Rule Question. show that if $x $ is an element of $\mathbb R$ then $$\lim_{n\to\infty} \left(1 + \frac xn\right)^n = e^x $$
(HINT: Take logs and use L'Hospital's Rule)
I'm not too sure how to go about answering this or putting it in the form $\frac{f'(x)}{g'(x)}$ in order to apply L'Hospital's rule.
So far I've simply taken logs and brought the power in front, leaving me with
$$ n\log \left(1+ \frac xn\right) = x $$
|
$$\lim_{n\to\infty} (1 + \frac xn)^n =\lim_{n\to\infty} e^{n\ln(1 + \frac xn)} $$
The limit
$$\lim_{n\to\infty} n\ln(1 + \frac xn)=\lim_{n\to\infty} \frac{\ln(1 + \frac xn)}{\frac1n}=\lim_{n\to\infty} \frac{\frac{1}{1 + \frac xn}\frac{-x}{n^2}}{-\frac1{n^2}}=\lim_{n\to\infty} \frac{x}{1 + \frac xn}=x$$
By continuity of $e^x$,
$$\lim_{n\to\infty} (1 + \frac xn)^n =\lim_{n\to\infty} e^{n\ln(1 + \frac xn)}=e^x $$
|
Is it possible to prove everything in mathematics by theorem provers such as Coq? Coq has been used to provide formal proofs to the Four Colour theorem, the Feit–Thompson theorem, and I'm sure many more. I was wondering - is there anything that can't be proved in theorem provers such as Coq?
A little extra question: if everything can be proved, will the future of mathematics consist of a massive database of these proofs that everyone else builds on top of? To my naive mind, this feels like a much more rigorous way to express mathematics.
|
It is reasonable to believe that everything that has been (or can be) formally proved can be proved in such an explicitly formal way that a "stupid" proof-verification system can give its thumbs up. In fact, while typical everyday proofs may have some informal hand-waving parts in them, these should always be formalizable in such an "obvious" manner that one does not bother; in fact, one is totally convinced that formalizing is in principle possible, for otherwise one would not call it a proof at all.
In fact, I sometimes have the habit to guide people (e.g. if they keep objecting to Cantor's diagonal argument) to the corresponding page at the Proof Explorer and ask them to point out which particular step they object to.
For some theorems and proofs this approach may help you get rid of any doubts casting a shadow on the proof: Isn't there possibly some sub-sub-case on page 523 that was left out? But then again: Have you checked the validity of the code of your theorem verifier? Is even the hardware bug-free? (Remember the Pentium bug?)
Would you believe a proof that $10000\cdot10000=100000000$ that consists of putting millions of pebbles into $10000$ rows and columns and counting them, more than you would believe the same result computed by long multiplication?
|
Test for convergence the series $\sum_{n=1}^{\infty}\frac{1}{n^{(n+1)/n}}$ Test for convergence the series
$$\sum_{n=1}^{\infty}\frac{1}{n^{(n+1)/n}}$$
I'd like to make up a collection with solutions for this series, and any new
solution will be rewarded with upvotes. Here is what I have at the moment
Method 1
We know that for all positive integers $n$, $n<2^n$, and this yields
$$n^{(1/n)}<2$$
$$n^{(1+1/n)}<2n$$
Then, since $n^{(n+1)/n}<2n$, it turns out that
$$\sum_{n=1}^{\infty}\frac{1}{n^{(n+1)/n}} \ge \frac{1}{2} \sum_{n=1}^{\infty}\frac{1}{n} = \infty.$$
Hence
$$\sum_{n=1}^{\infty}\frac{1}{n^{(n+1)/n}}\rightarrow \infty,$$ i.e. the series diverges.
EDIT:
Method 2
If we consider the maximum of $f(x)=x^{1/x}$, attained at $x=e$,
and denote it by $c$, then $n^{(n+1)/n}=n\cdot n^{1/n}\le c\,n$, so
$$\sum_{n=1}^{\infty}\frac{1}{n^{(n+1)/n}} \ge \sum_{n=1}^{\infty}\frac{1}{c \cdot n} = \infty.$$
Thanks!
|
Let $a_n = 1/n^{(n+1)/n}$.
Then
$$\begin{eqnarray*}
\frac{a_n}{a_{n+1}} &\sim& 1+\frac{1}{n} - \frac{\log n}{n^2}
\qquad (n\to\infty).
\end{eqnarray*}$$
The series diverges by Bertrand's test.
|
Computing left derived functors from acyclic complexes (not resolutions!) I am reading a paper where the following trick is used:
To compute the left derived functors $L_{i}FM$ of a right-exact functor $F$ on an object $M$ in a certain abelian category, the authors construct a complex (not a resolution!) of acyclic objects, ending in $M$, say $A_{\bullet} \to M \to 0$, such that the homology of this complex is acyclic, and this homology gets killed by $F$. Thus, they claim, the left-derived functors can be computed from this complex.
Why does this claim follow? It seems like it should be easy enough, but I can't seem to wrap my head around it.
|
Compare with a projective resolution $P_\bullet\to M\to 0$. By projectivity, we obtain (from the identity $M\to M$) a complex morphism $P_\bullet\to A_\bullet$, which induces $F(P_\bullet)\to F(A_\bullet)$. With a bit of diagram chasing you should find that $H_\bullet(F(P_\bullet))$ is the same as $H_\bullet(F(A_\bullet))$.
A bit more explicitly: we can build a resolution of complexes
$$\begin{matrix}
&\downarrow && \downarrow&&\downarrow\\
0\leftarrow &A_2&\leftarrow&P_{2,1}&\leftarrow&P_{2,2}&\leftarrow\\
&\downarrow && \downarrow&&\downarrow\\
0\leftarrow &A_1&\leftarrow&P_{1,1}&\leftarrow&P_{1,2}&\leftarrow\\
&\downarrow && \downarrow&&\downarrow\\
0\leftarrow &M&\leftarrow &P_{0,1}&\leftarrow&P_{0,2}&\leftarrow\\
&\downarrow && \downarrow&&\downarrow\\
&0&&0&&0
\end{matrix}
$$
i.e. the $P_{i,j}$ are projective and all rows are exact. The downarrows are found recursively using projectivity so that all squares commute: If all down maps are called $f$ and all left maps $g$, then $f\circ g\colon P_{i,j}\to P_{i-1,j-1}$ maps to the image of $g\colon P_{i-1,j}\to P_{i-1,j-1}$ because $g\circ(f\circ g)=f\circ g\circ g=0$, hence $f\circ g$ factors through $P_{i-1,j}$, thus giving the next $f\colon P_{i,j}\to P_{i-1,j}$.
We can apply $F$ and take direct sums across diagonals, i.e. let $B_k=\bigoplus_{i+j=k} FP_{i,j}$.
Then $d:=(-1)^if+g$ makes this a complex.
What interests us here is that we can walk from the lower row to the left column by diagram chasing, thus finding that $H_\bullet(F(P_{0,\bullet}))=H_\bullet(F(A_\bullet))$.
Indeed: Start with $x_0\in FP_{0,k}$ with $Fg(x_0)=0$.
Then we find $y_1\in FP_{1,k}$ with $Ff(y_1)=x_0$.
Since $Ff(Fg(y_1))=Fg(Ff(y_1))=0$, we find $y_2\in FP_{2,k-1}$ with $Ff(y_2)=y_1$, and so on until we end up with a cycle in $A_k$.
Make yourself clear that the choices involved don't make a difference in the end (i.e. up to boundaries).
Also, the chase can be performed just as well from the left column to the bottom row ...
|
Quotient Group G/G = {identity}? I know this is a basic question, but I'm trying to convince myself of Wikipedia's statement. "The quotient group $G / G$ is isomorphic to the trivial group."
I write the definition using left multiplication because left cosets = right cosets: $ G/G = \{gG : g \in G\} $. But how is this isomorphic to the trivial group, $ \{id_G\} $? Surely $gG$ can't be simplified to $id_G$?
Thank you.
|
If $G$ is a group and $N$ is normal in $G$, then $G/N$ is the quotient group. As a group, $G/N$ consists of the cosets of the normal subgroup $N$ in $G$, and these cosets themselves satisfy the group axioms because of the normality of $N$. Now $G$ is clearly normal in $G$, and $G/G$ consists of the single coset that is all of $G$. Thus this group has only one element, hence it must be isomorphic to the trivial group.
|
Want to show $\sum_{n=2}^{\infty} \frac{1}{2^{n}\cdot n}$ converges Want to show $\sum_{n=2}^{\infty} \frac{1}{2^{n}\cdot n}$ converges. I am trying to show this by showing that the partial sums are bounded. I have tried doing this by induction but am not seeing how to get past the inductive step. Do I need to instead look for a closed form? Thanks.
|
Since you mentioned induction:
Let $s_m = \sum_{n=2}^{m} \frac{1}{2^{n}\cdot n}$. Then $s_m \leq 1-\frac{1}{m}$.
$P(2)$ is obvious, while $P(m) \Rightarrow P(m+1)$ reduces to
$$1-\frac{1}{m}+\frac{1}{2^{m+1}(m+1)} \leq 1- \frac{1}{m+1}$$
which is equivalent to:
$$m \leq 2^{m+1}$$
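For reference (not needed for the boundedness argument), the full series has a closed form: since $\sum_{n\ge 1} x^n/n = -\ln(1-x)$ for $|x|<1$, taking $x=\tfrac12$ gives
$$\sum_{n=2}^{\infty} \frac{1}{2^{n}\cdot n} = \ln 2 - \frac12 \approx 0.1931.$$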
|
difficulty understanding branch of the logarithm Here is one past qual question,
Prove that the function $\log(z+ \sqrt{z^2-1})$ can be defined to be analytic on the domain $\mathbb{C} \setminus (-\infty,1]$
(Hint: start by defining an appropriate branch of $ \sqrt{z^2-1}$ on $\mathbb{C}\setminus (-\infty,1]$ )
It just seems like a typical language problem; I do not see the point of being rigorous here. But
I think I have difficulty understanding the branch cut. I would appreciate it if someone could explain it to me in an easy way and explain the solution to the problem. Any help will be much appreciated.
|
Alternatively, you can just take the standard branch for $\sqrt{z}$ excluding $(-\infty,0]$ and then compute $\sqrt{z-1}\sqrt{z+1}$ which is defined for $z+1,z-1\notin(-\infty,0]$, that is, for $z\notin(-\infty,1]$
|
how to find the nth number in the sequence? Consider the sequence of numbers below:
2, 5, 10, 18, 31, 52, . . .
The sequence goes on like this.
My question is:
how do I find the $n$th term of the sequence?
Thanks.
|
The sequence can be expressed in many ways.
As Matt N. and M. Strochyk mentioned:
$$ a_{n+2}= a_{n}+a_{n+1}+3,$$
$$ a_1 = 2,\quad a_2 = 5 \qquad (n\in \mathbb{N})$$
Or as this one for example:
$$ a_{n+1}= a_{n}+\frac{(n-1)n(2n-1)}{12}-\frac{(n-1)n}{4}+2n+1,$$
$$ a_1 = 2 \quad (n\in \mathbb{N})$$
It's interesting that the term:
$$ b_n = \frac{(n-1)n(2n-1)}{12}-\frac{(n-1)n}{4}+2n+1$$ gives five Fibonacci numbers $(3, 5, 8, 13, 21)$ for $1 \leq n \leq 5$.
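A quick Python sketch (not part of the answer) confirming that both recurrences reproduce the given terms:

```python
# Verify that both recurrences reproduce 2, 5, 10, 18, 31, 52.
target = [2, 5, 10, 18, 31, 52]

a = [2, 5]
while len(a) < 6:
    a.append(a[-2] + a[-1] + 3)          # a_{n+2} = a_n + a_{n+1} + 3
print(a == target)                        # True

b = [2.0]
for n in range(1, 6):
    step = (n - 1) * n * (2 * n - 1) / 12 - (n - 1) * n / 4 + 2 * n + 1
    b.append(b[-1] + step)                # a_{n+1} = a_n + b_n
print([int(x) for x in b] == target)      # True
```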
|
Show $\lim\limits_{n\to\infty} \sqrt[n]{n^e+e^n}=e$ Why is $\lim\limits_{n\to\infty} \sqrt[n]{n^e+e^n}$ = $e$? I couldn't get this result.
|
Taking logs, you must show that
$$\lim_{n \rightarrow \infty} {\ln(n^e + e^n) \over n} = 1$$
Applying L'hopital's rule, this is equivalent to showing
$$\lim_{n \rightarrow \infty}{en^{e-1} + e^n \over n^e + e^n} = 1$$
Which is the same as
$$\lim_{n \rightarrow \infty}{e{n^{e-1}\over e^n} + 1 \over {n^e \over e^n}+ 1} = 1$$
By applying L'hopital's rule enough times, any limit of the form $\lim_{n \rightarrow \infty}{\displaystyle {n^a \over e^n}}$ is zero. So one has
$$\lim_{n \rightarrow \infty}{e{n^{e-1}\over e^n} + 1 \over {n^e \over e^n}+ 1} = {e\cdot 0 + 1 \over 0 + 1}$$
$$ = 1$$
(If you're wondering why you can just plug in zero here, the rigorous reason is that the function ${\displaystyle {ex + 1 \over y + 1}}$ is continuous at $(x,y) = (0,0)$.)
|
A non-linear maximisation We know that $x+y=3$ where x and y are positive real numbers. How can one find the maximum value of $x^2y$? Is it $4,3\sqrt{2}, 9/4$ or $2$?
|
By AM-GM
$$\sqrt[3]{2x^2y} \leq \frac{x+x+2y}{3}=\frac{2(x+y)}{3}=2, $$
so $2x^2y \leq 8$, i.e. $x^2y \leq 4$, with equality if and only if $x=2y$, that is $x=2,\ y=1$.
Second solution
This one is more complicated and artificial (since I needed to know the maximum in advance).
$$x^2y=3x^2-x^3=-4+3x^2-x^3+4=4- (x-2)^2(x+1)\leq 4$$
since $(x-2)^2(x+1) \geq 0$.
|
Convergence of the series I'm trying to solve the following exercise:
$$\sum_{n=1}^\infty\ e^{an}n^2 \text{ , }a\in \mathbb R $$
I don't know into which ranges I should split the values of $a$ in order to work out the limit and find out the convergence.
|
Write it as
$\sum_{n=1}^\infty\ r^n n^2$
where $r = e^a$ satisfies $0 < r$.
If $r \ge 1$ (i.e., $a \ge 0$), the sum clearly diverges.
If $r < 1$ (i.e., $a < 0$), you can get an explicit formula
for $\sum_{n=1}^m\ r^n n^2$
which will show that the sum converges.
Therefore the sum converges for
$a < 0$ and diverges for $a \ge 0$.
The $e^{an}$ seems like a distraction to hide the true nature
of the problem.
|
2 heads or more in 3 coin toss formula What is the formula to calculate the probability of getting 2 heads or more in 3 coin tosses?
I've seen a lot of solutions, but almost all of them use the method of listing all possible combinations, like HHT, HTH, etc.
What I am trying to ask is for a formula and/or method to calculate this
without needing to list all of the possible combinations.
Listing the combinations for 3 coin tosses is easy (8 possible combinations), but suppose I change the coins to dice, or say a 20-sided die; that would take a long time to list all the possible combinations.
|
The simplest is by symmetry. The chance of at least two heads equals the chance of at least two tails, and if you add them you get exactly $1$ because one or the other has to happen. Thus the chance is $\frac 12$. This approach is not always available.
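If you do want a general formula rather than the symmetry trick, the count is binomial: the probability of at least $k$ successes in $n$ independent trials with success probability $p$ is $\sum_{j=k}^{n}\binom{n}{j}p^j(1-p)^{n-j}$. A minimal Python sketch (the function name is just illustrative):

```python
from math import comb

def at_least(n, k, p=0.5):
    """P(at least k successes in n independent trials, each with probability p)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

print(at_least(3, 2))          # 0.5, matching the symmetry argument
print(at_least(3, 2, 1 / 20))  # e.g. "success" = rolling a 1 on a 20-sided die
```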
|
Infinite sum of floor functions I need to compute this (convergent) sum
$$\sum_{j=0}^\infty\left(j-2^k\left\lfloor\frac{j}{2^k}\right\rfloor\right)(1-\alpha)^j\alpha$$
But I have no idea how to get rid of the floor thing. I thought about some variable substitution, but it didn't take me anywhere.
|
We'll let $M=2^k$ throughout.
Note that $$f(j)=j-M\left\lfloor\frac{j}{M}\right\rfloor$$
is just the modulus operator: it is equal to the smallest non-negative $n$ such that $j\equiv n\pmod {M}$
So that means $f(0)=0, f(1)=1,...f(M-1)=M-1,$ and $f(j+M)=f(j)$.
This means that we can write:
$$F(z)=\sum_{j=0}^{\infty} f(j)z^{j}= \left(\sum_{j=0}^{M-1} f(j)z^{j}\right)\left(\sum_{i=0}^\infty z^{Mi}\right)$$
But $$\sum_{i=0}^\infty z^{Mi} = \frac{1}{1-z^{M}}$$
and $f(j)=j$ for $j=0,...,2^k-1$, so this simplifies to:
$$F(z)=\frac{1}{1-z^{M}}\sum_{j=0}^{M-1} jz^j$$
Finally, $$\sum_{j=0}^{M-1} jz^j = z\sum_{j=1}^{M-1} jz^{j-1} =z\frac{d}{dz}\frac{z^M-1}{z-1}=\frac{(M-1)z^{M+1}-Mz^{M}+z}{(z-1)^2}$$
So:
$$F(z)=\frac{(M-1)z^{M+1}-Mz^{M}+z}{(z-1)^2(1-z^{M})}$$
Your final sum is: $$\alpha F(1-\alpha)$$ bearing in mind, again, that $M=2^k$
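A numerical sanity check (a Python sketch with arbitrarily chosen $\alpha=0.3$ and $k=3$) that $\alpha F(1-\alpha)$ matches the original sum:

```python
def direct(alpha, k, terms=200_000):
    M = 2**k
    return sum((j - M * (j // M)) * (1 - alpha)**j * alpha for j in range(terms))

def closed_form(alpha, k):
    M = 2**k
    z = 1 - alpha
    F = ((M - 1) * z**(M + 1) - M * z**M + z) / ((z - 1)**2 * (1 - z**M))
    return alpha * F

print(direct(0.3, 3), closed_form(0.3, 3))  # both ~ 1.8440
```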
|
show that intervals of the form $[0,a)$ or $(a, 1]$ are open sets in the metric subspace $[0,1]$ but not open in $\mathbb R^1$
On the metric subspace $S = [0,1]$ of the Euclidean space $\mathbb R^1 $, every interval of the form $A = [0,a)$ or $(a, 1]$ where $0<a<1$ is an open set in $S$. These sets are not open in $\mathbb R^1$.
Here's what I attempted to show that $A$ is open in $S$. I have no idea how it is not open in $\mathbb R^1$.
Let $M = \mathbb R^1$, $x \in A = [0,a)$ .
If $x = 0$, $ r \leq \min \{a, 1-a\}, \\\ B_S(0; r) = B_M(0,r)\cap[0,1] = (-r, r) \cap[0,1] = [0,r) \subseteq A $
If $x \neq 0$, take $r \leq \min \{x,\,a-x,\, 1-x\}$; then $B_S(x; r) = B_M(x,r)\cap[0,1] = (x-r, x+r) \cap[0,1] \subseteq [0,a) = A$.
|
Hint: To show $(x,1]$ not open in $\mathbb R$ simply show that there is no open neighborhood of $1$ included in the half-closed interval.
|
Isosceles triangle Let $ \triangle ABC $ be a $C$-isosceles triangle and let $ P\in (AB) $ be a point so that $ m\left(\widehat{PCB}\right)=\phi $. Express $AP$ in terms of $C$, $c$ and $\tan\phi$.
Edited problem statement(same as above but in different words):
Let $ \triangle ABC $ be an isosceles triangle with a right angle at $C$. Denote $\left | AB \right |=c$. The point $P$ lies on $AB$ ($P\neq A,B$) and the angle $\angle PCB=\phi$. Express $\left | AP \right |$ in terms of $c$ and $\tan\phi$.
|
Edited for revised question
Dropping the perpendicular from $C$ onto $AB$ will help. Call the point $E$.
Also drop the perpendicular from $P$ onto $BC$, and call the point $F$. Then drop the perpendicular from $F$ onto $AB$, and call the point $G$.
This gives a lot of similar and congruent triangles.
$$\tan \phi = \dfrac{|PF|}{|CF|} = \dfrac{|FB| }{ |CF|} = \dfrac{ |GB| }{|EG| } = \dfrac{ |PB| }{|AP| }= \dfrac{ c-|AP| }{|AP| }$$ so $$|AP| = \dfrac{c}{ 1+\tan \phi}.$$
|
Mathematical competitions for adults. I live in Mexico City. However, I am not so interested in whether these exist in my vicinity; I want to learn whether they exist at all. If they do, then it might be easier to make them more popular in other places.
Are there mathematical competitions for adults? I have been in a couple of mathematical competitions and they are fun. However, it's sort of a race against the clock, because you only have a couple of chances until you are too old. Are there mathematical competitions for people of all ages? I have looked this up but found none. It makes sense to separate second graders from 7th graders. But, for example, in sports, once it comes to adults they separate them into different leagues. This could work to make it fair for mathematicians and mathematics enthusiasts alike.
Thanks for your answer.
PS: When I say adults I mean people who need not still be studying in college; it includes people who have regular jobs.
|
Actually, I know only one. I was looking for the same thing and found your question.
"Championnat International des Jeux Mathématiques et Logiques"
http://www.animath.fr/spip.php?article595
Questions and answers must be in French, but there is no other requirement for participation. Any age and nationality are welcome.
Questions won't be translated for you, but I'm pretty sure that if you answer in English the graders won't mind. Just ask. ;)
|
$A\unlhd G$, $B\unlhd G$ and $C\unlhd G$; then $A(B∩C)$ is a normal subgroup of $G$ If $A$ is normal in $G$, $B$ is normal in $G$ and $C$ is normal in $G$, then how can I show that $$A(B∩C)\unlhd G\,?$$
How can I solve this problem? Thanks!
|
You know that if $B,C$ are subgroups of a group, then so is their intersection. Moreover, there is a theorem saying that if one of the subgroups $A$ and $B\cap C$ is normal in $G$, then $A(B\cap C)\leq G$ as well. Now show the normality of $A(B\cap C)$ in $G$. In fact, show that for all $x\in A(B\cap C)$ and all $g\in G$ we have $g^{-1}xg\in A(B\cap C)$.
|
Closed set in $\ell^1$
Show that the set $$ B = \left\lbrace(x_n) \in \ell^1 : \sum_{n\geq 1} n|x_n|\leq 1\right\rbrace$$
is compact in $\ell^1$.
Hint: You can use without proof the diagonalization process to conclude that every bounded sequence $(x_n)\in \ell^\infty$ has a subsequence $(x_{n_k})$ that converges in each component, that is $\lim_{k\rightarrow\infty} (x_{n_k}^{(i)})$ exists for all i.
Moreover, sequences in $\ell^1$ are obviously closed by the $\ell^1$-norm.
My try: Every bounded sequence $(x_n) \in \ell^\infty$ has a subsequence $(x_{n_k})$
that converges in each component. That is $\lim_{k\rightarrow\infty} (x_{n_k}^{(i)})$ exists for all i. .And all sequences in $\ell^1$ are bounded in $\ell^1$-norm.
I want to show that every sequence $(x_n) \in B$, has an Cauchy subsequence.
Choose $N$ and $M$ such that for $l,k > M$ we have $|x_{n_k}^{(i)} - x_{n_l}^{(i)}| < \frac{1}{N^2}$. Then
$$\sum_i^N |x_{n_k}^{(i)} - x_{n_l}^{(i)}| + \sum_{i = N+1} ^\infty |x_{n_k}^{(i)} - x_{n_l}^{(i)}| \leqslant \frac{1}{N} + \frac{1}{N+1} \sum_{i = N+1} ^\infty i|x_{n_k}^{(i)} - x_{n_l}^{(i)}| \leqslant \frac{3}{N+1}$$
It feels wrong to combine $M$ and $N$ like this; is it? What can I do instead?
|
We can use and show the following:
Let $K\subset \ell^1$. This set has a compact closure for the $\ell^1$ norm if and only if the following conditions are satisfied:
*
*$\sup_{x\in K}\lVert x\rVert_{\ell^1}$ is finite, and
*for all $\varepsilon>0$, we can find $N$ such that for all $x\in K$, $\sum_{k\geqslant N}|x_k|<\varepsilon$.
These conditions are equivalent to precompactness, that is, that for all $r>0$, we can find finitely many elements $x^1,\dots,x^N$ such that the balls centered at $x^j$ and for radius $r$ cover $K$.
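To connect this criterion with the set $B$ in the question (a short check, not spelled out above): if $x\in B$ then $\lVert x\rVert_{\ell^1}=\sum_k |x_k|\le\sum_k k|x_k|\le 1$, so condition 1 holds, and for any $N$,
$$\sum_{k\geqslant N}|x_k|\;\le\;\frac1N\sum_{k\geqslant N}k|x_k|\;\le\;\frac1N,$$
so condition 2 holds as well. Since $B$ is also closed in $\ell^1$, it is compact.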
|
Boundedness of an integral operator Let $K_n \in L^1([0,1]), n \geq 1$ and define a linear map $T$ from $L^\infty([0,1]) $to sequences by
$$ Tf = (x_n), \;\; x_n =\int_0^1 K_n(x)f(x)dx$$
Show that $T$ is a bounded linear operator from $L^\infty([0,1]) $to $\ell^\infty$ iff
$$\sup_{n\geq 1} \int_0^1|K_n(x)| dx \lt \infty$$
My try:
$(\Leftarrow)$
$$\sup_n |x_n| = \sup_n |\int_0^1 K_n(x)f(x) dx| \leq \sup_n\int_0^1 |K_n(x)f(x)| dx \leq \|f\|_\infty \sup_n\int_0^1 |K_n(x)|dx $$
$(\Rightarrow)$
I can't get the absolute value right. I was thinking uniformed boundedness and that every coordinate can be written with help of a linear functional. But then I end up with $\sup_{\|f\| = 1} |\int_0^1 K_n(x) f(x) dx | \leq \infty$. Can I choose my $f$ so that I get what I want?
|
Yes, you can choose $f$ as you want. If $T$ is bounded then
$$
\exists C>0\qquad\left\vert \int_0^1K_n(x)f(x)\,dx\right\vert\leq C \Vert f\Vert_{L^\infty}\qquad \forall f\in L^\infty\quad \forall n\in\mathbb{N}.
$$
Fix $m\in\mathbb{N}$, if we take $f=\text{sign}(K_m)\in L^\infty$ then
$$
\int_0^1 \vert K_m(x)\vert\,dx\leq C.
$$
Repeat this construction for every $m\in\mathbb{N}$ and you obtain
$$
\sup_{n\in\mathbb{N}}\int_0^1 \vert K_n(x)\vert\,dx\leq C<+\infty.
$$
|
convergence of weighted average It is well known that for any sequence $\{x_n\}$ of real or complex numbers which converges to a limit $x$, the sequence of averages of the first $n$ terms is also convergent to $x$. That is, the sequence $\{a_n\}$ defined by
$$a_n = \frac{x_1+x_2+\ldots + x_n}{n}$$
converges to $x$. How "severe" a weighting function $w(n)$ can we choose such that the sequence of weighted averages $\{b_n\}$ defined by
$$b_n = \frac{w(1)x_1 + w(2)x_2 + \ldots + w(n)x_n}{w(1)+w(2)+\ldots+w(n)} $$
is convergent to $x$? Is it possible to choose $w(n)$ such that $\{b_n\}$ is divergent?
|
Weighted averages belong to the class of matrix summation methods.
Define
$$W:=\left(\begin{matrix}W_{1,1}&W_{1,2}&\ldots\\W_{2,1}&W_{2,2}&\ldots\\\vdots\\\end{matrix}\right)$$
Represent the sequence $\{x_n\}$ by the infinite vector $X:=\left(\begin{matrix}x_1\\x_2\\\vdots\end{matrix}\right)$, and $\{b_n\}$ by the vector $B:=\left(\begin{matrix}b_1\\b_2\\\vdots\end{matrix}\right)$. Then we have $$B=WX.$$
In our case $W_{i,j}:=\frac{w(j)}{w(1)+\ldots+w(i)}$, for $j\leq i$, and $W_{i,j}:=0$, for $j>i$.
The summation method is called regular if it transforms convergent sequences into convergent sequences with the same limit. For matrix summation methods we have Silverman-Toeplitz theorem, that says that a matrix summation method is regular if and only if the following are satisfied:
*
*$\lim_{i\rightarrow\infty} W_{i,j}=0$, for every $j\in\mathbb{N}$ (entries converge to zero along columns)
*$\lim_{i\rightarrow\infty}\sum_{j=1}^{\infty}W_{i,j}=1$ (rows add up to $1$)
*$\sup_{i}\sum_{j=1}^{\infty}|W_{i,j}|<\infty$ (the sums of the absolute values on the rows are bounded.)
In your case $2$ and $3$ are satisfied (if you assume, as in the comment that $w(i)\geq0$), therefore you get the result if and only if
$$\sum w(i)=\infty.$$
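A small Python sketch illustrating the criterion (the sequences and weights are just examples): with $w(n)=2^{-n}$, so that $\sum w(n)<\infty$, the weighted averages do not converge to $\lim x_n$, while $w(n)=n$ works.

```python
def weighted_avgs(x, w):
    num = den = 0.0
    out = []
    for xi, wi in zip(x, w):
        num += wi * xi
        den += wi
        out.append(num / den)
    return out

N = 1000
x = [1.0] + [0.0] * (N - 1)                   # x_n -> 0
w_bad = [2.0 ** -(n + 1) for n in range(N)]   # sum w(n) finite
w_good = [float(n + 1) for n in range(N)]     # sum w(n) infinite

print(weighted_avgs(x, w_bad)[-1])   # ~0.5: stuck away from the limit 0
print(weighted_avgs(x, w_good)[-1])  # ~2e-06: tends to the limit 0
```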
|
What is the value of the given limit?
Let $$\lim_{x\to 0}f(x)=0$$ and $$\lim_{x\to 0}\frac{f(2x)-f(x)}{x}=0$$
Then what is the value of $$\lim_{x\to 0}\frac{f(x)}{x}$$
|
If $\displaystyle\lim_{x\to0}\frac{f(x)}{x}=L$ then
$$
\lim_{x\to0}\frac{f(2x)}{x} = 2\lim_{x\to0}\frac{f(2x)}{2x} = 2\lim_{u\to0}\frac{f(u)}{u} = 2L.
$$
Then
$$
\lim_{x\to0}\frac{f(2x)-f(x)}{x} = \lim_{x\to0}\frac{f(2x)}{x} - \lim_{x\to0}\frac{f(x)}{x} =\cdots
$$
etc.
Later note: What is written above holds in cases in which $\displaystyle\lim_{x\to0}\frac{f(x)}{x}$ exists. The question remains: If both $\lim_{x\to0}f(x)=0$ and $\lim_{x\to0}(f(2x)-f(x))/x=0$ then does it follow that $\displaystyle\lim_{x\to0}\frac{f(x)}{x}$ exists?
|
The control of norm in quotient algebra Let $B_1,B_2$ be two Banach spaces and $L(B_i,B_j),K(B_i,B_j)(i,j=1,2)$ spaces of bounded and compact linear operator between them respectively. If $T \in L(B_1,B_1)$, we have a $S \in K(B_1,B_2)$ and a constant $c>0$ such that for any $v \in B_1$,$${\left\| {Tv} \right\|_{{B_1}}} \le c{\left\| v \right\|_{{B_1}}} + {\left\| {Sv} \right\|_{{B_2}}}.$$
My question is, can we find a $A \in K(B_1,B_1)$, such that ${\left\| {T - A} \right\|_{L({B_1},{B_1})}} \le c$?
|
To start from a very simple case: If $B_1$ is a Hilbert space and $S$ is finite-dimensional such that we have
$$ \|Tv\| \le c\|v\| + \|Sv\| \quad \forall v$$
then we can find a finite-dimensional $A$ satisfying
$$\tag{1} \|Av\| \le \|Sv\|$$
and
$$\tag{2} \|(T-A)v\| \le c \|v\|$$
for every $v \in B_1$.
Proof: Write $B_1 = (\ker S)^\bot \oplus_2 \ker S$. We define $A$ separately on each summand: On $\ker S$, we can clearly choose $A = 0$. On its annihilator, we can work with an orthonormal basis: For each vector $e_n$, we set
$$ Ae_n = \frac{\|Se_n\|}{c + \|Se_n\|} Te_n$$
(note that we know $Se_n \ne 0$) which immediately gives us
$$ \|Ae_n\| \le \|Se_n\|$$
and
$$ \|(T-A)e_n\|
= \left\|\left(1 - \frac{\|Se_n\|}{c + \|Se_n\|} \right)Te_n\right\|
\le c
$$
Since (1) and (2) are thus satisfied both on all of $\ker S$ and whenever $v$ is member of the orthonormal basis for $(\ker S)^\bot$, i.e. $v = e_n$, they are satisfied for every $v \in B_1$.
|
Find all entire functions $f$ such that for all $z\in \mathbb{C}$, $|f(z)|\ge \frac{1}{|z|+1}$ Find all entire functions $f$ such that for all $z\in \mathbb{C}$, $|f(z)|\ge \frac{1}{|z|+1}$
This is one of the past qualifying exams that I was working on, and I think that I have to find a bounded function involving $f$ and use Liouville's theorem to say that the function found is constant, and then conclude something about $f$. I can only think of using $1/f$, so that $\frac{1}{|f(z)|} \le |z|+1$, but $|z|+1$ is not really bounded, so I would like to ask you for some hint or idea.
Any hint/ idea would be appreciated.
Thank you in advance.
|
Suppose $f$ is not constant. As an entire non-constant function it must have some sort of singularity at infinity. It cannot be a pole (because then it would have a zero somewhere, cf the winding-number proof of the FTA), so it must be an essential singularity. Then $zf(z)$ also has an essential singularity at infinity. But $|zf(z)|\ge \frac{|z|}{|z|+1}$, which goes towards $1$, which contradicts Big Picard for $zf(z)$.
|
move a point up and down along a sphere I have a problem where I have a sphere and one point that can be anywhere on that sphere's surface. The sphere is centered at the point (0,0,0).
I now need to get 2 new points, one just a little below and another a little above the original point, with reference to the Y axis. If needed, or if it is simpler to solve, the points can be about 15º above and below the original point, viewing the movement on a 2D circle.
Thank you in advance for any given help.
EDIT:
This is to be used on a world globe where the selected point will never be on the top or bottom.
EDIT:
I'm using the latitude and longitude suggested by rlgordonma and user1551
what I'm doing is adding and subtracting a fixed value to ϕ
These 2 appear correctly; at least they appear to be in place:
The original point is in the middle of the 2 bars.
The sphere has $R=1$; all the coords I'm putting here are rounded because they are too long (computer processed).
coord: (0.77, 0.62, 0.11)
coord: (0.93, -0.65, 0.019)
these don't:
coord: (-0.15, 0.59, 0.79)
coord: (-0.33, 0.73, -0.815)
there are other occasions for both but i didn't want to put all here.
calcs:
R = 1
φ = arctan(y/x)
θ = arccos(z/1)
//to move up only one is used
φ = φ + π/50
//to move down only one is used
φ = φ - π/50
(x,y,z)=(sinθ cosφ, sinθ sinφ, cosθ)
|
The conversion between Cartesian and spherical coordinates is
$$ (x,y,z) = (R \sin{\theta} \cos {\phi},R \sin{\theta} \sin {\phi}, R \cos{\theta})$$
where $R$ is the radius of the earth/sphere, $\theta = 90^{\circ} - \text{latitude}$ (the colatitude), and $\phi$ is the longitude.
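For what it's worth, here is a small Python sketch of how one might use this conversion to nudge a point up or down in latitude (an illustration, not part of the answer; note the use of `atan2` to recover the longitude in the correct quadrant, which plain `arctan(y/x)` does not do):

```python
import math

def nudge_latitude(x, y, z, delta_deg, R=1.0):
    theta = math.acos(z / R)                   # colatitude = 90 deg - latitude
    phi = math.atan2(y, x)                     # longitude, quadrant-safe
    theta -= math.radians(delta_deg)           # moving "up" = decreasing colatitude
    return (R * math.sin(theta) * math.cos(phi),
            R * math.sin(theta) * math.sin(phi),
            R * math.cos(theta))

p = (0.77, 0.62, 0.11)
print(nudge_latitude(*p, +15))   # a little above the original point
print(nudge_latitude(*p, -15))   # a little below
```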
|
Proof by induction on $\{1,\ldots,m\}$ instead of $\mathbb{N}$ I often see proofs that claim to be by induction, but where the variable we induct on doesn't take values in $\mathbb{N}$ but only in some set $\{1,\ldots,m\}$.
Imagine, for example, that we have to prove an equality that involves $n$ variables on each side, where $n$ can only range over $n\in\{1,\ldots,m\}$ (imagine that for $n>m$ the function relating the variables on one of the sides isn't well-defined): for $n=1$, imagine that the equality is easy to prove by manipulating the variables algebraically, and that for $n\in\{1,\ldots,m\}$ we can also show the equation holds provided it holds for $n\in\{1,\ldots,n-1\}$. Then we have proved the equation for every $n\in\{1,\ldots,m\}$, but does this qualify as a proof by induction?
(Correct me if I'm wrong: if the equation were indeed definable and true for every $n\in\mathbb{N}$, we could, although we are only interested in the case $n\in\{1,\ldots,m\}$, "extend" it to $\mathbb{N}$ and then use "normal" induction to prove it holds for every $n\in \mathbb{N}$, since then it would also hold for $n\in\{1,\ldots,m\}$.)
|
If the statement in question really does not "work" if $n>m$, then necessarily the induction step $n\to n+1$ at least somewhere uses that $n<m$. You may view this as actually proving by induction
$$\tag1\forall n\in \mathbb N\colon (n>m\lor \phi(m,n))$$
That is, you first show
$$\tag2\phi(m,1)$$
(which of course implies $1>m\lor \phi(m,1)$) and then show
$$\tag3(n>m\lor \phi(m,n))\Rightarrow (n+1>m\lor \phi(m,n+1)),$$
where you can more or less ignore the trivial case $n\ge m$.
Of course $(2)$ and $(3)$ establish a perfectly valid proof of $(1)$ by induction on $n$.
|
Differing properties due to differing topologies on the same set I have been working on this problem from Principles of Topology by Croom:
"Let $X$ be a set with three different topologies $S$, $T$, $U$ for which $S$ is weaker than $T$, $T$ is weaker than $U$, and $(X,T)$ is compact and Hausdorff. Show that $(X,S)$ is compact but not Hausdorff, and $(X,U)$ is Hausdorff but not compact."
I managed to show that $(X,T)$ compact implies $(X,S)$ compact, and that $(X,T)$ Hausdorff implies $(X,U)$ Hausdorff.
However, I realized that there must be an error (right?) when it comes to $(X,T)$ compact implying that $(X,U)$ is noncompact, since if $X$ is finite (which it could be, as the problem text doesn't specify), then it is compact regardless of the topology on it. What are the minimal conditions that we need in order for there to be a counterexample to $(X,T)$ compact implying $(X,U)$ compact? Just that $X$ is not finite?
Does this also impact showing $(X,T)$ Hausdorff implies $(X,S)$ is not Hausdorff?
|
If $X$ is finite, the only compact Hausdorff topology on $X$ is the discrete topology, so $T=\wp(X)$. In this case there is no strictly finer topology $U$. If $X$ is infinite, the discrete topology on $X$ is not compact, so if $T$ is a compact Hausdorff topology on $X$, there is always a strictly finer topology.
|
Sampling from a $2$d normal with a given covariance matrix How would one sample from the $2$-dimensional normal distribution with mean $0$ and covariance matrix $$\begin{bmatrix} a & b\\b & c \end{bmatrix}$$ given the ability to sample from the standard ($1$-dimensional) normal distribution?
This seems like it should be quite simple, but I can't actually find an answer anywhere.
|
Say you have a random variable $X\sim N(0,E)$ where $E$ is the identity matrix. Let $A$ be a matrix. Then $Y:=AX\sim N(0,AA^T)$. Hence you need to find a matrix $A$ with $AA^T = \left[\matrix{a & b \\ b & c}\right]$. There is no unique solution to this problem. One popular method is the Cholesky decomposition, where you find a triangular matrix $L$ with the given covariance matrix. Another method is to perform a principal axis transformation
$$ \left[\matrix{a & b \\ b & c}\right] = U^T\left[\matrix{\lambda_1 & 0 \\ 0 & \lambda_2}\right]U $$
with $UU^T=E$ and then take
$$ A = U^T\left[\matrix{\sqrt{\lambda_1} & 0 \\ 0 & \sqrt{\lambda_2}}\right]U $$
as a solution. This is the only symmetric positive definite solution then. It is also called the square root of the positive-definite symmetric matrix $AA^T$.
More generally, for a distribution $X\sim N(\mu,\Sigma)$ it holds that $AX+b\sim N(A\mu+b,A\Sigma A^T)$; see Wikipedia.
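A minimal NumPy sketch of the Cholesky approach (the values of $a,b,c$ are arbitrary; $\Sigma$ must be positive definite for `cholesky` to succeed):

```python
import numpy as np

a, b, c = 2.0, 0.6, 1.0
Sigma = np.array([[a, b], [b, c]])
L = np.linalg.cholesky(Sigma)            # lower triangular, L @ L.T == Sigma

rng = np.random.default_rng(0)
Z = rng.standard_normal((100_000, 2))    # i.i.d. standard normal components
Y = Z @ L.T                              # each row is a sample from N(0, Sigma)

print(np.cov(Y, rowvar=False))           # ~ Sigma
```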
|
A Question on $p$-groups. Suppose $G$ is a group with order $p^{n}$ ($p$ is a prime).
Do we know when we can find the subgroups of $G$ of order $p^{2}, p^{3}, \cdots, p^{n-1}$?
|
(Approach modified in light of Don Antonio's comment below question). Another way to proceed, which may not be so common in textbooks, and which produces a normal subgroup of each possible index, is as follows. Suppose we have a non-trivial normal subgroup $Q$ of $P$ (possibly $Q = P$). We will produce a normal subgroup $R$ of $P$ with $[Q:R] = p.$ Note that $P$ permutes the maximal subgroups of $Q$ by conjugation. Let $\Phi(Q)$ denote the intersection of the maximal subgroups of $Q$, and suppose that $[Q:\Phi(Q)] = p^{s}.$ Then $Q/\Phi(Q)$ is Abelian of exponent $p$. All its maximal subgroups have index $p,$ and there are $\frac{p^{s}-1}{p-1}$ of them. There is bijection between the set of maximal subgroups of $Q$ and the set of maximal subgroups of $Q/\Phi(Q).$ Hence the number of maximal subgroups of $Q$ is $1 + p + \ldots + p^{s-1},$ which is congruent to $1$ (mod $p$), and each of them has index $p$ in $Q$. The maximal subgroups of $Q$ are permuted under conjugation by $P$ in orbits of $p$-power lengths.
At least one orbit must have length $1$. This orbit contains a subgroup $R$ of $Q$ with $[Q:R]= p$ and $x^{-1}Rx = R$ for all $x \in P.$ In other words, we have found a normal subgroup $R$ of $P$ which is contained in $Q$ and satisfies $[Q:R] = p .$
|
A complex equation I want to solve the following equation:
$g(s)f(s)=0$
where $f$ and $g$ are defined in the complex plane with real values and they are not analytic.
My question is:
If I assume that $f(s)≠0$, can I deduce that $g(s)=0$ without any further complications?
I am a little confused about this case: if $f=x+iy$ and $g=u+iv$, then $fg=ux-vy+i(uy+vx)$ and $fg$ can be zero if $ux-vy=0,uy+vx=0$ without the implications: $x=y=0,u=v=0$
|
First, note that your question is really just about individual complex numbers, not about complex-valued functions.
Now, as you note, if $(x + i y)(u + i v) = 0$ then this implies that $x u - v y = x v + u y = 0$. However, the only way these equations can hold is if either $x + i y = 0$ or $u + i v = 0$.
Multiplying the first equation through by $v$ and the second by $u$, we have $x u v - v^2 y = 0$ and $x u v + u^2 y = 0$. Subtracting, $(u^2 + v^2)y = 0$. Since $u, v, y \in \mathbb{R}$, this means that either $y = 0$ or $u = v = 0$. In the latter case, we have $u + i v = 0$. In the former, since $y = 0$ we have $x (u + i v) = 0$, so $x u = x v = 0$. Either $x = 0$, in which case $x + i y = 0$, or $u = v = 0$, in which case $u + i v = 0$ again.
|
how many numbers like $119$ How many 3-digit numbers have this property, like $119$:
$119$ divided by $2$ leaves remainder $1$;
$119$ divided by $3$ leaves remainder $2$;
$119$ divided by $4$ leaves remainder $3$;
$119$ divided by $5$ leaves remainder $4$;
$119$ divided by $6$ leaves remainder $5$.
|
You seek numbers which, when divided by $k$ (for $k=2,3,4,5,6$) gives a remainder of $k-1$. Thus the numbers you seek are precisely those which are one less than a multiple of $k$ for each of these values of $k$. To find all such numbers, consider the lowest common multiple of $2$, $3$, $4$, $5$ and $6$, and count how many multiples of this have three digits.
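For concreteness: $\operatorname{lcm}(2,3,4,5,6)=60$, so the numbers sought are exactly those of the form $60k-1$. The three-digit ones are $119, 179, \ldots, 959$ (that is, $k=2,\dots,16$), so there are $15$ such numbers.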
|
Question on determining the splitting field It is not hard to check that the three roots of $x^3-2=0$ are $\sqrt[3]{2}, \sqrt[3]{2}\zeta_3, \sqrt[3]{2}\zeta_3^{2}$, hence the splitting field of $x^3-2$ over $\mathbb{Q}$ is $\mathbb{Q}[\sqrt[3]{2}, \sqrt[3]{2}\zeta_3, \sqrt[3]{2}\zeta_3^{2}]$. However, since $\sqrt[3]{2}\zeta_3^{2}$ can be computed from $\sqrt[3]{2}$ and $\sqrt[3]{2}\zeta_3$, the splitting field is $\mathbb{Q}[\sqrt[3]{2}, \sqrt[3]{2}\zeta_3]$.
In the case $x^5-2=0$, in the book Galois theory by J.S.Milne, the author said that the splitting field is $\mathbb{Q}[\sqrt[5]{2}, \zeta_5]$.
My question is :
*
*How can the other roots of $x^5-2$ be represented in terms of $\sqrt[5]{2}$ and $\zeta_5$, so that he can write the splitting field as $\mathbb{Q}[\sqrt[5]{2}, \zeta_5]$?
*Is the splitting field of $x^n -a$ over $\mathbb{Q}$ equal to $\mathbb{Q}[\alpha,\zeta_n]$, where $\alpha$ is the real $n$-th root of $a$?
|
Note that $(\alpha\zeta_n^k)^n = \alpha^n\zeta_n^{nk}=\alpha^n=a$, $0\le k<n$.
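Spelled out: the five roots of $x^5-2$ are $\sqrt[5]{2}\,\zeta_5^k$ for $k=0,\dots,4$, and each of them lies in $\mathbb{Q}[\sqrt[5]{2},\zeta_5]$; conversely, the splitting field contains $\sqrt[5]{2}$ and $\sqrt[5]{2}\,\zeta_5$, hence $\zeta_5$, so the two fields coincide. The same argument shows that for $a>0$ the splitting field of $x^n-a$ over $\mathbb{Q}$ is $\mathbb{Q}[\alpha,\zeta_n]$, where $\alpha$ is the real $n$-th root of $a$.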
|
Are there real-life relations which are symmetric and reflexive but not transitive? Inspired by Halmos (Naive Set Theory) . . .
For each of these three possible properties [reflexivity, symmetry, and transitivity], find a relation that does not have that property but does have the other two.
One can construct each of these relations and, in particular, a relation that is
symmetric and reflexive but not transitive:
$$R=\{(a,a),(a,b),(b,a),(b,b),(c,c),(b,c),(c,b)\}.$$
It is clearly not transitive since $(a,b)\in R$ and $(b,c)\in R$ whilst $(a,c)\notin R$. On the other hand, it is reflexive since $(x,x)\in R$ for all cases of $x$: $x=a$, $x=b$, and $x=c$. Likewise, it is symmetric since $(a,b)\in R$ and $(b,a)\in R$ and $(b,c)\in R$ and $(c,b)\in R$. However, this doesn't satisfy me.
Are there real-life examples of $R$?
In this question, I am asking if there are tangible and not directly mathematical examples of $R$: a relation that is reflexive and symmetric, but not transitive. For example, when dealing with relations which are symmetric, we could say that $R$ is equivalent to being married. Another common example is ancestry. If $xRy$ means $x$ is an ancestor of $y$, $R$ is transitive but neither symmetric nor reflexive.
I would like to see an example along these lines within the answer. Thank you.
|
Actors $x$ and $y$ have appeared in the same movie at least once.
|
Trouble with form of remainder of $\frac{1}{1+x}$ While asking a question here I got the following equality
$\displaystyle\frac{1}{1+t}=\sum\limits_{k=0}^{n}(-1)^k t^{k}+\frac{(-1)^{n+1}t^{n+1}}{1+t}$
I'm trying to prove this with Taylor's theorem. I got that $f^{(n)}(x)=\displaystyle\frac{(-1)^nn!}{(1+x)^{n+1}}$, so the first part of the summation is done, but I can't seem to reach the remainder: according to Rudin, the form of the remainder is $\displaystyle\frac{(-1)^{n+1}}{(1+c)^{n+2}}t^{n+1}$. I know the equality can be proved by expanding the sum on the right, but I want to get the result with the theorem.
|
Taylor will not help you here.
Hint: $\sum_{k=0}^n r^k=\frac{1-r^{n+1}}{1-r}$ for all positive integers $n$, when $r\neq 1$.
Take $r=$...
|
What does it mean $\int_a^b f(G(x)) dG(x)$? - An exercise question on measure theory I am reading Folland's book and definitions are as follows (p. 108).
Let $G$ be a continuous increasing function on $[a,b]$ and let $G(a) = c, G(b) = d$.
What is asked in the question is:
If $f$ is a Borel measurable and integrable function on $[c,d]$, then $\int_c^d f(y)dy = \int_a^b f(G(x))dG(x)$. In particular, $\int_c^d f(y) dy = \int_a^b f(G(x))G'(x)dx$ if $G$ is absolutely continuous.
As you can see from the title, I did not understand what does it mean $\int_a^b f(G(x))dG(x)$. Also, I am stuck on the whole exercise. If one can help, I will be very happy! Thanks.
|
This is likely either Riemann-Stieltjes or Lebesgue-Stieltjes integration (most likely the latter, given the context).
|
For any two sets $A,B$ , $|A|\leq|B|$ or $|B|\leq|A|$ Let $A,B$ be any two sets. I really think that the statement $|A|\leq|B|$ or $|B|\leq|A|$ is true. Formally:
$$\forall A\forall B[\,|A|\leq|B| \lor\ |B|\leq|A|\,]$$
If this statement is true, what is the proof ?
|
This is true in $ZFC$ because of Zermelo's Well-Ordering Theorem; given two sets $A,B$, since they are well-orderable there exist alephs $\aleph_{\alpha},\aleph_{\beta}$ with $|A|=\aleph_{\alpha}$ and $|B|=\aleph_{\beta}$, since alephs are comparable, the cardinalities of $A$ and $B$ are comparable.
Furthermore, this is equivalent to the axiom of choice:
Now suppose the cardinalities of any two sets are comparable; let us prove Zermelo's Well-Ordering Theorem from this. Let $X$ be a set. There must be an ordinal $\alpha$ such that $|X|\leq |\alpha|$: otherwise, by the comparability condition, $|\alpha|\leq |X|$ for every ordinal $\alpha$, and then the set of all ordinals equipotent to a subset of $X$ would contain the proper class of all ordinals, a contradiction. Hence $|X|\leq |\alpha|$ for some ordinal $\alpha$, and thus $X$ is well-orderable.
Hence the condition of comparability is equivalent to Zermelo's Well-Ordering Theorem, and thus it is equivalent to the axiom of choice.
|
Check my proof of an algebraic statement about fractions I tried to prove the part c) of "Problem 42" from the book "Algebra" by Gelfand.
Fractions $\frac{a}{b}$ and $\frac{c}{d}$ are called neighbor fractions if their difference $\frac{cb-ad}{db}$ has numerator ±1, that is, $cb-ad = ±1$. Prove that:
b) If $\frac{a}{b}$ and $\frac{c}{d}$ are neighbor fractions, then $\frac{a+c}{b+d}$ is between them and is a neighbor fraction for both $\frac{a}{b}$ and $\frac{c}{d}$.
Me: It is easy to prove.
c) no fraction $\frac{e}{f}$ with positive integer $e$ and $f$ such that $f < b+d$ is between $\frac{a}{b}$ and $\frac{c}{d}$.
Me: we know that $\frac{a+c}{b+d}$ is between $\frac{a}{b}$ and $\frac{c}{d}$. The statement says that if we make the denominator smaller than $b+d$, the fraction can't be between $\frac{a}{b}$ and $\frac{c}{d}$ with any numerator.
Let's prove it:
0) Assume that $\frac{a}{b}$ < $\frac{c}{d}$, and $cb-ad = 1$, ($cb = ad + 1$). I also assume that $\frac{a}{b}$ and $\frac{c}{d}$ are positive.
1) Start with the fraction $\frac{a+c}{b+d}$, let $n$ and $m$ denote the changes of the numerator and denominator, so we get $\frac{a+c+n}{b+d+m}$ ($n$ and $m$ may be negative). We want it to be between the two fractions: $\frac{a}{b} < \frac{a+c+n}{b+d+m} < \frac{c}{d}$
2) Let's see what the consequences will be if the new fraction is bigger than $\frac{a}{b}$:
$\frac{a+c+n}{b+d+m} > \frac{a}{b}$
$b(a+c+n) > a(b+d+m)$
$ba+bc+bn > ba+ad+am$
$bc+bn > ad+am$
but $bc = ad + 1$ by the definition, so
$(ad + 1) + bn > ad + am$
$bn - am > -1$
All the variables denote the natural numbers, so if a natural number is bigger than -1 it implies that it is greater or equal $0$.
$bn - am \geq 0$
3) Let's see what the consequences will be if the new fraction is less than $\frac{c}{d}$:
$\frac{a+c+n}{b+d+m} < \frac{c}{d}$
...
$cm - dn \geq 0$
4) We've got two equations, I will call them p-equations, because they will be the base for our proof (they both have to be right):
$bn - am \geq 0$
$cm - dn \geq 0$
5) Suppose $\frac{a}{b} < \frac{a+c+n}{b+d+m} < \frac{c}{d}$. What do $n$ and $m$ have to be? The claim is that if $m$ is negative, then for no $n$ can this hold. Actually, if $m$ is negative, $n$ can only be less than or equal to $0$, because when the denominator gets smaller, the fraction gets bigger.
6) Suppose that $m$ is negative and $n = 0$. Then the second p-equation can't be true:
$-cm - d\cdot 0 \geq 0 \implies -cm \geq 0$
7) If both n and m are negative, the p-equations can't both be true. I will get rid of the negative signs so we can treat $n$ and $m$ as positive:
$(-bn) - (-am) \geq 0$
$(-cm) - (-dn) \geq 0$
$am - bn \geq 0$
$dn - cm \geq 0$
If something is greater or equal $0$ then we can multiply it by a positive number and it still will be greater or equal $0$, so multiply by $d$ and $b$:
$d(am - bn) \geq 0$
$b(dn - cm) \geq 0$
$da\cdot m - dbn \geq 0$
$dbn - bc\cdot m \geq 0$
But $bc$ is greater than $da$ by the definition. You can already see that the equations can't both be true, but I will show it algebraically:
by the definition $bc = da + 1$, then
$dam - dbn \geq 0$
$dbn - (da + 1)m \geq 0$
$dam - dbn \geq 0$
$dbn - dam - m \geq 0$
If two quantities are greater than or equal to $0$, then their sum is still greater than or equal to $0$.
$(dam - dbn) + (dbn - dam - m) \geq 0$
$-m \geq 0$
It is impossible (I changed $n$ and $m$ from negative to positive before by playing with negative signs).
QED.
If $n$ and $m$ are positive, the p-equations can both be true; I won't go through it here because it is irrelevant to our problem. But it is common sense that I can choose $n$ and $m$ big enough to situate $\frac{a+c+n}{b+d+m}$ between any two fractions.
PS: maybe my proof is too cumbersome, but I want to know whether it is right or not. Also, advice on how to make it simpler is highly appreciated.
|
First of all, the proof is correct and I congratulate you on the excellent effort. I will only offer a few small comments on the writing.
It's not clear until all the way down at (5) that you intend to do a proof by contradiction, and even then you never make it explicit. It's generally polite to state at the very beginning of a proof if you plan to make use of contradiction, contrapositive, or induction.
Tiny detail, maybe even a typo: $n$ and $m$ are integers, not necessarily naturals, so the statement at the end of (2) needs to reflect that. But for integers, also $x>-1$ implies $x\geq 0$, so it's not a big deal.
You didn't really need to make $n$ and $m$ positive, since the only place you use positivity is at the very, very end, where you need $m>0$ to derive the contradiction. You don't even use it when you multiply by $d$, since that relied on the expressions being positive and not the individual numbers themselves. This is the only place where I can really see simplifying the proof.
As it stands, it would make the reader more comfortable if you named the positive versions of $(n,m)$ as $(N,M)$ or $(n',m')$ or something.
Finally, as you hint at, you don't need to consider the positive-positive case. But perhaps you should be more explicit why this is, earlier in the proof.
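For what it's worth, here is a small brute-force Python check of the statement itself rather than of the proof (my addition; the search bound is arbitrary and only fractions at most $1$ are generated):
```python
# Check part (c): for neighbor fractions a/b < c/d (c*b - a*d == 1), no fraction e/f
# with positive e, f and f < b + d lies strictly between them.
LIMIT = 15  # arbitrary search bound on the denominators b and d
for b in range(1, LIMIT):
    for d in range(1, LIMIT):
        for a in range(1, b + 1):
            c, rem = divmod(a * d + 1, b)        # need c*b == a*d + 1
            if rem or not (1 <= c <= d):
                continue
            for f in range(1, b + d):
                for e in range(1, f + 1):
                    # is e/f strictly between a/b and c/d?  (integer cross-multiplication)
                    assert not (a * f < e * b and e * d < c * f)
print("no counterexample found for denominators up to", LIMIT - 1)
```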
|
How to calculate $\sum \limits_{x=0}^{n} \frac{n!}{(n-x)!\,n^x}\left(1-\frac{x(x-1)}{n(n-1)}\right)$ What are the asymptotics of the following sum as $n$ goes to infinity?
$$
S =\sum\limits_{x=0}^{n} \frac{n!}{(n-x)!\,n^x}\left(1-\frac{x(x-1)}{n(n-1)}\right)
$$
The sum comes from a CDF related to sampling with replacement. Consider a random process where integers are sampled uniformly with replacement from $\{1,\dots,n\}$. Let $X$ be a random variable that represents the number of samples until either a duplicate is found or both of the values $1$ and $2$ have been found. So if the samples were $1,6,3,5,1$ then $X=5$, and if they were $1,6,3,2$ then $X=4$. This sum is therefore $\mathbb{E}(X)$.
We therefore know that $S = \mathbb{E}(X) \leq \text{mean time to find a duplicate} \sim \sqrt{\frac{\pi}{2} n}$.
Taking the first part of the sum,
$$\sum_{x=0}^{n} \frac{n!}{(n-x)!\,n^x} = \left(\frac{e}{n} \right)^n \Gamma(n+1,n) \sim \left(\frac{e}{n} \right)^n \frac{n!}{2} \sim \sqrt{\frac{\pi}{2} n}. $$
|
Let $N_n$ denote a Poisson random variable with parameter $n$, then
$$
S_n=\frac{n-2}{n-1}n!\left(\frac{\mathrm e}n\right)^n\mathbb P(N_n\leqslant n)+\frac2{n-1}.
$$
As a consequence, $\lim\limits_{n\to\infty}S_n/\sqrt{n}=\sqrt{\pi/2}$.
To show this (for a shorter proof, see the end of this post), first rewrite each parenthesis in $S_n$ as
$$
1-\frac{x(x-1)}{n(n-1)}=\frac2n(n-x)-\frac1{n(n-1)}(n-x)(n-x-1).
$$
Thus, $\displaystyle S_n=n!\,(2U_n-V_n)$ with
$$
U_n=\frac1n\sum_{x=0}^n\frac1{n^x}\frac{n-x}{(n-x)!}=\sum_{y=1}^n\frac1{n^y}\frac1{(n-y)!}=n^{-n}\sum_{x=0}^{n-1}\frac{n^x}{x!},
$$
and
$$
V_n=\frac1{n(n-1)}\sum_{x=0}^n\frac1{n^x}\frac{(n-x)(n-x-1)}{(n-x)!}=\frac{n}{n-1}\sum_{z=2}^n\frac1{n^z}\frac1{(n-z)!},
$$
that is,
$$
V_n=\frac{n}{n-1}n^{-n}\sum_{x=0}^{n-2}\frac{n^x}{x!}.
$$
Introducing
$$
W_n=n^{-n}\sum_{x=0}^n\frac{n^x}{x!},
$$
this can be rewritten as
$$
U_n=W_n-\frac1{n!},\qquad V_n=\frac{n}{n-1}\,\left(W_n-\frac2{n!}\right),
$$
hence
$$
S_n=n!\left(2\left(W_n-\frac1{n!}\right)-\frac{n}{n-1}\left(W_n-\frac2{n!}\right)\right)=n!\frac{n-2}{n-1}W_n+\frac2{n-1}.
$$
The proof is complete since
$$
W_n=n^{-n}\mathrm e^n\cdot\mathrm e^{-n}\sum_{x=0}^n\frac{n^x}{x!}=n^{-n}\mathrm e^n\cdot\mathbb P(N_n\leqslant n).
$$
Edit: Using the distribution of $N_n$ and the change of variable $x\to n-x$, one gets directly
$$
S_n=n!\left(\frac{\mathrm e}n\right)^n\left(\mathbb P(N_n\leqslant n)-\mathbb E(u_n(N_n))\right),
$$
with
$$
u_n(t)=\frac{(n-t)(n-t-1)}{n(n-1)}\,\mathbf 1_{t\leqslant n}.
$$
Since $N_n/n\to1$ almost surely when $n\to\infty$, $u_n(N_n)\to0$ almost surely. Since $u_n\leqslant1$ uniformly, $\mathbb E(u_n(N_n))\to0$. On the other hand, by the central limit theorem, $\mathbb P(N_n\leqslant n)\to\frac12$, hence Stirling's equivalent applied to the prefactor of $S_n$ yields the equivalent of $S_n$.
Edit-edit: By Berry-Esseen theorem, $\mathbb P(N_n\leqslant n)=\frac12+O(\frac1{\sqrt{n}})$. By the central limit theorem, $\mathbb E(u_n(N_n))=O(\frac1n)$. By Stirling's approximation, $n!\left(\frac{\mathrm e}n\right)^n=\sqrt{2\pi n}+O(\frac1{\sqrt{n}})$. Hence, $S_n=\sqrt{\pi n/2}+T_n+o(1)$ where $\limsup\limits_{n\to\infty}|T_n|\leqslant 2C\sqrt{2\pi}(1+\mathrm e^{-1})$ for any constant $C$ making Berry-Esseen upper bound true, hence $\limsup\limits_{n\to\infty}|T_n|\lt3.3$.
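A quick numerical check of the resulting equivalent $S_n\sim\sqrt{\pi n/2}$ (my addition, not part of the original argument; the factor $n!/((n-x)!\,n^x)$ is computed by a running product to avoid overflow):
```python
import math

def S(n):
    total, prod = 0.0, 1.0            # prod = n! / ((n-x)! * n**x), starting at x = 0
    for x in range(n + 1):
        total += prod * (1 - x * (x - 1) / (n * (n - 1)))
        prod *= (n - x) / n           # update the factor for x -> x + 1
    return total

for n in (100, 10_000, 1_000_000):
    print(n, S(n) / math.sqrt(n), math.sqrt(math.pi / 2))   # ratio approaches sqrt(pi/2)
```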
|
Diagonalizable unitarily Schur factorization Let $A$ be an $n\times n$ matrix.
What exactly is the difference between unitarily diagonalizable and diagonalizable
matrix $A$? Can that be that it is diagonalizable but not unitarily diagonalizable?
What are the conditions for Schur factorization to exist? For a (unitarily) diagonalizable matrix is it necessary that Schur factorization exists and vice versa?
Thanks a lot!
|
Diagonalization means to decompose a square matrix $A$ into the form $PDP^{-1}$, where $P$ is invertible and $D$ is a diagonal matrix. If $P$ is chosen as a unitary matrix, the aforementioned decomposition is called a unitary diagonalization. It follows that every unitarily diagonalizable matrix is diagonalizable.
The converse, however, is not true in general. Note that if $A$ is unitary diagonalizable as $UDU^\ast$ ($U^\ast=U^{-1}$), then the columns of $U$ are the eigenvectors of $A$ and they are orthonormal because $U$ is unitary. Therefore, a diagonalizable matrix is unitarily diagonalizable if and only if it has an orthonormal eigenbasis. So, a matrix like
$$
A=\begin{pmatrix}1&1\\0&1\end{pmatrix}\begin{pmatrix}2&0\\0&1\end{pmatrix}
\begin{pmatrix}1&1\\0&1\end{pmatrix}^{-1}
$$
is diagonalizable but not unitarily diagonalizable.
As for Schur triangulation, it is a decomposition of the form $A=UTU^\ast$, where $U$ is unitary and $T$ is triangular. Every square complex matrix has a Schur triangulation. Note that if $A=UTU^\ast$ is a Schur triangulation, then you can read off the eigenvalues of $A$ from the diagonal entries of $T$. It follows that a real matrix that has non-real eigenvalues cannot possess a Schur triangulation over the real field. For example, the eigenvalues of
$$
A=\begin{pmatrix}0&-1\\1&0\end{pmatrix}
$$
are $\pm\sqrt{-1}$. So it is impossible to triangulate $A$ as $UTU^\ast$ using real unitary matrix (i.e. real orthogonal matrix) $U$ and real triangular matrix $T$. Yet there exist complex $U$ and complex $T$ such that $A=UTU^\ast$.
If a matrix $A$ is unitarily diagonalizable as $A=UDU^\ast$, the diagonalization is automatically a Schur triangulation because every diagonal matrix $D$ is by itself a triangular matrix. Yet the converse is not true in general. For example, every nontrivial Jordan block, such as
$$
A=\begin{pmatrix}0&1\\0&0\end{pmatrix},
$$
is a triangular matrix that is not unitarily diagonalizable.
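These three situations can also be seen numerically; here is a small numpy/scipy sketch (my addition, using the example matrices above):
```python
import numpy as np
from scipy.linalg import schur

# Diagonalizable but not unitarily diagonalizable: the eigenvectors are not orthogonal.
P = np.array([[1.0, 1.0], [0.0, 1.0]])
A = P @ np.diag([2.0, 1.0]) @ np.linalg.inv(P)
_, V = np.linalg.eig(A)
print("eigenvector inner product:", V[:, 0] @ V[:, 1])   # nonzero

# Real matrix with non-real eigenvalues: only the complex Schur form is triangular.
B = np.array([[0.0, -1.0], [1.0, 0.0]])
T, U = schur(B, output="complex")
print(np.round(T, 3))   # upper triangular with +i and -i on the diagonal
```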
|
Compact subspaces of the Poset On page 172, James Munkres' textbook Topology(2ed), there is a theorem about compact subspaces of the real line:
Let $X$ be a simply-ordered set having the least upper bound property. In the order topology, each closed interval in $X$ is compact.
My question is whether there is a generalized theorem about a poset (or a lattice, or a complete lattice, maybe). Is there some elegant way to define a topology on a poset?
|
Many topologies have been defined on partial orders and lattices of various types. One of the most important is the Scott topology. Let $\langle P,\preceq\rangle$ be a partial order. A set $A\subseteq P$ is an upper set if ${\uparrow\!\!x}\subseteq A$ whenever $x\in A$, where ${\uparrow\!\!x}=\{y\in P:x\preceq y\}$. A set $U\subseteq P$ is open in the Scott topology iff $U$ is an upper set with the following property:
if $D\subseteq P$ is a directed set in $P$, and $\bigvee D\in U$, then $D\cap U\ne\varnothing$. (In this case $U$ is said to be inaccessible by directed joins.)
The upper topology is the topology that has $\{P\,\setminus\!\downarrow\!\!x:x\in P\}$ as a subbase. The lower topology is generated by the subbase $\{P\setminus{\uparrow\!\!x}:x\in P\}$.
The Lawson topology is the join (coarsest common refinement) of the Scott and lower topologies.
A number of interval topologies have also been defined, the first ones by Frink and by Birkhoff; this paper deals with a number of such topologies.
These terms should at least give you a start for further search if you’re interested.
|
Difference between $u_t + \Delta u = f$ and $u_t - \Delta u = f$? What is the difference between these 2 equations? Instead of $\Delta$ change it to some general elliptic operator.
Do they have the same results? Which one is used for which?
|
The relation boils down to time-reversal, replacing $t$ by $-t$. This makes a lot of difference in the equations that model diffusion. The diffusion processes observed in nature are normally not reversible (2nd law of thermodynamics). In parallel to that, the backward heat equation $u_t=-\Delta u$ exhibits peculiar and undesirable features such as loss of regularity and non-existence of solution for generic data. Indeed, you probably know that the heat equation $u_t=\Delta u$ has a strong regularizing effect: for any integrable initial data $u(\cdot,0)$ the solution $u(x,t)$ is $C^\infty$ smooth, and also real-analytic with respect to $x $ for any fixed $t>0$. When the direction of time flow is reversed, this effect plays against you: there cannot be a solution unless you have real-analytic data to begin with. And even then the solution can suddenly blow up and cease to exist. Consider the fundamental solution of the heat equation, and trace it backward in time, from nice Gaussians to $\delta$-function singularity.
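This loss of regularity can already be seen in a crude discretization. Here is a minimal numpy sketch (my addition, with arbitrarily chosen grid and step sizes): the same noisy initial data is smoothed by $u_t=\Delta u$ and blown up by the time-reversed equation.
```python
import numpy as np

# 1D explicit finite differences for u_t = u_xx (forward) and u_t = -u_xx (backward).
n, dt, steps = 100, 1e-5, 200
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
rng = np.random.default_rng(0)
u_fwd = np.sin(np.pi * x) + 0.01 * rng.standard_normal(n)   # smooth data + tiny noise
u_bwd = u_fwd.copy()

def laplacian(u):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2    # zero boundary values
    return lap

for _ in range(steps):
    u_fwd = u_fwd + dt * laplacian(u_fwd)   # diffusion smooths the noise away
    u_bwd = u_bwd - dt * laplacian(u_bwd)   # time reversal amplifies it violently

print("forward  max|u| =", np.abs(u_fwd).max())   # stays of order 1
print("backward max|u| =", np.abs(u_bwd).max())   # grows by many orders of magnitude
```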
|
$\infty - \infty = 0$ ? I am given this sequence with square root. $a_n:=\sqrt{n+1000}-\sqrt{n}$. I have read that sequence converges to $0$, if $n \rightarrow \infty$. Then I said, well, it may be because $\sqrt{n}$ goes to $\infty$, and then $\infty - \infty = 0$. Am I right? If I am right, why am I right? I mean, how can something like $\infty - \infty = 0$ happen, since the first $\infty$ which comes from $\sqrt{n+1000}$ is definitely bigger than $\sqrt{n}.$?
|
$$\sqrt{n+1000}-\sqrt n=\frac{1000}{\sqrt{n+1000}+\sqrt n}\xrightarrow [n\to\infty]{}0$$
But you're not right, since for example
$$\sqrt n-\sqrt\frac{n}{2}=\frac{\frac{n}{2}}{\sqrt n+\sqrt\frac{n}{2}}\xrightarrow [n\to\infty]{}\infty$$
In fact, a difference $\,\infty-\infty\,$ in limit theory can be anything.
|
How to be sure that the $k$th largest singular value is at least 1 of a matrix containing a k-by-k identity In section 8.4 of the report of ID software, it says that the $k$th largest singular value of a $k \times n$ matrix $P$ is at least 1 if some subset of its columns makes up a $k\times k$ identity.
I tried to figure it out but couldn't be sure of that. Any ideas on how to prove it?
|
The middle part of the SVD (which contains the singular values) does not change if you permute columns, so you may put the $k$ columns mentioned first. So assume the matrix has the form $A=[\begin{smallmatrix}I&B\end{smallmatrix}]$ where $I$ is a $k\times k$ identity, and $B$ is $k\times(n-k)$. Compute $$AA^T=I+BB^T\ge I$$
and conclude that eigenvalues of $AA^T$ are all at least $1$.
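A quick numerical illustration with numpy (my addition; the block $B$ is random):
```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 4, 9
B = rng.standard_normal((k, n - k))
A = np.hstack([np.eye(k), B])               # k of the columns form a k-by-k identity
sigma = np.linalg.svd(A, compute_uv=False)  # the k singular values of A
print(np.round(sigma, 3))
print(bool(sigma.min() >= 1 - 1e-12))       # True: all singular values are at least 1
```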
|
non constant bounded holomorphic function on some open set this is an exercise I came across in Rudin's "Real and complex analysis" Chapter 16.
Suppose $\Omega$ is the complement set of $E$ in $\mathbb{C}$, where $E$ is a compact set with positive Lebesgue measure in the real line.
Does there exist a non-constant bounded holomorphic function on $\Omega$?
In particular, do this for $E=[-1,1]$.
Some observations:
Suppose there exists such a function $f$; then WLOG we may assume $f$ has no zeros in $\Omega$, by adding a large enough positive constant. Then $\Omega$ being simply-connected implies $\int_{\gamma}f\,dz=0$ for any closed curve $\gamma\subset \Omega$; how can one deduce a contradiction?
|
Reading Exercise 8 of Chapter 16, I imagine Rudin interrogating the reader.
Let $E\subset\mathbb R$ be a compact set of positive measure, let $\Omega=\mathbb C\setminus E$, and define $f(z)=\int_E \frac{dt}{t-z}$. Now answer me!
a) Is $f$ constant?
b) Can $f$ be extended to an entire function?
c) Does $zf(z)$ have a limit at $\infty$, and if so, what is it?
d) Is $\sqrt{f}$ holomorphic in $\Omega$?
e) Is $\operatorname{Re}f$ bounded in $\Omega$? (If yes, give a bound)
f) Is $\operatorname{Im}f$ bounded in $\Omega$? (If yes, give a bound)
g) What is $\int_\gamma f(z)\,dz$ if $\gamma$ is a positively oriented loop around $E$?
h) Does there exist a nonconstant bounded holomorphic function on $\Omega$?
Part h) appears to come out of the blue, especially since $f$ is not bounded: we found that in part (e). But it is part (f) that's relevant here: $\operatorname{Im}f$ is indeed bounded in $\Omega$ (Hint: write it as a real integral, notice that the integrand has constant sign, extend the region of integration to $\mathbb R$, and evaluate directly). Therefore, $f$ maps $\Omega$ to a horizontal strip. It's a standard exercise to map this strip onto a disk by some conformal map $g$, thus obtaining a bounded function $g\circ f$.
|
Proving that $\alpha^{n-2}\leq F_n\leq \alpha^{n-1}$ for all $n\geq 1$, where $\alpha$ is the golden ratio I got stuck on this exercise. It is Theorem 1.15 on page 14 of Robbins' Beginning Number Theory, 2nd edition.
Theorem 1.15. $\alpha^{n-2}\leq F_n\leq \alpha^{n-1}$ for all $n\geq 1$.
Proof: Exercise.
(image of relevant page)
|
Using Binet's Fibonacci Number Formula,
$\alpha+\beta=1,\alpha\beta=-1$ and $\beta<0$
\begin{align}
F_n-\alpha^{n-1}=
&
\frac{\alpha^n-\beta^n}{\alpha-\beta}-\alpha^{n-1}
\\
=
&
\frac{\alpha^n-\beta^n-(\alpha-\beta)\alpha^{n-1}}{\alpha-\beta}
\\
=
&
\beta\frac{(\alpha^{n-1}-\beta^{n-1})}{\alpha-\beta}
\\
=
&
\beta\cdot F_{n-1}\le 0
\end{align}
if $n\ge 1$.
\begin{align}
F_n-\alpha^{n-2}=
&
\frac{\alpha^n-\beta^n}{\alpha-\beta}-\alpha^{n-2}
\\
=
&
\frac{\alpha^{n-1}\cdot \alpha-\beta^n-\alpha^{n-1}+\beta\cdot \alpha^{n-2}}{\alpha-\beta}
\\
=
&
\frac{\alpha^{n-1}\cdot (1-\beta)-\beta^n-\alpha^{n-1}+\beta\cdot \alpha^{n-2}}{\alpha-\beta}
\\
=
&
\frac{\beta(\alpha^{n-2}-\alpha^{n-1})-\beta^n}{\alpha-\beta}
\\
=
&\frac{\beta\cdot \alpha^{n-2}(1-\alpha)-\beta^n}{\alpha-\beta}
\\
=
&
\frac{\beta\cdot \alpha^{n-2}(\beta)-\beta^n}{\alpha-\beta}
\\
=
&
\beta^2\frac{\alpha^{n-2}-\beta^{n-2}}{\alpha-\beta}
\\
=
&
\beta^2F_{n-2}\ge 0
\end{align}
if $n\ge 1$.
|
Eigenvalues of the matrix $(I-P)$ Let $P$ be a strictly positive $n\times n$ stochastic matrix.
I hope to find out the stability of a system characterized by the matrix $(I-P)$. So I'm interested in knowing under what conditions on the entries of $P$ all the eigenvalues of the matrix $(I-P)$ lie (not necessarily strictly) within the unit disk in the complex plane. Each eigenvalue $\lambda$ of $P$ corresponds to the eigenvalue $1-\lambda$ of $I-P$, which would require additional conditions to be placed on $P$ so as to keep them within the unit disk.
Or can anybody point out a direction towards which I can look for the answer? I don't really know what subject of linear algebra I should go for...
|
One sufficient condition (that is not necessary) is that all diagonal entries of $P$ are greater than or equal to $1/2$. If this is the case, by Gersgorin disc theorem, all eigenvalues of $I-P$ will lie inside the closed disc centered at $1/2$ with radius $1/2$, and hence lie inside the closed unit disc as well.
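A small numerical illustration with numpy (my addition; the stochastic matrix is random, with its diagonal forced to be at least $1/2$):
```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
P = rng.random((n, n))
P = P / P.sum(axis=1, keepdims=True)    # rows sum to 1, all entries positive
P = 0.5 * np.eye(n) + 0.5 * P           # force diagonal >= 1/2, rows still sum to 1
eigs = np.linalg.eigvals(np.eye(n) - P)
print(np.round(np.abs(eigs), 3))
print(bool(np.all(np.abs(eigs) <= 1 + 1e-12)))   # True: all moduli are at most 1
```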
|
Linear independence of $\sin(x)$ and $\cos(x)$ In the vector space of $f:\mathbb R \to \mathbb R$, how do I prove that functions $\sin(x)$ and $\cos(x)$ are linearly independent. By def., two elements of a vector space are linearly independent if $0 = a\cos(x) + b\sin(x)$ implies that $a=b=0$, but how can I formalize that? Giving $x$ different values? Thanks in advance.
|
Although I'm not confident about this, maybe you can use the power series for $\sin x$ and $\cos x$? I'm working on a similar exercise, but mine restricts both functions to the interval $[0,1]$.
|
How can I find two independent solution for this ODE? Please help me find two independent solutions for $$3x(x+1)y''+2(x+1)y'+4y=0$$ Thanks from a new beginner into ODE's.
|
Note that both $x=0$ and $x=-1$ are regular singular points (check them!). As @Edgar assumed, let $y=\sum_1^{\infty}a_nx^n$ and, by writing the equation as $$x^2y''+\frac{2}{3}xy'+\frac{4x}{3(x+1)}y=0$$ we get: $$p(x)=\frac{2}{3},\; q(x)=\frac{4x}{3(x+1)}$$ and then $p(0)=\frac{2}{3},\; q(0)=0$. Now I suggest you set up the indicial equation (see http://www.math.ualberta.ca/~xinweiyu/334.1.10f/DE_series_sol.pdf) $$s(s-1)+sp(0)+q(0)=0$$ You get $s_1=0,s_2=\frac{1}{3}$. Can you do the rest from here?
|
Is $X_n = 1+\frac12+\frac{1}{2^2}+\cdots+\frac{1}{2^n}; \forall n \ge 0$ bounded? Is $X_n = 1+\frac12+\frac{1}{2^2}+\cdots+\frac{1}{2^n}; \forall n \ge 0$ bounded?
I have to find an upper bound for $X_n$ and I can't figure it out; a lower bound can be $0$ or $1$, but does it have an upper bound?
|
$$1+\frac12+\frac{1}{2^2}+\cdots+\frac{1}{2^n}=1+\frac{1}{2} \cdot \frac{\left(\frac{1}{2}\right)^{n}-1}{\frac{1}{2}-1}=2-\left(\frac{1}{2}\right)^{n}<2.$$ Now it is obvious that it is bounded.
|
Reference for an integral formula Good morning,
I'm reading a paper of W. Stoll in which the author uses some implicit facts (i.e. he states them without proofs and references) in measure theory. So I would like to ask the following question:
Let $G$ be a bounded domain in $\mathbb{R}^n$ and $S^{n-1}$ the unit sphere in $\mathbb{R}^n.$ For each $a\in S^{n-1}$, define $L(a) = \{x\cdot a~:~ x\in \mathbb{R}\}.$ Denote by $L^n$ the $n$-dimensional Lebesgue measure. Is the following formula true? $$\int_{a\in S^{n-1}}L^1(G\cap L(a)) = L^n(G) = \mathrm{vol}(G).$$
Could anyone please show me a reference where there is a proof for this? If this formula is not true, how will we correct it?
Thanks in advance,
Duc Anh
|
This formula seems to be false. Consider the case of the unit disk in $\mathbb{R}^2$, $D^2$. This is obviously bounded.
$L^1(D^2 \cap L(a)) = 2$ for any $a$ in $S^1$, as the radius of $D^2$ is 1, and the intersection of the line through the origin that goes through $a$ and $D^2$ has length 2. The integral on the left is therefore equal to $4\pi$, and $L^2(D^2)$ was known by the Greeks to be $\pi$.
So your formula is like computing an integral in polar coordinates without multiplying the integrand by the determinant of the jacobian.
|
Abstract Algebra - Monoids I'm trying to find the identity of a monoid but all the examples I can find are not as complex as this one.
(source: gyazo.com)
|
It's straightforward to show that $\otimes$ is an associative binary operation, and as others have pointed out, the identity of the monoid is $(1, 0, 1)$. However, $(\mathbb{R}^3, \otimes)$ is not a group, since for example $(0, 0, 0)$ has no inverse element.
|
Cantor's intersection theorem and Baire Category Theorem From an old post in math stackexchange, I read a comment which goes as follows "I like to think of Baire Category Theorem as spiced up version of Cantor's Intersection Theorem". My question: is it possible to derive the latter one using the former?
|
Do you have a copy of Rudin's Principles of Mathematical Analysis? If you do, then problems 3.21 and 3.22 outline how this is done. Quoting here:
3.21: Prove: If $(E_n)$ is a sequence of closed and bounded sets in a complete metric space $X$, if $E_n \supset E_{n+1}$, and if $\lim_{n\to\infty}\text{diam}~E_n=0$, then $\cap_1^\infty E_n$ consists of exactly one point.
3.22: Suppose $X$ is a complete metric space, and $(G_n)$ is a sequence of dense open subsets of $X$. Prove Baire's theorem, namely, that $\cap_1^\infty G_n$ is not empty. (In fact, it is dense in $X$.) Hint: Find a shrinking sequence of neighborhoods $E_n$ such that $\overline{E_n}\subset G_n$, and apply Exercise 21.
Proving density isn't actually much harder than proving nonemptiness.
|
How to find normal vector of line given point normal passes through Given a line L in three-dimensional space and a point P, how can we find the normal vector of L under the constraint that the normal passes through P?
|
Let the line and point have position vectors $\vec r=\vec a+\lambda \vec b$ ($\lambda$ is real) and $\vec p$ respectively. Set $(\vec r-\vec p).\vec b=0$ and solve for $\lambda$ to obtain $\lambda_0$. The normal vector is simply $\vec a+\lambda_0 \vec b-\vec p$.
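A direct translation of this recipe into numpy (my addition; the function name and the example data are mine):
```python
import numpy as np

def normal_from_line_to_point(a, b, p):
    """Vector from p to the foot of the perpendicular on the line r = a + t*b."""
    a, b, p = map(np.asarray, (a, b, p))
    t0 = (p - a) @ b / (b @ b)      # solve (a + t*b - p) . b = 0 for t
    return a + t0 * b - p

# Example: line through the origin with direction (1,0,0), point (3,4,0).
print(normal_from_line_to_point([0, 0, 0], [1, 0, 0], [3, 4, 0]))   # [ 0. -4.  0.]
```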
|
$ \displaystyle\lim_{n\to\infty}\frac{1}{\sqrt{n^3+1}}+\frac{2}{\sqrt{n^3+2}}+\cdots+\frac{n}{\sqrt{n^3+n}}$ $$ \ X_n=\frac{1}{\sqrt{n^3+1}}+\frac{2}{\sqrt{n^3+2}}+\cdots+\frac{n}{\sqrt{n^3+n}}$$ Find $\displaystyle\lim_{n\to\infty} X_n$ using the squeeze theorem
I tried this approach:
$$
\frac{1}{\sqrt{n^3+1}}\le\frac{1}{\sqrt{n^3+1}}<\frac{n}{\sqrt{n^3+1}}
$$
$$
\frac{1}{\sqrt{n^3+1}}<\frac{2}{\sqrt{n^3+2}}<\frac{n}{\sqrt{n^3+1}}$$
$$\vdots$$
$$\frac{1}{\sqrt{n^3+1}}<\frac{n}{\sqrt{n^3+n}}<\frac{n}{\sqrt{n^3+1}}$$
Adding these inequalities:
$$\frac{n}{\sqrt{n^3+1}}\leq X_n<\frac{n^2}{\sqrt{n^3+1}}$$
And this doesn't help me much. How should I proceed?
|
Hint: use $\frac{i}{\sqrt{n^3+n}} \le \frac{i}{\sqrt{n^3+i}} \le \frac{i}{\sqrt{n^3+1}}$. I even think the Squeeze theorem can be avoided.
|
condition on $\epsilon$ to make $f$ injective
From the condition, $g$ is uniformly continuous, and $x$ is also uniformly continuous and one-one, but we don't know whether $g$ is one-one or not, so will $\epsilon=0$ work? Maybe I am vague. Thank you.
|
We have
$$
f'(x)=1+\varepsilon g'(x) \quad \forall\ x \in \mathbb{R}.
$$
For $\varepsilon \ge 0$ we have
$$
1-\varepsilon M \le f'(x)\le 1+\varepsilon M \quad \forall\ x \in \mathbb{R}.
$$
If we choose
$$
\varepsilon \in [0,1/M),
$$
then $f'>0$, i.e. $f$ is strictly increasing and therefore one-to-one.
For $\varepsilon < 0$ we have
$$
1+\varepsilon M \le f'(x)\le 1-\varepsilon M \quad \forall\ x \in \mathbb{R}.
$$
If we choose
$$
\varepsilon \in (-1/M,0),
$$
then $f'<0$, i.e. $f$ is strictly decreasing and therefore one-to-one.
Hence, if $|\varepsilon|<1/M$ then $f$ is injective.
|
If both roots of the Quadratic Equation are similar then prove that If both roots of the equation $(a-b)x^2+(b-c)x+(c-a)=0$ are equal, prove that $2a=b+c$.
Things should be known:
*
*The roots of a quadratic equation can be identified by:
$$\frac{-b \pm \sqrt{d}}{2a},$$
where
$$d=b^2-4ac.$$
*When the equation has equal roots, then $d=b^2-4ac=0$.
*That means $d=(b-c)^2-4(a-b)(c-a)=0$
|
As the two roots are equal the discriminant must be equal to $0$.
$$(b-c)^2-4(a-b)(c-a)=(a-b+c-a)^2-4(a-b)(c-a)=\{a-b-(c-a)\}^2=(2a-b-c)^2=0 \iff 2a-b-c=0$$
Alternatively, solving for $x,$ we get $$x=\frac{-(b-c)\pm\sqrt{(b-c)^2-4(a-b)(c-a)}}{2(a-b)}=\frac{c-b\pm(2a-b-c)}{2(a-b)}=\frac{c-a}{a-b}, 1$$ as $a-b\ne 0$ as $a=b$ would make the equation linear.
So, $$\frac{c-a}{a-b}=1\implies c-a=a-b\implies 2a=b+c$$
|
Topology - Open and closed sets in two different metric spaces I am currently working through some topology problems, and I would like to confirm my results just for some peace of mind!
Statement: Let $(X, d_x)$ be a metric space, and $Y\subset X$ a non-empty subset. Define a metric $d_y$ on $Y$ by restriction. Then, each subset $A\subset Y $ can be viewed as a subset of $X$.
Q 1: If $A$ is open in $X$, $A$ open in $Y$.
|
This I found FALSE, example: Let $X =\mathbb{R}, Y = [0,1]$ and $A = [0, 1/2)$. $A$ is open in $Y$; however, $A$ is not open in $X$.
Q 3: If $A$ is closed in $X$, then $A$ is closed in $Y$.
|
Tensor Components In Barrett Oneill's Semi-Riemann Geometry there is a definition of tensor component:
Let $\xi=(x^1,\dots ,x^n)$ be a coordinate system on $\upsilon\subset
M$. If $A \in \mathscr I^r_s (M)$ the components of $A$ relative to
$\xi$ are the real-valued functions $A^{i_1\dots i_r}_{j_1\dots j_s}=A(dx^{i_1},\dots ,dx^{i_r},\partial_{j_1},\dots,\partial_{j_s})$ on
$\upsilon$, where all indices run from $1$ to $n=\dim M$.
By the definition above the $i$th component of $X$ relative to
$\xi$ is $X(dx^i)$, which is interpreted as $dx^i(X)=X(x^i)$.
I don't understand the last sentence.
Because one-forms are $(0,1)$ tensors we could interpret them like $V(\theta)=\theta(V)$.
So we can do the same thing here:
$X(dx^i)=dx^i(X)$. But how did we write $dx^i(X)=X(x^i)$?
Did I make a mistake?
|
The is a very nice intuitive explanation of this in Penrose's Road to Reality, ch 14. As a quick summary, any vector field can be thought of as a directional derivative operator on scalar valued functions, i.e for every scalar valued smooth function $f$ and vector field $X$, define the scalar field
$$X(f) \triangleq p \mapsto \lim_{\epsilon \rightarrow 0} \frac{f(p+\epsilon X_p) - f(p)}\epsilon $$
at every point $p$. This action on all $f$ uniquely determines $X$.
Similarly the 1-form $df$ acts on a vector field $X$ to yield the linear change in $f$ along $X$, ie:
$$df(X) = X(f)$$
Substituting $x^i$ for $f$ then gives $dx^i(X) = X(x^i)$.
In more detail, given a coordinate system $x^i$, exterior calculus says
$$df = \sum_i \partial_{x^i}f\,dx^i$$
Combining the above
$$X(f) = df(X) = \sum_i \partial_{x^i}f\,dx^i(X) = \sum_i \partial_{x^i}f \, X(x^i) = \sum_i X(x^i) \, \partial_{x^i} f \\
\therefore X = \sum_i X^i \partial_{x^i} \text{ where } X^i \triangleq X(x^i)$$
So $\partial_{x^i}$ form a basis for the space of directional derivative operators and thus also vector fields.
The $dx^i$ then are the dual basis of $\partial_{x^i}$, since
$$dx^i(\partial_{x^j}) = \partial_{x^j} x^i = \delta^i_j \\
dx^i(X) = X(x^i) = X^i$$
as required.
To formalize this for the general Riemannian setting, you need express it in terms of manifolds, charts and tangent spaces/bundles, with appropriate smoothness conditions on everything, but Penrose's explanation gives you a nice mental picture to start with. The book also has excellent diagrams. It is worth getting just for differential geometry chapter.
|
Integration of $x^3 \tan^{-1}(x)$ by parts I'm having a problem with this question. How would one integrate $$\int x^3\tan^{-1}x\,dx\text{ ?}$$
After trying too much I got stuck at this point. How would one integrate $$\int \frac{x^4}{1+x^2}\,dx\text{ ?}$$
|
You've done the hardest part. Now, the problem isn't so much about "calculus"; you simply need to recall what you've learned in algebra:
$(1)$ Divide the numerator of the integrand, $\,{x^4}\,$, by its denominator, $\,{1+x^2}\,$, using *polynomial long division* (linked to serve as a reference).
This will give you:
$$\int \frac{x^4}{1+x^2}\,dx = \int (x^2 + \frac{1}{1+x^2} -1)\,dx=\;\;?$$
Alternatively: Notice also that $$\int \frac{x^4}{x^2 + 1}\,dx= \int \frac{[(x^4 - 1) + 1]}{x^2 + 1}\,dx$$
$$= \int \frac{(x^2 + 1)(x^2 - 1) + 1}{x^2 + 1} \,dx = \int x^2 - 1 + \frac{1}{x^2 + 1}\,dx$$
I trust you can take it from here?
|
a ideal of the ring of regular functions of an affine variety. We assume that $\Bbb k$ is an algebraically closed field.
Let $X \subset \Bbb A^n$ be an affine $\Bbb k$-variety , let's consider $ \mathfrak A_X \subset \Bbb k[t_1,...t_n]$ as the ideal of polynomials that vanish on $X$. Given a closed subset $Y\subset X$ we associate the ideal $ a_Y \subset k[X]$ defined by $
a_Y = \left\{ {f \in k[X];f = 0\,on\,Y} \right\}
$.
I'm reading "Basic Algebraic Geometry" of Shafarevich, and it says that "It follows from the Nullstellensatz that $Y$ is the empty set if and only if $a_Y = k[X]$". But I don't know how to prove this. Maybe it's trivial, but I need help anyway.
After knowing if the result is true here, I want to know if it's true in the case of quasiprojective varieties, but first the affine case =)
It remains to prove that if $ Y\subset X $ then $$
Y = \emptyset \Rightarrow a_Y = k\left[ X \right]
$$
For that side we need Hilbert Nullstelensatz, but I don't know how to use it.
|
The Nullstellensatz is unnecessary here. If every function vanishes on a set Y (including all constant functions) then the set Y must obviously be empty; else the function f(x)=1 would be non-vanishing at some point x. Conversely if the set Y is empty then every function attains 0 at every point of Y.
|
Well-ordering the set of all finite sequences. Let $(A,<)$ be well-ordered set, using <, how can one define well-order on set of finite sequences?
(I thought using lexicographic order)
Thank you!
|
The lexicographic order is fine, but you need to decide what happens when one sequence extends another -- there the definition of the lexicographic order may break down. In this case you may want to require that the shorter sequence comes before the longer sequence.
Generally speaking, if $\alpha$ and $\beta$ are two well-ordered sets, then we know how to well-order $\alpha\times\beta$ (lexicographically, or otherwise). We can define by recursion a well-order of $\alpha^n$ for all $n$, and then define the well-order on $\alpha^{<\omega}$ as comparison of length, and if the two sequences have the same length, $n$, then we can compare them by the well-order of $\alpha^n$.
|
Find intersection of two 3D lines I have two lines, $(5,5,4) (10,10,6)$ and $(5,5,5) (10,10,3)$, with the same $x$, $y$ values and a difference in the $z$ values.
Please, somebody tell me how I can find the intersection of these lines.
EDIT: By using the answer given by coffemath I was able to find the intersection point for the points given above. But I'm getting a problem for $(6,8,4) (12,15,4)$ and $(6,8,2) (12,15,6)$: I'm unable to calculate the common point for these points, as it results in a zero.
Any ideas to resolve this?
Thanks,
Kumar.
|
The direction numbers $(a,b,c)$ for a line in space may be obtained from two points on the line by subtracting corresponding coordinates. Note that $(a,b,c)$ may be rescaled by multiplying through by any nonzero constant.
The first line has direction numbers $(5,5,2)$ while the second line has direction numbers $(5,5,-2).$ Once one has direction numbers $(a,b,c)$, one can use either given point of the line to obtain the symmetric form of its equation as
$$\frac{x-x_0}{a}=\frac{y-y_0}{b}=\frac{z-z_0}{c}.$$ Note that if one or two of $a,b,c$ are $0$ the equation for that variable is obtained by setting the top to zero. That doesn't happen in your case.
Using the given point $(5,5,4)$ of the first line gives its symmetric equation as
$$\frac{x-5}{5}=\frac{y-5}{5}=\frac{z-4}{2}.$$
And using the given point $(5,5,5)$ of the second line gives its symmetric form
$$\frac{x-5}{5}=\frac{y-5}{5}=\frac{z-5}{-2}.$$
Now if the point $(x,y,z)$ is on both lines, the equation
$$\frac{z-4}{2}=\frac{z-5}{-2}$$
gives $z=9/2$, so that the common value for the fractions is $(9/2-4)/2=1/4$. This value is then used to find $x$ and $y$. In this example the equations are both of the same form $(t-5)/5=1/4$ with solution $t=25/4$. So we may conclude the intersection point is
$$(25/4,\ 25/4,\ 9/2).$$
ADDED CASE:
The OP has asked about another case, which illustrates what happens when one of the direction numbers of one of the two lines is $0$.
Line 1: points $(6,8,4),\ (12,15,4);$ directions $(6,7,0)$, "equation"
$$\frac{x-6}{6}=\frac{y-8}{7}=\frac{z-4}{0},$$
where I put equation in quotes because of the division by zero, and as noted the zero denominator of the last fraction means $z=4$ (so $z$ is constant on line 1).
Line 2: points $(6,8,2),\ (12,15,6);$ directions $(6,7,4)$, equation
$$\frac{x-6}{6}=\frac{y-8}{7}=\frac{z-2}{4}.$$
Now since we know $z=4$ from line 1 equation, we can use $z=4$ in $(z-2)/4$ of line 2 equation, to get the common fraction value of $(4-2)/4=1/2$.
Then from either line, $(x-6)/6=1/2$ so $x=9$, and $(y-8)/7=1/2$ so $y=23/2.$ So for these lines the intersection point is $(9,\ 23/2,\ 4).$
It should be pointed out that two lines in space generally do not intersect, they can be parallel or "skew". This would come out as some contradictory values in the above mechanical procedure.
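For completeness, here is a small numpy sketch (my addition; the function name is mine) that reproduces both examples and flags parallel or skew lines by comparing the two candidate points:
```python
import numpy as np

def intersect_lines(p1, q1, p2, q2, tol=1e-9):
    """Intersection of the line through p1, q1 with the line through p2, q2 (or None)."""
    p1, q1, p2, q2 = map(lambda v: np.asarray(v, float), (p1, q1, p2, q2))
    d1, d2 = q1 - p1, q2 - p2
    # p1 + s*d1 = p2 + t*d2  ->  least-squares solve [d1, -d2] [s, t]^T = p2 - p1
    A = np.column_stack([d1, -d2])
    (s, t), *_ = np.linalg.lstsq(A, p2 - p1, rcond=None)
    x1, x2 = p1 + s * d1, p2 + t * d2
    return x1 if np.linalg.norm(x1 - x2) < tol else None   # None: parallel or skew

print(intersect_lines((5, 5, 4), (10, 10, 6), (5, 5, 5), (10, 10, 3)))   # [6.25 6.25 4.5 ]
print(intersect_lines((6, 8, 4), (12, 15, 4), (6, 8, 2), (12, 15, 6)))   # [ 9.  11.5  4. ]
```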
|
What's the difference between expected values in binomial distributions and hypergeometric distributions? The formula for the expected value in a binomial distribution is:
$$E(X) = nP(s)$$
where $n$ is the number of trials and $P(s)$ is the probability of success.
The formula for the expected value in a hypergeometric distribution is:
$$E(X) = \frac{ns}{N}$$
where $N$ is the population size, $s$ is the number of successes available in the population and $n$ is the number of trials.
$$E(x) = \left( \frac{s}{N} \right)n $$
$$P(s) = \frac{s}{N}$$
$$\implies E(x) = nP(s)$$
Why do both the distributions have the same expected value? Why doesn't the independence of the events have any effect on expected value?
|
For either one, let $X_i=1$ if there is a success on the $i$-th trial, and $X_i=0$ otherwise. Then
$$X=X_1+X_2+\cdots+X_n,$$
and therefore by the linearity of expectation
$$E(X)=E(X_1)+E(X_2)+\cdots +E(X_n)=nE(X_1).
\tag{1}$$
Note that linearity of expectation does not require independence.
In the hypergeometric case, $\Pr(X_i=1)=\frac{s}{N}$, where $s$ is the number of "good" objects among the $N$. This is because any object is just as likely to be the $i$-th one chosen as any other. So $E(X_i)=1\cdot\frac{s}{N}+0\cdot \frac{N-s}{N}=\frac{s}{N}$. It follows that $E(X)=n\frac{s}{N}$.
Essentially the same proof works for the binomial distribution: both expectations follow from Formula $(1)$.
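A quick simulation with numpy (my addition; the population parameters are arbitrary) showing that the two means agree even though the variances differ:
```python
import numpy as np

rng = np.random.default_rng(0)
N, s, n, reps = 20, 8, 5, 200_000    # population of 20 with 8 "good" objects, draw 5

binom = rng.binomial(n, s / N, size=reps)            # with replacement
hyper = rng.hypergeometric(s, N - s, n, size=reps)   # without replacement
print("means:    ", binom.mean(), hyper.mean(), "(theory:", n * s / N, ")")
print("variances:", binom.var(), hyper.var())        # these differ
```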
|
Involutions of a torus $T^n$. Let $T^n$ be a complex torus of dimension $n$ and $x \in T^n$. We have a canonical involution $-id_{(T^n,x)}$ on the torus $T^n$. I want to know for which $y \in T^n$, we have $-id_{(T^n,x)}=-id_{(T^n,y)}$ as involutions of $T^n$.
My guess is, such $y$ must be a 2-torsion point of $(T^n,x)$ and there are $2^{2n}$ choices of such $y$. Am I right?
|
Yes, you are right: here is a proof (I have taken the liberty of slightly modifying your notations).
Let $X=\mathbb C^n/\Lambda$ be the complex torus obtained by dividing out $\mathbb C^n$ by the lattice $\Lambda\subset \mathbb C^n$ ($\Lambda \cong \mathbb Z^{2n}$). This torus is an abelian Lie group, and this gives it much more structure than a plain complex manifold.
Such a torus admits of the involution $-id=\iota _0: X\to X:x\mapsto -x$, a holomorphic automorphism of the complex manifold $X$.
But for every $a\in X$ it also admits of the involution $\iota _a: X\to X:x\mapsto 2a-x$, which fixes $a$.
Your question amounts to asking for which $a\in X$ we have $\iota_ a=\iota_0=-id$.
This means $2a-x=-x$ for all $x\in X$ or equivalently $2a=0$.
So, exactly as you conjectured, the required points $a\in X$ are the $2^{2n}$ two-torsion points of $X$, namely the images of $\Lambda/2$, the half-lattice points, under the projection morphism $\mathbb C^n\to X=\mathbb C^n/\Lambda$.
|
Intermediate Value Theorem and Continuity of derivative. Suppose that a function $f(x)$ is differentiable $\forall x \in [a,b]$. Prove that $f'(x)$ takes on every value between $f'(a)$ and $f'(b)$.
If the above question is a misprint and wants to say "prove that $f(x)$ takes on every value between $f(a)$ and $f(b)$", then I have no problem using the intermediate value theorem here.
If, on the other hand, it is not a misprint, then it seems to me that I can't use the Intermediate value theorem, as I can't see how I am authorised to assume that $f'(x)$ is continuous on $[a,b]$.
Or perhaps there is another way to look at the problem?
|
This is not a misprint. You can indeed prove that $f'$ takes every value between $f'(a)$ and $f'(b)$. You cannot, however, assume that $f'$ is continuous. A standard example is $f(x) = x^2 \sin(1/x)$ when $x \ne 0$, and $0$ otherwise. This function is differentiable at $0$ but the derivative isn't continuous at it.
To prove the problem you have, consider the function $g(x) = f(x) - \lambda x$ for any $\lambda \in (f'(a), f'(b))$. What do you know about $g'(a)$, $g'(b)$? What do you conclude about $g$ in the interval $[a, b]$?
|
Solve equation $\tfrac 1x (e^x-1) = \alpha$ I have the equation $\tfrac 1x (e^x-1) = \alpha$ for an positive $\alpha \in \mathbb{R}^+$ which I want to solve for $x\in \mathbb R$ (most of all I am interested in the solution $x > 0$ for $\alpha > 1$). How can I do this?
My attempt
I defined $\phi(x) = \tfrac 1x (e^x-1)$ which can be continuously extended to $x=0$ with $\phi(0)=1$ ($\phi$ is the difference quotient $\frac{e^x-e^0}{x-0}$ of the exponential function). Therefore it is an entire function. Its Taylor series is
$$\phi(x) = \frac 1x (e^x-1) = \frac 1x (1+x+\frac{x^2}{2!}+\frac{x^3}{3!} + \ldots -1) = \sum_{n=0}^\infty \frac{x^n}{(n+1)!}$$
Now I can calculate the power series of the inverse function $\phi^{-1}$ with the methods of Lagrange inversion theorem or the Faà di Bruno's formula. Is there a better approach?
Diagram of $\phi(x)=\begin{cases} \tfrac 1x (e^x-1) & ;x\ne 0 \\ 1 & ;x=0\end{cases}$:
|
I just want to complete Hans Engler's answer. He already showed
$$x = -\frac 1\alpha -W\left(-\frac 1\alpha e^{-\tfrac 1\alpha}\right)$$
$\alpha > 0$ implies $-\tfrac 1\alpha \in \mathbb{R}^{-}$ and thus $-\tfrac 1\alpha e^{-\tfrac 1\alpha} \in \left[-\tfrac 1e,0\right)$ (The function $z\mapsto ze^z$ maps $\mathbb{R}^-$ to $\left[-\tfrac 1e,0\right)$) The Lambert $W$ function has two branches $W_0$ and $W_{-1}$ on the interval $\left[-\tfrac 1e,0\right)$:
So we have the two solutions
$$x_1 = -\frac 1\alpha -W_0\left(-\frac 1\alpha e^{-\tfrac 1\alpha}\right)$$
$$x_2 = -\frac 1\alpha -W_{-1}\left(-\frac 1\alpha e^{-\tfrac 1\alpha}\right)$$
One of $W_0\left(-\frac 1\alpha e^{-\tfrac 1\alpha}\right)$ and $W_{-1}\left(-\frac 1\alpha e^{-\tfrac 1\alpha}\right)$ will always equal $-\tfrac 1\alpha$, as $W$ is the inverse of the function $z \mapsto ze^z$. This branch of $W$ would give $x=0$, which must be discarded for $\alpha \ne 1$, since $\phi(x)=1$ only for $x=0$.
Case $\alpha=1$: For $\alpha=1$ is $-\tfrac 1\alpha e^{-\tfrac 1\alpha}=-\tfrac 1e$ and thus $W_0\left(-\frac 1\alpha e^{-\tfrac 1\alpha}\right)=W_{-1}\left(-\frac 1\alpha e^{-\tfrac 1\alpha}\right)=-1$. This gives $\phi^{-1}(1)=0$ as expected.
Case $\alpha > 1$: $\alpha > 1 \Rightarrow 0 < \tfrac 1 \alpha < 1 \Rightarrow -1 < -\tfrac 1 \alpha < 0$.
Because $W_0(y) \ge -1$ it must be $W_0\left(-\frac 1\alpha e^{-\tfrac 1\alpha}\right)=-\tfrac 1\alpha$ and so
$$\phi^{-1}(\alpha) = -\frac 1\alpha -W_{-1}\left(-\frac 1\alpha e^{-\tfrac 1\alpha}\right)\text{ for } \alpha > 1$$
Case $\alpha < 1$: $0 < \alpha < 1 \Rightarrow \tfrac 1 \alpha > 1 \Rightarrow -\tfrac 1\alpha < -1$
Because $W_{-1}(y) \le -1$ we have $W_{-1}\left(-\frac 1\alpha e^{-\tfrac 1\alpha}\right)=-\tfrac 1\alpha$ and thus
$$\phi^{-1}(\alpha) = -\frac 1\alpha -W_{0}\left(-\frac 1\alpha e^{-\tfrac 1\alpha}\right)\text{ for } \alpha < 1$$
Solution
The solution is
$$\phi^{-1}(\alpha) = \begin{cases} -\frac 1\alpha -W_{-1}\left(-\frac 1\alpha e^{-\tfrac 1\alpha}\right) & ; \alpha > 1 \\ 0 & ; \alpha = 1 \\-\frac 1\alpha -W_{0}\left(-\frac 1\alpha e^{-\tfrac 1\alpha}\right) & ; \alpha < 1 \end{cases}$$
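As a numerical cross-check of the branch choice, one can use scipy's lambertw (my addition; the helper name is mine):
```python
import numpy as np
from scipy.special import lambertw

def phi_inverse(alpha):
    """Solve (exp(x) - 1)/x = alpha for real x, following the branch analysis above."""
    if alpha == 1:
        return 0.0
    branch = -1 if alpha > 1 else 0
    z = -np.exp(-1 / alpha) / alpha
    return float((-1 / alpha - lambertw(z, branch)).real)

for alpha in (0.3, 0.9, 1.0, 2.0, 10.0):
    x = phi_inverse(alpha)
    check = 1.0 if x == 0 else (np.exp(x) - 1) / x   # should reproduce alpha
    print(alpha, x, check)
```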
|
Finding the value of $a$ which minimizes the absolute maximum of $f(x)$ I know that this is an elementary problem in calculus and so it has a routine way of proof. I faced it and brought it up here just because it is one of R.A.Silverman's interesting problem. Let me learn your approaches like an student. Thanks.
What value of $a$ minimizes the absolute maximum of the function $$f(x)=|x^2+a|$$ in the interval $[-1,1]$?
|
In my opinion, a calculus student would likely start by looking at the graph of this function for a few values of $a$. Then, they would notice that the absolute maxima occur on the endpoints or at $x=0$. So, looking at $f(-1)=f(1)=|1+a|$ and $f(0)=|a|$, I think it would be fairly easy for a calculus student to see that $a=-\frac{1}{2} \hspace{1 mm} $ minimizes the absolute maximum.
|
Number of solutions for $x[1] + x[2] + \ldots + x[n] =k$ Omg, this is driving me crazy, seriously; it's a subproblem of a bigger problem, and I'm stuck on it.
Anyway, I need the number of ways to pick $x[1]$ objects of type $1$, $x[2]$ objects of type $2$, $x[3]$ objects of type $3$, etc., such that $$x[1] + x[2] + \ldots + x[n] = k\;.$$ The order of the objects of course doesn't matter.
I know there is a simple formula for this, something like $\binom{k + n}n$ or so; I just can't find it anywhere on Google.
|
This is a so-called stars-and-bars problem; the number that you want is
$$\binom{n+k-1}{n-1}=\binom{n+k-1}k\;.$$
The linked article has a reasonably good explanation of the reasoning behind the formula.
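A small check of the formula against brute-force enumeration (my addition):
```python
from math import comb
from itertools import product

def brute_force(n, k):
    """Count solutions of x1 + ... + xn = k in non-negative integers."""
    return sum(1 for xs in product(range(k + 1), repeat=n) if sum(xs) == k)

for n, k in [(2, 5), (3, 4), (4, 3)]:
    print(n, k, brute_force(n, k), comb(n + k - 1, k))   # the two counts agree
```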
|
Check If a point on a circle is left or right of a point What is the best way to determine if a point on a circle is to the left or to the right of another point on that same circle?
|
If you mean in which direction you have to travel the shortest distance from $a$ to $b$ and assuming that the circle is centered at the origin then this is given by the sign of the determinant $\det(a\, b)$ where $a$ and $b$ are columns in a $2\times 2$ matrix. If this determinant is positive you travel in the counter clockwise direction. If it is negative you travel in the clockwise direction. If it is zero then both directions result in the same travel distance (either $a=b$ or $a$ and $b$ are antipodes).
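In code this is just the sign of a $2\times 2$ determinant (my addition; the function name is mine):
```python
def orientation(a, b):
    """>0: shortest way from a to b is counter-clockwise; <0: clockwise; 0: equal or antipodal."""
    return a[0] * b[1] - a[1] * b[0]

print(orientation((1, 0), (0, 1)))    # 1  -> counter-clockwise
print(orientation((1, 0), (0, -1)))   # -1 -> clockwise
```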
|
Find all the continuous functions such that $\int_{-1}^{1}f(x)x^ndx=0$. Find all the continuous functions on $[-1,1]$ such that $\int_{-1}^{1}f(x)x^ndx=0$ for all the even integers $n$.
Clearly, if $f$ is an odd function, then it satisfies this condition. What else?
|
Rewrite the integral as
$$\int_0^1 [f(x)+f(-x)]x^n dx = 0,$$
which holds for all even $n$, and do a change of variables $y=x^2$ so that for all $m$, even or odd, we have
$$ \int_0^1 \left[\frac{f(\sqrt{y})+f(-\sqrt{y})}{2 \sqrt{y}}\right] y^{m} dy = 0. $$
By the Stone Weierstrass Theorem, polynomials are uniformly dense in $C([0,1])$, and therefore, if the above is satisfied, we must have
$$\frac{f(\sqrt{y})+f(-\sqrt{y})}{2 \sqrt{y}}=0$$
provided it is in $C([0,1])$. [EDIT: I just realized the singularity can be a problem for continuity--so perhaps jump to the density of polynomials in $L^1([0,1])$.] Substituting $x=\sqrt{y}$ and doing algebra, it follows $f$ is an odd function.
|
Scaling of random variables Note: I will use $\mathbb{P}\{X \in dx\}$ to denote $f(x)dx$ where $f(x)$ is the pdf of $X$.
While doing some homework, I came across a fault in my intuition. I was scaling a standard normally distributed random variable $Z$. Edit: I was missing the infinitesimals $dx$ and $dx/c$, so everything works out in the end. Thank you Jokiri!
$$\mathbb{P}\{cX \in dx\} = \frac{e^{-x^2 / 2c^2}}{\sqrt{2\pi c^2}}$$
while
$$\mathbb{P} \left\{X \in \frac{dx}{c}\right\} = \frac{e^{-(x/c)^2/2}}{\sqrt{2\pi}}$$
Could anyone help me understand why the following equality doesn't hold? Edit: it does, see edit below
$$\mathbb{P}\{cX \in dx\} \ne \mathbb{P} \left\{X \in \frac{dx}{c}\right\}$$
I have been looking around, and it seems that equality of the cdf holds, though:
$$\mathbb{P}\{cX < x \} = \mathbb{P}\left\{X < \frac xc\right\}.$$
Thank you in advance!
This question came out of a silly mistake on my part. Let me attempt to try to set things straight.
Let $Y = cX$. Let $X$ have pdf $f_X$ and $Y$ have pdf $f_Y$.
$$\mathbb{E}[Y] = \mathbb{E}[cX] = \int_{-\infty}^\infty cx\,f_X(x)\mathop{dx}
=\int_{-\infty}^\infty y\, f_X(y/c) \frac{dy}{c}$$
So, $f_Y(y) = \frac 1c f_X(y/c)$.
Thank you for the help, and sorry for my mistake.
|
Setting aside rigour and following your intuition about infinitesimal probabilities of finding a random variable in an infinitesimal interval, I note that the left-hand sides of your first two equations are infinitesimal whereas the right-hand sides are finite. So these are clearly wrong, even loosely interpreted. They make sense if you multiply the right-hand sides by the infinitesimal interval lengths, $\mathrm dx$ and $\mathrm dx/c$, respectively, and then everything comes out right.
|
Can this function be expressed in terms of other well-known functions? Consider the function $$f(a) = \int^1_0 \frac {t-1}{t^a-1}dt$$
Can this function be expressed in terms of 'well-known' functions for integer values of $a$? I know that it can be relatively simply evaluated for specific values of $a$ as long as they are integers or simple fractions, but I am looking for a more general formula. I have found that it is quite easy to evaluate for some fractions--from plugging values into Wolfram I am fairly sure that
$$f(1/n)=\sum^n_{k=1}\frac{n}{2n-k}$$
for all positive integer $n$. However, $f(a)$ gets quite complicated fairly quickly as you plug in increasing integer values of $a$, and I am stumped. Do any of you guys know if this function has a relation to any other special (or not-so-special) functions?
|
Mathematica says that
$$f(a)=\frac{1}{a}\Big(\psi\left(\tfrac{2}{a}\right)-\psi\left(\tfrac{1}{a}\right)\Big)$$
where
$$\psi(z)=\frac{\Gamma'(z)}{\Gamma(z)}$$
is the digamma function.
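A quick numerical comparison of the closed form with direct quadrature, using scipy (my addition):
```python
from scipy.integrate import quad
from scipy.special import digamma

for a in (2, 3, 5, 0.5):
    numeric, _ = quad(lambda t: (t - 1) / (t**a - 1), 0, 1)
    closed = (digamma(2 / a) - digamma(1 / a)) / a
    print(a, numeric, closed)   # the two values agree
```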
|
Mean time until adsorption for a well-mixed bounded random walk that suddenly allows for adsorption I have a random walk on some interval $[0, N]$ with probability $p$ of taking a $+1$ step, probability $(1-p)$ of taking a $-1$ step, and where we have that $p>(1-p)$. Initially the boundaries are reflecting, i.e. if the walker lands at $0$ or $N$, then it will remain in place until it takes a $+1$ or $-1$ step with probability $p$ and $(1-p)$, respectively. Let $\pi$ be the stationary distribution of this walk ( Finding the exact stationary distribution for a biased random walk on a bounded interval).
Now, imagine that we allow the walk to proceed for an arbitrary number of time steps until the walker's probability distribution on the interval approaches the stationary distribution. We then "flip a switch" and specify that the boundary $0$ will be fully adsorbing and that the walker will have a probability of $f(x)$ of being moved to $0$ at any step. More specifically, we flip a coin to decide $p$ and $q$ as we normally would, then flip another coin to decide whether to adsorb with probability $f(x)$. Here, $f(x)$ is some function that depends on the position, $x$, of the walker.
What is our mean time until adsorption? Is there a technical term for what $\pi$ is after we "flip the switch" (some kind of quasi-stationary distribution)?
For example, is it correct to assume that:
(1) Ignoring $f(x)$, that the probability of adsorbing at $(x=0)$ is $\approx \pi(0)$?
(2) That the probability the walker will be moved to $0$ at any given step can be approximated as:
$P[k \to 0]=\sum_{k=1}^{N} k(f(k)\pi(k))$
And thus that we can approximate the adsorption time as: $\mu(ads)=[(1-P[k \to 0])(1-\pi(0))]^{-1}$
I'm pretty sure, however, that the above approximation is wrong.
Update - I would be happy for a solution that only looks at absorbance at $0$ without consideration of $f(x)$. Hopefully there can an analytic solution with this simplification?
|
This is not (yet) an answer: I just try to formalise the question asked by the OP
The stationary distribution before the adsorption process sets in can be obtained from the balance equations $p \pi(k) = (1-p)\pi(k+1)$. It is given by
$$\pi(k) = \alpha^k \pi(0)= \frac{(1-\alpha) \alpha^k}{1-\alpha^{1+N}} ,\qquad \alpha = \frac{p}{1-p}.$$ This distribution serves as the start for the new random walk. The new random walk is given by the rate equations
$$\begin{align}P_{i+1}(0) &= (1-f_1)(1-p) P_i (1) + \sum_{k=1}^N f_k P_i(k)\\
P_{i+1}(1)&= (1-f_2)(1-p) P_i(2)\\
P_{i+1} (2\leq j \leq N-1)&= (1-f_{j+1})(1-p) P_i (j+1) + (1-f_{j-1})p P(j-1) \\
P_{i+1} (N) &= (1-f_{N-1})p P_{i}(N-1) +(1-f_{N})p P_i(N)
\end{align}$$
where $P_i(k)$ denotes the probability to be at step $i$ at position $k$. The initial condition is $P_0(k)=\pi(k)$.
You are asking for the mean absorption time, given by
$$\mu=\sum_{i=0}^\infty i P_{i}(0) .$$
We define the transition matrix $M$ via
$\mathbf{P}_{i+1} = M \mathbf{P}_i$ where we collect the probabilities to a vector $\mathbf{P} =(P_0, \ldots, P_N)$. We have also the initial vector of probabilities $\boldsymbol{\pi}$. With this notation, we can write
$$\mu= \sum_{j=0}^\infty e_0 j M^j \boldsymbol{\pi}
=e_0 \frac{M}{(I-M)^2} \boldsymbol{\pi} $$
|
Calculate value of expression $(\sin^6 x+\cos^6 x)/(\sin^4 x+\cos^4 x)$
Calculate the value of the expression:
$$
E(x)=\frac{\sin^6 x+\cos^6 x}{\sin^4 x+\cos^4 x}
$$
for $\tan(x) = 2$.
Here is the solution, but I don't know why $\sin^6 x + \cos^6 x = \cos^6 x(\tan^6 x + 1)$, see this:
Can you explain to me why they solved it like that?
|
You can factor the term $\cos^6(x)$ from $\sin^6(x)+\cos^6(x)$ in the numerator to find: $$\cos^6(x)\left(\frac{\sin^6(x)}{\cos^6(x)}+1\right)=\cos^6(x)\left(\tan^6(x)+1\right)$$ and factor $\cos^4(x)$ from the denominator to find: $$\cos^4(x)\left(\frac{\sin^4(x)}{\cos^4(x)}+1\right)=\cos^4(x)\left(\tan^4(x)+1\right)$$ and so $$E(x)=\frac{\cos^6(x)\left(\tan^6(x)+1\right)}{\cos^4(x)\left(\tan^4(x)+1\right)}$$
So if we assume $x\neq (2k+1)\pi/2$ then we can simplify the term $\cos^4(x)$ and find: $$E(x)=\frac{\cos^2(x)\left(\tan^6(x)+1\right)}{\left(\tan^4(x)+1\right)}$$ You know that $\cos^2(x)=\frac{1}{1+\tan^2(x)}$ as an trigonometric identity. So set it in $E(x)$ and find the value.
|
Ring of formal power series finitely generated as algebra? I'm asked if the ring of formal power series is finitely generated as a $K$-algebra. Intuition says no, but I don't know where to start. Any hint or suggestion?
|
Let $A$ be a non-trivial commutative ring. Then $A[[x]]$ is not finitely generated as a $A$-algebra.
Indeed, observe that $A$ must have a maximal ideal $\mathfrak{m}$, so we have a field $k = A / \mathfrak{m}$, and if $k[[x]]$ is not finitely-generated as a $k$-algebra, then $A[[x]]$ cannot be finitely-generated as an $A$-algebra. So it suffices to prove that $k[[x]]$ is not finitely generated. Now, it is a straightforward matter to show that the polynomial ring $k[x_1, \ldots, x_n]$ has a countably infinite basis as a $k$-vector space, so any finitely-generated $k$-algebra must have an at most countable basis as a $k$-vector space.
However, $k[[x]]$ has an uncountable basis as a $k$-vector space. Observe that $k[[x]]$ is obviously isomorphic to $k^\mathbb{N}$, the space of all $\mathbb{N}$-indexed sequences of elements of $k$, as $k$-vector spaces. But it is well-known that $k^\mathbb{N}$ is of uncountable dimension: see here, for example.
|
How to read $A=[0,1]\times[a,5]$ I have this problem: consider the two sets $A$ and $B$
$$A=[0,1]\times [a,5]$$ and $$B=\{(x,y):x^2+y^2<1\}$$
What are the values of $a$ that guarantee the existence of a hyperplane that separates $A$ from $B$.
Given a chosen value of $a$, find one of those hyperplanes.
My main problem is axiomatics: how do I read: $A=[0,1]\times[a,5]$, what's with the $\times$?
Thank you
|
The $\times$ stands for cartesian product, i.e. $X\times Y=\{(x,y)\mid x\in X, y\in Y\}$.
Whether ordered pairs $(x,y)$ are considered a basic notion or are themselves defined (e.g. as Kuratowski pairs) usually does not matter. See also here.
|
Proof of Irrationality of e using Diophantine Equations I was trying to prove that $e$ is irrational without using the typical series expansion. Starting off, suppose $e = a/b$. Taking the natural log gives $1 = \ln(a/b)$, so $1 = \ln(a)-\ln(b)$. So unless I did something horribly wrong, showing the irrationality of $e$ is the same as showing that the equation $c = \ln(a)-\ln(b)$, or $1 = \ln(a) - \ln(b)$ (whichever is easier), has no solutions amongst the natural numbers. I feel like this would probably be easiest with infinite descent, but I'm in high school so my understanding of infinite descent is pretty hazy. If any of you can provide a proof of that, that would be awesome.
EDIT: What I mean by "typical series expansion" is Fourier's proof http://en.wikipedia.org/wiki/Proof_that_e_is_irrational#Proof
|
One situation in which the existence of a solution to a Diophantine equation implies an irrationality result is this: if, for a positive integer $n$, there are positive integers $x$ and $y$ satisfying $x^2 - n y^2 = 1$, then $\sqrt n$ is irrational.
I find this amusing, since this proof is more complicated than any of the standard proofs that $\sqrt n$ is irrational and also, since it does not assume the existence of solutions to $x^2 - n y^2 = 1$, it requires the "user" to supply a solution. Solutions are readily supplied for $2$ and $5$, but are harder to find for $61$.
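If one does want to supply such solutions, the standard route is the continued-fraction expansion of $\sqrt n$. A small sketch (not part of the answer's argument; it just finds the smallest positive solution of $x^2 - ny^2 = 1$ for a non-square $n$):

```python
from math import isqrt

def pell(n):
    """Smallest positive (x, y) with x^2 - n y^2 = 1, found from the
    convergents of the continued fraction of sqrt(n); n must be a
    positive non-square integer."""
    a0 = isqrt(n)
    m, d, a = 0, 1, a0
    h_prev, h = 1, a0            # convergent numerators
    k_prev, k = 0, 1             # convergent denominators
    while h * h - n * k * k != 1:
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
    return h, k

for n in (2, 5, 61):
    x, y = pell(n)
    print(n, (x, y), x * x - n * y * y == 1)
# n = 61 already needs x = 1766319049, y = 226153980
```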
|
$10$ distinct integers with sum of any $9$ a perfect square Do there exist $10$ distinct integers such that the sum of any $9$ of them is a perfect square?
|
I think the answer is yes. Here is a simple idea:
Consider the system of equations
$$S-x_i= y_i^2, 1 \leq i \leq 10\,,$$
where $S=x_1+\cdots+x_n$.
Let $A$ be the coefficients matrix of this system. Then all the entries of $I+A$
are $1$, thus $\operatorname{rank}(I+A)=1$. This shows that $\lambda=0$ is an eigenvalue of $I+A$ of multiplicity $n-1$, and hence the remaining eigenvalue is $\lambda=tr(I+A)=n.$
Hence the eigenvalues of $A$ are $\lambda_1=...=\lambda_{n-1}=-1$ and $\lambda_n=(n-1)$. This shows that $\det(A)=(-1)^{n-1}(n-1)$.
Now pick distinct $y_1,..,y_n$ positive integers, each divisible by $n-1$. Then, by Cramer's rule, all the solutions to the system
$$S-x_i= y_i^2, \quad 1 \leq i \leq 10\,,$$
are integers (since when you calculate the determinant of $A_i$, you can pull an $(n-1)^2$ from the i-th column, and you are left with a matrix with integer entries).
The only thing left to do is proving that $x_i$ are pairwise distinct. Let $i \neq j$. Then
$$S-x_i =y_i^2 \,;\, S-x_j=y_j^2 \Rightarrow x_i-x_j=y_j^2-y_i^2 \neq 0 \,.$$
Remark. You can easily prove that $\det(A)=(-1)^{n-1}(n-1)$ by row reduction: add all the other rows to the last one, pull out the common factor $(n-1)$ from that row, and then subtract the last row from each of the remaining ones.
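To make the construction concrete: summing the $n$ equations gives $(n-1)S=\sum_i y_i^2$, so with $n=10$ and, say, $y_i=9i$, both $S$ and the $x_i=S-y_i^2$ come out as integers. A small sketch checking this (the choice $y_i=9i$ is just one convenient option):

```python
from math import isqrt

n = 10
y = [9 * i for i in range(1, n + 1)]          # each y_i divisible by n - 1 = 9
S = sum(v * v for v in y) // (n - 1)          # (n - 1) S = sum of the y_i^2
x = [S - v * v for v in y]                    # then S - x_i = y_i^2

print(x)
assert len(set(x)) == n                       # the ten integers are pairwise distinct
for i in range(n):
    nine_sum = S - x[i]                       # sum of the other nine numbers ...
    assert isqrt(nine_sum) ** 2 == nine_sum   # ... is a perfect square, namely y_i^2
```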
|
A problem on self adjoint matrix and its eigenvalues Let $S = \{\lambda_1, \ldots , \lambda_n\}$ be an ordered set of $n$ real numbers, not all equal, but not all necessarily distinct. Pick out the true statements:
a. There exists an $n × n$ matrix with complex entries, which is not selfadjoint, whose set of eigenvalues is given by $S$.
b. There exists an $n × n$ self-adjoint, non-diagonal matrix with complex entries whose set of eigenvalues is given by $S$.
c. There exists an $n × n$ symmetric, non-diagonal matrix with real entries whose set of eigenvalues is given by $S$.
How can i solve this? Thanks for your help.
|
The general idea is to start with a diagonal matrix $[\Lambda]_{kj} = \begin{cases} 0, & j \neq k \\ \lambda_j, & j=k\end{cases}$ and then modify this to satisfy the conditions required.
1) Just set the entries of $\Lambda$ below the diagonal to $i$. Choose $[A]_{kj} = \begin{cases} 0, & j>k \\ \lambda_j, & j=k \\ i, & j<k\end{cases}$.
2) & 3) Suppose $\lambda_{j_0} \neq \lambda_{j_1}$. Then rotate the '$j_0$-$j_1$' part of $\Lambda$ so it is no longer diagonal.
Let $[U]_{kj} = \begin{cases} \frac{1}{\sqrt{2}}, & (k,j) \in \{(j_0,j_0), (j_0,j_1),(j_1,j_1)\} \\ -\frac{1}{\sqrt{2}}, & (k,j) \in \{(j_1,j_0)\} \\
\delta_{kj}, & \text{otherwise} \end{cases}$. $U$ is real and $U^TU=I$. Let $A=U \Lambda U^T$. It is straightforward to check that $A$ is real, symmetric (hence self-adjoint) and $[A]_{j_0 j_1}=(\lambda_{j_1}-\lambda_{j_0})/2 \neq 0$, hence it is not diagonal.
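A quick numerical illustration of construction 2)/3) (the particular eigenvalue list and the indices $j_0, j_1$ are an arbitrary choice):

```python
import numpy as np

lam = np.array([1.0, 1.0, 2.0, 5.0])          # the ordered set S: not all equal
n = len(lam)
j0, j1 = 0, 2                                 # chosen so that lam[j0] != lam[j1]

U = np.eye(n)
U[j0, j0] = U[j0, j1] = U[j1, j1] = 1 / np.sqrt(2)
U[j1, j0] = -1 / np.sqrt(2)                   # U is orthogonal: U.T @ U = I

A = U @ np.diag(lam) @ U.T
print(np.allclose(A, A.T))                    # real symmetric, hence self-adjoint
print(A[j0, j1], (lam[j1] - lam[j0]) / 2)     # non-zero off-diagonal entry
print(np.linalg.eigvalsh(A))                  # eigenvalues are exactly the lam_i
```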
|
Inequality for dense subset implies inequality for whole set? (PDE) Suppose I have an inequality that holds for all $f \in C^\infty(\Omega)$. Then since $C^\infty(\Omega)$ is dense in, say, $H^1(\Omega)$ under the latter norm, does the inequality hold for all $f \in H^1(\Omega)$ too?
(Suppose the inequality involves norms in $L^2$ space)
|
Let $X$ be a topological space. Let $F,G$ be continuous maps from $X$ to $\mathbb{R}$. Let $Y\subset X$ be a dense subspace.
Then
$$ F|_Y \leq G|_Y \iff F \leq G $$
The key is continuity. (Actually, semi-continuity in the appropriate direction is enough.) Continuity guarantees that for $x\in X\setminus Y$ and $x_\alpha \in Y$ such that $x_\alpha \to x$ you have
$$ F(x) \leq \lim F(x_\alpha) \leq \lim G(x_\alpha) \leq G(x) $$
So the question you need to ask yourself is: are the two functionals on the two sides of your inequality continuous functionals on $H^1(\Omega)$? Just knowing that they involve norms in $L^2$ space is not (generally) enough (imagine $f\mapsto \|\triangle f\|_{L^2(\Omega)}$).
For a slightly silly example: Let $G:H^1(\Omega)\to\mathbb{R}$ be identically 1, and let $F:C^\infty(\Omega)\to \mathbb{R}$ be identically zero, but let $F:H^1(\Omega) \setminus C^\infty(\Omega) \to \mathbb{R}$ be equal to $\|f\|_{L^2(\Omega)}$. Then clearly $F|_{C^\infty} \leq G|_{C^\infty}$ but the extension to $H^1$ is false in general.
|
Prove that $x = 2$ is the unique solution to $3^x + 4^x = 5^x$ where $x \in \mathbb{R}$ Yesterday, my uncle asked me this question:
Prove that $x = 2$ is the unique solution to $3^x + 4^x = 5^x$ where $x \in \mathbb{R}$.
How can we do this? Note that this is not a diophantine equation since $x \in \mathbb{R}$ if you are thinking about Fermat's Last Theorem.
|
$$f(x) = \left(\dfrac{3}{5}\right)^x + \left(\dfrac{4}{5}\right)^x -1$$
$$f^ \prime(x) < 0\;\forall x \in \mathbb R\tag{1}$$
$f(2) =0$. If $f(x)$ had two zeros, then by Rolle's theorem $f^\prime(x)$ would have a zero, contradicting $(1)$.
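A quick numerical sanity check of the monotonicity argument (just a sketch over a finite grid):

```python
import numpy as np

f = lambda x: (3 / 5) ** x + (4 / 5) ** x - 1
xs = np.linspace(-10, 10, 2001)
print(np.all(np.diff(f(xs)) < 0))   # f is strictly decreasing on the sampled grid
print(f(2.0))                       # ~0 up to rounding: x = 2 is the unique zero
```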
|
Distance is independent of coordinates I am asked to show $d(x,y) = ((x_2 - x_1)^2 + (y_2 -y_1)^2)^{1/2}$ does not depend on the choice of coordinates. My try is:
$V$ has basis $B = b_1 , b_2$ and $B' = b_1' , b_2'$, and $T = \begin{pmatrix} a & c \\ b & d \end{pmatrix}$ is the coordinate transformation matrix, $Tv_{B'} = v_B$. The vectors are $x_{B'} = x_1 b'_1 + x_2 b'_2$ and $y_{B'} = y_1b_1' + y_2b_2'$, and the distance in the coordinates of $B'$ is $d(x_{B'},y_{B'}) = ((x_2 - x_1)^2 + (y_2 -y_1)^2)^{1/2}$.
The coordinates in $B$ are $x_B = (x_1 a + x_2 c)b_1 + (x_1 b + x_2 d) b_2$ and similar for $y$. I compute the first term in the distance $((x_1 b + x_2 d) - (x_1 a + x_2 c))^2$. I may assume these are Cartesian coordinates so that $a^2 + b^2 = c^2 + d^2 = 1$ and $ac + bd = 0$.
With this I have $((x_1 b + x_2 d) - (x_1 a + x_2 c))^2 = x_1^2 + x_2^2 - 2(x_1^2 ab + x_1 x_2 bc + x_1 x_2 ad + x_2^2 cd)$. My problem is that $x_1^2 ab + x_1 x_2 bc + x_1 x_2 ad + x_2^2 cd \neq x_1 x_2$. How to solve this? How to show that $x_1^2 ab + x_2^2 cd = 0$ and that $bc + ad = 1$? Thank you.
|
I would try a little bit more abstract approach. Sometimes a little bit of abstraction helps.
First, distance can be computed in terms of the dot product. So, if you have points with Cartesian coordinates $X,Y$, the distance between them is
$$
d(X,Y) = \sqrt{(X-Y)^t(X-Y)} \ .
$$
Now, if you make an orthogonal change of coordinates of matrix $S$, the new coordinates $X'$ and the old ones $X$ are related through the relation
$$
X = SX'
$$
where $S$ is an orthogonal matrix. This is exactly your condition that the new coordinates are "Cartesian". That is, if
$$
S = \begin{pmatrix}
a & c \\
b & d
\end{pmatrix}
$$
the fact that $S$ is orthogonal means that $S^tS = I$, that is
$$
\begin{pmatrix}
a & b \\
c & d
\end{pmatrix}
\begin{pmatrix}
a & c \\
b & d
\end{pmatrix}
=
\begin{pmatrix}
1 & 0 \\
0 & 1
\end{pmatrix}
\qquad
\Longleftrightarrow
\qquad
a^2 + b^2 = c^2 + d^2 =1
\quad \text{and} \quad ac + bd = 0 \ .
$$
So, let's now compute:
$$
d(X,Y) = \sqrt{(SX' - SY')^t(SX' - SY')} = \sqrt{(X'-Y')^tS^tS(X'-Y')} = \sqrt{(X'-Y')^t(X'-Y')} \ .
$$
Indeed, distance does not depend on the Cartesian coordinates.
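A numerical illustration of the computation, taking an arbitrary rotation for the orthogonal matrix $S$ (just a sketch):

```python
import numpy as np

theta = 0.7                                      # arbitrary angle
S = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # orthogonal: S.T @ S = I

Xp, Yp = np.array([1.0, 2.0]), np.array([-3.0, 0.5])   # coordinates in the new frame
X, Y = S @ Xp, S @ Yp                                   # coordinates in the old frame

d_old = np.sqrt((X - Y) @ (X - Y))
d_new = np.sqrt((Xp - Yp) @ (Xp - Yp))
print(np.isclose(d_old, d_new))                  # True: the distance agrees in both frames
```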
|
A measure of non-differentiability Consider $f(x) = x^2$ and $g(x) = |x|$. Both graphs open upward, but $g(x) = |x|$ is "sharper". Is there a way to measure this sharpness?
|
This may be somewhat above your pay grade, but a "measure" of a discontinuity of a function at a point may be seen in a Fourier transform of that function. For example consider the function
$$f(x) = \exp{(-|x|)} $$
whose Fourier transform is proportional to a Lorentzian function:
$$\hat{f}(w) = \frac{1}{1+w^2} $$
(I am ignoring constants, etc., which are not important to this discussion.) Note that $\hat{f}(w) \approx 1/w^2$ as $w \rightarrow \infty$. The algebraic, rather than exponential, behavior at $\infty$ is characteristic of a type of discontinuity. In this case, there is a discontinuity in the derivative. For a plain discontinuity, there is a $1/w$ behavior at $\infty$ (note the step function and its transform, which is proportional to $1/w$ at $\infty$). For a discontinuity in the 2nd derivative, there is a $1/w^3$ behavior at $\infty$. In general, a discontinuity in the $k$th derivative of $f(x)$ translates into a $1/w^{k+1}$ behavior of the Fourier transform at $\infty$.
No, I do not have a proof of this, so I am talking off the cuff from my experiences some moons ago. But I am sure this is correct for the functions we see in physics.
Also note that I define the Fourier Transform here as
$$\hat{f}(w) = \int_{-\infty}^{\infty} dx \: f(x) \exp{(-i w x)} $$
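One way to see the claimed decay numerically is to compare the transform of the "kinked" $e^{-|x|}$ with that of a smooth function such as a Gaussian. The sketch below uses a crude Riemann-sum quadrature; the grid and cutoff are arbitrary choices, and values below roughly $10^{-12}$ are quadrature/rounding noise:

```python
import numpy as np

x = np.linspace(-30, 30, 120001)              # both functions are negligible at the cutoff
dx = x[1] - x[0]

def ft(fvals, w):
    """Crude quadrature of int f(x) exp(-i w x) dx."""
    return np.sum(fvals * np.exp(-1j * w * x)) * dx

kink = np.exp(-np.abs(x))                     # derivative jumps at x = 0
smooth = np.exp(-x**2 / 2)                    # infinitely differentiable

for w in (5.0, 10.0, 20.0, 40.0):
    print(w,
          abs(ft(kink, w)) * w**2,            # tends to the constant 2: ~ 1/w^2 decay
          abs(ft(smooth, w)))                 # decays faster than any power of w
```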
|
the existence of duality of closure and interior? Let the closure and interior of set $A$ be $\bar A$ and $A^o$ respectively.
In some cases, the dual is relatively easy to find, e.g. the dual of equation $\overline{A \cup B} = \bar A \cup \bar B$ is $(A \cap B)^o= A^o\cap B^o$.
However, I can't find the dual of $f(\bar A) \subseteq \overline{f(A)}$, the definition of continuity of $f$.
Is there some principle to translate the language of closure into that of interior in the same way as the duality principle of Boolean Algebra?
|
In the duality examples that you described as "relatively easy", the key was that you get the dual of an operation by applying "complement" to the inputs and outputs. For example, writing $\sim$ for complement, we have $A^o=\sim(\overline{\sim A})$, i.e., we get the interior of $A$ by taking the complement of $A$, then applying closure, and finally applying complement again. Similarly, $A\cap B=\sim((\sim A)\cup(\sim B))$. To do something similar for the notion of the image of a set $A$ under a function, we need the analogous thing (which unfortunately has no universally accepted name --- I'll call it $\hat f$):
$$
\hat f(A)=\sim(f(\sim A)).
$$
Equivalently, if $f:X\to Y$ and $A\subseteq X$, then $\hat f(A)$ consists of those points $y\in Y$ such that all points of $X$ that map via $f$ to $y$ are in $A$. [Notice that, if I replaced "all" by "some", then I'd have a definition of $f(A)$. Notice also that, if $y$ isn't in the image of $f$, then it automatically (vacuously) belongs to $\hat f(A)$.] Now we can dualize the formula $f(\bar A)\subseteq\overline{f(A)}$ to get $\hat f(A^o)\supseteq(\hat f(A))^o$. And this (asserted for all subsets $A$ of the domain of $f$) is indeed an equivalent characterization of continuity.
[Digression for any category-minded readers: If $f:X\to Y$ then the operation $f^{-1}$ sending subsets of $Y$ to subsets of $X$ is a monotone function between the power sets, $f^{-1}:\mathcal P(Y)\to\mathcal P(X)$. The power sets, being partially ordered by $\subseteq$, can be viewed as categories, and then $f^{-1}$ is a functor between them. This functor has adjoints on both sides. The left adjoint $\mathcal P(X)\to\mathcal P(Y)$ sends $A$ to $f(A)$. The right adjoint sends $A$ to what I called $\hat f(A)$. These adjointness relations imply the elementary facts that $f^{-1}$ preserves both unions and intersections (as these are colimits and limits, respectively) while the left adjoint $A\mapsto f(A)$ preserves unions but not (in general) intersections. The right adjoint $\hat f$ preserves intersections but not (in general) unions.]
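For finite sets one can experiment with $f(A)$ and $\hat f(A)$ directly; here is a small ad-hoc sketch of the preservation properties mentioned in the digression (the particular $Y$, $f$, $A$, $B$ are arbitrary choices):

```python
def image(f, A):
    """f(A) = {f(a) : a in A}."""
    return {f[a] for a in A}

def hat(f, A, Y):
    """hat-f(A): all y in Y whose entire preimage under f lies in A."""
    return {y for y in Y if all(x in A for x in f if f[x] == y)}

Y = {'a', 'b', 'c'}
f = {1: 'a', 2: 'a', 3: 'b', 4: 'b'}          # a map {1,2,3,4} -> Y; 'c' has empty preimage
A, B = {1, 3}, {2, 3}

# image preserves unions but not, in general, intersections:
print(image(f, A | B) == image(f, A) | image(f, B))     # True
print(image(f, A & B) == image(f, A) & image(f, B))     # False here: {'b'} vs {'a','b'}

# hat preserves intersections but not, in general, unions:
print(hat(f, A & B, Y) == hat(f, A, Y) & hat(f, B, Y))  # True
print(hat(f, A | B, Y) == hat(f, A, Y) | hat(f, B, Y))  # False here: {'a','c'} vs {'c'}
```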
|
How do I prove by induction that, for $n≥1, \sum_{r=1}^n \frac{1}{r(r+1)}=\frac{n}{n+1}$? Hi can you help me solve this:
I have proved that $p(1)$ is true and am now assuming that $p(k)$ is true. I just don't know how to show $p(k+1)$ for both sides.
|
$$\sum_{r=1}^{n}\frac{1}{r(r+1)}=\frac{n}{n+1}$$
for $n=1$ we have $\frac{1}{1(1+1)}=\frac{1}{1+1}$. Now suppose that
$$\sum_{r=1}^{k}\frac{1}{r(r+1)}=\frac{k}{k+1}$$ then
$$\sum_{r=1}^{k+1}\frac{1}{r(r+1)}=\sum_{r=1}^{k}\frac{1}{r(r+1)}+\frac{1}{(k+1)(k+2)}=$$
$$=\frac{k}{k+1}+\frac{1}{(k+1)(k+2)}=\frac{k(k+2)+1}{(k+1)(k+2)}=$$
$$=\frac{k^2+2k+1}{(k+1)(k+2)}=\frac{(k+1)^2}{(k+1)(k+2)}=\frac{k+1}{k+2}=\frac{(k+1)}{(k+1)+1}$$
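A quick check of the closed form with exact arithmetic (just a sanity check, not part of the induction):

```python
from fractions import Fraction

for n in range(1, 11):
    s = sum(Fraction(1, r * (r + 1)) for r in range(1, n + 1))
    assert s == Fraction(n, n + 1)
print("identity verified for n = 1..10")
```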
|