Prove sequence $a_n=n^{1/n}$ is convergent How to prove that the sequence $a_n=n^{1/n}$ is convergent using definition of convergence?
| Well, the easiest proof is that the sequence is eventually decreasing (for $n\ge 3$) and bounded below (by 1); thus it converges by the Monotone Convergence Theorem...
The proof from definition of convergence goes like this:
A sequence $a_{n}$ converges to a limit L in $\mathbb{R}$ if and only if $\forall \epsilon > 0 $, $\exists N\in\mathbb{N}$ such that $\left | L - a_{n} \right | < \epsilon$ for all $n \geq N $.
The proposition: $\lim_{n\to\infty} n^{1/n} = 1 $
Proof:
Let $\epsilon > 0$ be given. By the Archimedean property of the real numbers, there exists $M \in \mathbb{N}$ such that $\frac1M < \epsilon$. Since $\lim_{x\to\infty}x^{1/x}=1$, we can find $x\in\mathbb{R}$, $x>e$, such that $x^{1/x}<1+\frac1M$; let $P = \left \lceil x \right \rceil$. Then, since $f(x)=x^{1/x}$ is decreasing for $x>e$ (trivial and left to the reader :D), for any natural number $n\geq P$ we have $1 \leq n^{1/n} \leq P^{1/P} \leq x^{1/x} < 1 + \tfrac1M < 1 + \epsilon$, and so $\left | 1 - a_{n} \right | < \epsilon$ whenever $n\geq P$. Thus $a_{n}$ converges (to 1).
Edit: We cannot always find a natural number $M$ such that $M < \epsilon$ (what if $0 < \epsilon < 1$?), but by the Archimedean property we can always find $M$ such that $\frac1M < \epsilon$.
|
Cauchy's Theorem (Groups) Question? I'm afraid at this ungodly hour I've found myself a bit stumped. I'm attempting to answer the following homework question:
If $p_1,\dots,p_s$ are distinct primes, show that an abelian group of order $p_1p_2\cdots p_s$ must be cyclic.
Cauchy's theorem is the relevant theorem to the chapter that precedes this question...
So far (and quite trivially), I know the element in question has to be the product of the elements with orders $p_1,\dots, p_s$ respectively. I've also successfully shown that the order of this element must divide the product of the $p$'s. However, showing that the order is exactly this product (namely that the product also divides the orders) has proven a bit elusive. Any helpful clues/hints are more than welcome and very much appreciated!
| Start by proving that, in an abelian group, if $g$ has order $a$, and $h$ has order $b$, and $\gcd(a,b)=1$, then $gh$ has order $ab$. Clearly, $(gh)^{ab}=1$, so $gh$ has order dividing $ab$. Now show that if $(gh)^s=1$ for some $s\lt ab$ then you can find some $r$ such that $(gh)^{rs}$ is either a power of $g$ or of $h$ and not the identity. Details left to the reader.
|
What row am I on in a matrix if I only have the column index of an element, and the total number of columns and rows? Assume a 3 x 3 grid, but the elements are actually contained in a zero-based index array. So I know if I'm on element 5, I'm in row 2, and if I'm on element 7 then I'm in row 3. How would I actually calculate this? Please feel free to modify the title's terminology if I'm not using it correctly. Also, I would very much appreciate the logic used to determine the formula for the answer, as I'm not a mathematician.
| The logic used is fairly simple. If you have a 3 by 3 grid, starting at 0, the elements would look as:
0 1 2
3 4 5
6 7 8
You count from left to right until you have filled as many entries as the grid has columns, then start again on the next row.
Mathematically, the row is floor(elementNumber / numberOfColumns).
E.g.: element 4 would be floor(4 / 3) = floor(1.33...) = 1, which is actually row 2 (0-indexed, remember?).
The column is elementNumber mod numberOfColumns. E.g., element 4: 4 mod 3 = 1, which is actually column 2 (0-indexed).
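The same logic as a small sketch in Python (zero-based, row-major layout; the names are made up for illustration):

    def position(element, num_columns):
        # rows fill left to right, top to bottom (row-major order)
        row = element // num_columns   # integer division is floor for non-negatives
        col = element % num_columns
        return row, col

    print(position(5, 3))  # (1, 2): zero-based row 1, i.e. the second row
    print(position(7, 3))  # (2, 1): zero-based row 2, i.e. the third row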
|
Prove that: the set $\{1, 2, 3, \ldots, n - 1\}$ is a group under multiplication modulo $n$? Prove that:
The set $\{1, 2, 3, ..., n - 1\}$ is a group under multiplication modulo $n$ if and only if $n$ is a prime number without using Euler's phi function.
| Assume that $H=\{1,2,3,\ldots,n-1\}$ is a group under multiplication modulo $n$, and suppose that $n$ is not prime.
Then $n$ is composite, i.e. $n=pq$ for some $1<p,q<n$. This implies that $pq \equiv 0 \pmod n$, but $0$ is not in $H$, so $H$ is not closed under the operation. Contradiction; hence $n$ must be prime.
Conversely, suppose $n$ is prime. Then $\gcd(a,n)=1$ for every $a$ in $H$, so there are integers $x,y$ with $ax+ny=1$, i.e. $ax\equiv 1\pmod n$; replacing $x$ by its residue modulo $n$ (which is nonzero, since $ax\equiv 1$) gives an inverse of $a$ in $H$. Moreover, $H$ is closed: if $a,b\in H$ then the prime $n$ divides neither $a$ nor $b$, so $n\nmid ab$ and $ab \bmod n \in H$. Since multiplication modulo $n$ is associative and the identity $1$ is in $H$, $H$ is a group.
|
How many correct answers does it take to fill the Trivial Pursuit receptacle? My friends and I like to play Trivial Pursuit without using the board.
We play it like this:
* Throw a die to determine what color you get to answer.
* Ask a question; if the answer is correct you get a point.
* If enough points are awarded you win.
We would like to modify the game as to include the colors. There are 6 colors. The game could then be won by completing all colors or answering enough questions.
We would like the effort to complete it by numbers to be similar to that of completing it by colors. So the required number of correct answers should be about the number at which it is likely that all the colors have been collected.
What is the number of correct answers one needs to acquire to make it probable, $P\ge 0.5$, that all colors are collected?
We dabbled in a few sums before realizing this was over our heads.
| This is the coupon collector's problem. For six, on average you will need $6/6+6/5+6/4+6/3+6/2+6/1=14.7$ correct answers, but the variability is high. This is the expectation, not the number to have 50% chance of success.
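If it helps, here is a quick inclusion-exclusion sketch (a die making each of the six colors equally likely is assumed) that finds the 50% point directly:

    from math import comb

    def p_all_colors(n, c=6):
        # P(all c colors appear among n uniform draws), by inclusion-exclusion
        return sum((-1) ** k * comb(c, k) * ((c - k) / c) ** n for k in range(c + 1))

    n = 1
    while p_all_colors(n) < 0.5:
        n += 1
    print(n, p_all_colors(n))  # 13 0.51...

So under this model 13 correct answers already make it more likely than not that every color has been collected.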
|
Measure-theoretical isomorphism between interval and square What is an explicit isomorphism between the unit interval $I = [0,1]$ with Lebesgue measure, and its square $I \times I$ with the product measure? Here isomorphism means a measure-theoretic isomorphism, which is one-one outside some set of zero measure.
| For $ x \in [0,1]$, let $x = .b_1 b_2 b_3 \ldots$ be its base-2 expansion (the choice in the ambiguous cases doesn't matter, because that's a set of measure 0). Map this to
$(.b_1 b_3 b_5 \ldots,\ .b_2 b_4 b_6 \ldots) \in [0,1]^2$
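As a finite sketch of the same map (truncated digit strings standing in for the infinite expansions, so this is illustration, not the measure-theoretic statement itself):

    def uninterleave(bits):
        # bits = the fractional digits b1 b2 b3 ... as a string
        return bits[0::2], bits[1::2]   # (.b1 b3 b5 ..., .b2 b4 b6 ...)

    def interleave(u, v):
        # the inverse map [0,1]^2 -> [0,1] on digit strings
        return "".join(a + b for a, b in zip(u, v))

    u, v = uninterleave("110100111010")
    print(u, v)                          # 100111 110100
    assert interleave(u, v) == "110100111010"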
|
Solution to an ODE, can't follow a step of a Stability Example In my course notes, we are working on the stability of solutions, and in one example we start out with:
Consider the IVP on $(-1,\infty)$:
$x' = \frac{-x}{1 + t}$ with $x(t_{0}) = x_{0}$.
Integrating, we get $x(t) = x(t_{0})\frac{1 + t_{0}}{1 + t}$.
I can't produce this integration but the purpose of the example is to show that $x(t)$ is uniformly stable, and asymptotically stable, but not uniformly asymptotically stable.
But I can't verify the initial part and don't want to just skip over it.
Can someone help me with the details here?
Update: the solution has been pointed out to me and is in the answer below by Bill Cook (Thanks!).
| Separate variables and get $\int 1/x \,dx = \int -1/(1+t)\,dt$. Then $\ln|x|=-\ln|1+t|+C$
Exponentiate both sides and get $|x| = e^{-\ln|1+t|+C}$ and so $|x|=e^{\ln|(1+t)^{-1}|}e^C$
Relabel the constant, drop the absolute values, and recover the lost zero solution (due to the division by $x$) to get $x=Ce^{\ln|(1+t)^{-1}|}=C(1+t)^{-1}$.
Finally plug in the IC: $x_0 = x(t_0)=C(1+t_0)^{-1}$, so that $C=x_0(1+t_0)$, and there you go, the solution is
$$ x(t) = x_0 \frac{1+t_0}{1+t} $$
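For what it's worth, the solution is easy to verify symbolically (a quick sketch using sympy):

    import sympy as sp

    t, t0, x0 = sp.symbols('t t0 x0')
    x = x0 * (1 + t0) / (1 + t)                       # the claimed solution
    print(sp.simplify(sp.diff(x, t) + x / (1 + t)))   # 0, i.e. x' = -x/(1+t)
    print(x.subs(t, t0))                              # x0, the initial condition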
|
Adjunction of a root to a UFD Let $R$ be a unique factorization domain which is a finitely generated $\Bbbk$-algebra for an algebraically closed field $\Bbbk$. For $x\in R\setminus\{0\}$, let $y$ be an $n$-th root of $x$. My question is, is the ring
$$ A := R[y] := R[T]/(T^n - x) $$
a unique factorization domain as well?
Edit: I know the classic counterexample $\mathbb{Z}[\sqrt{5}]$, but $\mathbb{Z}$ does not contain an algebraically closed field. I am wondering if that changes anything.
Edit: As Gerry's Answer shows, this is not true in general. What if $x$ is prime? What if it is a unit?
| I think you can get a counterexample to the unit question, even in characteristic zero, and even in an integral domain (in contrast to Georges' example), although there are a few things that need checking.
Let $R={\bf C}[x,1/(x^2-1)]$, so $1-x^2$ is a unit in $R$. Then $$(1+\sqrt{1-x^2})(1-\sqrt{1-x^2})=x\cdot x\;.$$
It remains to check that
* $R$ is a UFD,
* $1\pm\sqrt{1-x^2}$ and $x$ are irreducibles in $R[\sqrt{1-x^2}]$,
* $1\pm\sqrt{1-x^2}$ and $x$ are not associates in $R[\sqrt{1-x^2}]$.
|
Let $f:\mathbb{C} \rightarrow \mathbb{C}$ be entire and $\exists M \in\mathbb{R}: $Re$(f(z))\geq M$ $\forall z\in\mathbb{C}$. Prove $f(z)=$constant
Possible Duplicate:
Liouville's theorem problem
Let $f:\mathbb{C} \rightarrow \mathbb{C}$ be entire and suppose $\exists M \in\mathbb{R}: $Re$(f(z))\geq M$ $\forall z\in\mathbb{C}$. How would you prove the function is constant?
I am approaching it by attempting to show it is bounded then by applying Liouville's Theorem. But have not made any notable results yet, any help would be greatly appreciated!
| Consider the function $\displaystyle g(z)=e^{-f(z)}$. Note then that $\displaystyle |g(z)|=e^{-\text{Re}(f(z))}\leqslant \frac{1}{e^M}$. Since $g(z)$ is entire and bounded we may conclude that it is constant (by Liouville's theorem). A nonvanishing constant $e^{-f(z)}$ forces $f(z)$ to take values in the discrete set $\{w_0+2\pi i k : k\in\mathbb Z\}$ for some fixed $w_0$, so by continuity $f$ must be constant.
|
Calculate the expansion of $(x+y+z)^n$ The question that I have to solve asks: "How many terms are in the expansion?"
Depending on how you define "term" you can arrive at two different formulas for the number of terms in the expansion of $(x+y+z)^n$.
Working with binomial coefficients I found that the general relation is $\binom{n+2}{n}$. However I'm having some difficulty providing proof for my statement.
The other way of reading "term" is simply as the number of ordered products you can form when expanding $(x+y+z)^n$ without collecting like terms, which results in $3^n$.
Depending on what is the right interpretation, how can I provide proof for it?
| For the non-trivial interpretation, you're looking for non-negative solutions of $a + b + c = n$ (each of these corresponds to a term $x^a y^b z^c$). Code each of these solutions as $1^a 0 1^b 0 1^c$, for example $(2,3,5)$ would be coded as $$110111011111.$$ Now it should be easy to see why the answer is $\binom{n+2}{n}$.
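Both readings are easy to cross-check by brute force for small $n$ (a quick sketch):

    from math import comb

    def collected_terms(n):
        # monomials x^a y^b z^c with a + b + c = n (c is determined by a and b)
        return sum(1 for a in range(n + 1) for b in range(n + 1 - a))

    for n in range(8):
        assert collected_terms(n) == comb(n + 2, n)
        # before collecting like terms there are simply 3**n ordered products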
|
what is the tensor product $\mathbb{H}\otimes_{\mathbb{R}}\mathbb{H}$ I'm looking for a simpler way of thinking about the tensor product $\mathbb{H}\otimes_{\mathbb{R}}\mathbb{H}$, i.e. a more familiar algebra which is isomorphic to it.
I have built the algebra and played with it for a bit, but still can't seem to see any resemblance to anything I already know.
Thanks!
| Hint :
(1) Show that the map
$$H \otimes_R H \rightarrow End_R(H), x \otimes y \mapsto (a \mapsto xay).$$
is an isomorphism of $R$-vector spaces (I don't know the simplest way to do this, but try for example to look at a basis (dimension is 16...)).
(2) Denote by $H^{op}$ the $R$-algebra $H$ where the multiplication is reversed (i.e. $x \times_{H^{op}} y = y \times_{H} x$). Denote by (1,i,j,k) the usual basis. Show that the map
$$H \rightarrow H^{op}, 1 \mapsto 1,i \mapsto i, j \mapsto k, k \mapsto j$$
is an isomorphism of $R$-algebras
(3) Show that the map in (1)
$$H \otimes_R H^{op} \rightarrow End_R(H), x \otimes y \mapsto (a \mapsto xay).$$
is an isomorphism of $R$-algebras.
(4) Find an isomorphism $H \otimes_R H \rightarrow M_4(R)$ of $R$-algebras.
|
The norm of $x\in X$, where $X$ is a normed linear space Question:
Let $x\in X$, $X$ is a normed linear space and let $X^{*}$ denote the dual space of $X$.
Prove that$$\|x\|=\sup_{\|f\|=1}|f(x)|$$ where $f\in X^{*}$.
My proof:
Let $0\ne x\in X$, using HBT take $f\in X^{*}$ such that $\|f\|=1$ and $f(x)=\|x\|$.
Now, $\|x\|=f(x)\le|f(x)|\le\sup_{\|x\|=1}|f(x)|=\sup_{\|f\|=1}|f(x)|$, this implies $$\|x\|\le\sup_{\|f\|=1}|f(x)|\quad (1)$$
Since $f$ is a bounded linear functional $|f(x)|\le\|f\|\|x\|$ for all $x\in X$.
Since$\|f\|=1$, $|f(x)|\le\|x\|$ for all $x\in X$. This implies $$\|x\|\ge\sup_{\|f\|=1}|f(x)|\quad(2)$$
Therefore $(1)$ and $(2)$ gives $\|x\|=\sup_{\|f\|=1}|f(x)|$.
If $x=0$, the result seems to be trivial, but I am still trying to convince myself. Still I have doubts about my proof, is it correct? Please help.
Edit:
Please note that I use the result of one of the consequences of the Hahn-Banach theorem: given a normed linear space $X$ and $x_{0}\in X$, $x_{0}\ne 0$, there exists $f\in X^{*}$ with $\|f\|=1$ such that $f(x_{0})=\|x_{0}\|$.
| Thanks for the comments. Let see....
Let $0\ne x\in X$; using the consequence of HBT (analytic form), take $g\in X^{*}$ such that $\|g\|=1$ and $g(x)=\|x\|$.
Now, $\|x\|=g(x)\le|g(x)|\le\sup_{\|f\|=1}|f(x)|$, this implies $$\|x\|\le\sup_{\|f\|=1}|f(x)|\quad (1)$$
Since $f$ is a bounded linear functional (given): $|f(x)|\le\|f\|\|x\|$ for all $x\in X$.
For a linear functional $f$ with $\|f\|=1$ we have by defintion, $|f(x)|\le\|x\|$ for all $x\in X$. This implies $$\|x\|\ge\sup_{\|f\|=1}|f(x)|\quad(2)$$
Therefore $(1)$ and $(2)$ gives $\|x\|=\sup_{\|f\|=1}|f(x)|$.
If $x=0$, the result is trivial.
Any more comments?
|
Right angles in the clock during a day Can someone provide a solution for this question ...
Given the hour, minute, and second hands, calculate the number of right angles the three hands make pairwise with respect to each other during a day. So it asks for the second and hour angle, the minute and hour angle, and the second and minute angle.
Thanks a lot..
| Take two hands: a fast hand that completes $x$ revolutions per day, and a slow hand that completes $y$ revolutions per day. Now rotate the clock backwards, at a rate of $y$ revolutions per day: the slow hand comes to a standstill, and the fast hand slows down to $x-y$ revolutions per day. So the number of times that the hands are at right angles is $2(x-y)$.
The three hands make 2, 24, and 1440 revolutions per day, so the total is:
$$2\times(24-2) + 2\times(1440-2) + 2\times(1440-24) = 5752$$
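The same arithmetic as a snippet:

    revs = [2, 24, 1440]  # hour, minute, second hand: revolutions per day
    print(sum(2 * (fast - slow)
              for i, slow in enumerate(revs)
              for fast in revs[i + 1:]))  # 44 + 2876 + 2832 = 5752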
|
How to prove that $\lim\limits_{h \to 0} \frac{a^h - 1}{h} = \ln a$ In order to find the derivative of an exponential function in its general form $a^x$ by the definition, I used limits.
$\begin{align*}
\frac{d}{dx} a^x & = \lim_{h \to 0} \left [ \frac{a^{x+h}-a^x}{h} \right ]\\ \\
& =\lim_{h \to 0} \left [ \frac{a^x \cdot a^h-a^x}{h} \right ]
\\ \\
&=\lim_{h \to 0} \left [ \frac{a^x \cdot (a^h-1)}{h} \right ]
\\ \\
&=a^x \cdot \lim_{h \to 0} \left [\frac {a^h-1}{h} \right ]
\end{align*}$
I know that this last limit is equal to $\ln(a)$ but how can I prove it by using basic Algebra and Exponential and Logarithms properties?
Thanks
| It depends a bit on what you're prepared to accept as "basic algebra and exponential and logarithms properties". Look first at the case where $a$ is $e$. You need to know that $\lim_{h\to0}(e^h-1)/h=1$. Are you willing to accept that as a "basic property"? If so, then $a^h=e^{h\log a}$ so $$(a^h-1)/h=(e^{h\log a}-1)/h={e^{h\log a}-1\over h\log a}\log a$$ so $$\lim_{h\to0}(a^h-1)/h=(\log a)\lim_{h\to0}{e^{h\log a}-1\over h\log a}=\log a$$
|
Question about two simple problems on covering spaces Here are two problems that look trivial, but I could not prove.
i) If $p:E \to B$ and $j:B \to Z$ are covering maps, and $j$ is such that the preimages of points are finite sets, then the composite is a covering map.
I suppose that for this, the neighborhood $U$ that will be evenly covered by the composite will be the same one that is evenly covered by $j$, but I can't prove that the preimage can be written as a disjoint union of open sets homeomorphic to $U$.
ii) For this I have no idea what to do, but if I prove that it's injective, I'm done. Let $p:E \to B$ be a covering map, with $E$ path connected and $B$ simply connected; prove that $p$ is a homeomorphism.
| Let's call an open neighborhood $U$ of a point $y$ principal (wrt. a covering projection $p: X \to Y$) if its preimage $p^{-1}(U)$ is a disjoint union of open sets, each of which is mapped homeomorphically onto $U$ by $p$.
By definition a covering projection is a surjection $p: X \to Y$, such that every point has a principal neighborhood. It is easy to see, that if $U$ is a principal neighborhood of a point $y$, then any open neighborhood $U'$ of $y$ with $U' \subset U$ is again principal.
i) Let $p: X \to Y$ and $q: Y \to Z$ be covering projections, where $q^{-1}(\{z\})$ is finite for every $z \in Z$. Let $z \in Z$ and $U$ a principal neighborhood of $z$. For every point $y \in q^{-1}(\{z\})$ choose a principal neighborhood $V_y$. We can assume that $V_y$ is a subset of the component of $q^{-1}(U)$ corresponding to $y$, possibly replacing $V_y$ with its intersection with that component.
Now let $$U' = \bigcap_{y \in q^{-1}(\{z\})}q(V_y),$$ then $U'$ is an open (being the intersection of finitely many open subsets) neighborhood of $z$. It should be easy to verify that $U'$ is principal.
ii) Let $e,e' \in E$ with $p(e) = p(e')$ and $\gamma: I \to E$ be a path from $e$ to $e'$. Now $p \circ \gamma$ is a closed path, and therefore nullhomotopic. Lifting such a homotopy shows that $e=e'$.
|
A property of Hilbert sphere Let $X$ be (Edit: a closed convex subset of ) the unit sphere $Y=\{x\in \ell^2: \|x\|=1\}$ in $\ell^2$ with the great circle (geodesic) metric. (Edit: Suppose the diameter of $X$ is less than $\pi/2$.) Is it true that every decreasing sequence of nonempty closed convex sets in $X$ has a nonempty intersection? (A set $S$ is convex in $X$ if for every $x,y\in S$ the geodesic path between $x,y$ is contained in $S$.)
(I edited my original question.)
| No. For example, let $A_n$ be the subset of $X$ consisting of vectors that are zero in the first $n$ co-ordinates.
EDIT: this assumes that when $x$ and $y$ are antipodal, convexity of $S$ containing $x$, $y$ only requires that at least one of the great-circle paths is contained in $S$. If it requires all of them, then the $A_n$ are not convex. t.b. points out in the comments that in this case we can set $A_n$ to consist of all vectors in $X$ that are zero in the first $n$ co-ordinates and non-negative in the remainder.
|
Prove that $\lim \limits_{n \to \infty} \frac{x^n}{n!} = 0$, $x \in \Bbb R$. Why is
$$\lim_{n \to \infty} \frac{2^n}{n!}=0\text{ ?}$$
Can we generalize it to any exponent $x \in \Bbb R$? This is to say, is
$$\lim_{n \to \infty} \frac{x^n}{n!}=0\text{ ?}$$
This is being repurposed in an effort to cut down on duplicates, see here: Coping with abstract duplicate questions.
and here: List of abstract duplicates.
| Stirling's formula says that
$$ n! \sim \sqrt{2 \pi n} \left(\frac{n}{e}\right)^n, $$
in the sense that
$$ \lim_{n \to \infty} \frac{n!}{\sqrt{2 \pi n} \left(\displaystyle\frac{n}{e}\right)^n} = 1; $$
therefore
$$
\begin{aligned}
\lim_{n \to \infty} \frac{2^n}{n!} & = \lim_{n \to \infty} \frac{2^n}{\sqrt{2 \pi n} \left(\displaystyle\frac{n}{e}\right)^n} = \lim_{n \to \infty} \Bigg[\frac{1}{\sqrt{2 \pi n}} \cdot \frac{2^n}{\left(\displaystyle\frac{n}{e}\right)^n} \Bigg]\\
&= \lim_{n \to \infty} \frac{1}{\sqrt{2 \pi n}} \cdot \lim_{n \to \infty} \left(\frac{2e}{n}\right)^n = 0 \cdot 0 = 0
\end{aligned}
$$
Note: you can generalize by replacing $2$ with $x$.
Visit: Stirling's approximation.
|
How to prove that $ f(x) = \sum_{k=1}^\infty \frac{\sin((k + 1)!\;x )}{k!}$ is nowhere differentiable This function is continuous; that follows from the Weierstrass M-test. But proving non-differentiability, I think, is too hard. Does someone know how I can prove this? Or at least have a paper with the proof?
The function is
$$
f(x) = \sum_{k=1}^\infty \frac{\sin((k + 1)!\;x )}{k!}$$
Thanks!
| (Edited: handwaving replaced by rigor)
For conciseness, define the helper functions $\gamma_k(x)=\sin((k+1)!x)$. Then $f(x)=\sum_k \frac{\gamma_k(x)}{k!}$.
Fix an arbitrary $x\in\mathbb R$. We will construct a sequence $(x_n)_n$ such that
$$\lim_{n\to\infty} x_n = x \quad\land\quad \lim_{n\to\infty} \left|\frac{f(x_n)-f(x)}{x_n-x}\right| = \infty$$
Such a sequence will directly imply that $f$ is not differentiable at $x$.
Let $x'_n$ be the largest number less than $x$ such that $|\gamma_n(x'_n)-\gamma_n(x)|=1$. Let $x''_n$ be the smallest number larger than $x$ such that $\gamma_n(x''_n)=\gamma_n(x'_n)$. One of these, to be determined later, will become our $x_n$.
No matter which of these two choices of $x_n$ we make, we have $|x_n-x|<\frac{2\pi}{(n+1)!}$, so $\lim x_n=x$.
To estimate the difference quotient, write
$$f(x) = \underbrace{\sum_{k=1}^{n-1}\frac{\gamma_k(x)}{k!}}_{p(x)}+
\underbrace{\frac{\gamma_n(x)}{n!}}_{q(x)}+
\underbrace{\sum_{k=n+1}^{\infty} \frac{\gamma_k(x)}{k!}}_{r(x)}$$
and so,
$$\underbrace{f(x_n)-f(x)}_{\Delta f} = \underbrace{p(x_n)-p(x)}_{\Delta p} +
\underbrace{q(x_n)-q(x)}_{\Delta q} +
\underbrace{r(x_n)-r(x)}_{\Delta r}$$
Of these, by construction of $x_n$ we have $|\Delta q| = \frac{1}{n!}$.
Also, $r(x)$ is globally bounded by the remainder term in the series $\sum 1/n! = e$, which by Taylor's theorem is at most $\frac{e}{(n+1)!}$. So $|\Delta r| \le \frac{2e}{(n+1)!}$.
$\Delta p$ is not dealt with as easily. In some cases it may be numerically larger than $\Delta q$, ruining a simple triangle-inequality based estimate. But it can be tamed by a case analysis:
* If $p$ is strictly monotonic on $[x'_n, x''_n]$, then $p(x'_n)-p(x)$ and $p(x''_n)-p(x)$ will have opposite signs. Since $q(x'_n)=q(x''_n)$, we can choose $x_n$ such that $\Delta p$ and $\Delta q$ have the same sign. Therefore $|\Delta p+\Delta q|\ge|\Delta q|=\frac{1}{n!}$.
* Otherwise, $p$ has an extremum between $x'_n$ and $x''_n$; select $x_n$ such that the extremum is between $x$ and $x_n$. Because $p$ is a finite sum of $C^\infty$ functions, we can bound its second derivative separately for each of its terms:
$$\forall t: |p''(t)| \le \sum_{k=1}^{n-1}\left|\frac{\gamma''_k(t)}{k!}\right| \le
\sum_{k=1}^{n-1}\frac{(k+1)!^2}{k!} \le
\sum_{k=1}^{n-1} (k+1)!(k+1) \le 2n!n $$
Therefore the maximal variation of $p$ in an interval of length $\le\frac{2\pi}{(n+1)!}$ that contains a stationary point is at most $\left(\frac{2\pi}{(n+1)!}\right)^2 2n!n = \frac{8\pi^2n}{(n+1)^2}\frac{1}{n!}$. The $\frac{8\pi^2n}{(n+1)^2}$ factor is less than $1/2$ for $n>16\pi^2$, so for large enough $n$ we have $|\Delta p+\Delta q|\ge \frac{1}{2n!}$.
Thus, for large $n$ we always have
$$|\Delta f| \ge \frac{1}{2n!} - \frac{2e}{(n+1)!} = \frac{1}{n!}\left(\frac{1}{2}-\frac{2e}{n+1}\right)$$
and therefore
$$\left|\frac{f(x_n)-f(x)}{x_n-x}\right| \ge \frac{(n+1)!}{2\pi}|\Delta f| \ge \frac{n+1}{2\pi}\left(\frac{1}{2}-\frac{2e}{n+1}\right) = \frac{n+1}{4\pi}-\frac{e}{\pi} \to \infty$$
as promised.
|
Is the power set of the natural numbers countable? Some explanations:
A set S is countable if there exists an injective function $f$ from $S$ to the natural numbers ($f:S \rightarrow \mathbb{N}$).
$\{1,2,3,4\}, \mathbb{N},\mathbb{Z}, \mathbb{Q}$ are all countable.
$\mathbb{R}$ is not countable.
The power set $\mathcal P(A) $ is defined as a set of all possible subsets of A, including the empty set and the whole set.
$\mathcal P (\{\})=\{\{\}\}, \mathcal P (\mathcal P(\{\}))=\{\{\}, \{\{\}\}\} $
$\mathcal P(\{1,2\})=\{\{\}, \{1\},\{2\},\{1,2\}\}$
My question is:
Is $\mathcal P(\mathbb{N})$ countable? What would an injective function $f:\mathcal P(\mathbb{N}) \rightarrow \mathbb{N}$ look like?
| The power set of the natural numbers has the same cardinality as the real numbers, so it is uncountable. (And by Cantor's theorem $|\mathcal P(S)|>|S|$ for every set $S$, so there can be no injection $\mathcal P(\mathbb N)\to\mathbb N$.)
In order to be rigorous, here's a proof of this.
|
An inequality for graphs In the middle of a proof in a graph theory book I am looking at appears the inequality
$$\sum_i {d_i \choose r} \ge n { m /n \choose r},$$
and I'm not sure how to justify it. Here $d_i$ is the degree of vertex $i$ and the sum is over all $n$ vertices. There are $m$ edges. If it is helpful I think we can assume that $r$ is fixed and $n$ is large and $m \approx n^{2 - \epsilon}$ for some $\epsilon > 0$.
My only thought is that I can do it if I replace every ${d_i \choose r}$ by $(d_i)^r / r!$, but this seems a little sloppy.
Also, for my purposes, I am happy with even a quick-and-dirty proof that
$$\sum_i {d_i \choose r} \ge C \, n { m /n \choose r}$$
holds for some constant $C>0$.
Motivation: an apparently simple counting argument gives a lower bound on the Turán number of the complete bipartite graph $K_{r,s}$.
Source: Erdős on Graphs : His Legacy of Unsolved Problems. In the edition I have, this appears on p.36.
Relevant part of the text of the book:
3.3. Turán Numbers for Bipartite Graphs
Problem
What is the largest integer $m$ such that there is a graph $G$ on $n$ vertices and $m$
edges which does not contain $K_{r,r}$ as a graph?
In other words, determine $t(n, K_{r,r})$.
.... for $2\le r \le s$
$$t(n,K_{r,s}) < cs^{1/r}n^{2-1/r}+O(n) \qquad (3.2)$$
The proof of (3.2) is by the following counting argument:
Suppose a graph $G$ has $m$ edges and has degree sequence $d_1,\ldots,d_n$. The
number of copies of stars $S_r$ in $G$ is exactly
$$\sum_i \binom{d_i}r.$$
Therefore, there is at least one copy of $K_{r,s}$ contained in $G$ if
$$\frac{\sum_i \binom{d_i}r}{\binom nr} \ge \frac{n\binom{m/n}r}{\binom nr}>s$$
which holds if $m\ge cs^{1/r}n^{2-1/r}$ for some appropriate constant $c$ (which depends of $r$ but is independent on $n$).
This proves (3.2)
| Fix an integer $r \geq 1$. Then the function $f: \mathbb R^{\geq 0} \to \mathbb R^{\geq 0}$ given by
$$
f(x) := \binom{\max \{ x, r-1 \}}{r}
$$
is both monotonically increasing and convex.* So applying Jensen's inequality to $f$ for the $n$ numbers $d_1, d_2, \ldots, d_n$, we get
$$
\sum_{i=1}^n f(d_i) \geq n \ f\left(\frac{1}{n} \sum_{i=1}^n d_i \right). \tag{1}
$$
The claim in the question follows by simplifying the left and right hand sides:
* Notice that for any integer $d$, we have $f(d) = \binom{d}{r}$. (If $d < r$, then both $f(d)$ and $\binom{d}{r}$ are zero.) So the left hand side simplifies to $\sum \limits_{i=1}^n \binom{d_i}{r}$.
* On the other hand, $\sum\limits_{i=1}^n d_i = 2m$ for any graph. Therefore, assuming that $2m/n \geq r-1$ (which seems reasonable given the range of parameters, see below), the right hand side of $(1)$ simplifies to $n \cdot \binom{2m/n}{r}$. (In turn, this is at least the $n \cdot \binom{m/n}{r}$ claimed in the question.)
EDIT: (More context has been added to the question.) In the context of the problem, we have $m \geq c s^{1/r} n^{2 - 1/r}$. If $r > 1$, then $m \geq c s^{1/r} n^{3/2} \geq \Omega(n^{3/2})$ (where we treat $r$ and $s$ to be fixed constants, and let $n \to \infty$). Hence, for sufficiently large $n$, we have $m \geq n (r-1)$, and hence our proof holds.
*Monotonicity and convexity of $f$. We write $f(x)$ as the composition $f(x) = h(g(x))$, where
$$
\begin{align*}
g(x) &:= \max \{ x - r + 1,\, 0 \} && (x \geq 0);
\\
h(y) &:= \frac{1}{r!}\, y (y+1) (y+2) \cdots (y+r-1) && (y \geq 0).
\end{align*}
$$
Notice that both $g(x)$ and $h(y)$ are monotonically increasing and convex everywhere in $[0, \infty)$. (It might help to observe that $h$ is a polynomial with nonnegative coefficients, so all its derivatives are nonnegative everywhere in $[0, \infty)$.) Under these conditions, it follows that $f(x) = h(g(x))$ is also monotonically increasing and convex for all $x \geq 0$. (See the wikipedia page for the relevant properties of convex functions.)
EDIT: Corrected a typo. We need $2m/n \geq r-1$, rather than $2m \geq r-1$.
|
Equal simple field extensions? I have a question about simple field extensions.
For a field $F$, if $[F(a):F]$ is odd, then why is $F(a)=F(a^2)$?
| Since $[F(a):F]$ is odd, the minimal polynomial of $a$ over $F$ has odd degree, say $p(x)=b_{0}x^{2k+1}+b_1x^{2k}+\cdots+b_{2k+1}$. Since $a$ satisfies $p(x)$ we have $b_{0}a^{2k+1}+b_1a^{2k}+\cdots+b_{2k+1}=0$, i.e. $a(b_0a^{2k}+b_2a^{2k-2}+\cdots+b_{2k})+(b_1a^{2k}+b_3a^{2k-2}+\cdots+b_{2k+1})=0$. The factor $b_0a^{2k}+b_2a^{2k-2}+\cdots+b_{2k}$ is nonzero: otherwise $a$ would satisfy a nonzero polynomial of degree $2k<2k+1$ (its leading coefficient $b_0\ne0$), contradicting minimality. Hence $a= -(b_1a^{2k}+b_3a^{2k-2}+\cdots+b_{2k+1})/(b_0a^{2k}+b_2a^{2k-2}+\cdots+b_{2k})$, so $a\in F(a^2)$ and therefore $F(a)\subseteq F(a^2)$; the reverse inclusion is obvious, so $F(a^2)=F(a)$.
|
Is it possible that $\mathbb{Q}(\alpha)=\mathbb{Q}(\alpha^{n})$ for all $n>1$? Is it possible that $\mathbb{Q}(\alpha)=\mathbb{Q}(\alpha^{n})$ for all $n>1$ when $\mathbb{Q}(\alpha)$ is a $p$th degree Galois extension of $\mathbb{Q}$?
($p$ is prime)
I got stuck with this problem while trying to construct polynomials whose Galois group is cyclic group of order $p$.
Edit: Okay, I got two nice answers for this question but to fulfill my original purpose(constructing polynomials with cyclic Galois group) I realized that I should ask for all primes $p$ if there is any such $\alpha$(depending on $p$) such that the above condition is satisfied. If the answer is no (i.e. there are primes $p$ for which there is no $\alpha$ such that $\mathbb{Q}(\alpha)=\mathbb{Q}(\alpha^{n})$ for all $n>1$) then I succeed up to certain stage.
| If you mean for some $\alpha$ and $p$, then yes: if $\alpha=1+\sqrt{2}$, then
$\mathbb{Q}(\alpha)$ is of degree 2, which is prime, and $\alpha^n$ is never a rational number, so $\mathbb{Q}(\alpha)=\mathbb{Q}(\alpha^n)$ for all $n>1$.
If you mean for all $\alpha$ such that the degree of $\mathbb{Q}(\alpha)$ is a prime number $p$, then no: if $\alpha=\sqrt{2}$, then $\mathbb{Q}(\alpha)$ is of degree 2, which is prime, but $\mathbb{Q}(\alpha^n)=\mathbb{Q}$ when $n$ is even.
|
Prove the identity $ \sum\limits_{s=0}^{\infty}{p+s \choose s}{2p+m \choose 2p+2s} = 2^{m-1} \frac{2p+m}{m}{m+p-1 \choose p}$ $$ \sum\limits_{s=0}^{\infty}{p+s \choose s}{2p+m \choose 2p+2s} = 2^{m-1} \frac{2p+m}{m}{m+p-1 \choose p}$$
Class themes are: Generating functions and formal power series.
| I will try to give an answer using basic complex variables here.
This calculation is very simple in spite of some more complicated intermediate expressions that appear.
Suppose we are trying to show that
$$\sum_{q=0}^\infty
{p+q\choose q} {2p+m\choose m-2q}
= 2^{m-1} \frac{2p+m}{m} {m+p-1\choose p}.$$
Introduce the integral representation
$${2p+m\choose m-2q}
= \frac{1}{2\pi i}
\int_{|z|=\epsilon}
\frac{(1+z)^{2p+m}}{z^{m-2q+1}} \; dz.$$
This gives for the sum the integral (the second binomial coefficient
enforces the range)
$$\frac{1}{2\pi i}
\int_{|z|=\epsilon}
\frac{(1+z)^{2p+m}}{z^{m+1}}
\sum_{q=0}^\infty {p+q\choose q} z^{2q} \; dz
\\ = \frac{1}{2\pi i}
\int_{|z|=\epsilon}
\frac{(1+z)^{2p+m}}{z^{m+1}} \frac{1}{(1-z^2)^{p+1}} \; dz
\\ = \frac{1}{2\pi i}
\int_{|z|=\epsilon}
\frac{(1+z)^{p+m-1}}{z^{m+1}} \frac{1}{(1-z)^{p+1}} \; dz.$$
This is
$$\frac{1}{2\pi i}
\int_{|z|=\epsilon}
\frac{(2+z-1)^{p+m-1}}{z^{m+1}} \frac{1}{(1-z)^{p+1}} \; dz
\\ = 2^{p+m-1} \frac{1}{2\pi i}
\int_{|z|=\epsilon}
\frac{(1+(z-1)/2)^{p+m-1}}{z^{m+1}} \frac{1}{(1-z)^{p+1}} \; dz
\\ = 2^{p+m-1} \frac{1}{2\pi i}
\int_{|z|=\epsilon} \frac{1}{z^{m+1}} \frac{1}{(1-z)^{p+1}}
\sum_{q=0}^{p+m-1} {p+m-1\choose q} \frac{(z-1)^q}{2^q} \; dz
\\ = 2^{p+m-1} \frac{1}{2\pi i}
\int_{|z|=\epsilon} \frac{1}{z^{m+1}} \frac{1}{(1-z)^{p+1}}
\sum_{q=0}^{p+m-1} {p+m-1\choose q} (-1)^q \frac{(1-z)^q}{2^q} \; dz
\\ = 2^{p+m-1} \frac{1}{2\pi i}
\int_{|z|=\epsilon} \frac{1}{z^{m+1}}
\sum_{q=0}^{p+m-1} {p+m-1\choose q}
(-1)^q \frac{(1-z)^{q-p-1}}{2^q} \; dz.$$
The only non-zero contribution is with $q$ ranging from $0$ to $p.$
This gives
$$ 2^{p+m-1} \frac{1}{2\pi i}
\int_{|z|=\epsilon} \frac{1}{z^{m+1}}
\sum_{q=0}^p {p+m-1\choose q}
(-1)^q \frac{1}{2^q} \frac{1}{(1-z)^{p+1-q}} \; dz$$
which on extracting coefficients yields
$$2^{p+m-1} \sum_{q=0}^p {p+m-1\choose q}
(-1)^q \frac{1}{2^q} {m+p-q\choose p-q}.$$
Introduce the integral representation
$${m+p-q\choose p-q}
= \frac{1}{2\pi i}
\int_{|z|=\epsilon}
\frac{(1+z)^{m+p-q}}{z^{p-q+1}} \; dz.$$
This gives for the sum the integral (the second binomial coefficient
enforces the range)
$$2^{p+m-1} \frac{1}{2\pi i}
\int_{|z|=\epsilon}
\frac{(1+z)^{m+p}}{z^{p+1}}
\sum_{q=0}^\infty
{p+m-1\choose q}\frac{(-1)^q}{2^q}
\left(\frac{z}{1+z}\right)^q \; dz
\\ = 2^{p+m-1} \frac{1}{2\pi i}
\int_{|z|=\epsilon}
\frac{(1+z)^{m+p}}{z^{p+1}}
\left(1-\frac{1}{2}\frac{z}{1+z}\right)^{p+m-1} \; dz
\\ = 2^{p+m-1} \frac{1}{2\pi i}
\int_{|z|=\epsilon}
\frac{1+z}{z^{p+1}}
\left(1+z-1/2\times z\right)^{p+m-1} \; dz
\\ = \frac{1}{2\pi i}
\int_{|z|=\epsilon}
\frac{1+z}{z^{p+1}}
\left(2+z\right)^{p+m-1} \; dz.$$
Extracting coefficients now yields
$${p+m-1\choose p} \times 2^{m-1}
+ {p+m-1\choose p-1} \times 2^m.$$
This symmetric form may be re-written in an asymmetric form as
follows,
$${p+m-1\choose p} \times 2^{m-1}
+ \frac{p}{m} {p+m-1\choose p} \times 2^m
\\ = 2^{m-1} \times
\left(1 + \frac{2p}{m}\right) {p+m-1\choose p}$$
as claimed.
The bonus feature of this calculation is that we evaluated two binomial sums instead of one.
We have not made use of the properties of complex integrals here so
this computation can also be presented using just algebra of
generating functions.
Apparently this method is due to Egorychev although some of it is
probably folklore.
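(For the skeptical reader, the identity itself is cheap to check by brute force for small parameters; a sketch:)

    from math import comb

    def lhs(p, m):
        # terms vanish once 2s > m, so the sum is finite
        return sum(comb(p + s, s) * comb(2 * p + m, 2 * p + 2 * s)
                   for s in range(m // 2 + 1))

    for p in range(5):
        for m in range(1, 9):
            assert m * lhs(p, m) == 2 ** (m - 1) * (2 * p + m) * comb(m + p - 1, p)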
|
Countable or uncountable set 8 signs Let S be a set of pairwise disjoint 8-like symbols on the plane. (The 8s may be inside each other as well) Prove that S is at most countable.
Now I know you can "map" a set of disjoint intervals in $\mathbb R$ to a countable set (e.g. $\mathbb Q$, the rational numbers) and solve similar problems like this, but the fact that the 8s can go inside each other is hindering my progress with my conventional approach...
| Let $\mathcal{E}$ denote the set of all your figure eights. Then, define a map $f:\mathcal{E}\to\mathbb{Q}^2\times\mathbb{Q}^2$ by taking $E\in\mathcal{E}$ to a chosen pair of rational points, one sitting inside each loop. Show that if two such figure eights were to have the same chosen pair, they would have to intersect, which is impossible. Thus, $f$ is an injection and so $\mathcal{E}$ is countable.
|
"Best practice" innovative teaching in mathematics Our department is currently revamping our first-year courses in mathematics, which are huge classes (about 500+ students) that are mostly students who will continue on to Engineering.
The existing teaching methods (largely, "lemma, proof, corollary, application, lemma, proof, corollary, application, rinse and repeat") do not properly accommodate the widely varying students.
I am interested in finding out about alternative, innovative or just interesting ideas for teaching mathematics - no matter how way out they may seem. Preferably something backed up by educational research, but any ideas are welcome.
Edit: a colleague pointed out that I should mention that currently we lecture to the 500 students in groups of around 200-240, not all 500 at once.
| “...no matter how way out they may seem.”
In that case you might want to consider the public-domain student exercises for mathematics that I have created. The address is: http://www.public-domain-materials.com/folder-student-exercise-tasks-for-mathematics-language-arts-etc---autocorrected.html
|
The constant distribution If $u$ is a distribution on an open set $\Omega\subset \mathbb R^n$ such that ${\partial ^i}u = 0$ for all $i=1,2,\ldots,n$, is $u$ necessarily a constant function?
| It's true if we assume that $\Omega$ is connected. We will show that $u$ is locally constant, and the connectedness will allow us to conclude that $u$ is indeed constant. Let $a\in\Omega$ and $\delta>0$ such that $\overline{B(a,2\delta)}\subset \Omega$. Consider a test function $\varphi\in\mathcal D(\Omega)$ such that $\varphi=1$ on $B(a,2\delta)$, and put $S=\varphi u$, which is a distribution with compact support. Let $\{\rho_k\}$ be a mollifier, i.e. non-negative functions of integral $1$ with support contained in the ball $\overline{B\left(0,\frac 1k\right)}$, and consider $S_k=\rho_k* S$, which is a smooth function. We have $\partial^i S=(\partial^i\varphi) u$, hence $\partial^iS=0$ on $B(a,2\delta)$. Now $\partial^iS_k=\rho_k*(\partial^iS)$ vanishes on $B(a,\delta)$ once $\frac 1k<\delta$, hence $S_k$ is constant on $B(a,\delta)$. Since $S_k$ converges to $S$ in $\mathcal D'$, $S$ is constant on $B(a,\delta)$, and so is $u$.
This topic answers the case $\Omega=\mathbb R^n$.
|
Optimal number of answers for a test with wrong-answer penalty Suppose you have to take a test with ten questions, each with four different options (no multiple answers), and a wrong-answer penalty of half a correct answer. Blank questions score neither positively nor negatively.
Supposing you have not studied especially hard this time, what is the optimal number of questions to attempt so as to maximize the probability of passing the exam (getting at least five points)?
| Let's work this all the way through. Suppose you answer $n$ questions. Let $X$ be the number you get correct. Assuming $\frac{1}{4}$ chance of getting an answer correct, $X$ is binomial$(n,1/4)$. Let $Y$ be the actual score on the exam, including the penalty. Then $Y = X - \frac{1}{2}(n-X) = \frac{3}{2}X-\frac{1}{2}n$. To maximize the probability of passing the exam, you want to choose the value of $n$ that maximizes $P(Y \geq 5)$. This probability is $$P\left(\frac{3}{2}X - \frac{n}{2} \geq 5\right) \Longleftrightarrow P\left(X \geq \frac{10+n}{3}\right) = \sum_{k = \lceil(10+n)/3\rceil}^n \binom{n}{k} \left(\frac{1}{4}\right)^k \left(\frac{3}{4}\right)^{n-k}$$
$$ = \left(\frac{3}{4}\right)^n \sum_{k =\lceil(10+n)/3\rceil}^n \binom{n}{k} \frac{1}{3^k}.$$
This can be calculated quickly for the possible values of $n$. I get, via Mathematica,
5 0.000976563
6 0.000244141
7 0.00134277
8 0.00422668
9 0.00134277
10 0.00350571
Thus you maximize your probability of passing by answering eight questions.
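The table is easy to reproduce (a sketch in Python rather than Mathematica):

    from math import ceil, comb

    def p_pass(n):
        # P(X >= ceil((10 + n)/3)) for X ~ Binomial(n, 1/4)
        k_min = ceil((10 + n) / 3)
        return sum(comb(n, k) * 0.25 ** k * 0.75 ** (n - k) for k in range(k_min, n + 1))

    for n in range(5, 11):
        print(n, p_pass(n))   # the maximum is at n = 8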
|
What kinds of non-zero characteristic fields exist? There are these finite fields of characteristic $p$, namely $\mathbb{F}_{p^n}$ for any $n\geq1$, and there is the algebraic closure $\bar{\mathbb{F}_p}$. The only other fields of non-zero characteristic I can think of are transcendental extensions, namely $\mathbb{F}_{q}(x_1,x_2,\ldots,x_k)$ where $q=p^{n}$.
That's all! I cannot think of any other fields of non-zero characteristic. I may be asking too much if I ask for a characterization of all non-zero characteristic fields. But I would like to know what other kinds of such fields are possible.
Thanks.
| The basic structure theory of fields tells us that a field extension $L/K$ can be split into the following steps:
1. an algebraic extension $K^\prime /K$,
2. a purely transcendental extension $K^\prime (T)/K^\prime$,
3. an algebraic extension $L/K^\prime (T)$.
The field $K^\prime$ is the algebraic closure of $K$ in $L$
and thus uniquely determined by $L/K$.
The set $T$ is a transcendence basis of $L/K$; its cardinality
is uniquely determined by $L/K$.
A field $L$ has characteristic $p\neq 0$ iff it contains the
finite field $\mathbb{F}_p$. Hence you get all fields of characteristic
$p$ by letting $K=\mathbb{F}_p$ in the description of field extensions,
and by choosing $T$ and $K^\prime$ and $L/K^\prime (T)$ as you like.
Of course in general it is then hard to judge whether two such fields
are isomorphic - essentially because of step 3.
|
Are polynomials dense in Gaussian Sobolev space? Let $\mu$ be standard Gaussian measure on $\mathbb{R}^n$, i.e. $d\mu = (2\pi)^{-n/2} e^{-|x|^2/2} dx$, and define the Gaussian Sobolev space $H^1(\mu)$ to be the completion of $C_c^\infty(\mathbb{R}^n)$ under the inner product
$$\langle f,g \rangle_{H^1(\mu)} := \int f g\, d\mu + \int \nabla f \cdot \nabla g\, d\mu.$$
It is easy to see that polynomials are in $H^1(\mu)$. Do they form a dense set?
I am quite sure the answer must be yes, but can't find or construct a proof in general. I do have a proof for $n=1$, which I can post if anyone wants. It may be useful to know that the polynomials are dense in $L^2(\mu)$.
Edit: Here is a proof for $n=1$.
It is sufficient to show that any $f \in C^\infty_c(\mathbb{R})$ can be approximated by polynomials. We know polynomials are dense in $L^2(\mu)$, so choose a sequence of polynomials $q_n \to f'$ in $L^2(\mu)$. Set $p_n(x) = \int_0^x q_n(y)\,dy + f(0)$; $p_n$ is also a polynomial. By construction we have $p_n' \to f'$ in $L^2(\mu)$; it remains to show $p_n \to f$ in $L^2(\mu)$. Now we have
$$ \begin{align*} \int_0^\infty |p_n(x) - f(x)|^2 e^{-x^2/2} dx &= \int_0^\infty \left(\int_0^x (q_n(y) - f'(y)) dy \right)^2 e^{-x^2/2} dx \\
&\le \int_0^\infty \int_0^x (q_n(y) - f'(y))^2\,dy \,x e^{-x^2/2} dx \\
&= \int_0^\infty (q_n(x) - f'(x))^2 e^{-x^2/2} dx \to 0 \end{align*}$$
where we used Cauchy-Schwarz in the second line and integration by parts in the third. The $\int_{-\infty}^0$ term can be handled the same with appropriate minus signs.
The problem with $n > 1$ is I don't see how to use the fundamental theorem of calculus in the same way.
| Nate, I once needed this result, so I proved it in Dirichlet forms with polynomial domain (Math. Japonica 37 (1992) 1015-1024). There may be better proofs out there, but you could start with this paper.
|
Confused about modular notations I am little confused about the notations used in two articles at wikipedia.
According to the page on Fermat Primality test
$
a^{p-1}\equiv 1 \pmod{m}$ means that when $a^{p-1}$ is divided by $m$, the remainder is 1.
And according to the page on Modular Exponential
$c \equiv b^e \pmod{m}$ means that when $b^e$ is divided by $m$, the remainder is $c$.
Am I interpreting them wrong? or both of them are right? Can someone please clarify them to me?
Update: So given an equation say $x \equiv y \pmod{m}$, how do we interpret it? $y$ divided by $m$ gives $x$ as remainder or $x$ divided by $m$ gives $y$ as remainder?
| Congruences are like equations: they can be read left to right or right to left, and both readings are correct.
Another way to picture this is:
$a \equiv b \pmod m$ means there exists an integer $k$ such that $km+a=b$.
There are various ways to visualize a mod-$m$ number system: the integers mod $3$, for example, can be viewed as $\{0,1,2\}$ or as $\{-1,0,1\}$, since $2\equiv-1 \pmod 3$.
Another point is that the $\pmod m$ applies to both sides of the congruence. For example, in polar coordinates the angle coordinate is naturally modular: $0$ degrees is the same as $360$ degrees is the same as $720$ degrees, the only difference being the number of full rotations.
But the problem is that the same equation $a \equiv b \pmod m$ is interpreted as $km + a = b$ in one article and $km + b = a$ in the other.
If there exists a $k$ such that $km+a=b$, then there also exists $-k$ such that $a=b-km$; remember that in an equation one can add or subtract a term as long as it is done on both sides. Thus, while the constant is different, such a constant does exist.
Casting out nines would be a useful exercise if you want some examples where m=9.
|
Is there any intuition behind why the derivative of $\cos(x)$ is equal to $-\sin(x)$ or just something to memorize? Why is $$\frac{d}{dx}\cos(x)=-\sin(x)\,?$$ I am studying for a differential equations test and I seem to always forget this, and I am just wondering if there is some intuition I'm missing, or is it just one of those things to memorize? I know this is not very differential-equations related, just one of those things that I've never really understood.
And alternatively, why is $$ \int\sin(x)\,dx=-\cos(x)\,?$$
any good explanations would be greatly appreciated.
| I am with Henning Makholm on this and produce a sketch
The slope of the blue line is the red line, and the slope of the red line is the green line. I know the blue line is sine and the red line is cosine; the green line can be seen to be the negative of sine. Similarly the partial area under the red line is the blue line, though this sort of approach leads to $\int \sin(x)dx=1-\cos(x)$.
|
No. of solutions of equation? Given an equation $a_1 X_1 + a_2 X_2 + \cdots + a_n X_n = N$, where $a_1,a_2,\ldots,a_n$ are positive constants, each $X_i$ can take only the two values $\{0,1\}$, and $N$ is a given constant. How can we calculate the number of solutions of the given equation?
| This is counting the number of solutions to the knapsack problem. You can adapt the Dynamic Programming algorithm to do this in pseudopolynomial time. (I'm assuming the input data are integers.) Let $s[i,w]$ = # of ways to achieve the sum $w$ using only the first $i$ variables.
Then $$s[0,w]=\begin{cases}1 & w=0 \\0 & w \ne 0\end{cases}$$
For $i=1,\ldots,n$ and $w=0,\ldots,N$:
$$s[i,w] = s[i-1,w] + s[i-1,w-a_i]\,,$$
where the second term is taken to be $0$ when $w < a_i$. The answer is $s[n,N]$.
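A sketch of this DP in Python, with the table rolled into one dimension (the inner loop runs downward so that each $X_i$ is used at most once):

    def count_solutions(a, N):
        # s[w] = number of 0/1 assignments of the processed a_i summing to w
        s = [0] * (N + 1)
        s[0] = 1
        for ai in a:
            for w in range(N, ai - 1, -1):
                s[w] += s[w - ai]
        return s[N]

    print(count_solutions([2, 3, 5, 7], 10))  # 2, namely {3, 7} and {2, 3, 5}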
|
Conditional expectation for a sum of iid random variables: $E(\xi\mid\xi+\eta)=E(\eta\mid\xi+\eta)=\frac{\xi+\eta}{2}$ I don't really know how to start proving this question.
Let $\xi$ and $\eta$ be independent, identically distributed random variables with $E(|\xi|)$ finite.
Show that
$E(\xi\mid\xi+\eta)=E(\eta\mid\xi+\eta)=\frac{\xi+\eta}{2}$
Does anyone here have any idea for starting this question?
| $E(\xi\mid \xi+\eta)=E(\eta\mid \xi+\eta)$ since $\xi$ and $\eta$ are exchangeable, i.e. $(\xi,\eta)$ and $(\eta,\xi)$ are identically distributed. (Independence does not matter here; exchangeability is enough.)
So $2E(\xi\mid \xi+\eta)=2E(\eta\mid \xi+\eta) = E(\xi\mid \xi+\eta)+E(\eta\mid \xi+\eta) =E(\xi+\eta\mid \xi+\eta) = \xi+\eta$, since the sum $\xi+\eta$ is known once we condition on it.
Now divide by two.
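A quick simulation illustrating the identity (the exponential distribution is an arbitrary stand-in for the common distribution):

    import numpy as np

    rng = np.random.default_rng(0)
    xi = rng.exponential(size=1_000_000)
    eta = rng.exponential(size=1_000_000)
    s = xi + eta

    # condition on the sum lying in a narrow band and compare E[xi | s] with s/2
    for lo in (1.0, 2.0, 3.0):
        band = (s > lo) & (s < lo + 0.1)
        print(xi[band].mean(), s[band].mean() / 2)  # the two columns agree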
|
How do I prove equality of x and y? If $0\leq x,y\leq\frac{\pi}{2}$ and $\cos x +\cos y -\cos(x+y)=\frac{3}{2}$, then how can I prove that $x=y=\frac{\pi}{3}$?
Your help is appreciated. I tried various formulas but nothing is working.
| You could also attempt a geometric proof. First, without loss of generality you can assume
$0 <x,y < \frac{\pi}{2}$.
Construct a triangle with angles $x,y, \pi-x-y$.
Let $a,b,c$ be the edges. Then by cos law, you know that
$$\frac{a^2+b^2-c^2}{2ab}+ \frac{a^2+c^2-b^2}{2ac}+ \frac{b^2+c^2-a^2}{2bc}=\frac{3}{2}$$
and you need to show that $a=b=c$.
The equation becomes
$$c(a^2+b^2-c^2)+b(a^2+c^2-b^2)+a(b^2+c^2-a^2)=3abc$$
or
$$a^2b+ab^2+ac^2+a^2c+bc^2+b^2c=a^3+b^3+c^3+3abc \,.$$
This is the equality case of Schur's inequality (with $t=1$): for non-negative $a,b,c$ the right side is always at least the left, with equality iff $a=b=c$ or two of the variables are equal and the third is zero. The latter cannot happen for the sides of a triangle, so $a=b=c$.
|
Proving that if $G/Z(G)$ is cyclic, then $G$ is abelian
Possible Duplicate:
Proof that if group $G/Z(G)$ is cyclic, then $G$ is commutative
If $G/Z(G)$ is cyclic, then $G$ is abelian
If $G$ is a group and $Z(G)$ the center of $G$, show that if $G/Z(G)$ is cyclic, then $G$ is abelian.
This is what I have so far:
We know that all cyclic groups are abelian. This means $G/Z(G)$ is abelian. $Z(G)= \{z \in G \mid zx=xz \text{ for all } x \in G \}$. So $Z(G)$ is abelian.
Is it sufficient to say that since $G/Z(G)$ and $Z(G)$ are both abelian, $G$ must be abelian?
| Here's part of the proof that $G$ is abelian. Hopefully this will get you started...
Let $Z(G)=Z$. If $G/Z$ is cyclic, then it has a generator, say $G/Z = \langle gZ \rangle$. This means that for each coset $xZ$ there exists some $i \in \mathbb{Z}$ such that $xZ=(gZ)^i=g^iZ$.
Suppose that $x,y \in G$. Consider $x \in xZ=g^iZ$ so that $x=g^iz$ for some $z\in Z$.
Represent $y$ in a similar manner and consider $xy$ and $yx$. Why are they equal?
Edit: !!!Spoiler alert!!! :) Here's the rest of the story.
$yZ \in G/Z = \langle gZ \rangle$ so that $yZ=(gZ)^j=g^jZ$ for some $j \in \mathbb{Z}$.
Therefore, $y \in yZ=g^jZ$ so that $y=g^jz_0$ for some $z_0 \in Z$.
Finally, $xy=g^izg^jz_0=g^ig^jzz_0=g^{i+j}zz_0=g^{j+i}zz_0=g^jg^izz_0=g^jz_0g^iz=yx$
The second equality follows because $z$ is in the center and thus commutes with everything. Then we're just messing with powers of $g$ (which commute with themselves). The next to last equality follows because $z_0$ is in the center and thus commutes with everything.
|
Breaking a variable out of a trigonometry equation $A = 2 \pi r^2 - r^2 (2 \arccos(d/2r) - \sin(2 \arccos(d/2r)))$
Given $A$ and $r$ I would like to solve for $d$. However, I get stuck breaking the $d/2r$ out the trig functions.
For context this is the area of two overlapping circles minus the overlapping region. Given a radius and desired area I'd like to be able to determine how far apart they should be. I know $A$ should be bounded below by $0 (d = 0)$ and above by $2 \pi r^2 (d \le 2r)$.
| This is a transcendental equation for $d$ which can't be solved for $d$ in closed form. You can get rid of the trigonometric functions in the last term using
$$\sin2x=2\sin x\cos x$$
and
$$\sin x=\sqrt{1-\cos^2 x}$$
and thus
$$\sin\left(2\arccos\frac d{2r}\right)=\frac dr\sqrt{1-\left(\frac d{2r}\right)^2}\;,$$
but that still leaves $d$ both in the argument of the arccosine and outside.
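Numerically, though, $d$ is easy to recover, since the right-hand side is increasing in $d$. A bisection sketch (note that the expression as written equals $\pi r^2$ at $d=0$ and $2\pi r^2$ at $d=2r$, so $A$ is assumed to lie in that range):

    from math import acos, pi, sin

    def A_of_d(d, r):
        t = acos(d / (2 * r))
        return 2 * pi * r**2 - r**2 * (2 * t - sin(2 * t))

    def solve_d(A, r, iters=60):
        lo, hi = 0.0, 2.0 * r     # A_of_d increases from pi*r^2 to 2*pi*r^2
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if A_of_d(mid, r) < A:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    print(solve_d(5.5, 1.0))      # separation giving area 5.5 for unit circles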
|
Example of a function that is not twice differentiable Give an example of a function f that is defined in a neighborhood of a s.t. $\lim_{h\to 0}(f(a+h)+f(a-h)-2f(a))/h^2$ exists, but is not twice differentiable.
Note: this follows a problem where I prove that the limit above $= f''(a)$ if $f$ is twice differentiable at $a$.
| Consider the function $f(x) = \sum_{i=0}^{n}|x-i|$. This function is continuous everywhere but not differentiable at the points $0,1,\ldots,n$. Consider the function
$G(x) = \int_{0}^{x} f(t)\, dt$. By FTC, $G$ is differentiable with $G'(x) = f(x)$, since $f$ is continuous; but $G' = f$ is not differentiable at those points, so $G$ is not twice differentiable there. The symmetric limit nevertheless exists at each such point $i$: writing $f(x)=|x-i|+s(x)$ with $s$ smooth near $i$, the kink $|x-i|$ has an odd antiderivative about $i$ and so contributes $\frac{h^2}{2}-\frac{h^2}{2}=0$ to $G(i+h)+G(i-h)-2G(i)$, leaving the limit $s'(i)$.
|
Counting words with parity restrictions on the letters Let $a_n$ be the number of words of length $n$ from the alphabet $\{A,B,C,D,E,F\}$ in which $A$ appears an even number of times and $B$ appears an odd number of times.
Using generating functions I was able to prove that $$a_n=\frac{6^n-2^n}{4}\;.$$
I was wondering if the above answer is correct and in that case what could be a combinatorial proof of that formula?
| I don’t know whether you’d call them combinatorial, but here are two completely elementary arguments of a kind that I’ve presented in a sophomore-level discrete math course. Added: Neither, of course, is as nice as Didier’s, which I’d not seen when I posted this.
Let $b_n$ be the number of words of length $n$ with an odd number of $A$’s and an odd number of $B$’s, $c_n$ the number of words of length $n$ with an even number of $A$’s and an even number of $B$’s, and $d_n$ the number of words of length $n$ with an odd number of $A$’s and an even number of $B$’s. Then
$$a_{n+1}=4a_n+b_n+c_n\;.\tag{1}$$
Clearly $a_n=d_n$: interchanging $A$’s and $B$’s gives a bijection between the two types of word. Moreover, $$6^n=a_n+b_n+c_n+d_n=2a_n+b_n+c_n\;,$$ so $$b_n+c_n=6^n-2a_n\;,\tag{2}$$ and substituting $(2)$ into $(1)$ turns it into $$a_{n+1}=2a_n+6^n,\qquad a_0=0\;.\tag{3}$$ $(3)$ can be solved by brute force:
$$\begin{align*}
a_n&=2a_{n-1}+6^{n-1}\\
&=2(2a_{n-2}+6^{n-2})+6^{n-1}\\
&=2^2a_{n-2}+2\cdot 6^{n-2}+6^{n-1}\\
&=2^2(2a_{n-3}+6^{n-3})+2\cdot 6^{n-2}+6^{n-1}\\
&=2^3a_{n-3}+2^2\cdot 6^{n-3}+2\cdot 6^{n-2}+6^{n-1}\\
&\;\vdots\\
&=2^na_0+\sum_{k=0}^{n-1}2^k6^{n-1-k}\\
&=6^{n-1}\sum_{k=0}^{n-1}\left(\frac13\right)^k\\
&=6^{n-1}\frac{1-(1/3)^n}{2/3}\\
&=\frac{2^{n-1}3^n-2^{n-1}}{2}\\
&=\frac{6^n-2^n}4\;.
\end{align*}$$
Alternatively, with perhaps just a little more cleverness $(1)$ can be expanded to
$$\begin{align*}
a_{n+1}&=4a_n+b_n+c_n\\
b_{n+1}&=4b_n+2a_n\\
c_{n+1}&=4c_n+2a_n\;,
\end{align*}\tag{4}$$
whence
$$\begin{align*}a_{n+1}&=4a_n+4b_{n-1}+2a_{n-1}+4c_{n-1}+2a_{n-1}\\
&=4a_n+4a_{n-1}+4(b_{n-1}+c_{n-1})\\
&=4a_n+4a_{n-1}+4(a_n-4a_{n-1})\\
&=8a_n-12a_{n-1}\;.
\end{align*}$$
This straightforward homogeneous linear recurrence (with initial conditions $a_0=0,a_1=1$) immediately yields the desired result.
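A brute-force check of the closed form for small $n$ (a quick sketch):

    from itertools import product

    def brute(n):
        return sum(1 for w in product("ABCDEF", repeat=n)
                   if w.count("A") % 2 == 0 and w.count("B") % 2 == 1)

    for n in range(1, 8):
        assert brute(n) == (6 ** n - 2 ** n) // 4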
|
State-space to transfer function I’m looking into MATLAB’s state-space functionality, and I found a peculiar relation that I don’t believe I’ve seen before, and I’m curious how one might obtain it. According to this documentation page, when converting a state-space system representation to its transfer function, the following well-known equality is used
$$H(s) = C(sI-A)^{-1}B$$
However, they go one step further and state that
$$H(s) = C(sI-A)^{-1}B = \frac{{\det}(sI-A+BC) - {\det}(sI-A)}{\det(sI-A)}$$
How did they manage to convert $C{\text {Adj}}(sI-A)B$ into ${\det}(sI-A+BC) - {\det}(sI-A)$?
As far as I understand, we can assume here that $B$ is a column vector and $C$ is a row vector, i.e. it’s a single-input / single-output relationship.
| They are using Sylvester's determinant identity (a close relative of the Sherman-Morrison formula), which I remember best in the form
$$
\det(I+MN) = \det(I+NM);
$$
this holds provided that both products $MN$ and $NM$ are defined. Note that if $M$ is
a column vector and $N$ a row vector, then $I+NM$ is a scalar. Now
$$\begin{align}
\det(sI-A+BC) &= \det\left((sI-A)\left(I+(sI-A)^{-1}BC\right)\right)\\ &= \det(sI-A)\det(I+(sI-A)^{-1}BC) \tag{1}.
\end{align}
$$
Applying the formula with $M=(sI-A)^{-1}B$ and $N=C$ yields that
$$
\det(I+(sI-A)^{-1}BC) = \det(I+C(sI-A)^{-1}B).
$$
If $B$ is a column vector and $C$ a row vector then the final determinant is equal to
$$
1 + C(sI-A)^{-1}B \tag{2}
$$
Plugging $(2)$ into $(1)$ gives
$$
\det(sI-A+BC) = \det(sI-A)\left( 1 + C(sI-A)^{-1}B \right)
$$
Shuffling terms gives the result.
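A quick numerical check of the identity on random single-input/single-output data (a sketch):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, 1))
    C = rng.standard_normal((1, n))
    s = 1.3 + 0.7j

    M = s * np.eye(n) - A
    lhs = (C @ np.linalg.solve(M, B)).item()
    rhs = (np.linalg.det(M + B @ C) - np.linalg.det(M)) / np.linalg.det(M)
    print(abs(lhs - rhs))   # ~1e-15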
|
Is this a valid function? I am stuck with the question below,
Say whether the given function is one to one. $A=\mathbb{Z}$, $B=\mathbb{Z}\times\mathbb{Z}$, $f(a)=(a,a+1)$
I am a bit confused about $f(a)=(a,a+1)$, there are two outputs $(a,a+1)$ for a single input $a$ which is against the definition of a function. Please help me out by expressing your review about the question. Thanks
| If $f\colon A\to B$, then the inputs of $f$ are elements of $A$, and the outputs of $f$ are elements of $B$, whatever the elements of $B$ may be.
If the elements of $B$ are sets with 17 elements each, then the outputs of the function will be sets with 17 elements each. If the elements of $B$ are books, then the output will be books.
Here, the set $B$ is the set whose elements are ordered pairs. So every output of $f$ must be an ordered pair. This is a single item, the pair (just like your address may consist of many words and numbers, but it's still a single address).
By the way: whether the function is one-to-one or not is immaterial here (and it seems to me, immaterial to your confusion...)
|
Distance between $N$ sets of reals of length $N$ Let's say, for the sake of the question, I have 3 sets of real numbers of varying length:
$$\{7,5,\tfrac{8}{5},\tfrac{1}{9}\},\qquad\{\tfrac{2}{7},4,\tfrac{1}{3}\},\qquad\{1,2,7,\tfrac{4}{10},\tfrac{5}{16},\tfrac{7}{8},\tfrac{9}{11}\}$$
Is there a way to calculate the overall distance these sets have from one another? Again, there are only 3 sets for the sake of the example, but in practice there could be $N$ sets as such:
$$\{A_1,B_1,C_1,\ldots,N_1\},\qquad \{A_2,B_2,C_2\ldots,N_2\},\;\ldots \quad\{A_n,B_n,C_n,\ldots,N_n\}$$
These sets of reals can be considered to be sets in a metric space. The distance needed is the shortest overall distance between these sets, similar to the Hausdorff distance, except rather than finding the longest distance between 2 sets of points, I am trying to find the shortest distance between $N$ sets of points.
| Let $E(r)$ be the set of all points at distance at most $r$ from the set $E$. This is called the closed $r$-neighborhood of $E$. The Hausdorff distance $d_H(E_1,E_2)$ is defined as the infimum of numbers $r$ such that $E_1\subseteq E_2(r)$ and $E_2\subseteq E_1(r)$. There are at least two reasonable ways to generalize $d_H(E_1,E_2)$ to $d_H(E_1,\dots,E_N)$:
1. $d_H(E_1,\dots,E_N)$ is the infimum of numbers $r$ such that $E_i\subseteq E_j(r)$ for all $i,j\in\{1,\dots,N\}$.
2. $d_H(E_1,\dots,E_N)$ is the infimum of numbers $r$ such that $E_i\subseteq \bigcup_{j\ne i}E_j(r)$ and $\bigcup_{j\ne i}E_j\subseteq E_i(r)$ for all $i$.
Both 1 and 2 recover the standard $d_H$ when $N=2$. Distance 1 is larger, and is somewhat less interesting because it's simply $\max_{i,j} d_H(E_i,E_j)$. Both distances turn into $0$ if and only if $E_1=\dots = E_N$.
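For finite sets of reals like those in the question, both variants are straightforward to compute; a sketch (names made up):

    def d_dir(E, F):
        # directed distance: how far a point of E can be from the set F
        return max(min(abs(x - y) for y in F) for x in E)

    def d1(sets):
        # variant 1: the largest pairwise Hausdorff distance
        return max(max(d_dir(E, F), d_dir(F, E))
                   for i, E in enumerate(sets) for F in sets[i + 1:])

    def d2(sets):
        # variant 2: each set against the union of all the others
        rest = lambda i: [x for j, F in enumerate(sets) if j != i for x in F]
        return max(max(d_dir(E, rest(i)), d_dir(rest(i), E))
                   for i, E in enumerate(sets))

    sets = [[7, 5, 8/5, 1/9], [2/7, 4, 1/3], [1, 2, 7, 0.4, 5/16, 7/8, 9/11]]
    print(d1(sets), d2(sets))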
|
The field of fractions of a field $F$ is isomorphic to $F$ Let $F$ be a field and let $\newcommand{\Fract}{\operatorname{Fract}}$ $\Fract(F)$ be the field of fractions of $F$; that is, $\Fract(F)= \{ {a \over b } \mid a \in F , b \in F \setminus \{ 0 \} \}$. I want to show that these two fields are isomorphic. I suggest this map
$$ F \to \Fract(F) \ ; \ a \mapsto {a\over a^{-1}} ,$$
for $a \neq 0$ and $0 \mapsto 0$, but this is not injective as $a$ and $-a$ map to the same image. I was thinking about the map
$ \Fract(F) \rightarrow F ;\; a/b\mapsto ab^{-1}$ and this is clearly injective. It is also surjective as $a/1 \mapsto a$. Is this the desired isomorphism?
| Let $F$ be a field and $Fract(F)=\{\frac{a}{b} \;|\; a\in F, b\in F, b\not = 0 \} $ modulo the equivalence relation $\frac{a}{b}\sim \frac{c}{d}\Longleftrightarrow ad=bc$. We exhibit a map that is a field isomorphism between $F$ and $Fract(F)$.
Every fraction field of an integral domain $D$ comes with a canonical ring homomorphism
$$\phi: D\rightarrow Fract(D);\; d\mapsto \frac{d}{1}$$
This map is clearly injective.
In the case $D$ is a field $F$, this canonical map is an isomorphism with inverse
$$Fract(F)\rightarrow F;\; {a\over b} \mapsto ab^{-1}$$
|
A deceiving Taylor series When we try to expand
$$
\begin{align}
f:&\mathbb R \to \mathbb R\\
&x \mapsto
\begin{cases}
\mathrm e^{-\large\frac 1{x^2}} &\Leftarrow x\neq 0\\
0 &\Leftarrow x=0
\end{cases}
\end{align}$$
in the Taylor series about $x = 0$, we wrongly conclude that $f(x) \equiv 0$, because the derivative of $f(x)$ of any order at this point is $0$. Therefore, $f(x)$ is not well-behaved for this procedure.
What conditions determine the good behavior of a function for the Taylor series expansion?
| To add to the other answers: in many cases one uses some simple sufficient (but not necessary) condition for analyticity. For example, any elementary function (polynomials, trigonometric functions, exponentials, logarithms) is analytic in any open subset of its domain; the same holds for compositions, sums, products, and reciprocals (except at the points where the original function is zero, which is what leaves out your example), etc.
|
Inner product space computation
If $x = (x_1,x_2)$ and $y = (y_1,y_2)$ show that $\langle x,y\rangle = \begin{bmatrix}x_1 & x_2\end{bmatrix}\begin{bmatrix}2 & -1 \\ 1 & 1\end{bmatrix}\begin{bmatrix}y_1 \\ y_2\end{bmatrix}$ defines an inner product on $\mathbb{R}^2$.
Are there any hints on this one? All I'm thinking of is computing a determinant, but what good is that?
| Use the definition of an inner product and check whether your function satisfies all the properties. Note that in general, as kahen pointed out in the comment, $\langle \mathbf{x}, \mathbf{y}\rangle = \mathbf{y}^*A\mathbf{x}$ defines an inner product on $\mathbb{C}^n$ iff $A$ is a positive-definite Hermitian $n \times n$ matrix.
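A minimal numerical sketch of that criterion in the real case (the function name is mine): it checks symmetry and positive definiteness via eigenvalues. Note that the matrix exactly as printed in the question is not symmetric, so the bilinear form $x^TAy$ it defines is not symmetric either; it may be worth re-checking the intended matrix (the symmetric variant $\begin{bmatrix}2&1\\1&1\end{bmatrix}$ does satisfy the criterion).

```python
# Sketch: x^T A y is an inner product on R^n iff A is symmetric positive definite.
import numpy as np

def is_inner_product_matrix(A, tol=1e-12):
    A = np.asarray(A, dtype=float)
    symmetric = np.allclose(A, A.T, atol=tol)
    # eigvalsh assumes a symmetric matrix, so only call it when that holds
    return symmetric and bool(np.all(np.linalg.eigvalsh(A) > tol))

print(is_inner_product_matrix([[2, -1], [1, 1]]))  # False: not symmetric
print(is_inner_product_matrix([[2, 1], [1, 1]]))   # True
```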
|
Prime reciprocals sum Let $a_i$ be a sequence of $1$'s and $2$'s and $p_i$ the prime numbers.
And let $r=\displaystyle\sum_{i=1}^\infty p_i^{-a_i}$
Can $r$ be rational, and can $r$ be any rational $> 1/2$, or any real?
ver.2:
Let $k$ be a positive real number and let $a_i$ be $1 +$ (the $i$'th digit in the binary expansion of $k$).
And let $r(k)=\displaystyle\sum_{n=1}^\infty n^{-a_n}$
Does $r(k)=x$ have a solution for every $x>\pi^2/6$, and how many ?
| The question with primes in the denominator:
The minimum that $r$ could possibly be is $C=\sum\limits_{i=1}^\infty\frac{1}{p_i^2}$. However, a sequence of $1$s and $2$s can be chosen so that $r$ can be any real number not less than $C$. Since $\sum\limits_{i=1}^\infty\left(\frac{1}{p_i}-\frac{1}{p_i^2}\right)$ diverges, consider the sum
$$
S_n=\sum_{i=1}^n b_i\left(\frac{1}{p_i}-\frac{1}{p_i^2}\right)
$$
where $b_i$ is $0$ or $1$. Choose $b_n=1$ while $S_{n-1}+\frac{1}{p_n}-\frac{1}{p_n^2}\le L-C$ and $b_n=0$ while $S_{n-1}+\frac{1}{p_n}-\frac{1}{p_n^2}>L-C$.
If we let $a_i=1$ when $b_i=1$ and $a_i=2$ when $b_i=0$, then
$$
\sum\limits_{i=1}^\infty\frac{1}{p_i^{a_i}}=\sum\limits_{i=1}^\infty\frac{1}{p_i^2}+\sum_{i=1}^\infty b_i\left(\frac{1}{p_i}-\frac{1}{p_i^2}\right)=C+(L-C)=L
$$
The question with non-negative integers in the denominator:
Changing $p_n$ from the $n^{th}$ prime to $n$ simply allows us to specify $C=\frac{\pi^2}{6}$. The rest of the procedure follows through unchanged. That is, choose any $L\ge C$ and let
$$
S_n=\sum_{i=1}^n b_i\left(\frac{1}{i}-\frac{1}{i^2}\right)
$$
where $b_i$ is $0$ or $1$. Choose $b_n=1$ while $S_{n-1}+\frac{1}{n}-\frac{1}{n^2}\le L-C$ and $b_n=0$ while $S_{n-1}+\frac{1}{n}-\frac{1}{n^2}>L-C$.
If we let $a_i=1$ when $b_i=1$ and $a_i=2$ when $b_i=0$, then
$$
\sum\limits_{n=1}^\infty\frac{1}{n^{a_n}}=\sum\limits_{n=1}^\infty\frac{1}{n^2}+\sum_{n=1}^\infty b_n\left(\frac{1}{n}-\frac{1}{n^2}\right)=C+(L-C)=L
$$
We don't need to worry about an infinite final sequence of $1$s in the binary number since that would map to a divergent series.
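The greedy choice of the $b_n$ is directly implementable. Here is a short numerical sketch of the second version (with $n$ rather than $p_n$ in the denominators); the target $L$ and cutoff $N$ are arbitrary choices of mine:

```python
# Greedy construction from the answer: C = pi^2/6, and we pick a_n in {1, 2}
# so that sum n^{-a_n} approaches a chosen L >= C.
from math import pi

def greedy_sum(L, N=10**6):
    C = pi**2 / 6
    S = 0.0                      # running value of sum b_n (1/n - 1/n^2)
    total = 0.0                  # running value of sum n^{-a_n}
    for n in range(1, N + 1):
        step = 1.0/n - 1.0/n**2
        if S + step <= L - C:
            S += step            # b_n = 1, i.e. a_n = 1
            total += 1.0/n
        else:
            total += 1.0/n**2    # b_n = 0, i.e. a_n = 2
    return total

print(greedy_sum(2.0))   # close to 2.0
```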
|
Difference between maximum and minimum? If I have a problem such as this:
We need to enclose a field with a fence. We have 500m of fencing material and a building is on one side of the field and so won’t need any fencing. Determine the dimensions of the field that will enclose the largest area.
This is obviously a maximization question; however, what would I do differently if I needed to minimize the area? Whenever I do these problems I just take the derivative, set it equal to zero and find $x$, and the answer always seems to appear. But what does one do differently to find the minimum versus the maximum?
| The Extreme Value Theorem guarantees that a continuous function on a finite closed interval has both a maximum and a minimum, and that the maximum and the minimum are each at either a critical point, or at one of the endpoints of the interval.
When trying to find the maximum or minimum of a continuous function on a finite closed interval, you take the derivative and set it to zero to find the stationary points. These are one kind of critical points. The other kind of critical points are the points in the domain at which the derivative is not defined.
The usual method to find the extreme you are looking for (whether it is a maximum or a minimum) is to determine whether you have a continuous function on a finite closed interval; if this is the case, then you take the derivative. Then you determine the points in the domain where the derivative is not defined; then the stationary points (points where the derivative is $0$). And then you plug in all of these points, and the endpoints of the interval, into the function, and you look at the values. The largest value you get is the maximum, the smallest value you get is the minimum.
The procedure is the same whether you are looking for the maximum or for the minimum. But if you are not regularly checking the endpoints, you will not always get the right answer, because the maximum (or the minimum) could be at one of the endpoints.
(In the case you are looking for, evaluating at the endpoints gives an area of $0$, so that's the minimum).
(If the domain is not finite and closed, things get more complicated. Often, considering the limit as you approach the endpoints (or the variable goes to $\infty$ or $-\infty$, whichever is appropriate) gives you information which, when combined with information about local extremes of the function (found also by using critical points and the first or second derivative tests), will let you determine whether you have local extremes or not. )
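For the concrete fence problem, the whole procedure (stationary points plus endpoint checks) takes only a few lines with sympy; a sketch, with my own setup of the area function:

```python
import sympy as sp

x = sp.symbols('x', nonnegative=True)   # length of each of the two equal sides
A = x * (500 - 2*x)                     # area; the building replaces the 4th side

stationary = sp.solve(sp.diff(A, x), x)              # [125]
candidates = stationary + [sp.Integer(0), sp.Integer(250)]
values = [(c, A.subs(x, c)) for c in candidates]
print(values)   # maximum 31250 at x = 125; minimum 0 at both endpoints
```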
|
Proof: If $f'=0$ then is $f$ is constant I'm trying to prove that if $f'=0$ then is $f$ is constant WITHOUT using the Mean Value Theorem.
My attempt [sketch of proof]: Assume that $f$ is not constant. Identify interval $I_1$ such that $f$ is not constant. Identify $I_2$ within $I_1$ such that $f$ is not constant. Repeat this and by the Nested Intervals Principle, there is a point $c$ within $I_n$ for any $n$ such that $f(c)$ is not constant... This is where I realized that my approach might be wrong. Even if it isn't I don't know how to proceed.
Thanks for reading and any help/suggestions/corrections would be appreciated.
| So we have to prove that $f'(x)\equiv0$ $\ (a\leq x\leq b)$ implies $f(b)=f(a)$, without using the MVT or the fundamental theorem of calculus.
Assume that an $\epsilon>0$ is given once and for all. As $f'(x)\equiv0$, for each fixed $x\in I:=[a,b]$ there is a neighborhood $U_\delta(x)$ such that
$$\Biggl|{f(y)-f(x)\over y-x}\Biggr|\leq\epsilon\qquad\bigl(y\in\dot U_\delta(x)\bigr)$$
($\delta$ depends on $x$). For each $x\in I\ $ put $U'(x):=U_{\delta/3}(x)$. Then the collection $\bigl(U'(x)\bigr)_{x\in I}$ is an open covering of $I$. Since $I$ is compact there exists a finite subcovering, and we may assume there is a finite sequence $(x_n)_{0\leq n\leq N}$ with
$$a=x_0<x_1<\ldots< x_{N-1}<x_N=b$$
such that $I\subset\bigcup_{n=0}^N\ U'(x_n)$. The $\delta/3$-trick guarantees that $$|f(x_n)-f(x_{n-1})|\leq \epsilon(x_n-x_{n-1}).$$ By summing up we therefore obtain the estimate $|f(b)-f(a)|\leq \epsilon(b-a)$, and as $\epsilon>0$ was arbitrary it follows that $f(b)=f(a)$.
|
What was the notation for functions before Euler? According to the Wikipedia article,
[Euler] introduced much of the modern mathematical terminology and notation, particularly for mathematical analysis, such as the notion of a mathematical function.
— Leonhard Euler, Wikipedia
What was the notation for functions before him?
| Let's observe an example :
$a)$ formal description of function (two-part notation)
$f : \mathbf{N} \rightarrow \mathbf{R}$
$n \mapsto \sqrt{n}$
$b)$ Euler's notation :
$f(n)=\sqrt{n}$
I don't know who introduced the two-part notation, but I think it must be older than Euler's notation, since it gives more information about the function and is therefore closer to the correct definition of a function than Euler's notation.
There is also a good Wikipedia article about notation for differentiation.
|
Upper bound for $-t \log t$ While reading Csiszár & Körner's "Information Theory: Coding Theorems for Discrete Memoryless Systems", I came across the following argument:
Since $f(t) \triangleq -t\log t$ is concave and $f(0) = 0$ and $f(1) = 0$, we have for every $0 \leq t \leq 1-\tau$, $0 \leq \tau \leq 1/2$,
\begin{equation}|f(t) - f(t+\tau)| \leq \max (f(\tau), f(1-\tau)) = -\tau \log\tau\end{equation}
I can't make any progress in seeing how this bound follows from the properties of $f(t)$. Any insights would be greatly appreciated.
| The function $g$ defined on the interval $I=[0,1-\tau]$ by $g(t)=f(t)-f(t+\tau)$ has derivative $g'(t)=-\log(t)+\log(t+\tau)$. This derivative is positive hence $g$ is increasing on $I$ from $g(0)=-f(\tau)<0$ to $g(1-\tau)=f(1-\tau)>0$. For every $t$ in $I$, $g(t)$ belongs to the interval $[-f(\tau),f(1-\tau)]$, in particular $|g(t)|\leqslant\max\{f(\tau),f(1-\tau)\}$.
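Before trusting the monotonicity argument one can also spot-check the inequality numerically on a grid; a small sketch (using the natural logarithm, though any fixed base rescales both sides identically):

```python
# Numerical spot check of |f(t) - f(t + tau)| <= -tau*log(tau) on a grid.
import numpy as np

def f(t):
    t = np.asarray(t, dtype=float)
    return np.where(t > 0, -t * np.log(np.where(t > 0, t, 1.0)), 0.0)

ok = True
for tau in np.linspace(0.01, 0.5, 50):
    t = np.linspace(0.0, 1.0 - tau, 2000)
    ok &= bool(np.all(np.abs(f(t) - f(t + tau)) <= -tau * np.log(tau) + 1e-12))
print(ok)   # True
```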
|
limit of $f$ and $f''$ exists implies limit of $f'$ is 0
Prove that if $\lim\limits_{x\to\infty}f(x)$ and $\lim\limits_{x\to\infty}f''(x)$ exist, then $\lim\limits_{x\to\infty}f'(x)=0$.
I can prove that $\lim\limits_{x\to\infty}f''(x)=0$. Otherwise $f'(x)$ goes to infinity and $f(x)$ goes to infinity, contradicting the fact that $\lim\limits_{x\to\infty}f(x)$ exists. I can also prove that if $\lim\limits_{x\to\infty}f'(x)$ exists, it must be 0. So it remains to prove that $\lim\limits_{x\to\infty}f'(x)$ exists. I'm stuck at this point.
| This is similar to a recent Putnam problem, actually. By Taylor's theorem with error term, we know that for any $x$,
$$
f(x+1) = f(x) + f'(x) + \tfrac12f''(t)
$$
for some $x\le t\le x+1$. Solve for $f'(x)$ and take limits....
|
$f'(x)-xf(x)=0$ has more roots than $f(x)=0$ Let $f(x)$ be a polynomial with real coefficients. Show that the equation $f'(x)-xf(x)=0$ has more roots than $f(x)=0$.
I saw the hint, nevertheless I can't prove it clearly. The hint is that $f(x)e^{-x^2/2}$ has a derivative $(f'(x)-xf(x))e^{-x^2/2}$, and use the Rolle's theorem.
My outline: I think that $f'-xf$ have zeros between distinct zeros of $f$, and if $f$ has a zero of multiplicity $k$, then $f'-xf$ has the same zero with multiplicity $k-1$. But how can I show that $f'-xf$ have zeros outside of zeros of $f$, i.e. $(-\infty,\alpha_1)$ and $(\alpha_n,\infty)$ where $\alpha_1$, $\alpha_n$ are the first, last zero of $f$ respectively?
| $f(x)e^{-x^2/2}$ is zero at $\alpha_1$, and tends to zero at $-\infty$. So it must have a zero derivative somewhere in $(-\infty,\alpha_1)$.
Edited to reply to Gobi's comment
You can use Rolle's theorem after a little work. Let us write $g(x)$ for $f(x)e^{-x^2/2}$. Take any point $t \in (-\infty,\alpha_1)$. Since $g(x)$ tends to zero at $-\infty$, there is a point $c < t$ such that $g(c) < g(t)/2$. Then by the Intermediate Value Theorem, there exist points $a \in (c,t)$ and $b \in (t,\alpha_1)$ such that $g(a) = g(b) = g(t)/2$. Now you can use Rolle's theorem on $(a,b)$.
|
ArcTan(2) a rational multiple of $\pi$? Consider a $2 \times 1$ rectangle split by a diagonal. Then the two angles
at a corner are ArcTan(2) and ArcTan(1/2), which are about $63.4^\circ$ and $26.6^\circ$.
Of course the sum of these angles is $90^\circ = \pi/2$.
I would like to know if these angles are rational multiples of $\pi$.
It doesn't appear that they are, e.g., $(\tan^{-1} 2 )/\pi$ is computed as
0.35241638234956672582459892377525947404886547611308210540007768713728\
85232139736632682857010522101960
to 100 decimal places by Mathematica. But is there a theorem that could be applied here to
prove that these angles are irrational multiples of $\pi$? Thanks for ideas and/or pointers!
(This question arose thinking about Dehn invariants.)
| Lemma: If $x$ is a rational multiple of $\pi$ then $2 \cos(x)$ is an algebraic integer.
Proof
$$\cos(n+1)x+ \cos(n-1)x= 2\cos(nx)\cos(x) \,.$$
Thus
$$2\cos(n+1)x+ 2\cos(n-1)x= 2\cos(nx)2\cos(x) \,.$$
It follows from here that $2 \cos(nx)= P_n (2\cos(x))$, where $P_n$ is a monic polynomial of degree $n$ with integer coefficients.
Actually $P_{n+1}=XP_n-P_{n-1}$ with $P_1(X)=X$ and $P_0(X)=2$.
Then, if $x$ is a rational multiple of $\pi$ we have $nx =2k \pi$ for some positive integer $n$ and integer $k$, and thus $P_n(2 \cos(x))=2\cos(nx)=2$; in particular, $2\cos(x)$ is a root of the monic integer polynomial $P_n(X)-2$, so it is an algebraic integer.
Now, coming back to the problem. If $\tan(x)=2$ then $\cos(x) =\frac{1}{\sqrt{5}}$. Suppose now by contradiction that $x$ is a rational multiple of $\pi$. Then $2\cos(x) =\frac{2}{\sqrt{5}}$ is an algebraic integer, and so is its square $\frac{4}{5}$. But this number is an algebraic integer and rational, thus an integer: contradiction.
P.S. If $\tan(x)$ is rational, and $x$ is a rational multiple of $\pi$, it follows exactly the same way that $\cos^2(x)$ is rational, thus $4 \cos^2(x)$ is algebraic integer and rational. This shows that $2 \cos(x) \in \{ 0, \pm 1, \pm 2 \}$.....
|
If a Fourier Transform is continuous in frequency, then what are the "harmonics"? The basic idea of a Fourier series is that you use integer multiples of some fundamental frequency to represent any time domain signal.
Ok, so if the Fourier Transform (Non periodic, continuous in time, or non periodic, discrete in time) results in a continuum of frequencies, then uh, is there no fundamental frequency / concept of using integer multiples of some fundamental frequency?
| In order to talk about a fundamental frequency, you need a fundamental period. But the Fourier transform deals with integrable functions ($L^1$, or $L^2$ if you go further in the theory) defined on the whole real line, and they are not periodic (except the zero function).
|
Sequential continuity for quotient spaces Sequential continuity is equivalent to continuity in a first countable space $X$. Look at the quotient projection $g:X\to Y$ to the space of equivalence classes of an equivalence relation with the quotient topology and a map $f:Y\to Z$. I want to test if $f$ is continuous.
Can I do this by showing $$\lim_n\;f(g(x_n))=f(g(\lim_n\;x_n))\;?$$ Can I do this at least if $X$ is a metric space? Can I do this by showing $$\lim_n\;f(g(x_n))=f(\lim_n\;g(x_n))\;?$$
| You certainly can’t do it in general if $X$ isn’t a sequential space, i.e., one whose structure is completely determined by its convergent sequences. $X$ is sequential iff it’s the quotient of a metric space, and the composition of two quotient maps is a quotient map, so if $X$ is sequential, $Y$ is also sequential, therefore $f$ is continuous iff $\lim\limits_n\;f(y_n) = f(\lim\limits_n\;y_n)$ for every convergent sequence $\langle y_n:n\in\omega\rangle$ in $Y$.
However, you can’t always pull this back to $X$.
Edit (24 April 2016): Take $X=[0,1]$, and let $Y$ be the quotient obtained by identifying $0$ and $1$ to a point $p$, $g$ being the quotient map. Suppose that $f:Y\to Z$, and you want to test the continuity of $f$. For $n\in\mathbb{Z}^+$ let
$$x_n=\begin{cases}
\frac2{n+4},&\text{if }n\text{ is even}\\
\frac{n+1}{n+3},&\text{if }n\text{ is odd}\;,
\end{cases}$$
so that
$$\langle x_n:n\in\Bbb Z^+\rangle=\left\langle\frac12,\frac13,\frac23,\frac14,\frac34,\frac15,\frac45,\ldots\right\rangle\;.$$
Then $\langle g(x_n):n\in\mathbb{Z}^+\rangle$ converges to $p$ in $Y$, so you need to test whether $\langle f(g(x_n)):n\in\mathbb{Z}^+\rangle$ converges to $f(p)$ in $Z$, but you can’t do this by asking whether $$\lim_n\;f(g(x_n))=f(g(\lim_n\;x_n))\;,$$ because $\langle x_n:n\in\mathbb{Z}^+\rangle$ isn’t convergent in $X$. Thus, the answer to your first question is no even if $X$ is metric.
(This replaces a flawed example with one that actually works, borrowed from this answer by Eric Wofsey.)
End edit.
The answer to your second question, however, is yes. Let $\langle y_n:n\in\omega\rangle$ be a convergent sequence in $Y$ with limit $y$. Then there are $x_n\in X$ such that $y_n = g(x_n)$ for $n\in\omega$, so checking that $$\lim_n\;f(y_n)=f(y)$$ is checking that $$\lim_n\;f(g(x_n))=f(\lim_n\;g(x_n))\;.$$ Note, though, that you have to check all sequences in $X$, not just convergent ones.
|
Show $X_n {\buildrel p \over \rightarrow} X$ and $X_n \le Z$ a.s., implies $X \le Z$ a.s. Suppose $X_n {\buildrel p \over \rightarrow} X$ and $X_n \le Z,\forall n \in \mathbb{N}$. Show $X \le Z$ almost surely.
I've tried the following, but I didn't succeed.
By the triangle inequality, $X=X-X_n+X_n \le |X_n-X|+|X_n|$. Hence, $P(X \le Z) \le P(|X_n-X| \le Z) + P(|X_n| \le Z)$. I know that, since $X_n {\buildrel p \over \rightarrow} X$ then $P(|X_n-X| \le Z) \to 1$, and we have $P( |X_n| \le Z)=1$.
I can't go further.
| $X_n {\buildrel p \over \rightarrow} X$ implies that there is a subsequence $X_{n(k)}$ with $X_{n(k)}\to X$ almost surely. Since $X_{n(k)} \le Z$ almost surely for every $k$, passing to the limit along this subsequence gives $X \le Z$ almost surely.
|
Taking the derivative of $\frac1{x} - \frac1{e^x-1}$ using the definition Given $f$:
$$
f(x) = \begin{cases}
\frac1{x} - \frac1{e^x-1} & \text{if } x \neq 0 \\
\frac1{2} & \text{if } x = 0
\end{cases}
$$
I have to find $f'(0)$ using the definition of derivative (i.e., limits). I already know how to differentiate and stuff, but I still can't figure out how to solve this. I know that I need to begin like this:
$$
f'(0) = \lim_{h \to 0} \frac{f(h)-f(0)}{h} = \lim_{h \to 0} \frac{\frac1{h} - \frac1{e^h-1}-\frac1{2}}{h}
$$
But I don't know how to do this. I feel like I should, but I can't figure it out. I tried distributing the denominator, I tried l'Hôpital's but I get $0$ as the answer, while according to what my prof gave me (this is homework) it should be $-\frac1{12}$. I really don't know how to deal with these limits; could someone give me a few tips?
| Hmm, another approach, which seems simpler to me. However I'm not sure whether it is formally correct, so possibly someone else can also comment on this.
The key here is that the expression $\small {1 \over e^x-1 } $ is a very well known generation function for the bernoulli-numbers
$$\small {1 \over e^x-1 } =
x^{-1} - 1/2 + {1 \over 12} x - {1 \over 720} x^3 + {1 \over 30240 }x^5 + O(x^7) $$
from where we can rewrite
$$\small \frac1x - {1 \over e^x-1 } = 1/2 - {1 \over 12} x + {1 \over 720} x^3 - {1 \over 30240 }x^5 + O(x^7) \qquad \text{ for } x \ne 0 $$
and because, by the second case of the definition, $\small f(x)=\frac12$ at $x=0$, that power series is the analytic expression for $f$ both at and near that point.
Then the derivative can be taken termwise:
$$\small (\frac1x - {1 \over e^x-1 })' = - {1 \over 12} + {3 \over 720} x^2 - {5 \over 30240 }x^4 + O(x^6) $$
and equals $\small -\frac1{12} $ at $x=0$.
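For anyone who wants to double-check this value without trusting the series manipulation, sympy confirms it directly from the difference quotient:

```python
import sympy as sp

h = sp.symbols('h')
f = 1/h - 1/(sp.exp(h) - 1)                        # f(h) for h != 0; f(0) = 1/2
print(sp.limit((f - sp.Rational(1, 2)) / h, h, 0))  # -1/12
```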
|
A question about composition of trigonometric functions A little something I'm trying to understand:
$\sin(\arcsin{x})$ is always $x$, but $\arcsin(\sin{x})$ is not always $x$
So my question is simple - why?
Since each cancels the other, it would make sense that $\arcsin(\sin{x})$ would always
result in $x$.
I'd appreciate any explanation. Thanks!
| It is a result of trying to derive an inverse function for a non-bijective one. Let $f:X\to Y$ be some function, i.e. for each $x\in X$ we have $f(x)\in Y$. If $f$ is not a bijection then we cannot find a function $g:Y\to X$ such that $g(f(x)) = x$ for all $x\in X$ and $f(g(y)) =y$ for all $y\in Y$.
Consider your example, $f = \sin:\mathbb R\to\mathbb R$. It is neither a surjection (since $\sin (\mathbb R) = [-1,1]$) nor an injection (since $\sin x = \sin(x+2\pi k)$ for all $k\in \mathbb Z$). As a result you cannot say that $\sin$ has an inverse. On the other hand, if you consider the restriction $f^* = \sin|_{X}$ with $X = [-\pi/2,\pi/2]$ and the codomain $Y = [-1,1]$, then $f^*$ has an inverse since
$$
\sin|_{[-\pi/2,\pi/2]}:[-\pi/2,\pi/2]\to[-1,1]
$$
is a bijection. As a result you obtain a function $\arcsin:[-1,1]\to [-\pi/2,\pi/2]$ which is the inverse for $f^* = \sin|_{[-\pi/2,\pi/2]}$.
In particular it means that $\sin (\arcsin{y})=f^*(\arcsin{y}) = y$ for all $y\in[-1,1]$. On the other hand, if you take $x = \frac{\pi}2+2\pi$ then $\sin x = 1$ and hence $\arcsin(\sin{x}) = \frac{\pi}2\neq x$. More precisely, $\arcsin$ is the partial inverse of $\sin$, with the properties described above.
|
Variance over two periods with known variances? If period 1 has variance v1 and period 2 has variance v2, what is the variance over period 1 and period 2? (period 1 and period 2 are the same length)
I've done some manual calculations with random numbers, and I can't seem to figure out how to calculate the variance over period 1 and period 2 from v1 and v2.
| If you only know the variances of your two sets, you can't compute the variance of the union of the two. However, if you know both the variances and the means of two sets, then there is a quick way to calculate the variance of their union.
Concretely, say you have two sets $A$ and $B$ for which you know the means $\mu_A$ and $\mu_B$ and variances $\sigma^2_A$ and $\sigma^2_B$, as well as the sizes of each set $n_A$ and $n_B$. You want to know the mean $\mu_X$ and variance $\sigma^2_X$ of the union $X=A\cup B$ of the two sets (assuming that the union is disjoint, i.e. that $A$ and $B$ don't have any elements in common).
With a little bit of scribbling and algebra, you can reveal that
$$\mu_X = \frac{n_A\mu_A + n_B\mu_B}{n_A+n_B}$$
and
$$\sigma^2_X = \frac{n_A\sigma^2_A + n_B\sigma^2_B}{n_A + n_B} + \frac{n_An_B}{(n_A+n_B)^2} (\mu_A - \mu_B)^2 $$
As pointed out in the answer by tards, the formula for the variance of the combined set depends explicitly on the means of the sets $A$ and $B$, not just on their variances. Moreover, you can see that adding a constant to one of the sets doesn't change the first term (the variances remain the same) but it does change the second term, because one of the means changes.
The fact that the dependence on the means enters through a term of the form $\mu_A-\mu_B$ shows you that if you added the same constant to both sets, then the overall variance would not change (just as you'd expect) because although both the means change, the effect of this change cancels out. Magic!
|
The Mathematics of Tetris I am a big fan of the old-school games and I once noticed that there is a sort of parity associated to one and only one Tetris piece, the $\color{purple}{\text{T}}$ piece. This parity is found with no other piece in the game.
Background: The Tetris playing field has width $10$. Rotation is allowed, so there are then exactly $7$ unique pieces, each of which is composed of $4$ blocks.
For convenience, we can name each piece by a letter. See this Wikipedia page for the Image ($\color{cyan}{\text{I}}$ is for the stick piece, $\color{goldenrod}{\text{O}}$ for the square, and $\color{green}{\text{S}},\color{purple}{\text{T}},\color{red}{\text{Z}},\color{orange}{\text{L}},\color{blue}{\text{J}}$ are the others)
There are $2$ sets of $2$ pieces which are mirrors of each other, namely $\color{orange}{\text{L}}, \color{blue}{\text{J}}$ and $\color{green}{\text{S}},\color{red}{\text{Z}}$ whereas the other three are symmetric $\color{cyan}{\text{I}},\color{goldenrod}{\text{O}}, \color{purple}{\text{T}}$
Language: If a row is completely full, that row disappears. We call it a perfect clear if no blocks remain in the playing field. Since each piece consists of $4$ blocks and the playing field has width $10$, the number of pieces used in a perfect clear must always be a multiple of $5$.
My Question: I noticed while playing that the $\color{purple}{\text{T}}$ piece is particularly special. It seems that it has some sort of parity which no other piece has. Specifically:
Conjecture: If we have played some number of pieces, and we have a perfect clear, then the number of $\color{purple}{\text{T}}$ pieces used must be even. Moreover, the $\color{purple}{\text{T}}$ piece is the only piece with this property.
I have verified the second part; all of the other pieces can give a perfect clear with either an odd or an even number used. However, I am not sure how to prove the first part. I think that assigning some kind of invariant to the pieces must be the right way to go, but I am not sure.
Thank you,
| My colleague, Ido Segev, pointed out that there is a problem with most of the elegant proofs here - Tetris is not just a problem of tiling a rectangle.
Below is his proof that the conjecture is, in fact, false.
|
A book asks me to prove a statement but I think it is false The problem below is from Cupillari's Nuts and Bolts of Proofs.
Prove the following statement:
Let $a$ and $b$ be two relatively prime numbers. If there exists an
$m$ such that $(a/b)^m$ is an integer, then $b=1$.
My question is: Is the statement true?
I believe the statement is false because there exists an $m$ such that $(a/b)^m$ is an integer, and yet $b$ does not have to be $1$. For example, let $m=0$. In this case, $(a/b)^0=1$ is an integer as long as $b \neq 0$.
So I think the statement is false, but I am confused because the solution at the back of the book provides a proof that the statement is true.
| Your counterexample is valid. But the statement is true if $m$ is required to be a positive natural number or positive integer.
Alternatively, note it's not true if $m$ is allowed to be negative.
In my opinion, it seems like you were supposed to assume $m>0$.
|
Crafty solutions to the following limit The following problem came up at dinner, I know some ways to solve it but they are quite ugly and as some wise man said: There is no place in the world for ugly mathematics.
These methods use l'Hôpital's rule, which becomes quite hideous very quickly, or series expansions.
So I'm looking for slick solutions to the following problem:
Compute $\displaystyle \lim_{x \to 0} \frac{\sin(\tan x) - \tan(\sin x)}{\arcsin(\arctan x) - \arctan(\arcsin x)}$.
I'm curious what you guys will make of this.
| By Taylor's series we obtain that
*
*$\sin (\tan x)= \sin\left(x + \frac13 x^3 + \frac2{15}x^5+ \frac{17}{315}x^7+O(x^9) \right)=x + \frac16 x^3 -\frac1{40}x^5 - \frac{55}{1008}x^7+O(x^9)$
and similarly
*
*$\tan (\sin x)= x + \frac16 x^3 -\frac1{40}x^5 - \frac{107}{5040}x^7+O(x^9)$
*$\arcsin (\arctan x)= x - \frac16 x^3 +\frac{13}{120}x^5 - \frac{341}{5040}x^7+O(x^9)$
*$\arctan (\arcsin x)= x - \frac16 x^3 +\frac{13}{120}x^5 - \frac{173}{5040}x^7+O(x^9)$
which leads to the result: both the numerator and the denominator are $-\frac{1}{30}x^7+O(x^9)$, so the limit equals $1$.
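The same computation can be reproduced with sympy, which confirms that the limit is $1$; a sketch:

```python
import sympy as sp

x = sp.symbols('x')
num = sp.series(sp.sin(sp.tan(x)) - sp.tan(sp.sin(x)), x, 0, 9).removeO()
den = sp.series(sp.asin(sp.atan(x)) - sp.atan(sp.asin(x)), x, 0, 9).removeO()
print(num, den)                 # both leading terms are -x**7/30
print(sp.limit(num/den, x, 0))  # 1
```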
|
How strongly does a matrix distort angles? How strongly does it distort lengths anisotropically? Let there be given a square matrix $M \in \mathbb R^{N\times N}$. I would like to have some kind of measure of how far it
*
*Distorts angles between vectors
*Stretches and squeezes along different directions.
While I am fine with $M = S \cdot Q$, with $S$ being a positive multiple of the identity and $Q$ being an orthogonal matrix, I would like to measure how far $M$ deviates from this form. In more visual terms, I would like a numerical expression measuring to what extent a set fails to be similar to its image under $M$.
What I have in mind is a numerical measure, just like, e.g., the determinant of $M$ measures the volume of the image of a cube under the transform by $M$. Can you help me?
| Consider the singular value decomposition of the matrix.
Look at the singular values. These tell you how the unit sphere is stretched or squeezed by the matrix along the directions corresponding to the singular vectors.
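As a concrete illustration (the function name is mine), the ratio of the largest to the smallest singular value is one natural scalar measure: it equals $1$ exactly when $M$ is a positive multiple of an orthogonal matrix, and grows with the anisotropic distortion:

```python
import numpy as np

def anisotropy(M):
    """Ratio of extreme singular values: 1 iff M = S*Q (scaled rotation)."""
    s = np.linalg.svd(M, compute_uv=False)   # singular values, descending
    return s[0] / s[-1]

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))     # a random orthogonal matrix
print(anisotropy(2.5 * Q))                       # ~1.0: pure similarity
print(anisotropy(np.diag([3.0, 1.0, 0.5])))      # 6.0: strong distortion
```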
|
Decoding and correcting $(1,0,0,1,0,0,1)$ Hamming$ (7,4)$ code Correct any error and decode $(1,0,0,1,0,0,1)$ encoded using Hamming $(7,4)$ assuming at most one error. The message $(a,b,c,d)$ is encoded $(x,y,a,z,b,c,d)$
The solution states $H_m = (0,1,0)^T$ which corresponds to the second column in the Standard Hamming (7,4) matrix which means the second digit, 0, is wrong and the corrected code is $(1,1,0,1,0,0,1)$. The resulting code becomes $(0,0,0,1)$
My question is: How do I get $H_m$?
| The short answer is that you get the syndrome $H_m$ by multiplying the received vector $r$ with the parity check matrix: $H_m=H r^T$.
There are several equivalent parity check matrices for this Hamming code, and you haven't shown us which is the one your source uses. The bits that you did give hint at the possibility that the check matrix could be
$$
H=\pmatrix{1&0&1&0&1&0&1\cr0&1&1&0&0&1&1\cr0&0&0&1&1&1&1\cr}.
$$
To be fair, that is one of the more popular choices, and also fits your received word / syndrome pair :-). The reason I'm nitpicking about this is that any one of the $7!=5040$ permutations of the columns would work equally well for encoding/decoding. Being an algebraist at heart, I often prefer an ordering that exhibits the cyclic nature of the code better. Somebody else might insist on an ordering the matches with systematic encoding. Doesn't matter much!
Here your received vector $r=(1,0,0,1,0,0,1)$ has bits on at positions 1, 4 and 7, so the syndrome you get is
$$
H_m=H r^T=\pmatrix{1\cr 0\cr0\cr}+\pmatrix{0\cr 0\cr1\cr}+\pmatrix{1\cr 1\cr1\cr}=\pmatrix{0\cr 1\cr0\cr}
$$
the modulo two sum of the first, fourth and seventh columns of $H$.
If $r$ were a valid encoded message, then the syndrome $H_m$ would be the zero vector. As that was not the case here, an error (one or more) has occurred. The task of the decoder is to find the most likely error, and the usual assumptions lead us to the goal of toggling as few bits of $r$ as possible.
The nice thing about the Hamming code is that we can always do this by toggling at most one bit. We identify $H_m$ as the second column of $H$, so to make the syndrome zero by correcting a single bit we need to toggle the second bit of $r$.
What makes the Hamming code tick is that all possible non-zero syndrome vectors occur as columns of $H$. Therefore we always meet our goal of making the syndrome zero by toggling (at most) a single bit of any received word.
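The whole decoding step fits in a few lines of code. Here is a sketch using the check matrix $H$ written out above:

```python
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def correct(r):
    r = np.array(r)
    s = H @ r % 2                                   # syndrome H r^T over GF(2)
    if s.any():                                     # nonzero: find matching column
        col = next(j for j in range(7) if np.array_equal(H[:, j], s))
        r[col] ^= 1                                 # toggle the offending bit
    return r

c = correct([1, 0, 0, 1, 0, 0, 1])
print(c)                 # [1 1 0 1 0 0 1]
print(c[[2, 4, 5, 6]])   # the message bits (a, b, c, d) = [0 0 0 1]
```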
|
How can one prove that $\sqrt[3]{\left ( \frac{a^4+b^4}{a+b} \right )^{a+b}} \geq a^ab^b$, $a,b\in\mathbb{N^{*}}$? How can one prove that $\sqrt[3]{\left ( \frac{a^4+b^4}{a+b} \right )^{a+b}} \geq a^ab^b$, $a,b\in\mathbb{N^{*}}$?
| Since $\log(x)$ is concave,
$$
\log\left(\frac{ax+by}{a+b}\right)\ge\frac{a\log(x)+b\log(y)}{a+b}\tag{1}
$$
Rearranging $(1)$ and exponentiating yields
$$
\left(\frac{ax+by}{a+b}\right)^{a+b}\ge x^ay^b\tag{2}
$$
Plugging $x=a^3$ and $y=b^3$ into $(2)$ gives
$$
\left(\frac{a^4+b^4}{a+b}\right)^{a+b}\ge a^{3a}b^{3b}\tag{3}
$$
and $(3)$ is the cube of the posited inequality.
From my comment (not using concavity):
For $0<t<1$, the minimum of $t+(1-t)u-u^{1-t}$ occurs when $(1-t)-(1-t)u^{-t}=0$; that is, when $u=1$. Therefore, $t+(1-t)u-u^{1-t}\ge0$. If we set $u=\frac{y}{x}$ and $t=\frac{a}{a+b}$, we get
$$
\frac{ax+by}{a+b}\ge x^{a/(a+b)}y^{b/(a+b)}\tag{4}
$$
Inequality $(2)$ is simply $(4)$ raised to the $a+b$ power.
|
Convergence of $b_n=|a_n| + 1 - \sqrt {a_n^2+1}$ and $b_n = \frac{|a_n|}{1+|a_{n+2}|}$ here's my daily problem:
1) $b_n=|a_n| + 1 - \sqrt {a_n^2+1}$. I have to prove that, if $b_n$ converges to 0, then $a_n$ converges to 0 too. Here's what I have done; could someone please check if this is correct? I'm always afraid of squaring both sides.
$
\begin{align*}
0&=|a_n| + 1 - \sqrt {a_n^2+1}\\
& -|a_n| = 1 - \sqrt {a_n^2+1}\\
& a_n^2 = 1 - 2 *\sqrt {a_n^2+1} + a_n^2 + 1\\
& 2 = 2 *\sqrt {a_n^2+1}\\
& 1 = \sqrt {a_n^2+1}\\
& 1 = {a_n^2+1}\\
& a_n^2 = 0 \Rightarrow a_n=0\\
\end{align*}$
2) $b_n = \frac{|a_n|}{1+|a_{n+2}|}$
I have to prove the following statement is false with an example: "If $b_n$ converges to 0, then $a_n$ too."
I'm pretty lost here; any directions are welcome! I thought it would only converge to 0 if $a_n=0$. Maybe if $a_n >>> a_{n+2}$?
Thanks in advance! :)
| You seem to have a fundamental misconception regarding the difference between the limit of a sequence and an element of a sequence.
When we say $b_n$ converges to $0$, it does not mean $b_n = 0$ for all $n$. For instance $b_n = \frac{1}{n}$ is convergent to $0$, but there is no natural number $n$ for which $b_n = 0$.
In i) What you tried is ok (though the misconception above shows in the way you have written it), but are making some assumptions which need to be proved.
If we were to rewrite what you wrote, it would be something like,
If $a_n$ was convergent to $L$, then we would have that
$ 0 = |L| + 1 - \sqrt{L^2 + 1}$
and then the algebra shows that $L = 0$.
Using $a_n$ instead of $L$ makes what you wrote nonsensical.
Also, can you tell what assumption is being made here and needs justification?
For ii) Try constructing a sequence such that $\frac{a_{n+2}}{a_n} \to \infty$.
|
How to prove such a simple inequality? Let $f\in C[0,\infty)\cap C^1(0,\infty)$ be an increasing convex function with $f(0)=0$, $\lim_{t\to\infty}\frac{f(t)}{t}=+\infty$, and $\frac{df}{dt} \ge 1$.
Then there exist constants $C$ and $T$ such that for any $t\in [T,\infty)$, $\frac{df}{dt}\le Ce^{f(t)}.$
Is it correct? If the conditions are not enough, please add some condition and prove it. Thank you
| It is false, as can be seen by the following proof by contradiction. Suppose it is true, and such a $C$ and $T$ exist. Then consider the following sequence of functions: $f_n(t)=n(t-T) + 1+2T$ for $t\geq T$, extended smoothly to $t < T$ while keeping $f_n > 1$ and $f_n'\geq 1$. Then we would have $f_n'(T) \leq Ce^{f_n(T)}$, with $f_n'(T)=n$ and $e^{f_n(T)}=e^{1+2T}$; hence the inequality would say that $n \leq Ce^{1+2T}$, i.e. $C \geq ne^{-(1+2T)}$, which is a contradiction as $n\to\infty$.
NOTE: I assumed that you do not want the constants to depend on $f$ (as this would change the problem).
|
How would I evaluate this limit? I have no idea how to evaluate this limit. Wolfram gives $0$, and I believe this, but I would like to see how it is done. The limit is
$$\lim_{n\rightarrow\infty}\frac{x^n}{(1+x)^{n-1}}$$
assuming $x$ is positive. Thanks in advance.
| $\frac{x^n}{(1+x)^{n-1}} = \frac{x^n(1+x)}{(1+x)^{n}} = (\frac{x}{1+x})^{n}(1+x) = (\frac{x+1-1}{1+x})^{n}(1+x) = (1 - \frac{1}{1+x})^{n}(1+x)$
So taking the limit:
$\lim_{n\to\infty} \frac{x^n}{(1+x)^{n-1}} = \lim_{n\to\infty} (1 - \frac{1}{1+x})^{n}(1+x) = (1+x) \cdot \lim_{n\to\infty} (1 - \frac{1}{1+x})^{n}$
Since $x>0$ we know $1 - \frac{1}{1+x} < 1$, therefore $\lim_{n\to\infty} (1 - \frac{1}{1+x})^{n} = 0$, giving us $(1+x)\cdot 0 = 0$
|
Solutions to Linear Diophantine equation $15x+21y=261$ Question
How many positive solutions are there to $15x+21y=261$?
What I got so far
$\gcd(15,21) = 3$ and $3|261$
So we can divide through by the gcd and get:
$5x+7y=87$
And I'm not really sure where to go from this point. In particular, I need to know how to tell how many solutions there are.
| 5, 7, and 87 are small enough numbers that you could just try all the possibilities. Can you see, for example, that $y$ can't be any bigger than 12?
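Following that hint, a brute-force scan over $1\le y\le 12$ settles the count immediately; a tiny sketch:

```python
solutions = []
for y in range(1, 13):          # 7y <= 87 forces y <= 12
    rest = 87 - 7 * y
    if rest > 0 and rest % 5 == 0:
        solutions.append((rest // 5, y))
print(solutions)   # [(16, 1), (9, 6), (2, 11)] -> three positive solutions
```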
|
Help remembering a Putnam Problem I recall that there was a Putnam problem which went something like this:
Find all real functions satisfying
$$f(s^2+f(t)) = t+f(s)^2$$
for all $s,t \in \mathbb{R}$.
There was a cool trick to solving it that I wanted to remember. But I don't know which test it was from and google isn't much help for searching with equations.
Does anyone know which problem I am thinking of so I can look up that trick?
| No idea. But I have a book called Putnam and Beyond by Gelca and Andreescu, and on page 185 they present a problem from a book called Functional Equations: A Problem Solving Approach by B. J. Venkatachala, from Prism Books PVT Ltd., 2002. I think the Ltd. means the publisher is British.
Almost, the publisher is (or was?) in India (Bangalore):
http://www.prismbooks.com/
http://www.hindbook.com/order_info.php
EDIT, December 3, 2011: The book is available, at least, from an online firm in India that is similar to Amazon.com
http://www.flipkart.com/m/books/8172862652
on
http://www.flipkart.com/
I cannot tell whether they ship outside India. But it does suggest that contacting the publisher by email is likely to work.
|
the name of a game I saw a two-player game described the other day and I was just wondering if it had an official name. The game is played as follows: You start with an $m \times n$ grid, and on each node of the grid there is a rock. On your turn, you point to a rock. The rock and all other rocks "northeast" of it are removed. In other words, if you point to the rock at position $(i,j)$ then any rock at position $(r,s)$ where $r$ and $s$ satisfy $1 \leq r \leq i$ and $j \leq s \leq n$ is removed. The loser is the person who takes the last rock(s).
| This game is called Chomp.
|
Finding the Laurent expansion of $\frac{1}{\sin^3(z)}$ on $0<|z|<\pi$? How do you find the Laurent expansion of $\frac{1}{\sin^3(z)}$ on $0<|z|<\pi$? I would really appreciate someone carefully explaining this, as I'm very confused by this general concept! Thanks
| Use this formula:
$$\sum _{k=1}^{\infty } (-1)^{3 k} \left(-\frac{x^3}{\pi ^3 k^3 (\pi k-x)^3}-\frac{x^3}{\pi ^3 k^3 (\pi k+x)^3}+\frac{3 x^2}{\pi ^2 k^2 (\pi k-x)^3}-\frac{3 x^2}{\pi ^2 k^2 (\pi k+x)^3}-\frac{x^3}{2 \pi k (\pi k-x)^3}-\frac{x^3}{2 \pi k (\pi k+x)^3}+\frac{x^2}{(\pi k-x)^3}-\frac{x^2}{(\pi k+x)^3}-\frac{\pi k x}{2 (\pi k-x)^3}-\frac{3 x}{\pi k (\pi k-x)^3}-\frac{\pi k x}{2 (\pi k+x)^3}-\frac{3 x}{\pi k (\pi k+x)^3}\right)+\frac{1}{x^3}+\frac{1}{2 x}=\csc ^3(x)$$
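Alternatively, the Laurent coefficients can be computed directly with sympy, which also serves as a check on the formula above:

```python
import sympy as sp

z = sp.symbols('z')
print(sp.series(1 / sp.sin(z)**3, z, 0, 4))
# z**(-3) + 1/(2*z) + 17*z/120 + O(z**3)
```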
|
How can I prove $[0,1]\cap\operatorname{int}{(A^{c})} = \emptyset$?
If $A \subset [0,1]$ is the union of open intervals $(a_{i}, b_{i})$ such that each
rational number in $(0, 1)$ is contained in some $(a_{i}, b_{i})$,
show that the boundary $\partial A= [0,1] - A$. (Spivak, Calculus on
Manifolds)
If I prove that $[0,1]\cap\operatorname{int}{(A^{c})} = \emptyset$, the proof is complete.
I tried to find a contradiction, but I didn't find one.
| Since you say you want to prove it by contradiction, here we go:
Suppose that $x\in \mathrm{int}(A^c)\cap [0,1]$. Then there is an open interval $(r,s)$ such that $x\in (r,s)\subseteq A^c$. Every open interval contains infinitely many rational numbers, so there are lots and lots of rationals in $(r,s)$. However, since $A$ contains all rationals in $(0,1)$, the only rationals that can be in $(r,s)$ are $0$, $1$, and rationals that are either negative or greater than $1$.
Since $x\in [0,1]$, the only possibility is $x=0$ or $x=1$ (there's a small argument to be made here; I'll leave it to you). Why can we not have $x=0$? Well, if $x=0$, then $s\gt 0$, so $(r,s)$ contains $[0,\min\{s,1\})$. Are there any rationals between $0$ and $1$ that are in $[0,\min\{s,1\})$?
And what happens if $x=1$?
|
Arithmetic error in Feller's Introduction to Probability? In my copy of An Introduction to Probability by William Feller (3rd ed, v.1), section I.2(b) begins as follows:
(b) Random placement of r balls in n cells. The more general case of [counting the number of ways to put] $r$ balls in $n$ cells can be studied in the same manner, except that the number of possible arrangements increases rapidly with $r$ and $n$. For $r=4$ balls in $n=3$ cells, the sample space contains already 64 points ...
This statement seems incorrect to me. I think there are $3^4 = 81$ ways to put 4 balls in 3 cells; you have to choose one of the three cells for each of the four balls. Feller's answer of 64 seems to come from $4^3$. It's clear that one of us has made a very simple mistake.
Who's right, me or Feller? I find it hard to believe the third edition of a universally-respected textbook contains such a simple mistake, on page 10 no less. Other possible explanations include:
(1) My copy, a cheap-o international student edition, is prone to such errors and the domestic printings don't contain this mistake.
(2) I'm misunderstanding the problem Feller was examining.
| Assuming sampling with replacement, there are four possible balls for cell 1, four possible balls for cell 2, and four possible balls for cell 3. So there are $4^3=64$ possibilities.
(Assuming sampling without replacement, there are four possible balls for cell 1, three for cell 2, and two for cell 3, for a total of $4\cdot3\cdot2=24$.)
So it seems the original Feller was correct.
|
Why are Gram points for the Riemann zeta important? Given the Riemann-Siegel function, why are the Gram points important? I say if we have $S(T)$, the oscillating part of the zeros, then given a Gram point and the imaginary part of the zeros (under the Riemann Hypothesis), are the Gram points near the imaginary part of the Riemann zeros?
I ask whether the difference $ |\gamma _{n}- g_{n} | $ is regulated by the imaginary part of the Riemann zeros.
| One thing Gram points are good for is that they help in bracketing/locating the nontrivial zeroes of the Riemann $\zeta$ function.
More precisely, recall the Riemann-Siegel decomposition
$$\zeta\left(\frac12+it\right)=Z(t)\exp(-i\;\vartheta(t))$$
where $Z(t)$ and $\vartheta(t)$ are Riemann-Siegel functions.
$Z(t)$ is an important function for the task of finding nontrivial zeroes of the Riemann $\zeta$ function, in the course of verifying the hypothesis. (See also this answer.)
That is to say, if some $t_k$ satisfies $Z(t_k)=0$, then $\zeta\left(\frac12+it_k\right)=0$.
Now, Gram points $\xi_k$ are numbers that satisfy the relation $\vartheta(\xi_k)=k\pi$ for some integer $k$. They come up in the context of "Gram's law", which states that $(-1)^k Z(\xi_k)$ tends to be positive. More crudely, we can say that Gram points tend to bracket the roots of the Riemann-Siegel function $Z(t)$ (i.e. there is often a root of $Z(t)$ in between consecutive $\xi_k$).
"Gram's law" doesn't always hold, however.
There are a number of other "bad" Gram points.
|
Series around $s=1$ for $F(s)=\int_{1}^{\infty}\text{Li}(x)\,x^{-s-1}\,dx$
Consider the function $$F(s)=\int_{1}^{\infty}\frac{\text{Li}(x)}{x^{s+1}}dx$$ where $\text{Li}(x)=\int_2^x \frac{1}{\log t}dt$ is the logarithmic integral. What is the series expansion around $s=1$?
It has a logarithmic singularity at $s=1$, and I am fairly certain (although I cannot prove it) that it should expand as something of the form $$\log (1-s)+\sum_{n=0}^\infty a_n (s-1)^n.$$ (An expansion of the above form is what I am looking for) I also have a guess that the constant term is $\pm \gamma$ where $\gamma$ is Euler's constant. Does anyone know a concrete way to work out such an expansion?
Thanks!
| Note that the integral $F(s)$ diverges at infinity for $s\leqslant1$ and redefine $F(s)$ for every $s\gt1$ as
$$
F(s)=\int_2^{+\infty}\frac{\text{Li}(x)}{x^{s+1}}\mathrm dx.
$$
An integration by parts yields
$$
sF(s)=\int_2^{+\infty}\frac{\mathrm dx}{x^s\log x},
$$
and the change of variable $x^{s-1}=\mathrm e^t$ yields $$sF(s)=\mathrm{E}_1(u\log2),\qquad u=s-1,$$ where the exponential integral function $\mathrm{E}_1$ is defined, for every complex $z$ not a nonpositive real number, by
$$
\mathrm{E}_1(z)=\int_z^{+\infty}\mathrm e^{-t}\frac{\mathrm dt}t.
$$
One knows that, for every such $z$,
$$
\mathrm{E}_1(z) = -\gamma-\log z-\sum\limits_{k=1}^\infty \frac{(-z)^{k}}{k\,k!}.
$$
On the other hand, $$\frac1s=\frac1{1+u}=\sum_{n\geqslant0}(-1)^nu^n,$$ hence $$F(s)=\frac1s\mathrm{E}_1(u\log2)=\sum_{n\geqslant0}(-1)^nu^n\cdot\left(-\gamma-\log\log2-\log u-\sum\limits_{k=1}^\infty \frac{(-1)^{k}(\log2)^k}{k\,k!}u^k\right).
$$
One sees that $F(1+u)$ coincides with a series in $u^n$ and $u^n\log u$ for nonnegative $n$, and that $G(u)=F(1+u)+\log u$ is such that $$G(0)=-\gamma-\log\log2.$$
Finally,
$$
F(s)=-\gamma-\log\log2-\log(s-1)-\sum\limits_{n=1}^{+\infty}(-1)^{n}(s-1)^n\log(s-1)+\sum\limits_{n=1}^{+\infty}c_n(s-1)^n,
$$
for some coefficients $(c_n)_{n\geqslant1}$. Due to the logarithmic terms, this is a slightly more complicated expansion than the one suggested in the question, in particular $s\mapsto G(s-1)=F(s)+\log(s-1)$ is not analytic around $s=1$.
|
Are sin and cos the only continuous and infinitely differentiable periodic functions we have? Sin and cos are everywhere continuous and infinitely differentiable. Those are nice properties to have. They come from the unit circle.
It seems there's no other periodic function that is also smooth and continuous. The only other even periodic functions (not smooth or continuous) I have seen are:
*
*Square wave
*Triangle wave
*Sawtooth wave
Are there any other well-known periodic functions?
| "Are there any other well-known periodic functions?"
In one sense, the answer is "no". Every reasonable periodic complex-valued function $f$ of a real variable can be represented as an infinite linear combination of sines and cosines with periods equal to the period $\tau$ of $f$, or to $\tau/2$, or to $\tau/3$, etc. See Fourier series.
There are also doubly periodic functions of a complex variable, called elliptic functions. If one restricts one of these to the real axis, one can find a Fourier series, but as far as I know one doesn't usually consider such restrictions when studying these functions. See Weierstrass's elliptic functions and Jacobi elliptic functions.
|
Factorization of zeta functions and $L$-functions I'm rewriting the whole question in a general form, since that's probably easier to answer and it's also easier to spot the actual question. Assume that we have some finite extension $K/F$ of number fields and assume that the extension is not Galois. Denote the Galois closure by $E/F$. My question is the following:
Is it possible to factor $\zeta_K(s)$ in terms of either Artin $L$-functions of the form $L(s,\chi,E/F)$ or $L$-functions corresponding to intermediate Galois extensions of $E/F$?
| Yes. The first thing to notice is that the Zeta function is the Artin L-function associated to the trivial representation of $Gal(E/K)$ i.e.
$$\zeta_K(s) = L(s, \mathbb{C}, E/K),$$
where $\mathbb{C}$ is endowed with the trivial action of $\mathrm{Gal}(E/K).$
Let $G = \mathrm{Gal}(E/F)$ and $H = \mathrm{Gal}(E/K).$ Now recall that the $L$-function attached to any representation $\rho$ of $H$ is equal to the $L$-function associated to the induced representation $\rho^G_H$ of $G.$
In particular,
$$L(s, \mathbb{C}, E/K) = L(s, \mathbb{C}[G]\otimes_{\mathbb{C}[H]}\mathbb{C}, E/F).$$
Therefore, if $\chi_1,...,\chi_n$ are the irreducible characters of $G$ and $\chi$ is the character of $\mathbb{C}[G]\otimes_{\mathbb{C}[H]}\mathbb{C},$
$$L(s, \mathbb{C}[G]\otimes_{\mathbb{C}[H]}\mathbb{C}, E/F) =L(s, \chi,E/F) = \displaystyle\prod_{i=1}^nL(s, \chi_i,E/F)^{\langle \chi_i, \chi \rangle},$$
And so,
$$\zeta_K(s) = \displaystyle\prod_{i=1}^nL(s, \chi_i,E/F)^{\langle \chi_i, \chi \rangle}.$$
In the case $H$ and $G$ are explicit, the characters $\chi_1,...,\chi_n$ and $\chi$ are easily calculated and thus give an explicit factorization of $\zeta_K(s).$
|
Limits of integration for random variable Suppose you have two random variables $X$ and $Y$. If $X \sim N(0,1)$, $Y \sim N(0,1)$ and you want to find k s.t. $\mathbb P(X+Y >k)=0.01$, how would you do this? I am having a hard time finding the limits of integration. How would you generalize $\mathbb P(X+Y+Z+\cdots > k) =0.01$? I always get confused when problems involve multiple integrals.
| Hint: Are the random variables independent?
If so, you can avoid integration by using the facts
*
*the sum of independent normally distributed random variables has a normal distribution
*the mean of the sum of random variables is equal to the sum of the means
*the variance of the sum of independent random variables is the sum of the variances
*for a standard normal distribution $N(0,1)$: $\Phi^{-1}(0.99)\approx 2.326$
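Putting these facts together, $X+Y\sim N(0,2)$, and more generally a sum of $m$ independent standard normals is $N(0,m)$, so the cutoff is one line of code; a sketch using scipy:

```python
from math import sqrt
from scipy.stats import norm

def cutoff(m, alpha=0.01):
    """k with P(X_1 + ... + X_m > k) = alpha for i.i.d. N(0,1) summands."""
    return sqrt(m) * norm.ppf(1 - alpha)

print(cutoff(2))   # ~3.29, i.e. sqrt(2) * 2.326
```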
|
Divisibility by 4 I was asked to find divisibility tests for 2,3, and 4.
I could do this for 2 and 3, but for 4.
I could come only as far as:
let $a_na_{n-1}\cdots a_1a_0$ be the $n$ digit number.
Now from the hundredth digit onwards, the number is divisible by 4 when we express it as sum of digits.
So, the only part of the proof that's left is to prove that $10a_1+a_0$ is divisible by 4.
So if we show that this happens only when the number $a_1a_0$ is divisible by 4, the proof is complete.
So the best way to show it is by just taking all combinations of $a_1,a_0$ or is there a better way?
| Note that $10a_1+a_0\equiv2a_1+a_0 \pmod 4$. So for divisibility by 4, $a_0$ must be even, and in this case $2a_1+a_0=2\left(a_1+\frac{a_0}{2}\right)$. This is divisible by 4 exactly when $a_1$ and $\frac{a_0}{2}$ have the same parity (both even or both odd).
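If one prefers a machine check to the case analysis, a one-line exhaustive test confirms that divisibility by 4 depends only on the last two digits:

```python
# 10^i is divisible by 4 for i >= 2, so n and its last two digits agree mod 4.
print(all((n % 4 == 0) == (n % 100 % 4 == 0) for n in range(100000)))   # True
```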
|
Using recurrences to solve $3a^2=2b^2+1$ Is it possible to solve the equation $3a^2=2b^2+1$ for positive, integral $a$ and $b$ using recurrences?I am sure it is, as Arthur Engel in his Problem Solving Strategies has stated that as a method, but I don't think I understand what he means.Can anyone please tell me how I should go about it?Thanks.
Edit:Added the condition that $a$ and $b$ are positive integers.
| Yes. See, for example, the pair of sequences https://oeis.org/A054320 and https://oeis.org/A072256, where the solutions are listed. The recurrence is defined by $$a_0 = a_1 = 1; \qquad a_n = 10a_{n-1} - a_{n-2},\ n\ge 2.$$
As to how to go about solving this, there are many good references on how to do this, including Wikipedia.
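Here is a short sketch generating solution pairs $(a,b)$: the seed pairs $(1,1)$ and $(9,11)$ come from a small brute-force search (my own choice), and each later pair applies the recurrence $s_n=10s_{n-1}-s_{n-2}$ componentwise:

```python
def solutions(count=5):
    pairs = [(1, 1), (9, 11)]                 # seeds found by brute force
    while len(pairs) < count:
        (a0, b0), (a1, b1) = pairs[-2], pairs[-1]
        pairs.append((10*a1 - a0, 10*b1 - b0))
    return pairs

for a, b in solutions():
    assert 3*a*a == 2*b*b + 1                 # every pair really solves it
print(solutions())   # [(1, 1), (9, 11), (89, 109), (881, 1079), (8721, 10681)]
```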
|
How many $p$-adic numbers are there? Let $\mathbb Q_p$ be the field of $p$-adic numbers. I know that the cardinality of $\mathbb Z_p$ (the integer $p$-adic numbers) is the continuum, and every $p$-adic number $x$ can be written in the form $x=p^nx^\prime$, where $x^\prime\in\mathbb Z_p$, $n\in\mathbb Z$.
So is the cardinality of $\mathbb Q_p$ the continuum, or more than that?
| The field $\mathbb Q_p$ is the fraction field of $\mathbb Z_p$.
Since you already know that $|\mathbb Z_p|=2^{\aleph_0}$, let us show that this is also the cardinality of $\mathbb Q_p$:
Note that every element of $\mathbb Q_p$ is an equivalence class of pairs in $\mathbb Z_p$, much like the rationals are with respect to the integers.
Since $\mathbb Z_p\times\mathbb Z_p$ is also of cardinality continuum, we have that $\mathbb Q_p$ can be injected into this set either by the axiom of choice, or directly by choosing representatives which are co-prime.
This shows that $\mathbb Q_p$ has at most continuum many elements; since $\mathbb Z_p$ is a subset of its fraction field, the $p$-adic field has exactly $2^{\aleph_0}$ many elements.
|
Pumping lemma usage I need to know if my solution to a problem about regular languages and the pumping lemma is correct.
So, let $L = \{a^ib^jc^k \mid i, j,k \ge 0 \mbox{ and if } i = 1 \mbox{ then } j=k \}$
Now i need to use the pumping lemma to prove that this language is not regular. I wrote my proof like this:
Let's assume that $L$ is regular.
Let $p$ be the pumping length and $q = p -1$.
Now, since for $i = 1$ we must have $j=k$, I can pick a string from $L$ such as $w = ab^qc^q=xyz$. Since $q = p - 1$, this gives $x = a$, $y=b^q$ and $z=c^q$, which satisfies the properties $|xy| \le p$ and $|y| \gt 0$.
Assuming that $L$ is regular, $xy^iz \in L$ for all $i \ge 0$; but if we choose $i=2$ we have $xy^2z$, which has more $b$'s than $c$'s, so we reach a contradiction. Therefore $L$ is not regular, which completes the proof.
Is my proof correct? I'm having some doubts about my $q = p - 1$, but I think it makes sense to choose a $q$ like that to "isolate" $y=b^q$, which will make the rest of the proof trivial.
Thanks in advance.
| You cannot choose $x$, $y$ and $z$. That is, the following statement does not help you prove that the language is not regular:
Since $q=p−1$, it implies that $x=a$, $y=b^q$ and $z=c^q$. It satisfies the property $|xy| \le p$ and $|y|>0$.
The pumping lemma actually states: for every regular language $L$ there is a pumping length $p$ such that every string $w \in L$ with $|w| \ge p$ admits some decomposition $w = xyz$ with $|xy| \le p$ and $|y| \ne 0$ such that $xy^iz \in L$ for all $i \ge 0$.
Therefore, if you wish to prove a language is not regular, you may go by contradiction: pick a suitable $w \in L$ with $|w| \ge p$ and show that for every decomposition $w = xyz$ with $|xy| \le p$ and $|y| \ne 0$, there exists an $i \ge 0$ such that $xy^iz \not\in L$. Note how the quantifiers flip.
You are only showing that one choice of $x$, $y$ and $z$ violates the pumping lemma, but you must show that all choices violate it.
|
Another quadratic Diophantine equation: How do I proceed? How would I find all the fundamental solutions of the Pell-like equation
$x^2-10y^2=9$
I've swapped out the original problem from this question for a couple reasons. I already know the solution to this problem, which comes from http://mathworld.wolfram.com/PellEquation.html. The site gives 3 fundamental solutions and how to obtain more, but does not explain how to find such fundamental solutions. Problems such as this have plagued me for a while now. I was hoping with a known solution, it would be possible for answers to go into more detail without spoiling anything.
In an attempt to be able to figure out such problems, I've tried websites, I've tried some of my and my brother's old textbooks as well as checking out 2 books from the library in an attempt to find an answer or to understand previous answers.
I've always considered myself to be good at math (until I found this site...). Still, judging from what I've seen, it might not be easy trying to explain it so I can understand it. I will be attaching a bounty to this question to at least encourage people to try. I do intend to use a computer to solve this problem, and I have solved problems such as $x^2-61y^2=1$, which would take forever unless you know to look at the convergents of $\sqrt{61}$.
Preferably, I would like to understand what I'm doing and why, but failing that will settle for being able to duplicate the methodology.
| You can type it into Dario Alpern's solver and tick the "step-by-step" button to see a detailed solution.
EDIT: I'm a little puzzled by Wolfram's three fundamental solutions, $(7,2)$, $(13,4)$, and $(57,18)$. It seems to me that there are two fundamental solutions, $(3,0)$ and $(7,2)$, and you can get everything else by combining those two with solutions $(19,6)$ of $x^2-10y^2=1$. Using mercio's formalism, $$(7-2\sqrt{10})(19+6\sqrt{10})=13+4\sqrt{10}$$ shows you how to get $(13,4)$; $$(3+0\sqrt{10})(19+6\sqrt{10})=57+18\sqrt{10}$$ shows you how to get $(57,18)$.
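To see this concretely, one can enumerate small solutions by brute force and verify that multiplication by the fundamental unit $19+6\sqrt{10}$, i.e. $(x,y)\mapsto(19x+60y,\,6x+19y)$, maps solutions to solutions; a sketch:

```python
from math import isqrt

def small_solutions(ymax=100):
    out = []
    for y in range(ymax):
        t = 9 + 10 * y * y
        x = isqrt(t)
        if x * x == t:
            out.append((x, y))
    return out

sols = small_solutions()
print(sols)   # [(3, 0), (7, 2), (13, 4), (57, 18), (253, 80)]
for x, y in sols:
    X, Y = 19*x + 60*y, 6*x + 19*y
    assert X*X - 10*Y*Y == 9     # the unit action preserves solutions
```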
|
Probability when I have one or 3 choices I'm wondering what is "better" to have in terms of profit:
Let's say we have 300 people coming to your store, and you have 3 products. Each individual is given (randomly) one product. If the individual likes the product he will buy it, but if he doesn't like the product, he will turn around and walk out of the store forever.
Now, I'm wondering whether it is better (more probable) to have 3 products or just one, in order to maximize the total number of sold products.
It is important to note that the customer doesn't decide which product he gets (he gets it randomly); he only decides if he likes it or not (50-50 chance he likes it).
Any help is welcome, or maybe a link to some theory I should read in order to come up with a solution.
edit:
Ok, so, a little more detailed explanation: let's say that I have an unlimited number of each of the items in the store (currently 3 different items, but an unlimited number of each in stock), and let's say that each customer who comes into my store (approximately 1000 a day) either likes or doesn't like the product randomly offered to him. Each of the products is of the same popularity: we sold almost the same amount of product 1, product 2, and product 3. Product 1 sold, for example, just 100 more units than product 2, and product 2 sold about 100 more units than product 3, but the sold numbers are as high as 1000000, so this difference is really very low. Does this now help in determining whether we should choose only product 1 and keep promoting it, or should we leave the three products as they are?
|
is it better (more probable) to have 3 products or just one, in order to maximize the total number of sold products. Important to note is that customer doesn't decide which product he gets (he gets it randomly), he only decides if he likes it or not (50-50 chance he likes it).
Under the conditions you have described, the main objective is to maximize the total number of sold products. This number will be at most the population number of 800.
The only factors here are:
1 - The number of people showing up at the store
2 - The number of items you have in stock
Factor 1 above is not described in your statement of the problem, but if you think that having more than 1 product will affect it, then having 3 products is better than 1. Otherwise, the only relevant factor would be factor (2), and adding new products will not add value.
I hope this helps.
|
Prove that $(1 - \frac{1}{n})^{-n}$ converges to $e$ This is a homework question and I am not really sure where to go with it. I have a lot of trouble with sequences and series, can I get a tip or push in the right direction?
| You have:
$$
x_n:=\left(1-\frac1n\right)^{-n} = \left(\frac{n-1}n\right)^{-n} = \left(\frac{n}{n-1}\right)^{n}
$$
$$
= \left(1+\frac{1}{n-1}\right)^{n} = \left(1+\frac{1}{n-1}\right)^{n-1}\cdot \left(1+\frac{1}{n-1}\right) = a_n\cdot b_n.
$$
Since $a_n\to \mathrm e$ and $b_n\to 1$ you obtain what you need.
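A quick numeric check (my addition) of the limit:

import math
for n in (10, 100, 1000, 10**6):
    print(n, (1 - 1/n)**(-n))  # approaches e = 2.718281828...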
|
Is there an easy way to determine when this fractional expression is an integer? For $x,y\in \mathbb{Z}^+,$ when is the following expression an integer?
$$z=\frac{(1-x)-(1+x)y}{(1+x)+(1-x)y}$$
The associated Diophantine equation is symmetric in $x, y, z$, but I couldn't do anything more with that. I tried several factoring tricks without luck. The best I could do was find three solutions such that $0<x\le y\le z$. They are: $(2,5,8)$, $(2,4,13)$ and $(3,3,7)$.
The expression seems to converge pretty quickly to some non-integer between 1 and 2.
| Since $$ \frac{(1-x)-(1+x)y}{(1+x)+(1-x)y} = \frac{ xy+x+y-1}{xy-x-y-1} = 1 + \frac{2(x+y) }{xy-x-y-1} $$
and $ 2x+2y < xy - x -y - 1 $ whenever $ 3(x+y) < xy - 1 .$ Suppose $ x\leq y$; then $ 3(x+y) \leq 6y \leq xy-1 $ whenever $ x\geq 7. $ So all solutions must have $0\leq x< 7 $, and the problem is reduced to solving $7$ simpler Diophantine equations.
If $x=0 $ then $ \displaystyle z= 1 - \frac{2y}{y+1}$ so the only solutions are $ (0,0,1)$ and $ (0,1,0).$
If $x=1$ then $ \displaystyle z= -y$ so $(1,m,-m)$ is a solution for $ m\geq 1.$
If $x=2$ then $ \displaystyle z = 1 + \frac{4+2y}{y-3}$ which is an integer for $y=1,2,4,5,8,13.$
I will leave you to find the others. Each of the cases are now simple Diophantine equations.
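A brute-force search (my addition) confirms the questioner's three solutions with $0<x\le y\le z$:

solutions = []
for x in range(1, 100):          # the bound above shows x < 7; 100 is overkill
    for y in range(x, 1000):
        num, den = x*y + x + y - 1, x*y - x - y - 1
        if den != 0 and num % den == 0:
            z = num // den
            if z >= y:
                solutions.append((x, y, z))
print(solutions)  # [(2, 4, 13), (2, 5, 8), (3, 3, 7)]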
|
Find the equation of the plane passing through a point and a vector orthogonal I have come across this question that I need a tip for.
Find the equation (general form) of the plane passing through the point $P(3,1,6)$ that is orthogonal to the vector $v=(1,7,-2)$.
I would be able to do this if it said "parallel to the vector"
I would set the equation up as
$(x,y,z) = (3,1,6) + t(1,7,-2)$
and go from there.
I don't get where I can get an orthogonal vector. Normally when I am finding an orthogonal vector I have two other vectors and do the cross product to find it.
I am thinking somehow I have to get three points on the plane, but I'm not sure how to go about doing that.
Any pointers?
thanks in advance.
| The vector equation of a plane is of the form $\mathbf r\cdot\mathbf n=\mathbf a\cdot\mathbf n$;
in this case, $\mathbf a=(3,1,6)$ and $\mathbf n=(1,7,-2)$.
Therefore,
$$\mathbf r\cdot(1,7,-2)=(3,1,6)\cdot(1,7,-2)=3\cdot1+1\cdot7+6\cdot(-2)=3+7-12=-2.$$
Writing $\mathbf r=(x,y,z)$, this gives
$$x+7y-2z=-2.$$
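In code, the general form $ax+by+cz=d$ falls straight out of the point and normal (a minimal sketch, my addition):

def plane_from_point_normal(P, n):
    d = sum(p * k for p, k in zip(P, n))  # d = n . P
    return (*n, d)                        # coefficients of a*x + b*y + c*z = d

print(plane_from_point_normal((3, 1, 6), (1, 7, -2)))  # (1, 7, -2, -2), i.e. x + 7y - 2z = -2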
|
A circle with infinite radius is a line I am curious about the following diagram:
The image implies a circle of infinite radius is a line. Intuitively, I understand this, but I was wondering whether this problem could be stated and proven formally? Under what definition of 'circle' and 'line' does this hold?
Thanks!
| There is no such thing as a circle of infinite radius. One might find it useful to use the phrase "circle of infinite radius" as shorthand for some limiting case of a family of circles of increasing radius, and (as the other answers show) that limit might give you a straight line.
|
Product rule in calculus This is a wonderful question I came across while doing calculus. We all know that $$\frac{d(AB)}{dt} = B\frac{dA}{dt} + A\frac{dB}{dt}.$$
Now if $A=B$, give an example for which
$$\frac{dA^2}{dt} \neq 2A\frac{dA}{dt}.$$
I have tried many examples and couldn't find one; any help?
| Let's observe the function
$y=(f(x))^2$; this function can be decomposed as the composite of two functions:
$y=g(u)=u^2$ and $u=f(x)$
So:
$\frac { d y}{ d u}=(u^2)'_u=2u=2f(x)$
$\frac{du}{dx}=f'(x)$
By the chain rule we know that :
$\frac{dy}{dx}=\frac{dy}{du}\cdot \frac{du}{dx}=2f(x)f'(x)$
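A symbolic check (my addition, using SymPy) confirming that no differentiable counterexample exists:

import sympy as sp

t = sp.symbols('t')
A = sp.Function('A')(t)
# d(A^2)/dt - 2*A*dA/dt simplifies to 0 for any differentiable A(t)
print(sp.simplify(sp.diff(A**2, t) - 2 * A * sp.diff(A, t)))  # 0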
|
Solving the recurrence relation $A_n=n!+\sum_{i=1}^n{n\choose i}A_{n-i}$ I am attempting to solve the recurrence relation $A_n=n!+\sum_{i=1}^n{n\choose i}A_{n-i}$ with the initial condition $A_0=1$. By "solving" I mean finding an efficient way of computing $A_n$ for general $n$ in complexity better than $O(n^2)$.
I tried using the identity $\dbinom{n+1}i=\dbinom{n}{i-1}+\dbinom{n}i$ but I still ended up with a sum over all previous $n$'s.
Another approach was to notice that $2A_n=n!+\sum_{i=0}^{n}{n \choose i}A_{n-i}$ and so if $a(x)$ is the EGF for $A_n$, then we get the relation $$2a(x)=\frac{1}{1-x}+a(x)e^x,$$ so $$a(x)=\frac{1}{(1-x)(2-e^x)}$$ (am I correct here?) but I can't see how to use this EGF for a more efficient computation of $A_n$.
| This isn’t an answer, but it may lead to useful references.
The form of the recurrence suggests dividing through by $n!$ and substituting $B_n=\dfrac{A_n}{n!}$, after which the recurrence becomes $$B_n=1 + \sum_{i=1}^n\binom{n}i\frac{(n-i)!}{n!}B_{n-i}=1+\sum_{i=1}^n\frac{B_{n-i}}{i!}.$$
You didn’t specify an initial condition, so for the first few terms we have:
$$\begin{align*}
B_0&=0+B_0\\
B_1&=1+B_0\\
B_2&=2+\frac32B_0\\
B_3&=\frac72+\frac{13}6B_0\\
B_4&=\frac{17}3+\frac{25}8B_0\\
B_5&=\frac{211}{24}+\frac{541}{120}B_0
\end{align*}$$
If we set $B_n=u_n+v_nB_0$, then $$u_n=1+\sum_{i=1}^n\frac{u_{n-i}}{i!}$$ with $u_0=0$, and $$v_n=\sum_{i=1}^n\frac{v_{n-i}}{i!}$$ with $v_0=1$. The ‘natural’ denominator of $u_n$ is $(n-1)!$, while that of $v_n$ is $n!$, so I looked at the sequences $$\langle (n-1)!u_n:n\in\mathbb{N}\rangle = \langle 0,1,2,7,34,211,\dots\rangle$$ and $$\langle n!v_n:n\in\mathbb{N}\rangle=\langle 1,1,3,13,75,541,\dots\rangle\;.$$ The first is OEIS A111539, and second appears to be OEIS A000670. There’s evidently a great deal known about the latter; there’s very little on the former.
Added: And with $A_0=1$ we have $B_0=1$ and $B_n=u_n+v_n$.
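A short verification (my addition): computing $A_n$ directly from the recurrence with $A_0=1$ and checking $B_n=A_n/n!$ against $u_n+v_n$:

from fractions import Fraction
from math import comb, factorial

N = 12
A = [1]
for n in range(1, N + 1):
    A.append(factorial(n) + sum(comb(n, i) * A[n - i] for i in range(1, n + 1)))

u, v = [Fraction(0)], [Fraction(1)]
for n in range(1, N + 1):
    u.append(1 + sum(u[n - i] / factorial(i) for i in range(1, n + 1)))
    v.append(sum(v[n - i] / factorial(i) for i in range(1, n + 1)))

assert all(Fraction(A[n], factorial(n)) == u[n] + v[n] for n in range(N + 1))
print(A[:5])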
|
How to prove $\sum_{ d \mid n} \mu(d)f(d)=\prod_{i=1}^r (1-f(p_i))$? I have to prove this for $n \in \mathbb{N}$, $n>1$, with $n=\prod \limits_{i=1}^r p_i^{e_i}$, where $f$ is a multiplicative function with $f(1)=1$:
$$\sum_{ d \mid n} \mu(d)f(d)=\prod_{i=1}^r (1-f(p_i))$$
How I have to start? Are there different cases or can I prove it in general?
Any help would be fine :)
| Please see Theorem 2.18 on page $37$ in Tom Apostol's Introduction to analytic number theory book.
The proof goes as follows:
Define $$ g(n) = \sum\limits_{d \mid n} \mu(d) \cdot f(d)$$
Then $g$ is multiplicative, so to determine $g(n)$ it suffices to compute $g(p^a)$. But note that $$g(p^a) = \sum\limits_{d \mid p^{a}} \mu(d) \cdot f(d) = \mu(1)\cdot f(1) + \mu(p)\cdot f(p) = 1-f(p).$$
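A numeric sanity check (my addition), taking the multiplicative function $f(n)=n$, for which the identity reads $\sum_{d\mid n}\mu(d)\,d=\prod_{i}(1-p_i)$:

from sympy import divisors, primefactors
from sympy.ntheory import mobius

for n in range(2, 200):
    lhs = sum(mobius(d) * d for d in divisors(n))
    rhs = 1
    for p in primefactors(n):
        rhs *= 1 - p
    assert lhs == rhs
print("verified for 2 <= n < 200")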
|
Sketch the graph of $y = \frac{4x^2 + 1}{x^2 - 1}$ I need help sketching the graph of $y = \frac{4x^2 + 1}{x^2 - 1}$.
I see that the domain is all real numbers except $1$ and $-1$ as $x^2 - 1 = (x + 1)(x - 1)$. I can also determine that between $-1$ and $1$, the graph lies below the x-axis.
What is the next step? In previous examples I have determined the behavior near x-intercepts.
| You can simplify right away with
$$
y = \frac{4x^2 + 1}{x^2 - 1} = 4+ \frac{5}{x^2 - 1} =4+ \frac{5}{(x - 1)(x+1)}
$$
Now when $x\to\infty$ or $x\to -\infty$, adding or subtracting $1$ doesn't really matter, hence the second term goes to zero. When $x$ is quite large, say $1000$, the second term is very small but positive, hence the function approaches $4$ from above (the same holds for large negative values).
The remaining part is the behavior as $x$ approaches $-1$ and $1$ from either side. For $x<-1$ and $x>1$ you can show that the second term is positive, and it is negative for $-1<x<1$. Therefore the function jumps from $-\infty$ to $\infty$ at each vertical asymptote.
Here is the whole thing.
|
A counterexample to theorem about orthogonal projection Can someone give me an example of noncomplete inner product space $H$, its closed linear subspace of $H_0$ and element $x\in H$ such that there is no orthogonal projection of $x$ on $H_0$. In other words I need to construct a counterexample to theorem about orthogonal projection when inner product space is not complete.
| Let $H$ be the inner product space consisting of $\ell^2$-sequences with finite support, let $\lambda = 2^{-1/2}$ and put
$$
z = \sum_{n=1}^\infty \;\lambda^n \,e_n \in \ell^2 \smallsetminus H
$$
Then $\langle z, z \rangle = \sum_{n=1}^\infty \lambda^{2n} = \sum_{n=1}^{\infty} 2^{-n} = 1$.
The subspace $H_0 = \{y \in H\,:\,\langle z, y \rangle = 0\}$ is closed in $H$ because $\langle z, \cdot\rangle: H \to \mathbb{R}$ is continuous.
The projection of $x = e_1$ to $H_0$ should be
$$
y = e_1 - \langle z,e_1\rangle\, z = e_1 - \lambda z = \lambda^2 e_1 - \sum_{n=2}^\infty \lambda^{n+1}e_n
\in \ell^2 \smallsetminus H_0.
$$
For $k \geq 2$ put
$$
z_k = \sum_{n=2}^k \;\lambda^{n} \,e_n + \frac{\lambda^{k+1}}{1-\lambda^2}\, e_{k+1} \in H.
$$
Then $y_k = \lambda^2 e_1-\lambda z_k \in H_0$ because
$$
\langle y_k, z\rangle =\lambda^3 - \sum_{n=2}^{k}\,\lambda^{2n+1}-\frac{\lambda^{2k+3}}{1-\lambda^2} = \lambda^3 - \sum_{n=2}^{\infty}\,\lambda^{2n+1} = \lambda^3-\lambda^3 = 0,
$$
using $\frac{\lambda^{2k+3}}{1-\lambda^2}=\sum_{n=k+1}^\infty \lambda^{2n+1}$ and $\sum_{n=2}^\infty \lambda^{2n+1}=\lambda^3$ (recall $\lambda^2=\tfrac12$).
On the other hand, we have $y_k \to y$ in $\ell^2$, so
$$
\|e_1 - y\| \leq d(e_1,H_0) \leq \lim_{k\to\infty} \|e_1-y_k\| = \|e_1-y\|
$$
and we're done because $y \in \overline{H}_0$ in $\ell^2$ is the only point realizing $d(e_1,\overline{H}_0)$ in $\ell^2$, thus there can be no point in $H_0$ minimizing the distance to $e_1$ because $y \notin H_0$.
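A numeric sanity check (my addition) of the construction, truncating everything to finitely many coordinates:

import numpy as np

lam, N = 2 ** -0.5, 60
idx = np.arange(1, N + 1)
z = lam ** idx
e1 = np.zeros(N); e1[0] = 1.0
y = e1 - np.dot(z, e1) * z                 # projection of e_1 onto the complement of z

for k in (5, 10, 20):
    zk = np.zeros(N)
    zk[1:k] = lam ** np.arange(2, k + 1)   # coordinates of e_2 .. e_k
    zk[k] = lam ** (k + 1) / (1 - lam ** 2)
    yk = lam ** 2 * e1 - lam * zk
    print(k, np.dot(yk, z), np.linalg.norm(yk - y))  # <y_k, z> ~ 0 and y_k -> y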
|
First Order Logic (deduction proof in Hilbert system)
Possible Duplicate:
First order logic proof question
I need to prove this:
⊢ (∀x.ϕ) →(∃x.ϕ)
Using the following axioms:
The only thing I did was use the deduction theorem:
(∀x.ϕ) ⊢(∃x.ϕ)
And then changed (∃x.ϕ) into (~∀x.~ϕ), so:
(∀x.ϕ) ⊢ (~∀x.~ϕ)
How can I continue with this? I cannot use soundness/completeness theorems.
EDIT: ∀* means a finite sequence of universal quantifiers (possibly 0)
| If the asterisks in your axioms mean that the axioms are to be fully universally quantified, so that they become sentences, and if your language has no constant symbols, then it will not be possible to make the desired deduction in your system. The reason is that since all the axioms are fully universally quantified, they are (vacuously) true in the empty structure, and your rule of inference is truth-preserving for any structure including the empty structure. But your desired deduction is not valid for the empty structure, since the hypothesis is vacuously true there, but the conclusion is not. So it would actually be unsound for you to able to make that deduction. Your desired validity is only valid in nonempty domains, and so you need a formal system appropriate for reasoning in nonempty domains.
|
Improper integral; exponential divided by polynomial How can I evaluate $$\int_{-\infty}^\infty {\exp(ixk)\over -x^2+2ixa+a^2+b^2} dx,$$ where $k\in \mathbb R, a>0$? Would Fourier transforms simplify anything? I know very little about complex analysis, so I am guessing there is a rather simple way to evaluate this? Thanks.
| Assume $b \neq 0$ and $k\neq 0$.
Write
$\dfrac{\exp(ixk)}{-x^2+2iax+a^2+b^2}= \dfrac{\exp(ixk)}{-(x-ia)^2+b^2}=\dfrac{\exp(i(x-ia)k)}{-(x-ia)^2+b^2} \exp(-ak)$ hence
the integral becomes $I=\int_{-\infty}^\infty \dfrac{\exp(i(x-ia)k)}{-(x-ia)^2+b^2} \exp(-ak)dx=\int_{-\infty-ia}^{\infty -ia} \dfrac{\exp(izk)}{-z^2+b^2} \exp(-ak)dz$, the contour being the straight line parallel to the $x$-axis that meets the imaginary axis at $-ia$.
We need to close the contour by a large semicircle in the upper half plane if $k>0$, and in this case the two poles at $z=b$ and $z=-b$ are enclosed in the contour.
Now we will use the residue theorem. The poles of the fraction $\dfrac{\exp(izk)}{-z^2+b^2}$ are $-b,+b$ and then the integral will be $I=(2\pi i)\exp(-ak)\lbrace Res(z=b)\exp(ibk)+Res(z=-b)\exp(-ibk)\rbrace$, where $Res(z=\pm b)$ denotes the residue of $\dfrac{1}{-z^2+b^2}$.
Now observe that $Res(z=b)=-1/2b$ and $Res(z=-b)=1/2b$ (from $1/(-2z)$ evaluated at $z=\pm b$), hence $I=2 \pi \exp(-ak) \frac{\sin bk}{b}$.
If $k<0$ then we close the contour by a semicircle in the lower halfplane and in this case there are no poles enclosed and so the integral becomes zero.
Now assume $b=0$ and $k\neq 0$: in this case the residue (at $z=0$) becomes $-ik$ and the integral becomes $2\pi k\exp(-ak)$ (if $k>0$) and $0$ if $k<0$, consistent with letting $b\to0$ above.
Finally let $k=0$: in this case the result is easy and I leave it for you as an exercise.
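A quadrature check (my addition) of the $k>0$ result:

import numpy as np
from scipy.integrate import quad

a, b, k = 1.0, 2.0, 1.5
f = lambda x: np.exp(1j * x * k) / (-x**2 + 2j * a * x + a**2 + b**2)
re = quad(lambda x: f(x).real, -np.inf, np.inf, limit=500)[0]
im = quad(lambda x: f(x).imag, -np.inf, np.inf, limit=500)[0]
print(re + 1j * im)                                    # numeric value of the integral
print(2 * np.pi * np.exp(-a * k) * np.sin(b * k) / b)  # closed form above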
|
Proof of Convergence: Babylonian Method $x_{n+1}=\frac{1}{2}(x_n + \frac{a}{x_n})$ a) Let $a>0$ and the sequence $x_n$ fulfills $x_1>0$ and $x_{n+1}=\frac{1}{2}(x_n + \frac{a}{x_n})$ for $n \in \mathbb N$. Show that $x_n \rightarrow \sqrt a$ when $n\rightarrow \infty$.
I have done it in two ways, but I guess I'm not allowed to use the first one and the second one is incomplete. Can someone please help me?
1. We already know $x_n \rightarrow \sqrt a$, so we do another step of the iteration and see that $x_{n+1} = \sqrt a$.
2. Using limits, $x_n \rightarrow x$ and $x_{n+1} \rightarrow x$ (this is the part I think is incomplete: don't I have to show $x_{n+1} \rightarrow x$? how?), we have that
$$x = \frac x 2 (1 + \frac a {x^2}) \Rightarrow 1 = a/x^2 \Rightarrow x = \sqrt a$$
b) Let the sequence $x_n$ be defined as $x_{n+1}= 1 + \frac 1 {x_n} (n \in \mathbb N), x_1=1$. Show that it converges and calculate its limit.
"Tip: Show that sequences $x_{2n}$ and $x_{2n+1}$ monotone convergent to the limit."
I didn't understand the tip; how can this help me? Does it make a difference whether the index is odd or even?
Thanks in advance!
| For a):
The proof of convergence can be deduced from the question/answer LFT theory found in
Iterative Convergence Formulation for Linear Fractional Transformation with Rational Coefficients
Proof when $x_1^2 > a$
Note: If both $a$ and $x_1$ are rational numbers, then this solution is obtained without recourse to the real number system.
Let $S$ represent $a$ and $K$ denote $x_1$.
We have our LFT theory for
$F(x) = \frac{S + Kx}{K + x}$
as espoused in the above link.
Considering Proposition 2 & 3, we have a sequence
$\{F^1, F^2, F^3, ..., F^n, ...\}$ of LFTs
with the corresponding decreasing sequence of Ks
$\{K, K_2, K_3, ..., K_n, ...\}$ and $(K_n)^2$ converges to $S$.
Might as well set $K_1$ to $K$ now.
Now with a little thought, you can see the following holds:
$K_2 = (S + K_1^2)/2K_1$
$K_4 = (S + K_2^2)/2K_2$
$K_8 = (S + K_4^2)/2K_4$
$K_{16} = (S + K_8^2)/2K_8$
etc.
But this is exactly the Babylonian Method.
So the method can actually be described as calculating numbers that are a subsequence of our convergent sequence
$\{K_1, K_2, K_3, ..., K_n, ...\}$
So we have shown that the squares of the numbers produced by the Babylonian Method converge to $S$.
Proof when $x_1^2 < a$
Let $K$ denote $(a + x_1^2)/2x_1$. We know by the LFT theory that the square of this number is greater than $a$. So, the proof given above can now be applied, by simply 'throwing out' the first number $x_1$ of the sequence.
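For completeness, the iteration itself in a few lines of Python (my addition):

def babylonian_sqrt(a, x=1.0, steps=10):
    # x_{n+1} = (x_n + a/x_n)/2 converges to sqrt(a) for any x_1 > 0
    for _ in range(steps):
        x = 0.5 * (x + a / x)
    return x

print(babylonian_sqrt(2.0))   # ~1.414213562373095
print(babylonian_sqrt(61.0))  # ~7.810249675906654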
|
Differential Equation Breaks Euler Method Solving ${dy\over dx} = 2y^2$, $y(0)=2$ analytically yields $y(8)= -2/31$, but from using Euler's method and looking at the slope field, we see that $y(8)$ should be a really large positive answer. Why?
Differential equation:
$$\begin{align}
\frac{dy}{dx}&=2y^2\\
\frac{dy}{y^2} &= 2\, dx\\
-\frac{1}{y} &= 2x + c\\
-\frac{1}{2} &= c\\
-\frac{1}{y}&=2x-\frac{1}{2}\\
\frac{1}{y}&=-2x+\frac{1}{2}\\
y&=\frac{1}{-2x+\frac{1}{2}}\\
y&=\frac{2}{-4x+1}\\
y(8)&=-2/31\end{align}$$
| As you found, the solution is $y={2\over 1-4x}$, which has a vertical asymptote at $x=1/4$. In the slope field, you should be able to convince yourself that such a function can indeed "fall along the slope vectors". The curve will shoot up to infinity as you approach $x=1/4$ from the left. To the right of $x=1/4$ the curve "comes from below".
The graph of $y={2\over 1-4x}$ over $[0,1]$ is shown below:
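To see the failure concretely, here is a minimal Euler sketch (my addition); the method steps across the asymptote at $x=1/4$ and the iterate overflows to infinity rather than tracking the branch that gives $y(8)=-2/31$:

def euler(h=0.01, x_end=8.0):
    x, y = 0.0, 2.0
    while x < x_end:
        y += h * 2 * y * y   # Euler step for dy/dx = 2y^2
        x += h
    return y

print(euler())  # inf: the numeric solution blows up near x = 1/4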
|
Locally compact nonarchimedian fields Is it true that if $F$ is a locally compact topological field with a proper nonarchimedean absolute value $A$, then $F$ is totally disconnected? I am aware of the classifications of local fields, but I can't think of a way to prove this directly.
| Yes: the non-Archimedean absolute value yields a non-Archimedean metric (also known as an ultrametric), and every ultrametric space is totally disconnected. In fact, every ultrametric space is even zero-dimensional, as it has a base of clopen sets.
Proof: Let $\langle X,d\rangle$ be an ultrametric space, meaning that $d$ is a metric satisfying $$d(x,y)\le\max\{d(x,z),d(y,z)\}$$ for any $x,y,z\in X$. Let $B(x,r)=\{y\in X:d(x,y)<r\}$; by definition $B(x,r)$ is open. Suppose that $y\in X\setminus B(x,r)$; then $d(x,y)\ge r$, and I claim that $B(x,r)\cap B(y,r)=\varnothing$. To see this, suppose that $z\in B(x,r)\cap B(y,r)$; then $$d(x,y)\le\max\{d(x,z),d(y,z)\}<r\;,$$ which is impossible. Thus, $y\notin\operatorname{cl}B(x,r)$, and $B(x,r)$ is closed. Thus, every open ball is clopen, and $X$ is zero-dimensional (and hence totally disconnected). $\dashv$
The metric associated with the non-Archimedean absolute value $\|\cdot\|$ is of course $d(x,y)=\|x-y\|$.
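A small illustration (my addition) of the ultrametric inequality for the $5$-adic absolute value on a sample of integers:

from itertools import product

def abs_p(n, p=5):
    if n == 0:
        return 0.0
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return p ** -v

for x, y, z in product(range(-20, 21), repeat=3):
    assert abs_p(x - y) <= max(abs_p(x - z), abs_p(y - z))
print("ultrametric inequality holds on the sample")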
|
Function $f\colon 2^{\mathbb{N}}\to 2^{\mathbb{N}}$ preserving intersections and mapping sets to sets which differ only by a finite number of elements Define on $2^{\mathbb{N}}$ the equivalence relation
$$
X\sim Y\Leftrightarrow \text{Card}((X\setminus Y)\cup(Y\setminus X))<\aleph_0
$$
Does there exist a function $f\colon 2^{\mathbb{N}}\to 2^{\mathbb{N}}$ such that
$$
f(X)\sim X
$$
$$
X\sim Y \Rightarrow f(X)=f(Y)
$$
$$
f(X\cap Y)=f(X)\cap f(Y)
$$
| Let $\{X_\alpha : \alpha\in\mathcal{A}\}\subset 2^{\mathbb{N}}$ be an uncountable family of sets such that
$$
\alpha,\beta\in\mathcal{A},\quad\alpha\neq\beta\Rightarrow \text{Card}(X_\alpha\cap X_\beta)<\aleph_0
$$
Such a family does exist. Indeed, for each irrational number $x\in\mathbb{I}$ choose a sequence of rational numbers $\{x_n\}_{n=1}^{\infty}\subset\mathbb{Q}$ tending to $x$, and let $\varphi(x)=\{x_n:n\in\mathbb{N}\}$ be the set of its terms. Obviously for $x,y\in\mathbb{I}$ with $x\neq y$ we have $\text{Card}(\varphi(x)\cap\varphi(y))<\aleph_0$, and for all $x\in\mathbb{I}$ we have $\text{Card}(\varphi(x))=\aleph_0$. Let $q\colon \mathbb{Q}\to\mathbb{N}$ be a bijection and let $i\colon 2^\mathbb{Q}\to 2^\mathbb{N}$ be the induced map $i(A)=q[A]$, which preserves cardinalities of intersections; then we may take $\{X_\alpha : \alpha\in\mathcal{A}\}=\{i(\varphi(x)):x\in\mathbb{I}\}$ as the desired family.
Let $\alpha,\beta\in\mathcal{A},\alpha\neq\beta$. Then $X_\alpha\cap X_\beta\sim\varnothing$. And from the second and third properties we obtain $f(X_\alpha)\cap f(X_\beta)=f(X_\alpha\cap X_\beta)=f(\varnothing)$.
Now for each $\alpha\in\mathcal{A}$ consider $Y_\alpha=f(X_\alpha)\setminus f(\varnothing)$. By construction $X_\alpha$ is infinite, hence so is $f(X_\alpha)$ (since $f(X_\alpha)\sim X_\alpha$), while $f(\varnothing)\sim\varnothing$ is finite; as a consequence $Y_\alpha\neq\varnothing$. Now for all $\alpha,\beta\in\mathcal{A},\alpha\neq\beta$ we have
$$
Y_\alpha\cap Y_\beta=f(X_\alpha\cap X_\beta)\setminus f(\varnothing)=\varnothing
$$
Thus we have built an uncountable family of nonempty pairwise disjoint subsets $\{Y_\alpha : \alpha\in\mathcal{A}\}$ of the countable set $\mathbb{N}$, a contradiction; hence such a function doesn't exist.
|
how to find the parabola of a flying object How can you find the parabola of a flying object without testing it? What variables do you need? I want to calculate the maximum height and distance using a parabola. Is this possible? Any help will be appreciated.
| I assume that you know that if an object is thrown straight upwards, with initial speed $v$, then its height $h(t)$ above the ground at time $t$ is given by
$$h(t)=vt-\frac{1}{2}gt^2,$$
where $g$ is the acceleration due to gravity. The acceleration is taken to be a positive number, and treated as constant, since our thrown object achieves only modest heights. In metric units, $g$ at the surface of the Earth is about $9.8$ metres per second per second. Of course the equation only holds until the object hits the ground. We are assuming that there is no air resistance, which is unrealistic unless we are on the Moon.
Now imagine that we are standing at the origin, and throw a ball with speed $s$, at an angle $\theta$ to the ground, where $\theta$ is not $90$ degrees.
The horizontal component of the velocity is $s\cos\theta$, and the vertical component is $s\sin\theta$. So the "$x$-coordinate" of the position at time $t$ is given by
$$x=x(t)=(s\cos\theta)t.\qquad\qquad(\ast)$$
The height ($y$-coordinate) at time $t$ is given by
$$y=y(t)=(s\sin\theta)t-\frac{1}{2}gt^2.\qquad\qquad(\ast\ast)$$
To obtain the equation of the curve described by the ball, we use $(\ast)$ to eliminate $t$ in $(\ast\ast)$.
From $(\ast)$ we obtain $t=\dfrac{x}{s\cos\theta}$. Substitute for $t$ in $(\ast\ast)$. We get
$$y=(\tan\theta)x-\frac{g}{2s^2\cos^2\theta}x^2.$$
For the maximum height reached, we do not need the equation of the parabola. For the height $y$ at time $t$ is $(s\sin\theta)t -\frac{1}{2}gt^2$. We can find the $t$ that maximizes height. More simply, the vertical component of the velocity at time $t$ is $s\sin\theta -gt$, and at the maximum height this upwards component is $0$. Now we can solve for $t$.
Nor do we need the equation of the parabola to find the horizontal distance travelled. By symmetry, to reach the ground takes time equal to twice the time to reach maximum height. The time to reach the ground is therefore $\dfrac{2s \sin\theta}{g}$, and therefore the horizontal distance travelled is $\dfrac{2s^2\sin\theta\cos\theta}{g}$. This can also be written as $\dfrac{s^2\sin 2\theta}{g}$.
Comment: Note that for fixed $s$ the horizontal distance $\dfrac{s^2\sin 2\theta}{g}$ travelled until we hit the ground is a maximum when $\theta$ is $45$ degrees. So for maximizing horizontal distance, that's the best angle to throw at, if your throw speed does not depend on angle. (But it does!)
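A small numeric companion (my addition) to the formulas above:

import math

def projectile(s, theta_deg, g=9.8):
    theta = math.radians(theta_deg)
    h_max = (s * math.sin(theta)) ** 2 / (2 * g)  # height where v_y = 0
    rng = s ** 2 * math.sin(2 * theta) / g        # horizontal distance travelled
    return h_max, rng

print(projectile(20, 45))  # a 20 m/s throw at 45 degrees: about (10.2, 40.8)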
|