Stochastic Processes...what is it? My university is offering stochastic processes next semester, here is the course description: Review of distribution theory. Introduction to stochastic processes, Markov chains and Markov processes, counting, and Poisson and Gaussian processes. Applications to queuing theory. I'm torn between this course and analysis 2. I'm not sure what stochastic processes deals with, and the description doesn't help. Can anyone explain what you do in such a course? Is it proofs-based? Also, would you recommend this over Analysis 2? Thanks.
"Stochastic" refers to topics involving probability -- often the treatment of processes that are inherently random in nature by virtue of being some sot of random function about a random or deterministic variable, or a process parameterized by a random quantity. For example, Brownian motion is a stochastic process; similarly, the behavior of a LTI system containing a random parameter is a stochastic process (say, the vibration of a spring-mass system, where the spring constant is actually a random variable). Stochastic calculus requires a strong background in analytical probability theory, which itself requires some notion of measure theory, algebra, and multivariable analysis. Many undergraduate courses will avoid some of the rigorous elements of stochastic analysis, but to really understand it, you will need a decent background in undergraduate analysis through multivariable analysis, Lebesgue theory, and measure theory. In short, take Analysis II, take Probability, and then consider Stochastic processes if you are interested in it.
Where is the mistake in the calculation of $y'$ if $ y = \Bigl( \dfrac{x^2+1}{x^2-1} \Bigr)^{1/4} $? Please take a look here. If $ y = \Bigl( \dfrac{x^2+1}{x^2-1} \Bigr)^{1/4} $ \begin{eqnarray} y'&=& \dfrac{1}{4} \Bigl( \dfrac{x^2+1}{x^2-1} \Bigr)^{-3/4} \left \{ \dfrac{2x(x^2-1) - 2x(x^2+1) }{(x^2-1)^2} \right \}\\ &=& \Bigl( \dfrac{x^2+1}{x^2-1} \Bigr)^{-3/4} \dfrac{-x}{(x^2-1)^2}. \end{eqnarray} On the other hand, we have \begin{equation} \log y = \dfrac{1}{4} \left \{ \log (x^2+1) - \log (x^2-1) \right \} \end{equation} Then, \begin{eqnarray} \dfrac{dy}{dx} &=& y \dfrac{1}{4} \left \{ \dfrac{2x}{(x^2+1)} -\dfrac{ 2x}{(x^2-1)} \right \} \\ &=& \dfrac{1}{4} \dfrac{x^2+1}{x^2-1} \cdot 2x \dfrac{(x^2-1) - (x^2+1)}{(x^2+1)(x^2-1)} \\ &=& \dfrac{x^2+1}{x^2-1} \dfrac{-x}{(x^2+1)(x^2-1)} \\ &=& \dfrac{-x}{(x^2-1)^2}. \end{eqnarray} But this implies, \begin{equation} \dfrac{-x}{(x^2-1)^2} = \Bigl( \dfrac{x^2+1}{x^2-1} \Bigr)^{-3/4} \dfrac{-x}{(x^2-1)^2}. \end{equation} Where is the mistake?
I believe you forgot a power 1/4 when substituting for $y$ (in the calculation using logarithms). Edited to explain further: In your calculation, you write \begin{align} \frac{dy}{dx} &= y\frac14 \left\{ \frac{2x}{(x^2+1)} - \frac{2x}{(x^2-1)} \right\} \\ &= \frac14 \frac{x^2+1}{x^2-1} \cdot 2x\frac{(x^2-1)-(x^2+1)}{(x^2+1)(x^2-1)}. \end{align} However, this should be \begin{align} \frac{dy}{dx} &= y\frac14 \left\{ \frac{2x}{(x^2+1)} - \frac{2x}{(x^2-1)} \right\} \\ &= \frac14 \color{red}{\left(\color{black}{\frac{x^2+1}{x^2-1}}\right)^{\frac14}} \cdot 2x\frac{(x^2-1)-(x^2+1)}{(x^2+1)(x^2-1)}. \end{align}
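Carrying the corrected line through shows the two methods agree, as they must: $$\frac{dy}{dx}=\left(\frac{x^2+1}{x^2-1}\right)^{\frac14}\frac{-x}{(x^2+1)(x^2-1)}=\left(\frac{x^2+1}{x^2-1}\right)^{\frac14}\cdot\frac{x^2-1}{x^2+1}\cdot\frac{-x}{(x^2-1)^2}=\left(\frac{x^2+1}{x^2-1}\right)^{-\frac34}\frac{-x}{(x^2-1)^2}.$$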
Why is $E(Z | Y) = 0$? Let $Z$ be a random variable distributed $\mathcal{N}(0, 1)$. Let $Y = Z^2$. Apparently, $E(Z \mid Y) = E(Z \mid Z^2) = 0$ due to "symmetry." Why is that?
For completion, note that for all measurable $f$ such that $E(|f(Z)|) < \infty$, $$E(f(Z)\mid Y) = \frac{f(\sqrt{Y}) + f(-\sqrt{Y})}{2}.$$ Here $f\colon x\mapsto x$ is odd, hence $E(Z\mid Y) = \frac{\sqrt{Y}-\sqrt{Y}}{2}=0$. Another example of interest: if you take $f(x)=e^{i\theta x}$, you get $E(e^{i\theta Z} \mid Y) = \cos(\theta \sqrt{Y})$ (note $\sqrt{Y}=|Z|$).
When generating set is not a basis If a generating set of a vector space made up of linearly independent vectors constitutes a basis, then when such a set is not a basis, does that mean its vectors are linearly dependent?
Yes. Let $S$ be the generating set and let $B\subseteq S$ be a basis contained in it. If $B\neq S$, then there exists $u\in S\setminus B$. Since $B$ is a basis, it follows that $u=c_1v_1+c_2v_2+\dots+c_nv_n$ for some $v_1,v_2,\dots,v_n\in B$. From this it follows that $\{u,v_1,\dots,v_n\}$ is linearly dependent. Hence, $S$ is not linearly independent.
How to simplify polynomials I can't figure out how to simplify this polynomial $$5x^2+3x^4-7x^3+5x+8+2x^2-4x+9-6x^2+7x$$ I tried combining like terms $$5x^2+3x^4-7x^3+5x+8+2x^2-4x+9-6x^2+7x$$ $$(5x^2+5x)+3x^4-(7x^3+7x)+2x^2-4x-6x^2+(8+9)$$ $$5x^3+3x^4-7x^4+2x^2-4x-6x^2+17$$ It says the answer is $$3x^4-7x^3+x^2+8x+17$$ but how did they get it?
You cannot combine terms like that, you have to split your terms by powers of $x$. So for example $$5x^2+5x+2x^2 = (5+2)x^2+5x = 7x^2+5x$$ and not $5x^3+2x^2$. Using this, you should end up with your answer.
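To spell out the grouping the answer describes, with each coefficient collected by power of $x$: $$5x^2+3x^4-7x^3+5x+8+2x^2-4x+9-6x^2+7x = 3x^4-7x^3+(5+2-6)x^2+(5-4+7)x+(8+9) = 3x^4-7x^3+x^2+8x+17.$$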
Find conditions on $a$ and $b$ such that the splitting field of $x^3 +ax+b $ has degree of extension 3 Find conditions on $a$ and $b$ such that the splitting field of $x^3 +ax+b \in \mathbb Q[x]$ has degree of extension 3 over $\mathbb Q$. I'm trying to solve this question; it seems very difficult to me, maybe because I don't know Galois Theory yet (the content of the next chapters). I need help to prove this without Galois Theory. Thanks
Partial answer: Let $f(x)=x^3+ax+b$, let $K$ be its splitting field, and $\alpha$, $\beta$ and $\gamma$ be the roots of $f$ in $K$. First of all, $f$ has to be irreducible, which is the same as saying it doesn't have a rational root: if it's not, and say $\alpha$ is rational, then $f(x)$ factors as $(x-\alpha)g(x)$ with $g$ a quadratic polynomial; then $K$ is also the splitting field of $g$, so it has degree $\leq2$. So let's assume that $f$ is irreducible. The extension $\mathbf Q(\alpha)$ has degree $\deg(f)=3$ over $\mathbf Q$, so if we want $[K:\mathbf Q]=3$, we need $K=\mathbf Q(\alpha)$. In other words, we want the two other roots $\beta,\gamma$ to be in $\mathbf Q(\alpha)$. Let's look at the relation between roots and coefficients for $f$: $$ \alpha+\beta+\gamma=0\\ \alpha\beta+\beta\gamma+\gamma\alpha=a\\ \alpha\beta\gamma=-b $$ From the first and the third equation, you see that $\beta+\gamma=-\alpha$ and $\beta\gamma=-b/\alpha=\alpha^2+a$, so $\beta$ and $\gamma$ are the roots of the second degree polynomial $g(y)=y^2+\alpha y +\alpha^2+a\in\mathbf Q(\alpha)[y]$. Those roots are in $\mathbf Q(\alpha)$ if, and only if, the discriminant $\Delta=-3\alpha^2-4a$ of $g$ is a square in $\mathbf Q(\alpha)$. Now, the problem is that I'm not sure how to determine whether $\Delta$ is a square or not... (see the remark below).
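A remark completing the partial answer (a standard fact, stated here without proof, so treat it as a pointer rather than part of the original argument): the condition on $\Delta$ can be rewritten purely over $\mathbf Q$ in terms of the discriminant of $f$ itself, $$[K:\mathbf Q]=3\iff f\text{ is irreducible over }\mathbf Q\text{ and }\operatorname{disc}(f)=-4a^3-27b^2\text{ is a square in }\mathbf Q,$$ since $\operatorname{disc}(f)=(\alpha-\beta)^2(\beta-\gamma)^2(\gamma-\alpha)^2$ and adjoining $\sqrt{\operatorname{disc}(f)}$ is exactly what decides whether the splitting field has degree $3$ or $6$.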
If $n$ is a natural number $\ge 2$ how do I prove that any graph with $n$ vertices has at least two vertices of the same degree? Any help would be appreciated. If $n$ is a natural number $\ge 2$ how do I prove that any graph with $n$ vertices has at least two vertices of the same degree?
HINT: The possible degrees of a simple graph with $n$ vertices are the $n$ integers $0,1,\dots,n-1$. However, a simple graph on $n$ vertices cannot have both a vertex of degree $0$ and a vertex of degree $n-1$; why? That means that either the degrees of the $n$ vertices are all in the set $\{0,1,\dots,n-2\}$, or they’re all in the set $\{1,2,\dots,n-1\}$. How many numbers are in each of those sets? (In case that’s not enough of a hint, here is a further hint: the pigeonhole principle.)
Proving every monotonic function on an interval is integrable I am trying to understand the proof that every monotonic function on an interval is integrable. This is what I have (here $f$ is increasing and the partition is uniform, so $t_k - t_{k-1}$ is constant): $U(f, P) - L(f, P) = \sum\limits_{k=1}^n(f(t_k) - f(t_{k-1}))\cdot (t_k - t_{k-1})$ Now my book says that this is equal to: $= (f(b) - f(a))\cdot(t_k - t_{k-1})$ How does one deduce that $\sum\limits_{k=1}^n(f(t_k) - f(t_{k-1})) = f(b) - f(a)$?
Note that \begin{equation*} \sum_{k=1}^{n}(f(t_{k})-f(t_{k-1}))=(f(t_{1})-f(t_{0}))+(f(t_{2})-f(t_{1}))+(f(t_{3})-f(t_{2}))+...+(f(t_{n})-f(t_{n-1})), \end{equation*} so in the $i$:th term of the sum, $-f(t_{i-1})$ always cancels the term $+f(t_{i-1})$ appearing in the $(i-1)$:th term. Hence you are left with only the endpoint terms, i.e. $f(t_n)-f(t_0)=f(b)-f(a)$.
Proving that $\|u_1\|^2+\|w_1\|^2=\|u_2\|^2+\|w_2\|^2$ If $u_1+w_1=u_2+w_2$ and $\langle u_1,w_1\rangle=0=\langle u_2,w_2\rangle$, how can we prove that $$\|u_1\|^2+\|w_1\|^2=\|u_2\|^2+\|w_2\|^2$$ I know I can expand this to $$\langle u_1,u_1\rangle+\langle w_1,w_1\rangle=\langle u_2,u_2\rangle+\langle w_2,w_2\rangle$$ but from here what can I do with that?
From $u_1+w_1=u_2+w_2$ and $\langle u_1,w_1\rangle=0=\langle u_2,w_2\rangle$, we have $\|u_1\|^2+\|w_1\|^2$ $=\langle u_1,u_1\rangle+\langle w_1,w_1\rangle+2\langle u_1,w_1\rangle$ $=\langle u_1+w_1,u_1+w_1\rangle$ $=\langle u_2+w_2,u_2+w_2\rangle$ $=\langle u_2,u_2\rangle+\langle w_2,w_2\rangle+2\langle u_2,w_2\rangle$ $=\|u_2\|^2+\|w_2\|^2$
Conditions for the mean value theorem the mean value theorem which most of us know starts with the conditions that $f$ is continuous on the closed interval $[a,b]$ and differentiable on the open interval $(a,b)$, then there exists a $c \in (a,b)$, where $\frac{f(b)-f(a)}{b-a} = f'(c)$. I'm guessing we're then able to take $\forall a', b' \in [a,b]$ with $c \in (a', b')$ and the mean value theorem is correspondingly valid. However, are we then able to start with $\forall c \in (a,b)$ and then claim that there exist $a',b' \in [a,b]$, where the mean value theorem is still valid?
The answer is No. Consider $y=f(x)=x^3$ and $c=0$. $f'(c)=0$ but no secant line has a zero slope as ${{f(r)-f(s)}\over{r-s}}=r^2+s^2+rs>0$.
Is $Y_s=sB_{1\over s},s>0$ a Brownian motion Suppose $\{B_s,s>0\}$ is a standard Brownian motion process. Is $Y_s=sB_{1\over s},\ s>0$ a (standard) Brownian motion? I have found that $Y_0=0$ (extending by continuity) and $Y_s\sim N(0,s)$, since $B_{1/s}\sim N(0,{1\over s})$ gives $\operatorname{Var}(sB_{1/s})=s^2\cdot{1\over s}=s$; so it remains to show that it has stationary and independent increments. But I am not sure how to do it.
Have you heard of Gaussian processes ? If you have, you only have to check that $(Y_s)$ has the same covariance function as the Brownian motion. If you haven't, don't worry, it's very simple here: you are interested in the law of the couple $(sB_{1/s},tB_{1/t}-sB_{1/s})$ when $0 < s <t$. This is a 2 dimensional centered Gaussian vector, so its law is entirely determined by its covariance matrix. In the end, you have to compute $E(sB_{1/s} tB_{1/t})$.
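For instance, the covariance check the answer suggests comes out right: for $0<s\le t$ (so $\tfrac1t\le\tfrac1s$), using $E(B_uB_v)=\min(u,v)$, $$E(Y_sY_t)=st\,E\big(B_{1/s}B_{1/t}\big)=st\min\big(\tfrac1s,\tfrac1t\big)=st\cdot\tfrac1t=s=\min(s,t),$$ which is exactly the covariance function of standard Brownian motion.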
Representation of linear functionals I have always seen the linear functionals in $R^n$ expressed as $\ell(x) = \sum_{i=0}^n a_ix_i$, and in a countable metric space $\ell(x) = \sum_{i=0}^{\infty} a_ix_i$. I guess that this follows directly from http://en.wikipedia.org/wiki/Riesz_representation_theorem, for Hilbert spaces. But what if we are not in a Hilbert space, or if the space is uncountable? If X were a 1 dimensional space I would get $f(x) = f(1)x$ by continuity and linearity (by derivation and integration) and by partial derivation it would look like $f(x) = \sum_{i=0}^n f(1)x_i$ for the n dimensional case
I can think of a nice representation theorem that holds in a non-Hilbert space. It goes by the name Riesz-Kakutani-Markov: Let $X$ be a compact Hausdorff space and $(C(X),\|\cdot\|_\infty)$ the space of continuous real valued functions on $X$ endowed with the maximum norm. Then, every bounded linear functional $F$ on $C(X)$ can be written as an integral against a signed, finite Borel measure $\mu$ on $X$: $$ F(f)=\int_X fd\mu $$ with norm $$ \|F\|=\int_X\vert d\mu\vert $$ where $\vert d\mu\vert$ is the total variation of $\mu$. A good resource for this theorem is Lax: Functional Analysis. Granted, this is more sophisticated than the Riesz representation theorem on Hilbert spaces, but that's to be expected.
Why does Lenstra ECM work? I came across Lenstra ECM algorithm and I wonder why it works. Please refer for simplicity to Wikipedia section Why does the algorithm work I am NOT a math expert but I understood the first part well enough (I suppose); what I miss is When this is the case, $kP$ does not exist on the original curve, and in the computations we found some $v$ with either $\text{gcd}(v,p)=p$ or $\text{gcd}(v,q)=q$, but not both. That is, $\text{gcd}(v,n)$ gave a non-trivial factor of $n$. As far as I know this has to do with the fact that $E(\mathbb Z/n\mathbb Z)$ is not a group if $n$ is not prime so some element (i.e. $x_1-x_2$) is not invertible but what is the link between non invertible elements and $n$ factors? Thanks to everyone
Read this paper; this is just a piece of the whole paper.
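To address the specific question about the link (a sketch of my own, not from the paper): in $\mathbb Z/n\mathbb Z$ an element $v$ is invertible exactly when $\gcd(v,n)=1$, so when the curve-addition formulas try to divide by some $v$ (such as $x_1-x_2$ or $2y_1$) and the division fails, the Euclidean algorithm has already handed you $\gcd(v,n)>1$ -- a factor of $n$, nontrivial unless it equals $n$ itself. A minimal illustrative Python sketch of that failing addition step (hypothetical helper names; Python 3.8+ for `pow(..., -1, n)`):

```python
from math import gcd

def ec_add(P, Q, a, n):
    """Add P and Q on y^2 = x^3 + a*x + b over Z/nZ.
    Raises ValueError carrying gcd(v, n) when a denominator v
    is not invertible mod n -- the event Lenstra's ECM harvests."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % n == 0:
        return None  # the point at infinity
    # slope numerator/denominator: chord if P != Q, tangent if P == Q
    if P != Q:
        num, den = (y2 - y1) % n, (x2 - x1) % n
    else:
        num, den = (3 * x1 * x1 + a) % n, (2 * y1) % n
    g = gcd(den, n)
    if g > 1:
        raise ValueError(g)  # g divides n; with luck 1 < g < n
    s = num * pow(den, -1, n) % n
    x3 = (s * s - x1 - x2) % n
    return (x3, (s * (x1 - x3) - y1) % n)
```

In Lenstra's method one computes $kP$ for a highly composite $k$ on a random curve; the computation "fails" precisely when $kP$ is the identity modulo one prime factor of $n$ but not modulo another, and that failure delivers the factor.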
Simple and intuitive example for Zorn's Lemma Do you know any example that demonstrates Zorn's Lemma simply and intuitively? I know some applications of it, but in these proofs I find the application of Zorn's Lemma not very intuitive.
Zorn's lemma is not intuitive. It only becomes intuitive when you get comfortable with it and take it for granted. The problem is that Zorn's lemma is not counterintuitive either. It just is. The idea is that if every chain has an upper bound, then there is a maximal element. I think that the most intuitive usage is the proof that every chain can be extended to a maximal chain. Let $(P,\leq)$ be a partially ordered set and $C$ a chain in $P$. Let $T=\{D\subseteq P\mid C\subseteq D\text{ and }D\text{ is a chain}\}$. Then $(T,\subseteq)$ has the Zorn property, because a chain in $T$ is an increasing collection of chains. The $\subseteq$-increasing union of chains is a chain as well, so there is an upper bound. By Zorn there is a maximal element, and it is a maximal chain by definition. If you search on this site "Zorn's lemma" you can find more than a handful of examples with slightly more detailed discussions and other applications of Zorn's lemma. Here is a quick list from what I found:
* Is there any motivation for Zorn's Lemma?
* Every Hilbert space has an orthonormal basis - using Zorn's Lemma
* How does this statement on posets follow from Zorn's lemma?
Periodic parametric curve on cylinder Given the cylinder surface $S=\{(x,y,z):x^2+2y^2=C\}$. Let $\gamma(t)=(x(t),y(t),z(t))$ satisfy $\gamma'(t)=(2y(t)(z(t)-1),-x(t)(z(t)-1),x(t)y(t))$. Can we guarantee that $\gamma$ always stays on $S$ and is periodic if $\gamma(0)$ is on $S$?
We can reparameterize $S=\{(\sqrt{C}\cos u,\frac{\sqrt{C}}{\sqrt{2}}\sin u, v): u,v\in \mathbb{R}\}$ since $(\sqrt{C}\cos u)^2+2\left(\frac{\sqrt{C}}{\sqrt{2}}\sin u\right)^2=C$. Let $r(t)= (x(t),y(t),z(t))$ with $r(0)=(x_0,y_0,z_0)\in S$, and define $V(x,y,z)=x^2+2y^2$. By the chain rule, along the flow $$\frac{dV}{dt}=\nabla V\cdot(x',y',z')=2x\cdot 2y(z-1)+4y\cdot(-x)(z-1)=0,$$ so $V$ is constant along the trajectory: $V(r(t))=V(r(0))=C$ for all $t$, and $r(t)$ stays on $S$. For periodicity, the same kind of computation produces a second conserved quantity: with $W(x,y,z)=y^2+(z-1)^2$, $$\frac{dW}{dt}=2yy'+2(z-1)z'=-2xy(z-1)+2xy(z-1)=0.$$ Hence the trajectory stays on the intersection of the elliptic cylinder $x^2+2y^2=C$ with the circular cylinder $y^2+(z-1)^2=W(r(0))$, which is a compact set; in particular the solution exists for all time. Whenever this intersection is a closed curve containing no equilibrium of the vector field, the orbit must traverse the whole curve and return to its starting point, so $\gamma$ is periodic. (One caveat: the points $(0,y,1)$ are equilibria, and they lie on $S$ when $2y^2=C$; if the level curve of $W$ through $r(0)$ passes through such a point, periodicity can fail, so "always" should be read as "away from this degenerate case".)
Advanced integration, how to integrate 1/polynomial? Thanks I have been trying to integrate a function with a polynomial as the denominator, i.e., how would I go about integrating $$\frac{1}{ax^2+bx+c}.$$ Any help at all with this would be much appreciated, thanks a lot :) ps The polynomial has NO real roots
If $a=0$, then $$I = \int \dfrac{dx}{bx+c} = \dfrac1b \log (\vert bx+c \vert) + \text{constant}$$ If $a \neq 0$, complete the square: $$I = \int \dfrac{dx}{ax^2 + bx + c} = \dfrac1a \int\dfrac{dx}{\left(x + \dfrac{b}{2a}\right)^2 + \left(\dfrac{c}a- \dfrac{b^2}{4a^2} \right)}$$ If $b^2 < 4ac$, then recall that $$\int \dfrac{dt}{t^2 + a^2} = \dfrac1a\arctan \left(\dfrac{t}a \right) + \text{constant}$$ Hence, if $b^2 < 4ac$, then $$I = \dfrac2{\sqrt{4ac-b^2}} \arctan \left( \dfrac{2ax + b}{\sqrt{4ac-b^2}} \right) + \text{constant}$$ If $b^2 = 4ac$, then $$I =\dfrac1a \dfrac{-1}{\left(x + \dfrac{b}{2a}\right)} + \text{constant} = - \dfrac2{2ax+b} + \text{constant}$$ If $b^2 > 4ac$, then $$I = \dfrac1a \int\dfrac{dx}{\left(x + \dfrac{b}{2a}\right)^2 - \sqrt{\left( \dfrac{b^2}{4a^2} -\dfrac{c}a\right)}^2}$$ Now $$\int \dfrac{dt}{t^2 - k^2} = \dfrac1{2k} \left(\int \dfrac{dt}{t-k} - \int \dfrac{dt}{t+k} \right) = \dfrac1{2k} \log \left(\left \vert \dfrac{t-k}{t+k} \right \vert \right) + \text{constant}$$ Hence, if $b^2 > 4ac$, then $$I = \dfrac1{\sqrt{b^2-4ac}} \log \left(\left \vert \dfrac{2ax + b - \sqrt{b^2-4ac}}{2ax + b + \sqrt{b^2-4ac}} \right \vert \right) + \text{constant}$$
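As a concrete instance of the $b^2 < 4ac$ case (the one the question is about, since "no real roots" means negative discriminant), take $a=b=c=1$, so $4ac-b^2=3$: $$\int\frac{dx}{x^2+x+1}=\frac{2}{\sqrt3}\arctan\left(\frac{2x+1}{\sqrt3}\right)+\text{constant}.$$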
Empirical distribution vs. the true one: How fast $KL( \hat{P}_n || Q)$ converges to $KL( P || Q)$? Let $X_1,X_2,\dots$ be i.i.d. samples drawn from a discrete space $\mathcal{X}$ according to probability distribution $P$, and denote the resulting empirical distribution based on n samples by $\hat{P}_n$. Also let $Q$ be an arbitrary distribution. It is clear that (KL-divergence) \begin{equation} KL( \hat{P}_n || Q) \stackrel{n\rightarrow \infty}{\longrightarrow} KL(P || Q) \end{equation} but I am wondering if there exist any known quantitative rate of convergence for it. I mean if it can be shown that \begin{equation} \Pr\Big[ | KL( \hat{P}_n || Q) - KL(P || Q) | \geq \delta\Big] \leq f(\delta, n, |\mathcal{X}|) \end{equation} and what is the best expression for the RHS if there is any. Thanks a lot!
In addition to the last answer, the most popular concentration inequality for the KL divergence is for finite alphabets. You can look for Thm. 11.2.1 of "Elements of Information Theory" by Thomas Cover and Joy Thomas: $$\mathbf{P}\left(D(\hat{P}_n\|P)\geq\epsilon\right)\leq e^{-n\left(\epsilon-|\mathcal{X}|\frac{\log(n+1)}{n}\right)}$$
Proving the uncountability of $[a,b]$ and $(a,b)$ I am trying to prove that $[a,b]$ and $(a,b)$ are uncountable for $a,b\in \mathbb{R}$. I looked up Rudin and I am not too inclined to read the chapter on topology, for his proof involves perfect sets. Can anyone please point me to a proof of the above facts without point-set topology? I am thinking along these lines: $\mathbb{R}$ is uncountable. If we can show that there exists a bijection between $(a,b)$ and $\mathbb{R}$ we can prove $(a,b)$ is uncountable. But I am not sure how to construct such a bijection.
$$\tan \left( \frac{\pi}{(b-a)} (x-\frac{a+b}{2})\right)$$ Basically $f(x)=\frac{\pi}{(b-a)} (x-\frac{a+b}{2})$ is the linear function such that $f(a)=-\frac{\pi}{2}$ and $f(b)=\frac{\pi}{2}$, and $\tan$ is a bijection from $(-\frac{\pi}{2},\frac{\pi}{2})$ onto $\mathbb{R}$, so the composition is a bijection from $(a,b)$ onto $\mathbb{R}$.
Maximize $x_1x_2+x_2x_3+\cdots+x_nx_1$ Let $x_1,x_2,\ldots,x_n$ be $n$ non-negative numbers ($n>2$) with a fixed sum $S$. What is the maximum of $x_1x_2+x_2x_3+\cdots+x_nx_1$?
I have to solve this in 3 parts: first for $n=3$, then for $n=4$ and finally for $n>4$. For $n=3$ we can take a transformation $x'_1=x'_2=(x_1+x_2)/2$ and $x'_3=x_3$. $\sum x_i$ remains fixed while $\sum x'_ix'_{i+1}-\sum x_ix_{i+1} = (x_1+x_2)^2/4-x_1x_2 = (x_1^2+2x_1x_2+x_2^2)/4-x_1x_2 = (x_1^2-2x_1x_2+x_2^2)/4 = (x_1-x_2)^2/4$, which is $>0$ if $x_1$ differs from $x_2$. So an optimal solution must have the first two terms equal (otherwise we can apply this transformation and obtain a higher value); but you can cycle the terms, so they must all be equal to $S/3$, for a total sum of $S^2/3$. For $n=4$ the transformation $x'_1=x_1+x_3$, $x'_3=0$ doesn't change the result, so we take an optimal solution, sum the odd and even terms, and the problem becomes finding the maximum of $(\sum x_{odd})(\sum x_{even})$, which is maximized when both factors equal $S/2$, for a total of $S^2/4$. For $n>4$, I have to prove this lemma first: For $n>4$, there is at least one optimal configuration that has at least one index $i$ such that $x_i=0$. Take a configuration that is optimal and such that every $x_i>0$ and $x_1 = \max(x_i)$. Now use the following transformation: $x'_2=x_2+x_4$, $x'_4=0$. $\sum x_i$ remains the same but $\sum x'_ix'_{i+1}-\sum x_ix_{i+1}=x_1(x_2+x_4)+(x_2+x_4)x_3+\sum_{i>4}x_ix_{i+1}-\sum x_ix_{i+1} = x_1x_4-x_4x_5 = x_4(x_1-x_5) = x_4(\max(x_i)-x_5) \geq x_4(x_5-x_5) = 0.$ So we have another optimal solution with a $0$. Given that at least one optimal solution contains a $0$ for every $n>4$, the maximum value of $\sum x_ix_{i+1}$ must be non-increasing in $n$ (otherwise we could take an optimal solution for $n$ with a $0$ inside, remove that $0$, and obtain a higher value for $n-1$). Now the value of the sum must be $\leq S^2/4$, but taking $x_1=x_2=S/2$ achieves that sum, so that configuration is an optimal one, for a sum of $S^2/4$. This proves that the maximum is $S^2/3$ if $n=3$ and $S^2/4$ otherwise. I am not satisfied with this answer, because it comes down to a lot of case analysis. I am still curious to see a simpler proof (or one that requires less space..).
Proving that $x^a=x^{a\,\bmod\,{\phi(m)}} \pmod m$ I want to prove $x^a \equiv x^{a\,\bmod\,8} \pmod{15}$.....(1) My logic: here, since $\mathrm{gcd}(x,15)=1$, and $15$ has prime factors $3$ and $5$ (given), we can apply Euler's theorem. We know that $a= rem + 8q$, where $8= \phi(15)$, so $x^a \equiv x^{rem}\cdot (x^8)^q \pmod{15}$......(2) Applying Euler's theorem we get: $x^a \equiv x^{rem} \pmod{15}$......(3) Is this proof correct, or should I end up getting $x^a \equiv x^a \pmod {15}$...(4)
If $b\equiv a\pmod m$, then $b\bmod m$ will be equal to $a\iff 0\le a<m$. For example, $m-2\equiv m-2\pmod m$ and $13\equiv 13\pmod {15}$; but $m+2\equiv 2\pmod m$ and $17\equiv 2\pmod {15}$. If $b\equiv c\pmod{\phi(m)}$, i.e., if $b=c+d\,\phi(m)$, then $y^b=y^{c+d\phi(m)}=y^c\cdot(y^{\phi(m)})^d\equiv y^c \pmod m$ for $(y,m)=1$. Here $\phi(15)=\phi(3)\phi(5)=2\cdot4=8$. Observe that this condition is in no way necessary, as proved below. If $y^b\equiv y^c\pmod m$ where $b\ge c$ and $(y,m)=1$, then $y^{b-c}\equiv1\pmod m\iff \operatorname{ord}_m y\mid (b-c)$; and $b-c$ need not be divisible by $\phi(m)$ unless $\operatorname{ord}_m y=\phi(m)$, i.e., unless $y$ is a primitive root of $m$.
Element Argument Proofs - Set theory This is an exercise on my study guide for my discrete applications class. Prove by element argument: A × (B ∩ C) = (A × B) ∩ (A × C) Now I know that this is the distributive law, but I'm not sure if this proof would work in the exact same way as a union problem would, because I know how to solve that one. Here is my thinking thus far: Proof: Suppose A, B, and C are sets.
* A × (B ∩ C) = (A × B) ∩ (A × C)
* Case 1 (a is a member of A): if a belongs to A, then by the definition of the cartesian product, a is also a member of A x B and A x C. By definition of intersection, a belongs to (A × B) ∩ (A × C).
* Case 2 (a is a member of B ∩ C): a is a member of both B and C by intersection. a is a member of (A × B) ∩ (A × C) by the definition of intersection.
* By definition of a subset, (A × B) ∩ (A × C) is a subset of A × (B ∩ C).
* Therefore A × (B ∩ C) = (A × B) ∩ (A × C).
Is that at least a little right? Thanks.
No, you're not doing it completely right: the cartesian product produces pairs of elements, one from each set. The definition of the cartesian product: Def. $X\times Y = \{ (x,y) : x \in X\text{ and }y \in Y \}$. PROOF. $Z = A \times (B \cap C) = \{ (a,y) : a \in A\text{ and }y \in B \cap C \}$ $W = (A \times B) \cap (A \times C) = \{ (a,b) : a \in A\text{ and }b \in B \} \cap \{ (a,c) : a \in A\text{ and }c \in C \}$ Take any $a \in A$ and $b \in B$. Case 1. $b \in C$. Then $b\in B\cap C$, so $(a,b) \in Z$; also $(a,b)\in A\times B$ and $(a,b)\in A\times C$, so $(a,b) \in W$. Case 2. $b \notin C$. Then $b$ is not in $B \cap C$, so $(a,b)$ is not in $Z$; and $(a,b)$ is not in $A \times C$, so it's not in $W$. A symmetric case analysis handles pairs whose second coordinate is in $C$: since $C \cap B$ is equivalent to $B \cap C$, relabel $B$ as $C$ and vice versa, and apply case 1 and case 2. Hence $Z$ and $W$ contain exactly the same pairs. QED.
Definition of a tangent I've been involved in a discussion on the definition of a tangent and would appreciate a bit of help. At my high school and my college, I was taught that a definition of a tangent is 'a line that intersects a given curve at two infinitesimally close points.' Aside from the possibility that the tangent may elsewhere intersect the curve, to me, it was both intuitive and concise, but apparently, I'd have more chance of locating a set of hen's teeth than finding a similar definition online... Has anybody else encountered such a definition, or may have an objection to it (or an opinion on it, for that matter)? Thanks in advance.
Given a curve $y = f(x)$ in an $xy$-coordinate system a tangent to the curve at the point $(a,f(a))$ is a straight line ($y = mx + b$) with slope $m = f'(a)$. I have never heard about the definition that you talk about. There are ways to "think" about what a tangent is. If you consider the definition of a derivative then it involves limits. And limits are where one would talk about stuff like things being "infinitesimally" close. Note that in math there isn't much room for opinions. Either the definition is correct or it is not. However, we often invent ways to think about certain definitions that make them intuitive for us. However, one always has to be careful not to make the picture that you have in your head the definition.
Proof: in $\mathbb{R}$, $((0,1),|\cdot|)$ is not compact. Let $(M,d)$ be a metric space, and $A\subset M$. By definition, $A$ is said to be compact if every open cover of $A$ contains a finite subcover. What is wrong with saying that, in $\mathbb{R}$, if $I=(0,1)$, we can choose $G=\{(0,\frac{3}{4}), (\frac{1}{4}, 1)\}$, which satisfies $I \subset \bigcup_{U\in G} U$, but we can't extract a finite subcover, so $I$ is not compact. Is $G$ a finite subcover of $G$, so it is not a valid cover for proving this? I would take $\cup_{n\in\mathbb{N}} (\frac{1}{n},1)$ in order to prove this; can we conclude that every open cover is necessarily an infinite union of open sets $\neq \emptyset$?
Your counterexample (the open cover $\{(1/n,1)\}_{n \in \mathbb N}$) actually works: it has no finite subcover, therefore $(0,1)$ is not compact. Not every open cover is infinite; again, your $G$ is a counterexample (it is a finite cover). Note: you can take $G$ as a finite subcover of $G$ itself, so $G$ does not show that $(0,1)$ is not compact.
Optimal Coding Scheme for given Weights I'm having trouble with this homework problem. Do I create the tree by beginning with each weight being a leaf? Then combining the lowest weighted leaves, and their parent becomes the sum of their weight? I got 85 as my answer for (b) but I'm not sure if this is the correct process Consider the weights: 10, 12, 13, 16, 17, 17. (a) Construct an optimal coding scheme for the weights given by using a tree. (b) What is the total weight of the tree you found in part (a)?
Yes, you first combine $10+12=22$, then $13+16=29$, then $17+17=34$, then $22+29=51$, finally $51+34=85$ (thus your answer for b). If we always represent the first choice with 0 and the second with 1, the respective code words are $$000,001, 010, 011, 10, 11.$$ I'm not sure if part b isn't rather referring to the weighted code word length, that is $\frac{3\cdot 10+3\cdot 12+3\cdot 13+3\cdot 16+2\cdot 17+2\cdot 17}{10+12+13+16+17+17}$.
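If you want to check such exercises mechanically, here is a minimal Huffman construction in Python (an illustration of the merging process above, not part of the original answer); it returns the sum of the merge weights, which equals $\sum_i d_i w_i$ (depth times weight over the leaves) -- the numerator of the weighted code word length mentioned above:

```python
import heapq

def huffman_merge_total(weights):
    """Repeatedly merge the two lightest weights; return the sum of all
    merge weights (= sum over leaves of depth * weight)."""
    heap = list(weights)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        total += a + b            # weight of the new internal node
        heapq.heappush(heap, a + b)
    return total

print(huffman_merge_total([10, 12, 13, 16, 17, 17]))
# 221 = 3*(10+12+13+16) + 2*(17+17), i.e. the merges 22, 29, 34, 51, 85 summed
```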
Showing that $W_1\subseteq W_1+W_2$ I found this question and answer on UCLA's website: Let $W_1$ and $W_2$ be subspaces of a vector space $V$. Prove that $W_1 +W_2$ is a subspace of $V$ that contains both $W_1$ and $W_2$. The answer given: First, we want to show that $W_1 \subseteq W_1 +W_2$. Choose $x \in W_1$. Since $W_2$ is a subspace, $0 \in W_2$ where $0$ is the zero vector of $V$. But $x = x + 0$ and $x \in W_1$. Thus, $x \in W_1 + W_2$ by definition. Ergo, $W_1 \subseteq W_1 + W_2$. We also must show that $W_2 \subseteq W_1 + W_2$, but this result is completely analogous (see if you can formalize it). My question: Why is it enough to show that $x + 0 \in W_1 + W_2$? $0$ is just one element in $W_2$; why don't we have to show, for example, $x + y \in W_1 + W_2$?
$W_1+W_2=\{w_1+w_2: w_1\in W_1,w_2\in W_2\}$. To show that an element belongs to this set, we just need to show that it can be written in the form $w_1+w_2$ for some $w_1\in W_1$ and some $w_2\in W_2$.
Want to show Quantifier elimination and completeness of this set of axioms... Let $\Sigma_\infty$ be a set of axioms in the language $\{\sim\}$ (where $\sim$ is a binary relation symbol) that states: (i) $\sim$ is an equivalence relation; (ii) every equivalence class is infinite; (iii) there are infinitely many equivalence classes. Show that $\Sigma_{\infty}$ admits QE and is complete. (It is given that it is also possible to use Vaught's test to prove completeness.) I think I have shown that $\Sigma_\infty$ admits QE, but am not sure how to show completeness. There is a theorem, however, that states that if a set of sentences $\Sigma$ has a model and admits QE, and there exists an $L$-structure that can be embedded in every model of $\Sigma$, then $\Sigma$ is complete. Thanks.
According to the last sentence in your question, all you need is an $L$-structure that can be embedded into every model of $\Sigma_\infty$. In fact, $\Sigma_\infty$ has a "smallest" model, one that embeds into all other models of $\Sigma_\infty$. I think this should be enough of a hint to enable you to find the model in question --- just make it as small as the axioms of $\Sigma_\infty$ permit.
Good software for linear/integer programming I never did any linear/integer programming so I am wondering the following two things:
* What are some efficient free linear programming solvers?
* What are some efficient commercial linear programming solvers?
It would be nice to supply a dummy usage example with each proposed answer. Also, what if I wish to solve an integer programming problem? What are the answers to the above two questions in this case? I know that integer LP is a hard problem but there are some relaxation methods that are sometimes employed in order to obtain a solution to an integer programming problem. Are there any software packages implementing this kind of stuff?
The Konrad-Zuse Institute in Berlin (ZIB), Germany provides a nice suite to solve all kinds of LP / ILP tasks. It includes:
* zimpl: a language to model mathematical programs
* SCIP: a mixed integer programming solver and constraint programming framework
* SoPlex: a linear programming solver
* and more
Best of all, it is free! And all implementations are reasonably fast. The state of the art in the commercial sector is probably IBM's CPLEX Studio. This is an expensive piece of software, but IBM has an academic program where you get free licenses. However it is a bit of a pain to apply. I used to work with the CPLEX package because it includes this nice modelling language ampl. However when the equivalent free zimpl came out, I switched to the more available ZIB package. (A generic usage example is sketched below.)
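Since the question asks for a dummy usage example: here is what feeding a small LP to a solver looks like, using SciPy's `linprog` purely as an illustration (SciPy is not one of the packages named above; it is just a free, convenient way to show the shape of the input):

```python
from scipy.optimize import linprog

# maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0.
# linprog minimizes, so we negate the objective coefficients.
c = [-3, -2]
A_ub = [[1, 1], [1, 3]]
b_ub = [4, 6]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimal point (4, 0) with objective value 12
```

For the integer version of the same model you would hand it to an MILP solver such as SCIP or CPLEX instead; solving the LP relaxation and then branching (branch-and-bound) is exactly the "relaxation" approach the question alludes to.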
Noncontinuity and an induced equivalence relation Can someone give me an example of a map which is not continuous such that $f(a) = f(b)$ induces an equivalence relation $a \sim b$?
Let $f:\{a,b,c,\}\to\{0,1\}$. Let $f(a)=f(b)=0$ and $f(c)=1$. Let $x\sim y$ precisely if $f(x)=f(y)$. Then we have \begin{align} a & \sim a \\ a & \sim b \\ a & \not\sim c \\ \\ b & \sim a \\ b & \sim b \\ b & \not\sim c \\ \\ c & \not\sim a \\ c & \not\sim b \\ c & \sim c \end{align} This is an equivalence relation on the set $\{a,b,c\}$ with two equivalence classes: $\{a,b\}$ and $\{c\}$.
Quotient of Gamma functions I am trying to find a clever way to compute the quotient of two gamma functions whose inputs differ by some integer. In other words, for some real value $x$ and an integer $n < x$, I want to find a way to compute $$ \frac{\Gamma(x)}{\Gamma(x-n)} $$ For $n=1$, the quotient it is simply $(x-1)$ since by definition $$ \Gamma(x) = (x - 1)\Gamma(x-1) $$ For $n=2$, it is also simple: $$ \frac{\Gamma(x)}{\Gamma(x-2)} = \frac{(x-1)\Gamma(x-1)}{\Gamma(x-2)} = (x-1)(x-2)$$ If we continue this pattern out to some arbitrary $n$, we get $$ \frac{\Gamma(x)}{\Gamma(x-n)} = \prod_{i=1}^n (x-i)$$ Obviously I am floundering a bit here. Can anyone help me find an efficient way to compute this quotient besides directly computing the two gamma functions and dividing? I am also okay if an efficient computation can be found in log space. Currently I am using a simple approximation of the log gamma function and taking the difference. This was necessary because the gamma function gets too big to store in any primitive data type for even smallish values of $x$.
I think you mean $$ \frac{\Gamma(x)}{\Gamma(x-n)} = \prod_{i=1}^{n} (x - i) $$ Of course this might not be very nice if $n$ is very large, in which case you might want to first compute the $\Gamma$ values and divide; but then (unless $x$ is very close to one of the integers $i$) the result will also be enormous. If you're looking for a numerical approximation rather than an exact value, you can use Stirling's approximation or its variants. EDIT: Note also that if your $x$ or $x-n$ might be negative, you may find the reflection formula useful: $$ \Gamma(z) \Gamma(1-z) = \frac{\pi}{\sin(\pi z)} $$ For $x > n$, $$ \eqalign{\ln\left(\frac{\Gamma(x)}{\Gamma(x-n)}\right) &= \sum_{i=1}^n \ln(x-i) = n \ln x + \sum_{i=1}^n \ln\left(1 - \frac{i}{x}\right)\cr &= n \ln x + \sum_{i=1}^n \left( - \frac{i}{x} - \frac{i^2}{2x^2} - \frac{i^3}{3x^3} - \frac{i^4}{4x^4} - \frac{i^5}{5x^5} - \frac{i^6}{6 x^6}\ldots \right)\cr &= n \ln x - \frac{n(n+1)}{2x} - \frac{n(n+1)(2n+1)}{12 x^2} - \frac{n^2 (n+1)^2}{12 x^3} \cr & -{\frac {n \left( n+ 1 \right) \left( 2\,n+1 \right) \left( 3\,{n}^{2}+3\,n-1 \right) }{120 { x}^{4}}}-\frac { \left( n+1 \right) ^{2}{n}^{2} \left( 2\,{n}^{2}+2\,n-1 \right) }{60 {x}^{5}} \ldots} $$ This provides excellent approximations as $x \to \infty$ for fixed $n$ (or $n$ growing much more slowly than $x$). On the other hand, for $n = tx$ with $0 < t < 1$ fixed, as $x \to \infty$ we have $$ \eqalign{\ln\left(\frac{\Gamma(x)}{\Gamma(x-n)}\right) &=(1-t) x \ln(x) - (t \ln(t) - t + 1) x +\frac{\ln(t)}{2}\cr &+{ \frac {t-1}{12tx}}+{\frac {1-{t}^{3}}{360{t}^{3}{x}^{3}} }+{\frac {{t}^{5}-1}{1260{t}^{5}{x}^{5}}}+\frac{1-t^7}{1680 t^7 x^7} \ldots }$$
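In code, the log-space computation the question describes is just the sum above; a quick Python check (illustrative) comparing the exact log-product with the lgamma difference:

```python
import math

def log_ratio_product(x, n):
    # log(Gamma(x)/Gamma(x-n)) = sum_{i=1}^{n} log(x - i), valid for x > n
    return sum(math.log(x - i) for i in range(1, n + 1))

def log_ratio_lgamma(x, n):
    return math.lgamma(x) - math.lgamma(x - n)

x, n = 50.3, 7
print(log_ratio_product(x, n), log_ratio_lgamma(x, n))
# the two agree to floating-point accuracy, with no overflow for large x
```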
Why is the Frobenius norm of a matrix greater than or equal to the spectral norm? How can one prove that $ \|A\|_2 \le \|A\|_F $ without using $ \|A\|_2^2 := \lambda_{\max}(A^TA) $? It makes sense that the $2$-norm would be less than or equal to the Frobenius norm but I don't know how to prove it. I do know: $$\|A\|_2 = \max_{\|x\|_2 = 1} {\|Ax\|_2}$$ and I know I can define the Frobenius norm to be: $$\|A\|_F^2 = \sum_{j=1}^n {\|Ae_j\|_2^2}$$ but I don't see how this could help. I don't know how else to compare the two norms though.
In fact, the proof from $\left\| \mathbf{A}\right\|_2 =\max_{\left\| \mathbf{x}\right\|_2=1} \left\| \mathbf{Ax} \right\|_2$ to $\left\| \mathbf{A}\right\|_2 = \sqrt{\lambda_{\max}(\mathbf{A}^H \mathbf{A})}$ is straightforward. We can first simply prove that when $\mathbf{P}$ is Hermitian $$ \lambda_{\max} = \max_{\| \mathbf{x} \|_2=1} \mathbf{x}^H \mathbf{Px}. $$ That's because when $\mathbf{P}$ is Hermitian, there exists a unitary matrix $\mathbf{U}$ that diagonalizes $\mathbf{P}$ as $\mathbf{U}^H \mathbf{PU}=\mathbf{D}$ (so $\mathbf{P}=\mathbf{UDU}^H$), where $\mathbf{D}$ is a diagonal matrix with eigenvalues of $\mathbf{P}$ on the diagonal, and the columns of $\mathbf{U}$ are the corresponding eigenvectors. Let $\mathbf{y}=\mathbf{U}^H \mathbf{x}$ and substitute $\mathbf{x} = \mathbf{Uy}$ into the optimization problem; we obtain $$ \max_{\| \mathbf{x} \|_2=1} \mathbf{x}^H \mathbf{Px} = \max_{\| \mathbf{y} \|_2=1} \mathbf{y}^H \mathbf{Dy} = \max_{\| \mathbf{y} \|_2=1} \sum_{i=1}^n \lambda_i |y_i|^2 \le \lambda_{\max} \max_{\| \mathbf{y} \|_2=1} \sum_{i=1}^n |y_i|^2 = \lambda_{\max} $$ Thus, just by choosing $\mathbf{x}$ as the eigenvector corresponding to the eigenvalue $\lambda_{\max}$, we get $\max_{\| \mathbf{x} \|_2=1} \mathbf{x}^H \mathbf{Px} = \lambda_{\max}$. This proves $\left\| \mathbf{A}\right\|_2 = \sqrt{\lambda_{\max}(\mathbf{A}^H \mathbf{A})}$. And then, because the $n\times n$ matrix $\mathbf{A}^H \mathbf{A}$ is positive semidefinite, all of its eigenvalues are not less than zero. Assuming $\text{rank}~\mathbf{A}^H \mathbf{A}=r$, we can put the eigenvalues into decreasing order: $$ \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_r > \lambda_{r+1} = \cdots = \lambda_n = 0. $$ Because for all $\mathbf{X}\in \mathbb{C}^{n\times n}$, $$ \text{trace}~\mathbf{X} = \sum\limits_{i=1}^{n} \lambda_i, $$ where $\lambda_i$, $i=1,2,\ldots,n$ are eigenvalues of $\mathbf{X}$; and besides, it's easy to verify $$ \left\| \mathbf{A}\right\|_F = \sqrt{\text{trace}~ \mathbf{A}^H \mathbf{A}}. $$ Thus, through $$ \sqrt{\lambda_1} \leq \sqrt{\sum_{i=1}^{n} \lambda_i} \leq \sqrt{r \cdot \lambda_1} $$ we have $$ \left\| \mathbf{A}\right\|_2 \leq \left\| \mathbf{A}\right\|_F \leq \sqrt{r} \left\| \mathbf{A}\right\|_2 $$
Can floor functions have inverses? Let $f:\mathbb{R}\to\mathbb{R}$, $f(x) = \lfloor \frac{x-2}{2} \rfloor$. If $T = \{2\}$, find $f^{-1}(T)$. Is $f^{-1}(T)$ the inverse or the "image", and how do you know that we're talking about the image and not the inverse? There shouldn't be any inverse since the function is not one-to-one, nor is it onto, since as a map $\mathbb{R}\to\mathbb{R}$ its values all lie in $\mathbb{Z}$.
Note this calculation: $[\frac{x-2}{2}]=2\Longrightarrow 2\leq\frac{x-2}{2}<3\Longrightarrow 4\leq x-2<6\Longrightarrow 6\leq x<8$, so the set $f^{-1}(\{2\})$ is equal to $[6,8)$. More generally, $\forall y\in\mathbb{Z}\; :\; f^{-1}(\{y\})=\{x\in\mathbb{R}\mid[\frac{x-2}{2}]=y\}=\{x\in\mathbb{R}\mid x\in[2y+2,2y+4)\}$. Finally, $\forall T\subset\mathbb{R}\; :\; f^{-1}(T)=\cup_{y\in T\cap\mathbb{Z}}[2y+2,2y+4)$, and if $T\cap\mathbb{Z}=\emptyset$ then $f^{-1}(T)=\emptyset$ too. So here $f^{-1}(T)$ denotes the preimage of $T$, not the value of an inverse function. As for the existence of inverse functions: if a function is one-to-one it has a left inverse, and if it is onto it has a right inverse; for both to exist it must be bijective. But we can always define the preimage map, which sends any subset of the codomain back to the set of elements whose value under $f$ lies in it, as we did above.
Number of ordered triplets $(x,y,z)$, $−10\leq x,y,z\leq 10$, $x^3+y^3+z^3=3xyz$ Let $x, y$ and $z$ be integers such that $−10\leq x,y,z\leq 10$. How many ordered triplets $(x,y,z)$ satisfy $x^3+y^3+z^3=3xyz$? ($x,y,z$ are allowed to be equal.) When I tried, I got that any one of $x,y,z$ can be $0$; I am not sure this is correct. And I got 21 as the answer, which I am sure is not right.
$\textbf{Hint}$: Note that $$x^3+y^3+z^3-3xyz=(x+y+z)(x^2+y^2+z^2-xy-yz-zx)=\frac{1}{2}(x+y+z)[(x-y)^2+(y-z)^2+(z-x)^2]=0$$ if and only if either $x+y+z=0$ or $x=y=z$. Now count the number of ordered triples for the first case using generating functions (a brute-force check is sketched below).
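If you want to sanity-check whatever count the generating-function argument produces, a brute-force enumeration (illustrative) is immediate:

```python
count = sum(1
            for x in range(-10, 11)
            for y in range(-10, 11)
            for z in range(-10, 11)
            if x**3 + y**3 + z**3 == 3 * x * y * z)
print(count)  # counts triples with x + y + z = 0 or x = y = z; they overlap only at (0,0,0)
```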
Subspace Preserved Under Addition of Elements? I'm trying to understand how to complete this proof about subspaces. I understand the basics of the definition of a subspace (i.e. the zero vector must be in it, and it must be closed under addition and scalar multiplication). But I'm confused as to how to show that the sum of two elements from two different sets stays within the same subspace. I'm pretty sure the zero vector exists because the zero vector is within C and D, but I'm unsure about the other two conditions. The complete problem is listed below. Problem: Let W be a vector space and C,D be two subspaces of W. Prove or disprove that { a + b | a $\in$ C, b $\in$ D} is also a subspace of W. Any help would be appreciated.
You have to show (among other things) that $Z=\{\,a+b:a\in C,\ b\in D\,\}$ is closed under addition. So, let $x$ and $y$ be in $Z$; you have to show $x+y$ is in $Z$. So, what is $x$? Well, it's in $Z$, so $x=a+b$ for some $a$ in $C$ and some $b$ in $D$. What's $y$? Well, it's also in $Z$, so $y=r+s$ for some $r$ in $C$ and some $s$ in $D$. Now what's $x+y$? Can you take it from here?
How do you find the smallest integer? $$\begin{align} (x-1) \;\text{mod}\; 11 &= 3x\; \text{mod}\; 11\\ 11&\lvert(3x-(x-1)) \\ 11&\lvert2x+1\\ x &= 5?\\ \end{align} $$ $$ \begin{align} (a-b)\; \text{mod}\; 5 &= (a+b)\;\text{mod}\;5\\ 5&\lvert a+b-a+b\\ 5&\lvert2b\\ b &= 5/2\\ a &= \text{any integer} \end{align} $$ I don't know how to solve this type of problem. Can you tell me what I have to do generally step-by-step?
There are several ways; for example, look at linear Diophantine equations, but here is the congruence method: $ax\equiv b\;(mod\;m)$ is solvable $\Longleftrightarrow (a,m)=d\mid b$, and its solutions are the numbers in the congruence classes modulo $m$ of the form $(\frac{a}{d})^{*}(\frac{b}{d})+k\frac{m}{d}$ where $0\leq k\leq d-1$ and $(\frac{a}{d})^{*}$ is a multiplicative inverse of $\frac{a}{d}$ mod $\frac{m}{d}$. When $(a,m)=d\mid b$, we can say $(\frac{a}{d},\frac{m}{d})=1$, and Euler's theorem $y^{\phi(n)}\equiv 1\;(mod\;n)$ applied to $y=\frac{a}{d}$, $n=\frac{m}{d}$ gives $(\frac{a}{d})^{\phi(\frac{m}{d})-1}\cdot\frac{a}{d}\equiv 1\;(mod\;\frac{m}{d})$, so $(\frac{a}{d})^{\phi(\frac{m}{d})-1}$ is one multiplicative inverse of $\frac{a}{d}$ mod $\frac{m}{d}$. Having found the inverse and substituted it into $(\frac{a}{d})^{*}(\frac{b}{d})+k\frac{m}{d}$, the congruence classes modulo $m$ so obtained are exactly the answers you need. For example, let us work your first question: $x-1\equiv 3x\;(mod\;11)\Longleftrightarrow 2x\equiv -1\;(mod\;11)$. Because $(2,11)=1\mid -1$, it has a solution; here $d$ is $1$. First compute the inverse of $2$ mod $11$ (note that $d=1$!): $2^{*}=2^{\phi(11)-1}=2^{10-1}=2^{9}=512\equiv 6\;(mod\;11)$, so $2^{*}=6$. Now because $d-1=0$ we only put $k=0$, and the answer is $\{x\in\mathbb{Z}|[x]_{11}=[6\cdot(-1)+0\cdot 11]_{11}\}=$ $\{x\in\mathbb{Z}|[x]_{11}=[-6]_{11}\}=$ $\{x\in\mathbb{Z}|[x]_{11}=[5]_{11}\}=$ $\{11n+5|n\in\mathbb{Z}\}$, whose smallest non-negative member is $x=5$.
Basic definition of Inverse Limit in sheaf theory / schemes I read the book "Algebraic Geometry" by U. Görtz and whenever limits are involved I struggle for an understanding. The application of limits is mostly very basic, though; but I'm new to the concept of limits. My example (page 60 in the book): Let $A$ be an integral domain. The structure sheaf $O_X$ on $X = \text{Spec}A$ is given by $O_X(D(f)) = A_f$ ($f\in A$) and for any $U\subseteq X$ by \begin{align} O_X(U) &= \varprojlim_{D(f)\subseteq U} O_X(D(f)) \\ &:= \{ (s_{D(f)})_{D(f)\subseteq U} \in \prod_{D(f)\subseteq U} O_X(D(f)) \mid \text{for all } D(g) \subseteq D(f) \subseteq U: s_{D(f)\big|D(g)} = s_{D(g)}\} \\ &= \bigcap_{D(f)\subseteq U} A_f. \end{align} I simply don't understand the last equality: In my naive understanding the elements of the last set are "fractions" and the elements of the Inverse Limit are "families of fractions". Any hint is appreciated.
My personal advice is to study a bit of category theory: it will let you understand all this stuff in a much clearer way. In fact you can easily realize that the first equality is not a definition, but a way to express a limit of an arbitrary presheaf, while the second is an isomorphism, not exactly an equality, given by the universal property defining limits. I started with Hartshorne, but without category theory as a background it's just like wandering in the dark without even a candle with you.
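To address the last equality concretely (an unpacking of my own, not from the book): because $A$ is an integral domain, every $A_f$ (for $f\neq 0$) embeds into $K=\operatorname{Frac}(A)$, and whenever $D(g)\subseteq D(f)$ the restriction map $A_f\to A_g$ becomes an inclusion $A_f\subseteq A_g$ inside $K$. Given a compatible family $(s_{D(f)})$ and two indices $f,g$ with $D(f),D(g)\subseteq U$, both $s_{D(f)}$ and $s_{D(g)}$ restrict to $s_{D(fg)}$ on $D(fg)=D(f)\cap D(g)$, and since restriction is inclusion in $K$ this forces $s_{D(f)}=s_{D(g)}$ as elements of $K$. So a "family of fractions" is really one single fraction lying in every $A_f$, which is exactly an element of $\bigcap_{D(f)\subseteq U}A_f$.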
A Fourier transform using contour integral I try to evaluate $$\int_{-\infty}^\infty \frac{\sin^2 x}{x^2}e^{itx}\,dx$$ ($t$ real) using contour integrals, but encounter some difficulty. Perhaps someone can provide a hint. (I do not want to use convolution.)
An idea: defining $$f(z):=\frac{e^{itz}\sin^2z}{z^2}\,\,,\,\,C_R:=[-R,-\epsilon]\cup(-\gamma_\epsilon)\cup[\epsilon,R]\cup\gamma_R$$ with $$\gamma_k:=\{z\in\Bbb C\;;\;|z|=k\,,\,\arg z\geq 0\}=\{z\in\Bbb C\;;\;z=ke^{i\theta}\,\,,\,0\leq\theta\leq\pi\}$$ in the positive direction (check the minus sign in $\,\gamma_\epsilon\,$ above!). This works assuming $\,0<t\in\Bbb R\,$; Jordan's lemma is in the lemma, and its corollary in the answer, here
Applying the Thue-Siegel Theorem Let $p(n)$ be the greatest prime divisor of $n$. Chowla proved here that $p(n^2+1) > C \ln \ln n $ for some $C$ and all $n > 1$. At the beginning of the paper, he mentions briefly that the weaker result $\lim_{n \to \infty} p(n^2+1) = \infty$ can be proved by means of the Thue-Siegel theorem (note: it was written before Roth considerably improved the theorem).
* Can someone elaborate on this? I was able to reduce the problem to showing that the negative Pell equation $x^2-Dy^2 = -1$ (where $D$ is positive and squarefree) has a finite number of solutions such that $p(y)$ is bounded by some $C$.
* Can someone provide examples of applications of Thue-Siegel in Diophantine equations? In the Wikipedia article it is mentioned that "Thue realised that an exponent less than d would have applications to the solution of Diophantine equations", but I am not sure I am realising it...
Suppose that the prime factors of $n^2+1$ are all bounded by $N$ for infinitely many $n$. Then infinitely many integers $n^2 + 1$ can be written in the form $D y^3$ for one of finitely many $D$. Explicitly, the set of $D$ can be taken to be the finitely many integers whose prime divisors are all less than $N$, and whose exponents are at most $2$. For example, if $N=3$, then $D \in \{1,2,4,3,6,12,9,18,36\}$. Letting $x = n$, it follows that for at least one of those $D$, there are infinitely many solutions to the equation $$x^2 - D y^3 = -1.$$ From your original post, I'm guessing you actually know this argument, except you converted $n^2 + 1$ to $D y^2$ rather than $D y^3$ (of course, one can also use $D y^k$ for any fixed $k$, at the cost of increasing the number of possible $D$). It turns out, however, that the equation $x^2 - D y^3 = -1$ only has finitely many solutions, and that this is a well known consequence of Siegel's theorem (1929), which says that any curve of genus at least one has only finitely many integral points. Siegel's proof does indeed use the Thue-Siegel method, although the proof is quite complicated. It is quite possible that this is the argument that Chowla had in mind - it is certainly consistent, since Chowla's paper is from 1934 > 1929. There are some more direct applications of Thue-Siegel to diophantine equations, in particular, to the so called Thue equations, which look like $F(x,y) = k$ for some irreducible homogeneous polynomial $F$ of degree at least three. A typical example would be: $$x^n - D y^n = -1,$$ for $n \ge 3$. Here the point is that the rational approximations $x/y$ to $\sqrt[n]{D}$ are of the order $1/y^n$, which contradict Thue's bounds as long as $n \ge 3$. Equations of this kind are what are being referred to in the wikipedia article. Edit: Glancing at that paper of Chowla in your comment, one can see the more elementary approach. Let $K$ denote the field $\mathbf{Q}(i)$, and let $\mathcal{O} = \mathbf{Z}[i]$ denote the ring of integers of $K$. Assume, as above, that there exists an infinite set $\Sigma$ of integers such that $n^2+1$ has prime factors less than $N$. For $n \in \Sigma$, write $A = n+i$ and $B = n-i$; they have small factors in $\mathcal{O}$ (which is a PID, although the argument can be made to work more generally using the class number). As above, one may write $A =(a + bi)(x + i y)^3$ where $a + bi$ comes from a finite list of elements of $\mathcal{O}$ (explicitly, the elements whose prime factorization in $\mathcal{O}$ only contains primes dividing $N$ with exponent at most $2$). Since $\Sigma$ is infinite, there are thus infinitely many solutions for some fixed $a + b i$, or equivalently, infinitely many solutions to the equations (taking the conjugate): $$n + i = (a + b i)(x + i y)^3, \qquad n - i = (a - b i)(x - i y)^3.$$ Take the difference of these equations and divide by $2i$. We end up with infinitely many solutions to the equation: $$b x^3 + 3 a x^2 y - 3 b x y^2 - a y^3 = 1.$$ This is now homogeneous, so one can apply Thue's theorem, rather than Siegel's Theorem. (Explicitly, the fractions $x/y$ are producing rational approximations to the root of $b t^3 + 3 a t^2 - 3 b t - a = 0$ which contradict the Thue bounds.)
Why is the set of natural numbers not considered a non-Abelian group? I don't understand why the set of natural numbers constitutes a commutative monoid with addition, but is not considered an Abelian group.
Addition on the natural numbers IS commutative ... ...but the natural numbers under addition do not form a group. Why not a group?
* If you define $\mathbb{N} = \{ n\in \mathbb{Z} \mid n\ge 1\}$, as we typically do, then it fails to be a group because it does not contain $0$, the additive identity. Indeed, $\mathbb{N} = \{ n\in \mathbb{Z} \mid n\ge 1\}$ fails to be a monoid, if the additive identity $0\notin \mathbb{N}$. So I will assume you are defining $\mathbb{N} = \mathbb{Z}^{+} = \{x \in \mathbb{Z} \mid x \ge 0\}$, which is indeed a commutative monoid: addition is both associative and commutative on $\mathbb{N}$, and $0 \in \mathbb{N}$ if $\mathbb{N} = \mathbb{Z}^{+} = \{x \in \mathbb{Z} \mid x \ge 0\}$.
* There is no additive inverse for any $n\in \mathbb{N}, n \ge 1$. For example, there is no element $x \in \mathbb{N}$ such that $3 + x = 0.$ Hence the natural numbers cannot be a GROUP.
A monoid is a group if, in addition to being associative and having the identity element, it satisfies the ADDED requirement that the inverse of each element of the monoid is also in the monoid. (In that case, the monoid is said to be a group.)
If $P \leq G$, $Q\leq G$, are $P\cap Q$ and $P\cup Q$ subgroups of $G$? $P$ and $Q$ are subgroups of a group $G$. How can we prove that $P\cap Q$ is a subgroup of $G$? Is $P \cup Q$ a subgroup of $G$? Reference: Fraleigh p. 59 Question 5.54 in A First Course in Abstract Algebra.
$P$ and $Q$ are subgroups of a group $G$. Prove that $P \cap Q$ is a subgroup. Hint 1: You know that $P$ and $Q$ are subgroups of $G$. That means they each contain the identity element, say $e$, of $G$. So what can you conclude about $P\cap Q$ if $e \in P$ and $e \in Q$? (Just unpack what that means for their intersection.) Hint 2: You know that $P, Q$ are subgroups of $G$. So they are both closed under the group operation of $G$. If $a, b \in P\cap Q$, then $a, b \in P$ and $a, b \in Q$. So what can you conclude about $ab$ with respect to $P\cap Q$? This is about proving closure under the group operation of $G$. Hint 3: You can use similar arguments to show that for any element $c \in P\cap Q$, $c^{-1} \in P\cap Q$. That will establish that $P\cap Q$ is closed under inverses. Once you've completed each step above, what can you conclude about $P\cap Q$ in $G$? $P$ and $Q$ are subgroups of a group $G$. Is $P\cup Q$ a subgroup of $G$? Here, you need to provide only one counterexample to show that it is not necessarily the case that $P\cup Q$ is a subgroup of $G$. Suppose, for example, that your group $G = \mathbb{Z}$, under addition. Then we know that $P = 2\mathbb{Z} \le \mathbb{Z}$ under addition (all even integers), and $Q = 5\mathbb{Z} \le \mathbb{Z}$ under addition (all integer multiples of $5$). So $P \cup Q$ contains $2\in P$ and $5 \in Q$. But: is $2 + 5 = 7 \in P\cup Q$? So what does this tell you regarding whether or not $P \cup Q$ is a subgroup of $\mathbb{Z}$?
What's a proof that a set of disjoint cycles is a bijection? Consider a function $f : D \to D$ (where $D$ is a finite set) so that for every $d \in D$, there is an integer $n$ so that $f(f(\dots(n\text{ times})\dots f(d)\dots)) = d$.
* Prove that $f$ is a bijection.
* Prove that if $f : D \to D$ (where $D$ is a finite set) is a bijection, then for every $d \in D$, there is an integer $n$ so that $f(f(\dots(n\text{ times})\dots f(d)\dots)) = d$.
I know this proposition is true (because a bijection over a finite set is just a permutation, which is always representable using cycles); I'm just having a hard time formulating a proof for it that a first-year algebra student could understand.
For (1), it's easier to prove that if $f$ is not injective then neither is $f^{n}$, and if $f$ is not surjective then neither is $f^n$, for any $n \ge 1$. And some power of $f$ is bijective: since $D$ is finite, take $N$ to be the least common multiple of the exponents $n$ over all $d \in D$; then $f^N$ is the identity map, which is bijective, and hence so must $f$ be. For (2), note that the bijections on a set form a group under composition, and that if $D$ is finite then this group is finite, so all its elements have finite order.
Show that $\alpha_1u+\alpha_2v+\alpha_3w=0\Rightarrow\alpha_1=\alpha_2=\alpha_3=0$ Let $u, v, w$ be three points in $\mathbb R^3$ not lying in any plane containing the origin. Would you help me to prove or disprove: $\alpha_1u+\alpha_2v+\alpha_3w=0\Rightarrow\alpha_1=\alpha_2=\alpha_3=0.$ I think this is wrong since otherwise Rank of the Coefficient Matrix have to be 3. But for$u_1=(1,1,0),u_2=(1,2,0),u_3=(1,3,0)$, (Rank of the corresponding Coefficient Matrix)$\neq 3$. Am I right?
With $u_1=(1,1,0)$, $u_2=(1,2,0)$, $u_3=(1,3,0)$, let $A = \begin{pmatrix} u_1 \\ u_2 \\ u_3 \end{pmatrix} = \begin{pmatrix}1 & 1 & 0 \\ 1 & 2 & 0 \\ 1 & 3 & 0 \end{pmatrix}$. We have $\det A = 0$, so $u_1, u_2, u_3$ are linearly dependent -- but this does not disprove the statement, because these three points all lie in the plane $z=0$, which contains the origin, so they violate the hypothesis. In fact the statement is true: a nontrivial relation $\alpha_1u+\alpha_2v+\alpha_3w=0$ would force $u,v,w$ to span a subspace of dimension at most $2$, i.e. to lie in a common plane (or line) through the origin. So it is your proposed counterexample that's wrong, not the claim.
Convolution Laplace transform Find the inverse Laplace transform of the given function by using the convolution theorem. $$F(s) = \frac{s}{(s+1)(s^2+4)}$$ If I use partial fractions I get: $$\frac{s+4}{5(s^2+4)} - \frac{1}{5(s+1)}$$ which gives me the inverse transform: $$\frac{1}{5}\cos 2t + \frac{2}{5}\sin 2t -\frac{1}{5} e^{-t}$$ But the answer is: $$f(t) = \int^t_0 e^{-(t -\tau)}\cos(2\tau) d\tau$$ How did they get that?
Related techniques (I), (II). Using the fact about the Laplace transform $L$ that $$ L(f*g)=L(f)L(g)=F(s)G(s)\implies (f*g)(t)=L^{-1}(F(s)G(s)) .$$ In our case, given $ H(s)=\frac{1}{(s+1)}\frac{s}{(s^2+4)}$ $$F(s)=\frac{1}{s+1}\implies f(t)=e^{-t},\quad G(s)=\frac{s}{s^2+4}\implies g(t)=\cos(2t).$$ Now, you use the convolution as $$ h(t) = \int_{0}^{t} e^{-(t-\tau)}\cos(2\tau) d\tau . $$
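Evaluating the convolution closes the loop with the partial-fraction form (a check, using $\int e^{\tau}\cos 2\tau\,d\tau=\frac{e^{\tau}(\cos 2\tau+2\sin 2\tau)}{5}$): $$h(t)=e^{-t}\left[\frac{e^{\tau}(\cos 2\tau+2\sin 2\tau)}{5}\right]_{0}^{t}=\frac{1}{5}\cos 2t+\frac{2}{5}\sin 2t-\frac{1}{5}e^{-t},$$ which matches the inverse transform obtained by partial fractions.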
How to verify this limit? Kindly asking for any hints about showing: $$\lim_{n\to\infty}\int_0^1\frac{dx}{(1+x/n)^n}=1-\exp(-1)$$ Thank you very much, indeed!
HINT: Just evaluate the integral. For $n>1$ you have $$\int_0^1\frac{dx}{(1+x/n)^n}=\int_0^1\left(1+\frac{x}n\right)^{-n}dx=\left[\frac{n}{1-n}\left(1+\frac{x}n\right)^{-n+1}\right]_0^1\;;$$ evaluating that leaves you with a limit that involves pieces that ought to be pretty familiar.
Is one structure elementarily equivalent to its elementary extension? Let $\mathfrak A,\mathfrak A^*$ be $\mathcal L$-structures and $\mathfrak A \preceq \mathfrak A^*$. That implies that for every $n$-ary formula $\varphi(\bar{v})$ in $\mathcal L$ and $\bar{a} \in \mathfrak A^n$ $$\models_{\mathfrak A}\varphi[\bar{a}] \iff \models_{\mathfrak A^*}\varphi[\bar{a}]$$ Therefore, for every $\mathcal L$-sentence $\phi$, $$\models_{\mathfrak A}\phi \iff \models_{\mathfrak A^*}\phi$$ which implies $\mathfrak A \equiv \mathfrak A^*$. But I haven't found this result in a textbook, so I'm not sure.
(I realise that it was answered in the comments, but I'm posting the answer so as to keep the question from staying in the unanswered pool.) This is, of course, true, an $\mathcal L$-sentence without parameters is an $\mathcal L$-sentence with parameters, that happens not to use any parameters, so elementary extension is a stronger condition. :) To put it differently, $M\preceq N$ is equivalent to saying that $M$ is a substructure of $N$ and that $(M,m)_{m\in M}\equiv (N,m)_{m\in M}$, which is certainly stronger than mere $M\equiv N$ (to see that the converse does not hold, consider, for instance, $M=(2{\bf Z},+)$, $N=({\bf Z},+)$ -- $M$ is a substructure of $N$ and is e.e. (even isomorphic!) to it, but is still not an elementary substructure).
The pebble sequence Suppose we have $n\cdot(n+1)/2$ stones grouped into piles. We can pick up 1 stone from each pile and put them together as a new pile. Show that after doing this some number of times we will reach the following piles: $1, 2, \ldots, n$ stones. Example: $n = 3$. Start with 2 piles of 3 stones each. $$3\,3 \to 2\,2\,2 \to 1\,1\,1\,3 \to 2\,4 \to 1\,3\,2$$
This was originally proved by Jørgen Brandt in Cycles of Partitions, Proceedings of the American Mathematical Society, Vol. 85, No. 3 (Jul., 1982), pp. 483-486, which is freely available here. The proof of this result covers the first page and a half and is pretty terse. First note that there are only finitely many possible sets of pile sizes, so at some point a position must repeat a previous position. Say that $P_{m+p}=P_m$ for some $p>0$, where $P_n$ is the $n$-th position (set of pile sizes) in the game. It’s not hard to see that the game will then cycle through the positions $P_{m+k}$, $k=0,\dots,p-1$, repeatedly. The proof consists in showing that when there are $\frac12n(n+1)$ pebbles, the only possible cycle is the one of length one from $\{1,2,\dots,n\}$ to $\{1,2,\dots,n\}$. Brian Hopkins, 30 Years of Bulgarian Solitaire, The College Mathematics Journal, Vol. 43, No. 2, March 2012, pp. 135-140, has references to other published proofs; one appears to be to a less accessible version of the paper Karatsuba Solitaire (Therese A. Hart, Gabriel Khan, Mizan R. Khan) that MJD found at arXiv.org. Added: And having now read the Hart, Khan, & Khan paper, I agree with MJD: the argument is quite simple, and it’s presented very well.
Counterexample to Fubini? I am trying to come up with a measurable function on $[0,1]^2$ which is not integrable, but such that the iterated integrals are defined and unequal. Any help would be appreciated.
$$ \int_0^1\int_0^1 \frac{x^2-y^2}{(x^2+y^2)^2} \,dy\,dx \ne \int_0^1\int_0^1 \frac{x^2-y^2}{(x^2+y^2)^2} \,dx\,dy $$ Obviously either of these is $-1$ times the other and if this function were absolutely integrable, then they would be equal, so their value would be $0$. But one is $\pi/4$ and the other is $-\pi/4$, as may be checked by freshman calculus methods.
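If you want to double-check the two values, here is a sketch in Python with sympy (assuming it is available); it reproduces $\pi/4$ and $-\pi/4$:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f = (x**2 - y**2) / (x**2 + y**2)**2

inner_dy = sp.integrate(f, (y, 0, 1))     # 1/(x**2 + 1)
print(sp.integrate(inner_dy, (x, 0, 1)))  # pi/4

inner_dx = sp.integrate(f, (x, 0, 1))     # -1/(y**2 + 1)
print(sp.integrate(inner_dx, (y, 0, 1)))  # -pi/4
```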
Find the Matrix of a Linear Transformation. It's been a few weeks since the subject was covered in my Linear Algebra class, and unfortunately linear transformations are my weak spot, so could anyone explain the steps to solve this problem? Find the matrix $A$ of the linear transformation $T(f(t)) = 3f'(t)+7f(t)$ from $P_2$ to $P_2$ with respect to the standard basis for $P_2$, $\{1,t,t^2\}$. The resulting answer should be a $3 \times 3$ matrix, but I'm unsure of where to start when it comes to solving this problem.
NOTE Given a finite dimensional vector space $\Bbb V$ and a basis $B=\{v_1,\dots,v_n\}$ of $\Bbb V$, the coordinates of $v$ in base $B$ are the unique $n$ scalars $\{a_1,\dots,a_n\}$ such that $v=\sum_{k=1}^n a_kv_k$, and we note this by writing $(v)_B=(a_1,\dots,a_n)$. All you need is to find what $T$ maps the basis elements to. Why? Because any vector in $P_2$ can be written as a linear combination of $1$, $t$ and $t^2$, whence if you know what $T(1)$, $T(t)$ and $T(t^2)$ are, you will find what any $T(a_0 +a_1 t +a_2 t^2)=a_0T(1)+a_1 T(t)+a_2 T(t^2)$ is. So, let us call $B=\{1,t,t^2\}$. Then $$T(1)=3\cdot 0 +7\cdot 1=7=(7,0,0)$$ $$T(t)=3\cdot 1 +7\cdot t=(3,7,0)$$ $$T(t^2)=6\cdot t +7\cdot t^2=(0,6,7)$$ (Here I'm abusing the notation a bit. Formally, we should enclose the two first terms of the equations with $(-)_B$ ) Now note that our transformation matrix simply takes a vector in coordinates of base $B$, and maps it to another vector in coordinates of base $B$. Thus, if $|T|_{B,B}$ is our matrix from base $B$ to base $B$, we must have $$|T|_{B,B} (P)_B=(T(P))_B$$ where we wrote $P=P(t)$ to avoid too much parenthesis. But let's observe that if $(P)_B=(a_0,a_1,a_2)$ then $a_0T(1)+a_1 T(t)+a_2 T(t^2)=a_0(7,0,0)+a_1 (3,7,0)+a_2(0,6,7)$ is the matrix product $$\left(\begin{matrix}7&3&0\\0&7&6\\0&0&7\end{matrix}\right)\left(\begin{matrix}a_0 \\a_1 \\a_2 \end{matrix}\right)$$ And $|T|_{B,B}=\left(\begin{matrix}7&3&0\\0&7&6\\0&0&7\end{matrix}\right)$ is precisely the matrix we're after. It has the property that for each vector of $P_2$ $$|T|_{B, B}(P)_B=(T(P))_B$$ (well, actually $$(|T|_{B,B} (P)_B^t)^t=(T(P))_B$$ but that looks just clumsy, doesn't it?)
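To see the construction concretely, here is a sketch in Python with sympy (assuming it is available; the helper `coords` is mine, not standard sympy):

```python
import sympy as sp

t = sp.symbols('t')
T = lambda f: 3 * sp.diff(f, t) + 7 * f  # the linear transformation

def coords(p):
    # coordinates of a polynomial in the basis {1, t, t^2}
    p = sp.expand(p)
    return [p.coeff(t, k) for k in range(3)]

# columns of the matrix are the coordinate vectors of T(1), T(t), T(t^2)
A = sp.Matrix([coords(T(b)) for b in (sp.Integer(1), t, t**2)]).T
print(A)  # Matrix([[7, 3, 0], [0, 7, 6], [0, 0, 7]])

# sanity check on a sample polynomial
p = 2 + 5*t - t**2
print(sp.expand(T(p)))           # 29 + 29*t - 7*t**2
print(A * sp.Matrix(coords(p)))  # Matrix([[29], [29], [-7]])
```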
If $(a,b,c)$ is a primitive Pythagorean triplet, explain why... If $(a,b,c)$ is a primitive Pythagorean triplet, explain why only one of $a$, $b$ and $c$ can be even, and why $c$ cannot be the one that is even. What I Know: A Primitive Pythagorean Triple is a Pythagorean triple $a$, $b$, $c$ with the constraint that $\gcd(a,b)=1$, which implies $\gcd(a,c)=1$ and $\gcd(b,c)=1$. Example: $a=3$, $b=4$, $c=5$, where $9+16=25$. At least one leg of a primitive Pythagorean triple is odd, since if $a$, $b$ are both even then $\gcd(a,b)>1$.
Clearly they cannot all be even as a smaller similar triple could be obtained by dividing all the sides by $2$ (your final point). Nor can two of them be even since $a^2+b^2=c^2$ and either you would have an even number plus an odd number (or the other way round) adding to make an even number or you would have an even number plus an even number adding to make an odd number. Nor can they all be odd since $a^2+b^2=c^2$ and you would have an odd number plus an odd number adding to make an odd number. Added: Nor can the two shorter sides be both odd, say $2n+1$ and $2m+1$ for some integers $n$ and $m$, as the longer side would be even, say $2k$ for some integer $k$, as its square is the sum of two odd numbers, but you would then have $4n^2+4n+1 + 4m^2+4m+1 = 4k^2$ which would lead to $k^2 = n^2+n+m^2+m + \frac{1}{2}$ preventing $k$ from being an integer.
Uniformly convergent sequence proof. Prove that if $(f_k)$ is a uniformly convergent sequence of continuous real-valued functions on a compact domain $D\subseteq \mathbb{R}$, then there is some $M\geq 0$ such that $\left|f_k(x)\right|\leq M$ for every $x\in D$ and for every $k\in \mathbb{N}$. My response: Basically, I am trying to show that uniform convergence on a compact domain implies uniform boundedness. Let $f(x)$ be the limiting function. Then I know that $\lim_{k\to\infty} \sup_{x \in D} | f_k (x) - f(x) | = 0$. Also, I know that $f$ is continuous, therefore it attains an absolute maximum on $D$. How can I apply these two things to prove it?
Note that $D$ is compact and each $f_k$ is continuous, so $f_k(D)$ is compact, and hence bounded, for every $k\in\mathbb{N}$. This alone is not enough, though: a union of infinitely many bounded sets need not be bounded, so the uniform convergence must come in. The limit $f$ is continuous on the compact set $D$, hence $|f(x)|\leq C$ on $D$ for some $C\geq 0$. By uniform convergence there is an $N$ such that $\sup_{x\in D}|f_k(x)-f(x)|\leq 1$ for all $k\geq N$, so $|f_k(x)|\leq C+1$ on $D$ for all $k\geq N$. Finally take $$M=\max\Big\{C+1,\ \sup_D|f_1|,\ \dots,\ \sup_D|f_{N-1}|\Big\},$$ which is finite because each of the finitely many sets $f_k(D)$, $k<N$, is bounded.
About the Central Limit Theorem We prove the Central Limit Theorem with characteristic functions. If we know the $X_i$ are independent but not identically distributed, is there any weaker condition which still yields convergence to the normal distribution?
For example, suppose $X_i$ are independent with $\mathbb E(X_i) = 0$, $\text{Var}(X_i) = \sigma_i^2$, and $$\lim_{n \to \infty} \frac{1}{\sigma(n)^3} \sum_{i=1}^n \mathbb E[|X_i|^3] = 0$$ where $$\sigma(n)^2 = \text{Var}\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n \sigma_i^2$$ Then $\displaystyle \frac{1}{\sigma(n)} \sum_{i=1}^n X_i$ converges in distribution to a standard normal random variable.
Closed form for $\sum_{k=0}^{n} \binom{n}{k}\frac{(-1)^k}{(k+1)^2}$ How can I calculate the following sum involving binomial terms: $$\sum_{k=0}^{n} \binom{n}{k}\frac{(-1)^k}{(k+1)^2}$$ Where the value of n can get very big (thus calculating the binomial coefficients is not feasible). Is there a closed form for this summation?
Apparently I'm a little late to the party, but my answer has a punchline! We have $$ \frac{1}{z} \int_0^z \sum_{k=0}^{n} \binom{n}{k} s^k\,ds = \sum_{k=0}^{n} \binom{n}{k} \frac{z^k}{k+1}, $$ so that $$ - \int_0^z \frac{1}{t} \int_0^t \sum_{k=0}^{n} \binom{n}{k} s^k\,ds\,dt = - \sum_{k=0}^{n} \binom{n}{k} \frac{z^{k+1}}{(k+1)^2}. $$ Setting $z = -1$ gives an expression for your sum, $$ \sum_{k=0}^{n} \binom{n}{k} \frac{(-1)^k}{(k+1)^2} = \int_{-1}^{0} \frac{1}{t} \int_0^t \sum_{k=0}^{n} \binom{n}{k} s^k\,ds\,dt. $$ Now, $\sum_{k=0}^{n} \binom{n}{k} s^k = (1+s)^n$, so $$ \begin{align*} \sum_{k=0}^{n} \binom{n}{k} \frac{(-1)^k}{(k+1)^2} &= \int_{-1}^{0} \frac{1}{t} \int_0^t (1+s)^n \,ds\,dt \\ &= \frac{1}{n+1}\int_{-1}^{0} \frac{1}{t} \left[(1+t)^{n+1} - 1\right]\,dt \\ &= \frac{1}{n+1}\int_{0}^{1} \frac{u^{n+1}-1}{u-1}\,du \\ &= \frac{1}{n+1}\int_0^1 \sum_{k=0}^{n} u^k \,du \\ &= \frac{1}{n+1}\sum_{k=1}^{n+1} \frac{1}{k} \\ &= \frac{H_{n+1}}{n+1}, \end{align*} $$ where $H_n$ is the $n^{\text{th}}$ harmonic number.
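Here is a quick exact verification of the punchline for small $n$ (a sketch in plain Python using rational arithmetic):

```python
from fractions import Fraction
from math import comb

def lhs(n):
    return sum(Fraction((-1)**k * comb(n, k), (k + 1)**2) for k in range(n + 1))

def rhs(n):
    H = sum(Fraction(1, k) for k in range(1, n + 2))  # H_{n+1}
    return H / (n + 1)

print(all(lhs(n) == rhs(n) for n in range(1, 30)))  # True
```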
A simple limit of a sequence I feel almost ashamed for putting this up here, but oh well... I'm attempting to prove: $$\lim_{n\to \infty}\sqrt[n]{2^n+n^5}=2$$ My approach was to use the following inequality (which is quite easy to prove) and the squeeze theorem: $$\lim_{n\to \infty}\sqrt[n]{1}\leq \lim_{n\to \infty}\sqrt[n]{1+\frac{n^5}{2^n}}\leq \lim_{n\to \infty}\sqrt[n]{1+\frac{1}{n}}$$ I encountered a problem with the last limit though. While for functions the $e^{\ln x}$ "trick" would work, this is a sequences problem only, so there has to be a solution using only first-semester-calculus-student-who's-just-done-sequences knowledge. Or maybe this approach to the limit is overly complicated to begin with? Anyway, I'll be glad to receive any hints and answers, whether to the original problem or to $\lim_{n\to \infty}\sqrt[n]{1+\frac{1}{n}}$, thanks!
How about using $\root n\of{1+(1/n)}\le\sqrt{1+(1/n)}$ for $n\ge2$?
An easy way to remember PEMDAS I'm having trouble remembering PEMDAS (in regards to precedence in mathematical equations), i.e.:

* Parentheses
* Exponentiation
* Multiplication & Division
* Addition & Subtraction

I understand what all of the above mean, but I am having trouble keeping this in my head. Can you recommend any tricks or tips you use to remember this? Thanks
I am a step-by-step person (remembering always left to right):

1. Parentheses
2. Exponents
3. Multiply/Divide
4. Add/Subtract
Pattern continued The following pattern: $$\frac{3^{2/401}}{3^{2/401} +3}+\frac{3^{4/401 }}{3^{4/401} +3}+\frac{3^{6/401}}{3^{6/401} +3}+\frac{3^{8/401}}{3^{8/401} +3}$$ What will the result be if the pattern is continued $\;300\;$ times?
If you need the sum to the $n$th term, you're looking at computing the sum of the first 300 terms: $$\sum_{k=1}^{300}\left(\large\frac{3^{\frac{2k}{401}}}{3^{\frac{2k}{401}}+3}\right)$$ To sum to the $n$th term, you need to compute: $$\sum_{k=1}^{n}\left(\large\frac{3^{\frac{2k}{401}}}{3^{\frac{2k}{401}}+3}\right)= \sum_{k=1}^{n}\left(\large\frac{3\cdot 3^{\frac{2k-401}{401}}}{3\cdot\left(3^{\frac{2k-401}{401}}+1\right)}\right) = \sum_{k=1}^{n}\left(\large\frac{3^{\frac{2k-401}{401}}}{\left(3^{\frac{2k-401}{401}}+1\right)}\right)$$
Probability question with combinations of different types of an item Suppose a bakery has 18 varieties of bread, one of which is blueberry bread. If a half dozen loaves of bread are selected at random (with repetitions allowed), then what is the probability that at least one loaf of blueberry bread will be included in the selection? I started off by determining that if we picked up at least one blueberry bread from the start, then we could find the total ways by finding: $C_{R}(n, r)$ Plugging in for $n$ and $r$ I calculated that the number of ways would be: $C_{R}(18, 5) = C(18+5-1,5) = 26,334$ ways. Am I on the right track, and how would I go about finding the probability in this case?
I don't think that's a very good direction. It could require much work. I always found it useful to think of a simplified version: Suppose we pick 1 loaf. What are the chances of it being blueberry? Suppose we pick 2 loaves. What are the chances of not having a blueberry? 3 loaves? 4 loaves?
How to find its closed form? Here is a sequence defined by the recursion formula $$a_n=2a_{n-1}+a_{n-2}$$ where $n \in \mathbb{N}$ and $a_0=1,a_1=2$. How can one find its closed form?
If we write $E^ra_n=a_{n+r}$, the characteristic/auxiliary equation becomes $E^2-2E-1=0$, so $E=1\pm\sqrt2$. So, the complementary function is $a_n=A(1+\sqrt2)^n+B(1-\sqrt2)^n$, where $A,B$ are indeterminate constants to be determined from the initial conditions. Since $a_0=A+B$ and $a_0=1$, we get $A+B=1$; and $a_1=A(1+\sqrt2)+B(1-\sqrt2)=2$. Now, solve for $A,B$.
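To finish the computation and check it, here is a sketch in Python with sympy (assuming it is available); it solves for $A,B$ and compares the closed form with the recursion:

```python
import sympy as sp

r1, r2 = 1 + sp.sqrt(2), 1 - sp.sqrt(2)
A, B = sp.symbols('A B')
sol = sp.solve([sp.Eq(A + B, 1), sp.Eq(A * r1 + B * r2, 2)], [A, B])
print(sol)  # {A: sqrt(2)/4 + 1/2, B: 1/2 - sqrt(2)/4}

def closed(k):
    # the closed form, simplified back to an integer
    return sp.simplify(sol[A] * r1**k + sol[B] * r2**k)

seq = [1, 2]
for k in range(2, 10):
    seq.append(2 * seq[-1] + seq[-2])

print([closed(k) for k in range(10)])  # matches seq
print(seq)
```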
On some inequalities in $L_p$ spaces Let $f$ be a function such that $\|fg\|_1<\infty$ whenever $\|g\|_2<\infty$. I would like to show that $\|f\|_2<\infty$. It seems that I should use some kind of Hölder inequalities, since we have $\|fg\|_1\leq \|f\|_2\|g\|_2$, but I don't know how. Any help would be appreciated. Thanks!
You have to assume that $$M := \sup \{ \|f \cdot g\|_1; \|g\|_2 \leq 1\}<\infty$$ ... otherwise it won't work. (Assume $M=\infty$. Then for all $n \in \mathbb{N}$ there exists $g_n \in L^2$, $\|g_n\|_2 \leq 1$, such that $\|f \cdot g_n\|_1 \geq n$. And this means that there cannot exist a constant $c$ such that $\|f \cdot g\|_1 \leq c \cdot \|g\|_2$, in particular $f \notin L^2$ (by Hölder inequality).)
Simple binomial theorem proof: $\sum_{j=0}^{k} \binom{a+j}j = \binom{a+k+1}k$ I am trying to prove this binomial statement: For $a \in \mathbb{C}$ and $k \in \mathbb{N_0}$, $\sum_{j=0}^{k} {a+j \choose j} = {a+k+1 \choose k}.$ I am stuck on where and how to start. My steps are these: ${a+j \choose j} = \frac{(a+j)!}{j!(a+j-j)!} = \frac{(a+j)!}{j!a!}$ but now I don't know how to go further to show the equality. Or I said: $\sum_{j=0}^{k} {a+j \choose j} = {a+k \choose k} +\sum_{j=0}^{k-1} {a+j \choose j} = [help!] = {a+k+1 \choose k}$ Thanks for help!
One way to prove that $$\sum_{j=0}^k\binom{a+j}j=\binom{a+k+1}k\tag{1}$$ is by induction on $k$. You can easily check the $k=0$ case. Now assume $(1)$, and try to show that $$\sum_{j=0}^{k+1}\binom{a+j}j=\binom{a+(k+1)+1}{k+1}=\binom{a+k+2}{k+1}\;.$$ To get you started, clearly $$\begin{align*} \sum_{j=0}^{k+1}\binom{a+j}j&=\binom{a+k+1}{k+1}+\sum_{j=0}^k\binom{a+j}j\\ &=\binom{a+k+1}{k+1}+\binom{a+k+1}k \end{align*}$$ by the induction hypothesis, so all that remains is to show that $$\binom{a+k+1}{k+1}+\binom{a+k+1}k=\binom{a+k+2}{k+1}\;,$$ which should be very easy. It’s also possible to give a combinatorial proof. Note that $\binom{a+j}j=\binom{a+j}a$ and $\binom{a+k+1}k=\binom{a+k+1}{a+1}$. Thus, the righthand side of $(1)$ is the number of ways to choose $a+1$ numbers from the set $\{1,\dots,a+k+1\}$. We can divide these choices into $k+1$ categories according to the largest number chosen. Suppose that the largest number chosen is $\ell$; then the remaining $a$ numbers must be chosen from $\{1,\dots,\ell-1\}$, something that can be done in $\binom{\ell-1}a$ ways. The largest of the $a+1$ numbers can be any of the numbers $a+1,\dots,a+k+1$, so $\ell-1$ ranges from $a$ through $a+k$. Letting $j=\ell-1$, we see that the number of ways to choose the $a+1$ numbers is given by the lefthand side of $(1)$: the term $\binom{a+j}j=\binom{a+j}a$ is the number of ways to make the choice if $a+j+1$ is the largest of the $a+1$ numbers.
Why does $\ln(x) = \epsilon x$ have 2 solutions? I was working on a problem involving perturbation methods and it asked me to sketch the graph of $\ln(x) = \epsilon x$ and explain why it must have 2 solutions. Clearly there is a solution near $x=1$ which depends on the value of $\epsilon$, but I fail to see why there must be a solution near $x \rightarrow \infty$. It was my understanding that $\ln x$ has no horizontal asymptote and continues to grow indefinitely, where for really small values of $\epsilon, \epsilon x$ should grow incredibly slowly. How can I 'see' that there are two solutions? Thanks!
For all $\varepsilon>0$ using L'Hospital's rule $$\lim\limits_{x \to +\infty} {\dfrac{\varepsilon x}{\ln{x}}}=\varepsilon \lim\limits_{x \to +\infty} {\dfrac{x}{\ln{x}}}=\varepsilon \lim\limits_{x \to +\infty} {\dfrac{1}{\frac{1}{x}}}=+\infty.$$
On an identity about integrals Suppose you have two finite Borel measures $\mu$ and $\nu$ on $(0,\infty)$. I would like to show that there exists a finite Borel measure $\omega$ such that $$\int_0^{\infty} f(z) d\omega(z) = \int_0^{\infty}\int_0^{\infty} f(st) d\mu(s)d\nu(t).$$ I could try to use a change of variable formula, but the two integration domains are not diffeomorphic. So I really don't know how to start. Any help would be appreciated! This is not an homework, I am currently practising for an exam. Thanks!
When we have no idea about the problem, the question we have to ask ourselves is: "if a measure $\omega$ works, what should it have to satisfy?". We know that for a Borel measure it's important to know its values on intervals of the form $(0,a]$, $a>0$ (because we can deduce its value on $(a,b]$ for $a<b$, and on finite unions of these intervals). So we are tempted to define $$\omega((0,a]):=\int_{(0,+\infty)^2}\chi_{(0,a]}(st)d\mu(s)d\nu(t)=\int_{(0,+\infty)}\mu((0,at^{-1}])d\nu(t).$$ Note that if the collection $\{(a_i,b_i]\}_{i=1}^N$ consists of pairwise disjoint elements, so are, for each $t$, the $(a_it^{-1},b_it^{-1}]$, which allows us to define $\omega$ over the ring consisting of finite disjoint unions of elements of the form $(a,b]$, $0<a<b<+\infty$. Then we extend it to Borel sets by Carathéodory's extension theorem. As the involved measures are finite, $\omega$ is actually uniquely determined.
Making a $1,0,-1$ linear combination of primes a multiple of $1000$ Prove that for any given 10 primes $p_1,p_2,\ldots,p_{10}$ there always exist 10 numbers $a_1,\ldots,a_{10}$, not all equal to $0$, each taking one of the three values $-1$, $0$, $1$, such that $\sum\limits_{i=1}^{10}a_ip_i$ is a multiple of 1000.
The pigeonhole principle takes care of this. There are $2^{10}-1=1023$ non-empty subsets of $\{p_1,\dots,p_{10}\}$, hence $1023$ non-empty subset sums, so two of them are congruent modulo $1000$. Subtracting one of these two sums from the other produces a combination $\sum_{i=1}^{10}a_ip_i$ divisible by $1000$ in which every $a_i$ is $-1$, $0$, or $1$, and not all are $0$ since the two subsets are distinct.
Why is $S = X^2 + Y^2$ distributed $Exponential(\frac{1}{2})$? Let $X$ and $Y$ be independent, standard normally distributed random variables ($\sim Normal(0, 1)$). Why is $S = X^2 + Y^2$ distributed $Exponential(\frac{1}{2})$? I understand that an exponential random variable describes the time until a next event given a rate at which events occur. However, I do not understand what this has to do with $S = X^2 + Y^2$.
The sum of the squares of 2 independent standard normals is chi-squared with 2 degrees of freedom. A chi-squared with 2 degrees of freedom is equivalent to an exponential with parameter $1/2$.
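A quick Monte Carlo check of this fact (a sketch in Python with numpy, assuming it is available):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = rng.standard_normal(1_000_000)
s = x**2 + y**2

# Exponential with rate 1/2: mean 2 and P(S > t) = exp(-t/2)
print(s.mean())  # ~2.0
for t in (1.0, 2.0, 4.0):
    print(t, (s > t).mean(), np.exp(-t / 2))
```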
Existence of geodesic on a compact Riemannian manifold I have a question about the existence of geodesics on a compact Riemannian manifold $M$. Is there an elementary way to prove that in each nontrivial free homotopy class of loops, there is a closed geodesic $\gamma$ on $M$?
Let $[\gamma]$ be a nontrivial free homotopy class of loops and let $l=\inf_{\beta\in[\gamma]}l(\beta)$, where $l(\beta)$ is the length of the curve $\beta$. We will show that there is a geodesic $\beta$ in $[\gamma]$ such that $l(\beta)=l$. Let $\beta_n$ be a sequence of loops in $[\gamma]$ such that $l(\beta_n)\to l$. The first intuition is that the sequence $\beta_n$ converges to the desired curve, but this is not quite true. We can assume without loss of generality that each $\beta_n$ is piecewise geodesic and parameterized proportionally to arc length on $[0,1]$. Let us show that $\beta_n$ has a subsequence that converges uniformly to a continuous loop. In fact, as the curves $\beta_n$ are parameterized proportionally to arc length, we have $$ d(\beta_j(t_1),\beta_j(t_2))\leq l(\beta_j)\,|t_1-t_2|, $$ and the lengths $l(\beta_j)$ are bounded since they converge to $l$. Therefore the set $\{\beta_n\}$ is uniformly bounded and equicontinuous, and since $M$ is compact it follows from the Arzelà-Ascoli theorem that there exists a subsequence $\beta_{n_j}$ that converges uniformly to a continuous loop $\beta_0:[0,1]\to M$. Now let $t_0<t_1<\cdots<t_n$ be a finite partition of $[0,1]$ such that each $\beta_0([t_i,t_{i+1}])$ is contained in a totally normal neighborhood. Now consider the piecewise geodesic $\beta:[0,1]\to M$ which on each $[t_i,t_{i+1}]$ equals the geodesic segment connecting the points $\beta_0(t_i)$ and $\beta_0(t_{i+1})$. A contradiction argument shows that $l(\beta)=l$, and an argument with shortcuts shows that $\beta$ is a geodesic and minimizing.
Dirichlet Series for $\#\mathrm{groups}(n)$ What is known about the Dirichlet series given by $$\zeta(s)=\sum_{n=1}^{\infty} \frac{\#\mathrm{groups}(n)}{n^{s}}$$ where $\#\mathrm{groups}(n)$ is the number of isomorphism classes of finite groups of order $n$. Specifically: does it converge? If so, where? Do the residues at any of its poles have interesting values? Can it be expresed in terms of the classical Riemann zeta function? Is this even an interesting object to think about? Mathematica has a list of $\#\mathrm{groups}(n)$ for $1 \le n \le 2047$. Plotting the partial sum seems to indicate that it does converge and has a pole at $s=1$.
According to a sci.math posting by Avinoam Mann which I found at http://www.math.niu.edu/~rusin/known-math/95/numgrps the best upper bound is #groups(n) $\le n^{c(\log n)^2}$ for some constant $c$, and this order of growth is actually attained when $n$ is a prime power (by Higman's lower bound for the number of $p$-groups). So the terms $\#\mathrm{groups}(n)/n^s$ are unbounded for every fixed $s$, and your Dirichlet series diverges for all $s$. See also https://oeis.org/A000001 (the very first entry in the OEIS), which is where I got the link above.
Infinitely valued functions Is it possible to define a multiple integral or multiple sums to infinite order? Something like $\int\int\int\int\cdots$ where there is an infinite number of integrals, or $\sum\sum\sum\sum\cdots$. Do functions of infinitely many variables exist (something like $R^\infty \rightarrow R^n$)?
Yes, it is possible to define multiple integrals or sums to infinite order: here is my definition: for every function $f$ let $$\int\int\int\cdots \int f:=1$$ and $$\sum\sum\cdots\sum f:=1.$$ As you can see, I defined those objects. But OK, I understand that you are looking for some definitions granting some usual properties of the integral. Here is another answer: it is possible to define integrals of functions between Banach spaces. There are measures on infinite dimensional Banach spaces (for example Gaussian measures) so this might be the concept which is "meaningful" for you. For example you can consider a Gaussian measure on the space of continuous functions $C([0,1])$ induced by a Wiener process and you can calculate integrals with respect to that measure. With some mental gymnastics you can think about those measures and integrals in a way you asked about.
Procedures to find solution to $a_1x_1+\cdots+a_nx_n = 0$ Suppose that $x_1, \dots,x_n$ are given as input. Then we want to find $a_1,\ldots,a_n$ that satisfy $a_1x_1 + a_2x_2+a_3x_3 + a_4x_4+\cdots +a_nx_n =0$ (including the case where such a set does not exist). How does one find this easily? (So I am asking for an algorithm.) Edit: all numbers are non-zero integers.
Such $a_i$ do always exist (we can let $a_1 = \cdots = a_n = 0$, for example). The whole set of solutions is an $(n-1)$-dimensional subspace of $k^n$ (the whole of $k^n$ if $x_1 = \cdots = x_n= 0$).
Questions about $f: \mathbb{R} \rightarrow \mathbb{R}$ with bounded derivative I came across a problem that says: Let $f:\mathbb{R} \rightarrow \mathbb{R}$ be a function. If $|f'|$ is bounded, then which of the following option(s) is/are true? (a) The function $f$ is bounded. (b) The limit $\lim_{x\to\infty}f(x)$ exists. (c) The function $f$ is uniformly continuous. (d) The set $\{x \mid f(x)=0\}$ is compact. I am stuck on this problem. Please help. Thanks in advance for your time.
(a) & (b) are false: Consider $f(x)=x$ $\forall$ $x\in\mathbb R$; (c) is true: $|f'|$ bounded on $\mathbb R$ means $|f'|\le M$ for some $M$, so by the Mean Value Theorem $|f(x)-f(y)|\le M|x-y|$ for all $x,y$; thus $f$ is Lipschitz, hence uniformly continuous; (d) is false: $f=0$ on $\mathbb R\implies${$x:f(x)=0$} $=\mathbb R$, which is not compact.
Proof by induction for Stirling Numbers I am asked this: For any real number $x$ and positive integer $k$, define the notation $[x,k]$ by the recursion $[x,k+1] = (x-k)[x,k]$ and $[x,1] = x$. If $n$ is any positive integer, one can now express the monomial $x^n$ as a polynomial in $[x,1], [x,2], \ldots, [x,n]$. Find a general formula that accomplishes this, and prove that your formula is correct. I was able to figure out using Stirling numbers that the formula is: $$X^n=\sum^{n}_{k=1} {n\choose k}(X_k)$$ where $[x,k]$ is a decreasing factorial $X_k = x(x-1)\cdots(x-k+1)$. How can I prove the formula above by using induction?
I prefer the notation $x^{\underline k}$ for the falling power, so I’ll use that. You don’t want binomial coefficients in your expression: you want Stirling numbers of the second kind, denoted by $\left\{n\atop k\right\}$, and you want to show by induction on $n$ that $$x^n=\sum_{k=1}^n\left\{n\atop k\right\}x^{\underline k}\tag{1}$$ for any $n\in\Bbb Z^+$. (The formula $(1)$ becomes valid for $n=0$ as well if you start the summation at $k=0$.) The base case $n=1$ is clear, since $x^{\underline 1}=x$, and $\left\{1\atop 1\right\}=1$. For the induction step you’ll need the Pascal-like recurrence relation satisfied by the Stirling numbers of the second kind: $$\left\{{n+1}\atop k\right\}=k\left\{n\atop k\right\}+\left\{n\atop{k-1}\right\}\;.$$ (If you’re not familiar with it, you’ll find a combinatorial proof here.) It’s also useful to note that $x^{\underline{k+1}}=x^{\underline k}(x-k)$, so $x\cdot x^{\underline k}=x^{\underline{k+1}}+kx^{\underline k}$. Now assume $(1)$ for $n$, and try to prove that $$x^{n+1}=\sum_{k=1}^{n+1}\left\{{n+1}\atop k\right\}x^{\underline k}\;.$$ Start out in the usual way: $$\begin{align*} x^{n+1}&=x\cdot x^n\\ &=x\sum_{k=1}^n\left\{n\atop k\right\}x^{\underline k}\\ &=\sum_{k=1}^n\left\{n\atop k\right\}x\cdot x^{\underline k}\\ &=\sum_{k=1}^n\left\{n\atop k\right\}\left(x^{\underline{k+1}}+kx^{\underline k}\right)\\ &=\sum_{k=1}^n\left\{n\atop k\right\}x^{\underline{k+1}}+\sum_{k=1}^n\left\{n\atop k\right\}kx^{\underline k}\\ &=\sum_{k=2}^{n+1}\left\{n\atop{k-1}\right\}x^{\underline k}+\sum_{k=1}^n\left\{n\atop k\right\}kx^{\underline k}&&\left(\text{shift index in first sum}\right)\\ &=\sum_{k=1}^{n+1}\left\{n\atop{k-1}\right\}x^{\underline k}+\sum_{k=1}^{n+1}\left\{n\atop k\right\}kx^{\underline k}&&\left(\text{since }\left\{n\atop0\right\}=0=\left\{n\atop{n+1}\right\}\right)\;.\\ \end{align*}$$ At this point you’re almost done; I’ll leave to you the little that remains.
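If you want to sanity-check identity $(1)$ for small $n$, here is a sketch in Python with sympy (assuming it is available; `sp.ff` is the falling factorial and `stirling(..., kind=2)` the Stirling number of the second kind):

```python
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

x = sp.symbols('x')

for n in range(1, 8):
    rhs = sum(stirling(n, k, kind=2) * sp.ff(x, k) for k in range(1, n + 1))
    print(n, sp.expand(rhs - x**n) == 0)  # True for each n
```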
The compactness of $\{x_n=\cos nt\}_{n\in\mathbb{N}}\subset L_2[-\pi,\pi]$ Is the set $\{x_n=\cos (nt): n\in\mathbb{N}\}$ in $L_2[-\pi,\pi]$ closed or compact? I don't know how to prove it.
As the elements are pairwise orthogonal with $\lVert x_n\rVert_{L^2}^2=\pi$, we have for $n\neq m$ that $$\lVert x_n-x_m\rVert_{L^2}^2=2\pi,$$ proving that the set cannot be compact (it is not even precompact: no finite collection of balls of radius $1/2$ can cover it). But it is a closed set: since distinct elements are at distance $\sqrt{2\pi}$ from one another, any convergent sequence in the set is eventually constant, so the set contains all of its limit points.
Proof that an analytic function that takes on real values on the boundary of a circle is a real constant Possible Duplicate: Let $f(z)$ be an entire function. Show that if $f(z)$ is real when $|z| = 1$, then $f(z)$ must be a constant function using Maximum Modulus theorem I'm having trouble proving that an analytic function that takes on only real values on the boundary of a circle is a real constant. I started by writing $f(r, \theta) = u(r, \theta) + i v(r, \theta)$. By assumption, $v(r, \theta ) = 0$ on the boundary, so $\frac{d}{d\theta} v = 0$ there, and in fact, the $n$th derivative of $v(r,\theta)$ with respect to $\theta$ is $0$. The Cauchy-Riemann equations in polar coordinates imply that $\frac{d}{ dr} u(r, \theta ) = 0$. Unfortunately, I'm stuck here - I think I need to prove that all $n$th derivatives of $u$ and $v$ with respect to both $r$ and $\theta$ are $0$, so that I can move in any direction without inducing a change in $f$, but at this point I'm stuck. I've played around with this a fair bit but keep running in circles (no pun intended), so there must be something simple that I am missing. What am I doing wrong, and how does one complete the proof?
You know that if $f=u+iv$, then $u$ and $v$ are harmonic. Now, by assumption you have that $v$ is zero on the boundary of the disk. But, by the two extrema principles, you know that the maximum and minimum of $v$ occur on the boundary of your disc, and so clearly this implies that $v$ is identically zero. Thus, $f=u$, and so $f$ maps the disk into $\mathbb{R}$. But, by the open mapping theorem this implies that $f=u=\text{constant}$ else the image of the disk would be an open subset of $\mathbb{C}$ sitting inside $\mathbb{R}$ which is impossible.
Linear Algebra Proof If $A$ is an $m\times n$ matrix and $M = (A \mid b)$ the augmented matrix for the linear system $Ax = b$, show that either $(i)$ $\operatorname{rank}A = \operatorname{rank}M$, or $(ii)$ $\operatorname{rank}A = \operatorname{rank}M - 1$. My attempt: The rank of a matrix is the dimension of its range space. Let the column vectors of $A$ be $a_1,\ldots,a_n$. If $\text{rank}\;A = r$, then $r$ pivot columns of $A$ form a basis of the range space of $A$. The pivot columns are linearly independent. For the matrix $M = (A \mid b)$, there are only two cases. Case $(i)$: $b$ is in the range of $A$. Then the range space of $M$ is the same as the range space of $A$. Therefore $\operatorname{rank}M = \operatorname{rank}A$. I am stuck on how to do case $(ii)$?
Suppose the columns of $A$ have exactly $r$ linearly independent vectors. If $b$ lies in their span, then $\operatorname{rank} A=r=\operatorname{rank} M$. If not, then the columns of $A$ together with $b$ have exactly $(r+1)$ linearly independent vectors, so that $\operatorname{rank} A+1=r+1=\operatorname {rank} M$.
Using the SLLN to show that the Sample Mean of Arrivals tends to the Arrival Rate for a simple Poisson Process Let $N_t = N([0,t])$ denote a Poisson process with rate $\lambda = 1$ on the interval $[0,1]$. I am wondering how I can use the Law of Large Numbers to formally argue that: $$\frac{N_t}{t} \rightarrow \lambda \quad \text{ a.s.} $$ As it stands, I can almost prove the required result but I have to to assume that $t \in Z_+$. With this assumption, I can define Poisson random variables on intervals of size $1$ as follows $$N_i = N([i-1,i])$$ where $$\mathbb{E}[N([i-1,i])] = \text{Var}[N([i-1,i])] = 1$$ and $$N_t = N([0,t]) = \sum_{i=1}^t N([i-1,i]) = \sum_{i=1}^t N_i$$ Accordingly, we can use the Law of Large Numbers to state the result above... Given that $t \in \mathbb{R}_+$, this proof needs to be tweaked in some way... But I'm not exactly sure how to do it. Intuitively speaking, I believe that the correct approach would be to decompose $N[0,t]$ into $N[0,\lfloor t\rfloor]$ and $N[\lfloor t\rfloor, t]$, and argue that the latter term $\rightarrow 0$ almost surely. However, I'm not sure how to formally state this.
If $n\leqslant t\lt n+1$, then $N_n\leqslant N_t\leqslant N_{n+1}$ hence $$ \frac{n}t\cdot\frac{N_{n}}{n}\leqslant\frac{N_t}t\leqslant\frac{n+1}t\cdot\frac{N_{n+1}}{n+1}. $$ When $t\to\infty$, $\frac{n}t\to1$ and $\frac{n+1}t\to1$ because $n$ is the integer part of $t$, hence $t-1\lt n\leqslant t$. Furthermore, $\frac{N_n}n\to\lambda$ (the result you showed) because $n\to\infty$, and $\frac{N_{n+1}}{n+1}\to\lambda$ for the same reason. You are done.
greatest common divisor is 7 and the least common multiple is 16940 How many such number-pairs are there for which the greatest common divisor is 7 and the least common multiple is 16940?
Let the two numbers be $7a$ and $7b$. Note that $16940=7\cdot 2^2\cdot 5\cdot 11^2$. We make a pair $(a,b)$ with gcd $1$ and lcm $2^2\cdot 5\cdot 11^2$ as follows. We "give" $2^2$ to one of $a$ and $b$, and $2^0$ to the other. We give $5^1$ to one of $a$ and $b$, and $5^0$ to the other. Finally, we give $11^2$ to one of $a$ and $b$, and $11^0$ to the other. There are $2^3$ choices, and therefore $2^3$ ordered pairs such that $\gcd(a,b)=1$ and $\text{lcm}(a,b)=2^2\cdot 5\cdot 11^2$. If we want unordered pairs, divide by $2$. Here we used implicitly the Unique Factorization Theorem: Every positive integer can be expressed in an essentially unique way as a product of primes. There was nothing really special about $7$ and $16940$: any problem of this shape can be solved in basically the same way.
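A brute-force confirmation of the count (a sketch in plain Python):

```python
from math import gcd

N = 16940
# both numbers must be multiples of 7 and divisors of the lcm
divisors = [d for d in range(7, N + 1, 7) if N % d == 0]
pairs = [(a, b) for a in divisors for b in divisors
         if a <= b and gcd(a, b) == 7 and a * b // gcd(a, b) == N]
print(len(pairs), pairs)
# 4 [(7, 16940), (28, 4235), (35, 3388), (140, 847)]
```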
How can this isomorphism be valid? How can $\mathbb{Z}_4 \times \mathbb{Z}_6 / <(2,3)> \cong \mathbb{Z}_{12} = \mathbb{Z}_{4} \times \mathbb{Z}_{3}$? I am not convinced in the least that $\mathbb{Z}_{12}$ is isomorphic to $\mathbb{Z}_4 \times \mathbb{Z}_6 / <(2,3)>$. For instance, doesn't $<1>$ have an order of 12 in $\mathbb{Z}_{12}$? And no element of $\mathbb{Z}_4 \times \mathbb{Z}_6 / <(2,3)>$ can even have an order of $12$, no? What is the maximum order of $\mathbb{Z}_4 \times \mathbb{Z}_6 / <(2,3)>$? I know if the denominator isn't there, it is $lcm(4,6) = 12$, but even if it weren't there, I don't see how either component can produce an element of order 12. What I mean is that the first component is in $\mathbb{Z}_4$, so all elements there have order at most 4, and likewise in $\mathbb{Z}_6$ they have order at most 6, so how can any element exceed the order of their group?
I presume you mean ring isomorphism. If so, you should define what is the 0, the 1, the sum and the multiplication. Of course, those are rather implicit. But there lies your doubts. As @KReiser points out, the unit in $\mathbb{Z}_4 \times \mathbb{Z}_6 / \langle(2,3)\rangle$ is (the class of) (1,1). In order to prove the isomorphism to $\mathbb{Z}_{12}$, you can define the bijective function and show that it is an isomorphism of rings. That is, let $\Phi$ be the function that maps $1\in\mathbb{Z}_{12}$ to the class of $(1,1)\in \mathbb{Z}_4 \times \mathbb{Z}_6 / \langle(2,3)\rangle$. First, show that it is a morphism, that it is injective and surjective. Since both sets have the same amount of equivalence classes, they thus turn out to be isomorphic.
What is the difference between necessary and sufficient conditions?

* If $p\implies q$ ("$p$ implies $q$"), then $p$ is a sufficient condition for $q$.
* If $\lnot p\implies \lnot q$ ("not $p$ implies not $q$"), then $p$ is a necessary condition for $q$.

I don't understand what sufficient and necessary mean in this case. How do you know which one is necessary and which one is sufficient?
Suppose first that $p$ implies $q$. Then knowing that $p$ is true is sufficient (i.e., enough evidence) for you to conclude that $q$ is true. It’s possible that $q$ could be true even if $p$ weren’t, but having $p$ true ensures that $q$ is also true. Now suppose that $\text{not-}p$ implies $\text{not-}q$. If you know that $p$ is false, i.e., that $\text{not-}p$ is true, then you know that $\text{not-}q$ is true, i.e., that $q$ is false. Thus, in order for $q$ to be true, $p$ must be true: without that, you automatically get that $q$ is false. In other words, in order for $q$ to be true, it’s necessary that $p$ be true; you can’t have $q$ true while $p$ is false.
Prove divisibility law $\,b\mid a,c\,\Rightarrow\, b\mid ka + lc$ for all $k,l\in\Bbb Z$ We have to prove $b|a$ and $b|c \Rightarrow b|ka+lc$ for all $k,l \in \mathbb{Z}$. I thought it would be enough to say that $b$ can be expressed both as $b=ka$ and $b=lc$. Now we can reason that since $ka+lc=2b$ and $b|2b$, it directly follows that $b|ka+lc$? But I'm not sure if that works for any value of $k$ and $l$ (namely $k$ and $l$ are defined through quotient between $a$ and $c$, respectively). What am I missing?
An alternative presentation of the solution (perhaps slightly less elementary than the already proposed answers) is to work in the quotient ring. You can write that in $\mathbb{Z}/b\mathbb{Z}$ \begin{align*} \overline{ka+lc}&=\bar{k}\bar{a}+\bar{l}\bar{c}\\ &=\bar{0}+\bar{0}\\ &=\bar{0} \end{align*} where $\bar{a}=\bar{c}=\bar{0}$ because $b | a$ and $b | c$.
Square of the sum of n positive numbers I have the following problem. When we want to write $a^2 + b^2$ in terms of $(a \pm b)^2$ we can do it like this: $$a^2 +b^2 = \frac{(a+b)^2}{2} + \frac{(a-b)^2}{2}.$$ Can we do anything similar for $a_1^2 + a_2^2 + \ldots + a_n^2$? I can add the assumption that all $a_i$ are positive numbers. I mean to express this as a combination of their sums and differences. I know that this question is a little bit naive, but I'm curious whether it has an easy answer.
Yes. You have to sum over all of the possibilities of $a\pm b\pm c$: $$4(a^2+b^2+c^2)=(a+b+c)^2+(a+b-c)^2+(a-b+c)^2+(a-b-c)^2$$ This can be extended to $n$ terms by: $$\sum_{k=1}^n a_k^2=\sum_{\alpha=(1,-1,\ldots,-1)}^{(1,\ldots,1)}\frac{\big(\sum_{i=1}^{n}\alpha_ia_i\big)^2}{2^{n-1}}$$ ($\alpha$ is a multi-index whose entries $\alpha_i$ are either $-1$ or $1$, except the first, which is always $1$.)
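Here is a symbolic check of the general formula (a sketch in Python with sympy, assuming it is available; the sign vectors all have first entry $+1$):

```python
import sympy as sp
from itertools import product

def check(n):
    a = sp.symbols(f'a1:{n + 1}')
    # sum of (a1 +/- a2 +/- ... +/- an)^2 over all 2^(n-1) sign patterns
    total = sum(sum(s * v for s, v in zip((1,) + signs, a))**2
                for signs in product((1, -1), repeat=n - 1))
    return sp.expand(total / 2**(n - 1) - sum(v**2 for v in a)) == 0

print([check(n) for n in (2, 3, 4, 5)])  # [True, True, True, True]
```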
How to prove the given series is divergent? Given the series $$ \sum_{n=1}^{+\infty}\left[e-\left(1+\frac{1}{n}\right)^{n}\right], $$ try to show that it is divergent! The usual criteria (ratio and root tests) only give a limit of $1$ here, so some other method seems to be needed; any suggestions?
Let $x > 1$. Then the inequality $$\frac{1}{t} \leq \frac{1}{x}(1-t) + 1$$ holds for all $t \in [1,x]$ (the right hand side is a straight line between $(1,1)$ and $(x, \tfrac{1}{x})$ in $t$) and in particular $$\log(x) = \int_1^x \frac{dt}{t} \leq \frac{1}{2} \left(x - \frac{1}{x} \right)$$ for all $x > 1$. Substitute $x \leftarrow 1 + \tfrac{1}{n}$ to get $$ \log \left(1 + \frac{1}{n} \right) \leq \frac{1}{2n} + \frac{1}{2(n+1)}$$ and after multiplying by $n$ $$\log\left(1 + \frac{1}{n} \right)^n \leq 1 - \frac{1}{2(n+1)}.$$ Use this together with the estimate $e^x \leq (1-x)^{-1}$ for all $x < 1$ to get $$\left(1 + \frac{1}{n} \right)^n \leq e \cdot e^{-\displaystyle\frac{1}{2(n+1)}} \leq e \cdot \left(1 - \frac{1}{2n+3} \right)$$ or $$e - \left(1 + \frac{1}{n} \right)^n \geq \frac{e}{2n+3}.$$ This shows that your series diverges.
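A numeric illustration of the final bound and of the divergence (a sketch in plain Python):

```python
import math

partial = 0.0
for n in range(1, 10**6 + 1):
    term = math.e - (1 + 1 / n) ** n
    assert term >= math.e / (2 * n + 3)  # the bound derived above
    partial += term
    if n in (10, 10**3, 10**6):
        print(n, partial)  # grows roughly like (e/2) * log n
```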
Use two solutions to a high order linear homogeneous differential equation with constant coefficients to say something about the order of the DE OK, this one utterly baffles me. I am given two solutions to an $n$th-order homogeneous differential equation with constant coefficients. Using the solutions, I am supposed to put a restriction on $n$ (such as $n\geq 5$). I have no idea what method, theorem, or definition is useful to do this. My current "theory" is that I must find all the different derivatives of the solutions and tally up how many unique derivatives they have. This is wrong, but am I going in the right direction? The specific solutions for the example are $t^3$ and $te^t\sin t$. These solutions are to an $n$th-order homogeneous differential equation with constant coefficients, which means that $n \geq{}$? Thanks in advance.
A related problem. We will use the annihilator method. Note that, since you are given two solutions of the ode with constant coefficients, their linear combination is a solution to the ode too. This means the function $$ y(t) = c_1 t^3 + c_2 te^{t}\sin(t) $$ satisfies the ode. Applying the operator $D^4((D-1)^2+1)^2,$ where $D=\frac{d}{dt},$ to the above equation gives $$D^4((D-1)^2+1)^2 y(t) = 0.$$ From the left hand side of the above equation, one can see that the differential equation is of order at least $8$, i.e. $n\geq 8.$
Proving that cosine is uniformly continuous This is what I've already done. Can't think of how to proceed further $$|\cos(x)-\cos(y)|=\left|-2\sin\left(\frac{x+y}{2}\right)\sin\left(\frac{x-y}{2}\right)\right|\leq\left|\frac{x+y}{2}\right||x-y|$$ What should I do next?
Hint: Any continuous function is uniformly continuous on a closed, bounded interval, so $\cos$ is uniformly continuous on $[-2\pi,0]$ and $[0,2\pi]$.
Open affine neighborhood of points $X$ is a variety and there are $m$ points $x_1,x_2,\cdots,x_m$ on $X$. Can we find an open affine set which contains all $x_i$s?
Such a variety is sometimes called an FA-scheme (finite-affine). Quasi-projective schemes over affine schemes (e.g. quasi-projective varieties over a field) are FA. On the other hand, there are varieties which are not FA. Kleiman proved that a proper smooth variety over an algebraically closed field is FA if and only if it is projective. Some more details can be found in § 2.2 in this paper. There is an easy proof for projective varieties $X$ over a field. Just take a homogeneous polynomial $F$ which doesn't vanish at any of $x_1,\dots, x_m$. Then the principal open subset $D_+(F)$ is an affine open subset containing the $x_i$'s. The existence of $F$ is given by the graded version of the classical prime avoidance lemma: Edit Let $R$ be a graded ring, let $I$ be a homogeneous ideal generated by elements of positive degrees. Suppose that any homogeneous element of $I$ belongs to the union of finitely many prime homogeneous ideals $\mathfrak p_1, \dots, \mathfrak p_m$. Then $I$ is contained in one of the $\mathfrak p_i$'s. A (sketch of) proof can be found in Eisenbud, § 3.2. For the above application, take $\mathfrak p_i$ to be the prime homogeneous ideal corresponding to $x_i$ and $I=R_+$ the (irrelevant) ideal generated by the homogeneous elements of positive degrees. As $R_+$ is not contained in any $\mathfrak p_i$, the avoidance lemma says there exists a homogeneous $F\in R_+$ not in any of the $\mathfrak p_i$'s. This method can be used to prove that any quasi-projective variety $X$ is FA (embed $X$ into a projective variety $\overline{X}$ and take $J$ to be a homogeneous ideal defining the closed subset $\overline{X}\setminus X$. Let $F\in J$ be homogeneous and not in any $\mathfrak p_i$; then $D_+(F)$ is affine, contains the $x_i$'s and is contained in $X$).
Infinite series question from analysis Let $a_n > 0$ and for all $n$ let $$\sum\limits_{j=n}^{2n} a_j \le \dfrac 1n $$ Prove or give a counterexample to the statement $$\sum\limits_{j=1}^{\infty} a_j < \infty$$ Not sure where to start, a push in the right direction would be great. Thanks!
Consider the sum of sums $\sum_{i=1}^\infty \sum_{j=i}^{2i} a_j$; in particular, restricting to $i=2^k$ gives blocks $[2^k,2^{k+1}]$ that already cover every index $j\geq 1$, while $\sum_k 2^{-k}<\infty$.
Is there any function in this way? $f$ is a function which is continuous on $\Bbb R$, and $f^2$ is differentiable at $x=0$. Suppose $f(0)=1$. Must $f$ be differentiable at $0$? I feel that it is not necessary for $f$ to be differentiable at $x=0$ even though $f^2$ is, but I cannot find a counterexample to disprove this. Does anyone have an example?
Hint: $$\frac{f(x)-1}{x}=\frac{f(x)^2-1}{x}\frac{1}{f(x)+1}\xrightarrow [x\to 0]{}...?$$
Give an example of a sequence of real numbers with subsequences converging to every real number. I'm unsure of an example.
A related question that you can try: Let $(a_k)_{k\in\mathbb{N}}$ be a real sequence such that $\lim_k a_k=0$, and set $s_n=\sum_{k=1}^na_k$. Then the set of subsequential limits of the sequence $(s_n)_{n\in\mathbb{N}}$ is connected.
How does addition of the identity matrix to a square matrix change the determinant? Suppose there is an $n \times n$ matrix $A$. If we form the matrix $B = A+I$, where $I$ is the $n \times n$ identity matrix, how does $|B|$ - the determinant of $B$ - change compared to $|A|$? And what about the case where $B = A - I$?
As others have already pointed out, there is no simple relation. Here is one more answer, for the intuition. Consider the (restricting) condition that $A_{n \times n}$ is diagonalizable; then $$\det(A) = \lambda_0 \cdot \lambda_1 \cdot \lambda_2 \cdot \cdots \lambda _{n-1} $$ Now suppose you add the identity matrix. The determinant changes to $$\det(B) = (\lambda_0+1) \cdot (\lambda_1+1) \cdot (\lambda_2+1) \cdot \cdots (\lambda _{n-1} +1)$$ I think it is obvious how irregularly the result depends on the given eigenvalues of $A$. If some $\lambda_k=0$ then $\det(A)=0$, but that zero factor changes to $(\lambda_k+1)$ and $\det(B)$ need not be zero. The other way round - if some factor $\lambda_k=-1$ then the addition of $I$ makes that factor $\lambda_k+1=0$ and the determinant $\det(B)$ becomes zero. If some $0 \gt \lambda_k \gt -1$ then the determinant may change its sign... So there is no hope of making one single statement about the behavior of $B$ relative to $A$ - except that which @pritam linked to, or except you would accept a statement like $$\det(A)=e_n(\Lambda) \to \det(B)= \sum_{j=0}^n e_j(\Lambda) $$ where $ \Lambda = \{\lambda_k\}_{k=0..n-1} $ and $e_j(\Lambda)$ denotes the $j$'th elementary symmetric polynomial over $\Lambda$... (And this is only for diagonalizable matrices.)
interesting matrix Let $a(k,m)$, $k,m\geq 0$, be an infinite matrix. Then the set $$T_k=\{(a(k,0),a(k,1),...,a(k,i),...),(a(k,0),a(k+1,1),...,a(k+i,i),...)\}$$ is called the angle of the matrix. Here $a(k,0)$ is the edge of $T_k$; $a(k,i)$ and $a(k+i,i)$, $i>0$, are conjugate elements of $T_k$; $(a(k,0),a(k,1),...,a(k,i),...)$ is the horizontal ray of $T_k$; and $(a(k,0),a(k+1,1),...,a(k+i,i),...)$ is the diagonal ray of $T_k$. The elements of the diagonal ray of $T_0$ are $1$; the elements above the diagonal ray of $T_0$ are $0$; the elements of the edge of $T_k$, $k>0$, are $0$; and each element of the diagonal ray of $T_k$, $k>0$, is the sum of its conjugate and the elements of the horizontal ray of $T_k$ that are placed to its left. Prove that the sum of the elements of row $k$ is the partition function $p(k)$.
This is a very unnecessarily complicated, ambiguous and partly erroneous reformulation of a simple recurrence relation for the number $a(k,m)$ of partitions of $k$ with greatest part $m$. It works out if the following changes and interpretations are made: * *both instances of $k\gt1$ are replaced by $k\ge1$, *"his conjugate" is interpreted as "its upper conjugate" (each entry has two conjugates), and *"on the left" is interpreted as "to the left of its upper conjugate". The resulting recurrence relation is $$ a(k,m)=\sum_{i=1}^ma(k-m,i)\;, $$ which simply states that a partition of $k$ with greatest part $m$ arises by adding a part $m$ to any partition of $k-m$ with greatest part not greater than $m$.
Limiting distribution Let $(q_n)_{n>0}$ be a real sequence such that $0<q_n<1$ for all $n>0$ and $\lim_{n\to \infty} q_n = 0$. For each $n > 0$, let $X_n$ be a random variable, such that $P[X_n =k]=q_n(1−q_n)^{k−1}, (k=1,2,...)$. Prove that the limit distribution of $\frac{X_n}{\mathbb{E}[X_n]}$ is exponential with parameter 1. I see that $\mathbb{E}[X_n] = \frac{1}{q_n}$ but after that I don't really know where to go from there. Are there any tips please?
First we calculate the characteristic function of $X_n$: $$\Phi_{X_n}(\xi) = \sum_{k=1}^\infty \underbrace{q_n}_{(q_n-1)+1} \cdot (1-q_n)^{k-1} \cdot e^{\imath \, k \cdot \xi} = - \sum_{k=1}^\infty (1-q_n)^k \cdot (e^{\imath \, \xi})^k+e^{\imath \, \xi} \sum_{k=1}^\infty (1-q_n)^{k-1} \cdot e^{\imath \, (k-1) \cdot \xi} \\ = - \left( \frac{1}{1-(1-q_n) \cdot e^{\imath \, \xi}} - 1 \right) + e^{\imath \, \xi} \cdot \left( \frac{1}{1-(1-q_n) \cdot e^{\imath \, \xi}} \right) \\ = \frac{1}{1-(1-q_n) \cdot e^{\imath \, \xi}} \cdot (-1+(1-(1-q_n) \cdot e^{\imath \, \xi})+e^{\imath \, \xi}) = \frac{q_n \cdot e^{\imath \, \xi}}{1-(1-q_n) \cdot e^{\imath \, \xi}}$$ From this we obtain easily the characteristic function of $Y_n := \frac{X_n}{\mathbb{E}X_n} = q_n \cdot X_n$: $$\Phi_{Y_n}(\xi) = \Phi_{X_n}(\xi \cdot q_n) = \frac{q_n \cdot e^{\imath \, q_n \cdot \xi}}{1-(1-q_n) \cdot e^{\imath \, q_n \cdot \xi}}$$ Now we let $n \to \infty$ and obtain by applying Bernoulli-Hôpital $$ \lim_{n \to \infty} \Phi_{Y_n}(\xi) = \lim_{n \to \infty} \frac{e^{\imath \, q_n \cdot \xi} \cdot (1+q_n \cdot \imath \, \xi)}{-e^{\imath \, q_n \cdot \xi} \cdot (\imath \, \xi \cdot (1-q_n) -1)} = \frac{1}{1-\imath \xi}$$ (since $q_n \to 0$ as $n \to \infty$). Thus we have shown that the characteristic functions converge pointwise to the characteristic function of exponential distribution with parameter 1. By Lévy's continuity theorem we obtain $Y_n \to \text{Exp}(1)$ in distribution.
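A quick simulation backing this up (a sketch in Python with numpy, assuming it is available; `rng.geometric` uses exactly the convention $P(X=k)=q(1-q)^{k-1}$, $k\geq 1$):

```python
import numpy as np

rng = np.random.default_rng(42)
for q in (0.1, 0.01, 0.001):
    x = rng.geometric(q, size=200_000)
    y = q * x  # X_n / E[X_n]
    # compare tails with Exp(1): P(Y > t) = exp(-t)
    for t in (0.5, 1.0, 2.0):
        print(q, t, (y > t).mean(), np.exp(-t))
```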
Countable group, uncountably many distinct subgroups? I need to know whether the following statement is true or false: Every countable group $G$ has only countably many distinct subgroups. I have not found any counterexample to disprove the statement, only a vague idea: if it had uncountably many distinct subgroups, wouldn't it have to have an uncountable number of elements?
Let $(\mathbb{Q},+)$ be the group of the rational numbers under addition. For any set $A$ of primes, let $G_A$ be the set of all rationals $a/b$ (in lowest terms) such that every prime factor of the denominator $b$ is in $A$. It is clear that $G_A$ is a subgroup of $\mathbb{Q}$, and that $G_A = G_{A'}$ iff $A = A'$. Since there are uncountably many sets of primes, this produces uncountably many distinct subgroups of the countable group $\mathbb{Q}$.
Evaluate $\lim_{n\to\infty}\sum_{k=1}^{n}\frac{k}{n^2+k^2}$ Considering the sum as a Riemann sum, evaluate $$\lim_{n\to\infty}\sum_{k=1}^{n}\frac{k}{n^2+k^2} .$$
$$\sum_{k=1}^n\frac{k}{n^2+k^2}=\frac{1}{n^2}\sum_{k=1}^n\frac{k}{1+\left(\frac{k}{n}\right)^2}=\frac{1}{n}\sum_{k=1}^n\frac{\frac{k}{n}}{1+\left(\frac{k}{n}\right)^2}\xrightarrow [n\to\infty]{}\int_0^1\frac{x}{1+x^2}\,dx=\ldots$$
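Numerically, the Riemann sums settle on $\tfrac12\ln 2\approx 0.3466$, which is the value of the integral (a sketch in plain Python):

```python
import math

def riemann(n):
    return sum(k / (n**2 + k**2) for k in range(1, n + 1))

for n in (10, 100, 10_000):
    print(n, riemann(n))

print("limit:", 0.5 * math.log(2))  # 0.34657...
```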
Conditions for matrix similarity Two things that are not clear to me from the Wikipedia page on "Matrix similarity": * *If the geometric multiplicity of an eigenvalue is different in two matrices $A$ and $B$ then $A$ and $B$ are not similar? *If all eigenvalues of $A$ and $B$ coincide, together with their algebraic and geometric multiplicities, then $A$ and $B$ are similar? Thanks!
Intuitively, if $A,B$ are similar matrices, then they represent the same linear transformation, but in different bases. Using this concept, it must be that the eigenvalue structure of two similar matrices must be the same, since the existence of eigenvalues/eigenvectors does not depend on the choice of basis. So, to answer (1), if the eigenvalue structure is different, such as having different multiplicities, then $A,B$ cannot be similar. To address (2), the answer is no. If there is matching geometric and algebraic multiplicities, then $A,B$ may have different Jordan block structure; for example, $A$ could have three Jordan blocks of size $2,2,2$, and $B$ could have three Jordan blocks of size $3,2,1$. Then the algebraic and geometric multiplicities of both would be, respectively, $6$ and $3$. However, if both $A,B$ are diagonalizable, then $A = PDP^{-1}$ and $B = QDQ^{-1}$, where $D$ is the diagonal matrix, and hence $A = PQ^{-1} BQP^{-1}$, so in this more specific case, they would be similar.
Building a space with given homology groups Let $m \in \mathbb{N}$. Can we have a CW complex $X$ of dimension at most $n+1$ such that $\tilde{H_i}(X)$ is $\mathbb{Z}/m\mathbb{Z}$ for $i =n$ and zero otherwise?
To expand on the comments: The $i$-th homology of a cell complex is defined to be $\mathrm{ker}\partial_{i} / \mathrm{im}\partial_{i+1}$ where $\partial_{i+1}$ is the boundary map from the $(i+1)$-th chain group to the $i$-th chain group. Geometrically, this map is the attaching map that identifies the boundary of the $(i+1)$-cells with points on the $i$-cells. For example you could identify the boundary of a $2$-cell (a disk) with points on a $1$-cell (a line segment). In practice you construct a cell complex inductively, so you will have already identified the end bits of the line segment with some $0$-cells (points). Assume the zero skeleton is just one point and you attach one line segment. Then we have $S^1$ and identify the boundary of $D^2$ with it. This attaching map is a map $f: S^1 \to S^1$. You can do this in many ways; the most obvious is the identity map, which has degree one. The resulting space is a disk, which is contractible, so all of its reduced homology groups vanish. If you take $f: S^1 \to S^1$ to be the map $t \mapsto 2t$, you wrap the boundary around twice and what you get is the real projective plane, which has the homology you want in $i=1$ with $m=2$ (check it). Now generalise to $S^n \to S^n$. See here for the degree of a map.
differentiability and compactness I have no idea how to show whether this statement is false or true: If every differentiable function on a subset $X\subseteq\mathbb{R}^n$ is bounded then $X$ is compact. Thank you
Some hints: * *By the Heine-Borel property for Euclidean space, $X$ is compact if and only if $X$ is closed and bounded. *My inclination is to prove the contrapositive: If $X$ is not compact, then there exists a differentiable function on $X$ which is unbounded. *If $X$ is not compact, then either it isn't bounded, or it isn't closed. As a first step, perhaps show why the contrapositive statement must be true if $X$ isn't bounded?
Find $F'(x)$ given $ \int_x^{x+2} (4t+1) \ \mathrm{dt}$ Given the problem find $F'(x)$: $$ \int_x^{x+2} (4t+1) \ \mathrm{dt}$$ I just feel stuck and don't know where to go with this; we learned the Second Fundamental Theorem of Calculus today but I don't know where to plug it in. What I did: (1) the chain rule doesn't really come into effect here, so I just replaced $t$ with $x$; (2) $F'(x) = 4x + 1$, though the answer is just 8. What am I doing wrong?
Let $g(t)=4t+1$, and let $G(t)$ be an antiderivative of $g(t)$. Note that $$F(x)=G(x+2)-G(x).\tag{$1$}$$ In this case, we could easily find $G(t)$. But let's not; let's differentiate $F(x)$ immediately. Since $G'(t)=g(t)=4t+1$, we get $$F'(x)=g(x+2)-g(x)=[4(x+2)+1]-[4x+1].$$ This right-hand side simplifies to $8$.
Can the graph of a bounded function ever have an unbounded derivative? Can the graph of a bounded function ever have an unbounded derivative? I want to know whether, if $f$ has bounded variation, its derivative must be bounded. The converse is obvious. I think the answer is "yes". If the graph were to have an unbounded derivative, it would coincide with a vertical line.
Oh, sure. I'm sure there are lots of examples, but because of the work I do, some modification of the entropy function comes to mind. Consider the following function: $$ f:\mathbb{R}\rightarrow\mathbb{R}, \quad f(x) \triangleq \begin{cases} x \log |x| & x \neq 0 \\ 0 & x = 0 \end{cases} $$ It is not difficult to verify that this function is continuous and has a derivative of $$f'(x) = \log|x| + 1$$ for nonzero $x$. So $f'(x)$ is unbounded at the origin; but $f(x)$ is unbounded as well, so we're not quite there. We can create the function we seek by multiplying $f$ by a well-chosen envelope that drives the function to $0$ at the extremes. For instance: $$ g:\mathbb{R}\rightarrow\mathbb{R}, \quad g(x) \triangleq e^{-x^2} f(x) = \begin{cases} x e^{-x^2} \log |x| & x \neq 0 \\ 0 & x = 0 \end{cases} $$ The first derivative for nonzero $x$ is $$ g'(x) = e^{-x^2} \cdot \left( ( 1 - 2 x^2 ) \log|x| + 1 \right) $$ which remains unbounded. Attached is a plot of $g$ and $g'$. EDITED to add: I notice that a number of other answers have chosen a bounded domain. From my perspective that is a bit incomplete. After all, we often consider such functions using the extended real number line, and in that context they are not bounded. There are certainly many functions that satisfy the original poster's conditions without resorting to a bounded domain.
Find the rank of Hom(G,Z)? (1) Prove that for any finitely generated abelian group G, the set Hom(G, Z) is a free Abelian group of finite rank. (2) Find the rank of Hom(G,Z) if the group G is generated by three generators x, y, z with relations 2x + 3y + z = 0, 2y - z = 0
(i) Apply the structure theorem: write $$G \simeq \mathbb{Z}^r \oplus_i \mathbb{Z}/d_i$$ Now from here we compute $$Hom(\mathbb{Z}^r \oplus_i \mathbb{Z}/d_i, \mathbb{Z}) \simeq Hom(\mathbb{Z}^r, \mathbb{Z}) \oplus_i Hom(\mathbb{Z}/d_i, \mathbb{Z}) \simeq \mathbb{Z}^r$$ (ii) We just need to find the rank of the free part of $G$, we have it cut out as the cokernel of the map $\mathbb{Z}^2 \rightarrow \mathbb{Z}^3$, given by $$\begin{pmatrix} 2 & 0 \\ 3 & 2 \\ 1 & -1 \end{pmatrix} \sim \begin{pmatrix} 2 & 2 \\ 3 & 5 \\ 1 & 0 \end{pmatrix} \sim \begin{pmatrix} 0 & 2 \\ 0 & 5 \\ 1 & 0 \end{pmatrix} \sim \begin{pmatrix} 0 & 0\\ 0 & 1 \\ 1 & 0 \end{pmatrix}$$ So if I didn't goof that up, our group is simply $\mathbb{Z}^3/\mathbb{Z}^2 \simeq \mathbb{Z}$, and hence Hom is again $\mathbb{Z}$.
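The column-reduction above can be cross-checked with a Smith normal form computation (a sketch in Python with sympy, assuming `smith_normal_form` is available in your version):

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

M = Matrix([[2, 0], [3, 2], [1, -1]])  # relation columns 2x+3y+z and 2y-z
print(smith_normal_form(M, domain=ZZ))
# Matrix([[1, 0], [0, 1], [0, 0]])  ->  cokernel Z^3/Z^2 = Z, so Hom(G, Z) = Z
```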
Kullback-Leibler divergence The Kullback-Leibler divergence between two distributions with pdfs $f(x)$ and $g(x)$ is defined by $$\mathrm{KL}(F;G) = \int_{-\infty}^{\infty} \ln \left(\frac{f(x)}{g(x)}\right)f(x)\,dx$$ Compute the Kullback-Leibler divergence when $F$ is the standard normal distribution and $G$ is the normal distribution with mean $\mu$ and variance $1$. For what value of $\mu$ is the divergence minimized? I was never instructed on this kind of divergence so I am a bit lost on how to solve this kind of integral. I get that I can simplify my two normal equations in the natural log but my guess is that I should wait until after I take the integral. Any help is appreciated.
I cannot comment (not enough reputation). Vincent: You have the wrong pdf for $g(x)$, you have a normal distribution with mean 1 and variance 1, not mean $\mu$. Hint: You don't need to solve any integrals. You should be able to write this as pdf's and their expected values, so you never need to integrate. Outline: Firstly, $ \log({f(x) \over g(x) })=\left\{ -{1 \over 2} \left( x^2 - (x-\mu )^2 \right) \right\} $ . Expand and simplify. Don't even write out the other $f(x)$ and see where that takes you.
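A numerical check that the divergence comes out to $\mu^2/2$ (a sketch in Python with scipy, assuming it is available):

```python
import numpy as np
from scipy.integrate import quad

def kl(mu):
    f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # standard normal pdf
    log_ratio = lambda x: -0.5 * (x**2 - (x - mu)**2)     # log(f/g)
    val, _ = quad(lambda x: log_ratio(x) * f(x), -np.inf, np.inf)
    return val

for mu in (0.0, 0.5, 1.0, 2.0):
    print(mu, kl(mu), mu**2 / 2)  # KL = mu^2/2, minimized at mu = 0
```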
Properties of $\det$ and $\operatorname{trace}$ given a $4\times 4$ real valued matrix Let $A$ be a real $4 \times 4$ matrix such that $-1,1,2,-2$ are its eigenvalues. If $B=A^4-5A^2+5I$, then which of the following are true?

1. $\det(A+B)=0$
2. $\det (B)=1$
3. $\operatorname{trace}(A-B)=0$
4. $\operatorname{trace}(A+B)=4$

Using Cayley-Hamilton I get $B=I$, and I know that $\operatorname{trace}(A+B)=\operatorname{trace}(A)+\operatorname{trace}(B)$. From these facts we can easily handle 2, 3, 4, but I am confused about 1. How can I verify (1)? Thanks for your help.
The characteristic equation of $A$ is given by $(t-1)(t+1)(t+2)(t-2)=0$, which implies $t^{4}-5t^{2}+4=0$. Now $A$ must satisfy its characteristic equation, which gives $A^{4}-5A^{2}+4I=0$, and so we see that $B=A^{4}-5A^{2}+4I+I=0+I=I$. Hence, the eigenvalues of $(A+B)$ are given by $(-1+1),(1+1),(2+1),(-2+1)$, that is, $0,2,3,-1$. [Without loss of generality, one can take $A$ to be a diagonal matrix, which would not change the trace or determinant of the matrix.] So we see that $\det(A+B)$ is the product of its eigenvalues, which is $0$. Also the trace of $(A+B)$ is the sum of its eigenvalues, which is $(0+2+3-1)=4$. Also, $B$ being the identity matrix, $\det(B)=1$. Finally, $\operatorname{trace}(A-B)=\operatorname{trace}(A)-\operatorname{trace}(B)=0-4=-4\neq 0$, so option $(3)$ fails. So the options $(1),(2)$ and $(4)$ are true.
Vector perpendicular to timelike vector must be spacelike? Given $\mathbb{R}^4$, we define the Minkowski inner product on it by $$ \langle v,w \rangle = -v_1w_1 + v_2w_2 + v_3w_3 + v_4w_4$$ We say a vector is spacelike if $ \langle v,v\rangle >0 $, and it is timelike if $ \langle v,v \rangle < 0 $. How can I show that if $v$ is timelike and $ \langle v,w \rangle = 0$ , then $w$ is either the zero vector or spacelike? I've tried to use the polarization identity, but don't have any information regarding the $\langle v+w,v+w \rangle$ term in the identity. Context: I'm reading a book on Riemannian geometry, and the book gives a proof of a more general result: if $z$ is timelike, then its perpendicular subspace $z^\perp$ is spacelike. It does so using arguments regarding the degeneracy index of the subspace, which I don't fully understand. Since the statement above seems fairly elementary, I was wondering whether it would be possible to give an elementary proof of it as well. Any help is appreciated!
The accepted answer by @user1551 is certainly good, but an intuitive physical explanation may be needed, I think. A timelike vector in special relativity can be thought of as some kind of velocity of some object. And we can find a particular reference frame in which the object is at rest, i.e. with only time component non-zero. With appropriate normalization, the coordinate components of the timelike vector $v$ are $$ v=(1,0,0,0) $$ which means $v$ is actually the first basis vector of this reference frame. And the other three basis vectors were already there when we specified this frame. So, "extending the timelike vector $v$ to an orthonormal basis" physically means a choice of inertial reference frame. What then follows is trivial. Since $v$'s only non-zero component is the time component and $\langle v,w \rangle=0$, $w$'s time component must be zero. Then it's either the zero vector or spacelike.
Consider the quadratic form $q(x,y,z)=4x^2+y^2−z^2+4xy−2xz−yz$ over $\mathbb{R}$, then which of the following are true Consider the quadratic form $q(x,y,z)=4x^2+y^2-z^2+4xy-2xz-yz$ over $\mathbb{R}$. Then which of the following are true?

1. range of $q$ contains $[1,\infty)$
2. range of $q$ is contained in $[0,\infty)$
3. range $=\mathbb{R}$
4. range is contained in $[-N, \infty)$ for some large natural number $N$ depending on $q$

I am completely stuck on it. How should I solve this problem?
If you consider that for $x=0$ and $y=0$ we have that $q$ maps onto $(-\infty,0]$, because $q(0,0,z)=-z^2$, and for $x=0$, $z=0$ we have that $q$ maps onto $[0,\infty)$, because $q(0,y,0)=y^2$, then as a whole $q$ maps onto $(-\infty,\infty) = \mathbb{R}$. In particular options 1 and 3 are true, while 2 and 4 are false.
Noetherian module implies Noetherian ring? I know that a finitely generated $R$-module $M$ over a Noetherian ring $R$ is Noetherian. I wonder about the converse. I believe it has to be false and I am looking for counterexamples. Also I wonder if $M$ Noetherian imply that $R$ is Noetherian is true? And if $M$ Noetherian implies $M$ finitely generated is true? That is, do both implications fail or only one of them?
Let $R$ be a commutative non-Noetherian ring and let $\mathfrak m$ be a maximal ideal. Then $R/\mathfrak m$ is finitely generated and Noetherian - it only has two sub-$R$-modules. Note that, even if $R$ isn't Noetherian, it contains a maximal ideal, by Krull's Theorem.