If $g$ is an element in an abelian group $G$ and $H\leqslant G$, must there exist an $n$ such that $g^n\in H$? Let $G$ be an abelian group and $H$ a subgroup of $G$. For each $g \in G$, does there always exist an integer $n$ such that $g^{n} \in H$?
It is evident that if $[G:H]=n<\infty$ then $\forall g\in G,\ g^n\in H$.
Solving a basic complex equation using de Moivre's theorem I have a question which should be super super easy! If I were to solve $z^2 = 1+i$, how would I do this using de Moivre's theorem? I have the answer here in front of me so I don't want the answer, I just don't understand the method very well! Any help would be appreciated! I haven't had much experience with complex numbers and have just started a complex analysis course. Many thanks
$$\forall w=x+iy\in\Bbb C:$$ $$w=|w|e^{i\phi}=|w|(\cos\phi+i\sin\phi)\;,\;\;\phi=\begin{cases}\arctan\frac{y}{x}+2k\pi &,\;\;y\neq 0\\{}\\2k\pi\end{cases}\;\;,\;\;\;k\in\Bbb Z$$ In our case: $$w=1+i\Longrightarrow |w|=\sqrt 2\,\,,\,\,\arctan\frac{1}{1}=\frac{\pi}{4}+2k\pi\Longrightarrow$$ $$z^2=1+i=\sqrt 2\,e^{\frac{\pi i}{4}\left(1+8k\right)}\;\;,\;\;k=0,1\;\Longrightarrow z=\sqrt[4]2\, e^{\frac{\pi i}{8}(1+8k)}\;,\;\;k=0,1$$ Why do we restrict ourselves only to the above values of $\,k\,$ ? Because any other integer value will give one of these two different ones (on the trigonometric circle, say) ! Thus, the solutions are $$k=0:\;\;\;\;z_0:=\sqrt[4]2\,e^{\frac{\pi i}{8}}=\sqrt[4]2\left(\cos\frac{\pi}{8}+i\sin\frac{\pi}{8}\right)\\k=1:\;\;\;z_1:=\sqrt[4] 2\,e^{\frac{9\pi i }{8}}=\sqrt[4] 2\left(\cos\frac{9\pi}{8}+i\sin\frac{9\pi}{8}\right)$$ By the way, it is a nice exercise to show that $\,z_0=-z_1\,$...
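A quick numerical sanity check of the two roots, using Python's standard cmath module (just a sketch to confirm the algebra):

```python
import cmath

r = 2 ** 0.25                      # fourth root of 2
z0 = r * cmath.exp(1j * cmath.pi / 8)
z1 = r * cmath.exp(1j * 9 * cmath.pi / 8)

print(z0 ** 2)                     # ~ (1+1j)
print(z1 ** 2)                     # ~ (1+1j)
print(abs(z0 + z1))                # ~ 0, confirming z0 = -z1
```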
Differentiating piecewise functions. Say we have the piecewise function $f(x) = x^2$ on the interval $0 \le x < 4$, and $f(x)=x+1$ on the interval $ x \ge 4$. Why is it that, when I take the derivative, the intervals lose their equality and become strictly greater or strictly less than?
Note that $\displaystyle \lim_{x\to 4^-}f(x)=\lim_{x\to 4^-}x^2=16$ while $\displaystyle\lim_{x\to 4^+}f(x)=f(4)=5.$ Therefore $f$ isn't continuous at $x=4$ and so it can't be differentiable there. Hence the domain of the derivative doesn't include $4$. At $0$ the left-hand derivative isn't even defined, so $f$ can't be differentiable there either. Since $f$ is clearly differentiable at every other interior point of its domain, the domain of $f'$ is $\,]0,4[\;\cup\;]4,+\infty[\;=\;]0,+\infty[\setminus \{4\}$.
Show that $x$, $y$, and $z$ are not distinct if $x^2(z-y) + y^2(x-z) + z^2(y-x) = 0$. Suppose that $x^2(z-y) + y^2(x-z) + z^2(y-x) = 0$. How can I show that $x$, $y$, and $z$ are not all distinct, that is, either $x=y$, $y=z$, or $x=z$?
$x^2(z-y)+y^2(x-z)+z^2(y-x)=x^2(z-y)+x(y^2-z^2)+z^2y-y^2z=(z-y)(x^2-x(y+z)+zy)=(z-y)(x-y)(x-z)$.
Logarithm proof problem: $a^{\log_b c} = c^{\log_b a}$ I have been hit with a homework problem that I just have no idea how to approach. Any help from you all is very much appreciated. Here is the problem Prove the equation: $a^{\log_b c} = c^{\log_b a}$ Any ideas?
If you apply the logarithm with base $a$ to both sides you obtain, $$\log_a a^{\log_b c} = \log_a c^{\log_b a}$$ $$\log_b c = \log_b a \log_a c$$ $$\frac{\log_b c}{\log_b a} = \log_a c$$ however this last equality is the change of base formula and hence is true. Reversing the steps leads to the desired equality.
Flow of D.E.: what is the idea behind conjugacy? I got some kinda flow issue, ya know? Well, enough with the bad jokes. Let $A$ be a 2x2 matrix, $T$ a change of coordinates matrix, and $B=T^{-1}AT$ the canonical matrix associated with $A$. Show that the function $h=T^{-1}: \mathbb R^{2} \to \mathbb R^{2}$, $h(X)=T^{-1}X$, is a conjugacy between the flows of the systems $X'=AX$ and $Y'=BY$. Firstly, is there any chance someone could quantify this in terms of algebraic set theory for me? I seem to half comprehend the flow of dynamical systems and half understand their counterpart in group theory. They feel so similar, yet I can't seem to put the two together. It feels like the flow of this system is a monomorphism, as it's a homomorphism and a one-to-one mapping. Second question: how do I prove the above is a conjugacy between the flows? The method we were shown in class was very odd and didn't involve using $T$ and $T^{-1}$. I have asked a somewhat similar question and someone was kind enough to show me how to do it using this kind of idea but with numbers; I don't actually understand why this concept works. All that I really understand is that if the same number of eigenvalues with positive sign are in each matrix, there should be a mapping of the flow that exists between the two of them.
Suppose $X' = AX$. Let $Y = T^{-1} X T$. Then $$ Y' = (T^{-1} X T)' = T^{-1} X' T = T^{-1} (AX) T = T^{-1} A (T T^{-1}) X T = (T^{-1} A T) (T^{-1} X T) = B (T^{-1} X T) = BY$$ So the flows of $B$ are simply conjugates of the flows of $A$. Is this what you wanted to know?
Show that $2 xy < x^2 + y^2$ for $x$ is not equal to $y$ Show that $2 xy < x^2 + y^2$ for $x$ is not equal to $y$.
If $x\neq y$ then without loss of generality we can assume that $x>y$. Then $x-y>0\Rightarrow (x-y)^2>0\Rightarrow x^2-2xy+y^2>0\Rightarrow x^2+y^2>2xy$.
Homogeneous equation I am trying to solve the following homogeneous equation: $xy^3y'=2(y^4+x^4)$. Thanks for your tips. I think this is homogeneous of order 4 => $xy^3\,dy/dx=2(y^4+x^4)$ => $xy^3\,dy=2(x^4+y^4)\,dx$ => $xy^3\,dy-2(x^4+y^4)\,dx=0$. I do not know how to continue.
Make the substitution $v=y^4$. Then by the chain rule we have $v'=4y^3 y'$. Now your DE turns into: $$x \frac{v'}{4}=2(v+x^4)$$ This can be simplified to: $$v'-8\frac{v}{x}=8x^3$$ We first solve the homogeneous part: $$v_h' -8\frac{v_h}{x}=0$$ This leads to $v_h=c\cdot x^8$ so that $v_h'=c\cdot 8x^7$. This is our homogeneous solution. Now to find a particular solution we guess it will look something like $$v_p=ax^4+bx^3+dx^2+gx+f$$ Filling this into the DE we get: $$(4ax^3 +3bx^2+2dx+g)-\frac{8}{x}(ax^4+bx^3+dx^2+gx+f)=8x^3$$ The only terms with $x^3$ in them are $4ax^3$ and $-8ax^3$. Conclusion: $-4a=8$ so $a=-2$. All the other coefficients turn out to be zero. We now have our particular solution $v_p=-2x^4$. The general solution is the sum of the particular and homogeneous solutions, so $$v_g=c\cdot x^8 -2x^4$$ Now backsubstitute $v=y^4$, i.e. $y=v^{\frac{1}{4}}$, to obtain the final answer.
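A quick symbolic check that the claimed general solution for $v$ satisfies the linear equation $v'-8v/x=8x^3$ (a sketch using SymPy, an extra dependency not used in the answer itself):

```python
import sympy as sp

x, C = sp.symbols('x C', positive=True)
v = C * x**8 - 2 * x**4                  # claimed general solution, v = y^4

residual = sp.diff(v, x) - 8 * v / x - 8 * x**3
print(sp.simplify(residual))             # 0, so the solution checks out
```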
Time complexity of a modulo operation I am trying to prove that if $p$ is a decimal number having $m$ digits, then $p \bmod q$ can be performed in time $O(m)$ (at least theoretically), if $q$ is a prime number. How do I go about this? A related question is asked here, but it is w.r.t. MATLAB, whereas mine is a general one. The relevant text that I am referring to: Chapter 32, String Matching, p. 991, "Introduction to Algorithms", 3rd edition, Cormen et al.
I will assume that $q$ is not part of your input, but rather a constant. Then, Algorithm D in 4.3.1 of Knuth's book "The Art of Computer Programming" (Volume 2) performs any long division in $O(m)$ steps.
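One concrete way to see an $O(m)$ bound, different from Knuth's Algorithm D but in the same spirit, is to reduce the decimal string digit by digit, Horner-style, keeping only the running remainder. A minimal sketch, assuming $q$ fits in a machine word so each step is $O(1)$:

```python
def mod_of_decimal_string(p_digits: str, q: int) -> int:
    """Compute int(p_digits) % q in O(m) word operations by
    processing the m decimal digits left to right (Horner's rule)."""
    r = 0
    for d in p_digits:
        r = (r * 10 + int(d)) % q    # the remainder always stays below q
    return r

print(mod_of_decimal_string("12345678901234567890", 97))
print(12345678901234567890 % 97)     # same value
```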
Variance of the number of balls between two specified balls Question: Assume we have 100 different balls numbered from 1 to 100, distributed in 100 different bins, with 1 ball in each bin. What is the variance of the number of balls in between ball #1 and ball #2? What I did: I defined $X_i$ as an indicator for ball $i$ - "Is it in between balls 1 and 2?" I also thought of the question as this problem: "We have actually just 3 places to put the 98 remaining balls: before, after and between balls #1 and #2, so for each ball there is a probability of 1/3 of being in between." So by this we have $E[X_i]=\frac{1}{3}$. Now $X=\sum _{i=1} ^{98} X_i$. Since $X_i$ is a Bernoulli RV, $V(X_i)=p(1-p)=\frac{2}{9}$. But I know that the correct answer is $549\frac{8}{9}$. I know that I should somehow use the formula for the variance of a sum, but somehow I don't get to the correct answer.
Denote the number of balls and the number of bins by $b$. Suppose the first ball lands in bin $X_1$ and the second ball lands in bin $X_2$. The number of balls that will land in between them equals $Z = |X_2 -X_1| - 1$. Clearly $$ \Pr\left( X_1 = m_1, X_2 = m_2 \right) = \frac{1}{b\cdot (b-1)} [ m_1 \not= m_2 ] $$ Thus: $$\begin{eqnarray} \Pr\left(Z = n\right)&=&\sum_{m_1=1}^b \sum_{m_2=1}^b \frac{1}{b(b-1)} [ m_1 \not=m_2, |m_1-m_2|=n+1] \\ &=& \sum_{m_1=1}^b \sum_{m_2=1}^b \frac{2}{b(b-1)} [ m_1 > m_2, m_1=n+1+m_2] \\ &=& \frac{b-n-1}{\binom{b}{2}} [ 0 \leqslant n < b-1 ] \end{eqnarray} $$ With this it is straightforward to find: $$ \mathbb{E}\left(Z\right) = \sum_{n=0}^{b-2} n \frac{b-n-1}{\binom{b}{2}} = \frac{b-2}{3} $$ $$ \mathbb{E}\left(Z^2\right) = \sum_{n=0}^{b-2} n^2 \frac{b-n-1}{\binom{b}{2}} = \frac{(b-2)(b-1)}{6} $$ Thus the variance reads: $$ \mathbb{Var}(Z) =\mathbb{E}(Z^2) - \mathbb{E}(Z)^2 = \frac{(b+1)(b-2)}{18} $$ which for $b=100$ equals $549 \frac{8}{9}$.
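A quick Monte Carlo check of the closed form $(b+1)(b-2)/18$ for $b=100$ (a sketch; the empirical variance should land close to $549.89$):

```python
import random

def trial(b=100):
    p1, p2 = random.sample(range(b), 2)   # bins of ball #1 and ball #2
    return abs(p1 - p2) - 1               # balls strictly between them

samples = [trial() for _ in range(200_000)]
mean = sum(samples) / len(samples)
var = sum((z - mean) ** 2 for z in samples) / len(samples)
print(var, 101 * 98 / 18)                 # both ~ 549.9
```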
Entire function dominated by another entire function is a constant multiple I couldn't find a way to solve these two questions, so please help if you can. 1. Suppose $f(z)$ is entire with $|f(z)| \le |\exp(z)|$ for every $z$; I want to prove that $f(z) = k\exp(z)$ for some $|k| \le 1$. 2. Can a non-constant entire function be bounded in a half plane? Prove if yes, example if not.
Hint: 1) Liouville + $\,\displaystyle{\frac{f(z)}{e^z}}\,$ is analytic and bounded... 2) Develop the function in power series and extend analytically into the "other" half plane.
Is the set of discontinuities of $f$ countable? Suppose $f:[0,1]\rightarrow\mathbb{R}$ is a bounded function satisfying: for each $c\in [0,1]$ there exist the limits $\lim_{x\rightarrow c^+}f(x)$ and $\lim_{x\rightarrow c^-}f(x)$. Is it true that the set of discontinuities of $f$ is countable?
Hopefully this more tedious proof is more illustrative... In light of the assumptions on $f$, there are two ways that $f$ can fail to be continuous: (1) the left and right hand limits differ, or (2) the limits are equal, but the function value differs from the limit (thanks to @GEdgar for pointing this out). Let $\epsilon>0$ and $\Delta_\epsilon = \{ x |\, |\lim_{y \downarrow x}f(y) - \lim_{y \uparrow x}f(y)| \geq \epsilon \}$. If $\Delta_\epsilon$ is not finite, then since $[0,1]$ is compact, there exists an accumulation point $\hat{x} \in [0,1]$. Let $c_+ = \lim_{y \downarrow \hat{x}}f(y), c_- = \lim_{y \uparrow \hat{x}}f(y)$. (Note that it is possible that $c_- = c_+$.) By assumption, there exists some $\delta>0$ such that for $y \in (\hat{x}-\delta,\hat{x})$, $|f(y)-c_-| < \frac{\epsilon}{4}$ and for $y \in (\hat{x}, \hat{x}+\delta)$, $|f(y)-c_+| < \frac{\epsilon}{4}$. However, this implies that $\hat{x}$ is an isolated point of $\Delta_\epsilon$, which is a contradiction. Hence $\Delta_\epsilon$ is finite. If we let $\Delta = \cup_n \Delta_{\frac{1}{n}}$, we see that $\Delta $ is at most countable. @GEdgar has pointed out an omission in my proof: It is possible that the two limits coincide at a point, but the function is still discontinuous at that point, ie, $\Delta$ is not the entire set of discontinuities. Let $\Gamma_\epsilon = \{ x |\, \lim_{y \downarrow x}f(y) = \lim_{y \uparrow x}f(y), \ |\lim_{y \downarrow x}f(y)-f(x) | \geq \epsilon \}$. Suppose, as above, that $\Gamma_\epsilon$ is not finite, and let $\hat{x}$ be an accumulation point. Let $c_+, c_-$ be the limits as above. Again, there exists a $\delta>0$ such that if $y \in (\hat{x}-\delta,\hat{x})$, $|f(y)-c_-| < \frac{\epsilon}{4}$, and similarly, if $y \in (\hat{x}, \hat{x}+\delta)$, $|f(y)-c_+| < \frac{\epsilon}{4}$. Consequently, $\hat{x}$ is isolated, hence a contradiction, and $\Gamma_\epsilon$ is finite. If we let $\Gamma= \cup_n \Gamma_{\frac{1}{n}}$, we see that $\Gamma$ is at most countable. Since the set of discontinuities is $\Delta \cup \Gamma$, we see that the set of discontinuities is at most countable.
How to prove that if group $G$ is abelian, $H = \{x \in G : x = x^{-1}\}$ is a subgroup? $G$ is abelian, $H = \{x \in G : x = x^{-1}\}$ is a subgroup? I know to prove that a subset $H$ of group $G$ be a subgroup, one needs to (i) prove $\forall x,y \in H:x \circ y \in H$ and (ii) $\forall x \in H:x^{-1} \in H.$
Unless you know $H$ is a nonempty subset of $G$, we need also to show that the identity $e \in G$ is also in $H$: (o) Clearly, the identity $e \in G$ is its own inverse, hence $e = e^{-1} \in H$. $(ii)\;$ $\forall x \in H$, $x\in H \implies x = x^{-1} \in H$. Hence we have closure under inverses. $(i)$ $x\circ y \in H$? $$\; x, y \in H, \implies x = x^{-1}, y = y^{-1}$$ $$ x \circ y = x^{-1}\circ y^{-1} = y^{-1}\circ x^{-1} \quad\quad\quad\quad\tag{G is abelian}$$ $$y^{-1}\circ x^{-1} = (x\circ y)^{-1} \implies x\circ y \in H$$ Hence we have closure under the group operation. Therefore, $H \leq G$. That is, $H$ is a subgroup of $G$. Added: Note that this problem is equivalent to the task of proving that if $G$ is an abelian group, and $H$ is a subset of $G$ such that $H = \{x \in G\mid x^2 = e\}$, then $H$ is a subgroup of $G$. Why? For $x \in G$ such that $x^2 = e$, note that $$x^2 = e \iff x^{-1}\circ x^2 = x^{-1} \circ e \iff x^{-1}\circ (x\circ x) = x^{-1}$$ $$\iff (x^{-1}\circ x) \circ x = x^{-1} \iff e\circ x = x^{-1} \iff x = x^{-1}$$
Cardinals of set operations without AC Given info: $|A|=\mathfrak{c}$ , $|B|=\aleph_0$ in ZF (no axiom of choice). Prove: $|A\cup B|=\mathfrak{c}$ If $B \subset A\implies|A \backslash B|=\mathfrak{c}$? I have found several places proving that for $|\mathbb{R} \backslash \mathbb{Q}|,$ but none of the solutions appears to work for arbitrary sets. Maybe one should prove it for $|\mathbb{R} \backslash \mathbb{Q}|$ and then show that it will work for arbitrary $A$ and $B$ as well? Here Showing that $\mathbb{R}$ and $\mathbb{R}\backslash\mathbb{Q}$ are equinumerous using Cantor-Bernstein David has a very nice idea (constructing bijection), but it requires that infinite set has countably infinite subset, which again!? needs some choice axiom. In some sources I even saw statements that these can't be proved in ZF. From what my teacher said I percieved that solution has something to do with Cantor–Bernstein theorem and that knowing how to prove $\mathfrak{c}+\mathfrak{c}=\mathfrak{c}$ would help as well. Thanks!
Prove it first for disjoint $A$ and $B$, relaxing the condition on $B$ to $|B|\le \aleph_0$. You then recover the full statement by considering $A\cup B = A\cup(B\setminus A)$. You can restrict your attention even further to, say $A=(0,1)$ and $B$ being a subset of the integers. Once you have proved it for that case, the definition of "same cardinality" guarantees that it will be true for every other choice of disjoint $A$ and $B$ of the appropriate cardinalities. It is true without any choice axiom that a set of size continuum has a countably infinite subset. By definition, because it has size continuum, there's a bijection from $\mathbb R$, and the image of $\mathbb N$ under that bijection is a countably infinite subset.
Show that $\lim_{n\to\infty}\sqrt[n]{\binom{2n}{n}} = 4$ I know that $$ \lim_{n\to\infty}{{2n}\choose{n}}^\frac{1}{n} = 4 $$ but I have no idea how to show that; I think it has something to do with reducing $n!$ to $n^n$ in the limit, but don't know how to get there. How might I prove that the limit is four?
Hint: By induction, show that for $n\geq 2$ $$\frac{4^n}{n+1} < \binom{2n}{n} < 4^n.$$
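A quick numerical illustration of the hinted sandwich $\frac{4^n}{n+1} < \binom{2n}{n} < 4^n$, and of the $n$-th root creeping toward $4$ (Python sketch):

```python
from math import comb

for n in (10, 100, 500):
    c = comb(2 * n, n)
    print(n, 4**n / (n + 1) < c < 4**n, c ** (1 / n))
# the booleans are True and the roots approach 4 from below
```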
Convergence of $\int_3^{\infty} \frac{1}{(\ln(x))^2(x-\ln(x))}$ Does this integral converge? $$\int_3^{\infty} \frac{1}{(\ln(x))^2(x-\ln(x))}$$ I've been trying to solve this for the past 2 hours...literally. I know the answer is fairly simple, but I just can't think of it
We have $$\dfrac{x}e > \ln(x) \,\,\, \forall x > e$$ Hence, $$x - \ln(x) > x - \dfrac{x}e = x \left(1 - \dfrac1e\right)$$ Hence, we get that $$I = \int_3^{\infty} \dfrac{dx}{\ln^2(x)(x-\ln(x))} < \left(\dfrac{e}{e-1} \right) \times \int_3^{\infty} \dfrac{dx}{x \ln^2(x)} = \dfrac{e}{(e-1)\ln(3)}$$ EDIT A way to approximate the integral is as follows. $$\dfrac1{x-\ln(x)} = \dfrac1x \sum_{k=0}^{\infty} \left(\dfrac{\ln(x)}{x} \right)^k$$ Hence, $$I = \int_3^{\infty} \dfrac{dx}{\ln^2(x)(x-\ln(x))} = \sum_{k=0}^{\infty} \int_3^{\infty} \dfrac{\ln^{k-2}(x)}{x^{k+1}} dx$$ Let us evaluate each term now. \begin{align} f_k & = \int_3^{\infty} \dfrac{\ln^{k-2}(x)}{x^{k+1}} dx\\ & = \int_{\ln(3)}^{\infty} \dfrac{t^{k-2}}{e^{kt}} dt\\ & = \int_{k\ln(3)}^{\infty} \dfrac{y^{k-2}}{k^{k-2}e^y} \dfrac{dy}k\\ & = \dfrac1{k^{k-1}} \int_{k\ln(3)}^{\infty} y^{k-2} e^{-y} dy\\ & = \dfrac{\Gamma(k-1,k\ln(3))}{k^{k-1}} \end{align} where $\Gamma(m,z)$ is the incomplete $\Gamma$ function and there are many ways to compute the incomplete $\Gamma$ function to arbitrary accuracy. For example, as shown here. Hence, we get that $$I = \sum_{k=0}^{\infty} \dfrac{\Gamma(k-1,k\ln(3))}{k^{k-1}}$$ which is an exponentially converging series, and truncating it will provide an arbitrarily accurate answer.
Can three distinct points in the plane always be separated into bounded regions by four lines? How can I show that for any three points in the plane, four lines can be drawn that separate the three points into distinct enclosed regions? Can any six points be enclosed in distinct regions formed by five lines? Clarifications: Points are distinct, enclosed regions mean bounded regions. Thank you.
Okay, I think this works. By translation, rotation and scaling, we can assume that two of the points are $(0,0)$ and $(0,1)$. Then the other point is $(x,y)$. Now the problem can be solved if the third point is $(1,0)$, with something like the configuration of four green lines shown in the picture from the original post. Now if $x\ne 0$, the linear transformation $A=\pmatrix{x&0\\y&1}$ maps the point $(1,0)$ to $(x,y)$ and fixes the other two points, and also maps each green line to some new line, so $A$ applied to each line gives you four lines which enclose the points $(0,1), (0,0)$ and $(x,y)$. If the third point is collinear with the other two points then it is easy to come up with the four lines that work. Just make a cone that contains the two top points and another which contains the two bottom points. Then only the middle point will be in the intersection of the cones.
Give the combinatorial proof of the identity $\sum\limits_{i=0}^{n} \binom{k-1+i}{k-1} = \binom{n+k}{k}$ Given the identity $$\sum_{i=0}^{n} \binom{k-1+i}{k-1} = \binom{n+k}{k}$$ Need to give a combinatorial proof a) in terms of subsets b) by interpreting the parts in terms of compositions of integers I should not use induction or any other ways... Please help.
$\newcommand{\angles}[1]{\left\langle\,{#1}\,\right\rangle} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\half}{{1 \over 2}} \newcommand{\ic}{\mathrm{i}} \newcommand{\iff}{\Leftrightarrow} \newcommand{\imp}{\Longrightarrow} \newcommand{\ol}[1]{\overline{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \color{#f00}{\sum_{j = 0}^{n}{k - 1 + j \choose k - 1}} & = \sum_{j = 0}^{n}{k - 1 + j \choose j} = \sum_{j = 0}^{n}{-k + 1 - j + j - 1 \choose j}\pars{-1}^{j} = \sum_{j = 0}^{n}{-k \choose j}\pars{-1}^{j} \\[3mm] & = \sum_{j = -\infty}^{n}{-k \choose j}\pars{-1}^{j} = \sum_{j = -n}^{\infty}{-k \choose -j}\pars{-1}^{-j} = \sum_{j = 0}^{\infty}{-k \choose n - j}\pars{-1}^{j + n} = \\[3mm] & = \pars{-1}^{n}\sum_{j = 0}^{\infty}\pars{-1}^{j}\oint_{\verts{z} = 1^{-}} {\pars{1 + z}^{-k} \over z^{n - j + 1}}\,{\dd z \over 2\pi\ic} \\[3mm] & = \pars{-1}^{n}\oint_{\verts{z} = 1^{-}} {\pars{1 + z}^{-k} \over z^{n + 1}}\sum_{j = 0}^{\infty}\pars{-z}^{j} \,{\dd z \over 2\pi\ic} = \pars{-1}^{n}\oint_{\verts{z} = 1^{-}} {\pars{1 + z}^{-k - 1} \over z^{n + 1}}\,{\dd z \over 2\pi\ic} \\[3mm] & = \pars{-1}^{n}{-k - 1 \choose n} = \pars{-1}^{n}{k + 1 + n - 1 \choose n}\pars{-1}^{n} = {n + k \choose n} = \color{#f00}{n + k \choose k} \end{align}
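Independently of the method of proof, a quick numerical sanity check of the identity itself (Python sketch):

```python
from math import comb

def check(n, k):
    return sum(comb(k - 1 + i, k - 1) for i in range(n + 1)) == comb(n + k, k)

print(all(check(n, k) for n in range(20) for k in range(1, 20)))   # True
```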
Expected value uniform decreasing function We are given a function $f(n,k)$ as for(i=0;i < k;i++) n = rand(n); return n; rand is defined as a random number generator that uniformly generates values in the range $[0,n)$. It returns a value strictly less than $n$; also $\operatorname{rand}(0)=0$. What is the expected value of our function $f(n,k)$ given $n$ and $k$?
If the random numbers are not restricted to integers but are uniformly generated on the entire interval $[0,n)$, the expected value is halved in each iteration, and thus by linearity of expectation the expected value of $f(n,k)$ is $2^{-k}n$. If $n$ and the random numbers are restricted to integers, the expected value decreases from $a_i$ to $(a_i-1)/2$ in each iteration, so we need to shift by $1$ to obtain the recurrence $a_{i+1}+1=(a_i+1)/2$, so in this case the expected value of $f(n,k)$ is $2^{-k}(n+1)-1$. (Note that the step $a_{i+1}=(a_i-1)/2$ uses $E[\operatorname{rand}(a)]=(a-1)/2$, which holds for $a\ge 1$ but not for $a=0$, where $\operatorname{rand}(0)=0$; so the integer-case formula is exact only while the running value cannot yet have reached $0$, and otherwise slightly underestimates the true expectation.)
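A quick Monte Carlo check of the continuous case, where $E[f(n,k)]=2^{-k}n$ exactly (Python sketch):

```python
import random

def f(n, k):
    for _ in range(k):
        n = random.uniform(0, n)      # continuous uniform on [0, n)
    return n

n, k = 100.0, 5
est = sum(f(n, k) for _ in range(200_000)) / 200_000
print(est, n / 2**k)                  # both ~ 3.125
```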
What are some open research problems in Stochastic Processes? I was wondering, what are some of the open problems in the domain of stochastic processes? Any examples or recent papers or similar would be appreciated. The motivation for this question is that I was studying stochastics at a higher level (I mean Brownian motion and martingales and such, beyond the undergrad Markov chains and memoryless properties) and was wondering what questions still lie unanswered in this field.
Various academics have lists on their websites:

* Richard Weber, University of Cambridge (operations research)
* David Aldous, University of California, Berkeley, with updates from Thomas M. Liggett
* Krzysztof Burdzy, University of Washington
* Hermann Thorisson, University of Iceland

Other resources that might be of interest:

* The journal Queueing Systems published a Special Issue on Open Problems in 2011. James Cruise maintains an ungated copy of the papers along with research progress on the highlighted problems.
* John Kingman published a 2009 paper titled "The first Erlang century—and the next" where open problems and future research were discussed.
* Jewgeni H. Dshalalow edited a book in 1985 titled "Advances in queueing: theory, methods, and open problems".
* Lyons, Russell, Robin Pemantle, and Yuval Peres. "Unsolved problems concerning random walks on trees." Classical and modern branching processes. Springer New York, 1997. 223-237.
* Questions tagged open-problem and probability on MathOverflow.
* Chapter 23, "Open Problems", of Levin, Peres, and Wilmer's book Markov Chains and Mixing Times.
Solving equation $\log_y(\log_y(x))= \log_n(x)$ for $n$ I'm just wondering, if I log a constant twice with the same base $y$, $$\log_y(\log_y(x))= \log_n(x)$$ Can it be equivalent to logging the same constant with base $n$? If yes, what is variable $n$ equivalent to?
Yes, you can do that, provided the defining conditions for a logarithm are satisfied. The conditions for $\log_n(x)$ are: $x > 0$, $n > 0$ and $n \neq 1$, so you should be careful with the domain that you are choosing.
Universal algorithm to estimate probability of drawing certain combination of coloured balls I am writing AI for a board game and would be happy for some guidance in creating the function to calculate probabilities. So, there's a pool of N coloured balls. Up to 7 colours, the quantity of balls in each colour is given (zero or more). We are about to draw M balls from there. Question is, what is the probability of drawing certain combination of colours (ie, 3 red, 2 blue and 4 white). All the quantities in pool and drawn are arbitrary, passed to a function as array. If that would make the algorithm simpler, it is possible use recursion, ie P = something * P([rest of pool], [rest of drawn balls]) I got as far as this: N - total number of balls in the pool, quantity of each color is n1, n2, n3, etc. M - total balls drawn. Quantity of each color is m1, m2, m3, etc. The probability, of course, is a fraction, where the denominator is all the possible combinations of M bals drawn, and numerator is the number of valid combinations we are interested in. The total number of combinations how M unique balls can be drawn from the pool of N unique balls, can be calculated like this: $$\begin{equation}C = \frac{N!}{(N - M)!}\end{equation}$$ At the numerator, I have to calculate the product of all $$\frac{n_i!}{(n_i - m_i)!}$$, as this reflects in how many ways each colour can be picked mi times, from the pool where there are ni balls in that colour. So far, I have this: $$\frac{\frac{n_1!}{(n_1 - m_1)!}\cdot\frac{n_2!}{(n_2 - m_2)!}\cdot\frac{n_3!}{(n_3 - m_3)!}\cdot\ldots}{\frac{N!}{(N - M)!}}$$ Now, this number has to be multiplied by something, because the balls can come at any order of colours they want. And I am not sure how to solve this. Next, I will have to calculate the probability of drawing certain combination of M balls, when total Q balls are drawn, but that is even more complex, so I have to solve the first step first.
We use your notation. Imagine that we have put identity numbers on all the balls. There are $\binom{N}{M}$ ways to choose $M$ balls from $N$. Note that the binomial coefficient counts the number of possible "hands" of $M$ balls. Order is irrelevant. In case you are unfamiliar with binomial coefficients, $\binom{n}{k}$, pronounced, in English, "$n$ choose $k$," is the number of ways to choose $k$ objects from $n$ objects. By definition, $\binom{n}{0}=1$ for all $n$, and $\binom{n}{k}=0$ if $k\gt n$. It turns out that if $0\le k\le n$, then $\binom{n}{k}=\frac{n!}{k!(n-k)!}$. Now we turn to our problem. We want to find the number of ways to choose $m_1$ objects of colour $1$, $m_2$ objects of colour $2$, and so on, where $m_1+m_2+\cdots+m_k=M$. There are $\binom{n_1}{m_1}$ ways to choose $m_1$ objects of colour $1$. For each of these ways, there are $\binom{n_2}{m_2}$ ways to choose $m_2$ objects of colour $2$, and so on. So the total number of ways to choose $m_1$ of colour $1$, $m_2$ of colour $2$, and so on is $\binom{n_1}{m_1}\binom{n_2}{m_2}\cdots \binom{n_k}{m_k}$. Thus the required probability is $$\frac{\binom{n_1}{m_1}\binom{n_2}{m_2}\cdots \binom{n_k}{m_k}}{\binom{N}{M}}.\tag{$1$}$$ As to efficient ways of computing this, the subject has been studied a fair bit, particularly for the case $k=2$. One minor suggestion is to use the following recurrence: $$\binom{b}{a}=\frac{b}{a}\binom{b-1}{a-1}.$$ Remark: The above analysis is close to the one you produced. The difference is that in Formula $(1)$, your denominator gets divided by $M!$, and your numerator gets divided by $m_1!m_2!\cdots m_k!$, precisely to deal with the order issues that you identified.
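Since the question came from writing game AI, here is formula $(1)$ translated directly into a function (a minimal Python sketch; `pool` and `drawn` are hypothetical names for the per-colour count arrays):

```python
from math import comb, prod

def draw_probability(pool, drawn):
    """Probability of drawing exactly drawn[i] balls of colour i when
    sum(drawn) balls are taken, without replacement and without regard
    to order, from a pool with pool[i] balls of colour i."""
    N, M = sum(pool), sum(drawn)
    if any(m > n for n, m in zip(pool, drawn)):
        return 0.0
    return prod(comb(n, m) for n, m in zip(pool, drawn)) / comb(N, M)

# e.g. probability of 3 red, 2 blue, 4 white from 10 red, 5 blue, 12 white
print(draw_probability([10, 5, 12], [3, 2, 4]))
```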
Expectation of a stochastic exponential In class a while ago we used the following simplification: $$ \mathbb E \left[ \exp\left(\langle \boldsymbol a,\mathbf W_t\rangle \right) \right] \quad =\quad \exp\left(\frac12 |\boldsymbol a|^2 t\right) $$ with $\boldsymbol a$ a constant $n$-dim vector, $\mathbf W_t$ an $n$-dim Brownian Motion. I can recognize the quadratic variation on the right hand side and came up with the following (a bit hand wavy): $$ \exp\left(\langle \boldsymbol a,\mathbf W_t\rangle \right) = \exp\left(\int_0^t d(\langle \boldsymbol a,\mathbf W_s\rangle) \right) \stackrel{\text{ito}}{=} \exp\left(\int_0^t \langle \boldsymbol a,d\mathbf W_s\rangle \right)\exp\left(\frac12 |\boldsymbol a|^2 t\right) $$ and then we'd need the expected value of the first term to be equal to $1$. But I don't see why this holds. It also looks fairly similar to the mgf of a normal distribution but again I don't see the connection clearly.
Let $W_t = (W_t^1,\ldots,W_t^n)$ a $n$-dimensional Brownian motion. Then the processes $(W_t^j)_{t \geq 0}$ are independent 1-dimensional Brownian motions ($j=1,\ldots,n$). Thus $$\mathbb{E}\big(e^{\langle a,W_t \rangle} \big) = \prod_{j=1}^n \mathbb{E}\big(e^{a_j \cdot W_t^j} \big) \stackrel{W_t^j \sim N(0,t)}{=} \prod_{j=1}^n e^{\frac{1}{2} a_j^2 \cdot t} = e^{\frac{1}{2} |a|^2 \cdot t}$$ where we used $\mathbb{E}(e^{a \cdot X}) = e^{\frac{1}{2}a^2}$ for $X \sim N(0,1)$.
How can a continuous function map closed sets to open sets (and vice versa)? Definition of continuity: A function $f: X \to Y$ (where $X$ and $Y$ are topological spaces) is continuous if and only if for any open subset $V$ of $Y$, the preimage $f^{-1}(V)$ is open in $X$. Now, if $U$ is a closed subset of $X$ (meaning that the complement of $U$, $U^c$ is open and it contains all of its cluster points) and $f(U)$ (the image of $U$ under $f$) $= V$ is open in $Y$, then if $f$ is continuous, $f^{-1}(V) = f^{-1}(f(U)) = U$ is open. So if $U$ is closed then this leads to a contradiction. Conversely, if $U$ is open in $X$ and $f(U)=V$ is closed in $Y$, then $V^c$ is open. However, the complement of the preimage $f^{-1}(V)$ is closed since $(f^{-1}(V))^c = U$ which is open; which again leads to a contradiction. If there is anyone who has some valid counterexamples I'd be eager to see them.
Let $X$ be your favorite topological space, perhaps $\mathbb{R}$, and $Y$ be the space with just a single point. Then the only function $f:X\to Y$ is continuous, and, just like with Brian M. Scott's example (indeed, $Y$ has the discrete topology here), every set maps to a set which is both open and closed.
Why is the quotient map from $SL_n(\mathbb{Z})$ to $SL_n(\mathbb{Z}/p\mathbb Z)$ surjective? Recall that $SL_n(\mathbb{Z})$ is the special linear group, $n\geq 2$, and let $q\geq 2$ be any integer. We have a natural quotient map $$\pi: SL_n(\mathbb{Z})\to SL_n(\mathbb{Z}/q).$$ I remember that this map is surjective (is it correct?). It seems the Chinese Remainder Theorem might be helpful, but I forgot how to prove it. Can anyone give some tips?
The result is true for $n\geq 1$ and any integer $q\geq 1$. The group $SL_n(\mathbb{Z}/q\mathbb{Z})$ is generated by the elementary (transvection) matrices. It is easily seen that every elementary matrix is in the image of $\pi$, as the image of an elementary matrix in $SL_n(\mathbb{Z})$. So $\pi$ is surjective.
Proving a set is a subset of a union with itself and another set Let A and B be sets. Show that $$\space A \subseteq (A \cup B)$$ My proof is as follows, but I don't feel confident that what I've done is correct. Any input is much appreciated. $$x \in A \cup B \equiv x \in A \wedge x \in B \equiv x \in A$$ Therefore, A is a subset of the union of A and B.
You're going in the wrong direction. Don't assume $x\in A\cup B$, assume $x\in A$: you want to show that each element in $A$ is also an element of $A\cup B$ (not vice versa). Also, $x\in A\cup B\iff x\in A\textrm{ OR (not and) }x\in B$. Let $x\in A$, and note that $x\in A\cup B\implies x\in A$ or $x\in B$. But, as $x\in A$, $x\in A\cup B$. Therefore, each element of $A$ is an element of $A\cup B$, so that $A\subseteq A\cup B$. However, you should note that your proof is the correct proof of $A\cap B\subseteq A$ if you replace your $A\cup B$ with $A\cap B$.
Existence of an improper integral without the existence of a limit Can a continuous non-decreasing function exist for all $x \in [0, \infty)$ with $\int_0^\infty f \, dx$ existing, but the $\lim_{x \to \infty} f$ not existing? If it does, what does it look like? I feel that if the limit does not exist, how can the improper integral exist?
The limit must exist, and it must be $0$. If $f$ is not bounded above, the integral clearly doesn't exist. If $f(x)$ is bounded above, then since $f$ is non-decreasing, $\lim_{x\to\infty} f(x)$ exists. One can show that if that limit is non-zero, then the integral doesn't converge. The assumption of continuity is not necessary. But we did use strongly the hypothesis that the function is non-decreasing.
Alternating Series Using Other Roots of Unity $\sum (-1)^n b_n$ is representative of an alternating series. We look at whether $\sum b_n$ converges and if $b_{n+1}<b_n$ $\forall n\in \mathbb{Z}$. What if our alternating series is of the form $\sum z^n b_n$ where $z$ is any primitive root of unity. Do the same tests still apply? Another question: $\sum 1/n$ diverges but $\sum (-1)^n 1/n$ converges. Is there a $p \in \mathbb{Z}$ where $\sum z^n 1/n$ converges where $z$ is a primitive $p^{th}$ root of unity, but diverges when $z$ is a primitive $(p+1)^{th}$ root of unity
In Dirichlet's test you can replace $(-1)^n$ in the alternating series test by any sequence with bounded partial sums, and thus as a special case also by $z^n$ where $z$ is a root of unity. Thus the answers to your questions are "yes" and "no", respectively.
Is there an exponential bound for $(1+p)^{-n}$ when $p$ is small? Is there an exponential upper bound for $(1+p)^{-n}$ when $0<p<1$? Similarly a exponential lower bound for $(1+p)^n$ will also be good. Do you know of any resources where one can pick up bounds as such quickly? Thanks.
When $np \ll 1, (1\pm p)^n \approx 1 \pm np$ with the accuracy improving as $np$ gets smaller. The error is of order $\frac {(np)^2}2$. When you say exponential bound, do you want less than exponential growth? It certainly doesn't fall exponentially
Adjacency graph of cutting plane is a bipartite graph Each time you draw a line on a plane, you are cutting it in half. Suppose you keep doing this without drawing a line parallel to a previous one. An adjacency graph can be constructed to represent this where each node represents an undivided portion of the plane, and edges exist between portions that share a boundary edge (as created by the lines). I want to prove that the adjacency graph is bipartite regardless of what lines are drawn. I know that a graph is bipartite if it is 2-colourable and this could be used to prove that the adjacency graph is 2-colourable but I'm stuck with proving how a graph is 2-colourable by induction. Any help is appreciated!! Thanks. EDIT Here's an example
Theorem: A graph with at least one edge is 2-chromatic if and only if it has no circuit of odd length. Reference: Page 168, Theorem 8-2, in Graph Theory by Narsingh Deo. Your adjacency graph doesn't have any circuit of odd length.
If every subsequence is convergent, prove that the sequence is convergent If every subsequence of a given sequence of real numbers is convergent, prove that the sequence is convergent. Help me please. I could not understand how to solve this question.
Your question is trivial unless you change the "If" to "If and only if". I'll prove the "if and only if" version. (Trivial direction) Any sequence is a subsequence of itself, so if all subsequences of a given sequence converge, so does the original sequence. (Nontrivial direction) Suppose $(x_n)$ converges to $L$. Let $(x_{n_i})_{i \geq 1}$ be a subsequence of $(x_n)$. Let $\epsilon > 0$. Since $(x_n)$ converges to $L$, there exists $N \equiv N(\epsilon) \in \mathbb{N}^+$ with the property that if $i \in \mathbb{N}^+$ with $i \geq N$, then $|x_i-L|<\epsilon$. This is from the definition of convergence of a sequence. Now $n_i \geq i$, so if $j\geq i \geq N$, then $n_j \geq n_i \geq i \geq N$, and $|x_{n_j}-L| < \epsilon$. So the subsequence $(x_{n_i})_{i \geq 1}$ converges to $L$. There is nothing special about $\mathbb{R}$ in the proof. It would work the same in any metric space (I'm rusty when it comes to non-metric spaces).
How to evaluate $\int_0^1 \frac{\ln(x+1)}{x^2+1} dx$ This problem appears at the end of Trig substitution section of Calculus by Larson. I tried using trig substitution but it was a bootless attempt $$\int_0^1 \frac{\ln(x+1)}{x^2+1} dx$$
First make the substitution $x=\tan t$ to find $$I=\int_0^1 dx\,\frac{\ln(x+1)}{x^2+1}=\int_0^{\pi/4} dt\,\ln(1+\tan t).$$ Now a substitution $u=\frac{\pi}{4}-t$ gives that $$I=\int_0^{\pi/4} du\,\ln\left(\frac{2\cos u}{\cos u+\sin u}\right).$$ If you add these, you get $$2I=\int_0^{\pi/4} dt\,\ln\left(\frac{\sin t+\cos t}{\cos t}\cdot\frac{2\cos t}{\cos t+\sin t}\right)=\frac{\pi}{4}\ln 2.$$
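A quick numerical check of the value $\frac{\pi}{8}\ln 2 \approx 0.2722$ (Python sketch using a simple composite Simpson rule, no external libraries):

```python
from math import log, pi

def simpson(f, a, b, n=10_000):          # n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

val = simpson(lambda x: log(x + 1) / (x * x + 1), 0.0, 1.0)
print(val, pi * log(2) / 8)              # both ~ 0.272198
```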
If we have $f$ is one-to-one, why can we conclude that $n\mathbb{Z}_m=\mathbb{Z}_m? $ Suppose $m,n \in \mathbb{Z},m,n\geq1.$ Define a map $$f:\mathbb{Z}_m \rightarrow \mathbb{Z}_m$$ where $[x] \rightarrow [nx]$ If we have $f$ is one-to-one, why can we conclude that $n\mathbb{Z}_m=\mathbb{Z}_m? $
Any one to one function $f$ from a set $A$ with $m$ elements to a set $B$ with $m$ elements is onto. And any onto function is one to one. This does not hold for infinite sets of the same cardinality. The proof for finite sets is a matter of counting. Since $f$ is one to one, the values of $f$ at the $m$ elements of $A$ are all different. So $f$ takes on $m$ distinct values. Since $B$ only has $m$ elements, the values of $f$ must include all the elements of $B$. Think of it this way. Let $A$ be a set of $m$ women, and let $B$ be a set of $m$ men. Each woman $a$ chooses a man $f(a)$, with the rule that no two women can choose the same man. (That says $f$ is one to one.) Will there be a man who remains unchosen? Certainly not. Thus the function $f$ is onto.
Finding $G$ with normal $H_1,\ldots,H_n$ such that $G=H_1\cdots H_n$ and $H_i\cap H_j=\{e\}$ for $i\neq j$, but $G\not\cong H_1\times\cdots\times H_n$ How can I find a group $G$ with normal subgroups $H_1,\ldots,H_n$ such that $G=H_1H_2\cdots H_n$ and $H_i\cap H_j=\{e\}$ for all $i\neq j$, but $G\not\cong H_1\times H_2\times\cdots\times H_n$. I'm thinking about $Q_8$. Is that right? Thank you very much!
$Q_{8}$ is not ok, since in it any two non-trivial normal subgroups intersect non-trivially. Take instead the Klein $4$-group $V = \{e, a_1, a_2, a_3\}$, and $H_i = \langle a_i \rangle = \{ e, a_i \}$ for $i = 1, 2, 3$. You have indeed $G = H_1 H_2 H_3$ (actually, two factors already suffice) and $H_i \cap H_j = \{ e \}$ for $i \ne j$, but $H_1 \times H_2 \times H_3$ has order $8$, not $4$ like $V$.
For which $\alpha$ does the series $\sum_{n=1}^\infty n^\alpha x^{2n} (1-x)^2$ converges uniformly at $[0,1]$. $$\sum_{n=1}^\infty n^\alpha x^{2n} (1-x)^2$$ Find for which $\alpha$ this series converges uniformly at $[0,1]$. As $(1-x)^2$ is not dependent of $n$, I thought about rewriting it as: $$(1-x)^2 \sum_{n=1}^\infty n^\alpha x^{2n} $$ And then you can use the M-test, with $n^\alpha$. Which would make me conclude that it converges if $\alpha < -1$. I was thinking that it may also converge at $\alpha \in [-1,1)$. If I take calculate the supremum of $f_n(x)$ I find that this is at $\frac{n}{n+1}$, which gives: $$f_n(\frac{n}{n+1})=n^\alpha (\frac{n}{n+1})^{2n}(1-\frac{n}{n+1})^2= n^\alpha ((1+ \frac{1}{n})^{n})^{-2}(\frac{1}{n+1})^2\leq n^\alpha \frac{1}{(n+1)^2}\leq n^{(\alpha-2)}$$. This implies it also converges at $\alpha\in[−1,1)$. The thing I don't understand is that this would mean that the factor $(1-x)^2$ makes it converge at $\alpha\in[-1,1)$. But intuitively I would say that $\sum_{n=1}^\infty n^\alpha x^{2n}$ and $\sum_{n=1}^\infty n^\alpha x^{2n} (1-x)^2$ should converge for the same $\alpha$. And how could I prove that this series diverges for $\alpha \geq 1$ ? Edit: If set $\alpha=1$, then I get $$\sum_{n=1}^\infty n x^{2n} (1-x)^2$$ I know the supremum is at $\frac{n}{n+1}$, so this gives $f_n(\frac{n}{n+1})=((1+ \frac{1}{n})^{n})^{-2}\frac{n}{(n+1)^2}$. So if $n\to\infty$, then this goes also to zero. Therefore I would conclude that it converges $\alpha\leq 1$. Is this correct ?
Let's denote by $$f_n(x)=n^\alpha x^{2n} (1-x)^2.$$ First, we search for which values of $\alpha$ we have the normal convergence of the series that implies the uniform convergence. The supremum of $f_n$ is attained at $x_n=\frac{n}{n+1}$ so $$||f_n||_{\infty}=f_n(x_n)=n^\alpha (1+\frac{1}{n})^{-2n}(\frac{1}{n+1})^2\sim e^{-2}n^{\alpha-2},$$ hence the series $\sum_n ||f_n||_{\infty}$ converges for $\alpha<1$. Thus we have the uniform convergence for $\alpha<1.$ Now, what about $\alpha\geq 1$? We have uniform convergence if $$\lim_n\sup_{x\in[0,1]}|\sum_{k=n+1}f_k(x)|=0.$$ I'll explain that if the series is of positive terms, then the uniform and normal convergence are same. Indeed, in this case we have $$\lim_n\sup_{x\in[0,1]}|\sum_{k=n+1}f_k(x)|\geq\lim_n\sum_{k=n+1}f_k(x_k)=\lim_n\sum_{k=n+1}e^{-2}k^{\alpha-2}=+\infty.$$
Principal $n$th root of a complex number This is really two questions. 1. Is there a definition of the principal $n$th root of a complex number? I can't find it anywhere. 2. Presumably, the usual definition is $[r\exp(i\theta)]^{1/n} = r^{1/n}\exp(i\theta/n)$ for $\theta \in [0,2\pi)$, but I have yet to see this anywhere. Is this because it has bad properties? For instance, according to this definition is it true that for all complex $z$ it holds that $(z^{1/a})^b = (z^b)^{1/a}$?
There really is not a coherent notion of "principal" nth root of a complex number, because of the inherent and inescapable ambiguities. For example, we could declare that the principal nth root of a positive real is the positive real root (this part is fine), but then the hitch comes in extending this definition to include all or nearly all complex numbers. For example, we could try to require continuity, but if we go around 0 clockwise, versus counter-clockwise, we'd obtain two different nth roots for number we've "analytically continued" to. A/the traditional "solution" (which is not a real solution) is to "slit" the complex plane along the negative real axis to "prevent" such comparisons. And some random choice about whether the negative real axis is lumped together with one side or the other. But even avoiding that ambiguity leaves us with a root-taking function that substantially fails to be a group homomorphism, that is, fails to have the property that the nth root of a product is the product of the nth roots. The expression in terms of radius and argument "solves" the problem by not really giving a well-defined function on complex numbers, but only well-defined on an infinite-fold (ramified) covering of the complex plane... basically giving oneself a logarithm from which to make nth roots. But logarithms cannot be defined as single-valued functions on the complex plane, either, for similar reasons. Partially defined in artificial ways, yes, but then losing the fundamental property that log of a product is sum of the logs.
Sequences and Languages Let $U$ be the following language. A string $s$ is in $U$ if it can be written as: $s = 1^{a_1}01^{a_2}0 ... 1^{a_n}01^b$, where $a_1,..., a_n$ are positive integers such that there is a 0-1 sequence $x_1, ..., x_n$ with $x_1a_1 + ... + x_na_n = b$. Show that $U \in P$. Not sure how to even approach the problem. Any help would be appreciated. I don't want the answer specifically, but any hints or help would be great. Thanks
Hint for constructing a pushdown automaton recognising your language $U$ by empty stack: [I didn't re-read the question before writing this, so I forgot the $a_i$'s had to be positive. You have to make a slight modification, probably adding two extra states, because of that, but the basic idea is still the same.] Use three states apart from the initial state: two ($p_0$ and $p_1$) to indicate whether the current $x_i$ is $0$ or $1$, and one ($b$) to indicate that we are guessing we're up to the $1^b$ part. We only need one stack symbol, which we use to count the $1$s we read in $p_1$ and then check them off against the $1$s we read in $b$. The fact that context-free languages are in $P$ is well-known, but I suppose if you haven't covered it in your course and this is homework, I guess you will need to convert either the pushdown automaton or the grammar into a polynomial-time Turing machine (which is easy to do).
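If you do end up converting to an explicit polynomial-time algorithm rather than using the pushdown automaton, note that membership in $U$ is just a subset-sum question over the $a_i$ with target $b$, and because everything is written in unary the numbers are bounded by the input length, so the standard dynamic program is polynomial. A rough sketch in Python (this is the direct algorithmic check, not the automaton construction above):

```python
def in_U(s: str) -> bool:
    """Decide s = 1^a1 0 1^a2 0 ... 1^an 0 1^b, i.e. whether some subset
    of the a_i sums to b.  Runs in O(len(s)^2)."""
    blocks = s.split('0')
    if len(blocks) < 2:
        return False
    *a, b = (len(block) for block in blocks)
    if any(ai == 0 for ai in a):              # the a_i must be positive
        return False
    reachable = {0}                           # subset sums seen so far
    for ai in a:
        reachable |= {r + ai for r in reachable if r + ai <= b}
    return b in reachable

print(in_U("110111011111"))   # a = (2, 3), b = 5: True  (2 + 3 = 5)
print(in_U("1101110111"))     # a = (2, 3), b = 3: True  (x = (0, 1))
print(in_U("110110111111"))   # a = (2, 2), b = 6: False
```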
Totient function and Euler's Theorem Given $\big(m, n\big) = 1$, Prove that $$m^{\varphi(n)} + n^{\varphi(m)} \equiv 1 \pmod{mn}$$ I have tried saying $$\text{let }(a, mn) = 1$$ $$a^{\varphi(mn)} \equiv 1 \pmod{mn}$$ $$a^{\varphi(m)\varphi(n)} \equiv 1 \pmod{mn}$$ $$(a^{\varphi(m)})^{\varphi(n)} \equiv 1 \pmod{mn}$$ but I can't see where to go from here. I'm trying to somehow split the $a^{\varphi(mn)}$ into an addition so I can turn it into $m^{\varphi(n)} + n^{\varphi(m)} \equiv 1 \pmod{mn}$.
Hint What is $m^{\varphi(n)} + n^{\varphi(m)} \pmod{m}$? What about $\pmod n$?
If a group $G$ has only finitely many subgroups, then show that $G$ is a finite group. If a group $G$ has only finitely many subgroups, then show that $G$ is a finite group. I have no idea on how to start this question. Can anyone guide me?
Suppose $G$ were infinite. If $G$ contains an element of infinite order, then _. Otherwise, every element of $G$ has finite order, and if $x_1, x_2, \ldots, x_n$ are any finite set of elements of $G$, then the subgroups $\langle x_i \rangle$ cover only finitely many elements of $G$. Therefore __.
Injectivity is a local property Let $R$ be a commutative noetherian ring, and let $M$ be an $R$-module. How can I show that if any localization $M_p$ at a prime ideal $p$ of the ring $R$ is injective over $R_p$, then $M$ is injective?
Baer's criterion shows that it suffices to show that $\hom(B,M) \to \hom(A,M)$ is surjective for $B=R$ and $A$ an ideal; since $R$ is noetherian, both are finitely presented. But then $\hom(-,M)$ commutes with localization, surjectivity of a map of modules can be checked at all primes, and we are done.
How do I show the equivalence of the two forms of the Anderson-Darling test statistic? It's stated in many places regarding the Anderson-Darling test statistic, which is defined as $$n\int_{-\infty}^\infty \frac{(F_n(x) - F(x))^2}{F(x)(1 - F(x))}dF(x)$$ that this is functionally equivalent to the statistic $$A^2 = -n - S$$ where $$S = \sum_{k=1}^n\frac{2k-1}{n}\left(\ln F(Y_k) + \ln(1 - F(Y_{n+1-k}))\right)$$ Note that $F_n(x)$ is the empirical distribution function and $F(x)$ is the distribution to which we are comparing the sample. $Y_k$ is the $k^{th}$ ranked element in the sample. I even went so far as to read the original 1954 paper by Anderson and Darling and I have yet to discover how this equivalence was computed - these authors merely stated the equivalence too. I've tried writing out the numerator inside the integral and splitting into 3 integrals - I was only able to simplify one of them. I have an inkling that maybe the Probability Integral Transformation should be applied, but I'm not really sure how. I'd really appreciate if anyone could give any pointers.
I think you can prove it by dividing the integral into (n+1) integrals on the intervals $[Y_k; Y_{k+1})$.
Choose 38 different natural numbers less than 1000, Prove among these there exists at least two whose difference is at most 26. Choose any 38 different natural numbers less than 1000. Prove that among the selected numbers there exists at least two whose difference is at most 26. I think I need to use pigeon hole principle, not sure where to even begin.
Pigeonhole-principle is a good idea. Hint: Think about partitioning $\{1,2,\ldots,999\}$ into subsets $\{1,2,\ldots,27\}$, $\{28,29,\ldots, 54\}$, $\{55,56,\ldots,81\}$, ..., $\{\ldots,998,999\}$ of size $27$ each.
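A tiny check of the hinted partition: $\{1,\dots,999\}$ splits into exactly $37$ blocks of $27$ consecutive integers, so among any $38$ chosen numbers two must share a block and hence differ by at most $26$ (Python sketch):

```python
blocks = [range(start, start + 27) for start in range(1, 1000, 27)]
print(len(blocks))                                      # 37 pigeonholes
print(all(max(b) - min(b) == 26 for b in blocks))       # each has spread 26
print(sum(len(b) for b in blocks) == 999)               # together they cover 1..999
```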
Counting problem - How many times an inequality holds? Let $k>2$ be a natural number and let $b$ be a non-negative real number. Now, for each $n$, $m \in \{ 1, 2, ... k \}$, consider the following inequalities: $$ mb < k - n $$ We have $k^2$ inequalities. How can I count the couples of $n$ and $m$ such that the inequality (for $n$ and $m$ fixed) holds? Clearly, an algorithm would easily give an answer. But I was wondering if there is a "more-mathematical" solution.
Presumably $b$ is given. We can note that the right side ranges from $0$ to $k-1$, so define $p=k-n \in [0,k-1]$ and ask about $mb \lt p$. For a given $p$, the number of allowable $m$ is $\lfloor \frac {p-1}b \rfloor$ So we are asking for $\sum_{p=1}^{k-1}\lfloor \frac {p-1}b \rfloor$ Let $q=\lfloor \frac {k-2}b\rfloor$ Then $$\sum_{p=1}^{k-1}\lfloor \frac {p-1}b \rfloor=b\sum_{i=0}^{q-1}i+q(k-1-bq)=\frac{bq(q-1)}2+q(k-1-bq)$$ because the left sum starts with $b\ \ 0$'s, $b\ \ 1$'s, on to $b \ \ q-1$'s and finish with $q$'s.
example of a set that is closed and bounded but not compact Find an example of a subset $S$ of a metric space such that $S$ is closed and bounded but not compact. One such example that comes from analysis is probably a closed and bounded set in $C[0,1]$. I attempt to construct my own example to see if it works. Is $\{ \frac{1}{n} | n \in \mathbb{N} \}$ endowed with discrete topology a set that is closed and bounded but not compact? My guess is that it is indeed an example of closed and bounded does not imply compact. Every element is less than or equal to $1$, and it is closed as a whole set. If we let $\mathcal{A}$ be a covering of the set that consists of singletons in $\{ \frac{1}{n} \}$ so that any finite subcover $\{ \frac{1}{n_j} |j =1,...,k \quad \text{and} \quad n_j \in \mathbb{N} \}$ will not cover $\{\frac{1}{n}\}$, because if we take $n = \max \{{n_j}\}, \frac{1}{n+1}$ is not in the finite subcover. Thanks in advance for pointing out any mistake.
The "closed" ball $\lVert x \rVert \leq 1$ in any infinite dimensional Banach space is closed and bounded but not compact. It is closed because any point outside it is contained in a small open ball disjoint from the first one, by the triangle inequality. That is, if $\lVert y \rVert = 1 + 2 \delta,$ then the sets $\lVert x \rVert \leq 1$ and $\lVert x - y \rVert < \delta$ are disjoint. Hence the complement of the "closed" unit ball is open and the "closed" unit ball really is closed. But not compact if not in finite dimensions.
Express $\sin 4\theta$ by formulae involving $\sin$ and $\cos$ and its powers. I have an assignment question that says "Express $\sin 4\theta$ by formulae involving $\sin$ and $\cos$ and its powers." I'm told that $\sin 2\theta = 2 \sin\theta \cos\theta$ but I don't know how this was found. I used Wolfram Alpha to get the answer but this is what I could get : $$ 4\cos^3\theta\sin\theta- 4\cos\theta \sin^3\theta $$ How can I solve this problem?
That's a trig identity. So... $\sin{4\theta} = 2\sin{2\theta}\cos{2\theta}$ Can you take it from there?
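For reference, carrying the hint one more step reproduces exactly the Wolfram Alpha form quoted in the question: $$\sin 4\theta = 2\sin 2\theta\cos 2\theta = 2\,(2\sin\theta\cos\theta)(\cos^2\theta-\sin^2\theta) = 4\cos^3\theta\sin\theta - 4\cos\theta\sin^3\theta.$$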
Compact injections and equivalent seminorms Let $V$ and $H$ be two Banach spaces with norm $\lVert \cdot \rVert$ and $\lvert \cdot \rvert$ respectively such that $V$ embeds compactly into $H$. Let $p$ be a seminorm on $V$ such that $p(u) + \lvert u \rvert$ is a norm on $V$ that is equivalent to $\lVert \cdot \rVert$. Set $N = \{u \in V: p(u) = 0\}$. Prove that there does not exist a sequence $(u_n)$ in $V$ satisfying * *$\operatorname{dist}(u_n, N) = 1$ for all $n$ *$p(u_n) \to 0$. Ideas: I have no reason why I should expect such a result, so I can't motivate it. Anyway, I want to claim that $u_n$ approach a limit $u$. Then hopefully $p(u_n) \rightarrow p(u) = 0$ so $u \in N$, contradicting $1 = \operatorname{dist}(u_n,N) \rightarrow \operatorname{dist}(u,N) = 0$. It would help greatly if the $(u_n)$ were bounded, since then the compact injection means that $u_n$ approach a limit in $\bar{V} \subset H$. Then somehow argue that the limit is actually in $V$?
We will first show that, given $x\in X$, there exists $z\in N$ s.t. $\|x-z\|_X=d(x-z,N)$ and $p(x)=p(x-z)$. Let $\{z_n\}\subset N$ s.t. $\|x-z_n\|_X\to d(x,N)$. Then $\infty>\|x\|_X+\sup_{n}\|x-z_n\|_X>\sup_{m}\|z_m\|_X$. Since $N$ is finite dimensional, there exists a subsequence $z_{n_k}$ and $z\in N$ s.t. $z_{n_k}\to z$. Therefore, $\|x-z_{n_k}\|_X\to\|x-z\|_X=d(x,N)$. Moreover, $p(x-z)=p(x)$ because \begin{align*} p(x)&\leq p(x-z)+p(z)=p(x-z)\\ p(x-z)&\leq p(x)+p(-z)=p(x) \end{align*} and $d(x-z,N)=d(x,N)$ since $N$ is a subspace. We now prove the main claim. AFSOC that for all $n\in\mathbb{N}$ there exists $x_n\in X$ s.t. $d(x_n,N)>np(x_n)\geq 0$. Normalizing $x_n$ by $d(x_n,N)$ we have that $1>np(x_n)$. For each $x_n$, let $z_n\in N$ be the $z$ found above. Then we have that $\|x_n-z_n\|=1$ for all $n\in\mathbb{N}$ so $\{x_n-z_n\}$ is a bounded sequence. By compactness of the embedding, $\{x_n-z_n\}$ has a subsequence, $\{x_{n_k}-z_{n_k}\}$ that converges to some $\hat{z}$. By continuity of $p$, we have that $p(x_{n_k}-z_{n_k})\to p(\hat{z})=0$. This implies that $\hat{z}\in N$. However, this implies that $d(x_{n_k}-z_{n_k},N)\leq\|(x_{n_k}-z_{n_k})-\hat{z}\|\to0$: a contradiction.
If $H \triangleleft G$ and $|G/H|=m$, show that $x^m \in H$ for $\forall x \in G$ If $H \triangleleft G$ and $|G/H|=m$, show that $x^m \in H$ for all $x \in G$. My attempt is: since $|G/H|=\frac{|G|}{|H|}=m$, we have $x^{|G|}=x^{m|H|}=e$, and then I am stuck. Can anyone guide me?
Since $(xH)^m=H$, it follows that $x^mH=H$, and thus that $x^m\in H$.
What was the first bit of mathematics that made you realize that math is beautiful? (For children's book) I'm a children's book writer and illustrator, and I want to create a book for young readers that exposes the beauty of mathematics. I recently read Paul Lockhart's essay "The Mathematician's Lament," and found that I, too, lament the uninspiring quality of my elementary math education. I want to make a book that discredits the notion that math is merely a series of calculations, and inspires a sense of awe and genuine curiosity in young readers. However, I myself am mathematically unsophisticated. What was the first bit of mathematics that made you realize that math is beautiful? For the purposes of this children's book, accessible answers would be appreciated.
When I was a kid I realized that $$0^2 + 1\ (\text{the first odd number}) = 1^2$$ $$1^2 + 3\ (\text{the second odd number}) = 2^2$$ $$2^2 + 5\ (\text{the third odd number}) = 3^2$$ and so on... I checked it for A LOT of numbers :D Years passed before someone taught me the basics of multiplication of polynomial and hence that $$(x + 1)^2 = x^2 + 2x + 1.$$ I know that this may sound stupid, but I was very young, and I had a great time filling pages with numbers to check my conjecture!!!
What was the first bit of mathematics that made you realize that math is beautiful? (For children's book) I'm a children's book writer and illustrator, and I want to create a book for young readers that exposes the beauty of mathematics. I recently read Paul Lockhart's essay "The Mathematician's Lament," and found that I, too, lament the uninspiring quality of my elementary math education. I want to make a book that discredits the notion that math is merely a series of calculations, and inspires a sense of awe and genuine curiosity in young readers. However, I myself am mathematically unsophisticated. What was the first bit of mathematics that made you realize that math is beautiful? For the purposes of this children's book, accessible answers would be appreciated.
If you've ever heard of $3,529,411,764,705,882$ being multiplied by $3/2$ to give $5,294,117,647,058,823$ (which is the same as the 3 being shifted to the back), you might consider including that in the book. There are lots of other examples, like $285,714$ turning into $428,571$ (moving the 4 from back to front) when multiplied by $3/2$, or the front digit of $842,105,263,157,894,736$ moving to the back four times in a row when you divide it by $2$. (There's a leading zero in front of the last term, though.)
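These claims are easy to verify (and to hunt for more examples of) with a few lines of Python:

```python
# multiplying by 3/2 moves the leading digit to the back
print(3529411764705882 * 3 // 2)      # 5294117647058823
print(285714 * 3 // 2)                # 428571

# dividing by 2 moves the front digit to the back, four times in a row
n = 842105263157894736
for _ in range(4):
    n //= 2
    print(n)
```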
What was the first bit of mathematics that made you realize that math is beautiful? (For children's book) I'm a children's book writer and illustrator, and I want to create a book for young readers that exposes the beauty of mathematics. I recently read Paul Lockhart's essay "The Mathematician's Lament," and found that I, too, lament the uninspiring quality of my elementary math education. I want to make a book that discredits the notion that math is merely a series of calculations, and inspires a sense of awe and genuine curiosity in young readers. However, I myself am mathematically unsophisticated. What was the first bit of mathematics that made you realize that math is beautiful? For the purposes of this children's book, accessible answers would be appreciated.
For me, it was the beauty of the number 1: how it can be multiplied with anything and it won't change the number it is being multiplied with, and also how it can be represented as any number divided by itself, such as 4/4=1. I would also love to share this beautiful poem by Dave Feinberg that is titled "the square root of 3" and was also featured in a Harold and Kumar movie; it renewed my love for math and is and always has been one of my favorite poems:

I'm sure that I will always be
A lonely number like root three
The three is all that's good and right,
Why must my three keep out of sight
Beneath the vicious square root sign,
I wish instead I were a nine
For nine could thwart this evil trick,
with just some quick arithmetic
I know I'll never see the sun, as 1.7321
Such is my reality, a sad irrationality
When hark! What is this I see,
Another square root of a three
As quietly co-waltzing by,
Together now we multiply
To form a number we prefer,
Rejoicing as an integer
We break free from our mortal bonds
With the wave of magic wands
Our square root signs become unglued
Your love for me has been renewed
What was the first bit of mathematics that made you realize that math is beautiful? (For children's book) I'm a children's book writer and illustrator, and I want to create a book for young readers that exposes the beauty of mathematics. I recently read Paul Lockhart's essay "The Mathematician's Lament," and found that I, too, lament the uninspiring quality of my elementary math education. I want to make a book that discredits the notion that math is merely a series of calculations, and inspires a sense of awe and genuine curiosity in young readers. However, I myself am mathematically unsophisticated. What was the first bit of mathematics that made you realize that math is beautiful? For the purposes of this children's book, accessible answers would be appreciated.
Maybe the fact that the homotopy category of a model category is equivalent to the full subcategory of fibrant-cofibrant objects with homotopy classes of morphisms.
What was the first bit of mathematics that made you realize that math is beautiful? (For children's book) I'm a children's book writer and illustrator, and I want to create a book for young readers that exposes the beauty of mathematics. I recently read Paul Lockhart's essay "The Mathematician's Lament," and found that I, too, lament the uninspiring quality of my elementary math education. I want to make a book that discredits the notion that math is merely a series of calculations, and inspires a sense of awe and genuine curiosity in young readers. However, I myself am mathematically unsophisticated. What was the first bit of mathematics that made you realize that math is beautiful? For the purposes of this children's book, accessible answers would be appreciated.
I think one of my early favourite mathematical things was simply "proof by contradiction" -- any of them. I think its appeal is that you nearly have proof by example, except that you're proving a negative.
How do we integrate, $\int \frac{1}{x+\frac{1}{x^2}}dx$? How do we integrate the following integral? $$\int \frac{1}{x+\large\frac{1}{x^2}}\,dx\quad\text{where}\;\;x\ne-1$$
The integral is equivalent to $$\int dx \frac{x^2}{1+x^3} = \frac{1}{3} \int \frac{d(x^3)}{1+x^3} = \frac{1}{3} \log{\left|1+x^3\right|} + C$$ (the absolute value covers the interval $x<-1$ as well).
Proving a matrix is positive definite using Cholesky decomposition If you have a Hermitian matrix $C$ that you can rewrite using Cholesky decomposition, how can you use this to show that $C$ is also positive definite? $C$ is positive definite if $x^\top C x > 0$ and $x$ is a vector.
From Wikipedia: If A can be written as LL* for some invertible L, lower triangular or otherwise, then A is Hermitian and positive definite. $A=LL^*\implies x^*Ax=(L^*x)^*(L^*x)\ge 0$ Since $L$ is invertible, $L^*x\ne 0$ unless $x=0$, so $x^*Ax>0\ \forall\ x\ne 0$
Show $ \varlimsup_{n\rightarrow\infty}{\sqrt[n]{a_{1}+a_{2}+\cdots+a_{n}}}=1 $ if $\displaystyle \varlimsup_{n\rightarrow\infty}{\sqrt[n]{a_{n}}}=1 $ Let $\{a_{n}\}$ be a positive sequence with $\displaystyle \varlimsup_{n\rightarrow\infty}{\sqrt[n]{a_{n}}}=1 $. How can we show that $$ \varlimsup_{n\rightarrow\infty}{\sqrt[n]{a_{1}+a_{2}+\cdots+a_{n}}}=1 $$ I am not sure the problem is true. If it is false, what is the counterexample?
This is true. 1) Pick $\epsilon>0$. Then there exists $N$ such that $$ \sqrt[n]{a_n}\leq 1+\epsilon\quad\Rightarrow\quad a_n\leq (1+\epsilon)^n\qquad\forall n\geq N. $$ Now $$ \sum_{k=1}^Ka_k =\sum_{k=1}^{N-1}a_k+\sum_{k=N}^Ka_k\leq C+\sum_{k=N}^K(1+\epsilon)^k\leq C+ (K-N+1)(1+\epsilon)^K\leq C+K(1+\epsilon)^K $$ where $C=\sum_{k=1}^{N-1}a_k$ is fixed. So $$ \sqrt[K]{\sum_{k=1}^Ka_k }\leq (1+\epsilon)\left(\frac{C}{(1+\epsilon)^K} +K\right)^\frac{1}{K} \longrightarrow 1+\epsilon. $$ This proves that $$ \limsup \sqrt[K]{\sum_{k=1}^Ka_k }\leq 1+\epsilon\quad\forall\epsilon>0\quad\Rightarrow \quad\limsup \sqrt[K]{\sum_{k=1}^Ka_k }\leq 1. $$ 2) Take a subsequence such that $$ \lim \sqrt[n_k]{a_{n_k}}=1. $$ Pick $\epsilon>0$. There exists $K$ such that $$ \sqrt[n_k]{a_{n_k}}\geq 1-\epsilon\quad\Rightarrow \quad a_{n_k}\geq (1-\epsilon)^{n_k}\quad\forall k\geq K. $$ Now $$ \sum_{n=1}^{n_k}a_n\geq a_{n_k}\geq (1-\epsilon)^{n_k}\quad\forall k\geq K. $$ So $$ \sqrt[n_k]{\sum_{n=1}^{n_k}a_n}\geq 1-\epsilon \quad\forall k\geq K. $$ Hence $$ \limsup \sqrt[N]{\sum_{n=1}^{N}a_n}\geq 1-\epsilon\quad\forall\epsilon>0\quad\Rightarrow\quad \limsup \sqrt[N]{\sum_{n=1}^{N}a_n}\geq 1. $$ Both inequalities are now proven, so $$ \limsup \sqrt[N]{\sum_{n=1}^{N}a_n}= 1. $$
The definition of continuous function in topology I am self-studying general topology, and I am curious about the definition of a continuous function. I know that the definition derives from calculus, but why do we define it like that? I mean, what kind of property do we want to preserve through a continuous function?
If you mean the definition that $f\colon X\to Y$ is continuous if $f^{-1}(U)$ is open for every open $U\subseteq Y$, then this is because this property is equivalent to continuity in metric spaces, but doesn't refer to the metric. This makes it a natural candidate for the general definition. It is a little harder to motivate the definition in terms of the preservation of properties, because we have to do it backwards. We are saying that a function $f$ is continuous if $f^{-1}$ (thought of as a function from the set of subsets of $Y$ to the set of subsets of $X$) preserves the property of openness. We run into problems if we try to insist that $f(U)$ is open for every open $U\subseteq X$ (maps with this property are called open), because the image function from subsets of $X$ to subsets of $Y$, taking a subset $V$ to the subset $f(V)$, doesn't respect the operations of union and intersection in the same way that $f^{-1}$ does. For example, $f^{-1}(U\cap V)=f^{-1}(U)\cap f^{-1}(V)$, but it is not true in general that $f(U\cap V)=f(U)\cap f(V)$. As the definition of a topology involves certain intersections and unions preserving the property of openness, any "structure preserving map" must do so as well, which is why we consider $f^{-1}$.
How to prove a generalized Gauss sum formula I read the wikipedia article on quadratic Gauss sum. link First let me write a definition of a generalized Gauss sum. Let $G(a, c)= \sum_{n=0}^{c-1}\exp (\frac{an^2}{c})$, where $a$ and $c$ are relatively prime integers. (Here is another question. Is the function $e(x)$ defined in the article equal to $\exp(2\pi i/x)$ or $\exp(\pi i/x)$?) In the article, a formula is given according to values of $a$ and $c$. For example, if $a$ is odd and $4|c$, then $G(a, c)=(1+i)\epsilon_a^{-1} \sqrt{c} \big(\frac{c}{a}\big)$, where $\big(\frac{c}{a}\big)$ is the Jacobi symbol. I would like to prove it but I don't know how to do it. Could you give me a guide?
The quadratic Gauss sum is given by \begin{eqnarray*} G(s;k) = \sum_{x=0}^{k-1} e\left(\frac{sx^2}k\right), \end{eqnarray*} where $\displaystyle e(\alpha) = e^{2\pi \imath \alpha}$, $s$ is any integer coprime to $k$ and $k$ is a positive integer. The generalized Gauss sum is given by \begin{eqnarray*} G(a,b,c) = \sum_{x=0}^{|c|-1} e\left(\frac{ax^2+bx}c\right), \end{eqnarray*} where $ac \neq 0$ and $ac+b$ is even. It is well known that \begin{eqnarray*} G(s;k) = \begin{cases} \left(1+\imath^s\right)\left(\frac ks\right)\sqrt{k} &\mbox{ if } k \equiv 0 \mod 4\\ \left(\frac sk\right)\sqrt{k} &\mbox{ if } k \equiv 1 \mod 4\\ 0 &\mbox{ if } k \equiv 2 \mod 4\\ \imath \left(\frac sk\right)\sqrt{k} &\mbox{ if } k \equiv 3 \mod 4 \end{cases} \end{eqnarray*} There are many proofs for the above formula: Gauss proved it using elementary methods, Dirichlet used the Poisson summation formula, Cauchy used a transformation function for the classical theta function, etc... An elementary proof in the style of Gauss is available in the book Gauss and Jacobi Sums by Berndt, Evans and Williams and also the book Introduction to Number Theory by Nagell. One method is to show, for $k$ odd, $|G(s;k)|^2 = k$, and then determining the sign of $G(s;k)$ will be the hard part. From here you can use reduction properties of the quadratic Gauss sum and the Chinese Remainder Theorem to prove the even cases. There is no general formula for a generalized Gauss sum. You can find a reciprocity theorem for these sums in the book Gauss and Jacobi Sums as well, also in Introduction to Analytic Number Theory by Apostol.
Trigonometry functions How do I verify the following identity: $$\frac{1-(\sin x - \cos x)^2}{\sin x} = 2\cos x$$ I have done simpler problems but got stuck with this one. Please help. Tony
$$\frac{1-(\sin x -\cos x)^2}{\sin x}=\frac{1-(\sin^2x+\cos^2x)+2\sin x\cos x}{\sin x}=\frac{2\cos x\sin x}{\sin x}=2\cos x$$ Here I used the identity: $\sin^2x+\cos^2x=1$
Intersection of a Collection of Sets I am trying to figure out how to write out the answer to this. If I am given: $$A_i = \{i,i+1,i+2,...\}$$ And I am trying to find the intersection of a collection of those sets given by: $$\bigcap_{i=1}^\infty A_i $$ First off, am I right in saying its the nothing since: $$A_1 = \{ 1,2,3,...\}$$ $$A_2 = \{ 2,3,4,...\}$$ So, would the answer be the empty set?
Yes, that's correct, the answer is the empty set. To explain this properly you want to take a number $n \in \mathbb N$ and explain why $n \notin \bigcap A_i$. If you do this with no conditions on $n$ you will have shown that the intersection doesn't contain any positive integers, hence it's empty. To show $n \notin \bigcap A_i$ you must pick an $i$ such that $n \notin A_i$. I leave it to you to determine which $i$ to pick.
Discuss the eigenvalues and eigenvectors of $A=I+2vv^T$.. I need some help with this question: Let $v\in\mathbb R^n$. Discuss the eigenvalues and eigenvectors of $$A=I+2vv^T$$. Can anyone help me?
The matrix $vv^T$ is real symmetric, so it's diagonalizable, then there's an invertible matrix $P$ and a diagonal matrix $D$ such that $$vv^T=PDP^{-1}.$$ Since the rank of $vv^T$ is $0$ or $1$ then $D=diag(||v||^2,0,\ldots,0).$ Now, we have: $$A=I+2vv^T=P(I+2D)P^{-1}$$ so $A$ is diagonalizable in the same basis of eigenvectors than $vv^T$ and has the eigenvalues $1+2||v||^2,1,\ldots,1.$
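A quick numerical illustration of this answer (my own sketch with an arbitrary random vector, not part of the original): the spectrum of $I+2vv^T$ consists of $1$ with multiplicity $n-1$ and a single eigenvalue $1+2\|v\|^2$.

```python
# Numerical illustration (my own random example, not from the answer).
import numpy as np

n = 5
v = np.random.randn(n, 1)
A = np.eye(n) + 2 * v @ v.T
eigvals = np.linalg.eigvalsh(A)      # A is symmetric, eigenvalues come back in ascending order
print(eigvals)                       # the first n-1 are 1 ...
print(1 + 2 * (v.T @ v).item())      # ... and the last equals 1 + 2*||v||^2
```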
Complex representation of sinusoids question A sum of sinusoids defined as: $$\tag1f(t) = \sum_{n=1}^{N}A\sin(2\pi tn) + B\cos(2\pi tn)$$ is said to be represented as: $$\tag2f(t) = \sum_{n=-N}^{N}C\cdot e^{i2\pi tn}$$ which is derived from Euler's identity $e^{ix} = \cos(x) + i\sin(x)$ from which follows that: $$\cos(x) = \frac{e^{ix} + e^{-ix}}{2}$$ and $$\sin(x)=\frac{e^{ix} - e^{-ix}}{2i}$$ therefore: $$ \cos(x) + \sin(x) = \frac{e^{ix} + e^{-ix}}{2} + \frac{e^{ix} - e^{-ix}}{2i} $$ The thing is, I cannot see how this turns into $(2)$? The last statement, no matter how I try to manipulate it, I cant get it to the form $$\cos(x) + \sin(x) = Ce^{ix}$$ so that the summation works and it makes no sense for it to be so, since Euler's equation is using $i\sin(x)$ - clearly a different thing... Further, the statement : $$ |\cos(x) + i\sin(x)| = 1$$ is also confusing since ${\cos(x)^2 + i\sin(x)}^2$ is not the same thing as ${\cos(x)^2 + \sin(x)}^2$ and it appears people just "drop" the $i$, which is terribly confusing. Could someone please explain what am I not understanding? Thanks a bunch!
Using the identities: $$\begin{aligned} f(t) &= \sum_{n=1}^N A \cos(2 \pi t n) + B \sin(2 \pi t n) \\ &= \sum_{n=1}^N \left[A\left(\frac{e^{i 2 \pi t n} + e^{-i 2 \pi t n}}{2} \right) + B \left(\frac{e^{i 2 \pi t n} - e^{-i 2 \pi t n}}{2i} \right)\right] \\ &= \sum_{n=1}^N \left[\left(\frac{A}{2}+\frac{B}{2i}\right)e^{i 2 \pi t n} + \left(\frac{A}{2}-\frac{B}{2i}\right)e^{-i 2 \pi t n}\right] \\ &=\sum_{n=1}^N \left(\frac{A}{2}+\frac{B}{2i}\right)e^{i 2 \pi t n} +\sum_{n=1}^N \left(\frac{A}{2}-\frac{B}{2i}\right)e^{-i 2 \pi t n} \\ &=\sum_{n=1}^N \left(\frac{A}{2}+\frac{B}{2i}\right)e^{i 2 \pi t n} +\sum_{n=-N}^{-1} \left(\frac{A}{2}-\frac{B}{2i}\right)e^{i 2 \pi t n} \end{aligned}$$ So you are right about the fact that there is no single $C$ that fulfills the sum from $-N$ to $N$, unless $A=B=0$. Otherwise, define $$ C_n = \begin{cases} \frac{A}{2}+\frac{B}{2i} \quad \text{if } n>0 \\ 0 \qquad ~~~~~~~~\text{if } n=0 \\ \frac{A}{2}-\frac{B}{2i} \quad \text{if } n<0 \\ \end{cases} $$ and you get something similar. Regarding $|\cos(x)+i\sin(x)|=1$, note that the square of the modulus of a complex number $z$ is given by $|z|^2=z\bar{z}$, which in this case gives $$\begin{aligned} |\cos(x)+i\sin(x)|^2 &= \left(\cos(x)+i\sin(x)\right) \left(\cos(x)-i\sin(x)\right) \\ &=\cos^2(x) -i^2\sin^2(x) \\ &=\cos^2(x) + \sin^2(x) \\ &=1 \end{aligned}$$
Completing a proof Say we are given this: Impossibility of ordering the complex numbers. As yet we have not defined a relation of the form $x < y$ if $x$ and $y$ are arbitrary complex numbers, for the reason that it is impossible to give a definition of $<$ for complex numbers which will have all the properties in Axioms 6 through 8. To illustrate, suppose we were able to define an order relation $<$ satisfying Axioms 6, 7, and 8. Then, since $i \neq 0$, we must have either $i > 0$ or $i < 0$, by Axiom 6. Let us assume $i > 0$. Then, taking $x = y = i$ in Axiom 8, we get $i^2 > 0$, or $—1 > 0$. Adding 1 to both sides (Axiom 7), we get $0 > 1$. On the other hand, applying Axiom 8 to $—1 > 0$ yields $1 > 0$. Thus we have both $0 > 1$ and $1 > 0$, which, by Axiom 6, is impossible. Hence the assumption $i > 0$ leads us to a contradiction. So we are given this "passage" and the question is By reading the passage: Suppose that $<$ is a relation on $\mathbb{C}$ that satisfies Axioms 6, 7, and 8. and show that the assumption $i <0$ leads to a contradiction. Axiom 6 - Exactly one of the relations $x = y$, $x<y$ and $x>y$ holds. Note $x>y$ means the same thing as $y<x$ Axiom 7 - If $x<y$, then for every $z$ we have $x+z <y+z$. Axiom 8 - If $x>0$ and $y>0$, then $xy>0$. So this is what I have but I am really confused from that passage. Didn't we already complete the proof because we ended up with a contradiction? Help on this one please.
The passage only proves the case $i>0$; the case $i<0$ doesn't readily follow from it, though it is not hard either: by Axiom 7, if $i<0$, we have $0=i+(-i)<0+(-i)=-i$, that is, $-i>0$. But then the argument of the passage can be applied again with $-i$ in place of $i$, since $(-i)^2=i^2=-1$.
How to find the LU factorisation of a 4 by 4 matrix using Gaussian elimination I have a 4 by 4 matrix, $$A = \begin{pmatrix}2&-2&0&0\\2&-4&2&0\\0&-2&4&-2\\0&0&2&-4\end{pmatrix}$$ How would I use Gaussian elimination to find the LU factorisation of the matrix? Please could someone explain how to do this!? I have an exam where a similar question will come up so I really want to be able to fully understand this. I can do and completely understand Gaussian elimination of a 3 by 3 matrix, but not when it is not a system of equations! I haven't seen anything like this before! Many thanks
Here's your matrix $A$, and you multiply it on the left with a 4 by 4 Identity matrix (it's always going to be the same dimensions as your $A$ matrix). So it'll look like $[I]*[A]$, and then you do Gaussian Elimination (GE) to your $A$ matrix, and make sure you keep track of your row operations that you do. Ex that's not important to your question: $$\ r_{2} = r_{2} - 2r_{1}$$ When you apply it to your Identity matrix, make sure you change it to: $$\ c_{1} = c_{1} + 2c_{2}$$ Where you then apply it as a column operation to the identity matrix. You keep doing those same steps until you end with your lower triangular matrix ($L$) on the left, and an upper triangular matrix on the right ($U$). Then $A = LU$. I will post an actual example soon. Note: If you have to switch rows, then you have to multiply by a Permutation matrix, P. The steps you would do it would be PAx = Pb. I will write out an example, and somehow figure out how to post it here.
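Since the promised worked example never materialised, here is a minimal sketch (my own, not the answerer's) of the same bookkeeping: standard Doolittle elimination without pivoting, which records each multiplier in $L$ while the row operations turn the working copy into $U$. It is only an illustration of the procedure described above, not a general-purpose routine (no pivoting, so it assumes the pivots are nonzero, which holds for the matrix in the question).

```python
# Hypothetical illustration: Doolittle LU factorisation of the 4x4 matrix
# from the question, without pivoting.
def lu_doolittle(A):
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]  # start from the identity
    U = [row[:] for row in A]                                  # working copy of A
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]          # multiplier that clears entry (i, k)
            L[i][k] = m                    # record it in L
            for j in range(k, n):
                U[i][j] -= m * U[k][j]     # the row operation r_i := r_i - m*r_k
    return L, U

A = [[2.0, -2.0, 0.0, 0.0],
     [2.0, -4.0, 2.0, 0.0],
     [0.0, -2.0, 4.0, -2.0],
     [0.0,  0.0, 2.0, -4.0]]
L, U = lu_doolittle(A)
print(L)   # unit lower triangular, multipliers below the diagonal
print(U)   # upper triangular
prod = [[sum(L[i][k] * U[k][j] for k in range(4)) for j in range(4)] for i in range(4)]
print(prod == A)   # True: L times U recovers A
```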
Asymptotic bound $T(n)=T(n/3+\lg n)+1$ How would I go about finding the upper and lower bounds of $T(n)=T(n/3+\lg(n))+1$?
Not sure how tight a bound you need, but here is an idea. Compare your recurrence to the one that satisfies $X_n = X_{n/3} + 1$ (without the log) - what is the relationship between $T_n$ and $X_n$? Note that when you solve, $X_n = \Theta(\log n)$. Next item is in the other direction. Compare $T_n$ to $Y_n = Y_{n-1}+1$, and note that $Y_n = \Theta(n)$. If you fill in the blanks, you get an inequality bounding on both sides.
Power series infinity at every point of boundary Is there an example of a power series $f(z)=\sum_{k=0}^\infty a_kz^k$ with radius of convergence $0<R<\infty$ so that $\sum_{k=0}^\infty a_kw^k=\infty$ for all $w$ with $|w|=R$ Thank you kindly.
No there is not. In fact, there is no example of such power series $\sum_n a_n z^n$ such that $\sum_n a_n w^n = \infty$ for all $w$ in a set of positive measure in $\partial D$, where $D=\{|z|<R\}$. Indeed, suppose there exists such a power series $f$. By Abel's Theorem, we deduce that $f(z)$ has non-tangential boundary values $\infty$ on a set of positive measure in $\partial D$. This means that $1/f$ is a meromorphic function in $D$ with non-tangential boundary values $0$ on a set of positive measure in $\partial D$, and so $1/f$ is identically zero in $D$, by the Luzin-Privalov Theorem. So $f \equiv \infty$ in $D$, a contradiction.
Evaluate the limit $\lim\limits_{n\to\infty}{\frac{n!}{n^n}\bigg(\sum_{k=0}^n{\frac{n^{k}}{k!}}-\sum_{k=n+1}^{\infty}{\frac{n^{k}}{k!}}\bigg)}$ Evaluate the limit $$ \lim_{n\rightarrow\infty}{\frac{n!}{n^{n}}\left(\sum_{k=0}^{n}{\frac{n^{k}}{k!}}-\sum_{k=n+1}^{\infty}{\frac{n^{k}}{k!}} \right)} $$ I use $$e^{n}=1+n+\frac{n^{2}}{2!}+\cdots+\frac{n^{n}}{n!}+\frac{1}{n!}\int_{0}^{n}{e^{x}(n-x)^{n}dx}$$ but I don't know how to evaluate $$ \lim_{n\rightarrow\infty}{\frac{n!}{n^{n}}\left(e^{n}-2\frac{1}{n!}\int_{0}^{n}{e^{x}(n-x)^{n}dx} \right) }$$
In this answer, it is shown, using integration by parts, that $$ \sum_{k=0}^n\frac{n^k}{k!}=\frac{e^n}{n!}\int_n^\infty e^{-t}\,t^n\,\mathrm{d}t\tag{1} $$ Subtracting both sides from $e^n$ gives $$ \sum_{k=n+1}^\infty\frac{n^k}{k!}=\frac{e^n}{n!}\int_0^n e^{-t}\,t^n\,\mathrm{d}t\tag{2} $$ Substtuting $t=n(s+1)$ and $u^2/2=s-\log(1+s)$ gives us $$ \begin{align} \Gamma(n+1) &=\int_0^\infty t^n\,e^{-t}\,\mathrm{d}t\\ &=n^{n+1}e^{-n}\int_{-1}^\infty e^{-n(s-\log(1+s))}\,\mathrm{d}s\\ &=n^{n+1}e^{-n}\int_{-\infty}^\infty e^{-nu^2/2}\,s'\,\mathrm{d}u\tag{3} \end{align} $$ and $$ \begin{align} \Gamma(n+1,n) &=\int_n^\infty t^n\,e^{-t}\,\mathrm{d}t\\ &=n^{n+1}e^{-n}\int_0^\infty e^{-n(s-\log(1+s))}\,\mathrm{d}s\\ &=n^{n+1}e^{-n}\int_0^\infty e^{-nu^2/2}\,s'\,\mathrm{d}u\tag{4} \end{align} $$ Computing the series for $s'$ in terms of $u$ gives $$ s'=1+\frac23u+\frac1{12}u^2-\frac2{135}u^3+\frac1{864}u^4+\frac1{2835}u^5-\frac{139}{777600}u^6+O(u^7)\tag{5} $$ In the integral for $\Gamma(n+1)$, the odd powers of $u$ in $(5)$ are cancelled and the even powers of $u$ are integrated over twice the domain as in the integral for $\Gamma(n+1,n)$. Thus, $$ \begin{align} 2\Gamma(n+1,n)-\Gamma(n+1) &=\int_n^\infty t^n\,e^{-t}\,\mathrm{d}t-\int_0^n t^n\,e^{-t}\,\mathrm{d}t\\ &=n^{n+1}e^{-n}\int_0^\infty e^{-nu^2/2}\,2\,\mathrm{odd}(s')\,\mathrm{d}u\\ &=n^{n+1}e^{-n}\left(\frac4{3n}-\frac8{135n^2}+\frac{16}{2835n^3}+O\left(\frac1{n^4}\right)\right)\\ &=n^ne^{-n}\left(\frac43-\frac8{135n}+\frac{16}{2835n^2}+O\left(\frac1{n^3}\right)\right)\tag{6} \end{align} $$ Therefore, combining $(1)$, $(2)$, and $(6)$, we get $$ \begin{align} \frac{n!}{n^n}\left(\sum_{k=0}^n\frac{n^k}{k!}-\sum_{k=n+1}^\infty\frac{n^k}{k!}\right) &=\frac{e^n}{n^n}\left(\int_n^\infty e^{-t}\,t^n\,\mathrm{d}t-\int_0^n e^{-t}\,t^n\,\mathrm{d}t\right)\\ &=\frac43-\frac{8}{135n}+\frac{16}{2835n^2}+O\left(\frac1{n^3}\right)\tag{7} \end{align} $$
How to prove that $\lim\limits_{n\to\infty} \frac{n!}{n^2}$ diverges to infinity? $\lim\limits_{n\to\infty} \dfrac{n!}{n^2} \rightarrow \lim\limits_{n\to\infty}\dfrac{\left(n-1\right)!}{n}$ I can understand that this will go to infinity because the numerator grows faster. I am trying to apply L'Hôpital's rule to this; however, have not been able to figure out how to take the derivative of $\left(n-1\right)!$ So how does one take the derivative of a factorial?
Dominic Michaelis's answer is the 'right' one for such a simple problem. This is just to demonstrate a trick that is often helpful in showing that limits go off to $\infty$. Consider $$\sum_{n=1}^{\infty} \frac{n^2}{n!}$$ By the ratio test this converges, so the terms $\frac{n^2}{n!} \to 0$, and hence $\frac{n!}{n^2}\to\infty$.
How can i prove this identity (by mathematical induction) (rational product of sines) I would appreciate if somebody could help me with the following problem: Q: proof? (by mathematical induction) $$\prod_{k=1}^{n-1}\sin\frac{k \pi}{n}=\frac{n}{2^{n-1}}~(n\geq 2)$$
Let $$S_n=\prod_{k=1}^{n-1}\sin \frac{k\pi}{n}.$$ Solving the equation $(z+1)^n=1$ for $z\in\mathbb{C}$, we find $$z=e^{i2k\pi/n}-1=2ie^{ik\pi/n}\sin\frac{k\pi}{n}=z_k,\quad k=0,\ldots,n-1.$$ Moreover $(x+1)^n-1=x\left((x+1)^{n-1}+(x+1)^{n-2}+\cdots+(x+1)+1\right)=xP(x).$ The roots of $P$ are $z_k, k=1,\ldots ,n-1$. By the relation between the coefficients of $P$ and its roots we have $$\sigma_{n-1}=(-1)^{n-1}n=\prod_{k=1}^{n-1}z_k.$$ On the other hand, we have $$\prod_{k=1}^{n-1}z_k=2^{n-1}i^{n-1}\left(\prod_{k=1}^{n-1}e^{ik\pi/n}\right)\left(\prod_{k=1}^{n-1}\sin \frac{k\pi}{n}\right)=2^{n-1}i^{n-1}e^{i\pi(1+2+\cdots+(n-1))/n}S_n=2^{n-1}(-1)^{n-1}S_n,$$ since $e^{i\pi(1+2+\cdots+(n-1))/n}=e^{i\pi(n-1)/2}=i^{n-1}$ and $i^{n-1}\cdot i^{n-1}=(-1)^{n-1}$. Comparing the two expressions gives $S_n=\dfrac{n}{2^{n-1}}$.
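A quick numerical sanity check of the identity (my own addition, independent of the proof):

```python
# Verify prod_{k=1}^{n-1} sin(k*pi/n) = n / 2^(n-1) for small n.
from math import sin, pi, prod

for n in range(2, 10):
    lhs = prod(sin(k * pi / n) for k in range(1, n))
    print(n, lhs, n / 2 ** (n - 1))   # the two columns agree up to rounding
```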
Given the product of a unitary matrix and an orthogonal matrix, can it be easily inverted _without_ knowing these factors? Given the product $M$ of a unitary matrix $U$ (i.e. $U^\dagger U=1$) and an orthogonal matrix $O$ (i.e. $O^TO=1$), can it be easily inverted without knowing $U$ and $O$? Sure enough, if $M=UO$, then $M^{-1}=O^TU^\dagger$. But assuming you only know that $M$ is composed in such a way, but not how $U$ and $O$ actually look, does there still exist a simple formula for $M^{-1}$?
Note that $$M^\dagger M = O^\dagger\underbrace{U^\dagger U}_{=1} O = O^\dagger O$$ Therefore, $$(M^\dagger M)(M^\dagger M)^T = O^\dagger \underbrace{O O^T}_{=1} O^* = (O^TO)^* = 1$$ so that $$(M^\dagger)^{-1} = M(M^\dagger M)^T$$ and thus $$M^{-1} = M^T M^* M^\dagger$$
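A numerical sketch of this result (my own construction, assuming NumPy and SciPy are available). Here the complex orthogonal factor is built as $O=\exp(S)$ with $S$ complex skew-symmetric, kept small so the example stays well conditioned; the unitary factor comes from a QR factorisation.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 4
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
S = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
O = expm(0.2 * (S - S.T))           # complex orthogonal: O.T @ O = I, but not unitary
M = U @ O

Minv = M.T @ M.conj() @ M.conj().T  # the claimed formula  M^{-1} = M^T M^* M^dagger
print(np.allclose(Minv @ M, np.eye(n)))   # True (up to rounding)
print(np.allclose(O.T @ O, np.eye(n)))    # sanity check that O really is orthogonal
```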
How to choose the starting row when computing the reduced row echelon form? I'm having hell of a time going around solving matrices to reduced row echelon form. My main issue is which row to start simplifying values and based on what? I have this example so again, the questions are: 1.Which row to start simplifying values? 2.Based on what criteria? Our professor solved it in the class with no fractions but I could not do it. Even though I know the 3 operations performed on matrices
Where you start is not really a problem. My tip: * *Always first make sure you make the first column: 1,0,0 *Then proceed making the second one: 0,1,0 *And lastly, 0,0,1 Step one: $$\begin{pmatrix} 1&2&3&9 \\ 2&-1&1&8 \\ 3&0&-1&3\end{pmatrix}$$ row 3 - 3 times row 1 $$\begin{pmatrix} 1&2&3&9 \\ 2&-1&1&8 \\ 0&-6&-10&-24\end{pmatrix}$$ row 2 - 2 times row 1 $$\begin{pmatrix} 1&2&3&9 \\ 0&-5&-5&-10 \\ 0&-6&-10&-24\end{pmatrix}$$ Which simplifies to $$\begin{pmatrix} 1&2&3&9 \\ 0&1&1&2 \\ 0&3&5&12\end{pmatrix}$$ Now you can proceed with step 2, and 3. row 1 - 2 times row 2 and row 3 - 3 times row 2 $$\begin{pmatrix} 1&0&1&5 \\ 0&1&1&2 \\ 0&0&2&6\end{pmatrix}$$ Simplifies to $$\begin{pmatrix} 1&0&1&5 \\ 0&1&1&2 \\ 0&0&1&3\end{pmatrix}$$ row 2 - row 3 $$\begin{pmatrix} 1&0&1&5 \\ 0&1&0&-1 \\ 0&0&1&3\end{pmatrix}$$ row 1 - row 3 $$\begin{pmatrix} 1&0&0&2 \\ 0&1&0&-1 \\ 0&0&1&3\end{pmatrix}$$
Fibonacci identity proof I've been struggled for this identity for a while, how can I use combinatorial proof to prove the Fibonacci identity $$F_2+F_5+\dots+F_{3n-1}=\frac{F_{3n+1}-1}{2}$$ I know that $F_n$ is number of tilings for the board of length $n-1$, so if I rewrite the identity and let $f_n$ be the number of tilings for the board of length $n$, then I got $$f_1+f_4+\dots+f_{3n-2}=\frac{f_{3n}-1}{2}$$ the only thing that I know so far is the Right hand side, $f_{3n}-1$ is the number of tilings for the $3n$ board with at least one $(1\times 2)$ tile (or maybe I am wrong), but I have no idea of what the fraction $\frac{1}{2}$ is doing here. Can anyone help? (P.S.: In general, when it comes to this kind of combinatorial proof question, is it ok to rewrite the question in a different way? Or is it ok to rewrite this question as $2(f_1+f_4+\dots+f_{3n-1})=f_{3n}-1$, then process the proof? Thank you for all your useful proofs, but this is an identity from a course that I am taking recently, and it is all about combinatorial proof, so some hint about how to find the number of tilings for the board of length $3n$ would be really helpful. Thanks for dtldarek's help, I finally came up with: Rewrite the identity as $2F_2+2F_5+\dots+2F_{3n-1}=\frac{F_{3n+1}-1}{2}$, then the Left hand side becomes $F_2+F_2+F_5+F_5+\dots+F_{3n-1}+F_{3n-1}=F_0+F_1+F_2+F_3+\dots+F_{3n-3}+F_{3n-2}+F_{3n-1}=\sum^{3n-1}_{i=0}F_{i}\implies \sum^{3n-1}_{i=0} F_i=F_{3n+1}-1$, and recall that $f_n$ is the number of tilings for the board of length $n$, so we have $\sum^{3n-2}_{i=0}f_i=f_{3n}-1$. For the Right hand side $f_{3n}$ is the number of tilings for the length of $3n$ board, then $f_{3n}-1$ is the number of tilings for a $3n$ board use at least one $1\times 2$ tile. Now, for the Left hand side, conditioning on the last domino in the $k^{th}$ cell, for any cells before the $k^{th}$ cell, there are only one way can be done, and all cells after the $k+1$ cell can be done in $f_{3n-k-1}$, finally sum up $k$ from 0 to $3n-1$, which is the Left hand side. Is it ok? did I change the meaning of the original identity?
This may not be the quickest approach, but it seems fairly simple, using only the recursion equation $F_i+F_{i+1}=F_{i+2}$ and initial conditions, which I take to be $F_0=0$ and $F_1=1$. Notice first that you can apply the recursion equation to replace, on the left side of your formula, each term by the sum of the two preceding Fibonacci numbers, so this left side is equal to $F_0+F_1+F_3+F_4+F_6+F_7+\dots+F_{3n-3}+F_{3n-2}$, which skips exactly those terms that are present in your formula's left side. So, adding this form of your left side to the original form, you find that twice the left side is the sum of all the Fibonacci numbers up to and including $F_{3n-1}$. So what needs to be proved is that $\sum_{i=0}^{3n-1}F_i=F_{3n+1}-1$. That can be done by induction on $n$. The base case is trivial ($0+1+1=3-1$), and the induction step is three applications of the recursion equation. In $\sum_{i=0}^{3(n+1)-1}F_i$, apply the induction hypothesis to replace all but the last three terms with $F_{3n+1}-1$. To combine that with the last three terms, $F_{3n}+F_{3n+1}+F_{3n+2}$, use the recursion equation to replace $F_{3n}+F_{3n+1}$ and $F_{3n+1}+F_{3n+2}$ with $F_{3n+2}$ and $F_{3n+3}$, respectively, and then again to replace these last two results with $F_{3n+4}=F_{3(n+1)+1}$.
How to minimize the amount of material used to make a shape of a given volume? A metal can company will make cylindrical shape cans of capacity 300 cubic centimeters. What is the bottom radius of the cans in order to use the least amount of the sheet metal in the production? Accurate to 2 decimal places.
Hint: * *Write out the expressions for surface area and volume of cylinders. Here they are for reference: $ A = 2 \pi r h + 2 \pi r^2 $ $ V = \pi r^2 h $ * *We already know what the required volume is so we can set $ V = 300 $. *Can we combine our expressions for $ A $ and $V $ and make progress that way? ETA: curse my blurred vision!
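Carrying the hint through (my own numbers, assuming a closed can with top and bottom as in the area formula above): setting $h=V/(\pi r^2)$ gives $A(r)=2\pi r^2+2V/r$, and $A'(r)=0$ yields $r=\left(V/(2\pi)\right)^{1/3}$.

```python
# Sketch of the calculation: minimise A(r) = 2*pi*r^2 + 2V/r for V = 300.
from math import pi

V = 300.0
r = (V / (2 * pi)) ** (1 / 3)    # from dA/dr = 4*pi*r - 2*V/r**2 = 0
h = V / (pi * r ** 2)
print(round(r, 2), round(h, 2))  # r ≈ 3.63 cm, and h = 2r at the optimum
```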
Linear Transformations - Direct Sum Let $U, V$, and $W$ be finite dimensional vectors spaces over a field. Suppose that $V\subset U$ is a subspace. Show that there is a subspace $W\subset U$ such that $U=V\oplus W$. only thing i know about this problem is that you have to use the null space. I'm pretty much lost! any help would be appreciated!
Hint: Start with a basis for $V$, $\{v_1,\ldots,v_k\}$, and extend it to a basis of $U$, $\{v_1,\ldots,v_k,u_{k+1},\ldots,u_n\}$. Now find a subspace of $U$ which has the properties that you want by using that extended basis.
Common basis for subspace intersection Let $ W_1 = \textrm{span}\left\{\begin{pmatrix}1\\2\\3\end{pmatrix}, \begin{pmatrix}2\\1\\1\end{pmatrix}\right\}$, and $ W_2 = \textrm{span}\left\{\begin{pmatrix}1\\0\\1\end{pmatrix}, \begin{pmatrix}3\\0\\-1\end{pmatrix}\right\}$. Find a basis for $W_1 \cap W_2$ I first thought of solving the augmented matrix: $ \begin{pmatrix}1 && 2 && 1 && 3\\2 && 1 && 0 && 0\\3 && 1 && 1 && -1\end{pmatrix}$ But this matrix can have 3 pivots and so it's column space dimension can be at most 3 (which doesn't make sense since the basis I'm looking for must have dimension 2. So, what is the correct way to solve these exercise.
The two subspaces are not the same, because $W_2$ has no extent along the second axis (it is the $xz$-plane) while $W_1$ does. The intersection is therefore a line in the $xz$-plane. Find a nonzero vector in $W_1$ whose second coordinate is zero, i.e. $a\begin{pmatrix}1\\2\\3\end{pmatrix}+b\begin{pmatrix}2\\1\\1\end{pmatrix}$ with $2a+b=0$; that vector is your basis.
Is there a bijection between $\mathbb N$ and $\mathbb N^2$? Is there a bijection between $\mathbb N$ and $\mathbb N^2$? If I can show $\mathbb N^2$ is equipotent to $\mathbb N$, I can show that $\mathbb Q$ is countable. Please help. Thanks,
Yes. Imagine starting at $(1,1)$ and then zig-zagging diagonally across the quadrant. I'll leave you to formulate it. Hint: for every natural number $>1$ there's a set of elements of $\mathbb{N}^2$ that add up to that number. For $2$, there's $(1,1)$. For $3$, there's $(2,1)$ and $(1,2)$...
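One standard way to make the zig-zag explicit is the diagonal (Cantor-style) pairing below; this is my own sketch of the hint, with $\mathbb N$ starting at $1$, and it is certainly not the only possible formalisation.

```python
# Walk the diagonals x + y = 2, 3, 4, ... of N x N.
def pair(x, y):
    d = x + y - 2                  # number of completed diagonals before ours
    return d * (d + 1) // 2 + x    # position of (x, y) in the walk

def unpair(n):
    d = 0
    while (d + 1) * (d + 2) // 2 < n:   # find the diagonal containing position n
        d += 1
    x = n - d * (d + 1) // 2
    return (x, d + 2 - x)

for n in range(1, 11):
    print(n, unpair(n), pair(*unpair(n)))   # round-trips back to n, so the map is a bijection
```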
Prove that if matrix $A$ is nilpotent, then $I+A$ is invertible. So my friend and I are working on this and here is what we have so far. We want to show that $\exists \, B$ s.t. $(I+A)B = I$. We considered the fact that $I - A^k = I$ for some positive $k$. Now, if $B = (I-A+A^2-A^3+ \cdots -A^{k-1})$, then $(I+A)B = I-A^k = I$. My question is: in matrix $B$, why is the sign for $A^{k-1}$ negative? Couldn't it be positive, in which case we'd get $(I+A)B = I + A^k$? Thank you.
It's the usual polynomial identity $$ 1 - x^{k} = (1 - x)(1 + x + x^{2} + \dots + x^{k-1}), $$ where you are substituting $x = -A$.
Intersection points of a Triangle and a Circle How can I find all intersection points of the following circle and triangle? Triangle $$A:=\begin{pmatrix}22\\-1.5\\1 \end{pmatrix} B:=\begin{pmatrix}27\\-2.25\\4 \end{pmatrix} C:=\begin{pmatrix}25.2\\-2\\4.7 \end{pmatrix}$$ Circle $$\frac{9}{16}=(x-25)^2 + (y+2)^2 + (z-3)^2$$ What I did so far was to determine the line equations of the triangle (a, b and c): $a : \overrightarrow {OX} = \begin{pmatrix}27\\-2.25\\4 \end{pmatrix}+ \lambda_1*\begin{pmatrix}-1.8\\0.25\\0.7 \end{pmatrix} $ $b : \overrightarrow {OX} = \begin{pmatrix}22\\-1.5\\1 \end{pmatrix}+ \lambda_2*\begin{pmatrix}3.2\\-0.5\\3.7 \end{pmatrix} $ $c : \overrightarrow {OX} = \begin{pmatrix}22\\-1.5\\1 \end{pmatrix}+ \lambda_3*\begin{pmatrix}5\\-0.75\\3 \end{pmatrix} $ But I am not sure what I have to do next...
The side $AB$ of the triangle has equation $P(t) = (1-t)A + tB$ for $0 \le t \le 1$. The $0 \le t \le 1$ part is important. If $t$ lies outside $[0,1]$, the point $P(t)$ will lie on the infinite line through $A$ and $B$, but not on the edge $AB$ of the triangle. Substitute $(1-t)A + tB$ into the circle equation, as others have suggested. This will give you a quadratic equation in $t$ that you can solve using the well-known formula. But, a solution $t$ will give you a circle/triangle intersection point only if it lies in the range $0 \le t \le 1$. Solutions outside this interval can be ignored. Do the same thing with sides $BC$ and $AC$.
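A possible implementation of exactly this procedure (my own code, using the data from the question): for each edge, substitute $P(t)$ into the sphere equation, solve the quadratic in $t$, and keep only roots with $0\le t\le 1$.

```python
from math import sqrt

A = (22.0, -1.5, 1.0)
B = (27.0, -2.25, 4.0)
C = (25.2, -2.0, 4.7)
center, r2 = (25.0, -2.0, 3.0), 9.0 / 16.0   # sphere centre and squared radius

def edge_sphere_intersections(P, Q):
    d = [q - p for p, q in zip(P, Q)]        # direction Q - P
    m = [p - c for p, c in zip(P, center)]   # P - centre
    a = sum(x * x for x in d)                # |d|^2 t^2 + 2 (m.d) t + |m|^2 - r^2 = 0
    b = 2 * sum(x * y for x, y in zip(m, d))
    c = sum(x * x for x in m) - r2
    disc = b * b - 4 * a * c
    if disc < 0:
        return []
    ts = [(-b - sqrt(disc)) / (2 * a), (-b + sqrt(disc)) / (2 * a)]
    return [tuple(p + t * x for p, x in zip(P, d)) for t in ts if 0 <= t <= 1]

for P, Q in [(A, B), (B, C), (C, A)]:
    print(edge_sphere_intersections(P, Q))   # intersection points lying on each edge, if any
```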
Prove $f(x)=ax+b$ Let $f(x)$ be a continuous function in $\mathbb R$ that for all $x\in(-\infty,+\infty)$, satisfies $$ \lim_{h\rightarrow+\infty}{[f(x+h)-2f(x)+f(x-h)]}=0. $$ Prove that $f(x)=ax+b$ for some $a,b\in\mathbb R$. This is a problem from my exercise book, but I can't figure out the solution of it, I think the solution in my book is wrong. :( Any idea and proof of it are welcome! Thank you in advance.
Given $x\in\mathbb{R}$, the limit can be rewritten as $$f(x)=\frac{1}{2}\lim_{h\to\infty}[f(x+h)+f(x-h)].\tag{1}$$ Given $y\in\mathbb{R}$, replacing $h$ with $h+y$ or $h-y$ in $(1)$, we have $$f(x)=\frac{1}{2}\lim_{h\to\infty}[f(x+y+h)+f(x-y-h)],\quad \forall x\in\mathbb{R}.\tag{2}$$ and $$f(x)=\frac{1}{2}\lim_{h\to\infty}[f(x-y+h)+f(x+y-h)],\quad \forall x\in\mathbb{R}.\tag{3}$$ Replacing $x$ with $x+y$ or $x-y$ in $(1)$ respectively, we have: $$f(x+y)=\frac{1}{2}\lim_{h\to\infty}[f(x+y+h)+f(x+y-h)]\tag{4}$$ and $$f(x-y)=\frac{1}{2}\lim_{h\to\infty}[f(x-y+h)+f(x-y-h)].\tag{5}$$ Comparing $(2)+(3)$ and $(4)+(5)$, we have: $$2f(x)=f(x+y)+f(x-y),\tag{6}$$ or equivalently, $$f(\frac{x+y}{2})=\frac{1}{2}[f(x)+f(y)].\tag{7}$$ $(7)$ together with the continuity of $f$ implies that $f$ is both convex and concave on $\mathbb{R}$, so $f$ must be a linear function.
Find $x$ such that $\sum_{k=1}^{2014} k^k \equiv x \pmod {10}$ Find $x$ such that $$\sum_{k=1}^{2014} k^k \equiv x \pmod {10}$$ I knew the answer was $3$.
We are going to compute the sum mod $2$ and mod $5$. The Chinese Remainder Theorem then gives us the result mod $10$. Mod $2$, obviously $$k^k \equiv \begin{cases}0 & \text{if }k\text{ even,}\\1 & \text{if }k\text{ odd,}\end{cases}$$ so $$\sum_{k = 0}^{2014} \equiv \frac{2014}{2} = 1007 \equiv 1 \mod 2.$$ By Fermat, $k^k$ mod $5$ only depends on the remainder of $k$ mod $\operatorname{lcm}(5, 4) = 20$. So $$\sum_{k = 1}^{2014}k^k \equiv \underbrace{100}_{\equiv 0} \cdot \sum_{k=1}^{20} k^k + \sum_{k=1}^{14} k^k \\ \equiv 1 + 4 + 2 + 1 + 0 + 1 + 3 + 1 + 4 + 0 + 1 + 1 + 3 + 1 \\\equiv 3 \mod 5.$$ Combining the results mod $2$ and mod $5$, $$\sum_{k=1}^{2014} k^k \equiv 3\mod 10.$$
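The result is easy to double-check by brute force (my own verification, not part of the argument):

```python
# Direct computation of the sum mod 10.
print(sum(pow(k, k, 10) for k in range(1, 2015)) % 10)   # 3
```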
Is there an analytic function applying formula? Is there an analytic function $f$ in $\mathbb{C}\backslash \{0\}$ s.t. for every $z\ne0$: $$|f(z)|\ge\frac{1}{\sqrt{|z|}}\, ?$$
How about this: Since $f(z)$ is analytic on $\mathbb{C}-\{0\}$, $g(z) = \frac{1}{(f(z))^2}$ is analytic on $\mathbb{C}-\{0\}$. Also $\bigg|\frac{g(z)}{z}\bigg| \leq 1$. I am sure you will finish the rest (think about the order of the pole at $0$ and use Liouville's Theorem).
$f(z)= az$ if $f$ is analytic and $f(z_{1}+z_{2})=f(z_{1})+f(z_{2})$ If $f$ is an analytic function with $f(z_{1}+z_{2})=f(z_{1})+f(z_{2})$, how can we show that $f(z)= az$ where $a$ is a complex constant?
It's true under weaker assumptions, but let's do it by assuming that $f$ is analytic. Fix $w \in \mathbb{C}$. Since $f(z+w) = f(z)+f(w)$, it follows that $f'(z+w) = f'(z)$ for all $z$. Hence $f'$ is constant, say $f'(z) = a$ which implies that $f(z) = az+c$. Plug in $z_1 = z_2 = 0$ in the defining equation to conclude that $f(0) = 0$, so $f(z) = az$.
A probability question that involves $5$ dice For five dice that are thrown, I am struggling to find the probability of one number showing exactly three times and a second number showing twice. For the one number showing exactly three times, the probability is: $$ {5 \choose 3} \times \left ( \frac{1}{6} \right )^{3} \times \left ( \frac{5}{6}\right )^{2} $$ However, I understand I cannot just multiply this by $$ {5 \choose 2} \times \left ( \frac{1}{6} \right )^{2} \times \left ( \frac{5}{6}\right )^{3} $$ as this includes the probability of picking the original number twice which allows the possibility of the same number being shown $5$ times. I am unsure of what to do next, I tried to write down all the combinations manually and got $10$ possible outcomes so for example if a was the value found $3$ times and $b$ was the value obtained $2$ times one arrangement would be '$aaabb$'. However I still am unsure of what to do after I get $10$ different possibilities and I am not sure how I could even get the $10$ different combinations mathematically. Any hints or advice on what to do next would be much appreciated.
First, I assume they will all come out in neat order, first three in a row, then two in a row of a different number. The probability of that happening is $$ \frac{1}{6^2}\cdot \frac{5}{6}\cdot\frac{1}{6} = \frac{5}{6^4} $$ The first die can be anything, but the next two have to be equal to that, so the $\frac{1}{6^2}$ comes from there. Then the fourth die has to be different, and the odds of that happening is the $\frac{5}{6}$ above, and lastly, the last die has to be the same as the fourth. Now, we assumed that the three equal dice would come out first. There are other orders, a total of $\binom{5}{3}$. Multiply them, and you get the final answer $$ \binom{5}{3}\frac{5}{6^4} $$ You might reason another way. Let the event $A$ be "there are exactly 3 of one number" and $B$ be "there are exactly 2 of a second number". The formula you calculated on your own, ${5 \choose 3}\left(\frac16\right)^3\left(\frac56\right)^2$, is the probability that one *particular* value shows exactly three times, so $P(A) = 6\cdot{5 \choose 3} \cdot \frac{1}{6^3}\cdot\frac{5^2}{6^2}$, and $P(B|A)$ is the probability that there is an exact pair given that there is an exact triple, which is the probability that the two remaining dice are equal, namely $\frac{1}{5}$. Then $$ P(A\cap B) = P(A) \cdot P(B|A) = 6\cdot{5 \choose 3} \cdot \frac{1}{6^3} \cdot\frac{5^2}{6^2} \cdot \frac{1}{5} = \binom{5}{3}\frac{5}{6^4} $$ as before.
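For what it's worth, a brute-force enumeration over all $6^5$ equally likely outcomes (my own check) confirms the value $\binom{5}{3}\cdot 5/6^4 = 25/648$:

```python
from itertools import product
from collections import Counter
from fractions import Fraction

# Count outcomes where one value appears exactly 3 times and another exactly 2 times.
hits = sum(
    sorted(Counter(roll).values()) == [2, 3]
    for roll in product(range(1, 7), repeat=5)
)
print(Fraction(hits, 6 ** 5))        # 25/648
print(Fraction(10 * 5, 6 ** 4))      # binom(5,3) * 5 / 6^4, the same value
```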
Eigenvalues of a rotation How do I show that the rotation by a non zero angle $\theta$ in $\mathbb{R}^2 $ does not have any real eigenvalues. I know the matrix of a rotation but I don't how to show the above proposition. Thank you
The characteristic polynomial is $$x^2-2\cos(\theta)\, x+1,$$ whose discriminant is $4\cos^2(\theta)-4\leq 0$, with equality only when $|\cos(\theta)|=1$. So the polynomial has a real zero only when $|\cos(\theta)|=1$, i.e. when $\theta$ is a multiple of $\pi$; indeed $x^2-2x+1=(x-1)^2$ and $x^2+2x+1=(x+1)^2$.
Property of $10\times 10 $ matrix Let $A$ be a $10 \times 10$ matrix such that each entry is either $1$ or $-1$. Is it true that $\det(A)$ is divisible by $2^9$?
An answer based on the comments by Ludolila and Erick Wong. It follows from three easily proven rules: (1) adding or subtracting a multiple of one row of a matrix from another row does not change its determinant; (2) multiplying a row of the matrix by a constant $c$ multiplies the determinant by that constant; (3) the determinant of a matrix with integer entries is an integer. Take a matrix $A=(a_{ij})\in M_{10}(\mathbb{R})$ such that all its entries are either $1$ or $-1$. If $a_{11}=-1$, multiply the first row by $-1$. For $2\le i\le10$, subtract $a_{i1}(a_{1\to})$ (where $a_{1\to}$ is the first row of $A$) from $a_{i\to}$. Now rows $2,\ldots,10$ consist only of $0$'s and $\pm2$'s. Divide each of these rows by $2$ to obtain a matrix $B$ that has entries only in $\{-1,0,1\}$. By rules (1) and (2), $\det B = \pm 2^{-9} \det A$. By rule (3), $\det B$ is an integer, so $\det A = 2^9 \cdot n$ where $n$ is an integer.
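An empirical spot check (my own, not part of the proof): for random $\pm1$ matrices the determinant is indeed a multiple of $2^9=512$. The determinant of a $10\times10$ $\pm1$ matrix is at most $10^5$ in absolute value (Hadamard's bound), so rounding the floating-point determinant to the nearest integer is safe here.

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    A = rng.choice([-1, 1], size=(10, 10))
    d = int(round(np.linalg.det(A)))        # small enough that rounding is exact
    print(d, d % 2 ** 9 == 0)               # always divisible by 512
```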
Separation of function When can a function of 2 variables say $h(x,y)$ can be written as $$\sum_i f_i(x)g_i(y)$$ I want to know what conditions on $h$ would ensure this kind of separation.
If you don't require the sum to be finite, then essentially anything expandable in a two-dimensional Fourier series will satisfy what you want. For example, if $f(x,y)$ is defined on the unit square, then $$f(x,y)=\sum_{m,n\in\mathbb{Z}}a_{m,n}e^{2\pi i m x}e^{2\pi i n y}$$ for appropriate coefficients $a_{m,n}$. If $f(x,y)$ is continuously differentiable, then the series converges uniformly; otherwise, you have almost everywhere convergence for any $L^2$ function by Carleson's theorem.
Is there any specific formula for $\log{f(z)}$? Let $f(z)$ be a nonvanishing analytic function on a simply connected region $\Omega$. Then there is an analytic function $g(z)$ such that $e^{g(z)}=f(z)$. Is there any specific formula for $g(z)$? (By specific formula I mean, for example, on the region $\mathbb{C}-\{x\le 0\}$ we know $$\log^{[k]}{z}=\log{|z|}+i\arg{z}+i2k\pi$$ where $k$ is an integer, and $\log^{[k]}{z}$ is holomorphic on $\mathbb{C}-\{x\le 0\}$.) EDIT: Let me make my question clear. I know that we can use integral to define $\log{f}$. But that's not what I'm looking for. Let me take this example to explain what I want: Let $f(z)=z^9$ and $\Omega$ the region $Re(z)>1$. Then there is a holomorphic function $g(z)$ on $\Omega$ such that $e^{g(z)}=f(z)=z^9$ and that $g(x)=9\log{x}$ for real $x>1$. In this case the formula I want is: $$g(z)=9\log|z|+9i\arg z$$ where $\arg z\in (−\pi,\pi)$.
Hint: Try defining your function as the integral of a certain function along a path from a fixed point $z_0$ to $z$ (the well-definedness [i.e. path independence] of which comes from simple connectedness).
Probability of rolling three dice without getting a 6 I am having trouble understanding how you get $91/216$ as the answer to this question. say a die is rolled three times what is the probability that at least one roll is 6?
There are two answers already that express the probability as $$1-\left(\frac56\right)^3 = \frac{91}{216},$$ I'd like to point out that a more complicated, but more direct calculation gets to the same place. Let's let 6 represent a die that comes up a 6, and X a die that comes up with something else. Then we might distinguish eight cases for how the dice can come up: 666 66X 6X6 X66 6XX X6X XX6 XXX We can easily calculate the probabilities for each of these eight cases. Each die has a $\frac16$ probability of showing a 6, and a $\frac56$ probability of showing something else, which we represented with X. To get the probability for a combination like 6X6 we multiply the three probabilities for the three dice; in this case $\frac16\cdot\frac56\cdot\frac16 = \frac5{216}$. This yields the following probabilities: $$\begin{array}{|r|ll|} \hline \mathtt{666} & \frac16\cdot\frac16\cdot\frac16 & = \frac{1}{216} \\ \hline \mathtt{66X} & \frac16\cdot\frac16\cdot\frac56 & = \frac{5}{216} \\ \mathtt{6X6} & \frac16\cdot\frac56\cdot\frac16 & = \frac{5}{216} \\ \mathtt{X66} & \frac56\cdot\frac16\cdot\frac16 & = \frac{5}{216} \\ \hline \mathtt{6XX} & \frac16\cdot\frac56\cdot\frac56 & = \frac{25}{216} \\ \mathtt{X6X} & \frac56\cdot\frac16\cdot\frac56 & = \frac{25}{216} \\ \mathtt{XX6} & \frac56\cdot\frac56\cdot\frac16 & = \frac{25}{216} \\ \hline \mathtt{XXX} & \frac56\cdot\frac56\cdot\frac56 & = \frac{125}{216} \\ \hline \end{array} $$ The cases that we want are those that have at least one 6, which are the first seven lines of the table, and the sum of the probabilities for these lines is $$\frac{1}{216}+\frac{5}{216}+\frac{5}{216}+\frac{5}{216}+ \frac{25}{216}+\frac{25}{216}+\frac{25}{216} = \color{red}{\frac{91}{216}}$$ just as everyone else said. Since the first 7 lines together with the 8th line account for all possible throws of the dice, together they add up to a probability of $\frac{216}{216} = 1$, and that leads to the easier way to get to the correct answer: instead of calculating and adding the first 7 lines, just calculate the 8th line, $\frac{125}{216}$ and subtract it from 1.
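The table above can also be checked by listing all $216$ outcomes directly (my own snippet):

```python
from itertools import product
from fractions import Fraction

# Count outcomes of three dice containing at least one 6.
favourable = sum(6 in roll for roll in product(range(1, 7), repeat=3))
print(Fraction(favourable, 216))   # 91/216
```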
What is a Ramsey Graph? What is a ramsey graph and What is its relation to RamseyTheorem? In Ramsey Theorem: for a pairs of parameters (r,b) there exists an n such that for every (edge-)coloring of the complete graph on n vertices with colors r(ed) and b(lue) there will exist a complete subgraph on r vertices colored red or a complete subgraph on b vertices colored blue. Ramsey Graph : A Ramsey graph is a graph with n vertices, no clique of size s, and no independent set of size t. I couldnt understand how the above two are related. Can any one explain what is a Ramsey Graph as simple as possible? (in terms of coloring)
Instead of considering a complete graph $K_n$ whose edges are red and blue, just consider some graph $G$ with $n$ vertices. Let $\bar G$ be the complement of $G$. $\bar G$ contains a clique of size $s$ if and only if $G$ contains an independent set of size $s$. Now consider $G$ as a subgraph of $K_n$. Color the edges of $G$ in $K_n$ red, and color the other edges of the $K_n$, that is the edges of $\bar G$, blue. Now $K_n$ contains a red $K_r$ if and only if $G$ contains a clique of size $r$, and $K_n$ contains a blue $K_b$ if $\bar G$ contains a clique of size $b$, which is true if and only if $G$ contains an independent set of size $b$. The Ramsey theorem says that for given $r$ and $b$ there is an $n$ such that (what you said about $K_n$). A Ramsey graph of size $q$ is a counterexample to the Ramsey theorem, and when it exists, it shows that $n$, which must exist, must be larger than $q$. Here is an example. Let's take $r=b=3$. The Ramsey theorem says that there is some $n$ such that if we color edges of $K_n$ in red and blue, there is either a red triangle or a blue triangle. There are many Ramsey graphs. For example, consider $K_2$. It has neither a clique of size $r=3$ nor an independent set of size $b=3$ and therefore it is a ramsey graph for $r=3, b=3$ and shows that the $n$ of the previous paragraph must be bigger than 2. Now consider $C_5$, the cycle on five vertices. It has neither a clique of size $r=3$ nor an independent set of size $b=3$ and therefore it is a ramsey graph for $r=3, b=3$ and shows that the $n$ given by the Ramsey theorem must be bigger than 5. The Ramsey theorem claims that when $n$ is large enough, there is no Ramsey graph with $n$ or more vertices.
$\lim_{n\to\infty}(\sqrt{n^2+n}-\sqrt{n^2+1})$ How to evaluate $$\lim_{n\to\infty}(\sqrt{n^2+n}-\sqrt{n^2+1})$$ I'm completely stuck into it.
A useful general approach to limits is, in your scratch work, to take every complicated term and replace it with a similar approximate term. As $n$ grows large, $\sqrt{n^2 + n}$ looks like $\sqrt{n^2} = n$. More precisely, $$ \sqrt{n^2 + n} = n + o(n) $$ where I've used little-o notation. In terms of limits, this means $$ \lim_{n \to \infty} \frac{\sqrt{n^2 + n} - n}{n} = 0 $$ but little-o notation makes it much easier to express the intuitive idea being used. Unfortunately, $\sqrt{n^2 + 1}$ also looks like $n$. Combining these estimates, $$ \sqrt{n^2 + n} - \sqrt{n^2 + 1} = (n + o(n)) - (n + o(n)) = o(n) $$ Unfortunately, this cancellation has clobbered all of the precision of our estimates! All this analysis reveals is $$ \lim_{n \to \infty} \frac{\sqrt{n^2 + n} - \sqrt{n^2+1}}{n} = 0 $$ which isn't good enough to answer the problem. So, we need a better estimate. A standard way to get better estimates is differential approximation. While the situation at hand is a little awkward, there is fortunately a standard trick to deal with square roots, or any power: $$ \sqrt{n^2 + n} = n \sqrt{1 + \frac{1}{n}} $$ and now we can invoke differential approximation (or Taylor series) $$ f(x+h) = f(x) + h f'(x) + o(h) $$ with $f(x) = 1 + \frac{1}{x}$ at $x=1$ to get $$ \sqrt{n^2 + n} = n \left( 1 + \frac{1}{2n} + o\left(\frac{1}{n} \right)\right) = n + \frac{1}{2} + o(1) $$ or equivalently in limit terms, $$ \lim_{n \to \infty} \sqrt{n^2 + n} - n - \frac{1}{2} = 0$$ similarly, $$ \sqrt{n^2 + 1} = n + o(1)$$ and we get $$ \lim_{n \to \infty} \sqrt{n^2 + n} - \sqrt{n^2 + 1} = \lim_{n \to \infty} (n + \frac{1}{2} + o(1)) - (n + o(1)) = \lim_{n \to \infty} \frac{1}{2} + o(1) = \frac{1}{2} $$ If we didn't realize that trick, there are a few other tricks to do, but there is actually a straightforward way to proceed too. Initially, simply taking the Taylor series for $g(x) = \sqrt{n^2 + x}$ around $x=0$ doesn't help, because that gives $$ g(x) = n + \frac{1}{2} \frac{x}{n} + o(x^2) $$ the Taylor series for $h(x) = \sqrt{x^2 + x}$ doesn't help either. But this is why we pay attention to the remainder term! One form of the Taylor remainder says that: $$ g(x) = n + \frac{1}{2} \frac{x}{n} - \frac{1}{8} \left( n^2 + c \right)^{-3/2} x^2$$ for some $c$ between $0$ and $x$. It's easy to bound this error term for $x > 0$. $$ \left| \frac{1}{8} \left( n^2 + c \right)^{-3/2} x^2 \right| \leq \left| \frac{1}{8} \left( n^2 \right)^{-3/2} x^2 \right| = \left| \frac{x^2}{8 n^3} \right| $$ So, for $x > 0$, $$ g(x) = n + \frac{1}{2} + O\left( \frac{x^2}{n^3} \right) $$ (note I've switched to big-O). Plugging in $n$ gives $$ g(n) = n + \frac{1}{2} + O\left( \frac{1}{n} \right) $$ which gives the approximation we need (better than we need, actually). (One could, of course, simply stick to limits rather than use big-O notation) This is not the simplest way to solve the problem, but I wanted to demonstrate a straightforward application of the tools you have learned (or will soon learn) to solve a problem in the case that you can't find a 'clever' approach.
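If you want to see the value $\frac12$ emerge numerically (my own check, separate from the argument above), rationalise first so the floating-point subtraction doesn't cancel:

```python
from math import sqrt

for n in [10, 1000, 10 ** 6]:
    # (n^2+n) - (n^2+1) = n-1, so this equals sqrt(n^2+n) - sqrt(n^2+1) exactly.
    diff = (n - 1) / (sqrt(n * n + n) + sqrt(n * n + 1))
    print(n, diff)   # tends to 0.5
```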
Solution of $19 x \equiv 1 \pmod{35}$ $19 x \equiv 1 \pmod{35}$ For this, may I know how to get the smallest value of $x$. I know that there is a theorem like $19^{34} = 1 \pmod {35}$. But I don't think it is the smallest.
Hint $\rm\,\ mod\ 35\!:\,\ 19x\equiv 1\iff x\equiv \dfrac{1}{19}\equiv \dfrac{2}{38}\equiv \dfrac{2}3\equiv \dfrac{24}{36}\equiv\dfrac{24}1$ Remark $\ $ We used Gauss's algorithm for computing inverses $\rm\:mod\ p\:$ prime. Beware $\ $ One can employ fractions $\rm\ x\equiv b/a\ $ in modular arithmetic (as above) only when the fractions have denominator $ $ coprime $ $ to the modulus $ $ (else the fraction may not uniquely exist, $ $ i.e. the equation $\rm\: ax\equiv b\,\ (mod\ m)\:$ might have no solutions, or more than one solution). The reason why such fraction arithmetic works here (and in analogous contexts) will become clearer when one learns about the universal properties of fraction rings (localizations).
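For reference, the same inverse can be computed directly (my own note; the three-argument `pow` with exponent $-1$ needs Python 3.8+):

```python
print(pow(19, -1, 35))   # 24
print(19 * 24 % 35)      # 1, confirming 19*24 ≡ 1 (mod 35)
```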
How to write the following expression in index notation? I would like to know how can I write $ ||\vec{a} \times(\nabla \times \vec{a})||^2 $ and $(\vec{a} \cdot (\nabla \times \vec{a}))^2$ in index notation if $\vec{a}=(a_1,a_2,a_3)$ Thank you for reading/replying EDIT: found the second one: $(\vec{a} \cdot (\nabla \times \vec{a}))^2 = a_ia_ja_{k,i}a_{k,j}$ The first one can also be written as $ ||\vec{a} \times(\nabla \times \vec{a})||^2 = (a_ie_{ijk}a_{k,j})^2 $ but if one finds a better expression let me know!
To do this I would use the Levi-Civita symbol and its properties in 3 dimensions. (from Wikipedia:) Definition: \begin{equation} \varepsilon_{ijk}= \left\{ \begin{array}{l} +1 \quad \text{if} \quad (i,j,k)\ \text{is}\ (1,2,3),(3,1,2)\ \text{or}\ (2,3,1)\\ -1 \quad \text{if} \quad (i,j,k)\ \text{is}\ (1,3,2),(3,2,1)\ \text{or}\ (2,1,3)\\ \ \ \ 0\quad \text{if} \quad i=j\ \text{or}\ j=k\ \text{or}\ k=i \end{array} \right. \end{equation} Vector product: \begin{equation} \vec a\times \vec b=\sum_{i=1}^3\sum_{j=1}^3\sum_{k=1}^3\varepsilon_{ijk}\vec e_ia^jb^k \end{equation} Component of a vector product: \begin{equation} (\vec a\times \vec b)_i=\sum_{j=1}^3\sum_{k=1}^3\varepsilon_{ijk}a^jb^k \end{equation} Scalar triple product: \begin{equation} \vec a\cdot(\vec b\times\vec c)=\sum_{i=1}^3\sum_{j=1}^3\sum_{k=1}^3\varepsilon_{ijk}a^ib^jc^k \end{equation} Useful properties: \begin{equation} \sum_{i=1}^3\varepsilon_{ijk}\varepsilon^{imn}=\delta_j^{\ m}\delta_k^{\ n}-\delta_j^{\ n}\delta_k^{\ m} \end{equation} \begin{equation} \sum_{m=1}^3\sum_{n=1}^3\varepsilon_{jmn}\varepsilon^{imn}=2\delta_{\ j}^{i} \end{equation} \begin{equation} \sum_{i=1}^3\sum_{j=1}^3\sum_{k=1}^3\varepsilon_{ijk}\varepsilon^{ijk}=6 \end{equation}
Radicals and direct sums. Let A be a K-algebra and M, N be right A-submodules of a right A module L, $M \cap N =0$. How to show that $(M\oplus N) \text{rad} A = M \text{rad} A \oplus N \text{rad} A$? Let $m \in M, n\in N, x\in \text{rad} A$. Since $(m+n)x=mx+nx$, $(m+n)x \in M \text{rad} A \oplus N \text{rad} A$. Since $M, N$ are submodules, $mx \in M, nx\in N$. Therefore $ M \text{rad} A \cap N \text{rad} A = 0$. Is this true? Thank you very much.
Sure, it's obvious that for any $S \subseteq A$, we have $MS\subseteq M$ and $NS\subseteq N$. That means $MS\cap NS\subseteq M\cap N =\{0\}.$ But this has little to do with the sum being direct: the real question here is how to get the equality $$(M\oplus N) \text{rad} A = M \text{rad} A \oplus N \text{rad} A.$$ The definition of the $A$ action on the direct sum by "distribution" proves that the left-hand side is contained in the right-hand side. The other containment is also true, but seeing that hinges on your proper understanding of the definition of $MI$ where $M$ is an $A$ module and $I\lhd A$. Do you see why the final containment holds?
Prove $a\sqrt[3]{a+b}+b\sqrt[3]{b+c}+c\sqrt[3]{c+a} \ge 3 \sqrt[3]2$ Prove $a\sqrt[3]{a+b}+b\sqrt[3]{b+c}+c\sqrt[3]{c+a} \ge 3 \sqrt[3]2$ with $a + b+c=3 \land a,b,c\in \mathbb{R^+}$ I tried power mean inequalities but I still can't prove it.
Here is my proof by the AM-GM inequality. We have $$ a\sqrt[3]{a+b}=\frac{3\sqrt[3]{2}a(a+b)}{3\sqrt[3]{2(a+b)(a+b)}}\geq 3\sqrt[3]{2}\cdot \frac{a(a+b)}{2+2a+2b} $$ (AM-GM applied to the cube root in the denominator). Thus, it suffices to prove that $$ \frac{a(a+b)}{a+b+1}+\frac{b(b+c)}{b+c+1}+\frac{c(c+a)}{c+a+1}\geq 2 $$ or, equivalently, $$ \frac{a}{a+b+1}+\frac{b}{b+c+1}+\frac{c}{c+a+1}\leq 1 $$ After homogenizing (using $a+b+c=3$), this becomes $$\frac{a}{4a+4b+c}+\frac{b}{4b+4c+a}+\frac{c}{4c+4a+b}\leq \frac{1}{3} $$ Now multiply both sides by $4a+4b+4c$; we can rewrite the inequality as $$ \frac{9ca}{4a+4b+c}+\frac{9ab}{4b+4c+a}+\frac{9bc}{4c+4a+b}\leq a+b+c $$ Using the Cauchy-Schwarz inequality, we have $$ \frac{9}{4a+4b+c}=\frac{(2+1)^2}{2(2a+b)+(2b+c)}\le \frac{2}{2a+b}+\frac{1}{2b+c} $$ Therefore \begin{align} \sum{\frac{9ca}{4a+4b+c}}&\leq \sum{\left(\frac{2ca}{2a+b}+\frac{ca}{2b+c}\right)}\\ &=a+b+c \end{align} Hence we are done!
Limit of definite sum equals $\ln(2)$ I have to show the following equality: $$\lim_{n\to\infty}\sum_{i=\frac{n}{2}}^{n}\frac{1}{i}=\log(2)$$ I've been playing with it for almost an hour, mainly with the taylor expansion of $\ln(2)$. It looks very similar to what I need, but it has an alternating sign which sits in my way. Can anyone point me in the right direction?
Truncate the Maclaurin series for $\log(1+x)$ at the $2m$-th term, and evaluate at $x=1$. Take for example $m=10$. We get $$1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\frac{1}{5}-\frac{1}{6}+\cdots+\frac{1}{19}-\frac{1}{20}.$$ Add $2\left(\frac{1}{2}+\frac{1}{4}+\frac{1}{6}+\cdots +\frac{1}{20}\right)$, and subtract the same thing, but this time noting that $$ 2\left(\frac{1}{2}+\frac{1}{4}+\frac{1}{6}+\cdots +\frac{1}{20}\right)=1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{10}.$$ We get $$\left(1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{10}+\frac{1}{11}+\cdots+\frac{1}{20}\right)-\left(1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{10}\right).$$ There is nice cancellation, and we get $$\frac{1}{11}+\frac{1}{12}+\cdots+\frac{1}{20}.$$
How to calculate the asymptotic expansion of $\sum \sqrt{k}$? Denote $u_n:=\sum_{k=1}^n \sqrt{k}$. We can easily see that $$ k^{1/2} = \frac{2}{3} (k^{3/2} - (k-1)^{3/2}) + O(k^{-1/2}),$$ hence $\sum_1^n \sqrt{k} = \frac{2}{3}n^{3/2} + O(n^{1/2})$, because $\sum_1^n O(k^{-1/2}) =O(n^{1/2})$. With some more calculations, we get $$ k^{1/2} = \frac{2}{3} (k^{3/2} - (k-1)^{3/2}) + \frac{1}{2} (k^{1/2}-(k-1)^{-3/2}) + O(k^{-1/2}),$$ hence $\sum_1^n \sqrt{k} = \frac{2}{3}n^{3/2} + \frac{1}{2} n^{1/2} + C + O(n^{1/2})$ for some constant $C$, because $\sum_n^\infty O(k^{-3/2}) = O(n^{-1/2})$. Now let's go further. I have made the following calculation $$k^{1/2} = \frac{3}{2} \Delta_{3/2}(k) + \frac{1}{2} \Delta_{1/2}(k) + \frac{1}{24} \Delta_{-1/2}(k) + O(k^{-5/2}),$$ where $\Delta_\alpha(k) = k^\alpha-(k-1)^{\alpha}$. Hence : $$\sum_{k=1}^n \sqrt{k} = \frac{2}{3} n^{3/2} + \frac{1}{2} n^{1/2} + C + \frac{1}{24} n^{-1/2} + O(n^{-3/2}).$$ And one can continue ad vitam aeternam, but the only term I don't know how to compute is the constant term. How do we find $C$ ?
Let us substitute into the sum $$\sqrt k=\frac{1}{\sqrt \pi }\int_0^{\infty}\frac{k e^{-kx}dx}{\sqrt x}. $$ Exchanging the order of summation and integration and summing the derivative of geometric series, we get \begin{align*} \mathcal S_N:= \sum_{k=1}^{N}\sqrt k&=\frac{1}{\sqrt \pi }\int_0^{\infty}\frac{\left(e^x-e^{-(N-1)x}\right)-N\left(e^{-(N-1)x}-e^{-Nx}\right)}{\left(e^x-1\right)^2}\frac{dx}{\sqrt x}=\\&=\frac{1}{2\sqrt\pi}\int_0^{\infty} \left(N-\frac{1-e^{-Nx}}{e^x-1}\right)\frac{dx}{x\sqrt x}=\\ &=\frac{1}{2\sqrt\pi}\int_0^{\infty} \left(N-\frac{1-e^{-Nx}}{e^x-1}\right)\frac{dx}{x\sqrt x}. \end{align*} To extract the asymptotics of the above integral it suffices to slightly elaborate the method used to answer this question. Namely \begin{align*} \mathcal S_N&=\frac{1}{2\sqrt\pi}\int_0^{\infty} \left(N-\frac{1-e^{-Nx}}{e^x-1}+\left(1-e^{-Nx}\right)\left(\frac1x-\frac12\right)-\left(1-e^{-Nx}\right)\left(\frac1x-\frac12\right)\right)\frac{dx}{x\sqrt x}=\\ &={\color{red}{\frac{1}{2\sqrt\pi}\int_0^{\infty}\left(1-e^{-Nx}\right)\left(\frac1x-\frac12-\frac{1}{e^x-1}\right)\frac{dx}{x\sqrt x}}}+\\&+ {\color{blue}{\frac{1}{2\sqrt\pi}\int_0^{\infty} \left(N-\left(1-e^{-Nx}\right)\left(\frac1x-\frac12\right)\right)\frac{dx}{x\sqrt x}}}. \end{align*} The reason to decompose $\mathcal S_N$ in this way is that * *the red integral has an easily computable finite limit: since $\frac1x-\frac12-\frac{1}{e^x-1}=O(x)$ as $x\to 0$, we can simply neglect the exponential $e^{-Nx}$. *the blue integral can be computed exactly. Therefore, as $N\to \infty$, we have $$\mathcal S_N={\color{blue}{\frac{\left(4n+3\right)\sqrt n}{6}}}+ {\color{red}{\frac{1}{2\sqrt\pi}\int_0^{\infty}\left(\frac1x-\frac12-\frac{1}{e^x-1}\right)\frac{dx}{x\sqrt x}+o(1)}},$$ and the finite part you are looking for is given by $$C=\frac{1}{2\sqrt\pi}\int_0^{\infty}\left(\frac1x-\frac12-\frac{1}{e^x-1}\right)\frac{dx}{x\sqrt x}=\zeta\left(-\frac12\right).$$
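A numerical check of the constant (my own, independent of the derivation): subtracting the non-constant terms of the expansion from the partial sums leaves a value approaching $\zeta\left(-\frac12\right)\approx-0.2078862$.

```python
from math import sqrt

def constant_term(N):
    s = sum(sqrt(k) for k in range(1, N + 1))
    # subtract (2/3)N^{3/2} + (1/2)N^{1/2} + (1/24)N^{-1/2}
    return s - (2 / 3) * N ** 1.5 - 0.5 * sqrt(N) - sqrt(N) / (24 * N)

for N in [10, 100, 10000]:
    print(N, constant_term(N))   # tends to about -0.20789
```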
How to prove **using SVD** that the nonsingular matrices are dense in $\mathbb{C}^{n \times n}$? How to show using SVD that the set of nonsingular matrices is dense in $\mathbb{C}^{n \times n}$? That is, for any $A \in \mathbb{C}^{n \times n}$, and any given $\varepsilon > 0$, there exists a nonsingular matrix $A_\varepsilon \in \mathbb{C}^{n \times n}$ such that: $\left \| A-A_\varepsilon \right \| \le \varepsilon$.
In a finite-dimensional space all norms are equivalent, so it suffices to work with one convenient choice; we take the spectral norm, which is an algebra (submultiplicative) norm and satisfies $\|U\|=\|V\|=1$ for unitary matrices. Let $A=U\Sigma V^*$ be the singular value decomposition, where $U$ and $V$ are unitary and $\Sigma=\operatorname{diag}(\sigma_n,\ldots,\sigma_1)$ with $$\sigma_n\geq\cdots\geq\sigma_1\geq0.$$ Let $\Sigma_p=\operatorname{diag}(\sigma_n+1/p,\ldots,\sigma_1+1/p)$; since every diagonal entry $\sigma_i+1/p$ is strictly positive, $\Sigma_p$ is invertible, hence $A_p=U\Sigma_pV^*\in GL_n(\mathbb{C})$. Then we have: $$\|A-A_p\|=\|U(\Sigma-\Sigma_p)V^*\|\leq\|U\|\,\|\Sigma-\Sigma_p\|\,\|V^*\|=\|\Sigma-\Sigma_p\|=\frac{1}{p}\|I_n\|=\frac{1}{p},$$ so $$\lim_{p\to\infty}\|A-A_p\|=0,$$ and we conclude.
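A small numerical illustration of this construction (a sketch in NumPy; the helper name and the example matrix are mine, not from the answer):

```python
import numpy as np

def nonsingular_approx(A, p):
    """Shift every singular value of A by 1/p, as in the argument above."""
    U, s, Vh = np.linalg.svd(A)        # A = U @ diag(s) @ Vh
    s_p = s + 1.0 / p                  # all singular values become strictly positive
    return U @ np.diag(s_p) @ Vh

# Example: a visibly singular matrix
A = np.array([[1.0, 2.0], [2.0, 4.0]])   # rank 1, hence singular
for p in (10, 100, 1000):
    A_p = nonsingular_approx(A, p)
    # spectral-norm distance is exactly 1/p, and det(A_p) != 0
    print(p, np.linalg.norm(A - A_p, 2), np.linalg.det(A_p))
```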
Need help proving this integral If $a>b>0$, prove that: $$\int_0^{2\pi} \frac{\sin^2\theta}{a+b\cos\theta}\ d\theta = \frac{2\pi}{b^2} \left(a-\sqrt{a^2-b^2} \right) $$
I'll do this one: $$\int_{0}^{2\pi}\frac{\cos(2\theta)}{a+b\cos\theta}\,d\theta.$$ Once we know how to do this one, you can replace $\sin^{2}\theta$ by $\frac{1}{2}(1-\cos(2\theta))$ and do the same thing. On the unit circle we know that $\cos\theta=\frac{e^{i\theta}+e^{-i\theta}}{2}$, so by letting $z=e^{i\theta}$ we get $$\cos\theta=\frac{z+\frac{1}{z}}{2}=\frac{z^2+1}{2z}\quad\text{and}\quad\cos(2\theta)=\frac{z^2+\frac{1}{z^2}}{2}=\frac{z^4+1}{2z^2}.$$ Thus, if $\gamma:\,|z|=1$ and we use $d\theta=\frac{dz}{iz}$, the integral becomes $$\int_{\gamma}\frac{\frac{z^4+1}{2z^2}}{a+b\frac{z^2+1}{2z}}\frac{dz}{iz}=\int_{\gamma}\frac{-i(z^4+1)}{z^2(bz^2+2az+b)}\,dz.$$ Now the roots of $bz^2+2az+b$ are $z=\frac{-2a\pm \sqrt{4a^2-4b^2}}{2b}=-\frac{a}{b}\pm\frac{\sqrt{a^2-b^2}}{b}$, and you can check that the only root inside $|z|=1$ is $z_1=-\frac{a}{b}+\frac{\sqrt{a^2-b^2}}{b}$. So the only singularities of the integrand inside $\gamma$ are $z_0=0$ (a pole of order two) and $z_1$ (a simple pole). Find the residues and sum them to get the answer. Notice that this only gives you "half" the answer: you still have to evaluate $$\frac{1}{2}\int_{0}^{2\pi}\frac{d\theta}{a+b\cos\theta}$$ in the same way to get the full answer.
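A quick numerical sanity check of the stated closed form (a sketch using SciPy; the sample values of $a$ and $b$ are arbitrary, chosen with $a>b>0$):

```python
import numpy as np
from scipy.integrate import quad

def check(a, b):
    # Left side: numerical quadrature of the original integral
    lhs, _ = quad(lambda t: np.sin(t) ** 2 / (a + b * np.cos(t)), 0, 2 * np.pi)
    # Right side: the claimed closed form
    rhs = 2 * np.pi / b ** 2 * (a - np.sqrt(a ** 2 - b ** 2))
    return lhs, rhs

print(check(3.0, 2.0))   # the two numbers agree to quadrature accuracy
print(check(5.0, 1.0))
```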
Laplace transform of the Bessel function of the first kind I want to show that $$ \int_{0}^{\infty} J_{n}(bx) e^{-ax} \, dx = \frac{(\sqrt{a^{2}+b^{2}}-a)^{n}}{b^{n}\sqrt{a^{2}+b^{2}}}\ , \quad \ (n \in \mathbb{Z}_{\ge 0} \, , \text{Re}(a) >0 , \, b >0 ),$$ where $J_{n}(x)$ is the Bessel function of the first kind of order $n$. But the result I get using an integral representation of $J_{n}(bx)$ is off by a factor of $ \displaystyle \frac{1}{b}$, and I don't understand why. $$ \begin{align} \int_{0}^{\infty} J_{n}(bx) e^{-ax} \, dx &= \frac{1}{2 \pi} \int_{0}^{\infty} \int_{-\pi}^{\pi} e^{i(n \theta -bx \sin \theta)} e^{-ax} \, d \theta \, dx \\ &= \frac{1}{2 \pi} \int_{-\pi}^{\pi} \int_{0}^{\infty} e^{i n \theta} e^{-(a+ib \sin \theta)x} \, dx \, d \theta \\ &= \frac{1}{2 \pi} \int_{-\pi}^{\pi} \frac{e^{i n \theta}}{a + ib \sin \theta} \, d \theta \\ &= \frac{1}{2 \pi} \int_{|z|=1} \frac{z^{n}}{a+\frac{b}{2} \left(z-\frac{1}{z} \right)} \frac{dz} {iz} \\ &= \frac{1}{i\pi} \int_{|z|=1} \frac{z^{n}}{bz^{2}+2az-b} \, dz \end{align}$$ The integrand has simple poles at $\displaystyle z= -\frac{a}{b} \pm \frac{\sqrt{a^{2}+b^{2}}}{b}$. But only the pole at $\displaystyle z= -\frac{a}{b} + \frac{\sqrt{a^{2}+b^{2}}}{b}$ is inside the unit circle. Therefore, $$ \begin{align} \int_{0}^{\infty} J_{n}(bx) e^{-ax} \, dx &= \frac{1}{i \pi} \, 2 \pi i \ \text{Res} \left[ \frac{z^{n}}{bz^{2}+2az-b}, -\frac{a}{b} + \frac{\sqrt{a^{2}+b^{2}}}{b} \right] \\ &= {\color{red}{b}} \ \frac{(\sqrt{a^{2}+b^{2}}-a)^{n}}{b^{n}\sqrt{a^{2}+b^{2}}} . \end{align}$$
Everything is correct up until your computation of the residue. Write $$bz^2+2az-b=b(z-z_+)(z-z_-)$$ where $$z_\pm=-\frac{a}{b}\pm\frac{\sqrt{a^2+b^2}}{b}$$ as you have determined. Now, $${\rm Res}\Bigg(\frac{z^n}{b(z-z_+)(z-z_-)};\quad z=z_+\Bigg)=\lim_{z\to z_+} (z-z_+)\frac{z^n}{b(z-z_+)(z-z_-)}=\frac{z_+^{\,n}}{b(z_+-z_-)}=\frac{z_+^{\,n}}{2\sqrt{a^2+b^2}},$$ and here you get the desired factor of $1/b$, since $b(z_+-z_-)=2\sqrt{a^2+b^2}$.
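For a numerical confirmation of the transform itself (a sketch using SciPy's Bessel function `jv`; the finite upper limit and the sample parameters are my own choices, the exponential factor makes the truncation error negligible):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def check(n, a, b):
    # Left side: the Laplace transform of J_n(bx), truncated at x = 60
    lhs, _ = quad(lambda x: jv(n, b * x) * np.exp(-a * x), 0, 60, limit=200)
    # Right side: the stated closed form
    rhs = (np.sqrt(a ** 2 + b ** 2) - a) ** n / (b ** n * np.sqrt(a ** 2 + b ** 2))
    return lhs, rhs

print(check(0, 2.0, 1.0))   # the two values agree
print(check(3, 1.5, 2.5))
```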
On the Definition of Posets... In my book, the author defines posets formally in the following way: Let $P$ be a set, and let $\le$ be a relation on $P$ such that $a$. $\le$ is reflexive, $b$. $\le$ is transitive, $c$. $\le$ is antisymmetric. For $a$, does this merely mean that every element $x\in P$ must be related to itself? And for $b$, that if $x$ is related to $y$ and $y$ is related to $z$, then $x$ is related to $z$? Moreover, when trying to determine whether something is a poset, do I just have to determine whether such a relation exists? And is that relation not necessarily the usual meaning of "$\le$"?
Your statements are correct. On your last question: a poset is a pair $(P, R)$, where $P$ is a set and $R$ is a relation on $P$ (which must have properties a, b and c). So the relation is a part of the poset; there is no freedom to choose it. Hence, to determine whether something is a poset, you don't have to determine whether a suitable order relation exists (it always does), but whether the set together with the given relation is a poset. The relation should always be clear from the context, in particular when the relation symbol is not "$\leq$".
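As a small illustration of checking the three properties on a finite set (a Python sketch; the divisibility example is mine, not from the book):

```python
from itertools import product

def is_poset(P, rel):
    """Check reflexivity, antisymmetry and transitivity of `rel` on the finite set P."""
    P = list(P)
    reflexive     = all(rel(x, x) for x in P)
    antisymmetric = all(not (rel(x, y) and rel(y, x)) or x == y
                        for x, y in product(P, P))
    transitive    = all(not (rel(x, y) and rel(y, z)) or rel(x, z)
                        for x, y, z in product(P, P, P))
    return reflexive and antisymmetric and transitive

P = range(1, 9)
print(is_poset(P, lambda x, y: y % x == 0))   # divisibility on {1,...,8}: True, a poset
print(is_poset(P, lambda x, y: x < y))        # strict <: False (not reflexive)
```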
Discrete math: Euler cycle or Euler tour/path? Could someone help explain how I can figure out whether the given graphs have an Euler cycle or an Euler path? Is it through trial and error? Here are some examples: Would appreciate any help.
A graph is Eulerian if it contains an Eulerian circuit, where an Eulerian circuit is a closed Eulerian trail. By an Eulerian trail we mean a trail that visits every edge of the graph once and only once. Now use the result that "A connected graph is Eulerian if and only if every vertex of $G$ has even degree." Similarly, a connected graph contains an Eulerian path that is not a circuit if and only if it has exactly two vertices of odd degree. With these two facts you may distinguish the cases easily; notice that an Eulerian path starts and ends at different vertices, while an Eulerian circuit starts and ends at the same vertex.
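Here is a minimal Python sketch of that degree-parity test (the function name and the example graphs are illustrative, not from the question; the graphs are assumed connected):

```python
from collections import Counter

def euler_classification(edges):
    """Classify a connected graph, given as a list of undirected edges (u, v)."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    odd = [v for v, d in deg.items() if d % 2 == 1]
    if len(odd) == 0:
        return "Eulerian circuit"
    if len(odd) == 2:
        return "Eulerian path between {} and {}".format(odd[0], odd[1])
    return "neither"

square = [(1, 2), (2, 3), (3, 4), (4, 1)]               # every degree is 2
house  = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]       # vertices 1 and 3 have odd degree
print(euler_classification(square))   # Eulerian circuit
print(euler_classification(house))    # Eulerian path between 1 and 3
```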