Example of a normal operator which has no eigenvalues Is there a normal operator which has no eigenvalues? If your answer is yes, give an example. Thanks.
Example 1 'I think "shift operator or translation operator" is one of them.' – Ali Qurbani Indeed, the bilateral shift operator on $\ell^2$, the Hilbert space of square-summable two-sided sequences, is normal but has no eigenvalues. Let $L:\ell^2 \to \ell^2$ be the left shift operator, $R:\ell^2 \to \ell^2$ the right shift operator and $\langle\cdot,\cdot\rangle$ denote the inner product. Take $x=(x_n)_{n\in \mathbb{Z}}$ and $y=(y_n)_{n\in \mathbb{Z}}$ two sequences in $\ell^2$: $$\langle Lx, y\rangle = \sum_{n \in \mathbb{Z}} x_{n+1}\overline{y_n} = \sum_{n \in \mathbb{Z}} x_n\overline{y_{n-1}} = \langle x, Ry\rangle,$$ hence $L^*=R=L^{-1}$, i.e. $L$ is unitary and in particular normal. Now let $\lambda$ be a scalar and $x\in\ell^2$ such that $Lx = \lambda x$; then $x_n = \lambda^n x_0$ for all $n \in \mathbb{Z}$, and we have $$\|x\|^2=\sum_{n \in \mathbb{Z}} |x_n|^2 = |x_0|^2\left( \sum_{n=1}^\infty |\lambda|^{2n} + \sum_{n=0}^{-\infty} |\lambda|^{2n} \right).$$ The first sum diverges for $|\lambda|\geq 1$ and the second sum diverges for $|\lambda|\leq 1$, so finiteness of $\|x\|$ forces $x_0=0$ and hence $x=0$, which cannot be an eigenvector; therefore $L$ has no eigenvalues. Example 2 As Christopher A. Wong pointed out, you can construct another example with a multiplication operator. Let $L^2$ be the Hilbert space of Lebesgue-square-integrable functions on $[0,1]$ and $M:L^2\to L^2,\ f \mapsto h\cdot f$, where $h(x)=x$. For $f,g\in L^2$ we have $$\langle Mf,g\rangle = \int_0^1 h\cdot f\cdot \overline{g} \ dx = \langle f,Mg\rangle$$ since $h$ is real-valued, i.e. $M$ is self-adjoint, hence normal. Now let $\lambda$ be a scalar and $f\in L^2$ such that $(M-\lambda)f = 0$, i.e. $(x-\lambda)f(x)=0$ for almost every $x$. Since $x-\lambda\neq 0$ for almost every $x$, this forces $f=0$ almost everywhere, so $M$ has no eigenvalues.
Prove the relation of $\cosh(\pi/2)$ and $e$ Prove that: $$\cosh\left(\frac{\pi}{2}\right)=\frac{1}{2}e^{-{\pi/2}}(1+e^\pi)$$ What I have tried: $$\cosh\left(\frac{\pi}{2}\right)=\cos\left(i\frac{\pi}{2}\right)$$ $$=\operatorname{Re}\{e^{i\cdot i\frac{\pi}{2}}\}$$ $$=\operatorname{Re}\{e^{-\frac{\pi}{2}}\}$$ Why is $e^{-\frac{\pi}{2}}$ not the answer, and why is $$\frac{e^{-\frac{\pi}{2}}+e^{\frac{\pi}{2}}}{2}$$ the correct solution? Did I miss something somewhere?
$\cosh(x)$ is usually defined as $\frac{e^{x} + e^{-x}}{2}$. If you don't have some different definition, then it is quite straightforward: $$\cosh\left(\frac{\pi}{2}\right)=\frac{e^{\frac{\pi}{2}} + e^{-\frac{\pi}{2}}}{2} = \frac{1}{2}e^{-\frac{\pi}{2}}\left(e^{\pi}+1\right),$$ where the last step just factors out $e^{-\frac{\pi}{2}}$, using $e^{\frac{\pi}{2}}=e^{-\frac{\pi}{2}}\cdot e^{\pi}$. As for your attempt: the identity $\cos t=\operatorname{Re}\,e^{it}$ holds only for real $t$, so it cannot be applied to $t=i\frac{\pi}{2}$; for complex arguments you must use $\cos z=\frac{e^{iz}+e^{-iz}}{2}$, which brings you straight back to the definition above.
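If you want to double-check numerically that the two sides agree (a throwaway Python sketch):

```python
import math

# cosh(pi/2) computed directly vs. the factored form (1/2)e^{-pi/2}(1 + e^pi)
lhs = math.cosh(math.pi / 2)
rhs = 0.5 * math.exp(-math.pi / 2) * (1 + math.exp(math.pi))
print(lhs, rhs, math.isclose(lhs, rhs))  # both are about 2.5092, so the check passes
```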
Field of characteristic 0 such that every finite extension is cyclic I am trying to construct a field $F$ of characteristic 0 such that every finite extension of $F$ is cyclic. I think that I have an idea as to what $F$ should be, but I am not sure how to complete the proof that it has this property. Namely, let $a\in \mathbb Z$ be an element which is not a perfect square and let $F$ be a maximal subfield of $\bar{\mathbb Q}$ which does not contain $\sqrt{a}$ (such a field exists by Zorn's lemma). Intuitively, a finite extension of $F$ should be generated by $a^{1/2^n}$ for some $n$, in which case its Galois group will be cyclic since $F$ contains the $2^n$th roots of unity. However, I cannot find a nice way to prove this. Any suggestions?
This is a fairly common question in algebra texts. Here's a series of hints taken from a prelim exam. Let $F$ be a maximal subfield of $\bar{\mathbb Q}$ with respect to not containing $\sqrt{a}$. Let $F \subset E$ be a Galois extension. Show that $F(\sqrt{a})$ is the unique subfield of $E$ of degree $2$. Deduce that $\mathrm{Gal}(E/F)$ contains a maximal normal subgroup of index $2$. Conclude that $\mathrm{Gal}(E/F)$ is cyclic.
Are PA and ZFC examples of logical systems? Wikipedia says A logical system or, for short, logic, is a formal system together with a form of semantics, usually in the form of model-theoretic interpretation, which assigns truth values to sentences of the formal language. When we talk about PA or ZFC, are these logical systems, or are they merely formal systems?
When I talk about a logical system in the sense that "a logical system or, for short, logic, is a formal system together with a form of semantics, usually in the form of model-theoretic interpretation", I understand a logic $\mathcal{L}$ to be a pair $(L,\models)$, where $L$ is a function whose domain is the class of all signatures. But ZFC is not of this form: it is written in a single fixed signature with just one non-logical symbol, $\in$. So it does not make sense to regard ZFC itself as a logic; it is a theory formulated in first-order logic over that signature, and the same goes for PA.
Is the set of all bounded sequences complete? Let $X$ be the set of all bounded sequences $x=(x_n)$ of real numbers and let $$d(x,y)=\sup{|x_n-y_n|}.$$ I need to show that $X$ is a complete metric space. I need to show that all Cauchy sequences are convergent. I appreciate your help.
HINT: Let $\langle x^n:n\in\Bbb N\rangle$ be a Cauchy sequence in $X$. The superscripts are just that, labels, not exponents: $x^n=\langle x^n_k:k\in\Bbb N\rangle\in X$. Fix $k\in\Bbb N$, and consider the sequence $$\langle x^n_k:n\in\Bbb N\rangle=\langle x^0_k,x^1_k,x^2_k,\dots\rangle\tag{1}$$ of $k$-th coordinates of the sequences $x^n$. Show that for any $m,n\in\Bbb N$, $|x^m_k-x^n_k|\le d(x^m,x^n)$ and use this to conclude that the sequence $(1)$ is a Cauchy sequence in $\Bbb R$. $\Bbb R$ is complete, so $(1)$ converges to some $y_k\in\Bbb R$. Let $y=\langle y_k:k\in\Bbb N\rangle$; show that $y\in X$ and that $\langle x^n:n\in\Bbb N\rangle$ converges to $y$ in $X$.
Can I really factor a constant into the $\min$ function? Say I have $\min(5x_1,x_2)$ and I multiply the whole function by $10$, i.e. $10\min(5x_1,x_2)$. Does that simplify to $\min(50x_1,10x_2)$? In one of my classes I think my professor did this but I'm not sure (his notes are very hard to read and seemingly bad), and I'm just trying to put these notes together. Thanks!
Yes, that is legal as long as the constant is not negative. I.e., $10 \cdot \max(3, 5) = 10 \cdot 5 = 50$ is the same as $\max(10 \cdot 3, 10 \cdot 5) = 50$, but try multiplying by $-10$: you get $-10\cdot\max(3,5)=-50$, whereas $\max(-30,-50)=-30$; a negative constant turns a $\max$ into a $\min$ (and vice versa). The same reasoning applies to $\min$.
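A small illustrative check with hypothetical integer values (not from the original notes):

```python
import random

# Positive constants factor into min; a negative constant flips min into max.
for _ in range(5):
    x1, x2 = random.randint(-10, 10), random.randint(-10, 10)
    assert 10 * min(5 * x1, x2) == min(50 * x1, 10 * x2)
    assert -10 * min(5 * x1, x2) == max(-50 * x1, -10 * x2)
print("checks passed")
```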
Zero and Cozero-sets of $\mathbb{R}$ A subset $U$ of a space $X$ is said to be a zero-set if there exists a continuous real-valued function $f$ on $X$ such that $U=\{x\in X: f(x)=0\}$, and is said to be a cozero-set if there exists a continuous real-valued function $g$ on $X$ such that $U=\{x\in X: g(x)\not=0\}$. Is it true that every closed set in $\mathbb{R}$ is a cozero-set? I guess that since $\mathbb{R}$ is completely regular, every closed set is a cozero-set; but by the same argument, using complete regularity of $\mathbb{R}$, every closed subset of $\mathbb{R}$ would be a zero-set. Is this argument correct? How can we discuss the relation between open and closed subsets of $\mathbb{R}$ and zero- and cozero-sets? Thanks.
I just want to know how $\emptyset$ and $\mathbb{R}$ could fail to be zero-sets: if I take $f(x) = 0$ for all $x$ and $g(x) = e^{x} + 1$ for all $x$, both are continuous, and then $\mathbb{R}$ is the zero-set of $f$ and $\emptyset$ is the zero-set of $g$.
How to show that $ n^{2} = 4^{{\log_{2}}(n)} $? I ran across this simple identity yesterday, but can’t seem to find a way to get from one side to the other: $$ n^{2} = 4^{{\log_{2}}(n)}. $$ Wolfram Alpha tells me that it is true, but other than that, I’m stuck.
Write $4=2^2$, so $$4^{\log_{2} n}=\left(2^{2}\right)^{\log_{2} n}=2^{2\log_{2} n}=\left(2^{\log_{2} n}\right)^{2}=n^{2}.$$ Equivalently, take $\log_{2}$ of both sides of $n^{2} = 4^{\log_{2} n}$: the left side gives $2\log_{2}n$ and the right side gives $\log_{2}n\cdot\log_{2}4 = 2\log_{2}n$, so the two sides agree.
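If you want a quick numerical sanity check for a few sample values of $n$ (a throwaway Python sketch):

```python
import math

# Spot-check n^2 == 4^(log2 n) for a few positive sample values of n
for n in [2, 3, 7, 100, 0.5]:
    lhs = n ** 2
    rhs = 4 ** math.log2(n)
    print(n, lhs, rhs, math.isclose(lhs, rhs))
```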
$P$ projector. prove that $\langle Px,x\rangle=\|Px\|^2.$ Let $X$ be a Hilbert space and $P \in B(X)$ a projector. Then for any $x\in X$: $$\langle Px,x\rangle=\|Px\|^2.$$ My proof: $$\|Px\|^{2}=\langle Px,Px\rangle=\langle P^{*}Px,x\rangle=\langle P^2x,x\rangle=\langle Px,x\rangle.$$ Is ok ? Thanks :)
Yes, that is all, provided that by projector you mean an orthogonal projection, i.e. $P^2=P$ and $P^{*}=P$: the step $\langle P^{*}Px,x\rangle=\langle P^2x,x\rangle$ uses $P^{*}=P$. (For an idempotent that is not self-adjoint the identity can fail.)
Obvious Group and subgroup questions. My awesome math prof posted a practice midterm but didn't post any solutions to it :s Here is the question. Let $G$ be a group and let $H$ be a subgroup of $G$. * *(a) TRUE or FALSE: If $G$ is abelian, then so is $H$. *(b) TRUE or FALSE: If $H$ is abelian, then so is $G$. Part (a) is clearly true but I am having a bit of difficulty proving it, after fulling the conditions of being a subgroup the commutative of $G$ should imply that $ab=ba$ somehow. Part (b) I am fairly certain this is false and I know my tiny brain should be able to find an example somewhere but it is 4 am here :) I want to use some non-abelian group $G$ then find a generator to make a cyclic subgroup of $G$ that is abelian. Any help would be appreciated, I have looked in my book but I can't seem to find for certain what I am looking for with what we have cover thus far.
(b) Take $G$ to be the group $\mathrm{GL}_n(\mathbb{R})$ of invertible $(n \times n)$-matrices with $\mathbb{R}$-coefficients under the usual matrix multiplication, for some $n \geq 2$, and let $H$ be the subgroup of invertible diagonal matrices. $H$ is abelian, but $G$ is not abelian.
when $f(x)^n$ is a degree of $n$ why is useful to think $\sqrt{f(x)^n}$ as $n/2$? I have come across this question when doing problems from "Schaum's 3000 Solved Calculus Problems". I was trying to solve $$\lim_{x\rightarrow+\infty}\frac{4x-1}{\sqrt{x^2+2}}$$ and I couldn't so I looked the solution and solution said Can someone please explain to me why this is and exactly how it works? Also, the next question is as such $$\lim_{x\rightarrow-\infty}\frac{4x-1}{\sqrt{x^2+2}}$$ and there the author has suggested that $x= -\sqrt{x^2}$. Why is that? Thanks EDIT Can someone use the above technique and solve it, to show it works? Because I understand the exponent rules, I am aware of that but what I don't understand is why you want to do that? Here is the solution that book shows:
Note that $$|x|\leq\sqrt{x^2+2}\leq|x|+\sqrt{2}.$$ From this inequality you can conclude that the growth of the function $\sqrt{x^2+2}$ is essentially linear, or in the language of the author, the degree of $\sqrt{f(x)}$ behaves like $\frac{n}{2}$ when $f$ has degree $n$. Therefore the functions $4x-1$ and $\sqrt{x^2+2}$ grow at the same rate, or better said, the limit $$\lim_{x\rightarrow\infty}\frac{4x-1}{\sqrt{x^2+2}}$$ exists and is finite.
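Since the edit asks for the technique to be carried out, here is a short worked version of both limits; the key point is that $\sqrt{x^2}=|x|$, so $\sqrt{x^2}=x$ for $x>0$ and $\sqrt{x^2}=-x$ for $x<0$: $$\lim_{x\to+\infty}\frac{4x-1}{\sqrt{x^2+2}}=\lim_{x\to+\infty}\frac{x\left(4-\frac1x\right)}{x\sqrt{1+\frac{2}{x^2}}}=\lim_{x\to+\infty}\frac{4-\frac1x}{\sqrt{1+\frac{2}{x^2}}}=4,$$ $$\lim_{x\to-\infty}\frac{4x-1}{\sqrt{x^2+2}}=\lim_{x\to-\infty}\frac{x\left(4-\frac1x\right)}{-x\sqrt{1+\frac{2}{x^2}}}=\lim_{x\to-\infty}\frac{4-\frac1x}{-\sqrt{1+\frac{2}{x^2}}}=-4.$$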
Is $A - B = \emptyset$? $A = \{1,2,3,4,5\}, B = \{1,2,3,4,5,6,7,8\}$ $A - B =$ ? Does that just leave me with $\emptyset$? Or do I do something with the leftover $6,7,8$?
$A - B = \emptyset$, because by definition, $A - B$ is everything that is in $A$ but not in $B$.
What is this math symbol called? My professors use a symbol $$x_0$$ and they pronounce it as x not or x nod, I am not sure what the exact name is because they have thick accents. I have tried looking this up on the Internet but I could not find an answer. Does anyone know what this is called?
They actually call it x-naught. I believe it comes from British English, kind of like how the Canadians call the letter z "zed". All it means is "x sub zero", just another way of saying the same thing. It does flow better though, I think: "sub zero" just takes so much more work to say. I do think "naught" and "not" have a similar meaning though - the absence of something, some value or quality. I'm sure there is a linguistic connection.
$T:\Bbb R^2 \to \Bbb R^2$, linear, diagonal with respect to any basis. Is there a linear transformation from $\Bbb R^2$ to $\Bbb R^2$ which is represented by a diagonal matrix when written with respect to any fixed basis? If such linear transformation $T$ exists, then its eigenvector should be the identity matrix for any fixed basis $\beta$ of $\Bbb R^2$. Then, I don't see, if this is possible or not.
If the transformation $T$ is represented by the matrix $A$ in basis $\mathcal{A}$, then it is represented by the matrix $PAP^{-1}$ in basis $\mathcal{B}$, where $P$ is the invertible change-of-basis matrix. Suppose that $T$ is represented by a diagonal matrix in any basis. Let $P$ be an arbitrary invertible matrix and $A$ any diagonal matrix: $$P = \left[\begin{array}{cc} p_{1,1} & p_{1,2} \\ p_{2,1} & p_{2,2} \end{array}\right] \text{ and } A = \left[\begin{array}{cc} d_1 & 0 \\ 0 & d_2 \end{array}\right].$$ Now, calculate $PAP^{-1} = \dfrac{1}{\det P} \left[\begin{array}{cc} b_{1,1} & b_{1,2} \\ b_{2,1} & b_{2,2} \end{array}\right]$, where the entries $b_{i,j}$ are polynomials in the $p_{i,j}$ and $d_i$ variables. For this new conjugated matrix to be diagonal, we have the following two equations. (Check!) $$\begin{align*} 0 = b_{1,2} &= (d_2 - d_1)p_{1,1}p_{1,2} \\ 0 = b_{2,1} &= (d_1 - d_2)p_{2,1}p_{2,2} \end{align*}$$ Since $P$ is arbitrary, the only way for these equations to always be satisfied is for $d_1 = d_2$. In other words, the original matrix $A$ was a scalar multiple of the identity. $$A = d \cdot \operatorname{Id}_2 = \left[\begin{array}{cc} d & 0 \\ 0 & d \end{array}\right].$$
Construction of a triangle, given: side, sum of the other sides and angle between them. Given: $\overline{AB}$, $\overline{AC}+\overline{BC}$ and $\angle C$. Construct the triangle $\triangle ABC$ using rule and compass.
Draw a segment $AP$ whose length equals the given sum $AC+BC$. At $P$, draw a ray making an angle equal to half of the given angle $C$ with the segment $PA$. Draw the circle with centre $A$ and radius equal to the given side $AB$; where it meets this ray you get the vertex $B$ (the given data must admit a triangle). Finally, draw the perpendicular bisector of $PB$; it cuts $AP$ at the required vertex $C$, and now you have the triangle. Indeed $CB=CP$, so $AC+CB=AC+CP=AP$ is the given sum, and $\angle ACB$, being the exterior angle at $C$ of the isosceles triangle $CPB$, equals $\angle CPB+\angle CBP=\frac{\angle C}{2}+\frac{\angle C}{2}=\angle C$.
Probability with Chi-Square distribution What is the difference, when calculating probabilities of Chi-Square distributions, between $<$ and $\leq$ or $>$ and $\geq$. For example, say you are asked to find P$(\chi_{5}^{2} \leq 1.145)$. I know that this is $=0.05$ from the table of Chi-Square distributions, but what if you were asked to find P$(\chi_{5}^{2} < 1.145)$? How would this be different?
The $\chi^2$ distributions are continuous distributions. If $X$ has continuous distribution, then $$\Pr(X\lt a)=\Pr(X\le a).$$ If $a$ is any point, then $\Pr(X=a)=0$. So in your case, the probabilities would be exactly the same. Many other useful distributions, such as the normal, and the exponential, are continuous.
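A quick check of the table value, and of the fact that strict and non-strict inequalities give the same probability (a sketch assuming SciPy is available):

```python
from scipy.stats import chi2

# chi-square with 5 degrees of freedom: 1.145 is the 5% point from the table
a, df = 1.145, 5
print(chi2.cdf(a, df))  # about 0.05, i.e. P(X <= a)
# P(X < a) equals P(X <= a) because P(X = a) = 0 for a continuous distribution,
# so the same CDF value answers both questions.
```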
Determining Ambiguity in Context Free Grammars What are some common ways to determine if a grammar is ambiguous or not? What are some common attributes that ambiguous grammars have? For example, consider the following Grammar G: $S \rightarrow S(E)|E$ $E \rightarrow (S)E|0|1|\epsilon$ My guess is that this grammar is not ambiguous, because of the parentheses I could not make an equivalent strings with distinct parse trees. I could have easily made a mistake since I am new to this. What are some common enumeration techniques for attempting to construct the same string with different parse trees? * *How can I know that I am right or wrong? *What are common attributes of ambiguous grammars? *How could I prove this to myself intuitively? *How could I prove this with formal mathematics?
Determining whether a context-free grammar is ambiguous is undecidable: there is no algorithm which will correctly say "yes" or "no" in finite time for all grammars. This doesn't mean there aren't classes of grammars for which an answer is possible. To prove a grammar ambiguous, you do as you outline: find a string with two parses. To prove it unambiguous is harder: you have to prove that the above isn't possible. It is known that the $LL(k)$ and $LR(k)$ grammars are unambiguous, and for $k = 1$ the conditions are relatively easy to check.
What is the origin of the phrase "as desired" in mathematics? This is a sort of strange question that popped into my head when I was reading a paper. In writing mathematics, many authors use the phrase "as desired" to conclude a proof, usually written to indicate that one has reached the result originally stated. I know that this is perfectly good English, but the phrase is so widespread, despite the fact that there are many other similar alternatives. Does anybody know whether the phrase has any specific origins?
From Wikipedia http://en.wikipedia.org/wiki/Q.E.D.: Q.E.D. is an initialism of the Latin phrase quod erat demonstrandum, originating from the Greek analogous hóper édei deîxai (ὅπερ ἔδει δεῖξαι), meaning "which had to be demonstrated". The phrase is traditionally placed in its abbreviated form at the end of a mathematical proof ... ...however, translating the Greek phrase ὅπερ ἔδει δεῖξαι produces a slightly different meaning. Since the verb "δείκνυμι" also means to show or to prove, a better translation from the Greek would read, "what was required to be proved." The phrase was used by many early Greek mathematicians, including Euclid and Archimedes. But I don't know how close this translation of Q.E.D. "what was required" is to the phrase "as desired", as desired by the OP.
Trigonometric Identity $\frac{1}{1-\cos t} + \frac{1}{1+\cos t}$ I am just learning about trig identities, and after doing a few, I am stuck on this one: $$ \frac{1}{1-\cos t} + \frac{1}{1+\cos t}. $$ The only way to start, that I can think of is this: $$ \frac{1}{1-(1/\sec t)} + \frac{1}{1+(1/\sec t)}. $$ And from there it just gets messed up. Can someone point me in the right direction?
Hint: Use that $$ \frac{1}{a}+\frac{1}{b}=\frac{a+b}{ab} $$ along with the identity $$ \sin^2t+\cos^2t=1. $$
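Carrying the hint through (this just combines the two displayed facts): $$\frac{1}{1-\cos t} + \frac{1}{1+\cos t}=\frac{(1+\cos t)+(1-\cos t)}{(1-\cos t)(1+\cos t)}=\frac{2}{1-\cos^2 t}=\frac{2}{\sin^2 t}=2\csc^2 t.$$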
Existence of Matrix inverses depending on the existence of the inverse of the others.. Let $A_{m\times n}$ and $B_{n\times m}$ be two matrices with real entries. Prove that $I-AB$ is invertible iff $I-BA$ is invertible.
Hint: write $X=(I-BA)^{-1}$ and expand it formally as a Neumann series: $$X=I+BA+ (BA)(BA)+(BA)(BA)(BA)+\dots$$ Multiplying on the left by $A$ and on the right by $B$ gives $$AXB=AB+(AB)^2+(AB)^3+(AB)^4+\dots$$ $$I+AXB=I+(AB)+(AB)^2+\dots+(AB)^n+\dots=(I-AB)^{-1}.$$ The series need not converge, but the formula it suggests, $(I-AB)^{-1}=I+A(I-BA)^{-1}B$, can be checked directly: $(I+AXB)(I-AB)=I$ and $(I-AB)(I+AXB)=I$.
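The closed form suggested by the hint, $(I-AB)^{-1}=I+A(I-BA)^{-1}B$, can also be sanity-checked numerically (a minimal sketch with random matrices; the sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5
A = rng.normal(size=(m, n))
B = rng.normal(size=(n, m))

# Verify (I - AB)^{-1} == I + A (I - BA)^{-1} B for this random instance
lhs = np.linalg.inv(np.eye(m) - A @ B)
rhs = np.eye(m) + A @ np.linalg.inv(np.eye(n) - B @ A) @ B
print(np.allclose(lhs, rhs))  # True (provided I - AB is invertible)
```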
Intuitive meaning of immersion and submersion What is immersion and submersion at the intuitive level. What can be visually done in each case?
First of all, note that if $f : M \to N$ is a submersion, then $\dim M \geq \dim N$, and if $f$ is an immersion, $\dim M \leq \dim N$. The Rank Theorem may provide some insight into these concepts. The following statement of the theorem is taken from Lee's Introduction to Smooth Manifolds (second edition); see Theorem $4.12$. Suppose $M$ and $N$ are smooth manifolds of dimensions $m$ and $n$, respectively, and $F : M \to N$ is a smooth map with constant rank $r$. For each $p \in M$ there exist smooth charts $(U, \varphi)$ for $M$ centered at $p$ and $(V, \psi)$ for $N$ centered at $F(p)$ such that $F(U) \subseteq V$, in which $F$ has a coordinate representation of the form $$\hat{F}(x^1, \dots, x^r, x^{r+1}, \dots, x^m) = (x^1, \dots, x^r, 0, \dots, 0).$$ In particular, if $F$ is a smooth submersion, this becomes $$\hat{F}(x^1, \dots, x^n, x^{n+1}, \dots, x^m) = (x^1, \dots, x^n),$$ and if $F$ is a smooth immersion, it is $$\hat{F}(x^1, \dots, x^m) = (x^1, \dots, x^m, 0, \dots, 0).$$ So a submersion locally looks like a projection $\mathbb{R}^n\times\mathbb{R}^{m-n} \to \mathbb{R}^n$, while an immersion locally looks like an inclusion $\mathbb{R}^m \to \mathbb{R}^m\times\mathbb{R}^{n-m}$.
Groups with transitive automorphisms Let $G$ be a finite group such that for each $a,b \in G \setminus \{e\}$ there is an automorphism $\phi:G \rightarrow G$ with $\phi(a)=b$. Prove that $G$ is isomorphic to $\Bbb Z_p^n$ for some prime $p$ and natural number $n$.
Hint 1: If $a, b \in G \setminus \{e\}$, then $a$ and $b$ have the same order. Hint 2: Using the previous hint, show that $G$ has order $p^n$ for some prime $p$ and that every nonidentity element has order $p$. Hint 3: In a $p$-group, the center is a nontrivial characteristic subgroup.
Delta in continuity Let $f: [a,b]\to\mathbb{R}$ be continuous, prove that it is uniform continuous. I know using compactness it is almost one liner, but I want to prove it without using compactness. However, I can use the theorem that every continuous function achieves max and min on a closed bounded interval. I propose proving that some choices of $\delta$ can be continuous on $[a,b]$, for example but not restricted to: For an arbitrary $\epsilon>0$, for each $x\in[a,b]$ set $\Delta_x=\{0<\delta<b-a \;|\;|x-y|<\delta\Longrightarrow |f(x)-f(y)| <\epsilon\}$, denote $\delta_x = \sup \Delta_x $. Basically $\delta_x$ is the radius of largest neighborhood of $x$ that will be mapped into a subset of neighborhood radius epsilon of $f(x)$. I'm trying to show that $\delta_x$ is continuous on $[a,b]$ with fixed $\epsilon$. My progress is that I can show $\delta_y$ is bounded below if $y$ is close enough to $x$, but failed to find its upper bound that is related to its distance with $x$. Maybe either you could help me with this $\delta_x$ proof, or another cleaner proof without compactness (but allowed max and min). Thanks so much.
Let an $\epsilon>0$ be given and put $$\rho(x):=\sup\bigl\{\delta\in\ ]0,1]\ \bigm|\ y, \>y'\in U_\delta(x)\ \Rightarrow\ |f(y')-f(y)|<\epsilon\bigr\}\ .$$ By continuity of $f$ the function $x\to\rho(x)$ is strictly positive and $\leq1$ on $[a,b]$. Lemma. The function $\rho$ is $1$-Lipschitz continuous, i.e., $$|\rho(x')-\rho(x)|\leq |x'-x|\qquad \bigl(x,\ x'\in[a,b]\bigr)\ .$$ Proof. Assume the claim is wrong. Then there are two points $x_1$, $\>x_2\in[a,b]$ with $$\rho(x_2)-\rho(x_1)>|x_2-x_1|\ .$$ It follows that there is a $\delta$ with $\rho(x_1)<\delta<\rho(x_2)-|x_2-x_1|$. By definition of $\rho(x_1)$ we can find two points $y$, $\> y'\in U_\delta(x_1)$ such that $|f(y')-f(y)|\geq\epsilon$. Now $$|y-x_2|\leq |y -x_1|+|x_1-x_2|<\delta +|x_2-x_1| =:\delta'<\rho(x_2)\ .$$ Similarly $|y'-x_2|<\delta'$. It follows that $y$, $\>y'$ would contradict the definition of $\rho(x_2)$.$\qquad\quad\square$ The function $\rho$ therefore takes a positive minimum value $\rho_*$ on $[a,b]$. The number $\delta_*:={\rho_*\over2}>0$ is a universal $\delta$ for $f$ and the given $\epsilon$ on $[a,b]$.
Product of numbers Pair of numbers whose product is $+7$ and whose sum is $-8$. Factorise $x^{2} - 8x + 7$. I can factorise but it's just I can't find any products of $+7$ and that is a sum of $-8$. Any idea? Thanks guys! Thanks.
I think factorising $x^2-8x+7$ is exactly what you want to do, and Vieta's formulas tell you what to look for. For a quadratic $x^2+ax+b$, the roots $x_1, x_2$ satisfy $x_1+x_2=-a$ and $x_1 x_2=b$. For $x^{2} - 8x + 7$ this means the roots have sum $8$ and product $7$, so they are $1$ and $7$, and $$x^{2} - 8x + 7 = (x-1)(x-7).$$ If instead you look for the pair of numbers with product $+7$ and sum $-8$ (the numbers $p,q$ you would write inside the brackets as $(x+p)(x+q)$), they are $-1$ and $-7$, which gives the same factorisation.
How to find all polynomials with rational coefficients s.t $\forall r\notin\mathbb Q :f(r)\notin\mathbb Q$ How to find all polynomials with rational coefficients$f(x)=a_nx^n+\cdots+a_1x+a_0$, $a_i\in \mathbb Q$, such that $$\forall r\in\mathbb R\setminus\mathbb Q,\quad f(r)\in\mathbb R\setminus\mathbb Q.$$ thanks in advance
The only candidates are those polynomials $f(x)\in\mathbb Q[x]$ that are factored over $\mathbb Q$ as product of first degree polynomials (this is because if $\deg f>1$ and $f$ is irreducible then all of its roots are irrationals.) The first degree polynomials have this property. Can you see that these are all? (Hint: The polynomial $f(x)+q$, for suitable $q\in\mathbb Q$, is not a product of first degree polynomials)
Why does zeta have infinitely many zeros in the critical strip? I want a simple proof that $\zeta$ has infinitely many zeros in the critical strip. The function $$\xi(s) = \frac{1}{2} s (s-1) \pi^{\tfrac{s}{2}} \Gamma(\tfrac{s}{2})\zeta(s)$$ has exactly the non-trivial zeros of $\zeta$ as its zeros ($\Gamma$ cancels all the trivial ones out). It also satisfies the functional equation $\xi(s) = \xi(1-s)$. If we assume it has finitely many zeros, what analysis could get a contradiction? I found an outline for a way to do it here but I can't do the details myself: https://mathoverflow.net/questions/13647/why-does-the-riemann-zeta-function-have-non-trivial-zeros/13762#13762
Hardy proved in 1914 that an infinity of zeros were on the critical line ("Sur les zéros de la fonction $\zeta(s)$ de Riemann" Comptes rendus hebdomadaires des séances de l'Académie des sciences. 1914). Of course other zeros could exist elsewhere in the critical strip. Let's exhibit the main idea starting with the Xi function defined by : $$\Xi(t):=\xi\left(\frac 12+it\right)=-\frac 12\left(t^2+\frac 14\right)\,\pi^{-\frac 14-\frac{it}2}\,\Gamma\left(\frac 14+\frac{it}2\right)\,\zeta\left(\frac 12+it\right)$$ $\Xi(t)$ is an even integral function of $t$, real for real $t$ because of the functional equation (applied to $s=\frac 12+it$) : $$\xi(s)=\frac 12s(s-1)\pi^{-\frac s2}\,\Gamma\left(\frac s2\right)\,\zeta(s)=\frac 12s(s-1)\pi^{\frac {s-1}2}\,\Gamma\left(\frac {1-s}2\right)\,\zeta(1-s)=\xi(1-s)$$ We observe that a zero of $\zeta$ on the critical line will give a real zero of $\,\Xi(t)$. Now it can be proved (using Ramanujan's $(2.16.2)$ reproduced at the end) that : $$\int_0^\infty\frac{\Xi(t)}{t^2+\frac 14}\cos(x t)\,dt=\frac{\pi}2\left(e^{\frac x2}-2e^{-\frac x2}\psi\left(e^{-2x}\right)\right)$$ where $\,\displaystyle \psi(s):=\sum_{n=1}^\infty e^{-n^2\pi s}\ $ is the theta function used by Riemann Setting $\ x:=-i\alpha\ $ and after $2n$ derivations relatively to $\alpha$ we get (see Titchmarsh's first proof $10.2$, alternative proofs follow in the book...) : $$\lim_{\alpha\to\frac{\pi}4}\,\int_0^\infty\frac{\Xi(t)}{t^2+\frac 14}t^{2n}\cosh(\alpha t)\,dt=\frac{(-1)^n\,\pi\,\cos\bigl(\frac{\pi}8\bigr)}{4^n}$$ Let's suppose that $\Xi(t)$ doesn't change sign for $\,t\ge T\,$ then the integral will be uniformly convergent with respect to $\alpha$ for $0\le\alpha\le\frac{\pi}4$ so that, for every $n$, we will have (at the limit) : $$\int_0^\infty\frac{\Xi(t)}{t^2+\frac 14}t^{2n}\cosh\left(\frac {\pi t}4\right)\,dt=\frac{(-1)^n\,\pi\,\cos\bigl(\frac{\pi}8\bigr)}{4^n}$$ But this is not possible since, from our hypothesis, the left-hand side has the same sign for sufficiently large values of $n$ (c.f. Titchmarsh) while the right part has alternating signs. This proves that $\Xi(t)$ must change sign infinitely often and that $\zeta\left(\frac 12+it\right)$ has an infinity of real solutions $t$. Probably not as simple as you hoped but a stronger result! $$-$$ From Titchmarsh's book "The Theory of the Riemann Zeta-function" p. $35-36\;$ and $\;255-258$ :
Is this function differentiable at 0? I would like to know if this function is differentiable at the origin: $$f(x) = \left\{ \begin{array}{cl} x+x^2 & \mbox{if } x \in \mathbb{Q}; \\ x & \mbox{if } x \not\in \mathbb{Q}. \end{array} \right.$$ Intuitively, I know it is, but I don't know how to prove it. Any ideas? Thanks a lot.
For continuity at an arbitrary point $c\in\mathbb{R}$, use the sequential criterion: first take a rational sequence converging to $c$, then an irrational sequence converging to $c$, and equate the two limits. This forces $c^2+c=c$, i.e. $c^2=0$, so $c=0$; hence the function is continuous only at $c=0$. Now consider the limit $\lim_{x\rightarrow 0}\frac{f(x)-f(0)}{x}$: take a rational sequence $x_n\rightarrow 0$ and compute the limit of the difference quotients, then take an irrational sequence $x_n\rightarrow 0$ and do the same. Are the two limits equal? (For rational $x$ the quotient is $\frac{x+x^2}{x}=1+x$, while for irrational $x$ it is $\frac{x}{x}=1$, so both tend to $1$.) To understand the solution you should first read about the sequential criterion for limits and the sequential criterion for continuity, and then the sequential criterion for derivatives.
A proof question about continuity and norm Let $E⊂\mathbb{R}^{n}$ be a closed, non-empty set and $N : \mathbb{R}^{n}\to\mathbb{R}$ be a norm. Prove that the function $f(x)=\inf\left \{ N(x-a)\mid a\in E \right \}$, $f : \mathbb{R}^{n}→\mathbb{R}$ is continuous and $f^{-1}(0) = E$. There are some hints: $f^{-1}(0) = E$ will be implied by $E$ closed, and $f : \mathbb{R}^{n}\to\mathbb{R}$ is continuous will be implied by triangle inequality. I still can't get the proof by the hint. So...thank you for your help!
The other answers so far are good, but here is an alternative hint for the first part. Because $E$ is closed, its complement $E^c$ is open. A set in $\mathbb{R}^n$ is open if and only if the set contains an open ball around any point in the set. Thus, for any $x\in E^c$, there is some $r>0$ such that the open ball $B(x,r)\subset E^c$. What does that tell you about the minimum distance from $x$ to $E$?
What is inverse of $I+A$? Assume $A$ is a square invertible matrix and we have $A^{-1}$. If we know that $I+A$ is also invertible, do we have a close form for $(I+A)^{-1}$ in terms of $A^{-1}$ and $A$? Does it make it any easier if we know that sum of all rows are equal?
Check this question. The first answer presents a recursive formula to retrieve the inverse of a generic sum of matrices. So yours should be a special case.
Total divisor in a Principal Ideal Domain. Let $R$ be a right and left principal ideal domain. An element $a\in R$ is said to be a right divisor of $b\in R$ if there exists $x \in R$ such that $xa=b$ . And similarly define left divisor. $a$ is said to be a total divisor of $b$ if $RbR = <a>_R \cap$ $ _R<a>$ . How do I prove the following theorem: If $RbR \subseteq aR$ then $a$ is already a total divisor of $b$. Thanks in advance. I am finding pretty difficult to understand things in the noncommutative case.
I'm going to assume $R$ contains $1$. Also, my proof will only show that $RbR \subseteq aR \cap Ra$. Most definitions of total divisors I've seen go like the following: An element $a$ in a ring $R$ is a total divisor of $b$ when $RbR \subseteq aR \cap Ra$. Whether this implies $RbR = aR \cap Ra$ in a ring that is both a left and right principal ideal domain with unity, I'm unsure of. Maybe you could provide information on where you found this problem? The proof: Since $RbR$ is an ideal, it is also a right ideal, and because $R$ is a right principal ideal domain, we get $RbR = Rx$ for some $x \in R$. Again, because $R$ is a right PID, and sums of right ideals are right ideals, we have $Ra + Rx = Rd$ for some $d \in R$. Thus, $d = r_1a +r_2x$, with $r_1, r_2 \in R$. Further, $x = ar$ for some $r \in R$, because, $RbR \subseteq aR$ and $1 \in R$. Now, $dr = r_1ar +r_2xr = r_1ar + r_2r'x$ (for some $r' \in R$, because $xR = Rx$) $= r_1ar + r_2r'ar = (r_1a + r_2r'a)r$ $\implies d = r_1a + r_2r'a$ when $r \neq 0$, that is when $a \neq 0$ (should we have $a = 0$ the result follows trivially, so let's assume $a \neq 0$). That is, $d \in Ra$, so $Rd \subseteq Ra$, and $Rd = Ra$. From our previous equation $Ra + Rx = Rd$, we see that $Rx \subseteq Ra$. But $RbR = Rx = xR$, so $RbR \subseteq Ra$. We have, adding the hypothesis, that $RbR \subseteq aR$ and $RbR \subseteq Ra$, thus - $RbR \subseteq aR \cap Ra$.
How to check the continuity of this function defined as follows The function $f:\Bbb R\to\Bbb R$ defined by $f(x)=\min(3x^3+x,|x|)$ is (A) continuous on $\Bbb R$, but not differentiable at $x=0$. (B) differentiable on $\Bbb R$, but $f\,'$ is discontinuous at $x=0$. (C) differentiable on $\Bbb R$, and $f\,'$ is continuous on $\Bbb R$. (D) differentiable to any order on $\Bbb R$. My attempt: Here,$f(x)=|x|,x>0$;$f(x)=3x^3+x,x<0$;$f(x)=0$ at $x=0.$ Also,$Lf'(0)=Rf'(0)=1.$ So,$f$ is differentiable at $x=0.$ But I am having trouble to check whether $f'$ is continuous at $x=0$ or not. Can someone point me in the right direction?Thanks in advance for your time.
HINT: If $x<0$, $f\,'(x)=9x^2+1$, so $$\lim_{x\to 0^-}f\,'(x)=\lim_{x\to 0^-}\left(9x^2+1\right)=1\;.$$ Is this the same as $\lim_{x\to 0^+}f\,'(x)$? Are both one-sided limits equal to $f\,'(0)$?
Showing diffeomorphism between $S^1 \subset \mathbb{R}^2$ and $\mathbb{RP}^1$ I am trying to construct a diffeomorphism between $S^1 = \{x^2 + y^2 = 1; x,y \in \mathbb{R}\}$ with subspace topology and $\mathbb{R P}^1 = \{[x,y]: x,y \in \mathbb{R}; x \vee y \not = 0 \}$ with quotient topology and I am a little stuck. I have shown that both are smooth manifolds, and I used stereographic projection for $S^1$, but now I am runing into trouble when I give the homeomorphism between $S^1$ and $\mathbb{RP}^1$ as the map that takes a line in $\mathbb{RP}^1$ to the point in $S^1$ that you get when letting the parallel line go through the respective pole used in the stereographic projection. If I use the south and north poles I get a potential homeomorphism, but I cannot capture the horizontal line in my image, but when I pick say north and east then my map is not well defined as I get different lines for the same point in $S^1$. Can somebody give me a hint how to make this construction work, or is it better to move to a different representation of $S^1$ ?
The easiest explicit map I know is: $$(\cos(\theta), \sin(\theta))\mapsto [\cos(\theta/2):\sin(\theta/2)].$$ Note that although $\cos(\theta/2)$ and $\sin(\theta/2)$ depend on $\theta$ itself and not just on $\sin(\theta)$ and $\cos(\theta)$, the map is well defined. That is, $$\cos\left(\frac{\theta+2\pi}{2}\right)=-\cos(\theta/2) \text{ and } \sin\left(\frac{\theta+2\pi}{2}\right)=-\sin(\theta/2),$$ so the choice of $\theta$ modulo $2\pi$ does not affect $[\cos(\theta/2):\sin(\theta/2)]$, since $[x:y]=[-x:-y]$.
Proving that if $f: \mathbb{C} \to \mathbb{C} $ is a continuous function with $f^2, f^3$ analytic, then $f$ is also analytic Let $f: \mathbb{C} \to \mathbb{C}$ be a continuous function such that $f^2$ and $f^3$ are both analytic. Prove that $f$ is also analytic. Some ideas: At $z_0$ where $f^2$ is not $0$ , then $f^3$ and $f^2$ are analytic so $f = \frac{f^3}{f^2}$ is analytic at $z_0$ but at $z_0$ where $f^2$ is $0$, I'm not able to show that $f$ is analytic.
First rule out the case $f^2(z)\equiv 0$ or $f^3(z)\equiv 0$, as both imply $f(z)\equiv 0$ and we are done. Write $f^2(z)=(z-z_0)^ng(z)$, $f^3(z)=(z-z_0)^mh(z)$ with $n,m\in\mathbb N_0$ and $g,h$ analytic and nonzero at $z_0$. Then $$(z-z_0)^{3n}g^3(z)=f^6(z)=(z-z_0)^{2m}h^2(z)$$ implies $3n=2m$ (and $g^3=h^2$), hence if we let $k=m-n\in\mathbb Z$ we have $n=3n-2n=2m-2n=2k$ and $m=3m-2m=3m-3n=3k$. Especially, we see that $k\ge 0$ and hence $$ f(z)=\frac{f^3(z)}{f^2(z)}=(z-z_0)^k\frac{h(z)}{g(z)}$$ is analytic at $z_0$. Remark: We did not need that $f$ itself is continuous.
For each $n \ge 1$ compute $Z(S_n)$ Can someone please help me on how to compute $Z(S_n)$ for each $n \ge 1$? Does this basically mean compute $Z(1), Z(2), \ldots$? Please hint me on how to compute this. Thanks in advance.
Hint: $S_n$ denotes the symmetric group on a set of $n$ elements. It is the group of all possible permutations, so you have to find $Z(S_1),Z(S_2),\dots$, that is, the permutations that commute with every other permutation. That is the definition of $Z(G)$: $$Z(G)=\lbrace g\in G : ga=ag\;\;\forall a\in G\rbrace,$$ so it consists of all the elements of the group that commute with ALL members of the group. It is a special case of the centralizer of a subgroup: if you have $H\subset G$, with $G$ a group and $H$ a subgroup, then the centralizer of $H$ in $G$ is $$C_G(H)=\lbrace g\in G : gh=hg\;\;\forall h\in H\rbrace,$$ and the center of a group, $Z(G)$, is the centralizer of $G$ in $G$: $C_G(G)$. There are some trivial cases for small values of $n$; for example, for $n=2$ the group is abelian, so $Z(S_2)=S_2$. Remember the order of $S_n$: $|S_n|=n!$.
Binomial-Like Distribution with Changing Probability The Question Assume we have $n$ multiple choice questions, the $k$-th question having $k+1$ answer choices. What is the probability that, guessing randomly, we get at least $r$ questions right? If no general case is available, I am OK with the special case $r = \left\lfloor\frac{n}{2}\right\rfloor + 1$. Example Assume we have four different multiple choice questions. * *Question 1 * *Choice A *Choice B *Question 2 * *Choice A *Choice B *Choice C *Question 3 * *Choice A *Choice B *Choice C *Choice D *Question 4 * *Choice A *Choice B *Choice C *Choice D *Choice E If we choose the answer to each question at random, what is the probability we get at least three right? (By constructing a probability tree, I get the answer as $11/120$.)
Let $U_k$ be an indicator random variable, equal to 1 if the $k$-th question has been guessed correctly. Clearly $(U_1, U_2,\ldots,U_n)$ are independent Bernoulli random variables with $\mathbb{E}\left(U_k\right) = \frac{1}{k+1}$. The total number of correct guesses equals: $$ X = \sum_{k=1}^n U_k $$ The probability generating function of $X$ is easy to find: $$ \mathcal{P}_X\left(z\right) = \mathbb{E}\left(z^X\right) = \prod_{k=1}^n \mathbb{E}\left(z^{U_k}\right) = \prod_{k=1}^n \frac{k+z}{k+1} = \frac{1}{z} \frac{(z)_{n+1}}{(n+1)!} = \frac{1}{(n+1)!} \sum_{k=0}^n z^k s(n+1,k+1) $$ where $(z)_{n+1}$ denotes the rising factorial and $s(n,m)$ denotes the unsigned Stirling numbers of the first kind. Thus: $$ \mathbb{P}\left(X=m\right) = \frac{s(n+1,m+1)}{(n+1)!} [ 0 \leqslant m \leqslant n ] $$ The probability of getting at least $r$ right equals: $$ \mathbb{P}\left(X \geqslant r\right) = \sum_{m=r}^{n} \frac{s(n+1,m+1)}{(n+1)!} $$ This reproduces your result for $n=4$ and $r=3$. In[229]:= With[{n = 4, r = 3}, Sum[Abs[StirlingS1[n + 1, m + 1]]/(n + 1)!, {m, r, n}]] Out[229]= 11/120
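As a cross-check of the Stirling-number formula, $\mathbb{P}(X\geqslant r)$ can also be computed by brute force over the $2^n$ right/wrong patterns (a small Python sketch using exact arithmetic):

```python
from fractions import Fraction
from itertools import product

def prob_at_least(n, r):
    # Question k is guessed correctly with probability 1/(k+1), independently.
    total = Fraction(0)
    for pattern in product([False, True], repeat=n):
        if sum(pattern) >= r:
            p = Fraction(1)
            for k, correct in enumerate(pattern, start=1):
                p *= Fraction(1, k + 1) if correct else Fraction(k, k + 1)
            total += p
    return total

print(prob_at_least(4, 3))  # 11/120, matching the formula above
```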
Recursion for Finding Expectation (Somewhat Lengthy) Preface: Ever since I read the brilliant answer by Mike Spivey I have been on a mission for re-solving all my probability questions with it when possible. I tried solving the Coupon Collector problem using Recursion which the community assisted on another question of mine. Now, I think I have come close to completely understanding the way of using recursion. But..... Question: This is from Stochastic Processes by Sheldon Ross (Page 49, Question 1.14). The question is: A fair die is continually rolled until an even number has appeared on 10 distinct rolls. Let $X_i$ denote the number of rolls that land on side $i$. Determine : * *$E[X_1]$ *$E[X_2]$ *PMF of $X_1$ *PMF of $X_2$ My Attempt: Building on my previous question, I begin: Let $N$ denote the total number of throws (Random Variable) and let $Z_{i}$ denote the result of the $i^{th}$ throw. Then: \begin{eqnarray*} E(X_{1}) & = & E\left(\sum_{i=1}^{N}1_{Z_{i}=1}\right)\\ & = & E\left[E\left(\sum_{i=1}^{N}1_{Z_{i}=1}|N\right)\right]\\ E(X_{1}|N) & = & E(1_{Z_{1}=1}+1_{Z_{2}=1}+\cdots+1_{z_{N}=1})\\ & = & \frac{N-10}{3}\\ E(X_{1}) & = & \frac{E(N)-10}{3} \end{eqnarray*} To Find : $E(N)$ Let $W_{i}$ be the waiting time for the $i^{th}$ distinct roll of an even number. Then: $$E(N)=\sum_{i=1}^{10}E(W_{i})$$ Now, \begin{eqnarray*} E(W_{i}) & = & \frac{1}{2}(1)+\frac{1}{2}(1+E(W_{i}))\\ E(W_{i}) & = & 1+\frac{E(W_{i})}{2}\\ \implies E(W_{i}) & = & 2\\ \therefore E(N) & = & \sum_{i=1}^{10}2\\ & = & 20\\ \therefore E(X_{1}) & = & \frac{10}{3}\\ & & \blacksquare \end{eqnarray*} The exact same procedure can be followed for $E(X_2)$ with the same answer. The answer matches the one given in the book. I am confused how to go from here to get the PMFs. Note : If possible, please provide me an extension to this answer for finding the PMFs rather than a completely different method. The book has the answer at the back using a different method. I am not interested in an answer as much as I am interested in knowing how to continue this attempt to get the PMFs.
The main idea is to use probability generating functions. (If you don't know what that means, this will be explained later on in the solution) We solve the problem in general, so replace $10$ by any non-negative integer $a$. Let $p_{k, a}(i)$ be the probability of getting $k$ rolls with face $i$ when a fair dice is continually rolled until an even number has appeared on $a$ distinct rolls. In relation to your problem, when $a=10$, we have $p_{k, 10}(i)=P(X_i=k)$. To start off, note that $p_{-1, a}(i)=0$ (You can't have $-1$ rolls), $$p_{k, 0}(i)=\begin{cases} 1 & \text{if} \, k=0 \\ 0 & \text{if} \, k \geq 1 \end{cases}$$ (If you continually roll a fair dice until an even number has appeared on $0$ distinct rolls, then you must have $0$ rolls for all faces since you don't roll at all.) Now we have 2 recurrence relations: $p_{k, a}(1)=\frac{1}{6}p_{k-1, a}(1)+\frac{1}{3}p_{k, a}(1)+\frac{1}{2}p_{k, a-1}(1)$ and $p_{k, a}(2)=\frac{1}{6}p_{k-1, a-1}(2)+\frac{1}{3}p_{k, a-1}(2)+\frac{1}{2}p_{k, a}(2)$. Simplifying, we get $p_{k, a}(1)=\frac{1}{4}p_{k-1, a}(1)+\frac{3}{4}p_{k, a-1}(1)$ and $p_{k, a}(2)=\frac{1}{3}p_{k-1, a-1}(2)+\frac{2}{3}p_{k, a-1}(2)$. Time to bring in the probability generating functions. Define $f_a(x)=\sum\limits_{k=0}^{\infty}{p_{k, a}(1)x^k}$, $g_a(x)=\sum\limits_{k=0}^{\infty}{p_{k, a}(2)x^k}$. Basically, the coefficient of $x^k$ in $f_a(x)$ is the probability that you have $k$ rolls of $1$. You can think of it (using your notation) as $f_{10}(x)=E(x^{X_1})$ (and similarly for $g_a(x)$) We easily see that $f_0(x)=g_0(x)=1$. Multiplying the first recurrence relation by $x^k$ and summing from $k=0$ to $\infty$ gives $$\sum\limits_{k=0}^{\infty}{p_{k, a}(1)x^k}=\frac{1}{4}\sum\limits_{k=0}^{\infty}{p_{k-1, a}(1)x^k}+\frac{3}{4}\sum\limits_{k=0}^{\infty}{p_{k, a-1}(1)x^k}$$ $$f_a(x)=\frac{1}{4}xf_a(x)+\frac{3}{4}f_{a-1}(x)$$ $$f_a(x)=\frac{3}{4-x}f_{a-1}(x)$$ $$f_a(x)=\left(\frac{3}{4-x}\right)^af_0(x)=\left(\frac{3}{4-x}\right)^a$$ The coefficient of $x^k$ in the expansion of $f_a(x)$ is just $\left(\frac{3}{4}\right)^a\frac{1}{4^k}\binom{k+a-1}{k}$. In particular, when $a=10$, the PMF $F_1(x)$ of $X_1$ is $$F_1(k)=P(X_1=k)=\frac{3^{10}}{4^{k+10}}\binom{k+9}{k}$$ Doing the same to the 2nd second recurrence gives $$g_a(x)=\left(\frac{1}{3}x+\frac{2}{3}\right)g_{a-1}(x)$$ $$g_a(x)=\left(\frac{1}{3}x+\frac{2}{3}\right)^ag_0(x)=\left(\frac{1}{3}x+\frac{2}{3}\right)^a$$ The coefficient of $x^k$ in the expansion of $g_a(x)$ is just $\frac{1}{3^a}2^{a-k}\binom{a}{k}$. In particular, when $a=10$, the PMF $F_2(x)$ is $$F_2(k)=P(X_2=k)=\frac{2^{10-k}}{3^{10}}\binom{10}{k}$$ P.S. It is now a trivial matter to calculate expectation, by differentiating the probability generating function and then evaluating at $x=1$: $$E(X_1)=f_{10}'(1)=\frac{10}{3}, E(X_2)=g_{10}'(1)=\frac{10}{3}$$
Is the diagonal set a measurable rectangle? Let $\Sigma$ denotes the Borel $\sigma$-algebra of $\mathbb{R}$ and $\Delta=\{(x,y)\in\mathbb{R}^2: x=y\}$. I am trying to clarity the definitions of $\Sigma\times\Sigma$ (the sets which contains all measurable rectangles) and $\Sigma\otimes\Sigma$ (the $\sigma$-algebra generated by the collection of all measurable rectangles). My question is (1) does $\Delta$ belong to $\Sigma\times\Sigma$? (2) does $\Delta$ belong to $\Sigma\otimes\Sigma$? I am thinking that (1) would be no (since a measurable rectangle can be arbitrary measurable sets which are not required to be intervals?) and (2) would be yes (can we write $\Delta$ like countable unions of some open intervals? I cannot find a one at time).
Let $x\neq y$. If a measurable rectangle $A\times B$ contains both $(x,x)$ and $(y,y)$, then $A$ must contain both $x$ and $y$, and so must $B$. But then $(x,y)\in A\times B$, and $(x,y)\notin\Delta$. Hence $\Delta$, which contains more than one point of the diagonal, cannot be a measurable rectangle, which answers (1) in the negative.
Combinatorial meaning of an identity involving factorials While solving (successfully!) problem 24 in projectEuler I was doodling around and discovered the following identity: $$1+2\times2!+3\times3!+\dots+N\times N!=\sum_{k=1}^{k=N} k\times k!=(N+1)!-1$$ While this is very easy to prove, I couldn't find a nice and simple combinatorial way to interpret this identity*. Any ideas? *That is, I do have a combinatorial interpretation - that's how I got to this identity - but it's not as simple as I'd like.
The number of ways you can sort a set of consecutive numbers starting from $1$ and none of which is larger than $N$ and then paint one of them blue is $(N+1)!-1$.
Show that the ideal of all polynomials of degree at least 5 in $\mathbb Q[x]$ is not prime Let $I$ be the subset of $\mathbb{Q}[x]$ that consists of all the polynomials whose first five terms are 0. I've proven that $I$ is an ideal (any polynomial multiplied by a polynomial in $I$ must be at least degree 5), but I'm unsure to how to prove that it is not a prime ideal. My intuition says that its not, because we can't use $(1)$ or $(x)$ as generators. I know that $I$ is a prime ideal $\iff$ $R/I$ is an integral domain. Again, I'm a little confused on how represent $\mathbb{Q}[x]/I$
Hint $\ $ For any prime ideal $\rm\,P\!:\,\ x^n\in P\:\Rightarrow\:x\in P.\:$ Thus $\rm\ x^5 \in I\,$ but $\rm\ x\not\in I\ \Rightarrow\ I\,$ is not prime. Equivalently, $\rm\, R\ mod\ P\,$ has a nilpotent ($\Rightarrow$ zero-divisor): $\rm\, x^5\equiv 0,\ x\not\equiv 0,\,$ so it is not a domain. Remark $\ $ Generally prime ideals can be generated by irreducible elements (in any domain where nonunits factor into irreducibles), since one can replace any reducible generator by some factor, then iterate till all generators are irreducible. In particular, in UFDs, where irreducibles are prime, prime ideals can be generated by prime elements. This property characterizes UFDs. Well-known is Kaplansky's case: a domain is a UFD iff every prime ideal $\!\ne\! 0$ contains a prime $\!\ne\! 0.$
Calculating $\lim_{x\to\frac{\pi}{2}}(\sin x)^{\tan x}$ Please help me calculate: $\lim_{x\to\frac{\pi}{2}}(\sin x)^{\tan x}$
Take $y=(\sin x)^{\tan x}$. Taking log on both sides we have $\log y=\tan x\log(\sin x)=\frac{\log(\sin x)}{\cot x}$. Now as $x\to \pi/2$, $\log(\sin x)\to 0$ and $\cot x\to 0$, so you can use L'Hospital's Rule: $$\lim_{x\to \pi/2}\frac{\log(\sin x)}{\cot x}=\lim_{x\to \pi/2}\frac{\cos x/\sin x}{-\csc^2 x}=\lim_{x\to \pi/2}\left(-\sin x\cos x\right)=0.$$ $$\Rightarrow \log y\to 0, \text{ as } x\to \pi/2$$ $$\Rightarrow y\to e^0=1, \text{ as } x\to \pi/2.$$
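A numeric probe of the two-sided limit, with illustrative values only (a throwaway Python sketch):

```python
import math

# (sin x)^(tan x) approaches 1 as x -> pi/2 from either side
for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    left = math.pi / 2 - eps
    right = math.pi / 2 + eps
    print(eps, math.sin(left) ** math.tan(left), math.sin(right) ** math.tan(right))
```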
Let the matrix $A=[a_{ij}]_{n×n}$ be defined by $a_{ij}=\gcd(i,j )$. How prove that $A$ is invertible, and compute $\det(A)$? Let $A=[a_{ij}]_{n×n}$ be the matrix defined by letting $a_{ij}$ be the rational number such that $$a_{ij}=\gcd(i,j ).$$ How prove that $A$ is invertible, and compute $\det(A)$? thanks in advance
There is a general trick that applies to this case. Assume a matrix $A=(a_{i,j})$ is such that there exists a function $\psi$ such that $$ a_{i,j}=\sum_{k|i,k|j}\psi(k) $$ for all $i,j$. Then $$ \det A=\psi(1)\psi(2)\cdots\psi(n). $$ To see this, consider the matrix $B=(b_{i,j})$ such that $b_{i,j}=1$ if $i|j$ and $b_{i,j}=0$ otherwise. Note that $B$ is upper-triangular with ones on the diagonal, so its determinant is $1$. Now let $C$ be the diagonal matrix whose diagonal is $(\psi(1),\ldots,\psi(n))$. A matrix product computation shows that $$ A=B^tCB\quad\mbox{hence}\quad \det A=(\det B)^2\det C=\psi(1)\cdots\psi(n). $$ Now going back to your question. Consider Euler's totient function $\phi$. It is well-known that $$ m=\sum_{k|m}\phi(k) $$ so $$ a_{i,j}=gcd(i,j)=\sum_{k|gcd(i,j)}\phi(k)=\sum_{k|i,k|j}\phi(k). $$ Applying the general result above, we find: $$ \det A=\phi(1)\phi(2)\cdots\phi(n). $$
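A quick symbolic check of $\det A=\phi(1)\phi(2)\cdots\phi(n)$ for small $n$ (a sketch using SymPy; exact integer arithmetic):

```python
from math import gcd, prod
from sympy import Matrix, totient

# Build the gcd matrix and compare its determinant with the product of totients
for n in range(1, 8):
    A = Matrix(n, n, lambda i, j: gcd(i + 1, j + 1))
    assert A.det() == prod(totient(k) for k in range(1, n + 1))
print("determinant equals the product of totients for n = 1..7")
```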
eigenvalues of a matrix with zero $k^{th}$ power For a matrix $A$, where $A^k=0$, $k\ge1$, need prove that $trace(A)=0$; i.e sum of eigenvalues is zero. How do you approach this problem?
I assume your matrix is an $n\times n$ matrix with, say, complex coefficients. Since $A^k=0$, the spectrum of $A$ is $\{0\}$ (or the characteristic polynomial of $A$ is $X^n$). Next we can find an invertible matrix $P$ such that $PAP^{-1}$ is upper-triangular with $0$'s on the diagonal. So $$ \mbox{trace}A=\mbox{trace}(PAP^{-1})=0 $$ where we use the fact that $\mbox{trace} (AB)=\mbox{trace}(BA)$ in general.
Variance for a product-normal distribution I have two normally distributed random variables (zero mean), and I am interested in the distribution of their product; a normal product distribution. It's a strange distribution involving a delta function. What is the variance of this distribution - and is it finite? I know that $Var(XY)=Var(X)Var(Y)+Var(X)E(Y)^2+Var(Y)E(X)^2$ However I'm running a few simulations and noticing that the sample average of variables following this distribution is not converging to normality - making me guess that its variance is not actually finite.
Hint: We need to know something about the joint distribution. The simplest assumption is that $X$ and $Y$ are independent. Let $W=XY$. We want $E(W^2)-(E(W))^2$. To calculate $E((XY)^2)$, use independence.
convolution square root of uniform distribution I need to find a probability distribution function $f(x)$ such that the convolution $f * f$ is the uniform distribution (between $x=0$ and $x=1$). I would like to generate pairs of numbers with independent identical distributions, so that their sum is uniformly distributed between $0$ and $1$. This can't be something new, and I can search on google for convolution square root but I can't seem to find the right information on probability distributions. Can someone out there point me at the right information?
Assume that $X$ is a random variable with density $f$ and that $f\ast f=\mathbf 1_{[0,1]}$. Note that the function $t\mapsto\mathbb E(\mathrm e^{\mathrm itX})$ is smooth since $X$ is bounded (and in fact, $X$ is in $[0,\frac12]$ almost surely). Then, for every real number $t$, $$ \mathbb E(\mathrm e^{\mathrm itX})^2=\frac{\mathrm e^{\mathrm it}-1}{\mathrm it}. $$ Differentiating this with respect to $t$ yields a formula for $\mathbb E(X\mathrm e^{\mathrm itX})\mathbb E(\mathrm e^{\mathrm itX})$. Squaring this product and replacing $\mathbb E(\mathrm e^{\mathrm itX})^2$ by its value yields $$ \mathbb E(X\mathrm e^{\mathrm itX})^2=\frac{\mathrm i(1-\mathrm e^{\mathrm it}+\mathrm it\mathrm e^{\mathrm it})}{4t^3(\mathrm e^{\mathrm it}-1)}. $$ The RHS diverges when $t=2\pi$, hence such a random variable $X$ cannot exist.
A question about Elementary Row Operation: Add a Multiple of a Row to Another Row The task is that I have to prove the following statement, using Linear Algebra arguments: Given a matrix A, then: To perform an ERO (Elementary Row Operation) type 3 : (c * R_i) + R_k --> R_k (i.e. replace a row k by adding c-multiple of row i to row k) is the same as replacing a row k by subtracting a multiple of some row from another row I just don't know how to formally prove this statement, like how the arguments should look like. By some inspections, I'm pretty sure that doing (c * R_i) + R_k --> R_k is the same as doing: R_i - (d * R_k) --> R_k where d can be positive or negative, but it must have opposite sign with c. I use an example as follows: A = (2 1 3, 4 3 1) Then if I want to add row 2 to row 1, say, instead of doing (1 * 4) + 2 --> 6 and so on, I do 4 - [ (-1) * 2 ] --> 6 instead. Thus c = 1 and d = -1 in this case. That's why I conclude that the coefficient d should always be the opposite sign with coefficient c. Would someone help me on how to construct a formal proof of the statement? I know how to go about the examples, but I understand examples are never proofs >_< Thank you very much ^_^
Note: If $c$ is some non-zero scalar, then * *adding $cR_i$ to $R_k$ and replacing the original $R_{k\text{ old}}$ by $(R_k + cR_i)$ is the same as * *subtracting $−c⋅R_i$ from $R_k$ and replacing the old $R_k$ by the result $R_k - (-cR_i)$. Since...$R_k + cR_i = R_k - (-cR_i)$
Using sufficiency to prove and disprove completeness of a distribution Let $X_1, \dots ,X_n$ be a random sample of size $n$ from the continuous distribution with pdf $f_X(x\mid\theta) = \dfrac{2\theta^2}{x^3} I(x)_{(\theta;\infty)}$ where $\theta \in \Theta = (0, \infty)$. (1) Show that $X_{(1)}$ is sufficient for $\theta$. (2) Show directly that the pdf for $X_{(1)}$ is $f_{X(1)}(x\mid\theta) = \dfrac{2n\theta^{2n}}{x^{2n+1}} I(x)(\theta,\infty)$. (3) When $\Theta = (0, \infty)$, the probability distribution of $X_{(1)}$ is complete. In this case, find the best unbiased estimator for $\theta$. (4) Suppose that $\Theta = (0; 1]$. Show that the probability distribution of $X_{(1)}$ is not complete in this setting by considering the function $g(X_{(1)}) = \Big [ X_{(1)} - \frac{2n}{2n-1} \Bigr] I(X_{(1)})_{(1,\infty)}$. For (1), this was pretty easy to show using Factorization Theorem. For (2), I think I am integrating my pdf wrong because I can't seem to arrive at the answer. For (3), I am trying to use a Theorem that states "If T is a complete and sufficient statistic for $\theta$ and $\phi(T)$ is any estimator based only on T, then $\phi(T)$ is the unique best unbiased estimator of its expected values", but I can't seem to simplify the expected value to get $\theta$. For (4), I am getting stuck trying to show $P(g(X_{(1)})=0) = 1$ using the given function. Any assistance is greatly appreciated.
For Part (1), great! For Part (2), I'm unsure about that one. For Part (3), note that the original distribution $f_{X}(x\mid\theta) = \frac{2\theta^{2}}{x^{3}}\,I_{(\theta,\infty)}(x)$ is a famous distribution in disguise: the Pareto distribution, which has two parameters $\alpha$ and $\beta$, with shape $\beta = 2$ and scale $\alpha = \theta$ here. Once you know the right distribution, finding the expected value needed for the best unbiased estimator is easier than having to integrate from scratch. (Side note: may I ask what you got for Part (3) when you integrated, if you did integrate?) For Part (4), what you have to show is that $E_\theta\left[g(X_{(1)})\right]=0$ for every $\theta\in(0,1]$, even though $g(X_{(1)})$ is not zero with probability one; exhibiting such a nonzero unbiased estimator of zero is exactly what it means for the family not to be complete.
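A sketch of the computation Part (3) needs, taking the Part (2) pdf $f_{X_{(1)}}(x\mid\theta)=\frac{2n\theta^{2n}}{x^{2n+1}}I_{(\theta,\infty)}(x)$ as given: $$E_\theta\left[X_{(1)}\right]=\int_\theta^\infty x\cdot\frac{2n\theta^{2n}}{x^{2n+1}}\,dx=2n\theta^{2n}\int_\theta^\infty x^{-2n}\,dx=2n\theta^{2n}\cdot\frac{\theta^{1-2n}}{2n-1}=\frac{2n}{2n-1}\,\theta,$$ so $\frac{2n-1}{2n}\,X_{(1)}$ is unbiased for $\theta$, and since $X_{(1)}$ is complete and sufficient, it is the best unbiased estimator.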
Helix's arc length I'm reading this. The relevant definitions are that of parametrized curve which is at the beginning of page 1 and the definition of arclength of a curve, which is in the first half of page 6. Also the author mentions the helix at the bottom of page 3. On exercise $1.1.2.$ (page 8) I'm asked to find the arc length of the helix: $\alpha (t)=(a\cos (t), a\sin (t), bt)$, but the author don't say what the domain of $\alpha$ is. How am I supposed to go about this? Usually when the domain isn't specified isn't the reader supposed to assume the domain is a maximal set? In that case the domain would be $\Bbb R$ and the arc length wouldn't be defined as the integral wouldn't be finite.
There are a number of ways of approaching this problem, and yes, you are correct: without the domain specified there is a dilemma here. You can give an answer for one complete cycle, $t\in[0,2\pi]$. Depending on the context you may find it more convenient to measure arc length as a function of distance along the $z$-axis, a sort of ratio: units of length along the arc per unit of elevation. Thirdly, you can write the arc length not as a numeric answer but as a function of the endpoints $t_0$ and $t_1$ of an arbitrary parameter interval (note that $a$ and $b$ are already taken as the constants in the parametrization). Personally, I recommend the third and last option. Expressing the answer as a function of the endpoints is the best you can do without making assumptions about the domain in question, and it leaves a solution that can be applied and reused whenever endpoints are given.
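For concreteness, here is the arc-length computation over an arbitrary parameter interval $[t_0,t_1]$, using $\alpha'(t)=(-a\sin t,\ a\cos t,\ b)$: $$L=\int_{t_0}^{t_1}\|\alpha'(t)\|\,dt=\int_{t_0}^{t_1}\sqrt{a^2\sin^2 t+a^2\cos^2 t+b^2}\,dt=\sqrt{a^2+b^2}\,(t_1-t_0),$$ so one full turn ($t_1-t_0=2\pi$) has length $2\pi\sqrt{a^2+b^2}$.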
example of morphism of affine schemes Let $X={\rm Spec}~k[x,y,t]/<yt-x^2>$ and let $Y={\rm Spec}~ k[t]$. Let $f:X \rightarrow Y$ be the morphism determined by $k[t] \rightarrow k[x,y,t]/<yt-x^2>$. Is $f$ surjective? If it is, why?
I'm assuming your map of rings comes from the natural inclusion $i:k[t]\rightarrow k[x,y,t]\rightarrow k[x,y,t]/<yt-x^2>=A$. A nonzero prime of $k[t]$ is of the form $(F(t))$ where $F(t)$ is an irreducible polynomial over $k$. Show that $I=F(t)A$ is not the whole ring $A$ (this amounts to showing that $yt-x^2$ and $F(t)$ do not generate $k[x,y,t]$). In fact it is even prime, but we won't need that; we just need the fact that $I$ is contained in a prime ideal $P\in \operatorname{Spec}(A)$. Then $i^{-1}(P)$ is a prime of $k[t]$ that contains $F(t)$ and hence equals $(F(t))$. Finally, the zero ideal of $k[t]$ is also hit: $yt-x^2$ is irreducible, so $A$ is a domain, $(0)$ is a prime of $A$, and since $i$ is injective we have $i^{-1}((0))=(0)$. Hence $f$ is surjective.
Does the series $\sum\limits_{n=1}^\infty \frac{1}{n\sqrt[n]{n}}$ converge? Does the following series converge? $$\sum_{n=1}^\infty \frac{1}{n\sqrt[n]{n}}$$ As $$\frac{1}{n\sqrt[n]{n}}=\frac{1}{n^{1+\frac{1}{n}}},$$ I was thinking that you may consider this as a p-series with $p>1$. But I'm not sure if this is correct, as with p-series, p is a fixed number, right ? On the other hand, $1+\frac{1}{n}>1$ for all $n$. Any hints ?
Limit comparison test: $$\frac{\frac{1}{n\sqrt[n]n}}{\frac{1}{n}}=\frac{1}{\sqrt[n]n}\xrightarrow[n\to\infty]{}1$$ So that both $$\sum_{n=1}^\infty\frac{1}{n\sqrt[n] n}\,\,\,\text{and}\,\,\,\sum_{n=1}^\infty\frac{1}{n}$$ converge or both diverge. Since the harmonic series diverges, so does the given series.
Let lim $a_n=0$ and $s_N=\sum_{n=1}^{N}a_n$. Show that $\sum a_n$ converges when $\lim_{N\to\infty}s_Ns_{N+1}=p$ for a given $p>0$. Let lim $a_n=0$ and $s_N=\sum_{n=1}^{N}a_n$. Show that $\sum a_n$ converges when $\lim_{N\to\infty}s_Ns_{N+1}=p$ for a given $p>0$. I've no idea how to even start. Should I try to prove that $s_N$ is bounded ?
Put $s_n:=\epsilon_n|s_n|$ with $\epsilon_n\in\{-1,1\}$. Then from $$\epsilon_n\epsilon_{n+1}|s_n|\>|s_{n+1}|=s_n\>s_{n+1}=:p_n\to p>0\qquad(n\to\infty)$$ it follows that $\epsilon_n=\epsilon_{n+1}$ for $n>n_0$. Assume $\epsilon_n=1$ for all $n> n_0$, the case $\epsilon_n=-1$ being similar. The equation $$s_n(s_n+a_{n+1})=s_ns_{n+1}=p_n$$ implies that for all $n$ the quantities $s_n$, $a_{n+1}$, and $p_n$ are related by $$s_n={1\over2}\left(-a_{n+1}\pm\sqrt{a_{n+1}^2 +4p_n}\right)\ .$$ Since $s_n\geq0$ $\ (n>n_0)$, $\ a_{n+1}\to 0$, $\ p_n\to p>0$ it follows that necessarily $$s_n={1\over2}\left(-a_{n+1}+\sqrt{a_{n+1}^2 +4p_n}\right)\qquad(n>n_1)\ ,$$ and this implies $\lim_{n\to\infty} s_n=\sqrt{p}$.
Prove the determinant of this matrix We have an $n\times n$ square matrix $\left(a_{i,j}\right)_{1\leq i\leq n, \ 1\leq j\leq n}$ such that all elements on main diagonal are zero, whereas the other elements are defined as follows: $$a_{i,j}=\begin{cases} 1,&\text{if } i+j \text{ belongs to the Fibonacci numbers,}\\ 0,&\text{if } i+j \text{ does not belong to the Fibonacci numbers}.\\ \end{cases}$$ We know that when $n$ is odd, the determinant of this matrix is zero. Now prove that when $n$ is even, the determinant of this matrix is $0$ or $1$ or $-1$. (Use induction or other methods.) Also posted on MO.
This is just a partial answer, too long to fit in a comment, written in order to start collecting ideas. We have: $$\det A=\sum_{\sigma\in S_n}\operatorname{sign}(\sigma)\prod_{i=1}^n a_{i,\sigma(i)},$$ hence the contribution of every permutation in $S_n$ belongs to $\{-1,0,1\}$. In particular, the contribution of a permutation differs from zero iff $i+\sigma(i)$ belongs to the Fibonacci sequence for every $i\in[1,n]$. If we consider the cycle decomposition of such a $\sigma$: $$\sigma = (n_1,\ldots, n_j)\cdot\ldots\cdot(m_1,\ldots, m_k)$$ this condition gives that $n_1+n_2,\ldots,n_j+n_1,\ldots,m_1+m_2,\ldots,m_k+m_1$ belong to the Fibonacci sequence, so, if the contribution of $\sigma$ differs from zero, the contribution of $\sigma^{-1}$ is the same. However, not so many permutations fulfill the condition. For instance, the only possible fixed points of the contributing permutations are half of the even Fibonacci numbers, hence numbers of the form $\frac{1}{2}F_{3k}$: $1,4,17,72,\ldots$. Moreover, the elements of $[1,n]$ can be arranged in a graph $\Gamma$ in which the neighbourhood of a node with label $m$ is made of the integers in $[1,n]$ whose sum with $m$ is a Fibonacci number, i.e. all the possible images $\sigma(m)$ for a contributing permutation. For instance, for $n=5$ we get (apart from the self-loops in $1$ and $4$) an acyclic graph. The only contributing permutation is $\sigma=(1 2)(3 5)(4)$, hence $|\det A|=1$. When $n=6$ or $n=7$ the neighbourhood of $5$ is still made of only one element. When $n=7$ the contributing permutations are of the form $\sigma=(4)(3 5)\tau$, where $\tau\in\{(1,2,6,7),(1,7,6,2),(1,2)(6,7),(1 7)(6 2)\}$, hence $\det A=0$. In general, the neighbourhood of the greatest Fibonacci number $F_k\leq n$ is made of $F_{k-1}$ only, hence $F_k$ is always paired with $F_{k-1}$ in a transposition of every contributing permutation. Now I believe that the conjecture $\det A\in\{-1,0,1\}$ is heavily correlated with the structure of the cycles in $\Gamma_\mathbb{N}$, a graph over $\mathbb{N}$ where two integers are connected when their sum is a Fibonacci number. There are many trivial cycles in $\Gamma_\mathbb{N}$: $$(k,F_m-k),\quad (k,F_m-k,F_{m+1}+k,F_{m+2}-k),\quad (k,F_m-k,F_{m+1}+k,F_{m+2}-k,F_{m+3}+k,F_{m+4}-k),\ldots$$ and my claim is that all the cycles have even length, and all the cycles are of the given type. Given that $F$ is the set of all the Fibonacci numbers, it is straightforward to prove that the only elements of $F-F$ represented twice are the Fibonacci numbers, hence there are no cycles of length $3$ in $\Gamma_\mathbb{N}$.
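To get a feel for the conjecture, here is a small brute-force check (a sketch only; it builds the matrix exactly as defined in the question — zero diagonal, $a_{ij}=1$ iff $i+j$ is a Fibonacci number — and assumes `sympy` is available for exact integer determinants):

```python
from sympy import Matrix

def fib_set(limit):
    # all Fibonacci numbers up to `limit`
    fibs, a, b = set(), 1, 1
    while a <= limit:
        fibs.add(a)
        a, b = b, a + b
    return fibs

def det_of_fib_matrix(n):
    fibs = fib_set(2 * n)
    A = [[0 if i == j else int(i + j in fibs) for j in range(1, n + 1)]
         for i in range(1, n + 1)]
    return Matrix(A).det()

for n in range(2, 21, 2):   # even sizes only
    print(n, det_of_fib_matrix(n))   # per the problem statement, each value should be -1, 0 or 1
```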
Proving that for any odd integer:$\left\lfloor \frac{n^2}{4} \right\rfloor = \frac{(n-1)(n+1)}{4}$ I'm trying to figure out how to prove that for any odd integer, the floor of: $$\left\lfloor \frac{n^2}{4} \right\rfloor = \frac{(n-1)(n+1)}{4}$$ Any help is appreciated to construct this proof! Thanks guys.
Take $n=2k+1$. Then $$\left\lfloor \frac{n^2}{4}\right\rfloor=\left\lfloor \frac{4k^2+4k+1}{4}\right\rfloor=\left\lfloor k^2+k+\tfrac14\right\rfloor=k^2+k,$$ while $$\frac{(n-1)(n+1)}{4}=\frac{n^2-1}{4}=\frac{4k^2+4k}{4}=k^2+k=\left\lfloor \frac{n^2}{4}\right\rfloor.$$
How to find the cosine between two vectors? I have a task in linear algebra. Condition: we have a triangle with vertices A(-4,2); B(-1,6); C(8,-3). How to find the cosine of the angle between the vectors BA and BC? Please help :( What is the solution for this task?
The dot product gets you just what you want. The dot product of two vectors satisfies $\vec u \cdot \vec v = |\vec u||\vec v|\cos \theta$ where $\theta$ is the angle between the vectors. So $\cos \theta =\frac{\vec u \cdot \vec v}{|\vec u||\vec v|}$. First form the vectors from the vertices: $\vec {BA}=A-B=(-3,-4)$ and $\vec {BC}=C-B=(9,-9)$. The dot product is calculated by summing the products of the components: $\vec {BA} \cdot \vec {BC} = (-3)\cdot 9 + (-4)\cdot(-9)=-27+36=9$. With $|\vec{BA}|=5$ and $|\vec{BC}|=9\sqrt2$, this gives $\cos\theta=\frac{9}{45\sqrt2}=\frac{\sqrt2}{10}$.
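A short numeric check of that value (a sketch, assuming numpy is available):

```python
import numpy as np

A, B, C = np.array([-4, 2]), np.array([-1, 6]), np.array([8, -3])
BA, BC = A - B, C - B

cos_theta = BA @ BC / (np.linalg.norm(BA) * np.linalg.norm(BC))
print(cos_theta, np.sqrt(2) / 10)   # both print about 0.1414
```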
Construction of Hadamard Matrices of Order $n!$ I'm trying to get a handle on Hadamard matrices of order $n!$, with $n>3$. Paley's construction says that there is a Hadamard matrix for $q+1$, with $q$ being a prime power. Since $$ n!-1 \bmod 4 = 3 $$ construction 1 has to be chosen: If $q$ is congruent to $3 (\bmod 4)$ [and $Q$ is the corresponding Jacobsthal matrix] then $$ H=I+\begin{bmatrix} 0 & j^T\\ -j & Q\end{bmatrix} $$ is a Hadamard matrix of size $q + 1$. Here $j$ is the all-1 column vector of length $q$ and $I$ is the $(q+1)×(q+1)$ identity matrix. The matrix $H$ is a skew Hadamard matrix, which means it satisfies $H+H^T = 2I$. The problem is that the number of primes among $n!-1$ is restricted (see A002982). I checked the values of $n!-1$ given by Wolfram|Alpha w.r.t. being a prime power, without success, so Paley's construction won't work for all $n$. Is there a general way to get the matrices, or is it case by case different? I haven't yet looked into Williamson's construction nor Turyn type constructions. Would it be worth a closer look (sure it would, but) concerning my problem? Where can I find their constructions? PS for the interested reader: I've found a nice compilation of Hadamard matrices here: http://neilsloane.com/hadamard/
I don't think a general construction for Hadamard matrices of order $n!$ is known. The knowledge about general construction methods for Hadamard matrices is quite sparse; the basic ones (see also the Wikipedia article) are: 1) If $n$ is a multiple of $4$ such that $n-1$ is a prime power or $n/2 - 1$ is a prime power $\equiv 1\pmod{4}$, then there exists a Hadamard Matrix of order $n$ (Paley). 2) If $n$ is a multiple of $4$ such that there exists a Hadamard Matrix of order $n/2$, then there exists a Hadamard Matrix of order $n$ (Sylvester). The Hadamard conjecture states that for all multiples $n$ of $4$ there is a Hadamard matrix of order $n$. The above constructions do not cover all these $n$; the smallest case not covered is $n = 92$. There are more specialized constructions and a few computer constructions, such that the smallest open case is $n = 668$ nowadays. EDIT: I have just checked that for $n\in\{13,26,44,52,63,67,70,77,85\}$ a Hadamard matrix of order $n!$ cannot be constructed only by a combination of the Paley/Sylvester constructions above. So in these cases, one has to check more specialized constructions like Williamson's.
If a function is uniformly continuous in $(a,b)$ can I say that its image is bounded? If a function is uniformly continuous in $(a,b)$ can I say that its image is bounded? ($a$ and $b$ being finite numbers). I tried proving and disproving it. Couldn't find an example for a non-bounded image. Is there any basic proof or counter example for any of the cases? Thanks a million!
Hint: Prove first that a uniformly continuous function on an open interval can be extended to a continuous function on the closure of the interval.
Show that open segment $(a,b)$, close segment $[a,b]$ have the same cardinality as $\mathbb{R}$ a) Show that any open segment $(a,b)$ with $a<b$ has the same cardinality as $\mathbb{R}$. b) Show that any closed segment $[a,b]$ with $a<b$ has the same cardinality as $\mathbb{R}$. Thoughts: Since $a<b$, $a,b$ are two distinct real number on $\mathbb{R}$, we need to show it is 1 to 1 bijection functions which map between $(a,b)$ and $\mathbb{R}$, $[a,b]$ and $\mathbb{R}$. But we know $\mathbb{R}$ is uncountable, so we show the same for $(a,b)$ and $[a,b]$? and how can I make use of the Cantor-Schroder-Bernstein Theorem? The one with $|A|\le|B|$ and $|B|\le|A|$, then $|A|=|B|$? thanks!!
Consider the function $f:(0,1)\to \mathbb{R}$ defined as $$f(x)=\frac{1}{x}-\frac{1}{1-x}.$$ Prove that $f$ is a bijective function (it is continuous and strictly decreasing, with $f(x)\to+\infty$ as $x\to 0^+$ and $f(x)\to-\infty$ as $x\to 1^-$), so $(0,1)$ has the same cardinality as $\mathbb{R}$. Now, by previous posts, $(0,1)$ and $[0,1]$ have the same cardinality. Consider the function $g:[0,1]\to[a,b]$, defined as $$g(x)=({b-a})x+a.$$ Prove that $g$ is a bijective function to conclude that $[0,1]$ and $[a,b]$ have the same cardinality as $\mathbb{R}$.
Intuition for scale of the largest eigenvalue of symmetric Gaussian matrix Let $X$ be $n \times n$ matrix whose matrix elements are independent identically distributed normal variables with zero mean and variance of $\frac{1}{2}$. Then $$ A = \frac{1}{2} \left(X + X^\top\right) $$ is a random matrix from GOE ensemble with weight $\exp(-\operatorname{Tr}(A^2))$. Let $\lambda_\max(n)$ denote its largest eigenvalue. The soft edge limit asserts convergence of $\left(\lambda_\max(n)-\sqrt{n}\right) n^{1/6}$ in distribution as $n$ increases. Q: I am seeking to get an intuition (or better yet, a simple argument) for why the largest eigenvalue scales like $\sqrt{n}$.
The scaling follows from the Wigner semicircle law: the empirical eigenvalue distribution is supported on an interval whose width grows like $\sqrt{n}$, and the largest eigenvalue sticks to the edge of that support. A quick heuristic for the order of magnitude: $\sum_i \lambda_i^2 = \operatorname{Tr}(A^2)=\sum_{i,j}A_{ij}^2$ is a sum of $n^2$ terms of size $O(1)$, so the $n$ eigenvalues are typically of size $\sqrt{n}$, and the largest one is of the same order. Proof of the Wigner semicircle law is outlined in section 2.5 of the review "Orthogonal polynomials ensembles in probability theory" by W. König, Probability Surveys, vol. 2 (2005), pp. 385-447.
Solving Recurrence $T(n) = T(n − 3) + 1/2$; I have to solve the following recurrence. $$\begin{gather} T(n) = T(n − 3) + 1/2\\ T(0) = T(1) = T(2) = 1. \end{gather}$$ I tried solving it using the forward iteration. $$\begin{align} T(3) &= 1 + 1/2\\ T(4) &= 1 + 1/2\\ T(5) &= 1 + 1/2\\ T(6) &= 1 + 1/2 + 1/2 = 2\\ T(7) &= 1 + 1/2 + 1/2 = 2\\ T(8) &= 1 + 1/2 + 1/2 = 2\\ T(9) &= 2 + 1/2 \end{align}$$ I couldn't find any pattern here. Can anyone help?
The generating function is $$g(x)=\sum_{n\ge 0}T(n)x^n = \frac{2-x^3}{2(1+x+x^2)(1-x)^2}$$, which has the partial fraction representation $$g = \frac{2}{3(1-x)} + \frac{1}{6(1-x)^2}+\frac{x+1}{6(1+x+x^2)}$$. The first term contributes $$\frac{2}{3}(1+x+x^2+x^3+\ldots)$$, equivalent to $T(n)=2/3$ the second term contributes $$\frac{1}{6}(1+2x+3x^2+4x^3+\ldots)$$ equivalent to $T(n) = (n+1)/6$, and the third term contributes $$\frac{1}{6}(1-x^2+x^3-x^5+x^6-\ldots)$$ equivalent to $T(n) = 1/6, 0, -1/6$ depending on $n\mod 3$ being 0 or 1 or 2. $$T(n) = \frac{2}{3}+\frac{n+1}{6}+\left\{\begin{array}{ll} 1/6,& n \mod 3=0\\ 0,& n \mod 3=1 \\ -1/6,&n \mod 3 =2\end{array}\right.$$
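A quick check of that closed form against the recurrence (a sketch; the exact rational arithmetic via Python's `fractions` module is my own choice, not part of the answer):

```python
from fractions import Fraction

def T_rec(n, memo={0: Fraction(1), 1: Fraction(1), 2: Fraction(1)}):
    # direct evaluation of T(n) = T(n-3) + 1/2
    if n not in memo:
        memo[n] = T_rec(n - 3) + Fraction(1, 2)
    return memo[n]

def T_closed(n):
    correction = [Fraction(1, 6), Fraction(0), Fraction(-1, 6)][n % 3]
    return Fraction(2, 3) + Fraction(n + 1, 6) + correction

assert all(T_rec(n) == T_closed(n) for n in range(200))
print("closed form matches the recurrence for n = 0..199")
```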
Another trigonometric equation Show that : $$31+8\sqrt{15}=16(1+\cos 6^{\circ})(1+\cos 42^{\circ})(1+\cos 66^{\circ})(1-\cos 78^{\circ})$$
I don't think this is how the problem came into being. But I think this is a legitimate way. $$(1+\cos 6^{\circ})(1+\cos 42^{\circ})(1+\cos 66^{\circ})(1-\cos 78^{\circ})$$ $$=(1+\cos 6^{\circ})(1+\cos 66^{\circ})(1-\cos 78^{\circ})(1+\cos 42^{\circ})$$ $$=(1+\cos 6^{\circ}+\cos 66^{\circ}+\cos 6^{\circ}\cos 66^{\circ})(1+\cos 42^{\circ}-\cos 78^{\circ}-\cos 42^{\circ}\cos 78^{\circ})$$ $$=\{1+2\cos 30^{\circ}\cos 36^{\circ}+\frac12(\cos60^\circ+\cos72^\circ)\} \{1+2\sin 18^{\circ}\sin60^{\circ}-\frac12(\cos36^\circ+\cos120^\circ)\}$$ (Applying $2\cos A\cos B=\cos(A-B)+\cos(A+B),$ $ \cos2C+\cos2D=2\cos(C-D)\cos(C+D)$ and $\cos2C-\cos2D=-2\sin(C-D)\sin(C+D)$ ) Now, $\sin60^{\circ}=\cos 30^{\circ}=\frac{\sqrt3}2,\cos120^\circ=\cos(180-60)^\circ=-\cos60^\circ=-\frac12$. From the standard exact values, $\cos72^\circ=\sin 18^\circ=\frac{\sqrt5-1}4$ and $\cos36^\circ=\frac{\sqrt5+1}4$
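A quick numerical confirmation of the identity itself (just a sanity-check sketch in Python):

```python
import math

def c(deg):
    return math.cos(math.radians(deg))

lhs = 31 + 8 * math.sqrt(15)
rhs = 16 * (1 + c(6)) * (1 + c(42)) * (1 + c(66)) * (1 - c(78))
print(lhs, rhs)   # both print about 61.9838
```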
How can a set be bounded and countably infinite at the same time? There is a theorem in my textbook that states, Let $E$ be a bounded measurable set of real numbers. Suppose there is a bounded countably infinite set of real numbers $\Lambda$ for which the collection of translates of $E$, $\{\lambda + E\}_{\lambda \in \Lambda}$, is disjoint. Then $m(E) = 0$. I'm a little confused about this theorem, because it's saying that a set is bounded and countably infinite at the same time. But if a set is bounded, isn't it supposed to be finite? Thanks in advance
Hint: Consider $\Bbb Q$ intersected with any bounded set, finite or infinite. Since $\Bbb Q$ is countable, the new set is at most countable, and clearly can be made infinite; for example, $[0,1]\cap\Bbb Q$ is bounded and countable.
Continuous function with zero integral Let $f$ be a continuous function on $[a,b]$ ($a<b$), such that $\int_{a}^{b}{f(t)dt}=0$ Show that $\exists c\in[a,b], f(c)=0$.
Let $m=\min\{f(x)\mid x\in[a,b]\}$ and $M=\max\{f(x)\mid x\in[a,b]\}$ (we can attain the minimum and maximum because $f$ is continuous on a closed interval). If $m$ and $M$ are nonzero with the same sign, the integral can't be zero (for example, if both are positive then $f>0$ on $[a,b]$ and the integral is positive). If one of them is zero we are done, and if they have different signs, apply the intermediate value theorem to get a point $c$ with $f(c)=0$.
Proof of the simple approximation lemma a) For the proof of the simple approximation lemma, our textbook says, Let (c,d) be an open, bounded interval that contains the image of E, f(E), and $c=y_0 < y_1 < ... < y_n = d$ be a partition of the closed bounded interval [c,d] such that $y_k - y_{k-1} < \epsilon$ for $1 \leq k \leq n$. Define $I_k = [y_{k-1}, y_k)$ and $E_k = f^{-1}(I_k)$ for $ 1 \leq k \leq n$. Since each $I_k$ is an interval and the function f is measurable, each set $E_k$ is measurable. I was a bit confused about this last sentence. I'm not sure what theorem they are using to say that $E_k$ is measurable because $I_k$ is an interval and f is measurable...
A function $f$ being measurable means that the preimage of every interval — equivalently, of every Borel set $E$ — is a measurable set. Since each $I_k$ is an interval, $E_k = f^{-1}(I_k)$ is measurable essentially by definition (or by the proposition immediately following the definition, depending on how your textbook states it).
Derive a rotation from a 2D rotation matrix I have a rotation 2D rotation matrix. I know that this matrix will always ever only be a rotation matrix. $$\left[ \begin{array}{@{}cc} \cos a & -\sin a \\ \sin a & \cos a \\ \end{array} \right]$$ How can I extract the rotation from this matrix? The less steps, the better, since this will be done on a computer and I don't want it to constantly be doing a lot of computations!
Pick any non-zero vector $v$ and compute the angle between $v$ and $Av$, where $A$ is the matrix above. A simple vector is $e_1 = \binom{1}{0}$, and $Ae_1 = \binom{\cos \alpha}{\sin \alpha} = \binom{A_{11}}{A_{21}}$, hence the angle $\alpha$ can be computed from $\text{atan2}(\sin \alpha, \cos \alpha) = \text{atan2}(A_{21}, A_{11}) $. (Note that $\text{atan2}$ usually takes the $y$-component as the first argument.)
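Since the question mentions doing this on a computer, here is a minimal sketch (Python; the function names are mine):

```python
import math

def rotation_matrix(angle):
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s],
            [s,  c]]

def extract_angle(A):
    # atan2 takes the y-component (sin) first, then the x-component (cos)
    return math.atan2(A[1][0], A[0][0])

angle = 2.3
print(extract_angle(rotation_matrix(angle)))   # prints 2.3 (up to rounding)
```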
How many odd numbers with distinct digits between 1000 and 9999 How many odd numbers with distinct digits are there between 1000 and 9999? [1] I came up with a solution like this. Since we can't know what numbers have been used, in the tens, hundreds and thousands we start counting at the ones. 1s: { 1, 3, 5, 7, 9 }, so 5 initial possibilities 10s: { 0, 1, ... , 9 }, so 10 initial possibilities, 1 taken: 9 left 100s: { 0, 1, ... , 9 }, so 10 initial possibilities, 2 taken: 8 left 1000s: { 1, 2, ... , 9 }, so 9 initial possibilities, 3 taken: 6 left So then we arrive at the following: 5 * 9 * 8 * 6 = 2160 possibilities. I thought this was pretty straightforward. Then I had a glimpse at the solution sheet... And lo, an answer which really doesn't make much sense at first glimpse. Calculate the sum of those odd numbers with distinct digits with no 0's, a 0 in the tens place, or a 0 in the hundreds place. No 0's: 5 choices for the ones place, then 8 · 7 · 6 choices for the other three places; 0 in the tens place: 5 choices for the ones place and 1 choice for the tens place, then 8 · 7 choices for the other two places; 0 in the hundreds place: 5 choices for the ones place and 1 choice for the hundreds place, then 8 · 7 choices for the other two places; (5 · 8 · 7 · 6) + (5 · 1 · 8 · 7) + (5 · 1 · 8 · 7) = 2240; Why are the 0's treated specially? The exercise states it should be an odd number, with distinct digits. I thought I adhered to that proposition.... [1] Exercise 2.7.15 from Applied Combinatorics 2nd edition by Fred S. Roberts and Barry Tesman
Your problem comes when you assume that there were $3$ taken from your last set. It's quite possible that the second or third choice was a $0$, so that you in fact have only eliminated $2$ options from the last set. Instead, choose the last digit first ($5$ choices), then the first digit ($8$ choices, since we can't have picked $0$ for the last digit), then deal with the second and third digit ($8$ and $7$ choices, in some order). That will get you $5\cdot 8\cdot 8\cdot 7=2240$, as desired.
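A brute-force confirmation of the count (a quick sketch):

```python
count = sum(
    1
    for n in range(1000, 10000)
    if n % 2 == 1 and len(set(str(n))) == 4   # odd, all four digits distinct
)
print(count)   # 2240
```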
Fractions in binary? How would you write a fraction in binary numbers; for example, $1/4$, which is $.25$? I know how to write binary of whole numbers such as 6, being $110$, but how would one write fractions?
$1/4=0\cdot(1/2)^0+0\cdot(1/2)^1+1\cdot(1/2)^2=0.01$ in base $2$, you just go in reverse with powers $(1/2)^n, n=0,1,2,...$
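For converting an arbitrary fraction, the usual trick is to repeatedly double the fractional part and read off the integer bits. Here is a small sketch (the function name and the 20-bit cutoff are my own choices):

```python
def frac_to_binary(x, max_bits=20):
    # x is assumed to satisfy 0 <= x < 1
    bits = []
    while x and len(bits) < max_bits:
        x *= 2
        bit = int(x)          # 1 if doubling pushed us past 1, else 0
        bits.append(str(bit))
        x -= bit
    return "0." + ("".join(bits) or "0")

print(frac_to_binary(0.25))    # 0.01
print(frac_to_binary(0.625))   # 0.101
print(frac_to_binary(0.1))     # non-terminating: 0.0001100110011...
```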
Combination / Permutation Question There are 3 bags and 5 different marbles. In how many ways can the marbles be put into the bags? (disclosure - the question is one of many in a teacher prep study guide. I am taking the qualification exam for teaching middle school math next month.) The answer and explanation for the above was given as 3^5 = 243. Earlier on, the book offered the distinction between say "3 kids of a group of ten are being chosen to play...." and the more specific "3 kids of 10 are being chosen for pitcher/catcher/first base" - the former being a lower number since it's not for specific positions. With the marble bags, it seems the author counts all in bag one separate from all in bag two, or bag three. I'm okay with a wrong answer, but asking - given a test environment where I'm not going to be able to ask for any clarification, what in the wording above leads to that conclusion that the bags must be treated differently? Note - had the question said "3 different colored bags", I'd have been satisfied with the answer. Edit - I've passed the exam. Remarkably, this is the most unsatisfying way I've ever had an exam reported, simply pass/fail. One would think especially for math, that the test taker would like a result with precision, even if it's not offered to prospective employers.
Hint: Think about the balls. In how many bags can one ball be put in? How many balls are there?
showing compactness of the intersection of a compact set and disjoint open sets. Here's the problem I'm trying to solve. "If $A\subseteq B_1\cup B_2$ where $B_1, B_2$ are disjoint open sets and $A$ is compact, show that $A\cap B_1$ is compact. Is the same true if $B_1$ and $B_2$ are not disjoint?" Hope you can help, I can't seem to wrap my head around this one. Thanks!
Let $\{V_{\alpha}\}_{\alpha\in J}$ be a family of open sets such that: $$A\cap B_1\subseteq\cup_{\alpha\in J}V_{\alpha}$$Since $A\subseteq B_1\cup B_2$, it follows that $A\subseteq\ B_2\cup(\cup_{\alpha\in J}V_{\alpha})$. Since $A$ is compact , therefore there is a finite subset of $\{{V_{\alpha}}|\alpha\in J\}\cup\{B_2\}$ that covers $A$. Now try to use this subset (along with the fact that $B_1\cap B_2=\emptyset$) to make a finite subset of $\{{V_{\alpha}}|\alpha\in J\}$ that covers $A\cap B_1$.
Proof of $\displaystyle \lim_{z\to 1-i}[x+i(2x+y)]=1+i$ I am having some difficulty with the epsilon-delta proof of the limit above. I know that $|x+i(2x+y)-(1+i)|<\epsilon$ when $|x+iy-(1-i)|<\delta$. I tried splitting up the expressions above in this way: $|x+i(2x+y)-(1+i)|\\ =|(x-1)+i(y+1)+i(2x+2)|\\ \le|(x-1)+i(y+1)|+|i(2x+2)|\\ <\delta+|i(2x+2)|$ Is this the correct approach? I don't know how else I can manipulate the expression bounded by $\delta$. Also, what can I do about the i's? * $z=x+iy$
Hint: If $z\to 1-i$ and $z=x+iy$, then $x\to 1$ and $y\to -1$.
What is mathematical research like? I'm planning on applying for a math research program over the summer, but I'm slightly nervous about it just because the name math research sounds strange to me. What does math research entail exactly? For other research like in economics, or biology one collects data and analyzes it and draws conclusions. But what do you do in math? It seems like you would sit at a desk and then just think about things that have never been thought about before. I appologize if this isn't the correct website for this question, but I think the best answers will come from here.
For me as an independent mathematical researcher, it includes: 1) Trying to find new, more efficient algorithms. 2) Studying data sets as projected visually through different means to see if new patterns can be made visible, and how to describe them mathematically. 3) Developing new mathematical language and improving on existing language. It is not so different from how you describe biological or economical research, only that you try to find patterns linked to mathematical laws rather than biological or economical laws.
Is the Euler phi function bounded below? I am working on a question for my number theory class that asks: Prove that for every integer $n \geq 1$, $\phi(n) \geq \frac{\sqrt{n}}{\sqrt{2}}$. However, I was searching around Google, and on various websites I have found people explaining that the phi function has a defined upper bound, but no lower bound. Am I reading these sites incorrectly, or am I missing something in the problem itself?
EEDDIITT: this gives a proof of my main claim in my first answer, that a certain function takes its minimum value at a certain primorial. I actually put that information, with a few examples, into the wikipedia article, but it was edited out within a minute as irrelevant. No accounting for taste. ORIGINAL: We take as given Theorem 327 on page 267 of Hardy and Wright, that for some fixed $0 < \delta < 1,$ the function $$ g(n) = \frac{\phi(n)}{n^{1-\delta}} $$ goes to infinity as $n$ goes to infinity. Note that $g(1) = 1$ but $g(2) < 1.$ For some $N_\delta,$ whenever $ n > N_\delta$ we get $g(n) > 1.$ It follows that, checking all $1 \leq n \leq N_\delta,$ the quantity $g(n)$ assumes a minimum which is less than 1. Perhaps it assumes this minimum at more than one point. If so, we are taking the largest such value of $n.$ Here we are going to prove that the value of $n$ at which the minimum occurs is the primorial created by taking the product of all the primes $p$ that satisfy $$ p^{1-\delta} \geq p-1. $$ As I mentioned, in case two the minimum occurs at two different $n,$ this gives the larger of the two. So, the major task of existence is done by Hardy and Wright. We have the minimum of $g$ at some $$ n = p_1^{a_1} p_2^{a_2} p_3^{a_3} \cdots p_r^{a_r}, $$ with $$ p_1 < p_2 < \cdots < p_r. $$ First, ASSUME that one or more of the $a_i > 1.$ Now, $$ \frac{ g(p_i)}{g(p_i^{a_i})} = p^{\delta - a_i \delta} = p^{\delta (1 - a_i)} < 1. $$ As a result, if we decrease that exponent to one, the value of $g$ is lowered, contradicting minimality. So all exponents are actually 1. Second, ASSUME that there is some gap, some prime $q < p_r $ such that $q \neq p_j$ for all $j,$ that is $q$ does not divide $n.$ Well, for real variable $x > 0,$ the function $$ \frac{x-1}{x^{1-\delta}} $$ is always increasing, as the first derivative is $$ x^{\delta - 2} (\delta x +(1-\delta)). $$ It follows that, in the factorization of $n,$ if we replace $p_r$ by $q,$ the value of $g$ is lowered, contradicting minimality. So the prime factors of $n$ are consecutive, beginning with 2, and $n$ is called a primorial. Finally, what is the largest prime factor of $n?$ Beginning with 2, multiplying by any prime $p$ with $$ \frac{p-1}{p^{1-\delta}} \leq 1 $$ shrinks the value of $g$ or keeps it the same, so in demanding the largest $n$ in case there are two attaining the minimum of $g,$ we take $n$ to be the product of all primes $p$ satisfying $$ p - 1 \leq p^{1-\delta}, $$ or $$ p^{1-\delta} \geq p-1 $$ as I first wrote it. Examples are given in my first answer to this same question. EEDDIITTTT: Jean-Louis Nicolas showed, in 1983, that the Riemann Hypothesis is true if and only if, for all primorials $P,$ $$ \frac{e^\gamma \phi(P) \log \log P}{P} < 1. $$ Alright, the exact reference is: Petites valeurs de la fonction d'Euler. Journal of Number Theory, volume 17 (1983), number 3, pages 375-388. On the other hand, if RH is false, the inequality is true for infinitely many primorials and false for infinitely many. So, either way, it is true for infinitely many primorials (once again, these are $P = 2 \cdot 3 \cdot 5 \cdots p$ the product of consecutive primes beginning with 2). For whatever reason, the criterion of Guy Robin, who was a student of Nicolas, got to be better known.
How to solve this integration: $\int_0^1 \frac{x^{2012}}{1+e^x}dx$ I'm having troubles to solve this integration: $\int_0^1 \frac{x^{2012}}{1+e^x}dx$ I've tried a lot using so many techniques without success. I found $\int_{-1}^1 \frac{x^{2012}}{1+e^x}dx=1/2013$, but I couldn't solve from 0 to 1. Thanks a lot.
You have $$\int_{-1}^{1} \frac{x^{2012}}{1+e^{x}} \ dx =\underbrace{\int_{-1}^{0}\frac{x^{2012}}{1+e^{x}}}_{I_{1}} \ dx + \int_{0}^{1}\frac{x^{2012}}{1+e^{x}} \ dx \qquad \cdots (1)$$ In $I_{1}$ put $x=-t$, then you have $dx = -dt$, and so the limits range from $t=0$ to $t=1$. So you have $$I_{1}= -\int_{1}^{0} \frac{e^{t}\cdot t^{2012}}{1+e^{t}} \ dt = \int_{0}^{1} \frac{e^{x}\cdot x^{2012}}{1+e^{x}} \ dx$$ Put this in equation $(1)$ to get the value.
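To see the symmetry trick in action numerically, here is a sketch with a smaller exponent (exponent $4$ instead of $2012$, purely to keep the numerics tame; the claim being checked is $\int_{-1}^{1}\frac{x^{2k}}{1+e^{x}}\,dx=\frac{1}{2k+1}$):

```python
import math

def f(x, k=2):
    return x ** (2 * k) / (1 + math.exp(x))

# midpoint rule on [-1, 1]
n = 200_000
h = 2.0 / n
integral = sum(f(-1 + (i + 0.5) * h) for i in range(n)) * h

print(integral, 1 / 5)   # both about 0.2
```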
Definition of Tangents Before we learnt Calculus, we met the concept of a tangent to a circle, which was defined as the line touching the circle at ONE point. Then, after learning Calculus, we knew that a curve could intersect its tangent at more than one point, and a line intersecting a curve at only one point is not necessarily a tangent. Hence, we used Limit to define the tangent, which involved TWO points, and we let one approach the other to obtain the tangent. My question is: The definition of a tangent to a curve should be more general than that to a circle, and hence, we can say that the definition of a tangent to a circle can be derived from the definition of a tangent to a curve. However, the limit uses TWO points, even though they are very close to each other. If they overlap with each other to become ONE point, then no line occurs. So, in theory, how can we prove that the two definitions (a general curve VS a circle) are consistent?
Limit means approaching, not coincidence. If you take two points on the circle, the line through them is not a tangent, of course. But as you move one point closer to the other, that secant line gets closer to the tangent; the closer the points, the closer the line. This is where the limit comes in. It is just like derivatives, which are based on the difference of two values: to get the true derivative, one value "kind of" approaches the other, but you never say the two values are equal, because then you could not take the ratio (the denominator would be zero). That's why we define the derivative at a single point, not at two points, and that's how we define the tangent as well.
Applications of prime-number theorem in algebraic number theory? Dirichlet arithmetic progression theorem, or more generally, Chabotarev density theorem, has applications to algebraic number theory, especially in class-field theory. Since we might think of the density theorem as an analytic theorem, and as prime number theorem is one main theorem of analytic number theory, one is led to wonder: if there is any application of prime number theorem to algebraic number theory. Thanks for any attention in advance.
I don't know if this can be considered algebraic number theory or if it is more algebraic geometry; but, here goes. Deligne uses the methods of Hadamard-de la Vallée-Poussin to prove the Weil conjectures. Even if it is not an application of the ordinary PNT as such, the exact same methods are applied elsewhere.
What is wrong with my proof: Pseudoinverse and SVD I was trying to prove the following: Let $U\Sigma V$ be the $SVD$ decomposition of $A\in\mathbb R^{m\times n}$, where $\textrm{rank}(A)=k$. Show that the pseudoinverse of $A$ is given by, $$ \displaystyle A^\dagger=\sum_{i=1}^k\sigma_i^{-1}v_iu_i^T. $$ ${\bf Proof:}$ Let us show that $AA^\dagger A=A$, $A^\dagger AA^\dagger=A^\dagger$, $(A^\dagger A)^T=A^\dagger A$ and $(AA^\dagger)^T=AA^\dagger$. Partition $A$ as follows, \begin{align*} \displaystyle A=\left[\begin{array}{c|c} U_1&U_2 \end{array}\right]\left[\begin{array}{c|c} \tilde{\Sigma}&0\\ \hline 0&0 \end{array}\right]\left[\begin{array}{c} V_1^T\\ V_2^T \end{array}\right], \end{align*} where $\tilde{\Sigma}=\textrm{diag}(\sigma_1, \ldots, \sigma_k)$. Then, $A=U_1\tilde{\Sigma}V_1^T$ whereas $A^\dagger=V_1\tilde{\Sigma}^{-1}U_1^T$. Since $V^TV=U^TU=I$ we see $U_1^TU_1=V_1^TV_1=I$. Then, \begin{align*} \displaystyle (A^\dagger A)^T=(V_1\tilde{\Sigma}U_1^T)(U_1\tilde{\Sigma}^{-1}V_1^T)=I=A^\dagger A. \end{align*} Analogously, we see $(AA^\dagger)=I=AA^\dagger$. Using that $A^\dagger A=AA^\dagger=I$ (how can that be? $A$ is not square) we have, \begin{align*} \displaystyle (A^\dagger AA^\dagger)=(A^\dagger A)A^\dagger=IA^\dagger\ \textrm{and}\ (AA^\dagger A)=A(A^\dagger A)=AI=A. \end{align*} ${\bf Problems:}$ The main problem is the part $(A^\dagger A)=I$ for $A$ is not a square matrix. Furthermore, if that identity held the problem would be almost trivial. What is the problem with my proof?
You shouldn't assume that $A^\dagger A$ is equal to $I$: \begin{align*} A &= U_1 \widetilde{\Sigma} V_1^T,\\ A^\dagger &= V_1 \widetilde{\Sigma}^{-1} U_1^T,\\ \Rightarrow A^\dagger A &= V_1V_1^T \ \text{ is symmetric}. \end{align*}
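A numerical illustration of the formula (a sketch assuming numpy; the random test matrix and tolerance are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 7))   # rank 3, shape 5x7

U, s, Vh = np.linalg.svd(A, full_matrices=False)
k = int(np.sum(s > 1e-10))                                       # numerical rank

# A_dagger = sum_i sigma_i^{-1} v_i u_i^T over the k nonzero singular values
A_dagger = sum(np.outer(Vh[i], U[:, i]) / s[i] for i in range(k))

print(np.allclose(A_dagger, np.linalg.pinv(A)))   # True
```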
Prove that $\Gamma (-n+x)=\frac{(-1)^n}{n!}\left [ \frac{1}{x}-\gamma +\sum_{k=1}^{n}k^{-1}+O(x) \right ]$ Prove that $\Gamma (-n+x)=\frac{(-1)^n}{n!}\left [ \frac{1}{x}-\gamma +\sum_{k=1}^{n}k^{-1}+O(x) \right ]$ I don't know how to do this ? Note that $\gamma $ is the Euler-Mascheroni constant
A standard trick is to use the reflection identity $$\Gamma(-n+x) \Gamma(1+n-x) = -\frac{\pi}{\sin(\pi n - \pi x)}$$ giving, under the assumption of $n\in \mathbb{Z}$ $$ \Gamma(-n+x) = (-1)^n \frac{\pi}{\sin(\pi x)} \frac{1}{\Gamma(n+1-x)} = (-1)^n \frac{\pi}{\sin(\pi x)} \frac{1}{\color\green{n!}} \frac{\color\green{\Gamma(n+1)}}{\Gamma(n+1-x)} $$ Assuming $n \geqslant 0$, $$ \begin{eqnarray}\frac{\Gamma(n+1)}{\Gamma(n+1-x)} &=& \frac{1}{\Gamma(1-x)} \prod_{k=1}^n \frac{1}{1-x/k} \\ &=& \left(1+\psi(1) x + \mathcal{o}(x)\right) \left(1+\sum_{k=1}^n \frac{x}{k} + \mathcal{o}(x) \right) \\ &=& 1 + \left( \psi(1) + \sum_{k=1}^n \frac{1}{k} \right) x + \mathcal{o}(x) \end{eqnarray}$$ where $\psi(x)$ is the digamma function. Also using $$ \frac{\pi}{\sin(\pi x)} = \frac{1}{x} + \frac{\pi^2}{6} x + \mathcal{o}(x) $$ and multiplying we get $$ \Gamma(-n+x) = \frac{(-1)^n}{n!} \left( \frac{1}{x} + \psi(1) + \sum_{k=1}^n \frac{1}{k} + \mathcal{O}(x) \right) $$ Further $\psi(1) = -\gamma$, where $\gamma$ is the Euler-Mascheroni constant, arriving at your result.
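One can check the expansion numerically (a sketch assuming `mpmath`; the choice $n=3$ and the sample values of $x$ are mine):

```python
import mpmath as mp

mp.mp.dps = 30
n = 3
H_n = sum(mp.mpf(1) / k for k in range(1, n + 1))

for x in [mp.mpf('0.1'), mp.mpf('0.01'), mp.mpf('0.001')]:
    exact = mp.gamma(-n + x)
    approx = (-1) ** n / mp.factorial(n) * (1 / x - mp.euler + H_n)
    # the error should shrink roughly linearly in x, consistent with the O(x) term
    print(x, exact - approx)
```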
Question about closure of the product of two sets Let $A$ be a subset of the topological space $X$ and let $B$ be a subset of the topological space $Y$. Show that in the space $X \times Y$, $\overline{(A \times B)} = \bar{A} \times \bar{B}$. Can someone explain the proof in detail? The book I have kind of skims through the proof and I don't really get it.
$(\subseteq)$: The product of closed sets $\overline{A} \times \overline{B}$ is closed and contains $A \times B$. Since $\overline{A \times B}$ is the smallest closed set containing $A \times B$, it follows that $\overline{A \times B} \subseteq \overline{A} \times \overline{B}$. $(\supseteq)$: Choose any $(a,b) \in \overline{A} \times \overline{B}$. Notice that for every open neighborhood $W \subseteq X \times Y$ that contains $(a, b)$, $U \times V \subseteq W$ (by the definition of the product topology) for some open neighborhood $U$ of $a$ and some open neighborhood $V$ of $b$. By the definition of closure points, $U$ intersects $A$ at some $a'$. Similarly, define $b' \in V \cap B$. Hence, $(a', b') \in W \cap (A \times B)$. To summarize, every open neighborhood $W \subseteq X \times Y$ that contains $(a,b)$ must intersect $A \times B$, therefore $(a,b) \in \overline{A\times B}$.
Evaluate $\int_0^{\pi} \frac{\sin^2 \theta}{(1-2a\cos\theta+a^2)(1-2b\cos\theta+b^2)}\mathrm{d\theta}, \space 0<a<b<1$ Evaluate by complex methods $$\int_0^{\pi} \frac{\sin^2 \theta}{(1-2a\cos\theta+a^2)(1-2b\cos\theta+b^2)}\mathrm{d\theta}, \space 0<a<b<1$$ Sis.
It can be done easily without complex analysis, if we note that $$ \frac{\sin x}{1-2a\cos x+a^2}=\sum_{n=0}^{+\infty}a^n\sin[(n+1)x].$$ Just saying. EDIT: for proving this formula, we actually use complex methods.
Vector Space with Trivial Dual How to construct a (non-trivial) vector space $E$ such that the only continuous linear functional on $E$ is the zero function $f=0$?
In addition to what Asaf has written: There are non-trivial topological vector spaces with non-trivial topologies which have a trivial dual. I think in Rudin's "Functional Analysis" it is shown that the $L^p$-spaces with $0<p<1$ are an example of this.
Finding the smallest positive integer $N$ such that there are $25$ integers $x$ with $2 \leq \frac{N}{x} \leq 5$ Find the smallest positive integer $N$ such that there are exactly $25$ integers $x$ satisfying $2 \leq \frac{N}{x} \leq 5$.
$x$ ranges from $N/5$ through $N/2$ (ignoring rounding), so $N/2-N/5+1=25$, which gives $N=80$; a check shows that $x$ then runs from $16$ through $40$, which is exactly $25$ integers.
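A brute-force check that $80$ really is the smallest such $N$ (a quick sketch):

```python
def count_solutions(N):
    # number of positive integers x with 2 <= N/x <= 5, i.e. N/5 <= x <= N/2
    return sum(1 for x in range(1, N + 1) if 2 * x <= N <= 5 * x)

smallest = next(N for N in range(1, 1000) if count_solutions(N) == 25)
print(smallest)   # 80
```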
How to read mathematical formulas? I'm coming from a programmer's background, trying to learn more about physics. Immediately I encountered math, but was unfortunately unable to read it. Is there a good guide available for reading mathematical notation? I know symbols like exponents, square roots, factorials, but I'm easily confused by things like sub-notation. For example, I have no idea what this is: $f_n$ I can easily express values using programmatic notation, i.e. pseudocode: milesPerHour = 60 distanceInFeet = 100 feetPerMillisecond = ((milesPerHour * 5280) / (1e3 * 60 * 60)) durationInMilliseconds = 100 / feetPerMillisecond However, I have no clue even where to begin when trying to express the same logic in mathematical notation. How can I improve my ability to read and interpret mathematical formulas in notation?
I guess the most natural anwer to "How can I improve my ability to read and interpret mathematical formulas in notation?" is: through practice. If you're trying to read physics, you're probably familiar with Calculus. I would advise then that you do a Real Analysis course, only to get used to these notations, to mathematical logic and for the fun of it :)
Prove $\lim_{j\rightarrow \infty}\sum_{k=1}^{\infty}\frac{a_k}{j+k}=0$ I am only looking for a hint to start this exercise, not a full answer to the problem, please take this into consideration. Suppose that $a_k \geq 0$ for $k$ large and that $\sum_{k=1}^\infty\frac{a_k}k$ converges. Prove that $$\lim_{j\rightarrow \infty}\sum_{k=1}^{\infty}\frac{a_k}{j+k}=0$$ What I can see so far that may help is that, since $\sum_{k=1}^\infty a_k/k$ converges, $\forall \epsilon > 0$, $\exists N\in \mathbb{N}$ such that $n\geq N\Rightarrow |\sum_{k=n}^\infty \frac{a_k}k|<\epsilon$, which is a result of the Cauchy Criterion. Once again, I am only looking for a hint to start this exercise, not a full proof.
I gave the hint in my comment. For a full solution, read below: Let $\epsilon>0$. Choose $N>1$ so that $a_j\ge 0$ for $j\ge N$ and such that $\sum\limits_{k=N}^\infty {a_k\over k}<\epsilon/2$. Note that for $j>0$, we then have $$\tag{1}0\le \sum\limits_{k=N}^\infty {a_k\over k+j} \le\sum\limits_{k=N}^\infty {a_k\over k}<\epsilon/2.$$ So, we can make the tails $\sum\limits_{k=N}^\infty {a_k\over k+j}$ small (independent of $j$ in fact). Let's see how to make the remaining part of the sum, $\sum\limits_{k=1}^{N-1} {a_k\over j+k}$, small: Let $M=\max\{|a_1|,\ldots |a_{N-1}| \}$. Choose $J> 2M(N-1)/\epsilon$. Then for $j\ge J$: $$\tag{2}\Bigl| \,\sum\limits_{k=1}^{N-1} {a_k\over j+k}\,\Bigr|\le \sum\limits_{k=1}^{N-1} {M\over J }=(N-1)M\cdot{1\over J}<\epsilon/2. $$ Using $(1)$ and $(2)$, we have for $j>J $: $$ \Bigl|\,\sum_{k=1}^\infty {a_k\over j+k}\,\Bigr| \le \Bigl|\,\sum_{k=1}^{N-1} {a_k\over j+k}\,\Bigr|+ \Bigl|\,\sum_{k=N }^\infty {a_k\over j+k}\,\Bigr| \le{\epsilon\over2}+ {\epsilon\over2}=\epsilon. $$ Since $\epsilon$ was arbitrary, the result follows.
Automorphisms on Punctured Disc I have to find the automorphism group of the punctured unit disc $D = \{|z| <1\}\setminus \{0\}$. I understand that if $f$ is an automorphism on $D$, then it will have either a (i) removable singularity or (ii) a pole of order 1 at $z=0$. If it has a removable singularity at 0, then $f$ is a rotation. I am stuck at case (ii). Also, using this result, later I also have to find the automorphism group of $\{|z|<1\}\setminus \{1/2\}$ Can anybody please help ?
A bounded holomorphic function does not have a pole.
Combinatorics Catastrophe How will you solve $$\sum_{i=1}^{n}{2i \choose i}\;?$$ I tried to use Coefficient Method but couldn't get it! Also I searched for Christmas Stocking Theorem but to no use ...
Maple gives a "closed form" involving a hypergeometric function: $$ -1-{2\,n+2\choose n+1}\; {\mbox{$_2$F$_1$}(1,n+\frac32;\,n+2;\,4)}-\frac{i \sqrt {3}}{3} $$
Semigroup homomorphism and the relation $\mathcal{R}$ Let $S$ be a semigroup and for $a\in S$ let $$aS = \{as : s \in S \}\text{,}\;\;\;aS^1 = aS \cup \{a\}\text{.}$$The relation $\mathcal{R}$ on a semigroup $S$ is defined by the rule: $$a\;\mathcal{R}\; b \Leftrightarrow aS^1 = bS^1 \;\;\;\;\forall \;\;a,b\in S\text{.}$$ Let $S,T$ be semigroups and let $\phi : S \to T$ be a homomorphism. Show that if $a,b \in S$ and $a\;\mathcal{R} \;b$ in $S$ then $\phi(a) \;\mathcal{R} \;\phi(b)$ in $T$.
Use that $a\mathcal R b \iff (a=b) \lor (a\in bS\land b\in aS)$. (For this, in direction $\Leftarrow$, in the case of $a\ne b$ we conclude $aS^1\subseteq bS$ and $bS^1\subseteq aS$.) We can assume $a\ne b$, then $a\mathcal Rb$ means $a=bs$ for some $s$ and $b=as'$ for some $s'\in S$, so $\phi(a)=\phi(b)\phi(s)\in\phi(b)T$ and $\phi(b)=\phi(a)\phi(s')\in\phi(a)T$. -QED-
What is the asymptotical bound of this recurrence relation? I have the recurrence relation, with two initial conditions $$T(n) = T(n-1) + T(n-2) + O(1)$$ $$T(0) = 1, \qquad T(1) = 1$$ With the help of Wolfram Alpha, I managed to get the result of $O(\Phi^n)$, where $\Phi = \frac{1+\sqrt 5}{2} \approx 1.618$ is the golden ratio. Is this correct and how can that be mathematically proven?
You have essentially stated the Fibonacci sequence, or at least asymptotically. There are numberless references, here for instance. And your bound is correct: as you will see from the reference, the Fibonacci sequence behaves as $\phi^n/\sqrt{5}$, which is indeed $O(\phi^n)$ (in fact $\Theta(\phi^n)$), and the extra $O(1)$ term in the recurrence does not change this order of growth.
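A small numeric illustration of that growth rate (a sketch; it treats the $O(1)$ term as exactly $1$, which is my own simplification):

```python
phi = (1 + 5 ** 0.5) / 2

T = [1.0, 1.0]                          # T(0) = T(1) = 1
for n in range(2, 60):
    T.append(T[n - 1] + T[n - 2] + 1)   # the O(1) term taken as 1

for n in (10, 20, 30, 40, 50):
    print(n, T[n] / phi ** n)           # ratios settle to a constant, so T(n) = Theta(phi^n)
```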
Superharmonic function and super martingale. The definition (from Durrett's "Probability: Theory and Examples"): Superharmonic functions. The name (super martingale) comes from the fact that if $f$ is superharmonic (i.e., $f$ has continuous derivatives of order $\le 2$ and $\partial^2 f /\partial x_1^2 + \cdots + \partial^2 f /\partial x_d^2 \le 0$), then $$ f (x) \ge \frac 1 {|B(0, r)|} \int_{B(x,r)} f(y) dy $$ where $B(x, r) = \{y : |x − y| \le r\}$ is the ball of radius $r$, and $|B(0, r)|$ is the volume of the ball of radius $r$. The question is Suppose $f$ is superharmonic on $\mathbb{R}^d$. Let $\xi_1 , \xi_2 , ...$ be i.i.d. uniform on $B(0, 1)$, and define $S_n$ by $S_n = S_{n−1} + \xi_n$ for $n \ge 1$ and $S_0 = x$. Show that $X_n = f (S_n)$ is a supermartingale. Here the filtration should be $\mathcal{F}_n = \sigma \{X_n, X_{n-1}...\}$. I know to prove $X_n, n \ge 0$ is a super martingale, we only need to show $$ E\{X_{n+1} ~|~ \mathcal{F}_n\} \leq X_n. $$ This is easy when $n=0$. But for $n > 0$, I've no idea how to, or if it is possible, to derive the formula of the conditional expectation.
$f(S_{n+1})=f(S_n+\xi_{n+1})$. Using Did's hint: if $X$ and $Y$ are independent and $g(x)=E(f(x,Y))$, then $g(X)=E(f(X,Y)\mid X)$. Here $\xi_{n+1}$ is independent of $\mathcal F_n$, so $E(f(S_n+\xi_{n+1})\mid\mathcal F_n)=g(S_n)$, where $\displaystyle g(x)=\int f(x+y)\nu(dy)$ and in our case $\nu(dy)=\mathbf 1_{B(0,1)}(y)\frac{1}{|B(0,1)|}dy$ (because the $\xi_i$ are uniform on $B(0,1)$). Hence $\displaystyle g(S_n)=\int f(S_n+y)\mathbf 1_{B(0,1)}(y)\frac{1}{|B(0,1)|}dy=\frac{1}{|B(0,1)|}\int_{B(S_n,1)}f(y)dy\le f(S_n)$, by the mean-value inequality for superharmonic functions.
Are these exactly the abelian groups? I'm thinking about the following condition on a group $G$. $$(\forall A\subseteq G)(\forall g\in G)(\exists h\in G)\ Ag=hA.$$ Obviously every abelian group $G$ satisfies this condition. Are there any other groups that do? Can we give a familiar characterization for them? Can we give one if we confine the considerations to finite groups? Certainly not all groups satisfy the condition. Let $G$ be the free group on $\{x,y,z\}.$ Let $A=\{x,y\}$ and $g=z.$ Then $$Ag=\{x,y\}z=\{xz,yz\}.$$ Suppose there is $h\in G$ such that $\{hx,hy\}=hA=\{xz,yz\}.$ Then either $$\begin{cases}hx=xz\\hy=yz\end{cases}$$ or $$\begin{cases}hx=yz\\hy=xz\end{cases}$$ From the first case we get $h=xzx^{-1}$ and $h=yzy^{-1}$, which is a contradiction. From the second case we get $h=yzx^{-1}$ and $h=xzy^{-1}$, which is also a contradiction.
Assume $ab\ne ba$. Let $A=\{1,a\}$, $g=b$. Then there is $h\in G$ such that $\{h,ha\}=\{b,ab\}$. This needs $h=b\lor h=ab$. In the first case $ha=ba\ne ab$, so this fails. Therefore $h=ab$ and $ha=aba=b$. Similarly, $bab=a$. This implies $aa=abab=bb$. We conclude $$a=bab=bbabb=aaaaa, $$ hence $a^4=1$ and similarly $b^4=1$. Now take $A=\{1,a,b\}$ and $g=b$. Then there is $h\in G$ such that $\{h,ha,hb\}=\{b,ab,b^2\}$. * *$h=b$: Then $hb=b^2$ implies $ba=ha=ab$, contradiction *$h=ab$: Then $ha=aba=b$ implies $hb=b^2$, i.e. $a=1$ and of course $ab=ba$, contradiciton. *$h=b^2=a^2$: Then $ha=a^3=a^{-1}\ne b$, hence $ha=ab$, i.e. $a^2=b$ and of course $ab=ba$, contradiction
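The result says the condition forces commutativity, so any non-abelian group should yield a concrete counterexample pair $(A,g)$. Here is a brute-force sketch over $S_3$ (permutations composed as tuples; all names are mine):

```python
from itertools import permutations, combinations

def compose(p, q):
    # (p*q)(i) = p(q(i)); permutations stored as tuples of images of 0..2
    return tuple(p[q[i]] for i in range(3))

G = list(permutations(range(3)))

def condition_fails():
    # look for a subset A and an element g with Ag != hA for every h in G
    for r in range(1, len(G) + 1):
        for A in combinations(G, r):
            for g in G:
                Ag = {compose(a, g) for a in A}
                if not any({compose(h, a) for a in A} == Ag for h in G):
                    return A, g
    return None

print(condition_fails())   # a witness (A, g) is found, since S_3 is not abelian
```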
$\epsilon$-$\delta$ proof of discontinuity How can I prove that the function defined by $$f(x) = \begin{cases} x^{2}, & \text{if $x \in \mathbb{Q}$;} \\ -x^{2}, & \text{if $x \notin \mathbb{Q}$;} \end{cases} $$ is discontinuous? I see that it is true by using sequences but I cannot prove using only $\epsilon$'s and $\delta$'s.
Hint: Check whether the sequence $f(1+\frac{\sqrt{2}}{n})$ becomes very close to $f(1)$ for large $n$. To do it using the $\epsilon$-$\delta$ definition: Suppose there exists $\delta>0$ such that for all $y\in \mathbb{R}$, $|1-y|<\delta\implies|f(1)-f(y)|<0.1$. Now choose $n$ sufficiently large such that $|(1+\frac{\sqrt{2}}{n})-1|<\delta$ (use the Archimedean principle to do this). Thus $|f(1+\frac{\sqrt{2}}{n})-f(1)|<0.1$. Does this lead to a contradiction?
What are your favorite proofs using mathematical induction? I would like to get a list going of cool proofs using mathematical induction. Im not really interested in the standard proofs, like $1+3+5+...+(2n-1)=n^2$, that can be found in any discrete math text. I am looking for more interesting proofs. Thanks a lot.
Let $a>0$ and $d\in\mathbb{N}$ and define the simplex $S_d(a)$ in $\mathbb{R}^d$ by $$ S_d(a)=\{(x_1,\ldots,x_d)\in\mathbb{R}^d\mid x_1,\ldots,x_d\geq 0,\;\sum_{i=1}^d x_i\leq a\}. $$ Then for every $a>0$ and $d\in\mathbb{N}$ we get the following $$ \lambda_d(S_d(a))=\frac{a^d}{d!},\qquad (*) $$ where $\lambda_d$ is the $d$-dimensional Lebesgue measure. Proof: Let $d=1$. Then $$ \lambda_1(S_1(a))=\lambda_1([0,a])=a=\frac{a^1}{1!}, $$ so this case holds. Assume $(*)$ holds for $d\in\mathbb{N}$ and let us prove that it also holds for $d+1$. Now we use that for $B\in\mathcal{B}(\mathbb{R}^{n+m})$ the following holds: $$ \lambda_{n+m}(B)=\int_{\mathbb{R}^n}\lambda_m(B_x)\,\lambda_n(\mathrm dx), $$ where $B_x=\{y\in \mathbb{R}^m\mid (x,y)\in B\}$. Using this we have $$ \lambda_{d+1}(S_{d+1}(a))=\int_{\mathbb{R}}\lambda_d((S_{d+1}(a))_{x_1})\,\lambda_1(\mathrm dx_1). $$ But $$ \begin{align} (S_{d+1}(a))_{x_1}&=\{(x_2,\ldots,x_{d+1})\in\mathbb{R}^d\mid (x_1,x_2,\ldots,x_{d+1})\in S_{d+1}(a)\}\\ &=\{(x_2,\ldots,x_{d+1})\in\mathbb{R}^d\mid x_1,x_2,\ldots,x_{d+1}\geq 0,\; x_2+\cdots+ x_{d+1}\leq a-x_1\}\\ &= \begin{cases} S_{d}(a-x_1)\quad &\text{if }0\leq x_1\leq a,\\ \emptyset &\text{otherwise}. \end{cases} \end{align} $$ Thus $$ \begin{align} \lambda_{d+1}(S_{d+1}(a))&=\int_0^a\lambda_d(S_d(a-x_1))\,\lambda_1(\mathrm dx_1)=\int_0^a\frac{(a-x_1)^d}{d!}\,\lambda_1(\mathrm dx_1)\\ &=\frac{1}{d!}\left[-\frac{1}{d+1}(a-x_1)^{d+1}\right]_0^a=\frac{a^{d+1}}{(d+1)!}. \end{align} $$
The set of all functions from $\mathbb{N} \to \{0, 1\}$ is uncountable? How can I prove that the set of all functions from $\mathbb{N} \to \{0, 1\}$ is uncountable? Edit: This answer came to mind. Is it correct? This answer just came to mind. By contradiction suppose the set is $\{f_n\}_{n \in \mathbb{N}}$. Define the function $f: \mathbb{N} \to \{0,1\}$ by $f(n) \ne f_n(n)$. Then $f \notin\{f_n\}_{n \in \mathbb{N}}$.
Hint: use the dyadic expansion of elements of $[0,1]$.
Decomposition of a $C_0^{\infty}(\mathbb{R}^n)$-function I got the following question as part of a Fourier analysis course. Consider $\phi\in C_0^{\infty}(\mathbb{R}^n)$ with $\phi(0)=0$. Apparently we can then write $$\phi =\sum_{j=1}^nx_j\psi_j $$ for functions $\psi_j$ in the same space, and I would like to prove this. However, I'm a little stuck on this. The steps would involve that we start integrating $$\int_0^{x_1}D_1\phi(t,x_2,\cdots, x_n)dt+\phi(0,x_2,\cdots, x_n) $$ and continue doing so w.r.t. the other variables, and change the interval of integration to [0,1]. Then get everything in $C_0^{\infty}(\mathbb{R}^n)$ by writing $$\phi = \sum \frac{x_i^2\phi(x)}{\left\|x\right\|^2 } $$ and patch everything together with a partition of unity. I just don't quite see what they mean... A partition of unity is a sequence of functions that sums up to 1 for all $x\in \mathbb{R}^n$.
The result can be shown by induction on $n$. For $n=1$, just write $\phi(x)=\int_0^x\phi'(t)dt=x\int_0^1\phi'(sx)ds$, and the map $x\mapsto \int_0^1\phi'(sx)ds$ is smooth and has compact support. Assume the result is true for $n-1\geqslant 1$. We have $$\phi(x)=x_n\int_0^1\partial_n\phi(x_1,\dots,x_{n-1},tx_n)dt+\phi(x_1,\dots,x_{n-1},0).$$ A similar argument as in the case $n=1$ shows that $(x_1,\dots,x_n)\mapsto \int_0^1\partial_n\phi(x_1,\dots,x_{n-1},tx_n)dt$ is smooth with compact support. By the induction hypothesis, since $(x_1,\dots,x_{n-1})\mapsto \phi(x_1,\dots,x_{n-1},0)$ is smooth with compact support and vanishes at the origin, we can write it as $\sum_{j=1}^{n-1}x_j\psi_j(x_1,\dots,x_{n-1})$. But it's not finished yet, as $(x_1,\dots,x_n)\mapsto \psi_j(x_1,\dots,x_{n-1})$ doesn't have compact support in $\Bbb R^n$ unless $\psi_j\equiv 0$. However, we can find a smooth function with compact support $\chi$ such that $\chi(x)=1$ whenever $x\in\operatorname{supp}(\phi)$. As $\phi(x)=\phi(x)\chi(x)$, we are done.
Formal proof for $(-1) \times (-1) = 1$ Is there a formal proof for $(-1) \times (-1) = 1$? It's a fundamental formula not only in arithmetic but also in the whole of math. Is there a proof for it or is it just assumed?
In any ring it holds, where $1$ denotes the unit element ($1x=x=x1$ for all $x$) and $-x$ denotes the additive inverse ($x+(-x)=0$ for all $x$). First, $x=1\cdot x=(1+0)\cdot x=1\cdot x+0\cdot x=x+0\cdot x$. Then, using the additive group, it follows that $0\cdot x=0$ for all $x$. Now use distributivity for $$0=(1+(-1))\cdot(-1)=1\cdot(-1)+(-1)\cdot(-1)=-1+(-1)\cdot(-1),$$ and adding $1$ to both sides gives $(-1)\cdot(-1)=1$.
Unions of disjoint open sets. Let $X$ be a compact metric space (hence separable) and $\mu$ a Borel probability measure. Given an open set $A$ and $r,\epsilon>0$ $\ $does there exist a finite set of disjoint open balls $\left\{ B_{i}\right\} $ contained in $A$ and of radius smaller than $r$ , so that $\mu(\cup B_{i})\geq\mu(A)-\epsilon.$
Recall Lemma (Finite Vitali covering lemma) Let $(X,d)$ be a metric space, $\{B(a_j,r_j),j\in [K]\}$ a finite collection of open balls. We can find a subset $J$ of $[K]$ such that the balls $B(a_j,r_j),j\in J$ are disjoint and $$\bigcup_{i\in [K]}B(a_i,r_i)\subset \bigcup_{j\in J}B(a_j,3r_j).$$ A proof is given page 41 in the book Ergodic Theory: with a view towards Number Theory, Einsiedler M., Ward T. For each $a\in A$, fix $r_a<r/3$ such that $B(a,3r_a)\subset A$. As $X$ is separable, we can, by Lindelöf property, extract from the cover $\{B(a,r_a),a\in A\}$ of $A$ a countable subcover $\{B(a_j,r_j),j\in \Bbb N\}$. Now take $N$ such that $\mu(A)-\mu\left(\bigcup_{j=0}^NB(a_j,r_j)\right)<\varepsilon$. Then we conclude by finite Vitali covering lemma.
How to evaluate a definite integral that involves $(dx)^2$? For example: $$\int_0^1(15-x)^2(\text{d}x)^2$$
Just guessing, but maybe this came from $\frac {d^2y}{dx^2}=(15-x)^2$. The right way to see this is $\frac d{dx}\frac {dy}{dx}=(15-x)^2$. Then we can integrate both sides with respect to $x$, getting $\frac {dy}{dx}=\int (15-x)^2 dx=\int (225-30x+x^2)dx=C_1+225x-15x^2+\frac 13x^3$ and can integrate again to get $y=C_2+C_1x+\frac{225}{2}x^2-5x^3+\frac 1{12}x^4$ which can be evaluated at $0$ and $1$, but we need a value for $C_1$ to get a specific answer. As I typed this I got haunted by the squares on both sides and worry that somehow it involves $\frac {dy}{dx}=15-x$, which is easy to solve.
Question on the proof of a property of the rank of a matrix The task is that I have to prove this statement: Given $(m+1)\times(n+1)$ matrix $A$ like this: $$ A=\left[\begin{array}{c |cc} 1 & \begin{array}{ccc}0 & \cdots & 0\end{array} \\ \hline \begin{array}{c}0\\ \vdots\\0\end{array} & {\Large B} \\ \end{array}\right] $$ where $B$ is a $m\times n$ sub-matrix of $A$. Show: $\operatorname{rank}(A) = r$ implies $\operatorname{rank}(B) = r - 1$. Here are the steps that I constructed: (1) First I already proved this claim: if $m\times n$ matrix $S$ has rank $r$, then $r\le m$ and $r\le n$. And I let $\operatorname{rank}(B) = k$. (2) Use (1), I say the following 2 statements: * *for matrix $A$, $r\le m+1$ and $r\le n+1$. Thus $r-1\le m$ and $r-1 \le n$ *for matrix $B$, $k\le m$ and $k\le n$. (3) Now, this is the part that I feel shaky about. I plan to say that using (2), I have $r-1\le k$ and $k\le r - 1$. Thus $r - 1 = k$. But I'm not certain whether this argument is valid. After noticing that $r - 1\le m$ and $k \le m$ at a same time, I come up with the above relation between $k$ and $r - 1$. Would someone please help me check if there is anything wrong or missing in this proof? Thank you.
In analogy, you're arguing: "$5 \leq 7, 4 \leq 7 \Rightarrow 5 \leq 4$", which, when put this way, is clearly not true. The way to prove this exercise depends on how your class introduced these concepts. Absent that knowledge, I would argue as follows: When reduced to row-echelon form, you can read off the rank of a matrix by the number of non-zero rows. So reduce your matrix $B$ to $1 \leq k \leq \min(n, m)$ non-zero rows (which equals the rank of $B$), and note that the elementary matrices multiplied to the left and right of $A$, to achieve this, will not change column 1 (this is quite obvious as all but the first element of column $1$ and row $1$ are $0$). So you still have the $1$ in position $a_{11}$ (and $0$ in the other entries of the first column), and the entire matrix has $k+1$ non-zero rows in row-echelon form which equals the rank of $A$. So $\operatorname{rank}(A) = \operatorname{rank}(B)+1,$ which was to be shown.
About an extension of Riesz' Lemma for normed spaces The Riesz' Lemma is as follows: Let $Y$ and $Z$ be subspaces of a normed space $X$ of any dimension (finite or infinite) such that $Y$ is closed (in $X$) and is also a proper subset of $Z$. Then for every real number $\theta$ in the open interval $(0,1)$, there is a point $z$ in $Z$ such that $$||z|| = 1$$ and $$||z-y|| \geq \theta$$ for every $y$ in $Y$. Now we want to prove the following assertion: If $Y$ is finite-dimensional, then we can even take $\theta$ to be equal to $1$ in the statement of the Riesz' Lemma. How to prove this?
I would apply the following trick in case $\dim Y<\infty$: Let $z_0\in Z\setminus Y$ be arbitrary and consider the finite dimensional normed space $U$ spanned by $Y$ and $z_0$. Then there are many ways to continue; for example, the unit ball is compact in $U$, thus applying Riesz's lemma to $\theta_n:=1-\frac1n$ and $Y\subset U$, we get a sequence $u_n$ with $||u_n||=1$ and $d(u_n,Y)>1-\frac1n$. Pick a convergent subsequence and let $z$ be its limit; then $||z||=1$ and, since the distance function $d(\cdot,Y)$ is continuous, $d(z,Y)\geq 1$.
Open and Closed Set Problems using a Ball I am having trouble with these two questions. In particular, using a ball and choosing an $r$ to show that a set is open. (a) $$X = \left \{ \mathbf{x} \in \mathbb{R}^d | \: ||\mathbf{x}|| \leq 1 \right \} .$$ So, $X$ is closed if its complement $X^c$ is open. So if I can show that $X^c$ is open, then it follows that $X$ is closed. I'll start with the definition I am using for a ball. The ball about $\mathbf{a}$ in $\mathbb{R}^n$ of radius $r$ is the set $$B_{r}(\mathbf{a})= \left \{ \mathbf{x} \in \mathbb{R}^n : || \mathbf{x} - \mathbf{a} || < r\right \}$$ A subset $U$ of $\mathbb{R}^n$ is open if for every $\mathbf{a} \in U$, there is some $r=r(a) > 0$ such that the ball is contained in $U$. So, I have a set $X^{c} = \left \{ \mathbf{v} \in \mathbb{R}^d | \: ||\mathbf{v}|| > 1 \right \} $. It consists of points whose lengths are longer than 1. Let $\mathbf{a} \in X^{c}$, then $|| \mathbf{a} || > 1$. Now, using $B_{r}(\mathbf{a})$ as it is written above, how do I find an explicit formula for $r>0$? Intuitively it seems to make sense if $\mathbf{x}$ is in $X$, then $0<|| \mathbf{x} - \mathbf{a} ||$ as $||\mathbf{x}|| \neq ||\mathbf{a}||$. I am not sure how to proceed. (b) $$X = \mathbb{R}^2 \setminus \left \{ \mathbf{x} \in \mathbb{R}^2 | \mathbf{x}=(x,0) \: \right \}$$ I know that this set is open, but it again comes down to choosing some $r$ and using a ball. The set is $\mathbb{R}^2$ less a line across the x-axis - that is, any $x$ and any $y$ with $y \neq 0$. The line has no height, so it should be easy to show that for any $\mathbf{a} \in X$ there exists $r>0$ such that $B_r(a) \subseteq X$. I've done lots of scratch work and diagrams etc. but I just can't seem to put this concept together. Any help and clarification would be appreciated.
Regarding you second question: Let $a\in X$, and you want to find $r>0$ such that $B_r(a)\subseteq X$. What could "go wrong"? Well, this ball might contain points of the form $(x,0)$, and these are not in $X$. So all we have to do is to eliminate this option. Try to figure out a general way of doing it, using an example. Suppose $a=(1,8)$. If you take $r=9$ it won't work, right? But it will work for all $r<8$.
Construct a pentagon from the midpoints of its sides Let $p_{1},p_{2},p_{3},p_{4},p_{5}$ be five points in the euclidean plane such that no set of three of those points lie on the same line. It is easy to prove that there exists a unique pentagon such that $p_{1},p_{2},p_{3},p_{4},p_{5}$ are the midpoints of its sides (In fact there is a more general result saying that the same is true for any odd number n of points as the midpoints of the sides of an n-gon). The proof uses $\mathbb{C}$ as a model of the euclidean plane and then proves, that the system of linear equations $$\frac{1}{2}(x_{i}+x_{i+1}) = p_{i} \space\space\space \space 1 \leq i \leq 5$$ where $x_{6} = x_{1}$, has unique solutions for $x_{1},x_{2},x_{3},x_{4},x_{5}\in\mathbb{C}$ since the corresponding 5x5 matrix is invertible. My question is whether there is a way to construct the solution using ruler and compass. (which is possible in the case with only 3 points)
Let $A,B,C,D,E$ be the given midpoints opposite the unknown vertices $V,W,X,Y,Z$ respectively; both sets of points in rotational order. Then quadrilateral $WXYZ$ must have the midpoints of its sides forming a parallelogram (Varignon's theorem), and three of those midpoints are given by $E,A,B$. To find the midpoint $M$ of $\overline{ZW}$, construct lines parallel to $\overline{AB}$ through $E$ and parallel to $\overline{EA}$ through $B$. These lines intersect at $M$. Then $C,D,M$ are midpoints of the sides of $\triangle ZVW$, and the sides of that triangle are parallel to those of $\triangle DMC$. Construct lines parallel to $\overline{MC}$ through $D$ and parallel to $\overline{DM}$ through $C$. These lines intersect at $V$, and the lines contain the sides of the pentagon through $V$. Once $V$ is obtained, the remaining vertices follow readily. The given midpoint $D$ is the midpoint of $\overline{VW}$, so $W$ is thereby obtained from $V$ and $D$ (reflect $V$ through $D$). Similarly $X$ is obtained from $W$ and $E$, and so on around the pentagon.
Existence of vector fields Do there exist two vector fields $X$ and $Y$ on $\mathbb R^2$ such that the following are satisfied? (1) $X(0)= Y(0)= 0$, where $0\in \mathbb R^2$, and for other points $q\in \mathbb R^2$ we have $X(q)\neq 0, Y(q)\neq 0$. (2) For any curve $\gamma$ in $\mathbb R^2$, we have $\langle X'(t),Y(t)\rangle\geq 0$. (3) For any curve $\gamma$ in $\mathbb R^2$, we have $[X'(t),Y(t)]=0$. What happens if we change condition $3$ to $[X(t),Y(t)]=0$? Clarification: As $X$ is a vector field on $\mathbb R^2$, for any path $\gamma :[0,1]\to \mathbb R^2$ we have a map $\tilde{X}(t):[0,1]\to \mathbb R^2$ given by $\widetilde{X}(t)= X(\gamma(t))$. In the above question, by $X(t)$ I mean $\widetilde{X}(t)$, and $X'(t)$ is the derivative of this map, which can be identified with an element in $\mathbb R^2$.
I think $ X(x,y)=(x^2+y^2)\frac{\partial}{\partial y} $ and $ Y(x,y)=(x^2+y^2)\frac{\partial}{\partial x} $ should work.
Finding second order linear homogenous ODE from the fundamental set of solutions Find second order linear homogeneous ODE with constant coefficients if its fundamental set of solutions is {$e^{3t},te^{3t}$}. Attempt: Had this question in my midterm. So, since the fundamental set of solutions is $$y=y_1+y_2=c_1e^{3t}+c_2te^{3t}$$ the characteristic equation of the second order ODE has only one root. I don't know what to do next. Help please.
The eigenvalues and eigenvectors for the coefficient matrix $A$ in the linear homogeneous system $Y'= AY$ are $\lambda_{1} = 3$ with $v_1 = \langle a, b \rangle$ and $\lambda_2 = 3$ with $v_2 = \langle c, d \rangle$. The fundamental form of the solution is: $$ Y = c_1 e^{3t}v_1 + c_2t e^{3t}v_2.$$ Since the repeated root of the characteristic equation must be $r=3$, the characteristic polynomial is $(r-3)^2=r^2-6r+9$, so your original equation is of the form $$y'' - 6y' + 9y = 0,$$ which has the double root $\lambda_{1,2} = 3$. You can actually solve this to check that it reproduces the given fundamental set. Regards
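A quick symbolic check that the proposed equation is solved by exactly that fundamental set (a sketch assuming sympy is available):

```python
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')
y = (c1 + c2 * t) * sp.exp(3 * t)          # general combination of e^{3t} and t e^{3t}

residual = sp.simplify(sp.diff(y, t, 2) - 6 * sp.diff(y, t) + 9 * y)
print(residual)   # 0, so y'' - 6y' + 9y = 0 is satisfied
```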