Limit $\lim_{x\rightarrow \infty} x^3 e^{-x^2}$ using L'Hôpital's rule I am trying to evaluate a limit using L'Hôpital's rule with $e^x$. So my question is how to find $$\lim_{x\rightarrow \infty} x^3 e^{-x^2}$$ I know how to get up to this point, but I'm lost after that: $$\lim_{x\rightarrow \infty} \frac{x^3}{e^{x^2}}$$
The first and third equalities below apply L'Hôpital's rule to an $\frac{\infty}{\infty}$ form; the second just cancels a factor of $x$: $$\begin{align} \lim_{x\rightarrow \infty} \dfrac{x^3}{e^{x^2}} &=\lim_{x\rightarrow \infty} \dfrac{3x^2}{2x\,e^{x^2}}\\ &=\lim_{x\rightarrow \infty} \dfrac{3x}{2\,e^{x^2}}\\ &=\lim_{x\rightarrow \infty} \dfrac{3}{4x\,e^{x^2}}\\ &=0 \end{align}$$
The wedge sum of two circles has fixed point property? Does the wedge sum of two circles have the fixed point property? I'm trying to find a continuous map from the wedge sum to itself for which this property fails, but I couldn't find one. I need help. Thanks
If by circle you mean $S^1$, and the fixed point property is the claim that every continuous map into itself has a fixed point, then it fails for the wedge of two circles: call the circles $A$ and $B$ and let $x$ be the wedge point, and consider the map that rotates $A$ by 90 degrees and sends all of $B$ to the image of $x$ under that rotation. No point of $A$ is fixed, since the rotation is nontrivial, and every point of $B$ is sent to a point of $A$ other than $x$, so the map has no fixed point.
On Ceva's Theorem? The famous Ceva's Theorem on a triangle $\Delta \text{ABC}$ $$\frac{AJ}{JB} \cdot \frac{BI}{IC} \cdot \frac{CK}{KA} = 1$$ is usually proven using the property that the area of a triangle of a given height is proportional to its base. Is there any other proof of this theorem (using a different property)? EDIT: I would like it if someone could use the proof of Menelaus' Theorem.
I’m sure there is a slick proof lurking in $\mathbb{C}$. This is not it. We first prove the left implication. Place the origin $O$ at the concurrent point as figured. Since ratios of lengths are invariant under dilations and rotations, WLOG let the line through $B$ and $K$ be the real axis and scale the triangle such that $B=1$. As $I,J,K$ lie on the edges of $\triangle ABC$ $$ \begin{split} I&=A+t_1(1-A)\\J&=1+t_2(C-1)\\K&=C+t_3(A-C),\end{split}\tag{*}$$ for real $t_i$, from which it follows $$\tag{**}\frac{ A-I}{ I-B}\cdot \frac{ B-J}{ J-C}\cdot \frac{ C-K}{ K-A}=\frac{t_1t_2t_3}{(1-t_1)(1-t_2)(1-t_3)}.$$ Further, $$\begin{split} I&=r_1C\\J&=r_2A \\K&=r_3.\end{split}\tag{***}$$ If we equate $(*)$ and $(***)$ and solve for the real $r_i$, then we get three complex equations in $r_i,a_i,c_i, t_i$ with imaginary parts $0$. Solving these three imaginary parts for $t_i$ then gives $$\begin{split} t_1=\frac{a_1c_2-a_2c_1}{ a_1c_2-a_2c_1-c_2} &\implies \frac{1}{1-t_1}=\frac{c_2+a_2c_1-a_1c_2}{c_2}\\t_2=\frac{a_2}{a_1c_2+a_2-a_2c_1}&\implies \frac{1}{1-t_2}=\frac{a_1c_2+a_2-a_2c_1}{a_1c_2-a_2c_1}\\t_3=\frac{c_2}{c_2-a_2}&\implies \frac{1}{1-t_3}=\frac{a_2-c_2}{a_2}\end{split}$$ and plugging these into $(**)$ gives the desired result. To prove the right implication we follow basically the same procedure, i.e. place the origin $O$ at the intersection of Ceva lines $CI$ and $AJ$, rotate and scale so that $B=1$ and then prove $K\in\mathbb{R}$ by assuming the Ceva product is $1$, hence the Ceva lines are concurrent. The same algebraic manipulations as before show $$\begin{split} \text{Im}(K)&=t_3a_2-c_2(1-t_3)\\&=t_3a_2+c_2\frac{t_1t_2t_3}{(1-t_1)(1-t_2)}\\&=t_3a_2 +c_2\frac{a_2c_1-a_1c_2}{c_2}\frac{a_2}{a_1c_2-a_2c_1}t_3\\&=0\end{split}$$ and we conclude the result. $\qquad\square$ Not nice, but maybe you can streamline the argument somewhere.
Counterexample of Sobolev Embedding Theorem? Is there a counterexample to the Sobolev Embedding Theorem? More precisely, please help me construct a Sobolev function $u\in W^{1,p}(R^n),\,p\in[1,n)$ such that $u\notin L^q(R^n)$, where $q>p^*:=\frac{np}{n-p}$.
Here is how you can do this on the unit ball $\{x | \|x \| \le 1\}$: Set $u(x) = \|x\|^{-\alpha}$. Then $\nabla u$ is easy to find. Now you can compute $\|u\|_{L^q}$ and $\|u\|_{W^{1,p}}$ using polar coordinates. Play around until the $L^q$ norm is infinite while the $W^{1,p}$ norm is still finite.
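To make the hint concrete, here is a sketch of the computation it leads to (my own working, with dimensional constants absorbed into the $\approx$): with $u(x)=\|x\|^{-\alpha}$ on the unit ball one has $|\nabla u(x)|=\alpha\|x\|^{-\alpha-1}$, so in polar coordinates $$\|u\|_{L^q}^q \approx \int_0^1 r^{-\alpha q}\,r^{n-1}\,dr<\infty \iff \alpha q<n, \qquad \|\nabla u\|_{L^p}^p \approx \int_0^1 r^{-(\alpha+1)p}\,r^{n-1}\,dr<\infty \iff (\alpha+1)p<n.$$ Hence $u\in W^{1,p}$ while $u\notin L^q$ exactly when $\frac nq\le\alpha<\frac np-1$, and such an $\alpha$ exists precisely because $q>p^*=\frac{np}{n-p}$; multiplying by a smooth cutoff supported near the origin then gives a function defined on all of $\mathbb R^n$.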
The smallest ring containing square root of 2 and rational numbers Can anyone prove why the smallest ring containing $\sqrt{2}$ and rational numbers is comprised of all the numbers of the form $a+b\sqrt{2}$ (with $a,b$ rational)?
That ring must surely contain all numbers of the form $a+b\sqrt 2$ with $a,b\in\mathbb Q$ because these can be obtained by ring operations. Since that set is closed under addition and multiplication (because $(a+b\sqrt 2)+(c+d\sqrt2)=(a+c)+(b+d)\sqrt 2$ and $(a+b\sqrt2)\cdot(c+d\sqrt 2)=(ac+2bd)+(ad+bc)\sqrt 2$), it is already a ring, hence nothing bigger is needed.
Showing that $ (1-\cos x)\left |\sum_{k=1}^n \sin(kx) \right|\left|\sum_{k=1}^n \cos(kx) \right|\leq 2$ I'm trying to show that: $$ (1-\cos x)\left |\sum_{k=1}^n \sin(kx) \right|\left|\sum_{k=1}^n \cos(kx) \right|\leq 2$$ It is equivalent to show that: $$ (1-\cos x) \left (\frac{\sin \left(\frac{nx}{2} \right)}{ \sin \left( \frac{x}{2} \right)} \right)^2 |\sin((n+1)x)|\leq 4 $$ Any idea ?
Using the identity $$ 1 - \cos x = 2\sin^2\left(\frac{x}{2}\right), $$ we readily identify the left-hand side as $$ 2 \sin^2 \left(\frac{nx}{2} \right) \left|\sin((n+1)x)\right|, $$ which is clearly less than or equal to $2$.
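For completeness, the reduction stated in the question comes from the standard closed forms (when $\sin\frac x2=0$ the original left-hand side is $0$, so there is nothing to prove): $$\sum_{k=1}^n \sin(kx)=\frac{\sin\frac{nx}{2}\,\sin\frac{(n+1)x}{2}}{\sin\frac{x}{2}},\qquad \sum_{k=1}^n \cos(kx)=\frac{\sin\frac{nx}{2}\,\cos\frac{(n+1)x}{2}}{\sin\frac{x}{2}},$$ whose product equals $\frac12\left(\frac{\sin\frac{nx}{2}}{\sin\frac x2}\right)^2\sin((n+1)x)$ by the double-angle formula $2\sin\theta\cos\theta=\sin 2\theta$.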
K-critical graphs: connectivity and cut vertices Show that a $k$-critical graph is connected. Furthermore, show that it does not have a vertex whose removal disconnects the graph (such a vertex is known as a cut vertex). I have managed to prove, I think, the first part. Let's assume $G$ is not connected. Since $\chi(G) = k$ (if $G_1,G_2,\ldots,G_r$ are the components of a disconnected graph $G$, then $\chi(G) = \max_{1\leq i\leq r} \chi(G_i)$), there is a component $G_1$ of $G$ such that $\chi(G_1)=k$. If $v$ is any vertex of $G$ which is not in $G_1$, then $G_1$ is a component of the subgraph $G - v$. Therefore, $\chi(G - v) = \chi(G_1) = k$. This contradicts the fact that $G$ is $k$-critical. Hence $G$ is connected. But I can't manage to prove the second part, i.e. that there is no cut vertex. Can anyone help?
Your statement is a special case of a more general theorem I was researching when I came across your question: Theorem: A vertex cut in a $k$-critical graph is never a clique. Proof: Assume that the cut $S$ in the $k$-critical graph $G=(V,E)$ is a clique. Let the components of $G \setminus S$ be $\{C_1, \dots, C_r\}$. For each subgraph of $G$ of the form $C_i\cup S$, find a proper coloring $\phi_i$ with $k-1$ or fewer colors (such a coloring exists because $C_i\cup S$ misses the vertices of the other components and $G$ is $k$-critical). Let $v_1, \dots, v_m$ be the vertices of $S$; because $S$ is a clique, they receive pairwise different colors in each coloring $\phi_i$, so you can permute the colors of $\phi_i$ in such a way that $\phi_i(v_j)=j$. Finally, you unite all the colorings $\phi_1, \dots, \phi_r$, thus getting a proper coloring of $G$ with only $k-1$ colors. This contradicts the $k$-criticality of $G$ and proves the theorem. Because a single vertex is a clique in every graph, your statement is proven. PS: how is resurrecting old threads looked upon in these parts?
Induction: How to prove propositions with universal quantifiers? In my book, they prove with mathematical induction propositions with successions like this: $$1 + 3 + 5 + \cdots + (2n-1) = n^2$$ In all exercises. However, recently I took some exercises from a different paper and instead of these it told me to prove this: $$\forall n \in N (11 / (10^{2n+1} + 1 ))$$ Or perhaps this: $$ \forall n \in N (n < 2^n) $$ (I can't find how to write the natural numbers set symbol. $N$ is it there.) And now I'm lost. This is is what I did with the first one: Prove that the proposition works for $n=1$ $$ 11 / (10^3+1) \implies 11/1001 \implies \exists x \in N(11x=1001)$$ Which is true, if you take $x = 91$. Assume $$\forall n \in N (11 / (10^{2n+1} + 1 ))$$ We have to prove: $$\forall n \in N (11 / (10^{2n+3} + 1 ))$$ We prove it: Which I don't know how to do. Curiously enough, my book only shows exercises with successions, so I guess that this exercise can be, somehow, written as a succession? I am not sure about that. Any ideas?
You are misunderstanding induction here. The $\forall$ is always part of it. For example, the real statement for your first result is $$\forall n\in\mathbb N: 1+3+...+(2n-1)=n^2$$ In general, if your statement is $$\forall n\in\mathbb N: P(n),$$ the principle of mathematical induction says it is enough to show: $$P(1)$$ and $$\forall n\in\mathbb N: P(n)\implies P(n+1)$$ So to prove that $11\mid 10^{2n+1}+1$, you first show $11\mid 10^{3}+1$ and then prove that if $11\mid 10^{2n+1}+1$ then $11\mid 10^{2n+3}+1$.
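One way to finish the inductive step the question was stuck on (a standard manipulation, not part of the answer above): $$10^{2n+3}+1 = 100\cdot 10^{2n+1}+1 = 100\left(10^{2n+1}+1\right)-99,$$ and both $100\,(10^{2n+1}+1)$ (by the induction hypothesis) and $99=9\cdot 11$ are divisible by $11$, hence so is their difference.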
Basic Counting Problem I was reading a probability book and am having trouble conceptually with one of the examples. The following is a modification. Let's say that we have $3$ coins that we want to randomly assign into $3$ bins, with equal probability. We can label these coins $a_1$, $a_2$, $a_3$. What is the probability that all $3$ bins will be filled? The solution is: All possible combinations of assigning these coins to bin locations is $3^3 = 27$. The possible ways that all 3 bins can be filled is $3!$. The final probability is $6/27 = 2/9$. Alternatively this could be derived as $(3/3)\cdot(2/3)\cdot(1/3) = 2/9$. Now what if the coins are not labeled and are considered interchangeable. There are now $10$ configurations in which these bins can be filled: $\binom{3+3-1}{3} = 10$. Only one of these configurations will have all bins filled. Thus the probability here is $1/10$. Shouldn't these probabilities be the same? Am I missing something with the second scenario?
It depends on whether you're thinking of choosing each arrangement of balls with equal probability, or whether you place the balls in each bin with equal probability. I would say that the most natural way to think about this is the latter, since this is most likely how it would happen in real life. When dealing with events that are independent like the placement of the unlabelled balls, I would think about it as placing one first, then another and finally a third. This is kind of a way to artificially label the balls, that makes intuitive sense. In the real world, you would probably place the balls one after another. At the very least, you would decide which one goes where one after another. If you did decide on the arrangement of them all at once, the chances are you would not be doing so independently and the outcome you describe with probability $\frac{1}{10}$ would occur. This idea of making events happen one after another is extremely useful throughout probability (actually, it is useful in combinatorics, to decide how many ways there are of doing things much more easily by splitting up a complicated scenario into smaller ones - it applies to probability when we have events happening with equal probability, and then the probability of an event occuring is the number of ways it can happen divided by the number of total events which could happen.) To bring it back to your example, assuming the balls are placed independently we can say that, if we want to fill up all three bins, the first one definitely goes into a bin, the second one goes into an empty one with probability $\frac{2}{3}$ and the third one goes into the final empty one with probability $\frac{1}{3}$ giving total probability of the event $\frac{2}{9}$ If, however, we assume that each arrangement is equally likely, then we get the answer $\frac{1}{10}$. This is because some arrangements are more likely to happen then others when the balls are placed independently, so the probabilities get skewed accordingly. For example, we have already established that the event of them all being in different bins can happen in $6$ different ways. The event of them all being in the same bin can only happen in $3$ different ways because there are $3$ different bins, so it should occur with half the probability of the balls all being spread evenly. This is what causes the discrepancy in the answers.
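As a sanity check of the "balls placed independently" model, here is a quick simulation sketch (the function name and parameters are mine, just for illustration):

```python
import random

def all_bins_filled(trials=200_000, bins=3, balls=3, seed=1):
    """Estimate P(every bin gets at least one ball) when each ball
    independently chooses a bin uniformly at random."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        occupied = {rng.randrange(bins) for _ in range(balls)}
        if len(occupied) == bins:
            hits += 1
    return hits / trials

print(all_bins_filled())  # close to 2/9 ≈ 0.222, not 1/10
```

The estimate lands near $2/9$, confirming that the $1/10$ answer corresponds to a different (non-independent) way of choosing the arrangement.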
How to use mathematical induction with inequalities? I've been using mathematical induction to prove propositions like this: $$1 + 3 + 5 + \cdots + (2n-1) = n^2$$ Which is an equality. I am, however, unable to solve inequalities. For instance, this one: $$ 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n} \leq \frac{n}{2} + 1 $$ Every time my books solves one, it seems to use a different approach, making it hard to analyze. I wonder if there is a more standard procedure for working with mathematical induction (inequalities). There are a lot of questions related to solving this kind of problem. Like these: * *How to prove $a^n < n!$ for all $n$ sufficiently large, and $n! \leq n^n$ for all $n$, by induction? - in this one, the asker was just given hints (it was homework) *How to prove $n < n!$ if $n > 2$ by induction? Ilya gave an answer, but there was little explanation (and I'd like some more details on the procedure) *how: mathematical induction prove inequation Also little explanation. Solving it with one line is great, but I'd prefer large blocks of text instead. Can you give me a more in depth explanation of the whole procedure?
I'm not sure what you expect exactly, but here is how I would do the inequality you mention. We start with the base step (as it is usually called); the important point is that induction is a process where you show that if some property holds for a number, it holds for the next. First step is to prove it holds for the first number. So, in this case, $n=1$ and the inequality reads $$ 1<\frac12+1, $$ which obviously holds. Now we assume the inductive hypothesis, in this case that $$ 1+\frac12+\cdots+\frac1n<\frac{n}2+1, $$ and we try to use this information to prove it for $n+1$. Then we have $$ 1+\frac12+\cdots+\frac1n+\frac1{n+1}=\left(1+\frac12+\cdots+\frac1n\right)+\frac1{n+1}. $$ I inserted the brackets to show that we have the sum we know about, through the inductive hypothesis: so $$ 1+\frac12+\cdots+\frac1n+\frac1{n+1}<\frac{n}2+1+\frac1{n+1}. $$ Now comes the nontrivial part (though not hard in this case), where we need to somehow get $(n+1)/2+1$. Note that this is equal to $n/2+1$ (which we already have) plus $1/2$. And this suggests the proof: as $n\geq1$, $1/(n+1)\leq1/2$. So $$ 1+\frac12+\cdots+\frac1n+\frac1{n+1}<\frac{n}2+1+\frac1{n+1}\leq\frac{n}2+1+\frac12=\frac{n+1}2+1. $$ So, assuming the inequality holds for $n$, we have shown it holds for $n+1$. So, by induction, the inequality holds for all $n$.
How are complex numbers useful to real number mathematics? Suppose I have only real number problems, where I need to find solutions. By what means could knowledge about complex numbers be useful? Of course, the obviously applications are: * *contour integration *understand radius of convergence of power series *algebra with $\exp(ix)$ instead of $\sin(x)$ No need to elaborate on these ones :) I'd be interested in some more suggestions! In a way this question is asking how to show the advantage of complex numbers for real number mathematics of (scientifc) everyday problems. Ideally these examples should provide a considerable insight and not just reformulation. EDIT: These examples are the most real world I could come up with. I could imagine an engineer doing work that leads to some real world product in a few months, might need integrals or sine/cosine. Basically I'm looking for a examples that can be shown to a large audience of laymen for the work they already do. Examples like quantum mechanics are hard to justify, because due to many-particle problems QM rarely makes any useful predictions (where experiments aren't needed anyway). Anything closer to application?
I have used complex numbers to solve real life problems: - Digital Signal Processing, Control Engineering: Z-Transform. - AC Circuits: Phasors. This is a handful of applications broadly labeled under load-flow studies and resonant frequency devices (with electric devices modeled into resistors, inductors, capacitors at AC steady state). - Analog Computers and Control Engineering: Laplace Transform. Not sure if it falls into Complex Numbers, but since it has (x,y) form - CNC programming: scaling, rotating coordinates. - Rotating Dynamic Balancers. ... maybe some more but I can't recall.
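As a tiny concrete illustration of the "scaling, rotating coordinates" item (my own example, not from the answer above): multiplying by a complex number $r\,e^{i\theta}$ scales by $r$ and rotates by $\theta$ in a single operation.

```python
import cmath

p = complex(3, 1)                      # the point (3, 1) viewed as 3 + 1j
op = 2 * cmath.exp(1j * cmath.pi / 2)  # scale by 2 and rotate by 90 degrees
q = p * op
print(q.real, q.imag)                  # -2.0 6.0 (up to floating-point rounding)
```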
Poker, number of three of a kind, multiple formulaes I wanted to calculate some poker hands, for a three of a kind I infered, 1) every card rank can form a 'three of a kind' and there are 13 card ranks, 2) there are $\binom{4}{3}$ ways to choose three cards out of the four suits of every card rank, and 3) for the remaining card I can choose two out of 49 cards, i.e. $\binom{49}{2}$. Together the formulae is $$ 13 \cdot \binom{4}{3} \cdot \binom{49}{2} = 61152 $$ But on Wikipedia I found a different formulae, namely $$ \binom{13}{1} \binom{4}{3} \binom{12}{2} \left( \binom{4}{1} \right)^2 = 54912 $$ which makes also totally sense to me (1. card rank, 2. subset of suits, 3. choose form the left card ranks, 4. assign suits). But I can't see why my first formulae is wrong, can anybody explain this to me?
We can count more or less like you did, using $\dbinom{13}{1}\dbinom{4}{3}\dbinom{48}{2}$ (note the small change), and then subtracting the full houses. Or else after we have picked the kind we have $3$ of, and the actual cards, we can pick the two "useless" cards. The kinds of these can be chosen in $\dbinom{12}{2}$ ways. Once the kinds have been chosen, the actual cards can be chosen in $\dbinom{4}{1}^2$ ways, for a total of $$\binom{13}{1}\binom{4}{3}\binom{12}{2}\binom{4}{1}^2.$$
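As a numerical check of the first approach (my own arithmetic): $\binom{48}{2}=1128$, so $13\cdot\binom43\cdot\binom{48}{2}=13\cdot4\cdot1128=58656$; the number of full houses is $13\cdot\binom43\cdot12\cdot\binom42=3744$, and indeed $58656-3744=54912$, matching the Wikipedia formula. The original count $13\cdot\binom43\cdot\binom{49}{2}=61152$ exceeds $58656$ by $2496=13\cdot4\cdot48$, exactly the choices in which one of the two "remaining" cards is the fourth card of the chosen rank.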
If $f^2$ is Riemann Integrable is $f$ always Riemann Integrable? Problem: Suppose that $f$ is a bounded, real-valued function on $[a,b]$ such that $f^2\in R$ (i.e. it is Riemann-Integrable). Must it be the case that $f\in R$ ? Thoughts: I think that this is not necessarily true, but I am having trouble refuting or even proving the above. Of course, the simplest way to prove that it is not necessarily true would be to give an example, but I am unable to think of one! I also have tried using $\phi(y)=\sqrt y$ and composing this with $f^2$ (to try show $f$ is continuous); however, the interval $[a,b]$ may contain negative numbers so I can't utilise $\phi$ in that case. Question: Does there exist a function $f$ such that $f^2\in R$ but $f$ $\not\in R$ ? Or conversely, if $f^2\in R$ does this always imply $f$ $\in R$ ? (If so, could you provide a way of proving this).
$$f=2\cdot\mathbf 1_{[a,b]\cap\mathbb Q}-1$$ This $f$ equals $1$ at the rationals and $-1$ at the irrationals of $[a,b]$, so $f^2\equiv 1$ is Riemann integrable, while $f$ itself (a shifted Dirichlet function) is discontinuous at every point of $[a,b]$ and hence not Riemann integrable.
Using Zorn's lemma show that $\mathbb R^+$ is the disjoint union of two sets closed under addition. Let $\Bbb R^+$ be the set of positive real numbers. Use Zorn's Lemma to show that $\Bbb R^+$ is the union of two disjoint, non-empty subsets, each closed under addition.
Let $\mathcal{P}$ the set of the disjoint pairs $(A,B)$, where $A,B\subseteq\mathbb{R}^+$ are not empty and each one is closed under addition and multiplication by a positive rational number. Note that $\mathcal{P}\neq\emptyset$ because if we consider $X=\mathbb{Q}^+$ and $Y=\{n\sqrt{2}:n\in\mathbb{Q}^+\}$, then $(X,Y)\in\mathcal{P}$. Define $\leq$ as follows: $(X_1,Y_1)\leq(X_2,Y_2)$ if and only if $X_1\subseteq X_2$ and $Y_1\subseteq Y_2$, for all $(X_1,Y_1),(X_2,Y_2)\in\mathcal{P}$. Clearly, $(\mathcal{P},\leq)$ is a partially ordered set. Furthermore, it is easy to see that every chain in $(\mathcal{P},\leq)$ has an upper bound. We can now apply Zorn's lemma, so $(\mathcal{P},\leq)$ has a maximal element. Let $(A,B)\in\mathcal{P}$ a maximal element of $(\mathcal{P},\leq)$. We only have to show that $A\cup B=\mathbb{R}^+$. Suppose that $\mathbb{R}^+\not\subseteq A\cup B$. Therefore, there exists $x\in\mathbb{R}^+$ such that $x\not\in A\cup B$. Consider $A_x=\{kx+a:k\in\mathbb{Q}^+\cup\{0\}\textrm{ and }a\in A\}$ and $B_x=\{kx+b:k\in\mathbb{Q}^+\cup\{0\}\textrm{ and }b\in B\}$. Then $A\subseteq A_x$, $B\subseteq B_x$ and $A_x,B_x\subseteq\mathbb{R}^+$. It is easy to verify that $A_x$ and $B_x$ are closed under addition and multiplication by a positive rational number. If either $A_x\cap B=\emptyset$ or $A\cap B_x=\emptyset$, then $(A,B)$ would not be a maximal element of $(\mathcal{P},\leq)$. Therefore, $A_x\cap B\neq\emptyset$ and $A\cap B_x\neq\emptyset$, so there is some $q_0\in\mathbb{Q}^+$ and $a\in A$ such that $q_0x+a\in B$, and there is some $q_1\in\mathbb{Q}^+$ and $b\in B$ such that $q_1x+b\in A$. Note that $q_0,q_1\neq 0$ because $A\cap B=\emptyset$. It is easy to see that $q_0q_1x+q_1a+q_0b\in A\cap B$, so $A\cap B\neq\emptyset$, a contradiction. Therefore, $A\cup B=\mathbb{R}^+$.
Combinatorial Proof Of Binomial Double Counting Let $a$, $b$, $c$ and $n$ be non-negative integers. By counting the number of committees consisting of $n$ sentient beings that can be chosen from a pool of $a$ kittens, $b$ crocodiles and $c$ emus in two different ways, prove the identity $$\sum\limits_{\substack{i,j,k \ge 0; \\ i+j+k = n}} {{a \choose i}\cdot{b \choose j}\cdot{c \choose k} = {a+b+c \choose n}}$$ where the sum is over all non-negative integers $i$, $j$ and $k$ such that $i+j+k=n.$ I know that this is some kind of combinatorial proof. My biggest problem is that I've never really done a proof.
$\displaystyle{\sum_{i+j+k=n,\ i,j,k\,\geq\,0} {a \choose i}{b \choose j}{c \choose k} = {a + b + c \choose n}:\ {\large ?}}$ \begin{align}&\sum_{i+j+k=n,\ i,j,k\,\geq\,0} {a \choose i}{b \choose j}{c \choose k} =\sum_{\ell_{a},\,\ell_{b},\,\ell_{c}\,\geq\,0}{a \choose \ell_{a}} {b \choose \ell_{b}}{c \choose \ell_{c}} \delta_{\ell_{a}+\ell_{b}+\ell_{c},\,n} \\[3mm]&=\sum_{\ell_{a},\,\ell_{b},\,\ell_{c}\,\geq\,0}{a \choose \ell_{a}} {b \choose \ell_{b}}{c \choose \ell_{c}}\oint_{\vert z\vert=1} \frac{1}{z^{-\ell_{a}-\ell_{b}-\ell_{c}+n+1}} \,\frac{\mathrm{d}z}{2\pi\mathrm{i}} \\[3mm]&=\oint_{\vert z\vert=1}\frac{1}{z^{n+1}} \left[\sum_{\ell_{a}\geq 0}{a \choose \ell_{a}}z^{\ell_{a}}\right] \left[\sum_{\ell_{b}\geq 0}{b \choose \ell_{b}}z^{\ell_{b}}\right] \left[\sum_{\ell_{c}\geq 0}{c \choose \ell_{c}}z^{\ell_{c}}\right] \,\frac{\mathrm{d}z}{2\pi\mathrm{i}} \\[3mm]&=\oint_{\vert z\vert=1}\frac{(1+z)^{a}(1+z)^{b}(1+z)^{c}}{z^{n+1}}\,\frac{\mathrm{d}z}{2\pi\mathrm{i}} =\oint_{\vert z\vert=1}\frac{(1+z)^{a+b+c}}{z^{n+1}} \,\frac{\mathrm{d}z}{2\pi\mathrm{i}} \\[3mm]&={a + b + c \choose n} \end{align}
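For comparison, the double-counting argument the exercise actually asks for can be stated briefly (a standard argument, not part of the computation above): the right-hand side $\binom{a+b+c}{n}$ counts the committees of $n$ sentient beings chosen from all $a+b+c$ of them, while conditioning on the committee containing exactly $i$ kittens, $j$ crocodiles and $k$ emus (so $i+j+k=n$) gives $\binom{a}{i}\binom{b}{j}\binom{c}{k}$ committees for each such triple; summing over all admissible triples counts every committee exactly once, which is the left-hand side.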
Finding generalized eigenbasis * *For a complex square matrix $M$, a maximal set of linearly independent eigenvectors for an eigenvalue $\lambda$ is determined by solving $$ (M - \lambda I) x = 0. $$ for a basis in the solution subspace directly as a homogeneous linear system. *For a complex square matrix $M$, a generalized eigenvector for an eigenvalue $\lambda$ with algebraic multiplicity $c$ is defined as a vector $u$ s.t. $$ (M - \lambda I)^c u = 0. $$ I wonder if a generalized eigenbasis in Jordan decomposition is also determined by finding a basis in the solution subspace of $(M - \lambda I)^c u = 0$ directly in the same way as for an eigenbasis? Or it is more difficult to solve directly as a homogeneous linear system, and some tricks are helpful? Thanks!
Look at the matrix $$M=\pmatrix{1&1\cr0&1\cr}$$ Taking $\lambda=1$, $c=2$, Then $(M-\lambda I)^c$ is the zero matrix, so any two linearly independent vectors will do as a basis for the solution space of $(M-\lambda I)^cu=0$. But that's not what you want: first, you want as many linearly independent eigenvectors as you can find, then you can go hunting for generalized eigenvectors.
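Concretely, for this $M$ (a small worked example of the procedure, added for illustration): $M-I=\begin{pmatrix}0&1\\0&0\end{pmatrix}$, so the eigenvectors are the multiples of $e_1=(1,0)^T$; a generalized eigenvector is then obtained by solving $(M-I)v=e_1$, for instance $v=(0,1)^T$, and the chain $\{e_1,v\}$ is a basis in which $M$ is already in Jordan form. In general one builds such chains starting from the eigenvectors, rather than taking an arbitrary basis of $\ker(M-\lambda I)^c$.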
Value of $\lim_{n\to \infty}\frac{1^n+2^n+\cdots+(n-1)^n}{n^n}$ I remember that a couple of years ago a friend showed me and some other people the following expression: $$\lim_{n\to \infty}\frac{1^n+2^n+\cdots+(n-1)^n}{n^n}.$$ As shown below, I can prove that this limit exists by the monotone convergence theorem. I also remember that my friend gave a very dubious "proof" that the value of the limit is $\frac{1}{e-1}$. I cannot remember the details of the proof, but I am fairly certain that it made the common error of treating $n$ as a variable in some places at some times and as a constant in other places at other times. Nevertheless, numerical analysis suggests that the value my friend gave was correct, even if his methods were flawed. My question is then: What is the value of this limit and how do we prove it rigorously? (Also, for bonus points, What might my friend's original proof have been and what exactly was his error, if any?) I give my convergence proof below in two parts. In both parts, I define the sequence $a_n$ by $a_n=\frac{1^n+2^n+\cdots+(n-1)^n}{n^n}$ for all integers $n\ge 2$. First, I prove that $a_n$ is bounded above by $1$. Second, I prove that $a_n$ is increasing. (1) The sequence $a_n$ satisfies $a_n<1$ for all $n\ge 2$. Note that $a_n<1$ is equivalent to $1^n+2^n+\cdots+(n-1)^n<n^n$. I prove this second statement by induction. Observe that $1^2=1<4=2^2$. Now suppose that $1^n+2^n+\cdots+(n-1)^n<n^n$ for some integer $n\ge 2$. Then $$1^{n+1}+2^{n+1}+\cdots+(n-1)^{n+1}+n^{n+1}\le(n-1)(1^n+2^n+\cdots+(n-1)^n)+n^{n+1}<(n-1)n^n+n^{n+1}<(n+1)n^n+n^{n+1}\le n^{n+1}+(n+1)n^n+\binom{n+1}{2}n^{n-1}+\cdots+1=(n+1)^{n+1}.$$ (2) The sequence $a_n$ is increasing for all $n\ge 2$. We must first prove the following preliminary proposition. (I'm not sure if "lemma" is appropriate for this.) (2a) For all integers $n\ge 2$ and $2\le k\le n$, $\left(\frac{k-1}{k}\right)^n\le\left(\frac{k}{k+1}\right)^{n+1}$. We observe that $k^2-1\le kn$, so upon division by $k(k^2-1)$, we get $\frac{1}{k}\le\frac{n}{k^2-1}$. By Bernoulli's Inequality, we may find: $$\frac{k+1}{k}\le 1+\frac{n}{k^2-1}\le\left(1+\frac{1}{k^2-1}\right)^n=\left(\frac{k^2}{k^2-1}\right)^n.$$ A little multiplication and we arrive at $\left(\frac{k-1}{k}\right)^n\le\left(\frac{k}{k+1}\right)^{n+1}$. We may now first apply this to see that $\left(\frac{n-1}{n}\right)^n\le\left(\frac{n}{n+1}\right)^{n+1}$. Then we suppose that for some integer $2\le k\le n$, we have $\left(\frac{k}{n}\right)^n\le\left(\frac{k+1}{n+1}\right)^{n+1}$. Then: $$\left(\frac{k-1}{n}\right)^n=\left(\frac{k}{n}\right)^n\left(\frac{k-1}{k}\right)^n\le\left(\frac{k+1}{n+1}\right)^{n+1}\left(\frac{k}{k+1}\right)^{n+1}=\left(\frac{k}{n+1}\right)^{n+1}.$$ By backwards (finite) induction from $n$, we have that $\left(\frac{k}{n}\right)^n\le\left(\frac{k+1}{n+1}\right)^{n+1}$ for all integers $1\le k\le n$, so: $$a_n=\left(\frac{1}{n}\right)^n+\left(\frac{2}{n}\right)^n+\cdots+\left(\frac{n-1}{n}\right)^n\le\left(\frac{2}{n+1}\right)^{n+1}+\left(\frac{3}{n+1}\right)^{n+1}+\cdots+\left(\frac{n}{n+1}\right)^{n+1}<\left(\frac{1}{n+1}\right)^{n+1}+\left(\frac{2}{n+1}\right)^{n+1}+\left(\frac{3}{n+1}\right)^{n+1}+\cdots+\left(\frac{n}{n+1}\right)^{n+1}=a_{n+1}.$$ (In fact, this proves that $a_n$ is strictly increasing.) By the monotone convergence theorem, $a_n$ converges. I should note that I am not especially well-practiced in proving these sorts of inequalities, so I may have given a significantly more complicated proof than necessary. 
If this is the case, feel free to explain in a comment or in your answer. I'd love to get a better grip on these inequalities in addition to finding out what the limit is. Thanks!
The limit is $\frac{1}{e-1}$. I wrote a paper on this sum several years ago and used the Euler-Maclaurin formula to prove the result. The paper is "The Euler-Maclaurin Formula and Sums of Powers," Mathematics Magazine, 79 (1): 61-65, 2006. Basically, I use the Euler-Maclaurin formula to swap the sum with the corresponding integral. Then, after some asymptotic analysis on the error term provided by Euler-Maclaurin we get $$\lim_{n\to \infty}\frac{1^n+2^n+\cdots+(n-1)^n}{n^n} = \sum_{k=0}^{\infty} \frac{B_k}{k!},$$ where $B_k$ is the $k$th Bernoulli number. The exponential generating function of the Bernoulli numbers then provides the $\frac{1}{e-1}$ result. I should mention that I made a mistake in the original proof, though! The correction, as well as the generalization $$\lim_{n\to \infty}\frac{1^n+2^n+\cdots+(n+k)^n}{n^n} = \frac{e^{k+1}}{e-1}$$ are contained in a letter to the editor (Mathematics Magazine 83 (1): 54-55, 2010).
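As for the bonus question, the friend's argument was presumably the natural heuristic (this is my guess, not taken from the paper): for each fixed $k$, $$\left(\frac{n-k}{n}\right)^n=\left(1-\frac kn\right)^n\longrightarrow e^{-k},$$ so the sum "should" converge to $\sum_{k=1}^{\infty}e^{-k}=\frac1{e-1}$. The step that needs justification is interchanging the limit with the infinite sum; here it is legitimate, for example by Tannery's theorem or dominated convergence, since $\left(1-\frac kn\right)^n\le e^{-k}$ for $0\le k\le n$, so this heuristic can in fact be turned into a rigorous proof.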
Can someone show me how to prove the following? I have $f(x)=g(ax+b)$, a and b are constant. I need to show that $\nabla f(x)=a\nabla g(x)$ and $\nabla^2 f(x)=a^2\nabla^2 g(x)$... I was thinking that the final answer should have ax+b in it, but apparently it can be shown that the above is true???
In this problem $f(x) = g(h(x))$, where $h(x) = ax + b$. I'm going to consider the case where $a$ is a matrix rather than a scalar, because it's useful and no more difficult. You can assume $a$ is a scalar if you'd like. Let's establish some notation. Recall that if $F:\mathbb R^n \to \mathbb R^m$ is differentiable at $x$, then $F'(x)$ is an $m \times n$ matrix. In the special case where $m = 1$, $F'(x)$ is a $1 \times n$ matrix. I'm going to use the convention that $\nabla F(x) = F'(x)^T$, so $\nabla F(x)$ is a column vector rather than a row vector. Then $G(x) = \nabla F(x)$ is a function from $\mathbb R^n \to \mathbb R^n$, and $\nabla^2 F(x) = G'(x)$, which is an $n \times n$ matrix. The chain rule tells us that \begin{align} f'(x) &= g'(h(x))h'(x) \\ &= g'(ax + b) a. \end{align} It follows that \begin{align} \nabla f(x) &= a^T g'(ax+b)^T \\ &= a^T \nabla g(ax + b). \end{align} That is our formula for $\nabla f(x)$. Preparing to use the chain rule again, we can express $\nabla f(x)$ as $\nabla f(x) = w(h(x))$, where $w(x) = a^T \nabla g(x)$. Note that $w'(x) = a^T \nabla^2 g(x)$. Applying the chain rule to $z(x) = \nabla f(x) = w(h(x))$, we see that \begin{align} \nabla^2 f(x) &= w'(h(x))h'(x) \\ &= a^T \nabla^2 g(ax + b) a. \end{align} This is our formula for $\nabla^2 f(x)$.
How to understand $\operatorname{cf}(2^{\aleph_0}) > \aleph_0$ As a corollary of König's theorem, we have $\operatorname{cf}(2^{\aleph_0}) > \aleph_0$ . On the other hand, we have $\operatorname{cf}(\aleph_\omega) = \aleph_0$. Why the logic in the latter equation can't apply to the former one? To be precise, why we can't have $\sup ({2^n:n<\omega}) = 2^{\aleph_0}$?
Note that in ordinal arithmetic $2^\omega$ is the supremum of $2^n$ for all $n<\omega$, so it is $\omega$ and thus has cofinality $\aleph_0$. $\omega$ and $\aleph_0$ are the same set but they nevertheless behave differently in practice -- the tradition is to use the notation $\omega$ for operations where a limiting process is involved and $\aleph_0$ for operations where the "countably infinite" is used all at once in a single step. Cardinal exponentiation $2^{\aleph_0}$ is such an all-at-once process.
Infinitely number of primes in the form $4n+1$ proof Question: Are there infinitely many primes of the form $4n+3$ and $4n+1$? My attempt: Suppose the contrary that there exist finitely many primes of the form $4n+3$, say $k+1$ of them: $3,p_1,p_2,....,p_k$ Consider $N = 4p_1p_2p_3...p_k+3$, $N$ cannot be a prime of this form. So suppose that $N$=$q_1...q_r$, where $q_i∈P$ Claim: At least one of the $q_i$'s is of the form $4n+3$: Proof for my claim: $N$ is odd $\Rightarrow q_1,...,q_r$ are odd $\Rightarrow q_i \equiv 1\ (\text{mod }4)$ or $q_i ≡ 3\ (\text{mod }4)$ If all $q_1,...q_r$ are of the form $4n+1$, then $(4n+1)(4m+1)=16nm+4n+4m+1 = 4(\cdots) +1$ Therefore, $N=q_1...q_r = 4m+1$. But $N=4p_1..p_k+3$, i.e. $N≡3\ (\text{mod }4)$, $N$ is congruent to $1\ \text{mod }4$ which is a contradiction. Therefore, at least one of $q_i \equiv 3\ (\text{mod }4)$. Suppose $q_j\equiv 3\ (\text{mod }4)$ $\Rightarrow$ $q_j=p_i$ for some $1\leq i \leq k$ or $q_j =3$ If $q_j=p_i≠3$ then $q_j$ | $N = 4p_1...p_k + 3 \Rightarrow q_j=3$ Contradiction! If $q_j=3$ ($\neq p_i$, $1\leq i \leq k$) then $q_j | N = 4 p_1...p_k + 3 \Rightarrow q_j=p_t$ for some $1 \leq i \leq k$ Contradiction! In fact, there must be also infinitely many primes of the form $4n+1$ (according to my search), but the above method does not work for its proof. I could not understand why it does not work. Could you please show me? Regards
There are infinitely many primes in both of the arithmetic progressions $4k+1$ and $4k-1$. Euclid's proof of the infinitude of primes can be easily modified to prove the existence of infinitely many primes of the form $4k-1$. Sketch of proof: assume that the set of these primes is finite, given by $\{p_1=3,p_2=7,\ldots,p_k\}$, and consider the huge number $M=4p_1^2 p_2^2\cdots p_k^2-1$. $M$ is a number of the form $4k-1$, hence by the fundamental theorem of arithmetic it has a prime divisor of the same form. But $\gcd(M,p_j)=1$ for any $j\in[1,k]$, hence we have a contradiction. Given that there are infinitely many primes in the AP $4k-1$, is it possible that there are only finitely many primes in the AP $4k+1$? That does not seem plausible, and indeed it does not occur. Let us define, for any $n\in\mathbb{N}^+$, $\chi_4(n)$ as $1$ if $n=4k+1$, as $-1$ if $n=4k-1$, as $0$ if $n$ is even. $\chi_4(n)$ is a periodic and multiplicative function (a Dirichlet character) associated with the $L$-function $$ L(\chi_4,s)=\sum_{n\geq 1}\frac{\chi_4(n)}{n^s}=\!\!\!\!\prod_{p\equiv 1\!\!\pmod{4}}\left(1-\frac{1}{p^s}\right)^{-1}\prod_{p\equiv 3\!\!\pmod{4}}\left(1+\frac{1}{p^s}\right)^{-1}. $$ The last equality follows from Euler's product, which allows us to state $$ L(\chi_4,s)=\prod_{p>2}\left(1+\frac{1}{p^s}\right)^{-1}\prod_{p\equiv 1\!\!\pmod{4}}\frac{p^s+1}{p^s-1}=\left(1+\frac{1}{2^s}\right)\frac{\zeta(2s)}{\zeta(s)}\prod_{p\equiv 1\!\!\pmod{4}}\frac{p^s+1}{p^s-1}. $$ If the primes in the AP $4k+1$ were finite, the limit of the RHS as $s\to 1^+$ would be $0$. On the other hand, $$ \lim_{s\to 1^+}L(\chi_4,s)=\sum_{n\geq 0}\frac{(-1)^n}{2n+1}=\int_{0}^{1}\sum_{n\geq 0}(-1)^n x^{2n}\,dx=\int_{0}^{1}\frac{dx}{1+x^2}=\frac{\pi}{4}\color{red}{\neq} 0 $$ so there have to be infinitely many primes of the form $4k+1$, too. With minor adjustments, the same approach shows that there are infinitely many primes in both the APs $6k-1$ and $6k+1$. I have just sketched a simplified version of Dirichlet's theorem for primes in APs.
Correlated Poisson Distribution $X_1$ and $X_2$ are discrete stochastic variables. They can both be modeled by a Poisson process with arrival rates $\lambda_1$ and $\lambda_2$ respectively. $X_1$ and $X_2$ have a constant correlation $\rho$. Is there an analytic equation that describes the probability density function: $P(X_1= i,X_2= k)$
Consider this model that could generate correlated Poisson variables. Let $Y$, $Y_1$ and $Y_2$ be three independent Poisson variables with parameters $r$, $\lambda_1$ and $\lambda_2$. Let $$X_i=Y_i+Y$$ for $i=1,2$. Then $X_1$ and $X_2$ are Poisson with parameters $\lambda_1+r$ and $\lambda_2+r$ respectively. They have the correlation $$\rho=\frac{r}{\sqrt{(\lambda_1+r)(\lambda_2+r)}}$$ Now the joint distribution can be derived as $$P[X_1=i,X_2=j]=e^{-(r+\lambda_1+\lambda_2)}\sum_{k=0}^{i\wedge j}\frac{r^k}{k!}\frac{\lambda_1^{(i-k)}}{(i-k)!}\frac{\lambda_2^{(j-k)}}{(j-k)!}$$ The case for a bivariate Poisson process is immediate from here. You could look at the Johnson and Kotz book on multivariate discrete distributions for more information (this construction of a bivariate Poisson distribution is not unique). Also, it has the drawback that $\rho \in [0, \min(\lambda_1, \lambda_2)/\sqrt{\lambda_1\lambda_2} ]$ when $\lambda_1 \neq \lambda_2$ as discussed by Genest et al. 2018.
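A quick simulation sketch of this construction (the parameter values are mine, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
r, lam1, lam2, n = 2.0, 3.0, 5.0, 200_000
Y  = rng.poisson(r,    n)   # shared component
Y1 = rng.poisson(lam1, n)
Y2 = rng.poisson(lam2, n)
X1, X2 = Y1 + Y, Y2 + Y

print(X1.mean(), X2.mean())        # about lam1 + r and lam2 + r
print(np.corrcoef(X1, X2)[0, 1])   # about r / sqrt((lam1+r)*(lam2+r)) ≈ 0.338
```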
Example of two dependent random variables that satisfy $E[f(X)f(Y)]=Ef(X)Ef(Y)$ for every $f$ Does anyone have an example of two dependent random variables, that satisfy this relation? $E[f(X)f(Y)]=E[f(X)]E[f(Y)]$ for every function $f(t)$. Thanks. *edit: I still couldn't find an example. I think one should be of two identically distributed variables, since all the "moments" need to be independent: $Ex^iy^i=Ex^iEy^i$. That's plum hard...
If you take dependent random variables $X$ and $Y$, and set $X^{'} = X - E[X]$ and $Y^{'} = Y - E[Y]$, then $E[f(X^{'})f(Y^{'})]=E[f(X^{'})]E[f(Y^{'})]=0$ as long as $f$ preserves the zero expected value. I guess you cannot show this for all $f$.
The graph of a smooth real function is a submanifold Given a function $f: \mathbb{R}^n \rightarrow \mathbb{R}^m $ which is smooth, show that $$\operatorname{graph}(f) = \{(x,f(x)) \in \mathbb{R}^{n+m} : x \in \mathbb{R}^n\}$$ is a smooth submanifold of $\mathbb{R}^{n+m}$. I'm honestly completely unsure of where or how to begin this problem. I am interested in definitions and perhaps hints that can lead me in the right direction.
The map $\mathbb R^n\to \mathbb R^{n+m}$ given by $t\mapsto (t, f(t))$ has the Jacobian matrix $\begin{pmatrix}I_n\\f'(t)\end{pmatrix}$, which has full rank $n$ for all $t$ (because of the identity submatrix). This means that its image is a manifold. Is there anything unclear about that? How is this a proof that it is a manifold? A manifold of dimension $n$ is a set $X$ such that for each $x\in X$ there exists a neighborhood $H_x\subset X$ such that $H_x$ is homeomorphic to an open subset of $\mathbb R^n$. In this case, the whole of $X=\operatorname{graph}(f)$ is homeomorphic to $\mathbb R^n$. The definition of a manifold varies; often the homeomorphism is required to be a diffeomorphism, which is true here as well. Think of it this way: a manifold $X$ of dimension $2$ is something with the property that wherever someone makes a dot on it with a pen, I can cut out a piece of $X$ around that dot and say: "See, my piece is almost like a piece of paper, it's just a bit curvy." The definition of a manifold might seem strange here because you can take the neighborhood to be the whole of $X$. This is not always the case: a sphere is a manifold as well, but the whole sphere is not homeomorphic to $\mathbb R^2$; you have to take only some cut-out of it.
If $u''>0$ in $\mathbf{R}^+$ then $u$ is unbounded? If $u$ is a positive function such that $u''>0$ in the whole $\mathbf{R}^+$ then $u$ is unbounded? In fact, I know that if $u''>0$ then $u$ is strictly convex. I think that implies $u$ is coercive. I want to prove it.
$$ u(x) = e^{-x} $$ EDIT: if you actually meant the entire real line $\mathbb R,$ then any $C^2$ function $u(x)$ really is unbounded. Proof: as $u'' > 0,$ we know that $u'$ cannot always be $0.$ As a result, it is nonzero at some $x=a.$ If $u'(a) > 0,$ then for $x > a$ we have $u(x) > u(a) + (x-a) u'(a),$ which is unbounded. If, instead, $u'(a) < 0,$ then for $x < a$ we have $u(x) > u(a) + (x-a) u'(a),$ which is unbounded as $(x-a)$ is negative. Both of these are the finite Taylor theorem. Examples with minimal growth include $$ x + \sqrt{1 + x^2} $$ and $$ -x + \sqrt{1 + x^2} $$ Note that $C^2$ is not required; it suffices that the second derivative always exist and is always positive. Taylor's with remainder.
$i,j,k$ Values of the $\Theta$ Matrix in Neural Networks SO I'm looking at these two neural networks and walking through how the $ijk$ values of $\Theta$ correspond to the layer, the node number. Either there are redundant values or I'm missing how the subscripts actually map from node to node. $\Theta^i_{jk}$ ... where this is read as " Theta superscript i subscript jk " As shown here: It looks like the $\Theta$ value corresponding to the node circled in teal would be $\Theta^2_{12}$ ... where: * *superscript $i=2$ ( layer 2 ) *$j=1$ ( node number within the subsequent layer ? ) *$k=2$ ( node number within the current layer ? ) If I'm matching the pattern correctly I think the $j$ value is the node to the right of the red circled node ... and the $k$ value is the teal node... Am I getting this right? Because between the above image and this one: That seems to be the case ... can I get a confirmation on this?
Yes: $\Theta^i_{jk}$ is the weight applied to the activation of node $k$ in layer $i$ when computing the activation of node $j$ in the next layer, $i+1$. The superscript names the layer the activations come from, $k$ indexes the node within that layer, and $j$ indexes the node within the subsequent layer, exactly as you describe.
Arc Length: Difficulty With The Integral The question is to find the arc length of a portion of a function. $$y=\frac{3}{2}x^{2/3}\text{ on }[1,8]$$ I couldn't quite figure out how to evaluate the integral, so I appealed to the solution manual for aid. I don't quite understand what they did in the 5th step. Could someone perhaps elucidate it for me?
The expression in brackets is precisely what you need for the substitution $u=x^{2/3}+1$ to work.
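Since the question is about evaluating the integral, here is the full computation for reference (my own working; the manual's fifth step is presumably the same substitution): with $y=\frac32x^{2/3}$ we get $y'=x^{-1/3}$, so $$L=\int_1^8\sqrt{1+x^{-2/3}}\,dx=\int_1^8 x^{-1/3}\sqrt{x^{2/3}+1}\,dx.$$ Substituting $u=x^{2/3}+1$, $du=\tfrac23x^{-1/3}\,dx$, turns this into $$L=\frac32\int_2^5\sqrt u\,du=\Big[u^{3/2}\Big]_2^5=5\sqrt5-2\sqrt2\approx 8.35.$$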
Proof of $\frac{Y^{\lambda}-\lambda}{\sqrt{\lambda}}\to Z\sim N(0,1)$ in distribution as $\lambda\to\infty$? This is an exercise of the Central Limit Theorem: Let $Y^{\lambda}$ be a Poisson random variable with parameter $\lambda>0$. Prove that $\frac{Y^{\lambda}-\lambda}{\sqrt{\lambda}}\to Z\sim N(0,1)$ in distribution as $\lambda\to\infty$. I've done that $$ Z_n\to Z\sim N(0,1) $$ in distribution using the CLT, where $Z_n=(Y^n-n)/\sqrt{n}$. Some naive attempt to go is considering $$ Y^{n}\leq Y^{\lambda}\leq Y^{n+1}\tag{*} $$ where $n\leq\lambda\leq n+1$ and somehow use the squeeze theorem. But both (*) and the squeeze theorem in convergence in distribution are NOT justified. How can I go on? Or do I need an alternative direction?
The squeeze theorem in convergence in distribution can be made fully rigorous in the situation you describe--but the shortest proof here might be through characteristic functions. Recall that if $Y^\lambda$ is Poisson with parameter $\lambda$, $\varphi_\lambda(t)=\mathbb E(\mathrm e^{\mathrm itY^\lambda})$ is simply $\varphi_\lambda(t)=\mathrm e^{-\lambda(1-\mathrm e^{\mathrm it})}$. Thus $\mathbb E(\mathrm e^{\mathrm itZ^\lambda})=\mathrm e^{-\mathrm it\sqrt{\lambda}}\varphi_\lambda(t/\sqrt{\lambda})=\mathrm e^{-g_\lambda(t)}$ with $$ g_\lambda(t)=\mathrm it\sqrt{\lambda}+\lambda-\lambda\mathrm e^{\mathrm it/\sqrt{\lambda}}. $$ Expanding the exponential up to second order yields $$ g_\lambda(t)=\mathrm it\sqrt{\lambda}+\lambda-\lambda\cdot(1+\mathrm it/\sqrt{\lambda}-t^2/2\lambda)+o(1)\to\tfrac12t^2. $$ Thus, for every $t$, $\mathbb E(\mathrm e^{\mathrm itZ^\lambda})\to\mathrm e^{-t^2/2}=\mathbb E(\mathrm e^{\mathrm itZ})$ where $Z$ is standard normal, hence $Z^\lambda\to Z$ in distribution.
How can I give a bound on the $L^2$ norm of this function? I came across this question in an old qualifying exam, but I am stumped on how to approach it: For $f\in L^p((1,\infty), m)$ ($m$ is the Lebesgue measure), $2<p<4$, let $$(Vf)(x) = \frac{1}{x} \int_x^{10x} \frac{f(t)}{t^{1/4}} dt$$ Prove that $$||Vf||_{L^2} \leqslant C_p ||f||_{L^p}$$ for some finite number $C_p$, which depends on $p$ but not on $f$.
Using Hölder's inequality, we have for $x>1$, \begin{align} |V(f)(x)|&\leqslant \lVert f\rVert_{L^p}\left(\int_x^{10 x}t^{-\frac p{4(p-1)}}dt\right)^{\frac{p-1}p}\frac 1x\\ &=\lVert f\rVert_{L^p}A_p\left(x^{-\frac p{4(p-1)}+1}\right)^{\frac{p-1}p}\frac 1x\\ &=A_p\lVert f\rVert_{L^p}x^{\frac{p-1}p-\frac 14}\frac 1x\\ &=A_p\lVert f\rVert_{L^p}x^{-\frac 1p-\frac 14}. \end{align} To conclude, we have to check that $\int_1^{+\infty}x^{-\frac 2p-\frac 12}dx$ is convergent. As $p<4$, it's the case.
Compute the length of an equilateral triangle's side given the area? Given the area of an equilateral triangle, what is an algorithm to determine the length of a side?
Let $s$ be the side, and $A$ the area. Drop a perpendicular from one vertex to the opposite side. By the Pythagorean Theorem, the height of the triangle is $\sqrt{s^2-\frac{1}{4}s^2}=\frac{s\sqrt{3}}{2}$. It follows that $$A=\frac{s^2\sqrt{3}}{4}.$$ Thus $$s^2=\frac{4A}{\sqrt{3}},$$ and therefore $$s=\sqrt{\frac{4A}{\sqrt{3}}}.$$ There are several ways to rewrite the above expression.
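Since the question asks for an algorithm, here is the final formula as a minimal code sketch (the function name is mine):

```python
import math

def side_from_area(area):
    # From A = s^2 * sqrt(3) / 4, solve for s.
    return math.sqrt(4 * area / math.sqrt(3))

print(side_from_area(math.sqrt(3) / 4))  # 1.0, a unit-side sanity check
```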
Find an angle in a given triangle $\triangle ABC$ has sides $AC = BC$ and $\angle ACB = 96^\circ$. $D$ is a point in $\triangle ABC$ such that $\angle DAB = 18^\circ$ and $\angle DBA = 30^\circ$. What is the measure (in degrees) of $\angle ACD$?
Take $O$ the circumcenter of $\triangle ABD$, see that $\triangle DAO$ is equilateral and, since $\widehat{BAD}=18^\circ$, we get $\widehat{BAO}=42^\circ$, i.e. $O$ is reflection of $C$ about $AB$, that is, $AOBC$ is a rhombus, hence $AD=AO=AC,\triangle CAD$ is isosceles with $\widehat{ADC}=\widehat{ACD}=78^\circ$, done.
Conditions for Schur decomposition and its generalization Let $M$ be a $n$ by $n$ matrix over a field $F$. When $F$ is $\mathbb{C}$, $M$ always has a Schur decomposition, i.e. it is always unitarily similar to a triangular matrix, i.e. $M = U T U^H$ where $U$ is some unitary matrix and $T$ is a triangular matrix. * *I was wondering for an arbitrary field $F$, what are some conditions for $M$ to admit Schur decomposition? *Consider a generalization of Schur decomposition, $M = P T P^{-1}$ where $P$ is some invertible matrix and $T$ is a triangular matrix. I was wondering what some conditions are for $M$ to admit such an decomposition? Note that $M$ admit such an decomposition when $F$ is $\mathbb{C}$, since it always has Schur decomposition. Thanks!
If the characteristic polynomial factors into linear factors over $F$, then the Jordan form works as your triangular matrix. Conversely, if $M$ is similar to a triangular matrix $T$, then the characteristic polynomial of $M$ is the characteristic polynomial of $T$, which clearly factors into linear factors. So, for your second question, the criterion is exactly the same as for the Jordan decomposition; a similar triangular matrix is just a lazy variant of the Jordan decomposition.
Must-read papers in Operator Theory I have basically finished my grad school applications and have some time at hand. I want to start reading some classic papers in Operator Theory so as to breathe more culture here. I have read some when doing specific problems but have never systematically study the literature. I wonder whether someone can give some suggestions on where to start since this area has been so highly-developed. Maybe to focus the attention let's, say, try to make a list of the top 20 must-read papers in Operator Theory. I believe this must be a very very difficult job, but maybe some more criteria would make it a little bit easier. * *I can only read English and Chinese and it's a pity since I know many of the founding fathers use other languages. *I prefer papers that give some kind of big pictures, since I can always pick up papers related to specific problems when I need them (but this is not a strict restriction). *I would like to focus on the theory itself, not too much on application to physics. *I have already done a rather thorough study of literature related to the invariant subspace problem, so I guess we can omit this important area. Thanks very much!
Cuntz - Simple $C^*$-algebras generated by isometries The Cuntz algebras are very important in various places in C*-algebra theory.
Prove that $\lim_{x \rightarrow 0} \frac{1}{x}\int_0^x f(t) dt = f(0)$. Assume $f: \mathbb{R} \rightarrow \mathbb{R}$ is continuous. Prove that $\lim_{x \rightarrow 0} \frac{1}{x}\int_0^x f(t) dt = f(0)$. I'm having a little confusion about proving this. So far, it is clear that $f$ is continuous at 0 and $f$ is Riemann integrable. So with that knowledge, I am trying to use the definition of continuity. So $|\frac{1}{x}\int_0^x f(t) dt - f(0)|=|\frac{1}{x}(f(x)-f(0))-f(0)|$. From here, I'm not sure where to go. Any help is appreciated. Thanks in advance.
$\def\e{\varepsilon}\def\abs#1{\left|#1\right|}$As $f$ is continuous at $0$, for $\e > 0$ there is an $\delta > 0$ such that $\abs{f(x) - f(0)} \le \e$ for $\abs x \le \delta$. For these $x$ we have \begin{align*} \abs{\frac 1x \int_0^x f(t)\, dt - f(0)} &= \abs{\frac 1x \int_0^x \bigl(f(t) - f(0)\bigr)\,dt}\\ &\le \frac 1x \int_0^x \abs{f(t) - f(0)}\, dt\\ &\le \frac 1x \int_0^x \e\,dt\\ &= \e \end{align*} So $\abs{f(0) - \frac 1x \int_0^x f(t)\,dt} \le \e$ for $\abs x \le \delta$, as wished.
Does the Laplace transform biject? Someone wrote on the Wikipedia article for the Laplace trasform that 'this transformation is essentially bijective for the majority of practical uses.' Can someone provide a proof or counterexample that shows that the Laplace transform is not bijective over the domain of functions from $\mathbb{R}^+$ to $\mathbb{R}$?
For "the majority of practical uses" it is important that the Laplace transform ${\cal L}$ is injective. This means that when you have determined a function $s\mapsto F(s)$ that suits your needs, there is at most one process $t\mapsto f(t)$ such that $F$ is its Laplace transform. You can then look up this unique $f$ in a catalogue of Laplace transforms. This injectivity of ${\cal L}$ is the content of Lerch's theorem and is in fact an essential pillar of the "Laplace doctrine". The theorem is proven first for special cases where we have an inversion formula, and then extended to the general case. The difference between "injectivity" and "bijectivity" here is that we don't have a simple description of the space of all Laplace transforms $F$. But we don't need to know all animals when we want to analyze a zebra. Lerch's theorem tells us that it has a unique pair of parents.
Is there a $SL(2,\mathbb{Z})$-action on $\mathbb{Z}$? Is there a $SL(2,\mathbb{Z})$-action on $\mathbb{Z}$? I read this somewhere without proof and I am not sure if this is true. Thank you for your help.
(This is completely different to my first 'answer', which was simply wrong.) Denote by $\text{End}(\mathbb{Z})$ the semi-group of group endomorphisms of $\mathbb{Z}$. Since $\mathbb{Z}$ is cyclic, any endomorphism is determined by the image of the generator $1$, and since $1 \mapsto n$ is an endomorphism for any $n\in\mathbb{Z}$, this is all of them. Since $SL(2,\mathbb{Z})$ is a group, all of its elements are invertible, so must map to invertible endomorphisms, i.e. automorphisms. Obviously these are given only by $n = \pm 1$ in the notation above. So $\text{Aut}(\mathbb{Z}) \cong \mathbb{Z}_2$. So the question becomes: is there a non-trivial homomorphism $\phi : SL(2,\mathbb{Z}) \to \mathbb{Z}_2$? Since $\mathbb{Z}_2$ is Abelian, $\phi$ must factor through the Abelianisation of $SL(2,\mathbb{Z})$, which is$^*$ $\mathbb{Z}_{12}$. There is a unique surjective homomorphism $\mathbb{Z}_{12} \to \mathbb{Z}_2$, and therefore a unique surjective $\phi$, which gives a unique non-trivial action of $SL(2,\mathbb{Z})$ on $\mathbb{Z}$. Unfortunately, I can't see an easy way to decide whether a given $SL(2,\mathbb{Z})$ matrix maps to $1$ or $-1$, but maybe somebody else can. $^*$A proof of this can be found at the link provided in the comments.
Complex Analysis and Limit point help So S is a complex sequence (an from n=1 to infinity) has limit points which form a set E of limit points. How do I prove that every limit point of E are also members of the set E. I think epsilons will need to be used but I'm not sure. Thanks.
Let $z$ be a limit point of $E$, and take any $\varepsilon>0$. There is some $x\in E$ with $\lvert x-z\rvert<\varepsilon/2$. And since $x\in E$, there are infinitely many members of $S$ within an $\varepsilon/2$-ball around $x$. They will all be within an $\varepsilon$-ball around $z$, and you're done.
Example 2, Chpt 4 Advanced Mathematics (I) $$\int \frac{x+2}{2x^3+3x^2+3x+1}\, \mathrm{d}x$$ I can get it down to this: $$\int \frac{2}{2x+1} - \frac{x}{x^2+x+1}\, \mathrm{d}x $$ I can solve the first part but I don't exactly follow the method in the book. $$ = \ln \vert 2x+1 \vert - \frac{1}{2}\int \frac{\left(2x+1\right) -1}{x^2+x+1}\, \mathrm{d}x $$ $$= \ln \vert 2x+1 \vert - \frac{1}{2} \int \frac{\mathrm{d}\left(x^2+x+1\right)}{x^2+x+1} + \frac{1}{2}\int \dfrac{\mathrm{d}x}{\left(x+\dfrac{1}{2}\right)^2 + \frac{3}{4}} $$ For the 2nd part: I tried $ u = x^2+x+1 $ and $\mathrm{d}u = 2x+1\, \mathrm{d}x$ that leaves me with $\frac{\mathrm{d}u - 1}{2} = x\, \mathrm{d}x$ which seems wrong. because $x^2+x+1$ doesn't factor, I don't see how partial fractions again will help. $x = Ax+B$ isn't helpful.
The post indicates some difficulty with finding $\int \frac{dx}{x^2+x+1}$. We solve a more general problem. But I would suggest for your particular problem, you follow the steps used, instead of using the final result. Suppose that we want to integrate $\dfrac{1}{ax^2+bx+c}$, where $ax^2+bx+c$ is always positive, or always negative. We complete the square. In order to avoid fractions, note that equivalently we want to find $$\int \frac{4a\,dx}{4a^2 x^2+4abx+4ac}.$$ So we want to find $$\int \frac{4a\,dx}{(2ax+b)^2 + (4ac-b^2)}.$$ Let $$2ax+b=u\sqrt{4ac-b^2}.$$ Then $2a\,dx=\sqrt{4ac-b^2}\,du$. Our integral simplifies to $$\frac{2}{\sqrt{4ac-b^2}}\int\frac{du}{u^2+1},$$ and we are finished.
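Applying those same steps to the integral from the question (my own continuation, following the answer's advice to redo the steps rather than quote the formula): for $x^2+x+1$ we have $a=b=c=1$ and $4ac-b^2=3$, so $$\int\frac{dx}{x^2+x+1}=\frac{2}{\sqrt3}\arctan\frac{2x+1}{\sqrt3}+C,$$ and therefore $$\int\frac{x+2}{2x^3+3x^2+3x+1}\,dx=\ln|2x+1|-\frac12\ln\left(x^2+x+1\right)+\frac1{\sqrt3}\arctan\frac{2x+1}{\sqrt3}+C.$$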
Find all singularities of$ \ \frac{\cos z - \cos (2z)}{z^4} \ $ How do I find all singularities of$ \ \frac{\cos z - \cos (2z)}{z^4} \ $ It seems like there is only one (z = 0)? How do I decide if it is isolated or nonisolated? And if it is isolated, how do I decide if it is removable or not removable? If it is non isolated, how do I decide the orders of the singularities? Thanks!!!
$$\cos z = 1-\frac{z^2}{2}+\cdots$$ $$\cos2z=1-\frac{(2z)^2}{2}+\cdots$$ $$\frac{\cos z-\cos2z}{z^4}= \frac{3}{2z^2}+\left(\frac{-15}{4!}+a_1z^2 +\cdots\right),$$ hence $z=0$ is the only singularity of the function; it is isolated, and the Laurent expansion shows it is a pole of order $2$.
Finding asymptotes of exponential function and one-sided limit Find the asymptotes of $$ \lim_{x \to \infty}x\cdot\exp\left(\dfrac{2}{x}\right)+1. $$ How is it done?
A related problem. We will use the Taylor series of the function $e^t$ at the point $t=0$, $$ e^t = 1+t+\frac{t^2}{2!}+\frac{t^3}{3!}+\dots .$$ $$ x\,e^{2/x}+1 = x ( 1+\frac{2}{x}+ \frac{1}{2!}\frac{2^2}{x^2}+\dots )+1=x+3+\frac{2^2}{2!}\frac{1}{x}+\frac{2^3}{3!}\frac{1}{x^2}+\dots$$ $$ = x+3+O(1/x).$$ Now, you can see when $x$ goes to infinity, then you have $$ x\,e^{2/x}+1 \sim x+3 $$ Here is the plot of $x\,e^{2/x}+1$ and the Oblique asymptote $x+3$
Identity for $\zeta(k- 1/2) \zeta(2k -1) / \zeta(4k -2)$? Is there a nice identity known for $$\frac{\zeta(k- \tfrac{1}{2}) \zeta(2k -1)}{\zeta(4k -2)}?$$ (I'm dealing with half-integral $k$.) Equally, an identity for $$\frac{\zeta(s) \zeta(2s)}{\zeta(4s)}$$ would do ;)
Let $$F(s) = \frac{\zeta(s)\zeta(2s)}{\zeta(4s)}.$$ Then clearly the Euler product of $F(s)$ is $$F(s) = \prod_p \frac{\frac{1}{1-1/p^s}\frac{1}{1-1/p^{2s}}}{\frac{1}{1-1/p^{4s}}}= \prod_p \left( 1 + \frac{1}{p^s} + \frac{2}{p^{2s}} + \frac{2}{p^{3s}} + \frac{2}{p^{4s}} + \frac{2}{p^{5s}} + \cdots\right).$$ Now introduce $$ f(n) = \prod_{p^2|n} 2.$$ It follows that $$ F(s) = \sum_{n\ge 1} \frac{f(n)}{n^s}.$$ We can use this e.g. to study the average order of $f(n)$, given by $$ \frac{1}{n} \sum_{k=1}^n f(k).$$ The function $F(s)$ has a simple pole at $s=1$ and the Wiener-Ikehara-Theorem applies. The residue is $$\operatorname{Res}_{s=1} F(s) = \frac{15}{\pi^2}$$ so that finally $$ \frac{1}{n} \sum_{k=1}^n f(k) \sim \frac{15}{\pi^2}.$$ In fact I would conjecture that we can do better and we ought to have $$ \frac{1}{n} \sum_{k=1}^n f(k) \sim \frac{15}{\pi^2} + \frac{6}{\pi^2}\zeta\left(\frac{1}{2}\right) n^{-1/2}.$$
What is vector division? My question is: We have addition, subtraction and muliplication of vectors. Why cannot we define vector division? What is division of vectors?
The quotient of two vectors is a quaternion by definition. (The product of two vectors can also be regarded as a quaternion, according to the choice of a unit of space.) A quaternion is a relative factor between two vectors that acts respectively on the vector's two characteristics length and direction; through its tensor or modulus, the ratio of lengths taken as a positive number; and the versor or radial quotient, the ratio of orientations in space, taken as being equal to an angle in a certain plane. The versor has analogues in the $+$ and $-$ signs of the real numbers, and in the argument or phase of the complex numbers; in space, a versor is described by three numbers: two to identify a point on the unit-sphere which is the axis of positive rotation, and one to identify the angle around that axis. (The angle is canonically taken to be positive and less than a straight angle, so the axis of positive rotation is reversed when the two vectors are exchanged in their plane.) The tensor and versor which describe a vector quotient together have four numbers in their specification (therefore a quaternion). Two quaternions are multiplied or divided by multiplying or dividing their respective tensors and versors. A versor has a representation as a great circle arc, connecting the points where a sphere is pierced by the dividend and divisor rays going from its center; these arcs have the same condition for equality as vectors do, viz. equal magnitude, and parallel direction. But on a sphere, no two arcs are parallel unless they are part of the same great circle. So, the vector-arcs are compared or compounded by moving one end of each to the line of intersection of their two planes, then taking the third side of the spherical triangle as the arc to be the product or the quotient of the versors.
Fixed point of $\cos(\sin(x))$ I can show that $\cos(\sin(x))$ is a contraction on $\mathbb{R}$ and hence by the Contraction Mapping Theorem it will have a unique fixed point. But what is the process for finding this fixed point? This is in the context of metric spaces, I know in numerical analysis it can be done trivially with fixed point iteration. Is there a method of finding it analytically?
The Jacobi-Anger expansion gives an expression for your formula as: $\cos(\sin(x)) = J_0(1)+2 \sum_{n=1}^{\infty} J_{2n}(1) \cos(2nx)$. Since the "harmonics" in the sum rapidly damp to zero, to second order the equation for the fixed point can be represented as: $x= J_0(1) + 2[J_2(1)(\cos(2x)) + J_4(1)(\cos(4x))]$. Using Wolfram Alpha to solve this I get $x\approx 0.76868..$
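For comparison (my addition), a plain fixed-point iteration gives nearly the same value without truncating the series; the small discrepancy in the fourth decimal comes from the second-order truncation above.

```python
# Fixed-point iteration x_{k+1} = cos(sin(x_k)); converges since the map is a contraction.
import math

x = 0.0
for _ in range(100):
    x = math.cos(math.sin(x))
print(x)   # approximately 0.7682
```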
Results of dot product for complex functions Suppose we are given a $C^1$ function $f(t):\mathbb{R} \rightarrow \mathbb{C}$ with $f(0) = 1$, $\|f(t)\| = 1$ and $\|f'(t)\| = 1$. I have already proven that $\langle f(t), f'(t)\rangle = 0$ for all $t$. Now I have to show that either $f'(t) = if(t)$ or $f'(t) = -i f(t)$. How do I go about showing this? (I am terribly sorry for the horrible title, I could not think of a good one).
Presumably by $\langle f(t) , f'(t) \rangle = 0$, you mean that $\text{Re}\, f(t) \overline{f'(t)} = 0$ (note that the full product $f(t)\overline{f'(t)}$ cannot vanish: if $z_1,z_2 \in \mathbb{C}$ and $z_1 \overline{z_2} = 0$, then you must have either $z_1 = 0$ or $z_2 = 0$, and here both factors have modulus $1$). If $\text{Re}\, f(t) \overline{f'(t)} = 0$, then $f(t) \overline{f'(t)} = i \zeta(t)$, where $\zeta$ is real valued. $\zeta$ is continuous, and furthermore, $|f(t) \overline{f'(t)}| = 1 =|\zeta(t)|$. Consequently, $\zeta$ is either the constant $1$ or $-1$. Multiplying $f(t) \overline{f'(t)} = i \zeta(t)$ on both sides by $f'(t)$ gives $f(t) = i \zeta(t) f'(t)$, from which the result follows.
Probability Problem with $n$ keys A woman has $n$ keys, one of which will open a door. a)If she tries the keys at random, discarding those that do not work, what is the probability that she will open the door on her $k^{\mathrm{th}}$ try? Attempt: On her first try, she will have the correct key with probability $\frac1n$. If this does not work, she will throw it away and on her second attempt, she will have the correct key with probability $\frac1{(n-1)}$. So on her $k^{\mathrm{th}}$ try, the probability is $\frac1{(n-(k-1))}$ This does not agree with my solutions. b)The same as above but this time she does not discard the keys if they do not work. Attempt: We want the probability on her $k^{\mathrm{th}}$ try. So we want to consider the probability that she must fail on her $k-1$ attempts. Since she keeps all her keys, the correct one is chosen with probability $\frac1n$ for each trial. So the desired probability is $(1-\frac{1}{n})^{k-1} (\frac1n)^k$. Again, does not agree with solutions. I can't really see any mistake in my logic. Can anyone offer any advice? Many thanks
For $(a)$, the probability that she will open on the first try is $\dfrac1n$. You have this right. However, the probability that she will open on the second try is the probability that she fails in her first attempt and succeeds in her second attempt. Hence, the probability is $$\underbrace{\dfrac{n-1}n}_{\text{Prob of failure in her $1^{st}$ attempt.}} \times \underbrace{\dfrac1{n-1}}_{\text{Prob of success in $2^{nd}$ attempt given failure in $1^{st}$ attempt.}} = \dfrac1n$$ The probability that she will open on the third try is the probability that she fails in her first and second attempts and succeeds in her third attempt. Hence, the probability is $$\underbrace{\dfrac{n-1}n}_{\text{Prob of failure in her $1^{st}$ attempt.}} \times \underbrace{\dfrac{n-2}{n-1}}_{\text{Prob of failure in $2^{nd}$ attempt given failure in $1^{st}$ attempt.}} \times \underbrace{\dfrac1{n-2}}_{\text{Prob of success in $3^{rd}$ attempt given failure in first two attempts.}} = \dfrac1n$$ Hence, the probability she opens in her $k^{th}$ attempt is $\dfrac1n$. (Also note that the probabilities must add up to one, i.e. $$\sum_{k=1}^{n} \dfrac1n = 1$$ which is not the case in your answer). For $(b)$, the probability that she will open on her $k^{th}$ attempt is the probability she fails in her first $(k-1)$ attempts and succeeds in her $k^{th}$ attempt. The probability for this is $$\underbrace{\dfrac{n-1}{n}}_{\text{Fails in $1^{st}$ attempt}} \times \underbrace{\dfrac{n-1}{n}}_{\text{Fails in $2^{nd}$ attempt}} \times \cdots \underbrace{\dfrac{n-1}{n}}_{\text{Fails in $(k-1)^{th}$ attempt}} \times \underbrace{\dfrac1{n}}_{\text{Succeeds in $k^{th}$ attempt}} = \left(1-\dfrac1n \right)^{k-1} \dfrac1n$$ Again a quick check here is that the sum $$\sum_{k=1}^{\infty} \left(1-\dfrac1n \right)^{k-1} \dfrac1n$$ should be $1$. Note that here her number of tries could be arbitrarily large since she doesn't discard the keys from her previous tries.
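A quick Monte Carlo check of both parts (my own sketch, with $n=5$):

```python
# Simulate both variants and estimate P(door opens on the k-th try), n = 5.
import random
from collections import Counter

n, trials = 5, 200_000

def tries_needed(discard):
    keys = list(range(n))            # key 0 is the working one
    attempt = 0
    while True:
        attempt += 1
        key = random.choice(keys)
        if key == 0:
            return attempt
        if discard:
            keys.remove(key)

for discard in (True, False):
    counts = Counter(tries_needed(discard) for _ in range(trials))
    print("discard" if discard else "keep   ",
          [round(counts[k] / trials, 3) for k in range(1, 6)])
# discarding: each of the five probabilities is close to 1/5 = 0.2
# keeping:    close to (4/5)^(k-1) * (1/5) = 0.2, 0.16, 0.128, 0.102, 0.082
```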
Finding the distance between a line and a vector, given a projection So my question has two parts: a) Let L be a line given by y=2x, find the projection of $\vec{x}$=$\begin{bmatrix}5\\3\end{bmatrix}$ onto the line L. So, for this one: proj$_L$($\vec{x}$) = $\frac{\vec{x}\bullet \vec{y}}{\vec{y}\bullet \vec{y}} \times \vec{y}$ = $\frac{\begin{bmatrix}5\\3\end{bmatrix} \bullet \begin{bmatrix}2\\1\end{bmatrix}}{\begin{bmatrix}2\\1\end{bmatrix} \bullet \begin{bmatrix}2\\1\end{bmatrix}} \times \begin{bmatrix}2\\1\end{bmatrix}$ = $\frac{13}{5} \times \begin{bmatrix}2\\1\end{bmatrix}$ = $\begin{bmatrix}5.2\\2.6\end{bmatrix}$ b) using the above, find the distance between L and the terminal point of x. Here is where I am stuck... my instinct is to just do: $\begin{bmatrix}5\\3\end{bmatrix} - \begin{bmatrix}5.2\\2.6\end{bmatrix}$ = $\begin{bmatrix}-.2\\.4\end{bmatrix}$ but I'm sure this is incorrect... how would I solve this?
Yes, almost done. You need the length of this difference vector; use the Pythagorean theorem. One moment, though: your line is $y=2x$, so it contains $\pmatrix{1\\2}$ rather than $\pmatrix{2\\1}$ (and its normal vector is $\pmatrix{2\\-1}$), so the projection should be redone with that direction vector.
Transition from introduction to analysis to more advanced analysis I am currently studying intro to analysis, learning some things about basic topology in metric spaces, and have almost finished the course. I am thinking of taking some more advanced analysis. Would it be demanding to take a course like functional analysis or real analysis with only the knowledge from an intro analysis course? Do such courses need more mathematical background to handle?
I'm currently taking a graduate functional analysis course having only taken introductory analysis (I majored in physics). It's manageable, but knowing measure theory and Lebesgue integration would have definitely helped.
If $f$ is entire and $z=x+iy$, prove that for all $z$ that belong to $\mathbb C$, $\left(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}\right)|f(z)|^2= 4|f'(z)|^2$ I'm kind of stuck on this problem and have been working on it for days and cannot come to the conclusion of the proof.
Let $\frac{\partial}{\partial z} = \tfrac{1}{2}(\frac{\partial}{\partial x} - i \frac{\partial}{\partial y})$ and $\frac{\partial}{\partial \overline{z}} = \tfrac{1}{2}(\frac{\partial}{\partial x} + i \frac{\partial}{\partial y})$. Then $\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} = 4 \frac{\partial}{\partial z} \frac{\partial}{\partial \overline{z}}$ so $$ (\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}) |f|^2 = 4 \frac{\partial}{\partial z} \frac{\partial}{\partial \overline{z}} f \, \overline{f} = 4 \frac{\partial}{\partial z}(0 \cdot \overline{f} + f \, \overline{f'}) = 4(f' \, \overline{f'} + f \cdot 0) = 4|f'|^2. $$
Prove the isomorphism of cyclic groups $C_{mn}\cong C_m\times C_n$ via categorical considerations As the title suggests, I am trying to prove $C_{mn}\cong C_m\times C_n$ when $\gcd{(m,n)}=1$, where $C_n$ denotes the cyclic group of order $n$, using categorical considerations. Specifically, I am trying to show $C_{mn}$ satisfies the characteristic property of group product, which would then imply an isomorphism since both objects would be final objects in the same category. $C_{mn}$ does come with projection homomorphisms, namely the maps $\pi^{mn}_m: C_{mn} \rightarrow C_m$ and $\pi^{mn}_n: C_{mn} \rightarrow C_n$ which are defined by mapping elements of $C_{mn}$ to the residue classes modulo the subscript. From here I have gotten a bit lost though, as I cannot see where $m$ and $n$ being relatively prime comes in. I am guessing it would make the product map commute, but I cannot see it. Any ideas? Note This is not homework. Also, I understand there are other ways to prove this, namely by considering the cyclic subgroup generated by the element $(1_m,1_n) \in C_m\times C_n$ and noting that the order of this element is the least common multiple of $m$ and $n$ and then using its relation to $\gcd{(m,n)}$. This then shows $\langle (1_m,1_n)\rangle$ has order $mn$ and is cyclic, hence must be isomorphic to $C_{mn}$. Also, $C_m\times C_n$ has order $mn$, so $C_m\times C_n=\langle (1_m,1_n)\rangle\cong C_{mn}$, which completes the proof.
Just follow the definition: Let $X$ be any group, and $f:X\to C_n$, $g:X\to C_m$ homomorphisms. Now you need a unique homomorphism $h:X\to C_{nm}$ which makes both triangles with $\pi_n$ and $\pi_m$ commute. And constructing this $h$ requires basically the Chinese Remainder Theorem (and is essentially the same as constructing the isomorphism $C_n\times C_m\to C_{nm}$ right away): for each pair $(f(x),g(x))$ we have to assign a unique $h(x)\in C_{nm}$ such that, so to say, $h(x)\equiv f(x) \pmod n$ and $h(x)\equiv g(x) \pmod m$.
Prove $\lfloor \log_2(n) \rfloor + 1 = \lceil \log_2(n+1) \rceil $ This is a question a lecturer gave me. I'm more than willing to come up with the answer. But I feel I'm missing something in logs. I know the rules, $\log(ab) = \log(a) + \log(b)$ but that's all I have. What should I read, look up to come up with the answer?
Well, even after your edit, this is not an identity in general. It is valid only for integral $n$. For a counter-example showing it can fail for an arbitrary positive real $n$, take $n = 1.5$. Then, $$\lfloor \log_2(n) \rfloor + 1 = \lfloor \log_2(1.5) \rfloor + 1 = 1$$ and $$\lceil \log_2(n + 1) \rceil = \lceil \log_2(2.5) \rceil = 2$$ However, for $n \in \mathbb N$ and $n > 0$, this identity holds, and can be proven as follows: Let $2^k \le n < 2^{k+1}$ for some integer $k \ge 0$. Therefore, let $n = 2^k + m$, where $k \ge 0$ and $0 \le m < 2^{k}$, and $k, m$ are integers. Then, $$\lfloor \log_2(n) \rfloor + 1 = \lfloor \log_2(2^k + m) \rfloor + 1 = k + 1$$ Notice that $\log_2(2^k + m) = k$ when $m = 0$. Otherwise, $k < \log_2(2^k + m) < k+1$, and hence the above result. Also, $$\lceil \log_2(n + 1) \rceil = \lceil \log_2(2^k + m + 1) \rceil = k + 1$$ This last equation is clear for $m < 2^k -1$, where $2^k < 2^k + m + 1 < 2^{k+1}$. When $m=2^k-1$, notice that $\log_2(2^k+2^k-1+1) = \log_2(2^{k+1})$ is an integer, and therefore $\lceil k+1\rceil=k+1$. Hence, this identity is valid only for positive integers ($n \in \mathbb N$).
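A brute-force check of the identity over a range of integers (my addition):

```python
# Verify floor(log2(n)) + 1 == ceil(log2(n+1)) for n = 1, ..., 9999.
import math

assert all(math.floor(math.log2(n)) + 1 == math.ceil(math.log2(n + 1))
           for n in range(1, 10_000))
print("identity holds for n = 1, ..., 9999")
```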
An upper bound for $\sum_{i = 1}^m \binom{i}{k}\frac{1}{2^i}$? Does anyone know of a reasonable upper bound for the following: $$\sum_{i = 1}^m \frac{\binom{i}{k}}{2^i},$$ where $k$ and $m$ are fixed positive integers, and we assume that $\binom{i}{k} = 0$ whenever $k > i$. One trivial upper bound uses the inequality $\binom{i}{k} \le \binom{i}{\frac i2}$, and the fact that $\binom{i}{\frac{i}{2}} \le \frac{2^{i+1}}{\sqrt{i}}$, to give a bound of $$2\sum_{i = 1}^m \frac{1}{\sqrt{i}},$$ where $\sum_{i = 1}^m \frac{1}{\sqrt{i}}$ is upper bounded by $2\sqrt{m}$, resulting in a bound of $4\sqrt{m}$. Can we do better? Thanks! Yair
Another estimate for $\sum_{i=1}^m\frac1{\sqrt i}$ is $$2\sqrt{m-1}-2=\int_1^{m-1} x^{-1/2}\, dx \le \sum_{i=1}^m\frac1{\sqrt i}\le1+ \int_1^m x^{-1/2}\, dx=2\sqrt m-1. $$ Thus we can remove the summands with $i<k$ by considering $$ 4\sqrt m+2 -4\sqrt{k-2}$$
Reference about Fredholm determinants I am searching for a reference book on Fredholm determinants. I am mainly interested in applications to probability theory, where cumulative distribution functions of limit laws are expressed in terms of Fredholm determinants. I would like to answer questions like: * *How to express a Fredholm determinant on $L^2(\mathcal{C})$, where $\mathcal{C}$ is a contour in $\mathbb{C}$ and the kernel takes a parameter $x$, as a determinant on $L^2(x, +\infty)$; and vice versa. *Which types of kernels give which distributions. For example, in which cases do we get the cumulative distribution function of the Gaussian distribution? These questions are quite vague, but I mostly need to be more familiar with the theory and the classical tricks in $\mathbb{C}$. I found the book "Trace Ideals and Their Applications" by Barry Simon, but I wonder if another reference exists, ideally with applications to probability theory.
Nearly every book on random matrices deals with the subject. For a recent example, see Section 3.4 of An Introduction to Random Matrices by Anderson, Guionnet and Zeitouni.
Limiting distribution and initial distribution of a Markov chain For a Markov chain (can the following discussion be for either discrete time or continuous time, or just discrete time?), * *if for an initial distribution i.e. the distribution of $X_0$, there exists a limiting distribution for the distribution of $X_t$ as $t \to \infty$, I wonder if there exists a limiting distribution for the distribution of $X_t$ as $t \to \infty$, regardless of the distribution of $X_0$? *When talking about limiting distribution of a Markov chain, is it in the sense that some distributions converge to a distribution? How is the convergence defined? Thanks!
* *No, let $X$ be a Markov process in which each state is absorbing, i.e. if you start from $x$ then you always stay there. For any initial distribution $\delta_x$, there is a limiting distribution which is also $\delta_x$ - but this distribution is different for all initial conditions. *The convergence of distributions of Markov chains is usually discussed in terms of $$ \lim_{t\to\infty}\|\nu P_t - \pi\| = 0 $$ where $\nu$ is the initial distribution and $\pi$ is the limiting one; here $\|\cdot\|$ is the total variation norm. AFAIK there is at least a strong theory for the discrete-time case, see e.g. the book by S. Meyn and R. Tweedie "Markov Chains and Stochastic Stability" - the first edition you can easily find online. In fact, there are also extensions of this theory by the same authors to the continuous time case - just check out their work to start with.
Convergence of series $\sum\limits_{n=2}^\infty\frac{n^3+1}{n^4-1}$ Investigate the series for convergence and if possible, determine its limit: $\sum\limits_{n=2}^\infty\frac{n^3+1}{n^4-1}$ My thoughts Let there be the sequence $s_n = \frac{n^3+1}{n^4-1}, n \ge 2$. I have tried different things with no avail. I suspect I must find a lower series which diverges, in order to prove that it diverges, and use the comparison test. Could you give me some hints as a comment? Then I'll try to update my question, so you can double-check it afterwards. Update $$s_n \gt \frac{n^3}{n^4} = \frac1n$$ which means that $$\lim\limits_{n\to\infty} s_n > \lim\limits_{n\to\infty}\frac1n$$ but $$\sum\limits_{n=2}^\infty\frac1n = \infty$$ so $$\sum\limits_{n=2}^\infty s_n = \infty$$ thus the series $\sum\limits_{n=2}^\infty s_n$ also diverges. The question is: is this formally sufficient?
$$\frac{n^3+1}{n^4-1}\gt\frac{n^3}{n^4}=\frac1n\;.$$
Show that if matrices A and B are elements of G, then AB is also an element of G. Let $G$ be the set of $2 \times 2$ matrices of the form \begin{pmatrix} a & b \\ 0 & c\end{pmatrix} such that $ac$ is not zero. Show that if matrices $A$ and $B$ are elements of $G$, then $AB$ is also an element of $G$. Do I just need to show that $AB$ has a non-zero determinant?
Proving that AB has a non-zero determinant is not enough, because not all $2\times 2$ matrices with non-zero determinant are an element of G. You need to prove another property of AB as well, namely that it has the upper-triangular shape you stated. This, combined with a non-zero determinant, guarantees that AB has the prescribed shape with $ac$ not zero.
Derivatives vs Integration * *Given that the continuous function $f: \Bbb R \longrightarrow \Bbb R$ satisfies $$\int_0^\pi f(x) ~dx = \pi,$$ Find the exact value of $$\int_0^{\pi^{1/6}} x^5 f(x^6) ~dx.$$ *Let $$g(t) = \int_t^{2t} \frac{x^2 + 1}{x + 1} ~dx.$$ Find $g'(t)$. For the first question: The way I understand this is that the area under $f(x)$ from $0$ to $\pi$ is $\pi$. Doesn't this mean that the function can be $f(x)=1$? Are there other functions that satisfy this definition? The second line in part one also confuses me, specifically the $x^6$ part! For the second question: Does this have to do something with the Second Fundamental Theory of Calculus? I see that there are two variables, $x$ and $t$, that are involved in this equation.
For the first question, There are infinitely many functions other than $1$ that satisfy $$\int_0^{\pi} f(x) dx = \pi$$ For instance, couple of other examples are $$f(x) = 2- \dfrac{2x}{\pi}$$ and $$f(x) = \dfrac{2x}{\pi}$$ To evaluate $$\int_0^{\pi^{1/6}} x^5 f(x^6) dx$$ make the substitution $t = x^6$ and see what happens... For the second question, Yes make use of the fundamental theorem of calculus i.e. if $$g(t) = \int_{a(t)}^{b(t)} f(x) dx$$ then $$g'(t) = f(b(t)) b'(t) - f(a(t)) a'(t)$$
Notation for repeated application of function If I have the function $f(x)$ and I want to apply it $n$ times, what is the notation to use? For example, would $f(f(x))$ be $f_2(x)$, $f^2(x)$, or anything less cumbersome than $f(f(x))$? This is important especially since I am trying to couple this with a limit toward infinity.
In the course I took on bifurcation theory we used the notation $$f^{\circ n}(x).$$
Find a convex combination of scalars given a point within them. I've been banging my head on this one all day! I'm going to do my best to explain the problem, but bear with me. Given a set of numbers $S = \{X_1, X_2, \dots, X_n\}$ and a scalar $T$, where it is guaranteed that there is at least one member of $S$ that is less than $T$ and at least one member that's greater than $T$, I'm looking for an algorithm to create a convex combination of these scalars that equals $T$. For example, for the set $\{2,4\}$ and the scalar $3$, the answer is: $$.5 \cdot 2 + .5 \cdot 4 = 3.$$ I believe in many cases there are infinitely many solutions. I'm looking for a generalized algorithm/formula to find these coefficients. Additionally, I would like for the coefficient weights to be distributed as evenly as possible (of course while still adding up to 1.) For instance, for the set $\{1,2,4\}$ and the scalar $3$, a technically valid solution would be the same as the first example but with the coefficient for $1$ assigned a weight of 0 - but it would be preferable to assign a non-zero weight. I may not be thinking through this last part very clearly :)
If $X_1<T<X_2$, then $T$ is a weighted average of $X_1$ and $X_2$ with weights $\dfrac{X_2-T}{X_2-X_1}$ and $\dfrac{T-X_1}{X_2-X_1}$, as can be checked by a bit of algebra. Now suppose $X_3$ is also $>T$. Then $T$ is a weighted average of $X_1$ and $X_3$, and you can find the weights the same way. Now take $40\%$ of the weight assigned to $X_2$ in the first case and assign it to $X_2$, take $60\%$ of the weight assigned to $X_3$ in the second case and assign it to $X_3$, and let the weight assigned to $X_1$ be $40\%$ of the weight it got in the first case plus $60\%$ of the weight it got in the second case, and you've got another solution. And as with $40$ and $60$, so also with $41$ and $59$, and so on, and you've got infinitely many solutions. But don't say "infinite solutions" if you mean "infinitely many solutions". "Infinite solutions" means "solutions, each one of which, by itself, is infinite". Later note in response to comments: Say you write $4$ as an average of $3$ and $5$ with weights $1/2$, $1/2$. And you write $4$ as an average of $3$ and $7$ with weights $3/4$, $1/4$. Then you have $4$ as a weighted average of $3$, $5$, and $7$ with weights $1/2,\ 1/2,\ 0$. And you have $4$ as a weighted average of $3$, $5$, and $7$ with weights $3/4,\ 0,\ 1/4$. So find a weighted average of $(1/2,\ 1/2,\ 0)$ and $(3/4,\ 0,\ 1/4)$. For example, $40\%$ of the first plus $60\%$ of the second is $(0.65,\ 0.2,\ 0.15)$. Then you have $4$ as a weighted average of $3$, $5$, and $7$ with weights $0.65$, $0.2$, and $0.15$. And you can come up with infinitely many other ways to write $4$ as a weighted average of $3$, $5$, and $7$ by using other weights than $0.4$ and $0.6$.
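If it helps, here is a small code sketch of the pairing-and-averaging idea described above (the function name and the equal split across pairs are my own choices, not the only possibility):

```python
# Spread T over all usable (below, above) pairs and average the weight vectors,
# so no coefficient is needlessly zero.
def convex_weights(S, T):
    lows  = [i for i, x in enumerate(S) if x < T]
    highs = [i for i, x in enumerate(S) if x > T]
    exact = [i for i, x in enumerate(S) if x == T]
    if exact:                       # T itself is a member of S
        w = [0.0] * len(S)
        for i in exact:
            w[i] = 1.0 / len(exact)
        return w
    pairs = [(i, j) for i in lows for j in highs]
    w = [0.0] * len(S)
    share = 1.0 / len(pairs)        # each pair gets an equal share of the total weight
    for i, j in pairs:
        w[i] += share * (S[j] - T) / (S[j] - S[i])
        w[j] += share * (T - S[i]) / (S[j] - S[i])
    return w

S, T = [1, 2, 4], 3
w = convex_weights(S, T)
print(w, sum(w), sum(wi * xi for wi, xi in zip(w, S)))   # weights, 1.0, 3.0
```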
How to prove that $n^{\frac{1}{3}}$ is not a polynomial? I'm reading Barbeau's Polynomials, there's an exercise: How to prove that $n^{\frac{1}{3}}$ is not a polynomial? I've made this question and with the first answer as an example, I guess I should assume that: $$n^{\frac{1}{3}}=a_pn^p+a_{p-1}n^{p-1}+\cdots+ a_0n^0$$ And then I should make some kind of operation in both sides, the resultant difference would be the proof. But I have no idea on what operation I should do in order to prove that.
If $t^{1/3}$ were a polynomial, then its degree would be at least one (because it is not constant). This would imply $$ \lim_{t\to\infty}\frac{t^{1/3}}t\ne0. $$ But this limit is in fact zero, since $t^{1/3}/t = t^{-2/3}\to 0$. So $t^{1/3}$ cannot be a polynomial.
how to prove $437\,$ divides $18!+1$? (NBHM 2012) I was solving some problems and I came across this problem. I didn't understand how to approach this problem. Can we solve this without actually calculating $18!\,\,?$
Note that $437=(19)(23)$. We prove that $19$ and $23$ divide $18!+1$. That is enough, since $19$ and $23$ are relatively prime. The fact that $19$ divides $18!+1$ is immediate from Wilson's Theorem, which says that if $p$ is prime then $(p-1)!\equiv -1\pmod{p}$. For $23$ we need to calculate a bit. We have $22!\equiv -1\pmod{23}$ by Wilson's Theorem. Now $(18!)(19)(20)(21)(22)=22!$. But $19\equiv -4\pmod{23}$, $20\equiv -3\pmod{23}$, and so on. So $(19)(20)(21)(22)\equiv 24\equiv 1\pmod{23}$. It follows that $18!\equiv 22!\pmod{23}$, and we are finished.
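Just to double-check the conclusion (my addition; the whole point of the argument above is of course that no such computation is needed):

```python
# Brute-force confirmation that 19, 23 and hence 437 divide 18! + 1.
import math

N = math.factorial(18) + 1
print(N % 19, N % 23, N % 437)   # 0 0 0
```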
$\mathrm{Spec}(R)\!=\!\mathrm{Max}(R)\!\cup\!\{0\}$ $\Rightarrow$ $R$ is a PID Is the following true: If $R$ is a commutative unital ring with $\mathrm{Spec}(R)\!=\!\mathrm{Max}(R)\!\cup\!\{0\}$, then $R$ is a PID. If yes, how can one prove it? Since $0$ is a prime ideal, $R$ is a domain. Thus we must prove that every ideal is principal. I'm not sure if this link (first answer) helps.
As mentioned, there are easy counterexamples. However, it is true for UFDs since PIDs are precisely the $\rm UFDs$ of dimension $\le 1,\:$ i.e. such that prime ideals $\ne 0$ are maximal. Below is a sketch of a proof of this and closely related results. Theorem $\rm\ \ \ TFAE\ $ for a $\rm UFD\ D$ $(1)\ \ $ prime ideals are maximal if nonzero $(2)\ \ $ prime ideals are principal $(3)\ \ $ maximal ideals are principal $(4)\ \ \rm\ gcd(a,b) = 1\ \Rightarrow\ (a,b) = 1$ $(5)\ \ $ $\rm D$ is Bezout $(6)\ \ $ $\rm D$ is a $\rm PID$ Proof $\ $ (sketch of $1 \Rightarrow 2 \Rightarrow 3 \Rightarrow 4 \Rightarrow 5 \Rightarrow 6 \Rightarrow 1$) $(1\Rightarrow 2)$ $\rm\ \ P\supset (p)\ \Rightarrow\ P = (p)$ $(2\Rightarrow 3)$ $\ \: $ Clear. $(3\Rightarrow 4)$ $\ \ \rm (a,b) \subsetneq P = (p)\ $ so $\rm\ (a,b) = 1$ $(4\Rightarrow 5)$ $\ \ \rm c = \gcd(a,b)\ \Rightarrow\ (a,b) = c\ (a/c,b/c) = (c)$ $(5\Rightarrow 6)$ $\ \ \rm 0 \ne I \subset D\:$ Bezout is generated by an element with the least number of prime factors $(6\Rightarrow 1)$ $\ \ \rm P \supset (p),\ a \not\in (p)\ \Rightarrow\ (a,p) = (1)\ \Rightarrow\ P = (p)$
Is $\omega_{\alpha}$ sequentially compact? For an ordinal $\alpha \geq 2$, let $\omega_{\alpha}$ be as defined here. It is easy to show that $\omega_{\alpha}$ is limit point compact, but is it sequentially compact?
I think I finally found a solution: In a well-ordered set, every sequence admits a non-decreasing subsequence. Indeed, if $(x_n)$ is any sequence, let $n_0$ be such that $x_{n_0}= \min \{ x_n : n\geq 0 \}$, and $n_1$ such that $x_{n_1}= \min \{ x_n : n > n_0 \}$, and so on; here, $(x_{n_i})$ is a non-decreasing subsequence. Because a non-decreasing sequence is convergent iff it admits a cluster point, limit point compactness and sequential compactness are equivalent.
The sufficient and necessary condition for a function approaching a continuous function at $+\infty$ Problem Suppose $f:\Bbb R^+\to\Bbb R$ satisfies $$\forall\epsilon>0,\exists E>0,\forall x_0>E,\exists\delta>0,\forall x(\left|x-x_0\right|<\delta): \left|f(x)-f(x_0)\right|<\epsilon\tag1$$ Can we conclude that there's some continuous function $g:\Bbb R^+\to\Bbb R$ such that $$\lim_{x\to+\infty}(f(x)-g(x))=0\tag2$$ Re-describe Let $\omega_f(x_0)=\limsup_{x\to x_0}\left|f(x)-f(x_0)\right|$, we can re-describe the first condition (1) as this: $$\lim_{x_0\to+\infty}\omega_f(x_0)=0\tag3$$ Motivation In fact, I'm discovering the sufficient and necessary condition of (2). It's easier to show that (2) implies (3), i.e. (1), because $$\left|f(x)-f(x_0)\right|\le\left|f(x)-g(x)\right|+\left|g(x)-g(x_0)\right|+\left|g(x_0)-f(x_0)\right|$$ Take $\lim_{x_0\to+\infty}\limsup_{x\to x_0}$ for both sides, we'll get the result.
Choose $E_0 := 0$, $(E_n)_n \uparrow \infty$ corresponding to $\varepsilon_n := \frac{1}{n}$ for $n \geq 1$ using (1). If $E_n < x \leq E_{n+1}$, choose $\delta_x > 0$ such that $f(y) \in B_{1/n}(f(x))$ for all $y \in B_{\delta_x} (x)$, according to (1). For $n \geq 1$, the balls $(B_{\delta_x}(x))_{x \in [E_n, E_{n+1}]}$ cover $[E_n, E_{n+1}]$, so we can choose $E_n = x^{(n)}_1 < x^{(n)}_2 < \ldots < x^{(n)}_{M_n} \leq E_{n+1}$ such that $$\bigcup_{k=1}^{M_n} {B_{\delta_{x^{(n)}_k}}(x^{(n)}_k)} \supseteq [E_n, E_{n+1}].$$ Set $M_0 = 1$, $x^{(0)}_1 = 0$. The set of points $\{x^{(n)}_k \: | \: n \geq 0,\, 1 \leq k \leq M_n\}$ subdivides $\mathbb{R}^+$ into consecutive intervals, so we can define the graph of $g:\: \mathbb{R}^+ \rightarrow \mathbb{R}$ as the polygon joining the points $(x^{(n)}_k, f(x^{(n)}_k))$ in the order of the $x^{(n)}_k$. The function $g$ is continuous by definition and it is easy to see that the limit of $f(x) - g(x)$ for $x \to \infty$ is $0$.
How can $\mathbb R^n$ have infinitely many bases if $\mathbb R^n$ does not admit more than $n$ linearly independent vectors? How can it happen that we find infinitely many bases of $\mathbb R^n$ if $\mathbb R^n$ does not admit more than $n$ linearly independent vectors? Also consider that each basis of $\mathbb R^n$ has the same number $n$ of vectors.
Let $E=\{e_1,...,e_n\}$ be the standard basis in $\mathbb{R}^n$. For each $\lambda\neq 0$, let $E_\lambda = \{\lambda e_1, e_2,...,e_n\}$. Then each $E_\lambda$ is a distinct basis of $\mathbb{R}^n$. However, each $E_\lambda$ has exactly $n$ elements.
Exactly half of the elements of $\mathcal{P}(A)$ are odd-sized Let $A$ be a non-empty set and $n$ be the number of elements in $A$, i.e. $n:=|A|$. I know that the number of elements of the power set of $A$ is $2^n$, i.e. $|\mathcal{P}(A)|=2^n$. I came across the fact that exactly half of the elements of $\mathcal{P}(A)$ contain an odd number of elements, and half of them an even number of elements. Can someone prove this? Or hint at a proof?
Fix an element $a\in A$ (this is the point where $A\ne\emptyset$ is needed). Then $$S\mapsto S\operatorname{\Delta}\{a\}$$ (symmetric difference) is a bijection from the set of odd subsets to the set of even subsets.
How to find the number of roots using Rouche theorem? Find the number of roots $f(z)=z^{10}+10z+9$ in $D(0,1)$. I want to find $g(z)$ s.t. $|f(z)-g(z)|<|g(z)|$, but I cannot. Any hint is appreciated.
First, we factor out $z+1$ to get $f(z)=(z+1)(z^9-z^8+z^7-z^6+\dots-z^2+z+9)$. Let $F(z):=z^9-z^8+z^7-z^6+\dots-z^2+z+9$ and $G(z)=9$. Then for $z$ of modulus strictly smaller than $1$, $|F(z)-G(z)|\leqslant 9|z| \lt |G(z)|$, thus by Rouché's theorem $F$ has as many zeros as $G$ (namely none) in $B(0,1-\delta)$ for each positive $\delta$, from which we can read off the number of zeros of $f$ in $D(0,1)$.
A function whose value is either one or zero First I apologize in advance for I don't know math's English at all and I haven't done math in almost a decade. I'm looking for a function whose "domain/ensemble of definition" would be ℝ (or maybe ℤ) and whose "ensemble/domain of variation" would be ℕ{0, 1} that would look like something this awesome ascii graph... ^f(x) | | 1________ | __________ 0________\./__________>x 0| | | | f(x) is always 1, but 0 when x = 0 Actually I need this for programming, I could always find other ways to do what I wanted with booleans so far but it would be much better in most cases to have a simple algebra formula to represent my needs instead. I just hope it's not too complicated a function to write/understand and that it exists of course. Thanks in advance guys.
I think that the most compact way to write this is using the Iverson bracket: $f: \mathbb{R} \to \{0,1\}$ $$ f(x) = [x \neq 0]$$
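Since the question comes from programming, the same thing in code is just a boolean cast (my sketch; most languages have an equivalent one-liner):

```python
def f(x):
    return int(x != 0)   # the Iverson bracket [x != 0]

print(f(0), f(3.7), f(-2))   # 0 1 1
```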
Find the Maclaurin series for the function $\tan^{-1}(2x^2)$ Find the Maclaurin series for the function $\tan^{-1}(2x^2)$ Express your answer in sigma notation, simplified as much as possible. What is the open interval of convergence of the series. I have the correct answer, but I would like to use another method to solve this. By taking the function and its derivative, find the sum and then taking the anti derivative. This does not yield the same answer for me.
You probably know the series for $\tan^{-1} t$. Plug in $2x^2$ for $t$. If you do not know the series for $\arctan t$, you undoubtedly know the series for $\dfrac{1}{1-u}$. Set $u=-x^2$, and integrate term by term. For the interval of convergence of the series for $\tan^{-1} (2x^2)$, you probably know when the series for $\tan^{-1}t$ converges. That knowledge can be readily translated to knowledge about $\tan^{-1}(2x^2)$.
Prove non-zero eigenvalues of skew-Hermitian operator are pure imaginary Just like the title: Assume $T$ is a skew-Hermitian but not a Hermitian operator an a finite dimensional complex inner product space V. Prove that the non-zero eigenvalues of $T$ are pure imaginary.
We need the following properties of the inner product: i) $\langle au,v \rangle = a \langle u,v \rangle \quad a \in \mathbb{C}$, ii) $ \langle u, a v \rangle = \overline{\langle a v, u \rangle} = \overline{a} \overline{\langle v, u \rangle} = \overline{a} \langle u, v \rangle \quad a \in \mathbb{C}.$ Since $T$ is skew-Hermitian, $T^{*}=-T$. Let $u$ be an eigenvector corresponding to the eigenvalue $\lambda$ of $T$; then, writing $\lambda = x+iy$, we have $$ \langle Tu,u \rangle= \langle u,T^{*}u \rangle \Longleftrightarrow \langle Tu,u \rangle = \langle u,-Tu \rangle$$ $$ \langle \lambda u,u \rangle = \langle u,-\lambda u \rangle \Longleftrightarrow \lambda \langle u,u \rangle = -\bar{\lambda} \langle u,u\rangle $$ $$ \Longleftrightarrow \lambda = -\bar{\lambda} \Longleftrightarrow x+iy = -x+iy. $$ What can you conclude from the last equation?
Two questions with mathematical induction First, hello all: our lecture has a problem set with 10 questions, but I have been stuck on these two for about 3 hours and I can't solve them. Any help would be appreciated. Question 1 Given that $T(1)=1$ and $T(n)=2T(\frac{n}{2})+1$ for $n$ a power of $2$ greater than $1$. Using mathematical induction, prove that $T(n) = 2^k\,T(\frac{n}{2^k}) + 2^k - 1$ for $k=0, 1, 2, \dots, \log_2 n$. Question 2 Definition: $H(j) = \frac{1}{1} + \frac{1}{2} + \frac{1}{3} + \dots + \frac{1}{j}$. Facts: $H(16) > 3.38$, $\frac{1}{3} + \frac{1}{4} = \frac{7}{12}, \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \frac{1}{7} = \frac{319}{420} $ a) Using induction on $n$, prove that $H(2^n) > 1 + 7n/12$, for $n\geq 4$.
1) Prove by induction on $k$: If $0\le k\le m$, then $T(2^m)=2^kT(2^{m-k})+2^k-1$. The case $k=0$ is trivial. If we already know that $T(2^m)=2^{k-1}T(2^{m-(k-1)})+2^{k-1}-1$ for all $m\ge k-1$, then for $m\ge k$ we have $$\begin{align}T(2^m)&=2^{k-1}T(2^{m-(k-1)})+2^{k-1}-1\\ &=2^{k-1}\left(2\cdot T\left(\tfrac{2^{m-(k-1)}}2\right)+1\right)+2^{k-1}-1\\ &=2^k T(2^{m-k})+2^k-1\end{align}$$ 2) Note that $$H(2(m+1))-H(2m)=\frac 1{2m+1}+\frac1{2(m+1)}>\frac2{2m+2}=H(m+1)-H(m)$$ and therefore (by induction on $d$) for $d>0$ $$H(2(m+d))-H(2m)>H(m+d)-H(m)$$ hence with $m=d=2^{n-1}$ $$H(2^{n+1})-H(2^n)>H(2^n)-H(2^{n-1})$$ thus by induction on $n$ $$H(2^n)-H(2^{n-1})\ge H(4)-H(2)=\frac7{12}\text{ if }n\ge2$$ and finally by induction on $n$ $$H(2^n)\ge H(16)+\frac7{12}(n-4)\text{ for }n\ge 4.$$ Since $H(16)>3.38>1+\frac{7\cdot4}{12}$, this gives $H(2^n)>1+\frac{7n}{12}$ for all $n\ge 4$.
Convergence of Bisection method I know how to prove the bound on the error after $k$ steps of the Bisection method. I.e. $$|\tau - x_{k}| \leq \left(\frac{1}{2}\right)^{k-1}|b-a|$$ where $a$ and $b$ are the starting points. But does this imply something about the order of convergence of the Bisection method? I know that it converges with order at least 1, is that implied in the error bound? Edit I've had a go at showing it, is what I am doing here correct when I want to demonstrate the order of convergence of the Bisection method? $$\lim_{k \to \infty}\frac{|\tau - x_k|}{|\tau - x_{k-1}|} = \frac{(\frac{1}{2})^{k-1}|b-a|}{(\frac{1}{2})^{k-2}|b-a|}$$ $$=\frac{(\frac{1}{2})^{k-1}}{(\frac{1}{2})^{k-2}}$$ $$=\frac{1}{2}$$ Show this shows linear convergence with $\frac{1}{2}$ being the rate of convergence. Is this correct?
For the bisection you simply have that the error bound (the interval length) satisfies $\epsilon_{i+1}/\epsilon_i = 1/2$, so, by definition, the order of convergence is $1$ (linear convergence), with rate of convergence $1/2$; so yes, your computation is correct.
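A small illustration of that rate (my addition): the bracketing interval, which is the error bound, halves at every step, here for $f(x)=x^2-2$ on $[0,2]$.

```python
# Bisection on f(x) = x^2 - 2: the bound (b - a) halves each iteration.
f = lambda x: x * x - 2
a, b = 0.0, 2.0
root = 2 ** 0.5
for k in range(1, 11):
    m = (a + b) / 2
    if f(a) * f(m) <= 0:
        b = m
    else:
        a = m
    print(k, b - a, abs(m - root))   # the bound halves; the actual error stays below it
```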
Functional analysis summary Does anyone know a good summary containing the most important definitions and theorems of functional analysis?
Georgi E. Shilov's Elementary Functional Analysis, 2nd Ed. (Dover books, 1996) would be a great start, and cheap, as far as textbooks go! For a very brief (17 page) "summary" pdf document, written and posted by Dileep Menon and which might be of interest: An introduction to functional analysis. It contains both definitions and theorems, as well as a list of references. See also the lists of definitions and theorems covered in Gilliam's course on functional analysis: here, and here.
Partial fractions integration Re-express $\dfrac{6x^5 + x^2 + x + 2}{(x^2 + 2x + 1)(2x^2 - x + 4)(x+1)}$ in terms of partial fractions and compute the indefinite integral $\dfrac{1}5{}\int f(x)dx $ using the result from the first part of the question.
Hint Use $$\dfrac{6x^5 + x^2 + x + 2}{(x^2 + 2x + 1)(2x^2 - x + 4)(x+1)}=\frac{A}{(x+1)^3}+\frac{B}{(x+1)^2}+\frac{C}{x+1}+\frac{Dx+E}{2x^2-x+4}+F$$ and solve for $A,B,C,D,E$ and $F$. Explanation Note that this partial fraction decomposition is a case where the degree of the numerator and denominator are the same. (Just notice that the sum of the highest degree terms in the denominator is $2+2+1=5.$) This means that the typical use of partial fraction decomposition does not apply. Furthermore, notice that $(x^2+2x+1)$ factorizes as $(x+1)^2$, which means the denominator is really $(x+1)^3(2x^2-x+4)$. This means the most basic decomposition would involve denominators of $(x+1)^3$ and $(2x^2-x+4)$. However, we can escape that by using a more complicated decomposition involving the denominators $(x+1)^3$, $(x+1)^2$, $(x+1)$, and $(2x^2-x+4)$. The $F$ term is necessary for the $6x^5$ term to arise in the multiplication, intuitively speaking. More thoroughly stated, the $F$ term is needed because of the following equivalency between $6x^5+x^2+x+2$ and the following expression: $$ \begin{align} &A(2x^2-x+4)\\ +&B(2x^2-x+4)(x+1)\\ +&C(2x^2-x+4)(x+1)^2\\ +&[Dx+E](x+1)^3\\ +&F(2x^2-x+4)(x+1)^3. \end{align} $$ Notice how the term $F(2x^2-x+4)(x+1)^3$ is the only possible term that can give rise to $6x^5$? That is precisely why it is there. Hint 2 Part 1 In the integration of $\frac{1}{5}\int f(x)dx$, first separate the integral, $\int f(x)dx$ into many small integrals with the constants removed from the integrals. That is, $\int f(x) dx$ is $$A\int \frac{1}{(x+1)^3}dx+B\int \frac{1}{(x+1)^2}dx+C\int \frac{1}{x+1}dx+D\int \frac{x}{2x^2-x+4}dx\quad+E\int \frac{1}{2x^2-x+4}dx+F\int 1dx.$$ Part 2 Next, use the substitution $u=x+1$ with $du=dx$ on the first three small integrals: $$A\int \frac{1}{u^3}du+B\int \frac{1}{u^2}du+C\int \frac{1}{u}du+D\int \frac{x}{2x^2-x+4}dx+E\int \frac{1}{2x^2-x+4}dx\quad +F\int 1dx.$$ Part 3 To deal with the integral $\int \frac{1}{2x^2-x+4}dx$, you must complete the square. This is done as follows: $$ \begin{align} \int \frac{1}{2x^2-x+4}dx&=\int \frac{\frac{1}{2}}{x^2-\frac{1}{2}x+2}dx\\ &=\frac{1}{2} \int \frac{1}{\left(x-\frac{1}{4}\right)^2+\frac{31}{16}}dx. \end{align} $$ To conclude, you make the trig substitution $x-\frac{1}{4}=\frac{\sqrt{31}}{4}\tan \theta$ with $dx=\left(\frac{\sqrt{31}}{4}\tan \theta+\frac{1}{4}\right)'d\theta=\frac{\sqrt{31}}{4}\sec^2\theta d\theta$. This gives you: $$\frac{1}{2} \int \frac{1}{\left(x-\frac{1}{4}\right)^2+\frac{31}{16}}dx=\frac{1}{2} \int \frac{\frac{\sqrt{31}}{4}\sec^2 \theta}{\frac{31}{16}\tan^2\theta+\frac{31}{16}}d\theta.$$
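If you want to double-check both the decomposition and the final antiderivative, a computer algebra system can do it (my addition; assumes SymPy is available):

```python
import sympy as sp

x = sp.symbols('x')
f = (6*x**5 + x**2 + x + 2) / ((x**2 + 2*x + 1) * (2*x**2 - x + 4) * (x + 1))
print(sp.apart(f, x))                        # the partial fraction decomposition
print(sp.simplify(sp.integrate(f, x) / 5))   # (1/5) * the antiderivative
```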
How many graphs with vertex degrees (1, 1, 1, 1, 2, 4, 5, 6, 6) are there? How many graphs with vertex degrees (1, 1, 1, 1, 2, 4, 5, 6, 6) are there? Assuming that all vertices and edges are labelled. I know there's a long way to do it by drawing all of them and count. Is there a quicker, combinatoric way?
There are none. By the handshaking lemma we know that the number of vertices of odd degree must be even. But there are 5 vertices of odd degree in your graph: the ones with degrees 1, 1, 1, 1, 5.
Verify these logical equivalences by writing an equivalence proof? I have two parts to this question - I need to verify each of the following by writing an equivalence proof: * *$p \to (q \land r) \equiv (p \to q) \land (p \to r)$ *$(p \to q) \land (p \lor q) \equiv q$ Thank you if you can help! It's greatly appreciated.
We make extensive use of the identity $(a \to b) \equiv (\lnot a \lor b)$, and leave you to fill in the reasons for some of the intermediate steps in (2). (1) $\quad p \to (q \wedge r) \equiv \lnot p \lor (q \land r) \equiv (\lnot p \lor q) \land (\lnot p \lor r) \equiv (p \to q) \wedge (p \to r)$. (2) $\quad(p \to q) \land (p \lor q) \equiv q\quad?$ $$(p \to q) \land (p \lor q) \equiv (\lnot p \lor q) \land (p \lor q)\tag{1}$$ $$\equiv \;[(\lnot p \lor q) \land p] \lor [(\lnot p \lor q) \land q]\tag{2}$$ $$\equiv \;[(\lnot p \land p) \lor (q \land p)] \lor [(\lnot p \land q) \lor (q \land q)]\tag{3}$$ $$\equiv \;F \lor (p \land q) \lor [(\lnot p \land q) \lor (q \land q)]\tag{4}$$ $$\equiv \;(p\land q) \lor [(\lnot p \land q) \lor (q \land q)]\tag{5}$$ $$\equiv \;(p\land q) \lor (\lnot p \land q) \lor q\tag{6}$$ $$\equiv \;[(p \lor \lnot p) \land q] \lor q\tag{7}$$ $$\equiv \;(T \land q) \lor q\tag{8}$$ $$\equiv \;q\lor q\tag{9}$$ $$\equiv \;q\tag{10}$$
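If you want a sanity check before writing out the proofs, a brute-force truth table confirms both equivalences (my addition):

```python
# Truth-table check of (1) and (2); impl(a, b) encodes a -> b.
from itertools import product

impl = lambda a, b: (not a) or b

assert all(impl(p, q and r) == (impl(p, q) and impl(p, r))
           for p, q, r in product([False, True], repeat=3))       # (1)
assert all((impl(p, q) and (p or q)) == q
           for p, q in product([False, True], repeat=2))          # (2)
print("both equivalences hold")
```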
Showing $\sqrt{2}\sqrt{3} $ is greater or less than $ \sqrt{2} + \sqrt{3} $ algebraically How can we establish algebraically if $\sqrt{2}\sqrt{3}$ is greater than or less than $\sqrt{2} + \sqrt{3}$? I know I can plug the values into any calculator and compare the digits, but that is not very satisfying. I've tried to solve $$\sqrt{2}+\sqrt{3}+x=\sqrt{2}\sqrt{3} $$ to see if $x$ is positive or negative. But I'm just getting sums of square roots whose positive or negative values are not obvious. Can it be done without the decimal expansion?
Method 1: $\sqrt{2}+\sqrt{3}>\sqrt{2}+\sqrt{2}=2\sqrt{2}>\sqrt{3}\sqrt{2}$. Method 2: $(\sqrt{2}\sqrt{3})^2=6<5+2<5+2\sqrt{6}=2+3+2\sqrt{2}\sqrt{3}=(\sqrt{2}+\sqrt{3})^2$, so $\sqrt{2}\sqrt{3}<\sqrt{2}+\sqrt{3}$. Method 3: $\frac{196}{100}<2<\frac{225}{100}$ and $\frac{289}{100}<3<\frac{324}{100}$, so $\sqrt{2}\sqrt{3}<\frac{15}{10}\frac{18}{10}=\frac{270}{100}<\frac{14}{10}+\frac{17}{10}<\sqrt{2}+\sqrt{3}$.
Vector Space or Not? I have a question. Suppose that $V$ is the set of all real valued functions that attain a relative maximum or relative minimum at $x=0$. Is $V$ a vector space under the usual operations of addition and scalar multiplication? My guess is that it is not a vector space, but I am not able to give a counterexample.
As I hinted in my comment above, the conditions of a local maximum and local minimum are easiest to work with for differentiable functions. So here I show that the set of functions with a critical point (not necessarily a local max/min) at 0 (really at any arbitrary point $a \in \mathbb{R}$) is a vector subspace. Broaden this slightly and let $V \subset C^1(-\infty,\infty)$ be the set of continuously differentiable functions on the reals that have a critical point at 0 (i.e. $f'(0) = 0$ for all $f \in V$). Then it's simple to show that this is a vector space. If $f,g \in V$ ($f'(0)=g'(0)=0$) and $r \in \mathbb{R}$, then * *$(f+g)'(0) = f'(0) + g'(0) = 0 + 0 = 0$. *The derivative of the zero function is zero, and hence evaluates to 0 at 0. *$(rf)'(0) = r(f'(0))=r0 = 0$. And together these imply that $V$ is a vector subspace of $C^1(-\infty,\infty)$. Robert Israel's answer above is a nice example of why we must define our vector space to have a critical point, not just a max/min at 0.
Two norms on $C_b([0,\infty])$ $C_b([0,\infty])$ is the space of all bounded, continuous functions. Let $||f||_a=(\int_{0}^{\infty}e^{-ax}|f(x)|^2)^{\frac{1}{2}}$ First I want to prove that it is a norm on $C_b([0,\infty])$. The only thing I have problems with is the triangle inequality, I do not know how to simplify $||f+g||_a=(\int_{0}^{\infty}e^{-ax}|f(x)+g(x)|^2)^{\frac{1}{2}}$ The second thing I am interested in is how to show that there are constants $C_1,C_2$ such that $||f||_a\le C_1||f||_b$ and $||f||_b\le C_2||f||_a$ so for $a>b>0$ the norms $||.||_a$ and $||.||_b$ are not equivalent.
For the first one: Use $$e^{-\alpha \cdot x} \cdot |f(x)+g(x)|^2 = \left|e^{-\frac{\alpha}{2} \cdot x} \cdot f(x)+ e^{-\frac{\alpha}{2} \cdot x} \cdot g(x) \right|^2$$ and apply the triangle inequality in $L^2$. Concerning the second one: For $a>b>0$ you have $$e^{-a \cdot x} \cdot |f(x)|^2 \leq e^{-b \cdot x} \cdot |f(x)|^2 $$ i.e. $\|f\|_a \leq \|f\|_b$. The inequality $\|f\|_b \leq C \cdot \|f\|_a$ does not hold in general. To see this let $c \in (\frac b2,\frac a2)$ and $$f_n(x) := \min\{n,e^{c \cdot x}\} \ (\in C_b).$$ Then $\|f_n\|_a < \infty$ and $\|f_n\|_a \to (a-2c)^{-1/2}<\infty$, but $\|f_n\|_b \to \infty$.
How can I show some rings are local. I want to prove $k[x]/(x^2)$ is local. I know it in a rather direct way: $(a+bx)(a-bx)/a^2=1$. But for the general case such as $k[x]/(x^n)$, how can I prove it? Also for 2 variables, for example $k[x,y]/(x^2,y^2)$ (or higher orders?), how can I prove they are local rings?
You can use the following Claim: A commutative unitary ring is local iff the set of non-unit elements is an ideal, and in this case this is the unique maximal ideal. Now, in $\,k[x]/(x^n):=\{f(x)+(x^n)\;\;;\;\;f(x)\in k[x]\,\,,\,\deg(f)<n\}$, an element is a non-unit iff $\,f(0)=a_0= 0\,$, where $\,a_0\,$ is the free coefficient of $\,f(x)\,$, of course. Thus, we can characterize the non-units in $\,k[x]/(x^n)\,$ as those represented by polynomials of degree less than $\,n\,$ and with free coefficient zero, i.e. the set of elements $$\,M:=\{f(x)+(x^n)\in k[x]/(x^n)\;\;;\;\;f(x)=xg(x)\,\,,\,\,g(x)\in k[x]\,\,,\deg (g)<n-1\}$$ Well, now check the above set fulfills the claim's conditions. Note: I'm assuming $\,k\,$ above is a field, but if it is a general commutative unitary ring the corrections to the characterization of unit elements are minor, though important. About the claim being true in this general case: I'm not quite sure right now.
Countable unions of countable sets It seems the axiom of choice is needed to prove, over ZF set theory, that a countable union of countable sets is countable. Suppose we don't assume any form of choice, and stick to ZF. What are the possible cardinalities of a countable union of countable sets? Could a countable union of countable sets have the same cardinality as that of the real numbers?
Yes. It is consistent that the real numbers are a countable union of countable sets. For example in the Feferman-Levy model this is true. In such a model $\omega_1$ is also the countable union of countable sets (and there are models in which $\omega_1$ is the countable union of countable sets, but the real numbers are not). It is consistent [relative to large cardinals] that $\omega_1$ is the countable union of countable sets, and $\omega_2$ is the countable union of countably many sets of size $\aleph_1$, that is $\omega_2$ is the countable union of countable unions of countable sets. Indeed it is consistent [relative to some large cardinal axioms] that every set can be generated by reiterating countable unions of countable sets. For non-aleph cardinals the following result holds: Let $M$ be a transitive model of ZFC; then there exists $N$ with $M\subseteq N$, with the same cardinals as $M$ [read: initial ordinals], such that the following statement is true in $N$: For every $\alpha$ there exists a set $X$ such that $X$ is a countable union of countable sets, and $\mathcal P(X)$ can be partitioned into $\aleph_\alpha$ nonempty sets. D. B. Morris, A model of ZF which cannot be extended to a model of ZFC without adding ordinals, Notices Am. Math. Soc. 17, 577. Note that $\mathcal P(X)$ can only be mapped onto set many ordinals, so this ensures that there is indeed a proper class of cardinalities which can be written as countable unions of countable sets. Some positive results are: * *If $X$ can be expressed as the countable union of countable sets, and $X$ can be well-ordered then $|X|\leq\aleph_1$. *If $\langle A_i,f_i\rangle$ is given such that $A_i$ is countable and $f_i\colon A_i\to\omega$ is an injection, then $\bigcup A_i$ is countable. *The collection of finite subsets of a countable set, as well as the collection of finite sequences, are both countable without the use of the axiom of choice (so there is still only a countable collection of algebraic numbers in $\mathbb R$).
Criteria for metric on a set Let $X$ be a set and $d: X \times X \to \mathbb{R}$ be a function such that $d(a,b)=0$ if and only if $a=b$. Suppose further that $d(a,b) ≤ d(z,a)+d(z,b)$ for all $a,b,z \in X$. Show that $d$ is a metric on $X$.
Let $X$ be a set and $d: X \times X \to \mathbb{R}$ be a function such that $$d(a,b)=0\text{ if and only if}\;\; a=b,\text{ and}\tag{1}$$ $$d(a,b) ≤ d(z,a)+d(z,b)\;\forall a,b,z \in X.\tag{2}$$ There are additional criteria that need to be met for a function $d$ to be a metric on $X$: * *You must have that $d(a, b) = d(b,a)$ for all $a, b \in X$ (symmetry). You can use the two properties you have been given to prove this: $d(a,b)\leq d(b,a)+d(b,b)= d(b, a) + 0 = d(b,a)$ and vice versa, hence we get equality. *Having proven symmetry, you will then have that $d(a,b) \leq d(z,a) + d(z, b) \iff d(a, b) \leq d(a, z) + d(z, b)$, which is the usual triangle inequality. *Finally, using the property immediately above, along with $(1)$, you can establish that for all $a, b\in X$ such that $a\neq b$, we must have $d(a, b) > 0$. Then you are done.
What does this mathematical notation mean? Please excuse this simple question, but I cannot seem to find an answer. I'm not very experienced with math, but I keep seeing a notation that I would like explained. The notation I am referring to generally is one variable m floating over another variable n enclosed in parentheses. You can see an example in the first equation here. What does this mean? Thanks in advance for the help.
This is called the binomial coefficent, often read "n choose m", since it provides a way of computing the number of ways to choose $m$ items from a collection of $n$ items, provided the order or arrangement of those items doesn't matter. To compute the binomial coefficient: $\displaystyle \binom{n}{m}$, you can use the factorial formula: $$\binom{n}{m} = \binom{n}{n-m}=\frac{n!}{m!(n-m)!}$$
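For example (my addition), "5 choose 2" counts the 10 two-element subsets of a five-element set; in Python 3.8+ it is built in:

```python
import math

print(math.comb(5, 2))                                                # 10
print(math.factorial(5) // (math.factorial(2) * math.factorial(3)))   # 10, via the formula
```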
Proof that interpolation converges; Reference request I am interested in the mathematical justification for methods of approximating functions. For $x \in (C[a, b], ||\cdot||_{\infty})$ we know that we can get an arbitrarily good approximation by using high enough order polynomials (Weierstrass Theorem). Suppose that $x \in (C[a, b], ||\cdot||_{\infty})$. Let $y_n$ be defined by linearly interpolating $x$ on a uniform partition of $[a, b]$ (equidistant nodes). Is it true that \begin{equation} \lim_{n \to \infty} ||y_n - x||_{\infty} = 0? \end{equation} Do we need to impose stronger conditions? For example \begin{equation} x(t) = \begin{cases} t \sin\left(\frac{1}{t}\right), & t \in (0, 1] \\ 0, & t = 0 \end{cases} \end{equation} is in $C[0, 1]$, however it seems to me that we cannot get a good approximation near $t = 0$. More generally, can anyone recommend a reference containing the theory of linear interpolation and splines? It would have to include conditions under which these approximation methods converge (in some metric) to the true function.
Given an arbitrary function $x \in C[a, b]$ and defining $y_n$ to be the linear interpolant on the uniform partition of $[a, b]$ with $n + 1$ nodes we have \begin{equation} \lim_{n \to \infty} ||y_n - x||_{\infty} = 0. \end{equation} Proof. As $x$ is continuous on the compact set $[a, b]$ it is uniformly continuous. Fix $\varepsilon > 0$. By uniform continuity there exists $\delta > 0$ such that for all $r, s \in [a, b]$ we have \begin{equation} |r - s| < \delta \quad \Rightarrow \quad |x(r) - x(s)| < \varepsilon. \end{equation} Every $n \in \mathbb{N}$ defines a unique uniform partition of $[a, b]$ into $a = t_0 < \ldots < t_n = b$ where $\Delta t_n = t_{l+1} - t_l = t_{k+1} - t_k$ for all $l, k \in \{0, \ldots, n-1\}$. Choose $N \in \mathbb{N}$ so that $\Delta t_N < \delta$. Let $I_k = [t_k, t_{k+1}]$, $\,k \in \{0, \ldots, N-1\}$. Then for all $t \in I_k$ we have \begin{equation} |y_N(t) - x(t)| \leq |y_N(t_k) - x(t)| + |y_N(t_{k+1}) - x(t)| < 2 \varepsilon, \end{equation} where the first inequality is due to the fact that since $y_N$ is linear on $I_k$ we know that $y_N(t) \in [\min(y_N(t_k), y_N(t_{k+1})), \max(y_N(t_k), y_N(t_{k+1}))]$. Q.E.D. If anyone knows a reference for a proof along these lines, then I would be grateful to know it. Also, the function $x$ in the OP can certainly be well approximated near zero. Here is a picture of the function; the dashed lines are $y = t$ and $y = -t$.
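A numerical illustration with the very function from the question (my addition): the sup-norm error of the piecewise-linear interpolant on a uniform partition shrinks as the partition is refined, even near $t=0$.

```python
# Piecewise-linear interpolation error for x(t) = t*sin(1/t) (with x(0) = 0) on [0, 1].
import numpy as np

def x_func(t):
    t = np.asarray(t, dtype=float)
    safe = np.where(t == 0, 1.0, t)              # avoid division by zero at t = 0
    return np.where(t == 0, 0.0, t * np.sin(1.0 / safe))

tt = np.linspace(0.0, 1.0, 200_001)              # fine grid to estimate the sup norm
for n in [10, 100, 1000, 10_000]:
    nodes = np.linspace(0.0, 1.0, n + 1)
    err = np.max(np.abs(np.interp(tt, nodes, x_func(nodes)) - x_func(tt)))
    print(n, err)                                # the errors decrease towards 0
```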
Is $ \lim_{n \to \infty} \sum_{k=1}^{n-1}\binom{n}{k}2^{-k(n-k)} = 0$? Is it true that: $$ \lim_{n \to \infty} \sum_{k=1}^{n-1}\binom{n}{k}2^{-k(n-k)} = 0 \;?$$ It seems true numerically, but how can this limit be shown?
Note that $(n-k)$ is at least $n/2$ for $k$ between 1 and $n/2$. Then, looking at the sum up to $n/2$ and doubling bounds what you have above by something like: $$\sum_{k=1}^{n-1}\binom{n}{k}2^{-kn/2}=\left(1+2^{-n/2}\right)^n-2^{-n^2/2}-1$$ which bounds your sum above and goes to zero. Alternatively, use the bound $$\binom{n}{k}\leq \frac{n^k}{k!}\;.$$ Since the sum is symmetric around $k=n/2$, work with the sum up to $n/2$. Then $n^k2^{-k(n-k)}=2^{k(\log_2 n-n+k)}$. For $k$ between 1 and $n/2$ and for large $n$ this scales something like $2^{-kn/2}$, which when summed from 1 to $n/2$ in $k$ will tend to 0 as $n\rightarrow\infty$.
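A quick numerical check of the limit itself (my addition); the $k=1$ and $k=n-1$ terms dominate, so the decay is roughly geometric in $n$.

```python
# The sums visibly decrease towards 0 as n grows.
from math import comb

for n in [10, 20, 40, 80, 160]:
    print(n, sum(comb(n, k) * 2.0 ** (-k * (n - k)) for k in range(1, n)))
```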
Generating Pythagorean triples for $a^2+b^2=5c^2$? Just trying to figure out a way to generate triples for $a^2+b^2=5c^2$. The wiki article shows how it is done for $a^2+b^2=c^2$ but I am not sure how to extrapolate.
Consider the circle $$x^2+y^2=5$$ Find a rational point on it (that shouldn't be too hard). Then imagine a line with slope $t$ through that point. It hits the circle at another rational point. So you get a family of rational points, parametrized by $t$. Rational points on the circle are integer points on $a^2+b^2=5c^2$.
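Carrying that sweep out explicitly (my addition; the formulas below are my own computation, obtained by intersecting the line of slope $p/q$ through the rational point $(1,2)$ with the circle, so they are one parametrization among many):

```python
# Generate integer solutions of a^2 + b^2 = 5 c^2 from a rational slope p/q.
from math import gcd

def triple(p, q):
    a = p * p - 4 * p * q - q * q
    b = -2 * (p * p + p * q - q * q)
    c = p * p + q * q
    return abs(a), abs(b), c

for p in range(1, 4):
    for q in range(1, 4):
        if gcd(p, q) == 1:
            a, b, c = triple(p, q)
            assert a * a + b * b == 5 * c * c
            print((a, b, c))
# prints (4, 2, 2), (11, 2, 5), (20, 10, 10), (5, 10, 5), ...
```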
How do I get the conditional CDF of $U_{(n-1)}$? Let $U_1$, $U_2$ .. $U_n$ be identical and independent random variables distributed Uniform(0, 1). How can I find the cumulative distribution function of the conditional distribution of $U_{(n-1)}$ given $U_{(n)} = c$? Here, $U_{(n-1)}$ refers to the second largest of the aforementioned uniform random variables. I know that I can find the unconditional distribution of $U_{(n-1)}$: It's just Beta(2, $n - 2 + 1$) or Beta(2, $n-1$) because the ith order statistic of uniforms is distributed Beta(i, $n - i + 1$). However, how do I find the conditional CDF of $U_{(n-1)}$ given that $U_{(n)} = c$??
To condition on $A=[U_{(n)}=c]$ is to condition on the event that one value in the random sample $(U_k)_{1\leqslant k\leqslant n}$ is $c$ and the $n-1$ others are in $(0,c)$. Thus, conditionally on $A$, the rest of the sample is i.i.d. uniform on $(0,c)$. In particular $U_{(n-1)}\lt x$ means that the $n-1$ values are in $(0,x)$, which happens with probability $(x/c)^{n-1}$. The conditional density of $U_{(n-1)}$ is $$ f_{U_{(n-1)}\mid A}(x)=(n-1)c^{-(n-1)}x^{n-2}\mathbf 1_{0\lt x\lt c}. $$ Thus, conditionally on $U_{(n)}$, $U_{(n-1)}$ is distributed as $\bar U_{(n-1)}\cdot U_{(n)}$, where $\bar U_{(n-1)}$ is independent of $U_{(n)}$ and distributed as $U_{(n-1)}$.
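A simulation sanity check (my addition): condition on the maximum falling in a thin window around some $c$ and compare the empirical conditional CDF of $U_{(n-1)}$ with $(x/c)^{n-1}$.

```python
# Empirical check of P(U_(n-1) < x | U_(n) ~ c) against (x/c)^(n-1), with n = 5, c = 0.8.
import numpy as np

rng = np.random.default_rng(0)
n, c, eps = 5, 0.8, 0.01

u = rng.random((500_000, n))
u.sort(axis=1)
second_largest = u[np.abs(u[:, -1] - c) < eps][:, -2]

for x in [0.2, 0.4, 0.6, 0.75]:
    print(x, round((second_largest < x).mean(), 3), round((x / c) ** (n - 1), 3))
```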
Is this Matrix Invertible? Suppose $X$ is a real $n\times n$ matrix. Suppose $m>0$ and let $\operatorname{tr}(X)$ denote the trace of $X$. If $\operatorname {tr}(X^{\top}X)=m$, can i conclude that $X$ is invertible? Thanks
The fact that $\operatorname{Tr}(X^TX)$ is positive just means that the matrix is non-zero, so any non-zero matrix which is not invertible will do the job. For example, take $X$ with a single entry equal to $\sqrt m$ and all other entries zero: then $\operatorname{Tr}(X^TX)=m$, but $X$ is singular whenever $n\ge 2$.
Analog of Beta-function What is the multi-dimensional analogue of the Beta-function called? The Beta-function being $$B(x,y) = \int_0^1 t^x (1-t)^y dt$$ I have a function $$F(x_1, x_2,\ldots, x_n) = \int_0^1\cdots\int_0^1t_1^{x_1}t_2^{x_2}\cdots(1 - t_1 - \cdots-t_{n-1})^{x_n}dt_1\ldots dt_{n-1}$$ and I don't know what it is called or how to integrate it. I have an idea that according to the Beta-function: $$F(x_1, \ldots,x_n) = \dfrac{\Gamma(x_1)\cdots\Gamma(x_n)}{\Gamma(x_1 + \cdots + x_n)}$$ Is there any analogue for this integral such as the Gamma-function form for the Beta-function?
What you can look at is the Selberg integral. It is a generalization of the Beta function and is defined by \begin{eqnarray} S_n(\alpha,\beta,\gamma) &=& \int_0^1\cdots\int_0^1\prod_{i=1}^n t_i^{\alpha-1}(1-t_i)^{\beta-1}\prod_{1\leq i<j\leq n}|t_i-t_j|^{2\gamma}dt_1\cdots dt_n \\ &=& \prod_{j=0}^{n-1}\frac{\Gamma(\alpha+j\gamma)\Gamma(\beta+j\gamma)\Gamma(1+(j+1)\gamma)}{\Gamma(\alpha+\beta+(n+j-1)\gamma)\Gamma(1+\gamma)} \end{eqnarray}
Proof $A\sim\mathbb{N}^{\mathbb{N}}$ Let $A=\{f\in\{0,1\}^{\mathbb{N}}\,|\, f(0)=0,f(1)=0\}$, i.e. all the infinite binary vectors that start with $0,0$. I need to prove that $A\sim\mathbb{N}^{\mathbb{N}}$. Any ideas or hints?
Hint: Show that $2^{\mathbb N}\sim\mathbb N^{\mathbb N}$ first, then show that $A\sim 2^{\mathbb N}$. The first part is the more difficult part, but recall that $\mathbb N^{\mathbb N}\subseteq\mathcal P(\mathbb{N\times N})$ and that $\mathbb{N\times N}\sim\mathbb N$.
How do I prove the middle-$\alpha$ cantor set is perfect? Let $\alpha\in (0,1)$ and $\beta=\frac{1-\alpha}{2}$. Define $T_0(x) = \beta x$ and $T_1(x) = (1-\beta) + \beta x$ , $\forall x\in [0,1]$. Recursively define $I_0 =[0,1]$ and $I_{n+1}= T_0(I_n) \cup T_1(I_n)$. The Middle-$\alpha$ Cantor Set is defined as $\bigcap_{n\in \omega} I_n$. I have proved that $I_n$ is a disjoint union of $2^n$ intervals, each of length $\beta^n$. That is, $I_n=\bigcup_{i=1}^{2^n} [a_i,b_i]$. My question is: how do I prove that every endpoint $a_i,b_i$ in $I_n$ is in $\bigcap_{n\in\omega} I_n$? It seems trivial, but I don't know how to prove this.
Define $I_n^* = I_0 \setminus I_n$ for all $n\in \omega$. Note that (i) $$I_{n+1}^*=I_0 \setminus I_{n+1} =I_0 \setminus (T_0(I_n)\cup T_1(I_n)) =\bigl(I_0\setminus (T_0(I_0)\setminus T_0(I_n^*))\bigr)\cap \bigl(I_0\setminus (T_1(I_0)\setminus T_1(I_n^*))\bigr) =T_0(I_n^*)\cup I_1^* \cup T_1(I_n^*).$$ Also (ii), it can be checked that $\forall x\in I_n$, $\beta x\in I_n$ and $(1-\beta)+\beta x \in I_n$. Let $E_n$ be the set of endpoints of $I_n$. Let $G=\{n\in \omega \mid \forall m>n,\; E_n\subseteq E_m\}$. Then it can be shown that $G=\omega$, i.e. $n<m\Rightarrow E_n \subseteq E_m$.
Conceptual question about equivalence of eigenvectors Suppose for a matrix the eigenvalue is 1 and the eigenvector is (2,-3). Then does that mean (-4,6) and (4,-6) are equivalent eigenvectors as the ratios are the same?
Let T be a transformation, and let $\lambda$ be an eigenvalue with eigenvector $v$, ie. $T(v)=\lambda v$. Then if $c$ is any scalar, $cv$ is also an eigenvector with eigenvalue $\lambda$, since $T(cv)=cT(v)=c\lambda v=\lambda(cv)$
Trying understand a move in Cohen's proof of the independence of the continuum hypothesis I've read a few different presentations of Cohen's proof. All of them (that I've seen) eventually make a move where a Cartesian product (call it CP) between the (M-form of) $\aleph_2$ and $\aleph_0$ into {1, 0} is imagined. From what I gather, whether $CP \in M$ is what determines whether $\neg$CH holds in M or not such that if $CP \in M$ then $\neg$CH. Anyway, my question is: Why does this product, CP, play this role? How does it show us that $\aleph_2 \in M$ (the 'relativized' form of $\aleph_2$, not the $\aleph_2 \in V$)? Could not some other set-theoretical object play the same role?
In order to prove the continuum hypothesis is independent from the axioms of $ZFC$ what Cohen did was to start with $ZFC+V=L$ (in which the generalized continuum hypothesis holds), and create a new model of $ZFC$ in which the continuum hypothesis fails. First we need to understand how to add one real number to the universe, then we can add $\aleph_2$ of them at once. If we are lucky enough then $\aleph_1$ of our original model did not become countable after this addition, and then we have that there are $\aleph_2$ new real numbers, and therefore CH fails. To add one real number Cohen invented forcing. In this process we "approximate" a new set of natural numbers by finite parts. Some mysterious creature known as a "generic filter" then creates a new subset, so if we adjoin the generic filter to the model we can show that there is a new subset of the natural numbers, which is the same thing as saying we add a real number. We can now use the partial order which adds $\aleph_2$ real numbers at once. This partial order has some good properties which ensure that the ordinals which were initial ordinals (i.e. cardinals) are preserved, and so we have that CH is false in this extension. (I am really trying to avoid a technical answer here, and if you wish to get the details you will have to sit through some book and learn about forcing. I wrote more about the details in A question regarding the Continuum Hypothesis (Revised))
Coin tossing questions I had an exam today and I would like to know for sure that I got these answers correct. A fair coin will be tossed repeatedly. * *What is the probability that in 5 flips we will obtain exactly 4 heads. *Let $X =$ # flips until we have obtained the first head. Find the conditional probability of $P(X=4|X\geq2)$. *What is the probability that the fifth heads will occur on the sixth flip? Here are my answers: * *$\frac{5}{32}$ *$\frac{1}{8}$ *$\frac{5}{64}$
It seems that the general consensus is that you are right: all three answers are correct.
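For the record, here is a quick sketch that recomputes the three exact values from the standard binomial and geometric formulas (just a check, not an alternative solution):

```python
from math import comb
from fractions import Fraction

# 1. exactly 4 heads in 5 flips
p1 = Fraction(comb(5, 4), 2**5)
# 2. P(X = 4 | X >= 2), where X = number of flips until the first head
p2 = Fraction(1, 2**4) / Fraction(1, 2)
# 3. fifth head on the sixth flip: exactly 4 heads in the first 5 flips, then a head
p3 = Fraction(comb(5, 4), 2**5) * Fraction(1, 2)
print(p1, p2, p3)   # 5/32, 1/8, 5/64
```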
In regard to a retraction $r: \mathbb{R}^3 \rightarrow K$ Let $K$ be the "knotted" $x$-axis. I have been able to show that $K$ is a retract of $\mathbb{R}^3 $ using the fact that $K$ and the real line $\mathbb{R}$ are homeomorphic, $\mathbb{R}^3$ is a normal space, and then applying the Tietze Extension Theorem. But then what would be an explicit retraction $r: \mathbb{R}^3 \rightarrow K$? Any ideas?
Let $f : K \to \Bbb R$ be a homeomorphism and pick a point $x \in K$. Pick an infinite sheet of paper on the left side of the knot and imagine pinching and pushing it inside the knot all the way right to $x$, and use this to define $g$ on the space swept out by the sheet of paper into $( - \infty , f(x))$, such that if $y<f(x)$, $g^{-1}(\{y\})$ is homeomorphic to the sheet of paper and intersects $K$ only at $f^{-1}(y)$. Do the same thing with another infinite sheet of paper from the right side of the knot moving all the way left to $x$. Finally define $g(y) = f(x)$ for all $y$ in the remaining space between the two sheets of paper. Then $g : \Bbb R^3 \to \Bbb R$ is continuous, extends $f$, and except at $f(x)$, where the fiber is big, $g^{-1}(\{y\})$ is homeomorphic to $\Bbb R^2$. The explicit retraction is then $r = f^{-1} \circ g : \Bbb R^3 \to K$.
A necessary and sufficient condition for a measure to be continuous. If $(X,\mathcal{M})$ is a measurable space such that $\{x\}\in\mathcal{M}$ for all $x\in X$, a finite measure $\mu$ is called continuous if $\mu(\{x\})=0$ for all $x\in X$. Now let $X=[0,\infty]$ and let $\mathcal{M}$ be the collection of the Lebesgue measurable subsets of $X$. Show that $\mu$ is continuous if and only if the function $x\mapsto\mu([0,x])$ is continuous. One direction is easy: if the function is continuous, I can get that $\mu$ is continuous. But the other direction confuses me. I want to show the function is continuous, so I need to show that for any $\epsilon>0$ there is a $\delta>0$ such that $|\mu([x,y])|<\epsilon$ whenever $|x-y|<\delta$. But I can't figure out how to apply the condition that $\mu$ is continuous to get this conclusion.
It seems like the contrapositive is a good way to go. Suppose that $x\mapsto\mu([0,x])$ is not continuous, say at the point $x_0$. Then there exists an $\epsilon>0$ such that for all $\delta>0$ there is a $y$ with $\vert x_0-y\vert<\delta$ but $\mu([x_0,y])\geq\epsilon$ (read $[x_0,y]$ as the closed interval between the two points). Thus we can construct a sequence $(y_n)$ which converges to $x_0$, monotonically after passing to a subsequence, such that $\mu([x_0,y_n])\geq\epsilon$ for all $n$. The intervals $[x_0,y_n]$ then decrease to $\{x_0\}$, so since $\mu$ is finite, continuity of the measure from above gives $\mu(\{x_0\})\geq\epsilon>0$. Hopefully you can fill in the details from there!
Integral and Area of a section bounded by a function. I'm having a really hard time grasping the concept of an integral/area of a region bounded by a function. Let's use $x^3$ as our sample function. I understand the concept is to create an infinite number of infinitely small rectangles, calculate and sum their areas. Using the formula $$\text{Area}=\lim_{n\to\infty}\sum_{i=1}^n f(C_i)\Delta x=\lim_{n\to\infty}\sum_{i=1}^n\left(\frac{i}n\right)^3\left(\frac1n\right)$$ I understand that $n$ is to represent the number of rectangles, $\Delta x$ is the change in the $x$ values, and that we are summing the series, but I still don't understand what $i$ and $f(C_i)$ do. Is $f(C_i)$ just the value of the function at that point, giving us area? Sorry to bother you with a homework question. I know how annoying that can be. P.S. Is there a correct way to enter formulas?
So, $f(C_i)$ is the value of $f$ at $C_i$, but more importantly it is the height of the $i$-th rectangle used in the approximation. The index $i$ just labels the rectangles: the base of the $i$-th rectangle is the $i$-th subinterval (of width $\Delta x=\frac1n$ here), and $C_i$ is a sample point chosen in that subinterval — in your formula $C_i=\frac{i}{n}$, its right endpoint. As $n\to\infty$, so that the widths of the subintervals tend to $0$, this sum becomes the area under the curve.
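To see the convergence concretely, here is a minimal sketch for your example $f(x)=x^3$ on $[0,1]$ with right endpoints $C_i=i/n$; the exact area is $\int_0^1 x^3\,dx=\frac14$, and the sums approach it as $n$ grows:

```python
# Riemann sum for f(x) = x^3 on [0, 1] with right endpoints C_i = i/n
def riemann_sum(n):
    dx = 1.0 / n
    return sum(((i / n) ** 3) * dx for i in range(1, n + 1))

for n in (10, 100, 1000, 10000):
    print(n, riemann_sum(n))   # tends to 0.25 as n increases
```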
Isomorphism between 2 quotient spaces Let $M,N$ be linear subspaces of $L$. How can we prove that the map $$(M+N)/N\to M/(M\cap N)$$ defined by $$m+n+N\mapsto m+M\cap N$$ is surjective? Originally, I need to prove that this map is a bijection, but I have already proven that it is injective and well defined; I am having a hard time proving surjectivity. Please help.
Define $T: M \to (M+N)/N$ by $m \mapsto m+N$. Show that it is linear and onto. Check that $\ker T = M\cap N$; then by the first isomorphism theorem the induced map $f:M/(M\cap N) \to (M+N)/N$, $m+M\cap N\mapsto m+N$, is an isomorphism. Its inverse is exactly your map, so your map is a bijection and in particular surjective.
The notion of complex numbers How does one know the notion of real numbers is compatible with the axioms defined for complex numbers? I.e., how does one know that by defining an operator '$i$' with the property that $i^2=-1$, we will not in some way contradict some statement that follows from the properties of the real numbers? For example, if I defined an operator $x$ with the property that $x^{2n}=-1$ and $x^{2n+1}=1$ for all integers $n$, this operator is not consistent when used compatibly with the properties of the real numbers, since I would have $x^2=-1$, $x^3=1$, thus $x^5=-1$, but I defined $x^5$ to be equal to $1$. How do I know I won't encounter such a contradiction based upon the axioms of the complex numbers?
A field is a generalization of the real number system. For a structure to be a field, it should fulfill the field axioms (http://en.wikipedia.org/wiki/Field_%28mathematics%29). It is rather easy to see that the complex numbers are, indeed, a field. Proving that there isn't a paradox hiding in the complex-number theory is harder. What can be proved is this: If number theory (natural numbers, that is) is consistent, then so is the complex number system. The main problem is that you can't prove the consistency of a theory without using a stronger theory. And then you have the problem of proving that the stronger theory is consistent, ad infinitum.
Constructing a strictly increasing function with zero derivative I'm trying to construct a function described as follows: $f:[0,1]\rightarrow \mathbb R$ such that $f'(x)=0$ almost everywhere, $f$ continuous and strictly increasing. (I'd also conclude that this function is not absolutely continuous.) The part in brackets is easy to prove. I'm having trouble constructing the function: I thought of writing the function as an infinite sum of a sequence of increasing continuous functions, and I considered the Cantor-Vitali function, which is defined on $[0,1]$ and is continuous and increasing (not strictly). So $f(x)=\sum_{k=0}^{\infty} 2^{-k}\phi(3^{-k}x)$, where $2^{-k}$ is there to make the sum converge and $\phi$ is the Cantor-Vitali function. The sum converges, the function is continuous (as a uniformly convergent sum of continuous functions) and is defined as asked. But I'm having trouble proving that it is strictly increasing. Honestly it seems to be, but I don't know how to prove it. I know that for $0\leq x \leq y\leq 1$ there always exists a $k$ such that $\phi(3^{-k}x)\leq \phi(3^{-k}y)$, but then I got stuck. I'm looking for help with proving this part.
By $\phi$ we denote the Cantor-Vitali function, extended to all of $\mathbb{R}$ by setting $\phi(t)=0$ for $t<0$ and $\phi(t)=1$ for $t>1$. Let $\{(a_n,b_n):n\in\mathbb{N}\}$ be the set of all intervals in $[0,1]$ with rational endpoints. Define $$ f_n(x)=2^{-n}\phi\left(\frac{x-a_n}{b_n-a_n}\right)\qquad\qquad f(x)=\sum\limits_{n=1}^{\infty}f_n(x) $$ Each $f_n$ is continuous and nondecreasing, and I think you can show that $f$ is continuous and has zero derivative almost everywhere. As for strict monotonicity, consider $0\leq x_1<x_2\leq 1$ and find an interval $(a_n,b_n)$ such that $(a_n,b_n)\subset(x_1,x_2)$; then $$ f(x_2)-f(x_1)\geq f(b_n)-f(a_n)\geq f_n(b_n)-f_n(a_n)=2^{-n}>0 $$ So $f$ is strictly increasing.
$V$ is a vector space over $\mathbb Q$ of dimension $3$ $V$ is a vector space over $\mathbb Q$ of dimension $3$, and $T: V \to V$ is linear with $Tx = y$, $Ty = z$, $Tz=(x+y)$ where $x$ is non-zero. Show that $x, y, z$ are linearly independent.
Let $A = \mathbb{Q}[X]$ be the polynomial ring. Let $I = \{f(X) \in A\mid f(T)x = 0\}$. Clearly $I$ is an ideal of $A$. Let $g(X) = X^3 - X - 1$. Then $g(X) \in I$: indeed $T^3x = Tz = x + y$, so $g(T)x = (x+y) - y - x = 0$. Suppose $g(X)$ is not irreducible in $A$. Then, being a cubic, $g(X)$ has a linear factor of the form $X - a$, where $a = 1$ or $-1$ by the rational root theorem. But $g(1) = g(-1) = -1 \neq 0$, so this is impossible. Hence $g(X)$ is irreducible in $A$. Since $x \neq 0$, $I \neq A$. Hence $I = (g(X))$, since $(g(X))$ is a maximal ideal of $A$. Suppose there exist $a, b, c \in \mathbb{Q}$ such that $ax + bTx + cT^2x = 0$. Then $a + bX + cX^2 \in I$. Hence $a + bX + cX^2$ is divisible by $g(X)$, which has degree $3$. Hence $a = b = c = 0$ as desired. A more elementary version of the above proof: Suppose $x, y = Tx, z = T^2x$ are not linearly independent over $\mathbb{Q}$. Let $h(X)\in \mathbb{Q}[X]$ be the monic polynomial of least degree such that $h(T)x = 0$. Since $x \neq 0$, deg $h(X) = 1$ or $2$. Let $g(X) = X^3 - X - 1$. Then $g(X) = h(X)q(X) + r(X)$, where $q(X), r(X) \in \mathbb{Q}[X]$ and deg $r(X) <$ deg $h(X)$. Then $g(T)x = q(T)h(T)x + r(T)x$. Since $g(T)x = 0$ and $h(T)x = 0$, $r(T)x = 0$. Hence $r(X) = 0$ by the minimality of $h(X)$. Hence $g(X)$ is divisible by $h(X)$. But this is impossible because $g(X)$ is irreducible as shown above. Hence $x, y, z$ must be linearly independent over $\mathbb{Q}$.
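As a quick sanity check of the key fact (irreducibility of $X^3-X-1$ over $\mathbb{Q}$), and of the conclusion in a concrete coordinate model of $T$, here is a sketch assuming SymPy is available; the matrix is simply $T$ written in the coordinates $x, y, z$, so it only illustrates the statement rather than proving it:

```python
import sympy as sp

X = sp.symbols('X')
print(sp.factor(X**3 - X - 1))   # stays X**3 - X - 1, i.e. irreducible over the rationals

# T in coordinates x, y, z: Tx = y, Ty = z, Tz = x + y
A = sp.Matrix([[0, 0, 1],
               [1, 0, 1],
               [0, 1, 0]])
x = sp.Matrix([1, 0, 0])
M = sp.Matrix.hstack(x, A * x, A**2 * x)
print(M.rank())                  # 3, so x, Tx, T^2 x are linearly independent here
```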
Is $\mathbb R^2$ a field? I'm new to this very interesting world of mathematics, and I'm trying to learn some linear algebra from Khan Academy. In the world of vector spaces and fields, I keep coming across the definition of $\mathbb R^2$ as a vector space on top of the field $\mathbb R$. This makes me think: why can't $\mathbb R^2$ be a field of its own? Would that make $\mathbb R^2$ both a field and a vector space? Thanks
Adding to the above answer. With the componentwise product — the multiplication inherited coordinate-by-coordinate from $\mathbb{R}$ — you cannot make a field out of $\mathbb{R}^{2}$; but other products, such as the one in the other answers, can make a field out of $\mathbb{R}\times\mathbb{R}$. Indeed, according to one of the theorems of field theory, every field is an integral domain. Consider $\mathbb{R}\times\mathbb{R}$ with the componentwise product $(A,B)*(C,D)=(AC,BD)$. We see that $(1,0)*(0,1)=(0,0)$, which means that $\mathbb{R}^{2}$ with this product is not an integral domain and hence not a field.
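For contrast, here is a small sketch of a product that does turn $\mathbb{R}^2$ into a field: the complex multiplication rule, identifying $(a,b)$ with $a+bi$ (written out only to illustrate the point, as this is just the standard construction of $\mathbb{C}$):

```python
# Complex multiplication on pairs of reals: (a, b) plays the role of a + bi
def mul(p, q):
    (a, b), (c, d) = p, q
    return (a * c - b * d, a * d + b * c)

def inv(p):
    a, b = p
    n = a * a + b * b            # nonzero whenever (a, b) != (0, 0)
    return (a / n, -b / n)

print(mul((0, 1), (0, 1)))                 # (-1, 0): the pair (0, 1) squares to "minus one"
print(mul((2.0, 3.0), inv((2.0, 3.0))))    # (1.0, 0.0): every nonzero pair has an inverse
```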