Columns: qid (string, lengths 1 to 7), Q (string, lengths 87 to 7.22k), dup_qid (string, lengths 1 to 7), Q_dup (string, lengths 97 to 10.5k)
1929277
Constructing the group multiplication table for [imath]|G| = 6[/imath] All groups of order 6 are isomorphic to either [imath]S_3[/imath] or [imath]\mathbb{Z}_6[/imath]. Without knowing that, I was trying to derive how many structurally distinct groups of order 6 exist by constructing the multiplication tables. And I came upon the following statement on this question: Having all non-identity elements of order 2 means the group is abelian. Is this trivial? Could I have known that before trying to build the table?
814374
Every element of a group has order [imath]2[/imath]. Why, intuitively, is it abelian? What is the intuition behind the fact that if every element in a group is of order [imath]2[/imath], we have that the group is abelian? I can prove it, but I do not know the intuition behind it.
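Not a proof, but a machine check of the claim in the pair above on a concrete example: the Klein four-group, realized as {0, 1, 2, 3} under bitwise XOR, where every non-identity element has order 2. The last assertion exercises the standard computation [imath]xy = (yy)(xy)(xx) = yx[/imath].

```python
from itertools import product

# Sanity check (not a proof): in the Klein four-group, modeled as
# {0,1,2,3} under bitwise XOR, every non-identity element has order 2,
# and the group is abelian -- consistent with the claim above.
G = [0, 1, 2, 3]
e = 0
op = lambda x, y: x ^ y

assert all(op(g, g) == e for g in G)                         # every element squares to e
assert all(op(x, y) == op(y, x) for x, y in product(G, G))   # abelian
# the computation behind the fact: xy = (yy)(xy)(xx) = yx
assert all(op(op(op(y, y), op(x, y)), op(x, x)) == op(y, x)
           for x, y in product(G, G))
```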
1929058
If [imath]\det(AB)=4[/imath] then find the value of [imath]\det(BA)[/imath] Let [imath]A[/imath] be a [imath]2 \times 3[/imath] matrix with real entries and let [imath]B[/imath] be a [imath]3 \times 2[/imath] matrix with real entries. If [imath]\det(AB)=4[/imath], then find the value of [imath]\det(BA)[/imath]. My attempt: I am aware that [imath]\det(AB)=\det(BA)[/imath] when [imath]A[/imath] and [imath]B[/imath] are of the same order. But how do I do this here?
1782600
When [imath]A[/imath] and [imath]B[/imath] are of different orders, given [imath]\det(AB)[/imath], calculate [imath]\det(BA)[/imath] Let [imath]A[/imath] be a [imath]2 \times 3[/imath] matrix and [imath]B[/imath] a [imath]3 \times 2[/imath] matrix. If [imath]\det(AB) = 4[/imath], find the value of [imath]\det(BA)[/imath]. My attempt: I took A = [imath] \begin{bmatrix} 2 & 0 &0\\ 0 & 0 &2\\ \end{bmatrix} [/imath] B = [imath] \begin{bmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 1 \\ \end{bmatrix} [/imath] It satisfies the given condition and I get [imath]\det(BA)=0[/imath], but I have not proved it. How do I prove that it is always zero? (Background) I am a 12th grader and I know about the adjoint, inverse, determinant, and rank of a matrix and the other basics. However I do NOT know about eigenvalues and eigenvectors.
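A quick numerical check of the asker's example in plain Python (no external libraries assumed): with their [imath]A[/imath] and [imath]B[/imath], [imath]\det(AB)=4[/imath] while [imath]\det(BA)=0[/imath]. In fact [imath]BA[/imath] is a [imath]3\times 3[/imath] matrix of rank at most 2, so its determinant is zero for every such pair.

```python
# Check the example from the question: A is 2x3, B is 3x2,
# det(AB) = 4 but det(BA) = 0 (BA is 3x3 of rank <= 2).
A = [[2, 0, 0],
     [0, 0, 2]]
B = [[1, 0],
     [0, 0],
     [0, 1]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

AB = matmul(A, B)   # 2x2
BA = matmul(B, A)   # 3x3
print(det2(AB), det3(BA))  # -> 4 0
```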
1929292
Proof related to Rolle's Theorem Prove: Let [imath]f[/imath] be differentiable on [imath][a,b][/imath]. If [imath]f'(a)>0[/imath] and [imath]f'(b) <0[/imath], then there exists [imath]c\in(a,b)[/imath] such that [imath]f'(c)=0[/imath] (do not assume that [imath]f'[/imath] is continuous). My attempt: If [imath]f(a) = f(b)[/imath], then by Rolle's Theorem, it's done. Assume [imath]f(a) \neq f(b)[/imath], say [imath]f(a) < f(b)[/imath]. I want to prove that [imath]\exists x_0 \in (a,b)[/imath] s.t. [imath]f(x_0)=f(b)[/imath], so using Rolle's Theorem again I can get the answer. But I cannot find how to prove it. Any suggestions will be appreciated!
771201
Proof of Darboux's theorem I tried to prove Darboux's theorem. It is the following theorem: Let [imath]f: [a,b]\to \mathbb R[/imath] be a differentiable function and let [imath]f'(a) < \alpha < f'(b)[/imath]. Then there exists [imath]c \in [a,b][/imath] with [imath]f'(c) = \alpha[/imath]. Please could somebody check my proof? Define [imath]g(x) = f(x) - \alpha x[/imath]. Then [imath]g[/imath] is continuous and because [imath][a,b][/imath] is compact [imath]g[/imath] attains its minimum on [imath][a,b][/imath]. Let [imath]x_m \in [a,b][/imath] be such that [imath]g(x_m) \le g(x)[/imath] for all [imath]x\in [a,b][/imath]. If [imath]x_m \in (a,b)[/imath] then [imath]g'(x_m) = 0 = f'(x_m) - \alpha[/imath] which shows the claim. If [imath]x_m = a[/imath] then [imath]g'(x_m) = g'(a) < 0[/imath]. Because [imath]g[/imath] is continuous and [imath]g'(a) < 0[/imath] there exists [imath]\delta>0[/imath] such that if [imath]x \in (a,a+\delta)[/imath] then [imath]g(x) < g(a) = g(x_m)[/imath]. But this is a contradiction because [imath]x_m[/imath] is the minimum. If [imath]x_m = b[/imath] then again there is [imath]\delta>0[/imath] such that if [imath]x \in (b-\delta, b)[/imath] then [imath]g(x)<g(b)=g(x_m)[/imath] because [imath]g'(b) > 0[/imath]. Again this is a contradiction. It follows that the minimum is attained in the interior [imath](a,b)[/imath].
1929394
Proving that [imath]\lim_{k\to \infty}\frac{f(x_k)-f(c)}{x_k-c}=f'(c)[/imath] where [imath]\lim_{k\to \infty} x_k =c [/imath] Question. Suppose that a function [imath]f[/imath] is defined on an interval [imath]I[/imath], [imath]c[/imath] is a point of [imath]I[/imath], and [imath]\{x_k\}[/imath] is any sequence of points in [imath]I[/imath], no term of which is [imath]c[/imath], such that [imath]\lim_{k\to \infty} x_k =c [/imath]. Define a sequence [imath]\{y_k\}[/imath] by [imath]y_k=\frac{f(x_k)-f(c)}{x_k-c}[/imath]. a)Prove that [imath]f'(c)[/imath] exists if and only if [imath]\lim_{k\to \infty} y_k [/imath] exists and has the same value for every such sequence [imath]\{x_k\}[/imath]. b)Prove that, if [imath]f'(c)[/imath] exists, then [imath]\lim_{k\to \infty} y_k=f'(c) [/imath] for every such sequence [imath]\{x_k\}[/imath]. My attempt(...?) a) Let [imath]\epsilon>0[/imath] be given. Since [imath]f'(c)[/imath] exists, [imath]\lim_{x\to c} \frac{f(x)-f(c)}{x-c} [/imath] exists. (...) There exists [imath]k_0\in \mathbb N[/imath] such that for [imath]k\ge k_0[/imath], [imath]\vert \frac{f(x_k)-f(c)}{x_k-c}-L \vert< \epsilon[/imath]. How can I find the link between [imath]\frac{f(x)-f(c)}{x-c}[/imath] and [imath]\frac{f(x_k)-f(c)}{x_k-c}[/imath]? It just seems so trivial to me and I cannot fill the blank. Any advice would be appreciated!
1928128
Proving that [imath]\lim_{k\to \infty}\frac{f(x_k)-f(y_k)}{x_k-y_k}=f'(c)[/imath] where [imath]\lim_{k\to \infty}x_k=\lim_{k\to \infty}y_k=c[/imath] with [imath]x_k[/imath] Question. Suppose that a function [imath]f[/imath] is defined on an interval [imath]I[/imath], [imath]c[/imath] is a point of [imath]I[/imath], and {[imath]x_k[/imath]} and {[imath]y_k[/imath]} are any two sequences in [imath]I[/imath] such that [imath]\lim_{k\to \infty}x_k=\lim_{k\to \infty}y_k=c[/imath] with [imath]x_k<c<y_k[/imath] for all [imath]k[/imath]. Prove that, if [imath]f'(c)[/imath] exists, then [imath]\lim_{k\to \infty}\frac{f(x_k)-f(y_k)}{x_k-y_k}=f'(c).[/imath] My Attempt. Let [imath]\epsilon_k=\frac{1}{k}[/imath] be given. For [imath]\epsilon_1=1,[/imath] there exists [imath]\delta_1>0[/imath] such that [imath]x_1,y_1\in I, x_1,y_1\in N'(c;\delta_1)[/imath] implies [imath]\vert \frac{f(x_1)-f(y_1)}{x_1-y_1} -f'(c) \vert<\epsilon_1.[/imath] For [imath]\epsilon_2=\frac{1}{2},[/imath] there exists [imath]0<\delta_2<[/imath]min[imath]\{\vert x_1-c \vert,\vert y_1-c \vert,\epsilon_2\}[/imath] such that [imath]x_2,y_2\in I, x_2,y_2\in N'(c;\delta_2)[/imath] implies [imath]\vert \frac{f(x_2)-f(y_2)}{x_2-y_2} -f'(c) \vert<\epsilon_2.[/imath] Similarly, for [imath]\epsilon_{k}=\frac{1}{k},[/imath] there exists [imath]0<\delta_k<[/imath]min[imath]\{\vert x_{k-1}-c \vert,\vert y_{k-1}-c \vert,\epsilon_k\}[/imath] such that [imath]x_k,y_k\in I, x_k,y_k\in N'(c;\delta_k)[/imath] implies [imath]\vert \frac{f(x_k)-f(y_k)}{x_k-y_k} -f'(c) \vert<\epsilon_k.[/imath] Then by taking the limit to infinity, we obtain [imath]\lim_{k\to \infty}\frac{f(x_k)-f(y_k)}{x_k-y_k}=f'(c).[/imath] It really took me a lot of time to finish the proof and I need this to be checked by an expert:) Is the proof okay? Any response would be appreciated!
1929014
Extracting vector containing the elements of the main diagonal of a matrix Is there any mathematical operation that would extract the elements of the main diagonal as a vector? i.e. multiply it by certain vectors or something like that. I'm using this in the context of linear systems. In the specific case I'm looking at I have a relationship between the elements of three vectors as follows: [imath] \bf{a} = \begin{bmatrix} a_{1} \\ a_{2} \\ a_{3} \\ a_{4} \end{bmatrix} [/imath] , [imath] \bf{b} = \begin{bmatrix} b_{1} \\ b_{2} \\ b_{3} \\ b_{4} \end{bmatrix} [/imath] , and [imath] \bf{c} = \begin{bmatrix} c_{1} \\ c_{2} \\ c_{3} \\ c_{4} \end{bmatrix} [/imath] I also know that: [imath]c_{i} = a_{i}b_{i} [/imath] for [imath]i \in [1, 4][/imath] Now I want to express this relationship as a vector equation. I understand that [imath]\bf{a} \bf{b}^\top[/imath] would give a square matrix with the elements of [imath]\bf{c}[/imath] on its main diagonal, but is there any way to extract them as a vector? EDIT: Let me clarify a bit. If I multiply [imath]\bf{a}[/imath] by [imath]\bf{b}^\top[/imath] I get the following matrix: [imath]\bf{a} \bf{b}^\top = \begin{bmatrix} \bf{a_1b_1} & a_1b_2 & a_1b_3 & a_1b_4 \\ a_2b_1 & \bf{a_2b_2} & a_2b_3 & a_2b_4 \\ a_3b_1 & a_3b_2 & \bf{a_3b_3} & a_3b_4 \\ a_4b_1 & a_4b_2 & a_4b_3 & \bf{a_4b_4} \end{bmatrix} [/imath] The elements which have been made bold are the ones I'm interested in extracting as a vector. This vector would be [imath]\bf{c}[/imath].
If I multiply this by the all-ones vector, as some of the answers have suggested, I would get: [imath]\bf{a} \bf{b}^\top \bf{1}= \begin{bmatrix} \bf{a_1b_1} & a_1b_2 & a_1b_3 & a_1b_4 \\ a_2b_1 & \bf{a_2b_2} & a_2b_3 & a_2b_4 \\ a_3b_1 & a_3b_2 & \bf{a_3b_3} & a_3b_4 \\ a_4b_1 & a_4b_2 & a_4b_3 & \bf{a_4b_4} \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} a_1b_1 + a_1b_2 + a_1b_3 + a_1b_4 \\ a_2b_1 + a_2b_2 + a_2b_3 + a_2b_4 \\ a_3b_1 + a_3b_2 + a_3b_3 + a_3b_4 \\ a_4b_1 + a_4b_2 + a_4b_3 + a_4b_4 \end{bmatrix}[/imath] Which is not the vector I'm looking for (it isn't equal to [imath]\bf{c}[/imath]). EDIT 2: Multiplying by the [imath]\bf{1}[/imath] vector would obviously work if all off-diagonal elements became zero. So if anyone knows of a way to do that without modifying the elements of the main diagonal, that would also answer my question. EDIT 3: The other question pointed out in the comments is essentially the same and I have received similar answers, but I was hoping for a simpler solution. I haven't marked it as a duplicate to allow people to contribute in the future. I was hoping for a solution that would be linear in [imath]\bf{b}[/imath] which I would substitute in place of [imath]\bf{c}[/imath] into the equation I'm trying to solve. In that case [imath]\bf{b}[/imath] would be my only unknown and I would be able to get an algebraic solution.
970186
Mathematical expression to form a vector from diagonal elements I would like to know if there is any way to express mathematically (using matrix multiplication, addition, etc.) a vector which is formed from the diagonal elements of a matrix. For example, I have a matrix called [imath]\mathbf{M}[/imath] and I want to create a vector [imath]\mathbf{v}[/imath] such that [imath]\mathbf{v}= \text{diagonal elements of } \mathbf{M} [/imath] Is there any matrix algebra or operation that does that? Thank you.
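One common workaround, sketched below in plain Python: Hadamard-multiply the matrix by the identity to zero out the off-diagonal entries, then multiply by the all-ones vector. For [imath]M = ab^\top[/imath] this yields exactly the vector [imath]c[/imath] with [imath]c_i = a_i b_i[/imath]. Note that this uses an entrywise (Hadamard) product, not an ordinary matrix product; the sample vectors are arbitrary illustrative values.

```python
# Extract the diagonal of M = a b^T as a vector via (M ∘ I) 1,
# where ∘ is the entrywise (Hadamard) product and 1 is the ones vector.
a = [2.0, 3.0, 5.0, 7.0]
b = [1.0, 4.0, 6.0, 8.0]
n = len(a)

M = [[a[i] * b[j] for j in range(n)] for i in range(n)]        # a b^T
I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
H = [[M[i][j] * I[i][j] for j in range(n)] for i in range(n)]  # M ∘ I (off-diagonal zeroed)
v = [sum(H[i][j] for j in range(n)) for i in range(n)]         # (M ∘ I) 1

assert v == [a[i] * b[i] for i in range(n)]   # v equals c with c_i = a_i * b_i
```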
1929103
Prove [imath]\int_b^a [f(x)+f^{-1}(x)]dx=a^2-b^2[/imath] So I recently came across the formula [imath]\int_b^a [f(x)+f^{-1}(x)]dx=a^2-b^2[/imath] which apparently came up in a final of the MIT integration bee, and is true when [imath]a,b[/imath] are fixed points of [imath]f[/imath]. I was wondering how one would go about proving this formula? I have no clue how to even make a start!
906521
Show: [imath] f(a) = a,\ f(b) = b \implies \int_a^b \left[ f(x) + f^{-1}(x) \right] \, \mathrm{d}x = b^2 - a^2 [/imath] If [imath]a,b[/imath] are fixed points of [imath]f[/imath], then [imath] \int_a^b \left[ f(x) + f^{-1}(x) \right] \, \mathrm{d}x = b^2 - a^2 [/imath] In the words of 2014 MIT Integration Bee Champion (Carl Lian), the above property was responsible for the champion's victory in the 2013 MIT Integration Bee. How does one go about proving this property?
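A numerical spot-check of the identity (with the orientation [imath]\int_a^b = b^2 - a^2[/imath]), using the illustrative choice [imath]f(x) = x^2[/imath] on [imath][0,1][/imath]: both endpoints are fixed points and [imath]f^{-1}(x) = \sqrt{x}[/imath], so the integral should come out close to [imath]1[/imath].

```python
import math

# Midpoint-rule check of  ∫_0^1 (f + f^{-1}) dx = b^2 - a^2  with
# f(x) = x^2, f^{-1}(x) = sqrt(x), a = 0, b = 1 (both fixed points of f).
# Exactly: 1/3 + 2/3 = 1.
N = 200_000
h = 1.0 / N
total = h * sum(x * x + math.sqrt(x) for x in ((k + 0.5) * h for k in range(N)))
assert abs(total - 1.0) < 1e-6
```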
1928040
How to prove [imath]f(n)=\sum_{k=0}^n\binom{n+k}{k}\left(\frac{1}{2}\right)^k=2^n[/imath] without using the induction method? The problem is to prove the following statement for all natural numbers. [imath]f(n)=\sum_{k=0}^n\binom{n+k}{k}\left(\frac{1}{2}\right)^k=2^n[/imath] I already proved this using mathematical induction. But my instinct is telling me that there is some kind of combinatorial/algebraic method that can be used to solve this easily. (Unless my instinct is wrong.) Can anyone give me a hint on how to do this?
1874816
How to show [imath]\sum_{k=0}^{n}\binom{n+k}{k}\frac{1}{2^k}=2^{n}[/imath] How does one show that [imath]$$\sum_{k=0}^{n}\binom{n+k}{k}\frac{1}{2^k}=2^{n}?$$[/imath] I tried using the Snake oil technique but I guess I am applying it incorrectly. With the snake oil technique we have [imath]$$F(x)= \sum_{n=0}^{\infty}\left\{\sum_{k=0}^{n}\binom{n+k}{k}\frac{1}{2^k}\right\}x^{n}.$$[/imath] I think I have to interchange the summation and do something. But I am not quite comfortable in interchanging the summation. Like, after interchanging the summation, will [imath]$$F(x)=\sum_{k=0}^{n}\sum_{n=0}^{\infty}\binom{n+k}{k}\frac{1}{2^k}x^{n}?$$[/imath] Even if I continue with this I am unable to get the correct answer. How does one prove this using the Snake oil technique? A combinatorial proof is also welcome, as are other kinds of proofs.
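Before hunting for a proof, the identity in this pair can be confirmed exactly for small [imath]n[/imath] with rational arithmetic (assuming Python 3.8+ for `math.comb`):

```python
from fractions import Fraction
from math import comb

# Exact check of  sum_{k=0}^{n} C(n+k, k) / 2^k = 2^n  for n = 0..19.
for n in range(20):
    s = sum(Fraction(comb(n + k, k), 2 ** k) for k in range(n + 1))
    assert s == 2 ** n
```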
275544
Order of nontrivial elements is 2 implies Abelian group If the order of all nontrivial elements in a group is 2, then the group is Abelian. I know of a proof that is just from calculations (see below). I'm wondering if there is any theory or motivation behind this fact. Perhaps to do with commutators? Proof: [imath]a \cdot b = (b \cdot b) \cdot (a \cdot b) \cdot (a \cdot a) = b \cdot (b \cdot a) \cdot (b\cdot a) \cdot a = b \cdot a[/imath].
2777917
[imath]|G|= 2^n, o(g)=2~\forall g \in G, g \neq e[/imath] show that G is abelian Question as in title: I have attempted this question by saying that the homomorphism that maps [imath]G[/imath] onto [imath]Z_2[/imath] is onto; since [imath]Z_2[/imath] is abelian, [imath]G[/imath] must be abelian.
1930422
Sequence [imath]x_{n}=\frac{1}{2}(x_{n-1}+\frac{a}{x_{n-1}}).[/imath] Let [imath]a[/imath] and [imath]x_{0}[/imath] be positive numbers, and define the sequence [imath]\{x_{n}\}[/imath] recursively by [imath]x_{n}=\frac{1}{2}(x_{n-1}+\frac{a}{x_{n-1}}).[/imath] How do we prove that the sequence converges, and how do we find its limit? Actually I am thinking to prove that the sequence is monotone and bounded; then the limit can be found by solving the equation [imath]x^{2}-a=0.[/imath] But the monotone part depends on the real number [imath]a.[/imath] Please give me the simplest way to handle the problem. Thanks a lot.
1541378
Recursive Monotone Decreasing Sequence Proof [imath]{x_{k}} = \frac{1}{2}\left({x_{k-1}+\frac{a}{{x_{k-1}}}}\right)[/imath] I have been looking at this for hours and it isn't making any more sense than it did in the first hour. If [imath]a[/imath] and [imath]{x_{0}}[/imath] are positive real numbers and [imath]{x_{k}}[/imath] is defined as follows, prove that [imath]{x_{k}}[/imath] is monotone decreasing and bounded, then calculate the limit. [imath]{x_{k}} = \frac{1}{2}\left({x_{k-1}+\frac{a}{{x_{k-1}}}}\right)[/imath] What I thought I had to do was pick an [imath]{x_{0}}[/imath] and solve for [imath]{x_{k}}[/imath], so I picked an [imath]{x_{0}}[/imath]. Then I wanted to put the result back into the function to get [imath]{x_{k+1}}[/imath], which I still believe is what I'm supposed to be doing, but I don't understand what I am getting as a result. I get that I should prove it is decreasing, then that it is bounded, then address the limit, but the how is missing.
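The recursion in this pair is Newton's method for [imath]x^2 = a[/imath] (the Babylonian square-root algorithm), which suggests the shape of the proof: by AM-GM, [imath]x_k \geq \sqrt{a}[/imath] for [imath]k \geq 1[/imath], and then [imath]x_k - x_{k+1} = (x_k^2 - a)/(2x_k) \geq 0[/imath], so from the second term on the sequence is decreasing and bounded below. A short numeric illustration, with arbitrary sample values of [imath]a[/imath] and [imath]x_0[/imath]:

```python
import math

# Babylonian iteration x_k = (x_{k-1} + a/x_{k-1}) / 2: after the first
# step the iterates stay >= sqrt(a) and decrease monotonically to sqrt(a).
a, x = 2.0, 10.0          # sample positive a and starting value x_0
seq = [x]
for _ in range(10):
    x = 0.5 * (x + a / x)
    seq.append(x)

assert all(s >= math.sqrt(a) - 1e-12 for s in seq[1:])             # bounded below by sqrt(a)
assert all(s1 >= s2 - 1e-12 for s1, s2 in zip(seq[1:], seq[2:]))   # decreasing from x_1 on
assert abs(seq[-1] - math.sqrt(a)) < 1e-12                         # limit is sqrt(a)
```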
854457
'Algebraic' way to prove the boolean identity [imath]a + \overline{a}*b = a + b[/imath] For me, it is pretty clear that [imath]a + \overline{a}*b = a + b[/imath], because the first [imath]a[/imath] in the OR will make sure that if the second term must be 'evaluated', [imath]a[/imath] will always be false, and therefore won't matter in the AND - probably because I'm a programmer and it is very common to see this unnecessary cluttering in long if-else structures. I can easily prove it with the truth table as well: just build both expressions' tables, and it turns out they are identical. But I was looking for an 'algebraic' way to prove it. That is, something using the basic operations' properties, like the fact that they are transitive, commutative, associative, etc. For example, if I were to prove this other simple identity [imath]a + a*b = a[/imath], I would simply do something like: [imath]a + a*b = a*(1 + b) = a*1 = a[/imath]. Is there a way to also prove the former using these algebraic operations, without resorting to the truth table?
2003192
Proving [imath]A+A'B=A+B[/imath] without truth tables How can I prove the Boolean algebraic rule [imath]A+A'B=A+B[/imath] without using a truth table? With the truth table, it is easy to see that the two are equal, but how can I prove it using lesser Boolean identities?
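One standard truth-table-free route uses the distributivity of OR over AND: [imath]A + A'B = (A + A')(A + B) = 1 \cdot (A + B) = A + B[/imath]. The sketch below checks each step exhaustively (which is of course just a truth table in disguise; the comments carry the algebra):

```python
from itertools import product

# A + A'B = (A + A')(A + B) = A + B, checked over all four input pairs.
for a, b in product([False, True], repeat=2):
    lhs = a or ((not a) and b)            # A + A'B
    mid = (a or (not a)) and (a or b)     # (A + A')(A + B), OR distributes over AND
    rhs = a or b                          # A + B, since A + A' = 1
    assert lhs == mid == rhs
```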
1931653
Every homogeneous Riemannian manifold is complete. I have to prove that every homogeneous Riemannian manifold is complete. Let [imath]p,q[/imath] be points on a Riemannian homogeneous manifold. I want to prove that there exists a minimizing geodesic joining [imath]p[/imath] and [imath]q[/imath], or equivalently, that every geodesic can be extended forever, which is the same as saying that [imath]\exp_p[/imath] is defined on all of [imath]T_pM[/imath] for at least one point [imath]p[/imath]. My idea is the following: We know that there is an open ball in [imath]T_pM[/imath] with some radius [imath]\epsilon > 0[/imath] on which [imath]\exp_p[/imath] is a diffeomorphism onto some open set in [imath]M[/imath]. In particular, the image of the diffeomorphism has the property that every point in it can be joined to [imath]p[/imath] by a minimizing geodesic of length less than [imath]\epsilon.[/imath] Since [imath]M[/imath] is homogeneous, every point admits a ball with the same radius and the same property. This can be seen by noting that given two arbitrary points there is an isometry that "connects" them; then I can transfer the geodesics from one point to another. Ok, then: for every point I can produce a ball of radius [imath]\epsilon[/imath] with the property that every point in the ball is joined to the center by a minimizing geodesic. How can I conclude that I can extend every geodesic? I know that at every point I can produce geodesics of the same length; how do I finish?
581912
Homogeneous riemannian manifolds are complete. Trouble understanding proof. I came across this proof while looking for hints on my homework, and I think it's only gotten me more confused. This is from Global Lorentzian Geometry. Lemma 5.4 If [imath](H,h)[/imath] is a Riemannian manifold, then [imath](H,h)[/imath] is complete. Proof. By the Hopf-Rinow Theorem, it suffices to show that [imath](H,h)[/imath] is geodesically complete. Thus suppose that [imath]c:[a,1) \rightarrow H[/imath] is a unit speed geodesic which is not extendible to [imath]t=1[/imath]. Choosing any [imath]p \in H[/imath], we may find a constant [imath]\alpha > 0[/imath] such that any unit speed geodesic starting at [imath]p[/imath] has length [imath]\ell \geq \alpha[/imath]. Set [imath]\delta = \min\{\alpha/2, (1-a)/2\} > 0[/imath]. Since isometries preserve geodesics, it follows from the homogeneity of [imath](H,h)[/imath] that any unit speed geodesic starting at [imath]c(1-\delta)[/imath] may be extended to a geodesic of length [imath]\ell \geq 2 \delta[/imath]. In particular, [imath]c[/imath] may be extended to a geodesic [imath]c:[a, 1+\delta) \rightarrow H[/imath], in contradiction to the inextendibility of [imath]c[/imath] to [imath]t=1[/imath]. How exactly does it "follow from homogeneity" that we are able to extend unit speed geodesics? Isn't that what we're trying to show in the lemma? And how does considering geodesics based at [imath]c(1-\delta)[/imath] lead us to extending [imath]c[/imath] to [imath]c:[a,1+\delta) \rightarrow H[/imath]?
1928453
How are the following two definitions of holomorphic mappings on Riemann surfaces equivalent? A holomorphic function [imath]\phi:X\to Y[/imath], where [imath]X[/imath] and [imath]Y[/imath] are Riemann surfaces, is described in the following way: For [imath]a\in X[/imath], let [imath]a\in U_1[/imath], where [imath]U_1[/imath] is an open set. Let [imath]C_1:U_1\to V_1[/imath] be a chart. Similarly, let [imath]\phi(a)\in U_2[/imath], and let [imath]C_2:U_2\to V_2[/imath] be a chart. Then [imath]\phi[/imath] is holomorphic iff [imath](C_2\circ \phi\circ C_1^{-1}): V_1\to V_2[/imath] is holomorphic. Another definition of holomorphic functions is the following: For any holomorphic function [imath]f[/imath] on [imath]Y[/imath], if [imath]f\circ \phi[/imath] is also holomorphic on [imath]X[/imath], then [imath]\phi[/imath] is holomorphic. How are the two definitions equivalent?
1384379
Show that a function from a Riemann Surface [imath]g:Y\to\mathbb{C}[/imath] is holomorphic iff its composition with a proper holomorphic map is holomorphic. I'm trying to show the following: Let [imath]f:X\to Y[/imath] be a proper holomorphic map between connected, non-empty Riemann Surfaces. Show that a map [imath]g:Y\to\mathbb{C}[/imath] is holomorphic if and only if its composition with [imath]f[/imath] is holomorphic. So far I know that [imath]f[/imath] is surjective, since it's proper and holomorphic. My next step was to look at the charts, but it's there that I get stuck.
1931306
Distance function [imath]\hat{f}(x,y)= |f(x)-f(y)|[/imath] I am wondering, what is the necessary and sufficient condition for the function [imath]\hat{f}(x,y)= |f(x)-f(y)| [/imath] from [imath]\mathbb{R}[/imath] to [imath]\mathbb{R}[/imath] to be a distance on [imath]\mathbb{R}[/imath]?
155001
Condition on function [imath]f:\mathbb{R}\rightarrow \mathbb{R}[/imath] so that [imath](a,b)\mapsto | f(a) - f(b)|[/imath] generates a metric on [imath]\mathbb{R}[/imath] Can we impose a condition on a function [imath]f:\mathbb{R}\rightarrow \mathbb{R}[/imath] so that [imath](a,b)\mapsto | f(a) - f(b)|[/imath] generates a metric on [imath]\mathbb{R}[/imath]? This question came to my mind when I was working on the problem that [imath](a,b)\mapsto | e^{a} - e^{b}|[/imath] is a metric on [imath]\mathbb{R}[/imath]. I guess this can be done by taking an injective function [imath]f[/imath], but I am not sure whether this will work or not. Certainly, this will help everyone in dealing with such kinds of problems. I need help with this. Thank you very much.
1932435
Number of weakly increasing functions [imath]g: \{1,2,3,....m\} \rightarrow \{1,2,3,....m\}[/imath] How to find the number of weakly increasing functions [imath]g: \{1,2,3,....m\} \rightarrow \{1,2,3,....m\}[/imath]? Here weakly increasing means that [imath]x < y[/imath] implies [imath]g(x) \le g(y)[/imath]. Attempt: The first element [imath]1[/imath] has [imath]m[/imath] choices (including [imath]1[/imath]), the second element has [imath]m-1[/imath] choices ..... So I get [imath]m![/imath]; is this right?
1385227
Number of increasing functions from [imath]\{1,2,\dots, n\}[/imath] to itself. Let [imath]f[/imath] be a function from [imath]X=\{1,2,3,...,n\}[/imath] to itself. We say [imath]f[/imath] is increasing if [imath]a\le b[/imath] then [imath]f(a)\le f(b)[/imath]. How do we find the number of increasing functions? I think if we can define [imath]f(1)[/imath] then we can count. But it is very difficult.
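The count in this pair is not [imath]m![/imath]: a weakly increasing function is determined by the multiset of its [imath]m[/imath] values, so by stars and bars there are [imath]\binom{2m-1}{m}[/imath] of them. A brute-force cross-check for small [imath]m[/imath] (assuming Python 3.8+ for `math.comb`):

```python
from itertools import combinations_with_replacement
from math import comb

# Each weakly increasing function {1..m} -> {1..m} corresponds to one
# multiset of m values listed in increasing order, so the count should
# be C(2m-1, m) by stars and bars.
for m in range(1, 8):
    count = sum(1 for _ in combinations_with_replacement(range(1, m + 1), m))
    assert count == comb(2 * m - 1, m)
```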
817464
Inequality [imath]\left(a-1+\frac{1}b\right)\left(b-1+\frac{1}c\right)\left(c-1+\frac{1}a\right)\leq1[/imath] Let [imath]a,b,c\in\mathbb{R}^*_+[/imath], [imath]abc=1[/imath]. How can I show that [imath]\left(a-1+\frac{1}b\right)\left(b-1+\frac{1}c\right)\left(c-1+\frac{1}a\right)\leq1[/imath] ? I got [imath]\left(ab-b+1\right)\left(bc-c+1\right)\left(ca-a+1\right)\leq abc=1[/imath], but I can't go any further ... If I expand it: [imath]2-ab-\frac{b}a-c-ac-\frac{a}c-b+2a+\frac{2}a-bc-\frac{c}b-a+2b+\frac{2}b+2c+\frac{2}c-4[/imath] Which gives, using [imath]c=\frac{1}{ab}[/imath]: [imath]5ab-\frac{b}a-\frac{1}{ab}-a^2b+\frac{1}a+b+\frac{1}b-2[/imath]
1912846
Let [imath]a, b, c[/imath] be positive real numbers such that [imath]abc=1.[/imath] Prove that [imath](a-1+1/b)(b-1+1/c)(c-1+1/a)\le1[/imath] Let [imath]a, b, c[/imath] be positive real numbers such that [imath]abc=1.[/imath] Prove that: [imath]\left(a-1+\dfrac1b\right)\left(b-1+\dfrac1c\right)\left(c-1+\dfrac1a\right)\le1[/imath] or equivalently: [imath](ab-b+1)(bc-c+1)(ca-a+1)\le1[/imath] What I have tried: Computing [imath]\left(a-1+\dfrac1b\right)[/imath] using [imath]abc=1[/imath] and similarly computing others and then multiplying them. But it didn't help. Any help will be appreciated.
1929912
Expected waiting time to cross the road Cars crossing a certain place on a highway follow a Poisson process with rate [imath]\lambda = 3[/imath] per minute. David waits to cross the road, but he only crosses if he sees no cars coming by in the next 30 seconds. Find his expected waiting time (hint: condition on the time of the first car). My attempt: Let [imath]X =[/imath] the waiting time of David before he crosses the street (so [imath]X\geq \frac{1}{2}[/imath] min), and [imath]Y = [/imath] the time at which the 1st car passes the point where David waits (in minutes), so [imath]Y[/imath] is the first arrival time of a P.P.[imath](3)[/imath]. We have: [imath]E(X) = E(X|Y<\frac{1}{2})P(Y<\frac{1}{2}) + E(X|Y\geq \frac{1}{2})P(Y\geq \frac{1}{2})[/imath]. The 2nd term is actually just [imath]\frac{1}{2}\ P(N(\frac{1}{2}) = 0) = \frac{1}{2}e^{\frac{-3}{2}},[/imath] because David would cross the road immediately after waiting for [imath]30[/imath] seconds without seeing any cars. I'm stuck here because I could not find a way to compute the first term (the more difficult case). Could someone please help me with how to compute this first term?
1692437
Crossing a lane of traffic A pedestrian wishes to cross a single lane of fast-moving traffic. Suppose the number of vehicles that have passed by time [imath]t[/imath] is a Poisson process of rate [imath]\lambda[/imath], and suppose it takes time [imath]a[/imath] to walk across the lane. Assume that the pedestrian can foresee correctly the times at which vehicles will pass by. Question 1: How long on average does it take to cross over safely? [Consider the time at which the 1st car passes.] Question 2: How long on average does it take to cross two similar lanes (a) when one must walk straight across (assuming that the pedestrian will not cross if, at any time whilst crossing, a car would pass in either direction), (b) when an island in the middle of the road makes it safe to stop half-way? Attempt: Question 1: It can be calculated by conditioning on the first arrival: [imath]E[X]=\int E[X|Y=y]f_Y(y)dy[/imath], where [imath]Y[/imath] is the time of the first vehicle. The answer is [imath](e^{\lambda a}-1)\lambda^{-1}[/imath]. Question 2: a) I think this is two independent Poisson processes, each with parameter [imath]\lambda[/imath]; their superposition is a Poisson process of rate [imath]2\lambda[/imath]. So [imath]E[X]=(e^{2\lambda a}-1)(2\lambda)^{-1}[/imath]. b) I think the answer is [imath]2(e^{\lambda a}-1)\lambda^{-1}[/imath]
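A Monte-Carlo cross-check of the Question 1 answer [imath]E[X] = (e^{\lambda a}-1)/\lambda[/imath], where [imath]X[/imath] includes the crossing time [imath]a[/imath] itself; the same formula with [imath]\lambda = 3[/imath], [imath]a = 1/2[/imath] appears to give [imath](e^{3/2}-1)/3 \approx 1.16[/imath] minutes for David's problem in the pair above. The simulation assumes i.i.d. exponential gaps between cars; the parameter values [imath]\lambda = 1[/imath], [imath]a = 1[/imath] are arbitrary sample choices.

```python
import math
import random

# Simulate: the pedestrian waits through every inter-car gap shorter
# than a, then spends time a crossing during the first gap >= a.
random.seed(0)
lam, a = 1.0, 1.0
exact = (math.exp(lam * a) - 1.0) / lam   # = e - 1 ≈ 1.71828

def one_crossing():
    t = 0.0
    while True:
        gap = random.expovariate(lam)     # Exp(lam) gap until the next car
        if gap >= a:
            return t + a                  # start crossing at t, done at t + a
        t += gap                          # too short: wait for this car

n = 200_000
sim = sum(one_crossing() for _ in range(n)) / n
assert abs(sim - exact) < 0.05
```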
1933496
What is wrong with using Kolmogorov's inequality for finding the expectation of the maximum of normal random variables? I am wondering if someone could help me shed light on why the following bound doesn't work. Suppose that [imath]X_1, \ldots, X_n \sim N(0,1)[/imath] are independent random variables. I am interested in finding a constant C that satisfies: [imath] E\left[\max_{1\leq i\leq n}|X_i|\right] \leq C \sqrt{\log n} [/imath] My Method: Let [imath]Y = \max_{1\leq i\leq n}|X_i|[/imath] [imath] \begin{align} E\left[Y\right] & = \int_{0}^{\infty}P(Y >y)dy \\ &\leq\int_{0}^{\infty}\frac{1}{y^2}\sum_{i=1}^{n}Var(X_i) dy \\ & \leq \int_{0}^{\infty}\frac{n}{y^2}dy \\ \end{align} [/imath] I used Kolmogorov's inequality in the second step above, but don't know the exact mechanics of why this fails and am unsure why my integral in the end doesn't converge. Is there a specific reason this bounding doesn't work? Am I failing to take into account tail activity? Thanks.
1933473
Finding a bound on the maximum of the absolute value of normal random variables Suppose that [imath]X_1, \ldots, X_n \sim N(0,1)[/imath] are independent random variables. I am interested in finding a constant C that satisfies: [imath] E\left[\max_{1\leq i\leq n}|X_i|\right] \leq C \sqrt{\log n} [/imath] I know one method is to employ the moment generating function trick, then take logs of both sides. However, I was wondering if there exists a more direct method. Thanks!
98260
how to get [imath]dx\; dy=r\;dr\;d\theta[/imath] In polar coordinates, how can we get [imath]dx\;dy=r\;dr\;d\theta[/imath]? with these parameters: [imath]r=\sqrt{x^2+y^2}[/imath] [imath]x=r\cos\theta[/imath] [imath]y=r\sin\theta[/imath] Thanks.
2293337
Proving [imath]dx\,dy = r\,dr\,d\theta[/imath] using x and y I can prove that [imath]dx\,dy = r\,dr\,d\theta[/imath] by drawing a circle and calculating the area of a small square in polar coordinates, but when I try proving it using the equations below, I fail. What is my mistake? [imath]x = r\cos(\theta) => dx = \cos(\theta).dr - \sin(\theta)r.d\theta[/imath] [imath]y = r\sin(\theta)=>dy=\sin(\theta).dr + \cos(\theta)r.d\theta[/imath] [imath]=> dxdy = r(\cos^2(\theta)-\sin^2(\theta))drd\theta=r\cos(2\theta).drd\theta[/imath] I actually saw a similar question on this site: how to get [imath]dx\; dy=r\;dr\;d\theta[/imath] My problem was that I didn't understand why [imath]drd\theta = -d\theta dr[/imath]
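A numeric check that the Jacobian determinant of [imath](r, \theta) \mapsto (r\cos\theta, r\sin\theta)[/imath] is [imath]r[/imath] rather than [imath]r\cos 2\theta[/imath]. The slip in the question comes from treating [imath]dr\,d\theta[/imath] as commuting: as differential forms [imath]d\theta \wedge dr = -dr \wedge d\theta[/imath], so the cross terms add with the same sign and give [imath]r(\cos^2\theta + \sin^2\theta) = r[/imath]. Central finite differences in plain Python confirm this at a few sample points:

```python
import math

def x(r, th): return r * math.cos(th)
def y(r, th): return r * math.sin(th)

def jacobian_det(r, th, h=1e-6):
    # central finite differences for the 2x2 Jacobian of (r, th) -> (x, y)
    dxdr = (x(r + h, th) - x(r - h, th)) / (2 * h)
    dxdt = (x(r, th + h) - x(r, th - h)) / (2 * h)
    dydr = (y(r + h, th) - y(r - h, th)) / (2 * h)
    dydt = (y(r, th + h) - y(r, th - h)) / (2 * h)
    return dxdr * dydt - dxdt * dydr

for r, th in [(0.5, 0.3), (2.0, 1.1), (3.0, 2.5)]:
    assert abs(jacobian_det(r, th) - r) < 1e-6   # determinant is r, for any theta
```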
1933313
Let [imath]G = GL_2(\mathbb R)[/imath]. Show that [imath]T[/imath] is a subgroup of [imath]G[/imath] Let [imath]G = GL_2(\mathbb {R})[/imath]. Show that [imath]T[/imath] = {[imath] \begin{bmatrix} a & b \\ 0 & d \\ \end{bmatrix}[/imath] | ad [imath]\neq 0[/imath] } is a subgroup of [imath]G[/imath]. My attempt: [imath]\det(TT^{-1})[/imath] = [imath]\det(T)\det(T^{-1})[/imath] = [imath]\det(T)\cdot\frac{1}{\det(T)}[/imath] = [imath]ad\cdot\frac{1}{ad}[/imath] = [imath]1[/imath]. We are done by the subgroup test.
796336
Upper Triangular Matrices Consider the set [imath]V[/imath] of upper-triangular [imath]n\times n[/imath] matrices with elements in some field [imath]K[/imath]. I.e., if [imath]A[/imath] is such a matrix, [imath]a_{ij} = 0[/imath], for [imath]i > j[/imath] Show that non-degenerate upper-triangular matrices form a group with respect to matrix multiplication.
1933855
Method of Steepest Descent I have been given this question by my university to complete and I am having some trouble. The lecturer did not do a very good job of explaining the concept and I could not find any materials online to help, so I came here. The question is: What happens if you apply the method of steepest descent to [imath]f(x)=x_1^2+x_2^2+x_3^2[/imath]? Thank you.
1932433
Method of Steepest Descent and Lagrange I am totally stumped for what to do here, any help would be appreciated. 1) What happens if you apply the method of steepest descent to [imath]f(x)=x_1^2+x_2^2+x_3^2[/imath]? 2) Lagrange multipliers method: Find the dimensions of the rectangular box, open at the top, of greatest internal volume, given that the surface area of the five faces is [imath]108 \, \mathrm{cm}^2[/imath]. Thanks
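For this particular [imath]f[/imath] the answer is clean: [imath]\nabla f(x) = 2x[/imath] always points directly away from the unique minimizer [imath]0[/imath], so with exact line search steepest descent reaches the minimum in a single step from any starting point. (Minimizing [imath]\varphi(t) = f(x - t \cdot 2x) = (1-2t)^2 f(x)[/imath] gives [imath]t = 1/2[/imath].) A minimal sketch, with an arbitrary sample starting point:

```python
# Steepest descent on f(x) = x1^2 + x2^2 + x3^2 with exact line search:
# one step from any start lands exactly at the minimizer 0.
def f(x): return sum(xi * xi for xi in x)
def grad(x): return [2 * xi for xi in x]

x0 = [3.0, -1.0, 4.0]                      # sample starting point
g = grad(x0)
t = 0.5                                    # exact line search minimizer of f(x0 - t g)
x1 = [xi - t * gi for xi, gi in zip(x0, g)]
assert x1 == [0.0, 0.0, 0.0] and f(x1) == 0.0
```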
1591558
Nonconstant polynomials do not generate maximal ideals in [imath]\mathbb Z[x][/imath] Let [imath]f[/imath] be a nonconstant element of the ring [imath]\mathbb Z[x][/imath]. Prove that [imath]\langle f \rangle[/imath] is not maximal in [imath]\mathbb Z[x][/imath]. Let us assume [imath]\langle f \rangle[/imath] is maximal. Then [imath]\mathbb Z[x] / \langle f \rangle[/imath] would be a field. Let [imath]a \in \mathbb{Z}[/imath]. Then [imath]a + \langle f \rangle[/imath] is a nonzero element of this field, hence a unit. Let [imath]g + \langle f \rangle[/imath] be its inverse. Then [imath]a g - 1 \in \langle f \rangle[/imath], hence [imath]ag(x)-1 = f(x)h(x)[/imath] for some [imath]h \in \mathbb Z[x][/imath], hence [imath]ag(0) - f(0)h(0) = 1[/imath], thus [imath](a,f(0))=1[/imath] for all [imath]a \in \Bbb Z[/imath], a contradiction, hence the proof. Is my argument correct? Is there any other method?
1960029
Non-constant polynomials in [imath]\mathbb Z[x][/imath] can't generate maximal ideals I am wondering if anyone can validate my proof for this problem I am working on. Question: Let [imath]f(x)[/imath] be a non-constant polynomial in [imath]\mathbb Z[x].[/imath] Prove that [imath]\langle f(x)\rangle[/imath] is not maximal in [imath]\mathbb Z[x][/imath]. Clearly [imath]\frac{\mathbb Z[x]}{\langle f(x)\rangle}[/imath] is a field iff [imath]\langle f(x)\rangle[/imath] is a maximal ideal. Proof: [imath]\langle f(x)\rangle = \{g(x)f(x) \mid g(x) \in \mathbb Z[x]\}[/imath]. Thus [imath]\langle f(x)\rangle[/imath] is just all non-constant polynomials. Let [imath]a \in \frac{\mathbb Z[x]}{\langle f(x)\rangle} = \{ a_0 + \langle f(x)\rangle \mid a_0 \in \mathbb Z\}[/imath]. Thus [imath]a_0[/imath] is a constant and is not 'absorbed' by [imath]\langle f(x)\rangle[/imath]. Does there exist an element [imath]b \in \frac{\mathbb Z[x]}{\langle f(x)\rangle} = \{ b_0 + \langle f(x) \rangle \mid b_0 \in \mathbb Z[x]\}[/imath] s.t. [imath]ab = 1 + \langle f(x)\rangle[/imath] (the identity)? But [imath]\langle f(x)\rangle[/imath] absorbs all non-constant polynomials, thus [imath]a_0[/imath], [imath]b_0[/imath] must be non-constant, and [imath]a_0 b_0 \in \mathbb Z[/imath]. Clearly, not all elements of [imath]\mathbb Z[/imath] have multiplicative inverses, and there do not exist elements [imath]a = \{a_0 + \langle f(x)\rangle \mid a_0 \in \mathbb Z\}[/imath], [imath]b = \{b_0 + \langle f(x) \rangle \mid b_0 \in \mathbb Z[x]\}[/imath] where [imath]a_0 b_0 = 1 + \langle f(x)\rangle[/imath]. Therefore, not all elements of [imath]\frac{\mathbb Z[x]}{\langle f(x)\rangle}[/imath] are units, [imath]\frac{\mathbb Z[x]}{\langle f(x)\rangle}[/imath] is not a field, and [imath]\langle f(x)\rangle[/imath] is not a maximal ideal of [imath]\mathbb Z[x][/imath].
605044
Do entire functions have infinite radius of convergence? If [imath]f[/imath] is an entire function, then is the radius of convergence of its Taylor series, centered at any point [imath]z_0[/imath], equal to infinity? I tend to think that this is true, because the series converges to [imath]f[/imath] at any point. Is this incorrect?
843858
Radius of convergence of entire function Let [imath]f[/imath] be an entire function on the complex plane. Is the radius of convergence of [imath]f[/imath] around any point [imath]z_0[/imath] infinite? If so, why? Thank you.
1932310
Prove that if [imath]m=2^n + 1[/imath] is prime then [imath]n[/imath] must be of the form [imath]2^i[/imath] I need to find the proof of the statement given in the title. There are primes of the form [imath]2^2 + 1[/imath], [imath]2^4 + 1[/imath] and so on. So I'm thinking that if [imath]m[/imath] is prime then it must be of the form [imath]m=2^{2^i} + 1[/imath]. I probably need more details for the proof.
840199
Consider numbers of the type [imath]n=2^m+1[/imath]. Prove that such an [imath]n[/imath] is prime only if [imath]n=F_k[/imath] for some [imath]k \in \mathbb N[/imath], where [imath]F_k[/imath] is a Fermat prime. Consider numbers of the type [imath]n=2^m+1[/imath]. Prove that such an [imath]n[/imath] is prime only if [imath]n=F_k[/imath] for some [imath]k \in \mathbb N[/imath], where [imath]F_k[/imath] is a Fermat prime. Consider numbers of the type [imath]n=a^m-1[/imath] where [imath]a>1[/imath] is a natural number. Prove that such an [imath]n[/imath] is prime only if [imath]n=M_p[/imath] for [imath]p[/imath] prime, where [imath]M_p[/imath] is a Mersenne prime. I do not understand what the question is asking me for. Can I get some clarification?
1934451
Inequality involving [imath]\sqrt{2}[/imath] I need some help with the following problem: Show that for any pair [imath](a,b)[/imath] of positive integers, [imath]\sqrt{2}[/imath] lies strictly between [imath]\dfrac{a}{b}[/imath] and [imath]\dfrac{a+2b}{a+b}[/imath]. I tried squaring both sides of the inequality, but I was not able to solve it.
1931013
Prove [imath]\sqrt{2}[/imath] is between [imath]\dfrac{a}{b}[/imath] and [imath]\dfrac{a+2b}{a+b}[/imath] Let [imath]a[/imath] and [imath]b[/imath] be positive integers. Show that [imath]\sqrt{2}[/imath] always lies between [imath]\dfrac{a}{b}[/imath] and [imath]\dfrac{a+2b}{a+b}[/imath]. Please give as easy a solution as possible.
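As an editorial aside (not part of either question), the claim is easy to spot-check with exact rational arithmetic; squaring avoids having to compare against an irrational number:

```python
# Check (added illustration): for positive integers a, b, sqrt(2) lies strictly
# between a/b and (a+2b)/(a+b).  Since both fractions are positive, lo < sqrt(2) < hi
# is equivalent to lo^2 < 2 < hi^2, which is exact over the rationals.
from fractions import Fraction

def between(a, b):
    f1 = Fraction(a, b)
    f2 = Fraction(a + 2 * b, a + b)
    lo, hi = min(f1, f2), max(f1, f2)
    return lo * lo < 2 < hi * hi

assert all(between(a, b) for a in range(1, 40) for b in range(1, 40))
```

The function name `between` is my own; the exhaustive range is just a finite sample, not a proof.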
1934152
Proof by induction: [imath]2^{n} < n![/imath] Prove that [imath]2^{n} < n![/imath] for all [imath]n > 4[/imath]. Base case, [imath]n=5[/imath]: [imath]2^{5}<5![/imath], i.e. [imath]32 < 120[/imath]. This is true. Now, assuming it holds for [imath]n[/imath], we need to show it holds for [imath]n+1[/imath]: [imath]2^{n+1} < (n+1)![/imath] [imath]2^{n} < \frac{(n+1)!}{2}[/imath] We know from the induction hypothesis that [imath]2^{n} < n![/imath], so it suffices to show: [imath]n! < \frac{(n+1)!}{2}[/imath] [imath]n! < \frac{n!\cdot(n+1)}{2}[/imath] [imath]1<\frac{n+1}{2}[/imath] The task says for all [imath]n > 4[/imath], so the right-hand side really is greater than [imath]1[/imath]. I hope everything is ok? Edit: The possible-duplicate-link didn't help me because I'm not really looking for a solution to the task. I'm rather interested in knowing if MY proof is correct.
1624497
Prove that: [imath]2^n < n![/imath] Using Induction I'm told to show that [imath]2^n < n![/imath] using induction. This is my attempt at it: BC: [imath]n=4[/imath], [imath]2^4 = 16 < 4![/imath] IH: [imath]n = k[/imath], [imath]2^k < k![/imath] IS: try [imath]n = k+1[/imath]. I'm told to only work from one side, so I try the left side: [imath]2^{k+1}[/imath]. But I'm stuck here, any ideas?
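For what it's worth, the base case and the inequality itself can be checked directly (an added illustration, not from the original posts):

```python
# Check that 2^n < n! for n >= 4 on a finite range, and that it fails at n = 3.
# The base case n = 4 gives 16 < 24.
from math import factorial

assert all(2**n < factorial(n) for n in range(4, 200))
assert not (2**3 < factorial(3))  # 8 > 6, so n = 3 is too small
```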
1935629
Find the PDF of [imath]Z[/imath] where [imath]Z = X^2 + Y^2[/imath] [imath]X[/imath] and [imath]Y[/imath] are independent normal variables with zero mean and common variance [imath]k[/imath]. Find the probability density function of [imath]Z = X^2 + Y^2[/imath]. We can find the PDF of a random variable if it is a function of one random variable with known PDF, but how do we calculate it for [imath]2[/imath] variables? I was not able to proceed. How do I use the given data of common variance?
656762
Distribution of the sum of squared independent normal random variables. The sum of squares of [imath]k[/imath] independent standard normal random variables [imath]\sim\chi^2_k[/imath] I read here that if I have [imath]k[/imath] i.i.d normal random variables where [imath]X_i\sim\mathcal{N}(0,\sigma^2)[/imath] then [imath]X_1^2+X_2^2+\dots+X_k^2\sim\sigma^2\chi^2_k[/imath]. How do I go about obtaining the pdf? If I have [imath]k[/imath] independent normal random variables where [imath]X_i\sim\mathcal{N}(0,\sigma_i^2)[/imath] then what is the distribution of [imath]X_1^2+X_2^2+\dots+X_k^2[/imath]?
1936077
Finding how multiplication and addition behave on [imath]\mathbb{F}_4[/imath] without any result I'm currently a tutor in an undergraduate course, and the students are asked to find how addition and multiplication behave on the field with 4 elements: [imath]\{0,1,x,y\}[/imath]. In the solution the teacher gave me, he uses the fact that [imath](\mathbb{F}_4^*, \cdot )[/imath] is cyclic of order 3 to find that [imath]x[/imath] and [imath]y[/imath] are both generators, and that [imath](\mathbb{F}_4,+)[/imath] is a group of order 4 to deduce that [imath]0=1+1+1+1=(1+1)(1+1)[/imath] and hence [imath]1+1=0[/imath], since a field is an integral domain. However, I'm not satisfied with this reasoning because this question is asked of first-year students who have just been introduced to fields (and this was part of an analysis course, so they won't see any such result soon). So I tried to find how it behaves by myself, using as few results as possible. Multiplication is the easy part: [imath]0\cdot a[/imath] and [imath]1 \cdot a[/imath] are trivial for all [imath]a\in \mathbb{F}_4[/imath]. Then, [imath]x\cdot y\not = 0[/imath] because it is invertible, and it can't be equal to [imath]x[/imath] or [imath]y[/imath] because that would force the other element to equal [imath]1[/imath]. Hence, [imath]x\cdot y=1[/imath]. We are left to find what [imath]x^2[/imath] is (and similarly, [imath]y^2[/imath]). [imath]x^2[/imath] cannot be [imath]0[/imath], nor [imath]x[/imath]. If [imath]x^2=1[/imath], then [imath]y=(x^2)y=x(xy)=x\cdot 1=x[/imath]. Contradiction. Hence [imath]x^2=y[/imath], and by the same reasoning, [imath]y^2=x[/imath]. It was easy with the multiplication, but I can't figure out any non-trivial result concerning addition. My intuition tells me that distributivity should be used here, but I feel like that just converts one problem into another, because we don't know anything about addition.
Does anyone here have an idea of how addition can be "found" without using any advanced result (or as few results as possible)? Thanks in advance.
1938294
How can I find the sum and multiplication table of a [imath]F_4[/imath] field? From the field [imath]\{0,1,A,B\}[/imath] I know, by exhaustion, that [imath]A\cdot B=1[/imath], but how do I get [imath]A \cdot A[/imath] and [imath]B\cdot B[/imath]? How can I get the sum table? We haven't been taught almost anything about fields in class yet, so I am stuck even with the other answers on the internet. I need a full explanation. I don't know what cyclic means or pretty much anything about fields.
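One concrete way to see both tables, added here as an illustration (the encoding and the names `A`, `B` are my own choices): represent GF(4) as polynomials over GF(2) modulo the irreducible [imath]x^2+x+1[/imath].

```python
# Elements are pairs (c0, c1) meaning c0 + c1*x with coefficients mod 2:
# 0 = (0,0), 1 = (1,0), A = (0,1), B = (1,1).
def add(p, q):
    return (p[0] ^ q[0], p[1] ^ q[1])  # coefficientwise addition mod 2

def mul(p, q):
    # (a0 + a1 x)(b0 + b1 x) = a0b0 + (a0b1 + a1b0)x + a1b1 x^2,
    # then reduce using x^2 = x + 1 (mod x^2 + x + 1).
    c0 = p[0] & q[0]
    c1 = (p[0] & q[1]) ^ (p[1] & q[0])
    c2 = p[1] & q[1]
    return (c0 ^ c2, c1 ^ c2)

zero, one, A, B = (0, 0), (1, 0), (0, 1), (1, 1)
assert mul(A, B) == one   # A*B = 1
assert mul(A, A) == B     # A^2 = B
assert mul(B, B) == A     # B^2 = A
assert add(A, A) == zero  # every element is its own additive inverse
assert add(A, B) == one   # A + B = 1
```

Printing `add` and `mul` over all four elements reproduces the full addition and multiplication tables.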
1936301
[imath]z_n\to 0[/imath], then [imath]\frac{1}{n}(z_1+\cdots + z_n)\to 0[/imath] If [imath]z_n\to 0[/imath], then [imath]\frac{1}{n}(z_1+\cdots + z_n)\to 0[/imath]. I'm sorry for asking this one; I've seen it in many places before, but I cannot think of a way to solve it, and I don't know how to search for it here. Also, I've seen the solution before for real numbers.
103822
Why does [imath]\lim_{n\to\infty} z_n=A[/imath] imply [imath]\lim_{n\to\infty}\frac{1}{n}(z_1+\cdots+z_n)=A[/imath]? I'm self-studying a bit of complex analysis, and I'm attempting to figure out the following. Suppose [imath]\lim_{n\to\infty}z_n=A[/imath]. How can I show that [imath] \lim_{n\to\infty}\frac{1}{n}(z_1+\cdots+z_n)=A. [/imath] Is there a clever way to write the limit to make it more approachable? Thank you.
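A quick numerical illustration of the statement (added; the limit `A` and cutoff `N` are arbitrary choices, and the sequence [imath]z_n = A + 1/n[/imath] converges slowly on purpose):

```python
# If z_n -> A, the Cesaro means (z_1 + ... + z_n)/n also tend to A.
# Here the error of the mean behaves like H_N / N ~ ln(N)/N.
A = 2 + 3j       # a complex limit, matching the complex-analysis setting
N = 200000
s = 0.0
for n in range(1, N + 1):
    s += A + 1 / n
mean = s / N
print(abs(mean - A))  # small: roughly ln(N)/N
```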
1936396
Conditional Distribution is a Binomial I've been working on some problems related to multivariate distributions in my independent studies, and I came across one that I am feeling mixed about. Consider [imath]X \sim Poisson(\lambda_x)[/imath] and [imath]Y \sim Poisson(\lambda_y)[/imath] independent, and consider [imath]W = X + Y.[/imath] I want to figure out the conditional distribution of [imath]X[/imath] given [imath]W = w.[/imath] I have a feeling that I should start by proving that [imath]W \sim Poisson(\lambda_x + \lambda_y)[/imath], but for now I will assume this (if anyone has any suggestions on how to prove that one, I'd like some help on that too). We see that [imath]P(X = c \mid W = w) = \frac{P(X = c \wedge W = w)}{P(W = w)} = \frac{P(X = c)P(Y = w - c)}{P(W = w)}[/imath] [imath]=\frac{(\lambda_x)^{c} e^{-\lambda_x}\frac{1}{c!}\,(\lambda_y)^{w-c} e^{-\lambda_y}\frac{1}{(w-c)!}}{ (\lambda_x + \lambda_y)^{w} e^{-(\lambda_x+\lambda_y)} \frac{1}{w!}}.[/imath] I thought this would be enough, but my professor told me that I should find some way to manipulate this into a binomial distribution. Any recommendations on how to simplify this? I am having some difficulties figuring this out.
1904459
Identify [imath]E(X\mid X+Y)[/imath] for [imath]X[/imath] and [imath]Y[/imath] independent and Poisson I am currently trying to understand how an instance of the EM algorithm is derived when there are "hidden variables" in this old paper: http://www.ncbi.nlm.nih.gov/pubmed/18238264. I don't really understand how the expectation step is derived, and what bothers me the most is that I cannot prove the following statement (right before 2.11): [imath] \mathbb{E}[X|X+Y] = \frac{(X+Y)\lambda_X}{\lambda_X+\lambda_Y} [/imath] where [imath]X \sim Poisson\{\lambda_X\}[/imath] and [imath]Y \sim Poisson\{\lambda_Y\}[/imath] are independent. To begin, I wrote the discrete conditional expectation as follows: [imath] \mathbb{E}[X\mid X+Y=n] = \sum_{k=0}^{n} k\, P(X=k\mid X+Y=n) = \sum_{k=0}^{n} k\, \frac{P(X+Y=n\mid X=k)P(X=k)}{P(X+Y=n)} [/imath] where I can substitute [imath] P(X+Y=n) = \frac{(\lambda_X+\lambda_Y)^n e^{-(\lambda_X+\lambda_Y)}}{n!}[/imath] [imath] P(X=k) = \frac{\lambda_X^k e^{-\lambda_X}}{k!}[/imath] [imath] P(X+Y=n\mid X=k) = P(Y=n-k) = \frac{\lambda_Y^{n-k}e^{-\lambda_Y}}{(n-k)!}[/imath] but I don't really know how to put these together, because I don't really see what the event [imath]X+Y=n[/imath] stands for, and how it relates to the [imath]k[/imath] in the sum. Thank you very much for your help
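The identity in question can be checked exactly for fixed rates: conditionally on [imath]X+Y=n[/imath], [imath]X[/imath] is Binomial with success probability [imath]\lambda_X/(\lambda_X+\lambda_Y)[/imath], hence the conditional mean is [imath]n\lambda_X/(\lambda_X+\lambda_Y)[/imath]. A sketch (my own addition, with arbitrary rates):

```python
# Verify numerically that P(X=k | X+Y=n) equals the Binomial(n, p) pmf with
# p = lx/(lx+ly), and that the conditional expectation is n*p.
from math import comb, exp

lx, ly = 1.7, 0.6
n = 12
p = lx / (lx + ly)

def pois(lam, k):
    f = 1
    for i in range(1, k + 1):
        f *= i
    return lam**k * exp(-lam) / f

pw = pois(lx + ly, n)  # P(X+Y = n): the sum of independent Poissons is Poisson(lx+ly)
for k in range(n + 1):
    cond = pois(lx, k) * pois(ly, n - k) / pw      # P(X=k | X+Y=n) by independence
    binom = comb(n, k) * p**k * (1 - p)**(n - k)   # Binomial(n, p) pmf
    assert abs(cond - binom) < 1e-12

cond_mean = sum(k * pois(lx, k) * pois(ly, n - k) / pw for k in range(n + 1))
assert abs(cond_mean - n * p) < 1e-9
```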
1936239
Prove that [imath]\lfloor \log_{10}(xy)\rfloor \geq \lfloor \log_{10}(x)\rfloor+ \lfloor \log_{10}(y)\rfloor[/imath] Let [imath]x[/imath] and [imath]y[/imath] be positive numbers. Prove that [imath]\lfloor \log_{10}(xy)\rfloor \geq \lfloor \log_{10}(x)\rfloor+ \lfloor \log_{10}(y)\rfloor.[/imath] I thought about using the definition of the floor function but didn't see how to use it here.
266446
A simple inequality for the floor function If [imath]x,y\in\mathbb R[/imath], I am having trouble showing that [imath]\lfloor x\rfloor+\lfloor y\rfloor\le \lfloor x+y\rfloor\le \lfloor x\rfloor+\lfloor y\rfloor + 1 [/imath] Can someone help me?
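Note that the logarithm question above is the left half of this inequality applied to [imath]x=\log_{10}[/imath] values. The two-sided inequality itself can be spot-checked with exact rational arithmetic (an added illustration, not a proof):

```python
# Check floor(x) + floor(y) <= floor(x+y) <= floor(x) + floor(y) + 1 on a grid
# of rationals, including negatives and non-integers; Fraction keeps it exact.
from fractions import Fraction
from math import floor

vals = [Fraction(p, q) for p in range(-20, 21) for q in (1, 2, 3, 7)]
for x in vals:
    for y in vals:
        fx, fy, fxy = floor(x), floor(y), floor(x + y)
        assert fx + fy <= fxy <= fx + fy + 1
```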
1936142
Show that any natural number [imath]n[/imath] can be written in the form [imath]n = 2^k \cdot m[/imath] Show that any natural number [imath]n[/imath] can be written in the form [imath]n = 2^k \cdot m[/imath], where [imath]m[/imath] is an odd integer and [imath]k \geq 0[/imath]. I know that if [imath]n[/imath] is an integer, it can be written in the form [imath]n = \frac{a}{b}[/imath] for two integers [imath]a[/imath] and [imath]b[/imath]. Also, [imath]m[/imath] can be written as [imath]m = 2c +1[/imath] for an integer [imath]c[/imath], since it's odd. But I don't know how to use this to come up with a proof. Also, I have to show that [imath]n[/imath] is a non-negative integer.
765486
Prove that every positive integer [imath]n[/imath] has a unique expression of the form [imath]2^{r}m[/imath], where [imath]r\ge 0[/imath] and [imath]m[/imath] is an odd positive integer Prove that every positive integer [imath]n[/imath] has a unique expression of the form [imath]2^{r}m[/imath], where [imath]r\ge 0[/imath] and [imath]m[/imath] is an odd positive integer. If [imath]n[/imath] is odd then [imath]n=2^{0}n[/imath], but I don't know what to do when [imath]n[/imath] is even. To prove that this expression is unique, is it a good choice to use the fundamental theorem of arithmetic? I would really appreciate your help.
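The existence half of the statement is exactly the loop below (an added sketch; `decompose` is a name I chose). Uniqueness then follows because [imath]r[/imath] is forced to be the exact power of 2 dividing [imath]n[/imath]:

```python
# Strip factors of 2 until the cofactor is odd; this terminates because n
# strictly decreases while it stays even.
def decompose(n):
    assert n >= 1
    r = 0
    while n % 2 == 0:
        n //= 2
        r += 1
    return r, n  # n is now odd

for n in range(1, 1000):
    r, m = decompose(n)
    assert n == 2**r * m and m % 2 == 1
```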
1936957
Prove that a sequence converges to [imath]\sqrt{2}[/imath] Given [imath]x_1 = 2[/imath] and [imath]x_{n+1} = \dfrac{1}{2} x_n + \dfrac{1}{x_n}[/imath], prove that [imath]x_n \to \sqrt{2}[/imath]. I thought I could use monotone convergence but I have a hard time proving the monotonicity of the sequence.
721513
Showing the sequence converges to the square root For any [imath]a > 0[/imath], I have to show that the sequence [imath]x_{n+1} = \frac{1}{2}\left(x_n + \frac{a}{x_n}\right)[/imath] converges to the square root of [imath]a[/imath] for any [imath]x_1>0[/imath]. If I assume the limit exists (denoted by [imath]x[/imath]), then [imath]x = \frac{1}{2}\left(x + \frac{a}{x}\right)[/imath] can be solved to give [imath]x^2 = a[/imath]. How could I show that the limit does exist?
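A numerical illustration (my own addition) that the iteration converges for various [imath]a[/imath] and starting points; after the first step the sequence is at least [imath]\sqrt a[/imath] by AM-GM and decreases from then on:

```python
# Babylonian / Newton iteration for sqrt(a): x_{n+1} = (x_n + a/x_n) / 2.
def babylonian(a, x, steps=60):
    for _ in range(steps):
        x = 0.5 * (x + a / x)
    return x

for a in (2.0, 10.0, 123.456):
    for x1 in (0.01, 1.0, 500.0):
        assert abs(babylonian(a, x1) - a**0.5) < 1e-9
```

The step count 60 is generous; convergence is quadratic once the iterate is near [imath]\sqrt a[/imath].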
1131956
Image of the union and intersection of sets. Let [imath]f:X\to Y[/imath] be a function, and let [imath]\{S_{i}:i\in I\}[/imath] be a family of subsets of [imath]X[/imath]. Then, [imath]f\left(\bigcup_{i \in I}S_i\right) = \bigcup_{i \in I}f(S_i).[/imath] The case where [imath]f(A\cup B)= f(A)\cup f(B)[/imath] is trivial and I've proved this many times in other classes. However, I believe that the problem I am running into is with notation. That is, I don't understand what the set [imath]I[/imath] is. Will my proof method be just the same? Also, I would like a little help on one more problem. If [imath]S_{1}[/imath] and [imath]S_{2}[/imath] are subsets of a set [imath]X[/imath], and if [imath]f:X\to Y[/imath] is an injection, then [imath]f(S_{1}\cap S_{2})=f(S_{1})\cap f(S_2)[/imath]. Now, I know how to prove [imath]f(S_{1}\cap S_{2})\subseteq f(S_{1})\cap f(S_2)[/imath] when our function is not injective, and I know counterexamples showing why equality can fail when our function isn't injective. Unfortunately, I am not sure how to use the fact that our function is injective to prove [imath]f(S_{1})\cap f(S_2) \subseteq f(S_{1}\cap S_{2})[/imath]. Any help would be much appreciated. Thank you very much! Note: These questions are coming from Rotman's Intro to Abstract Algebra Chapter 2.
2360327
Using injective function to prove relation between image of function Question: Let [imath]f : A \rightarrow B[/imath] be a function and [imath]C[/imath], [imath]D ⊆ A[/imath]. Prove that if [imath]f[/imath] is injective then [imath]f(C) ∩ f(D) \subseteq f(C ∩ D)[/imath]. My attempt: Let [imath]y \in f(C) \cap f(D)[/imath]; then [imath]y \in f(C)[/imath] and [imath]y \in f(D)[/imath], which implies [imath]y = f(a)[/imath] for some [imath]a \in C[/imath] and [imath]y = f(b)[/imath] for some [imath]b\in D[/imath]. I am not sure how to use the fact that [imath]f[/imath] is injective in order to continue.
1937444
Is a countably infinite Cartesian product of countably infinite sets uncountable? I've read that such is the case but I've thought up of a bijection which makes me think otherwise. My apologies if this has been asked before... Let [imath]\mathbb{W} = \mathbb{N} \cup \{0\}[/imath]. Let [imath]P[/imath] be the set of primes. Denote by [imath]p_{k}[/imath] the [imath]k[/imath]th prime number. Let [imath]\{W_{p_{n}}\}_{n=1}^{\infty}[/imath] be a sequence such that [imath]W_{p_{i}} = \mathbb{W}[/imath] for all [imath]i \in \mathbb{N}[/imath]. Then [imath]S := W_{p_{1}} \times W_{p_{2}} \times \cdots = \mathbb{W} \times \mathbb{W} \times \cdots[/imath] is a Cartesian product of countably infinite copies of [imath]\mathbb{W}[/imath], since the set of primes [imath]P[/imath] is countably infinite. Now, consider [imath]f : S \rightarrow \mathbb{N}[/imath] such that [imath]f(w_{1},w_{2},\dots) = p_{1}^{w_{1}}p_{2}^{w_{2}}\cdots = 2^{w_{1}}3^{w_{2}}\cdots[/imath] We know that the every natural number has a unique prime factorization, so that [imath]f[/imath] is a bijection, from where it follows that [imath]S[/imath] is numerically equivalent to [imath]\mathbb{N}[/imath]. Where is the fault in this proof?
500849
Infinite Cartesian product of countable sets is uncountable Let [imath]\{E_n\}_{n\in\mathbb{N}}[/imath] be a sequence of countable sets and let [imath]S=E_1\times\cdots\times E_n\times\cdots [/imath]. Show [imath]S[/imath] is uncountable. Prove that the same statement holds if each [imath]E_n=\{0,1\}[/imath]. By the definition of Cartesian product of sets, [imath]\displaystyle S=\prod_{n\in\mathbb{N}} E_n=\{f\colon\mathbb{N}\rightarrow\bigcup_{n\in\mathbb{N}}E_n\mid\forall n,\ f(n)\in E_n\}[/imath] If [imath]E_n=\{ 0,1\}[/imath], then [imath]\displaystyle S_{01}=\prod_{n\in\mathbb{N}}\{0,1\}=E^{\mathbb{N}}[/imath], where [imath]E=\{0,1\}[/imath]. By a theorem, [imath]\bigcup_{n\in\mathbb{N}} E_n[/imath] is countable since the sequence is countable. I'm not sure how to go on from here to show [imath]S[/imath] is uncountable. Can we say anything about the function [imath]f[/imath] that maps a countable [imath]\mathbb{N}[/imath] to another countable union of sequence of countable sets?
1936953
Prove that [imath] f\Bigg(\sum_{i=1}^n \lambda_i x_i\Bigg) \leq \sum_{i=1}^n \lambda_i f(x_i) [/imath] for a convex function [imath]f[/imath] Let [imath]f[/imath] be a convex function defined on a set [imath]I[/imath]. If [imath]x_1, x_2, \ldots,x_n \in I[/imath], and [imath]\lambda_1, \lambda_2 ,\ldots,\lambda_n \in [0,1][/imath] with [imath]\sum_{i=1}^n \lambda_i=1[/imath], then prove that [imath] f\Bigg(\sum_{i=1}^n \lambda_i x_i\Bigg) \leq \sum_{i=1}^n \lambda_i f(x_i) [/imath] I have no idea where to start, except to use the definition of a convex function: [imath] f(\lambda x+(1-\lambda)y) \leq \lambda f(x) + (1-\lambda)f(y) [/imath] Does anyone have ideas on how to approach this problem?
1804505
Show that [imath]f[/imath] is convex if and only if [imath]f\left( \sum_{i=1}^m\lambda_ix_i \right) \leq \sum_{i=1}^m\lambda_if(x_i)[/imath] I need to prove the following statement Let [imath]S \subset \mathbb{R}^n[/imath] a nonempty convex set and [imath]f: S \to \mathbb{R}[/imath]. Then [imath]f[/imath] is convex in [imath]S[/imath] if and only if [imath]f\left( \sum_{i=1}^m\lambda_ix_i \right) \leq \sum_{i=1}^m\lambda_if(x_i)[/imath] for all [imath]m \in \mathbb{N}[/imath], for all [imath]x_1, \dots, x_m \in S[/imath] and for all [imath]\lambda_1, \dots, \lambda_m > 0[/imath] such that [imath]\sum_{i=1}^m\lambda_i=1[/imath] My try: I think I managed to prove the backwards implication. Let [imath]m=2[/imath], and since [imath]\lambda_1 + \lambda_2=1[/imath] then [imath]\lambda_2 = 1-\lambda_1[/imath], so [imath]f(\lambda_1x_1+\lambda_2x_2)=f(\lambda_1x_1+(1-\lambda_1)x_2) \leq \lambda_1f(x_1)+(1-\lambda_1)f(x_2)[/imath] since [imath]f[/imath] is convex in [imath]S[/imath] by assumption. For the forward implication, I thought that induction might work. The case when [imath]m=1[/imath] is trivial. However, I'm struggling to prove the general case. Any help with that step will be highly appreciated. Thanks in advance!
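A randomized sanity check of the finite Jensen inequality with the convex function [imath]f(x)=x^2[/imath] (an added illustration; the seed and sampling ranges are arbitrary):

```python
# For random convex weights l_1..l_m (positive, summing to 1) and random points,
# verify f(sum l_i x_i) <= sum l_i f(x_i) for f(x) = x^2.
import random

random.seed(0)
f = lambda x: x * x
for _ in range(1000):
    m = random.randint(2, 8)
    xs = [random.uniform(-10, 10) for _ in range(m)]
    ws = [random.random() for _ in range(m)]
    s = sum(ws)
    ls = [w / s for w in ws]  # normalize so the weights sum to 1
    lhs = f(sum(l * x for l, x in zip(ls, xs)))
    rhs = sum(l * f(x) for l, x in zip(ls, xs))
    assert lhs <= rhs + 1e-12  # small slack for floating-point rounding
```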
1937092
How to graph [imath]|x|+|y|\le1[/imath]? What should be the approach to drawing the graph of the above inequality with the modulus? Should I begin like this? [imath]|y| = 1 - |x|[/imath]
1905161
Graph of [imath]|x| + |y| = 1[/imath] Can anybody explain as how to plot a graph of [imath]|x| + |y| = 1[/imath]. Here [imath]|x|[/imath] and [imath]|y|[/imath] are absolute values.
1937634
A small question on notation: [imath]\delta_{ij}[/imath] and [imath]\epsilon_{ijk}[/imath] I'm just reading through some vector calculus notes, and they begin to assume knowledge of the notation [imath]\delta_{ij}[/imath] and [imath]\epsilon_{ijk}[/imath] in reference to tensors, but I am not familiar with this. They use them as if they are constant and well-defined. If someone could just explain this I would appreciate it. Thank you.
1758095
Levi-Civita and Kronecker delta properties? I'm trying to grasp Levi-Civita and Kronecker delta notation to use when evaluating geophysical tensors, but I came across a few problems in the book I'm reading that have me stumped. 1) [imath]\delta_{i\,j}\delta_{i\,j}[/imath] 2) [imath]\delta_{i\,j} \epsilon_{i\,j\,k}[/imath] I have no idea how to approach evaluating these expressions. Without knowing [imath]i[/imath], [imath]j[/imath], or [imath]k[/imath], how would I approach them? I don't feel confident using the notation further until I can understand these properties.
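Both expressions can be evaluated by brute force, summing each repeated index over its three values (an added illustration; the product formula for [imath]\epsilon[/imath] is a standard closed form for indices [imath]0,1,2[/imath]):

```python
# With the summation convention in 3D: delta_ij delta_ij = 3 (trace of the
# identity) and delta_ij eps_ijk = 0 for each k, since eps vanishes whenever
# two of its indices coincide.
def delta(i, j):
    return 1 if i == j else 0

def eps(i, j, k):
    # +1 / -1 for even / odd permutations of (0, 1, 2), 0 on any repeat
    return (i - j) * (j - k) * (k - i) // 2

R = range(3)
assert sum(delta(i, j) * delta(i, j) for i in R for j in R) == 3
for k in R:
    assert sum(delta(i, j) * eps(i, j, k) for i in R for j in R) == 0
```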
1938717
If [imath]H \leq G[/imath] such that [imath][G:H]=2[/imath] then [imath]a^2 \in H[/imath] for every [imath]a\in G[/imath] If [imath]H \leq G[/imath] such that [imath][G:H]=2[/imath], then [imath]a^2 \in H[/imath] for every [imath]a\in G[/imath]. If [imath]a \in H[/imath] then [imath]a^2 \in H[/imath], so I took [imath]a\in G-H[/imath] and wrote [imath]G=Hx \cup Hy[/imath] for some [imath]x,y \in G[/imath], but wasn't able to get anywhere. Any hint?
84587
Given [imath]G[/imath] group and [imath]H \le G[/imath] such that [imath]|G:H|=2[/imath], how does [imath]x^2 \in H[/imath] for every [imath]x \in G[/imath]? I'm having some trouble understanding cosets. If I understand it correctly, cosets form a partition. So, if I have [imath]|G:H| = 2[/imath], then [imath]G = H \cup xH[/imath]. Right? In an exercise, I'm asked the following: "For [imath]G[/imath] group and [imath]H \le G[/imath] such that [imath]|G:H| = 2[/imath], prove that [imath]x^2 \in H[/imath] for any [imath]x \in G[/imath]" If [imath]x \in H[/imath], then [imath]x^2 \in H[/imath] because [imath]H[/imath] is a group. But, if [imath]x \notin H[/imath], then [imath]x \in gH[/imath] for some [imath]g \in G[/imath], right? Then [imath]x^2 = (gh)^2[/imath], but I can't see why this is in [imath]H[/imath]. Can you give me some hint? Thanks for stopping by.
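A finite sanity check of the statement (my own addition): take [imath]G=S_3[/imath] and [imath]H=A_3[/imath], the standard index-2 example, where the claim says every square is an even permutation:

```python
# Permutations of (0, 1, 2); composition is (p*q)(i) = p[q[i]].
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

def sign(p):
    # +1 for an even number of inversions, -1 for an odd number
    inv = sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3))
    return 1 - 2 * (inv % 2)

G = list(permutations(range(3)))
H = [p for p in G if sign(p) == 1]  # A_3, the even permutations
assert len(G) == 2 * len(H)         # index 2
for x in G:
    assert compose(x, x) in H       # x^2 is always even: sign(x^2) = sign(x)^2 = 1
```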
1938039
The remainders of the terms of a recursive sequence Let [imath]a_n[/imath] be the sequence defined by [imath]a_1 = 3, \ a_{n+1} = 3^{a_n}[/imath]. Let [imath]b_n[/imath] be the remainder when [imath]a_n[/imath] is divided by [imath]100[/imath]. What is [imath]b_{2004}[/imath]?
1938353
[imath]a_1=3;\ a_{n+1}=3^{a_n}[/imath]; find [imath]a_{2004}\bmod100[/imath] A detailed solution will be helpful. Given that [imath]a_1=3[/imath] and [imath]a_{n+1}=3^{a_n}[/imath], find the remainder when [imath]a_{2004}[/imath] is divided by 100.
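As an added check (not part of the question): since [imath]\gcd(3,100)=1[/imath], only the exponent modulo [imath]\lambda(100)=20[/imath] matters, then only modulo [imath]\lambda(20)=4[/imath], and so on down the tower, so the remainder stabilizes after a few levels. A sketch:

```python
# Compute a_n mod 100 for a_1 = 3, a_{n+1} = 3^{a_n}, reducing exponents by
# Carmichael's lambda at each level (valid because 3 is coprime to each modulus).
def tower_mod(height, m):
    if height == 1:
        return 3 % m
    if m == 1:
        return 0
    lam = {100: 20, 20: 4, 4: 2, 2: 1}[m]  # lambda values for the moduli used here
    # adding lam keeps the exponent positive without changing the residue
    return pow(3, tower_mod(height - 1, lam) + lam, m)

print(tower_mod(2004, 100))  # 87 (the value stabilizes from a_3 onward)
```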
1938729
Matrix to the power 100 by Cayley Hamilton I have the matrix [imath]A = \begin{pmatrix} 1 & 1 & -1 \\ -1 & -1 & 0 \\ 0 & -1 & 0\end{pmatrix}[/imath] with characteristic polynomial [imath]p(x)=-x^3 - 1[/imath] and I have to find [imath]A^{100}[/imath] with the Cayley-Hamilton theorem. Can anybody help me?
731400
Find high powers of a matrix with the Cayley Hamilton theorem Let [imath]A = \begin{bmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ -1 & -1 & -1\\ \end{bmatrix}[/imath]. Compute [imath]A^{10000} + A^{9998}[/imath]. I know this should be done by the Cayley-Hamilton theorem. I get the characteristic polynomial [imath]-x^3 - x^2 - x - 1[/imath], so Cayley-Hamilton gives [imath]-A^3 - A^2 - A - I = 0[/imath], but I don't see how to calculate [imath]A^{10000} + A^{9998}[/imath] from there. I hope someone can help me out!
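An added verification of the key consequence: this [imath]A[/imath] is the companion matrix of [imath]x^3+x^2+x+1[/imath], and [imath]x^4 \equiv 1[/imath] modulo that polynomial (indeed [imath]x^4 = x\cdot x^3 = x(-x^2-x-1) = 1[/imath] after reducing again), so [imath]A^4=I[/imath] and hence [imath]A^{10000}+A^{9998} = I + A^2[/imath]. A plain-Python check:

```python
# Verify A^4 = I by direct integer matrix multiplication.
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0, 1, 0], [0, 0, 1], [-1, -1, -1]]
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
A2 = matmul(A, A)
A4 = matmul(A2, A2)
assert A4 == I  # hence A^10000 = (A^4)^2500 = I and A^9998 = A^2
answer = [[I[i][j] + A2[i][j] for j in range(3)] for i in range(3)]  # I + A^2
```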
1938819
Mutual independence implies pairwise independence; show that the converse is not true. We know: Mutual independence: For [imath]n\geq 3[/imath], random variables [imath]X_1,X_2,\ldots,X_n[/imath] are mutually independent if [imath] p(x_1,x_2,\ldots,x_n)=p(x_1)p(x_2)\cdots p(x_n)[/imath] for all [imath]x_1,x_2,\ldots,x_n[/imath]. Pairwise independence: For [imath]n\geq 3[/imath], random variables [imath]X_1,X_2,\ldots,X_n[/imath] are pairwise independent if [imath]X_i,X_j[/imath] are independent for all [imath]1\leq i<j\leq n[/imath]. Note that mutual independence implies pairwise independence. (Proof that mutual statistical independence implies pairwise independence) Show that the converse is not true. Personally, I think the answer becomes clear with the definition of 'conditional independence', but any help is appreciated.
1606205
Durrett Example 1.9 - Pairwise independence does not imply mutual independence? The example in question is from Rick Durrett's "Elementary Probability for Applications", and the setup is something like this: Let [imath]A[/imath] be the event "Alice and Betty have the same birthday", [imath]B[/imath] be the event "Betty and Carol have the same birthday", and [imath]C[/imath] be the event "Carol and Alice have the same birthday". Durrett goes on to demonstrate that each pair is independent, since for example, [imath]P(A \cap B) = P(A)P(B)[/imath]. However, he concludes that [imath]A, B[/imath], and [imath]C[/imath] are not independent, since [imath]P(A \cap B \cap C) = \frac{1}{365^2} \neq \frac{1}{365^3} = P(A)P(B)P(C)[/imath]. I understand the reasoning here, and that one can generally show that arbitrary events [imath]X[/imath] and [imath]Y[/imath] are not independent by showing that [imath]P(X\cap Y) \neq P(X)P(Y)[/imath]. I am a little new to probability, though, and don't understand why exactly [imath]P(A \cap B \cap C) = \frac{1}{365^2}[/imath]. My progress so far: I do see why [imath]P(A) = P(B) =P(C) = \frac{1}{365}[/imath], and thus why [imath]P(A)P(B)P(C) = \frac{1}{365^3}[/imath]. It seems like the sample space [imath]\Omega = \{ (a, b, c) \mid a,b,c \in [365] \}[/imath] -- i.e., all of the possible triples of numbers from 1 to 365, where 1 denotes January 1st, 2 denotes January 2nd, etc. From that, I can conclude [imath]|\Omega| = 365^3[/imath], but I'm not sure where to go from here. It seems like once a single birthday is chosen, the rest are completely determined if they're all equal to each other - is this a good direction to go in?
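The computation can be replayed exactly with a smaller "year" of [imath]D[/imath] days, which exhibits the same pairwise-but-not-mutual pattern as [imath]D=365[/imath] (an added illustration; [imath]D=5[/imath] is an arbitrary choice):

```python
# Exact enumeration of the birthday example over all D^3 triples, using
# rational arithmetic so the equalities are exact.
from itertools import product
from fractions import Fraction

D = 5
triples = list(product(range(D), repeat=3))  # (alice, betty, carol)
total = Fraction(len(triples))

def prob(event):
    return Fraction(sum(1 for t in triples if event(t))) / total

A = lambda t: t[0] == t[1]  # Alice == Betty
B = lambda t: t[1] == t[2]  # Betty == Carol
C = lambda t: t[2] == t[0]  # Carol == Alice

pA, pB, pC = prob(A), prob(B), prob(C)
assert pA == pB == pC == Fraction(1, D)
assert prob(lambda t: A(t) and B(t)) == pA * pB                 # pairwise independent
assert prob(lambda t: A(t) and B(t) and C(t)) == Fraction(1, D**2)
assert prob(lambda t: A(t) and B(t) and C(t)) != pA * pB * pC   # not mutually independent
```

The middle assertions mirror Durrett's point: two equalities (like [imath]a=b[/imath] and [imath]b=c[/imath]) already force all three birthdays equal, so the triple intersection has probability [imath]1/D^2[/imath], not [imath]1/D^3[/imath].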
1932306
Unbounded continuous function on a non-compact metric space Suppose [imath](S,d)[/imath] is a non-compact metric space. Is it possible to construct an unbounded continuous function from [imath]S[/imath] to [imath]\mathbb{R}[/imath]? If it is possible, please show the construction method :) Here's my attempt to solve this problem: Because [imath]S[/imath] is not compact, we can find a neighborhood assignment function [imath]N(x)[/imath] so that no finite subcover is possible. Then each time we can find a point not covered by the union of the neighborhoods chosen before, and change the function by [imath]f(x):=\max\{f(x),\text{a function which equals the index of this point at that point and gradually vanishes at points further from the selected point}\}[/imath]. However, I'm concerned that while this function is unbounded, there may be some invalid points for this function (where it goes to infinity)... EDIT: As I found out, there is a duplicate question; however, the elementary solution is not presented in that question, so I still wish to know about this problem. :)
1244557
Existence of a continuous function which does not achieve a maximum. Suppose [imath]X[/imath] is a non-compact metric space. Show that there exists a continuous function [imath]f: X \rightarrow \mathbb{R}[/imath] such that [imath]f[/imath] does not achieve a maximum. I proved this assertion as follows: If [imath]X[/imath] is non-compact, there exists a sequence which has no converging subsequence. Therefore, the set [imath]E[/imath] of elements of this sequence is infinite and has no limit points. Therefore, in the induced topology, [imath]E[/imath] is a countable discrete space. Let [imath]e_n[/imath] be a enumeration of [imath]E[/imath]. Define [imath]f: E \rightarrow \mathbb{R}[/imath] as [imath]f(e_n)=n[/imath]. Since [imath]E[/imath] is discrete, [imath]f[/imath] is continuous. By Tietze (note that [imath]E[/imath] is closed in [imath]X[/imath]), there exists a continuous function [imath]g: X \rightarrow \mathbb{R}[/imath] which extends [imath]f[/imath]. But [imath]f[/imath] is unbounded, then so is [imath]g[/imath], and [imath]g[/imath] does not achieve a maximum. [imath]\square[/imath] My first question is: Is everything in the proof fine? Now, my second question is: Can you provide me a proof which is more elementary? (Not using Tietze, for example. This exercise is just after the definition of compactness. I suppose this proof is not what was intended.) And my third question is: Does the proposition hold for arbitrary topological spaces? If not, can you provide a counter-example?
1939759
If [imath]a[/imath] is a zero of [imath]f(x)[/imath] in some extension of [imath]F[/imath], show that [imath]g(x)[/imath] is irreducible over [imath]F(a)[/imath], given [imath]f(x)[/imath] and [imath]g(x)[/imath] are irreducible over [imath]F[/imath] with coprime degrees Suppose that [imath]f(x)[/imath] and [imath]g(x)[/imath] are irreducible over [imath]F[/imath] and that [imath]\deg f(x)[/imath] and [imath]\deg g(x)[/imath] are relatively prime. If [imath]a[/imath] is a zero of [imath]f(x)[/imath] in some extension of [imath]F[/imath], show that [imath]g(x)[/imath] is irreducible over [imath]F(a)[/imath]. I understand that [imath]f(a) = 0[/imath] implies that [imath][F(a):F] = \deg f(x)[/imath] and that all elements of [imath]F(a)[/imath] are of the form [imath]c_{n-1}a^{n-1} + \dots + c_0[/imath], but from here I can't find a way to use the coprimality to show that [imath]g(x)[/imath] can only be factored as a unit times a polynomial in [imath]F(a)[x][/imath]. Anyone have any ideas?
1427492
Suppose [imath]\gcd(\deg(f),\deg (g))=1[/imath]. Show that [imath]g(x)[/imath] is irreducible in [imath]k(\alpha)[X][/imath]. This is an assignment. There are two related (I think) problems. Please solve one of them and I will try to solve the other. Let [imath]\alpha, \beta [/imath] be algebraic over [imath]k[/imath] whose irreducible polynomials are [imath]f(x), g(x)[/imath]. Suppose [imath]\gcd(\deg(f),\deg(g))=1[/imath]. Show that [imath]g(x)[/imath] is irreducible in [imath]k(\alpha)[X][/imath]. [imath]f(x) \in k[x][/imath] is irreducible with degree [imath]n[/imath] and let [imath][K:k]=m[/imath], where [imath](n,m)=1[/imath]. Show that [imath]f[/imath] is irreducible in [imath]K[x][/imath].
1939176
Which Hilbert Space in Quantum Physics? I'm a maths student who would also like to know a bit about quantum physics. I keep reading that possible states of a system are represented by elements of a certain Hilbert space [imath]\mathcal{H}[/imath]. This question is about which Hilbert space [imath]\mathcal{H}[/imath] is supposed to be. From a few brief conversations I have had with people smarter than me, I've gathered that [imath]\mathcal{H}[/imath] is supposed to be [imath]L^2(X)[/imath] for some measure space [imath]X[/imath]. But this just raises further questions. First of all, and most importantly, why have I never come across this information officially? In textbooks, lecture notes and some information that I have found online, they just tell us that quantum physics takes place on [imath]\mathcal{H}[/imath], which is some Hilbert space, not [imath]L^2[/imath] specifically. Is this because the details of [imath]L^2[/imath] are irrelevant to the physics? This surprises me, since in Terence Tao's An Epsilon of Room Part I, exercise 1.12.36 is to prove Heisenberg's Uncertainty Principle using just facts from Fourier analysis about [imath]L^2[/imath]! Supposing that we do indeed have [imath]\mathcal{H}=L^2(X)[/imath], what on earth is [imath]X[/imath] supposed to be? If [imath]\psi\in L^2(X)[/imath] with [imath]||\psi||_{L^2(X)}=1[/imath] is supposed to be a wavefunction, then it makes sense to me that for a single particle we would have [imath]X=\mathbf{R}^3[/imath], because then [imath]|\psi|^2[/imath] would be just a probability density for the position of the particle. The problem would then be if there were a lot of particles in the system of interest. What happens, for instance, if there are countably many particles? Another problem with this hypothesis that [imath]\mathcal{H}=L^2[/imath] is that restricting our attention to only those [imath]\psi\in L^2[/imath] with [imath]||\psi||=1[/imath] seems to get rid of the point of working in a vector space. 
It no longer makes sense to add and subtract the elements because then they will no longer be wavefunctions. Moreover, we will only be working with the very specific class of linear operators that preserve norms. Even worse is the apparent restriction that [imath]\psi[/imath] be smooth, as in the Heisenberg and Schrodinger equations. Maybe this means that we have to restrict our attention to those [imath]\psi\in \mathcal{S}\subseteq L^2[/imath] like in Tao's question 1.12.36 (not forgetting that [imath]||\psi||=1[/imath]). This seems even more restrictive. I suppose there are other Hilbert spaces of interest, such as certain Sobolev spaces. I don't know a huge amount about these except that they are pretty similar to [imath]L^p[/imath] spaces but with differentiation. Using these would solve the conundrum about the Heisenberg/Schrodinger equations, but it would just make me more confused, as (a) it would no longer make sense for [imath]\psi\in H[/imath] to be analogous to a probability density and (b) Tao's question 1.12.36 would no longer make as much sense. This is a pretty wordy question which might be answered in just a single sentence or two, but I certainly would appreciate the help. Edit: I think this question is appreciably different to the one Daniel asked. For one, my question is on a lower level of understanding. Another difference is that Daniel's question is specifically about replacing [imath]L^2[/imath] with Sobolev space rather than asking about the general use of Hilbert spaces in quantum physics.
452992
Correct spaces for quantum mechanics The general formulation of quantum mechanics is done by describing quantum mechanical states by vectors [imath]|\psi_t(x)\rangle[/imath] in some Hilbert space [imath]\mathcal{H}[/imath] and describing their time evolution by the Schrödinger equation [imath]i\hbar\frac{\partial}{\partial t}|\psi_t\rangle = H|\psi_t\rangle[/imath] where [imath]H[/imath] is the Hamilton operator (for the free particle we have [imath]H=-\frac{\hbar^2}{2m}\Delta[/imath]). Now I have often seen used spaces like [imath]\mathcal{H}=L^2(\mathbb{R}^3)[/imath] (in the case of a single particle), but I was wondering whether this is correct. In fact, shouldn't we require [imath]\left|\psi_t\right>[/imath] to be twice differentiable in [imath]x[/imath], and thus choose something like [imath]\mathcal{H} = H^2(\mathbb{R}^3)[/imath]? If we treat directly [imath]\psi(t,x) := \psi_t(x)[/imath], shouldn't we require them to be in something like [imath]H^1(\mathbb{R};H^2(\mathbb{R}^3))[/imath]? i.e., functions in [imath]H^1(\mathbb{R})[/imath] with values in [imath]H^2(\mathbb{R}^3)[/imath], e.g. the function [imath]t\mapsto\psi_t[/imath].
1939630
Is L'Hopital for [imath]\lim\limits_{x\to0}\frac{\sin(x)}{x}[/imath] circular? I was considering using L'Hopital for [imath]\displaystyle\lim\limits_{x\to0}\frac{\sin(x)}{x}[/imath], but I was told that this is circular, because we use this limit to show [imath]\displaystyle\frac{\mathrm d}{\mathrm dx}\sin(x) = \cos(x)[/imath]. Do we have to use this limit to find the derivative of [imath]\sin(x)[/imath], or is there a legitimate counter-argument here?
1873286
Prove [imath][\sin x]' = \cos x[/imath] without using [imath]\lim\limits_{x\to 0}\frac{\sin x}{x} = 1[/imath] I came across this question: How to prove that [imath]\lim\limits_{x\to0}\frac{\sin x}x=1[/imath]? From the comments, Joren said: L'Hopital's Rule is easiest: [imath]\displaystyle\lim_{x\to 0}\sin x = 0[/imath] and [imath]\displaystyle\lim_{x\to 0} x = 0[/imath], so [imath]\displaystyle\lim_{x\to 0}\frac{\sin x}{x} = \lim_{x\to 0}\frac{\cos x}{1} = 1[/imath]. To which Ilya readily answered: I'm extremely curious how you will then prove that [imath][\sin x]' = \cos x[/imath] My question: is there a way of proving that [imath][\sin x]' = \cos x[/imath] without using the limit [imath]\displaystyle\lim_{x\to 0}\frac{\sin x}{x} = 1[/imath]? Also, without using anything else [imath]E[/imath] such that the proof of [imath]E[/imath] uses the limit or [imath][\sin x]' = \cos x[/imath]. All I want is to be able to use L'Hopital on [imath]\displaystyle\lim_{x\to 0}\frac{\sin x}{x}[/imath]. And for this, [imath][\sin x]'[/imath] has to be evaluated first. Alright... the definition that some requested. Definition of sine and cosine: take the unit circle centered at the origin of the Cartesian plane, and a point [imath](x, y)[/imath] on it. The point relates to the angle this way: [imath](x, y) = (\cos\theta, \sin\theta)[/imath], so that if [imath]\theta = 0[/imath] then the point is [imath](1, 0)[/imath]. Basically, it's a geometric definition. Feel free to use trigonometric identities as you want. They are all provable from geometry.
1940496
Why is it that if I multiply a certain number of consecutive primes starting from [imath]2[/imath] and add [imath]1[/imath], I get another prime? Why is it that if I multiply a certain number of consecutive primes starting from [imath]2[/imath] and add [imath]1[/imath], I get another prime? This property is used to prove that there are infinitely many primes, but why is it correct?
842187
Proof of infinitely many primes, clarification Proof: The proof is by contradiction. Suppose there are only finitely many primes. Let the complete list be [imath]p_1,p_2,\dots,p_n[/imath]. Let [imath]N = p_1p_2 \dots p_n+1[/imath]. According to the Fundamental Theorem of Arithmetic, [imath]N[/imath] must be divisible by some prime. This must be one of the primes in our list. Say [imath]p_k \mid N[/imath]. But [imath]p_k\mid p_1\dots p_n[/imath], so [imath]p_k\mid(N-p_1 \dots p_n) = 1[/imath] Hence contradiction. I don't see how this proof works. I understand that [imath]N[/imath] isn't necessarily prime, but I don't understand how it apparently must show that some primes weren't in our list. A number could be made of different powers of the given primes, right? Someone please explain.
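Worth stressing for the question above: [imath]p_1p_2\cdots p_n+1[/imath] need not itself be prime; the proof only needs that its prime factors are missing from the list. A stdlib-only Python check (the `is_prime` helper is mine, plain trial division, purely illustrative) exhibits the smallest counterexample:

```python
def is_prime(n):
    # Naive trial division; fine for numbers this small.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

primes = [2, 3, 5, 7, 11, 13]
N = 1
for p in primes:
    N *= p
N += 1                      # N = 30031
print(N, is_prime(N))       # 30031 False -- not prime!
# Yet every prime factor of N is absent from the list, which is
# exactly the contradiction the proof exploits.
print([d for d in range(2, 600) if N % d == 0 and is_prime(d)])  # [59, 509]
```

So the proof never claims [imath]N[/imath] is prime, only that any prime dividing it is a "new" prime.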
1935119
Help on Surjection, Injection, and Bijection I am an undergraduate majoring in CS. In preparation for a discrete mathematics exam coming up next week, I am looking through problems I got wrong on the homework. Concepts I don't understand are surjections, injections, and bijections. From lecture, for a function to be a bijection, it has to be both an injection and a surjection. So say I proved a function is not a surjection; why couldn't I say that it has to be an injection, since we know it can't be a bijection by definition? So my homework problem is in the link below. Assignment Problem 4.26. Let [imath]A[/imath], [imath]B[/imath], and [imath]C[/imath] be sets and let [imath]f\colon B\to C[/imath] and [imath]g\colon A\to B[/imath] be functions. Let [imath]h\colon A\to C[/imath] be the composition, [imath]f\circ g[/imath], that is, [imath]h(x)::=f(g(x))[/imath] for [imath]x\in A[/imath]. Prove or disprove the following claims. (a) If [imath]h[/imath] is surjective, then [imath]f[/imath] must be surjective. (b) If [imath]h[/imath] is surjective, then [imath]g[/imath] must be surjective. (c) If [imath]h[/imath] is injective, then [imath]f[/imath] must be injective. (d) If [imath]h[/imath] is injective and [imath]f[/imath] is total, then [imath]g[/imath] must be surjective. I got a) True b) False c) True d) False, when the answer is supposed to be a) True b) False c) False d) True. I think the reason why I got them wrong is because I assumed that if a function is not surjective, then it has to be injective and vice versa. Could someone help me understand this concept? That would be much appreciated!
1705016
If [imath]g \circ f[/imath] is surjective, show that [imath]f[/imath] does not have to be surjective? Suppose [imath]f: A \to B[/imath] and [imath]g: C \to D[/imath] are functions, and that [imath]B ⊆ C[/imath]. I need to come up with an example where [imath]g \circ f[/imath] is surjective but [imath]f[/imath] is not. I'm confused on how exactly to do that, but I understand that to show something is surjective you have to have the range of the function equal to the codomain of the function. Here's my attempt: So if [imath]g(x) = x^2[/imath] with a codomain of [imath]ℝ[/imath], and [imath]f(x) = \sqrt{x}[/imath] with a codomain of [imath]ℝ[/imath], then [imath]g(f(x)) = (\sqrt{x})^2 = x[/imath]. Since the range of this function and codomain are the same, this is surjective. But, since [imath]f(x) = \sqrt{x}[/imath], its range is only [imath][0, ∞)[/imath], which is smaller than its codomain of [imath]ℝ[/imath], meaning [imath]f[/imath] is not surjective.
1940448
Find value of [imath] S = \sum_{n=0}^{\infty} \frac{1}{n!(n+2)} [/imath] I am stuck on this question. Could someone help me? [imath] \text{Find value of } S = \displaystyle\sum_{n=0}^{\infty} \cfrac{1}{n!(n+2)} [/imath] I am supposed to show that [imath] S = 1 [/imath] in two ways: 1) Integrate the Taylor series of [imath] xe^x [/imath] 2) Differentiate the Taylor series of [imath] \frac{e^x - 1}{x} [/imath] For (1), I tried using the fact that the Taylor series of [imath] e^x = \displaystyle\sum_{n=0}^{\infty} \frac{x^n}{n!} [/imath] Now, multiplying [imath] x [/imath] into the Taylor series gives: [imath] xe^x = \displaystyle\sum_{n=0}^{\infty} \cfrac{x^{n+1}}{n!} [/imath] Integrating this yields the following: [imath] \begin{align} \int_0^x xe^x &= \displaystyle\int_0^x\sum_{n=0}^{\infty} \cfrac{x^{n+1}}{n!} \\ &= \displaystyle\sum_{n=0}^{\infty} \cfrac{x^{n+2}}{n!(n+2)} \end{align} [/imath] I am not sure how to carry on from here. For (2), I am not sure how to find the Taylor series for [imath] \frac{(e^x - 1)}{x} [/imath] Is anyone able to assist me?
1913967
How to find sum of power series [imath]\sum_{n=0}^\infty\frac{1}{n!(n+2)}[/imath] by differentiation and integration? Let [imath] S = \sum_{n=0}^\infty \frac{1}{n!(n+2)} [/imath] Integrate the Taylor series of [imath]xe^x[/imath] to show that [imath]S = 1[/imath]. Also, differentiate the Taylor series of [imath]\frac{e^x - 1}{x}[/imath] to show that [imath]S = 1[/imath]. For the integration one, I got [imath]\int xe^x dx[/imath] = [imath]x^2\sum_{n=0}^\infty \frac{1}{n!(n+2)} x^n[/imath] and am pretty much stuck. I can see the [imath]S[/imath] term but I have no idea how to move on from there.
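Both routes in this pair end at the same value: setting [imath]x=1[/imath] in the integrated series gives [imath]S=\int_0^1 te^t\,dt=\big[(t-1)e^t\big]_0^1=1[/imath]. A quick numeric check of the partial sums in Python (a sanity check only, not part of either derivation):

```python
import math

# S = sum_{n>=0} 1/(n!(n+2)).  Integrating the series of x*e^x gives
# sum x^{n+2}/(n!(n+2)) = integral_0^x t e^t dt = (x-1)e^x + 1,
# so S = 1 at x = 1; the factorial makes the partial sums converge fast.
S = sum(1 / (math.factorial(n) * (n + 2)) for n in range(25))
print(S)  # ~1.0 to machine precision
```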
1940622
What does it mean that [imath]A^TPA+P=I[/imath]? Question was Let [imath]A= \left[\begin{matrix} a_1 & a_2\\ a_3&a_4\\ \end{matrix}\right][/imath] [imath]P= \left[\begin{matrix} p_1 & p_2\\ p_3&p_4\\ \end{matrix}\right][/imath] Find the matrix [imath]Q[/imath] such that [imath] Q\left[ \begin{array} \\ p_1\\p_3\\p_2\\p_4 \end{array} \right] = \left[ \begin{array} \\ 1\\0\\0\\1 \end{array} \right][/imath] is equivalent to the equation [imath]A^TPA+P=I[/imath] In this question my approach was to make [imath]Q[/imath] a [imath]2\times 2[/imath] matrix, but I couldn't get any ideas. I need you geniuses to help.
1939618
Find a matrix equation equivalent to [imath]A^TPA+P=I[/imath] Question was Let [imath]A= \left[\begin{matrix} a_1 & a_2\\ a_3&a_4\\ \end{matrix}\right][/imath] [imath]P= \left[\begin{matrix} p_1 & p_2\\ p_3&p_4\\ \end{matrix}\right][/imath] Find the matrix [imath]Q[/imath] such that [imath] Q\left[ \begin{array} \\ p_1\\p_3\\p_2\\p_4 \end{array} \right] = \left[ \begin{array} \\ 1\\0\\0\\1 \end{array} \right][/imath] is equivalent to the equation [imath]A^TPA+P=I[/imath] In this question my approach was to make [imath]Q[/imath] a [imath]2\times 2[/imath] matrix, but I couldn't get any ideas. I need you geniuses to help.
1940556
Understanding Why [imath]\mathbb{Q}^+[/imath] Isn't a Cyclic Group Under Multiplication I'm just trying to understand why the positive rationals do not form a cyclic group under multiplication. Please let me know if my reasoning is correct, and where I made mistakes. Proof by Contradiction: If the positive rational numbers form a cyclic group, then that means that [imath]\mathbb{Q}^+[/imath] = [imath]\left\langle\frac{a}{b}\right\rangle[/imath]. This implies every single positive rational number can be written in the form [imath](\frac{a}{b})^n[/imath]. Assume that this is true, and take the rational number [imath]\frac{a}{2b}[/imath]. Since [imath]\mathbb{Q}^+[/imath] = [imath]\left\langle\frac{a}{b}\right\rangle[/imath], this means that [imath]\left(\frac{a}{b}\right)^n[/imath] = [imath]\frac{a}{2b}[/imath] for some [imath]n[/imath]. However, no such [imath]n[/imath] exists such that this is true. Therefore, there is no generator for [imath]\mathbb{Q}^+[/imath], so it is not a cyclic group. Is my reasoning correct? I'd really like to understand this airtight since the concept will be used in an exam, so I'd like any form of critique or correcting. Thanks for helping me!
748628
Show that [imath](\mathbb{Q}^*,\cdot)[/imath] and [imath](\mathbb{R}^*,\cdot)[/imath] aren't cyclic I'm reading a book about abstract algebra, but I'm having trouble solving this exercise: "Show that [imath](\mathbb{Q}^*,\cdot)[/imath] and [imath](\mathbb{R}^*,\cdot)[/imath] aren't cyclic" Where [imath](\mathbb{Q}^*,\cdot)[/imath] is the group of nonzero rational numbers under multiplication and [imath](\mathbb{R}^*,\cdot)[/imath] is the group of nonzero real numbers under multiplication. Here is my attempt for the first. Suppose [imath](\mathbb{Q}^*,\cdot)[/imath] is cyclic; then [imath]\mathbb{Q}^*=\langle\frac{p}{q}\rangle=\{(\frac{p}{q})^n,n\in\mathbb{Z}\}[/imath], where [imath]p[/imath] and [imath]q[/imath] are coprime. [imath]\frac{2p}{q}[/imath] is also in [imath]\mathbb{Q}^*[/imath] so it must be equal to [imath](\frac{p}{q})^n[/imath] for some [imath]n\in\mathbb{Z}[/imath]. To solve [imath]\frac{2p}{q}=(\frac{p}{q})^n[/imath], I take a logarithm of both sides and end up with [imath]1+\log_\frac{p}{q}(2)=n[/imath]; since [imath]n[/imath] is an integer, [imath]\log_\frac{p}{q}(2)[/imath] must be an integer too, but this is possible only when [imath]\frac{p}{q}=2^{\frac{1}{k}}, k\in\mathbb{N}[/imath] (i.e. [imath]\frac{p}{q}[/imath] is a k-th root of [imath]2[/imath]), but [imath]k[/imath] must be [imath]1[/imath] for [imath]2^\frac{1}{k}[/imath] to be rational, so [imath]\frac{p}{q}=2[/imath], contradicting the hypothesis of [imath]p[/imath] and [imath]q[/imath] being coprime. However, I don't know whether this is a proper proof, and the same reasoning cannot be applied to [imath]\mathbb{R}^*[/imath]. I'd like you to just give me a hint towards a proof, without telling me the whole proof, if possible.
1940689
A normed space is locally compact iff it is finite dimensional Riesz's lemma: Let [imath]Y[/imath] and [imath]Z[/imath] be subspaces of a normed space [imath]X[/imath], with [imath]Y[/imath] closed and [imath]Y\subsetneq Z[/imath]. Then for every [imath]\theta\in (0,1)[/imath] there is [imath]z\in Z[/imath] such that [imath]\|z\|=1[/imath] and [imath]\|z-y\|\geq\theta[/imath] for all [imath]y\in Y[/imath]. Using Riesz's lemma, prove that a normed space is locally compact iff it is finite dimensional. How should I proceed? Should I use the fact that the unit ball is compact in a normed linear space iff the space is finite-dimensional?
496301
Finiteness of the dimension of a normed space and compactness I am studying functional analysis, and in the setting of normed spaces I have seen the theorem that states that the unit ball is compact iff the space is finite dimensional. I also saw an exercise: Let [imath]X[/imath] be finite dimensional; prove that for any non-empty and closed [imath]C[/imath] and [imath]x\in C[/imath] there exists [imath]c\in C[/imath] such that [imath]||x-c||=d(x,C)[/imath]. The proof started with: take [imath]c\in C[/imath] and consider [imath] D=\overline{B(x,||x-c||)}\cap C [/imath] [imath]D[/imath] is closed and bounded hence, since [imath]X[/imath] is finite dimensional, is compact. I would be glad if someone could help me with the following two questions: 1) It seems that the theorem I read about the compactness also extends to: [imath]X[/imath] is finite dimensional iff [imath]\{x,||x-x_{0}||\leq d\}[/imath] is compact for any [imath]x_{0}\in X,d\in\mathbb{R}^{+}[/imath]. Is this correct? 2) [imath]C[/imath] is closed, [imath]\overline{B(x,||x-c||)}[/imath] is also closed, hence so is the intersection. Since [imath]\overline{B(x,||x-c||)}[/imath] is bounded, then so is the intersection. And so I get that [imath]D[/imath] is closed and bounded. Why can't I conclude that [imath]D[/imath] is compact without the assumption that [imath]X[/imath] is finite dimensional?
1940922
Prove that for each [imath]m \ge 3[/imath] the following holds: [imath]d_m=(m-1)(d_{m-1}+d_{m-2})[/imath] Let us denote by [imath]d_m[/imath] the number of sequences [imath](a_1,a_2,…,a_m)[/imath] such that [imath]a_i \ne i[/imath] for every integer [imath]i \in \{1,2,…,m\}[/imath]. Prove that for each [imath]m \ge 3[/imath] the following holds: [imath]d_m=(m-1)(d_{m-1}+d_{m-2})[/imath] Since the formula [imath]d(n)=n!(1-\frac{1}{1!}+\frac{1}{2!}-\frac{1}{3!}+\frac{1}{4!}- \cdots +(-1)^n\frac{1}{n!})[/imath] counts the number of derangements of [imath]n[/imath] distinct objects rather than being what we have to prove, can I answer this question by substituting [imath]d(n)[/imath] for [imath]d_m[/imath], [imath]d(n-1)[/imath] for [imath]d_{m-1}[/imath] and [imath]d(n-2)[/imath] for [imath]d_{m-2}[/imath] and using algebra to show that they are equal? Do I need to use mathematical induction and also show that they are equal for [imath]n+1[/imath], or do I need to solve this problem a different way?
1274547
Intuitive explanation for derangements The recurrence relation for derangements is as follows: Let [imath]D_n[/imath] denote the number of derangements of a set [imath]\{1,2,3,\ldots,n\}[/imath]. Then, [imath]D_n=(n-1)D_{n-1}+(n-1)D_{n-2}.[/imath] Can someone give an intuitive explanation of how we come to find this result?
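The recurrence in this pair can also be sanity-checked by brute force for small [imath]m[/imath]; a stdlib-only Python sketch (the helper name is mine):

```python
from itertools import permutations

def count_derangements(m):
    # Direct enumeration: permutations of {0, ..., m-1} with no fixed point.
    return sum(all(p[i] != i for i in range(m))
               for p in permutations(range(m)))

d = {m: count_derangements(m) for m in range(1, 9)}
print([d[m] for m in range(1, 9)])  # [0, 1, 2, 9, 44, 265, 1854, 14833]
# The recurrence d_m = (m-1)(d_{m-1} + d_{m-2}) holds for every m >= 3:
for m in range(3, 9):
    assert d[m] == (m - 1) * (d[m - 1] + d[m - 2])
```

Enumeration is only feasible for small [imath]m[/imath], of course; the recurrence itself is what makes larger values computable.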
1941284
Why is [imath]\sqrt[3]{{2 + \sqrt 5 }} + \sqrt[3]{{2 - \sqrt 5 }}[/imath] a rational number? Why is [imath]\sqrt[3]{{2 + \sqrt 5 }} + \sqrt[3]{{2 - \sqrt 5 }}[/imath] a rational number?
937073
How to show this equals 1 without "calculations" We have [imath] \sqrt[3]{2 +\sqrt{5}} + \sqrt[3]{2-\sqrt{5}} = 1 [/imath] Is there any way we can get this result through algebraic manipulations rather than just plugging it into a calculator? Of course, [imath](2 +\sqrt{5}) + (2-\sqrt{5}) = 4 [/imath]; maybe this can in some way help?
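One way to see it for this pair: set [imath]u=\sqrt[3]{2+\sqrt5}[/imath] and [imath]v=\sqrt[3]{2-\sqrt5}[/imath]; then [imath]u^3+v^3=4[/imath] and [imath]uv=\sqrt[3]{(2+\sqrt5)(2-\sqrt5)}=\sqrt[3]{-1}=-1[/imath], so [imath]t=u+v[/imath] satisfies [imath]t^3=u^3+v^3+3uv(u+v)=4-3t[/imath], and [imath]t^3+3t-4=(t-1)(t^2+t+4)[/imath] has [imath]t=1[/imath] as its only real root. A floating-point sanity check in Python (note the real cube root of the negative term has to be taken by hand):

```python
# u = (2 + sqrt 5)^(1/3),  v = (2 - sqrt 5)^(1/3),  t = u + v.
# u^3 + v^3 = 4 and uv = -1 give t^3 = 4 - 3t, whose only real root is 1.
u = (2 + 5 ** 0.5) ** (1 / 3)
v = -((5 ** 0.5 - 2) ** (1 / 3))   # real cube root of the negative term
t = u + v
print(t)  # ~1.0 up to floating-point error
```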
1941502
Use comparison test to show that [imath]\sum^{+\infty}_{k=1} \frac{1}{k(k+1)(k+2)}[/imath] converges and find its limit Use comparison test to show that [imath]\sum^{+\infty}_{k=1} \frac{1}{k(k+1)(k+2)}[/imath] converges and find its limit I tried expanding out the denominator, and then using the comparison test with [imath]\frac{1}{k^3}[/imath], but I think this is an incorrect use of the comparison test as I get divergence. I know that the limit is [imath]\frac{1}{4}[/imath] but I am not sure how to use the comparison test and then obtain the limit.
1000297
Calculate [imath]\sum_{n=1}^{\infty}(\frac{1}{2n}-\frac{1}{n+1}+\frac{1}{2n+4})[/imath] I am trying to calculate the following series: [imath]\sum_{n=1}^{\infty}\frac{1}{n(n+1)(n+2)}[/imath] and I managed to reduce it to this term [imath]\sum_{n=1}^{\infty}(\frac{1}{2n}-\frac{1}{n+1}+\frac{1}{2n+4})[/imath] And here I am stuck. I tried writing down a few partial sums but I can't see the pattern, [imath]\frac{1}{2}-\frac{1}{2}+\frac{1}{6}+\frac{1}{4}-\frac{1}{3}+\frac{1}{8}+...[/imath] I can't seem to find a closed formula that we can calculate for [imath]S_n[/imath]. How should I go about solving this question?
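For both forms of the question in this pair, the partial-fraction split [imath]\frac{1}{k(k+1)(k+2)}=\frac12\left(\frac{1}{k(k+1)}-\frac{1}{(k+1)(k+2)}\right)[/imath] telescopes, giving [imath]S_N=\frac14-\frac{1}{2(N+1)(N+2)}\to\frac14[/imath]. An exact-arithmetic check in Python (the helper name is mine):

```python
from fractions import Fraction

# Exact partial sums; the telescoping closed form is
# S_N = 1/4 - 1/(2(N+1)(N+2)).
def partial_sum(N):
    return sum(Fraction(1, k * (k + 1) * (k + 2)) for k in range(1, N + 1))

for N in (5, 10, 50):
    assert partial_sum(N) == Fraction(1, 4) - Fraction(1, 2 * (N + 1) * (N + 2))
print(float(partial_sum(50)))  # close to 0.25
```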
1941940
Proving [imath]\sum\limits_{cyc}{\frac{xy}{x^5 + xy + y^5}} \le 1[/imath] Prove that [imath]{\frac{xy}{x^5 + xy + y^5}} + {\frac{yz}{y^5 + yz + z^5}} + {\frac{zx}{z^5 + zx + x^5}}\le 1[/imath] for positive reals [imath]x, y, z[/imath] whose product is [imath]1[/imath]. I have tried proving the above inequality. Since [imath]x^5 + y^5 \ge x^4y + xy^4 = xy(x^3+y^3)[/imath] by Muirhead's Inequality, each term is at most [imath]\frac{xy}{xy(x^3+y^3)+xy}[/imath], so now we have to prove that [imath]\sum_{cyc}{\frac{1}{x^3 + y^3 + 1}} \le 1[/imath] I think it can be done with the AM-GM inequality but am not sure. Thanks.
1898058
Symmetric Inequality in [imath]\mathbb{R}[/imath] Let [imath]a, b[/imath] and [imath]c[/imath] be positive real numbers such that [imath]abc=1[/imath]. I want to prove this inequality: [imath]\sum_{cyc}\frac{ab}{a^5+b^5+ab}\le 1[/imath] I tried the AM-GM inequality but couldn't complete the proof. Which inequality should I use? Thanks.
540163
Why is the vector chosen by the right hand rule? I'm reading first year Physics and the Young & Freedman (13e) text describes how to find the vector (cross) product. Notably, the authors simplify the description of finding the product direction with the right hand rule, which I've previously seen used to explain the direction of EM force from a wire with current flowing through it (I think?). The decision here sounds almost arbitrary: There are always two directions perpendicular to a given plane, one on each side of the plane. We choose which of these is the direction of [imath]{\mathrm{\overrightarrow{A}}}[/imath] [imath]{\mathrm{\times}}[/imath] [imath]{\mathrm{\overrightarrow{B}}}[/imath] as follows. Imagine rotating vector [imath]{\mathrm{\overrightarrow{A}}}[/imath] about the perpendicular line until it is aligned with [imath]{\mathrm{\overrightarrow{B}}}[/imath], choosing the smaller of the two possible angles between [imath]{\mathrm{\overrightarrow{A}}}[/imath] and [imath]{\mathrm{\overrightarrow{B}}}[/imath]. Curl the fingers of your right hand around the perpendicular line so that the fingertips point in the direction of rotation; your thumb will then point in the direction of [imath]{\mathrm{\overrightarrow{A}}}[/imath] [imath]{\mathrm{\times}}[/imath] [imath]{\mathrm{\overrightarrow{B}}}[/imath]. Then on the next page, We see that there are two kinds of coordinate systems, differing in the signs of the vector products of unit vectors. An axis system in which [imath]{\mathcal{\hat{i}}}[/imath] [imath]{\mathrm{\times}}[/imath] [imath]{\mathcal{\hat{j}}}[/imath] = [imath]{\mathcal{\hat{k}}}[/imath], as in [the example] is called a right-handed system. The usual practice is to use only right-handed systems, and we will follow that practice throughout this book. Now I'm wondering what I'm not being told about and what the significance is?
I understand that most of the students reading the book probably don't know enough linear algebra to understand complex derivations of this choice but it's also kind of frustrating not to know any more than the simplified version... Can anyone enlighten me?
1941044
Why is cross product defined in the way that it is? [imath]\mathbf{a}\times \mathbf{b}[/imath] follows the right hand rule? Why not the left hand rule? Why is it [imath]a b \sin (x)[/imath] times the perpendicular vector? Why is [imath]\sin (x)[/imath] used for the vector product but [imath]\cos(x)[/imath] for the scalar product? So why is the cross product defined in the way that it is? I am mainly interested in the right hand rule definition too, as it seems out of reach.
1942094
Sum of Binomial Coefficients Times a Polynomial Is there a closed-form expression for [imath]\displaystyle\sum_{k=0}^n {n\choose k}k^2[/imath]? It holds that [imath]\displaystyle\sum_{k=0}^n {n\choose k}k=n2^{n-1}[/imath]. Is there a generalization for higher degrees?
1923215
Give a combinatorial proof: [imath] n(n+1)2^{n-2} = \sum_{k=1}^{n}k^2\binom{n}{k} [/imath] Find a combinatorial argument for the following binomial identity: [imath]n(n+1)2^{n-2} = \sum_{k=1}^{n}k^2\binom{n}{k}.[/imath] Algebraic proofs can be found at Can [imath]n(n+1)2^{n-2} = \sum_{i=1}^{n} i^2 \binom{n}{i}[/imath] be derived from the binomial theorem?, and a related identity at [imath]\sum_{k=1}^m k(k-1){m\choose k} = m(m-1) 2^{m-2}[/imath].
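The identity in this pair is easy to spot-check numerically before hunting for the combinatorial argument; a one-loop Python sketch using `math.comb` (available in Python 3.8+):

```python
from math import comb

# Check sum_{k=1}^n k^2 C(n,k) == n(n+1) 2^(n-2) for small n.
# (For n = 2 the right side is 2*3*2^0 = 6, matching 1*2 + 4*1.)
for n in range(2, 15):
    lhs = sum(k * k * comb(n, k) for k in range(1, n + 1))
    assert lhs == n * (n + 1) * 2 ** (n - 2)
print("identity holds for n = 2..14")
```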
1942489
Are they the same function? [imath]y = x^2/x[/imath] and [imath]y = x[/imath] Are they the same function? [imath]y=\frac{x^2}{x}[/imath] and [imath]y=x[/imath] For the first function, if we don't divide both the numerator and the denominator by [imath]x[/imath], its domain is the real line except the point [imath]x = 0[/imath], which is different from the domain of the second function.
1525054
Why are removable discontinuities even discontinuities at all? If I have, for example, the function [imath]f(x)=\frac{x^2+x-6}{x-2}[/imath] there will be a removable discontinuity at [imath]x=2[/imath], yes? Why does this discontinuity exist at all if the function can be simplified to [imath]f(x)=x+3[/imath]? I suppose the answer is that you can't simplify it because you can't divide by something that could potentially equal [imath]0[/imath]. But what if you start with [imath]f(x)=x+3[/imath]? What's stopping you from multiplying it by [imath]1[/imath] in the form [imath]\frac{x-2}{x-2}[/imath]? Can multiplying by [imath]1[/imath] really introduce a discontinuity to the function?
1941453
Finding a distinguished open set in the intersection of two [imath]\operatorname{Spec}[/imath]s I am trying to follow the answers to this question about an intersection of affine open sets being covered by affine open sets. However, I can't seem to figure out why given [imath]\operatorname{Spec} A \cap \operatorname{Spec} B[/imath], it is possible to find a distinguished open set [imath]D_{A}(f) \subseteq \operatorname{Spec} A \cap \operatorname{Spec} B[/imath]. How would one show this?
1186011
Proof of proposition 5.3.1 of Ravi Vakil's notes on algebraic geometry I am reading the proof of proposition 5.3.1 of Ravi Vakil's notes on algebraic geometry, and I have a problem with the last sentence : "If [imath]g' = g''/f^n[/imath] ([imath]g''\in A[/imath]) then [imath]\textrm{Spec}((A_f)_{g'}) = \textrm{Spec} (A_{fg''})[/imath], and we're done." Noting [imath]V = \textrm{Spec} (B)[/imath] and [imath]V' = \textrm{Spec} (B_g)[/imath], and noting [imath]D_Z (h)[/imath] the distinguished affine open subset of an affine scheme [imath]Z[/imath] associated to a section [imath]h\in\Gamma(Z,\mathcal{O}_Z)[/imath], I understand perfectly that if we note [imath]U' = \textrm{Spec} (A_f)[/imath] then the inclusion [imath]U'\subset V[/imath] induces a morphism [imath]f : \Gamma(V',\mathcal{O}_X)\to \Gamma(U',\mathcal{O}_X)[/imath] and that if we note [imath]\varphi[/imath] the associated morphism of affine schemes then [imath]\varphi^{-1}(V') = D_{U'}(g')[/imath] where [imath]g'[/imath] is the image of [imath]g[/imath] by [imath]f[/imath], but as [imath]\varphi[/imath] is the the inclusion [imath]V'\subset U'[/imath], we have [imath]\varphi^{-1}(V') = V'[/imath] because [imath]V'\subset U'[/imath] so finally [imath]V' = D_{U'}(g')[/imath] and [imath]V'[/imath] is distinguished in [imath]U'[/imath]. I guess that the last sentence of the proof is supposed to show that this [imath]V'[/imath] is also distinguished in [imath]U[/imath], but I don't understand why.
1942602
Does [imath]S^1 \times S^3[/imath] admit a complex structure? Does [imath]S^1 \times S^3[/imath] admit a complex structure? It is parallelizable, so it admits an almost complex structure...is there a nice way to see that this is or is not integrable?
1932560
Complex structure on [imath]\mathbb S^1 \times \mathbb S^3[/imath] Apparently there is a complex structure on [imath]X = \mathbb S^1 \times \mathbb S^3[/imath] but I have no idea why. Any hints?
1940444
Solve the equation [imath]\cos^n(x) - \sin^n(x)=1[/imath] Solve the equation [imath]\cos^n(x) - \sin^n(x)=1,n \in \mathbb{N}-\{0\}[/imath] If [imath]n[/imath] is even then [imath]\cos^n(x) = \sin^n(x)+1[/imath] is only possible if [imath]\sin(x)=0[/imath]; therefore the solution is [imath]x=k\pi, k \in \mathbb{Z}[/imath]. I'm having problems with the [imath]n[/imath] odd case. UPDATE For [imath]n=1[/imath] we have [imath]\cos(x) - \sin(x)=1[/imath] and by squaring we get [imath]\sin(x)\cos(x)=0[/imath], which leads to [imath]x=k\pi[/imath] or [imath]x=\pm \frac \pi 2 + k\pi[/imath]. From these solutions, only [imath]x=2k\pi[/imath] and [imath]x=- \frac \pi 2 + 2k\pi[/imath] are valid. Also [imath]x=2k\pi[/imath] and [imath]x=- \frac \pi 2 + 2k\pi[/imath] are solutions for all odd [imath]n[/imath].
179778
Solve [imath]\cos^{n}x-\sin^{n}x=1[/imath] with [imath]n\in \mathbb{N}[/imath]. Solve [imath]\cos^{n}x-\sin^{n}x=1[/imath] with [imath]n\in \mathbb{N}[/imath] I have no idea how to deal with this crazy question. One idea that came to my mind is factorization, but I can't go on... Can anyone help me please? Thank you.
1529881
Finding a nonempty subset that is not a subgroup Question: Give an example of a group [imath]G[/imath] having a subset [imath]H[/imath] such that [imath]HH=H[/imath], where [imath]HH = \left\{h_1h_2\mid h_1,h_2\in H \right\}[/imath], but [imath]H[/imath] is not a subgroup of [imath]G[/imath]. Attempted Solution: I haven't gotten far with this problem. In a previous exercise, I was able to prove that [imath]HH=H[/imath] if [imath]H[/imath] is a subgroup of [imath]G[/imath]. I am having a hard time thinking of a scenario where [imath]h_1h_2\in H[/imath] and [imath]H[/imath] is not a group. I've played around with making [imath]h_1h_2[/imath] not closed under its operation, but I haven't gotten anywhere yet.
1531353
example for not being a subgroup Let [imath]G[/imath] be a group with subgroups [imath]U[/imath] and [imath]V[/imath], and let [imath]UV := \left\{uv \mid u \in U, v \in V \right\}[/imath] I want to prove the following: [imath]UV[/imath] is in general not a subgroup of [imath]G[/imath]. Proof. Suppose [imath]U \nsubseteq V[/imath] and [imath]V \nsubseteq U[/imath]. Let [imath]x \in U \setminus V[/imath] and [imath]y \in V \setminus U[/imath]. Hence [imath]xy \in UV[/imath] but [imath]xy \notin U \cup V[/imath]. This implies [imath]\left(xy \right)\left(xy\right) \notin UV[/imath] by the definition of [imath]UV[/imath], hence [imath]UV[/imath] is not a subgroup of [imath]G \quad \square[/imath] But I also want to give an explicit counterexample besides this proof. Could you give me a hint as to which group I should look in for the easiest counterexample?
1943443
How to find CDF on disk Let [imath]Z = (X, Y)[/imath] be a random variable uniform inside circle of radius [imath]R[/imath]. How to find cumulative distribution function (CDF) on disk?
1931107
How to find the CDF and PDF How do I find the cumulative distribution function (CDF) and probability density function (PDF) for a uniform variable inside the circle given by [imath]R^2 = (X-c_1)^2+(Y-c_2)^2[/imath], where ([imath]c_1, c_2[/imath]) is the center of the circle? I think that the PDF for this problem is given by \begin{equation} f(x,y) = \begin{cases} \frac{1}{\pi R^2}&, \quad (x-c_1)^2 + (y-c_2)^2 \leq R^2\\ 0&, \quad \text{otherwise} \end{cases} \end{equation}
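Under the uniform density above, the CDF of the distance from the center is [imath]P\big(\sqrt{X^2+Y^2}\le r\big)=(r/R)^2[/imath] for [imath]0\le r\le R[/imath], since probability is proportional to area. A seeded Monte Carlo sketch in Python (the values of R, r, and the sample size are arbitrary choices of mine):

```python
import random

# Rejection-sample uniform points on the disk of radius R and
# estimate P(distance from center <= r); it should be (r/R)^2.
random.seed(0)
R, r, n = 2.0, 1.0, 200_000
hits = 0
inside = 0
while inside < n:
    x, y = random.uniform(-R, R), random.uniform(-R, R)
    if x * x + y * y <= R * R:        # accepted: uniform on the disk
        inside += 1
        if x * x + y * y <= r * r:
            hits += 1
print(hits / n)  # close to (r/R)^2 = 0.25
```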
1943546
How many [imath]\sigma[/imath]-algebras on a [imath]4[/imath]-element set [imath]X[/imath]? How many [imath]\sigma[/imath]-algebras on a set [imath]X[/imath] are there when the cardinality of [imath]X[/imath] is [imath]4[/imath]?
529771
Number of sigma algebras for set with 4 elements I am supposed to find the sigma-algebras on the set [imath]X=\{1,2,3,4\}[/imath]. I found 15 (now with the new set even more) of them. I was wondering whether there is some nice proof showing that there are no more of them? The problem is that I would try to prove this by looking at a lot of different cases of how a new sigma-algebra would have to look, and then prove that it already is the whole set. Does anybody here have a better idea? Only the set and the empty one; the set of all subsets; 4 sets, where you take the set, the empty one and one element with its complement; 3 sets with the empty one, the set itself and a set containing two elements and its complement. EDIT: I have even found 6 of another type that are possible, just like: [imath]\Sigma = \{\{1\},\{2\},\{1,2\},\{3,4\},\{1,3,4\},\{2,3,4\},\emptyset,X\}[/imath] Does anybody have an idea?
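On a finite set a σ-algebra is determined by the partition of [imath]X[/imath] into its atoms, so the count should be the Bell number [imath]B_4=15[/imath]. A brute-force enumeration in Python confirms it (helper names are mine; on a finite set, closure under complement and pairwise union already makes a family a σ-algebra):

```python
from itertools import combinations

X = frozenset(range(4))
# The 14 proper nonempty subsets; every sigma-algebra also contains {} and X.
proper = [frozenset(c) for r in range(1, 4) for c in combinations(range(4), r)]

def closed_family(extra):
    fam = set(extra) | {frozenset(), X}
    return (all(X - A in fam for A in fam)
            and all(A | B in fam for A in fam for B in fam))

count = sum(closed_family(extra)
            for r in range(len(proper) + 1)
            for extra in combinations(proper, r))
print(count)  # 15  (= Bell number B(4): sigma-algebras <-> partitions of X)
```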
1944055
Irrationality of [imath]e^\sqrt{2}[/imath] - reference request Does anybody have a proof of, or know about, the result by Borwein and Bailey that [imath]e^\sqrt{2}[/imath] is irrational, and perhaps the motivation behind the problem and its solution?
1441292
Show that [imath]e^{\sqrt 2}[/imath] is irrational I'm trying to prove that [imath]e^{\sqrt 2}[/imath] is irrational. My approach: [imath] e^{\sqrt 2}+e^{-\sqrt 2}=2\sum_{k=0}^{\infty}\frac{2^k}{(2k)!}=:2s [/imath] Define [imath]s_n:=\sum_{k=0}^{n}\frac{2^k}{(2k)!}[/imath], then: [imath] s-s_n=\sum_{k=n+1}^{\infty}\frac{2^k}{(2k)!}=\frac{2^{n+1}}{(2n+2)!}\sum_{m=0}^{\infty}\frac{2^m}{\prod_{j=1}^{2m}(2n+2+j)}\\<\frac{2^{n+1}}{(2n+2)!}\sum_{m=0}^{\infty}\frac{2^m}{(2n+3)^{2m}}=\frac{2^{n+1}}{(2n+2)!}\frac{(2n+3)^2}{(2n+3)^2-2} [/imath] Now assume [imath]s=\frac{p}{q}[/imath] for [imath]p,q\in\mathbb{N}[/imath]. This implies: [imath] 0<\frac{p}{q}-s_n<\frac{2^{n+1}}{(2n+2)!}\frac{(2n+3)^2}{(2n+3)^2-2}\iff\\ 0<p\frac{(2n)!}{2^n}-qs_n\frac{(2n)!}{2^n}<\frac{2}{(2n+1)(2n+2)}\frac{(2n+3)^2}{(2n+3)^2-2} [/imath] But [imath]\left(p\frac{(2n)!}{2^n}-qs_n\frac{(2n)!}{2^n}\right)\in\mathbb{N}[/imath] which is a contradiction for large [imath]n[/imath]. Thus [imath]s[/imath] is irrational. Can we somehow use this to prove [imath]e^\sqrt{2}[/imath] is irrational?
1942902
2 Related Questions About Finding A Closed Form Consider the sequence defined by [imath] \begin{cases} s_0=0\\ s_1=3\\ s_n=6s_{n-1}-9s_{n-2} & \text{if }n\ge 2 \end{cases} .[/imath] Find a closed form for [imath]s_n[/imath]. Consider the sequence defined by [imath] \begin{cases} t_0=5\\ t_1=9\\ t_n=6t_{n-1}-9t_{n-2} & \text{if }n\ge 2 \end{cases} .[/imath] Find a closed form for [imath]t_n[/imath]. I am having trouble with these questions about closed forms, could someone walk me step by step through each problem? Thanks!
1942721
Finding a closed form for a recurrence relation [imath]a_n=3a_{n-1}+4a_{n-2}[/imath] Consider the sequence defined by [imath] \begin{cases} a_0=1\\ a_1=2\\ a_n=3a_{n-1}+4a_{n-2} & \text{if }n\ge 2 \end{cases} .[/imath] Find a closed form for [imath]a_n[/imath]. I tried listing out examples, but I don't see any common pattern between them. All solutions are greatly appreciated.
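A sketch (beyond either question) of the standard characteristic-equation method, checked numerically. For [imath]s_n=6s_{n-1}-9s_{n-2}[/imath] the characteristic equation [imath]x^2=6x-9[/imath] has the double root [imath]3[/imath], so [imath]s_n=(A+Bn)3^n[/imath]; fitting the seeds gives [imath]s_n=n\cdot 3^n[/imath] and [imath]t_n=(5-2n)3^n[/imath]. For [imath]a_n=3a_{n-1}+4a_{n-2}[/imath] the roots of [imath]x^2=3x+4[/imath] are [imath]4[/imath] and [imath]-1[/imath], and fitting [imath]a_0=1,a_1=2[/imath] gives [imath]a_n=(3\cdot 4^n+2(-1)^n)/5[/imath]. Function names are mine.

```python
def by_recurrence(x0, x1, p, q, n):
    """n-th term of x_n = p*x_{n-1} + q*x_{n-2} with the given seeds."""
    if n == 0:
        return x0
    prev, cur = x0, x1
    for _ in range(n - 1):
        prev, cur = cur, p * cur + q * prev
    return cur

# closed forms read off from the characteristic roots
def closed_s(n):
    return n * 3 ** n                         # double root 3; s_0=0, s_1=3

def closed_t(n):
    return (5 - 2 * n) * 3 ** n               # double root 3; t_0=5, t_1=9

def closed_a(n):
    return (3 * 4 ** n + 2 * (-1) ** n) // 5  # roots 4 and -1; a_0=1, a_1=2
```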
1943465
How to show that two matrices have same eigenvalues? Consider [imath]4[/imath] square matrices [imath]A_{i,j}[/imath] for [imath]i,j\in \{1,2\}[/imath]. Suppose [imath]A_{1,1}[/imath] and [imath]A_{2,2}[/imath] are invertible. Consider [imath] B:=A_{2,1}A^{-1}_{1,1}A_{1,2} A_{2,2}^{-1} [/imath] and [imath] C:=A_{1,2}A^{-1}_{2,2}A_{2,1} A_{1,1}^{-1} [/imath] I have done some checks and it seems to me that [imath]B[/imath] and [imath]C[/imath] have the same eigenvalues (even if in different orders). How can I show this?
821934
Eigenvalues of [imath]AB[/imath] and [imath]BA[/imath] where [imath]A[/imath] and [imath]B[/imath] are square matrices Show that if [imath]A,B \in M_{n \times n}(K)[/imath], where [imath]K=\mathbb{R}, \mathbb{C}[/imath], then the matrices [imath]AB[/imath] and [imath]BA[/imath] have the same eigenvalues. I do that like this: let [imath]\lambda[/imath] be the eigenvalue of [imath]B[/imath] and [imath]v\neq 0[/imath] [imath]ABv=A\lambda v=\lambda Av=BAv[/imath] the third equation is valid, because [imath]Av[/imath] is the eigenvector of [imath]B[/imath]. Am I doing it right?
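An aside, not part of either question: the usual proof shows [imath]\det(\lambda I-AB)=\det(\lambda I-BA)[/imath] (first for invertible [imath]A[/imath], where [imath]BA=A^{-1}(AB)A[/imath], then in general by a continuity or density argument). For [imath]2\times 2[/imath] matrices the characteristic polynomial is [imath]\lambda^2-\operatorname{tr}(M)\lambda+\det(M)[/imath], and both trace and determinant agree for [imath]AB[/imath] and [imath]BA[/imath]. A concrete check with matrices of my own choosing:

```python
# AB and BA share trace and determinant, hence (in the 2x2 case) the
# whole characteristic polynomial lambda^2 - tr*lambda + det.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def char_poly(M):
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return (1, -tr, det)   # coefficients of lambda^2 - tr*lambda + det

A = [[1, 2], [3, 4]]
B = [[0, 5], [6, 7]]
AB = matmul(A, B)
BA = matmul(B, A)
```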
1943470
Why can't we use van Kampen's theorem directly on the Hawaiian earring? The Hawaiian earring is the space [imath]X = \cup_{i=1}^{\infty}\{(x,y) : (x-1/i)^2+y^2=1/i^2\}.[/imath] It is different from the wedge of infinitely many circles. But why can't we apply van Kampen's theorem directly by taking each open path-connected set to be [imath]U_i = X\backslash\{(2/j,0)\}_{j\in\mathbb{N}}^{j\neq i}[/imath] and hence obtain that [imath] \pi_1(X,(0,0)) = *_{\infty}\mathbb{Z} [/imath] ? Am I missing some detail?
1827486
Why does the Van Kampen Theorem fail for the Hawaiian earring space? The Hawaiian earring space has a notoriously complicated fundamental group, and is essentially not as simple as the wedge sum of countably many circles, whose fundamental group is straightforwardly given by the Van Kampen Theorem. However, I still struggle to understand why Van Kampen fails in this case. I can, however, largely grasp the topological difference between the two. For instance, the wedge point of countably many unit circles is locally contractible while that of the Hawaiian earring is not. But, when checking the Van Kampen Theorem as given by Allen Hatcher, I really can't find anything wrong with the earring: Path-connectedness is trivial. It really suffices to check the red-boxed condition. Of course, for each circle, say [imath]C_n:=\{(x-1/n)^2+y^2=1/n^2\}[/imath], in the earring space, we can always find an open subarc [imath]L_n[/imath] that deformation retracts onto [imath](0,0)[/imath], the wedge point. So why isn't the theorem applicable?
1943975
Prove the following series identity [imath]\sum\limits_{s=0}^\infty \frac{1}{(sn)!}[/imath] Prove that [imath]\sum\limits_{s=0}^\infty \frac{1}{(sn)!}=\frac{1}{n}\sum\limits_{r=0}^{n-1}\exp\left(\cos\frac{2r\pi}{n}\right)\cos\left(\sin\frac{2r\pi}{n}\right)[/imath] I don't have a real idea of how to start approaching this question; some hints and suggestions would be helpful.
1708900
Sum of [imath]\sum \limits_{n=0}^{\infty} \frac{1}{(kn)!}[/imath] Does a closed form exist for [imath]\sum \limits_{n=0}^{\infty} \frac{1}{(kn)!}[/imath] in terms of [imath]k[/imath] and other functions? The best that I have been able to do is solve the case where [imath]k=1[/imath], since the sum is just the infinite series for [imath]e[/imath]. I would guess that any closed form must involve the exponential function, but am at a loss to prove it.
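A note not in either question: the identity is a roots-of-unity filter. Since [imath]\frac{1}{n}\sum_{r=0}^{n-1}\omega^{rk}[/imath] equals [imath]1[/imath] when [imath]n\mid k[/imath] and [imath]0[/imath] otherwise (with [imath]\omega=e^{2\pi i/n}[/imath]), averaging the power series of [imath]e^z[/imath] over [imath]z=\omega^r[/imath] keeps exactly the terms [imath]1/(sn)![/imath]; taking real parts of [imath]e^{\omega^r}=e^{\cos\theta}(\cos(\sin\theta)+i\sin(\sin\theta))[/imath] gives the stated right-hand side (the imaginary parts cancel in conjugate pairs). A numerical sanity check, with function names of my own:

```python
import math

def lhs(n, terms=40):
    # sum_{s>=0} 1/(sn)!  (truncated; the tail is astronomically small)
    return sum(1 / math.factorial(s * n) for s in range(terms))

def rhs(n):
    # (1/n) * sum_r exp(cos(2 pi r / n)) * cos(sin(2 pi r / n))
    total = 0.0
    for r in range(n):
        theta = 2 * math.pi * r / n
        total += math.exp(math.cos(theta)) * math.cos(math.sin(theta))
    return total / n
```

For [imath]n=1[/imath] this recovers [imath]e[/imath], and for [imath]n=2[/imath] it recovers [imath]\cosh 1[/imath].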
1943408
Why is the genus 2 canonical map of degree 2? Let [imath]C[/imath] be a smooth genus 2 curve, with canonical divisor [imath]W[/imath]. From the Riemann-Roch theorem, we know that [imath]l(W)=2[/imath], so there exist two functions [imath]g_1,g_2[/imath] that span [imath]L(W)[/imath]. Since [imath]g_1[/imath] and [imath]g_2[/imath] are linearly independent, the map [imath]\varphi:C⟶\mathbb P^1[/imath] defined by [imath][g_1(P):g_2(P)][/imath] is a non-constant morphism of smooth curves, so it is surjective. We also know that [imath]\deg \varphi\geq 2[/imath], else [imath]C[/imath] would be birational to [imath]\mathbb P^1[/imath], contradicting the assumption on its genus. I want to show with elementary arguments that [imath]\deg\varphi=2[/imath]. Using that [imath]\deg\varphi[/imath] is either the degree of the zero part of a [imath]g_i[/imath] or of its pole part, I tried to write [imath]W[/imath] as [imath]D_0+Q_1+Q_2[/imath], with [imath]D_0\in\mbox{Pic}^0(C)[/imath] and the [imath]Q_i[/imath] possibly equal, to obtain a contradiction with the numbers of poles/zeros that [imath]g_1[/imath] or [imath]g_2[/imath] should have, but it seems it does not work. If [imath]D_0=0[/imath], then as [imath]\mbox{div}(g_i)+W=\mbox{div}(g_i)+Q_1+Q_2≥0[/imath] and the [imath]g_i[/imath] have poles at [imath]Q_1[/imath] and [imath]Q_2[/imath], this means each [imath]g_i[/imath] has 2 zeros and 2 poles, and I get the result. But I fail to see why [imath]D_0[/imath] would be zero. So my questions are: is my reasoning correct? Why would [imath]D_0[/imath] be zero? For my curiosity also, is there a more elegant way using elementary arguments? I am not familiar with sheaves and line bundles, etc., and sadly I cannot find any proof of this well-known result in the most elementary language. Also, I do not want to consider a possible equation for [imath]C[/imath]. Thanks in advance!
1847205
How to compute degree of morphism given by a canonical divisor? Let [imath]X[/imath] be a smooth projective curve of genus [imath]2[/imath]. Let [imath]K[/imath] be a canonical divisor. It is known [imath]K[/imath] has degree [imath]2[/imath] and it is not hard to show that [imath]|K|[/imath] is base point free so induces a morphism into [imath]\mathbb P^{1}[/imath] (up to linear isomorphism of [imath]\mathbb P^1[/imath]). How do I show the degree of this morphism is [imath]2[/imath]? I've thought about the formula [imath]deg(f^*D) = deg(f) \cdot deg(D)[/imath] and taking [imath]D[/imath] to be a point, but I don't know enough about the preimage to say anything.
1945166
Why can the set of all [imath]2 \times 2[/imath] matrices be a vector space? I am sorry, this is a stupid question, but I am really confused about why a [imath]2 \times 2[/imath] matrix is not a vector, yet the set of all [imath]2 \times 2[/imath] matrices can be a vector space. According to the definition, each element in a vector space is a vector. So a [imath]2 \times 2[/imath] matrix cannot be an element in a vector space, since it is not even a vector.
116717
Can a basis for a vector space be made up of matrices instead of vectors? I'm sorry if this is a silly question. I'm new to the notion of bases and all the examples I've dealt with before have involved sets of vectors containing real numbers. This has led me to assume that bases, by definition, are made up of a number of [imath]n[/imath]-tuples. However, now I've been thinking about a basis for all [imath]n\times n[/imath] matrices and I keep coming back to the idea that the simplest basis would be [imath]n^2[/imath] matrices, each with a single [imath]1[/imath] in a unique position. Is this a valid basis? Or should I be trying to get column vectors on their own somehow?
1945615
A proof in Linear Algebra with transpose matrices [imath] U = \{A \in M_n\Bbb R : A^t = -A\}[/imath] I need to show whether the above subset of [imath]M_n\Bbb R[/imath] is actually a subspace. To show a subset is a subspace it must satisfy the following criteria: [imath]U[/imath] is not empty; [imath]u+v \in U[/imath]; for all [imath]a \in \mathbb R[/imath], [imath]au \in U[/imath]. So my thinking is that [imath]U[/imath] is not empty because if we let [imath]A[/imath] be the zero matrix then [imath]A^t = -A[/imath] still holds. I am unsure how to prove whether or not criteria 2 and 3 are valid. Looking for some help.
1944128
A question in Subspaces in linear algebra Looking for some help with the following question. I have begun to understand the concept of subspaces, but I am not sure how to deal with transpose matrices. So I need to determine which of the subsets of [imath]M_n(\Bbb R)[/imath] are actually subspaces: 1) [imath] U = \{A \in M_n\Bbb (R) : A^t = A\}[/imath], 2) [imath] U = \{A \in M_n\Bbb (R) : A^t = -A\}[/imath], 3) [imath] U = \{A \in M_n\Bbb (R) : A^t \neq A\}[/imath].
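An aside, not in either question: for the skew-symmetric case the closure computations are simply [imath](A+B)^t=A^t+B^t=-A-B=-(A+B)[/imath] and [imath](aA)^t=aA^t=-aA[/imath], and the zero matrix is skew-symmetric. A throwaway Python check on [imath]2\times 2[/imath] examples (all names are my own):

```python
# Closure check for U = {A : A^t = -A} in M_2(R): sums and scalar
# multiples of skew-symmetric matrices stay skew-symmetric.

def transpose(A):
    return [[A[j][i] for j in range(len(A))] for i in range(len(A))]

def is_skew(A):
    return transpose(A) == [[-x for x in row] for row in A]

A = [[0, 2], [-2, 0]]
B = [[0, -5], [5, 0]]
S = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]   # A + B
cA = [[3 * A[i][j] for j in range(2)] for i in range(2)]        # 3 * A
```

The symmetric case (1) works the same way; case (3) fails already because it does not contain the zero matrix.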
1945809
How to do integration by parts with a Borel measure How does one do integration by parts when [imath]\mu[/imath] is a finite Borel measure? That is, how does one integrate by parts in \begin{align} \int_0^\infty f(x) g(x) d\mu(x). \end{align}
1340790
Integration by parts for general measure? Let [imath]\mu[/imath] be a general measure, suppose [imath]f,g[/imath] has compact support on [imath]\mathbb{R}[/imath], when does the integration by parts formula hold [imath]\int f'g d\mu = - \int g'fd\mu?[/imath] I know in general this is false, we can take [imath]\mu[/imath] to be supported on a point, say [imath]0[/imath], then it is not necessarily true that [imath]f'(0)g(0) = -g'(0)f(0).[/imath] If [imath]\mu[/imath] is absolute continuous w.r.t. Lebesgue measure, we have [imath]\frac{d\mu}{dx} = h[/imath] [imath]\int f'gd\mu = \int f'gh dx = -\int f(gh)'dx[/imath] where [imath](gh)'dx[/imath] might be a measure. but we can not recover the form [imath]\int g'fh dx[/imath]. Thank you very much!
1946373
Why is this basic solution to a differential equation correct? I have asked questions before, and have read many questions and answers from others on stackexchange, about how to treat "[imath]dx[/imath]" in a differential equation. People often have the intuition of treating [imath]dx[/imath] in the term "[imath]\frac{df}{dx}[/imath]" as if it were a separate subterm of that, even though [imath]\frac{df}{dx}[/imath], or at least [imath]\frac{d}{dx}[/imath], is an operator, and shouldn't be seen as a fraction of two separable elements. Nevertheless, when solving the simple differential equation [imath]\frac{dx}{dt}=x[/imath], people often proceed as follows: [imath](a):\qquad \frac{dx}{dt}=x[/imath] Step 1: multiply by [imath]dt[/imath] and divide by [imath]x[/imath]: [imath](b):\qquad \frac{1}{x}dx=dt [/imath] Step 2: integrate both sides: [imath](c): \qquad \int\frac{1}{x}dx=\int dt[/imath] Step 3: Solve the integral: [imath](d): \qquad \ln(x)=t+C\implies x=e^{t+C}=x_0e^t[/imath] So my questions are: we know that the justification of Step 1 as "multiplication" is incorrect, since [imath]dx[/imath] and [imath]dt[/imath] are not separable elements, so what is the justification for going from equation [imath](a)[/imath] to [imath](c)[/imath]? When [imath]a=b[/imath], we can conclude that [imath]\int adx=\int bdx[/imath], since this takes the integral of both sides with respect to the same variable, but what is the justification of inserting the integral sign in equation [imath](c)[/imath], without also adding the differential [imath]dx[/imath] to both sides?
1252405
Is it mathematically valid to separate variables in a differential equation? I read the following statement in a book on Calculus, as part of my mathematics course: Technically this separation of [imath]\frac{dy}{dx}[/imath] is not mathematically valid. However, the resulting integration leads to the correct answer. The book also contains the following: To solve a differential equation by separation of variables: get all the [imath]x[/imath] values on one side and all the [imath]y[/imath] values on the other side by multiplication and division. separate [imath]\frac{dy}{dx}[/imath] as if it were a fraction. integrate both sides. Note: This box doesn't refer to a particular problem. It refers to a class of problems of differential equations which can be solved using the Method of Separation of Variables. My high school mathematics teacher told me that this is the most fundamental way to solve differential equations, but the textbook says it is not mathematically valid. I am not able to understand why certain methods are being followed without a mathematical proof. Or am I wrong?
1946608
Differentiability of [imath]f(x+y) = \frac{f(x)+f(y)}{1-f(x)f(y)}[/imath] let [imath]f[/imath] be a function defined on the interval [imath](-1, 1)[/imath] such that for all [imath]x, y \in (-1, 1)[/imath], [imath]f(x+y) = \frac{f(x)+f(y)}{1-f(x)f(y)}[/imath] Suppose that [imath]f[/imath] is differentiable at [imath]x = 0[/imath] (i) Show that [imath]f[/imath] is differentiable on [imath](-1, 1)[/imath]. (ii) If [imath]f'(0) = \pi/2[/imath], find the explicit expression of [imath]f(x)[/imath].
1946385
Proof of differentiability of [imath]f(x)[/imath] Let [imath]f[/imath] be a function defined on the interval [imath](-1,1)[/imath] such that for all [imath]x,y\in(-1,1)[/imath], [imath]f(x+y)=\frac{f(x)+f(y)}{1-f(x)f(y)}[/imath]. Suppose that [imath]f[/imath] is differentiable at [imath]x=0[/imath]. Show that [imath]f[/imath] is differentiable on [imath](-1,1)[/imath]. For me I can see that [imath]\tan(x)[/imath] is obviously a case of [imath]f[/imath], but I cannot find a general proof for this [imath]f[/imath]. Could any kind soul help?
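A note beyond either question: the addition law is the tangent addition formula, and the standard argument (setting [imath]x=y=0[/imath] gives [imath]f(0)=0[/imath]; differentiating the functional equation in [imath]y[/imath] at [imath]y=0[/imath] gives the ODE [imath]f'(x)=f'(0)(1+f(x)^2)[/imath]) yields [imath]f(x)=\tan(f'(0)x)[/imath]. With [imath]f'(0)=\pi/2[/imath] this is [imath]f(x)=\tan(\pi x/2)[/imath]. A quick numerical check of the law for this [imath]f[/imath], with sample points chosen so that [imath]x+y\in(-1,1)[/imath]:

```python
import math

# f(0) = 0 follows from x = y = 0, and differentiating in y at 0 gives
# f'(x) = f'(0) * (1 + f(x)^2), solved by f(x) = tan(f'(0) x).
# With f'(0) = pi/2:

def f(x):
    return math.tan(math.pi * x / 2)

pairs = [(0.1, 0.2), (-0.3, 0.4), (0.25, -0.5), (0.05, 0.85)]
```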
1946093
Equidistribution of [imath]\big(\frac{1+ \sqrt{5}}{2}\big)^n[/imath] in [imath][0,1][/imath] I need to prove that the sequence [imath]\{\gamma_n\}[/imath] where [imath]\gamma_n[/imath] is the fractional part of [imath]\big(\frac{1+ \sqrt{5}}{2}\big)^n[/imath] is NOT equidistributed in [imath][0,1][/imath]. Now, I am not sure if I am correct but if not then please correct me. A sequence in some interval is equidistributed if it is dense in that interval. Right?? So, I was thinking of proving that [imath]\gamma_n[/imath] is dense in [imath][0,1][/imath] but I do not have any clue how to start. I am looking for a hint/answer that is from Fourier analytic point of view rather than number theory.
13915
The fractional parts of the powers of the golden ratio are not equidistributed in [0,1] Let [imath]a_n=\left(\frac{1+\sqrt{5}}{2}\right)^n.[/imath] For a real number [imath]r[/imath], denote by [imath]\langle r\rangle[/imath] the fractional part of [imath]r[/imath]. Why is the sequence [imath]\langle a_n\rangle[/imath] not equidistributed in [imath][0,1][/imath]?
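An illustration not contained in either question: the algebraic reason equidistribution fails is that [imath]\varphi^n+\psi^n=L_n[/imath], the [imath]n[/imath]-th Lucas number, where [imath]\psi=(1-\sqrt5)/2[/imath] has [imath]|\psi|<1[/imath]. So [imath]\varphi^n[/imath] lies within [imath]|\psi|^n\to 0[/imath] of an integer, the fractional parts cluster at [imath]0[/imath] and [imath]1[/imath], and (via the Weyl criterion) the exponential sums [imath]\frac1N\sum_{n\le N}e^{2\pi i\varphi^n}[/imath] tend to [imath]1[/imath], not [imath]0[/imath]. A numeric sketch:

```python
import math

# phi^n + psi^n = L_n (Lucas numbers), psi = (1 - sqrt5)/2, |psi| < 1.
# Hence the distance from phi^n to the nearest integer is exactly |psi|^n.

phi = (1 + math.sqrt(5)) / 2
psi = (1 - math.sqrt(5)) / 2

def lucas(n):
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# exact fractional parts of phi^n, using phi^n = lucas(n) - psi^n:
# |psi|^n for odd n, 1 - |psi|^n for even n
fracs = {n: (abs(psi) ** n if n % 2 else 1 - abs(psi) ** n)
         for n in range(2, 30)}
```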
1946690
Inverse trig function equation How would you suggest I go about solving this question? I've been thinking about it for ages and nothing comes to mind. [imath]\arcsin x + \arccos x = \frac{\pi}{2}[/imath]
1116974
Why is it true? [imath]\arcsin(x) +\arccos(x) = \frac{\pi}{2}[/imath] The following identity is true for any given [imath]x \in [-1,1][/imath]: [imath]\arcsin(x) + \arccos(x) = \frac{\pi}{2}[/imath] But I don't know how to explain it. I understand that the derivative of the equation is a true statement, but why would the following be true, intuitively? [imath]\int^{x}_{C1}\frac{1\cdot dx}{\sqrt{1-x^{2}}} + \int^{x}_{C2}\frac{-1 \cdot dx}{\sqrt{1-x^{2}}} =\\ \arcsin(x) - \arcsin(C1) + \arccos(x) - \arccos(C2) = 0 \\ \text{while } \arcsin(C1) + \arccos(C2) = \frac{\pi}{2}[/imath] I can't find the right words to explain why this is true. Edit #1 (25 Jan, 20:10 UTC): The following is a true statement: [imath] \begin{array}{ll} \frac{d}{dx}(\arcsin(x) + \arccos(x)) = \frac{d}{dx}\frac{\pi}{2} \\ \\ \frac{1}{\sqrt{1-x^{2}}} + \frac{-1}{\sqrt{1-x^{2}}} = 0 \end{array} [/imath] By integrating the last equation, using the limits [imath]k[/imath] (a constant) and [imath]x[/imath] (variable), I get the following: [imath] \begin{array}{ll} \int^x_k\frac{1}{\sqrt{1-x^{2}}}dx + \int^x_k\frac{-1}{\sqrt{1-x^{2}}}dx = \int^x_k0 \\ \\ \arcsin(x) - \arcsin(k) + \arccos(x) - \arccos(k) = m \text{ (m is a constant)}\\ \\ \arcsin(x) + \arccos(x) = m + \arcsin(k) + \arccos(k) \\ \\ \text{Assuming that } A = m + \arcsin(k) + \arccos(k) = \frac{\pi}{2} \text{ ,for } x \in [-1,1] \end{array} [/imath] Using Calculus, why is that true for every [imath]x \in [-1,1][/imath]? Edit #2: A big mistake of mine was to think that [imath]\int^x_k0 = m \text{ (m is const.)}[/imath], but that isn't true for definite integrals. Thus the equations from "Edit #1" should be as follows: [imath] \begin{array}{ll} \int^x_k\frac{1}{\sqrt{1-x^{2}}}dx + \int^x_k\frac{-1}{\sqrt{1-x^{2}}}dx = \int^x_k0 \\ \\ \arcsin(x) - \arcsin(k) + \arccos(x) - \arccos(k) = 0\\ \\ \arcsin(x) + \arccos(x) = \arcsin(k) + \arccos(k) \\ \\ A = \arcsin(k) + \arccos(k) = \frac{\pi}{2} \text{ ,for } x \in [-1,1] \end{array} [/imath]
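A short sketch, not in either question, of the standard argument: [imath]g(x)=\arcsin(x)+\arccos(x)[/imath] has derivative [imath]\frac{1}{\sqrt{1-x^2}}-\frac{1}{\sqrt{1-x^2}}=0[/imath] on [imath](-1,1)[/imath], so [imath]g[/imath] is constant there; evaluating at [imath]x=0[/imath] gives [imath]g(0)=0+\pi/2[/imath], and continuity extends the identity to the closed interval [imath][-1,1][/imath]. A numerical confirmation:

```python
import math

# g is constant on (-1, 1) because g' = 0 there; g(0) = pi/2 pins the constant.

def g(x):
    return math.asin(x) + math.acos(x)

samples = [k / 10 for k in range(-10, 11)]   # includes both endpoints
```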
1946977
Proving a non-zero polynomial [imath]f[/imath] has a point where it's not [imath]0[/imath] (over a field with infinitely many elements) Let [imath]k[/imath] be a field with infinitely many elements. Let [imath]0 \not = f \in k [x_1, ..., x_n][/imath]. I want to prove that there exists [imath]P \in k^n[/imath] such that [imath]f(P) \not =0.[/imath] I think I could do it if I use induction on the number of variables or degree of [imath]f[/imath] or something, but it looks like it gets a bit messy, and I am sure there is a "slicker" proof. Could someone please tell me such a proof (that is not too messy) if there is one out there? Thank you very much!
474654
Proving the algebraic set corresponding to a polynomial is infinite. This is an exercise from Miles Reid, Undergraduate Algebraic Geometry. The proof has two parts, one of which I can do and one of which I can't. I could have some misunderstandings about notations here as well, so anything people can point out will be helpful. a) Let [imath]k[/imath] be an infinite field and [imath]f\in k[X_1,...,X_n][/imath] be nonconstant. Prove [imath]V(f)=\{P\in \mathbb{A}^n_k\mid f(P)=0\}\neq \mathbb{A}^n_k.[/imath] [imath]\mathbb{A}^n_k[/imath] here is the point set [imath]k^n[/imath]. My proof (which follows from a hint of his) is: write [imath]f=\sum a_i(X_1,...,X_{n-1})X_n^i[/imath]. Then [imath]V(f)[/imath] consists of two kinds of points [imath]P=(p_1,...,p_{n-1},p_n=0)[/imath] [imath]P=(p_1,...,p_{n-1},p_n\neq 0 )[/imath] with [imath]\sum a_i(p_1,...,p_{n-1})p_{n}^i=0[/imath] In the first case we have [imath]V(f)\neq \mathbb{A}^n_k[/imath]. Assume we have [imath]P[/imath] like in the second case and now write [imath]f=\sum b_i(X_1,...,\hat{X}_{n-1},X_n)X_{n-1}^i[/imath]. Here again, either [imath]P=(p_1,...,p_{n-1}=0,p_n\neq 0)[/imath] and [imath]V(f)\neq \mathbb{A}^n_k[/imath] or [imath]p_{n-1}\neq 0[/imath], and by induction we should get to [imath]V(f)\neq \mathbb{A}^n_k[/imath]. b) Now let [imath]k[/imath] be algebraically closed. Let [imath]a_m(X_1,...,X_{n-1})X_n^m[/imath] be the leading term of [imath]f[/imath]. Show that when [imath]a_m\neq 0[/imath], there is a finite set of points of [imath]V(f)[/imath] corresponding to every value [imath](X_1,...,X_{n-1})[/imath]. Thus, [imath]V(f)[/imath] is infinite for [imath]n \geq 2[/imath]. So here I basically have no idea what to do. A given value of [imath](X_1,...,X_{n-1})[/imath] corresponds to points in [imath]V(f)[/imath] [imath]P=(p_1,...,p_{n-1},p_n)[/imath] where [imath]f(P)=0[/imath], but I don't know anything about the value of [imath]p_n[/imath]; if it vanishes then clearly there is a correspondence [imath](X_1,...,X_n)\to (p_1,...,p_{n-1},0)[/imath] but I guess I need to show there are a finite number of [imath]p_n[/imath] where this is true and I don't know how.
1946407
Computing the limit with a radical: [imath] \lim_{x\to\infty}\left(\sqrt{x^2+4x}-x\right) [/imath] I've been struggling with the below problem (I'm a Calc I student and don't know how to approach this using the skills I've developed so far): [imath] \lim _{x\to \infty }\left(\sqrt{x^2+4x}-x\right) [/imath] I've tried rewriting using the conjugate: [imath] \dfrac{4x}{\sqrt{x^2+4x}+x} [/imath] But I'm not sure how to proceed. Writing the part of the denominator that is under the radical as a power of [imath] \frac{1}{2} [/imath] is the only thing I can think of, but what emerges is really messy algebraically and I still don't know how to find the limit from there.
569545
Difficult limit evaluation: [imath]\lim_{x\to\infty}(\sqrt{x^2+4x} - x)[/imath] I'm struggling to find the solution of the following: [imath]\lim_{x\rightarrow\infty}(\sqrt{x^2+4x} - x)[/imath] I come to the answer of [imath]0[/imath]. The book has an answer of [imath]4/1[/imath]. The book explains a part of the question briefly. Looking at the brief answer information given I then come to an answer of [imath]2[/imath]... What is the answer?
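Not in either question: the limit is [imath]2[/imath] (the book's "[imath]4/1[/imath]" is presumably a typo for [imath]4/(1+1)[/imath]). After the conjugate step already found above, divide numerator and denominator by [imath]x[/imath]: [imath]\frac{4x}{\sqrt{x^2+4x}+x}=\frac{4}{\sqrt{1+4/x}+1}\to\frac{4}{1+1}=2[/imath]. A quick numeric confirmation:

```python
import math

# sqrt(x^2 + 4x) - x = 4x / (sqrt(x^2 + 4x) + x) = 4 / (sqrt(1 + 4/x) + 1),
# and the last expression tends to 4 / (1 + 1) = 2 as x -> infinity.

def f(x):
    return math.sqrt(x * x + 4 * x) - x

def simplified(x):
    return 4 / (math.sqrt(1 + 4 / x) + 1)
```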
1171216
Suppose there exist infinite subsets [imath]X_1, . . , X_n[/imath] of F such that f([imath]x_1, . . , x_n[/imath]) = 0 for all ([imath]x_1, .. , x_n[/imath]) ∈ [imath]X_1 × · · · × X_n[/imath]. Let f([imath]t_1, . . . , t_n[/imath]) be a polynomial over a field F. Suppose there exist infinite subsets [imath]X_1, . . . , X_n[/imath] of F such that f([imath]x_1, . . . , x_n[/imath]) = 0 for all ([imath]x_1, . . . , x_n[/imath]) ∈ [imath]X_1 × · · · × X_n[/imath]. Prove that f is the zero polynomial. The related material is integral extensions, so I want to incorporate that area of maths to a solution.
580387
A polynomial is zero if it zero on infinite subsets Let [imath]f(t_1, ... , t_n)[/imath] be a polynomial over a field [imath]F[/imath]. Suppose there exist infinite subsets [imath]X_1, ... , X_n[/imath] of [imath]F[/imath] such that [imath]f(x_1, ... , x_n) = 0[/imath] for all [imath](x_1, ... , x_n) ∈ X_1× \cdots ×X_n[/imath]. Prove that [imath]f[/imath] is the zero polynomial. Not sure how to start on this one!
1947880
Natural proof of identity for [imath]\text{Li}_2(x)+\text{Li}_2(1-x)[/imath] What's a natural way to compute [imath]\text{Li}_2(x)+\text{Li}_2(1-x)[/imath] in closed form? Once you know the answer [imath]\text{Li}_2(x)+\text{Li}_2(1-x)=\frac{\pi^2}{6}-\log(x)\log(1-x)[/imath], computing the derivative of the function [imath]x\mapsto \text{Li}_2(x)+\text{Li}_2(1-x) + \log(x)\log(1-x)[/imath] gives a fast, out-of-the-blue proof. I'd like someone to show a computation of [imath]\text{Li}_2(x)+\text{Li}_2(1-x)[/imath] with no prior knowledge of the answer. I'm unable to do that with the integral formula for [imath]\text{Li}_2[/imath].
1056052
Proof of dilogarithm reflection formula [imath]\zeta(2)-\log(x)\log(1-x)=\operatorname{Li}_2(x)+\operatorname{Li}_2(1-x)[/imath] How to prove [imath]\zeta(2)-\log(x)\log(1-x)=\operatorname{Li}_2(x)+\operatorname{Li}_2(1-x)[/imath] I haven't started; any hints?
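A numerical cross-check, not part of either question: [imath]\operatorname{Li}_2(x)=\sum_{k\ge 1}x^k/k^2[/imath] for [imath]|x|\le 1[/imath], so the reflection formula can be tested directly on a few points of [imath](0,1)[/imath]; the only error source is series truncation. Function names are mine.

```python
import math

# Direct numerical check of
#   Li_2(x) + Li_2(1-x) = pi^2/6 - log(x) log(1-x)   for x in (0, 1).

def li2(x, terms=400):
    return sum(x ** k / k ** 2 for k in range(1, terms + 1))

def reflection_gap(x):
    lhs = li2(x) + li2(1 - x)
    rhs = math.pi ** 2 / 6 - math.log(x) * math.log(1 - x)
    return abs(lhs - rhs)
```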
856109
Evans PDE (2nd edition) Problem 5.11: If [imath]Du=0[/imath] a.e., does [imath]u=c[/imath] a.e.? Let [imath]W^{1,p}(U)[/imath] be the Sobolev space, where [imath]U[/imath] is a connected bounded domain in [imath]\mathbb{R}^n[/imath] and [imath]u\in W^{1,p}(U)[/imath] satisfies [imath]Du=0[/imath] a.e. in [imath]U[/imath]. Then [imath]u[/imath] is constant a.e. in [imath]U[/imath]. I don't know how to prove this. In particular, I don't know how to use "connected". Please guide me.
348593
Evans PDE Chapter 5 Problem 11: Does [imath]Du=0[/imath] a.e. imply [imath]u=c[/imath] a.e.? Let [imath]W^{1,p}(U)[/imath] be the Sobolev space. Suppose that [imath]U[/imath] is a connected bounded domain in [imath]\mathbb{R}^n[/imath] and [imath]u \in W^{1,p}(U)[/imath] satisfies [imath]Du=0[/imath] a.e. in [imath]U[/imath]. How can I prove that [imath]u[/imath] is constant a.e. in [imath]U[/imath]?
1947421
Proving the continued fraction expansion of [imath]\sqrt2[/imath] I am working on a multi-part problem, which shows that [imath]\sqrt{2} = 1+\frac{1}{2+\frac{1}{2+...}}[/imath] I have shown that given some rational approximation [imath]\frac{m}{n}[/imath] to [imath]\sqrt2[/imath], we can take [imath]\frac{m'}{n'} = \frac{m+2n}{m+n}[/imath] and get a better approximation. I also know that this iteration goes back and forth between being larger and smaller than [imath]\sqrt2[/imath]. Say if we start with [imath]\frac{m}{n}<\sqrt2[/imath], then iterating will give us [imath]\frac{m}{n}<\frac{m''}{n''} = \frac{3m+4n}{2m+3n}<\sqrt2[/imath] Finally, I am asked to prove that the sequence obtained by iterating in this fashion, starting with [imath]1,\frac{3}{2},\frac{7}{5},...[/imath] which I have shown is given recursively by [imath]q_1=1, q_{n+1} = 1+\frac{1}{1+q_n} [/imath] converges to [imath]\sqrt2[/imath]. My professor's "hint" is to consider the "odd" and "even" subsequences, and show that they both converge to the same limit. This makes sense to me, since both sequences are clearly monotone and bounded. Most (all?) of my classmates simply wrote that because they are both monotone and bounded by [imath]\sqrt2[/imath], they must both converge to [imath]\sqrt2[/imath]. However, just because they are bounded above and below, respectively, by [imath]\sqrt2[/imath], it does not necessarily mean that [imath]\sqrt2[/imath] is the limit for both (or either!) of them. Some classmates have tried to prove by contradiction that if there were some other limit for one of the subsequences, we could iterate again and get closer to [imath]\sqrt2[/imath]. I think that this is a faulty argument, because it assumes that we reach the limit. Remember: the sequence is "fixed" as soon as we choose to start it at [imath]q_1 = 1[/imath]! My counterargument towards most of my classmates has been the sequence given by [imath]\frac{1}{n}[/imath]. The sequence is bounded below by [imath]-99[/imath], and no matter how many times we "iterate" (in this case, just taking the next value of [imath]n[/imath]) we can always get closer and closer to [imath]-99[/imath], but that doesn't mean that [imath]-99[/imath] is the limit. If someone could offer me a resolution to this problem I would be hugely appreciative. It's been driving me crazy for the past week. EDIT: I realize this post has been marked as a duplicate; however, I think it should be left up, as the answer given is a bit different from that in the "original".
14617
Proving the continued fraction representation of [imath]\sqrt{2}[/imath] There's a question in Spivak's Calculus (I don't happen to have the question number in front of me; in the 2nd Edition, it's Chapter 21, Problem 7) that develops the concept of continued fraction, specifically the continued fraction representation of [imath]\sqrt{2}[/imath]. Having only recently become aware of continued fractions, I'm trying to work my way through this problem, but I'm stuck at the very last stage. Here's what I've got so far: Let [imath]\{a_n\}[/imath] be the sequence defined recursively as [imath]a_1 = 1, a_{n + 1} = 1 + \frac{1}{1 + a_n}[/imath] Consider the two subsequences [imath]\{a_{2n}\}[/imath] and [imath]\{a_{2n - 1}\}[/imath]. I've already shown that [imath]\{a_{2n}\}[/imath] is monotonic, strictly decreasing, and bounded below by [imath]\sqrt{2}[/imath], and similarly I've shown that [imath]\{a_{2n - 1}\}[/imath] is monotonic, strictly increasing, and bounded above by [imath]\sqrt{2}[/imath]. Obviously, both of these subsequences converge. Although of course in general, if two subsequences of a sequence happen to converge to the same value, that doesn't guarantee that the sequence itself converges at all (much less to that same value), in the case where the subsequences are [imath]\{a_{2n}\}[/imath] and [imath]\{a_{2n - 1}\}[/imath], it's easy to show that if they both converge to the same value, then so will [imath]\{a_n\}[/imath] (since every term of [imath]\{a_n\}[/imath] is a term of one of the two subsequences). So no problem there. In other words, it remains only to show that not only do the subsequences converge, but they converge to [imath]\sqrt{2}[/imath] in particular. Take, for starters, [imath]\{a_{2n - 1}\}[/imath] (if I can get [imath]\{a_{2n - 1}\}[/imath] to converge to [imath]\sqrt{2}[/imath], I'm sure getting [imath]\{a_{2n}\}[/imath] to converge to [imath]\sqrt{2}[/imath] won't be very different). Because it's strictly increasing and bounded above by [imath]\sqrt{2}[/imath], it converges to some number [imath]x \leq \sqrt{2}[/imath]. Suppose that [imath]x < \sqrt{2}[/imath]. We want to show this doesn't happen. But this is where I'm getting stuck. I feel like I want to take [imath]\epsilon = \sqrt{2} - x[/imath] and show that there exists some [imath]N[/imath] such that [imath]a_{2N - 1} > x[/imath], which would finish the problem due to the monotonicity of [imath]\{a_{2n - 1}\}[/imath]. But this isn't working. Any hints? Thanks a ton.
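A note on the actual resolution, not contained in either question: monotone-and-bounded only gives some limit [imath]L[/imath], but [imath]L[/imath] must satisfy the fixed-point equation. Letting [imath]n\to\infty[/imath] in [imath]q_{n+1}=1+\frac{1}{1+q_n}[/imath] (legitimate since the map is continuous and [imath]q_n\ge 1[/imath]) gives [imath]L=1+\frac{1}{1+L}[/imath], i.e. [imath]L^2=2[/imath], so [imath]L=\sqrt2[/imath]; this is exactly the step that fails for [imath]1/n[/imath] and the bound [imath]-99[/imath]. Quantitatively, [imath]g(x)=1+\frac{1}{1+x}[/imath] satisfies [imath]g(x)-\sqrt2=\frac{\sqrt2-x}{(1+x)(1+\sqrt2)}[/imath], a contraction for [imath]x\ge 1[/imath]. A numeric sketch:

```python
import math

# g fixes sqrt(2): x = 1 + 1/(1+x)  <=>  x^2 = 2 (for x > 0), and
# g(x) - sqrt(2) = (sqrt(2) - x) / ((1+x)(1+sqrt(2))), so for x >= 1
# each step shrinks the error by a factor of at least 1/(2(1+sqrt(2))).

def g(x):
    return 1 + 1 / (1 + x)

q = 1.0
errors = []
for _ in range(20):
    errors.append(abs(q - math.sqrt(2)))
    q = g(q)
```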
1948431
Sum of all products of [imath]k[/imath] members of the set [imath]\{1, 2, 3, \cdots, n\}[/imath] How can I get a closed form for the sum of all products of [imath]k[/imath] distinct members of the set [imath]\{1, 2, 3, \cdots, n\}[/imath]? I can see that for [imath]k = 2[/imath], the answer is [imath]\sum\limits_{i = 1}^{n-1}\sum\limits_{j = i+1}^nij[/imath] and this can be solved for an explicit formula. Is there a way to get this for arbitrary [imath]k[/imath]?
1928394
Closed formula for the sums [imath]\sum\limits_{1 \le i_1 < i_2 < \dots < i_k \le n} i_1 i_2 \cdots i_k [/imath]? I've worked out a few summation formulas, and I am hoping to find a pattern. Unless I have made a mistake somewhere, we have the following identities: [imath]\sum_{1 \le i \le n} i = \frac{(n+1)n}{2} [/imath] [imath]\sum_{1 \le i < j \le n} ij = \frac{(3n+2)(n+1) \, n \, ( n-1)}{24}[/imath] [imath]\sum_{1 \le i < j < k \le n} ijk = \frac{(n+1)^2 \, n^2 \, (n-1)(n-2) }{48}[/imath] It seems clear that [imath]p_k(n)= \sum_{1 \le i_1 < i_2 < \dots < i_k \le n} i_1 i_2 \cdots i_k [/imath] is a polynomial in [imath]n[/imath] of degree [imath]2k[/imath]. But is there a nice closed-form formula for it? Maybe in terms of its factorization? Comment: I realize that what I am looking for is the coefficient of [imath]t^k[/imath] in the expansion [imath](1+t)(1+2t) \dots (1+nt),[/imath] so maybe generating function techniques could be helpful. But the main way I know to extract the coefficient of [imath]t^k[/imath] is to take derivatives [imath]k-1[/imath] times and on the surface of things that looks like a huge mess.
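A computational cross-check, not in either question: [imath]p_k(n)[/imath] is the elementary symmetric polynomial [imath]e_k(1,\dots,n)[/imath], the coefficient of [imath]t^k[/imath] in [imath]\prod_{i=1}^n(1+it)[/imath] (equivalently, an unsigned Stirling number of the first kind). Building the product one factor at a time extracts all coefficients at once, with no differentiation, and lets one verify the small closed forms quoted above. Helper names are mine.

```python
# e_k(1..n) = coefficient of t^k in (1+t)(1+2t)...(1+nt)

def elem_sym(n, k):
    coeffs = [1]                       # polynomial 1, as coefficient list
    for i in range(1, n + 1):
        # multiply by (1 + i t): new[j] = old[j] + i * old[j-1]
        coeffs = [c + i * prev
                  for c, prev in zip(coeffs + [0], [0] + coeffs)]
    return coeffs[k] if k < len(coeffs) else 0

def p2(n):  # closed form quoted for k = 2
    return (3 * n + 2) * (n + 1) * n * (n - 1) // 24

def p3(n):  # closed form quoted for k = 3
    return (n + 1) ** 2 * n ** 2 * (n - 1) * (n - 2) // 48
```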
1947697
How to evaluate the integral [imath]\int_0^{\infty} e^{-\frac{1}{2}\left(x^2+ \frac{c}{x^2}\right)}dx[/imath]? I am wondering how one would calculate the integral: [imath] \int_0^{\infty} e^{-\frac{1}{2}\left(x^2+ \frac{c}{x^2}\right)}dx [/imath] where [imath]c[/imath] is a constant. I have tried to reparametrize by letting [imath]u = x^2[/imath] to get: [imath] \int_0^{\infty} \frac{1}{2\sqrt{u}} e^{-\frac{1}{2}\left(u+\frac{c}{u}\right)}du [/imath] and then trying to use integration by parts. However, I am getting nowhere with that approach.
1909452
Show that [imath]e^{-\beta} = \frac{1}{\sqrt{\pi}} \int_{0}^{\infty} \frac{e^{-u}}{\sqrt{u}} e^{-\beta^2 / 4u} du[/imath]. Show that [imath]e^{-\beta} = \frac{1}{\sqrt{\pi}} \int_{0}^{\infty} \frac{e^{-u}}{\sqrt{u}} e^{-\beta^2 / 4u} du[/imath]. I'm not really sure of how I should proceed to show this, and it's pretty un-intuitive as well. I've managed to manipulate the RHS into [imath]\frac{e^{4\beta}}{\sqrt{\pi}} \int_0^\infty \frac{1}{\sqrt{u}} e^{-(\beta+2u)^2/4u} du[/imath] which now vaguely resembles a Gaussian. However, substituting [imath]v = \beta + 2u[/imath] in in order to to get something like [imath]e^{-x^2}[/imath] doesn't particularly help. I got [imath] \frac{e^{4\beta}}{\sqrt{2\pi}} \int_0^\infty \frac{1}{\sqrt{v - \beta}} e^{-v^2/2(v-\beta)} dv. [/imath] I'm really unsure of how I should proceed; any help would be appreciated.
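Beyond what either question states, the exponent can be completed: [imath]\frac{x^2}{2}+\frac{c}{2x^2}=\frac12\left(x-\frac{\sqrt c}{x}\right)^2+\sqrt c[/imath], and averaging the integral with its image under the substitution [imath]x\mapsto \sqrt c/x[/imath] produces the missing factor [imath]1+\sqrt c/x^2=dt/dx[/imath] for [imath]t=x-\sqrt c/x[/imath], reducing everything to a plain Gaussian. This gives [imath]\int_0^\infty e^{-\frac12(x^2+c/x^2)}dx=\sqrt{\pi/2}\,e^{-\sqrt c}[/imath], which is the same identity as in the second question after rescaling. A midpoint-rule check of that closed form (step counts and names are my own):

```python
import math

# Numeric check of  int_0^inf exp(-(x^2 + c/x^2)/2) dx = sqrt(pi/2) e^{-sqrt(c)}.
# The integrand is negligible beyond x = 10, so a midpoint rule on (0, 10] suffices.

def integral(c, upper=10.0, steps=100_000):
    h = upper / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += math.exp(-0.5 * (x * x + c / (x * x)))
    return total * h

def closed_form(c):
    return math.sqrt(math.pi / 2) * math.exp(-math.sqrt(c))
```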
1948487
Grassmannians [imath]G(k, n)[/imath] and [imath]G(n-k, n)[/imath] are diffeomorphic Let [imath]f:G(k, n)\to G(n-k, n)[/imath] be the map taking a [imath]k[/imath]-plane [imath]V[/imath] (i.e., a [imath]k[/imath]-dimensional subspace of [imath]\mathbb{R}^n[/imath]) into its orthogonal complement [imath]V^{\perp}[/imath]. Show that [imath]f[/imath] is a diffeomorphism. My question is whether or not the following strategy can work: Since [imath]f[/imath] is obviously a bijection, it is enough to prove that [imath]f[/imath] is a local diffeomorphism, so the plan is to prove that the linear map [imath]f_{*_{p}}[/imath] has full rank [imath]k(n-k)[/imath] for every [imath]p\in G(k, n)[/imath]. My difficulty is: I'm having trouble to calculate [imath]f_{*_{p}}(v)[/imath] for an arbitrary [imath]v \in T_p(G(k, n))[/imath] because I don't know how to concretely represent such a [imath]v[/imath]. Is this strategy doable? Thanks!
1014897
Diffeomorphism between the Grassmannian manifolds [imath]\mathbf{Gr}(n,k)[/imath] and [imath]\mathbf{Gr}(n,n-k)[/imath]. This seems to be a common exercise question, however I am having trouble with it. The hint is to use a map that associates the k-plane to its orthogonal complement. But I have not been able to show this is a diffeomorphism. I was thinking of using the orthonormal basis and extending it to get a basis of [imath]\mathbb{R}^n[/imath], then using the remaining [imath]n-k[/imath] vectors to generate the orthogonal complement. Am I on the right path? How else should I approach this otherwise?
1948584
Problem involving many squares and several variables. Let [imath]a, b, c, d, e, f[/imath] be nonnegative real numbers such that [imath]a^2 + b^2 + c^2 + d^2 + e^2 + f^2 = 6[/imath] and [imath]ab + cd + ef = 3[/imath]. What is the maximum value of [imath]a+b+c+d+e+f[/imath]? I was thinking about somehow manipulating the numbers to get [imath]\left(a + b + c + d + e + f\right)^{2}[/imath], but I don't really know what to do.
2289400
Inequality Question-Maximum Problem: Let [imath]a, b, c, d, e, f[/imath] be nonnegative real numbers such that [imath]a^2 + b^2 + c^2 + d^2 + e^2 + f^2 = 6[/imath] and [imath]ab + cd + ef = 3[/imath]. What is the maximum value of [imath]a+b+c+d+e+f[/imath]? How would I do this? Would we need to use Cauchy-Schwarz or any of those types of inequalities? Edit: My question is different from the possible duplicate because the answers on that question are based on Lagrangian multipliers and mine is based on Cauchy-Schwarz
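A sketch of the Cauchy–Schwarz route asked about: grouping the variables in pairs gives [imath](a+b)^2+(c+d)^2+(e+f)^2 = 6 + 2\cdot 3 = 12[/imath], and then Cauchy–Schwarz gives [imath]a+b+\cdots+f \le \sqrt{3\cdot 12}=6[/imath], attained at [imath]a=\cdots=f=1[/imath]. A quick numeric check of both steps on random data (not a proof, just a sanity check):

```python
import random

rng = random.Random(0)
for _ in range(1000):
    v = [rng.uniform(0.0, 2.0) for _ in range(6)]
    a, b, c, d, e, f = v
    pairs = [a + b, c + d, e + f]
    # step 1 identity: sum of pair-squares = (sum of squares) + 2(ab + cd + ef)
    lhs = sum(p * p for p in pairs)
    rhs = sum(x * x for x in v) + 2 * (a * b + c * d + e * f)
    assert abs(lhs - rhs) < 1e-9
    # step 2 is Cauchy-Schwarz: (p1 + p2 + p3)^2 <= 3 (p1^2 + p2^2 + p3^2)
    assert sum(pairs) ** 2 <= 3 * lhs + 1e-9

# the bound 6 is attained at a = b = ... = f = 1, which satisfies both constraints
v = [1.0] * 6
assert sum(x * x for x in v) == 6.0
assert v[0] * v[1] + v[2] * v[3] + v[4] * v[5] == 3.0
print("maximum value:", sum(v))
```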
1948404
Does [imath]\lim_{n\to\infty}\frac{\eta_n}{n}[/imath] exist for a Poisson distribution? How can I find out whether [imath]\lim_{n\to\infty}\frac{\eta_n}{n}[/imath] exists, where [imath]\eta_n[/imath] has a Poisson distribution with [imath]\lambda = n[/imath]?
1832338
Find the almost sure limit of [imath]X_n/n[/imath], where each random variable [imath]X_n[/imath] has a Poisson distribution with parameter [imath]n[/imath] [imath]X_{n}[/imath] independent and [imath]X_n \sim \mathcal{P}(n) [/imath] meaning that [imath]X_{n}[/imath] has a Poisson distribution with parameter [imath]n[/imath]. What is [imath]\lim\limits_{n\to \infty} \frac{X_{n}}{n}[/imath] almost surely? I think we can write [imath]X(n) \sim X(1)+X(1)+\cdots+X(1)[/imath] where the sum is taken over independent identically distributed copies, and then use the law of large numbers. But I am not sure whether this is correct or not. Can anyone give me some hints? Thank you in advance!
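The law-of-large-numbers argument in the post can be illustrated by simulation: a Poisson([imath]n[/imath]) variable is the sum of [imath]n[/imath] i.i.d. Poisson(1) variables, so [imath]X_n/n \to 1[/imath] almost surely. A sketch using Knuth's product-of-uniforms sampler (`poisson1` is my own helper, since the stdlib has no Poisson generator):

```python
import math
import random

def poisson1(rng):
    """Sample Poisson(1) via Knuth's method: count how many uniforms it takes
    for their running product to drop below e^{-1}."""
    threshold = math.exp(-1.0)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p < threshold:
            return k
        k += 1

rng = random.Random(42)
n = 20_000
x_n = sum(poisson1(rng) for _ in range(n))   # Poisson(n) as a sum of n Poisson(1)
ratio = x_n / n
print(ratio)   # close to 1; the sample mean has standard deviation 1/sqrt(n)
```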
1949517
Formula for number of 6-letter words from the English alphabet if there is repetition and ordering Is there a formula or method to find the number of 6-letter words from the English alphabet if repetition is allowed, but the letters must appear in alphabetical order? Repetition is unlimited; the ordering requirement is what I can't handle, and I'm confused as to how to combine the formula for repetition with the ordering constraint. I know the number of six-letter words that do not have to appear in alphabetical order would be [imath]26^6[/imath]. But when I try to count only those words that are alphabetically ordered, I get stuck.
945771
What is the total number of ordered permutations of a list of elements (possibly repeated)? This question is a part of a TopCoder problem. I am learning algorithms, and just got stuck at this (not homework). Suppose we have a set [imath]A[/imath] of integer elements, such that [imath]n(A) = a[/imath] (the number of elements in [imath]A[/imath] is [imath]a[/imath]). Now we have some [imath]n[/imath] blanks, each of which can be filled using the [imath]a[/imath] elements in A, repetition allowed. I want to find out the total number of such sequences of length [imath]n[/imath] which are in non-decreasing order, and can be formed using the elements in [imath]A[/imath]. My thoughts reduce this problem to a dynamic programming approach, but I just wanted to find out if a straightforward formula is possible or not. Thanks.
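Both questions in this pair reduce to counting multisets (stars and bars): a nondecreasing sequence of length [imath]n[/imath] over [imath]a[/imath] symbols is determined by how many times each symbol appears, giving [imath]\binom{a+n-1}{n}[/imath]. For 6-letter words in alphabetical order that is [imath]\binom{31}{6}=736281[/imath]. A short check (helper name mine):

```python
import math
from itertools import product

def nondecreasing_count(alphabet, length):
    """Number of nondecreasing sequences = number of multisets of size
    `length` drawn from `alphabet` symbols (stars and bars)."""
    return math.comb(alphabet + length - 1, length)

# brute-force confirmation on a small case: length-3 words over 4 symbols
brute = sum(1 for w in product(range(4), repeat=3) if list(w) == sorted(w))
assert brute == nondecreasing_count(4, 3) == 20

print(nondecreasing_count(26, 6))   # 736281 six-letter words in alphabetical order
```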
1926555
If [imath]f[/imath] is thrice differentiable on [imath][−1,1][/imath] such that [imath]f(−1)=0,f(1)=1[/imath] and [imath]f'(0)=0[/imath], then [imath]f'''(c)\ge3[/imath] for some [imath]c\in(−1,1)[/imath]. [imath]f[/imath] is a three times differentiable function on [imath][−1,1][/imath] such that [imath]f(−1)=0, f(1)=1[/imath] and [imath]f'(0)=0[/imath]. Using Taylor's theorem show that [imath]f'''(c)\ge3[/imath] for some [imath]c\in(−1,1)[/imath]. How can I proceed with this question? I applied Taylor's theorem and got to [imath]1=2f'(-1) + 2f''(-1) + (4/3)f'''(c)[/imath] but I am not able to proceed further. Thanks.
889903
show that [imath]f^{(3)}(c) \ge 3[/imath] for [imath]c\in(-1,1)[/imath] Let [imath]f:I\rightarrow \Bbb{R}[/imath], differentiable three times on the open interval [imath]I[/imath] which contains [imath][-1,1][/imath]. Also: [imath]f(0) = f(-1) = f'(0) = 0[/imath] and [imath]f(1)=1[/imath]. Show that there's a point [imath]c \in (-1, 1)[/imath] such that [imath]f^{(3)}(c) \ge 3[/imath] I'd be glad to get some guidance here on how to start.
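A sketch of the Taylor-theorem step both posts are stuck on (third-order expansions about [imath]0[/imath] with Lagrange remainders; my own write-up, not from the posts):

```latex
f(1)  = f(0) + f'(0) + \tfrac12 f''(0) + \tfrac16 f'''(c_1), \qquad c_1 \in (0,1),
\\
f(-1) = f(0) - f'(0) + \tfrac12 f''(0) - \tfrac16 f'''(c_2), \qquad c_2 \in (-1,0).
```

Subtracting, [imath]f(1)-f(-1) = 2f'(0) + \tfrac16\big(f'''(c_1)+f'''(c_2)\big)[/imath]; with [imath]f'(0)=0[/imath] and [imath]f(1)-f(-1)=1[/imath] this gives [imath]f'''(c_1)+f'''(c_2)=6[/imath], so at least one of [imath]f'''(c_1), f'''(c_2)[/imath] is [imath]\ge 3[/imath].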
1948465
Existential quantifier distribution over implication [imath]\Big(\exists x \in X, (p(x) \rightarrow q(x))\Big) \iff \Big((\exists x \in X, p(x)) \rightarrow (\exists x \in X, q(x))\Big)[/imath] How do I prove this is wrong using examples (not laws)?
1943815
Does the existential quantifier distribute over an implication? Does [imath]\exists[/imath] distribute over an implication? ie. Is [imath]\exists x \in \mathbb{R}, (p(x) \rightarrow q(x))[/imath] logically equivalent to [imath](\exists x \in \mathbb{R}, p(x)) \rightarrow (\exists x \in \mathbb{R}, q(x))[/imath]. If so, can you give an example of [imath]p(x)[/imath] and [imath]q(x)[/imath] to demonstrate? Thanks very much for any help in advance.
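A brute-force check over a two-element domain settles both questions: the right-to-left implication holds on any nonempty domain, but left-to-right fails, e.g. with [imath]p(0)[/imath] true, [imath]p(1)[/imath] false, and [imath]q[/imath] identically false. A sketch:

```python
from itertools import product

domain = [0, 1]

counterexamples = []
for p_vals in product([False, True], repeat=len(domain)):
    for q_vals in product([False, True], repeat=len(domain)):
        p = dict(zip(domain, p_vals))
        q = dict(zip(domain, q_vals))
        lhs = any((not p[x]) or q[x] for x in domain)   # exists x . p(x) -> q(x)
        rhs = (not any(p.values())) or any(q.values())  # (exists p) -> (exists q)
        assert (not rhs) or lhs   # rhs implies lhs: holds for every assignment
        if lhs and not rhs:       # but lhs does NOT imply rhs
            counterexamples.append((p_vals, q_vals))

# p = (True, False), q = (False, False) breaks the left-to-right direction
assert ((True, False), (False, False)) in counterexamples
print(counterexamples)
```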
1945199
Fourier Series of $e^x$ I am trying to integrate [imath]a_n= \frac{1}{L}\int e^x\sin\left(\frac{n\pi x}{h}\right)dx[/imath] and get the solution in sinh form. I have gotten the long answer, but cannot figure out how to turn it into sinh. Can someone help me with the steps?
1178622
Fourier Series Representation [imath]e^{ax}[/imath] a) Compute the full Fourier series representation of [imath]f(x) = e^{ax}, −π ≤ x < π.[/imath] b) By using the result of a) or otherwise determine the full Fourier series expansion for the function [imath]g(x)=\sinh(x), −π ≤ x < π.[/imath] For part a) this is what I did and I'm pretty sure it's all correct. (Please let me know if it isn't!) [imath]\begin{align} a_0\ &=\frac{1}{π}\int_{-π}^{π} f(x)\cos(0x)\;dx\\ &=\frac{1}{π}\int_{-π}^{π} e^{ax}\;dx=\frac{1}{π}\left[\frac{e^{ax}}{a}\right]^π_{-π}=\frac{e^{aπ}-e^{-aπ}}{aπ}\\ a_m\ &=\frac{1}{π}\int_{-π}^{π} e^{ax}\cos(mx)\;dx\\ &=\frac{1}{π}\left[\frac{e^{ax}\sin(mx)}{m}\right]^π_{-π}-\frac{1}{π}\int_{-π}^{π} \frac{ae^{ax}\sin(mx)}{m} dx\\ &=\frac{1}{π}\left[\frac{e^{ax}\sin(mx)}{m}\right]^π_{-π}+\frac{1}{π}\left[\frac{ae^{ax}\cos(mx)}{m^2}\right]^π_{-π}-\frac{1}{π}\int_{-π}^{π} \frac{a^2e^{ax}\cos(mx)}{m^2} dx\\ &=\frac{1}{π}\left[\frac{e^{ax}\sin(mx)}{m}+\frac{ae^{ax}\cos(mx)}{m^2}\right]^π_{-π}-\frac{a^2}{m^2}a_m\\ a_m\left(\frac{m^2+a^2}{m^2}\right)&=\frac{ae^{aπ}(-1)^m-e^{-aπ}(-1)^m}{πm^2}\\ a_m&=\frac{(-1)^m\left(ae^{aπ}-e^{-aπ}\right)}{π\left(m^2+a^2\right)} \end{align}[/imath] Then [imath]\begin{align} b_m&=\frac{1}{π}\int_{-π}^{π} e^{ax}sin(mx)\;dx\\ &=\frac{1}{π}\left[\frac{-e^{ax}\cos(mx)}{m}\right]^π_{-π}+\frac{1}{π}\int_{-π}^{π} \frac{ae^{ax}\cos(mx)}{m}\;dx\\ &=\frac{1}{π}\left[\frac{-e^{ax}\cos(mx)}{m}\right]^π_{-π}+\frac{1}{π}\left[\frac{ae^{ax}\sin(mx)}{m^2}\right]^π_{-π}-\frac{1}{π}\int_{-π}^{π} \frac{a^2e^{ax}\sin(mx)}{m^2}\;dx\\ &=\frac{1}{π}\left[\frac{-e^{ax}\cos(mx)}{m}+\frac{ae^{ax}\sin(mx)}{m^2}\right]^π_{-π}-\frac{a^2}{m^2}b_m\\ b_m\left(\frac{m^2+a^2}{m^2}\right)&=\frac{(-1)^{m+1}\left(e^{aπ}-e^{-aπ}\right)}{πm}\\ b_m&=\frac{m(-1)^{m+1}\left(e^{aπ}-e^{-aπ}\right)}{π\left(m^2+a^2\right)} \end{align}[/imath] so [imath]\begin{align} f(x)&=\frac{1}{2}a_0+\sum_{n=1}^{\infty}a_n\cos(nx)+b_n\sin(nx)\\ 
e^{ax}&=\frac{e^{aπ}-e^{-aπ}}{π}\left[\frac{1}{2a}+\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2+a^2}(\cos(nx)-n\sin(nx))\right]\\ e^{ax}&=\frac{2}{π}\sinh(aπ)\left[\frac{1}{2a}+\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2+a^2}(\cos(nx)-n\sin(nx))\right] \end{align}[/imath] but for part b) I wasn't sure of how to get the answer from part a) so instead I did this. I am also unsure if this is right. How do you do part b) using part a) ? [imath]\begin{align} g(x)&=\sinh(x)\\ a_0\ &=\frac{1}{π}\int_{-π}^{π}\sinh(x)\cos(0x)\;dx\\ &=\frac{1}{π}\left[\cosh(x)\right]^π_{-π}=0\\ a_m\ &=\frac{1}{π}\int_{-π}^{π}\sinh(x)\cos(mx)\;dx\\ &=\frac{1}{π}\left[\frac{\sinh(x)\sin(mx)}{m}\right]^π_{-π}-\frac{1}{π}\int_{-π}^{π}\frac{\cosh(x)\sin(mx)}{m}\;dx\\ &=\frac{1}{π}\left[\frac{sinh(x)\sin(mx)}{m}\right]^π_{-π}+\frac{1}{π}\left[\frac{\cosh(x)\cos(mx)}{m^2}\right]^π_{-π}-\frac{1}{π}\int_{-π}^{π} \frac{\sinh(x)\cos(mx)}{m^2}\\ a_m\left(\frac{m^2+1}{m^2}\right)&=\frac{1}{π}\left[\frac{\sinh(x)sin(mx)}{m}+\frac{\cosh(x)\cos(mx)}{m^2}\right]^π_{-π}=0 \end{align}[/imath] Then [imath]\begin{align} b_m&=\frac{1}{π}\int_{-π}^{π}\sinh(x)\sin(mx)\;dx\\ b_m\left(\frac{m^2+a^2}{m^2}\right)&=\left[\frac{-\sinh(x)(-1)^{m}}{πm}\right]^π_{-π}\\ b_m&=\frac{-2m\sinh(π)(-1)^m}{π(m^2+1)} \end{align}[/imath] so [imath]\begin{align} g(x)&=\frac{1}{2}a_0+\sum_{n=1}^{\infty}a_n\cos(nx)+b_n\sin(nx)\\ \sinh(x)&=\frac{-2\sinh(π)}{π}\sum_{n=1}^{\infty}\frac{(-1)^nn\sin(nx)}{n^2+1} \end{align}[/imath]
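Part a) can be sanity-checked numerically. The sketch below uses the standard coefficients [imath]a_n=\frac{(-1)^n 2a\sinh(a\pi)}{\pi(n^2+a^2)}[/imath] and [imath]b_n=\frac{(-1)^{n+1}2n\sinh(a\pi)}{\pi(n^2+a^2)}[/imath]; note the asker's [imath]a_m[/imath] appears to have dropped a factor of [imath]a[/imath] in front of [imath]e^{-a\pi}[/imath]. Evaluating at [imath]x=0[/imath], where the sine terms vanish and convergence is fast:

```python
import math

def fourier_exp_partial(a, x, n_terms):
    """Partial sum of the standard full Fourier series of e^{ax} on (-pi, pi)."""
    s = 1.0 / (2.0 * a)
    for n in range(1, n_terms + 1):
        s += (-1) ** n * (a * math.cos(n * x) - n * math.sin(n * x)) / (n * n + a * a)
    return (2.0 * math.sinh(a * math.pi) / math.pi) * s

# at x = 0 only the cosine terms survive and the tail is O(1/N^2)
approx = fourier_exp_partial(1.0, 0.0, 2000)
print(approx, "vs", math.exp(0.0))
```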
1948835
How can I calculate a point's coordinates given distances from three other known points? I have a trilateration-related problem that I'm unsure how to solve mathematically. I can see a solution is possible through geometry, but I'm unsure how to solve the resulting equations. Given three 2D points, find where to put a new point on the same plane. You know the exact distances from the new point to each existing point. By drawing a picture, it is clear that this is attainable: In this image, [imath]P_1,P_2,P_3[/imath] are the known points, and [imath]P_4[/imath] is the point we want to find coordinates for. The red lines denote known distances. Given only two points, we can find 2 candidate positions (where the circles intersect), while the third given point limits us to our solution. Using the properties of circles, I have come up with 3 equations and 2 unknowns. Let [imath]P_i=[x_i,y_i]^T[/imath] and [imath]r_i=|P_i-P_4|[/imath] for [imath]i\in\{1,2,3,4\}[/imath]. Then my set of equations, expanded, is [imath]r_1^2 = (x_4 - x_1)^2 + (y_4 - y_1)^2\\ r_2^2 = (x_4 - x_2)^2 + (y_4 - y_2)^2\\ r_3^2 = (x_4 - x_3)^2 + (y_4 - y_3)^2[/imath] How can I calculate [imath]x_4[/imath] and [imath]y_4[/imath]? I FOUND THE ANSWER I WAS LOOKING FOR (I'd answer my own question, but since it's closed as a duplicate all I can do is post it here) So I was hoping to find a way to express this as a linear least-squares ([imath]Ax=b[/imath]) type problem. I mentioned that in the comments, but not in this OP. Regardless, the link from @dxiv in the comments below helped me get to this solutions. Also worth mentioning, the question this is a duplicate of reaches the same conclusion (as it obviously should). It just stops short of the [imath]Ax=b[/imath] form. In short, the approach is to combine the 3 quadratic equations to get 2 linear ones. 
Given [imath]\text{(1) } r_1^2 = (x_4 - x_1)^2 + (y_4 - y_1)^2\\ \text{(2) }r_2^2 = (x_4 - x_2)^2 + (y_4 - y_2)^2\\ \text{(3) }r_3^2 = (x_4 - x_3)^2 + (y_4 - y_3)^2[/imath] we first expand each to [imath]r_i^2 = \hat{x}^2 + \hat{y}^2 - 2\hat{x}x_i - 2\hat{y}y_i + x_i^2 + y_i^2[/imath] Then we subtract (2) from (1) and (3) from (1): [imath]\text{(1)-(2) }2\hat{x}(x_2 - x_1) + 2\hat{y}(y_2 - y_1) + (x_1^2 - x_2^2) + (y_1^2 - y_2^2) - (r_1^2 - r_2^2) = 0\\ \text{(1)-(3) }2\hat{x}(x_3 - x_1) + 2\hat{y}(y_3 - y_1) + (x_1^2 - x_3^2) + (y_1^2 - y_3^2) - (r_1^2 - r_3^2) = 0[/imath] This is now able to be expressed as a least-squares problem, particularly as an [imath]Ax=0[/imath] (i.e. nullspace) problem if the distances [imath]r_i[/imath] are precise. [imath]\left[ \begin{matrix} 2(x_2 - x_1) & 2(y_2 - y_1) & (x_1^2 - x_2^2) + (y_1^2 - y_2^2) - (r_1^2 - r_2^2) \\ 2(x_3 - x_1) & 2(y_3 - y_1) & (x_1^2 - x_3^2) + (y_1^2 - y_3^2) - (r_1^2 - r_3^2) \end{matrix} \right] \left[ \begin{matrix} \hat{x} \\ \hat{y} \\ 1 \\ \end{matrix} \right] = \left[ \begin{matrix} 0 \\ 0 \end{matrix} \right] [/imath] Thus, in short, the solution for [imath]\left[ \begin{matrix} \hat{x} \\ \hat{y} \end{matrix} \right][/imath] is found in the nullspace of the [imath]A[/imath] matrix. If the distances [imath]r_i[/imath] have some uncertainty, this can be expressed as a linear least-squares minimization problem which is easily solvable using SVD.
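When the distances are exact, the linearization described in the post reduces to a 2×2 linear system that plain Cramer's rule solves; no least-squares machinery is needed. A self-contained sketch (the function name `trilaterate` is mine):

```python
import math

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Recover (x, y) from three anchor points and exact distances by
    subtracting the circle equations pairwise, as in the derivation above."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # 2(x1-x2) x + 2(y1-y2) y = (x1^2-x2^2) + (y1^2-y2^2) - (r1^2-r2^2)
    a11, a12 = 2 * (x1 - x2), 2 * (y1 - y2)
    b1 = (x1**2 - x2**2) + (y1**2 - y2**2) - (r1**2 - r2**2)
    a21, a22 = 2 * (x1 - x3), 2 * (y1 - y3)
    b2 = (x1**2 - x3**2) + (y1**2 - y3**2) - (r1**2 - r3**2)
    det = a11 * a22 - a12 * a21     # nonzero iff the anchors are not collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

anchors = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
target = (1.0, 2.0)
dists = [math.dist(target, p) for p in anchors]
print(trilaterate(*anchors, *dists))   # recovers (1.0, 2.0) up to rounding
```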
100448
Finding location of a point on 2D plane, given the distances to three other known points I need to find the location of the point [imath]s_0[/imath]; the locations of the other three points ([imath]s_1[/imath], [imath]s_2[/imath], [imath]s_3[/imath]) are known. [imath]d_i[/imath] are known distances. Given: [imath]x_1[/imath], [imath]x_2[/imath], [imath]x_3[/imath], [imath]y_1[/imath], [imath]y_2[/imath], [imath]y_3[/imath], [imath]d_1[/imath], [imath]d_2[/imath], [imath]d_3[/imath] To be found: [imath]x_0[/imath], [imath]y_0[/imath] This will be a physical system. [imath]s_i[/imath] will be fixed antennas and the distances [imath]d_i[/imath] will be read via proximity sensors. The aim will be to make [imath]s_0[/imath] find its own location. So, [imath]s_0[/imath] will always exist; there will always be a unique solution point [imath]s_0:(x_0, y_0)[/imath]. How do I solve this problem?
1949341
Triangle Inequality of the "A"-norm Suppose [imath]A[/imath] is an [imath]n\times n[/imath], symmetric, positive definite matrix, i.e. [imath](Ax,x) > 0[/imath] for all [imath]0 \neq x \in \mathbb{R}^{n}[/imath] and [imath]A^{T} = A[/imath]. Propose the following norm, called the "A"-norm, by [imath] \|x\|_{A} := (Ax,x)^{1/2}.[/imath] Trivially, this satisfies nonnegativity of the norm as well as scalar multiplication. How may it be shown that the triangle inequality, i.e. [imath]\|x+y\|_{A} \leqslant \|x\|_{A} + \|y\|_{A}[/imath], is also satisfied?
1274583
Triangle Inequality for SPD Matrix Norm We define a symmetric, positive-definite matrix [imath]A[/imath] to be one such that [imath]A = A^T[/imath] and for [imath]x \neq 0[/imath], [imath]x^TAx > 0[/imath]. If we have a norm [imath]\|x\|_A = \sqrt{x^TAx}[/imath], how can we show the triangle inequality? That is, we want to show that [imath]\|x + y\|_A \leq \|x\|_A + \|y\|_A[/imath] as is the case for typical matrix norms. [imath]\|x+y\|_A^2 = (x+y)^TA(x+y) = (x^T + y^T)A(x+y) = x^TAx + x^TAy + y^TAx + y^TAy = \|x\|_A^2 + x^TAy + y^TAx + \|y\|_A^2[/imath]. This is what I have so far, but I feel like I'm not really on the right track. Ideas?
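The missing step in both posts is Cauchy–Schwarz for the inner product [imath]\langle x,y\rangle_A := x^TAy[/imath] (a genuine inner product because [imath]A[/imath] is SPD): [imath]|x^TAy|\le\|x\|_A\|y\|_A[/imath], which bounds the cross terms and gives [imath]\|x+y\|_A^2 \le (\|x\|_A+\|y\|_A)^2[/imath]. A randomized numeric check (not a proof; the SPD matrix is built as [imath]M^TM+I[/imath], my own choice):

```python
import random

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def inner_A(A, x, y):
    return sum(xi * wi for xi, wi in zip(x, matvec(A, y)))

def norm_A(A, x):
    return inner_A(A, x, x) ** 0.5

rng = random.Random(1)
n = 4
M = [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
# A = M^T M + I is symmetric positive definite by construction
A = [[sum(M[k][i] * M[k][j] for k in range(n)) + (i == j)
      for j in range(n)] for i in range(n)]

for _ in range(500):
    x = [rng.uniform(-5, 5) for _ in range(n)]
    y = [rng.uniform(-5, 5) for _ in range(n)]
    # Cauchy-Schwarz in the A-inner product ...
    assert abs(inner_A(A, x, y)) <= norm_A(A, x) * norm_A(A, y) + 1e-9
    # ... which yields the triangle inequality
    s = [a + b for a, b in zip(x, y)]
    assert norm_A(A, s) <= norm_A(A, x) + norm_A(A, y) + 1e-9
print("Cauchy-Schwarz and triangle inequality hold on all samples")
```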
1949700
If [imath]m(A \cap I) \le (1 - \epsilon)m(I)[/imath] for every interval [imath]I[/imath], then [imath]m(A) = 0[/imath]? Let [imath]\epsilon \in (0, 1)[/imath], let [imath]m[/imath] be Lebesgue measure, and suppose [imath]A[/imath] is a Borel measurable subset of [imath]\mathbb{R}[/imath]. If[imath]m(A \cap I) \le (1 - \epsilon)m(I)[/imath]for every interval [imath]I[/imath], then does it follow that [imath]m(A) = 0[/imath]?
664704
Let [imath]A\subset\mathbb{R}[/imath] be a measurable and bounded set. Show that for each [imath]0<\alpha<1[/imath] there exists an interval [imath]I[/imath] such that [imath]m(A\cap I)/m(I)>\alpha[/imath]. Let [imath]A\subset\mathbb{R}[/imath] be measurable with [imath]0<m(A)<\infty[/imath]. Show that for each [imath]0<\alpha<1[/imath] there exists an interval [imath]I[/imath] such that [imath] \frac{m(A\cap I)}{m(I)}>\alpha. [/imath] MY ATTEMPT: Following a hint I have: Let [imath]\varepsilon>0[/imath]; there exists an open set [imath]G[/imath] such that [imath]m(A)\leq m(G)<m(A)+\varepsilon[/imath]. As [imath]G[/imath] is open, we can write it as a disjoint union of open intervals [imath]\dot{\bigcup}_n I_n = G[/imath]. So, [imath] m(A)\leq m(G)=m\left(\dot{\bigcup_n} I_n \right)\leq \sum m\left(I_n \right)<m(A)+\varepsilon [/imath] Suppose that [imath]\varepsilon=(\alpha^{-1}-1)m(A)[/imath]: [imath] m(A)+\varepsilon=m(A)+m(A)(\alpha^{-1}-1)=m(A)(1+\alpha^{-1}-1)=m(A)\alpha^{-1} [/imath] and [imath] \alpha\sum m(I_n)<m(A)=\sum m(A\cap I_n) [/imath] [imath] \Rightarrow \alpha<\frac{\sum m(A\cap I_n)}{\sum m(I_n)} [/imath] But I am unsure about the following: (1) Can I write [imath]m(A)=\sum m(A\cap I_n)[/imath]? (2) How do I get the result from [imath] \alpha<\frac{\sum m(A\cap I_n)}{\sum m(I_n)} [/imath]
1950274
Prove that [imath]y^2=x^3+23[/imath] has no integer solutions I am looking for an elementary (and possibly short) proof that [imath]y^2=x^3+23[/imath] has no solutions [imath](x,y)\in \mathbb Z^2[/imath]. Reducing the equation modulo [imath]p[/imath] didn't help. Thanks in advance :). Edit: I want to see a solution without using the theory of elliptic curves. This question appeared in our last exam of "introduction to number theory". We only know basic facts like the Legendre symbol, Hensel's lemma, the Chinese Remainder Theorem, etc.
245299
Integer solutions for [imath]x^2-y^3 = 23[/imath]. As the title stated, I am wondering the integers [imath]x,y[/imath] that satisfy the equation [imath]x^2-y^3 = 23[/imath].
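Not a proof, but a quick exhaustive search gives confidence in the claim before investing in the number theory: since [imath]y^2=x^3+23\ge 0[/imath] forces [imath]x\ge -2[/imath], it suffices to scan [imath]x[/imath] upward (the cutoff [imath]10^4[/imath] is an arbitrary choice of mine):

```python
import math

solutions = []
for x in range(-2, 10_000):        # x^3 + 23 >= 0 requires x >= -2
    t = x ** 3 + 23
    y = math.isqrt(t)              # exact integer square-root test
    if y * y == t:
        solutions.append((x, y))

print(solutions)   # empty: no integer points in this range
```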
1950263
Show that a graph [imath]G[/imath] is a forest if and only if every induced subgraph of [imath]G[/imath] contains a vertex of degree at most [imath]1[/imath] I'm new to this, so I'd like to know if someone can help me solve this problem; I really don't know how to approach it. Thanks! Show that a graph [imath]G[/imath] is a forest if and only if every induced subgraph of [imath]G[/imath] contains a vertex of degree at most [imath]1[/imath]
944979
Prove that a graph [imath]G[/imath] is a forest if and only if every induced subgraph of [imath]G[/imath] contains a vertex of degree at most [imath]1[/imath] Prove that a graph [imath]G[/imath] is a forest if and only if every induced subgraph of [imath]G[/imath] contains a vertex of degree at most [imath]1[/imath] => Let [imath]G[/imath] be a forest. Then [imath]G[/imath] is a collection of trees. Each tree has at least one vertex of degree [imath]0[/imath] or [imath]1[/imath], so every induced subgraph of [imath]G[/imath] contains a vertex of degree at most [imath]1[/imath]. <= Assume that every induced subgraph of [imath]G[/imath] contains a vertex of degree at most [imath]1[/imath]. I want to show that the components of [imath]G[/imath] are trees, meaning I want to show that none of them contains any cycle and all of them are connected. But I'm not sure how.
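The equivalence can be verified exhaustively for all graphs on 5 vertices, which also illustrates the two key facts behind the proof: every nonempty forest has a vertex of degree at most 1, while a shortest cycle induces a subgraph of minimum degree 2. A sketch (helper names mine):

```python
from itertools import combinations

def is_forest(vertices, edges):
    """Acyclic check via union-find: an edge inside one component closes a cycle."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for u, w in edges:
        ru, rw = find(u), find(w)
        if ru == rw:
            return False
        parent[ru] = rw
    return True

def has_vertex_of_degree_at_most_one(vertices, edges):
    deg = {v: 0 for v in vertices}
    for u, w in edges:
        deg[u] += 1
        deg[w] += 1
    return min(deg.values()) <= 1

V = list(range(5))
all_edges = list(combinations(V, 2))
for mask in range(1 << len(all_edges)):          # all 1024 graphs on 5 vertices
    E = [e for i, e in enumerate(all_edges) if mask >> i & 1]
    prop = all(
        has_vertex_of_degree_at_most_one(S, [e for e in E if e[0] in S and e[1] in S])
        for r in range(1, 6) for S in combinations(V, r)
    )
    assert is_forest(V, E) == prop
print("equivalence holds for every graph on 5 vertices")
```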
1950112
Exercise for the identity theorem Question: If [imath]f[/imath] and [imath]g[/imath] are analytic functions on a domain [imath]\mathbb{D}[/imath] and so is the product [imath]\overline{f}g[/imath] on [imath]\mathbb{D}[/imath], then [imath]f[/imath] is constant or [imath]g\equiv 0[/imath] identically. I do not know how to start. If not a full answer, I would appreciate a hint for the question. [imath]\overline{f}[/imath] means the complex conjugate of [imath]f[/imath]. This exercise is included in the section for the Identity Theorem.
1935298
Prove that if [imath]\bar{f}g[/imath] is analytic then [imath]f[/imath] is constant or...? I need help with the following exercise: Prove that if [imath]f[/imath] and [imath]g[/imath] are analytic functions on a region [imath]G[/imath] such that [imath]\bar{f}g[/imath] is analytic, then [imath]f[/imath] is constant or [imath]g\equiv0[/imath]. Thanks!
1950381
If [imath]H[/imath] is a subgroup of [imath]S_n[/imath] and if H is not contained in [imath]A_n[/imath] prove that precisely one half of the elements of H are even permutations. If [imath]H[/imath] is a subgroup of [imath]S_n[/imath] and if H is not contained in [imath]A_n[/imath] prove that precisely one half of the elements of H are even permutations. I know that multiplying two odd permutations gives an even permutation, and that one even and one odd give an odd permutation, but I have no idea where to go from there.
1626115
Proving the number of even and odd permutations of a subgroup [imath]H are equal, provided H is not contained in A_{n}[/imath] Let [imath]H<S_{n}[/imath] and suppose [imath]H[/imath] is not contained in [imath]A_{n}[/imath]. Write [imath]H[/imath] as [imath]H=E\cup O[/imath] where [imath]E[/imath] and [imath]O[/imath] represent the sets of even and odd elements, respectively. Let [imath]E=\{{\alpha_1,...,\alpha_n}\}[/imath] and [imath]O=\{{\beta_1,...,\beta_m}\}[/imath]. Since [imath]O[/imath] is non-empty, it contains at least one element, say [imath]\beta_1[/imath]. The same goes for [imath]E[/imath] because [imath]e\in E[/imath]. Our goal is to prove [imath]m=n[/imath]. The products [imath]\beta_1\alpha_i[/imath], [imath]i=1,...n[/imath], are all distinct elements in [imath]O[/imath], and it follows that [imath]O[/imath] has at least [imath]n[/imath] elements, i.e. [imath]m\geq n[/imath]. In the case of strict inequality, [imath]m>n[/imath], add at least one extra odd element ([imath]\beta_{n+1}[/imath]) to the list [imath]\beta_1\alpha_1,...,\beta_1\alpha_n,\beta_{n+1}[/imath] This is where I find myself stuck. Any help appreciated. Can this route work?
1950533
Finding A Subgroup of [imath]A_n[/imath] Isomorphic to [imath]S_{n-2}[/imath] I am asked to show there exists a subgroup of [imath]A_n[/imath] that is isomorphic to [imath]S_{n-2}[/imath] for [imath]n \ge 3[/imath]. Here are some of my thoughts: Certainly there exists a homomorphism [imath]f : A_n \rightarrow S_n[/imath]---namely, take [imath]f[/imath] to be the inclusion map, [imath]f(\sigma) = \sigma[/imath]. Because of this, [imath]f^{-1}(S_n)[/imath] will be a subgroup of [imath]A_n[/imath], and I suspect I can extend [imath]f[/imath] in some way to an isomorphism by defining [imath]\varphi : S_{n-2} \rightarrow f^{-1}(S_n)[/imath] I am not really sure how to move forward, but does this seem to be moving in the right direction? It seems that I want to take all even permutations in [imath]S_{n-2}[/imath] and map them to themselves, and then take every odd permutation and "correct" it to become an even permutation, but I am not sure why this is right--or if it is even remotely correct. Can [imath]\varphi[/imath] map an odd permutation to an even permutation and still be an isomorphism? It doesn't seem that it could.
712847
Embedding [imath]S_n[/imath] into [imath]A_{n+2}[/imath] I am trying to prove that for all [imath]n[/imath], [imath]S_n[/imath] is isomorphic to a subgroup of [imath]A_{n+2}[/imath]. Say [imath]S_n[/imath] acts on [imath]\{\alpha_1,...,\alpha_n\}[/imath] and [imath]A_{n+2}[/imath] acts on [imath]\{\alpha_1,...,\alpha_n,\alpha_{n+1},\alpha_{n+2}\}[/imath]. Let [imath]\sigma \in S_n[/imath] and let [imath]\phi: S_n \to A_{n+2}[/imath] be given by [imath]\phi(\sigma)=\sigma[/imath] if [imath]\sigma[/imath] is an even permutation, and [imath]\phi(\sigma)=\sigma(\alpha_{n+1}\alpha_{n+2})[/imath] if [imath]\sigma[/imath] is an odd permutation. This way [imath]\phi(\sigma) \in A_{n+2}[/imath]. Now, let [imath]\sigma, \rho \in S_n[/imath]. Then [imath]\phi(\sigma \rho)=\sigma \rho (\alpha_{n+1}\alpha_{n+2})[/imath]. Also, [imath]\phi(\sigma)\phi(\rho)=\sigma(\alpha_{n+1}\alpha_{n+2})\rho(\alpha_{n+1}\alpha_{n+2})=\sigma\rho[/imath]. So, [imath]\phi(\sigma \rho)[/imath] does not seem to equal [imath]\phi(\sigma)\phi(\rho)[/imath]. But the idea is that we want [imath]\phi(S_n) \cong S_n[/imath], and [imath](\alpha_{n+1}\alpha_{n+2})[/imath] does not permute any members of [imath]\{\alpha_1,...,\alpha_n\}[/imath], so [imath]\phi(\sigma \rho)[/imath] and [imath]\phi(\sigma)\phi(\rho)[/imath] are equal as permutations on this (sub)set. This feels a little subtle to me and I'm not sure if my idea is valid or not. If it is not, can this proof still be saved? I appreciate any thoughts on this. Thanks.
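The subtlety in the post resolves because the transposition [imath](\alpha_{n+1}\alpha_{n+2})[/imath] has support disjoint from every (extended) element of [imath]S_n[/imath], so it commutes with them; for two odd permutations the two transpositions cancel and the map is a genuine homomorphism on [imath]n+2[/imath] points. This can be checked exhaustively for [imath]n=4[/imath] (helper names mine):

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p[q[i]]: apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(p)))

def is_even(p):
    inversions = sum(1 for i in range(len(p))
                       for j in range(i + 1, len(p)) if p[i] > p[j])
    return inversions % 2 == 0

def phi(p):
    """Extend p in S_n to n+2 points; if p is odd, also swap the two new points."""
    n = len(p)
    q = list(p) + [n, n + 1]
    if not is_even(p):
        q[n], q[n + 1] = q[n + 1], q[n]
    return tuple(q)

n = 4
elems = list(permutations(range(n)))
for p in elems:
    assert is_even(phi(p))                 # the image lies in A_{n+2}
for p in elems:
    for q in elems:
        assert phi(compose(p, q)) == compose(phi(p), phi(q))   # homomorphism

assert len(set(map(phi, elems))) == len(elems)   # injective
print("phi embeds S_4 into A_6")
```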