Columns: qid (string, length 1 to 7), Q (string, length 87 to 7.22k), dup_qid (string, length 1 to 7), Q_dup (string, length 97 to 10.5k)
2303677
If [imath]f:[a,b]\to\mathbb{R}[/imath] is Riemann-integrable then it is continuous at a point. If [imath]f:[a,b]\to\mathbb{R}[/imath] is Riemann-integrable then it is continuous at a point in [imath][a,b][/imath]. First of all, I already know Lebesgue's criterion for Riemann-integrability. So such a function will be continuous almost everywhere. But could we prove a weaker result (the above one) without using Lebesgue's criterion, which requires a difficult proof? What do you think?
776398
A Riemann integrable function must have infinitely many points of continuity I was wondering whether anyone would be so kind as to briefly check my proof? I am supposed to prove the statement without using any theorems which would render the proof trivial. If [imath]\displaystyle\int_a^b f[/imath] exists then [imath]f[/imath] has infinitely many points of continuity in [imath][a,b][/imath] For the sake of contradiction suppose [imath]f[/imath] has finitely many. It suffices to show that there is an interval [imath][u,v]\subset [a,b][/imath] over which [imath]f[/imath] is not Riemann integrable. Pick any [imath][u,b]\subset [a,b][/imath] such that [imath]f[/imath] is discontinuous everywhere in [imath][u,v][/imath]. For any finite partition [imath]D=\{x_1\cdots x_n\}[/imath] of [imath][u,v][/imath] let: [imath]s(f,D)=\sum_i (x_{i+1}-x_i)\inf_{x\in[x_{i},x_{i+1}]}f\quad\text{and}\quad S(f,D)=\sum_i (x_{i+1}-x_i)\sup_{x\in[x_{i},x_{i+1}]}f[/imath] We prove [imath]\sup_D s(f,D) <\inf_D S(f,D)\;(1)[/imath]. To achieve this consider an arbitrary chain: [imath]D_0\subset D_1\subset \cdots[/imath] Pick any [imath]I_0\subset I_1\subset \cdots[/imath] where [imath]I_k=[u_k,v_k][/imath] and [imath]I_k[/imath] is an interval of the partition [imath]D_k[/imath]. Without loss of generality [imath]\bigcap_i I_i=z[/imath]. Given that [imath]f[/imath] is discontinuous everywhere, there exists [imath]\epsilon>0[/imath] such that for any [imath]\delta>0[/imath] there exists [imath]y[/imath] such that [imath]|z-y|<\delta[/imath] yet [imath]|f(z)-f(y)|\geq \epsilon[/imath]. This immediately implies [imath]\displaystyle\sup_{I_n} f>\inf_{I_n} f[/imath] and hence [imath]\displaystyle\sup_{D_n} s(f,D)<\inf_{D_n} S(f,D)[/imath]. This is valid for any sequence [imath]D_n[/imath] so claim [imath](1)[/imath] holds, contradicting the integrability of [imath]f[/imath].
2303626
Prove [imath]\prod_{n=1}^{\infty}b_n[/imath] converges [imath]\iff[/imath] [imath]\sum_{n=1}^{\infty}\ln (b_n)[/imath] converges. Okay, so I'm stuck on this question which is part (b) to the question and was wondering if someone could help me out with it, I have solved the first part as seen below, (would you also be able to check (a); I feel my communication is lacking as the proof is very small), thanks! Question Consider a sequence [imath](b_n)_{n=1}^{\infty}[/imath] of non-zero real numbers. By definition, the infinite product [imath]\prod_{n=1}^{\infty}[/imath] converges if the sequence [imath](p_n)_{n=1}^{\infty}[/imath], where [imath]p_n=\prod_{k=1}^{n}b_k[/imath] converges to some non-zero number. (a) Prove that the convergence of [imath]\prod_{n=1}^{\infty}b_n[/imath] implies [imath]\lim_{n\to\infty}b_n=1[/imath] Working (a) If [imath]p_n \rightarrow p\neq 0[/imath], then [imath]b_n=\frac{p_n}{p_{n-1}}=\frac{\prod_{k=1}^{n}b_k}{\prod_{k=1}^{n-1}b_k}\rightarrow\frac{p}{p}=1[/imath] QED (b) Assume [imath]b_n>0[/imath] for all [imath]n\in\mathbb{N}[/imath]. Prove that [imath]\prod_{n=1}^{\infty}b_n[/imath] converge if and only if [imath]\sum_{n=1}^{\infty}\ln (b_n)[/imath] converges. (You may use without proof the fact that [imath]\ln(x)[/imath] is a continuous function on [imath](0, \infty)[/imath].) Thanks again :)
2303595
Prove that the convergence of [imath]\prod_{n=1}^{\infty} b_n[/imath] implies [imath]\lim_{n→∞} b_n = 1[/imath] So I am stuck on this question and am unsure how to solve it, would be appreciated if someone could help me out, thanks! :) Question Consider a sequence [imath](b_n)_{n=1}^{\infty}[/imath] of non-zero real numbers. By definition, the infinite product [imath]\prod_{n=1}^{\infty}[/imath] converges if the sequence [imath](p_n)_{n=1}^{\infty}[/imath], where [imath]p_n=\prod_{k=1}^{n}b_k[/imath] converges to some non-zero number. Prove that the convergence of [imath]\prod_{n=1}^{\infty}b_n[/imath] implies [imath]\lim_{n\to\infty}b_n=1[/imath]
2303580
How would we define an inner product on the space [imath]C[a,b][/imath] with sup-norm? Sorry for the dumb question. I'm having some trouble figuring out how to define an inner product on this space. The space is [imath]C[a, b][/imath] and the norm here is defined by [imath]\|f\| = \sup_{a \leq x \leq b}|f(x)|[/imath]. I know that [imath]\|f\|^2 = \langle f, f \rangle[/imath] but what about more generally [imath]\langle f, g \rangle[/imath]? I hope my question makes sense.
150349
Show that a norm on an inner product space satisfies parallelogram law. Show that a norm on an inner product space satisfies parallelogram law. Hence use the parallelogram law to show that the space of continuous real functions defined on the interval [imath][a,b][/imath] is not a Hilbert space. Here I did the first part. Let [imath]X[/imath] be an inner product space and [imath]x,y\in X[/imath] consider [imath]\Vert x+y\Vert ^2[/imath] and [imath]\Vert x-y\Vert ^2[/imath] Then on adding I get [imath]\Vert x+y\Vert ^2+\Vert x-y\Vert ^2=2(\Vert x\Vert ^2 +\Vert y\Vert ^2)[/imath] For the second part I don’t know how can a parallelogram law prove [imath]C[a,b][/imath] is not a Hilbert space, help me please.
2303141
Derivative of oscillatory integral with absolute value For the function [imath]F(x) = \int_0^x |\cos(1/u)| du[/imath], I want to determine if the derivative exists at [imath]x=0[/imath] and find [imath]F'(0)[/imath]. I figured this out when the absolute value is removed, using integration by parts: [imath]\frac{F(h) - F(0)}{h} = \frac{1}{h}\int_0^h\cos(1/u)du = \frac{1}{h}\int_{1/h}^\infty\frac{\cos(x)}{x^2}dx \\= - h \sin(1/h) + \frac{1}{h}\int_{1/h}^\infty\frac{2\sin(x)}{x^3}dx = -h \sin(1/h) + \frac{1}{h}O(h^2)[/imath] So [imath]F'(0) = 0[/imath] here. This breaks down for [imath]F(x) = \int_0^x |\cos(1/u)|du[/imath]. Thank you for any help.
2063078
Does the limit [imath]\lim _{x\to 0} \frac 1x \int_0^x \left|\cos \frac 1t \right| dt[/imath] exist? Does the limit [imath]\lim _{x\to 0} \frac 1x \int_0^x \left|\cos \frac 1t \right| dt[/imath] exist? If it does, then what is the value? I don't think even L'Hospital's rule can be applied. Please help. Thanks in advance
2304234
Solving a contest problem related to the Stone-Weierstrass theorem Problem: Let [imath]f[/imath] be a real valued continuous function on [imath][-1,1][/imath], such that [imath]f(x)=f(-x)[/imath] for all [imath]x \in [-1,1][/imath]. Show that for every [imath]\epsilon \gt 0[/imath] there is a polynomial [imath]p(x)[/imath] with rational coefficients such that for every [imath]x \in [-1,1][/imath], [imath]|f(x)-p(x^2)| \lt \epsilon[/imath]. This problem appeared in the ISI JRF exam, 2017. I know that the set of polynomials with rational coefficients over [imath][-1,1][/imath] is dense in the set of real valued continuous functions on [imath][-1,1][/imath]. So I'm directly using it here. So for every [imath]\epsilon \gt 0[/imath] there exists a polynomial with rational coefficients [imath]q(x)[/imath] such that for all [imath] x \in [-1,1][/imath], [imath]|f(x)-q(x)| \lt \epsilon[/imath]. Since [imath]f(x)=f(-x)[/imath] for all [imath]x \in [-1,1][/imath], we have [imath]|f(x)-q(|x|)| \lt \epsilon[/imath] for all [imath]x \in [-1,1][/imath]. I am stuck here. What can be done next? Applying Stone-Weierstrass again to get [imath]p[/imath] as required? Thanks.
1693629
Uniform approximation by even polynomial Proposition Let [imath]\mathcal{P_e}[/imath] be the set of functions [imath]p_e(x) = a_0 + a_2x^2 + \cdots + a_{2n}x^{2n}[/imath], [imath]p_e : \mathbb{R} \to \mathbb{R}[/imath] Show that every continuous [imath]f:[0,1]\to\mathbb{R}[/imath] can be uniformly approximated by elements in [imath]\mathcal{P_e}[/imath] Attempt: Since we are talking about polynomials, let's try to play with the Weierstrass Theorem. By definition [imath]\mathcal{P_e}[/imath] is dense in [imath]C^0([0,1], \mathbb{R})[/imath] if for every function [imath]f \in C^0([0,1], \mathbb{R})[/imath], [imath]\exists p_e \in \mathcal{P_e}[/imath] such that [imath]\forall \epsilon > 0, \forall x \in [0,1], ||p_e-f||< \epsilon[/imath] So we want to show [imath]\|p_e - f\| < \epsilon[/imath] Try something like...since every continuous function is approximated by polynomials i.e. [imath]\forall \epsilon > 0,\ \|f - p\| < \epsilon[/imath], therefore let [imath]p[/imath] be a polynomial, [imath]p = a_0 + a_1x + a_2x^2 + a_3x^3 + \cdots[/imath] [imath]\|p_e - f\| \leq \|f - p\| + \|p - p_e\|[/imath] [imath]\Rightarrow \|p_e - f\| < \epsilon + \|p_o\|,[/imath] where [imath]p_o[/imath] is an odd polynomial [imath]\Rightarrow \|p_e - f\| < \epsilon + \sup_x|a_1+a_3+\cdots+a_{2n+1}|[/imath] Stuck. In any case, [imath]p[/imath] and [imath]p_o[/imath] are poorly defined. What would be the standard approach to prove the proposition?
2304090
[imath]\chi_A(\lambda)=\lambda^n -Tr(A)\lambda^{n-1}+...+(-1)^ndet(A) \implies \chi_{A^{-1}}? [/imath] Let [imath]A \in GL_n(\mathbb{F})[/imath] and [imath]\chi_A[/imath] denote the characteristic polynomial of [imath]A[/imath]. I want to calculate [imath]\chi_{A^{-1}}[/imath]. I know that the characteristic polynomial of [imath]A[/imath] is of the form [imath]\chi_A(\lambda)=\lambda^n -Tr(A)\lambda^{n-1}+...+(-1)^ndet(A).[/imath] Using the form of [imath]\chi_A[/imath] how can i calculate [imath]\chi_{A^{-1}}[/imath]?
511009
Characteristic polynomial of an inverse Given the characteristic polynomial [imath]\chi_A[/imath] of an invertible matrix [imath]A[/imath], I'm to find [imath]\chi_{A^{-1}}[/imath]. I can see that this is theoretically possible. [imath]\chi_A[/imath] uniquely determines the similarity class of [imath]A[/imath], which uniquely determines the similarity class of [imath]A^{-1}[/imath], which uniquely determines [imath]\chi_{A^{-1}}[/imath]. Calculating the coefficients of [imath]\chi_{A^{-1}}[/imath] explicitly and then relating them to the coefficients of [imath]\chi_A[/imath] seems unfeasible. I thought about calculating the factors instead, which could be easier since I at least have some idea what the linear factors [imath]\chi_{A^{-1}}[/imath] are (since I can see how to get the eigenvalues of [imath]A^{-1}[/imath] from those of [imath]A[/imath]), but that doesn't help me with potential higher-degree irreducibles or repeated linear factors. Also, we're not supposed to know the eigenvalues of [imath]A^{-1}[/imath], at least I don't think so, since calculating them is the next question. Any hints?
2304738
Need help using induction to prove a product series inequality The question as stated on a past exam: Use mathematical induction to prove that [imath] \frac{1}{2}\cdot \frac{3}{4}\cdot \frac{5}{6}\cdot ...\cdot \frac{2n-1}{2n} \leq \frac{1}{\sqrt{3n+1}} [/imath] for any positive integer n. I have done plenty of induction proofs before and understand how and why they work. However I can't seem to find the algebraic trick to this question. I start off by multiplying both sides by [imath] \frac{2n+1}{2n+2} [/imath] with the final goal of proving that [imath] \frac{1}{\sqrt{3n+1}} \cdot \frac{2n+1}{2n+2} \le \frac{1}{\sqrt{3n+4}}[/imath] to complete the proof, but I quickly get lost; any help or tips would be very much appreciated.
784760
Prove that [imath]\prod\limits_{i=1}^n \frac{2i-1}{2i} \leq \frac{1}{\sqrt{3n+1}}[/imath] for all [imath]n \in \Bbb Z_+[/imath] Given that [imath]x_n = \displaystyle \prod_{i=1}^n \frac{2i-1}{2i}[/imath] Then prove that [imath]x_n \leq \frac{1}{\sqrt{3n+1}}[/imath] for all [imath]n \in \mathbb Z_+[/imath] What I did was take the logarithm of [imath]x_n[/imath], and I arrived at: [imath]\log{x_n}=\displaystyle \sum_{i=1}^{n} (\log{(2i-1)} - \log{2i}) [/imath] I'd like to know if I proceeded correctly, and thus would like further guidance to solve the problem. However, if I haven't approached the problem correctly, I'd appreciate hints and techniques that are applicable. Please don't post the whole answer because I'd like to work this out on my own. Thanks.
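A quick numerical check of the inequality discussed in the pair above (a Python sketch, not part of either post; the cutoff of 30 terms is arbitrary):

```python
# Check prod_{i=1}^n (2i-1)/(2i) <= 1/sqrt(3n+1) numerically for the first few n.
from math import sqrt

prod = 1.0
for n in range(1, 31):
    prod *= (2 * n - 1) / (2 * n)        # left-hand side after n factors
    bound = 1 / sqrt(3 * n + 1)          # right-hand side
    assert prod <= bound + 1e-15, (n, prod, bound)   # tiny slack for rounding
    print(f"n={n:2d}  product={prod:.6f}  bound={bound:.6f}")
```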
2304820
Why does one want to have the standard definition of localization? The standard way of defining the localization of a commutative ring is as follows: given a multiplicatively closed subset [imath]S\subset R[/imath] the localization is defined first by considering the set [imath]R\times S/\sim[/imath] where [imath] (r,s) \sim (r',s') \text{ if there exists a } u \in S \text{ such that } u(rs' - r's) =0 [/imath] the rest is just equipping this set with a ring structure, but my question lies here: why do we want the [imath]u[/imath]? Don't we not want nilpotents in our denominator?
2259166
A question about the equivalence relation on the localization of a ring. Let [imath]A[/imath] be a ring and [imath]S[/imath] a multiplicative closed set. Then the localization of [imath]A[/imath] with respect to [imath]S[/imath] is defined as the set [imath]S^{-1}A[/imath] consisting of equivalence classes of pairs [imath](a, s)[/imath] where to such pairs [imath](a,s), (b,t)[/imath] are said to be equivalent if there exists some [imath]u[/imath] in [imath]S[/imath] such that [imath]u(at-bs)=0[/imath] Now, in the Wikipedia article about the localization of a ring, it says that the existence of that [imath]u\in S[/imath] is crucial in order to guarantee the transitive property of the equivalence relation. I've seen the proof that the equivalence relation defined above is indeed an equivalence relation, but I fail to see how crucial the existence of [imath]u[/imath] is. For example, why doesn't it work if we simply say that two pairs [imath](a,s),(b,t)[/imath] are equivalent iff [imath]at - bs = 0[/imath]? I tried to come up with a counterexample for such case, but failed in the attempt.
2304030
Prove that [imath]Dom(S \circ R )= Dom (R)[/imath] Suppose R is a relation from A to B and S is a relation from B to C. Also given that [imath]Ran (R) \subseteq Dom (S) [/imath]. Prove that [imath]Dom(S \circ R )= Dom (R)[/imath] Now Let [imath]x \in Dom (S \circ R) \iff \exists b \in B[/imath] s.t [imath](x,b) \in R[/imath] and [imath](b,y) \in S[/imath] [imath]\iff x \in Dom(R)[/imath]. Conversely, Let [imath]x \in Dom(R) \iff \exists b \in B (x,b) \in R[/imath]. Since [imath]Ran(R) \subseteq Dom(S)[/imath], we get [imath]b \in Dom(S)[/imath] and hence [imath]\exists y \in C[/imath] [imath]s.t (b,y) \in S[/imath]. So if [imath]x \in Dom(R)[/imath], then [imath](x,b) \in R and (b,y) \in S[/imath] for some [imath]b[/imath] and [imath]y[/imath]. So [imath]x \in Dom (S \circ R)[/imath]. Please someone verify my proof as I do not have confidence, especially in the converse part. Thanks
193371
If [imath]\operatorname{ran} F \subseteq \operatorname{dom} G[/imath], then [imath]\operatorname{dom}(G \circ F) = \operatorname{dom} F[/imath]. THEOREM: If [imath] \text{ran } F \subseteq \text{dom } G [/imath] then [imath]\text{dom }(G \circ F)= \text{dom }F[/imath] PROOF: if [imath] F\subseteq A \times B[/imath] and [imath] G\subseteq B\times C[/imath] then by definition [imath]\text{dom }F=A \\\text{ ran }F=B[/imath] [imath]\text{dom }G=B\\\text{ran }G=C[/imath] Now [imath](G \circ F)\subseteq A \times C[/imath] by definition so [imath]\text{dom}(G \circ F)= A =\text{dom}F[/imath] [imath] QED [/imath]
2303773
Subset of Well-ordered set A is Order-Isomorphic to A or section of A Claim Let A be a well-ordered set. [imath]C \approxeq A[/imath] or [imath]C \approxeq \text{section of } A[/imath] [imath]\forall C[/imath] s.t. [imath]C \subset A[/imath] How can one prove the above claim?
2303822
(Verification) [imath]C \approxeq A[/imath] or [imath]C \approxeq \text{section of } A[/imath] [imath]\forall C[/imath] s.t. [imath]C \subset A[/imath] Claim Let A is well-ordered set. [imath]C \approxeq A[/imath] or [imath]C \approxeq \text{section of } A[/imath] [imath]\forall C[/imath] s.t. [imath]C \subset A[/imath] Proof If [imath]C=A[/imath], [imath]C \approxeq A[/imath] since a well-ordered set is order-isomorphic to oneself. If [imath]C \neq A[/imath], there [imath]\exists a \in A[/imath] s.t. [imath]a \notin C[/imath]. Then [imath]A\setminus C = \{a \in A \;|\;a \notin C\}[/imath]. Now, since A is well-ordered, there [imath]\exists[/imath] least element in [imath]A\setminus C[/imath], so let this least element be [imath]m_0[/imath]. Now let [imath]S_{m_o} = \{a \in A \;|\; a< m_0\}[/imath] which is a section of A. Now define [imath]\phi : C \to S_{m_0}[/imath] s.t. [imath]\forall C_0, C_1 \in C[/imath] if [imath]C_0 \le C_1[/imath] then [imath]\phi(C_0) \le \phi(C_1)[/imath] There [imath]\exists \;\text{such}\;\phi[/imath] since [imath]C[/imath] and [imath]S_{m_0}[/imath] are all-well-ordered since A is well-ordered. So by definition [imath]\phi[/imath] is increasing. Also, [imath]\phi[/imath]is bijective since the number of element in [imath]C[/imath] and [imath]S_{m_0}[/imath] is equal. Thus, [imath]\phi^{-1}[/imath] also exists and it is also increasing and bijective by definition. Thus [imath]C \approxeq \text{section of } A[/imath]
2304683
Sums of squares (Proof) Prove that [imath]n^2 + (n + 1)^2 = m^3[/imath] does not have solutions in the positive integers. I guess that the proof is by contradiction, but if I suppose it, I can't find the contradiction. Thanks for your help.
1672211
[imath]n^2 + (n+1)^2 = m^3[/imath] has no solution in the positive integers The problem from Burton: show that the equation [imath]n^2 + (n+1)^2 = m^3[/imath] has no solution in the positive integers. So far, I can see that gcd([imath]n[/imath],[imath]n+1[/imath])[imath]=1[/imath] and [imath]m \equiv_4 1[/imath] and [imath]m=a^2 + b^2[/imath] for some integers a,b. I'm guessing I need to reach a contradiction. At this point, I am stuck. Any hints?
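Before hunting for a proof, a small exhaustive scan (Python sketch; the bound 10^5 is an arbitrary cutoff, and a search is of course not a proof) confirms that no small solutions exist:

```python
# Scan for n with n^2 + (n+1)^2 equal to a perfect cube (exact integer check).
hits = []
for n in range(1, 10**5):
    v = n * n + (n + 1) ** 2
    r = round(v ** (1 / 3))                            # float guess for the cube root
    if any(m ** 3 == v for m in (r - 1, r, r + 1)):    # confirm exactly with integers
        hits.append(n)
print(hits)   # expected to print [] (no solutions in this range)
```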
2305206
Remainder of the division of [imath]\overbrace{11\ldots1}^{124 \text{ times}}[/imath] by 271. We have to find the remainder when [imath]\overbrace{11\ldots1}^{124 \text{ times}}[/imath] is divided by 271. For this I thought of using the Chinese remainder theorem or congruences/modular arithmetic, but I could not get a good start. I found a similar question: What will be the remainder when 111...(123 times) is divided by 271? Finding the remainder of [imath]\overbrace{11\ldots1}^{123 \text{ times}}[/imath] divided by [imath]271[/imath] But in that one I could not understand how they got the idea of 271 x 41.
2009917
Finding the remainder of [imath]\overbrace{11\ldots1}^{123 \text{ times}}[/imath] divided by [imath]271[/imath] When [imath]\overbrace{11\ldots1}^{123 \text{ times}}[/imath] is divided by [imath]271[/imath], what is the remainder? I don't know how I could proceed because the number is too big to divide and I can't even factor it out. Please let me know how to solve this kind of problem, and if there is any shortcut related to this kind of problem, please let me know.
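For this pair, the answer can be computed directly with exact integer arithmetic as a sanity check (Python sketch; `repunit_mod` is a throwaway helper, not from either post). The key fact hinted at in the question is that 11111 = 271 x 41, i.e. the repunit with 5 ones is divisible by 271:

```python
# The repunit with k ones is (10**k - 1) // 9; reduce it modulo m with exact integers.
def repunit_mod(k, m):
    return ((10 ** k - 1) // 9) % m

print(repunit_mod(5, 271))     # 0, since 11111 = 271 * 41
print(repunit_mod(123, 271))   # remainder asked for in the second post
print(repunit_mod(124, 271))   # remainder asked for in the first post
```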
2305569
Why does the Newton-Raphson method always work? Why does the Newton-Raphson method for solving equations always work? What I know: I know that if [imath]f(x) = 0[/imath], then the Newton-Raphson method may be applied. It states that [imath]x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}[/imath] where the solution to [imath]f(x) = 0[/imath] is [imath]x_\infty[/imath] I also know that you can select a point on the graph of [imath]f(x)[/imath] (your first estimate [imath]x_1[/imath]), and Newton-Raphson takes the tangent at that point; where that tangent meets the [imath]x[/imath]-axis, that [imath]x[/imath] value is [imath]x_2[/imath]. I know how it works, but why the [imath]\frac{f(x)}{f'(x)} ?[/imath] It seems that the curve is divided by the gradient, but what does this give? I would just like an explanation of why this works.
1244521
How does Newton's Method work? Before I am told, I want to clarify that I searched first, and I don't believe this to be a repost. I understand the formula in terms of how to apply it, and I've seen graphical representations and everything. I get that we are finding where the tangent line has a root, then choosing a new [imath]f(x)[/imath] at that point and finding the root of its tangent line, effectively closing the distance between x and the root r. What I do not understand, is what [imath]\frac{f(x)}{f'(x)}[/imath] is actually doing. I know it can be used to find the root x, as it is derived from [imath]y=mx+b[/imath], but how is dividing [imath]f(x)[/imath] by its derivative getting me the root? Why does this work? My intuition is telling me (before I actually tried it) that I was getting some y value, then seeing how many times the slope goes into it; but this would give me the [imath]x[/imath] coordinate, wouldn't it? I can use it, but it's not clicking as to why, and I'd like to fix that so I can actually understand what is going on.
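Since both posts ask what the update x - f(x)/f'(x) is actually doing, here is a minimal runnable version of the iteration (Python sketch; the function names and the example f(x) = x^2 - 2 are illustrative choices, not from the posts):

```python
# Newton-Raphson: follow the tangent line at x_n down to the x-axis to get x_{n+1}.
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)   # f(x)/f'(x): horizontal distance to where the tangent hits the axis
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: solve x^2 - 2 = 0 starting from x0 = 1.
print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))   # approx 1.4142135623730951
```

It is worth noting that the iteration is not guaranteed to converge in general (for instance when f'(x_n) is close to zero or the starting point is poor), which is part of why the title question "why does it always work?" has a qualified answer.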
601440
There exists [imath]c\in [a,b][/imath] such that [imath]\int_a^c f(t)dt = \int_c^b f(t)dt[/imath] If [imath]f:[a,b]\longrightarrow \mathbf{R}[/imath] is integrable prove that there is [imath]c\in[a,b][/imath] such that [imath]\int_a^c f(t)dt = \int_c^b f(t)dt[/imath]. I set [imath]g(x)=\int_a^x f(t)dt[/imath] but I don't know how I must continue.
2305541
If [imath]f \colon [a,b] \rightarrow \mathbb{R}[/imath] is integrable, then show that there exists [imath]c \in [a,b][/imath] such that [imath]\int_a^c f = \int_c^b f[/imath] If [imath]f \colon [a,b]\rightarrow \mathbb{R}[/imath] is integrable, then show that there exists [imath]c \in [a,b][/imath] such that [imath]\int_a^c f = \int_c^b f[/imath]. I think that I have to proceed with the Mean Value Theorem for Integrals...
261783
Distribution of [imath](XY)^Z[/imath] if [imath](X,Y,Z)[/imath] is i.i.d. uniform on [imath][0,1][/imath] [imath]X,Y[/imath] and [imath]Z[/imath] are independent and uniformly distributed on [imath][0,1][/imath]. How is the random variable [imath](XY)^Z[/imath] distributed? I had an idea to take the logarithm of this and use the convolution integral for the sum, but I'm not sure it's possible.
2502840
PDF of [imath](AB)^C[/imath] where [imath]A, B, C \sim U[0,1][/imath] Let [imath]A[/imath], [imath]B[/imath], [imath]C[/imath] be i.i.d uniform random variables on [imath](0,1)[/imath]. What is the distribution of [imath](AB)^C[/imath]? What is a good way to go about answering a question like this? Trying to find the CDF of this distribution and then differentiating to find the PDF doesn't seem to be a particularly nice way to go about this problem. Instead I have been trying to use this change of variables method (example here). In particular: Let [imath]X = A[/imath], [imath]Y = B[/imath], [imath]Z = (AB)^C[/imath]. (So the inverse transformation is [imath]A = X[/imath], [imath]B = Y[/imath], [imath]C = \frac{\log(Z)}{\log(XY)}[/imath].) Then, the joint probability distribution of [imath]X[/imath], [imath]Y[/imath], [imath]Z[/imath] is given by: \begin{align} f_{X,Y,Z}(x,y,z) & = f_{A,B,C}(a(x),b(y),z(x,y,z))\cdot|\frac{\partial (a,b,c)}{\partial (x,y,z)}| \\ & = |\frac{\partial (a,b,c)}{\partial (x,y,z)}| \\ & = \frac{-1}{z \cdot \log(xy)} \\ \end{align} In order to find the distribution of [imath]Z[/imath], I would now need to integrate this w.r.t x and y over [imath][0,\frac{z}{y}]\times[0,1][/imath]: [imath]f_Z(z) = \int_0^1 \int_0^{\frac{z}{y}} \frac{-1}{z \cdot \log(xy)} \,dx \,dy \,\,\,\,\,\,\,\, (\text{for} \, z \in (0,1))[/imath] This is not easy (i.e. I don't know how to do this - maybe I am missing a trick as it is a (improper) definite integral?). The issue seems to be: either I am going about this question in the wrong way, or, I am using the wrong change of variables. Indeed, since finding the distribution of Z using the change of variables method only involves integrating the Jacobian, using a different, more clever set of new variables may lead to a much nicer integral - I just have not been able to find such a set of variables. Any help would be much appreciated. Edit: In light of the comment below that the distribution of [imath](AB)^C[/imath] is just [imath]U[0,1[/imath], it seems that a good way to go forward is to consider the MGF. (This did not originally seem like a particularly good option as I had guessed that the distribution would be somewhat esoteric, rather than a ‘known’ one.) Note: Wolfram Alpha says that the integral of [imath]\frac{-1}{\log(xy)}[/imath] over [imath][0,1]^2[/imath], for example is 1 - am I missing something obvious here?
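A Monte Carlo probe of the claim mentioned in the edit above, namely that (AB)^C again looks uniform on (0,1) (numpy sketch; the sample size and seed are arbitrary choices, and simulation is evidence, not proof):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
a, b, c = rng.random(n), rng.random(n), rng.random(n)
z = (a * b) ** c

# If Z were U(0,1), the empirical CDF at t would be close to t.
for t in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(t, float(np.mean(z <= t)))
```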
2305429
Let [imath](M,d)[/imath] be a complete metric space and [imath]N \subset M[/imath] a closed subset [imath]\Rightarrow N[/imath] is complete. Let [imath](M,d)[/imath] be a complete metric space and [imath]N \subset M[/imath] a closed subset [imath]\Rightarrow N[/imath] is complete. Proof: take a Cauchy sequence in [imath](N,d)[/imath]; then this Cauchy sequence converges in [imath]M[/imath] since [imath]M[/imath] is complete. Now we need to show that this convergence occurs in [imath]N[/imath]. Since [imath]N[/imath] is closed, [imath]\cdots[/imath] Question: any hint on how to proceed with the above reasoning?
244661
Showing that if a subset of a complete metric space is closed, it is also complete Let [imath](X, d(x,y))[/imath] be a complete metric space. Prove that if [imath]A\subseteq X[/imath] is a closed set, then [imath]A[/imath] is also complete. My attempt: I tried to prove that every Cauchy sequence [imath](b_n)[/imath] of points of [imath]A[/imath] converges to a point [imath]b\in A[/imath]. However could not figure out the exit way. Maybe I am on the wrong track. Could you please help me? edit: More from my attempt: Suppose [imath]A[/imath] is a closed set and let [imath](x_n)[/imath] be a sequence of points [imath]A[/imath] such that [imath]\lim_n x_n\to b[/imath]. Suppose now that [imath]A[/imath] has the property that [imath]b\in A[/imath], whenever [imath]x_n[/imath] converges to [imath]b[/imath]. We know that every element of [imath]x_n[/imath] which is convergent in [imath]X[/imath] also converges to a point in [imath]A[/imath]. Since [imath]x_n[/imath] is a Cauchy sequence in [imath]A[/imath], it must converge to a point [imath]y\in A[/imath]. But the limit of a convergent sequence is unique. Take [imath]x\in A[/imath] and select an appropriate [imath]n[/imath], which enables [imath]x_n[/imath] to converge to a point [imath]x[/imath] in [imath]X[/imath]. Since the limit is unique, it must follow that [imath]x=y[/imath]. Thus [imath]x\in A[/imath] and [imath]A[/imath] is closed. If [imath]A[/imath] is a closed subset of [imath]X[/imath], then any Cauchy sequence of a point in [imath]A[/imath] is convergent in [imath]X[/imath] and hence converges to a point in [imath]A[/imath]. Thus [imath]A[/imath] is complete.
2306374
I want to compute the eigenvalues of this matrix. Help me, please. \begin{pmatrix} 2na & -a & -a & -a & -a & -a& -a\\ -a& a+b & 0 & 0 & -b & 0 & 0\\ -a& 0 & a+b & 0 & 0 & -b &0 \\ -a& 0 & 0 & a+b & 0 & 0&-b \\ -a& -b & 0 & 0 & a+b & 0 & 0\\ -a& 0&-b & 0 & 0 & a+b &0 \\ -a& 0& 0&-b & 0 & 0 & a+b \end{pmatrix} The above matrix is for [imath]n=3[/imath]. There are block matrices [imath](a+b)I_3[/imath] and [imath](-b)I_3[/imath] above. I think the characteristic polynomial of this matrix for any natural number [imath]n[/imath] can be calculated efficiently. The rank is [imath]2n[/imath], so the characteristic polynomial should have zero as its constant term. But I cannot calculate the determinant, and I can't get zero when I put [imath]t=0[/imath] when I calculate [imath]\text{char}(M)(t)[/imath] for any [imath]n[/imath]. Also, I would like to know about a form, for example the spectral decomposition or Jordan form, to compute the eigenvalues of this matrix rapidly. Any help would be appreciated.
2305384
How can I compute the eigenvalues or the characteristic polynomial of this matrix? Please help. \begin{pmatrix} 2na & -a & -a & -a & -a & -a& -a\\ -a& a+b & 0 & 0 & -b & 0 & 0\\ -a& 0 & a+b & 0 & 0 & -b &0 \\ -a& 0 & 0 & a+b & 0 & 0&-b \\ -a& -b & 0 & 0 & a+b & 0 & 0\\ -a& 0&-b & 0 & 0 & a+b &0 \\ -a& 0& 0&-b & 0 & 0 & a+b \end{pmatrix} The matrix is shown for [imath]n=3[/imath]. There are block matrices [imath](a+b)I_3[/imath] and [imath](-b)I_3[/imath] above. I think the characteristic polynomial of this matrix for any natural number [imath]n[/imath] can be calculated efficiently. The rank is [imath]2n[/imath], so the characteristic polynomial should have zero as its constant term. But I cannot calculate the determinant, and I can't get zero when I put [imath]t=0[/imath] when I calculate [imath]\text{char}(M)(t)[/imath] for any [imath]n[/imath]. Any help would be appreciated.
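The matrix in both posts can be generated for any n and examined numerically, which at least makes the conjectured facts (symmetric, all row sums zero, hence eigenvalue 0 and rank at most 2n) easy to test. A numpy sketch follows; the helper `build` and the sample values a = 1, b = 2 are illustrative choices, not from the posts:

```python
import numpy as np

def build(n, a, b):
    """(2n+1)x(2n+1) matrix: border of -a (corner entry 2na) around the block
    (a+b)*I_{2n} - b*(permutation swapping the two halves of the index set)."""
    m = 2 * n
    M = np.zeros((m + 1, m + 1))
    M[0, 0] = 2 * n * a
    M[0, 1:] = -a
    M[1:, 0] = -a
    swap = np.kron(np.array([[0.0, 1.0], [1.0, 0.0]]), np.eye(n))
    M[1:, 1:] = (a + b) * np.eye(m) - b * swap
    return M

M = build(3, a=1.0, b=2.0)
print(np.round(np.linalg.eigvalsh(M), 6))   # M is symmetric, so eigvalsh applies
print(np.linalg.matrix_rank(M))             # all row sums vanish, so the rank is at most 2n
```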
108952
If [imath][G:H]=n[/imath], then [imath]g^{n!}\in H[/imath] for all [imath]g\in G[/imath]. I have the following question: Let [imath]G[/imath] be a group and let [imath]H[/imath] be a subgroup of finite index of [imath]G[/imath]. Let [imath]|G:H|=n[/imath] Then it holds: [imath]g^{n!}\in H[/imath] for all [imath]g\in G[/imath]. Why is this true? I think, that's not very difficult, but I have no idea at the moment. Thanks!
2735454
If [imath]H \le G[/imath] and [imath]H[/imath] has index 2 in [imath]G[/imath] then [imath]a^2 \in H, \forall a\in G[/imath] Claim: If [imath]H \le G[/imath] and [imath]H[/imath] has index 2 in [imath]G[/imath] then [imath]a^2 \in H, \forall a\in G[/imath] My attempt: There will be exactly two cosets; let's call them [imath]H[/imath] and [imath]aH[/imath] respectively. If [imath]a\in H[/imath] then [imath]a^2\in H[/imath] because of the closure property of [imath]H[/imath]. In the second case, when [imath]a\in aH[/imath], then [imath]aHaH = a^2H[/imath], so now how do I prove that [imath]a^2 \in H[/imath]? Please help.
491948
What's the limit of [imath]\prod_1^\infty \left(1-\frac{1}{2^n}\right)=(1-1/2)(1-1/4)(1-1/8)...[/imath]? I know that [imath]\prod_1^\infty \left(1-\frac{1}{2^n}\right)[/imath] converges to a positive number because the series [imath]\sum 2^{-n}[/imath] is convergent. Do we know the limit? If so, how? Aside: I am interested in this product because it describes asymptotically the fraction of [imath]n\times n[/imath] matrices with entries in [imath]\mathbb{F}_2[/imath] that are nonsingular.
2874402
Value of a specific q-series / infinite product? While working on a different problem, I've run into the infinite product: [imath]\prod_{i=1}^\infty (1-2^{-i}) [/imath] Which I'm reasonably sure should be equal to [imath]1-\frac{1}{\sqrt{2}}[/imath]. EDIT: It definitely doesn't equal this value, nevermind However, I don't really have any notion of how to work with / prove things about an infinite product of this form. I tried considering the log of the partial products, but that doesn't seem to make the numerical value any easier to evaluate. Doing some googling, these sorts of products seem to be called Euler products, or q series (see here: http://mathworld.wolfram.com/q-PochhammerSymbol.html). However, I can't find much or anything online about methods for finding their value. Is there any process that would yield a precise answer in this case?
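The partial products are easy to evaluate numerically, which settles the guess made (and retracted) in the second post (Python sketch; 60 factors is far more than double precision needs):

```python
prod = 1.0
for i in range(1, 61):          # partial product of (1 - 2**-i)
    prod *= 1 - 2.0 ** -i
print(prod)                     # approx 0.288788..., the limit of the product
print(1 - 2 ** -0.5)            # approx 0.292893..., so the product is not 1 - 1/sqrt(2)
```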
2307073
Proof of some statement How can I prove this inequality [imath]|\log(1+x^2)-\log(1+y^2)| \leq |x-y|[/imath] where [imath]x, y \in (0,+\infty)[/imath]? I'm trying, but I have no idea how to show it. Please, someone help.
1623832
Prove that [imath]|\log(1 + x^2) - \log(1 + y^2)| \le |x-y|[/imath] I need to show that [imath] \forall x,y \in \mathbb R, |\log(1 + x^2) - \log(1 + y^2)| \le |x-y|[/imath] I tried using the concavity of the log function: [imath]\log(1 + x^2) - \log(1 + y^2)=\log(\frac{1 + x^2}{1 + y^2})=\log(\frac{x^2y^2}{(1 + y^2)y^2}+\frac{1}{1 + y^2}) \ge \frac{2(\log(x)-\log(y))}{1+y^2}[/imath] Also the mean value theorem: [imath]0<\frac{\log(1 + x^2) - \log(1 + y^2)}{x^2-y^2} <1[/imath] But both attempts have not led very far.
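For this pair, one standard route (not taken in either post) is the mean value theorem applied to g(t) = log(1 + t^2), whose derivative is bounded by 1 in absolute value:

```latex
% Mean value theorem applied to g(t) = \log(1+t^2).
\[
\log(1+x^2)-\log(1+y^2) = g'(c)\,(x-y) \quad\text{for some } c \text{ between } x \text{ and } y,
\qquad g'(t)=\frac{2t}{1+t^2},
\]
\[
|g'(t)| = \frac{2|t|}{1+t^2}\le 1 \quad\text{because } (|t|-1)^2\ge 0,
\qquad\text{hence}\quad
\bigl|\log(1+x^2)-\log(1+y^2)\bigr|\le|x-y| .
\]
```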
2306314
R integral domain, Q its field of fraction, M R-module with nontrivial annihilator. Is Ext(Q,M) always 0? Let [imath]R[/imath] be an integral domain, let [imath]Q[/imath] be its field of fractions, and let [imath]M[/imath] be an [imath]R[/imath]-module with nontrivial annihilator. Determine if [imath]\mathrm{Ext}_{R}^{n}(Q,M) = 0[/imath] for all [imath]n \geq 0[/imath]. Intuitively, I believe it is true, because the most natural example [imath]R = \mathbb{Z}[/imath], [imath]Q = \mathbb{Q}[/imath], and [imath]M = \mathbb{Z}_m[/imath] looks promising. However, this example is "bad", because [imath]\mathbb{Z}[/imath] is a PID, and therefore divisible is equivalent to injective. My first thought is constructing a short exact sequence, for example, [imath]0 \to Ann_R(M) \to Q \to M \to 0[/imath] and look at its induced long exact sequence on the cohomology. However, I never succeed. My professor suggests that I should somehow use the fact that [imath]Q[/imath] is the field of fractions, maybe though the fact that [imath]Q[/imath] will kill all the torsions (considering the induced cohomology on [imath]Tor[/imath]). Can anyone give me some more suggestions? Thanks!
1341377
Ext[imath]_R^n(Q,A)=0=[/imath]Tor[imath]_n^R(Q,A)[/imath] where [imath]Q[/imath] is the field of fractions of a domain [imath]R[/imath] I am currently working through a problem in Rotman: Let [imath]R[/imath] be a domain and let [imath]Q=[/imath]Frac[imath](R)[/imath]. If [imath]r\in R[/imath] is nonzero and [imath]A[/imath] is an [imath]R[/imath]-module for which [imath]rA=0[/imath], prove that for all [imath]n\geq 0[/imath], [imath]\mathrm{Ext}_R^n(Q,A)=0=\mathrm{Tor}_n^R(Q,A)[/imath]. I think I have the Tor part: Since [imath]Q[/imath] is flat, Tor[imath]_n^R(Q,A)=0[/imath] for all [imath]n\geq 1[/imath]. And Tor[imath]_0^R(Q,A)=Q\otimes_R A=0[/imath] because: for each [imath]\frac{t}{s}\otimes a\in Q\otimes A[/imath], [imath]\frac{t}{s}\otimes a=\frac{tr}{sr}\otimes a=\frac{t}{sr}\otimes ra =0[/imath]. However, I am having trouble with the Ext part. To do it directly I either need a projective resolution for [imath]Q[/imath] or an injective resolution for [imath]A[/imath], but I am not sure how I would resolve either of these. Or I considered using a long exact sequence for the short exact sequence [imath]0\rightarrow R\rightarrow Q\rightarrow Q/R\rightarrow 0[/imath], but that didn't seem to get me anywhere either. I have been stuck on this problem for far too long so any help or hints are greatly appreciated.
837300
Evaluate limit of integration [imath]\lim_{x \to \infty} (\int_0^x e^{t^2} dt)^2/\int_0^x e^{2t^2} dt[/imath] How to evaluate [imath]\lim_{x \to \infty} \dfrac {\bigg(\displaystyle\int_{0}^x e^{t^2} dt \bigg)^2} {\displaystyle\int_{0}^x e^{2t^2} dt} =\ ?[/imath] Can we apply L'Hospital's rule?
1231158
Compute [imath]\lim_{x \rightarrow +\infty} \frac{[\int^x_0 e^{y^{2}} dy]^2}{\int^x_0 e^{2y^{2}}dy}[/imath] I've tried to apply L'hopitals rule on this one, as this get's [imath]\frac{\infty}{\infty}[/imath] [imath]\lim_{x \rightarrow +\infty} \frac{[\int^x_0 e^{y^2}\mathrm{d}y]^2}{\int^x_0 e^{2y^2}\mathrm{d}y}[/imath] [imath]\frac{\mathrm{d} }{\mathrm{d} x}[\int^x_0 e^{y^2}\mathrm{d}y]^2 = 2[\int^x_0 e^{y^2}\mathrm{d}y] * [\frac{\mathrm{d} }{\mathrm{d} x}\int^x_0 e^{y^2}\mathrm{d}y]=2(e^{x^2}-1)(e^{x^2})[/imath] and [imath]\frac{\mathrm{d} }{\mathrm{d} x}[\int^x_0 e^{2y^2}\mathrm{d}y]=e^{2x^2}[/imath] so [imath]\lim_{x \rightarrow +\infty} \frac{[\int^x_0 e^{y^2}\mathrm{d}y]^2}{\int^x_0 e^{2y^2}\mathrm{d}y} = \lim_{x \rightarrow +\infty} \frac{2(e^{x^2}-1)(e^{x^2})}{e^{2x^2}} = 2\lim_{x \rightarrow +\infty} \frac{(e^{x^2}-1)}{e^{x^2}}=2\lim_{x \rightarrow +\infty} (1-\frac{1}{e^{x^2}})=2[/imath] But the answer is [imath]0[/imath], so I think I've done a mistake somewhere I can't figure out where.
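The slip in the attempt above is evaluating [imath]\int_0^x e^{y^2}dy[/imath] as [imath]e^{x^2}-1[/imath], which would require [imath]e^{y^2}[/imath] to be its own antiderivative (it is not; its derivative is [imath]2ye^{y^2}[/imath]). One standard route (not from either post) is to apply L'Hopital twice, differentiating the integrals with the fundamental theorem of calculus:

```latex
\[
\lim_{x\to\infty}\frac{\Bigl(\int_0^x e^{t^2}\,dt\Bigr)^2}{\int_0^x e^{2t^2}\,dt}
=\lim_{x\to\infty}\frac{2e^{x^2}\int_0^x e^{t^2}\,dt}{e^{2x^2}}
=\lim_{x\to\infty}\frac{2\int_0^x e^{t^2}\,dt}{e^{x^2}}
=\lim_{x\to\infty}\frac{2e^{x^2}}{2x\,e^{x^2}}
=\lim_{x\to\infty}\frac{1}{x}=0 .
\]
```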
2308424
Prove the following equalities about matrices Let [imath]A, B \in M_{2}(\mathbb{R})[/imath] two matrices such that [imath](A-B)^2=O_2[/imath]. Prove: 1) [imath]\det(A^2-B^2)=(\det(A) - \det(B))^2[/imath] 2) [imath]\det(AB-BA)=0[/imath] iff [imath]\det(A)=\det(B)[/imath] My attempt Using Cayley it follows [imath]\det(A-B)=0[/imath] and [imath]tr(A-B)=0[/imath] therefore [imath]tr(A)=tr(B)=t[/imath] and [imath]A^2 = tA-\det(A)I_2, B^2 = tB-\det(B)I_2[/imath]. From the last two equalities: [imath]A^2-B^2=t(A-B) - (\det(A) - \det(B))I_2[/imath] Here I've got stuck, because applying determinant to the last equality doesn't seem to lead somewhere.
1905448
If [imath](A-B)^2=O_2[/imath] then [imath]\det(A^2 - B^2)=(\det(A) - \det(B))^2[/imath] Let [imath]A,B \in M_2(\mathbb{R})[/imath] be two matrices such that [imath](A-B)^2=O_2[/imath]. Prove [imath]\det(A^2 - B^2)=(\det(A) - \det(B))^2[/imath]. OBS. [imath]O_2[/imath] is the zero matrix Let [imath]D=A-B[/imath] then [imath]D^2=O_2[/imath] therefore, using Cayley Hamilton theorem, we get [imath]tr(D)=\det(D)=0[/imath]. It follows [imath]tr(A)=tr(B)=t[/imath] and, from here, [imath]A^2 -B^2=t(A-B) + (\det(A) - \det(B))I_2[/imath] This is all I could get. UPDATE The matrices are from [imath]M_2(\mathbb{R})[/imath]. Sorry for misleading you.
1565711
Group of order [imath]255[/imath] is cyclic Let [imath]G[/imath] a group and its order is [imath]255[/imath]. Prove that [imath]G[/imath] is cyclic. I easily demonstrated that the group has only one [imath]17[/imath]-Sylow subgroup [imath]P[/imath] that is normal in [imath]G[/imath] and it's cyclic since it is of a prime order. Then [imath]G/P[/imath] is also cyclic since a group of order [imath]15[/imath] is cyclic. Then [imath]G[/imath] can be seen as [imath]G=P(G/P)[/imath] since the orders are coprime and then [imath]G[/imath] is cyclic. Is it correct?
255441
How can I prove that every group of [imath]N = 255[/imath] elements is commutative? There was a previous task that was the same but with [imath]N = 185[/imath], and I proved it by showing that the number of Sylow subgroups is 1 for every prime [imath]p\mid N[/imath]. But here I have several options: [imath]N_5 \in \{1, 51\}[/imath], [imath]N_{17} = 1[/imath], [imath]N_3 \in \{1, 85\}[/imath]. I've tried to get a contradiction from [imath]N_5 = 51[/imath] or [imath]N_3=85[/imath], but I didn't manage to do it. I understand that it's impossible to have [imath]N_5 = 51[/imath] and [imath]N_3=85[/imath] at the same time.
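The Sylow-count options quoted in the second post come from the constraints that n_p divides 255/p and n_p is congruent to 1 mod p; a short scan reproduces them (Python sketch, purely arithmetic bookkeeping):

```python
# Candidate numbers of Sylow p-subgroups for a group of order 255 = 3 * 5 * 17.
N = 255
for p in (3, 5, 17):
    cofactor = N // p
    options = [d for d in range(1, cofactor + 1) if cofactor % d == 0 and d % p == 1]
    print(p, options)   # 3: [1, 85], 5: [1, 51], 17: [1]
```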
1909611
For what conditions on sets [imath]A[/imath] and [imath]B[/imath] the statement [imath]A - B = B - A[/imath] holds? The obvious response is [imath]A = B[/imath]. But I came up with another response. I can't figure out what's my bias. Lets say [imath]S = A - B[/imath] [imath]T = B - A[/imath] Proving [imath]S=T[/imath] means proving [imath]S \subseteq T[/imath] and [imath]T \subseteq S[/imath] Let's start with [imath]S \subseteq T[/imath] For any [imath]x \in S[/imath] : [imath]x \in A[/imath] [imath]x \notin B[/imath] if [imath]S \subseteq T[/imath], [imath]x \in T[/imath] and : [imath]x \in B[/imath] [imath]x \notin A[/imath] There is no [imath]x[/imath] giving satisfaction, so [imath]A[/imath] and [imath]B[/imath] are empty. I prove [imath]T \subseteq S[/imath] by symmetry My solution is : [imath]A = B = \emptyset[/imath] What's wrong ?
2118095
When is [imath]A- B = B- A[/imath]? Hello I am taking a Discrete Mathematics course and having some trouble with this question about sets: Under what conditions is [imath]A- B = B- A[/imath] Diagram: [imath]A- B[/imath] Diagram: [imath]B- A[/imath] Maybe I'm understanding incorrectly, but how can A-B and B-A be equal if they contain elements that are not in each other?
2308880
Solving [imath]2 \times 5^n - 1 = m^2[/imath] in natural numbers This problem isn't of any particular importance in itself, but came up for me randomly, and though I couldn't crack it, I thought learning how would likely be instructive in improving my number theory skills. Is it possible to completely classify the solutions to [imath]2 \times 5^n - 1 = m^2[/imath] in natural numbers (i.e., the values of [imath]n[/imath] for which the left-hand expression is indeed a square)? Those [imath]n \in \{0, 1, 2\}[/imath] clearly work, and after that most don't, but are these indeed all the solutions, and if so (or if whatever alternative holds), how does one see so?
1149851
Integer solutions to [imath] n^2 + 1 = 2 \times 5^m[/imath] What are the integer solutions to the diophantine equation [imath]n^2 + 1 = 2 \times 5 ^m? [/imath] We have [imath](n,m) = (3,1), (7, 2) [/imath] as solutions. Are there any more? This seems like it would be a well known diophantine equation, but I can't seem to find any information about it.
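As with the similar Diophantine pair earlier, a quick exact-arithmetic scan shows which small exponents work (Python sketch; the bound 2000 is arbitrary, and a search is not a proof):

```python
from math import isqrt

solutions = []
for n in range(0, 2000):
    v = 2 * 5 ** n - 1
    m = isqrt(v)              # exact integer square root of v
    if m * m == v:
        solutions.append((n, m))
print(solutions)              # [(0, 1), (1, 3), (2, 7)] in this range
```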
2308528
Where is the sine function transcendental? Most of the values of the sine function that I am familiar with are irrational, like [imath]\sin(\pi/3)[/imath] or [imath]\sin(\pi/6)[/imath], or even rational like [imath]\sin(\pi)[/imath] or [imath]\sin(0)[/imath]. Surely the sine function must give transcendental values somewhere, so my question is this: is there a way to determine whether the sine function will be transcendental or not? And if so, how?
112938
When is [imath]\sin x[/imath] an algebraic number and when is it non-algebraic? Show that if [imath]x[/imath] is rational, then [imath]\sin x[/imath] is an algebraic number when [imath]x[/imath] is in degrees and [imath]\sin x[/imath] is non-algebraic when [imath]x[/imath] is in radians. Details: so we have that [imath]\sin(p/q)[/imath] is algebraic when [imath]p/q[/imath] is in degrees; that is what my book says. Of course [imath]\sin (30^{\circ})[/imath], [imath]\sin 45^{\circ}[/imath], [imath]\sin 90^{\circ}[/imath], and halves of them are algebraic, but I'm not so sure about [imath]\sin(1^{\circ})[/imath]. Also, is this an existence proof or is there actually a way to show the full radical solution? One way to get this started is to change degrees to radians: x deg = pi/180 * x radian. So if x = p/q, then sin (p/q deg) = sin ( pi/180 * p/q rad). Therefore, without loss of generality, the question is to show sin (pi*m/n rad) is algebraic, and then show sin (m/n rad) is non-algebraic.
2309014
Sequence and series. What is [imath]b_n[/imath]? (a) If [imath]a,b[/imath] are positive quantities such that [imath](a<b)[/imath] and if [imath]a_1 = \frac{a+b}{2}[/imath], [imath]b_1 = \sqrt{a_1b}[/imath], [imath]a_2 = \frac{a_1+b_1}{2}[/imath], [imath]b_2 = \sqrt{a_2b_1},\dotsc, a_n = \frac{a_{n-1}+b_{n-1}}{2}[/imath], [imath]b_n=\sqrt{a_nb_{n-1}},\dotsc[/imath] then show that [imath]\lim_{n\to\infty} b_n = \frac{\sqrt{b^2-a^2}}{\cos^{-1}\frac{a}{b}}[/imath]. Please give me hints so that I can solve it. I am stuck. I found that [imath]b_n=\frac {1}{2}\sqrt{a_{n-2}b_{n-1}+a_{n-2}b_{n-2}+b_{n-2}b_{n-1}+a_{n-2}b_{n-3}}[/imath] Now I'm stuck; I can't understand what to do.
2301340
Show that [imath]\lim_{n\to\infty}b_n=\frac{\sqrt{b^2-a^2}}{\arccos\frac{a}{b}}[/imath] If [imath]a[/imath] and [imath]b[/imath] are positive real numbers such that [imath]a<b[/imath] and if [imath]a_1=\frac{a+b}{2}, b_1=\sqrt{(a_1b)},..., a_n=\frac{a_{n-1}+b_{n-1}}{2},b_n=\sqrt{a_nb_{n-1}},[/imath] then show that [imath]\lim_{n\to\infty}b_n=\frac{\sqrt{b^2-a^2}}{\arccos\frac{a}{b}}.[/imath] I tried to calculate explicitly the first few terms [imath]a_2,b_2[/imath] etc but the terms got too complicated really quickly and I couldn't spot any pattern.
2308799
Show that [imath]\mathcal{O}_{\mathbb{P^1}}(-1)[/imath] has no nonzero global sections. Show that [imath]\mathcal{O}_{\mathbb{P^1}}(-1)[/imath] has no nonzero global sections. I can cover [imath]\mathbb{P}^1[/imath] by two open sets [imath]U_0[1: x_1 / x_0]= [1 :y_1][/imath] and [imath]U_1[x_0/x_1 : 1] = [z_0 : 1][/imath] The transition function of the sheaf is given by [imath]g_1 (z_0) = x_1 / x_0 f_o(y_0)[/imath] To show that [imath]\mathcal{O}_{\mathbb{P}^1}(-1)[/imath] has no nonzero global sections, I need to show that there do not exist sections [imath]g_0 \in \mathcal{O}_{\mathbb{P}^1}(U_0)[/imath] and [imath]g_1 \in \mathcal{O}_{\mathbb{P}^1}(U_1)[/imath] such that [imath] g_0 = x_1/x_0 g_1[/imath] How would one efficiently go about proving such a statement? (There is a related question; however, the solutions to that question again state that this sheaf has no nonzero global sections but do not explain why.)
112926
Global sections of [imath]\mathcal{O}(-1)[/imath] and [imath]\mathcal{O}(1)[/imath], understanding structure sheaves and twisting. In chapter 2 section 7 (pg 151) of Hartshorne's algebraic geometry there is an example given that talks about automorphisms of [imath]\mathbb{P}_k^n[/imath]. In that example Hartshorne states that [imath]\mathcal{O}(-1)[/imath] has no global sections. However, we know that [imath]\mathcal{O}(1)[/imath] is generated by global sections. This is stated at the first of the section that if [imath]\mathbb{P}_k^n = Proj k[x_0,...,x_n][/imath] then the [imath]x_0,...,x_n[/imath] give rise to global sections [imath]x_0,...,x_n\in\Gamma(\mathbb{P}_k^n,\mathcal{O}(1))[/imath]. I guess I don't understand this twisted structure sheaf very well, or to be honest I don't think I understand structure sheaves in general as well as I would like. The twisting part seems simple at first- you shift the grading of Proj over and then take the structure sheaf. Admittedly I don't feel comfortable using the structure sheaf other than using the basic facts about it that Hartshorne gives when it is introduced. If anyone could give some insight as to what's going on here or how I might be able to understand this better it would be much appreciated. Thanks.
2308606
How can we prove it? [imath]{a}{b}≤\frac{a^p}{p}+\frac{b^q}{q}[/imath] [imath](i)[/imath] [imath]a>0,b>0,p>0,q>0[/imath] [imath](ii)[/imath] [imath]\frac{1}{p}+\frac{1}{q}=1[/imath] How can we prove it? [imath]{a}{b}≤\frac{a^p}{p}+\frac{b^q}{q}[/imath]
1063125
Proving that [imath]\frac{u^p}{p}+\frac{v^q}{q}\ge uv[/imath] under the condition [imath]\frac{1}{p}+\frac{1}{q}=1[/imath] The following is a problem (6.10) from Rudin's principles of Mathematical analysis. Let [imath]p[/imath] and [imath]q[/imath] be positive real numbers such that [imath]\frac{1}{p}+\frac{1}{q}=1.[/imath] Prove that if [imath]u\ge 0[/imath] and [imath]v\ge 0[/imath], then [imath]uv\le \frac{u^p}{p}+\frac{v^q}{q}.[/imath] I can prove this by using Weighted Arithmetic mean Geometric mean inequality and also by using Jensen's inequality on natural logarithm (this is usually used to the prove generalized AM-GM). I would like to see alternate elementary methods (preferably avoiding multivariate calculus methods) to solve this (I think Rudin hasn't introduced convexity before this; so generalized AM-GM is cheating).
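One elementary, single-variable argument for this inequality (not from either post, and different from the weighted AM-GM and Jensen approaches mentioned there) is to fix v and minimize over u:

```latex
% Fix v >= 0 and minimize f(u) = u^p/p + v^q/q - uv over u >= 0 (note that 1/p + 1/q = 1 forces p, q > 1).
\[
f'(u) = u^{p-1} - v, \qquad f''(u) = (p-1)\,u^{p-2} \ge 0,
\]
so \(f\) is convex and attains its minimum at \(u_* = v^{1/(p-1)} = v^{q-1}\) (using \((p-1)(q-1)=1\)).
At this point
\[
u_*^{p} = v^{p/(p-1)} = v^{q}, \qquad u_* v = v^{q-1}\cdot v = v^{q}, \qquad
f(u_*) = v^{q}\Bigl(\tfrac1p + \tfrac1q - 1\Bigr) = 0,
\]
hence \(f(u) \ge 0\) for every \(u \ge 0\), i.e. \(uv \le \frac{u^p}{p} + \frac{v^q}{q}\).
```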
2308885
In a finite set of positive integers, if we replace any two elements by their GCD and LCM in one step, the numbers stop changing eventually. Here's my proof: We designate an active pool that includes precisely those integers that will be involved in the process at some point, and for which there exists at least one integer such that performing the process on the pair actually results in the changing of the set. Then we implement the following iterative process: Choose any two arbitrary integers in the active pool and replace them by their GCD and LCM. Then, if the GCD will never be used for the process again, we remove it from the active pool. If the GCD is [imath]1[/imath], it is also removed from the active pool. Otherwise, it is placed in the active pool, and the process reiterates. Clearly, this process halts in finitely many steps, as size of the active pool is reduced to nil, and thus when the numbers stop changing. I don't think that the proof is incorrect, but it feels uneasy. So, is it actually correct?
2087037
I need some help with a GCD and LCM problem Given a list [imath]A[/imath] of [imath]n[/imath] positive integers. We're gonna play this game: [imath]1 -[/imath] Take [imath]2[/imath] numbers of [imath]A[/imath] at random. [imath]2 -[/imath] Delete these [imath]2[/imath] elements of [imath]A[/imath]. [imath]3 -[/imath] Insert in [imath]A[/imath] their gcd and lcm. [imath]4 -[/imath] Go to step [imath]1.[/imath] Prove that after some number of steps, [imath]A[/imath] doesn't change its elements.
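A small simulation of the game from this pair makes the claimed stabilization easy to watch (Python sketch; the list size and value range are arbitrary, and of course running it proves nothing):

```python
import random
from math import gcd

def lcm(x, y):
    return x * y // gcd(x, y)

a = [random.randint(2, 60) for _ in range(6)]
print("start :", sorted(a))

changed = True
while changed:                          # sweep until no pair changes the multiset
    changed = False
    for i in range(len(a)):
        for j in range(i + 1, len(a)):
            g, l = gcd(a[i], a[j]), lcm(a[i], a[j])
            if sorted((g, l)) != sorted((a[i], a[j])):
                a[i], a[j] = g, l
                changed = True

print("stable:", sorted(a))             # every remaining pair already equals its (gcd, lcm)
```

Two facts the simulation suggests and that are easy to check by hand: gcd(x, y) * lcm(x, y) = x * y, so the product of the list never changes, while gcd(x, y) + lcm(x, y) >= x + y with equality exactly when the pair is already its own gcd/lcm; since the sum strictly increases at every changing step and stays bounded (every entry divides the lcm of the original list), only finitely many changing steps can occur, which is one possible handle on the proof.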
2309153
The diagonal in a [imath]T_1[/imath] space. Hi! I have some trouble with the following problem. Let [imath](X,\tau)[/imath] be a topological space. Prove that [imath]X[/imath] is [imath]T_1[/imath] if and only if there exists a family [imath]U[/imath] of open sets such that [imath]\bigcap U=\Delta[/imath] where [imath]\Delta=\left\{(x,x): x\in X\right\}\subseteq X\times X[/imath] (the diagonal) My attempt. [imath]\Rightarrow)[/imath] We know that [imath]X\times X[/imath] is [imath]T_1[/imath] because [imath]X[/imath] is [imath]T_1[/imath]. Moreover, we have the following theorem for [imath]T_1[/imath] spaces Theorem Let [imath]X[/imath] be a topological space. The following conditions are equivalent. 1) [imath]X[/imath] is [imath]T_1[/imath] 2) For all [imath]B\subseteq X[/imath], [imath]B=\bigcap \left\{ U: B\subseteq U, U\in\tau\right\}[/imath] 3) For all [imath]x\in X[/imath], [imath]\left\{x\right\}=\bigcap\left\{U : U\in\tau, x\in U\right\}[/imath] Then, the implication follows from 3). But, what can I do for [imath]\Leftarrow)[/imath]? I have tried to prove that [imath]\left\{x\right\}[/imath] is closed, but I have failed. My best attempt was to consider two distinct points [imath]x[/imath] and [imath]y[/imath]. Clearly, [imath](x,y)\notin\Delta[/imath], then [imath](x,y)\notin\bigcap U[/imath], so there exists some basic open set such that [imath](x,y)\notin A\times B[/imath]. Then, my idea was to use the projection [imath]\Pi_X[/imath], but, from here, I'm so confused. Thanks in advance.
2306420
Showing that a topological space is [imath]{\rm T}_1[/imath] Let [imath]X[/imath] be a topological space and let [imath]\Delta = \{(x,x) : x\in X \}[/imath] be the diagonal of [imath]X\times X[/imath] (with the product topology). I was asked to prove that [imath]X[/imath] is [imath]{\rm T}_1[/imath] if and only if [imath]\Delta[/imath] can be written as intersection of open subsets of [imath]X\times X[/imath]. I think is better (maybe easier) to use the well-known result "[imath]X[/imath] is [imath]{\rm T}_1[/imath] if and only if [imath]\{x\}[/imath] is a closed set [imath]\forall x\in X[/imath]". What I've done: Assuming that [imath]X[/imath] is [imath]{\rm T}_1[/imath], let [imath]Y=(X\times X) \setminus\Delta[/imath] and note that (trivially) [imath]Y = \bigcup_{y\in Y} \{y\}.[/imath] Then [imath]\Delta = (X\times X) \setminus Y = (X\times X) \setminus \left(\bigcup_{y\in Y} \{y\} \right) = \bigcap_{y\in Y} (X\times X)\setminus \{y\},[/imath] where [imath](X\times X)\setminus\{y\}[/imath] is open since each [imath]\{y\}[/imath] is closed. The other direction seems be a bit harder, I've tried unsuccessfully. Any ideas? Thanks in advance.
2308964
Complex series sum Show that for [imath]|z|<1[/imath], [imath]\sum_{n=0}^\infty \frac{z^{2^n}}{1-z^{2^{n+1}}}=\frac{z}{1-z}[/imath] and [imath]\sum_{n=0}^\infty \frac{2^n z^{2^n}}{1+z^{2^n}}=\frac{z}{1-z}[/imath] As a hint, it is suggested to use the dyadic expansion of an integer. I have no idea how to proceed. Please help.
105412
How can I show that [imath]\sum\limits_{n=1}^\infty \frac{z^{2^n}}{1-z^{2^{n+1}}}[/imath] is algebraic? Show that [imath]\sum_{n=1}^\infty \frac{z^{2^n}}{1-z^{2^{n+1}}}[/imath] is algebraic. More specifically, solve this and get exact values. Then use the result to evaluate [imath]\sum_{n=0}^\infty \frac{1}{F_{2^n}}[/imath] where [imath]F_n=\frac{\alpha^n-\beta^n}{\alpha-\beta}[/imath] and [imath]\alpha=\frac{1+\sqrt{5}}{2}[/imath] and [imath]\beta=\frac{1-\sqrt{5}}{2}[/imath].
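Both identities in this pair are easy to probe numerically before attempting the dyadic-expansion argument; writing z^(2^n) by repeated squaring keeps the code short (Python sketch; the function names are ad hoc):

```python
def lhs1(z, terms=40):
    total, w = 0.0, z                  # w holds z**(2**n)
    for _ in range(terms):
        total += w / (1 - w * w)       # z^(2^n) / (1 - z^(2^(n+1)))
        w *= w
    return total

def lhs2(z, terms=40):
    total, w = 0.0, z
    for n in range(terms):
        total += 2 ** n * w / (1 + w)  # 2^n z^(2^n) / (1 + z^(2^n))
        w *= w
    return total

for z in (0.1, 0.5, -0.7):
    print(z, lhs1(z), lhs2(z), z / (1 - z))   # the three computed values should agree
```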
2309276
Find Jordan basis and Jordan form A linear transformation [imath]\varphi[/imath] of the space [imath]\mathbf{R}_n[/imath] is given in the basis [imath]\mathbf{e}_1\ldots\mathbf{e}_n[/imath] by the matrix: [imath]A = \begin{pmatrix}3 & 2 & -3 \\ 4 &10 & -12 \\ 3 & 6 &-7\end{pmatrix}[/imath] I have to find a basis [imath]\mathbf{f}_1\ldots\mathbf{f}_n[/imath] in which the given matrix has Jordan form [imath]A_j[/imath], and find the Jordan form itself. Which steps should I follow to find both?
1809724
Finding Jordan basis of a matrix ([imath]3\times3[/imath] example) Our teacher didn't explain us how to find it so I've had to look up a bit by myself. I have this matrix : [imath]A = \begin{pmatrix} 9 & 4 & 5 \\ -4 & 0 & -3 \\ -6 & -4 & -2 \end{pmatrix}[/imath] I've found its characteristic polynomial [imath]p_A(\lambda) = -(\lambda -3)(\lambda - 2)^2[/imath]. I've found its eigenspaces : [imath]E_2 = span(\begin{pmatrix}-2 & 1 & 2\end{pmatrix}^T)[/imath], [imath]E_3 = span(\begin{pmatrix}-3 & 2 & 2\end{pmatrix}^T)[/imath]. And I've found the generalized eigenspace of [imath]\lambda = 2[/imath] : [imath]\ker((A-2I_3)^2) = span(\begin{pmatrix}0 & 1 & 0\end{pmatrix}^T, \begin{pmatrix}1 & 0 & -1\end{pmatrix}^T)[/imath] But I don't know what to do now. I've tried to see what result I would get with [imath]M = \begin{pmatrix} -3 & -2 & 0 \\ 2 & 1 & 1 \\ 2 & 2 & 0 \end{pmatrix}.[/imath] But [imath]M^{-1}AM = \begin{pmatrix} 3 & 0 & 0 \\ 0 & 2 & -2 \\ 0 & 0 & 2 \end{pmatrix}.[/imath] So there it is. Any help ?
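If a library check alongside the hand computation is acceptable, SymPy computes both the Jordan form and a Jordan basis at once; `jordan_form` returns a pair (P, J) with A = P J P^{-1}, the columns of P being a Jordan basis. A sketch using the 3x3 example from the second post:

```python
from sympy import Matrix

A = Matrix([[9, 4, 5],
            [-4, 0, -3],
            [-6, -4, -2]])

P, J = A.jordan_form()        # A = P * J * P**(-1); the columns of P form a Jordan basis
print(J)                      # a 1x1 block for eigenvalue 3 and a 2x2 block for eigenvalue 2
print(P)
print(P * J * P.inv() == A)   # should print True
```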
2309391
Proof that [imath] (\mathbb{Z} \times \mathbb{Z},+) /\langle(2,3)\rangle [/imath] is cyclic. I need a little help with the second half of the proof that [imath] (\mathbb{Z} \times \mathbb{Z},+) / \langle(2,3)\rangle [/imath] is cyclic. I know isomorphism preserves cyclic structure. I looked up the answer in the textbook from which I got this exercise, and their proof is like this: "The function [imath]f : \mathbb{Z} \rightarrow \mathbb{Z} \times \mathbb{Z} /\langle(2,3)\rangle[/imath] with [imath]f(x) = \widehat{(x,x)}[/imath] is an isomorphism of groups because: [imath](x,x)[/imath] is not in [imath]\langle(2,3)\rangle[/imath] for every [imath]x \neq 0[/imath]. [imath](a,b) =(b-a)(2,3)+(3a-2b)(1,1)[/imath] for [imath]a,b[/imath] in [imath]\mathbb{Z}[/imath]." I think in the first part they try to show that the function is injective, but I don't understand the second part; can someone explain it to me? Thanks a lot!
2152350
How to find [imath]k[/imath] such that [imath](\mathbb{Z} \times \mathbb{Z})/ \langle (m,n)\rangle \cong \mathbb{Z} \times \mathbb{Z}_k[/imath] In John Fraleigh's book, A First Course In Abstract Algebra Exercises 15.7 and 15.11, one shows that [imath] (\mathbb{Z} \times \mathbb{Z})/ \langle (1,2)\rangle \cong \mathbb{Z} \times \mathbb{Z}_1 \cong \mathbb{Z} \ \ \ \ \  \mbox{ and  } \ \ \ \ \ (\mathbb{Z} \times \mathbb{Z})/ \langle (2,2)\rangle \cong \mathbb{Z} \times \mathbb{Z}_2 [/imath] One does this with the first isomorphism theorem. With the same idea I proved for example that [imath] (\mathbb{Z} \times \mathbb{Z})/ \langle (2,3)\rangle \cong \mathbb{Z} \times \mathbb{Z}_1 \cong \mathbb{Z} \ \ \ \ \ \mbox{ and }\ \ \ \ \ (\mathbb{Z} \times \mathbb{Z})/ \langle (2,4)\rangle \cong \mathbb{Z} \times \mathbb{Z}_2 [/imath] So I conjectured that [imath](\mathbb{Z} \times \mathbb{Z})/ \langle (m,n)\rangle \cong \mathbb{Z} \times \mathbb{Z}_{k}[/imath] where [imath]k=\mathrm{gdc}(m,n)[/imath]. For the previous four cases, the homomorphism [imath]\phi: \mathbb{Z}\times \mathbb{Z} \to \mathbb{Z} \times\mathbb{Z}_k[/imath] given by [imath] \phi(x,y)=\left(\frac{nx-my}{k}, \ x \ \ (\mathrm{mod} \ k) \right) [/imath] is surjective with kernel =[imath]\langle (m,n)\rangle[/imath]. However, this is not the case when [imath](m,n)=(4,6)[/imath]. So, I cannot use what I did for the four cases to prove the general case. What I want to know is if my conjecture is true. If so, how can I give a general homomorphism? If it is not true, how can I find [imath]k[/imath], such that [imath](\mathbb{Z} \times \mathbb{Z})/ \langle (m,n)\rangle \cong \mathbb{Z} \times \mathbb{Z}_k[/imath]? Thanks in advance for any help/hint/comment!
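One standard way to confirm the conjecture in the second post (not the route taken there) is to view the subgroup as the image of a 1x2 integer matrix and put that matrix in Smith normal form; a sketch:

```latex
Let $g = \gcd(m,n)$ and pick integers $x, y$ with $xm + yn = g$.  Then
\[
Q = \begin{pmatrix} x & -n/g \\ y & m/g \end{pmatrix} \in GL_2(\mathbb{Z}),
\qquad
\begin{pmatrix} m & n \end{pmatrix} Q = \begin{pmatrix} g & 0 \end{pmatrix},
\]
so the automorphism of $\mathbb{Z}\times\mathbb{Z}$ given by $Q$ carries
$\langle (m,n)\rangle$ onto $\langle (g,0)\rangle$, and therefore
\[
(\mathbb{Z}\times\mathbb{Z})/\langle (m,n)\rangle \;\cong\;
(\mathbb{Z}\times\mathbb{Z})/\langle (g,0)\rangle \;\cong\; \mathbb{Z}_{g} \times \mathbb{Z},
\]
which confirms $k = \gcd(m,n)$ (for example $k = 2$ for $(m,n) = (4,6)$).
```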
2308065
Dual of a Comma Category Let [imath]F:\mathcal{C} \longrightarrow \mathcal{D}[/imath] be a functor between two categories and [imath]d[/imath] be an object of [imath]\mathcal{D}[/imath]. Does the following hold: [imath](F \downarrow d)^{op} \cong (d \downarrow F^{op})[/imath] If yes, how can I prove this?
2284113
How do we construct the coslice category [imath]C / \bf{C}[/imath] given [imath]\textbf{C}/C[/imath] and prove that the two are equal? The coslice category [imath]C/\bf{C}[/imath] under an object [imath]C \in \bf{C}[/imath] has as objects arrows [imath]f[/imath] in [imath]\bf{C}[/imath] such that [imath]\textbf{dom}(f) = C[/imath] and as arrows, arrows [imath]a: f \to g[/imath] which are arrows [imath]a[/imath] in [imath] \bf{C}[/imath] such that [imath]af = g[/imath]. So I want to construct this coslice category from the slice category [imath]\textbf{C}/C[/imath] and the [imath]\textbf{op}[/imath] operator (opposite or "dual" categories). [imath]\textbf{C}/C[/imath] is the same thing as [imath]C/\bf{C}[/imath] except in the definition [imath]\textbf{cod}(f) = C[/imath] and [imath]f = ga[/imath]. So my guess is that [imath]\textbf{C}^{\text{op}}/C = C/\textbf{C}[/imath] since [imath](\textbf{C}/C)^{\text{op}}[/imath] would first discard some arrows, namely those strictly coming from [imath]C[/imath] (not going to). But intuitively we need take the [imath]\textbf{op}[/imath] of the whole thing: [imath](\textbf{C}^{\text{op}}/C)^{\text{op}}[/imath] since otherwise all of our objects would be arrows pointing at [imath]C[/imath] not under [imath]C[/imath]. Therefore that is my final guess. Now what would suffice as a proof that the two are equal: [imath](\textbf{C}^{\text{op}}/C)^{\text{op}} = C/\bf{C}[/imath]?
2308708
Why does this result only apply to primes congruent 3 mod 4? Problem statement (and solution): Let p be a prime satisfying [imath]p \equiv 3 \pmod{4}[/imath]. Show that if the equation [imath]x^2\equiv a \pmod{p}[/imath] is soluble, then its solution is given by (1): [imath]x \equiv \pm a^{\frac{p+1}{4}} \pmod{p}[/imath] Solution: [imath]x^2 \equiv a \pmod{p}[/imath] implies that [imath](a\mid p) = 1[/imath] So by Euler's criterion: [imath]a^{\frac{p-1}{2}} \equiv 1 \pmod{p}[/imath] if (1) holds, we have: [imath]x^2 \equiv a^{\frac{p+1}{2}} \equiv a^{\frac{p+1}{2}} * a^{\frac{p-1}{2}} \equiv a^p \pmod{p}[/imath] By Fermat's Little Theorem, [imath]a^p \equiv a \pmod{p}[/imath] This quite obviously also works in the other direction. My only real question is, why does the problem statement limit the space to primes congruent to 3 mod 4? Why would this exact proof not work for primes congruent 1 mod 4? What am I missing?
1230974
Let [imath]a[/imath] be a quadratic residue modulo [imath]p[/imath]. Prove that the number [imath]b\equiv a^\frac{p+1}{4} \mod p[/imath] has the property that [imath]b^2\equiv a \mod p[/imath]. Let [imath]p[/imath] be a prime satisfying [imath]p\equiv 3 \mod 4[/imath]. Let [imath]a[/imath] be a quadratic residue modulo [imath]p[/imath]. Prove that the number [imath]b\equiv a^\frac{p+1}{4} \mod p[/imath] has the property that [imath]b^2\equiv a \mod p[/imath]. (Hint: Write [imath]\frac{p+1}{2}[/imath] as [imath]1+\frac{p-1}{2}[/imath] and use Exercise [imath]3.36[/imath].) This gives an easy way to take square roots modulo [imath]p[/imath] for primes that are congruent to [imath]3[/imath] modulo [imath]4[/imath]. I assume that the proof comes directly from the proof of quadratic residues but I am not sure how.
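Both questions in this pair revolve around the same exponent trick, so a small numeric sketch may help; the prime [imath]p=23[/imath] and residue [imath]a=13[/imath] below are illustrative choices, not values from either question. One reason the hypothesis [imath]p\equiv 3 \pmod 4[/imath] matters is that [imath](p+1)/4[/imath] is an integer only for such primes, so the exponent in the formula does not even make sense otherwise.

```python
# Square roots modulo a prime p with p ≡ 3 (mod 4), via b ≡ a^((p+1)/4) (mod p).
p = 23            # a prime with p % 4 == 3
a = 13            # a quadratic residue mod p, since 6^2 = 36 ≡ 13 (mod 23)

assert pow(a, (p - 1) // 2, p) == 1    # Euler's criterion: a really is a residue
b = pow(a, (p + 1) // 4, p)
print(b, (b * b - a) % p == 0)         # b = 6 here, and b^2 ≡ a (mod p)
```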
1165692
Collinearity in the complex plane with the unit circle (a) Suppose p and q are points on the unit circle such that the line through p and q intersects the real axis. Prove that if z is the point where this line intersects the real axis, then [imath]z=\frac{p+q}{pq+1}[/imath] (b) Let [imath]P_1P_2...P_{18}[/imath] be a regular 18-gon. Prove that [imath]P_1P_{10}[/imath], [imath]P_2P_{13}[/imath], and [imath]P_3P_{15}[/imath] are concurrent. As for (a), I knew that z must be real given that it is on the real axis. Also, I can use the collinearity equation [imath]\frac{v-z}{u-z} = \overline{\left(\frac{v-z}{u-z}\right)}[/imath] and plug in values p, q, and z. Given that z is real, [imath]\overline{z}=z[/imath] so our equation for collinearity is now [imath]\frac{v-z}{u-z}=\frac{\overline{v}-z}{\overline{u}-z}[/imath]. Past here I'm not really sure. As for (b), I believe I need to use something from what is proved in (a) and therefore have not made any progress. Any help would be much appreciated!
1250880
Precalculus unit circle with imaginary axis. (a) Suppose [imath]p[/imath] and [imath]q[/imath] are points on the unit circle such that the line through [imath]p[/imath] and [imath]q[/imath] intersects the real axis. Show that if [imath]z[/imath] is the point where this line intersects the real axis, then [imath]z = \dfrac{p+q}{pq+1}[/imath]. (b) Let [imath]P_1 P_2 \dotsb P_{18}[/imath] be a regular 18-gon. Show that [imath]P_1 P_{10}[/imath], [imath]P_2 P_{13}[/imath], and [imath]P_3 P_{15}[/imath] are concurrent. I have gotten nowhere on this problem, but I have a hint: One of those three segments is more interesting than the other two. Which one, and why? And how can you use that fact to make part (a) relevant? Any help is appreciated!
2309613
General Solution to [imath]x^2-2y^2=1[/imath] Find a general solution to [imath]x^2-2y^2=1[/imath]. I found that [imath](3,2)[/imath] is a solution. Now what should I do? I cannot quite see what the question really wants. It is about Pell's equation. Would you give me a form of the general solution?
2095694
Find all integer solutions to [imath]x^2-2y^2=1[/imath] For the Pell's equation where [imath]d=2[/imath]: [imath]x^2-2y^2=1[/imath] What are all the integer solutions to the equation. Apparantly there are infinitely many solutions, but how would I represent them in an expression?
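For what it is worth, here is a minimal sketch of how the general solution is usually packaged: every positive solution comes from the fundamental one [imath](3,2)[/imath] via [imath]x_{k+1}=3x_k+4y_k[/imath], [imath]y_{k+1}=2x_k+3y_k[/imath], equivalently [imath]x_k+y_k\sqrt2=(3+2\sqrt2)^k[/imath]; together with the sign choices [imath](\pm x,\pm y)[/imath] this covers all integer solutions.

```python
# Generate solutions of x^2 - 2y^2 = 1 from the fundamental solution (3, 2).
x, y = 1, 0                               # trivial solution, k = 0
for k in range(1, 6):
    x, y = 3 * x + 4 * y, 2 * x + 3 * y
    print(k, x, y, x * x - 2 * y * y)     # the last column stays equal to 1
```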
2309064
How to find the modulus [imath]n[/imath] if [imath]ab \equiv 1 \pmod n[/imath]? Need help with RSA cryptography questions. Numbers have each been encoded using RSA with a modulus of [imath]m = p q = 496241[/imath] (with [imath]p[/imath] and [imath]q[/imath] being primes) and encoding exponent of [imath]218821[/imath]. You are advised that [imath]\{13631, 142703\}[/imath] is a valid encoding–decoding pair for the same modulus, [imath]m[/imath]. [imath](a)[/imath] Use this information to determine [imath]\phi(m)[/imath] for this modulus. (Using software to directly factorise m is not a valid option for doing this part.) [imath]de \equiv 1 \pmod{\phi(m)}[/imath], right? So in this case: [imath]13631\times 142703 \equiv 1 \pmod {\phi(m)}[/imath]. Am I right so far? How do I calculate what the mod is, and therefore what [imath]\phi(m)[/imath] is? [imath](b)[/imath] Verify your answer by determining the primes [imath]p[/imath] and [imath]q[/imath]. Show how these combine to give both [imath]m[/imath] and [imath]\phi(m)[/imath]. [imath](c)[/imath] Calculate the decoding exponent for [imath]218821[/imath], as encoding exponent, using the extended Euclidean algorithm. (Again, using software to directly obtain this is not a valid option, though you are welcome to use software to confirm your answer.) [imath](d)[/imath] For each of the [imath]12[/imath] numbers in your message, verify they have no prime factors in common with [imath]m[/imath]. (It is OK to use software for this task, provided you have answered the previous part.)
2309238
How to factorise a large number without a calculator? I would like to factorise [imath]496241[/imath]. I know the answer is [imath]677 \times 733[/imath]. But I don't know how to get there. Here is the full question: "A message has been encoded using RSA with a modulus of [imath]m = p q = 496241[/imath] (with [imath]p[/imath] and [imath]q[/imath] being primes) and encoding exponent of [imath]218821[/imath]. You are advised that [imath]\{13631, 142703\}[/imath] is a valid encoding–decoding pair for the same modulus, [imath]m[/imath]. (a) Use this information to determine [imath]\phi(m)[/imath] for this modulus. (Using software to directly factorise [imath]m[/imath] is not a valid option for doing this part.) (b) Verify your answer by determining the primes [imath]p[/imath] and [imath]q[/imath]. Show how these combine to give both [imath]m[/imath] and [imath]\phi(m)[/imath]." I need it so I can solve (a), and it says I cannot use software to directly factorise m. So I guess pen and paper. But there are no tutorials online. Thanks
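As a sketch of the standard approach hinted at in both questions (not a full write-up): [imath]e_0 d_0-1[/imath] is a multiple of [imath]\phi(m)[/imath], and once a candidate [imath]\phi(m)[/imath] is found, [imath]p+q=m-\phi(m)+1[/imath], so [imath]p[/imath] and [imath]q[/imath] are the roots of a quadratic. The small search below just automates that arithmetic; the search bound and variable names are arbitrary, and the same steps can be carried out by hand, which is what part (a) seems to expect.

```python
from math import isqrt

m = 496241
e0, d0 = 13631, 142703        # the known encode/decode pair for this modulus

k = e0 * d0 - 1               # a multiple of phi(m)
for t in range(1, 100000):
    if k % t:
        continue
    phi = k // t
    if phi >= m:
        continue
    s = m - phi + 1           # candidate value of p + q
    disc = s * s - 4 * m
    if disc >= 0 and isqrt(disc) ** 2 == disc:
        p, q = (s + isqrt(disc)) // 2, (s - isqrt(disc)) // 2
        if p > 1 and q > 1 and p * q == m:
            print(phi, p, q)  # should recover phi(m) and the factors 677, 733
            break
```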
2309871
Chemical Equation Balance How do I solve this problem without using the null space of a matrix? [imath]4NH_3 + Cl_2 \rightarrow N_2 H_4 + 2NH_4Cl[/imath] Also, before that, I need to learn how to find the vectors. If someone could show me, it would be great.
1418988
Balancing chemical equations using linear algebraic methods I know there are already plenty of questions on this site regarding this topic but I am having difficulty with a particular chemical equation. I am trying to balance the following: [imath] { C }_{ 2 }{ H }_{ 2 }{ Cl }_{ 4 }\quad +\quad { C }a{ { (OH }) }_{ 2 }\quad \xrightarrow [ ]{ } \quad { C }_{ 2 }{ H }{ Cl }_{ 3 }\quad +\quad Ca{ Cl }_{ 2 }\quad +\quad { H }_{ 2 }{ O } [/imath] The system of linear equations produces the following augmented matrix: [imath] \begin{pmatrix} 2 & 0 & -2 & 0 & 0 & 0 \\ 2 & 2 & -1 & 0 & -2 & 0 \\ 4 & 0 & -3 & -2 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 0 \\ 0 & 2 & 0 & 0 & -1 & 0 \end{pmatrix} [/imath] With the rows in the following order: Carbon Hydrogen Chlorine Calcium Oxygen In row echelon form this reduces to: [imath] \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \end{pmatrix} [/imath] Which would indicate that: x1 = 0; x2 = 0; x3 = 0; x4 = 0; x5 = 0 which is obviously not correct. What have I done wrong?
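Since both questions ask about the null-space method, here is a minimal sympy sketch for the reaction in the second question; the column/row ordering is the same as above, and the [imath]-1[/imath] in the calcium row (CaCl[imath]_2[/imath] is a product) appears to be the sign that went missing in the matrix above and forced the all-zero solution.

```python
from sympy import Matrix

# Columns: C2H2Cl4, Ca(OH)2, C2HCl3, CaCl2, H2O; rows: C, H, Cl, Ca, O.
A = Matrix([
    [2, 0, -2,  0,  0],   # carbon
    [2, 2, -1,  0, -2],   # hydrogen
    [4, 0, -3, -2,  0],   # chlorine
    [0, 1,  0, -1,  0],   # calcium (product side gets a minus sign)
    [0, 2,  0,  0, -1],   # oxygen
])
v = A.nullspace()[0]
v = v / min(x for x in v if x != 0)   # rescale to small whole-number coefficients
print(v.T)                            # expect the balanced coefficients (2, 1, 2, 1, 2)
```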
2310297
Where does the Laplace kernel come from? Where does the [imath]e^{-st}[/imath] kernel come from in a Laplace transform?
1946189
A question about the idea of Laplace transform In the Laplace integral transform equation one multiplies the function [imath]f(t)[/imath] by [imath]e^{-st}[/imath]. I read in many tutorials that [imath]e^{-st}[/imath] decays much faster than any other function, so the integral converges. But what I don't understand is what makes one think to multiply a function by [imath]e^{-st}[/imath] to transform it into the [imath]s[/imath] domain. What is the motivation behind it? Why would you suddenly come up with an idea such as: "Oh, there is a function [imath]f(t)[/imath] in the time domain; what can I do to transform it to the complex frequency ([imath]s[/imath]) domain? Hmm, let me multiply it by [imath]e^{-st}[/imath] and integrate it from zero to infinity." What would have motivated this idea of multiplying and integrating for the transformation?
2310545
Proving linear bijection of vector space mapping. As the title states above, I am trying to show that if [imath]T : U \rightarrow V [/imath] is a linear bijection, then [imath]T^{-1} : V \rightarrow U[/imath] is also a linear bijection, where [imath]U[/imath] and [imath]V[/imath] are vector spaces. Representing [imath]U = \lbrace u_1, u_2, u_3...,u_n \rbrace [/imath] and [imath]V = \lbrace v_1, v_2, v_3,....,v_n\rbrace[/imath] for some [imath]n \in \mathbb N[/imath], I understand that the bijectivity of [imath]T[/imath] means that each vector in [imath]U[/imath] will map onto only one vector of [imath]V[/imath] (as the assumption suggests). However, I do not know how to translate this knowledge to [imath]T^{-1}[/imath]. Thank you in advance (I am new to proofs).
1645103
Show that an inverse of a bijective linear map is a linear map. So I've got a bijection. It clearly has an inverse, but how exactly do I prove that the inverse is a linear map as well? Suppose that the linear map [imath]T:U\to V[/imath] is a bijection. So [imath]T[/imath] has an inverse map [imath]T^{-1}:V\to U[/imath]. Prove that [imath]T^{-1}[/imath] is a linear map.
2310483
How to find the sum of Beatty sequence of floor(e)+floor(2*e)+floor(3*e)...... How to find the sum of Beatty sequence of [imath]\lfloor e \rfloor+\lfloor 2e\rfloor+\lfloor 3e \rfloor\dots[/imath] and so on? Can it be reduced to recursion?
2307399
Solve summation [imath]\sum_{i=1}^n \lfloor e\cdot i \rfloor [/imath] How to solve [imath]\sum_{i=1}^n \lfloor e\cdot i \rfloor [/imath] For a given [imath]n[/imath]. For example, if [imath]n=3[/imath], then the answer is [imath]15[/imath], and it's doable by hand. But for larger [imath]n[/imath] (Such as [imath]10^{1000}[/imath]) it gets complicated . Is there a way to calculate this summation?
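One standard reduction, sketched here rather than fully worked out: for irrational [imath]\alpha>0[/imath] and [imath]m=\lfloor n\alpha\rfloor[/imath], counting the lattice points under the line [imath]y=\alpha x[/imath] in two ways gives [imath]\sum_{i=1}^{n}\lfloor i\alpha\rfloor+\sum_{j=1}^{m}\lfloor j/\alpha\rfloor=nm[/imath]. For example, with [imath]\alpha=e[/imath] and [imath]n=3[/imath] one has [imath]m=8[/imath], [imath]2+5+8=15[/imath], [imath]0+0+1+1+1+2+2+2=9[/imath], and indeed [imath]15+9=24=3\cdot 8[/imath]. Splitting off the integer part of [imath]\alpha[/imath] (or of [imath]1/\alpha[/imath]) and applying this identity repeatedly shrinks the summation limit much like the Euclidean algorithm, which is the recursion the first question is asking about; the practical catch for something like [imath]n=10^{1000}[/imath] is that [imath]e[/imath] must be known to enough digits (or via its continued fraction) for every floor to be exact.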
2310725
Recurrent sequence of matrices Let [imath]A, X_0 \in \mathbb{R}^{n\times n}[/imath] and [imath]\det(A) \ne 0[/imath]. Define the following recurrence: [imath]X_{k+1} = X_k + X_k(I-AX_k)[/imath] Prove [imath]\lim_{k \rightarrow \infty} X_k = A^{-1} \iff \rho(I-AX_0) < 1[/imath] where [imath]\rho(B)[/imath] denotes the largest eigenvalue (in absolute value) of the matrix [imath]B[/imath]. Any hints on how to prove it?
1696484
Prove that if [imath]A[/imath] is nonsingular, then the sequence [imath]X_{k+1}=X_k+X_k(I-AX_k)[/imath] converges to [imath]A^{-1}[/imath] if and only if [imath]ρ(I-X_0A)<1[/imath]. Prove that if [imath]A[/imath] is nonsingular, then the sequence [imath]X_{k+1}=X_k+X_k(I-AX_k)[/imath] where [imath]A[/imath] and [imath]X_k[/imath] are [imath]n\times n[/imath] matrices with [imath]k=0,1,2,...[/imath] converges to [imath]A^{-1}[/imath] if and only if [imath]ρ(I-X_0A)<1[/imath]. I'm really stuck on this problem. Any hints are greatly appreciated.
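A quick numerical sanity check of the "if" direction may help build intuition (a sketch only; the [imath]2\times 2[/imath] matrix below is an arbitrary nonsingular example, not from either question). The key algebraic fact is that the residual [imath]E_k=I-AX_k[/imath] satisfies [imath]E_{k+1}=E_k^2[/imath], so [imath]\rho(E_0)<1[/imath] forces [imath]E_k\to 0[/imath] and hence [imath]X_k\to A^{-1}[/imath].

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])                       # nonsingular (det = 10)
X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))  # classic start, rho(I - A X0) < 1

for _ in range(15):
    X = X + X @ (np.eye(2) - A @ X)              # X_{k+1} = X_k + X_k (I - A X_k)

print(np.allclose(X, np.linalg.inv(A)))          # expected: True
print(np.max(np.abs(np.linalg.eigvals(np.eye(2) - A @ X))))  # residual spectral radius ~ 0
```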
2311016
Definition of [imath]d (P (x ,y )dx)[/imath] I know it is defined as [imath] dP \wedge dx [/imath] or explicitly, [imath] \frac{\partial P }{\partial y } dy \wedge dx . [/imath] The question is, could it be [imath]dx \wedge d P [/imath]? Or [imath] \frac{\partial P }{\partial y } dx \wedge dy ? [/imath]
2310274
Definition of [imath]d (P (x ,y )dx)[/imath] I know it is defined as [imath] dP \wedge dx [/imath] or explicitly, [imath] \frac{\partial P }{\partial y } dy \wedge dx . [/imath] The question is, could it be [imath]dx \wedge d P [/imath]? Or [imath] \frac{\partial P }{\partial y } dx \wedge dy ? [/imath] I know the question must be very naive or even stupid. But I am indeed confused.
2311667
Easy and elementary proof that roots of a polynomial cannot contain a ball Consider a non constant polynomial [imath]f:\Bbb R^n\to \Bbb R[/imath]. Let [imath]S_f\subset \Bbb R^n[/imath] be its solution set i.e. the set of [imath]x\in \Bbb R^n[/imath] such that [imath]f(x)=0[/imath]. What's the slickest proof that [imath]S_f[/imath] can't contain any ball, or equivalently, that [imath]S_f^c[/imath] is open and dense? My try: let [imath]r_0[/imath] be such that [imath]f(r_0)=0[/imath]. Then consider the new polynomial [imath]g(x)=f(x+r_0)[/imath]. If [imath]B_\epsilon(r_0)\subset S_f[/imath], then [imath]B_\epsilon(0)\subset S_g[/imath]. Also, [imath]g[/imath] is non constant. Clearly [imath]g[/imath] doesn't have a constant term. Now, suppose [imath]g[/imath] is homogeneous, by which I mean all of [imath]g[/imath]'s terms are of the same order [imath]k[/imath]. Then, for any [imath]y\in\Bbb R^n[/imath], find [imath]L>0[/imath] so that [imath]\|y\|/L<\epsilon[/imath]; then [imath]g(y/L)=0[/imath], so [imath]g(y)=g(y/L)L^k=0[/imath], contradicting the fact that [imath]g[/imath] is non constant. If [imath]g[/imath] is not homogeneous, we consider the following two cases: A) in each term in [imath]g[/imath], [imath]x_1,\cdots,x_n[/imath] all appear. This indicates all terms have a common factor of the form [imath]c(x)=(x_1\cdots x_n)^p[/imath] such that [imath]g(x)=c(x)h(x)[/imath] where in at least one term in [imath]h(x)[/imath], not all of [imath]x_1,\cdots,x_n[/imath] are present. (A way to do this is to keep factoring out [imath](x_1\cdots x_n)[/imath] until you cannot do it any more.) As such, [imath]S_g^c=S_h^c\cap S_c^c[/imath], but it's easy to show [imath]S_c^c[/imath] is open and dense, so by Baire's Theorem it suffices to show [imath]S_h^c[/imath] is open and dense, or [imath]S_h[/imath] doesn't contain any ball. So we just repeat the initial discussion, i.e. consider a new polynomial [imath]h(x+r_1)[/imath] (we can't directly jump to the next case since [imath]h[/imath] may contain a constant term). Note that there may be several cycles, but ultimately we can only visit case A) finitely many times, since each time we visit case A) we come to consider a polynomial of strictly lower order than previously. Hence, we must eventually arrive at the next case; B) there's at least one term in [imath]g[/imath] in which there's at least one variable, say [imath]x_k[/imath], which is absent. Thus, we fix [imath]x_k=0[/imath], and [imath]g[/imath] becomes a non-zero polynomial in fewer variables. I find the struck-out arguments futile. In fact I also tried to generalise the linear scaling argument to non-homogeneous cases, i.e. assigning different scaling rates to different variables, but was quickly disappointed to find it wouldn't work even for the univariable case, say [imath]x^3+x^2+x[/imath]. Yet another attempt (the struck-out part) I made was to try to select a suitable variable and fix all the other variables equal to [imath]0[/imath] to obtain a univariable non-zero polynomial (pretty much like choosing a suitable coordinate axis to move around on). Since this would make the new polynomial vanish on a whole short segment about zero on the real line, we get a contradiction. However, this was unable to deal with cases like [imath]x_1x_2+x_2x_3+x_1x_3[/imath] where no such suitable axis exists. I really need some enlightenment now. Thanks. PS: absolutely no background in algebraic geometry.
36970
A polynomial that is zero on an open set Suppose that a polynomial [imath]p(x,y)[/imath] defined on [imath]\mathbb{R}^2[/imath] is identically zero on some open ball (in the Euclidean topology). How does one go about proving that this must be the zero polynomial?
1769424
Maximum value of [imath]\frac{\alpha\overline{\beta}+\overline{\alpha}\beta}{|\alpha\beta|}[/imath] Maximum value of [imath]\frac{\alpha\overline\beta+\overline\alpha\beta}{|\alpha\beta|}[/imath] is 1) 2 2) 1 3) none of the above. Considering [imath]\alpha=x+iy[/imath] and [imath]\beta=m+in[/imath], on evaluating the expression I got [imath]\frac{2(xm+ny)}{\sqrt{(x^{2}+y^{2})(m^{2}+n^{2})}}[/imath] which is [imath]\leq \frac{2(xm+ny)}{\sqrt{4xymn}}[/imath]. The least value of this is 2. So can we call it the maximum value?
2310666
Proving an identity relating to the complex modulus: [imath]z\bar{a}+\bar{z}a \leq 2|a||z|[/imath] Show that [imath]z\bar{a}+\bar{z}a \leq 2|a||z|[/imath]. I was able to check this using some examples, which is not an ideal way for a mathematician to prove a statement. I couldn't generalize it (prove it). I would be glad to be given a hint/proof for this problem. Thank you.
2312526
Is it true, that [imath]\lim_{x \to \infty} {\bigg(\frac {\int_a^b{g^{x}(t)dt}}{b - a}\bigg)}^{\frac {1}{x}} = max(g(t))[/imath]? Is the statement, that [imath]\lim_{x \to \infty} {\bigg(\frac {\int_a^b{g^{x}(t)dt}}{b - a}\bigg)}^{\frac {1}{x}} = max(g(t))[/imath], where max(g(t)) is the maximal value, taken by function g(t) on [a, b], true for each real-valued function g(t), that is continuous on [a, b] and takes only positive values on it? I could neither prove it, nor find any counterexamples to it. Any help will be appreciated.
2047500
Prove that [imath]\lim_{n \to \infty}\bigg[\int_0^1f(t)^n \text{dt}\bigg]^{1/n}=M[/imath] Suppose that [imath]f[/imath] is a continuous, non-negative function on the interval [imath][0,1][/imath]. Let [imath]M[/imath] be the maximum of [imath]f[/imath] on the interval. Prove that [imath]\lim_{n \to \infty}\bigg[\int_0^1f(t)^n \text{dt}\bigg]^{1/n}=M[/imath] We wrote out some simple examples to show it worked for functions such as [imath]x^2[/imath]. We are having trouble finding how to create a general proof. Thanks for any help!
692166
Proving a basic property about derived sets and unions I'm having some trouble showing these two equal each other. I was doing some research and apparently this is not always true in a topological sense but I think that's out of the scope of this class. [imath](A \cup B)' = A' \cup B'[/imath] [imath]A'[/imath] is the set of all derived points of A.
498904
Derived sets - prove [imath](A \cup B)' = A' \cup B'[/imath] I'm trying to prove [imath](A \cup B)' = A' \cup B'[/imath] where [imath]S'[/imath] denotes the derived set of some subset [imath]S[/imath] of a topological space [imath]X[/imath]. A derived set [imath]S'[/imath] of a set [imath]S[/imath] is the set of [imath]x \in X[/imath] such that [imath]x[/imath] is in the closure of [imath]S-\{x\}[/imath]. I'm having troubling showing that [imath](A \cup B)' \subseteq A' \cup B'[/imath]. What I've done so far: Suppose [imath]x \in (A \cup B)'[/imath]. Then for any neighborhood [imath]U[/imath] of [imath]x[/imath] it follows that [imath]U[/imath] intersects [imath](A \cup B)-\{x\}[/imath] so [imath]U[/imath] intersects [imath]A-\{x\}[/imath] or [imath]B-\{x\}[/imath]. But I don't see why we couldn't have one neighborhood of [imath]x[/imath], [imath]U_{1}[/imath], which intersects [imath]A-\{x\}[/imath] but not [imath]B-\{x\}[/imath] and another neighborhood [imath]U_{2}[/imath] which does the opposite. In this case [imath]x[/imath] would be in neither [imath]A'[/imath] nor [imath]B'[/imath]. We can say that [imath]U_{1} \cap U_{2}[/imath] is not only [imath]\{x\}[/imath] because then [imath]\{x\}[/imath] would be open contradicting [imath]x \in (A \cup B)'[/imath].
2309917
Jordan Normal Form and Minimal Polynomial Write down all possible Jordan normal forms for matrices with characteristic polynomial [imath](x - \lambda)^5[/imath]. In each case, calculate the minimal polynomial and the geometric multiplicity of the eigenvalue [imath]\lambda[/imath]. For the only eigenvalue [imath]\lambda[/imath], are the possible JNFs just obtained by placing 1s in some entries above the diagonal, since the minimal polynomial can have any degree from 1 to 5? I figured out the possible JNFs using the possible minimal polynomials [imath](x-\lambda)[/imath], [imath](x-\lambda)^2[/imath], [imath](x-\lambda)^3[/imath], ..., [imath](x-\lambda)^5[/imath]. In total, there are 7 possibilities ([imath]\lambda[/imath] has to appear 5 times, so the block sizes form a partition of 5). But I don't understand why the number of blocks gives the geometric multiplicity, since each represents one eigenspace. Thank you so much!
576139
Jordan normal form for a characteristic polynomial [imath](x-a)^5[/imath] Write down all the possible Jordan normal forms for matrices with characteristic polynomial [imath](x-a)^5[/imath]. In each case, calculate the minimal polynomial and the geometric multiplicity of the eigenvalue [imath]a[/imath]. Verify that this information determines the Jordan normal form. I found this question in a textbook that I'm using for a test I have tomorrow. I think that I need to use the method for finding the Jordan normal form of a matrix but I can't see how to apply it and I don't have much intuition about the answer... I'm guessing that there are 5 possibilities since the minimal polynomial can be any factor of the characteristic, but I don't know how to prove this. Some help would be great for my test tomorrow! Thanks
2312475
prime and irreducible elements [imath]\equiv 1[/imath] modulo [imath]4[/imath] Consider the set [imath]L= \{n \in \mathbb{N} : n \equiv 1 \pmod 4\}[/imath]. 1) What is the set of prime or irreducible elements in [imath]L[/imath]? 2) Are prime and irreducible the same (in [imath]L[/imath])? 3) Is there a unique factorization into irreducible elements [imath]\forall n \in L[/imath]? I know that a prime number in [imath]\mathbb{N}[/imath] has to be prime in [imath]L[/imath], too. But are there any more? I think you can find irreducible elements which are not prime, but I didn't manage to do so yet. Could you please help me with this problem? I'm stuck. Are Hilbert primes also Hilbert irreducible? Furthermore, are Hilbert primes also primes in [imath]\mathbb{Z}[/imath]?
1446993
Are Hilbert primes also Hilbert irreducible ? Furthermore, are Hilbert primes also primes in [imath]\mathbb{ Z}[/imath]? Consider the set [imath]\mathcal H[/imath] of Hilbert numbers (numbers of the form [imath]4n + 1[/imath], for [imath]n \ge 0[/imath]). Define a Hilbert prime as any number [imath]h[/imath] in the Hilbert set satisfying [imath]h \neq 1[/imath] and if [imath]h \mid ab[/imath] where [imath]a \in \mathcal H[/imath], [imath]b\in \mathcal H[/imath] then [imath]h \mid a[/imath] or [imath]h \mid b[/imath]. Define a member of the Hilbert set [imath]q[/imath] as Hilbert irreducible as if and only if [imath]q \neq 1[/imath] and [imath]q[/imath] cannot be expressed as the product of two smaller Hilbert numbers. I am trying to determine if Hilbert prime implies Hilbert irreducible, however I am not particularly strong at number theory. I have been unsuccessful in finding a counterexample, and I am starting to believe the implication holds. The textbook I pulled this example from for self study (Rings, Fields, and Groups: An Introduction to Abstract Algebra by Reg Allenby) has answers in the back of the text. However, the solution only says that Hilbert primes are also primes in [imath]\mathbb{Z}[/imath]. Any help would be greatly appreciated, I can't see to find any proof regarding this concept anywhere!
2313063
Is [imath]\frak c = \aleph_1[/imath]? Is [imath]\frak c = \aleph_1[/imath]? My textbook requires me to find the cardinality of [imath]I(\Bbb N)[/imath], which is the set of all infinite subsets of [imath]\Bbb N[/imath]. What I found is that the cardinality of [imath]I(\Bbb N)[/imath] equals [imath]2^{\aleph_0} = \frak c[/imath]; is it then also okay to write [imath]\frak c =\aleph_1[/imath]?
2312360
[imath]\aleph_1 = 2^{\aleph_0}[/imath] The above definition was given to me by a friend of mine who introduced me to [imath]\aleph[/imath], and infinite sets. The argument he gave me was (paraphrased how I understand it). While the set of natural numbers is infinite, each number has only a finite number of digits. Each real number as an infinite number of digits. (The rest is filled up with 0s). We can represent each real number using binary numbers. Map each bit of the real numbers to an element of [imath]\mathbb N[/imath]. The cardinality of [imath]\mathbb N[/imath] is [imath]\aleph_0[/imath]. There are [imath]2^{\aleph_0}[/imath] possible combinations of the bits. There are [imath]2^{\aleph_0}[/imath] real numbers. He then sort of defined [imath]2^{\aleph_0}[/imath] as [imath]\aleph_1[/imath]. I'm guessing the cardinality of the real numbers is taken to be [imath]\aleph_1[/imath]? Said friend also said he has a proof that [imath]\aleph_0^k = \aleph_0[/imath]. I haven't yet read his proof as at the time of writing, and may update this question with it when I do. I've been told here that you can't claim [imath]2^{\aleph_0} = \aleph_1[/imath] which surprised me, but I can't be so sure of the definition myself, so.   My question is: Is [imath]\aleph_1 = 2^{\aleph_0}[/imath]?
2313339
[imath]1 + \frac{1}{3}\frac{1}{4} + \frac{1}{5}\frac{1}{4^2} + \frac{1}{7}\frac{1}{4^3} + \cdots[/imath] [imath]1 + \frac{1}{3}\frac{1}{4} + \frac{1}{5}\frac{1}{4^2} + \frac{1}{7}\frac{1}{4^3} + \cdots[/imath] Can anyone help me work out how to solve this? My try: I was thinking about the expansion of [imath]\tan^{-1}x[/imath], but in that series the positive and negative terms alternate.
1548665
Find the sum of the series [imath]1+\frac{1}{3}\cdot\frac{1}{4}+\frac{1}{5}\cdot\frac{1}{4^2}+\frac{1}{7}\cdot\frac{1}{4^3}+\cdots[/imath] Find the sum of the series : [imath]1+\frac{1}{3}\cdot\frac{1}{4}+\frac{1}{5}\cdot\frac{1}{4^2}+\frac{1}{7}\cdot\frac{1}{4^3}+\cdots[/imath]
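For what it is worth, here is a worked line in the spirit of the attempt above, using the [imath]\tanh^{-1}[/imath] series instead of [imath]\tan^{-1}[/imath] (which removes the alternating-sign problem): the series in question is [imath]\sum_{n\ge0}\frac{1}{(2n+1)4^n}[/imath], and since [imath]\tanh^{-1}x=\sum_{n\ge0}\frac{x^{2n+1}}{2n+1}=\frac12\ln\frac{1+x}{1-x}[/imath] for [imath]|x|<1[/imath], taking [imath]x=\tfrac12[/imath] gives [imath]\tfrac12\sum_{n\ge0}\frac{1}{(2n+1)4^n}=\tfrac12\ln 3[/imath], so the sum equals [imath]\ln 3\approx 1.0986[/imath].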
2312438
Likelihood Ratio Test statistic to test [imath]H_0[/imath] vs [imath]H_1[/imath] Let [imath]{Y_1,...,Y_n}[/imath] be independent random variables and [imath]Y_i[/imath]~[imath]N(\beta x_i, 1)[/imath] where [imath]x_1,...,x_n[/imath] are fixed known constants, and [imath]\beta[/imath] is an unknown parameter. I'm looking to find the p-value or rejection region for the test [imath]H_0: \beta=0 \quad \text{vs} \quad H_1:\beta\ne0[/imath] The Likelihood Ratio Test statistic [imath]\Lambda[/imath] is [imath]e^{-1/2\left(\frac{\sum_{i=1}^n y_i x_i}{\sum_{i=1}^n x_i^2}\right)^2\sum_{i=1}^n x_i^2}[/imath] After setting [imath]\Lambda < k[/imath] and then solving the inequality [imath]\sqrt{-2\text{log}(k)}<\sum x_iy_i/\sqrt{\sum_{i=1}^n x_i^2} <-\sqrt{-2\text{log}(k)}[/imath] (Someone told me that my last step is incorrect)
2300621
Using the LRT statistic to test [imath]H_0[/imath] vs [imath]H_1[/imath] Let [imath]{Y_1,...,Y_n}[/imath] be independent random variables and [imath]Y_i[/imath]~[imath]N(\beta x_i, 1)[/imath] where [imath]x_1,...,x_n[/imath] are fixed known constants, and [imath]\beta[/imath] is an unknown parameter. I'm looking to find the p-value or rejection region for the test [imath]H_0: \beta=0 \quad \text{vs} \quad H_1:\beta\ne0[/imath] The Likelihood Ratio Test statistic [imath]\Lambda[/imath] is [imath]e^{-1/2\left(\frac{\sum_{i=1}^n y_i x_i}{\sum_{i=1}^n x_i^2}\right)^2\sum_{i=1}^n x_i^2}[/imath] I have already asked a similar question with [imath]H_1: \beta>0[/imath] here: Uniformly Most Powerful test for normal distribution. After setting [imath]\Lambda < k[/imath] I'm left with [imath]\sum x_iy_i[/imath] after removing constants to the right, which is the same as my previous question. How will finding the p-value or rejection region for this test differ?
1521981
Differentiability at an end point of an open interval. Suppose that [imath]f : [a,b] \rightarrow \mathbb R[/imath] is continuous on [imath][a,b][/imath], differentiable on [imath](a,b)[/imath], and that [imath]\lim_{x\rightarrow a^+}f'(x)=L[/imath]. Show that [imath]f[/imath] is differentiable at [imath]a[/imath], and that [imath]f'(a) = L[/imath]. I have tried starting with the continuity of the function, but I'm still not sure how to even begin or where to go from there.
152815
How to use the Mean Value Theorem to prove the following statement: Suppose [imath]f(x)[/imath] is continuous on [imath][a,b)[/imath] and differentiable on [imath](a,b)[/imath] and that [imath]f '(x)[/imath] tends to a finite limit [imath]L[/imath] as [imath]x \to a^+[/imath]. Then [imath]f(x)[/imath] is right-differentiable at [imath]x=a[/imath] and [imath]f '(a)=L[/imath]. (epsilon-delta proof not needed). This is a practice exam question. I am having trouble translating this into a 'mathematical' statement. The MVT states that there exists [imath]c[/imath], [imath]a\leq c\leq b[/imath], such that: [imath]f'(c) = (f(b)-f(a))/(b-a)[/imath] I suppose to prove that [imath]f(x)[/imath] is right differentiable at [imath]x=a[/imath], using the MVT, I need to somehow show that as [imath]x \to a^+[/imath], [imath]f'(c)=f'(a)=L[/imath] ??? Am I on the right track here? Can someone help me get started?
2314581
[imath]\lim_{y\downarrow 0} yE(1/X ; X>y)=0[/imath] Let [imath]X>0[/imath] but do NOT assume [imath]E(1/X)< \infty[/imath]. Show that [imath]\lim_{y\downarrow 0} yE(1/X ; X>y)=0[/imath]. I tried to use Jensen's inequality but this is not working. There was another problem, which was [imath]\lim_{y\rightarrow \infty} yE(1/X ; X>y)=0[/imath]. It was easy since [imath]\lim_{y\rightarrow \infty} yE(1/X ; X>y)\le \lim_{y\rightarrow \infty} P(X>y)=0[/imath]. But this trick is not working on the above problem. I need help!
1901860
Show that [imath]\lim\limits_{y\downarrow 0} y\mathbb{E}[\frac{1}{X};X>y]=0[/imath]. Let [imath](\Omega,\mathcal{F},\mathbb{P})[/imath] be a probability space, and let [imath]X[/imath] be a nonnegative random variable defined on this space. For any [imath]A\in \mathcal{F}[/imath], let [imath]\mathbb{E}[X;A]:=\mathbb{E}[X\mathbb{1}_{A}][/imath], where [imath]\mathbb{1}_{A}[/imath] denotes the indicator random variable on [imath]A[/imath]. Without assuming [imath]\mathbb{E}\left[\frac{1}{X}\right]<\infty[/imath], show that \begin{eqnarray} \lim\limits_{y\downarrow 0} ~~y~\mathbb{E}\left[\frac{1}{X};X>y\right]=0. \end{eqnarray} Any initial ideas will be greatly appreciated.
2308292
Solving [imath]u_t = (u_x)^{-1}[/imath] I'm trying to solve the PDE: [imath]u_t = (u_x)^{-1} \quad u(t,x) \geq 0 \text{ for positive real }x,t, \text{ and } u(0,0)=0[/imath] by separation of variables with [imath]u(t,x) = f(t)g(x)[/imath]. By rewriting with the substitution I get to [imath]f f' g g' =1[/imath]. Trying to solve for [imath]f(t)[/imath], I keep [imath]x[/imath] constant, so [imath]g g'=c[/imath] and then [imath]ff' = \frac{1}{c}[/imath]. From here, though, I can't see how to proceed; should I guess a function [imath]f[/imath] to satisfy this, and if so how do I go about it?
2299978
How do I solve this PDE? How do I solve the partial differential equation [imath]u_{t} = (u_{x})^{-1}[/imath] subject to the conditions [imath]u(t,x) \geq 0[/imath] and [imath]u(0,0) = 0[/imath]? Use the separation Ansatz [imath]u(t,x) = f(t)g(x)[/imath]. The inverse is confusing me.
2314859
Simplifying/expanding the floor of a product Is there any way of simplifying or expanding the following expression: [imath]\lfloor ab\rfloor[/imath] where [imath]a[/imath] and [imath]b \in \mathbb{R}[/imath]? I know there exist formulae that allow simplifying the floor of a sum, or the floor of a quotient of two integers, but none that involve the floor of a product. If simplifying is not possible, is there a way to expand it so that [imath]a[/imath] and [imath]b[/imath] are not "grouped" together within the floor function? Something like [imath]\lfloor ab\rfloor = \lfloor a\rfloor \lfloor b\rfloor +...[/imath]
1808800
Difference of the floor of a product and the product of floors Is there any way the following can be simplified? [imath]\lfloor f(x)\cdot g(x) \rfloor - \lfloor f(x) \rfloor \cdot \lfloor g(x) \rfloor[/imath]
939412
Find the norm of a linear combination of vectors, given their norms If a and b are vectors such that [imath]\|a\| = 4[/imath], [imath]\|b\| = 5[/imath], and [imath]\|a + b\| = 7[/imath], then find [imath]\|2a-3b\|[/imath]. So I first squared both sides and then got [imath]ab = -44[/imath]. What do I do now?
2315038
Vectors and Norms If [imath]a[/imath] and [imath]b[/imath] are vectors such that [imath]\|a\| = 4[/imath], [imath]\|{b}\| = 5[/imath], and [imath]\|{a} + {b}\| = 7[/imath], then find [imath]\|2 {a} - 3 {b}\|[/imath]. I couldn't figure out how to start off this problem. I attempted to use [imath]\cos \theta = \frac{a\cdot b}{\|a\|\cdot \|b\|}[/imath] But I still don't know what to do. Could someone nudge me in the right direction? Thank you!
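For this pair, the whole computation fits in two lines once the norms are expanded through the dot product (a sketch): from [imath]\|a+b\|^2=\|a\|^2+\|b\|^2+2\,a\cdot b[/imath] one gets [imath]49=16+25+2\,a\cdot b[/imath], so [imath]a\cdot b=4[/imath]; then [imath]\|2a-3b\|^2=4\|a\|^2+9\|b\|^2-12\,a\cdot b=64+225-48=241[/imath], giving [imath]\|2a-3b\|=\sqrt{241}[/imath].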
2315166
Can we be sure that a bounded linear operator is compact just from a condition on the range? The problem is the following: Suppose that [imath]A\in B(X,Y)[/imath], [imath]K\in K(X,Y)[/imath], where [imath]X,Y[/imath] are Banach spaces. Show that, if [imath]A(X)\subset K(X)[/imath], then [imath]A\in K(X,Y)[/imath]. Here [imath]B(X,Y)[/imath] stands for the set of all bounded linear operators from [imath]X[/imath] to [imath]Y[/imath], and [imath]K(X,Y)[/imath] stands for all compact operators from [imath]X[/imath] to [imath]Y[/imath]. I've tried some ways to prove it, but none of them worked. What I managed to prove is the easy case, when [imath]K[/imath] is a surjective operator, because in this case [imath]Y[/imath] is finite-dimensional and every bounded linear operator into a finite-dimensional space is a compact operator. Then, I remember (but I'm not sure if it is true!) that a compact operator is one-to-one if and only if it is not surjective, so if we suppose that [imath]K[/imath] is not a surjective operator, then [imath]K[/imath] is one-to-one. I've tried to use this property but I could conclude nothing. Any help or hint is well received.
2311127
Functional Analysis: The range of an operator determines if it's a compact operator. I have a problem where I have to prove the following statement: Let [imath]X[/imath] and [imath]Y[/imath] be Banach spaces. If [imath]A[/imath] is a bounded linear operator [imath]A:X \rightarrow Y[/imath] such that [imath]Range(A)\subset Range(K)[/imath], with [imath]K[/imath] a compact operator from [imath]X[/imath] to [imath]Y[/imath], then [imath]A[/imath] is compact. But I have no clue how to start, and the statement itself sounds pretty incredible. Can anyone give a hand?
2313331
[imath]f:(0,\infty)\rightarrow \Bbb R[/imath], [imath]x\mapsto 1/x[/imath] show that f is continuous [imath]f:(0,\infty)\rightarrow \Bbb R[/imath], [imath]x\mapsto 1/x[/imath] show that [imath]f[/imath] is continuous. To prove the continuity on the given domain, for each [imath]x_0\in(0,\infty)[/imath] and [imath]\epsilon>0[/imath] we need to determine a [imath]\delta(x_0,\epsilon)[/imath] which satisfies [imath]|x-x_0|<\delta(x_0,\epsilon)\Rightarrow|f(x)-f(x_0)|\lt \epsilon[/imath]. In which way one could easily find [imath]\delta(x_0,\epsilon)[/imath]?
757667
Continuity of 1/x I am confused about what [imath]8(ii)[/imath] wants from me. I answered the first part of this question with help from the question posted here: Is [imath]f(x)=1/x[/imath] continuous on [imath](0,\infty)[/imath]? But this proves continuity and works for all [imath]\epsilon>0[/imath], so how do I prove it doesn't for [imath]\epsilon=1[/imath]?
2315940
Prove that [imath]G[/imath] is abelian and has odd order Let [imath]G[/imath] be a finite group and suppose that there is an automorphism [imath]f[/imath] of [imath]G[/imath] satisfying: [imath]f^2 =id_G[/imath] and [imath]f(x)=x \iff x =1[/imath]. Show that every element of [imath]G[/imath] can be written as [imath]f(x)x^{-1}[/imath] for some [imath]x[/imath] and that [imath]G[/imath] is abelian of odd order. The first part I know how to do. Define [imath]s: G \to G[/imath] by [imath]s(g) = f(g)g^{-1}[/imath]. Then [imath]s[/imath] is one-one because [imath]s(g)=s(g') \implies f(gg'^{-1}) = gg'^{-1} \implies gg'^{-1} = 1 \implies g = g'[/imath]. So since [imath]G[/imath] is finite, [imath]s[/imath] is onto, so every element can be written as [imath]f(x)x^{-1}[/imath]. How do I show that [imath]G[/imath] is abelian and has odd order? Thanks for any help.
1748834
An automorphism that has no fixed points except for the identity and is its own inverse implies commutativity Let [imath]G[/imath] be a finite group and suppose there exists [imath]f\in\text{Aut}(G)[/imath] such that [imath]f^2=\text{id}_G[/imath], i.e., [imath]f[/imath] is its own inverse, and such that [imath]f[/imath] has no fixed points other than the identity [imath]e[/imath] of [imath]G[/imath], i.e., [imath]f(x)=x\Rightarrow x=e[/imath]. Show that [imath]G[/imath] is necessarily abelian. While trying to do this exercise I noticed two facts. First, [imath]g[/imath] and [imath]f(g)[/imath] have the same order because [imath]o(f(g))|o(g)[/imath] and, applying [imath]f[/imath] again and using [imath]f^2=\text{id}_G[/imath], [imath]o(f(f(g)))=o(g)|o(f(g))[/imath] and, once the order of any element is [imath]\geq 1[/imath], it follows that [imath]o(g)=o(f(g))[/imath]. Also, it's easy to see that [imath]g[/imath] and [imath]f(g)[/imath] commute. Second, there cannot exist such an automorphism if the order of [imath]G[/imath] is even, because [imath]f(e)=e[/imath] and we can form pairs like [imath]\{g,f(g)\}[/imath] with [imath]f(g)\neq g[/imath] that are invariant under [imath]f[/imath], i.e., [imath]f(\{g,f(g)\})=\{g,f(g)\}[/imath]. But once the order of [imath]G[/imath] is even, proceeding with the construction of the pairs, we'll end up with just one element [imath]\neq e[/imath], so we must have some [imath]\gamma\in G\setminus{e}[/imath] with [imath]f(\gamma)=\gamma[/imath], contradicting the hypothesis. Let [imath]n[/imath] be the order of [imath]G[/imath] and let's fix an enumeration of the elements of [imath]G[/imath], say [imath]G=\{g_1,\ldots,g_n\}[/imath]. My approach was the following. For each [imath]\sigma\in S_n[/imath], let [imath]x_{\sigma}=\prod_{i=1}^ng_{\sigma(i)}[/imath]. Then we have that [imath]f(x_{\sigma})=x_{\sigma^{\prime}}[/imath]. If it's shown that [imath]f(x_{\sigma})=e\;\forall\;\sigma\in S_n[/imath], then we'd have [imath]x_{\sigma}=e\;\forall\;\sigma\in S_n[/imath], which implies that [imath]G[/imath] is abelian. The problem is that [imath]S_n[/imath] has [imath]n![/imath] elements while [imath]G[/imath] has [imath]n[/imath] elements, so there repetitions among the [imath]x_{\sigma}[/imath], and we cannot apply directly the reasoning of the last paragraph. Is this the right way, or is there an easier manner to solve this?
2315729
Algebra of irrational numbers Why is [imath]\sqrt{7 + 2\sqrt{10}} = \sqrt2 + \sqrt5 [/imath]? I can't seem to prove it, so can someone help me out in doing so if it is possible? And if it can't be proven, is there an explanation of why this is so?
1776159
Convert from Nested Square Roots to Sum of Square Roots I am looking for a way to easily discover how to go from a nested root to a sum of roots. For example, [imath]\sqrt{10-2\sqrt{21}}=\sqrt{3}-\sqrt{7}[/imath] I know that if I set [imath]\alpha=\sqrt{10-2\sqrt{21}}[/imath], square both sides, I get [imath]\alpha^2=10-2\sqrt{21}[/imath] Now I recognize that we have a situation where [imath]10=3+7[/imath] and [imath]21=7\cdot 3[/imath], so I can immediately see that we have [imath]\alpha^2=10-2\sqrt{21}=3-2\sqrt{21}+7=\sqrt{3}^2-2\sqrt{3}\sqrt{7}+\sqrt{7}^2=(\sqrt{3}-\sqrt{7})^2[/imath] My question is, is this the only way to approach this problem? This approach mirrors basic algebra 1 methods of factoring quadratics, but I was curious to know if there are other techniques that can be used to quickly deduce that a nested radical can be simplified to the sum of two radicals. Mathematically, suppose [imath]a,b,c,m,n,r,s\in\mathbb{N}[/imath]. Is there a way to quickly determine [imath]m,n,r,s[/imath] in the equation [imath]\sqrt{a\pm b\sqrt{c}}=m\sqrt{r}\pm n\sqrt{s}[/imath]
2315906
Solve [imath]f(f(x))=f(x)[/imath] Treated with the inverse operator, one could get [imath]f(x)=x[/imath]. However, another obvious solution is [imath]f(x)=C[/imath] (when [imath]f[/imath] is not invertible). How could I reach this solution? Are there other solutions available? This is not homework. To prove the uniqueness of the solution, I am trying (and currently failing) to do something similar to: Does a non-trivial solution exist for [imath]f'(x)=f(f(x))[/imath]? Thank you Jack and V. Your efforts help. I wonder if one could prove that the "projectors" are the only family of solutions to idempotence.
681692
[imath]f(f(x))=f(x)[/imath] question I am wondering what is the class of functions [imath]f: \mathbb{R}\rightarrow\mathbb{R}[/imath] such that [imath]f(f(x))=f(x)[/imath]? I think it should be: Constant Value functions the identity function absolute value function [imath]|x|[/imath] But I don't know if this is right or how to show it rigorously. Any suggestions?
2316488
Infinite Sum using [imath]x^{-x}[/imath] How can I calculate the value of the infinite sum [imath]\sum_{x=1}^\infty x^{-x}[/imath] By comparison to a geometric series, I know that it converges, but I don't know how to calculate an exact value. So far I have tried the following: I have tried to turn it into a telescoping sum, but have had no luck I tried adding a variable and differentiating it to create an easier sum Any hints?
836147
Sophomore's dream: [imath]\displaystyle\int_0^{1} x^{-x} \; dx = \sum_{n=1}^\infty n^{-n}[/imath] In the solution of the so-called sophomore's dream, one of the key steps is to compute [imath]\int_0^1 x^n (\log x)^n~\mathrm dx[/imath] using the change of variables [imath]x = \exp\left(-\frac{u}{n+1}\right)[/imath] to obtain the Gamma function. This substitution to me, looks like it was pulled out of thin air. Can someone help me motivate it? How would I have thought of this substitution?
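Since the first question in this pair asks for the value of [imath]\sum_{n\ge 1} n^{-n}[/imath]: no elementary closed form is known, but the sophomore's-dream identity [imath]\int_0^1 x^{-x}\,dx=\sum_{n\ge1}n^{-n}\approx 1.29129[/imath] is easy to check numerically, for instance with mpmath (a sketch; the precision setting is arbitrary).

```python
from mpmath import mp, quad, nsum, inf

mp.dps = 30                                    # work with 30 significant digits
integral = quad(lambda x: x ** (-x), [0, 1])   # sophomore's dream integral
series = nsum(lambda n: n ** (-n), [1, inf])
print(integral)
print(series)                                  # the two printed values agree
```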
2316344
Continuous function with [imath]\lim_{x \rightarrow + \infty}f(x)=0[/imath] and [imath]\lim_{x \rightarrow - \infty}f(x)=0[/imath] is uniformly continuous Suppose that [imath]f: \mathbb{R} \rightarrow \mathbb{R}[/imath] is a continuous function with [imath]\lim_{x \rightarrow + \infty}f(x)=0[/imath] and [imath]\lim_{x \rightarrow - \infty}f(x)=0[/imath]. Show that [imath]f[/imath] is uniformly continuous on [imath]\mathbb{R}[/imath] My attempt: Take a random [imath]\epsilon >0[/imath]. As [imath]\lim_{x \rightarrow + \infty}f(x)=0[/imath], there exists a [imath]M_{1} \in \mathbb{R}[/imath] so that for every [imath]x \in \mathbb{R}[/imath] with [imath]x > M_{1}[/imath] it holds that [imath]|f(x)-0|< \epsilon[/imath]. As [imath]\lim_{x \rightarrow - \infty}f(x)=0[/imath], there exists a [imath]M_{2} \in \mathbb{R}[/imath] so that for every [imath]x \in \mathbb{R}[/imath] with [imath]x < M_{2}[/imath] it holds that [imath]|f(x)-0|< \epsilon[/imath]. Now take [imath]M \geq \max \{|M_{1}|,|M_{2}|\}[/imath]. Then surely will [imath]|f(x)|< \epsilon[/imath] for every [imath]x \in \mathbb{R} \backslash [-M,M][/imath]. Now I'm not sure how to go on from this. I do know that [imath]f[/imath] is uniformly continuous on the interval [imath][-M,M][/imath] because it is continuous. But how can I expand this to [imath]]- \infty,+\infty[[/imath] ?
2311701
Suppose that [imath]f:\Bbb R\to \Bbb R[/imath] is continuous and that [imath]f(x)\to 0[/imath] as [imath]x\to \pm\infty.[/imath]Prove that [imath]f[/imath] is uniformly continuous. Suppose that [imath]f:\Bbb R\to \Bbb R[/imath] is continuous and that [imath]f(x)\to 0[/imath] as [imath]x\to \pm\infty[/imath]. Prove that [imath]f[/imath] is uniformly continuous. Since [imath]f(x)\to 0[/imath] as [imath]x\to \pm\infty[/imath] we have [imath]|f(x)|<\epsilon [/imath] whenever [imath]x<-K[/imath] and [imath]x>K[/imath] where [imath]K>0[/imath]. Also [imath]f[/imath] is uniformly continuous on [imath][-K,K][/imath] as the domain is compact. Hence [imath]f[/imath] is uniformly continuous in [imath](-\infty,-K)\cup [-K,K]\cup (K,\infty)[/imath]. But how to conclude that [imath]f[/imath] is uniformly continuous on [imath]\Bbb R[/imath] from above as I may chose [imath]x\in (-\infty,-K);y\in [-K,K][/imath] ,how to conclude that [imath]|f(x)-f(y)|<\epsilon [/imath] from above for a given [imath]\epsilon[/imath]? Please suggest the edits required.
2315372
Prove that there is a point [imath]x\in [0,1][/imath], such that [imath]f(x)=g(x)[/imath]. Let [imath]f:[0,1]\rightarrow[0,1][/imath] and [imath]g:[0,1]\rightarrow[0,1][/imath] be continuous functions satisfying [imath]f\circ g =g\circ f[/imath]. Prove that there is a point [imath]x\in [0,1][/imath], such that [imath]f(x)=g(x)[/imath]. I have thought of two things: 1) I know that for every continuous function [imath]f:[0,1]→[0,1][/imath] there exists [imath]y∈[0,1][/imath] such that [imath]f(y)=y[/imath], so [imath]f,g, f\circ g[/imath] have fixed points. 2) Let [imath]h(x)=f(x)-g(x)[/imath]; if there exist [imath]a,b \in [0,1][/imath] such that [imath]h(a)<0[/imath] and [imath]h(b)>0[/imath], then, by the intermediate value theorem, [imath]h[/imath] must equal [imath]0[/imath] at some point [imath]c \in [0,1][/imath]. The problem is that I cannot find [imath]a, b[/imath] that meet those conditions.
678831
[imath]f(g(x))=g(f(x))[/imath] implies [imath]f(c)=g(c)[/imath] for some [imath]c[/imath] Let [imath]f[/imath] and [imath]g[/imath] be continuous functions and map from [imath][0,1][/imath] to [imath][0,1][/imath]. Also let [imath]f(g(x)) = g(f(x))[/imath] . Prove that there exists [imath]c[/imath] from [imath][0,1][/imath] such that [imath]f(c)=g(c)[/imath]. I will try with contradiction. Let [imath]h(x) = f(x) - g(x) > 0[/imath] for all [imath]x[/imath] from [imath][0,1][/imath]. Since [imath]f(x)[/imath] maps from [imath][0,1][/imath] to [imath][0,1][/imath] and is greater then [imath]g[/imath] for all [imath]x[/imath] from [imath][0,1][/imath] then this implies that [imath]g(x)[/imath] is not an element of [imath][0,1][/imath] for all [imath]x[/imath] from [imath][0,1][/imath]. This is a contradiction, so there exists [imath]c[/imath] from [imath][0,1][/imath] such that [imath]f(c)=g(c)[/imath]. Problem here is that I never used fact that [imath]f(g(x)) = g(f(x))[/imath] which bothers me. Is proof correct or there is hole somewhere?
2316448
Infinite Sum Fallacy I was working on the infinite sum [imath]\sum_{x=1}^\infty \frac{1}{x(2x+1)}[/imath] and I used partial fractions to split up the fraction [imath]\frac{1}{x(2x+1)}=\frac{1}{x}-\frac{2}{2x+1}[/imath] and then I wrote out the sum in expanded form: [imath]1-\frac{2}{3}+\frac{1}{2}-\frac{2}{5}+\frac{1}{3}-\frac{2}{7}+...[/imath] and then rearranged it a bit: [imath]1+\frac{1}{2}-\frac{1}{3}+\frac{1}{4}-\frac{1}{5}+\frac{1}{6}-\frac{1}{7}+...[/imath] [imath]2-(1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\frac{1}{5}-\frac{1}{6}+\frac{1}{7}-...)[/imath] and since the sum inside of the parentheses is just the alternating harmonic series, which sums to [imath]\ln 2[/imath], I got [imath]2-\ln 2[/imath] Which is wrong. What went wrong? I notice that, in general, this kind of thing happens when I try to evaluate telescoping sums in the form [imath]\sum_{x=1}^\infty f(x)-f(ax+b)[/imath] and I think something is happening when I rearrange it. Perhaps it has something to do the frequency of [imath]f(ax+b)[/imath] and that, when I spread it out to make it cancel out with other terms, I am "decreasing" how many of them there really are because I'm getting rid of the one to one correspondence between the [imath]f(x)[/imath] and [imath]f(ax+b)[/imath] terms? I can't wrap my head around this. Please help!
2268345
Find the value of [imath]\sum_{n=1}^{\infty} \frac{2}{n}-\frac{4}{2n+1}[/imath] Find the value of [imath]S=\sum_{n=1}^{\infty}\left(\frac{2}{n}-\frac{4}{2n+1}\right)[/imath] My Try:we have [imath]S=2\sum_{n=1}^{\infty}\left(\frac{1}{n}-\frac{2}{2n+1}\right)[/imath] [imath]S=2\left(1-\frac{2}{3}+\frac{1}{2}-\frac{2}{5}+\frac{1}{3}-\frac{2}{7}+\cdots\right)[/imath] so [imath]S=2\left(1+\frac{1}{2}-\frac{1}{3}+\frac{1}{4}-\frac{1}{5}+\cdots\right)[/imath] But we know [imath]\ln2=1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots[/imath] So [imath]S=2(2-\ln 2)[/imath] Is this correct?
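A quick numerical check (a sketch, not a proof) shows which closed form is consistent here, and it is not the rearranged one: splitting [imath]\frac{1}{x(2x+1)}[/imath] into [imath]\frac1x-\frac{2}{2x+1}[/imath] interleaves two divergent series into a conditionally convergent one, and the regrouping used above pulls terms in from arbitrarily far down the tail, which is exactly the kind of rearrangement that can change the sum. The partial sums of the absolutely convergent original settle near [imath]2-2\ln 2\approx 0.6137[/imath] rather than [imath]2-\ln 2\approx 1.3069[/imath].

```python
from math import log

# Partial sums of the absolutely convergent series sum 1/(x(2x+1)).
s = sum(1.0 / (x * (2 * x + 1)) for x in range(1, 200001))
print(s)                 # ~0.61370...
print(2 - 2 * log(2))    # 0.61370563...
print(2 - log(2))        # 1.30685... (the value the rearrangement suggests)
```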
2299977
find inverse of a matrix with one value on the diagonal and another otherwise Let us look at the matrix [imath] M =\begin{bmatrix} a & b & \dots & b \\ b & a & \dots & b \\ \vdots & b & \ddots & \vdots \\ b & \dots & b & a \end{bmatrix} [/imath] It has one value [imath]a[/imath] on the main diagonal, and another value [imath]b[/imath] everywhere else. Let us assume that [imath]a \neq b[/imath]. I wish to find the inverse of every [imath]n\times n[/imath] matrix of this form ([imath]a[/imath] on the diagonal, [imath]b[/imath] everywhere else).
636909
Finding determinant for a matrix with one value on the diagonal and another everywhere else Let us look at the matrix [imath]\left(\begin{array}{ccccc} a & b & b & b & b\\ b & a & b & b & b\\ b & b & a & b & b\\ b & b & b & a & b\\ b & b & b & b & a \end{array}\right)[/imath] It has one value, [imath]a[/imath], on the main diagonal, and another value, [imath]b[/imath], everywhere else. Let us assume that we are over a ring and that [imath]a[/imath] is invertible. I wish to find the determinant of every [imath]n\times n[/imath] matrix of this form ([imath]a[/imath] on the diagonal, [imath]b[/imath] everywhere else). Using row and column operations I have managed to transform the matrix to upper-triangular form and found a formula for specific cases. Generalizing it, I got the following formula: [imath]\det(A) = a\left(a-b\right)^{n-2}\left(a+\left(n-2\right)b-\frac{\left(n-1\right)b^{2}}{a}\right)[/imath] I think I can prove it with row-operations in the general case with a little patience. However, I'm wondering if there is a "smart" way of getting to this formula that I'm missing and if there is a nicer representation of it. Also, what can be said when [imath]a[/imath] is not invertible? (esp. the case where we are over a field and [imath]a=0[/imath]).
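For what it is worth, both questions in this pair have tidy closed forms because [imath]M=(a-b)I+bJ[/imath] with [imath]J[/imath] the all-ones matrix: the eigenvalues are [imath]a+(n-1)b[/imath] (once, eigenvector the all-ones vector) and [imath]a-b[/imath] ([imath]n-1[/imath] times), so [imath]\det M=(a-b)^{n-1}\bigl(a+(n-1)b\bigr)[/imath], and Sherman–Morrison gives [imath]M^{-1}=\frac{1}{a-b}\bigl(I-\frac{b}{a+(n-1)b}J\bigr)[/imath] whenever both factors are nonzero. A small numpy check of these formulas, with arbitrary test values (a sketch):

```python
import numpy as np

n, a, b = 5, 7.0, 2.0
J = np.ones((n, n))
M = (a - b) * np.eye(n) + b * J          # a on the diagonal, b everywhere else

M_inv = (np.eye(n) - (b / (a + (n - 1) * b)) * J) / (a - b)
print(np.allclose(M @ M_inv, np.eye(n)))                     # expected: True
print(np.isclose(np.linalg.det(M),
                 (a - b) ** (n - 1) * (a + (n - 1) * b)))    # expected: True
```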
2309473
Find if the function is Continuous and whether the partial derivatives exist [imath]f(x,y)=\begin{cases} 0 & (x,y)=(0,0)\\ \frac{xy}{x^{2}+y^{2}} & (x,y)\neq (0,0) \end{cases}[/imath] How to find out if this is continuous? And do the partial derivatives exist at [imath](0,0)[/imath]?
2202777
Showing that partial derivatives of [imath]f(x) = \frac{xy}{x^2+y^2}[/imath] exist for [imath](x,y) \neq (0,0)[/imath] I am aware there are many questions similar to this on MSE but I am having trouble following any of the solutions given. I have the function given by [imath]f(x) = \dfrac{xy}{x^2+y^2}\;[/imath] for [imath](x,y)\neq (0,0)[/imath] and [imath]f(0,0)=0[/imath] I have calculated the partial derivatives and found: [imath]\frac{\partial f}{\partial x} = \frac{y(-x^2+y^2)}{(x^2+y^2)^2}, \quad \frac{\partial f}{\partial y} = \frac{x(-y^2+x^2)}{(y^2+x^2)^2}[/imath] Now I need to show the partial derivatives exist for [imath](x,y) = (0,0)[/imath]. I am also asked to show it is not continuous at [imath](x,y)=(0,0)[/imath], based on other answers I've seen it seems like this follows from solving the first part but I fail to see how that follows through as well. I could just copy the solutions with my function in place as I have seen a lot of answers using the definition of the derivative, but realistically I do want to understand what the thinking behind this is. Any help would be appreciated.
2317097
Non-negative operator & self-adjoint operator I am wondering how to show that: if [imath]A[/imath] is a non-negative operator, then [imath]A[/imath] is self-adjoint. Def. 1. [imath]A[/imath] is non-negative if [imath]\langle Ax,x \rangle \geq 0[/imath] for [imath]\forall x\in H[/imath], where [imath]H[/imath] is a Hilbert space. Def. 2. [imath]A[/imath] is self-adjoint if [imath]A = A^*[/imath].
561636
Show that a positive operator on a complex Hilbert space is self-adjoint Let [imath](\mathcal{H}, (\cdot, \cdot))[/imath] be a complex Hilbert space, and [imath]A : \mathcal{H} \to \mathcal{H}[/imath] a positive, bounded operator ([imath]A[/imath] being positive means [imath](Ax,x) \ge 0[/imath] for all [imath]x \in \mathcal{H}[/imath]). Prove that [imath]A[/imath] is self-adjoint. That is, prove that [imath](Ax,y) = (x, Ay)[/imath] for all [imath]x,y \in \mathcal{H}[/imath]. Here's what I have so far. Because [imath]A[/imath] is positive we have [imath]\mathbb{R} \ni (Ax,x) = \overline{(x,Ax)} = (x,Ax)[/imath], all [imath]x \in \mathcal{H}[/imath]. Next, I have seen some hints that tell me to apply the polarization identity: [imath](x,y) = \frac{1}{4}((||x+y||^2 + ||x-y||^2) - i(||x + iy||^2 - ||x - iy||^2)),[/imath] where of course the norm is defined by [imath]|| \cdot||^2 = (\cdot, \cdot)[/imath]. So my guess is that I need to start with the expressions: [imath](Ax,y) = \frac{1}{4}((||Ax+y||^2 + ||Ax-y||^2) - i(||Ax + iy||^2 - ||Ax - iy||^2)),[/imath] [imath](x,Ay) = \frac{1}{4}((||x+Ay||^2 + ||x-Ay||^2) - i(||x + iAy||^2 - ||x - iAy||^2)),[/imath] and somehow show they are equal. But here is where I have gotten stuck. Hints or solutions are greatly appreciated.
2317641
Let [imath]n>0[/imath] and [imath]p(z) = z^n + a_{n-1}z^{n-1} + \cdots + a_0[/imath]. Then there exists [imath]z[/imath] on [imath]|z|=1[/imath] such that [imath]|p(z)| \geq 1[/imath]. I tried a proof by contradiction using Rouché's theorem, but I couldn't see how Rouché's theorem would help in this situation. What would happen if [imath]|p(z)|<1[/imath] for all [imath]z[/imath] with [imath]|z|=1[/imath]? Could somebody help me out? New approaches are welcome.
158175
complex polynomial satisfying inequality Each polynomial of the form [imath]p(z)=a_0+\dots+a_{n-1}z^{n-1}+z^n[/imath] satisfies the inequality [imath]\sup\left\{\,|p(z)|\,\big\vert\,|z|\le 1\,\right\}\ge 1[/imath] We have to find whether this statement is true or false. Well, the maximum modulus principle says that the sup will be attained on [imath]|z|=1[/imath], so when [imath]|z|=1[/imath] we have [imath]|p(z)|=|a_0+a_1\dots+ a_{n-1}+1|\le |a_0|+\dots+|a_{n-1}|+|1|[/imath] I cannot conclude more. Please help.
2317315
What is the rank of matrix [imath]A\in \mathbb{R}^m\times \mathbb{R}^n[/imath] with entries [imath]a_{i,j} = i + j[/imath]? I've been sitting on this practice question for a while but haven't been able to make any progress. Thanks in advance for any help. Below is an [imath]m \times n[/imath] matrix over [imath]\mathbb{R}[/imath]: [imath]A= \begin{bmatrix}a_{1,1}& \dots &a_{1,n}\\ \vdots & \ddots & \vdots \\ a_{m,1}&\dots&a_{m,n}\end{bmatrix} \text{with} \ a_{i,j} = i + j \ \text{for all} \ i \in \{1,\dots,m\}, \ j\in\{1,\dots,n\}.[/imath] Determine the row and column rank of the matrix [imath]A[/imath] for any [imath]n,m > 0[/imath]. How do I go about this question? I know the rank of a matrix is the number of linearly independent rows or columns, but I don't really know how to apply that to this matrix. P.S: please excuse my poor formatting.
2283993
Let [imath]A[/imath] be an [imath]n\times n[/imath] matrix with entries [imath]a_{ij}=i+j[/imath]. Calculate the rank of [imath]A[/imath] Let [imath]A[/imath] be an [imath]n\times n[/imath] matrix with entries [imath]a_{ij}=i+j[/imath]. Calculate the rank of [imath]A[/imath]. My work: I noticed that [imath]A[/imath] is symmetric, hence all of its eigenvectors are real. That is all I have got. Your help will be highly appreciated. Thank you.
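A one-line observation settles both versions of this question: row [imath]i[/imath] of [imath]A[/imath] is [imath]i\,(1,\dots,1)+(1,2,\dots,n)[/imath], so every row lies in the span of two fixed vectors and the rank is at most [imath]2[/imath]; the first two rows are not proportional once [imath]n\ge 2[/imath], so the rank is exactly [imath]2[/imath] whenever both dimensions are at least [imath]2[/imath] (and [imath]1[/imath] otherwise). A tiny numpy check, with arbitrary sizes (a sketch):

```python
import numpy as np

m, n = 6, 4
A = np.fromfunction(lambda i, j: (i + 1) + (j + 1), (m, n))   # entries a_ij = i + j (1-based)
print(np.linalg.matrix_rank(A))   # expected: 2
```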
2318673
genus of complete intersection for [imath]f=x_0x_3-x_2x_1[/imath], [imath]g=x_0^2+x_1^2+x_2^2+x_3^2[/imath] This question is from Rick Miranda's book: [imath]C[/imath] is the curve in [imath]\mathbb{P}^3[/imath] defined by [imath]f=x_0x_3-x_2x_1=0[/imath] and [imath]g=x_0^2+x_1^2+x_2^2+x_3^2=0[/imath]. The question asks us to prove that it is a complete intersection and also to find the topological genus. My doubt is: the point [imath]P=[x_0=1,x_1=i,x_2=i,x_3=-1][/imath] satisfies both equations, and the Jacobian at this point, [imath]\left(\begin{array}{rrrr}-1&-i&-i&1\\2&2i&2i&-2\end{array}\right)[/imath] shows that [imath]rank\neq 2[/imath] at that point. So, does this mean that the curve is not a complete intersection? Anyway, even with these points where the curve is singular, does it have genus 1? This has already been reviewed in several previous posts, but I still have this doubt; I would appreciate it if anyone could help me solve it.
21164
Problem in Rick Miranda: finding genus of a projective curve I have just started learning Riemann surfaces and I am using the book by Rick Miranda: Algebraic curves and Riemann Surfaces. #F in section 1.3 asks to determine the genus of the curve in [imath]\mathbb{P}^3[/imath] defined by the two equations [imath]x_0x_3=2x_1x_2[/imath] and [imath]x_0^2 + x_1^2 +x_2^2 +x_3^2 = 0[/imath]. #G also has a similar question in which he asks to determine the genus of the twisted cubic. Please explain how to approach this type of question.
2128824
For any [imath]r[/imath], [imath]s[/imath], [imath]t>1[/imath], is there a group such that [imath]x[/imath] has order [imath]r[/imath], [imath]y[/imath] has order [imath]s[/imath], while [imath]xy[/imath] has order [imath]t[/imath]? Let [imath]r[/imath], [imath]s[/imath], and [imath]t[/imath] be positive integers greater than [imath]1[/imath]. How can I prove that there exists a finite group [imath]G[/imath] having elements [imath]x[/imath] and [imath]y[/imath] such that [imath]x[/imath] has order [imath]r[/imath], [imath]y[/imath] has order [imath]s[/imath], and [imath]xy[/imath] has order [imath]t[/imath]? Thanks in advance.
1635754
Construction of a specific non-commutative and infinite group (with conditions on the order of the elements) I am struggling with the following problem: Find a group [imath]G[/imath] such that whenever [imath]m, n, k \geq 2[/imath] are natural numbers, then there exist [imath]a, b \in G[/imath] such that the order of [imath]a[/imath] is [imath]m[/imath], order of [imath]b[/imath] is [imath]n[/imath], and order of [imath]ab[/imath] is [imath]k[/imath]. The group [imath]G[/imath] necessarily has to be non-commutative and infinite.
2319053
show that [imath]\lim\sqrt[n]{x_1x_2\dots x_n}=\alpha[/imath] Given a sequence of positive numbers [imath](x_n)[/imath] with [imath]\lim x_n=\alpha[/imath] show that [imath]\lim\sqrt[n]{x_1x_2\dots x_n}=\alpha[/imath] I'm not sure how to prove it, but if each [imath]x_n\to\alpha[/imath] then [imath]x_1 x_2\dots x_n\to \alpha^n[/imath] so [imath]\lim\sqrt[n]{x_1 x_2\dots x_n}=\sqrt[n]{\alpha^n}=\alpha[/imath] How can I prove this properly?
770959
If [imath](x_n) \to x[/imath] then [imath](\sqrt[n]{x_1x_2\cdots x_n}) \to x[/imath] This is not a duplicate of this question. The linked question says that it suffices to show that if [imath](x_n)\to x[/imath] then [imath](\frac{x_1+\cdots+x_n}{n})\to x[/imath] to prove my question, but how so? I tried using the same strategy as how one proves that if [imath](x_n)\to x[/imath] then [imath](\frac{x_1+\cdots+x_n}{n})\to x[/imath], by "splitting" the product at the [imath]N[/imath]th term: [imath]\sqrt[n]{x_1x_2\cdots x_n}=\sqrt[n]{x_1x_2\cdots x_Nx_{N+1}\cdots x_n}=\sqrt[n]{x_1x_2\cdots x_N} \sqrt[n]{x_{N+1}\cdots x_n}[/imath] but it seems I can't yet use the definition of convergence of [imath](x_n)[/imath] because of the [imath]n[/imath]th root. I also tried to use a result: if [imath](x_n)\to x[/imath] then [imath](\frac{x_n}{n})\to 1[/imath] but I don't know if this is true. Sadly I've had no real progress. Any help will be greatly appreciated, thanks in advance!
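An editorial sketch of the reduction the linked question alludes to (not part of the original post), assuming the terms [imath]x_n[/imath] are positive and [imath]x>0[/imath]; the case [imath]x=0[/imath] needs a separate squeeze argument. Taking logarithms turns the geometric mean into an arithmetic mean:
[imath]\ln\sqrt[n]{x_1x_2\cdots x_n}=\frac{\ln x_1+\cdots+\ln x_n}{n}\to \ln x,[/imath]
by the Cesàro-mean result applied to the convergent sequence [imath](\ln x_n)[/imath], and then continuity of [imath]\exp[/imath] gives [imath]\sqrt[n]{x_1\cdots x_n}\to x[/imath].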
2318657
Lower Box Dimension Inequality I am trying to find two subsets [imath]E,F[/imath] of [imath]\mathbb{R}[/imath] that satisfy [imath]\underline{\text{dim}}_{B}(E\cup F)>\max\{\underline{\text{dim}}_{B}(E),\underline{\text{dim}}_{B}(F)\}[/imath], where [imath]\underline{\text{dim}}_{B}[/imath] is the lower box-counting dimension. I've been given the hint: Let [imath]k_{n}=10^{n}[/imath] and adapt the Cantor set construction by deleting at the [imath]k[/imath]-th stage, the middle [imath]1/3[/imath] of intervals if [imath]k_{2n}<k\leq k_{2n+1}[/imath] and the middle [imath]3/5[/imath] of intervals if [imath]k_{2n-1}<k\leq k_{2n}[/imath]. But I'm not even sure what it means when it says "if [imath]k_{2n}<k\leq k_{2n+1}[/imath]".
2278956
How can I find an example to illustrate that the lower box dimension may not be finitely stable? Here the lower box dimension and the upper box dimension are exactly what Falconer talks about in his book "Fractal Geometry". I already know that the upper box dimension is finitely stable since [imath]N_\delta(A\cup B)\le N_\delta (A)+N_\delta(B)[/imath], where [imath]N_\delta(A)[/imath] is defined as usual, and Falconer gives an example to show the box dimension may not be countably stable. (We know that, however, the Hausdorff dimension is countably stable). But now I want to find an example to show the lower box dimension may not even be finitely stable. It seems that we can use the von Koch curve to illustrate this, but what is the explicit explanation for this? Since the lower box dimension is monotonic, we need to find sets A and B and show that [imath]\underline{\dim}_B(A\cup B)>\max\{\underline{\dim}_B(A),\underline{\dim}_B(B)\}[/imath].
331103
Intuitive explanation of entropy I have bumped many times into entropy, but it has never been clear for me why we use this formula: If [imath]X[/imath] is random variable then its entropy is: [imath]H(X) = -\displaystyle\sum_{x} p(x)\log p(x).[/imath] Why are we using this formula? Where did this formula come from? I'm looking for the intuition. Is it because this function just happens to have some good analytical and practical properties? Is it just because it works? Where did Shannon get this from? Did he sit under a tree and entropy fell to his head like the apple did for Newton? How do you interpret this quantity in the real physical world?
2849253
How can they come up with the definition of entropy in information theory? I have read some books about information theory but I have no idea how they arrived at the definition of entropy. We have [imath]H(X)=-\sum_{x\in X}p(x)\, \text{log}\, p (x)[/imath] where X is a discrete random variable.
2319963
If [imath]a[/imath] is an infinite cardinal number, prove that [imath]\aleph_0 + a = a[/imath] The statement I am trying to prove is: If [imath]a[/imath] is an infinite cardinal number, prove that [imath]\aleph_0 + a = a[/imath] Any idea on how to start would be appreciated. I am thinking of proving it by showing that [imath]\omega \cup A \approx A[/imath]. But how would I start? Addition: Since [imath]\omega \cup A[/imath] is an infinite set, this implies that [imath]\omega \cup A[/imath] is equipotent with a proper subset of itself, namely [imath]A[/imath]. This means that [imath]\#(\omega \cup A)=\#A[/imath]. Is this correct?
951442
Cardinality of the union of infinite and countable sets This seems evident, but I cannot come up with a reasonable proof for: Question: show that if [imath]X[/imath] is an infinite set and [imath]Y[/imath] is a countable set, then [imath]|X \cup Y|=|X|[/imath]
2319681
Calculating [imath]\int_{-\infty}^\infty \frac{\sin(x)}x \, dx\;[/imath] using [imath]\frac{(e^z - 1)}{z}[/imath] In one of my books there is an exercise to calculate [imath]\int_{-\infty}^\infty \dfrac{\sin(x)}x \, dx[/imath]. There is a hint given that one should consider the entire function [imath]\dfrac{(e^z - 1)}{z}[/imath]. But I really have no idea how to interpret [imath]\int_{-\infty}^\infty \dfrac{\sin(x)} x\, dx[/imath] as a complex integral involving this function. I would appreciate some hints.
305165
Calculating [imath]\int_{-\infty}^{\infty}\frac{\sin(ax)}{x}\, dx[/imath] using complex analysis I am going over my complex analysis lecture notes and there is an example about calculating [imath]\int_{-\infty}^{\infty}\frac{\sin(ax)}{x}\, dx[/imath] that I don't understand. The solution in the notes starts like this: Denote [imath]C[/imath] as the path from [imath]-R[/imath] to [imath]R[/imath] on the [imath]x[/imath]-axis (where [imath]R>0[/imath] is real). Denote [imath]C_{R}[/imath] as the semi-circle (anti-clockwise) that goes from [imath]R[/imath] to [imath]-R[/imath]. [imath]\int_{-R}^{R}\frac{\sin(az)}{z}\, dz=\int_{C}\frac{\sin(az)}{z}\, dz=\int_{C}\frac{e^{aiz}-e^{-aiz}}{2iz}=\frac{1}{2i}(\int_{C}\frac{e^{iaz}}{z}\, dz-\int_{C}\frac{e^{-aiz}}{z}\, dz)[/imath] Assume [imath]a>0[/imath]: [imath]e^{iaz}=e^{iaRe^{i\theta}}=e^{iaR\cos(\theta)}-e^{-iaR\sin(\theta)}[/imath] Thus [imath]\int_{C}\frac{e^{iaz}}{z}\, dz+\int_{C_{R}}\frac{e^{iaz}}{z}\, dz=2\pi iRes_{z=0}\left(\frac{e^{iaz}}{z}\right)=2\pi i[/imath] The next part claims that for [imath]R\to\infty[/imath]:[imath]\int_{C}\frac{e^{iaz}}{z}\, dz=2\pi i[/imath] (I understand this part) From here I don't understand what is going on in the notes; the notes claim that [imath]\int_{C}\frac{e^{-iaz}}{z}\, dz+\int_{C_{R}}\frac{-e^{iaz}}{z}\, dz=0[/imath] but I think that in a similar manner that sum is [imath]2\pi iRes_{z=0}(\frac{e^{-iaz}}{z})[/imath] which I believe to be [imath]2\pi i\neq0[/imath]. The next two sentences afterward say that [imath]\lim_{R\to\infty}\int_{C}\frac{\sin(az)}{z}\, dz=\frac{1}{2i}\cdot2\pi i=\pi[/imath] and that [imath]\int_{-\infty}^{\infty}\frac{\sin(ax)}{x}\, dx=\pi[/imath] Can someone please help me understand the part about the sum [imath]\int_{C}\frac{e^{-iaz}}{z}\, dz+\int_{C_{R}}\frac{-e^{iaz}}{z}\, dz[/imath] ? I believe that there is a mistake here, I would also appreciate help understanding the last two claims: [imath]\lim_{R\to\infty}\int_{C}\frac{\sin(az)}{z}\, dz=\pi[/imath] and that [imath]\int_{-\infty}^{\infty}\frac{\sin(ax)}{x}\, dx=\pi[/imath] EDIT: I read this a couple more times, and I now think that there is a problem with the part after "assume [imath]a>0[/imath]": [imath]e^{iaz}=e^{iaRe^{i\theta}}=e^{iaR\cos(\theta)}-e^{-iaR\sin(\theta)}[/imath] I think that the minus at the end should be [imath]\cdot[/imath] and that this is a typo in the notes, but I also think there should not be an [imath]i[/imath] in [imath]e^{-iaR\sin(\theta)}[/imath]
2319982
Show that [imath]ax \equiv b (mod\ m) [/imath] has a solution iff [imath]gcd(a,m)[/imath] divides [imath]b[/imath] Here's what I have: [imath]ax \equiv b (mod\ m)[/imath] has a solution if there are [imath]x[/imath] and [imath]y[/imath] such that [imath]b = ax + my[/imath] Let [imath]d = gcd(a,m)[/imath]. Then: [imath]d|a[/imath] and [imath]d|m \Leftrightarrow d|ax[/imath] and [imath]d|my \Leftrightarrow d|(ax+my)[/imath] Since [imath]d[/imath] divides the right part of the equation, it also has to divide the left part. Is this a valid proof for what I want?
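A small numerical check of the statement (an editorial illustration, not part of the original proof attempt): take [imath]a=4[/imath], [imath]m=10[/imath]. Then [imath]\gcd(4,10)=2[/imath], and indeed
[imath]4x\equiv 6 \pmod{10}\ \text{has the solution } x=4\ (4\cdot 4=16\equiv 6),\qquad 4x\equiv 7 \pmod{10}\ \text{has none},[/imath]
since [imath]4x \bmod 10[/imath] only takes the even values [imath]0,2,4,6,8[/imath]: solvability occurs exactly for the [imath]b[/imath] divisible by [imath]2=\gcd(4,10)[/imath].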
1243715
Finding solutions to congruence equations In my notes it has a theorem, stating: [imath]ax\equiv b\mod m[/imath] has solutions if and only if [imath]\gcd(a,m)|b[/imath]. The proof going from right to left is: If [imath]d=\gcd(a,m)[/imath], [imath]d|b \Rightarrow b=td[/imath]. We write [imath]d=ra+sm[/imath] for [imath]r,s\in\mathbb{Z}[/imath] so [imath]b=t(ra+sm)=tra+tsm\equiv (tr)a\mod m[/imath], so [imath]x=tr[/imath] is a solution. Why is [imath]x=tr[/imath] a solution?
2319893
How to prove [imath]ax \equiv 0 \mod m[/imath] has solutions [imath]x \not\equiv 0[/imath] if [imath]\gcd(a,m) \not= 1[/imath] I'm thinking about [imath]ax \equiv 0 \mod m[/imath] [imath]\Leftrightarrow[/imath] [imath]m \mid (ax - 0) \Leftrightarrow m\mid ax[/imath] Since [imath]\gcd(a,m) \not= 1[/imath], we have [imath]m \mid a \Rightarrow m\mid ax[/imath]. But I'm not totally sure about this. Any hints?
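A concrete instance plus a general construction (an editorial sketch, not part of the original attempt; note that [imath]\gcd(a,m)\ne 1[/imath] does not by itself give [imath]m\mid a[/imath], e.g. [imath]a=4,\ m=6[/imath]): with [imath]d=\gcd(a,m)>1[/imath], the choice [imath]x=m/d[/imath] works,
[imath]a\cdot\frac{m}{d}=\frac{a}{d}\cdot m\equiv 0\pmod m,\qquad 0<\frac{m}{d}<m\ \Rightarrow\ x\not\equiv 0,[/imath]
for example [imath]a=4,\ m=6,\ d=2,\ x=3:\ 4\cdot 3=12\equiv 0\pmod 6[/imath].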
67928
How to prove [imath]\bar{m}[/imath] is a zero divisor in [imath]\mathbb{Z}_n[/imath] if and only if [imath]m,n[/imath] are not coprime Let us consider the ring [imath]\mathbb{Z}_n[/imath] where [imath]\bar{m}\in\mathbb{Z}_n[/imath] Could anyone help me prove that [imath]\bar{m}[/imath] is a zero divisor in [imath]\mathbb{Z}_n[/imath] if and only if [imath]m,n[/imath] are not coprime. So far I have: Assume [imath]\exists \bar{a}: \bar{m} \bar{a}=n \mathbb{Z} \Rightarrow[/imath] for some [imath]b\in\mathbb{Z}:am=bn[/imath] I then assumed [imath]n,m[/imath] were coprime and attempted to use [imath]\exists a',b'\in \mathbb{Z}:a'm+b'n=1[/imath] to come to a contradiction, however I haven't found one; the most promising thing I have found so far is that [imath]a=n(a'b+ab')[/imath] and [imath]b=m(a'b+ab')[/imath] [imath]\Rightarrow n|a[/imath] and [imath]m|b[/imath]
2320461
For [imath]\mathbb{R}[/imath], does every open cover have a "countable" subcover? I am reading intro theory on Heine-Borel where the equivalent statement to "a set [imath]E[/imath] contained in [imath]\mathbb{R}[/imath] is compact" (closed and bounded) is that "every open cover for [imath]E[/imath] has a finite subcover". Here, they give an example of how an open set [imath]A[/imath] cannot have a finite subcover: Consider [imath]A[/imath] to be the open interval [imath](0,1)[/imath]. Then for each point [imath]x[/imath] in [imath](0,1)[/imath], consider the infinite collection of sets [imath]O_x = \{(\frac{x}{2},1) : x \in (0,1)\}[/imath] whose union serves as an open cover for [imath](0,1)[/imath]. But it is impossible to find a finite subcover for [imath]O_x[/imath] because for any minimum value of [imath]x[/imath] in [imath](0,1)[/imath] we can find [imath]0 \le y \le \frac{x}{2}[/imath] where [imath]y[/imath] is not contained in the finite union of sets. I assume this weakness lies in the fact that, since there are real numbers arbitrarily close to [imath]0[/imath], we can always find such a [imath]y[/imath] that is smaller than [imath]\frac{x}{2}[/imath], and thus a finite union of sets is insufficient. But what if we allowed a countable union of sets, i.e., a subcollection with the same cardinality as the natural numbers? Then, on [imath]\mathbb{R}[/imath], must every open cover have a "countable" subcover?
81216
Every open cover of the real numbers has a countable subcover (Lindelöf's lemma) How can I prove that every open cover of the real numbers [imath]\mathbb{R}[/imath] has a countable subcover, without using more sophisticated results from topology and assuming only a real analysis background? I've found a proof using the second-countable space characterization, but since I have never studied general topology before, it's hard for me to come up with a countable base for the real line. My intuition says to transform the open cover into disjoint open subsets, but how do I achieve that?
2318413
Prove that if A is a square matrix with linearly independent columns, then [imath]A^+[/imath] = [imath]A^{-1}[/imath] I have figured out a possible solution and I only need someone to tell me if I'm correct. The solution is the following: If a matrix has linearly independent columns, then [imath]A^+[/imath] = [imath](A^TA)^{-1}A^T[/imath] Therefore, using [imath](A^TA)^{-1} = A^{-1}(A^T)^{-1}[/imath] (both factors are invertible because [imath]A[/imath] is square with linearly independent columns): = [imath]A^{-1}(A^T)^{-1}A^T[/imath] = [imath]A^{-1}[/imath] since [imath](A^T)^{-1}A^T = I[/imath] So [imath]A^+ = A^{-1}[/imath] Am I right with this proof? Thank you
2318428
If [imath]A[/imath] is a non-square matrix with orthonormal columns, what is [imath]A^+[/imath]? If a matrix has orthonormal columns, they must be linearly independent, so [imath]A^+ = (A^T A)^{−1} A^T[/imath] . Also, the fact that its columns are orthonormal gives [imath]A^T A = I[/imath]. Therefore, [imath]A^+ = (A^T A)^{−1} A^T = (I)^{-1}A^T = A^T[/imath] Thus, [imath]A^+ = A^T[/imath]. Am I correct? Thank you.
2313045
A function f on a metric space is uniformly continuous iff for every pair of subsets A, B, d(A, B)=0 implies d'(f(A), f(B))=0 I have a function [imath]f:X \rightarrow Y[/imath], where [imath](X,d)[/imath] and [imath](Y,d')[/imath] are metric spaces. I need to prove that [imath]f[/imath] is uniformly continuous on [imath]X[/imath] iff for all [imath]A, B \subseteq X[/imath], [imath]d(A, B)=0[/imath] implies [imath]d'(f(A), f(B))=0[/imath]. Necessity is clear; I can't show the sufficiency. I tried by contradiction. Suppose that [imath]f[/imath] is not uniformly continuous. Then, for some [imath]\epsilon_0[/imath] there are sequences [imath]a_n[/imath] and [imath]b_n[/imath] such that [imath]d(a_n, b_n)\rightarrow0[/imath] and [imath]d'(f(a_n), f(b_n))>\epsilon_0[/imath]. Therefore, I defined the sets [imath]A[/imath] and [imath]B[/imath] as the sets containing the [imath]a_n[/imath]'s and [imath]b_n[/imath]'s, respectively. So [imath]d(A,B)=0[/imath]. By hypothesis [imath]d'(f(A), f(B))=0[/imath], but I don't see how to use this fact. I proved the continuity of the function on X, but I'm stuck on proving uniform continuity; I can't show this part.
2062447
Characterisation of uniformly continuous function I have the following exercise: Let [imath](X,d)[/imath], [imath](Y,e)[/imath] be metric spaces. This is the definition of distance of sets used in the exercise: [imath]d(A,B)=inf\{d(a,b)\colon a \in A, b \in B\}[/imath] [imath]d(f(A),f(B))=inf\{e(f(a),f(b))\colon a \in A, b \in B\}[/imath] The exercise is: Prove that [imath]f\colon X \rightarrow Y[/imath] is uniformly continuous if and only if, for all non-empty sets [imath]A[/imath],[imath]B[/imath] in [imath]X[/imath] such that [imath]d(A,B)=0[/imath] we always have that [imath]d(f(A),f(B))=0[/imath]. If we suppose that [imath]f[/imath] is uniformly continuous, the implication is easy. But the converse is very hard for me. Let me show you what I have tried: Suppose that for all non-empty sets [imath]A[/imath],[imath]B[/imath] in [imath]X[/imath] such that [imath]d(A,B)=0[/imath] we always have that [imath]d(f(A),f(B))=0[/imath], and for the sake of a contradiction suppose also that [imath]f[/imath] is not uniformly continuous. Then there exists [imath]\epsilon_0>0[/imath] such that for all [imath]\delta>0[/imath] there exist [imath]x_\delta[/imath], [imath]y_\delta[/imath] in [imath]X[/imath] such that [imath]d(x_\delta, y_\delta)<\delta[/imath] but [imath]e(f(x_\delta),f(y_\delta)) \geq \epsilon_0[/imath]. In particular, for all [imath]\delta=\frac{1}{n}>0[/imath] there exist [imath]x_n,y_n[/imath] in [imath]X[/imath] such that [imath]d(x_n,y_n)<\frac{1}{n}[/imath] but [imath]e(f(x_n),f(y_n)) \geq \epsilon_0[/imath] Then [imath]A=\{x_n \colon n \in \mathbb{N}\}[/imath] and [imath]B=\{y_n \colon n \in \mathbb{N}\}[/imath] are such that [imath]d(A,B)=0[/imath]. Then by hypothesis, we have that [imath]d(f(A),f(B))=0[/imath]. Then, in particular for [imath]\epsilon_0>0[/imath], there exist [imath]x_n,y_m[/imath] in [imath]X[/imath] such that [imath]e(f(x_n),f(y_m))<\epsilon_0[/imath]. But I get a contradiction if [imath]n=m[/imath] but I don't know how to proceed in the case that [imath]n\neq m[/imath]. Any help would be appreciated.
2321133
Prove the following trigonometric identity: [imath]\frac{\tan A+\sec A-1}{\tan A-\sec A+1}=\frac{1+\sin A}{\cos A}[/imath] Prove: [imath]\frac{\tan A+\sec A-1}{\tan A-\sec A+1}=\frac{1+\sin A}{\cos A}[/imath] My attempt: LHS= [imath]\frac{\tan A+\sec A-1}{\tan A-\sec A+1}[/imath] [imath]=\frac{\frac{\sin A}{\cos A}+\frac{1}{\cos A}-1}{\frac{\sin A}{\cos A}-\frac{1}{\cos A}+1}[/imath] [imath]=\frac{\sin A+1-\cos A}{\sin A-1+\cos A}[/imath] [imath]\text{Using componendo and dividendo}[/imath] [imath]\frac{\sin A+1-\cos A+\sin A-1+\cos A}{\sin A+1-\cos A-\sin A+1-\cos A}[/imath] [imath]\frac{2\sin A}{2-2\cos A}=\frac{\sin A}{1-\cos A}=\frac{\sin A(1+\cos A)}{(1-\cos A)(1+\cos A)}[/imath] [imath]\frac{\sin A(1+\cos A)}{\sin^2 A}=\frac{1+\cos A}{\sin A}[/imath] [imath]\text{Which is not equal to the right hand side!}[/imath] Have I done something wrong in the componendo-dividendo step? I don't know how to use the componendo-dividendo rule. I saw it being used like this in some question and hence applied it here the same way. Maybe I am wrong in my application of that rule. Please tell me the right way to use it. Thank you.
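An editorial note on where the manipulation can go wrong (not part of the original post): componendo and dividendo converts an equality of two ratios, [imath]\tfrac{a}{b}=\tfrac{c}{d}\Rightarrow\tfrac{a+b}{a-b}=\tfrac{c+d}{c-d}[/imath]; it does not say that a single fraction keeps its value under [imath]\tfrac{a}{b}\mapsto\tfrac{a+b}{a-b}[/imath]. A small numerical check:
[imath]\frac{2}{4}=\frac{3}{6}\ \Rightarrow\ \frac{2+4}{2-4}=\frac{3+6}{3-6}=-3,\qquad\text{yet}\qquad -3\neq\frac{1}{2}=\frac{2}{4},[/imath]
which matches the discrepancy observed above: the transformed fraction need not equal the original one.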
2833244
Proving [imath]\frac{\tan x+\sec x-1}{\tan x-\sec x+1}=\frac{1+\sin x}{\cos x}[/imath] How do I start off this question myself: Show that: [imath]\frac{\tan x+\sec x-1}{\tan x-\sec x+1}=\frac{1+\sin x}{\cos x}[/imath] I have tried to express the LHS in terms of [imath]\sin x[/imath] and [imath]\cos x[/imath], simplified the resulting expression, squared the numerator and denominator and simplified again; by this time I was lost in the forest…
2320596
If [imath]\Vert Tx-Ty \Vert = \Vert x-y \Vert[/imath] for all [imath]x,y \in X[/imath] and [imath]T(0)=0[/imath] then T is a linear map. Problem: Let [imath]X[/imath] and [imath]Y[/imath] be normed vector spaces and [imath]T:X\rightarrow Y[/imath] a map such that [imath]\Vert Tx-Ty \Vert = \Vert x-y \Vert[/imath] for all [imath]x,y \in X[/imath] and [imath]T(0)=0[/imath]; show that T is a linear map. My attempt: If I evaluate at [imath]y=0[/imath]: [imath]\Vert Tx \Vert = \Vert x\Vert \quad \mbox{for all $x \in X$}[/imath] Then [imath]\Vert T(x+y) \Vert = \Vert x+y\Vert \leq \Vert x\Vert + \Vert y\Vert = \Vert Tx\Vert+\Vert Ty\Vert[/imath] But I do not know how to continue.
194538
Showing that an Isometry on the Euclidean Plane fixing the origin is Linear Suppose [imath]f[/imath] is an isometric (i.e., distance preserving) function on [imath]\mathbb{E}^2[/imath] such that [imath]f(0,0) = (0,0)[/imath]. Then I want to show that [imath]f[/imath] is necessarily linear. Now [imath]f[/imath] is linear iff [imath]f[/imath] is both additive and homogenous. The following is an attempted proof for the homogeneity of f (missing the last step); still more, I have no idea how to argue for the additivity of [imath]f[/imath]. Any ideas? Let [imath]x \in \mathbb{E}^2[/imath] and [imath]\alpha \in \mathbb{R}[/imath] We know that [imath]\forall x \in \mathbb{E}^2[/imath], [imath]\Vert x - 0 \Vert = \Vert f(x) - f(0)\Vert = \Vert f(x) - 0 \Vert[/imath] so that [imath]\Vert x\Vert = \Vert f(x)\Vert[/imath]. From this we immediately have the following facts: [imath]\Vert x \Vert = \Vert f(x) \Vert[/imath] [imath]\Vert \alpha x \Vert = \Vert f(\alpha x) \Vert[/imath] We can then argue that since [imath]\Vert \alpha x\Vert = |\alpha| \Vert x \Vert = |\alpha| \Vert f(x) \Vert = \Vert \alpha f(x) \Vert[/imath], we also have that [imath]\Vert f(\alpha x) \Vert = \Vert \alpha f(x)\Vert[/imath]. Finally, we have that [imath]\Vert \alpha x - x \Vert = \Vert \alpha f(x) - f(x) \Vert[/imath] iff [imath] \Vert (\alpha - 1)x \Vert = \Vert (\alpha - 1) f(x) \Vert[/imath] iff [imath]|\alpha - 1| \Vert x \Vert = |\alpha - 1| \Vert f(x) \Vert[/imath] Since the last of these statements is in fact true, we now have [imath]\Vert \alpha x - x \Vert = \Vert \alpha f(x) - f(x) \Vert[/imath] as desired. Now at this point it seems like I have all of the facts required to assert that [imath]f(\alpha x) = \alpha f(x)[/imath], but I can't figure out how to formally state why without illegally appealing to visual intuition. Any ideas?
2311583
Evaluating [imath]\int_0^1 \sqrt{1 + x ^4 } \, d x [/imath] [imath] \int_{0}^{1}\sqrt{\,1 + x^{4}\,}\,\,\mathrm{d}x [/imath] I used the substitution [imath]\tan x=z[/imath] but it was not fruitful. Then I used [imath] (x-1/x)= z[/imath] and [imath]x^2-1/x^2=z [/imath] but no helpful expression was derived. I also used the property [imath]\int_0^a f(a-x)\,dx=\int_0^a f(x)\,dx [/imath] Please help me out.
2721939
[imath]\int_{0}^{1} \sqrt[]{1+t^4}dt[/imath] I tried using [imath]t^2[/imath] = [imath]tan\theta[/imath] and then replacing [imath]t^4[/imath] with [imath]tan^2\theta[/imath] in [imath]\int_{0}^{1} \sqrt[]{1+t^4}dt[/imath], and I get [imath]dt[/imath] = [imath]\frac{sec^2\theta\times d\theta}{2\times \sqrt[]{tan\theta}}[/imath] & [imath]\sqrt[]{1+t^4}[/imath] = [imath]sec\theta[/imath]. Thus the integral becomes [imath]\int_{0}^{\frac{\pi}{4}} \frac{sec^3 \theta \times d\theta}{2\times \sqrt{tan\theta}}[/imath] What should I do after this step? I have tried the following, which I had mistakenly written in the answer column. \begin{align} \int_{0}^{1}\sqrt{1+t^4}\,\mathrm{d}t &=\int_{0}^{1}\frac{1+t^4}{\sqrt{1+t^4}}\,\mathrm{d}t\\ &=\int_{0}^{1}\frac{\mathrm{d}t}{\sqrt{1+t^4}}+\int_{0}^{1}\frac{t^4}{\sqrt{1+t^4}}\,\mathrm{d}t\\ &=\int_{0}^{1}\frac{\mathrm{d}t}{\sqrt{1+t^4}}+\int_{0}^{1}t\cdot\frac{t^3}{\sqrt{1+t^4}}\,\mathrm{d}t\\ &=\int_{0}^{1}\frac{\mathrm{d}t}{\sqrt{1+t^4}}+\left[\frac12t\sqrt{1+t^4}\right]_{0}^{1}-\frac12\int_{0}^{1}\sqrt{1+t^4}\,\mathrm{d}t\\ &=\int_{0}^{1}\frac{\mathrm{d}x}{\sqrt{1+x^4}}+\frac121\sqrt{1+1^4}-\frac12\int_{0}^{1}\sqrt{1+t^4}\,\mathrm{d}t\\ \implies \frac32\int_{0}^{1}\sqrt{1+t^4}\,\mathrm{d}t &=\frac{1}{2}\sqrt{1+1^4}+\int_{0}^{1}\frac{\mathrm{d}t}{\sqrt{1+t^4}}\\ \implies \int_{0}^{1}\sqrt{1+t^4}\,\mathrm{d}t &=\frac{1}{3}\sqrt{1+1^4}+\frac23\int_{0}^{1}\frac{\mathrm{d}t}{\sqrt{1+t^4}}.\\ \end{align}
2321129
Proving that a function is not totally differentiable at [imath](0,0)[/imath] I am trying to show that [imath]g(x,y)=y f(x,y)[/imath] with [imath]f: \mathbb R^2 \rightarrow \mathbb R [/imath] [imath] f(x,y) = \begin{cases} \dfrac {2xy^2}{x^2+y^4} & (x,y)\ne (0,0) \\\\ 0 & (x,y)=(0,0) ~ \end{cases} [/imath] is not totally differentiable at [imath](0,0)[/imath]. What I have already found out about the functions is that [imath]f[/imath] is not continuous at [imath](0,0)[/imath], and the function [imath]g[/imath] has all directional derivatives and they are equal to [imath]0[/imath]. The criterion I use for total differentiability is that [imath]f[/imath] is totally differentiable at [imath]x[/imath] if there exists a linear map [imath]L[/imath] such that [imath]\lim_{h \to 0}\frac{g(x+h)-g(x)-Lh}{|h|}=0[/imath] For [imath](x,y)=(0,0)[/imath] we get [imath]\lim_{h \to 0}\frac{g(h)-g(0)-Lh}{|h|}=\lim_{h \to 0}\frac{h_2f(h)-0f(0)-Lh}{|h|}=\lim_{h \to 0}h_2\frac{f(h)-f(0)}{|h|}-\frac{Lh}{|h|}=0 [/imath] Is it legitimate to deduce from this that [imath]g[/imath] is not totally differentiable at [imath](0,0)[/imath] because [imath]lim_{h \to 0}\frac{f(h)-f(0)}{|h|}[/imath] does not exist, as [imath]f[/imath] is not continuous at [imath](0,0)[/imath]?
967372
determine whether [imath]f(x, y) = \frac{xy^3}{x^2 + y^4}[/imath] is differentiable at [imath](0, 0)[/imath]. I am new to multivariable calculus and my textbook doesn't give out solutions so I'm just wondering how you go about proving something like this? I know that a function is differentiable at a point [imath]a[/imath] if it's continuous at [imath]a[/imath] and the partial derivatives of [imath]f[/imath] exist near [imath]a[/imath], but I have never actually seen an example. Here's the question: Assume [imath]f(0, 0) = 0[/imath], and determine whether [imath]f(x, y) = \frac{xy^3}{x^2 + y^4}[/imath] is differentiable at [imath](0, 0)[/imath].
2321734
Proving uniform convergence of [imath]\sqrt{x^2+1/n}[/imath] Given [imath]h_n(x) = \sqrt{x^2+\frac{1}{n}}[/imath], I'm asked to first compute the pointwise limit of [imath]h_n[/imath] and then prove that it converges uniformly on [imath]\mathbb{R}[/imath]. For pointwise convergence, I think that I'm supposed to fix [imath]x \in \mathbb{R}[/imath] and consider [imath]\lim_{n\rightarrow \infty} \ h_n(x) = \lim_{n\rightarrow \infty} \sqrt{x^2+\frac{1}{n}} = \lim_{n\rightarrow \infty} \sqrt{x^2} = |x|[/imath], so I say that [imath]h_n \rightarrow h[/imath] pointwise where [imath]h:\mathbb{R} \rightarrow\mathbb{R}[/imath], [imath]h(x) = |x|[/imath]. I'm having trouble showing that [imath]h_n\rightarrow h[/imath] uniformly. Given the definition of uniform convergence [imath]\forall \epsilon > 0, \exists N(\epsilon)\in\mathbb{N}; |h_n(x)-h(x)|<\epsilon \text{ for all } n \ge N(\epsilon) \text{ and all } x \in \mathbb{R},[/imath] it seems that I would need to produce an [imath]N(\epsilon)[/imath] such that this inequality holds. However, I'm not having any luck with manipulating this inequality in order to find an appropriate [imath]N(\epsilon)[/imath]. Is there another way to do it or a trick I'm not seeing? Thanks for the help!
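One standard bound that produces the required [imath]N(\epsilon)[/imath] (an editorial sketch, not part of the original attempt): rationalizing the difference of the two square roots gives a bound independent of [imath]x[/imath],
[imath]0\le\sqrt{x^2+\tfrac1n}-|x|=\frac{1/n}{\sqrt{x^2+\tfrac1n}+|x|}\le\frac{1/n}{\sqrt{1/n}}=\frac{1}{\sqrt{n}},[/imath]
so any [imath]N(\epsilon)[/imath] with [imath]1/\sqrt{N(\epsilon)}<\epsilon[/imath] works for every [imath]x\in\mathbb{R}[/imath] simultaneously.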
1273946
Determine if the convergence of [imath]f_n(x) = \sqrt{x^2 + \frac{1}{n^2}}[/imath] is uniform. [imath]f_n(x) = \sqrt{x^2 + \frac{1}{n^2}}[/imath] converges pointwise on the set [imath]E = [0, \infty)[/imath] to [imath]f(x) = x[/imath]. This problem reminds me a lot of how [imath]\frac{x}{n}[/imath] fails to converge uniformly to [imath]f=0[/imath] on [imath][0,\infty)[/imath], but does converge uniformly on [imath][0,a][/imath]. However, when we proved that, we used the definition of uniform convergence. I do not wish to do that here. I wish to use the Weierstrass test here. And that is where I'm stuck. I cannot find [imath]M_n = \sup\limits_{x \in E} |f_n(x)-f(x)|[/imath]. Can someone help me out here?
2321015
Is this a necessary condition for the union of a collection of subspaces to be a subspace? Let [imath]X[/imath] be a vector space over a field [imath]F[/imath], let [imath]\left\{ \ Y_j \ \colon \ j \in J \ \right\}[/imath] be a non-empty collection of (vector) subspaces of [imath]X[/imath]. Then the intersection [imath] \bigcap_{j \in J} Y_j[/imath] is indeed a subspace of [imath]X[/imath], but the union [imath] \bigcup_{j \in J} Y_j[/imath] is not necessarily a subspace of [imath]X[/imath]. However, if, for some [imath]j_0 \in J[/imath], we have [imath] Y_j \subset Y_{j_0} \text{ for all } j \in J, [/imath] then of course [imath] \bigcup_{j \in J} Y_j = Y_{j_0}[/imath] and is thus also a subspace of [imath]X[/imath]. Now let's suppose that [imath] \bigcup_{j \in J} Y_j[/imath] is a subspace of [imath]X[/imath]. Then can we prove that, for some [imath]j_0 \in J[/imath], [imath] Y_j \subset Y_{j_0} \ \mbox{ for all } j \in J?[/imath] I know that the answer is in the affirmative if the collection consists of only two subspaces. What is the situation in general? A rigorous proof will be appreciated.
2083837
When is the union of a family of subspaces of a vector space also a subspace? It is not difficult to prove that the union of a chain (or, more generally, a directed family) of subspaces of a vector space [imath]V[/imath] is a subspace of [imath]V[/imath]. Given a family [imath]\mathcal{F}[/imath] of subspaces of a vector space [imath]V[/imath] such that the union of [imath]\mathcal{F}[/imath] is a subspace of [imath]V[/imath], is it true that [imath]\mathcal{F}[/imath] is a directed family? If not, is there a "nice" characterization of families of subspaces whose union is a subspace?
2319437
Proving a Trigonometric inequality For this question [imath]x[/imath] satisfies [imath]0 \leq x < \pi/2[/imath] Prove that: [imath]1 \leq \sec x \leq 1 + \tan x[/imath] I'm not sure how to start this problem. I tried changing [imath]\sec x[/imath] to [imath]1/\cos x[/imath] and [imath]\tan x[/imath] to [imath]\sin x/\cos x[/imath] to get: [imath]1 \leq 1/\cos x \leq 1 + \sin x/\cos x[/imath] And then multiplying by [imath]\cos x[/imath] to get: [imath]\cos x \leq 1 \leq \cos x + \sin x[/imath] I'm not sure if that is the correct way to solve this and if it is, I don't know where to go from here.
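A possible way to finish from the reduced inequality reached above (an editorial sketch, not part of the original attempt): on [imath][0,\pi/2)[/imath] both [imath]\sin x[/imath] and [imath]\cos x[/imath] lie in [imath][0,1][/imath], and [imath]t^2\le t[/imath] for every [imath]t\in[0,1][/imath], so
[imath]\cos x\le 1=\cos^2 x+\sin^2 x\le\cos x+\sin x,[/imath]
which is exactly [imath]\cos x \leq 1 \leq \cos x + \sin x[/imath]; and since [imath]\cos x>0[/imath] on this interval, multiplying through by [imath]\cos x[/imath] was a legitimate, reversible step, so the original chain [imath]1 \leq \sec x \leq 1 + \tan x[/imath] follows.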
524120
Question about trigonometry/trigonometry question? So in trig, say I have an acute angle [imath]X[/imath]. And one can intuitively conclude that [imath]\sin X + \cos X \ge 1[/imath] but how does the fact that [imath](\sin X + \cos X)^2 = 1 + 2 \sin X \cos X[/imath] tell me that it is true that [imath]\sin X + \cos X \ge 1[/imath]? I don't quite see the connection. Thanks
207406
Irreducible Components of the Prime Spectrum of a Quotient Ring and Primary Decomposition Recently I encountered a problem (the first exercise from chapter four of Atiyah & McDonald's Introduction to Commutative Algebra) stating that if [imath]\mathfrak{a}[/imath] is a decomposable ideal of [imath]A[/imath] (a commutative ring with unity), then the prime spectrum of [imath]A/ \mathfrak{a}[/imath] has finitely many irreducible components. This follows easily from the recognition that the maximal irreducible subspaces of [imath]\textrm{ Spec } (A / \mathfrak{a})[/imath] are precisely the "zero loci" of the minimal prime ideals of [imath]A /\mathfrak{a}[/imath]. I'm curious about the converse - the proof isn't easily reversed since the notion of minimal ideals of [imath]\mathfrak{a} [/imath] doesn't make sense before we know what we are trying to prove. My intuition says that it is false based on the general premise that images are badly-behaved (and the fact that it isn't part of the exercise.) However, I've had some difficulty constructing a counterexample, so that the main purpose of this post is to ask for a reasonable procedure or heuristic for doing so (or, of course, proof that my intuition is false.) If it helps, if [imath]\textrm{Spec }(A / \mathfrak{a})[/imath] is irreducible then the nilradical [imath]\mathcal{R}_{A /\mathfrak{a}}[/imath] is prime so that [imath]r(\mathfrak{a}) = \rho^{-1} ( \mathcal{R}_{A / \mathfrak{a}} ) = \displaystyle\cap_{i=1}^n \rho^{-1} (p_i),[/imath] where [imath]p_i[/imath] are the minimal prime ideals of [imath]A /\mathfrak{a}[/imath] and [imath]\rho [/imath] is the associated projection, is also prime. Thanks!
2383327
[imath]A[/imath] has only finitely many minimal prime ideals [imath]\implies\ (0)[/imath] is decomposable? Let [imath]A[/imath] be a commutative ring with only finitely many minimal prime ideals. Is the zero ideal [imath](0)[/imath] decomposable? [The converse implication is well known. Recall that an ideal is decomposable if it is a finite intersection of primary ideals.]
1092485
The n-th prime is less than [imath]n^2[/imath]? Let [imath]p_n[/imath] be the n-th prime number, e.g. [imath]p_1=2,p_2=3,p_3=5[/imath]. How do I show that for all [imath]n>1[/imath], [imath]p_n<n^2[/imath]?
206815
Is there a way to show that [imath]\sqrt{p_{n}} < n[/imath]? Is there a way to show that [imath]\sqrt{p_{n}} < n[/imath]? In this article, I show that [imath]f_{2}(x)=\frac{x}{ln(x)} - \sqrt{x}[/imath] is ascending, for [imath]\forall x\geq e^{2}[/imath]. As a result, [imath]\forall n \geq 3[/imath] [imath]\frac{p_{n}}{ln(p_{n})} - \sqrt{p_{n}}\leq \frac{p_{n+1}}{ln(p_{n+1})} - \sqrt{p_{n+1}}[/imath] Also (and as a result), [imath]\forall n \geq 3[/imath] [imath] \frac{p_{n}}{ln(p_{n})} - \sqrt{p_{n}} > 0[/imath] Or [imath] \frac{\pi (p_{n})}{p_{n}/ln(p_{n})} < \frac{\pi (p_{n})}{\sqrt{p_{n}}}[/imath] According to PNT [imath]\displaystyle\smash{\lim_{n \to \infty }}\frac{\pi (p_{n})}{p_{n}/ln(p_{n})}=1[/imath] Or, [imath]\forall \varepsilon >0[/imath], [imath]\exists N(\varepsilon )[/imath]: [imath]\forall n>N(\varepsilon )[/imath] [imath]1- \varepsilon < \frac{\pi (p_{n})}{p_{n}/ln(p_{n})} < 1+ \varepsilon[/imath] Or [imath]1- \varepsilon < \frac{\pi (p_{n})}{p_{n}/ln(p_{n})} < \frac{\pi (p_{n})}{\sqrt{p_{n}}}[/imath] As a result [imath]\forall \varepsilon >0[/imath], [imath]\exists N(\varepsilon )[/imath]: [imath]\forall n>N(\varepsilon )[/imath] [imath](1 - \varepsilon ) \cdot \sqrt{p_{n}} < \pi (p_{n}) = n[/imath] But this is not enough. Interestingly, Andrica's conjecture is true iff function [imath]f_{4}(x)=\pi (x) - \sqrt{x}[/imath] is strictly ascending ([imath]x < y \Rightarrow f(x) < f(y)[/imath]) for prime arguments. If [imath]f_{4}(p_{n}) < f_{4}(p_{n+1})[/imath] then [imath]\pi (p_{n}) - \sqrt{p_{n}} < \pi (p_{n+1}) - \sqrt{p_{n+1}}[/imath] Or [imath]\sqrt{p_{n+1}} - \sqrt{p_{n}} < \pi (p_{n+1}) - \pi (p_{n}) =1[/imath] And vice-versa, if [imath]\sqrt{p_{n+1}} - \sqrt{p_{n}} < 1[/imath] Then [imath]-\sqrt{p_{n}} < -\sqrt{p_{n+1}} + 1[/imath] Or [imath]\pi (p_{n})-\sqrt{p_{n}} < \pi (p_{n}) + 1 -\sqrt{p_{n+1}} = \pi (p_{n+1}) -\sqrt{p_{n+1}}[/imath] So, if Andrica's conjecture is true then [imath]\forall n \geq 3[/imath] [imath]\pi (p_{n})-\sqrt{p_{n}} > 0[/imath] Or [imath]\sqrt{p_{n}} < \pi (p_{n})= n[/imath]
2322291
Product of closed subgroups in topological groups We know that if [imath]G[/imath] is a topological group and [imath]C \subseteq G[/imath] is compact and [imath]A \subseteq G[/imath] is closed, then [imath]AC[/imath], [imath]CA[/imath] are closed. Is it right to say that the product of two closed subgroups of a topological group is closed?
1781180
Is the product of closed subgroups in topological group closed? Just out of curiosity: If [imath]G[/imath] is a topological group and [imath]H, K[/imath] are closed subgroups, is [imath]H\cdot K[/imath] a closed subgroup? Thanks!
2322685
How to prove [imath]\lim_{n\to\infty} (1+\frac{1}{n})^{n} = \lim_{n\to\infty} \sum_{k=0}^{n} 1/k![/imath]? Let [imath]e_n[/imath] be [imath](1+\frac{1}{n})^{n}[/imath]. Let [imath]E_n[/imath] be [imath]\sum_{k=0}^{n} \frac{1}{k!}[/imath]. Suppose the convergence of [imath](e_n)[/imath] and [imath](E_n)[/imath] is established and let [imath]e =lim_{n\to\infty}(e_n)[/imath]. By the Binomial Theorem, [imath]e_n = 1+1+\frac{1}{2!}(1-\frac{1}{n})+\ldots+\frac{1}{n!}(1-\frac{1}{n})(1-\frac{2}{n})\ldots(1-\frac{n-1}{n})[/imath] Let [imath]\epsilon >0[/imath]. Since [imath]e =lim_{n\to\infty}(e_n)[/imath], there exists [imath]N \in \mathbb{N}[/imath] such that for each [imath]n\ge N[/imath], [imath]|e_n-e| < \epsilon /2[/imath]. I was trying to show that there exists [imath]N_2 \in \mathbb{N}[/imath] such that [imath]|E_n-e_n| < \epsilon /2[/imath], but I failed. How to prove that the two sequences converge to the same limit?
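One standard route to the equality of the two limits (an editorial sketch, not part of the original attempt): the displayed binomial expansion gives a two-sided comparison rather than a direct [imath]|E_n-e_n|<\epsilon/2[/imath] estimate. Each bracketed factor is at most [imath]1[/imath], so [imath]e_n\le E_n[/imath] and hence [imath]e\le\lim E_n[/imath]; in the other direction, fixing [imath]m[/imath] and keeping only the first [imath]m+1[/imath] terms for [imath]n\ge m[/imath],
[imath]e_n\ \ge\ 1+1+\frac{1}{2!}\Bigl(1-\frac1n\Bigr)+\cdots+\frac{1}{m!}\Bigl(1-\frac1n\Bigr)\cdots\Bigl(1-\frac{m-1}{n}\Bigr),[/imath]
and letting [imath]n\to\infty[/imath] with [imath]m[/imath] fixed gives [imath]e\ge E_m[/imath] for every [imath]m[/imath]; combined with the first inequality, the two limits agree.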
637255
Prove that the limit definition of the exponential function implies its infinite series definition. Here's the problem: Let [imath]x[/imath] be any real number. Show that [imath] \lim_{m \to \infty} \left( 1 + \frac{x}{m} \right)^m = \sum_{n=0}^ \infty \frac{x^n}{n!} [/imath] I'm sure there are many ways of pulling this off, but there are 3 very important hints to complete the exercise in the desired manner: Expand the left side as a finite sum using the Binomial Theorem. Call the summation variable [imath]n[/imath]. Now add into the finite sum extra terms which are [imath]0[/imath] for [imath]n>m[/imath], in order to make it look like an infinite series. What happens to the limit on [imath]m[/imath] outside the series? So far I was able to use Hint 1 to expand the left side: [imath] \lim_{m \to \infty} \left( 1 + \frac{x}{m} \right)^m = \lim_{m \to \infty} \sum_{n=0}^m \binom {m}{n} \left( \frac{x}{m} \right)^n [/imath] No matter what I do with the binomial coefficients and factorials, I can't figure out what extra terms to add per Hint 2. Any suggestions?
2322637
Is there a manifold which requires infinitely many charts to cover it? So, generally, a manifold is defined using only finitely many charts. A sphere can be covered by the graphs of 2n different continuous functions, or by just 2 stereographic projections. Obviously, there's no compact manifold that requires infinitely many charts, and there can't be more than countably many charts since manifolds are 2nd countable, but is there any manifold nasty enough to require countably many charts? Furthermore, if the answer is no, what if we narrow down the class of allowable charts, and demand that our manifold be smooth, analytic, Riemannian, or otherwise. Will any of these change the answer? edit: Because of Antonios-Alexandros Robotis' counterexample, I'm going to add the stipulation that a chart is defined as "a homeomorphism between an open subset of [imath]M[/imath] and an open subset of [imath]\mathbb{R}^n[/imath]".
75594
Surface where the number of coordinate charts in the atlas has to be infinite In the definition of a parametrised surface [imath]S[/imath], for every point in the surface, [imath]p \in W \subseteq S[/imath], where [imath]W[/imath] is open, there exists a coordinate chart or patch, [imath]F :U\to \mathbb{R}^n[/imath], that maps to [imath]p[/imath] from an open subset [imath]U \subseteq \mathbb{R}^n[/imath]. Is that right? If anyone knows of a more general definition, I'm willing to learn. It sounds a lot like a manifold, which I'm not entirely familiar with. In this definition, the number of surface patches in the atlas is not stipulated. Given a parametrisable surface, is a finite number of charts sufficient to describe the surface? Can we find a surface that requires infinitely many patches to fully chart? If so, in which [imath]\mathbb{R}^n[/imath] does the first such surface occur? In which dimensions is it always possible to find a finite number of patches for any given surface? EDIT: Added requirement that such a surface (manifold) be connected. One made from infinitely many disconnected subsets would have to be charted infinitely.
2323010
Let [imath]n[/imath] be any positive integer and let [imath]x \in (0, \pi)[/imath]. Prove that [imath]\sin x+ \frac{\sin 3x}{3} + \cdots +\frac{\sin(2n-1)x}{2n-1}[/imath] is positive. I came across the following problem in a high school calculus exam paper. I do have the solution, but it took me quite a while to work it out and I think it's very clumsy. I'm curious to know if there is a simpler solution. Problem Let [imath]n[/imath] be any positive integer and let [imath]x \in (0, \pi)[/imath]. Prove that [imath]\sin{x} + \frac{\sin{3x}}{3} + \frac{\sin{5x}}{5} + \cdots +\frac{\sin{(2n-1)x}}{2n-1}[/imath] is positive.
1876336
Prove [imath] \sin x + \frac{ \sin3x }{3} + ... + \frac{ \sin((2n-1)x) }{2n-1} >0 [/imath] Prove that for [imath] 0<x< \pi [/imath], [imath] \quad S_n(x) = \sin x + \frac{ \sin3x }{3} + ... + \frac{ \sin((2n-1)x) }{2n-1} >0 \quad \forall n = 1,2,... [/imath] Having trouble with this problem. This is an olympiad-style question, so an answer that doesn't use calculus or analysis would be preferred. A possible approach is induction, but for this we need to find a function in terms of [imath]n[/imath] and [imath]x[/imath] so that we can actually use the inductive step. If anyone has any ideas they would be appreciated. If you really want to go down the calculus route (at this point I don't mind), then [imath] S_n' (x) = \cos x + \cos 3x + ... +\cos((2n-1)x) [/imath] , which you can find a closed form for, but I don't know how useful that is.
2315519
Recursion formula for: [imath]a_n = \left(\frac12\right)^n+2n−1[/imath] How do I write a recursive formula for [imath]a_n = \left(\frac12\right)^n+2n−1[/imath]? We were taught to expand the recursion and then multiply it all together. So is the answer [imath]a_1=\frac32,\quad a_n=a_{n-1}+\frac52[/imath]? How do you show the expanding and multiplying?
2322846
Find a recursive formula for the sequence [imath]a_n = \left(\frac23\right)^n + n[/imath] So I start with: [imath]a_n = \left(\frac23\right)^n + n[/imath] I know [imath]a_1=\frac53, a_2=\frac{22}9, a_3=\frac{89}{27}, a_4=\frac{340}{81} [/imath] Then I do: [imath]a_{n-1} = \left(\frac23\right)^{n-1} + (n-1)[/imath] Do I now do [imath]a_n - a_{n-1}[/imath] or [imath]a_n\over a_{n-1}[/imath] ?
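One possible route for both of these sequences (an editorial sketch, not part of either post, and it assumes a recursion that still mentions [imath]n[/imath] explicitly is acceptable; the course may expect a different form): solve the closed form for the geometric part and substitute. For [imath]a_n=(2/3)^n+n[/imath],
[imath]\Bigl(\tfrac23\Bigr)^n=\tfrac23\Bigl(\tfrac23\Bigr)^{n-1}=\tfrac23\bigl(a_{n-1}-(n-1)\bigr)\ \Longrightarrow\ a_n=\tfrac23\,a_{n-1}+\frac{n+2}{3},\qquad a_1=\tfrac53,[/imath]
and as a check, [imath]a_2=\tfrac23\cdot\tfrac53+\tfrac43=\tfrac{22}{9}[/imath], matching the value computed directly. The same substitution applied to [imath]a_n=(1/2)^n+2n-1[/imath] gives [imath]a_n=\tfrac12 a_{n-1}+n+\tfrac12,\ a_1=\tfrac32[/imath] (check: [imath]a_2=\tfrac34+\tfrac52=\tfrac{13}{4}[/imath], which matches [imath]\tfrac14+4-1[/imath]).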