qid | Q | dup_qid | Q_dup
---|---|---|---
1969525 | In the real line, how many convex subsets are there up to homeomorphism?
I think the answer is 3. The convex sets are intervals, of course, and may be of any type; any two intervals that look alike are homeomorphic. So we add [imath](a,b)[/imath] and [imath][a,b][/imath] to our count. Since intervals like [imath](c,d][/imath] are not homeomorphic to either of the previous two intervals, it is a third type. Finally, [imath](c,d][/imath] is homeomorphic to [imath][a,b)[/imath], so we do not add [imath][a,b)[/imath]. Am I correct? | 56208 | Classification of connected subsets of the real line (up to homeomorphism)
I want to describe, up to homeomorphism, all the proper connected subsets of [imath]\mathbb{R}[/imath]. I know the theorem that [imath]A \subset \mathbb{R}[/imath] is connected if and only if [imath]A[/imath] is an interval. So consider the following intervals: [imath][a,b), [a,b], (a,b)[/imath] Note [imath][a,b)[/imath] is not homeomorphic to [imath][a,b][/imath] (compactness argument) and [imath][a,b)[/imath] is not homeomorphic to [imath](a,b)[/imath] (by connectedness). Similarly [imath][a,b][/imath] is not homeomorphic to [imath](a,b)[/imath] (remove a,b from [a,b]). Now any ray [imath](a,\infty)[/imath] is homeomorphic to [imath](1,\infty)[/imath] which is homeomorphic to [imath](0,1)[/imath] and [imath](0,1)[/imath] is homeomorphic to [imath](a,b)[/imath]. Similarly the ray [imath][-a,\infty)[/imath] is homeomorphic to [imath](0,1][/imath]. So I think this covers all the cases. So I believe the answer is [imath]3[/imath], yes? |
2344004 | Is [imath]\text{Sup}f(x)[/imath] continuous?
This is my assumption: Is this function [imath]F(h)[/imath] continuous? [imath]F(h):=\sup_{x\in [a, h]} f(x)[/imath] where [imath]f(x)[/imath] is continuous in [imath][a, b][/imath] and [imath]h\in[a,b][/imath]. I think this function is continuous, but I don't know how to prove it. | 690955 | Prove that [imath]M(t)=\sup_{a \leq x \leq t} f(x)[/imath] is continuous given [imath]f(x)[/imath] is continuous on [imath][a,b][/imath]
[imath]f(x)[/imath] is continuous on [imath][a,b][/imath]. Now we define a new function [imath]M(t)[/imath], for every [imath]t\in[a,b][/imath] [imath]M(t) = \sup_{a \leq x \leq t} f(x).[/imath] Prove formally that [imath]M(t)[/imath] is continuous on [imath][a,b][/imath]. (sup = supremum) [imath]M(t)[/imath] is a monotonically increasing function, but I don't see how to continue from here. Also I would like to know what the standard strategies are for proving a function is continuous on a given interval (not just at a point), other than showing it is a sum/product/composition of other continuous functions. |
2344648 | Prove that it is impossible to find any positive integers a, b, c such that [imath](2a+b)(2b+a) = 2^c[/imath]
Prove that it is impossible to find any positive integers a, b, c such that [imath](2a+b)(2b+a) = 2^c[/imath]. This problem has been driving me crazy. Thanks for helping. | 2342645 | Prove that the expression cannot be a power of 2
I have been boggled by this question for a while as well. Prove that [imath](2a+b)(2b+a)=2^c[/imath] is impossible. I know that if a and b do exist then they must be even. I am trying to use this fact to contradict the statement. I have also tried rewriting a and b as products of powers of two and an odd factor. |
2344978 | Why can this equation be written as determinant?
[imath]A(x^2+y^2)+Bx+Cy+D=0[/imath] is the equation of a circle in the plane provided [imath]A\ne0[/imath]. The equation above can be written as the determinant [imath] \begin{vmatrix} x^2+y^2 & x & y & 1 \\ x_1^2+y_1^2 & x_1 & y_1 & 1 \\ x_2^2+y_2^2 & x_2 & y_2 & 1 \\ x_3^2+y_3^2 & x_3 & y_3 & 1 \\ \end{vmatrix} = 0, [/imath] where [imath] A = \begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \\ \end{vmatrix}, B = \begin{vmatrix} x_1^2+y_1^2 & y_1 & 1 \\ x_2^2+y_2^2 & y_2 & 1 \\ x_3^2+y_3^2 & y_3 & 1 \\ \end{vmatrix}, [/imath] [imath] C = \begin{vmatrix} x_1^2+y_1^2 & x_1 & 1 \\ x_2^2+y_2^2 & x_2 & 1 \\ x_3^2+y_3^2 & x_3 & 1 \\ \end{vmatrix}, D = \begin{vmatrix} x_1^2+y_1^2 & x_1 & y_1 \\ x_2^2+y_2^2 & x_2 & y_2 \\ x_3^2+y_3^2 & x_3 & y_3 \\ \end{vmatrix}. [/imath] [imath]P_1(x_1, y_1, z_1)[/imath], [imath]P_2(x_2, y_2, z_2)[/imath], [imath]P_3(x_3, y_3, z_3)[/imath] are 3 points defining the circle and [imath]P(x, y, z)[/imath] is an arbitrary point. | 98530 | Equation of a sphere as the determinant of its variables and sampled points
Searching for an equation to find the center of a sphere given 4 points, one finds that taking the determinant of the four (non-coplanar) points together with the variables [imath]x[/imath], [imath]y[/imath], and [imath]z[/imath] arranged like so: [imath]\left|\begin{array}{ccccc} x^2+y^2+z^2 & x & y&z&1\\ x_1^2 + y_1^2 + z_1^2 & x_1 & y_1 & z_1 & 1\\ x_2^2 + y_2^2 + z_2^2 & x_2 & y_2 & z_2 & 1\\ x_3^2 + y_3^2 + z_3^2 & x_3 & y_3 & z_3 & 1\\ x_4^2 + y_4^2 + z_4^2 & x_4 & y_4 & z_4 & 1\\ \end{array}\right| = 0[/imath] yields the equation for the sphere. Then one need only re-arrange terms into the more familiar form to find the center and radius. This works fine. My question is why. This same approach also works for one or two dimensions. I'm guessing it also works for finding hyperspheres in higher-dimensional spaces as long as you have a corresponding number of points. But where did that determinant form come from? Is there an intuitive meaning for what that relationship is saying? |
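A quick numerical check of the determinant form discussed in this pair of questions (a sketch; the sample points and the use of NumPy are my own choices, not from either post): the [imath]4\times 4[/imath] determinant should vanish exactly when the variable point lies on the circle through the three fixed points.

```python
import numpy as np

def circle_det(p, p1, p2, p3):
    """4x4 determinant whose vanishing says p lies on the circle through p1, p2, p3."""
    rows = [[x**2 + y**2, x, y, 1.0] for (x, y) in (p, p1, p2, p3)]
    return np.linalg.det(np.array(rows))

# three points on the unit circle centred at the origin
p1, p2, p3 = (1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)

print(circle_det((0.0, -1.0), p1, p2, p3))  # ~0: (0, -1) lies on the same circle
print(circle_det((0.5, 0.5), p1, p2, p3))   # clearly nonzero: (0.5, 0.5) does not
```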
1190817 | Linear Algebra, Hoffman and Kunze's book. Chapter about linear functionals
Can anybody please help me solve this? Let [imath]F[/imath] be a field. We define [imath]n[/imath] linear functionals on [imath]F^n[/imath], for [imath]n \geq 2 [/imath], by: [imath]f_k(x_1,\ldots,x_n) = \sum \limits_{j = 1}^n (k-j)x_j [/imath]. What is the dimension of the subspace annihilated by [imath]f_1,f_2,\ldots,f_n[/imath]? I have tried to prove that the [imath]n[/imath] functionals are linearly independent. | 1074979 | Given [imath]n[/imath] linear functionals [imath]f_k(x_1,\dotsc,x_n) = \sum_{j=1}^n (k-j)x_j[/imath], what is the dimension of the subspace they annihilate?
Let [imath]F[/imath] be a subfield of the complex numbers. We define [imath]n[/imath] linear functionals on [imath]F^n[/imath] ([imath]n \geq 2[/imath]) by [imath]f_k(x_1, \dotsc, x_n) = \sum_{j=1}^n (k-j) x_j[/imath], [imath]1 \leq k \leq n[/imath]. What is the dimension of the subspace annihilated by [imath]f_1, \dotsc, f_n[/imath]? Approaches I've tried so far: Construct a matrix [imath]A[/imath] whose [imath]k[/imath]th row's entries are the coefficients of [imath]f_k[/imath], i.e., [imath]A_{ij} = i - j[/imath], and compute the rank of the matrix. Empirically, the resulting matrix has rank 2 for [imath]n = 2[/imath] to [imath]n = 6[/imath], but I don't see a convenient pattern to follow for a row reduction type proof for the general case. Observe that [imath]f_k[/imath], [imath]k \geq 2[/imath] annihilates [imath]x = (x_1,\dotsc, x_n)[/imath] iff [imath]\sum_{j=1}^{k-1} (k-j)x_j + \sum_{j = k+1}^n (j-k)x_j = 0[/imath] iff [imath]k\left(\sum_{j=1}^{k-1} x_j - \sum_{j=k+1}^n x_j\right) = \sum_{j=1}^{k-1} jx_j - \sum_{j=k+1}^n jx_j,[/imath] and go from there, but I do not see how to proceed. Note: This is Exercise 10 in Section 3.5 ("Linear Functionals") in Linear Algebra by Hoffman and Kunze; eigenvalues and determinants have not yet been introduced and so I am looking for direction towards an elementary proof. |
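A small numerical reproduction of the "rank 2" observation made in the second post (a sketch; NumPy and the range of [imath]n[/imath] are my own choices), consistent with the annihilated subspace having dimension [imath]n-2[/imath]:

```python
import numpy as np

for n in range(2, 9):
    # A_kj = k - j; the 0-based vs 1-based index shift cancels out
    A = np.fromfunction(lambda i, j: i - j, (n, n))
    print(n, np.linalg.matrix_rank(A))   # prints rank 2 for every n >= 2
```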
2344570 | Having trouble understanding formally why this proof is incorrect
I am working through Velleman's How To Prove It on my own and I am having trouble figuring out exactly why this proof is incorrect in a formal manner: I have this intuitive feeling that makes sense to me that it should NOT follow that [imath]\forall x\in A(x \in B)[/imath] since you can't ensure that all elements of [imath]A[/imath] are also in [imath]B[/imath]. I can even come up with a simple counterexample. But I don't know which formal step taken in the proof is incorrect. Following Velleman's methods, if I have the goals [imath]\forall x \in A(x \in B)[/imath] or [imath]\forall x \in A(x \in C)[/imath], I can assume that [imath]x[/imath] is an arbitrary element of [imath]A[/imath]. Then all I need to prove are the goals [imath]x \in B[/imath] or [imath]x \in C[/imath]. I believe everything is correct up to this point. Then the proof deduces from [imath]A \subseteq B\space\cup C[/imath] that [imath]x \in B[/imath] or [imath]x \in C[/imath]. Assuming either one of them proves one of the goals true and the theorem is proven. I know one of these steps in here is incorrect but I was wondering of someone can point me to what exactly went wrong. Thanks! | 456060 | An incorrect proof by exhaustion
Consider the following putative theorem. Theorem? Suppose A, B, and C are sets and [imath]A \subseteq B \cup C[/imath]. Then either [imath]A \subseteq B[/imath] or [imath]A \subseteq C[/imath]. What's wrong with the following proof? Proof. Let [imath]x[/imath] be an arbitrary element of A. Since [imath]A \subseteq B \cup C[/imath], it follows that either [imath]x \in B[/imath] or [imath]x \in C[/imath]. Case 1. [imath]x \in B[/imath]. Since x was an arbitrary element of A, it follows that [imath]\forall x \in A(x \in B)[/imath], which means that [imath]A \subseteq B[/imath]. Case 2. [imath]x \in C[/imath]. Similarly, since x was an arbitrary element of A, we can conclude that [imath]A \subseteq C[/imath]. Thus, either [imath]A \subseteq B[/imath] or [imath]A \subseteq C[/imath]. I have proved that the theorem is incorrect but I can't understand why the above proof is not correct. |
2345380 | Is [imath]\mathbb{Z}[\sqrt{-1}]/(7)[/imath] isomorphic to [imath]\mathbb{Z}[\sqrt{-2}]/(7)[/imath]?
Is [imath]\mathbb{Z}[\sqrt{-1}]/(7)[/imath] isomorphic to [imath]\mathbb{Z}[\sqrt{-2}]/(7)[/imath]? Here [imath](7)[/imath] is the ideal generated by [imath]7[/imath]. If so, please tell me the explicit isomorphism. And how did you come up with the construction? | 1009188 | Finding an isomorphism between [imath]\mathbb{Z}[i]/(7)[/imath] and [imath]\mathbb{Z}[\sqrt{-2}]/(7)[/imath]
I was having some trouble finding an explicit isomorphism between [imath]\mathbb{Z}[i]/(7)[/imath] and [imath]\mathbb{Z}[\sqrt{-2}]/(7)[/imath]. [imath]\textbf{What I have noticed is}:[/imath] 7 is a prime element in [imath]\mathbb{Z}[i][/imath] so [imath](7)[/imath] is a maximal ideal in [imath]\mathbb{Z}[i][/imath] and [imath]\mathbb{Z}[i]/(7)[/imath] is a field. 7 is also a prime element in [imath]\mathbb{Z}[\sqrt{-2}][/imath] [imath]\textbf{What I have been trying to do is this}[/imath] Find a surjective homomorphism [imath]\mathbb{Z}[i]/(7) \rightarrow \mathbb{Z}[\sqrt{-2}]/(7)[/imath]. Since [imath]\mathbb{Z}[i]/(7)[/imath] is a field, the kernel of this homomorphism will be either the whole ring or just [imath]0[/imath]. In the latter case it would be an isomorphism. I am having trouble finding this surjective homomorphism: I have noticed that [imath]\bar{{i}}^{2}=-1[/imath] so [imath]\bar{{i}}[/imath] must be sent to something whose square is [imath]-1[/imath]. Any help would be appreciated. I may be missing some obvious insight. |
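A quick arithmetic check of the facts noted above (a sketch in Python; the brute-force root search is my own way of verifying it, not something from either post): neither [imath]-1[/imath] nor [imath]-2[/imath] is a square mod [imath]7[/imath], so [imath]x^2+1[/imath] and [imath]x^2+2[/imath] are irreducible over [imath]\mathbb{F}_7[/imath] and both quotients are fields with [imath]49[/imath] elements.

```python
# x^2 + 1 and x^2 + 2 have no root mod 7 (degree 2, hence irreducible),
# so Z[i]/(7) and Z[sqrt(-2)]/(7) are both fields with 7^2 = 49 elements.
for c in (1, 2):
    roots = [a for a in range(7) if (a * a + c) % 7 == 0]
    print(f"x^2 + {c} mod 7: roots = {roots}")   # both lists are empty
```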
603183 | There is a surjective homomorphism from [imath]\Bbb Z * \Bbb Z[/imath] onto [imath]C_2*C_3[/imath]
Prove that there is a surjective homomorphism (an epimorphism) from [imath]\Bbb Z*\Bbb Z[/imath] onto [imath]C_2*C_3[/imath], where [imath]A*B[/imath] is the coproduct of [imath]A[/imath] and [imath]B[/imath] in [imath]\mathsf{Grp}[/imath]. Aluffi asks this question, oddly enough, immediately before revealing what [imath]C_2*C_3[/imath] actually is (which he does in the next exercise). If I jump ahead and use the result of the next exercise, it's entirely straightforward. But without that, I can clearly see that there are surjective homomorphisms [imath]f\colon \Bbb Z\to C_2[/imath] and [imath]g\colon \Bbb Z\to C_3[/imath], so there must be a unique homomorphism [imath]h\colon \Bbb Z*\Bbb Z\to C_2*C_3[/imath] such that [imath]h \circ i_{\Bbb Z_2}=i_{C_2}\circ f[/imath] and [imath]h\circ i_{\Bbb Z_3}=i_{C_3}\circ g[/imath], but I see no obvious reason that this must be surjective. Did Aluffi just mix up the order of the two exercises, or is there some other way to do it? Note: the previous exercise shows that there is a surjective homomorphism from [imath]C_2 * C_3[/imath] onto [imath]S_3[/imath], but I don't see how this could possibly be relevant. | 2344425 | Show there is a surjective homomorphism from [imath]\mathbb{Z}\ast\mathbb{Z}[/imath] onto [imath]C_2\ast C_3[/imath]
Show there is a surjective homomorphism from [imath]\mathbb{Z}\ast\mathbb{Z}[/imath] onto [imath]C_2\ast C_3[/imath], where [imath]\ast[/imath] denotes the coproduct in the category [imath]\mathsf{Grp}[/imath]. Note the exercise in this book (Algebra Chapter 0, Aluffi) is meant, or at least hinted at, to be done using the universal property of the coproduct, ie: coproduct of [imath]A[/imath] and [imath]B[/imath] in [imath]\mathsf{Grp}[/imath] is initial in the category [imath]\mathsf{Grp}^{A,B}[/imath], instead of using the definition of the free product. By property of the coproduct, for any object [imath]A[/imath] and two homomorphisms [imath]f_1,f_2:\mathbb{Z}\to A[/imath], and two homomorphisms [imath]i_1,i_2:\mathbb{Z}\to\mathbb{Z}\ast\mathbb{Z}[/imath], there exists a unique homomorphism [imath]\sigma:\mathbb{Z}\ast\mathbb{Z}\to A[/imath] such that the diagram commutes, ie: [imath]f_1=\sigma i_1[/imath] and [imath]f_2=\sigma i_2[/imath]. Now let [imath]A=C_2\ast C_3[/imath]. Now what? |
2272870 | Property of stochastic integral
I have to show the following property of stochastic integration. Let [imath]X[/imath] be an adapted integrand, for which we know that [imath]E[\int_0^T X_s^2\, ds] < \infty[/imath]. Let [imath]0 \leq s < t \leq T[/imath] and suppose we have some [imath]W \in F_s[/imath] ([imath]F_s[/imath] is a [imath]\sigma[/imath]-algebra) and we know [imath]E[\int_s^t W^2 X_u^2\, du] < \infty[/imath]. Show that [imath]\int_s^t W X_u\, dB_u = W \int_s^t X_u\, dB_u[/imath]. I think I should start with simple integrands and then extend the construction to general integrands, but I am not sure if it is obvious how to show this for simple integrands. Thanks for help. | 2301708 | Ito integral property
Let [imath]X[/imath] be adapted integrand, for which we know that [imath]E[\int_0^T X_s^2 \, ds]< \infty [/imath]. Let [imath]0≤s<t≤T[/imath] and we have some [imath]W∈F_s[/imath] ([imath]F_s[/imath] is [imath]\sigma[/imath]-algebra) and we know [imath]E[\int_s^t W^2X_u^2 \, du]<\infty[/imath]. I have to show that [imath]\int_s^t W X_u \, dB_u=W \int_s^t X_u dBu[/imath]. I know that for simple integrands this follows from definition. We also know that for [imath]X[/imath] there exists sequence of elementary integrands [imath]X^n[/imath] such that [imath]E[ \int_0^t (X^n_u-X_u)^2 \, du][/imath] converges to [imath]0[/imath], when [imath]n \to \infty[/imath]. From this one can see that it also holds [imath]E[ (\int_0^t (X^n_u-X_u) \, dB_u)^2][/imath]. I wonder if I can show that it holds [imath]E[ |(\int_s^t W (X^n_u-X_u) \, dB_u)|] \to 0[/imath] as [imath]n \to \infty[/imath] . (I think I should use Cauchy- Schwarz inequality: [imath]E(XY)^2 \leq E(X^2)E(Y^2)[/imath], but not sure where. If I could show this convergence in [imath]L^1[/imath], it would be enough for my proof. If I rewrite it as [imath]E[E[|(\int_s^t W(X^n_u-X_u) \, dB_u)| \mid F_s]][/imath] I don't know if I can put [imath]W[/imath] out of integral. Would it be correct? Any help would be really very appreciated. Thanks. |
2345966 | Is a ring (annulus) a simply connected space?
I was given the following definition. Definition: If [imath]D[/imath] is a simply connected space then each cycle is homologous to zero. So if we look at a point, say [imath]p[/imath], in the middle of the ring, and at two paths around it, one clockwise and the other anti-clockwise, the cycle is zero; but because we can take [imath]2[/imath] paths that sum up to [imath]\neq 0[/imath], a ring is not a simply connected space? | 695948 | Prove that an annulus is not simply connected?
I don't have complex analysis at my beck and call, and I only have a low level of knowledge in topology, but I need to prove that this metric space (for any real [imath]r[/imath] and [imath]R[/imath] with [imath]r < R[/imath])[imath] X = \{ (x, y) \in \mathbb{R}^2 \ | \ r \leq x^2 + y^2 \leq R \}[/imath] with the Manhattan metric [imath]d((x_1, y_1), (x_2, y_2)) = |x_1-x_2| + |y_1-y_2|[/imath] is not simply connected. I've already prooven that it's path connected, and now I need to show there are some points [imath]P[/imath] and [imath]Q[/imath] with two paths between them such that one cannot be continuously 'morphed' into the other. [imath] \ [/imath] What I have so far is as follows: I take [imath]P = (0, r)[/imath] and [imath]Q = (0, -r)[/imath], with [imath]f_0[/imath] being a path from [imath]P[/imath] to [imath]Q[/imath] going clockwise around the circle radius [imath]r[/imath] and [imath]f_1[/imath] being much the same but going counterclockwise. Now I assume there is a function [imath]g : [0, 1]^2 \to X[/imath] such that: [imath]g(s, 0) = f_0 (s)[/imath] [imath]g(s, 1) = f_1 (s)[/imath] [imath]g(0, t) = P[/imath] [imath]g(1, t) = Q[/imath] To get the final result, I need to show that this function cannot be continuous, but for the life of me I cannot. For some context, these are the topics which have been visited during the course, roughly in order of recentness. Multiple connectedness Simple connectedness Pathwise connectedness Interior points Boundary points Open sets Compactness Complete metric spaces Bounded metric spaces Totally bounded metric spaces Closed sets Closure of a metric space Limit points Cauchy sequences Convergence Continuity Metric space quivalence Metric equivalence Edit: I might have an argument that works, though it's far from rigourous. We can shrink [imath]R[/imath] to be as close to [imath]r[/imath] as we want, so we can essentially constrain the annulus down to a circle and thus force any path from the left side of the circle to the right side to go through [imath]P[/imath] or [imath]Q[/imath]. So holding [imath]s \in (0, 1)[/imath] constant and varying [imath]t[/imath] must produce a path through [imath]P[/imath] or [imath]Q[/imath] for any [imath]s[/imath]. If for some [imath]s[/imath] it passes through [imath]P[/imath] and for some other [imath]s[/imath] it passes through [imath]Q[/imath], then there must be [imath]s_0[/imath] such that [imath]\forall \epsilon >0 \ \exists \delta \leq \epsilon[/imath] st. [imath]s_0[/imath] produces a path through [imath]P[/imath] and [imath]s_0 + \delta[/imath] produces a path through [imath]Q[/imath]. Now we can consider the path [imath]g(s, \frac{1}{2})[/imath], and note that it must have a discontinuity at [imath]s_0[/imath]. Now consider the case where all [imath]s[/imath] produce paths through only one of [imath]P[/imath] or [imath]Q[/imath]. WOLOG: [imath]P[/imath]. Now, at [imath]t = \frac{1}{2}[/imath], [imath]s[/imath] arbitrarily close to [imath]1[/imath] are mapped away from [imath]Q[/imath], but [imath]1[/imath] is always mapped to [imath]Q[/imath] by definition, so [imath]g(s, \frac{1}{2})[/imath] has a discontinuity at [imath]s=1[/imath]. Therefore [imath]g(s, t)[/imath] is not continuous. This argument is definitely iffy to me, if no one has their own argument (using sufficiently low level concepts), then criticism on the above would be appreciated. |
2075441 | How to prove that every ordered field has a subfield isomorphic to [imath]\mathbb{Q}[/imath] using a certain provided function?
Proposition 3. If [imath]K[/imath] is an ordered field, then [imath]K[/imath] has a subfield isomorphic to [imath]\mathbb{Q}[/imath]. Exercise 2. Prove Proposition 3 by showing that [imath]φ:\mathbb{N}→K[/imath] defined by [imath]φ(n)=1+...+1[/imath], with [imath]1+...+1[/imath] being [imath]1[/imath] added [imath]n[/imath] times, extends to an embedding of [imath]\mathbb{Q}[/imath] in [imath]K[/imath]. How to solve Exercise 2? How does embedding, I suppose in meaning of embedding defined before in the post linked as order-preserving ring homomorphism, [imath]e:\mathbb{Q}\,{\rightarrow}\,K[/imath], imply that for all ordered fields [imath]K[/imath] there exists a subfield of [imath]K[/imath] isomorphic to [imath]\mathbb{Q}[/imath]? | 650875 | Every ordered field has a subfield isomorphic to [imath]\mathbb Q[/imath]?
I'm going through the first chapter in a text on real analysis, which contains preliminaries on ordered fields, the real numbers, etc. Supposedly I had learned about such things already, in calculus, but I thought it wouldn't hurt to go over it again. Up until the paragraph which is the subject of my question, everything is thoroughly proven and examples are provided. However the following excerpt just goes through facts, whose proof I cannot conceive: Although [imath]\mathbb Q[/imath] is an archimedean field, these properties cannot be used to define [imath]\mathbb Q[/imath] since [imath]\\[/imath] there are many Archimedean ordered fields. What distinguishes [imath]\mathbb Q[/imath] from the other [imath]\\[/imath]Archimedean ordered fields is that [imath](\mathbb Q,<)[/imath] is the [imath]smallest[/imath] ordered field in the following sense: [imath]\\[/imath] if [imath](X,\prec)[/imath] is an ordered field, then [imath]X[/imath] contains a sub-field which is (field) isomorphic to [imath]\mathbb Q[/imath].[imath]\\[/imath] Furthermore such an isomorphism preserves the order relation. I've concatenated the paragraph a bit, so the text isn't idem from the book (if anyone wishes to know it's Phillips, An Introduction to Analysis and Integration Theory, Dover Publications). Also [imath]\mathbb Q[/imath] contains no proper subfields, and this can be verified by the apparently easy-to-prove fact that, if a field has characteristic zero, then it contains a subfield isomorphic to [imath]\mathbb Q[/imath]. Then it remains to prove the preservation of order. I'm guessing that some concepts from abstract algebra or field theory would easily suffice, but at the moment such topics are a bit over my head, so all I can think of doing is actually constructing the isomorphism [imath]\phi:\mathbb Q\to\hat{\mathbb Q}[/imath], where [imath]\hat{\mathbb Q}[/imath] is a certain subfield of [imath]X[/imath] in such a way that field operations are preserved, but I really can't come up with anything. So the question is: how do I construct this isomorphism, or, if there's a better way of proving this, how is it done? Thanks for any help. Edit: Assume the existence of [imath](\mathbb Q,<)[/imath] as an archimedean ordered field. |
2347030 | Find [imath]x[/imath] and [imath]y[/imath] such that [imath]\binom{100}{0}+2\binom{100}{1}+4\binom{100}{2}+\cdots+2^{100}\binom{100}{100}=x^{y}[/imath]
Find [imath]x[/imath] and [imath]y[/imath] such that [imath] \binom{100}{0}+2\binom{100}{1}+4\binom{100}{2}+\cdots+2^{100}\binom{100}{100}=x^{y} [/imath] I started out by letting [imath]n[/imath] be a small number. Let [imath]n=3[/imath]: [imath] 2^{0}\binom{3}{0}+2^{1}\binom{3}{1}+2^{2}\binom{3}{2}+2^{3}\binom{3}{3}=1(1)+2(3)+4(3)+8(1)=27=3^{3} [/imath] Let [imath]n=4[/imath]: [imath] 2^{0}\binom{4}{0}+2^{1}\binom{4}{1}+2^{2}\binom{4}{2}+2^{3}\binom{4}{3}+2^{4}\binom{4}{4}=1(1)+2(4)+4(6)+8(4)+16(1)=81=3^{4} [/imath] However, I cannot for the life of me figure out how to use this and solve the original question. More specifically, I know the answer is [imath]3^{n}[/imath], but I don't know how to do a combinatorial proof. | 201550 | Prove [imath]3^n = \sum_{k=0}^n \binom {n} {k} 2^k[/imath]
Let [imath]n[/imath] be a nonnegative integer. Prove that [imath]\begin{align} 3^n = \sum_{k=0}^n \dbinom{n}{k} 2^k . \end{align}[/imath] I know that [imath]2^n = \sum\limits_{k=0}^n \dbinom{n}{k}[/imath], but how to integrate the [imath]2^k[/imath] into this sum? |
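The pattern both askers observed is an instance of the binomial theorem; a worked one-line derivation (standard, not taken from either post): [imath]3^n = (1+2)^n = \sum_{k=0}^{n} \binom{n}{k} 1^{\,n-k}\, 2^{k} = \sum_{k=0}^{n} \binom{n}{k} 2^{k}[/imath], so for the first question the sum equals [imath]3^{100}[/imath], i.e. [imath]x=3[/imath], [imath]y=100[/imath].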
2347121 | Is there anything in the real world (nature) which is uncountable (i.e. infinite but not countably infinite)?
In set theory I read that sets are either finite or infinite. If they are infinite then there are two further categories: countably infinite or uncountable. The natural numbers [imath]\Bbb{N}[/imath], integers [imath]\Bbb{Z}[/imath], rational numbers [imath]\Bbb{Q}[/imath], etc. are examples of countably infinite sets, whereas the real numbers [imath]\Bbb{R}[/imath] and the irrational numbers are very well known examples of uncountable sets. Today I was thinking about counting objects in our day-to-day life, and I realized that we can count each and every object. For example: 1) Suppose I decided to count the number of sand particles on a beach; even though the number is huge, one can count them one by one. (I also want to know whether they are infinite or just finite, with a big natural number representing their quantity.) 2) The same thing when I think about the number of leaves on a big tree; they are definitely finite. 3) Stars in the sky (I read on the internet that there are approx [imath]10^{24}[/imath] stars in the universe), etc. Similarly many objects seem to be finite (or countably infinite* (*please correct me if I'm wrong)). So my question is: do we have any object in the real world which is uncountable? Thanks in advance! | 154234 | Can we distinguish [imath]\aleph_0[/imath] from [imath]\aleph_1[/imath] in Nature?
Can we even find examples of infinity in nature? |
2347412 | Does a map from a compact space to a filtered colimit factor through at a finite stage?
Let [imath]K[/imath] be a compact space, and let [imath]A_i[/imath] be a sequence of spaces in the following diagram. [imath]A_0 \hookrightarrow A_1 \hookrightarrow A_2 \hookrightarrow \cdots[/imath] All the inclusions in the diagram are closed embeddings. Does a continuous map from [imath]K \to \mathrm{colim}\ A_i[/imath] factor through some [imath]A_j[/imath] for [imath]j \in \mathbb{N}[/imath]? If it doesn't, can we strengthen the hypotheses so it does? In particular, I'm interested in knowing the answer for the special case where [imath]A_i = \Omega^i \Sigma^i A_0[/imath], where [imath]\Omega[/imath] is the loop space, and [imath]\Sigma[/imath] the suspension. | 1584667 | Compact subset in colimit of spaces
I found at the beginning of tom Dieck's Book the following (non proved) result Suppose [imath]X[/imath] is the colimit of the sequence [imath] X_1 \subset X_2 \subset X_3 \subset \cdots [/imath] Suppose points in [imath]X_i[/imath] are closed. Then each compact subset [imath]K[/imath] of [imath]X[/imath] is contained in some [imath]X_k[/imath] Now I really don't know how to prove this fact. The idea would be to find a suitable open cover to it and after taking a finite sub cover trying to claim that [imath]K[/imath] lies in one of the [imath]X_k[/imath]. I'm able to do this reasoning in some more specific cases, where I've more control on how open subsets looks like, but in this full generality I don't see which open cover I can take. My Attempt: The only idea or approach I'm able to cook up so far is to try use some kind of sequence of points [imath]x_n \in K\cap X_n \setminus X_{n-1}[/imath] which can be assumed to exist by absurd. Being [imath]K[/imath] compact, there must be an accumulation point [imath]k\in K[/imath]. Clearly [imath]k \in X_k[/imath] (little abuse of notation here) and for every neighbourhood of [imath]k[/imath], there is a tail of this sequence entirely contained in it. Now everything seems to boil down to find the right nbhd to find the counterexample. It seems doable, but I don't have any idea on how to choose it, because the only open I have for sure are complements of points, but they seems a little bit coarse for what I want to do. As a side note, May claims at page [imath]67[/imath] of his "Concise Course (revised)" that this result holds for any based spaces. The proof seems to use the above result without T1 assumption. How one can prove this result in such generalities? (no details where provided, only the rough idea. |
2346826 | Expected number of triangles with random lines
Take the square [imath][0,1]^2[/imath] and divide it up with [imath]n[/imath] random lines. We can choose our random lines by choosing one point randomly on one of the four sides and then choosing another point randomly on one of the other sides and drawing the line connecting them. What is the expected number of triangles formed by these [imath]n[/imath] lines, and also what is the probability that no region is greater than [imath]0.5[/imath]? If anyone has any cases or any ideas, or especially a full solution I'd be very grateful. Thanks! | 2346678 | Probability of area being greater than 0.5 with random lines
Take the square [imath][0,1]^2[/imath] and draw [imath]n[/imath] random lines through the square. You can choose random lines by choosing a point on one of the sides of the square and randomly choosing a different point on any of the other three sides and then drawing a line connecting them. What is the probability that after [imath]n[/imath] of these random lines the largest region is greater than or equal to [imath]0.5[/imath]? Any tips or cases help, thanks. |
2347702 | Is there a harder way to prove that if an integral curve's derivative vanishes then the curve is constant?
In the context of differential geometry, we have a smooth vector field [imath]X[/imath] and an integral curve of [imath]X[/imath] called [imath]\gamma[/imath] such that [imath]\exists\ t_0 \in \mathbb{R}[/imath] with [imath]\gamma'(t_0) = 0[/imath]. The exercise asks to prove that under those conditions, [imath]\gamma[/imath] is constant. This can easily be proved by invoking the Picard-Lindelöf existence and uniqueness theorem for differential equations. I was wondering if there was another, more indirect and "by hand" way. Thanks! PS: I don't think this is a duplicate question, as it's not the same to have an integral curve into an arbitrary manifold as it is to have one into [imath]\mathbb{R^n}[/imath]. Of course, given Anthony's answer, I can see that this specific question can be answered by reducing it to the other case. But there could also be another answer that takes a different approach. | 1547447 | Looking for a Simple Argument for "Integral Curve Starting at A Singular Point is Constant"
Let [imath]U[/imath] be an open subset of [imath]\mathbf R^n[/imath] and [imath]V:U\to\mathbf R^n[/imath] be a differentiable vector field on [imath]U[/imath]. Let [imath]\mathbf p\in U[/imath] be a singular point of [imath]V[/imath], that is, [imath]V(\mathbf p)=\mathbf 0[/imath]. Then the only integral curve which starts at [imath]\mathbf p[/imath] is the constant curve. I know that one could simply use the theorem of "uniqueness of integral curves" and in fact adapt the same proof for this particular case. But is anybody aware of a simple argument for this? |
2347744 | Why are Christoffel's symbols considered intrinsic
This is the Theorema Egregium of Gauss. Gaussian curvature (something defined from the derivative of the unit normal vector) depends only on the coefficients of the first fundamental form, so it is intrinsic. My question is "Why"? The first fundamental form is given by the three coefficients [imath]E=(\partial_1,\partial_1)[/imath], with [imath]F,G[/imath] defined similarly. Now, we have that [imath]I(v_1\partial_1+v_2\partial_2)=Ev_1^2+2Fv_1v_2 + Gv_2^2[/imath] is independent of the local chart, but its coefficients change. So, ok, [imath]I[/imath] is intrinsic, but why do we also call its coefficients intrinsic if they are actually defined from a local chart and depend on the local chart? Why can I call everything which uses only [imath]E,F,G[/imath] intrinsic? Why are [imath]E,F,G[/imath] themselves intrinsic? | 1088203 | Why is first fundamental form considered intrinsic
I am reading Kuhnel's differential geometry book, and in chapter 4, it says that "intrinsic geometry of a surface" can be considered to be things that can be determined solely from the first fundamental form. But I am unsure why the first fundamental form itself can be considered to be something intrinsic to the surface. Kuhnel defines the first fundamental form to be the inner product induced from that of [imath]\mathbb{R}^3[/imath] restricted to [imath]T_pM[/imath]. So isn't the larger [imath]\mathbb{R}^3[/imath] used? So my question is: Suppose I have a smooth 2-dimensional manifold (e.g. in the sense defined by Lee's Introduction to Smooth Manifolds), but I didn't put it in any ambient space. Is there a way for me to define the first fundamental form? [imath]T_pM[/imath] is intrinsically defined (as the space of derivations in Lee's book), and so what is the inner product that I should put on [imath]T_pM[/imath]? |
2348173 | Does one always need some choice to show the existence of nonprincipal ultrafilters?
The last sentence of the section "Types and existence of ultrafilters" of Wikipedia's article on ultrafilter says: In ZF without the axiom of choice, it is possible that every ultrafilter is principal.{see p.316, [Halbeisen, L.J.] "Combinatorial Set Theory", Springer 2012} This statement is incorrect isn't it? Shouldn't "ultrafilter" here be replaced by "ultrafilter on [imath]\mathscr{P}(\omega)[/imath] (the power set of the set of natural numbers)"? I think that for some Boolean algebras, one can show the existence of a nonprincipal ultrafilter without assuming the axiom of choice (or the ultrafilter lemma or the Boolean prime ideal theorem). Let [imath]B[/imath] be the set of all finite and cofinite subsets of the set of natural numbers, and let [imath]U[/imath] be the set of all cofinite subsets of the set of natural numbers. Then [imath]U[/imath] is a nonprincipal ultrafilter on the Boolean algebra [imath]B[/imath], isn't it? I would also like to ask the following more general question: for what kind of Boolean algebra, can one show the existence of a nonprincipal ultrafilter without assuming some choice? EDIT: I am not asking whether there is an infinite Boolean algebra with a unique ultrafilter (unless "nonprincipal ultrafilter that can be found without choice" and "unique filter" should mean the same thing, which I fail to see). I am first asking whether my example is a counterexample to the statement from Wikipedia. Assuming that I am right about this first point, I am then asking in what cases you can find nonprincipal ultrafilters without using choice. I actually don't know what [imath]\mathcal{P}(\omega)/\text{fin}[/imath] denotes in the suggested question. In any case, the answer given there is only an example and cannot be an answer to my question. I would appreciate if you could explain how my question is a duplicate or reopen my question so that people could provide answers that I would be able to understand. | 2278647 | Existence of a Boolean algebra with a unique ultrafilter in ZF
ZFC proves every infinite Boolean algebra has infinitely many ultrafilters. If every ultrafilter over [imath]\omega[/imath] is principal, then [imath]\mathcal{P}(\omega)/\mathrm{fin}[/imath] has no ultrafilter. Is it consistent with ZF that there is an infinite Boolean algebra with a unique ultrafilter? Thanks for any help. |
2348234 | Theorem about winding number of two closed paths
Let [imath]\gamma_1, \gamma_2 : [0,1]\to\mathbb{C}[/imath] be closed paths and [imath]w\in\mathbb{C}[/imath] a point, such that [imath]\vert\gamma_2(t)-\gamma_1(t)\vert < \vert\gamma_1(t)-w\vert[/imath] on [imath][0,1][/imath]. Prove that [imath]n(\gamma_1,w)=n(\gamma_2,w)[/imath]. Hint: Use the path [imath]\gamma(t)=\frac{\gamma_2(t)-a}{\gamma_1(t)-a}+a[/imath]. I found some help using homotopy, but we actually haven't learned anything about this topic. We only got the definition of winding numbers, [imath]n(\Gamma, z_0)=\frac{1}{2\pi i}\int_{\Gamma}\frac{d\zeta}{\zeta-z_0}[/imath]. My first idea was to use this definition with [imath]\gamma_2[/imath] and manipulate the integral to get the line integral for [imath]\gamma_1[/imath]. Unfortunately I don't know how to use the inequality given in the task. Any hints? Thank you very much! | 704076 | Showing Equality of Winding Numbers
Let [imath] w \in \Bbb C [/imath], and let [imath] \gamma, \delta : [0,1] \rightarrow \Bbb C [/imath] be closed curves such that for all [imath] t \in [0,1], |\gamma(t) - \delta(t)| < |\gamma(t) - w| [/imath]. By computing the winding number [imath]n_\sigma(0)[/imath] about the origin for the closed curve [imath]\sigma(t) = (\delta(t) - w)/(\gamma(t) - w) [/imath], show that [imath]n_\gamma(w) = n_\delta(w)[/imath]. This seems intuitively clear to me. (Informally, the inequality tells us that [imath]\delta[/imath] and [imath]\gamma[/imath] can never be on "opposite sides" of [imath]w[/imath], and hence their winding numbers must be equal. Making this rigorous isn't enough, however, since I need to use the winding number of [imath]\sigma[/imath] about 0. I can also show that [imath]n_\sigma(0) = n_\gamma(w) - n_\delta(w)[/imath] (fairly elementarily), so it remains to show that [imath]n_\sigma(0) = 0[/imath]. This is the bit that I'm stuck on. Can someone give me a (small) hint (not a major hint)? If I still can't get anywhere, then I'll probably ask for a larger hint! There is also a second part to this question - dependent on whether I'm able to do the first part with a hint, I may well update this question to include the second part. (This is an example sheet question - completely non-examinable.) Thanks! :) |
2348406 | What is the "purely elementary reasoning" that the number of distinct prime factors of [imath]n[/imath] grows like [imath]\frac{\log n}{\log \log n}[/imath]?
At the top of the second page of this paper by Ramanujan, he states that "by purely elementary reasoning", where [imath]f(n)[/imath] is the number of distinct prime factors of [imath]n[/imath], we have that for all [imath]\varepsilon > 0[/imath], [imath] f(n) < (1+\varepsilon)\frac{\log n}{\log \log n} [/imath] for all sufficiently large [imath]n[/imath], and [imath] f(n) > (1-\varepsilon) \frac{\log n}{\log \log n} [/imath] for infinitely many values of [imath]n[/imath], so that the maximum order of [imath]f(n)[/imath] is [imath] \frac{\log n}{\log \log n}. [/imath] What's the simple reason for this he had in mind? Thanks. Edit: notation clarified per Thomas Andrews' comment. | 1103861 | Hardy-Ramanujan theorem's "purely elementary reasoning"
I'm reading through The normal number of prime factors of a number [imath]n[/imath]. I'm confused by a remark on the second page: let [imath]f(n)[/imath] represent the number of distinct prime factors of [imath]n[/imath]. Then we can shew (by purely elementary reasoning) that, if [imath]\epsilon[/imath] is any positive number, we have [imath] f(n) < (1 + \epsilon) \frac{\log n}{\log \log n}[/imath] for all sufficiently large values of [imath]n[/imath] and [imath] f(n) > (1 - \epsilon) \frac{\log n}{\log \log n}[/imath] for an infinity of values; so that the maximum order of [imath]f(n)[/imath] is [imath] \frac{\log n}{\log \log n} [/imath] I don't follow their "purely elementary reasoning." What am I missing? |
2348399 | Are two positive definite quadratic forms simultaneously diagonalizable?
Let [imath]f[/imath] and [imath]g[/imath] be quadratic forms in real variables [imath]x_1,\dots,x_n[/imath], and suppose that [imath]f[/imath] and [imath]g[/imath] are positive definite; is it true that [imath]f[/imath] and [imath]g[/imath] are simultaneously diagonalizable? I know that [imath]f[/imath] and [imath]g[/imath] have symmetric matrix representations [imath]A_f[/imath] and [imath]A_g[/imath] with all eigenvalues positive; the problem then is to show that [imath]A_f[/imath] and [imath]A_g[/imath] commute, and therefore they are simultaneously diagonalizable. The product of symmetric matrices is again a symmetric matrix iff they commute, so it will be true if I can show that [imath]A_fA_g[/imath] is symmetric. Am I looking in the right direction? Where does the hypothesis of positive definiteness come into play? Is there a simpler way, or a counterexample? | 2274804 | Simultaneous diagonalisation of two quadratic forms, one of which is positive definite
Let [imath]\varphi, \phi[/imath] be quadratic forms on [imath]V[/imath] and suppose [imath]\varphi[/imath] is positive definite. I want to find a basis for V such that [imath]\varphi[/imath] and [imath]\phi[/imath] are both represented by diagonal matrices. My idea is to define an inner product [imath]<,>: V\times V \rightarrow F[/imath] where [imath]<v,w> = \varphi(v,w)[/imath]. I know that if I can find a basis that is orthonormal w.r.t. this inner product that diagonalises [imath]\phi[/imath], then I am done, since [imath]\varphi[/imath] will be represented by the identity with respect to this basis. I know that I can use Gram-Schmidt to get an orthonormal basis for [imath]V[/imath], but I don't understand how to choose a basis that diagonalises [imath]\phi[/imath]. I realised I didn't understand what I was doing when I tried the example where [imath]\phi[/imath] is the symmetric bilinear form associated to [imath]2x^2 + 3y^2 +3z^2 - 2yz = (\sqrt{3}z - \frac{1}{\sqrt{3}}y)^2 + 2x^2 + \frac{8}{3}y^2[/imath] which is positive definite, and wish to simultaneously diagonalise this and [imath]\phi[/imath] which is the symmetric bilinear form associated to the quadratic form [imath]3x^2 + 3y^2 + z^2 +2xy - 3xz + 3yz[/imath]. Help in general, or with relevance to this particular example, gratefully received. |
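For the concrete pair of forms in the second post, here is a small numerical illustration of simultaneous diagonalisation by congruence (a sketch; the use of SciPy's generalised eigensolver is my own choice, not something either post mentions).

```python
import numpy as np
from scipy.linalg import eigh

A = np.array([[2., 0., 0.],          # positive definite form 2x^2 + 3y^2 + 3z^2 - 2yz
              [0., 3., -1.],
              [0., -1., 3.]])
B = np.array([[3., 1., -1.5],        # symmetric form 3x^2 + 3y^2 + z^2 + 2xy - 3xz + 3yz
              [1., 3., 1.5],
              [-1.5, 1.5, 1.]])

w, V = eigh(B, A)                    # generalised eigenproblem B v = w A v
print(np.round(V.T @ A @ V, 10))     # identity matrix
print(np.round(V.T @ B @ V, 10))     # diag(w)
```

Note that the change of basis here acts by congruence ([imath]V^TAV[/imath] and [imath]V^TBV[/imath]), not by similarity, so no commutativity of the two matrices is required.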
2348277 | Prove that if [imath][.][/imath] is Greatest integer function then [imath]\left[\frac{[x]}{n}\right]=\left[\frac{x}{n}\right][/imath]
Prove that if [imath][.][/imath] is Greatest integer function then [imath]\left[\frac{[x]}{n}\right]=\left[\frac{x}{n}\right][/imath] [imath]\forall[/imath] [imath]n \in \mathbb{Z}[/imath] My Try is i have used both LHS and RHS: Case [imath]1.[/imath] if [imath]x \in \mathbb{Z}[/imath] proof is trivial Case [imath]2.[/imath] if [imath]x=q+f[/imath] where [imath]q \in \mathbb{Z}[/imath] and [imath] f \in (0 \:\: 1)[/imath] Then [imath]\left[\frac{x}{n}\right]=\left[\frac{q+f}{n}\right][/imath] Now by euclid's algorithm for some positive integer [imath]p[/imath] we have [imath]q=np+r[/imath] where [imath]0 \le r \lt n-1[/imath] So [imath]\left[\frac{x}{n}\right]=\left[\frac{q+f}{n}\right]=\left[\frac{np+r+f}{n}\right]=p+\left[\frac{r+f}{n}\right][/imath] Now since [imath]0 \le f \lt 1[/imath] and [imath]0 \le r \lt n-1[/imath] we have [imath] 0 \le r+f \lt n[/imath] So [imath]\left[\frac{r+f}{n}\right]=0[/imath] hence [imath]\left[\frac{x}{n}\right]=p \tag{1}[/imath] Now [imath]\left[\frac{[x]}{n}\right]=\left[\frac{[q+f]}{n}\right]=\left[\frac{q}{n}\right]=\left[\frac{np+r}{n}\right]=\left[p+\frac{r}{n}\right]=p+\left[\frac{r}{n}\right][/imath] Now since [imath]0 \le r \lt n-1[/imath] we have [imath]\left[\frac{r}{n}\right]=0[/imath] So [imath]\left[\frac{[x]}{n}\right]=p \tag{2}[/imath] From [imath]1[/imath] and [imath]2[/imath] we have the required proof. is there a better way please share | 172823 | How to prove floor identities?
I'm trying to prove rigorously the following: [imath]\lfloor x/a/b \rfloor[/imath] = [imath]\lfloor \lfloor x/a \rfloor /b \rfloor[/imath] for [imath]a,b>1[/imath] So far I haven't gotten far. It's enough to prove this instead [imath]\lfloor z/c \rfloor[/imath] = [imath]\lfloor \lfloor z \rfloor /c \rfloor[/imath] for [imath]c>1[/imath] since we can just put [imath]z=\lfloor x/a \rfloor[/imath] and [imath]c=b[/imath]. |
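A quick exact-arithmetic spot check of the identity for positive integers [imath]n[/imath], which is the case the posted attempt handles (a sketch; Python's Fraction and the sampling ranges are arbitrary choices of mine):

```python
import math
import random
from fractions import Fraction

random.seed(0)
for _ in range(10_000):
    x = Fraction(random.randint(-10_000, 10_000), random.randint(1, 100))  # a random rational x
    n = random.randint(1, 20)                                              # a positive integer n
    assert math.floor(x) // n == math.floor(x / n)   # floor(floor(x)/n) == floor(x/n)
print("checked 10,000 random cases, no counterexample")
```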
2349452 | Non-existence of non-trivial clopen set in [imath]\mathbb R[/imath] without using connectedness
Show that the set of real numbers has no nontrivial clopen set without using connectedness of [imath]\mathbb R[/imath]. I tried to show this by showing that an open set in [imath]\mathbb{R}[/imath] is strictly contained in its closure. My attempt: Let [imath]U[/imath] is open in [imath]\mathbb{R}[/imath]. Then [imath]U[/imath] can be written as union of disjoint open intervals. [imath]U = \bigcup_{n=1}^{\infty}A_n[/imath] , where [imath]A_n = (a_n, b_n)[/imath]. Now Closure of [imath]U, Cl(U) = Cl( \bigcup_{n=1}^{\infty}A_n) = \bigcup_{n=1}^{\infty}Cl(A_n)[/imath] [ not true in general ] = [imath] \bigcup_{n=1}^{\infty}[a_n, b_n] \supsetneq U[/imath]. So the problem thus reduces to the problem, whether [imath]Cl(\bigcup_{n=1}^{\infty}A_n) = \bigcup_{n=1}^{\infty}Cl(A_n)[/imath] holds for countably infinite intervals in [imath]\mathbb{R}[/imath]. This surely holds for finitely many sets, but doesn't hold for countably infinitely many sets in general. Is this true? or there's another proof of this? | 1628976 | No non-trivial clopen sets in [imath]\mathbb{R}[/imath]? How to give a direct proof?
How to give a direct proof of the following result? Let [imath]A[/imath] be a subset of [imath]\mathbb{R}[/imath] such that [imath]A[/imath] is both open and closed. Then [imath]A[/imath] is either empty or all of [imath]\mathbb{R}[/imath]? My work: If [imath]A[/imath] is not empty, let [imath]u \in A[/imath]. Then there is some open interval [imath](a,b)[/imath] such that [imath]u \in (a,b) \subset A.[/imath] Now if [imath]\mathbb{R} - A[/imath] is not empty either, then let [imath]v \in \mathbb{R} - A[/imath]. Then there is some open interval [imath](c,d)[/imath] such that [imath]v \in (c,d) \subset \mathbb{R} - A.[/imath] Suppose that [imath]u < v[/imath]. Since [imath]\emptyset \subset (a,b) \cap (c,d) \subset A \cap (\mathbb{R} - A) = \emptyset,[/imath] we must have [imath](a,b) \cap (c,d) = \emptyset.[/imath] So we can conlclude that [imath]b \leq c.[/imath] But [imath]b \in A[/imath] and [imath]c \in \mathbb{R} - A[/imath]. So we must have [imath]b < c.[/imath] What next? Can anybody here please help complete the proof from here on? An edit based on a comment by Marc Paul: Let us assume that the set [imath]A[/imath] is a non-trivial clopen set in [imath]\mathbb{R}[/imath]. Let us define a function [imath]f \colon \mathbb{R} \to \mathbb{R}[/imath] as follows: [imath]f(x) \colon= \begin{cases} 1 \ \mbox{ for } \ x \in A; \\ 0 \ \mbox{ for } \ x \in \mathbb{R} - A. \end{cases}[/imath] Let [imath]V[/imath] be an open set in the range space [imath]\mathbb{R}[/imath]. We show that the inverse image set [imath]f^{-1}(V)[/imath] is open in the domain space [imath]\mathbb{R}[/imath]. The following cases arise: If [imath]0, 1 \in V[/imath], then [imath]f^{-1}(V) = \mathbb{R}[/imath]. If [imath]0 \in V[/imath] but [imath]1 \not\in V[/imath], then [imath]f^{-1}(V) = \mathbb{R} - A[/imath]. If [imath]1 \in V[/imath] but [imath]0 \not\in V[/imath], then [imath]f^{-1}(V) = A[/imath]. And, if [imath]0 \not\in V[/imath] and [imath]1 \not\in V[/imath], then [imath]f^{-1}(V) = \emptyset[/imath]. Thus, [imath]f^{-1}(V)[/imath] is open in the domain space [imath]\mathbb{R}[/imath]. Hence the function [imath]f[/imath] is continuous. What next? How does this lead to our desired conclusion? [Yet another edit, again based on valuable comments from Marc Paul: ] So if both [imath]A[/imath] and [imath]\mathbb{R} - A[/imath] were non-empty, then let's suppose [imath]a \in A[/imath] and [imath]b \in \mathbb{R} - A[/imath], and we can assume without any loss of generality that [imath]a < b[/imath]. Then [imath]f(a) = 1[/imath] and [imath]f(b) = 0[/imath]. So by the intermediate value theorem there is a real number [imath]c \in (a,b)[/imath] such that [imath]f(c) = 1/2[/imath], which is a contradiction because the image set of [imath]f[/imath] does not contain [imath]1/2[/imath]. |
2349571 | optimization with ellipsoidal constraints
I've been reading a paper recently and encountered an optimization equation which I can't understand: [imath]\max_{x\in\{x \mid x^TPx\le1\}} c^Tx =(cP^{-1}c^T)^{1/2}[/imath] where [imath]P\succ0[/imath]. Does anybody know why the result is [imath](cP^{-1}c^T)^{1/2}[/imath]? | 1832467 | Maximizing a linear function over an ellipsoid
Let [imath]A \in \mathbb{R}^{n\times n}[/imath] be a positive definite matrix, [imath]x \in \mathbb{R}^n[/imath] and [imath]c \in \mathbb{R} \setminus \{0\}[/imath]. I got to determine the maximum [imath]\max\{c^Ty:y\in \mathcal{E} (A,x)\}[/imath] where [imath]\mathcal{E} (A,x)[/imath] is an ellipsoid defined by [imath]A[/imath] and [imath]x[/imath]. How can I determine this maximum? Can anyone give me a hint? |
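A numerical sanity check of the stated maximum (a sketch; the random test data and NumPy are my own choices, not from the quoted paper): the candidate maximiser [imath]x^\star = P^{-1}c/\sqrt{c^TP^{-1}c}[/imath] is feasible, attains the claimed value, and random feasible points never exceed it.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
P = M @ M.T + 4 * np.eye(4)                # a random positive definite P
c = rng.normal(size=4)

val = np.sqrt(c @ np.linalg.solve(P, c))   # (c P^{-1} c^T)^{1/2}
x_star = np.linalg.solve(P, c) / val       # candidate maximiser

print(x_star @ P @ x_star)                 # 1.0: x_star lies on the ellipsoid boundary
print(c @ x_star, val)                     # both equal the claimed maximum

for _ in range(10_000):                    # random feasible points never do better
    x = rng.normal(size=4)
    x /= np.sqrt(x @ P @ x)
    assert c @ x <= val + 1e-9
```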
2349882 | Show that [imath]\frac{1}{x-a}+\frac{1}{x-b}+\frac{1}{x-c} =0[/imath] has only two real roots
I need help proving that the following equation has only two roots (over [imath]\mathbb R[/imath]) [imath]\frac{1}{x-a}+\frac{1}{x-b}+\frac{1}{x-c} =0[/imath] [imath]a\lt b\lt c[/imath] Here is what I tried: If I define a function [imath]f(x)=\frac{1}{x-a}+\frac{1}{x-b}+\frac{1}{x-c}[/imath] I could show that this is a continuous function and for different values I get positive or negative values, and by continuity it will equal 0 exactly twice. Maybe it has something to do with the function's derivative? Any ideas? | 412069 | [imath]\frac{1}{x-a} + \frac{1}{x-b} + \frac{1}{x-c} = 0 [/imath] has precisely two real roots
Prove that given [imath] a < b < c [/imath] this equation: [imath]\frac{1}{x-a} + \frac{1}{x-b} + \frac{1}{x-c} = 0 [/imath] has precisely 2 real roots. I understand there are 3 points of discontinuity, but I have no idea how to prove this. Can you give me a hint? Thanks in advance. |
595598 | Edited: [imath]T/F:[/imath] The automorphism group [imath]\text{Aut} (\mathbb Z/2 \times \mathbb Z/2)[/imath] is abelian.
[imath]T/F:[/imath] The automorphism group [imath]\text{Aut} (\mathbb Z/2 \times \mathbb Z/2)[/imath] is abelian. Keeping the comments for Which group is meant by [imath]\mathbb Z/2 \times \mathbb Z/2.[/imath] in mind I would like to present a solution for the above mathematical problem to get it verified. My answer to the problem is: FALSE My Reasons: [imath]\mathbb Z_2\oplus\mathbb Z_2\simeq K_4=\{e,a,b,c\}[/imath] where [imath]K_4[/imath] is Klein's [imath]4[/imath]-group with [imath]e,a,b,c[/imath] having their usual meaning. I would like to show that any bijection from [imath]K_4[/imath] onto [imath]K_4[/imath] which keeps [imath]e[/imath] fixed, is an isomorphism. Let [imath]\phi[/imath] be one such. Then W.L.O.G. we are to show that [imath]\phi(ab)=\phi(a)\phi(b)[/imath]. Now, [imath]\phi(ab)=\phi(c)[/imath] and [imath]\phi(a)\phi(b)[/imath] is different from [imath]\phi(a),\phi(b)[/imath] and also from [imath]e=\phi(e)[/imath] since [imath]\phi(a)\ne\phi(b).[/imath] All such [imath]\phi[/imath] form a group [imath]\simeq S_3.[/imath] Thus [imath]\text{Aut } K_4[/imath] is not abelian. | 1130221 | Show [imath]\operatorname{Aut}(C_2 \times C_2)[/imath] is isomorphic to [imath]D_6[/imath]
Show [imath]\operatorname{Aut}(C_2 \times C_2)[/imath] is isomorphic to [imath]D_6[/imath] (the group with [imath]x^3=1[/imath], [imath]y^2=1[/imath] and [imath]xy=yx^2[/imath]). I'm not really sure how to express the elements of [imath]\operatorname{Aut}(C_2 \times C_2)[/imath]. Would it be sufficient to show the elements of [imath]\operatorname{Aut}(C_2 \times C_2)[/imath], find their order and show they bijectively map to every element of [imath]D_6[/imath] and satisfy [imath]xy=yx^2[/imath]? |
2349124 | [imath]X[/imath] and [imath]Y[/imath] are sets. Prove [imath]X = \emptyset[/imath] iff [imath]Y = (X \cap Y^c) \cup (X^c \cap Y)[/imath]
I keep on hitting a road block in trying to solve this, especially when trying to prove it going from the right hand side to the left hand side. | 2123377 | [imath]A=\emptyset [/imath] if and only if [imath]B = A \bigtriangleup B[/imath]
Is this true? If [imath]A[/imath] and [imath]B[/imath] are sets, then [imath]A=\emptyset [/imath] if and only if [imath]B = A \bigtriangleup B[/imath]. If [imath]A=\emptyset[/imath] then [imath]B=\emptyset[/imath] too? Could someone help me please? |
2350326 | Series: An equality between factorials
Here is the problem: Prove this equality: [imath] 1 +\sum_{k=1}^{n} k\times k! = (n+1)![/imath]. | 1764581 | Proof by strong induction combinatorics problem: [imath]1(1!) + 2(2!) + 3(3!) + \dots + n(n!) = (n+1)! - 1[/imath]
[imath]1(1!) + 2(2!) + 3(3!) + \dots + n(n!) = (n+1)! - 1[/imath] How do we prove this by strong induction? I know how to do it with weak induction, but how would strong induction work with this problem? |
20664 | Finitely generated modules over PID
Let [imath]A[/imath], [imath]B[/imath], [imath]C[/imath], and [imath]D[/imath] be finitely generated modules over a PID such that [imath]A\oplus B\cong C\oplus D[/imath] and [imath]A\oplus D\cong C\oplus B[/imath]. Prove that [imath]A\cong C[/imath] and [imath]B\cong D[/imath]. The only tool I have is the theorem about finitely generated modules, but I don't quite see the connection. Please Help. Thanks. | 1500358 | direct sum of the modules
suppose [imath]A \oplus C \cong B \oplus C[/imath], [imath]A,B,C[/imath] are finitely generated modules over a PID R. Prove [imath]A \cong B[/imath]. My attempt: let [imath]\theta:A \oplus C \rightarrow A[/imath] by [imath]\theta((a,c)) = a[/imath], so the [imath]ker\theta = C,[/imath] so [imath](A \oplus C) / C \cong A[/imath], and by the same reasoning, [imath](B \oplus C) / C \cong B[/imath]. Since [imath]A \oplus C \cong B \oplus C[/imath], [imath](A \oplus C)/C \cong (B \oplus C)/C [/imath] and therefore [imath]A \cong B[/imath]. Are there any problems with my proof? Thanks a lot |
2349439 | Topological groups on the circle [imath]S^1[/imath]
On the circle [imath]S^1[/imath] there is the usual circle group, i.e. the group isomorphic to [imath]\{e^{i\varphi}\mid \varphi\in[0,2\pi)\}[/imath] with complex multiplication as group operation. This group is a topological group in the sense that [imath]S^1[/imath] is a topological space and the group operation and the inverse are continuous. Question: Are there other topological groups on [imath]S^1[/imath] essentially different from the circle group, assuming [imath]S^1[/imath] with the standard topology? What about other abelian topological groups? My question was motivated by this other post. The group described there turned out to be just the usual one. Two notes: I am looking for groups involving all of [imath]S^1[/imath], not only a subset. Especially no subgroups of the circle group. I am looking for groups not isomorphic to the circle group. | 1498307 | How many group structures make [imath]S^1[/imath] a topological group?
Let [imath]S^1[/imath] be the subspace of [imath]R^2[/imath] given the usual topology. How many group structures make [imath]S^1[/imath] a topological group? |
158041 | Dimensionality of null space when Trace is Zero
This is the fourth part of a four-part problem in Charles W. Curtis's book entitled Linear Algebra, An Introductory Approach (p. 216). I've succeeded in proving the first three parts, but the most interesting part of the problem eludes me. Part (a) requires the reader to prove that [imath]\operatorname{Tr}{(AB)} = \operatorname{Tr}{(BA)}[/imath], which I was able to show by writing out each side of the equation using sigma notation. Part (b) asks the reader to use part (a) to show that similar matrices have the same trace. If [imath]A[/imath] and [imath]B[/imath] are similar, then [imath]\operatorname{Tr}{(A)} = \operatorname{Tr}{(S^{-1}BS)}[/imath] [imath]= \operatorname{Tr}(BSS^{-1})[/imath] [imath]= \operatorname{Tr}(B)[/imath], which completes part (b). Part (c) asks the reader to show that the vector subspace of matrices with trace equal to zero has dimension [imath]n^2 - 1[/imath]. Curtis provides the hint that the trace map from [imath]M_n(F)[/imath] to [imath]F[/imath] is a linear transformation. From this, I used the theorem that [imath]\dim T(V) + \dim n(T) = \dim V[/imath] to obtain the dimension of the null space. Part (d), however, I'm stuck on. It asks the reader to show that the subspace described in part (c) is generated by matrices of the form [imath]AB - BA[/imath], where [imath]A[/imath] and [imath]B[/imath] are arbitrary [imath]n \times n[/imath] matrices. I tried to form a basis for the subspace, but wasn't really sure what it would look like since an [imath]n \times n[/imath] matrix has [imath]n^2[/imath] entries in it, but the basis would need [imath]n^2 - 1[/imath] matrices. I also tried to think of a linear transformation whose image would have the form of [imath]AB - BA[/imath], but this also didn't help me. I'm kind of stuck... Many thanks in advance! | 2335109 | Prove that the set of all [imath]n × n[/imath] matrices of the form [imath]AB-BA[/imath] is equal to the set of all [imath]n × n[/imath] matrices with trace zero
Let [imath]W[/imath] be the space of [imath]n × n[/imath] matrices over the field [imath]F[/imath], and let [imath]W_0[/imath] be the subspace spanned by the matrices [imath]C[/imath] of the form [imath]AB - BA[/imath]. Prove that [imath]W_0[/imath] is exactly the subspace of matrices which have trace zero. (Hint: What is the dimension of the space of matrices of trace zero? Use the matrix 'units,' i.e. matrices with exactly one non-zero entry, to construct enough linearly independent matrices of the form [imath]AB-BA[/imath]. ) Let [imath]K[/imath] be the set of all [imath]n × n[/imath] matrices of the form [imath]AB-BA[/imath], and let [imath]U[/imath] be the subspace of all [imath]n × n[/imath] matrices with trace zero. First, I am not sure how [imath]K[/imath] is a subspace. Certainly it's closed under scalar multiplication, but how is it closed under addition? We know that [imath]K[/imath] is a subset of [imath]U[/imath], but I am not sure how it's the other way around. We know that [imath]\dim U = n^2-1[/imath], and I could find [imath]n^2-n[/imath] 'unit' matrices (each of which has only one 1 on a non-diagonal entry) and express them as [imath]AB-BA[/imath]. But I am not sure how to deal with the ones with at least one nonzero diagonal entry. I am not even sure what the hint means. Any help? |
2350298 | If [imath](X_n)_n[/imath] is i.i.d. and [imath]\frac{S_n}{n} \to a[/imath] almost surely then [imath]a=\Bbb E[X_1][/imath]
[imath](X_n)_{n \in \Bbb N}[/imath] independent and identically distributed. [imath]S_n = \sum_{i=1}^n X_i[/imath] Now I want to show: [imath]\frac{S_n}{n} \xrightarrow{\text{almost surely}} a, \mathrm{with}\: a \in \Bbb R\Rightarrow a=\Bbb E[X_1][/imath] In our lecture we showed: [imath](X_n)_{n \in \Bbb N}[/imath] independent and identically distributed, with [imath]\Bbb E[X_1] \lt \infty[/imath], then [imath]\frac{S_n}{n} \xrightarrow{\text{almost surely}} \Bbb E[X_1][/imath] In other words, the difficulty is either the uniqueness of the limit or the fact that we don't know from the start whether the expectation is finite. I'm stuck on this problem. Does someone have any ideas or tips on how to solve this? Thanks in advance! | 1961003 | If [imath](X_n)[/imath] is i.i.d. and [imath] \frac1n\sum\limits_{k=1}^{n} {X_k}\to Y[/imath] almost surely then [imath]X_1[/imath] is integrable (converse of SLLN)
Let [imath](\Omega,\mathcal F,P)[/imath] be a finite measure space. Let [imath]X_n:\Omega \rightarrow \mathbb R[/imath] be a sequence of i.i.d. r.v.'s. I need to prove that if [imath] n^{-1}\sum _{k=1}^{n} {X_k} [/imath] converges almost surely to [imath]Y[/imath], then all [imath]X_k[/imath] have expectation. If I understand correctly, '[imath]X_k[/imath] has expectation' means that [imath]X_k[/imath] is in [imath]\mathcal L^1(\Omega)[/imath]. And I know that on a finite measure space, convergence in expectation is convergence in [imath]\mathcal L^1(\Omega)[/imath], and it's stronger than almost sure convergence. And I know that, by linearity of expectation, even if only one of the [imath]X_k[/imath] is not in [imath]\mathcal L^1(\Omega)[/imath], then [imath]Y[/imath] is not in [imath]\mathcal L^1(\Omega)[/imath]. How do I continue?
2352210 | Proving subspace of Banach space is finite dimensional
Let [imath]S[/imath] be a subspace of [imath]C([0,1])[/imath] whose elements satisfy [imath]\|f\|_{L^\infty}\leq K\|f\|_{L^2}[/imath]. Prove that it's finite dimensional. The obvious observation is that the [imath]L^\infty[/imath] and [imath]L^2[/imath] norms are equivalent on [imath]S[/imath] (since the underlying measure space is finite), but I can't see how to directly exploit this. The only way I know to prove a subspace is finite dimensional is to show the unit ball is compact. Using the equivalence of the norms it then seems like it'd be sufficient to prove that a sequence with [imath]\|f_n\|_\infty[/imath] bounded has a subsequence converging in the (usually weaker) [imath]L^2[/imath] topology. I'm not sure this is true; am I on the right track? | 727274 | Finite dimensional subspace of [imath]C([0,1])[/imath]
Let linear [imath]S[/imath] be a subspace of [imath]C([0,1])[/imath], i.e., the continuous real-valued functions on [imath][0,1][/imath]. Assume that there exists [imath]c>0[/imath], such that [imath]\|\,f\|_\infty\leq c \|\,f\|_2[/imath], for all [imath]f\in S[/imath]. Then show that [imath]S[/imath] is finite-dimensional. This is equivalent to proving that the closed unit ball in [imath](S,\|\|_\infty)[/imath] or in [imath](S,\|\|_2)[/imath] is compact, but I can't derive this. Another thought is that the [imath]L^2[/imath] closure of [imath]S[/imath] is a subset of [imath]C([0,1])[/imath]. Indeed, if [imath]f_n\to f\in L^2[/imath], then the given relation [imath]\|f_n-f\|_\infty \leq c\|f_n-f\|_2[/imath] implies that [imath]f_n\to f[/imath] in [imath]L^\infty[/imath], so [imath]f[/imath] is continuous. Therefore we have the inclusion [imath]S\subset \overline{S}^{L^2}\subset C([0,1])\subset L^2[/imath] If [imath]S[/imath] was infinite dimensional, then [imath]\overline{S}[/imath] would be an infinite dimensional hilbert space, proper subset of [imath]L^2[/imath]. Another thought is that all the [imath]L^p[/imath] norms on [imath]S[/imath] are equivalent (by Holder). If ALL norms are equivalent, then [imath]S[/imath] has to be finite dimensional. |
2352392 | Does existence of nonzero linear functional depends on axiom of choice?
Given an arbitrary nonzero vector space [imath]V[/imath], is there a nonzero linear functional on [imath]V[/imath], without assuming axiom of choice? I know that by assuming existence of a basis for [imath]V[/imath], we can consider the dual basis for a subspace of [imath]V^*[/imath], which justifies the existence of nonzero linear functional, but this argument fails without axiom of choice. I guess there may not always be a nonzero linear functional, but my knowledge on axiom of choice and infinite-dimensional vector spaces is lacking. A quick search on Google fails to give an answer. Note that I am not talking about normed spaces or continuous linear functionals, just plain vector spaces with no additional structure. | 970194 | Existence of non-trivial linear functional on any vector space
For every vector space [imath]V[/imath] does there exist a linear functional [imath]f[/imath] ( a linear map from [imath]V[/imath] to [imath]F[/imath] the underlying field ) such that for some [imath] \vec v \in V[/imath] , [imath]f(\vec v) \ne 0[/imath] ? If it does exist , can we prove the existence without the "axiom of choice " ? Is the existence equivalent to axiom of choice ? |
468098 | How to find the [imath]20[/imath] consecutive composite numbers
I have a little confusion over the following problem: Find [imath]20[/imath] consecutive composite numbers. HINTS: [imath]\color {green} {\text{Numbers}\,\, 20!+2,20!+3,\cdots ,20!+21 \,\,\text{will do the trick. The following result by Euclid has been known for more than 2000 years.}}[/imath] But the solution is not very clear to me; especially, I do not understand why [imath]k[/imath] has been added to [imath]20![/imath], where the numbers are of the form [imath]20!+k,\ k=2,\dots,21[/imath]. Can someone explain it? Thanks and regards to all. | 2311652 | I'm trying to find the longest consecutive set of composite numbers
Hello and I'm quite new to Math SE. I am trying to find the largest consecutive sequence of composite numbers. The largest I know is: [imath]90, 91, 92, 93, 94, 95, 96[/imath] I can't make this series any longer because [imath]97[/imath] is prime unfortunately. I can however, see a certain relation, if suppose we take the numbers like (let [imath]a_1, a_2, a_3,...,a_n[/imath]denote digits and not multiplication): [imath]a_1a_2a_3...a_n1,\ a_1a_2a_3...a_n2,\ a_1a_2a_3...a_n3,\ a_1a_2a_3...a_n4,\ a_1a_2a_3...a_n5,\ a_1a_2a_3...a_n6,\ a_1a_2a_3...a_n7,\ a_1a_2a_3...a_n8,\ a_1a_2a_3...a_n9,\ a_1a_2a_3...(a_n+1)0[/imath] The entire list of consecutive natural numbers I showed above can be made composite if: The number formed by digits [imath]a_1a_2a_3...a_n[/imath] should be a multiple of 3 The numbers [imath]a_1a_2a_3...a_n1[/imath] and [imath]a_1a_2a_3...a_n7[/imath] should be composite numbers If I didn't clearly convey what I'm trying to say, I mean like, say I want the two numbers (eg: ([imath]121[/imath], [imath]127[/imath]) or ([imath]151[/imath], [imath]157[/imath]) or ([imath]181[/imath], [imath]187[/imath])) to be both composite. I'm still quite not equipped with enough knowledge to identify if a random large number is prime or not, so I believe you guys at Math SE can help me out. |
2352364 | If [imath]f(x)=\frac{1}{x^2}\cdot e^{\frac{1}{x}}[/imath], and [imath]x_{0}\in \left ( 0,\frac{1}{2} \right )[/imath], let [imath]x_{n+1}=f\left ( \frac{1}{x_{n}} \right )[/imath]. Show:
Let [imath]f:\mathbb{R}\setminus \left \{ 0 \right \}\to\mathbb{R}, f(x)=\frac{1}{x^2}\cdot e^{\frac{1}{x}}[/imath], [imath]x_{0}\in \left ( 0,\frac{1}{2} \right )[/imath], [imath]x_{n+1}=f\left ( \frac{1}{x_{n}} \right ) \forall n\in \mathbb{N}[/imath]. Show that: 1) [imath](x_{n})_{n\in \mathbb{N}}[/imath] is convergent 2) [imath]\lim_{n\to\infty }x_{n} = 0[/imath] EDIT: Well, going to that link, someone says that [imath]x_{0}[/imath] is important when talking about the convergence of the sequence, and that's what that specific person forgot to mention. | 2328343 | Recursive sequence through a function
\begin{align}f(x) &= \dfrac{e^{1/x}}{x^2}\\ x_{n+1} &= f\left( \dfrac{1}{x_n}\right)\end{align} Show that [imath]x_n[/imath] is convergent and that its limit is [imath]0[/imath]. It is very easy to find the limit from the recurrence relation: just plug in [imath]l[/imath] in place of the terms of the sequence. But how do I show it is convergent? I tried to evaluate the difference [imath]x_{n+1}-x_n[/imath], but it's a messy calculation... I also tried using the function's monotonicity, but I don't see how it tells me something about the function.
2352930 | taking derivative of inverse function
Let [imath]f(x)=x+\frac{x^2}{2}+\frac{x^3}{3}+\frac{x^4}{4}+\frac{x^5}{5}[/imath], and [imath]g(x)=f^{-1}(x)[/imath]. Compute [imath]g^{(3)}(0)[/imath] (the 3rd derivative of g). I have a solution but its not giving the correct answer. My question is not how to solve the problem but what I did wrong in my solution. First [imath]g(0)=f^{-1}(0)=0[/imath] [imath]g^{'}(x)= (f^{'}(f^{-1}(x)))^{-1}[/imath], so [imath]g^{'}(0)=1[/imath] [imath]g^{(2)}(x)=-(f^{'}(f^{-1}(x)))^{-2} \cdot f^{(2)}(f^{-1}(x)) \cdot g^{'}(x)= -(g^{'}(x))^{-3} \cdot f^{(2)}(f^{-1}(x))[/imath], so [imath]g^{(2)}(0)=-1[/imath] Finally [imath]g^{(3)}(x)=3(g^{'}(x))^{-4} \cdot g^{(2)}(x) \cdot f^{(2)}(f^{-1}(x))+ f^{(3)}(f^{-1}(x)) \cdot g^{'}(x) \cdot -(g^{'}(x))^{-3}[/imath], so [imath]g^{(3)}(0)=-3-2=-5[/imath]. The correct answer is 1, however. I've checked my work so many times but still can't figure out what I did incorrect. | 1380972 | third derivative of inverse function
Is my way of solving and my answer correct? Let [imath]f(x)=x+\frac{x^2}{2}+\frac{x^3}{3}+\frac{x^4}{4}+\frac{x^5}{5}[/imath] And [imath]g(x)=f^{-1}(x)[/imath] Find [imath]g'''(0)[/imath] My attempt: We know that [imath]g'(x)=\frac{1}{f'(g(x))}[/imath] [imath]f'(x)=1+x+x^2+x^3+x^4=\frac{x^5-1}{x-1}[/imath] [imath]f'(g(x))=\frac{(g(x))^5-1}{(g(x))-1}[/imath] [imath]\Rightarrow g'(x)=\frac{(g(x))-1}{(g(x))^5-1}[/imath] [imath]\Rightarrow g'(x)[(g(x))^5-1]=g(x)-1[/imath] Similarly differentiating again, [imath]\Rightarrow g'(x)[5(g(x))^4]+[(g(x))^5-1]g''(x)=g'(x)[/imath] Similarly differentiating again, [imath]\Rightarrow g'(x)[20(g(x))^3g'(x)]+[5(g(x))^4]g''(x)+[(g(x))^5-1]g'''(x)+g''(x)[5(g(x))^4]=g''(x)[/imath] Putting [imath]x=0[/imath], [imath]g(0)=f^{-1}(0)=0[/imath] [imath]g'(0)=\frac{1}{f'(g(0))}=\frac{1}{f'(0)}=1[/imath] similarly,[imath]g''(0)=-1[/imath] [imath]\Rightarrow g'''(0)=1[/imath] |
2317473 | Basis of the space of alternating [imath]k[/imath]-tensors
I am reading Tu's book on Manifolds. As I understand it, given a point [imath]p[/imath] of a manifold [imath]M[/imath] we have the basis [imath]\{\frac{\partial}{\partial x^i}|_p\}[/imath] for [imath]T_pM[/imath], where [imath](U,x^1,...,x^m)[/imath] is a chart around [imath]p[/imath]. Now, if I prove that [imath]\{ dx^i(p) \}[/imath] is a basis for [imath]T^\ast_p M[/imath], then it shouldn't be too hard to show that [imath]\{ dx^{i_1}(p)\wedge ... \wedge dx^{i_k}(p) \}[/imath] is a basis for [imath] \bigwedge^k T^*_pM [/imath]. Now, I think that all I need to do to prove that [imath]\{ dx^i(p) \}[/imath] is a basis for [imath]T^\ast_p M[/imath] is to show that [imath]dx^i(p)(\frac{\partial}{\partial x^j}|_p)=\delta^i_j[/imath]. Let [imath]f\in C^\infty(\mathbb{R})[/imath]. We have [imath] dx^i(p)(\frac{\partial}{\partial x^j}|_p)(f) =\frac{\partial}{\partial x^j}|_p(f\circ x^i) [/imath] and this does not seem to be [imath]\delta^i_jf[/imath]. | 193258 | Basis of cotangent space
The derivative of a map [imath]F[/imath] between manifolds [imath]M[/imath] and [imath]N[/imath] is defined by [imath]F_*X(f)= X(f \circ F)[/imath] where [imath]X \in T_P(M)[/imath], the tanget space at the point [imath]P[/imath]. We know that [imath]\left\{\frac{\partial}{\partial x^i}\bigg|_P\right\}_i[/imath] is a basis for [imath]T_P(M)[/imath]. How to show that [imath]\left\{dx^i\bigg|_P\right\}_i[/imath] is a basis for the cotangent space [imath](T_P(M))^*[/imath]? First, by [imath]dx^i[/imath], I guess we mean the derivative of the map [imath]x^i[/imath] as defined above, right? Is this map [imath]x^i[/imath] just picking out the ith coordinate? Secondly, to show that it is a basis, we need to show that [imath]dx^i\left(\frac{\partial}{\partial x^j}\bigg|_P\right) = \delta^i_j.[/imath] Where to go from here: [imath]\underbrace{(dx^i)_P}_{(\Phi_*)_P}\underbrace{\left(\frac{\partial}{\partial x^j}\bigg|_P\right)}_{X}f = \left(\frac{\partial}{\partial x^j}\bigg|_P\right)(f\circ x^i)?[/imath] I can use the chain rule but I am not sure exactly. Please help. |
2352924 | Is there a way to map an infinite grid onto the natural numbers efficiently?
I have a grid of cells that extends infinitely in all four directions. I need a way to map the coordinates of each cell to a unique positive integer efficiently. Can this be done non-sequentially? In other words, what is a function that is a bijection between [imath]\mathbb{Z}\times\mathbb{Z}[/imath] and [imath]\mathbb{N}[/imath]? | 2328749 | Convert a Pair of Integers to an Integer, Optimally?
What's the best algorithm that takes in two positive integers [imath]a,b[/imath] and returns a positive integer [imath]c[/imath], such that all [imath]c[/imath]'s are unique and [imath](a,b)[/imath] is distinguishable from [imath](b,a)[/imath]; where the best means that the length of [imath]c[/imath] in terms of digits is shortest possible, on average? This implies that we can work out [imath]a,b[/imath] from [imath]c[/imath] with the same algorithm. (reversing it) If we have a bound [imath]x[/imath] such that [imath]a,b\le x[/imath], then [imath]f(a,b)[/imath] has [imath]x^2[/imath] unique values which means our [imath]c[/imath] needs to take values from [imath][1,x^2][/imath] to have the least amount of digits, which can be achieved with the following function: [imath]f(a,b)=(a-1)x+b[/imath] If [imath]x[/imath] does not exist, what's the optimal algorithm then? (To make sense which algorithm is the shortest, I was comparing the length of the [imath]c[/imath] with the sum of lengths of [imath]a,b[/imath] and calculating the average which is more precise the more values we consider.) I had two ideas so far; [imath](1)[/imath] Factorization Take the factors of [imath]a[/imath] and [imath]b[/imath]. Now you can use the second longest sequence of [imath]0[/imath]s as a separator between factors, and the longest sequence of [imath]0[/imath]s as a separator between the factors of the two numbers. Example: [imath]f(123,1007)=(3\times41 ),( 19\times53)=30410019053[/imath] Example: [imath]f(30,1006)=(2\times3\times5),(2\times503)=2003005000200503[/imath] But this can be optimized, for cases such as [imath]f(2^{64},3^{64})[/imath], and even then, seems like it is too much extra digits. [imath](2)[/imath] Trailing zeroes Take [imath]a[/imath], reverse its digits and put a random digit in front of the first digit. That way, the result does not have trailing zeroes. Then use the longest sequence of [imath]0[/imath]s as a separator from [imath]b[/imath]. Example: [imath]f(123,456)=93210456[/imath] Example: [imath]f(123,10100)=932100010100[/imath] Example: [imath]f(420,314)=902400314[/imath] This also has room for optimization if we look at individual cases and add more specific rules. But it feels like it won't be optimal even then. I suppose the optimal solution would need to look at couple or more cases individually? |
2352433 | How to succinctly express [imath]A_k{\times}A_{k-1}{\times}A_{k-2}{\times} ... {\times}A_1[/imath]
I was thinking of writing: [imath]\prod_{i=k}^1 A_i[/imath] But I'm not sure if it is the correct way to do it. As in: [imath]\left(A_1{\times}A_2{\times}A_3{\times} ... A_k\right)^{-1} = A_k^{-1}{\times}A^{-1}_{k-1}{\times}A^{-1}_{k-2}{\times} ... {\times}A^{-1}_1[/imath] I intended to write it more succinctly as: [imath] \left(\prod_{i=1}^k A_i \right)^{-1} = \, \prod_{i=k}^1 A_i^{-1}[/imath] But I'm not sure if it's the correct way of representing it. EDIT The [imath]A_i[/imath] are matrices. | 707290 | Product Notation for Multiplication in Reverse Order
Is there a standard notation for multiplication in reverse order? For example consider the problem [imath]x_{k+1} = A_k x_k[/imath] where [imath]x_i \in \mathbb{R}^n[/imath] and [imath]A_i \in M_n(\mathbb{R})[/imath], ([imath]i=0,1,2,\dots[/imath]) without any further assumptions on [imath]A_i[/imath]. The solution to this problem is [imath]x_k = A_{k-1} \dots A_1 A_0 x_0[/imath] Obviously, writing [imath]x_k = \prod_{i=0}^{k-1} A_i x_0[/imath] is wrong because of the multiplication order. But I'm also uncomfortable using [imath]x_k = \prod_{i=k-1}^{0} A_i x_0[/imath], as the notation suggests that we need to increment [imath]i[/imath], not decrement it. Also, one may need to write [imath]\prod_{i=k}^{m} A_i[/imath]. Then, we need to explicitly state [imath]m < k[/imath] to imply ordering. I can always write the open form, but as similar products frequently occur in calculations in my work, I want to express them in a more compact way. The question is, is there a standard and nice notation to do so? There are some suggestions in this question, but I think none of them are "nice enough" to work with. I came up with the notation [imath]x_k = \coprod_{i=k-1}^{0} A_i x_0[/imath] to imply decrementing [imath]i[/imath] rather than incrementing it, but I don't know if it is used somewhere else or whether there is a standard notation.
2353383 | [imath]\lim_{n \to\infty}{\left(\left(\frac{n}{n^2+1^2}\right) + \left(\frac{n}{n^2+2^2} \right)+ \dots +\left(\frac{n}{n^2+n^2} \right)\right)}[/imath]
Find [imath]\lim_{n \to\infty}{\left(\left(\frac{n}{n^2+1^2}\right) + \left(\frac{n}{n^2+2^2} \right)+ \dots +\left(\frac{n}{n^2+n^2} \right)\right)}[/imath] Is there some sort of theorem or method behind this type of limit? I mean, I can't even begin to do the task, since I have no clue. Recent findings have shown that it might be related to Riemann sums, yet again, this hardly makes matters clear. | 1105719 | Evaluating the limit [imath]\lim_{n\to\infty} \left[ \frac{n}{n^2+1}+ \frac{n}{n^2+2^2} + \ldots + \frac{n}{n^2+n^2} \right][/imath]
Evaluating the limit [imath]\displaystyle \lim_{n\to\infty} \left[ \frac{n}{n^2+1}+ \frac{n}{n^2+2^2} + \ldots + \frac{n}{n^2+n^2} \right][/imath] I have a question about the following solution: We may write it in the form: [imath] \frac{1}{n} \left[ \frac{1}{1+(\frac{1}{n})^2} + \ldots + \frac{1}{1+(\frac{n}{n})^2} \right] [/imath] Somehow I need to figure out that the limit is actually the Riemann sum of [imath]\frac{1}{1 + x^2}[/imath] on [imath][0,1][/imath] for the partition [imath]\pi = 0 < \frac{1}{n} < \ldots < \frac{n}{n}[/imath]. Can you explain to me how to reach this conclusion?
2352386 | Solve [imath]\int (x+2)/(x^2+2)dx[/imath]
How can I integrate [imath]\int\frac{x+2}{x^2+2}dx[/imath]? I have tried splitting the integral into two pieces and can solve [imath]\int\frac{x}{x^2+2}dx[/imath] by substituting [imath]x^2+2[/imath], but I have problems getting [imath]\int \frac{2}{x^2+2}dx[/imath]. | 39654 | Not sure how to go about solving this integral
[imath]\displaystyle \int \left( \frac{1}{x^2+3} \right)\; dx[/imath] I've let [imath]u=x^2+3[/imath] but can't seem to get the right answer. Really not sure what to do. |
2349761 | If [imath]f(0) = 0[/imath] and [imath]|f'(x)|\leq |f(x)|[/imath] for all [imath]x\in\mathbb{R}[/imath] then [imath]f\equiv 0[/imath]
Let [imath]f:\mathbb{R}\rightarrow\mathbb{R}[/imath] be a continuous and differentiable function in all [imath]\mathbb{R}[/imath]. If [imath]f(0)=0[/imath] and [imath]|f'(x)|\leq |f(x)|[/imath] for all [imath]x\in\mathbb{R}[/imath], then [imath]f\equiv 0[/imath]. I've been trying to prove this using the Mean Value Theorem, but I can't get to the result. Can someone help? | 2537397 | Prove [imath]f:[a,b]\rightarrow \mathbb{R}[/imath] is constant function.
Let [imath]f:[a,b]\rightarrow \mathbb{R}[/imath] s.t. [imath]f(a)=0[/imath]. If [imath]f[/imath] is differentiable on [imath][a,b][/imath] and there exists [imath]C \in \mathbb{R}[/imath] s.t. [imath]|f'(x)| \leq C|f(x)|[/imath] for all [imath]x \in [a,b][/imath], then [imath]f(x)=0[/imath] for all [imath]x \in [a,b][/imath]. My attempt: [imath]|f'(a)| \leq C|f(a)|=0\Rightarrow f'(a) = 0[/imath]. Let [imath]\epsilon=\frac{1}{C}[/imath]; then there exists [imath]\delta > 0[/imath] s.t. for all [imath]x \in (a,b)[/imath], if [imath]x\in (a,a+\delta)[/imath], then [imath]\frac{|f(x)|}{x-a}<\frac{1}{C}[/imath], thus [imath]|f'(x)|\leq C|f(x)|<|x-a|<\delta[/imath]. I do not know what to do from here. Edit: I could try to think about [imath]f'(b)[/imath]: If [imath]f'(b)>0[/imath] or [imath]f'(b)<0[/imath], then find some contradiction.
2353666 | How to calculate [imath]\frac{1}{2i\pi}\int_c|1+z+z^2|^2dz[/imath]?
Let [imath]c[/imath] denote the unit circle centered at the origin in [imath]\mathbb{C}[/imath]; then [imath]\frac{1}{2i\pi}\int_c|1+z+z^2|^2dz[/imath], where the integral is taken anticlockwise along [imath]c[/imath], equals: 0, 1, 2, or 3. I tried this by considering the properties of the complex numbers [imath]|z|^2=z\bar{z}[/imath], by considering [imath]z=Re^{i\theta}[/imath] and by substituting [imath]z=x+iy[/imath], but I didn't get how to solve this. | 2331657 | Prove that [imath]\frac{1}{2\pi i}\int_\mathcal{C} |1+z+z^2+\cdots+z^{2n}|^2~dz =2n[/imath] where [imath]\mathcal{C}[/imath] is the unit circle
On the generalization of a recent question, I have shown, by analytic and numerical means, that [imath]\frac{1}{2\pi i}\int_\mathcal{C} |1+z+z^2+\cdots+z^{2n}|^2~dz =2n[/imath] where [imath]\mathcal{C}[/imath] is the unit circle. Thus, [imath]z=e^{i\theta}[/imath] and [imath]dz=iz~d\theta[/imath]. There remains to prove it, however. What I have done: consider the absolute value part of the integrand, [imath] \begin{align} |1+z+z^2+\cdots+z^{2n}|^2 &=(1+z+z^2+\cdots+z^{2n})(1+z+z^2+\cdots+z^{2n})^*\\ &=(1+z+z^2+\cdots+z^{2n})(1+z^{-1}+z^{-2}+\cdots+z^{-2n})\\ &=(1+z+z^2+\cdots+z^{2n})(1+z^{-1}+z^{-2}+\cdots+z^{-2n})\frac{z^{2n}}{z^{2n}}\\ &=\left(\frac{1+z+z^2+\cdots+z^{2n}}{z^n} \right)^2\\ &=\left(\frac{1}{z^n}\cdots+\frac{1}{z}+1+z+\cdots z^n \right)^2\\ &=(1+2\cos\theta+2\cos 2\theta+\cdots+2\cos n\theta)^2\\ \end{align} [/imath] We now return to the integral, [imath] \begin{align}\frac{1}{2\pi i}\int_C |1+z+z^2+\cdots z^n|^2dz &=\frac{1}{2\pi}\int_0^{2\pi}(1+2\cos\theta+2\cos 2\theta+\cdots+2\cos n\theta)^2 (\cos\theta+i\sin\theta)~d\theta\\ &=\frac{1}{2\pi}\int_0^{2\pi}(1+2\cos\theta+2\cos 2\theta+\cdots+2\cos n\theta)^2 \cos\theta~d\theta \end{align}[/imath] where we note that the sine terms integrate to zero by virtue of symmetry. This is where my trouble begins. Clearly, expanding the square becomes horrendous as [imath]n[/imath] increases, and even though most of the terms will integrate to zero, I haven't been able to selectively find the ones that won't. The other thing I tried was to simplify the integrand by expressing it in terms of [imath]\cos\theta[/imath] only using the identity [imath]\cos n\theta=2\cos (n-1)\theta\cos\theta-\cos(n-2)\theta[/imath] but this too unfolds as an algebraic jungle very quickly. There are various other expressions for [imath]\cos n\theta[/imath], but they seem equally unsuited to the task. I'll present them here insofar as you may find them more helpful than I did. [imath] \cos(nx)=\cos^n(x)\sum_{j=0,2,4}^{n\text{ or }n-1} (-1)^{n/2}\begin{pmatrix}n\\j\end{pmatrix}\cot^j(x)=\text{T}_n\{\cos(x)\}\\ \cos(nx)=2^{n-1}\prod_{j=0}^{n-1}\cos\left(x+\frac{(1-n+2j)\pi}{2n} \right)\quad n=1,2,3,\dots [/imath] where [imath]\text{T}_n[/imath] are the Chebyshev polynomials. Any suggestions will be appreciated. |
2354905 | Convergence of [imath]\sum_{n=0}^\infty (\frac{n}{n+1})^{n^2}e^n[/imath]
Wolframalpha tells that [imath]\sum_{n=0}^\infty (\frac{n}{n+1})^{n^2}e^n[/imath] diverges since [imath]\lim_{n\to\infty}(\frac{n}{n+1})^{n^2}e^n=\sqrt{e}[/imath]. How do you calculate this limit? And also it tells that [imath]\sum_{n=0}^\infty (-1)^n(\frac{n}{n+1})^{n^2}e^n[/imath] is not convergent, but it didn't specify the reason. Can you give me one? | 2354778 | Interval of convergence of [imath]\sum_{n=0}^\infty (\frac{n}{n+1})^{n^2}(2x)^n[/imath]
I want to find the interval of convergence of [imath]\sum_{n=0}^\infty (\frac{n}{n+1})^{n^2}(2x)^n[/imath] By the root test, [imath]\sqrt[n]{|(\frac{n}{n+1})^{n^2}(2x)^n|}=|(\frac{n}{n+1})^{n}(2x)|[/imath] And [imath](\frac{n}{n+1})^n=(1-\frac{1}{n+1})^n\to\frac1e[/imath] as [imath]n\to\infty[/imath] So I've got an open interval(of convergence) s.t. [imath]|\frac1e 2x|<1\implies |x|<\frac e2[/imath]. Now I need to figure out whether the series is convergent or not at the end points, [imath]x=\frac e2, -\frac e2[/imath]. So there are two series I need to deal with, namely, [imath]\sum_{n=0}^\infty (\frac{n}{n+1})^{n^2}e^n[/imath] and [imath]\sum_{n=0}^\infty (\frac{n}{n+1})^{n^2}(-1)^ne^n[/imath]. Suppose [imath](\frac{n}{n+1})^{n^2}[/imath] is increasing. Then [imath](\frac{n}{n+1})^{n^2}<(\frac{n+1}{n+2})^{(n+1)^2}\iff 1<(\frac{n+1}{n+2})^{(n+1)^2}(\frac{n+1}{n})^{n^2}[/imath] [imath]\iff 1<(1-\frac{1}{n+2})^{2n+1}(1+\frac{1}{n^2+2n})^{n^2}[/imath] But [imath](1-\frac{1}{n+2})^{2n+1}\to\frac{1}{e^2}[/imath] and [imath](1+\frac{1}{n^2+2n})^{n^2}<(1+\frac{1}{n^2})^{n^2}\to e[/imath] So I have [imath]1<\frac1e<1[/imath], which is a contradiction. Hence [imath](\frac{n}{n+1})^{n^2}[/imath] is not increasing. But I don't know if it's decreasing. (is it true that [imath](1+\frac{1}{n^2+2n})^{n^2}\to e[/imath]?) If [imath](1+\frac{1}{n^2+2n})^{n^2}\to e[/imath] is true, then [imath](\frac{n}{n+1})^{n^2}e^n[/imath] converges to some positive number, so [imath]\sum(\frac{n}{n+1})^{n^2}e^n[/imath] diverges. But I don't know if it's decreasing: can't apply the alternating series convergence test(Leibniz's criterion). So, can you help me with the two end points? |
2355003 | Let [imath]I[/imath] and [imath]J[/imath] be ideals of a commutative ring [imath]R[/imath] such that [imath]I + J = R[/imath]. Show that there is an ideal [imath]K[/imath] in [imath]R[/imath] with [imath]R/K \cong R/I \times R/J[/imath]
Let [imath]I[/imath] and [imath]J[/imath] be ideals of a commutative ring [imath]R[/imath] such that [imath]I + J = R[/imath]. Show that there is an ideal [imath]K[/imath] in [imath]R[/imath] with [imath]R/K \cong R/I \times R/J[/imath]. I tried to solve this using the first isomorphism theorem. But I don’t know how to prove that the homomorphism [imath]\phi: R \to R/I\times R/J[/imath] given by [imath]\phi (a) = (a+I,\ a+J)[/imath] is surjective. I've tried other functions, but I failed. Any help? | 1102037 | The Chinese Remainder Theorem for Rings.
The Chinese Remainder Theorem for Rings. Let [imath]R[/imath] be a ring and [imath]I[/imath] and [imath]J[/imath] be ideals in [imath]R[/imath] such that [imath]I+J = R[/imath]. (a) Show that for any [imath]r[/imath] and [imath]s[/imath] in [imath]R[/imath], the system of equations [imath]\begin{align*} x & \equiv r \pmod{I} \\ x & \equiv s \pmod{J} \end{align*}[/imath] has a solution. (b) In addition, prove that any two solutions of the system are congruent modulo [imath]I \cap J[/imath]. (c) Let [imath]I[/imath] and [imath]J[/imath] be ideals in a ring [imath]R[/imath] such that [imath]I + J = R[/imath]. Show that there exists a ring isomorphism [imath] R/(I \cap J) \cong R/I \times R/J. [/imath] Solution: (a) Let's remind ourselves that [imath]I + J = \{i + j : i \in I, j \in J\}[/imath]. Because [imath]I + J = R[/imath], there are [imath]i \in I, j\in J[/imath] with [imath]i + j = 1[/imath]. The solution of the system is [imath]rj + si[/imath]. We check both equations: [imath]\begin{align*} rj + si &\equiv rj \equiv ri + rj \equiv r(i + j) \equiv r \pmod{I} \\ rj + si &\equiv si \equiv si + sj \equiv s(i + j) \equiv s \pmod{J} \, . \end{align*}[/imath] (b) Assume we have two different solutions [imath]x[/imath] and [imath]x'[/imath]. Then [imath]\begin{align*} x &\equiv x' \pmod{I} \\ x &\equiv x' \pmod{J} \, , \end{align*}[/imath] or else one of them wouldn't even be a solution. So [imath]x - x'[/imath] is in [imath]I[/imath] and [imath]J[/imath], therefore [imath]x - x' \in I \cap J[/imath] and [imath]x\equiv x' \pmod{I \cap J}[/imath]. (c) The Cartesian product of two rings is a ring, so [imath]R/I \times R/J[/imath] is a ring. We look at the map [imath]\begin{align*} \phi: R &\rightarrow R/I \times R/J \\ x &\mapsto (x + I, x + J) \, . \end{align*}[/imath] »Componentwise« ring homomorphisms are ring homomorphisms, so [imath]\phi[/imath] is a ring homomorphism. [imath]\phi[/imath] is surjective: by (a) for any [imath]r\in R/I, s\in R/J[/imath] there exists an [imath]x \in R[/imath] with [imath]\phi(x) = (r, s)[/imath]. The kernel of [imath]\phi[/imath] are the solutions of the system for [imath]r = s = 0[/imath]. By (b) every other solution must be congruent to [imath]0[/imath] modulo [imath]I \cap J[/imath], so [imath]\ker \phi = I \cap J[/imath]. Then by the first isomorphism theorem for rings [imath]R/\ker(\phi) \cong \phi(R)[/imath] we obtain [imath]R/(I \cap J) \cong R/I \times R/J \, .[/imath] Could you please check, if my solution is correct? Thank you! |
2353995 | Proving [imath]\sum_{}^{}(n+1)c_{n+1}(z-z_{o})^{n}[/imath] is differentiable
In the text "Complex Made Simple" I'm having trouble proving [imath]\text{Proposition 1.1}[/imath] via analytic methods: [imath]\text{Proposition 1.1}[/imath] Suppose that the power series [imath]\sum_{}^{}c_{n}(z-z_{o})^{n}[/imath] has a radius of convergence [imath]R > 0[/imath]. Then the function [imath]f(z)=\sum_{}^{}c_{n}(z-z_{o})^{n}[/imath] is differentiable in the disk [imath]D(z_{o},R)[/imath] with derivative: [imath]f'(z) = \sum_{}^{}nc_{n}(z-z_{o})^{n-1} = \sum_{}^{}(n+1)c_{n+1}(z-z_{o})^{n}[/imath] [imath]\text{Remark}[/imath]: Topologically, our disk [imath]D(z_{o},R)[/imath] and our function can be defined as follows in [imath](0)[/imath]: [imath]f : \Psi \rightarrow \mathbb{C}, \text{where} \, \Psi \subset \mathbb{C} \, \, \text{it also follows that}: D(z_{o},R) \subset \mathbb{C}[/imath] [imath]\text{Lemma}[/imath] To formally attack [imath]\text{Proposition 1.1}[/imath], one takes the [imath]\lim[/imath] of [imath]f(z)[/imath], by applying [imath]\text{Definition 1}[/imath]. [imath]\text{Definition 1}[/imath] Given a complex-valued function [imath]f[/imath] of a single complex variable, the derivative of [imath]f[/imath] at a point [imath]z_{o}[/imath] in its domain is defined by the limit: [imath]f'(z_{o})= \lim_{z \rightarrow z_{o}} \frac{f(z)-f(z_{o})}{z-z_{o}} [/imath] Applying our recent developments within our prior Lemma, one can make the following attacks: [imath]f'(z_{o}) = \lim_{z \rightarrow z_{o}}\frac{\sum_{}^{}c_{n}(z-z_{o})^{n} - \sum_{}^{}(n+1)c_{n+1}(z-z_{o})^{n}}{\sum_{}^{}c_{n}(z-z_{o})^{n}-\sum_{}^{}(n+1)c_{n+1}(z-z_{o})^{n}}[/imath] In summary, my question boils down to the application of [imath]\text{Definition 1}[/imath] to our original proposition: I'm having trouble discerning what [imath]z-z_{o}[/imath] is and how it is applied. | 588191 | How to prove that a complex power series is differentiable
I am always using the following result but I do not know why it is true. So: How to prove the following statement: Suppose the complex power series [imath]\sum_{n = 0}^\infty a_n(z-z_0)^n[/imath] has radius of convergence [imath]R > 0[/imath]. Then the function [imath]f: B_R(z_0) \to \mathbb C[/imath] defined by \begin{align*} f(z) := \sum_{n = 0}^\infty a_n (z-z_0)^n \end{align*} is differentiable in [imath]B_R(z_0)[/imath] and for any [imath]z \in B_R(z_0)[/imath] the derivative is given by the formula \begin{align*} f'(z) = \sum_{n = 1}^\infty na_n(z-z_0)^{n-1}. \end{align*} Thanks in advance for explanations. |
2273456 | Relation between Range Space and Null Space
There is a problem in the book Linear Algebra by 'Hoffman Kunze': Let [imath]V[/imath] be a vector space over the field [imath]F[/imath] and [imath]T[/imath] a linear operator on [imath]V[/imath]. If [imath]{T}^2[/imath] = [imath]0[/imath], what can you say about the relation of the range of [imath]T[/imath] to the null space of [imath]T[/imath]? I was trying with [imath]R(T^2)\subset R(T)[/imath] and [imath]N(T)\subset N(T^2)[/imath] but couldn't get the answer.... any hint would be appreciated..... | 810237 | What can you say about the range space and null space
Let [imath]V[/imath] be a vector space over a field [imath]F[/imath] and [imath]T[/imath] a linear operator on [imath]V[/imath]. If [imath]T^2 = 0[/imath], what can you say about the relation of the range of [imath]T[/imath] to the null space of [imath]T[/imath]?
2193449 | Confusion about a proof of Ito isometry for "elementary" functions in Oksendal's SDE book
The following page is excerpted from Bernt Øksendal's Stochastic Differential Equations: An Introduction with Applications: In particular, I don't understand how the author concluded that the two indicated random variables are independent. Although [imath]\Delta B_i[/imath] and [imath]\Delta B_j[/imath] are independent, how does the independence still pass through when [imath]\Delta B_i[/imath] is multiplied by an arbitrary r.v. that is simply [imath]F_{t_j}[/imath]-measurable? For the case [imath]i\ne j[/imath] I can prove alternatively using the fact that [imath]B_t[/imath] is a martingale, and for the second, using [imath]B_t^2-t[/imath] is a martingale. But I just want to know where the (claimed) independence comes from, or is the statement simply false? | 990219 | [imath]E[e_te_s\Delta B_t\Delta B_s][/imath] for [imath]\Delta B_t[/imath] Brownian motion increments and [imath]e_t(\omega)[/imath] a measurable function.
Let [imath]\Delta B_j=B_{t_{j+1}}-B_{t_j}[/imath] where [imath]B_t[/imath] is Brownian motion, and [imath]e_i(\omega)[/imath] measurable with respect to [imath]\sigma(B_{t_i})[/imath]. In Oksendal's 'Stochastic Differential Equations' he states: [imath] E[e_ie_j\Delta B_i\Delta B_j]= \begin{cases} 0 & i\ne j \\ E[e_j^2] & i=j \end{cases} [/imath] Justifying this because '[imath]e_ie_j\Delta B_i[/imath] and [imath]\Delta B_j[/imath] are independent if [imath]i<j[/imath]'. I am trying to understand this. I understand that Brownian motion is normally distributed with independent increments, so: [imath]E[\Delta B_i\Delta B_j]=0[/imath] However, how can we justify [imath]e_ie_j\Delta B_i[/imath] being independent to [imath]\Delta B_j[/imath]? Even if they are, how can we treat [imath]E[e_ie_j\Delta B_i][/imath]? Basically I believe my confusion stems from how to treat products of measurable functions. |
2355475 | Unique representation as a linear combination
Let [imath]V[/imath] be an infinite dimensional vector space and [imath]B[/imath] be a basis. I would like to prove that every element of [imath]V[/imath] has a unique representation as a linear combination of elements of [imath]B[/imath]. I can visualize why this is true (heuristically: take the first representation minus the second and get linear dependence), however I seem unable to do the technical work. It is tricky. The finite dimensional case is trivial and it can't be generalized. Can someone show me how to do this? | 2071094 | Unique representation of a vector
In a book I am reading, the author states without proof that in an [imath]n[/imath]-dimensional vector space [imath]X[/imath], the representation of any [imath]x[/imath] as a linear combination of a given basis [imath]e_{1},e_{2},...,e_{n}[/imath] is unique. How to prove that?
2355626 | If [imath]\alpha =2\pi/7[/imath], find [imath]\tan\alpha \tan 2\alpha + \tan 2\alpha \tan 4\alpha + \tan 4\alpha \tan \alpha[/imath].
If [imath]\alpha =2\pi/7[/imath], find [imath]\tan\alpha \tan 2\alpha + \tan 2\alpha \tan 4\alpha + \tan 4\alpha \tan \alpha[/imath]. I tried using [imath]7\alpha = 2\pi[/imath], [imath]4\alpha =2\pi-3\alpha[/imath], [imath]\sin 4\alpha = \sin 3\alpha[/imath], but couldn't reach the solution. Please help. | 823819 | If [imath]\alpha = \frac{2\pi}{7}[/imath] then find the value of [imath]\tan\alpha .\tan2\alpha +\tan2\alpha \tan4\alpha +\tan4\alpha \tan\alpha.[/imath]
If [imath]\alpha = \frac{2\pi}{7}[/imath] then find the value of [imath]\tan\alpha .\tan2\alpha +\tan2\alpha \tan4\alpha +\tan4\alpha \tan\alpha[/imath] My 1st approach : [imath]\tan(\alpha +2\alpha +4\alpha) = \frac{\tan\alpha +\tan2\alpha +\tan4\alpha -\tan\alpha \tan2\alpha -\tan2\alpha \tan4\alpha -\tan4\alpha \tan\alpha}{1-(\tan\alpha \tan2\alpha +\tan2\alpha \tan4\alpha +\tan\alpha \tan4\alpha)} [/imath] [imath]\Rightarrow 0 = \frac{\tan\alpha +\tan2\alpha +\tan4\alpha -\tan\alpha \tan2\alpha -\tan2\alpha \tan4\alpha -\tan4\alpha \tan\alpha}{1-(\tan\alpha \tan2\alpha +\tan2\alpha \tan4\alpha +\tan\alpha \tan4\alpha)} [/imath] which doesn't give me any solution. My IInd approach : Using Euler substitution: Since [imath]\cos\theta +i\sin\theta = e^{i\theta} [/imath].....(i) and [imath]\cos\theta -i\sin\theta =e^{-i\theta}[/imath]....(ii) Adding (i) and (ii) we get [imath]\cos\theta =\frac{e^{i\theta} +e^{-i\theta}}{2}[/imath] and subtracting (i) and (ii) we get [imath]\sin\theta =\frac{e^{i\theta} -e^{-i\theta}}{2}[/imath] By using this we can write [imath]\tan\alpha .\tan2\alpha +\tan2\alpha \tan4\alpha +\tan4\alpha \tan\alpha[/imath] as [imath]\frac{1}{4}\left[ (e^{\frac{i2\pi}{7}} -e^{\frac{-i2\pi}{7}}) (e^{\frac{i4\pi}{7}} -e^{\frac{-i4\pi}{7}}) + (e^{\frac{i4\pi}{7}} -e^{\frac{-i4\pi}{7}})(e^{\frac{i8\pi}{7}} -e^{\frac{-i8\pi}{7}}) + (e^{\frac{i8\pi}{7}} -e^{\frac{-i8\pi}{7}}) (e^{\frac{i\pi}{7}} -e^{\frac{-i\pi}{7}})\right][/imath] [imath]\large= e^{i\frac{6\pi}{7}}-e^{\frac{i2\pi}{7}}-e^{\frac{-i2\pi}{7}} +e^{\frac{-i6\pi}{7}} +e^{\frac{i3\pi}{7}}-e^{\frac{-i5\pi}{7}}-e^{\frac{i5\pi}{7}} +e^{\frac{-3\pi}{7}} +e^0 -e^{\frac{i2\pi}{7}} -e^{\frac{-i2\pi}{7}}+e^0[/imath] Can anybody please suggest whether my approach is correct or not. Please guide further... Thanks.
2355200 | Distance of a BCH-Code
I have an exercise that asks for the minimum distance of a BCH code of length 63 over [imath]\mathbb F_2[/imath] with generator polynomial given by [imath]g(x)=(x+ 1)(x^6 +x+ 1)(x^6 +x^5 + 1)[/imath]. As [imath]g(x)=x^{13}+x^{11}+x^8+x^5+x^2+1[/imath], the minimum distance is [imath]\leq 6[/imath]. I know that one can show that the distance is [imath]\geq 6[/imath] and therefore is equal to 6. Can someone help me with how to do that, i.e., show that the designed distance is 6? | 2253048 | How to calculate the minimum distance of a cyclic code
Let [imath]C[/imath] be the cyclic code of length [imath]63[/imath] over [imath]\mathbb{F}_2[/imath] that has generator polynomial [imath]g(x)=(x+1)(x^6+x+1)(x^6+x^5+1)[/imath] We are asked to comment on the minimum distance of [imath]C[/imath]. However I don't understand how to find the minimum distance. Can anyone explain how to do this? Thanks in advance! |
2355748 | If [imath]a,b,c[/imath] are distinct positive numbers , show that [imath]\frac{a^8 + b^8 + c^8}{a^3 b^3 c^3} > \frac{1}{a} + \frac{1}{b} + \frac{1}{c}[/imath]
If [imath]a,b,c[/imath] are distinct positive numbers , show that [imath]\frac{a^8 + b^8 + c^8}{a^3 b^3 c^3} > \frac{1}{a} + \frac{1}{b} + \frac{1}{c}.[/imath] I am thinking of Tchebycheff's inequality for this question, but not able to proceed. How do I solve this? | 842712 | Inequality [imath]\frac1a + \frac1b + \frac1c \leq \frac{a^8+b^8+c^8}{a^3b^3c^3}[/imath]
Let [imath]a,b,c[/imath] be positive reals . Prove that [imath]\displaystyle \frac1a + \frac1b + \frac1c \leq \frac{a^8+b^8+c^8}{a^3b^3c^3}[/imath] I found this one in a book, no hints mentioned, but marked as very hard. I can't make any progress... |
2355529 | Induction divisibility
Show that for any k∈N, if [imath]2^{3k-1} +5\cdot3^k[/imath] is divisible by 11, then [imath]2^{3(k+2)-1} +5\cdot3^{k+2}[/imath] is divisible by 11. Base Case: k = 2 [imath]\implies 2^{3\cdot 2-1} +5\cdot3^2 =77[/imath] Since 77 is divisible by 11, the base case holds true. I.H.: Assume it is true for [imath]2^{3x-1} +5\cdot3^x[/imath], x∈N. Then, for k=x+1: [imath]\implies 2^{3(x+1)-1} +5\cdot3^{x+1}[/imath] [imath]\implies 2^{3x-1}\cdot2^3 +5\cdot(3^x\cdot3^1)[/imath] [imath]\implies 2^{3x-1}\cdot8 +15\cdot3^x[/imath] How do I proceed from what I have so far? Also, what variable do I use, since k has already been used?
Question: Part a: Prove that for any [imath] b\in \Bbb N,[/imath] if [imath] 2^{3b -1} + 5 . 3^{b}[/imath] is divisible by [imath] 11[/imath], then [imath] 2^{3(b+2) -1} + 5 . 3^{b+2}[/imath] is divisible by [imath]11[/imath]. Part b: Is statement 1 or statement 2 true? Explain answer. For any odd number [imath]a\in \Bbb N [/imath], [imath] 2^{3a -1} + 5 . 3^{a}[/imath] is divisible by [imath] 11[/imath] For any even number [imath]a\in \Bbb N [/imath], [imath] 2^{3a -1} + 5 . 3^{a}[/imath] is divisible by [imath] 11[/imath] My attempt: Part a: I am not sure what the base case should be. Induction hypothesis: Assume [imath] 2^{3k -1} + 5 . 3^{k}[/imath] is divisible by [imath] 11[/imath], for some [imath]k[/imath] natural number. I am not sure how to prove true for [imath] 2^{3(k+2) -1} + 5 . 3^{k+2}[/imath]. Part b: Would statement 2 be correct since the expression is divisible by [imath]11[/imath] when [imath] a=2[/imath] |
2355619 | Determine a basis of the image and the kernel of [imath]g[/imath].
[imath]g : \mathcal{M}_2 (\mathbb{R}) \rightarrow \mathbb{R}[/imath] [imath] A \rightarrow g(A) = \operatorname{tr}(A)[/imath] Determine the basis of [imath]\ker g[/imath] Determine the basis of [imath]\operatorname{im} g [/imath] Determine the matrix [imath]M (g, \mathcal{B'}, \mathcal{B''})[/imath] with respect to the standard canonical bases [imath]\mathcal{B'}[/imath] and [imath]\mathcal{B''}[/imath] of [imath]\mathbb{R}[/imath] and [imath]\mathcal{M}_2(\mathbb{R})[/imath] respectively. 1. [imath] A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \ker g \iff \operatorname{tr}(A) = a + d = 0 \iff a = -d [/imath] and [imath]A = \begin{pmatrix} a & b \\ c & -a \end{pmatrix} = a\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} + bE_{12} + cE_{21} [/imath]. A basis of [imath]\ker g[/imath] is [imath](\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, E_{12}, E_{21})[/imath]. 2. [imath]\operatorname{Im}g = \operatorname{Vect}\left\langle g \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, g(E_{12}), g(E_{21})\right\rangle = \operatorname{Vect} \langle 0,0,0\rangle = \operatorname{Vect}\langle 0\rangle[/imath] A basis of [imath]\operatorname{im}g [/imath] is [imath](0)[/imath]. 3. [imath]M (g, \mathcal{B'}, \mathcal{B''}) = \begin{pmatrix} 1 & 0 & 0 & 1 \end{pmatrix}[/imath] Are my answers correct? | 2355401 | Matrix [imath]M_1 = M(f', \mathcal{B}, \mathcal{B '})[/imath] of [imath]f[/imath]
[imath]f: \mathbb{R}^5 \rightarrow \mathcal{M}_{2}(\mathbb{R}) [/imath] [imath] (x_1,x_2,x_3,x_4,x_5) \rightarrow \begin{pmatrix} x_1 & x_2 \\ x_1 + x_3 & x_5 \end{pmatrix}[/imath] Determine the basis of [imath]Kerf[/imath] Determine a basis of [imath]Imf[/imath] Determine the matrix [imath]M_1 = M(f', \mathcal{B}, \mathcal{B '})[/imath] of [imath]f[/imath] 1. [imath]x = (x_1,x_2,x_3,x_4,x_5) \in Kerf \iff f(x_1,x_2,x_3,x_4,x_5) = \begin{pmatrix} x_1 & x_2 \\ x_1 + x_3 & x_5 \end{pmatrix} =\begin{pmatrix} 0 & 0 \\ 0 & 0\end{pmatrix}[/imath] I got [imath](x_1,x_2,x_3,x_4,x_5) = (0,0,0,x_4,0)[/imath]. A basis of [imath]Kerf[/imath] is [imath](0,0,0,1,0)[/imath]. 2. [imath]Imf = Vect\langle f(e_1), f(e_2), f(e_3), f(e_4), f(e_5)\rangle = \langle E_{11}, E_{12}, E_{21}, E_{22}\rangle[/imath] So the family [imath](E_{11}, E_{12}, E_{21}, E_{22})[/imath] is a basis, where [imath]E_{ij}[/imath] is the elementary matrix. The coordinate [imath]x_4[/imath] does not appear in the matrix. Are my answers correct? I need help with question 3, too. Thank you.
2355717 | For an odd prime [imath]p, \exists[/imath] nonzero [imath]a,b[/imath] such that [imath]a^2 + ab + b^2 \equiv 0 \pmod{p} \implies \exists x,y [/imath] such that [imath]x^2 + xy + y^2 = p[/imath]?
These are some examples. [imath]2^2 + 2 \cdot 4 + 4^2 = 4( 1^2 + 1 \cdot 2 + 2^2) = 4 \cdot 7[/imath] [imath]3^2 + 3 \cdot 9 + 9^2 = 9( 1^2 + 1 \cdot 3 + 3^2) = 9 \cdot 13[/imath] [imath]7^2 + 7 \cdot 11 + 11^2 = 13( 2^2 + 2 \cdot 3 + 3^2) = 13 \cdot 19[/imath] [imath]5^2 + 5 \cdot 25 + 25^2 = 25( 1^2 + 1 \cdot 5 + 5^2) = 25 \cdot 31[/imath] [imath]10^2 + 10 \cdot 26 + 26^2 = 28( 3^2 + 3 \cdot 4 + 4^2) = 28 \cdot 37[/imath] [imath]6^2 + 6 \cdot 36 + 36^2 = 36( 1^2 + 1 \cdot 6 + 6^2) = 36 \cdot 43[/imath] [imath]13^2 + 13 \cdot 47 + 47^2 = 49( 4^2 + 4 \cdot 5 + 5^2) = 49 \cdot 61[/imath] [imath]29^2 + 29 \cdot 37 + 37^2 = 49( 2^2 + 2 \cdot 7 + 7^2) = 49 \cdot 67[/imath] How can I prove this? Please help me, please... (ㅠㅠ) | 2352532 | For an odd prime [imath]p, \exists a,b[/imath] such that [imath]a^2 + ab + b^2 \equiv 0 \pmod{p} \iff \exists x,y [/imath] such that [imath]x^2 + xy + y^2 = p[/imath]?
We know that [imath]p=x^2+xy+y^2[/imath] if and only if [imath]p \equiv 1 \pmod {3}[/imath]. But I need [imath]a^2+ab+b^2 \equiv 0 \pmod{p} [/imath] if and only if [imath]p \equiv 1 \pmod {3}[/imath], a more general theorem. I think that I should prove the following: [imath]\exists a,b[/imath] such that [imath]a^2 + ab + b^2 \equiv 0 \pmod{p} \iff \exists x,y [/imath] such that [imath]x^2 + xy + y^2 = p[/imath]. Easy to prove ([imath]\Longleftarrow[/imath]), but not ([imath]\implies[/imath]). These are some examples. [imath]2^2 + 2 \cdot 4 + 4^2 = 4( 1^2 + 1 \cdot 2 + 2^2) = 4 \cdot 7[/imath] [imath]3^2 + 3 \cdot 9 + 9^2 = 9( 1^2 + 1 \cdot 3 + 3^2) = 9 \cdot 13[/imath] [imath]7^2 + 7 \cdot 11 + 11^2 = 13( 2^2 + 2 \cdot 3 + 3^2) = 13 \cdot 19[/imath] [imath]5^2 + 5 \cdot 25 + 25^2 = 25( 1^2 + 1 \cdot 5 + 5^2) = 25 \cdot 31[/imath] [imath]10^2 + 10 \cdot 26 + 26^2 = 28( 3^2 + 3 \cdot 4 + 4^2) = 28 \cdot 37[/imath] [imath]6^2 + 6 \cdot 36 + 36^2 = 36( 1^2 + 1 \cdot 6 + 6^2) = 36 \cdot 43[/imath] [imath]13^2 + 13 \cdot 47 + 47^2 = 49( 4^2 + 4 \cdot 5 + 5^2) = 49 \cdot 61[/imath] [imath]29^2 + 29 \cdot 37 + 37^2 = 49( 2^2 + 2 \cdot 7 + 7^2) = 49 \cdot 67[/imath]
2356145 | Proving [imath]|V(a_0,...,a_{n-1})|=\Pi_{0\leq i<j<n-1}(a_j-a_i)[/imath]
Note that this question is similar to but slightly different from this. If you believe the answer from that question could be applicable, please explain why it still works for a matrix that has been transposed. The matrix [imath]V[/imath] is defined as: [imath]V=\begin{pmatrix} 1 & a_0 & a_0^2 & \cdots & a_0^{n-1} \\ 1 & a_1 & a_1^2 & \cdots & a_1^{n-1} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 1 & a_{n-1} & a_{n-1}^2 & \cdots & a_{n-1}^{n-1} \end{pmatrix}[/imath] such that [imath]a_0,a_1,..,a_{n-1}\in \mathbb{C}[/imath]. Prove that: [imath]|V(a_0,...,a_{n-1})|=\Pi_{0\leq i<j<n-1}(a_j-a_i)[/imath] For example: [imath]V(3,2,4)=\begin{pmatrix} 1&3&9 \\ 1&2&4 \\ 1&4&16 \end{pmatrix}[/imath] such that: [imath]\begin{vmatrix} 1&3&9 \\ 1&2&4 \\ 1&4&16 \end{vmatrix}=(4-2)(4-3)(2-3)[/imath] Use the following steps in your proof: -[imath]C_n-a_0C_{n-1}\rightarrow C_n[/imath] -[imath]C_{n-1}-a_0C_{n-2}\rightarrow C_{n-1}[/imath] -Until [imath]C_2-a_0C_1\rightarrow C_2[/imath] -Use induction/recursion to arrive at a solution | 319420 | Prove determinant of a matrix [imath]=\prod_{j<i}(a_i-a_j)[/imath]
Prove that [imath]\begin{vmatrix} 1 & 1 & \cdots & 1 \\ a_0 & a_1 & \cdots & a_n \\ a_0^2 & a_1^2 & \cdots & a_n^2 \\ \vdots & \vdots & \ddots & \vdots \\ a_0^n & a_1^n & \cdots & a_n^n \\\end{vmatrix}=\prod_{j<i}(a_i-a_j)[/imath]. I have trouble with this, I thought it'd be doable with Laplace's theory but I fail to understand it. |
2356146 | Show that a map is closed
I have the following exercise from "Munkres, Topology" Show that if [imath]f: X \rightarrow Y[/imath] is continuous, where [imath]X[/imath] is a compact and [imath]Y[/imath] is Hausdorff, then [imath]f[/imath] is a closed map ([imath]f[/imath] carries closed sets to closed sets) Proof.: If I take [imath]A \subset X[/imath] closed, then [imath]A[/imath] is compact. Since [imath]f[/imath] is continuous and [imath]A[/imath] compact, then [imath]f(A)[/imath] is a compact set. But [imath]f(A) \subset Y[/imath], with [imath]Y[/imath] Hausdorff, so [imath]f(A)[/imath] is a closed set and [imath]f[/imath] maps closed sets to closed sets. [imath]\square[/imath] Is it ok? | 1327685 | Continuous function from a compact space to a Hausdorff space is a closed function
We have that [imath]f\colon X\to Y[/imath] is a continuous function, [imath]X[/imath] is a compact space and [imath]Y[/imath] is a Hausdorff space. Prove that [imath]f[/imath] is a closed function. |
2356325 | Groups of even order have an element whose order is 2.
I was trying to prove this question. Q. If G is a group of even order, prove that it has an element a, such that a[imath]\neq[/imath]e satisfying [imath]a^2[/imath]=e. I tried solving it using Principle of mathematical induction. So, Let O(G)=2m, where m[imath]\in[/imath] [imath]N[/imath]. We proceed by induction on m. For m=1, O(G)=2, [imath]\therefore[/imath] G contains two elements. G=[imath]\{{e,a}\}[/imath]. Now, e has order 1. So, a can not have order 1. [imath]\therefore[/imath] a has order 2. So, it true for m=1. Now, let it be true for m=k. Then, we have a group G of order 2k, where [imath]\exists[/imath] an element a[imath]\in[/imath] G, whose order is 2. We wish to prove it for m=k+1. Then G has order 2k+2. I don't know how to proceed further. | 2494910 | prove existence of a, such that a^2 = e
Currently we have a group [imath](G,*)[/imath] whose order is [imath]2k[/imath]. Prove the existence of an element [imath]a \neq e[/imath] in the group such that [imath]a^2 = e[/imath]. I am currently out of ideas; can you give me any hint? Thanks.
2356406 | A Gamma function equality
This was an equality written on p. 15 of Titchmarsh: [imath] \int_0^{\infty} \frac{ \sin y }{y^{s+1}} \,dy = - \Gamma(-s) \sin \frac{1}{2} s \pi [/imath] How is this true? The RHS, by definition, is equal to [imath] \int_0^{\infty} \frac{ e^{-y} \sin \frac{1}{2} s\pi }{y^{s+1}} \,dy [/imath] and I do not see how it equates to the LHS. | 1222832 | How to integrate [imath] \int_0^\infty \sin x \cdot x ^{-1/3} dx[/imath] (using Gamma function)
How can I calculate the following integral: [imath]\int_0^\infty x ^{-\frac{1}{3}}\sin x \, dx[/imath] WolframAlpha gives me [imath] \frac{\pi}{\Gamma\Big(\frac{1}{3}\Big)}[/imath] How does WolframAlpha get this? I don't understand how we can rearrange the formula in order to apply the gamma-function here. Any helpful and detailed hint/answer is appreciated. |
2356435 | Is the sum of two random variables a random variable
This is a question that I have been struggling with for quite some time, but no luck. I'm not asking for a final answer, I need to know how to prove it. The question is: Prove whether or not the sum of two random variables is a random variable. [imath]X_1 + X_2[/imath] | 305989 | How do I show that the sum of two random variables is random variable?
How do I prove the following? If [imath]X[/imath] and [imath]Y[/imath] are random variables on a probability space [imath](\Omega, F, \mathbb P)[/imath], then so is [imath]X+Y[/imath]. The definition of a random variable is a function [imath]X: \Omega \to \mathbb R[/imath], with the property that [imath]\{\omega\in\Omega: X(\omega)\leq x\}\in F[/imath], for each [imath]x\in\mathbb R[/imath]. Furthermore, how to approach [imath]X+Y[/imath] and [imath]\min\{X, Y\}[/imath]? |
2354192 | Prove [imath]B-(B-A) = A \cap B[/imath]
English is not my mother language so please forgive any mistakes. I'm not really sure how to approach this question. This is what I have so far: Consider [imath]x \in B-(B-A)[/imath]. This means [imath]x \in B[/imath] and [imath]x\notin(B-A)[/imath]. Therefore, [imath]x \in B[/imath] and ([imath]x \notin B[/imath] or [imath]x \in A[/imath]). Since [imath]x \in B[/imath], [imath]x \in B[/imath] and [imath]x \in A[/imath]. So [imath]x \in A \cap B[/imath]. However, if I remember correctly, I am supposed to prove the equality both ways, like this: 1) [imath]B-(B-A) \subset A \cap B[/imath] 2) [imath]A \cap B \subset B-(B-A)[/imath] What I just wrote was Part 1 (although I don't know if it's correct). My issue is, I don't really know how to prove Part 2.
Please prove that [imath]A\cap B=A\setminus(A\setminus B)[/imath]; this really gave me sleepless nights. I tried using the set intersection properties but I still got confused.
2356813 | How to prove this strange limit?
Let [imath]f:[0,\infty)\to\mathbb R[/imath] be a function in [imath]C^2[/imath] such that [imath]\lim_{x\to\infty} (f(x)+f'(x)+f''(x)) = a.[/imath] Prove that [imath]\lim_{x\to\infty} f(x)=a[/imath] | 726027 | If [imath]f(x) + f'(x) + f''(x) \to A[/imath] as [imath]x \to \infty[/imath], then show that [imath]f(x) \to A[/imath] as [imath]x \to \infty[/imath]
This problem is an extension to the simpler problem which deals with [imath]f(x) + f'(x) \to A[/imath] as [imath]x \to \infty[/imath] (see problem 2 on my blog). If [imath]f[/imath] is twice continuously differentiable in some interval [imath](a, \infty)[/imath] and [imath]f(x) + f'(x) + f''(x) \to A[/imath] as [imath]x \to \infty[/imath] then show that [imath]f(x) \to A[/imath] as [imath]x \to \infty[/imath]. However, the approach based on considering sign of [imath]f'(x)[/imath] for large [imath]x[/imath] (which applies to the simpler problem in the blog) does not seem to apply here. Any hints on this problem? I believe that a similar generalization concerning expression [imath]\sum\limits_{k = 0}^{n}f^{(k)}(x) \to A[/imath] is also true, but I don't have a clue to prove the general result. |
2357676 | What is the difference between [imath]f(x,y)[/imath] and [imath]f(x,y(x))[/imath]?
What is the difference between [imath]f(x,y)[/imath] and [imath]f(x,y(x))[/imath]? Are both functions of two variables, [imath]f:\mathbb R^2 \rightarrow \mathbb R[/imath]? Update: The possible duplicate question is regarding an ODE. | 2356514 | Is [imath]f(x,y(x))[/imath] a function of two variables? If so, how is that possible?
I don't know how to interpret this function: Let [imath]D[/imath] be an open subset of [imath]\mathbb{R}^2[/imath] with a continuous function [imath]f:D \rightarrow \mathbb{R}[/imath] and \begin{align} y'(x)=f(x,y(x)) \end{align} is a continuous, explicit first-order differential equation defined on [imath]D[/imath]. From [imath]f:\mathbb{R}^2 \rightarrow \mathbb{R}[/imath] I interpret [imath]f[/imath] as a two variable function, [imath]f(x,y)[/imath]. But here [imath]y[/imath] is a function of [imath]x[/imath], so isn't [imath]f(x,y(x))[/imath] a function of one variable, [imath]f:\mathbb{R} \rightarrow \mathbb{R}[/imath]? |
2358102 | What is wrong in my [imath]i^i=1[/imath] proof?
I wanted to calculate [imath]i^i[/imath] ([imath]i[/imath] is the imaginary unit), and I calculated it this way: [imath]i^i = (i^4)^{i/4} = 1^{i/4} = 1[/imath] (because [imath]1^x = 1[/imath] for any number [imath]x[/imath]). So, I thought that one value of [imath]i^i[/imath] is [imath]1[/imath]. Though, in Wolfram Alpha, it says the solutions are [imath]e^{2n\pi - \pi/2}[/imath]. They don't include [imath]1[/imath]. What's wrong with my proof? | 1642212 | Why isn't [imath]e^{2\pi xi}=1[/imath] true for all [imath]x[/imath]?
We know that [imath]e^{\pi i}+1=0[/imath]and [imath]e^{\pi i}=-1[/imath] So[imath](e^{\pi i})^2=(-1)^2[/imath][imath]e^{2\pi i}=1[/imath] Because [imath]1[/imath] is the multiplicative identity,[imath](e^{2\pi i})^x=1^x[/imath][imath]e^{2\pi xi} =1[/imath]should also hold true. But we also know that [imath]e^{xi}=\cos(x)+i\sin(x)[/imath]and so[imath]e^{2\pi xi}=\cos(2\pi x)+i\sin(2\pi x)[/imath]which does not equal 1 for all values of [imath]x[/imath]. Now I realize I probably didn't break math, so I must be making an invalid assumption. What is wrong with my reasoning? |
2357665 | Quotient Space Hausdorff
Why is [imath]\mathbb{R}/\mathbb{Z}[/imath] a Hausdorff space, if we assume that [imath]\mathbb{R}[/imath] has the canonical topology and [imath]\mathbb{Z}[/imath] the induced subspace topology? More generally, which requirements on a topological space [imath]S[/imath] and its subspace [imath]U[/imath] allow us to conclude that [imath]S/U[/imath] is Hausdorff? | 91639 | [imath]X/{\sim}[/imath] is Hausdorff if and only if [imath]\sim[/imath] is closed in [imath]X \times X[/imath]
[imath]X[/imath] is a Hausdorff space and [imath]\sim[/imath] is an equivalence relation. If the quotient map is open, then [imath]X/{\sim}[/imath] is a Hausdorff space if and only if [imath]\sim[/imath] is a closed subset of the product space [imath]X \times X[/imath]. Necessity is obvious, but I don't know how to prove the other side. That is, [imath]\sim[/imath] is a closed subset of the product space [imath]X \times X[/imath] [imath]\Rightarrow[/imath] [imath]X/{\sim}[/imath] is a Hausdorff space. Any advices and comments will be appreciated. |
2357976 | Mathematical logic - Adequate sets of connectives.
How can I prove that [imath]\{\sim,\leftrightarrow \}[/imath] is not an adequate set of connectives? Please help me with this exercise, I cannot prove it. | 191146 | Prove that a set of connectives is inadequate
It is relatively easy to prove that a given set of connectives is adequate. It suffices to show that the standard connectives can be built from the given set. It is proven that the set [imath]\{\lor, \land, \neg\}[/imath] is adequate, and from that set it can be inferred (applying De Morgan's laws and such) that [imath]\{\lor, \neg\}[/imath], [imath]\{\land, \neg\}[/imath] and [imath]\{\to, \neg\}[/imath] are also adequate. Nevertheless, I'm stuck trying to understand how to prove that a given set of connectives is inadequate. I know I have to prove that a standard connective can't be built using only the connectives of the given set, but I can't figure out how to do it. FYI, I'm trying to prove that [imath]\{\lor, \land\}[/imath] and [imath]\{\leftrightarrow, \neg\}[/imath] are inadequate sets of connectives. Thanks in advance. |
2358579 | Prove [imath]a_n[/imath] is a Cauchy-sequence, with [imath]a_0 \in \Bbb R[/imath] and [imath]a_{n+1}=f(a_n)[/imath]
Let [imath]f: \Bbb R \to \Bbb R[/imath] be a differentiable function with [imath]m=\sup\{|f'(x)| : x \in \Bbb R\} <1[/imath]. Let [imath]a_0 \in \Bbb R[/imath] and define [imath]a_{n+1} =f(a_n)[/imath] for [imath]n=0,1,2,\ldots[/imath]. Prove the sequence [imath](a_n)_{n \geq 0}[/imath] is a Cauchy sequence. So we have to prove that [imath]\forall \epsilon >0 ,\exists N \in \Bbb N[/imath] such that when [imath]n,m >N[/imath] then [imath]d(a_n,a_m) < \epsilon[/imath], but I don't really know how to continue from here. | 1536984 | If [imath]|f'(c)| \leq M[/imath] and [imath] M < 1[/imath], the sequence defined by [imath]a_{n+1} = f(a_n)[/imath] converges
Let [imath]f[/imath] be a real function such that [imath]f[/imath] is differentiable and [imath]|f'(c)| \leq M<1[/imath]. Let [imath]a_1[/imath] be a real number and define [imath]a_{n+1} = f(a_n)[/imath]. Then [imath](a_n)[/imath] converges. My attempt: By the mean value theorem, we have that [imath]|a_{n+1} - a_n| = |f(a_n) - f(a_{n-1})| \leq M|a_n - a_{n-1}|[/imath]. Then, [imath]|a_{n+1} - a_n| \leq M^{n-1}|a_2 - a_1|[/imath]. As [imath]M < 1[/imath], we know that [imath]M^{n-1}[/imath] converges to [imath]0[/imath]. |
2359043 | Show that a list of polynomials is a basis of the subspace _U_ of [imath]P_3[/imath](R)
The question is: Show that 1, [imath](x - 5)^2[/imath], [imath](x-5)^3[/imath] is a basis of the subspace U of [imath]P_3(R)[/imath] defined by U = {p ∈ [imath]P_3(R)[/imath] : p'(5) = 0}. My questions/confusions will be distinguished by bolding the font. I've found this question posted on here before, but my specific confusions are different; and the answers weren't totally satisfactory. Any help would be appreciated. I see that it is a linearly independent set simply because you can't write any of the polynomials in terms of one another; however, the solution is kind of confusing. The solution says that the list's linear independence is clear, so it's easy to tell that dim U ≥ 3. Question 1: Why should it be apparent that the dimension is ≥ 3 at this point? Since we know it's linearly independent, I thought about establishing the span, to see if every element in U can be written in terms of them, but I don't know how to establish that every vector in U can be written as a linear combo of them. Further, the solution says that dim U ≤ dim [imath]P_3(R)[/imath] = 4. However, dim U cannot be 4 because otherwise, when we extend a basis of U to [imath]P_3(R)[/imath], we would get a list of length greater than 4. Can the bolded also be explained/shown to me? Intuitively, it makes sense that the subspace doesn't have all of the vectors in [imath]P_3(R)[/imath], due to the p'(5) = 0 restriction. I just can't wrap my head around that statement. | 1620512 | Why is [imath]1, (x-5)^2, (x-5)^3[/imath] a basis of [imath]U=\{p \in \mathcal P_3(\mathbb R) \mid p'(5)=0\}[/imath]?
[imath]\mathcal P_3(\mathbb R)[/imath] is the set of polynomials with degree at most [imath]3[/imath] with coefficients in [imath]\mathbb R[/imath]. In the last paragraph it says [imath]U[/imath] cannot be extended to a basis of [imath]\mathcal P_3(\mathbb R)[/imath]. I do not understand why not. Why would we get a list with length greater than [imath]4[/imath]? Why can't we add [imath](x-5)[/imath] to the list? From Linear Algebra Done Right |
2359410 | Does [imath]\lim_{n\rightarrow\infty} (\frac{1}{n+a} + \frac{1}{n+2a} + \cdots + \frac{1}{n+na}) [/imath] converge?
I want to check whether [imath]\lim_{n\rightarrow\infty} (\frac{1}{n+a} + \frac{1}{n+2a} + \cdots + \frac{1}{n+na}) [/imath] converges or not (here [imath]a[/imath] is a positive constant). If it converges, how do I find the value it converges to? And if not, why? | 1237694 | Let a be a positive number. Then [imath]\lim_{n \to \infty}[\frac{1}{a+n}+\frac{1}{2a+n}+\cdots +\frac{1}{na+n}][/imath]
Problem: Let [imath]a[/imath] be a positive number. Evaluate [imath]\lim_{n \to \infty}\left[\frac{1}{a+n}+\frac{1}{2a+n}+\cdots +\frac{1}{na+n}\right][/imath]. Please suggest how to proceed in such limit problems; it will be of great help, thanks. |
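A possible hint, added here as an editorial note and not part of either post above: reading the sum as [imath]\sum_{k=1}^{n}\frac{1}{n+ka}[/imath], it is a Riemann sum, [imath]\sum_{k=1}^{n}\frac{1}{n+ka}=\frac{1}{n}\sum_{k=1}^{n}\frac{1}{1+ak/n}\to\int_0^1\frac{dx}{1+ax}=\frac{\ln(1+a)}{a}[/imath] as [imath]n\to\infty[/imath], so the limit exists and equals [imath]\frac{\ln(1+a)}{a}[/imath].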
2359442 | Proving the existence of set
Let [imath]A[/imath] be a subset of [imath]\mathbb N[/imath] containing 5 elements, such that the sum of any three elements is prime. Does such an [imath]A[/imath] exist? Please give some hints. | 2054023 | Does there exist a set of exactly five positive integers such that the sum of any three distinct elements is prime?
Does there exist [imath]A \subseteq \mathbb N [/imath] such that [imath]|A|=5[/imath] and sum of any [imath]3[/imath] distinct elements of [imath]A[/imath] is a prime number ? |
2359480 | Prove the following statement: if [imath]ab[/imath] is odd, then [imath]a^2 + b^2[/imath] is even
I am having a hard time with this one although it looks easy. Prove the following statement: Suppose a,b are integers. If [imath]ab[/imath] is odd, then [imath]a^2 + b^2[/imath] is even. I know I might need to use the fact that [imath]a^2 + b^2 = (a+b)^2 - 2ab[/imath]. Can someone please tell me where to start? | 992331 | Suppose [imath]a,b \in \mathbb{Z}[/imath]. If [imath]ab[/imath] is odd, then [imath]a^{2} + b^{2}[/imath] is even.
Suppose [imath]a,b \in \mathbb{Z}[/imath]. If [imath]ab[/imath] is odd, then [imath]a^{2} + b^{2}[/imath] is even. I'm stuck on the best way to get this started. My thinking is that I could use cases, i.e. Case 1: a is even and b is odd; Case 2: a is odd and b is even; Case 3: a is odd and b is odd. Would this be my best approach? Or is there an easier way to look at it? Thanks. |
75031 | Solving quadratic Diophantine equations: [imath]5n^2+2n+1=y^2[/imath]
I hope it's not inappropriate asking this here. I stumbled upon this site recently while researching a Project Euler problem, now I figure I'd use it to ask about a recurring theme in these problems: quadratic Diophantine equations. I've recently boiled down another Project Euler problem (I won't say which, it should be unrecognizable from the original problem and should probably be kept that way) to the following Diophantine equation: [imath]5n^2+2n+1=y^2[/imath] I've been trying to use http://www.alpertron.com.ar/METHODS.HTM as a reference, but I seem to get lost in a sea of constants. And the steps that program on the bottom of that page takes don't seem to match what he says to do. I'd rather be able to understand the steps I'm taking anyway rather than just copying a method. I'm interested in all positive integer values of n and I'm more or less given a solution exists with n=2. How would I go about finding the rest of the solutions? And how would I solve these kinds of equations in general? If that last part is too complex a question to be handled here, is there any other resource that might help? As far as my current level of math, I have a degree in engineering (and helped a math major with some courses I never took myself) and I've already worked on Project Euler problems involving Pell's equations and continued fraction expansions of square roots. | 1776734 | How to find integer solutions to [imath]M^2=5N^2+2N+1[/imath]?
My number theory is terrible so I don't know what "class" of problem this secretly is. I'm looking for all positive integer solutions to the equation: [imath]M^2=5N^2+2N+1[/imath] That is, I want positive integer [imath]M[/imath] and [imath]N[/imath] to make the above true. I've got the obvious solution ([imath]N=0[/imath], [imath]M=1[/imath]) but I don't know how to go about getting more solutions. It has been suggested to me that there should be infinitely many solutions, and I would like to find them all. I could transform it to look like Pell's equation by completing the square on the right, but it won't have integer coefficients (or you could multiply it through by the denominators, but then it wouldn't look like Pell's equation), so I don't think that helps much. I don't know enough number theory to guess at other things, but I'm happy to read something on this topic. |
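A brute-force scan, added here as an illustrative sketch (not from either post; it assumes Python 3.8+ for `math.isqrt`), that finds the first few solutions of [imath]M^2=5N^2+2N+1[/imath]; the Pell-type analysis is still what generates the full family of solutions.

```python
from math import isqrt

# Scan small N and report those for which 5*N^2 + 2*N + 1 is a perfect square.
for n in range(0, 200_000):
    v = 5 * n * n + 2 * n + 1
    r = isqrt(v)
    if r * r == v:
        print(n, r)   # e.g. (0, 1), (2, 5), (15, 34), (104, 233), ...
```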
2359459 | Sum of square root of [imath]n[/imath] primes as a nested square root
[imath]\sqrt{2}+\sqrt{3}=\sqrt{5+\sqrt{24}}[/imath] [imath]\sqrt{2}+\sqrt{3}+\sqrt{5}=\sqrt{14+\sqrt{140+\sqrt{4096+\sqrt{8847360}}}}[/imath] These are two examples of how a sum of [imath]k[/imath] square roots of primes (not necessarily consecutive) can be represented as nested square roots. Is there any way to find all the terms of the nested roots given a set of primes? How do I compute them individually? I.e., given [imath]A = [2, 3, 5][/imath] representing [imath]\sqrt{2}+\sqrt{3}+\sqrt{5}[/imath], how do I get [imath]B = [14,140, 4096, 8847360][/imath] representing [imath]\sqrt{14+\sqrt{140+\sqrt{4096+\sqrt{8847360}}}}[/imath]? The above are just examples; how do I compute this for an arbitrary number of primes? What if the primes are given as [imath]A = [a_0, a_1, ..., a_n][/imath]? How can I compute the nested roots? | 720114 | Converting sums of square-roots to nested square-roots
When solving different equations, I have realised, that some roots containing only arithmetic operations and square roots (4th, 8th roots too, because they can be represented using only square roots) can be converted to nested square roots form. Examples (these are roots of equations of 2nd, 4th, 4th and 8th degree): [imath]\sqrt{2}+\sqrt{3}=\sqrt{5+\sqrt{24}}[/imath] [imath]\sqrt{2}+\sqrt{3}+\sqrt{6}=\sqrt{15+\sqrt{160+\sqrt{6912+\sqrt{18874368}}}}[/imath] [imath]1+\sqrt{2}+\sqrt{3}+\sqrt{6}=\sqrt{21+\sqrt{413+\sqrt{4656+\sqrt{16588800}}}}[/imath] [imath]\sqrt{2}+\sqrt{3}+\sqrt{5}=\sqrt{14+\sqrt{140+\sqrt{4096+\sqrt{8847360}}}}[/imath] However, I have failed to convert to such form following root (8th degree equation): [imath]3+\sqrt{2}+\sqrt{3}+\sqrt{5}[/imath] Performing any operations with it, number of square roots inside increases, what makes me think that converting that root is impossible. So, question: Can it be done with that root and with what roots in general? Some forms I was able to get: [imath]\sqrt{19+6 \sqrt{2}+6 \sqrt{3}+6 \sqrt{5}+2 \sqrt{6}+2 \sqrt{10}+2 \sqrt{15}}[/imath] [imath]\sqrt{19+2\left(\sqrt{33+6 \sqrt{30}}+\sqrt{37+6 \sqrt{30}}+\sqrt{51+6 \sqrt{30}}\right)}[/imath] If one don't know how I got those expressions, here you are an example. [imath]\sqrt{2}+\sqrt{3}+\sqrt{5}=\sqrt{\left(\sqrt{2}+\sqrt{3}+\sqrt{5}\right)^2}=[/imath] [imath]=\sqrt{10+2 \left(\sqrt{15}+\sqrt{6}+\sqrt{10}\right)}=\sqrt{10+2 \left(\sqrt{15}+\sqrt{\left(\sqrt{6}+\sqrt{10}\right)^2}\right)}=[/imath] [imath]=\sqrt{10+2 \left(\sqrt{15}+\sqrt{16+4 \sqrt{15}}\right)}=\sqrt{10+2 \left(\sqrt{15}-a+a+\sqrt{16+4 \sqrt{15}}\right)}=[/imath] [imath]=\sqrt{10+2a+2 \left(\sqrt{\left(\sqrt{15}-a\right)^2}+\sqrt{16+4 \sqrt{15}}\right)}=[/imath] [imath]=\sqrt{10+2a+2 \left(\sqrt{15+a^2-2a \sqrt{15}}+\sqrt{16+4 \sqrt{15}}\right)}=[/imath] [imath][2a=4 \Rightarrow a=2][/imath] [imath]=\sqrt{14+2 \left(\sqrt{19-4\sqrt{15}}+\sqrt{16+4 \sqrt{15}}\right)}=[/imath] [imath]=\sqrt{14+2 \sqrt{\left(\sqrt{19-4\sqrt{15}}+\sqrt{16+4 \sqrt{15}}\right)^2}}=[/imath] [imath]=\sqrt{14+2 \sqrt{35+4 \sqrt{16+3 \sqrt{15}}}}=\sqrt{14+\sqrt{140+\sqrt{4096+\sqrt{8847360}}}}[/imath] |
2359661 | How to find indefinite integral of [imath]\sin(x^3)[/imath]
I obtained [imath]\int\sin(x^3)dx=\frac{\cos(x^3)}{3x^2}[/imath] as my answer, which is wrong. Is this somehow related to the indefinite integral of [imath]\frac{f'(x)}{f(x)}[/imath] being [imath]\ln(f(x))[/imath]? | 1328151 | Integral of [imath]\sin (x^3)dx[/imath]
[imath]\int \sin (x^3)dx[/imath] I have tried some substitutions, but I haven't reached the goal... Can you help me? |
2358533 | A problem about coset and transversal.
For a finite group [imath]G[/imath] and its subgroup [imath]H[/imath], show that there is a subset [imath]T[/imath] which is simultaneously a transversal for the left and right cosets of [imath]H[/imath]. Further, if we consider any two partitions of [imath]G[/imath] whose sets have the same cardinality, can we find a transversal for both of them? I got the answer, as the comments tell me how to do it: consider the left cosets as points on the left and the right cosets as points on the right, with two points adjacent when they are on different sides and have common elements. Then I realized that if there exists a complete matching, then I can prove it. The condition of Hall's theorem is easy to see, since its failure would lead to a contradiction of the cardinality, as they are partitions into sets of the same cardinality. However, when it comes to the situation of infinite cardinality of [imath]G[/imath], what can we say about it? | 268219 | Mutual set of representatives for left and right cosets: what about infinite groups?
Let [imath]G[/imath] be a group and [imath]H[/imath] a subgroup of [imath]G[/imath]. If [imath]G[/imath] is finite, then according to Philip Hall's "marriage theorem" there is a left transversal [imath]T[/imath] of [imath]H[/imath] in [imath]G[/imath] (that is, [imath]T[/imath] contains precisely one element from each left coset of [imath]H[/imath]) such that [imath]T[/imath] is also a right transversal of [imath]H[/imath]. Does this theorem generalize to infinite groups? The original theorem treats only the case where [imath]|H| < \infty[/imath] and [imath][G:H] < \infty[/imath]. What can we say when [imath]|H| = \infty[/imath] and [imath][G:H] = \infty[/imath]? What about [imath]|H| < \infty[/imath] and [imath][G:H] = \infty[/imath]? Also [imath]|H| = \infty[/imath] and [imath][G:H] < \infty[/imath]? |
2359383 | constructing a field of fraction
Let [imath]\mathbb{Z}\left [ i \right ]=\left \{ a+bi:a,b \in \mathbb{Z} \right \}.[/imath] Show that the field of quotients of [imath]\mathbb{Z}\left [ i \right ][/imath] is ring isomorphic to [imath]\mathbb{Q}\left [ i \right ]=\left \{ r+si: r,s \in \mathbb{Q} \right \}[/imath]. Let's try to construct the field of quotients: the field of quotients is the smallest field containing the integral domain. Try: [imath]F=\left \{ \frac{a+bi}{c+di} : a+bi, c+di \in \mathbb{Z}\left [ i \right ]\right \}[/imath]. I never liked this sort of question involving function construction. I wish I could obtain a bit of help here to get me further. Thanks in advance. | 665348 | Prove that the Gaussian rationals is the field of fractions of the Gaussian integers
I'm looking to prove that [imath]\Bbb Q[i] = \{ p + qi : p, q \in \Bbb Q \}[/imath] is the field of fractions of [imath]\Bbb Z[i] = \{p + qi : p, q \in Z\}[/imath]. I am familiar with the definition of a field of fractions. For example, I understand that if one has an integral domain [imath]D[/imath], it can be embedded in a field of fractions [imath]F_D[/imath], and every element of [imath]F_D[/imath] can be written as the quotient of two elements in [imath]D[/imath]. However I have been confused by two questions: (1) If one has a field [imath]F[/imath], and one can show that any element in [imath]F[/imath] can be expressed as the quotient of two elements in an integral domain [imath]D[/imath], does that imply that [imath]F[/imath] is the field of fractions of [imath]D[/imath]? (i.e. Would it suffice in my case to show that every element in [imath]\Bbb Q[i][/imath] can be written as the quotient of 2 elements of [imath]\Bbb Z[i][/imath]?) (2) How can I show that every element in [imath]\Bbb Q[i][/imath] can be written as the quotient of 2 elements of [imath]\Bbb Z[i][/imath]? When I write an element of [imath]\Bbb Q[i][/imath] like [imath]q = \frac ab + \frac cdi[/imath] I get mixed up trying to come up with 2 elements of [imath]\Bbb Z[i][/imath] whose quotient could equal that. I set [imath]z = u+vi[/imath] and [imath]w = x+yi[/imath], equate [imath]q = z/w[/imath], and I end up with equations like this: [imath]a = ux + vy; b = x^2 + y^2; c = vx - uv; d = x^2 + y^2[/imath] This doesn't strike me as the way to go. Possibly there is a way to take advantage of the fact that the elements are equivalence classes, but I'm not seeing it. Thank you very much! |
2359669 | Generalizing the Chinese remainder theorem with fibred products
For a commutative ring [imath]R[/imath] and ideals [imath]I,J\subset R[/imath] with [imath]I+J=R[/imath] we have an isomorphism [imath]R/(I\cap J)\cong R/I\times R/J.[/imath] But I'm wondering how much can be said if [imath]I+J\neq R[/imath]. We still have an injection [imath]R/(I\cap J)\ \hookrightarrow\ R/I\times R/J,[/imath] but this is not surjective in general. In fact we even have an injection into the fibred product [imath]R/(I\cap J)\ \hookrightarrow\ R/I\times_{R/(I+J)}R/J,[/imath] which is the usual Cartesian product if [imath]I+J=R[/imath]. My question is whether this latter map is surjective in general. For the few simple examples I checked it is indeed surjective. However, my failed attempts at proving this seem to suggest that it isn't surjective, but I haven't been able to find a counterexample. | 1553710 | Canonical map [imath]R/(I\cap J)\rightarrow R/I\times _{R/(I+J)} R/J[/imath] is an isomorphism
From this MSE question I understand the canonical map [imath]R/(I\cap J)\rightarrow R/I\times _{R/(I+J)} R/J[/imath] is an isomorphism for [imath]R[/imath] a commutative ring and [imath]I,J[/imath] ideals. I tried proving this directly and I got stuck. My attempt: The canonical map is given by [imath]r+I\cap J\mapsto (r+I,r+J)[/imath]. We need to construct an inverse. Given [imath](r+I,s+J)\in R/I\times _{R/(I+J)} R/J[/imath], we know [imath]r+I+J=s+J+I[/imath], so they represent the same coset in [imath]R/(I+J)[/imath]. Hence [imath]s=r+k[/imath] for some [imath]k\in I+J[/imath]. So we can rewrite our arbitrary element [imath](r+I,s+J)[/imath] in the pullback as [imath](r+I,r+k+J)[/imath]. Now map this pair to [imath]r+I\cap J[/imath]. This is a left inverse because [imath]r+I\cap J\mapsto (r+I,r+J)\mapsto r+I\cap J[/imath] is obviously the identity. It is not a right inverse because [imath](r+I,s+J)=(r+I,r+k+J)\mapsto r+I\cap J\mapsto (r+I,r+J)[/imath] is not the identity. |
2359509 | If [imath]B \setminus A[/imath] is closed under multiplication then [imath]A[/imath] is integrally closed in [imath]B[/imath].
I am trying to solve exercise 5 of the chapter "Integral dependence and valuations" from the book of Atiyah and Macdonald, "Introduction to Commutative Algebra". Let [imath]A[/imath] be a subring of [imath]B[/imath] such that [imath]B \setminus A[/imath] is closed under multiplication. Show that [imath]A[/imath] is integrally closed in [imath]B[/imath]. My attempt to solve the problem: Let [imath]z \in B \setminus A[/imath]. We need to show that [imath]z[/imath] cannot be a root of a monic polynomial [imath] f \in A[x][/imath]. I can prove it if the degree of [imath]f[/imath] equals [imath]2[/imath]. Indeed, let [imath]z^2 + az + b = 0[/imath], where [imath]a, b \in A[/imath]. Then [imath]-a-z \in B[/imath] is also a root of this polynomial, and hence [imath]z(-a-z)=b \in A[/imath]; however, [imath]B \setminus A[/imath] is closed under multiplication, which gives the contradiction. | 1194298 | Question about Subrings and integrally closed
Proposition in abstract algebra. The book gives this as a proposition and asks the reader to do it as an exercise. It seems it should be easy. (Assuming every ring has a 1.) Let R be a subring of S. If S\R is closed under multiplication then R is integrally closed in S. I went by contradiction: let s be in S\R. We get that [imath] s^n+a_1s^{n-1}+\cdots+a_{n-1}s+a_{n}=0 [/imath] where the a's are in R. Now I am thinking this: [imath] s^n=(-1)(a_1s^{n-1}+\cdots+a_{n-1}s+a_{n}) [/imath] Now multiplying both sides by s again: [imath] ss^n=(-s)(a_1s^{n-1}+\cdots+a_{n-1}s+a_{n}) [/imath] Now by closure [imath]ss^n[/imath] is in S\R, which means the rest [imath](a_1s^{n-1}+\cdots+a_{n-1}s+a_{n})[/imath] is in S\R, which means the a's are in it, which is a contradiction. Does this look correct? |
2359811 | Find and prove formula
For [imath]n\in \Bbb N[/imath], find and prove a formula for [imath]\sum_{i=1}^n(2i-1)[/imath]. How do I use induction to find a formula for it? I haven't encountered any question like this in my textbook. | 940146 | Prove by induction that... [imath]1+3+5+7+...+(2n+1)=(n+1)^2[/imath] for every [imath]n \in \mathbb N[/imath]
I'm not too sure exactly how to approach this question. Would anyone be able to give me any helpful advice or some sort of direction? I have a little problem with induction. Prove by induction that: [imath]1 + 3 + 5 + 7 +...+ (2n+1) = (n+1)^2 \quad\forall n \in \mathbb N [/imath] |
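A quick numerical experiment, added as an illustrative sketch (not from either post), that suggests the closed form [imath]n^2[/imath] for [imath]\sum_{i=1}^n(2i-1)[/imath]; the exercise then asks for the induction proof.

```python
# Compare the partial sums of the first n odd numbers with n**2 for small n.
for n in range(1, 8):
    print(n, sum(2 * i - 1 for i in range(1, n + 1)), n ** 2)
```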
2360056 | Prove [imath]a_n =3\cdot2^{n-1} +2(-1)^n[/imath]
Let [imath]a_1=1, a_2=8[/imath] and [imath]a_n=a_{n-1} +2a_{n-2}[/imath] for [imath]n\ge3[/imath]. Prove that [imath]a_n =3\cdot2^{n-1} +2(-1)^n[/imath] for [imath]n\in N[/imath]. Base case: [imath]n=1[/imath], [imath]a_1= 3\cdot2^0+2(-1)^1 =1[/imath], so the base case holds. I.H.: Suppose it's true that [imath]a_k =3\cdot2^{k-1} +2(-1)^k[/imath]. How do I proceed with the induction step? Am I supposed to use [imath]a_n=a_{n-1} +2a_{n-2}[/imath] in the induction step? | 2151679 | Inductive proof of [imath]a_n = 3*2^{n-1} + 2(-1)^n[/imath] if [imath]a_n = a_{n-1} + 2*a_{n-2}, a_1 = 1, a_2 = 8[/imath]
I would appreciate a little help in finalizing a proof for the following: Let [imath]a_n[/imath] be the sequence defined as [imath]a_1 = 1[/imath], [imath]a_2 = 8[/imath], and [imath]a_n = a_{n-1} + 2*a_{n-2}[/imath] when [imath]n \geq 3[/imath]. Prove that [imath]a_n = 3*2^{n-1} + 2(-1)^n[/imath]. I decided to use strong induction and show that if the statement is true for [imath]1,...,n[/imath] , then it is true for [imath]n+1[/imath] (going off the fact that the statement is true for [imath]n=3[/imath]). I have: [imath]a_{n+1} = a_n + 2*a_{n-1} = 3*2^{n-1} + 2(-1)^n + 2(3*2^{n-2} + 2(-1)^{n-1}) = 3*2^n + 2(-1)^n + 4(-1)^{n-1}[/imath] Which is close to the result I want [imath](3*2^n + 2(-1)^{n+1})[/imath] but the signs are switched for the [imath](-1)[/imath] terms; therefore I suspect I missed a [imath]-1[/imath] somewhere, but I could not see where the error is. I appreciate all and any help. Thank you kindly! |
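A small sanity check of the conjectured closed form against the recurrence, added as a sketch (not part of either post; the exercise itself asks for an induction proof).

```python
# a_1 = 1, a_2 = 8, a_n = a_{n-1} + 2*a_{n-2}; compare with 3*2^(n-1) + 2*(-1)^n.
def closed_form(n):
    return 3 * 2 ** (n - 1) + 2 * (-1) ** n

a = {1: 1, 2: 8}
for n in range(3, 12):
    a[n] = a[n - 1] + 2 * a[n - 2]

print(all(a[n] == closed_form(n) for n in a))   # expected: True
```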
552173 | If an ideal contains the unit then it is the whole ring
I have to prove that if [imath]I\subseteq R[/imath] is an ideal and [imath]1\in I[/imath], then [imath]I=R[/imath]. So I know [imath]I\subseteq R[/imath] is an ideal if [imath]a,b\in I[/imath] implies [imath]a+b\in I[/imath], and if [imath]a\in I[/imath], [imath]r\in R[/imath], then [imath]ra\in I[/imath]. I'm finding it hard to put this into words. Since [imath]1\in I[/imath] and [imath]I[/imath] contained in [imath]R[/imath] is an ideal, then let [imath]a=1[/imath] and [imath]r=1[/imath] so [imath]1\cdot 1\in R[/imath]? | 2820303 | Why proper ideal doesn't contain [imath]1[/imath]?
Why does [imath]I[/imath] being a proper ideal imply [imath]1\notin I[/imath]? The definition of a proper ideal only requires [imath]I\neq R[/imath]. I don't get it. |
2358365 | Hoffman and Kunze, Linear Algebra Chapter 3 theorem 20
Theorem [imath]20.[/imath] Let [imath]g,f_1,\dots ,f_r[/imath] be linear functionals on a vector space [imath]V[/imath] with respective null spaces [imath]N,N_1,\dots ,N_r.[/imath] Then [imath]g[/imath] is a linear combination of [imath]f_1,\dots ,f_r[/imath] if and only if [imath]N[/imath] contains the intersection [imath]N_1\cap \cdots \cap N_r.[/imath] I am given a lemma for this theorem: If [imath]f[/imath] and [imath]g[/imath] are linear functionals on a vector space [imath]V,[/imath] then [imath]g[/imath] is a scalar multiple of [imath]f[/imath] if and only if the null space of [imath]g[/imath] contains the null space of [imath]f.[/imath] Attempt : Forward implication is trivial. I am trying to prove the reverse implication by induction. [imath]r=1[/imath] is handled by the lemma. Suppose the statement is true for [imath]r=k-1[/imath] i.e., [imath]\bigcap_{i=1}^{k-1} N_i\subseteq N \implies g=\sum_{i=1}^{k-1}c_if_i.[/imath] Also we are given that [imath]\bigcap_{i=1}^{k} N_i \subseteq N.[/imath] We have to show [imath]g=\sum_{i=1}^{k}c_if_i.[/imath] By sketching out some venn diagrams I noticed that [imath]\bigcap_{i=1}^{k} N_i \subseteq N[/imath] does not necessarily mean [imath]\bigcap_{i=1}^{k-1} N_i \subseteq N.[/imath] But if I consider the restriction functions [imath]g',f_1^{'},\dots ,f_{k-1}^{'}[/imath] on the subspace [imath]N_k[/imath] then I have [imath]\bigcap_{i=1}^{k-1} N_i^{'} \subseteq N[/imath] so that [imath]g'=\sum_{i=1}^{k-1}f_i^{'}.[/imath] What next? These are related posts :[imath](1)[/imath],[imath](2)[/imath] | 486872 | Theorem 20 Hoffman Kunze Linear Algebra book Section 3.6
What I don't understand here is why [imath]h(\alpha)=0[/imath] for all [imath]\alpha[/imath] in [imath]N_{k}[/imath]. Is there a typo? In case there is not, could someone please detail that last step? |
2359703 | Hoffman and Kunze, linear algebra sec 3.6 exercise 1
Let [imath]n[/imath] be a positive integer and [imath]F[/imath] a field. Let [imath]W[/imath] be the set of all vectors [imath](x_1, \dots , x_n)[/imath] in [imath]F^n[/imath] such that [imath]x_1+\dots +x_n =0[/imath]. [imath]1)[/imath] Prove that [imath]W^0[/imath] (annihilator of [imath]W[/imath]) consist of all linear functionals [imath]f[/imath] of the form [imath]f(x_1, \dots , x_n) = c \sum _{j=1}^n x_j.[/imath] [imath]2)[/imath] Show that the dual space [imath]W^*[/imath] of [imath]W[/imath] can be naturally identified with the linear functionals of the form [imath]f(x_1, \dots , x_n) =c_1x_1 +\dots +c_nx_n[/imath] on [imath]F^n[/imath] which satisfy [imath]c_1+\dots +c_n=0[/imath]. Attempt: [imath](1) [/imath]Solving [imath]x_1+\cdots+x_n=0[/imath] and considering [imath]x_n[/imath] as the dependent variable, we get the basis vectors for [imath]W[/imath] as [imath]\{\alpha_1,\alpha_2,\dots,\alpha_{n-1}\}[/imath] where [imath]\alpha_i=\big(\underbrace{\dots}_{0\text{'s}},\underbrace{1}_{i^{\text{th}}\text{coordinate}},\underbrace{\dots}_{0\text{'s}},-1\big)[/imath] If [imath]f\in W^0[/imath] is expressed as [imath]f(x_1,\dots ,x_n)=d_1x_1+\cdots d_nx_n[/imath] then [imath]W^0[/imath] is the solution space to the system [imath]AD=0[/imath] where [imath]A[/imath] is the [imath]n-1\times n[/imath] matrix with [imath]\alpha_i[/imath] as it's [imath]i^{\text{th}}[/imath] row and [imath]D=\big(d_1,\cdots,d_n\big)^T.[/imath] Since [imath]A[/imath] is in row-reduced row echelon form, we set [imath]d_n=c[/imath] and hence we get [imath]f(x_1, \dots , x_n) = c \sum _{j=1}^n x_j.[/imath] [imath](2)[/imath] To find the basis [imath]\{f_1,\dots,f_{n-1}\}[/imath]for the dual space [imath]W^*[/imath] we need [imath]f_i[/imath]'s to satisfy [imath]f_i(\alpha_j)=\delta_{ij},1\le i,j\le n-1.[/imath] Looking at [imath]\alpha_i[/imath]'s, it is natural to consider [imath]f_i[/imath] as [imath]f_i(x_1,\cdots,x_n)=x_i,1\le i\le n-1[/imath] so that [imath]f_i(\alpha_j)=\delta_{ij},1\le i,j\le n-1.[/imath] Thus [imath]\{f_i\}_{i=1}^{n-1}[/imath] is a basis for [imath]W^*[/imath] and if [imath]f\in W^*[/imath] then \begin{align} f=\sum_{i=1}^{n-1}f(\alpha_i)f_i\\ \implies f(\alpha)=\sum_{i=1}^{n-1}f(\alpha_i)x_i \end{align} It looks like I am nowhere near in the second part of the problem. Help! This is a related post but the answers given uses tools which are not introduced in the book yet. | 487770 | Task interpretation for Hoffman Kunze Linear Algebra exercise 1 (b) sec. 3.6
I don't understand the task from (b). Is it equivalent to : for every linear functional [imath]f(x_{1}, ..., x_{n})=c_{1}x_{1}+...+c_{n}x_{n}[/imath] on [imath]F^{n}[/imath] which satisfy [imath]c_{1}+...+c_{n}=0[/imath] there exists exactly one and unique functional that belongs to the dual space of [imath]W[/imath]? If it is so, could someone please give me a hint? |
2360542 | Prove that [imath]f[/imath] is continuously differentiable [imath]\Leftrightarrow |f(x + h) - f(x + t) - l(h - t)| \leq \epsilon |h-t|[/imath]
Prove: [imath]f: \mathbb{R} \rightarrow \mathbb{R}[/imath] is continuously differentiable if and only if for every [imath]x \in \mathbb{R}[/imath] there exists a [imath]l \in \mathbb{R}[/imath] such that for every [imath]\epsilon > 0[/imath] there exists a [imath]\delta > 0[/imath] such that for every [imath]h,t \in B(0, \delta)[/imath]: [imath]|f(x + h) - f(x + t) - l(h - t)| \leq \epsilon |h-t|[/imath] I've tried quite a lot of stuff, but I just can't get to this solution. I also tried to use the Lipschitz condition, but that didn't help much either. I know that this question is on here already, but I don't understand that answer. | 2334586 | Continuously differentiable function iff [imath]|f(x + h) - f(x + t) - l(h - t)| \leq \epsilon |h-t|[/imath]
I am doing some practice tests for my analysis class, and in one of them I stumbled upon the following question: Prove: [imath]f: \mathbb{R} \rightarrow \mathbb{R}[/imath] is continuously differentiable iff for every [imath]x \in \mathbb{R}[/imath] there exists a [imath]l \in \mathbb{R}[/imath] such that for every [imath]\epsilon > 0[/imath] there exists a [imath]\delta > 0[/imath] such that for every [imath]h,t \in B(0, \delta)[/imath]: [imath]|f(x + h) - f(x + t) - l(h - t)| \leq \epsilon |h-t|[/imath] I was hoping someone could walk me through this proof. Whatever I try, I can't get to this form by using the [imath]\epsilon-\delta[/imath] definition of continuity and differentiability. I have also tried continuously differentiable [imath]\rightarrow[/imath] Lipschitz, but that did not get me any further. Thanks in advance! |
2360869 | [imath]\frac{\partial f}{\partial x} = \frac{\partial f}{\partial y}[/imath]
Consider a function [imath]f(x, y)[/imath], smooth enough, satisfying the following equation: [imath]\frac{\partial f}{\partial x} = \frac{\partial f}{\partial y}\ .[/imath] It is obvious that any function of the form [imath]f(x + y)[/imath] satisfies the above condition. How can I prove that these are all such functions, or find a counterexample? | 2042778 | Problems with partial derivatives.
I have a problem with the next exercise. It's so confusing for me. I have one implication, but no the other. Let [imath]f:D\subset\mathbb{R}^2\rightarrow\mathbb{R}[/imath] with [imath]D[/imath] open set. The exercise is the next: [imath]\displaystyle\frac{\partial f}{\partial x} = \displaystyle\frac{\partial f}{\partial y}[/imath] for all [imath](x,y)\in D[/imath] if and only if there exist a derivable function [imath]g[/imath] such that [imath]f(x,y)=g(x+y)[/imath] for all [imath](x,y)\in D[/imath]. [imath]\Leftarrow )[/imath] Let [imath](x,y)\in D[/imath] [imath]\displaystyle\frac{\partial f}{\partial x}=\lim\limits_{h\rightarrow 0} \displaystyle\frac{f(x+h,y)-f(x,y)}{h}=\lim\limits_{h\rightarrow 0}\displaystyle\frac{g(x+y+h)-g(x+y)}{h}=g'(x+y)[/imath] (the limit there exist because [imath]g[/imath] is derivable). [imath]\displaystyle\frac{\partial f}{\partial y}=\lim\limits_{h\rightarrow 0} \displaystyle\frac{f(x,y+h)-f(x,y)}{h}=\lim\limits_{h\rightarrow 0}\displaystyle\frac{g(x+y+h)-g(x+y)}{h}=g'(x+y)[/imath] (again, the limit there exist because [imath]g[/imath] is derivable). Thus, [imath]\displaystyle\frac{\partial f}{\partial x} = \displaystyle\frac{\partial f}{\partial y}[/imath] But, I can't understand the implication [imath]\Rightarrow) [/imath]. Some hint? I think define a function using the partial derivatives, but, really, I don't know how. |
2361545 | [imath]\underbrace{111\cdots 111}_{1991\,\text{times}}[/imath] is not a prime number
This is a question from the Belgian Mathematical Olympiad [imath]1991[/imath]: Prove that [imath]\underbrace{111\cdots 111}_{1991\,\text{times}}[/imath] is not a prime number. I tried making a computer program to check it, but the number is too large to fit in. I am not so strong in congruences, but I can understand a provided solution regarding CRT or something like that. Please provide any hint for the first step I should take. Side note: The question has been marked duplicate, but from the link I can only deduce that the given number is of the form [imath]13k+9[/imath], which is not helping me; please provide a comment or something which can help me understand how I can use the information given in the link. | 1776592 | Any digit written [imath]6k[/imath] times forms a number divisible by [imath]13[/imath]
Any digit written [imath]6k[/imath] times (like [imath]111111[/imath], [imath]222222222222222222222222[/imath], etc.) forms a number divisible by [imath]13[/imath]. (source: a solution taken from careerbless) I tested with many numbers and it seems this is correct. But, is it possible to prove this mathematically? If so, it will be a convincing statement. Please help. I am not able to think how such properties can be proved. |
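A divisibility check covering both repunit questions above, added as a sketch (not a proof): writing [imath]R_k[/imath] for the repunit with [imath]k[/imath] ones, [imath]R_d[/imath] divides [imath]R_n[/imath] whenever [imath]d[/imath] divides [imath]n[/imath]; since [imath]1991 = 11\cdot 181[/imath], [imath]R_{11}[/imath] divides [imath]R_{1991}[/imath], and since [imath]13[/imath] divides [imath]R_6 = 111111[/imath], it divides every [imath]R_{6k}[/imath] and hence every repeated-digit number [imath]d\cdot R_{6k}[/imath].

```python
def repunit(k):
    # Repunit with k ones, e.g. repunit(6) == 111111.
    return (10 ** k - 1) // 9

print(repunit(1991) % repunit(11) == 0)   # True: R_1991 has a proper divisor
print(repunit(6) % 13 == 0)               # True: 13 | 111111
print(repunit(18) % 13 == 0)              # True: 13 | R_{6k}, here k = 3
```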
2361376 | complex functions which can be approximated by sequence of polynomials
For which among the following functions [imath]f(z)[/imath] defined on [imath]G=\mathbb{C}\setminus\{0\}[/imath] is there no sequence of polynomials approximating [imath]f(z)[/imath] uniformly on compact subsets of [imath]G[/imath]? 1) [imath]e^z[/imath] 2) [imath]\frac{1}{z}[/imath] 3) [imath]z^2[/imath] 4) [imath]\frac{1}{z^2}[/imath] Here [imath]e^z[/imath] and [imath]z^2[/imath] are entire functions, so can I say they can be approximated by a sequence of polynomials? I know Runge's theorem but I do not understand how to apply it. | 2342210 | An error in application of Runge's theorem
I have read the following theorem in Complex Analysis by Cufi and Bruna on Page 420: Runge's theorem for open sets: An open set [imath]U[/imath] has the property that every holomorphic function on [imath]U[/imath] is the uniform limit on compact sets of [imath]U[/imath] of a sequence of polynomials, if and only if [imath]\mathbb C \setminus U[/imath] has no bounded connected component. Now I came across the following question: For which among the following functions [imath]f(z)[/imath] defined on [imath]G=\mathbb C \setminus \{0\}[/imath], is there no sequence of polynomials approximating [imath]f(z)[/imath] uniformly on compact subsets of [imath]G[/imath]? [imath](a)\ e^z \\(b)\ \frac1z\\(c)\ z^2\\(d)\ \frac1{z^2}[/imath] The answer according to me should be [imath](b)[/imath] and [imath](d)[/imath]. But if I try to apply the above theorem, I see that all the functions are holomorphic on [imath]G[/imath](open) and [imath]\mathbb C \setminus G=\{0\}[/imath] which has a bounded connected component, namely [imath]\{0\}[/imath]. So according to the theorem, the correct answer should be [imath](a),(b),(c)[/imath] and [imath](d)[/imath]. Where am I going wrong in applying the theorem? |
2362328 | Prove that [imath]\frac{\Gamma(n+1,n+1)}{\Gamma(n+1)}/{\frac{\Gamma(n,n)}{\Gamma(n)}}[/imath] is a decreasing function.
Is it possible to prove that the function [imath]\frac{\Gamma(n+1,n+1)}{\Gamma(n+1)}/{\frac{\Gamma(n,n)}{\Gamma(n)}}[/imath] is decreasing for [imath]n \geq 1[/imath], where [imath]\Gamma(n,n) = \int_n^\infty t^{n-1} \mathrm{e}^{-t} \mathrm{d} t[/imath] is the upper incomplete gamma function? I tried differentiating, but it is very tricky to handle. I would appreciate any answer, advice, or suggestion to help approach this problem. | 2362289 | How can I prove this sequence is decreasing?
My problem is how to prove [imath]\frac{1+2}{1} > \frac{1+3+\frac{3^2}{2!}}{1+2} > \frac{1+4+\frac{4^2}{2!}+\frac{4^3}{3!}}{1+3+\frac{3^2}{2!}}>\cdots [/imath] I am sure that the sequence is decreasing, with limit [imath]\exp(1)[/imath]. However, the reason why the sequence is decreasing is not evident. Thanks for considering this issue and putting my mind at ease. |
2361813 | Strong induction [imath]n=2^a\cdot b[/imath]
Given [imath]n\in\mathbb N[/imath], there exists a non-negative integer [imath]a[/imath] and an odd integer [imath]b[/imath] such that [imath]n=2^a\cdot b[/imath]. Base case: [imath]n=1 =2^0\cdot1[/imath]. I.H.: Assume it's true for [imath]n=1,2,\ldots,k[/imath]. How would I prove this? | 25914 | Proof by Strong Induction: [imath]n = 2^a b,\, b\,[/imath] odd, every natural a product of an odd and a power of 2
Can someone guide me in the right direction on this question? Prove that every [imath]n[/imath] in [imath]\mathbb{N}[/imath] can be written as a product of an odd integer and a non-negative integer power of [imath]2[/imath]. For instance: [imath]36 = 2^2(9)[/imath] , [imath]80 = 2^4(5)[/imath] , [imath]17 = 2^0(17)[/imath] , [imath]64 = 2^6(1)[/imath] , etc... Any hints in the right direction are appreciated (please explain for a beginner). Thanks. |
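A small illustrative helper, added as a sketch (not from either post), that exhibits the decomposition [imath]n=2^a\cdot b[/imath] with [imath]b[/imath] odd by repeatedly dividing out factors of [imath]2[/imath]; this is also the idea behind the inductive proof.

```python
def split_power_of_two(n):
    # Return (a, b) with n == 2**a * b and b odd.
    a = 0
    while n % 2 == 0:
        n //= 2
        a += 1
    return a, n

print(split_power_of_two(36))   # (2, 9)
print(split_power_of_two(80))   # (4, 5)
print(split_power_of_two(17))   # (0, 17)
```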
1542094 | Proving that [imath]a+b+c [/imath] is composite knowing it divides [imath]abc[/imath]
Assume [imath]a[/imath], [imath]b[/imath], and [imath]c[/imath] are positive integers such that [imath](a+b+c)[/imath] divides [imath]abc[/imath]. Show that [imath]a+b+c[/imath] is composite. Here is what I have so far: if [imath]a+b+c[/imath] is prime, then letting [imath]a = xd[/imath], [imath]b = yd[/imath], and [imath]c = zd[/imath] we have that [imath]d(x+y+z)[/imath] is prime. This means that [imath]d = 1[/imath] and thus [imath]a,b,c[/imath] are relatively prime. Therefore, [imath]\dfrac{xyz}{x+y+z}[/imath] is supposed to be an integer. What next? | 2363330 | Prove that if [imath]a+b+c[/imath] divides [imath]abc[/imath], then [imath]a+b+c[/imath] must be composite.
I have seen another post for this same question but it never got an answer. Since I am fairly new, I can't comment on other's posts, therefore I am creating a new one. The question states: If a, b, and c are positive integers, prove that if [imath](a + b + c) | (a*b*c)[/imath], then [imath]a+b+c[/imath] is composite. I do not really know how to approach this. The link to the post with the same question is: Proving that [imath]a+b+c [/imath] is composite knowing it divides [imath]abc[/imath] |
319553 | Show that the curve [imath]x^2+y^2-3=0[/imath] has no rational points
Show that the curve [imath]x^2+y^2-3=0[/imath] has no rational points, that is, no points [imath](x,y)[/imath] with [imath]x,y\in \mathbb{Q}[/imath]. Update: Thanks for all the input! I've done my best to incorporate your suggestions and write up the proof. My explanation of why [imath]\gcd(a,b,q)=1[/imath] is a bit verbose, but I couldn't figure out how to put it more concisely with clear notation. Proof: Suppose for the sake of contradiction that there exists a point [imath]P=(x,y)[/imath], such that [imath]x^2+y^2-3=0[/imath], with [imath]x,y\in\mathbb{Q}[/imath]. Then we can express [imath]x[/imath] and [imath]y[/imath] as irreducible fractions and write [imath](\frac{n_x}{d_x})^2+(\frac{n_y}{d_y})^2-3=0[/imath], with [imath]n_x, d_x, n_y, d_y\in\mathbb{Z}[/imath], and [imath]\gcd(n_x,d_x)=\gcd(n_y,d_y)=1[/imath]. Let [imath]q[/imath] equal the lowest common multiple of [imath]d_x[/imath] and [imath]d_y[/imath]. So [imath]q=d_xc_x[/imath] and [imath]q=d_yc_y[/imath] for the mutually prime integers [imath]c_x[/imath] and [imath]c_y[/imath] (if they weren't mutually prime, then [imath]q[/imath] wouldn't be the lowest common multiple). If we set [imath]a=n_xc_x[/imath] and [imath]b=n_yc_y[/imath], we can write the original equation as [imath](a/q)^2+(b/q^2)-3=0[/imath], and equivalently, [imath]a^2+b^2=3q^2[/imath]. In order to determine the greatest common divisor shared by [imath]a[/imath], [imath]b[/imath], and [imath]q[/imath], we first consider the prime factors of [imath]a[/imath]. Since [imath]a=n_xc_x[/imath], we can group them into the factors of [imath]n_x[/imath] and those of [imath]c_x[/imath]. Similarly, [imath]b[/imath]'s prime factors can be separated into those of [imath]n_y[/imath] and those of [imath]c_y[/imath]. We know that [imath]c_x[/imath] and [imath]c_y[/imath] don't share any factors, as they're mutually prime, so any shared factor of [imath]a[/imath] and [imath]b[/imath] must be a factor of [imath]n_x[/imath] and [imath]n_y[/imath]. Furthermore, [imath]q=d_xc_x=d_yc_y[/imath], so it's prime factors can either be grouped into those of [imath]d_x[/imath] and those of [imath]c_x[/imath], or those of [imath]d_y[/imath] and those of [imath]c_y[/imath]. As we've already eliminated [imath]c_x[/imath] and [imath]c_y[/imath] as sources of shared factors, we know that any shared factor of [imath]a[/imath], [imath]b[/imath], and [imath]q[/imath] must be a factor of [imath]n_x[/imath], [imath]n_y[/imath], and either [imath]d_x[/imath] or [imath]d_y[/imath]. But since [imath]n_x/d_x[/imath] is an irreducible fraction, [imath]n_x[/imath] and [imath]d_x[/imath] share no prime factors. Similarly, [imath]n_y[/imath] and [imath]d_y[/imath] share no prime factors. Thus [imath]a[/imath], [imath]b[/imath], and [imath]q[/imath] share no prime factors, and their greatest common divisor must be [imath]1[/imath]. Now consider an integer [imath]m[/imath] such that [imath]3\nmid m[/imath]. Then, either [imath]m\equiv 1\pmod{3}[/imath], or [imath]m\equiv 2\pmod{3}[/imath]. If [imath]m\equiv 1\pmod{3}[/imath], then [imath]m=3k+1[/imath] for some integer [imath]k[/imath], and [imath]m^2=9k^2+6k+1=3(3k^2+2k)+1\equiv 1\pmod{3}[/imath]. Similarly, if [imath]m\equiv 2\pmod{3}[/imath], then [imath]m^2=3(3k^2+4k+1)+1\equiv 1\pmod{3}[/imath]. Since that exhausts all cases, we see that [imath]3\nmid m \implies m^2\equiv 1\pmod{3}[/imath] for [imath]m\in\mathbb{Z}[/imath]. Notice that [imath]a^2+b^2=3q^2[/imath] implies that [imath]3\mid (a^2+b^2)[/imath]. 
If [imath]3[/imath] doesn't divide both of [imath]a[/imath] and [imath]b[/imath], then [imath](a^2+b^2)[/imath] will be either [imath]1\pmod{3}[/imath] or [imath]2\pmod{3}[/imath], and thus not divisible by [imath]3[/imath]. So we can deduce that both [imath]a[/imath] and [imath]b[/imath] must be divisible by [imath]3[/imath]. We can therefore write [imath]a=3u[/imath] [imath]\land[/imath] [imath]b=3v[/imath] for some integers [imath]u[/imath] and [imath]v[/imath]. Thus, [imath]9u^2+9v^2=3q^2[/imath], and equivalently, [imath]3(u^2+v^2)=q^2[/imath]. So [imath]3[/imath] divides [imath]q^2[/imath], and must therefore divide [imath]q[/imath] as well. Thus, [imath]3[/imath] is a factor of [imath]a,b,[/imath] and [imath]q[/imath], but this contradicts the fact that [imath]\gcd(a,b,q)=1[/imath], and falsifies our supposition that such a point [imath]P=(x,y)[/imath] exists. | 2845735 | Show that [imath]x^2 + y^2 = 3[/imath] has no rational points
Are there rational numbers such that [imath]x^2 + y^2 = 3[/imath]? If I want to find a rational parametrization of [imath]x^2 + y^2 = 1[/imath], I could start with the point [imath](1,0)[/imath] and find lines [imath]\ell[/imath] of slope [imath]m \in \mathbb{Q}[/imath] and the intersection points [imath][\ell] \cdot [circle] = 2 [pt] [/imath]. However, if I use the circle [imath]x^2 + y^2 = 2[/imath] there's no rational point on the axes. Instead we should use [imath](x,y) = (1,1)[/imath]. In the case of [imath]x^2 + y^2 = 3[/imath] there's no obvious rational point that comes to mind. I'm concerned there might be no rational point at all. In integers we'd have [imath]a^2 + b^2 = 3c^2[/imath] with [imath]a,b,c \in \mathbb{Z}[/imath]. We'd have [imath]c \equiv 0 \pmod 4[/imath]. Then [imath]a \equiv b \equiv 0 \pmod 4[/imath]. This could lead to an infinite descent argument. As a bonus, could there exist a small rational [imath]\epsilon > 0[/imath] with [imath]\epsilon \ll 1[/imath] and [imath]\epsilon \in \mathbb{Q}[/imath] such that [imath]x^2 + y^2 = 3 + \epsilon[/imath] has a solution (and therefore infinitely many solutions)? |
2362513 | Inequality [imath]\sum_{cyc} \sqrt{\frac{a}{a+8}} \geq 1[/imath]
Let [imath]a[/imath], [imath]b[/imath] and [imath]c[/imath] be positive real numbers such that [imath]abc = 1[/imath]. Prove that [imath]\displaystyle \sum_{cyc} \sqrt{\frac{a}{a+8}} \geq 1.[/imath] I have tried using some substitutions but still cannot do it. Please suggest. | 1369441 | Prove the inequality [imath]\sqrt\frac{a}{a+8} + \sqrt\frac{b}{b+8} +\sqrt\frac{c}{c+8} \geq 1[/imath] with the constraint [imath]abc=1[/imath]
If [imath]a,b,c[/imath] are positive reals such that [imath]abc=1[/imath], then prove that [imath]\sqrt\frac{a}{a+8} + \sqrt\frac{b}{b+8} +\sqrt\frac{c}{c+8} \geq 1[/imath] I tried substituting [imath]x/y,y/z,z/x[/imath], but it didn't help(I got the reverse inequality). Need some stronger inequality. Thanks. |
1643171 | Length of the Union of Intervals is less than the Sum of Each Length of Intervals?
I am reading Royden and Fitzpatrick's book on Real Analysis, and I have a question about the length function on an interval in [imath]\mathbb{R}[/imath]. Is it true that, given a countable collection [imath]\{I_n \}_{n=1}^{\infty}[/imath] of open intervals in [imath]\mathbb{R}[/imath]: [imath] \ell( \bigcup_{n=1}^{\infty}I_{n}) \leq \sum_{n=1}^{\infty} \ell (I_n)? [/imath] Here [imath]\ell[/imath] is the length function. I think this was implicitly used in their proof that the outer measure [imath]m^*[/imath] has the subadditivity property. I think it is true though, since the [imath]I_n[/imath] might not be pairwise disjoint. | 1432200 | The length of an interval covered by an infinite family of open intervals
Prove that if [imath]I[/imath] is an interval of [imath]\mathbb R[/imath], covered by an infinite family [imath]\{I_n, n\in \mathbb N\}[/imath] of open intervals, then (where [imath]\ell(I)[/imath] denotes the length of [imath]I[/imath]): [imath]\ell(I) \le \sum_{n=1}^{\infty} \ell(I_n)[/imath] It is very intuitive but hard to prove (in a formal manner). The exercise gives the hint: prove that [imath]\ell(I) = \sup\{\ell(K), \text{[/imath]K[imath] is a compact subset of [/imath]I[imath]}\}[/imath]. I proved this, but I can't see how this hint makes the problem any easier to prove. I thought about doing the following: If [imath]I[/imath] is unbounded, then we are done, so let's assume that it's bounded. Due to the connectedness of [imath]I[/imath], we can WLOG suppose that the [imath]I_n[/imath]'s are such that [imath]I_n \cap I_{n+1} \neq \varnothing[/imath] for each [imath]n[/imath] (this would be the ideal situation, I guess). By the axiom of countable choice, we can find a sequence [imath](a_n)[/imath] where [imath][a_n, a_{n+1}] \subset I_n[/imath] for each [imath]n[/imath]. Then: [imath]\sum_{n=1}^{\infty} \ell(I_n) = \sum_{n=1}^{\infty} \sup \{ \ell([a_n,a_{n+1}]) = a_{n+1} - a_n\}[/imath] Does this make any sense? How to proceed? Thank you. |
2364428 | Find all numbers that are their own multiplicative inverse in mod p where p is prime.
Find all numbers that are their own multiplicative inverse mod [imath]p[/imath], where [imath]p[/imath] is prime. I recall that when [imath]p[/imath] is prime, all integers from 1 to the modulus minus 1, so all numbers from [imath]1[/imath] to [imath]p-1[/imath], have multiplicative inverses mod [imath]p[/imath]. So, the numbers that are their own multiplicative inverse would be [imath]1[/imath] and [imath]p-1[/imath]. Can someone please explain why? | 112677 | Proving that an integral domain has at most two elements that satisfy the equation [imath]x^2 = 1[/imath].
I like to be thorough, but if you feel confident you can skip the first paragraph. Review: A ring is a set [imath]R[/imath] endowed with two operations of + and [imath]\cdot[/imath] such that [imath](R,+)[/imath] is an additive abelian group, multiplication is associative, [imath]R[/imath] contains the multiplicative identity (denoted with 1), and the distributive law holds. If multiplication is also commutative, we say [imath]R[/imath] is a commutative ring. A ring that has no zero divisors (non-zero elements whose product is zero) is called an integral domain, or just a domain. We want to show that for a domain, the equation [imath]x^2 = 1[/imath] has at most 2 solutions in [imath]R[/imath] (one of which is the trivial solution 1). Here's what I did: For simplicity let [imath]1,a,b[/imath] and [imath]c[/imath] be distinct non-zero elements in [imath]R[/imath]. Assume [imath]a^2 = 1[/imath]. We want to show that letting [imath]b^2 = 1[/imath] as well will lead to a contradiction. So suppose [imath]b^2 = 1[/imath], then it follows that [imath]a^2b^2 = (ab)^2 = 1[/imath], so [imath]ab[/imath] is a solution as well, but is it a new solution? If [imath]ab = 1[/imath], then [imath]abb = 1b \Rightarrow a = b[/imath] which is a contradiction. If [imath]ab = a[/imath], then [imath]aab = aa \Rightarrow b = 1[/imath] which is also a contradiction. Similarly, [imath]ab = b[/imath] won't work either. So it must be that [imath]ab = c[/imath]. So by "admitting" [imath]b[/imath] as a solution, we're forced to admit [imath]c[/imath] as well. So far we have [imath]a^2 = b^2 = c^2 = 1[/imath] and [imath]ab = c[/imath]. We can proceed as before and say that [imath](abc)^2 = 1[/imath], so [imath]abc[/imath] is a solution, but once again we should check if it is a new solution. From [imath]ab = c[/imath], we get [imath]a = cb[/imath] and [imath]b = ac[/imath], so [imath]abc = (cb)(ac)(ab) = (abc)^2 = 1[/imath]. So [imath]abc[/imath] is not a new solution; it's just [imath]1[/imath]. At this point I'm stuck. I've shown that it is in fact possible to have a ring with 4 distinct elements, namely [imath]1,a,b[/imath] and [imath]c[/imath] such that each satisfies the equation [imath]x^2 = 1[/imath] and [imath]abc = 1[/imath]. What am I missing? |
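A quick empirical check for the mod-[imath]p[/imath] question above, added as a sketch (the algebraic reason is that [imath]x^2\equiv 1\pmod p[/imath] means [imath]p\mid(x-1)(x+1)[/imath], so [imath]x\equiv 1[/imath] or [imath]x\equiv -1\equiv p-1[/imath] when [imath]p[/imath] is prime).

```python
# List all residues that are their own inverse modulo a few primes.
for p in (5, 7, 11, 13, 101):
    print(p, [x for x in range(1, p) if (x * x) % p == 1])
```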
2364334 | Explanation for Binomial Coefficient Formula
I am confused regarding the derivation of the fact that [imath]{n \choose k} =\frac{n!}{k!(n-k)!}[/imath]. I understand that the number of ordered sequences of [imath]k[/imath] objects chosen from [imath]n[/imath] is [imath]\frac{n!}{(n-k)!}[/imath], and I also understand why there can be [imath]n![/imath] different permutations of the same [imath]n[/imath] letters, but my question is: in the formula, why does one divide by [imath]k![/imath] to get rid of the repeated arrangements? How does that remove them? | 1446734 | Intuitive explanation of binomial coefficient formula
Regarding the formula for binomial coefficients: [imath]\binom{n}{k}=\frac{n(n-1)(n-2)\cdots(n-k+1)}{k!}[/imath] the professor described the formula as first choosing the [imath]k[/imath] objects from a group of [imath]n[/imath], where order matters, and then dividing by [imath]k![/imath] to adjust for overcounting. I understand the reasoning behind the numerator but don't understand why dividing by [imath]k![/imath] is what's needed to adjust for overcounting. Can someone please help me understand how one arrives at [imath]k![/imath]? Thanks. |
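A small worked example, added for illustration: with [imath]n=4[/imath] and [imath]k=2[/imath] there are [imath]4\cdot 3=12[/imath] ordered selections, but each [imath]2[/imath]-element subset such as [imath]\{a,b\}[/imath] appears in [imath]2!=2[/imath] different orders ([imath]ab[/imath] and [imath]ba[/imath]), so dividing [imath]12[/imath] by [imath]2![/imath] collapses the ordered count to the [imath]\binom{4}{2}=6[/imath] unordered choices; in general each [imath]k[/imath]-element subset is counted exactly [imath]k![/imath] times among the ordered selections.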
2364721 | Prove that [imath]\frac{1}{15}<\frac{1}{2}*\frac{3}{4}* \dots *\frac{99}{100}<\frac{1}{10}[/imath]
Prove that [imath]\frac{1}{15}<\frac{1}{2}*\frac{3}{4}* \dots *\frac{99}{100}<\frac{1}{10}[/imath] My attempt: If we name the value [imath]A[/imath], we have: [imath]A^2<\left(\frac{1}{2}*\frac{3}{4}* \dots *\frac{99}{100}\right)\left(\frac{2}{3}*\frac{4}{5}* \dots \frac{100}{101}\right)=\frac{1}{101} \Rightarrow A<\frac{1}{10}[/imath] But I don't know how to prove the other side. | 934878 | show [imath]\frac{1}{15}< \frac{1}{2}\times\frac{3}{4}\times\cdots\times\frac{99}{100}<\frac{1}{10}[/imath] is true
Prove [imath]\frac{1}{15}< \frac{1}{2}\times\frac{3}{4}\times\cdots\times\frac{99}{100}<\frac{1}{10}[/imath] Things I have done: after trying many ways and failing, I reached the fact that [imath]\left(\frac{1}{2}\times\frac{3}{4}\times\cdots\times\frac{99}{100}\right)^2<\left(\frac{1}{2}\times\frac{3}{4}\times\cdots\times\frac{99}{100}\right)\left(\frac{2}{3}\times\frac{4}{5}\times\cdots\times\frac{100}{101}\right)=\frac{1}{101}[/imath] So [imath]\frac{1}{2}\times\frac{3}{4}\times\cdots\times\frac{99}{100}<\frac{1}{10}[/imath] is true. It remains to show [imath]\frac{1}{15}< \frac{1}{2}\times\frac{3}{4}\times\cdots\times\frac{99}{100}[/imath]. I'm thinking of applying my approach to proving this part, something like this: [imath]\frac{1}{225}<\frac{1}{x}=\left(\frac{1}{2}\times\frac{3}{4}\times\cdots\times\frac{99}{100}\right) \times B<\left(\frac{1}{2}\times\frac{3}{4}\times\cdots\times\frac{99}{100}\right)^2[/imath] And another thing I'm curious about: is there a way to approximate the value of [imath]\frac{1}{2}\times\frac{3}{4}\times\cdots\times\frac{99}{100}[/imath]? |
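An exact-arithmetic check of both bounds, added as a sketch using Python's fractions module (the exercise of course asks for a proof, not a computation).

```python
from fractions import Fraction

# P = (1/2)*(3/4)*...*(99/100), computed exactly.
P = Fraction(1)
for k in range(1, 51):
    P *= Fraction(2 * k - 1, 2 * k)

print(Fraction(1, 15) < P < Fraction(1, 10))   # True
print(float(P))                                # about 0.0796
```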
2364891 | Is every norm in a dual space a dual norm of some norm?
Let [imath](X,\|\|)[/imath] be a normed space, and let [imath](X^*,\|\|_*)[/imath] and [imath](X^{**},\|\|_{**})[/imath] be its topological dual and topological bidual, respectively. Now consider a norm [imath]\rho[/imath] on [imath]X^*[/imath] which is equivalent to [imath]\|\|_*.[/imath] Does there exist a norm [imath]\mu[/imath] on [imath]X[/imath] such that [imath]\mu_*= \rho[/imath]? (Of course, [imath]\mu_*[/imath] denotes the dual norm of [imath]\mu.[/imath]) I know that if [imath]X[/imath] is reflexive, such a norm indeed exists, and it is just [imath]\mu(x)=\sup_{\rho(x^*)\leq 1} |x^*(x)|.[/imath] The question is: can we remove reflexivity? I think that for a proof we could use Goldstine's Lemma. A proof or counterexample would be nice. | 2027019 | An example of a Banach space isomorphic but not isometric to a dual Banach space
I am wondering about the following question: Let [imath]X[/imath] be a separable Banach space which is linearly isomorphic to a dual Banach space [imath]Y^*[/imath]. Is there a Banach space [imath]Z[/imath] such that [imath]X[/imath] is linearly isometric to the dual of [imath]Z[/imath]: [imath]X=Z^*[/imath]? I think that the answer is no, but I do not have a counterexample. Since [imath]L_1[/imath] is not isometric to any dual Banach space, maybe one can find a dual Banach space which is isomorphic to [imath]L_1[/imath]... To finish, does that change anything if I suppose [imath]X[/imath] to be almost linearly isometric to [imath]Y^*[/imath]? By almost linearly isometric I mean that for every [imath]\varepsilon >0[/imath] there exists a linear isomorphism [imath]T: X \to Y[/imath] satisfying [imath]\|T\| \|T^{-1}\| \leq 1+\varepsilon[/imath]. |
2168689 | Prove that [imath]\frac{1}{\left(2a+b\right)^2}+\frac{1}{\left(2b+c\right)^2}+\frac{1}{\left(2c+a\right)^2}\ge\frac{1}{ab+bc+ca}[/imath]
For [imath]a,b,c>0[/imath], prove that [imath]\frac{1}{\left(2a+b\right)^2}+\frac{1}{\left(2b+c\right)^2}+\frac{1}{\left(2c+a\right)^2}\ge\frac{1}{ab+bc+ca}[/imath]. Apart from the equality case [imath]a=b=c[/imath], I can't see what to exploit here. | 906972 | Prove [imath]\sum{\frac{1}{(x+2y)^2}} \geq\frac{1}{xy+yz+zx}[/imath]
Let [imath]x,y,z>0[/imath]: Prove that: [imath]\frac{1}{(x+2y)^2}+\frac{1}{(y+2z)^2}+\frac{1}{(z+2x)^2} \geq\frac{1}{xy+yz+zx}[/imath] I tried to apply the Cauchy–Schwarz inequality but I couldn't prove this inequality! |
250805 | generalization for linear functionals
I'm learning linear algebra from the Hoffman and Kunze book and have been stuck on this problem for a long time. Please help me solve it. Thanks: Let [imath]S[/imath] be a set, [imath]F[/imath] a field, and [imath]V(S,F)[/imath] the space of all functions from [imath]S[/imath] into [imath]F[/imath]: [imath] (f + g)(x) = f(x) +g(x)[/imath] [imath](cf)(x) = cf(x)[/imath] Let [imath]W[/imath] be any [imath]n[/imath]-dimensional subspace of [imath]V(S,F)[/imath]. Show that there exist points [imath]x_1, x_2, ... x_n[/imath] in [imath]S[/imath] and functions [imath]f_1, f_2, ..., f_n[/imath] in [imath]W[/imath] such that [imath]f_i(x_j) = \delta_{ij}[/imath]. Here [imath]\delta_{ij} = \begin{cases} 1 , & \text {if $i = j$} \\ 0 , & \text {if $i \neq j$} \\ \end{cases} [/imath] | 121704 | A question about an [imath]n[/imath]-dimensional subspace of [imath]\mathbb{F}^{S}[/imath].
I am self-studying Hoffman and Kunze's book Linear Algebra. This is Exercise 3 from page 111. Let [imath]S[/imath] be a set, [imath]\mathbb{F}[/imath] a field, and [imath]V(S,\mathbb{F})[/imath] the space of all functions from [imath]S[/imath] into [imath]\mathbb{F}:[/imath] [imath](f+g)(x)=f(x)+g(x)\hspace{0.5cm}(\alpha f)(x)=\alpha f(x).[/imath] Let [imath]W[/imath] be any [imath]n[/imath]-dimensional subspace of [imath]V(S,\mathbb F)[/imath]. Show that there exist points [imath]x_{1},\ldots,x_{n}\in S[/imath] and functions [imath]f_{1},\ldots, f_{n}\in W[/imath] such that [imath]f_{i}(x_{j})=\delta_{ij}[/imath]. Since [imath]W[/imath] is an [imath]n[/imath]-dimensional subspace of [imath]V(S,\mathbb{F})[/imath] we can find a basis [imath]\mathcal{B}=\{f_{1},\ldots, f_{n}\}[/imath]. But I got stuck here. I don't know what to do from now on. I mean, what should I do in order to find those points [imath]x_{1},\ldots,x_{n}\in S[/imath] such that [imath]f_{i}(x_{j})=\delta_{ij}[/imath]? PS: This is the section about the double dual. |