qid | Q | dup_qid | Q_dup
---|---|---|---|
2808138 | Besides [imath]3x - 1[/imath], [imath]5x + 1[/imath], which variants of the [imath]3x + 1[/imath] problem have been proven conclusively one way or the other?
Months ago, I asked In the [imath]x + 1[/imath] problem, does every positive integer [imath]x[/imath] eventually reach [imath]1[/imath]? I think it's fairly easy to adapt the arguments given in the answers to that one to prove that [imath]x - 1[/imath] always reaches [imath]1[/imath] as well. It's also well known that [imath]3x - 1[/imath], [imath]5x - 1[/imath] and [imath]5x + 1[/imath] all have readily found cycles that don't include [imath]1[/imath]. I think I also found cycles for [imath]7x + 1[/imath] but I seem to have misplaced the relevant notebook. My question today is: which other variants of [imath]3x + 1[/imath] have been studied and proven to always or not always reach [imath]1[/imath]? And is there a paper or a book that gathers a lot of the available research on the variants? | 1408656 | Besides the [imath]3x + 1[/imath] problem, for which similar problems are still unresolved regarding trajectory?
Generalize the [imath]3x + 1[/imath] problem as [imath]cx \pm 1[/imath], where [imath]c[/imath] is a positive odd integer and [imath]x[/imath] is a positive integer iterated through the function as far as possible to discover a cycle. If [imath]x[/imath] is even, then you halve it. But if [imath]x[/imath] is odd, you do either [imath]cx + 1[/imath] or [imath]cx - 1[/imath] as the case may be. (If you prefer, [imath]c[/imath] may be negative and you disallow [imath]cx - 1[/imath] for the odd branch; then [imath]|-3x + 1|[/imath] and [imath]3x - 1[/imath] are kind of the same). With [imath]3x - 1[/imath] and [imath]5x + 1[/imath] it is somewhat well-known that many [imath]x[/imath] don't lead to 1, while with [imath]3x + 1[/imath] the question is unresolved despite intense scrutiny by many professionals and amateurs. For which other [imath]cx \pm 1[/imath] is the question of ultimate arrival at 1 still undetermined despite study by more than a few people? I would appreciate journal articles that look at several different [imath]cx \pm 1[/imath]. |
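The iteration rule spelled out above is easy to experiment with numerically. Below is a minimal sketch (an editorial illustration, not part of either question; the function names and the caps on starting values and steps are arbitrary) that iterates the [imath]cx + 1[/imath] map from many starting values and records every cycle it closes, which is one way to hunt for cycles that avoid [imath]1[/imath] in variants such as [imath]5x + 1[/imath].

```python
def step(x, c):
    # One step of the cx + 1 map: halve if even, otherwise apply cx + 1.
    return x // 2 if x % 2 == 0 else c * x + 1

def find_cycles(c, max_start=100, max_steps=1000):
    """Return the set of cycles (as sorted tuples) reached from 1..max_start."""
    cycles = set()
    for x0 in range(1, max_start + 1):
        trajectory, seen = [], {}
        x = x0
        while x not in seen and len(trajectory) < max_steps:
            seen[x] = len(trajectory)
            trajectory.append(x)
            x = step(x, c)
        if x in seen:                       # the trajectory closed into a cycle
            cycles.add(tuple(sorted(trajectory[seen[x]:])))
    return cycles

# For c = 5 this prints the trivial cycle through 1 as well as cycles that avoid 1 (e.g. one through 13).
print(find_cycles(5))
```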
2844653 | Transforming a combination of sine and cosine functions to either sine or cosine function
Consider the expression [imath]a\sin\theta+b\cos\theta[/imath]. This expression consists of a combination of sine and cosine trigonometric functions which makes analysing it difficult. How can this expression be converted to an expression consisting of only sine or cosine functions? | 856580 | Why does [imath]A\sin{k(x+c)}=a\sin{kx}+b\cos{kx}[/imath] imply that [imath]A=\sqrt{a^2+b^2}[/imath] and [imath]\tan{c}=-b/a[/imath]?
I don't understand this. These identities are given in the online notes for MIT's 18.01 calculus class. It's related to taking the sum of two trig functions and transforming them into a single trig function. I can use the formulas to do this, but I am having trouble finding anything on the internet to use as a proof for my understanding. Thanks for any responses! EDITED (added after the first answer): Sorry, the identity goes like this: [imath]A\sin{k(x-c)}=a\sin{kx}+b\cos{kx}[/imath], with the relationships [imath]a=A\cos{kc}[/imath], [imath]b=-A\sin{kc}[/imath], [imath]A=\sqrt{a^2+b^2}[/imath], and [imath]\tan{kc}=-b/a[/imath] |
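The algebra behind these relationships is just the angle-addition formula: expanding [imath]A\sin(kx+\varphi)=A\cos\varphi\sin kx+A\sin\varphi\cos kx[/imath] and matching coefficients forces [imath]a=A\cos\varphi[/imath], [imath]b=A\sin\varphi[/imath], hence [imath]A=\sqrt{a^2+b^2}[/imath] and [imath]\tan\varphi=b/a[/imath] (the MIT notes quoted above write the phase as [imath]-kc[/imath], which flips the sign of the tangent). A quick numerical sanity check of this convention (a sketch; the test values are arbitrary):

```python
import numpy as np

a, b, k = 3.0, -2.0, 1.7             # arbitrary test values
A = np.hypot(a, b)                   # A = sqrt(a^2 + b^2)
phi = np.arctan2(b, a)               # chosen so that a = A cos(phi) and b = A sin(phi)

x = np.linspace(0, 10, 1001)
lhs = a * np.sin(k * x) + b * np.cos(k * x)
rhs = A * np.sin(k * x + phi)
print(np.max(np.abs(lhs - rhs)))     # ~1e-15: the two expressions agree
```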
2845938 | Rearranging a formula containing a non-square matrix
Given [imath]4 \times 2[/imath] matrix [imath]V[/imath] and [imath]4 \times 1[/imath] matrix [imath]K[/imath], solve the linear system [imath]V A = K[/imath] for [imath]A[/imath]. If V were a square matrix I could just multiply by the inverse, but I cannot since it is not. Apparently, the solution is of the form [imath]A=(V'*V)\(V' K)[/imath] where [imath]V'[/imath] is the transpose of [imath]V[/imath]. What is this called? The algebra seems to come out of nowhere. I know that the transpose of [imath]V[/imath] theoretically is cancelled out, but it seems like dividing, which is not allowed in matrix algebra. | 1797351 | Using inverse of transpose matrix to cancel out terms?
I am trying to solve the matrix equation [imath]A = B^TC[/imath] for [imath]C[/imath], where [imath]A[/imath], [imath]B[/imath], and [imath]C[/imath] are all non-square matrices. I know that I need to utilize [imath]M^TM[/imath] in order to take the inverse. I'm just not sure how to isolate [imath]C[/imath] in the equation provided. |
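The expression [imath]A=(V'V)\backslash(V'K)[/imath] is Matlab notation for the least-squares (normal-equations) solution of the overdetermined system [imath]VA=K[/imath]: multiply both sides on the left by [imath]V^T[/imath], and the square matrix [imath]V^TV[/imath] can then be inverted (when [imath]V[/imath] has full column rank). A minimal numerical sketch with made-up data, comparing the normal-equations formula against a library least-squares routine:

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.normal(size=(4, 2))              # tall matrix: 4 equations, 2 unknowns
K = rng.normal(size=(4, 1))

# Normal equations: V^T V A = V^T K, with V^T V a small invertible square matrix.
A_normal = np.linalg.solve(V.T @ V, V.T @ K)

# Same answer from numpy's built-in least-squares solver.
A_lstsq, *_ = np.linalg.lstsq(V, K, rcond=None)

print(np.allclose(A_normal, A_lstsq))    # True
```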
2846378 | Are these functions equal?
Consider the functions [imath]f:\mathbb R\to \mathbb R[/imath] and [imath]g:\mathbb R\to \mathbb R^+[/imath] (in this question [imath]\mathbb R^+[/imath] is the set of nonnegative real numbers). Let us define, for any [imath]x[/imath] in the domains of these functions, [imath]f(x)=x^2[/imath] and [imath]g(x)=x^2[/imath]. If we look at the set-theoretical definition of a function, it seems these functions must be equal, because [imath]f[/imath] contains all elements of [imath]g[/imath] and vice versa. But [imath]f[/imath] is not surjective while [imath]g[/imath] is. Isn't that a contradiction? Or maybe I made a mistake and these functions are not equal, so are these functions equal? | 1403122 | When do two functions become equal?
When do two functions become equal? I have stumbled over this definition of equality of functions in elementary real analysis. Let [imath]X[/imath] and [imath]Y[/imath] be two sets. Let [imath]f:X\rightarrow Y[/imath] and [imath]g:X\rightarrow Y[/imath] be two functions. [imath]f=g[/imath] iff [imath]f(x)=g(x)[/imath] for all [imath]x\in X[/imath]. Of course, this definition is so standard and I have no problem with it. However, as far as I know, this definition is actually a theorem. It can be proved in ZFC set theory. A function is just a set of ordered pairs (with some conditions). Two sets are equal if they have the same elements (the Axiom of Extensionality). With this assumption, one can prove the above definition. However, the proof does not require that the ranges of [imath]f[/imath] and [imath]g[/imath] have to be the same. Suppose that the ranges are different, let's say [imath]f:\mathbb{R}\rightarrow\mathbb{R}[/imath] defined by [imath]f(x)=x[/imath] and [imath]g:\mathbb{R}\rightarrow\mathbb{C}[/imath] defined by [imath]g(x)=x[/imath]. If we consider these functions as sets of ordered pairs, then they are just the set [imath]\lbrace (x,x):x\in\mathbb{R}\rbrace[/imath]. Thus they are equal by the Axiom of Extensionality. However, the equality also requires that if objects [imath]a[/imath] and [imath]b[/imath] are equal, then any property which is true for [imath]a[/imath] is also true for [imath]b[/imath]. In this case, we know that [imath]f[/imath] is a surjective, but [imath]g[/imath] isn't. Thus [imath]f[/imath] should not be equal to [imath]g[/imath] and hence a contradiction. But this is not a proof by contradiction to show that the ranges must be equal, everything assumed here is just axioms of ZFC and what is equality itself. So it looks like it is inconsistent. I have searched similar questions in this website, but there is no question or answer that relate to this. The most related answer would be [imath]f[/imath] and [imath]g[/imath] must have the same range so there would be no problem. But logically, this is just an additional assumption to restrict the ability to compare two functions. If they don't have the same range, then you can't compare it, or there will be a paradox. The problem for this answer is, it doesn't solve the above paradox. It just restricts itself to the situation that the paradox won't arise, but the inconsistency is still there. Lastly, I found another way to solve this problem in a set theory book. In the book, when one considers surjection, one have to specify which set the surjection is over. For example, one has to say whether it's surjection over [imath]\mathbb{R}[/imath] or surjection over [imath]\mathbb{C}[/imath]. In this case, the paradox won't arise because [imath]f[/imath] and [imath]g[/imath] are both surjective over [imath]\mathbb{R}[/imath] and not surjective over [imath]\mathbb{C}[/imath]. Thus the surjective property are the same for [imath]f[/imath] and [imath]g[/imath]. The only problem for this answer is, if one consider the property of 'surjection over its range' instead, then this property is true for [imath]f[/imath] but not for [imath]g[/imath], which implies that [imath]f\neq g[/imath] again. When do two functions become equal in general? Can anyone clarify this for me? Thank you in advance. |
777563 | Codomain of a function
At high school we were told that a function has a domain and a range, and that the function maps from the domain to the range, such that the domain contains all and only the possible inputs and the range contains all and only the possible outputs. Now at university I'm told a function has a domain and a codomain, and that the codomain contains all the possible outputs but may also include other numbers. What is the point of having values in the codomain that cannot be output by the function, and how does that aid in describing the function? Does this also mean that the domain can include numbers that are not inputs to the function? Surely this means you could say the codomain of any function (that outputs numbers) is the set of complex numbers (so all numbers)? EDIT: Wikipedia says the function [imath]f : x \rightarrow x^2[/imath] has codomain [imath]\mathbb{R}[/imath] but its image (what I guess I knew as range in high school) is [imath]\mathbb{R}^+_0[/imath], so why not just say the codomain is [imath]\mathbb{R}^+_0[/imath]? EDIT2: And is it also then true that if a function is "onto", the codomain is the same as the image? So surely any function can be "onto" if you just change what the codomain is? What I'm really trying to ask, I guess, is: the range/image of a function is defined by the function, so what defines the codomain? | 2866062 | Is codomain whatever we make?
For example, if I say [imath]f(x) = \ln \left\{ x \right\}[/imath] where [imath] \{ \cdot \}[/imath] denotes the fractional part function. Is there any way to know the codomain of this function? And Now if I define [imath]f : \mathbb{R} \to \mathbb{R}[/imath], now the codomain is [imath]\mathbb{R} [/imath]. So is it safe to say, codomain could be anything we want so long as it contains range, if there isn't a codomain already given? So, [imath] \sin : \mathbb{R} \to [-1,1][/imath] is as correct as writing [imath]\sin : \mathbb{R} \to \mathbb{R}[/imath]? So I take it that if domain and codomain aren't given, then I could also say Codomain [imath]\equiv[/imath] Range? EDIT : What I'm trying to ask is, if it's only a matter of codomain, then every function can be called surjective and conversely every function can be called into function? Which makes it all ambiguous. I have so many confusions with co-domain, but can anyone just explain me these for the time being? Help is appreciated :) |
871553 | Functions with different codomain the same according to my book?
My book gives the following definition: A function [imath]f[/imath] from [imath]A[/imath] to [imath]B[/imath] is defined as [imath]f\subseteq A\times B[/imath] such that if [imath](a,b)\in f[/imath] and [imath](a,b_1)\in f[/imath] then [imath]b=b_1[/imath], and there exists an [imath](a,b)\in f[/imath] for each [imath]a \in A[/imath]. Now according to this definition, the function would be the same even if the codomains are different. Is that true? | 2866897 | Why is [imath]\sin : \mathbb{R} \to [-5,5] [/imath] different from [imath]\sin : \mathbb{R} \to \mathbb{R}[/imath]?
My teacher says these two functions are different, why though? [imath]\sin : \mathbb{R} \to [-5,5] \tag{1}[/imath] [imath] \sin : \mathbb{R} \to \mathbb{R} \tag{2} [/imath] Both have the same domain and range. What difference does changing the codomain make here, so long as I keep the codomain as a superset of the range? More generally speaking, [imath]f : A \to B [/imath] and [imath]f: A \to C[/imath] where [imath]B[/imath] and [imath]C[/imath] are the codomain of the same function [imath]f[/imath] and are supersets of range of [imath] f[/imath] What difference would that make? How would changing the codomain (in this case) mean the functions are different? Isn't the function [imath]f[/imath] the same? |
2846731 | Show that if [imath](x_n)\rightarrow 2[/imath] then [imath](1/x_n)\rightarrow 1/2[/imath]
Given a convergent sequence [imath](x_n)\rightarrow 2[/imath], I am asked to prove [imath](1/x_n)\rightarrow 1/2[/imath] without the use of the algebraic limit theorem. Here is what I have tried: Let [imath]\epsilon>0[/imath] be arbitrary. We must prove that [imath]|1/x_n-1/2|<\epsilon[/imath]. Observe that [imath]|1/x_n-1/2|=\frac{|x_n-2|}{2|x_n|}[/imath] and we can make [imath]|x_n-2|[/imath] as small as we like. However, I'm not sure how to choose [imath]N[/imath], and what to do with the [imath]|x_n|[/imath] in the denominator. | 1059230 | Prove if [imath]\left\{x_n\right\}[/imath] converges to [imath]2[/imath], then [imath]\left\{\frac{1}{x_n}\right\}[/imath] converges to [imath]\frac{1}{2}[/imath]
We know that for all [imath]\varepsilon > 0[/imath], there exists an [imath]N \in \mathbb{N}[/imath] such that [imath]\lvert x_n - 2 \rvert < \epsilon[/imath] for all [imath]n \geq N[/imath], and we want to show that for all [imath]\varepsilon' > 0[/imath], there exists an [imath]N' \in \mathbb{N}[/imath] such that [imath]\left| \frac{1}{x_n} - \frac{1}{2} \right| < \epsilon'[/imath] for all [imath]n \geq N'[/imath]. Let [imath]\varepsilon = \varepsilon' - \frac{3}{2}[/imath]. Then by the triangle inequality, we have [imath]\left| \frac{1}{x_n} - \frac{1}{2} \right| \leq \left| \frac{1}{x_n}\right| + \left| -\frac{1}{2}\right|.[/imath] Because we proved earlier that [imath]x_n > 1[/imath] for all [imath]n \geq N[/imath], we know that [imath]\left| \frac{1}{x_n}\right| + \left| -\frac{1}{2}\right| < \left| x_n\right| + \left| -\frac{1}{2}\right|.[/imath] By the triangle inequality again, we know that [imath]\left| x_n - \frac{1}{2}\right| \leq \left| x_n\right| + \left| -\frac{1}{2}\right|.[/imath] Note that [imath]\left| x_n - \frac{1}{2}\right| = \left| (x_n - 2) + (2 - \frac{1}{2})\right|[/imath], so we get [imath]\left| (x_n - 2) + (2 - \frac{1}{2})\right| \leq \left| x_n - 2\right| + \left| 2 - \frac{1}{2}\right|.[/imath] Then we have [imath]\left| x_n - 2\right| + \left|2 - \frac{1}{2}\right| < \varepsilon + \frac{3}{2} = \left(\varepsilon' - \frac{3}{2}\right) + \frac{3}{2} = \varepsilon',[/imath] so [imath]\frac{1}{x_n} \rightarrow \frac{1}{2}[/imath]. I just realized that I don't know how to make sure that [imath]\varepsilon' - \frac{3}{2}[/imath] is a positive number. How can I do this? |
2847002 | What do Elements of the Tensor Product Look Like?
I'm looking for a tensor product (of abelian groups, let's say), and some elements [imath]a{\otimes}b[/imath] and [imath]c{\otimes}d[/imath] such that [imath]a{\otimes}b+c{\otimes}d[/imath] [imath]\neq e{\otimes}f[/imath] for any [imath]e{\otimes}f[/imath]. That is, I'd like to show that there are sometimes elements in a tensor product that are not of the form [imath]e{\otimes}f[/imath]. I can't find any examples in [imath]\mathbb{Z}{\otimes}\mathbb{Z}[/imath], or even in [imath](\mathbb{Z} \times \mathbb{Z} ){\otimes}\mathbb{Z}[/imath]. (Actually no such examples exist, simply because [imath]b[/imath] and [imath]d[/imath] are integers so you can put them into the first coordinates, then the second coordinates both become [imath]1[/imath] and you may add the two elements.) I think [imath](1,0){\otimes}(1,0)+(0,1){\otimes}(0,1)[/imath] is an example in [imath](\mathbb{Z} \times \mathbb{Z} ){\otimes}(\mathbb{Z} \times \mathbb{Z} )[/imath]. But to prove this, I have to show that [imath]((1,0),(1,0))+((0,1),(0,1))-(e,f)[/imath] is never a linear combination of the tensor relations. This seems very unwieldy; is there a better method invoking the universal property? | 202907 | Does every element of tensor product look like this?
If [imath]V\otimes W[/imath] is the tensor product of vector spaces V and W, I know that for any basis [imath](v_i)_{i\in I}[/imath] of V and [imath](w_j)_{j\in J}[/imath] of W, [imath](v_i\otimes w_j)_{i\in I,j\in J}[/imath] is a basis of [imath]V\otimes W[/imath], so any [imath]a\in V\otimes W[/imath] is a linear combination of some vectors [imath]v_i\otimes w_j[/imath]. But how can I prove that for every [imath]a\in V\otimes W[/imath] there exists [imath]n\in \mathbb{N}[/imath] and linear independent sets [imath]\{v_1',...v_n'\}\in V[/imath] and [imath]\{w_1',...w_n'\}\in W[/imath] such that [imath]a=v_1'\otimes w_1'+v_2'\otimes w_2'+...+v_n'\otimes w_n'?[/imath] This exercise is killing me: I have been trying to think of some way to construct these vectors starting with some fixed pair of bases and the upper fact, but I can't get anywhere! Help! |
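For vector spaces (the setting of the second question) there is a concrete way to see that [imath](1,0){\otimes}(1,0)+(0,1){\otimes}(0,1)[/imath] is not a simple tensor: identify [imath]\mathbb{R}^2\otimes\mathbb{R}^2[/imath] with [imath]2\times 2[/imath] matrices via [imath]v\otimes w\mapsto vw^T[/imath]; simple tensors correspond exactly to matrices of rank at most [imath]1[/imath], and the element above corresponds to the identity matrix, which has rank [imath]2[/imath]. A tiny sketch of that computation (an editorial illustration, not part of either question):

```python
import numpy as np

def simple_tensor(v, w):
    # Under the identification R^2 (x) R^2 ~ 2x2 matrices, v (x) w becomes the outer product v w^T.
    return np.outer(v, w)

t = simple_tensor([1, 0], [1, 0]) + simple_tensor([0, 1], [0, 1])
print(t)                          # the 2x2 identity matrix
print(np.linalg.matrix_rank(t))   # 2, so t cannot equal any single outer product (rank <= 1)
```

The same rank argument adapts to [imath](\mathbb{Z}\times\mathbb{Z})\otimes(\mathbb{Z}\times\mathbb{Z})[/imath] by viewing the integer matrix over [imath]\mathbb{Q}[/imath].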
2846799 | If A is primitive, is AA' primitive?
Let [imath]A[/imath] be a primitive matrix (a square nonnegative matrix some power of which is positive). Is [imath]AA^T[/imath] necessarily primitive? | 2757515 | Product of a primitive matrix and its transpose.
Is it true that if [imath]A[/imath] is a nonnegative primitive matrix, then [imath]AA^T[/imath] is also primitive? Obviously [imath]A^T[/imath] is primitive, but in general the product of primitive matrices is not primitive. Any hint? |
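As a quick numerical probe (a sketch, not a proof; the [imath]3\times 3[/imath] example matrix is an arbitrary primitive choice), one can use the definition directly: a nonnegative square matrix is primitive when some power of it is entrywise positive, so just power up both [imath]A[/imath] and [imath]AA^T[/imath] and look.

```python
import numpy as np

def primitive_power(M, max_power=50):
    """Return the first exponent k with M^k entrywise positive, or None."""
    P = np.eye(len(M))
    for k in range(1, max_power + 1):
        P = P @ M
        if np.all(P > 0):
            return k
    return None

A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 1, 0]], dtype=float)   # primitive: its graph has cycles of lengths 2 and 3

print(primitive_power(A))         # a small exponent, confirming A is primitive
print(primitive_power(A @ A.T))   # probes whether A A^T also has a positive power
```

For this particular [imath]A[/imath] the second call comes back empty: the entry [imath](AA^T)_{ij}[/imath] is positive exactly when vertices [imath]i[/imath] and [imath]j[/imath] share a common out-neighbour, and here vertex [imath]2[/imath] shares none with the others, so [imath]AA^T[/imath] is reducible and hence not primitive even though [imath]A[/imath] is. Small experiments like this are a cheap way to look for counterexamples before attempting a proof.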
832598 | Rational Points in circle
How many rational points (a point [imath](a, b)[/imath] is called rational if [imath]a[/imath] and [imath]b[/imath] are rational numbers) can exist on the circumference of a circle having centre [imath](\pi, e)[/imath]? | 315761 | Rational points on a circle
A circle is centred at [imath](\pi,e)[/imath]. What is the maximum no. of rational points it can have? (A rational point is one with both coordinates rational). 1 rational point is definitely possible, just choose any rational point, and alter the radius to get it through. My book says that only one rational point is possible, as [imath]\pi\neq qe\quad q\in Q[/imath]. That's their whole explanation. I don't understand how that's enough. Edit: It has been pointed out that the problem is equivalent to showing [imath]q_1\pi+q_2e=q_3[/imath] has no non trivial solutions. Is this known to be true? Can someone prove it in an elementary way? |
2846556 | Homotopy in [imath]S^1[/imath]
Let [imath]f: S^1 \to S^1[/imath] be a continuous map, where [imath]S^1[/imath] is the 1-sphere. Prove that if there is a homotopy between [imath]f[/imath] and a constant map, then [imath]f[/imath] has a fixed point. I don't know how to prove it. I tried the Brouwer fixed point theorem, but I don't think it works in [imath]S^1[/imath]. | 1035575 | Circle to circle homotopic to the constant map?
How to prove that a continuous function, homotopic to the constant map [imath]f:S^1\to S^1[/imath] (a) has a fixed point and that (b) has a point [imath]x[/imath], such that [imath]f[/imath] maps [imath]x[/imath] to its antipodal point [imath]-x[/imath]? |
2840214 | Defining [imath]\mathbb{C}[/imath] without defining [imath]\mathbb{R}[/imath]
To define [imath]\mathbb{R}[/imath], one approach is to start with [imath]\mathbb{N}[/imath] and then systematically introduce [imath]\mathbb{Z},[/imath] [imath]\mathbb{Q}[/imath], and then [imath]\mathbb{R}[/imath]. Alternatively, we define [imath]\mathbb{R}[/imath] using its axiomatic definition as a complete ordered field. And then we can construct [imath]\mathbb{N}[/imath], [imath]\mathbb{Z}[/imath] and [imath]\mathbb{Q}[/imath] within [imath]\mathbb{R}[/imath]. Now, [imath]\mathbb{C}[/imath] is introduced in the first way, by making [imath]\mathbb{R}^2[/imath] into a field. Can we follow the latter approach of defining [imath]\mathbb{C}[/imath] axiomatically, and then construct [imath]\mathbb{R}[/imath] inside it? | 1084893 | Can we construct [imath]\Bbb C[/imath] without first identifying [imath]\Bbb R[/imath]?
Sometimes it is useful to consider [imath]\Bbb C[/imath] as our primitive and identify [imath]\Bbb R[/imath] as a subset of [imath]\Bbb C[/imath]. Thus we can define [imath]\Bbb R[/imath] (or at least a set with all of the interesting properties of [imath]\Bbb R[/imath]) from [imath]\Bbb C[/imath]. This suggests to me that there is some way of constructing [imath]\Bbb C[/imath] without first constructing (or taking as a primitive) [imath]\Bbb R[/imath]. However, I've never seen such a construction of [imath]\Bbb C[/imath] (a quick Google search didn't provide me one, either). I've seen the Cayley-Dickson construction and the matrix construction many times, but are they the only known ways of constructing [imath]\Bbb C[/imath]? My question: Is there a way to construct the set of complex numbers without already having (or first constructing) the real numbers? |
601835 | Finding elements of a direct sum ring so that ab, ac, bc are zero divisors.
Find elements [imath]a,b[/imath], and [imath]c[/imath] in the ring [imath]\mathbb Z \oplus\mathbb Z \oplus\mathbb Z [/imath] such that [imath]ab[/imath], [imath]ac[/imath], and [imath]bc[/imath] are zero divisors but [imath]abc[/imath] is not a zero divisor. I am not sure how to approach this problem; guidance is appreciated. | 2847301 | Zero divisors of [imath]\mathbb{Z}×\mathbb{Z}×\mathbb{Z}[/imath]
Find elements [imath]a,b,[/imath] and [imath]c[/imath] in the ring [imath]\mathbb{Z}×\mathbb{Z}×\mathbb{Z}[/imath] such that [imath]ab, ac,[/imath] and [imath]bc[/imath] are zero divisors but [imath]abc[/imath] is not a zero divisor. Work: [imath]a=(1,1,0)[/imath] [imath]b=(1,0,1)[/imath] [imath]c=(0,1,1)[/imath] Why this works: because [imath]ab=(1,0,0)\neq(0,0,0)[/imath]. Definition of zero divisor: a zero divisor is a non-zero element [imath]a[/imath] of a commutative ring [imath]R[/imath] such that there is a non-zero [imath]b \in R[/imath] with [imath]ab=0[/imath]. Any hint or suggestion will be appreciated. |
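The proposed triple is easy to check mechanically; the sketch below (an editorial illustration) just computes the componentwise products so they can be compared against the definition quoted above.

```python
from itertools import combinations

def mult(x, y):
    # Componentwise multiplication, the ring product in Z x Z x Z.
    return tuple(xi * yi for xi, yi in zip(x, y))

a, b, c = (1, 1, 0), (1, 0, 1), (0, 1, 1)

for x, y in combinations((a, b, c), 2):
    print(x, "*", y, "=", mult(x, y))    # (1,0,0), (0,1,0), (0,0,1): each nonzero with a zero slot
print("abc =", mult(mult(a, b), c))      # (0, 0, 0)
```

Each pairwise product is nonzero but annihilates some nonzero element (for instance [imath](1,0,0)\cdot(0,1,0)=(0,0,0)[/imath]), so it is a zero divisor; the triple product comes out to be the zero element, which under the quoted definition (zero divisors are required to be non-zero) is not a zero divisor.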
2847352 | Show that [imath]g[/imath] is continuous
This question is from an old Ph.D Qualifying Exam for Complex Analysis. Let [imath]\Omega\subset\mathbb{C}[/imath] be an open set. Suppose that [imath]f[/imath] is holomorphic in [imath]\Omega[/imath]. Define [imath]g[/imath] on [imath]\Omega\times\Omega[/imath] by [imath]g(z,w)= \begin{cases} \dfrac{f(w)-f(z)}{w-z}, & w\neq z \\ f'(z), & w=z \end{cases}[/imath] Show that [imath]g[/imath] is continuous in [imath]\Omega\times\Omega[/imath]. My attempt: If [imath]w\neq z[/imath] then [imath]g[/imath] is clearly continuous, so we just have to consider the case of [imath]w=z[/imath]. Clearly, for a fixed [imath]a\in \Omega[/imath], [imath]\lim_{w\to a}(\lim_{z\to a}g(z,w))=f'(a)[/imath], but the double limit need not be identical to the joint limit [imath]\lim_{(z,w)\to (a,a)}[/imath], so I'm not sure this is right. Does anyone have ideas? | 2188817 | Proving that this function is continuous on [imath]G\times G[/imath]
Let [imath]G\subset \mathbb{C}[/imath] be a non-empty open set and [imath]f[/imath] be a function holomorphic on [imath]G[/imath]. Let [imath]g: G\times G\to \mathbb{C}[/imath] be a function defined as [imath]g(z,w)= \begin{cases} \frac{f(z)-f(w)}{z-w}, & z\ne w \\ f'(z), & z=w \end{cases}[/imath] I need to prove that [imath]g[/imath] is continuous on [imath]G\times G[/imath]. My approach: Let [imath]\varepsilon > 0[/imath]. Then [imath]\exists \delta_1, \delta_2>0[/imath] such that [imath]\| (z,w)-(z_0,w_0) \|^2\le|z-z_0|+|w-w_0|<\delta_1+\delta_2[/imath] implies that [imath]|f(z)-f(z_0)|<\epsilon[/imath], [imath]|f(w)-f(w_0)|<\epsilon[/imath], since [imath]f[/imath] is continuous on [imath]G[/imath]. Now, [imath]\| g(z,w)-g(z_0,w_0) \|=\left| \frac{f(z)-f(w)}{z-w} - \frac{f(z_0)-f(w_0)}{z_0-w_0} \right|\le \left| \frac{f(z)-f(w)}{z-w}\right| + \left|\frac{f(z_0)-f(w_0)}{z_0-w_0} \right|[/imath] Since [imath]f[/imath] is holomorphic on [imath]G[/imath], it is continuous on [imath]G[/imath], thus [imath]\exists \delta'>0[/imath] such that [imath]| {f(z)-f(w)}|<\varepsilon'|z-w|<\varepsilon'\delta'[/imath] whenever [imath]|z-w|<\delta'[/imath]. And here's where I'm stuck, because it's not clear how to deal with [imath]\left|\frac{f(z_0)-f(w_0)}{z_0-w_0} \right|[/imath], because [imath]z_0, w_0[/imath] are fixed points, so we can't just make them approach each other. Of course, my first thought was to somehow rearrange the inequality in such a way as to obtain [imath]\left| \frac{f(z)-f(z_0)}{z-z_0}\right| + \left|\frac{f(w)-f(w_0)}{w-w_0} \right|[/imath], but I don't see how to do so. Your help would be appreciated. |
2847943 | Does this spoof integral from a meme have a coherent solution [imath]\int\frac {3x^3 -x^2 + 2x-4}{\sqrt{x^2-3x+2}}dx[/imath]
I ask because I have what I think is a solution, but it doesn't fit the spirit of the joke, which is to say, it doesn't produce anything like a pin. The solution I get is the equation below: [imath]-\frac{135\sqrt2\ln(3-2\sqrt2)+404}{16\sqrt2}[/imath] However, this solves to a transcendental number, beginning: [imath]-2.98126694400553644032103778411344302709190188721887186739\cdots[/imath] So, I'm appealing for better or more informed solutions if I have made a mistake. Does the equation solve to something nice and friendly, or is the whole thing just a wheeze? | 2814179 | How to integrate the product of two or more polynomials raised to some powers, not necessarily integral
This question is inspired by my own answer to a question which I tried to answer and got stuck at one point. The question was: HI DARLING. USE MY ATM CARD, TAKE ANY AMOUNT OUT, GO SHOPPING AND TAKE YOUR FRIENDS FOR LUNCH. PIN CODE: [imath]\displaystyle \int_{0}^{1} \frac{3x^3 - x^2 + 2x - 4}{\sqrt{x^2 - 3x + 2}} \, dx [/imath] I LOVE YOU HONEY. Anyone knows? Are we gonna get an integer number? My attempt: Does this help? [imath]\frac{3x^3-x^2+2x-4}{x-1}=3x^2+2x+4[/imath] (long division) \begin{align*} I&=\int\frac{3x^3-x^2+2x-4}{[(x-1)(x-2)]^{1/2}} dx = \\ &=\int\frac{(3x^2+2x+4)(x-1)^{1/2}}{(x-2)^{1/2}} dx = \\ &=\int 3(u^4-4u^2-4)(u^2+1)^{1/2}du \times 2 \end{align*} after the substitution \begin{gather*} (x-2)^{1/2}=u\\ du=\frac1{2(x-2)^{1/2}}dx\\ u^2=x-2\\ (x-1)^{1/2}=(u^2+1)^{1/2} \end{gather*} Update: This may help us proceed. I tried to proceed: [imath]6\int (u^4-4u^2-4)(u^2+1)^{1/2} du = 6\int ((t-3)^2-8)t \frac{dt}{2u}[/imath] after [imath]u^2+1=t[/imath] and [imath]dt=2udu[/imath] \begin{align*} u^4-4u^2-4 &= (u^2+1)^2-(6u^2+5) \\ &= (u^2+1)^2-6(u^2+1)+1 \\ &= ((u^2+1)-3)^2-8 \end{align*} I wonder whether this question can be solved from here? Update: This has been getting a lot of views, and I think most people came for the sort of problem mentioned in the title (where I got stuck) rather than the original problem itself. Keepin this in mind, I'm reopening the question and here's the kind of answers I expect — Solutions to the original problem are good, but I'd prefer solutions that continue from the part where I got stuck — the polynomial in [imath]u[/imath] — that's the sort of problem mentioned in the title. |
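For a quick numerical cross-check of the definite version quoted at the top, [imath]\int_{0}^{1} \frac{3x^3 - x^2 + 2x - 4}{\sqrt{x^2 - 3x + 2}} \, dx[/imath], one can simply evaluate it with an adaptive quadrature routine and compare the result against the decimal expansion of any closed form obtained by hand (a sketch; on [imath][0,1][/imath] the expression under the square root is nonnegative and the integrand extends continuously to [imath]x=1[/imath], so ordinary quadrature is enough).

```python
import numpy as np
from scipy.integrate import quad

def integrand(x):
    return (3 * x**3 - x**2 + 2 * x - 4) / np.sqrt(x**2 - 3 * x + 2)

value, err = quad(integrand, 0, 1)
print(value, err)   # compare against the decimal value of any proposed closed form
```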
2841957 | [imath]a_{n+1}=4a_n+n+1[/imath] Find Closed Form
\begin{cases} a_{n+1}=4a_n+n+1\\ a_0=1\\ \end{cases} a. find [imath]\alpha,\beta[/imath] such that if we plug in [imath]b_n=a_n+\alpha n+\beta[/imath] we will get [imath]b_{n+1}=4b_n[/imath] b. find [imath]b_n[/imath] explicitly So we first plug in the guess [imath]a_{n+1}=4a_n+n+1[/imath] but [imath]a_n=b_n-\alpha n-\beta[/imath] so [imath]a_{n+1}=4(b_n-\alpha n-\beta)+n+1=4b_n+n(1-4\alpha)+1-4\beta[/imath] then we look at [imath]1-4\alpha=0\iff \alpha=\frac{1}{4}[/imath] and [imath]1-4\beta=0\iff \beta=\frac{1}{4}[/imath] (Why do we do it?) So we got [imath]b_n=a_n+\frac{1}{4} n+\frac{1}{4}[/imath] looking at [imath]n=0[/imath] we get [imath]b_0=0+n*0+\frac{1}{4}[/imath] so [imath]b_0=\frac{1}{4}[/imath] a. how to continue? b. is there a place I can read about this process? I was just given examples which I try to learn from, but I am trying to understand. | 1408256 | Matlab questions about [imath]a_{n+1} = 4a_{n} + n +1[/imath]
Solve the equation with Matlab Given: [imath]a_{n+1} = 4a_{n} + n +1[/imath] Start condition: [imath]a_{i}=0[/imath] Questions: a) Find x,y for: [imath]b_{n} = a_{n}+xn+y[/imath] such that: [imath]b_{n+1}=4b_{n}[/imath] b) Find an explicit formula for [imath]b_{n}[/imath] c) Write recursive function with input n I'll be very grateful for your help! |
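As a cross-check one can just run the recurrence and compare it with a closed form (a sketch assuming the starting value [imath]a_0=1[/imath] from the first question; note that [imath]a_{n+1}=b_{n+1}-\alpha(n+1)-\beta[/imath], so once the index shift is tracked the constants that actually make [imath]b_{n+1}=4b_n[/imath] work out to [imath]\alpha=\tfrac13[/imath], [imath]\beta=\tfrac49[/imath], giving [imath]b_n=\tfrac{13}{9}\cdot 4^n[/imath] and [imath]a_n=\tfrac{13\cdot 4^n-3n-4}{9}[/imath]).

```python
def a_recursive(n):
    a = 1                           # a_0 = 1
    for k in range(n):
        a = 4 * a + k + 1           # a_{k+1} = 4 a_k + k + 1
    return a

def a_closed(n):
    # From b_n = a_n + n/3 + 4/9, b_{n+1} = 4 b_n and b_0 = 13/9; the division below is exact.
    return (13 * 4**n - 3 * n - 4) // 9

print(all(a_recursive(n) == a_closed(n) for n in range(20)))   # True
```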
2591195 | Combination on the different sum of money
How many different sums of money can be made from a penny, a nickel (5 pennies), a dime (10 pennies) and a quarter (25 pennies)? I am having trouble sorting out the answer. As far as I have made progress, my answer would be [imath]\binom{4}{2}[/imath], but I still have doubts. | 121940 | Combinations of a penny, nickel, dime, and quarter
You have one penny, one nickel, one dime, and one quarter. How many different amounts of money can you make using one or more of these coins? Please help me! I'm having trouble. I know the obvious ones like [imath]5,10,25,1,15,26,36,16,\ldots[/imath] Help? And I need to know the coin combinations too. I found [imath]15[/imath] different ones! Can anyone find more? |
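The count is small enough to enumerate outright. A short sketch (an editorial illustration) listing every non-empty subset of [imath]\{1, 5, 10, 25\}[/imath] and the distinct totals, which confirms the count of [imath]15[/imath] mentioned above; since all [imath]2^4-1=15[/imath] subset sums turn out to be distinct here, the answer is exactly [imath]15[/imath].

```python
from itertools import combinations

coins = [1, 5, 10, 25]
sums = sorted({sum(s) for r in range(1, len(coins) + 1)
                      for s in combinations(coins, r)})
print(len(sums), sums)
# 15 [1, 5, 6, 10, 11, 15, 16, 25, 26, 30, 31, 35, 36, 40, 41]
```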
2848498 | Partial minimization of function over variables
In Boyd and Vandenberghe's textbook on Convex Optimization, it is claimed that: We always have [imath]\inf_{x,y} f(x, y) = \inf_{x}\tilde{f}(x)[/imath] where [imath]\tilde{f}(x) = \inf_y f(x, y).[/imath] In other words, we can always minimize a function by first minimizing over some of the variables, and then minimizing over the remaining ones. My question is the following (since it is unclear to me from the context): does this apply only to convex functions or to every function? | 1293761 | Minimize multi-variable function one variable at a time
I am wondering if I can minimize a multi-variable function one variable at a time. In other words, is it true that: [imath]min_{x_1,x_2} f(x_1,x_2)=min_{x_1} min_{x_2} f(x_1,x_2)[/imath] |
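The identity is purely set-theoretic and holds for every function, convex or not: every value [imath]f(x,y)[/imath] is at least [imath]\tilde f(x)\ge \inf_x \tilde f(x)[/imath], and conversely [imath]\tilde f(x)[/imath] can be approximated arbitrarily well by values [imath]f(x,y)[/imath], so the two infima coincide. A tiny numerical illustration on a deliberately non-convex example (a sketch; the grid and test function are arbitrary):

```python
import numpy as np

f = lambda x, y: np.sin(3 * x) * np.cos(2 * y) + 0.1 * x**2   # not convex

xs = np.linspace(-3, 3, 601)
ys = np.linspace(-3, 3, 601)
X, Y = np.meshgrid(xs, ys, indexing="ij")
Z = f(X, Y)

joint_min = Z.min()                        # minimize over (x, y) at once
nested_min = Z.min(axis=1).min()           # minimize over y first (tilde f), then over x
print(np.isclose(joint_min, nested_min))   # True
```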
2848101 | Proof by Induction: If [imath]x_1x_2\dots x_n=1[/imath] then [imath]x_1 + x_2 + \dots + x_n\ge n[/imath]
If [imath]x_1,x_2,\dots,x_n[/imath] are positive real numbers and if [imath]x_1x_2\dots x_n=1[/imath] then [imath]x_1 + x_2 + \dots + x_n\ge n[/imath] There is a step in which I am confused. My proof is as follows (it must be proven using induction). By induction, for [imath]n=1[/imath] then [imath]x_1=1[/imath] and certainly [imath]x_1\ge1[/imath]. Suppose [imath]x_1 + x_2+\dots+x_n\ge n[/imath] (does this mean that [imath]x_1x_2\dots x_n = 1[/imath] also holds?) and [imath]x_1x_2\dots x_n x_{n+1} = 1[/imath] hold. Then \begin{align} x_1 + x_2 + \dots+x_n+x_{n+1} &\ge n + x_{n+1} \\ &=n+2-1/x_{n+1} \\ &=n+2-x_1x_2\dots x_n. \end{align} My problem is in the last step. As I wrote before, I don't think [imath]x_1x_2\dots x_n = 1[/imath] should hold, because if this is the case then [imath]x_{n+1}=1[/imath]. EDIT: In the intermediate step I used that [imath]x + 1/x \ge 2[/imath], where [imath]x>0[/imath]. | 2576298 | [imath]x_1 \cdot \dots \cdot x_n=1 \implies x_1 + \dots + x_n \ge n[/imath]
Suppose [imath]x_1, \dots, x_n[/imath] are positive real numbers such that [imath]x_1 \cdot \ldots \cdot x_n = 1[/imath]. Prove [imath]x_1 + \dots + x_n \ge n[/imath]. I don't want to use the AM-GM inequality to prove this (from which this statement would follow). I suppose induction is the way to go. The base case is true. The case for [imath]n=2[/imath] is true by [imath]x_1 + x+2 - 2 \sqrt{x_1 x_2} = (\sqrt{x_1} - \sqrt{x_n})^2 \ge 0.[/imath] Now suppose the statement holds for [imath]n[/imath], then [imath]x_1 \cdot \ldots \cdot x_n = 1 \implies x_1 + \dots + x_n \ge n.[/imath] So if [imath]x_1 \cdot \ldots \cdot (x_n x_{n+1}) = 1[/imath], then [imath] x_1 + \dots + x_n x_{n+1} \ge n. [/imath] Then we also have [imath] x_1 + \dots + x_{n-1} + x_n + x_{n+1} \ge n - x_{n}x_{n+1} + x_n + x_{n+1}. [/imath] Now I'm not sure where else to go from here. |
2848438 | Possible outcomes of "four of kind"
I have been working on my probability. I have "possibly" memorized how to solve it, but I really want to understand it with my heart. But... I am kinda stuck. I am trying to find the number of possible outcomes of four of a kind (unordered). Here is my intuition: [imath]\frac{52 \cdot 3 \cdot 2 \cdot 1 \cdot 48}{5!}[/imath] I do not see why this is wrong. Please help me out! | 2287479 | What is the probability that a five-card poker hand contains "four of a kind hand"?
Question: What is the probability that a five-card poker hand contains a "four of a kind" hand? For more information about poker's four of a kind hand, please refer here. My Attempt: Ranks of cards: Jack, King, Queen, Ace, [imath]9,8,\dots,1[/imath]. In a four of a kind hand, [imath]4[/imath] cards are of the same rank (and obviously of different suits). [imath]\Rightarrow[/imath] Now in a deck of [imath]52[/imath] cards, we have [imath]13[/imath] ranks in [imath]4[/imath] suits (spade, heart, diamond, club). [imath]\Rightarrow[/imath] A four of a kind hand says [imath]4[/imath] cards are of the same rank. So among the [imath]13[/imath] ranks, select [imath]1[/imath] rank, i.e. [imath]\binom{13}{1}[/imath], and the remaining last card can be anything. I am not getting how to move forward; please help me out. Thanks in advance. |
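The standard count can be verified both with the combinatorial formula and by brute-force enumeration of all [imath]\binom{52}{5}[/imath] hands (a sketch; the deck encoding is arbitrary and the enumeration takes a little while in pure Python):

```python
from itertools import combinations
from math import comb
from collections import Counter

formula = 13 * 48                       # choose the rank of the quadruple, then the fifth card
print(formula, formula / comb(52, 5))   # 624 hands, probability ~ 0.00024

deck = [(rank, suit) for rank in range(13) for suit in range(4)]
count = sum(1 for hand in combinations(deck, 5)
            if 4 in Counter(rank for rank, _ in hand).values())
print(count)                            # 624 again, over all 2,598,960 hands
```

One way to see why the attempt quoted above does not give an integer: [imath]52\cdot3\cdot2\cdot1\cdot48[/imath] counts ordered draws in which the four matching cards occupy the first four positions, so it misses the [imath]5[/imath] possible positions of the odd card, and dividing by [imath]5![/imath] then over-corrects by that factor of [imath]5[/imath].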
1373568 | Functions with rank [imath]n[/imath].
An open set [imath]U\subset \mathbb{R}^n[/imath] contains the closed origin-centered unit ball [imath]B=B(0,1)[/imath]. If a [imath]C^1[/imath] mapping [imath]f:U\rightarrow \mathbb{R}^n[/imath] with rank [imath]n[/imath] obeys [imath]\|f(x)-x\|<1/2[/imath] for all [imath]x\in U[/imath], show that: a) [imath]\|f\|^2[/imath] must attain a minimum in the interior of [imath]B[/imath]; b) [imath]f(p)=0[/imath] for some [imath]p\in B[/imath]. Honestly, I don't know how to solve such problems. I am planning to take an exam in a few weeks which contains such problems, so I want to learn how to solve this, and I would appreciate it if someone could help me. First of all, what can be inferred from the fact that [imath]f[/imath] is of rank [imath]n[/imath]? Does it imply that [imath]f'[/imath] is invertible? | 1348630 | Show that [imath]\|f\|^{2}[/imath] attains a minimum value on the interior of [imath]B[/imath]
I am looking for any help, hints, or suggestions in how to go about this problem from a previous qualifying exam. We are given a smooth mapping [imath]f: U \rightarrow \mathbb{R}^{n}[/imath] whose differential [imath]df_{p}: \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}[/imath] is of rank [imath]n[/imath] and we have that [imath]\|f(x)-x\|<\frac{1}{2}[/imath]. Here [imath]U \subset \mathbb{R}^{n}[/imath] is an open subset which contains the closed unit ball [imath]B=B_{1}(0)[/imath]. The problem asks us to show that [imath]\|f\|^{2}[/imath] attains a minimum on the interior of [imath]B[/imath]. As of right now, the only thing that comes to mind is the inverse function theorem, that there is an open subset [imath]V \subset U[/imath] where [imath]f[/imath] is an diffeomorphism on. |
2848392 | Endomorphisms of [imath]\mathbb{H}[/imath] that do not fix [imath]\mathbb{R}[/imath]
Are there any nontrivial field endomorphisms of [imath]\mathbb{H}[/imath] that are not the identity map when restricted to [imath]\mathbb{R}[/imath]? I realize that the only nontrivial endomorphism on [imath]\mathbb{R}[/imath] is the identity map, so if the desired function [imath]f[/imath] exists, it must be that [imath]f(\mathbb{R}) \ne \mathbb{R}[/imath]. Are there any such mappings? | 61216 | Is every endomorphism of the quaternion ring surjective?
Is the quaternion ring an EAS Division ring? An EAS Division ring is a ring [imath]D[/imath] such that each endomorphism of [imath]D[/imath] is surjective. I know that [imath]\mathbb{R}[/imath] and [imath]\mathbb{Q}[/imath] are EAS Division rings. |
2746382 | Length of 1D curves using 2D integration with [imath]\delta[/imath]
Let the equation [imath]f(x,y)=0[/imath] describe some simple curve in the [imath]x,y[/imath]-plane. I had the expectation that the length of this curve between [imath]x_1[/imath] and [imath]x_2[/imath] could be written as [imath] \int_{x_1}^{x_2} dx\int_{-\infty}^\infty dy \delta(f(x,y)),[/imath] in the sense that the [imath]\delta[/imath] would pick up the points in the curve. This works for a circumference, [imath]f(x,y)=\sqrt{x^2+y^2}-R[/imath] since [imath]\int_{-\infty}^{\infty} dx\int_{-\infty}^\infty dy \delta(\sqrt{x^2+y^2}-R)=2\pi \int_0^\infty rdr\delta(r-R)=2\pi R.[/imath] Does it work in general? The length should be [imath] \int_{x_1}^{x_2} dx \sqrt{1+\left(\frac{dy}{dx}\right)^2},[/imath] I don't see that this equals my first integral. | 2803475 | Measuring a curve with Dirac delta function.
Formally, if I want to measure the length of a closed curve [imath]f(x,y) = 0[/imath], I presumed I could write: [imath] L = \int^\infty_{-\infty}\int^\infty_{-\infty} \delta( f(x,y) )\, dx\, dy, [/imath] but trying this out I don't think this works. What is wrong with this formula? Edit: Am I missing a measure like a Jacobian or something? How can you prove this? |
2849239 | Zeros of a certain complex polynomial lie on the unit circle
I just picked up Wilhelm Schlag's book A Course in Complex Analysis and Riemann Surfaces, and I've been stuck on a Exercise 1.3: Let [imath]P(z)=\sum_{j=0}^{n}a_jz^j[/imath] be a polynomial of degree [imath]n\geq 1[/imath] with all roots inside the unit circle [imath]|z|<1[/imath]. Define [imath]P^*(z)=z^n\overline{P}(z^{-1})[/imath] where [imath]\overline{P}(z)=\sum_{j=0}^{n}\overline{a}_jz^j[/imath]. Show that all roots of [imath]P(z) + P^*(z)=0[/imath] lie on the unit circle [imath]|z|=1[/imath]. Here's what I've worked out so far. By the fundamental theorem of algebra we can write [imath]P(z)=a_n\prod_{j=1}^{n}(z-z_j),[/imath] where [imath]z_j[/imath] are the roots of [imath]P[/imath]. By definition of [imath]P^*[/imath] we see that [imath]P^*(z)=\overline{a}_n\prod_{j=1}^{n}(1-z\overline{z}_j)[/imath], which immediately yields (again by the fundamental theorem of algebra) [imath]P^*(z)=\overline{a}_0\prod_{j=1}^{n}(z-\overline{z}_j^{-1}).[/imath] So we see that the roots of [imath]P^*[/imath] are just the inversions of the roots of [imath]P[/imath] around the unit circle. There is something a little funny about this: if zero is a root of [imath]P[/imath] then there is no associated root of [imath]P^*[/imath] (this makes some sense since polynomials can't vanish at [imath]\infty[/imath]), and so the degree of the polynomial drops by one in this case? Now we can compute [imath](P+P^*)^*=(P+P^*)[/imath] and so by the above arguments all of the roots of [imath]P+P^*[/imath] are symmetric (in the sense of inversions) around the unit circle. Now here's where I get stuck... I'm not sure how to use the assumption that the roots of [imath]P[/imath] (denoted above by [imath]z_j[/imath]) lie inside the unit disk to show that the roots of [imath]P+P^*[/imath] must lie exactly on the unit disk... Hints on where to go from here are much more appreciated than complete solutions! | 1653319 | Prove the roots of [imath]p(z)+z^n\bar{p}(\frac{1}{z})[/imath] lie on the unit circle
I have to prove the following question from "A course in complex analysis and riemann surfaces": Let [imath]p(z)=\sum_{i=0}^na_iz^u[/imath] be a polynomial with all roots inside the (open )unit disk. Denote by [imath]\bar{p}(z)[/imath] the polynomial [imath]\sum_{i=0}^n\bar{a_i}z^i[/imath]. Prove that the roots of [imath]p(z)+z^n\bar{p}(\frac{1}{z})[/imath] lie on the unit circle. Now it's not hard to see that if [imath]r[/imath] is a root of [imath]p(z)[/imath], then [imath]\frac{1}{\bar{r}}[/imath] is a root of [imath]z^n\bar{p}(\frac{1}{z})[/imath]. But that's about as far as I got. I would like a hint on how to prove this. |
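A quick numerical experiment supporting the statement (a sketch, not a proof; it just builds a random example): pick roots inside the unit disk, form [imath]P[/imath], obtain [imath]P^*[/imath] by reversing and conjugating the coefficient list, and look at the moduli of the roots of [imath]P+P^*[/imath].

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
inside = 0.9 * rng.uniform(0, 1, n) * np.exp(2j * np.pi * rng.uniform(0, 1, n))

coeffs = np.poly(inside)             # coefficients of P, highest degree first
coeffs_star = np.conj(coeffs[::-1])  # P*(z) = z^n conj(P)(1/z): reverse and conjugate
roots = np.roots(coeffs + coeffs_star)

print(np.abs(roots))                 # all moduli come out (numerically) equal to 1
```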
2849379 | Expected Value of the sum of random variables
If I have two random variables X and Y, then [imath]E[X+Y]=E[X]+E[Y][/imath]. However, do X and Y have to be independent? Do you know where I can find a proof of this principle? | 2305118 | Explanation of linearity of expectation
Linearity of expectation is the property that the expected value of the sum of random variables is equal to the sum of their individual expected values, regardless of whether they are independent. My understanding of random variables (both continuous and discrete) is that they assign a number to each possible outcome of a random experiment. For example, if we roll a die, we can land on any number between 1 and 6, and we can create a random variable [imath]X[/imath] that takes each of those values. Here, [imath]X[/imath] represents each possible outcome. The expected value of [imath]X[/imath] would then be [imath]\text{E}[X] = 3.5[/imath] by taking the weighted sum of each of the possible outcomes. It's all good until this part. Here is what I don't understand: what is this notion of adding two random variables? I mean they don't have distinct values, so how can we say [imath]\text{E}[X + X] = \text{E}[X] + \text{E}[X] = 7[/imath]? This is just the linearity of expectation applied when two dice are rolled and we are asked for the expected sum of the numbers on both dice. But how are we adding two random variables? What if we wanted to get the product of the numbers on two dice? Or the difference or quotient? [imath]\text{E}[X * X] = \text{E}[X] * \text{E}[X] = 12.25[/imath] [imath]\text{E}[X - X] = \text{E}[X] - \text{E}[X] = 0[/imath] [imath]\text{E}[X / X] = \text{E}[X] / \text{E}[X] = 1[/imath] Are all of these operations valid? I'm really confused, please help. How am I supposed to think about these random variables? This is in connection with my other question that I will hopefully be able to make sense of. |
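A short simulation may make the distinctions concrete (a sketch; [imath]X[/imath] and [imath]Y[/imath] below are two independent die rolls, while "X + X" and "X·X" reuse the same roll twice). Linearity gives [imath]E[X+Y]=E[X]+E[Y][/imath] and [imath]E[X-Y]=E[X]-E[Y][/imath] with no independence assumption, whereas [imath]E[XY]=E[X]E[Y][/imath] needs independence and [imath]E[X/Y]=E[X]/E[Y][/imath] is false in general.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
X = rng.integers(1, 7, n)     # first die
Y = rng.integers(1, 7, n)     # second, independent die

print((X + Y).mean())         # ~7.00 : E[X+Y] = E[X] + E[Y], independence not needed
print((X * Y).mean())         # ~12.25: E[XY] = E[X]E[Y] holds here because X and Y are independent
print((X * X).mean())         # ~15.17: E[X^2] = 91/6, not (E[X])^2 = 12.25
print((X / Y).mean())         # ~1.43 : E[X/Y] is generally not E[X]/E[Y] = 1
```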
2846443 | Minimum value - Extension of Triangle Inequality?
What is the minimum value of [imath]|x-1| + |x-2| + |x-3| + \dots + |x - k + 1| + |x-k|[/imath]? I suppose it depends on whether [imath]k[/imath] is even or odd. I was able to solve it for [imath]k = 3[/imath] (three terms) using the triangle inequality, but couldn't generalize it to the above. Please help. | 439745 | Prove:[imath]|x-1|+|x-2|+|x-3|+\cdots+|x-n|\geq n-1[/imath]
Prove: [imath]|x-1|+|x-2|+|x-3|+\cdots+|x-n|\geq n-1[/imath] Example 1: [imath]|x-1|+|x-2|\geq 1[/imath] My solution (substitution): [imath]x-1=t,x-2=t-1,|t|+|t-1|\geq 1,|t-1|\geq 1-|t|,[/imath] square, [imath]t^2-2t+1\geq 1-2|t|+t^2,\text{since } -t\leq -|t|,[/imath] so proved. Question 1: Is my proof right? Alternatives? One reference answer: [imath]1-|x-1|\leq |1-(x-1)|=|1-x+1|=|x-2|[/imath] Question 2: prove [imath]|x-1|+|x-2|+|x-3|\geq 2[/imath] So I guess (I think there is a name for this; what is it? A wiki item?): [imath]|x-1|+|x-2|+|x-3|+\cdots+|x-n|\geq n-1[/imath] How to prove this? This is Question 3. I doubt whether the two methods I used above work for this general case. Of course, any interesting answers and good comments are welcome. |
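A quick numerical look at where the minimum sits (a sketch; brute force over a fine grid) suggests the sharper statement: the sum [imath]\sum_{i=1}^{n}|x-i|[/imath] is minimized at the median of [imath]1,\dots,n[/imath], and pairing the terms via [imath]|x-i|+|x-(n+1-i)|\ge n+1-2i[/imath] shows the minimum equals [imath]\lfloor n^2/4\rfloor[/imath], which is [imath]\ge n-1[/imath] for every [imath]n\ge 1[/imath].

```python
import numpy as np

def min_abs_sum(n, grid=200001):
    xs = np.linspace(0, n + 1, grid)
    values = sum(np.abs(xs - i) for i in range(1, n + 1))
    return xs[values.argmin()], values.min()

for n in range(1, 9):
    x_star, m = min_abs_sum(n)
    print(n, round(x_star, 3), round(m, 6), n * n // 4)   # minimizer near the median, minimum = floor(n^2/4)
```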
2849176 | Checking a solution to a problem about non-zero divisors in an associative algebra having multiplicative inverses
The following problem appeared on a problem set I'm working on: "Let [imath]A[/imath] be a finite-dimensional associative algebra with [imath]1[/imath] over a field [imath]k[/imath]. Show that an element [imath]a[/imath] in [imath]A[/imath] has a multiplicative inverse if [imath]a[/imath] is not a zero divisor" (emphasis mine). I'd appreciate it if y'all could check my solution to see that it's correct. Let [imath]a \in A[/imath] not be a zero divisor. Then the map [imath]T : A \to A[/imath] given by [imath]T : x \mapsto ax[/imath] is an injective linear transformation from a finite-dimensional vector space [imath]A[/imath] to itself, as [imath]\ker(T) = \{ x \in A : ax = 0 \} = \{ 0 \}[/imath] by the assumption that [imath]a[/imath] isn't a zero divisor. Thus [imath]T[/imath] must also be onto, so in particular there exists a unique [imath]b \in A[/imath] such that [imath]ab = 1[/imath]. Let us fix this [imath]b[/imath]. We know that [imath]b[/imath] is a right-inverse to [imath]a[/imath], so we must show that it's also a left-inverse, i.e. that [imath]ba = 1[/imath]. First, we want to show that a right-inverse to [imath]b[/imath] exists, for which it'll suffice to show that [imath]b[/imath] is not a zero-divisor. If it were the case that [imath]bx = 0[/imath], then we'd have [imath]a (bx) = (ab) x = 0[/imath]. But [imath](ab) x = 1x = x[/imath], so [imath]x = 0[/imath]. Thus we know that [imath]b[/imath] is not a zero-divisor, so there exists a unique [imath]c \in A[/imath] such that [imath]bc = 1[/imath]. It now remains to show that [imath]a = c[/imath]. This follows as \begin{align*} a(bc) & = (ab) c \\ a \cdot 1 & = 1 \cdot c \\ a & = c . \end{align*} Thus [imath]ab = ba = 1[/imath], so [imath]b[/imath] can rightly be called a multiplicative inverse to [imath]a[/imath]. Is my solution correct? Thanks! | 1253237 | Prove that any non-zero-divisor of a finite dimensional algebra has an inverse
Let [imath]A[/imath] be a finite dimensional algebra. Prove that an element of [imath]A[/imath] is invertible iff it is not a zero divisor. Let [imath]a[/imath] be an invertible element; then there exists an element [imath]b[/imath] such that [imath]ab=1[/imath]. Assume that [imath]a[/imath] is a zero divisor; then there exists an element [imath]c \neq 0[/imath] such that [imath]ac=0[/imath], and I don't know how to proceed. |
2849870 | Does pointwise convergence to a continuous function on a closed interval imply uniform convergence?
Let's suppose that [imath](f_n)_{n}[/imath] is a sequence of continuous functions that converges pointwise to a continuous function [imath]f(x)[/imath] on a closed interval [imath][a, b][/imath]. Is then the convergence uniform, too? If it is so, how do you prove it? If it isn't, could you give a counterexample, please? My attempt I managed to write down the hypothesis in symbolical terms, but could not go beyond that: continuity of the sequence ([imath]x_0\in[a, b][/imath]): [imath]\forall n, \ \forall \epsilon >0 \ \ \exists \delta>0 \ : \ |x-x_0|<\delta \Rightarrow |f_n(x)-f_n(x_0)|<\epsilon [/imath] continuity of the limit function ([imath]x_0\in[a, b][/imath]): [imath]\forall \epsilon >0 \ \ \exists \delta>0 \ : \ |x-x_0|<\delta \Rightarrow |f(x)-f(x_0)|<\epsilon [/imath] pointwise convergence: [imath]\forall x\in[a, b], \ \forall \epsilon >0 \ \ \exists n_\epsilon>0 \ : \ n\geq n_\epsilon \Rightarrow |f_n(x)-f(x)|<\epsilon [/imath] The thesis should be: [imath]\forall \epsilon >0 \ \ \exists n_\epsilon>0 \ : \ n\geq n_\epsilon \Rightarrow |f_n(x)-f(x)|<\epsilon \ \ \forall x\in[a, b][/imath] Note Please do not bring sequences such as that of [imath]x^n (x\geq 0)[/imath] as a counterexample, because they are not counterexamples: [imath]\text{for } n \rightarrow \infty , \ x^n \rightarrow f(x)= \begin{cases} 0 & \text{if } 0 \leq x<1 \cr 1 & \text{if } x=1 \end{cases}[/imath] so there is a pointwise convergence to [imath]f(x)[/imath] on [imath][0, 1][/imath]; but [imath]f(x)[/imath] - the limit function - is not continuous, then this example lacks the conditions for the theorem to be applied. | 785997 | uniformly convergence on compact metric space
Let [imath]K[/imath] be a compact metric space. Let [imath]\{f_n\}_{n=1}^\infty[/imath] be a sequence of continuous functions on [imath]K[/imath] such that [imath]f_n[/imath] converges to a function [imath]f[/imath] pointwise on [imath]K[/imath]. In Walter Rudin's book Principles of Mathematical Analysis, Theorem 7.13, if we assume (1) [imath]f[/imath] is continuous; (2) [imath]f_n(x)\geq f_{n+1}(x)[/imath] for all [imath]x\in K[/imath] and all [imath]n[/imath]; then it is proved that [imath]f_n[/imath] converges to [imath]f[/imath] uniformly on [imath]K[/imath]. Is there a counterexample satisfying (1) but not (2)? And one satisfying (2) but not (1)? |
2849259 | How to show this sequence is in [imath]l^p[/imath]
Suppose [imath]\{a_k\}[/imath] is a sequence such that for any sequence [imath]\{b_k\}[/imath] in [imath]l^q[/imath] the series [imath]\sum_k a_k b_k[/imath] is convergent. How can one show that [imath]\{a_k\}[/imath] is in [imath]l^p[/imath], where [imath]\infty>p,q>1[/imath] and [imath]1/p+1/q=1[/imath]? | 2249454 | Prove that for the series [imath]\sum_{n \in \mathbb{N}}|\zeta_n\mu_n|[/imath] to be convergent for all [imath]\zeta \in l^p \implies \mu \in l^q[/imath]
Prove that for the series [imath]\sum_{n \in \mathbb{N}}|\zeta_n\mu_n|[/imath] to be convergent for all [imath]\zeta \in l^p \implies \mu \in l^q[/imath] I used Holder's inequality to prove that [imath]\sum_{n \in \mathbb{N}}|\zeta_n\mu_n| \leq \|\zeta\|_p \|\mu\|_q [/imath] But I cannot find a [imath]\zeta= (\zeta_1, \zeta_2,....)[/imath] such that [imath]A(\zeta)=\sum_{n \in \mathbb{N}}|\zeta_n\mu_n|=(\sum_{n \in \mathbb{N}} |\mu_n|^q)^{\frac{1}{q}}[/imath]. [imath] \zeta=??[/imath] |
2852489 | A Messy Definite Integral [imath]\int_0^1 (x\ln x)^n dx[/imath] or a Trick?
[imath]\int_0^1 (x\ln x)^n dx[/imath] How should I proceed with the above integral? I'm actually supposed to solve it for n = 50, but I don't think reduction formulae would be a good idea (or would they?). I couldn't establish a closed form for the above integral in terms of [imath]n[/imath]. How do I do that? Could someone please guide me in the right direction, or provide a solution for better understanding? Thanks! | 347815 | Evaluating [imath]\int _0 ^1 x^k(\ln x)^m dx[/imath] for integer [imath]k[/imath] and [imath]m[/imath]
Find a formula for [imath]\displaystyle \int _0 ^1 x^k(\ln x)^m dx[/imath] that works for all positive integers [imath]k[/imath] and [imath]m[/imath]. Use integration by parts [imath]m[/imath] times with [imath]k[/imath] fixed. |
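One route that avoids repeated integration by parts: substituting [imath]x=e^{-t}[/imath] turns the integral into a Gamma-type integral, [imath]\int_0^1 x^k(\ln x)^m\,dx=(-1)^m\int_0^\infty t^m e^{-(k+1)t}\,dt=\frac{(-1)^m\,m!}{(k+1)^{m+1}}[/imath], so in particular [imath]\int_0^1 (x\ln x)^n\,dx=\frac{(-1)^n\,n!}{(n+1)^{n+1}}[/imath]. A short symbolic spot-check of the formula for small exponents (a sketch):

```python
import sympy as sp

x = sp.symbols('x', positive=True)

def check(k, m):
    exact = sp.integrate(x**k * sp.log(x)**m, (x, 0, 1))
    formula = (-1)**m * sp.factorial(m) / (k + 1)**(m + 1)
    return sp.simplify(exact - formula) == 0

print(all(check(k, m) for k in range(1, 4) for m in range(1, 4)))   # True
```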
2850543 | A Möbius map from the unit disc onto itself
I know that this has been asked a few times, but I found no thread that fully derived the results. I want to show that any Möbius transformation from the unit disc onto itself has the form [imath]e^{i\alpha} \frac{z-z_0}{\overline{z_0}z-1}\quad,\quad \alpha \in \mathbb{R},\ \vert z_0 \vert < 1[/imath] I found that for [imath]\frac{a z + b}{ c z + d}[/imath] it's necessary that: [imath]\forall z\ \vert z \vert = 1\ :\ \vert az+b \vert^2 = \vert cz+d \vert^2[/imath] which implies (by using [imath]\vert\ k \vert^2 = k\overline{k}[/imath] and assigning [imath]z=1,-1,i[/imath]) [imath]\vert a\vert^2 + \vert b \vert^2 = \vert c \vert^2 + \vert d \vert^2[/imath] [imath]a\overline{b} = c\overline{d}[/imath] From here I didn't get much further. The closest I got is: [imath]\frac{az+b}{cz+d}=\frac{\overline{c}}{\overline{a}}\frac{\vert a \vert^2z+\overline{a}b}{\vert c \vert^2 z+\underbrace{\overline{c}d}_{z_0}} = \frac{\overline{c}}{\overline{a}}\frac{\vert a \vert^2z+z_0}{\vert c \vert^2 z+z_0}[/imath] It can go further, but I didn't get something similar to [imath]z-z_0[/imath] or [imath]z_0 z - 1[/imath]. | 1705289 | Characterizing all mobius transformations from unit disk to itself.
All answers to this problem involve the following process: Pick a point [imath]a[/imath] in the unit disk such that [imath]T(a)=0[/imath]. Then, [imath]T(a^*)=\infty[/imath] if [imath]a^*[/imath] is the symmetric point [imath]a^*=\frac{1}{\bar{a}}[/imath] of [imath]a[/imath]. Clearly, such a transformation is of the form [imath]T(z)=\frac{z-a}{1-z\bar{a}}[/imath]. We might as well place a constant C in there: [imath]T(z)=C\frac{z-a}{1-z\bar{a}}[/imath]. Then, most answers magically jump to replacing [imath]C=e^{i\theta}[/imath] in the formula and claiming the job is done. Why should a function built so that it sends a point inside the unit disk to zero, and its symmetric point to [imath]\infty[/imath], be the one that preserves the unit disk? What's the motivation behind approaching this problem in that manner? Could one proceed by brute force, with [imath]|\frac{az+b}{cz+d}|<1[/imath], and arrive at the same function? For some reason this seems to me like somebody pulled a rabbit out of a hat. Once given the formula, I understand why it does the job, but why would somebody think that choosing a point inside the disk and building a function that sends it to zero creates a Möbius transformation that does the job? |
2852550 | Are there Cauchy sequence that are not bounded?
I was wondering: are there Cauchy sequences that are not bounded? Of course, in complete spaces this is not possible. I have a theorem that says that if [imath](A,d)[/imath] is not complete, then [imath](\bar A,d)[/imath] is complete. But are there spaces s.t. for all [imath]\varepsilon>0[/imath] there is [imath]N[/imath] s.t. [imath]d(x_n,x_{m})<\varepsilon[/imath] for all [imath]n\geq N[/imath] and all [imath]m\geq n[/imath], but the sequence still moves off to [imath]+\infty[/imath]? | 835729 | Proof Check: Every Cauchy Sequence is Bounded
Sorry if I keep asking for proof checks. I'll try to keep it to a minimum after this. I know this has a well-known proof. I understand that proof as well but I thought I'd do a proof that made sense to me and seemed, in some ways, simpler. Trouble is I'm not sure if it's totally correct. It's quite short though. I was just hoping someone could look it over and see if it is a valid proof. Thank you! Lemma: Every Cauchy sequence is bounded. Proof: Let [imath](a_{n})[/imath] be Cauchy. We choose [imath] 0<\epsilon_{0}[/imath]. So [imath] \forall \; n>m\geq N_{0}[/imath] we have that [imath]\vert a_{n}-a_{m} \vert < \epsilon_{0}[/imath]. Therefore [imath](a_{n})[/imath] is bounded for all [imath] m \geq N_{0} [/imath] by [imath] \epsilon_{0} [/imath]. Since [imath] \mathbb{N}_{N_{0}}[/imath] is finite, it is bounded. So, for all [imath] m<N_{0} [/imath], [imath] (a_{n})[/imath] is bounded. Therefore [imath](a_{n})[/imath] is bounded. I realize I haven't said what the bounds are but I think that's sort of irrelevant. So long as we know it's bounded. Any help is much appreciated! |
2853118 | Find a basis of clopen sets for the topology on [imath]X[/imath]
The problem states: Let [imath]X[/imath] be a completely regular space such that [imath]|X|<c=|\mathbb{R}|[/imath]. Prove that there exists a basis [imath]\mathcal{B}[/imath] for the topology on [imath]X[/imath] such that for all [imath]B\in\mathcal{B}[/imath], [imath]B[/imath] is both open and closed. I let [imath]\mathcal{B}[/imath] be the collection of all clopen sets since any other choice for [imath]\mathcal{B}[/imath] would yield a coarser topology. It is easy to verify that [imath]\mathcal{B}[/imath] forms a valid basis. Let [imath]\tau[/imath] be the topology on [imath]X[/imath] and let [imath]\tau_\mathcal{B}[/imath] be the topology generated by [imath]\mathcal{B}[/imath]. Then clearly [imath]\tau_\mathcal{B}\subset\tau[/imath]. I couldn't figure out how to show that [imath]\tau_\mathcal{B}=\tau[/imath], so I assumed it wasn't and tried coming to a contradiction. If [imath]\tau_\mathcal{B}\neq\tau[/imath], then that means there must exist an open set [imath]U\in\tau[/imath] and a point [imath]x\in U[/imath] such that every clopen set containing [imath]x[/imath] must intersect [imath]X\setminus U[/imath]. I thought that maybe using this along with the completely regular condition, I could prove the existence of some surjective function [imath]f:X\to[0,1][/imath], thus contradicting the cardinality restriction, but I couldn't find any way of doing that. Any ideas? | 378940 | Collection of clopen sets form a base for a countable completely regular space
I have [imath]X[/imath] as a countable, Tychonoff space, and I want to show that the collection of clopen subsets of [imath]X[/imath] forms a base for the topology on [imath]X[/imath]. Can I first just define a base [imath]\mathscr{B}[/imath] of X, let [imath]x\in X[/imath], and let [imath]E \subset X[/imath] be closed such that [imath]x\notin E[/imath]? So [imath]x\in X-E[/imath], which is open. I also noticed that the interval [imath][0,1][/imath] is uncountable. Can I define a function [imath]f: X\to [0,1][/imath]? Then is [imath]f[/imath] onto? Do I need to show that [imath]f[/imath] is continuous next? |
2852754 | Find a basis for [imath]\ U \cap W [/imath]
Two subspaces [imath]\ W = \{ 1+ x + x^3 , -2x+x^2+x^3 , 3+ x + x^2 + 4x ^ 3\} [/imath] and [imath]\ U = \{ a+bx-bx^2+2ax^3 \mid a,b \in \mathbb R \} [/imath] so [imath]\ W = \{ w_1 = (1,1,0,1), w_2 = (0,-2,1,1), w_3 = (3,1,1,4) \} , \\ U = \{ u_1 = (1,0,0,2), u_2 = (0,-1,1,0) \} [/imath] and I need to find basis for [imath]\ U \cap W [/imath] so if [imath]\ v \in U \cap W [/imath] this means [imath]\ 0 = \alpha_1 w_1 + \alpha_2 w_2 + \alpha_3 w _3 - \alpha_4 u_1 - \alpha_5u_2[/imath] so I should just rank it in matrix and so [imath]\ \begin{bmatrix} 1 & 0 & -1 & 0 \\ 0 & 1 & -1 & 2 \\ 0 & -1 & 0 & -1 \\ 2 & 0 & - 1 & -1 \end{bmatrix} \Rightarrow \begin{bmatrix} 1 & 0 & -1 & 0 \\ 0 & 1 & -1 & 2 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix} [/imath] so it means [imath]\alpha_1 = \alpha_3 = \alpha_4 [/imath] and [imath]\ \alpha_2 = - \alpha_3 [/imath] so it means any [imath]\ v \in U \cap W [/imath] is [imath]\ v= \alpha_1u_1 -\alpha_1u_2 [/imath] or [imath]\ v = \alpha_3w_3 - \alpha_3 w_4 [/imath] and how do I show by which vectors the space [imath]\ U \cap W [/imath] is spanned?? | 324064 | How to find basis of intersection of 2 vector spaces with given basis?
Suppose we have vector spaces [imath]\mathbb W[/imath] and [imath]\mathbb V[/imath]. A given basis for [imath]\mathbb W[/imath] is [imath]\{(1, 1, 0, -1), (0, 1, 3, 1)\}[/imath], and a given basis for [imath]\mathbb V[/imath] is [imath]\{(1, 2, 2, -2), (0, 1, 2, -1)\}[/imath]. How to find a basis for [imath]\mathbb W\cap\mathbb V[/imath]? |
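For the specific vectors in the first question above, a small sympy computation can exhibit a spanning set of [imath]U \cap W[/imath] explicitly. This is a rough sketch of my own (the variable names are mine, and it only handles this numerical example, not the general method):

```python
import sympy as sp

# spanning vectors of W and U in coordinates w.r.t. 1, x, x^2, x^3
w1, w2, w3 = sp.Matrix([1, 1, 0, 1]), sp.Matrix([0, -2, 1, 1]), sp.Matrix([3, 1, 1, 4])
u1, u2 = sp.Matrix([1, 0, 0, 2]), sp.Matrix([0, -1, 1, 0])

# v lies in U ∩ W iff some combination of the w's equals some combination of the u's,
# i.e. the coefficient vector lies in the nullspace of [w1 w2 w3 -u1 -u2]
M = sp.Matrix.hstack(w1, w2, w3, -u1, -u2)
candidates = [c[0]*w1 + c[1]*w2 + c[2]*w3 for c in M.nullspace()]

# dependencies among the w's alone map to the zero vector, so extract an independent set
basis = sp.Matrix.hstack(*candidates).columnspace()
for b in basis:
    print(b.T)
```

If I have not slipped, the intersection here comes out one-dimensional, spanned (up to scaling) by [imath](1,-1,1,2)[/imath], i.e. the polynomial [imath]1-x+x^2+2x^3[/imath], since [imath]w_1+w_2=u_1+u_2=(1,-1,1,2)[/imath].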
2853067 | Is [imath]\mathbb{Q}_p[/imath] isomorphic to a subfield of [imath]\mathbb{C}[/imath] for some prime number [imath]p[/imath]?
Is there a prime number [imath]p[/imath], such that [imath]\mathbb{Q}_p[/imath] is isomorphic to a subfield of [imath]\mathbb{C}[/imath]? | 16724 | How far are the [imath]p[/imath]-adic numbers from being algebraically closed?
A few days ago I was recalling some facts about the [imath]p[/imath]-adic numbers, for example the fact that the [imath]p[/imath]-adic metric is an ultrametric implies very strongly that there is no order on [imath]\mathbb{Q}_p[/imath], as any number in the interior of an open ball is in fact its center. I know that if you take the completion of the algebraic closure of the [imath]p[/imath]-adic completion you get something which is isomorphic to [imath]\mathbb{C}[/imath] (this result was very surprising until I studied model theory, then it became obvious). Furthermore, if the algebraic closure is of an extension of dimension [imath]2[/imath] then the field is orderable, or even real closed. Either way, it implies that the [imath]p[/imath]-adic numbers don't have this property. So I was thinking, is there a [imath]p[/imath]-adic number whose square equals [imath]2[/imath]? [imath]3[/imath]? [imath]2011[/imath]? For which prime numbers [imath]p[/imath]? How far down the rabbit hole of algebraic numbers can you go inside the [imath]p[/imath]-adic numbers? Are there general results connecting the choice (or rather properties) of [imath]p[/imath] to the "amount" of algebraic closure it gives? |
2853509 | Metric induced by a surjective mapping
Let [imath]f[/imath] be a continuous mapping of a compact metric space [imath](X, d)[/imath] onto a Hausdorff space [imath](Y, \tau_1)[/imath]. If [imath]d[/imath] is a metric on [imath]X[/imath], how to show that [imath]d_1(y_1, y_2) := \inf\{d(a, b) : a \in f^{-1}(y_1)\text{ and }b \in f^{−1}(y_2)\}[/imath] is a metric on Y? I can see that it is never [imath]0[/imath] when [imath]y_1[/imath] is not equal to [imath]y_2[/imath]. But how should I go about showing that it satisfies the triangle inequality? I am actually reading the following page from a book. | 2840100 | Metrizability of Hausdorff continuous image of compact metric space
Let [imath]f[/imath] be a continuous mapping of a compact metric space [imath](X, d)[/imath] onto a Hausdorff space [imath](Y, \tau_1)[/imath]. Then [imath](Y, \tau_1)[/imath] is compact and metrizable. In one proof the following metric is constructed: [imath]d_1(y_1, y_2) = inf\{d(a, b) : a \in f^{-1}\{y_1\},\ b \in f^{-1}\{y_2\}\}[/imath], for all [imath]y_1[/imath] and [imath]y_2[/imath] in [imath]Y[/imath]. I'm thinking about how to prove that triangle inequality holds for [imath]d_1[/imath], s.t. [imath]d_1(x, z) \leq d_1(x, y) + d_1(y, z)[/imath] And how about the case when triangle inequality does not hold? Update: @daniel-schepler suggested to use metric [imath]d_1(x, y) = inf\{d(x^*, x_1) + d(y_1, x_2) + \cdots + d(y_{n-1}, x_n) + d(y_n, y^*)\}[/imath] where [imath]f(x^*) = x[/imath], [imath]f(y^*) = y[/imath], and [imath]f(x_i) = f(y_i)[/imath] for each [imath]i[/imath]. Now we need to prove 3 property of metric: positivity, symmetry and triangle inequality. This is my sketch: Positivity. Chose two points [imath]x,y \in Y[/imath] s.t. [imath]x \neq y[/imath]. Singleton sets [imath]\{x\}[/imath] and [imath]\{y\}[/imath] are disjoint closed sets by Hausdorfness of [imath]Y[/imath] and their preimages are also disjoint closed in [imath]X[/imath] by continuity of [imath]f[/imath]. [imath]X[/imath] is compact metric space by hypothesis and therefore a normal space. In normal space every two disjoint closed sets have disjoint open neighborhoods. Let [imath]f^{-1}(x) \subseteq U[/imath] and [imath]f^{-1}(y) \subseteq V[/imath] s.t. [imath]U,V[/imath] are open and [imath]U \cap V = \varnothing[/imath]. Therefore there exists an [imath]\varepsilon > 0[/imath], s.t. [imath]x^* \in B_\varepsilon(x^*) \subseteq U[/imath] and [imath]y^* \in B_\varepsilon(y^*) \subseteq U[/imath] and distance between any two closed sets is strictly positive and [imath]d_1[/imath] is positive and zero iff [imath]x = y[/imath]. Symmetry. Clearly [imath]d_1[/imath] is symmetric. Triangle inequality. For triangle inequality we can show that in the "worst" case [imath]d_1(x, z) = d_1(x,y) + d_1(y, z)[/imath]. |
2853821 | Is there an intuitive reason why the natural logarithm shows up in the Prime Number Theorem?
I've always wondered if it has something to do with the idea that the probability that an integer is divisible by a prime [imath]p[/imath] is [imath]1/p[/imath]. | 2144940 | Intuition for the prime number theorem
(surprisingly, it appears that this question has not been asked before) Let [imath]\pi(n)[/imath] denote the number of primes [imath]\leq n[/imath]. The prime number theorem states that [imath]\pi(n) \sim \frac{n}{\log n} \ \text{as} \ n \to +\infty[/imath] After painstakingly reading through Erdos's elementary proof of this theorem, I think I understand the mechanics of it from a formal perspective. However, I still don't seem to understand intuitively why this theorem is true. I would like some intuitive insight as to why this theorem holds. I understand that for a result as deep as this one, even the intuition is going to contain some nitty-gritty details. It's probably not the sort of thing that you could explain to a child, for example. Nevertheless, I will ask this question regardless. There has to be some convincing argument for this theorem beyond the technical details of the proofs. |
2649593 | Guillemin-Pollack Exercise 1.5.3: Normal Intersections
Let [imath]V_1, V_2, V_3[/imath] be linear subspaces of [imath]\mathbb R^n[/imath]. One says they have 'normal intersection' if [imath]V_i\pitchfork(V_j\cap V_k)[/imath] whenever [imath]i\ne j[/imath] and [imath]i\ne k[/imath]. Prove that this holds iff [imath]\operatorname{codim}(V_1\cap V_2\cap V_3)=\operatorname{codim} V_1+\operatorname{codim} V_2+\operatorname{codim} V_3.[/imath] I guess I proved one direction. If the spaces intersect normally, then in particular we have [imath]V_1 \pitchfork (V_2\cap V_3)[/imath] and [imath]V_2 \pitchfork V_3[/imath] (if [imath]j=k=3[/imath]). Now [imath]\operatorname{codim}(V_1\cap V_2\cap V_3)=\operatorname{codim}(V_1\cap (V_2\cap V_3))=\operatorname{codim}V_1+\operatorname{codim}(V_2\cap V_3)=\operatorname{codim} V_1+\operatorname{codim} V_2+\operatorname{codim} V_3,[/imath] where I have used that if two submanifolds [imath]X, Z[/imath] of [imath]Y[/imath] intersect transversally then [imath]\operatorname{codim}(X\cap Z)=\operatorname{codim} X + \operatorname{codim} Z[/imath] (Theorem on p.30 of the book). Is this direction correct? How do I prove the other direction? | 2651072 | Intersection of three vector subspaces
We say that two vector subspaces [imath]U,W\subset V[/imath] intersect transversally (written [imath]U\pitchfork W[/imath]) if [imath]U+W=V[/imath]. Let [imath]V_1,V_2,V_3[/imath] be vector subspaces of [imath]V[/imath] where [imath]\dim V=n[/imath]. We say they intersect normally if [imath]V_i\pitchfork(V_k\cap V_l)[/imath] for [imath]i\ne k,l[/imath]. Prove that [imath]V_1,V_2,V_3[/imath] intersect normally if the following condition holds: [imath]\operatorname{codim}(V_1\cap V_2\cap V_3)=\operatorname{codim} V_1+\operatorname{codim} V_2+\operatorname{codim} V_3,[/imath] where [imath]\operatorname{codim} V_i=n-\dim V_i[/imath]. How to approach this problem? I tried to order dimensions of [imath]V_i[/imath] but I got that the LHS is [imath]\ge n-\min (\dim V_i,i=1,2,3)[/imath] and the RHS is [imath]\le 3( n-\min (\dim V_i,i=1,2,3))[/imath] which does not lead anywhere. What is the right way to do this? |
2854019 | Evaluate [imath]\int_{1}^\infty \frac{\ln(\ln(x))}{1+x^2}dx[/imath]
I'm trying to evaluate: [imath]\int\limits_1^\infty \frac{\ln(\ln(x))}{1+x^2} dx[/imath] I've been told by the user Jack D'Aurizio that I can connect it to Euler's Beta Function using the substitution [imath]x=e^u[/imath] and using Feynman's Technique. I've tried connecting it to [imath]\int\limits_0^\infty \frac{t^{x-1}}{(1+t)^{x+y}}dt [/imath] but I don't see anything. Would appreciate a hint to put me in the right direction. | 121545 | Evaluating [imath]\int_0^1 \log \log \left(\frac{1}{x}\right) \frac{dx}{1+x^2}[/imath]
Show that [imath]\displaystyle{\int_0^1 \log \log \left(\frac{1}{x}\right) \frac{dx}{1+x^2} = \frac{\pi}{2}\log \left(\sqrt{2\pi} \Gamma\left(\frac{3}{4}\right) / \Gamma\left(\frac{1}{4}\right)\right)}[/imath] This question was posted as part of this question: Solve the integral [imath]S_k = (-1)^k \int_0^1 (\log(\sin \pi x))^k dx[/imath] I cannot think of a change of variable nor other integrating methods. Maybe there is a known method that I am missing. |
2854179 | Solvable, Supersolvable, and Nilpotent
I have learned the concepts of solvable, supersolvable, and nilpotent groups and their associated properties. In particular, we have [imath]\{\mbox{nilpotent groups}\}\subset\{\mbox{supersolvable groups}\}\subset\{\mbox{solvable groups}\}[/imath] I know that the term "solvable" comes from Galois theory, where there is a correspondence between the Galois group and solvability by radicals. But how about "supersolvable" and "nilpotent"? For "supersolvable", what makes it so special that we need to define such a term? Is there any special application of these groups that sets them apart from ordinary solvable groups? For "nilpotent", it somehow sounds like a strong property to me, e.g., such a group is a direct product of its Sylow subgroups when finite. But the name sounds a bit puzzling; is this somewhat like the idea of homology? PS: I am a researcher working in a field mainly related to finite group theory. I heard that these terms have some relation with Lie algebras, which I haven't studied. So I am hoping some gurus can lead my way:-)
I'm reading Hungerford's algebra and I'm on Nilpotent and solvable groups chapter. Hungerford starts with: Consider the following conditions on a finite group G: i) G is the direct product of its Sylow subgroups ii) If m divides |G|, then G has a subgroup of order m iii) If |G| = mn with (m,n) = 1, then G has a subgroup of order m This chapter comes after Sylow theorems, where we have seen that if [imath]|G| = p_1^{\alpha_1}p_2^{\alpha_2} \dotsm p_n^{\alpha_n}[/imath] there is a subgroup of order [imath]p_i^k[/imath] for [imath]k \in \{0, 1, ..., \alpha_i\}[/imath], but we can't guarantee more (take for an example [imath]A_4[/imath], it has subgroups of order 3 and 4, but there is no subgroup of order 6). So the question What kind of groups have nice subgroup structure? isn't that unexpected. What was unexpected? We shall first define nilpotent and solvable groups in terms of certain ''normal series''of subgroups. In case of finite groups, nilpotent groups are characterized by condition i) and solvable ones by condition iii). Hungerford first gives definition of nilpotent groups in terms of ascending central series and definition of solvable groups in terms of commutator subgroups and then proves equivalences with i) and iii). Conceptual jump between i), ii), iii) and definitions involving normal series is a little too big for me. I can't really see how someone could start thinking about i), ii), iii) and come out with normal series. I don't have a nice way to think about them and I would really be thankful if someone has a nice conceptual view about them, or several eye-opening exercises. |
2852249 | Is 31 the only number that can be represented by two distinct sums of consecutive powers of primes?
I'm trying to prove that a number with two distinct prime factors can't be friends with another number with the same prime factors. One way I could prove this is that there'd be only one example where [imath]\sum_{i=0}^np^i=\sum_{j=0}^mq^j[/imath] That example, preferably, would be [imath]2^0+2^1+2^2+2^3+2^4=5^0+5^1+5^2=31[/imath], which fails to fit other conditions necessary to construct that pair of friends. Through a little bit of computing power, I was unable to find examples for [imath]p<300,n<10[/imath], which leads me to believe it may be the only example. However, I'm completely lost on a continuation, if there is one, and whether this is just a case of the XY problem, and I should drop this line of reasoning and move elsewhere. | 2599653 | On uniqueness of sums of prime powers
An exercise in number theory led me to the following problem: Find all solutions [imath](p,n,q,m)[/imath] of the following equation: [imath]\sum_{k=0}^n p^k = \sum_{h=0}^m q^h,[/imath] where [imath]p<q[/imath] are distinct primes, and [imath]1 \le m < n[/imath] are indeterminates. Numerical evidence gives me the only solution [imath](p,n,q,m)=(2,4,5,2).[/imath] There might be no other solution: I have no idea how to show it. For those interested in the source of this equation, here is the exercise in number theory. Find all numbers [imath]A[/imath] such that the sum of divisors of [imath]A[/imath] divisible by [imath]5[/imath] equals the sum of divisors of [imath]A[/imath] divisible by [imath]2[/imath]: [imath]\sum_{2|d|A} d = \sum_{5|e|A} e. [/imath] Clearly this is equivalent to the condition that the sum of divisors of [imath]A[/imath] NOT divisible by [imath]5[/imath] equals the sum of divisors of [imath]A[/imath] NOT divisible by [imath]2[/imath]. Let's factorize [imath]A= 2^n \cdot 5^m \cdot w[/imath], with [imath]n,m \ge 0[/imath] and [imath]\gcd(w,10)=1[/imath]. Without loss of generality we can consider [imath]n,m\neq 0[/imath]. Then we get [imath]\sum_{d|2^nw} d = \sum_{e|5^mw} e[/imath] which is equivalent (using multiplicativity of the sum-of-divisors-function) to [imath]\sum_{k=0}^n 2^k = \sum_{h=0}^m 5^h[/imath] Giving us [imath]A=2^4 \cdot 5^2 \cdot w=400w[/imath] (with [imath]\gcd(w,10)=1[/imath]). Thus, I am looking for simple generalizations with arbitrary primes [imath]p \neq q[/imath].
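As a quick sanity check on the numerical evidence above, here is a small brute-force search of my own (the ranges and helper name are mine; it is of course not a proof):

```python
from sympy import primerange

def repunit(p, n):
    """1 + p + p^2 + ... + p^n."""
    return sum(p**i for i in range(n + 1))

primes = list(primerange(2, 300))
hits = []
for p in primes:
    for q in primes:
        if p == q:
            continue
        for n in range(2, 11):
            for m in range(1, n):          # enforce 1 <= m < n
                if repunit(p, n) == repunit(q, m):
                    hits.append((p, n, q, m, repunit(p, n)))
print(hits)
```

In this range the only hit is [imath](p,n,q,m)=(2,4,5,2)[/imath] with common value [imath]31[/imath], matching what both questions report.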
2854177 | Second derivative symbol [imath]\frac{d^2y}{dx^2}[/imath]!
I'm currently studying taking the second derivative of equations, and I have been told the symbol used to represent the second derivative is [imath]\frac{d^2y}{dx^2}[/imath]. I was just wondering why this symbol is chosen. Why is it not [imath]\frac{dy^2}{dx^2}[/imath] or [imath]\frac{d^2y}{d^2x}[/imath]?
In Leibniz notation, the 2nd derivative is written as [imath]\dfrac{\mathrm d^2y}{\mathrm dx^2}\ ?[/imath] Why is the location of the [imath]2[/imath] in different places in the [imath]\mathrm dy/\mathrm dx[/imath] terms? |
2854121 | [imath]x_p[/imath] of an ordinary differential equation
I am trying to find the particular solution to the [imath]y^{(4)} -2y'' +y = xe^x [/imath] and currently am misunderstanding what to do. My steps: the polynomial operator concerned is [imath]p(s)= s^4 -2s^2 + 1 [/imath] which 0 at 1: [imath]p(1) = 0 [/imath] so now i know that the solution will be something like: [imath]y_p = (Ax^3 + Bx^2)e^x[/imath] where the parentheses show that it is a linear operator on the last coefficient. so far I think that my method is to differentiate this four times and collect up the terms of each [imath]y_p[/imath] according to the original equation and see what A and B are equal to. I assume this is okay to do with linear operators? [imath]y_p' = (Ax^3 + Bx^2)e^x + (3Ax^2 + 2Bx)e^x[/imath] [imath]y_p'' = (Ax^3 +(6A + B)x^2 + (4B + 6A)x + 2B)e^x[/imath] [imath]y_p''' = (Ax^3 +(9A +B)x^2 + (18A +6B)x + (6A + 6B))e^x[/imath] [imath]y_p'''' = (Ax^3 +(12A +B)x^2 + (36A +8B)x + (24A + 12B))e^x[/imath] which then gives me an equation of [imath]A-2A +A = 1[/imath] which is wrong because this is the equation for the first term of the particular solution. Can anyone please help me take it from here? | 2854072 | particular solution to [imath]y^{(4)} -2y'' +y = xe^x [/imath] using undetermined coefficients
I am trying to find the particular solution to the [imath]y^{(4)} -2y'' +y = xe^x [/imath] and currently am misunderstanding what to do. My steps: the polynomial operator concerned is [imath]p(s)= s^4 -2s^2 + 1 [/imath] which 0 at 1: [imath]p(1) = 0 [/imath] so now i know that the solution will be something like: [imath]y_p = (Ax^3 + Bx^2)e^x[/imath] where the parentheses show that it is a linear operator on the last coefficient. so far I think that my method is to differentiate this four times and collect up the terms of each [imath]y_p[/imath] according to the original equation and see what A and B are equal to. I assume this is okay to do with linear operators? [imath]y_p' = (Ax^3 + Bx^2)e^x + (3Ax^2 + 2Bx)e^x[/imath] [imath]y_p'' = (Ax^3 +(6A + B)x^2 + (4B + 6A)x + 2B)e^x[/imath] can I collect terms from different linear operators like i did with [imath]y_p''[/imath] and I assume that to find the particular solution I must find [imath]y_p''''[/imath] as it is in the original equation |
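For what it's worth, the bookkeeping in the two questions above can be delegated to sympy. Here is a rough sketch of my own (variable names are mine) that plugs the ansatz [imath]y_p=(Ax^3+Bx^2)e^x[/imath] into the equation and solves for the coefficients:

```python
import sympy as sp

x, A, B = sp.symbols('x A B')
y = (A*x**3 + B*x**2) * sp.exp(x)

# plug the ansatz into y'''' - 2y'' + y = x e^x and divide out e^x
residual = sp.expand((sp.diff(y, x, 4) - 2*sp.diff(y, x, 2) + y - x*sp.exp(x)) / sp.exp(x))

# every coefficient of the resulting polynomial in x must vanish
sol = sp.solve(sp.Poly(residual, x).all_coeffs(), [A, B], dict=True)
print(sol)

# double-check that the particular solution really satisfies the ODE
y_p = y.subs(sol[0])
print(sp.simplify(sp.diff(y_p, x, 4) - 2*sp.diff(y_p, x, 2) + y_p - x*sp.exp(x)))
```

If I did this right, it reports [imath]A=\tfrac{1}{24}[/imath] and [imath]B=-\tfrac18[/imath], and the final check prints [imath]0[/imath].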
2854424 | Knowing that [imath]9a^2+8ab+7b^2 \le 6[/imath], prove that [imath]7a+5b+12ab \le 9[/imath].
Knowing that [imath]9a^2+8ab+7b^2 \le 6[/imath], prove that [imath]7a+5b+12ab \le 9[/imath]. I have found the same question here, but the answer looks wrong. You can't just add two inequalities like him. It's like you would multiply the second one with [imath]-1[/imath], which changes its sign, and then add them. Here is my try: [imath]9a^2 + 9b^2 + 18ab - 2b^2 - 10ab - 6 \le 0 \Longleftrightarrow\\ (3a+3b)^2 - 2(b^2+5ab+3) \le 0 \Longleftrightarrow\\ b^2+5ab+3 \ge 0 \Longleftrightarrow\\ 25a^2 - 12 \le 0 \Longleftrightarrow\\ a \in \left[-\frac{2\sqrt{3}}{5}, \frac{2\sqrt{3}}{5} \right][/imath] | 196128 | Inequality: [imath]7a+5b+12ab\le9[/imath]
If we assume that [imath]a,b[/imath] are real numbers such that [imath]9a^2+8ab+7b^2\le 6[/imath], how to prove that : [imath]7a+5b+12ab\le9[/imath] |
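Not a proof, but a quick numerical experiment of my own (random sampling, with ranges I chose to comfortably contain the constraint ellipse) suggests the bound [imath]9[/imath] is sharp:

```python
import random

best_val, best_pt = float('-inf'), None
for _ in range(200000):
    a = random.uniform(-1.5, 1.5)
    b = random.uniform(-1.5, 1.5)
    if 9*a*a + 8*a*b + 7*b*b <= 6:          # the constraint
        val = 7*a + 5*b + 12*a*b            # the quantity to bound
        if val > best_val:
            best_val, best_pt = val, (a, b)
print(best_val, best_pt)
```

Equality is actually attained at [imath]a=b=\tfrac12[/imath], where [imath]9a^2+8ab+7b^2=6[/imath] and [imath]7a+5b+12ab=9[/imath], which is a useful target to keep in mind when hunting for the right combination of inequalities.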
2854620 | Prove that second partial derivatives does not depend on the order of differentiation
I'm trying to prove that if [imath]\dfrac{\partial^2 f}{\partial x \partial y} \quad \text{and} \quad \dfrac{\partial^2 f}{\partial y \partial x}[/imath] are continuous in an open set containing [imath]a \in \mathbb{R}^2[/imath], then they are equal using the definition of derivative: First, I say [imath] \dfrac{\partial^2f}{\partial x \partial y} = \dfrac{\partial}{\partial x} \left( \dfrac{\partial f}{\partial y} \right) = \dfrac{\partial}{\partial x} \left( \lim_{h \to 0} \dfrac{f(x,y+h)-f(x,y)}{h} \right) = \lim_{k \to 0} \dfrac{1}{k} \left( \lim_{h \to 0} \dfrac{f(x+k,y+h)-f(x+k,y)}{h} - \lim_{h \to 0} \dfrac{f(x,y+h)-f(x,y)}{h} \right) = \\ \lim_{k \to 0} \left( \lim_{h \to 0} \dfrac{f(x+k,y+h)-f(x+k,y)-f(x,y+h)+f(x,y)}{kh} \right) [/imath] Then I do the same thing for [imath] \dfrac{\partial^2 f}{\partial y \partial x} [/imath], and I get [imath] \lim_{h \to 0} \left( \lim_{k \to 0} \dfrac{f(x+k,y+h)-f(x+k,y)-f(x,y+h)+f(x,y)}{kh} \right)[/imath] My question is: am I allow to say that those limits are equal because both [imath]\dfrac{\partial^2 f}{\partial x \partial y} \quad \text{and} \quad \dfrac{\partial^2 f}{\partial y \partial x}[/imath] are continuous on [imath]a[/imath], or do I need something extra? Please let me know if the question is clear enough, or if I made some silly mistakes :) Thanks in advance! | 2718864 | Symmetry of second derivative - Sufficiency of twice-differentiability
Symmetry of second derivative states that for [imath]u=u(x,y)[/imath] if [imath]u_x,u_y[/imath] exists and [imath]u_{xy},u_{yx}[/imath] exists and continuous then [imath]u_{xy}=u_{yx}[/imath]. I proved that statement using the mean value theorem. While I was looking in Wikipedia there is a section called "Sufficiency of twice-differentiability", if [imath]u(x,y):E^{\text{open set}}\subset \Bbb R^2\to \Bbb R[/imath] and [imath]u_x,u_y,u_{yx}[/imath] exists everywhere and [imath]u_{yx}[/imath] is continuous at a point in [imath]E[/imath] then [imath]u_{xy}[/imath] exists at that point and equal to [imath]u_{yx}[/imath]. My question is, while I proved Symmetry of second derivative I had to assume the continuity of [imath]u_{xy}[/imath] and [imath]u_{yx}[/imath], so how can I prove that this is true without even assuming the existence of one of the second derivative? I'm sitting on this for a long time and I couldn't think on any starting point and would love help with this. |
2854761 | Compute [imath]\int{\frac{dx}{\sqrt{x^2+16}}}[/imath]
Here is what I have done so far: [imath]I=\int \frac{dx}{\sqrt{x^2+16}} =\frac{1}{4}\int\sqrt{\frac{1}{\left(\frac{x}{4} \right)^2+1}}\,dx[/imath] [imath]\frac{x}{4}=\tan(u), dx=4\sec^2(u)\,du[/imath] [imath]\therefore I=\int\sqrt{\frac{1}{\tan^2(u)+1}}\sec^2(u)\,du=\int\sec^3(u)\,du[/imath] Now, by integration by parts: [imath]I=\tan(u)\sec(u)-\int\tan^2(u)\sec(u)\,du[/imath] I am sure [imath]\int\sec^3(u)du[/imath] is a standard integral but I am not sure if IBP is the best way to go. Is there another obvious way of doing it?
I can not get the correct answer. [imath]\int \frac {dx}{\sqrt {x^2 + 16}}[/imath] [imath]x = 4 \tan \theta[/imath], [imath]dx = 4\sec^2 \theta[/imath] [imath]\int \frac {dx}{\sqrt {16 \sec^2 \theta}}[/imath] [imath]\int \frac {4 \sec^ 2 \theta}{\sqrt {16 \sec^2 \theta}}[/imath] [imath]\int \frac {4 \sec^ 2 \theta}{4 \sec \theta}[/imath] [imath]\int \sec \theta[/imath] [imath]\ln| \sec \theta + \tan \theta|[/imath] Then I solve for [imath]\theta[/imath]: [imath]x = 4 \tan \theta[/imath] [imath]x/4 = \tan \theta[/imath] [imath]\arctan (\frac{x}{4}) = \theta[/imath] [imath]\ln| \sec (\arctan (\tfrac{x}{4})) + \tan (\arctan (\tfrac{x}{4}))|[/imath] [imath]\ln| \sec (\arctan (\tfrac{x}{4})) + \tfrac{x}{4}))| + c[/imath] This is wrong and I do not know why. |
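As an independent check (my own, using sympy rather than either attempt above), the antiderivative can be computed symbolically and differentiated back:

```python
import sympy as sp

x = sp.symbols('x')
F = sp.integrate(1/sp.sqrt(x**2 + 16), x)
print(F)                                                   # an inverse hyperbolic sine / log form
print(sp.simplify(sp.diff(F, x) - 1/sp.sqrt(x**2 + 16)))   # 0, so F is a valid antiderivative
```

Up to an additive constant this agrees with [imath]\ln\left|x+\sqrt{x^2+16}\right|[/imath], which is exactly what [imath]\ln|\sec\theta+\tan\theta|[/imath] becomes after substituting [imath]\tan\theta=\frac x4[/imath] back (the extra [imath]-\ln 4[/imath] is absorbed into the constant).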
2854949 | Find the density function of [imath]Z=X+Y[/imath] when [imath]X[/imath] and [imath]Y[/imath] are standard normal
Two independent random variables [imath]X[/imath] and [imath]Y[/imath] have standard normal distributions. Find the density function of random variable [imath]Z=X+Y[/imath] using the convolution method. This shows this problem on page 4 but it seems to use some tricks that are, well, pretty tricky. In particular getting behind their third line is beyond me. Is there an alternate way to show this that is more straightforward perhaps even with more (unskipped) steps? | 2171959 | Proof that sum of independent normals is normal using convolutions
Let [imath]X, Y[/imath] be independent standard normal random variables. We already know that [imath]X+Y[/imath] is normal with mean 0 and variance 2. However, I am trying to prove this result in a slightly different way than usual, i.e., using convolutions. We can address the problem in the following way: the sum of the two variables has a density [imath]h(x)[/imath] given by the convolution product of the densities of the two variables, say [imath]f[/imath] and [imath]g[/imath], so that [imath] h(x) = \int_{-\infty}^{\infty} f(x-y)g(y)dy. [/imath] However, [imath] f(y) = g(y) = \frac{1}{\sqrt {2\pi}}\exp(-y^2/2). [/imath] Hence now the integral becomes: [imath] h(x) =\frac{1}{2\pi}\int_{-\infty}^{\infty}\exp(-(x-y)^2/2)\exp(-y^2/2)dy. [/imath] However, I am puzzled here. How should we go on from here? I cannot see how to transform this integral into the integral of a normal r.v of type [imath]N(0,2)[/imath]. Thanks in advance for your help. |
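If it helps as a cross-check, sympy can evaluate the convolution integral directly (a sketch of my own; it only confirms the target density, it does not replace the completing-the-square step):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
phi = lambda t: sp.exp(-t**2/2) / sp.sqrt(2*sp.pi)   # standard normal density

h = sp.integrate(phi(y) * phi(x - y), (y, -sp.oo, sp.oo))
print(sp.simplify(h))
```

The simplified result is [imath]\frac{1}{2\sqrt{\pi}}e^{-x^2/4}[/imath], i.e. the [imath]N(0,2)[/imath] density. The manual route is to complete the square in the exponent, [imath]-\frac{(x-y)^2}{2}-\frac{y^2}{2}=-\left(y-\frac x2\right)^2-\frac{x^2}{4}[/imath], and pull the [imath]y[/imath]-free factor out of the integral, leaving a Gaussian integral worth [imath]\sqrt{\pi}[/imath].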
2855212 | Show that [imath]\int_0^{\pi /2}\frac{dx}{\sqrt{\cos{x}}}=\int_0^{\pi /2}\frac{dx}{\sqrt{\sin{x}}}=\frac{\Gamma^2(\frac{1}{4})}{2\sqrt{2 \pi}}[/imath]
I am interested in showing that: [imath]\int_0^{\pi /2}\frac{dx}{\sqrt{\cos{x}}}=\int_0^{\pi /2}\frac{dx}{\sqrt{\sin{x}}}=\frac{\Gamma^2(\frac{1}{4})}{2\sqrt{2 \pi}}[/imath] I've come across this equality without any further context; could you please help me figure out what I should do?
Compute the Riemann integral [imath]\int_0^{\frac\pi2}\frac{\mathrm d \theta}{\sqrt{\sin \theta}}[/imath] It seems very difficult, I don't know how to go ahead. Thank you very much for your help! |
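Before hunting for a proof, it is reassuring to confirm the claimed value numerically. A quick mpmath check of my own (precision and quadrature routine chosen by me):

```python
import mpmath as mp

mp.mp.dps = 30
lhs = mp.quad(lambda t: 1/mp.sqrt(mp.sin(t)), [0, mp.pi/2])
rhs = mp.gamma(mp.mpf(1)/4)**2 / (2*mp.sqrt(2*mp.pi))
print(lhs)
print(rhs)
```

Both come out to approximately [imath]2.62206[/imath], consistent with [imath]\frac{\Gamma^2(1/4)}{2\sqrt{2\pi}}[/imath]. The usual route to the exact value is the Beta-function identity [imath]\int_0^{\pi/2}\sin^{a-1}x\,dx=\tfrac12 B\!\left(\tfrac a2,\tfrac12\right)[/imath] applied with [imath]a=\tfrac12[/imath].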
2855204 | [imath]f(x^2+yf(x))=xf(x+y)[/imath].Find f
Find all real functions [imath]f:R\mapsto R[/imath] satisfying the relation [imath]f(x^2+yf(x))=xf(x+y).[/imath] My Answer: Putting [imath]y=0[/imath] we get [imath]f(x^2)=xf(x)[/imath], which implies [imath]\frac{f(x^2)}{x^2}=\frac{f(x)}{x}[/imath]. Let [imath]g(x)=\frac{f(x)}{x}[/imath] (assume [imath]x\ne 0[/imath]). Hence [imath]g(x)=g(x^2)[/imath]. Therefore [imath]g(x)=c[/imath], or [imath]f(x)=cx[/imath] (when [imath]x \ne 0[/imath]), and [imath]f(0)=0[/imath] (by putting [imath]x=y=0[/imath]). Is my answer right? If not, kindly tell me the mistake!
To find all functions [imath]f[/imath] which is a real function from [imath]\Bbb R \to \Bbb R[/imath] satisfying the relation [imath]f(x^2 + yf(x)) = xf(x+y)[/imath] It can be easily seen that the identity function [imath]i.e.[/imath] [imath]f(x)=x[/imath] and [imath]f(x)=0[/imath] (verified just now) satisfies the above relation!! And putting [imath]y=0[/imath] I have got [imath]f(x^2)=xf(x)[/imath]. Help needed to find other functions satisfying the relation. |
2854934 | [imath]f[/imath] is continuous in [imath]\mathbb{R}[/imath] if and only if for every open set [imath]G[/imath], the set [imath]f^{-1}(G)=\{x:f(x)\in G\} [/imath] is open.
Suppose [imath]f(x)[/imath] is defined on [imath]\mathbb{R}[/imath]; then [imath]f[/imath] is continuous on [imath]\mathbb{R}[/imath] if and only if for every open set [imath]G[/imath], the set [imath]f^{-1}(G)=\{x:f(x)\in G\}[/imath] is open. I have done necessity: if [imath]f[/imath] is continuous and [imath]G[/imath] is open, then for [imath]x_0\in f^{-1}(G)[/imath] we have [imath]f(x_0)\in G[/imath], and since [imath]f[/imath] is continuous there exists [imath]U(x_0,\delta)[/imath] such that for [imath]x\in U(x_0,\delta)[/imath], [imath]f(x)\in G[/imath]. Thus [imath]U(x_0,\delta)\subset f^{-1}(G)[/imath], which makes [imath]f^{-1}(G)[/imath] an open set. And now I have no idea how to do the sufficiency.
I need to prove that the [imath]\epsilon[/imath]-[imath]\delta[/imath] definition of continuity implies the open set definition continuity for a real function. Here's my attempt. For any basis [imath]V: (a, b)[/imath] in the range, for each [imath]f(x) \in V[/imath], let [imath]\epsilon = \min(f(x) - a, b - f(x))[/imath], then for any [imath]x[/imath] that [imath]f(x) \in V[/imath] according the [imath]\epsilon-\delta[/imath] definition of continuty there must exists a [imath]\delta[/imath] that the open set [imath]U_x : (x - \delta, x + \delta) \subset f^{-1}((f(x) - \epsilon, f(x) + \epsilon)) \subset f^{-1}(V)[/imath] In conclusion, [imath]f^{-1}(V) = \bigcup_{x \in f^{-1}(V)} U_x .[/imath] [imath]f^{-1}(V)[/imath] is an open set. Then for any open set [imath]W[/imath], [imath]f^{-1}(W) = \bigcup_{V \subset W} f^{-1}(V)[/imath] [imath]f^{-1}(W)[/imath] is an open set. So for any open [imath]W[/imath], [imath]f^{-1}(W)[/imath] is also an open set. This is exactly the open set definition of continuty. QED. Is my answer correct? Thanks. |
2854802 | For prime [imath]p[/imath] do we have [imath]p^3+p^2+p+1=n^2[/imath] infinitely often?
This is a question to ponder about how often a prime [imath]p[/imath] gives [imath]p^3 + p^2 + p +1=n^2[/imath], as is true for [imath]7[/imath], which gives [imath]400=20^2[/imath]. Do you think this will ever happen again?
Find the prime numbers [imath]p[/imath] for which [imath]\dfrac{p+1}{2}[/imath] and [imath]\dfrac{p^2+1}{2}[/imath] are both square numbers. I do not know how to use the given assumption that "p is prime". I just know that [imath]p=7[/imath] works. If [imath]\dfrac{p+1}{2}=X^2[/imath] and [imath]\dfrac{p^2+1}{2}=Y^2[/imath] then [imath](X^2;X^2-1;Y)[/imath] is a Pythagorean triple.
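A small search of my own (range chosen by me; not a proof of uniqueness) for primes [imath]p[/imath] with [imath]p^3+p^2+p+1[/imath] a perfect square:

```python
from sympy import primerange
from math import isqrt

hits = []
for p in primerange(2, 10**6):
    s = p**3 + p**2 + p + 1          # equals (p + 1)(p^2 + 1)
    r = isqrt(s)
    if r * r == s:
        hits.append((p, r))
print(hits)
```

This should report only [imath](7, 20)[/imath] in that range. The factorization [imath]p^3+p^2+p+1=(p+1)(p^2+1)[/imath] is also how the two questions above are linked: for odd [imath]p[/imath] the two factors share only a factor of [imath]2[/imath], so their product is a square exactly when [imath]\frac{p+1}{2}[/imath] and [imath]\frac{p^2+1}{2}[/imath] are both squares.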
2855346 | Ring with additive group isomorphic to group of units
Is there a Ring [imath]R[/imath] with [imath](R,+) \cong (R^\times,\cdot)[/imath]? If [imath]R[/imath] is finite, clearly only the trivial ring does it (for cardinality reasons). But what about infinite rings? Are there even fields as example? | 1870675 | Let [imath]R[/imath] be a commutative unital ring. Is it true that the group of units of [imath]R[/imath] is not isomorphic with the additive group of [imath]R[/imath]?
Let [imath]R[/imath] be a commutative ring with unity, and let [imath]R^{\times}[/imath] be the group of units of [imath]R[/imath]. Then is it true that [imath](R,+)[/imath] and [imath](R^{\times},\ \cdot)[/imath] are not isomorphic as groups ? I know that the statement is true in general for fields. And it is trivially true for any finite ring (as [imath]|R^{\times}| \le |R|-1<|R|[/imath], so they are not even bijective). I can show that the groups are not isomorphic whenever [imath]\operatorname{char} R \ne 2[/imath] , but I am unable to deal with [imath]\operatorname{char} R=2[/imath] case ... Please help. Thanks in advance. |
2855294 | Prove that [imath]x_n\to x\implies \liminf\limits_{n\to\infty}f(x_n)\leq f(x)[/imath]
Let [imath]\{f_n\}[/imath] be a sequence of real-valued functions on [imath][0,1][/imath] and let [imath]M[/imath] be a real number such that: (i) for each [imath]n,[/imath] [imath]f_n[/imath] is continuous; (ii) for each [imath]x\in[0,1][/imath] and for each [imath]n,[/imath] [imath]f_n(x)\leq f_{n+1}(x)\leq M.[/imath] Prove that [imath]\{f_n\}[/imath] converges pointwise on [imath][0,1][/imath] to a function [imath]f[/imath] which is lower semicontinuous (that is, for all [imath]\{x_n\}\subset [0,1][/imath] and for all [imath]x\in [0,1],[/imath] [imath]x_n\to x\implies \liminf\limits_{n\to\infty}f(x_n)\leq f(x)[/imath]). Please, can anyone help me out with this?
Suppose [imath]F[/imath] is a nondecreasing and right continuous function, and the sequence [imath]\{x_n\}_{n\geq1}[/imath] converges to [imath]x[/imath]. Then [imath]\liminf\limits_{n\to\infty}F(x_n)\leq F(x)[/imath]. How can I prove this? |
2855360 | Showing that [imath]x \mapsto |x|^p[/imath] is strictly mid-point convex for [imath]2 \leq p < \infty[/imath]
Let [imath]2 \leq p < \infty[/imath] and consider the function [imath]f : \mathbb{R}^n \to \mathbb{R}[/imath] defined by [imath]f(x) := |x|^p.[/imath] Then this function is mid-point convex (in fact strictly), i.e. we have that [imath]f\left(\frac{x + y}{2}\right) \leq \frac{f(x) + f(y)}{2}[/imath] holds for all [imath]x,y \in \mathbb{R}^n[/imath]. Is there a nice way of showing this? | 2200155 | Elementary proof that [imath]|x|^p[/imath] is convex.
I'm writing some notes about analysis and want to use the fact that [imath]|x|^p[/imath] is convex for every [imath]p>1[/imath] to prove Minkowski's inequality. However, I haven't written anything about derivatives or limits yet. Is there a simple way to prove this? EDIT: The only non-trivial inequalities proved so far are the triangle inequality and the Cauchy-Schwarz inequality.
2855730 | [imath]\forall \alpha \in [0,1] \exists B_{\alpha} \in \mathcal B(\Bbb R): \Bbb P(B_{\alpha}) = \alpha[/imath]
Let [imath]\Bbb P[/imath] be a probability measure on [imath]\Bbb R[/imath] with [imath]\Bbb P(\{x\}) = 0[/imath] for all [imath]x \in \Bbb R[/imath]. I want to show that for all [imath]\alpha \in [0,1][/imath] there exists a [imath]B_{\alpha} \in \mathcal B(\Bbb R)[/imath] such that [imath]\Bbb P(B_{\alpha}) = \alpha[/imath]. My approach is to take an interval [imath]I_m = [-m,m][/imath] (let's assume [imath]\Bbb P([-m,m]) > \alpha[/imath]) and construct a sequence of subintervals [imath][-m + q_k, m - q'_k][/imath] with [imath](q_k,q'_k) \in \Bbb Q_+^2[/imath] such that [imath][-m + q_k, m - q'_k] \supseteq [-m + q_{k+1}, m - q'_{k+1}][/imath] and [imath]\Bbb P ([-m + q_k, m - q'_k]) \ge \alpha [/imath] for all [imath]k \in \Bbb N[/imath]. Then my idea was to use the continuity from above of the measure but I'm not sure whether this sequence indeed converges to a set with measure [imath]\alpha[/imath]. | 2851753 | continuity of a probability measure if [imath]\mu (\{x\})=0[/imath]
Let [imath]\mu[/imath] be a probability measure on [imath]\mathbb R[/imath] with [imath]\mu\{x\}=0\quad \forall x\in\mathbb R[/imath]. Show that for all [imath]\alpha \in [0,1][/imath] there exists [imath]B_\alpha \in\mathcal B(\mathbb R)[/imath] with [imath]\alpha=\mu (B_\alpha)[/imath]. My thoughts: We have to show that the distribution function [imath]F(y)=\mu(-\infty,y)[/imath] is continuous. Then we can apply the intermediate value theorem. I am not sure how to show that this function is continuous. Some help is welcome!
2855619 | Prove that [imath] \left\lfloor{\frac xn}\right\rfloor= \left\lfloor{\lfloor{x}\rfloor\over n}\right\rfloor[/imath] where [imath]n \ge 1, n \in \mathbb{N}[/imath]
Prove that [imath] \left\lfloor{\frac xn}\right\rfloor= \left\lfloor{\lfloor{x}\rfloor\over n}\right\rfloor[/imath] where [imath]n \ge 1, n \in \mathbb{N}[/imath] and [imath]\lfloor{.}\rfloor[/imath] represents the Greatest Integer [imath]\le x[/imath] (floor) function. I tried to prove it by writing [imath]x = \lfloor{x}\rfloor + \{x\} [/imath] where [imath] \{.\}[/imath] represents the Fractional Part function and [imath] 0 \le \{x\} < 1[/imath]. So we get [imath] \lfloor{\frac xn}\rfloor= \lfloor{{\lfloor x\rfloor\over n}+ {\{x\}\over n}}\rfloor \tag{1}[/imath] Then I tried to use the property [imath]\lfloor{x+y}\rfloor =\begin{cases} \lfloor x\rfloor + \lfloor y\rfloor & \text{if } 0\le \{x\} + \{y\} < 1 \\ 1+ \lfloor x\rfloor + \lfloor y\rfloor & \text{if } 1\le \{x\} + \{y\} < 2 \end{cases} \tag{2}[/imath] So if I can prove that [imath](1)[/imath] falls into the first case of [imath](2)[/imath], I'll have [imath] \lfloor{\frac xn}\rfloor= \lfloor{{\lfloor x\rfloor\over n}}\rfloor+ \lfloor{\{x\}\over n}\rfloor = \lfloor{\lfloor{x}\rfloor\over n}\rfloor[/imath] as the second term will come out to be zero. However, I am unable to prove this. Can someone help me out with this proof by showing me how [imath](1)[/imath] falls into the first case of [imath](2)[/imath] and proving the claim using this method, and also by giving a clear proof using a simpler method?
How to prove or disprove [imath]\forall x\in\Bbb{R}, \forall n\in\Bbb{N},n\gt 0\implies \left\lfloor\frac{\lfloor x\rfloor}{n}\right\rfloor=\left\lfloor\frac{x}{n}\right\rfloor[/imath]. So we want to prove [imath]\left\lfloor\frac{\lfloor x\rfloor}{n}\right\rfloor\ge\left\lfloor\frac{x}{n}\right\rfloor[/imath] and [imath]\left\lfloor\frac{\lfloor x\rfloor}{n}\right\rfloor\le\left\lfloor\frac{x}{n}\right\rfloor[/imath] Since [imath]\lfloor x\rfloor\le x[/imath], we can just start from here and prove [imath]\left\lfloor\frac{\lfloor x\rfloor}{n}\right\rfloor\le\left\lfloor\frac{x}{n}\right\rfloor[/imath] But for [imath]\left\lfloor\frac{\lfloor x\rfloor}{n}\right\rfloor\ge\left\lfloor\frac{x}{n}\right\rfloor[/imath], I have no idea how to start. |
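Not a substitute for a proof, but a quick randomized check of my own (I use exact rationals from the fractions module to avoid floating-point edge cases; the sampling ranges are arbitrary):

```python
from fractions import Fraction
import math
import random

for _ in range(100000):
    x = Fraction(random.randint(-10**6, 10**6), random.randint(1, 1000))
    n = random.randint(1, 50)
    lhs = math.floor(x / n)
    rhs = math.floor(Fraction(math.floor(x), n))
    assert lhs == rhs, (x, n)
print("no counterexample found")
```

No counterexample appears, including for negative [imath]x[/imath], which matches the identity both questions are asking about.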
2855944 | Pushforward of sheaf of relative differentials in family of elliptic curves
Update: never mind, this question has been asked before here. Let [imath]f:E \to S[/imath] be an elliptic curve (the precise definition is given below, from Hida's book Geometric Modular Forms and Elliptic Curves). Is [imath]f_*\Omega_{E/S} \cong \mathcal{O}_S[/imath]? I would think so by Grothendieck-Serre duality and (2.15) in the image below, but a few pages later Hida says the pushforward in question is invertible on [imath]S[/imath], i.e. locally free on [imath]S[/imath], while I seem to think it is globally free.
If [imath]f:E\rightarrow S[/imath] is an elliptic curve over a scheme [imath]S[/imath] (so [imath]f[/imath] is proper and smooth of relative dimension one with geometrically connected fibers of genus one, equipped with a section [imath]0:S\rightarrow E[/imath]), then is the sheaf [imath]\underline{\omega}_{E/S}:=f_*\Omega_{E/S}^1[/imath] actually free of rank one? According to the statement of Grothendieck-Serre duality in Hida's book Geometric Modular Forms and Elliptic Curves, there should be a canonical isomorphism [imath]\underline{\omega}_{E/S}\cong\mathcal{Hom}_{\mathscr{O}_S}(R^1f_*\mathscr{O}_E,\mathscr{O}_S)[/imath], and the first result on elliptic curves in this book is that [imath]R^1f_*\mathscr{O}_E\cong\mathscr{O}_S[/imath]. It's also not clear to me whether [imath]S[/imath] is being assumed (locally) Noetherian, but if that's necessary for Grothendieck-Serre duality to hold, then I'm fine with assuming it. I can't find another reference with a statement of Grothendieck-Serre duality in this generality which does not use the language of derived categories (which I unfortunately don't understand). The reason I'm kind of skeptical about this is that in Hida's book, as well as in Katz-Mazur, it is said that [imath]\underline{\omega}_{E/S}[/imath] is invertible, so that an [imath]\mathscr{O}_S[/imath]-basis [imath]\omega[/imath] for [imath]\underline{\omega}_{E/S}[/imath] can be found locally on [imath]S[/imath]. If the invertible sheaf in question were really trivial then one would be able to choose a global [imath]\mathscr{O}_S[/imath]-basis for [imath]\underline{\omega}_{E/S}[/imath], and there would be no reason to talk about doing so locally. Hida goes on to say that, choosing an [imath]\mathscr{O}_S[/imath]-basis [imath]\omega[/imath] locally on [imath]S[/imath] allows one to regard [imath](\Omega_{E/S}^1,\omega)[/imath] as a relative effective Cartier divisor in [imath]E/S[/imath], which also doesn't make complete sense to me because if we can only find [imath]\omega[/imath] locally, how are we getting a global section [imath]\omega\in H^0(E,\Omega_{E/S}^1)=H^0(S,\underline{\omega}_{E/S})[/imath] (unless [imath]\underline{\omega}_{E/S}[/imath] really is trivial)? |
2856632 | What is the least value of [imath]\tan^2 \theta + \cot^2 \theta + \sin^2 \theta + \cos^2 \theta + \sec^2 \theta+ \textrm{cosec}^2 \theta[/imath]?
What is the least value of this expression? [imath]\tan^2 \theta + \cot^2 \theta + \sin^2 \theta + \cos^2 \theta + \sec^2 \theta+ \textrm{cosec}^2 \theta[/imath] Will putting [imath]\theta=45^{\circ}[/imath] give the right answer?
Find the minimum value of [imath]\sin^{2} \theta +\cos^{2} \theta+\sec^{2} \theta+\csc^{2} \theta+\tan^{2} \theta+\cot^{2} \theta[/imath] [imath]a.)\ 1 \ \ \ \ \ \ \ \ \ \ \ \ b.)\ 3 \\ c.)\ 5 \ \ \ \ \ \ \ \ \ \ \ \ d.)\ 7 [/imath] [imath]\sin^{2} \theta +\cos^{2} \theta+\sec^{2} \theta+\csc^{2} \theta+\tan^{2} \theta+\cot^{2} \theta \\ =\sin^{2} \theta +\dfrac{1}{\sin^{2} \theta }+\cos^{2} \theta+\dfrac{1}{\cos^{2} \theta }+\tan^{2} \theta+\dfrac{1}{\tan^{2} \theta } \\ \color{blue}{\text{By using the AM-GM inequality}} \\ \color{blue}{x+\dfrac{1}{x} \geq 2} \\ =2+2+2=6 [/imath] This is not among the options. But I am not sure if I can use that [imath] AM-GM[/imath] inequality in this case. I am looking for a short and simple way. I have studied maths up to [imath]12[/imath]th grade.
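A quick numerical look of my own (grid and offsets chosen by me to stay away from the poles at [imath]0[/imath] and [imath]\pi/2[/imath]) suggests why the AM-GM bound of [imath]6[/imath] is not attained:

```python
import numpy as np

t = np.linspace(0.01, np.pi/2 - 0.01, 200000)
expr = (np.sin(t)**2 + np.cos(t)**2 + 1/np.cos(t)**2 + 1/np.sin(t)**2
        + np.tan(t)**2 + 1/np.tan(t)**2)
i = expr.argmin()
print(expr[i], t[i])
```

The minimum on this grid is about [imath]7[/imath], attained near [imath]\theta=\pi/4[/imath]; equality in each AM-GM pair would force [imath]\sin^2\theta=\cos^2\theta=\tan^2\theta=1[/imath], which cannot all hold simultaneously, so the estimate [imath]6[/imath] is not achievable.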
2857077 | Why unit circle is not diffeomorphic to the real line
I read from a text book that unit circle ([imath]\mathbb{S}[/imath]) is not diffeomorphic to the real line. This result is intuitive since we cannot construct a smooth function from [imath]\mathbb{S}[/imath] to [imath]\mathbb{R}[/imath] such that it is onto (e.g., [imath]f(\rho)=(\cos(\rho),\sin(\rho))[/imath] defined from [imath]\mathbb{R}[/imath] to [imath]\mathbb{S}[/imath] is not onto). How can I prove this result? Incidentally, I am not familiar with advanced topics in topology. Thank you, in advance, for your response! | 2778159 | Show that unit circle is not homeomorphic to the real line
Show that [imath]S^1[/imath] is not homeomorphic to either [imath]\mathbb{R}^1[/imath] or [imath]\mathbb{R}^2[/imath] [imath]\mathbf{My \ solution}[/imath]: So first we will show that [imath]S^1[/imath] is not homeomorphic to [imath]\mathbb{R}^1[/imath]. To show that they are not homeomorphic we need to find a property that holds in [imath]S^1[/imath] but does not hold in [imath]\mathbb{R}^1[/imath] or vice-versa. [imath]S^1[/imath] is compact however [imath]\mathbb{R}^1[/imath] is not compact. The set [imath]\{1\} [/imath] is closed, and the map [imath]f: \Bbb R^2 \longrightarrow \Bbb R,[/imath] [imath](x, y) \mapsto x^2 + y^2[/imath] is continuous. Therefore the circle [imath]\{(x,y) \in \Bbb R^2 : x^2 + y^2 = 1\} = f^{-1}(\{1\})[/imath] is closed in [imath]\Bbb R^2[/imath]. Set [imath]S^1[/imath] is also bounded, since, for example, it is contained within the ball of radius [imath]2[/imath] centered at 0 of [imath]\Bbb R^2[/imath] (in the standard topology of [imath]\Bbb R^2[/imath]). Hence it is also compact. However real line [imath]\Bbb R^1[/imath] is not because there is a cover of open intervals that does not have a finite subcover. For example, intervals (n−1, n+1) , where n takes all integer values in [imath]\mathbb{Z}[/imath], cover [imath]\mathbb{R}[/imath] but there is no finite subcover. Hence [imath]S^1[/imath] can not be isomorphic to [imath]\mathbb{R}^1[/imath]. How to show now that [imath]S^1[/imath] is not homeomorphic to [imath]\mathbb{R}^2[/imath]? Can i show it now in the same way? They can not be homeomorphic since [imath]S^1[/imath] is compact however [imath]\mathbb{R}^2[/imath] not. How to show that [imath]\mathbb{R}^2[/imath] is not compact? |
2857101 | In triangle [imath]ABC,[/imath] we have [imath]\angle B = 2\angle A.[/imath] Prove that [imath]b^2 = a(a+c).[/imath]
Here, I have a very interesting problem that I found. In triangle [imath]ABC,[/imath] we have [imath]\angle B = 2\angle A.[/imath] Prove that [imath]b^2 = a(a+c).[/imath] Here is how far I have gotten: Here, [imath]\angle B= 2 \angle A[/imath] and therefore, [imath]\angle C = 180- 3 \angle A[/imath]. Now, I use the Law of Cosines because if I distribute out the RHS of the equation we need to prove, we get [imath]b^2 = a^2 + ac[/imath], which is close to what I would get by the Law of Cosines. If I use Law of Cosines, I get the following possibilities: [imath] a^2 = b^2+c^2 -2bc\cdot \cos A\\ \boxed{b^2 = a^2+c^2 -2ac\cdot \cos B} \\ c^2 = a^2+b^2 -2ab\cdot \cos C.[/imath] Now what would I do? Thanks in advance for any help. | 557704 | In a triangle [imath]\angle A = 2\angle B[/imath] iff [imath]a^2 = b(b+c)[/imath]
Prove that in a triangle [imath]ABC[/imath], [imath]\angle A = \angle 2B[/imath], if and only if: [imath]a^2 = b(b+c)[/imath] where [imath]a, b, c[/imath] are the sides opposite to [imath]A, B, C[/imath] respectively. I attacked the problem using the Law of Sines, and tried to prove that if [imath]\angle A[/imath] was indeed equal to [imath]2\angle B[/imath] then the above equation would hold true. Then we can prove the converse of this to complete the proof. From the Law of Sines, [imath]a = 2R\sin A = 2R\sin (2B) = 4R\sin B\cos B[/imath] [imath]b = 2R\sin B[/imath] [imath]c = 2R\sin C = 2R\sin(180 - 3B) = 2R\sin(3B) = 2R(\sin B\cos(2B) + \sin(2B)\cos B)[/imath] [imath]=2R(\sin B(1 - 2\sin^2 B) +2\sin B\cos^2 B) = 2R(\sin B -2\sin^3 B + 2\sin B(1 - \sin^2B))[/imath] [imath]=\boxed{2R(3\sin B - 4\sin^3 B)}[/imath] Now, [imath]\implies b(b+c) = 2R\sin B[2R\sin B + 2R(3\sin B - 4\sin^3 B)][/imath] [imath]=4R^2\sin^2 B(1 + 3 - 4\sin^2 B)[/imath] [imath]=16R^2\sin^2 B\cos^2 B = a^2[/imath] Now, to prove the converse: [imath]c = 2R\sin C = 2R\sin (180 - (A + B)) = 2R\sin(A+B) = 2R\sin A\cos B + 2R\sin B\cos A[/imath] [imath]a^2 = b(b+c)[/imath] [imath]\implies 4R^2\sin^2 A = 2R\sin B(2R\sin B + 2R\sin A\cos B + 2R\sin B\cos) [/imath] [imath] = 4R^2\sin B(\sin B + \sin A\cos B + \sin B\cos A)[/imath] I have no idea how to proceed from here. I tried replacing [imath]\sin A[/imath] with [imath]\sqrt{1 - \cos^2 B}[/imath], but that doesn't yield any useful results. |
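A quick numerical check of my own via the Law of Sines (only for the first statement, [imath]\angle B=2\angle A \Rightarrow b^2=a(a+c)[/imath]; the angle range is chosen so that all three angles stay positive):

```python
import numpy as np

for A in np.linspace(0.05, np.pi/3 - 0.05, 7):
    B = 2 * A
    C = np.pi - A - B
    a, b, c = np.sin(A), np.sin(B), np.sin(C)   # sides are proportional to these
    print(round(b**2 - a*(a + c), 12))
```

Every line prints essentially [imath]0[/imath], reflecting the identity [imath]\sin^2 2A=\sin A(\sin A+\sin 3A)[/imath], which follows from [imath]\sin A+\sin 3A=2\sin 2A\cos A[/imath].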
2856244 | Formula for finding area for all polygons
I have an image below, kinda hard to see, but I was wondering if anyone has seen this formula, what it's from, and how to use it. Thanks. [imath]A=\frac{1}{2}\sum\limits_{i=0}^{n-1}(x_iy_{i+1}-x_{i+1}y_i)[/imath]
I was searching for methods on how to calculate the area of a polygon and stumbled across this: http://www.mathopenref.com/coordpolygonarea.html. [imath] \mathop{area} = \left\lvert\frac{(x_1y_2 - y_1x_2) + (x_2y_3 - y_2x_3) + \cdots + (x_ny_1 - y_nx_1)}{2} \right\rvert [/imath] where [imath]x_1,\ldots,x_n[/imath] are the [imath]x[/imath]-coordinates and [imath]y_1,\ldots,y_n[/imath] are the [imath]y[/imath]-coordinates of the vertices. It does work and all, yet I do not fully understand why this works. As far as I can tell you take the area of each triangle between two points. Basically you repeat the formula of [imath]\frac{1}{2} * h * w[/imath] for each of the triangles and take the sum of them? Yet doesn't this leave a "square" in the center of the polygon that is not taken into account? (Apparently not since the correct answer is produced yet I cannot understand how). If someone could explain this some more to me I would be grateful.
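To make the formula concrete, here is a tiny implementation of my own (the function name and test polygons are mine) applied to a unit square and to a non-convex L-shape:

```python
def shoelace_area(vertices):
    """Area of a simple polygon given its vertices in order (either orientation)."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x_i, y_i = vertices[i]
        x_j, y_j = vertices[(i + 1) % n]   # next vertex, wrapping back to the first
        s += x_i * y_j - x_j * y_i
    return abs(s) / 2

print(shoelace_area([(0, 0), (1, 0), (1, 1), (0, 1)]))                    # 1.0
print(shoelace_area([(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]))    # 3.0
```

The L-shaped example is the kind of case the question worries about: each term [imath]\tfrac12(x_iy_{i+1}-x_{i+1}y_i)[/imath] is a signed triangle area with respect to the origin, and the over-counted and under-counted pieces cancel, so nothing in the middle is left out.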
2857115 | Can I say "let [imath]f : \mathbb{R} \mapsto \mathbb{R}[/imath]" without the axiom of choice?
Indeed letting [imath] f[/imath] be in [imath]\mathbb{R}^\mathbb{R}[/imath] seems like it requires making [imath]2^{\aleph_0}[/imath] choices. | 856341 | Do we need Axiom of Choice to make infinite choices from a set?
According to the answers to this question, we do not need choice to pick from a finite product of nonempty sets, even if each of the sets is infinite. The axiom of choice is required to ensure that a infinite product of nonempty sets is non-empty. i.e. [imath]\prod_{i \in I} A_i \neq 0[/imath]. Now, let [imath]A_i = \mathbb{R}[/imath]. The answers to this question (and the one linked above) says we do not need choice to pick an element [imath]x_0 \in \mathbb{R}[/imath]. Suppose, I want an arbitrary sequence of real numbers [imath]X = (x_n)_{n =1}^{\infty}[/imath]. Then, I will have to make an infinite number of "picks" from [imath]\mathbb{R}[/imath]. Is it right to say that the resulting sequence [imath]X \in \prod_{i \in I} \mathbb{R}[/imath] and that we need choice to ensure that it exists? Why or why not? |
2857386 | Continuity of 1/x/x
We know that if [imath]f(x)[/imath] and [imath]g(x)[/imath] are continuous in a domain then [imath]f(x)/g(x)[/imath] is continuous in the domain except for those elements in the domain for which [imath]g(x) = 0[/imath]. If I take two functions [imath]f(x) = x[/imath] and [imath]g(x) = 1/x[/imath], then [imath]f(x)[/imath] is continuous for all [imath]R[/imath] and [imath]g(x)[/imath] is continuous for [imath]R-\{0\}[/imath] but if I take [imath]f(x)/g(x)[/imath] which is [imath]x/1/x[/imath] or [imath]x^2[/imath] then it is continuous for all [imath]R[/imath] even when [imath]0[/imath] was not in the domain of [imath]g(x)[/imath]. Is this correct? | 216313 | Is [imath]f(x) = x/x[/imath] the same as [imath]f(x) = 1[/imath]?
Let [imath]f(x) = \frac{x}{x}[/imath]. Is it correct to say that [imath]f(x) \ne 1[/imath], since [imath]f(x)[/imath] has a discontinuity at [imath]x=0[/imath]? |
2721172 | Homeomorphism between [imath][0,\infty)^n[/imath] with Upper Half-Space [imath]\mathbb{H}^n[/imath]
I've been working through an exercise in Lee's Smooth Manifolds. The author asks us to show that the space [imath]\bar{\mathbb{R}}_+^n := [0,\infty)^n[/imath] is homeomorphic to the upper half-space [imath]\mathbb{H}^n := \mathbb{R}^{n-1}\times[0,\infty)[/imath], both equipped with the usual topology. I managed to construct a homeomorphism [imath]f : \bar{\mathbb{R}}_+^2 \to \mathbb{H}^2[/imath] defined by restriction of the map [imath]f(z) = z \, e^{i\theta} = r \, e^{i2\theta}[/imath] to [imath]\bar{\mathbb{R}}_+^2=[0,\infty) \times [0,\infty) \subset \mathbb{R}^2 \approx \mathbb{C}[/imath]. Alternatively, a more pleasing map is [imath]f(z) = z^2[/imath]. So we have [imath] \mathbb{H}^2\approx \bar{\mathbb{R}}^2_+. [/imath] To proceed I tried the [imath]n=3[/imath] case but it's messy. After a while I came up with this induction: Suppose that [imath]\mathbb{H}^n\approx \bar{\mathbb{R}}^n_+[/imath] for all dimensions less than or equal to [imath]n[/imath]. Clearly this holds for [imath]n=1[/imath]. By definition of upper half-space, \begin{align} \bar{\mathbb{R}}^{n+1}_+ = \bar{\mathbb{R}}^{n}_+ \times \bar{\mathbb{R}}_+ &\approx \mathbb{H}^n \times \bar{\mathbb{R}}_+ = \mathbb{R}^{n-1} \times \bar{\mathbb{R}}_+ \times \bar{\mathbb{R}}_+ \\ &\approx \mathbb{R}^{n-1} \times \mathbb{H}^2 = \mathbb{R}^{n-1} \times \mathbb{R} \times \bar{\mathbb{R}}_+ = \mathbb{R}^n \times \bar{\mathbb{R}}_+ = \mathbb{H}^{n+1}. \end{align} In the argument above I use the following result: if [imath]A \approx B[/imath] then [imath]A \times C \approx B \times C[/imath], which I think can be done by setting [imath]f : A \times C \to B \times C[/imath] to be [imath]f(a,c) = (\varphi(a),c)[/imath], where [imath]\varphi[/imath] is a homeomorphism between [imath]A[/imath] and [imath]B[/imath]. Is this proof correct? Is there any other (correct) proof for this? I tried googling it but I found nothing. If possible, can anyone give me references for manifolds with corners? Thank you.
Let [imath]\mathbb{H}^n = \{(x^1, \dots , x^n) \in \mathbb{R}^n : x^n \ge 0\}[/imath], and [imath]\overline{\mathbb{R}}^n_+ = \{(x^1, \dots , x^n) \in \mathbb{R}^n : x^1 \ge 0, \dots, x^n \ge 0\}[/imath]. Endow each set with the subspace topology it inherits from [imath]\mathbb{R}^n[/imath]. Exercise 16.18 on page 415 of Lee's Introduction to Smooth Manifolds (2nd edition) asks us to show that [imath]\mathbb{H}^n[/imath] and [imath]\overline{\mathbb{R}}^n_+[/imath] are homeomorphic. However, it seems to me the spaces are not homeomorphic, as shown by the following argument. Suppose [imath]f: \overline{\mathbb{R}}^n_+ \to \mathbb{H}^n[/imath] is a homeomorphism. Set [imath]f(0, \dots , 0) = (a^1, \dots, a^n)[/imath]. Then [imath]A = f(\overline{\mathbb{R}}^n_+ \setminus \{(0, \dots, 0)\}) = f((\infty,0) \times \cdots \times (\infty,0)) = \\ ((\infty, a^1) \cup (a^1, - \infty)) \times \cdots \times ((\infty, a^n) \cup (a^n, 0]),[/imath] where, if [imath]a^n= 0[/imath], then in the last factor we discard [imath](a^n, 0][/imath]. The set [imath]A[/imath] is connected as it is the continuous image of the connected set [imath](\infty, 0) \times (\infty,0)[/imath]. But at the same time we see that [imath]A[/imath] is disconnected as it is the union of the disjoint relatively open sets [imath](\infty, a^1) \times \cdots \times (\infty, a^n)[/imath] and [imath] (a^1, - \infty) \times \cdots \times (a^n, 0][/imath] (or [imath](\infty, a^1) \times \cdots \times (\infty, 0)[/imath] and [imath](a^1, - \infty) \times \cdots \times (\infty, 0)[/imath] in case [imath]a^n = 0[/imath]). Is there a mistake in my argument or something I am missing? Any comments are greatly appreciated. |
2857534 | Proof the sum of odd cubes using induction
I have [imath]1^3 + 3^3 + ... + (2n + 1)^3 = (n+1)^2(2n^2 + 4n + 1)[/imath] So, if [imath]A_r = (r + 1)^2(2r^2 + 4r + 1)[/imath] is true, then [imath]A_{r+1} = (r+1)^2(2r^2 + 4r + 1) + (2r + 3)^3[/imath] And now I can't transform the expression above into the form [imath](r + 2)^2(2(r + 1)^2 + 4(r+1) + 1)[/imath] I tried to expand these terms and got [imath]2r^4 + 16r^3 + 47r^2 + 60r + 28[/imath], but it seems to be a very difficult expression. I will be grateful for any hints.
Use mathematical induction to show that: [imath]1^3 + 3^3 + 5^3 + ... + (2n + 1)^3 = (n+1)^2(2n^2+4n+1)[/imath] whenever [imath]n[/imath] is a positive integer. I am able to solve this problem, but the thing that confuses me on this is the Basis Step. P(1): [imath](2(1) + 1)^3[/imath] [imath]\neq[/imath] [imath]4(7)[/imath], does this mean that this problem can not be proved using mathematical induction, or am I missing something? Thanks. |
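For the inductive step, the algebra the first question gets stuck on can be confirmed with sympy (a quick check of my own):

```python
import sympy as sp

r = sp.symbols('r')
lhs = (r + 1)**2 * (2*r**2 + 4*r + 1) + (2*r + 3)**3
rhs = (r + 2)**2 * (2*(r + 1)**2 + 4*(r + 1) + 1)
print(sp.expand(lhs - rhs))   # prints 0
```

Both sides expand to [imath]2r^4 + 16r^3 + 47r^2 + 60r + 28[/imath], so the induction step does go through; the remaining work is just regrouping that quartic into the factored form. As for the basis step in the second question, note that for [imath]n=1[/imath] the left side is the whole sum [imath]1^3+3^3=28=4\cdot 7[/imath], not just the last term, so it does check out.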
2858020 | Prove positivity of [imath]f[/imath] when [imath]f'' > f[/imath]
I'm given that [imath]f:\mathbb{R} \to \mathbb{R}[/imath] is twice differentiable and [imath]f(0) = f'(0) =1[/imath]. Assuming that [imath]f''(x) > f(x)[/imath] everywhere show that [imath]f(x) > 0[/imath] for all [imath]x[/imath]. I know that [imath]f[/imath] and [imath]f’[/imath] are continuous ([imath]f’’[/imath] exists). Since [imath]f(0) = f’(0)= 1[/imath] there is some [imath]\delta > 0[/imath] where [imath]f(x), f’(x) > 0[/imath] for [imath]-\delta \leq x \leq \delta[/imath]. I tried using the second-order Taylor approximation to extend the interval but I cannot see how to show [imath]f(x) > 0[/imath] for all [imath]x < -\delta[/imath]. | 1684265 | If [imath]\,f''(x) \ge f(x)[/imath], for all [imath]x\in[0,\infty),[/imath] and [imath]\,f(0)=f'(0)=1[/imath], then is [imath]\,f(x)>0[/imath]?
Let [imath]f:[0,\infty) \to \mathbb R[/imath] be a twice differentiable function, such that [imath]\,f''(x) \ge f(x)[/imath], for all [imath]x\in\ [0,\infty)[/imath], and [imath] f(0)=f'(0)=1[/imath]. Can we deduce that [imath]f[/imath] is increasing? I feel like it is, but I cannot see it. I can only show that to get it increasing it is enough to show that [imath]f[/imath] is non negative . |
2858698 | Prove that [imath]x_{n+1}=\frac{1}{4-x_n}[/imath] converges
Let [imath]x_1 =3.[/imath] Prove that [imath]x_{n+1}=\frac{1}{4-x_n}[/imath] converges. The sequence seems to be converging to a limit near [imath]0.27[/imath], as the first few terms are [imath]3,1,\frac{1}{3},\frac{3}{11},\frac{11}{41},\frac{41}{153}, \cdots[/imath] In order to prove convergence I know I must make use of the definition that a sequence [imath](x_n)[/imath] converges to [imath]x[/imath] if for every [imath]\epsilon>0[/imath] there exists an [imath]N[/imath] such that [imath]n \ge N \Rightarrow |x_n-x|\lt \epsilon[/imath]. But how do I choose [imath]N[/imath] as a function of [imath]\epsilon[/imath]? What can I assume? Could someone show me what a proof for this looks like? Thanks.
I would like some advice on how to solve problems like the following: Let [imath](x_n)[/imath] be a sequence defined by [imath]x_1= 3[/imath] and [imath]x_{n+1} = \frac{1}{4-x_n}[/imath]. Prove that the sequence converges. My strategy is to use the Monotone Convergence Theorem, but I am having trouble showing that the sequence is decreasing and bounded below. Here's my work so far: Decreasing: The first 4 values are [imath]3,1,1/3,3/11[/imath], so let's assume [imath]x_n \leq x_{n-1} \leq \ldots \leq x_1[/imath]. Want to show [imath]x_{n+1} \leq x_n[/imath]. We have [imath]x_{n+1} = \frac{1}{4-x_n}[/imath]. I want to have an upper bound for the RHS, but can't find one and don't really know where to go from here. Bounded below: I wanted to show that all values are positive, but if we assume [imath]x_n > 0[/imath], that doesn't rule out [imath]x_{n+1}[/imath] from being negative. |
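A quick numerical experiment of my own (the iteration count is arbitrary) shows where the sequence is heading, which also suggests a candidate lower bound for the monotone-convergence approach:

```python
import math

x = 3.0
for _ in range(30):
    x = 1 / (4 - x)
print(x)                  # about 0.2679491924...
print(2 - math.sqrt(3))   # the fixed point of x = 1/(4 - x) lying in (0, 1)
```

If the limit exists it must be a fixed point of [imath]x=\frac{1}{4-x}[/imath], i.e. a root of [imath]x^2-4x+1=0[/imath], and the relevant root here is [imath]2-\sqrt3\approx 0.268[/imath]; knowing this makes it easier to guess a lower bound (anything at or below [imath]2-\sqrt3[/imath], such as [imath]\tfrac14[/imath]) when setting up the monotone argument.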
2859670 | Why does the Maclaurin series of sin(x) converge to sin(x)?
When looking up how the extremely famous series [imath]\sin(x)=\sum_{k=0}^\infty(-1)^k\frac{x^{2k+1}}{(2k+1)!}[/imath] is derived, I found this great explanation by Proof Wiki. My question is this: the explanation shows clearly how to derive the Maclaurin series for [imath]\sin(x)[/imath] and how it converges for all real arguments, however - as someone new to the intricacies of Maclaurin series - it does not prove that whatever the series converges to at the real number [imath]a[/imath] is [imath]\sin(a)[/imath]. Why is this true? | 577676 | Why is [imath]\sin{x} = x -\frac{x^3}{3!} + \frac{x^5}{5!} - \cdots[/imath] for all [imath]x[/imath]?
I'm pretty convinced that the Taylor series (or better: Maclaurin series) [imath]\sin{x} = x -\frac{x^3}{3!} + \frac{x^5}{5!} - \cdots[/imath] is exactly equal to the sine function at [imath]x=0[/imath]. I'm also pretty sure that this series converges for all [imath]x[/imath]. What I'm not sure about is why this series is exactly equal to the sine function for all [imath]x[/imath]. I know exactly how to derive this expression, but in the process it's not clear that it will equal the sine function everywhere. Convergence does not mean it will be equal; it just means that it will have a defined value for all [imath]x[/imath]. Also, I would like to know: is this valid for values greater than [imath]\frac{\pi}{2}[/imath]? I mean, I don't know how I can prove that this works for values beyond the natural definition of sine.
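A remainder-bound sketch addressing both posts (not from either of them, just the standard estimate): the partial sum [imath]\sum_{k=0}^{n}(-1)^k\frac{x^{2k+1}}{(2k+1)!}[/imath] is a Taylor polynomial of [imath]\sin[/imath] about [imath]0[/imath], and every derivative of [imath]\sin[/imath] is bounded by [imath]1[/imath] in absolute value, so Taylor's theorem with the Lagrange remainder gives, for every real [imath]x[/imath], [imath]\left|\sin x-\sum_{k=0}^{n}(-1)^k\frac{x^{2k+1}}{(2k+1)!}\right|\le\frac{|x|^{2n+3}}{(2n+3)!}\xrightarrow[n\to\infty]{}0.[/imath] The bound holds for all [imath]x[/imath], including [imath]|x|>\pi/2[/imath]; convergence of the series alone is not enough, but convergence of the remainder to [imath]0[/imath] is exactly what identifies the limit with [imath]\sin x[/imath].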
2859759 | Attempt to prove Fermat's Last Theorem n = 3.
My previous attempt at proof did not work as I thought and so I decided to try again. Please bear in mind that this is not a duplicate of my old proof. If you look at the other one you will see that this examines the parity of the equation, something the previous attempt does not do. This time I have based my attempt on induction rather than algebra. To begin with, I examined the equation: [imath] A^3 + B^3 = C^3[/imath] Where [imath]A[/imath], [imath]B[/imath], [imath]C[/imath] [imath]\in \Bbb Z[/imath] and [imath]A[/imath], [imath]B[/imath], [imath]C > 0[/imath]. We can also assume that [imath]A, B, C[/imath] are the lowest solutions. Therefore, [imath]A^3, B^3, C^3[/imath] are co-prime. Examining this, we should be able to draw conclusions on each term's parity. In this form, we could have: [imath](1)EVEN + ODD = ODD[/imath] [imath](2)ODD + ODD = EVEN[/imath] [imath](3)EVEN + EVEN = EVEN[/imath] However, equation [imath]3[/imath] is not possible as we know that [imath]A^3, B^3, C^3[/imath] are the lowest solutions possible and are co-prime. Therefore, we are left only with [imath]1[/imath] and [imath]2[/imath]. My previous attempted solution involved rewriting the equation in terms of [imath]A^3[/imath]: [imath]A^3 + xA^3 = C^3[/imath] Here [imath]x[/imath] is either an integer, irrational or fraction. If it is an integer we get: [imath](1+x)A^3 = C^3[/imath] Since [imath]x \in \Bbb Z[/imath], [imath]x+1 \in \Bbb Z[/imath]. Therefore, [imath]\sqrt[3]{x+1} \in \Bbb Z[/imath] and [imath]\sqrt[3]{x} \in \Bbb Z[/imath]. The only possible integer solution to this is [imath]x = 0[/imath]. However, this would mean that [imath]B^3 = 0[/imath]. Therefore, [imath]x[/imath] is irrational or a fraction. If [imath]x[/imath] is irrational, [imath]B^3[/imath] will be irrational. Therefore, [imath]x[/imath] is a fraction. We can now represent this in the equation: [imath]A^3 + \frac{p^3}{q^3}A^3 = C^3[/imath] Where [imath]p,q \in \Bbb Z[/imath]. The reason the fraction can be represented as such is because that is the ratio of [imath]A^3[/imath] to [imath]B^3[/imath]. Even if [imath]p^3[/imath] and [imath]q^3[/imath] did share factors they could be simplified to still be cubes. (For example, [imath]\frac{64}{8} = 8[/imath].) Simplifying this further we get: [imath](qA)^3 + (pA)^3 = (qC)^3[/imath] We shall begin by analysing [imath]qA[/imath]. We know from earlier that this is either odd or even. If we assume it is even, then either: [imath]\frac{A}{2} \in \Bbb Z , \frac{q}{2} \notin \Bbb Z[/imath] or [imath]\frac{A}{2} \notin \Bbb Z , \frac{q}{2} \in \Bbb Z[/imath] or [imath]\frac{A}{2} \in \Bbb Z , \frac{q}{2} \in \Bbb Z[/imath] Analysing [imath]pA[/imath], we can see from before that [imath]pA[/imath] must be odd. Therefore: [imath]\frac{A}{2} \notin \Bbb Z , \frac{p}{2} \notin \Bbb Z[/imath] This means we can compare this with the other possible parities of [imath]A[/imath] to give: [imath]\frac{A}{2} \notin \Bbb Z[/imath] [imath]\frac{p}{2} \notin \Bbb Z[/imath] [imath]\frac{q}{2} \in \Bbb Z[/imath] Substituting this all into our equation gives us: [imath](EVEN)^3 + (ODD)^3 = (EVEN)^3[/imath] [imath]EVEN + ODD = EVEN[/imath] This clearly is incorrect and so our original assumption, that there are integer solutions to the below is incorrect. [imath]A^3 + B^3 = C^3[/imath] Q.E.D. (Hopefully.) Is this correct? I cannot find anything wrong with it unlike my previous one. Thank you. | 2859343 | Is this a proof of Fermat's Last Theorem where n = 3?
So I have spent some time thinking about Fermat's Last Theorem and about how to come up with a proof for certain cases of n. To begin with I took n = 3. [imath]A^3 + B^3 = C^3 (A,B,C \in \mathbb {Z})( A,B,C ≠ 0)[/imath] We can assume that [imath]A<B<C[/imath]. Because of this we can rewrite this as: [imath]A^3 + xA^3 = C^3[/imath] We now have three possibilities: [imath]x[/imath] is an integer, [imath]x[/imath] is an irrational, or [imath]x[/imath] is a fraction. If [imath]x[/imath] is an integer then [imath]x[/imath]+1 and [imath]x[/imath] must be cubes. The only numbers which satisfy this are [imath]x[/imath] = 0. However, this means [imath]B^3 = 0[/imath]. [imath]x[/imath] cannot be irrational because then [imath]C^3[/imath] is irrational. Therefore [imath]x[/imath] is a fraction. We can represent this now as: [imath]A^3 + \frac{p}{q}B^3 = C^3 (p,q \in \mathbb {Z})(GCD(p,q) = 1)[/imath] This means [imath]\frac{p}{q}[/imath] is actually a cube over a cube. Therefore [imath]p[/imath] and [imath]q[/imath] are cube numbers. However, [imath]1 + \frac{p}{q}[/imath] is also a cube. Therefore [imath]\frac{p+q}{q}[/imath] is a cube over a cube. Hence: [imath]\sqrt[3]{p} ,\sqrt[3]{q}, \sqrt[3]{p+q} \in \mathbb {Z}[/imath] We can now create another variable [imath]v[/imath] for [imath]p+q[/imath]. Therefore, [imath]p+q = v[/imath] (Where [imath]\sqrt[3]{v} \in \mathbb {Z}[/imath]) Now we must deal with [imath]\frac{p}{q}+1[/imath]. Since this is a cube we can write this as: [imath]\frac{p}{q} + 1 = \lambda^3[/imath] Since [imath]p[/imath] and [imath]q[/imath] are co-prime and in simplest form, [imath]\lambda[/imath] and [imath]\lambda^3[/imath] are also fractions in simplest form. Rearranging, we get: [imath]p = q(\lambda^3 - 1)[/imath] Substituting this into [imath]p+q=v[/imath], we get: [imath]q+q(\lambda^3 - 1) = v[/imath] [imath]q\lambda^3 = v[/imath] Since [imath]\lambda^3[/imath] is a fraction, we can express it as [imath]\frac{D^3}{E^3}[/imath]. Hence: [imath]qD^3 = vE^3[/imath] However, because [imath]v[/imath] is a cube, [imath]vE^3[/imath] is also a cube. Therefore [imath]\lambda^3[/imath] doesn't have to be a fraction. This is a contradiction, and so there can be no integer solutions for [imath]p[/imath] and [imath]q[/imath]. Subsequently, there can be no integer solutions for A,B,C in: [imath]A^3 + B^3 = C^3[/imath]. If anyone could point out the problem in this and what you think of it it would be greatly appreciated. Thank you. |
2859508 | Is matrix multiplication transitive?
Let [imath]A, B, C \in M_n(\mathbb R)[/imath] be such that [imath]A[/imath] commutes with [imath]B[/imath], [imath]B[/imath] commutes with [imath]C[/imath] and [imath]B[/imath] is not a scalar matrix. Then [imath]A[/imath] commutes with [imath]C[/imath]. I think it is false, but how can I solve it within 3 minutes? | 2561966 | If [imath]A[/imath] commutes with [imath]B[/imath] and [imath]B[/imath] commutes with [imath]C[/imath], then [imath]A[/imath] commutes with [imath]C[/imath]
Let [imath]A,B,C \in M(3,\mathbb{R})[/imath] be such that [imath]A[/imath] commutes with [imath]B[/imath] and [imath]B[/imath] commutes with [imath]C[/imath]. [imath]B[/imath] is not a scalar matrix. Then, [imath]A[/imath] commutes with [imath]C[/imath]. Is this true or false? I think it is not true, but cannot think of a counterexample. Any suggestions?
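A minimal numerical sketch of one standard counterexample (not from either post; it assumes numpy is installed, and the concrete matrices are just one convenient choice): take [imath]B[/imath] diagonal but non-scalar with a repeated eigenvalue, and put [imath]A[/imath] and [imath]C[/imath] inside the block where [imath]B[/imath] acts as the identity.

```python
# B is diagonal and non-scalar; A and C live in the 2x2 block where B equals
# the identity, so each commutes with B, but A and C do not commute.
import numpy as np

B = np.diag([1.0, 1.0, 2.0])
A = np.zeros((3, 3)); A[0, 1] = 1.0      # upper-left block [[0,1],[0,0]]
C = np.zeros((3, 3)); C[1, 0] = 1.0      # upper-left block [[0,0],[1,0]]

print(np.allclose(A @ B, B @ A))         # expected True
print(np.allclose(B @ C, C @ B))         # expected True
print(np.allclose(A @ C, C @ A))         # expected False
```

The same matrices settle the 3-minute version in the first post (for [imath]n \ge 3[/imath]): the relation "commutes with" is not transitive.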
2860337 | Sum of remainders of 2^k divided by 2003
Sorry for my interruption. I am looking for an answer to this question: Calculate [imath]\sum_{k=1}^{2002}r_k[/imath], in which [imath]r_k[/imath] is the remainder of [imath]2^k[/imath] divided by [imath]2003[/imath]. I thought that [imath]2[/imath] was a primitive root of [imath]2003[/imath], but I was wrong; in fact [imath]ord_{2003}(2) = 286[/imath]. I hope you can help me with this question. | 2859417 | sum of integers using order of integers and primitive roots
Sorry for my interruption, I am looking for a solution to this question: Calculate [imath]\sum_{k=0}^{2001}\left \lfloor \frac{2^k}{2003}\right \rfloor[/imath] without using computational engines, with [imath]\lfloor x \rfloor[/imath] denoting the largest integer that does not exceed [imath]x[/imath]. I hope you can answer this question. And sorry for my mistakes, English is my second language.
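A small brute-force cross-check relating the two sums (not from either post; it only assumes a Python interpreter). Since [imath]2^k=2003\left\lfloor \frac{2^k}{2003}\right\rfloor+r_k[/imath], the floor sum in the second post can be recovered from the remainders in the first:

```python
# r_k = 2^k mod 2003, and floor(2^k/2003) = (2^k - r_k)/2003 exactly.
MOD = 2003

S_remainders = sum(pow(2, k, MOD) for k in range(1, 2003))                 # k = 1..2002
S_floor = sum((2**k - pow(2, k, MOD)) // MOD for k in range(0, 2002))      # k = 0..2001

print("sum of r_k for k = 1..2002:", S_remainders)
print("sum of floor(2^k/2003) for k = 0..2001:", S_floor)
```

The order [imath]ord_{2003}(2)=286[/imath] mentioned above is what makes a pen-and-paper evaluation feasible: the remainders repeat with period [imath]286[/imath], and [imath]2002=7\cdot 286[/imath].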
2860527 | How to rewrite this optimization problem in standard form?
Consider the following problem \begin{eqnarray*} \underset{y}{\max} & f(y)\\ s.t. & y_{1}A_{1}+y_{2}A_{2}+y_{3}A_{3}+S_{1}=C_{1},\\ & y_{4}A_{4}+y_{5}A_{5}+y_{6}A_{6}+y_{7}A_{7}+S_{2}=C_{2},\\ & y_{4}A_{8}+y_{5}A_{9}+y_{6}A_{10}+y_{7}A_{11}+S_{3}=C_{3},\\ & y_{4}A_{12}+y_{5}A_{13}+y_{6}A_{14}+y_{7}A_{15}+y_{2}A_{16}+y_{3}A_{17}+S_{4}=C_{4},\\ & S_{1},S_{2},S_{3},S_{4}\succeq0, \end{eqnarray*} where [imath]y_{i}[/imath] are scalar variables. How can we rewrite this optimzation problem in the following form \begin{eqnarray*} \underset{x}{\max} & f(x)\\ s.t. & \sum_{i=1}^{i=m}x_{i}B_{i}+T=D,\\ & T\succeq0, \end{eqnarray*} where [imath]x_{i}[/imath] are scalar variables please? Thanks. | 2860421 | How can we rewrite this optimization problem in standard form?
Consider the following problem [imath] \begin{array}{crl} &\underset{y}{\max} & f(y)\\ s.t. & y_{1}A_{1}+y_{2}A_{2}+S_{1}&=C_{1},\\ & y_{3}A_{3}+y_{2}A_{4}+y_{5}A_{5}+S_{2}&=C_{2},\\ & y_{3}A_{6}+y_{4}A_{7}+S_{3}&=C_{3},\\ & S_{1},S_{2},S_{3}\succeq0, \end{array} [/imath] where [imath]y_{i}[/imath] are scalar variables and the matrices [imath]A_{i}[/imath] are symmetric real. How can we rewrite this optimzation problem in the following form [imath] \begin{array}{crl} &\underset{y}{\max} & f(y)\\ s.t. & \sum_{i=1}^{m}y_{i}A_{i}+S&=C,\\ &S&\succeq0, \end{array} [/imath] where [imath]x_{i}[/imath] are scalar variables and the matrices [imath]B_{i}[/imath] are symmetric real. please? Thanks. |
2860401 | Degree of a differential equation
I understand why we define the order of differential equations but why do we need to define the degree of the equation? I don't understand how it is important for differential equations. Also, if degree is defined as the power of the highest derivative, then, for example, consider [imath]\frac{d^2 y}{dx^2}+\sin\left[\frac{dy}{dx}\right]+y=0[/imath] Here we don't define the degree of the equation because one of the derivative terms is present inside a trigonometric function. But if we write the sin term in its polynomial form, then even though the highest power of first derivative approaches infinity, the power of the highest derivative is still 1. Then why don't we define the degree in this case? | 1117694 | Differential equation degree doubt
[imath]\frac{dy}{dx} = \sin^{-1} (y)[/imath] The above equation is a form of [imath]\frac{dy}{dx} = f(y)[/imath], so degree should be [imath]1[/imath]. But if I write it as [imath]y = \sin\left(\frac{dy}{dx}\right)[/imath] then degree is not defined as it is not a polynomial in [imath]\frac{dy}{dx}[/imath]. Please explain? |
2861520 | Proving [imath]f'[/imath] can't have a jump discontinuity point
I was given a homework question that I just can't seem to solve. It goes: Let [imath]f[/imath] be defined and differentiable in [imath](a,b)[/imath]. Prove that [imath]f'[/imath] can't have a jump discontinuity point. I think that by using Darboux's theorem you could get to that conclusion, but we were instructed to use Lagrange's theorem. What I did do: Suppose [imath]x_0\in(a,b)[/imath] is a jump discontinuity point. We can look at [imath](x_0,x_0+\delta)\subset(a,b)[/imath]; all the conditions of Lagrange's theorem are met, so we have a point [imath]c\in(x_0,x_0+\delta)[/imath] such that [imath]f'(c)=\frac{f(x_0+\delta)-f(x_0)}{\delta}[/imath] And this is where I'm stuck. I understand that the right side resembles the difference quotient from the definition of the derivative at [imath]x_0[/imath], but I don't understand how to connect it all.
Prove that if a function [imath]f[/imath] has a jump at an interior point of the interval [imath][a,b][/imath] then it cannot be the derivative of any function. I know that if [imath]f[/imath] is differentiable in [imath](a,b)[/imath] and has one-sided derivatives [imath]f_+' (a)≠f_-' (b)[/imath] at the endpoints, and [imath]C[/imath] is a real number between [imath]f_+' (a)[/imath] and [imath]f_-' (b)[/imath], then there exists [imath]c∈(a,b)[/imath] such that [imath]f' (c)=C [/imath]. How can I use this to prove the above?
2857710 | Fibonacci-type Sequence with Complex Numbers
I have been playing around with Fibonacci-type sequences that involve complex numbers. I have stumbled upon the following sequence, which seemed interesting to me: [imath]0,1,2i,-3,-4i,5,6i,...[/imath] so [imath]F_n = 2iF_{n-1} + F_{n-2}[/imath]. These look like the sequence of natural numbers (except for [imath]0[/imath]) where every other term is multiplied by [imath]i[/imath] and the sign changes every two terms. I understand the algebra behind the above sequence, but I have been wondering whether there is an intuition behind why the sequence looks like a "modified" sequence of natural numbers.
Yesterday there was a question here based on solving a recurrence relation and then when I tried to google for methods of solving recurrence relations, I found this, which gave different ways of solving simple recurrence relations. My question is how do you justify writing the recurrence relation in its characteristic equation form and then solving for its roots to get the required answer. For example, Fibonacci relation has a characteristic equation [imath]s^2-s-1=0[/imath]. How can we write it as that polynomial? |
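A sketch of where the pattern comes from (not part of either post), using exactly the characteristic-equation method the second question asks about: the recurrence [imath]F_n=2iF_{n-1}+F_{n-2}[/imath] has characteristic equation [imath]x^2-2ix-1=0[/imath], i.e. [imath](x-i)^2=0[/imath], a double root at [imath]x=i[/imath]. The general solution is therefore [imath]F_n=(A+Bn)i^n[/imath], and the initial values [imath]F_0=0,\ F_1=1[/imath] force [imath]A=0[/imath] and [imath]B=-i[/imath], so [imath]F_n=n\,i^{\,n-1}[/imath]. This single formula reproduces [imath]0,1,2i,-3,-4i,5,6i,\dots[/imath] and explains the intuition: the natural number [imath]n[/imath] times a power of [imath]i[/imath] that rotates the sign pattern every two terms; the extra factor of [imath]n[/imath] appears precisely because the root is repeated.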
2861762 | Section 19 Munkres Topology, [imath]\mathbb{R}^\infty[/imath] vs [imath]\mathbb{R}^\omega[/imath]
Question 7 in Munkres's Topology Reads: "Let [imath]\mathbb{R}^\infty[/imath] be a subset of [imath]\mathbb{R}^\omega[/imath] consisting of all sequences that are 'eventually zero...'" I am trying to understand what's the difference between [imath]\mathbb{R}^\infty[/imath] and [imath]\mathbb{R}^\omega[/imath]. What does the superscript [imath]\omega[/imath] suggest about the space [imath]\mathbb{R}^\omega[/imath] ? I believe [imath]\mathbb{R}^\infty[/imath] refers to the product topology (or Box Topology) of [imath]\mathbb{R}[/imath], while [imath]\mathbb{R}^\omega[/imath] [imath]\textit{is}[/imath] [imath]\mathbb{R}^\infty[/imath], but also holds the property that any sequence is eventually zero... Is this correct? I have not seen the formal definition of either of these and in this case, I believe the definition directly leads me to answer the problem: " What is the closure of [imath]\mathbb{R}^\infty[/imath] in [imath]\mathbb{R}^\omega[/imath] in the box and product topologies?" | 616651 | Difference between [imath]R^\infty[/imath] and [imath]R^\omega[/imath]
I know [imath]R^\omega[/imath] is the set of functions from [imath]\omega[/imath] to [imath]R[/imath]. I would think [imath]R^\infty[/imath] as the limit of [imath]R^n[/imath], but isn't that [imath]R^\omega[/imath]? The seem to be used differently, but I can't tell exactly how. |
584710 | What can I use as the generic term for "a function that is composed with another"?
Suppose I am talking about the composition [imath]g \circ f[/imath] (or more generally [imath]f_n \circ \cdots \circ f_1[/imath]). Is there a generic term for the functions [imath]f[/imath] and [imath]g[/imath] (the functions [imath]f_i[/imath])? "Compositand"? | 2847640 | What are the functions that make composite functions called?
What do you call the functions that make up a composite function? Example: [imath]e^{\sin x} [/imath] is made up of [imath]\sin x[/imath] in the argument of [imath]e^x[/imath]. So what are [imath]\sin x[/imath] and [imath]e^x[/imath] called here in this context? Basic functions or something? (Please don't tell me the former is trig and the latter is the exponential function, I know that, but that's not what I'm asking here, please try to understand.) Hope I made myself clear.
2853575 | Existence of an antiderivative for a continuous function on an arbitrary subset of [imath]\mathbb{R}[/imath]
Let [imath]f[/imath] be a continuous function on a set [imath]I\subset \mathbb{R}[/imath]. Does there always exist a function [imath]F[/imath] differentiable on an open set [imath]J[/imath] containing [imath]I[/imath] such that [imath]F'=f[/imath] on [imath]I[/imath] ? The case where J is an interval has been studied in this thread Existence of antiderivative on a part [imath]I[/imath] of [imath]\mathbb{R}[/imath] | 2853497 | Existence of antiderivative on a part [imath]I[/imath] of [imath]\mathbb{R}[/imath]
Let [imath]f[/imath] be a continuous function on a part [imath]I[/imath] of [imath]\mathbb{R}[/imath]. Does there always exist a function [imath]F[/imath] differentiable on an interval [imath]J[/imath] containing [imath]I[/imath] such that [imath]F'=f[/imath] on [imath]I[/imath]? If [imath]I[/imath] is an interval, it's OK. If [imath]I[/imath] is an open set, it's OK. But what if [imath]I[/imath] is only a part of [imath]\mathbb{R}[/imath]?
2861926 | [imath]A^{100}[/imath] where [imath]A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}[/imath]
Compute [imath]A^{100}[/imath] where [imath]A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}[/imath]. I can calculate [imath]A^{100}[/imath] using a calculator, but my question is: is there any short formula/method or any trick to find [imath]A^{100}[/imath]?
Let [imath]A= \pmatrix{1&4\\ 3&2}[/imath]. Find [imath]A^{1000}[/imath]. Does this problem have to do with eigenvalues or is there another formula that is specific to 2x2 matrices? |
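A short exact computation covering both posts (not from either of them; it assumes sympy is installed, and the diagonalization route is just one standard choice): since [imath]A=\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}[/imath] has distinct eigenvalues [imath]\frac{5\pm\sqrt{33}}{2}[/imath], one can write [imath]A=PDP^{-1}[/imath] and hence [imath]A^{100}=PD^{100}P^{-1}[/imath]; sympy does the bookkeeping exactly.

```python
# Exact eigenvalues and exact integer entries of A**100 via sympy.
from sympy import Matrix

A = Matrix([[1, 2], [3, 4]])
print(A.eigenvals())   # eigenvalues (5 - sqrt(33))/2 and (5 + sqrt(33))/2
print(A**100)          # exact (very large) integer entries of A^100
```

The same idea by hand gives [imath]A^{100}=PD^{100}P^{-1}[/imath] with [imath]D=\operatorname{diag}\!\left(\frac{5-\sqrt{33}}{2},\frac{5+\sqrt{33}}{2}\right)[/imath], which is the trick both posts are after.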
2862631 | Prove that [imath]E^{\circ}[/imath] is always open
Let [imath]E^{\circ}[/imath] denote the set of all interior points of a set [imath]E[/imath] Prove that [imath]E^{\circ}[/imath] is always open. Important definitions: An interior point of a set E, is a point [imath]p[/imath] in which there exists a nbhd [imath]N_{\delta}(p)[/imath] such that [imath]N_{\delta}(p) \subset E[/imath]. A set is said to be open if every point of [imath]E[/imath] is an interior point My Solution: I would've just said that the set [imath]E^{\circ}[/imath] is open by this definition. But another solution I found was: If [imath]x\in E^{\circ} \rightarrow x\in U \subset E^{\circ}[/imath] where [imath]U[/imath] is open. Therefore any point [imath]x\in U[/imath] has a nbhd [imath]N_{\delta}(p) \subset E[/imath]. This implies [imath]E^{\circ} = \bigcup _{U \subset E}U[/imath]. And from this it is concluded that [imath]E^{\circ}[/imath] is open. My question is why is this extended argument necessary? This happens to me often where I encounter a question which appears to be asking something simple, but the actual proof is more complicated than initially anticipated. | 1157723 | Proving set of interiors is open
Prove: The set of interior points of any set [imath]A[/imath], written int([imath]A[/imath]), is an open set. Let [imath]p\in[/imath] int([imath]A[/imath]), then by definition [imath]p[/imath] must belong to some open interval [imath]S_{p}\subset A[/imath]. Now since we know that the real line itself is open then [imath]S_{p}\subset \mathbb{R}[/imath]. Now suppose we pick some [imath]q\in A[/imath]. I want [imath]q\in[/imath] int([imath]A[/imath]). I am stuck here I am not sure if my logic is correct, any suggestions would be greatly appreciated. Definition of interior: Let [imath]A[/imath] be a set of real numbers. A point [imath]p\in A[/imath] is an interior point iff [imath]p[/imath] belongs to some open interval [imath]S_{p}[/imath] which is contained in A: [imath]p\in S_{p}\subset A[/imath] Definition of open: A set [imath]A[/imath] is open iff each of its points is an interior point |
2862654 | Proof from Axioms for a Ring
I'm trying to prove this following theorem: If [imath]x, y \in \mathbb{Z}[/imath], use the cancellation law for [imath]\mathbb{Z}[/imath] to demonstrate that [imath]xy = 0 \implies[/imath] [imath]x = 0[/imath] or [imath]y = 0[/imath] The proof I came up with doesn't quite seem definitive enough. I know how to prove this without using the cancellation law, but this requirement seems to make things much more difficult. So, clearly [imath]0 = 0 \cdot x = 0 \cdot y[/imath] [imath]\forall x, y \in \mathbb{Z}[/imath]. So, we can clearly say that this holds for [imath]x \neq 0[/imath] and [imath]y \neq 0[/imath]. So, first we can write that [imath]xy = 0 \cdot x[/imath] for [imath]x \neq 0[/imath]. So, by the cancellation law, [imath]y = 0[/imath]. Similarily, we can write that [imath]xy =0 \cdot y[/imath] for [imath]y \neq 0[/imath], so [imath]x = 0[/imath] by the cancellation law. It seems to me that we can write [imath]xy = 0 \cdot a[/imath] for any integer, so this doesn't quite "prove" anything, though in such a case we wouldn't be able to use the cancellation law, so that wouldn't at all be a pertinent fact. How does this sound? | 2859919 | Is this proof of [imath]ab = 0[/imath] correct?
I have to prove the following theorem (Apostol's Calculus I, exercise 1, page 19): If [imath]ab = 0[/imath] then either [imath]a = 0[/imath] or [imath]b = 0[/imath]. My attempt to solve it was: [imath]ab = 0[/imath] can be rewritten as [imath]ab = a0[/imath], because [imath]a0 = 0[/imath] (already been proved). So we can now cancel [imath]a[/imath] on both sides (already been proved too), so we have [imath]b = 0[/imath]. Also, we could rewrite the original equation as [imath]ab = b0[/imath], cancel [imath]b[/imath] on both sides of the equation, and turn it into [imath]a = 0[/imath]. I think my proof covers the basic steps but I don't think it's asserting anything.
2862776 | Proving [imath]\frac{1}{\sin(A/2)}+\frac{1}{\sin(B/2)}+\frac{1}{\sin(C/2)}\ge 6[/imath], where [imath]A[/imath], [imath]B[/imath], [imath]C[/imath] are angles of a triangle
If [imath]A[/imath], [imath]B[/imath], and [imath]C[/imath] are the angles of a triangle, then [imath]\frac{1}{\sin \left(\frac{A}{2}\right)}+\frac{1}{\sin\left(\frac{B}{2}\right)}+\frac{1}{\sin\left(\frac{C}{2}\right)}\ge 6[/imath] I have used multiple trigonometric identities, but the situation becomes complicated. I also thought about the Sine Law. To be honest, I don’t think these techniques are suitable. Any suggestions? | 1139491 | Proving [imath]\csc\frac\alpha2+\csc\frac\beta2+\csc\frac\gamma2 \ge 6[/imath], where [imath]\alpha[/imath], [imath]\beta[/imath], [imath]\gamma[/imath] are the angles of a triangle
If [imath]\alpha[/imath], [imath]\beta[/imath], [imath]\gamma[/imath] are angles of a triangle, prove that [imath]\csc\frac\alpha2+\csc\frac\beta2+\csc\frac\gamma2 \ge 6[/imath]. I started from [imath]\alpha + \beta + \gamma = 180^{\circ}[/imath] and then I tried to use some trigonometric identities, but with no success. Can someone tell me how to start?
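One possible route for both versions (a sketch, not from either post): the half-angles satisfy [imath]\frac{\alpha}{2}+\frac{\beta}{2}+\frac{\gamma}{2}=\frac{\pi}{2}[/imath], and [imath]\csc[/imath] is convex on [imath](0,\pi/2)[/imath] because [imath](\csc x)''=\csc x\,(\csc^2x+\cot^2x)>0[/imath] there. Jensen's inequality then gives [imath]\frac13\left(\csc\frac\alpha2+\csc\frac\beta2+\csc\frac\gamma2\right)\ \ge\ \csc\!\left(\frac13\cdot\frac{\pi}{2}\right)=\csc\frac{\pi}{6}=2,[/imath] so the sum is at least [imath]6[/imath], with equality exactly for the equilateral triangle.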
2861641 | Rolle's theorem without compact domain hypothesis
Good evening everyone, I'm asking for a proof of a "Rolle's theorem generalization". The statement is as follows: Let [imath]a \in \mathbb{R}[/imath] and let [imath]f:[a,\infty) \longrightarrow \mathbb{R}[/imath] be a continuous function such that [imath]\lim_{{x\to \infty}} {f(x)} = f(a)[/imath]. Prove that if the derivative exists in [imath](a,\infty)[/imath] then [imath]\exists[/imath] [imath]x_0>a[/imath] such that [imath]f'(x_0)=0[/imath]. I tried to prove that but I wasn't able to do it, so if you can help me I'll appreciate it a lot. Thank you for your time.
[imath]f(0)=0[/imath] and [imath]\lim_{x \to \infty}f(x)=0[/imath] and [imath]f[/imath] is differentiable in [imath]\mathbb{R}[/imath]. How can I show there's [imath]x_0[/imath] such that [imath]f'(x_0)=0[/imath]? I know it's true since the values of [imath]f(x)[/imath] for large enough [imath]x[/imath] are as close to [imath]0[/imath] as I want. So, that means that the same value would have to appear at least twice, and then it can be concluded. However, I don't know how to show it formally.
2859546 | [imath]2[/imath]-Norm for Convolution
Let [imath]C_c(\mathbb{R})[/imath] be the following: [imath]C_c = \{ f \in C(\mathbb{R}) \mid \exists T > 0 \text{ s.t. } f(t) = 0 \text{ for } |t| \geq T\}[/imath] Let [imath]T_n \in L(C_c(\mathbb{R}))[/imath] be a linear operator such that: [imath]T_n u = \delta_n *u, \forall u \in C_c(\mathbb{R}),[/imath] where [imath] \delta_n(t)= \begin{cases} n^2(t+1/n) & -1/n \leq t \leq 0 \\ -n^2(t-1/n) & 0 < t \leq 1/n \\ 0 & \text{elsewhere} \end{cases} [/imath] I have to prove that with respect to the 2-norm, the operator [imath]T_n[/imath] has [imath]\|T_n\| = 1.[/imath] I cannot use the Young's convolution inequality directly, but I have to use the properties of the Fourier's transform. So far, I was trying to do the following: [imath]\|\delta_n * u\|_2 = \|\mathfrak{F}(\delta_n * u)\|_2 = \|\mathfrak{F}(\delta_n ) \mathfrak{F}(u)\|_2 [/imath] But then I was wondering how to proceed. | 2861443 | Convolution operator has norm 1 with respect to [imath]L^2[/imath] norm
Let [imath]C_c(\mathbb{R})[/imath] be the following: [imath]C_c = \{ f \in C(\mathbb{R}) \mid \exists \text{ } T > 0 \text{ s.t. } f(t) = 0 \text{ for } |t| \geq T\}[/imath] Let [imath]T_n \in L(C_c(\mathbb{R}))[/imath] be a linear operator such that: [imath]T_n u = \delta_n *u, \forall u \in C_c(\mathbb{R}),[/imath] where [imath] \delta_n(t)= \begin{cases} n^2(t+1/n) & -1/n \leq t \leq 0 \\ -n^2(t-1/n) & 0 < t \leq 1/n \\ 0 & \text{elsewhere} \end{cases} [/imath] I have to prove that with respect to the 2-norm, the operator [imath]T_n[/imath] has [imath]\|T_n\| = 1[/imath] and I have succeeded in proving that [imath]\|T_nu\|_2 \leq \|u\|_2, \forall u \in C_c(\mathbb{R}).[/imath] Now, I remained to prove that [imath]\exists \text{ } u \in C_c(\mathbb{R}) \text{ s.t. } \|T_nu\|_2 = \|u\|_2[/imath], but I really don't get how [imath]u[/imath] should be shaped in order to satisfy it. |
2862927 | What exactly is a function?
This remark appears in Terence Tao's Analysis I, Remark 3.3.6. Strictly speaking, functions are not sets, and sets are not functions; it does not make sense to ask whether an object [imath]x[/imath] is an element of a function [imath]f[/imath], and it does not make sense to apply a set [imath]A[/imath] to an input [imath]x[/imath] to create an output [imath]A(x)[/imath]. On the other hand, it is possible to start with a function [imath]f : X → Y[/imath] and construct its graph [imath]\{ (x, f(x)) : x \in X \}[/imath], which describes the function completely: see Section 3.5. A lot of books I checked (almost all of them about set theory) do consider [imath]f[/imath] to be a set defined as [imath]f = \{ (x, f(x)) : x \in X \}[/imath], which is included in [imath]X \times Y[/imath] (i.e. the Cartesian product of [imath]X[/imath] and [imath]Y[/imath]), and I don't see why Tao sees it as nonsensical. One other thing: let's consider these two definitions: (1) For each element [imath]x \in A[/imath], there exists at most one element [imath]y[/imath] in [imath]B[/imath] such that [imath](x,y) \in f[/imath], [imath]y = f(x)[/imath], or [imath]x f y[/imath], depending on the notation used. (2) For each element [imath]x[/imath] in [imath]A[/imath], there exists a unique element [imath]y \in B[/imath] such that [imath](x,y) \in f[/imath], [imath]y = f(x)[/imath], or [imath]x f y[/imath], depending on the notation used. In almost all French books I checked, (1) is the definition of what they call "fonction" (i.e. "function" in English, apparently), and (2) is for what they call "application" (I don't know what it should be translated to in English; I think 'map' would do), but in the English books I checked they don't make this distinction: they define function, map, etc. as in (2) and consider (1) to not be a function. My question is: which one should I consider as the definition of a function? Even though (2) would make the most sense to me, because why would you include elements that do not have an image in the domain of [imath]f[/imath]?
Yesterday I was talking to one of my friends about the definition of function. The formal definition of function is given by Cartesian Products but my friend's question was whether it is possible to define a function without being acquainted with any concept of Cartesian Products. To answer this question I told him the definition of a function may be given by defining a function [imath]f[/imath] from the set [imath]X[/imath] to [imath]Y[/imath], denoted by [imath]f:X\to Y[/imath], to be some "rule" (the word rule being an undefined concept) which associates, to each member of [imath]X[/imath], exactly one element of [imath]Y[/imath]. But my friend said that the definition using the concept of "rule" isn't rigorous and there are examples that don't obey the definition but still is a function in the Cartesian Product sense. My questions are, Why the definition of function using concepts such as "rule" isn't rigorous? Is there really any example which doesn't obey the definition but still is a function in the Cartesian Product sense? |
2863265 | "Simple problem" to find order of a product in a Group
Let [imath]a[/imath] and [imath]b[/imath] be elements of a group, with [imath]a^2 = e, b^6 = e [/imath] and [imath] ab = b^4 a [/imath]. Find the order of [imath]ab[/imath], and express its inverse in each of the forms [imath]a^mb^n[/imath] and [imath]b^ma^n[/imath]. Though it seems very simple, I'm unable to find a power such that [imath](ab)^x = e[/imath]. Please help. I'm all confused with this problem. PS: to put it in context, this question was asked in the maths optional paper of the UPSC IAS exam in India.
Let [imath]a[/imath] and [imath]b[/imath] be elements of a group, with [imath]a^2=e, b^6=e[/imath] and [imath]ab=b^4a.[/imath] Find the order of [imath]ab[/imath] and express the inverse in each of the terms [imath]a^mb^n[/imath] and [imath]b^ma^n.[/imath] Just want to cross check my solution. Would be very grateful for the complete solution. |
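A derivation sketch (not from either post; it assumes [imath]a\neq e[/imath] and [imath]b\neq e[/imath], otherwise the order can only drop): from [imath]ab=b^4a[/imath] and [imath]a^{-1}=a[/imath] we get [imath]aba^{-1}=b^4[/imath], hence [imath]b=a^2ba^{-2}=a b^4 a^{-1}=(aba^{-1})^4=b^{16}=b^4[/imath] (using [imath]b^6=e[/imath]), so [imath]b^3=e[/imath]. Then [imath]b^4=b[/imath], so [imath]a[/imath] and [imath]b[/imath] actually commute and [imath](ab)^k=a^kb^k[/imath], which equals [imath]e[/imath] only when [imath]2\mid k[/imath] and [imath]3\mid k[/imath]; thus [imath]ab[/imath] has order [imath]6[/imath], and [imath](ab)^{-1}=b^{-1}a^{-1}=ab^2=b^2a[/imath], which gives both requested forms.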
2863188 | How to define summation recursively?
I have attempted to define [imath]s_n=\sum_{i=1}^n a_i=a_1+\cdots+a_n[/imath] in a valid manner, but I'm not sure if my extraction of [imath](s_i\mid 1\leq i\leq n)[/imath] from [imath](p_i\mid i\in\mathbb N)[/imath] contains any error. Please help me check that part! Suppose that [imath](a_1,\cdots,a_n)[/imath] is a finite sequence in [imath]\mathbb N[/imath]. Then there exists a sequence [imath](s_1,\cdots,s_n)[/imath] such that [imath]s_1=a_1[/imath] and [imath]s_{i+1}=s_i+a_{i+1}[/imath] for all [imath]1\leq i<n[/imath]. My attempt: We define mapping [imath]f[/imath] as follows: [imath]f: \mathbb N\times\mathbb N\to\mathbb N\times\mathbb N: (i,a)\mapsto\begin{cases} (i+1,a+a_{i+1})&\text{if }i<n\\ (i+1,a)&\text{if }i\geq n \end{cases}[/imath] By recursion theorem, there is a unique sequence [imath](p_i\mid i\in\mathbb N)[/imath] such that [imath]p_0=(1,a_1)[/imath] and [imath]p_{i+1}=f(p_i)[/imath]. Let [imath]\pi:\mathbb N\times\mathbb N\to\mathbb N[/imath] be the projection to the second co-ordinate i.e. [imath]\pi(i,a)=a[/imath]. Let [imath]s_i=\pi(p_i)[/imath] for all [imath]1\leq i\leq n[/imath], then [imath](s_i\mid 1\leq i\leq n)[/imath] is the required sequence. It's clear from the definition of [imath]s_i[/imath] that [imath]s_{i+1}=s_i+a_{i+1}[/imath] for all [imath]1\leq i<n[/imath]. | 2860987 | How to justify the definition of summation [imath]s_n=\sum_{i=1}^n a_i=a_1+\cdots+a_n[/imath]?
I read https://www.wikiwand.com/en/Summation#/Formal_definition and found that they define summation via recursion, so I decided to formalize the proof that that this definition is actually valid. I've two questions: Does my proof contain any error? Are there other simple ways to define summation? Thank you so much! Suppose that [imath](a_1,\cdots,a_n)[/imath] is a finite sequence in [imath]\mathbb N[/imath]. Show that there is a sequence [imath](s_1,\cdots,s_n)[/imath] such that [imath]s_1=a_1[/imath] and [imath]s_{i+1}=s_i+a_{i+1}[/imath] for all [imath]1\leq i<n[/imath]. My attempt: We define mapping [imath]f[/imath] as follows: [imath]f: \mathbb N\times\mathbb N\to\mathbb N\times\mathbb N: (i,a)\mapsto\begin{cases} (i+1,a+a_{i+1})&\text{if }i<n\\ (i+1,a)&\text{if }i\geq n \end{cases}[/imath] By recursion theorem, there is a unique sequence [imath](p_i\mid i\in\mathbb N)[/imath] such that [imath]p_0=(1,a_1)[/imath] and [imath]p_{i+1}=f(p_i)[/imath]. Let [imath]\pi:\mathbb N\times\mathbb N\to\mathbb N[/imath] be the projection to the second co-ordinate i.e. [imath]\pi(i,a)=a[/imath]. Let [imath]s_i=\pi(p_i)[/imath] for all [imath]1\leq i\leq n[/imath], then [imath](s_i\mid 1\leq i\leq n)[/imath] is the required sequence. It's clear from the definition of [imath]s_i[/imath] that [imath]s_{i+1}=s_i+a_{i+1}[/imath] for all [imath]1\leq i<n[/imath]. |
2863685 | How to find [imath]\sum_{n=1}^{\infty} \frac{(-1)^n}{n^2}[/imath] using complex analysis?
Consider the series [imath]\sum_{n=1}^{\infty} \frac{(-1)^n}{n^2}[/imath]. How to find its value using complex analysis? I am considering [imath]f(z)=\sum_{n=1}^{\infty} \frac{z^n}{n^2}[/imath]. It is absolutely convergent in [imath]|z|<1[/imath]. So it is holomorphic and can be differentiated termwisely, namely [imath]f'(z) = \sum_{n=1}^{\infty} \frac{z^{n-1}}{n}[/imath], thus [imath]zf'(z)=\sum_{n=1}^{\infty} \frac{z^n}{n} = \log (1-z)[/imath] where [imath]\log[/imath] takes principal branch such that [imath]\log1=0[/imath]. Thus we have [imath] f'(z) = \frac{\log (1-z)}{z} [/imath] I cannot proceed now. Is it possible to use complex analysis theories to tackle this? Thank you for any help! | 620071 | Evaluating the sum [imath]\sum_{n=1}^{\infty}\dfrac{(-1)^{n}}{n^{2}}[/imath]
I am tasked to evaluate the sum [imath]\sum_{n=1}^{\infty}\dfrac{(-1)^{n}}{n^{2}}[/imath] Using contour integration. This is what I've done so far. Let [imath]C_{N}[/imath] be the square defined by the lines [imath]x=\pm(N+\tfrac{1}{2})\pi[/imath] and [imath]y=\pm(N+\tfrac{1}{2})\pi[/imath]. Let [imath]f=\tfrac{1}{z^{2}\sin(z)}[/imath]. I was able to prove that [imath]\int_{C_{N}}\dfrac{1}{z^{2}\sin(z)}dz=2\pi i\left[\dfrac{1}{6}+2\sum_{n=1}^{N}\dfrac{(-1)^{n}}{\pi^{2}n^{2}} \right]. [/imath] By Cauchy's residue theorem. So now if I could prove that he integral converges to zero then I would be done. The problem is that I can't seem to get an upper bound on the integral. I've having trouble with the fact that [imath]|f|[/imath] is unbounded within [imath]C_{N}[/imath] so ML inequality doesn't help me. Any hints would be great. Thanks. |
2863787 | Integrate [imath]\int \frac{1}{1+ \tan x}dx[/imath]
Does this integral have a closed form? [imath]\int \frac{1}{1+ \tan x}\,dx[/imath] My attempt: [imath]\int \frac{1}{1+ \tan x}\,dx=\ln (\sin x + \cos x) +\int \frac{\tan x}{1+ \tan x}\,dx[/imath] What is next? | 953691 | Simplest way to integrate [imath]\int \frac{1}{1+\tan x}dx,[/imath]
[imath]\int \frac{1}{1+\tan x}dx,[/imath] A substitution like [imath]t = \tan x, \;dt = (1+t^2)dx[/imath] etc. immediately comes to mind, but I find this method a bit lengthy with the partial fractions. Is there a more concise solution to this? |
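One concise route (a sketch, not from either post): write the integrand over sine and cosine and split the numerator into a multiple of the denominator plus its derivative, [imath]\int\frac{dx}{1+\tan x}=\int\frac{\cos x}{\sin x+\cos x}\,dx=\frac12\int\frac{(\cos x+\sin x)+(\cos x-\sin x)}{\sin x+\cos x}\,dx=\frac{x}{2}+\frac12\ln\lvert\sin x+\cos x\rvert+C.[/imath] This also closes the first asker's loop: adding the two integrals [imath]\int\frac{dx}{1+\tan x}[/imath] and [imath]\int\frac{\tan x\,dx}{1+\tan x}[/imath] gives [imath]x[/imath], which pins down the remaining piece.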
2863849 | Is my proof of [imath]\sqrt{2} + \sqrt{3} + \sqrt{5}[/imath] is an irrational number valid?
The question is prove [imath]\sqrt{2} + \sqrt{3} + \sqrt{5}[/imath] is an irrational number. I started by assuming the opposite that [imath]\sqrt{2} + \sqrt{3} + \sqrt{5}[/imath] is a rational number. I stated that a rational number is a number made by dividing two integers. So I set [imath]\sqrt{2} + \sqrt{3} + \sqrt{5} = i_1/i_2[/imath], where [imath]i_1[/imath] and [imath]i_2[/imath] are two integers. I multiplied [imath]i_2[/imath] onto both sides and got [imath]i_2\sqrt{2} + i_2\sqrt{3} + i_2\sqrt{5} = i_1[/imath]. I then said that in order to turn an irrational number such as [imath]\sqrt{2}[/imath] into a rational number you can multiply, [imath]\sqrt{n}\sqrt{n}=n[/imath]. Meaning [imath]i_2[/imath] would have to hold the value of [imath]\sqrt{2}[/imath], [imath]\sqrt{3}[/imath] and [imath]\sqrt{5}[/imath] which is impossible. So it is an irrational. I think I made a mistake somewhere but I am not sure. | 2125853 | Prove that [imath]\sqrt{2}+\sqrt{3}+\sqrt{5}[/imath] is irrational. Generalise this.
I'm reading R. Courant & H. Robbins' "What is Mathematics: An Elementary Approach to Ideas and Methods" for fun. I'm on page [imath]60[/imath] and [imath]61[/imath] of the second addition. There are three exercises on proving numbers irrational spanning these pages, the last is as follows. Exercise [imath]3[/imath]: Prove that [imath]\phi=\sqrt{2}+\sqrt{3}+\sqrt{5}[/imath] is irrational. Try to make up similar and more general examples. My Attempt: Lemma: The number [imath]\sqrt{2}+\sqrt{3}[/imath] is irrational. (This is part of Exercise 2.) Proof: Suppose [imath]\sqrt{2}+\sqrt{3}=r[/imath] is rational. Then [imath]\begin{align} 2&=(r-\sqrt{3})^2 \\ &=r^2-2\sqrt{3}+3 \end{align}[/imath] is rational, so that [imath]\sqrt{3}=\frac{r^2+1}{2r}[/imath] is rational, a contradiction. [imath]\square[/imath] Let [imath]\psi=\sqrt{2}+\sqrt{3}[/imath]. Then, considering [imath]\phi[/imath], [imath]\begin{align} 5&=(\phi-\psi)^2 \\ &=\phi^2-\psi\phi+5+2\sqrt{6}. \end{align}[/imath] I don't know what else to do from here. My plan is/was to use the Lemma above as the focus for a contradiction, showing [imath]\psi[/imath] is rational somehow. Please help :) Thoughts: The "try to make up similar and more general examples" bit is a little vague. The question is not answered here as far as I can tell. |
2864159 | Evaluating: [imath]\int \dfrac{1+x^4}{(1-x^4)^{3/2}} dx[/imath]
[imath]\int \dfrac{1+x^4}{(1-x^4)^{3/2}} dx[/imath] Attempt: Multiplied and divided numerator by [imath]x^2 [/imath] to get [imath]\displaystyle\int \dfrac{x^2+x^6}{(x^3 - x^7)^{3/2}} dx[/imath] but the problem here is the [imath]-[/imath] sign before [imath]x^7[/imath] otherwise the numerator would have been the derivative and I could've done u- substitution. How do I proceed with it? Need a minor hint. | 2858747 | How to evaluate [imath]\int\frac{1+x^4}{(1-x^4)^{3/2}}dx[/imath]?
How do I start with evaluating this- [imath]\int\frac{1+x^4}{(1-x^4)^{3/2}}dx[/imath] What should be my first attempt at this kind of a problem where- The denominator and numerator are of the same degree Denominator involves fractional exponent like [imath]3/2[/imath]. Note:I am proficient with all kinds of basic methods of evaluating integrals. |
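A candidate antiderivative checked by differentiation (a sketch, not from either post): [imath]\frac{d}{dx}\,\frac{x}{\sqrt{1-x^4}}=\frac{1}{\sqrt{1-x^4}}+\frac{2x^4}{(1-x^4)^{3/2}}=\frac{(1-x^4)+2x^4}{(1-x^4)^{3/2}}=\frac{1+x^4}{(1-x^4)^{3/2}},[/imath] so [imath]\int\frac{1+x^4}{(1-x^4)^{3/2}}\,dx=\frac{x}{\sqrt{1-x^4}}+C[/imath] on any interval where [imath]|x|<1[/imath].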
2864273 | Find out the condition for [imath]k[/imath] such that the locus of [imath]z[/imath] is a circle.
Let [imath]\alpha,\beta[/imath] be fixed complex numbers and let [imath]z[/imath] be a variable complex number such that [imath]|z-\alpha|^2+|z-\beta|^2=k.[/imath] Find the condition on [imath]k[/imath] such that the locus of [imath]z[/imath] is a circle. I think that if I take [imath]\alpha,\beta[/imath] as diametrically opposite points and [imath]z[/imath] on the circumference, then [imath]\alpha,\beta[/imath] subtend a right angle at [imath]z[/imath]. But I cannot figure out the condition. The answer given is [imath]k>\frac{1}{2}|\alpha-\beta|^2[/imath].
If [imath]z_1[/imath] and [imath]z_2[/imath] are fixed and [imath]z[/imath] satisfies [imath]|z-z_1|^2+|z-z_2|^2=k[/imath], what are the possible values of [imath]k[/imath] so that this equation represents a circle? I tried using the Pythagorean theorem, reasoning that the equation of the circle should be [imath]|z-z_1|^2+|z-z_2|^2=|z_1-z_2|^2[/imath]. So [imath]k[/imath] should be [imath]|z_1-z_2|^2[/imath]. But the answer given is [imath]k>\frac{1}{2}|z_1-z_2|^2[/imath]. Where am I going wrong? How should I approach the problem (preferably geometrically)?
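A direct derivation sketch for both versions (not from either post), centring everything at the midpoint: with [imath]m=\frac{\alpha+\beta}{2}[/imath] and [imath]d=\frac{\alpha-\beta}{2}[/imath], the parallelogram identity gives [imath]|z-\alpha|^2+|z-\beta|^2=|(z-m)-d|^2+|(z-m)+d|^2=2|z-m|^2+\tfrac12|\alpha-\beta|^2,[/imath] so the equation becomes [imath]|z-m|^2=\frac{k}{2}-\frac{|\alpha-\beta|^2}{4}[/imath]. This is a genuine circle (centred at the midpoint) exactly when the right-hand side is positive, i.e. [imath]k>\frac12|\alpha-\beta|^2[/imath], matching the stated answer; at [imath]k=\frac12|\alpha-\beta|^2[/imath] the locus degenerates to the single point [imath]m[/imath].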
2859480 | Understanding the proof of complete reducibility for torus action from the book by Springer
Let [imath]G[/imath] be an algebraic torus ([imath]\cong(\mathbb C^*)^n[/imath]) acting linearly on a finite dimensional [imath]\mathbb C[/imath] vector space [imath]V[/imath]. For a character [imath]\chi[/imath] of [imath]G[/imath], we define [imath]V_{\chi}=\{v\in V|\forall g\in G, g\cdot v=\chi(g)v\}[/imath]. I want to show that [imath]V=\oplus_{\chi}V_{\chi}[/imath]. I am following the book 'Linear algebraic groups' by Springer Theorem [imath]3.2.3[/imath], page [imath]44[/imath]. Let [imath]X[/imath] denotes the set of all characters of [imath]G[/imath] and [imath]\phi:G\rightarrow GL(V)[/imath] be the rational representation. This can be seen as a map to [imath]\mathbb C^{n^{2}}[/imath], therefore we have [imath]\phi(g)_{i,j}\in\mathbb C[G][/imath] and we may write [imath]\phi(g)_{i,j}=\sum_{\chi}a(i,j)_{\chi}\chi[/imath], where [imath]a(i,j)_{\chi}[/imath] are constants. Thus we have \begin{align}\phi=\sum_{\chi}\chi A_{\chi}\end{align} for some [imath]A_{\chi}\in GL(V)[/imath](why [imath]A_{\chi}[/imath] is in [imath]GL(V)[/imath]?). Note that only finitely many [imath]A_{\chi}[/imath] are non zero. Then by Dedikind's lemma we have [imath]A_{\chi'}A_{\chi''}=\delta_{\chi',\chi''}A_{\chi'}[/imath]. We also have [imath]\sum_{\chi}A_{\chi}=Id[/imath]. If we set [imath]V_{\chi}=imA_{\chi}[/imath], then [imath]V=\oplus_{\chi}V_{\chi}[/imath]. My question is why this [imath]V_\chi:=imA_{\chi}=\{v\in V|\forall g\in G, g\cdot v=\chi(g)v\}[/imath]. In the book it is also given that [imath]g\in G[/imath] acts on [imath]V_{\chi}[/imath] as [imath]\chi(g).Id[/imath]. I also don't understand this. Any help is highly appreciated. Thank you. | 828359 | proof of basic fact that torus actions are diagonalizable
Suppose a torus [imath]T=(\mathbb{C}^\ast)^n[/imath] acts on a finite dimensional vector space [imath]W[/imath], and define for [imath]m \in M[/imath] ([imath]M[/imath] is the character lattice of [imath]T[/imath]) the eigenspace [imath]W_m[/imath] by [imath]W_m = \{w \in W \mid t\cdot w = \chi^m(t)w \text{ for all }t \in T \}[/imath] i.e. for [imath]w \in W_m[/imath] is a simultaneous eigenvector for all [imath]t \in T[/imath], with eigenvalue [imath]\chi^m(t)[/imath] depending on [imath]t \in T[/imath]. Then it is a famous fact [imath]W=\underset{m \in M} \bigoplus W_m[/imath] Can someone provide a somewhat self-contained proof of this result? I don't know much about the theory of algebraic groups. |
2865221 | Uniqueness of finite field
Assume [imath]L[/imath] is the algebraic closure of [imath]\mathbb{F}_p[/imath]. Show there exists a unique subfield of [imath]L[/imath] of cardinality [imath]p^n[/imath] containing [imath]\mathbb{F}_p[/imath]. The existence is easy: one just has to take the splitting field of [imath]X^{p^n}-X[/imath]. But what about uniqueness?
This is from A Course in Arithmetic by JP Serre Theorem 1 ii) Let [imath]p[/imath] be a prime number and let [imath]q = p^f(f \geq 1)[/imath] be a power of [imath]p[/imath]. Let be an algebraically closed field [imath]\Omega[/imath] of characteristic [imath]p[/imath]. There exists a unique subfield [imath]F_q[/imath] of [imath]\Omega[/imath] which has [imath]q[/imath] elements. It is the set of roots of the polynomial[imath]X^q-X[/imath]. iii) All finite fields with [imath]p^f[/imath] elements are isomorphic to [imath]F_q[/imath]. The proof of the last part says Assertion iii) follows from ii) and from the fact that all fields with [imath]p^f[/imath] elements can be embedded in [imath]\Omega[/imath], since [imath]\Omega[/imath] is algebraically closed. Can someone explain me how iii) follows fom ii)? |
2865222 | Complex analysis... I can't write the function [imath]f(z) =\tan (z)[/imath] as a real part and imaginary part
How to write the function [imath]f(z) =\tan (z)[/imath] as a real part and imaginary part? I have reached the form : \begin{align} \tan (z) &= \frac{\sin(z)}{\cos(z)}\\ &= \frac{e^{iz} - e^{-iz}}{ i (e^{iz}+ e^{-iz})} \end{align} | 1732032 | Real and Imaginary Parts of tan(z)
This is where I'm at: I know [imath] \cos(z) = \frac{e^{iz} + e^{-iz}}{2} , \hspace{2mm} \sin(z) = \frac{e^{iz} - e^{-iz}}{2i}, [/imath] where [imath] \tan(z) = \frac{\sin(z)}{\cos(z)}. [/imath] Applying the above, with a little manipulation, gives me: [imath] \tan(z) = \frac{i\left(e^{-iz} - e^{iz}\right)}{e^{iz} + e^{-iz}}.[/imath] My thoughts are that I could use [imath]e^{z} = e^{x+iy} = e^x\left(\cos(y) + i\sin(y) \right)[/imath] to express both the numerator and denominator in trig form. Then I could times both by the denominator's complex conjugate as to get a real denominator, which would then, in turn, allow me to express the function in its real and imaginary parts. However, as I'm sure you'd agree, this is messy, and it's hard to shake the idea that I'm missing something far more elegant. |
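Carrying the conjugate-multiplication idea through (a sketch, not from either post): with [imath]z=x+iy[/imath], multiply numerator and denominator by [imath]\cos(x-iy)=\overline{\cos(x+iy)}[/imath] and use the product-to-sum identities, [imath]\tan(x+iy)=\frac{\sin(x+iy)\cos(x-iy)}{|\cos(x+iy)|^2}=\frac{\tfrac12\left(\sin 2x+i\sinh 2y\right)}{\tfrac12\left(\cos 2x+\cosh 2y\right)},[/imath] so [imath]\operatorname{Re}\tan z=\frac{\sin 2x}{\cos 2x+\cosh 2y}[/imath] and [imath]\operatorname{Im}\tan z=\frac{\sinh 2y}{\cos 2x+\cosh 2y}[/imath], using [imath]\sin(2iy)=i\sinh 2y[/imath] and [imath]\cos(2iy)=\cosh 2y[/imath].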
2863139 | Every derivation [imath]\delta : L \to L[/imath] of a finite-dimensional semisimple Lie algebra [imath]L[/imath] over [imath]\Bbb C[/imath] is inner
I need to show that every derivation [imath]\delta : L \to L[/imath] of a finite-dimensional semisimple Lie algebra [imath]L[/imath] over [imath]\Bbb C[/imath] is inner, where the inner automorphisms are generated by [imath]exp(ad_x)= \sum_{j=0}^{k-1} ad_x^j/j![/imath] for [imath]ad_x[/imath] nilpotent of degree [imath]k[/imath], and [imath]ad_x:L \to L[/imath] is defined by [imath]ad_x(y)=[x,y][/imath] for all [imath]x \in L[/imath].
First recall some definitions : Let [imath]B[/imath] be a Killing form on Lie algebra [imath]\mathfrak{g}[/imath] over [imath]{\bf R}[/imath] such that [imath]B(X,Y)\doteq Tr(ad_Xad_Y)[/imath]. [imath]\mathfrak{g}[/imath] is semisimple if [imath]B[/imath] is non-degenerate. Define [imath]\partial \mathfrak{g} \doteq \{ D | D[X,Y]=[DX,Y] + [X,DY] \}[/imath] Clearly it contains [imath]ad(\mathfrak{g})\doteq \{ ad_X | X\in \mathfrak{g}\}[/imath]. My question is about proof of [imath]\partial \mathfrak{g}=ad(\mathfrak{g})[/imath] : I am reading Helgason's book. We can show that [imath] ad(\mathfrak{g}) = \mathfrak{g}[/imath] easily. I cannot proceed the proof since [imath]B[/imath] is defined on [imath]\mathfrak{g}[/imath] only. How can I complete the proof ? Thank you in advance. |
2865500 | Are roots of [imath]\det(A-tB)[/imath] with symmetric matrices real?
Let [imath]A[/imath], [imath]B[/imath] be symmetric real matrices [imath]n \times n[/imath]-type. Let's consider the polynomial [imath] f(t)=\det(A-tB). [/imath] If [imath]B[/imath] is positive definite, then it is known that [imath]f[/imath] has only real roots. Is the same true for nonsingular [imath]B[/imath] (and [imath]A[/imath], [imath]B[/imath] symmetric)? | 54489 | If [imath]\mathbf A[/imath] and [imath]\mathbf B[/imath] are Hermitian, when does [imath]\det(\mathbf A-\lambda\mathbf B)=0[/imath] have only real roots?
Let [imath]\mathbf A, \mathbf B[/imath] be Hermitian matrices of the same size. What is the characterization of [imath]\mathbf A, \mathbf B[/imath] such that [imath]p(\lambda)=\det(\mathbf A-\lambda\mathbf B)=0[/imath] has only real roots? If [imath]\mathbf B[/imath] is positive definite (I corrected this), it is easy to see [imath]p(\lambda)[/imath] has only real roots. |
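A small counterexample sketch for the indefinite case (not from either post): take [imath]A=\begin{pmatrix}0&1\\1&0\end{pmatrix}[/imath] and [imath]B=\begin{pmatrix}1&0\\0&-1\end{pmatrix}[/imath]. Both are real symmetric and [imath]B[/imath] is nonsingular but indefinite, yet [imath]\det(A-tB)=\det\begin{pmatrix}-t&1\\1&t\end{pmatrix}=-t^2-1,[/imath] whose roots are [imath]\pm i[/imath]. So nonsingularity of [imath]B[/imath] alone does not force real roots; the positive-definiteness hypothesis cannot simply be dropped.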
2865069 | If [imath]m[/imath] is a divisor of [imath]k[/imath] and [imath]n[/imath] is a divisor of [imath]k[/imath] and [imath]m[/imath] and [imath]n[/imath] are prime to each other, then [imath]mn[/imath] is a divisor of [imath]k[/imath].
If [imath]m[/imath] is a divisor of [imath]k[/imath] and [imath]n[/imath] is a divisor of [imath]k[/imath] and [imath]m[/imath] and [imath]n[/imath] are prime to each other, then [imath]mn[/imath] is a divisor of [imath]k[/imath]. I tried but somehow couldn't complete the manipulation. As [imath]m[/imath] is a divisor of [imath]k[/imath], [imath]k=mp[/imath] for some [imath]p[/imath], and similarly [imath]k=nq[/imath] for some [imath]q[/imath], and as [imath]gcd(m, n)=1[/imath] there exist integers, say [imath]u[/imath] and [imath]v[/imath], such that [imath]mu+nv=1[/imath]. After that, how do I proceed? Please help.
Let [imath]a_1....a_n[/imath] be pairwise coprime. That is [imath]gcd(a_i, a_k) = 1[/imath] for distinct [imath]i,k[/imath], I would like to show that if each [imath]a_i[/imath] divides [imath]b[/imath] then so does the product. I can understand intuitively why it's true - just not sure how to formulate the proof exactly. I want to say if we consider the prime factorizations of each [imath]a_i[/imath], then no two prime factorizations share any prime numbers. So the product of [imath]a_1...a_n[/imath] must appear in the prime factorization of [imath]b[/imath]. Is this correct? Or at least if if the idea is correct, any way to formulate it more clearly? |
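One way to finish from exactly the ingredients already listed above (a sketch, not taken from either post's answers): with [imath]k=mp=nq[/imath] and [imath]mu+nv=1[/imath], [imath]k=k(mu+nv)=kmu+knv=(nq)mu+(mp)nv=mn(qu+pv),[/imath] so [imath]mn\mid k[/imath]. The same one-line identity handles the pairwise-coprime product in the second post by induction on the number of factors, once one notes that each [imath]a_n[/imath] is coprime to the product of the previous ones.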
2865480 | Binomial random variable multiplied by a positive constant
Suppose we have [imath]X \sim Bin(n, p)[/imath]. Let [imath]Y \equiv \frac{X}{n}[/imath] with [imath]n>0[/imath]. What's the distribution of [imath]Y[/imath]? Using expectation and variance properties we got that [imath]EY = p[/imath] and [imath]VY = \frac{p(1-p)}{n}[/imath]. Is [imath]Y[/imath] also binomial?
Let [imath]X \sim Bin(n,p)[/imath] Part A. Show that the argument "Then [imath]X/n \sim Bin()[/imath] with [imath]E[X/n]=p[/imath]and [imath]Var[X/n]=pq/n[/imath]" is false. My book says we can prove this result using a moment generating function. But to me it makes no intuitive sense: We have Bin-distributed RV [imath]X[/imath] and if we rescale it with [imath]1/n[/imath] of course it will be Bin-distributed. And we know from standard formulas that [imath]E[aX]=aE[X][/imath] and [imath]Var[aX] = a^2Var[X][/imath] so I think the argument is correct. (For full disclosure, the argument we should falsify according to the book is "Let X be a binomial random variable with mean np and variance np(1−p). Show that the ratio X/n also has a binomial distribution with mean p and variance p(1−p)/n." But I think the argument is correct. ) Ok, now I get part A. Here is Part B. Prove it using the moment generating function of [imath]X/n[/imath]. Attempt: I get [imath]m_X(t)=E[\exp(tX/n)]=(m_X(t))^{1/n}[/imath] |
2865374 | Diff[imath](S^1)[/imath] deformation retracts to [imath]O(2)[/imath]
I have proved the following: Diff[imath]^+(S^1)[/imath] is path connected. Now I want to prove it deformation retracts to [imath]O(2)[/imath]. What I tried is the following: I define an onto homomorphism [imath] f:\text{Diff}^{+}(\mathbb{D}^2)\to \text{Diff}^{+}(S^1) [/imath] by [imath] \phi\mapsto \phi|_{\partial \mathbb{D}^2=S^1}[/imath]. This is onto because any diffeomorphism of [imath]S^1[/imath] can be extended to a diffeomorphism of [imath]\mathbb{D^2}[/imath]. The kernel of this homomorphism is [imath]\text{Diff}^+(\mathbb{D}^2\text{ rel } \partial)[/imath]. After that I don't know what to do.
How can we prove that the space of homeomorphisms Homeo[imath](S^1)[/imath] of [imath]S^1[/imath] (strong) deformation retracts onto the orthogonal group [imath]O(2)[/imath]? I know that this result is proved by Hellmuth Kneser in his paper Die Deformationssätze der einfach zusammenhängenden Flächen. I want to learn somewhat more elementary proof if possible, or at least the idea of the proof. Added Later (The topology on the space of homeomorphisms Homeo[imath](S^1)[/imath] of [imath]S^1[/imath]). This is footnote 7 from Hutchings' lecture notes, Introduction to homotopy groups and obstruction theory: We topologize the space Maps[imath](X, Y)[/imath] of continuous maps from [imath]X[/imath] to [imath]Y[/imath] using the compact-open topology. For details on this see e.g. Hatcher. A key property of this topology is that if [imath]Y[/imath] is locally compact, then a map [imath]X \rightarrow[/imath] Maps[imath](Y,Z)[/imath] is continuous iff the corresponding map [imath]X \times Y \rightarrow Z[/imath] is continuous. |