In diagonalization, can the eigenvector matrix be any scalar multiple? One can decompose a diagonalizable matrix (A) into A = C D C^-1, where C contains the eigenvectors and D is a diagonal matrix with the eigenvalues in the diagonal positions. So here's where I get confused. If I start with a random eigenvalue matrix D
> D
[,1] [,2]
[1,] 7 0
[2,] 0 5
and a random eigenvector matrix C
> C
[,1] [,2]
[1,] 4 2
[2,] 1 3
There should be some matrix A with those eigenvectors and eigenvalues. When I compute A by multiplying C D C^-1 I get
> A<-C%*%D%*%solve(C)
> A
[,1] [,2]
[1,] 7.4 -1.6
[2,] 0.6 4.6
My understanding is that if I then work backwards and diagonalize A, I should get the same matrices C and D I started with to get A in the first place. But for reasons that escape me, I don't.
> eigen(A)
$values
[1] 7 5
$vectors
[,1] [,2]
[1,] 0.9701425 0.5547002
[2,] 0.2425356 0.8320503
the first eigenvector is a multiple of column 1 of C and the second eigenvector is a multiple of column 2 of C. For some reason it feels strange that I have this relationship:
xC D (xC)^-1 = C D C^-1 where x is a scalar. Did I screw up somewhere or is this true?
|
Well:
$$(xA)^{-1}=\frac{1}{x}A^{-1}\qquad x\neq 0$$ so the result is actually the same. Eigenvectors are vectors of an eigenspace, and therefore, if a vector is an eigenvector, then any nonzero multiple of it is also an eigenvector. When you build a matrix of eigenvectors, you have infinitely many to choose from; the program computes the two it prefers (here, normalized to unit length), and they don't have to be the ones you chose at the beginning.
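A quick numerical check in R, reusing the matrices from the question (the scalar x is arbitrary):
> D <- diag(c(7, 5))
> C <- matrix(c(4, 1, 2, 3), nrow = 2)  # columns are the chosen eigenvectors
> A <- C %*% D %*% solve(C)
> x <- 10  # any nonzero scalar
> all.equal(A, (x*C) %*% D %*% solve(x*C))  # rescaling the eigenvectors leaves A unchanged
[1] TRUE
> eigen(A)$vectors[, 1] / C[, 1]  # a constant ratio: eigen() just returns unit-length representatives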
|
The minimum value of a trigonometric expression is given; the problem is when the minimum value is attained. Suppose the minimum value of $\cos^{2}(\theta_{1}-\theta_{2})+\cos^{2}(\theta_{2}-\theta_{3})+\cos^{2}(\theta_{3}-\theta_{1})$ is $\frac{3}{4}$.
Also the following equations are given
$$\cos^{2}(\theta_{1})+\cos^{2}(\theta_{2})+\cos^{2}(\theta_{3})=\frac{3}{2}$$
$$\sin^{2}(\theta_{1})+\sin^{2}(\theta_{2})+\sin^{2}(\theta_{3})=\frac{3}{2}$$ and
$$\cos\theta_{1}\sin\theta_{1}+\cos\theta_{2}\sin\theta_{2}+\cos\theta_{3}\sin\theta_{3}=0$$
To my intuition, it can be proved that the minimum value of the 1st expression is attained only if $(\theta_{1}-\theta_{2})=(\theta_{2}-\theta_{3})=(\theta_{3}-\theta_{1})=\frac{\pi}{3}$.
Provide some hints and techniques how to solve this.
|
As $\sin2x=2\sin x\cos x,$
$$\cos\theta_1\sin\theta_1+\cos\theta_2\sin\theta_2+\cos\theta_3\sin\theta_3=0$$
$$\implies \sin2\theta_1+\sin2\theta_2+\sin2\theta_3=0$$
$$\implies \sin2\theta_1+\sin2\theta_2=-\sin2\theta_3\ \ \ \ (1)$$
As $\cos2x=\cos^2x-\sin^2x,$
$$\cos^2\theta_1+\cos^2\theta_2+\cos^2\theta_3=\frac32=\sin^2\theta_1+\sin^2\theta_2+\sin^2\theta_3$$
$$\implies \cos2\theta_1+\cos2\theta_2+\cos2\theta_3=0$$
$$\implies \cos2\theta_1+\cos2\theta_2=-\cos2\theta_3\ \ \ \ (2)$$
Squaring & adding $(1),(2)$
$$\sin^22\theta_1+\sin^22\theta_2+2\sin2\theta_1\sin2\theta_2+(\cos^22\theta_1+\cos^22\theta_2+2\cos2\theta_1\cos2\theta_2)=\sin^22\theta_3+\cos^22\theta_3$$
$$\implies 2+2\cos2(\theta_1-\theta_2)=1$$
Using $\cos2x=2\cos^2x-1,$
$$2+ 2\left(2\cos^2(\theta_1-\theta_2)-1\right)=1\implies \cos^2(\theta_1-\theta_2)=\frac14$$
Similarly, $\cos^2(\theta_2-\theta_3)=\cos^2(\theta_3-\theta_1)=\frac14$, so under the given constraints the expression equals $\frac34$.
|
Two propositions about weak* convergence and (weak) convergence Let $E$ be a normed space. We have the usual definitions:
1) $f, f_n \in E^*$, $n \in \mathbb{N}$, then $$f_n \xrightarrow{w^*} f :<=> \forall x \in E: f_n(x) \rightarrow f(x)$$ and in this case we say that $(f_n)$ is $weak^*$-$convergent$ to $f$.
2)$x, x_n \in E$, $n \in \mathbb{N}$, then $$x_n \xrightarrow{w} x :<=> \forall f \in E^*: f(x_n) \rightarrow f(x)$$ and in this case we say that $(x_n)$ is $weakly\ convergent$ to $x$.
Now for the two propositions I want to prove or disprove the following statements.
Let $f, f_n \in E^*$, $n \in \mathbb{N}$, such that $f_n \xrightarrow{w^*} f$ and let $x, x_n \in E$. Consider:
[edit: Thanks for pointing out my mistake!]
a) $x_n \rightarrow x$ => $f_n(x_n) \rightarrow f(x)$,
b) $x_n \xrightarrow{w} x$ => $f_n(x_n) \rightarrow f(x)$.
So far, I think that even b) is true which would imply that a) is also true. My reasoning is that, by assumption, we have $f_m(x_n) \rightarrow f_m(x)$ for every fixed $m \in \mathbb{N}$ as well as $f_m(x) \rightarrow f(x)$ for every $x \in E$. Hence we have $$\lim_{m\to \infty} \lim_{n\to \infty} f_m(x_n) = \lim_{m\to \infty} f_m(x) = f(x)$$ which should be the same as $$\lim_{n\to \infty} f_n(x_n) = f(x).$$
However, I'm a little bit suspicious because the setting seems to imply that a) is true, but b) is not. Is my argument too sloppy?
|
a)
$$
|f_n(x_n) - f(x)| \leq |f_n(x_n) - f_n(x)| + |f_n(x) - f(x)| \leq \|f_n\| \|x_n - x\| + |f_n(x) - f(x)|.
$$
By the principle of uniform boundedness, $\sup_n \|f_n\|$ is finite, so you're done.
b) Not true in general. Let $\{x_n\}$ be an orthonormal basis in some Hilbert space. Then $x_n \rightarrow 0$ weakly and, by reflexivity, $\langle x_n, \cdot \rangle \rightarrow 0$ in the weak-* topology. Evidently, $\langle x_n, x_n \rangle$ does not go to $0$.
|
solvable subalgebra I want to show that a set $B\subset L$ is a maximal solvable subalgebra.
With $L = \mathscr{o}(8,F)$, $F$ an algebraically closed field, $\operatorname{char}(F)=0$, and
$$B= \left\{\begin{pmatrix}p&q\\0&s \end{pmatrix}\mid p \textrm{ upper triangular, }q,s\in\mathscr{gl}(4,F)\textrm{ and }p^t=-s, q^t=-q\right\}.$$
$B$ is a subalgebra by construction. So my problem now is how I can show that it is maximal solvable.
I tried the approach by '$L$ solvable $\Leftrightarrow$ $[L,L]$ nilpotent'. I am not sure how good this idea is, but here what I have:
$[B,B]= span\{[x,y]\mid x,y\in B\}$. So $x$ and $y$ are matrices of the form above.
That means that for the '$p$-part' we see already that it is nilpotent, since with each multiplication one more diagonal becomes zero.
But how can I show now, that we will also get rid of the $q$ and $s$ part?
I tried to look at $[x,y] = xy-yx = \begin{pmatrix}\bigtriangledown&*\\0&ss' \end{pmatrix}-\begin{pmatrix}\bigtriangledown&*\\ 0&s's \end{pmatrix}$
(with $\bigtriangledown$ any upper triangular matrix minus one diagonal, $s$ the part in the $x$ matrix and $s'$ in the $y$ matrix). (Sorry for the chaotic notation. I don't really know how to write it easier...)
Is this a beginning I should go on with? Or does someone have a hint how to approach this problem?
I don't really know, how to go on from here on. So I'd be very happy for any hint :)
Best, Luca
|
As far as I can tell, one should use Theorem C on page 84; however, I have no idea how to compute this decomposition $B(\text{triangular}) = H + \bigcup L_{\alpha}$, where the $\alpha$ are the positive roots.
|
single valued function in complex plane Let
$$f(z)=\int_{1}^{z} \left(\frac{1}{w} + \frac{a}{w^3}\right)\cos(w)\,\mathrm{d}w$$
Find $a$ such that $f$ is a single valued function in the complex plane.
|
$$
\begin{align}
\left(\frac1w+\frac{a}{w^3}\right)\cos(w)
&=\left(\frac1w+\frac{a}{w^3}\right)\left(1-\frac12w^2+O\left(w^4\right)\right)\\
&=\frac{a}{w^3}+\color{#C00000}{\left(1-\frac12a\right)}\frac1w+O(w)
\end{align}
$$
The residue at $0$ is $1-\frac12a$, so setting $a=2$ gives a residue of $0$. With zero residue, the integral picks up nothing when the path winds around $0$, so $f$ is single valued.
|
Does every infinite group have a maximal subgroup?
$G$ is an infinite group.
*
*Is it necessary true that there exists a subgroup $H$ of $G$ and $H$ is maximal ?
*Is it possible that there exists a chain $H_1 < H_2 < H_3 <\cdots <G $ with the property that for every $H_i$ there exists $H_{i+1}$ such that $H_i < H_{i+1}$?
|
Rotman p. 324 problem 10.25:
The following conditions on an abelian group are equivalent:
*
*$G$ is divisible.
*Every nonzero quotient of $G$ is infinite; and
*$G$ has no maximal subgroups.
It is easy to see that the above points are equivalent. If you need the details, I can add them here.
|
Finding $a$ s.t the cone $\sqrt{x^{2}+y^{2}}=za$ divides the upper half of the unit ball into two parts with the same volume My friend gave me the following question:
For which value of the parameter $a$ does the cone
$\sqrt{x^{2}+y^{2}}=za$ divide $$\{(x,y,z):\,
x^{2}+y^{2}+z^{2}\leq1,z\geq0\}$$ into two parts with the same volume ?
I am having some difficulties with the question.
What I did:
First, a ball with radius $R$ has the volume $\frac{4\pi}{3}R^{3}$,
hence the volume of the upper half of the unit ball is $\frac{2\pi}{3}$, so each of the two parts must have volume $\frac{\pi}{3}$.
Secondly: I found where the cone intersects the boundary
of the ball:
$$
\sqrt{x^{2}+y^{2}}=za
$$
hence
$$
z=\sqrt{\frac{x^{2}+y^{2}}{a^{2}}}
$$
and
$$
x^{2}+y^{2}+z^{2}=1
$$
substituting for $z$ we get
$$
x^{2}(\frac{a^{2}+1}{a^{2}})+y^{2}(\frac{a^{2}+1}{a^{2}})=1
$$
hence
$$
x^{2}+y^{2}=\frac{a^{2}}{a^{2}+1}
$$
using the equation for the cone we get
$$
z=\frac{1}{\sqrt{a^{2}+1}}
$$
I then did (and I am unsure about the boundaries) : $0<z<\frac{1}{\sqrt{a^{2}+1}},0<r<az$
and using the coordinates $x=r\cos(\theta),y=r\sin(\theta),z=z$ I
got that the volume that the cone encloses in the ball is
$$
\int_{0}^{\frac{1}{\sqrt{a^{2}+1}}}dz\int_{0}^{az}dr\int_{0}^{2\pi} r d\theta
$$
which evaluates to
$$
\frac{\pi a^{2}}{3(a^{2}+1)^{\frac{3}{2}}}
$$
I then required this to be equal to half the volume of the upper half
of the unit ball:
$$
\frac{\pi a^{2}}{3(a^{2}+1)^{\frac{3}{2}}}=\frac{\pi}{3}
$$
and got
$$
a^{2}=(a^{2}+1)^{\frac{3}{2}}
$$
which has no real solution, according to WA.
Can someone please help me understand where I am wrong and how to
solve this question ?
|
Check your bounds again. I believe they should be
$$\begin{align}0<&z<\frac{1}{\sqrt{a^2+1}}\\az<&r<\sqrt{1-z^2}\end{align}$$
Finishing the integral with these bounds should yield $a=\sqrt3$.
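One way to double-check the value (a sketch of my own, not part of the original computation) is to work in spherical coordinates: the cone $\sqrt{x^2+y^2}=za$ makes half-angle $\varphi_0=\arctan a$ with the $z$-axis, so the part of the unit ball inside the cone has volume
$$
V=\int_{0}^{2\pi}\int_{0}^{\varphi_0}\int_{0}^{1}\rho^{2}\sin\varphi\,d\rho\,d\varphi\,d\theta=\frac{2\pi}{3}\left(1-\cos\varphi_0\right).
$$
Setting $V=\frac{\pi}{3}$ gives $\cos\varphi_0=\frac12$, hence $\varphi_0=\frac{\pi}{3}$ and $a=\tan\frac{\pi}{3}=\sqrt{3}$. Note that this region includes the spherical cap above $z=\frac{1}{\sqrt{a^2+1}}$, which is exactly what the original cylindrical integral missed.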
|
truncate, ceiling, floor, and...? Truncation rounds negative numbers upwards, and positive numbers downwards. Floor rounds all numbers downwards, and ceiling rounds all numbers upwards. Is there a term/notation/whatever for the fourth operation, which rounds negative numbers downwards, and positive numbers upwards? That is, one which maximizes magnitude as truncation minimizes magnitude?
|
The fourth operation is called "round towards infinity" or "round away from zero". It can be implemented by
$$y=\text{sign} (x)\text{ceil}(|x|)$$
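In R, for instance (the helper name round_away is my own, just to illustrate the formula):
> round_away <- function(x) sign(x) * ceiling(abs(x))
> round_away(c(-2.5, -0.1, 0, 0.1, 2.5))
[1] -3 -1  0  1  3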
|
Usefulness of the concept of equivalent representations Definition: Let $G$ be a group, $\rho : G\rightarrow GL(V)$ and $\rho' : G\rightarrow GL(V')$ be two representations of G. We say that $\rho$ and $\rho'$ are $equivalent$ (or isomorphic) if $\exists \space T:V\rightarrow V'$ linear isomorphism such that $T{\rho_g}={\rho'_g}T\space \forall g\epsilon G$.
But I don't understand why this concept is useful. If two groups $H,H'$ are isomorphic, then we can translate any algebraic property of $H$ into $H'$ via the isomorphism. But I don't see how a property of $\rho$ can be translated to similar property of $\rho'$. Nor I have seen any example in any textbook where this concept is used. Can someone explain its importance?
|
This is just the concept of isomorphism applied to representations, i.e. $T$ is providing an isomorphism between $V$ and $V'$ which interchanges the two group
representations. So all your intuitions for the role of isomorphisms in group theory should carry over.
Why are you having trouble seeing that properties of $\rho$ can be carried over to
$\rho'$? As with any isomorphic situations, any property of $\rho$ that doesn't make specific reference to the names of the elements in $V$ (e.g. being irreducible or not, the number of irreducible constituents, the associated character) will carry over to $\rho'$.
|
Proof for Sum of Sigma Function How to prove:
$$\sum_{k=1}^n\sigma(k) = n^2 - \sum_{k=1}^n (n \bmod k)$$
where $\sigma(k)$ is the sum of the divisors of $k$.
|
$$\sum_{k=1}^n \sigma(k) = \sum_{k=1}^n\sum_{d|k} d = \sum_{d=1}^n\sum_{k=1,d|k}^{n}d = \sum_{d=1}^n d\left\lfloor \frac {n} {d}\right\rfloor$$
Now just prove that $$d\left\lfloor \frac n d\right\rfloor = n-(n\bmod d),$$ which is exactly the division algorithm $n = d\left\lfloor \frac nd\right\rfloor + (n\bmod d)$; summing over $d=1,\dots,n$ then gives the claim.
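A quick brute-force check of the identity in R (using a naive divisor sum, purely for verification):
> n <- 100
> sigma <- function(k) sum((1:k)[k %% (1:k) == 0])  # sum of divisors of k
> sum(sapply(1:n, sigma)) == n^2 - sum(n %% (1:n))
[1] TRUE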
|
Why does the result of the Lagrangian depend on the formulation of the constraint? Consider the following maximization problem:
$$ \max f(x) = 3 x^3 - 3 x^2, s.t. g(x) = (3-x)^3 \ge 0 $$
Now it's obvious that the maximum is obtained at $ x =3 $. In this point, however, the constraint qualification
$$ Dg(x) = -3 (3-x)^2 = 0$$
fails, so it's not a solution of the Lagrangian.
Re-formulating the constraint as
$$ h(x) = 3 - x \ge 0 $$
allows obtaining the result, as the constraint qualification holds: $ Dh(x) = -1 $
Now, I'm well aware that the Lagrangian method can fail under certain circumstances. However, isn't it kind of odd that re-formulation of the constraint yields a solution? Does this mean that whenever we're stuck with a constraint qualification issue, we should try to 'fix' the constraint?
|
Yes, reformulating the constraint might change the validity of the CQ. This is quite natural, since the Lagrangian expresses optimality via derivatives of the constraints (and the objective). Changing the constraint may render its derivative useless ($0$ in your case).
|
Endomorphisms of a semisimple module Is there an easy way to see the following:
Given a $k$-algebra $A$, with $k$ a field, and a finite dimensional semisimple $A$-module $M$. Look at the natural map $A \to \mathrm{End}_k(M)$ that sends an $a \in A$ to
$$
M \to M: m \mapsto a \cdot m.
$$
Then the image of $A$ is a finite-dimensional semisimple algebra.
|
Here's one way to look at it: Notice that the kernel of the map is exactly $ann(M)$, which necessarily contains the Jacobson radical $J(A)$ of $A$. Since $A/J(A)$ and all of its quotients are semiprimitive, it follows that $A/ann(M)$ is semiprimitive.
Now view $M$ as a faithful $A/ann(M)$-module. Since the simple submodules of $M$ remain the same during this passage, $M$ is also still semisimple over this new ring. You can see in this question why a ring with a faithful module of finite length must be Artinian. Now we have that $A/ann(M)$ is Artinian and semiprimitive: so it is semisimple.
I see I overlooked a simple way for concluding that the image is finite dimensional. Of course our image ring is a subalgebra of $End(M_k)$ which is finite dimensional... so the subring is finite dimensional as well. The argument I gave before essentially proves a more general case: "If $M$ is a semisimple $R$ module of finite length, the image of the natural map is a semisimple ring."
|
Question about order of elements of a subgroup Given a subgroup $H \subset \mathbb{Z}^4$, defined as the 4-tuples $(a,b,c,d)$ that satisfy $$ 8\mid (a-c),\qquad a+2b+3c+4d=0$$
The question is: give all orders of the elements of $\mathbb{Z}^4 /H$.
I don't have any idea how to start with this problem. Can anybody give some hints, strategies etc to solve this one?
thanks
|
As Amir has said above, consider the homomorphism $\phi:\oplus_{i=1}^{4}\mathbb{Z}\rightarrow \mathbb{Z}_{8}\oplus\mathbb{Z}$ (given on tuples by $\phi(a,b,c,d)=(a-c \bmod 8,\; a+2b+3c+4d)$, so that $\ker\phi=H$). You can check that, as Amir pointed out, $\operatorname{im}(\phi)\cong (\oplus_{i=1}^{4}\mathbb{Z})/H$, so what is $\operatorname{im}(\phi)$? If you wanted, you could work this out, but you are only asked for the possible orders of elements. If $w=(x,y)\in\mathbb{Z}_{8}\oplus\mathbb{Z}$ and $y\ne 0$, then $w$ has infinite order. Clearly, taking $w=\phi(0,0,0,1)$, we get $w=(0,4)$ and so $\operatorname{im}(\phi)$ has elements of infinite order.
Now consider elements of finite order. These must arise from elements of the form $w=(x,0)\in\mathbb{Z}_{8}\oplus\mathbb{Z}$. Now the possible orders of elements of $\mathbb{Z}_{8}$ are $1,2,4$ and $8$ can we find elements of such orders in $\operatorname{im}(\phi)$?
Order 1: As $\operatorname{im}(\phi)$ is a subgroup, it contains the identity - an element of order $1$.
Order 8: Consider $(a,b,c,d)\in \oplus_{i=1}^{4}\mathbb{Z}$. We want $\phi(a,b,c,d)$ to have order $8$, so $a-c$ must be coprime to $8$ (i.e. odd) and $a+2b+3c+4d=0$. Does such an element exist? Well, as $a-c$ is odd, $a$ and $c$ have different parity, so assume that $a$ is odd and $c$ is even. But then $a+2b+3c+4d$ will be odd and hence not equal to $0$. Thus no elements of order $8$ exist.
Order 4: As above, considering $\phi(a,b,c,d)$ we must have $a-c\equiv 2\text{ or }6\pmod{8}$ and $a+2b+3c+4d=0$. If $a=2$ and $c=0$, then $a-c\equiv 2\pmod{8}$. Moreover, if we then take $b=-1$ and $d=0$ we have $w=\phi(2,-1,0,0)=(2,0)$, and so $w$ is an element of order $4$.
Order 2: Take $2w=(4,0)=\phi(4,-2,0,0)$, and then $2w$ has order $2$.
Thus the possible orders of elements of $(\oplus_{i=1}^{4}\mathbb{Z})/H$ are $1,2,4$ and $\infty$.
Edit: I misread the question initially, and so thought it was asking for the possible orders of $H$ and not $\mathbb{Z}^{4}/H$. Here is an answer for finding orders of elements of $H$.
To start off with, what are the possible powers of elements of the direct product $\oplus_{i=1}^{4}\mathbb{Z}$. Clearly the only element of finite order is the identity element of order $1$ (since this is the case in $\mathbb{Z}$). Thus as $H$ is a subgroup of $\oplus_{i=1}^{4}\mathbb{Z}$, the only possible finite order of elements of $H$ is $1$. Can this order be attained?
Well clearly, the only element of $\oplus_{i=1}^{4}\mathbb{Z}$ of order $1$ is the identity element, namely $(0,0,0,0)$, and it is easy to see that this will be contained in $H$.
Now let's check if there are elements of infinite order in $H$. Suppose $x=(a,b,c,d)$ is such an element. Well for simplicity (i.e. to get rid of your first condition), just take $a=c=0$ so that $8\vert a-c=0$. Then the other condition gives $0=a+2b+3c+4d=2b+4d$, meaning that $b+2d=0$. Thus taking $d=1$ and $b=-2$ we have that $x=(0,-2,0,1)\in H$, and $x$ has infinite order.
We conclude that the possible orders of elements of $H$ are $1$ and $\infty$.
|
number of errors in a binary code To transmit the eight possible states of three binary symbols, a code appends three further symbols: if the message is ABC, the code transmits ABC(A+B)(A+C)(B+C). How many errors can it detect and correct?
My first step should be to find the minimum distance, but is there a systematic way to find this? Or do I just try everything?
|
The code is linear and there are seven non-zero words, so the brute force method of listing all of them is quite efficient in this case. You can further reduce the workload by observing that the roles of $A,B,C$ are totally symmetric. What this means is that you only need to check the weights of the words where you assign one, two or all three of them to have value $1$ (and set the rest of them equal to zero). Leaving that to you, but it sure looks like the minimum distance will be three, and hence the code can correct a single error.
In general, the Yes/No question "Does a linear code with a given generator matrix have non-zero words of weight below a given threshold?" has been proven to be NP-complete, so there is no known systematic efficient way of doing this.
|
How can $\lim_{n\to \infty} (3^n + 4^n)^{1/n} = 4$? If $\lim_{n\to \infty} (3^n + 4^n)^{1/n} = 4$, then $\lim_{n\to \infty} 3^n + 4^n=\lim_{n\to \infty}4^n$, which implies that $\lim_{n\to \infty} 3^n=0$, which is clearly not correct. I tried to do the limit myself, but I got $3$. The way I did it is that at the step $\lim_{n\to \infty} 3^n + 4^n=\lim_{n\to \infty}L^n$ I divided everything by $4^n$ and got $\lim_{n\to \infty} (\frac{3}{4})^n + 1=\lim_{n\to \infty} (\frac{L}{4})^n$. Informally speaking, the $1$ on the LHS is going to be very insignificant as $n \rightarrow \infty$, so $L$ would have to be $3$. Could someone explain to me why I am wrong and how the limit can possibly be equal to $4$? Thanks!
|
$\infty-\infty$ is not well-defined.
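Concretely, factoring out the dominant term avoids the $\infty-\infty$ trap:
$$(3^n+4^n)^{1/n}=\left(4^n\left(1+\left(\tfrac34\right)^n\right)\right)^{1/n}=4\left(1+\left(\tfrac34\right)^n\right)^{1/n}\to 4,$$
since $1\le\left(1+\left(\tfrac34\right)^n\right)^{1/n}\le 2^{1/n}\to 1$. The step $\lim(3^n+4^n)=\lim L^n$ is meaningless, because both sides are infinite.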
|
Prove that: $\sup_{z \in \overline{D}} |f(z)|=\sup_{z \in \Gamma} |f(z)|$ Suppose $D=\Delta^n(a,r)=\Delta(a_1,r_1)\times \ldots \times \Delta(a_n,r_n) \subset \mathbb{C}^n$
and
$\Gamma =\partial \circ \Delta^n(a,r)=\left \{ z=(z_1, \ldots , z_n)\in \mathbb{C}^n:|z_j-a_j|=r_j,~ j=\overline{1,n} \right \}$.
Let $f \in \mathcal{H}(D) \cap \mathcal{C}(\overline{D})$.
Prove that: $\sup_{z \in \overline{D}} |f(z)|=\sup_{z \in \Gamma} |f(z)|$
I need your help. Thanks.
|
Since it's homework, a few hints:
First assume that $f$ is holomorphic on a neighbourhood of $\bar D$. Then use the maximum principle in each variable separately to conclude that it's true in this case. Finally, approximate your $f$ by functions that are holomorphic on a neighbourhood of $\bar D$.
More details I'll do it for $n=2$ under the assumption that $f$ extends to a neighbourhood of $\bar D$, and leave the general case up to you. Let $a \in \partial \Delta$ and define $\phi_a(\zeta) = (a,\zeta)$. Then $f(\phi_a(\zeta))$ is holomorphic on a neighbourhood of $\bar\Delta$, so by the maximum modulus principle in one variable applied to $f\circ \phi_a$, $\sup_{\zeta\in\bar\Delta} |f(a, \zeta)| = \sup_{\zeta\in\partial\Delta} |f(a, \zeta)|$. Do the same for $\psi_a(\zeta) = (\zeta,a)$ and take the sup over all $a \in \partial\Delta$ to obtain
$$\sup_{z\in \partial(\Delta \times \Delta)} |f(z)| = \sup_{z\in\partial\Delta\times\partial\Delta} |f(z)|$$
since the union of the images of $\phi_a$ and $\psi_a$ covers the entire boundary of $\Delta\times\Delta$. To finish off, use the maximum modulus principle in $\mathbb{C}^n$ to see that
$$\sup_{z\in \bar\Delta \times \bar\Delta} |f(z)| = \sup_{z\in\partial(\Delta\times\Delta)} |f(z)|$$
or, if you prefer, do the same argument again with $a \in \Delta$.
|
What is the difference between isomorphism and homeomorphism? I have some questions understanding isomorphism. Wikipedia said that
isomorphism is bijective homeomorphism
I know that $F$ is a homeomorphism if $F$ and $F^{-1}$ are continuous. So my question is: if $F$ and its inverse are continuous, can it fail to be bijective? Any example? I think if $F$ and its inverse are both continuous, then $F$ ought to be bijective, is that right?
|
Isomorphism and homeomorphism appear both in topology and abstract algebra. Here, I think you mean isomorphism and homeomorphism in topology. In short, they are the same in topology: a homeomorphism is precisely an isomorphism in the category of topological spaces.
|
Is there a terminological difference between "sequence" and "complex" in homology theory Suppose you are given something like this:
$\dots \longrightarrow A^n \longrightarrow A^{n+1} \longrightarrow \dots$
People tend to talk about "chain complexes" but about "short exact sequences". Is there any terminological difference or any convention with regards to using these words (EDIT: I mean "complex" and "sequence" in a homological context) that a mathematical writer should comply to?
|
To say that the sequence is a chain complex is a less imposing condition: it simply says that if you compose any two of the maps in the sequence, you get $0$. But to say the sequence is exact says more: it says this is precisely (or, if you rather, exactly) the only way you get something mapping to zero.
The first statement says that the image of the arrow to the left of $A^n$ is contained in the kernel of the arrow to its right, the second statement says that the reverse inclusion also holds.
|
Need help solving the ODE $ f''(r) + \frac{1}{r}f'(r) = 0 $ I am currently taking complex analysis, and this homework question has a part that requires the solution to a differential equation. I took ODE over 4 years ago, so my skills are very rusty. The equation I derived is this:
$$ f''(r) + \frac{1}{r}f'(r) = 0 $$
I made this substitution: $ v(r) = f'(r) $ to get:
$$ v'(r) + \frac{1}{r}v(r) = 0 \implies \frac{dv}{dr} = - \frac{v}{r} \implies \frac{dv}{v} = - \frac{dr}{r} $$
Unsure of what to do now.
Edit: I forgot to add that the answer is $ a \log r + b $
|
You now have:
$$\frac{dv}{v} = -\frac{dr}{r}$$
We can integrate this:
$$\ln v = -\ln r + c_1$$
$$v = e^{-\ln r + c_1} = \frac{e^{c_1}}{r} = \frac{a}{r}$$
where $a = e^{c_1}$ is just a constant. Thus we have (since $v = f'(r) = df/dr$):
$$\frac{df}{dr} = \frac{a}{r} \implies df = \frac{a}{r}\,dr \implies f = a\ln r + b$$
And that's our answer! (Check: $f' = a/r$ and $f'' = -a/r^2$, so $f'' + \frac1r f' = 0$.)
|
Is the outer boundary of a connected compact subset of $\mathbb{R}^2$ an image of $S^{1}$? A connected compact subset $C$ of $\mathbb{R}^2$ is contained in some closed ball $B$. Denote by $E$ the unique connected component of $\mathbb{R}^2-C$ which contains $\mathbb{R}^2-B$. The outer boundary $\partial C$ of $C$ is defined to be the boundary of $E$.
Is $\partial C$ always a (continuous) image of $S^{1}$?
|
No; the outer boundary of any handlebody is an $n$-holed torus for some $n$. But all sorts of other things can happen; if $C$ is the Cantor set, then the boundary is the Cantor set. So it can be all sorts of things.
|
Sets question, without Zorn's lemma Is there any proof to $|P(A)|=|P(B)| \Longrightarrow |A|=|B|$ that doesn't rely on Zorn's lemma (which means, without using the fact that $|A|\neq|B| \Longrightarrow |A|<|B|$ or $|A|>|B|$ ) ?
Thank you!
|
Even with Zorn's Lemma, one cannot (under the usual assumption that ZF is consistent) prove that if two power sets have the same cardinality, then the sets have the same cardinality.
|
If $G$ is a finite group, $H$ is a subgroup of $G$, and $H\cong Z(G)$, can we conclude that $H=Z(G) $?
If $G$ is a finite group, $H$ is a subgroup of $G$, and $H\cong Z(G)$, can we conclude that $H=Z(G) $?
I learnt that if two subgroups are isomorphic, it's not necessarily true that they behave in the same way in relation to outside groups or elements. For example, if $H \cong K $ and both $H,K$ are normal in $G$, then it's not necessarily true that $G/H \cong G/K$. Also, there is a sufficient condition (but not a necessary one, as I read afterwards) for there to be an automorphism that, when restricted to $H$, induces an isomorphism between $H$ and $K$.
Now, is this true in this case? or is the statement about center and its isomorphic subgroups always true?
|
No. Let me explain why. You should think of $H\cong K$ as a statement about the internal structure of the subgroups $H$ and $K$. The isomorphism shows only that elements of $H$ interact with each other in the same way that elements of $K$ interact with each other. It doesn't say how any of these elements behave with the rest of the group - that is, the external behavior of elements of $H$ with the rest of $G$ may not be the same as the external behavior of elements of $K$ with the rest of $G$.
Being the center of a group is a statement about external behavior. If we state that every element of $Z(G)$ commutes with every other element of $G$, then just because $H$ and $Z(G)$ have the same internal structure doesn't mean that every element of $H$ must then commute with every other element in $G$.
For an easy counterexample, consider $G=S_3\times \mathbb{Z}_2$. Let $\alpha$ be any of the transpositions $(12)$, $(13)$, or $(23)$ in $S_3$, and let $\beta$ be the generator of the $\mathbb{Z}_2$. Here, by virtue of the direct product, $\beta$ commutes with every other element of $G$, and it is not difficult to see that no other nontrivial element of $G$ holds this property, so $\langle\beta\rangle =Z(G)$. On the other hand, we know that all groups of order $2$ are isomorphic, so $\langle \alpha \rangle \cong \langle \beta \rangle$.
|
What's the difference between Complex infinity and undefined? Can somebody please expand upon the specific meaning of these two similar mathematical ideas and provide usage examples of each one? Thank you!
|
I don't think they are similar.
"Undefined" is something that one predicates of expressions. It means they don't refer to any mathematical object.
"Complex infinity", on the other hand, is itself a mathematical object. It's a point in the space $\mathbb C\cup\{\infty\}$, and there is such a thing as an open neighborhood of that point, as with any other point. One can say of a rational function, for example $(2x-3)/(x+5)$, that its value at $\infty$ is $2$ and its value at $-5$ is $\infty$.
To say that $f(z)$ approaches $\infty$ as $z$ approaches $a$, means that for any $R>0$, there exists $\delta>0$ such that $|f(z)|>R$ whenever $0<|z-a|<\delta$. That's one of the major ways in which $\infty$ comes up. An expression like $\lim\limits_{z\to\infty}\cos z$ is undefined; the limit doesn't exist.
|
kaleidoscopic effect on a triangle Let $\triangle ABC$ be a triangle and let $r$, $s$, and $t$ be straight lines. Considering the set of all mirror images of that triangle across $r$, $s$, and $t$, and the successive images of images across the same lines, how can we check whether $\triangle DEF$ is an element of that set?
Given:
*
*Points: $A(1,1)$, $B(3,1)$, $C(1,2)$, $D(n,m)$, $E(n+1,m)$, $F(n,m+2)$, where $n$ and $m$ are integers.
*Straight lines: $r: x=0$, $s: y=0$ and $t:x+y=5$.
No idea how to begin.
|
Create an image of your coordinate system, your three lines of reflections, and your original triangle. You can draw in mirror triangles pretty easily, and with a few of these, you will probably find a pattern. (I created my illustration using Cinderella, which comes with a tool to define transformation groups.)
As you can see, there are locations $m,n$ for which the triangle $DEF$ is included. Note that $E$ is the image of $C$ and $F$ is the image of $B$, though. I'll leave it to you as an exercise to find a possible combination of reflections which maps $\triangle ABC$ to $\triangle DEF$.
|
Evaluating $\lim\limits_{x\to0}\frac{1-\cos(x)}{x}$ $$\lim_{x\to0}\frac{1-\cos(x)}{x}$$
Could someone help me with this trigonometric limit? I am trying to evaluate it without l'Hôpital's rule and derivation.
|
There is also a fairly direct method based on trig identities and the limit $ \ \lim_{\theta \rightarrow 0} \ \tan \frac{\theta}{2} \ = \ 0 , $ which I discuss in the first half of my post here.
[In brief,
$$ \lim_{\theta \rightarrow 0} \ \tan \frac{\theta}{2} \ = \ 0 \ = \ \lim_{\theta \rightarrow 0} \ \frac{1 - \cos \theta}{\sin \theta} \ = \ \lim_{\theta \rightarrow 0} \ \frac{1 - \cos \theta}{\theta} \ \cdot \ \frac{\theta}{\sin \theta} $$
$$= \ \lim_{\theta \rightarrow 0} \ \frac{1 - \cos \theta}{\theta} \ \cdot \ \frac{1}{\lim_{\theta \rightarrow 0} \ \frac{\sin \theta}{\theta} } \ = \ \lim_{\theta \rightarrow 0} \ \frac{1 - \cos \theta}{\theta} \ \cdot \ \frac{1}{1} \ \Rightarrow \ \lim_{\theta \rightarrow 0} \ \frac{1 - \cos \theta}{\theta} \ = \ 0 \ . $$]
|
Convergence of increasing measurable functions in measure? Let $(f_{n})$ be an increasing sequence of measurable functions such that $f_{n} \rightarrow f$ in measure.
Show that $f_{n}\uparrow f$ almost everywhere
My attempt
The sequence $(f_{n})$ converges to $f$ in measure if for any $\varepsilon >0$,
$$
m (\{x: |f_n(x) - f(x)| > \varepsilon\}) \rightarrow 0\text{ as } n \rightarrow\infty.
$$
I think that taking a small enough epsilon concludes the result, since the $f_n$ are increasing.
Can you help me solve this exercise?
Thanks for your help
|
If $f_n$ converges to $f$ in measure, we have a subsequence $f_{n_k}$ converging to $f$ almost everywhere. As $(f_n)$ is increasing, at each point the sequence $f_n(x)$ either converges or increases to $\infty$. But as the subsequence $f_{n_k}$ converges to $f$ at almost every point, we conclude that $f_n$ increases to $f$ almost everywhere.
|
A directional derivative of $f(x,y)=x^2-3y^3$ at the point $(2,1)$ in some direction might be: A directional derivative of
$$
f(x,y)=x^2-3y^3
$$
at the point $P(2,1)$ in some direction might be:
a) $-9$
b) $-10$
c) $6$
d) $11$
e) $0$
I'd say it's $-9$ for sure, but what about $0$ (the direction would be $<0,0>$)?
Are there any other proper answers?
|
$$D_{\vec u}f(2,1)=\nabla f(2,1)\cdot\frac{\vec u}{\|\vec u\|}=4u_1-9u_2\;\;\wedge\;\;u_1^2+u_2^2=1$$
so you get a non-linear system of equations
$$\begin{align*}\text{I}&\;\;4u_1-9u_2=t\\\text{II}&\;\;\;\;u_1^2+\;u_2^2=\,1\end{align*}$$
and from here we get
$$u_1^2+\left(\frac{4u_1-t}{9}\right)^2=1\implies 97u_1^2-8tu_1+(t^2-81)=0$$
The above quadratic's discriminant is
$$\Delta=-324(t^2-97)\ge 0\iff |t|\le\sqrt{97}$$
Thus, the system has a solution for any $\;t\in\Bbb R\;$ with $\;|t|\le\sqrt{97}\;$, so all of $\,(a), (c), (e)\;$ fulfill this condition.
|
Whether $L=\{(a^m,a^n)\}^*$ is regular or not? I am considering the automatic structure for Baumslag–Solitar semigroups, and I have a question. For any $m,n \in Z$, is the set $L=\{(a^m,a^n)\}^*$ regular or not? Here a set being regular means it can be recognized by a finite automaton.
Since the regular sets are closed under the operations union, intersection, complement, concatenation and Kleene star (see here), I have tried to represent $L$ as the result of applying the operations mentioned above to some sets. If it is successfully represented, $L$ will be regular. But I failed. So I want to ask for some clues for this question.
Thanks for your assistance.
|
As Boris has pointed out in the comments, so far your question doesn't make complete sense, because you haven't specified a finite alphabet $\Sigma$ such that $L\subseteq \Sigma^*$.
What I think you probably want is to consider $L$ as a language over $\Sigma = (A\cup\{\epsilon\})\times(A\cup\{\epsilon\})$, where we are simplifying strings over $\Sigma$ and writing them as pairs of strings over $A$, for example $(a,a)(a,\epsilon) = (a^2,a)$.
If this is what you mean, then for a given $m,n\in \mathbb{N}$, $L = \{(a^m,a^n)\}^*$ is indeed a regular language over $\Sigma$, which is obvious, since $(a^m,a^n)$ is just (the shorthand form of) a particular word over $\Sigma$, and so $(a^m,a^n)^*$ is a regular expression.
|
Symmetric positive definite with respect to an inner product Let $A$ be an SPD (symmetric positive-definite) real $n\times n$ matrix, and let $B=LL^T$ also be SPD. Let $(,)_B$ be the inner product given by $(x,y)_B=x \cdot By=y^T Bx$. Then $(B^{-1}Ax,y)_B=(x,B^{-1}Ay)_B$ for all $x,y$. Show that $B^{-1}A$ is SPD with respect to $(,)_B$.
I don't understand what SPD with respect to an inner product means. What does it mean?
|
It means that in the definition of a positive definite matrix you replace the standard Euclidean scalar product by the inner product generated by another matrix.
A matrix $C$ is positive definite with respect to the inner product $(,)_B$ defined by a positive definite matrix $B$ iff
$$\forall v\ne 0\, (Cv,v)_B = (BCv,v)>0$$
or, in our case
$$\forall v\ne 0\, (B^{-1}Av,v)_B = (Av,v)>0,$$ which is given.
|
Prove $n\mid \phi(2^n-1)$ If $2^p-1$ is a prime, (thus $p$ is a prime, too) then $p\mid 2^p-2=\phi(2^p-1).$
But I find that $n\mid \phi(2^n-1)$ always holds, no matter what $n$ is. For example, $4\mid \phi(2^4-1)=8.$
If we denote $a_n=\dfrac{\phi(2^n-1)}{n}$, then $a_n$ is A011260, but how can we prove it is always an integer?
Thanks in advance!
|
I will use Lifting the Exponent Lemma(LTE).
Let $v_p(n)$ denote the highest exponent of $p$ in $n$.
Take some odd prime divisor of $n$, and call it $p$.
Let $j$ be the order of $2$ modulo $p$.
So, $v_p(2^n-1)=v_p(2^j-1)+v_p(n/j)>v_p(n)$ as $j\le p-1$.
All the rest is easy. Indeed, let's write $n=2^jm$ where $m$ is odd.
Then $\varphi\left(2^{2^jm}-1\right)=\varphi(2^m-1)\varphi(2^m+1)\varphi(2^{2m}+1)\cdots\varphi\left(2^{2^{j-1}m}+1\right)$. At least $j$ of the factors on the right side are even, which supplies the required power of $2$.
|
Closed subset of $\;C([0,1])$
$$\text{The set}\; A=\left\{x: \forall t\in[0,1] |x(t)|\leq \frac{t^2}{2}+1\right\}\;\;\text{is closed in}\;\, C\left([0,1]\right).$$
My proof:
Let $\epsilon >0$ and let $(x_n)\subset A$, $x_n\rightarrow x_0$. Then there is some $N$ such that for $n\geq N$,
$\sup_{[0,1]}|x_n(t)-x_0(t)|\leq \epsilon$, and we have
$|x_0(t)|\leq|x_0(t)-x_N(t)|+|x_N(t)|\leq\epsilon+\frac{t^2}{2}+1$
Since $\epsilon>0$ was arbitrary, $|x_0(t)|\leq \frac{t^2}{2}+1$ for all $t\in[0,1]$
Is the proof correct?
Thank you.
|
Yes, it is correct. An "alternative" proof would consist in writing $A$ as the intersection of the closed sets $F_t:=\{x:|x(t)|\leqslant \frac{t^2}2+1\}$. It also works if we replace $\frac{t^2}2+1$ by any continuous function.
|
Efficient way to compute $\sum_{i=1}^n \varphi(i) $ Given some upper bound $n$, is there an efficient way to calculate the following:
$$\sum_{i=1}^n \varphi(i) $$
I am aware that:
$$\sum_{i=1}^n \varphi(i) = \frac 12 \left( 1+\sum_{i=1}^n \mu(i) \left \lfloor \frac ni \right\rfloor ^2 \right) $$
Where:
$\varphi(x) $ is Euler's Totient
$\mu(x) $ is the Möbius function
I'm wondering if there is a way to reduce the problem to simpler computations, because my upper bound will be very large, i.e. $n \approx 10^{11} $.
Neither $\varphi(x) $, nor $\mu(x) $, are efficient to compute for a large bound of $n$
Naive algorithms will take an unacceptably long time to compute (days) or I would need would need a prohibitively expensive amount of RAM to store look-up tables.
|
If you can efficiently compute $\sum_{i=1}^n \varphi(i)$, then you can efficiently compute two consecutive values, so you can efficiently compute $\varphi(n)$. Since you already know that $\varphi(n)$ isn't efficiently computable, it follows that $\sum_{i=1}^n \varphi(i)$ isn't either.
|
How many of these four digit numbers are odd/even? For the following question:
How many four-digit numbers can you form with the digits $1,2,3,4,5,6$ and $7$ if no digit is repeated?
So, I did $P(7,4) = 840$, which is correct, but then the question asks how many of those numbers are odd and how many of them are even. The answer for odd is $480$ and even is $360$, but I have no clue as to how they arrived at that answer. Can someone please explain the process?
Thanks!
|
We first count the number of ways to produce an even number. The last digit can be any of $2$, $4$, or $6$. So the last digit can be chosen in $3$ ways.
For each such choice, the first digit can be chosen in $6$ ways. So there are $(3)(6)$ ways to choose the last digit, and then the first.
For each of these $(3)(6)$ ways, there are $5$ ways to choose the second digit. So there are $(3)(6)(5)$ ways to choose the last, then the first, then the second.
Finally, for each of these $(3)(6)(5)$ ways, there are $4$ ways to choose the third digit, for a total of $(3)(6)(5)(4)$.
Similar reasoning shows that there are $(4)(6)(5)(4)$ odd numbers. Or else we can subtract the number of evens from $840$ to get the number of odds.
Another way: (that I like less). There are $3$ ways to choose the last digit. Once we have chosen this, there are $6$ digits left. We must choose a $3$-digit number, with all digits distinct and chosen from these $6$, to put in front of the chosen last digit. This can be done in $P(6,3)$ ways, for a total of $(3)P(6,3)$.
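A brute-force tally in R confirms the counts (base R only; the grid/filter approach is my own sketch):
> digits <- 1:7
> g <- expand.grid(d1 = digits, d2 = digits, d3 = digits, d4 = digits)
> g <- g[apply(g, 1, function(r) length(unique(r)) == 4), ]  # keep rows with no repeated digit
> nrow(g)  # total four-digit numbers
[1] 840
> sum(g$d4 %% 2 == 1)  # odd
[1] 480
> sum(g$d4 %% 2 == 0)  # even
[1] 360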
|
Change of variables in $k$-algebras Suppose $k$ is an algebraically closed field, and let $I$ be a proper ideal of $k[x_1, \dots, x_n]$. Does there exist an ideal $J \subseteq (x_1, \dots, x_n)$ such that $k[x_1, \dots, x_n]/I \cong k[x_1, \dots, x_n]/J$ as $k$-algebras?
|
This is very clear from the corresponding geometric statement: every non-empty algebraic set $\subseteq \mathbb{A}^n$ can be moved to one that contains the origin. Indeed, a translation taking some point of the set to the origin suffices.
|
Evaluating $\lim_{x\to0}\frac{x+\sin(x)}{x^2-\sin(x)}$ I did the math, and my calculations were:
$$\lim_{x\to0}\frac{x+\sin(x)}{x^2-\sin(x)}= \lim_{x\to0}\frac{x}{x^2-\sin(x)}+\frac{\sin(x)}{x^2-\sin(x)}$$ But I cannot get further from here. I would like to do it without using derivatives or L'Hôpital's rule.
|
$$\lim_{x\to0}\;\frac{\left(x+\sin(x)\right)}{\left(x^2-\sin(x)\right)}\cdot \frac{1/x}{1/x} \quad =\quad \lim_{x\to 0}\;\frac{1+\frac{\sin(x)}x}{x-\frac{\sin(x)}x}$$
Now we evaluate, using the fact that
$$\lim_{x \to 0} \frac {\sin(x)}x = 1$$
we can see that:
$$\lim_{x\to 0}\;\frac{1+\frac{\sin(x)}x}{x-\frac{\sin(x)}x} = \frac{1 + 1}{0 - 1} = -2$$
|
Why is $\frac{1}{\frac{1}{X}}=X$? Can someone help me understand in basic terms why $$\frac{1}{\frac{1}{X}} = X$$
And my book says that "to simplify the reciprocal of a fraction, invert the fraction"... I don't get this, because isn't the reciprocal by definition the inverse of the fraction?
|
Well, I think this is a matter of what is multiplication and what is division.
First, we denote that
$$\frac{1}{x}=y$$
which means
$$xy=1\qquad(\mbox{assuming }x\ne0\mbox{, since otherwise }\tfrac1x\mbox{ is undefined})$$
Now,
$$\frac{1}{\frac{1}{x}}=\frac{1}{y}$$ by using the first equation. Here,
by checking the second equation ($xy=1$), it is obvious that
$$\frac{1}{y}=x$$
thus
$$\frac{1}{\frac{1}{x}}=x$$
Q.E.D.
|
Neighborhoods in the product topology. In the last two lines of the linked thread, it is said that $N \times M \subseteq A\times B$ is a neighborhood of $0$ if and only if $N, M$ are neighborhoods of $0$. Here $A, B$ are topological abelian groups. How to prove this result? I searched on the Internet, but was not able to find a proof.
|
If $A,B$ are topological spaces, then we define the product topology by means of the topologies of $A$ and $B$ in such a way that $U\times V$ is open whenever $U\subseteq A$ and $V\subseteq B$ are open. More precisely, we take the smallest topology on $A\times B$ such that these $U\times V$ are all open. This is achieved by declaring open precisely the sets of the form
$$ \bigcup_{i\in I}U_i\times V_i$$
with $I$ an arbitrary index set and $U_i\subseteq A$, $V_i\subseteq B$ open.
If $C\subseteq A$, $D\subseteq B$ is such that $C\times D$ is open and nonempty(!), then we can conclude that $C,D$ are open. Why?
Assume
$$ C\times D = \bigcup_{i\in I}U_i\times V_i$$
as above.
Since $C\times D$ is nonempty, let $(c,d)\in C\times D$. From the way $\{c\}\times D\subseteq C\times D$ is covered by the $U_i\times V_i$, we observe that
$$ D=\bigcup_{i\in I\atop c\in U_i}V_i $$
hence $D$ is open. Of course $d\in D$ so that $D$ is an open neighbourhood of $d$.
Similarly, $C$ is an open neighbourhood of $c$.
|
How to convert a permutation group into linear transformation matrices? Is there any example of applying an isomorphism to a permutation group?
And how does one convert a linear transformation matrix to a permutation group, and convert back to a linear transformation matrix?
|
It seems that for each natural number $n$ there is an isomorphic embedding $i$ of the permutation group $S_n$ into the group of all non-degenerate matrices of order $n$ over $\mathbb R$, defined as $i(\sigma)=A_\sigma=\|a_{ij}\|$ for each $\sigma\in S_n$, where $a_{ij}=1$ provided $\sigma(i)=j$, and $a_{ij}=0$ in the opposite case.
On the other hand, if $G$ is any group (and, in particular, a matrix group of linear transformations) then any element $g\in G$ induces a permutation $j(g)$ of the set $G$ such that $j(g)h=gh$ for each $h\in G$. Then the map $j:G\to S(G)$ is an isomorphic embedding of the group $G$ into the group $S(G)$ of all permutations of the set $G$. You can read more details about such embeddings here.
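A small R sketch of the first embedding (the helper name perm_matrix is my own):
> perm_matrix <- function(sigma) {
+   A <- matrix(0, length(sigma), length(sigma))
+   A[cbind(seq_along(sigma), sigma)] <- 1  # set a[i, sigma(i)] = 1
+   A
+ }
> A <- perm_matrix(c(2, 3, 1))  # the 3-cycle 1 -> 2 -> 3 -> 1
> A %*% c(10, 20, 30)  # acts on vectors by permuting coordinates: gives (20, 30, 10)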
|
$\mathbb Q$-basis of $\mathbb Q(\sqrt[3] 7, \sqrt[5] 3)$. Can someone explain how I can find such a basis? I computed that the degree $[\mathbb Q(\sqrt[3] 7, \sqrt[5] 3):\mathbb Q] = 15$. Does this help?
|
Try first to find the degree of the extension over $\mathbb Q$. You know that $\mathbb Q(\sqrt[3]{7})$ and $\mathbb Q(\sqrt[5]{3})$ are subfields with minimal polynomials $x^3 - 7$ and $x^5-3$ which are both Eisenstein.
Therefore those subfields have degree $3$ and $5$ respectively and thus $3$ and $5$ divide $[\mathbb Q(\sqrt[3]7,\sqrt[5]3) : \mathbb Q]$, which means $15$ divides it. But you know that the set $\{ \sqrt[3]7^i \sqrt[5]3^j \, | \, 0 \le i \le 2, 0 \le j \le 4 \}$ spans $\mathbb Q(\sqrt[3]7, \sqrt[5]3)$ as a $\mathbb Q$ vector space. I am letting you fill in the blanks.
Hope that helps,
|
If $x+{1\over x} = r $ then what is $x^3+{1\over x^3}$? If $$x+{1\over x} = r $$ then what is $$x^3+{1\over x^3}$$
Options:
$(a) 3,$
$(b) 3r,$
$(c)r,$
$(d) 0$
|
$\displaystyle r^3=\left(x+\frac{1}{x}\right)^3=x^3+\frac{1}{x^3}+3(x)\frac{1}{x}\left(x+\frac{1}{x}\right)=x^3+\frac{1}{x^3}+3r$
$\displaystyle \Rightarrow r^3-3r=x^3+\frac{1}{x^3}$
Your options are incorrect. For a quick counterexample, take $x=1/2$: then $r=\frac{5}{2}$ and $x^3+\frac{1}{x^3}=\frac{65}{8}$, but none of the options give $65/8$.
|
eigenvalue and independence Let $B$ be a $5\times 5$ real matrix and assume:
*
*$B$ has eigenvalues 2 and 3 with corresponding eigenvectors $p_1$ and $p_3$, respectively.
*$B$ has generalized eigenvectors $p_2,p_4$ and $p_5$ satisfying
$Bp_2=p_1+2p_2,Bp_4=p_3+3p_4,Bp_5=p_4+3p_5$.
Prove that $\{p_1,p_2,p_3,p_4,p_5\}$ is linearly independent set.
|
One can do a direct proof. Here is the beginning:
Suppose $0=\lambda_1p_1+\dots+\lambda_5p_5$. Then
$$\begin{align*}
0&=(B-3I)^3(\lambda_1p_1+\dots+\lambda_5p_5)\\
&=(B-3I)^3(\lambda_1p_1+\lambda_2p_2)\\
&=(B-2I-I)^3(\lambda_1p_1+\lambda_2p_2)\text{ now one can apply binomial theorem as $B-2I$ and $-I$ commute}\\
&= (B-2I)^3(\lambda_1p_1+\lambda_2p_2)+3(B-2I)^2(-I)(\lambda_1p_1+\lambda_2p_2)+3(B-2I)(-I)^2(\lambda_1p_1+\lambda_2p_2)+(-I)^3(\lambda_1p_1+\lambda_2p_2)\\
&=(3\lambda_2-\lambda_1)p_1-\lambda_2p_2.
\end{align*}$$
Now a similar application of $(B-2I)$ yields $\lambda_1=0$ and $\lambda_2=0$. Again similar applications of $(B-3I)^k$ for appropriate $k$ yield the vanishing of the other coefficients. I'll leave that to the reader.
|
Fixed points for $k_{t+1}=\sqrt{k_t}-\frac{k_t}{2}$ For the difference equation
$k_{t+1}=\sqrt{k_t}-\frac{k_t}{2}$
one has to find all "fixed points" and determine whether they are locally or globally asymptotically stable.
Now I'm not quite sure what "fixed point" means in this context. Is it the same as "equilibrium point" (i.e., setting $\dot{k}=0$ and calculating $k_{t+1}=k+\dot{k}=k+0$ from there)? Or something different?
I feel confident in solving such types of DE, just not sure what "fixed point" is supposed to mean here. Thanks for providing some directions!
|
(I deleted my first answer after rereading your question; I thought perhaps I gave more info than you wanted.)
Yes, you can read that as "equilibrium points". To find them, just let $k_t = \sqrt{k_t} - \frac{1}{2}k_t$. Solving for $k_t$ will give you a seed $k_0$ such that $k_0 = k_1$. As you wrote, letting $k_0 = 0$ is one such value. However, there's another $k_0$ that will behave similarly. Furthermore, it's obvious what happens if $k_0 < 0$. Does the same thing happen for any other $k_0$? Now that you've found all the totally uninteresting seeds and the really weird ones, what about the others?
|
Testing convergence of $\sum_{n=0}^{\infty }(-1)^n\ \frac{4^{n}(n!)^{2}}{(2n)!}$ Can anyone help me to prove whether this series is convergent or divergent:
$$\sum_{n=0}^{\infty }(-1)^n\ \frac{4^{n}(n!)^{2}}{(2n)!}$$
I tried using the ratio test, but the limit of the ratio in this case is equal to 1 which is inconclusive in this case. Any hints please!
|
By Stirling's approximation $n!\sim\sqrt{2\pi n}(n/e)^n$, so
$$\frac{4^{n}(n!)^{2}}{(2n)!}\sim \frac{2\pi n\, 4^{n}\, (n/e)^{2n}}{\sqrt{4\pi n}\,(2n/e)^{2n}} =\sqrt{\pi n}.$$ The terms do not tend to $0$, so the series diverges.
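A quick numerical check of the asymptotic in R:
> n <- 1:10
> (4^n * factorial(n)^2 / factorial(2*n)) / sqrt(pi*n)  # ratios drift toward 1 (about 1.13 at n = 1, 1.01 at n = 10)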
|
A line through the centroid G of $\triangle ABC$ intersects the sides at points X, Y, Z. I am looking at the following problem from the book Geometry Revisited, by Coxeter and Greitzer. Chapter 2, Section 1, problem 8: A line through the centroid G of $\triangle ABC$ intersects the sides of the triangle at points $X, Y, Z$. Using the concept of directed line segments, prove that $1/GX + 1/GY + 1/GZ = 0$.
I am perplexed by the statement and proof given in the book for this problem, whose statement I have reproduced verbatim (proof from the book is below, after this paragraph), as a single line through the centroid can only intersect all three sides of a triangle at $X, Y, Z$ if two of these points are coincident at a vertex of the triangle, and the third is the midpoint of the opposite side: In other words, if the line is a median. In this case I see that it is true, but not very interesting, and the proof doesn't have to be very complex. The median is divided in a 2:1 ratio, so twice the reciprocal of $2/3$ plus the reciprocal, oppositely signed, of $1/3$, gives $3 + -3 = 0$.
But here's the proof given in the book:
Trisect $BC$ at U and V, so that BU = UV = VC. Since GU is parallel to AB, and GV to AC,
$$GX(1/GX + 1/GY + 1/GZ) = 1 + VX/VC + UX/UB = 1 + (VX-UX)/VC =$$
$$1 + VU/UV = 0$$.
I must be missing something. (If this is a typo in this great book, it's the first one I've found). In the unlikely event that the problem is misstated, I have been unable to figure out what was meant. Please help me!
[Note]: Here's a diagram with an elaborated version of the book's solution above, that I was able to do after realizing my mistake thanks to the comment from Andres.
|
a single line through the centroid can only intersect all three sides of a triangle at $X,Y,Z$ if two of these points are coincident at a vertex of the triangle, and the third is the midpoint of the opposite side
The sides of the triangle are lines of infinite length in this context, not just the line segments terminated by the corners of the triangle. OK, Andreas already pointed that out in a comment, but since originally this was the core of your question, I'll leave this here for reference.
I recently posted an answer to this question in another post. Since @Michael Greinecker♦ asked me to duplicate the answer here, that's what I'm doing.
I'd work on this in barycentric homogenous coordinates. This means that your corners correspond to the unit vectors of $\mathbb R^3$, that scalar multiples of a vector describe the same point. You can check for incidence between points and lines using the scalar product, and you can connect points and intersect lines using the cross product. This solution is influenced heavily by my background in projective geometry. In this world you have
$$
A = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \qquad
B = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \qquad
C = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \\
G = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \qquad
l = \begin{pmatrix} 1 \\ a \\ -1-a \end{pmatrix} \qquad
\left<G,l\right> = 0 \\
X = \begin{pmatrix} a \\ -1 \\ 0 \end{pmatrix} \qquad
Y = \begin{pmatrix} 1+a \\ 0 \\ 1 \end{pmatrix} \qquad
Z = \begin{pmatrix} 0 \\ 1+a \\ a \end{pmatrix}
$$
The coordinates of $l$ were chosen such that the line $l$ already passes through $G$, as seen by the scalar product. The single parameter $a$ corresponds roughly to the slope of the line. The special case where the line passes through $A$ isn't handled, since in that case the first coordinate of $l$ would have to be $0$ (or $a$ would have to be $\infty$). But simply renaming the corners of your triangle would cover that case as well.
In order to obtain lengths, I'd fix a projective scale on this line $l$. For this you need the point at infinity on $l$, which I'll call $F$. You can obtain it by intersecting $l$ with the line at infinity.
$$ F = l\times\begin{pmatrix}1\\1\\1\end{pmatrix}
= \begin{pmatrix} 2a+1 \\ -2-a \\ 1-a \end{pmatrix} $$
To complete your projective scale, you also need to fix an origin, i.e. a point with coordinate “zero”, and a unit length, i.e. a point with coordinate “one”. Since all distances in your formula are measured from $G$, it makes sense to use that as the zero point. And since you could multiply all your lengths by a common scale factor without affecting the formula you stated, the choice of scale is irrelevant. Therefore we might as well choose $X$ as one. Note that this choice also fixes the orientation of length measurements along your line: positive is from $G$ in direction of $X$. We can then compute the two remaining coordinates, those of $Y$ and $Z$, using the cross ratio. In the following formula, square brackets denote determinants. The cross ratio of four collinear points in the plane can be computed as seen from some fifth point not on that line. I'll use $A$ for this purpose, both because it has simple coordinates and because, as stated above, the case of $l$ passing through $A$ has been omitted by the choice of coordinates for $l$.
$$
GY = \operatorname{cr}(F,G;X,Y)_A =
\frac{[AFX][AGY]}{[AFY][AGX]} =
\frac{\begin{vmatrix}
1 & 2a+1 & a \\
0 & -2-a & -1 \\
0 & 1-a & 0
\end{vmatrix}\cdot\begin{vmatrix}
1 & 1 & 1+a \\
0 & 1 & 0 \\
0 & 1 & 1
\end{vmatrix}}{\begin{vmatrix}
1 & 2a+1 & 1+a \\
0 & -2-a & 0 \\
0 & 1-a & 1
\end{vmatrix}\cdot\begin{vmatrix}
1 & 1 & a \\
0 & 1 & -1 \\
0 & 1 & 0
\end{vmatrix}}
= \frac{a-1}{a+2}
\\
GZ = \operatorname{cr}(F,G;X,Z)_A =
\frac{[AFX][AGZ]}{[AFZ][AGX]} =
\frac{\begin{vmatrix}
1 & 2a+1 & a \\
0 & -2-a & -1 \\
0 & 1-a & 0
\end{vmatrix}\cdot\begin{vmatrix}
1 & 1 & 0 \\
0 & 1 & 1+a \\
0 & 1 & a
\end{vmatrix}}{\begin{vmatrix}
1 & 2a+1 & 0 \\
0 & -2-a & 1+a \\
0 & 1-a & a
\end{vmatrix}\cdot\begin{vmatrix}
1 & 1 & a \\
0 & 1 & -1 \\
0 & 1 & 0
\end{vmatrix}}
= \frac{1-a}{1+2a}
$$
The third length, $GX$, is $1$ by the definition of the projective scale. So now you have the three lengths and plug them into your formula.
$$
\frac1{GX} + \frac1{GY} + \frac1{GZ} =
\frac{a-1}{a-1} + \frac{a+2}{a-1} - \frac{2a+1}{a-1} = 0
$$
As you will notice, the case of $a=1$ is problematic in this setup, since it would entail a division by zero. This corresponds to the situation where $l$ is parallel to $AB$. In that case, the point $X$ which we used as the “one” of the scale would coincide with the point $F$ at infinity, thus breaking the scale. A renaming of triangle corners will again take care of this special case. The cases $a=-2$ and $a=-\tfrac12$ correspond to the two other cases where $l$ is parallel to one of the edges. You will notice that the lengths would again entail divisions by zero, since the points of intersection are infinitely far away. But the reciprocal lengths are just fine and will be zero in those cases.
|
How to prove that the set $\{\sin(x),\sin(2x),...,\sin(mx)\}$ is linearly independent? Could you help me to show that the functions $\sin(x),\sin(2x),...,\sin(mx)\in V$ are linearly independent, where $V$ is the space of real functions?
Thanks.
|
If $\{ \sin x, \sin 2x, \ldots, \sin mx\}$ is linearly dependent, then for some $a_1,\ldots,a_m \in \mathbb{R}$, not all zero, we have:
$$\sum_{k=1}^m a_k \sin kx = 0, \text{ for all } x \in \mathbb{R}$$
This in turn implies for every $z \in S^1 = \{ \omega \in \mathbb{C} : |\omega| = 1\}$,
if we write $z$ as $e^{ix}$, we have:
$$0 = \sum_{k=1}^m a_k \sin kx = \sum_{k=1}^m a_k \frac{z^k - z^{-k}}{2i} = \frac{z^{-m}}{2i}\sum_{k=1}^m a_k\left(z^{m+k}-z^{m-k}\right)$$
This contradicts the fact that the rightmost side of the above expression is $\frac{z^{-m}}{2i}$ multiplied by a non-zero polynomial in $z$, which has at most finitely many roots on $S^1$.
|
Why does the Tower of Hanoi problem take $2^n - 1$ transfers to solve? According to http://en.wikipedia.org/wiki/Tower_of_Hanoi, the Tower of Hanoi requires $2^n-1$ transfers, where $n$ is the number of disks in the original tower, to solve based on recurrence relations.
Why is that? Intuitively, I almost feel that it takes a linear number of transfers since we must transfer all but the bottommost disc to the buffer before transferring the bottommost disc to the target tower.
|
Your intuition is right. All but the bottom disk must be moved TWICE, so you should expect (one more than) twice the number of transfers for one fewer disk. We have $$2(2^n-1)+1=2^{n+1}-1$$
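Spelled out as the recurrence the Wikipedia page refers to: with $T(n)$ the number of transfers for $n$ disks,
$$T(n)=2T(n-1)+1,\qquad T(1)=1,$$
and adding $1$ to both sides gives $T(n)+1=2\left(T(n-1)+1\right)=\dots=2^{n-1}\left(T(1)+1\right)=2^{n}$, so $T(n)=2^{n}-1$.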
|
Does $\sum_{k=1}^n (x-1+k)^n=(x+n)^n$ have integer solutions when $n\ge 4$? From a post on SE, one notes that $3^2+4^2=5^2$ and $3^3+4^3+5^3=6^3$, which is interesting to me. So I began to explore further; the general equation is
$\sum_{k=1}^n (x-1+k)^n=(x+n)^n$.
From $n \ge 4$ to $n=41$, there is no integer solution for $x$.
For $n>41$ I can't get a result, as WolframAlpha doesn't work.
I doubt there is any integer solution for $n \ge 4$; when $n$ gets bigger, $x$ gets close to $\dfrac{n}{2}$.
Can someone give an answer? Thanks!
|
Your problem has been studied, and it is conjectured that only $3,4,5$ for squares and $3,4,5,6$ for cubes are such that all the numbers are consecutive, and the $k$th power of the last is the sum of the $k$th powers of the others ($k>1$).
I've spent a lot of time on this question and, unsurprisingly, did not get anything final on a proof, since apparently nobody else has succeeded in proving it.
The website calls it "Cyprian's Last Theorem", arguing that it seems so likely true but nobody has shown it yet, just like for Fermat for so many years.
The page reference I found:
http://www.nugae.com/mathematics/cyprian.htm
There may be other links there to further get ideas...
|
Quadratic form $\mathbb{R}^n$ homogeneous polynomial degree $2$ Could you help me with the following problem?
My definition of a quadratic form is: it is a mapping $h: \ V \rightarrow \mathbb{R}$ such that there exists a bilinear form $\varphi: \ V \times V \rightarrow \mathbb{R}$ such that $h(v)=\varphi(v,v)$.
Could you tell me how, based on that definition, I can prove that a quadratic form on $V=\mathbb{R}^n$ is a homogeneous polynomial of degree $2$?
|
Choose a basis for $V$, call it $\{v_1,\dots,v_n\}$. Then letting $v = \sum_{i=1}^n a_i v_i$,
$$
h(v) = \varphi(v,v) = \sum_{i=1}^n \sum_{j=1}^n a_i a_j \varphi(v_i,v_j).
$$
Therefore $h(v)$ is a polynomial whose terms are $a_i a_j$ (i.e. degree $2$ in the coefficients of $v$) and the coefficient in front of $a_i a_j$ is $\varphi(v_i,v_j)$. This means $h(v)$ is a homogeneous polynomial of degree $2$ in the variables $\{a_1,\dots,a_n\}$.
Hope that helps,
|
Convert two points to line eq (Ax + By +C = 0) Say one has two points in the x,y plane. How would one convert those two points to a line? Of course I know you could use the slope-point formula & derive the line as following:
$$y - y_0 = \frac{y_1-y_0}{x_1-x_0}(x-x_0)$$
However this manner obviously doesn't hold when $x_1-x_0 = 0$ (vertical line). The more generic approach should, however, be capable of defining every line (a vertical line would simply mean $B = 0$);
$$Ax+By +C = 0$$
But how to deduce A, B, C given two points?
|
Let $P_1:(x_1,y_1)$ and $P_2:(x_2,y_2)$. Then a point $P:(x,y)$ lies on the line connecting $P_1$ and $P_2$ if and only if the area of the parallellogram with sides $P_1P_2$ and $P_1P$ is zero. This can be expressed using the determinant as
$$
\begin{vmatrix}
x_2-x_1 & x-x_1 \\
y_2-y_1 & y-y_1
\end{vmatrix} = 0 \Longleftrightarrow
(y_1-y_2)x+(x_2-x_1)y+x_1y_2-x_2y_1=0,
$$
so you get (up to scale) $A=y_1-y_2$, $B=x_2-x_1$ and $C=x_1y_2-x_2y_1$.
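In code this is a one-liner; a small sketch (names are mine) that also handles the vertical-line case:

    def line_through(p1, p2):
        # (A, B, C) with A*x + B*y + C = 0, defined up to a common scale
        (x1, y1), (x2, y2) = p1, p2
        return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

    print(line_through((1, 2), (1, 5)))  # (-3, 0, 3): the vertical line x = 1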
|
Question in do Carmo's book Riemannian geometry section 7 I have a question. Please help me.
Assume that $M$ is complete and noncompact, and let $p$ belong to $M$.
Show that $M$ contains a ray starting from $p$.
$M$ is a Riemannian manifold; it is geodesically complete and complete as a metric space (every Cauchy sequence converges). A ray is a geodesic whose domain is $[0,\infty)$ and which minimizes the distance between its starting point and every other point of the curve.
|
Otherwise, suppose every geodesic emanating from $p$ fails to be a minimizing segment after some distance $s$. Since the unit sphere in the tangent space, which parameterizes these geodesics, is compact, $s$ has a maximum $s_{max}$. This means that the farthest distance from $p$, among all points of the manifold, is $s_{max}$. So the diameter of the manifold is bounded by $2s_{max}$, by the triangle inequality. Hence the manifold is bounded and complete, and by the Hopf–Rinow theorem it is then compact.
|
How to prove that the following function has exactly two zeroes in a particular domain? I am practicing exam questions for my own exam in complex analysis. This was one I couldn't figure out.
Let $U = \mathbb{C} \setminus \{ x \in \mathbb{R} : x \leq 0 \} $ and let $\log : U \to \mathbb{C} $ be the usual holomorphic branch of the logarithm on $U$ with $\log (1) = 0$. Consider the function given by $f(z) = \log(z) - 4(z-2)^2$.
Q1: Show that $f$ has exactly two zeroes (including multiplicity) in the open disk $D(2,1) = \{ z \in \mathbb{C} : |z-2| < 1 \} $.
Q2: Show that $f$ has exactly two different zeroes in $D(2,1)$.
I strongly suspect we should use Rouché's Theorem for this. I tried to apply it by setting $h(z) = \log(z)$ and $g(z) = -4 (z-2)^2$. If $|z-2| = 1$, then $z = 2 + e^{it}$ with $t \in [0,2 \pi ) $. Then we have $|h(z)| = |\log(z)| \leq \log|z| = \log|2+e^{it}| \leq \log(|2| + |e^{it}|) = \log|2|\cdot \log|e^{it}| = \log(2) \cdot \log(1) = 0$.
Furthermore, we have $|g(z)| = |-4 (z-2)^2| = |-4| |z-2|^2 = 4$. So $|h(z)| < |g(z)|$ when $|z-2| = 1$. According to Rouché's Theorem, $f$ and $g$ have the same number of zeroes within $\{ |z-2| < 1 \}$, so we have to count the zeros of $g$ to find the number of zeroes of $f$. However, I can only find one zero of $g$, which is at $z=2$. Can you tell what's going wrong with my approach?
|
Your estimate $|h(z)|\le 0$ (when $|z-2|=1$) cannot possibly be true: a nonconstant holomorphic function cannot be equal to zero on a circle. I marked the incorrect steps in red:
$$|\log(z)| \color{red}{\leq} \log|z| = \log|2+e^{it}| \leq \log(|2| + |e^{it}|) \color{red}{=} \log|2|\cdot \log|e^{it}| = \log(2) \cdot \log(1) = 0$$
A correct estimate could look like
$$|\log(z)| \le |\operatorname{Re} \log z| + |\operatorname{Im} \log z|
= \log |z| + |\arg z| \le \log 3+ \pi/2 <3$$
which is not sharp but suffices for the application of Rouché's theorem, which settles Q1.
The question Q2 is less standard. One way to answer it is to observe that the real function $f(x)=\log x-4(x-2)^2$ has two distinct zeros on the interval $(1,3)$, because $f(1)<0$, $f(2)>0$, and $f(3)=\log 3-4<0$. Since we already know there are two zeros in $D(2,1)$ with multiplicity, there are no others.
|
Generating all coprime pairs within limits Say I want to generate all coprime pairs ($a,b$) where no $a$ exceeds $A$ and no $b$ exceeds $B$.
Is there an efficient way to do this?
|
If $A$ and $B$ are comparable in value, the algorithm for generating the Farey sequence might suit you well; it generates all pairs of coprime integers $(a,b)$ with $1\leq a<b\leq N$ with constant memory requirements and $O(1)$ operations per output pair. Running it with $N=\max(A,B)$ and filtering out the pairs in which a component exceeds its bound produces all the coprime pairs you seek.
If the values of $A$ and $B$ differ too much, the time wasted in filtering the irrelevant pairs would be too high and a different approach (such as that suggested by Thomas Andrews) might be necessary.
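A minimal sketch of that Farey-sequence walk (the function name and the filtering are my own); it emits the coprime pairs $(a,b)$ with $a<b$, and swapping components yields the rest:

    def coprime_pairs(A, B):
        # walk the Farey sequence of order N = max(A, B); consecutive
        # fractions a/b < c/d satisfy b*c - a*d = 1, so gcd(a, b) = 1
        N = max(A, B)
        a, b, c, d = 0, 1, 1, N
        while c <= N:
            k = (N + b) // d
            a, b, c, d = c, d, k * c - a, k * d - b
            if a < b and a <= A and b <= B:  # drop the final 1/1 and filter
                yield (a, b)

    print(sorted(coprime_pairs(3, 3)))  # [(1, 2), (1, 3), (2, 3)]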
|
$l^2$ is not compact
Prove that the space $l^2$ (of real series $a_n$ such that $\sum_{i=1}^{\infty}a_i^2$ converges) is not compact.
I want to use the open cover $\{\sum_{i=1}^{\infty}a_i^2<n\mid n\in\mathbb{Z}^+\}$ and show that it has no finite subcover. To do that, I must prove that for any $n$, the set $\{\sum_{i=1}^{\infty}a_i^2<n\}$ is an open set.
So let $\sum_{i=1}^{\infty}a_i^2<n$. Suppose it equals $n-\alpha$. I must find $\epsilon$ such that for any series $\{b_i\}\in l^2$ with $\sum_{i=1}^{\infty}(a_i-b_i)^2<\epsilon$, we have $\sum_{i=1}^{\infty}b_i^2<n$.
But $(a_i-b_i)^2$ has the term $-2a_ib_i$. How should I deal with that?
|
There are two easy ways to prove that $\ell_2$ is not compact. First, one can say that its dimension is infinite, hence the closed unit ball is not compact, hence the space itself is not compact.
Another way to see this is to find a bounded sequence which doesn't have a convergent subsequence; as a matter of fact, the sequence of basis vectors doesn't converge in norm (nor does any of its subsequences); this is easy to see by checking the Cauchy criterion.
|
Finding the definite integral $\int_0^1 \log x\,\mathrm dx$ $$\int_{0}^1 \log x \,\mathrm dx$$
How do I solve this? I am having problems with the limits $0$ to $1$, because $\log 0$ is undefined.
|
$$\int_0^1 \log x \,dx=\lim_{a\to 0^+}\int_a^1\log x \,dx=\lim_{a\to 0^+}\Big[x\log x-x\Big]_a^1=\lim_{a\to 0^+}(a-1-a\log a)=-1-\lim_{a\to 0^+}a\log a$$
Now $$\lim_{a\to 0^+}a\log a=\lim_{a\to 0^+}\frac{\log a}{1/a}$$
Using L'Hôpital's rule (which is applicable here), $$\lim_{a\to 0^+}\frac{\log a}{1/a}=\lim_{a\to 0^+}\frac{1/a}{-1/a^2}=\lim_{a\to 0^+}(-a)=0$$
Therefore, $$\int_0^1 \log x dx= -1-\lim_{a\to 0^+}a\log a =-1$$
|
Two students clean 5 rooms in 4 hours. How long do 40 students need for 40 rooms? A class decides to do a community involvement project by cleaning classrooms in a school. If 2 students can clean 5 classrooms in 4 hours, how long would it take for 40 students to clean 40 classrooms?
|
A student-hour is a unit of work. It represents 1 student working for an hour, or 60 students working for one minute, or 3600 students working for 1 second, or ...
You're told that cleaning 5 classrooms takes 2 students 4 hours, or $8$ student-hours. So one classroom takes $\frac{8}{5}$ or $1.6$ student-hours.
So the 40 classrooms will take $40 \times 1.6$ or $64$ student-hours.
The forty students will put out $64$ student-hours in $\frac{64}{40}$ or $1.6$ hours...
|
Archimedean Proof? I've been struggling with a concept concerning the Archimedean property proof, that is, showing by contradiction that for all $x$ in the reals, there exists $n$ in the naturals such that $n>x$.
Okay so we assume that the naturals is bounded above and show a contradiction.
If the naturals is bounded above, then it has a least upper bound (supremum) say $u$
Now consider $u-1$. Since $u=\sup(\mathbb N)$ , $u-1$ is an element of $\mathbb N$. (here is my first hiccup, not entirely sure why we can say $u-1$ is in $\mathbb N$)
This implies (again not confident with this implication) that there exists a $m$ in $\mathbb N$ such that $m>u-1$. A little bit of algebra leads to $m+1>u$.
$m+1$ is in $\mathbb N$ and $m+1>u=\sup(\mathbb N)$ thus we have a contradiction.
Can anyone help clear up these implications that I'm not really comfortable with? Thanks!
|
I don't think you need the fact that $u\in \mathbb N$ (that fact is not even true). As for the second difficulty, it follows from the supremum property: since $u-1$ is not an upper bound, there exists a natural number greater than it.
|
Solve the following quadratic inequalities by graphing the corresponding function Looking at these questions and I am not confident in my abilities to solve them.
Solve the following quadratic inequalities by graphing the corresponding function. Note: a) and b) are separate questions.
$$ a) y \le -2x^2+16x-24\\
b) y > \frac 13 (x-1)^2-3
$$
Help would be appreciated!
|
Step one:
Learn to draw graphs of the equalities.
Suppose we are given
$$
y \le -2x^2+16x-24.
$$
The matching equality is
$$
y = -2x^2+16x-24.
$$
We can factorise this, then graph it.
$$
\begin{align}
y &= -2(x^2-8x+12)\\
&= -2(x-6)(x-2)
\end{align}
$$
This means that the roots of the polynomial are at $x=6$ and $x=2$ respectively. The negative sign of the coefficient of the highest power tells us that the parabola is "upside down", compared to the simplest form of the parabola ($y=x^2$).
Now, we put two points on the cartesian plane at $(6,0)$ and $(2,0)$ respectively. For a parabola, we know that the maximum (or minimum) lies exactly half-way between the roots, i.e. when $x=4$. We can find the $y$ value of this maximum by substituting $x=4$ into the equation:
$$
\begin{align}
y &= -2x^2+16x-24\\
&= -2(4)^2+16(4)-24\\
&= 8
\end{align}
$$
So the turning point (a maximum in this case) lies at $(4,8)$.
We now have three points of the parabola, $(2,0)$, $(4,8)$, and $(6,0)$, which we can plot as follows:
Now we can join the dots to make a parabola.
Step two:
Solve inequalities using graphs.
To solve the original inequality, we simply check the two regions on either side of the parabola to see whether they make the inequality true or false.
We can choose any point in the region, provided it is not actually on the parabola. Suppose we check the point $(0,0)$, which lies above and to the left of the parabola. Substituting $x=0$, $y=0$ into the inequality, we get
$$
\begin{align}
y &\le -2x^2+16x-24\\
0 &\le -2(0)^2 +16(0) - 24\\
0 &\le -24\\
\end{align}
$$
Of course, this is FALSE, which means that the region above the parabola does not satisfy the inequality.
Testing the region below the parabola, we substitute any point in that region. For instance, $(4,1)$.
$$
\begin{align}
y &\le -2x^2+16x-24\\
1 &\le -2(4)^2 + 16(4) -24\\
1 &\le 8\\
\end{align}
$$
This is evidently TRUE.
This means that the inequality is true for any point in that region below the parabola. (i.e. The shaded region.)
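The test-point step is easy to mechanize; a tiny sketch:

    def satisfies(x, y):
        # does (x, y) satisfy y <= -2x^2 + 16x - 24 ?
        return y <= -2 * x**2 + 16 * x - 24

    print(satisfies(0, 0))  # False: (0, 0) lies outside the region
    print(satisfies(4, 1))  # True: (4, 1) lies in the shaded region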
We can describe this region using set theory as follows:
$$
\begin{align}
R = \{(x,y) \in \mathbb{R}^2 : y \le -2x^2 +16x -24\}
\end{align}
$$
However, the simplest way to describe the region without using the graph is to use the inequality given in the question:
$$
y \le -2x^2+16x-24.
$$
Because this inequality defines a region, we can't write it any more concisely than that.
|
prime notation clarification When I first learned calculus, I was taught that $'$ for derivatives was only a valid notation when used with function notation: $f'(x)$ or $g'(x)$, or when used with the coordinate variable $y$, as in $y'$.
But I have seen on a number of occasions, both here and in the classroom, where it will be used with an expression. E.g. $(x+\frac{1}{x})'$ to mean $\frac{d}{dx}(x+\frac{1}{x})$. It has always been my understanding that this notation is not considered valid because it doesn't indicate what the independent variable that the expression is being differentiated with respect to is. E.g. in $(ax+bx^2)'$, the variable could be $a$, $b$, or $x$. This problem also exists with $y'$ but I figured this was an exception because $y$ and $x$ usually represent the coordinate axes, so it can be assumed that the independent variable for $y$ is $x$ when taking $y'$.
So is this notation valid, just putting a $'$ at the end of an expression?
|
What you're seeing is a "shorthand" an instructor or such may use in the process of computing the derivative of a function with respect to $x$. Usually when you see something like $(ax + bx^2)'$, it's assumed from the context that we are taking the derivative of the expression, with respect to $x$. That is, "$(ax + bx^2)'$" is taken to mean "evaluate $\,\frac d{dx}(ax + bx^2)$", just as one assumes from context that $y'$ refers to the derivative, $f'(x)$, of $y = f(x)$.
I prefer to stick with $f'(x)...$ or $y'$, using $\frac d{dx}(\text{some function of x})$ when evaluating the derivative of a function with respect to $x$, particularly when trying to convey information to another person. (On scratch paper, or in my own work, I might get a little informal and slip into using a "prime" to abbreviate what I'm doing.) But I would prefer the more formal or "official" conventions/notations were used in "instructive contexts", to avoid confusion or possible ambiguity.
|
Is the cone locally compact Let $X$ denote the cone on the real line $\mathbb{R}$. Decide whether $X$ is locally
compact. [The cone on a space $Y$ is the quotient of $Y \times I$ obtained by
identifying $Y \times \{0\}$ to a point.]
I am having a hard time deciding whether there exists a compact neighborhood of the cone point (the image of $Y \times \{0\}$). Some help would be nice.
|
Here is a way of showing that no neighborhood of $r=\Bbb R\times\{0\}\in X$ is compact. The idea is to find in any neighborhood $V$ of $r$ a closed subspace homeomorphic to $\Bbb R$. Since the subspace is not compact, $V$ cannot be compact.
So let $V$ be a neighborhood of $r$ in $X$. Then $V$ contains the image of an open set $U$ around $\Bbb R\times\{0\}$. Since the interval $[n,n+1]$ for any $n\in\Bbb Z$ is compact, there is an $\epsilon_n>0$ such that $[n,n+1]\times[0,\epsilon_n]$ is contained in $U$. Let $b_n=\min\{\epsilon_n,\epsilon_{n-1}\}$. Define
$$
f(x) = (x-n)b_{n+1}+(n+1-x)b_n,\quad n\in\Bbb Z,\quad x\in[n,n+1]
$$
This map has a graph $\Gamma$ homeomorphic to $\Bbb R$ and contained in $U$. The quotient map $q:\Bbb R\times I\to X$ embeds $\Gamma$ as a closed subspace of $V$, so $q(\Gamma)$ would have to be compact if $V$ were compact.
|
If $G$ is a group, $H$ is a subgroup of $G$ and $g\in G$, is it possible that $gHg^{-1} \subset H$?
If $G$ is a group, $H$ is a subgroup of $G$ and $g\in G$, is it possible that $gHg^{-1} \subset H$ ?
This means $gHg^{-1}$ is a proper subgroup of $H$. We know that $H \cong gHg^{-1}$, so if $H$ is finite then we have a contradiction, since the isomorphism between the two subgroups implies that they have the same order, so $gHg^{-1}$ can't be a proper subgroup of $H$.
So, what if $H$ is infinite is there an example for such $G , H , g$ ?
Edit: I suppose that $H$ has a subgroup $N$ such that $N$ is a normal subgroup of $G$.
|
Let $\mathbb{F}_2 = \langle a,b \mid \ \rangle$ be the free group of rank two. It is known that the subgroup $F_{\infty}$ generated by $S= \{b^nab^{-n} \mid n \geq 0 \}$ is free over $S$. Then $bF_{\infty}b^{-1}$ is freely generated by $bSb^{-1}= \{b^n a b^{-n} \mid n \geq 1\}$, hence $bF_{\infty}b^{-1} \subsetneq F_{\infty}$.
(Otherwise, $a$ can be written over $bSb^{-1}$, which is impossible since $a \in F_{\infty}$ and $F_{\infty}$ is free over $S$.)
|
a codeword over $\operatorname{GF}(4)$ -> two codewords over $\operatorname{GF}(2)$ using MAGMA A codeword $X$ over $\operatorname{GF}(4)$ is given. How can I write it as $X= A+wB$ using MAGMA? where $A$ and $B$ are over $\operatorname{GF}(2)$ and $w^2 + w =1$.
Is there an easy way, or do I have to write some for loops and if statements?
|
Probably there is an easier way, but the following function should do the job:
function f4tof2(c)
    // c: a vector over GF(4) of length n
    n := NumberOfColumns(c);
    V := VectorSpace(GF(2),n);
    // GF(2)-coefficients of each entry: c[i] = ets[i][1] + w*ets[i][2]
    ets := [ElementToSequence(c[i]) : i in [1..n]];
    // first coordinates give A, second coordinates give B, so c = A + w*B
    return [V![ets[i][1] : i in [1..n]], V![ets[i][2] : i in [1..n]]];
end function;
|
Evaluating $\int_0^\infty \frac{e^{-kx}\sin x}x\,\mathrm dx$ How to evaluate the following integral?
$$\int_0^\infty \frac{e^{-kx}\sin x}x\,\mathrm dx$$
|
One more option:$$\begin{align}\int_0^\infty\int_0^\infty e^{-(k+y)x}\sin x\mathrm{d}x\mathrm{d}y&=\Im\int_0^\infty\int_0^\infty e^{-(k+y-i)x}\mathrm{d}x\mathrm{d}y\\&=\int_0^\infty\tfrac{1}{(k+y)^2+1}\mathrm{d}y\\&=[\arctan(k+y)]_0^\infty\\&=\tfrac{\pi}{2}-\arctan k.\end{align}$$
|
Let $f \colon \Bbb C \to \Bbb C$ be a complex valued function given by $f(z)=u(x,y)+iv(x,y).$ I am stuck on the following question :
MY ATTEMPT:
By the Cauchy–Riemann equations, we have $u_x=v_y,\ u_y=-v_x.$ Now $v(x,y)=3xy^2 \implies v_x=3y^2 \implies -u_y=3y^2 \implies u=-y^3+ \phi(x) $. Now I am not sure which way to go. Can someone give some explanation about which way to go in order to pick the correct option?
|
Hint: you used the second C.R. equation, arriving at $u(x,y)=-y^3+\phi(x)$. What happens if you apply the other C.R. equation, i.e. $u_x=v_y$, to your result?
|
Are there two $\pi$s? The mathematical constant $\pi$ occurs in the formula for the area of a circle, $A=\pi r^2$,
and in the formula for the circumference of a circle, $C= 2\pi r$. How does one prove that these constants are the same?
|
One way to see it is if you consider a circle with radius $r$ and another circle with radius $r+\Delta r$ (where $\Delta r\ll r$) around the same point, and consider the area between the two circles.
As with any shape, the area is proportional to the square of a typical length; the radius is such a typical length. That is, a circle of radius $r$ has the area $Cr^2$ with some constant $C$. Now the area in between the two circles has the area $\Delta A = C(r+\Delta r)^2-Cr^2\approx 2Cr\,\Delta r$. That relation gets exact as $\Delta r\to 0$.
On the other hand, the distance between the two circles is constant, and therefore for sufficiently small $\Delta r$ you can "unroll" this shape into a rectangle (again, the error you make when doing this vanishes in the limit $\Delta r\to 0$). That rectangle has as one side the circumference, $2\pi r$, and as the other side $\Delta r$. Since the area of a rectangle is the product of its side lengths, we get as area $\Delta A = 2\pi r\,\Delta r$.
Comparing the two equations, we get $2Cr\,\Delta r=2\pi r\,\Delta r$, that is, $C=\pi$.
|
Nitpicky Sylow Subgroup Question Would we call the trivial subgroup of a finite group $G$ a Sylow-$p$ subgroup if $p \nmid |G|$? Or do we only look at Sylow-$p$ subgroups as having size at least $p$ (knowing that a Sylow-$p$ subgroup is a subgroup of $G$ of order $p^k$, where $p^k$ is the largest power of $p$ dividing $|G|$)?
|
For what it is worth, I consider all primes $p$, not just those that divide the group order.
This makes many statements smoother. For instance, the defect group of the principal block is the Sylow $p$-subgroup, and a block is semisimple if and only if the defect group is trivial. Thus the principal block is semisimple iff $p$ does not divide the order of the group. It would be awkward to state the theorem only for non-principal blocks to avoid mentioning size $p^0$ Sylow $p$-subgroups.
Another reason is induction. For instance, a group is called $p$-closed if it has a normal Sylow $p$-subgroup. Subgroups and quotient groups of $p$-closed groups are $p$-closed. Except that if we only allow Sylows for $p$ dividing $|G|$, we have to redefine $p$-closed to be “normal Sylow $p$-subgroup or $p$ does not divide the order of the group”, and now every time we consider a subgroup or quotient group we have to consider two cases: normal Sylow $p$-subgroup or $p$ does not divide the order of the group.
For this reason, most finite group theorists allow the trivial primes as well. For instance: Alperin, Aschbacher, Gorenstein, Huppert, Kurzweil and Stellmacher, Suzuki, etc. all explicitly allow primes that do not divide the order of the group.
|
Maximum cycle in a graph with a path of length $k$ I don't understand why this holds:
Let $G$ be a graph containing a cycle $C$, and assume that $G$ contains a path of length at least $k$ between two vertices of $C$.
Then $G$ contains a cycle of length at least $\sqrt{k}$.
Since we can extend the cycle $C$ with the vertices of the path, why don't we get a cycle of length $k+2$? ($2$ being the minimum number of vertices belonging to $C$ between the vertices where the path connects to it.)
I really don't see where that square root is coming from.
For reference this is exercise $3$ from Chapter $1$ of the Diestel book.
|
Here is my solution. Let $s$ and $t$ be two vertices of $C$ such that
there is an $st$-path $P$ of length at least $k$. If $|V(P) \cap V(C)|\geq \sqrt{k}$ then the proof follows, because the cycle we want is $C$ itself. Otherwise, suppose that
$|V(P) \cap V(C)| < \sqrt{k}$. Then, as $|V(P)| \geq k$, by the pigeonhole principle there is a subpath of $P$ of length at least $\sqrt{k}$ internally disjoint from $C$. Joining this subpath with a subpath of $C$ between its endpoints, we get the desired cycle.
|
Proving a lemma - show the span of a union of subsets is still in the span This is part of proving a larger theorem but I suspect my prof has a typo in here (I emailed him about it to be sure)
The lemma is written as follows:
Let $V$ be a vector space. Let $\{z, x_1, x_2, \ldots, x_n\}$ be a subset of $V$. Show that if $z \in \operatorname{span}(\{x_1, x_2,\ldots, x_n\})$, then $\operatorname{span}(\{z, x_1, x_2,\ldots, x_n\})=\operatorname{span}(\{x_1, x_2,\ldots, x_n\})$.
I feel like this should be really simple and I saw a proof that you can take out a vector from a subset and not change the span, but I am unsure of the reverse -- assuming that is what this lemma is about. (To me, the "if" implies something besides just unifying the two sets should follow).
Anyhow, the proof should, I think, start with that we can modify a subset (let's call it S) without affecting the span if we start like this:
$\exists x \in S$ such that $x \in \operatorname{span}(S\setminus\{x\})$;
then you build up a linearly independent subset somehow. The proof that you can take vectors out of the subset says that since $x\in \operatorname{span}(S\setminus\{x\})$, $\exists\ \lambda_1, \lambda_2,\ldots, \lambda_n \in K$ such that $x=\sum_{i=1}^n\lambda_i x_i$;
since we know $\operatorname{span}(S) \supseteq \operatorname{span}(S\setminus\{x\})$, we just need to show $\operatorname{span}(S) \subseteq \operatorname{span}(S\setminus\{x\})$.
But honestly I am not sure I understand what's happening here well enough to prove the above lemma. (This class moves fast enough that we're essentially memorizing proofs rather than re-deriving them I guess).
I am really starting to hate linear algebra. :-(
(Edited to fix U symbol and make it a "is a member of" symbol)
|
We want to show $\operatorname{span}\{z, x_1, \dots, x_n\} = \operatorname{span}\{x_1, \dots, x_n\}$. In general, to show $X = Y$ where $X, Y$ are sets, we want to show that $X \subseteq Y$ and $Y \subseteq X$.
So suppose $v \in \operatorname{span}\{x_1, \dots, x_n\}$. Then, we can find scalars $c_1, \dots, c_n$ such that $$v = c_1x_1 + \dots + c_nx_n$$ so clearly, $v \in \operatorname{span}\{z, x_1, \dots, x_n\}$. This proves $$\operatorname{span}\{x_1, \dots, x_n\} \subseteq \operatorname{span}\{z, x_1, \dots, x_n\}$$
Now let $v \in \operatorname{span}\{z, x_1, \dots, x_n\}$. Again, by definition, there are scalars $c_1, \dots, c_{n+1}$ such that $v = c_1x_1 + \dots + c_nx_n + c_{n+1}z$. But hold on, $z \in \operatorname{span}\{x_1, \dots, x_n\}$, right? This means there are scalars $a_1, \dots, a_n$ such that $z = a_1x_1 + \dots + a_nx_n$. Hence,
$$v = c_1x_1 + \dots + c_nx_n + c_{n+1}(a_1x_1 + \dots + a_nx_n)$$
$$= (c_1 + c_{n+1}a_1)x_1 + \dots + (c_n+c_{n+1}a_n)x_n$$
and so we conclude that $v \in \operatorname{span}\{x_1, \dots, x_n\}$. Therefore,
$$\operatorname{span}\{z, x_1, \dots, x_n\} = \operatorname{span}\{x_1, \dots x_n\}$$
|
To what extent the statement "Data is normally distributed when mode, mean and median scores are all equal" is correct? I read that normally distributed data have equal mode, mean and median. However in the following data set, Median and Mean are equal but there is no Mode and the data is "Normally Distributed":
$ 1, 2, 3, 4, 5 $
I am wondering to what extent the statement is correct. Is there a more accurate definition of "normal distribution"?
|
It is not correct at all. Any unimodal probability distribution symmetric about the mode (for which the mean exists) will have mode, mean and median all equal.
For the definition of normal distribution, see e.g. Wikipedia.
Strictly speaking, data can't be normally distributed, but it can be a
sample from a normal distribution. In a sample of $3$ or more points from a continuous distribution such as the normal distribution, with probability $1$ the data points will all be distinct (so there is no mode), and the mean will not be exactly the same as the median. It is only the probability distribution the data is taken from that can have mode, mean and median equal.
|
Looking for reference to a couple of proofs regarding the Stereographic Projection. I'm looking for a reference to rigorous proofs of the following two claims (if someone is willing to write down a proof that would also be excellent):
1. The stereographic projection is a homeomorphism between $S^{n}\backslash\left\{ N\right\}$ (the sphere without its north pole) and $\mathbb{R}^{n}$ for $n\geq2$.
2. The stereographic projection is a homeomorphism between $S^{n}$ and the one point compactification of $\mathbb{R}^{n}$.
Help would be appreciated.
|
For the first request, just try to write down explicitly the function that defines such a projection, by considering a hyperplane which cuts the sphere along the equator.
Consider $S^n$ in $\mathbb{R}^{n+1}$, with $\mathbb{R}^n$ as the subset with $x_{n+1}=0$. The north pole is $N=(0,0,\ldots,0,1)$, and the image of each point is the intersection of the line through that point and the north pole with the above-mentioned hyperplane. Thus you need to find (solving with respect to $t$) $\{(0,\ldots,1) + t((x_1,\ldots,x_{n+1})-(0,\ldots,1)): t \in \mathbb{R} \}\cap\{x_{n+1}=0\}$, which yields the desired $t$ and so the image of the point.
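Carrying that out (a routine completion, written here for convenience): the last coordinate of the line point is $1+t(x_{n+1}-1)$, so setting it to zero gives $t=\frac{1}{1-x_{n+1}}$, and hence
$$
\sigma(x_1,\ldots,x_{n+1})=\left(\frac{x_1}{1-x_{n+1}},\ldots,\frac{x_n}{1-x_{n+1}}\right),\qquad
\sigma^{-1}(y_1,\ldots,y_n)=\frac{(2y_1,\ldots,2y_n,\,|y|^2-1)}{|y|^2+1}.
$$
Both formulas are manifestly continuous, which settles the first claim.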
|
Show that $\frac{x}{1-x}\cdot\frac{y}{1-y}\cdot\frac{z}{1-z} \ge 8$.
If $x,y,z$ are positive proper fractions satisfying $x+y+z=2$, prove that $$\dfrac{x}{1-x}\cdot\dfrac{y}{1-y}\cdot\dfrac{z}{1-z}\ge 8$$
Applying $GM \ge HM$, I get $$\left[\dfrac{x}{1-x}\cdot\dfrac{y}{1-y}\cdot\dfrac{z}{1-z}\right]^{1/3}\ge \dfrac{3}{\frac 1x-1+\frac 1y-1+\frac 1z-1}\\=\dfrac{3}{\frac 1x+\frac 1y+\frac 1z-3}$$
Then how to proceed. Please help.
|
Write $1-x=a$, $1-y=b$ and $1-z=c$.
$x=2-(y+z)=b+c$
$y=2-(z+x)=a+c$
$z=2-(x+y)=a+b$
Thus we have the same expression in simpler form:
$\dfrac{b+c}{a} \cdot \dfrac{a+c}{b} \cdot \dfrac{a+b}{c}$
Now we have AM-GM:
$b+c \ge 2 \sqrt{bc}$
$a+c \ge 2 \sqrt{ac}$
$b+a \ge 2 \sqrt{ba}$
$\dfrac{b+c}{a} \cdot \dfrac{a+c}{b} \cdot \dfrac{a+b}{c} \ge \dfrac{2^3 abc}{abc} =8$, Done.
|
Number of distinct points in $A$ is uncountable How can one show:
Let $X$ be a metric space and $A$ is subset of $X$ be a connected set with
at least two distinct points then the number of distinct points in $A$
is uncountable.
|
We will show that if $A$ is countable then $A$ is not connected.
Let $a,b$ be two distinct points in $A$ and let $d$ be the metric on $X$. Then, since $d$ is real valued, there are uncountably many $r\in \mathbb R$ such that $0<r<d(a,b)$. Let $r_0$ be such that $\forall x\in A$, $d(a,x)\ne r_0$ and $0<r_0<d(a,b)$. This is possible because we are assuming $A$ is countable. Then the two open sets $U$, $V$ defined by $$U=\{x\in X: d(a,x)<r_0\}\cap A$$ and $$V=\{x\in X: d(a,x)>r_0\}\cap A$$ are disjoint, their union equals $A$ and $U\cap \bar V=\bar U\cap V=\emptyset$.
Therefore, $A$ is not connected.
|
Contour integration with branch cut This is an exercise in a course on complex analysis I am taking:
Determine the function $f$ using complex contour integration:
$$\lim_{R\to\infty}\frac{1}{2\pi i}\int_{c-iR}^{c+iR}\frac{\exp(tz)}{(z-i)^{\frac{1}{2}}(z+i)^{\frac{1}{2}}} dz$$
Where $c>0$ and the branch cut for $z^\frac{1}{2}$ is to be chosen on $\{z;\Re z=0, \Im z \leq0\}$.
Make a distinction between:
$$t>0, \quad t=0, \quad t<0$$
I think I showed that for $t<0$, $f(t)=0$ by using Jordan's Lemma. For $t=0$ I think the answer must be $f(0)=\frac{1}{2}$. For $t>0$ however, I have no idea what contour I have to define, nor how I have to calculate the residues in $i$ and $-i$.
|
For $t\equiv-\tau<0$, consider a half-circle of radius $M$ centred at $c$ and lying on the right of its diameter that goes from $c-iM$ to $c+iM$. By Cauchy's theorem
$$
\int_{c-iM}^{c+iM}\frac{e^{tz}}{\sqrt{1+z^2}}dz
=\int_{-\pi/2}^{+\pi/2}\frac{e^{-\tau(c+Me^{i\varphi})}}{\sqrt{1+(c+Me^{i\varphi})^2}}iMe^{i\varphi}d\varphi;
$$
the right-hand side is bounded in absolute value by an argument in the style of Jordan's lemma, and hence goes to $0$ as $M\to\infty$.
So, indeed:
$$\boxed{
\int_{c-i\infty}^{c+i\infty}\frac{e^{tz}}{\sqrt{1+z^2}}dz=0,\text{ for }t<0.}
$$
For $t=0$ the integral diverges logarithmically, but we can compute its Cauchy principal value:
$$
\int_{c-iM}^{c+iM}\frac{1}{\sqrt{1+z^2}}dz=\left[ \sinh^{-1}z \right]_{c-iM}^{c+iM}=\log\frac{cM^{-1}+i+\sqrt{(cM^{-1}+i)^2+M^{-2}}}{cM^{-1}-i+\sqrt{(cM^{-1}-i)^2+M^{-2}}}
$$
and for $M\to\infty$
$$
\boxed{
PV\int_{c-i\infty}^{c+i\infty}\frac{1}{\sqrt{1+z^2}}dz=i\pi.
}
$$
Finally, for $t>0$, consider the contour below:
It is easy to see that the integrals along the horizontal segments vanish in the $M\to\infty$ limit, as well as the integral along the arc on the left (the latter, again by Jordan's lemma). Even the integrals on the small arcs give no contribution as the contour approaches the branch cuts.
The only contribution comes from the branch discontinuities:
$$
\int_{c-i\infty}^{c+i\infty}\frac{e^{tz}}{\sqrt{1+z^2}}dz=
4i \int_{1}^{+\infty}\frac{{\sin (ty)}}{\sqrt{y^2-1}}dy.
$$
Now, letting $y=\cosh \psi$, we have
$$
4i\Im \int_1^{+\infty}\frac{e^{ity}}{\sqrt{y^2-1}}dy=
2i\Im \int_{-\infty}^{+\infty}e^{it\cosh\psi}d\psi=
2i\Im \left(i\pi H_0^{(1)}(t)\right)=i2\pi J_0(t),
$$
thanks to the integral representation of cylindrical Bessel functions.
In this step, the analytic continuation $t\mapsto t+i\delta$, for a small $\delta>0$ which is then sent to $0$, has been employed.
So, finally
$$
\boxed{
\int_{c-i\infty}^{c+i\infty}\frac{e^{tz}}{\sqrt{1+z^2}}dz
=i2\pi J_0(t), \text{ for }t>0.
}
$$
To sum up
$$
f(t)=
\begin{cases}
0 &\text{if }t<0\\
1/2 &\text{if }t=0\text{ (in the }PV\text{ sense)}\\
J_0(t)&\text{if }t>0.
\end{cases}
$$
|
Integral of polylogarithms and logs in closed form: $\int_0^1 \frac{du}{u}\text{Li}_2(u)^2(\log u)^2$ Is it possible to evaluate this integral in closed form?
$$ \int_0^1 \frac{du}{u}\text{Li}_2(u)^2\log u \stackrel{?}{=} -\frac{\zeta(6)}{3}.$$
I found the possible closed form using an integer relation algorithm.
I found several other possible forms for similar integrals, including
$$ \int_0^1 \frac{du}{u}\text{Li}_2(u)^2(\log u)^2 \stackrel{?}{=} -20\zeta(7)+12\zeta(2)\zeta(5).$$
There doesn't seem to be an equivalent form when the integrand contains $(\log u)^3$, at least not just in terms of $\zeta$.
Does anybody know a trick for evaluating these integrals?
Update. The derivation of the closed form for the second integral follows easily along the ideas O.L. used in the answer for the first integral.
Introduce the functions
$$ I(a,b,c) = \int_0^1 \frac{du}{u}(\log u)^c \text{Li}_a(u)\text{Li}_b(u) $$
and
$$ S(a,b,c) = \sum_{n,m\geq1} \frac{1}{n^am^b(n+m)^c}. $$
Using integration by parts, the expansion of polylogarithms from their power series definition and also that
$$ \int_0^1 (\log u)^s u^{t-1}\,du = \frac{(-1)^s s!}{t^{s+1}},$$
check that
$$ I(2,2,2) = -\frac23 I(1,2,3) = 4S(1,2,4). $$
Now use binomial theorem and the fact that $S(a,b,c)=S(b,a,c)$ to write
$$ 6S(1,2,4) + 2S(3,0,4) = 3S(1,2,4) + 3S(2,1,4)+S(0,3,4)+S(3,0,4) = S(3,3,1). $$
Now, using Mathematica,
$$ S(3,3,1) = \sum_{n,m\geq1}\frac{1}{n^3m^3(n+m)} = \sum_{m\geq1}\left(\frac{H_m}{m^6} - \frac{\zeta(2)}{m^5} + \frac{\zeta(3)}{m^4}\right), $$
and
$$ \sum_{m\geq1}\frac{H_m}{m^6} = -\zeta(4)\zeta(3)-\zeta(2)\zeta(5)+4\zeta(7), $$
so
$$ S(3,3,1) = 4\zeta(7)-2\zeta(2)\zeta(5). $$
Also,
$$ S(0,3,4) = \zeta(3)\zeta(4) - \sum_{m\geq1} \frac{H_{m,4}}{m^3} = \zeta(3)\zeta(4) - \left(-17\zeta(7)+10\zeta(2)\zeta(5)+\zeta(3)\zeta(4)\right) = 17\zeta(7)-10\zeta(2)\zeta(5), $$
from which it follows that
$$ I(2,2,2) = \frac23\left(S(3,3,1)-2S(0,3,4)\right) = -20\zeta(7)+12\zeta(2)\zeta(5). $$
|
I've decided to publish my work so far - I do not promise a solution, but I've made some progress that others may find interesting and/or helpful.
$$\text{Let } I_{n,k}=\int_{0}^{1}\frac{\text{Li}_{k}(u)}{u}\log(u)^{n}du$$
Integrating by parts gives $$I_{n,k}=\left[\text{Li}_{k+1}(u)\log(u)^{n}\right]_{u=0}^{u=1}-\int_{0}^{1}\frac{\text{Li}_{k+1}(u)}{u}n\log(u)^{n-1}du$$
$$\text{Hence, }I_{n,k}=-nI_{n-1,k+1} \implies I_{n,k}=(-1)^{r}\frac{n!}{(n-r)!}I_{n-r,k+r}$$
Taking $r=n$ gives $I_{n,k}=(-1)^{n}n!I_{0,n+k}$.
$$\text{But obviously } I_{0,n+k}=\int_{0}^{1}\frac{\text{Li}_{n+k}(u)}{u}du=\text{Li}_{n+k+1}(1)-\text{Li}_{n+k+1}(0)=\zeta(n+k+1)$$
$$\text{Now consider }J_{n,k,l}=\int_{0}^{1}\frac{\text{Li}_{k}(u)}{u}\text{Li}_{l}(u)\log(u)^{n}du$$
Integrating by parts again,
$$J_{n,k,l}=\left[\text{Li}_{k+1}(u)\text{Li}_{l}(u)\log(u)^{n}\right]_{0}^{1}-\int_{0}^{1}\frac{\text{Li}_{l-1}(u)}{u}\text{Li}_{k+1}(u)\log(u)^{n}-\int_{0}^{1}\frac{n\log(u)^{n-1}}{u}\text{Li}_{k+1}(u)\text{Li}_{l}(u) du$$
So $J_{n,k,l}=-J_{n,k+1,l-1}-nJ_{n-1,k+1,l}$; continuing in the spirit of the first part suggests that we ought to try to increase the first and second indices, while decreasing the third. If we can succeed in this, we have found a closed form.
|
Looking for a good counterargument against vector space decomposition. How do I see that I cannot write $\mathbb{R}^n = \bigcup_{\text{all possible }M} \operatorname{span}(M)$, where $M$ runs over the subsets with $n-1$ elements of the set of vectors $N=\{a_1,\ldots,a_n,\ldots,a_m\} \subset \mathbb{R}^n$, and where the span of all of them has the full dimension $n$?
|
Let $V$ be a vector space over an infinite field $F$, and $V_1, \ldots, V_n$ proper subspaces. Then I claim $\bigcup _j V_j$ is not all of $V$.
For each $k$ let $u_k$ be a vector not in $V_k$. We then inductively find
vectors $w_k$ not in $\bigcup_{j \le k} V_j$. Namely, if $w_k \notin \bigcup_{j \le k} V_j$, and $u_{k+1} \notin V_{k+1}$, consider $f(t) = t u_{k+1} + (1-t) w_k$ for scalars $t$. If this was in $V_j$ for two different values of $t$, say $t_1 \ne t_2$, then it would be in $V_j$ for all $t$, because
$$f(t) = \dfrac{t - t_1}{t_2 - t_1} f(t_2) + \dfrac{t - t_2}{t_1 - t_2} f(t_1)$$
This is not the case for any $j \in \{1,\ldots,k+1\}$ because $f(1) = u_{k+1} \notin V_{k+1}$ and $f(0) = w_k \notin V_j$ for $j \le k$. So there are at most $k+1$
values of the scalar $t$ for which $f(t) \in \bigcup_{j \le k+1} V_j$, and
infinitely many for which it is not.
|
Continuity and Metric Spaces How do I show that the function $f:X \to \mathbb R$ given by $$f(x)=\frac{d(a,x)}{d(a,b)}$$ is continuous.
Given that $(X,d)$ is a metric space, and $a,b$ are distinct points in $X$.
|
If $d(x,y)<d(a,b)\cdot\varepsilon$ then
$$
|f(x)-f(y)| = \left|\frac{d(a,x)}{d(a,b)} - \frac{d(a,y)}{d(a,b)}\right| \le \frac{d(x,y)}{d(a,b)}<\varepsilon.
$$
The first inequality follows from two instances of the triangle inequality: $d(a,x)+d(x,y)\ge d(a,y)$ and $d(a,y)+d(y,x)\ge d(a,x)$.
So given $\varepsilon>0$, let $\delta =d(a,b)\cdot\varepsilon$.
|
What is the average weight of a minimal spanning tree of $n$ randomly selected points in the unit cube? Suppose we pick $n$ random points in the unit cube in $\mathbb{R}^3$, $p_1=\left(x_1,y_1,z_1\right),$ $p_2=\left(x_2,y_2,z_2\right),$ etc. (So, $x_i,y_i,z_i$ are $3n$ uniformly distributed random variables between $0$ and $1$.) Let $\Gamma$ be a complete graph on these $n$ points, and weight each edge $\{p_i,p_j\}$ by $$w_{ij}=\sqrt{\left(x_i-x_j\right)^2+\left(y_i-y_j\right)^2+\left(z_i-z_j\right)^2}.$$
Question: What is the expected value of the total weight of a minimal spanning tree of $\Gamma$?
(Note: Here total weight means the sum of all edges in the minimal spanning tree.)
A peripheral request: The answer is probably a function of $n$, but I don't have the computing power or a good implementation of Kruskal's algorithm to suggest what this should look like. If someone could run a simulation to generate this average over many $n$, it might help towards a solution to see this data.
|
If $n = 0$ or $n = 1$ the answer obviously is 0. If $n = 2$ we have
$$E\left((x_1 - x_2)^2\right) = E(x_1^2 - 2x_1x_2 + x_2^2) = E(x_1^2) - 2E(x_1)\cdot E(x_2) + E(x_2^2) \\= \frac13 - 2\frac12\cdot\frac12 + \frac13 = \frac16.$$
The same holds for the $y$- and $z$-coordinates. So $E(w_{12}^2) = \frac16 + \frac 16 + \frac16 = \frac12$, giving $\sqrt{E(w_{12}^2)} = \frac1{\sqrt2}$ (note that by Jensen's inequality this is only an upper bound for $E(w_{12})$, not its exact value), and the spanning tree contains the edge $\{\,1, 2\,\}$ only.
It is possible to consider several cases for $n = 3$; however, for arbitrary $n$ I don't expect to get a closed form for the answer.
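Regarding the peripheral request, here is a minimal simulation sketch (Prim's algorithm; all names are mine) to estimate the average empirically:

    import random

    def mst_weight(points):
        # Prim's algorithm on the complete Euclidean graph, O(n^2)
        n = len(points)
        dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
        in_tree = [False] * n
        best = [float("inf")] * n  # cheapest edge from the tree to each vertex
        best[0] = 0.0
        total = 0.0
        for _ in range(n):
            u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
            in_tree[u] = True
            total += best[u]
            for v in range(n):
                if not in_tree[v]:
                    best[v] = min(best[v], dist(points[u], points[v]))
        return total

    def average_mst(n, trials=100):
        pts = lambda: [tuple(random.random() for _ in range(3)) for _ in range(n)]
        return sum(mst_weight(pts()) for _ in range(trials)) / trials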
|
Characterizing continuous exponential functions for a topological field Given a topological field $K$ that admits a non-trivial continuous exponential function $E$, must every non-trivial continuous exponential function $E'$ on $K$ be of the form $E'(x)=E(r\sigma (x))$ for some $r \in K$* and $\sigma \in Aut(K/\mathbb{Q})$?
If not, for which fields other than $\mathbb{R}$ is this condition met?
Thanks to Zev
|
It seems that as-stated, the answer is false. I'm not satisfied with the following counterexample, however, and I'll explain afterwards.
Take $K = \mathbb{C}$ and let $E(z) = e^z$ be the standard complex exponential. Take $E'(z) = \overline{e^z} = e^{\overline{z}}$, where $\overline{z}$ is the complex conjugate of $z$. Then $E'(z)$ is not of the form $E(r z)$, and yet is a perfectly fine homomorphism from the additive to the multiplicative groups of $\mathbb{C}$.
Here's why I'm not satisfied: you can take any automorphism of a field and cook up new exponentials by post-composition or pre-composition. In the case I mentioned, these two coincide.
This won't work in $\mathbb{R}$ because there are no nontrivial continuous automorphisms there. I would be interested in seeing an answer to a reformulation to this problem that reflected this.
|
Show that $(1+x)(1+y)(1+z)\ge 8(1-x)(1-y)(1-z)$.
If $x>0,y>0,z>0$ and $x+y+z=1$, prove that $$(1+x)(1+y)(1+z)\ge 8(1-x)(1-y)(1-z).$$
Trial: Here $$(1+x)(1+y)(1+z)\ge 8(1-x)(1-y)(1-z) \\ \implies (1+x)(1+y)(1+z)\ge 8(y+z)(x+z)(x+y)$$ I am unable to solve the problem. Please help.
|
$$(1+x)(1+y)(1+z) \ge 8(1-x)(1-y)(1-z) \Leftrightarrow $$
$$(2x+y+z)(x+2y+z)(x+y+2z) \ge 8(y+z)(x+z)(x+y)$$
Let $a=x+y, b=x+z, c=y+z$. Then the inequality to prove is
$$(a+b)(a+c)(b+c) \ge 8abc \,,$$
Which follows immediately from AM-GM:
$$a+b \ge 2 \sqrt{ab}$$
$$a+c \ge 2 \sqrt{ac}$$
$$b+c \ge 2 \sqrt{bc}$$
Simplification: the solution above can be simplified in the following way.
By AM-GM
$$2\sqrt{(1-y)(1-z)}\le 1-y+1-z=1+x \,.$$
Similarly
$$2\sqrt{(1-x)(1-z)}\le 1+y \,.$$
$$2\sqrt{(1-x)(1-y)}\le 1+z \,.$$
|
Calculate the probability that two teams draw If we know that team A had a $39\%$ chance of winning and team B a $43\%$ chance of winning, how can we calculate the probability of a draw?
My textbook gives the answer but I cannot understand the logic behind it. The answer is $18\%$. As the working is not shown, I guess that this is how they find the $18\%$ probability of a draw:
$$ (100\% - 39\%) - 43\% = 18\%$$
But I cannot understand the logic behind it. I appreciate if someone can explain it to me.
|
The sum of all events' probabilities is equal to 1. In this case, there are three disjoint events: team A winning, team B winning or a draw. Since we know the sum of these probabilities is 1, we can get the probability of a draw as follows:
$$
Pr(\text{Draw})=1-Pr(\text{Team A wins})-Pr(\text{Team B wins})=1-0.39-0.43=0.18
$$
|
Where to start Machine Learning? I've recently stumbled upon machine learning, generating user recommendations based on user data, generating text teaser based on an article. Although there are tons of framework that does this(Apache Mahout, Duine framework and more) I wanted to know the core principles, core algorithms of machine learning and in the future implement my own implementation of machine learning.
Where do I actually start learning machine learning — its basics, or the concepts first and then implementation? I have weak to average math skills (I hope this will not hurt too much; if so, what branches of mathematics should I study before jumping into machine learning?).
Note that this is not related to my academics; rather, I want to learn this as an independent researcher, and I am quite fascinated by how machine learning works.
|
I would also recommend the course Learning from data by Yaser Abu-Mostafa from Caltech. An excellent course!
|
Question about linear systems of equations Let $X=\{x_1,\cdots,x_n\}$ be a set of variables in $\mathbb{R}$.
Let $S_1$ be a set of linear equations of the form $a_1 x_1+\cdots+a_n x_n=b$ that are independent.
Let $k_1=|S_1|<n$ where $|S_1|$ denotes the rank of $S_1$ (i.e., the number of independent equations).
That is, $S_1$ does not contain enough equations to uniquely specify the values of the variables in $X$.
How many other equations are needed to solve the system uniquely? The answer is $n - k_1$.
Let $M$ be a set of $n-k_1$ equations such that $S_1 \cup M$ is full rank (i.e., the system $S_1 \cup M$ can be solved uniquely). My first question is: how can one find such a set $M$?
Let $S_2$ be a set of independent equations such that $|S_2|<n$ too. Now I want to find an $M$ such that both $S_1 \cup M$ and $S_2 \cup M$ are uniquely solvable. How can I find such an $M$? Note that $|M|$ must be $\ge \max(n-k_1,n-k_2)$.
What if we extend the question to $S_1,\cdots,S_m$ such that $S_1\cup M,\cdots,S_m\cup M$ are all uniquely solvable?
A partial answer is also appreciated.
|
Here is a reasonably easy algorithm if you already know some basic stuff about matrices:
Look at the coefficients matrix's rows as vectors in $\;\Bbb R^n\;$ :
$$A:=\{ v_1=(a_{11},\ldots,a_{1n})\;,\;v_2=(a_{21},\ldots,a_{2n})\,\ldots,v_k=(a_{k1},\ldots,a_{kn})\}\;\;(\text{where $\,k=k_1\;$ for simplicity of notation)}$$
Since you're given the equations are independent that means $\,A\,$ is a linearly independent set of vectors in $\,\Bbb R^n\;$ .
Well, now just "simply" complete the set $\,A\,$ to a basis of $\,\Bbb R^n\,$ and that's all...you can do this by taking the matrix
$$\begin{pmatrix}a_{11}&a_{12}&\ldots&a_{1n}\\a_{21}&a_{22}&\ldots&a_{2n}\\\ldots&\ldots&\ldots&\ldots\\a_{k1}&a_{k2}&\ldots&a_{kn}\end{pmatrix}$$
and adding each time a new row $(b_1,\ldots,b_n)$. Check whether this matrix is singular or not (for example, by reducing the matrix and checking that the row you added doesn't become all zeros!), and continue on until you get an $n\times n$ regular matrix, and voilà!
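As a concrete version of this procedure, here is a sketch (the function name is mine) that completes the rows with standard basis vectors, which always suffices:

    import numpy as np

    def complete_rows(S, n):
        # S: k x n array of independent rows; returns rows M (standard basis
        # vectors) such that stacking S and M gives an invertible n x n matrix
        rows = [np.asarray(r, float) for r in S]
        M = []
        for i in range(n):
            if len(rows) == n:
                break
            e = np.zeros(n)
            e[i] = 1.0
            if np.linalg.matrix_rank(np.vstack(rows + [e])) == len(rows) + 1:
                rows.append(e)
                M.append(e)
        return M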
|
Does $\sum_{n=1}^{\infty} \frac{\ln(n)}{n^2+2}$ converge? I'm trying to find out whether
$$\sum_{n=1}^{\infty} \frac{\ln(n)}{n^2+2}$$
is convergent or divergent?
|
Clearly
$$
\frac{\ln(n)}{n^2+2}\leq \frac{\ln(n)}{n^2}
$$
You can apply the integral test to show that $\sum\frac{\ln(n)}{n^2}$ converges. You only need to check that $\frac{\ln(n)}{n^2}$ is decreasing. But, the derivative is clearly negative for $n>e$.
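For completeness, the integral computation is a one-liner, since $\left(-\frac{\ln x+1}{x}\right)'=\frac{\ln x}{x^2}$:
$$
\int_1^\infty \frac{\ln x}{x^2}\,dx=\left[-\frac{\ln x+1}{x}\right]_1^\infty=1<\infty,
$$
so $\sum\frac{\ln(n)}{n^2}$ converges, and by the comparison above so does the original series.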
|
What is the range of values of the random variable $Y = \frac{X - \min(X)}{\max(X)-\min(X)}$? Suppose $X$ is an arbitrary numeric random variable. Define the variable $Y$ as
$$Y=\frac{X-\min(X)}{\max(X)-\min(X)}.$$
Then what is the range of values of $Y$?
|
If $X$ takes values over any finite (closed) interval, then the range of $Y$ is $[0,1]$.
|
Vorticity equation in index notation (curl of Navier-Stokes equation) I am trying to derive the vorticity equation and I got stuck when trying to prove the following relation using index notation:
$$
{\rm curl}((\textbf{u}\cdot\nabla)\mathbf{u}) = (\mathbf{u}\cdot\nabla)\pmb\omega - ( \pmb\omega \cdot\nabla)\mathbf{u}
$$
considering that the fluid is incompressible $\nabla\cdot\mathbf{u} = 0 $, $\pmb \omega = {\rm curl}(\mathbf{u})$ and that $\nabla \cdot \pmb \omega = 0.$
Here follows what I've done so far:
$$
(\textbf{u}\cdot\nabla) \mathbf{u} = u_m\frac{\partial u_i}{\partial x_m} \mathbf{e}_i = a_i \mathbf{e}_i \\
{\rm curl}(\mathbf{a}) = \epsilon_{ijk} \frac{\partial a_k}{\partial x_j} \mathbf{e}_i = \epsilon_{ijk} \frac{\partial}{\partial x_j}\left( u_m\frac{\partial u_k}{\partial x_m} \right) \mathbf{e}_i = \\
= \epsilon_{ijk}\frac{\partial u_m}{\partial x_j}\frac{\partial u_k}{\partial x_m} \mathbf{e}_i + \epsilon_{ijk}u_m \frac{\partial^2u_k}{\partial x_j \partial x_m} \mathbf{e}_i \\
$$
the second term $\epsilon_{ijk}u_m \frac{\partial^2u_k}{\partial x_j \partial x_m} \mathbf{e}_i$ seems to be the first term "$(\mathbf{u}\cdot\nabla)\pmb\omega$" from the forementioned identity. Does anyone have an idea how to get the second term?
|
The trick is the following:
$$ \epsilon_{ijk} \frac{\partial u_m}{\partial x_j} \frac{\partial u_m}{\partial x_k} = 0 $$
by antisymmetry.
So you can rewrite
$$ \epsilon_{ijk} \frac{\partial u_m}{\partial x_j} \frac{\partial u_k}{\partial x_m} = \epsilon_{ijk} \frac{\partial u_m}{\partial x_j}\left( \frac{\partial u_k}{\partial x_m} - \frac{\partial u_m}{\partial x_k} \right) $$
Note that the term in the parentheses is something like $\pm\epsilon_{kml} \omega_l$
Lastly use the product property for Levi-Civita symbols
$$ \epsilon_{ijk}\epsilon_{lmk} = \delta_{il}\delta_{jm} - \delta_{im}\delta_{jl} $$
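Carrying the hint through (the only extra input is the incompressibility assumption $\nabla\cdot\mathbf{u}=0$): since $\epsilon_{kml}\omega_l = \frac{\partial u_m}{\partial x_k}-\frac{\partial u_k}{\partial x_m}$, the term in parentheses equals $-\epsilon_{kml}\omega_l$, and so
$$
\epsilon_{ijk} \frac{\partial u_m}{\partial x_j}\left( \frac{\partial u_k}{\partial x_m} - \frac{\partial u_m}{\partial x_k} \right)
= -\epsilon_{ijk}\epsilon_{kml}\,\omega_l\frac{\partial u_m}{\partial x_j}
= -\left(\delta_{im}\delta_{jl}-\delta_{il}\delta_{jm}\right)\omega_l\frac{\partial u_m}{\partial x_j}
= -(\pmb\omega\cdot\nabla)u_i + \omega_i\,\frac{\partial u_m}{\partial x_m},
$$
and the last term vanishes for an incompressible flow, leaving exactly the $-(\pmb\omega\cdot\nabla)\mathbf{u}$ part of the identity.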
|
Function problem Show that function $f(x) =\frac{x^2+2x+c}{x^2+4x+3c}$ attains any real value if $0 < c \leq 1$ Problem :
Show that function $f(x)=\dfrac{x^2+2x+c}{x^2+4x+3c}$ attains any real value if $0 < c \leq 1$
My approach :
Let the given function $f(x) =\dfrac{x^2+2x+c}{x^2+4x+3c} = t $ where $t$ is any arbitrary constant.
$\Rightarrow (t-1)x^2+2(2t-1)x+c(3t-1)=0$
The argument $x$ must be real, therefore $(2t-1)^2-(t-1)(3tc-c) \geq 0$.
Now how to proceed further? Please guide. Thanks.
|
$(2t-1)^2-(t-1)(3tc-c) \geq 0\implies 4t^2+1-4t-(3t^2c-4tc+c)\geq 0\implies t^2(4-3c)+4(c-1)t+(1-c)\geq 0$
Now a quadratic polynomial $\geq 0$ $\forall t\in \Bbb R$ iff coefficient of second power of variable is positive and Discriminant $\leq 0$
which gives $4-3c>0\implies c<\frac{4}{3}$ and $D=16(c-1)^2+4(4-3c)(c-1)\leq 0\implies 4(c-1)(4c-4+4-3c)\leq 0\implies 4(c-1)c\leq 0\implies 0\leq c\leq 1$
So $c<\frac{4}{3}$ and $0\leq c\leq 1\implies 0\leq c\leq 1$
|
Quadratic residues, mod 5, non-residues mod p 1) If $p\equiv 1\pmod 5$, how can I prove/show that 5 is a quadratic residue mod p?
2) If $p\equiv 2\pmod 5$, how can is prove/show that 5 is a nonresidue(quadratic) mod p?
|
1) $(5/p) = (p/5)$ by quadratic reciprocity, since $5 \equiv 1 \pmod 4$. As $p \equiv 1 \pmod 5$, we get $(p/5) = (1/5) = 1$. So $5$ is a quadratic residue mod $p$.
2) Again $(5/p) = (p/5)$. Since $p \equiv 2 \pmod 5$, we get $(p/5) = (2/5) = -1$, because $5 \equiv 5 \pmod 8$. So $5$ is not a quadratic residue mod $p$.
|
Why does Monte-Carlo integration work better than naive numerical integration in high dimensions? Can anyone explain simply why Monte-Carlo works better than naive Riemann integration in high dimensions? I do not understand how chosing randomly the points on which you evaluate the function can yield a more precise result than distributing these points evenly on the domain.
More precisely:
Let $f:[0,1]^d \to \mathbb{R}$ be a continuous bounded integrable function, with $d\geq3$. I want to compute $A=\int_{[0,1]^d} f(x)dx$ using $n$ points. Compare 2 simple methods.
The first method is the Riemann approach. Let $x_1, \dots, x_n$ be $n$ regularly spaced points in $[0,1]^d$ and $A_r=\frac{1}{n}\sum_{i=1}^n f(x_i)$. I have that $A_r \to A$ as $n\to\infty$. The error will be of order $O(\frac{1}{n^{1/d}})$.
The second method is the Monte-Carlo approach. Let $u_1, \dots, u_n$ be $n$ points chosen randomly but uniformly over $[0,1]^d$. Let $A_{mc}=\frac{1}{n}\sum_{i=1}^n f(u_i)$. The central limit theorem tells me that $A_{mc} \to A$ as $n\to \infty$ and that $A_{mc}-A$ will be in the limit a gaussian random variable centered on $0$ with variance $O(\frac{1}{n})$. So with a high probability the error will be smaller than $\frac{C}{\sqrt{n}}$ where $C$ does not depend (much?) on $d$.
An obvious problem with the Riemann approach is that if I want to increase the number of points while keeping a regular grid I have to go from $n=k^d$ to $n=(k+1)^d$ which adds a lots of points. I do not have this problem with Monte-Carlo.
But if the number of points is fixed at $n$, does Monte-Carlo really yield better results than Riemann? It seems true in most cases. But I do not understand how chosing the points randomly can be better. Does anybody have an intuitive explanation for this?
|
I think it is not the case that random points perform better than selecting the points manually as done in the Quasi-Monte Carlo methods and the sparse grid method:
http://www.mathematik.hu-berlin.de/~romisch/papers/Rutg13.pdf
Also, in Monte Carlo methods one usually uses the random numbers to build an adaptive integration method.
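To experiment with the trade-off described in the question, here is a minimal comparison sketch (the test integrand and all names are my own choices):

    import random
    from itertools import product

    def f(x):  # an arbitrary smooth test integrand on [0,1]^d
        return sum(x) ** 2

    def riemann(f, d, k):
        # average of f over the regular grid of k^d cell midpoints
        grid = [(i + 0.5) / k for i in range(k)]
        return sum(f(p) for p in product(grid, repeat=d)) / k**d

    def monte_carlo(f, d, n):
        # average of f at n uniform random points in [0,1]^d
        return sum(f([random.random() for _ in range(d)]) for _ in range(n)) / n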
|
Help with a conditional probability problem There are 6 balls in a bag and they are numbered 1 to 6.
We draw two balls without replacement.
Is the probability of drawing a "6" followed by drawing an "even" ball the same as the probability of drawing an "even" ball followed by drawing a "6".
According to Bayes Theorem these two possibilities should be the same:
Pr(A and B) = Pr(A) x Pr(B∣A)
Pr(A and B) = Pr(B) x Pr(A∣B)
However, when I try to work this out I am getting two different probabilities, 2/30 and 3/30 for the two different scenarios listed above. The first scenario is fairly straight-forward to determine,
Pr(6) x Pr(even∣6 has already been drawn)
1/6 x 2/5 = 2/30
however, I think I am doing something wrong with the second scenario,
Pr(even) x Pr(6∣even has already been drawn)
3/6 x ?????
Any help would be greatly appreciated as this is really bugging me.
Thank you in advance....
|
There is no need to compute anything: all orders of drawing the balls are equally likely, so the two probabilities must coincide. (If you do want the missing factor: for the first ball to be even and the second to be the $6$, the first must be a $2$ or a $4$, so the probability is $\frac{2}{6}\times\frac{1}{5}=\frac{2}{30}$, matching the first scenario. Your $\frac{3}{30}$ overcounts by allowing the $6$ itself as the first even ball.)
|
Sum of greatest common divisors As usually, let $\gcd(a,b)$ be the greatest common divisor of integer numbers $a$ and $b$.
What is the asymptotics of
$$\frac{1}{n^2} \sum_{i=1}^{i=n} \sum_{j=1}^{j=n} \gcd(i,j)$$
as $n \to \infty?$
|
Of the lattice points in $[1,n] \times [1,n]$, a proportion $1-\frac 1{p^2}$ have no factor $p$ in the $\gcd$, $\frac 1{p^2}-\frac 1{p^4}$ have exactly the factor $p$ in the $\gcd$, $\frac 1{p^4}-\frac 1{p^6}$ have the factor $p^2$, $\frac 1{p^6}-\frac 1{p^8}$ have the factor $p^3$, and so on. That means that a prime $p$ contributes a factor $\left(1-\frac 1{p^2}\right)+p\left(\frac 1{p^2}-\frac 1{p^4}\right)+p^2\left(\frac 1{p^4}-\frac 1{p^6}\right)+\dots$, or $\sum_{i=0}^\infty(p^{-i}-p^{-i-2})=\sum_{i=0}^\infty p^{-i}(1-p^{-2})=\frac {1-p^{-2}}{1-p^{-1}}=1+\frac 1p$. I don't know how to justify using the fact that $\gcd$ is multiplicative to turn this into $$\lim_{m \to \infty}\prod_{p \text{ prime}}^m\left(1+\frac 1p\right)$$ to get the asymptotics, but it seems like it should work by taking, say, $m=\sqrt n$ and letting $n \to \infty$.
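The heuristic is easy to test numerically; a small sketch (if the average grows like $C\log n$, the printed ratio should level off):

    from math import gcd, log

    def avg_gcd(n):
        return sum(gcd(i, j) for i in range(1, n + 1)
                   for j in range(1, n + 1)) / n**2

    for n in (50, 100, 200, 400):
        a = avg_gcd(n)
        print(n, a, a / log(n))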
|
$u,v$ are harmonic conjugates of each other in some domain Suppose $u,v$ are harmonic conjugates of each other in some domain; then we need to show that
$u,v$ must be constant.
as $v$ is harmonic conjugate of $u$ so $f=u+iv$ is analytic.
as $u$ is harmonic conjugate of $v$ so $g=v+iu$ is analytic.
$f-ig=2u$ and $f+ig=2iv$ are analytic, but from here how do I conclude that $u,v$ are constant? Well, I know they are real-valued functions, so by the open mapping theorem they are constant?
|
Your proof is correct. I add some remarks:
1. $v$ is a conjugate of $u$ if and only if $-u$ is a conjugate of $v$ (since $u+iv$ and $v-iu$ are constant multiples of each other).
2. Since the harmonic conjugate is unique up to an additive constant, the assumption that $u$ is a conjugate of $v$ implies (because of 1) that $u=-u+\text{const}$, and the conclusion follows.
3. Related to 1: the Hilbert transform $H$ satisfies $H\circ H=-\text{id}$.
|
Cantor's Diagonal Argument Why can Cantor's diagonal argument, which proves that the set of real numbers is not countable, not be applied to the natural numbers? Indeed, if we cancel the "0." in the proof, the list contains all natural numbers, and the argument can be applied to this set.
|
How about this slightly different (but equivalent) form of the proof? I assume that you already agree that the natural numbers $\mathbb{N}$ are countable, and your question is with the real numbers $\mathbb{R}$.
Theorem: Let $S$ be any countable set of real numbers. Then there exists a real number $x$ that is not in $S$.
Proof: Cantor's Diagonal argument. Note that in this version, the proof is no longer by contradiction, you just construct an $x$ not in $S$.
Corollary: The real numbers $\mathbb{R}$ are uncountable.
Proof: The set $\mathbb{R}$ contains every real number as a member by definition. By the contrapositive of our Theorem, $\mathbb{R}$ cannot be countable.
Note that this formulation will not work for $\mathbb{N}$ because $\mathbb{N}$ is countable and contains all natural numbers, and thus would be an instant counterexample for the hypothesis of the natural number version of our Theorem.
|
Evaluating $\int_0^\infty \frac{dx}{1+x^4}$. Can anyone give me a hint to evaluate this integral?
$$\int_0^\infty \frac{dx}{1+x^4}$$
I know it will involve the gamma function, but how?
|
Following is a computation that uses Gamma function:
For any real number $k > 1$, let $I_k$ be the integral:
$$I_k = \int_0^\infty \frac{dx}{1+x^k}$$
Consider two steps in changing the variable. First by $y = x^k$ and then by $z = \frac{y}{1+y}$. Notice:
$$\frac{1}{1+y} = 1 - z,\quad y = \frac{z}{1-z}\quad\text{ and }\quad dy = \frac{dz}{(1-z)^2}$$
We get:
$$\begin{align}
I_k = & \int_0^{\infty}\frac{1}{1 + y} d y^{\frac{1}{k}} = \frac{1}{k}\int_0^\infty \frac{1}{1+y}y^{\frac{1}{k}-1} dy\\
= & \frac{1}{k}\int_0^1 (1-z) \left(\frac{z}{1-z}\right)^{\frac{1}{k}-1} \frac{dz}{(1-z)^2}
= \frac{1}{k}\int_0^1 z^{\frac{1}{k}-1} (1-z)^{-\frac{1}{k}} dz\\
= & \frac{1}{k} \frac{\Gamma(\frac{1}{k})\Gamma(1 - \frac{1}{k})}{\Gamma(1)}
= \frac{\pi}{k \sin\frac{\pi}{k}}
\end{align}$$
For $k = 4$, we get:
$$I_4 = \int_0^\infty \frac{dx}{1+x^4} = \frac{\pi}{4\sin \frac{\pi}{4}} = \frac{\pi}{2\sqrt{2}}$$
|
My text says$ \left\{\begin{pmatrix}a&a\\a&a\end{pmatrix}:a\ne0,a\in\mathbb R\right\}$ forms a group under matrix multiplication. My text says$$\left\{\begin{pmatrix}a&a\\a&a\end{pmatrix}:a\ne0,a\in\mathbb R\right\}$$ forms a group under matrix multiplication.
But I can see $I\notin$ the set and so not a group.
Am I right?
|
It's important to note that this set of matrices forms a group but it does NOT form a subgroup of the matrix group $GL_2(\mathbb{R})$ (the group we are most familiar with as being a matrix group - the group of invertible $2\times 2$ matrices) as no elements in this set have non-zero determinant. In particular, we are looking at a subset of $Mat(\mathbb{R},2)$ which is disjoint from $GL_2(\mathbb{R})$.
The identity of the group will then be the matrix $\pmatrix{\frac{1}{2}&\frac{1}{2}\\ \frac{1}{2}&\frac{1}{2}}$ and the inverse of the element $\pmatrix{a&a\\ a&a}$ will be $\dfrac{1}{4}\pmatrix{a^{-1}&a^{-1}\\ a^{-1}&a^{-1}}$ (you should check this).
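For the record, the closure check is immediate:
$$
\pmatrix{a&a\\ a&a}\pmatrix{b&b\\ b&b}=\pmatrix{2ab&2ab\\ 2ab&2ab},
$$
so the product stays in the set; the identity condition $2ae=a$ gives $e=\frac12$, and $2a\cdot\frac{1}{4a}=\frac12$ confirms the inverse stated above.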
|
Domain and range of a function with multiple non-connected branches? How do you find the domain and range of a function whose graph has multiple non-connected branches?
Such as, $ f(x)=\sqrt{x^2-1}$. Its graph looks like this:
I want to know how you would write this in set/interval notation, e.g. $(-\infty, \infty)$.
P.S. help me out with the title. Not sure how to describe this.
|
You can find the domain of the function by simply analyzing its behavior. For
$$
f(x) = \sqrt{x^2-1}
$$
you can conclude that the expression under the square root must be non-negative. So
$$
x^2-1 \ge 0 \\
(x-1)(x+1) \ge 0 \\
x \in (-\infty, -1] \cup [1, +\infty)
$$
The latter is your domain: $D[f] = (-\infty, -1] \cup [1, +\infty)$.
Finding the range of a function is sometimes trickier, but not so much for this particular one. Again, you have to observe its behavior. It's a square root, and a square root in real analysis can take any non-negative value, so the range is $E[f] = [0, +\infty)$.
|
Yet another $\sum = \pi$. Need to prove. How could one prove that
$$\sum_{k=0}^\infty \frac{2^{1-k} (3-25 k)(2 k)!\,k!}{(3 k)!} = -\pi$$
I've seen similar series, but none like this one...
It seems irreducible in its current form, and I have no idea what kind of transformation might aid in finding a proof.
|
Use the Beta function, I guess... For $k \ge 1$,
$$
\int_0^1 t^{2k}(1-t)^{k-1}dt = B(k,2k+1) = \frac{(k-1)!(2k)!}{(3k)!}
$$
So write
$$
f(x) = \sum_{k=0}^\infty \frac{2(3-25k)k!(2k)!}{(3k)!}x^k
$$
and compute $f(1/2)$ like this:
$$\begin{align}
f(x) &= 6+\sum_{k=1}^\infty \frac{(6-50k)k(k-1)!(2k)!}{(3k)!} x^k
\\ &= 6+\sum_{k=1}^\infty (6-50k)k x^k\int_0^1 t^{2k}(1-t)^{k-1}dt
\\ &= 6+\int_0^1\sum_{k=1}^\infty (6-50k)k x^k t^{2k}(1-t)^{k-1}\;dt
\\ &= 6+\int_0^1 \frac{4t^2x(14t^3x-14t^2x-11)}{(t^3x-t^2x+1)^3}\;dt
\\ f\left(\frac{1}{2}\right) &=
6+\int_0^1\frac{16t^2(7t^3-7t^2-11)}{(t^3-t^2+2)^3}\;dt
= -\pi
\end{align}$$
......
of course any calculus course teaches you how to integrate a rational function...
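...but if you just want to see the value, a minimal numerical check in R (the terms decay fast, so 40 of them are plenty, and the factorials stay inside double precision):

    k <- 0:40
    sum(2^(1 - k) * (3 - 25*k) * factorial(2*k) * factorial(k) / factorial(3*k))
    -pi  # the two printed values should agree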
|
Why $x^2 + 7$ is the minimal polynomial for $1 + 2(\zeta + \zeta^2 + \zeta^4)$?
Why $f(x) = x^2 + 7$ is the minimal polynomial for $1 + 2(\zeta + \zeta^2 + \zeta^4)$ (where $\zeta = \zeta_7$ is a primitive root of the unit) over $\mathbb{Q}$?
Of course it's irreducible by the Eisenstein criterion; however, it apparently does not have $1 + 2(\zeta + \zeta^2 + \zeta^4)$ as a root. I tried to calculate it several times, but I couldn't get $f(1 + 2(\zeta + \zeta^2 + \zeta^4)) = 0$.
Thanks in advance.
|
If you don't already know the minimal polynomial, you can find it with Galois theory. The given element lies in the cyclotomic field $\mathbb{Q}(\zeta)$, and its conjugates are exactly the roots of the minimal polynomial. In fact, there is only one other conjugate, obtained for example by applying the automorphism $\zeta \mapsto \zeta^3$, i.e. cubing each root of unity in the expression. So $1+2(\zeta^{3}+\zeta^{5}+\zeta^{6})$ is also a root, and there are no others. Call these $r_1$ and $r_2$. The minimal polynomial must be $(x-r_1)(x-r_2)$. The sum of the roots is zero, so we only need to compute the product, which is easily found to equal $7$.
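To spell out that last computation, using only $1+\zeta+\zeta^2+\cdots+\zeta^6=0$: write $A=\zeta+\zeta^2+\zeta^4$ and $B=\zeta^3+\zeta^5+\zeta^6$, so that $A+B=-1$; expanding the nine products and reducing exponents mod $7$ gives $AB=3+(\zeta+\zeta^2+\cdots+\zeta^6)=2$. Hence
$$r_1+r_2=2+2(A+B)=0,\qquad r_1r_2=(1+2A)(1+2B)=1+2(A+B)+4AB=1-2+8=7,$$
so the minimal polynomial is indeed $x^2+7$.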
|
Making the exponent of $a^x$ the subject of the formula. Is it possible to make a variable the subject of a formula when it is an exponent in the equation? For example:
$$y=a^x\quad a\;\text{is constant}$$
For example, let the constant $a = 5.$
$$
\begin{array}{c|l}
\text{x} & \text{y} \\
\hline
1 & 5 \\
2 & 25 \\
3 & 125 \\
4 & 625 \\
5 & 3125 \\
\end{array}
$$
I cannot find the relation between $x$ and $y$. The constant is making the equation a bit complicated. I would appreciate it if someone could help me here.
|
Try taking the natural log "ln" of each side of your equation:
$$y = a^x \implies \ln y = \ln\left(a^x\right) = x \ln a \iff x = \dfrac{\ln y}{\ln a}$$
If $a = 5$, then we have $$x = \dfrac{\ln y}{\ln 5}$$
This gives us an equation with $x$ expressed in terms of $y$. $\;\ln a = \ln 5$ is simply a constant so $\dfrac 1{\ln 5}$ would be the coefficient of $\ln y$.
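A quick check against the table above in R (assuming $a = 5$; the vector is just the $y$ column):

    y <- c(5, 25, 125, 625, 3125)
    log(y) / log(5)  # recovers x = 1 2 3 4 5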
|
Maximal ideals in rings of polynomials Let $k$ be a field and $D = k[X_1, . . . , X_n]$ the polynomial ring in $n$ variables over $k$.
Show that:
a) Every maximal ideal of $D$ is generated by $n$ elements.
b) If $R$ is ring and $\mathfrak m\subset D=R[X_1,\dots,X_n]$ is maximal ideal such that $\mathfrak m \cap R$ is maximal and generated by $s$ elements, then $\mathfrak m$ is generated by $s + n$ elements.
I have been trying to solve this for days. Help me.
|
The answer to question a) can be found as Corollary 12.17 in these (Commutative Algebra) notes. The proof there is left as an exercise, but it amounts to collecting together the previous results in the section.
(As Patrick DaSilva has mentioned, as written your question b) follows trivially from part a). I'm guessing it's not what you meant to ask.)
|
Substituting an equation into itself, why such erratic behavior? Until now, I thought that substituting an equation into itself would $always$ yield $0=0$. What I mean by this is, for example, if I have $3x+4y=5$ and I substitute $y=\dfrac {5-3x}{4}$, I will eventually end up with $0=0$. However, consider the equation $\large{\sqrt {x+1}+\sqrt{x+2}=1}$. If we multiply by the conjugate, we get $\dfrac {-1}{\sqrt{x+1}-\sqrt{x+2}}=1$, or $\large{\sqrt{x+2}-\sqrt{x+1}=1}$. Now we can set this equation equal to the original, so $\sqrt{x+2}-\sqrt{x+1}=\sqrt {x+1}+\sqrt{x+2}$, and you get $0=2 \sqrt{x+1}$, which simplifies to $x=-1$, which is actually a valid solution to the original! So how come I am not getting $0=0$, but am actually getting useful information out of this? Is there something inherently wrong with this? Thanks.
|
You didn't actually substitute anything (namely a solution for $x$) into the original equation; if you did that, the $x$ would disappear. Instead you combined the equation with a modified form of itself to obtain a new equation that is implied by the original one; the new equation may or may not have retained all the information from the original one. As it happens, the new equation has a unique solution and it also solves the original equation; this shows the new equation implies the original one, and you did not in fact lose any information.
If you consider the operation of just adding a multiple of an equation to itself, you can see what can happen: in most cases you get something equivalent to the original equation, but if the multiple happened to be by a factor $-1$, then you are left with $0=0$ and you have lost (in this case all of) the information contained in the equation.
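(For completeness, the check that the recovered value really does solve the original equation: at $x=-1$,
$$\sqrt{x+1}+\sqrt{x+2}=\sqrt{0}+\sqrt{1}=0+1=1,$$
as required.)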
|
how to prove a parametric relation to be a function For example, let's suppose I am given the functions $f:\mathbb{R}\longrightarrow \mathbb{R}$ and $g:\mathbb{R}\longrightarrow \mathbb{R}$, and my relation is $R=\{(x,(y,z))\in \mathbb{R}\times \mathbb{R}^{2}: y=f(x) \wedge z=g(x)\}$. How do I prove formally (from a set-theoretic standpoint) that $R$ is a function? Here is my attempt, but I'm not convinced:
Suppose we have $(x,(y,z))\in R$ and also $(x,(y',z'))\in R$. Then $y=f(x)$, $z=g(x)$ and also $y'=f(x)$, $z'=g(x)$ by definition. Then $y=y'$ and $z=z'$. Therefore $(y,z)=(y',z')$.
In a more general case, if I have functions $f_1, f_2,\dots,f_n:\mathbb{R}^m\longrightarrow \mathbb{R}$ and I define the function $f:\mathbb{R}^{m}\longrightarrow\mathbb{R}^{n}$ by $f(x_{1},x_{2},\dots,x_{m})=(f_{1}(y),f_{2}(y),\dots,f_{n}(y))$ with $y=(x_1,x_2,\dots,x_m)$, how do I justify that it is indeed a function? If my attempt above is fine, I suppose this can be done by induction. Any comment will be appreciated.
|
Let's prove a vastly more general statement.
Let $I$ be an index set, and for every $i\in I$ let $f_i$ be a function. Then the relation $F$ defined on $X=\bigcap_{i\in I}\operatorname{dom}(f_i)$ by $F=\{\langle x,\langle f_i(x)\mid i\in I\rangle\rangle\mid x\in X\}$ is a function.
Proof. Let $x\in X$, and suppose that $\langle x,\langle y_i\mid i\in I\rangle\rangle,\langle x,\langle z_i\mid i\in I\rangle\rangle\in F$, then for every $i\in I$ we have $y_i=f_i(x)=z_i$, therefore the sequences are equal and $F$ is a function. $\square$
Now you care about the case where $\operatorname{dom}(f_i)$ are all equal, so the intersection creating $X$ is trivial.
|