Prove that certain elements are not in some ideal I have the following question: Is there a simple way to prove that $x+1 \notin \langle2, x^2+1\rangle_{\mathbb{Z}[x]}$ and $x-1 \notin \langle2, x^2+1\rangle_{\mathbb{Z}[x]}$ without using the fact that $\mathbb{Z}[x]/\langle2, x^2+1\rangle$ is an integral domain? Thanks for the help.
First a comment: it looks like the goal of the problem is to show that $(2,x^2+1)$ isn't a prime ideal (and hence the quotient isn't a domain), because $(x+1)^2=x^2+1+2x$ is in the ideal, and we hope to show that $x+1$ isn't in the ideal. If $x+1\in (2,x^2+1)$, then we would be able to find two polynomials in $\Bbb Z[x]$, say $a$ and $b$, such that $x+1=2a+(x^2+1)b$. Looking at the equation mod 2, you get $x+1=(x^2+1)\overline{b}$, where all the terms are in $\Bbb F_2[x]$. But since $\Bbb F_2[x]$ is a domain, the degrees on both sides have to match. If $\deg(b)>0$, then the degree of the right-hand side would be at least 3, and even if the degree of $b$ were 0, the right-hand side would have degree 2. It is impossible, then, for such an expression to equal $x+1$. By this contradiction, we conclude $x+1$ is not in the ideal. Finally, you can note that $x+1$ is in the ideal iff $x-1$ is, since $x-1+2=x+1$. Thus we have shown that $(2,x^2+1)$ is not a prime ideal, and it isn't even a semiprime ideal.
Evaluating $\int_{0}^{1} \frac{\ln^{n} x}{(1-x)^{m}} \, \mathrm dx$ On another site, someone asked about proving that $$ \int_{0}^{1} \frac{\ln^{n}x}{(1-x)^{m}} \, dx = (-1)^{n+m-1} \frac{n!}{(m-1)!} \sum_{j=1}^{m-1} (-1)^{j} s (m-1,j) \zeta(n+1-j), \tag{1} $$ where $n, m \in \mathbb{N}$, $n \ge m$, $m \ge 2$, and $s(m-1,j)$ are the Stirling numbers of the first kind. My attempt at proving $(1)$: $$ \begin{align}\int_{0}^{1} \frac{\ln^{n}x}{(1-x)^{m}} \, dx &= \frac{1}{(m-1)!} \int_{0}^{1} \ln^{n} x \sum_{k=m-1}^{\infty} k(k-1) \cdots (k-m+2) \ x^{k-m+1} \ dx \\ &= \frac{1}{(m-1)!} \sum_{k=m-1}^{\infty} k(k-1) \cdots (k-m+2) \int_{0}^{1} x^{k-m+1} \ln^{n} x \, dx \\ &= \frac{1}{(m-1)!} \sum_{k=m-1}^{\infty} k(k-1) \cdots (k-m+2) \frac{(-1)^{n} n!}{(k-m+2)^{n+1}}\\ &= \frac{1}{(m-1)!} \sum_{k=m-1}^{\infty} \sum_{j=0}^{m-1} s(m-1,j) \ k^{j} \ \frac{(-1)^{n} n!}{(k-m+2)^{n+1}} \\ &= (-1)^{n} \frac{n!}{(m-1)!} \sum_{j=0}^{m-1} s(m-1,j) \sum_{k=m-1}^{\infty} \frac{k^{j}}{(k-m+2)^{n+1}} \end{align} $$ But I don't quite see how the last line is equivalent to the right side of $(1)$. (Wolfram Alpha does say they are equivalent for at least a few particular values of $m$ and $n$.)
Substitute $x=e^{-t}$ and get that the integral is equal to $$(-1)^n \int_0^{\infty} dt \, e^{-t} \frac{t^n}{(1-e^{-t})^m} $$ Now use the expansion $$(1-y)^{-m} = \sum_{k=0}^{\infty} \binom{m+k-1}{k} y^k$$ and reverse the order of summation and integration to get $$\sum_{k=0}^{\infty} \binom{m+k-1}{k} \int_0^{\infty} dt \, t^n \, e^{-(k+1) t}$$ I then get as the value of the integral: $$\int_0^1 dx \, \frac{\ln^n{x}}{(1-x)^m} = (-1)^n\, n!\, \sum_{k=0}^{\infty} \binom{m+k-1}{k} \frac{1}{(k+1)^{n+1}}$$ Note that when $m=0$, the sum reduces to $1$; every term in the sum save that at $k=0$ is zero. Note also that this sum gives you the ability to express the integral in terms of a Riemann zeta function for various values of $m$, which will provide the Stirling coefficients.
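For readers who want to sanity-check the closed form numerically, here is a minimal sketch using mpmath; the test values $n=4$, $m=3$ are arbitrary choices, not from the original posts:

```python
# Sanity check (not a proof): compare the integral with the series
# (-1)^n * n! * sum_k C(m+k-1, k) / (k+1)^(n+1) for sample n, m.
from mpmath import mp, quad, nsum, binomial, factorial, log, inf

mp.dps = 25
n, m = 4, 3  # arbitrary test values with n >= m >= 2
lhs = quad(lambda x: log(x)**n / (1 - x)**m, [0, 1])
rhs = (-1)**n * factorial(n) * nsum(
    lambda k: binomial(m + k - 1, k) / (k + 1)**(n + 1), [0, inf])
print(lhs)
print(rhs)  # the two printed values should agree
```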
The integral over a subset is smaller? In a previous question I had $A \subset \bigcup_{k=1}^\infty R_k$, where the $R_k$ are rectangles in $\Bbb{R}^n$. I then proceeded to use the following inequality, $\left|\int_A f\right| \le \left|\int_{\bigcup_{k=1}^\infty R_k} f \right|$, which I am not really certain of. Does anyone know how to prove it? If it's wrong, what similar inequality should I use to prove the result?
The inequality is not true in general (think of an $f$ that is positive on $A$ but negative outside $A$). But it does hold if $f\geq 0$. This is not an obstacle to you using it, because you would just have to split your function into its positive and negative parts.
Normal form of a vector field in $\mathbb {R}^2$ Edited after considering the comments Problem: What is the normal form of the vector field: $$\dot x_1=x_1+x_2^2$$ $$\dot x_2=2x_2+x_1^2$$ Solution: The eigenvalues of the matrix of the system linearised around $(0,0)$ are $2$ and $1$. We therefore have the only resonance $2=2\cdot 1+0\cdot 2$. The resonant vector monomial is $(0,x_1^2)$. The normal form is then $$\dot x_1=x_1$$ $$\dot x_2=2x_2+cx_1^2$$ Question: I believe this is correct, is it not?
I would use $y$ instead of $x$ in the normal form, since these are not the same variables. Otherwise, what you did is correct. (I don't know if the problem required the identification of a transformation between $x$ and $y$.)
Proof that $1/\sqrt{x}$ is itself its sine and cosine transform As far as I understand, I have to calculate the integrals $$\int_{0}^{\infty} \frac{1}{\sqrt{x}}\cos \omega x \operatorname{d}\!x$$ and $$\int_{0}^{\infty} \frac{1}{\sqrt{x}}\sin \omega x \operatorname{d}\!x$$ Am I right? If yes, could you please help me integrate those? And if not, could you please explain it to me. EDIT: Only knowledge of basic principles and definitions is supposed to be used.
Let $$I_1(\omega)=\int_0^\infty \frac{1}{\sqrt{x}}\cdot \cos (\omega\cdot x)\space dx,$$ and $$I_2(\omega)=\int_0^\infty \frac{1}{\sqrt{x}}\cdot\sin (\omega\cdot x)\space dx.$$ Let $x=t^2/\omega$ such that $dx=2t/\omega\space dt$, where $t\in [0,\infty)$. It follows that $$I_1(\omega)=\frac{2}{\sqrt{\omega}}\cdot\int_0^\infty \cos (t^2)\space dt,$$ and $$I_2(\omega)=\frac{2}{\sqrt{\omega}}\cdot\int_0^\infty \sin (t^2)\space dt.$$ Recognize that both integrands are even and exploit symmetry. It follows that $$I_1(\omega)=\frac{1}{\sqrt{\omega}}\cdot\int_{-\infty}^{\infty} \cos (t^2)\space dt,$$ and $$I_2(\omega)=\frac{1}{\sqrt{\omega}}\cdot\int_{-\infty}^{\infty} \sin (t^2)\space dt.$$ Establish the equation $$I_1(\omega)-i\cdot I_2(\omega)=\frac{1}{\sqrt{\omega}}\cdot\int_{-\infty}^{\infty} (\cos (t^2)-i\cdot\sin (t^2))\space dt.$$ Applying Euler's formula in complex analysis gives $$I_1(\omega)-i\cdot I_2(\omega)=\frac{1}{\sqrt{\omega}}\cdot\int_{-\infty}^{\infty} e^{-i\cdot t^2}dt.$$ Let $t=i^{-1/2}\cdot u$ such that $dt=i^{-1/2}\space du$, where $u\in(-\infty,\infty)$: $$I_1(\omega)-i\cdot I_2(\omega)=\frac{i^{-1/2}}{\sqrt{\omega}}\cdot \int_{-\infty}^{\infty} e^{-u^2}du.$$ Evaluate the Gaussian integral: $$I_1(\omega)-i\cdot I_2(\omega)=i^{-1/2}\cdot \frac{\sqrt{\pi}}{\sqrt{\omega}}.$$ Make use of the general properties of the exponential function and logarithms in order to rewrite $i^{-1/2}$: $$I_1(\omega)-i\cdot I_2(\omega)=e^{\ln(i^{-1/2})}\cdot \frac{\sqrt{\pi}}{\sqrt{\omega}}=e^{-1/2\cdot \ln(i)}\cdot \frac{\sqrt{\pi}}{\sqrt{\omega}}.$$ In cartesian form, $i=0+i\cdot 1$. Therefore, in polar form, $i=1\cdot e^{i\cdot \pi/2}$. Taking the natural logarithm on both sides gives $\ln(i)=i\cdot\pi/2$. Substitution into the equation gives $$I_1(\omega)-i\cdot I_2(\omega)=e^{-i\cdot \pi/4}\cdot \frac{\sqrt{\pi}}{\sqrt{\omega}}.$$ Applying Euler's formula in complex analysis gives $$I_1(\omega)-i\cdot I_2(\omega)=(\cos(\pi/4)-i\cdot \sin(\pi/4))\cdot \frac{\sqrt{\pi}}{\sqrt{\omega}}=(\frac{1}{\sqrt{2}}-i\cdot \frac{1}{\sqrt{2}})\cdot \frac{\sqrt{\pi}}{\sqrt{\omega}}.$$ Expanding the terms reveals that $$I_1(\omega)=I_2(\omega)=\sqrt{\frac{\pi}{2\cdot\omega}}.$$
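The two Fresnel integrals appearing midway through this derivation can be checked numerically with mpmath's quadosc (a sanity check only, not part of the proof); both should equal $\sqrt{\pi/8}$, which then yields $I_1(\omega)=I_2(\omega)=\sqrt{\pi/(2\omega)}$:

```python
# Numerically verify int_0^inf cos(t^2) dt = int_0^inf sin(t^2) dt = sqrt(pi/8).
from mpmath import mp, quadosc, sqrt, pi, cos, sin, inf

mp.dps = 15
# zeros=... tells quadosc where the oscillations change sign (roughly t = sqrt(n*pi)).
C = quadosc(lambda t: cos(t**2), [0, inf], zeros=lambda n: sqrt(n * pi))
S = quadosc(lambda t: sin(t**2), [0, inf], zeros=lambda n: sqrt(n * pi))
print(C, S, sqrt(pi / 8))  # all three should agree
```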
What makes $5$ and $6$ so special that taking powers doesn't change the last digit? Why are $5$ and $6$ (and numbers ending with these respective last digits) the only (nonzero, non-one) numbers such that no matter what (integer) power you raise them to, the last digit stays the same? (by the way please avoid modular arithmetic) Thanks!
Let $x$ be any number. If $x^2$ ends with $x$, then $x$ raised to any positive integer power will also end with $x$. For $x^2$ to end with $x$, the product $x(x-1)$ has to be a multiple of $10$ raised to the number of digits in $x$ (e.g. if $x = 5$, then $10^1$; if $x = 25$, then $10^2$). Following this procedure, I have also come up with $25$ and $625$, which end with themselves when raised to any positive integer power.
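A brute-force search makes this concrete; a quick illustrative sketch, with the search bound $10^6$ chosen arbitrarily:

```python
# Find all x > 1 whose square ends in x itself ("automorphic" numbers).
hits = [x for x in range(2, 10**6)
        if x * x % 10**len(str(x)) == x]
print(hits)  # 5, 6, 25, 76, 376, 625, 9376, ... (last digit is always 5 or 6)
```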
Hopf-Rinow Theorem for Riemannian Manifolds with Boundary I am a little rusty on my Riemannian geometry. In addressing a problem in PDE's I came across a situation that I cannot reconcile with the Hopf-Rinow Theorem. If $\Omega \subset \mathbb{R}^n$ is a bounded, open set with smooth boundary, then $\mathbb{R}^n - \Omega$ is a Riemannian manifold with smooth boundary. Since $\mathbb{R}^n - \Omega$ is closed in $\mathbb{R}^n$, it follows that $\mathbb{R}^n - \Omega$ is a complete metric space. However, the Hopf-Rinow Theorem seems to indicate that $\mathbb{R}^n - \Omega$ (endowed with the usual Euclidean metric) is not a complete metric space since not all geodesics $\gamma$ are defined for all time. Am I missing something here? Do the hypotheses of the Hopf-Rinow theorem have to be altered to accommodate manifolds with boundary?
Indeed, the Hopf-Rinow theorem concerns Riemannian manifolds without boundary.
Prove that $V$ is the direct sum of $W_1, W_2 ,\dots , W_k$ if and only if $\dim(V) = \sum_{i=1}^k \dim W_i$ Let $W_1,\dots,W_k$ be subspaces of a finite dimensional vector space $V$ such that $\sum_{i=1}^k W_i = V$. Prove: $V$ is the direct sum of $W_1, W_2 , \dots, W_k$ if and only if $\dim(V) = \sum_{i=1}^k \dim W_i$.
Let $B_i$ be a basis of $W_i$ for $i \in \{1,\dots,k\}$. Since $\sum_{i=1}^k W_i = V$, we know that $\bigcup_{i=1}^k B_i$ is a spanning list of $V$. Now add the condition $\sum_{i=1}^k |B_i| = \sum_{i=1}^k \dim(W_i) = \dim(V)$. This means that $\bigcup_{i=1}^k B_i$ is a basis (a spanning list of $V$ whose length equals $\dim V$). Can you conclude the direct sum property from here? Conversely, defining the bases $B_i$ as before, if you know that the sum of the $W_i$ is direct, can you conclude (using the given fact that $\sum_{i=1}^k W_i = V$) that $\bigcup_{i=1}^k B_i$ is a basis of $V$?
What does it mean when one says "$B$ has no limit point in $A$"? If $B$ is a subset of a set $A$, what does the sentence "$B$ has no limit points in $A$" mean? I am aware that $x$ is a limit point of $A$, if for every neighbourhood $U$ of $x$, $(U-\{x\})\cap A$ is non-empty. Please let me know. Thank you.
Giving an example may be helpful for you. Example: Let $X=\mathbb R$ with usual topology. $A=\{x\in \mathbb R: x > 0\} \subseteq X$ and $B=\{1, \frac12, \frac13,... \frac1n,...\} \subset A$. It is not difficult to see that $B$ has no limit points in $A$ since the unique limit point 0 of $B$ is not in $A$.
Proving that ${n}\choose{k}$ $=$ ${n}\choose{n-k}$ I'm reading Lang's Undergraduate Analysis: Let ${n}\choose{k}$ denote the binomial coefficient, $${n\choose k}=\frac{n!}{k!(n-k)!}$$ where $n,k$ are integers $\geq0,0\leq k\leq n$, and $0!$ is defined to be $1$. Prove the following assertion: $${n\choose k}={n\choose n-k}$$ I proceeded by making the adequate substitutions: $${n\choose n-k}=\frac{n!}{\color{red}{(n-k)}!(n-\color{red}{(n-k)})!}$$ And then I simplified and achieved: $$\frac{n!}{(n-k)!-k!}$$ But I'm not sure how to proceed from here; I've noticed that this result is very similar to $\frac{n!}{k!(n-k)!}$. What should I do? I guess it has something to do with the statement about the nature of $n$ and $k$: $n,k$ are integers $\geq0,0\leq k\leq n$ So should I just change the minus sign to a plus sign and think of it as a product of $(n-k)!$? $$\frac{n!}{(n-k)!-k!}\Rightarrow\frac{n!}{(n-k)!+k!}\Rightarrow \frac{n!}{k!(n-k)!}$$ I'm in doubt because I've obtained the third result on Mathematica, but obtained the first with paper and pencil. I'm not sure if there are different rules for simplification with factorials. I'm not sure if this $(n-k)!+k!$ means a sum or a product in this case.
$$ \binom{n}{n-k}=\frac{n!}{(n-k)!\,(n-(n-k))!}=\frac{n!}{(n-k)!\,k!}=\binom{n}{k} $$
Onto and one-to-one Let $T$ be a linear operator on a finite dimensional inner product space $V$. If $T$ has an eigenvector, then so does $T^*$. Proof. Suppose that $v$ is an eigenvector of $T$ with corresponding eigenvalue $\lambda$. Then for any $x \in V$, $$ 0 = \langle0,x\rangle = \langle(T-\lambda I)v,x\rangle = \langle v, (T-\lambda I)^*x\rangle = \langle v,(T^*-\bar{\lambda} I)x\rangle $$ This means that $(T^*-\bar{\lambda} I)$ is not onto. WHY? (Of course the proof is not complete here.)
If you want to know why $T^{*}$ is not onto if $T$ is not onto, just observe that the matrix of $T^{*}$ is just the conjugate transpose of the matrix of $T$ and we know that a matrix and its conjugate transpose have the same rank (which is equal to the dimension of the range space).
Find an integrable $g(x,y) \ge |e^{-xy}\sin x|$ I want to use Fubini's theorem on $$\int_0^{A} \int_0^{\infty} e^{-xy}\sin x \,dy\, dx=\int_0^{\infty} \int_0^{A}e^{-xy}\sin x \,dx\, dy$$ I must verify that $\int_M |f|\,d(\mu \times \nu) < \infty$. I'm using the Lebesgue theorem, and so far I've come up with $g(x,y)=e^{-y}$, but I am not sure whether it's correct. My argument is that if $x\in (0,1)$ then the $\sin x$ part is going to ensure that the inequality holds.
Try $g(x,y)=x\mathrm e^{-xy}$; then $|\mathrm e^{-xy}\sin x|\leqslant g(x,y)$ for every nonnegative $x$ and $y$, since $|\sin x|\leqslant x$ for $x\geqslant 0$. Furthermore, $\int\limits_0^\infty g(x,y)\mathrm dy=1$ for every $x\gt0$, hence $\int\limits_0^A\int\limits_0^\infty g(x,y)\mathrm dy\mathrm dx=A$, which is finite.
Product of two functions converging in $L^1(X,\mu)$ Let $f_n\to f$ in $L^1(X,\mu)$, $\mu(X)<\infty$, and let $\{g_n\}$ be a sequence of measurable functions such that $|g_n|\le M<\infty\ \forall n$ with some constant $M$, and $g_n\to g$ almost everywhere. Prove that $g_nf_n\to gf$ in $L^1(X,\mu)$. This is a question from one of my past papers, but unfortunately there is no solution provided. Here is as far as I have gotten: $$\int|gf-g_nf_n|=\int|(f-f_n)g_n+(g-g_n)f|\le\int|f-f_n||g_n|+\int|g-g_n||f|$$ $$\le M\int|f-f_n|+\int|g-g_n||f|$$ We know that $f_n\to f$ in $L^1$, so $\int|f-f_n|\to 0$, and by Lebesgue's bounded convergence theorem it follows that $\int|g-g_n|\to 0$. But I am unsure whether this also implies $\int|g-g_n||f|\to0$.
Hint: Observe that $2M|f|$ is an integrable bound for $|g_n - g|\cdot |f|$ and the latter converges a.e. to $0$. Now apply the dominated convergence theorem.
Help: $\lim_{k \to \infty} \frac{1-e^{-kt}}{k}=?$ What is the limit of $\frac{1-e^{-kt}}{k}$ as $k \to \infty$? Is it just equal to $\frac{1}{\infty}=0$? Can anyone help? I am not sure whether we can apply L'Hopital's rule.
HINT: If $t=0$, then $e^{-kt}=1$. If $t>0$, then $\lim_{k\to\infty}e^{-kt}=0$. If $t<0$, write $t=-r^2$ (say); then $\lim_{k\to\infty}\frac{1-e^{-kt}}k=\lim_{k\to\infty}\frac{1-e^{kr^2}}k=\frac\infty\infty$ So, applying L'Hospital's rule, $\lim_{k\to\infty}\frac{1-e^{kr^2}}k=-r^2\cdot\lim_{k\to\infty}\frac{e^{kr^2}}1=-r^2\cdot\infty=-\infty$
definition of discriminant and traces of number field. Let $K$ be a number field, and let $A$ be the ring of integers of $K$. Let $(x_1,\cdots,x_n)\in A^n$. What does $D(x_1,\cdots,x_n)$ usually mean: $\det(Tr_{K/ \Bbb Q} (x_ix_j))$ or $\det(Tr_{A/ \Bbb Z} (x_ix_j))$? Or do they always have the same value? I searched some definitions, but it is not explicitly stated.
I'm not entirely certain what $\operatorname{Tr}_{A/\mathbb{Z}}$ is, but the notation $D(x_1,\dots,x_n)$ or $\Delta(x_1,\dots,x_n)$ usually means the discriminant of $K$ with respect to the basis $x_1,\dots,x_n$, so I would say it's most likely the former. After all, $x_1,\dots,x_n\in A\subset K$, so it still makes sense to talk about the trace of these as elements of $K$, and that is what the definition of the discriminant of $K$ with respect to a basis is (assuming that $x_1,\dots,x_n$ are indeed a basis for $K/\mathbb{Q}$!).
If $x\equiv y \pmod{\gcd(a,b)}$, show that there is a unique $z\pmod{\text{lcm}(a,b)}$ with $z\equiv x\pmod a$ and $z\equiv y\pmod b$ If $x\equiv y \pmod{\gcd(a,b)}$, show that there is a unique $z\pmod{\text{lcm}(a,b)}$ with $z\equiv x\pmod{a}$ and $z\equiv y\pmod{b}$ What I have so far: Let $z \equiv x\pmod{\frac{a}{\gcd(a,b)}}$ and $ z \equiv y\pmod b $. Then by the chinese remainder theorem there is a unique $z\pmod{\text{lcm}(a,b)}$ which satisfies this... Is this the right approach here? I can't figure out how to get from $$z \equiv x\pmod{\frac{a}{\gcd(a,b)}}$$ what I need.
Put $d=\gcd(a,b)$ and $\delta=x\bmod d=y\bmod d$ (here "mod" is the remainder operation). Then the numbers $x'=x-\delta$, $y'=y-\delta$ are both divisible by$~d$. In terms of a new variable $z'=z-\delta$ we need to solve the system $$ \begin{align}z'&\equiv x'\pmod a,\\z'&\equiv y'\pmod b.\end{align} $$ Since $x',y',a,b$ are all divisible by $d$, any solution $z'$ will have to be as well; therefore we can divide everything by$~d$, and the system is equivalent to $$ \begin{align}\frac{z'}d&\equiv \frac{x'}d\pmod{\frac ad},\\ \frac{z'}d&\equiv \frac{y'}d\pmod{\frac bd}.\end{align} $$ Here the moduli $\frac ad,\frac bd$ are relatively prime, so by the Chinese remainder theorem there is a solution $\frac{z'}d\in\mathbf Z$, and it is unique modulo $\frac ad\times\frac bd$. The solutions for $z'$ then form a single class modulo $\frac ad\times\frac bd\times d=\frac{ab}d=\operatorname{lcm}(a,b)$, and so do the solutions for $z=z'+\delta$.
find a group with the property : a) Find a nontrivial group $G$ such that $G$ is isomorphic to $G \times G$. What I'm sure of is that $G$ must be infinite! But I have no idea how to get or construct such a group; I chose many $G$'s, but none of the homomorphisms were injective. b) An infinite group in which every element has finite order, but for each positive integer $n$ there is an element of order $n$. The group $G = (Z_1 \times Z_2 \times Z_3 \times Z_4 \times \cdots) $ satisfies the conditions except the one which says that every element has finite order. How can we use this group to reach the asked-for group?
For the second problem, you can use the subgroup of $\mathbb{Z}_1\times \mathbb{Z}_2\times \cdots\times \mathbb{Z}_n \times \cdots$ consisting of all sequences $(a_1,a_2,a_3,\dots)$ such that all but finitely many of the $a_i$ are $0$.
Determine the matrix of a quadratic function I'm given a quadratic form $\Phi:\mathbb{R}^3\longrightarrow\mathbb{R}$, for which we know that:

* $(0,1,0)$ and $(0,1,-1)$ are conjugated by the function
* $(1,0,-1)$ belongs to the kernel
* $\Phi(0,0,1)=1$
* The trace is $0$

From here, I know the matrix must be symmetric, so it will have up to six unique numbers. I have four conditions that I can apply, and I will get four equations that relate the numbers of the matrix, but I still need two more to fully determine it. Applying the above I get that the matrix must be of the form: $$A=\pmatrix{2c-1 & b & c\\b&-2c&-2c\\c & -2c & 1}$$ How do I determine $b$ and $c$?
Ok, solved it. The last two equations come from knowing the vector that is in the kernel: it must hold that $f_p[(1,0,-1),(x,y,z)]=0$, where $f_p$ is the polar form of $\Phi$: $f_p(u,v)=\frac{1}{2}[\Phi(u+v)-\Phi(u)-\Phi(v)]$
Interesting Problems for NonMath Majors Sometime in the upcoming future, I will be doing a presentation as a college alumnus to a bunch of undergrads from an organization I was in during college. I did a dual major in mathematics and computer science; however, the audience that I am presenting to are not necessarily people who enjoy math. So, to get their attention, I was thinking about presenting an interesting problem in math, for example the birthday problem, to have them enjoy the field of math a little. I feel a question in the field of probability would interest them the most (due to its intuitiveness), although that's just a personal opinion. The audience studies a variety of majors, from the sciences to engineering to literature and the arts. So here's my question: besides the birthday problem, are there any other interesting problems that would be easy to understand for people who have limited knowledge of calculus, and that would, hopefully, get their attention and let them see math as an interesting subject? (It doesn't have to be in the field of probability.)
I think geometry is the most attractive area that a "non-mathematician" can enjoy, and I believe that's the idea Serge Lang had when he prepared his encounters with high school students and his public dialogues. I refer you to these two accounts of those events: The Beauty of Doing Mathematics: Three Public Dialogues and Math!: Encounters with High School Students. I hope you can access these two books, because I think they might provide something helpful for you.
How can I prove $2\sup(S) = \sup (2S)$? Let $S$ be a nonempty bounded subset of $\mathbb{R}$ and $T = \{2s : s \in S \}$. Show $\sup T = 2\sup S$ Proof Consider $2s = s + s \leq \sup S + \sup S = 2\sup S $. $T \subset S$ where T is also bounded, so applying the lub property, we must have $\sup T \leq 2 \sup S$. On the other hand $2s + s - s \leq \sup T + \sup S - 3\sup S \implies 2\sup S \leq 2s + 2\sup S \leq \sup T $. Which gives the desired result. Okay I am really worried about my other direction. Especially $2\sup S \leq 2s + 2\sup S$, do I know that $2s$ is positive? Also in the beginning, how do I know that $\sup S \leq 2 \sup S$? How do I know that the supremum is positive?
You can't assume $s$ is positive, nor can you assume $\sup S$ is positive. Your proof also assumes a couple of other weird things: * *$T \subset S$ is usually not true. *$2s + s - s \le \sup T + \sup S - 3\sup S$ is not necessarily true. Why would $-s \le -3\sup S$? The first part of your proof is actually correct, ignoring the $T \subset S$ statement. What you are saying is that any element of $T$, say the element $2s$, is bounded above by $2 \sup S$; thus $2 \sup S$ is an upper bound on $T$; thus $2 \sup S \ge \sup T$ by the least upper bound property. For the second part of the proof, you need to show that $\sup T \ge 2 \sup S$. To do this, you need to show that $\frac{\sup T}{2}$ is an upper bound on $S$. This will imply $\frac{\sup T}{2} \ge \sup S$ by least upper bound property.
How to calculate: $\sum_{n=1}^{\infty} n a^n$ I've tried to calculate this sum: $$\sum_{n=1}^{\infty} n a^n$$ The point of this is to try to work out the "mean" term in an exponentially decaying average. I've done the following: $$\text{let }x = \sum_{n=1}^{\infty} n a^n$$ $$x = a + a \sum_{n=1}^{\infty} (n+1) a^n$$ $$x = a + a (\sum_{n=1}^{\infty} n a^n + \sum_{n=1}^{\infty} a^n)$$ $$x = a + a (x + \sum_{n=1}^{\infty} a^n)$$ $$x = a + ax + a\sum_{n=1}^{\infty} a^n$$ $$(1-a)x = a + a\sum_{n=1}^{\infty} a^n$$ Let's try to work out the $\sum_{n=1}^{\infty} a^n$ part: $$\text{let }y = \sum_{n=1}^{\infty} a^n$$ $$y = a + a \sum_{n=1}^{\infty} a^n$$ $$y = a + ay$$ $$y - ay = a$$ $$y(1-a) = a$$ $$y = a/(1-a)$$ Substitute $y$ back in: $$(1-a)x = a + a*(a/(1-a))$$ $$(1-a)^2 x = a(1-a) + a^2$$ $$(1-a)^2 x = a - a^2 + a^2$$ $$(1-a)^2 x = a$$ $$x = a/(1-a)^2$$ Is this right, and if so is there a shorter way? Edit: To actually calculate the "mean" term of an exponential moving average we need to keep in mind that terms are weighted at the level of $(1-a)$. i.e. for $a=1$ there is no decay, for $a=0$ only the most recent term counts. So the above result we need to multiply by $(1-a)$ to get the result: Exponential moving average "mean term" = $a/(1-a)$ This gives the results, for $a=0$, the mean term is the "0th term" (none other are used) whereas for $a=0.5$ the mean term is the "1st term" (i.e. after the current term).
We give a mean proof, at least for the case $0\lt a\lt 1$. Suppose that we toss a coin that has probability $a$ of landing heads, and probability $1-a$ of landing tails. Let $X$ be the number of tosses until the first tail. Then $X=1$ with probability $1-a$, $X=2$ with probability $a(1-a)$, $X=3$ with probability $a^2(1-a)$, and so on. Thus $$E(X)=(1-a)+2a(1-a)+3a^2(1-a)+4a^3(1-a)\cdots.\tag{$1$}$$ Note that by a standard convergence test, $E(X)$ is finite. Let $b=E(X)$. On the first toss, we get a tail with probability $1-a$. In that case, $X=1$. If on the first toss we get a head, it has been a "wasted" toss, and the expected number of tosses until the first tail is $1+b$. Thus $$b=(1-a)(1)+a(1+b).$$ Solve for $b$. We get $b=\dfrac{1}{1-a}$. But the desired sum $a+2a^2+3a^3+\cdots$ is $\dfrac{a}{1-a}$ times the sum in $(1)$. Thus the desired sum is $\dfrac{a}{(1-a)^2}$.
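The closed form $a/(1-a)^2$ is also easy to check numerically; a minimal sketch comparing partial sums against it (the sample values of $a$ and the cutoff are arbitrary):

```python
# Compare partial sums of sum_{n>=1} n*a^n with the closed form a/(1-a)^2.
for a in (0.1, 0.5, 0.9):
    partial = sum(n * a**n for n in range(1, 5000))
    print(a, partial, a / (1 - a)**2)
```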
Proof identity of differential equation I would appreciate it if somebody could help me with the following problem: Q: Given that $f''(x)$ is continuous on $\mathbb{R}$, show that $$ \lim_{h\to 0}\frac{f(x+h)+f(x-h)-2f(x)}{h^2}=f''(x)$$
Use L'Hospital's rule $$ \lim_{h\to 0}\frac{F(h)}{G(h)}=\lim_{h\to 0}\frac{F'(h)}{G'(h)}$$ You can use the rule when you have a $\frac{0}{0}$ indeterminate form. You need to apply it twice, differentiating with respect to $h$: $$ \lim_{h\to 0}\frac{f(x+h)+f(x-h)-2f(x)}{h^2}= \lim_{h\to 0}\frac{f'(x+h)-f'(x-h)}{2h}=\lim_{h\to 0}\frac{f''(x+h)+f''(x-h)}{2}=f''(x)$$ where the last equality uses the continuity of $f''$.
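The convergence is also easy to see numerically; here is a small sketch using $f=\sin$ at $x=1$ (both arbitrary test choices), where the symmetric quotient visibly approaches $f''(1)=-\sin 1$:

```python
import math

# Symmetric second-difference quotient approaching f''(x) as h -> 0.
f = math.sin
x = 1.0
for h in (1e-1, 1e-2, 1e-3, 1e-4):
    approx = (f(x + h) + f(x - h) - 2 * f(x)) / h**2
    print(h, approx, -math.sin(x))  # converges to f''(1) = -sin(1)
```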
Find the polynomial $p(x)$ which has the following property Find the polynomial $p(x)=x^2+px+q$ for which $\max\{\:|p(x)|\::\:x\in[-1,1]\:\}$ is minimal. This is the 2nd exercise from a test I took, and I didn't know how to solve it. Any good explanations will be appreciated. Thanks!
Here's an informal argument that doesn't use calculus. Notice that $p(x)$ is congruent to $y = x^2$ (for example, simply complete the square). Now suppose that we chose our values for the coefficients $p,q$ carefully, and it resulted in producing the minimal value of $m$. Hence, we can think of the problem instead like this: By changing the vertex of $y=x^2$, what is the minimal value of $m$ such that for all $x\in [-1,1], -m \le p(x) \le m$? By symmetry, there are only two cases to consider (based on the location of the vertex). Case 1: Suppose the vertex is at $(1,-m)$ and that the parabola extends to the top left and passes through the point $(-1,m)$. Using vertex form, we have $p(x)=(x-1)^2-m$ and plugging in the second point yields $m=(-1-1)^2-m \iff 2m=4 \iff m = 2$. Case 2: Suppose that the vertex is at $(0, -m)$ and that the parabola extends to the top left and passes through the point $(-1,m)$. Using vertex form, we have $p(x)=x^2-m$ and plugging in the second point yields $m=(-1)^2-m \iff 2m = 1 \iff m = 1/2$. Since $m=1/2$ is smaller, we conclude that $\boxed{p(x)=x^2-\dfrac{1}{2}}$ is the desired polynomial.
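A coarse grid search over $(p,q)$ supports this conclusion numerically; a sketch only, with the grid bounds and resolution chosen arbitrarily:

```python
import numpy as np

# Grid-search min over (p, q) of max_{x in [-1,1]} |x^2 + p*x + q|.
xs = np.linspace(-1, 1, 2001)
best = min(
    (np.max(np.abs(xs**2 + p * xs + q)), p, q)
    for p in np.linspace(-1, 1, 81)
    for q in np.linspace(-1, 1, 81)
)
print(best)  # expect roughly (0.5, 0.0, -0.5), i.e. p(x) = x^2 - 1/2
```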
$y^2 - x^3$ not an embedded submanifold How can I show that the cuspidal cubic $y^2 = x^3$ is not an embedded submanifold of $\Bbb{R}^2$? By embedded submanifold I mean a topological manifold in the subspace topology equipped with a smooth structure such that the inclusion of the curve into $\Bbb{R}^2$ is a smooth embedding. I don't even know where to start, please help me. All the usual tricks I know of removing a point from a curve and seeing what happens don't work. How can I extract out information about the cusp to conclude it is not? Also, can I put a smooth structure on it so it is an immersed submanifold? Thanks.
It is better to view $y$ as the independent variable and $x=y^{2/3}$. Since $2/3<1$, this has infinite slope at the origin for positive $y$ and infinite negative slope for negative $y$. Hence the origin is not a smooth point of this graph, which is therefore not a submanifold.
Pigeon Hole Principle; 3 know each other, or 3 don't know each other I found another question in my textbook; it seems simple, but the hardest part is to prove it. Here is the question: There are six persons in a party. Prove that either 3 of them recognize each other or 3 of them don't recognize each other. I heard the answer uses the Pigeonhole Principle, but I have no idea how to use it. Could somebody please tell me the way to solve it? Thanks for the attention, and sorry for the bad English and my messy post.
NOTE: in my answer I assume that the relation "know someone" is symmetric (i.e., A knows B if and only if B knows A). If this relation is not symmetric for you, then (I did not really check it, but) I believe the statement is not true. Choose a person A at the party. The following two situations are possible: (CASE 1) A knows at least three people, say B, C and D, at the party; (CASE 2) A doesn't know at least three people at the party. In (CASE 1), if at least a pair among {B,C}, {C,D} or {D,B} is formed by people that know each other, then you have three people that know each other. If there is no such pair among these three, then B, C and D are three people that do not know each other. In (CASE 2) proceed similarly.
Sum of Random Variables... Imagine we repeat the following loop some thousands of times:

```matlab
array = [];
for n = 1:10000
    k = 0;
    while unifrnd(0,1) < 0.3
        k = k + 1;
    end
    if k ~= 0
        array = [array, k];
    end
end
```

where unifrnd(0,1) means a random number drawn from the uniform distribution on the unit interval. My question is then: Which value of k is observed most often, apart from k = 0? And is that the expectation of k? Thanks very much
It appears you exit the loop the first time the random number is at least $0.3$. In that case, the most probable value for $k$ is $0$. It occurs with probability $0.7$. The next most probable is $1$, which occurs with probability $0.3 \cdot 0.7$, because you need the first random number to be less than $0.3$ and the second to be at least $0.3$ (probability $0.7$). In general, the probability of a value $k$ is $0.3^k\cdot 0.7$, a geometric distribution, so among the nonzero values $k=1$ is the most frequent. Note that the mode is not the expectation: $E[k]=\sum_{k\ge 0}k\cdot 0.3^k\cdot 0.7=\frac{0.3}{0.7}=\frac{3}{7}$.
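A direct simulation mirroring the MATLAB loop above (a sketch; 10,000 trials as in the question) shows the geometric shape of the distribution:

```python
import random
from collections import Counter

# Simulate: count how many uniform draws fall below 0.3 before one reaches 0.3.
counts = Counter()
for _ in range(10_000):
    k = 0
    while random.random() < 0.3:
        k += 1
    counts[k] += 1
print(counts.most_common(4))  # k=0 dominates; among k != 0, k=1 is the mode
```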
Distribution of $pX+(1-p) Y$ We have two independent, normally distributed RV's: $$X \sim N(\mu_1,\sigma^2_1), \quad Y \sim N(\mu_2,\sigma^2_2)$$ and we're interested in the distribution of $pX+(1-p) Y, \space p \in (0,1)$. I've tried to solve this via moment generating functions. Since $X \perp Y$, we have $$\Psi_{pX+(1-p)Y}(t) = \Psi_{pX}(t)\,\Psi_{(1-p)Y}(t)$$ where for $ N(\mu,\sigma^2)$ we'll have the MGF $$\Psi(t) = \exp\{ \mu t + \frac12 \sigma^2 t^2 \}$$ After computation I've got the MGF as $$\Psi_{pX+(1-p) Y}(t) = \exp\{ t(p \mu_1 +(1-p)\mu_2) + \frac{t^2}{2}(p^2 \sigma_1^2 +(1-p)^2 \sigma_2^2) \}$$ which would mean $$pX+(1-p) Y \sim N(p \mu_1 +(1-p)\mu_2, p^2 \sigma_1^2 +(1-p)^2 \sigma_2^2) $$ Is my approach correct? Intuitively it makes sense and the math also adds up.
Any linear combination $aX+bY$ of independent normally distributed random variables $X$ and $Y,$ where $a$ and $b$ are constants, i.e. not random, is normally distributed. You can show that by using moment-generating functions provided you have a theorem that says only normally distributed random variables can have the same m.g.f. that a normally distributed random variable has. Alternatively, show that $aX$ and $bY$ are normally distributed, and then compute the convolution of the two normal density functions, to see that it is also a normal density function. And basic formulas concerning the mean and the variance say \begin{align} \operatorname{E}(aX+bY) & = a\operatorname{E}(X) + b\operatorname{E}(Y), \\[6pt] \operatorname{var}(aX+bY) & = a^2 \operatorname{var}(X) + b^2\operatorname{var}(Y). \end{align}
Choosing a bound when it can be plus or minus? I.e. $\sqrt{4}$ My textbook glossed over how to choose integral bounds when using substitution and the value is sign-agnostic. Or I missed it! Consider the definite integral: $$ \int_1^4\! \frac{6^{-\sqrt{x}}}{\sqrt x} dx $$ Let $ u = -\sqrt{x} $ such that $$ du = - \frac{1}{2\sqrt{x}} dx $$ Now, if one wishes to alter the bounds of the integral so as to avoid substituting $ - \sqrt{x} $ back in for $ u $, how is the sign of the integral's bounds determined? Because: $ u(1) = -\sqrt 1 = -(\pm 1) = \pm 1 $ and $ u(4) = -\sqrt{4} = -(\pm2) = \pm2 $ How does one determine the correct bound? My textbook selected $ -1 $ and $-2 $ without explaining the choices.
By convention, $\sqrt{x}$ denotes the nonnegative (principal) square root. Thus, you set $u(1) = -\sqrt{1}=-1$ and $u(4) = -\sqrt{4}=-2$. The only situation where you introduce the $\pm$ signs is when you are finding the roots of a quadratic such as $y^2=x$, in which case both $y=+\sqrt{x}$ and $y=-\sqrt{x}$ satisfy the original equation.
How does one derive $O(n \log{n}) =O(n^2)$? I was studying time complexity when I found that the time complexity for sorting is $O(n\log n)=O(n^2)$. Now, I am confused about how they found the right-hand value. According to this, $\log n=n$. So, can anyone tell me how they got that value? Here is the link where I found the result.
What this equation means is that the class $O(n\log n)$ is included in the class $O(n^2)$. That is, if a sequence is eventually bounded above by a constant times $n \log n$, it will eventually be bounded above by a (possibly different) constant times $n^2$. Can you prove this? The notation is somewhat surprising at first, yes, but you get used to it.
How do you divide a complex number with an exponent term? Ok, so basically I have this: $$ \frac{3+4i}{5e^{-3i}} $$ So basically, I converted the numerator into polar form and then converted it to exponent form using Euler's formula, but I can have two possible representations. I can have $5e^{0.927i}$ (angle in radians) or $5e^{53.13i}$ (angle in degrees). So my real question is: does the exponent have to be in radians or degrees? I ended up with $e^{3.927i}$.
A complex number $z=x+iy$ can be written as $z=re^{i\theta}$, where $r=|z|=\sqrt{x^2+y^2}$ is the absolute value of $z$ and $\theta=\arg{z}=\operatorname{atan2}(y,x)$ is the angle between the $x$-axis and $z$ measured counterclockwise and in radians. In this case, we have $r=5$ and $\theta=\arctan\frac{4}{3}$ (since $x>0$, see atan2 for more information), so $3+i4=5e^{i\arctan\frac{4}{3}}$ and $$\frac{3+i4}{5e^{-i3}}=\frac{5e^{i\arctan\frac{4}{3}}}{5e^{-i3}}=e^{i(\arctan\frac{4}{3}+3)}\approx e^{i3.927}.$$
Repeatedly assigning people to subgroups so everyone knows each other Say a teacher divides his students into subgroups once every class. The profile of subgroup sizes is the same every day (e.g. with 28 students it might be always 8 groups of 3 and 1 group of 4). How can the teacher specify the subgroup assignments for each class so that, in the shortest number of classes, everyone has been in a subgroup with everyone else? Exhaustive search seems intractable, so what would be a good optimization approach? Edit: To clarify, I'm assuming the subgroup structure is given and fixed. The optimization is just the assignment of students to groups across the days.
The shortest number of classes is 1, with a single subgroup that contains all the students. The second shortest number of classes is 3, which is achieved with one small subgroup and a large one. For example, ABCDEFG/HI, ABCDEHI/FG, ABCFGHI/DE. The proof that 2 classes is impossible: let X, Y be the two largest subgroups from the first day. Each member of X must meet each member of Y on the second day, but to do this would create a subgroup larger than either X or Y.
Category with endomorphisms only What is a category in which every morphism is an endomorphism called? And what is the subcategory obtained from another category by removing all morphisms except the endomorphisms called?
Every category in which every arrow is an endomorphism is a coproduct (in the category of categories) of monoids (a monoid is a category with just one object). So a category with all morphisms endormophisms is a coproduct of monoids. I'm not aware of any specific terminology for it. Clearly the category of all coproducts of monoids admits an inclusion functor into the category of all categories. The construction you describe (of removing all non-endo morphisms) describes a right adjoint to this inclusion functor.
Invariant subspaces of a linear operator that commutes with a projection I have an assignment problem, the statement is: Let $V$ be a vector space and $P:V \to V$ be a projection. That is, a linear operator with $P^2=P.$ We set $U:= \operatorname{im} P$ and $W:= \ker P.$ Further suppose that $T:V\to V$ is a linear operator such that $TP = PT.$ Prove that $TU \subseteq U$ and $TW\subseteq W.$ Here is my attempt: Suppose $u\in U:= \operatorname{im} P$ so $u= P(v)$ for some $v\in V.$ Then $Tu = TPv = PTv \in \operatorname{im} P := U$ so $TU\subseteq U.$ Suppose $w\in W:= \ker P$ so that $Pw=0.$ Then $P (Tw) = T(Pw) = T(0)=0$ so $Tw\in \ker P := W$ so $TW\subseteq W.$ It seems fine to me, but nowhere did I use that $P$ was a projection, I only used $TP=PT.$ Is my proof okay?
Yes, that's all. $P$ doesn't have to be a projection for this particular exercise. However, we can also start out from the fact that $P$ is a projection in a solution: now it projects to the subspace $U$, in the direction of $W$, and we also have $U\oplus W=V$, and $P|_U={\rm id}_U$. Having these, an operator $T$ commutes with $P$ iff both $U$ and $W$ are $T$-invariant subspaces. Your proof can be reformulated for one direction, and the other direction goes as: If $TU\subseteq U$ and $TW\subseteq W$, then $TP(u+w)=Tu$; as it is $\in U$, it $=PTu$, and as $Tw\in W$, we finally have $$TP(u+w)=Tu=PTu=PTu+PTw=PT(u+w)\,.$$
Convex homogeneous function Prove (or disprove) that any CONVEX function $f$, with the property that $\forall \alpha\ge 0, f(\alpha x) \le \alpha f(x)$, is positively homogeneous; i.e. $\forall \alpha\ge 0, f(\alpha x) = \alpha f(x)$.
Maybe I'm missing something, but it seems to me that you don't even need convexity. Given the property you stated, we have that, for $\alpha>0$, $$f(x)=f(\alpha^{-1}\alpha x)\leq \alpha^{-1}f(\alpha x)$$ so that $\alpha f(x)\leq f(\alpha x)$ as well. Therefore, we have that $\alpha f(x)=f(\alpha x)$ for every $\alpha>0$. The equality for $\alpha=0$ follows by continuity of $f$ at zero (which is implied by convexity).
Show the points $u,v,w$ are not collinear Consider triples of points $u,v,w \in R^2$, which we may consider as single points $(u,v,w) \in R^6$. Show that for almost every $(u,v,w) \in R^6$, the points $u,v,w$ are not collinear. I think I should use Sard's Theorem, simply because that is the only "almost every" statement in differential topology I've read so far. But I have no idea how to relate this to regular value etc, and to solve this problem. Another Theorem related to this problem is Fubini Theorem (for measure zero): Let $A$ be a closed subset of $R^n$ such that $A \cap V_c$ has measure zero in $V_c$ for all $c \in R^k$. Then $A$ has measure zero in $R^n$. Thank you very much for your help!
$u,v,$ and $w$ are collinear if and only if there is some $\lambda\in\mathbb{R}$ with $w=v+\lambda(v-u)$. We can thus define a smooth function $$\begin{array}{rcl}f:\mathbb{R}^5&\longrightarrow&\mathbb{R}^6\\(u,v,\lambda)&\longmapsto&(u,v,v+\lambda(v-u))\end{array}$$ By the equivalence mentioned in the first sentence, the image of $f$ is exactly the points $(u,v,w)$ in $\mathbb{R}^6$ with $u,v,$ and $w$ collinear. Now, because $5<6$, every point in $\mathbb{R}^5$ is a critical point, so that the entire image of $f$ has measure $0$, by Sard's theorem.
Topology of uniform convergence on elements of $\gamma$ Let $\gamma$ be a cover of space $X$ and consider $C_\gamma (X)$ of all continuous functions on $X$ with values in the discrete space $D=\{0,1\}$ endowed with the topology of uniform convergence on elements of $\gamma$. What does "topology of uniform convergence on elements of $\gamma$" mean?
In general we have a metric co-domain $(R,d)$, so we consider (continuous) functions from $X$ to $R$, and we have a cover $\gamma$ of $X$. A subbase for the topology of uniform convergence on elements of $\gamma$ is given by sets of the form $S(A, f, \epsilon)$, for all $f \in C(X,R)$, $A \in \gamma$, $\epsilon>0$ real, and $S(A, f, \epsilon) = \{g \in C(X,R): \forall x \in A: \, d(f(x),g(x)) < \epsilon \}$ For the cover of singletons we get the pointwise topology, and for the cover $\{X\}$ we get the uniform metric, and also the cover by all compact sets is used (topology of compact convergence). In your case we have $\{0,1\}$ as codomain, so we can just consider all $S(A,f,1)$ as subbasic sets, and those are all functions that exactly coincide with $f$ on $A$, due to the discreteness of the codomain.
Given $G = (V,E)$, a planar, connected graph with cycles, Prove: $|E| \leq \frac{s}{s-2}(|V|-2)$. $s$ is the length of smallest cycle Given $G = (V,E)$, a planar, connected graph with cycles, where the smallest simple cycle is of length $s$. Prove: $|E| \leq \frac{s}{s-2}(|V|-2)$. The first thing I thought about was Euler's Formula where $v - e + f = 2$. But I really could not connect $v$, $e$ or $f$ to the fact that we have a cycle with minimum length $s$. Any direction will be appreciated, thanks!
Let's use Euler's formula for this: $n-e+f=2$, where $n$ is the number of vertices, $e$ the number of edges, and $f$ the number of faces. Let $d_1,d_2,...,d_f$ be the numbers of edges bounding the faces of our graph. Each face boundary contains a cycle, and our smallest cycle is of size $s$, so $d_i \geq s$ for every $i$. Since every edge borders exactly two faces, $d_1+d_2+...+d_f = 2e$, hence $s \cdot f\leq 2e$. Substituting $f=2-n+e$ gives $2e\geq s(2-n+e)\Rightarrow 2e\geq 2s-sn+se\Rightarrow s(n-2) \geq e(s-2)$ Now just divide by $s-2$ and you'll get the desired result. (For $s=3$ this recovers the familiar planarity bound $e \leq 3n-6$.)
Dual space of $H^1(\Omega)$ I'm a bit confused, why do people not define $H^1(\Omega)^*$? Instead they only say that $H^{-1}(\Omega)$ is the dual of $H^1_0(\Omega).$ $H^1(\Omega)$ is a Hilbert space so it has a well-defined dual space. Can someone explain the issue with this?
As far as I remember, one usually defines $H^{-1}(\Omega)$ to be the dual space of $H^1(\Omega)$. The reason for that is that one usually does not identify $H^1(\Omega)^*$ with $H^1(\Omega)$ (which would be possible) but instead works with a different representation. E.g. one works with the $L^2$-inner product as dual pairing between $H^{-1}(\Omega)$ and $H^1(\Omega)$ (in case the element in $H^{-1}(\Omega)$ is an $L^2$-function).
How can $4^x = 4^{400}+4^{400}+4^{400}+4^{400}$ have the solution $x=401$? How can $4^x = 4^{400} + 4^{400} + 4^{400} + 4^{400}$ have the solution $x = 401$? Can someone explain to me how this works in a simple way?
You are adding the same number four times, and adding a number to itself $n$ times is the same as multiplying it by $n$. In general, $\underbrace{N+N+\cdots+N}_{N\text{ times}} = N\times N = N^2$, and the same works no matter what the summand is: $\underbrace{N^k+N^k+\cdots+N^k}_{N\text{ times}} = N^k\times N = N^{k+1}$ So, doing it with $4^{400}$, we get $4^{400}+4^{400}+4^{400}+4^{400} = 4^{400}\times 4 = 4^{401}$
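Since Python integers have arbitrary precision, the identity can also be checked directly; a one-line sanity test:

```python
# Direct check with exact integer arithmetic.
print(4**400 + 4**400 + 4**400 + 4**400 == 4**401)  # True
```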
Elegant way to solve $n\log_2(n) \le 10^6$ I'm studying Thomas Cormen's Algorithms book and solving the tasks listed after each chapter. I'm curious about task 1-1, which comes right after Chapter 1. The question is: what is the best way to solve $n\lg(n) \le 10^6$, $n \in \mathbb Z$, $\lg(n) = \log_2(n)$? The simplest but longest way is substitution. Are there some elegant ways to solve this? Thank you! Some explanations: $n$ is what I should calculate (the total quantity of input elements), and $10^6$ is time in microseconds (the total algorithm running time). I should figure out $n_{\max}$.
For my money, the best way is to solve $n\log_2n=10^6$ by Newton's Method.
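Here is a minimal sketch of that suggestion (the helper name, starting guess, and tolerance are arbitrary choices of mine):

```python
import math

# Newton's method for f(n) = n*log2(n) - 10^6 = 0,
# with f'(n) = log2(n) + 1/ln(2).
def solve(target=1e6, n=1e5):
    for _ in range(100):
        step = (n * math.log2(n) - target) / (math.log2(n) + 1 / math.log(2))
        n -= step
        if abs(step) < 1e-9:
            break
    return n

root = solve()
print(root)  # ~ 62746.1, so the largest admissible integer is n = 62746
```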
Find a maximal ideal in $\mathbb Z[x]$ that properly contains the ideal $(x-1)$ I'm trying to find a maximal ideal in ${\mathbb Z}[x]$ that properly contains the ideal $(x-1)$. I know the relevant definitions, and that "a proper ideal $M$ in ${\mathbb Z}[x]$ is maximal iff ${\mathbb Z}[x]/M$ is a field." I think the maximal ideal I require will not be principal, but I can't find it. Any help would be appreciated. Thanks.
Hint: the primes containing $\,(x-1)\subset \Bbb Z[x]\,$ are in $1$-$1$ correspondence with the primes in $\,\Bbb Z[x]/(x-1)\cong \Bbb Z,\,$ by a basic property of quotient rings.
How to find the integral of implicitly defined function? Let $a$ and $b$ be real numbers such that $ 0<a<b$. The decreasing continuous function $y:[0,1] \to [0,1]$ is implicitly defined by the equation $y^a-y^b=x^a-x^b.$ Prove $$\int_0^1 \frac {\ln (y)} x \, dx=- \frac {\pi^2} {3ab}. $$
OK, at long last, I have a solution. Thanks to @Occupy Gezi and my colleague Robert Varley for getting me on the right track. As @Occupy Gezi noted, some care is required to work with convergent integrals. Consider the curve $x^a-x^b=y^a-y^b$ (with $y(0)=1$ and $y(1)=0$). We want to exploit the symmetry of the curve about the line $y=x$. Let $x=y=\tau$ be the point on the curve where $x=y$, and let's write $$\int_0^1 \ln y \frac{dx}x = \int_0^\tau \ln y \frac{dx}x + \int_\tau^1 \ln y \frac{dx}x\,.$$ We make a change of coordinates $x=yu$ to do the first integral: Since $\dfrac{dx}x = \dfrac{dy}y+\dfrac{du}u$, we get (noting that $u$ goes from $0$ to $1$ as $x$ goes from $0$ to $\tau$) \begin{align*} \int_0^\tau \ln y \frac{dx}x &= -\int_\tau^1 \ln y \frac{dy}y + \int_0^1 \ln y \frac{du}u \\ &= -\frac12(\ln y)^2\Big]_\tau^1 + \int_0^1 \ln y \frac{du}u = \frac12(\ln\tau)^2 + \int_0^1 \ln y \frac{du}u\,. \end{align*} Next, note that as $(x,y)\to (1,0)$ along the curve, $(\ln x)(\ln y)\to 0$ because, using $(\ln x)\ln(1-x^{b-a}) = (\ln y)\ln(1-y^{b-a})$, we have $$(\ln x)(\ln y) \sim \frac{(\ln y)^2\ln(1-y^{b-a})}{\ln (1-x^{b-a})} \sim \frac{(\ln y)^2 y^{b-a}}{a\ln y} = \frac1a y^{b-a}\ln y\to 0 \text{ as } y\to 0.$$ We now can make the "inverse" change of coordinates $y=xz$ to do the second integral. This time we must do an integration by parts first. \begin{align*} \int_\tau^1 \ln y \frac{dx}x &= (\ln x)(\ln y)\Big]_{(x,y)=(\tau,\tau)}^{(x,y)=(1,0)} + \int_0^\tau \ln x \frac{dy}y \\ & = -(\ln\tau)^2 + \int_0^\tau \ln x \frac{dy}y \\ &= -(\ln\tau)^2 - \int_\tau^1 \ln x \frac{dx}x + \int_0^1 \ln x \frac{dz}z \\ &= -\frac12(\ln\tau)^2 + \int_0^1 \ln x \frac{dz}z\,. \end{align*} Thus, exploiting the inherent symmetry, we have $$\int_0^1 \ln y\frac{dx}x = \int_0^1 \ln y \frac{du}u + \int_0^1 \ln x \frac{dz}z = 2\int_0^1 \ln x \frac{dz}z\,.$$ Now observe that \begin{multline*} x^a-x^b=y^a-y^b \implies x^a(1-x^{b-a}) = x^az^a(1-x^{b-a}z^{b-a}) \\ \implies x^{b-a} = \frac{1-z^a}{1-z^b}\,, \end{multline*} and so, doing easy substitutions, \begin{align*} \int_0^1 \ln x \frac{dz}z &= \frac1{b-a}\left(\int_0^1 \ln(1-z^a)\frac{dz}z - \int_0^1 \ln(1-z^b)\frac{dz}z\right) \\ &=\frac1{b-a}\left(\frac1a\int_0^1 \ln(1-w)\frac{dw}w - \frac1b\int_0^1 \ln(1-w)\frac{dw}w\right) \\ &= \frac1{ab}\int_0^1 \ln(1-w)\frac{dw}w\,. \end{align*} By expansion in power series, one recognizes that this dilogarithm integral gives us, at long last, $$\int_0^1 \ln y\frac{dx}x = \frac2{ab}\int_0^1 \ln(1-w)\frac{dw}w = \frac2{ab}\left(-\frac{\pi^2}6\right) = -\frac{\pi^2}{3ab}\,.$$ (Whew!!)
Evaluating the series $\sum\limits_{n=1}^\infty \frac1{4n^2+2n}$ How do we evaluate the following series: $$\sum_{n=1}^\infty \frac1{4n^2+2n}$$ I know that it converges by the comparison test. Wolfram Alpha gives the answer $1 - \ln(2)$, but I cannot see how to get it. The Taylor series of logarithm is nowhere near this series.
Hint: $\frac 1{4n^2+2n} = \frac 1{2n}-\frac 1{2n+1}$
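If you want to see the value $1-\ln 2$ emerge numerically before working out the telescoping, a two-line check (the cutoff is arbitrary):

```python
import math

# Partial sum of 1/(4n^2+2n) versus the claimed value 1 - ln(2).
s = sum(1 / (4 * n * n + 2 * n) for n in range(1, 10**6))
print(s, 1 - math.log(2))
```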
Self-Paced Graduate Math Courses for Independent Study Does anyone know of any graduate math courses that are self-paced, for independent study? I am a high school math teacher at a charter school in Texas. I am quite happy with where I am right now, but my goal is to earn at least 18 graduate credits in math subjects so that I can teach higher-level math and be certified as a dual-credit math teacher. (A HS class where students earn high school as well as college credits at the same time.) I am aware of many online graduate classes offered by some respectable universities, but all of them are semester-based, which may not be very feasible since my full-time teaching is extremely demanding, not to mention that math is anything but a casual subject. I welcome any suggestions, even for programs from outside the US, as long as they are accredited and conducted in English. (For example, I was told that the college system in the Philippines is an exact "copy cat" of the US one.) For your information, I am quite comfortable studying independently; in fact, I took lots of prerequisite math classes successfully under this study mode. By the way, last year I took the GRE for this purpose; my verbal + quantitative score is a decent 1200 under the old scoring scale. Thank you very much for your time and help.
For a decent selection of grad courses and to whet your appetite, see ocw.mit.edu (MIT OpenCourseWare).
A (not necessarily continuous) function on a compact metric space attaining its maximum. I am studying for an exam and my study partners and I are having a dispute about my reasoning for $f$ being continuous by way of open and closed pullbacks (see below). Please help me correct my thinking. Here is the problem and my proposed solution: Let $(K, d)$ be a compact metric space, and let $f: K \rightarrow \mathbb{R}$ be a function satisfying that for each $\alpha \in \mathbb{R}$ the set {$x \in K: f(x) \ge \alpha$} is a closed subset of $K$. Show that $f$ attains a maximum value on $K$. Proof: Notice that $A :=$ {$x \in K: f(x) \ge \alpha$} is precisely $f^{-1}[\alpha, \infty)$. Since $[\alpha, \infty)$ is closed in $\mathbb{R}$ and $A$ is assumed to be closed in $K$, then it follows that $f$ is continuous on $A$. On the other hand, $K-A = f^{-1}(-\infty, \alpha)$ is open in $K$ since $A$ is closed in $K$. And since $(\alpha, \infty)$ is open in $\mathbb{R}$ and $K - A$ is open in $K$, then if follow that $f$ is continuous on $K - A$, hence $f$ is continuous on $K$. Since $K$ is compact and $f$ is continuous, then $f(K)$ is compact in $\mathbb{R}$. Compact sets in $\mathbb{R}$ are closed and bounded intervals. Thus $\sup{f(K)} = \max{f(K)} = f(x_0)$ for some $x_0 \in K$. Thus $f$ indeed attains its maximum value on $K$. $\blacksquare$
Here is a complete proof with sequential compactness: Suppose that $f$ has no maximum on $K$. Then there are two cases: Case 1: $\sup_{x \in K} f(x) = \infty$. Then for any $n \in \mathbb{N}$ there is some $x_n \in K$ such that $f(x_n) > n$. Since $K$ is compact there exists a subsequence $x_{n_k} \in K$ that converges to some $x \in K$. Let $A = \{y \in K| f(y) > f(x)+1\}$. Then $x_{n_k} \in A$ for sufficiently large $k$, but $x \not \in A$. Therefore $A$ is not closed. Case 2: $\sup_{x \in K} f(x) = L \in \mathbb{R}$. Then for any $n \in \mathbb{N}$ there is some $x_n \in K$ such that $f(x_n) > L - \frac{1}{n}$. Again since $K$ is compact there exists a subsequence $x_{n_k} \in K$ that converges to some $x \in K$. Because $f$ attains no maximum, we have $f(x) < L$. Let $A = \{y \in K| f(y) > \frac{f(x)+L}{2}\}$. Then $x_{n_k} \in A$ for sufficiently large $k$, but again $x \not \in A$. Hence $A$ is not closed as in Case 1. $\blacksquare$
How to solve $x\log(x) = 10^6$ I am trying to solve $$x\log(x) = 10^6$$ but can't find an elegant solution. Any ideas ?
You won't find a "nice" answer, since this is a transcendental equation (no "algebraic" solution). There is a special function related to this called the Lambert W-function, defined by $ \ z = W(z) \cdot e^{W(z)} \ $ . The "exact" answer to your equation is $ \ x = e^{W( [\ln 10] \cdot 10^6)} \ . $ (I'm assuming you're using the base-10 logarithm here; otherwise you can drop the $ \ln 10 $ factor.)
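Numerically, this is straightforward to evaluate with SciPy's lambertw; a small sketch assuming the base-10 reading of the logarithm:

```python
import numpy as np
from scipy.special import lambertw

# x*log10(x) = 1e6  <=>  ln(x) * e^{ln(x)} = 1e6 * ln(10)
x = np.exp(lambertw(1e6 * np.log(10)).real)
print(x, x * np.log10(x))  # x ~ 1.9e5, and the product recovers ~1e6
```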
Integral $\frac{\sqrt{e}}{\sqrt{2\pi}}\int^\infty_{-\infty}{e^{-1/2(x-1)^2}dx}$ gives $\sqrt{e}$. How? To calculate the expectation of $e^x$ for a standard normal distribution I eventually get, via exponential simplification: $$\frac{\sqrt{e}}{\sqrt{2\pi}}\int^\infty_{-\infty}{e^{-1/2(x-1)^2}dx}$$ When I plug this into Wolfram Alpha I get $\sqrt e$ as the result. I'd like to know the integration step(s) or other means I could use to obtain this result on my own from the point I stopped. I am assuming that Wolfram Alpha "knew" an analytical solution since it presents $\sqrt e$ as a solution as well as the numerical value of $\sqrt e $. Thanks in advance!
This is because $$\int_{\Bbb R} e^{-x^2}dx=\sqrt \pi$$ Note that your shift $x\mapsto x-1$ doesn't change the value of integral, while $x\mapsto \frac{x}{\sqrt 2}$ multiplies it by $\sqrt 2$, giving the desired result, that is, $$\int_{\Bbb R} e^{-\frac 1 2(x-1)^2}dx=\sqrt {2\pi}$$
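A quick numerical confirmation (illustrative only) that the shifted Gaussian integrates to $\sqrt{2\pi}$:

```python
import numpy as np
from scipy import integrate

# Integral of exp(-(x-1)^2 / 2) over the real line equals sqrt(2*pi).
val, err = integrate.quad(lambda x: np.exp(-0.5 * (x - 1)**2),
                          -np.inf, np.inf)
print(val, np.sqrt(2 * np.pi))
```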
Second pair of matching birthdays The "birthday problem" is well-known and well-studied. There are many versions of it and many questions one might ask. For example, "how many people do we need in a room to obtain at least a 50% chance that some pair shares a birthday?" (Answer: 23) Another is this: "Given $M$ bins, what is the expected number of balls I must toss uniformly at random into bins before some bin will contain 2 balls?" (Answer: $\sqrt{M \pi/2} +2/3$) Here is my question: what is the expected number of balls I must toss into $M$ bins to get two collisions? More precisely, how many expected balls must I toss to obtain the event "ball lands in occupied bin" twice? I need an answer for very large $M$, so solutions including summations are not helpful. Silly Observation: The birthday problem predicts we need about 25 US Presidents for them to share a birthday. It actually took 28 presidents to happen (Harding and Polk were both born on Nov 2). We see from the answers below that after about 37 US Presidents we should have a 2nd collision. However Obama is the 43rd and it still hasn't happened (nor would it have happened if McCain had won or Romney had won; nor will it happen if H. Clinton wins in 2016).
Suppose there are $n$ people, and we want to allow $0$ or $1$ collisions only. $0$ collisions is the birthday problem: $$\frac{M^{\underline{n}}}{M^n}$$ For 1 collision, we first choose which two people collide, ${n\choose 2}$, then the 2nd person must agree with the first $\frac{1}{M}$, then avoid collisions for the remaining people, getting $${n \choose 2}\frac{M^{\underline{n-1}}}{M^{n}}$$ Hence the desired answer is $$1-\frac{M^{\underline{n}}}{M^n}-{n \choose 2}\frac{M^{\underline{n-1}}}{M^{n}}$$ or $$ 1-\frac{M^{\underline{n-1}}(M-n+1+{n\choose 2})}{M^n}$$ When $M=365$, the minimum $n$ to get at least a 50% chance of more than 1 collision is $n=36$.
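The closed form is easy to evaluate directly; here is a sketch that reproduces the $n=36$ threshold quoted above (the helper name is mine, not from the original):

```python
from math import comb

def p_two_or_more_collisions(n, M=365):
    # falling = M falling-factorial (n-1) divided by M^(n-1)
    falling = 1.0
    for i in range(n - 1):
        falling *= (M - i) / M
    p_zero = falling * (M - n + 1) / M   # no collision: M^(n)/M^n
    p_one = falling * comb(n, 2) / M     # exactly one collision
    return 1 - p_zero - p_one

n = 2
while p_two_or_more_collisions(n) < 0.5:
    n += 1
print(n)  # 36
```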
Permutation and Combinations with conditions Hallo :) This is a question about permutations but with conditions. 2 boys and 4 girls are to be arranged in a straight line. In how many ways can this be done if the two boys must be separated? (The order matters) Thank You.
Total number of ways of arranging the people = 6!

Cases when the boys are together:

(2B) G G G G
G (2B) G G G
G G (2B) G G
G G G (2B) G
G G G G (2B)

Each of the above cases can be arranged in 2 * 4! ways. (The factor of 2 appears because the two boys can be interchanged within their block.) Number of ways to separate the boys = 6! - (2 * 5 * 4!) = 5! * (6 - 2) = 480
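A brute-force enumeration over all $6!$ orderings confirms the count; the labels B1, B2, G1..G4 are hypothetical, and "separated" is taken to mean the boys are not adjacent:

```python
from itertools import permutations

people = ['B1', 'B2', 'G1', 'G2', 'G3', 'G4']
separated = sum(
    1 for line in permutations(people)
    if abs(line.index('B1') - line.index('B2')) > 1
)
print(separated)  # 480
```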
Smallest projective subspace containing a degree $d$ curve Is it true that the smallest projective subspace containing a degree $d$ curve inside $\mathbb{P}^n$ has dimension at most $d$? If not, is there any bound on the dimension? Generalization to varieties? For $d=1$ this is obvious. I think for the case that the curve is an embedding of $\mathbb{P}^1$ this is also true: Suppose the embedding is given by $n$ degree $d$ homogeneous polynomials $f_0,\dots,f_n$. For each $0\leq i\leq d$, let $p_i=(c_{0,i},\dots,c_{n,i})$ where $c_{j,i}$ is the coefficient of $x^iy^{d-i}$ in $f_j$ (or we ignore $p_i$ if all $c_{j,i}$ are zero). Then the curve is contained in the projective subspace spanned by all $p_i$.
I think your observation is correct for curves. Given a curve $C$ in $\mathbb{P}^n$ satisfying $C$ is not contained in any projective subspace of $\mathbb{P}^n$, WLOG we may assume $C$ is irreducible. Let $\tilde C$ be the normalization of $C$, then we have a regular map $\phi: \tilde C\rightarrow \mathbb{P}^n$ which is an embedding outside a finite subset of $\tilde C$. Since $C$ is not contained in any projective subspace, $\phi$ is the map induced by the linear series $L$ of all hyperplane divisors. It is clear that $\dim L=\dim \mathbb{P}^n=n$, so for any hyperplane divisor $D$, we have $h^0(\tilde C,D)\geq \dim L+1=n+1$. On the other hand, $h^0(\tilde C,D)\leq \deg(D)+1$ since $D$ is an effective divisor, so $\deg C=\deg D\geq n$. Any generalizations to higher dimensional varieties? Guess: for a $k$-dimensional variety $X$ embedded in $\mathbb{P}^n$, if $X$ is not contained in any projective subspace, then $\deg X\geq n-k+1$.
Basis of a basis I'm having troubles to understand the concept of coordinates in Linear Algebra. Let me give an example: Consider the following basis of $\mathbb R^2$: $S_1=\{u_1=(1,-2),u_2=(3,-4)\}$ and $S_2=\{v_1=(1,3),v_2=(3,8)\}$ Let $w=(2,3)$ be a vector with coordinates in $S_1$, then $w=2u_1+3u_2=2(1,-2)+3(3,-4)=(11,-16)$. When I tried to found the coordinates of $w$ in $S_2$, I found the following problem: Which basis $(11,-16)$ belongs to? I suppose the same of $u_1$ and $u_2$, but which basis $u_1$ and $u_2$ belongs to? and if I discover the basis of $u_1$ and $u_2$, what's the basis of the basis of $u_1$ and $u_2$? I found an infinite recurrence problem and I was stuck there. Maybe I'm seeing things more complicated than it is, but it seems that there is a deeper and philosophical question inside of this doubt, I couldn't see what a coordinate really is. I would be very grateful if anyone help me with this doubt.
The basis for everything, unless specified otherwise, is the standard basis $\{\textbf{e}_1=(1,0),\textbf{e}_2=(0,1)\}$. So the tuple $(11,-16)$ simply means $11\textbf{e}_1-16\textbf{e}_2$, and there is no infinite regress: standard coordinates are the fixed reference in which every other basis is written down.
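To make this concrete, here is a short numpy sketch (my addition, not part of the original answer) that converts the coordinates of $w$ from $S_1$ to the standard basis and then to $S_2$:

    import numpy as np

    # basis vectors, expressed in the standard basis, as matrix columns
    S1 = np.array([[1, 3], [-2, -4]])   # columns are u1, u2
    S2 = np.array([[1, 3], [3, 8]])     # columns are v1, v2

    w_S1 = np.array([2, 3])             # coordinates of w relative to S1
    w_std = S1 @ w_S1                   # standard coordinates: (11, -16)
    w_S2 = np.linalg.solve(S2, w_std)   # coordinates relative to S2
    print(w_std, w_S2)                  # [ 11 -16] [-136.  49.]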
An interesting problem using the Pigeonhole principle I saw this problem: Let $A \subset \{1,2,3,\cdots,2n\}$ with $|A|=n+1$. Show that there exist $a,b \in A$ with $a \neq b$ such that $a$ and $b$ are coprime. I proved this one very easily by using the pigeonhole principle on the partition $\{1,2\},\{3,4\},\dots,\{2n-1,2n\}$. My question is: how can I prove or disprove the following? Let $A \subset \{1,2,3,\cdots,2n\}$ with $|A|=n+1$. Show that there exist $a,b \in A$ with $a \neq b$ and $a|b$. I can't find a suitable partition. Is this true?
Any number from the set $\{1,2,\dots,2n\}$ is of the form $2^{k}l$ where $k\ge 0$ and $l$ is odd with $1\le l\le 2n-1$. The number of odd numbers $l\le 2n-1$ is $n$. Now if we select $n+1$ numbers from $\{1,2,\dots,2n\}$ then there must be two numbers (among the selected ones) with the same $l$. That is, we must get $a,b$ with $a=2^{k_1}l$ and $b=2^{k_2}l$; now as $a\ne b$, we have $k_1\ne k_2$. If $k_1>k_2$ then $b|a$, else $a|b$. This completes the proof.
Is $C_2$ the correct Galois Group of $f(x)= x^3+x^2+x+1$? Let $\operatorname{f} \in \mathbb{Q}[x]$ where $\operatorname{f}(x) = x^3+x^2+x+1$. This is, of course, a cyclotomic polynomial. The roots are the fourth roots of unity, except $1$ itself. I get $\mathbb{Q}[x]/(\operatorname{f}) \cong \mathbb{Q}(\pm 1, \pm i) \cong \mathbb{Q}(i) = \{a+bi : a,b \in \mathbb{Q}\}.$ Let $\alpha : \mathbb{Q}(i) \to \mathbb{Q}(i)$ be a $\mathbb{Q}$-automorphism. We have: $$\alpha(a+bi) = \alpha(a)+\alpha(bi) = \alpha(a)+\alpha(b)\alpha(i) = a+b\alpha(i).$$ Since $\alpha(i)^2 = \alpha(i)\alpha(i) = \alpha(ii) = \alpha(-1)=-1$ we have $\alpha(i) = \pm\sqrt{-1} = \pm i$. There are then two $\mathbb{Q}$-automorphisms: the identity with $\alpha(z)=z$ and the conjugate $\alpha(z)=\overline{z}$. This tells me that the Galois Group is $S_2=\langle(12)\rangle.$ I've been using GAP software, and it says that the Galois Group is $\langle(13)\rangle$. I can see that $\langle(12)\rangle \cong \langle(13)\rangle$. However, $\langle(13)\rangle < S_3$. My suspicion is that this is because $x^3+x^2+x+1$ is reducible over $\mathbb{Q}$: $x^3+x^2+x+1 \equiv (x+1)(x^2+1)$. Is GAP telling me that the Galois Group of $x^3+x^2+x+1$ is $C_1\times C_2$? How should I think about the Galois Group of $x^3+x^2+x+1$? Is it $C_2$, is it a subgroup of $S_3$ which is isomorphic to $C_2$, or is it the product $C_1 \times C_2$? I realise that these are all isomorphic, but what's the best way to think of it?
The Galois group is the group of automorphisms of the splitting field. It acts on the roots of any polynomial that splits there (such as $f$) by permuting them. In your case, there are three roots, $-1, i, -i$, and the automorphisms must leave $-1$ fixed. Since the action is also faithful, you can view $G$ (via this action) as a subgroup of $\operatorname{Sym}(\{-1,i,-i\})$, and of course it has only one nontrivial element, the transposition $(i\ \,{-i})$ that fixes $-1$.
Proving existence of a surjection $2^{\aleph_0} \to \aleph_1$ without AC I'm quite sure I'm missing something obvious, but I can't seem to work out the following problem (web search indicates that it has a solution, but I didn't manage to locate one -- hence the formulation): Prove that there exists a surjection $2^{\aleph_0} \to \aleph_1$ without using the Axiom of Choice. Of course, this surjection is very trivial using AC (well-order $2^{\aleph_0}$). I have been looking around a bit, but an obvious inroad like injecting $\aleph_1$ into $\Bbb R$ in an order-preserving way is impossible. Hints and suggestions are appreciated.
One of my favorite ways is to fix a bijection between $\Bbb N$ and $\Bbb Q$, say $q_n$ is the $n$th rational. Now we map $A\subseteq\Bbb N$ to $\alpha$ if $\{q_n\mid n\in A\}$ has order type $\alpha$ (ordered with the usual order of the rationals), and $0$ otherwise. Because every countable ordinal can be embedded into the rationals, for every $\alpha<\omega_1$ we can find a subset $\{q_i\mid i\in I\}$ which is isomorphic to $\alpha$, and therefore $I$ is mapped to $\alpha$. Thus we have a surjection onto $\omega_1$.
Using integration by parts to evaluate an integral I can't understand how to solve this problem: use integration by parts to evaluate this integral: $$\int x\sin(2x + 1) \,dx$$ Can anyone solve this so I can understand how to do it? Thanks :)
$\int uv'=uv-\int u'v$. Choose $u(x):=x$, $v'(x):=\sin(2x+1)$. Then $u'(x)=1$ and $v(x)=-\frac{\cos(2x+1)}{2}$. So $$ \int x\sin(2x+1)\,dx=-x\frac{\cos(2x+1)}{2}+\int\frac{\cos(2x+1)}{2}\,dx=-x\frac{\cos(2x+1)}{2}+\frac{\sin(2x+1)}{4}+C. $$
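If you want to verify the computation symbolically, a one-liner with sympy does it (this check is my addition, not part of the original answer):

    import sympy as sp

    x = sp.symbols('x')
    F = sp.integrate(x * sp.sin(2*x + 1), x)
    print(F)  # equals -x*cos(2*x + 1)/2 + sin(2*x + 1)/4 (up to term order)
    print(sp.simplify(sp.diff(F, x) - x * sp.sin(2*x + 1)))  # 0, so F is an antiderivative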
How to prove/show $1- (\frac{2}{3})^{\epsilon} \geq \frac{\epsilon}{4}$, given $0 \leq \epsilon \leq 1$? How to prove/show $1- (\frac{2}{3})^{\epsilon} \geq \frac{\epsilon}{4}$, given $0 \leq \epsilon \leq 1$? I found the inequality while reading a TCS paper, where it was taken as a fact while proving some theorems. I'm not a math major, and I'm not as fluent in proving inequalities such as these as I would like to be, hence I'd like to know why this is true (it does hold for the range of values of $\epsilon$ from $0$ to $1$), and how to go about proving such inequalities in general.
One of the most helpful inequalities about the exponential is $e^t\ge 1+t$ for all $t\in\mathbb R$. Using this, $$ \left(\frac32\right)^\epsilon=e^{\epsilon\ln\frac32}\ge 1+\epsilon\ln\frac32$$ for all $\epsilon\in\mathbb R$. Under the additional assumption that $ -\frac1{\ln\frac32}\le \epsilon< 4$, multiply with $1-\frac\epsilon4$ to obtain $$\begin{align}\left(\frac32\right)^\epsilon\left(1-\frac\epsilon4\right)&\ge \left(1+\epsilon\ln\frac32\right)\left(1-\frac\epsilon4\right)\\&=1+\epsilon\left(\ln\frac32-\frac14\right)-\frac{\ln\frac32}{4}\epsilon^2\\&=1+\frac{\ln\frac32}{4}\epsilon\cdot\left(4-\frac1{\ln\frac32}-\epsilon\right).\end{align}$$ Hence $\left(\frac32\right)^\epsilon\left(1-\frac\epsilon4\right)\ge1$ and ultimately $1-\frac\epsilon4\ge \left(\frac23\right)^\epsilon$ for all $\epsilon$ with $0\le\epsilon\le 4-\frac1{\ln\frac32}\approx1.53$.
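For intuition (not a proof, and my addition rather than part of the original answer), a quick numerical scan over $[0,1]$ confirms the inequality, with equality exactly at $\epsilon=0$:

    import numpy as np

    eps = np.linspace(0, 1, 10001)
    lhs = 1 - (2/3)**eps
    rhs = eps / 4
    print(np.all(lhs >= rhs))   # True
    print(np.min(lhs - rhs))    # 0.0, the smallest margin, attained at eps = 0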
Good places to start for Calculus? I am a student, and due to my school's decision to not teach Calculus in high school (They said we'd learn it in college, but that's a year and a half away for me), I have to learn it myself. I am trying to get a summer internship as a Bioinformatics intern, and I would like to have prior knowledge (I was told I'd need an understanding of calculus to get the internship.) I have heard of MIT's OpenCourseWare, which I had taken a look at, but I was wondering if there were any other resources (Preferably free of charge, I don't have much money to spend) Is there any good online (Preferably free) way to learn Calculus?
MIT OpenCourseWare is useful. Also, check out Paul's Online Notes: http://tutorial.math.lamar.edu/Classes/CalcI/CalcI.aspx
Finding the upper and lower limit of the following sequence. $\{s_n\}$ is defined by $$s_1 = 0; s_{2m}=\frac{s_{2m-1}}{2}; s_{2m+1}= {1\over 2} + s_{2m}$$ The following is what I tried to do. The sequence is $$\{0,0,\frac{1}{2},\frac{1}{4},\frac{3}{4},\frac{3}{8},\frac{7}{8},\frac{7}{16},\cdots \}$$ So the even terms $\{E_i\} = 1 - 2^{-i}$ and the odd terms $\{O_k\} = \frac{1}{2} - 2^{-k}$, and they have limits $1$ and $\frac{1}{2}$, respectively. So, the upper limit is $1$ and the lower limit is $1\over 2$; am I right? Does this also mean that $\{s_n\}$ has no limit? Is my notation $$\lim_{n \to \infty} \sup(s_n)=1 ,\lim_{n \to \infty} \inf(s_n)={1 \over 2} $$ correct?
Shouldn't it be $E_i = \frac{1}{2} - 2^{-i}$ and $O_i = 1 - 2^{1-i}$? That way $E_i = 0, \frac{1}{4}, \frac{3}{8},\dots$ and $O_i = 0, \frac{1}{2}, \frac{3}{4},\dots$, which seems to be what you want. Your conclusion looks fine, but you might want to derive the even and odd terms more rigorously. For example, the even terms $E_i$ are defined recursively by $E_{i+1} = s_{2i+2} = \frac{s_{2i+1}}{2} = \frac{E_i + \frac{1}{2}}{2}$, and $\frac{1}{2} - 2^{-i}$ also satisfies this recursion relation. $E_1 = 0$, and $\frac{1}{2} - 2^{-1} = 0$, hence they have the same first term. By induction the two sequences are the same. If we partition a sequence into a finite number of subsequences then the upper and lower limits of the sequence are equal to the maximum upper limit and minimum lower limit of the subsequences; in this case you're partitioning into even and odd terms. $\{s_n\}$ has a limit iff the upper and lower limits are the same (this is proved in most analysis books), so in this case $\{s_n\}$ has no limit.
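A few lines of Python (my addition) reproduce the sequence and the two subsequences, which makes the limits $\frac12$ and $1$ visible:

    s = [0.0]                 # s_1 = 0; list index i holds s_{i+1}
    for n in range(1, 20):
        if len(s) % 2 == 1:   # next index is even: s_{2m} = s_{2m-1} / 2
            s.append(s[-1] / 2)
        else:                 # next index is odd: s_{2m+1} = 1/2 + s_{2m}
            s.append(0.5 + s[-1])
    print(s[:8])      # [0.0, 0.0, 0.5, 0.25, 0.75, 0.375, 0.875, 0.4375]
    print(s[1::2][:4])  # even-indexed terms s_2, s_4, ...: approach 1/2
    print(s[0::2][:4])  # odd-indexed terms s_1, s_3, ...: approach 1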
Without using L'Hospital's rule, I want to find a limit of the following. Given a series with $a_n = \sqrt{n+1}-\sqrt n$, determine whether it converges or diverges. The ratio test was inconclusive because the limit equaled 1, so I tried to use the root test. The problem was reduced to finding $$\lim_{n \to \infty} \sqrt[n]{\sqrt{n+1}-\sqrt n}$$ Since the book hasn't covered derivatives yet, I am trying to solve this without using L'Hospital's rule. So the following is the strategy I am trying to proceed with. I cannot come up with a way to simplify the expression, so I am trying to compare $a_n$ to a sequence that converges to 1 or maybe something less. In fact, I want to compare it to 1 for that reason. The problem with this is that adding constants makes the series divergent, and I suspect that the series converges, so I don't think that works at all. Can someone help?
Hint: what is the $n^\text{th}$ partial sum of your series?
Find $Y=f(X)$ such that $Y \sim \text{Uniform}(-1,1)$. If $X_1,X_2\sim \text{Normal} (0,1)$, then find $Y=f(X)$ such that $Y \sim \text{Uniform}(-1,1)$. I have solved problems where the transformation is given and I need to find the distribution, but here I need to find the transformation. I have no idea how to proceed. Please help.
Why not use the probability integral transform? Note that if $$ F(x) = \int_{-\infty}^x \frac{1}{\sqrt{2\pi}} e^{-u^2/2} du $$ then $F(X_i) \sim U(0,1)$. So you could take $f(x) = 2F(x) - 1$.
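A small simulation (my addition, using scipy's normal CDF) illustrates that the transform does produce something uniform on $(-1,1)$:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    x = rng.standard_normal(100_000)    # X ~ N(0, 1)
    y = 2 * norm.cdf(x) - 1             # should be ~ U(-1, 1)
    print(y.min(), y.max())             # close to -1 and 1
    print(np.histogram(y, bins=4, range=(-1, 1))[0])  # roughly equal counts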
Integral of $\int \frac{1+\sin(2x)}{\operatorname{tg}(2x)}dx$ I'm trying to find the antiderivative $F(x)$ of this function but I don't see how to do it; I need some hints about the solution. I know that $\sin(2x) = 2\sin(x)\cos(x)$; does that help me? Is it a good idea to set $t = 2x$? $$\int \frac{1+\sin(2x)}{\operatorname{tg}(2x)}dx$$ EDIT: Is it right to do it like this? $$\int \frac{(1+2\sin(x)\cos(x))\,dx}{\frac{2\sin(x)\cos(x)}{\cos^2(x)-\sin^2(x)}}$$ $$\int \left(\frac{\cos^2(x)-\sin^2(x)}{2\sin(x)\cos(x)}+\cos^2(x)-\sin^2(x)\right)dx = \int \left(\operatorname{ctg}(2x)+\frac{1+\cos(2x)}{2}-\frac{1-\cos(2x)}{2}\right)dx$$ and the integral of $\operatorname{ctg}(2x)$ is $\displaystyle \frac{\ln|\sin(2x)|}{2}+C$, while the other two terms together give $\displaystyle \frac{\sin(2x)}{2}+C$. Thanks
Effective hint: Consider $\int R(\sin x,\cos x)\,dx$, where $R$ is a rational function with respect to $\sin x$ and $\cos x$. If $$R(-\sin x, -\cos x)\equiv R(\sin x, \cos x) $$ then $t=\tan x$ (or $t=\cot x$) is a good substitution.
On Polar Sets with respect to Continuous Seminorms In the following, $X$ is a Hausdorff locally convex topological vector space and $X'$ is the topological dual of $X$. If $p$ is a continuous seminorm on $X$ then we shall designate by $U_p$ the "$p$-unit ball", i.e, $$U_p=\{x\in X: p(x)\le 1\}.$$ The polar set of $U_p$ is given by $$U_p^o=\{f\in X':|f(x)|\le 1\quad \forall x\in U_p\}.$$ How do we prove that for each $x\in X$, we have $$p(x)=\sup\{|f(x)|:f\in U_p^o\}.$$ I need some help...Thanks in advance.
Assume that $p(x)=0$. Then for all $\lambda>0$, $\lambda x\in U_p$, hence $|f(\lambda x)|\leqslant 1$ and $f(x)=0$ whenever $f\in U_p^0$. If $p(x)\neq 0$, then considering $\frac 1{p(x)}x$, we get the $\geqslant$ direction. For the other one, take $f(a\cdot x):=a\cdot p(x)$ for $a\in\Bbb R$; then $|f(v)|\leqslant p(v)$ for any $v\in\Bbb R\cdot x$. We extend $f$ by the Hahn-Banach theorem to the whole space: then $|f(w)|\leqslant p(w)$ for any $w\in X$, giving what we want.
Applications of computation on very large groups I have been studying computational group theory, and I am reading about and trying to implement these algorithms. But what is actually bothering me is: what is the practical advantage of computing all the properties of extremely large groups, given that it is a hard problem? It might give birth to new algorithms, but does it solve any problem specific to group theory, or to other branches affected by it?
The existence of several of the large finite simple sporadic groups, such as the Lyons group and the Baby Monster was originally proved using big computer calculations (although I think they all now have computer-free existence proofs). Many of the properties of individual simple groups, such as their maximal subgroups and their (modular) character tables, which are essential for a deeper understanding of the groups, have been calculated by computer. Some significant theorems in group theory have proofs that are partly dependent on computer calculations, usually for small or medium sized special cases that are not covered by the general arguments. A recent example of this is the proof of Ore's conjecture that every element in every finite simple group is a commutator.
Endomorphism Ring of Abelian Groups In the paper "Über die Abelschen Gruppen mit nullteilerfreiem Endomorphismenring," Szele considers the problem of describing all abelian groups whose endomorphism ring contains no zero-divisors. He proved that there is no such group among the mixed groups, while $C(p)$ and $C(p^\infty)$ are the only torsion groups with this property. I do not have access to this paper; moreover, I do not speak German. Can someone give me a reference, in English or French, for this result, or sketch the proof?
Kulikov proved that an indecomposable abelian group is either torsion-free or $C(p^k)$ for some $k=0,1,\dots,\infty$. A direct summand creates a zero divisor in the endomorphism ring: Let $G = A \oplus B$ and define $e(a,b) = (a,0)$. Then $e^2=e$ and $e(1-e) = 0$. However $1-e$ is the endomorphism that takes $(a,b)$ to $(0,b)$ so it is not zero either.
Where can I learn about the lattice of partitions? A set $P \subseteq \mathcal{P}(X)$ is a partition of $X$ if and only if all of the following conditions hold: * *$\emptyset \notin P$ *For all $x,y \in P$, if $x \neq y$ then $x \cap y = \emptyset$. *$\bigcup P = X$ I have read many times that the partitions of a set form a lattice, but never really considered the idea in great detail. Where can I learn the major results about such lattices? An article recommendation would be nice. I'm also interested in the generalization where condition 3 is disregarded.
George Grätzer's book General Lattice Theory has a section IV.4 on partition lattices; see page 250 of this result of a Google Books search. A more recent version of the book is called Lattice Theory: Foundation.
Derivative of $\left(x^x\right)^x$ I am asked to find the derivative of $\left(x^x\right)^x$. So I said let $$y=(x^x)^x \Rightarrow \ln y=x\ln x^x \Rightarrow \ln y = x^2 \ln x.$$Differentiating both sides, $$\frac{dy}{dx}=y(2x\ln x+x)=x^{x^2+1}(2\ln x+1).$$ Now I checked this answer with Wolfram Alpha and I get that this is only correct when $x\in\mathbb{R},~x>0$. I see that if $x<0$ then $(x^x)^x\neq x^{x^2}$ but if $x$ is negative $\ln x $ is meaningless anyway (in real analysis). Would my answer above be acceptable in a first year calculus course? So, how do I get the correct general answer, that $$\frac{dy}{dx}=(x^x)^x (x+x \ln(x)+\ln(x^x)).$$ Thanks in advance.
If $y=(x^x)^x$ then $\ln y = x\ln(x^x) = x^2\ln x$. Then apply the product rule: $$ \frac{1}{y} \frac{dy}{dx} = 2x\ln x + \frac{x^2}{x} = 2x\ln x + x$$ Hence $y' = y(2x\ln x + x) = (x^x)^x(2x\ln x + x).$ This looks a little different to your expression, but note that $\ln(x^x) \equiv x\ln x$.
If $|\lambda_i|=1$ and $\mu_i^2=\lambda_i$, then $|\mu_i|=1$? If $|\lambda_i|=1$ and $\mu_i^2=\lambda_i$, then $|\mu_i|=1$? $|\mu_i|=|\sqrt{\lambda_i}|=\sqrt{|\lambda_i|}=1$. Is that a valid argument?
Yes, that is correct. Alternatively, you could write $1=|\lambda_i|=|{\mu_i}^2|=|\mu_i|^2$, and use $|\mu_i|\ge 0$ to arrive at the unique solution $|\mu_i|=1$.
Prove that $U$ is a self adjoint unitary operator Let $W$ be the finite dimensional subspace of an inner product space $V$ and $V=W\oplus W^\perp $. Define $U:V \rightarrow V$ by $U(v_1+v_2)=v_1-v_2$ where $v_1\in W$ and $v_2 \in W^\perp$. Prove that $U$ is a self adjoint unitary operator. I know I have to show that $\parallel U(x) \parallel=\parallel x \parallel $ but can't proceed from this stage.
$\langle U(x),U(x)\rangle = \langle U(v_1+v_2) , U(v_1+v_2)\rangle = \langle v_1 - v_2, v_1 - v_2\rangle = \langle v_1,v_1\rangle + \langle v_2,v_2\rangle = \langle x,x\rangle$, where the last two equalities come from the fact that $\langle v_1,v_2\rangle = 0$.
Which one is bigger: $\;35{,}043 × 25{,}430\,$ or $\,35{,}430 × 25{,}043\;$? Which of the two quantities is greater? Quantity A: $\;\;35{,}043 × 25{,}430$ Quantity B: $\;\;35{,}430 × 25{,}043$ What is the best and quickest way to get the answer without using calculation, I mean using bird's eye view?
Hint: Compare $a\times b$ with $$(a+x)\times (b-x)=ab-ax+bx-x^2=ab-x(a-b)-x^2$$ keeping in mind that in your question, $a> b$ and $x>0$.
Solving $f_n=\exp(f_{n-1})$ : Where is my mistake? I was trying to solve the recurrence $f_n=\exp(f_{n-1})$. My logic was this: $f_n -f_{n-1}=\exp(f_{n-1})-f_{n-1}$. The associated differential equation would then be $\dfrac{dg}{dn}=e^g-g$. If $f(m)=g(m)>0$ for some real $m$ then for $n>m$ we would have $g(n)>f(n)$. Solving the differential equation by separating the variables gives the solution $g(n)= \mathrm{inv}(\int_c^{n} \dfrac{dt}{e^t-t}+c_1)+c_2$ for some $c,c_1,c_2$. That solution seems correct, since $e^t-t=0$ has no real solution for $t$, so there are no issues with singularities near the real line. However $\mathrm{inv}(\int_c^{n} \dfrac{dt}{e^t-t}+c_1)+c_2$ does NOT seem close to $f_n$, let alone larger than it! So where did I make a big mistake in my logic? And can that mistake be fixed?
The problem is that the primitive $\displaystyle\int_\cdot^x\frac{\mathrm dt}{\mathrm e^t-t}$ does not converge to infinity when $x\to+\infty$. The comparison between $(f_n)$ and $g$ reads $$ \int_{f_1}^{f_n}\frac{\mathrm dt}{\mathrm e^t-t}\leqslant n-1, $$ for every $n\geqslant1$. When $n\to\infty$, the LHS converges to a finite limit hence one can be sure that the LHS and the RHS are quite different when $n\to\infty$ and that this upper bound becomes trivial for every $n$ large enough. Take-home message: to compare the sequence $(f_n)$ solving a recursion $f_{n+1}=f_n+h(f_n)$ and the function $g$ solving the differential equation $g'(t)=h(g(t))$ can be fruitful only when the integral $\displaystyle\int_\cdot^{+\infty}\frac{\mathrm dt}{h(t)}$ diverges.
Exercise 3.15 [Atiyah/Macdonald] I have a question regarding a claim in Atiyah, Macdonald. A is a commutative ring with $1$, $F$ is the free $A$-module $A^n$. Assume that $A$ is local with residue field $k = A/\mathfrak m$, and assume we are given a surjective map $\phi: F\to F$ with kernel $N$. Then why is the following true? Since $F$ is a flat $A$-module, the exact sequence $0\to N \to F\overset\phi\to F\to 0$ gives an exact sequence $0\to k\otimes N \to k\otimes F \overset{1\otimes \phi}\to k\otimes F \to 0$. I can see that $F$ is a free $A$-module, and that the first sequence is exact. But how does flatness of $F$ tell me something about the second sequence? Thanks!
A general principle in homological algebra is the following: every short exact sequence (ses) of chain complexes gives rise to a long exact sequence (LES) in homology. One can apply this principle to many situations; in our case it can be used to show that every ses of $A$-modules gives rise to a LES in Tor. The LES in your situation is exactly $$\ldots \to \text{Tor}_1^A(k, N) \to \text{Tor}_1^A(k, F) \to \text{Tor}_1^A(k, F) \to k \otimes_A N \to k \otimes_A F \to k\otimes_A F \to 0.$$ Now we claim that $\text{Tor}_1^A (k,F) = 0$. Indeed, because $F$ is free (hence projective) we can always take the tautological projective resolution $$ \ldots \to 0 \to 0 \to F \to F \to 0 $$ and remove the first $F$, tensor with $k$, to get the chain complex $$ \ldots \to 0 \to 0 \to k \otimes_A F \to 0$$ from which it is clear that the first homology group of this complex is zero, i.e. $\text{Tor}_1^A(k,F) = 0$.
Strictly convex sets If $S\subseteq \mathbb{R} ^2$ is closed and convex, we say $S$ is strictly convex if for any $x,y\in Bd(S)$ the segment $\overline{xy} \not\subseteq Bd(S)$. Show that if $S$ is compact, convex and of constant width then $S$ is strictly convex. Any hint? Thank you.
The idea of celtschk works just fine. Suppose that the line $L$ meets $\partial S$ along a line segment. Let $a\in S$ be a point that maximizes the distance from $L$ among all points in $S$. This distance, say $w$, is the width of $S$. Let $b$ be any point of $L\cap \partial S$ which is not the orthogonal projection of $a$ onto $L$. Then the distance from $a$ to $b$ is greater than $w$, a contradiction. (The projection of $S$ onto the line through $a$ and $b$ will have length $>w$.)
Find all positive integers $x$ such that $13 \mid (x^2 + 1)$ I was able to solve this by hand to get $x = 5$ and $x =8$. I didn't know if there were more solutions, so I just verified it by WolframAlpha. I set up the congruence relation $x^2 \equiv -1 \mod13$ and just literally just multiplied out. This lead me to two questions: * *But I was wondering how would I do this if the $x$'s were really large? It doesn't seem like multiplying out by hand could be the only possible method. *Further, what if there were 15 or 100 of these $x$'s? How do I know when to stop?
Starting with $2$, the smallest natural number $>1$ coprime to $13$: $2^1=2,2^2=4,2^3=8,2^4=16\equiv3,2^5=32\equiv6,2^6=64\equiv-1\pmod{13}$. As $2^6=(2^3)^2$, $2^3=8$ is a solution of $x^2\equiv-1\pmod{13}$. Now, observe that $x^2\equiv a\pmod m\iff (-x)^2\equiv a$. So, $8^2\equiv-1\pmod {13}\iff(-8)^2\equiv-1$, and $-8\equiv5\pmod{13}$. If we need $x^2\equiv-1\pmod m$ where the integer $m=\prod p_i^{r_i}$ with the $p_i$ distinct primes satisfying $p_i\equiv1\pmod 4$ for each $i$ (Proof), then $x^2\equiv-1\pmod {p_i^{r_i}}$. Applying the discrete logarithm with respect to any primitive root $g\pmod {p_i^{r_i}}$, $$2\,ind_gx\equiv \frac{\phi(p_i^{r_i})}2 \pmod {\phi(p_i^{r_i})},$$ because if $y\equiv-1\pmod {p_i^{r_i}}$ then $y^2\equiv1$, so $2\,ind_gy\equiv0 \pmod {\phi(p_i^{r_i})}$, and hence $ind_gy\equiv \frac{\phi(p_i^{r_i})}2 \pmod {\phi(p_i^{r_i})}$, as $ind_gy\not\equiv0\pmod {\phi(p_i^{r_i})}$ (since $y\not\equiv1$). Now apply CRT for the relatively prime moduli $p_i^{r_i}$. For example, if $m=13$, then $\phi(13)=12$ and $2$ is a primitive root of $13$. So $2\,ind_2x\equiv 6\pmod {12}\implies ind_2x\equiv3\pmod 6$, which gives $x=2^3\equiv8\pmod{13}$ and $x=2^9=2^6\cdot2^3\equiv(-1)\cdot8\equiv-8\equiv5\pmod{13}$.
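For completeness, a short Python check (my addition, not part of the original answer) finds the roots by brute force and via a primitive root $g$: since $g^{(p-1)/2} \equiv -1$ for a primitive root, $g^{(p-1)/4}$ squares to $-1$ when $p\equiv1\pmod4$.

    p = 13
    roots = [x for x in range(p) if (x * x + 1) % p == 0]
    print(roots)                    # [5, 8]

    # with the primitive root g = 2 and p = 13:
    print(pow(2, (p - 1) // 4, p))  # 8, and the other root is 13 - 8 = 5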
If $f(a)$ is divisible by either $101$ or $107$ for each $a\in\Bbb{Z}$, then $f(a)$ is divisible by at least one of them for all $a$ I've been struggling with this problem for a while, I really don't know where to start: Let $f(x) \in \mathbb{Z}[X]$ be a polynomial such that for every value of $a \in \mathbb{Z}$, $f(a)$ is always a multiple of $101$ or $107$. Prove that $f(a)$ is always divisible by $101$ for all values of $a$, or that $f(a)$ is divisible by 107 for all values of $a$.
If neither of the statements "$f(x)$ is always divisible by $101$" or "$f(x)$ is always divisible by $107$" is true, then there exist $a,b\in{\bf Z}$ so that $107\nmid f(a)$ and $101\nmid f(b)$. It follows from hypotheses that $$\begin{cases} f(a)\equiv 0\bmod 101 \\ f(a)\not\equiv0\bmod 107\end{cases}\qquad \begin{cases}f(b)\not\equiv 0\bmod 101 \\ f(b)\equiv 0\bmod 107\end{cases}$$ Let $c\in{\bf Z}$ be $\equiv a\bmod 107$ and $\equiv b\bmod 101$. Is $f(c)$ divisible by $101$ or $107$?
How could we see that $\{n_k\}_k$ tends to $\infty$? Let $x \in \Bbb R\setminus \Bbb Q$ and suppose the sequence $\{\frac {m_k} {n_k}\}_k$ converges to $x$. The question is from this comment by Ilya: how could we see that $\{n_k\}_k$ tends to $\infty$? Thanks for your help.
Let $M$ be fixed. We show that there exists $k_0$ such that $$n_k>M, \quad k\geq k_0.$$ Assuming the contrary, we get a subsequence $\{n_{k_j}\}_j$ such that $n_{k_j} \leq M$ for all $j\geq 1$. Note that $$\frac{m_{k_j}}{n_{k_j}}\to x.$$ Since such fractions can be written as $$\frac{m_{k_j}}{n_{k_j}}=\frac{A_{k_j}}{M!},$$ where the $A_{k_j}$ are integers, $\frac{m_{k_j}}{n_{k_j}}$ cannot tend to an irrational number. So there is no such subsequence $\{n_{k_j}\}_j$.
Find the greatest common divisor of the polynomials: a) $X^m-1$ and $X^n-1$ in $\mathbb{Q}[X]$; b) $X^m+a^m$ and $X^n+a^n$ in $\mathbb{Q}[X]$, where $a \in \mathbb{Q}$ and $m,n \in \mathbb{N}^*$. I will appreciate any explanations! Thanks
Let $n=mq+r$ with $0\leq r<m $ then $$x^n-1= (x^m)^q x^r-1=\left((x^m)^q-1\right)x^r+(x^r-1)=(x^m-1)\left(\sum_{k=0}^{q-1}x^{mk}\right)x^r+(x^r-1)$$ and $$\deg(x^r-1)<\deg(x^m-1)$$ hence by doing the Euclidean algorithm in parallel for the integers and the polynomials, we find $$(x^n-1)\wedge(x^m-1)=x^{n\wedge m}-1$$
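Part a) can also be sanity-checked with sympy (my addition, not part of the original answer); the polynomial gcd matches $x^{\gcd(m,n)}-1$ in every case tried:

    import sympy as sp
    from math import gcd

    x = sp.symbols('x')
    for m, n in [(4, 6), (9, 6), (10, 15)]:
        g = sp.gcd(x**m - 1, x**n - 1)
        # check that the polynomial gcd equals x^gcd(m, n) - 1
        print((m, n), sp.expand(g - (x**gcd(m, n) - 1)) == 0)  # True each time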
If $T\colon V\to V$ is linear then $\text{Im}(T) = \ker(T)$ implies $T^2 = 0$ I'm trying to show that if $V$ is finite dimensional and $T\colon V\to V$ is linear then $\text{Im}(T) = \ker(T)$ implies $T^2 = 0$. I've tried taking a $v$ in the kernel; since it's in the kernel we know it's in the image, so there is a $w$ such that $T(w) = v$, and then $TT(w) = 0$, but that's only for a specific $w$? Thanks
Hint: Note that $T^2$ should be read as $T\circ T$. You wish to show that $(\forall v\in V)((T\circ T)(v)=0)$. I think you can do this.
Proving $(A \land B) \to C$ and $A \to (B \to C)$ are equivalent Prove that $(A \land B) \rightarrow C$ is equivalent to $A \rightarrow (B \rightarrow C)$ in two ways: by semantics and syntax. Can somebody give hints or answer to solve it?
Semantically you can just consider two cases. 1) Suppose A is true, and 2) Suppose A is false. Since all atomic propositions in classical logic are either true or false, but not both, this method will work. Syntactically, we'll need to know the proof system (the axioms and the rules of inference for your system) to know how to solve this.
Insertion sort proof I am reading the Algorithm Design Manual by Skiena. It gives a proof of insertion sort by induction. I reproduce the proof below. Consider the correctness of insertion sort, which we introduced at the beginning of this chapter. The reason it is correct can be shown inductively:
* The basis case consists of a single element, and by definition a one-element array is completely sorted.
* In general, we can assume that the first $n − 1$ elements of array $A$ are completely sorted after $n − 1$ iterations of insertion sort.
* To insert one last element $x$ into $A$, we find where it goes, namely the unique spot between the biggest element less than or equal to $x$ and the smallest element greater than $x$. This is done by moving all the greater elements back by one position, creating room for $x$ in the desired location.
I cannot understand the last paragraph (i.e., 3). Could someone please explain it to me with an example?
The algorithm will have the property that at each iteration, the array will consist of two subarrays: the left subarray will always be sorted, so at each iteration our array will look like $$ \langle\; \text{(a sorted array)}, \fbox{current element},\text{(the other elements)}\;\rangle $$ We work from left to right, inserting each current element where it belongs in the sorted subarray. To do that, we find where the current element belongs, shift the larger elements one position to the right, and place the current element where it belongs. Consider, for example, the initial array $\langle\; 7, 2, 6, 11, 4, 8, 5\;\rangle$. We start with $$ \langle\; \fbox{7}, 2, 6, 11, 4, 8, 5\;\rangle $$ The sorted part is initially empty, so inserting the 7 into an empty array will just give the array $\langle\;7\;\rangle$. In the second iteration we have $$ \langle\; 7, \fbox{2}, 6, 11, 4, 8, 5\;\rangle $$ and now your part (3) comes into play: we find that the element 2 belongs at the front of the sorted list, so we shift the 7 one position to the right and insert the 2 in its proper location, giving us $$ \langle\; 2, 7, \fbox{6}, 11, 4, 8, 5\;\rangle $$ Inserting the 6 in its proper place (after shifting the 7 to make room) in the sorted subarray yields $$ \langle\; 2, 6, 7, \fbox{11}, 4, 8, 5\;\rangle $$ Continuing this process, we'll have $$ \langle\; 2, 6, 7, 11, \fbox{4}, 8, 5\;\rangle $$ (since the 11 is already where it should be, so no shifting was necessary). Then we have $$ \langle\; 2, 4, 6, 7, 11, \fbox{8}, 5\;\rangle $$ (shifting the 7 and 11). Then $$ \langle\; 2, 4, 6, 7, 8, 11, \fbox{5}\;\rangle $$ and, finally, we use paragraph (3) one final time to get $$ \langle\; 2, 4, 5, 6, 7, 8, 11\;\rangle $$
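Here is the same process as runnable Python (my addition); the inner while loop is exactly paragraph (3): shift the larger elements one slot right, then drop the current element into the gap.

    def insertion_sort(a):
        """Sort list a in place; the prefix a[:i] is sorted before iteration i."""
        for i in range(1, len(a)):
            current = a[i]
            j = i - 1
            # shift the larger elements of the sorted prefix one slot right
            while j >= 0 and a[j] > current:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = current   # place the current element into the gap
        return a

    print(insertion_sort([7, 2, 6, 11, 4, 8, 5]))  # [2, 4, 5, 6, 7, 8, 11]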
Every principal ideal domain satisfies ACCP. Every principal ideal domain $D$ satisfies ACCP (ascending chain condition on principal ideals). Proof. Let $(a_1) ⊆ (a_2) ⊆ (a_3) ⊆ · · ·$ be a chain of principal ideals in $D$. It can be easily verified that $I = \bigcup_{i\in\mathbb{N}} (a_i)$ is an ideal of $D$. Since $D$ is a PID, there exists an element $a ∈ D$ such that $ I = (a)$. Hence, $a ∈ (a_n)$ for some positive integer $n$. Then $I ⊆ (a_n) ⊆ I$. Therefore, $I = (a_n)$. For $t ≥ n$, $(a_t) ⊆ I = (a_n) ⊆ (a_t)$. Thus, $(a_n) = (a_t)$ for all $t ≥ n$. I have proved $I$ is an ideal in the following way: Let $x,y\in I$. Then there exist $i,j \in \mathbb{N}$ s.t. $x \in (a_i)$ & $y \in (a_j)$. Let $k \in \mathbb{N}$ s.t. $k>i,j$. Then $x \in (a_k)$ & $y \in (a_k)$. As $(a_k)$ is an ideal, $x-y \in (a_k)\subseteq I$ and $rx,xr \in (a_k)\subseteq I$. So $I$ is an ideal. Is it correct?
Your proof is right, but you can simply let $t = \max(i,j)$ and take $k = t$ (or any $k \ge t$).
How many resulting regions if we partition $\mathbb{R}^m$ with $n$ hyperplanes? This is a generalization of this question. So in $\mathbb{R}^2$, the problem is illustrated like so: Here, $n = 3$ lines divides $\mathbb{R}^2$ into $N_2=7$ regions. For general $n$ in the case of $\mathbb{R}^2$, the number of regions $N_2$ is $\binom{n+1}{2}+1$. But what about if we consider the case of $\mathbb{R}^m$, partitioned using $n$ hyperplanes? Is the answer $N_m$ still $\binom{n+1}{2}+1$, or will it be a function of $m$?
Denote this number as $A(m, n)$. We will prove $A(m, n) = A(m, n-1) + A(m-1, n-1)$. Consider removing one of the hyperplanes; the maximum number of regions is then $A(m, n-1)$. Then, we add the hyperplane back. The number of regions that the other hyperplanes cut it into equals the number of newly added regions, and since this hyperplane is $(m-1)$-dimensional, this maximum number is $A(m-1, n-1)$. So this number satisfies $A(m, n) = A(m, n-1) + A(m-1, n-1)$, and the formula $A(m,n)=\sum_{i=0}^m \binom{n}{i}$ can be derived by induction.
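The closed form is easy to evaluate; this small Python sketch (my addition) reproduces the $N_2=7$ from the picture and the classical $\binom{n+1}{2}+1$ values for $m=2$:

    from math import comb

    def regions(m, n):
        # maximum number of regions that n hyperplanes cut R^m into
        # (math.comb returns 0 when i > n, so the sum is safe for n < m)
        return sum(comb(n, i) for i in range(m + 1))

    print(regions(2, 3))   # 7, matching the picture in the question
    print(regions(2, 4))   # 11 = C(5, 2) + 1
    print(regions(3, 4))   # 15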
Recurrence relations: How many numbers between 1 and 10,000,000 don't have the string 12 or 21 So the question is (to be solved with recurrence relations): how many numbers between 1 and 10,000,000 don't contain the string 12 or 21? So my solution: $a_n=10a_{n-1}-2a_{n-2}$. The $10a_{n-1}$ represents the number of strings of $n$ digits from 0 to 9, and the $2a_{n-2}$ represents the strings of length $n$ with the string 12 or 21 included. I just wanted to know if my recursion is correct; if so, I'll be able to solve the rest. Thanks in advance!
We look at a slightly different problem, from which your question can be answered. Call a digit string good if it does not have $12$ or $21$ in it. Let $a_n$ be the number of good strings of length $n$. Let $b_n$ be the number of good strings of length $n$ that end with a $1$ or a $2$, Then $a_n-b_n$ is the number of good strings of length $n$ that don't end with $1$ or $2$. We have $$a_{n+1}=10(a_n-b_n) +9b_n.$$ For a good string of length $n+1$ is obtained by appending any digit to a good string that doesn't end with $1$ or $2$, or by appending any digit except the forbidden one to a good string that ends in $1$ or $2$. We also have $$b_{n+1}=2(a_n-b_n) + b_n.$$ For we obtain a good string of length $n+1$ that ends in $1$ or $2$ by appending $1$ or $2$ to a string that doesn't end with either, or by taking a string that ends with $1$ (respectively, $2$) and adding a $1$ (respectively, $2$). The two recurrences simplify to $$a_{n+1}=10a_n-b_n\qquad\text{ and}\qquad b_{n+1}=2a_n-b_n.$$ For calculational purposes, these are good enough. We do not really need a recurrence for the $a_i$ alone. However, your question perhaps asks about the $a_i$, so we eliminate the $b$'s. One standard way to do this is to increment $n$ in the first recurrence, and obtain $$a_{n+2}=10a_{n+1}-b_{n+1}.$$ But $b_{n+1}=2a_n-b_n$, so $$a_{n+2}=10a_{n+1}-2a_n+b_n.$$ But $b_n=10a_n-a_{n+1}$, and therefore $$a_{n+2}=9a_{n+1}+8a_n.$$ Remark: It would have been better to have $b_n$ as above, and $c_n$ the number of strings that do not end in $1$ or $2$, and to forget about $a_n$ entirely for a while.
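As a sanity check (my addition, not part of the original answer), brute-force counting over all digit strings of small length agrees with the recurrence $a_{n+2}=9a_{n+1}+8a_n$, where $a_1=10$ and $a_2=98$ (all $100$ two-digit strings minus "12" and "21"):

    from itertools import product

    def brute(n):
        # count digit strings of length n that avoid "12" and "21"
        return sum('12' not in s and '21' not in s
                   for s in map(''.join, product('0123456789', repeat=n)))

    a = [10, 98]
    for _ in range(3):
        a.append(9 * a[-1] + 8 * a[-2])
    print(a)                                # [10, 98, 962, 9442, 92674]
    print([brute(n) for n in range(1, 5)])  # [10, 98, 962, 9442]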
Can every real number be represented by a (possibly infinite) decimal? Does every real number have a representation within our decimal system? The reason I ask is because, from beginning a mathematics undergraduate degree a lot of 'mathematical facts' I had previously assumed have been consistently modified, or altogether stripped away. I'm wondering if my subconscious assumption that every real number can be represented in such a way is in fact incorrect? If so, is there a proof? If not, why not? (Also I'm not quite sure how to tag this question?)
Irrational numbers were known to the ancient Greeks, as I expect you know. But it took humankind another 2000 years to come up with a satisfactory definition of them. This was mainly because nobody realised that a satisfactory definition was lacking. Once humankind realised this, various suggestions were proposed. One suggestion (Dedekind's) defined a real number as two infinite sets of rational numbers, which 'sandwiched' the real number; another suggestion (Cauchy's) defined a real number as an equivalence class of sequences obeying a certain convergence criterion. The details are available in many places. But the important point is that all of the reasonable definitions turned out to be equivalent -- the set of real numbers according to Dedekind's definition was 'the same' as the set of real numbers according to Cauchy's definition, although the definitions look completely different. Now, to your question: another reasonable definition of a real number is a non-terminating decimal expansion (we say non-terminating just to clear up the ambiguity that arises between e.g. 123.4599999... and 123.46 -- only the first is allowed). It turns out that this definition is equivalent to all the others. So your intuition is correct. But strictly speaking, your question is flawed: instead of asking whether every real number can be represented in this way, you should ask whether this representation of the real numbers is a valid one. And it is.
Existence and uniqueness of God Over lunch, my math professor teasingly gave this argument: God by definition is perfect. Non-existence would be an imperfection, therefore God exists. Non-uniqueness would be an imperfection, therefore God is unique. I have thought about it; please critique from a mathematical/logical point of view:
* Why does/doesn't this argument go through? Does it violate any logical deduction rules?
* Can this statement be altered in a way that it belongs to ZF + something? What about any axiomatic system?
* Is it possible to make mathematically precise the notion of "perfect"?
Existence is not a predicate. You may want to read Gödel's ontological proof, which you can find on Wikipedia. Equally good is the claim that uniqueness is an imperfection, since something which is perfect cannot be scarce and unique. Therefore God is inconsistent...?
How to prove $¬\forall x P(x)$ I have a step but can't figure out the rest. I have been trying to understand for hours and the slides don't help. I know that since I have "not P" there is a case where not all $x$ satisfy $P$... but how do I show this logically?
1. $\forall x (P(x) → Q(x))$   Given
2. $¬Q(x)$   Given
3. $¬P(x)$   Modus Tollens using (1) and (2)
4.
5.
6.
First, you want to instantiate your quantified statement with a witness, say $x$: So from $(1)$ we get $$\;P(x) \rightarrow Q(x) \tag{$1\dagger$}$$ Then from $(1\dagger)$ with $(2)$ $\lnot Q(x)$, by modus tollens, you can correctly infer $(3)$: $\lnot P(x)$. So, from $(3)$ you can affirm the existence of an $x$ such that $\lnot P(x)$ holds: $\quad\exists x \lnot P(x)$ Then recall that, by DeMorgan's for quantifiers,$$\underbrace{\exists x \lnot P(x) \quad \equiv \quad \lnot \forall x P(x)}_{\text{these statements are equivalent}}$$
Linear Algebra dependent Eigenvectors Proof Problem statement: Let $n \ge 2 $ be an integer. Suppose that A is an $n \times n$ matrix and that $\lambda_1$, $\lambda_2$ are eigenvalues of A with corresponding eigenvectors $v_1$, $v_2$ respectively. Prove that if $v_1$, $v_2$ are linearly dependent then $\lambda_1 = \lambda_2$. I have an intuition as to why this is true, but am having difficulty formalizing a proof. What I have doesn't seem tight enough. If $v_1$ and $v_2$ are linearly dependent then $v_1$ lies in the span of $v_2$. If two eigenvectors lie in the span of one another then only one of them is required in order to form a basis of the eigenspace. All eigenvalues correspond to a single $n\times 1$ eigenvector or a set of $n\times 1$ linearly independent vectors. Since $v_1$ and $v_2$ are linearly dependent, we know that there can only be one eigenvalue that corresponds to the single eigenvector. Thus $\lambda_1$ must equal $\lambda_2$. Any thoughts or criticism are welcome. Thanks
You know that $$Av_1=\lambda_1v_1\\Av_2=\lambda_2v_2$$ If $v_1,v_2$ are linearly dependent, then $v_1=\mu v_2$ for some scalar $\mu$. Putting this in the first equation, $$A(\mu v_2) = \lambda_1(\mu v_2) \implies Av_2 = \lambda_1 v_2$$ This gives $\lambda_1=\lambda_2$ as desired. I think your idea is on the right track, but putting it in the above way gives more clarity.
Prove inequality: $74 - 37\sqrt 2 \le a+b+6(c+d) \le 74 +37\sqrt 2$ Let $a,b,c,d \in \mathbb R$ such that $a^2 + b^2 + 1 = 2(a+b)$, $c^2 + d^2 + 6^2 = 12(c+d)$; prove the inequality without calculus (or Lagrange multipliers): $$74 - 37\sqrt 2 \le a+b+6(c+d) \le 74 +37\sqrt 2$$ The original problem is to find the max and min of $a+b+6(c+d)$ where ... Using some calculus, I found them, but could you solve it without calculus?
Hint: You can split this problem to find max and min of $a+b$ and $c+d$.
supremum of a family of convex functions If $\{J_n\}$ is a family of convex functions on a convex set $U$ and $G(u)=\sup_i J_i(u)$, $u\in U$, how do I show that $G(u)$ is convex too? I've done this, but I am not sure about the properties of a supremum. Since $U$ is convex, $\alpha x +(1-\alpha) y\in U$ for all $x,y\in U$. If $G$ is convex, then it would hold that $G(\alpha x +(1-\alpha) y)\leq \alpha G(x)+(1-\alpha) G(y)$, i.e. $\sup_i J_i(\alpha x +(1-\alpha) y)\leq \alpha \sup_i J_i(x)+(1-\alpha) \sup_i J_i(y)$. So, I've done this: $G(\alpha x +(1-\alpha) y)=\sup_i J_i(\alpha x +(1-\alpha) y)\leq \sup_i \{\alpha J_i(x)+(1-\alpha)J_i(y)\}\leq \alpha \sup_i J_i(x)+(1-\alpha) \sup_i J_i(y)=\alpha G(x)+(1-\alpha) G(y)$. Is this ok?
It seems that you assume that your $J_n$ are convex real-valued functions. One can prove that their pointwise supremum is convex without assuming that the common domain $U$ is convex, or even that the set of indices $n$ is finite. A function $J_n$ is convex iff its epigraph is a convex set. The epigraph of the supremum $G=\sup_n J_n$ is precisely the intersection of the epigraphs of the $J_n$'s. But an intersection of (an arbitrary number of) convex sets is convex, from which it follows that $G$ is convex.
Maximum value of a product How to write the number $60$ as $\displaystyle\sum^{6}_{i=1} x_i$ such that $\displaystyle\prod^{6}_{i=1} x_i$ has maximum value? Thanks to everyone :) Is there a way to solve this using Lagrange multipliers?
(of course the $x_i$ must be positive, otherwise the product may be as great as you want) Hint: if you have $x_i \ne x_j$, substitute both with their arithmetic mean.
CDF of the distance of two random points on (0,1) Let $Y_1 \sim U(0,1)$ and $Y_2 \sim U(0,1)$. Let $X = |Y_1 - Y_2|$. Now the solution for the CDF in my book looks like this: $P(X < t) = P(|Y_1 - Y_2| < t) = P(Y_2 - t < Y_1 < Y_2 + t) = 1-(1-t)^2$ They give this result without explanation. How do they come up with the $1-(1-t)^2$ part? Can you help me find the explanation?
I want to change notation. Call the random variable called $Y_1$ in the problem by the name $X$. Call the random variable called $Y_2$ in the problem by the name $Y$. And finally, call the random variable called $X$ in the problem by the name $T$. Trust me, these name changes are a good idea! We need to assume that $X$ and $Y$ are independent. Fix $t$ between $0$ and $1$. In the usual coordinate plane, draw the square with corners $(0,0)$, $(1,0)$, $(1,1)$, and $(0,1)$. Now draw the two lines $y=x+t$ and $y=x-t$. By independence, the joint distribution of $(X,Y)$ is uniform in our square. Draw the lines $y=x-t$ and $y=x+t$. You know well what these look like. Remember that $0\le t \le 1$ when drawing the lines. For a nice picture, you could for example pick $t$ around $\frac{1}{3}$. (Without drawing a picture, you are unlikely to understand what is really going on.) Note that $T\le t$ if and only if $|X-Y|\le t$ if and only if the pair $(X,Y)$ lands between our two lines. The probability that this happens is the area of the region between the two lines, divided by the area of the whole square, which is $1$. So we need to find the area of the region between the two lines. Now we find that area. The part of the square which is outside our region consists of two isosceles right-angled triangles. Each of these triangles has legs $1-t$, so together they make up a $(1-t)\times (1-t)$ square, with area $(1-t)^2$. Thus the area of the region between the two lines is $1-(1-t)^2$.
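A quick Monte Carlo check of $P(|Y_1-Y_2|\le t)=1-(1-t)^2$ (my addition, not part of the original answer):

    import numpy as np

    rng = np.random.default_rng(1)
    y1 = rng.random(1_000_000)
    y2 = rng.random(1_000_000)
    t = 0.3
    empirical = np.mean(np.abs(y1 - y2) <= t)
    print(empirical, 1 - (1 - t)**2)   # both close to 0.51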
probability of sum of two integers less than an integer Two integers [not necessarily distinct] are chosen from the set {1,2,3,...,n}. What is the probability that their sum is <=k? My approach is as follows. Let a and b be two integers. First we calculate the probability of the sum of a+b being equal to x [1<=x<=n]. WLOG let a be chosen first. For b= x-a to be positive, we must have 1<=a < x. This gives (x-1) possible values for a out of total n possible values. Probability of valid selection of a= (x-1)/n. For each valid selection of a, we have one and only one possible value of b. Only 1 value of b is then valid out of total n possible values. Thus probability of valid selection of b= 1/n. Thus probability of (a+b= x) = (x-1)/n(n-1). Now probability of (a+b<=k) = Probability of (a+b= 2) + probability of (a+b= 3) + ... + probability of (a+b= k) = {1+2+3+4+5+...+(k-1)}/[n(n-1)] = k(k-1)/[2n(n-1)]. Can anybody please check if my approach is correct here?
Notice if $k\le 1$ the probability is $0$, and if $k\ge 2n$ the probability is $1$, so let's assume $2\le k\le 2n-1$. For some $i$ satisfying $2\le i\le 2n-1$, how many ways can we choose $2$ numbers to add up to $i$? If $i\le n+1$, there are $i-1$ ways. If $i\ge n+2$, there are $2n-i+1$ ways. Now, suppose $k\le n+1$, so by summing we find: $$\sum_{i=2}^{k}i-1=\frac{k(k-1)}{2}$$ If $k\ge n+2$, if we sum from $i=2$ to $i=n+1$ we get $\frac{(n+1)n}{2}$, and then from $n+1$ to $k$ we get: $$\sum_{i=n+2}^k2n-i+1=\frac{1}{2}(3n-k)(k-n-1)$$ Adding the amount for $i\le n+1$ we get: $$2kn-\frac{k^2}{2}+\frac{k}{2}-n^2-n$$ Since there are $n^2$ choices altogether, we arrive at the following probabilities: $$\begin{cases}\frac{k(k-1)}{2n^2}&1\le k\le n+1\\\frac{4kn-k^2+k-2n^2-2n}{2n^2}&n+2\le k\le 2n\end{cases}$$
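Here is a small Python verification (my addition, not part of the original answer) that the two-case formula agrees with direct enumeration of all $n^2$ ordered pairs:

    from fractions import Fraction

    def exact(n, k):
        # P(a + b <= k) by direct enumeration of ordered pairs
        hits = sum(1 for a in range(1, n + 1)
                     for b in range(1, n + 1) if a + b <= k)
        return Fraction(hits, n * n)

    def formula(n, k):
        if k <= 1:
            return Fraction(0)
        if k >= 2 * n:
            return Fraction(1)
        if k <= n + 1:
            return Fraction(k * (k - 1), 2 * n * n)
        return Fraction(4*k*n - k*k + k - 2*n*n - 2*n, 2 * n * n)

    n = 10
    print(all(exact(n, k) == formula(n, k) for k in range(0, 2 * n + 2)))  # True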
Test for convergence of improper integrals $\int_{0}^{1}\frac{\sqrt{x}}{(1+x)\ln^3(1+x)}dx$ and $\int_{1}^{\infty}\frac{\sqrt{x}}{(1+x)\ln^3(1+x)}dx$ I need to test if, integrals below, either converge or diverge: 1) $\displaystyle\int_{0}^{1}\frac{\sqrt{x}}{(1+x)\ln^3(1+x)}dx$ 2) $\displaystyle\int_{1}^{\infty}\frac{\sqrt{x}}{(1+x)\ln^3(1+x)}dx$ I tried comparing with $\displaystyle\int_{0}^{1}\frac{1}{(1+x)\ln^3(1+x)}dx$, $\displaystyle\int_{0}^{1}\frac{\sqrt{x}}{(1+x)}dx$ but ended up with nothing. Do you have any suggestions? Thanks!
A related problem. 1) The integral diverges since as $x\sim 0$ $$\frac{\sqrt{x}}{(1+x)\ln^3(1+x)}\sim \frac{\sqrt{x}}{(1)(x^3) }\sim \frac{1}{x^{5/2}}.$$ Note: $$ \ln(1+x) = x - \frac{x^2}{2} + \dots. $$ 2) For the second integral, substitute $x \to 1/x$; this maps $(1,\infty)$ onto $(0,1)$ and brings in a Jacobian factor $dx/x^2$. The transformed integrand behaves, as $x\sim 0$, like $$ \frac{1}{x^2}\cdot\frac{\sqrt{1/x}}{(1+1/x)\ln^3(1+1/x)}= \frac{1}{x^2}\cdot\frac{\sqrt{x}}{(1+x)\ln^3(1+1/x)}\sim \frac{1}{x^{3/2}\ln^3(1/x)},$$ and since the power $x^{-3/2}$ beats any power of the logarithm, this integral diverges as well. (Equivalently, as $x\to\infty$ the original integrand behaves like $\frac{1}{\sqrt{x}\,\ln^3 x}$, which eventually dominates $\frac1x$.)
Under What Conditions and Why can Move Operator under Integral? Given a function space $V$ of some subset of real-valued functions on the real line, linear operator $L: V \rightarrow V$, and $f,g \in V$, define $$ h(t) = \int_{\mathbb{R}}f(u)g(u-t)du $$ Further, assume $h \in V$. Is the below true? $$L(h(t)) = \int_{\mathbb{R}}f(u)L(g(u-t))du $$ If not, under what assumptions is this true? If yes, why?
An arbitrary operator cannot be moved into the convolution. For example, if $Lh=\psi h$ for some nonconstant function $\psi$, then $$\psi(t) \int_{\mathbb R} f(u) g(u-t) \,du \ne \int_{\mathbb R} f(u) g(u-t) \psi(u-t) \,du $$ for general $f,g$. However, the identity is true for translation-invariant operators, i.e., those for which $L(g(t-c))=L(g)(t-c)$ for every $c\in\mathbb R$. Indeed, for such operators $$f*(Lg)= \int_{\mathbb{R}}f(u)L(g)(u-t)\,du =\int_{\mathbb{R}}f(u)L(g(u-t))\,du = L(f*g)$$
Theorems' names that don't credit the right people The point of this question is to compile a list of theorems that don't give credit to the right people, in the sense that the name(s) of the mathematician(s) who first proved the theorem do(es) not appear in the theorem's name. For instance, the Cantor-Schröder-Bernstein theorem was first proved by Dedekind. I'd also like to include situations in which someone conjectured something without proving it, then someone else conjectured the same thing later, also without proving it, and was credited with having first conjectured it. Similar unfair situations that I didn't remember to include might also be considered. Some kind of reference is appreciated.
Nobody's mentioned the Pythagorean theorem yet?
Inner Product Spaces : $N(T^{\star}\circ T) = N(T)$ (A PROOF) Let $T$ be a linear operator on an inner product space. I really just want a hint as to how prove that $N(T^{\dagger}\circ T) = N(T)$, where "$^\dagger$" stands for the conjugate transpose. Just as an aside, how should I read to myself the following symbolism:
Hint: Let $V$ denote your inner product space. Clearly $N(T)\subseteq N(T^* T)$, so you really want to show that $N(T^* T)\subseteq N(T)$. Suppose $x\in N(T^* T)$. Then $T^* Tx = 0$, so we have $\langle T^* Tx, y\rangle = 0$ for all $y\in V$. Can you see where to go from here?
Concept about series tests I have five kinds of tests here:
1. Divergence test
2. Ratio test
3. Integral test
4. Comparison test
5. Alternating series test
And a few questions about them.
1. Are tests 1, 2, 3, 4 only available for positive series, and is the alternating series test only for alternating series?
2. To show $\sum_{n=1}^{\infty}(-1)^n$ diverges, I can't use the alternating series test, right? It just tells me that the series doesn't converge. So I tried to use the divergence test, but it seems like the divergence test is not applicable to alternating series.
I assume that for (1) you mean the theorem that says that if the $n^\text{th}$ term does not approach 0 as $n \to \infty$ then the series diverges. This test does not require the terms to be positive, so you can apply it to show that the series $\sum_{n=1}^\infty (-1)^n$ diverges. The ratio test does not require the terms to be positive. You end up taking the absolute value in this test, so signs do not matter. The usual formulations of the integral test and comparison test only apply to series with positive terms. The alternating series test is only for alternating series, as the name suggests. It has a couple of other requirements also. The alternating series test never tells you that a series diverges. If the hypotheses are met, then the conclusion is that the series converges.
Online Model Theory Classes Since "model theory" is kind of too general a name, I have encountered lots of irrelevant results (like mathematical modelling etc.) when I searched for videos on the special mathematical logic branch "model theory". So, do you know of, or have you ever seen, any online lecture videos on model theory? Any relevant answer will be appreciated...
If you are fluent in French, here are the lecture notes of Tuna Altinel's course: http://math.univ-lyon1.fr/~altinel/Master/m11415.html
Is this kind of space metrizable? It has a nice result from Tkachuk V V. Spaces that are projective with respect to classes of mappings[J]. Trans. Moscow Math. Soc, 1988, 50: 139-156. If the closure of every discrete subset of a space is compact then the whole space is compact. The proof can be seen here by Brian M. Scott. Then these questions arise naturally: Question 1: If the closure of every discrete subset of a space is countably compact, then is the whole space countably compact? Question 2: If the closure of every discrete subset of a space is metrizable, then is the whole space metrizable?
$\newcommand{\cl}{\operatorname{cl}}$The first conjecture is true at least for $T_1$ spaces. If $X$ is $T_1$ and not countably compact, then $X$ has an infinite closed discrete subspace, which is obviously not countably compact. Thus, if every discrete subspace of a $T_1$ space $X$ is countably compact, so is $X$. It’s at least consistent that the second conjecture is false. It is consistent that there be a compact Suslin line, i.e., a complete dense linear order $\langle X,\preceq\rangle$ with endpoints such that the order topology on $X$ is ccc but not separable. (E.g., the existence of Suslin line follows from the combinatorial principle $\diamondsuit$, which holds in $\mathsf{V=L}$.) Suppose that $F\subseteq X$ is closed, $x\in F$, and $F\cap[x,\to)$ is open in $F$. If $x=\min F$, then $x$ is not a left pseudogap of $F$. (For a definition of left and right pseudogaps see this answer.) Otherwise, $F\cap(\leftarrow,x)$ is a non-empty closed subset of $F$ and therefore of $X$. It follows that $F\cap(\leftarrow,x)$ is compact and has a maximum element $y$. But then $F\cap[x,\to)=\{z\in F:y\prec z\}$ is open in the order topology on $F$, and $x$ is not a left pseudogap of $F$. A similar argument shows that $F$ has no right pseudogaps and hence that the subspace topology on $F$ is identical to the order topology, so that $F$ with its subspace topology is a LOTS. Let $D\subseteq X$ be discrete. The spread of any LOTS is equal to its cellularity, so $D$ is countable. Let $Y=\cl_XD$; then $Y$ is a separable, compact LOTS. Let $$J=\{x\in Y:x\text{ has an immediate successor in }Y\}\;;$$ since $Y$ is a LOTS, $w(Y)=c(Y)+|J|=\omega+|J|$. For $x\in J$ let $x^+$ be the immediate successor of $x$ in $Y$; then $\{(x,x^+):x\in J\}$ is a pairwise disjoint family of non-empty open intervals in $X$, so $|J|\le\omega$, and $w(Y)=\omega$. It now follows from the Uryson metrization theorem that $Y$ is metrizable and hence that every discrete subset of $X$ has metrizable closure.
Prove $x^2+y^2+z^2 \ge 14$ with constraints Let $0<x\le y \le z,\ z\ge 3,\ y+z \ge 5,\ x+y+z = 6.$ Prove the inequalities: $I)\ x^2 + y^2 + z^2 \ge 14$ $II)\ \sqrt x + \sqrt y + \sqrt z \le 1 + \sqrt 2 + \sqrt 3$ My teacher said the method that solves problem I can be used to solve problem II, but I don't know what method my teacher was talking about, so the hint is useless to me. Please help. Thanks
Hint: $$x^2+y^2+z^2 \ge 14 = 1^2+2^2+3^2\iff (x-1)(x+1)+(y-2)(y+2)+(z-3)(z+3) \ge 0$$ $$\iff (z-3)[(z+3)-(y+2)] + (y+z-5)[(y+2)-(x+1)] + (x+y+z-6)(x+1) \ge 0,$$ which is always true under the given constraints.
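The algebraic identity behind the hint can be checked mechanically with sympy (my addition, not part of the original hint):

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    lhs = x**2 + y**2 + z**2 - 14
    rhs = ((z - 3)*((z + 3) - (y + 2))
           + (y + z - 5)*((y + 2) - (x + 1))
           + (x + y + z - 6)*(x + 1))
    print(sp.expand(lhs - rhs))   # 0, so the two expressions are identical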
Arctangent integral How come this is correct: $$\int \dfrac{3}{(3x)^2 + 1} dx = \arctan (3x) + C$$ I learned that $$\int \dfrac{1}{x^2+1} = \arctan(x) + C$$ But I don't see how you can get the above one from the other. The $1$ in the denominator especially confuses me.
We can say even more in the general case: if a function $\;f\;$ is differentiable, then $$\int \frac{f'(x)}{1+f(x)^2}dx=\arctan(f(x)) + K \quad (K \text{ a constant}),$$ which you can quickly verify by differentiating and applying the chain rule. In your particular case we simply have $\;f(x)=3x\;$ ...