Methods for determining the convergence of $\sum\frac{\cos n}{n}$ or $\sum\frac{\sin n}{n}$ As far as I know, the textbook approach to determining the convergence of series like $$\sum_{n=1}^\infty\frac{\cos n}{n}$$ and $$\sum_{n=1}^\infty\frac{\sin n}{n}$$ uses Dirichlet's test, which involves bounding the partial sums of the cosine or sine terms. I have two questions: * *Are there any other approaches to seeing that these series are convergent? I'm mostly just interested to see what other kinds of arguments might be made. *What's the best way to show that these two series are only conditionally convergent? I don't even know the textbook approach to that question.
Since you specifically said "Are there any other approaches to seeing that these series are convergent" I will take the bait and give an extremely sketchy argument that isn't meant to be a proof at all. The harmonic series $\sum_{n=1}^{\infty} \frac{1}{n}$ famously diverges, but very slowly. For large $n$ the numbers being added are so tiny that it takes enormously many of the tiny terms to amount to much at all. But $\cos(n)$ will be negative roughly half the time, so there's no possibility of any slow steady accumulation. Thus our sum $\sum_{n=1}^{\infty} \frac{\cos n}{n}$ will remain bounded. And furthermore, the magnitude of the terms being added gets smaller and smaller, so in a manner that's roughly analogous to the logic behind the alternating series test, the sum will oscillate [with a period of roughly 6], but with a decreasing amplitude around a value that is an artifact of the first handful of terms.
Does $\,x>0\,$ hint that $\,x\in\mathbb R\,$? Does $x>0$ suggest that $x\in\mathbb R$? For numbers not in $\,\mathbb R\,$ (e.g. in $\mathbb C\setminus \mathbb R$), their sizes can't be compared. So can I omit "$\,x\in\mathbb R\,$" and just write $\,x>0\,$? Thank you.
It really depends on context. But be safe; just say $x > 0, x\in \mathbb R$. Omitting the clarification can lead to misunderstanding, while including it takes up less than a centimeter of space. The benefits of clarifying the domain greatly outweigh the consequences of omitting the clarification. Besides, one might want to talk about rationals greater than $0$, or integers greater than $0$, and we would like to use $x \gt 0$ in those contexts as well. ADDED NOTE: That doesn't mean that after having clarified the context, and/or defined the domain, you should still use the qualification "$x\in \mathbb R$" every time you subsequently write $x \gt 0$, in a proof, for example. But if there's any question in your mind about whether or not to include it, err on the side of inclusion.
Simplifying this expression $(e^u-1)(e^l-1)$ Is it possible to write the following $$(e^u-1)(e^l-1)$$ as $$e^{f(u,l)}-1?$$
The first expression evaluates to $e^{u+l}-e^u-e^l+1$, so if your question is whether there is some algebraic manipulation that brings this into the form of an exponential minus $1$ for general $l,u$ then the answer is no. In fact the range of the function $x\mapsto e^x-1$ in $\Bbb R$ is $(-1,\infty)$, and the product of two values in that range may be outside the range if one is negative and the other sufficiently large. For instance for $u=-1,l=10$ your product will be much less than $-1$, so it cannot be written as $e^x-1$ for any real number $x$. You might want to get around this by using complex numbers for $f(u,l)$, but then consider the example $u=-\ln2,l=\ln3$, for which you have $e^u-1=-\frac12$ and $e^l-1=2$, so the product $(e^u-1)(e^l-1)$ is $-\frac12\times2=-1$, which is not a value $e^x-1$ even for $x\in\Bbb C$, so this shows you just cannot define $f$ for general arguments $u,l$ at all.
Evaluating $\int_0^\infty \frac{\log (1+x)}{1+x^2}dx$ Can this integral be solved with contour integral or by some application of residue theorem? $$\int_0^\infty \frac{\log (1+x)}{1+x^2}dx = \frac{\pi}{4}\log 2 + \text{Catalan constant}$$ It has two poles at $\pm i$ and branch point of $-1$ while the integral is to be evaluated from $0\to \infty$. How to get $\text{Catalan Constant}$? Please give some hints.
Following the same approach as in this answer, we have \begin{gather*} \int_0^\infty\frac{\ln^a(1+x)}{1+x^2}\mathrm{d}x=\int_0^\infty\frac{\ln^a\left(\frac{1+y}{y}\right)}{1+y^2}\mathrm{d}y\\ \overset{\frac{y}{1+y}=x}{=}(-1)^a\int_0^1\frac{\ln^a(x)}{x^2+(1-x)^2}\mathrm{d}x\\ \left\{\text{write $\frac{1}{x^2+(1-x)^2}=\Im\,\frac{1+i}{1-(1+i)x}$}\right\}\\ =(-1)^a \,\Im \int_0^1\frac{(1+i)\ln^a(x)}{1-(1+i)x}\mathrm{d}x\\ =a!\ \Im\{\operatorname{Li}_{a+1}(1+i)\} \end{gather*} where $\Im$ denotes the imaginary part and the last step follows from using the integral form of the polylogarithm function: $$\operatorname{Li}_{a}(z)=\frac{(-1)^{a-1}}{(a-1)!}\int_0^1\frac{z\ln^{a-1}(t)}{1-zt}\mathrm{d}t.$$
find out the value of $\dfrac {x^2}{9}+\dfrac {y^2}{25}+\dfrac {z^2}{16}$ If $(x-3)^2+(y-5)^2+(z-4)^2=0$,then find out the value of $$\dfrac {x^2}{9}+\dfrac {y^2}{25}+\dfrac {z^2}{16}$$ just give hint to start solution.
Hint: What values does the function $x^2$ take (positive/negative)? What is the solution of the equation $x^2=0$? Can you find the solutions of the equation $x^2+y^2=0$? Now, what can you say about the equation $(x-3)^2+(y-5)^2+(z-4)^2=0$? Can you find the values of $x,y,z$?
Solving modular equations that give GCD = 1 I have problems with understanding modular equations that give GCD = 1. For example: $$3x \equiv 59 \mod 100$$ So I'm getting $GCD(3, 100) = 1$. Now: $1 = -33*3 + 100$ That's where the first question appears - I always guess those $-33$ and $1$ (here) numbers... is there a systematic way to find them? And the second question - the answer to that equation, at least according to the book, is: {$-33*59 + 100k$} (k - integer) - and why is that so? Where did the $59$ come from?
$$\begin{eqnarray} \text{Note} &&\ 1 &=&\ 3\, (-33)\ \ \ \, +\ \ \ \, 100\, (1) \\ \stackrel{\times\, 59\ }\Rightarrow && 59\, &=&\ 3\, (-33\cdot 59) + 100\, (59)\end{eqnarray}$$ Hence $\ 59 = 3x+100y \ $ has a particular solution $\rm\:(x,y) = (-33\cdot 59, 59).\, $ By linearity, the general solution is the sum of this and the solution of the associated homogeneous equation $\ 0 = 3x+100y,\,$ so $\, \frac{y}x = \frac{-3}{100},\ $ so $\ x = 100k,\ y = -3k .\,$ So the general solution is their sum $$(-33\cdot 59,\,59)\, +\, (100k, -3k)\, =\, (-33\cdot 59+100k,\, 59-3k)$$
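If you want to avoid guessing the Bézout coefficients such as $-33$, the extended Euclidean algorithm produces them mechanically. Here is a minimal Python sketch (the function name is my own choice, not from the book):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y = g."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    # g = b*x + (a % b)*y = b*x + (a - (a//b)*b)*y = a*y + b*(x - (a//b)*y)
    return g, y, x - (a // b) * y

# Solve 3x = 59 (mod 100): gcd(3, 100) = 1 = 3*(-33) + 100*1,
# so the inverse of 3 mod 100 is -33, and x = -33*59 (mod 100).
g, inv, _ = extended_gcd(3, 100)
assert g == 1
x = (inv * 59) % 100
print(inv, x)   # -33 and 53, i.e. the solutions are x = 53 + 100k
```

Note that $53 \equiv -33\cdot 59 \pmod{100}$, so this agrees with the book's family $\{-33\cdot 59 + 100k\}$.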
HINT for summing digits of a large power I recently started working through the Project Euler challenges, but I've got stuck on #16 (http://projecteuler.net/problem=16) $2^{15} = 32768$ and the sum of its digits is $3 + 2 + 7 + 6 + 8 = 26$. What is the sum of the digits of the number $2^{1000}$? (since I'm a big fan of generality, my interpretation is to find a solution to the sum of digits of $a^b$ in base $c$, and obviously I'm trying to solve it without resorting to "cheats" like arbitrary-precision numbers). I guess this is simpler than I'm making it, but I've got no interest in being told the answer so I haven't been able to do a lot of internet searching (too many places just give these things away). So I'd appreciate a hint in the right direction. I know that $2^{1000} = 2^{2*2*2*5*5*5} = (((((2^2)^2)^2)^5)^5)^5$, and that the repeated sum of digits of powers of 2 follows the pattern $2, 4, 8, 7, 5, 1$, and that the last digit can be determined by an efficient pow-mod algorithm (which I already have from an earlier challenge), but I haven't been able to get further than that… (and I'm not even sure that those are relevant).
In this case, I'm afraid you just have to go ahead and calculate $2^{1000}$. There are various clever ways to do this, but for numbers this small a simple algorithm is fast enough. The very simplest is to work in base 10 from the start. Faster is to work in binary, and convert to base 10 at the end, but this conversion is not completely trivial. A compromise (on a 32-bit machine) would be to work in base 1000000000, so that you can double a number without overflow. Then at the end the conversion to base 10 is much simpler.
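Here is a rough sketch, assuming Python, of the "compromise" described above: store the number as limbs in base $10^9$, double it $1000$ times, and only convert to decimal at the end. (Python's built-in integers are already arbitrary precision, so this is purely an illustration of the limb idea, and the numeric answer is deliberately not stated.)

```python
BASE = 10**9  # each list entry holds 9 decimal digits, least significant limb first

def double(limbs):
    """Double a number stored as base-10**9 limbs, in place."""
    carry = 0
    for i in range(len(limbs)):
        carry, limbs[i] = divmod(2 * limbs[i] + carry, BASE)
    if carry:
        limbs.append(carry)

n = [1]                      # the number 1
for _ in range(1000):        # n becomes 2**1000
    double(n)

# Convert the limbs to a decimal string and sum its digits.
digits = str(n[-1]) + "".join(str(limb).zfill(9) for limb in reversed(n[:-1]))
print(sum(int(d) for d in digits))
```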
Find the transform I have the paper with 3 points on it. I have also a photo of this paper. How can I determine where is the paper on the photo, if I know just the positions of these points? And are 3 points enough? It doesn't look like a linear transform: the paper turns into trapeze on the photo. But it should be able to be written mathematically. I was unsuccessfully looking for some function $\mathbb{R}^2\rightarrow\mathbb{R}^2$, which turns the coordinates of the points on the default paper to their coordinates on photo, so I could just apply this function to the paper corners, but I have no idea, what kind of transform it should be. Any help is appreciated :)
I believe a projective transformation is exactly right. But with a projective transformation you can take any four points in general position (no three collinear) to any other four. So you'd better add a fourth point to reconstruct the transformation. In terms of the linear algebra, you want to construct a $3\times 3$ matrix $A$, up to scalar multiples. You think of a point in $\mathbb R^2$ as a vector $X=\begin{bmatrix}1\\x\\y\end{bmatrix}\in\mathbb R^3$ and its image will be the vector $AX$ rescaled so that its first coordinate is likewise $1$. To see that you can determine $A$ from knowing where four points map, think of it this way. A linear map from $\mathbb R^3$ to $\mathbb R^3$ is determined by where it sends a basis. Given our three points in general position, they correspond to vectors $P$, $Q$, $R\in\mathbb R^3$ up to scaling. But now, given our fourth point in general position, we can rescale those vectors so that the fourth point is given by $P+Q+R$. Doing the corresponding construction with the four image points will now allow us to specify precisely where $A$ sends our basis vectors, up to one common scalar multiplication.
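For what it's worth, here is a small numerical sketch of recovering such a $3\times 3$ matrix from four point correspondences, using the common direct-linear-transform formulation. It uses the $[x,y,1]$ homogeneous convention rather than the $[1,x,y]$ one above, and the sample coordinates are made up:

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 projective matrix A (up to scale) sending each src
    point (x, y) to the matching dst point: every correspondence gives two
    linear equations in the 9 entries of A, and the null space of the
    stacked system is read off from an SVD."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def apply(A, pt):
    x, y, w = A @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# Paper corners (in cm, say) and where they appear in the photo (pixels);
# these coordinates are invented for the example.
paper = [(0, 0), (21, 0), (21, 29.7), (0, 29.7)]
photo = [(102, 87), (415, 121), (380, 540), (90, 498)]
A = homography(paper, photo)
print(apply(A, (10.5, 14.85)))   # image of the paper's centre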
If we know the eigenvalues of a matrix $A$, and the minimal polynomial $m_A(t)$, how do we find the Jordan form of $A$? We have just learned the Jordan form of a matrix, and I have to admit that I did not understand the algorithm. Given $A = \begin{pmatrix} 1 & 1 & 1 & -1 \\ 0 & 2 & 1 & -1 \\ 1 & -1 & 2 & -1 \\ 1 & -1 & 0 & 1 \end{pmatrix} $, find the Jordan form $J(A)$ of the matrix. So what I did so far: (I) Calculate the characteristic polynomial: $P_A(\lambda) = (\lambda - 1)^2(\lambda -2)^2$. (II) Calculate the minimal polynomial: $m_A(\lambda) = P_A(\lambda) =(\lambda - 1)^2(\lambda -2)^2 $ But I am stuck now: how exactly do we calculate the Jordan form of $A$? And an extra question that has been confusing me: in this case, does $A$ have $4$ eigenvalues or $2$ eigenvalues?
Since the minimal polynomial has degree $4$, which is the same as the order of the matrix, you know that the Smith normal form of $\lambda I - A$ is $\begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & m_A(\lambda )\end{pmatrix}$. Therefore the elementary divisors (I'm not sure this is the correct term in English) are $(\lambda -1)^2$ and $(\lambda -2)^2$. Theory tells you that one Jordan block is $\color{grey}{(\lambda -1)^2\to} \begin{pmatrix}1 & 1\\ 0 & 1\end{pmatrix}$ and the other is $\color{grey}{(\lambda -2)^2\to} \begin{pmatrix} 2 &1\\ 0 & 2\end{pmatrix}$. Therefore a possible JNF for $A$ is $\begin{pmatrix} 1 & 1 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 2 &1\\ 0 & 0 & 0 & 2\end{pmatrix}$. Regarding the extra question, it seems to be asking about the number of distinct eigenvalues: in this case $A$ has two eigenvalues, $1$ and $2$, each with algebraic multiplicity two, and not four.
If $\lim\limits_{x \to \pm\infty}f(x)=0$, does it imply that $\lim\limits_{x \to \pm\infty}f'(x)=0$? Suppose $f:\mathbb{R} \rightarrow \mathbb{R}$ is everywhere differentiable and * *$\lim_{x \to \infty}f(x)=\lim_{x \to -\infty}f(x)=0$, *there exists $c \in \mathbb{R}$ such that $f(c) \gt 0$. Can we say anything about $\lim_{x \to \infty}f'(x)$ and $\lim_{x \to -\infty}f'(x)$? I am tempted to say that $\lim_{x \to \infty}f'(x)$ = $\lim_{x \to -\infty}f'(x)=0$. I started with the following, but I'm not sure this is the correct approach, $$\lim_{x \to \infty}f'(x)= \lim_{x \to \infty}\lim_{h \to 0}\frac{f(x+h)-f(x)}{h}.$$
To correct an incorrect attempt, let $f(x) = e^{-x^2} \cos(e^{x^4})$, so $\begin{align}f'(x) &= e^{-x^2} 4 x^3 e^{x^4}(-\sin(e^{x^4})) -2x e^{-x^2} \cos(e^{x^4})\\ &= -4 x^3 e^{x^4-x^2} \sin(e^{x^4}) -2x e^{-x^2} \cos(e^{x^4})\\ \end{align} $ The $e^{x^4-x^2}$ term makes $f'(x)$ oscillate violently and unboundedly as $x \to \pm \infty$.
Infinite Series Problem Using Residues Show that $$\sum_{n=0}^{\infty}\frac{1}{n^2+a^2}=\frac{\pi}{2a}\coth\pi a+\frac{1}{2a^2}, a>0$$ I know I must use summation theorem and I calculated the residue which is: $$Res\left(\frac{1}{z^2+a^2}, \pm ai\right)=-\frac{\pi}{2a}\coth\pi a$$ Now my question is: how do I get the last term $+\frac{1}{2a^2}$ after using the summation theorem?
The method of residues applies to sums of the form $$\sum_{n=-\infty}^{\infty} f(n) = -\sum_k \text{res}_{z=z_k} \pi \cot{\pi z}\, f(z)$$ where $z_k$ are poles of $f$ that are not integers. So when $f$ is even in $n$, you may write the two-sided sum as $$2 \sum_{n=1}^{\infty} f(n) + f(0)$$ For this case, $f(z)=1/(z^2+a^2)$, the poles are $z_{\pm}=\pm i a$, and using the fact that $\sin{i a} = i \sinh{a}$, we get $$\sum_{n=-\infty}^{\infty} \frac{1}{n^2+a^2} = \frac{\pi}{a} \text{coth}{\pi a}$$ The rest is just a little more algebra: since $f(0)=1/a^2$, solve $2\sum_{n\ge 1} f(n) + \frac{1}{a^2} = \frac{\pi}{a}\coth\pi a$ for the one-sided sum and add back $f(0)$ to obtain $$\sum_{n=0}^{\infty}\frac{1}{n^2+a^2} = \frac{\pi}{2a}\coth\pi a + \frac{1}{2a^2}.$$
Property of the trace of matrices Let $A(x,t),B(x,t)$ be matrix-valued functions that are independent of $\xi=x-t$ and satisfy $$A_t-B_x+AB-BA=0$$ where $X_q\equiv \frac{\partial X}{\partial q}$. Why does it then follow that $$\frac{d }{d \eta}\textrm{Trace}[(A-B)^n]=0$$ where $n\in \mathbb N$ and $\eta=x+t$? Is there a neat way to see that this is true?
(This may not be a neat way to prove the assertion, but it's a proof anyway.) Let $\eta=x+t$ and $\nu=x-t$. Then $x=\frac{\eta+\nu}{2}$ and $t=\frac{\eta-\nu}{2}$ are functions of $\eta$ and $\nu$, $A=A\left(\frac{\eta+\nu}{2},\frac{\eta-\nu}{2}\right)$ and similarly for $B$. As both $A$ and $B$ are independent of $\nu$, by the total derivative formula, we get \begin{align*} 0 = \frac{dA}{d\nu} &= \frac12\left(\frac{\partial A}{\partial x} - \frac{\partial A}{\partial t}\right),\\ 0 = \frac{dB}{d\nu} &= \frac12\left(\frac{\partial B}{\partial x} - \frac{\partial B}{\partial t}\right) \end{align*} and hence \begin{align*} \frac{dA}{d\eta} &= \frac12\left(\frac{\partial A}{\partial x} + \frac{\partial A}{\partial t}\right) = \frac{\partial A}{\partial t},\\ \frac{dB}{d\eta} &= \frac12\left(\frac{\partial B}{\partial x} + \frac{\partial B}{\partial t}\right) = \frac{\partial B}{\partial x}. \end{align*} Therefore $$ \frac{d(A-B)}{d\eta} = \frac{\partial A}{\partial t} - \frac{\partial B}{\partial x} = BA - AB $$ and \begin{align*} \frac{d}{d\eta}\operatorname{trace}[(A-B)^n] &= n \operatorname{trace}\left[(A-B)^{n-1}\frac{d(A-B)}{d\eta}\right]\\ &= n \operatorname{trace}[(A-B)^{n-1} (BA-AB)].\tag{1} \end{align*} Let $\mathcal{P}_m$ denote the set of products of $m$ matrices from $\{A,B\}$ (e.g. $\mathcal{P}_2=\{AA,AB,BA,BB\}$). Then the function $f:\mathcal{P}_m\to \mathcal{P}_m$ defined by $$\begin{cases} f(B^m) = B^m,\\ f(pAB^k) = B^kAp &\text{ for all } 0\le k<m \text{ and } p\in\mathcal{P}_{m-k-1} \end{cases}$$ is a bijection. Since $\operatorname{trace}(AqB)=\operatorname{trace}(Bf(q)A)$ for all $q\in\mathcal{P}_{n-1}$ and the degree of $B$ is preserved by $f$, it follows that $\operatorname{trace}[A(A-B)^{n-1}B]=\operatorname{trace}[B(A-B)^{n-1}A]$. Consequently, $$\operatorname{trace}[(A-B)^{n-1} (BA-AB)]=0$$ for all square matrices $A,B$ and $(1)$ evaluates to zero.
Let $f :\mathbb{R}→ \mathbb{R}$ be a function such that $f^2$ and $f^3$ are differentiable. Is $f$ differentiable? Let $f :\mathbb{R}→ \mathbb{R}$ be a function such that $f^2$ and $f^3$ are differentiable. Is $f$ differentiable? Similarly, let $f :\mathbb{C}→ \mathbb{C}$ be a function such that $f^2$ and $f^3$ are analytic. Is $f$ analytic?
If you do want $f^{p}$ to represent the $p^{\text{th}}$ iterate, then you can let $f$ denote the characteristic function of the irrational numbers. Then $f^2$ and $f^3$ are both identically zero, yet $f$ is nowhere continuous, let alone differentiable.
How to evaluate $\sqrt[3]{a + ib} + \sqrt[3]{a - ib}$? The answer to a question I asked earlier today hinged on the fact that $$\sqrt[3]{35 + 18i\sqrt{3}} + \sqrt[3]{35 - 18i\sqrt{3}} = 7$$ How does one evaluate such expressions? And, is there a way to evaluate the general expression $$\sqrt[3]{a + ib} + \sqrt[3]{a - ib}$$
As I said in a previous answer, finding out that such a simplification occurs is exactly as hard as finding out that $35+18\sqrt{-3}$ has a cube root in $\Bbb Q(\sqrt{-3})$ (well actually, $\Bbb Z[(1+\sqrt{-3})/2]$ because $35+18\sqrt{-3}$ is an algebraic integer). Suppose $p,q,d$ are integers. Then $p+q\sqrt d$ is an algebraic integer, and so is its cube root, and we can look for cube roots of the form $(x+y\sqrt d)$ with $2x,2y \in \Bbb Z$ So here is what you can do : * *compute analytically the $9$ possible values of $(p+q\sqrt{d})^{1/3}+(p-q\sqrt{d})^{1/3}$, using the polar form to take cube roots, so this involves the use of transcendental functions like $\arctan, \log, \exp$ and $\cos$. Does one of them look like an integer $x$ ? if so, let $y = 3qx/(x^3+p)$ and check algebraically if $(x+y\sqrt d)^3 = 8(p+q\sqrt d)$. *try to find a semi-integer root to the degree $9$ polynomial $64x^9-48px^6+(27dq^2-15p^2)x^3-p^3 = 0$. It will necessarily be of the form $\pm z$ or $\pm z/2$ where $z$ is a divisor of $p$, so you will need to decompose $p$ into its prime factors. *see if the norm of $p+q\sqrt d$ is a cube (in $\Bbb Z$). If it's not, you can stop. If you find that $p^2-dq^2 = r^3$ for some integer $r$, then try to find a semi-integer root to the degree $3$ polynomial $4x^3-3rx-p = 0$. Again you only need to check divisors of $p$ and their halves. The check that the norm was a cube allows you to make a simpler computation. *study the factorisation of $p+q\sqrt d$ in the ring of integers of $\Bbb Q(\sqrt d)$. This involves again checking that $p^2-dq^2$ is a cube $r^3$ (in $\Bbb Z$), then find the factorisation of the principal ideal $(p+q\sqrt d)$ in prime ideals. To do this you have to find square roots of $d$ modulo some prime factors of $r$ and find the relevant prime ideals. If $(p+q\sqrt d)$ is the cube of an ideal, you will need to check if that ideal is principal, say $(z)$ (so you may want to compute the ideal class group of $\Bbb Q(\sqrt d))$, and then finally compute the group of units to check if $(p+q\sqrt d)/z^3$ is the cube of a unit or not (especially when $d>0$, where that group is infinite and you need to do some continued fraction computation. otherwise it's finite and easy in comparison). I would say that methods $2$ and $4$ are strictly more complicated than method $3$, that checking if a polynomial has some integer roots is not too much more evil than doing a prime factorisation to check if some integer is a cube or not. And if you can bear with it, go for method $1$, which is the best, and you don't need a high precision. This is unavoidable. If you use Cardan's formula for the polynomial of the method $3$ you will end up precisely with the expression you started with, and that polynomial is the one you started with before considering simplifying that expression. Finally, I think what you want to hear is There is no formula that allows you to solve a cubic real polynomial by only taking roots of real numbers.
Compute value of $\pi$ up to 8 digits I am quite lost on how to approximate the value of $\pi$ up to 8 digits with a confidence of 99% using Monte Carlo. I think this requires a large number of trials but how can I know how many trials? I know that a 99% confidence interval is 3 standard deviations away from the mean in a normal distribution. From the central limit theorem the standard deviation of the sample mean (or standard error) is proportional to the standard deviation of the population $\sigma_{\bar X} = \frac{\sigma}{\sqrt{n}}$ So I have something that relates the size of the sample (i.e. number of trials) with the standard deviation, but then I don't know how to proceed from here. How does the "8 digit precision" come into play? UPDATE Ok I think I am close to understanding it. From the CLT we have $\displaystyle \sigma_{M} = \frac{\sigma}{\sqrt{N}}$ so in this case $\sigma = \sqrt{p(1-p)}$ therefore $\displaystyle \sigma_{M} = \frac{\sqrt{p(1-p)}}{\sqrt{N}}$ Then from the Bernoulli distribution, $\displaystyle \mu = p = \frac{\pi}{4}$ therefore $$\sigma_{M}=\frac{\sqrt{\pi(4-\pi)}}{4\sqrt{N}}$$ but what would be the value of $\sigma_{M}$? And I have $\pi$ in the formula, but that is the thing I am trying to approximate, so how does this work? And I'm still missing the role of the 8 digit precision.
The usual trick is to use the approximation $355/113$, repeating multiples of $113$ until the denominator is $355$. This value $\pi\approx 355/113$ is the usual implied value when eight digits are given. Otherwise, the Monte Carlo approach converges only like $1/\sqrt{N}$.
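Here is a hedged sketch of the estimator discussed in the question: sample points in the unit square, count the fraction falling inside the quarter disc, and note how slowly the $1/\sqrt N$ error shrinks (the sample size and interval widths below follow the question's normal approximation).

```python
import random, math

def estimate_pi(n, rng=random.random):
    """Monte Carlo estimate of pi: fraction of points in the unit square
    that land inside the quarter disc, times 4."""
    hits = sum(1 for _ in range(n) if rng()**2 + rng()**2 <= 1.0)
    return 4.0 * hits / n

n = 10**6
est = estimate_pi(n)
p = math.pi / 4
se = 4 * math.sqrt(p * (1 - p) / n)       # standard error of the estimator 4*p_hat
print(est, "+/-", 3 * se)                  # ~3-sigma interval: only a few digits

# Sample size needed for 3 standard errors to fit inside half a unit of the
# 8th decimal place -- on the order of 10**18, which is why nobody does this.
target = 0.5e-8
print((3 * 4 * math.sqrt(p * (1 - p)) / target) ** 2)
```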
How to find inverse of the function $f(x)=\sin(x)\ln(x)$ My friend asked me to solve it, but I can't. If $f(x)=\sin(x)\ln(x)$, what is $f^{-1}(x)$? I have no idea how to find the solution. I try to find $$\frac{dx}{dy}=\frac{1}{\frac{\sin(x)}{x}+\ln(x)\cos(x)}$$ and try to solve it for $x$ by some replacing and other things, but I failed. Can anyone help? Thanks to all.
The function fails the horizontal line test, and very badly in fact. One-to-one means that for any $x$ and $y$ in the domain of the function, $f(x) = f(y) \Rightarrow x = y$; that is, each value in the range comes from a unique point in the domain. Many functions can be made one-to-one $(1-1)$ by restricting the interval over which values are taken, for example the inverse trig functions and the square root function (any even root). We typically take only the positive square root because otherwise the "function" would have two outputs and each $x$ wouldn't have a unique $y$ value (that is the vertical line test; correspondingly, functions that fail the horizontal line test don't typically have an inverse). If you graph this function it looks very much like a growing sinusoidal shape, and this cannot be restricted uniquely in a manner that makes an inverse definable.
The negative square root of $-1$ as the value of $i$ I have a small point to be clarified.We all know $ i^2 = -1 $ when we define complex numbers, and we usually take the positive square root of $-1$ as the value of "$i$" , i.e, $i = (-1)^{1/2} $. I guess it's just a convention that has been accepted in maths and the value $i = -[(-1)^{1/2}] $ is neglected as I have never seen this value of "$i$" being used. What I wanted to know is, if we will use $i = -[(-1)^{1/2}] $ instead of $ (-1)^{1/2} $, would we be doing anything wrong? My guess is there is nothing wrong in it as far as the fundamentals of maths goes. Just wanted to clarify it with you guys. Thanks.
The square roots with the properties we know are defined only for positive real numbers. We say that $i$ is the square root of $-1$, but this is a convention. You cannot perform operations with the usual properties of radicals if you are dealing with complex numbers rather than positive reals. It is a fact that $-1$ has two complex square roots; we just define one of them to be $i$. You have to regard the expression $i=\sqrt {-1}$ just as a symbol, and not perform operations with it. Counterexample: $$\dfrac{-1}{1}=\dfrac{1}{-1}\Rightarrow$$ $$\sqrt{\dfrac{-1}{1}}=\sqrt{\dfrac{1}{-1}}\Rightarrow $$ $$\dfrac{\sqrt{-1}}{\sqrt{1}}=\dfrac{\sqrt{1}}{\sqrt{-1}}\Rightarrow$$ $$\dfrac{i}{1}=\dfrac{1}{i}\Rightarrow$$ $$i^2=1\text{, a contradiction}$$
Prove that ideal generated by.... Is a monomial ideal Similar questions have come up on the last few past exam papers and I don't know how to solve it. Any help would be greatly appreciated.. Prove that the ideal of $\mathbb{Q}[X,Y]$ generated by $X^2(1+Y^3), Y^3(1-X^2), X^4$ and $ Y^6$ is a monomial ideal.
Hint. $(X^2(1+Y^3), Y^3(1-X^2), X^4, Y^6)=(X^2,Y^3)$.
Summation of a finite series Let $$f(n) = \frac{1}{1} + \frac{1}{2} + \frac{1}{3}+ \frac{1}{4}+\cdots+ \frac{1}{2^n-1} \quad \forall\, n \in \mathbb{N}$$ If it cannot be summed, are there any approximations to the series?
$f(n)=H_{2^n-1}$, the $(2^n-1)$-st harmonic number. There is no closed form, but link gives the excellent approximation $$H_n\approx\ln n+\gamma+\frac1{2n}+\sum_{k\ge 1}\frac{B_{2k}}{2kn^{2k}}=\ln n+\gamma+\frac1{2n}-\frac1{12n^2}+\frac1{120n^4}-\ldots\;,$$ where $\gamma\approx 0.5772156649$ is the Euler–Mascheroni constant, and the $B_{2k}$ are Bernoulli numbers.
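A quick numerical check of this approximation (just a sketch; the exact value is summed with rationals only to avoid rounding error in the comparison):

```python
from math import log
from fractions import Fraction

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def H_exact(m):
    """Exact harmonic number H_m, computed with rational arithmetic."""
    return float(sum(Fraction(1, k) for k in range(1, m + 1)))

def H_approx(m):
    """Asymptotic expansion ln m + gamma + 1/(2m) - 1/(12 m^2) + 1/(120 m^4)."""
    return log(m) + GAMMA + 1/(2*m) - 1/(12*m**2) + 1/(120*m**4)

for n in (3, 5, 10):
    m = 2**n - 1
    print(n, H_exact(m), H_approx(m))
```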
How to find the norm of the operator $(Ax)_n = \frac{1}{n} \sum_{k=1}^n \frac{x_k}{\sqrt{k}}$? How to find the norm of the following operator $$ A:\ell_p\to\ell_p:(x_n)\mapsto\left(n^{-1}\sum\limits_{k=1}^n k^{-1/2} x_k\right) $$ Any help is welcome.
Consider the diagonal operator $$ S:\ell_p\to\ell_p:(x_n)\mapsto(n^{-1/2}x_n) $$ It is bounded and its norm is $\Vert S\Vert=\sup\{|n^{-1/2}|:n\in\mathbb{N}\} =1$ Consider the Cesàro operator $$ T:\ell_p\to\ell_p:(x_n)\mapsto\left(n^{-1}\sum\limits_{k=1}^nx_k\right) $$ As was proved earlier, its norm satisfies $\Vert T\Vert\leq p(p-1)^{-1}$. Since $A=T\circ S$, then $$ \Vert A\Vert\leq\Vert T\Vert\Vert S\Vert=p(p-1)^{-1} $$ But unfortunately I can't say this is the best constant.
Trigonometric substitution integral Trying to work around this with trig substitution, but end up with messy powers on sines and cosines... It should be simple use of trigonometric properties, but I seem to be tripping somewhere. $$\int x^5\sqrt{x^2+4}dx $$ Thanks.
It comes out pretty well if you put $x=2\tan\theta$. Doing it carefully, remove a $\tan\theta\sec\theta$ and everything else can be expressed as a polynomial in $\sec\theta$, hence it's easily done by substitution. But a more "efficient" substitution is $u=\sqrt{x^2+4}$, or $u^2=x^2+4$. Then $2u\,du = 2x\,dx$ and the integral becomes $$\int (u^2-4)^2 u^2\,du\,.$$
How to show in a clean way that $z^4 + (x^2 + y^2 - 1)(2x^2 + 3y^2-1) = 0$ is a torus? How to show in a clean way that the zero-locus of $$z^4 + (x^2 + y^2 - 1)(2x^2 + 3y^2-1) = 0$$ is a torus?
Not an answer, but here's a picture. Cheers.
Help with modular arithmetic If$r_1,r_2,r_3,r_4,\ldots,r_{ϕ(a)}$ are the distinct positive integers less than $a$ and coprime to $a$, is there some way to easily calculate, $$\prod_{k=1}^{\phi(a)}ord_{a}(r_k)$$
The claim is true, with the stronger condition that there is some $i$ with $e_i=1$ and all other exponents are zero. The set of $r_i$'s is called a reduced residue system. The second (now deleted) claim is false. Let $a=7$. Then $2^13^1=6^1$, two different representations.
Solving Recurrence Relation I have this relation $u_{n+1}=\frac{1}{3}u_{n} + 4$ and I need to express the general term $u_{n}$ in terms of $n$ and $u_{0}$. With partial sums I found this relation $u_{n}=\frac{1}{3^n}u_{0} + 4\sum_{k=1}^n\frac{1}{3^{k-1}}$ But I also need to prove by mathematical induction that my $u_{n}$ is ok, but I have no idea how to do this. Can anyone please help me? Thanks in advance
$$u_{ n }=\frac { 1 }{ { 3 }^{ n } } u_{ 0 }+4\sum _{ k=0 }^{ n-1 }{ \frac { 1 }{ { 3 }^{ k } } } =\frac { 1 }{ { 3 }^{ n } } u_{ 0 }+4\left( \frac { { \left( 1/3 \right) }^{ n }-1\quad }{ 1/3-1 } \right) =\frac { 1 }{ { 3 }^{ n } } u_{ 0 }-6\left( \frac { 1 }{ { 3 }^{ n } } -1 \right) $$ $$u_{ n }=\frac { 1 }{ { 3 }^{ n } } \left(u_{ 0 }-6\right)+6$$ See geometric series
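This is not the induction proof itself, but a short numerical sanity check of the closed form against the recurrence (the starting value $u_0 = 5$ is an arbitrary choice):

```python
def u_closed(n, u0):
    """Closed form u_n = (u0 - 6)/3**n + 6."""
    return (u0 - 6) / 3**n + 6

def u_iter(n, u0):
    """Iterate the recurrence u_{k+1} = u_k/3 + 4 starting from u0."""
    u = u0
    for _ in range(n):
        u = u / 3 + 4
    return u

for n in range(6):
    print(n, u_iter(n, 5.0), u_closed(n, 5.0))
```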
Searching for unbounded, non-negative function $f(x)$ with roots $x_{n}\rightarrow \infty$ as $n \rightarrow \infty$ If a function $y = f(x)$ is unbounded and non-negative for all real $x$, then is it possible that it can have roots $x_n$ such that $x_{n}\rightarrow \infty$ as $n \rightarrow \infty$.
The function $ y = |x \sin(x)|$ has infinitely many roots $x_n$ such that $x_{n}\rightarrow \infty$ as $n \rightarrow \infty$.
How to prove $\int_{0}^{\infty}\sin{x}\sin{\sqrt{x}}\,dx=\frac{\sqrt{\pi}}{2}\sin{\left(\frac{3\pi-1}{4}\right)}$ Prove that $$\int_{0}^{\infty}\sin{x}\sin{\sqrt{x}}\,dx=\frac{\sqrt{\pi}}{2}\sin{\left(\frac{3\pi-1}{4}\right)}$$ I have a question: using this, I find that this integral does not converge; am I wrong? Thank you everyone
First make the substitution $x=u^2$ to get: $\displaystyle \int _{0}^{\infty }\!\sin \left( x \right) \sin \left( \sqrt {x} \right) {dx}=\int _{0}^{\infty }\!2\,\sin \left( {u}^{2} \right) \sin \left( u \right) u{du}$, $\displaystyle=-\int _{0}^{\infty }\!u\cos \left( u \left( u+1 \right) \right) {du}+ \int _{0}^{\infty }\!u\cos \left( u \left( u-1 \right) \right) {du}$, and changing variable again in the second integral on the R.H.S such that $u\rightarrow u+1$ this becomes: $=\displaystyle\int _{0}^{\infty }\!-u\cos \left( u \left( u+1 \right) \right) {du}+ \int _{-1}^{\infty }\!\left(u+1\right)\cos \left( u \left( u+1 \right) \right) {du} $, $\displaystyle=\int _{0}^{\infty }\!\cos \left( u \left( u+1 \right) \right) {du}+ \int _{-1}^{0}\! \left( u+1 \right) \cos \left( u \left( u+1 \right) \right) {du} $. Now we write $u=v-1/2$ and this becomes: $\displaystyle\int _{1/2}^{\infty }\!\cos \left( {v}^{2}-1/4 \right) {dv}+\int _{-1/ 2}^{1/2}\! \left( v+1/2 \right) \cos \left( {v}^{2}-1/4 \right) {dv}=$ $\displaystyle \left\{\int _{0}^{\infty }\!\cos \left( {v}^{2}-1/4 \right) {dv}\right\}$ $\displaystyle +\left\{\int _{-1/2} ^{1/2}\!v\cos \left( {v}^{2}-1/4 \right) {dv}+\int _{-1/2}^{0}\!1/2\, \cos \left( {v}^{2}-1/4 \right) {dv}-1/2\,\int _{0}^{1/2}\!\cos \left( {v}^{2}-1/4 \right) {dv}\right\},$ but the second curly bracket is zero by symmetry and so: $\displaystyle \int _{0}^{\infty }\!\sin \left( x \right) \sin \left( \sqrt {x} \right) {dx}=\displaystyle \int _{0}^{\infty }\!\cos \left( {v}^{2}-1/4 \right) {dv}$, $\displaystyle =\int _{0}^{ \infty }\!\cos \left( {v}^{2} \right) {dv}\cos \left( 1/4 \right) + \int _{0}^{\infty }\!\sin \left( {v}^{2} \right) {dv}\sin \left( 1/4 \right) $. We now quote the limit of Fresnel integrals: $\displaystyle\int _{0}^{\infty }\!\cos \left( {v}^{2} \right) {dv}=\int _{0}^{ \infty }\!\sin \left( {v}^{2} \right) {dv}=\dfrac{\sqrt{2\pi}}{4}$, to obtain: $\displaystyle \int _{0}^{\infty }\!\sin \left( x \right) \sin \left( \sqrt {x} \right) {dx}=\dfrac{\sqrt{2\pi}}{4}\left(\cos\left(\dfrac{1}{4}\right)+\sin\left(\dfrac{1}{4}\right)\right)=\dfrac{\sqrt{\pi}}{2}\sin{\left(\dfrac{3\pi-1}{4}\right)}$.
Understanding the (Partial) Converse to Cauchy-Riemann We have that for a function $f$ defined on some open subset $U \subset \mathbb{C}$ then the following if true: Suppose $u=\mathrm{Re}(f), v=\mathrm{Im}(f)$ and that all partial derivatives $u_x,u_y,v_x,v_y$ exists and are continuous on $U$. Suppose further that they satisfy the Cauchy-Riemann equations. Then $f$ is holomorphic on $U$. The proof for this is readily available, though there is a subtlety that I can't understand. We essentially want to compute $\lim_{h \rightarrow 0} \dfrac{f(z+h)-f(z)}{h}$ where $h=p+qi \in \mathbb{C}$. We need a relationship like $u(x+p,y+q)-u(x,y)=pu_x(x,y)+qu_y(x,y)+o(|p|+|q|)$. Why is this relationship true?
Denote $h=\pmatrix{p\\q}.$ Since $u(x,\ y)$ is differentiable at the point $(x,\ y),$ increment of $u$ can be represented as $$u(x+p,\ y+q)-u(x,\ y)=Du\;h+o(\Vert h\Vert)=pu_x(x,\ y)+qu_y(x,\ y)+o(|p|+|q|),$$ where $Du=\left(\dfrac{\partial{u}}{\partial{x}}, \ \dfrac{\partial{u}}{\partial{y}} \right).$
Finite ultraproduct I got stuck when trying to prove: If $A_\xi$ are domains of models of a first order language and $|A_\xi|\le n$ for $n \in \omega$ for all $\xi$ in the index set $X$ and $\mathcal U$ is an ultrafilter on $X$ then $|\prod_{\xi \in X} A_\xi / \mathcal U| \le n$. My tries: If $X$ is a finite set then $\mathcal U$ is principal. Then some singleton $\{x\}\in \mathcal U$ and $|\prod_{\xi \in X} A_\xi / \mathcal U| = |A_x|$. If $\mathcal U$ is not principal then for $x \in X$ there is $S_x \in \mathcal U$ with $x \notin S_x$. Then for every $k \in \omega$ there exists an equivalence class corresponding to $S_{x_1} \cap \dots \cap S_{x_k}$ with size greater than $|A_1|\cdot \dots \cdot |A_k|$. Can anything be said about the structure of the ultrafilter if $X$ is infinite? And how to prove it?
The statement you are trying to prove is a consequence of Łoś's theorem - if every factor satisfies "there are no more than $n$ elements", then the set of factors that satisfy it is $X$, which is in $\mathcal{U}$, so by Łoś's theorem the ultraproduct will satisfy that sentence as well. Note that "there are no more than $n$ elements" is the sentence $$ (\exists x_1)\cdots(\exists x_n)(\forall y)[ y = x_1 \lor \cdots \lor y = x_n] $$ Thus one way to come up with a concrete proof of the statement you want is to examine the proof of Łoś's theorem and specialize it to the situation at hand. As a side note, if every factor is finite, but there is no bound on the sizes of the factors, then the ultraproduct will not be finite. The difference is that there is no longer a single sentence of interest that is true in all the factors, because finiteness is not definable in a first-order language. I assume that the OP figured out the hint, so let me spell out the answer for reference. Because $|A^\xi| = k$ for all $\xi \in X$, we can write $A^\xi = \{a^\xi_1, \ldots, a^\xi_k\}$ for each $\xi$. For $1 \leq i \leq k$ define $\alpha_i$ by $\alpha_i(\xi) = a^\xi_i$. Then every $\beta$ in the ultraproduct is equal to $\alpha_i$ for some $1 \leq i \leq k$. Proof: For $1\leq i \leq k$ let $B_i = \{\xi : \beta(\xi) = a^\xi_i\}$. Then $$X = B_1 \cup B_2 \cup\cdots\cup B_k.$$ Because $\mathcal{U}$ is an ultrafilter, one of the sets $B_i$ must be in $\mathcal{U}$, and if $B_i \in \mathcal{U}$ then $\beta = \alpha_i$ in the ultraproduct, QED. Thus we can explicitly name the $k$ elements of the ultraproduct: $\alpha_1, \alpha_2, \ldots, \alpha_k$.
Show that no number of the form 8k + 3 or 8k + 7 can be written in the form $a^2 +5b^2$ I'm studying for a number theory exam, and have got stuck on this question. Show that no number of the form $8k + 3$ or $8k + 7$ can be written in the form $a^2 +5b^2$ I know that there is a theorem which tells us that $p$ is expressible as a sum of $2$ squares if $p\equiv$ $1\pmod 4$. This is really all I have found to work with so far, and I'm not really sure how/if it relates. Many thanks!
$8k+3,8k+7$ can be merged into $4c+3$ where $k,c$ are integers. Now, $a^2+5b^2=4c+3\implies a^2+b^2=4c+3-4b^2=4(c-b^2)+3\equiv3\pmod 4,$ But as $(2c)^2\equiv0\pmod 4,(2d+1)^2\equiv1\pmod 4,$ $a^2+b^2\equiv0,1,2\pmod 4\not\equiv3$ Clearly, $a^2+5b^2$ in the question can be generalized to $(4m+1)x^2+(4n+1)y^2$ where $m,n$ are any integers
The graph of $x^x$ I have a question about the graph of $f(x) = x^x$. How come the graph doesn't extend into the negative domain? Because, it is not as if the graph is undefined when $x=-5$. But according to the graph, that seems to be the case. Can someone please explain this? Thanks
A more direct answer: the reason your graphing calculator doesn't graph anything for $x<0$ is that there are infinitely many undefined "holes" and infinitely many defined points in the real plane. Even when you restrict the domain to $[-2,-1]$ this will still be the case. Note that for $x^x$ with $x<0$, if you calculate the output for certain $x$-values (using a TI-85, say) you will have... $$x^x=\begin{cases} (-x)^x & x=\left\{ {2n\over 2m+1}\ |\ n, m \in \Bbb Z\right\}\frac{\text{even integer}}{\text{odd integer}}\\ -(-x)^{x} & x=\left\{ {2n+1\over 2m+1}\ |\ n, m \in \Bbb Z\right\}\frac{\text{odd integer}}{\text{odd integer}}\ \\ \text{undefined} & x=\left\{ {2n+1\over 2m}\ |\ n, m \in \Bbb Z\right\}\bigcup \left\{\mathbb{R}\setminus{\mathbb{Q}}\right\} \left(\frac{\text{odd integer}}{\text{even integer}},\text{irrational numbers}\right) \end{cases}$$ (Just remember to reduce fractions to lowest terms first (ex: $2/6\to1/3$).) This is because $x^a$ can only extend to the negative domain if $a$'s denominator, in lowest terms, is odd (ex: $x^{1/3},x^{2/3}$). Thus there are infinitely many undefined values in $[-2,-1]$ that are still (odd/even) when reduced. For example $-3/2,-1/2$ are undefined, but so are $-19/10, -17/10, -15/10,\dots,-11/10$ and $-199/100, -197/100, -195/100,\dots,-101/100$. The irrational numbers are undefined as well. There are also infinitely many defined values: infinitely many with a positive output and infinitely many with a negative output. For example $-2,-4/3$, then $-2,-24/13,-22/13,-16/13,\dots,-14/13$ and $-2,-52/27,-50/27,-48/27,-46/27,-44/27,\dots,-28/27$ give positive values, while $-5/3,-3/3$, then $-25/13,-23/13,-21/13,-19/13,\dots,-13/13$ and $-53/27,-51/27,-49/27,-47/27,-45/27,-43/27,\dots,-27/27$ give negative values. Because the function is so "disconnected", with undefined holes interleaved with defined points, the graphing calculator fails to register a graph of $x^x$ for $x<0$. Thus when you see $x^x$ drawn with the three graphs in the piecewise definition, note that I am hiding the infinitely many holes that exist for $x^x$. Now, since the outputs on the negative domain can be positive or negative, we have two "trajectories", so we must graph $\left(-x\right)^{x}$ and $-\left(-x\right)^{x}$ along with $x^x$. However, if you want the graph of $x^x$ to seem "more continuous" you can graph either $|x|^{x}$ or $\text{sgn}{\left(x\right)}|x|^{x}$.
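A small sketch of the sign rules above for rational $x<0$, using exact fractions so the reduction to lowest terms happens automatically (the function name is my own):

```python
from fractions import Fraction

def xx(x: Fraction):
    """Evaluate x**x for rational x, following the sign rules above:
    for x < 0 the value is real only when the reduced denominator is odd."""
    if x >= 0:
        return float(x) ** float(x)
    if x.denominator % 2 == 0:
        return None                      # odd/even case: undefined over the reals
    mag = float(-x) ** float(x)          # (-x)^x > 0
    return mag if x.numerator % 2 == 0 else -mag

for x in (Fraction(-4, 3), Fraction(-5, 3), Fraction(-3, 2)):
    print(x, xx(x))    # positive, negative, and undefined respectively
```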
What can I do this cos term to remove the divide by 0? I was asked to help someone with this problem, and I don't really know the answer why. But I thought I'd still try. $$\lim_{t \to 10} \frac{t^2 - 100}{t+1} \cos\left( \frac{1}{10-t} \right)+ 100$$ The problem lies with the cos term. What can I do with the cos term to remove divide by 0 ? I found the answer to be $100$ (Google), but I do not know what they did to the $\cos$ term. Is that even the answer ? Thanks!
The cos term is irrelevant. It can only wiggle between $-1$ and $1$, and is therefore killed by the $t^2-100$ term, since that approaches $0$. For a less cluttered version of the same phenomenon, consider the function $f(x)=x\sin\left(\frac{1}{x}\right)$ (for $x\ne 0$). The absolute value of this is always $\le |x|$, so (by Squeezing) $f(x)\to 0$ as $x\to 0$.
Why is the universal quantifier $\forall x \in A : P(x)$ defined as $\forall x (x \in A \implies P(x))$ using an implication? And the same goes for the existential quantifier: $\exists x \in A : P(x) \; \Leftrightarrow \; \exists x (x \in A \wedge P(x))$. Why couldn’t it be: $\exists x \in A : P(x) \; \Leftrightarrow \; \exists x (x \in A \implies P(x))$ and $\forall x \in A : P(x) \; \Leftrightarrow \; \forall x (x \in A \wedge P(x))$?
I can't answer your first question. It's just the definition of the notation. For your second question, by definition of '$\rightarrow$', we have $\exists x (x\in A \rightarrow P(x)) \leftrightarrow \exists x\neg(x\in A \wedge \neg P(x))$ I think you will agree that this is quite different from $\exists x (x\in A \wedge P(x))$
How many points can you find on $y=x^2$, for $x \geq 0$, such that each pair of points has rational distance? Open problem in Geometry/Number Theory. The real question here is: Is there an infinite family of points on $y=x^2$, for $x \geq 0$, such that the distance between each pair is rational? The question of "if not infinite, then how many?" follows if there exists no infinite family of points that satisfies the hypothesis. We have that there exists a (in fact, infinitely many) three point families that satisfy the hypothesis by the following lemma and proof. Lemma 1: There are infinitely many rational distance sets of three points on $y=x^2$. The following proof is by Nate Dean. Proof. Let $S$ be the set of points on the parabola $y = x^2$ and let $d_1$ and $d_2$ be two fixed rational values. For any point, $P_0(r)=(r, r^2) \in S$, let $C_1(r)$ be the circle of radius $d_1$ centered at $P_0(r)$ and let $C_2(r)$ be the circle of radius $d_2$ centered at $P_0(r)$. Each of these circles must intersect $S$ in at least one point. Let $P_1(r)$ be any point in $C_1(r) \cap S$ and likewise, let $P_2(r)$ be any point in $C_2(r) \cap S$. Now let $dist(r)$ equal the distance between $P_1(r)$ and $P_2(r)$. The function $dist(r)$ is a continuous function of $r$ and hence there are infinitely many values of $r$ such that $P_0(r)$, $P_1(r)$, and $P_2(r)$ are at rational distance. $ \blacksquare $ This basically shows that the collection of families of three points that have pairwise rational distance on the parabola is dense in $S$. Garikai Campbell has shown that there are infinitely many nonconcyclic rational distance sets of four points on $y = x^2$ in the following paper: http://www.ams.org/journals/mcom/2004-73-248/S0025-5718-03-01606-5/S0025-5718-03-01606-5.pdf However, to my knowledge, no one has come forward with 5 point solutions, nor has it been proven that 5 point solutions even exist. But I know that many people have not seen this problem! Does anyone have any ideas on how to approach a proof of either the infinite case or even just a 5 point solution case? Edit: The above Lemma as well as the paper by Garikai Campbell do not include the half-parabola ($x \geq 0$) restriction. However, I thought that the techniques that he employed could be analogous to techniques that we could use to make progress on the half-parabola version of the problem.
The answer to the "infinite family" question appears to be, no. Jozsef Solymosi and Frank de Zeeuw, On a question of Erdős and Ulam, Discrete Comput. Geom. 43 (2010), no. 2, 393–401, MR2579704 (2011e:52024), prove (according to the review by Liping Yuan) that no irreducible algebraic curve other than a line or a circle contains an infinite rational set.
What is the cardinality of $\Bbb{R}^L$? By $\Bbb{R}^L$, I mean the set that is interpreted as $\Bbb{R}$ in $L$, Godel's constructible universe. For concreteness, and to avoid definitional questions about $\Bbb{R}$, I'm looking at the set ${\cal P}(\omega)$ as a proxy. I would think it needs to be countable, since only definable subsets of $\omega$ are being considered, but I don't see how Cantor's theorem would fail here, since $L$ is a model of $\sf{ZFC}$. (Actually, that's a lie: this probably means that there is no injection $f:\omega\to{\cal P}(\omega)^L$ in $L$, even if there is one in $V$. I'm still a little hazy on all the details, though.) But the ordinals are in $L$, so $L$ is not itself countable, and there must exist genuinely uncountable elements of $L$. What does the powerset operation (in $L$) do to the cardinality of sets (as viewed in $V$), then?
It is impossible to give a complete answer to this question just in $\sf ZFC$. For once, it is possible that $V=L$ is true in the model you consider, so in fact $\Bbb R^L=\Bbb R$, so the cardinality is $\aleph_1$. On the other hand it is possible that the universe is $L[A]$ where $A$ is a set of $\aleph_2$ Cohen reals, in which case $\Bbb R^L$ is still of size $\aleph_1$. And it is also possible that the universe is $L[G]$ where $G$ is a function which collapses $\omega_1^L$, resulting in having $\Bbb R^L$ as a countable set. There are other axioms, such as large cardinal axioms (e.g. "$0^\#$ exists") which imply that $\Bbb R^L$ is countable, and one can arrange all sort of crazy tricks, where the result is large or small compared to $\Bbb R$. One thing is for certain, $|\Bbb R^L|=|\omega_1^L|\in\{\aleph_0,\aleph_1\}$.
In any tree, what is the maximum distance between a vertex of high degree and a vertex of low degree? In any undirected tree $T$, what is the maximum distance from any vertex $v$ with $\text{deg}(v) \geq 3$ to the closest (in a shortest path sense) vertex $y$ with $\text{deg}(y) \leq 2$? That is, $y$ can be leaf. It seems to me that this distance can be at most $\dfrac{\text{diam}(T)}{2}$, and furthermore that the maximum distance will be attained from a graph center. Is this true? There's probably simple argument for it somewhere.
In your second question (shortest path to a vertex of degree $\le 2$), the bound $\operatorname{diam}(G) / 2$ holds, simply by noticing that the ends of the longest path in a tree are leaves, and the "worst" that a graph can do is have the vertex of degree $3$ or higher right in the center. But in fact, this holds for the shortest path from any vertex to a vertex of degree $1$. Can we do better? No, in fact. Take a look at a tournament bracket, for example (just delete the leaves on one side to create a unique center). Viewed as a tree, all the vertices are of degree $3$ except for the leaves, which are all at least distance $d/2$ from the center of this modified graph.
Yitang Zhang: Prime Gaps Has anybody read Yitang Zhang's paper on prime gaps? Wired reports "$70$ million" at most, but I was wondering if the number was actually more specific. *EDIT*$^1$: Are there any experts here who can explain the proof? Is the outline in the annals the preprint or the full accepted paper?
70 million is exactly what is mentioned in the abstract. It is quite likely that this bound can be reduced; the author says so in the paper: This result is, of course, not optimal. The condition $k_0 \ge 3.5 \times 10^6$ is also crude and there are certain ways to relax it. To replace the right side of (1.5) by a value as small as possible is an open problem that will not be discussed in this paper. He seems to be holding his cards for the moment... You can download a copy of the full accepted paper on the Annals page if your institution subscribes to the Annals.
The integral of $\frac{1}{1+x^n}$ Motivated by this question: Integration of $\displaystyle \int\frac{1}{1+x^8}\,dx$ I got curious about finding a general expression for the integral $\int \frac{1}{1+x^n},\,n \geq 1$. By factoring $1+x^n$, we can get an answer for any given $n$ (in terms of logarithms, arctangents, etc), but I was wondering whether a general one-two-liner formula in terms of elementary functions is known/available (WolframAlpha trials for specific $n$ show some structure.).
I showed in THIS ANSWER, that a general solution is given by $$\bbox[5px,border:2px solid #C0A000]{\int\frac{1}{x^n+1}dx=-\frac1n\sum_{k=1}^n\left(\frac12 x_{kr}\log(x^2-2x_{kr}x+1)-x_{ki}\arctan\left(\frac{x-x_{kr}}{x_{ki}}\right)\right)+C'} $$ where $x_{kr}$ and $x_{ki}$ are the real and imaginary parts of $x_k$, respectively, and are given by $$x_{kr}=\text{Re}\left(x_k\right)=\cos \left(\frac{(2k-1)\pi}{n}\right)$$ $$x_{ki}=\text{Im}\left(x_k\right)=\sin \left(\frac{(2k-1)\pi}{n}\right)$$
Solve the equation $\sqrt{3x-2} +2-x=0$ Solve the equation: $$\sqrt{3x-2} +2-x=0$$ I squared both equations $$(\sqrt{3x-2})^2 (+2-x)^2= 0$$ I got $$3x-2 + 4 -4x + x^2$$ I then combined like terms $x^2 -1x +2$ However, that can not be right since I get a negative radicand when I use the quadratic equation. $x = 1/2 \pm \sqrt{((-1)/2)^2 -2}$ The answer is 6
$$\sqrt{3x-2} +2-x=0$$ Isolating the radical: $$\sqrt{3x-2} =-2+x$$ Squaring both sides: $$\bigg(\sqrt{3x-2}\bigg)^2 =\bigg(-2+x\bigg)^2$$ Expanding $(-2+x)^2$: $$3x-2=-2(-2+x)+x(-2+x)$$ $$3x-2=4-2x-2x+x^2$$ Combining like terms: $$3x-2=4-4x+x^2$$ Moving everything to one side: $$0=4+2-3x-4x+x^2$$ Factoring the quadratic and finding the candidate solutions: $$0=x^2-7x+6$$ $$0=(x-6)(x-1)$$ $$0=x-6\implies\boxed{6=x}$$ $$0=x-1\implies\boxed{1=x}$$ Checking 6 as a solution: $$\sqrt{3(6)-2} +2-(6)=0$$ $$\sqrt{16} +2-6=0$$ $$4+2-6=0$$ $$6-6=0$$ Checking 1 as a solution: $$\sqrt{3(1)-2} +2-(1)=0$$ $$\sqrt{1} +2-1=0$$ $$2\neq0$$ The value $x=1$ does not satisfy the original equation and is therefore extraneous. The value $x=6$ does satisfy it and is therefore the only solution to this equation.
Riemann integral and Lebesgue integral $f:\mathbb{R}\rightarrow [0,\infty)$ is a Lebesgue-integrable function. Show that $$ \int_{\mathbb{R}} f \ d m=\int_0^\infty m(\{f\geq t\})\ dt $$ where $m$ is Lebesgue measure. I know the question may be a little dumb.
We have, using Fubini and denoting by$\def\o{\mathbb 1}\def\R{\mathbb R}$ $\o_A$ the indicator function of a set $A \subseteq \R$ \begin{align*} \int_\R f(x)\, dx &= \int_\R \int_{[0,\infty)}\o_{[0,f(x)]}(t)\, dt\,dx\\ &= \int_{[0,\infty)} \int_\R \o_{[0,f(x)]}(t)\, dx\, dt\\ &= \int_{[0,\infty)} \int_\R \o_{\{f \ge t\}}(x)\, dx\, dt\\ &= \int_{[0,\infty)} m(\{f\ge t\})\, dt. \end{align*} For the third line note that \begin{align*} \o_{[0,f(x)]}(t) = 1 &\iff 0 \le t \le f(x)\\ &\iff f(x) \ge t\\ &\iff x \in \{f\ge t\}\\ &\iff \o_{\{f\ge t\}}(x) = 1 \end{align*} and hence $\o_{[0,f(x)]}(t) = \o_{\{f \ge t\}}(x)$ for all $(x,t) \in \R \times [0,\infty)$.
Improper integrals Question I was asked to define the next intergrals and I want to know if I did it right: $$1) \int^\infty_a f(x)dx = \lim_{b \to \infty}\int^b_af(x)dx$$ $$2) \int^b_{-\infty} f(x)dx = \lim_{a \to -\infty}\int^b_af(x)dx$$ $$3) \int^\infty_{-\infty} f(x)dx = \lim_{b \to \infty}\int^b_0f(x)dx + \lim_{a \to -\infty}\int^0_af(x)dx$$ Thanks.
The first two definitions you gave are the standard definitions, for $f$ say continuous everywhere. The third is more problematical, It is quite possible that the definition in your course is $$\lim_{a\to-\infty, b\to \infty} \int_a^b f(x)\,dx.$$ So $a\to-\infty$, $b\to\infty$ independently. What you wrote down would then be a fact rather than a definition.
Solve equations $\sqrt{t +9} - \sqrt{t} = 1$ Solve equation: $\sqrt{t +9} - \sqrt{t} = 1$ I moved - √t to the left side of the equation $\sqrt{t +9} = 1 -\sqrt{t}$ I squared both sides $(\sqrt{t+9})^2 = (1)^2 (\sqrt{t})^2$ Then I got $t + 9 = 1+ t$ Can't figure it out after that point. The answer is $16$
An often overlooked fact is that $$ \sqrt{t^2}=\left|t\right|$$ Call me paranoid but here's how I would solve this $$\sqrt{t +9} - \sqrt{t} = 1$$ $$\sqrt{t +9} = \sqrt{t}+1$$ $$\left|t +9\right| = \left(\sqrt{t}+1\right)\left(\sqrt{t}+1\right)$$ $$\left|t +9\right| = \left|t\right|+2\sqrt{t}+1$$ Since the original equation has $\sqrt{t}$, then we know that $t\geq 0$ and we can safely remove the absolute value bars. So now we have $$t +9= t+2\sqrt{t}+1$$ $$9= 2\sqrt{t}+1$$ $$8= 2\sqrt{t}$$ $$4= \sqrt{t}$$ $$\left|t\right|=4^2$$ $$t=16$$
What is the physical meaning of fractional calculus? What is the physical meaning of the fractional integral and fractional derivative? And many researchers deal with the fractional boundary value problems, and what is the physical background? What is the applications of the fractional boundary value problem?
This may not be what your looking for but... In my line of work I use fractional Poisson process a lot, now these arise from sets of Fractional Differential Equations and the physical meaning behind this is that the waiting times between events is no longer exponentially distributed but instead follows a Mittag-Leffler distribution, this results in waiting times between events that may be much longer than what would normally occur if one was to assume exponential waiting times.
Improper Integral $\int_{1/e}^1 \frac{dx}{x\sqrt{\ln{(x)}}} $ I need some advice on how to evaluate it. $$\int\limits_\frac{1}{e}^1 \frac{dx}{x\sqrt{\ln{(x)}}} $$ Thanks!
Here's a hint: $$ \int_{1/e}^1 \frac{1}{\sqrt{\ln x}} {\huge(}\frac{dx}{x}{\huge)}. $$ What that is hinting at is what you need to learn in order to understand substitutions. It's all about the chain rule. The part in the gigantic parentheses becomes $du$.
Convergence of $\sum \frac{a_n}{(a_1+\ldots+a_n)^2}$ Assume that $0 < a_n \leq 1$ and that $\sum a_n=\infty$. Is it true that $$ \sum_{n \geq 1} \frac{a_n}{(a_1+\ldots+a_n)^2} < \infty $$ ? I think it is but I can't prove it. Of course if $a_n \geq \varepsilon$ for some $\varepsilon > 0$ this is obvious. Any idea? Thanks.
Let $A_n=\sum\limits_{k=1}^na_n$. Then $$ \begin{align} \sum_{n=1}^\infty\frac{A_n-A_{n-1}}{A_n^2} &\le\frac1{a_1}+\sum_{n=2}^\infty\frac{A_n-A_{n-1}}{A_nA_{n-1}}\\ &=\frac1{a_1}+\sum_{n=2}^\infty\left(\frac1{A_{n-1}}-\frac1{A_n}\right)\\ &\le\frac2{a_1} \end{align} $$
Upper bound on the difference between two elements of an eigenvector Let $W$ be the non-negative, symmetric adjacency/affinity matrix for some connected graph. If $W_{ij}$ is large, then vertex $i$ and vertex $j$ have a heavily weighted edge between them. If $W_{ij} = 0$, then no edge connects vertex $i$ to vertex $j$. Now $L = \mathrm{diag}(W\mathbf{1})-W$ is the (unnormalized) graph Laplacian. Let $v$ be the Fiedler vector of $L$, that is, a unit eigenvector corresponding to the second smallest eigenvalue of $L$. As $W_{ij}$ increases, all else equal, $|v_i - v_j|$ tends to decrease---at least this is the idea behind spectral clustering. What is an upper bound on $|v_i - v_j|$, given quantities that don't require computing $v$, like $W_{ij}$ and $\|W\|$? Any suggestions or thoughts would be greatly appreciated.
I've seen people use Davis and Kahan (1970), "The Rotation of Eigenvectors by a Perturbation". It's sometimes a bit tough going, but incredibly useful for problems like this. More info would also be useful. Is $W$ stochastic? Are there underlying latent classes that control the distribution of $W_{ij}$, e.g., with $E(W_{ij})$ larger if $i$ and $j$ possess the same class? If so, then I suggest first considering $E(W_{ij})$ with the latent classes given. Reorder the rows and columns according to latent class. You'll end up with a block matrix with entries constant on each block. It's not that difficult to then compute eigenvalues and eigenvectors of this block matrix. You'll find the eigenvector entries for the same latent class are the same. Now perturb the matrix from $E(W)$ back to the original $W$ and use Davis and Kahan to bound differences in the eigenvector entries.
Evaluate $\int^{441}_0\frac{\pi\sin \pi \sqrt x}{\sqrt x} dx$ Evaluate this definite integral: $$\int^{441}_0\frac{\pi\sin \pi \sqrt x}{\sqrt x} dx$$
This integral (even the indefinite one) can be easily solved by observing: $$\frac{\mathrm d}{\mathrm dx}\pi\sqrt x = \frac{\pi}{2\sqrt x}$$ which implies that: $$\frac{\mathrm d}{\mathrm dx}\cos\pi\sqrt x = -\frac{\pi \sin\pi\sqrt x}{2\sqrt x}$$ Finally, we obtain: $$\int\frac{\pi\sin\pi\sqrt x}{\sqrt x}\,\mathrm dx = -2\cos\pi\sqrt x$$ whence the definite integral with bounds $0, n^2$ evaluates to $2(1-(-1)^n)$. Here $441 = 21^2$, so the given integral equals $2(1-(-1)^{21}) = 4$.
Function generation by input $y$ and $x$ values I wonder if there are such tools, that can output function formulas that match input conditions. Lets say I will make input like that: $y=0, x=0$ $y=1, x=1$ $y=2, x=4$ and tool should generate for me function formula y=x^2. I am aware its is not possible to find out exact function, but it would be great to get some possibilities. I'm game developer and i need to code some behaviours in game mechanics, that aren't simply linear, sometimes I need for example arcus tangens, when i want my value to increase slower and slower for higher arguments. Problem is that I finished my school long time ago and simply I don't remember how does many functions looks like and such tool would be great to quickly find out what i need.
One of my favorite curve-fitting resources is zunzun. They have many, many possible types of curves that can fit the data you give it.
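If you are happy to pick the candidate family yourself, SciPy's curve_fit will tune its parameters to your sample points. A minimal sketch (the quadratic model here is just a guess you supply, matching the $y=x^2$ example in the question):

```python
import numpy as np
from scipy.optimize import curve_fit

# You still have to choose the model family yourself; the fit only tunes
# its parameters.  Here the guess is a quadratic y = a*x**2 + b*x + c.
def model(x, a, b, c):
    return a * x**2 + b * x + c

xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([0.0, 1.0, 4.0, 9.0])
params, _ = curve_fit(model, xs, ys)
print(params)   # close to a=1, b=0, c=0, i.e. y = x**2
```

In a game, once the parameters are fitted you can bake them into the behaviour code directly instead of re-fitting at runtime.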
Determining $\sin(15)$, $\sin(32)$, $\cos(49)$, etc. How do you in general find the trigonometric function values? I know how to find them for 30 45, and 60 using the 60-60-60 and 45-45-90 triangle but don't know for, say $\sin(15)$ or $\tan(75)$ or $\csc(50)$, etc.. I tried looking for how to do it but neither my textbook or any other place has a tutorial for it. I want to know how to find the exact values for all the trigonometric functions like $\sin x$, $\csc x$, ... opposed to looking it up or using calculator. According to my textbook, $\sin(15)=0.26$, $\tan(75)=3.73$, and $\csc(50)=1.31$ but doesn't show where those numbers came from, as if it was dropped from the Math heaven!
Value of $\sin{x}$ with prescribed accuracy can be calculated from Taylor's representation $$\sin{x}=\sum\limits_{n=0}^{\infty}{\dfrac{(-1)^n x^{2n+1}}{(2n+1)!}}$$ or infinite product $$\sin{x}=x\prod\limits_{n=1}^{\infty}{\left(1-\dfrac{x^2}{\pi^2 n^2} \right)}.$$ For some partial cases numerous trigonometric identities can be used.
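For example, here is a short sketch that uses the partial sums of the Taylor series above to reproduce the textbook values (angles are converted to radians first):

```python
from math import pi, factorial

def sin_taylor(x, terms=10):
    """Partial sum of the series sum_n (-1)^n x^(2n+1) / (2n+1)! quoted above."""
    return sum((-1)**n * x**(2*n + 1) / factorial(2*n + 1) for n in range(terms))

print(round(sin_taylor(15 * pi / 180), 6))                              # sin(15 deg) ~ 0.258819
print(round(1 / sin_taylor(50 * pi / 180), 2))                          # csc(50 deg) ~ 1.31
print(round(sin_taylor(75 * pi / 180) / sin_taylor(15 * pi / 180), 2))  # tan(75 deg) = sin 75 / cos 75 ~ 3.73
```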
Help with combinations problem? Initially there are $m$ balls in one bag, and $n$ in the other, where $m,n>0$. Two different operations are allowed: a) Remove an equal number of balls from each bag; b) Double the number of balls in one bag. Is it always possible to empty both bags after a finite sequence of operations? Operation b) is now replaced with b') Triple the number of balls in one bag. Is it now always possible to empty both bags after a finite sequence of operations? This is question 4 on Round $1$ of the $2011/2012$ British Mathematical Olympiad. I suck at combinatorics and the like but need to practise to try and improve my competition mathematics. If anyone could give me a hint on where to start I'd be most grateful :D EDIT: Never mind guys, I just completely mis-read the question. I thought it said you had to double the numbers of balls in both bags. Thanks for the help!
Regarding the first part... Let $m>n$ Remove $n-1$ balls from each bag so that you have $m-n+1$ balls in one bag and $1$ ball in the other bag. Now repeat the algorithm of doubling the balls in the bag which has $1$ ball and then taking away $1$ from each bag till you have $1$ ball in each bag. Finally remove $1$ ball from each bag and you have emptied them in finite number of steps. For the second part Again suppose $m>n$ Remove $n-1$ balls from each bag so that you have $m-n+1$ balls in one bag and $1$ ball in the other bag. Now repeat the algorithm of tripling the balls in the bag which has $1$ ball and then taking away $2$ from each bag. If $2|(m-n)$, then you'll end up with $1$ ball in each bag in some steps. Otherwise you'll end up with $2$ balls in one bag and $1$ ball in the other bag and it seems like no matter what you do from here, you'll always end up with this arrangement. Hence, in the second part, it may be conjectured that the bags cannot be emptied in finite steps if $m-n\equiv 1\pmod2$.
Books for Geometry processing Please suggest some basic books on geometry processing. I want to learn this subject in order to learn algorithms for 3D mesh generation and graphics. Please also suggest subjects or areas of mathematics I should learn in order to understand 3D mesh generation. I am doing self-study and I am very new to this topic. Please suggest online links to any videos or resources, if available.
See these books: * *Polygon Mesh Processing by Botsch et al. *Geometry and Topology for Mesh Generation by Edelsbrunner and these courses: * *Mesh Generation and Geometry Processing in Graphics, Engineering, and Modeling *Geometry Processing Algorithms
How to show $x^4 - 1296 = (x^3-6x^2+36x-216)(x+6)$ How to get this result: $x^4-1296 = (x^3-6x^2+36x-216)(x+6)$? It is part of a question about finding limits at mooculus.
Hints: $1296=(-6)^4$ and $a^n-b^n=(a-b)(a^{n-1}+a^{n-2}b+\ldots+ab^{n-2}+b^{n-1})$.
Simple Linear Regression Question Let $Y_{i} = \beta_{0} + \beta_{1}X_{i} + \epsilon_{i}$ be a simple linear regression model with independent errors and iid normal distribution. If $X_{i}$ are fixed what is the distribution of $Y_{i}$ given $X_{i} = 10$? I am preparing for a test with questions like these but I am realizing I am less up to date on these things than I thought. Could anyone explain the thought process used to approach this kind of question?
Let $\epsilon_i \sim N(0,\sigma^2)$. Then, we have: $$Y_i \sim N(\beta_0 + \beta_1 X_i,\sigma^2)$$ Further clarification: The above uses the following facts: (a) Expectation is a linear operator, (b) Variance of a constant is $0$, (c) Covariance of a random variable with a constant is $0$ and finally, (d) A linear combination of normals is also a normal. Does that help?
Bertrand's postulate from another point of view I was just wondering why we can't use the prime number theorem to prove Bertrand's postulate. We know that if we show that for all natural numbers $n>2$, $\pi(2n)-\pi(n)>0$, we are done. Why can't it be proven by just showing (using the prime number theorem) that for every natural number $n>2$, $\frac{2n}{\ln(2n)}-\frac{n}{\ln(n)}>0$?
You need a precise estimate of the form $$c_1<\frac{\pi(n)\ln n}{n}<c_2.$$ With that you can derive $\pi(2n)>c_1\frac{2n}{\ln(2n)}>2c_1\frac{n}{\ln 2+\ln n}$. If you are lucky, you can continue $2c_1\frac n{\ln2+\ln n}>c_2\frac n{\ln n}>\pi(n)$. However, for this you better have $\frac{2c_1}{c_2} >1+\frac{\ln 2}{\ln n}$. So at least if $c_2<2c_1$, your idea works for sufficiently big $n$.
Prove that a cut edge is in every spanning tree of a graph Given a simple and connected graph $G = (V,E)$, and an edge $e \in E$. Prove: $e$ is a cut edge if and only if $e$ is in every spanning tree of $G$. I have been thinking about this question for a long time and have made no progress.
Hint ("only if"): Imagine you have a spanning tree in the graph which doesn't contain the cut-edge. What happens to the graph if you remove this cut edge? What happens to the spanning tree? Hint ("if"): What happens if you remove this "indispensable" edge (the one which is in every spanning tree)? Can the resulting graph have any spanning tree? What kinds of graphs don't have any spanning tree?
Evaluating $48^{322} \pmod{25}$ How do I find $48^{322} \pmod{25}$?
Finding $\phi(n)$ for $25$ was easy, but what if $n$ were arbitrarily large? $$48 \equiv -2 \pmod{25}$$ Playing around with $-2 \pmod{25}$ so as to get $1$ or $-1 \pmod{25}$, we see that $2^{10}=1024 \equiv -1 \pmod{25}$, so $$((-2)^{10})^{32} \equiv 1 \pmod{25}$$ $$(-2)^{322} \equiv 4 \pmod{25}$$
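As a quick sanity check (purely illustrative, not part of the argument), Python's built-in three-argument `pow` does exactly this modular exponentiation:

```python
# Verify the result of the modular-arithmetic argument above.
print(pow(48, 322, 25))   # 4
print(pow(-2, 322, 25))   # 4, consistent with 48 ≡ -2 (mod 25)
```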
Does the Euler totient function give exactly one value (answer) or a LEAST calculated value (answer is NOT below this value)? I was studying RSA when I came across the Euler totient function. The definition states that it gives the number of positive values less than $n$ which are relatively prime to $n$. I thought I had it, until I came across this property: the Euler totient function is a multiplicative function, that is: $\varphi(mn) = \varphi(m)\varphi(n)$. Now, if $p$ is a prime number, $\varphi(p)=p-1$. Putting values of $p$ as 11 and 13 one by one, $$\varphi(11)=10$$ $$\varphi(13)=12$$ Applying the above stated property, $$\varphi(11\cdot 13)=\varphi(11)\varphi(13)$$ $$\varphi(143)=12 \cdot 10$$ $$\varphi(143)=120$$ Is it correct? Does that mean we have $23$ values between $1$ and $143$ which are not relatively prime to $143$? Sorry if it's something basic I'm missing. I'm not some genius at maths and came across this during study of the RSA algorithm. Thanks.
For some intuition about why $n$ and $m$ must be relatively prime, consider that, for $N=p_1^{a_1}p_2^{a_2}...p_n^{a_n}$, $$\varphi(N)=N \left (1-\frac{1}{p_1}\right) \left (1-\frac{1}{p_2}\right)...\left (1-\frac{1}{p_n}\right)$$ And for $M=p_m^b$, $$\varphi(M)=M \left (1-\frac{1}{p_m}\right)$$ If $p_m$ is not one of $p_1,p_2,...,p_n$ then $NM$ has $1$ more unique prime factor than $N$, and $$\varphi(NM)=NM \left (1-\frac{1}{p_1}\right) \left (1-\frac{1}{p_2}\right)...\left (1-\frac{1}{p_n}\right) \left(1-\frac{1}{p_m}\right)=\varphi(N)\varphi(M)$$ But if $p_m$ is one of $p_1,p_2,...,p_n$, then $NM$ has the same number of unique prime factors as $N$, and $$\varphi(NM)=NM \left (1-\frac{1}{p_1}\right) \left (1-\frac{1}{p_2}\right)...\left (1-\frac{1}{p_n}\right) \ne\varphi(N)\varphi(M)$$
Covariance of order statistics (uniform case) Let $X_1, \ldots, X_n$ be uniformly distributed on $[0,1]$ and $X_{(1)}, ..., X_{(n)}$ the corresponding order statistic. I want to calculate $Cov(X_{(j)}, X_{(k)})$ for $j, k \in \{1, \ldots, n\}$. The problem is of course to calculate $\mathbb{E}[X_{(j)}X_{(k)}]$. The joint density of $X_{(j)}$ and $X_{(k)}$ is given by $$f_{X_{(j)}, X_{(k)}}=\binom{n}{k}\binom{k}{j-1}x^{j-1}(y-x)^{k-1-j}(1-y)^{n-k}$$ where $0\leq x\leq y\leq 1$. (I used the general formula here.) Sadly, I see no other way to calculate $\mathbb{E}[X_{(j)}X_{(k)}]$ than by $$\mathbb{E}[X_{(j)}X_{(k)}]=\binom{n}{k}\binom{k}{j-1}\int_0^1\int_0^yxyx^{j-1}(y-x)^{k-1-j}(1-y)^{n-k}\,dx\,dy.$$ But this integral is too much for me. I tried integration by parts, but got lost along the way. Is there a trick to do it? Did I even get the limits of integration right? Apart from that, I wonder if there's a smart approach to solve the whole problem more elegantly.
Thank you so much for posting this -- I too looked at this covariance integral (presented in David and Nagaraja's 2003 3rd edition Order Statistics text) and thought that it looked ugly. However, I fear that there may be a few small mistakes in your math on $\mathbb{E}[X_{(j)}X_{(k)}]$, assuming that I'm following you right. The joint density should have $(j-1)!$ in the denominator instead of $j!$ at the outset -- otherwise $j!$ would entirely cancel out the $j!$ in the numerator of $\operatorname{Beta}(j+1,k-j)$ instead of ending up with $j$ in the numerator of the solution... Right?
Derive Cauchy-Bunyakovsky by taking expected value In my notes, it is said that taking expectation on both sides of this inequality $$|\frac{XY}{\sqrt{\mathbb{E}X^2\mathbb{E}Y^2}}|\le\frac{1}{2}\left(\frac{X^2}{\mathbb{E}X^2}+\frac{Y^2}{\mathbb{E}Y^2}\right)$$ can lead to the Cauchy-Bunyakovsky (Schwarz) inequality $$\mathbb{E}|XY|\le\sqrt{\mathbb{E}X^2\mathbb{E}Y^2}$$ I am not really good at taking expected values; may anyone guide me how to go about it? Note: I am familiar with the linearity and monotonicity of expected values, what I am unsure about is the derivation that leads to the inequality, especially when dealing with double expectation. Thanks.
You can simplify your inequality as follows. For the left side: $|\frac {XY}{\sqrt{EX^{2}EY^{2}}}|=\frac {|XY|}{\sqrt{EX^{2}EY^{2}}}$. For the right side, take the expectation: $\frac{1}{2}E\left( \frac{X^2}{EX^2}+\frac{Y^2}{EY^2}\right)= \frac{1}{2} E \left( \frac{X^2 EY^2+Y^2 EX^2}{EY^2 EX^2} \right)$ Now, $E(X^2 EY^2+Y^2 EX^2)=2\,EX^2EY^2$, using the fact that $EX^2$ and $EY^2$ are constants (so, e.g., $E(X^2\,EY^2)=EX^2\,EY^2$). Plug in and you get the result.
Check if $\lim_{x\to\infty}{\log x\over x^{1/2}}=\infty$, $\lim_{x\to\infty}{\log x\over x}=0$ Could anyone tell me which of the following is/are true? * *$\lim_{x\to\infty}{\log x\over x^{1/2}}=0$, $\lim_{x\to\infty}{\log x\over x}=\infty$ *$\lim_{x\to\infty}{\log x\over x^{1/2}}=\infty$, $\lim_{x\to\infty}{\log x\over x}=0$ *$\lim_{x\to\infty}{\log x\over x^{1/2}}=0$, $\lim_{x\to\infty}{\log x\over x}=0$ *$\lim_{x\to\infty}{\log x\over x^{1/2}}=0$, but $\lim_{x\to\infty}{\log x\over x}$ does not exist. I know $\lim_{x\to\infty}{1\over x}$ exists, and the same is true for ${1\over x^{1/2}}$.
$3$ is correct as $\log x$ grows slower than any $x^n$. So $x^{-1}$ and $x^{-\frac{1}{2}}$ will manage to pull it down to $0$. And $\displaystyle\lim_{x\to\infty}{1\over x}=\lim_{x\to\infty}{1\over \sqrt{x}}=0$.
Is $\sum\limits_{k=1}^{n-1}\binom{n}{k}x^{n-k}y^k$ always even? Is $$ f(n,x,y)=\sum^{n-1}_{k=1}{n\choose k}x^{n-k}y^k,\qquad\qquad\forall~n>0~\text{and}~x,y\in\mathbb{Z}$$ always divisible by $2$?
Hint: Recall binomial formula $$ (x+y)^n=\sum\limits_{k=0}^n{n\choose k} x^{n-k} y^k $$
One of $2^1-1,2^2-1,...,2^n-1$ is divisible by $n$ for odd $n$ Let $n$ be an odd integer greater than 1. Show that one of the numbers $2^1-1,2^2-1,...,2^n-1$ is divisible by $n$. I know that pigeonhole principle would be helpful, but how should I apply it? Thanks.
Hints: $$(1)\;\;\;2^k-1=2^{k-1}+2^{k-2}+\ldots+2+1\;,\;\;k\in\Bbb N$$ $$(2)\;\;\;\text{Every natural number $\,n\,$ can be uniquely written in base two }$$ $$\text{with maximal power of two less than $\,n\,$ }$$
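A quick empirical check of the statement (illustrative only; the pigeonhole argument is what actually proves it) can be run in a couple of lines:

```python
# For every odd n with 3 <= n < 100, some 2^k - 1 with 1 <= k <= n is divisible by n.
assert all(any((2**k - 1) % n == 0 for k in range(1, n + 1)) for n in range(3, 100, 2))
print("checked all odd n below 100")
```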
determine whether an integral is positive Given a standardized normal variable $X\sim N\left(0,1\right)$, and constants $ \kappa \in \left[0,1\right)$ and $\tau \in \mathbb{R}$, I want to determine the sign of the following expression: \begin{equation} \int_{-\infty}^\tau \left(X-\kappa \tau \right) \phi(X)\text{d}X \end{equation} where $\phi$ is the PDF of $X$. Any comment will be appreciated. I would at least want to know if the sign of the expression can be determined given the information, or whether it hinges on the value of $\tau$.
Note that $$ \int_{-\infty}^\infty (X-\kappa\tau)\phi(X)\,\mathrm dX=E[X-\kappa\tau]=-\kappa\tau$$ and for $\tau>0$ $$ \int_{\tau}^\infty (X-\kappa\tau)\phi(X)\,\mathrm dX\ge\int_{\tau}^\infty (1-\kappa)\tau\phi(X)\,\mathrm dX>0.$$ So for $\tau>0$ your expression equals $-\kappa\tau$ minus a strictly positive quantity, hence it is negative.
A problem on matrices: Find the value of $k$ If $ \begin{bmatrix} \cos \frac{2 \pi}{7} & -\sin \frac{2 \pi}{7} \\ \sin \frac{2 \pi}{7} & \cos \frac{2 \pi}{7} \\ \end{bmatrix}^k = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} $, then what is the least positive integral value of $k$? Actually, I have no idea how to solve this. I used trial and error and got 7 as the answer. How do I solve this properly? Can you please offer your assistance? Thank you
Powers of matrices should always be attacked with diagonalization, if feasible. Forget $2\pi/7$, for the moment, and look at $$ A=\begin{bmatrix} \cos\alpha & -\sin\alpha\\ \sin\alpha & \cos\alpha \end{bmatrix} $$ whose characteristic polynomial is, easily, $p_A(X)=1-2X\cos\alpha+X^2$. The discriminant is $4(\cos^2\alpha-1)=4(i\sin\alpha)^2$, so the eigenvalues of $A$ are \begin{align} \lambda&=\cos\alpha+i\sin\alpha\\ \bar{\lambda}&=\cos\alpha-i\sin\alpha \end{align} Finding the eigenvectors is easy: $$ A-\lambda I= \begin{bmatrix} -i\sin\alpha & -\sin\alpha\\ \sin\alpha & -i\sin\alpha \end{bmatrix} $$ and an eigenvector is $v=[i\quad 1]^T$. Similarly, an eigenvector for $\bar{\lambda}$ is $w=[-i\quad 1]^T$. If $$ S=\begin{bmatrix}i & -i\\1 & 1\end{bmatrix} $$ you get immediately that $$ S^{-1}=\frac{1}{2}\begin{bmatrix}-i & 1\\i & 1\end{bmatrix} $$ so, by well known rules, $$ A=SDS^{-1} $$ where $$ D= \begin{bmatrix} \cos\alpha+i\sin\alpha & 0 \\ 0 & \cos\alpha-i\sin\alpha \end{bmatrix}. $$ By De Moivre's formulas, you have $$ D^k= \begin{bmatrix} \cos(k\alpha)+i\sin(k\alpha) & 0 \\ 0 & \cos(k\alpha)-i\sin(k\alpha) \end{bmatrix}. $$ Since $A^k=S D^k S^{-1}$ your problem is now to find the minimum $k$ such that $\cos(k\alpha)+i\sin(k\alpha)=1$, that is, for $\alpha=2\pi/7$, $$ \begin{cases} \cos k(2\pi/7)=1\\ \sin k(2\pi/7)=0 \end{cases} $$ and you get $k=7$. This should not be a surprise, after all: the effect of $A$ on vectors is exactly rotating them by the angle $\alpha$. If you think of the vector $v=[x\quad y]^T$ as the complex number $z=x+iy$, when you do $Av$ you get $$ Av= \begin{bmatrix} \cos\alpha & -\sin\alpha\\ \sin\alpha & \cos\alpha \end{bmatrix} \begin{bmatrix}x\\y\end{bmatrix} =\begin{bmatrix} x\cos\alpha-y\sin\alpha\\x\sin\alpha+y\cos\alpha \end{bmatrix} $$ and $$ (x\cos\alpha-y\sin\alpha)+i(x\sin\alpha+y\cos\alpha)= (x+iy)(\cos\alpha+i\sin\alpha)=\lambda z $$ (notation as above). Thus $z$ is mapped to $\lambda z$, which is just $z$ rotated by an angle $\alpha$.
Symmetric Matrices Using Pythagorean Triples Find symmetric matrices A =$\begin{pmatrix} a &b \\ c&d \end{pmatrix}$ such that $A^{2}=I_{2}$. Alright, so I've posed this problem earlier but my question is in regard to this problem. I was told that $\frac{1}{t}\begin{pmatrix}\mp r & \mp s \\ \mp s & \pm r \end{pmatrix}$ is a valid matrix $A$ as $A^{2}=I_{2}$, given the condition that $r^{2}+s^{2}=t^{2}$, that is, (r,s,t) is a Pythagorean Triple. Does anybody know why this works?
It works because $$A^2 = \frac{1}{t^2}\begin{pmatrix}r & s\\s & -r\end{pmatrix}\begin{pmatrix}r&s\\s&-r\end{pmatrix} = \frac{1}{t^2}\begin{pmatrix}r^2+s^2 & 0\\0 & r^2 + s^2\end{pmatrix}.$$ and you want the diagonals to be 1, i.e. $\frac{r^2 + s^2}{t^2} = 1$.
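A small numerical illustration (just a check, with the concrete triple $(r,s,t)=(3,4,5)$ chosen for convenience):

```python
import numpy as np

# (r, s, t) is a Pythagorean triple, so r^2 + s^2 = t^2.
r, s, t = 3, 4, 5
A = np.array([[r, s], [s, -r]]) / t
print(A @ A)  # the 2x2 identity, up to floating-point rounding
```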
Jacobson Radical and Finite Dimensional Algebra In general, it is usually not the case that for a ring $R$, the Jacobson radical of $R$ has to be equal to the intersection of the maximal ideals of $R$. However, what I would like to know is: if we are given a finite-dimensional algebra $A$ over the field $F$, is it true that the Jacobson radical of $A$ is precisely the intersection of the maximal ideals of $A$?
Yes, it is true. See J.A. Drozd, V.V. Kirichenko, Finite dimensional algebras, Springer, 1994
Similar triangles question If I have a right triangle with sides $a$.$b$, and $c$ with $a$ being the hypotenuse and another right triangle with sides $d$, $e$, and $f$ with $d$ being the hypotenuse and $d$ has a length $x$ times that of $a$ with $x \in \mathbb R$, is it necessary for $e$ and $f$ to have a length $x$ times that of $b$ and $c$ respectively? EDIT: The corresponding non-right angles of both triangles are the same.
EDIT: This answer is now incorrect since the OP changed his question. This is a good way of visualizing the failure of your claim: Imagine the point C moving along the circle from A to the north pole. This gives you a continuum of non-similar right triangles with a given hypotenuse (in this case the diameter).
How to prove that End(R) is isomorphic to R? I tried to prove this: $R$ as a ring is isomorphic to $End(R)$, where $R$ is considered as a left $R$-module. The map giving the isomorphism is $$F:R\to End(R), \quad F(r)=f_r,$$ where $f_r(a)=ar$. And I define the multiplication in $End(R)$ by $(.)$, where $h.g=g\circ h$ for $g,h$ in $End(R)$. Is this true?
It's true that: $a\mapsto ar$ is a left module homomorphism. If we call this map $a\mapsto ar$ by $\theta_r$, then indeed $\theta:R\to End(_RR)$. * *Check that it's additive. *Check that it's multiplicative. (You will absolutely need your rule that $f\circ g=g\cdot f$. The $\cdot$ operation you have given is the multiplication in $(End(R))^{op}$) If you instead try to show that $R\cong End(R_R)$ in the same way (with $\theta_r$ denoting $a\mapsto ra$), you will have better luck. Can you spot where the two cases are different?
Find what values of $n$ give $\varphi(n) = 10$ For what values of $n$ do we get $\varphi(n) = 10$? Here, $\varphi$ is the Euler Totient Function. Now, just by looking at it, I can see that this happens when $n = 11$. Also, my friend told me that it happens when $n = 22$, but both of these are lucky guesses, or educated guesses. I haven't actually worked it out, and I don't know if there are any more. How would I go about answering this question?
Suppose $\varphi(n)=10$. If $p \mid n$ is prime then $p-1$ divides $10$. Thus $p$ is one of $2,3,11$. If $3 \mid n$, it does so with multiplicity $1$. But then there would exist $m \in \mathbb{N}$ such that $\varphi(m)=5$, and this quickly leads to a contradiction (e.g. note that such values are always even). Thus $n$ fits the form $2^a\cdot 11^b$, and we claim that $b=1$. If $b>1$ we have $11 \mid \varphi(n)$, a contradiction. As well, $b=0$ gives $\varphi(n)$ a power of $2$. Thus $n=2^a \cdot 11$, and it's easy to see from here that $n=11,22$ are the only solutions.
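As a quick computational cross-check (illustrative only, assuming SymPy is available), a brute-force search confirms that $11$ and $22$ are the only solutions:

```python
from sympy import totient

# phi(n) grows at least like sqrt(n), so searching n up to 200 is more than enough here.
print([n for n in range(1, 201) if totient(n) == 10])   # [11, 22]
```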
Every integer from 1 to 121 can be written in terms of 5 powers of 3 We have a two-pan balance and 5 integer weights with which it is possible to weigh exactly all the integer weights from 1 to 121 kg. The weights can all be placed in one pan, but you can also put some in one pan and others in the pan with the goods to be weighed. The problem asks to find the 5 weights that give us this possibility. It also asks you to prove that this group of five weights is the only one that solves the problem. I easily found the 5 weights: 1, 3, 9, 27, 81, but I can't demonstrate that this group of five weights is the only one that solves the problem. Can you help me? Thanks in advance!
A more general result says: given weights $w_1\le w_2 \le \dots \le w_n$, and setting $S_k = \sum_{i=1}^k w_i$ with $S_0 = 0$, everything from $1$ to $S_n$ is weighable iff each of the following inequalities holds: $$S_{k+1} \le 3S_k + 1 \text{ for } k = 0,\dots,n-1$$ Note that this is equivalent to $w_{k+1} \le 2 S_k +1$ for each $k$. If you wish, a proof is given here.
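Below is a small sketch of the underlying mechanism (my own illustration, not from the original answer): with the weights $1,3,9,27,81$, weighing $n$ amounts to writing $n$ in balanced ternary, with digit $+1$ meaning "weight in the pan opposite the goods", $-1$ meaning "weight next to the goods", and $0$ meaning "weight unused".

```python
def balanced_ternary(n):
    """Digits in {-1, 0, 1}, least significant first, with sum(d_i * 3^i) == n."""
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:
            r = -1
            n += 1  # carry one to the next power of 3
        digits.append(r)
        n //= 3
    return digits

# Every n from 1 to 121 needs at most the five powers 1, 3, 9, 27, 81.
for n in range(1, 122):
    d = balanced_ternary(n)
    assert len(d) <= 5 and sum(di * 3**i for i, di in enumerate(d)) == n
print("all of 1..121 are weighable with 1, 3, 9, 27, 81")
```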
Calculate the limit of two interrelated sequences? I'm given two sequences: $$a_{n+1}=\frac{1+a_n+a_nb_n}{b_n},b_{n+1}=\frac{1+b_n+a_nb_n}{a_n}$$ as well as an initial condition $a_1=1$, $b_1=2$, and am told to find: $\displaystyle \lim_{n\to\infty}{a_n}$. Given that I'm not even sure how to approach this problem, I tried anyway. I substituted $b_{n-1}$ for $b_n$ to begin the search for a pattern. This eventually reduced to: $$a_{n+1}=\frac{a_{n-1}(a_n+1)+a_n(1+b_{n-1}+a_{n-1}b_{n-1})}{1+b_{n-1}+a_{n-1}b_{n-1}}$$ Seeing no pattern, I did the same once more: $$a_{n+1}=\frac{a_{n-2}a_{n-1}(a_n+1)+a_n\left(a_{n-2}+(a_{n-1}+1)(1+b_{n-2}+a_{n-2}b_{n-2})\right)}{a_{n-2}+(a_{n-1}+1)(1+b_{n-2}+a_{n-2}b_{n-2})}$$ While this equation is atrocious, it actually reveals somewhat of a pattern. I can sort of see one emerging - though I'm unsure how I would actually express that. My goal here is generally to find a closed form for the $a_n$ equation, then take the limit of it. How should I approach this problem? I'm totally lost as is. Any pointers would be very much appreciated! Edit: While there is a way to prove that $\displaystyle\lim_{n\to\infty}{a_n}=5$ using $\displaystyle f(x)=\frac{1}{x-1}$, I'm still looking for a way to find the absolute form of the limit, $\displaystyle\frac{1+2a+ab}{b-a}$.
The answer is $a_n \to 5$, $b_n \to \infty$. I'm trying to prove that and I will edit this post if I figure out something. EDIT: I would have written all this in a comment instead of an answer, but I cannot find how to do it... maybe I need more reputation to do this (low reputation = low privileges :P). Anyway, I still haven't solved it, but maybe some of this will help you. I will edit it when I think something out. EDIT: After many transformations and playing with numbers, I think that the limit, for $a<b$, is $$ \frac{ab + 2a +1}{b-a}$$ but I still cannot prove it. (In the statement above: $a = a_1 $, $ b = b_1 $.)
For any arrangement of the numbers 1 to 10 in a circle, there will always exist three adjacent numbers that sum to 17 or more I set out to solve the following question using the pigeonhole principle: Regardless of how one arranges the numbers $1$ to $10$ in a circle, there will always exist three adjacent numbers in the circle that sum to $17$ or more. My outline [1] There are $10$ triplets of adjacent numbers in the circle, and since each number appears in exactly three of them, the total of the sums of all these triplets is $3\cdot 55=165$. [2] If every adjacent triplet summed to at most $16$, then, since there are $10$ such triplets, the total would be at most $160$; but we just said this total is $165$, hence there has to be a triplet with sum $17$ or more. My query Could someone polish this into a mathematical proof and also clarify if I did make use of the pigeonhole principle.
We will show something stronger, namely that there exist 3 adjacent numbers that sum to 18 or more. Let the integers be $\{a_i\}_{i=1}^{10}$. WLOG, $a_1 = 1$. Consider $$a_2 + a_3 + a_4, \quad a_5 + a_6 + a_7, \quad a_8 + a_9 + a_{10}$$ The sum of these 3 numbers is $2+3 +\ldots + 10 = 54$. Hence, by the pigeonhole principle, one of them is at least $\frac{54}{3} = 18$. I leave it to you to show that there is a construction where no 3 adjacent numbers sum to 19 or more, which shows that 18 is the best that we can do.
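For what it's worth, the stronger claim can also be confirmed by brute force (illustrative only, not something you would submit as the olympiad solution):

```python
from itertools import permutations

def max_adjacent_triple(circle):
    n = len(circle)
    return max(circle[i] + circle[(i + 1) % n] + circle[(i + 2) % n] for i in range(n))

# Fix 1 in the first position to quotient out rotations; 9! arrangements remain.
assert all(max_adjacent_triple((1,) + p) >= 18 for p in permutations(range(2, 11)))
print("every circular arrangement of 1..10 has three adjacent numbers summing to >= 18")
```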
Finding the determinant of $2A+A^{-1}-I$ given the eigenvalues of $A$ Let $A$ be a $2\times 2$ matrix whose eigenvalues are $1$ and $-1$. Find the determinant of $S=2A+A^{-1}-I$. Here I don't know how to find $A$ if eigenvectors are not given. If eigenvectors are given, then I can find using $A=PDP^{-1}$. Please solve for $A$; the rest I can do.
Take $$A=\begin{pmatrix} 0 &-1 \\ -1&0 \end{pmatrix};$$ its characteristic polynomial is $(x-1)(x+1)$, so its eigenvalues are $1$ and $-1$. (In fact you do not need an explicit $A$: the eigenvalues of $S=2A+A^{-1}-I$ are $2\lambda+\lambda^{-1}-1$ for each eigenvalue $\lambda$ of $A$, so $\det S=(2+1-1)(-2-1-1)=-8$.)
Counterexample to inverse Leibniz alternating series test The alternating series test is a sufficient condition for the convergence of a numerical series. I am searching for a counterexample to its converse: i.e. a series (alternating, of course) which converges, but for which the hypotheses of the theorem fail. In particular, if one writes the series as $\sum (-1)^n a_n$, then $a_n$ should not be monotonically decreasing (it must still tend to $0$ for the series to converge).
If you want a conditionally convergent series in which the signs alternate, but we do not have monotonicity, look at $$\frac{1}{2}-1+\frac{1}{4}-\frac{1}{3}+\frac{1}{6}-\frac{1}{5}+\frac{1}{8}-\frac{1}{7}+\cdots.$$ It is not hard to show that this converges to the same number as its more familiar sister.
Listing subgroups of a group I made a program to list all the subgroups of any group and I came up with satisfactory result for $\operatorname{Symmetric Group}[3]$ as $\left\{\{\text{Cycles}[\{\}]\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{cc} 1 & 2 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{cc} 1 & 3 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{cc} 2 & 3 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{ccc} 1 & 2 & 3 \\ \end{array} \right)\right],\text{Cycles}\left[\left( \begin{array}{ccc} 1 & 3 & 2 \\ \end{array} \right)\right]\right\}\right\}$ It excludes the whole set itself though it can be added seperately. But in case of $SymmetricGroup[4]$ I am getting following $\left\{\{\text{Cycles}[\{\}]\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{cc} 1 & 2 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{cc} 1 & 3 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{cc} 1 & 4 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{cc} 2 & 3 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{cc} 2 & 4 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{cc} 3 & 4 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{cc} 1 & 2 \\ 3 & 4 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{cc} 1 & 3 \\ 2 & 4 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{cc} 1 & 4 \\ 2 & 3 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{ccc} 1 & 2 & 3 \\ \end{array} \right)\right],\text{Cycles}\left[\left( \begin{array}{ccc} 1 & 3 & 2 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{ccc} 1 & 2 & 4 \\ \end{array} \right)\right],\text{Cycles}\left[\left( \begin{array}{ccc} 1 & 4 & 2 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{ccc} 1 & 3 & 4 \\ \end{array} \right)\right],\text{Cycles}\left[\left( \begin{array}{ccc} 1 & 4 & 3 \\ \end{array} \right)\right]\right\},\left\{\text{Cycles}[\{\}],\text{Cycles}\left[\left( \begin{array}{ccc} 2 & 3 & 4 \\ \end{array} \right)\right],\text{Cycles}\left[\left( \begin{array}{ccc} 2 & 4 & 3 \\ \end{array} \right)\right]\right\}\right\}$ The matrix form shows double transposition. Can someone please check for me if I am getting appropriate results? I doubt I am!!
I have the impression that you only list cyclic subgroups, and not even all of those. For $S_3$, the full group $S_3$ is missing as a subgroup (you are mentioning that in your question). For $S_4$, several subgroups are missing. In total, there should be $30$ of them, of which $17$ are cyclic; your list of $14$ is missing, for instance, the three cyclic subgroups of order $4$ generated by the $4$-cycles, as well as every non-cyclic subgroup. To give you a concrete example, the famous Klein four subgroup $$\{\operatorname{id},(12)(34),(13)(24),(14)(23)\}$$ is not contained in your list.
Does a closed walk necessarily contain a cycle? [HOMEWORK] I asked my professor and he said that a counterexample would be two nodes, by which the walk would go from one node to the other and back. This would be a closed walk but does not contain a cycle. But I am confused after looking at this again. Why is this not a cycle? Need there be another node?
I guess the answer depends on the exact definition of cycle. If it is as you wrote in your comment - a closed walk that starts and ends in the same vertex, and no vertex repeats on the walk (except for the start and end), then your example with two nodes is a cycle. However, a definition of a cycle usually contains a condition of non-triviality stating that a cycle has at least three vertices. So a graph with two vertices is not a cycle according to this definition.
Calculate the limit: $\lim_{n\to+\infty}\sum_{k=1}^n\frac{\sin{\frac{k\pi}n}}{n}$ Using definite integral between the interval $[0,1]$. Calculate the limit: $\lim_{n\to+\infty}\sum_{k=1}^n\frac{\sin{\frac{k\pi}n}}{n}$ using a definite integral over the interval $[0,1]$. It seems to me like the Riemann integral definition: $\sum_{k=1}^n\frac{\sin{\frac{k\pi}{n}}}{n}=\frac1n(\sin{\frac{\pi}n}+...+\sin\pi)$ So $\Delta x=\frac 1n$ and $f(x)=\sin(\frac{\pi x}n)$ (not sure about $f(x)$). How do I proceed from this point?
I think it should be $$ \int_{0}^{1}\sin (\pi x)\,dx=\frac{2}{\pi} $$ EDIT: OK, the point here is the direct implementation of Riemann sums with $dx = \frac{b-a}{n}, \ b=1, \ a=0$ and $x=\frac{k}{n}$
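A numerical sanity check of this value (illustrative only):

```python
import math

n = 10**5
riemann_sum = sum(math.sin(k * math.pi / n) for k in range(1, n + 1)) / n
print(riemann_sum, 2 / math.pi)  # both approximately 0.63662
```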
Show that $\langle(1234),(12)\rangle = S_4$ I am trying to show that $\langle(1234),(12)\rangle = S_4$. I can multiply and get all $24$ permutations manually but isn't there a more compact solution?
Write $H$ for the subgroup generated by those two permutations. Then $(1234)(12)=(234)$, so $H$ certainly contains the elements of $\langle(1234)\rangle$, $\langle (234)\rangle$ and $(12)$, hence $\vert H\vert \geq 7$; since $\vert H\vert$ must divide $24=\vert S_{4}\vert$ and is divisible by $3$ (the order of $(234)$), this leaves $\vert H\vert\in\{12,24\}$. Since $(1234)\in H$ but such a permutation is not even, i.e. it doesn't belong to $A_{4}$, $H$ is not a subgroup of $A_{4}$. As the only subgroup of order $12$ in $S_{4}$ is $A_{4}$, Lagrange's theorem gives $\vert H\vert = 24$ and we're done.
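If you want to double-check this computationally (assuming SymPy is installed; this is only a verification, not a replacement for the argument above):

```python
from sympy.combinatorics import Permutation, PermutationGroup

a = Permutation([1, 2, 3, 0])   # the 4-cycle (1 2 3 4), acting on the points 0..3
b = Permutation([1, 0, 2, 3])   # the transposition (1 2)
G = PermutationGroup([a, b])
print(G.order())  # 24, i.e. the whole of S_4
```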
A property of uniform spaces Is it true that $E\circ E\subseteq E$ for every entourage $E$ of every uniform space?
As noted in the comments, it clearly is not true in general. If an entourage $E$ has the property that $E\circ E\subseteq E$, then $E$ is a transitive, reflexive relation on $X$, and the entourage $E\cap E^{-1}$ is an equivalence relation on $X$. A uniform space whose uniformity has a base of equivalence relations is non-Archimedean, the uniform analogue of a non-Archimedean metric space; such a space is zero-dimensional, since it clearly has a base of clopen sets. Conversely, it’s known that every zero-dimensional topology is induced by a non-Archimedean uniformity. (I’ve seen this last result cited to B. Banaschewski, Über nulldimensionale Räume, Math. Nachr. $\bf13$ ($1955$), $129$-$140$, and to A.F. Monna, Remarques sur les métriques non-archimédiennes, Indag. Math. $\bf12$ ($1950$), $122$-$133$; ibid. $\bf12$ ($1950$), $179$-$191$, but I’ve not seen these papers.)
Understanding the differential $dx$ when doing $u$-substitution I just finished taking my first year of calculus in college and I passed with an A. I don't think, however, that I ever really understood the entire $\frac{dy}{dx}$ notation (so I just focused on using $u'$), and now that I'm going to be starting calculus 2 and learning newer integration techniques, I feel that understanding the differential in an integral is important. Take this problem for an example: $$ \int 2x (x^2+4)^{100}dx $$ So solving this... $$u = x^2 + 4 \implies du = 2x dx \longleftarrow \text{why $dx$ here?}$$ And so now I'd have: $\displaystyle \int (u)^{100}du$ which is $\displaystyle \frac{u^{101}}{101} + C$ and then I'd just substitute back in my $u$. I'm so confused by all of this. I know how to do it from practice, but I don't understand what is really happening here. What happens to the $du$ between rewriting the integral and taking the anti-derivative? Why is writing the $dx$ so important? How should I be viewing this in when seeing an integral?
$dx$ is what is known as a differential. Informally, it is an infinitesimally small change in $x$ (think of $\Delta x = x - x_0$ as $x$ approaches $x_0$). With this picture, the definition of the derivative explains why $\frac{dy}{dx}$ is the derivative of $y$ with respect to $x$: $$y'=f'(x_0)=\frac{dy}{dx}=\lim_{x\to{x_0}}\frac{f(x)-f(x_0)}{x-x_0}$$ When doing u-substitution, you are defining $u$ to be an expression dependent on $x$. Treating $\frac{du}{dx}$ as a fraction (which is exactly what the notation is designed to let you do), to get $du$ you multiply both sides of the equation by $dx$: $$u=x^2+4$$ $$\frac{du}{dx}=2x$$ $$dx\frac{du}{dx}=2x\space{dx}$$ $$du=2x\space{dx}$$
Are there nonlinear operators that have the group property? To be clear: what I am actually talking about is a nonlinear operator on a finitely generated vector space $V$ with dimension $\dim V \in \mathbb{N}$, $\dim V > 1$. I can think of several nonlinear operators on such a vector space but none of them have the requisite properties of a group. In particular, but not exclusively, are there any such nonlinear operator groups that meet the definition of a Lie group?
How about the following construction. Let $M \in R_{d}(G)$ be a $d$-dimensional representation of some Lie group $G$ (e.g. $SL(2,\mathbb{R})$), and let $V$ be some $d$-dimensional vector space. Now define: $$ f(x,M,\epsilon)= \frac{M x}{(1+\epsilon w.x)} $$ where the vector $w$ is chosen so that $w.M= w$ for all $M\in SL(2,\mathbb{R})$. Such vectors exist; for instance, if $d=4$ and $R_4(G)=R_2(G) \otimes R_2(G)$ then $w=\begin{pmatrix} 0 \\ 1\\ -1\\ 0 \end{pmatrix}$ is one such vector. It is now not too difficult to convince oneself that: $$ \begin{eqnarray} f(x,I,0) = x \\ f(f(x,M_1,\epsilon_1),M_2,\epsilon_2) = f(x,M_1 M_2,\epsilon_1+\epsilon_2) \end{eqnarray}$$ so that the mapping $f$ is nonlinear and satisfies the group property.
How can I prove this closed form for $\sum_{n=1}^\infty\frac{(4n)!}{\Gamma\left(\frac23+n\right)\,\Gamma\left(\frac43+n\right)\,n!^2\,(-256)^n}$ How can I prove the following conjectured identity? $$\mathcal{S}=\sum_{n=1}^\infty\frac{(4\,n)!}{\Gamma\left(\frac23+n\right)\,\Gamma\left(\frac43+n\right)\,n!^2\,(-256)^n}\stackrel?=\frac{\sqrt3}{2\,\pi}\left(2\sqrt{\frac8{\sqrt\alpha}-\alpha}-2\sqrt\alpha-3\right),$$ where $$\alpha=2\sqrt[3]{1+\sqrt2}-\frac2{\sqrt[3]{1+\sqrt2}}.$$ The conjecture is equivalent to saying that $\pi\,\mathcal{S}$ is the root of the polynomial $$256 x^8-6912 x^6-814752 x^4-13364784 x^2+531441,$$ belonging to the interval $-1<x<0$. The summand came as a solution to the recurrence relation $$\begin{cases}a(1)=-\frac{81\sqrt3}{512\,\pi}\\\\a(n+1)=-\frac{9\,(2n+1)(4n+1)(4 n+3)}{32\,(n+1)(3n+2)(3n+4)}a(n)\end{cases}.$$ The conjectured closed form was found using computer based on results of numerical summation. The approximate numeric result is $\mathcal{S}=-0.06339748327393640606333225108136874...$ (click to see 1000 digits).
According to Mathematica, the sum is $$ \frac{3}{\Gamma(\frac13)\Gamma(\frac23)}\left( -1 + {}_3F_2\left(\frac14,\frac12,\frac34; \frac23,\frac43; -1\right) \right). $$ This form is actually quite straightforward if you write out $(4n)!$ as $$ 4^{4n}n!(1/4)_n (1/2)_n (3/4)_n $$ using rising powers ("Pochhammer symbols") and then use the definition of a hypergeometric function. The hypergeometric function there can be handled with equation 25 here: http://mathworld.wolfram.com/HypergeometricFunction.html: $$ {}_3F_2\left(\frac14,\frac12,\frac34; \frac23,\frac43; y\right)=\frac{1}{1-x^k},$$ where $k=3$, $0\leq x\leq (1+k)^{-1/k}$ and $$ y = \left(\frac{x(1-x^k)}{f_k}\right)^k, \qquad f_k = \frac{k}{(1+k)^{(1+1/k)}}. $$ Now setting $y=-1$, we get the polynomial equation in $x$ $$ \frac{256}{27} x^3 \left(1-x^3\right)^3 = -1,$$ which has two real roots, neither of them in the necessary interval $[0,(1+k)^{-1/k}=4^{-1/3}]$, since one is $-0.43\ldots$ and the other $1.124\ldots$. However, one of those roots, $x_1=-0.436250\ldots$ just happens to give the (numerically at least) right answer, so never mind that. Also, note that $$ \Gamma(1/3)\Gamma(2/3) = \frac{2\pi}{\sqrt{3}}. $$ The polynomial equation above is in terms of $x^3$, so we can simplify that too a little, so the answer is that the sum equals $$ \frac{3^{3/2}}{2\pi} \left(-1+(1-z_1)^{-1}\right), $$ where $z_1$ is a root of the polynomial equation $$ 256z(1-z)^3+27=0, \qquad z_1=-0.0830249175076244\ldots $$ (The other real root is $\approx 1.42$.) How did you find the conjectured closed form?
Intuition of Addition Formula for Sine and Cosine The addition formula for the sine function is $$\sin(A+B)=\sin A\cos B+\sin B\cos A$$ and for the cosine function $$\cos(A+B)=\cos A\cos B-\sin A\sin B$$ I know how to derive both of the proofs using acute angles, which can be seen here http://en.wikibooks.org/wiki/Trigonometry/Addition_Formula_for_Cosines but pretty sure those who have taken trig know what I'm talking about. So I know how to derive and prove both of the two-angle formulas using acute angles, but what I am completely confused about is where those triangles came from. So for proving the two-angle cosine formula, we look at two acute angles, $A$ and $B$, where $A+B<90$, and keep on expanding. So my question is: where did those two triangles come from, and what is the intuition behind having two acute triangles on top of each other?
The bottom triangle is the right triangle used to compute sine and cosine of $\alpha$. The upper triangle is the right triangle used to compute sine and cosine of $\beta$, scaled and rotated so its base is the same as the hypotenuse of the lower triangle. We know the ratios of the sides of these triangles because of the definitions of sine and cosine. Making the base of the upper triangle the same length as the hypotenuse of the lower triangle allows relations to be drawn between the two triangles. Setting the base of the upper triangle to be aligned with the hypotenuse of the lower triangle creates a triangle with $\alpha+\beta$ as an angle.
$f$ and $g$ are holomorphic functions, $A=\{z:{1\over 2}<\lvert z\rvert<1\}$, $D=\{z: \lvert z-2\rvert<1\}$ $f$ and $g$ are holomorphic functions defined on $A\cup D$, where $A=\{z:{1\over 2}<\lvert z\rvert<1\}$, $D=\{z:\lvert z-2\rvert <1\}$ * *If $f(z)g(z)=0\forall z\in A\cup D $ then either $f(z)=0\forall z\in A$ or $g(z)=0\forall z\in A$ *If $f(z)g(z)=0\forall z\in D $ then either $f(z)=0\forall z\in D$ or $g(z)=0\forall z\in D$ *If $f(z)g(z)=0\forall z\in A\cup D $ then either $f(z)=0\forall z\in A\cup D$ or $g(z)=0\forall z\in A\cup D$ *If $f(z)g(z)=0\forall z\in A $ then either $f(z)=0\forall z\in A$ or $g(z)=0\forall z\in A$ Except for $1,3$, I can say $2,4$ are true because of the Identity Theorem, right? Zeroes of a holomorphic function are isolated. For $3$ I can define $f(z)=z$ on $A$ and $0$ on $D$ and $g(z)=0$ on $A$ and $z^2$ on $D$
You can use the identity theorem, but since that theorem doesn't directly apply to products, you should explain how you use the identity theorem. You can, for example, argue that if $f\cdot g = 0$ on some connected open set $S$, then there are $S_1,S_2$ with $f = 0$ on $S_1$, $g = 0$ on $S_2$, $S_1 \cap S_2 = \emptyset$ and $S_1 \cup S_2 = S$. Now if $f$ isn't identically zero on $S$, then due to the identity theorem $S_1$ contains only isolated points, hence for every $x \in S_1$ you have that $B_\epsilon(x) \setminus \{x\} \subset S_2$ for some $\epsilon > 0$. Using the identity theorem again, you get $g = 0$ on $S$. The same applies with $f$ and $g$ reversed, thus at least one of them is identically zero on $S$. This works, as you correctly stated, for (2) and (4). As Robert Israel commented, you get (1) from (4), and for (3) your counter-example indeed refutes the assertion.
Exponential Growth, half-life time An exponential growth function has at time $t = 5$ a) a growth factor (I guess that is just the "$\lambda$") of $0.125$ - what is the half-life time? b) A growth factor of $64$ - what is the doubling time ("Verdopplungsfaktor")? For a), as far as I know the half-life time is $\displaystyle T_{1/n} = \frac{\ln(n)}{\lambda}$, but how do I use the fact that we are at $t = 5$? I don't understand the (b) part. Thanks
The growth factor tells you the relative growth between $f(x)$ and $f(x+1)$, i.e. it's $$ \frac{f(t+1)}{f(t)} \text{.} $$ If $f$ grows exactly exponentially, i.e. if $$ f(t) = \lambda\alpha^t = \lambda e^{\beta t} \quad\text{($\beta = \ln \alpha$ respectively $\alpha = e^\beta$)} \text{,} $$ then $$ \frac{f(t+1)}{f(t)} = \frac{\lambda\alpha^{t+1}}{\lambda\alpha^t} = \alpha = e^\beta \text{,} $$ meaning that, as you noticed, the growth factor doesn't depend on $t$ - it's constant. The half-life time is the time $h$ it takes to get from $f(t)$ to $f(t+h)=\frac{f(t)}{2}$. For a strictly exponential $f$, you have $$ f(t+h) = \frac{f(t)}{2} \Rightarrow \lambda\alpha^{t+h} = \frac{\lambda}{2}\alpha^t \Rightarrow \alpha^h = \frac{1}{2} \Rightarrow h = \log_\alpha \frac{1}{2} = -\frac{\ln 2}{\ln \alpha} = -\frac{\ln 2}{\beta} \text{.} $$ Similarly, the doubling-time is the time $d$ it takes to get from $f(t)$ to $f(t+d) = 2f(t)$, and you have $$ f(t+d) = 2f(t) \Rightarrow \lambda\alpha^{t+d} = 2\lambda\alpha^t \Rightarrow \alpha^d = 2\Rightarrow d = \log_\alpha 2 = \frac{\ln 2}{\ln \alpha} = \frac{\ln 2}{\beta} \text{.} $$ Thus, you always have that $d = -h$ for doubling-time $d$ and half-life time $h$, which of course makes sense. If you go forward $d$ units of time to double the value, then going backwards $d$ units halves the value, and similarly for going forward respectively backward $h$ units to halve respectively double the value. In your case, you get that the doubling time for (b) is $\frac{\ln 2}{\ln 64} = \frac{1}{6}$. For (a) you get that the half-life time is $-\frac{\ln 2}{\ln \frac{1}{8}} = \frac{1}{3}$. You can also derive those by observing that a growth factor of one-eighth means that going forward one unit of time makes the value decrease to one-eighth of the original value. Thus, after going forward one-third of a unit, the value decreases to one-half, since if it decreases three times to one-half each, it overall decreases to one-eighth. Similarly, if the value increases to 64 times the original value when going forward one unit of time, you have to go forward one-sixth of a unit of time to have it increase to twice the value, since $2^6 = 64$.
Range of: $\sqrt{\sin(\cos x)}+\sqrt{\cos(\sin x)}$ Range of: $$\sqrt{\sin(\cos x)}+\sqrt{\cos(\sin x)}$$ Any help will be appreciated.
$$u=\cos(x); \quad v=\sin(x)$$ $$\sqrt{\sin(u)}+\sqrt{\cos(v)}$$ $\sin(u)$ must be greater than or equal to $0$, and $\cos(v)$ must be greater than or equal to $0$: $$u\in[2\pi{k},\pi+2\pi{k}],\quad k\in\mathbb{Z}$$ $$v\in\left[-\frac{\pi}{2}+2\pi{k},\frac{\pi}{2}+2\pi{k}\right],\quad k\in\mathbb{Z}$$ Since $v=\sin{x}\in[-1,1]$, $v$ will always be in the necessary interval. But $u=\cos x\in[-1,1]$ will only be in the necessary interval when $\cos{x}\ge0$, that is, when $x\in\left[-\frac{\pi}{2}+2\pi{k},\frac{\pi}{2}+2\pi{k}\right]$, $k\in\mathbb{Z}$. So that is the domain. Since the function is even and $2\pi$-periodic, it suffices to find its values on $\left[0,\frac{\pi}{2}\right]$; there both $\sin(\cos x)$ and $\cos(\sin x)$ are decreasing, so the extreme values are attained at the endpoints, giving the range $\left[\sqrt{\cos 1},\ \sqrt{\sin 1}+1\right]$.
a real convergent sequence has a unique limit point How to show that a real convergent sequence has a unique limit point, viz. the limit of the sequence? I've used the result several times but I don't know how to prove it! Please help me!
I am assuming that limit points are defined as in Section $6.4$ of the book Analysis $1$ by the author Terence Tao. We assume that the sequence of real numbers $(a_{n})_{n=m}^{\infty}$ converges to the real number $c$. Then we have to show that $c$ is the unique limit point of the sequence. First, we shall show that $c$ is indeed a limit point. By the definition of convergence we have $$\forall\epsilon > 0\:\exists N_{\epsilon}\geq m\:\text{s.t.}\:\forall n\geq N_{\epsilon}\:|a_{n}-c|\leq\epsilon$$ We need to prove that $\forall\epsilon >0\:\forall N\geq m\:\exists n\geq N\:\text{s.t.}\:|a_{n}-c|\leq\epsilon$. Fix $\epsilon$ and $N$. If $N\leq N_{\epsilon}$ then we could pick any $n\geq N_{\epsilon}$. If $N>N_{\epsilon}$ then picking any $n\geq N > N_{\epsilon}$ would suffice. Now, we shall prove that $c$ is the unique limit point. Let us assume for the sake of contradiction that $\exists$ a limit point $c'\in\mathbb{R}$ and $c'\neq c$. Since the sequence converges to $c$, $\forall\epsilon > 0\:\exists M_{\epsilon}\geq m$ such that $\forall n\geq M_{\epsilon}$ we have $|a_{n}-c|\leq \epsilon/2$. Further, since $c'$ is a limit point, $\forall\epsilon > 0$ and $\forall N\geq m\:\exists k_{N}\geq N$ such that $|a_{k_{N}}-c'|\leq \epsilon/2$. In particular, $|a_{k_{M_{\epsilon}}}-c'|\leq \epsilon/2$ and $|a_{k_{M_{\epsilon}}}-c|\leq \epsilon/2$. Using the triangle inequality we get $\forall\epsilon > 0\: |c-c'|\leq\epsilon\implies |c-c'| = 0\implies c = c'$ which contradicts the assumption.
Pointwise supremum of a convex function collection In Hoang Tuy, Convex Analysis and Global Optimization, Kluwer, pag. 46, I read: "A positive combination of finitely many proper convex functions on $R^n$ is convex. The upper envelope (pointwise supremum) of an arbitrary family of convex functions is convex". In order to prove the second claim the author sets the pointwise supremum as: $$ f(x) = \sup \{f_i(x) \mid i \in I\} $$ Then $$ \mathrm{epi} f = \bigcap_{i \in I} \mathrm{epi} f_i $$ As "the intersection of a family of convex sets is a convex set" the thesis follows. The claim on the intersections raises some doubts for me. If $(x,t)$ is in the epigraph of $f$, then $f(x) \leq t$ and, as $f(x)\geq f_i(x)$, also $f_i(x) \leq t$; therefore $(x,t)$ is in the epigraph of every $f_i$ and the intersection proposition follows. Now, what if, for some (not all) $\hat{i}$, $f_{\hat{i}}(x^0)$ is not defined? The sup still applies to the other $f_i$ and so, in as far as the sup is finite, $f(x^0)$ is defined. In this case $(x^0,t) \in \mathrm{epi} f$ but $(x^0,t)\notin \mathrm{epi} f_{\hat{i}}$, since $x^0 \not \in \mathrm{dom}\, f_{\hat{i}}$. So it is simple to say that $f$ is convex over $\bigcap_{i \in I} \mathrm{dom} f_i\subset \mathrm{dom} f$, because in this set every $x \in \mathrm{dom}\, f_{\hat{i}}$. What can we say when $x \in \mathrm{dom}\, f$ but $x \not \in \mathrm{dom}\, f_{\hat{i}}$ for some $i$?
I think it is either assumed that the $f_i$ are defined on the same domain $D$, or that (following a common convention) we set $f_i(x)=+\infty$ if $x \notin \mathrm{Dom}(f_i)$. You can easily check that under this convention, the extended $f_i$ still remain convex and the claim is true.
Solving $\sqrt{3x^2-2}+\sqrt[3]{x^2-1}= 3x-2 $ How can I solve the equation $$\sqrt{3x^2-2}+\sqrt[3]{x^2-1}= 3x-2$$ I know that it has two roots: $x=1$ and $x=3$.
Substituting $x = \sqrt{t^3+1}$ and squaring twice, we arrive at the equation $$ 36t^6-24t^5-95t^4+8t^3+4t^2-48t=0.$$ Its real roots are $t=0$ and $t=2$ (the latter root is found in the form $\pm \text{divisor}(48)/\text{divisor}(36)$), therefore $x=1$ and $x=3$.
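A quick numerical check of the claimed roots (illustrative only):

```python
f = lambda x: (3*x**2 - 2)**0.5 + (x**2 - 1)**(1/3) - (3*x - 2)
print(f(1.0), f(3.0))   # both 0, up to floating-point rounding
```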
Inverse Laplace transformation using the residue method of a transfer function that contains a time delay I'm having a problem trying to inverse Laplace transform the following equation $$ h_0 = K_p * \frac{1 - T s}{1 + T s} e ^ { - \tau s} $$ I've tried to solve this equation using the residue method and got the following. $$ y(t) = 2 K_p e ^ {- \tau s} e ^ {-t/T} $$ $$ y(t) = 2 K_p e ^ {\frac{\tau}{T}} e ^ {-t/T} $$ And that is clearly wrong. Is it so that you can't use the residue method on functions that contain a time delay, or is it possible to do a "work around" or something to get to the right answer?
First do polynomial division to simplify the fraction: $$\frac{1-Ts}{1+Ts}=-1+\frac{2}{1+Ts}$$ Now expand $h_0$: $$h_0=-K_pe^{-\tau{s}}+2K_p\frac{1}{Ts+1}e^{-\tau{s}}$$ Recall the time-domain shift property: $$\mathcal{L}(f(t-\tau))=F(s)e^{-\tau{s}}$$ $$\mathcal{L}^{-1}h_0=-K_p\delta{(t-\tau)}+2K_p g(t-\tau)$$ Where $g(t)=\mathcal{L}^{-1}\frac{1}{Ts+1}$. To take the inverse Laplace transform of this term, recall the frequency domain shift property: $$\mathcal{L}^{-1}F(s-a)=f(t)e^{at}$$ $$\frac{1}{Ts+1}=\frac{1}{T(s+\frac{1}{T})}=\frac{1}{T}\frac{1}{s+\frac{1}{T}}$$ Therefore the inverse Laplace transform is: $$\frac{1}{T}e^{-\frac{t}{T}}$$ Finally, putting all of it together, the full inverse Laplace transform of the original expression is: $$-K_p\delta(t-\tau)+\frac{2K_p}{T}e^{-\frac{t-\tau}{T}}$$ (valid for $t\ge\tau$; multiply the second term by the unit step $u(t-\tau)$ if you want an expression valid for all $t$).
Show that $n \ge \sqrt{n+1}+\sqrt{n}$ (how) Can I show that: $n \ge \sqrt{n+1}+\sqrt{n}$ ? It should be true for all $n \ge 5$. Tried it via induction: * *$n=5$: $5 \ge \sqrt{5} + \sqrt{6} $ is true. *$n\implies n+1$: I need to show that $n+1 \ge \sqrt{n+1} + \sqrt{n+2}$ Starting with $n+1 \ge \sqrt{n} + \sqrt{n+1} + 1 $ .. (now??) Is this the right way?
Hint: $\sqrt{n} + \sqrt{n+1} \leq 2\sqrt{n+1}$. Can you take it from there?
Integral over a ball Let $a=(1,2)\in\mathbb{R}^{2}$ and $B(a,3)$ denote a ball in $\mathbb{R}^{2}$ centered at $a$ and of radius equal to $3$. Evaluate the following integral: $$\int_{B(a,3)}y^{3}-3x^{2}y \ dx dy$$ Should I use polar coordinates? Or is there any tricky solution to this?
Note that $\Delta(y^3-3x^2y)=0$, so that $y^3-3x^2y$ is harmonic. By the mean value property, we get that the mean value over the ball is the value at the center. Since the area of the ball is $9\pi$ and the value at the center is $2$, we get $$ \int_{B(a,3)}\left(y^3-3x^2y\right)\mathrm{d}x\,\mathrm{d}y=18\pi $$
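If you want to double-check this numerically (illustrative only, assuming SciPy is available), integrate in polar coordinates centered at $a=(1,2)$:

```python
import numpy as np
from scipy import integrate

# Integrand (y^3 - 3 x^2 y) times the polar Jacobian r, with (x, y) = (1 + r cos t, 2 + r sin t).
f = lambda r, t: ((2 + r*np.sin(t))**3 - 3*(1 + r*np.cos(t))**2 * (2 + r*np.sin(t))) * r

val, _ = integrate.dblquad(f, 0, 2*np.pi, 0, 3)   # t over [0, 2*pi], r over [0, 3]
print(val, 18*np.pi)   # both approximately 56.549
```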
Prove $\sin \left(1/n\right)$ tends to $0$ as $n$ tends to infinity. I'm sure there is an easy solution to this but my mind has gone blank! Any help on proving that $\sin( \frac{1}{n})\longrightarrow0$ as $n\longrightarrow\infty$ would be much appreciated. This question was set on a course before continuity was introduced, just using basic sequence facts. I should have phrased it as: Find an $N$ such that for all $n > N$ $|\sin(1/n)| < \varepsilon$
Hint $$\quad0 \leq\sin\left(\frac{1}{n}\right)\leq \frac{1}{n}$$
A problem from distribution theory. Let $f$, $g\in C(\Omega)$, and suppose that $f \neq g$ in $C(\Omega)$. How can we prove that $f \neq g$ as distributions? Here's the idea of my proof. $f$ and $g$ are continuous functions, so they will be locally integrable. Now, take any $\phi \neq 0 \in D(\Omega)$. Let us suppose that $\langle T_f,\phi\rangle =\langle T_g,\phi \rangle $ and $f\neq g$. $\langle T_f,\phi \rangle = \langle T_g,\phi \rangle$ $\implies \int_\Omega f(x) \phi(x) dx = \int_\Omega g(x) \phi(x) dx$ $\implies \int_\Omega \phi(x)[f(x)-g(x)] dx =0$ i.e., the area under the above function is zero. We know that $\phi$ is nonzero; we just need to prove that this integral will be zero only when $f(x)-g(x)=0$, and that will contradict our supposition that $f$ and $g$ are not equal. [Please help me prove the last point]
Let me add a little more detail to my comment. You are correct up to the $\int_{\Omega} \phi(x)[f(x)-g(x)]dx = 0$. Now, for $\varepsilon > 0$ and $x_0 \in \Omega$, let $B_{\varepsilon}(x_0)$ denote the ball of radius $\varepsilon$ centered at $x_0$. Then, you can find a test function $\psi$ with the following properties: (i) $\psi(x) > 0 \, \forall x \in B_{\varepsilon}(x_0)$ (ii) $\psi(x) = 0 \, \forall x \notin B_{\varepsilon}(x_0)$ (iii) $\int_{\Omega} \psi = \int_{B_{\varepsilon}(x_0)}\psi = 1.$ Now, choose $\phi = \psi$ and for any $x_0 \in \Omega$ $$ 0 = \int_{\Omega} \psi(x)[f(x)-g(x)]dx = \int_{B_{\varepsilon}(x_0)} \psi(x)[f(x)-g(x)]dx \approx f(x_0)-g(x_0) $$ As $\varepsilon \rightarrow 0$, the equation is exact, i.e. $f(x_0) - g(x_0) = 0$. Since $x_0$ was arbitrary, $f = g$ in $\Omega$, a contradiction.
Prove if $n^2$ is even, then $n$ is even. I am just learning maths, and would like someone to verify my proof. Suppose $n$ is an integer, and that $n^2$ is even. If we add $n$ to $n^2$, we have $n^2 + n = n(n+1)$, and it follows that $n(n+1)$ is even. Since $n^2$ is even, $n$ is even. Is this valid?
We know that $n^2=n\times n$. We also know that: * *even $\times$ even = even *odd $\times$ odd = odd *odd $\times$ even = even Observation $1$: as $n^2$ is even, only the first and third cases can produce it. Observation $2$: in the expression "$n \times n$" both operands are the same number $n$, so only the first and second cases can occur. The only case consistent with both observations is even $\times$ even, so $n$ must be even. Hence proved: if $n^2$ is even, then $n$ is even.
Is There A Function Of Constant Area? If I take a point $(x,y)$ and multiply the coordinates $x\times y$ to find the area $(A)$ defined by the rectangle formed with the axes, then is there a function $f(x)$ so that $xy = A$, regardless of what value of $x$ is chosen?
Take any desired real value $A$, then from $xy = A$, define $$f: \mathbb R\setminus \{0\} \to \mathbb R,\quad f(x) = y = \dfrac Ax$$
What is the answer to this limit what is the limit value of the power series: $$ \lim_{x\rightarrow +\infty} \sum_{k=1}^\infty (-1)^k \frac{x^k}{k^{k-m}}$$ where $m>1$.
For $\lim_{x\rightarrow +\infty} \sum_{k=1}^\infty (-1)^k \frac{x^k}{k^{k-m}}$, the ratio of consecutive terms is $\begin{align} \frac{x^{k+1}}{(k+1)^{k+1-m}}\big/\frac{x^k}{k^{k-m}} &=\frac{x k^{k-m}}{(k+1)^{k+1-m}}\\ &=\frac{x }{k(1+1/k)^{k+1-m}}\\ &=\frac{x }{k(1+1/k)^{k+1}(1+1/k)^{-m}}\\ &\approx\frac{x }{ke(1+1/k)^{-m}}\\ &=\frac{x (1+1/k)^{m}}{ke}\\ \end{align} $ For large $k$ and fixed $m$, $(1+1/k)^{m} \approx 1+m/k$, so the ratio is about $x/(ke)$, so the sum is like $\lim_{x\rightarrow +\infty} \sum_{k=1}^\infty (-1)^k \big(\frac{x}{ek}\big)^k$. Note: I feel somewhat uncomfortable about this conclusion, but I will continue anyway. Interesting how the $m$ goes away (unless I made a mistake, which is not unknown). In the answer pointed to by Mhenni Benghorbal, it is shown that this limit is $-1$, so it seems that this is the limit of this sum also.
When a quotient of a UFD is also a UFD? Let $R$ be a UFD and let $a\in R$ be nonzero element. Under what conditions will $R/aR$ be a UFD? A more specific question: Suppose $R$ is a regular local ring and let $I$ be a height two ideal which is radical. Can we find an element $a\in I$ such that $R/aR$ is a UFD?
This is indeed a complicated question, that has also been much studied. Let me just more or less quote directly from Eisenbud's Commutative Algebra book (all found in Exercise 20.17): The Noether-Lefschetz theorem: if $R = \mathbb{C}[x_1, \ldots x_4]$ is the polynomial ring in $4$ variables over $\mathbb{C}$, then for almost every homogeneous form $f$ of degree $\ge 4$, $R/(f)$ is factorial. On the other hand, in dimension $3$, there is a theorem of Andreotti-Salmon: Let $(R,P)$ be a $3$-dimensional regular local ring, and $0 \ne f \in P$. Then $R/(f)$ is factorial iff $f$ cannot be written as the determinant of an $n \times n$ matrix with entries in $P$, for $n > 1$.
Injectivity of a map between manifolds I'm learning the concept of immersions at the moment. However, I'm a bit confused when they define an immersion as a function $f: X\rightarrow Y$, where $X$ and $Y$ are manifolds with $\dim X < \dim Y$, such that $df_x: T_x(X)\rightarrow T_y(Y)$ is injective. I was wondering why we don't let $f$ be injective and say that's the best we can get for the condition $\dim X < \dim Y$ (since under this condition we can't apply the inverse function theorem)? Also, does injectivity of $df_x$ imply the injectivity of $f$ (it seems that I can't prove it)? How should we picture an immersion (something like the tangent space of $X$ always "immersing" into the tangent space of $Y$)? Thanks for everyone's help!
I think a glance at the Wikipedia article helps. Immersions usually are not injective because the image can appear "knotted" in the target space. I do not think you need $\dim X<\dim Y$ in general. You can define it for $\dim X=\dim Y$; it is only because we need $df_{p}$ to have rank equal to $\dim X$ that you "need" $\dim X\le \dim Y$. I think you can find in classical differential topology/manifold books (like Boothby's) that an immersion is locally the injection map $$\mathbb{R}^{n}\times \{0\}\rightarrow \mathbb{R}^{m+n}$$ You can attempt to prove this via the inverse function theorem or the implicit function theorem. The proof is quite standard.
Does $x^T(M+M^T)x \geq 0 \implies x^TMx \geq 0$ hold in only one direction? I know this is true for the "if" part, but what about the "only if"? Can you give me one example when the "only if" part does not hold? I am not quite sure about this. I forgot to tell you that $M$ is real and $x$ arbitrary.
Well, $x^{T}Mx \geq 0 \implies x^{T}M^{T}x \geq 0$ (by taking the transpose). Hence, $x^{T}(M^{T} + M)x \geq 0$. So this gives you one direction. However, you wrote the converse in your title.
First moment inequality implies tail distribution inequality? Let $U,V$ be two continuous random variables, both with continuous CDF. Suppose that $\mathbb E V \geq \mathbb E U$. Can one conclude that $\mathbb P(V> x) \geq \mathbb P(U>x)$ for all $x\geq 0$? If not, what additional conditions are needed?
Here is an argument which may make you think it's not true: if $X$ is a positive random variable, then $E(X)=\int_0^{+\infty}P(X>t)\,dt$. The fact that $P(U>x)\geqslant P(V>x)$ for all $x$ seems much stronger than $E(U)\geqslant E(V)$.