Evaluating $\lim\limits_{n\to\infty} e^{-n} \sum\limits_{k=0}^{n} \frac{n^k}{k!}$ I'm supposed to calculate: $$\lim_{n\to\infty} e^{-n} \sum_{k=0}^{n} \frac{n^k}{k!}$$ By using WolframAlpha, I might guess that the limit is $\frac{1}{2}$, which is a pretty interesting and nice result. I wonder in which ways we may approach it.
Edited. I justified the application of the dominated convergence theorem. By a simple calculation, $$ \begin{align*} e^{-n}\sum_{k=0}^{n} \frac{n^k}{k!} &= \frac{e^{-n}}{n!} \sum_{k=0}^{n}\binom{n}{k} n^k (n-k)! \\ (1) \cdots \quad &= \frac{e^{-n}}{n!} \sum_{k=0}^{n}\binom{n}{k} n^k \int_{0}^{\infty} t^{n-k}e^{-t} \, dt\\ &= \frac{e^{-n}}{n!} \int_{0}^{\infty} (n+t)^{n}e^{-t} \, dt \\ (2) \cdots \quad &= \frac{1}{n!} \int_{n}^{\infty} t^{n}e^{-t} \, dt \\ &= 1 - \frac{1}{n!} \int_{0}^{n} t^{n}e^{-t} \, dt \\ (3) \cdots \quad &= 1 - \frac{\sqrt{n} (n/e)^n}{n!} \int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du. \end{align*}$$ We remark that * *In $\text{(1)}$, we utilized the famous formula $ n! = \int_{0}^{\infty} t^n e^{-t} \, dt$. *In $\text{(2)}$, the substitution $t + n \mapsto t$ is used. *In $\text{(3)}$, the substitution $t = n - \sqrt{n}u$ is used. Then in view of the Stirling's formula, it suffices to show that $$\int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du \xrightarrow{n\to\infty} \sqrt{\frac{\pi}{2}}.$$ The idea is to introduce the function $$ g_n (u) = \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \mathbf{1}_{(0, \sqrt{n})}(u) $$ and apply pointwise limit to the integrand as $n \to \infty$. This is justified once we find a dominating function for the sequence $(g_n)$. But notice that if $0 < u < \sqrt{n}$, then $$ \log g_n (u) = n \log \left(1 - \frac{u}{\sqrt{n}} \right) + \sqrt{n} u = -\frac{u^2}{2} - \frac{u^3}{3\sqrt{n}} - \frac{u^4}{4n} - \cdots \leq -\frac{u^2}{2}. $$ From this we have $g_n (u) \leq e^{-u^2 /2}$ for all $n$ and $g_n (u) \to e^{-u^2 / 2}$ as $n \to \infty$. Therefore by dominated convergence theorem and Gaussian integral, $$ \int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du = \int_{0}^{\infty} g_n (u) \, du \xrightarrow{n\to\infty} \int_{0}^{\infty} e^{-u^2/2} \, du = \sqrt{\frac{\pi}{2}}. $$
$k$th power of the ideal of germs We denote the set of germs at $m$ by $\bar{F_m}$. A germ $f$ has a well-defined value $f(m)$ at $m$, namely the value at $m$ of any representative of the germ. Let $F_m\subseteq \bar{F_m}$ be the set of germs which vanish at $m$. Then $F_m$ is an ideal of $\bar{F_m}$; let $F_m^k$ denote its $k$th power. Could anyone tell me what the elements of this ideal $F_m^k$ look like? I was told they are all finite linear combinations of $k$-fold products of elements of $F_m$, but I don't get this. I was also told that these ideals form a chain $\bar{F_m}\supsetneq F_m\supsetneq F_m^2\supsetneq\dots$
If $x_1,\ldots,x_n$ are local coordinates at the point $m$, then any smooth germ at $m$ has an associated Taylor series in the coordinates $x_i$. The power $F_m^k$ is precisely the set of germs whose degree $k-1$ Taylor polynomial vanishes, i.e. whose Taylor series has no non-zero terms of degree $< k$.
Well-definedness of the multiplicity of a root of a polynomial over a ring. Let $R$ be a commutative ring with identity, let $\displaystyle f=\sum_{k=0}^{n}a_k x^k \in R[x]$ and $r\in R$. If $f=(x-r)^m g$, $m\in\mathbb{N}$ and $g\in R[x]$ with $g(r)\neq 0$, then the root $r$ is said to have multiplicity $m$. If the multiplicity is $1$ the root is called simple. How does one prove that the above definition of the multiplicity of a root is well defined?
More generally, for $\rm\!\ c\in R\!\ $ any ring, every $\rm\!\ r\ne 0\,$ may be written uniquely in the form $\rm\!\ r = c^n\,\! b,\,$ where $\rm\,c\nmid b,\,$ assuming $\rm\,c\,$ is cancellable, and only $0\,$ is divisible by arbitrarily high powers of $\rm\,c.\,$ Indeed, by hypothesis there exists a largest natural $\rm\,n\,$ such that $\rm\,c^n\,|\,r,\,$ hence $\rm\,r = c^n\, b,\ c\nmid b.\,$ Suppose $\rm\,r = c^k\:\! d,\ c\nmid d.\,$ If $\rm\,k < n,\,$ then cancelling $\rm\,c^k\,$ in $\rm\,c^n\:\! b = c^k\:\!d\,$ yields $\rm\,c^{n-k}\:\!b = d,\,$ so $\rm\,c\,|\,d,\,$ contra hypothesis. Thus $\rm\,k = n,\,$ so cancelling $\rm\,c^k\,$ yields $\rm\,b = d,\,$ thus uniqueness. $\bf\small QED$ Your special case follows by applying the above to the cancellable element $\, {\rm c} = x-r\in R[x],\,$ which clearly satisfies the bounded divisibility hypothesis: $\,(x\!-\!r)^n\,|\,f\ne0\,\Rightarrow\, n\le \deg\ f.$
Cutting cake into 5 equal pieces If a cake is cut into $5$ equal pieces, each piece would be $80$ grams heavier than when the cake is cut into $7$ equal pieces. How heavy is the cake? How would I solve this problem? Do I have to try to find an algebraic expression for this? $5x = 7y + 400$?
The first step is to turn the word problem into an equation; one-fifth of the cake is $80$ grams heavier than one-seventh of the cake, so one-fifth of the cake equals one-seventh of the cake plus 80. "The cake" (specifically its mass) is $x$, and we can work from there: $$\dfrac{x}{5} = \dfrac{x}{7}+80$$ $$\dfrac{x}{5} - \dfrac{x}{7} = 80$$ Here comes the clever bit; multiply each fraction by a form of one that will give both fractions the same denominator: $$\dfrac{7}{7}\cdot \dfrac{x}{5} - \dfrac{5}{5}\cdot\dfrac{x}{7} = 80$$ $$\dfrac{7x}{35} - \dfrac{5x}{35} = 80$$ $$\dfrac{2x}{35} = 80$$ $$2x = 2800$$ $$x = 1400$$ You can check your answer by plugging it into the original equation; if the two sides are indeed equal the answer is correct: $$\dfrac{1400}{5} = \dfrac{1400}{7} + 80$$ $$280 = 200 + 80$$ $$\ \ \ \ \ \ \ \ \ \ \ \ 280 = 280 \ \ \text{<-- yay!}$$
number prime to $bc$ with system of congruences Can you please help me to understand why all numbers $x$ prime to $bc$ are all the solutions of this system? $$\begin{align*} x&\equiv k\pmod{b}\\ x&\equiv t\pmod{c} \end{align*}$$ Here $k$ is prime to $b$, and $t$ is prime to $c$.
Hint $\rm\quad \begin{eqnarray} x\equiv k\,\ (mod\ b)&\Rightarrow&\rm\:(x,b) = (k,b) = 1 \\ \rm x\equiv t\,\,\ (mod\ c)\,&\Rightarrow&\rm\:(x,c) =\, (t,c) = 1\end{eqnarray} \Bigg\}\ \Rightarrow\ (x,bc) = 1\ \ by\ Euclid's\ Lemma$
Probably simple factoring problem I came across this in a friend's 12th grade math homework and couldn't solve it. I want to factor the following trinomial: $$3x^2 -8x + 1.$$ How to solve this is far from immediately clear to me, but it is surely very easy. How is it done?
A standard way of factorizing, when it is hard to guess the factors, is by completing the square. \begin{align} 3x^2 - 8x + 1 & = 3 \left(x^2 - \dfrac83x + \dfrac13 \right)\\ & (\text{Pull out the coefficient of $x^2$})\\ & = 3 \left(x^2 - 2 \cdot \dfrac43 \cdot x + \dfrac13 \right)\\ & (\text{Multiply and divide by $2$ the coefficient of $x$})\\ & = 3 \left(x^2 - 2 \cdot \dfrac43 \cdot x + \left(\dfrac43 \right)^2 - \left(\dfrac43 \right)^2 + \dfrac13 \right)\\ & (\text{Add and subtract the square of half the coefficient of $x$})\\ & = 3 \left(\left(x - \dfrac43 \right)^2 - \left(\dfrac43 \right)^2 + \dfrac13 \right)\\ & (\text{Complete the square})\\ & = 3 \left(\left(x - \dfrac43 \right)^2 - \dfrac{16}9 + \dfrac13 \right)\\ & = 3 \left(\left(x - \dfrac43 \right)^2 - \dfrac{16}9 + \dfrac39 \right)\\ & = 3 \left(\left(x - \dfrac43 \right)^2 - \dfrac{13}9\right)\\ & = 3 \left(\left(x - \dfrac43 \right)^2 - \left(\dfrac{\sqrt{13}}3 \right)^2\right)\\ & = 3 \left(x - \dfrac43 + \dfrac{\sqrt{13}}3\right) \left(x - \dfrac43 - \dfrac{\sqrt{13}}3\right)\\ & (\text{Use $a^2 - b^2 = (a+b)(a-b)$ to factorize}) \end{align} The same idea works in general. \begin{align} ax^2 + bx + c & = a \left( x^2 + \dfrac{b}ax + \dfrac{c}a\right)\\ & = a \left( x^2 + 2 \cdot \dfrac{b}{2a} \cdot x + \dfrac{c}a\right)\\ & = a \left( x^2 + 2 \cdot \dfrac{b}{2a} \cdot x + \left( \dfrac{b}{2a}\right)^2 - \left( \dfrac{b}{2a}\right)^2 + \dfrac{c}a\right)\\ & = a \left( \left( x + \dfrac{b}{2a}\right)^2 - \left( \dfrac{b}{2a}\right)^2 + \dfrac{c}a\right)\\ & = a \left( \left( x + \dfrac{b}{2a}\right)^2 - \dfrac{b^2}{4a^2} + \dfrac{c}a\right)\\ & = a \left( \left( x + \dfrac{b}{2a}\right)^2 - \left(\dfrac{b^2-4ac}{4a^2} \right)\right)\\ & = a \left( \left( x + \dfrac{b}{2a}\right)^2 - \left(\dfrac{\sqrt{b^2-4ac}}{2a} \right)^2\right)\\ & = a \left( x + \dfrac{b}{2a} + \dfrac{\sqrt{b^2-4ac}}{2a}\right) \left( x + \dfrac{b}{2a} - \dfrac{\sqrt{b^2-4ac}}{2a}\right)\\ \end{align}
A question about the harmonic form of trigonometric functions. The question is: for $12\cos(a)-9\sin(a)$, i) find the maximum and minimum values, and ii) the smallest non-negative value of $a$ for which each occurs. I think the expression should be changed into the form $R\cos(a+x)$, which gives $15\cos(a+36.87^\circ)$, and I get the answers i) $+15$ / $-15$, ii) $323.13$ ($360-36.87$) / $143.13$ ($180-36.87$). But the answer given by the book is "i) $15$, $-15$; ii) $306.87$, $143.13$". I'm really confused by that answer. Am I wrong? BTW, I'm self-studying A-level further pure mathematics, but the book I have (written by Brian and Mark Gaulter, published by Oxford University Press) doesn't seem very helpful, so I truly hope someone can recommend some books/websites for self-learning.
We review the (correct) procedure that you went through. We have $12^2+9^2=15^2$, so we rewrite our expression as $$15\left(\frac{12}{15}\cos a -\frac{9}{15}\sin a\right).$$ Now if $b$ is any angle whose cosine is $\frac{12}{15}$ and whose sine is $\frac{9}{15}$, we can rewrite our expression as $$15\left(\cos a \cos b -\sin a \sin b\right),$$ that is, as $15\cos(a+b)$. The maximum value of the cosine function is $1$, and the minimum is $-1$. So the maximum and minimum of our expression are $15$ and $-15$ respectively. The only remaining problem is to decide on the appropriate values of $a$. For the maximum, $a+b$ should be (in degrees) one of $0$, $360$, $-360$, $720$, $-720$, and so on. The angle $b$ is about $36.87$ plus or minus a multiple of $360$. So we can get the desired kind of sum $a+b$ by choosing $a\approx 360-36.87$, about $323.13$. It is not hard to do a partial verification our answer by calculator. If you compute $12\cos a -9\sin a$ for the above value of $a$, you will get something quite close to $15$. The book's value gives something smaller, roughly $14.4$. The book's value is mistaken. It was obtained by pressing the wrong button on the calculator, $\sin^{-1}$ instead of $\cos^{-1}$. For the minimum, we want $a+b$ to be $180$ plus or minus a multiple of $360$. Thus $a$ is approximately $180-36.87$.
Compute: $\sum_{k=1}^{\infty}\sum_{n=1}^{\infty} \frac{1}{k^2n+2nk+n^2k}$ I am trying to evaluate the following sum: $$\sum_{k=1}^{\infty}\sum_{n=1}^{\infty} \frac{1}{k^2n+2nk+n^2k}$$ I'm very curious about the possible approaches that lead to a solution. I'm not experienced with these sums, and any hint or suggestion is very welcome. Thanks.
Here's another approach. It depends primarily on the properties of telescoping series, partial fraction expansion, and the following identity for the $m$th harmonic number $$\begin{eqnarray*} \sum_{k=1}^\infty \frac{1}{k(k+m)} &=& \frac{1}{m}\sum_{k=1}^\infty \left(\frac{1}{k} - \frac{1}{k+m}\right) \\ &=& \frac{1}{m}\sum_{k=1}^m \frac{1}{k} \\ &=& \frac{H_m}{m}, \end{eqnarray*}$$ where $m=1,2,\ldots$. Then, $$\begin{eqnarray*} \sum_{k=1}^{\infty}\sum_{n=1}^{\infty} \frac{1}{k^2n+2nk+n^2k} &=& \sum_{k=1}^{\infty} \frac{1}{k} \sum_{n=1}^{\infty} \frac{1}{n(n+k+2)} \\ &=& \sum_{k=1}^{\infty} \frac{1}{k} \frac{H_{k+2}}{k+2} \\ &=& \frac{1}{2} \sum_{k=1}^{\infty} \left( \frac{H_{k+2}}{k} - \frac{H_{k+2}}{k+2} \right) \\ &=& \frac{1}{2} \sum_{k=1}^{\infty} \left( \frac{H_k +\frac{1}{k+1}+\frac{1}{k+2}}{k} - \frac{H_{k+2}}{k+2} \right) \\ &=& \frac{1}{2} \sum_{k=1}^{\infty} \left( \frac{H_k}{k} - \frac{H_{k+2}}{k+2} \right) + \frac{1}{2} \sum_{k=1}^{\infty} \left(\frac{1}{k(k+1)} + \frac{1}{k(k+2)}\right) \\ &=& \frac{1}{2}\left(H_1 + \frac{H_2}{2}\right) + \frac{1}{2}\left(H_1 + \frac{H_2}{2}\right) \\ &=& \frac{7}{4}. \end{eqnarray*}$$
what is the use of derivatives Can anyone explain to me what the use of derivatives is in real life? When and where do we use derivatives? I know they can be used to find rates of change, but why? My thinking was that in real life most of the functions we deal with are not linear, and derivatives help turn a real-life function into a linear one, e.g. converting a parabola into a linear function: $x^{2}\rightarrow 2x$. But then I find this: the derivative of $\sin{x}$ is $\cos{x}$. Why can't we use $\sin{x}$ itself to solve the equation? What is the purpose of using its derivative $\cos{x}$? Please forgive me if I have asked a stupid question; I want to improve my fundamentals in calculus.
There are also economic interpretations of derivatives. For example, suppose there is a function which measures the utility from consumption, $U(C)$, where $C$ is consumption. It is natural to say that your utility increases with consumption: when you increase your consumption by one marginal unit, your utility increases, and the rate of that increase is found by taking the derivative of the function. (It is not a partial derivative because the only argument of the function is consumption.) So you have $\frac{dU(C)}{dC} > 0 $. Intuitively, the more you consume, the smaller the additional utility from each extra unit. Take the simplest example: you have one coke and you drink it. The second coke, drunk just after, still provides utility, but less. The growth of your utility slows down the more you consume. This is captured by the second derivative of the utility function: $\frac{d^{2}U(C)}{dC^{2}} < 0$
Tensor product of sets The cartesian product of two sets $A$ and $B$ can be seen as a tensor product. Are there examples for the tensor product of two sets $A$ and $B$ other than the usual cartesian product ? The context is the following: assume one has a set-valued presheaf $F$ on a monoidal category, knowing $F(A)$ and $F(B)$ how does one define $F(A \otimes B)$ ?
Simply being in a monoidal category is a rather liberal condition on the tensor product; it tells you very little about what the tensor product actually looks like. Here is a (perhaps slightly contrived?) example: Let $C$ be the category of vector spaces over a finite field $\mathbb F_p$ with linear transformations. The vector space tensor product makes this into a monoidal category with $\mathbb F_p$ itself as the unit. $C^{op}$ is then also a monoidal category, and the ordinary forgetful functor is a Set-valued presheaf on $C^{op}$. However, $F(A\otimes B)$ cannot be the cartesian product $F(A)\times F(B)$, because $F(A)\times F(B)$ has the wrong cardinality when $A$ and $B$ are finite.
If $a$ in $R$ is prime, then $(a+P)$ is prime in $R/P$. Let $R$ be a UFD and $P$ a prime ideal. Here we are defining a UFD with primes and not irreducibles. Is the following true and what is the justification? If $a$ in $R$ is prime, then $(a+P)$ is prime in $R/P$.
I think that's wrong. If $ a \in P $ holds, then $ a + P = 0 + P \in R/P$, and $0$ is not prime, so $a+P$ is not prime. (For instance, take $R=\mathbb{Z}$, $P=(2)$ and $a=2$.)
Intersection of compositum of fields with another field Let $F_1$, $F_2$ and $K$ be fields of characteristic $0$ such that $F_1 \cap K = F_2 \cap K = M$, the extensions $F_i / (F_i \cap K)$ are Galois, and $[F_1 \cap F_2 : M ]$ is finite. Then is $[F_1 F_2 \cap K : M]$ finite?
No. First, the extension $\mathbb{C}(z)/\mathbb{C}$ is Galois for any $z$ that is transcendental over $\mathbb{C}$: if $p(z)$ is a nonconstant polynomial, and $\alpha$ is a root and $\beta$ is a nonroot, then the automorphism $\sigma\colon z\mapsto z+\alpha-\beta$ maps $z-\alpha$ to $z-\beta$, so that $p(z)$ has a factor of $z-\alpha$ and no factor of $z-\beta$ but $\sigma p$ has a factor of $z-\beta$, so $p\neq\sigma p$. Thus, no polynomial is fixed by all automorphisms. If $\frac{p(z)}{q(z)}$ is a rational function with $q(z)$ nonconstant, then a similar argument shows that we can find a $\sigma$ that "moves the zeros" of $q$, so that $p(z)/q(z)$ will have a pole at $\alpha$ but no pole at $\beta$, while $\sigma p/\sigma q$ has a pole at $\beta$. Thus, the fixed field of $\mathrm{Aut}(\mathbb{C}(z)/\mathbb{C})$ is $\mathbb{C}$, hence the extension is Galois. Now, let $x$ and $y$ be transcendental over $\mathbb{C}$. Take $F_1=\mathbb{C}(x)$, $F_2=\mathbb{C}(y)$, $K=\mathbb{C}(xy)$, all subfields of $\mathbb{C}(x,y)$. Then $M=\mathbb{C}$, so $F_i/M$ is Galois; $F_1\cap F_2=\mathbb{C}$, so $[F_1\cap F_2\colon M]=1$. But $F_1F_2\cap K = \mathbb{C}(x,y)\cap \mathbb{C}(xy) = \mathbb{C}(xy)$, and $\mathbb{C}(xy)$ is of infinite degree over $M=\mathbb{C}$
Tangent line of parametric curve I have not seen a problem like this, so I have no idea what to do. Find an equation of the tangent to the curve at the given point by two methods: without eliminating the parameter and with. $$x = 1 + \ln t,\;\; y = t^2 + 2;\;\; (1, 3)$$ I know that $$\dfrac{dy}{dx} = \dfrac{\; 2t\; }{\dfrac{1}{t}}$$ But this gives a very wrong answer. I am not sure what a parameter is or how to eliminate it.
One way to do this is by considering the parametric form of the curve: $(x,y)(t) = (1 + \log t, t^2 + 2)$, so $(x,y)'(t) = (\frac{1}{t}, 2t)$. We need to find the value of $t$ when $(x,y)(t) = (1 + \log t, t^2 + 2) = (1,3)$, from which we deduce $t=1$. The tangent line at $(1,3)$ has direction vector $(x,y)'(1) = (1,2)$, and since it passes through the point $(1,3)$ its parametric equation is given by: $s \mapsto (1,2)s + (1,3)$. Another way (I suppose this is eliminating the parameter) would be to express $y$ in terms of $x$ (this can't be done for every curve, but in this case it is possible). We solve for $t$: $x = 1 + \log t \Rightarrow t = e^{x-1}$, so $y = t^2 + 2 = (e^{x-1})^2 + 2 = e^{2x-2} + 2$. The tangent line has slope $\frac{dy}{dx}=y_x$ evaluated at $1$: we have $y_x=2e^{2x-2}$ and $y_x(1)=2e^0 = 2$, so the line has equation $y=2x +b$. Also, it passes through the point $(1,3)$, so we can solve for $b$: $3 = 2 \cdot 1 + b \Rightarrow b = 1$. Then, the equation of the tangent line is $y = 2x + 1$. Note that $s \mapsto (1,2)s + (1,3)$ and $y = 2x + 1$ are the same line, expressed in different forms.
A $C^{\infty}$ function from $\mathbb{R}^2$ to $\mathbb{R}$ Could anyone help me show that a $C^{\infty}$ function from $\mathbb{R}^2$ to $\mathbb{R}$ cannot be injective?
If we remove three points from the domain $\mathbb{R}^2$, it remains connected, and a continuous map sends connected sets to connected sets. In $\mathbb{R}$ the connected sets are intervals, and if we remove three points from an interval (at least one of them must be an interior point) it becomes disconnected. So there cannot exist a continuous injective function from $\mathbb{R}^2$ to $\mathbb{R}$.
Limit of exponentials Why is $n^n (n+m)^{-{\left(n+m\over 2\right)}}(n-m)^{-{\left(n-m\over 2\right)}}$ asymptotically equal to $\exp\left(-{m^2\over 2n}\right)$ as $n,m\to \infty$?
Let $m = n x$. Take the logarithm: $$ n \log(n) - n \frac{1+x}{2} \left(\log n + \log\left(1+x\right) \right) - n \frac{1-x}{2} \left( \log n + \log\left(1-x\right) \right) $$ Notice that all the terms with $\log(n)$ cancel out, so we are left with $$ -\frac{n}{2} \left( (1+x) \log(1+x) + (1-x) \log(1-x) \right) $$ It seems like you need to assume that $x$ is small here, meaning that $ m \ll n$. Then, using the Taylor series expansion of the logarithm: $$ (1+x) \log(1+x) + (1-x) \log(1-x) = (1+x) \left( x - \frac{x^2}{2} + \mathcal{o}(x^2) \right) + (1-x) \left(-x - \frac{x^2}{2} + \mathcal{o}(x^2)\right) = x^2 + \mathcal{o}(x^3) $$ Hence the original expression, asymptotically, equals $$ \exp\left( -\frac{n}{2} x^2 + \mathcal{o}(n x^3)\right) = \exp\left(- \frac{m^2}{2n} + \mathcal{o}\left(\frac{m^3}{n^2}\right) \right) $$
Looking for some simple topological spaces such that $nw(X)\le\omega$ and $|X|>2^\omega$ I believe there are some topological spaces whose network weight is at most $\omega$ and whose cardinality is more than $2^\omega$ (not equal to $2^\omega$), even much larger. * *Network: a family $N$ of subsets of a topological space $X$ is a network for $X$ if for every point $x \in X$ and any nbhd $U$ of $x$ there exists an $M \in N$ such that $x \in M \subset U$. Here I want to look for some simple spaces of this kind which are familiar to us. However, a slightly more complex space is also welcome! Thanks for any help:)
If $X$ is $T_0$ and $nw(X)=\omega$ (network weight) then $|X|\leq 2^\omega$. Let $\mathcal N$ be a countable network. For each $x\in X$ consider $N_x=\{N\in\mathcal N: x\in N\}$. Since $X$ is $T_0$ it follows that $N_x\ne N_y$ for $x\ne y$. Thus, $|X|\leq |P(\mathcal N)|=2^\omega$.
Does an injective endomorphism of a finitely-generated free R-module have nonzero determinant? Alternately, let $M$ be an $n \times n$ matrix with entries in a commutative ring $R$. If $M$ has trivial kernel, is it true that $\det(M) \neq 0$? This math.SE question deals with the case that $R$ is a polynomial ring over a field. There it was observed that there is a straightforward proof when $R$ is an integral domain by passing to the fraction field. In the general case I have neither a proof nor a counterexample. Here are three general observations about properties that a counterexample $M$ (trivial kernel but zero determinant) must satisfy. First, recall that the adjugate $\text{adj}(M)$ of a matrix $M$ is a matrix whose entries are integer polynomials in those of $M$ and which satisfies $$M \text{adj}(M) = \det(M).$$ If $\det(M) = 0$ and $\text{adj}(M) \neq 0$, then some column of $\text{adj}(M)$ lies in the kernel of $M$. Thus: If $M$ is a counterexample, then $\text{adj}(M) = 0$. When $n = 2$, we have $\text{adj}(M) = 0 \Rightarrow M = 0$, so this settles the $2 \times 2$ case. Second observation: recall that by Cayley-Hamilton $p(M) = 0$ where $p$ is the characteristic polynomial of $M$. Write this as $$M^k q(M) = 0$$ where $q$ has nonzero constant term. If $q(M) \neq 0$, then there exists some $v \in R^n$ such that $w = q(M) v \neq 0$, hence $M^k w = 0$ and one of the vectors $w, Mw, M^2 w,\dots, M^{k-1} w$ necessarily lies in the kernel of $M$. Thus if $M$ is a counterexample we must have $q(M) = 0$ where $q$ has nonzero constant term. Now for every prime ideal $P$ of $R$, consider the induced action of $M$ on $F^n$, where $F = \overline{ \text{Frac}(R/P) }$. Then $q(\lambda) = 0$ for every eigenvalue $\lambda$ of $M$. Since $\det(M) = 0$, one of these eigenvalues over $F$ is $0$, hence it follows that $q(0) \in P$. Since this is true for all prime ideals, $q(0)$ lies in the intersection of all the prime ideals of $R$, hence If $M$ is a counterexample and $q$ is defined as above, then $q(0)$ is nilpotent. This settles the question for reduced rings. Now, $\text{det}(M) = 0$ implies that the constant term of $p$ is equal to zero, and $\text{adj}(M) = 0$ implies that the linear term of $p$ is equal to zero. It follows that if $M$ is a counterexample, then $M^2 \mid p(M)$. When $n = 3$, this implies that $$q(M) = M - \lambda$$ where $\lambda$ is nilpotent, so $M$ is nilpotent and thus must have nontrivial kernel. So this settles the $3 \times 3$ case. Third observation: if $M$ is a counterexample, then it is a counterexample over the subring of $R$ generated by the entries of $M$, so We may assume WLOG that $R$ is finitely-generated over $\mathbb{Z}$.
Lam's Exercises in Modules and Rings includes McCoy's theorem: over a commutative ring, a square homogeneous linear system $Mx=0$ has a nontrivial solution if and only if $\det(M)$ is a zero-divisor. This tells us that your determinant is not a zero-divisor, and in particular is nonzero. The paper where McCoy does that is [Remarks on divisors of zero, MAA Monthly 49 (1942), 286--295]. If you have JStor access, this is at http://www.jstor.org/stable/2303094 There is a pretty corollary there: a square matrix is a zero-divisor in the ring of matrices over a commutative ring iff its determinant is a zero divisor.
What is the total number of combinations of 5 items together when there are no duplicates? I have 5 categories - A, B, C, D & E. I want to basically create groups that reflect every single combination of these categories without there being duplicates. So groups would look like this: * *A *B *C *D *E *A, B *A, C *A, D *A, E *B, C *B, D *B, E *C, D . . . etc. This sounds like something I would use the binomial coefficient $n \choose r$ for, but I am quite fuzzy on calculus and can't remember exactly how to do this. Any help would be appreciated. Thanks.
There are $\binom{5}{1}$ combinations with 1 item, $\binom{5}{2}$ combinations with $2$ items,... So, you want : $$\binom{5}{1}+\cdots+\binom{5}{5}=\left(\binom{5}{0}+\cdots+\binom{5}{5}\right)-1=2^5-1=31$$ I used that $$\sum_{k=0}^n\binom{n}{k}=(1+1)^n=2^n$$
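If you also want to list (not just count) the groups programmatically, here is a small sketch in Python (my own addition; the category names are just placeholders):

```python
from itertools import combinations

categories = ["A", "B", "C", "D", "E"]
groups = [combo for r in range(1, len(categories) + 1)
          for combo in combinations(categories, r)]
print(len(groups))   # 31 = 2^5 - 1 non-empty groups
```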
Course for self-study I have basically completed a good deal of single-variable calculus from Spivak's Calculus, and since I leave school in May next year, I intend to put in some effort to pick up college mathematics. I am a bit confused as to what to study next. I did buy Herstein's Topics in Algebra. Question: So can anyone please tell me what I should study and in what order, or what constitutes a coherent course of study? I am open to various suggestions! Thanks in advance.
Hoffman and Kunze is a great book. If you understand the abstractions in Spivak's book, you should be able to handle it. The problems are great. I have worked many of them. Go there next.
How can I convert between powers for different number bases? I am writing a program to convert between megabits per second and mebibits per second; a user would enter 1 Mebibit p/s and get 1.05 Megabits p/s as the output. These are two units of computer data transfer rate. A megabit (SI decimal unit of measurement) is 1,000,000 bits, or 10^6. A mebibit (IEC binary prefix) is 1,048,576 bits, or 2^20. A user will specify if they have given a number in mega- or mebi-bits per second. So I need to know: how can I convert between these two powers? If the user inputs "1" and selects "mebibits" as the unit, how can I convert from this base 2 number system to the base 10 number system for "megabits"? Thank you for reading.
If you have $x \text{ Mebibits p/s}$, since a Mebibit is $\displaystyle \frac{2^{20}}{10^6} = 1.048576$ Megabits, you have to multiply by $1.048576$, getting $1.048576x \text{ Megabits p/s}$. Likewise, if you have $y\text{ Megabits p/s}$, since a Megabit is $\displaystyle \frac{10^6}{2^{20}} = 0.95367431640625$ Mebibits, you have to multiply by $0.95367431640625$, getting $0.95367431640625y\text{ Mebibits p/s}$. Round up as necessary. To find these conversion factors, you can see that a Mebibit is $2^{20}$ bits, and a Megabit is $10^6$ bits. Therefore a Mebibit is $\displaystyle 2^{20} \text{ bits} = \frac{2^{20}}{10^6} \text{ Megabits}$, and the other direction is analogous.
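A minimal sketch of the conversion in code (my own addition; the function and constant names are arbitrary, and the input is assumed to be already parsed):

```python
MEGABITS_PER_MEBIBIT = 2**20 / 10**6     # 1.048576
MEBIBITS_PER_MEGABIT = 10**6 / 2**20     # 0.95367431640625

def to_megabits_per_s(value, unit):
    """Convert a data rate to megabits per second."""
    if unit == "mebibits":
        return value * MEGABITS_PER_MEBIBIT
    return value                          # already in megabits

print(to_megabits_per_s(1, "mebibits"))   # 1.048576
```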
What is the distribution of a random variable that is the product of the two normal random variables ? What is the distribution of a random variable that is the product of the two normal random variables ? Let $X\sim N(\mu_1,\sigma_1), Y\sim N(\mu_2,\sigma_2)$ and $Z=XY$ That is, what is its probability density function, its expected value, and its variance ? I'm kind of stuck and I can't find a satisfying answer on the web. If anybody knows the answer, or a reference or link, I would be really thankful...
For the special case that both Gaussian random variables $X$ and $Y$ have zero mean and unit variance, and are independent, the answer is that $Z=XY$ has the probability density $p_Z(z)={\rm K}_0(|z|)/\pi$. The brute force way to do this is via the transformation theorem: \begin{align} p_Z(z)&=\frac{1}{2\pi}\int_{-\infty}^\infty{\rm d}x\int_{-\infty}^\infty{\rm d}y\;{\rm e}^{-(x^2+y^2)/2}\delta(z-xy) \\ &= \frac{1}{\pi}\int_0^\infty\frac{{\rm d}x}{x}{\rm e}^{-(x^2+z^2/x^2)/2}\\ &= \frac{1}{\pi}{\rm K}_0(|z|) \ . \end{align}
What's an intuitive explanation of the max-flow min-cut theorem? I'm about to read the proof of the max-flow min-cut theorem that helps solve the maximum network flow problem. Could someone please suggest an intuitive way to understand the theorem?
Imagine a complex pipeline with a common source and common sink. You start to pump the water up, but you can't exceed some maximum flow. Why is that? Because there is some kind of bottleneck, i.e. a subset of pipes that transfer the fluid at their maximum capacity--you can't push more through. This bottleneck will be precisely the minimum cut, i.e. the set of edges that block the flow. Please note that there may be more than one minimum cut. If you find one, then you know the maximum flow; knowing the maximum flow, you know the capacity of the cut. Hope that explains something ;-)
Irreducible polynomial over an algebraically closed field of characteristic distinct from 2 Let $k$ be an algebraically closed field such that $\textrm{char}(k) \neq 2$ and let $n$ be a fixed positive integer greater than $3$. Suppose that $m$ is a positive integer such that $3 \leq m \leq n$. Is it always true that $f(x_{1},x_{2},\ldots,x_{m})=x_{1}^{2}+x_{2}^{2}+\cdots+x_{m}^{2}$ is irreducible in $k[x_{1},x_{2},\ldots,x_{n}]$? I think yes. For $m=3$ we need to check that $f(x,y,z)=x^{2}+y^{2}+z^{2}$ is irreducible, yes? Can't we use Eisenstein as follows? Note $y+iz$ divides $y^{2}+z^{2}$, $y+iz$ is irreducible in $k[y,z]$, and $(y+iz)^{2}$ does not divide $y^{2}+z^{2}$. Therefore $f(x,y,z)=x^{2}+y^{2}+z^{2}$ is irreducible. Now we induct on $m$. Suppose the result holds for $m$ and let us show it holds for $m+1$. So we must look at the polynomial $x_{1}^{2}+\cdots+x_{m}^{2}+x_{m+1}^{2}$. Consider the ring $k[x_{m+1}][x_{1},..,x_{m}]$: we have a monic polynomial, by hypothesis $x_{1}^{2}+\cdots+x_{m}^{2}$ is irreducible in $k[x_{1},\ldots,x_{m}]$, and $(x_{1}^{2}+\cdots+x_{m}^{2} )^{2}$ does not divide $x_{1}^{2}+\cdots+x_{m}^{2}$, so Eisenstein applies again and we are done. Question(s): Is this OK? If not, can you please provide a proof?
Let $A$ be a UFD. Let $a$ be a non-zero square-free non-unit element of $A$. Then $X^n - a \in A[X]$ is irreducible by Eisenstein. $Y^2 + Z^2 = (Y + iZ)(Y - iZ)$ is square-free in $k[Y, Z]$. Hence $X^2 + Y^2 + Z^2$ is irreducible in $k[X, Y, Z]$ by the above result. Let $m \gt 2$. By the induction hypothesis, $X_{1}^{2}+\cdots+X_{m}^{2}$ is irreducible in $k[X_{1},\ldots,X_{m}]$. Hence $X_{1}^{2}+\cdots+X_{m+1}^{2}$ is irreducible in $k[X_{1},\ldots,X_{m+1}] $by the above result.
Algorithm for computing Smith normal form in an algebraic number field of class number 1 Let $K$ be an algebraic number field of class number 1. Let $\frak{O}$ be the ring of algebraic integers in $K$. Let $A$ be a nonzero $m\times n$ matrix over $\frak{O}$. Since $\frak{O}$ is a PID, $A$ has Smith normal form $S$. I'm looking for an algorithm to compute $S$. It seems to me that we need to solve Bézout's identity. If it's too difficult, we may assume K is a quadratic number field of class number 1.
This is routine and is already implemented in several computer algebra systems, including Sage, Pari and (I think) Magma. (I wrote the Sage version some while back). As you point out, the standard existence proof for Smith form is completely algorithmic once you know how to find a GCD of two elements, which any of the above packages will do for you.
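For illustration, a short Sage session along these lines might look as follows (a sketch only: I am assuming the number-field `smith_form` mentioned above follows the same calling convention as the integer case, and the matrix entries are arbitrary examples):

```python
# Run inside Sage. Z[i] is the ring of integers of Q(i), which has class number 1.
K = QuadraticField(-1, 'i')
i = K.gen()
OK = K.ring_of_integers()
A = matrix(OK, 2, 2, [2, 1 + i, 1 - i, 3])
D, U, V = A.smith_form()   # expected: D diagonal with D == U*A*V (check the Sage docs for the exact convention)
print(D)
```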
When does the vanishing wedge product of two forms require one form to be zero? Let $\alpha$ and $\beta$ be two complex $(1,1)$ forms defined as: $\alpha = \alpha_{ij} dx^i \wedge d\bar x^j$ $\beta= \beta_{ij} dx^i \wedge d\bar x^j$ Let's say, I know the following: 1) $\alpha \wedge \beta = 0$ 2) $\beta \neq 0$ I want to somehow show that the only way to achieve (1) is by forcing $\alpha = 0$. Are there general known conditions on the $\beta_{ij}$ for this to happen? The only condition I could think of is if all the $\beta_{ij}$ are the same. However, this is a bit too restrictive. I'm also interested in the above problem when $\beta$ is a $(2,2)$ form.
\begin{eqnarray} \alpha\wedge\beta &=&\sum_{i,j,k,l}\alpha_{ij}\beta_{kl}dx^i\wedge d\bar{x}^j\wedge dx^k\wedge d\bar{x}^l\cr &=&(\sum_{i<k,j<l}+\sum_{i<k,l<j}+\sum_{k<i,j<l}+\sum_{k<i,l<j})\alpha_{ij}\beta_{kl}dx^i\wedge d\bar{x}^j\wedge dx^k\wedge d\bar{x}^l\cr &=&\sum_{i<k,j<l}(-\alpha_{ij}\beta_{kl}+\alpha_{kj}\beta_{il}+\alpha_{il}\beta_{kj}- \alpha_{kl}\beta_{ij})dx^i\wedge dx^k\wedge d\bar{x}^j\wedge d\bar{x}^l. \end{eqnarray} Thus $$ \alpha\wedge\beta=0 \iff \alpha_{ij}\beta_{kl}+ \alpha_{kl}\beta_{ij}=\alpha_{kj}\beta_{il}+\alpha_{il}\beta_{kj} \quad \forall \ i<k,j<l. $$
If a function has a finite limit at infinity, does that imply its derivative goes to zero? I've been thinking about this problem: Let $f: (a, +\infty) \to \mathbb{R}$ be a differentiable function such that $\lim\limits_{x \to +\infty} f(x) = L < \infty$. Then must it be the case that $\lim\limits_{x\to +\infty}f'(x) = 0$? It looks like it's true, but I haven't managed to work out a proof. I came up with this, but it's pretty sketchy: $$ \begin{align} \lim_{x \to +\infty} f'(x) &= \lim_{x \to +\infty} \lim_{h \to 0} \frac{f(x+h)-f(x)}{h} \\ &= \lim_{h \to 0} \lim_{x \to +\infty} \frac{f(x+h)-f(x)}{h} \\ &= \lim_{h \to 0} \frac1{h} \lim_{x \to +\infty}[f(x+h)-f(x)] \\ &= \lim_{h \to 0} \frac1{h}(L-L) \\ &= \lim_{h \to 0} \frac{0}{h} \\ &= 0 \end{align} $$ In particular, I don't think I can swap the order of the limits just like that. Is this correct, and if it isn't, how can we prove the statement? I know there is a similar question already, but I think this is different in two aspects. First, that question assumes that $\lim\limits_{x \to +\infty}f'(x)$ exists, which I don't. Second, I also wanted to know if interchanging limits is a valid operation in this case.
Let a function oscillate between $y=1/x$ and $y=-1/x$ in such a way that its slope oscillates between $1$ and $-1$. Draw the picture. It's easy to see that such functions exist. Then the function approaches $0$ but the slope doesn't approach anything. One could ask: If the derivative also has a limit, must it be $0$? And there, I think, the answer is "yes".
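For an explicit formula exhibiting this kind of behavior (my own addition, not part of the original answer), take $$f(x)=\frac{\sin(x^2)}{x},\qquad f'(x)=2\cos(x^2)-\frac{\sin(x^2)}{x^2}:$$ here $f(x)\to 0$ as $x\to+\infty$, while $f'(x)$ keeps oscillating (roughly between $-2$ and $2$) and has no limit.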
Techniques of Integration w/ absolute value I cannot solve this integral. $$ \int_{-3}^3 \frac{x}{1+|x|} ~dx $$ I have tried rewriting it as: $$ \int_{-3}^3 1 - \frac{1}{x+1}~dx, $$ from which I obtain: $$ x - \ln|x+1|\;\;\bigg\vert_{-3}^{\;3} $$ My book (Stewart's Calculus 7e) has the answer as 0, and I can intuitively see this from a graph of the function, it being symmetric about the origin, but I cannot analytically solve this.
You have an odd function integrated over an interval symmetric about the origin. The answer is $0$. This is, as you point out, intuitively reasonable from the geometry. A general analytic argument is given in a remark at the end. If you really want to calculate, break up the region into two parts: (i) where $x$ is positive and (ii) where $x$ is negative. For $x$ positive, you are integrating $\frac{x}{1+x}$. For $x$ negative, you are integrating $\frac{x}{1-x}$, since for $x\le 0$ we have $|x|=-x$. Finally, calculate $$\int_{-3}^0 \frac{x}{1-x}\,dx\quad\text{and}\quad\int_0^3 \frac{x}{1+x}\,dx$$ and add. All that work for nothing! Remarks: $1.$ In general, when absolute values are being integrated, breaking up into appropriate parts is the safe way to go. Trying to handle both at the same time carries too high a probability of error. $2.$ Let $f(x)$ be an odd function. We want to integrate from $-a$ to $a$, where $a$ is positive. We look at $$\int_{-a}^0 f(x)\,dx.$$ Let $u=-x$. Then $du=-dx$, and $f(u)=-f(x)$. So our integral is $$\int_{u=a}^0 (-1)(-f(u))\,du.$$ The two minus signs cancel. Changing the order of the limits, we get $-\int_0^a f(u)\,du$, so this is just the negative of the integral over the interval from $0$ to $a$. That gives us the desired cancellation.
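For the record (my own completion of the computation suggested above), carrying out the two integrals confirms the cancellation: $$\int_0^3 \frac{x}{1+x}\,dx=\int_0^3\left(1-\frac{1}{1+x}\right)dx=3-\ln 4, \qquad \int_{-3}^0 \frac{x}{1-x}\,dx=\int_{-3}^0\left(-1+\frac{1}{1-x}\right)dx=-3+\ln 4,$$ and the two results add to $0$.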
Dixon's random squares algorithm: a step in the proof of its subexp. running time I am currently working to understand Dixon's running time proof of his subexponential integer factorization algorithm (random squares). But unfortunately I cannot follow him at a certain point in his work. His work is available for free here: http://www.ams.org/journals/mcom/1981-36-153/S0025-5718-1981-0595059-1/home.html My problem occurs on the last page (page 6 of the file, named page 260), where he states that in case of success the execution of step 1 is bounded by 4hv+2, where h is the number of primes smaller than v, and v is a fixed integer depending on n (the number we want to factorize). I really do not have a clue where this bound comes from. It might have something to do with the expected values of some steps in the algorithm, but this bound seems not to be probabilistic as far as I understand him. I guess this problem might only be comprehensible when you have already read the whole paper. I am hoping to fluke here. Best regards! Robert
The $4hv+2$ bound is indeed deterministic, but only applies when we're not in a "bad" case. So the question we need to ask is how "bad" cases are defined. I think the idea of the author is that the previous paragraph can be read by relaxing the condition $N=v^2+1$ to any $N\ge 4hv$, and in particular $N=4hv$. But then we need to change the last line of the paragraph: $$2c^{-1}+2^{-h}=O(vN^{-1})+O(n^{-1})=O(h^{-1})$$ This means that we also need to change the next sentence so that it reads All but $O(h^{-1})$ of the $A_L$ will have $N_1\le 4hv+2$. instead of $O(v^{-1})$. I don't think there is any reason for choosing $4hv+2$ instead of $4hv+1$. But this doesn't matter in the grand scheme of things, because it still allows us to write the bound $$\begin{align} &O(h^{-1}(N+1)h\log n + (4hv+2)h\log n)\quad\text{(*)}\\ =&O(N\log n)+O(vh^2\log n)\\ =&O(vh^2\log n)\\ =&O(v^3) \end{align}$$ (*): in the original text, $n\log n$ is a typo and should of course read $h\log n$.
Preimage of a point by a non-constant harmonic function on $\mathbb{R}$ is unbounded Let $u$ be a non-constant harmonic function on $\mathbb{R}$. Show that $u^{-1}(c)$ is unbounded. I cannot see which theorem or result to apply. Could anyone help me?
Let $u(x)=x$, for all $x \in \mathbb{R}$. Then $u''(x)=0$ for all $x$. But $u^{-1}(\{c\}) = \{c\}$. I think somebody is cheating you :-)
how to change polar coordinate into cartesian coordinate using transformation matrix I would like to change $(3,4,12)$ in $xyz$ coordinates to spherical coordinates using the following relation (it is from this link). I do not understand the significance of this matrix (if not for coordinate transformation) or how it is derived. Also please check my previous question, building transformation matrix from spherical to cartesian coordinate system. Please, I need your insight on building my concepts. Thank you. EDIT: I understand that $ \left [ A_x \sin \theta\cos \phi \hspace{5 mm} A_y \sin \theta\sin\phi \hspace{5 mm} A_z\cos\theta\right ]$ gives $A_r$, but how are the other coordinates $ (A_\theta, A_\phi)$ equal to their respective rows from the matrix multiplication?
The transformation from Cartesian to polar coordinates is not a linear function, so it cannot be achieved by means of a matrix multiplication.
How can I calculate this limit: $\lim\limits_{x\rightarrow 2}\frac{2-\sqrt{2+x}}{2^{\frac{1}{3}}-(4-x)^\frac{1}{3}}$? I was given this limit to solve, without using L'Hospital's rule. It's killing me!! Can I have the solution please? $$\lim_{x\rightarrow 2}\frac{2-\sqrt{2+x}}{2^{\frac{1}{3}}-(4-x)^\frac{1}{3}}$$
The solution below may come perilously close to the forbidden L'Hospital's Rule, though the Marquis is not mentioned. To make things look more familiar, change signs, and then divide top and bottom by $x-2$. The expression we want the limit of becomes $$\frac{\sqrt{2+x}-2}{x-2} \cdot \frac{x-2}{(4-x)^{1/3}-2^{1/3}}.$$ We recognize the first part of the product as the "difference quotient" $\frac{f(x)-f(a)}{x-a}$ where $f(x)=\sqrt{2+x}$ and $a=2$. We recognize the second part of the product as the reciprocal of the difference quotient $\frac{g(x)-g(a)}{x-a}$ where $g(x)=(4-x)^{1/3}$ and $a=2$. Now take the derivatives.
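Carrying that last step out (my own completion, not part of the original answer): $$f'(x)=\frac{1}{2\sqrt{2+x}},\quad f'(2)=\frac{1}{4},\qquad g'(x)=-\frac{1}{3}(4-x)^{-2/3},\quad g'(2)=-\frac{1}{3\cdot 2^{2/3}},$$ so the limit equals $\dfrac{f'(2)}{g'(2)}=-\dfrac{3\cdot 2^{2/3}}{4}\approx -1.19$.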
A limit involving polynomials Consider the polynomial: $$P_n (x)=x^{n+1} - (x^{n-1}+x^{n-2}+\cdots+x+1)$$ I want to prove that it has a single positive real root, which we'll denote by $x_n$, and then to compute: $$\lim_{n\to\infty} x_{n}$$
Since it's not much more work, let's study the roots in $\mathbb{C}$. Note that $x=1$ is not a solution unless $n=1$, since $P_n(1) = 1-n$. Since we are interested in the limit $n\to\infty$, we can assume $x\ne 1$. Sum the geometric series, $$\begin{eqnarray*} P_n (x) &=& x^{n+1} - (x^{n-1}+x^{n-2}+\cdots+x+1) \\ &=& x^{n+1} - \frac{x^n-1}{x-1}. \end{eqnarray*}$$ The roots will satisfy $$x_n^{n}(x_n^2-x_n-1) = -1.$$ (Addendum: If there are concerns about convergence of the sum, think of summing the series as a shorthand that reminds us that $(x-1)P_n(x) = x^{n}(x^2-x-1) + 1$ for all $x$.) If $0\le |x_n|<1$, $\lim_{n\to\infty} x^n = 0$, thus, in the limit, there are no complex roots in the interior of the unit circle. If $|x_n|>1$, $\lim_{n\to\infty} 1/x^n = 0$, thus, in the limit, the roots must satisfy $$x_n^2 - x_n - 1 = 0.$$ There is one solution to this quadratic equation with $|x_n|>1$, it is real and positive, $$x_n = \frac{1}{2}(1+\sqrt{5}).$$ This is the golden ratio. It is the only root exterior to the unit circle. The rest of the roots must lie on the boundary of the unit circle. Figure 1. Contour plot of $|P_{15}(x+i y)|$.
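As a quick numerical sanity check (my own addition; the coefficient array below just encodes $P_n$ for $n=50$), the unique root outside the unit circle is already very close to the golden ratio:

```python
import numpy as np

n = 50
# P_n(x) = x^(n+1) - (x^(n-1) + ... + x + 1); coefficients listed from degree n+1 down to 0
coeffs = [1, 0] + [-1] * n
roots = np.roots(coeffs)
x_n = max(roots, key=abs)            # the single root outside the unit circle
print(x_n, (1 + 5**0.5) / 2)         # both are approximately 1.6180
```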
Which theories and concepts exist where one calculates with sets? Recently I thought about concepts for calculating with sets instead of numbers. There you might have axioms like: * *For every $a\in\mathbb{R}$ (or $a\in\mathbb{C}$) we identify the term $a$ with $\{a\}$. *For any operator $\circ$ we define $A\circ B := \{a\circ b : a\in A\land b\in B\}$. *For any function $f$ we define $f(A) := \{ f(a) : a\in A \}$. (More generally: For a function $f(x_1,\ldots, x_n)$ we define $f(A_1,\ldots, A_n):= \{f(a_1,\ldots, a_n): a_1\in A_1 \land \dots \land a_n\in A_n \}$). *One has to find a good definition for $f^{-1}(A)$, which might be the inverse image of $A$. ((3.) is just the normal definition of the image and (2.) is a special case of (3.)) Now I am interested to learn about theories and concepts where one actually calculates with sets (similar to the above axioms). After a while I found interval arithmetic. What theories or approaches do you know? Because there will not be just one answer to my question, I will accept the answer with the most upvotes. Update: The theories do not have to follow the above axioms. It's okay if they make their own definitions of how a function shall act on sets. It is just important that you calculate with sets in the theory, concept or approach.
I like Minkowski addition, aka vector addition. It is a basic operation in the geometry of convex sets. See: zonotopes & zonoids, Brunn-Minkowski inequality, polar sets... and here's a neat inequality for an arbitrary convex set $A\subset\mathbb R^n$: $$ \mathrm{volume}\,(A-A)\le \binom{2n}{n}\mathrm{volume}\,(A) $$ with equality when $A$ is a simplex. (Due to Rogers and Shepard, see here) The case $n=1$ isn't nearly as exciting.
Describing multivariable functions So I am presented with the following question: Describe and sketch the largest region in the $xy$-plane that corresponds to the domain of the function: $$g(x,y) = \sqrt{4 - x^2 - y^2} \ln(x-y).$$ Now, I can find different restrictions like $4 - x^2 - y^2 \geq 0$... but I'm honestly not even sure where to begin with this question! Any help?
So, you need two things: $$ 4 - x^2 - y^2 \geq 0 $$ to make the square root work, and also $$ x-y > 0 $$ to make the logarithm work. You will be graphing two regions in the $xy$-plane, and your answer will be the area which is in both regions. A good technique for graphing a region given by an inequality is to first replace the inequality by an equality. For the first region this means $$ 4 - x^2 - y^2 = 0$$ $$ 4 = x^2 + y^2 $$ Therefore, we're talking about the circle of radius two centered at the origin. The next question to answer: do we want the inside or outside of that circle? To determine that, we use a test point: pick any point not on the circle and plug it into the inequality. I'll choose $(x,y) = (5,0)$. Note that this point is on the outside of the circle. $$ 4 - x^2 - y^2 \geq 0 $$ $$ 4 - 5^2 - 0^2 \geq 0 $$ That's clearly false, so we do not want the outside of the circle. Our region is the inside of the circle. Shade that lightly on your drawing. Now, do the line by the same algorithm.
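If you want a quick computer-generated picture of the resulting region, here is a small sketch (my own addition; the grid bounds and resolution are arbitrary choices):

```python
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-3, 3, 400), np.linspace(-3, 3, 400))
inside = (4 - x**2 - y**2 >= 0) & (x - y > 0)      # both conditions of the domain
plt.contourf(x, y, inside.astype(int), levels=[0.5, 1.5])
plt.gca().set_aspect("equal")
plt.xlabel("x"); plt.ylabel("y")
plt.show()
```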
How to determine if 2 points are on opposite sides of a line How can I determine whether the 2 points $(a_x, a_y)$ and $(b_x, b_y)$ are on opposite sides of the line $(x_1,y_1)\to(x_2,y_2)$?
Writing $A$ and $B$ for the points in question, and $P_1$ and $P_2$ for the points determining the line ... Compute the "signed" areas of the triangles $\triangle P_1 P_2 A$ and $\triangle P_1 P_2 B$ via the formula (equation 16 here) $$\frac{1}{2}\left|\begin{array}{ccc} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{array}\right|$$ with $(x_3,y_3)$ being $A$ or $B$. The points $A$ and $B$ will be on opposite sides of the line if the areas differ in sign, which indicates that the triangles are being traced out in different directions (clockwise vs counterclockwise). You can, of course, ignore the "$1/2$", as it does not affect the sign of the values. Be sure to keep the row-order consistent in the two computations, though.
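A small code sketch of this test (my own addition; the names are arbitrary, and with the strict inequality collinear cases return False):

```python
def signed_area2(p1, p2, q):
    """Twice the signed area of triangle p1-p2-q (the 3x3 determinant without the 1/2)."""
    return (p2[0] - p1[0]) * (q[1] - p1[1]) - (p2[1] - p1[1]) * (q[0] - p1[0])

def opposite_sides(p1, p2, a, b):
    """True if a and b lie strictly on opposite sides of the line through p1 and p2."""
    return signed_area2(p1, p2, a) * signed_area2(p1, p2, b) < 0

print(opposite_sides((0, 0), (1, 1), (0, 1), (1, 0)))  # True
```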
smooth quotient Let $X$ be a smooth projective curve over the field of complex numbers. Assume it comes with an action of $\mu_3$. Could someone explain to me why the quotient $X/\mu_3$ is a smooth curve of genus zero?
No, the result is not true for genus $1$. Consider an elliptic curve $E$ (which has genus one) and a non-zero point $a\in E$ of order three. Translation by $a$ is an automorphism $\tau_a:E\to E: x\mapsto x+a$ of order $3$ of the Riemann surface $E$. It generates a group $G=\langle \tau_a\rangle\cong \mu_3$ of order $3$ acting freely on $E$, and the quotient $E/G$ is an elliptic curve, a smooth curve of genus one and not of genus zero. Remarks a) There are eight choices for $a$, since the $3$-torsion of $E$ is isomorphic to $\mathbb Z/3\mathbb Z\times \mathbb Z/3\mathbb Z$. b) The quotient morphism $E\to E/G$ is a non-ramified covering.
Direct limit of localizations of a ring at elements not in a prime ideal For a prime ideal $P$ of a commutative ring $A$, consider the direct limit of the family of localizations $A_f$ indexed by the set $A \setminus P$ with partial order $\le$ such that $f \le g$ iff $V(f) \subseteq V(g)$. (We have for such $f \le g$ a natural homomorphism $A_f \to A_g$.) I want to show that this direct limit, $\varinjlim_{f \not\in P} A_f$, is isomorphic to the localization $A_P$ of $A$ at $P$. For this I consider the homomorphism $\phi$ that maps an equivalence class $[(f, a/f^n)] \mapsto a/f^n$. (I denote elements of the disjoint union $\sqcup_{f \not\in P} A_f$ by tuples $(f, a/f^n)$.) Surjectivity is clear, because for any $a/s \in A_P$ with $s \not\in P$, we have the class $[(s, a/s)] \in \varinjlim_{f \not\in P} A_f$ whose image is $a/s$. For injectivity, suppose we have a class $[(f, a/f^n)]$ whose image $a/f^n = 0/1 \in A_P$. Then there exists $t \notin P$ such that $ta = 0$. We want to show that $[(f, a/f^n)]] = [(f, 0/1)]$, which I believe is equivalent to finding a $g \notin P$ such that $V(f) \subseteq V(g)$ and $g^ka = 0$ for some $k \in \mathbb{N}$. Well, $t$ seems to almost work, but I couldn’t prove that $V(f) \subseteq V(t)$, so maybe we need a different $g$? Or am I using the wrong map entirely?
If $a/f^n \in A_f$ is mapped to $0$ in $A_P$, then there is a $g \notin P$ such that $ga=0$. Since $gf\notin P$ and $V(f)\subseteq V(gf)$, the element $a/f^n$ already maps to $0$ in $A_{gf}$. Hence the injectivity.
principal value as distribution, written as integral over singularity Let $C_0^\infty(\mathbb{R})$ be the set of smooth functions with compact support on the real line $\mathbb{R}.$ Then consider the map $$\operatorname{p.\!v.}\left(\frac{1}{x}\right)\,: C_0^\infty(\mathbb{R}) \to \mathbb{C}$$ defined via the Cauchy principal value as $$ \operatorname{p.\!v.}\left(\frac{1}{x}\right)(u)=\lim_{\varepsilon\to 0+} \int_{\mathbb{R}\setminus [-\varepsilon;\varepsilon]} \frac{u(x)}{x} \, \mathrm{d}x \quad\text{ for }u\in C_0^\infty(\mathbb{R})$$ Now why is $$ \lim_{\varepsilon\to 0+} \int_{\mathbb{R}\setminus [-\varepsilon;\varepsilon]} \frac{u(x)}{x} \, \mathrm{d}x = \int_0^{+\infty} \frac{u(x)-u(-x)}{x}\, \mathrm{d}x $$ and why is the integral on the left defined (i.e. why does the limit exist)?
We can write $$I(\varepsilon):=\int_{\Bbb R\setminus [-\varepsilon,\varepsilon]}\frac{u(x)}xdx=\int_{-\infty}^{-\varepsilon}\frac{u(x)}xdx+\int_{\varepsilon}^{\infty}\frac{u(x)}xdx.$$ In the first integral of the RHS, we do the substitution $t=-x$; then $$I(\varepsilon)=-\int_{\varepsilon}^{+\infty}\frac{u(-t)}tdt+\int_{\varepsilon}^{\infty}\frac{u(x)}xdx=\int_{\varepsilon}^{+\infty}\frac{u(t)-u(-t)}tdt.$$ Now we can conclude, since, by the fundamental theorem of calculus, the integral $\int_0^{+\infty}\frac{u(t)-u(-t)}tdt$ is convergent. Indeed, $$u(t)-u(-t)=\int_{-t}^tu'(s)ds=\left[su'(s)\right]_{-t}^t-\int_{-t}^tsu''(s)ds\\= t(u'(t)+u'(-t))-\int_{-t}^tsu''(s)ds$$ hence, for $0<t\leq 1$, $$\frac{|u(t)-u(-t)|}t\leq 2\sup_{|s|\leq 1}|u'(s)|+2\sup_{|s|\leq 1}|u''(s)|.$$
Closest point of line segment without endpoints I know of a formula to determine the shortest line segment between two given line segments, but that works only when endpoints are included. I'd like to know if there is a solution when endpoints are not included, or if I'm mixing disciplines incorrectly. Example: Line segment $A$ is from $(1, 1)$ to $(1, 4)$ and line segment $B$ is from $(0, 0)$ to $(0, 2)$, so the shortest segment between them would be $(0, 1)$ to $(1, 1)$. But if line segment $A$ did not include those endpoints, how would that work, since $(1, 1)$ is not part of line segment $A$?
There would not be a shortest line segment. Look at line segments from $(0,1)$ to points very close to $(1,1)$ on the segment that joins $(1,1)$ and $(1,4)$. These segments get shorter and shorter, approaching length $1$ but never reaching it.
First order ordinary differential equations involving powers of the slope Are there any general approaches to differential equations like $$x-x\ y(x)+y'(x)\ (y'(x)+x\ y(x))=0,$$ or that equation specifically? The problem seems to be the term $y'(x)^2$. Solving the equation for $y'(x)$ like a quadratic equation gives some expression $y'(x)=F(y(x),x)$, where $F$ is "not too bad" as it involves small polynomials in $x$ and $y$ and roots of such objects. That might be a starting point for a numerical approach, but I'm actually more interested in theory now. $y(x)=1$ is a stationary solution. Plugging in $y(x)\equiv 1+z(x)$ and taking a look at the new equation makes me think functions of the form $\exp{(a\ x^n)}$ might be involved, but that's only speculation. I see no symmetry whatsoever and dimensional analysis fails.
Hint: $x-xy+y'(y'+xy)=0$ $(y')^2+xyy'+x-xy=0$ $(y')^2=x(y-1-yy')$ $x=\dfrac{(y')^2}{y-1-yy'}$ Follow the method in http://science.fire.ustc.edu.cn/download/download1/book%5Cmathematics%5CHandbook%20of%20Exact%20Solutions%20for%20Ordinary%20Differential%20EquationsSecond%20Edition%5Cc2972_fm.pdf#page=234: Let $t=y'$ , Then $\left(1+\dfrac{t^3(t-1)}{y^2(t-1)+1}\right)\dfrac{dy}{dt}=\dfrac{2t^2}{y(1-t)-1}+\dfrac{yt^3}{y(1-t^2)-1}$
Is it true that $\max\limits_D |f(x)|=\max\limits\{|\max\limits_D f(x)|, |\min\limits_D f(x)|\}$? I came across an equality which states that if $D\subset\mathbb{R}^n, n\geq 2$ is compact, then for each $ f\in C(D)$ we have the following equality: $$\max\limits_D |f(x)|=\max\limits\{|\max\limits_D f(x)|, |\min\limits_D f(x)|\}.$$ Actually I cannot judge whether it is right. Can anyone tell me if it is right, and if so, how to prove it? Thanks a lot.
I assume that you are assuming that $\max$ and $\min$ exists or that you are assuming that $f(x)$ is continuous which in-turn guarantees that $\max$ and $\min$ exists, since $D$ is compact. First note that if we have $a \leq y \leq b$, then $\vert y \vert \leq \max\{\vert a \vert, \vert b \vert\}$, where $a,b,y \in \mathbb{R}$. Hence, $$\min_D f(x) \leq f(x) \leq \max_D f(x) \implies \vert f(x) \vert \leq \max\{\vert \max_D f(x) \vert, \vert \min_D f(x) \vert\}, \, \forall x$$ Hence, we have that $$\max_D \vert f(x) \vert \leq \max\{\vert \max_D f(x) \vert, \vert \min_D f(x) \vert\}$$ Now we need to prove the inequality the other way around. Note that we have $$\vert f(x) \vert \leq \max_D \vert f(x) \vert$$ This implies $$-\max_D \vert f(x) \vert \leq f(x) \leq \max_D\vert f(x) \vert$$ This implies $$-\max_D \vert f(x) \vert \leq \max_D f(x) \leq \max_D\vert f(x) \vert$$ $$-\max_D \vert f(x) \vert \leq \min_D f(x) \leq \max_D\vert f(x) \vert$$ Hence, we have that $$\vert \max_D f(x) \rvert \leq \max_D\vert f(x) \vert$$ $$\vert \min_D f(x) \rvert \leq \max_D\vert f(x) \vert$$ The above two can be put together as $$\max \{\vert \max_D f(x) \rvert, \vert \min_D f(x) \rvert \} \leq \max_D\vert f(x) \vert$$ Hence, we can now conclude that $$\max \{\vert \max_D f(x) \rvert, \vert \min_D f(x) \rvert \} = \max_D\vert f(x) \vert$$
Sum of Natural Number Ranges? Given a positive integer $n$, some positive integers $x$ can be represented as follows: $$1 \le i \le j \le n$$ $$x = \sum_{k=i}^{j}k$$ Given $n$ and $x$ determine if it can be represented as the above sum (if $\exists{i,j}$), and if so determine the $i$ and $j$ such that the sum has the smallest number of terms. (minimize $j-i$) I am not sure how to approach this. Clearly closing the sum gives: $$x = {j^2 + j - i^2 + i \over 2}$$ But I'm not sure how to check if there are integer solutions, and if there are to find the one with smallest $j-i$.
A start: Note that $$2x=j^2+j-i^2+i=(j+i)(j-i+1).$$ The two numbers $j+i$ and $j-i$ have the same parity (both are even or both are odd). So we must express $2x$ as a product of two numbers of different parities.
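Turning this hint into a procedure (my own sketch; the function name is arbitrary): for each number of terms $L=j-i+1$, the sum equals $Li+L(L-1)/2$, so we can solve for $i$ and check that it is an integer in range.

```python
def smallest_range(n, x):
    """Return (i, j) with 1 <= i <= j <= n and i + ... + j == x, minimizing j - i, else None."""
    for L in range(1, n + 1):            # L = number of terms, smallest first
        rem = x - L * (L - 1) // 2       # rem must equal L * i
        if rem <= 0:
            break
        if rem % L == 0:
            i = rem // L
            j = i + L - 1
            if j <= n:
                return i, j
    return None

print(smallest_range(10, 9))   # (9, 9): a single term beats 2+3+4 or 4+5
```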
For what values of $a$ does the equation have rational roots? Suppose that we have the following quadratic equation containing some constant $a$: $$ax^2-(1-2a)x+a-2=0.$$ We have to find all integers $a$ for which this equation has rational roots. First I tried to determine for which $a$ this equation has a real solution, so I calculated the discriminant (also I guessed that $a$ must not be equal to zero, because in that situation it would be linear, namely $-x-2=0$ with the trivial solution $x=-2$). So $$D=(1-2a)^2-4a(a-2)$$ and if we simplify, we get $$D=1-4a+4a^2-4a^2+8a=1+4a$$ So we have the roots $x_1=\frac{(1-2a)-\sqrt{1+4a}}{2a}$ and $x_2=\frac{(1-2a)+\sqrt{1+4a}}{2a}$. Because we are interested in rational numbers, it is clear that $1+4a$ must be a perfect square ($a=0$, I think, is not included in the solution set). What I have tried is to use $a=2$, $a=6$, $a=12$, but are they the only solutions which I need, or how can I find all values of $a$?
Edited in response to Simon's comment. Take any odd number; you can write it as $2m+1$, for some integer $m$; its square is $4m^2+4m+1$; so if $a=m^2+m$ then, no matter what $m$ is, you get rational roots.
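For instance (my own check), $m=2$ gives $a=6$: then $1+4a=25$ and the roots are $$x=\frac{(1-2a)\pm\sqrt{1+4a}}{2a}=\frac{-11\pm 5}{12},$$ i.e. $x=-\tfrac12$ and $x=-\tfrac43$, both rational.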
Why does the Gauss-Bonnet theorem apply only to an even number of dimensions? One can use the Gauss-Bonnet theorem in 2D or 4D to deduce topological characteristics of a manifold by doing integrals over the curvature at each point. First, why isn't there an equivalent theorem in 3D? Why can't the theorem be proved for an odd number of dimensions (i.e. what part of the proof prohibits such a generalization)? Second and related, if there were such a theorem, what interesting and difficult problems would become easy/inconsistent? (The second question is intentionally vague, no need to answer if it is not clear.)
First, for a discussion involving Chern's original proof, check here, page 18. I think the reason is that the original Chern-Gauss-Bonnet theorem can be treated topologically as $$ \int_{M} e(TM)=\chi_{M} $$ and for odd dimensional manifolds, the Euler class is "almost zero" as $e(TM)+e(TM)=0$. So it vanishes in de Rham cohomology. On the other hand, $\chi_{M}=0$ if $\dim(M)$ is odd. So the theorem trivially holds in the odd-dimensional case. Another perspective is through the Atiyah-Singer index theorem. The Gauss-Bonnet theorem can be viewed as a special case involving the index of the de Rham Dirac operator: $$ D=d+d^{*} $$ But on odd-dimensional manifolds, the index of $D$ is zero. Therefore both the left and right hand sides of Gauss-Bonnet are zero. I heard via street rumor that there is some hope to "twist" the Dirac operator in K-theory, so that the index theorem gives non-trivial results in odd dimensions. But this can be rather involved, and is not my field of expertise. One expert on this is Daniel Freed, whom you may contact on this.
Is the range of this operator closed? I think I am stuck with showing closedness of the range of a given operator. Given a sequence $(X_n)$ of closed subspaces of a Banach space $X$. Define $Y=(\oplus_n X_n)_{\ell_2}$ and set $T\colon Y\to X$ by $T(x_n)_{n=1}^\infty = \sum_{n=1}^\infty \frac{x_n}{n}$. Is the range of $T$ closed?
The range is not necessarily closed. For example, take $X=(\oplus_{n \in \mathbb{N}} X_n)_{\ell_2}=Y$ with infinitely many $X_n$ nonzero: if $T(Y)$ were closed, $T(Y)$ would be a Banach space. $T$ is a continuous bijective map from $Y$ onto $T(Y)$, so it is a homeomorphism (open mapping theorem). But $T^{-1}$ is not continuous, because $T^{-1}(x_n)=nx_n$ for $x_n \in X_n$. So $T(Y)$ is not closed.
Projection onto Singular Vector Subspace for Singular Value Decomposition I am not very sophisticated in my linear algebra, so please excuse any messiness in my terminology, etc. I am attempting to reduce the dimensionality of a dataset using Singular Value Decomposition, and I am having a little trouble figuring out how this should be done. I have found a lot of material about reducing the rank of a matrix, but not reducing its dimensions. For instance, if I decompose using SVD: $A = USV^T$, I can reduce the rank of the matrix by eliminating singular values below a certain threshold and their corresponding vectors. However, doing this returns a matrix of the same dimensions of $A$, albeit of a lower rank. What I actually want is to be able to express all of the rows of the matrix in terms of the top principal components (so an original 100x80 matrix becomes a 100x5 matrix, for example). This way, when I calculate distance measures between rows (cosine similarity, Euclidean distance), the distances will be in this reduced dimension space. My initial take is to multiply the original data by the singular vectors: $AV_k$. Since $V$ represents the row space of $A$, I interpret this as projecting the original data into a subspace of the first $k$ singular vectors of the SVD, which I believe is what I want. Am I off base here? Any suggestions on how to approach this problem differently?
If you want to do the $100\times80$ to $100\times5$ conversion, you can just multiply $U$ with the reduced $S$ (after eliminating low singular values). What you will be left with is a $100\times80$ matrix, but the last $75$ columns are $0$ (provided your singular value threshold left you with only $5$ values). You can just eliminate the columns of $0$ and you will be left with $100\times5$ representation. The above $100\times5$ matrix can be multiplied with the $5\times80$ matrix obtained by removing the last $75$ rows of $V$ transpose, this results in the approximation of $A$ that you are effectively using.
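If it helps, here is a minimal numpy sketch of that procedure (variable names are mine, purely illustrative): projecting onto the top $k$ right singular vectors, $AV_k$, gives exactly the $U_kS_k$ matrix described above.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 80))                 # rows = observations
k = 5

U, s, Vt = np.linalg.svd(A, full_matrices=False)   # U: 100x80, s: (80,), Vt: 80x80

reduced = A @ Vt[:k].T            # 100x5 coordinates in the top-k singular subspace
same    = U[:, :k] * s[:k]        # U_k S_k, the same thing
print(np.allclose(reduced, same))                  # True

A_k = reduced @ Vt[:k]            # rank-k approximation of A (back to 100x80)
```

Distances (cosine, Euclidean) computed between rows of `reduced` are then distances in the reduced space.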
$p=4n+3$ never has a Decomposition into $2$ Squares, right? Primes of the form $p=4k+1\;$ have a unique decomposition as sum of squares $p=a^2+b^2$ with $0<a<b\;$, due to Thue's Lemma. Is it correct to say that, primes of the form $p=4n+3$, never have a decomposition into $2$ squares, because sum of the quadratic residues $a^2+b^2$ with $a,b\in \Bbb{N}$ $$ a^2 \bmod 4 +b^2 \bmod 4 \le 2? $$ If so, are there alternate ways to prove it?
Yes, as it has been pointed out, if $a^2+b^2$ is odd, one of $a$ and $b$ must be even and the other odd. Then $$ (2m)^2+(2n+1)^2=(4m^2)+(4n^2+4n+1)=4(m^2+n^2+n)+1\equiv1\pmod{4} $$ Thus, it is impossible to have $a^2+b^2\equiv3\pmod{4}$. In fact, suppose that a prime $p\equiv3\pmod{4}$ divides $a^2+b^2$. Since $p$ cannot be written as the sum of two squares, $p$ is also a prime over the Gaussian integers. Therefore, since $p\,|\,(a+ib)(a-ib)$, we must also have that $p\,|\,a+ib$ or $p\,|\,a-ib$, either of which implies that $p\,|\,a$ and $p\,|\,b$. Thus, the exponent of $p$ in the factorization of $a^2+b^2$ must be even. Furthermore, each positive integer whose prime factorization contains each prime $\equiv3\pmod{4}$ to an even power is a sum of two squares. Using the result about quadratic residues in this answer, for any prime $p\equiv1\pmod{4}$, we get that $-1$ is a quadratic residue $\bmod{p}$. That is, there is an $x$ so that $$ x^2+1\equiv0\pmod{p}\tag{1} $$ This means that $$ p\,|\,(x+i)(x-i)\tag{2} $$ since $p$ can divide neither $x+i$ nor $x-i$, $p$ is not a prime in the Gaussian integers, so it must be the product of two Gaussian primes (any more, and we could find a non-trivial factorization of $p$ over the integers). That is, we can write $$ p=(u+iv)(u-iv)=u^2+v^2\tag{3} $$ Note also that $$ 2=(1+i)(1-i)=1^2+1^2\tag{4} $$ Suppose $n$ is a positive integer whose prime factorization contains each prime $\equiv3\pmod{4}$ to even power. Each factor of $2$ and each prime factor $\equiv1\pmod{4}$ can be split into a pair of conjugate Gaussian primes. Each pair of prime factors $\equiv3\pmod{4}$ can be split evenly. Thus, we can split the factors into conjugate pairs: $$ n=(a+ib)(a-ib)=a^2+b^2\tag{5} $$ For example, $$ \begin{align} 90 &=2\cdot3^2\cdot5\\ &=(1+i)\cdot(1-i)\cdot3\cdot3\cdot(2+i)\cdot(2-i)\\ &=[(1+i)3(2+i)]\cdot[(1-i)3(2-i)]\\ &=(3+9i)(3-9i)\\ &=3^2+9^2 \end{align} $$ Thus, we have shown that a positive integer is the sum of two squares if and only if each prime $\equiv3\pmod{4}$ in its prime factorization occurs with even exponent.
How many subsets are there? I'm having trouble simplifying the expression for how many sets I can possibly have. It's a very specific problem for which the specifics don't actually matter, but for $q$, some power of $2$ greater than $4$, I have a set of $q - 3$ elements. I am finding all subsets which contain at least two elements and at most half ($q / 2$, which would then also be a power of $2$) of the elements. I know that the total number of subsets which satisfy these conditions is (sorry my TeX may be awful), $$\sum\limits_{i=2}^{\frac{q}{2}} \binom{q-3}{i}= \sum\limits_{i=0}^{\frac{q}{2}} \binom{q-3}{i}- (q - 3 + 1)$$ but I'm having a tough time finding a closed-form expression for this summation. I'm probably missing something, but it has stumped me this time.
You're most of the way there; now just shave a couple of terms from the upper limit of your sum. Setting $r=q-3$ for clarity, and noting that $\lfloor r/2\rfloor = q/2-2$ (and that $r$ is odd since $q$ is a power of $2$), $$\sum_{i=0}^{q/2}\binom{r}{i} = \binom{r}{q/2}+\binom{r}{q/2-1} + \sum_{i=0}^{\lfloor r/2\rfloor}\binom{r}{i}$$ And the last sum should be easy to compute, since by symmetry it's one half of $\displaystyle\sum_{i=0}^r\binom{r}{i}$...
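A quick numerical sanity check of the pieces above, for the smallest admissible case $q=16$ (so $r=13$); this is only a verification, not part of the argument:

```python
from math import comb

q = 16
r = q - 3
lhs = sum(comb(r, i) for i in range(2, q // 2 + 1))      # the original sum
half = 2 ** (r - 1)                                       # sum_{i=0}^{(r-1)/2} C(r,i), since r is odd
rhs = comb(r, q // 2) + comb(r, q // 2 - 1) + half - (r + 1)
print(lhs, rhs, lhs == rhs)                               # 7085 7085 True
```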
Examples of logs with other bases than 10 From a teaching perspective, sometimes it can be difficult to explain how logarithms work in Mathematics. I came to the point where I tried to explain binary and hexadecimal to someone who did not have a strong background in Mathematics. Are there some common examples that can be used to explain this? For example (perhaps this is not the best), but we use tally marks starting from childhood. A complete set marks five tallies and then a new set is made. This could be an example of a log with a base of 5.
The most common scale of non-decimal logarithms is the musical scale. For example, the octave is a doubling of frequency over 12 semitones. The harmonics are based on integer ratios, where the base-2 logarithms of 2, 3, and 5 correspond approximately to 12, 19 and 28 semitones. One can do things like look at the ratios represented by the black keys or the white keys on a piano keyboard. The black keys are a more basic set than the white keys (they are all repeated in the white keys, with two additions). The brightness of stars is measured in steps of 0.4 dex (i.e. 5 magnitudes = a factor of 100), while there is the decibel scale (where the same numbers represent intensity or power via $10\log_{10}$ and amplitude via $20\log_{10}$). The ISO R40 series is a series of decimal preferred numbers; the steps are in terms of $40\log_{10}$, which is very close to the semitone scale. One can, for example, starting from crude inequalities like $5<6$, plot $x=\log_2(3)$ against $y=\log_2(5)$ and draw each such inequality as a line, saying that the point representing the true values of $\log_2(3)$ and $\log_2(5)$ must lie in a particular region above or below that line. One finds that this converges quite rapidly, using only inequalities between numbers less than 100.
Integrating multiple times. I am having a problem integrating the expression below (the integrand is $6yz^3+6x^2y$, integrated for $1\le x\le 3$, $0\le y\le 1$, $-1\le z\le 1$). If I integrate it w.r.t. $x$, then w.r.t. $y$, and then w.r.t. $z$, the answer comes out to be 0, but the actual answer is 52. Please help out. Thanks
$$\begin{align} \int_1^3 (6yz^3+6x^2y)\,dx &= \left[6xyz^3+2x^3y\right]_{x=1}^3 =12yz^3+52y \\ \int_0^1 (12yz^3+52y)\,dy &=\left[6y^2z^3+26y^2\right]_{y=0}^1 =6z^3+26 \\ \int_{-1}^1 (6z^3+26)\,dz &= \left[\frac{3}{2}z^4+26z\right]_{z=-1}^1 = 52 . \end{align}$$
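For what it's worth, the arithmetic can be checked with sympy (purely a verification of the computation above):

```python
from sympy import symbols, integrate

x, y, z = symbols('x y z')
f = 6*y*z**3 + 6*x**2*y
print(integrate(f, (x, 1, 3), (y, 0, 1), (z, -1, 1)))   # 52
```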
Saturated Boolean algebras in terms of model theory and in terms of partitions Let $\kappa$ be an infinite cardinal. A Boolean algebra $\mathbb{B}$ is said to be $\kappa$-saturated if there is no partition (i.e., collection of elements of $\mathbb{B}$ whose pairwise meet is $0$ and least upper bound is $1$) of $\mathbb{B}$, say $W$, of size $\kappa$. Is there any relationship between this and the model theoretic meaning of $\kappa$-saturated (namely that all types over sets of parameters of size $<\kappa$ are realized)?
As far as I know, there is no connection; it's just an unfortunate clash of terminology. It's especially unfortunate because the model-theoretic notion of saturation comes up in the theory of Boolean algebras. For example, the Boolean algebra of subsets of the natural numbers modulo finite sets is $\aleph_1$-saturated but (by a definitely non-trivial result of Hausdorff) not $\aleph_2$-saturated, in the model-theoretic sense, even if the cardinality of the continuum is large. When (complete) Boolean algebras are used in connection with forcing, it is customary to say "$\kappa$-chain condition" instead of "$\kappa$-saturated" in the antichain sense.
Common factors of the ideals $(x - \zeta_p^k)$, $x \in \mathbb Z$, in $\mathbb Z[\zeta_p]$ I'm trying to understand a proof of the following Lemma (regarding Catalan's conjecture): Lemma: Let $x\in \mathbb{Z}$, let $p$ and $q$ be distinct primes with $p,q>2$, let $G:=\text{Gal}(\mathbb{Q}(\zeta_p):\mathbb{Q})$, $x\equiv 1\pmod{p}$ and $\lvert x\rvert >q^{p-1}+q$. Then the map $\phi:\{\theta\in\mathbb{Z}:(x-\zeta_p)^\theta\in\mathbb{Q}(\zeta_p)^{*q}\}\rightarrow\mathbb{Q}(\zeta_p)$, $\ $ $\phi(\theta)=\alpha$ such that $\alpha^q=(x-\zeta_p)^\theta$ is injective. I don't understand the following step in the proof: The ideals $(x-\sigma(\zeta_p))_{\sigma \in G}$ in $\mathbb{Z}[\zeta_p]$ have at most the factor $(1-\zeta_p)$ in common. Since $x\equiv 1\pmod{p}$ the ideals do have this factor in common. Could you please explain to me why these two statements are true?
In the world of ideals gcd is the sum. Pick any two ideals $(x-\sigma(\zeta_p))$ and $(x-\sigma'(\zeta_p))$. Then $\sigma(\zeta_p)-\sigma'(\zeta_p)$ will be in the sum ideal. This is a difference of two distinct powers of $\zeta_p$, so generates the same ideal as one of the $1-\zeta_p^k$, $0<k<p$. All of these generate the same prime ideal as $(1-\zeta_p)$, because the quotient $$\frac{1-\zeta_p^k}{1-\zeta_p}=\frac{1-\zeta_p^k}{1-(\zeta_p^k)^{k'}}=\left(\frac{1-(\zeta_p^k)^{k'}}{1-\zeta_p^k}\right)^{-1}$$ is manifestly a unit in the ring $\mathbb{Z}[\zeta_p]$ (here $kk'\equiv1\pmod p$). Because the ideal $(1-\zeta_p)$ is a non-zero prime ideal, hence maximal, this gives the first claim you asked about. The second claim follows from the well known fact that up to a unit factor (i.e. as a principal ideal) $(p)$ is the power $(1-\zeta_p)^{p-1}$. If $x=mp+1$, $m$ a rational integer, then $(1-\zeta_p)$ divides both $mp$ and $(1-\sigma(\zeta_p))$, and therefore also $x-\sigma(\zeta_p)$.
Silly question about Fourier Transform What is the Fourier Transform of : $$\sum_{n=1}^N A_ne^{\large-a_nt} u(t)~?$$ This is a time domain function, how can I find its Fourier Transform (continuous not discrete) ?
Tips: * *The Fourier transform is linear; $$\mathcal{F}\left\{\sum_l a_lf_l(t)\right\}=\sum_l a_l\mathcal{F}\{f_l(t)\}.$$ *Plug $e^{-ct}u(t)$ into $\mathcal{F}$ and then discard part of the region of integration ($u(t)=0$ when $t<0$): $$\int_{-\infty}^\infty e^{-ct}u(t)e^{-2\pi i st}dt=\int_0^\infty e^{-(c+2\pi is)t}dt=? $$ Now put these two together..
What does this mean: $\mathbb{Z_{q}^{n}}$? I can't understand the notation $\mathbb{Z}_{q}^{n} \times \mathbb{T}$ as defined below. As far as I know $\mathbb{Z_{q}}$ comprises all integers modulo $q$. But with $n$ as a power symbol I can't understand it. Also: $\mathbb{R/Z}$, what does it denote? "... $ \mathbb{T} = \mathbb{R}/\mathbb{Z} $ the additive group on reals modulo one. Denote by $ A_{s,\phi} $ the distribution on $ \mathbb{Z}^n_q \times \mathbb{T}$ ..."
You write "afaik $\mathbb Z_q$ comprises..." You have to be careful here what is meant by this notation. There are two common options: 1) $\mathbb Z_q$ is the ring of integers module $q$. Many people think this should be better written as $\mathbb Z/q$, to avoid confusion wth 2). However it is not uncommon to write $\mathbb Z_q$. 2) The ring of $q$-adic integers, i.e. formal powerseries in $q$ with coefficients in $\mathbb Z\cap [0,q-1]$.
How to solve $3x+\sin x = e^x$ How doese one solve this equation? $$ 3x+\sin x = e^x $$ I tried graphing it and could only find approximate solutions, not the exact solutions. My friends said to use Newton-Raphson, Lagrange interpolation, etc., but I don't know what these are as they are much beyond the high school syllabus.
Sorry, but $f(1)=3(1)+\sin(1)-e^{1}$ is not equal to $1.12\ldots$; it is equal to $0.299170578$. And $f(0.5)=3(0.5)+\sin(0.5)-e^{0.5}$ is not equal to $0.33\ldots>0$; it is equal to $-0.1399947352$. (These values take $\sin$ in degrees; if $\sin$ is taken in radians, which is the usual convention for an equation like this, then $f(1)\approx1.123>0$ and $f(0.5)\approx0.331>0$ after all.)
Asymptotic behavior of the expression: $(1-\frac{\ln n}{n})^n$ when $n\rightarrow\infty$ The well-known result states that $\lim_{n\rightarrow \infty}(1-\frac{c}{n})^n=(1/e)^c$ for any constant $c$. I need the following limit: $\lim_{n\rightarrow \infty}(1-\frac{\ln n}{n})^n$. Can I prove it in the following way? Let $x=\frac{n}{\ln n}$; then we get: $\lim_{n\rightarrow \infty}(1-\frac{\ln n}{n})^n=\lim_{x\rightarrow \infty}(1-\frac{1}{x})^{x\ln n}=(1/e)^{\ln n}=\frac{1}{n}$. So, $\lim_{n\rightarrow \infty}(1-\frac{\ln n}{n})^n=\frac{1}{n}$. I see that it is wrong to have an expression with $n$ after the limit. But how to show that the asymptotic behavior is $1/n$? Thanks!
According to the comments, your real aim is to prove that $x_n=n\left(1-\frac{\log n}n\right)^n$ has a non degenerate limit. Note that $\log x_n=\log n+n\log\left(1-\frac{\log n}n\right)$ and that $\log(1-u)=-u+O(u^2)$ when $u\to0$ hence $n\log\left(1-\frac{\log n}n\right)=-\log n+O\left(\frac{(\log n)^2}n\right)$ and $\log x_n=O\left(\frac{(\log n)^2}n\right)$. In particular, $\log x_n\to0$, hence $x_n\to1$, that is, $$ \left(1-\frac{\log n}n\right)^n\sim\frac1n. $$ Edit: In the case at hand, one knows that $\log(1-u)\leqslant-u$ for every $u$ in $[0,1)$. Hence $\log x_n\leqslant0$ and, for every $n\geqslant1$, $$ \left(1-\frac{\log n}n\right)^n\leqslant\frac1n. $$
How to find $\lim\limits_{n\rightarrow \infty}\frac{(\log n)^p}{n}$ How to solve $$\lim_{n\rightarrow \infty}\frac{(\log n)^p}{n}$$
Apply L'Hospital's rule $\,[p]+1\,$ times to $\,\displaystyle{f(x):=\frac{\log^px}{x}}$: $$\lim_{x\to\infty}\frac{\log^px}{x}\stackrel{L'H}=\lim_{x\to\infty}p\frac{\log^{p-1}(x)}{x}\stackrel{L'H}=\lim_{x\to\infty}p(p-1)\frac{\log^{p-2}(x)}{x}\stackrel{L'H}=...\stackrel{L'H}=$$ $$\stackrel{L'H}=\lim_{x\to\infty}p(p-1)(p-2)...(p-[p])\frac{\log^{p-[p]-1}(x)}{x}=0$$ since $\,p-[p]-1<0\,$
Calculate $\lim\limits_{ x\rightarrow 100 }{ \frac { 10-\sqrt { x } }{ x+5 } }$ $$\lim_{ x\rightarrow 100 }{ \frac { 10-\sqrt { x } }{ x+5 } } $$ Could you explain how to do this without using a calculator and using basic rules of finding limits? Thanks
I suppose that you asked this question not because it's a difficult question, but because you don't know very well the rules for working with limits. First of all you need to know what a limit is, what the indeterminate cases are and why they are indeterminate, what the meaning behind this notion is (e.g. $\frac{\infty}{\infty}$), and how to look at things when facing a limit. (In this particular case nothing is indeterminate: the denominator tends to $105\neq0$, so you can simply substitute $x=100$ and get $\frac{10-\sqrt{100}}{105}=0$.) You need to start learning basic things, and you may also play with them by using a computer to see the graph of a function when it takes values near the critical values you are looking for. I suppose the best way for you is to receive an elementary explanation (this is possible), but I don't know what book I may recommend for it.
Markov and independent random variables This is a part of an exercise in Durrett's probability book. Consider the Markov chain on $\{1,2,\cdots,N\}$ with $p_{ij}=1/(i-1)$ when $j<i, p_{11}=1$ and $p_{ij}=0$ otherwise. Suppose that we start at point $k$. We let $I_j=1$ if $X_n$ visits $j$. Then $I_1,I_2,\cdots,I_{k-1}$ are independent. I don't find it obvious that $I_1,\cdots,I_{k-1}$ are independent. It is possible to prove the independence if we calculate all $P(\cap_{j\in J\subset\{1,\cdots,k-1\}}I_j)$, but this work is long and tedious. Since the independence was written as an obvious thing in this exercise, I assume that there is an easier way.
For any $j$, observe that $X_{3}|X_{2}=j-1,X_{1}=j$ has the same distribution as $X_{2}|X_{2} \neq j-1, X_{1}=j$. Since $X_{2}=j-1$ iff $I_{j-1}=1$, by Markovianity conclude that $I_{j-1}$ is independent of $(I_{j-2},\ldots,I_{1})$ given that $X_{1}=j$. Let's prove by induction that $I_{j-1}$ is independent of $(I_{j-2},\ldots,I_{1})$ given that $X_{1}=k$. I) $j=k$ follows straight from the first paragraph. II) Now assume $I_{a-1}$ is independent of $(I_{a-2},\ldots,I_{1})$ for all $a \geq j+1$. Thus, $(I_{k-1},\ldots,I_{j})$ is independent of $(I_{j-1},\ldots,I_{1})$. Hence, in order to prove that $I_{j-1}$ is independent of $(I_{j-2},\ldots,I_{1})$ we can condition on $(I_{k-1}=1,\ldots,I_{j}=1)$. This is the same as conditioning on $(X_{2}=k-1,\ldots,X_{k-j+1}=j)$. By Markovianity and temporal homogeneity, $(X_{k-j+2}^{\infty}|X_{k-j+1}=j,\ldots,X_{1}=k)$ is identically distributed to $(X_{2}^{\infty}|X_{1}=j)$. Using the first paragraph, we know that $I_{j-1}$ is independent of $(I_{j-2},\ldots,I_{1})$ given that $X_{1}=j$. Hence, by the equality of distributions, $I_{j-1}$ is independent of $(I_{j-2},\ldots,I_{1})$ given that $X_{1}=k$.
Difference between power law distribution and exponential decay This is probably a silly one, I've read in Wikipedia about power law and exponential decay. I really don't see any difference between them. For example, if I have a histogram or a plot that looks like the one in the Power law article, which is the same as the one for $e^{-x}$, how should I refer to it?
$$ \begin{array}{rl} \text{power law:} & y = x^{(\text{constant})}\\ \text{exponential:} & y = (\text{constant})^x \end{array} $$ That's the difference. As for "looking the same", they're pretty different: Both are positive and go asymptotically to $0$, but with, for example $y=(1/2)^x$, the value of $y$ actually cuts in half every time $x$ increases by $1$, whereas, with $y = x^{-2}$, notice what happens as $x$ increases from $1\text{ million}$ to $1\text{ million}+1$. The amount by which $y$ gets multiplied is barely less than $1$, and if you put "billion" in place of "million", then it's even closer to $1$. With the exponential function, it always gets multiplied by $1/2$ no matter how big $x$ gets. Also, notice that with the exponential probability distribution, you have the property of memorylessness.
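To see the "ratio of successive values" point numerically (a tiny illustration, nothing more): for $y=(1/2)^x$ the ratio $y(x+1)/y(x)$ is exactly $1/2$ for every $x$, while for $y=x^{-2}$ it creeps towards $1$.

```python
for x in [10, 1_000, 1_000_000]:
    pow_ratio = (x / (x + 1)) ** 2    # y(x+1)/y(x) for the power law y = x**-2
    print(x, pow_ratio)
# 10        -> 0.826...
# 1000      -> 0.998...
# 1000000   -> 0.999998...
```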
Test for convergence $\sum_{n=1}^{\infty}\{(n^3+1)^{1/3} - n\}$ I want to expand and test this $\{(n^3+1)^{1/3} - n\}$ for convergence/divergence. The edited version is: Test for convergence $\sum_{n=1}^{\infty}\{(n^3+1)^{1/3} - n\}$
Let $y=1/n$; then $\lim_{n\to \infty}\left[{(n^3+1)}^{1/3}-n\right]=\lim_{y\to0^+}\frac{(1+y^3)^{1/3}-1}{y}$. Using L'Hopital's Rule, this limit evaluates to $0$, so the terms converge to $0$. You can also see that as $n$ increases, the importance of $1$ in the expression ${(n^3+1)}^{1/3}-n$ decreases and $(n^3+1)^{1/3}$ approaches $n$; hence the expression converges to $0$ (as verified by the limit). For the series part, as @siminore pointed out, this difference ${(n^3+1)}^{1/3}-n$ is of the order of $1/n^2$: factoring $a^3-b^3=(a-b)(a^2+ab+b^2)$ with $a=(n^3+1)^{1/3}$ and $b=n$ gives $${(n^3+1)}^{1/3}-n=\frac{1}{(n^3+1)^{2/3}+n(n^3+1)^{1/3}+n^2}\leq\frac{1}{3n^2},$$ so the partial sums are bounded by $\sum_{1}^{\infty}1/n^2={\pi}^2/6$, and since the terms are positive the series converges.
Finding a point having the radius, chord length and another point Me and a friend have been trying to find a way to get the position of a second point (B on the picture) having the first point (A), the length of the chord (d) and the radius (r). It must be possible right? We know the solution will be two possible points but since it's a semicircle we also know the x coordinate of B will have to be lower than A and the y coordinate must be always greater than 0. Think you can help? Here's a picture to illustrate the example: Thanks in advance!
There are two variables $(a,b)$, the coordinates of $B$. Since $B$ lies on the circle, it satisfies the equation of the circle. Also, the distance of $B$ from $A$ is $d$; you can apply the distance formula to get an equation from this condition. Now you have two variables and two equations from the two conditions, so you can solve it yourself.
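If you want to see the two conditions written out, here is a small sympy sketch, under the assumption that the circle is centred at the origin with radius $r$ and keeping only the solution with non-negative $y$ (the function name and setup are mine, not from the question):

```python
from sympy import symbols, Eq, solve, sqrt

def find_B(ax, ay, r, d):
    bx, by = symbols('bx by', real=True)
    eqs = [Eq(bx**2 + by**2, r**2),                  # B lies on the circle
           Eq((bx - ax)**2 + (by - ay)**2, d**2)]    # distance from A to B is d
    return [s for s in solve(eqs, (bx, by)) if s[1] >= 0]

# Unit circle, A = (1, 0), chord of length sqrt(2): B should be (0, 1).
print(find_B(1, 0, 1, sqrt(2)))   # [(0, 1)]
```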
Implicit Differentiation $y''$ I'm trying to find $y''$ by implicit differentiation of this problem: $4x^2 + y^2 = 3$ So far, I was able to get $y'$ which is $\frac{-4x}{y}$ How do I go about getting $y''$? I am kind of lost on that part.
You have $$y'=-\frac{4x}y\;.$$ Differentiate both sides with respect to $x$: $$y''=-\frac{4y-4xy'}{y^2}=\frac{4xy'-4y}{y^2}\;.$$ Finally, substitute the known value of $y'$: $$y''=\frac4{y^2}\left(x\left(-\frac{4x}y\right)-y\right)=-\frac4{y^2}\cdot\frac{4x^2+y^2}y=-\frac{4(4x^2+y^2)}{y^3}\;.$$ But from the original equation we know that $4x^2+y^2=3$, so in the end we have $$y''=-\frac{12}{y^3}\;.$$
Derivative of $f(x)= (\sin x)^{\ln x}$ I am just wondering if i went ahead to solve this correctly? I am trying to find the derivative of $f(x)= (\sin x)^{\ln x}$ Here is what i got below. $$f(x)= (\sin x)^{\ln x}$$ $$f'(x)=\ln x(\sin x) \Rightarrow f'(x)=\frac{1}{x}\cdot\sin x + \ln x \cdot \cos x$$ Would that be the correct solution?
It's instructive to look at this particular logarithmic-differentiation situation generally: $$\begin{align} y&=u^{v}\\[0.5em] \implies \qquad \ln y &= v \ln u & \text{take logarithm of both sides}\\[0.5em] \implies \qquad \frac{y^{\prime}}{y} &= v \cdot \frac{u^{\prime}}{u}+v^{\prime}\ln u & \text{differentiate}\\ \implies \qquad y^{\prime} &= u^{v} \left( v \frac{u^{\prime}}{u} + v' \ln u \right) & \text{multiply through by $y$, which is $u^{v}$} \\ &= v \; u^{v-1} u^{\prime} + u^{v} \ln u \; v^{\prime} & \text{expand} \end{align}$$ Some (most?) people don't bother with the "expand" step, because right before that point the exercise is over anyway and they just want to move on. (Plus, generally, we like to see things factored.) Even so, look closely at the parts you get when you do bother: $$\begin{align} v \; u^{v-1} \; u^{\prime} &\qquad \text{is the result you'd expect from the Power Rule if $v$ were constant.} \\[0.5em] u^{v} \ln u \; v^{\prime} &\qquad \text{is the result you'd expect from the Exponential Rule if $u$ were constant.} \end{align}$$ So, there's actually a new Rule here: the Function-to-a-Function Rule is the "sum" of the Power Rule and Exponential Rule! Knowing FtaF means you can skip the logarithmic differentiation steps. For example, your example: $$\begin{align} \left( \left(\sin x\right)^{\ln x} \right)^{\prime} &= \underbrace{\ln x \; \left( \sin x \right)^{\ln x - 1} \cos x}_{\text{Power Rule}} + \underbrace{\left(\sin x\right)^{\ln x} \; \ln \sin x \; \frac{1}{x}}_{\text{Exponential Rule}} \end{align}$$ As I say, we generally like things factored, so you might want to manipulate the answer thusly, $$ \left( \left(\sin x\right)^{\ln x} \right)^{\prime} = \left( \sin x \right)^{\ln x} \left( \frac{\ln x \cos x}{\sin x} + \frac{\ln \sin x}{x} \right) = \left( \sin x \right)^{\ln x} \left( \ln x \cot x + \frac{\ln \sin x}{x} \right) $$ Another example: $$\begin{align} \left( \left(\tan x\right)^{\exp x} \right)^{\prime} &= \underbrace{ \exp x \; \left( \tan x \right)^{\exp x-1} \; \sec^2 x}_{\text{Power Rule}} + \underbrace{ \left(\tan x\right)^{\exp x} \ln \tan x \; \exp x}_{\text{Exponential Rule}} \\ &= \exp x \; \left( \tan x \right)^{\exp x} \left( \frac{\sec^2 x}{\tan x} + \ln \tan x \right) \\ &= \exp x \; \left( \tan x \right)^{\exp x} \left( \sec x \; \csc x + \ln \tan x \right) \end{align}$$ Note. Be careful invoking FtaF in a class --especially on a test-- where the instructor expects (demands) that you go through the log-diff steps every time. (Of course, learning and practicing those steps is worthwhile, because they apply to situations beyond FtaF.) On the other hand, if you explain FtaF to the class, you could be a hero for saving everyone a lot of effort with function-to-a-function derivatives.
Are $|X|$ and $\operatorname{sgn}(X)$ independent? Let $X$ be a real valued random variable. Let $\operatorname{sgn}(x)$ be $1$ when $x>0$, $-1$ when $x<0$ and $0$ when $x=0$. Why are $|X|$ and $\operatorname{sgn}(X)$ independent, when the density function of $X$ is symmetric with respect to $0$? Are $|X|$ and $\operatorname{sgn}(X)$ independent, when the density function of $X$ is not necessarily symmetric with respect to $0$? Thanks!
If $X$ is a continuous random variable (absolutely continuous with respect to Lebesgue measure), $$P(|X| \leq x|sgn(X)=1) = P(|X| \leq x|X > 0) = \frac{P(0 < X \leq x)}{P(X > 0)} = \frac{\int_{0}^{x}{f(t)dt}}{\int_{0}^{\infty}f(t)dt}$$ $$P(|X| \leq x|sgn(X)=-1) = P(|X| \leq x|X < 0) = \frac{P(-x \leq X < 0)}{P(X < 0)} = \frac{\int_{-x}^{0}{f(t)dt}}{\int_{-\infty}^{0}f(t)dt}$$ and $P(sgn(X)=0)=0$. If $f(x)$ is symmetric with respect to $0$, observe that $P(|X| \leq x|sgn(X)=1) = P(|X| \leq x|sgn(X)=-1)$ for any $x \geq 0$. Hence, $|X|$ is independent of $sgn(X)$. Now consider $X$ to be uniform in $(-1,2)$. Observe that $P(sgn(X)=1)=2/3$ and $P(sgn(X)=1||X|>1)=1$. Hence, $|X|$ and $sgn(X)$ are not independent. Also observe that it was important for $X$ to be continuous. For example, consider $X$ uniform in $\{-1,0,1\}$. Its mass function is symmetric with respect to $0$, but $P(sgn(X)=0) = 1/3$ and $P(sgn(X)=0||X|=0)=1$, and thus $sgn(X)$ and $|X|$ are not independent.
Existence of Consecutive Quadratic residues For any prime $p\gt 5$,prove that there are consecutive quadratic residues of $p$ and consecutive non-residues as well(excluding $0$).I know that there are equal number of quadratic residues and non-residues(if we exclude $0$), so if there are two consecutive quadratic residues, then certainly there are two consecutive non-residues,therefore, effectively i am seeking proof only for existence of consecutive quadratic residues. Thanks in advance.
The number of $k\in[0,p-1]$ such that $k$ and $k+1$ are both quadratic residues is equal to: $$ \frac{1}{4}\sum_{k=0}^{p-1}\left(1+\left(\frac{k}{p}\right)\right)\left(1+\left(\frac{k+1}{p}\right)\right)+\frac{3+\left(\frac{-1}{p}\right)}{4}, $$ where the extra term accounts for $k=0$ and $k=p-1$ (i.e. $k\equiv-1$), compensating for the fact that the Legendre symbol $\left(\frac{0}{p}\right)$ is $0$ even though $0$ is a quadratic residue. Since: $$ \sum_{k=0}^{p-1}\left(\frac{k}{p}\right)=\sum_{k=0}^{p-1}\left(\frac{k+1}{p}\right)=0, $$ the number of consecutive quadratic residues is equal to $$ \frac{p+3+\left(\frac{-1}{p}\right)}{4}+\frac{1}{4}\sum_{k=0}^{p-1}\left(\frac{k(k+1)}{p}\right). $$ By the multiplicativity of the Legendre symbol, for $k\neq 0$ we have $\left(\frac{k}{p}\right)=\left(\frac{k^{-1}}{p}\right)$, so: $$ \sum_{k=1}^{p-1}\left(\frac{k(k+1)}{p}\right) = \sum_{k=1}^{p-1}\left(\frac{1+k^{-1}}{p}\right)=\sum_{k=2}^{p}\left(\frac{k}{p}\right)=-1,$$ and we have $\frac{p+3}{4}$ consecutive quadratic residues if $p\equiv 1\pmod{4}$ and $\frac{p+1}{4}$ consecutive quadratic residues if $p\equiv -1\pmod{4}$.
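The counts can be double-checked numerically for small primes (treating $0$ as a residue and reading $k+1$ modulo $p$, exactly as in the computation above); this is only a verification:

```python
def consecutive_qr_pairs(p):
    qr = {pow(a, 2, p) for a in range(p)}        # quadratic residues mod p, including 0
    return sum(1 for k in range(p) if k in qr and (k + 1) % p in qr)

for p in [7, 11, 13, 17, 19, 23]:
    predicted = (p + 3) // 4 if p % 4 == 1 else (p + 1) // 4
    print(p, consecutive_qr_pairs(p), predicted)   # the two counts agree
```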
A book useful to learn lattices (discrete groups) Does anyone know a good book about lattices (as subgroups of a vector space $V$)?
These notes of mine on geometry of numbers begin with a section on lattices in Euclidean space. However they are a work in progress and certainly not yet fully satisfactory. Of the references I myself have been consulting for this material, the one I have found most helpful with regard to basic material on lattices is C.L. Siegel's Lectures on the Geometry of Numbers.
Tensors: Acting on Vectors vs Multilinear Maps I have the feeling that there are two very different definitions for what a tensor product is. I was reading Spivak and some other calculus-like texts, where the tensor product is defined as $(S \otimes T)(v_1,...v_n,v_{n+1},...,v_{n+m})= S(v_1,...v_n) * T(v_{n+1},...,v_{n+m}) $ The other definition I read in a book on quantum computation; it's defined for vectors and matrices and has several names, "tensor product, Kronecker Product, and Outer product": http://en.wikipedia.org/wiki/Outer_product#Definition_.28matrix_multiplication.29 I find this really annoying and confusing. In the first definition, we are taking tensor products of multilinear operators (the operators act on vectors) and in the second definition the operation IS ON vectors and matrices. I realize that matrices are operators but matrices aren't multilinear. Is there a connection between these definitions?
I can't comment yet (or I don't know how to if I can), but echo Thomas' response and want to add one thing. The tensor product of two vector spaces (or more generally, modules over a ring) is an abstract construction that allows you to "multiply" two vectors in that space. A very readable and motivated introduction is given in Dummit & Foote's book. (I always thought the actual construction seemed very strange before reading D&F -- they manage to make it intuitive, motivated as an attempt to extend the set of scalars you can multiply with). The collection of $k$-multilinear functions on a vector space is itself a vector space -- each multilinear map is a vector of that space. The connection between the two seemingly different definitions is that you're performing the "abstract" tensor product on those spaces of multilinear maps.
${10 \choose 4}+{11 \choose 4}+{12 \choose 4}+\cdots+{20 \choose 4}$ can be simplified as which of the following? ${10 \choose 4}+{11 \choose 4}+{12 \choose 4}+\cdots+{20 \choose 4}$ can be simplified as ? A. ${21 \choose 5}$ B. ${20 \choose 5}-{11 \choose 4}$ C. ${21 \choose 5}-{10 \choose 5}$ D. ${20 \choose 4}$ Please give me a hint. I'm unable to group the terms. By brute force, I'm getting ${21 \choose 5}-{10 \choose 5}$
What is the problem in this solution? If $S=\binom{10}{4} + \binom{11}{4} + \cdots + \binom{20}{4}$, we have $$S= \left \{ \binom{4}{4} + \binom{5}{4} + \cdots + \binom{20}{4}\right \} - \left \{\binom{4}{4} + \binom{5}{4} + \cdots + \binom{9}{4}\right \} = \binom{21}{5} - \binom{10}{5},$$ using the hockey-stick identity $\sum_{m=r}^{n}\binom{m}{r}=\binom{n+1}{r+1}$ for both braces.
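A one-liner check of the identity (nothing deep, just arithmetic):

```python
from math import comb

s = sum(comb(m, 4) for m in range(10, 21))
print(s, comb(21, 5) - comb(10, 5))   # 20097 20097
```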
How can a Bézier curve be periodic? As I know it, a periodic function is a function that repeats its values in regular intervals or period. However Bézier curves can also be periodic which means closed as opposed to non-periodic which means open. How is this related or possible?
A curve $C$ parameterised over the interval $[a,b]$ is closed if $C(a) = C(b)$. Or, in simpler terms, a curve is closed if its start point coincides with its end-point. A Bézier curve will be closed if its initial and final control points are the same. A curve $C$ is periodic if $C(t+p) = C(t)$ for all $t$ ($p \ne 0$ is the period). Bézier curves are described by polynomials, so a Bézier curve can not be periodic. Just making its start and end tangents match (as J. M. suggested) does not make it periodic. A spline (constructed as a string of Bézier curves) can be periodic. People in the CAD field are sloppy in this area -- they often say "periodic" when they mean "closed".
Burnside's Lemma I've been trying to understand what Burnside's Lemma is, and how to apply it, but the wiki page is confusing me. The problem I am trying to solve is: You have 4 red, 4 white, and 4 blue identical dinner plates. In how many different ways can you set a square table with one plate on each side if two settings are different only if you cannot rotate the table to make the settings match? Could someone explain how to use it for this problem, and if its not too complicated, try to explain to me what exactly it is doing in general?
There are four possible rotations of (clockwise) 0, 90, 180 and 270 degrees respectively. Let us denote the 90 degree rotation by $A$, so the other rotations are then its powers $A^i,i=0,1,2,3$. The exponent is only relevant modulo 4, IOW we have a cyclic group of 4 elements. These rotations act on the set of plate arrangements. So if $RWBR$ denotes the arrangement, where there is a Red plate on the North side, White on the East side, Blue on the South, and another Red plate on the Western seat, then $$ A(RWBR)=RRWB, $$ because rotating the table 90 degrees clockwise moves the North seat to East et cetera. The idea in using Burnside's lemma is to calculate how many arrangements are fixed under the various rotations (or whichever motions you won't count as resulting in a distinct arrangement). Let's roll. All $3^4=81$ arrangements are fixed under not doing anything to the table. So $A^0$ has $81$ fixed points. If an arrangement stays the same upon having a 90 degree rotation act on it, then the plate colors at North and East, East and South, South and West, West and North must all match. IOW we use a single color only. Therefore the rotation $A^1$ has only $3$ fixed points: RRRR, WWWW and BBBB. The same applies to the 270 degree rotation $A^3$. Only 3 fixed points. The 180 degree rotation $A^2$ is more interesting. This rotation swaps the North/South and East/West pairs of plates. For an arrangement to be fixed under this rotation, it is necessary and sufficient that those pairs of plates had matching colors, but we can use any of the three colors for both N/S and E/W, so altogether there are 9 arrangements stable under the 180 degree rotation. The Burnside formula then tells us that the number of distinguishable table arrangements is $$ \frac14(81+3+9+3)=\frac{96}4=24. $$
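If you want to convince yourself of the count without Burnside, a brute-force enumeration over all $3^4$ settings, identifying settings that are rotations of each other, gives the same $24$ (illustrative Python, names are mine):

```python
from itertools import product

def canonical(setting):
    # representative of the rotation class: lexicographically smallest rotation
    return min(setting[i:] + setting[:i] for i in range(4))

distinct = {canonical(s) for s in product('RWB', repeat=4)}
print(len(distinct))   # 24
```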
Separatedness of a composition, where one morphism is surjective and universally closed. I'm stuck with the following problem: Let $f:X \rightarrow Y$ and $g:Y \rightarrow Z$ be scheme morphisms such that f is surjective and universally closed and such that $g \circ f$ is separated. The claim is then that g is also separated. I've been trying to use the fact that f is (as any surjective morphism) universally surjective, and somehow deduce that the diagonal $\Delta(Y)$ is closed in $Y \times_Z Y$ but I haven't gotten that far. I would love some hints on how to do this. Full answers are OK, but I would prefer to have hints! Thank you!
You know that the image $\Delta_X (X)$ of $X$ under $\Delta_X$ is closed in $X\times_Z X$, because $g\circ f$ is separated. Take the image $Im=(f\times f) (\Delta_X (X)) \subset Y\times_Z Y$ of this closed set $\Delta_X (X)$ under $f\times f$. This image $Im$ is then closed in $Y\times_Z Y$ (because $f$ is universally closed) and it coincides with... Yes, exactly: bingo!
Does a closed form formula for the series ${n \choose n-1} + {n+1 \choose n-2} + {n+2 \choose n-3} + \cdots + {2n - 1 \choose 0}$ exist. $${n \choose n-1} + {n+1 \choose n-2} + {n+2 \choose n-3} + \cdots + {2n - 1 \choose 0}$$ For the above series, does a closed form exist?
Your series is $$\sum_{k=0}^{n-1}\binom{n+k}{n-k-1}=\sum_{k=0}^{n-1}\binom{2n-1-k}k\;,$$ which is the special case of the series $$\sum_{k\ge 0}\binom{m-k}k$$ with $m=2n-1$. It’s well-known (and easy to prove by induction) that $$\sum_{k\ge 0}\binom{m-k}k=f_{m+1}\;,$$ where $f_m$ is the $m$-th Fibonacci number: $f_0=0,f_1=1$, and $f_n=f_{n-1}+f_{n-2}$ for $n>1$. Thus, the sum of your series is $f_{2n}$. The Binet formula gives a closed form for $f_{2n}$: $$f_{2n}=\frac{\varphi^{2n}-\hat\varphi^{2n}}{\sqrt5}\;,$$ where $\varphi=\frac12\left(1+\sqrt5\right)$ and $\hat\varphi=\frac12\left(1-\sqrt5\right)$. A computationally more convenient expression is $$f_{2n}=\left\lfloor\frac{\varphi^{2n}}{\sqrt5}+\frac12\right\rfloor\;.$$
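A short numerical check that the sum really is $f_{2n}$ (verification only):

```python
from math import comb

def fib(m):                      # f_0 = 0, f_1 = 1, f_2 = 1, ...
    a, b = 0, 1
    for _ in range(m):
        a, b = b, a + b
    return a

for n in range(1, 10):
    s = sum(comb(n + k, n - k - 1) for k in range(n))
    print(n, s, fib(2 * n))      # the last two columns agree
```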
How to prove if a function is bijective? I am having problems being able to formally demonstrate when a function is bijective (and therefore, surjective and injective). Here's an example: How do I prove that $g(x)$ is bijective? \begin{align} f &: \mathbb R \to\mathbb R \\ g &: \mathbb R \to\mathbb R \\ g(x) &= 2f(x) + 3 \end{align} However, I fear I don't really know how to do such. I realize that the above example implies a composition (which makes things slightly harder?). In any case, I don't understand how to prove such (be it a composition or not). For injective, I believe I need to prove that different elements of the codomain have different preimages in the domain. Alright, but, well, how? As for surjective, I think I have to prove that all the elements of the codomain have one, and only one preimage in the domain, right? I don't know how to prove that either! EDIT f is a bijection. Sorry I forgot to say that.
The way to verify something like that is to check the definitions one by one and see if $g(x)$ satisfies the needed properties. Recall that $F\colon A\to B$ is a bijection if and only if $F$ is: * *injective: $F(x)=F(y)\implies x=y$, and *surjective: for all $b\in B$ there is some $a\in A$ such that $F(a)=b$. Assuming that $R$ stands for the real numbers, we check. Is $g$ injective? Take $x,y\in R$ and assume that $g(x)=g(y)$. Therefore $2f(x)+3=2f(y)+3$. We can cancel out the $3$ and divide by $2$, then we get $f(x)=f(y)$. Since $f$ is a bijection, then it is injective, and we have that $x=y$. Is $g$ surjective? Take some $y\in R$; we want to show that $y=g(x)$, that is, $y=2f(x)+3$. Subtract $3$ and divide by $2$, again we have $\frac{y-3}2=f(x)$. As before, if $f$ was surjective then we are about done, simply denote $w=\frac{y-3}2$, since $f$ is surjective there is some $x$ such that $f(x)=w$. Show now that $g(x)=y$ as wanted. Alternatively, you can use theorems. What sort of theorems? The composition of bijections is a bijection. If $f$ is a bijection, show that $h_1(x)=2x$ is a bijection, and show that $h_2(x)=x+3$ is also a bijection. Now we have that $g=h_2\circ h_1\circ f$ and it is therefore a bijection.
When will these two trains meet each other I can't seem to solve this problem. A train leaves point A at 5 am and reaches point B at 9 am. Another train leaves point B at 7 am and reaches point A at 10:30 am. When will the two trains meet? Ans 56 min Here is where I get stuck. I know that when the two trains meet, the sum of the distances they have travelled will be equal to the total distance; here is what I know so far: Time traveled from A to B by Train 1 = 4 hours Time traveled from B to A by Train 2 = 7/2 hours Now if S = total distance from A to B and t is the time they meet each other, then $$\text{Distance}_{\text{Total}}= S =\frac{St}{4} + \frac{2St}{7} $$ Now is there any way I could get the value of S so that I could use it here?
We do not need $S$. The speed of the train starting from $A$ is $S/4$ while the speed of the train starting from $B$ is $S/(7/2) = 2S/7$. Let the trains meet at time $t$, where $t$ is measured in hours and is the time taken by the train from $B$ when the two trains meet. Note that when train $B$ is about to start, train $A$ will have already covered half its distance i.e. a distance of $S/2$. Hence, the distance traveled by train $A$ when they meet is $\dfrac{S}2 + \dfrac{S \times t}4$. The distance traveled by train $B$ when they meet is $\dfrac{2 \times S \times t}7$. Hence, we get that $$S = \dfrac{S}2 + \dfrac{S \times t}{4} + \dfrac{S \times 2 \times t}{7}$$ We can cancel the $S$ since $S$ is non-zero to get $$\dfrac12 = \dfrac{t}4 + \dfrac{2t}7$$ Can you solve for $t$ now? (Note that $t$ is in hours. You need to multiply by $60$ to get the answer in minutes.)
Square units of area in a circle I'm studying for the GRE and came across the practice question quoted below. I'm having a hard time understanding the meaning of the words they're using. Could someone help me parse their language? "The number of square units in the area of a circle '$X$' is equal to $16$ times the number of units in its circumference. What are the diameters of circles that could fit completely inside circle $X$?" For reference, the answer is $64$, and the "explanation" is based on $\pi r^2 = 16(2\pi r).$ Thanks!
Let the diameter be $d$. Then the number of square units in the area of the circle is $(\pi/4)d^2$. This is $16\pi d$. That forces $d=64$. Remark: Silly problem: it is unreasonable to have a numerical equality between area and circumference. Units don't match, the result has no geometric significance. "The number of square units in the area of" is a fancy way of saying "the area of."
Is this vector derivative correct? I want to comprehend the derivative of the cost function in linear regression involving Ridge regularization, the equation is: $$L^{\text{Ridge}}(\beta) = \sum_{i=1}^n (y_i - \phi(x_i)^T\beta)^2 + \lambda \sum_{j=1}^k \beta_j^2$$ Where the sum of squares can be rewritten as: $$L^{}(\beta) = ||y-X\beta||^2 + \lambda \sum_{j=1}^k \beta_j^2$$ For finding the optimum its derivative is set to zero, which leads to this solution: $$\beta^{\text{Ridge}} = (X^TX + \lambda I)^{-1} X^T y$$ Now I would like to understand this and try to derive it myself, heres what I got: Since $||x||^2 = x^Tx$ and $\frac{\partial}{\partial x} [x^Tx] = 2x^T$ this can be applied by using the chain rule: \begin{align*} \frac{\partial}{\partial \beta} L^{\text{Ridge}}(\beta) = 0^T &= -2(y - X \beta)^TX + 2 \lambda I\\ 0 &= -2(y - X \beta) X^T + 2 \lambda I\\ 0 &= -2X^Ty + 2X^TX\beta + 2 \lambda I\\ 0 &= -X^Ty + X^TX\beta + 2 \lambda I\\ &= X^TX\beta + 2 \lambda I\\ (X^TX + \lambda I)^{-1} X^Ty &= \beta \end{align*} Where I strugle is the next-to-last equation, I multiply it with $(X^TX + \lambda I)^{-1}$ and I don't think that leads to a correct equation. What have I done wrong?
You have differentiated $L$ incorrectly, specifically the $\lambda ||\beta||^2$ term. The correct expression is: $\frac{\partial L(\beta)}{\partial \beta} = 2(( X \beta - y)^T X + \lambda \beta^T)$, from which the desired result follows by equating to zero and taking transposes.
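A small numpy sketch checking this: compute $\beta$ from the closed form and verify that the gradient $2((X\beta-y)^TX+\lambda\beta^T)$ vanishes there (random data, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 4))
y = rng.standard_normal(50)
lam = 0.7

beta = np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ y)   # (X^T X + lam I)^{-1} X^T y

grad = 2 * ((X @ beta - y) @ X + lam * beta)                 # gradient of the ridge loss at beta
print(np.allclose(grad, 0))                                  # True, up to round-off
```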
Your favourite application of the Baire Category Theorem I think I remember reading somewhere that the Baire Category Theorem is supposedly quite powerful. Whether that is true or not, it's my favourite theorem (so far) and I'd love to see some applications that confirm its neatness and/or power. Here's the theorem (with proof) and two applications: (Baire) A non-empty complete metric space $X$ is not a countable union of nowhere dense sets. Proof: Let $X = \bigcup U_i$ where $\mathring{\overline{U_i}} = \varnothing$. We construct a Cauchy sequence as follows: Let $x_1$ be any point in $(\overline{U_1})^c$. We can find such a point because $(\overline{U_1})^c \subset X$ and $X$ contains at least one non-empty open set (if nothing else, itself) but $\mathring{\overline{U_1}} = \varnothing$ which is the same as saying that $\overline{U_1}$ does not contain any open sets hence the open set contained in $X$ is contained in $\overline{U_1}^c$. Hence we can pick $x_1$ and $\varepsilon_1 > 0$ such that $B(x_1, \varepsilon_1) \subset (\overline{U_1})^c \subset U_1^c$. Next we make a similar observation about $U_2$ so that we can find $x_2$ and $\varepsilon_2 > 0$ such that $B(x_2, \varepsilon_2) \subset \overline{U_2}^c \cap B(x_1, \frac{\varepsilon_1}{2})$. We repeat this process to get a sequence of balls such that $B_{k+1} \subset B_k$ and a sequence $(x_k)$ that is Cauchy. By completeness of $X$, $\lim x_k =: x$ is in $X$. But $x$ is in $B_k$ for every $k$ hence not in any of the $U_i$ and hence not in $\bigcup U_i = X$. Contradiction. $\Box$ Here is one application (taken from here): Claim: $[0,1]$ contains uncountably many elements. Proof: Assume that it contains countably many. Then $[0,1] = \bigcup_{x \in (0,1)} \{x\}$ and since $\{x\}$ are nowhere dense sets, $X$ is a countable union of nowhere dense sets. But $[0,1]$ is complete, so we have a contradiction. Hence $X$ has to be uncountable. And here is another one (taken from here): Claim: The linear space of all polynomials in one variable is not a Banach space in any norm. Proof: "The subspace of polynomials of degree $\leq n$ is closed in any norm because it is finite-dimensional. Hence the space of all polynomials can be written as countable union of closed nowhere dense sets. If there were a complete norm this would contradict the Baire Category Theorem."
One of my favorite (albeit elementary) applications is showing that $\mathbb{Q}$ is not a $G_{\delta}$ set.
Combinatorics thinking I saw this question in a book I've been reading: in a group of four mathematicians and five physicians, how many groups of four people can be created if at least two people are mathematicians? The solution is obtained by ${4 \choose 2}{5 \choose 2} + {4 \choose 3}{5 \choose 1} + {4 \choose 4}{5 \choose 0} = 81$. But I thought of the present (wrong) solution: Step 1. Choose two mathematicians. It gives ${4 \choose 2}$ different ways of choosing. Step 2. Choose two people from the seven people left. It gives ${7 \choose 2}$ ways of choosing. Step 3. Multiply. ${4 \choose 2}{7 \choose 2} = 126 = {9 \choose 4}$. It is equivalent of choosing four people in one step. Clearly wrong. I really don't know what's wrong in my "solution". It seems like I am counting some cases twice, but I wasn't able to find the error. What am I missing here? PS. It is not homework.
You have counted the number of ways to choose two mathematicians leading the group, plus two regular members which can be mathematicians or physicians. A group containing three mathematicians is therefore counted $\binom{3}{2}=3$ times (once for each pair of its mathematicians you could have picked in Step 1), and the all-mathematician group is counted $\binom{4}{2}=6$ times; that accounts exactly for the overcount from $81$ to $126$.
Trying to find the name of this Nim variant Consider this basic example of subtraction-based Nim before I get to my full question: Let $V$ represent all valid states of a Nim pile (the number of stones remaining): $V = 0,1,2,3,4,5,6,7,8,9,10$ Let $B$ be the bound on the maximum number of stones I can remove from the Nim pile in a single move (minimum is always at least 1): $B = 3$ Optimal strategy in a two-player game then is to always ensure that at the end of your turn, the number of stones in the pile is a number found in $V$, and that it is congruent to $0$ modulo $(B+1)$. During my first move I remove 2 stones because $8$ modulo $4$ is $0$. My opponent removes anywhere from 1-3 stones, but it doesn't matter because I can then reduce the pile to $4$ because $4$ modulo $4$ is $0$. Once my opponent moves, I can take the rest of the pile and win. This is straightforward, but my question is about a more advanced version of this, specifically when $V$ does not include all the numbers in a range. Some end-states of the pile are not valid, which implies that I cannot find safe positions by applying the modulo $(B+1)$ rule. Does this particular variant of Nim have a name that I can look up for further research? Is there another way to model the pile?
These are known as subtraction games; in general, for some set $S=\{s_1, s_2, \ldots s_n\}$ the game $\mathcal{S}(S)$ is the game where each player can subtract any element of $S$ from a pile. (So your simplified case is the game $\mathcal{S}(\{1\ldots B\})$) The nim-values of single-pile positions in these games are known to be ultimately periodic, and there's a pairing phenomenon that shows up between 0-values and 1-values, but generically there isn't much known about these games with $n\gt 2$. Berlekamp, Conway, and Guy's Winning Ways For Your Mathematical Plays has a section on subtraction games; as far as I can tell, it's still the canonical reference for combinatorial game theory. The Games of No Chance collections also have some information on specific subtraction games, and it looks like an article on games with three-element subtraction sets showed up recently on arxiv ( http://arxiv.org/abs/1202.2986 ); that might be another decent starting point for references into the literature.
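For experimenting with a specific subtraction set, the single-pile nim-values can be computed with the standard mex recursion; the sketch below (my own code, not from the references) also lets you see the eventual periodicity and recover the multiple-of-$(B+1)$ rule for $S=\{1,\ldots,B\}$.

```python
def grundy_values(subtraction_set, max_pile):
    g = [0] * (max_pile + 1)
    for pile in range(1, max_pile + 1):
        reachable = {g[pile - s] for s in subtraction_set if s <= pile}
        mex = 0                          # minimum excludant of the reachable values
        while mex in reachable:
            mex += 1
        g[pile] = mex
    return g

print(grundy_values({1, 2, 3}, 12))   # [0, 1, 2, 3, 0, 1, 2, 3, ...]  (P-positions = multiples of 4)
print(grundy_values({2, 5, 6}, 20))   # an "irregular" set; still eventually periodic
```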
Question About Concave Functions It is easy to prove that no non-constant positive concave function exists on $\Bbb R$ (for example by integrating: $ u'' \leq 0 \to u' \leq c \to u \leq cx+c_2 $, and since $u>0$, we obviously get a contradiction). Can this result be generalized to $ \Bbb R^2 $ and the Laplacian? Is there an easy way to see this? (i.e. no non-constant positive real-valued function with non-positive Laplacian exists) Thanks!
Let $u$ be strictly concave and twice differentiable in $\mathbb{R}^{2}$; then $v(x) = u(x,0)$ is strictly concave and twice differentiable in $\mathbb{R}$, and hence, by the one-dimensional argument, it must assume negative values.
Images of sections of a sheaf I'm currently reading a paper by X. Caicedo containing an introduction to sheaves. On page 8 he claims, that for every sheaf of sets $p:E\to X$ and every section $\sigma:U\to E$ (U being open in X) the image $\sigma(U)$ is open. This statement is proved by picking a point $e\in\sigma(U)$, an open neighborhood S of e, which satisfies * *$p(S)$ is open in X, *$p\restriction S$ is a homeomorphism and arriving at an open set $\sigma(U)\supseteq S\cap\sigma(U)=p^{-1}(p(S)\cap U)$. I think the "$\supseteq$" part of this equation does not hold, if for example E is equipped with the discrete topology and the stalk of $p(e)$ has more than one element. I have tried to show that $(p\restriction S)^{-1}(p(S)\cap U) = p^{-1}(U)\cap S$ is contained in $\sigma(U)$, but all attempts at that felt quite clumsy, leading me to believe I have missed something important about the structure of a sheaf.
In order to prove $\sigma(U)$ is open, pick $e=\sigma(u_0)\in\sigma(U)$ and a set $S$ as in the two conditions above. Since a section is continuous, $V:=\sigma^{-1}(S)\cap p(S)$ is an open neighbourhood of $u_0$ in $X$. For every $u\in V$ we have $\sigma(u)\in S$ and $p(\sigma(u))=u$, so $\sigma(u)=(p\restriction S)^{-1}(u)$. Hence $\sigma(V)=(p\restriction S)^{-1}(V)$ is open in $S$, hence open in $E$, and $e\in\sigma(V)\subseteq\sigma(U)$. Since $e$ was arbitrary, $\sigma(U)$ is open. Note that one really does have to shrink $p(S)\cap U$ to $V$ here: for the original $S$ the equality $S\cap\sigma(U)=p^{-1}(p(S)\cap U)$ can indeed fail, which is exactly the point you raised.
Average number of times it takes for something to happen given a chance Given a chance between 0% and 100% of getting something to happen, how would you determine the average amount of tries it will take for that something to happen? I was thinking that $\int_0^\infty \! (1-p)^x \, \mathrm{d} x$ where $p$ is the chance would give the answer, but doesn't that also put in non-integer tries?
Here's an easy way to see this, on the assumption that the average actually exists (it might otherwise be a divergent sum, for instance). Let $m$ be the average number of trials before the event occurs. There is a $p$ chance that it occurs on the first try. On the other hand, there is a $1-p$ chance that it doesn't happen, in which case we have just spent one try, and on average it will take $m$ further tries. Therefore $m = (p)(1) + (1-p)(1+m) = 1 + (1-p)m$, which is easily rearranged to obtain $m = 1/p$.
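A quick Monte Carlo check of $m=1/p$ (not part of the argument, just a sanity test):

```python
import random

def trials_until_success(p):
    n = 1
    while random.random() >= p:   # each trial succeeds independently with probability p
        n += 1
    return n

p = 0.25
samples = [trials_until_success(p) for _ in range(100_000)]
print(sum(samples) / len(samples))   # close to 1/p = 4
```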
Linear system with positive semidefinite matrix I have a linear system $Ax=b$, where * *$A$ is symmetric, positive semidefinite, and positive. $A$ is a variance-covariance matrix. *vector $b$ has elements $b_1>0$ and the rest $b_i<0$, for all $i \in \{2, \dots, N\}$. Prove that the first component of the solution is positive, i.e., $x_1>0$. Does anybody have any idea?
I don't think $x_1$ must be positive. A counter example might be a positive definite matrix $A = \begin{pmatrix} 1 & -0.2 \\ -0.2 & 1 \end{pmatrix}$ with its inverse matrix $A^{-1}$ having $(A^{-1})_{11}, (A^{-1})_{12} > 0$. - Edit: Sorry. A counter example might be a normalized covariance matrix $ A= \left( \begin{array}{cccc} 1 & 0.6292 & 0.6747 & 0.7208 \\ 0.6292 & 1 & 0.3914 & 0.0315 \\ 0.6747 & 0.3914 & 1 & 0.6387 \\ 0.7208 & 0.0315 & 0.6387 & 1 \end{array} \right) $.
Determine whether $\sum\limits_{n=1}^\infty \frac{(-3)^{n-1}}{4^n}$ is convergent or divergent. If convergent, find the sum. $$\sum\limits_{n=1}^\infty \frac{(-3)^{n-1}}{4^n}$$ It's geometric, since the common ratio $r$ appears to be $\frac{-3}{4}$, but this is where I get stuck. I think I need to do this: let $f(x) = \frac{(-3)^{x-1}}{4^x}$. $$\lim\limits_{x \to \infty}\frac{(-3)^{x-1}}{4^x}$$ Is this how I handle this exercise? I still cannot seem to get the answer $\frac{1}{7}$
If $\,a, ar, ar^2,...\,$ is a geometric series with $\,|r|<1\,$, then $$\sum_{n=0}^\infty ar^n=\lim_{N\to\infty}\sum_{n=0}^{N} ar^n=\lim_{N\to\infty}\frac{a(1-r^{N+1})}{1-r}=\frac{a}{1-r},$$ since $\,r^N\xrightarrow [N\to\infty]{} 0\Longleftrightarrow |r|<1\,$, and thus $$\sum_{n=1}^\infty\frac{(-3)^{n-1}}{4^n}=\frac{1}{4}\sum_{n=0}^\infty \left(-\frac{3}{4}\right)^n=\frac{1}{4}\cdot\frac{1}{1-\left(-\frac{3}{4}\right)}=\frac{1}{4}\cdot\frac{1}{\frac{7}{4}}=\frac{1}{7}.$$
$10+10\times 0$ equals $0$ or $10$ I thought $10+10\times 0$ equals $0$ because: $$10+10 = 20$$ And $$20\times 0 = 0$$ I know about BEDMAS and came up with the conclusion that it should be $0$, not $10$. But as per this, the answer is $10$; are they right?
To elucidate what you said above in the original post, consider that $20\times0=0$, and consider also that $10+10=20$. If we have two equations like that, with a number $n$ by itself on one side of an equation and $m$ on the other side, and $n$ also appearing in the middle of a formula elsewhere, we should then have the ability to replace $n$ by $m$ in the middle of that formula. The rule of replacement basically says this. In other words, since $20=10+10$ and $20\times0=0$, we replace $20$ by $10+10$ in $20\times0=0$ and obtain $10+10\times0=0$, right? But BEDMAS says that $10+10\times0=10$. Why the difference? The catch here lies in the fact that the $10+10\times0$ obtained in the first instance does NOT mean the same thing as $10+10\times0$, given by fiat, in the second instance. Technically speaking, neither $20\times0=0$ nor $10+10=20$ is quite precise enough that you can use the rule of replacement as happened above. More precisely, $20\times0=0$ abbreviates $(20\times0)=0$, and $10+10=20$ abbreviates $(10+10)=20$. Keeping that in mind, we can see that using the rule of replacement here leads us to $((10+10)\times0)=0$, or more shortly $(10+10)\times0=0$. BEDMAS says that $10+10\times0$ means $(10+(10\times0))$, or more shortly $10+(10\times0)$, which differs from $(10+10)\times0$. So the problem arises because the infix notation you've used makes it necessary to keep parentheses in formulas if you wish to use the rule of replacement as mechanically as you did. If you do express all formulas with complete parentheses in infix notation, then BEDMAS becomes unnecessary. If you wish to drop parentheses and use the rule of replacement mechanically as you did, then you'll need to write formulas in Polish notation, Reverse Polish notation, or fully parenthesized infix notation, instead of partially parenthesized, "normal" infix notation. If you wish to keep BEDMAS and similar conventions around, and write in normal infix notation, then you have to refrain from applying the rule of replacement as mechanically as you did. Conventional mathematicians and authors of our era generally appear to prefer the latter. I don't claim to understand why they appear to prefer a notation that makes such a simple logical rule, in some cases at least, harder to use than needed.
Proving $\mathrm e <3$ Well I am just asking myself if there's a more elegant way of proving $$2<\exp(1)=\mathrm e<3$$ than doing it by induction and using the fact of $\lim\limits_{n\rightarrow\infty}\left(1+\frac1n\right)^n=\mathrm e$, is there one (or some) alternative way(s)?
It's equivalent to show that the natural logarithm of 3 is bigger than 1, but this is $$ \int_1^3 \frac{dx}{x}. $$ A right hand sum is guaranteed to underestimate this integral, so you just need to take a right hand sum with enough rectangles to get a value larger than 1.
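Concretely, $\ln 3\approx1.0986$, and a right-hand sum with just $8$ rectangles already exceeds $1$; here is a tiny computation illustrating it:

```python
def right_sum(n):
    h = 2 / n                                     # [1, 3] split into n pieces
    return sum(h / (1 + k * h) for k in range(1, n + 1))

for n in [4, 8, 16]:
    print(n, right_sum(n))
# 4 -> 0.95, 8 -> 1.0199..., 16 -> 1.0581...   (all underestimates of ln 3)
```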
De-arrangement in permutation and combination This article talks about de-arrangement in permutation combination. Funda 1: De-arrangement If $n$ distinct items are arranged in a row, then the number of ways they can be rearranged such that none of them occupies its original position is, $$n! \left(\frac{1}{0!} – \frac{1}{1!} + \frac{1}{2!} – \frac{1}{3!} + \cdots + (-1)^n \frac{1}{n!}\right).$$ Note: De-arrangement of 1 object is not possible. $\mathrm{Dearr}(2) = 1$; $\mathrm{Dearr}(3) = 2$; $\mathrm{Dearr}(4) =12 – 4 + 1 = 9$; $\mathrm{Dearr}(5) = 60 – 20 + 5 – 1 = 44$. I am not able to understand the logic behind the equation. I searched in the internet, but could not find any links to this particular topic. Can anyone explain the logic behind this equation or point me to some link that does it ?
A while back, I posted three ways to derive the formula for derangements. Perhaps reading those might provide some insight into the equation above.
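If it helps to see the formula in action, here is a brute-force check against the inclusion-exclusion expression (verification only):

```python
from itertools import permutations
from math import factorial

def derangements_formula(n):
    return round(factorial(n) * sum((-1) ** k / factorial(k) for k in range(n + 1)))

def derangements_bruteforce(n):
    return sum(all(p[i] != i for i in range(n)) for p in permutations(range(n)))

for n in range(2, 8):
    print(n, derangements_formula(n), derangements_bruteforce(n))
# 2 1 1, 3 2 2, 4 9 9, 5 44 44, 6 265 265, 7 1854 1854
```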
How to solve infinite repeating exponents How do you approach a problem like (solve for $x$): $$x^{x^{x^{x^{...}}}}=2$$ Also, I have no idea what to tag this as. Thanks for any help.
I'm just going to give you a HUGE hint, and you'll get it right away. Let $f(x)$ be the left-hand expression. Clearly, the left-hand side is equal to $x^{f(x)}$. Now, see what you can do with it.
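If you want a quick numerical sanity check once the hint has led you to a candidate value, here is a small sketch (the function name, iteration count, and starting value are arbitrary choices of mine; it simply evaluates the tower from the top down and assumes it converges):

```python
def tower(x, iterations=100):
    """Approximate x^(x^(x^...)) by iterating t -> x**t, assuming the tower converges."""
    t = 1.0
    for _ in range(iterations):
        t = x ** t
    return t

# Plug in whatever candidate the hint leads you to and check whether
# tower(candidate) comes out close to 2.
```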
FFT with a real matrix - why store just half the coefficients? I know that when I perform a real-to-complex FFT, half the frequency-domain data is redundant due to symmetry. This is only the case in one axis of a 2D FFT though. I can think of a 2D FFT as two 1D FFT operations: the first operates on all the rows, and for a real-valued image this will give you complex row values. In the second stage I apply a 1D FFT to every column, but since the row values are now complex this will be a complex-to-complex FFT with no redundancy in the output. Hence I only need width / 2 points in the horizontal axis, but you still need height points in the vertical axis. (Thanks to Paul R.) My question is: I read that the symmetry is just "every term in the right part is the complex conjugate of the left part". I have code that I know for sure is right that does this:

* take a real matrix as input -> FFT -> store JUST half the width (half the coefficients, the nonredundant ones) but the full height
* perform a pointwise multiplication (alias circular convolution in time, I know, the matrices are padded) with another matrix
* return with an IFFT on the half-width, full-height matrix -> to real values. And that's the convolution result.

Why does it work? I mean: the skipped complex conjugates (the negative frequencies) aren't of any use to the multiplication? Why is that? To ask this as simply as I can: why do I discard half of the complex data from a real FFT to perform calculations? Aren't they important too? They're complex conjugate numbers after all.
A real input contains half the information of a complex input of the same size (the imaginary part is identically zero and carries no information). The output, however, is in complex form, which is a double-size container relative to the real input. So the complex output necessarily contains a duplicate (conjugate) part that we can eliminate without any loss. I tried to be as simple as possible.
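A concrete way to see that nothing is lost is to compare the two routes on a small example. Below is a minimal NumPy sketch (the 8×8 size and the random test arrays are just illustrative choices of mine): the circular convolution computed from the full complex spectrum agrees with the one computed from the real-FFT half spectrum, which stores only width/2 + 1 columns.

```python
import numpy as np

# Two real-valued "images" (assume any required padding was already done,
# so the pointwise product corresponds to circular convolution).
a = np.random.rand(8, 8)
b = np.random.rand(8, 8)

# Route 1: full complex FFT, all 8 x 8 coefficients are kept.
full = np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)).real

# Route 2: real FFT, only 8 x (8//2 + 1) = 8 x 5 coefficients are kept;
# the discarded columns are complex conjugates of the stored ones.
half = np.fft.irfft2(np.fft.rfft2(a) * np.fft.rfft2(b), s=a.shape)

print(np.allclose(full, half))  # True: the dropped half carried no extra information
```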
Are there problems that are optimally solved by guess and check? For example, let's say the problem is: What is the square root of 3 (to x bits of precision)? One way to solve this is to choose a random real number less than 3 and square it. 1.40245^2 = 1.9668660025 2.69362^2 = 7.2555887044 ... Of course, this is a very slow process. Newton-Raphson gives the solution much more quickly. My question is: Is there a problem for which this process is the optimal way to arrive at its solution? I should point out that information used in each guess cannot be used in future guesses. In the square root example, the next guess could be biased by the knowledge of whether the square of the number being checked was less than or greater than 3.
There are certainly problems where a brute force search is quicker than trying to remember (or figure out) a smarter approach. Example: Does 5 have a cube root modulo 11? An example of a slightly different nature is this recent question, where an exhaustive search of the (very small) solution space saves a lot of grief and uncertainty compared to attempting to perfect a "forward" argument. A third example: NIST is currently running a competition to design a next-generation cryptographic hash function. One among several requirements for such a function is that it should be practically impossible to find two inputs that map to the same output (a "collision"), so a collision found by anyone, by any method, automatically disqualifies a proposal. One of the entries built on cellular automata, and its submitter no doubt thought it would be a good idea because there is no nice known way to run a general cellular automaton backwards. The submission, however, fell within days to (what I think must have been) a simple guess-and-check attack: it turned out that there were two different one-byte inputs that hashed to the same value. Attempting to construct a complete theory that would allow one to derive a collision in an understanding-based way would have been much more difficult than just seeing where some initial aimless guessing takes you.
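For the modular-arithmetic example above, the brute-force search is essentially a one-liner; here is a sketch in Python (trying every residue is the whole point, so there is nothing clever to hide):

```python
# Brute force: does 5 have a cube root modulo 11?  Just try every residue.
print([x for x in range(11) if pow(x, 3, 11) == 5])  # prints [3], since 3^3 = 27 = 5 (mod 11)
```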
Existence of such points in a compact and connected topological space $X$ Let $X$ be a topological space which is compact and connected, and let $f : X \to \mathbb{C}-\{0\}$ be a continuous function. Explain why there exist two points $x_0$ and $x_1$ in $X$ such that $|f(x_0)| \le |f(x)| \le |f(x_1)|$ for all $x$ in $X$.
Let $g(x)=|f(x)|$, and observe that the complex norm is a continuous function from $\mathbb C$ into $\mathbb R$; therefore $g\colon X\to\mathbb R$ is continuous. Since $X$ is compact and connected, the image of $g$ is compact and connected. All connected subsets of $\mathbb R$ are intervals (open, closed, or half-open); and all compact subsets of $\mathbb R$ are closed and bounded (Heine-Borel theorem). Therefore the image of $g$ is an interval of the form $[a,b]$. Let $x_0,x_1\in X$ be such that $g(x_0)=a$ and $g(x_1)=b$. (Note that the connectedness of $X$ is not really needed, because compact subsets of $\mathbb R$ are closed and bounded, and thus have a minimum and a maximum.)
How to get the characteristic equation? In my book, this sequence defined by a recurrence is presented: $$U_n=3U_{n-1}-U_{n-3}$$ And it says that the characteristic equation of such a recurrence is: $$x^3=3x^2-1$$ Honestly, I don't understand how. How do I get the characteristic equation from a given recurrence?
"Guess" that $U(n) = x^n$ is a solution and plug into the recurrence relation: $$ x^n = 3x^{n-1} - x^{n-3} $$ Divide both sides by $x^{n-3}$, assuming $x \ne 0$: $$ x^3 = 3x^2 - 1 $$ Which is the characteristic equation you have.
Unitisation of $C^{*}$-algebras via double centralizers In most of the books I read about $C^{*}$-algebras, the author usually embeds the algebra, say, $A$, as an ideal of $B(A)$, the algebra of bounded linear operators on $A$, by identifying $a$ and $M_a$, the left multiplication of $a$. However, in Murphy's $C^{*}$-algebras and operator theory, $A$ is embedded as an ideal of the space of 'double centralizers'. See p39 of his book. I do not quite understand why we need this complicated construction since the effect is almost the same as the usual embedding. The author remarked that this construction is useful in certain approaches to K-theory, which further confuses me. Can somebody give a hint? Thanks!
The set of double centralizers of a $C^*$-algebra $A$ is usually also called the multiplier algebra $\mathcal{M}(A)$. It is, in some sense, the largest unital $C^*$-algebra containing $A$ as an essential ideal. If $A$ is already unital it is equal to $A$ (whereas with your construction of a unitalisation we have, for unital $A$, that the unitalisation is isomorphic as an algebra to $A\oplus \mathbb{C}$). Multiplier algebras can also be constructed as the algebra of adjointable operators of the Hilbert module $A$ over itself. Since Hilbert modules are central objects in $KK$-theory, and $K$-theory is a special case of $KK$-theory, this could be one reason why it is good to introduce these multiplier algebras quite early. But if you want to learn the basic theory, I think this concept is not that important yet.
Detailed diagram with mathematical fields of study Some time ago, I was searching for a detailed diagram of mathematical fields of study; the nearest one I could find is in this file, second page. I want something that shows information like: "Geometry leads to topic I, Geometry and Algebra lead to topic J", and so on. Can you help me?
Saunders Mac Lane's book Mathematics, Form and Function (Springer, 1986) has a number of illuminating diagrams showing linkages between various fields of mathematics (and also to some related areas.) For example, p149: Functions & related ideas of image and composition; p184: Concepts of calculus; p306: Interconnections of mathematics and mechanics; p408: Sets, functions and categories; p416: Ideas arising within mathematics; p422-3: various ideas and subdivisions; p425: Interconnections for group theory; p426: Connections of analysis with classical applied mathematics; p427: Probability and related ideas; p428: Foundations. Mac Lane summarises on p428: "We have thus illustrated many subjects and branches of mathematics, together with diagrams of the partial networks in which they appear. The full network of mathematics is suggested thereby, but it is far too extensive and entangled with connections to be captured on any one page of this book."
In set theory, what does the symbol $\mathfrak d$ mean? What's the meaning of the following symbol in set theory, which looks like a $b$? I know symbols such as $\omega$, $\omega_1$, and so on; however, what does it denote in the lemma? Thanks for any help :)
It is the German script (Fraktur) letter $\mathfrak{d}$, produced by the LaTeX command \mathfrak{d}. It probably represents a cardinal number (in the cardinal-characteristics literature $\mathfrak{d}$ usually denotes the dominating number, just as $\mathfrak{c}$ is often used for the cardinality of the real numbers), but what it means here would definitely depend on the context of what you are reading.
Show that there exists $n$ such that $i^n = 0$ for all $i$ in the ideal $I$ I'm new to this medium, but I'm quite stuck with an exercise so hopefully someone here can help me. This is the exercise: Let $I$ be an ideal in a Noetherian ring $A$, and assume that for every $i\in I$ there exists an $n_i$ such that $i^{n_i} = 0$. Show that there is an $n$ such that $i^n = 0$ for every $i\in I$. I thought about this: $A$ is Noetherian so $I$ is finitely generated. That means, there exist $i_1, \ldots, i_m$ such that all of the elements in $I$ are linear combinations of $i_1, \ldots, i_m$. Now is it possible to take $n = n_1n_2\cdots n_m$? I was thinking, maybe I have to use something like this: $(a + b)^p = a^p + b^p$. This holds in a field of characteristic $p$. If this is true, then indeed $(a_1i_1 + \ldots + a_mi_m)^n = 0 + \ldots + 0 = 0.$
Pritam's binomial approach is the easiest way to solve this problem in the commutative setting. In case you're interested though, it's also true in the noncommutative setting. Levitzky's theorem says that any nil right ideal in a right Noetherian ring is nilpotent. This implies your conclusion (in fact, even more.) However this is not as elementary as the binomial proof in the commutative case :)
Extension of morphisms on surfaces Consider two regular integral proper algebraic surfaces $X$ and $Y$ over a DVR $\mathcal O_K$ with residue field $k$. Let $U \subset X$ be an open subset such that $X\setminus U$ consists of finitely many closed points lying in the closed fiber $X_k$. Assume that all points in $X\setminus U$, considered as points in $X_k$, are regular. Consider now an $\mathcal O_K$-morphism $f: U \to Y$. Is there any extension of $f$ to $X$? I know that $Y_k$ is proper. Since every $x \in U_k$ is regular, $\mathcal O_{X_k,x}$ is a DVR, therefore by the valuative criterion of properness $$ Hom_k(U_k,Y_k) \cong Hom_k(X_k,Y_k), $$ so $f_k$ can be uniquely extended to $X_k$. Thus, set-theoretically an extension of $f$ exists. On the other hand, if there is an extension of $f$ to $X$, then on the closed fiber it coincides with $f_k$. Unfortunately, I don't see how to construct such an extension scheme-theoretically. Motivation Consider a subset $\mathcal C$ of the set of irreducible components of $Y_k$. Assume that the contraction morphism $g: Y \to X$ of $\mathcal C$ exists, i.e. $X$ is proper over $\mathcal O_K$, $g$ is birational, and $g(C)$ is a point if and only if $C \in \mathcal C$. Since $g$ is birational we have a section $f: U \to Y$ of $g$ over an open $U \subset X$. In fact, $X\setminus U = f(\mathcal C)$. If we now assume that all $x \in X\setminus U$ are regular as points in $X_k$, we obtain the above situation.
Suppose $f : U\to Y$ is dominant. Then $f$ extends to $X$ if and only if $Y\setminus f(U)$ is finite. In particular, in your situation, $f$ extends to $X$ only when $g$ is an isomorphism (no component is contracted). One direction (the one that matters for you) is rather easy: suppose $f$ extends to $f' : X\to Y$. As $X$ is proper, $f'(X)$ is closed and dense in $Y$, so $f'(X)=Y$ and $Y\setminus f(U)\subset f'(X\setminus U)$ is finite. For the other direction, consider the graph $\Gamma\subset X\times_{O_K} Y$ of the rational map $f : X \dashrightarrow Y$. Let $p: \Gamma\to X$ be the first projection. Then $\Gamma\setminus p^{-1}(U)$ is contained in the finite set $(X\setminus U)\times (Y\setminus f(U))$. This implies that $p$ is quasi-finite. As $p : \Gamma\to X$ is birational and proper (thus finite), and $X$ is normal, this implies that $p$ is an isomorphism. So $f$ extends to $X$ via $p^{-1}$ followed by the second projection $\Gamma\to Y$.
Can we possibly combine $\int_a^b{g(x)dx}$ plus $\int_c^d{h(x)dx}$ into $\int_e^f{j(x)dx}$? I'm wondering if this is possible for the general case. In other words, I'd like to take $$\int_a^b{g(x)dx} + \int_c^d{h(x)dx} = \int_e^f{j(x)dx}$$ and determine $e$, $f$, and $j(x)$ from the other (known) formulas and integrals. I'm wondering what restrictions, limitations, and problems arise. If this is not possible in the general case, I'm wondering what specific cases this would be valid for, and also how it could be done. It's a curiosity of mine for now, but I can think of some possible problems and applications to apply it to.
Here's a method that should allow one a large degree of freedom while still allowing Riemann integration (instead of Lebesgue integration or some other method): Let $\tilde{g}$ be such that $$\int_a^b{g(x)dx} = \int_e^f{\tilde{g}(x)dx}$$ ...and define $\tilde{h}$ similarly. Then they can both be added inside a single integral. The first method that comes to mind is to let $\tilde{g}(x) = \dot{g}\cdot g(x)$, where $\dot{g}$, a constant, is the ratio of the old and new integrations. A similar method is to let $\dot{g}$ be a function instead. Another method that I'm exploring, which is somewhat questionable, is to attempt to use $e$ and $f$ as functions, possibly even of $x$, although this may be undefined or just plain wrong. I'll add ideas to this as I hopefully come up with better methods.
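For instance, here is one concrete way to carry out the first idea (a change of variables rather than a plain constant multiple; assuming $a<b$, $e<f$, and that the functions are Riemann integrable): the linear substitution $x = a + \frac{b-a}{f-e}(t-e)$ gives $$\int_a^b g(x)\,dx = \int_e^f \frac{b-a}{f-e}\, g\!\left(a + \frac{b-a}{f-e}(t-e)\right)\,dt,$$ so taking $\tilde g(t)$ to be the integrand on the right (and building $\tilde h$ the same way from $h$) rewrites both integrals over $[e,f]$, and then $j = \tilde g + \tilde h$ works.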
Limit of a sequence of real numbers If $(a_n), (b_n)$ are two sequences of real numbers so that $(a_n)\rightarrow a,\,\,(b_n)\rightarrow b$ with $a, b\in \mathbb{R}^+$. How to prove that $a_n^{b_n}\rightarrow a^b$ ?
Note: The statement doesn't require $b > 0$. We don't assume it here. If we take the continuity of $\ln$ and $\exp$ for granted, the problem essentially boils down to showing $b_n x_n \to bx$ where $x_n = \ln a_n, x = \ln a$ since $a_n^{b_n} = e^{b_n x_n}$. This is what the work in the first proof below goes toward (the "add-and-subtract $b_n \ln a_n$" trick comes up elsewhere too). But if you can use $b_n \to b \text{ and } x_n \to x \implies b_n x_n \to bx$ then the proof simplifies to the second one below. Proof: Given $\varepsilon > 0$, there exist $K_1, K_2 \in \mathbb{N}$ such that for $n \in \mathbb{N}$ we have $n > K_1 \implies |\ln a_n - \ln a| < \frac{\varepsilon}{2(|b|+1)}$ (since $a_n \to a \in \mathbb{R}^+ \implies \ln a_n \to \ln a$ by continuity of $\ln$ on $\mathbb{R}^+$) and $n > K_2 \implies |b_n - b| < \min(\frac{\varepsilon}{2 (|\ln a| + 1)},1)$ (by hypothesis). Let $K = \max(K_1, K_2)$. Then $$\begin{eqnarray} n > K \implies |b_n \ln a_n - b \ln a| &=& |b_n \ln a_n - b_n \ln a + b_n \ln a - b \ln a|\\ &\leq& |b_n \ln a_n - b_n \ln a| + |b_n \ln a - b \ln a|\\ &=& |b_n|\,|\ln a_n - \ln a| + |b_n - b|\,|\ln a|\\ &<& (|b| + 1)\,\frac{\varepsilon}{2(|b|+1)} + \frac{\varepsilon}{2 (|\ln a|+1)}\,|\ln a|\\ &<& \varepsilon \end{eqnarray}$$ so $b_n \ln a_n \to b \ln a \in \mathbb{R}$ by definition. But $x \mapsto e^x$ is continuous on $\mathbb{R}$, hence $a_n^{b_n} = e^{b_n \ln a_n} \to e^{b \ln a} = a^b$. As mentioned at the top, using the theorem on the limit of a product gives this: Short proof: $$\begin{eqnarray} a^b &=& \exp\left[b \ln a\right]\\ &=& \exp\left[\left(\lim_{n \to \infty} b_n\right)\left(\ln \lim_{n \to \infty} a_n\right)\right] &\text{ assumption}&\\ &=& \exp\left[\left(\lim_{n \to \infty} b_n\right)\left(\lim_{n \to \infty} \ln a_n\right)\right] &\text{ continuity of }\ln\text{ at }\lim_{n \to \infty}a_n = a \in \mathbb{R}^+&\\ &=& \exp\left[\lim_{n \to \infty} b_n \ln a_n\right] &\text{ product of limits }\longrightarrow\text{ limit of product}&\\ &=& \lim_{n \to \infty} e^{b_n \ln a_n} &\text{ continuity of }\exp\text{ at }\lim_{n \to \infty}b_n \ln a_n = b \ln a \in \mathbb{R}&\\ &=& \lim_{n \to \infty} a_n^{b_n} \end{eqnarray}$$ All the work and $\varepsilon$'s are still there, but now they're hidden in the theorems we used.