Finding a Pythagorean triple $a^2 + b^2 = c^2$ with $a+b+c=40$ Let's say you're asked to find a Pythagorean triple $a^2 + b^2 = c^2$ such that $a + b + c = 40$. The catch is that the question is asked at a job interview, and you weren't expecting questions about Pythagorean triples. It is trivial to look up the answer. It is also trivial to write a computer program that would find the answer. There is also plenty of material written about the properties of Pythagorean triples and methods for generating them. However, none of this would be of any help during a job interview. How would you solve this in an interview situation?
$$a^2=(c-b)(c+b) \Rightarrow b+c = \frac{a^2}{c-b}$$ $$a+\frac{a^2}{c-b}=40$$ For simplicity let $c-b=\alpha$. Then $$a^2+\alpha a-40\alpha =0$$ Since this equation has integral solutions, $$\Delta=\alpha^2+160 \alpha$$ is a perfect square. Thus $$\alpha^2+160 \alpha =\beta^2$$ or $$(\alpha+80)^2=\beta^2+80^2 \,.$$ In this way we have reduced the problem to finding all Pythagorean triples of the type $(80, ??, ??)$. This can be easily done if you know the formula, or by setting $$80^2=(\alpha+80-\beta)(\alpha+80+\beta)$$ and solving.
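If you also want a quick sanity check of the arithmetic (a hedged sketch, obviously not something you could run during the interview itself), a brute-force search over perimeters is tiny:

```python
# Brute-force check: integer right triangles a^2 + b^2 = c^2 with a + b + c = 40.
for a in range(1, 40):
    for b in range(a, 40 - a):
        c = 40 - a - b
        if c > b and a * a + b * b == c * c:
            print(a, b, c)  # prints 8 15 17
```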
Estimation with method of maximum likelihood Can anybody help me to generate the estimator of equation: $$Y_i = \beta_0 + \beta_1X_{i1} + \beta_2X_{i2}+\cdots+\beta_4X_{i4}+\varepsilon_i$$ using method of maximum likelihood, where $\varepsilon_i$ are independent variables which have normal distribution $N(0,\sigma^2)$
This is given by least squares estimation. To see this, write $$ L(\beta, \sigma^2 | Y) = \prod_i (2\pi\sigma^2)^{-1/2} \exp\left(\frac {-1} {2\sigma^2} (Y_i - \beta_0 - \sum_j \beta_j X_{ij})^2\right) = (2\pi\sigma^2)^{-n/2} \exp\left(\frac {-1} {2\sigma^2} \sum_i(Y_i - \beta_0 - \sum_j \beta_j X_{ij})^2\right) $$ Maximization of this over $\beta$ is equivalent to minimization of $\sum_i (Y_i - \beta_0 - \sum_j \beta_j X_{ij})^2$, which is the definition of least squares estimation. If $\mathbf X$ is the design matrix with $\langle \mathbf X\rangle_{ij} = X_{ij}$ (including a column of ones for the intercept) and $\mathbf Y$ is the vector with $\langle\mathbf Y\rangle_{i} = Y_{i}$, then we can rewrite this as the minimization of $Q(\beta) = \|\mathbf Y - \mathbf X\beta\|^2$ with respect to $\beta$, and there are several ways to see that $\hat \beta = (\mathbf X^T \mathbf X)^{-1} \mathbf X^T \mathbf Y$ is the minimizer. The most expedient is probably to calculate $\partial Q/ \partial \beta$ and set it equal to $0$, which gives $$ -2 \mathbf X^T \mathbf Y + 2 (\mathbf X^T \mathbf X) \beta = 0, $$ and note that $Q$ is a strictly convex function of $\beta$, which ensures that the solution to this equation is a minimum of $Q$. Note however that we are relying on $\mathbf X^T \mathbf X$ being invertible, i.e. $\mathbf X$ has linearly independent columns; if this fails, $Q$ is not strictly convex and any generalized inverse $(\mathbf X^T\mathbf X)^-$ can be used to replace $(\mathbf X^T \mathbf X)^{-1}$.
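As a small numerical illustration of the closed form $\hat\beta=(\mathbf X^T\mathbf X)^{-1}\mathbf X^T\mathbf Y$ (a sketch with made-up data; none of the numbers come from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# design matrix: intercept column plus 4 regressors
X = np.column_stack([np.ones(n), rng.normal(size=(n, 4))])
beta_true = np.array([1.0, 2.0, -1.0, 0.5, 3.0])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

# ML / least-squares estimate; lstsq is the numerically stable way to apply the formula
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)  # should be close to beta_true
```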
Category theory text that defines composition backwards? I've always struggled with the convention that if $f:X \rightarrow Y$ and $g:Y \rightarrow Z$, then $g \circ f : X \rightarrow Z$. Constantly reflecting back and forth is inefficient. Does anyone know of a category theory text that defines composition the other way? So that $f \circ g$ means what we would normally mean by $g \circ f$.
I recall that the following textbooks on category theory have compositions written from left to right.
* Freyd, Scedrov: "Categories, Allegories", North-Holland Publishing Co., 1990.
* Manes: "Algebraic Theories", GTM 26, Springer-Verlag, 1976.
* Higgins: "Notes on Categories and Groupoids", Van Nostrand, 1971 (available as TAC Reprint No 7).
Other examples appear in group theory and ring theory, e.g.
* Lambek: "Lectures on rings and modules", Chelsea Publishing Co., 1976 (2nd ed.).
or several books by P.M. Cohn. But in order to avoid confusion, authors usually do not use the symbol $\circ$ for this. In particular when (as with noncommutative rings) it is helpful to have both readings available (so that module homomorphisms and scalars act on opposite sides). For instance, as far as I remember, Lambek uses $\ast$ instead.
Smallest number of games per match to get $0.8$ chance to win the match. If the first person to win $n$ games wins the match, what is the smallest value of $n$ such that $A$ has a better than $0.8$ chance of winning the match? For $A$ having a probability of $0.70$, I get smallest $n = 5$ (Meaning there must be $5$ games per match for $A$ to have a $0.8$ chance to win.) I got this by doing $1 - 0.7^5 = 0.832$. Although I would have thought it would have been lower.
Using the same method as before, with $A$ having probability $0.7$ of winning each game, the probabilities of $A$ winning the match are about $0.7$ for $n=1$, $0.784$ for $n=2$, $0.837$ for $n=3$, $0.874$ for $n=4$ and $0.901$ for $n=5$. So the answer is $n=3$ to exceed $0.8$. $1−0.7^5$ is the answer to the question "What is the probability B wins at least one game before A wins 5 games?"
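These numbers can be reproduced with a few lines of code (a hedged sketch; `p_match` is a hypothetical helper, and the formula sums over how many games $B$ has won when $A$ takes its $n$-th win):

```python
from math import comb

def p_match(p, n):
    # A wins the match iff A reaches n wins before B does.
    # P(A's n-th win comes after B has exactly k wins) = C(n-1+k, k) p^n (1-p)^k
    return sum(comb(n - 1 + k, k) * p**n * (1 - p)**k for k in range(n))

for n in range(1, 6):
    print(n, round(p_match(0.7, n), 3))   # 0.7, 0.784, 0.837, 0.874, 0.901
```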
How to do $\frac{ \partial { \mathrm{tr}(XX^TXX^T)}}{\partial X}$ How to do the derivative \begin{equation} \frac{ \partial {\mathrm{tr}(XX^TXX^T)}}{\partial X}\quad ? \end{equation} I have no idea where to start.
By definition the derivative of $F(X)=tr(XX^TXX^T)$, in the point $X$, is the only linear functional $DF(X):{\rm M}_{n\times n}(\mathbb{R})\to \mathbb{R}$ such that $$ F(x+H)=F(X)+DF(X)\cdot H+r(H) $$ with $\lim_{H\to 0} \frac{r(H)}{\|H\|}=0$. Let's get $DF(X)(H)$ and $r(H)$ by the expansion of $F(X+H)$. But first we must do an algebraic manipulation to expand $(X+H)(X+H)^T(X+H)(X+H)^T$. In fact, \begin{align} (X+\color{red}{H})(X+\color{red}{H})^T(X+\color{red}{H})(X+\color{red}{H})^T =& (X+\color{red}{H})(X^T+\color{red}{H}^T)\big(XX^T+X\color{red}{H}^T+\color{red}{H}X^T+\color{red}{H}\color{red}{H}^T\big) \\ =&(X+\color{red}{H})\Big(X^TXX^T+X^TX\color{red}{H}^T+X^T\color{red}{H}X^T+X^T\color{red}{H}\color{red}{H}^T \\ &\hspace{12mm}+\color{red}{H}^TXX^T+\color{red}{H}^TX\color{red}{H}^T+\color{red}{H}^T\color{red}{H}X^T+\color{red}{H}^T\color{red}{H}\color{red}{H}^T\Big) \\ =&\;\;\;\;\,XX^TXX^T+XX^TX\color{red}{H}^T+XX^T\color{red}{H}X^T+XX^T\color{red}{H}\color{red}{H}^T \\ &+X\color{red}{H}^TXX^T+X\color{red}{H}^TX\color{red}{H}^T+X\color{red}{H}^T\color{red}{H}X^T+X\color{red}{H}^T\color{red}{H}\color{red}{H}^T \\ &+\color{red}{H}X^TXX^T+\color{red}{H}X^TX\color{red}{H}^T+\color{red}{H}X^T\color{red}{H}X^T+\color{red}{H}X^T\color{red}{H}\color{red}{H}^T \\ &+\color{red}{H}\color{red}{H}^TXX^T+\color{red}{H}\color{red}{H}^TX\color{red}{H}^T+\color{red}{H}\color{red}{H}^T\color{red}{H}X^T+\color{red}{H}\color{red}{H}^T\color{red}{H}\color{red}{H}^T \end{align} Extracting $XX^TXX^T$ and the portions where $H$ or $H^T$ appears only once and applying $tr$ we have \begin{align} F(X+H)=&tr\Big( (X+\color{red}{H})(X^T+\color{red}{H}^T)(X+\color{red}{H})(X^T+\color{red}{H}^T) \Big) \\ =&\underbrace{tr \big(XX^TXX^T\big)}_{F(X)} +\underbrace{tr\big( XX^TX\color{red}{H}^T+XX^T\color{red}{H}X^T +X\color{red}{H}^TXX^T+\color{red}{H}X^TXX^T \big)}_{DF(X)\cdot H} \\ &+tr\Big(XX^T\color{red}{H}\color{red}{H}^T +X\color{red}{H}^TX\color{red}{H}^T+X\color{red}{H}^T\color{red}{H}X^T+X\color{red}{H}^T\color{red}{H}\color{red}{H}^T \\ &\hspace{12mm}+\color{red}{H}X^TX\color{red}{H}^T+\color{red}{H}X^T\color{red}{H}X^T+\color{red}{H}X^T\color{red}{H}\color{red}{H}^T \\ &\underbrace{\hspace{12mm}+\color{red}{H}\color{red}{H}^TXX^T+\color{red}{H}\color{red}{H}^TX\color{red}{H}^T+\color{red}{H}\color{red}{H}^T\color{red}{H}X^T+\color{red}{H}\color{red}{H}^T\color{red}{H}\color{red}{H}^T\Big)}_{r(H)} \end{align} Here $\|H\|=\sqrt{tr(HH^T)}$ is the Frobenius norm and $\displaystyle\lim_{H\to 0}\frac{r(H)}{H}=0$. Then the total derivative is \begin{align} \mathcal{D}F(X)\cdot H = & tr\bigg(XX^TXH^T\bigg)+ tr\bigg(XX^THX^T\bigg) \\ + & tr\bigg(XH^TXX^T \bigg)+ tr\bigg(HX^TXX^T \bigg). \\ \end{align} The directional derivative is $$ \frac{\partial}{\partial V}F(X)=\mathcal{D}F(X)\cdot V $$ and the partial derivative is $$ \frac{\partial}{\partial E_{ij}}F(X)=\mathcal{D}F(X)\cdot E_{ij}. $$ Here $E_{ij}=[\delta_{ij}]_{n\times m}$.
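As a side remark (my addition, hedged): each of the four traces above can be transposed/cycled into $\operatorname{tr}\!\big(H^T XX^TX\big)$, which suggests the gradient $4\,XX^TX$. A quick numerical spot check of both the displayed trace formula and that simplification:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 4))
H = rng.normal(size=(4, 4))

F = lambda M: np.trace(M @ M.T @ M @ M.T)

# Directional derivative from the answer's trace formula
DF = (np.trace(X @ X.T @ X @ H.T) + np.trace(X @ X.T @ H @ X.T)
      + np.trace(X @ H.T @ X @ X.T) + np.trace(H @ X.T @ X @ X.T))

t = 1e-6
print(DF, (F(X + t * H) - F(X)) / t)                      # approximately equal
print(np.allclose(DF, np.trace(H.T @ (4 * X @ X.T @ X))))  # gradient candidate 4*X*X^T*X
```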
Closed form for $\sum_{n=2}^\infty \frac{1}{n^2\log n}$ I had attempted to evaluate $$\int_2^\infty (\zeta(x)-1)\, dx \approx 0.605521788882.$$ Upon writing out the zeta function as a sum, I got $$\int_2^\infty \left(\frac{1}{2^x}+\frac{1}{3^x}+\cdots\right)\, dx = \sum_{n=2}^\infty \frac{1}{n^2\log n}.$$ This sum is mentioned in the OEIS. All my attempts to evaluate this sum have been fruitless. Does anyone know of a closed form, or perhaps, another interesting alternate form?
The closed form means an expression containing only elementary functions. For your case no such form exists. For more information read these links: http://www.frm.utn.edu.ar/analisisdsys/MATERIAL/Funcion_Gamma.pdf http://en.wikipedia.org/wiki/Hölder%27s_theorem http://en.wikipedia.org/wiki/Gamma_function#19th-20th_centuries:_characterizing_the_gamma_function http://divizio.perso.math.cnrs.fr/PREPRINTS/16-JourneeAnnuelleSMF/difftransc.pdf http://www.tandfonline.com/doi/abs/10.1080/17476930903394788?journalCode=gcov20 Some background is needed to follow them; good luck with these references.
Confused where and why inequality sign changes when proving probability inequality "Let A and B be two events in a sample space such that 0 < P(A) < 1. Let A' denote the complement of A. Show that if P(B|A) > P(B), then P(B|A') < P(B)." This was my proof: $$ P(B| A) > P(B) \hspace{1cm} \frac{P(B \cap A)}{P(A)} > P(B) $$ $$P(B \cap A) + P(B \cap A') = P(B) \implies P(B \cap A) = P(B) - P(B \cap A') $$ Subbing this into the above equation gives $$ P(B) - P(B \cap A') > P(B)P(A) $$ I think the inequality was supposed to change there, but I don't know why. Carrying on with the proof and dividing both sides by P(B) and rearranging gives $$ 1 - P(A) > \frac{P(B \cap A')}{P(B)} $$ $$ P(A') > \frac{P(B \cap A')}{P(B)} $$ Rearrange to get what you need: $$ P(B) < \frac{P(B \cap A')}{P(A')} = P(B |A') $$ Why does the inequality change at that point? EDIT: Figured it out. It's in the last line where the inequality holds.
In general $P(B)=P(A)P(B|A) + P(A')P(B|A')$. What happens if $P(B|A)>P(B)$ and $P(B|A')\geq P(B)$? Hint: Use $P(A)+P(A')=1$ and $P(A)>0$ and $P(A')\geq 0$ to get a contradiction. Your proof was right up to (and including) this step: $$P(A') > \frac{P(B \cap A')}{P(B)}$$ From here, multiply both sides by $\frac{P(B)}{P(A')}$ and you get: $$P(B) > \frac{P(B\cap A')}{P(A')} = P(B|A')$$ That was what you wanted to prove.
Solving for $y$ with $\arctan$ I know this is a very low level question, but I honestly can't remember how this is done. I want to solve for y with this: $$ x = 2.0 \cdot \arctan\left(\frac{\sqrt{y}}{\sqrt{1 - y}}\right) $$ And I thought I could do this: $$ \frac{\sqrt{y}}{\sqrt{1 - y}} = \tan\left(\frac{x}{2.0}\right) $$ But it seems like I've done something wrong getting there. Could someone break down the process to get to $y =$ ? Again, I know this is very basic stuff, but clearly I'm not very good at this.
So, since Henry told me that I wasn't wrong, I continued and got a really simple answer. Thanks! x = 2 * arctan(sqrt(y)/sqrt(1 - y)) sqrt(y)/sqrt(1 - y) = tan(x/2) 1/(sqrt(1 - y) * sqrt(1/y)) = tan(x/2) 1/tan(x/2) = sqrt(1/y - 1) 1/(tan(x/2))^2 + 1 = 1/y y = (tan(x/2))^2/((tan(x/2))^2 + 1) Thanks again to Henry! EDIT Followed by: y = ((1 - cos(x)) / (1 + cos(x))) / (1 + (1 - cos(x))/(1 + cos(x))) = (1 - cos(x)) / ((1 + cos(x)) * (1 + (1 - cos(x))/(1 + cos(x)))) = (1 - cos(x)) / ((1 + cos(x)) + (1 - cos(x))) = (1 - cos(x)) / 2
Why the Picard group of a K3 surface is torsion-free Let $X$ be a K3 surface. I want to prove that $Pic(X)\simeq H^1(X,\mathcal{O}^*_X)$ is torsion-free. From D.Huybrechts' lectures on K3 surfaces I read that if $L$ is torsion then the Riemann-Roch formula would imply that $L$ is effective. But then if a section $s$ of $L$ has zeroes then $s^k\in H^0(X,L^k)$ has also zeroes, so no positive power of $L$ can be trivial. What I am missing is how the Riemann-Roch theorem can imply that if $L$ is torsion then $L$ is effective?
If $L$ is torsion, then $L^k=O_X$ (tensor power). Since $X$ is K3 and because the first chern class of the trivial bundle vanishes, we have $c_1(X)=0$. Furthermore, since $X$ is regular, we get $h^1(O_X)=0$. Thus, $\chi(O_X)=2$. Now the RRT says $$\chi(L)=\chi(O_X) + \tfrac 12 c_1(L)^2$$ Thus, $\chi(O_X)=\chi(L^k)=\chi(O_X)+\tfrac 12 c_1(L^k)^2$, so $c_1(L^k)^2=0$. By general chern polynomial lore, $c_1(L^k)=k\cdot c_1(L)$, so $c_1(L)^2=0$. But this means that $$h^0(L)-h^1(L)+h^2(L)=\chi(L) = \chi(O_X) = 2.$$ By Serre Duality, you have $H^2(X,L)\cong H^0(X,L^\ast)^\ast$. If $H^0(X,L^\ast)=H^0(X,L^{k-1})$ is nontrivial and $L\ne O_X$, then we'd be done since $H^0(X,L)$ would have to be non-trivial. Therefore, we may assume $h^2(L)=0$. Putting this all together we get $h^0(L)=2+h^1(L)> 0$ as required.
Volume integration help A volume sits above the figure in the $xy$-plane bounded by the equations $y = \sqrt{x}$, $y = −x$ for $0 ≤ x ≤ 1$. Each $x$ cross section is a half-circle, with diameter touching the ends of the curves. What is the volume? a) Sketch the region in the $xy$ plane. b) What is the area of a cross-section at $x$? c) Write an integral for the volume. d) Find the value of the integral.
The question has been essentially fully answered by JohnD: The picture does it all. The cross section at $x$ has diameter $AB$, where $A$ is the point where the vertical line "at" $x$ meets the curve $y=\sqrt{x}$, and $B$ is the point where the vertical line at $x$ meets the line $y=-x$. So the distance $AB$ is $\sqrt{x}-(-x)$, that is, $\sqrt{x}+x$. So the radius at $x$ is $\dfrac{\sqrt{x}+x}{2}$. The area of the half-circle with this radius is $A(x)$, where $$A(x)=\frac{\pi}{2}\left(\frac{\sqrt{x}+x}{2}\right)^2.$$ The required volume is $$\int_0^1 A(x)\,dx.$$ Once the setup has been done, the rest is just computation. We want to integrate $\dfrac{\pi}{8}(\sqrt{x}+x)^2$. Expand the square, and integrate term by term.
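For the record, the last computation can also be checked symbolically (a sketch using SymPy; the value works out to $49\pi/240$):

```python
import sympy as sp

x = sp.symbols('x')
A = sp.pi / 2 * ((sp.sqrt(x) + x) / 2)**2   # area of the half-disc cross-section at x
V = sp.integrate(A, (x, 0, 1))
print(sp.simplify(V))                        # 49*pi/240
```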
Why is a geodesic a regular curve? In most definitions of a geodesic, it is required to be a regular curve, i.e. a smooth curve whose tangent vector is nowhere $0$ along the curve. I don't know why.
Let $\gamma:[a,b]\to M$ be a smooth curve on a Riemannian manifold $M$ with Riemannian metric $\langle\cdot,\cdot\rangle$. Then we have $$\tag{1}\frac{d}{dt}\langle\frac{d\gamma}{dt},\frac{d\gamma}{dt}\rangle=\langle\frac{D}{dt}\frac{d\gamma}{dt},\frac{d\gamma}{dt}\rangle+\langle\frac{d\gamma}{dt},\frac{D}{dt}\frac{d\gamma}{dt}\rangle$$ where $\frac{D}{dt}$ is the covariant derivative along the curve $\gamma$. By definition, $\gamma$ is a geodesic if and only if $\frac{D}{dt}\frac{d\gamma}{dt}=0$ for all $t\in[a,b]$, which together with $(1)$ implies that $$\frac{d}{dt}\langle\frac{d\gamma}{dt},\frac{d\gamma}{dt}\rangle=0\mbox{ for all }t\in[a,b],$$ which implies that the function $\langle\frac{d\gamma}{dt},\frac{d\gamma}{dt}\rangle$ is constant, i.e. $$\langle\frac{d\gamma}{dt},\frac{d\gamma}{dt}\rangle=C$$ for some constant $C$. Thus, if $C\neq 0$, then by definition $\gamma$ is a regular curve. And if $C=0$, we have $\langle\frac{d\gamma}{dt},\frac{d\gamma}{dt}\rangle= 0$, or equivalently, $\frac{d\gamma}{dt}=0$ for all $t\in[a,b]$, which implies $\gamma(t)=p$ for all $t\in[a,b]$ for some point $p\in M$, i.e. $\gamma$ degenerates to a point in $M$. Therefore, if $\gamma$ is a nontrivial geodesic in the sense that it does not degenerate to a point, $\gamma$ must be a regular curve.
Determine the PDF of $Z = XY$ when the joint pdf of $X$ and $Y$ is given The joint probability density function of random variables $ X$ and $ Y$ is given by $$p_{XY}(x,y)= \begin{cases} & 2(1-x)\,\,\,\,\,\,\text{if}\,\,\,0<x \le 1, 0 \le y \le 1 \\ & \,0\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\text{otherwise.} \end{cases} $$ Determine the probability density function of $ Z = XY$.
There are faster methods, but it can be a good idea, at least once or twice, to calculate the cumulative distribution function, and then differentiate to find the density. The upside of doing it that way is that one can retain reasonably good control over what's happening. (There are also a number of downsides!) So we want $F_Z(z)$, the probability that $Z\le z$. Let's do the easy bits first. It is clear that $F_Z(z)=0$ if $z\le 0$. And it is almost as clear that $F_Z(z)=1$ if $z\ge 1$. So from now on we suppose that $0\lt z\lt 1$. Draw a picture of our square. For fixed $z$ between $0$ and $1$, draw the first quadrant part of the curve with equation $xy=z$. This curve is a rectangular hyperbola, with the positive $x$ and $y$ axes as asymptotes. We want the probability that $(X,Y)$ lands in the part of our square which is on the "origin side" of the hyperbola. So we need to integrate our joint density function over this region. There is some messiness in evaluating this integral: we need to break up the integral at $x=z$. We get $$F_Z(z)= \Pr(Z\le z)=\int_{x=0}^z \left(\int_{y=0}^1 (2-2x)\,dy\right)\,dx + \int_{x=z}^1 \left(\int_{y=0}^{z/x} (2-2x)\,dy\right)\,dx. $$ Not difficult after that. We get, I think, $F_Z(z)=z^2-2z\ln z$. Differentiate for the density.
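A quick Monte Carlo check of $F_Z(z)=z^2-2z\ln z$ (a hedged sketch; the marginal of $X$ has density $2(1-x)$, which is sampled here by inverse transform, and $Y$ is uniform and independent):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10**6
X = 1 - np.sqrt(rng.uniform(size=n))   # density 2(1-x) on (0,1) via inverse transform
Y = rng.uniform(size=n)
Z = X * Y

for z in (0.1, 0.3, 0.5, 0.8):
    print(z, (Z <= z).mean(), z**2 - 2 * z * np.log(z))   # empirical vs. formula
```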
Evaluate $\lim\limits_{x \to \infty}\left (\sqrt{\frac{x^3}{x-1}}-x\right)$ Evaluate $$ \lim_{x \to \infty}\left (\sqrt{\frac{x^3}{x-1}}-x\right) $$ The answer is $\frac{1}{2}$, have no idea how to arrive at that.
Multiply and divide by $\sqrt{x^3/(x-1)}+x$, simplify and take the limit.
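Carrying that hint through (a sketch of the computation, not part of the original hint):
$$\sqrt{\frac{x^3}{x-1}}-x=\frac{\frac{x^3}{x-1}-x^2}{\sqrt{\frac{x^3}{x-1}}+x}=\frac{\frac{x^2}{x-1}}{x\left(\sqrt{\frac{x}{x-1}}+1\right)}=\frac{x}{(x-1)\left(\sqrt{\frac{x}{x-1}}+1\right)}\xrightarrow[x\to\infty]{}\frac{1}{1\cdot(1+1)}=\frac12.$$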
How to find $(-64\mathrm{i}) ^{1/3}$? How to find $$(-64\mathrm{i})^{\frac{1}{3}}$$ This is a complex variables question. I need help by show step by step. Thanks a lot.
For any $n\in\mathbb{Z}$, $$\left(-64i\right)^{\frac{1}{3}}=\left(64\exp\left[\left(\frac{3\pi}{2}+2\pi n\right)i\right]\right)^{\frac{1}{3}}=4\exp\left[\left(\frac{\pi}{2}+\frac{2\pi n}{3}\right)i\right]=4\exp\left[\frac{3\pi+4\pi n}{6}i\right]=4\exp \left[\frac{\left(3+4n\right)\pi}{6}i\right]$$ The cube roots in polar form are: $$4\exp\left[\frac{\pi}{2}i\right] \quad\text{or}\quad 4\exp\left[\frac{7\pi}{6}i\right] \quad\text{or}\quad 4\exp\left[\frac{11\pi}{6}i\right]$$ and in Cartesian form: $$4i \quad\text{or}\quad -2\sqrt{3}-2i \quad\text{or}\quad 2\sqrt{3}-2i$$
Proof of convexity from definition ($x^Tx$) I have to prove that function $f(x) = x^Tx, x \in R^n$ is convex from definition. Definition: Function $f: R^n \rightarrow R$ is convex over set $X \subseteq dom(f)$ if $X$ is convex and the following holds: $x,y \in X, 0 \leq \alpha \leq 1 \rightarrow f(\alpha x+(1-\alpha) y)) \leq \alpha f(x) + (1-\alpha)f(y)$. I got this so far: $(\alpha x + (1-\alpha)y)^T(\alpha x + (1-\alpha)y) \leq \alpha x^Tx + (1-\alpha)y^Ty$ $\alpha^2 x^Tx + 2\alpha(1-\alpha)x^Ty + (1-\alpha)^2y^Ty \leq \alpha x^Tx + (1-\alpha)y^Ty$ I don´t know how to prove this inequality. It is clear to me, that $\alpha^2 x^Tx \leq \alpha x^Tx$ and $(1-\alpha)^2y^Ty \leq (1-\alpha)y^Ty$, since $0 \leq\alpha \leq 1$, but what about $2\alpha(1-\alpha)x^Ty$? I have to prove this using the above definition. Note: In Czech, the words "convex" and "concave" may have opposite meaning as in some other languages ($x^2$ is a convex function for me!). Thanks for any help.
You can also just take the Hessian (here it is $2I$) and see that it is positive definite (since this function is Gateaux differentiable); in fact this means that the function is strictly convex as well.
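To finish the proof from the definition, as the question asks (a sketch filling in the step the asker was missing): from $\|x-y\|^2\ge 0$ we get $2x^Ty \le x^Tx + y^Ty$, and since $\alpha(1-\alpha)\ge 0$ for $0\le\alpha\le 1$,
$$\alpha^2 x^Tx + 2\alpha(1-\alpha)x^Ty + (1-\alpha)^2 y^Ty \le \alpha^2 x^Tx + \alpha(1-\alpha)\left(x^Tx+y^Ty\right) + (1-\alpha)^2 y^Ty = \alpha\, x^Tx + (1-\alpha)\, y^Ty,$$
which is exactly the inequality that remained to be shown.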
Intersection of a closed subscheme and an open subscheme of a scheme Let $X$ be a scheme. Let $Z$ be a closed subscheme of $X$. Let $U$ be an open subscheme of $X$. Then $Y = U \cap Z$ is an open subscheme of $Z$. Can we identify $Y$ with $U\times_X Z$?
Yes. This doesn't have anything to do with closed subscheme. If $p: Z \to X$ is a morphism of schemes and $U \subset X$ is open subscheme, then the fibre product is $p^{-1}(U)$ with open subscheme structure.
Nature of D-finite sets. $A$ is called D-finite if $A$ does not contain a countably infinite subset. With the above strange definition I need to show the following two properties:
1. For a D-finite set $A$ and a finite set $B$, the union of $A$ and $B$ is D-finite.
2. The union of two D-finite sets is D-finite.
By the way, can we construct such a D-finite set? Only hints... Thank you.
Hints only: The first property may be shown directly. The second however... Try showing what happens when the union of two sets is not D-finite. Hope it helps.
Eigenvalues for LU decomposition In general I know that the eigenvalues of A are not the same as those of U in the decomposition, but for one matrix I had earlier in the year they were. Is there a special reason this happened or was it just a coincidence? The matrix was $A = \begin{bmatrix}-1& 3 &-3 \\0 &-6 &5 \\-5& -3 &1\end{bmatrix}$ with $U = \begin{bmatrix}-1& 3 &-3 \\0 &-6 &5 \\0& 0 &1\end{bmatrix}$ if needed $L = \begin{bmatrix}1& 0 &0 \\0 &1 &0 \\5& 3 &1\end{bmatrix}$ The eigenvalues are the same as $U$'s, which are $-1$, $-6$ and $1$. When I tried to do it the normal way I ended up with a not so nice algebra problem to work on which took way too long. Is there some special property I am missing here? If not, is there an easy way to simplify $\mathrm{det}(A-\lambda I)$ that I am missing? Thank you!
It's hard to say if this is mere coincidence or part of a larger pattern. This is like asking someone to infer the next number to a finite sequence of given numbers. Whatever number you say, there is always some way to explain it. Anyway, here's the "pattern" I see. Suppose $$ A = \begin{pmatrix}B&u\\ v^T&\gamma\end{pmatrix}, $$ where * *$B$ is a 2x2 upper triangular matrix; *the two eigenvalues of $B$, say $\lambda$ and $\mu$, are distinct and $\neq\gamma$; *$u$ is a right eigenvector of $B$ corresponding to the eigenvalue $\mu$; *$v$ is a left eigenvector of $B$ corresponding to the eigenvalue $\lambda$. Then $A$ has the following LU decomposition: $$ A = \begin{pmatrix}B&u\\ v^T&\gamma\end{pmatrix} =\underbrace{\begin{pmatrix}I_2&0\\ kv^T&1\end{pmatrix}}_{L} \quad \underbrace{\begin{pmatrix}B&u\\0&\gamma\end{pmatrix}}_{U} $$ where $k=\frac1\lambda$ if $\lambda\neq0$ or $0$ otherwise. The eigenvalues of $U$ are clearly $\lambda,\mu$ and $\gamma$. Since $u$ and $v$ are right and left eigenvectors of $B$ corresponding to different eigenvalues, we have $v^Tu=0$. Therefore \begin{align} (v^T, 0)A &=(v^T, 0)\begin{pmatrix}B&u\\ v^T&\gamma\end{pmatrix} =(v^TB,\, v^Tu)=\lambda(v^T,0),\\ A\begin{pmatrix}u\\0\end{pmatrix} &=\begin{pmatrix}B&u\\ v^T&\gamma\end{pmatrix}\begin{pmatrix}u\\0\end{pmatrix} =\begin{pmatrix}Bu\\v^Tu\end{pmatrix} =\mu\begin{pmatrix}u\\0\end{pmatrix},\\ A\begin{pmatrix}\frac{1}{\gamma-\mu}u\\1\end{pmatrix} &=\begin{pmatrix}B&u\\ v^T&\gamma\end{pmatrix} \begin{pmatrix}\frac{1}{\gamma-\mu}u\\1\end{pmatrix} =\gamma\begin{pmatrix}\frac{1}{\gamma-\mu}u\\1\end{pmatrix}. \end{align} So, the eigenvalues of $U$ are also the eigenvalues of $A$.
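For what it's worth, the specific example from the question can be verified numerically (a hedged sketch):

```python
import numpy as np

A = np.array([[-1, 3, -3], [0, -6, 5], [-5, -3, 1]], dtype=float)
L = np.array([[1, 0, 0], [0, 1, 0], [5, 3, 1]], dtype=float)
U = np.array([[-1, 3, -3], [0, -6, 5], [0, 0, 1]], dtype=float)

print(np.allclose(L @ U, A))                   # True: A = LU
print(np.sort(np.linalg.eigvals(A).real))      # approximately [-6., -1., 1.]
print(np.sort(np.diag(U)))                     # [-6., -1., 1.], the diagonal of U
```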
Number of divisors which are perfect cubes and multiples of a number n = $2^{14}3^9 5^8 7^{10}11^3 13^5 37^{10}$. How many positive divisors are perfect cubes and multiples of $2^{10}3^9 5^2 7^{5}11^2 13^2 37^{2}$? I'm able to count the number of divisors that are perfect squares and the number that are perfect cubes. But the extra condition of being multiples of $2^{10}3^9 5^2 7^{5}11^2 13^2 37^{2}$ is confusing; can anyone give me a hint?
The numbers you are looking for must be perfect cubes. If you split them into powers of primes, they can have a factor $2^0$, $2^3$, $2^6$, $2^9$ and so on but not $2^1, 2^2, 2^4$ etc. because these are not cubes. The same goes for powers of $3, 5, 7$ and any other primes. The numbers must also be multiples of $2^{10}$ so can have factors $2^{12}, 2^{15}, 2^{18}$ etc. because $2^9, 2^6$ and so on are not multiples of $2^{10}$. The numbers must divide $2^{14}$, which leaves only $2^{12}$ because $2^{15}, 2^{18}$ and so on don't divide $2^{14}$. You get another factor $3^9$; $5^3$ or $5^6$; $7^6$ or $7^9$; $11^3$; $13^3$; and $37^3$, $37^6$ or $37^9$. For most primes you have one choice, for $5$ and $7$ you have two choices, and for $37$ you have three choices - total is $2 \times 2\times 3 = 12$ numbers.
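A small script confirming the count by enumerating the allowed exponents (a hedged sketch; it works with exponent vectors rather than the huge numbers themselves):

```python
# exponent of each prime in n, and the minimum exponent forced by the "multiple of" condition
n_exp   = {2: 14, 3: 9, 5: 8, 7: 10, 11: 3, 13: 5, 37: 10}
min_exp = {2: 10, 3: 9, 5: 2, 7: 5, 11: 2, 13: 2, 37: 2}

count = 1
for p, e in n_exp.items():
    # perfect cube => exponent is a multiple of 3; divisor => exponent <= e; multiple => exponent >= min
    choices = [k for k in range(0, e + 1, 3) if k >= min_exp[p]]
    count *= len(choices)
print(count)  # 12
```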
Evaluation of Derivative Using $\epsilon−\delta$ Definition Consider the function $f \colon\mathbb R \to\mathbb R$ defined by $f(x)= \begin{cases} x^2\sin(1/x); & \text{if }x\ne 0, \\ 0 & \text{if }x=0. \end{cases}$ Use the $\varepsilon$-$\delta$ definition to prove that $f'(0)=0$. Now I see that $|h|$ should be less than $\delta$, and $\delta$ should equal $\varepsilon$ in this case. Thanks to everyone who contributed!
$$\left|{\dfrac{f(h)-f(0)}{h}}\right|=\left|{\dfrac{h^2 \sin{\dfrac{1}{h}}}{h}}\right|=\left|{h \sin{\dfrac{1}{h}}}\right|\le\left|h\right|<\varepsilon.$$ Choose $\delta=\varepsilon.$
Use of $\mathbb N$ & $\omega$ as index sets Why all the properties of a sequence or a series or a sequence of functions or a series of functions remain unchanged irrespective of which of $\mathbb N$ & $\omega$ we are using as an index set? Is it because $\mathbb N$ is equivalent to $\omega$?
It is because $\omega$ and $\mathbb N$ are just different names for the same set. Their members are the same, and so by the Axiom of Extensionality they are the same set.
Probability of a label appearing on a frazzle This is an exercise from a probability textbook: A frazzle is equally likely to contain $0,1,2,3$ defects. No frazzle has more than three defects. The cash price of each frazzle is set at \$ $10-K^2$, where $K$ is the number of defects in it. Gummed labels, each representing $\$ 1$, are placed on each frazzle to indicate its price. What is the probability that a randomly selected label will end up on a frazzle which has exactly two defects? Since the frazzles are equally likely to have $0,1,2,3$ defects, I may argue that a label is equally likely to appear on any of them. On the other hand, frazzles with fewer defects are more expensive, therefore requiring more labels; from this perspective, a label is most likely to appear on a frazzle with no defects. I am confused here.
It is not equally likely to go on any of the frazzles, because more labels will go to the frazzles with 0 defects than those with 3 defects, for example. 0,1,2,3 defects draws 10, 9, 6 and 1 labels respectively. So say you had 4 million frazzles. Since 0,1,2 or 3 defects are equally likely, suppose you have 1 million of each type. Then you have 10 million labels on those with 0 defects, 9 million labels on those with 1 defect, 6 million labels on those with 2 defects, and 1 million labels on those with 3 defects. So you used a total of 26 million labels, and 6 million of those labels went to frazzles with exactly two defects. Thinking about this example should lead you to understanding what the answer to your question is.
How to prove the derivative of position is velocity and of velocity is acceleration? How has it been proven that the derivative of position is velocity and the derivative of velocity is acceleration? From Google searching, it seems that everyone just states it as fact without any proof behind it.
The derivative is the slope of the function. So if the function is $f(x)=5x-3$, then $f'(x)=5$, because the derivative is the slope of the function. Velocity is the rate of change of position, so it's the slope of the position. Acceleration is the rate of change of velocity, so it's the slope of the velocity. Since derivatives are about slope, that is how the derivative of position is velocity, and the derivative of velocity is acceleration. So if the position can be expressed with the function $f(x)=x^2 - 3x + 7$, then the derivative would be $f'(x)=2x-3$ since that is the slope of the function at any given point, and since it is the slope of the position function, it is velocity. Same for acceleration; $f''(x)=2$, which is the derivative of the velocity, i.e. its slope. The slope of velocity is acceleration. This is how the derivative of position is velocity and the derivative of velocity is acceleration. NOTE: These functions are entirely hypothetical and were created on the spur of the moment.
Help with combinations The sum of all the different ways to multiply together $a,b,c,d,\ldots$ is equal to $$(a+1)(b+1)(c+1)(d+1)\cdots$$ right? If this is true, why is it true?
Yes it is true, give it a try... Ok, sorry. Here's a little more detail: We have the identity $$ \prod_{j=1}^n ( \lambda-X_j)=\lambda^n-e_1(X_1,\ldots,X_n)\lambda^{n-1}+e_2(X_1,\ldots,X_n)\lambda^{n-2}-\cdots+(-1)^n e_n(X_1,\ldots,X_n). $$ $\biggr[$use $\lambda=-1$ to get $ \prod_{j=1}^n ( -1-X_j)=(-1)^n\prod_{j=1}^n ( 1+X_j)$$\biggr]$ This can be proven by a double mathematical induction with respect to the number of variables n and, for fixed n, with respect to the degree of the homogeneous polynomial. I really thought that giving it a try would give some insight. Hope this helps.
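A tiny concrete case may help (the $n=2$ instance, added as an illustration): $$(a+1)(b+1)=ab+a+b+1,$$ so every product of a subset of $\{a,b\}$ appears exactly once, with the empty product contributing the $1$; the sum over nonempty subsets is therefore the full product minus $1$.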
Have I calculated this integral correctly? I have this integral to calculate: $$I=\int_{|z|=2}(e^{\sin z}+\bar z)dz.$$ I do it this way: $$I=\int_{|z|=2}e^{\sin z}dz+\int_{|z|=2}\bar zdz.$$ The first integral is $0$ because the function is holomorphic everywhere and it is a contour integral. As for the second one, I have $$\int_{|z|=2}\bar zdz = \int_0^{2\pi}e^{-i\theta}\cdot 2 d\theta=-\int_0^{-2\pi}e^{i\tau}\cdot 2 d\tau=\int_0^{2\pi}e^{i\tau}\cdot 2 d\tau=\int_{|z|=2}zdz=0$$ because the function is now holomorphic. It seems fishy to me. Is it correct?
If $z = 2e^{i \theta}$, then $$\bar{z} dz = 2e^{-i \theta}2i e^{i \theta} d \theta = 4i d \theta$$ Hence, $$\int_{\vert z \vert = 2} \bar{z} dz = \int_0^{2 \pi} 4i d \theta = 8 \pi i$$
Why is the absolute value needed with the scaling property of fourier tranforms? I understand how to prove the scaling property of Fourier Transforms, except the use of the absolute value: If I transform $f(at)$ then I get $F\{f(at)\}(w) = \int f(at) e^{-jwt} dt$ where I can substitute $u = at$ and thus $du = a dt$ (and $\frac{du}{a} = dt$) which gives me: $ \int f(u) e^{-j\frac{w}{a}u} \frac{du}{a} = \frac{1}{a} \int f(u) e^{-j\frac{w}{a}u} du = \frac{1}{a} F \{f(u)\}(\frac{w}{a}) $ But, according to various references, it should be $ \frac{1}{|a|} F \{f(u)\}(\frac{w}{a}) $ and I don't understand WHY or HOW I get/need the absolute value here?
Think about the range of the variable $t$ in the integral that gives the transform. How do the 'endpoints' of this improper integral transform under $t\to at$? Can you see how this depends on the sign of $a$?
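Making the hint explicit (a sketch, not part of the original hint): when $a<0$ the substitution $u=at$ reverses the limits of the improper integral, and flipping them back contributes a factor $-1$, so
$$\int_{-\infty}^{\infty} f(at)e^{-jwt}\,dt \;=\; \int_{+\infty}^{-\infty} f(u)e^{-j\frac{w}{a}u}\,\frac{du}{a} \;=\; -\frac{1}{a}\int_{-\infty}^{\infty} f(u)e^{-j\frac{w}{a}u}\,du \;=\; \frac{1}{|a|}\,F\{f\}\!\left(\frac{w}{a}\right),$$
which is where the absolute value comes from.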
Equation to determine radius for a circle that should intersect a given point? Simple question. I tried Google but I don't know what search keywords to use. I have two points on a $2D$ plane. Point 1 $=(x_1, y_1)$ and Point 2 $=(x_2, y_2)$. I'd like to draw a circle around Point 1, and the radius of the circle should be so that it intersects exactly with Point 2. What is the equation to determine the required radius?
The radius is simply the distance between the two points. So use the standard Euclidean distance which you should have learned.
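Concretely, with the notation of the question, $$r=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}.$$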
Find the determinant of $A$ satisfying $A^{-1}=I-2A.$ I am stuck with the following problem: Let $A$ be a $3\times 3$ matrix over real numbers satisfying $A^{-1}=I-2A.$ Then find the value of det$(A).$ I do not know how to proceed. Can someone point me in the right direction? Thanks in advance for your time.
No such $A$ exists. Hence we cannot speak of its determinant. Suppose $A$ is real and $A^{-1}=I-2A$. Then $A^2-\frac12A+\frac12I=0$. Hence the minimal polynomial $m_A$ of $A$ must divide $x^2-\frac12x+\frac12$, which has no real root. Therefore $m_A(x)=x^2-\frac12x+\frac12$. But the minimal polynomial and characteristic polynomial $p_A$ of $A$ must have identical irreducible factors, and this cannot happen because $p_A$ has degree 3 and $m_A$ is an irreducible polynomial of degree 2. Edit: The OP says that the question appears on an entrance exam paper, and four answers are given: (a) $1/2$, (b) $−1/2$, (c) $1$, (d) $2$. It seems that there's a typo in the exam question and $A$ is probably $2\times2$. If this is really the case, then the above argument shows that the characteristic polynomial of $A$ is $x^2-\frac12x+\frac12$. Hence $\det A = 1/2$.
The Green's function $G(x,t)$ of the boundary value problem $\frac{d^2y}{dx^2}-\frac{1}{x}\frac{dy}{dx} = 1$ The Green's function $G(x,t)$ of the boundary value problem $\frac{d^2y}{dx^2}-\frac{1}{x}\frac{dy}{dx} = 1$, $y(0)=y(1)=0$ is $G(x,t)= f_1(x,t)$ if $x≤t$ and $G(x,t)= f_2(x,t)$ if $t≤x$, where (a) $f_1(x,t)=-\frac{1}{2}t(1-x^2)$ ; $f_2(x,t)=-\frac{1}{2t}x^2(1-t^2)$ (b) $f_1(x,t)=-\frac{1}{2x}t^2(1-x^2)$ ; $f_2(x,t)=-\frac{1}{2t}x^2(1-t^2)$ (c) $f_1(x,t)=-\frac{1}{2t}x^2(1-t^2)$ ; $f_2(x,t)=-\frac{1}{2}(1-x^2)$ (d) $f_1(x,t)=-\frac{1}{2t}x^2(1-x^2)$ ; $f_2(x,t)=-\frac{1}{2x}t^2(1-x^2)$ Which option is correct? I am not getting my calculation right; my answer was very similar to these but does not match any of them completely. At first I multiplied the equation by $1/x$ on both sides and converted it to a Sturm–Liouville equation, but I am still not getting the right answer. Any help please?
Green's function is symmetric, so answer can be (b) and (d).
Can't argue with success? Looking for "bad math" that "gets away with it" I'm looking for cases of invalid math operations producing (in spite of it all) correct results (aka "every math teacher's nightmare"). One example would be "cancelling" the 6's in $$\frac{64}{16}.$$ Another one would be something like $$\frac{9}{2} - \frac{25}{10} = \frac{9 - 25}{2 - 10} = \frac{-16}{-8} = 2 \;\;.$$ Yet another one would be $$x^1 - 1^0 = (x - 1)^{(1 - 0)} = x - 1\;\;.$$ Note that I am specifically not interested in mathematical fallacies (aka spurious proofs). Such fallacies produce shockingly wrong ends by (seemingly) valid means, whereas what I am looking for all cases where one arrives at valid ends by (shockingly) wrong means. Edit: fixed typo in last example.
Slightly contrived: Given $n = \frac{2}{15}$ and $x=\arccos(\frac{3}{5})$, find $\frac{\sin(x)}{n}$. $$ \frac{\sin(x)}{n} = \mathrm{si}(x) = \mathrm{si}x = \boxed{6} $$
Showing that complicated mixed polynomial is always positive I want to show that $\left(132 q^3-175 q^4+73 q^5-\frac{39 q^6}{4}\right)+\left(-144 q^2+12 q^3+70 q^4-19 q^5\right) r+\left(80 q+200 q^2-243 q^3+100 q^4-\frac{31 q^5}{2}\right) r^2+\left(-208 q+116 q^2+24 q^3-13 q^4\right) r^3+\left(80-44 q-44 q^2+34 q^3-\frac{23 q^4}{4}\right) r^4$ is strictly positive whenever $q \in (0,1)$ (numerically, this holds for all $r \in \mathbb{R}$, although I'm only interested in $r \in (0,1)$). Is that even possible analytically? Any idea towards a proof would be greatly appreciated. Many thanks! EDIT: Here is some more information. Let $f(r) = A + Br + Cr^2 + Dr^3 + Er^4$ be the function as defined above. Then it holds that $f(r)$ is a strictly convex function in $r$ for $q \in (0,1)$, $f(0) > 0$, $f'(0) < 0$, and $f'(q) > 0$. Hence, for the relevant $q \in (0,1)$, $f(r)$ attains its minimum for some $r^{min} \in (0,q)$. $A$ is positive and strictly increasing in $q$ for the relevant $q \in (0,1)$, $B$ is negative and strictly decreasing in $q$ for the relevant $q \in (0,1)$, $C$ is positive and strictly increasing in $q$ for the relevant $q \in (0,1)$, $D$ is negative and non-monotonic in $q$, and $E$ is positive and strictly decreasing in $q$ for the relevant $q \in (0,1)$.
Because you want to show that this is always positive, consider what happens when $q$ and $r$ get really big. The polynomials with the largest powers will dominate the result. You can solve this quite easily by approximating the final value using a large number of inequalities.
$H_1\triangleleft G_1$, $H_2\triangleleft G_2$, $H_1\cong H_2$ and $G_1/H_1\cong G_2/H_2 \nRightarrow G_1\cong G_2$ Find a counterexample to show that if $ G_1 $ and $G_2$ groups, $H_1\triangleleft G_1$, $H_2\triangleleft G_2$, $H_1\cong H_2$ and $G_1/H_1\cong G_2/H_2 \nRightarrow G_1\cong G_2$ I tried but I did not have success, I believe that these groups are infinite.
The standard counterexample to that implication is the quaternion group $Q_8$ and the dihedral group $D_4$. Both groups have order $2^3=8$, so that every maximal subgroup (i.e. one of order 4) is normal. The cyclic group of order 4 is contained in both groups and the quotient has order 2 in both cases. So all assertions are satisfied but $D_4\ncong Q_8$, of course.
How to integrate this $\int\frac{\mathrm{d}x}{{(4+x^2)}^{3/2}} $ without trigonometric substitution? I have been looking for a possible solution, but the ones I found use trigonometric substitution. I need a solution for this integral that avoids trigonometric substitution: $$\int\frac{\mathrm{d}x}{{(4+x^2)}^{3/2}}$$
$$\frac{1}{\left(4+x^2\right)^{3/2}}=\frac{1}{8}\cdot\frac{1}{\left(1+\left(\frac{x}{2}\right)^2\right)^{3/2}}$$ Now try $$x=2\sinh u\implies dx=2 \cosh u\,du\implies$$ $$\int\frac{dx}{\left(4+x^2\right)^{3/2}}=\frac{1}{8}\int\frac{2\,du\cosh u}{(1+\sinh^2u)^{3/2}}=\frac{1}{4}\int\frac{du}{\cosh^2u}=\ldots $$
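For completeness (a sketch of the remaining steps, added here rather than in the original answer): $\int \frac{du}{\cosh^2 u}=\tanh u + C$, and converting back with $\sinh u = x/2$, $\cosh u=\sqrt{1+x^2/4}$ gives
$$\int\frac{dx}{\left(4+x^2\right)^{3/2}}=\frac{1}{4}\tanh u + C=\frac{x}{4\sqrt{4+x^2}}+C,$$
which you can confirm by differentiating.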
What exactly is infinity? On Wolfram|Alpha, I was bored and asked for $\frac{\infty}{\infty}$ and the result was (indeterminate). Another two that give the same result are $\infty ^ 0$ and $\infty - \infty$. From what I know, given $x$ being any number, excluding $0$, $\frac{x}{x} = 1$ is true. So just what, exactly, is $\infty$?
I am not much of a mathematician, but I kind of think of infinity as a behavior of increasing without bound at a certain rate rather than a number. That's why I think $\infty \div \infty$ is an undetermined value, you got two entities that keep increasing without bound at different rates so you don't know which one is larger. I could be wrong, but this is my understanding though.
Identity for central binomial coefficients On Wikipedia I came across the following equation for the central binomial coefficients: $$ \binom{2n}{n}=\frac{4^n}{\sqrt{\pi n}}\left(1-\frac{c_n}{n}\right) $$ for some $1/9<c_n<1/8$. Does anyone know of a better reference for this fact than wikipedia or planet math? Also, does the equality continue to hold for positive real numbers $x$ instead of the integer $n$ if we replace the factorials involved in the definition of the binomial coefficient by Gamma functions?
It appears to be true for $x > .8305123339$ approximately: $c_x \to 0$ as $x \to 0+$.
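One way to convince yourself of the stated bounds numerically (a hedged sketch; here $c_n := n\left(1-\binom{2n}{n}\sqrt{\pi n}\,/\,4^n\right)$, solved from the displayed identity):

```python
from math import comb, pi, sqrt

for n in (1, 2, 5, 10, 100):
    c_n = n * (1 - comb(2 * n, n) * sqrt(pi * n) / 4**n)
    print(n, c_n)   # stays between 1/9 = 0.111... and 1/8 = 0.125
```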
Proof for: $(a+b)^{p} \equiv a^p + b^p \pmod p$ a, b are integers. p is prime. I want to prove: $(a+b)^{p} \equiv a^p + b^p \pmod p$ I know about Fermat's little theorem, but I still can't get it I know this is valid: $(a+b)^{p} \equiv a+b \pmod p$ but from there I don't know what to do. Also I thought about $(a+b)^{p} = \sum_{k=0}^{p}\binom{p}{k}a^{k}b^{p-k}=\binom{p}{0}b^{p}+\sum_{k=1}^{p-1} \binom{p}{k} a^{k}b^{p-k}+\binom{p}{p}a^{p}=b^{p}+\sum_{k=1}^{p-1}\binom{p}{k}a^{k}b^{p-k}+a^{p}$ Any ideas? Thanks!
First of all, $a^p \equiv a \pmod p$ and $b^p \equiv b \pmod p$ implies $a^p + b^p \equiv a + b \pmod p$. Also, $(a+b)^p \equiv a + b \pmod p$. By transitivity of modulo, combine the above two results and get $(a+b)^p \equiv a^p + b^p \pmod p$. Done.
Proving an Entire Function is a Polynomial I had this question on last semesters qualifying exam in complex analysis, and I've attempted it several times since to little result. Let $f$ be an entire function with $|f(z)|\geq 1$ for all $|z|\geq 1$. Prove that $f$ is a polynomial. I was trying to use something about $f$ being uniformly convergent to a power series, but I can't get it to go anywhere.
Picard's Theorem proves this instantly; it states: Let $f$ be a transcendental (non-polynomial) entire function. Then $f-a$ must have infinitely many zeros for every $a$ (except for possibly one exception, called the lacunary value). For example, $e^z-a$ will have infinitely many zeros except for $a=0$, and so the lacunary value of $e^z$ is zero. Your inequality implies that $f$ and $f-\frac{1}{2}$ have only a finite number of zeros. Thus $f$ cannot be transcendental. Of course this is what we call hitting a tack with a sledgehammer. A more realistic approach might be the following: Certainly $f$ has a finite number of zeros, say $a_1,\ldots,a_n$, so write $f=(z-a_1)\cdots(z-a_n)\cdot h$, where $h$ is some non-zero entire function. Then the inequalities above give us $|\frac{1}{h}|\le \max\left\{\max_{z\in D(0,2)}|\frac{1}{h}|, |(z-a_1)\cdots (z-a_n)|\right\}$ on the entire complex plane. Said more simply, $|\frac{1}{h}|<|p(z)|$ for every $z\in \mathbb{C}$ for some polynomial $p$. That implies $\frac{1}{h}$ is a polynomial. But remember that $\frac{1}{h}$ is nonzero, so $h$ is a constant and $f$ must therefore be a polynomial.
Show $S = f^{-1}(f(S))$ for all subsets $S$ iff $f$ is injective Let $f: A \rightarrow B$ be a function. How can we show that for all subsets $S$ of $A$, $S \subseteq f^{-1}(f(S))$? I think this is a pretty simple problem but I'm new to this so I'm confused. Also, how can we show that $S = f^{-1}(f(S))$ for all subsets $S$ iff $f$ is injective?
$S \subseteq f^{-1}(f(S)):$ Choose $a\in S.$ To show $a\in f^{-1}(f(S))$ it suffices to show that $\exists$ $a'\in S$ such that $a\in f^{-1}(f(a')),$ i.e. to show $\exists$ $a'\in S$ such that $f(a)=f(a').$ Now take $a'=a.$ $S = f^{-1}(f(S))$ for all $S \subseteq A$ $\iff f$ is injective:
* $\Leftarrow:$ Let $f$ be injective. Choose $s'\in f^{-1}(f(S))\implies f(s')\in f(S)\implies \exists$ $s\in S$ such that $f(s')=f(s)\implies s'=s$ (since $f$ is injective) $\implies s'\in S.$ So $f^{-1}(f(S))\subset S.$ The reverse inclusion has been proved earlier. Therefore $f^{-1}(f(S))= S.$
* $\Rightarrow:$ Let $f^{-1}(f(S))= S$ for all $S \subseteq A.$ Let $f(s_1)=f(s_2)$ for some $s_1,s_2\in A.$ Then $s_1\in f^{-1}(f(\{s_2\}))=\{s_2\}\implies s_1=s_2\implies f$ is injective.
Understanding $\frac {b^{n+1}-a^{n+1}}{b-a} = \sum_{i=0}^{n}a^ib^{n-i}$ I'm going through a book about algorithms and I encounter this. $$\frac {b^{n+1}-a^{n+1}}{b-a} = \sum_{i=0}^{n}a^ib^{n-i}$$ How is this equation formed? If a theorem has been applied, what theorem is it? [Pardon me for asking such a simple question. I'm not very good at maths.]
Multiply both sides by $b-a$, watch for the cancelling of terms, and you will have your answer.
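Spelling that out (a short sketch of the cancellation): multiplying the right-hand side by $b-a$ telescopes,
$$(b-a)\sum_{i=0}^{n}a^ib^{n-i}=\sum_{i=0}^{n}a^ib^{\,n+1-i}-\sum_{i=0}^{n}a^{i+1}b^{\,n-i}=b^{n+1}-a^{n+1},$$
because every term except the $i=0$ term of the first sum and the $i=n$ term of the second sum cancels in pairs.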
In the induction proof for $(1+p)^n \geq 1 + np$, a term is dropped and I don't understand why. In What is Mathematics, pg. 15, a proof of $(1+p)^n \geq 1 + np$, for $p>-1$ and positive integer $n$ goes as follows: * *Substitute $r$ for $n$, then multiply both sides by $1+p$, obtaining: $(1+p)^{r+1}\geq 1+rp+p+rp^2$ *"Dropping the positive term $rp^2$ only strengthens this inequality, so that $(1+p)^{r+1}\geq 1+rp+p$, which shows that the inequality will hold for $r+1$." I don't understand why the $rp^2$ term can be dropped -- if we're trying to prove that the inequality holds, and dropping $rp^2$ strengthens the inequality, then why are we allowed to drop it? Thanks!
In $1.$ we have shown that $$(1+p)^{r+1}\geq 1+rp+p+rp^2$$ But we also know that $r \ge 1$ (because we're doing an induction proof from $1$ upwards); and obviously $p^2 \ge 0$ (because $p$ is real); so we know that $rp^2 \ge 0$. Therefore $$1+rp+p+rp^2 \ge 1+rp+p$$ So putting these two together gives $$(1+p)^{r+1}\geq 1+rp+p$$ as required. In short, if we know that $a \ge b + c$, and we know $c$ is non-negative, we can immediately conclude that $a \ge b$.
$n\times n$ matrices $A$ with complex entries Let $U$ be the set of all $n\times n$ matrices $A$ with complex entries such that $A$ is unitary. Then $U$ as a topological subspace of $\mathbb{C}^{n^{2}}$ is
1. compact but not connected.
2. connected but not compact.
3. connected and compact.
4. neither connected nor compact.
I am stuck on this problem. Can anyone help me please? I don't know where to begin.
For connectedness, examine the set of possible determinants, and whether or not you can find a path of unitary matrices between two unitary matrices with different determinants. For compactness, look at sequences of unitary matrices and examine whether or not one can be constructed to not have a convergent subsequence. Once you have an affirmative or negative answer to the above paragraphs, you pick the corresponding alternative, and you're done.
How to calculate $\overline{\cos \phi}$ How do you calculate $\overline{\cos \phi}$, where $\phi\in\mathbb{C}$? I am trying to prove that $\cos \phi \cdot \overline{\cos \phi} +\sin \phi \cdot \overline{\sin \phi}=1$?
$$ \cos(x+iy) = \cos x \cos (iy) - \sin x \sin(iy) = \cos x \cosh y - i \sin x \sinh y $$ $$ \overline {\cos(x+iy)} = \cos x \cosh y + i \sin x \sinh y = \cos x \cos (-iy) - \sin x \sin(-iy) = \cos(x-iy) $$ (using $\cos(iy)=\cosh y$ and $\sin(iy)=i\sinh y$).
Testing Convergence of $\sum \sqrt{\ln{n}\cdot e^{-\sqrt{n}}}$ What test should i apply for testing the convergence/divergence of $$\sum_{n=1}^{\infty} \sqrt{\ln{n}\cdot e^{-\sqrt{n}}}$$ Help with hints will be appreciated. Thanks
The $n$-th term is equal to $$\frac{\sqrt{\log n}}{e^{\sqrt{n}/2}}.$$ The intuition is that the bottom grows quite fast, while the top does not grow fast at all. In particular, after a while the top is $\lt n$. If we can show, for example, that after a while $e^{\sqrt{n}/2}\gt n^3$, then by comparison with $\sum \frac{1}{n^2}$ we will be finished. So is it true that in the long run $e^{\sqrt{n}/2}\gt n^3$? Equivalently, is it true that in the long run $\sqrt{n}/2\gt 3\log n$? Sure, in fact $\lim_{n\to\infty}\dfrac{\log n}{\sqrt{n}}=0$, by L'Hospital's Rule, and in other ways. Remark: A named test that works well here is the Cauchy Condensation Test. I believe that a more "hands on" confrontation with the decay rate of the $n$-th term is more informative.
Finding a dominating function for this sequence Let $$f_n (x) = \frac{nx^{1/n}}{ne^x + \sin(nx)}.$$ The question is: with the dominated convergence theorem find the limit $$ \lim_{n\to\infty} \int_0^\infty f_n (x) dx. $$ So I need to find an integrable function $g$ such that $|f_n| \leq g$ for all $n\in \mathbf N$. I tried $$ \frac{nx^{1/n}}{ne^x + \sin(nx)} = \frac{x^{1/n}}{e^x + \sin(nx)/n} \leq \frac{x^{1/n}}{e^x - 1} \leq \frac{x^{1/n}}{x}. $$ But I can't get rid of that $n$. Can anyone give me a hint?
We have \begin{align} \left| \frac{nx^{1/n}}{ne^x + \sin(nx)} \right|= & \frac{|x^{1/n}|}{|e^x + \sin(nx)/n|} & \\ \leq & \frac{\max\{1,x\}}{|e^x + \sin(nx)/n|} & \mbox{by } |x^{1/n}|\leq \max\{1,x\} \\ \leq & \frac{\max\{1,x\}}{|e^x -\epsilon |} & \mbox{if } |e^x + \sin(nx)/n|\geq |e^x -\epsilon| \\ \end{align} Note that for every $\epsilon\in(0,1)$ there exists $N$ sufficiently large such that $$ \left|\frac{1}{n}\sin(nx)\right|<\epsilon \implies |e^x + \sin(nx)/n|\geq |e^x -\epsilon| $$ whenever $n>N$.
Working out digits of Pi. I have always wondered how the digits of π are calculated. How do they do it? Thanks.
The Chudnovsky algorithm, which just uses the very rapidly converging series $$\frac{1}{\pi} = 12 \sum^\infty_{k=0} \frac{(-1)^k (6k)! (13591409 + 545140134k)}{(3k)!(k!)^3 640320^{3k + 3/2}},$$ was used by the Chudnovsky brothers, who are some of the points on your graph. It is also the algorithm used by at least one arbitrary precision numerical library, mpmath, to compute arbitrarily many digits of $\pi$. Here is the relevant part of the mpmath source code discussing why this series is used, and giving a bit more detail on how it is implemented (and if you want, you can look right below that to see exactly how it is implemented). It actually uses a method called binary splitting to evaluate the series faster.
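Since the answer mentions mpmath, here is the one-liner version of "arbitrarily many digits" (a hedged sketch of usage; internally the digits come from the Chudnovsky series with binary splitting, as described above):

```python
from mpmath import mp

mp.dps = 60          # number of decimal places of working precision
print(mp.pi)         # 3.14159265358979323846...
```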
Constrain Random Numbers to Inside a Circle I am generating two random numbers to choose a point in a circle randomly. The circle's radius is 3000 with origin at the center. I'm using -3000 to 3000 as the bounds for the random numbers. I'm trying to get the coordinates to fall inside the circle (i.e. (3000, 3000) is not in the circle). What equation could I use to test whether the two numbers are within bounds? I can generate a new pair if a point falls out of bounds.
Compare $x^2+y^2$ with $r^2$ and reject / retry if $x^2+y^2\ge r^2$.
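In code, the rejection loop might look like this (a hedged sketch; `random.uniform` is the assumed source of the two random numbers):

```python
import random

def random_point_in_circle(r=3000.0):
    # Keep drawing points in the bounding square until one lands strictly inside the circle.
    while True:
        x = random.uniform(-r, r)
        y = random.uniform(-r, r)
        if x * x + y * y < r * r:
            return x, y

print(random_point_in_circle())
```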
Class Group of $\mathbb{Q}(\sqrt{-47})$ Calculate the class group of $\mathbb{Q}(\sqrt{-47})$. I have this: the Minkowski bound is approximately $4.36$. Thanks!
Here is another attempt. In case I made any mistakes, let me know and I will either try and fix it, or delete my answer. We have Minkowski bound $\frac{2 \sqrt{47}}{\pi}<\frac{2}{3}\cdot 7=\frac{14}{3}\approx 4.66$. So let us look at the primes $2$ and $3$: $-47\equiv 1$ mod $8\quad\Rightarrow\quad 2$ is split, i.e. $(2)=P\overline P$ for some prime ideals $P,\overline P$. NB: In fact we have $P=(2,\delta)$ and $\overline P=(2,\overline \delta)$ with $\delta=\frac{1+\sqrt{-47}}{2}$ and $\overline\delta=\frac{1-\sqrt{-47}}{2}$. But this is going to be irrelevant in the rest of the proof. $-47\equiv 1$ mod $3\quad\Rightarrow\quad 3$ is split, i.e. $(3)=Q \overline Q$ for some prime ideals $Q,\overline Q$. So the class group has at most 5 elements with representatives $(1),P,\overline P, Q, \overline Q$. Note that $P$ is not principal, because $N(\frac{a+b\sqrt{-47}}{2})=\frac{a^2+47b^2}{4}=2$ does not have an integer solution (because $8$ is not a square). So $P$ does not have order $1$. Suppose $P$ has order $2$. Then $P^2$ is a principal ideal with $N(P^2)=N(P)^2=2^2=4$. The only elements with norm $4$ are $\pm2$. But $P^2$ cannot be $(2)$, because $2$ is split. Suppose $P$ has order $3$. Then $P^3$ is a principal ideal with $N(P^3)=N(P)^3=2^3=8$. But $N(\frac{a+b\sqrt{-47}}{2})=\frac{a^2+47b^2}{4}=8$ does not have an integer solution (because $32$ is not a square). Suppose $P$ has order $4$. Then $P^4$ is a principal ideal with $N(P^4)=16$. The only elements with norm $16$ are $\pm4$. But $P^4$ cannot be $(4)$, because $(4)=(2)(2)=P\overline P P\overline P$ is the unique factorisation, and $P\ne \overline P$. Suppose $P$ has order $5$. Then $P^5$ is a principal ideal with $N(P^5)=32$. And, indeed, the element $\frac{9+\sqrt{-47}}{2}$ has norm $32$. So $P^5=(\frac{9+\sqrt{-47}}{2})$. Hence the class group is cyclic of order $5$.
The particular solution of the recurrence relation I cannot find out why the particular solution of $a_n=2a_{n-1} +3n$ is $a_{n}=-3n-6$. Here is how I solved the relation $a_n-2a_{n-1}=3n$ with $\beta (n)= 3n$: using direct guessing $a_n=B_1 n+ B_2$, $B_1 n+ B_2 - 2 (B_1 n+ B_2) = 3n$, so $B_1 = -3$, $B_2 = 0$; the particular solution is $a_n = -3 n$ and the homogeneous solution is $a_n = A_1 (-2)^n$. Why is it wrong?
using direct guessing $a_n=B_1 n+ B_2$ $B_1 n+ B_2 - 2 (B_1 (n-1)+ B_2) = 3n$ then $B_1 - 2B_1 = 3$ $2 B_1 - B_2 =0$ The solution will be $B_1 = -3, B_2=-6$
3-D geometry : three vertices of a ||gm ABCD is (3,-1,2), (1,2,-4) & (-1,1,2). Find the coordinate of the fourth vertex. The question is Three vertices of a parallelogram ABCD are A(3,-1,2), B(1,2,-4) and C(-1,1,2). Find the coordinate of the fourth vertex. To get the answer I tried the distance formula, equated AB=CD and AC=BD.
If you have a parallelogram ABCD, then you know the vectors $\vec{AB}$ and $\vec{DC}$ need to be equal as they are parallel and have the same length. Since we know that $\vec{AB}=(-2,\,3,-6)$ you can easily calculate $D$ since you (now) know $C$ and $\vec{CD}(=-\vec{AB})$. We get for $\vec{0D}=\vec{0C}+\vec{CD}=(-1,\,1,\,2)+(\,2,-3,\,6)=(\,1,-2,\,8)$ and hence $D(\,1,-2,\,8)$.
Eigenvalues of $AB$ and $BA$? Let $A,B \in M(n,\mathbb{C})$ be two $n\times n$ matrices. I would like to know how to prove that the eigenvalues of $AB$ are the same as the eigenvalues of $BA$.
You can prove $|\lambda I-AB|=|\lambda I-BA|$ by computing the determinant of the following matrix $$ \left( \begin{array}{cc} I & A \\ B & I \\ \end{array} \right) $$ in two different ways.
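One way to flesh the hint out (a sketch, using a block matrix that carries the $\lambda$ explicitly rather than the plain one above): for the appropriate invertible blocks, the two Schur complement factorizations give
$$\det\begin{pmatrix}\lambda I & A\\ B & I\end{pmatrix}=\det(\lambda I - AB)=\lambda^{\,n}\det\!\Big(I-\tfrac1\lambda BA\Big)=\det(\lambda I - BA)\qquad(\lambda\neq 0),$$
and since both sides are polynomials in $\lambda$, equality for all $\lambda\neq0$ forces $\det(\lambda I-AB)=\det(\lambda I-BA)$ identically, so $AB$ and $BA$ have the same characteristic polynomial and hence the same eigenvalues.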
Solving the integral of a Modified Bessel function of the second kind I would like to find the answer for the following integral $$\int x\ln(x)K_0(x) dx $$ where $K_0(x)$ is the modified Bessel function of the second kind and $\ln(x)$ is the natural-log. Do you have any ideas how to find? Thanks in advance!
Here's what Mathematica found: Looks like an integration by parts to me (combined with an identity for modified Bessel functions).
Card probabilities Five cards are dealt from a standard deck of 52. What is the probability that the 3rd card is a Queen? What I dont understand here is how to factor in when one or both of the first two cards drawn are also Queens.
All orderings of the $52$ cards in the deck are equally likely. So the probability the third card in the deck is a Queen is exactly the same as the probability that the $17$-th card in the deck is a Queen, or that the first card in the deck is a Queen: They are all equal to $\dfrac{4}{52}$. The fact that $5$ cards were dealt is irrelevant.
Show That This Complex Sum Converges For complex $z$, show that the sum $$\sum_{n = 1}^{\infty} \frac{z^{n - 1}}{(1 - z^n)(1 - z^{n + 1})}$$ converges to $\frac{1}{(1 - z)^2}$ for $|z| < 1$ and $\frac{1}{z(1 - z)^2}$ for $|z| > 1$. Hint: Multiply and divide each term by $1 - z$, and do a partial fraction decomposition, getting a telescoping effect. I tried following the hint, but got stuck on performing a partial fraction decomposition. After all, since all polynomials can be factored in $\mathbb{C}$, how do I know what the factors of an arbitrary term are? I tried writing $$\frac{z^{n - 1}(1 - z)}{(1 - z^n)(1 - z^{n + 1})(1 - z)} = \frac{z^{n - 1}}{(1 - z)^3(1 + z + \dotsb + z^{n - 1})(1 + z + \dotsb + z^n)} - \frac{z^n}{(1 - z)^3(1 + z + \dotsb + z^{n - 1})(1 + z + \dotsb + z^n)},$$ but didn't see how this is helpful.
HINT: Use $$ \frac{z^{n}-z^{n+1}}{(1-z^n)(1-z^{n+1})} = \frac{1}{1-z^n} - \frac{1}{1-z^{n+1}} $$
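Carrying the hint to the end (a sketch, added for completeness): dividing by $z(1-z)$ turns the general term into a telescoping difference,
$$\frac{z^{n-1}}{(1-z^n)(1-z^{n+1})}=\frac{1}{z(1-z)}\left(\frac{1}{1-z^{n}}-\frac{1}{1-z^{n+1}}\right),$$
so the partial sums collapse to $\frac{1}{z(1-z)}\left(\frac{1}{1-z}-\frac{1}{1-z^{N+1}}\right)$. As $N\to\infty$ the term $\frac{1}{1-z^{N+1}}$ tends to $1$ when $|z|<1$, giving $\frac{1}{(1-z)^2}$, and tends to $0$ when $|z|>1$, giving $\frac{1}{z(1-z)^2}$.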
Pigeon-hole Principle: Does this proof have a typo? This was an example of the generalized pigeon-hole principle. Ten dots are placed within a square of unit size. The textbook then shows a box divided into 9 equal squares. Then there are three dots that can be covered by a disk of radius 0.5. The proof: Divide our square into four equal parts by its diagonals (from one corner to the other); then by the generalized pigeon-hole principle, at least one of these triangles will contain three of our points. The proof follows as the radius of the circumcircle of these triangles is shorter than 0.5. But wait! The statement said three dots can be covered by a disk of radius 0.5. Typo?
The proof is basically correct, but yes, there is a typo: the circumcircle of each of the four triangles has radius exactly $0.5$, not less than $0.5$. If $O$ is the centre of the square, and $A$ and $B$ are adjacent corners, the centre of the circumcircle of $\triangle AOB$ is the midpoint of $\overline{AB}$, from which the distance to each of $A,O$, and $B$ is $0.5$. The circle of radius $0.5$ and centre at the midpoint of $\overline{AB}$ contains $\triangle AOB$, as required.
What is Cumulative Distribution Function of this random variable? Suppose that we have $n$ independent random variables, $x_1,\ldots,x_n$ such that each $x_i$ takes value $a_i$ with success probability $p_i$ and value $0$ with failure probability $1-p_i$ ,i.e., \begin{align} P(x_1=a_1) & = p_1,\ P(x_1=0)= 1-p_1 \\ P(x_2=a_2) & = p_2,\ P(x_2=0) = 1-p_2 \\ & \vdots \\ P(x_n=a_n) & = p_n,\ P(x_n=0)=1-p_n \end{align} where $a_i$'s are positive Real numbers. What would be the CDF of the sum of these random variables? That is, what would be $P(x_1+\cdots+x_n\le k)$ ? and how can we find it in an efficient way?
This answer is an attempt at providing an answer to a previous version of the question in which the $x_i$ were independent Bernoulli random variables with parameters $p_i$. $P\{\sum_{i=1}^n x_i = k\}$ equals the coefficient of $z^k$ in $(1-p_1+p_1z)(1-p_2+p_2z)\cdots(1-p_n+p_nz)$. This can be found by developing the Taylor series for this function. It is not much easier than grinding out the answer by brute force.
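For the Bernoulli version this answer addresses, extracting the coefficients is just repeated polynomial multiplication, which is cheap to do numerically (a hedged sketch using numpy convolution; `p` is a made-up list of success probabilities):

```python
import numpy as np

p = [0.2, 0.5, 0.9, 0.3]            # example success probabilities

# pmf[k] = P(sum of the Bernoulli x_i equals k); start from the empty product 1
pmf = np.array([1.0])
for prob in p:
    pmf = np.convolve(pmf, [1 - prob, prob])   # multiply by (1 - p_i) + p_i z

print(pmf, pmf.sum())                # distribution of the sum; probabilities sum to 1
print(np.cumsum(pmf))                # CDF: P(sum <= k)
```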
Let $f$ be a continuous function on $[0, 1]$ with $f(0) =1$. Let $ G(a) = \frac{1}{a}\int_0^a f(x)\,dx$; then which of the following is true? Let $f$ be a continuous function on $[0, 1]$ with $f(0) =1$. Let $ G(a) = \frac{1}{a}\int_0^a f(x)\,dx$; then which of the following is true?
1. $\lim_{a\to 0} G(a)=1/2$
2. $\lim_{a\to 0} G(a)=1$
3. $\lim_{a\to 0} G(a)=0$
4. The limit $\lim_{a\to 0} G(a)$ does not exist.
I am completely stuck on it. How should I solve this?
Note that $G(a)$ is the mean (or average) value of the function on the interval $[0,a]$. Here’s an intuitive argument that should help you see what’s going on. The function $f$ is continuous, and $f(0)=1$, so when $x$ is very close to $0$, $f(x)$ must be close to $1$. Thus, for $a$ close to $0$, $f(x)$ should be close to $1$ for every $x\in[0,a]$, and therefore its mean value should also be close to $1$. From that it should be easy to pick out the right answer, but it would also be a good exercise for you to try to prove that the answer really is right.
How can I solve this differential equation? How can I find a solution of the following differential equation: $$\frac{d^2y}{dx^2} =\exp(x^2+ x)$$ Thanks!
$$\frac{d^2y}{dx^2}=f(x)$$ Integrating both sides with respect to x, we have $$\frac{dy}{dx}=\int f(x)~dx+A=\phi(x)+A$$ Integrating again $$y=\int \phi(x)~dx+Ax+B=\chi(x)+Ax+B$$
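For this particular right-hand side the two antiderivatives are not elementary; a sketch with SymPy (assuming SymPy expresses them via the imaginary error function `erfi`; the constants `A`, `B` play the role of the integration constants above):

```python
import sympy as sp

x, A, B = sp.symbols('x A B')
f = sp.exp(x**2 + x)
phi = sp.integrate(f, x)              # first antiderivative (involves erfi)
y = sp.integrate(phi, x) + A*x + B    # general solution
print(y)
print(sp.simplify(y.diff(x, 2) - f))  # should simplify to 0
```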
$\sqrt{(a+b-c)(b+c-a)(c+a-b)} \le \frac{3\sqrt{3}abc}{(a+b+c)\sqrt{a+b+c}}$ Suppose $a, b, c$ are the lengths of three triangular edges. Prove that: $$\sqrt{(a+b-c)(b+c-a)(c+a-b)} \le \frac{3\sqrt{3}abc}{(a+b+c)\sqrt{a+b+c}}$$
As the hint given in the comment says (I denote by $S$ the area of $ABC$ and by $R$ the radius of its circumcircle), if you multiply your inequality by $\sqrt{a+b+c}$ you'll get $$4S \leq \frac{3\sqrt{3}abc}{a+b+c}$$ which is equivalent to $$a+b+c \leq 3\sqrt{3}\frac{abc}{4S}=3\sqrt{3}R.$$ This inequality is quite well known. If you want a proof, you can write $a=2R \sin A$ (and the other two equalities) and get the equivalent inequality $$ \sin A +\sin B +\sin C \leq \frac{3\sqrt{3}}{2}$$ which is an easy application of the Jensen inequality for the concave function $\sin : [0,\pi] \to [0,1]$.
Escalator puzzle equation I'm trying to understand the escalator puzzle. A man visits a shopping mall almost every day and he walks up an up-going escalator that connects the ground and the first floor. If he walks up the escalator step by step it takes him 16 steps to reach the first floor. One day he doubles his stride length (walks up climbing two steps at a time) and it takes him 12 steps to reach the first floor. If the escalator stood still, how many steps would there be on sight? The solution, apparently, is as follows: $16x = 12(x+1)$, so $x=3$, so the answer is 48. But why can we say $12(x+1)$? First, he covers 16 steps and the motion of the escalator gives him a multiplier of $x$ to cover a total of $16x$ steps. That makes sense. But why is this the same as 12 steps with a multiplier of $(x+1)$?
Let $d$ be the distance traveled, which is the same in both cases. Let $v$ be the man's own climbing speed and $x$ the speed of the escalator, and suppose he makes one stride per unit of time in both cases; in case 2 each stride covers two steps, so he is traveling at double the speed. Then the number of strides he counts equals the time the trip takes. In case 1 this gives $$\frac d{v+x}=16$$ and in case 2 $$\frac d{2v+x}=12.$$ Eliminating $d$, we get $x=2v$; therefore $d=48v$. Since one ordinary step has length $v$ times one unit of time, on a stationary escalator ($x=0$) the number of visible steps is $d/v=48$.
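The same two equations can also be handed to a computer algebra system; a small SymPy check (symbols as in the answer above):

```python
import sympy as sp

d, v, x = sp.symbols('d v x', positive=True)
sol = sp.solve([sp.Eq(d / (v + x), 16), sp.Eq(d / (2 * v + x), 12)], [d, x], dict=True)[0]
print(sol[x] / v, sol[d] / v)   # escalator speed 2v, visible steps d/v = 48
```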
Is this an equivalence relation (reflexivity, symmetry, transitivity) Let $\theta(s):\mathbb{C}\to \mathbb{R}$ be a well defined function. I define the following relation in $\mathbb{C}$. $\forall s,q \in \mathbb{C}: s\mathbin{R}q\iff\theta(s)\ne 0 \pmod {2\pi}$ (and) $\theta(q)\ne 0 \pmod {2\pi}$ The function $\pmod {2\pi}$ is the addition $\pmod {2\pi}$ My question: Is this an equivalence relation (reflexivity, symmetry, transitivity)? The formula of $\theta(s)$ is not important for this question.
Your relation is $$sRq\iff \theta(s)\not \equiv 0\text{ and }\theta(q)\not \equiv 0 \mod 2\pi$$ for $s,q\in \mathbb{C}$. For symmetry: $$sRq\iff \theta(s)\not \equiv 0\text{ and }\theta(q)\not \equiv 0 \mod 2\pi \iff qRs$$ For transitivity: $$sRq\text{ and }qRp\iff \theta(s)\not \equiv 0\text{ and }\theta(q)\not \equiv 0\text{ and }\theta(p)\not \equiv 0 \mod 2\pi\implies sRp$$ Reflexivity is: $$sRs\iff \theta(s)\not \equiv 0\mod 2\pi$$ That clearly depends on your choice of $\theta$. Therefore, $R$ is an equivalence relation iff $$\theta(\mathbb{C})\cap \left\{2k\pi:k\in \mathbb{Z}\right\}=\emptyset$$
Products of infinitely many measure spaces. Applications? * *What are some typical applications of the theory for measures on infinite product spaces? *Are there any applications that you think are particularly interesting - that make the study of this worthwhile beyond finite products, Fubini-Tonelli. *Are there theorems that require, or are equivalent to, certain choice principles (AC, PIT, etc)? (similar to Tychonoff in topology) Sorry for being so vague, I am just trying to get a feel for this new area before diving head-first into the technical details.
Infinite products of measure spaces are used very frequently in probability. Probabilists are frequently interested in what happens asymptotically as a random process continues indefinitely. The Strong Law of Large Numbers, for example, tells us that if $\{X_i\}_i$ is a sequence of independent, identically distributed random variables with finite mean $\mu$ then the sum $\frac{1}{n}\sum_{i=1}^n X_i$ converges almost surely to $\mu$. But how do we find infinitely many independent random variables to which we can apply this theorem? The most common way to produce these variables is with the infinite product. For example, say we want to flip a coin infinitely many times. A way to model this would be to let $\Omega$ be the probability space $\{-1,1\}$ where $P(1) = P(-1) = \frac{1}{2}$. Then we consider the probability space $\prod_{i=1}^\infty \Omega$, and let $X_i$ be the $i$th component. Then the $X_i$ are independent identically distributed variables.
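A finite-sample illustration of the coin-flip example (this only simulates finitely many coordinates of the infinite product, of course, but it shows the running averages settling down as the Strong Law predicts):

```python
import numpy as np

rng = np.random.default_rng(0)
flips = rng.choice([-1, 1], size=100_000)                  # X_i = ±1 with prob. 1/2
running_mean = np.cumsum(flips) / np.arange(1, flips.size + 1)
print(running_mean[[9, 99, 999, 9999, 99999]])             # drifts toward E[X_i] = 0
```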
What's the probability of a gambler losing \$10 in this dice game? What about making \$5? Is there a third possibility? Can you please help me with this question: In a gambling game, each turn a player throws 2 fair dice. If the sum of numbers on the dice is 2 or 7, the player wins a dollar. If the sum is 3 or 8, the player loses a dollar. The player starts to play with 10 dollars and stops the game if he loses all his money or if he earns 5 dollars. What's the probability that the player loses all the money, and what's the probability that he finishes the game as a winner? Is there some 3rd possibility to finish the game? If yes, what's its probability? Thanks a lot!
[edit: Apparently I misread the question. The player starts out with 10 dollars and not five.] Given that "rolling a 2 or 7" and "rolling a 3 or 8" have the same probability (both occur with probability 7/36), the problem of the probability of a player earning a certain amount of money before losing a different amount of money is the same as the problem of the Gambler's Ruin. What's different when considering individual rounds is that there's a possibility of a tie. But because a tie leaves everything unchanged, the Gambler's Ruin still applies simply because we can simply consider only events that do change the state of each player's money. Therefore, the probability that the player makes \$5 before losing \$10 is the same probability as flipping coins against somebody with $5, or 2/3. And the probability of the opposite event is 1/3. The third outcome, that the game goes on forever, has a probability that vanishes to zero.
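A quick Monte Carlo sanity check of the $2/3$ (a sketch; the parameters are just those of the problem, and the trial count is arbitrary):

```python
import random

def win_probability(start=10, target=15, trials=20_000, seed=1):
    random.seed(seed)
    wins = 0
    for _ in range(trials):
        money = start
        while 0 < money < target:
            s = random.randint(1, 6) + random.randint(1, 6)
            if s in (2, 7):      # win a dollar
                money += 1
            elif s in (3, 8):    # lose a dollar
                money -= 1
        wins += (money == target)
    return wins / trials

print(win_probability())   # should be close to 2/3
```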
Prove that $\sum \frac{a_n}{a_n+3}$ diverges Suppose $a_n>0$ for each $n\in \mathbb{N}$ and $\sum_{n=0}^{\infty} a_n $ diverges. How would one go about showing that $\sum_{n=0}^{\infty} \frac{a_n}{a_n+3}$ diverges?
Let $b_n=\dfrac{a_n}{a_n+3}$. If the $a_n$ are unbounded, then $b_n$ does not approach $0$, and therefore $\sum b_n$ diverges. If the $a_n$ are bounded by $B$, then $b_n\ge \dfrac{1}{B+3} a_n$, and $\sum b_n$ diverges by comparison with $\sum a_n$.
Prove every odd integer is the difference of two squares I know that I should use the definition of an odd integer ($2k+1$), but that's about it. Thanks in advance!
Eric and orlandpm already showed how this works for consecutive squares, so this is just to show how you can arrive at that conclusion just using the equations. So let the difference of two squares be $A^2-B^2$ and odd numbers be, as you mentioned, $2k+1$. This gives you $A^2-B^2=2k+1$. Now you can add $B^2$ to both sides to get $A^2=B^2+2k+1$. Since $B$ and $k$ are both just constants, they could be equal, so assume $B=k$ to get $A^2=k^2+2k+1$. The second half of this equation is just $(k+1)^2$, so $A^2=(k+1)^2$, giving $A = ±(k+1)$, so for any odd number $2k+1$, $(k+1)^2-k^2=2k+1$.
Construction of an increasing function from a general function Supposing $f: [0,\infty) \to [0,\infty)$. The goal is to make an increasing function from $f$ using the following rule:- If $t_1 \leq t_2$ and $f(t_1) > f(t_2)$ then change the value of $f(t_1)$ to $f(t_2)$. After this change, we have $f(t_1) = f(t_2)$. Let $g$ be the function resulting from applying the above rule for all $t_1,t_2$ recursively (recursively because if $f(t_2)$ changes then the value of $f(t_1)$ needs to be re-computed) Is it correct to treat $g$ as a well defined (increasing) function? Thanks, Phanindra
Consider the function $f(x)=1/x$ if $x>0$, with also $f(0)=0$. Then for example $f(1)$ will, for any $n>1$, get changed to $n$ on considering that $f(1/n)=n>f(1)=1$. Once this is done there will still be plenty of other $m>1$ for which $f(1/m)=m$ where $m>n$, so that $f(1)$ will have to be changed again from its present value of $n$ to the new value $m>n$. In this way, for this example, there will not be a finite value for $f(1)$ as the process is iterated, and the resulting function will not be defined at $x=1$. EDIT: As mkl points out in a comment, the interpretation in the above example has the construction backward. When $f(a)>f(b)$ where $a<b$ the jvp construction is to replace $f(a)$ by the "later value" $f(b)$. In this version there is no problem with infinite values occurring, as a value of $f(x)$ is only decreased during the construction, and the decreasing is bounded below by $0$ because the original $f$ is nonnegative. In fact, if $g(x)$ denotes the constructed function, and if we interpret the "iterative procedure" in a reasonable way, it seems one has $$g(x)=\inf \{f(t):t \ge x \},$$ which is a nondecreasing function for any given $f(x)$. Note that Stefan made exactly this suggestion.
How do we prove that $f(x)$ has no integer roots, if $f(x)$ is a polynomial with integer coefficients and $f(2) = 3$ and $f(7) = -5$? I've been thinking and trying to solve this problem for quite some time (like a month or so), but haven't achieved any success so far, so I finally decided to post it here. Here is my problem: If $f(x)$ is a polynomial with integer coefficients and $f(2) = 3$ and $f(7) = -5$, then prove that $f(x)$ has no integer roots. All I can think is that if we want to prove * *that if $f(x)$ has no integer roots, then by the integer root theorem its coefficient of highest power will not be equal to 1, but how can I use this fact (that I don't know)? *How to make use of the given data that $f(2) = 3$ and $f(7) = -5$? *Assuming $f(x)$ to be a polynomial of degree $n$ and replacing $x$ with $2$ and $7$ and trying to make use of the given data only creates a mess. Now, if someone could tell me how to approach these types of problems other than giving a few hints on how to solve this particular problem, I would greatly appreciate his/her help.
Let's define a new polynomial by $g(x)=f(x+2)$. Then we are told $g(0)=3, g(5)=-5$ and $g$ will have integer roots if and only if $f$ does. We can see that the constant term of $g$ is $3$. Because the coefficients are integers, when we evaluate $g(5)$, we get terms that are multiples of $5$ plus the constant term $3$, so $g(5)$ must equal $3 \pmod 5$ Therefore there is no polynomial that meets the requirement. As the antecedent is false, the implication is true. This is an example of the statement that for all polynomials $p(x)$ with integer coefficients, $a,b \in \mathbb Z \implies (b-a) | p(b)-p(a)$
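As a side remark, the divisibility fact quoted at the end has a one-line verification: writing $p(x)=\sum_k c_k x^k$ with $c_k\in\mathbb Z$, $$p(b)-p(a)=\sum_k c_k\,(b^k-a^k)\quad\text{and}\quad b^k-a^k=(b-a)\left(b^{k-1}+b^{k-2}a+\dots+a^{k-1}\right),$$ so $(b-a)\mid p(b)-p(a)$.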
Given real numbers: define integers? I have only a basic understanding of mathematics, and I was wondering and could not find a satisfying answer to the following: Integer numbers are just special cases (a subset) of real numbers. Imagine a world where you know only real numbers. How are integers defined using mathematical operations? Knowing only the set of complex numbers $a + bi$, I could define real numbers as complex numbers where $b = 0$. Knowing only the set of real numbers, I would have no idea how to define the set of integer numbers. While searching for an answer, most definitions of integer numbers talk about real numbers that don't have a fractional part in their notation. Although correct, this talks about notation, assumes that we know about integers already (the part left of the decimal separator), and it does not use any mathematical operations for the definition. Do we even know what integer numbers are, mathematically speaking?
How about the values (image) of $$\lfloor x\rfloor:=x-\frac{1}{2}+\frac{1}{\pi}\sum_{k=1}^\infty\frac{\sin(2\pi k x)}{k}$$ (this identity holds for non-integer $x$; at integer $x$ every $\sin(2\pi k x)$ vanishes and the right-hand side is $x-\frac12$). But this is nonsense; we sum over the positive integers, and as such, we can just define the integers as $$x_0:=0\\x_{k+1}=x_k+1\\ x_{-k}=-x_k$$
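A numerical check of the series formula away from the integers (convergence is slow, so this is only an approximation; the number of terms is arbitrary):

```python
import math

def floor_series(x, terms=20_000):
    s = sum(math.sin(2 * math.pi * k * x) / k for k in range(1, terms + 1))
    return x - 0.5 + s / math.pi

for x in [0.3, 1.7, -1.2]:
    print(x, floor_series(x), math.floor(x))
```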
Circle geometry: nonparallel tangent and secant problem If secant and the tangent of a circle intersect at a point outside the circle then prove that the area of the rectangle formed by the two line segments corresponding to the secant is equal to the area of the square formed by the line segment corresponding to the tangent I find this question highly confusing. I do not know what this means. If you could please explain that to me and solve it if possible.
Others have answered this, but here is a source of further information: http://en.wikipedia.org/wiki/Power_of_a_point Here's a problem in which the result is relied on: http://en.wikipedia.org/wiki/Regiomontanus%27_angle_maximization_problem#Solution_by_elementary_geometry The result goes all the way back (23 centuries) to Euclid (the first human who ever lived, with the exception of those who didn't write books on geometry that remain famous down to the present day): http://aleph0.clarku.edu/~djoyce/java/elements/bookIII/propIII36.html
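For completeness, here is the statement the question is asking about, in the usual notation (the labels are only for illustration): if $P$ is a point outside the circle, the tangent from $P$ touches the circle at $T$, and a secant through $P$ meets the circle at $A$ and $B$, then $$PA\cdot PB = PT^2,$$ i.e. the rectangle with sides $PA$ and $PB$ has the same area as the square with side $PT$.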
Game theory: Nash equilibrium in asymmetric payoff matrix I have a utility function describing the desirability of an outcome state. I weigh the expected utility with the probability of the outcome state occurring. I find the expected utility of an action, a, with $EU(a) = \sum\limits_{s'} P(Result(a) = s' | s)U(s')$ where Result(a) denotes the outcome state after executing a. There is no global set of actions; the sets of actions available to the agents are not identical. Player1 / Player2 | Action C | Action D | ----------------------------------------------------- Action A | (500,-500) | (-1000,1000) | ----------------------------------------------------- Action B | (-5,-5) | ** (200,20) ** | ----------------------------------------------------- Is this a valid approach? All examples of Nash equilibria I can find use identical action sets for both agents.
Game theory is a set of concepts aimed at decision making in situations of competition and conflict (as well as of cooperation and interdependence) under specified rules. Game theory employs games of strategy (such as chess) but not of chance (such as rolling a die). A strategic game represents a situation where two or more participants are faced with choices of action, by which each may gain or lose, depending on what others choose to do or not to do. The final outcome of a game, therefore, is determined jointly by the strategies chosen by all participants. http://en.docsity.com/answers/68695/what-type-of-study-is-the-game-theory
For what values of $a$ does this improper integral converge? $$\text{Let}\;\; I=\int_{0}^{+\infty}{x^{\large\frac{4a}{3}}}\arctan\left(\frac{\sqrt{x}}{1+x^a}\right)\,\mathrm{d}x.$$ I need to find all $a$ such that $I$ converges.
Hint 1: Near $x=0$, $\arctan(x)\sim x$ whereas near $x=+\infty$, $\arctan(x)\sim\pi/2$. Hint 2: Near $x=0$, consider $a\ge0$ and $a\lt0$. Near $x=+\infty$, consider $a\ge\frac12$ and $a\lt\frac12$.
Integers that satisfy $a^3= b^2 + 4$ Well, here's my question: Are there any integers $a$ and $b$ that satisfy the equation $b^2+4=a^3$, such that $a$ and $b$ are coprime? I've already found the case where $b=11$ and $a=5$, but other than that? And if there do exist other cases, how would I find them? And if not, how would I prove so? Thanks in advance. :)
$a=5, b=11$ is one satisfying it. I don't think this is the only pair.
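A brute-force search over small $a$ (purely illustrative; it says nothing about larger solutions):

```python
from math import isqrt, gcd

# look for integer solutions of b^2 + 4 = a^3 with 1 <= a < 10000
for a in range(1, 10_000):
    b2 = a**3 - 4
    if b2 >= 0:
        b = isqrt(b2)
        if b * b == b2:
            print(a, b, "coprime" if gcd(a, b) == 1 else "not coprime")
```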
How many subsets of $\mathbb{N}$ have the same cardinality as $\mathbb{N}$? How many subsets of $\mathbb{N}$ have the same cardinality as $\mathbb{N}$? I realize that any of the class of functions $f:x\to (n\cdot x)$ gives a bijection between $\mathbb{N}$ and the subset of $\mathbb{N}$ whose members are the multiples of $n$. So, we have at least a countable infinity of sets which have the same cardinality as $\mathbb{N}$. But we could remove a single element from any countably infinite subset of the natural numbers and we would still end up with a countably infinite subset of $\mathbb{N}$. So (the reasoning here doesn't seem quite right to me), do there exist uncountably many infinite subsets of $\mathbb{N}$ with the same cardinality as $\mathbb{N}$? Also, is the class of all bijections between $\mathbb{N}$ and a given countably infinite subset uncountably infinite or countably infinite?
As great answers have been given already, I'd merely like to add an easy way to show that the set of finite subsets of $\mathbb{N}$ is countable: Observe that $$\operatorname{Fin}(\mathbb{N}) = \bigcup_{n\in\mathbb{N}}\left\{ A\subseteq\mathbb{N}: \max(A) = n \right\},$$ which is a countable union of finite sets as for each $n\in\mathbb{N}$ there certainly are less than $2^n = \left|\mathcal{P}(\{1,\ldots,n \})\right|$ subsets of $\mathbb{N}$ whose largest element is $n$. Hence, $\operatorname{Fin}(\mathbb{N})$ is countable itself. From here on, you can use Asaf's argument to show that the set of infinite subsets of $\mathbb{N}$ (which are precisely the sets with the same cardinality as $\mathbb{N}$) must be uncountable.
Fixed point in a continuous map Possible Duplicate: Periodic orbits Suppose that $f$ is a continuous map from $\mathbb R$ to $\mathbb R$, which satisfies $f(f(x)) = x$ for each $x \in \mathbb{R}$. Does $f$ necessarily have a fixed point?
Here's a somewhat simpler (in my opinion) argument. It's essentially the answer in Amr's link given in the first comment to the question, but simplified a bit to treat just the present question, not a generalization. Start with any $a\in\mathbb R$. If we're very lucky, $f(a)=a$ and we're done. If we're not that lucky, let $b=f(a)$; by assumption $a=f(f(a))=f(b)$. Since we weren't lucky, $a\neq b$. Suppose for a moment that $a<b$. Then the function $g$ defined by $g(x)=f(x)-x$ is positive at $a$ and negative at $b$, so, by the intermediate value theorem, it's zero at some $c$ (between $a$ and $b$). That means $f(c)=c$, and we have the desired fixed point, under the assumption $a<b$. The other possibility, $b<a$, is handled by the same argument with the roles of $a$ and $b$ interchanged. As user1551 noted, we need $f(f(x))=x$ for only a single $x$, since we can then take that $x$ as the $a$ in the argument above.
A Few Questions Concerning Vectors In my textbook, they provide a theorem to calculate the angle between two vectors: $\cos\theta = \Large\frac{\vec{u} \cdot \vec{v}}{\|\vec{u}\|\|\vec{v}\|}$ My questions are, why does the angle have to be $0 \le \theta \le \pi$; and why do the vectors have to be in standard position? Also, on the next page, the author writes, "the zero vector is orthogonal to every vector because $0 \cdot \vec{u} = 0$;" why is that so?
Given two points $x$ and $y$ on the unit sphere $S^{n-1}\subset{\mathbb R}^n$ the spherical distance between them is the length of the shortest arc on $S^{n-1}$ connecting $x$ and $y$. The shortest arc obviously lies in the plane spanned by $x$ and $y$, and drawing a figure of this plane one sees that the length $\phi$ of the arc in question can be computed by means of the scalar product as $$\phi =\arccos(x\cdot y)\ \ \in[0,\pi]\ .$$ This length is then also called the angle between $x$ and $y$. When $u$ and $v$ are arbitrary nonzero vectors in ${\mathbb R}^n$ then $u':={u\over |u|}$ and $v':={v\over |v|}$ lie on $S^{n-1}$. Geometrical intuition tells us that $\angle(u,v)=\angle(u',v')$. Therefore one defines the angle $\phi\in[0,\pi]$ between $u$ and $v$ as $$\phi:=\arccos{u\cdot v\over|u|\>|v|}\ .$$
Proof that $\sqrt{5}$ is irrational In my textbook the following proof is given for the fact that $\sqrt{5}$ is irrational: $ x = \frac{p}{q}$ and $x^2 = 5$. We choose $p$ and $q$ so that they have no common factors, so we know that $p$ and $q$ aren't both divisible by $5$. $$\left(\dfrac{p}{q}\right)^2 = 5\\ \text{ so } p^2=5q^2$$ This means that $p^2$ is divisible by 5. But this also means that $p$ is divisible by 5. $p=5k$, so $p^2=25k^2$ and so $q^2=5k^2$. This means that both $q$ and $p$ are divisible by 5, and since that can't be the case, we've proven that $\sqrt{5}$ is irrational. What bothers me with this proof is the beginning, in which we choose a $p$ and $q$ so that they haven't got a common factor. How can we know for sure that there exists a $p$ and $q$ with no common factors such that $x=\dfrac{p}{q} = \sqrt{5}$? Because it seems that step could be used for every number. Edit: I found out what started my confusion: I thought that any fraction with numerator 1 had a common factor, since every integer can be divided by 1. This has given me another question: Are confusions like this the reason why 1 is not considered a prime number?
Exactly what MSEoris said: you can always reduce a fraction to the point where the numerator and denominator have no common factors. If $\frac{p}{q}$ had a common factor $n>1$, then $nk_0 = p$ and $nk_1 = q$, so $\frac{p}{q} = \frac{nk_0}{nk_1} = \frac{k_0}{k_1}$. Now if $k_0, k_1$ have a common factor, do the same again; since the positive denominator strictly decreases at each step, this can only happen finitely many times, so you eventually get a fraction with no common factor.
Convergence in $L^{p}$ spaces Set $$f_n= n1_{[0,1/n]}$$ For $0<p\le\infty$, one has that $\{f_n\}_n$ is in $L^p(\mathbb R)$. For which values of $p$ is $\{f_n\}_n$ a Cauchy sequence in $L^p$? Justify your answer. This was a Comp question I was not able to answer. I don't mind getting every detail of the proof. What I know for sure is that for $p=1$, $\{f_n\}_n$ is not Cauchy in $L^p$: the integral of $f_n$ equals 1 no matter the value of $n$, while $f_n \to 0$ almost everywhere, so the sequence is not convergent in $L^1$, and hence is not Cauchy. I do not know how to be more rigorous on this problem. Any help much appreciated.
Note that, we have $$\Vert f_{2n} -f_n\Vert_p^p = n^p \left(\dfrac1n - \dfrac1{2n}\right) + (2n-n)^p \dfrac1{2n} = \dfrac{n^{p-1}}2 + \dfrac{n^{p-1}}2 \geq 1 \,\,\,\,\,\,\, \forall p \geq 1$$ For $p<1$, and $m>n$ we have $$\Vert f_m - f_n\Vert_p^p = n^p \left(\dfrac1n - \dfrac1m\right) + (m-n)^p \dfrac1m < n^p \dfrac1n + \dfrac{m^p}m = n^{p-1} + m^{p-1} < 2 n^{p-1} \to 0$$ EDIT Note that \begin{align} f_m(x) & =\begin{cases} m & x \in [0,1/m]\\ 0 & \text{else}\end{cases}\\ f_n(x) & =\begin{cases} n & x \in [0,1/n]\\ 0 & \text{else}\end{cases} \end{align} Since $m>n$, we have $$f_m(x) - f_n(x) = \begin{cases} (m-n) & x \in [0,1/m]\\ -n & x \in [1/m,1/n]\\ 0 & \text{else}\end{cases}$$ Hence, $$\vert f_m(x) - f_n(x) \vert^p = \begin{cases} (m-n)^p & x \in [0,1/m]\\ n^p & x \in [1/m,1/n]\\ 0 & \text{else}\end{cases}$$ Hence, $$\int \vert f_m(x) - f_n(x) \vert^p d \mu = (m-n)^p \times \dfrac1m + n^p \left(\dfrac1n - \dfrac1m\right)$$
Sums of two probability density functions If the weighted sum of 2 probability density functions is also a probability density function, then what is the relationship between the random variables associated with these 3 probability density functions?
I think you might mean, "What happens if I'm not sure which of two distributions a random variable will be drawn from?" That is one situation where you need to take a pointwise weighted sum of two PDFs, where the weights have to add to 1. Suppose you have three coins in your pocket, two fair coins and one which lands as 'Heads' two thirds of the time. You draw one at random from your pocket and flip it. Then the PMF is \begin{align*}f(x)&=2/3\times\left.\cases{1/2, x=\text{'Heads'}\\1/2,x=\text{'Tails'}}\right\}+1/3\times\left.\cases{2/3, x=\text{'Heads'}\\1/3, x=\text{'Tails'}}\right\}\\&=\cases{5/9, x=\text{'Heads'},\\4/9,x=\text{'Tails'}.}\end{align*} The formula is simple: for any value of x, add the values of the PMFs at that value of x, weighted appropriately. If the sum of the weights is 1, then the sum of the values of the weighted sum of your PMFs will be 1, so the weighted sum of your PMFs will be a probability distribution. The same principle applies when adding continuous PDFs. Suppose you have a large group of geese where the female geese have body weights following an N(3,1) distribution and the male geese have weights following an N(4,1) distribution. You toss your unfair coin, and if, with probability 2/3, it is heads, you choose a random female goose, and otherwise choose a random male goose; then the weight of the goose has PDF $f(x)=\frac{2}{3}\times \frac{1}{\sqrt{2 \pi}}e^{-\frac{1}{2}(x-3)^2}+\frac{1}{3}\times \frac{1}{\sqrt{2 \pi}}e^{-\frac{1}{2}(x-4)^2}.$ You can even integrate over infinitely many PDFs, in which case your weight function is another PDF. For example, suppose you have a robot with an Exp(1) life span programmed to move left or right at a fixed speed with equal probability in each arbitrarily short time interval, independent of its movement in every other time interval (this is called Brownian motion). Its position after time $t$ follows a N(0,t) distribution, so its position at the end of its life span has PDF \begin{align*}f(x)=\int_0^\infty e^{-\sigma^2} \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{1}{2}x^2/\sigma^2} \,\text{d}\sigma^2.\end{align*} This is a very open field. Play with it yourself for a while and see where it takes you. References Taleb, Nassim Nicholas (2007), The Black Swan. Taleb, Nassim Nicholas (2013), Collected Scientific Papers, https://docs.google.com/file/d/0B_31K_MP92hUNjljYjIyMzgtZTBmNS00MGMwLWIxNmQtYjMyNDFiYjY0MTJl/edit?hl=en_GB
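A small simulation of the goose example (sampling the mixture directly; the numbers are the ones used above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
is_female = rng.random(n) < 2 / 3
weights = np.where(is_female, rng.normal(3, 1, n), rng.normal(4, 1, n))
print(weights.mean(), 2 / 3 * 3 + 1 / 3 * 4)   # both close to 10/3
```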
Covariance of Brownian Bridge? I am confused by this question. We all know that the Brownian Bridge can also be expressed as: $$Y_t=bt+(1-t)\int_a^b \! \frac{1}{1-s} \, \mathrm{d} B_s $$ where the Brownian motion will end at $b$ at $t = 1$ almost surely. Hence I can write it as: $$Y_t = bt + I(t)$$ where $I(t)$ is a stochastic integral, and in this case it is a martingale. Since it is a martingale, the covariance can be calculated as: \begin{array} {lcl} E[Y_t Y_s] & = & b^2 ts + E[I(t)I(s)] \\ & = & b^2 ts + E\{(I(t)-I(s))* I(s) \} + E [I(s)^2] \\& =&b^2 ts + Var[I(s)] + b^2s^2 \\ & = & b^2 ts + b^2 s^2 + s(1-s) \end{array} Hence the variance is just $ b^2 s^2 + s(1-s)$. However, I read online that the covariance of the Brownian Bridge should be $s(1-t)$. I am really confused. Please advise. Thanks so much!
I think the given representation of the Brownian Bridge is not correct. It should read $$Y_t = a \cdot (1-t) + b \cdot t + (1-t) \cdot \underbrace{\int_0^t \frac{1}{1-s} \, dB_s}_{=:I_t} \tag{1}$$ instead. Moreover, the covariance is defined as $\mathbb{E}((Y_t-\mathbb{E}Y_t) \cdot (Y_s-\mathbb{E}Y_s))$, so you forgot to subtract the expectation value of $Y$ (note that $\mathbb{E}Y_t \not= 0$). Here is a proof using the representation given in $(1)$: $$\begin{align} \mathbb{E}Y_t &= a \cdot (1-t) + b \cdot t \\ \Rightarrow \text{cov}(Y_s,Y_t) &= \mathbb{E}((Y_t-\mathbb{E}Y_t) \cdot (Y_s-\mathbb{E}Y_s)) = (1-t) \cdot (1-s) \cdot \mathbb{E}(I_t \cdot I_s) \\ &= (1-t) \cdot (1-s) \cdot \underbrace{\mathbb{E}((I_t-I_s) \cdot I_s)}_{\mathbb{E}(I_t-I_s) \cdot \mathbb{E}I_s = 0} + (1-t) \cdot (1-s) \mathbb{E}(I_s^2) \tag{2} \end{align}$$ for $s \leq t$ where we used the independence of $I_t-I_s$ and $I_s$. By Itô's isometry, we obtain $$\mathbb{E}(I_s^2) = \int_0^s \frac{1}{(1-r)^2} \, dr = \frac{1}{1-s} -1.$$ Thus we conclude from $(2)$: $$\text{cov}(Y_t,Y_s) = (1-t) \cdot (1-s) \cdot \left( \frac{1}{1-s}-1 \right) = s-t \cdot s = s \cdot (1-t).$$
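A Monte Carlo check of the formula (this uses the equivalent-in-law construction $Y_t = B_t - tB_1$ of the bridge from $0$ to $0$; the covariance does not depend on the endpoints $a,b$, and the grid size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 20_000, 200
dt = 1.0 / n_steps
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(dB, axis=1)                    # Brownian motion on the grid
t = np.arange(1, n_steps + 1) * dt
Y = B - t * B[:, [-1]]                       # Brownian bridge from 0 to 0
s_idx, t_idx = 59, 139                       # s = 0.3, t = 0.7
print(np.mean(Y[:, s_idx] * Y[:, t_idx]), 0.3 * (1 - 0.7))   # both ≈ 0.09
```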
Examples of non-isomorphic abelian groups which are part of exact sequences Suppose $A_1$, $A_2$, $A_3$ and $B_1$, $B_2$, and $B_3$ are two short exact sequences of abelian groups. I am looking for two such short sequences where $A_1$ and $B_1$ is isomorphic and $A_2$ and $B_2$ are isomorphic but $A_3$ and $B_3$ are not. (Similarly I would like examples in which two of the other pairs are isomorphic but the third pair are not, etc)
For the first pair take $$0 \longrightarrow \mathbb{Z} \stackrel{2}{\longrightarrow} \mathbb{Z} \longrightarrow \mathbb{Z} / 2\mathbb{Z} \longrightarrow 0$$ and $$0 \longrightarrow \mathbb{Z} \stackrel{3}{\longrightarrow} \mathbb{Z} \longrightarrow \mathbb{Z} / 3\mathbb{Z} \longrightarrow 0.$$ For sequences with non-isomorphic first pairs you can use an infinite direct sum of $\mathbb{Z}$'s and include one or two copies of $\mathbb{Z}$. The quotient will be the infinite direct sum again so the second and third pairs are isomorphic but the first pair will be non-isomorphic. Finally for non isomorphic central pairs take $$0 \longrightarrow \mathbb{Z} / 2\mathbb{Z} \longrightarrow \mathbb{Z} / 2\mathbb{Z} \times \mathbb{Z} / 2\mathbb{Z} \longrightarrow \mathbb{Z} / 2\mathbb{Z} \longrightarrow 0$$ and $$0 \longrightarrow \mathbb{Z} / 2\mathbb{Z} \stackrel{2}{\longrightarrow} \mathbb{Z} / 4\mathbb{Z} \longrightarrow \mathbb{Z} / 2\mathbb{Z} \longrightarrow 0.$$
Integrating $\int_0^{\infty} u^n e^{-u} du $ I have to work out the integral of $$ I(n):=\int_0^{\infty} u^n e^{-u} du $$ Somehow, the answer goes to $$ I(n) = nI(n - 1)$$ and then using the Gamma function, this gives $I(n) = n!$ What I do is this: $$ I(n) = \int_0^{\infty} u^n e^{-u} du $$ Integrating by parts gives $$ I(n) = -u^ne^{-u} + n \int u^{n - 1}e^{-u} $$ Clearly the stuff in the last bit of the integral is now $I(n - 1)$, but I don't see how using the limits gives you the answer. I get this $$ I(n) = \left( \frac{-(\infty)^n}{e^{\infty}} + nI(n - 1) \right) - \left( \frac{-(0)^n}{e^{0}} + nI(n - 1) \right) $$ As exponential is "better" than powers, or whatever its called, I get $$ I(n) = (0 + I(n - 1)) + ( 0 + nI(n - 1)) = 2nI(n - 1)$$ Does the constant just not matter in this case? Also, I do I use the Gamma function from here? How do I notice that it comes into play? Nothing tells me that $\Gamma(n) = (n - 1)!$, or does it?
You have $$ I(n) = \lim_{u\to +\infty}u^ne^{-u}-0^ne^{-0}+nI(n-1) $$ But $$\lim_{u\to +\infty}u^ne^{-u}=\lim_{u\to +\infty}\frac{u^n}{e^{u}}=...=0$$ and so $$ I(n) =0-0+nI(n-1)=nI(n-1) $$
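A quick numerical confirmation that the recursion lands on $n!$ (assuming SciPy is available for the quadrature; the value of $n$ is arbitrary):

```python
from math import exp, factorial, gamma
from scipy.integrate import quad

n = 5
val, _ = quad(lambda u: u**n * exp(-u), 0, float('inf'))
print(val, factorial(n), gamma(n + 1))   # all approximately 120
```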
The way into set theory Given that I am going through Munkres's book on topology , I had to give a glance at the topics included in the first chapter like that of Axiom of choice, The maximum principle, the equivalence of the former and the later etc. Given all this I doubt that I know enough of set theory , or more precisely and suiting to my business , Lack a good deal of rigor in my ingredients. I wanted to know whether research is conducted on set theory as an independent branch. Is there any book that covers all about set theory, like the axioms, the axiom of choice and other advanced topics in it. I have heard about the Bourbaki book, but am helpless at getting any soft copy of that book.
I'd recommend "Naive Set Theory" by Halmos. It is a fun read, in a leisurely style, starts from the axioms and prove the Axiom of Choice. Also, see this XKCD. http://xkcd.com/982/
How can I show the Coercivity of this function? Let $S$ be the set of real positive matrices, $\lambda>0$ and $f:S\rightarrow\mathbb{R}$ defined by $$f(X)=\langle X,X\rangle-\lambda\log\det(X) $$ where $\langle X,X\rangle=\operatorname{trace}(X^\top X)$. How can one show that $f$ is coercive?
Let $\mu = \max \{\det X : \langle X,X\rangle=1, X\ge 0\}$. The homogeneity of determinant implies that $\log \det X\le \log \mu+\frac{n}{2}\log \langle X,X\rangle$ for all $X\ge 0$. Therefore, $$f(X)\ge \langle X,X\rangle -\lambda \log \mu - \frac{\lambda n}{2}\log \langle X,X\rangle $$ which is $\ge \frac12 \langle X,X\rangle$ when $\langle X,X\rangle$ is large enough.
Fun but serious mathematics books to gift advanced undergraduates. I am looking for fun, interesting mathematics textbooks which would make good studious holiday gifts for advanced mathematics undergraduates or beginning graduate students. They should be serious but also readable. In particular, I am looking for readable books on more obscure topics not covered in a standard undergraduate curriculum which students may not have previously heard of or thought to study. Some examples of suggestions I've liked so far: * *On Numbers and Games, by John Conway. *Groups, Graphs and Trees: An Introduction to the Geometry of Infinite Groups, by John Meier. *Ramsey Theory on the Integers, by Bruce Landman. I am not looking for pop math books, Gödel, Escher, Bach, or anything of that nature. I am also not looking for books on 'core' subjects unless the content is restricted to a subdiscipline which is not commonly studied by undergrads (e.g., Finite Group Theory by Isaacs would be good, but Abstract Algebra by Dummit and Foote would not).
Modern Graph theory by Bela Bollobas counts as fun if they're interested in doing exercises which can be approached by clever intuitive arguments; it's packed full of them.
Fun but serious mathematics books to gift advanced undergraduates. I am looking for fun, interesting mathematics textbooks which would make good studious holiday gifts for advanced mathematics undergraduates or beginning graduate students. They should be serious but also readable. In particular, I am looking for readable books on more obscure topics not covered in a standard undergraduate curriculum which students may not have previously heard of or thought to study. Some examples of suggestions I've liked so far: * *On Numbers and Games, by John Conway. *Groups, Graphs and Trees: An Introduction to the Geometry of Infinite Groups, by John Meier. *Ramsey Theory on the Integers, by Bruce Landman. I am not looking for pop math books, Gödel, Escher, Bach, or anything of that nature. I am also not looking for books on 'core' subjects unless the content is restricted to a subdiscipline which is not commonly studied by undergrads (e.g., Finite Group Theory by Isaacs would be good, but Abstract Algebra by Dummit and Foote would not).
Dissections: Plane & Fancy by Frederickson, and the second side of the same coin: The Banach--Tarski Paradox by Tomkowicz and Wagon.
solution for equation For $a^2+b^2=c^2$ with $a, b, c \in \mathbb{Z}$: do we know whether the number of solutions is finite or infinite? We know $a=3, b=4, c=5$ is one of the solutions.
Take $m,n$ to be any two positive integers such that $m < n$, and set $$a = n^2 - m^2,\;\; b = 2mn,\;\;c = n^2 + m^2.$$ Then $a^2+b^2=c^2$. Since there are infinitely many such pairs $(m,n)$, there are infinitely many solutions.
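A few triples generated this way (a quick check of the identity as well):

```python
# generate Pythagorean triples from the parametrization above
for n in range(2, 6):
    for m in range(1, n):
        a, b, c = n*n - m*m, 2*m*n, n*n + m*m
        assert a*a + b*b == c*c
        print(a, b, c)
```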
Sets of second category-topology A set is of first category if it is a countable union of nowhere dense sets, and otherwise it is of second category. How can we prove that the irrational numbers are of second category and the rationals are of first category?
$\mathbb Q = \bigcup_{q \in \mathbb Q} \{ q \}$, and each singleton is nowhere dense, hence the rationals are a countable union of nowhere dense sets. Assume the irrationals are also a countable union of nowhere dense sets: $I = \bigcup_{n \in \mathbb N} U_n$. Then $\mathbb R = \bigcup_{q \in \mathbb Q} \{ q \} \cup \bigcup_{n \in \mathbb N} U_n$ is also a countable union of nowhere dense sets, which contradicts the Baire category theorem ($\mathbb R$ is a complete metric space, so it is not a countable union of nowhere dense sets). Hence the irrationals are of second category.
Mind-blowing mathematics experiments We've all heard of some mind-blowing phenomena involving the sciences, such as the double-slit experiment. I was wondering if there are similair experiments or phenomena which seem very counter-intuitive but can be explained using mathematics? I mean things such as the Monty Hall problem. I know it is not exactly an experiment or phenomenon (you could say a thought-experiment), but things along the line of this (so with connections to real life). I have stumbled across this interesting question, and this are the type of phenomena I have in mind. This question however only discusses differential geometry.
If you let $a_1=a_2=a$, and $a_{n+1}=20a_n-19a_{n-1}$ for $n=2,3,\dots$, then it's obvious that you just get the sequence $a,a,a,\dots$. But if you try this on a calculator with, say, $a=\pi$, you find that after a few iterations you start getting very far away from $\pi$. It's a good experiment/demonstration on accumulation of round-off error.
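This is easy to reproduce; a short Python version of the experiment (in exact arithmetic every printed value would be $\pi$, but the rounding error gets amplified at every step):

```python
import math

a_prev, a_curr = math.pi, math.pi
for n in range(2, 25):
    a_prev, a_curr = a_curr, 20 * a_curr - 19 * a_prev
    print(n, a_curr)
```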
Why is 'abuse of notation' tolerated? I've personally tripped up on a few concepts that came down to an abuse of notation, and I've read of plenty more on stack exchange. It seems to all be forgiven with a wave of the hand. Why do we tolerate it at all? I understand if later on in one's studies if things are assumed to be in place, but there are plenty of textbooks out there assuming certain things are known before teaching them. This is a very soft question, but I think it ought to be asked.
When one writes/talks mathematics, in 99.99% of the cases the intended recipient of what one writes is a human, and humans are amazing machines: they are capable of using context, guessing, and all sorts of other information when decoding what we write/say. It is generally immensely more efficient to take advantage of this.
Exactness of Colimits Let $\mathcal A$ be a cocomplete abelian category, let $X$ be an object of $\mathcal A$ and let $I$ be a set. Let $\{ X_i \xrightarrow{f_i} X\}_{i \in I}$ be a set of subobjects. This means we get an exact sequence $$ 0 \longrightarrow X_i \xrightarrow{f_i} X \xrightarrow{q_i}X/X_i \longrightarrow 0 $$ for each $i \in I$. It is supposed to follow (Lemma 5 in the wonderful answer to this question) that there is an exact sequence $$ \operatorname{colim} X_i \longrightarrow X \longrightarrow\operatorname{colim} X/X_i \longrightarrow 0 $$ from the fact that the colimit functor preserves colimits (and in particular, cokernels). However I do not see why this follows. The family of exact sequences I mentioned above is equivalent to specifying the exact sequence $$ 0 \longrightarrow X_\bullet \xrightarrow{f} \Delta X \xrightarrow{q} X / X_\bullet \longrightarrow 0 $$ in the functor category $[I, \mathcal A]$, where $\Delta X$ is the constant functor sending everything to $X$. However applying the colimit functor to this sequence does not give the one we want, because the colimit of $\Delta X$ is the $I$th power of $X$ since $I$ is discrete. Can anybody help with this? Thank you and Merry Christmas in advance!
I think you may have misquoted the question, because if $I$ is (as you wrote) merely a set, then a colimit over it is just a direct sum. Anyway, let me point out why "the colimit functor preserves colimits (and in particular cokernels)" is relevant. Exactness of a sequence of the form $A\to B\to C\to0$ is equivalent to saying that $B\to C$ is the cokernel of $A\to B$. So the short exact sequence you began with contains some cokernel information (plus some irrelevant information thanks to the $0$ at the left end), and what you're trying to prove is also cokernel information. The latter comes from the former by applying a colimit functor, provided colim$(X)$ is just $X$. That last proviso is why I think you've misquoted the question, since it won't be satisfied if $I$ is just a set (with 2 or more elements).
Extra-Challenging olympiad inequality question We have the set $\{X_1,X_2,X_3,\dotsc,X_n\}$. Given that $X_1+X_2+X_3+\dotsb +X_n = n$, prove that: $$\frac{X_1}{X_2} + \frac{X_2}{X_3} + \dotsb + \frac{X_{n-1}}{X_n} + \frac{X_n}{X_1} \leq \frac{4}{X_1X_2X_3\dotsm X_n} + n - 4$$ EDIT: yes, ${X_k>0}$ , forgot to mention :)
Let $$ \begin{eqnarray} L(x_1,\ldots, x_n) &=& \frac{x_1}{x_2} + \frac{x_2}{x_3} + \ldots + \frac{x_n}{x_1} \\ R(x_1,\ldots, x_n) &=& \frac{4}{x_1 x_2 \ldots x_n} + n - 4 \\ f(x_1,\ldots, x_n) &=& R(x_1,\ldots, x_n) - L(x_1,\ldots, x_n) \end{eqnarray} $$ The goal is to prove that $f(x_1,\ldots, x_n) \ge 0$ for all $n$, given $x_i > 0$ and $\Sigma_i x_i = n$. Proof [by complete induction] The base case $n = 1$ is trivial. Now we prove the inequality for $n \ge 2$ assuming it holds for all values below $n$. The sequence $s = x_1,\ldots, x_n$ is split into two subsequences: $s_\alpha = x_u,\ldots, x_{v-1}$ and $s_\beta = x_v, x_{v+1}, \ldots x_1 \ldots x_{u-1}$, where $1 \le u < v \le n$. Let $k = v-u$ such that the lengths of the sequences are $k$ and $n-k$. We write $\pi_\alpha, \pi_\beta$ for the products of the sequences, respectively, and $\sigma_\alpha, \sigma_\beta$ for their sums. With appropriate rescaling we get $f(\frac{k}{\sigma_\alpha} s_\alpha) \ge 0$ and $f(\frac{n-k}{\sigma_\beta} s_\beta) \ge 0$, by the induction hypothesis, as $1 \le k \le n-1$. It suffices now to show that $f(s) - f(\frac{k}{\sigma_\alpha} s_\alpha) - f(\frac{n-k}{\sigma_\beta} s_\beta) \ge 0$. Therefore we prove $R(s) - R(\frac{k}{\sigma_\alpha} s_\alpha) - R(\frac{n-k}{\sigma_\beta} s_\beta) \ge 0$ and $L(\frac{k}{\sigma_\alpha} s_\alpha) + L(\frac{n-k}{\sigma_\beta} s_\beta) - L(s)\ge 0$, in turn. $$ \begin{eqnarray} R(s) - R(\frac{k}{\sigma_\alpha} s_\alpha) &-& R(\frac{n-k}{\sigma_\beta} s_\beta) \\ &=& \frac{4}{\pi_\alpha \pi_\beta} + n - 4 - \left(\frac{4 \sigma_\alpha^k}{k^k \pi_\alpha} +k -4 \right) - \left(\frac{4 \sigma_\beta^{n-k}}{(n-k)^{n-k} \pi_\beta} + n -k -4 \right) \\ & =& \frac{4}{\pi_\alpha \pi_\beta} \left( 1 + \pi_\alpha \pi_\beta - \frac{ \sigma_\alpha^k}{k^k} \pi_\beta - \frac{\sigma_\beta^{n-k}}{(n-k)^{n-k}} \pi_\alpha \right) \\ & =& \frac{4}{\pi_\alpha \pi_\beta} \left( \left( \frac{ \sigma_\alpha^k}{k^k} - \pi_\alpha \right) \left(\frac{\sigma_\beta^{n-k}}{(n-k)^{n-k}} - \pi_\beta \right) + 1 - \frac{\sigma_\beta^{n-k}}{(n-k)^{n-k}} \frac{ \sigma_\alpha^k}{k^k} \right) \end{eqnarray} $$ Both factors in the first product are positive: using Jensen's inequality we get $\log(\frac{ \sigma_\alpha^k}{k^k}) = k \log (\frac{\sigma_\alpha}{k}) \ge \Sigma_{i=u}^{v-1} \log(x_i) = \log(\pi_\alpha)$, and similarly for the second factor. Furthermore we have $\log(\frac{\sigma_\beta^{n-k}}{(n-k)^{n-k}} \frac{ \sigma_\alpha^k}{k^k}) = n \left( \frac{n-k}{n} \log(\frac{\sigma_\beta}{n-k}) + \frac{k}{n} \log(\frac{ \sigma_\alpha}{k}) \right) \le n \log(\frac{\sigma_\beta}{n} + \frac{\sigma_\alpha}{n}) = 0$ again using Jensen's inequality. So the remaining terms are also positive. Concerning the L-part we have: $$ \begin{eqnarray} L(\frac{k}{\sigma_\alpha} s_\alpha) &+& L(\frac{n-k}{\sigma_\beta} s_\beta) - L(s) \\ &=& (\ldots + \frac{x_{v-1}}{x_u}) + (\ldots +\frac{x_{u-1}}{x_v}) - (\ldots + \frac{x_{v-1}}{x_v} + \ldots + \frac{x_{u-1}}{x_u} + \ldots) \\ &=& (x_{v-1} - x_{u-1}) (\frac{1}{x_u} - \frac{1}{x_v}) \end{eqnarray} $$ This is positive if $x_{v-1} \ge x_{u-1} \wedge x_{v} \ge x_{u}$, or $x_{v-1} \le x_{u-1} \wedge x_{v} \le x_{u}$. As we did not pose any constraints on $u$ and $v$ yet, it now remains to show that for any sequence one can always find $1 \le u < v \le n$ for which this is fulfilled. First, if $x_{i-1} \le x_i \le x_{i+1}$ for some $i$, or $x_{i-1} \ge x_i \ge x_{i+1}$ (for $n$ odd this is always the case), then we can simply choose $u=i-1, v=i$.
So now assume we have a "crown" of successive up and down transitions while looping through the sequence. If for some $i, j$ with $x_i \le x_{i+1}$ and $x_j \le x_{j+1}$, none of the intervals $[x_i, x_{i+1}]$ and $[x_j, x_{j+1}]$ contains the other completely, then appropriate $u$ and $v$ can be defined. So now assume that all "up-intervals" $[x_i, x_{i+1}]$ with $x_i \le x_{i+1}$ are strictly ordered (by containment). Looping through these up-intervals we must encounter a local maximum such that $[x_{i-2}, x_{i-1}] \subseteq [x_{i}, x_{i+1}] \supseteq [x_{i+2}, x_{i+3}]$. Hence $x_{i-1} \le x_{i+1}$ and $x_{i} \le x_{i+2}$, so with $u=i, v=i+2$ also this last case is covered.
Limit of a function whose values depend on $x$ being odd or even I couldn't find an answer through google or here, so i hope this isn't a duplicate. Let $f(x)$ be given by: $$ f(x) = \begin{cases} x & : x=2n\\ 1/x & : x=2n+1 \end{cases} $$ Find $\lim_{x \to \infty} f(x).$ The limit is different depending on $x$ being odd or even. So what's the limit of $f(x)$? Attempt: this limit doesn't exist because we have different values for $x \to \infty$ which could be either odd or even. My doubt is $\infty$ can't be both even and odd at the same time so one should apply. So what do you ladies/gents think? Also, when confronted with functions like these and one needs to know the limit to gain more information about the behavior of $f(x)$, how should the problem be attacked?
Your first statement following the word "attempt" has the correct intuition: "this limit doesn't exist because we have different values for" ... $\lim_{x\to \infty} f(x) $, which depends on x "which could be either odd or even." (So I'm assuming we are taking $x$ to be an integer, since the property of being "odd" or "even" must entail $x \in \mathbb{Z}$). The subsequent doubt you express originates from the erroneous conclusion that $\infty$ must be even or odd. It is neither. $\infty$ is not a number, not in the sense of it being odd or even, and not in the sense that we can evaluate $f(\infty)$! (We do not evaluate the $\lim_{x \to \infty} f(x)$ AT infinity, only as $x$ approaches infinity.) When taking the limit of $f(x)$ as $x \to \infty$, we look at what happens as $x$ approaches infinity, and as you observe, $x$ oscillates between odd and even values, so as $x \to \infty$, $f(x)$, as defined, also oscillates: at extremely large $x$, $f(x)$ oscillates between extremely large values and extremely small values (approaching $\infty$ when $x$ is even, and approaching zero when $x$ is odd). Hence the limit does not exist. Note: when $x$ goes to some finite number, say $a$, you may be tempted to simply "plug it in" to $f(x)$ when evaluating $\lim_{x\to a}f(x)$. You should be careful, though. This only works when $f(x)$ is continuous at that point $a$. Limits are really only about what happens as $x$ approaches an x-value, or infinity, not what $f(x)$ actually is at that value. This is important to remember.
$f$ continuous in $[a,b]$ and differentiable in $(a,b)$ without lateral derivatives at $a$ and $b$ Does anyone know an example of a real function $f$ continuous in $[a,b]$ and differentiable in $(a,b)$ such that the lateral derivatives $$ \lim_{h \to 0^{+}} \frac{f(a+h)- f(a)}{h} \quad \text{and} \quad \lim_{h \to 0^{-}} \frac{f(b+h)- f(b)}{h}$$ don't exist?
$$f(x) = \sqrt{x-a} + \sqrt{b-x} \,\,\,\,\,\, \forall x \in [a,b]$$
Integral $\int_{0}^{1}\ln x \, dx$ I have a question about the integral of $\ln x$. When I try to calculate the integral of $\ln x$ from 0 to 1, I always get the following result. * *$\int_0^1 \ln x = x(\ln x -1) |_0^1 = 1(\ln 1 -1) - 0 (\ln 0 -1)$ Is the second part of the calculation indeterminate or 0? What am I doing wrong? Thanks Joachim G.
Looking sideways at the graph of $\log(x)$ you can also see that $$\int_0^1\log(x)dx = -\int_0^\infty e^{-x}dx = -1.$$
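To address the boundary term in the question directly (a short supplementary remark): it is not indeterminate, because $$\lim_{x\to 0^+}x(\ln x-1)=\lim_{x\to0^+}x\ln x-\lim_{x\to0^+}x=0,$$ since $x\ln x\to0$ as $x\to0^+$ (for instance by L'Hôpital applied to $\frac{\ln x}{1/x}$). So the evaluation gives $1(\ln 1-1)-0=-1$, in agreement with the picture above.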
Prove that $\frac{a^2}{a+b}+\frac{b^2}{b+c}+\frac{c^2}{c+a}\geq\frac{1}{2}(a+b+c)$ for positive $a,b,c$ Prove the following inequality: for $a,b,c>0$ $$\frac{a^2}{a+b}+\frac{b^2}{b+c}+\frac{c^2}{c+a}\geq\frac{1}{2}(a+b+c)$$ What I tried is using the substitution: $p=a+b+c$ $q=ab+bc+ca$ $r=abc$ But I cannot reduce $a^2(b+c)(c+a)+b^2(a+b)(c+a)+c^2(a+b)(b+c)$ in terms of $p,q,r$
Hint: $ \sum \frac{a^2 - b^2}{a+b} = \sum (a-b) = 0$. (How is this used?) Hint: $\sum \frac{a^2 + b^2}{a+b} \geq \sum \frac{a+b}{2} = a+b+c$ by AM-GM. Hence, $\sum \frac{ a^2}{ a+b} \geq \frac{1}{2}(a+b+c)$.
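Spelling out how the first hint is used: it says that the cyclic sums $\sum \frac{a^2}{a+b}$ and $\sum \frac{b^2}{a+b}$ are equal, so $$2\sum \frac{a^2}{a+b} \;=\; \sum \frac{a^2+b^2}{a+b} \;\ge\; \sum\frac{a+b}{2} \;=\; a+b+c,$$ which is exactly the claimed inequality after dividing by $2$.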
Finding power series representation How we can show a presentation of a power series and indicate its radius of convergence? For example how we can find a power series representation of the following function? $$f(x) = \frac{x^3}{(1 + 3x^2)^2}$$
1) Write down the long familiar power series representation of $\dfrac{1}{1-t}$. 2) Differentiate term by term to get the power series representation of $\dfrac{1}{(1-t)^2}$. 3) Substitute $-3x^2$ everywhere that you see $t$ in the result of 2). 4) Multiply term by term by $x^3$. For the radius of convergence, once you have obtained the series, the Ratio Test will do the job. Informally, our orginal geometric series converges when $|t|\lt 1$. So the steps we took are OK if $3x^2\lt 1$, that is, if $|x|\lt \frac{1}{\sqrt{3}}$.
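A quick check of the resulting expansion with SymPy (the closed form $\sum_{k\ge1}k(-3)^{k-1}x^{2k+1}$ is what the four steps above produce):

```python
import sympy as sp

x = sp.symbols('x')
f = x**3 / (1 + 3 * x**2)**2
print(sp.series(f, x, 0, 10))   # expect x**3 - 6*x**5 + 27*x**7 - 108*x**9 + O(x**10)
partial = sum(k * (-3)**(k - 1) * x**(2 * k + 1) for k in range(1, 5))
print(sp.expand(partial))
```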
Uncountable closed set of irrational numbers Could you construct an actual example of an uncountable set of irrational numbers that is closed (in the topological sense)? I can find countable examples that are closed, like $\{ \sqrt{2} + \sqrt{2}/n \}_{n=1}^\infty \cup \{ \sqrt2 \}$, but how does one construct an uncountable example? At least one uncountable example must exist, since otherwise the rational numbers form a Bernstein set and are non-measurable.
Explicit example: translation of a Cantor-like set. Consider the Cantor set $$C := \Big\{ \sum \limits_{n=1}^{+\infty} \frac{\varepsilon_n}{4^n}\ \mid\ (\varepsilon_n)_n \in \{0,1\}^{\mathbb{N}}\Big\}.$$ It is uncountable and closed. Consider now the number $$x := \sum \limits_{n=1}^{+\infty} \frac{2}{4^{n^2}}.$$ The closed set which will answer the question is $$K := x + C = \{x+c,\ c\in C\}$$ Indeed, let us take an element $c$ of $C$, and distinguish two cases: * *$c$ is rational, in which case $c+x$ is irrational since $x$ is irrational (its base $4$ expansion has the digit $2$ exactly at the perfect-square positions and $0$ elsewhere, so it is not eventually periodic) *$c$ is irrational and can be written uniquely as $c = \sum \limits_{n=1}^{+\infty} \frac{\varepsilon_n}{4^n}$ with $\varepsilon_n \in \{0,1\}$ for all $n$. Then the base $4$ representation of $c+x$ is $\sum \limits_{n=1}^{+\infty} \frac{\varepsilon_n + 2\cdot 1_{\sqrt{n} \in \mathbb{N}}}{4^n}$ (no carrying occurs, since each digit is at most $3$). Thus the coefficients at non perfect-square positions are $0$ or $1$, while the coefficients at perfect-square positions are $2$ or $3$. Hence, the base $4$ representation cannot be eventually periodic (the positions carrying a digit $\ge 2$ are exactly the perfect squares, and the gaps between consecutive squares grow without bound), so $c+x$ is not rational.
Proving a Geometric Progression Formula, Related to Geometric Distribution I am trying to prove a geometric progression formula that is related to the formula for the second moment of the geometric distribution. Specifically, I am wondering where I am going wrong, so I can perhaps learn a new technique. It is known, and I wish to show: $$ m^{(2)} = \sum_{k=0}^\infty k^2p(1-p)^{k-1} = p\left(\frac{2}{p^3} - \frac{1}{p^2}\right) = \frac{2-p}{p^2} $$ Now, dividing both sides by $p$, and assigning $a = 1-p$ yields: $$ \sum_{k=1}^\infty k^2a^{k-1}=\frac{2}{(1-a)^3}-\frac{1}{(1-a)^2} \qquad \ldots \text{(Eq. 1)} $$ I want to derive the above formula. I know: $$ \sum_{k=0}^\infty ka^{k-1}=\frac{1}{(1-a)^2} $$ Multiplying the left side by $1=\frac aa$, and multiplying both sides by $a$, $$ \sum_{k=0}^\infty ka^k = \frac{a}{(1-a)^2} $$ Taking the derivative of both sides with respect to $a$, the result is: $$ \sum_{k=0}^{\infty}\left[a^k + k^2 a^{k-1}\right] = \frac{(1-a)^2 - 2(1-a)(-1)a}{(1-a)^4} = \frac{1}{(1-a)^2}+\frac{2a}{(1-a)^3} $$ Moving the known formula $\sum_{k=0}^\infty a^k = \frac{1}{1-a}$ to the right-hand side, the result is: $$ \sum_{k=0}^\infty k^2 a^{k-1} = \frac{1}{(1-a)^2} + \frac{2a}{(1-a)^3} - \frac{1}{1-a} $$ Then, this does not appear to be the same as the original formula (Eq. 1). Where did I go wrong? Thanks for the assistance.
You have $$\sum_{k=0}^{\infty} ka^k = \dfrac{a}{(1-a)^2}$$ Differentiating with respect to $a$ gives us $$\sum_{k=0}^{\infty} k^2 a^{k-1} = \dfrac{(1-a)^2 - a \times 2 \times (a-1)}{(1-a)^4} = \dfrac{1-a + 2a}{(1-a)^3} = \dfrac{a-1+2}{(1-a)^3}\\ = \dfrac2{(1-a)^3} - \dfrac1{(1-a)^2}$$
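A quick numerical sanity check of the closed form (an arbitrary value of $a\in(0,1)$; the tail beyond 200 terms is negligible here):

```python
a = 0.3
lhs = sum(k * k * a**(k - 1) for k in range(1, 200))
rhs = 2 / (1 - a)**3 - 1 / (1 - a)**2
print(lhs, rhs)   # both approximately 3.79
```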
Helly's selection theorem (For sequence of monotonic functions) Let $\{f_n\}$ be a sequence of monotonically increasing functions on $\mathbb{R}$. Let $\{f_n\}$ be uniformly bounded on $\mathbb{R}$. Then, there exists a subsequence $\{f_{n_k}\}$ pointwise convergent to some $f$. Now, assume $f$ is continuous on $\mathbb{R}$. Here, I want to prove that $f_{n_k}\rightarrow f$ uniformly on $\mathbb{R}$. How do I prove this? I have proven that "$\forall \epsilon>0,\exists K\in\mathbb{N}$ such that $k\geq K \Rightarrow \forall x\in\mathbb{R}, |f(x)-f_{n_k}(x)|<\epsilon \bigvee f_{n_k}(x) < \inf f + \epsilon \bigvee \sup f - \epsilon < f_{n_k}(x)$". The argument is in the link below. I don't understand why the above statement implies "$f_{n_k}\rightarrow f$ uniformly on $\mathbb{R}$". Please explain how. Reference: http://www.math.umn.edu/~jodeit/course/SP6S06.pdf Thank you in advance!
It's absolutely my fault that I didn't even read (c) in the link. I extend the theorem in the link, and my argument below is going to prove: "If $K$ is a compact subset of $\mathbb{R}$ and $\{f_n\}$ is a sequence of monotonic functions on $K$ such that $f_n\rightarrow f$ pointwise on $K$, then $f_n\rightarrow f$ uniformly on $K$." (There may exist both $n,m$ such that $f_n$ is monotonically increasing while $f_m$ is monotonically decreasing.) Pf> Since $K$ is closed in $\mathbb{R}$, the complement of $K$ is a disjoint union of at most countably many open intervals. Let $K^C=\bigsqcup (a_i,b_i)$. Define: $$ g_n(x) = \begin{cases} f_n(x) &\text{if }x\in K \\ \frac{x-a_i}{b_i-a_i}f_n(b_i)+\frac{b_i-x}{b_i-a_i}f_n(a_i) & \text{if }x\in(a_i,b_i)\bigwedge a_i,b_i\in\mathbb{R} \\ f_n(b_i) &\text{if }x\in(a_i,b_i)\bigwedge a_i=-\infty \\ f_n(a_i) &\text{if }x\in(a_i,b_i)\bigwedge b_i=\infty \end{cases} $$ Then $g_n$ is monotonic on $\mathbb{R}$, $g_n\rightarrow g$ pointwise on $\mathbb{R}$, and $g$ is a continuous extension of $f$. Let $\alpha=\inf K$ and $\beta=\sup K$. Then, by the argument in the link, $g_n\rightarrow g$ uniformly on $[\alpha,\beta]$. Hence, $f_n\rightarrow f$ uniformly on $K$.
What exactly is a steady-state solution? In solving differential equations, one encounters steady-state solutions. My textbook says that the steady-state solution is the limit of solutions of (ordinary) differential equations when $t \rightarrow \infty$. But the steady-state solution is given as $f(t)$, and this means that the solution is a function of $t$ - so what is this $t$ over which the limit is taken?
Example from dynamics: You can picture for yourself a cantilever beam which is loaded by a force at its tip, say $F(t) = \sin(t)$. At $t=0$ the force is applied and you first see the transient response; after some time the transient dies out and the system settles into the steady state. In this state the character of the response no longer changes (it no longer depends on the initial conditions). You can extend this thinking to other differential equations as well. Hope that helps.
Absoluteness of $ \text{Con}(\mathsf{ZFC}) $ for Transitive Models of $ \mathsf{ZFC} $. Is $ \text{Con}(\mathsf{ZFC}) $ absolute for transitive models of $ \mathsf{ZFC} $? It appears that $ \text{Con}(\mathsf{ZFC}) $ is a statement only about logical syntax. Taking any $ \in $-sentence $ \varphi $, we can write $ \text{Con}(\mathsf{ZFC}) $ as $ \mathsf{ZFC} \nvdash (\varphi \land \neg \varphi) $, which appears to be an arithmetical $ \in $-sentence. If this is true, then I think one can get a quick proof of $$ \mathsf{ZFC} + \text{Con}(\mathsf{ZFC}) \nvdash \langle \text{There exists a transitive model of $ \mathsf{ZFC} $} \rangle, $$ assuming that $ \mathsf{ZFC} + \text{Con}(\mathsf{ZFC}) $ is consistent. Proof If $$ \mathsf{ZFC} + \text{Con}(\mathsf{ZFC}) \vdash \langle \text{There exists a transitive model of $ \mathsf{ZFC} $} \rangle, $$ then let $ M $ be such a transitive model. By the absoluteness of $ \text{Con}(\mathsf{ZFC}) $, we see that $ M \models \mathsf{ZFC} + \text{Con}(\mathsf{ZFC}) $. Hence, $ \mathsf{ZFC} + \text{Con}(\mathsf{ZFC}) $ proves the consistency of $ \mathsf{ZFC} + \text{Con}(\mathsf{ZFC}) $. By Gödel’s Second Incompleteness Theorem, $ \mathsf{ZFC} + \text{Con}(\mathsf{ZFC}) $ is therefore inconsistent. Contradiction. $ \blacksquare $ Question: Is $ \text{Con}(\mathsf{ZFC}) $ absolute for transitive models, and is the above proof correct? Thanks for any clarification.
Yes, $\text{Con}(\mathsf{ZFC})$ is an arithmetic statement ($\Pi^0_1$ in particular, because it says a computer program that looks for an inconsistency will never halt) so it is absolute to transitive models, and your proof is correct. By the way, there are a couple of ways you can strengthen it. First, arithmetic statements are absolute to $\omega$-models (models with the standard integers, which may nevertheless have nonstandard ordinals) so $\text{Con}(\mathsf{ZFC})$ does not prove the existence of an $\omega$-model of $\mathsf{ZFC}$. Second, the existence of an $\omega$-model of $\mathsf{ZFC}$ does not prove the existence of a transitive model of $\mathsf{ZFC}$, because the existence of an $\omega$-model of $\mathsf{ZFC}$ is a $\Sigma^1_1$ statement, and $\Sigma^1_1$ statements are absolute to transitive models.