Uniform limit of continuously differentiable function
Note that $$\sqrt{x^2+\frac{1}{n}} -\vert x \vert =\left(\sqrt{x^2+\frac{1}{n}} -\vert x \vert\right) \frac{\sqrt{x^2+\frac{1}{n}} +\vert x \vert}{\sqrt{x^2+\frac{1}{n}} +\vert x \vert} = \frac{x^2 + \frac{1}{n}- x^2}{\sqrt{x^2+\frac{1}{n}} +\vert x \vert} = \frac{1/n}{\sqrt{x^2+\frac{1}{n}} +\vert x \vert} \leqslant \frac{1/n}{1/\sqrt{n}} = \frac{1}{\sqrt{n}}, $$ where the last inequality holds because the denominator is at least $\sqrt{1/n}=1/\sqrt{n}$.
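A quick numerical sanity check of this bound (a minimal sketch; the grid and the values of $n$ are arbitrary choices):

```python
import numpy as np

# Check that sup_x ( sqrt(x^2 + 1/n) - |x| ) equals 1/sqrt(n), attained at x = 0.
x = np.linspace(-10, 10, 100001)
for n in [1, 10, 100, 10000]:
    gap = np.sqrt(x**2 + 1.0 / n) - np.abs(x)
    print(n, gap.max(), 1 / np.sqrt(n))
```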
Sum of a geometric series of functions: $\sum_{n=0}^\infty\frac{x^2}{(1+x^2)^n}$
Recall: a sequence of functions $g_m\to g$ converges uniformly on some domain $U$ if and only if $\lim_{m\to\infty}\sup_{x\in U}|g_m(x)-g(x)|=0$. In this case, let $g_m(x)=\sum_{n=0}^m f_n(x)$ and $g(x)=\sum_{n=0}^\infty f_n(x)=1+x^2$ for $x\ne 0$ (at $x=0$ every term, and hence the sum, is $0$). By a straightforward calculation we have, for $x\ne 0$, $$|g_m(x)-g(x)|=\frac{1}{(1+x^2)^m}$$ Since $\frac{1}{1+x^2}$ is continuous on any $[a,b]$, it attains a maximum value $\leq 1$. However, if $[a,b]$ contains $0$, then $$\sup_{x\in[a,b]}\frac{1}{1+x^2}=1$$ and in this case $\lim_{m\to\infty}\sup_{x\in [a,b]}|g_m(x)-g(x)|=1$, thus the convergence is not uniform. If $[a,b]$ does not contain $0$, the convergence is uniform, since in this case $$\sup_{x\in[a,b]}\frac{1}{1+x^2}<1$$ and hence $\sup_{x\in[a,b]}|g_m(x)-g(x)|=\big(\sup_{x\in[a,b]}\frac{1}{1+x^2}\big)^m\to 0$.
Diagonalized matrix question?
You should get an Eigensystem as follows: $$\lambda_1 = 3, v_1 = (0, 0, 1)$$ $$\lambda_2 = 2, v_2 = (-1, -1, 1)$$ $$\lambda_3 = 1, v_3 = (-2, 0, 1)$$ Next, we can write the Jordan Normal Form as: $$A = P J P^{-1} = \begin{bmatrix} -2 & -1 & 0 \\ 0 & -1 & 0\\ 1 & 1& 1\end{bmatrix} \cdot \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0\\ 0 & 0& 3\end{bmatrix} \cdot \begin{bmatrix} -\frac{1}{2} & \frac{1}{2} & 0 \\ 0 & -1 & 0\\ \frac{1}{2} & \frac{1}{2}& 1\end{bmatrix}$$ What do you notice about $J$? What is it made from? What do you notice about the columns of $P$? What is it made from? Regards
Proof that Legendre Polynomials are Complete
We follow J. Weidmann's Linear Operators in Hilbert Spaces. We find in section $3.2$ Orthonormal systems and orthonormal bases: Example 6: In $L_2(-1,1)$ the set $F=\{f_n:n\in \mathbb{N}_0\}$ with $f_n(x)=x^n$ is a linearly independent system. The application of the Gram-Schmidt orthogonalization process provides an orthonormal system (ONS) $M=\{p_n:n\in\mathbb{N}_0\}$, where $p_n(x)=\sum_{j=0}^na_{n,j}x^j$ holds with $a_{n,n}>0$; i.e., $p_n$ is a polynomial of degree $n$ with a positive leading coefficient. These polynomials are called the Legendre polynomials. As $F$ is total, the Legendre polynomials constitute an orthonormal basis (ONB) in $L_2(-1,1)$. The polynomials $p_n$ can be given explicitly: \begin{align*} p_n(x)=\frac{1}{2^nn!}\left(\frac{2n+1}{2}\right)^{\frac{1}{2}}\frac{d^n}{dx^n}\left(x^2-1\right)^n,\qquad\qquad n\in\mathbb{N}_0\tag{1} \end{align*} Here $L_2(M)$ denotes the Lebesgue space over a Lebesgue measurable subset $M$ of $\mathbb{R}^m$. If $B$ is a subset of a Hilbert space $H$, then it is said to be total if the linear hull $L(B)$ (the set of finite linear combinations of elements of $B$) is dense in $H$, i.e. ${\overline{L(B)}}=H$. In order to prove formula (1) it is sufficient to show that the expression given for $p_n$ is a polynomial of degree $n$ whose leading coefficient is positive and that $\langle p_n,p_m\rangle=\delta_{n,m}$. The first assertion is obvious. For $j<m$ we obtain by a $(j+1)$-fold integration by parts (the integrated terms vanish) that \begin{align*} \int_{-1}^1x^jp_m(x)\,dx&=C_m\int_{-1}^1x^j\frac{d^m}{dx^m}\left(x^2-1\right)^m\,dx\\ &=\ldots\\ &=C_m(-1)^{j+1}\int_{-1}^1\frac{d^{j+1}}{dx^{j+1}}(x^j)\frac{d^{m-j-1}}{dx^{m-j-1}}\left(x^2-1\right)^m\,dx\\ &=0, \end{align*} since $\frac{d^{j+1}}{dx^{j+1}}x^j=0$. This implies that $\langle p_n,p_m\rangle=0$ for $n\ne m$. It remains to prove that $\|p_n\|=1$. By integration by parts we obtain that \begin{align*} \int_{-1}^1&\left[\frac{d^n}{dx^n}\left(x^2-1\right)^n\right]^2\,dx\\ &=(-1)^n\int_{-1}^1\left(x^2-1\right)^n\frac{d^{2n}}{dx^{2n}}\left(x^2-1\right)^n\,dx\\ &=(2n)!\int_{-1}^1(1-x)^n(1+x)^n\,dx\\ &=(2n)!\frac{n}{n+1}\int_{-1}^1(1-x)^{n-1}(1+x)^{n+1}\,dx\\ &=(2n)!\frac{n(n-1)}{(n+1)(n+2)}\int_{-1}^1(1-x)^{n-2}(1+x)^{n+2}\,dx\\ &=\ldots\\ &=(2n)!\frac{n(n-1)\cdots 1}{(n+1)(n+2)\cdots 2n}\int_{-1}^1(1+x)^{2n}\,dx\\ &=(2n)!\frac{1}{2n+1}\left[(1+x)^{2n+1}\right]_{-1}^1\\ &=(n!)^2(2n+1)^{-1}2^{2n+1} \end{align*} From this it follows that $\|p_n\|=1$ and the Legendre polynomials constitute an ONB in $L_2(-1,1)$.
Prob. 10, Sec. 3.9 in Kreyszig's functional analysis book: The null space and adjoint of the right-shift operator
Note that your computations show that $$ \|Tx\|^2=\left\|\sum_n\langle x,e_n\rangle\,e_{n+1}\right\|^2=\sum_n|\langle x,e_n\rangle|^2=\|x\|^2, $$ so $\|Tx\|=\|x\|$ for all $x$ and $T$ is an isometry. In particular, $T$ is bounded and injective. For the adjoint note that, since $\langle T^*e_n,e_m\rangle=1$ only when $m+1=n$ (and zero elsewhere), $$ \langle T^*e_n,y\rangle=\sum_{m=1}^\infty\overline{\langle y,e_m\rangle}\,\langle T^*e_n,e_m\rangle=\begin{cases}0,&\ n=1,\\ \ \\\overline{\langle y,e_{n-1}\rangle},&\ n\ne1\end{cases} $$ so if we write $e_0=0$, $$ \langle T^*e_n,y\rangle=\langle e_{n-1},y\rangle. $$ As we can do this for any $y$, we get that $T^*e_n=e_{n-1}$. So $$ T^*x=\sum_{n=1}^\infty\langle x,e_n\rangle\,e_{n-1}. $$
How many circles of radius r can be placed inside another circle, along its border?
As seen from the center of the large circle, a small circle subtends an angle $2\phi$, where $\sin\phi={r\over R-r}$. It follows that the quantities $n$, $r$, and $R$ are related by the equation $$n\cdot\arcsin{r\over R-r}=\pi\ .$$
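In code, the answer is the largest integer $n$ satisfying this relation (a small sketch; the sample values are arbitrary):

```python
import math

def max_small_circles(r, R):
    """Largest n with n * arcsin(r / (R - r)) <= pi: the number of radius-r
    circles that fit inside a radius-R circle, tangent to its border."""
    assert 0 < r <= R / 2, "need r <= R/2 for the small circles to fit"
    return math.floor(math.pi / math.asin(r / (R - r)))

print(max_small_circles(1, 4))  # arcsin(1/3) ~ 0.3398 rad, so n = 9
```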
Control on an algebraic exercise
If you take your solution and multiply both sides by $2\tan^2\alpha$, you get for the coefficient of $x^2$ $$(b^2+c^2)\tan^2\alpha[\tan2\beta(1-\tan^2\gamma)+2\tan\gamma],$$ which is the same except for the final sign as the book solution. Similarly, the linear term becomes the same as in the book solution except for a couple of signs. The constant term becomes identical to the constant term in the book solution. That doesn't resolve which one is incorrect, but it may help you find your error (if there is one). Edit: After doing the arithmetic myself, I get the same thing you got.
Linear programming alternate optimum solutions
You can construct a direction vector from the two given points. In combination with a position vector, you can then verify which of the given points lies on the resulting line. Greetings, calculus
IVP using TS and Euler
Recall the Taylor expansion about a point $a$ is given by $$f(x) = f(a) + f^{\prime}(a) (x-a) + f^{\prime\prime}(\xi)(x-a)^{2}/2$$ for $\xi$ between $x$ and $a$. For $Y_{n} = Y(t_{n})$ and $t_{n+k} = t_{n} +kh$, expand (using $Y'=f(t,Y)$) $$Y_{n} = Y_{n+1} + f(t_{n+1}, Y_{n+1})(t_{n}-t_{n+1}) + Y^{\prime\prime}(\psi_{n})(t_{n}-t_{n+1})^{2}/2 = Y_{n+1}-hf(t_{n+1},Y_{n+1})+\frac{1}{2}Y''(\psi_{n})h^2$$ as desired. For part b), we have, as you wrote, $$e_{n+1}= e_{n}+h[f(t_{n+1},Y_{n+1})-f(t_{n+1},y_{n+1})]+\frac{1}{2}Y''(\psi_{n})h^2.$$ By the Mean Value Theorem, there exists an $\eta_{n+1}$ so that $$f(t_{n+1},Y_{n+1})-f(t_{n+1},y_{n+1}) = (Y_{n+1}-y_{n+1})f_{y}(t_{n+1},\eta_{n+1}) = Je_{n+1}.$$ So $$e_{n+1}= e_{n}+hJe_{n+1}+\frac{1}{2}Y''(\psi_{n})h^2$$ or $$e_{n+1}(1-hJ)= e_{n}+\frac{1}{2}Y''(\psi_{n})h^2$$ and the result follows.
how many permutations can be formed from n objects when m of them are indistinguishable?
It is advantageous to consider a more general problem. Assume there are $k$ types of objects, each kind $i$ having $n_i$ indistinguishable representatives, the overall number of objects being $n=\sum_{i=1}^k n_i$. Then the overall number of possible permutations is determined by the multinomial coefficient: $$ \frac{n!}{\prod_{i=1}^k n_i!}. $$ Observe that this expression remains valid also when some $n_i$ are $0$.
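For instance, a short sketch computing the multinomial coefficient (the word 'MISSISSIPPI' is just a standard illustrative example):

```python
from math import factorial

def multiset_permutations_count(counts):
    """n! / prod(n_i!) for group sizes counts = [n_1, ..., n_k]."""
    n = sum(counts)
    result = factorial(n)
    for c in counts:
        result //= factorial(c)
    return result

# 'MISSISSIPPI': 1 M, 4 I, 4 S, 2 P -> 11!/(1! 4! 4! 2!) = 34650
print(multiset_permutations_count([1, 4, 4, 2]))
```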
Topological Space as an $(\infty,0)$-category
I think Neil Strickland's suggestion to post the question on MO is reasonable; there are a lot more experienced homotopy theorists there. Nonetheless, I'll give it a shot. Lurie's definition of the construction of an $(\infty, 1)$-category from a space is simply to take the singular simplicial set associated to it. This is a Kan complex, hence a quasicategory. (Conversely, any quasicategory where all edges are invertible is a Kan complex and can be thought of as representing a homotopy type.) The objects (or vertices of this simplicial set) are points of the space; the morphisms (or edges of this simplicial set) are maps from the unit interval into the space, etc. I don't think it makes sense to identify isomorphic vertices (i.e. vertices in the same connected component). However, one may ask that given a connected Kan complex, to produce a Kan complex with exactly one vertex which is homotopy equivalent to the first one. In fact, there is a theory of minimal Kan complexes that answers this question (which could also be approached directly). Say that a Kan complex $X$ is minimal if whenever we have two $n$-simplices $x, y \in X_n$ which are homotopic relative to their boundaries (i.e. there exists a map $\Delta[n] \times \Delta[1] \to X$ restricting to $x,y$ and constant on $\partial \Delta[n] \times \Delta[1]$), we have $x = y$. It is a theorem then that any Kan complex contains a minimal Kan complex which is homotopy equivalent to the original one. So, we can use this to produce connected subKancomplexes of a connected Kan complex with precisely one vertex. (This is explained in ch. 1 of Goerss-Jardine, for instance.) One of the reasons this theory is useful is that minimal Kan complexes that are homotopic are actually isomorphic. There is an analog of this theory for quasicategories (see HTT 2.3), which runs in a similar manner, except that when $n = 0$, the edge connecting the two should be an equivalence; then one gets the notion of a minimal quasicategory. It is similarly true that any quasicategory contains a minimal subquasicategory, to which it is categorically equivalent. I think the analogy in ordinary category theory of taking a minimal subquasicategory is the operation of taking a skeleton. Again, it turns out that categorically equivalent minimal quasicategories are isomorphic. (A side comment: for reasons that I don't fully understand, the theory of minimal Kan fibrations (a slight generalization of the theory of minimal Kan complexes) plays an important role in the establishment of the usual model structure on simplicial sets. For instance, because a minimal Kan fibration is actually an honest (simplicial) fiber bundle, the geometric realization has to be a Serre fibration. Nonetheless, as far as I know, there is no such theory for quasi-categories. And the corresponding Joyal model structure seems to be much, much harder to establish -- in particular, no simple set of generating acyclic cofibrations are published in the literature, as far as I can tell! If someone knows how to do this, I would very much appreciate a pointer.) Let me add that, as far as I understand, Lurie's theory of $(\infty, 1)$-categories (which is the only one I've looked at) makes very little reference to higher morphisms: there is a definite notion of an object (a vertex in the simplicial set), and a morphism (an edge). 
But higher morphisms are more or less treated as a black box, and as far as I can tell, the idea is not to separate them out but simply to work with mapping spaces between objects (which can be recovered from his theory). Instead, categorical notions are interpreted as statements of homotopy theory on the nerves, and those are then generalized to quasicategories. Perhaps it's also worth pointing out that $(\infty, 1)$-category theory is supposed to generalize both classical homotopy theory and 1-category theory. That is, there is an imbedding $$\text{Kan complexes} \hookrightarrow (\infty, 1)-\text{categories}$$ whose image is supposed to be precisely the $(\infty, 0)$-categories, that is the $\infty$-groupoids. In Lurie's theory at least, this is true, and furthermore, a categorical equivalence between Kan complexes is precisely the same thing as a homotopy equivalence. So the $\infty$-category of a space holds precisely the information of its homotopy type (which is quite a bit stronger than holding its homotopy groups).
When does $ \| Cx \|_{\mathbb{R}^2}^2 = \langle Cx, x\rangle$ hold?
My interpretation of your question is For which matrices $C$ is it true that $\langle x, Cx \rangle = \|Cx\|^2$? In terms of matrix multiplication, we could say that $C$ is such a matrix iff for every $x \in \Bbb R^n$, we have $$ x^T C x = (Cx)^T Cx \implies\\ x^T C x = x^T (C^TC) x $$ Note that $x^TAx = x^TBx$ for all $x \in \Bbb R^n$ iff $A + A^T = B + B^T$. Since $C^TC$ is symmetric, we note that $x^T C x = x^T (C^TC) x$ for all $x$ iff $$ C + C^T = 2C^TC $$
$\varepsilon - \delta$ proof that $f(x) = x^2 - 2$ is continuous - question concerning the initial choice of $\delta$
Your proof didn't cover the case $c < 0$. I would do it as follows: Let $c \in \mathbb{R}$ and $\epsilon > 0$ be arbitrary, and choose $\delta = \min \{1,\frac{\epsilon}{1+2|c|}\}$. Thus if $|x-c| < \delta $, then $$|x-c||x+c| \leq |x-c|(|x-c| + 2|c|)< |x-c|(1+2|c|)< \dfrac{\epsilon}{1+2|c|}\cdot (1+2|c|) = \epsilon ,$$ where the middle inequality uses $|x-c| < \delta \leq 1$.
$(\sum_{k\in[\sqrt{n},n-\sqrt{n}]}1)$ vanishes in $\frac{1}{n}(\sum_{k\in[\sqrt{n},n-\sqrt{n}]}1)(M+|\alpha|)$
$$\frac1n\big(\sum_{k\in [\sqrt{n},n-\sqrt{n}]}1\big)\le \frac1n\big(\sum_{k=1}^n1\big)=\frac1n\cdot n=1,$$ so $$ \frac1n\big(\sum_{k\in [\sqrt{n},n-\sqrt{n}]}1\big)(M+|\alpha|)\epsilon \le (M+|\alpha|)\epsilon $$
Symmetric matrix and determinant
The determinant of $A+ 2I$ is the product of the eigenvalues of $A + 2I$, counted with multiplicity. As the eigenvalues of $A$ are $\{2, 2, 2 , - 4, -4\}$, those of $A + 2I$ are $\{4, 4, 4, -2 , -2\}$. So the answer is $4^3 \cdot (-2)^2 = 256 = 4^4$.
When to use different formulas to find the slope of a tangent line
The two formulas are entirely equivalent and merely reflect different ways to think: the first one focuses on the fact that you have two separate points $a$ and $x$ close to one another, while the second one focuses on the fact that you have a base point $a$, and a secondary point close to it (a distance of $h$ away).
Buying playing cards - probability
You can use the negative binomial distribution to solve this problem: https://en.m.wikipedia.org/wiki/Negative_binomial_distribution Set up the random variable with the appropriate parameters, and then take its expectation. You could also take the sum of two iid geometric r.v.’s to answer this. The expectation of the sum is the sum of the expectations. https://en.m.wikipedia.org/wiki/Geometric_distribution
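Since the exact parameters of the card problem aren't restated here, the following is only a hedged Monte Carlo sketch: assume each purchased pack yields a wanted card independently with probability $p$ and you buy until you have two of them; the empirical mean should match the sum of two iid geometric expectations, $2/p$.

```python
import random

p, trials = 0.25, 200_000

def packs_until_two_hits(p):
    hits = packs = 0
    while hits < 2:          # keep buying until the second success
        packs += 1
        if random.random() < p:
            hits += 1
    return packs

est = sum(packs_until_two_hits(p) for _ in range(trials)) / trials
print(est, 2 / p)  # Monte Carlo estimate vs. the exact mean 2/p = 8
```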
Existence and Uniqueness of differential operator
You're working in an infinite dimensional setting, so injectivity need not be equivalent to surjectivity (to see an example of when this holds, look up the Fredholm alternative). To see that it's onto, let $f\in C([a,b]),$ and consider $F(x)=\int_a^x f(s)\, ds.$ What does $D$ do to $F$? And, you're correct, the solution is not unique. If $u$ solves $Du=f,$ then so does $u+c$ for any $c\in\mathbb{R}.$
Terminology: "entries" of a tuple
Yes, a tuple consists of two or more components.
How can I solve this probability question?
Let $0$ mean perfect and $1$ mean faulty. If we continue up to $4$ tests, the number of possible length-$4$ strings is $2^4=16$. Exactly three $1$s can occur in $\binom 43=4$ ways and exactly four $1$s in $\binom 44=1$ way; these cannot occur here, so the number of available cases is $16-(4+1)=11$. Explicitly, the possible outcomes are $0000,0001$; $0010,0011$; $011$; $0100,0101$; $11$; $101$; $1001,1000$. Among them, $0001,0010,0100,1000$ have exactly $1$ defective calculator, matching the count $\binom 41=4$ of strings with exactly one $1$ among $4$ tests. So the required probability is $\frac4{11}$.
How to understand compactness?
Maybe you should think about compactness as something that takes local properties to global properties. For example, if $f:K\rightarrow \mathbb{R}$ is continuous, $K$ is compact, and $f(x)>t_x>0$ for all $x$, then you can find $t>0$ such that $f(x)>t>0$ for all $x$ - so from the pointwise bounds $f(x)>t_x>0$ you get the uniform bound $f>t>0$ as a function. (For $K=[a,b]\subset\mathbb{R}$ this is a simple consequence of the Weierstrass extreme value theorem.) Usually we find some property that is true for every "small enough" open set, then use compactness to reduce to finitely many open sets, and use induction to show that the property is true for all of the space. This is at least how I understand compactness. As the commenters below this message wrote (and I didn't emphasize enough), we usually use compactness to reduce infinite problems/conditions/constraints to a finite subcover of the entire space, and then use some argument that works only in finite cases (like induction, taking a max/min, taking finite sums, etc.). In my example above, we wanted a minimum over all the lower bounds; in the infinite case this is in general just an infimum (and can be zero), but after reducing to a finite case there is a genuine minimum. In this way we can think of compactness as something that lets us use a finite argument on infinite covers (and often to transfer a property from the cover to the entire space).
Which is the diameter of $G$?
Suppose that the alphabet is $\{0,1,2,\ldots,9\}$. The distance between $111111$ and $222222$ is clearly six. Moreover, the distance between any two names is the number of positions in which they differ, so it can't be more than six. Hence, the diameter of the graph is $6$.
Limit involving norm of matrices
Let $M$ denote the set of $n\times n$ matrices over the reals. Then the mapping $$\langle A,B \rangle = Tr(A^T B) $$ is an inner product on $M$, which induces the Frobenius norm $$ \|A\|_F^2 = \langle A, A \rangle. $$ By Cauchy-Schwarz you obtain $|Tr(A^2)| \le \|A\|^2_F$ (notice it is really $\le$ and not $=$). Since $M$ is finite dimensional, any two norms on it are equivalent. Thus, there exist constants $c, C >0$ with $$ c \|A\|_F \le \|A\| \le C\|A\|_F $$ for every $A\in M$. Both together yield the claimed statement.
Intuition on Harris recurrence
This answer may be wrong, but I think it is worth posting, and if it is wrong, someone can point it out and I can learn something too. I think you do not mean a finite Markov chain, because for a finite-state chain, assuming it is irreducible, every state will be visited infinitely often; there is no question. I think there is only a difference if the state space is uncountable. This is because the event $V_i=\infty$ is the same as the event "state $i$ is visited infinitely often". This has probability $1$ or $0$, by Levy's zero-one law. So, suppose a positive recurrent chain is not Harris recurrent. This means the expected number of visits to $i$ is infinite, but the number of visits to $i$ is finite, almost surely; but doesn't this mean it is transient?
Invariant factors of a cyclic $K[X]$-module and of its dual
For question 2: $M_Q^*= Hom_K(M_Q,K)$ with its natural $K[X]$-module structure $(X\cdot f)(a)=f(X a)$. Factorize $Q=\prod_j P_j^{e_j}$, let $R_i=P_i^{e_i-1}\prod_{j\ne i} P_j^{e_j}\in K[X]/(Q)$ and take $f\in M_Q^*$ such that $f(R_i)\ne 0$ for each $i$. Then $$\ker(K[X]\to K[X]\cdot f )= (Q)$$ Therefore $\dim_K(K[X]\cdot f)= \dim_K(M_Q)=\dim_K(M_Q^*)$, so that as $K[X]$-modules $$ M_Q^*=K[X]\cdot f \cong K[X]/(Q)=M_Q$$ Such an $f$ exists because we can take $f_i\in Hom_K(K[X]/(P_i^{e_i}),K)$ with $f_i(R_i)=1$, and $f=\sum_i f_i$.
Prove the monotoniticy of a function $f$ which satisfies $f(x) e^{f(x)} =x$
The function $$g(t)=te^t$$ is continuous and strictly increasing for $t\ge0$, and its image is $[0,+\infty)$. Hence $g$ is invertible, and its inverse $f=g^{-1}$ (the function in the title) is monotonic on $[0,+\infty)$.
baby rudin, chapter 10, (differential forms) theorem 10.27
For starters, we will solve the equation $\bar{\sigma}(u)=\sigma(v)$. Say $u=u_1 e_1+\dots+u_k e_k$ and $v=v_1e_1+\dots+v_k e_k$, it follows that $\bar{\sigma}(u)=p_j+u_j(p_0-p_j)+\sum_{1 \le i \le k,i \neq j} u_i(p_i-p_j)$ and $\sigma(v)=p_0+\sum_{1 \le i \le k} v_i (p_i-p_0)$. Equating the two gives: $(p_j-p_0)+u_j (p_0-p_j)+\sum_{1 \le i \le k,i \neq j} u_i(p_i-p_0+p_0-p_j)-\sum_{1 \le i \le k} v_i (p_i-p_0)=0$ $(1-u_j-\sum_{1 \le i \le k,i \neq j} u_i-v_j)(p_j-p_0)+\sum_{1 \le i \le k,i \neq j} (u_i-v_i)(p_i-p_0)=0$ A possible solution for $v$ is $v=\sum_{1 \le i \le k,i \neq j} u_i e_i+(1-\sum_{1 \le i \le k} u_i)e_j =:G(u)$. Effectively, we have shown that $\bar{\sigma}=\sigma \circ G$, for a $C^1$-primitive $G$ from the simplex to itself. Theorem 10.9 (change of variables) gives the result for functions with "well behaved" supports (notice that $|J_G|=|-1|=1$). The general case follows by approximating $f$ with such functions (or, equivalently using a stronger version of theorem 10.9). I hope it's ok.
Making Sense of a pathological Dirac Comb as a Continuous Linear Functional
It turns out that what I've been looking for is $L_{\textrm{count}}^{2}\left(\left[0,1\right],\mathbb{C}\right)$, the space of all functions $f:\left[0,1\right]\rightarrow\mathbb{C}$ which are square-integrable with respect to the counting measure. I can't use $\ell^{2}$, because that would only allow me to keep track of the coefficient values $c_{t}$, and—for my purposes—what I need is to be able to keep track of both the $c_{t}$s and the points $t\in\left[0,1\right]$ at which the $c_{t}$s are non-vanishing. Using the counting measure, I can dispense with needing to treat $\varphi_{T}$ as a distribution. Instead, it becomes a perfectly ordinary element of the Hilbert space $L_{\textrm{count}}^{2}\left(\left[0,1\right],\mathbb{C}\right)$, and acts on other elements of that space in the natural way—via the inner product. This also makes all of my worries about inequalities moot: the inequalities will hold because I'm working over an ordinary Hilbert space, no distributions necessary.
function approximation using series
$$ \begin{align} 1-\sqrt{1-\frac1{1+x/2}} &=\frac{\frac1{1+x/2}}{1+\sqrt{1-\frac1{1+x/2}}}\\ &=\frac1{x+2}\frac2{1+\sqrt{1-\frac1{1+x/2}}}\\ &=\frac1{x+2}\left(1+\frac{1-\sqrt{1-\frac1{1+x/2}}}{1+\sqrt{1-\frac1{1+x/2}}}\right)\\ &=\frac1{x+2}\left(1+\frac{\large\frac1{1+x/2}}{\small\left(1+\sqrt{1-\frac1{1+x/2}}\right)^2}\right)\\ &=\frac1{x+2}\left(1+\frac12\frac1{x+2}\left(\frac2{\small1+\sqrt{1-\frac1{1+x/2}}}\right)^{\!\!\!2}\right)\\[9pt] &=\frac1{x+2}+\frac12\frac1{(x+2)^2}+O\left(\frac1{(x+2)^3}\right) \end{align} $$ Then, if we wish, we can use $$ \begin{align} \frac1{x+2} &=\frac1x\frac1{1+2/x}\\ &=\frac1x-\frac2{x^2}+O\left(\frac1{x^3}\right) \end{align} $$
How to solve this percentage question without equations?
If the original population were all male, it would only go up by $500$, not $600$. If it were all female, it would go up to $750$. Since $600$ is closer to $500$ than it is to $750$, it stands to reason there are more males than females in the original population. Looked at more closely, $600$ is $100$ from $500$ and $150$ from $750$. Those two differences are in the ratio $2:3$. That means the ratio of men to women is $3:2$. (Remember, we already concluded there should be more men than women.) It's now clear that the original population was $3000$ males and $2000$ females. As a quick check, $10\%$ of $3000$ is $300$, and $15\%$ of $2000$ is $300$, which together give $600$.
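If you do want to check the alligation reasoning with (a tiny bit of) arithmetic code:

```python
# 5000 people total: 10% male growth + 15% female growth = 600.
males, females = 3000, 2000
assert males + females == 5000
assert 0.10 * males + 0.15 * females == 600  # 300 + 300
print("the 3:2 split checks out")
```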
Question regarding Vector Equations
Start by assuming the lines as: $$\vec{r_1}=\vec{a_1}+\lambda_1\vec{b_1}\\ \vec{r_2}=\vec{a_2}+\lambda_2\vec{b_2}$$ where $\lambda_1$, $\lambda_2$ are real. Now we construct a plane through two midpoints with normal $\vec{b_1} \times\vec{b_2}$. Let's write the position vectors of two midpoints (each the average of a point on the first line and a point on the second): $$\vec{m_1}=\dfrac{(\vec{a_1}+\lambda_1\vec{b_1}) + (\vec{a_2}+\lambda_2\vec{b_2})}{2}\\ \vec{m_2}=\dfrac{(\vec{a_1}+\lambda_1'\vec{b_1}) + (\vec{a_2}+\lambda_2'\vec{b_2})}{2}$$ So a vector in the plane is $$\vec{m_2}-\vec{m_1}=\dfrac{\vec{b_1}(\lambda_1'-\lambda_1)+\vec{b_2}(\lambda_2'-\lambda_2)}{2}$$ Thus the equation of the plane can be written as $$(\vec{m_2}-\vec{m_1})\cdot (\vec{b_1}\times\vec{b_2})=0$$ Now notice that the expression $$\dfrac{\vec{b_1}(\lambda_1'-\lambda_1)+\vec{b_2}(\lambda_2'-\lambda_2)}{2}\cdot (\vec{b_1}\times\vec{b_2})$$ is always zero regardless of the values of the $\lambda$'s, since $\vec{b_1}$ and $\vec{b_2}$ are both orthogonal to $\vec{b_1}\times\vec{b_2}$.
Prove transitivity or not of some relation
If $(x,y),(y,z)\in R$, this means: $$x^n=y^m\quad \text{and} \quad y^{\hat{n}}=z^{\hat{m}}$$ This leads to: $$x^{n\hat{n}} = y^{m \hat{n}}=z^{m \hat{m}}$$ Just a sketch, but I bet you can prove everything necessary.
Finitely generated prime ideal and annihilator
$P$ is finitely generated. Let's make a remark: if $\{p_i\}$ generates $P$, then any element of $P^2$ can be represented as $\sum q_ip_i$ with $q_i \in P$. This is quite trivial: an element $p$ of $P$ is $p=\sum r_ip_i$, so if $q \in P$ then $$qp=\sum (qr_i)p_i$$ and $qr_i \in P$. Now let $a \in ann (P/P^2)$; then $$ap_k=\sum_i q_{ki} p_i$$ with $q_{ki} \in P$. So by looking at the characteristic polynomial of the matrix $(q_{ki})$, as in linear algebra (the Cayley-Hamilton trick), we get that $$a^n+b_1a^{n-1}+\cdots +b_n=0$$ where each $b_i \in P$, since the $b_i$ are sums of products of the $q_{ki}$. This implies that $a^n \in P$, and since $P$ is prime, $a \in P$.
Variance of two functions
The new variance of $X$ is just the old variance, because $Var(X+b)=Var(X)$. And the new variance of $Y$ is $Var(1.08\cdot Y)=1.08^2\cdot Var(Y)$. The covariance is only affected by multiplicative factors: $Cov(X+500,1.08\cdot Y)=1.08\cdot Cov(X,Y)$. Thus you have to calculate the old covariance using the formula that you posted and multiply the result by $1.08$.
Why can the union of events be expressed as additions of those events?
Speculating, I'd say $+$ for events means a mutually exclusive (disjoint) union, so we add their probabilities. So it's really the same thing, just stated so that we can apply the additivity axiom to their probabilities.
Induced Topology via Injective Map
In the topological context, we can. Just define on $X$ the initial topology $\tau$ with respect to $f$: if $A\subset X$, then $A\in\tau$ if and only if $A=f^{-1}(B)$ for some open subset $B$ of $M$. In the case of smooth manifolds, in general we can't. Take $M=\mathbb R$, $X=\mathbb Q$ and $f$ the inclusion of $\mathbb Q$ in $\mathbb R$.
How are these two Banach spaces related ? (weighted $L2$ type space involving a logarithm and Besov type space)
Let $a_n:= \sqrt{\int_{\Omega_n}f^2}$. A function $f$ belongs to $B$ if and only if $\sum_{n\geqslant 0}2^{n/2}a_n$ is finite, and to $L^2_{1/2,1/2}$ if and only if $\sum_{n\geqslant 0} 2^nna_n^2$ is finite. Let $a_n=2^{-n/2}n^{-1}(\log(n+2))^{-3/4}$: then $f$ belongs to $L^2_{1/2,1/2}$ but not to $B$. If $a_{2^N}=N^{-4}2^{-2^N/2}$ and $a_n=0$ when $n$ is not of the form $2^N$ for some $N$, then $f$ belongs to $B$ but not to $L^2_{1/2,1/2}$.
A group of order 30 has at most 7 subgroups of order 5
The intersection of two subgroups is itself a subgroup. Lagrange's Theorem therefore implies that the intersection of two subgroups of order $5$ must have order $1$ or $5$. Therefore, two distinct subgroups of order $5$ must intersect only at the identity element. If there are $k$ distinct subgroups of order $5$, then there are $1+4k$ distinct elements in those subgroups, which must be at most the order of the group; hence $1+4k\le 30$ and $k\le 7$. Using this GAP code

```gap
for G in AllSmallGroups(30) do
  S := Set(List(G, g -> Group(g)));
  Print(StructureDescription(G), " ", Number(S, H -> Size(H) = 5), "\n");
od;
```

we can check that every group of order $30$ actually has a unique subgroup of order $5$. These groups are $C_5 \times S_3$, $C_3 \times D_{10}$, $D_{30}$, and $C_{30}$. (See also Groups of order 30.)
Help with notation in logic
One term that is used is valuation. It comes up in one of the standard ways to define what is meant by a sentence of the language $L$ to be true in a particular $L$-structure $M$. Formally, a valuation is a function from the set of variable symbols to the underlying set of $M$.
Integral of signum and a root
Hint. By the change of variable $$ u=\frac12-x,\quad du=-dx, $$ observing that $\text{sign}\left(\frac{1}{2} - x\right)=1 $ for $0<x<\frac12$, one just gets $$ \int_0 ^{\frac{1}{2}} \text{sign}\left(\frac{1}{2} - x\right) \frac{1}{\left(\frac{1}{2}-x\right)^s} \text{d}x = \int_0 ^{\frac{1}{2}}\frac{du}{u^s},\quad s \in (0,1) . $$
Why digit sum of 841 is 4 instead of 13?
Where they write "digit sum", they actually mean the digital root: as long as your result is not a single digit, add the digits again. So you get $841 \rightarrow 8 + 4 + 1 = 13 \rightarrow 1 + 3 = 4$. Note that the digital root is equal to the remainder from division by $9$, except that if the remainder is $0$, the digital root is $9$ (unless you started with $0$, of course). So calculating the digital root essentially is doing calculation modulo $9$. Of course if two numbers are equal, then their remainders on division by $9$ are also the same. The calculation shows that for $841$ and for $29^2$, that remainder is $4$, while $21^2$ is a multiple of $9$. Thus you know that $21^2\ne 841$ because the remainders don't agree. Note that strictly speaking, you don't know yet that $29^2=841$; all you know is that if $841$ is the square of a natural number, that number must be $29$. The method never looked at the middle digit, therefore it would have arrived at $29$ also for e.g. $831$, despite $29^2\ne 831$.
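A small sketch of the digital root as calculation modulo $9$:

```python
def digital_root(n):
    """Repeated digit sum; for n > 0 this is n mod 9, with 9 in place of 0."""
    return 0 if n == 0 else 1 + (n - 1) % 9

print(digital_root(841))      # 4
print(digital_root(29 ** 2))  # 4 -> consistent with 29^2 = 841
print(digital_root(21 ** 2))  # 9 -> 441 is a multiple of 9, so 21^2 != 841
```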
Can WolframAlpha calculate a multivariable integral over arbitrary domain of integration?
To my knowledge, it is not possible to use the Wolfram Language with Wolfram Alpha (to its full extent). However, you can use the free version of the Wolfram Programming Lab to evaluate valid Mathematica code. Your type of integral can be calculated in Mathematica with the following syntax:

```mathematica
Integrate[
 x^2*y^2*z^2,
 Element[{x, y, z},
  ImplicitRegion[Sqrt[x^2 + y^2] <= z <= Sqrt[1 - x^2 - y^2], {x, y, z}]
 ]
]
(* ((128 - 71 Sqrt[2]) \[Pi])/60480 *)
```

Whether Mathematica can find an analytical solution depends, of course, on how hard the problem is.
Prove that the given linear map is injective
The general statement is definitely false for $n > 1$. $f(0, 0)=0$, but we can also choose some $0 < \epsilon < \min(B, BM)$ so that $\left(-\epsilon, \frac \epsilon M\right) \in [-B, B]^2$ and thus in the domain of $f$. Thus, we have the following: $$f\left(-\epsilon, \frac \epsilon M\right)=-\epsilon M+\frac \epsilon M M^2=-\epsilon M+\epsilon M=0$$ Therefore, $f(0, 0)=f\left(-\epsilon, \frac \epsilon M\right)$ and $f$ is not injective. However, I think the reduced statement is actually true. Let $D$ be the domain of $f$. For any two unequal ${\bf x}, {\bf y} \in D$, $f({\bf x})=f({\bf y})$ means that $\sum_{i=1}^n ({\bf x}_i-{\bf y}_i)M^i=0$. Since the left side of this equation is a non-zero polynomial of degree at most $n$ in $M$, there are at most $n$ values of $M$ which satisfy this equality. We have the following: Since $D$ is countable, there are countably many pairs of unequal ${\bf x}, {\bf y} \in D$. Given a specific pair ${\bf x}, {\bf y} \in D$, there are finitely many $M$ for which $f({\bf x})=f({\bf y})$. Therefore, overall, there is a countable set of $M$ for which $f({\bf x})=f({\bf y})$ holds for some unequal ${\bf x}, {\bf y} \in D$. On the other hand, there are uncountably many positive real numbers, so choose a positive number that is not in the described countable set. For this $M$, $f({\bf x}) \neq f({\bf y})$ for all unequal ${\bf x}, {\bf y} \in D$, which means $f$ is injective.
Quadrangle ABCD that is not a parallelogram, but does have a parallel side
Yes. In general, those things are called trapezoids.
Can $f(x) = \sin(nx)\sin(mx)$ $m,n \in \Bbb{N}$ be written as something in $A= \left\{\sum_{n=0}^k a_n \sin(n x): k \in \mathbb{N}\right\}$?
I'm not sure about a finite series, but $f(x)$ is periodic and it is an even function, so it can be written as an infinite sum of $\cos(nx)$ terms via its Fourier representation. Apologies if this isn't of any help.
What's the graph for this periodic function
You can look at it this way: at first $f(t)$ is only defined for $-2\pi<t\leq2\pi$. This would just be the normal graph of $f(t)$, except you only draw the part within the given interval. If you want to know the value of $f(t)$ for a $t$ outside this interval, you use the fact that $f(t)=f(t\pm2\pi)$, so you can go back in steps of $2\pi$ until you are back in the original interval. This is how you get the values of $f(t)$ on all of $\mathbb R$. As for the graph, it will be that little piece of $e^{-t}$ on $(-2\pi,2\pi]$. Only there will be a great number of these segments, and each is at a fixed distance from its direct predecessor.
Existence and uniqueness proof check/critique
Maybe I'm not understanding something, but if we let $A = \lbrace a_1, a_2 \rbrace$, $B = \lbrace b_1, b_2 \rbrace$ and $C = \lbrace c_1, c_2 \rbrace$, define $g(a_i) = b_1$ for $i = 1,2$ and $f(a_i) = c_1$, and then define $h_1(b_1) = c_1$, $h_1(b_2) = c_2$ and $h_2(b_1) = c_1$, $h_2(b_2) = c_1$, then both fulfill $f = h_1 \circ g = h_2 \circ g$, hence $h$ is not unique.
Purchasing Optimization Problem
Consider a planning time period made of $m$ weeks. Let $ x_1, x_2 , \dots , x_m $ be the quantities (measured in pounds) of the specific commodity to be purchased each week in order to meet the demand. The demand for every week is known in advance and is forecasted with a relative error (standard deviation/mean) of less than 3%: $ d_1, d_2 , \dots , d_m $. The lead time for getting the commodity from the supplier is 8 weeks, and as a result it is necessary to place the purchase order 8 weeks in advance: $ x(t - \tau) = x_t $ where $ \tau = 8 $ weeks. For example, if today we got $x_3 = 100$ pounds as the optimal solution for the third week, this means the order should have been placed five weeks before today, $ x_3 = x(3 - 8) $. Because a full train car is able to carry $k$ = 195,000 pounds of the commodity, we designate by $ y_1, y_2 , \dots , y_m $ the number of train cars to be hired for each specific week. Clearly $ y_i $ is a natural number. Let $ INV_0 $ be the stock on hand at the beginning of the planning time period. The constraint that balances purchasing, demand and inventory is: $ x_i + INV_{i-1} - INV_i = d_i $ for $i=1, \dots , m $. Also, $ y_i \ge x_i / k $ where $k$ = 195,000 pounds, and therefore we require that $ k y_i \ge d_i - INV_{i-1} + INV_i $. The goal is to keep the days on hand of the commodity as low as possible without shortage in any week AND to hire the minimum number of train cars. The mathematical model, as an LP, can be written as: $ min \left \{ \sum_{i = 1}^m y_i + \sum_{i=1}^m INV_i \right \} $ subject to: $ \left\{ \begin{array}{l} k y_1 \ge d_1 - INV_0 + INV_1 \\ k y_2 \ge d_2 - INV_1 + INV_2 \\ \vdots \\ k y_m \ge d_m - INV_{m-1} + INV_m \\ \\ INV_i \ge 0 \ \forall \ i \\ y_i \in N \ \forall \ i \end{array} \right. $
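A minimal sketch of this model in Python with PuLP; the horizon $m$, the demands and the initial stock are made-up numbers, and the objective simply adds car counts to inventory pounds as in the formulation above (in practice one would weight the two terms):

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

m, k, INV0 = 6, 195_000, 50_000
d = [120_000, 180_000, 90_000, 210_000, 150_000, 170_000]  # hypothetical demands

prob = LpProblem("commodity_purchasing", LpMinimize)
y = [LpVariable(f"y{i}", lowBound=0, cat="Integer") for i in range(m)]  # train cars
inv = [LpVariable(f"INV{i}", lowBound=0) for i in range(m)]             # week-end stock

prob += lpSum(y) + lpSum(inv)  # minimize hired cars and stock on hand

for i in range(m):
    prev = INV0 if i == 0 else inv[i - 1]
    prob += k * y[i] >= d[i] - prev + inv[i]  # k*y_i >= d_i - INV_{i-1} + INV_i

prob.solve()
print([int(v.value()) for v in y], [v.value() for v in inv])
```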
Maximum of a sequence and convergence of a series
I believe the hypothesis $\sum_{i=1}^{N}a_i =N$ should be $\sum_{i=1}^{N}a_i \leq N$, since, otherwise, $a_n=1$ for all $n$. Taking this as the hypothesis, note that $\sum_{i=1}^{N} a_i^{2} \leq \max \{a_i:1\leq i\leq N\} \sum_{i=1}^{N} a_i$ (because $a_k^{2} \leq \max \{a_i:1\leq i\leq N\} a_k$ for each $k$). Divide by $N^{2}$ and let $N \to \infty$. Since $\frac 1 {N^{2}} \max \{a_i:1\leq i\leq N\} \sum_{i=1}^{N} a_i \leq \frac 1 N \max \{a_i:1\leq i\leq N\} \to 0$, the proof is complete.
Using Mayer-Vietoris sequence to show the Möbius band does not embed in $S^2$
We have $V = S^2 \setminus f(C)$ and thus $f(M) \cap V = f(M \setminus C) \approx M \setminus C$. Now consider the following part of the Mayer-Vietoris sequence: $$H_1(f(M) \cap V) \stackrel{\phi}{\rightarrow} H_1(f(M)) \oplus H_1(V) \to H_1(S^2) = 0 .$$ This shows that $\phi$ must be onto. To get a contradiction, it is essential to know what $\phi$ looks like: We have $\phi(x) = (i_*(x), j_*(x))$ where $i : f(M) \cap V \to f(M)$ and $j : f(M) \cap V \to V$ are the inclusions. Since the projection $p : H_1(f(M)) \oplus H_1(V) \to H_1(f(M))$ is onto, we see that $p \circ \phi = i_*$ must be onto. This is equivalent to $k : M \setminus C \hookrightarrow M$ inducing a surjection on $H_1$. It is well-known (and easy to show) that the boundary circle $S$ of $M$ is a strong deformation retract of $M \setminus C$. Hence the inclusion $l : S \to M \setminus C$ is a homotopy equivalence. Similarly we have an obvious strong deformation retraction $r : M \to C$. Thus $f = r \circ k \circ l : S \to C$ must induce a surjection on $H_1$. However, it is easy to see that if we identify $S$ and $C$ with $S^1$, then $f$ wraps the circle twice around itself. Thus $f$ corresponds to a map $g : S^1 \to S^1$ of degree $2$. This does not induce a surjection, and we have the desired contradiction.
$\sum_{n=1}^{\infty}a_nx^n=\sum_{n=1}^{\infty}b_nx^n,\space x\in\mathbb{R}$
Hint: if $f(x)=g(x)$ for $x\in (-R,R)$, then $f'(x)=g'(x)$. With $x=0$, we get $a_1=b_1$. Now think about $f''$.
Visualizing solids of revolution
When you rotate about $y$, most of the area is farther from the axis than if you rotate around $x$. If you think about a small area element of size $dx\,dy$ at the point $(x,y)$, when rotated around $x$ it sweeps out a volume element $2\pi y\,dx\,dy$, and when rotated around $y$ it sweeps out a volume element $2\pi x\,dx\,dy$. The methods of discs and shells just do one dimension of the integral for you. You could think of the rectangle $[0,10] \times [0,0.1]$. If you rotate it around $x$ you get a cylinder of radius $0.1$ and height $10$, for volume $0.1\pi$. If you rotate it around $y$ the radius is $10$ and the height is $0.1$, for a volume of $10\pi$.
Is there a simple function which can be used to determine the two "next" search indexes in a binary search?
No longer relevant, keeping answer around for comments.
Probabilities with ${ n \choose k}$
Actually I need help deriving this equation: $P_i( \bar{D}=\bar{d} \mid D=d ) = {i \choose \bar{d}} {N_t -i \choose d-\bar{d}} \pi_1^d (1-\pi_1)^{N_t-d}$ (the given equation, for $ 0 \leq \bar{d} \leq i$, since the number of departures from the tagged slot cannot be greater than the number of users using it). The given equation is w.r.t. a particular slot (the tagged slot), which is in state $i$ (currently has $i$ users), and the number of departures of users from this slot is denoted $\bar{d}$. The number of departures from all slots, including the tagged slot, is $d$. The departures of the users are independent events, with $N_t$ the total number of users. What I think is that since $ P_i( \bar{D}=\bar{d} \mid {D}={d} ) = \frac{ P (\bar{D}=\bar{d}, {D}={d}) } { P({D}={d}) }$, and since departures are independent, I write it as $ P_i( \bar{D}=\bar{d} \mid {D}={d} ) = \frac{ P (\bar{D}=\bar{d}) P( {D}={d}) } { P({D}={d}) } = P (\bar{D}=\bar{d})$. So I get the following result by putting values into the Bernoulli formula: $ P_i( \bar{D}=\bar{d} \mid {D}={d} ) = \sum_{\bar{d}=0}^i {i \choose \bar{d}} p^{ \bar{d} } (1-p)^{ i-\bar{d} }$. I thought that the Bernoulli formula might be used here, $\Pr[k\mbox{ successes in }n\mbox{ trials}] = \binom{n}{k}s^kf^{n-k}$, or summed, $\sum_{k=0}^n \binom{n}{k}s^kf^{n-k}$. But how can I convert the $\sum$ into the second ${ n \choose k }$ to derive the given equation? Am I right in making this decision? I am unable to derive it. Can anyone tell me what I have done wrong?
Differentiation from first principles $\sqrt{1+e^x}$
$\frac {\sqrt {1+e^{x+h}}-\sqrt {1+e^{x}}} h=\frac {e^{x+h}-e^{x}} {h(\sqrt {1+e^{x+h}}+\sqrt {1+e^{x}})} \to \frac {e^{x}} {2 \sqrt {1+e^{x}}}$. I have used the fact that $\frac {e^{h}-1} h \to 1$ as $ h \to 0$.
How do the Barycentric weights work with the Lagrange interpolation?
Note that $\ell$ by itself does not interpolate your points $f(x_i)$. It is just a stepping stone in developing a later expression for the actual interpolation. $\ell(x)$ is just the monic (i.e., leading coefficient = 1) polynomial having simple zeros at each of the interpolation points $x_i$ and no other zeros. For fixed $j$, by removing the $(x - x_j)$ factor from $\ell$, we get another polynomial with simple zeros at all the other interpolation points, but which is non-zero for $x_j$. Call it $$L_j(x) = (x - x_0)...(x- x_{j-1})(x-x_{j+1})...(x-x_n) = \frac {\ell(x)}{x - x_j}$$ But this isn't quite useful enough. We'd like to have $\ell_j(x_j) = 1$ as well. But that is simply a matter of dividing by the right constant: $$\ell_j(x) = \frac {L_j(x)}{L_j(x_j)}$$ You may note that $x = x_j$ is the one point where the equation $$L_j(x) = \frac {\ell(x)}{x - x_j}$$ does not hold, since the right-hand side is undefined there. However, $L_j(x_j)$ itself is defined. It is just $$L_j(x_j) = (x_j - x_0)...(x_j- x_{j-1})(x_j-x_{j+1})...(x_j-x_n)$$ To make the notation a little easier, we rename $$w_j = \frac 1{L_j(x_j)}$$, giving the expression in your post. I'll leave proving that $L_j(x_j) = \ell'(x_j)$ to you. The point of all of this is that now we have a set of polynomials with the property that $$\ell_j(x_i) = \begin{cases} 1 & i = j\\0 & i \ne j\end{cases}$$ and are the simplest such polynomials possible. And therefore we can take $$P(x) = \sum_{i=0}^n f(x_i)\ell_i(x)$$
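A short sketch of these formulas in code (the nodes and the test function are arbitrary; this is the 'first form' of barycentric interpolation, valid away from the nodes):

```python
import numpy as np

def barycentric_weights(xs):
    """w_j = 1 / L_j(x_j) = 1 / prod_{i != j} (x_j - x_i)."""
    xs = np.asarray(xs, dtype=float)
    return np.array([1.0 / np.prod(xj - np.delete(xs, j))
                     for j, xj in enumerate(xs)])

def lagrange_interp(xs, fs, x):
    """P(x) = sum_j f(x_j) ell_j(x), with ell_j(x) = ell(x) * w_j / (x - x_j)."""
    xs, fs = np.asarray(xs, float), np.asarray(fs, float)
    w = barycentric_weights(xs)
    ell = np.prod(x - xs)                    # the monic node polynomial ell(x)
    return ell * np.sum(w * fs / (x - xs))   # assumes x is not itself a node

xs = [0.0, 1.0, 2.0, 4.0]
fs = [t**3 for t in xs]
print(lagrange_interp(xs, fs, 3.0))  # 27.0: degree-3 data is reproduced exactly
```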
Minimal generating set of an ideal in a polynomial ring
I answered my question. In the specific case I mentioned, it turns out $J$ is 2-generated. In fact, $J = \langle 15x + 2, \sqrt{-30} \rangle$, mostly since $15x = 15(15x + 2) + (7\sqrt{-30}x + \sqrt{-30})\sqrt{-30} \in J$.
Continuity of an implicitly defined function
The above definition of the final function $z(x,y)$ assumes that every ordered pair $(x,y)$ with two images under the original relation $z$ has the trivial image $\dfrac{1}{2}$, which is then ignored in the final definition. This is, however, not always true: in the case $x = 0$, the ordered pair $(0,y)$ gives the images $z = \dfrac{1-y}{2}$ and $z = \dfrac{1+y}{2}$, neither of which evaluates to $\dfrac{1}{2}$ for non-zero $y$, implying that the function is in fact not well-defined. As for the problem at hand, consider the intersections of the surface with a plane of the type $x = a$, $y = a$ or $z = a$. Each such intersection is a curve, and if the surface is continuous everywhere, all the curves obtained by substituting every possible value of $a$ would also be continuous. Hope this helped.
Is the free group functor an isomorphism?
OK, so you can do that to convert a set to a group. Now take that group and apply the forgetful functor: what set do you get? In particular, if $S = \{a\}$, then $G$ looks like the group $\Bbb Z$. Now forget the group structure and you've got an infinite set, not a singleton set!
Forming probability distributions of type $W=X+Y$.
You can probably see that the possible values of $X+Y$ are $0,2,4,6,8,10$. Call $X+Y$ by some name, like $W$. We want to find $\Pr(W=w)$ for $w=0,2,4,\dots,10$. Unfortunately we have to do them one at a time. What is $\Pr(W=0)$? This happens only if $X=0$ and $Y=0$. By independence, the probability of this is $(0.3)(0.4)=0.12$. One down, quite a few to go. What is the probability that $W=2$? This can happen in two ways, $X=0$, $Y=2$ or $X=2$, $Y=0$. So the probability is $(0.3)(0.2)+(0.2)(0.4)$. Continue. Note that for $W=4$ there will be three different ways, for $W=6$ there will also be three ways, but then things get easier. It is easy to make a numerical slip or two; note that the $6$ numbers we get should add up to $1$. That gives a check on your calculations.
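If you want to automate the bookkeeping, here is a sketch that convolves the two PMFs; only the entries used above ($P(X{=}0){=}0.3$, $P(X{=}2){=}0.2$, $P(Y{=}0){=}0.4$, $P(Y{=}2){=}0.2$) come from this answer, and the remaining mass is distributed arbitrarily just to make each PMF sum to $1$:

```python
pX = {0: 0.3, 2: 0.2, 4: 0.5}            # hypothetical beyond P(X=0), P(X=2)
pY = {0: 0.4, 2: 0.2, 4: 0.1, 6: 0.3}    # hypothetical beyond P(Y=0), P(Y=2)

pW = {}
for x, px in pX.items():
    for y, py in pY.items():
        pW[x + y] = pW.get(x + y, 0) + px * py  # independence: multiply, collect

print(pW[2])               # 0.3*0.2 + 0.2*0.4 = 0.14
print(sum(pW.values()))    # sanity check: the probabilities add up to 1.0
```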
a precise "description" of $\lim s_n > \lim t_n$
Note $x>y$ iff there exists $z$ such that $x>z>y$. Suppose then $s=\lim\limits_{n\to\infty} s_n>\lim\limits_{n\to\infty} t_n=t$. Pick $r$ such that $s>r>t$. Since $s_n\to s$, there exists for $\varepsilon_0 =\dfrac{s-r}2$ an $N_{\varepsilon_0}$ such that... Since $t_n\to t$, there exists for $\varepsilon_1=\dfrac{r-t}2$ an $N_{\varepsilon_1}$ such that... I'll let you finish.
apparent mistake in description of $(x^2,xy,y^2, \alpha x + \beta y) \subseteq K[x,y]$ in Geometry of Schemes
The ideal $I=(x^2,xy,y^2)$ consists of all polynomials in $k[x,y]$ such that 1) constant term $= 0,$ 2) linear term $= 0$. Therefore $\mathfrak{a}=I+(\alpha x + \beta y)$ consists of all polynomials in $k[x,y]$ such that i) constant term $=0,$ ii) linear term $=\gamma(\alpha x+\beta y)$ for some $\gamma \in k.$ For any $f \in k[x,y],$ we observe that: A) $f(0,0) = $ the constant term of $f(x,y),$ B) $\displaystyle \frac{\partial f}{\partial x}{(0,0)} = $ the coefficient of the $x$-term in $f(x,y),$ C) $\displaystyle \frac{\partial f}{\partial y}{(0,0)} = $ the coefficient of the $y$-term in $f(x,y).$ It therefore follows that $f \in k[x,y]$ belongs to $\mathfrak{a}$ if and only if (i') $f(0,0) = 0,$ (ii') $\alpha\displaystyle \frac{\partial f}{\partial y}{(0,0)}=\beta\frac{\partial f}{\partial x}{(0,0)}.$ Hence, in your notation, we have $\mathfrak{a}=\mathfrak{b}.$ Hope this helps! Please let me know if you need any more details :)
Tensor Product over a ring
What you describe: $V \otimes_\mathbb{C} W$ when $V, W$ are merely real vector spaces is not generally possible. It's only possible if the vector spaces you consider are already complex vector spaces. What is possible though is the reverse: $V \otimes_\mathbb{R} W$ when $V,W$ are complex vector spaces. Indeed, a complex vector space is canonically a real vector space; this is because there is a canonical embedding of fields $\mathbb{R} \subset \mathbb{C}$. More generally given a field extension $F \subset K$, a $K$-vs is always an $F$-vs, and then you can do $V \otimes_F W$ when $V, W$ are $K$-vs. You should note that $V \otimes_\mathbb{R} W$ is different from $V \otimes_\mathbb{C} W$ when both are defined. In other words the base field is important. See this for example. The difference between $\otimes$ and $\otimes_F$ is that in $\otimes$, the base field (or ring) isn't specified. So it implies that at the beginning, you fixed some field $F$ and implicitly when you write $V \otimes W$ the tensor product is actually over $F$. But this depends on context. Sometimes the tensor product is taken by default over $\mathbb{Z}$, because every module over anything is always a $\mathbb{Z}$-module (aka an abelian group).
How do I prove isomorphism?
Essentially, you wish to construct an isomorphism $\alpha : S_\mathbb{N} \to S_\mathbb{Z}$. But what are the elements of $S_\mathbb{N}$? They're just $f : \mathbb{N} \to \mathbb{N}$ bijective. Likewise for $\mathbb{Z}$. Hint: there is some $g : \mathbb{N} \to \mathbb{Z}$ that is bijective. Can you use this to construct an explicit isomorphism? What is a natural thing to do to some $f : \mathbb{N} \to \mathbb{N}$?
What is the difference between "probability density function" and "probability distribution function"?
Distribution Function The terms "probability distribution function" / "probability function" are ambiguous. They may refer to: the probability density function (PDF), the cumulative distribution function (CDF), or the probability mass function (PMF) (statement from Wikipedia). But what is well established is: Discrete case: Probability Mass Function (PMF); Continuous case: Probability Density Function (PDF); Both cases: Cumulative distribution function (CDF). The probability at a certain value $x$, $P(X = x)$, can be directly obtained from: the PMF in the discrete case; the PDF in the continuous case. The probability for values less than $x$, $P(X < x)$, or the probability for values within a range from $a$ to $b$, $P(a < X < b)$, can be directly obtained from: the CDF in both the discrete and continuous case. "Distribution function" refers to the CDF or Cumulative Frequency Function (see this). In terms of Acquisition and Plot Generation Method: Collected data appear discrete when: the measured quantity is naturally of a discrete type, such as numbers resulting from dice rolls or counts of people; or the measurement is digitized machine data, which has no intermediate values between quantized levels due to the sampling process. In the latter case, the higher the resolution, the closer the measurement is to an analog/continuous signal/variable. Way to generate a PMF from discrete data: Plot a histogram of the data over all the $x$'s; the $y$-axis is the frequency or count at every $x$. Scale the $y$-axis by dividing by the total number of data collected (data size) $\longrightarrow$ this is called the PMF. Way to generate a PDF from discrete/continuous data: Find a continuous equation that models the collected data, say the normal distribution equation. Calculate the parameters required in the equation from the collected data; for example, the parameters for the normal distribution equation are the mean and standard deviation. Based on the parameters, plot the equation with continuous $x$-values $\longrightarrow$ that is called the PDF. How to generate a CDF: In the discrete case, the CDF accumulates the $y$ values of the PMF at each discrete $x$ and all values less than $x$; repeat this for every $x$. The final plot is monotonically increasing, reaching $1$ at the last $x$ $\longrightarrow$ this is called the discrete CDF. In the continuous case, integrate the PDF over $x$; the result is a continuous CDF. Why PMF, PDF and CDF? The PMF is preferred when the probability at every single $x$ value is of interest. This makes sense when studying discrete data - such as when we are interested in the probability of getting a certain number from a dice roll. The PDF is preferred when we wish to model collected data with a continuous function, using a few parameters such as the mean to speculate about the population distribution. The CDF is preferred when the cumulative probability in a range is of interest. Especially in the case of continuous data, the CDF makes much more sense than the PDF - e.g., the probability that a student's height is less than $170$ cm (CDF) is much more informative than the probability density at exactly $170$ cm (PDF).
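A small sketch of the PMF/CDF recipe above on simulated discrete data (two dice):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.integers(1, 7, 10_000) + rng.integers(1, 7, 10_000)  # two dice

values, counts = np.unique(data, return_counts=True)
pmf = counts / counts.sum()   # histogram scaled by the data size
cdf = np.cumsum(pmf)          # accumulate the PMF; ends at 1

for v, p, c in zip(values, pmf, cdf):
    print(v, round(p, 4), round(c, 4))
```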
Supersolvable groups
If you want a more essentially infinite example, you could take a semidirect product of ${\mathbb Z}^2$ by ${\mathbb Z}$ with an irreducible action. For example $\langle x,y,z \mid xy=yx, z^{-1}xz=y, z^{-1}yz=xy \rangle$.
Partition of $\mathbb{R}^+$ into two semigroups
The Solution: Notice that $\mathbb{R}$ is a vector space over $\mathbb{Q}$. Choose some linear function $f:\mathbb{R}\rightarrow\mathbb{Q}$ on this vector space for which there exist $x,y\in\mathbb{R}^+$ such that $f(x)\geq 0$ and $f(y)<0$. Then, the sets $A=\{x:f(x)\geq 0\}$ and $B=\{x:f(x)<0\}$ for $x\in\mathbb{R}^+$ partition the space and are semigroups. Some notes on it: The existence of such an $f$ is guaranteed by the axiom of choice, which is equivalent to the statement that all vector spaces have a basis. Once we have a basis $S$, we can change to the basis $S'=\{|s|:s\in S\}$ and define $f$ on $S'$ such that it takes a negative value for at least one $s\in S'$ and a non-negative value for some other $s\in S'$. A more general statement we could make is that we could partition $\mathbb{R}^+$ into as many subsemigroups as we want by choosing more linear functions - for instance, if we had an $f$ and $g$ so that none of the following were empty, we could partition into three sets as: $$\{x:f(x)\geq 0\}$$ $$\{x:f(x)< 0\wedge g(x)\geq 0\}$$ $$\{x:f(x)< 0\wedge g(x) < 0\}$$ (We could also partition into the sets for which a single $f$ is positive, negative, or zero, but this doesn't generalize well.) Another note is that $f$ could be $\mathbb{R}\rightarrow\mathbb{R}$ rather than $\mathbb{R}\rightarrow\mathbb{Q}$ - I used the latter to emphasize that I want $f$ to be linear over the rationals - that is, $f(kx)=kf(x)$ is only required when $k$ is rational.
Hatcher Exercise 2.1.26.
Your argument is not correct, for two reasons: first of all, $X/A$ is not exactly a wedge of countably many circles (this would not take into account the topology, the fact that $1/n\to 0$), and secondly, $X$ contractible does not imply $H_1(X,A) = 0$: it implies $H_1(X) = 0$. But note that you have an exact sequence $H_1(X)\to H_1(X,A)\to H_0(A)\to H_0(X)$, so since $H_0(A)$ is very big and $H_0(X), H_1(X)$ are quite small, $H_1(X,A)$ has to be quite big (I'll let you give the precise statements for this). What you want to do is actually compute $H_1(X,A)$ via this exact sequence (this is not too hard), and then identify $X/A$ more carefully. It should be closer to the Hawaiian earring than to a wedge of circles.
compute $\lim_{n\rightarrow\infty}\sum_{k=1}^n \sin(\pi \sqrt{k/n}) (1/\sqrt{kn})$
Multiplying by $\dfrac nn$ we have $~\displaystyle\lim_{n\to\infty}~\frac1n~\sum_{k=1}^n\frac{\sin\bigg(\pi~\sqrt{\dfrac kn}~\bigg)}{\sqrt{\dfrac kn}},~$ which is the Riemann sum for $\displaystyle\int_0^1\frac{\sin\pi\sqrt x}{\sqrt x}~dx,~$ which, after letting $t=\sqrt x,~$ becomes $~\color{red}2\displaystyle\int_0^1\sin\pi t~dt,~$ whose evaluation is trivial.
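A numerical check of this limit (a minimal sketch; $n$ is an arbitrary large value):

```python
import numpy as np

n = 1_000_000
k = np.arange(1, n + 1)
riemann = np.sum(np.sin(np.pi * np.sqrt(k / n)) / np.sqrt(k * n))
print(riemann, 4 / np.pi)  # the limit 2*int_0^1 sin(pi t) dt equals 4/pi
```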
Why is $\int\limits_0^1 (1-x^7)^{1/5} - (1-x^5)^{1/7} dx=0$?
Note that if $$ y = \left(1 - x^7\right)^{1/5} $$ then $$ \left(1 - y^5\right)^{1/7} = x $$ This means $(1-x^7)^{1/5}$ is the inverse function of $(1-x^5)^{1/7}$. In the graph, each one is the reflection of the other across the diagonal line $y = x$. Also, both functions share the same domain $[0, 1]$ and range $[0, 1]$, and both are monotonically decreasing. Therefore, the area under the graph on $[0, 1]$ is the same for both functions: $$ \int_0^1 \left(1-x^7\right)^{1/5} dx = \int_0^1 \left(1-y^5\right)^{1/7} dy $$ Grouping the two integrals yields the equation in the title.
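A quick quadrature check of the identity in the title (a sketch using scipy):

```python
from scipy.integrate import quad

val, err = quad(lambda x: (1 - x**7)**(1 / 5) - (1 - x**5)**(1 / 7), 0, 1)
print(val, err)  # ~0, up to the quadrature error estimate
```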
Proving the powers of a element are all distinct.
"If x is an element of finite order n in G, prove that the elements 1 , x , x2 , x3 , x4 , ..... , x(n−1) are all distinct. Deduce that |x| ≤ |G|. " Let's assume that for some two elements $x^a,x^b$ in x's cyclic group (where $0\leq a<b\leq n$) , $x^a=x^b$. if $a=0$: by the assumption and definition of the first element in a cyclic group, $x^a=x^b=1$. Therefore (by definition of a finite cyclic group), we find our cyclic group is of size $b-1<b\leq n \rightarrow |x|<n$, while by definition, $|x|=n$, a contradiction. else ($a>0$): we have a cyclic sub-group $AB=\{x^a,x^{a+1},...,x^{b-1}\}$, and since $0<a<b\leq n$, none of our elements in the cycle contain an identity element, meaning it's an infinite cyclic group. And since $AB \subset X$, we find that x too is of infinite order, a contradiction to the definition that x is of a finite order. Therefore, the initial assumption, namely that X contains two identical elements in its cyclic group, is false. Conclusion (1): X's cyclic group elements $1,x,x^2,x^3,x^4,.....,x^{(n−1)}$ are all distinct. $\blacksquare$ From here, assume $|x|>|G|$. Remember that for every elem $a\in x, a\in G$. By the Pigeonhole principle, x's cyclic group must contain at least one identical pair of values from G, in contradiction to proof (1). Therefore, our initial assumption must be false. Conclusion (2): $|x|\leq |G| \blacksquare$
What did I do wrong when integrating by parts?
Note: if $g'=x$ then $g=\frac{x^2}{2}+C$. This is where you went wrong.
How to estimate time to reach X with an exponentially increasing growth rate
You can just follow up the chain. Each thing you start with generates $X$ in a given pattern. The situation is linear, so you can just add things up. The number of $D$'s never changes, so $D(t)=D(0)$. Since each $D$ makes a $C$ after $4$ seconds, it makes $\lfloor \frac t4\rfloor$ $C$'s after $t$ seconds, and $C(t)=\lfloor \frac t4\rfloor D(0)$, because we will count the $C$'s that were there at the start later. All those $C$'s make $B$'s after $3$ seconds, so $B(t)=B(t-1)+C(t-3)=B(t-1)+\lfloor \frac {t-3}4\rfloor D(0)$. That equation hides the fact that the number of $B$'s is quadratically increasing in time, because the second term is linearly increasing in time. Then we have $A(t)=A(t-1)+B(t-2)=A(t-1)+B(t-3)+\lfloor \frac{t-5}4\rfloor D(0)$. Now it is the $B(t-3)$ term that makes $A(t)$ cubic in $t$. Finally, $X(t)=X(t-1)+A(t-1)=X(t-1)+A(t-2)+B(t-3)$ is quartic in $t$, as the sketch below confirms.
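A direct implementation of these recurrences (a sketch assuming the process starts with $D(0)=D_0$ units of $D$ and nothing else, exactly as the formulas above state):

```python
D0, T = 1, 40

def C(t):
    return (t // 4) * D0 if t >= 0 else 0   # C(t) = floor(t/4) * D(0)

B = [0] * (T + 1)
A = [0] * (T + 1)
X = [0] * (T + 1)
for t in range(1, T + 1):
    B[t] = B[t - 1] + C(t - 3)                     # B(t) = B(t-1) + C(t-3)
    A[t] = A[t - 1] + (B[t - 2] if t >= 2 else 0)  # A(t) = A(t-1) + B(t-2)
    X[t] = X[t - 1] + A[t - 1]                     # X(t) = X(t-1) + A(t-1)

print(X[::10])  # X grows roughly like t^4 (quartic), as argued above
```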
Residues and Laurent expansion
Let $f(z)=\sum\limits_{n=-m}^{\infty}a_n(z-z_0)^n$ be the Laurent expansion of $f$ at $z_0$. Then $g(z)=(z-z_0)^mf(z)$ is holomorphic around $z_0$ and its Taylor expansion at $z_0$ is $g(z)=\sum\limits_{n=0}^{\infty}a_{n-m}(z-z_0)^n$. Therefore, $a_{-1}=\frac{1}{(m-1)!}g^{(m-1)}(z_0)$, and the conclusion follows.
Is there a closed form for a give infinite sum?
Your sum does have a closed form. On page 26 of New series involving harmonic numbers and squared central binomial coefficients by John Campbell your sum is given as $$\sum_{n=0}^\infty\frac{\binom{2n}{n}^2}{16^n(n+1)^3}=\frac{48}{\pi}+16\ln(2)-\frac{32G}{\pi}-16,$$ where $G$ is Catalan's constant. Unfortunately a full derivation is not given in the source.
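A high-precision numerical check of the quoted closed form (a sketch using mpmath, whose built-in `catalan` is Catalan's constant $G$):

```python
from mpmath import mp, binomial, pi, log, catalan, nsum, inf

mp.dps = 30
lhs = nsum(lambda n: binomial(2 * n, n)**2 / (16**n * (n + 1)**3), [0, inf])
rhs = 48 / pi + 16 * log(2) - 32 * catalan / pi - 16
print(lhs)
print(rhs)  # the two agree to working precision
```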
How many equilateral triangles can be inscribed in a triangle?
There's a reason you can't prove there are no more than three inscribed equilaterals: (figure omitted)
Proving that well ordering principle implies Zorn's Lemma.
Some issues: Zorn's lemma is about partial ordered sets where every chain has an upper bound, not a maximal element. It can be very easy to arrange a chain without a maximal element in a partial ordered set which has an infinite chain to begin with. Using $\kappa^+$ to denote $\kappa\setminus\{\varnothing\}$ is a horrible choice of notation, since $\kappa^+$ denotes the successor cardinal of $\kappa$. You need to slightly modify the construction of the chain. Suppose that $\{g(\beta)\mid\beta<\alpha\}$ were defined, then $g(\alpha)$ is the least (in the well-order) such that $\{g(\beta)\mid\beta\leq\alpha\}$ is a chain. If no such choice is possible, then $\{g(\beta)\mid\beta<\alpha\}$ is a maximal chain already. You didn't have to use contradiction here.
Does boundness in Topological vector space depend on the base point we choose?
If you choose $x=(1,0)$ in $\mathbb R^2$, then the unit ball $B=B_1(0,0)$ will not be bounded according to your definition. This is because if $U=B_{1/2}(x)$, then one would never have $$ B\subseteq\lambda U, $$ because $(0,0)$ is not in $\lambda U$, unless $\lambda=0$, which clearly doesn't work either. As an alternative, instead of $\lambda U$, one may consider homothety relative to $x$, meaning the map $$ h_\lambda(u) = \lambda(u-x) + x = \lambda u + (1-\lambda)x. $$ This is like multiplying by $\lambda$ when the origin is changed to $x$. It is now easy to see that a set $B$ is bounded if and only if, for any neighborhood $U$ of $x$, there exists some $\lambda$ such that $B\subseteq h_\lambda(U)$.
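As a quick sanity check (my addition, in $\mathbb R^2$ with the usual balls): for $U=B_r(x)$ and $\lambda>0$, $$h_\lambda(B_r(x))=\{\lambda(u-x)+x:\lVert u-x\rVert<r\}=B_{\lambda r}(x),$$ so with this definition $B$ is bounded if and only if it is contained in some ball around $x$, which agrees with the usual notion of boundedness and no longer depends on the base point.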
Localization at a Maximal Ideal
Let $x\in \mathfrak{m}$ and let $n>1$ be such that $x^n=x$. Then $x^{n-1}-1$ is not in $\mathfrak{m}$, because otherwise $1=x^{n-1}-(x^{n-1}-1)\in\mathfrak{m}$ (note $x^{n-1}\in\mathfrak{m}$), which is impossible since $\mathfrak{m}$ is a prime ideal. But $(x^{n-1}-1)x=0$, and this implies that $x/1=0$ by the definition of $A_\mathfrak{m}$.
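A concrete example (my addition; it assumes the ambient hypothesis that every element of $A$ satisfies $x^n=x$ for some $n>1$): in $A=\mathbb Z/6\mathbb Z$ every element satisfies $x^3=x$. Take $\mathfrak m=(2)$ and $x=2$: $$(x^{2}-1)\,x=3\cdot 2=0,\qquad 3\notin\mathfrak m,$$ so $2/1=0$ in $A_\mathfrak m$, and indeed $A_\mathfrak m\cong\mathbb Z/2\mathbb Z$ is a field.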
limits $\displaystyle \lim_{(x,y)\rightarrow (0,0)}\frac{xy}{2x^2-y^2}$
No it doesn't. Take the test sequence $(0,1/n)$: the limit is $0$. Taking a different test sequence, $(1/n,1/n)$, gives the limit $1$. The general method is to test different, non-trivial directions: usually a sequence along $x=0$ or $y=0$, then a sequence along $y = \pm x$, suffices.
Compute the contraction of $i_\mathbb{X}\beta$
You can further simplify by using the fact that $i_{\mathbb{X}}$ is linear over smooth functions, i.e. $i_{\mathbb{X}}(f\omega) = fi_{\mathbb{X}}\omega$. To see this, note that $f$ is a $0$-form, so by the Leibniz property $$i_{\mathbb{X}}(f\omega) = i_{\mathbb{X}}f\wedge\omega + (-1)^0fi_{\mathbb{X}}\omega = fi_{\mathbb{X}}\omega$$ where we have used the fact that $i_{\mathbb{X}}f$ is a $(-1)$-form and therefore must be zero. Therefore, $$i_{\mathbb{X}}(z\,dx\wedge dy) = z\,i_{\mathbb{X}}(dx\wedge dy) = z(i_{\mathbb{X}}dx\wedge dy - dx\wedge i_{\mathbb{X}}dy).$$ As $dx$ is a one-form, $i_{\mathbb{X}}dx = dx(\mathbb{X}) = dx((0, -x, -1)) = 0$; $dx$ is the one-form which returns the first component of the vector field. Likewise, as $dy$ is a one-form, $i_{\mathbb{X}}dy = dy((0, -x, -1)) = -x$; $dy$ is the one-form which returns the second component of the vector field. Therefore $$i_{\mathbb{X}}(z\,dx\wedge dy) = z(i_{\mathbb{X}}dx\wedge dy - dx\wedge i_{\mathbb{X}}dy) = z(0 - (-x)\,dx) = xz\, dx.$$ Note, the last step of your working reduces to $xz\, dx$ once you realise that wedging with a $0$-form is just multiplication. In general, if the $k$-form $\omega$ is written as the wedge product of $1$-forms, $\omega = \alpha^1\wedge\dots\wedge\alpha^k$, then $$i_{\mathbb{X}}\omega = \sum_{i=1}^k(-1)^{i-1}\alpha^i(\mathbb{X})\,\alpha^1\wedge\dots\wedge\alpha^{i-1}\wedge\alpha^{i+1}\wedge\dots\wedge\alpha^k.$$ This can be shown by repeated application of the Leibniz rule.
Normalizing a Sturm-Liouville theory problem
You've solved correctly for $\sqrt{\lambda} L = \frac{\pi}{2}+n\pi$, i.e. $\sqrt{\lambda}=(n+\frac{1}{2})\pi/L$. So the eigenfunctions are $$ Y_n(x)=\cos((n+1/2)\pi x/L) $$ The normalization comes from trying to expand a function $f$ in a Fourier series of these eigenfunctions $$ f \sim c_1 Y_1 + c_2 Y_2 + \cdots $$ These functions are orthogonal with respect to the integral. So, multiplying by $Y_n$ and integrating both sides leaves only one term on the right $$ \int_{0}^{L}f(x)Y_n(x)dx = c_n \int_{0}^{L}Y_n(x)^2dx \\ c_n = \frac{\int_{0}^{L}f(x)Y_n(x)dx}{\int_{0}^{L}Y_n(x)^2dx} $$ Everything simplifies if you normalize $Y_n$ so that $\int_{0}^{L}Y_n(x)^2dx=1$, which means dividing the original $Y_n$ by $\sqrt{\int_{0}^{L}Y_n(x)^2dx}$, as you can see from the final series $$ f \sim \sum_{n} \frac{\int_{0}^{L}f(x')Y_n(x')dx'}{\int_{0}^{L}Y_n(x')^2dx'}Y_n(x) $$ Assuming $Y_n$ has been so normalized, the series expansion is $$ f \sim \sum_{n} \left(\int_{0}^{L}f(x')Y_n(x')dx'\right) Y_n(x). $$
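For completeness (my addition), the normalization integral can be computed explicitly: since $\sin((2n+1)\pi)=0$, $$\int_0^L\cos^2\!\left(\frac{(n+\tfrac12)\pi x}{L}\right)dx=\frac{L}{2}+\frac{L\,\sin((2n+1)\pi)}{2(2n+1)\pi}=\frac{L}{2},$$ so the normalized eigenfunctions are $\sqrt{2/L}\,\cos((n+\tfrac12)\pi x/L)$.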
Show that the set of some rotations $S_v$ is a coset of $H_v=\{T\in G:Tv=v\}$
Hint, since you say you don't know how to find $A$: $H_v$ is a subgroup, so $I \in H_v$ (where $I$ is the identity matrix). You're trying to find $A$ such that $S_v = AH_v$; since $I \in H_v$, this implies $A \in S_v$. (And in fact, you can take any such $A$.) Therefore, what you want to show is that for any $A \in S_v$ and $B \in H_v$ we have $AB \in S_v$, and, for the reverse inclusion, that $A^{-1}S \in H_v$ for every $S \in S_v$.
Differentation on time scales
You already did almost everything: the neighborhood contains $t$ and so you can take $s=t$, which forces $f^\Delta$ to be the expression that you wrote (otherwise you cannot make it less than some $\varepsilon$). PS: Notice that you forgot the term $\sqrt s$ in the line "What I have...".
Struggling to Bridge the Gap (to Rudin's Principles of Mathematical Analysis)
Rudin can be a bit terse. Here is an alternative: Introduction to Real Analysis by Bartle. It's a good book. It doesn't cover Dedekind cuts though, if that's what you're struggling with (you said that you just started reading it, and the first part of the book is on Dedekind cuts). Anyway, read what you like. Some people like algebra more than analysis. Some are the other way around. Don't think that you have to read from specific famous books. People have different learning styles, and this is one reason why we have many different books on the same subject.
How do I choose a free variable?
You did it correctly. You just made a little arithmetic mistake near the end. You got to here $$4x_1 = -2 -3C - \frac{18-5C}{-7}$$ correctly, but the line beneath that is not correct. It should actually be $$x_1 = \frac{-14-21C+18-5C}{28} = \frac{4-26C}{28} = \frac{2-13C}{14}$$ So just be careful of your signs, and keep doing what you're doing, because other than that mistake everything else you wrote is correct. You can verify for yourself that this expression for $x_1$, along with what you gave for $x_2$ and $x_3$, solves both of your equations.
Show that $x_n = \sqrt{\sum_{k=1}^n\left(\frac{k+1}{k}\right)^2} - \sqrt n$ is a bounded sequence.
Jakobian gave a very good hint. I am giving somewhat sharper bounds for $x_n$. Note that the OP incorrectly used the Cauchy–Bunyakovsky–Schwarz inequality. Indeed, when applied correctly, we get $$\sqrt{n}\left(x_n+\sqrt{n}\right)=\sqrt{\sum_{k=1}^n1^2}\sqrt{\sum_{k=1}^n\left(\frac{k+1}{k}\right)^2}\geq \sum_{k=1}^n1\cdot\left(\frac{k+1}{k}\right)=n+H_n,$$ where $H_n=\sum_{k=1}^n\frac1k$ is the $n$th harmonic number. That is, $$x_n\geq \frac{H_n}{\sqrt{n}}.$$ For the upper bound, the OP got $$x_n=\frac{\sum_{k=1}^n \frac{1}{k^2}+2H_n}{\sqrt{\sum_{k=1}^n\left(\frac{k+1}{k}\right)^2}+\sqrt{n}}.$$ We can show that $\sum_{k=1}^n\frac{1}{k^2}$ is bounded above by $2$ via $$\sum_{k=1}^n\frac{1}{k^2}<1+\sum_{k=2}^n\frac{1}{k(k-1)}=1+\left(1-\frac{1}{n}\right)<2.$$ However, a sharper upper bound is $\sum_{k=1}^n\frac{1}{k^2}<\zeta(2)=\frac{\pi^2}{6}$. Also, $$\sum_{k=1}^n\left(\frac{k+1}{k}\right)^2\geq \sum_{k=1}^n1^2=n.$$ This proves that $$x_n<\frac{\frac{\pi^2}{6}+2H_n}{\sqrt{n}+\sqrt{n}}=\frac{H_n}{\sqrt{n}}+\frac{\pi^2}{6}\left(\frac{1}{2\sqrt{n}}\right).$$ (You can replace $\frac{\pi^2}{6}$ by $2$ if you want a precalculus solution.) That is, we have $$\frac{H_n}{\sqrt{n}}\leq x_n<\frac{H_n}{\sqrt{n}}+\frac{\pi^2}{6}\left(\frac{1}{2\sqrt{n}}\right).$$ So, $x_n$ has the asymptotic behavior of $\frac{H_n}{\sqrt{n}}\approx \frac{\ln n}{\sqrt{n}}$. But to show that $x_n$ is bounded, we don't need to know that $H_n\approx \ln n$ (well, to be even more precise, $H_n\approx \gamma+\ln n$, where $\gamma$ is the Euler–Mascheroni constant). Clearly, $$\frac{\pi^2}{6}\left(\frac{1}{2\sqrt{n}}\right)\leq \frac{\pi^2}{6}\left(\frac{1}{2}\right).$$ Furthermore, $$H_n=\sum_{k=1}^n\frac{1}{k}\leq \sum_{k=1}^n\frac{1}{\sqrt{k}}<\sum_{k=1}^n\frac{2}{\sqrt{k}+\sqrt{k-1}}=2\sum_{k=1}^n(\sqrt{k}-\sqrt{k-1})=2\sqrt{n}.$$ Hence, $$x_n<\frac{2\sqrt{n}}{\sqrt{n}}+\frac{\pi^2}{6}\left(\frac{1}{2}\right)=2+\frac{\pi^2}{6}\left(\frac{1}{2}\right)<\infty.$$ Again, replace $\frac{\pi^2}{6}$ by $2$ if you don't want any non-precalculus knowledge in the proof. So, you would get $x_n<3$ for all $n$.
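A small numerical check of the final two-sided bound (my addition):

    # Verify H_n/sqrt(n) <= x_n < H_n/sqrt(n) + (pi^2/6)/(2*sqrt(n)) for a few n.
    from math import sqrt, pi

    for n in (10, 1000, 100000):
        s = sum(((k + 1) / k)**2 for k in range(1, n + 1))
        H = sum(1 / k for k in range(1, n + 1))
        x = sqrt(s) - sqrt(n)
        print(n, H / sqrt(n) <= x < H / sqrt(n) + pi**2 / (12 * sqrt(n)))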
Can you factor a difference of squares in matrices? Is $A^2 - B^2 = (A+B)(A-B)$?
$$ A^2-B^2 = (A+B)(A-B) \\ \iff A^2-B^2 = A^2-AB+BA-B^2 \\ \iff AB=BA $$ So, this holds iff $A$ and $B$ commute.
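A concrete numerical illustration (my addition; the matrices are just an arbitrary non-commuting pair):

    # The factorization fails exactly when AB != BA.
    import numpy as np

    A = np.array([[0, 1], [0, 0]])
    B = np.array([[0, 0], [1, 0]])

    lhs = A @ A - B @ B
    rhs = (A + B) @ (A - B)
    print(np.array_equal(lhs, rhs))  # False: this pair does not commute
    print(rhs - lhs)                 # equals BA - AB, the commutator defect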
Bregman divergence symmetric iff function is quadratic
If $f$ is quadratic, then $D_f(x,y)$ is just the difference between $f(x)$ and the first-order Taylor expansion of $f$ at $y$. Since $f$ is quadratic, the remainder term is easily seen to be $\frac 12 (x-y)^TQ(x-y)$, where $Q=\nabla^2 f$. Assume now that $D_f(x,y)=D_f(y,x)$. Then by differentiating this equation with respect to $x$, it follows that $$ f'(x) - f'(y) = -f''(x)(y-x) = f''(x)(x-y). $$ Hence $f'$ is affine linear, and $f$ is quadratic. Doing this the finite-difference way, the proof also shows that $f$ is twice differentiable everywhere. So twice differentiability is not an assumption but an implication of symmetry. I got this idea from Lemma 3.16 in the paper Joint and Separate Convexity of the Bregman Distance, Bauschke, Heinz H. and Borwein, Jonathan M. (2001), Studies in Computational Mathematics, 8, pp. 23–36.
Maximum of an expression with four variables
$$\frac{\left(p_1+p_4\right)\left(p_2+p_3\right)}{\left(p_1+p_3\right)\left(p_2+p_4\right)} = 1 + \frac{\left(p_2-p_1\right)}{\left(p_1+p_3\right)} \times \frac{\left(p_4-p_3\right)}{\left(p_2+p_4\right)} $$ and the right hand side is greater than or equal to $1 + 0 \times 0$ but less than $1 + 1 \times 1$. The values $(1,n,n,n^2)$ achieve the lower bound when $n=1$ but approach the upper bound when $n$ increases without limit: $n=400$ gives a figure over $1.99$.
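The identity can be verified symbolically (my addition):

    # Symbolic check that the difference of the two sides is zero.
    from sympy import symbols, simplify

    p1, p2, p3, p4 = symbols('p1 p2 p3 p4', positive=True)
    lhs = (p1 + p4)*(p2 + p3) / ((p1 + p3)*(p2 + p4))
    rhs = 1 + (p2 - p1)/(p1 + p3) * (p4 - p3)/(p2 + p4)
    print(simplify(lhs - rhs))  # 0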
Which way does the gradient point?
The gradient of a function of 2 variables $f(x,y)$ is $(\partial f/\partial x, \partial f/\partial y)$. In your case you happen to have used $z$ as the symbol for your function, so the gradient is $(\partial z/\partial x, \partial z/\partial y)$. Now, it is another matter if you want to find a normal vector to some surface. The normal is parallel to the gradient vector of the level surfaces. It doesn't matter which way you rearrange your function into a level surface. Yes, the 'gradients' (i.e. the vectors of partial derivatives) point in opposite directions, but this doesn't matter because they are parallel and either is suitable as a normal vector. In other words, if $\vec{n}$ is a normal vector, then so is $-\vec{n}$.
Test for convergence of integral
Since $\lim_{u\to0}{\sin u\over u}=1$, as $x\to\infty$ we have $x\sin(x^{-1.5})\sim x\cdot x^{-1.5}=x^{-0.5}$. For $c>0$, $\int_c^\infty x^{-p}\,dx<\infty\iff p>1$, so by limit comparison the second integral (over $[c,\infty)$) diverges. On the other hand, $|x\sin(x^{-1.5})|\le |x|$ and $\int_0^c|x|\,dx=0.5c^2$, so the first integral (over $[0,c]$) converges. Hence $\int_0^\infty x\sin(x^{-1.5})\,dx$ doesn't converge.
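A rough numerical illustration (my addition, assuming scipy is available): the partial integrals $\int_1^X x\sin(x^{-1.5})\,dx$ grow like $2\sqrt X$, consistent with divergence.

    # Compare partial integrals with the comparison function's antiderivative.
    from math import sin, sqrt
    from scipy.integrate import quad

    for X in (1e2, 1e4, 1e6):
        val, err = quad(lambda x: x * sin(x**-1.5), 1, X, limit=500)
        print(X, val, 2 * sqrt(X) - 2)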
Show that $1, (x-5)^2, (x-5)^3$ is a basis of the subspace $U$ of $\mathcal P_3(\Bbb R)$
I think what Axler is saying is that $\dim U$ cannot be equal to 4 because $U$ is not all of $\mathcal P_3(\mathbb R)$. It is easy to see that they are not equal by just finding a polynomial whose derivative at $x=5$ is not 0. (For example, try $p(x)=x$.) If you take this polynomial and add it to a basis for $U$, you should still have a linearly independent set, because it is not a linear combination of the basis elements of $U$. But if there were already 4 elements in a basis for $U$, then we would end up with 5 independent vectors in $\mathcal P_3(\mathbb R)$, which is impossible as the dimension of this space is 4. So I guess Axler is saying: whatever the dimension of $U$ is, you can add vectors to a basis for $U$ until you get a basis for $\mathcal P_3(\mathbb R)$, and you would have to add at least one vector.
Show that $\lim\limits_{n \to \infty} \int_{\Bbb R} f(x) g(x+n) dx=0$
Your argument seems correct. Maybe it would be better to first fix $\varepsilon$, then choose $n_0$ such that for all $n\geqslant n_0$, $\int_{|x| > \frac{n}{2}} g(x)^2 dx\lt \varepsilon$, and the same for $f$. Also, unless you specify that the norm of $f$ is one, you can only bound by $\|f\|_2 \cdot \varepsilon$. Consequently, the final bound is $\varepsilon\lVert f\rVert_2+\varepsilon\lVert g\rVert_2$.
Prove that $n \ln(n) - n \le \ln(n!)$ without Stirling
Recall that the sequence $$ e_n = \left( 1+ \frac 1n \right)^n $$ is increasing and converges to $e$. Thus, $$ e^n \ge e_1 \cdot e_2 \cdot \ldots \cdot e_n = \frac{(n+1)^n}{n!}. $$ In particular we obtain the weaker inequality $e^n \ge \frac{n^n}{n!}$, which is equivalent to $n \ln n - n \le \ln (n!)$. A by-product of this is that the sequence $$ \sqrt[n]{e_1 \cdot e_2 \cdot \ldots \cdot e_n} = \frac{n+1}{\sqrt[n]{n!}} $$ is increasing and tends to $e$ (by an application of Stolz-Cesaro theorem).
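A quick numerical check of the resulting inequality (my addition), using $\ln(n!) = \texttt{lgamma}(n+1)$:

    # n*ln(n) - n <= ln(n!) for a large range of n.
    from math import lgamma, log

    assert all(n * log(n) - n <= lgamma(n + 1) for n in range(1, 10**5))
    print("ok")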
Simplify the following sum on binomial numbers
I'm not hopeful about this. I wrote a Python script to generate the first few polynomials where we hold $b$ fixed and let $a$ vary. In each case, I used sympy to interpolate a polynomial from the first $b+1$ values. It then checks that this gives a polynomial of degree $b$ and tests that the next $b$ values are indeed given by the polynomial.

    from sympy import factorial as fact
    from sympy import interpolate, poly
    from sympy.abc import x

    def choose(n, m):
        # binomial coefficient, zero when either argument is negative
        if n < 0 or m < 0:
            return 0
        return fact(n) // (fact(m) * fact(n - m))

    def f(a, b):
        return sum(choose(b + j - 1, b - j - 1) * choose(b - j + a, a) * (-2)**(b - j)
                   for j in range(b + 1))

    for b in range(1, 10):
        p = poly(interpolate({a: f(a, b) for a in range(b + 1)}, x))
        assert p.degree() == b
        assert all(p(a) == f(a, b) for a in range(b + 1, 2 * b))
        print(p.all_coeffs())

The output is given below for the polynomials of degree $1$ through $9$, as lists of coefficients, leading coefficient first.

    [-2, -2]
    [2, 4, 2]
    [-4/3, -2, 4/3, 2]
    [2/3, -4/3, -44/3, -80/3, -14]
    [-4/15, 8/3, 24, 202/3, 1204/15, 34]
    [4/45, -32/15, -190/9, -236/3, -6524/45, -656/5, -46]
    [-8/315, 52/45, 556/45, 476/9, 5104/45, 5398/45, 5276/105, 2]
    [2/315, -152/315, -236/45, -944/45, -956/45, 4396/45, 109772/315, 14832/35, 178]
    [-4/2835, 52/315, 1576/945, 16/5, -1064/27, -1524/5, -2817608/2835, -543238/315, -483964/315, -542]

It doesn't seem very encouraging.
Limit of Exponential Function
If you are allowed to use the fact that $$\lim_{y \to \infty}\left(1+\frac{a}{y}\right)^y=e^a,$$ then you can rewrite your expression as $$\lim_{x\to \infty}\left(\left(1-\frac{4}{x+3}\right)^{x+3}\right)^{\frac{x-2}{x+3}}.$$ As $x\to \infty$, $x+3\to \infty$, and therefore $$\lim_{x\to \infty}\left(1-\frac{4}{x+3}\right)^{x+3}=e^{-4}.$$ But $$\lim_{x\to\infty} \frac{x-2}{x+3}=1,$$ and the result follows.
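A numerical sanity check (my addition; it assumes the original limit was $\lim_{x\to\infty}\left(\frac{x-1}{x+3}\right)^{x-2}$, which is what the rewriting above reconstructs):

    # Evaluate at a large x and compare with e^{-4}.
    from math import exp

    f = lambda x: ((x - 1) / (x + 3))**(x - 2)
    print(f(1e7), exp(-4))  # both approximately 0.0183...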
To each point of $\mathbb Z^2$ is assigned a positive integer
In any set (finite or infinite) of positive integers there is a smallest value. Therefore we may consider a smallest label $m$. Let $L$ be a lattice point labeled by $m$, and let its neighbours be labeled by $a$, $b$, $c$, $d$. Then $m=(a+b+c+d)/4$, i.e. $$a+b+c+d=4m. \tag{1}$$ Now, $a\geq m$, $b\geq m$, $c\geq m$, $d\geq m$. If any of these inequalities were strict, we would have $a+b+c+d>4m$, which contradicts (1). Thus $a=b=c=d=m$. So the neighbours of any point carrying the minimal label also carry the minimal label; since any two lattice points are connected by a finite path of neighbours, it follows that all labels are equal.
Formal Limit Proofs for Limits Involving Factorials
$n! > n^5$ for $n > 10$. Then $$0 < \frac{n^4}{n^2 + n!} < \frac{n^4}{n!} < \frac{n^4}{n^5} = \frac{1}{n},$$ and you can use the squeeze theorem.
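A quick brute-force check of the bound (my addition):

    # 0 < n^4/(n^2 + n!) < 1/n holds for n > 10.
    from math import factorial

    assert all(0 < n**4 / (n**2 + factorial(n)) < 1 / n for n in range(11, 200))
    print("ok")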
If $a$ is an integer, prove that $gcd(14a + 3, 21a + 4) = 1$
Using Euclidean algorithm: \begin{align} \gcd(14a+3,21a+4)&=\gcd(14a+3,7a+1)\\ &=\gcd(7a+1,1)\\ &=1 \end{align} Equivalently, and more directly, \begin{align} (14a+3)(3)+(21a+4)(-2)&=(42a+9)-(42a+8)\\ &=1 \end{align}
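And a quick brute-force check (my addition):

    # gcd(14a+3, 21a+4) == 1 over a range of integers a.
    from math import gcd

    assert all(gcd(14*a + 3, 21*a + 4) == 1 for a in range(-10**4, 10**4))
    print("ok")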