Particular solution of $t^2y''-2y=t^2$ using the method of variation of parameters
Put the differential equation in standard form before applying the method of variation of parameters: $$y''+b(t)y'+c(t)y=f(t)$$ $$y''-\dfrac 2{t^2}y=\color{red}{1}$$ So now, with the homogeneous solutions $y_1=t^2$ and $y_2=t^{-1}$, you have: $$u_1'(t) = -\frac{ y_2(t)f(t)}{W[y_1,y_2](t)} = \frac{1}{3t} $$ And $$u_2'(t) = \frac{ y_1(t)f(t)}{W[y_1,y_2](t)} = -\frac{1}{3}t^2$$
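As a quick sanity check (my addition, not part of the original answer): integrating gives $u_1=\frac13\ln t$ and $u_2=-\frac{t^3}{9}$, so $y_p=u_1y_1+u_2y_2=\frac{t^2}{3}\ln t-\frac{t^2}{9}$, and the $-\frac{t^2}{9}$ piece can be absorbed into the homogeneous solution. A short sympy verification sketch:

```python
import sympy as sp

t = sp.symbols('t', positive=True)

# Homogeneous solutions y1 = t^2, y2 = 1/t, and the u_i obtained by
# integrating u1' = 1/(3t), u2' = -t^2/3.
u1 = sp.log(t) / 3
u2 = -t**3 / 9
yp = u1 * t**2 + u2 / t           # candidate particular solution

residual = t**2 * sp.diff(yp, t, 2) - 2 * yp - t**2
print(sp.simplify(residual))       # 0  ->  yp solves t^2 y'' - 2 y = t^2
```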
What does the partial derivative notation $\frac{\partial (f,g)}{\partial (x,y)}$ mean?
In vector calculus, we said, for the generalization of the chain rule, that $$\frac{\partial \mathbf f}{\partial \mathbf x}=\frac{\partial \mathbf f}{\partial \mathbf u}\frac{\partial \mathbf u}{\partial \mathbf x}$$ where $\mathbf x, \mathbf f, \mathbf u$ are vectors (of functions). In that spirit, $\frac{\partial (f, g)}{\partial (x, y)}$ is the matrix of partial derivatives of the pair $(f,g)$ with respect to $(x,y)$: $$\frac{\partial (f, g)}{\partial (x, y)}= \begin{bmatrix} \frac{\partial f}{\partial x}&\frac{\partial f}{\partial y}\\ \frac{\partial g}{\partial x} &\frac{\partial g}{\partial y} \end{bmatrix}$$ It is true that some books use this notation for the determinant of this matrix (the book I used did not). Then we get $$ \begin{vmatrix} \frac{\partial f}{\partial x}&\frac{\partial f}{\partial y}\\ \frac{\partial g}{\partial x} &\frac{\partial g}{\partial y} \end{vmatrix} = \frac{\partial f}{\partial x}\frac{\partial g}{\partial y}- \frac{\partial f}{\partial y} \frac{\partial g}{\partial x}, $$ which is the Jacobian determinant of $h(x,y)=(f(x,y), g(x,y))$.
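As a concrete illustration (my addition), sympy will compute both the matrix and its determinant for a specific pair $f,g$:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * y
g = sp.sin(x) + y

J = sp.Matrix([f, g]).jacobian([x, y])   # the matrix d(f,g)/d(x,y)
print(J)                                  # Matrix([[2*x*y, x**2], [cos(x), 1]])
print(J.det())                            # 2*x*y - x**2*cos(x)
```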
Finding a basis to a vector space
What you have done is define the map $$\begin{eqnarray}\Bbb R^2 &\rightarrow& W \\ \begin{pmatrix} x_1 \\x_2 \end{pmatrix} &\mapsto& \begin{pmatrix} -x_1-x_2 \\ x_1 \\ x_2 \end{pmatrix}\end{eqnarray}$$ (or with $\Bbb R$ replaced by whatever base field you are working with). It is easy to show that this is an isomorphism, so it sends a basis of $\Bbb R^2$ to a basis of $W$. If you are familiar with isomorphisms, I think this may be easier to prove than checking directly that something is a basis of $W$.
Why is this a quotient map
There is indeed an obvious reason: Continuous maps from quasicompact spaces to Hausdorff spaces are closed. And continuous and closed surjective maps are quotient maps.
Linear Algebra: Question on if this proof can conclude this way, or not.
Yes, one needs to show that the decomposition is unique. This follows canonically from the fact that you have a direct sum, i.e. $(\ker P)\cap (\text{ran}\,P)=\{0\}$. Your argument is not really a proof, because it doesn't show that $y$ exists. The natural way to show that every $x$ admits the required decomposition is to write $$x=(I-P)x+Px.$$
Let $R = \mathbb Z[X]$. Show that $I = \{n + XP : n\in 2\mathbb Z,\ P\in R\}$ is an ideal of $R$ and that it is not a principal ideal
We want to show that $\langle p\rangle\ne I$ for all $p\in I$. Pick $p\in I$, and proceed by cases: (a) if $p=0$ then $\langle p\rangle=0$, which is different from $I$. (b) if $p$ is a polynomial of degree at least $1$, then $\langle p\rangle\ne I$, as $2\in I\setminus \langle p\rangle$ (every nonzero element of $\langle p\rangle$ has degree at least $\deg p\ge 1$). (c) if $p\in\mathbb Z$, then $p$ is even, so every $q\in \langle p\rangle$ is a polynomial with all coefficients even, hence $x+2\in I\setminus \langle p\rangle$. So $I$ cannot be generated by any of its elements.
Eigenvalue inequality $\lambda_{\min}(AB) \geq \lambda_{\min}(A)\lambda_{\min}(B)$
Hint. Suppose you can show that $\lambda_\min(M) = \min_{x\ne0} \frac{\|Mx\|}{\|x\|}$ whenever $M$ is positive definite. It follows that $\lambda_\min(M)\|x\| \le \|Mx\|$ for every vector $x$. Therefore, $$ \lambda_\min(A)\lambda_\min(B)\|x\| \le \lambda_\min(A)\|Bx\| \le \|ABx\| $$ for every nonzero vector $x$. Taking $x$ to be a unit eigenvector corresponding to the minimum eigenvalue (whatever that means here) of $AB$, we get $\lambda_\min(A)\lambda_\min(B)\le |\lambda_\min(AB)|$. Note that for any two complex square matrices $X$ and $Y$ of the same size, $XY$ and $YX$ have identical characteristic polynomials and identical spectra. Hence $\lambda_\min(AB)=\lambda_\min(A^{1/2}BA^{1/2})$. Can you show that $\lambda_\min(A^{1/2}BA^{1/2})\ge0$?
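Not part of the hint, but a quick numerical sanity check of the inequality (a sketch; it assumes $A$ and $B$ are symmetric positive definite, as in the hint):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    # M @ M.T + I is symmetric positive definite
    M = rng.standard_normal((n, n))
    return M @ M.T + np.eye(n)

for _ in range(1000):
    A, B = random_spd(5), random_spd(5)
    lhs = np.min(np.real(np.linalg.eigvals(A @ B)))   # eigenvalues of AB are real and positive here
    rhs = np.min(np.linalg.eigvalsh(A)) * np.min(np.linalg.eigvalsh(B))
    assert lhs >= rhs - 1e-9
print("inequality held in all trials")
```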
Prove that $\tan^{-1}(1/n)+\tan^{-1}(2/n)+\cdots+\tan^{-1}(n/n)$ increases as $n$ increases
Notice for any $1 \le k \le n$, $$\frac{k+1}{n+1} - \frac{k}{n} = \frac{n-k}{n(n+1)} \ge 0 \implies \tan^{-1}\frac{k+1}{n+1} \ge \tan^{-1}\frac{k}{n}$$ We have $$f(n+1) = \sum_{k=1}^{n+1}\tan^{-1}\frac{k}{n+1} = \tan^{-1}\frac{1}{n+1} + \sum_{k=1}^n \tan^{-1}\frac{k+1}{n+1} > \sum_{k=1}^n\tan^{-1}\frac{k}{n} = f(n)$$
What is the connection between expected value (X) of a dice roll and predicting the odds of "X+0.5 or more" at least 50% of the time?
I hope this gets at your question; please let me know if not. There are two separate problems at play as I understand it. There's this operation of adding 0.5 and rounding down (a.k.a. flooring) and there's an interesting fact about symmetric distributions. First of all, the sequence of operations of adding 0.5 to a number and then rounding down is equivalent to rounding to the nearest integer for positive numbers (breaking ties by rounding up). Second, the distribution you're working with is symmetric. The distribution for any fair die $dN$ is uniform, and thus symmetric, and the sum of symmetrically distributed random variables is also symmetrically distributed. With symmetric distributions, the expected value coincides with the median. This means that with probability $1/2$, the outcome is greater than or equal to the expected value. With dice, the expected value can be a rational (non-integer) number despite all the outcomes being integers. What this means in your situation is that we can round the expected value (using the "add 0.5 and floor" method) and claim that with probability one half the outcome will be at least that big. This would be the end of my answer if we were just talking about the faces of the dice representing themselves. However, you mentioned a game where the faces get mapped to some hit numbers: $1,2,3,4,5$ are $0$ hits, $6,7,8,9$ are $1$ hit, and $10$ is $2$ hits. We can think of this as a discrete random variable $X$ where $P(X=0)=5/10$, $P(X=1)=4/10$ and $P(X=2)=1/10$. This is not a symmetric distribution. The explanation for why it still works in the $kd10$ examples you simulated is that for the single case $1d10$ the expected value (i.e., mean) and the median differ by $0.1<0.5$. You can check to see what the difference between the median and the expected value is for other $k$ in your $kd10$ experiments.
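A small Monte Carlo sketch (my addition) for checking this empirically; the hit mapping is the one described in the question, and $k$ is the number of d10s rolled:

```python
import numpy as np

rng = np.random.default_rng(1)
# hits for faces 1..10: 1-5 -> 0, 6-9 -> 1, 10 -> 2 (the mapping from the question)
hit_value = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 2])

def prob_at_least_rounded_mean(k, n_sims=200_000):
    rolls = rng.integers(1, 11, size=(n_sims, k))   # k d10 per simulation
    totals = hit_value[rolls - 1].sum(axis=1)
    expected = 0.6 * k                               # E[hits per d10] = 0.4*1 + 0.1*2
    threshold = int(np.floor(expected + 0.5))        # "add 0.5 and floor"
    return np.mean(totals >= threshold)

for k in (1, 3, 5, 10):
    print(k, prob_at_least_rounded_mean(k))
```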
Reconciling statements about accessible categories
In Chapter 5 of Locally presentable and accessible categories it is shown that the category of models of a first order theory and elementary embeddings is accessible. The proof uses the Löwenheim–Skolem theorem. It is not explicitly calculated, but it should be the case that the category is $\aleph_1$-accessible if the signature is finitary and the language is countable. I am not sure why the focus is on elementary embeddings. Perhaps it has to do with the practice of model theory. However, when working with models of a theory in a more restricted fragment of logic, say geometric logic, category theorists do in fact usually consider the category of models and homomorphisms.
Nonlinear Implicit Finite Difference
In this case, a common approach is to make the nonlinear part explicit. You start with the initial condition $u^{(0)}$ and, for each $k\ge 0$, solve a linear problem. The following example uses a simple forward difference for the time derivative, but you can replace it with something fancier. $$ \frac{u^{(k+1)}- u^{(k)}}{\delta t} = \Delta u^{(k+1)}+f(u^{(k)}) $$ or, equivalently, $$ -\Delta u^{(k+1)}+\frac{1}{\delta t} u^{(k+1)} = \frac{1}{\delta t} u^{(k)}+f(u^{(k)}) $$
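For concreteness, here is a minimal 1-D sketch of this semi-implicit idea (my own illustration, not from the question): the Laplacian is treated implicitly and the nonlinear term, taken here to be $f(u)=u^2$, explicitly, with homogeneous Dirichlet boundary conditions.

```python
import numpy as np

# Solve u_t = u_xx + f(u) on (0,1) with u(0)=u(1)=0, semi-implicitly:
#   (I/dt - Laplacian) u^{k+1} = u^k/dt + f(u^k)
n, dt, steps = 100, 1e-3, 200
x = np.linspace(0.0, 1.0, n + 2)[1:-1]        # interior grid points
h = x[1] - x[0]
f = lambda u: u**2                             # the (explicit) nonlinearity, an assumed example

# Dense 1-D Laplacian; use scipy.sparse for larger problems
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
A = np.eye(n) / dt - L                         # constant matrix; factor it once in practice

u = np.sin(np.pi * x)                          # initial condition u^(0)
for _ in range(steps):
    u = np.linalg.solve(A, u / dt + f(u))
print(u.max())
```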
Compute the principal divisors of a hyperelliptic surface.
"... and so we conclude that $div(x)=2P_0-2P_\infty$ and $div(y)=P_0+P_1+P_{-1}+P_i+P_{-i}-5P_\infty$ in the notation explained with exquisite precision above." [Excerpt from a manuscript found in a Klein bottle]
Order of $\theta$-methods for IVP
The truncation error of the method is $$ \epsilon = \Delta t\left(\frac{1}{2} - \theta\right) y''(t_{n+1/2}) -\frac{\Delta t^2}{12} y'''(t_{n+1/2}) + O(\Delta t^3) $$ When $\theta$ approaches $\frac{1}{2}$ the linear term of the error tends to zero and finally is zero when $\theta = \frac{1}{2}$. Let $\Delta t_0 = \left|(6-12\theta) \frac{y''(t_{n+1/2})}{y'''(t_{n+1/2})}\right|$. For $\Delta t \gg \Delta t_0$ the linear term is small and the method effectively behaves like a second order one, but when $\Delta t \ll \Delta t_0$ the linear term dominates and the method behaves like a first order one. Note the closer $\theta$ is to $1/2$ the smaller is $\Delta t_0$. Finally for $\theta = 1/2$, $\Delta t_0 = 0$.
Is this function integrable?
Note that for $x \in [0,1[$, $$ \frac 1{1-x} = \sum_{n=0}^{\infty} x^n, $$ hence when $x \to 1$, $\frac 1{1-x} \to \sum_{n=0}^{\infty} 1 = \infty$ (say by the monotone convergence theorem for instance). The reason why you can always integrate $f$ is because $f$ is positive on $[0,1[$. To see if $f$ is integrable (which is a different question), notice that $$ \int_0^1 \frac 1{1-x} \, dx = \int_0^1 \sum_{n=0}^{\infty} x^n \, dx = \sum_{n=0}^{\infty} \int_0^1 x^n \, dx = \sum_{n=0}^{\infty} \frac 1{n+1} = \infty. $$ where the swap of the series and the integral is given by the monotone convergence theorem of the Lebesgue integral. If you use the Riemann integral, you can always compute $$ \int_0^1 \frac 1{1-x} dx = \lim_{a \nearrow 1} \int_0^a \frac 1{1-x} dx = \lim_{a \nearrow 1} -\log(1-a) = \infty. $$ EDIT : Andre Nicolas, I don't see why you deleted your answer. Your idea is perfectly fine : $$ \int_0^1 \frac 1{1-x} dx = \int_0^1 \frac 1u du. $$ (using the change of variables $1-x = u$). In this case we can only use the Riemann-integral trick though : $$ = \lim_{a \searrow 0} \int_a^1 \frac 1u du = \lim_{a \searrow 0} [\log(1) - \log(a)] = \infty. $$ If you are working with the Lebesgue integral, you can always truncate the function $\frac 1u$ and do something that allows you to get the equivalent of the limit using the monotone convergence theorem. But we're probably working too hard for this at this point. Hope that helps,
Does $K \subseteq L$ and $K \cong L \implies$ $K=L$?
No. Let $X=\{x_n\mid n\in\mathbb{N}\}$ be countably many variables, indexed by $\mathbb{N}$, and $Y=\{x_n\mid n\in\mathbb{Z}\}$ be countably many variables, indexed by $\mathbb{Z}$. Then $K=\mathbb{Q}(X)$ is isomorphic to $L=\mathbb{Q}(Y)$; and $K\subseteq L$, but $K\neq L$. As Lord Shark also notes, if $x$ is an indeterminate, then $\mathbf{F}(x^2)$ is isomorphic to $\mathbf{F}(x)$ (for any field $\mathbf{F}$), and $\mathbf{F}(x^2)\subseteq \mathbf{F}(x)$, but they are not equal. Note that the example originally given as applying to rings is incorrect; $2\mathbb{Z}$ is not a ring with identity, so it is not isomorphic to $\mathbb{Z}$. But you can compare the polynomial ring $\mathbb{Z}[X]$ with $\mathbb{Z}[Y]$ ($X$ and $Y$ as above) if you don’t want fields in your example.
c.l.u.b. set and type of $T$
$T_\alpha$ is the $\alpha$-th level of the tree $T$. It’s in the little Notation section just before Theorem $\bf{6.1}$: For an $\aleph_1$-tree $T$, $T_i$ is the $i$-th level, $T\upharpoonright i=\bigcup_{j<i}T_j$, and for $x\in T_\beta$, $\alpha\le\beta$, $x\upharpoonright\alpha$ is the unique $y\in T_\alpha$, $y\le x$. Here $T$ is any Aronszajn tree.
$\mathrm{Hom}_{\mathbf{Set}}(G_1 \times G_2,R) \cong \mathrm{Hom}_{\mathbf{Set}}(G_1,R) \otimes_R \mathrm{Hom}_{\mathbf{Set}}(G_2,R)$?
This is a comment made into an answer. For finite sets $X$, $$\mathrm{Mor}_{\mathbf{Set}}(X,R)\simeq\prod_X R\simeq\oplus_XR$$ are isomorphic as $(R,R)$-bimodules, and if both $X$ and $Y$ are finite, then $$\left(\oplus_X R\right)\otimes_R\left(\oplus_Y R\right)\simeq\oplus_{X\times Y}R$$ again as $(R,R)$-bimodules.
Is the product rule $f_n \to f$, $g_n\to g \Rightarrow f_ng_n\to fg$ true in the space $C[0,1]$?
No it is not correct. How do you know that $n\Vert g_n - g\Vert_\infty\to 0$? I also don't see why the estimation where you introduce $n$ should hold. Rather, use that (why does this inequality hold?)$$\Vert f g \Vert_\infty \le \Vert f \Vert_\infty \Vert g \Vert_\infty$$ to deduce $$\Vert f_ng_n -fg \Vert_\infty = \Vert f_n(g_n-g) + g(f_n-f)\Vert_\infty$$ $$\leq \Vert f_n \Vert_\infty \Vert g_n-g \Vert_\infty + \Vert g \Vert_\infty \Vert f_n-f\Vert_\infty$$ together with the fact that $\{\Vert f_n \Vert_\infty\}_n$ is a bounded sequence (why is this true and why is this relevant?) I do not supply a hint for $\Vert \cdot \Vert_1$ since you did not include your attempt for that subquestion.
Probability you can walk from point $A$ to point $B$
You're right, you're double-counting in the sense that the events whose probabilities you're adding in $(1)$ aren't mutually exclusive. In $(1)$, you want $\mathsf P((R_1\cap R_4)\cup(R_2\cap R_5)\cup(R_1\cap R_3\cap R_5)\cup(R_2\cap R_3\cap R_4))$. You can evaluate this using inclusion–exclusion as \begin{eqnarray*} &&\mathsf P((R_1\cap R_4)\cup(R_2\cap R_5)\cup(R_1\cap R_3\cap R_5)\cup(R_2\cap R_3\cap R_4))\\ &=& \mathsf P(R_1\cap R_4)+\mathsf P(R_2\cap R_5)+\mathsf P(R_1\cap R_3\cap R_5)+\mathsf P(R_2\cap R_3\cap R_4)-\mathsf P((R_1\cap R_4)\cap(R_2\cap R_5))\\ &&-\cdots-P((R_1\cap R_4)\cap(R_2\cap R_5)\cap(R_1\cap R_3\cap R_5)\cap(R_2\cap R_3\cap R_4))\\ &=& 3^{-2}+3^{-2}+3^{-3}+3^{-3}-3^{-4}-3^{-4}-3^{-4}-3^{-4}-3^{-4}-3^{-5}+4\cdot3^{-5}-3^{-5}=\frac{59}{243}\;, \end{eqnarray*} and likewise for the probabilities you need to apply Bayes's rule in $(2)$. Edit: Actually, your calculation in part $2$ suggests an easier way to calculate the probabilty in part $1$. This is $\mathsf P(Y)$, the denominator in part $2$, and we don't need the full inclusion–exclusion machinery to evaluate that: \begin{eqnarray*} \mathsf P(Y) &=& \mathsf P(Y\mid T)\mathsf P(T)+\mathsf P(Y\mid \overline T)\mathsf P(\overline T) \\ &=& \mathsf P((R_1\cup R_2)\cap(R_4\cup R_5))\mathsf P(R_3)+\mathsf P((R_1\cap R_4)\cup(R_2\cap R_5))\mathsf P(\overline{R_3}) \\ &=& \left(1-\left(1-\frac13\right)^2\right)^2\cdot\frac13+\left(1-\left(1-\left(\frac13\right)^2\right)^2\right)\cdot\frac23 \\ &=& \frac{25}{81}\cdot\frac13+\frac{17}{81}\cdot\frac23 \\ &=& \frac{59}{243}\;, \end{eqnarray*} in agreement with the previous result.
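Not part of the calculation, but a quick simulation sketch (assuming, as the powers of $3$ above suggest, that each $R_i$ occurs independently with probability $\frac13$); the estimate should land near $59/243\approx 0.243$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
R = rng.random((n, 5)) < 1/3          # columns: does R1,...,R5 occur?
r1, r2, r3, r4, r5 = R.T

connected = (r1 & r4) | (r2 & r5) | (r1 & r3 & r5) | (r2 & r3 & r4)
print(connected.mean(), 59/243)
```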
Can Extraneous Roots be Introduced by Elimination?
The problem is that your reasoning isn't reversible. Your two equations together imply $y^2-x^2-1=0$, but the converse is not true: that equation does not imply your original system. Compare your question to the following argument. The system $$x+y=1$$ $$2x+y=1$$ yields, by subtraction, that $x=0$. But this second equation allows $y$ to be arbitrary! So we get a whole lot of "extraneous solutions": $(0,y)$, for any $y$. The problem, of course, is that the equation $x=0$ does not, in turn, imply the original system. I take it you do not find this situation puzzling. You generally lose information when you replace a system of equations with a linear combination of them. The combination will include the solutions of your original equation, but it likely will include non-solutions, as well. The form of the reasoning is: "any solution of the original equations will be a solution of this equation, too." Yes, but then you've only found a superset containing the solution set. To characterize the solution set exactly, you must worry about the converse.
If $f(x)$ is the square root of the number that is $2$ more than $x,$ what is the value of $f(7) - f(-1)$?
The given answer is taking what is referred to as the principal square root, rather like $x=\frac{\pi}{6}$ (that is, $x=30^\circ$) is the principal root of $\sin(x)=\frac{1}{2}$: there are many other solutions, but this is the one that is most commonly used, and the one given by calculators. In this question, it is standard to use the principal square root. Hence we get $\sqrt{9}=3$, $\sqrt{1}=1$, thus $f(7)-f(-1)=3-1=2$.
Do two rational parametric curves intersect only finitely many times?
$$\newcommand{\spec}[1]{\mathrm{Spec}({#1})}$$ Let $$ \Phi: t \mapsto (\psi_1(t),\ldots, \psi_n(t)) $$ be a map from $k - \{P_1,\ldots, P_s\}$ to $k^n$ where $k$ is an arbitrary field, $\psi_i(t) = p_i(t)/q_i(t)$ with polynomials $p_i, q_i \in k[t]$, and $P_i$ are the zeroes of all the $q_i(t)$. Furthermore let $\Delta(t) = \prod_i q_i(t)$. Now consider the map $f:A = k[x_1,\ldots,x_n] \to k[t]_{\Delta(t)} \subset k(t)$ with $f(x_i) = \psi_i(t)$. Call $I = \ker f$, the kernel-ideal of $f$ in $A$. Then we have a sequence of integral domains and $k$-algebras $$(**)\quad A \twoheadrightarrow A/I \hookrightarrow k[t]_{\Delta(t)} \hookrightarrow k(t)$$ Now citing a theorem of algebraic geometry, every finitely generated integral $k$-algebra $R$ corresponds to a variety (a scheme) $X = \spec{R}$, which has as its closed points the maximal ideals of $R$. Furthermore $X$ is irreducible (cannot be decomposed into the union of two proper closed subsets) and reduced ($R$ has no nilpotent elements). The ring $A$ from above corresponds to the variety $\mathbb{A}^n_k$, the $n$-dimensional affine space over $k$. A surjection $R \twoheadrightarrow S$ corresponds to a "closed immersion" of varieties $\spec{S} \hookrightarrow \spec{R}$, which is a "good embedding" of $\spec{S}$ as a closed subvariety in $\spec{R}$. A localization $R_r$ with $r \in R$ corresponds to the variety $\spec{R_r} = D(r)$, which is the complement of the zero-set $V(r)$ in $\spec{R}$. So we can read the sequence $(**)$ as saying that it gives the map $\Phi$ of the affine $1$-space minus the points $P_1,\ldots,P_s$ (which corresponds to $k[t]_{\Delta(t)}$) into $\mathbb{A}^n_k$. The image $\Phi(\spec{k[t]_{\Delta(t)}})$ is dense in the closed subvariety $\spec{A/I}$ of $\mathbb{A}^n_k = \spec{A}$ (the denseness follows from the injectivity of $A/I \hookrightarrow k[t]_{\Delta(t)}$). Now in the case of a general field $k$ the ideal $I$ has, for $n > 2$, more than one generator for reasons of dimension. But in the case $k = \mathbb{R}$ we can take generators $F_1,\ldots,F_r \in I$ and form the single polynomial $F_1^2 + \cdots + F_r^2 = F$. Then $V(F)$ has the same real zeros in $\mathbb{R}^n$ as $F_1,\ldots,F_r$ together. Computing $I$ is easy with a system like Macaulay 2 (try it online):

```
i13 : R = QQ[x,y,z]
o13 = R
o13 : PolynomialRing

i14 : S = QQ[t]
o14 = S
o14 : PolynomialRing

i15 : KS = frac S
o15 = KS
o15 : FractionField

i16 : phi = map(KS, R, {t^2/(t^2+1), (1-t^2)/(1+t^2), 2*t/(1+t^2)})
o16 = map(KS, R, {t^2/(t^2+1), (1-t^2)/(t^2+1), 2t/(t^2+1)})
o16 : RingMap KS <--- R

i17 : ker phi
o17 = ideal (2x + y - 1, y^2 + z^2 - 1)
o17 : Ideal of R
```

So we get the intersection of a plane and a cylinder as the image of $\Phi$ in this case.
Converting certain complex exponentials to trigonometric functions
Consider first $$A_k(x)=\frac{1}{i k+1} e^{(ik+1)x}-\frac{1}{ik-1} e^{(ik-1)x}$$ $$A_k(x)=\frac{ik-1}{(i k+1)(ik-1)} e^{(ik+1)x}-\frac{ik+1}{(ik-1)(ik+1)} e^{(ik-1)x}$$ $$A_k(x)=\frac{1-ik}{1+k^2} e^{(ik+1)x}+\frac{1+ik}{1+k^2} e^{(ik-1)x}=\frac{e^{ikx}}{1+k^2}\Big((1-ik) e^{x}+(1+ik) e^{-x} \Big)$$ Now, replace $e^{\pm x}=\cosh(x)\pm\sinh(x)$ to get $$A_k(x)=\frac{2 e^{i k x} (\cosh (x)-i k \sinh (x))}{1+k^2}$$ Now, using the integration bounds $$A_k(\pi)-A_k(-\pi)=\frac{4 i (\cosh (\pi ) \sin (\pi k)-k \sinh (\pi ) \cos (\pi k))}{1+k^2}$$ and, since $k$ is an integer, $$A_k(\pi)-A_k(-\pi)=-\frac{4 i k \sinh (\pi ) \cos (\pi k)}{1+k^2}=-\frac{4 i k \sinh (\pi ) (-1)^k}{1+k^2}$$ and hence $$c_k = -\frac{i k (-1)^k}{\pi (1+k^2)} \sinh(\pi)$$
Prove sensitivity to initial conditions numerically?
To be precise, sensitivity to initial conditions is physicists' language for the fact that a small change in the initial conditions grows exponentially in time. Numerically, just check whether for two solutions the distance $|x(t)-y(t)|$ grows exponentially in time $t$ for slightly different initial conditions $y_0=x_0+\epsilon$. Formally, for $|\epsilon|\to 0$ and $t\to\infty$ the exponent of this growth, if it exists, is the Lyapunov exponent mentioned in the comments.
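A minimal numerical sketch of this idea (my own example, using the logistic map $x_{n+1}=4x_n(1-x_n)$, which is chaotic with Lyapunov exponent $\ln 2$): track the separation of two nearby initial conditions and fit its exponential growth rate.

```python
import numpy as np

f = lambda x: 4.0 * x * (1.0 - x)      # logistic map, chaotic at r = 4

x, y = 0.3, 0.3 + 1e-12                # two nearby initial conditions
seps = []
for _ in range(35):                    # stop before the separation saturates at O(1)
    seps.append(abs(x - y))
    x, y = f(x), f(y)

# slope of log|x_n - y_n| vs n estimates the Lyapunov exponent (ln 2 here)
n = np.arange(len(seps))
slope = np.polyfit(n, np.log(seps), 1)[0]
print(slope, np.log(2))
```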
The prime number $p$ is prime in $\mathbb{Q}(\sqrt{d})$ iff $d$ is not a quadratic residue modulo p
$\Bbb{Q}(\sqrt{d})$ is a field, so it has no non-trivial (prime) ideals. For $d \in \Bbb{Z}$ with $d \not\in \Bbb{Z}^2$, $(p)$ is a prime ideal of $R=\Bbb{Z}[\sqrt{d}] =\Bbb{Z}[x]/(x^2-d)$ iff $R/(p) = \Bbb{F}_p[x]/(x^2-d)$ is an integral domain iff $x^2-d \in \Bbb{F}_p[x]$ is irreducible iff $d$ is a quadratic non-residue $\bmod p$. With $d = m^2 D$ where $D$ is squarefree, the ring of integers of $\Bbb{Q}(\sqrt{d})$ is $\Bbb{Z}[\sqrt{D}]$ or $\Bbb{Z}[\frac{1+\sqrt{D}}{2}]$; in the latter case there is the additional case of $(2)$, which is a prime ideal because $x^2+x+1 \in \Bbb{F}_2[x]$ is irreducible.
Calculate the Laurent series centered at i on an annulus
Hint: $\frac{1}{(z-i)(z-2)}=\frac{a}{z-i}+\frac{b}{z-2}$. Work out the constants $a$ and $b$, and notice that $\frac{a}{z-i}$ is already a Laurent series at $z=i$, so what you need to do is to figure out the expansion of $\frac{b}{z-2}$ at $z=i$. (Try $\frac{b}{(z-i)+(i-2)}$.)
Is it properly applied the Quine McCluskey algorithm by this?
For an optimal solution you need more than "two loops". The selection of a minimum number of implicants is a set cover problem and was shown to be NP-complete. Selecting a cover with a greedy algorithm just grabs the most promising implicants and is not guaranteed to arrive at an optimal solution. A good overview on two-level logic minimization was published by Olivier Coudert. Chapter 3 describes the Quine-McCluskey algorithm. This algorithm is of interest for historical reasons but hardly used in practice any more. For practical experiments, you could try tools like Logic Friday 1. It uses the two-level minimizer Espresso. Logic Friday features an exact/fast switch. It allows you to play with the trade-off between solution speed and optimality. Another tool for experiments is Karma 3.
Question regarding Jensen Inequality
The step $$\frac{n}{1-\frac{(a_1+...+a_n)^2}{n^2}}\geq\frac{n}{1-\frac{a_1^2+...+a_n^2}{n}}$$ is wrong: it should be $$\frac{n}{1-\frac{(a_1+...+a_n)^2}{n^2}}\leq\frac{n}{1-\frac{a_1^2+...+a_n^2}{n}},$$ which does not help. My solution. $$\sum_{i=1}^n\frac{a_i}{1-a_i^2}-\frac{na}{1-a^2}=\sum_{i=1}^n\left(\frac{a_i}{1-a_i^2}-\frac{a}{1-a^2}\right)=\frac{1}{1-a^2}\sum_{i=1}^n\frac{(a_i-a)(aa_i+1)}{1-a_i^2}=$$ $$=\frac{1}{1-a^2}\sum_{i=1}^n\left(\frac{(a_i-a)(aa_i+1)}{1-a_i^2}-\frac{(a_i^2-a^2)(a^2+1)}{2a(1-a^2)}\right)=$$ $$=\frac{1}{2a(1-a^2)^2}\sum_{i=1}^n\frac{(a_i-a)^2(a_i^2+(2a^3+a^2+2a)a_i+3a^2-1)}{1-a_i^2}\geq0.$$
What are functions applied to either side of a relation that maintain the relation called?
I think it would be acceptable and within convention to say the function is "relation-preserving" because the relation still holds true under the action of the function, i.e. $x \sim y \implies f(x) \sim f(y) $. Mathematicians often speak of certain properties or even equations being preserved under maps and transformations, so this choice of term has support.
Spivak - Chapter 10 Problem 19
I guess the point is that a precise formula will be "pretty messy". Let's see: $(f\circ g)^{(2)}(a)=\big((f'(g))\cdot g'\big)'(a)=\big(f^{(2)}(g)\cdot g'^2+f'(g)\,g^{(2)}\big)(a)$. Now it's apparent that $(f\circ g)^{(3)}$ will already involve quite a conglomeration of chain rules and product rules, to where it will be a little difficult to write a formula. Much easier just to note that we get a sum of terms which are products of powers of $f^{(n)}(g(a))$ and $g^{(n)}(a)$. But by induction this is what we get: assume at the $n-1$ stage this is what we get, then apply the product rule and chain rule where applicable, one more time.
Finding convergence rate for Bisection, Newton, Secant Methods?
The methods are said to converge with order $p$ if $$ e_{i+1} =c e_i^p, $$ where $e_i = |x_i - x^*|$ is the error in the approximation at the $i$th iteration, $x^*$ is the true root and $x_i$ is the approximation. Here, $c$ and $p$ are constants independent of $i$. Thus, to measure the rate of convergence we need to store the error in the approximation at each iteration. Note that $$ \log{e_{i+1}} = \log{c} + p \log{e_i}, $$ i.e. plotting $\log{e_{i+1}}$ against $\log{e_i}$ should (asymptotically) result in a straight line with gradient $p$, which can be measured. Alternatively, note that $$ \frac{e_{i+1}}{e_i} = \frac{ce_i^p}{ce_{i-1}^p} = \left(\frac{e_{i}}{e_{i-1}}\right)^p $$ such that $$ p = \log{\left( \frac{e_{i+1}}{e_i} \right)} \Big/ \log{\left( \frac{e_{i}}{e_{i-1}} \right)}, $$ which is easily calculated from the errors of three consecutive iterations. Note that the convergence rate may differ for different functions. In particular, functions with double roots (e.g. $(x-1)^2$) tend to give slower convergence than simple roots. A method may also show superconvergence for particular functions (e.g. $\sin{x}$ near $x=0$ for Newton's method). It may be worth trying your code on a few different examples before you draw conclusions about your methods.
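For example, a small sketch (my addition) applying the three-error formula to Newton's method on $f(x)=x^2-2$; the printed estimates should approach $p\approx 2$:

```python
import numpy as np

f = lambda x: x**2 - 2
df = lambda x: 2 * x
root = np.sqrt(2)

x = 2.0
errors = []
for _ in range(5):
    errors.append(abs(x - root))
    x = x - f(x) / df(x)          # Newton step

for i in range(1, len(errors) - 1):
    p = np.log(errors[i + 1] / errors[i]) / np.log(errors[i] / errors[i - 1])
    print(p)                       # approaches 2 (quadratic convergence)
```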
If an integer $n$ has the form $3k+1$, then $n$ does not have the form $9l+5$
Yes, this works as a proof of it. Nicely done! I would use that only $9l, 9l+3$ and $9l+6$ are multiples of $3$ for $l\in\Bbb Z$, so only $9l+1, 9l+4, 9l+7$ are of the form $3k+1$.
Maximization problem on an ellipsoid
Hint: One way is symmetrization via change of variable $s=\frac{x}{a},t=\frac{y}{b},u=\frac{z}{c}$ and then optimize. $\max f(s,t,u)=(abc)stu\,\,\text{subject to: } s^2+t^2+u^2=1.$ You can either use calculus or the fact that optimal point is of form $(\lambda,\lambda,\lambda)$. Thus $s=t=u=\frac{1}{\sqrt 3}$ so $(x,y,z)=(\frac{a}{\sqrt 3},\frac{b}{\sqrt 3},\frac{c}{\sqrt 3})$. In order to verify your answer you can use Hessian matrix (second derivative test)
Profit and loss question
The remaining pears are worth $35$ cents each. The total received for them was $79.80$ dollars (the profit plus the $60$ dollars he paid). We can divide: $$\frac{7980}{35} = 228$$ So they kept $228$ pears, and $240 - 228 = 12$ pears were thrown away. $12$ is $\frac{1}{20}$ of $240$, or $5$%. So $\boxed{5\text{%}}$ of the pears were thrown away.
Is every countably compact subset of a Hausdorff space closed?
The ordinal space $\omega_1+1$ (the set of all ordinals $\le\omega_1$ with the order topology) is a compact Hausdorff space. The subspace $\omega_1$ of all countable ordinals is countably compact but not closed.
An honest die is thrown 8 times; let X be the number of twos and let Y be the number of fours. Find the joint pmf of X and Y and calculate P(X=Y).
Isn't the joint distribution just $\Pr[X=x,Y=y]=\Pr[X=x| Y=y]\Pr[Y=y]$? The last form is easy to handle, right? You just need to be aware that $\Pr[X=x|Y=y]=\Pr[Z=x]$ for the random variable $Z$ that counts the number of twos among the remaining $8-y$ dice. However, you need to take into account that these $8-y$ dice must all be different from four. That is, $\Pr[X=x|Y=y]$ means "the probability that there will be exactly $x$ twos among $8-y$ dice, and the other $8-y-x$ dice will be distinct from two and four". Take a look here also. For a "shortcut" to an easier way to handle this joint distribution, take a look at the multinomial distribution.
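For the final part, a short numeric sketch (my addition) using the multinomial distribution mentioned above: $P(X=Y)=\sum_{k}\binom{8}{k,\,k,\,8-2k}\left(\frac16\right)^k\left(\frac16\right)^k\left(\frac46\right)^{8-2k}$.

```python
from math import factorial

def multinomial(n, *ks):
    # n! / (k1! k2! ... kr!)
    out = factorial(n)
    for k in ks:
        out //= factorial(k)
    return out

p = sum(multinomial(8, k, k, 8 - 2 * k) * (1/6)**k * (1/6)**k * (4/6)**(8 - 2*k)
        for k in range(0, 5))
print(p)   # P(X = Y), roughly 0.245
```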
Check if $X$ and $Y$ are statistically independent from a pdf
For part 1, check that $$ \int_0^1 \int_0^1 (2-x-y) \, \mathrm{d}y \, \mathrm{d}x = \int_0^1 \left(2-x-\frac{1}{2} \right) \, \mathrm{d}x = 2- \frac{1}{2} -\frac{1}{2} = 1. $$ For part 2, check whether $f(x,y) = g(x) h(y)$. Since your $f(x,y)$ does not appear to be separable...
Continuous function and maximum value
Consider the function $g(x)=f(x)-f(3x)$. Solutions to $g(x_0)=0$ correspond to values $f(x_0)=f(3x_0)$. Now $g(3)=f(3)-f(9)\ge0$ since $f(3)\ge f(9)$, and $g(1)=f(1)-f(3)\le 0$ since $f(1)\le f(3)$. Then by the intermediate value theorem applied to $g$ on $[1,3]$, there is an $x\in(1,3)$ with $g(x)=0$.
Sum of mutliples b/w $2$ and $10$ . What is wrong with this method
Maybe things will be clearer if we look at another example. Find the sum of all the multiples of 3 between 10 and 100. In this case, we are being asked to find $$12+15+18+21+\cdots+93+96+99.$$ That's the same as $$3(4+5+6+7+\cdots+31+32+33).$$ Why do the numbers inside the bracket start at 4, not at 1? Because the numbers we're adding start at 12, which is $3\times4$, not $3\times1$. Why do the numbers inside the bracket go up to 33? Because the numbers we're adding go up to 99, and $99=3\times33$. This still leaves you with the problem of finding $$4+5+6+7+\cdots+31+32+33,$$ of course.
Prime Factorization patterns of $\sum_{i=0}^k{4^i}$
$$f(k) = \sum_{i=0}^k 4^i = \dfrac{4^{k+1}-1}{3}$$ If $d$ is coprime to $2$ and $3$, then $d$ divides $f(k)$ if and only if $4^{k+1} \equiv 1 \mod d$, i.e. iff $k+1$ is a multiple of the order of $4$ in the multiplicative group $U_d$ of units in $\mathbb Z/d\mathbb Z$. For example, the order of $4$ in $U_7$ is $3$ ($4^3 = 64 \equiv 1 \mod 7$ but $4^1$ and $4^2 \not\equiv 1 \mod 7$), so $f(k)$ is divisible by $7$ whenever $k+1$ is divisible by $3$.
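A quick brute-force check of this pattern for $d=7$ (my addition):

```python
def f(k):
    return sum(4**i for i in range(k + 1))

for k in range(1, 15):
    # the two boolean columns should always agree
    print(k, f(k) % 7 == 0, (k + 1) % 3 == 0)
```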
Sketching level curves for $f(x,y)=x/(x^2+y^2)$
hint for $c=-2$. $$\frac{x}{x^2+y^2}=-2 \iff$$ $$x^2+y^2+\frac{x}{2}=0 \iff$$ $$(x+\frac 14)^2+y^2=\frac{1}{16}$$ and this represents a circle whose center is $(\frac{-1}{4},0)$ and radius $\sqrt{\frac{1}{16}}=\frac 14$.
Tate's Thesis: in what sense is Tate's Theorem 4.2.1 the Riemann-Roch theorem for curves?
If you work in the function field case, i.e. replace the number field $K$ and its places $v$ by a finite extension of $k[x]$ for some finite field $k$, and work with its places, then this statement becomes the Riemann--Roch theorem. The appearance of $1-g$ in the RR thm., and the role of the canonical bundle in forming the correct kind of dual, will here be absorbed into the definition of the self-dual measure on the adeles. To be a little more precise, you should imagine that your line bundle is of the form $\mathcal L(D)$ for some divisor $D$ (it always is, after all!); then $a$ will play the role of $D$ (or maybe $-D$). And you should take $f$ to be the characteristic function of the integral adeles. Then one side of the equality will count rational functions $\xi$ for which $\xi a^{-1}$ has no denominators (so global sections of $\mathcal L(D)$), and the other side will count rational functions $\xi$ such that, roughly, $\xi a^{-1}$ is integral, which is the global sections of $\mathcal L(-D)$; except that $f$ is not quite self-dual. In the number field case the different comes in, and in the function field case we are considering here, the canonical bundle will come in (as well as a factor related to $1-g$). Finally, to get the familiar statement about dimensions, take log of both sides and divide by $\log q$ (where $q = |k|$). (You can check then that the $|a|^{-1}$ on the LHS, after taking logs, gives the $\deg D$ term.)
How do I find a closed form of the characteristic function of Gamma distribution?
Edit: As pointed out in a comment, this is not quite right. Will be fixed soon, after class... I said this: This is just the change of variable $x'=x(1-i\beta t)/\beta$ plus the definition of $\Gamma(\alpha)$: $$\int_0^\infty x^{\alpha -1}e^{-x(1-i\beta t)/\beta} dx =((1-i\beta t)/\beta)^{-\alpha}\int_0^\infty x^{\alpha -1}e^{-x} dx =\Gamma(\alpha)\beta^\alpha (1-i\beta t)^{-\alpha}.$$ And of course that's nonsense, it's not just a "change of variable" because $(1-i\beta t)/\beta$ is not real. It's a simple application of Cauchy's Theorem; some of you can stop reading at this point. The corrected version: Let $V=\Bbb C\setminus(-\infty,0]$ be the slit plane, and define $f\in H(V)$ by $$f(z)=z^{\alpha-1}e^{-z},$$ where $z^{\alpha-1}$ is defined using the principal-value logarithm: $$z^{\alpha-1}=e^{(\alpha-1)\log(z)}.$$ Note that if $z\in V$ and $x>0$ then $\log(xz)=\log(x)+\log(z)$, hence $$(xz)^{\alpha-1}=x^{\alpha-1}z^{\alpha-1}.$$ Also note that if $z$ has positive real part then $f(rz)\to0$ exponentially as $r>0$ tends to $+\infty$. Set $\omega=(1-i\beta t)/\beta$ and define $\gamma_1,\gamma_2:[0,R]\to\Bbb C$ by $$\gamma_1(x)=x,$$ $$\gamma_2(x)=\omega x.$$ Let $\gamma_3$ be the straight line from $R$ to $\omega R$. It follows easily from Cauchy's Theorem that $$\left(\int_{\gamma_1}+\int_{\gamma_3}-\int_{\gamma_2}\right)f(z)\,dz=0.$$ Detail: That's not quite just a special case of CT, since our triangle does not lie in $V$, passing through the origin as it does. That's easily fixed: Consider a contour consisting of that triangle except with a little detour near the origin, so it lies in $V$. Take a limit. (Note that the condition $\alpha-1>-1$ is needed to show that the error tends to $0$.) Since $f$ dies exponentially along rays in the right half-plane it follows that $$\lim_{R\to\infty}\int_{\gamma_3}f(z)\,dz=0,$$ hence $$\lim_{R\to\infty}\int_{\gamma_2}f(z)\,dz=\lim_{R\to\infty}\int_{\gamma_1}f(z)\,dz.$$ But $$\lim_{R\to\infty}\int_{\gamma_1}f(z)\,dz=\int_0^\infty x^{\alpha-1}e^{-x}\,dx=\Gamma(\alpha),$$ while, recalling that $(\gamma_2(x))^{\alpha-1}=\omega^{\alpha-1}x^{\alpha-1}$, the definition of $\int_\gamma f(z)\,dz$ shows that $$\lim_{R\to\infty}\int_{\gamma_2}f(z)\,dz=\omega^\alpha\int_0^\infty x^{\alpha-1}e^{-x\omega}\,dx.$$
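Not part of the argument above, but a quick numerical sanity check (my addition): assuming the shape/scale parametrization with density proportional to $x^{\alpha-1}e^{-x/\beta}$, the resulting characteristic function is $(1-i\beta t)^{-\alpha}$, which can be compared against a Monte Carlo estimate of $\mathbb E[e^{itX}]$:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, beta, t = 2.5, 1.7, 0.8           # arbitrary example values

samples = rng.gamma(shape=alpha, scale=beta, size=2_000_000)
empirical = np.mean(np.exp(1j * t * samples))
closed_form = (1 - 1j * beta * t) ** (-alpha)
print(empirical, closed_form)            # should agree to a few decimal places
```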
Calculating throughput over 1 Gbps link
You have $\frac{10^3\cdot 8~\text{bits}}{\color{red}{3}\cdot 10^{-3}~\text{seconds}} = 10^6\cdot 8\cdot \color{red}{3}\cdot \frac{\text{bits}}{\text{seconds}}$ This should instead be $\frac{10^3\cdot 8~\text{bits}}{3\cdot 10^{-3}~\text{seconds}} = 10^6\cdot 8\cdot \frac{1}{\color{red}{3}}\cdot\frac{\text{bits}}{\text{seconds}}$ If you were to do this instead, you should get approximately $2.667 ~\text{megabits per second}$, which is the same as approximately $333.3 ~\text{kilobytes per second}$ or $0.3333~\text{megabytes per second}$ If the problem asks you to express throughput in a specific format (megabits per second, megabytes per second, gigabytes per second, bits per second, etc...) then try to match that format. Otherwise, an answer of $2.667 ~\text{mbps}$ is just as good as an answer of $333.3~\text{kBps}$ since they represent the same thing.
Difficult question about asymptotic notations and permutations
Disclaimer: this is strongly inspired by and follows the exposition of [1], specifically Section 1.3. I strongly suggest you read this book if you are interested in the topic. Recall the Erdős—Szekeres theorem: Theorem. (Erdős—Szekeres) For any integers $r, s\geq 0$, every sequence of length at least $(r - 1)(s - 1) + 1$ contains a monotonically increasing subsequence of length $r$ or a monotonically decreasing subsequence of length $s$. In particular, applying this with $r=L(\sigma)+1$ and $s=D(\sigma)+1$ (where $D(\sigma)$ is the length of the longest decreasing subsequence) shows that no permutation $\sigma\in\mathcal{S}_n$ can satisfy $L(\sigma)D(\sigma)<n$, since otherwise $\sigma$ would contain an increasing subsequence of length $L(\sigma)+1$ or a decreasing one of length $D(\sigma)+1$; that is, $$ \forall \sigma\in\mathcal{S}_n,\quad L(\sigma)D(\sigma) \geq n \tag{1} $$ By symmetry, the distributions of $L(\sigma)$ and $D(\sigma)$ when $\sigma$ is chosen uniformly at random from $\mathcal{S}_n$ are the same, and thus $\mathbb{E}_\sigma[L(\sigma)]=\mathbb{E}_\sigma[D(\sigma)]$. Therefore, we can write $$ \frac{1}{n!}\sum_{\sigma\in\mathcal{S}_n} L(\sigma)=\mathbb{E}_\sigma[L(\sigma)] = \frac{\mathbb{E}_\sigma[L(\sigma)]+\mathbb{E}_\sigma[D(\sigma)]}{2} = \frac{1}{n!}\sum_{\sigma\in\mathcal{S}_n} \frac{L(\sigma)+D(\sigma)}{2}\tag{2} $$ By the AM-GM inequality, we get $$ \frac{1}{n!}\sum_{\sigma\in\mathcal{S}_n} \frac{L(\sigma)+D(\sigma)}{2} \geq \frac{1}{n!}\sum_{\sigma\in\mathcal{S}_n} \sqrt{L(\sigma)D(\sigma)} \tag{3} $$ and, combining (1), (2), and (3), we obtain that for all $n\geq 1$, $$ \frac{1}{n!}\sum_{\sigma\in\mathcal{S}_n} L(\sigma) \geq \frac{1}{n!}\sum_{\sigma\in\mathcal{S}_n} \sqrt{L(\sigma)D(\sigma)} \geq \frac{1}{n!}\sum_{\sigma\in\mathcal{S}_n} \sqrt{n} $$ i.e. $\frac{1}{n!}\sum_{\sigma\in\mathcal{S}_n} L(\sigma) \geq \sqrt{n}$. $~~~\square$ [1] Romik, Dan. The surprising mathematics of longest increasing subsequences. Institute of Mathematical Statistics Textbooks. Cambridge University Press, New York, 2015. xi+353 pp. ISBN: 978-1-107-42882-9; 978-1-107-07583-2 Available freely on the author's website.
Quaternions, torque, and impulse.
I don't know the term "instantaneous rotation", so I don't know whether you mean the instantaneous orientation or the instantaneous angular momentum or angular velocity; whichever one you mean, the other one seems to be missing from your state description. I also don't know what to make of a force resolving into a push and a torque, since a torque doesn't have the same dimensions as a force. Further it's unclear to me what you mean by "$f^T$ and $p$ combine", since $f^T$ is a force or a torque and $p$ is a point. I'll restate the problem in a form and notation that I understand, and I hope you'll be able to map that onto what you're doing. The orientation of the body at time $t$ can be described by a quaternion $s(t)$ corresponding to the rotation required to get to that orientation from some reference orientation. Its rotational state can be described by either its angular momentum $L$ or its angular velocity $\omega$; since the body is symmetric and its moment of inertia $I=\frac25mr^2$ is a scalar, these two are proportional to each other, $L=I\omega$. The angular velocity can be regarded as an element of the Lie algebra of the quaternions, specified by a three-dimensional vector whose direction is the instantaneous axis of rotation and whose magnitude is the instantaneous angular speed. The body's orientation evolves according to $\dot s(t)=\Omega(t)s(t)$, where the dot denotes differentiation with respect to the time $t$ and $\Omega(t)=(0,\omega(t))$ is the purely imaginary quaternion corresponding to the angular velocity $\omega(t)$. In the absence of torques, the angular velocity is constant and the motion can be integrated to $s(t)=\exp(\omega t)s(0)$, where $\exp$ is the exponential map from the Lie algebra to the quaternions, $$\exp(x)=\left(\cos|x|,\sin|x|\frac x{|x|}\right)\;.$$ A torque is by definition a rate of change of the angular momentum. A force $F$ acting at a point $p$ exerts a torque $\tau=r\times F$ on the ball, where $r$ is the vector from the ball's centre to $p$. This causes the angular momentum to change according to $\dot L=\tau$, so the angular velocity changes according to $\dot\omega=\tau/I$. Section $3$ of this paper gives what might be called a closed form for the resulting motion, but it's rather complicated and I gather you want to approximate the motion in finite time steps $\Delta t$. Since $\omega$ changes linearly, its update is exactly given by $\omega(t+\Delta t)=\omega(t)+\Delta t\tau/I$. To update the orientation $s$, you could take the average angular velocity $\bar\omega=\omega(t+\Delta t/2)=\omega(t)+\Delta t\tau/(2I)$ during the time step and update $s$ according to $s(t+\Delta t)\approx\exp(\bar\omega\Delta t)s(t)$. If you want, you can also approximate the exponential map by $$ \exp(x)\approx\left(\sqrt{1-|x|^2},x\right) $$ for small time steps. I hope you can translate that into how you're thinking about the problem; if not, feel free to ask.
What is the maximum number of points of intersection between the diagonals of a convex octagon?
To explain André Nicolas's comment, for any group of $4$ vertices, there is one interior intersection of the diagonals. Thus, for each of the $\binom{n}{4}$ choice of $4$ vertices, there is one intersection. Therefore, the maximum number of intersections (when there are no coincident intersections) is $$ \binom{n}{4} $$
Need help in following problems related to combinatorial analysis.
For the first question: you have to fill in 2 different letters and 3 different digits. The English alphabet has 26 letters, so you are choosing 2 from 26, which is $\binom{26}{2}$. For the digits you have 10 numbers from 0 to 9, thus you are choosing 3 from 10, $\binom{10}{3}$. Hence your answer is $$ \binom{26}{2} \times \binom{10}{3}$$ For the second question, your code consists of four letters, while COMPUTE contains 7 letters. So for case (a), where letters are not allowed to be repeated, you have to choose 4 from 7, so it is $\binom{7}{4}$. For case (b), where letters in the code are allowed to be repeated, you may use arrangements instead of combinations, and write $A_7^4$. Hope this helps you. Try the remaining 6 questions, and drop a comment for help!
Property of Vectors of an n-gon
This argument doesn't separate cases where the number of vertices is even or odd, but if $R$ denotes the rotation about $O$ by $\frac{1}{n}$ of a turn, then $R$ cyclically permutes the vertices, and therefore preserves the sum $$ \sum_{k=1}^{n} OP_{k}. $$ Since $O$ is the unique fixed point of $R$, the sum must be $O$.
Can it be shown that the Banach algebra $(L^1(\mathbb{R}^n), \ast)$ does not have a unit element?
Let us assume towards a contradiction that $f $ is a convolutional unit for $L^1$. By the convolution theorem, you have $\hat {g} = \widehat {f \ast g} = \hat {f} \hat {g} $ for arbitrary $g \in L^1$. But there is some $g $ (for example, a Gaussian) whose Fourier transform never vanishes. Thus, $\hat {f} \equiv 1$. But the Riemann Lebesgue lemma shows that $\hat {f} $ vanishes at $\infty $. Thus, $L^1$ has no unit.
how to find this limit $\lim_{x\rightarrow 0} \frac{\sin x^2}{ \ln ( \cos x^2 \cos x + \sin x^2 \sin x)} = -2$ without using L'Hôpital's rule
It is good recall the following asymptotics. $$\cos(x^2) = 1 + \mathcal{O}(x^4)$$ $$\cos(x) = 1 - \dfrac{x^2}{2!}+ \mathcal{O}(x^4)$$ $$\sin(x^2) = x^2 + \mathcal{O}(x^6)$$ $$\sin(x) = x + \mathcal{O}(x^3)$$ Hence, we get that $$\cos(x^2) \cos(x) = \left( 1 + \mathcal{O}(x^4) \right) \left( 1 - \dfrac{x^2}{2!}+ \mathcal{O}(x^4) \right) = 1 - \dfrac{x^2}{2!}+ \mathcal{O}(x^4)$$ $$\sin(x^2) \sin(x) = \left(x^2 + \mathcal{O}(x^6) \right) \left( x + \mathcal{O}(x^3) \right) = \mathcal{O}(x^3)$$ Hence, we get that $$\cos(x^2) \cos(x) + \sin(x^2) \sin(x) = 1 - \dfrac{x^2}{2!}+ \mathcal{O}(x^3)$$ Hence, $$\ln(\cos(x^2) \cos(x) + \sin(x^2) \sin(x)) = \ln \left(1 - x^2/2 + \mathcal{O}(x^3) \right)$$ Also, recall that $$\ln(1+t) = t + \mathcal{O}(t^2).$$ Hence, $$\ln \left(1 - x^2/2 + \mathcal{O}(x^3) \right) = -\dfrac{x^2}{2} + \mathcal{O}(x^3)$$ Hence, $$\dfrac{\sin(x^2)}{\ln(\cos(x^2) \cos(x) + \sin(x^2) \sin(x))} = \dfrac{x^2 + \mathcal{O}(x^{6})}{-x^2/2 + \mathcal{O}(x^3)} = \dfrac{-2 + \mathcal{O}(x^4)}{1 + \mathcal{O}(x)}$$ Hence, $$\lim_{x \to 0} \dfrac{\sin(x^2)}{\ln(\cos(x^2) \cos(x) + \sin(x^2) \sin(x))} = \lim_{x \to 0} \dfrac{-2 + \mathcal{O}(x^4)}{1 + \mathcal{O}(x)} = \dfrac{\lim_{x \to 0} \left(-2 + \mathcal{O}(x^4) \right)}{\lim_{x \to 0} \left(1 + \mathcal{O}(x) \right)} = -2$$ EDIT Below is a slightly different method. Note that $$\cos(x^2) \cos(x) + \sin(x^2) \sin(x) = \cos(x^2 - x)$$ We can rewrite $$\dfrac{\sin(x^2)}{\ln(\cos(x^2) \cos(x) + \sin(x^2) \sin(x))}$$ as $$\dfrac{\sin(x^2)}{\ln(\cos(x^2 - x))} = \dfrac{\sin(x^2)}{x^2} \times \dfrac{x^2}{\ln(\cos(x^2 - x))} = \dfrac{\sin(x^2)}{x^2} \times \dfrac{x^2}{\ln(1 - 2\sin^2((x^2 - x)/2))}$$ We used the identity that $\cos(\theta) = 1 - 2 \sin^2 \left( \theta/2\right)$. $$\dfrac{\sin(x^2)}{\ln(\cos(x^2 - x))} = \dfrac{\sin(x^2)}{x^2} \times \dfrac{-2\sin^2((x^2 - x)/2)}{\ln(1 - 2\sin^2((x^2 - x)/2))} \times \dfrac{x^2}{-2\sin^2((x^2 - x)/2)}$$ Now we have $$\lim_{x \to 0} \dfrac{\sin(x^2)}{x^2} = 1$$ $$\lim_{x \to 0} \dfrac{-2\sin^2((x^2 - x)/2)}{\ln(1 - 2\sin^2((x^2 - x)/2))} = 1$$ $$\lim_{x \to 0}\dfrac{x^2}{-2\sin^2((x^2 - x)/2)} = -2$$ Putting these together again gives us $-2$.
A maximum of a function on $S^2$
Suppose the maximal value is attained at $u$. Now if we have $\nabla g(u)=au$ with $a<0$, then we can show $g((1-\epsilon)u)>g(u)$ as follows: locally $g(u+\delta)=g(u)+\delta\cdot\nabla g(u)+O(|\delta|^2)$, so $g((1-\epsilon)u)=g(u)-\epsilon u \cdot \nabla g(u)+O(\epsilon^2)=g(u)-a|u|^2\epsilon +O(\epsilon^2)>g(u)$ when $\epsilon>0$ is sufficiently small. So, arguing by contradiction, we have $a\ge 0$.
greatest common divisor for polynomials
For example, the gcd of $2x+1$ and $(2x+1)x$ in $\mathbb Q[x]$ is $x+\frac{1}{2}$, which is not in $\mathbb Z[x]$. So we cannot just replace $k$ with any commutative ring. This is really because of the monic condition on the gcd; we can divide by the leading coefficient when working in $k$, but this is not always possible in a general commutative ring.
Suppose the series $s_k=\sum_{n=0}^{k} a_n$ is convergent with limit $a$. How to determine $N \in \mathbb N$ such that $|s_k-a| \le \epsilon$ for $k \ge N$
The Alternating Series Test says that for a sequence of positive numbers, $a_n$, that decreases monotonically to $0$, $$ a=\sum_{k=1}^\infty(-1)^ka_k $$ converges and the difference $$ \left|a-\sum_{k=1}^n(-1)^ka_k\right|\le\left|a_{n+1}\right| $$ So you just need to include terms in the Leibniz Series until the next term is smaller than the error you desire. Note that the Leibniz Series, which is the evaluation of the Gregory Series at $x=1$ and converges extremely slowly, can be accelerated as in this question using Euler's Series Transformation to get the much more quickly converging series $$ \frac{\pi}{2}=\sum_{k=0}^\infty\frac{k!}{(2k+1)!!} $$
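For illustration (my addition, not part of the answer), here is a small sketch comparing partial sums of the Leibniz series with the transformed series $\frac{\pi}{2}=\sum_{k\ge 0}\frac{k!}{(2k+1)!!}$, whose terms satisfy the simple ratio $a_k/a_{k-1}=\frac{k}{2k+1}$:

```python
import math

# Slowly converging Leibniz series: pi/4 = 1 - 1/3 + 1/5 - ...
leibniz = 4 * sum((-1)**k / (2*k + 1) for k in range(30))

# Euler-transformed series: pi/2 = sum_{k>=0} k!/(2k+1)!!,
# built up term by term via term_k = term_{k-1} * k/(2k+1), term_0 = 1
term, total = 1.0, 1.0
for k in range(1, 30):
    term *= k / (2*k + 1)
    total += term

print(leibniz, 2 * total, math.pi)   # 30 terms: Leibniz is still far off, the transformed sum is not
```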
How to solve $A\tan\theta-B\sin\theta=1$
$$\tan x=\frac{2\tan \frac x2}{1-\tan^2 \frac x2}=\frac{2t}{1-t^2}$$ $$\sin x=\frac{2\tan \frac x2}{1+\tan^2 \frac x2}=\frac{2t}{1+t^2}$$
Sufficient conditions for bound
So we have $$ \binom{n}{m} \sum_{j=0}^m j\binom{n}{m-j}x^j < x \left( \sum_{j=0}^m \binom{n}{m-j} x^j \right)^2 $$ or $$ \binom{n}{m} \sum_{j=0}^m j\binom{n}{m-j}x^j < x \left( \sum_{j=0}^m\sum_{k=0}^m \binom{n}{m-j} \binom{n}{m-k} x^j x^k\right) $$ or $$ \binom{n}{m} \sum_{j=0}^m j\binom{n}{m-j}x^j < x\left( \sum_{j=0}^m\sum_{k=0}^m \binom{n}{m-j} \binom{n}{m-k} x^{j+k}\right) $$ On the right let $j=s+t$ and $k=t-s$ so $2t=j+k$ and $2s=j-k$. $$ \binom{n}{m} \sum_{j=0}^m j\binom{n}{m-j}x^j < x\left( \sum_{t=0}^m\sum_{s=-m}^m \binom{n}{m-s-t} \binom{n}{m-t+s} x^{t}\right) $$ Moving the $x$ into the summation and noting that for $j=0$ the summation on the right vanishes, $$ \binom{n}{m} \sum_{j=1}^m j\binom{n}{m-j}x^j < \left( \sum_{t=0}^m\sum_{s=-t}^t \binom{n}{m-s-t} \binom{n}{m-t+s} x^{t+1}\right) $$ Making the substitution $j=j-1$, $$ \binom{n}{m} \sum_{j=0}^{m-1}(j+1)\binom{n}{m-j}x^{j+1} < \sum_{t=0}^m x^{t+1}\left( \sum_{s=-t}^t \binom{n}{m-s-t} \binom{n}{m-t+s}\right) $$ Linear independence of $x^j$ tells us that we require the following for all $j=0,...,m-1$ $$ \binom{n}{m}(j+1)\binom{n}{m-j} < \left( \sum_{s=-j}^j \binom{n}{m-s-j} \binom{n}{m-j+s}\right) $$ Rewriting this, $$ \frac{n!}{m!(n-m)!}(j+1) \frac{n!}{(m-j)!(n-m+j)!} < \sum_{s=-j}^j \frac{n!}{(m-s-j)!(n-m+s+j)!}\frac{n!}{(m+s-j)!(n-m-s+j)!}$$ And this is exact. You can try to put this in MATLAB or excel to play around with various m and n. One more simplification can be performed $$ \frac{(j+1)}{m!(n-m)!(m-j)!(n-m+j)!} < \sum_{s=-j}^j \frac{1}{(m-s-j)!(n-m+s+j)!(m+s-j)!(n-m-s+j)!}$$
If $\varphi$ is a positive linear functional then the following are equivalent.
Use the following statement: If $A$ is a unital $C^*$-algebra and $x\in A$ there are unitaries $u_1,...,u_4$ and complex numbers $a_1,...,a_4$ with $x=\sum_i a_i u_i$. So: $$\varphi(x^*x)=\varphi\left(\sum_i(a_iu_i)^*\sum_j(a_ju_j)\right)= \sum_{ij} \overline{a_i}a_j\varphi(u_i^*u_j)$$ now note that $\varphi(u_i^*u_j)= \varphi\left(u_i(u_i^*u_j)u_i^*\right) = \varphi(u_ju_i^*)$ by (2). Hence: $$\varphi(x^*x)=\sum_{ij}\overline{a_i}a_j\varphi(u_ju_i^*) =\varphi\left(\sum_j(a_ju_j)\sum_i(a_iu_i)^*\right)=\varphi(xx^*)$$
Are regular expressions sets?
Languages are sets. They're sets of strings. Regular expressions are not sets. They are a notation for representing sets of strings. A regular expression is a description of a set of strings. For example, the regular expression ${\bf a }{\bf b}^\ast$ represents the set of strings that begin with an a that might be followed by a string of bs. The first section you quoted is describing what regular expressions look like. This is called the syntax of regular expressions. The third section explains what regular expressions mean: if you have a regular expression, what set does it represent? (Or “generate”—same thing.) You're right that $\mathcal L$ is a function. It's the function that takes a regular expression — a notation that represents a set of strings — and tells you which set it represents. The second section is saying the same as the first, but in a slightly different way. The first section says "here's what a regular expression can look like". The second section just puts it a little differently: "here's what is in the set of regular expressions". But the result is the same: to tell you what regular expressions are like. Regarding your exercise, the notation here is a little confusing. Let's suppose, for concreteness, that our alphabet $\Sigma$ includes just the symbols x and y. The exercise is asking about the meaning of the regular expression $$({\bf x} + {\bf y})^\ast$$ It wants you to show that the set represented by this expression (that's $\mathcal L(({\bf x} + {\bf y})^\ast)$) includes every possible string of xs and ys. The person who wrote the exercise has to find some way to say “every possible string of xs and ys”. (This is called the “Kleene closure” of the set $\{{\mathtt x}, {\mathtt y}\}$.) They could have written that phrase, but they didn't. Instead they used an abbreviation. The abbreviation is “$\Sigma^\ast$”. So you're being asked to show that if $\Sigma = \{{\mathtt x}, {\mathtt y}\}$, then $$ \mathcal L(({\bf x} + {\bf y})^\ast) = \Sigma^\ast$$ and, more generally, that if $a_1, a_2, \ldots$ are the elements of some alphabet $\Sigma$, then $$ \mathcal L((a_1 + a_2 + \ldots)^\ast) = \Sigma^\ast$$ I hope this helps.
Solve recurrence relation using master theorem
Hint: Take $\epsilon= \frac{1}{2}$. Then $\lim_{n\to\infty}\frac{\log_2 n+10}{n^{1-\epsilon}}=\lim_{n\to\infty}\frac{\log_2 n}{\sqrt{n}}$ and use L'Hospital's rule.
How to find the vector function $\vec{r}(u,v)$
With the given parametrization of the surface your original equation $$2x^2 + 4y^2 + z^2 = 1$$ is satisfied. Note that if you eliminate the coefficients then your original equation is not satisfied. $$2x^2 = \sin^2\phi \cos^2\theta $$ $$ 4y^2 = \sin^2\phi \sin^2\theta $$ $$z^2=\cos^2\phi$$ Thus you have $$2x^2 + 4y^2 + z^2 = 1$$
How to find the sum $\sum\limits_{k=1}^n (k^2+k+1)k!$?
The given expression is the same as $$\sum_{k=1}^n ((k+1)^2-k)k!=\sum_{k=1}^n(k+1)(k+1)!-\sum_{k=1}^nk\,k!\\ =\sum_{k=2}^{n+1}k\,k!-\sum_{k=1}^nk\,k!=(n+1)(n+1)!-1\cdot 1!=(n+1)!(n+1)-1$$
Finding kernel of a column matrix
Guide: $$f(x) = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 2\end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3\end{bmatrix}$$ Consider the matrix $$\begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 2\end{bmatrix}$$ The sum of the first two rows gives us the third row, and the first two rows are linearly independent. Solve this linear system by letting $x_3=t$ and expressing $x_1$ and $x_2$ in terms of $t$: $$\begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 0\end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3\end{bmatrix}= \begin{bmatrix} 0 \\0 \\ 0\end{bmatrix}$$
Identifying a topological space
It cannot be the cofinite topology: you’re starting with a locally compact space, so its one-point compactification is Hausdorff, and the cofinite topology on an infinite set is not Hausdorff. HINT: Start by looking at the case $n=2$: show that the one-point compactification of $(0,1)\cup(1,2)$ is homeomorphic to a subset of $\Bbb R^2$ consisting of two tangent circles (rather like a figure $8$ or $\infty$). Added: For $n\in\Bbb Z^+$ let $C_n$ be the circle of radius $\frac1n$ centred at $\left\langle\frac1n,0\right\rangle$ in the plane. Let $Y_n=\bigcup_{k=1}^nC_k$, and let $X_n$ be the one-point compactification of $\bigcup_{k=1}^n(k-1,k)$. Let $p$ be the origin in $\Bbb R^2$, and let $q$ be the point at infinity in $X_n$. As you suspected, $X_n$ is homeomorphic to $Y_n$, and of course any homeomorphism must send $q$ to $p$. The natural way to get a homeomorphism is to map the interval $(k-1,k)$ to $C_k\setminus\{p\}$ in any natural way. For instance, you can map $k-\frac12$ to $\left\langle\frac1k,0\right\rangle$, map $\left(k-1,k-\frac12\right)$ to the lower open half of $C_k$, and map $\left(k-\frac12,k\right)$ to the upper open half of $C_k$. That’s algebraically a bit messy but conceptually straightforward. For the proof that the map is a homeomorphism, you may find it helpful to identify simple local bases at the points of $X_n$ and $Y_n$.
Deviation in $\sup$-norm of simple fixed design NW-regression estimator
The key observation here is that, as a function of $x$, $$g(x):=\widehat{f}(x)-\mathbb{E}[\widehat{f}(x)]=\sum_{i=1}^n \varepsilon_i W_{i,h}(x)$$ can only take a finite number of values. More precisely, each $x\in[0,1]$ can be written as $x=x_i-u$ for some $i\in\{1,2,\ldots,n\}$ and $u\in[0,\frac{1}{n}]$. Now, if you shift the window $[x_i-h,x_i+h]$ to $[x_{i-1}-h,x_{i-1}+h]$ (with $x_{i-1}:=0$) through $[x_i-h-a,x_i+h-a]$, $a\in[0,\frac{1}{n}]$, clearly the weight $W_{i,h}(x)$ changes at most two times - the first time corresponds to some $a\leq\frac{1}{2n}$ and the second time corresponds to some $a\geq\frac{1}{2n}$. In fact, all weights $W_{k,h}(x)$ have at most these two change points and therefore $$g(x_i-u)\in\left\{g(x_{i-1}),g\left(\frac{x_{i-1}+x_i}{2}\right),g(x_i)\right\}.$$ Using that for any $x\in[0,1]$, $|\mathbb{E}[\widehat{f}(x)]-f(x)|\leq H\cdot h^\alpha$, we can now derive the desired deviation bound: \begin{eqnarray*} \mathbb{P}(\Vert \widehat{f}-f\Vert_{\infty}>c)&\leq& \mathbb{P}\left(\exists x\in[0,1]:~|g(x)|+H\cdot h^\alpha>c\right)\\ &\leq&\sum_{k=0}^{2n}\mathbb{P}\left(\left|g\left(\frac{k}{2n}\right)\right|+H\cdot h^\alpha>c\right)\\ &\stackrel{!}{\leq}&\delta. \end{eqnarray*} With the Gaussian deviation result used above, we see that $c=\sqrt{\frac{4\sigma^2}{nh}\ln\left(\frac{2(2n+1)}{\delta}\right)}$ is a sufficient choice (replace $\delta$ by $\frac{\delta}{2n+1}$), i.e. for any $\delta\in(0,1)$ $$\mathbb{P}\left(\Vert \widehat{f}-f\Vert_{\infty}>H\cdot h^\alpha+\sqrt{\frac{4\sigma^2}{nh}\ln\left(\frac{2(2n+1)}{\delta}\right)}\right)\leq\delta$$
Value of cyclotomic polynomial evaluated at 1
Another proof follows directly from the formula $X^{n} - 1 = \prod_{d \mid n} \Phi_d(X)$, since we can deduce from it that \begin{equation} X^{n-1} + \cdots + X + 1 = \prod_{d \mid n, d>1} \Phi_d(X). \end{equation} Thus, if $n = p^{k}$, we have $$ X^{p^{k}-1} + \cdots + X + 1 = \Phi_{p}(X) \cdots \Phi_{p^{k-1}}(X) \Phi_{p^{k}}(X). $$ After evaluating at $1$ we obtain $p^{k} = \Phi_{p}(1) \cdots \Phi_{p^{k-1}}(1) \Phi_{p^{k}}(1)$, and induction on $k$ gives $\Phi_{p^{k}}(1) = p$ for all $k$. If $n = p_{1}^{\alpha_{1}} \cdots p_{r}^{\alpha_{r}}$, where the $\alpha_{i}$ are positive integers and $r \geq 2$, then $$ n = \Phi_{n}(1) \prod_{d \mid n, d\neq 1,n} \Phi_d(1). $$ If we assume the statement true for all positive integers $<n$, then the product on the right-hand side of the equation equals $n$, since $$ \prod_{i=1}^{r}\Phi_{p_{i}}(1) \cdots \Phi_{p_{i}^{\alpha_{i}}}(1) = p_{1}^{\alpha_{1}} \cdots p_{r}^{\alpha_{r}} = n $$ and the rest of the factors are 1. Thus, $\Phi_{n}(1) = 1$ also.
Method of expressing the product of first n integers
$$f(n):=\begin{cases}\tfrac{140!}{(140-n)!}&\text{if }1\le n\le 140,\\ 0&\text{if }n>140.\end{cases} $$
Is this correct math notation?
$[a,b]$ is usually defined as the interval containing all real numbers that are between $a$ and $b$. In this particular context, $\forall a,i\in[1,4]$ wouldn't make much sense if we're speaking about indices that take values over $\mathbb{N}$. A better notation would be, $$\big(\forall a,i\in\{1,\ldots,4\}\big)\big(\forall p,j\in\{1,\ldots,5\}\big)\, :\, x_{ap}=x_{ij}.$$ Or, so as to make it shorter, we may add the condition that $a$ and $i$ can't be $5$, as follows: $$\big(\forall a,i,j,p\in\{1,\ldots,5\}\mid a,i\neq5\big)\, : \, x_{ap}=x_{ij}.$$
Finding an appropriate axis of rotation for two points such that they can be rotated and translated to overlay a given line
Let $\vec v$ be the cross-product of the direction vectors of $L_1$ and $L_2$. As long as $L_1$ and $L_2$ are not already parallel, $\vec v$ will be non-zero. I assert that $\vec v$ is the direction vector of $L_{map}$. Now, take the plane formed by $L_{map}$ and $L_2$ and find the unique point $\vec x_0$ where it intersects with $L_1$. This will be a point on $L_{map}$. So you can then define $L_{map} = \vec x_0 + \vec v t$. The angle of rotation $\theta$ is determined by taking the dot product of the direction vectors, since $$ \vec u \cdot \vec v = |\vec u||\vec v| \cos \theta.$$ If you're doing this algorithmically, there's a slight hitch since you have two solutions, $\pm\theta$, from solving for $\cos \theta$. The blunt way from here is to try both, and keep the solution where the resulting direction vector's dot product with the direction vector of $L_1$ is higher.
Interesting but short math papers?
Ivan Niven's proof of the irrationality of $\pi$. And Timothy Jones's longer article on the same subject that provides some intuition for Niven's proof.
Likelihood function of linear model with variance equal to zero
In general the likelihood for a normal model is given by $$ L(\beta; y_1, \dots, y_n) = \prod_{i=1}^n \mathcal{N}(y_i; \beta x_i, T) = \prod_{i=1}^n \frac{1}{\sqrt{2\pi T}} \exp \left[-\frac{(y_i - \beta x_i)^2}{2T}\right]. $$ Now we might be tempted to simply let $T \rightarrow 0$ in this formula, but before we do this we should rather ask ourselves: does $T = 0$ really make sense? No, it doesn't, and here's why: $T = 0$ implies that $\epsilon_i = 0$ for all $i$, which further means that $Y_i = \beta x_i$ for all $i$. In this case we may directly deduce $\beta$ as $$ \beta = (Y_2 - Y_1) / (x_2 - x_1), $$ so there really isn't any need for formulating the likelihood at all; the probability of observing $Y_2 = \beta x_2$ and $Y_1 = \beta x_1$ is 100%. There is zero probability of observing anything else. And while you can set up the formula for the likelihood, I don't think you should. Likelihood is defined only in the context of something probabilistic having happened. But for $T = 0$, all the probabilistic contributions vanish ($\epsilon_i = 0$), and thus our measurements were not the result of a probabilistic process. The likelihood isn't defined in this deterministic context.
$x^T A x > 0 \forall x \neq 0$ for a symmetric matrix $A$ with eigen values $\lambda_i > 0 \ \forall i$
Your approach to do the unitary diagonalization of $A$ is sound. After you have got $$ x^TAx=\sum_{i=1}^n \lambda_i \|u_i^T x\|_2^2 $$ it is only left to ensure that there is at least one term in the sum of squares that does not vanish for $x\ne 0$. If all terms are zero, i.e. $u_i^Tx=0$ for all $i$, then the vector $x$ is orthogonal to all basic vectors $u_i$, thus, to the whole space, which is possible only if $x=0$. The same thing with matrix notations is a bit shorter: denote $y=U^Tx$. Since $U$ is invertible we have $x\ne 0$ $\Leftrightarrow$ $y\ne 0$ and, hence, for $x\ne 0$ $$ x^TAx=y^T\Lambda y=\sum_{i=1}^n\lambda_i\|y_i\|^2\ge\lambda_\min\|y\|^2>0 $$ which proves that $A$ is positive definite.
How do I find the P value and Z score?
Easy hypothesis test problem. They all follow the same formula. First, since we're dealing with proportions, we know that the sample proportion $\hat P$ is approximately distributed as $\mathcal{N}(p, p(1-p)/n)$ - that is, it approximately follows a normal distribution with mean $p$ and variance $\frac{p(1-p)}{n}$. Anyway, enough theory. Here's what we do. Look at the claim - the previous study. Because of ambiguity (we can work around this, sort of), let's say $20\%$ of the target group owns an MP3 player. We'll test, in each of the two samples independently, whether the true ownership is different from $20\%$. For the first group of $550$ males (it says the sample sizes are the same), the sample proportion is $24\%$. We compute the z-score $z_M$ as $$ z_M = \frac{0.24 - 0.2}{\sqrt{ \frac{0.2 \cdot 0.8}{550} } } $$ and so $z_M \approx 2.35$. We'll now look in a table for the z-score $-2.35$, as we can just multiply the corresponding area by $2$ to give the p-value. The p-value for this group is about $0.019$. This is less than $0.05$, so we can reject the hypothesis that the true ownership is $0.2$ of the population. The same can be done for female owners. Can you take it from here?
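As a cross-check, here is a minimal SciPy sketch of the same calculation (using the figures quoted above):

    from math import sqrt
    from scipy.stats import norm

    p0, p_hat, n = 0.20, 0.24, 550
    z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
    p_value = 2 * norm.sf(abs(z))          # two-sided tail area
    print(round(z, 2), round(p_value, 3))  # approximately 2.35 and 0.019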
Identify $\sum\limits_{n=0}^\infty \left(x_n-L\right)$ where $x_{n+1}=\sqrt[3]{1+x_n}$ and $L^3=L+1$
I would write things in the following way. I start with your definitions $f(x)= \sqrt[3]{1+x} $ and, for the fixpoint, $L = \lim_{h \to \infty } x_h$. Let's now introduce the shorthand notations $ x' = x+ L$ and $x^" = x-L$. Also I would explicitly introduce the function $$ g(x) = f(x+L)-L $$ as is common usage in the discussion of functional iteration. A short notation is then $$ g(x) = f(x')^" \qquad \qquad \text{and } \qquad f(x) = g(x^")' \qquad \text{ .}$$ With this let us further denote the iterates of the $g()$ function with the common index-notation $$x_h:=g^{°h}(x) \qquad . $$ Note, by the way, that the power series for $g(x)$ has no constant term and we could generate power series for fractional iterates and so on. Iterates of $f(x)$ are now expressible by iterates of $g(x)$: $$f^{°h} (x) = g^{°h} (x-L)+L = g^{°h} (x^")' = (x^"_h)' \qquad .$$ Since $L$ is an attracting fixpoint for $f(x)$, the iteration of $g(x)$ (at least for a certain interval of $x$) converges to zero, and an iteration-series based on $g(x)$ might also be convergent - however, at this point this is not yet certain! Then I'd make the iteration-series depend on $g(x)$, such that $$ Q_g(x) = \sum_{h=0}^\infty x_h $$ and your definition for $Q(x)$ would be $$ Q(x) = Q_g(x^") $$ With these somewhat lengthy preliminaries it is easy to create a formal power series for $Q_g(x)$; I especially like the simple way using Carleman matrices and the Neumann series of such matrices. Let $G$ denote the Carleman matrix for the function $g(x)$, and let $V(x)$ denote a row-vector of consecutive powers of $x$, such that $V(x)=[1,x,x^2,x^3,...]$. Then, with a little matrix algebra, $$ \begin{array} {rll} V(x_0) \cdot G &= [ 1, x_1, (x_1)^2, (x_1)^3 ,...] &= V(x_1) \\ V(x_1) \cdot G &= [ 1, x_2, (x_2)^2 , (x_2)^3,...] &= V(x_2) \\ ... \end{array}$$ and $$ V(x) \cdot ( I + G + G^2 + ... +G^h) = \sum_{k=0}^h V(x_k) $$ The idea is now that the above partial geometric series in $G$ could possibly be expressed as the difference of two full geometric series. However, one copy of the full geometric series already provides the full iteration series of your question; and the shortcut for geometric series (with a matrix argument this is called a "Neumann series") then comes into play: $$ V(x) \cdot ( I + G + G^2 + ...) = V(x) \cdot (I-G)^{-1} = \sum_{k=0}^\infty x_k $$ For finite truncations of the (infinite) Carleman matrix the inverse $(I-G)^{-1}$ cannot be formed, because the top-left entry, and thus the whole first column, of $I-G$ is zero. But in the case of this function $g(x)$ we have a workaround, in that we insert the value $1$ in the top-left of $I-G$ (which I don't want to discuss more deeply here). This suggests giving that inverse the matrix name $Q_g$. (Note that by this operation $Q_g$ is not of Carleman type!) We can evaluate $$ V(x) \cdot Q_g = [1, \sum_{h=0}^\infty x_h , \sum_{h=0}^\infty x_h^2, \sum_{h=0}^\infty x_h^3, ... ] $$
Of course we are only interested in the second result, so we can write - using the second column of $Q_g$ only: $$ V(x) \cdot Q_g [,1] = \sum_{h=0}^\infty x_h = Q_g(x) $$ and the result you are looking for is then given by the power series evaluation $$ Q(x) = V(x^") \cdot Q_g [,1] $$ The formal power series arising from the dot product on the right-hand side begins as $$ Q_g(x) = \small{ 1.2344868 x - 0.034880742 x^2 + 0.0084538027 x^3 - 0.0024444375 x^4 \\ + 0.00077551090 x^5 - 0.00026057755 x^6 + 0.000091048921 x^7 \\ - 0.000032729496 x^8 + 0.000012021537 x^9 - 0.0000044908285 x^{10} \\ + 0.0000017006482 x^{11} - 0.00000065129523 x^{12} + 0.00000025178258 x^{13} \\ - 0.000000098117179 x^{14} + 0.000000038499197 x^{15} + O(x^{16}) } $$ Conclusion: Unlike in your earlier question, where in the $Q$-matrix of that function we found polynomial expressions leading to the exact result $Q(x) = 1/8 - x^2$ (see there), this $Q$-matrix here does not show such a polynomial expression, so I rather think there are no such (simple) closed forms. But - the columns in $Q_g$ and $Q$ provide coefficients for power series, so any characteristic in the result might be possible; I could not yet give a hint for the question of closed forms by this analysis alone. (Perhaps someone else can - at least we now have a power series for the function $Q(x)$.) For reference, here is the top-left of the matrix $Q_g$: 1 0 0 0 0 0 0 1.2344868 0 0 0 0 0 -0.034880742 1.0374302 0 0 0 0 0.0084538027 -0.010808060 1.0069005 0 0 0 -0.0024444375 0.0033709884 -0.0029721978 1.0013034 0 0 0.00077551090 -0.0011184147 0.0011374829 -0.00074777195 1.0002473 0 -0.00026057755 0.00038742656 -0.00042842516 0.00033959599 -0.00017732328 0 0.000091048921 -0.00013845479 0.00016195039 -0.00014353963 0.000093231469
Finding $p$ so that $[-1,-1/3]\not \in \text{range }f$ where $(p-x^2+1)f(x)=x-1, x\ne \pm\sqrt{p+1}$
Notice that $f(x) \to 0^-$ as $x\to \infty$, and $f(x) \to 0^+$ as $x\to -\infty$. Also, for $p\leq -1$ the denominator never vanishes, so $f$ is continuous on all of $\mathbb R$, and then the only way that your condition will be true is if the global minimum value is greater than $-\frac 13$. $$f'(x) = \frac{(p-x^2+1) - (x-1)(-2x)}{(p-x^2+1)^2} = \frac{x^2-2x+p+1}{(p-x^2+1)^2} =0 \implies x = 1 \pm \sqrt{-p}$$ Note that if $p\gt 0$ then there would be no local maxima/minima, which together with the limits at $\pm\infty$ forces a vertical asymptote, and hence our condition would not be satisfied. So, $p\leq 0$ (for $-1<p\leq 0$ the denominator does vanish, but the same critical values below still decide whether $[-1,-\frac 13]$ is avoided). Now it is easy to check that $f(x)$ attains its minimum value at $x=1+\sqrt{-p}$ and we want $$ f(1+\sqrt{-p}) \gt -\frac 13 $$ $$\frac{\sqrt{-p}}{2p - 2\sqrt{-p}} \gt -\frac 13$$ $$3\sqrt{-p} \lt 2\sqrt{-p} -2p$$ $$-p \lt 4p^2$$ $$\implies \color{blue}{p\lt -\frac 14}$$ and we are done.
Probability of Level Crossing
First, $$ \mathsf{P}\left(Y_n-X_n>\frac{1}{2}\right)=\mathsf{P}\left(Y_{n-1}>\frac{1}{2\rho}\right) $$ and $Y_n=(1-\rho L)^{-1}X_{n}=\sum_{k=0}^{\infty}\rho^k X_{n-k}$, where $L$ is the lag operator. Then, using the characteristic function of a Laplace r.v., $$ \varphi_{\sum_{k=0}^{N}\rho^k X_{n-k}}(t)=\prod_{k=0}^{N}\varphi_{X_{n-k}}\left(\rho^k t\right)=\exp\left\{-\sum_{k=0}^{N}\ln\left(1+\rho^{2k}\frac{t^2}{\lambda^2}\right)\right\}, $$ which does not converge to the characteristic function of a normal r.v. In fact, for $|t|<\lambda$ (using the Taylor series for $\ln$), $$ \varphi_{Y_n}(t)=\exp\left\{-\sum_{k=1}^\infty \frac{(-1)^{k+1}}{k}\frac{(t/\lambda)^{2k}}{(1-\rho^{2k})} \right\}, $$ which is approximately the c.f. of a normal r.v. for small values of $t$. So, in general, it's hard to find the stationary distribution of $Y$-s given Laplace innovations. The pdf of $Y_n$ ($h_Y$) can be found as the solution of $$h_Y(x)=\int f_X(x-\rho u)h_Y(u)du$$ via the following recursion $$h_{Y,n}(x)=\int f_X(x-\rho u)h_{Y,n-1}(u)du$$ starting with some arbitrary pdf $h_{Y,0}$. It can be shown that $h_{Y,n}\rightarrow h$ as $n\rightarrow\infty$. However, it seems that there is a typo in the question. For example, in part (a) the authors ask to compute the autocorrelation $R_X(k,j)$ which does not make sense because $X$-s are i.i.d. So, probably their intention was to specify the stationary distribution of $Y$-s. Then you can find the corresponding distribution of innovations by noticing that $$\varphi_{Y_n}(t)=\varphi_{Y_{n-1}}(\rho t)\varphi_{X_n}(t)$$ which yields $$\varphi_{X_n}(t)=\frac{1+\rho^2(t/\lambda)^2}{1+(t/\lambda)^2}=\rho^2+(1-\rho^2)\frac{1}{1+(t/\lambda)^2}$$ so that $X_n$ is a mixture of a degenerate r.v. ($\delta_0$) and a $\text{Laplace}(0,\lambda^{-1})$ r.v. with weights $\rho^2$ and $1-\rho^2$, respectively.
How to find linearly independent columns in a matrix
Given $A\in\mathbb{R}^{m\times n}$, $m\geq n$, compute the (economy) QR factorisation. This gives $$ A = QR, \quad R\in\mathbb{R}^{n\times n}. $$ Now if $\mathrm{rank}(A)<n$, the upper triangular matrix $R$ has a staircase profile with some of the "steps" of the staircase over more than one column. Select column indices $j_1,\ldots,j_k$ such that if you remove these columns from $R$, you obtain a nonsingular upper triangular matrix (you can consider it as making each step of the staircase of length 1). The columns $j_1,\ldots,j_k$ can be expressed as linear combinations of the remaining columns. Example: The red columns indicate the columns which are linear combinations of the others. $$ \begin{bmatrix} \times & \times & \color{red}\times & \times & \color{red}\times & \color{red}\times \\ 0 & \times & \color{red}\times & \times & \color{red}\times & \color{red}\times \\ 0 & 0 & \color{red}0 & \times & \color{red}\times & \color{red}\times \end{bmatrix} $$ Example: For the given matrix from the question, the QR factorisation is:

    Q =       0   -0.4472   -0.8944
              0   -0.8944    0.4472
        -1.0000         0         0

    R = -1.0000    2.0000   -1.0000
              0    4.4721   -2.2361
              0         0         0

So one can remove column 2 or column 3 to be left with a nonsingular upper triangular part (hence either column 2 or column 3 is a linear combination of the others).
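In practice a column-pivoted QR does this selection automatically; here is a minimal SciPy sketch (the example matrix is hypothetical, and the tolerance is one common heuristic choice):

    import numpy as np
    from scipy.linalg import qr

    A = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 1.0, 2.0]])        # third column = first + second

    Q, R, piv = qr(A, pivoting=True)       # columns permuted by decreasing importance
    tol = np.finfo(float).eps * max(A.shape) * abs(R[0, 0])
    rank = int(np.sum(np.abs(np.diag(R)) > tol))
    print(rank)                            # 2
    print(piv[:rank])                      # indices of a maximal independent set of columns
    print(piv[rank:])                      # columns expressible through the ones above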
Embedded submanifold of $\mathbb{S}^3$
Hint: Let $G:\mathbb{R}^4\to\mathbb{R}$ denote the function $G(x,y,z,t)=x^2+y^2+z^2+t^2-1$. We have the following: $T_p\mathbb{S}^3=\ker dG_p$ for $p\in\mathbb{S}^3$. $p\in\mathbb{S}^3$ is a critical point of $f$ if and only if $T_p\mathbb{S}^3\subset\ker d\tilde{f}_p$. By dimensional considerations, this is true if and only if $T_p\mathbb{S}^3=\ker d\tilde{f}_p$. Two linear functionals on a vector space are constant multiples of each other if they have the same kernel.
How to find the phase shift of this cosine graph?
An easy way to find the vertical shift is to find the average of the maximum and the minimum. For cosine that is zero, but for your graph it is $\frac{-1+3}2=1$. Therefore the vertical shift, $d$, is $1$. Notice that the amplitude is the maximum minus the average (or the average minus the minimum: the same thing). In your graph it is $3-1=2$ (or $1-(-1)=2$), as you already knew. This gives us a check on both the vertical shift and the amplitude. By the way, $a$ could be the negative of the amplitude, though it is usually taken to be the amplitude. The period is $\frac p{|b|}$, where $p$ is the period of the "base" function. The period of the graph is seen to be $3\pi$ and cosine's period is $2\pi$, so a positive value for $b$ is $\frac{2\pi}{3\pi}=\frac 23$. Note the period is not $b$ as you wrote. Again, $b$ could be negative but it is usually taken to be positive. An easy way to find the phase shift for a cosine curve is to look at the $x$ value of the maximum point. For cosine it is zero, but for your graph it is $3\pi/2$. That is your phase shift (though you could also use $-3\pi/2$). By the way, the formula for phase shift is not $c$, but $-\frac cb$ to the right. This is easier to see if you rewrite the formula as $$f(x)-d=a\cos\left[b\left(x-\left(-\frac cb\right) \right)\right]$$
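Putting those four numbers together, here is a quick numerical check of the reconstructed curve (a minimal sketch; the function below is simply $a\cos[b(x-\text{shift})]+d$ with the values found above):

    import numpy as np

    a, b, d, shift = 2, 2/3, 1, 3*np.pi/2
    f = lambda x: a * np.cos(b * (x - shift)) + d

    print(np.isclose(f(3*np.pi/2), 3.0))           # maximum value 3 at the stated shift
    print(np.isclose(f(0.0), -1.0))                # minimum value -1 half a period away
    print(np.isclose(f(1.0), f(1.0 + 3*np.pi)))    # period 3*pi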
Approximation of the Heaviside Function whose derivative has a compact support
I just answered my own question by looking at Robert Israel's answer. The function is $$ H_\delta(x)=\dfrac{1}{1+{e^{{\tfrac {4 x\delta}{ x^2-\delta^2 }}}}},\;\;{\mbox{for}}\;\;|x|<\delta. $$
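Here is a minimal numerical sketch of this function (my own addition; it extends the formula by $0$ for $x\le-\delta$ and $1$ for $x\ge\delta$, matching its one-sided limits, so that the derivative is supported in $[-\delta,\delta]$):

    import numpy as np

    def H(x, delta=1.0):
        x = np.asarray(x, dtype=float)
        out = np.where(x <= -delta, 0.0, 1.0)          # constant pieces outside (-delta, delta)
        inside = np.abs(x) < delta
        xi = x[inside]
        out[inside] = 1.0 / (1.0 + np.exp(4 * xi * delta / (xi**2 - delta**2)))
        return out

    print(H([-2.0, -0.5, 0.0, 0.5, 2.0]))   # ~[0, 0.065, 0.5, 0.935, 1]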
Theorem Cramer-Wold.
I don't understand what you mean by "lose generality" in your first question. The only "generality" in the proof that seems to me to be of any importance is that it must work for any $\ n$-vector and for any $\ n\ $, which it does. Putting $\ s=1\ $ is merely one step in the proof, a step which you can legitimately take regardless of what the values of $\ n\ $ and $\ X\ $ are, so it doesn't impose any restriction on them. A vector is not the same thing as a linear combination of its entries (unless the vector, $\ X\ $, is $1$-dimensional and the linear combination is $\ 1\cdot X\ $), which must be a scalar. So neither the distribution nor characteristic function of a random vector can be the same as those of a linear combination of that random vector's entries. The distribution of a random $\ n$-vector as well as its characteristic function will be functions of $\ n\ $ variables, whereas both the distribution and the characteristic function of any particular linear combination of the random vector's entries will be functions of just a single variable. The key point in the Cramer-Wold device is that you're required to know the one-dimensional distribution of not merely some particular linear combination of the random vector's entries, but of every such linear combination, and this enables you to deduce the random vector's distribution, a function of many variables, from that infinite family of functions of a single variable.
Picard Iteration Error
Assume $y^n(t)$ is the $n$th Picard iterate. If it also happens that $y^n(t)$ coincides with the $n$th Taylor polynomial of the exact solution, say $y(t)$, about $t_0$, then Taylor's theorem gives the error $$y(t)-y^n(t)=\frac{y^{(n+1)}(\xi)}{(n+1)!}(t-t_0)^{n+1}$$ for some $\xi$ between $t_0$ and $t$; to leading order this behaves like $\frac{y^{(n+1)}(t_0)}{(n+1)!}(t-t_0)^{n+1}$.
Completely reducible group representation
The single biggest mistake I see inexperienced math students make is ignoring the definitions. In particular, you haven't specified what a direct sum of representations means. If you did, you would assume that $\psi=\psi^{(1)}\oplus\psi^{(2)}$ (not necessarily irreducible) and apply the definition to show that equivalence implies that $\phi=\phi^{(1)}\oplus\phi^{(2)}$. By induction, you can deduce that if $\psi$ is a direct sum of irreducible representations, then so is $\phi$. To indicate how such a proof would go, I'll give my own (equivalent) definition of a completely reducible representation and use it as the basis of a proof. As an exercise, I encourage you to apply the definitions you've received in class to get the result on your own. Given a representation $\psi:G\to GL(W)$, call a subspace $X\subset W$ $\psi$-invariant if $\psi_g(X)\subset X$ for all $g\in G$. Then, we say that $\psi:G\to GL(W)$ is completely reducible if, whenever $X\subset W$ is $\psi$-invariant, there exists $Y\subset W$ such that $Y$ is $\psi$-invariant and $W=X\oplus Y$. Now, assume that $\phi:G\to GL(V)$ is equivalent to $\psi:G\to GL(W)$, so that $\psi_g=T^{-1}\phi_gT$ for some isomorphism $T:W\to V$. Assume that $\psi$ is completely reducible. To show that $\phi$ is completely reducible, appeal to the definition. Assume that $X\subset V$ is $\phi$-invariant. We will use $T$ to construct $Y\subset V$ satisfying the definition. To this end, let $X'=T^{-1}(X)\subset W$. Then, $X'$ is $\psi$-invariant since, for all $g\in G$, $\phi_g(X)\subset X$ and, therefore, $$ \psi_g(X')=\psi_g(T^{-1}(X))=T^{-1}(\phi_g(TT^{-1}(X)))=T^{-1}(\phi_g(X))\subset T^{-1}(X)=X'.$$ Since $\psi$ is completely reducible, there exists $Y'\subset W$ with $Y'$ $\psi$-invariant and $W=X'\oplus Y'$. Let $Y=T(Y')\subset V$. Then, $Y$ is $\phi$-invariant since $$ \phi_g(Y)=\phi_g(T(Y'))=T(T^{-1}(\phi_g(T(Y'))))=T(\psi_g(Y'))\subset T(Y')=Y. $$ It remains to show that $V=X\oplus Y$. By definition, this means that $X\cap Y=0$ and $X+Y=V$. Well, $$X\cap Y=T(T^{-1}(X\cap Y))\subset T(T^{-1}(X)\cap T^{-1}(Y))=T(X'\cap Y')=T(0)=0.$$ Finally, if $v\in V$, then $T^{-1}(v)\in W$, so $T^{-1}(v)=x'+y'$ for some $x'\in X'$ and $y'\in Y'$ (since $W=X'\oplus Y'$). Set $x=T(x')$ and $y=T(y')$. Then $$x+y=T(x')+T(y')=T(x'+y')=T(T^{-1}(v))=v.$$ Hence, $V=X+Y$ as required.
When can we directly invert the function across the inequality sign?
What we need is that $g$ is a strictly increasing function. This implies that for any numbers $x_1$ and $x_2$ in the domain of $g,$ we will have $g(x_1) \leq g(x_2)$ if and only if $x_1 \leq x_2.$ Now put $X$ for $x_1$ and $g^{-1}(y)$ for $x_2.$ So we have $g(X) \leq g(g^{-1}(y))$ if and only if $X \leq g^{-1}(y).$ That is, the event that $g(X) \leq y$ is exactly the same event as the event that $X \leq g^{-1}(y)$ (you cannot have an outcome that satisfies one of those inequalities and not the other), and it has the same probability. If $g$ were strictly monotonic but decreasing rather than increasing, then we would have $g(x_1) \leq g(x_2)$ if and only if $x_1 \geq x_2,$ and when making the substitutions $X$ for $x_1$ and $g^{-1}(y)$ for $x_2$ we end up with $g(X) \leq y$ if and only if $X \geq g^{-1}(y),$ which is not what we need for the theorem. According to the way I learned it (something like this), monotonic does not imply increasing. But perhaps the author of the theorem believed it does; or perhaps they had implied earlier that they would use the term "monotonic" to describe only monotonically increasing functions, never decreasing functions.
$X$ sequentally Compact implies that $X$ is complete
Assuming you mean a metric space $(X,d)$: If $x_{n_k} \to x$ then $$ d(x_n,x) \leq d(x_n,x_{n_k}) + d(x_{n_k},x) $$ The first term can be made small by using the Cauchy property. The second term gets small by convergence of the subsequence.
The accuracy from left to right and that from right to left of the floating point arithmetic sums
You get it from $$ fl(a+b)=(a+b)(1+\delta) $$ with $|δ|\leϵ$ the quasi random relative error of the floating point execution of the operation. In computing $s_n=\sum_{k=1}^n a_k$ via the partial sums $s_{m+1}=s_m+a_{m+1}$ each numerical step produces an error $Δs_m$ that accumulates as $$ s_{m+1}+Δs_{m+1}=fl(s_m+Δs_m+a_{m+1})=(s_m+Δs_m+a_{m+1})(1+\delta_m) $$ The errors are dominated by the first order terms, so that up to higher order terms after removing the exact terms one gets $$ Δs_{m+1}=Δs_m+s_{m+1}\delta_m $$ So in first order of approximation, the last error added to the sum is proportional to the sum up to date, which in total gives $$ Δs_n=s_2\delta_1+s_3\delta_2+…+s_n\delta_{n-1}. $$ The quasi random quantities $\delta_m$ are bound by a machine constant $ϵ$. The partial sums have the absolute sums as upper bound. Combining both one gets $$ |Δs_n|\le \sum_{k=1}^n (n+1-k)·|a_k|\,·\,ϵ $$ For the sum $\sum_{k=1}^n\frac1{k^2}$ summing from the front gives $a_k=\frac1{k^2}$ and the first order coefficient of the error term $$ \sum_{k=1}^n (n+1-k)·|a_k|=(n+1)\sum_{k=1}^n\frac1{k^2}-\sum_{k=1}^n\frac1k\le(n+1)\frac{\pi^2}6-\ln(n+1)=O(n) $$ Summing from the end is encoded as $a_k=\frac1{(n+1-k)^2}$ which gives the first order coefficient of the error term as $$ \sum_{k=1}^n\frac1{n+1-k}\le 1+\ln(n)=O(\log n) $$ The identities used are $$ \sum_{k=1}^n\frac1{k^2}\le\sum_{k=1}^\infty\frac1{k^2}=\frac{\pi^2}6 $$ and, as an easy consequence of $e^x\ge 1+x$ or bounds on $\int_{k-1}^k\frac1x\,dx$, $$ \ln(k+1)-\ln(k)\le \frac1k\le \ln(k)-\ln(k-1)\\ \ln(n+1)-\ln(1)\le\sum_{k=1}^n\frac1k\le 1+\ln(n)-\ln(1) $$
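To see the difference concretely, here is a minimal sketch (my own addition) that accumulates $\sum_{k=1}^n 1/k^2$ in single precision in both orders; the forward sum starts dropping the tiny tail terms once the running sum is of order $1$, while the backward sum retains them:

    import numpy as np

    n = 10**6
    k = np.arange(1, n + 1, dtype=np.float64)
    terms = (1.0 / k**2).astype(np.float32)

    forward = np.float32(0.0)
    for t in terms:                 # k = 1, 2, ..., n
        forward += t
    backward = np.float32(0.0)
    for t in terms[::-1]:           # k = n, ..., 2, 1
        backward += t

    exact = float(np.sum(1.0 / k**2))                    # double-precision reference
    print(abs(forward - exact), abs(backward - exact))   # forward error is much larger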
Can the set of singletons with positive measure in a probability space be uncountable?
This boils down to the fact that the sum of uncountably many positive reals is infinite. One can see that in the following way: let $A$ be the uncountable set of positive reals we are adding. Let $A_i = \{a\in A : a>2^{-i}\}$ for $i\in \mathbb{N}$. If any $A_i$ is infinite, then the desired sum is infinite. (As noted below, one has to worry about the measurability of $A_i$. This can be ensured by taking a countable subset of $A_i$ and using the countable additivity of the measure.) Thus $A$ is the union of countably many finite sets, and so is countable, contradicting the supposition.
Finding a common Lipschitz constant for a family of contractions
Take $X=\mathbb{R}$ with the usual metric. Let $h:\mathbb{R} \rightarrow \mathbb{R}$ be a compactly supported Lipschitz function with Lipschitz constant 1. Then $$ f(t, x) = \begin{cases} t \cdot h\big( x - \frac{1}{1-t} \big), &t\neq 1 \\ 0, &t=1 \end{cases} $$ is a counterexample. Indeed, we have (as the Lipschitz constant does not care about translating the argument) $$ Lip(f(t, \cdot)) = \begin{cases} Lip(t\cdot h(\cdot - \frac{1}{1-t})), &t\neq 1,\\ 0, &t=1 \end{cases} = \begin{cases} t \cdot Lip(h), &t\neq 1, \\ 0, & t=1 \end{cases} = \begin{cases} t, & t\neq 1, \\ 0, & t=1. \end{cases}$$ Hence, $Lip(f(t, \cdot))<1$, but $\sup_{t\in [0,1]} Lip(f(t,\cdot))=1$. If you want a counterexample on a compact metric space, you might take $X=[0,1]$ and $$ g: [0,1]\times [0,1] \rightarrow \mathbb{R}, \ g(t,x) = \begin{cases} t \cdot \sin\big( \frac{1}{t+t^2} x\big), &t\neq 0, \\ 0, &t=0. \end{cases}$$ Then we get the Lipschitz constant by taking the derivative and seeing that this yields a counterexample. Namely, we have $$\partial_x g(t,x)= \begin{cases} \frac{t}{t+t^2} \cos \big( \frac{1}{t+t^2} x\big),&t\neq 0, \\ 0,& t=0 \end{cases}$$ Thus, $$ Lip(g(t, \cdot))= \sup_{x\in [0,1]} |\partial_x g(t, x)| = \begin{cases} \frac{t}{t+t^2}, & t\neq 0,\\ 0, & t=0. \end{cases} $$ Hence, $Lip(g(t,\cdot))<1$ for all $t\in [0,1]$, but $$ \sup_{t\in [0,1]} Lip(g(t,\cdot)) = \sup_{t\in (0,1]} \frac{t}{t+t^2} =1. $$
Proof by Induction - How can I get familiar with it?
Remember that it is perfectly fine (although less elegant perhaps) to manipulate both the LHS and the expected RHS to reach an equality. In the proof provided in your question, they use a couple of identities that require some experience to use, and stuff like that might look like "magic" at times. However, working out the LHS to polynomial form gives. $$\begin{align} LHS &= \left[\frac{k(k+1)}2\right]^2+(k+1)^3\\ &=\frac14k^2(k+1)^2+(k+1)^3=\\ &=\{\text{...a couple of minutes of work...}\}\\ &=\frac14k^4+\frac32k^3+\frac{13}4 k^2+3k+1 \end{align}$$ And, expanding the expected "RHS" $\left[\frac{(k+1)(k+2)}2\right]^2$ will give you the same result and proof is done. It's a bit tedious and not as concise, but perfectly fine, and requires less experience.
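If one wants to outsource that "couple of minutes of work", a minimal SymPy sketch (my own addition) confirms the two quartics agree:

    from sympy import symbols, expand

    k = symbols('k')
    lhs = (k*(k + 1)/2)**2 + (k + 1)**3
    rhs = ((k + 1)*(k + 2)/2)**2
    print(expand(lhs - rhs))   # prints 0, so LHS and RHS are the same polynomial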
Using MCT twice to show the limit of an integral depending on $x$ and $n$
Let $f_n(x) = e^{-x^2} n \sin\left(\dfrac{x}{n}\right)\cdot \chi_{[0,n^2]}(x)$. On the interval $[0,1]$, this is a monotone increasing sequence of functions, and MCT can be applied. On the interval $[1,\infty)$, you have $$ \int^{n^2}_1 e^{-x^2} n \sin\left(\frac{x}{n}\right) \, dx = \int\limits_{[1,\infty)} f_n(x)\,dx $$ and $$ |f_n(x)| \le xe^{-x^2} $$ and $$ \int\limits_{[1,\infty)} xe^{-x^2} \, dx < \infty $$ so the dominated convergence theorem can be applied. I'm not at all sure this is the simplest way, but it should work.
Finding the Inverse Function of a Logarithmic equation
You have a function in $x$, $y(x)=-\log_2(x+3)+4\; \forall x>-3$. You are looking for a function in $x$, $z(x)>-3$ such that $y(z(x))=x$. Hence, $-\log_2(z(x)+3)+4=x$. Now get $z(x)$ out. $z(x)=2^{4-x}-3$.
Simplifying/computing an integral
Using @Andrei's comment, the expression simplifies to $$ \phi^*_n\phi_n= \begin{cases} \frac{2}{\pi}a(1-\cos(p))(\frac{q}{q^2-p^2})^2,& \text{if } n\text{ is even}\\ \frac{2}{\pi}a(1+\cos(p))(\frac{q}{q^2-p^2})^2, & \text{otherwise} \end{cases} $$ from which the expectation of $k^2$ can be solved using the identity $\int_{-\infty}^\infty p^2(1\pm\cos(p))(\frac{q}{q^2-p^2})^2dp=\mp \frac{\pi q}{2}(\sin(q)+q\cos(q))$, and the expectation of $k$ using the symmetry of the integrand. Thanks to him for pointing out my oversight.
Function $\mu$ such that is outer measure but not measure.
Yes, indeed $\mu$ in this case is only an outer measure. Some remarks, if you want to show that $\mu$ is not a measure it is sufficient to take $A_1 = \{0\}$ and $A_2 = \{1\}$, then $\mu(A_1 \cup A_2) = 1 \neq 2 = \mu(A_1) + \mu(A_2)$. A measure is a countably additive function so in particular it is additive. To show that $\mu \colon 2^{X} \rightarrow [0, \infty]$, where $X:=\{0,1\}$ is an outer measure you have to verify 3 conditions 1) $\mu(\varnothing) = 0$ Answer Satisfied trivially. 2) For any two subsets $A$ and $B$ of $X$, $$ A\subseteq B\quad\text{implies}\quad\mu(A) \leq \mu(B).$$ Answer Let $A$, $B$ be subsets of $X$, such that $A \subset B$, we can see that $\mu(B) =1$ or $\mu(B)=0$, if $\mu(B)=0$ then $A=\varnothing$ and $\mu(B)=\mu(A)=0$ so the condition is satisfied if $\mu(B) =1$ then $\mu(A) = 1$ or $\mu(A)=0$, so in all the cases $\mu(A) \leq \mu(B)$. 3) For any sequence $\{A_j\}$ of subsets of $X$ (pairwise disjoint or not), $$\mu\left(\bigcup_{j=1}^\infty A_j\right) \leq \sum_{j=1}^\infty \mu(A_j).$$ Answer LHS equals $0$ or $1$, RHS is always greater or equal, easy observation. You seemed to combine conditions 2) and 3), but since this exercise is easy I wouldn't do that.
What is $ \mathbb{N}^2 / R $?
This is the set of all equivalence classes of $R$.
Cartesian product of KC spaces
Half of the result follows from this generalization of the argument that I made in this answer. Proposition. If $X$ is a compact $KC$ space that is not Hausdorff, then $X\times X$ is not $KC$. Proof. Suppose that $\langle X,\tau\rangle$ is a compact $KC$-space that is not Hausdorff, and let $p$ and $q$ be two points of $X$ that cannot be separated by disjoint open sets. Let $\Delta=\{\langle x,x\rangle:x\in X\}$, the diagonal in $X\times X$; $\Delta$ is homeomorphic to $X$, so $\Delta$ is compact; I’ll show that $\Delta$ is not closed and hence that $X\times X$ is not $KC$. Let $z=\langle q,p\rangle\in(X\times X)\setminus\Delta$, and let $U$ be any open nbhd of $z$ in $X\times X$; there are $V_p,V_q\in\tau$ such that $p\in V_p$, $q\in V_q$, and $V_p\times V_q\subseteq U$. By the choice of $p$ and $q$ we know that $V_p\cap V_q\ne\varnothing$, so let $x\in V_p\cap V_q$; clearly $\langle x,x\rangle\in(V_p\times V_q)\cap\Delta\subseteq U\cap\Delta$, so $z\in(\operatorname{cl}\Delta)\setminus\Delta$. Thus, $\Delta$ is a compact subset of $X\times X$ that is not closed, and $X\times X$ is not $KC$. $\dashv$ Corollary. If the $KC$ space $X$ has a non-Hausdorff compact subset $K$, then $X\times X$ is not $KC$. Proof. $K\times K$ is a subspace of $X\times X$, and the $KC$ property is hereditary. $\dashv$. Now suppose that every compact subset of $X$ is Hausdorff, and let $K$ be a compact subset of $X^k$ for some finite $k\ge 2$. Let $K_i$ for $i=1,\dots,k$ be the projections of $K$ to the factors; then $K\subseteq\prod_{i=1}^kK_i$, and $\prod_{i=1}^kK_i$ is compact and Hausdorff. The compact set $K$ is therefore closed in $\prod_{i=1}^kK_i$. $X$ is $KC$, so each $K_i$ is closed in $X$, and $\prod_{i=1}^kK_i$ is closed in $X^k$. Thus, $K$ is a closed subset of a closed subset of $X^k$ and is therefore closed in $X^k$, which is therefore $KC$.
(0,n) can't map to (0,1) without dividing by some f(n).
Let $n = 1$. Then every element maps to itself. If $n \neq 1$, then let $f(x) = x/n : x \in (0, n)$. $0$ is not in the domain of $f$ because $0$ cannot map to $(0,1)$ surjectively.
How can I get eigenvalues of infinite dimensional linear operator?
Let $\mathcal{H}$ be an infinite-dimensional Hilbert space and let $S$ be any operator on $\mathcal{H}$ that has no eigenvalues [for example, take $\mathcal{H} = L^2[0, 1]$ and let $S$ be the operator on $\mathcal{H}$ defined by $(Sf)(x) = xf(x)$.] Now define an operator $T$ on $\mathcal{H} \times \mathcal{H}$ by $$T(f, g) = (0, Sg).$$ Then $0$ is the only eigenvalue of $T$, but $T$ is not nilpotent.
Finding the Nth element in a list of all possible numbers
Let $S$ be the list of the number of possible values for each position in the list, and let $n$ be the position in the list. Let $i$ initially be the length of the elements of the list. The process for obtaining the element is: $$q = \left\lfloor n \Big/ \prod_{x=0}^{i}S_x \right\rfloor, \qquad n \leftarrow \left(n \bmod \prod_{x=0}^{i}S_x\right)+1, \qquad i \leftarrow i-1,$$ and each successive value of $q$ is the $i$th digit of the element. Given an element, the position is $$\sum_{j=0}^{\text{length of the elements of the list}}\left(\prod_{x=0}^{j}S_x\right)\times(\text{$j$th digit of the element from left to right}-1).$$ Basically this problem is a degenerate case (a general case) of converting into bases, except that 1 is used for 0, 2 for 1 (so 1111 in this list would map to 0000), etc. I'll post a proof soon. Not sure if it is correct; will verify this later. Also the output/input might be zero-indexed (starting from zero).
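In other words, this is just mixed-radix conversion. Here is a minimal Python sketch of the idea (my own wording of the procedure above, using 0-based indices and 0-based digits; the 1-based variant in the answer just shifts everything by one):

    def index_to_element(n, sizes):
        # n-th element (0-based) of the list whose position i has sizes[i] possible values
        digits = []
        for s in reversed(sizes):       # rightmost position varies fastest
            digits.append(n % s)
            n //= s
        return list(reversed(digits))

    def element_to_index(digits, sizes):
        n = 0
        for d, s in zip(digits, sizes):
            n = n * s + d
        return n

    sizes = [2, 3, 2]                   # hypothetical: 2, 3 and 2 choices per position
    assert all(element_to_index(index_to_element(i, sizes), sizes) == i for i in range(12))
    print(index_to_element(7, sizes))   # [1, 0, 1]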
"Show that, if all the entries of A on or below the diagonal are zero, then A is nilpotent."
The eigenvalues of an $n\times n$ triangular matrix $A$ are the entries on its main diagonal, which in this case are all zero. By the Cayley-Hamilton theorem, this means that $A^n=0$.
Reconstructing a restricted distribution from its mean and standard deviation
I've attached a picture of various beta densities from the wiki on the Beta Distribution. It really matters what your density looks like beyond the first two moments you specified, but in general, this will ensure the values are restricted to a bounded interval. If you aren't concerned with some density living outside your interval you might consider using a Gamma Density. If you really have no idea what the density looks like and want to restrict to a bounded interval you determine the maximal entropy density. This will be the one subject to the constraints of a bounded interval with fixed mean and variance. See this wiki page on the Maximum Entropy Probability Distribution. In particular, the section headed "A theorem due to Boltzmann".
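If one does go the Beta route, here is a minimal method-of-moments sketch (my own addition; rescale to $[0,1]$ first if the target interval is $[a,b]$):

    # Fit Beta(alpha, beta) on [0, 1] to a prescribed mean m and standard deviation s.
    def beta_from_mean_sd(m, s):
        v = s**2
        assert 0 < m < 1 and v < m * (1 - m), "moments must be feasible for a Beta on [0, 1]"
        common = m * (1 - m) / v - 1
        return m * common, (1 - m) * common     # (alpha, beta)

    alpha, beta = beta_from_mean_sd(0.3, 0.1)
    print(alpha, beta)                          # 6.0 and 14.0: mean 0.3, sd 0.1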
Is $f:\mathbb R\to\mathbb R$ concave everywhere if, within each interval on the domain, $\exists$ a sub-interval s.t. $f$ is concave locally?
Let $q_k$ be an enumeration of $\mathbb{Q}$ and $\varepsilon_k=2^{-k}$. Define $B_k= B_{\varepsilon_k}(q_k)$ and $f$ on $\bigcup B_k$ as $f(x) = - x^2$ and $f= 1$ on the complement. Note that $\bigcup B_k$ is dense but has finite volume.