Give example where an outer measure is strictly less than the set function from which it is defined.
Try $X = \mathbb R$, $K = \{[a,b] : a,b \in \mathbb R\}$, $\lambda([a,b]) = (b-a)^2$. Then $\lambda([0,2]) = 4$, but $[0,2] \subseteq [0,1] \cup [1,2]$ implies $\mu([0,2]) \le \lambda([0,1]) + \lambda([1,2]) = 2 < \lambda([0,2])$.
How to find the integral $\int_{-\infty}^{\infty}\frac{dx}{1+ae^{bx^2}}$
I will assume that $a, b$ are positive constants, in order to circumvent singularity issues. By the substitution $z = \sqrt{b} \, x$, the integral in question is equal to $$ \frac{1}{\sqrt{b}} \int_{-\infty}^{\infty} \frac{e^{-z^2}}{a + e^{-z^2}} \; dz.$$ From the identity $$ 1 + x^{2n+1} = (1 + x)(1 - x + \cdots - x^{2n-1} + x^{2n}), $$ we obtain $$ \frac{1}{1 + x} = 1 - x + \cdots - x^{2n-1} + x^{2n} - \frac{x^{2n+1}}{1 + x}.$$ Now we temporarily assume further that $a > 1$, so that $\alpha = a^{-1} \in (0, 1)$; since $$\frac{e^{-z^2}}{a + e^{-z^2}} = \frac{1}{a}\cdot\frac{e^{-z^2}}{1 + \alpha e^{-z^2}},$$ it suffices to evaluate the integral of the latter. Applying the expansion above with $x = \alpha e^{-z^2}$, $$\begin{align*} \int_{-\infty}^{\infty} \frac{e^{-z^2}}{1 + \alpha e^{-z^2}} \; dz &= \int_{-\infty}^{\infty} \left( \sum_{k=1}^{2n+1} (-1)^{k-1} \alpha^{k-1} e^{-kz^2} - \frac{\alpha^{2n+1}e^{-(2n+2)z^2}}{1 + \alpha e^{-z^2}} \right) \; dz \\ &= \sum_{k=1}^{2n+1} (-1)^{k-1} \alpha^{k-1} \int_{-\infty}^{\infty} e^{-kz^2} \, dz - \alpha^{2n+1} \int_{-\infty}^{\infty} \frac{e^{-(2n+2)z^2}}{1 + \alpha e^{-z^2}} \; dz \\ &= \sum_{k=1}^{2n+1} (-1)^{k-1} \alpha^{k-1} \sqrt{\frac{\pi}{k}} - \alpha^{2n+1} \int_{-\infty}^{\infty} \frac{e^{-(2n+2)z^2}}{1 + \alpha e^{-z^2}} \; dz \end{align*}$$ Now taking $n\to\infty$, the remainder term vanishes (it is bounded by $\alpha^{2n+1}\sqrt{\pi}$). Hence we have $$ \int_{-\infty}^{\infty} \frac{e^{-z^2}}{1 + \alpha e^{-z^2}} \; dz = \sum_{k=1}^{\infty} (-1)^{k-1} \alpha^{k-1} \sqrt{\frac{\pi}{k}} = -a \sqrt{\pi} \, \mathrm{Li}_{1/2} \left( -\tfrac{1}{a}\right),$$ where $$ \mathrm{Li}_{s}(z) = \sum_{n=1}^{\infty} \frac{z^n}{n^s}$$ is the polylogarithm of order $s$, primarily defined on $|z| < 1$. Although we have proved this identity only for $a > 1$, both sides are analytic in $a$, so by the uniqueness of analytic continuation the equality holds for all $a > 0$. It has a special value at $a = 1$, given by $$ \int_{-\infty}^{\infty} \frac{1}{1 + e^{z^2}} \; dz = -\sqrt{\pi} \, \mathrm{Li}_{1/2}(-1) = \sqrt{\pi} (1 - \sqrt{2}) \zeta \left( \tfrac{1}{2} \right).$$
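The final identity is easy to sanity-check numerically. This is a minimal sketch (stdlib only, with the arbitrary sample value $\alpha = 1/2$): the trapezoidal rule is spectrally accurate for smooth, rapidly decaying integrands like this one, and the alternating series converges geometrically.

```python
import math

# Check:  ∫ e^{-z^2} / (1 + α e^{-z^2}) dz  =  Σ_{k≥1} (-1)^{k-1} α^{k-1} √(π/k)
alpha = 0.5  # sample value, i.e. a = 2

def integrand(z):
    e = math.exp(-z * z)
    return e / (1.0 + alpha * e)

# Trapezoidal rule on [-8, 8]; the integrand is below e^{-64} outside.
n, lo, hi = 16000, -8.0, 8.0
h = (hi - lo) / n
lhs = h * (sum(integrand(lo + i * h) for i in range(1, n))
           + 0.5 * (integrand(lo) + integrand(hi)))

# Partial sum of the alternating series; terms shrink like α^k.
rhs = sum((-1) ** (k - 1) * alpha ** (k - 1) * math.sqrt(math.pi / k)
          for k in range(1, 80))

print(lhs, rhs)  # the two values agree to high precision
```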
Total space of a finite rank locally free sheaf, Vakil's 17.1.4 & 17.1.G
First of all, pick a basis $\{v_k^i\}$ for $\mathcal{F}$ on each trivializing open set $U_i$, and let $\{x_k^i\}$ be its dual basis, which trivializes $\mathcal{F}^\vee$. Over some open affine subset of an intersection $U_{ij}$, the total space is $\DeclareMathOperator{spec}{Spec} \spec(A[x_1^i,\ldots ,x_n^i])\cong \spec(A[x_1^j,\ldots ,x_n^j])$, where the isomorphism is the transition function $\phi_{ij}$ we are looking for. It is induced by the map of rings in the opposite direction $$ \phi_{ij}^*:A[x_1^j,\ldots ,x_n^j]\to A[x_1^i,\ldots ,x_n^i] $$ which maps a generator $x_k^j$ to $T_{ij}^t(x_k^j)$. If we let $v$ be a point in the total space, then $\phi_{ij}(v)$ is such that $x_k^j(\phi_{ij}(v))=\phi_{ij}^*(x_k^j)(v)$, just by how a ring homomorphism induces a map of schemes. Now the fact that $x_k^j\circ \phi_{ij}=\phi_{ij}^*(x_{k}^j)$ for every basis element implies that $\phi_{ij}$ is the dual map to $\phi_{ij}^*$; that is, its matrix with respect to the bases $\{v_k^i\}$ and $\{v_k^j\}$ will be the transpose of $T_{ij}^t$, which gives us $T_{ij}$ back. Summing up: the $x_k$'s form a basis for the dual of the fiber of the total space, and if the transition function acts like $T_{ji}^t$ on the dual, it must act on the original space as $T_{ij}$.
A basic power problem.
Simply, $a^{b^c}$ means you solve for $b^c$ and then raise $a$ to that power. Be sure not to confuse $(a^b)^c$ and $a^{b^c}$. For example: $$2^{3^2} = 2^9 = 512 \color{red}{\neq (2^3)^2 = 2^6 = 64}$$
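Incidentally, the power operator in most programming languages that have one follows the same right-associative convention; in Python, for instance:

```python
# Python's ** is right-associative, matching the convention a^(b^c):
assert 2 ** 3 ** 2 == 2 ** (3 ** 2) == 512
# parenthesizing the other way gives (a^b)^c, a different number:
assert (2 ** 3) ** 2 == 64
```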
Product or chain rule
damn it now that's all wrong... Try 2: $$\frac{\partial}{\partial y'}\frac{(y')^2}{x^3}=\frac{2y'}{x^3}$$ and $$\frac{d}{dx}\left(\frac{2y'}{x^3}\right)=\frac{2y''x^3-3x^2(2y')}{x^6}$$ I'll make you simplify since it got all messed up the first time :)
How do I prove $\Vert\mathbf x \Vert_{\ell^q} \leq \Vert \mathbf x \Vert_{\ell^p}$.
Suppose $p \le q$ and $\|y\|_p \leq 1$. Then $|y_i| \leq 1$ for all $i$, so $\sum |y_i|^{q} \leq \sum |y_i|^{p}\leq 1$. Now apply this to $y =\frac x {\|x\|_p}$ (assuming $x \neq 0$). This gives the desired inequality. [You get $\|\frac x {\|x\|_p}\|_q \leq 1$. Just pull $\|x\|_p$ out of the norm on the left side and bring it to the right side.]
Compute the product of cycles that are permutations of $S_8$
Your answer is correct! What seems to be bothering you is that here is more than one way to write a permutation as a product of cycles, but this is just true. The fact that your permutation can be written as one cycle $(145872)$ in no way implies that it can't be written as the product of 3. (It's true that the "disjoint cycle" decomposition is unique up to moving the factors around, but most cycle decompositions are not unique.)
Riemann-Lebesgue Lemma And Integral Having $\sin(\cdot)$
Hint: Set $g(t) = \frac{f(t)}{t}$ and observe that $$\lim_{w \to \infty}\int_{a}^{b} g(t)e^{i w t}\,dt = 0.$$
how to find sum of the given series
HINT: $$\ln(1+x)=x-\dfrac{x^2}2+\dfrac{x^3}3-\dfrac{x^4}4+\cdots$$ $$\ln(1-x)=-x-\dfrac{x^2}2-\dfrac{x^3}3-\dfrac{x^4}4-\cdots$$ $$\ln(1+x)-\ln(1-x)=?$$ Keep in mind: the Taylor series for $\log(1+x)$ and its interval of convergence.
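A quick numerical check of the first expansion at a sample point inside the interval of convergence (this does not give away the sum being asked for):

```python
import math

# ln(1+x) = Σ_{k≥1} (-1)^{k-1} x^k / k, checked at x = 0.3
x = 0.3
partial = sum((-1) ** (k - 1) * x ** k / k for k in range(1, 60))
assert abs(partial - math.log(1 + x)) < 1e-12
```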
How to show that $C$ is countable?
Let $C_n := \{B\in C~\vert~ \lambda(B) \geq \frac{1}{n}\}$. Then $C$ is the union of all $C_n$ and $C_n$ is finite for each $n$ (the cardinality is bounded by $n$ to be precise).
Order of integration for $\int_{0}^{T-N}\left\{ \int_{b}^{b+N}f(b,t)dt\right\} db$.
The bounds of integration are not an issue for the Fubini-Tonelli theorem since $$ \int_{0}^{T-N}\left\{ \int_{b}^{b+N}f(b,t)dt\right\} db=\int_{\mathbb{R}^{2}}f(b,t)\cdot \boldsymbol{1}_{[b,b+N]}(t) \cdot \boldsymbol{1}_{[0,T-N]}(b)\,d(t,b). $$
bilinear transformation $\phi: U\times V\to W$ such that $\operatorname{Im}(\phi)=\{\phi(u,v): u\in U, v\in V\}$ is not a subspace of $W$
As far as I know there is no technique, but you might want to consider the case $U=V=\Bbb{R}^2$ and the map $\phi$ that sends a pair to the four coordinate products. That is to say $$\phi:\ \Bbb{R}^2\times\Bbb{R}^2\ \longrightarrow\ \Bbb{R}^4: ((x_1,y_1),(x_2,y_2))\ \longmapsto\ (x_1x_2,x_1y_2,y_1x_2,y_1y_2).$$
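To see concretely why this image fails to be a subspace, note that every value of $\phi$ satisfies the "rank one" relation $(x_1x_2)(y_1y_2) = (x_1y_2)(y_1x_2)$, while a sum of two such values can violate it. A tiny sketch:

```python
# φ((x1,y1),(x2,y2)) = (x1x2, x1y2, y1x2, y1y2)

def phi(u, v):
    (x1, y1), (x2, y2) = u, v
    return (x1 * x2, x1 * y2, y1 * x2, y1 * y2)

def rank_one_relation(w):
    a, b, c, d = w
    return a * d - b * c  # zero on every point of the image

p = phi((1, 0), (1, 0))   # (1, 0, 0, 0)
q = phi((0, 1), (0, 1))   # (0, 0, 0, 1)
s = tuple(x + y for x, y in zip(p, q))  # (1, 0, 0, 1)

assert rank_one_relation(p) == 0 and rank_one_relation(q) == 0
assert rank_one_relation(s) == 1  # so s is not in the image
```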
Frobenius norm and trace
$$G=UDU^T$$ $$GG'=UD^2U^T$$ $$(GG')^\frac12=UDU^T=G$$ Hence $$\operatorname{trace}((GG')^\frac12)=\operatorname{trace}(G)=\operatorname{trace}(D)=\sum_{i=1}^nd_{ii}$$ but $$\operatorname{trace}(GG')=\sum_{i=1}^n d_{ii}^2$$ hence $$\left( \operatorname{trace}(GG')\right)^\frac12=\sqrt{\sum_{i=1}^n d_{ii}^2}$$ but we know that the $1$-norm and $2$-norm (of the vector of eigenvalues) need not be equal. Counterexample: $$G = \begin{bmatrix} 1 & 0\\ 0 & 2 \end{bmatrix}$$
Can someone help me to find boundary points of this set?
The point $1$ is indeed in the boundary: any open interval around $1$ contains $1$ itself (which is in the complement) and points $1+r$ for small $r>0$, which lie in the set. Alternatively: $A= (0,1) \cup (1,2)$ is open (a union of open intervals) and its closure is $[0,2]$, so the boundary, which is the set difference of these two, is $\{0,1,2\}$.
$x_{n+1}=\frac{2x_n+3f(x_n)}{5}$ showing $f$ has a fixed point
By the mean value theorem, for any $u, v \in \Bbb R$, we can find $t \in (u, v)$ so that: $$ \frac{f(u) - f(v)}{u - v} = f'(t) $$ Given that $\left|f'(t)\right| \le A < 1$, it follows that: $$ \left|f(u) - f(v)\right| \le A |u - v| $$ Hence, $f$ is a contraction. Now define $g(t) = \dfrac{2t + 3f(t)}{5}$. By a simple calculation we find that: $$ \left|g(u) - g(v)\right| = \left|\dfrac{2(u-v) + 3\left(f(u) - f(v)\right)}{5}\right| \le \dfrac{2 + 3A}{5} |u - v| $$ Since $A < 1$, $\dfrac{2 + 3A}{5} < 1$ and $g$ is also a contraction. Now apply the Banach fixed point theorem to $x_{n+1} = g(x_n)$ to arrive at the desired result.
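A small numerical sketch of the argument, with the hypothetical choice $f(t) = \cos(t)/2$ (which satisfies $|f'(t)| \le 1/2 = A < 1$). Note that a fixed point of $g$ satisfies $5t = 2t + 3f(t)$, i.e. $f(t) = t$, so iterating $g$ locates the fixed point of $f$:

```python
import math

def f(t):
    # assumed sample contraction with |f'| <= 1/2
    return math.cos(t) / 2

def g(t):
    return (2 * t + 3 * f(t)) / 5

# iterate x_{n+1} = g(x_n); contraction factor is (2 + 3A)/5 = 0.7
x = 0.0
for _ in range(200):
    x = g(x)

assert abs(f(x) - x) < 1e-12  # x is (numerically) a fixed point of f
```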
If $X_1, X_2,Y$ are three random variables, and $X_1$ is independent of $X_2$, is it possible to reduce $E[Y\mid X_1, X_2]$?
Not even close. For example, suppose $Y$ is always between $2$ and $3.$ Then $\operatorname{E}(Y)$ is between $2$ and $3$ and $\operatorname{E}(Y\mid \cdots\cdots)$ is between $2$ and $3$ regardless of what the condition is. Since $\operatorname{E}(Y\mid X_1)$ is always between $2$ and $3$, and so is $\operatorname{E}(Y\mid X_2)$, we have $\operatorname{E}(Y\mid X_1) \cdot\operatorname{E}(Y\mid X_2)$ between $4$ and $9,$ although $\operatorname{E}(Y\mid X_1,X_2)$ must be between $2$ and $3.$
$ P[A] \leq P[A |\bar B] + P[B] $
$P(A) = P(A\cap B) + P(A\cap \overline{B}) \leq P(B) + P(A|\overline{B})P(\overline{B}) \leq P(B) + P(A|\overline{B})$
Question on lines of regression
Given $b_{yx}=\dfrac{1}{7}$ and $b_{xy}=c$. Using the property of the regression coefficients, \begin{eqnarray*} r^2 &=& b_{yx}\times b_{xy} = \left(\dfrac{1}{7}\right)c,\\ c &=& 7\cdot r^{2}. \end{eqnarray*} Since $r^2$ lies between $0$ and $1$, we get $0\leq c \leq 7$.
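The property used above, $b_{yx}\cdot b_{xy} = r^2$, can be verified on made-up data (the sample values below are arbitrary):

```python
# slopes of the two regression lines multiply to the squared correlation
xs = [1.0, 2.0, 4.0, 5.0, 7.0]
ys = [2.0, 3.0, 3.5, 6.0, 8.0]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
sxx = sum((x - mx) ** 2 for x in xs)
syy = sum((y - my) ** 2 for y in ys)
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))

b_yx = sxy / sxx          # slope of the regression of y on x
b_xy = sxy / syy          # slope of the regression of x on y
r2 = sxy ** 2 / (sxx * syy)

assert abs(b_yx * b_xy - r2) < 1e-12
```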
Given a number of vertices and a radius, calculate vertex coordinates for a regular polyhedron
This is not a well-defined question, as there is not necessarily a notion of "regular polyhedron". You might require that all faces are equivalent; or all edges are equivalent; or all vertices; or some combinations of these. These will give you sets of Platonic solids, Archimedean solids, Catalan solids, prisms, bipyramids, antiprisms, or trapezohedra, depending on which conditions you impose. But in general, there is not necessarily any particularly symmetric polyhedron with a given number of vertices (or faces, or edges). The first 3 sets of solids above are highly symmetric and probably similar to what you want, but those families are finite and irregular: you would need to hard-code every formula, essentially. The other 4 sets will generalize to arbitrary $n$ (or $2n$, or so) but will likely not have the symmetries that you want. Their coordinates are given by formulas very similar to what you have for 2D. In case it is not clear why your question is ill-posed, consider: what kind of answer would you want from your formula for the 5-vertex case with radius 1?
Does x approach some finite value according to this differential equation?
I don't think this is the case. The value of the slope isn't bounded by the circle - it's saying that the value of the slope is ONLY proportional to the distance away from the origin. That is, at every point on the circle $x^2 + y^2 = r^2$, the slope is $r^2$. From this picture, you see that as the distance from the origin increases, the slope increases more and more rapidly. So this makes it seem that there ought to be an asymptote.
Finding tangent plane to $2$ dimensional submanifold of $\mathbb{R}^4$
"The gradient is the special case of the derivative for a function $f: \mathbb{R}^n \to \mathbb{R}^m$" i.e when $m =1$ we have $f'=\nabla f$. In $\mathbb{R}^4$, your submanifold $S$ is the graph of $\phi = (g_1,g_2)$ i.e $S= \{(x,y, \phi(x,y))\}$. We will have to use the derivative of the map $\Phi(x,y) = (x,y,g_1(x,y),g_2(x,y))$ i.e $D_{(x,y)}\Phi: T_{(x,y)}\mathbb{R}^2 \to T_pS$. Now just recall the definition and so in this case, $$ D_q\Phi = \begin{pmatrix} \nabla(x)\\ \nabla (y) \\ \nabla g_1 \\ \nabla g_2 \end{pmatrix}$$ where $q = \Phi^{-1}(p)$. For instance, $\nabla (x) = \begin{pmatrix} \frac{\partial (x)}{\partial x} \\ \frac{\partial (x)}{\partial y} \end{pmatrix} = (1,0)^T$. Here the tangent plane is the image of this map and translated by $q$ i.e, $$ D_q\Phi \begin{pmatrix} x \\ y \end{pmatrix} + q$$
Uniform convergence of series $\displaystyle \frac{x^{n}}{n}$
Prove it by contradiction. If the series were uniformly convergent, then there would exist $N$ such that $\sum\limits_{n=N_1}^{N_2}\frac {x^{n}} n <\epsilon$ whenever $x \in [0,1)$ and $N_2 >N_1>N$. Let $x$ increase to $1$ in this to see that $\sum\limits_{n=N_1}^{N_2}\frac 1 n\leq \epsilon$ whenever $N_2 >N_1>N$. But this is false, since $\sum \frac 1n$ is divergent.
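The failure of uniformity can be seen numerically: for any fixed $N$, pushing $x$ toward $1$ makes the block sum $\sum_{n=N+1}^{2N} x^n/n$ approach $\sum_{n=N+1}^{2N} 1/n \approx \ln 2$, so it never gets uniformly small. A sketch with the sample choice $N = 100$:

```python
import math

def tail(x, N):
    # the block sum Σ_{n=N+1}^{2N} x^n / n
    return sum(x ** n / n for n in range(N + 1, 2 * N + 1))

N = 100
x = 1 - 1e-6          # x < 1, chosen close enough to 1
assert tail(x, N) > 0.5   # the block sum stays bounded away from 0

# sanity check: Σ_{n=N+1}^{2N} 1/n is close to ln 2
harmonic_block = sum(1.0 / n for n in range(N + 1, 2 * N + 1))
assert abs(harmonic_block - math.log(2)) < 0.01
```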
Derive the total derivative for a function
The total derivative of $f$ is given by $$df=\frac{\partial f}{\partial x}dx+\frac{\partial f}{\partial y}dy+\frac{\partial f}{\partial z}dz,$$ So you get: $$df=yz\:dx+xz\:dy+xy\:dz.$$
Limit a complex contour integral
Since $z_0$ is a simple pole of $f$, we can write $$f(z)=\frac{g(z)}{z-z_0},$$ where $g$ is holomorphic in a disc centered at $z_0$ and $g(z_0)\neq 0$. Note that $g(z_0)=\lim_{z\to z_0}(z-z_0)f(z)=\mathrm{Res}_{z_0}f$, so it suffices to show that $$\lim_{r\to 0}\int_{\gamma_r}\frac{g(z)}{z-z_0}\,dz=i\alpha g(z_0).$$ By direct calculation, we have $$i\alpha g(z_0)=\int_{\gamma_r}\frac{g(z_0)}{z-z_0}\,dz$$ for every arc of circle $\gamma_r$ centered at $z_0$, of radius $r$ and angle $\alpha$. Combining the two equalities, it suffices to show that $$\lim_{r\to 0}\int_{\gamma_r}\frac{g(z)-g(z_0)}{z-z_0}\,dz=0.$$ But this equality is easy to prove: since $g$ is differentiable at $z_0$, the quotient $\frac{g(z)-g(z_0)}{z-z_0}$ is bounded in a small disc centered at $z_0$, and the length of $\gamma_r$ tends to $0$ as $r\to 0$.
Calculating $ \int_{\partial B_2(0)} \frac{\mathrm{d}z}{(z-3)(z^{13}-1)} $
Since $\left|\frac1{(z-3)\left(z^{13}-1\right)}\right|\sim\frac1{|z|^{14}}$, the integral over a circle of radius $R>3$ tends to $0$ as $R\to\infty$; since that integral equals $2\pi i$ times the sum of all the residues, the residues must sum to $0$. Next, we have the residue at the simple singularity $z=3$ to be $$ \operatorname*{Res}_{z=3}\left(\frac1{(z-3)\left(z^{13}-1\right)}\right)=\frac1{3^{13}-1} $$ The sum of the residues of the other singularities (the thirteenth roots of unity, all inside $\partial B_2(0)$) must therefore be $-\frac1{3^{13}-1}$. Thus, $$ \int_{\partial B_2(0)}\frac{\mathrm{d}z}{(z-3)\left(z^{13}-1\right)}=-\frac{2\pi i}{3^{13}-1} $$
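The value can be sanity-checked numerically: the trapezoidal rule on a circle is spectrally accurate for integrands analytic in an annulus around the contour, so a direct discretization of $z = 2e^{it}$ already agrees with $-2\pi i/(3^{13}-1)$ to machine precision.

```python
import cmath, math

def f(z):
    return 1.0 / ((z - 3) * (z ** 13 - 1))

# trapezoidal rule around |z| = 2 (no singularities on the contour)
n = 4096
total = 0j
for k in range(n):
    t = 2 * math.pi * k / n
    z = 2 * cmath.exp(1j * t)
    dz = 2j * cmath.exp(1j * t) * (2 * math.pi / n)
    total += f(z) * dz

expected = -2j * math.pi / (3 ** 13 - 1)
assert abs(total - expected) < 1e-12
```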
Complex Analysis ~ Unit Disc
Hint: If $f(z)$ has poles at $\{a_j\}_{j=1}^k$ (counted with their multiplicities), then $$g(z)=\left(\prod_{j=1}^k\frac{z-a_j}{1-\overline{a_j}z}\right)\cdot f(z)$$ has no poles, and since the modulus of each factor in the product is $1$ on the boundary, we have $|g(z)|=|f(z)|$ there, as desired. Each factor above is known as a Blaschke factor, and the product as a Blaschke product; they can be read about at Wolfram MathWorld or Wikipedia.
How to partially differentiate the integral $\int_{0}^{x/\sqrt{t}}\exp(-\xi^2/4)d\xi$ w.r.t $x$ and $t$?
HINT Think of $$ f(z) = \int_0^z e^{-s^2/4}ds $$ with the property $$f'(z) = e^{-z^2/4}.$$ Note that your $u(x,t) = f(x/\sqrt{t})$. Based on this and using the chain rule, can you figure out the partials you are asking for?
A mapping $\phi:\mathfrak{so}(3) \to \mathbb{R}^{3}$ such that $\phi(gXg^{-1}) = g \phi(X)$
Consider a rotation of $\mathbb R^3$ about the axis $\vec n $ through an angle $\theta$. The matrix for this rotation is $\exp \left( \theta\phi^{-1}(\vec n) \right)$, where $\phi$ is as defined in your question. Now do a change of basis: rotate your basis of $\mathbb R^3$ by some $g \in SO(3)$. Written with respect to the new basis, the axis of our original rotation is $g \vec n$. But by the usual change-of-basis formula, the matrix for our original rotation with respect to the new basis is $g\exp \left( \theta \phi^{-1}(\vec n) \right)g^{-1} = \exp \left( \theta\, g\phi^{-1}(\vec n)g^{-1} \right)$. Hence $\phi^{-1}(g \vec n) = g \phi^{-1}(\vec n) g^{-1}$, which is exactly the equivariance $\phi(gXg^{-1}) = g\phi(X)$. [By the way, how do we know that $\exp \left( \theta\phi^{-1}(\vec n) \right)$ is the correct matrix for the rotation? Because $\phi^{-1}(\vec e_x), \phi^{-1}(\vec e_y), \phi^{-1}(\vec e_z)$ are the generators of the $\mathfrak{so}(3)$ Lie algebra corresponding to infinitesimal rotations about the $x, y$ and $z$ coordinate axes in the fundamental representation of $\mathfrak{so}(3)$.]
How to express $k(x,y)=e^{x^Ty}$ as $\langle\phi(x),\phi(y)\rangle$?
The thing you want to prove is false for an actual inner product, the inner product of the zero vector with itself should be zero.
Need help figuring out how to keep a ratio constant between two functions
I'm going to call the damages $D_1$ and $D_2$, the maximum health points $H_1$ and $H_2$. If I'm understanding the question, then one of your difficulties might be using variable names that are too long to do efficient pencil-and-paper algebra! For programming it makes sense, but mathematically, long names can be rough. From what you've said, we have $$\frac{D_1}{H_1} = \frac{D_2}{H_2}.$$ We can rearrange this in a number of ways. To get a value for $D_2$, we can multiply both sides by $H_2$, so that $$D_2 = H_2 \frac{D_1}{H_1} = D_1 \frac{H_2}{H_1},$$ where this last formula highlights that $D_1$ and $D_2$ are always proportional: To go from $D_1$ to $D_2$, just take $D_1$ and multiply by that max health ratio, $\frac{H_2}{H_1}$. The damage taken scales just like maximum health does: To go from $H_1$ to $H_2$, we multiply by $\frac{H_2}{H_1}$, which is the same way to go from $D_1$ to $D_2$. To go backwards, from $D_2$ to $D_1$, you'll multiply by the reciprocal of the above ratio; you have $$D_1 = D_2 \frac{H_1}{H_2}.$$ If I've misunderstood the question or it's the algebra itself giving you trouble, let me know.
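A tiny sketch with made-up numbers (the names `H1`, `H2`, `D1` are just the shorthands introduced above; the values are arbitrary):

```python
# damage scales by the same max-health ratio in both directions
H1, H2 = 200.0, 350.0   # max health of the two characters (assumed values)
D1 = 40.0               # damage dealt to the first character

D2 = D1 * H2 / H1       # forward: scale by H2/H1
back = D2 * H1 / H2     # backward: scale by the reciprocal

assert abs(D2 / H2 - D1 / H1) < 1e-12   # the ratio is preserved
assert abs(back - D1) < 1e-12           # the two formulas are inverses
```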
A module over an algebra. Is it a vector space?
Yes. A module $M$ over $A$ is just a module $M$ over the ring $A$; the additional structure of $A$ as a $k$-algebra plays no role. As for your second (and third) question: that amounts to the same thing. Since $A$ is a $k$-algebra, you already have a map $k \to A$ which turns $M$ into a $k$-module as well. $M$, ultimately being a $k$-module, is a $k$-vector space. I'm not sure what you mean by "Is $M$ a vector space in the case of 1?"
$\int\sqrt{1-\tan x}~\mathrm{d}{x}.$ (Integral of a trigonometric function under square root)
Substituting $u=\sqrt{1-\tan x}$ gives a pretty easy integral.
Does there exist a bounding integrable continuous function?
There is a continuous function $f_n$ (for $n\ge 2$) such that $f_n(x)=\frac n {\ln n }$ on $(0,\frac 1 n)$, $f_n=0$ on $(\frac 2 n , 1)$, and $f_n$ is linear on $[\frac1 n , \frac 2 n]$. In this case $f_n \to 0$ in $L^{1}(0,1)$, but there is no integrable function $g$ such that $f_n \leq g$ a.e. for every $n$. Proof: On $(\frac 1{n+1}, \frac 1 n)$ we have $g(x) \geq \frac n {\ln n }$, so we get $\int_0^{1} g(x)\, dx \geq \sum_n \frac n {\ln n } \left(\frac 1n -\frac 1 {n+1}\right)=\sum_n \frac 1{(n+1)\ln n}=\infty$.
What is wrong with this proof that there are no odd perfect numbers?
$$D(q^k)\cdot{n^2} = \bigg(2q^k - \sigma(q^k)\bigg)\cdot{n^2} = 2{q^k}{n^2} - \sigma(q^k){n^2}$$ $$= \sigma(q^k)\sigma(n^2) - \sigma(q^k){n^2} = \sigma(q^k)\cdot\bigg(\sigma(n^2) - n^2\bigg)$$ which is divisible by $\sigma(q^k)$. Therefore, there is no contradiction.
Trig function evaluations. $\frac{\cos^3 (\pi)}{3}$
The notation $\cos^3 x$ really means $(\cos x)^3$. So $\cos^3 \pi$ is just $(\cos\pi)^3 = (-1)^3 = -1$. So your final answer should be $-1$ divided by $3$, or in other words $-\frac{1}{3}$.
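For what it's worth, the same computation in code (where the cube must be written explicitly, since there is no $\cos^3$ notation):

```python
import math

# cos^3(π)/3 = (cos π)^3 / 3 = (-1)^3 / 3 = -1/3
value = math.cos(math.pi) ** 3 / 3
assert math.isclose(value, -1 / 3)
```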
Finding the residue of $\frac{\sinh z}{z^2 \cosh z}$ at $z= \pi i /2$
We have \begin{align} \sinh \frac{i\pi}2 &= i\sin \frac{\pi}2 =i\ ,\\\\ z^2\Big|_{z=\frac{i\pi}2} &= -\frac{\pi^2}4\ ,\\\\ \cosh z &= \cosh\left(\left(z-\frac{i\pi}2\right) +\frac{i\pi}2\right) \\\\ &= \cosh\left(z-\frac{i\pi}2\right) \cosh\frac{i\pi}2 + \sinh\left(z-\frac{i\pi}2\right) \sinh\frac{i\pi}2 \\\\ &= \cosh\left(z-\frac{i\pi}2\right) \cdot 0 + \sinh\left(z-\frac{i\pi}2\right) \cdot i \\\\ &= i\cdot\left(\frac 1{1!}\left(z-\frac{i\pi}2\right) + \dots\right) \end{align} The residue is thus: $$ \frac i{\displaystyle-\frac{\pi^2}4\cdot i\cdot \frac 1{1!}} =-\frac 4{\pi^2}\ . $$ Using sage, www.sagemath.org: sage: E = sinh(z) / z^2 / cosh(z) sage: E.residue( z==i*pi/2 ) -4/pi^2
Tensor square of a torsion-free module ($R$ domain)
Following the suggestions of Mohan and Ben, I put here an example of torsionfree (and even torsionless) module $M$ over a domain $R$ such that its tensor square $M\otimes_R M$ has torsion. If, of course, somebody has a shorter proof/construction, I'll be delighted to accept his/her answer. Let $k$ be a field and $x,y$ two commuting variables; set $R=k[x,y]$. As a $k$-algebra, $R$ is graded by the total degree $$ R_n=span_k\{x^py^q|p+q=n\} $$ Consider the ideal $M:=R_{\geq 1}=\oplus_{n\geq 1}R_n=(x)+(y)=(x,y)$. As a $k$ vector space $M$ is graded by the preceding definition (the range of degrees is $\mathbb{N}_{\geq 1}$). We first look at the tensor square $M\otimes_k M$ and its graded structure given by $$ (M\otimes_k M)_n=\oplus_{p+q=n}R_p\otimes_k R_q\ . $$ (this time, the range of degrees is $\mathbb{N}_{\geq 2}$) and we consider the quotient $$ N={(M\otimes_k M)}/{(M\otimes_k M)_{\geq 3}} $$ and, denoting by $s:(M\otimes_k M)\to N$ the canonical quotient mapping, we see that $N$ has $k$-dimension 4 with basis $B=\{e_{ab}\}_{a,b\in \{x,y\}}$ with $e_{ab}=s(a\otimes_k b)$. Now, one checks easily that the $k$-bilinear map \begin{align*} M \times M &\to N, \\ (u,v) &\mapsto \Phi_k(u\otimes_k v)=s(u\otimes_k v) \end{align*} is, in fact, $R$-bilinear (check it with monomials $x^py^q$). Hence it gives rise to a linear map $\Phi_R : M \otimes_R M \to N$. This suffices to prove that $t=x\otimes_R y-y\otimes_R x\not=0$ as $\Phi_R(t)=e_{xy}-e_{yx}$ (note that $t\in M\otimes_R M$). But now $$ x.t=x^2\otimes_R y-xy\otimes_R x=x\otimes_R xy-x\otimes_R yx=0 $$ which shows the claim. Late edit Here, tensor products are thought and constructed as in Bourbaki Algebra II § 3 eq. (1). More precisely, if $M$ (resp. $N$) is a $B$-module on the right (resp. 
$N$ is a $B$-module on the left), $M\otimes_B N$ is the solution of the universal problem with $f\colon M\times N\to G$ bi-additive and satisfying, for $x\in M,\ y\in N,\ \lambda\in B$, $$ f(x\lambda,y)=f(x,\lambda y) $$ but, if $M,N$ have other structures commuting with the $B$-actions, the tensor product $M\otimes_B N$ automatically inherits these properties. For example, here, as $R$ is commutative, all $R$-modules are $R$-$R$-bimodules.
What's the derivative with respect to a constant?
The derivative of a constant with respect to a variable is $0$, but the derivative of a function with respect to a constant, as Fra mentioned in the comments, is ill defined. EDIT The question has been updated. The link provided in the question discusses functions $$f_c(x) = c + x $$ and $$f_a(x) = ax$$ The link also indicates their derivatives are: $$\frac{df}{dx}=1$$ and $$\frac{df}{dx}=a$$ respectively, as expected. These derivatives are still with respect to $x$, not constants $c$ or $a$. The confusion might have arisen since letter $c$ in $f_c(x)$ might have given the impression that this is a function with respect to $c$, which is not the case. Same argument applies for $f_a(x)$.
Solution to the stochastic differential equation
Let $Y_t = \mathrm{e}^{t- 2 W_t} X_t^2$. Then, applying Ito's lemma: $$ \mathrm{d} Y_t = \mathrm{e}^{t- 2 W_t} \mathrm{d} (X_t^2) + X_t^2 \mathrm{d} \mathrm{e}^{t- 2 W_t} + \mathrm{d} \mathrm{e}^{t- 2 W_t} \cdot \mathrm{d} (X_t^2) $$ Since $\mathrm{d} \mathrm{e}^{t- 2 W_t} = \mathrm{e}^{t- 2 W_t} \left( 3 \mathrm{d} t - 2 \mathrm{d} W_t \right)$ and $\mathrm{d}(X_t^2) = (X_t^2 + 2) \mathrm{d} t + 2 X_t^2 \mathrm{d} W_t$, we get: $$ \mathrm{d} Y_t = 2 \mathrm{e}^{t- 2 W_t} \mathrm{d} t $$ Also, initial condition is $Y_0 = \mathrm{e}^{0 - 2 W_0}X_0^2 = x^2$. Therefore: $$ \mathrm{e}^{t- 2 W_t} X_t^2 = Y_t = x^2 + \int_0^t 2 \mathrm{e}^{s - 2 W_s} \mathrm{d} s $$ This provides the solution: $$ X_t = \operatorname{sign}(x) \sqrt{ x^2 \mathrm{e}^{2 W_t - t} + 2 \mathrm{e}^{2 W_t - t} \int_0^t \mathrm{e}^{s - 2 W_s} \mathrm{d} s } $$
Showing that naturality of this transformation follows from dinaturality of elements.
I agree that this seems like a mistake. It looks clear to me that $\zeta$ can be extended to a natural isomorphism between the sets of not-necessarily dinatural transformations $F\to V$ and not-necessarily natural transformations $1\to 1\otimes V$; after all, this is essentially just the natural isomorphism $[X^*X,V]\cong [X,XV]$, with a product taken over all $X$, no? The point should be that naturality of $\zeta_V(j)$ follows from dinaturality of $j$.
What is the probability of getting 3 heads in 4 coin tosses, given you get at least 2 heads?
It is unclear from your question whether you're asking for the probability of exactly three heads, but that's what I'll assume. Given that there are at least two heads, the number of cases is as follows:

Exactly $2$ heads: ${4 \choose 2} = 6$ cases

Exactly $3$ heads: ${4 \choose 3} = 4$ cases

Exactly $4$ heads: ${4 \choose 4} = 1$ case

Of these $11$ equally likely cases, four have exactly $3$ heads. Hence the probability is $P(3\mid\text{at least }2) = \frac{4}{11}$.
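Since the sample space has only $2^4 = 16$ equally likely outcomes, the count can also be brute-forced:

```python
from itertools import product

# enumerate all 16 outcomes of 4 fair coin tosses
outcomes = list(product("HT", repeat=4))
at_least_2 = [o for o in outcomes if o.count("H") >= 2]
exactly_3 = [o for o in at_least_2 if o.count("H") == 3]

assert len(at_least_2) == 11
assert len(exactly_3) == 4
# P(exactly 3 heads | at least 2 heads) = 4/11
```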
Bias-Variance decomposition of sample average estimator.
$$\frac{\partial}{\partial m}\left(\frac{1}{N} \sum_{i = 1}^N (Y_i - m)^2 + \lambda m^2\right)=-\frac2N\sum_{i = 1}^N (Y_i - m) + 2m\lambda =0$$ $$ -\frac2N\sum_{i = 1}^N Y_i + \frac2N\cdot mN + 2m\lambda =0 $$ $$ 2m(1+\lambda)=\frac2N\sum_{i = 1}^N Y_i $$ $$ m=\frac{1}{1+\lambda}\cdot \bar Y = h_\lambda(D) $$ $$ \mathbb E[h_\lambda(D)] = \frac{1}{1+\lambda} \mathbb E[\bar Y] = \frac{1}{1+\lambda} \mathbb E[Y_1] = \frac{1}{1+\lambda} \mu. $$ So bias is $$ \mathbb E[h_\lambda(D)] - \mu = \frac{1}{1+\lambda} \mu - \mu = -\frac{\lambda\mu}{1+\lambda} $$ Please check the definitions. Bias of an estimate $\theta^*$ of parameter $\theta$ is $\mathbb E[\theta^*]-\theta$, nothing else. Next, $$ \text{Var}(h_\lambda(D)) = \text{Var}\left(\frac{1}{1+\lambda}\cdot \bar Y\right) = \frac{1}{(1+\lambda)^2} \text{Var}(\bar Y) = \frac{1}{(1+\lambda)^2} \frac{\text{Var}(Y_1)}{N}. $$
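As a sanity check, here is a minimal numerical sketch (with made-up data $Y$ and an arbitrary sample $\lambda$) confirming that $m = \bar Y/(1+\lambda)$ minimizes the penalized loss:

```python
# loss(m) = (1/N) Σ (Y_i - m)^2 + λ m^2, minimized at m = Ȳ / (1 + λ)
Y = [1.0, 2.0, 2.5, 4.0]   # made-up sample
lam = 0.7                  # sample penalty strength
N = len(Y)

def loss(m):
    return sum((y - m) ** 2 for y in Y) / N + lam * m ** 2

m_star = (sum(Y) / N) / (1 + lam)

# the loss is a strictly convex quadratic, so nudging m in either
# direction must strictly increase it
eps = 1e-4
assert loss(m_star) < loss(m_star + eps)
assert loss(m_star) < loss(m_star - eps)
```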
Properties of Euler's $\phi()$ function
$a\bmod m=a+km$ for some $k\in\mathbb Z$. If $d$ divides both $a\bmod m$ and $m$, say $a+km=d b$ and $m=d c$, then $a=db-km=d(b-kc)$, i.e. $d$ is also a common divisor of $a$ and $m$.
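The divisor argument above implies $\gcd(a \bmod m,\, m) = \gcd(a, m)$; a quick exhaustive check of that consequence on a small range:

```python
import math

# gcd is unchanged by reducing the first argument mod the second
for a in range(0, 101):
    for m in range(1, 30):
        assert math.gcd(a % m, m) == math.gcd(a, m)
```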
Archimedean absolute value on reals equivalent to usual one
It seems that we really have to make use of more extra structure on $\mathbb{R}$, namely, the usual order $<$. So let $t \in \mathbb{R}$, wlog $t \ge 0$, and $r_i$ a sequence of rationals that converge to $t$ with respect to $|\cdot|$. By replacing $r_i$ with $2t-r_i$ if necessary, we can make sure that (*) for all indices $i\in \Bbb{N}$, we have $r_i \le t$ for odd $i$ and $r_i\ge t$ for even $i$; in particular, $t$ lies between $r_i$ and $r_{i+1}$. I think you already got the following, and it does not need (*): $r_i$ is a Cauchy sequence w.r.t. $|\cdot|$. Then the sequence of values $|r_i| = |r_i|_\Bbb{R}^a$ is a Cauchy sequence in $\mathbb{R}$ w.r.t. the usual value (! -- this is how the metric is defined), hence converges to some $x_0 \in \mathbb{R}_{\ge 0}$. So $|t| = x_0$ by continuity of the value. On the other hand, if $|r_i| = |r_i|_{\mathbb{R}}^a \to x_0$, then by continuity of $a$-exponentiation, $|r_i|_{\mathbb{R}} \to x_0^{1/a}$. In other words, with respect to the usual value, $r_i$ converges to $x := x_0^{1/a}$. So $|t| = x^a = |x|_\mathbb{R}^a$. So what is left to show is $x=t$. But by (*), we have $t\in \bigcap_{n\in \Bbb{N}} [r_{2n-1}, r_{2n}]$, and because the sequence is Cauchy w.r.t. the usual value, that intersection is actually a singleton, hence $\{t\}$. But one also sees (passing to monotone subsequences in the odd resp. even indices) that $x\in \bigcap_{n\in \Bbb{N}} [r_{2n-1}, r_{2n}]$. Hence $x=t$ and $|x|=|x|_\mathbb{R}^a$.
If two tours are starting at the same time, one lasts for 15 minutes and the other for 20, when do they meet again?
$$ \text{tour A} = 15\,\,\text{minutes} = 3\times 5 \,\,\text{minutes}\\ \text{tour B} = 20\,\,\text{minutes} = 4\times 5 \,\,\text{minutes}\\ $$ thus the lowest common multiple occurs at $$ 4\times\,\,\text{time for tour A} = 4\times 3\times 5 \,\,\text{minutes}\\ 3\times\,\,\text{time for tour B} = 3\times 4\times 5 \,\,\text{minutes} $$ both of which equal $60$ minutes, as you already found.
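In code this is just the least common multiple, which can be computed from the gcd:

```python
import math

# lcm(15, 20) via the identity lcm(a, b) = a*b / gcd(a, b)
meeting_time = 15 * 20 // math.gcd(15, 20)
assert meeting_time == 60  # minutes
```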
Proving Riemann Integrability when $x$ is a function of $n$?
The idea here is that all but finitely many of the points in $\{ 1/n : n \in \mathbb{N} \}$, where the discontinuities in $f$ are, are very close to $0$. So you can work this way. Let $\varepsilon >0$. Choose $N \in \mathbb{N}$ such that $1/(N+1)<\varepsilon/2$. Then make the first two points of the partition be $0$ and $1/(N+1)$. Now you just have $N$ discontinuities to work with. Make your partition so that they are each surrounded by intervals of length at most $\varepsilon/2N$. Now you can show that the lower sum of this partition is $0$ while the upper sum is some positive number which is less than $\varepsilon$.
Explain 2D ellipse function in terms of Circ
$$h(x,y)=\frac1{2\pi a^2b^2}\operatorname{circ}\left(\frac xa,\frac yb\right)$$
If the measure attains only finitely many values then the space is disjoint union of finitely many atoms
First notice that $X$ is atomic. Suppose $A\subset X$ has positive measure but contains no atoms. Since $A$ is not an atom, there exists $A_1\subset A$ with $0<\mu(A_1)<\mu(A)$. But since $A_1\:\:(\mu(A_1)>0)$ is not an atom, there exists $A_2\subset A_1$ with $0<\mu(A_2)<\mu(A_1)$, and so on. Thus there is an infinite chain $$\mu(A)>\mu(A_1)>\mu(A_2)>\dots>0$$ which contradicts the fact that $\mu$ assumes only finitely many values. Now let $B_1$ be the atom in $X$ having the largest measure. If $\mu(X\setminus B_1)=0$, stop. Otherwise let $B_2$ be the atom with the largest measure contained in $X\setminus B_1$. If $\mu(X\setminus(B_1\cup B_2))=0$, stop. Otherwise let $B_3$ be the atom with the largest measure contained in $X\setminus(B_1\cup B_2)$. This process will end in finitely many steps. For if not, then $$\mu(X\setminus(B_1\cup\dots\cup B_n))=\mu(X\setminus(B_1\cup\dots\cup B_{n+1}))+\mu\big((X\setminus(B_1\cup\dots\cup B_n))\cap B_{n+1}\big)=\mu(X\setminus(B_1\cup\dots\cup B_{n+1}))+\mu(B_{n+1})$$ i.e. $$\mu(X\setminus(B_1\cup\dots\cup B_n))>\mu(X\setminus(B_1\cup\dots\cup B_{n+1}))$$ since $\mu(B_{n+1})>0$. This would again result in an infinite descending chain of positive values. Thus, $X$ can be written as a finite disjoint union of atoms and a null set $$X=B_1\cup\dots\cup B_n\cup Z$$
Proof of inverse Laplace transform
It is the Fourier inversion formula in disguise. In case you have never encountered this theorem before, let me prove the following version (which is obviously far from optimal). Proposition. Let $F(s) = \int_{0}^{\infty} f(t)e^{-st} \, dt$ be the Laplace transform of $f : [0,\infty) \to \mathbb{R}$. Assume that the following technical conditions hold with some $g : [0,\infty) \to \mathbb{R}$ and $\sigma \in \mathbb{R}$: $f(t) = f(0) + \int_{0}^{t} g(u) \, du$. (In particular, $g$ is the 'derivative' of $f$.) Both $f(t)e^{-\sigma t}$ and $g(t)e^{-\sigma t}$ are Lebesgue-integrable on $[0, \infty)$. Then for any $s > 0$, we have $$ \lim_{R\to\infty} \frac{1}{2\pi i} \int_{\sigma-iR}^{\sigma+iR} F(z)e^{s z} \, dz = f(s). $$ Proof. Define $S(x) = \frac{1}{2} + \frac{1}{\pi}\int_{0}^{x} \frac{\sin t}{t} \, dt$. Then $S(x)$ is bounded, and by Dirichlet integral, we have $$ \lim_{R\to\infty} S(Rx) = H(x) := \begin{cases} 1, & x > 0 \\ \frac{1}{2}, & x = 0 \\ 0, & x < 0 \end{cases} $$ (Obviously $H$ denotes the Heaviside step function.) Now we have \begin{align*} \frac{1}{2\pi i} \int_{\sigma-iR}^{\sigma+iR} F(z)e^{s z} \, dz &= \frac{1}{2\pi} \int_{-R}^{R} F(\sigma + i\xi)e^{s(\sigma+i\xi)} \, d\xi \\ &= \frac{1}{2\pi} \int_{-R}^{R} \left( \int_{0}^{\infty} f(t)e^{-(\sigma+i\xi)t} \, dt \right)e^{s(\sigma+i\xi)} \, d\xi. \end{align*} By Fubini's theorem, we can interchange the order of integral to obtain \begin{align*} \frac{1}{2\pi i} \int_{\sigma-iR}^{\sigma+iR} F(z)e^{s z} \, dz &= \int_{0}^{\infty} f(t)e^{-(t-s)\sigma} \left( \frac{1}{2\pi} \int_{-R}^{R} e^{(s-t)i\xi} \, d\xi \right) \, dt \\ &= \int_{0}^{\infty} f(t)e^{-(t-s)\sigma} \left( \frac{\sin R(t-s)}{\pi (t-s)} \right) \, dt \end{align*} By the assumption, both $f(t)e^{-\sigma t}$ and $(f(t)e^{-\sigma t})' = (f'(t) - \sigma f(t))e^{-\sigma t}$ are Lebesgue-integrable. In particular, this tells that $f(t)e^{-\sigma t}$ converges to $0$ as $t\to\infty$. 
So by integration by parts, \begin{align*} \frac{1}{2\pi i} \int_{\sigma-iR}^{\sigma+iR} F(z)e^{s z} \, dz &= - f(0)e^{s\sigma} S(-Rs) - \int_{0}^{\infty} (f(t)e^{-(t-s)\sigma})' S(R(t-s)) \, dt. \end{align*} As $R \to \infty$, the right-hand side converges to \begin{align*} \lim_{R\to\infty} \frac{1}{2\pi i} \int_{\sigma-iR}^{\sigma+iR} F(z)e^{s z} \, dz &= - \int_{0}^{\infty} (f(t)e^{-(t-s)\sigma})' H(t-s) \, dt \\ &= - \left[ f(t)e^{-(t-s)\sigma} \right]_{t=s}^{t=\infty} = f(s). \end{align*} (Pushing the limit inside the integral is justified by the dominated convergence theorem.)
The set of points in $A_n$ for infinitely many $n$ is measurable.
Let $\{A_n\}$ be a countable collection of sets and let $B$ denote the set of elements which are in $A_n$ for infinitely many $n$. Then $$B = \bigcap_{n=1}^{\infty}\bigcup_{k = n}^{\infty}A_k.$$ To see this, note that $x \in \bigcap\limits_{n=1}^{\infty}\bigcup\limits_{k = n}^{\infty}A_k$ if and only if $x \in \bigcup\limits_{k=n}^{\infty}A_k$ for all $n$. The latter occurs if and only if $x \in A_n$ for infinitely many $n$; if $x$ were only in finitely many of the sets, say $A_{n_1}, \dots, A_{n_j}$, then $x \not\in \bigcup\limits_{k=n_j+1}^{\infty}A_k$. As the collection of Lebesgue measurable sets is closed under countable unions and countable intersections (it is a $\sigma$-algebra), $B$ is Lebesgue measurable. A similar exercise is to show that the set of elements which belong to $A_n$ for all but finitely many $n$ is measurable. Denoting this set by $C$, you can show that $C$ is Lebesgue measurable by using the fact that $$C = \bigcup_{n=1}^{\infty}\bigcap_{k = n}^{\infty}A_k.$$
Where should the Lorentz transformations fit into this?
If the stick had a width, only its extent along the $x$-axis would undergo Lorentz contraction, since the stick is moving in the $x$-direction. According to the coordinate transformation from the stationary frame to the moving frame, there is no Lorentz contraction in either the $y$-axis or $z$-axis direction, so the width itself would be unchanged. Still, this is different from the question you are asking, as that requires considering the observer's dimensions.
Is an Inverse Menger Sponge a fractal?
It is not a fractal. First consider the 2-d case (the Sierpinski carpet), with the 3-d case as a generalization. The construction takes a square, divides it into nine squares, removes the central square, and then repeats the same process in each of the eight remaining squares. The inverse of this keeps only the central square and repeats the process on the rest. In the limit, this process fills out the entire square, an object of dimension 2, so it is not a fractal. The same occurs in the 3-d case. Moreover, the 3-d inverse construction is not self-similar. The Menger sponge is closed and its complement is open, so the complement cannot be a fractal in the same sense.
Number of permutations
Hint: This is identical to partitioning $a$ into $f$ parts. Think about how many ways $f-1$ "markers" can be distributed into $a+f-1$ positions. The $f-1$ markers will divide the set $\{1,2,\dots, a\}$ into $f$ parts. (This should be enough, but if you need more explanation let me know. Also, let me know if I am interpreting your question correctly. The term "permutations" seems a bit confusing in this context.)
For any $a>0$ can you find $f(x)$ and $g(x)$ such that $\lim_{x\to 0} f(x)^{g(x)} = a$?
Yes: let $f(x)=x$ and $g(x)=\ln (a)/\ln(x)$. Both converge to $0$ as $x\to 0^+$, and $f(x)^{g(x)}=e^{g(x)\ln f(x)}=e^{\ln a}=a$ for every $x>0$, $x\neq 1$.
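As a quick numerical sanity check (with a hypothetical choice $a=5$), one can verify that $f(x)^{g(x)}$ equals $a$ identically as $x$ approaches $0$:

```python
import math

# hypothetical choice a = 5; f(x) = x, g(x) = ln(a)/ln(x)
a = 5.0
for x in [0.1, 0.01, 1e-6]:
    f, g = x, math.log(a) / math.log(x)
    # x**(ln a / ln x) = exp(ln a) = a, up to floating point
    assert abs(f**g - a) < 1e-9
print("ok")
```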
Using conditional expectation with MSE function
Formally, we are given two random variables $X$, $Y$ and we let $f^* = E[(Y-f(X))^2]$. By the law of iterated expectations, $f^* = E[E[(Y-f(X))^2|X]]$. Let $\mu$ be a probability kernel such that $\forall A\in \mathcal B(\mathbb R), P(Y\in A|X) = \mu(X,A)$. Then $$E[(Y-f(X))^2|X] = \int (y-f(X))^2d\mu(X,y)$$ Your $E_{P_{(y|x)}}[(y-f(x))^{2}|x]$ actually refers to $\int (y-f(X))^2d\mu(X,y)$ (which is a measurable function of $X$, say $g(X)$). You get back to $f^*$ by computing $E(g(X))$, which you wrote as $E_{P_{(x)}} [\ldots]$
Proving the validity of an argument
You're mistaken. The argument is only correct if every possible assignment of truth values that satisfies the premises also satisfies the conclusion. You cannot work backwards from the truth of the conclusion (you haven't proven it!) to determine what should or should not be true. In your specific case, there are 3 ways that $(p \to q)$ can be true, and in 2 of them $\neg p$ is also true, and 1 of those does not satisfy $\neg q$, so there is 1 counter-example scenario to the argument. Hence it is invalid.
Prove that three segments which intersect a circle pass through the same point
By the intersecting secants theorem, $BZ \cdot BA = BD \cdot BY$ and $CY \cdot CD = CE \cdot CA$. So we have $BZ \cdot CD \cdot BA \cdot CY = BD \cdot CE \cdot CA \cdot BY$. But by the angle bisector theorem we have $BA \cdot CY = CA \cdot BY$. Also, as $AY$ is the angle bisector of $\angle EAZ$ and is also a diameter of the circle, $AE = AZ$ (since $\triangle AZY \cong \triangle AEY$). That leads to $BZ \cdot AE \cdot CD = BD \cdot CE \cdot AZ$, and hence, by the converse of Ceva's theorem, the cevians $AD, BE$ and $CZ$ concur.
If $x^2=y+z$, $y^2=x+z$ and $z^2=x+y$, prove
Note that $$x^2 + x = y^2 + y = z^2 + z = x + y + z$$ $$x(x + 1) = y(y + 1) = z(z + 1) = x + y + z$$ so $$\begin{align}\frac{1}{x + 1} + \frac{1}{y + 1} + \frac{1}{z + 1} &= \frac{x}{x + y + z} + \frac{y}{x + y + z} + \frac{z}{x + y + z}\\ &= \frac{x + y + z}{x + y + z}\\&= 1\end{align}$$
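For a concrete sanity check, the symmetric solution $x=y=z=2$ satisfies the system, and the sum of the three fractions is indeed $1$ (a minimal sketch):

```python
# the symmetric solution x = y = z = 2 of the system x² = y+z, y² = x+z, z² = x+y
x = y = z = 2
assert x**2 == y + z and y**2 == x + z and z**2 == x + y
total = 1/(x + 1) + 1/(y + 1) + 1/(z + 1)
print(total)
```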
Which number or numbers could be in the middle?
The 9th number is at least $3$ because $1+2+2+2+2+2+2+2+2<21$. Similarly, the 19th number is at most $5$ because $6+6+6+6+6+6+6+6 + 25 >65$. So the middle sum is of the form $3a+4b+5c=41$ with $a,b,c\ge 0$ and $a+b+c=9$. This gives us $-a+c=41-4\cdot 9=5$ and in particular $c\ge 5$. Conclude.
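A brute-force enumeration (a quick sketch, not part of the original hint) confirms the constraint $c-a=5$ and hence $c\ge 5$:

```python
# enumerate all (a, b, c) with a + b + c = 9 and 3a + 4b + 5c = 41
sols = [(a, b, c)
        for a in range(10) for b in range(10) for c in range(10)
        if a + b + c == 9 and 3*a + 4*b + 5*c == 41]
assert all(c - a == 5 for a, b, c in sols)
print(sols)  # [(0, 4, 5), (1, 2, 6), (2, 0, 7)]
```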
Distribution of a division of two absolutely continuous random variables
Draw a picture of a $(y,z)$-plane and sketch the region consisting of points $(y,z)$ satisfying $z/y \le t$. This should look like two cones (or pizza slices) with points at the origin and extending infinitely away from the origin. Because the distribution of $(Y,Z)$ is rotationally symmetric, the probability that $(Y,Z)$ lies in this region is proportional to the total angle that these cones sweep out. The angle in question will involve arctangent and $t$, and differentiating to get the PDF will lead to the $\frac{1}{1+t^2}$ expression.
When do we have $A \subset B$ implies $f^{-1}(A) \subset f^{-1}(B)$?
All you need is that $f$ is a function. If $x\in f^{-1}A$, then $f(x)\in A$, so $f(x)\in B$ as well. Carl Mummert pointed out that if $R$ is any relation between $X$ and $Y$ then the claim still holds, where $R^{-1}A=\{x\in X: xRa\text{ for some }a\in A\}$. This is because if $A\subset B$, then $x\in R^{-1}A$ implies $xRa$ for some $a\in A$ implies $x\in R^{-1}B$.
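A tiny illustration with a finite function given as a dictionary (hypothetical example data):

```python
# f as a finite function; preimage of a set of outputs
f = {1: 'a', 2: 'a', 3: 'b', 4: 'c'}

def preimage(S):
    return {x for x, y in f.items() if y in S}

A, B = {'a'}, {'a', 'b'}
assert A <= B
print(preimage(A) <= preimage(B))  # True
```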
Finding the eigenvalues of a 4x4 matrix
$A = \pmatrix {1&1&1&1\\2&2&2&2\\2&2&2&2\\1&1&1&1}$ Since $A$ is a singular matrix, we know that $0$ is an eigenvalue. So, what is the dimension of the kernel of $A$? Performing row operations on $A$ gives $\pmatrix {1&1&1&1\\0&0&0&0\\0&0&0&0\\0&0&0&0},$ so the dimension of the kernel is $3$, and $0$ is an eigenvalue of multiplicity $3$. The sum of the eigenvalues equals the trace of the matrix, and the trace of $A$ is $6$, so the remaining eigenvalue is $6$.
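A quick numerical check of the conclusion (a sketch, assuming NumPy is available):

```python
import numpy as np

A = np.array([[1, 1, 1, 1],
              [2, 2, 2, 2],
              [2, 2, 2, 2],
              [1, 1, 1, 1]], dtype=float)
vals = np.sort(np.linalg.eigvals(A).real)
assert np.allclose(vals, [0, 0, 0, 6], atol=1e-9)
print("eigenvalues are 0, 0, 0, 6 (up to rounding)")
```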
Understand the conditional probability
The probability for selecting number four and 1 from 3 numbers when selecting 2 from 6 numbers is: $$\mathsf P(M=4) = {\binom{3}{1}}/{\binom{6}{2}} = 1/5$$ The probability that the third number selected will be six when it is selected from any 6 numbers is: $$\mathsf P(N=6) = 1/6$$ The probability for selecting six as the third number when selecting from the 4 numbers that are neither four nor 1 from 3 numbers-less-than-four, is: $$\mathsf P(N=6\mid M=4)= 1/4$$ The probability for selecting number four and 1 from 3 numbers, when selecting 2 from 5 numbers-that-are-not-six, is: $$\mathsf P(M=4\mid N=6)= \binom 3 1/\binom 5 2 = 3/10$$ And there we go : $$\dfrac 3{10} ~=~ \mathsf P(M=4\mid N=6)~=~\dfrac{\mathsf P(N=6\mid M=4)~\mathsf P(M=4)}{\mathsf P(N=6)} ~=~\dfrac{\tfrac 14\cdot\tfrac 15}{\tfrac 1 6}~=~\dfrac 3{10}$$ Just to round off: The probability of selecting four and 1 from 3 numbers in the first 2 places then selecting six in the 3rd place, when selecting 3 from 6 numbers in the first 3 places, is: $$\mathsf P(M=4, N=6)~=~\dfrac{\binom 3 1 2!\cdot 1}{\binom 6 3 3!}~=~\dfrac 1{20}$$
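The arithmetic can be checked directly (a minimal sketch of Bayes' rule with the numbers above):

```python
from math import comb

P_M = comb(3, 1) / comb(6, 2)   # P(M=4) = 3/15 = 1/5
P_N = 1 / 6                     # P(N=6)
P_N_given_M = 1 / 4             # P(N=6 | M=4)
P_M_given_N = P_N_given_M * P_M / P_N
print(round(P_M_given_N, 10))  # 0.3
```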
Log-Likelihood and Softmax
Let the $j$th activation output be $$a_j=\frac{\exp(z_j)}{S},\;\;S=\sum_{t\in\mathcal{O}} \exp(z_t)$$ for outputs $\mathcal{O}$. The input is given by $$ z_k = \sum_{i\in\mathcal{I}} w_{ki}\tilde{a}_i + b_k $$ for inputs $\mathcal{I}$. Then, the log-likelihood cost $$ C = -\ln(a_y)= -\left[ \sum_{k\in\mathcal{I}} w_{yk}\tilde{a}_k + b_y \right] + \ln(S) $$ with derivative \begin{align} \frac{\partial C}{\partial b_j} &= -\delta_{yj} + \frac{1}{S}\frac{\partial S}{\partial b_j}\\ &= -{y_j} + \frac{1}{S}\sum_{t\in\mathcal{O}}\exp(z_t) \frac{\partial z_t}{\partial b_j} \\ &= -{y_j} + \frac{\exp(z_j)}{S} \\ &= a_j - y_j \end{align} where $\delta_{yj}=y_j$ is the Kronecker delta. The author remarks in a (cryptic) sidenote that $y$ denotes the one-hot vector which is zero everywhere except for a $1$ at the position of the true label, so $y_j=\delta_{yj}$.
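A finite-difference check of the claimed gradient $\partial C/\partial b_j = a_j - y_j$ (a sketch assuming NumPy; the network size and label are hypothetical):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
z = rng.normal(size=3)               # hypothetical pre-activations
y = 1                                # hypothetical true label
analytic = softmax(z) - np.eye(3)[y]  # claimed gradient a_j - y_j

# shifting z_j is equivalent to shifting the bias b_j
eps = 1e-6
numeric = np.zeros(3)
for j in range(3):
    zp, zm = z.copy(), z.copy()
    zp[j] += eps
    zm[j] -= eps
    numeric[j] = (-np.log(softmax(zp)[y]) + np.log(softmax(zm)[y])) / (2 * eps)

print(np.allclose(analytic, numeric, atol=1e-5))  # True
```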
When is a transformation not linear?
If you define your transformation with something like $T(u)= Au$, with $A$ a matrix, then the transformation is always linear, no matter what $A$ you pick. The problem arises when you define it with different mathematical objects, such as an arbitrary formula: $$ T(x,y) = x^2- x + 4y. $$ Then it is quite easy to check whether the given formula makes a linear or a non-linear transformation, for example by checking additivity: $$ T(x+x', y+y') = (x+x')^2 - x-x' +4y + 4y'\\ = x^2 + x'^2 + 2xx' - x - x' + 4y + 4y' \neq T(x,y) + T(x',y') = x^2 -x+4y + x'^2 -x' +4y'. $$ A good heuristic is: a linear function is usually a composition of 'primitive' linear operations (like sums and multiplication by constants). If squares, exponentials or other such stuff appears, it probably isn't linear.
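For instance, checking additivity for the formula above at two arbitrarily chosen points:

```python
# T(x, y) = x² - x + 4y from the example above
def T(x, y):
    return x**2 - x + 4*y

u, v = (1.0, 2.0), (3.0, 4.0)
lhs = T(u[0] + v[0], u[1] + v[1])    # T(u + v)
rhs = T(u[0], u[1]) + T(v[0], v[1])  # T(u) + T(v)
print(lhs, rhs)  # 36.0 30.0 — additivity fails, so T is not linear
```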
Showing homotopy of two paths if they are homotopic after a delay
Have you tried the homeomorphism $I\times I\to[0,2]\times I$ given by $h(s,t)=\begin{cases} s(3t+1,\ 0)+(1-s)(0,t)& \text{ if }\ t\le\frac13\\ s(2,\ 3t-1)+(1-s)(0,t)& \text{ if }\ \frac13\le t\le\frac23\\ s(4-3t,\ 1)+(1-s)(0,t)& \text{ if }\ t\ge\frac23 \end{cases}$ Intuitively, this stretches the rectangle $I\times[\frac13,\frac23]$ to the trapezoid with vertices $(0,\frac13),\ (2,0),\ (2,1),\ (0,\frac23).$ Then compose with the given homotopy $H$. For $s\in\{0,1\}$ (the endpoints of the paths) $H\circ h(s,t)$ equals $x,y$ respectively, independent of the time $t$, so it is a path-homotopy from $\gamma$ to $\delta$. The formula above can be generalized. Assume that $a:[0,p]\to X$ and $b:[0,q]\to X$ are paths of possibly different lengths $p,q\ge 0$. If we denote by $r_x$ (or just $r$) the constant path of length $r$ at the point $x$, then $r\cdot a$ is a path of length $r+p$ which coincides with $a$ on $[0,p]$ and is constantly $a(p)$ during $[p,r+p]$. If $a$ and $b$ are paths that both start at $x=a(0)=b(0)$ and both end at $y=a(p)=b(q)$, then we say that $a,b$ are equivalent if there are $r,s\ge 0$ such that $$r\cdot a\simeq s\cdot b$$ The above is the special case $p,q,r,s=1$. Let's assume that $a$ and $b$ are equivalent via $H:[0,r+p]\times I\to X$. Then $\varphi(x,t)=\begin{cases} x(3tr+p,\ 0)+(1-x)(0,t)& \text{ if }\ t\le\frac13\\ x(r+p,\ 3t-1)+(1-x)(0,t)& \text{ if }\ \frac13\le t\le\frac23\\ x(3(1-t)s+q,\ 1)+(1-x)(0,t)& \text{ if }\ t\ge\frac23 \end{cases}$ is a homeomorphism between $I\times I$ and $[0,s+q]\times I$. The composition $H\circ\varphi$ is then a homotopy between $a'(x)=a(px)$ and $b'(x)=b(qx)$, the "unifications" of $a,b$. Conversely, if $H'$ is a homotopy $a'\simeq b'$ between paths of length $1$, then $H'\circ\varphi^{-1}$ gives a homotopy $r\cdot a\simeq s\cdot b$. In particular, this shows that equivalent paths of the same length are homotopic.
This also shows how the two definitions of the fundamental group(oid) that you find in literature (one using only paths of length $1$, the other allowing paths of arbitrary length) are equivalent.
Ross, Joint distribution function with bounds on X and Y
This answer can only be useful if you are familiar with expectations already. $$\mathbf{1}_{a_{1}<X\leq a_{2}}\mathbf{1}_{b_{1}<Y\leq b_{2}}=$$$$\left(\mathbf{1}_{X\leq a_{2}}-\mathbf{1}_{X\leq a_{1}}\right)\left(\mathbf{1}_{Y\leq b_{2}}-\mathbf{1}_{Y\leq b_{1}}\right)=$$$$\mathbf{1}_{X\leq a_{2}}\mathbf{1}_{Y\leq b_{2}}-\mathbf{1}_{X\leq a_{1}}\mathbf{1}_{Y\leq b_{2}}-\mathbf{1}_{X\leq a_{2}}\mathbf{1}_{Y\leq b_{1}}+\mathbf{1}_{X\leq a_{1}}\mathbf{1}_{Y\leq b_{1}}=$$$$\mathbf{1}_{X\leq a_{2},Y\leq b_{2}}-\mathbf{1}_{X\leq a_{1},Y\leq b_{2}}-\mathbf{1}_{X\leq a_{2},Y\leq b_{1}}+\mathbf{1}_{X\leq a_{1},Y\leq b_{1}}$$ Taking the expectations on both sides results:$$P\left(a_{1}<X\leq a_{2},b_{1}<Y\leq b_{2}\right)=F\left(a_{2},b_{2}\right)-F\left(a_{1},b_{2}\right)-F\left(a_{2},b_{1}\right)+F\left(a_{1},b_{1}\right)$$
A problem in determining conditions for relational matrix for module to be cyclic
This "relations matrix" just means that $M(a,b)$ is generated by $S$ with the relations $(x+2)m_1+5m_2+(x-a)m_3=0$ and $m_1+(x-2)m_2+(x-b)m_3=0$. That is, when you take the relations matrix and multiply it by the column vector $(m_1,m_2,m_3)$, you get $0$.
Worthwhile freshman-level definite-integrals-"proper"
I mentioned in the comments how one could similarly calculate the integral of any linear function, but this is rather trivial and does not illustrate why one would want to be able to calculate integrals. It should also be possible to calculate some less trivial integrals directly via the definition. For example: $$ \int_0^\alpha x^2dx = \lim_{n \to +\infty} \left( \frac{\alpha}{n} \sum_{k=1}^{n} \left( \frac{\alpha k}{n}\right)^2 \right) = \lim_{n \to +\infty} \left( \frac{\alpha^3}{n^3} \sum_{k=1}^{n} k^2 \right) \\ =\alpha^3\lim_{n \to +\infty}\left(\frac{1}{n^3}\frac{n(n+1)(2n+1)}{6}\right) = \frac{\alpha^3}{3}. $$ Using linearity of the integral, one can then integrate all polynomials of degree at most $2$. This does use the formula for the sum of the first $n$ squares, which is not trivial (this might motivate the need for a fundamental theorem of calculus). Similar to the example above, one could exploit the fact that one knows a formula for the partial sums of the geometric series to calculate $$ \int_0^{\alpha}e^xdx = e^\alpha - 1 $$ but this is, in effect, computing an antiderivative, just without the fundamental theorem of calculus.
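The limit can be sanity-checked numerically with a right-endpoint Riemann sum (a sketch with a hypothetical choice $\alpha=2$):

```python
# right-endpoint Riemann sum for ∫₀^α x² dx with α = 2
alpha, n = 2.0, 100_000
riemann = sum((alpha * k / n)**2 for k in range(1, n + 1)) * (alpha / n)
exact = alpha**3 / 3
print(abs(riemann - exact) < 1e-3)  # True
```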
Disproving $\|u\|_{L^1}\leq C\|Du\|_{L^1}$ for compact, smooth $u$
Let $u$ be a smooth cutoff, supported on $B(0,1)$, and set $$u_n(x)=\frac{1}{n^2}u\left(\frac{x}{n}\right).$$ Then, the change of variables $x=yn$ shows that $$\int_{\mathbb R^2}|u_n(x)|\,dx=\int_{\mathbb R^2}\frac{1}{n^2}\left|u\left(\frac{x}{n}\right)\right|\,dx=\int_{\mathbb R^2}\frac{1}{n^2}|u(y)|n^2\,dy=\|u\|_1,$$ while $$\int_{\mathbb R^2}|Du_n(x)|\,dx=\int_{\mathbb R^2}\frac{1}{n^2}\left|\frac{1}{n}Du\left(\frac{x}{n}\right)\right|\,dx=\int_{\mathbb R^2}\frac{1}{n^3}|Du(y)|n^2\,dy=\frac{1}{n}\|Du\|_1.$$ So, if such a $C$ exists, we should have that $$\|u\|_1=\|u_n\|_1\leq C\|Du_n\|_1=\frac{C}{n}\|Du\|_1,$$ and letting $n\to\infty$ leads to a contradiction.
Suppose $g$ is an odd function and let $h = f \circ g$. Is $h$ always an odd function?
Take $g(x)=-x$ and $f(x)=x^2$, so that $f(g(x))=x^2$. Is this $f\circ g$ odd? No — it is even, even though $g$ is odd. Coming to your question: the chain $$h(-x)=f(g(-x))=f(-g(x))=-f(g(x))=-h(x)$$ is only valid when $f$ itself is odd, since the step $f(-g(x))=-f(g(x))$ uses the oddness of $f$, not of $g$. If instead $f$ is even, as in the example above, we get $$h(-x)=f(g(-x))=f(-g(x))=f(g(x))=h(x),$$ so $h$ is even, not odd.
How are bases defined in non-linear coordinate systems?
You don't have a "basis" for any thing that isn't a linear space. But, as Tony S.F. said, you can define the "tangent space" of a differentiable manifold and define a basis for that.
Whether a continuous function has fixed point or not when the domain and range are not $[0,1]$
For B, consider that $0$ is the only point $p\in [0,1)$ such that $[0,1)\backslash \{p\}$ is a connected space, so a homeomorphism $f$ must have $f(0)=0.$ Another way: $B=\{[0,d):0<d<1\}$ is a neighborhood base at $0$ such that $\forall b\in B\; (\bar b\backslash b$ has just one member$)$. No other point in $[0,1)$ has a nbhd base with this property, so $f(0)=0.$ For C, if $f(0)=0$ we are done. If $f(0)>0$, the continuous function $g(x)=f(x)-x$ is positive at $x=0$ and negative when $x>\sup \{f(y) :y\geq 0\}$, so for some $x$ we have $g(x)=0.$
How to approximate using the differential?
We have $$ dz=6xdx+2dy $$ which, at $(1,2)$, gives $6dx+2dy$. Linear approximation (ie approximating the graph of the function near $x_0$ by the tangent at $x_0$) for the function $f(x,y)=3x^2+2y$ at $x_0$: $$ f(x_0+h)\simeq f(x_0)+df_{x_0}(h). $$ In your case, you set $$ x_0:=(1,2)\qquad h:=(.2,.2) $$ so that $$ f(1.2,2.2)\simeq f(1,2)+df_{(1,2)}(.2,.2)=f(1,2)+6\cdot (.2)+2\cdot(.2) $$ $$ =7+1.2+.4=8.6. $$
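Comparing the linear approximation with the exact value (a minimal sketch):

```python
# f(x, y) = 3x² + 2y; linear approximation at (1, 2) with h = (0.2, 0.2)
def f(x, y):
    return 3*x**2 + 2*y

fx, fy = 6*1, 2                     # partials 6x and 2, evaluated at (1, 2)
approx = f(1, 2) + fx*0.2 + fy*0.2
print(round(approx, 6), round(f(1.2, 2.2), 6))  # 8.6 8.72
```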
An ${\rm Aut}(\mathbb{Z}/n\mathbb{Z}) \rightarrow(\mathbb{Z}/n\mathbb{Z})^\times$ construction
It turns out that for every $a \in \mathbb Z/n$, there is a unique map $\mathbb Z/n \longrightarrow \mathbb Z/n$ sending $1 \mapsto a$. This is just multiplication by $a$, and it is unique as $1$ generates $\mathbb Z/n$. On the other hand, if $\phi: \mathbb Z/n \longrightarrow \mathbb Z/n$ then $\phi$ is totally determined by where $1$ is sent. Indeed, $\phi(m)=m \phi(1)$ so $\phi$ is just the multiplication by $\phi(1)$ map. Thus, we have the following map: $\Phi: End(\mathbb Z/n) \longrightarrow \mathbb Z/n$ via $\phi \mapsto \phi(1)$. If you are unfamiliar with this notation, $End(\mathbb Z/n)$ is the set of endomorphisms of $\mathbb Z/n$, i.e. group homomorphisms $\mathbb Z/n \longrightarrow \mathbb Z/n$. This map $\Phi$, it turns out, will restrict to the desired map $Aut(\mathbb Z/n)\longrightarrow (\mathbb Z/n)^\times$, and this is a group isomorphism. I'd like to point out that in addition, $\Phi: End(\mathbb Z/n)\longrightarrow \mathbb Z/n$ is actually an isomorphism of rings, where multiplication in $End(\mathbb Z/n)$ is composition. The isomorphism you're looking for arises as the induced map on the unit groups of these rings. Ring theory aside, here are some ideas to guide you to proving that $\Phi$ is your desired isomorphism. As discussed, if $\Phi(\phi)=a$ then $\phi$ is the multiplication by $a$ map. If $\phi$ is an automorphism, why must $a$ be invertible in $\mathbb Z/n$? You can try to prove that $\Phi$ is an isomorphism by constructing an inverse map. How can you take an element of $\mathbb Z/n$ and get a map $\mathbb Z/n \longrightarrow \mathbb Z/n$?
Nine people sat around a table..
Since on average people ate $50/9<6$ slices, there exists a person who ate at most $5$ slices. The other eight people together ate at least $45$ slices. Can you conclude?
Prove that the following set is dense in real numbers
You're on the right track, all you need to add is the observation that $\lfloor x/a_n \rfloor$ is an integer, let's call it $k_n$. We know that $d(k_n a_n,x) \le |a_n| \to 0$ as $n \to \infty$ because $$k_n a_n \le x < (k_n+1) a_n $$ (I suspect this part you know already, from what you wrote). It follows that $k_na_n$ is a sequence of elements of $E$ that converges to $x$, because $$0 \le x - k_n a_n \le a_n $$ and so, by the squeeze theorem, $$\lim_{n\to\infty}(x - k_n a_n) = 0 $$ $$x - \lim_{n\to\infty} k_na_n = 0 $$ $$x = \lim_{n\to\infty} k_na_n $$ Since we have shown that an arbitrary real number $x \in \mathbb{R}$ can be written as a limit of some sequence of points in $E$, it follows that $E$ is dense in $\mathbb{R}$.
Show that if a linear transformation sends bases to bases, then it is bijective.
Let $\{x_1, \ldots, x_n\}$ be a basis in the domain such that $\{Ax_1, \ldots, Ax_n\}$ is a basis in the codomain. For injectivity, we wish to prove that $A$ has trivial kernel. Assume $Ax = 0$ and write $x = \sum_{i=1}^n \alpha_ix_i$. We have $$0 = Ax = A\left(\sum_{i=1}^n \alpha_ix_i\right) = \sum_{i=1}^n \alpha_i Ax_i$$ which implies $\alpha_1 = \cdots = \alpha_n = 0$ since $\{Ax_1, \ldots, Ax_n\}$ is linearly independent. Hence $x = 0$. For surjectivity, any $y$ in the codomain can be written as a linear combination $y = \sum_{i=1}^n \alpha_i Ax_i$ since $\{Ax_1, \ldots, Ax_n\}$ spans the codomain. Then $$A\left(\sum_{i=1}^n \alpha_ix_i\right) = \sum_{i=1}^n \alpha_i Ax_i = y$$ so $y$ is in the image of $A$. Hence $A$ is bijective.
If $\int_{1}^{\infty} |f(t)|^{2}\,dt < \infty$, is $\int_{1}^{\infty} \frac{|f(t)|}{\sqrt{t}}\,dt < \infty$?
The statement is false. Take $$f(t)=\frac{1}{\sqrt{t}\log(1+ t)}.$$ Then $$\int_1^\infty|f(t)|^2\,dt=\int_1^\infty\frac{1}{t\log^2(1+t)}\,dt<\infty$$ while we have on the other hand $$\int_1^\infty \frac{|f(t)|}{\sqrt{t}}\,dt=\int_1^\infty\frac{1}{t\log(1+t)}\,dt=\infty.$$
Limit definition in Spivak
If you want to get technical, the definition makes sense in the following more general situation: say that $f$ is defined on a set $E \subset \mathbb R$, and you want to define the limit of $f$ at some $a \in \mathbb R$. Then you want $a$ to satisfy the following property: for all $\epsilon>0$, the set $(a-\epsilon, a+\epsilon) \cap E$ is not empty. (The set of $a$ satisfying this property is called the closure of $E$.)
$f'(x)\cosh x - f(x)\sinh x = c_0$
Let $F(x)=f'(x)\cosh x-f(x)\sinh x$. Then $$\begin{align} F'(x) &=(f''(x)\cosh x+f'(x)\sinh x)-(f'(x)\sinh x+f(x)\cosh x)\\ &=(f''(x)-f(x))\cosh x\\ &=0 \end{align}$$ since $f''(x)=f(x)$. This implies $F(x)$ is constant. From $$F(0)=f'(0)\cosh0-f(0)\sinh0=0\cdot1-1\cdot0=0$$ we have $f'(x)\cosh x-f(x)\sinh x=0$ for all $x$.
Understanding of definition of continuity (and uniform continuity)
Continuous at $a$: Let $f: S\to\mathbb{R}$ be a function. The function is continuous at $a\in S$ if $$\forall \epsilon >0\,\,\exists \delta >0 : \forall x\in S\,\, |x-a|<\delta \implies |f(x)-f(a)|<\epsilon $$ notice that $a\in S$ is fixed at the beginning! Uniform continuity: $$\forall \epsilon >0\,\,\exists \delta >0 : \forall a,x\in S\,\, |x-a|<\delta \implies |f(x)-f(a)|<\epsilon $$ So the difference is that the value of $\delta$ in continuity may depend on both $\epsilon$ and the point $a$, whereas for uniform continuity the value of $\delta$ depends only on $\epsilon$ and not on the point at which the definition is being checked! For example $f(x) = x^2$ is clearly continuous (at every point in the domain) but not uniformly continuous (try to prove it by contradiction!). To give you an intuition: the steeper the graph gets, the smaller $\delta$ must be, and on an unbounded domain no single $\delta$ works for all points at once.
Counting the number of ways to make 2 subcommittees from an initial 12 people.
Hint: Since for the first subcommittee you are choosing boys and choosing girls, the number of ways of doing that has to be multiplied: $$\binom{6}{2} \times \binom{6}{2} \ .$$ The second subcommittee follows the same reasoning, but now we have $4$ boys and girls left. As for the total number of combinations of the subcommittees, we will need to have the first subcommittee and the second subcommittee. Can you compute everything now?
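To check just the first factor (without spoiling the rest), a quick sketch with the standard library:

```python
from math import comb

first = comb(6, 2) * comb(6, 2)  # choose 2 of 6 boys and 2 of 6 girls
print(first)  # 225
```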
Finding a probability on an infinite set of numbers
Just use the Geometric Series formula. $P(\Omega)= \sum_{n=1}^ \infty \frac{1}{2^n}= \frac { \frac{1}{2}}{1- \frac{1}{2}}= 1$
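The partial sums converge to $1$, as a quick numerical illustration:

```python
# partial sums of Σ 1/2ⁿ approach 1
partial = sum(0.5**n for n in range(1, 60))
print(abs(partial - 1.0) < 1e-12)  # True
```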
Benford's law with random integers
The linked paper is titled unfortunately, at least as regards the current conception of the word "random." The whole point of Benford's law is precisely that it doesn't hold when integers are drawn uniformly from a range that, like yours, ends at a power of $10$: a well-designed pseudorandom number generator should give numbers with asymptotically exactly a $\frac{1}{9}$ chance of each leading digit $1,2,\dots,9$ in decimal notation. Benford's law applies not to properly random sources of data, but to data produced by certain real-life random processes. One simple characterization of data that obey Benford's law is as those produced by processes that exhibit more-or-less exponential growth, such as populations. Such processes produce uniformly distributed logarithms of data points, but this uniform distribution gets splayed out into the characteristic Benford's law upon exponentiation.
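A small simulation illustrates the point (a sketch; the range and sample size are arbitrary choices): for integers drawn uniformly from $[1,10^6)$, each leading digit $1,\dots,9$ appears with frequency about $1/9\approx0.111$, far from Benford's $\log_{10}2\approx0.301$ for the digit $1$.

```python
import math
import random

random.seed(0)
N = 100_000
counts = [0] * 10
for _ in range(N):
    counts[int(str(random.randrange(1, 10**6))[0])] += 1

freq1 = counts[1] / N
# close to the uniform 1/9, far from Benford's log10(2)
print(abs(freq1 - 1/9) < 0.01, abs(freq1 - math.log10(2)) > 0.05)  # True True
```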
boolean algebra XOR parity
$$h(x_1,x_2,...,x_n) = f(x_1,x_2,...,x_n) \ XOR \ g(x_1,x_2,...,x_n) = 1$$ iff either A. $f(x_1,x_2,...,x_n) = 1$ and $g(x_1,x_2,...,x_n) = 0$ or B. $f(x_1,x_2,...,x_n) = 0$ and $g(x_1,x_2,...,x_n) = 1$ $f(x_1,x_2,...,x_n) = 1$ for $N(f)$ variable placements, and $g(x_1,x_2,...,x_n) = 1$ for $N(g)$ variable placements Since there are $k$ variable placements where $f(x_1,x_2,...,x_n) = 1$ and $g(x_1,x_2,...,x_n) = 1$, there are $N(f) - k$ placements where $f(x_1,x_2,...,x_n) = 1$ and $g(x_1,x_2,...,x_n) = 0$. Similarly, there are $N(g)-k$ placements where $f(x_1,x_2,...,x_n) = 0$ and $g(x_1,x_2,...,x_n) = 1$. So, there are in total $N(f) - k + N(g) - k = N(f) + N(g) - 2k$ placements where $h(x_1,x_2,...,x_n) = 1$, i.e. $N(h) = N(f) + N(g) - 2k$
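The identity $N(h)=N(f)+N(g)-2k$ can be verified by brute force over all $2^n$ assignments for small random truth tables (a sketch; the random tables are arbitrary):

```python
import random
from itertools import product

random.seed(1)
points = list(product([0, 1], repeat=3))   # all assignments for n = 3
f = {x: random.randint(0, 1) for x in points}
g = {x: random.randint(0, 1) for x in points}

Nf, Ng = sum(f.values()), sum(g.values())
k = sum(1 for x in points if f[x] == 1 and g[x] == 1)
Nh = sum(1 for x in points if f[x] ^ g[x])
print(Nh == Nf + Ng - 2*k)  # True
```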
How do we relate the exterior derivative of a scalar function to the "differential?"
I don't think you want to think of $df$ as the dot product of $\nabla f$ with a vector of differentials $\langle dx,dy\rangle$. Instead, think of $dx$ and $dy$ as the dual basis to the standard basis vectors $\mathbf{i}$ and $\mathbf{j}$. In other words, \begin{align*} dx(a\mathbf{i} + b\mathbf{j}) &= a \\ dy(a\mathbf{i} + b\mathbf{j}) &= b \end{align*} Then $df_{(x,y)}$ is a dual vector whose coefficients are $\frac{\partial f}{\partial x}(x,y)$ and $\frac{\partial f}{\partial y}(x,y)$. It acts on vectors $v=a\mathbf{i} + b\mathbf{j}$ as $$ df(v) = \left(\frac{\partial f}{\partial x}dx + \frac{\partial f}{\partial y}dy\right)(a\mathbf{i} + b\mathbf{j}) = \frac{\partial f}{\partial x}a + \frac{\partial f}{\partial y}b $$ So you see that this is really the same thing as $\nabla f \cdot v$. The differential is the more "natural" of the two because it uses dual vectors rather than the dot product. On a general Riemannian manifold with inner product $g$, the gradient operator can be defined by the equation $$ df(v) = g(\nabla f,v) $$
Solution of a Hamilton-Jacobi-Bellman (HJB) equation
Assume $\psi' = f$, then the equation is of the form: $$Af' + B\frac{f^2}{f'} + (cu+d)f = 0$$ By putting $$g = \frac{f'}{f} = (\log f)'$$ we see that $$Ag + \frac{B}{g} + (cu+d) = 0$$ This is a quadratic in $g$ and can be easily solved. We get $$ \psi' = f = e^{\int g}$$ Hope that helps.
Stokes' Theorem problem (direct calculation and using Stokes' Theorem)
I know this isn't an answer, but I can't add a comment. Here's a page that explains the theorem and its use http://tutorial.math.lamar.edu/Classes/CalcIII/StokesTheorem.aspx
How to prove linearity, find matrix relative to canonical basis and determine kernel and the image with given map?
How do you find the matrix $A$ representing $f$ with respect to the standard basis? I will take $n = 2$; the general result is similar. Let $v = \pmatrix{v_1\\v_2}$ and take the standard basis $\left\{e_1 = \pmatrix{1\\0}, e_2 = \pmatrix{0\\1} \right\}.$ The first column of $A$ is $Ae_1 = f(e_1) = \pmatrix{v_1\\v_2} v_1$, because the inner product is $v^Te_1 = v_1$; therefore we have $Ae_1 = \pmatrix{v_1v_1\\v_1v_2}.$ Similarly $Ae_2 = \pmatrix{v_2v_1\\v_2v_2}.$ Finally, here is the matrix $$ A = \pmatrix{v_1v_1 & v_2v_1\\v_1v_2 & v_2v_2}.$$ How do you find the kernel and the image of $f$? The image of $x$ under $f$ is always a multiple of $v$, so $\operatorname{image}(f) = \{\alpha v : \alpha \text{ a scalar}\}$, the one-dimensional space spanned by $v.$ The kernel of $f$ consists of all $x$ such that $f(x) = 0$, which means $x^Tv = 0$; that is, the kernel is the orthogonal complement of the span of $v.$
Need help with integral manipulation
hint To integrate a rational fraction of the form $$\frac{P(u)}{Q(u)}$$ with $ deg P&lt;deg Q$ , we use partial fractions decomposition. So $$\frac{1}{u(a-bu)}=\frac Au + \frac{B}{a-bu}$$ with $$A=\frac 1a\text{ and } B=\frac ba$$ You should find that $$\int \frac{du}{u(a-bu)}=\frac 1a\ln(\Bigl|\frac{u}{a-bu}\Bigr|) +C$$
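A numerical spot check of the antiderivative (hypothetical values $a=3$, $b=2$, $u_0=0.5$):

```python
import math

a_, b_ = 3.0, 2.0
F = lambda u: math.log(abs(u / (a_ - b_*u))) / a_   # claimed antiderivative
u0, h = 0.5, 1e-6
deriv = (F(u0 + h) - F(u0 - h)) / (2*h)             # central difference
print(abs(deriv - 1/(u0*(a_ - b_*u0))) < 1e-6)      # True
```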
The implications of a Pi-base number system
Each number would have an expansion of the form $$x = \pm (\ldots + x_{-1} \pi^{-1} + x_0 + x_1 \pi + x_2 \pi^2 + \ldots + x_n \pi^n)$$ where $0 \leq x_i < \pi$ with $x_i \in \mathbb{Z}$, so the digits are $0,1,2,3$. A circle of diameter $D$ would have a circumference of $10\,D$, since $\pi$ itself is written as $10$ in base $\pi$ (equivalently, $20\,r$ in terms of the radius $r$).
Expected local clustering coefficient for a node in a network
The formula seems to assume an Erdős–Rényi random graph, i.e., each vertex pair is connected with a fixed probability $p$. Given a vertex set $V$ with $|V| = n$, consider the probability space $(\mathcal{G}\times V, \mathcal{F},\mathbb{P})$, where $\mathcal{G}$ is the set of all graphs on $V$. Let $c(G,x)$ be the local clustering coefficient of vertex $x \in V$ in graph $G \in \mathcal{G}$. In the case of an Erdős–Rényi random graph with parameter $p$ $$ \mathbb{P}(\{(G,x)\}) = \frac{1}{n}p^{|E(G)|}(1-p)^{\binom{n}{2} - |E(G)|}$$ The local clustering coefficient of a vertex $x$ in graph $G$ is the fraction of vertex pairs in the neighborhood of $x$ that have an edge between them. $$c(G,x) = \frac{|\{y,z\} : \{x,y\},\{x,z\},\{y,z\} \in E(G) |}{|\{y,z\} : \{x,y\},\{x,z\} \in E(G) |} = \frac{|\{y,z\} : \{x,y\},\{x,z\},\{y,z\} \in E(G) |}{\binom{\deg(x)}{2}}$$ The expected local clustering coefficient is \begin{align} \mathbb{E}[c(G,x)] &= \mathbb{E}[\mathbb{E}[c(G,x)|x]] \\ &= \mathbb{E}\left[\mathbb{E}\left[ \left.\frac{\textrm{#edges among neighbors of }x}{\binom{\deg(x)}{2}}\right|x\right]\right] \\ &= \mathbb{E}\left[ \frac{1}{ \binom{\deg(x)}{2} } \mathbb{E}\left[\left. \sum_{\{\{y,z\}:\{x,y\}, \{x,z\}\in E\}} \mathbb{1}_{ \{y,z\} \in E }\right|x\right]\right] \\ &= \mathbb{E}\left[ \frac{1}{ \binom{\deg(x)}{2} } \sum_{\{\{y,z\}:\{x,y\}, \{x,z\}\in E\}} \mathbb{E}\left[\left. \mathbb{1}_{ \{y,z\} \in E }\right|x\right]\right] \\ &= \mathbb{E}\left[ \frac{1}{ \binom{\deg(x)}{2} } \sum_{\{\{y,z\}:\{x,y\}, \{x,z\}\in E\}} \mathbb{P}\left(\left. \{y,z\} \in E \right|x\right)\right] \\ &= \mathbb{E}\left[ \frac{1}{ \binom{\deg(x)}{2} } \sum_{\{\{y,z\}:\{x,y\}, \{x,z\}\in E\}} p\right] = \mathbb{E}\left[ \frac{1}{ \binom{\deg(x)}{2} } p \binom{\deg(x)}{2} \right] =p \end{align} But this can also be written as \begin{align} \mathbb{E}[c(G,x)] &= \mathbb{E}\left[\mathbb{E}\left[ \left.\frac{\textrm{#edges among neighbors of }x}{\binom{\deg(x)}{2}}\right|x\right]\right] \\ &= \mathbb{E}\left[ \sum_{i=0}^{\binom{\deg{x}}{2}}\underbrace{\mathbb{P}( i \textrm{ edges among neighbors of }x| x)}_{\mathrm{Binom}\left(i;\binom{\deg(x)}{2},p\right)} \;\frac{i}{\binom{\deg(x)}{2}} \right] \\ &= \sum_{k=0}^{N-1} \underbrace{\mathbb{P}(\deg(x) = k)}_{\mathrm{Binom}(k;N-1,p)} \sum_{i=0}^{\binom{k}{2}} \; \underbrace{\mathbb{P}( i \textrm{ edges among neighbors of }x|\mathrm{degree}(x) = k)}_{\mathrm{Binom}\left(i;\binom{k}{2},p\right)} \;\frac{i}{\binom{k}{2}} \end{align}
Why is product rule applicable with Frobenius product?
There are many so-called products where the product rule works. Take a look at a proof of the product rule, and you can see that most of what's required for that proof to apply is that you can distribute the product over addition, which is kind of required for an operation to deserve the name "product" in the first place.
A question on absolute values of line integrals
In general, $dz$, equivalently $\gamma'(t)$, is not real-valued, so $$\int_\gamma \lvert f(z)\rvert\,dz = \int_a^b \lvert f(\gamma(t))\rvert\gamma'(t)\,dt$$ need not be real. Then the inequality doesn't make sense. What is true is $$\left\lvert \int_\gamma f(z)\,dz\right\rvert \leqslant \int_\gamma \lvert f(z)\rvert\, \lvert dz\rvert = \int_a^b \lvert f(\gamma(t))\rvert\,\lvert \gamma'(t)\rvert\,dt.$$
Using Laplace transform to solve convolutions
You must evaluate $\mathcal{L}^{-1}\left\{\dfrac{1}{(s^2+a^2)^2}\right\}$ Recall that: $$\mathcal{L}^{-1}\{F(s)\cdot G(s)\}=f(t)\ast g(t)=\int_0^t f(t-\tau)g(\tau)~d\tau \tag{1}$$ Where $F(s)=\mathcal{L}\left\{f(t)\right\}$ and $G(s)=\mathcal{L}\left\{g(t)\right\}$. The most logical and obvious choice in your case is to select $F(s)=G(s)=\dfrac{1}{s^2+a^2}$. The inverse Laplace transform of $\dfrac{1}{s^2+a^2}$ is obviously $f(t)=g(t)=\dfrac{\sin(at)}{a}$. Hence, from $(1)$ we must evaluate the following integral: $$\begin{align} \mathcal{L}^{-1}\left\{\frac{1}{(s^2+a^2)^2}\right\}&=\int_0^t \frac{\sin(a(t-\tau))}{a}\cdot \frac{\sin(a\tau)}{a}~d\tau\\&=\frac{1}{a^2}\int_0^t \sin(a(t-\tau))\sin(a\tau)~d\tau \tag{2} \end{align}$$ Evaluating this will give you the same answer you have: $$\mathcal{L}^{-1}\left\{\frac{1}{(s^2+a^2)^2}\right\}=\frac{\sin(at)-at\cos(at)}{2a^3}$$
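The closed form can be spot-checked against a direct numerical evaluation of integral $(2)$ (a sketch with hypothetical values $a=2$, $t=1.3$, midpoint rule):

```python
import math

a_, t_ = 2.0, 1.3
n = 200_000
h = t_ / n
# midpoint-rule approximation of (1/a²) ∫₀ᵗ sin(a(t-τ)) sin(aτ) dτ
integral = sum(math.sin(a_*(t_ - (i + 0.5)*h)) * math.sin(a_*(i + 0.5)*h)
               for i in range(n)) * h / a_**2
closed = (math.sin(a_*t_) - a_*t_*math.cos(a_*t_)) / (2*a_**3)
print(abs(integral - closed) < 1e-8)  # True
```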
Let $V$ be an inner product space over $\mathbb{C}$, prove the Polar Identities: For $x,y\in V$
We have \begin{align} \frac{1}{4}\sum_{k=1}^4i^k\Vert x+i^ky\Vert^2 &=\frac{1}{4}\left(i\Vert x+iy\Vert^2-\Vert x-y\Vert^2 -i\Vert x-iy\Vert^2+\Vert x+y\Vert^2\right)\\ &=\frac{i}{4}\left(\Vert x+iy\Vert^2-\Vert x-iy\Vert^2\right) +\frac{1}{4}\left(\Vert x+y\Vert^2-\Vert x-y\Vert^2\right)\\ &=\frac{i}{4}\cdot2\left(\langle x,iy\rangle+\langle iy,x\rangle\right) +\frac{1}{4}\cdot2\left(\langle x,y\rangle+\langle y,x\rangle\right)\\ &=-\frac{1}{4}\cdot2\left(-\langle x,y\rangle+\langle y,x\rangle\right) +\frac{1}{4}\cdot4\cdot{\rm Re}\langle x,y\rangle\\ &=-\frac{1}{4}\cdot2\left(-2i\cdot{\rm Im}\langle x,y\rangle\right) +{\rm Re}\langle x,y\rangle\\ &=i\,{\rm Im}\langle x,y\rangle+{\rm Re}\langle x,y\rangle\\ &=\langle x,y\rangle. \end{align} (Here the inner product is conjugate-linear in the second argument, so $\langle x,iy\rangle=-i\langle x,y\rangle$ and $-\langle x,y\rangle+\langle y,x\rangle=-2i\,{\rm Im}\langle x,y\rangle$.)
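A quick numeric check of the identity in $\mathbb{C}^4$ (random vectors of my choosing; the inner product is conjugate-linear in the second argument, matching the derivation above):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=4) + 1j * rng.normal(size=4)
y = rng.normal(size=4) + 1j * rng.normal(size=4)

# inner product, conjugate-linear in the second slot
inner = lambda u, v: np.sum(u * np.conj(v))

# polarization identity: (1/4) * sum_{k=1}^{4} i^k ||x + i^k y||^2
lhs = sum((1j**k) * np.linalg.norm(x + (1j**k) * y)**2 for k in range(1, 5)) / 4
rhs = inner(x, y)
```

Both sides agree to machine precision, for any choice of complex vectors.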
How to find the corresponding Collineation?
Send $e_{0} \mapsto a\cdot(-1,1,2)^{T}$, $e_{1} \mapsto b\cdot (2,-5,1)^{T}$, $e_{2} \mapsto c \cdot (1,2,-3)^{T}$. This gives you a matrix $A = \begin{bmatrix} -a&2b&c\\a&-5b&2c\\2a&b&-3c \end{bmatrix}$ Now you need to have that $$A(e_{0}+e_{1}+e_{2}) = A \begin{bmatrix} 1\\1\\1\end{bmatrix} = \begin{bmatrix} -a+2b+c\\a-5b+2c\\2a+b-3c \end{bmatrix}$$ is a scalar multiple of $(0,0,1)^{T}$. So you can set up equations to find $a$, $b$, and $c$ (you can either assume WLOG that $a=1$, or that $A(e_{0}+e_{1}+e_{2}) = (0,0,1)^{T}$, whichever is more convenient).
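A sketch of working those equations out: setting the first two coordinates to zero, $-a+2b+c=0$ and $a-5b+2c=0$ force $b=c$ and $a=3c$; the values $a=3$, $b=1$, $c=1$ below are my choice of representative ($c=1$).

```python
import numpy as np

# -a + 2b + c = 0 and a - 5b + 2c = 0 give a = 3c, b = c; take c = 1
a, b, c = 3.0, 1.0, 1.0
A = np.array([[  -a, 2 * b,      c],
              [   a, -5 * b, 2 * c],
              [2 * a,      b, -3 * c]])

img = A @ np.array([1.0, 1.0, 1.0])   # image of e0 + e1 + e2
# img = [0, 0, 4], a scalar multiple of (0, 0, 1)^T as required
```

The third coordinate comes out nonzero ($4$), so the image really is a nonzero multiple of $(0,0,1)^{T}$ rather than the zero vector.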
show that $0.5 - \frac{\ln(x)}{x} > 0$
We can rearrange $$0.5-\frac{\ln(x)}{x}>0$$ to $$0.5>\frac{\ln(x)}{x}.$$ Multiply through by $x$ (which is positive on the domain of $\ln$): $$\frac{1}{2}x>\ln(x).$$ Apply the increasing function $e^x$ to both sides: $$e^{\frac{1}{2}x}>e^{\ln(x)},$$ which is $$e^{\frac{1}{2}x}>x,$$ i.e. $$\sqrt{e^{x}}>x.$$ Exponential functions eventually outgrow any polynomial, and here $e^{x/2}>x$ holds for every $x>0$: the ratio $e^{x/2}/x$ is minimized at $x=2$, where it equals $e/2>1$.

If you didn't like that solution, you could use calculus. Take the derivative of $\frac{\ln(x)}{x}$: $$\frac{x\cdot\frac{1}{x}-\ln(x)\cdot1}{x^2}=\frac{1-\ln(x)}{x^2},$$ and find where it changes sign: $1-\ln(x)=0$ at $x=e$. Because the slope goes from positive to negative there, the function has a maximum at $x=e$, with value $$\frac{\ln(e)}{e}=\frac{1}{e}\approx0.368,$$ which is less than $0.5$. Hence $$0.5>\frac{\ln(x)}{x}$$ and thus $$0.5-\frac{\ln(x)}{x}>0.$$
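The calculus argument can be sanity-checked numerically (the grid bounds and step size are arbitrary choices of mine):

```python
import math

# sample ln(x)/x on a fine grid that covers the maximizer x = e
xs = [0.1 + 0.001 * i for i in range(100000)]
m = max(math.log(x) / x for x in xs)
# m is close to the maximum 1/e, which is about 0.368 < 0.5
```

The sampled maximum matches $1/e$ to the grid resolution and stays safely below $0.5$.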
Number of lines formed by sides of polygon
Can we do better? Yes: This figure has 28 edges forming 12 lines. If you count this as "2 indents on each side" then the generalisation to "$k$ indents per side" has $8k+12$ edges forming $2k+8$ lines, approaching asymptotically 4 edges per line. Are there better configurations? I'd conjecture almost certainly :-). Edit: In fact we can get the ratio arbitrarily low. In these figures with $k$ 'towers' and $k$ 'tiers' ($k \ge 2$) there are $8k^2$ edges in $6k+2$ lines ($2k+2$ horizontal and $4k$ vertical), giving at least $k$ edges per line.
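The edge and line counts stated above can be tabulated directly; this is only a check of the quoted formulas, not of the geometric constructions themselves.

```python
# edge/line counts from the answer, for k = 2, 3, ...
indent_family = [(8 * k + 12, 2 * k + 8) for k in range(2, 100)]   # k indents per side
tower_family  = [(8 * k**2, 6 * k + 2)  for k in range(2, 100)]    # k towers, k tiers

ratio_indent = [e / l for e, l in indent_family]   # tends to 4 edges per line
ratio_tower  = [e / l for e, l in tower_family]    # grows without bound (~ 4k/3)
```

The first family's ratio plateaus near 4, while the second family's ratio exceeds $k$, confirming the "arbitrarily low" claim (in the sense of arbitrarily many edges per line).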