How to take integral of a second order partial derivative of a function?
You can see that something is wrong with your solution from symmetry: $x$ and $y$ should be interchangeable, and your solution doesn't reflect that. When you integrated $\frac{\partial u}{\partial y} = C_1(y)$ you left out the $x$-dependency of the constant of integration: $$ u(x,y) = \int C_1(y) \,\text{d} y + C(x) $$
Source of probably-the-most-simplest-math-trick
The "trick" is just linearity: $\rm\ f^{-1}(f(x)\!+\!f(n))\, =\, x\!+\!n.\:$ Above $\rm\ n=5,\ f(x) = 5x,\ f^{-1}(x) = x/5.$ Such simple consequences of linearity of multiplication are very well-known to mathematics students, but they may not be so obvious to a layperson - especially if disguised more effectively.
Select K numbers from N numbers fairly
Yes, that approach works. Essentially, you first produce a random permutation of all numbers and then pick the first $k$. While inefficient for many scenarios, this may be a nice and simple method e.g. as a one-liner in SQL - but I'm not sure how well it will perform even there (for cases of $n\gg k$).
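A minimal Python sketch of this shuffle-then-truncate idea (the function name `pick_k` is just for illustration; Python's built-in `random.sample` does a fair selection directly):

```python
import random

def pick_k(numbers, k):
    # Produce a random permutation of all the numbers, then keep the first k.
    pool = list(numbers)
    random.shuffle(pool)
    return pool[:k]

# Example: 3 numbers chosen fairly from 1..10.
print(pick_k(range(1, 11), 3))
# For n >> k, random.sample(range(1, 11), 3) avoids shuffling everything.
```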
Find the nth derivative
We can show by induction that $y^{(n)}=\frac{(-1)^n(n-1)!a^n}{(ax+b)^n}$. For the inductive step, note that we can write the $(n+1)$-th derivative as: $\frac{d}{dx} (-1)^n(n-1)!a^n(ax+b)^{-n}=(-1)^n(-n)(n-1)! a^{n+1}(ax+b)^{-n-1}=\frac{(-1)^{n+1}n!a^{n+1}}{(ax+b)^{n+1}}$. The second equality follows from the chain and power rules.
Is there a numeral system for real numbers that is always unique, but still has the usual convenient properties?
What you are searching for cannot exist. For example, suppose we want to represent real numbers in the half-open interval $[0,1)$. As the length of the representation increases, the set of reals represented becomes dense in $[0,1)$. This implies that $1$ can be approximated arbitrarily closely by finite-length representations. Given some natural continuity assumptions about the kind of representation used, this implies that there is an infinite-length representation of $1$ in addition to the finite representation of $1$. Thus, the representation of $1$ is not unique. One important and convenient property of a representation is that you can compare two representations and decide which of the real numbers they correspond to is the larger. This is a kind of monotonicity property, and without a corresponding continuity property there would be gaps of unrepresentable real numbers. This illustrates a basic topological difference between the continuum of real numbers and a very different discontinuum of limits of finite representation systems, somewhat similar to the Cantor set.
Is there any way to find limit points of a sequence which is the sum of two bounded sequences
$f_n$ is periodic with period $12$ because when you add $12$ to $n$ you add $6\pi$ to the argument of the $\sin$ function and $4\pi$ to the argument of the $\cos$ function. Compute $f_n$ for $12$ consecutive values of $n$, say $0$ to $11$. All those values are accumulation points, and there are no others.
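For concreteness, here is a quick enumeration of those twelve values, assuming the sequence is $f_n=\sin(n\pi/2)+\cos(n\pi/3)$ (my guess at the question's sequence, consistent with the periods described above):

```python
import math

# Assuming f_n = sin(n*pi/2) + cos(n*pi/3), which has period 12 as described.
values = sorted({round(math.sin(n * math.pi / 2) + math.cos(n * math.pi / 3), 10)
                 for n in range(12)})
print(values)  # the finite set of accumulation points of (f_n)
```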
How many permutations of the letters DANIEL do not begin with D or do not end with L?
The requirement "do not begin with D or do not end with L" is a bit tricky: for instance, it means that "DANILE" is accepted. Hence, what is not accepted is exactly what has the form "DXXXXL", and therefore we have $6!-4!=696$ possibilities :)
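A brute-force check of this count (a throwaway script, not part of the original answer):

```python
from itertools import permutations

# Count arrangements of DANIEL that do NOT (begin with D and end with L).
ok = sum(1 for p in permutations("DANIEL")
         if not (p[0] == "D" and p[-1] == "L"))
print(ok)  # 696 = 6! - 4!
```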
If $f(x)$ has an asymptote, when does the limit of the tangent lines approach the asymptote?
In order to answer this, we need a notion of "limit" for lines. A natural choice in this setting is to parametrize non-vertical lines by slope $m$ and $y$-intercept $b$, so that a line $y = m(a)x + b(a)$ approaches $y = mx + b$ if and only if $m(a) \to m$ and $b(a) \to b$. As you note, if $f$ is differentiable everywhere, the tangent line to the graph $y = f(x)$ at $x = a$ is $y = f(a) + f'(a)(x - a) = f'(a)x + [f(a) - af'(a)]$. If $f$ has a horizontal asymptote $y = b$, this tangent line approaches $y = b$ if and only if $f'(a) \to 0$ and $f(a) - af'(a) \to b$; since $f(a) \to b$, the latter condition amounts to $af'(a) \to 0$. The second condition clearly implies the first, since $|f'(a)| < |a|\, |f'(a)|$ for $|a| > 1$. It's also fairly clear by example that $f'(a) \to 0$ does not imply $af'(a) \to 0$. In sum, the tangent line of $f$ at $a$ approaches the horizontal asymptote as $|a| \to \infty$ if and only if $af'(a) \to 0$. If instead you have an oblique asymptote $y = mx + b$, the differentiable function $g(x) = f(x) - mx$ has a horizontal asymptote, so this question reduces to the previous case, and the necessary and sufficient conditions are $f'(a) \to m$ and $f(a) - af'(a) \to b$.
Motivations for mapping class group representations
Well, think about what the mapping class group is: we can view it as a group of diffeomorphisms of a surface where we identify isotopic ones. But two isotopic diffeomorphisms induce the same action on the fundamental group of the surface (I will completely ignore basepoints here; all my surfaces are closed, connected and oriented and everything I write preserves orientation). Therefore, one of the fundamental ways to understand the mapping class group is to understand what it does to isotopy classes of curves on your surface. In fact, the essential closed curves inside surfaces determine many geometric properties of the surface. And the mapping class group is the thing that moves isotopy classes of these curves around. Now look: $\pi_1(S)$ is generally a non-abelian group, but it has an abelianization isomorphic to $H_1(S,\mathbb{Z})$. And since the MCG acts on $\pi_1$, it acts on homology; by tensoring with $\mathbb{R}$, we get a linear action of the mapping class group on a real vector space $V=H_1(S,\mathbb{R})$ preserving a lattice. That is, we get a representation of the MCG on $V$ which has additional structure preserved by the action of the MCG, and this representation has a lot to say about the surface. For example, via this representation one can prove that the mapping class group has a torsion-free subgroup of finite index and already gain information about the group and the surface. This is a highly non-trivial result, and the symplectic representation on homology plays a fundamental role. It turns out however that a big portion of the MCG is not seen by its representation on homology. The missing part, called the Torelli subgroup, is the subgroup that acts trivially on homology, and it is complicated. Think of the linear action on homology as rearranging cells, and the remaining non-linear part as involved in the knotting and twisting of curves in the surface. There is a vast amount to be said about the linear and non-linear actions of the mapping class group. After a reasonable modification, the MCG becomes isomorphic to the outer automorphism group of the fundamental group (this is a huge result). This provides an algebraic group structure on the MCG, and the representation theory of algebraic groups is rich and elegant. I can go on forever, but I prefer to give you an excellent reference: A primer on mapping class groups, by Farb and Margalit. It develops in an accessible way everything I wrote and goes far beyond. If you are familiar with the basics of the MCG, you can in fact jump directly to chapter 6, which is the most relevant part to your question. I did not talk at all about the external properties of representations of the MCG or how they inform the Teichmüller theory of the surface, but I hope Farb and Margalit's book will provide enough incentive for you to look into these things.
Universal property of quotient group to get epimorphism
Yes. In fact, this has nothing to do with group theory. If you have a surjective map $f\colon A\longrightarrow B$ and you have a map $g\colon B\longrightarrow C$, then $g$ is surjective if and only if $g\circ f$ is surjective.
Normal distribution can not be transformed into Laplace via additive transformation
What you use to solve this problem is that the Fourier transform of a standard Gaussian decays much faster than the Fourier transform of $e^{-|x|}$. If we handwave a little, this means that the Gaussian is much smoother than $e^{-|x|}$, which is something we can exploit directly. The main idea is classical: adding an independent real-valued random variable is equivalent, at the level of distributions, to convolving the densities, and convolution improves smoothness. This is often used when we want to deal with smooth distributions: adding a little random noise smooths out any initial distribution. Proposition. Let $N$ be a standard normal random variable. Let $V$ be a real-valued random variable, independent from $N$. Then the density of the distribution of $N+V$ has an entire version. Proof. Let $\mu$ be the distribution of $V$. Then $(\mathbb{R}, \mathcal{B}, \mu)$ is a measure space. For $M >0$, let $G_M := \{z \in \mathbb{C}: \ |\operatorname{Im}(z)| <M\}$. Then $G_M$ is open, and for $z =: x+iy \in G_M$, $$\left| \frac{1}{\sqrt{2 \pi}}e^{-\frac{z^2}{2}} \right| = \frac{1}{\sqrt{2 \pi}}e^{\frac{y^2-x^2}{2}} \leq \frac{1}{\sqrt{2 \pi}}e^{\frac{M^2}{2}}.$$ For $z \in G_M$ and $\omega \in \mathbb{R}$, let $f(z,\omega) := \frac{1}{\sqrt{2 \pi}}e^{-\frac{(z-\omega)^2}{2}}$. Then $f$ is bounded, holomorphic in the first variable, and measurable in the second. Hence, using complex differentiation under the integral, the function $$F: z \mapsto \int_{\mathbb{R}} \frac{1}{\sqrt{2 \pi}}e^{-\frac{(z-\omega)^2}{2}} d \mu (\omega)$$ is holomorphic on $G_M$. Since this is true for all $M>0$, the function $F$ is holomorphic on $\mathbb{C}$, and thus entire. But $F$ is exactly a density of the distribution of $N+V$. All we need to conclude is the observation that the density of a Laplace random variable is not smooth at $0$.
Counting squarefree numbers which have $k$ prime factors?
(edit: I'll try to get an asymptotic formula, but I wouldn't be surprised if there are some errors) If in $n = \prod_{i=1}^k p_i$ the order of the prime factors matters, then with $\delta_P(n)$ the indicator function of the primes: $$h(1,n) = \pi(n) = \sum_{m=1}^n \delta_P(m) = 1 \ast \delta_P(n)\qquad \qquad h(2,n) = \sum_{p \le n} \pi(n/p) = h(1,.) \ast \delta_P(n)$$ $$h(k+1,n) = \sum_{p \le n} h(k,n/p) = \sum_{m=1}^n h(k,n/m) \delta_P(m) = h(k,.) \ast \delta_P(n)$$ i.e. with $P(s) = \sum_{p \in \mathcal{P}} p^{-s} = \sum_{n=1}^\infty \delta_P(n) n^{-s}$: $$H(1,s) = s\int_1^\infty \pi(x) x^{-s-1} dx = s P(s)$$ $$H(k,s) = H(k-1,s) P(s) = s P(s)^{k} $$ and because when $s \to 1^+$: $P(s) \sim \ln \zeta(s) \sim \ln(s-1)$: $$H(k,s) \sim s (\ln(s-1))^k$$ from which we get the growth rate (with $\ast$ still denoting the multiplicative convolution): $$h(k,n) \sim \frac{d}{dx}\underbrace{\displaystyle\frac{x}{\ln x} \ast \ldots \ast \frac{x}{\ln x}}_{k}\ (n)$$ finally, getting from $h(k,n)$ the growth rate of your function $f(k,n)$ counting the number of square-free integers with exactly $k$ prime factors (this time without taking into account the order of the factors) should be a matter of a constant: $$f(k,n) \sim C_k h(k,n)$$ with probably $C_k \sim \frac{1}{k!}$
Is there a more rigorous method of notating arithmetic?
Well, performing the operations first should be very obvious; I would probably present that in the following way: $$ 3 \cdot 2 + x = 7 \\ 6 + x = 7 \\ 6 + x + (-6) = 7 +(-6) \\ 6 + (-6) + x = 1 \\ 0 + x = 1 \\ x = 1 $$ explaining each step like "(adding $-6$ to both sides)" or "(computing the sum of $7 + (-6)$)".
Multi-derivative containing standard normal CDF
By using $ e^{bx}\varphi(x) = e^{\frac{b^2}{2}}\varphi(x-b) $, we have \begin{equation} \begin{split} &\frac{\text{d}^m}{\text{d}x^m} (e^{bx}\varphi(x)) \\ =& \frac{\text{d}^m}{\text{d}x^m} (e^{\frac{b^2}{2}}\varphi(x-b))\\ =& e^{\frac{b^2}{2}} \frac{\text{d}^m}{\text{d}x^m} (\varphi(x-b))\\ =& e^{\frac{b^2}{2}} (-1)^m H_m (x-b) \varphi(x-b). \end{split} \end{equation} Therefore, \begin{equation} \begin{split} \frac{\text{d}^m}{\text{d}x^m} \left( e^{bx} \Phi(x) \right)=& b^m e^{bx} \Phi(x) \\ & + e^{\frac{b^2}{2}} \varphi(x-b) \Big[ b^{m-1} +(-1)^1 H_1 (x-b) b^{m-2} \\ & + \dotsb +(-1)^{m-2} H_{m-2} (x-b)b +(-1)^{m-1} H_{m-1} (x-b) \Big] , \end{split} \end{equation} where $H_m(x)$ is the Hermite polynomial defined as above.
Understanding the proof to continuity of norms in normed spaces
The norm is a function $||\cdot||_X:X\to [0,\infty)$. Hence if you define $f(x)=||x||_X$, then $f(x)$ is a nonnegative real number. In $\mathbb{R}$, the norm is the usual Euclidean norm, which is simply the absolute value, so $||f(x)-f(y)||_\mathbb{R}=\big|||x||-||y||\big|$ :) If you have any questions, let me know!
Standard definition of smoothness on an arbitrary subset
For me, the standard definition is given via local charts. Let $M$ be an $n$-manifold and $f: A \subset M \to \mathbb{R}$. Let $(\phi,U)$ and $(\psi, V)$ be local charts on $M$ and $\mathbb{R}$. Then we say $f$ is smooth if $\psi \circ \tilde{f} \circ \phi^{-1}: \phi(U) \subset\mathbb{R}^n \to \psi(V) \subset \mathbb{R}$ is smooth. Here we take $x \in A$, and $U$ is an open set about $x$ on which there is an extension $\tilde{f}$ with $\tilde{f}|_{A \cap U} = f|_{A \cap U}$. In other words, $f$ is smooth if there is an extension for which this composition is smooth. Edit: The local chart on $\mathbb{R}$ can be dropped since we know the coordinate system there, and so we only need $\tilde{f} \circ \phi^{-1}$ to be smooth.
Formalizing the definition of continuity and uniform continuity according to first order logic
Let $(A,d)$ and $(B, \rho)$ be metric spaces. The function $ \ f:A \to B \ $ is continuous iff $$(\forall x \in A)(\forall \varepsilon >0)(\exists \delta>0)(\forall y \in A)[d(x,y)<\delta \to \rho \big( f(x) , f(y) \big)< \varepsilon]$$ The function $ \ f:A \to B \ $ is uniformly continuous iff $$(\forall \varepsilon >0)(\exists \delta>0)(\forall x \in A)(\forall y \in A)[d(x,y)<\delta \to \rho \big( f(x) , f(y) \big)< \varepsilon]$$ Where "$(\forall v \in X) \ P$" is an abbreviation for "$(\forall v)(v \in X \to P)$" and "$(\forall \varepsilon >0) \ Q$" is an abbreviation for "$(\forall \varepsilon)[(\varepsilon \in \mathbb{R} \wedge \varepsilon >0) \to Q]$".
Integral of $(1-x^2)^{1/4}$
This integral doesn't appear to have an elementary antiderivative; at least neither Maple nor SageMath could find one. Maple gives $$\int (1 - x^2)^{1 / 4} dx = x \cdot {}_2 F_1\left(-\frac{1}{4}, \frac{1}{2}; \frac{3}{2}; x^2\right) + C,$$ where ${}_2 F_1$ is the ordinary hypergeometric function. Incidentally, to evaluate the original integral, $$\int \sqrt\frac{1 - \sqrt{x}}{1 + \sqrt{x}} \,dx ,$$ the substitution $x = u^2, dx = 2 u \,du$ transforms the integral into one whose integrand is a rational function of $u$ and $\sqrt{1 - u^2}$, which can thus in turn be rationalized with an Euler substitution.
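A quick numerical sanity check of Maple's antiderivative, assuming the `mpmath` library is available: differentiate it numerically and compare with the integrand.

```python
from mpmath import mp, hyp2f1, diff

mp.dps = 30

def F(x):
    # Maple's antiderivative: x * 2F1(-1/4, 1/2; 3/2; x^2)
    return x * hyp2f1(-0.25, 0.5, 1.5, x**2)

x0 = 0.3
print(diff(F, x0))         # numerical derivative of the antiderivative at x0
print((1 - x0**2)**0.25)   # the integrand (1 - x^2)^(1/4); the two agree
```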
Construction of gluing sheaves as an equaliser
The answer is yes, basically because every limit can be written as an equalizer of products. Here $\mathcal T$ seems to be a typo for $\mathcal V$, which is a covering of $V$ assumed to be stable under fiber products. Then I claim that the limit of $F_i(U_i\times_X W)$ over $\mathcal V$ is the usual thing, that is: the equalizer of $\prod_{W\in \mathcal V}F_i(U_i\times_X W) \rightrightarrows \prod_{W,W'\in \mathcal V}F_i(U_i\times_X (W\times_V W'))$ where the two maps are the same as usual. If this is correct, then the fact that the limit is $F_i(U_i\times_X V)$ is just the statement that $F_i$ is a sheaf, that $U_i\times_X W$ is a covering of $U_i\times_X V$, and that $U_i\times_X (W\times_V W') = (U_i\times_X W)\times_{U_i\times_X V} (U_i\times_X W')$ (which follows from a simple diagram chase). Now the proof. Suppose you have a cone $(Y,f_W)$ over the $F_i(U_i\times_X W), W\in\mathcal V$. Then clearly the product map $Y\to \prod_{W\in \mathcal V}F_i(U_i\times_X W)$ with coordinates the $f_W$ equalizes the two maps of the above equalizer, as $W\times_V W' \in \mathcal V$. So we get a unique map from $Y$ to the equalizer. So it suffices to show that the projections of the equalizer form a cone. But suppose you have a map $W\to W'$ in $\mathcal V$, that is, an inclusion $W\subset W'$. Then $W'\times_V W = W$, so that clearly the two maps are equalized by the equalizer: it does form a cone. So we are done. (I treated the case of a space, but of course if there is a map $W\to W'$ in $\mathcal V$, then it's a map above $V$, so that $W\times_V W'$ is still canonically identified with $W$, so it doesn't change much.)
Volume Integration of Bounded Region
This region is more naturally described in spherical coordinates than in cylindrical ones. We are given that the region is between the two vertical planes $y=x$ and $y=\sqrt3x$, and between the sphere $x^2 + y^2 + z^2 = 1$ and the upper half of the cone $x^2 + y^2 = z^2$. From this, we can set the bounds to be: $$ \frac\pi4 \le \theta \le \frac\pi3$$ for the angles between the two planes (since $\arctan 1 = \pi/4$ and $\arctan\sqrt3 = \pi/3$), $$0 \le \phi \le \frac\pi4 $$ from the intersection of the cone and sphere, and $$0 \le r \le 1 $$ from the radius of the sphere.
Bound on complex triples satisfying some simple equations.
We may assume $|a|\geq|b|\geq|c|$. We therefore introduce new variables $u$, $v\in{\mathbb C}$ such that $$b=a u, \quad c= b v=a u v\qquad\bigl(|u|\leq1, \ |v|\leq 1\bigr)\ .$$ We then obtain $$3=a+b+c=a(1+u+uv)\tag{1}$$ and $$0=(a+b+c)^2-(a^2+b^2+c^2)=2(ab+ac+bc)=2a^2 u(1+v+uv)\ .$$ The case $u=0$ leads to $a=3>2$. Otherwise we have $1+v+uv=0$, or $$v=-{1\over 1+u}\ .\tag{2}$$ Since we want $|v|\leq1$ we need $|u+1|\geq1$. It follows that the admissible values for $u$ are in the closed unit disc from which the open unit disc centered at $-1$ has been removed. Let $\bar \Omega$ be this moon-shaped region. Given a $u\in\bar\Omega$ we have to calculate $v$ from $(2)$ and then have to verify in $(1)$ that $|1+u+uv|\leq{3\over2}$. This would imply that $|a|\geq2$. Now $$1+u+uv={1+u+u^2\over1+u}=:f(u)\ .$$ In order to show that $|f(u)|\leq{3\over2}$ in $\bar\Omega$ it is sufficient to verify this inequality along the two boundary arcs. Along the arc $$\gamma_1:\quad t\mapsto u(t):=-1+e^{it}\qquad\left(-{\pi\over3}\leq t\leq{\pi\over3}\right)$$ we have $|1+u(t)|=1$, so that it remains to verify that $$\bigl|1+u(t)+u^2(t)\bigr|=2\cos t-1\in[0,1]\qquad\left(-{\pi\over3}\leq t\leq{\pi\over3}\right)\ .$$ Along the arc $$\gamma_2:\quad t\mapsto u(t):=e^{it}\qquad\left(-{2\pi\over3}\leq t\leq{2\pi\over3}\right)$$ we have to study the full complex number $$w(t):=f\bigl(u(t)\bigr)={1+e^{it}+e^{2it}\over1+e^{it}}\qquad\left(-{2\pi\over3}\leq t\leq{2\pi\over3}\right)\ .$$ The following figure shows the Mathematica plot of this curve $t\mapsto w(t)$. We see that there is indeed a single point with $|w(t)|={3\over2}$, namely the point $w(0)={3\over2}$. It corresponds to $u=1$, $v=-{1\over2}$, and leads to the triple $(a,b,c):=(2,2,-1)$.
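A numerical stand-in for the missing figure, scanning $|f(u)|$ along the arc $\gamma_2$ (a hypothetical script, not from the original answer):

```python
import cmath
from math import pi

def f(u):
    # f(u) = (1 + u + u^2) / (1 + u), as defined above
    return (1 + u + u * u) / (1 + u)

ts = [-2 * pi / 3 + k * (4 * pi / 3) / 200000 for k in range(200001)]
best = max(abs(f(cmath.exp(1j * t))) for t in ts)
print(best)  # ~1.5, attained at t = 0 (u = 1), matching |w(0)| = 3/2
```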
A term for multiples made by using an odd/even factor
We use the term modulo to describe what is left when you divide integers by a number $m$. Your example of $0,4,8,12,\ldots$ are the numbers equivalent to $0$ modulo $4$, often written $n \equiv 0 \pmod 4$. The numbers $2,6,10,14,\ldots$ are equivalent to $2 \pmod 4$. Does that do what you want?
Is Einstein's riddle an example of a combinatorial design?
The phrase "combinatorial design" can mean many things, but most commonly it refers to block designs. More broadly it might be interpreted in the context of incidence structures. The Einstein riddle could be viewed as involving a multipartite graph which frames the connections between:

- five houses "in a row"
- five nationalities
- five kinds of pets
- five beverages drunk
- five colors for houses
- five brands of cigarettes

Thus one might relate the potential solution space to a complete multipartite graph $K_{5,5,5,5,5,5}$ with six parts, each of size five. The solution amounts to a matching from one part (say, the houses) to each of the other parts, consistent with various "facts" espoused or implied by the puzzle.
Evaluate $\int_1^4 \frac{1}{x^2}\left(1+\frac{1}{x} \right)^{\frac{1}{2}} \, dx$
You want to find $$ \int_1^4 \left(\frac{1}{x^2}\right)\left(1 + \frac{1}{x}\right)^{1/2}\;dx $$ It looks like you are using integration by substitution. Maybe try with $u = 1 + \frac{1}{x}$. Then $$ -du = x^{-2}dx $$ and you get $$ \int_?^? -\sqrt{u}\; du = \dots $$
Pivots, determinant and eigenvalues
I'm not sure what you mean by the "pivots"; there is no unique row-echelon form for any particular matrix, and moreover Gaussian elimination (in particular, row-swapping and row-scaling) does not preserve determinants. But you can say the following: elementary matrices that add a multiple of one row to a different row have determinant 1. It follows from multiplicativity of the determinant that applying any sequence of such elementary operations (only) to any matrix (symmetric or not) leaves the determinant unchanged. If the resulting matrix is upper-triangular, the determinant of the matrix is the product of the diagonal entries. As for property (2): being, up to sign, the constant term of the characteristic polynomial, the determinant of a matrix is always the product of its eigenvalues, with appropriate algebraic multiplicity. Note that if the matrix is not symmetric, the determinant is the product of all eigenvalues, not just the real ones.
Explain confusion about the identity $\lim \inf x_n = \lim \inf A_n$, with $A_n = \{x_k | k \ge n\}$
Because we define $\liminf\limits_{n\rightarrow\infty}x_{n}=\lim\limits_{n\rightarrow\infty}\left(\inf\limits_{k\geq n}x_{k}\right)=\lim\limits_{n\rightarrow\infty}\left(\inf A_n\right)$. It is not defined as $\lim\limits_{n\rightarrow\infty}\inf\{x_{k}:k=1,2,...\}$. Note that $\inf\{x_{k}: k=1,2,...\}$ is an extended real number independent of $n$, hence $\lim\limits_{n\rightarrow\infty}\inf\{x_{k}: k=1,2,...\}$ is simply $\inf\{x_{k}: k=1,2,...\}$.
How to integrate $\int_{-\infty}^\infty {y\exp (-y^2)\over 1+y^2}dy$?
The integrand is an odd function, so the integral is zero.
How can I prove that $g\cdot H \cdot g^{-1}$ is also finite and has the same number of elements as $H$?
Conjugation by an element is an automorphism, the inverse of $\;x\mapsto gxg^{-1}$ being $\;x\mapsto g^{-1}xg$.
Idempotents in $ \mathbb{Z}_n $
No, this cannot be proved; for $c=5$ and $d=6$ we have $-1\cdot c+1\cdot d=1$, so $x=-1$ and $y=1$, but $x=-1$ is not idempotent in $\Bbb{Z}/30\Bbb{Z}$. As for the edited question: for any such $c$, $d$, $x$ and $y$ you have $$(xc)^2=(1-dy)(xc)=xc-xycd\equiv xc\pmod{n},$$ so indeed $xc$ is idempotent. It is rather unclear what you mean by this converse, but I think the answer is no; for any $c$ and $d$ you can take $x=0$ to get $xc=0$ idempotent.
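A small check of both claims in $\Bbb{Z}/30\Bbb{Z}$ (illustrative script only):

```python
n, c, d = 30, 5, 6
x, y = -1, 1
assert x * c + y * d == 1
print((x % n) ** 2 % n == x % n)   # False: x = -1 is not idempotent mod 30
print(((x * c) ** 2 - x * c) % n)  # 0: xc = -5 is idempotent mod 30
```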
Possible directions for directional derivative
This has nothing to do with directional derivatives. As you've noted, it's an observation about the dot product. Fixing $\vec x$ and the length of $\vec a$, then $\vec x\cdot\vec a$ will be the same for two positions of $\vec a$: Once you find one, reflecting across the line spanned by $\vec x$ gives you another. And those are the only ones. (The simplest case to see is dot product $0$. Then you get the two vectors — in your case of length $1$ — orthogonal to $\vec x$.) This is all about two dimensions. In three dimensions, you get a cone when you rotate $\vec a$ around the axis spanned by $\vec x$, and the tip of that vector traces out a circle. And so on in higher dimensions.
Can a Riemann integrable function $f$ on $[-1,1]$ such that $F(x)=\int_{-1}^{x}f$ be differentiable at every point except for $x=0$?
With $$f(x) = \begin{cases} -1 & x < 0 \\ 1 & x \geq 0 \end{cases},$$ we have $$F(x) = \int_{-1}^x f(t)\,dt = |x| - 1 $$ for all $x \in [-1,1]$, which gives such an example: $F$ is differentiable everywhere except at $x = 0$.
Help with a proof $f(n) \leq c \times g(n)$
First, look at the leading terms of $f$ and $g$. The leading term for $f$ is $100n$ and the leading term for $g$ is $n$. Therefore, $c$ will need to be at least $100$. Let $c=100$. Then, you want to show that $$ 100n+\log n\leq 100n+100\log^2n. $$ In other words, you must have that $\log n\leq 100\log^2n$ or that $1\leq 100\log n$. Depending on the logarithm that you're using, this is true for $n$ sufficiently large (for most common logs, this is true for $n\geq 2$).
Go-cart world problem: how long did it take to finish the race?
We are told that if Arshia had not stopped, she would have won by $\frac 14$ hour, so we are given $$A=S+0.4\\\frac{42}A=\frac {42}S-\frac 14$$ Two equations, two unknowns.
Entire functions $f$ such that $|f(z)-e^z|=e^{\operatorname{Re} z}$
Since $$|f(z) - e^z| = e^{\mathrm{Re}(z)} = |e^z|, \; \; z \in \mathbb{C}$$ the quotient $\frac{f(z) - e^z}{e^z}$ is a bounded entire function...
Show that the eigenvalues of $\mathcal{O}(n,\mathbb{R})$ have magnitude 1.
If $A \in \mathcal{O}(n, \mathbb{R})$, then $A$, viewed as a complex matrix, is unitary ($A^*A = A^TA = I$ because $A$ is real), so it satisfies the property $\langle Av, Aw \rangle = \langle v, w \rangle$ for any two vectors $v, w \in \mathbb{C}^n$, where $\langle \cdot, \cdot \rangle$ is the standard inner product given on column vectors by $\langle v, w \rangle = \sum_i v_i \overline{w_i}$. If $\lambda$ is an eigenvalue of $A$, then since we are over $\mathbb{C}$ it has an eigenvector $v$. Then $$\langle v, v \rangle = \langle Av, Av \rangle = \langle \lambda v, \lambda v \rangle = \lambda \overline{\lambda} \langle v, v \rangle$$ And since $\langle v, v \rangle \neq 0$, we have $|\lambda| = 1$. The fact that all eigenvalues are real or come in complex conjugate pairs comes from the fact that the characteristic polynomial has real coefficients.
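A quick empirical illustration with NumPy, assuming it's available (the QR decomposition of a random matrix yields a random orthogonal matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))  # Q is orthogonal
print(np.abs(np.linalg.eigvals(Q)))  # every eigenvalue has magnitude 1
```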
Prove $\chi_A$ is Riemann integrable $\Leftrightarrow$ $\partial{A}$ is a null set
We know that a bounded function is Riemann-integrable if and only if its set of discontinuities has Lebesgue measure zero (i.e., for every $\varepsilon>0$ it can be covered by a countable union of intervals whose lengths sum to less than $\varepsilon$). Now, the set of discontinuities of the characteristic function $\chi_A$ is precisely the boundary $\partial A$. The result follows.
Probability Question using Poisson distribution
The binomial distribution as it was used in the question is not correct unless the number of emails received in a day is non-random; i.e., exactly 25 emails are received each day, and the probability that any given email is spam is exactly 60%. If, however, we assume that the number of emails received in a day is a random variable $N$ which is Poisson distributed with mean $\lambda = 25$, the conditional distribution of the number of spam emails received on a given day with $N = n$ total emails is $$X \mid N = n \sim \operatorname{Binomial}(n,0.6).$$ The unconditional distribution is therefore $$\Pr[X = x] = \sum_{n=x}^\infty \Pr[X = x \mid N = n]\Pr[N = n] = \sum_{n=x}^\infty \binom{n}{x} p^x (1-p)^{n-x} e^{-\lambda} \frac{\lambda^n}{n!} = e^{-p\lambda} \frac{(p \lambda)^x}{x!},$$ which is of course itself Poisson with rate parameter $p\lambda$. So the probability that $X = 15$ is simply $$\Pr[X = 15] \approx 0.102436.$$ In comparison, if $N = 25$ and we use the binomial model, $$\Pr[X = 15] \approx 0.161158.$$ (Your calculation is not correct because you've switched $p$ and $1-p$.) Under a different distributional assumption for the number of emails received in a day, the answer will be different. $N$ need not be $25$, nor does it need to be Poisson. It could be negative binomially distributed; it could be uniform; it could be any discrete distribution whose support is a subset of the nonnegative integers. The question as it is posed does not impose such a distribution, nor does it imply one; it only supposes that its mean is $25$.
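Both numbers are easy to reproduce with SciPy, assuming it's installed:

```python
from scipy.stats import binom, poisson

p, lam = 0.6, 25
print(poisson.pmf(15, p * lam))  # 0.102436..., Poisson number of emails
print(binom.pmf(15, 25, p))      # 0.161158..., exactly 25 emails per day
```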
Prove by Induction using Baseline and splitting into LHS & RHS?
I'm assuming you want to show that $$ \sum_{i=1}^{n-1}(n-i)=\frac{n(n-1)}{2}\tag{1} $$ holds for $n\in \{2,3,\ldots\}$ by using induction. The induction start is to check that $(1)$ holds for $n=2$. But the LHS is $$ \sum_{i=1}^1 (2-i)=2-1=1 $$ and the RHS is $\frac{2(2-1)}{2}=1$ so we see that $(1)$ is satisfied for $n=2$. Now, the induction step is to assume that $(1)$ holds for some $n\in \{2,3,\ldots\}$ and then we aim to show that $(1)$ also holds for $n+1$. The assumption that $(1)$ holds for $n$ means exactly that $$ \sum_{i=1}^{n-1}(n-i)=\frac{n(n-1)}{2}, $$ and we want to show that $$ \sum_{i=1}^{(n+1)-1}((n+1)-i)=\frac{(n+1)((n+1)-1)}{2}\tag{2} $$ Here I have just plugged in $n+1$ instead of $n$ in the formula for $(1)$. But rewriting $(2)$ we have to show $$ \sum_{i=1}^n(n+1-i)=\frac{n(n+1)}{2}\tag{3} $$ So try and start out with the left-hand side of $(3)$ and rewrite it so that you can use your assumption (that is $(1)$). I hope this helps.
Show that a multivariable function isn't injective anywhere
Suppose it were injective in some open ball. By reducing the radius, we can assume it is injective on a closed ball. Let $B$ be this ball. We now restrict $f$ to $B$. Then we have a continuous bijection $B \to f(B)$ from a compact space to a Hausdorff space, so it is a closed map, hence (being a bijection) also an open map. Hence, $f^{-1}$ is continuous on $f(B)$, so $f$ restricted to $B$ is a homeomorphism. Note that $f(B)$ is compact and connected. Then, it is connected and bounded, so it is homeomorphic to a ball in $\mathbb{R}^n$. However, a ball in $\mathbb{R}^n$ is not homeomorphic to a ball in $\mathbb{R}^d$ if $n \neq d$.
Why is $\lim\limits_{x\to0+}x\cot x=1$?
Hint: $\cot(0)$ isn't equal to $0$, in fact $\cot$ isn't continuous at $x=0$.
Generalization of metric spaces?
The paper Ralph Kopperman. 1988. All topologies come from generalized metrics. Am. Math. Monthly 95, 2 (February 1988), 89-97, doi:10.2307/2323060. does this (generalises the codomain of a metric to a quite large class of sets with addition and order) and shows that "all" (I recall a talk on it that showed it for Tychonoff spaces, so it might not be really all) topological spaces can be thus endowed. There have been more of these efforts (some quite category-theoretical, others from computer science applications), but I don't have exact references there. This paper I've seen presented and your question reminded me of it.
Bases and dimensions of subspaces.
Hint: for 1), what can you say about the diagonal elements $a_{ii}$? For 2), that's not $\{x\}$; you can detail that $$\int_{-1}^1p(x)dx=\int_{-1}^1ax^2+bx+c\,dx=a\int_{-1}^1x^2dx+b\int_{-1}^1x\,dx+c\int_{-1}^11\,dx$$ and calculate the three integrals (you will find a basis which is not a subset of $\{1,x,x^2\}$).
If $x_{n+1}=x_n+(-1)^n a_{n+1}$. Is $\{x_n\}$ convergent?
$x_n =x_1 +\sum_{k=2}^n (x_k -x_{k-1} ) =a_1 + \sum_{k=2}^n (-1)^{k-1} a_k =\sum_{k=1}^n (-1)^{k-1} a_k\to \sum_{k=1}^{\infty} (-1)^{k-1} a_k$ by the Leibniz test.
Linear system of equation solving method
Note that you have four unknowns ($x,y,z,\lambda$) so you need four equations: the three from $\nabla f - \lambda \nabla g=0$ and also $xyz=1000$. From there, one nice method is to take advantage of the fact that all four equations have a constant on one side and a product of three of the variables on the other side. You can multiply each equation by one variable to get $\lambda xyz$ on one side, then set all the equations equal to each other to get some nice simple relations between the variables.
Properties of tensor product of modules
It is still true because:

- Any $A$-module is the direct limit of its finitely generated submodules (for any ring $A$).
- Tensor products commute with direct limits.
- Direct limit is an exact functor.

Btw, a submodule of an $A$-module with this property (the morphism $M'\hookrightarrow M$ is universally exact) is called a pure submodule of $M$, and the morphism is said to be pure. Standard examples of pure submodules:

- A direct summand is a pure submodule.
- If $M/M'$ is flat, $M'$ is a pure submodule of $M$.
Does the set $\lim_{n\to\infty}\{0/n,1/n,...,n/n\}$ contain irrational members? Does it equal the interval [0,1]?
You need to specify which limit you consider (there are several different definitions). But if you consider simply the set of limits of all converging sequences of these rational numbers, then the limit is the closed unit interval. You get the same result if you consider the Hausdorff limit.
The relation between the entropy of random variables $X$ and $Y=g(X)$
The post you are referring to is about differential entropy $h(X)$. It's true that simply relabeling the outcomes of a discrete random variable $X$ cannot change its entropy $H(X) = -\sum_x p(x)\log p(x)$. Differential entropy is a similar quantity, but lacks some of the nice properties of $H(X)$, e.g., it can be negative, it changes with nonzero scaling, etc. The relation between these two concepts of entropy is discussed at http://en.wikipedia.org/wiki/Differential_entropy
How to prove that a k-linear mapping is differentiable?
I will do this from scratch, assuming $T$ is bilinear. The generalization to $p$ factors should be straightforward. Take $(s_1,s_2)\in \mathbb R^n\times \mathbb R^n$ such that $\left \| s_i \right \|=1;\ i=1,2. $ Then, $s_i=\sum_{k=1}^{n}a_{ik}e_k\ $ with $\vert a_{ik}\vert \le 1$ and $T(s_1,s_2)=\sum_{k=1}^{n}\sum_{j=1}^{n}a_{1j}a_{2k}T(e_{j},e_{k})$, and an application of the triangle inequality shows that $\left \| T(s_1,s_2) \right \|\le \sum_{k=1}^{n}\sum_{j=1}^{n}\left \| T(e_{j},e_{k}) \right \|$, so $T$ is bounded by some $M\ge 0$. Now, $\tag1 T((s_1,s_2)+(h_1,h_2))=T(s_1+h_1,s_2+h_2)=T(s_1,s_2)+T(h_1,h_2)+T(s_1,h_2)+T(h_1,s_2)$ so that $\tag2 T((s_1,s_2)+(h_1,h_2))-T(s_1,s_2)-(T(h_1,s_2)+T(s_1,h_2))=T(h_1,h_2)$. Finally, writing $h=(h_1,h_2)$, we compute $\tag3 \frac{\left \| T(h_1,h_2) \right \|}{\left \| h \right \|}\le M\frac{\left \| h_1 \right \|\left \| h_2 \right \|}{\left \| h \right \|}\le M\left \| h_2 \right \|$ and then letting $\left \| h \right \|\to 0$, we conclude that $DT(s_1,s_2)(h_1,h_2)=T(h_1,s_2)+T(s_1,h_2)$.
Question on proof that closed interval is compact
Let me give some more details: Question 1: $x\in G_{\alpha_0}$, which is open, and hence $x$ is an interior point of $G_{\alpha_0}$. Hence there exists $\epsilon>0$ such that $(x-\epsilon,x+\epsilon)\subset G_{\alpha_0}$. By the Archimedean property there must then exist some $n\in \Bbb N$ such that $(x-\frac{1}{n},x+\frac{1}{n})\subset (x-\epsilon,x+\epsilon)\subset G_{\alpha_0}$. If you apply the Archimedean property further you can always find $(x-\dfrac{b-a}{2^{n-2}},x+\dfrac{b-a}{2^{n-2}})\subset G_{\alpha_0}$. We choose such an $n$ to facilitate the proof. Question 2: Nested interval property: if $I_n$ is a descending sequence of closed intervals with lengths $\to 0$, then $\cap I_n$ is a singleton. Question 3: Remember the $I_n$'s have been constructed so that no finite subcover of $G$ covers $I_n$. Are these the details that you are asking for?
Geometrical meaning of $ \begin{pmatrix} a\\ b \end{pmatrix} \mapsto \begin{pmatrix} a&-b\\ b&a \end{pmatrix}$
You drew your vector wrong: the green vector should have $x$-coordinate $-b$ and $y$-coordinate $a$. Recall from algebra class that if a line has slope $m$, then a perpendicular line has slope $-1/m$. Therefore the map sends a vector to the matrix whose columns are that vector and a perpendicular vector of the same length.
How does this proof of Theorem 1 in Spivak's Calculus work?
The lines are read left to right and top to bottom. So the symbol $\leq$ on line 2 relates the last expression of line 1 to the next expression on line 2. Similarly, the symbol $=$ on line 3 relates the last expression of line 2 to the next expression on line 3. The same thing could be written in one line as follows: $$ |a+b|^2 = (a+b)^2 = a^2+2ab+b^2 \leq a^2+2|a|\times|b|+b^2 = |a^2|+2|a|\times|b|+|b^2| = (|a|+|b|)^2, $$ but it is easier on the eyes to write each new expression on a new line.
Find the range of a complicated function
Note that if you let $u=3y-4x+13$ then you get $$f(x,y)=\sqrt[4]{\frac{18-u}u}$$ The value inside the root must be nonnegative. This happens when $u$ and $18-u$ have the same sign, so we must have $0<u \le 18$. Check the range of $f(u)$ for those values of $u$ and you have your answer.
Difference between Modification and Indistinguishable
For consistency of terminology, let me say that $X_t$ and $Y_t$ are $M$-equivalent if one is a modification of the other, and $D$-equivalent if they are indistinguishable from one another. (This is not standard terminology, but I find the difference in syntax between the two terms makes it slightly difficult to write.) Here's a way of looking at it based on sampling. Suppose $X_t,Y_t$ are $M$-equivalent. Now choose $t_1 \in I$ arbitrarily. Repeatedly run $X_t$ and $Y_t$ "independently", but only sample them at time $t_1$. Then as you sample more and more times, the fraction of the samples such that $X_{t_1}=Y_{t_1}$ will converge to $1$, regardless of which $t_1$ you chose. Suppose now that they are $D$-equivalent. Repeatedly run $X_t$ and $Y_t$ again, but this time record the entire trajectory. Then as you sample more and more times, the fraction of the samples such that $X_t$ and $Y_t$ are equal at every time will converge to $1$. So $D$-equivalence automatically implies $M$-equivalence, since almost all samples have equality at every time and hence at any particular time. When $I$ is countable, $M$-equivalence also implies $D$-equivalence, because the event "$X_t=Y_t$ for every $t \in I$" is the intersection of the events "$X_{t_k}=Y_{t_k}$" over $t_k \in I$, and each of these has probability $1$. But when $I$ is uncountable (as in problems in continuous time), we may have $M$-equivalence but not $D$-equivalence. To see this, suppose $X_t,Y_t$ are $M$-equivalent and let $N(t)$ be the event that $X_t \neq Y_t$. Then $N(t)$ has probability zero. (If you like, this is just saying that $\int_t^t dx = 0$.) Then we are in $N=\bigcup_{t \in I} N(t)$ if there is some $t$ such that $X_t \neq Y_t$. Now $N$ is an uncountable union of sets of probability zero. So it might have positive probability, or it might not even be measurable. Let's imagine sampling from the example from Oksendal. Pick a $t_1 \in [0,1]$, now the random time will be $t_1$ only with probability zero. So the fraction of samples with $X_{t_1} \neq Y_{t_1}$ will get smaller as we take more and more samples. But if we look at the whole trajectory instead, then at some random time we will always have $X_t \neq Y_t$, in every single sample. We will never see the exact same trajectory from both.
How to calculate a big combination $\binom nr$
You can use Stirling’s approximation; even the simplest form, $$n!\approx\sqrt{2\pi n}\left(\frac{n}e\right)^n\;,$$ is fairly good. In your example you get $$\begin{align*} \binom{10^{30}}{10^{10}}&\approx\frac{\sqrt{2\cdot10^{30}\pi}\left(\frac{10^{30}}e\right)^{10^{30}}}{\sqrt{2\cdot10^{10}\pi}\left(\frac{10^{10}}e\right)^{10^{10}}\cdot\sqrt{2\left(10^{30}-10^{10}\right)\pi}\left(\frac{10^{30}-10^{10}}e\right)^{10^{30}-10^{10}}}\\\\ &=\frac1{\sqrt{2\pi}}\cdot\frac{10^{3\cdot10^{31}-10^{11}+10}}{\left(10^{30}-10^{10}\right)^{10^{30}-10^{10}+\frac12}} \end{align*}$$ for instance, if I didn’t make another silly algebra error.
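In practice one would compute the logarithm of the binomial coefficient directly via the log-gamma function; a short sketch (the helper name is mine, not from the answer):

```python
from math import lgamma, log

def log10_binom(n, r):
    # log10 of C(n, r), computed with the log-gamma function
    # (a floating-point approximation; avoids the huge factorials).
    return (lgamma(n + 1) - lgamma(r + 1) - lgamma(n - r + 1)) / log(10)

print(log10_binom(1e30, 1e10))  # the power of 10 in C(10^30, 10^10)
```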
How to show x and y are equal?
A differentiable function with everywhere positive derivative is injective. The derivative of $f$ is $f'(x)=3x^2-12x+12=3(x^2-4x+4)=3(x-2)^2$. This is positive for $x\ne2$. To see that the zero derivative at $x=2$ doesn't destroy injectivity, integrate this to find $f(x)=(x-2)^3+C$ (with $C=1$). Thus $f$ is a shifted version of $x^3$, which is injective.
How do you express the Frobenius norm of a Matrix as the squared norm of its singular values?
$\sum_{i}\sigma_i^2=Trace(\Lambda \Lambda^T)$ where $M=U\Lambda V^T$. Then, $$\|M\|_F^2=Trace(MM^T)=Trace(U\Lambda V^TV\Lambda^T U^T)=Trace(U\Lambda \Lambda^TU^T)=Trace(\Lambda\Lambda^T U^T U)=Trace(\Lambda\Lambda^T)$$
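A numerical check with NumPy (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 3))
s = np.linalg.svd(M, compute_uv=False)    # singular values of M
print(np.sum(s**2))                       # sum of squared singular values
print(np.linalg.norm(M, 'fro')**2)        # squared Frobenius norm; equal
```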
Sufficient conditions on degrees of vertices for existence of a tree
You can't just compare the terms like that. The induction hypothesis is: For any $d_1, d_2, \dots d_n$ such that $$d_1 + d_2 + \dots + d_n = 2n-2$$ there is a tree with those degrees. The statement you wish to prove is: For every $e_1, e_2, \dots e_{n+1}$ such that $$e_1 + e_2 + \dots + e_{n+1} = 2(n+1) - 2$$ there is a tree with those degrees. Note that these need not be the same as the $d_i$. In fact the $d_i$ and $e_i$ each refer to a whole family of sequences and not just a single sequence. The $e_i$ are arbitrary and you cannot assume any connection with the $d_i$. In short, your proof is incorrect.
Is there a function with two different pseudo-primitives? (Motivated by FTC)
As requested by the OP I repost my comment as an answer. Let $F$ and $G$ be two pseudo-primitives according to the definition given in the question, clearly the fundamental theorem of calculus applies to them. We claim that the quantity $F(x) - G(x)$ is constant for any $x \in [a,b]$. To show it let $x <y$ be elements of $[a,b]$, then, by the fundamental theorem of calculus $$F(x) - F(y) = \int_y^x f(t) \, dt = G(x) - G(y).$$ Thus $F(x) - G(x) = F(y) - G(y)$ as claimed.
Minkowski Functional Evaluated at a Point Outside the Convex Set
It's not necessarily true that $z\in p_K(z)\cdot K$; consider the open ball of unit radius for $K$ and $z=\langle 1,0,0,\ldots\rangle$. However, since $K$ is a convex body containing $\mathbf{0}$, then $p\leq q\implies p\cdot K\subseteq q\cdot K$ (can you see why?), and so $p\gt p_K(z)\implies z\in p\cdot K$ (likewise, make sure you can understand why this is so!). So if $p_K(z)\lt 1$, then...
Solving for three unknowns given three linear equations
You have three unknowns and three equations. Start by solving for one variable in one equation, say $a$ from the first one, then plug it into the second one and solve for another variable. For example:
(first equation): $a = 3 - b - c$
(plug into second): $4(3 - b - c) + 2b + c = 6$
$12 - 4b - 4c + 2b + c = 6$
$-2b - 3c = -6$
(solve for another variable): $c = (-6 + 2b) / (-3) = (6 - 2b)/3$
(use the results above and plug into third equation): $9(3 - b - (6 - 2b)/3)+3b+(6 - 2b)/3=13$
From this equation you can solve for $b$. Continue this until you solve for all variables. And so you can check your calculations: $a = 2$, $b = -3$, $c = 4$
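The same system can be checked in one line with NumPy; the coefficient matrix below is the one implied by the worked steps above (my reconstruction):

```python
import numpy as np

# a + b + c = 3,  4a + 2b + c = 6,  9a + 3b + c = 13
A = np.array([[1, 1, 1], [4, 2, 1], [9, 3, 1]], dtype=float)
rhs = np.array([3, 6, 13], dtype=float)
print(np.linalg.solve(A, rhs))  # [ 2. -3.  4.]
```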
Show that $\frac{(b+c)^2} {3bc}+\frac{(c+a)^2}{3ac}+\frac{(a+b)^2}{3ab}=1$
$a+b+c=0\iff b+c=-a\implies (b+c)^2=a^2$ $\implies\dfrac{(b+c)^2}{3bc}=\dfrac{a^3}{3abc}$ Actually, $b+c=-a\implies(b+c)^3=(-a)^3$ $\implies -a^3=b^3+c^3+3bc(b+c)=b^3+c^3+3bc(-a)\iff\sum a^3=3abc$
Matrix multiplication computation
First instinct would be to multiply on the left and right by inverses to get $A$ on its own on the LHS. However the rightmost of these matrices is not invertible. But we can get $$ A \times \left[\begin{array}{ccc}-4 & 5 & 1\\-4 & 5 & 1\\-4 & 5 & 1 \end{array}\right] = \left[\begin{array}{ccc} 235 & 0 & -599 \\ -\frac{247}{2} & \frac{15}{2} & \frac{629}{2} \\ -\frac{343}{2} & \frac{15}{2} & \frac{761}{2}\end{array}\right] $$ The system of equations for the components of $A$ is over-determined. This means that, letting $$ A = \left[ \begin{array}{ccc} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3\end{array}\right] $$ we see (for instance) that this requires $$ -4 (a_1 + a_2 + a_3) = 235 \\ \implies a_1 + a_2 + a_3 = -\frac{235}{4} $$ but $$ 5 (a_1 + a_2 + a_3) = 0 \\ \implies a_1 + a_2 + a_3 = 0 $$ which is clearly contradictory. So no solution exists.
Prove that set has zero Jordan content iff its closure has measure 0
I will use $m$ to denote the Jordan measure and $\lambda$ to denote the Lebesgue measure. Note that for rectangles $R$, we have $m R = \lambda R$. I believe that the statement should be for bounded sets. If a set has Jordan Content zero, then it is automatically bounded. It follows that any unbounded set cannot have Jordan Content zero. For example, $\mathbb{Z}$ is closed and has Lebesgue measure zero, but cannot have Jordan Content zero. Hence I will assume that we are talking about bounded sets. Suppose $A$ has Jordan Content zero. Then for any $\epsilon>0$, there is a finite collection of rectangles $R_i$ such that $A \subset \cup_i R_i$ and $\sum_i m R_i < \epsilon$. Since $m R_i = m \overline{ R_i}$, we can take the rectangle to be closed, hence $C = \cup_i \overline{ R_i}$ is closed. In particular $\overline{A} \subset C$. Now let $\epsilon = \frac{1}{n}$, and let $C_n = \cup_i \overline{R_i^{(n)}}$ be the corresponding closed set, then $\overline{A} \subset C_n$ and $\sum_i m R_i^{(n)} = \sum_i \lambda R_i^{(n)} < \frac{1}{n}$. Then we have $\overline{A} \subset \cap_n C_n$, and so $\lambda \overline{A} \le \lambda C_n \le \sum_i \lambda R_i^{(n)} < \frac{1}{n}$ for all $n$. Hence $\lambda \overline{A} =0$. Now suppose $\lambda \overline{A} = 0$. There is no loss of generality in assuming that $A$ is closed (since a cover of the closure certainly is a cover of the original set). Since $A$ is bounded, we see that $A$ is, in fact, compact. I need to show that for any $\epsilon>0$, I can cover $A$ by a finite collection of rectangles $R_i$ such that $\sum_i m R_i < \epsilon$. Since $\lambda A = 0$, there is a countable collection of rectangles $R_i$ such that $A \subset \cup_i R_i$ and $\sum \lambda R_i < \frac{1}{2}\epsilon$. For each $i$, we can find an open rectangle $O_i$ such that $R_i \subset O_i$ and $\lambda O_i \le \lambda R_i + \frac{1}{2^{i+2}} \epsilon$. Hence $\sum \lambda O_i < \epsilon$. Since $A$ is compact, and $O_i$ form an open cover, there is a finite subcover $O_{k_1},...,O_{k_m}$, and clearly $\sum_{i=1}^m m O_{k_i} = \sum_{i=1}^m \lambda O_{k_i} \le \sum_i \lambda O_i < \epsilon$. Hence $A$ has Jordan content zero.
conversion of Binomial identity into series sum
We put $$S_n(x) = \sum_{p=1}^n \frac{1}{p} {n\choose p} (-1)^{p+1} (1-x)^p.$$ Working first with the coefficient on $[x^q]$ where $1\le q\le n$ we see that it is $$\sum_{p=q}^n (-1)^{p+1} \frac{1}{p} {n\choose p} {p\choose q} (-1)^q.$$ Now $${n\choose p} {p\choose q} = \frac{n!}{(n-p)! \times q! \times (p-q)!} = {n\choose q} {n-q\choose n-p}$$ so we find $$ (-1)^q {n\choose q} \sum_{p=q}^n (-1)^{p+1} \frac{1}{p} {n-q\choose n-p} \\ = (-1)^q {n\choose q} \sum_{p=0}^{n-q} (-1)^{p+q+1} \frac{1}{p+q} {n-q\choose n-p-q} \\ = {n\choose q} \sum_{p=0}^{n-q} (-1)^{p+1} \frac{1}{p+q} {n-q\choose p}.$$ Introducing $$f(z) = \frac{(-1)^{n-q+1} (n-q)!}{z+q} \prod_{k=0}^{n-q} \frac{1}{z-k}$$ we have $$\sum_{p=0}^{n-q} \mathrm{Res}_{z=p} f(z) = \sum_{p=0}^{n-q} \frac{(-1)^{n-q+1} (n-q)!}{p+q} \prod_{k=0}^{p-1} \frac{1}{p-k} \prod_{k=p+1}^{n-q} \frac{1}{p-k} \\ = \sum_{p=0}^{n-q} \frac{(-1)^{n-q+1} (n-q)!}{p+q} \frac{1}{p!} (-1)^{n-q-p} \frac{1}{(n-q-p)!} \\ = \sum_{p=0}^{n-q} (-1)^{p+1} \frac{1}{p+q} {n-q\choose p}.$$ This is the target sum omitting the binomial coefficient in front. Now the residue at infinity of $f(z)$ is clearly zero and hence the sum must be (residues sum to zero) $$- \mathrm{Res}_{z=-q} f(z) = - (-1)^{n-q+1} (n-q)! \prod_{k=0}^{n-q} \frac{1}{-q-k} \\ = - (n-q)! \prod_{k=0}^{n-q} \frac{1}{q+k} = -(n-q)! \frac{(q-1)!}{n!}.$$ Restoring the binomial coefficient in front we thus have $$[x^q] S_n(x) = -{n\choose q} (n-q)! \frac{(q-1)!}{n!}$$ or alternatively $$\bbox[5px,border:2px solid #00A000]{ [x^q] S_n(x) = - \frac{1}{q},}$$ as claimed. Continuing with the constant coefficient we find $$[x^0] S_n(x) = \sum_{p=1}^n \frac{1}{p} {n\choose p} (-1)^{p+1} [x^0] (1-x)^p = \sum_{p=1}^n \frac{1}{p} {n\choose p} (-1)^{p+1}.$$ Using the same technique as before we introduce $$g(z) = \frac{(-1)^{n+1} n!}{z} \prod_{k=0}^{n} \frac{1}{z-k}$$ We get $$\sum_{p=1}^n \mathrm{Res}_{z=p} g(z) = \sum_{p=1}^n \frac{(-1)^{n+1} n!}{p} \prod_{k=0}^{p-1} \frac{1}{p-k} \prod_{k=p+1}^{n} \frac{1}{p-k} \\ = \sum_{p=1}^n \frac{(-1)^{n+1} n!}{p} \frac{1}{p!} (-1)^{n-p} \frac{1}{(n-p)!} \\ = \sum_{p=1}^n \frac{1}{p} {n\choose p} (-1)^{p+1}.$$ This is the target sum. Now the residue at infinity is zero so this sum must be equal to $$- \mathrm{Res}_{z=0} g(z) = - \mathrm{Res}_{z=0} \frac{(-1)^{n+1} n!}{z^2} \prod_{k=1}^{n} \frac{1}{z-k} \\ = (-1)^{n} n! \left.\left( \prod_{k=1}^{n} \frac{1}{z-k}\right)'\right|_{z=0} \\ = (-1)^{n} n! \left.\left( \prod_{k=1}^{n} \frac{1}{z-k} \sum_{k=1}^n \frac{1}{k-z} \right)\right|_{z=0} = (-1)^n n! \times \frac{(-1)^n}{n!} \sum_{k=1}^n \frac{1}{k}.$$ We have shown that (with harmonic numbers) $$\bbox[5px,border:2px solid #00A000]{ [x^0] S_n(x) = \sum_{k=1}^n \frac{1}{k} = H_n,}$$ which concludes the argument. If desired we may write this as $$S_n(x) = \sum_{k=1}^n \frac{1}{k} - \sum_{q=1}^n \frac{1}{q} x^q$$ or $$\bbox[5px,border:2px solid #00A000]{ S_n(x) = \sum_{q=1}^n \frac{1}{q} (1 - x^q).}$$
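The boxed identity is easy to test numerically for small $n$ (a throwaway verification, not part of the derivation):

```python
from math import comb

def S(n, x):
    # S_n(x) = sum_{p=1}^n (1/p) C(n,p) (-1)^(p+1) (1-x)^p
    return sum(comb(n, p) * (-1) ** (p + 1) * (1 - x) ** p / p
               for p in range(1, n + 1))

def rhs(n, x):
    # sum_{q=1}^n (1 - x^q)/q
    return sum((1 - x ** q) / q for q in range(1, n + 1))

n, x = 7, 0.3
print(S(n, x), rhs(n, x))  # the two values agree
```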
Is the following condition NASC for connectivity of a graph?
A fact about adjacency matrices is that $A^k_{i, j}$ counts the number of walks of length exactly $k$ between vertices $i$ and $j$, see here: Proof - raising adjacency matrix to $n$-th power gives $n$-length walks between two vertices ("Walk" is not the same as "path", for definition see here: https://en.wikipedia.org/wiki/Path_(graph_theory)#Walk,_trail,_path). Therefore $(A + A^2 + \dots + A^n)_{i, j}$ is the number of all walks between vertices $i, j$ of lengths $1, \dots, n$. Notice that if a graph has $n$ vertices and there is no walk between $i$ and $j$ of length at most $n$, then there is no walk of any length, as a longer walk would have to repeat vertices, implying the existence of a shorter walk between $i$ and $j$. Therefore $(A + A^2 + \dots + A^n)_{i, j} = 0$ is equivalent to the fact that there is no walk between vertices $i$ and $j$, so the graph must be disconnected. Naturally it works both ways: if $i, j$ lie in different components of the graph, there is no walk between them of any length $k$, so $(A + A^2 + \dots + A^k)_{i, j} = 0$ for every $k$. As entries in the adjacency matrix are nonnegative, this implies that $(A + A^2 + \dots + A^n)_{i, j} = 0$. It follows that there are no $0$ entries in $(A + A^2 + \dots + A^n)$ iff the graph is connected.
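A direct NumPy translation of this criterion (a sketch; `is_connected` is my name for it):

```python
import numpy as np

def is_connected(A):
    # Form A + A^2 + ... + A^n and check that no entry is zero.
    n = len(A)
    total = np.zeros_like(A)
    power = np.eye(n, dtype=A.dtype)
    for _ in range(n):
        power = power @ A
        total = total + power
    return bool(np.all(total > 0))

P3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # path on 3 vertices
D = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]])   # edge plus isolated vertex
print(is_connected(P3), is_connected(D))  # True False
```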
Please help with a general solution of a functional equation involving projections
In a Hilbert space $\mathcal{H}$, $R$ and $S$ are bounded operators and hence are elements of $\mathcal{B(H)}$, the space of bounded operators on $\mathcal{H}$. But then, assuming linearity, $f$ can be chosen to be a linear functional on $\mathcal{B(H)}$. Particular such functionals are the states $\Phi (A)$, $A\in \mathcal{B(H)}$, positive functionals of norm 1 (https://en.wikipedia.org/wiki/State, https://en.wikipedia.org/wiki/Trace_class). These states decompose into normal states, elements of the pre-dual of $\mathcal{B(H)}$, which is the trace class $\mathcal{B}_{1}(\mathcal{H})$, and more general elements from the dual of $\mathcal{B(H)}$. A trace class state can be written as \begin{equation*} \Phi (A)=\mathrm{Trace}\,\rho A,\;\rho \in \mathcal{B}_{1}(\mathcal{H}),\;\rho >0,\;\mathrm{Trace}\,\rho =1, \end{equation*} which is the formula you mention. There are some problems. In case the multiplicity of $R$ is infinite the trace does not exist. Secondly, how do we exclude more general states?
About the definition of complex multiplication
Let $z_1 = a, z_2 = bi, z_3 = c, z_4 = di$. Then the rule \begin{equation*} \tag{$\spadesuit$}(z_1 + z_2)(z_3 + z_4) = z_1 z_3 + z_1 z_4 + z_2 z_3 + z_2 z_4 \end{equation*} tells us that \begin{align*} (a + bi)(c + di) &= ac + adi + bci + bd i^2 \\ &= ac - bd + (ad + bc)i. \end{align*} So if we want ($\spadesuit$) to be true, we are forced to use the standard formula for multiplication of complex numbers.
$d^2y/dx^2$ expressed in $t$
How the heck did you jump from $\frac{dy}{dx}$ to $\frac{d^2y}{dx^2}$ just by putting a "dt" on the end? It looks to me like you calculated $\frac{d}{dt}\left(\frac{dy}{dx}\right)$ NOT $\frac{d^2y}{dx^2}= \frac{d}{dx}\left(\frac{dy}{dx}\right)$.
Doubt in the proof of Lemma to prove No Retraction Theorem
Suppose $f\circ g={\rm Id}$ and assume that $g(x)=g(y)$. Then $$x={\rm Id}(x)=f\circ g(x)=f\circ g(y)={\rm Id}(y)=y.$$
Find all matrices that satisfy $\mathrm B \mathrm A = \mathrm I_2$
Since the second column is not a multiple of the first column, the matrix $\pmatrix{1 & 8\cr 3 & 5\cr 2 & 2\cr}$ has rank $2$. That implies that a solution exists. One way to get a solution is to take any two rows of $A$ (check that they are linearly independent) and use the inverse of that $2 \times 2$ matrix for the corresponding columns of $B$, and $0$'s in the other column. Thus if you take the first two rows of $A$, the inverse of $\pmatrix{1 & 8\cr 3 & 5\cr}$ is $\pmatrix{-5/19 & 8/19\cr 3/19 & -1/19\cr}$, corresponding to $B = \pmatrix{-5/19 & 8/19 & 0\cr 3/19 & -1/19 & 0\cr}$. For the general solution, write the system using block matrices as $$ [ B_1 \ b ] \left[\matrix{A_1\cr a^T\cr} \right] = B_1 A_1 + b a^T = I$$ (where $B_1$ and $A_1$ are $2 \times 2$, $b$ is $2 \times 1$ and $a^T$ is $1 \times 2$). We can solve for $B_1$ in terms of $b$: $B_1 = (I - b a^T) A_1^{-1}$. We already have $A_1^{-1} = \pmatrix{-5/19 & 8/19 \cr 3/19 & -1/19 \cr}$, so with $b = \pmatrix{b_1\cr b_2}$ and $a^T = (2, 2)$ we get $$ B_1 = \pmatrix{1 - 2 b_1 & - 2 b_1\cr -2 b_2 & 1 - 2 b_2\cr} \pmatrix{-5/19 & 8/19 \cr 3/19 & -1/19 \cr} = \pmatrix{ (-5 + 4 b_1)/19 & (8 - 14 b_1)/19\cr (3 + 4 b_2)/19 & (-1 - 14 b_2)/19\cr}$$ i.e. $$ B = \pmatrix{ (-5 + 4 b_1)/19 & (8 - 14 b_1)/19 & b_1\cr (3 + 4 b_2)/19 & (-1 - 14 b_2)/19 & b_2\cr}$$ where $b_1$ and $b_2$ are arbitrary.
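A numerical check of the general solution, assuming NumPy (verification script only):

```python
import numpy as np

A = np.array([[1, 8], [3, 5], [2, 2]], dtype=float)

def B(b1, b2):
    # The two-parameter family of left inverses derived above.
    return np.array([[(-5 + 4*b1)/19, (8 - 14*b1)/19, b1],
                     [( 3 + 4*b2)/19, (-1 - 14*b2)/19, b2]])

for b1, b2 in [(0, 0), (1, -2), (0.5, 3)]:
    print(np.allclose(B(b1, b2) @ A, np.eye(2)))  # True for every choice
```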
Arithmetic series problem
\begin{align}a_1&=2\\a_{n+1}&=a_n+2n\\&=a_{n-1}+2[n+(n-1)]\\&=a_{n-2}+2[n+(n-1)+(n-2)]\\ &= \ldots\\ &=a_1+2[n+(n-1)+\ldots +1]\end{align} Hence \begin{align}a_{50}&=a_1+2(49+\ldots + 1)\\&=2+2\cdot \frac{49(50)}{2}\\&=2+49(50)\\&=2+(50-1)(50)\\&=2+2500-50\\&=2452 \end{align} Remark: this is not an AP; in an AP, the difference between consecutive terms is constant.
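The recurrence can also be iterated directly (a three-line check):

```python
a = 2                     # a_1
for n in range(1, 50):    # apply a_{n+1} = a_n + 2n for n = 1, ..., 49
    a += 2 * n
print(a)  # 2452, matching the closed-form computation
```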
Convergence of $\sum_{n=1}^{\infty} \frac{\sin(n!)}{n}$
Here is a proof that the answer is (almost certainly) not provable using current techniques. We will prove that the series in fact diverges if $2\pi e$ is a rational number with a prime numerator. We first prove the following claims: Lemma 1. If $p$ is an odd prime number and $S\subset \mathbb Z$ so that $$\sum_{s\in S}e^{2\pi i s/p}\in\mathbb R,$$ then $\sum_{s\in S}s\equiv 0\bmod p$. Proof. Let $\zeta=e^{2\pi i/p}$. We have $$\sum_{s\in S}\zeta^s=\sum_{s\in S}\zeta^{-s},$$ since the sum is its own conjugate. As a result, since the minimal polynomial of $\zeta$ is $\frac{\zeta^p-1}{\zeta-1}$, we see $$\frac{x^p-1}{x-1}\bigg|\sum_{s\in S}\left(x^{p+s}-x^{p-s}\right),$$ where we have placed each element of $s$ in $[0,p)$. The polynomial on the left is coprime with $x-1$ and the polynomial on the right has it as a factor, so $$\frac{x^p-1}{x-1}\bigg|\sum_{s\in S}\left(x^{p+s-1}+\cdots+x^{p-s}\right).$$ Now, the quotient of these two polynomials must be an integer polynomial, so in particular the value of the left-side polynomial at $1$ must divide the value of the right-side polynomial at $1$. This gives $p|\sum_{s\in S}2s,$ finishing the proof. Define $$a_n=\sum_{k=0}^n \frac{n!}{k!}.$$ Lemma 2. If $p$ is a prime number, $$\sum_{n=0}^{p-1}a_n\equiv -1\bmod p.$$ Proof. \begin{align*} \sum_{n=0}^{p-1}a_n &=\sum_{0\leq k\leq n\leq p-1}\frac{n!}{k!}\\ &=\sum_{0\leq n-k\leq n\leq p-1}(n-k)!\binom n{n-k}\\ &=\sum_{j=0}^{p-1}\sum_{n=j}^{p-1}n(n-1)\cdots(n-j+1)\\ &\equiv \sum_{j=0}^{p-1}\sum_{n=0}^{p-1}n(n-1)\cdots(n-j+1)\pmod p, \end{align*} where we have set $j=n-k$. The inside sum is a sum of a polynomial over all elements of $\mathbb Z/p\mathbb Z$, and as a result it is $0$ as long as the polynomial is of degree less than $p-1$ and it is $-1$ for a monic polynomial of degree $p-1$. Since the only term for which this polynomial is of degree $p-1$ is $j=p-1$, we get the result. Now, let $2\pi e = p/q$. Define $\mathcal E(x)=e^{2\pi i x}$ to map from $\mathbb R/\mathbb Z$, and note that $\mathcal E(x+\epsilon)=\mathcal E(x)+O(\epsilon)$. We have \begin{align*} \sin((n+p)!) &=\operatorname{Im}\mathcal E\left(\frac{(n+p)!}{2\pi}\right)\\ &=\operatorname{Im}\mathcal E\left(\frac{qe(n+p)!}{p}\right). \end{align*} We will investigate $\frac{qe(n+p)!}{p}$ "modulo $1$." We see that \begin{align*} \frac{qe(n+p)!}{p} &=q\sum_{k=0}^\infty \frac{(n+p)!}{pk!}\\ &\equiv q\sum_{k=n+1}^\infty \frac{(n+p)!}{pk!}\pmod 1\\ &=O(1/n)+q\sum_{k=n+1}^{n+p}\frac{(n+p)!}{pk!}\\ &=O(1/n)+\frac qp\left[\sum_{k=n+1}^{n+p}\frac{(n+p)!}{k!}\pmod p\right]. \end{align*} Now, \begin{align*} \sum_{k=n+1}^{n+p}\frac{(n+p)!}{k!}=\sum_{j=0}^{p-1}\frac{(n+p)!}{(n+p-j)!} &=\sum_{j=0}^{p-1}(n+p)(n+p-1)\cdots(n+p-j+1)\\ &\equiv \sum_{j=0}^{p-1}m(m-1)\cdots (m-j+1)\pmod p, \end{align*} where $m$ is the remainder when $n$ is divided by $p$. The terms with $j>m$ in this sum go to $0$, giving us $$\sum_{j=0}^m \frac{m!}{(m-j)!}=a_m.$$ Putting this together, we see that $$\sin((n+p)!)=\operatorname{Im}\mathcal E\left(\frac{qa_{n\bmod p}}p\right)+O\left(\frac 1n\right).$$ In particular, the convergence of our sum would imply, since the $O(1/n)$ terms give a convergent series when multiplied by $O(1/n)$, that $$x_N=\operatorname{Im}\sum_{n=1}^N\frac 1n\mathcal E\left(\frac{qa_{n\bmod p}}p\right)$$ should converge. In particular, $\{x_{pN}\}$ must converge, which implies that $$\sum_{m=0}^{p-1}\mathcal E\left(\frac{qa_m}p\right)$$ must be real (as otherwise the series diverges like the harmonic series). 
By Lemma 1 applied to the multiset $\{qa_m \bmod p\}$, this implies $$q\sum_{m=0}^{p-1}a_m\equiv 0\pmod p,$$ and since $\gcd(q,p)=1$, that $\sum_{m=0}^{p-1}a_m\equiv 0\pmod p$, which contradicts Lemma 2.
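Lemma 2 is a finite congruence for each fixed $p$, so it can be sanity-checked numerically. A quick illustrative sketch in Python (not part of the proof):

```python
# Illustrative sanity check (not part of the proof):
# sum_{n=0}^{p-1} a_n == -1 (mod p), with a_n = sum_{k<=n} n!/k!.
from math import factorial

def a(n):
    return sum(factorial(n) // factorial(k) for k in range(n + 1))

for p in [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]:
    assert sum(a(n) for n in range(p)) % p == p - 1, p
print("Lemma 2 holds for the tested primes")
```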
Quadratic modular equation
Hint: we have $x^2+ax+n\equiv x^2+ax\equiv x(x+a)\bmod n$. Hence certainly $x=0$ and $x=-a$ are roots in the ring $\mathbb{Z}/n$. We can use the Chinese remainder theorem to find all roots: first find the roots modulo $p$ and modulo $q$ separately, then combine them. Since $p,q$ are primes, a quadratic equation over the field $\mathbb{Z}/p$ (resp. $\mathbb{Z}/q$) has at most two roots.
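A minimal sketch of this procedure (the names and the sample values of $p,q,a$ are my own, purely for illustration):

```python
# Find all roots of x^2 + a*x + n ≡ 0 (mod n), n = p*q, by collecting
# roots mod p and mod q and gluing them with the Chinese remainder theorem.
from itertools import product

def roots_mod(a, m):
    # roots of x^2 + a*x (mod m); brute force is fine for small m
    return [x for x in range(m) if (x * x + a * x) % m == 0]

def crt(r1, m1, r2, m2):
    # combine x ≡ r1 (mod m1) and x ≡ r2 (mod m2) for coprime m1, m2
    return (r1 + m1 * ((r2 - r1) * pow(m1, -1, m2) % m2)) % (m1 * m2)

p, q, a = 5, 7, 3          # hypothetical example values
roots = sorted(crt(rp, p, rq, q) for rp, rq in
               product(roots_mod(a, p), roots_mod(a, q)))
print(roots)               # [0, 7, 25, 32]; contains 0 and (-3) % 35 == 32
```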
Find $a$ given $a \times b = c$ and $a \cdot d = e$ where $a,b,c,d$ are vectors and $e$ is a scalar
If the solution exists it is unique: we cannot add to $a$ anything parallel to $b$ (to preserve the cross product) yet orthogonal to $d$ (to preserve the dot product), provided $b\cdot d\neq 0$. Since $a\perp c$, take the Ansatz $a=(Xb+Yd)\times c$, so$$c=((Xb+Yd)\times c)\times b=X[b^2c-(b\cdot c)b]+Y[(b\cdot d)c-(b\cdot c)d].$$But $b\cdot c=0$, so $c=(b^2X+(b\cdot d)Y)c$, which simplifies to $1=b^2X+(b\cdot d)Y$. Finally,$$e=Xb\times c\cdot d\implies X=\frac{e}{b\times c\cdot d},\,Y=\frac{1-b^2X}{b\cdot d}.$$
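A quick numerical sanity check of the closed form (the vectors below are made up for illustration):

```python
# Reconstruct a from c = a x b and e = a . d via the formula above.
import numpy as np

b = np.array([1.0, 2.0, -1.0])
d = np.array([0.5, -1.0, 2.0])
a_true = np.array([2.0, 0.0, 1.0])      # pretend this is the unknown
c = np.cross(a_true, b)                 # given data: a x b = c
e = float(a_true @ d)                   # given data: a . d = e

X = e / float(np.cross(b, c) @ d)
Y = (1.0 - float(b @ b) * X) / float(b @ d)
a = np.cross(X * b + Y * d, c)          # a = (X b + Y d) x c
print(np.allclose(a, a_true))           # True
```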
Two different outcomes (-1 and 1) for Legendre symbol?
$34=54^2\pmod{131}$, so $34$ is a quadratic residue mod $131$. Since $131\equiv 3\pmod 4$, $-1$ is a non-residue, and $97\equiv -34\pmod{131}$; hence $97$ is not a quadratic residue of $131$. I think quadratic reciprocity only works if both numbers are odd primes.
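Both facts can be confirmed with Euler's criterion, $x^{(p-1)/2}\equiv\pm1\pmod p$ according as $x$ is a residue or not; a quick check:

```python
# Euler's criterion for p = 131 (illustrative check).
p = 131
print(pow(54, 2, p))              # 34
print(pow(34, (p - 1) // 2, p))   # 1   -> 34 is a quadratic residue
print(pow(97, (p - 1) // 2, p))   # 130 -> i.e. -1, so 97 is a non-residue
```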
Second Partial Derivative Test using Hessian Determinant
Theorem. A twice differentiable real-valued function defined on an open subset of $\mathbf{R}^d$ has mixed partials that coincide (that is to say, if $u$ is a twice-differentiable function then $u_{x,y} = u_{y, x}$). The proof of this stronger version is more difficult than the one usually given (where $u$ is assumed to have continuous second-order partial derivatives). It can be found in Foundations of Modern Analysis, by Jean Dieudonné, chapter 8. So, "well-behaved" function simply means twice-differentiable.
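Dieudonné's sharper version can't be verified mechanically, but for the smooth case the symmetry is easy to exercise symbolically; a throwaway sketch:

```python
# Symbolic illustration of u_xy = u_yx (sympy assumes enough smoothness,
# so this only exercises the easy case of the theorem).
import sympy as sp

x, y = sp.symbols('x y')
u = sp.exp(x * y) * sp.sin(x + y**2)
print(sp.simplify(sp.diff(u, x, y) - sp.diff(u, y, x)))  # 0
```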
Surface area of a cylinders and prisms
In order to find the surface area of a cylinder, add the areas of the two circles on the top and bottom to the lateral surface area of the side. (For the lateral surface, note that it unrolls into a rectangle whose width is the circumference of the circle and whose height is that of the cylinder.)
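Putting this together for a cylinder of radius $r$ and height $h$ (the side unrolls into a $2\pi r\times h$ rectangle):
$$A = 2\pi r^2 + 2\pi r h = 2\pi r(r+h).$$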
How to prove this is a tautology
Try breaking it up in words. The last two conditions are: "there is an $x$ such that $p(x)$ is false" and "there is no $x$ such that $q(x)$ is true". You need to show that if neither of these conditions holds (that is, both of these statements are false), then necessarily the first one ("for every $x$, $p(x)$ is true and $q(x)$ is false") must be true. Can you see how to prove that?
Spherical Harmonic Identity
First of all, I forgot a piece of the derivative, i.e. $$\partial_\theta Y_{\ell,m}(\theta,\phi)=m \cot\theta\, Y_{\ell,m}(\theta,\phi)+e^{-i\phi}\sqrt{(\ell-m)(\ell+m+1)}\,Y_{\ell,m+1}(\theta,\phi).$$ Using this result, I simply define $$\partial_\theta Y_{\ell,m}\equiv\frac{1}{2}(\partial_\theta Y_{\ell,+m}+\partial_\theta Y_{\ell,-m}).$$ Now as above, we have $$\partial_\theta Y_{\ell,+m}(\theta,\phi)=m \cot\theta\, Y_{\ell,m}(\theta,\phi)+e^{-i\phi}\sqrt{(\ell-m)(\ell+m+1)}\,Y_{\ell,m+1}(\theta,\phi).$$ Using this result, we do the other sign, via $Y_{\ell,-m}=(-1)^m Y_{\ell,m}^*$. \begin{align} \partial_\theta Y_{\ell,-m}(\theta,\phi)&=-m \cot\theta\, Y_{\ell,-m}(\theta,\phi)+e^{-i\phi}\sqrt{(\ell+m)(\ell-m+1)}\,Y_{\ell,1-m}(\theta,\phi)\nonumber\\ (-1)^m\partial_\theta Y_{\ell,+m}^*(\theta,\phi)&=-(-1)^m m \cot\theta\, Y_{\ell,+m}^*(\theta,\phi)+(-1)^{m-1}e^{-i\phi}\sqrt{(\ell+m)(\ell-m+1)}\,Y_{\ell,m-1}^*(\theta,\phi)\nonumber\\ \partial_\theta Y_{\ell,m}^*(\theta,\phi)&=-m \cot\theta\, Y_{\ell,m}^*(\theta,\phi)-e^{-i\phi}\sqrt{(\ell+m)(\ell-m+1)}\,Y_{\ell,m-1}^*(\theta,\phi)\nonumber\\ \partial_\theta Y_{\ell,m}(\theta,\phi)&=-m \cot\theta\, Y_{\ell,m}(\theta,\phi)-e^{i\phi}\sqrt{(\ell+m)(\ell-m+1)}\,Y_{\ell,m-1}(\theta,\phi)\nonumber \end{align} where in the last step I took the complex conjugate of both sides. The $m\cot\theta$ terms cancel in the average, and it is then easily seen that \begin{align} \partial_\theta Y_{\ell,m}&\equiv\frac{1}{2}(\partial_\theta Y_{\ell,+m}+\partial_\theta Y_{\ell,-m})\nonumber\\ &=\frac{1}{2}\left[e^{-i\phi}\sqrt{(\ell-m)(\ell+m+1)}\,Y_{\ell,m+1}(\theta,\phi)-e^{i\phi}\sqrt{(\ell+m)(\ell-m+1)}\,Y_{\ell,m-1}(\theta,\phi)\right] \end{align} as desired.
Conditioning on an event with probability close to one
For every $B$, $\mathbb P(B\mid A)-\mathbb P(B)=b(1-a)/a-c$ with $a=\mathbb P(A)$, $b=\mathbb P(B\cap A)$ and $c=\mathbb P(B\setminus A)$. Since $0\leqslant b\leqslant a$ and $0\leqslant c\leqslant 1-a$, $$ -(1-a)\leqslant -c\leqslant \mathbb P(B\mid A)-\mathbb P(B)\leqslant b(1-a)/a\leqslant 1-a. $$ The bound $1-a$ is achieved for $B=A$, hence $$ \sup\limits_{B\in\mathcal F}\,|\mathbb P(B\mid A)-\mathbb P(B)|=1-\mathbb P(A). $$
Solve the equation : $\tan \theta + \tan 2\theta + \tan 3\theta = \tan \theta \tan 2\theta \tan 3\theta $
You could have solved the problem using the tangent multiple angle formulae. Using $t=\tan(\theta)$, $$\tan(2\theta)=\frac{2t}{1-t^2}\qquad \tan(3\theta)=\frac{3t-t^3}{1-3t^2}.$$ This makes, after some minor simplifications, $$\tan \theta + \tan 2\theta + \tan 3\theta - \tan \theta \tan 2\theta \tan 3\theta=\frac{2 t \left(3-t^2\right)}{1-t^2}=0,$$ giving the solutions $t=0$ or $t=\pm\sqrt{3}$, i.e. $\theta=n\pi$ or $\theta=n\pi\pm\frac{\pi}{3}$, assuming that ${1-3t^2}\neq 0$ and ${1-t^2}\neq 0$.
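A quick numerical spot-check of the simplification (illustrative; the sample angles avoid the excluded values):

```python
# Check that the combined expression equals 2t(3 - t^2)/(1 - t^2).
import math

def lhs(th):
    return (math.tan(th) + math.tan(2*th) + math.tan(3*th)
            - math.tan(th) * math.tan(2*th) * math.tan(3*th))

def rhs(th):
    t = math.tan(th)
    return 2*t*(3 - t**2) / (1 - t**2)

for th in [0.1, 0.3, 0.7, 1.2]:
    assert abs(lhs(th) - rhs(th)) < 1e-9
print("identity checked at sample points")
```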
Find the point $D$ where the bisector of $\angle A$ meets side $BC$ of the triangle $ABC$
A simple (although possibly tedious) approach is to prove it algebraically. I will simply write lowercase letters for the position vectors. From the angle equality, $$ \frac{(d-a)\cdot (c-a)}{|c-a|}=\frac{(b-a) \cdot (d-a)}{|b-a|}.$$ Since $d-c$ and $b-d$ are collinear, $$ d-c = \delta(b-d), $$ $$ d = \frac{\delta b+ c}{1+\delta}$$ (this is a general result for a point lying on a line determined by two position vectors), so $$ d-a = \frac{\delta (b-a)+(c-a)}{1+\delta}.$$ Hence $$ \frac{(d-a)\cdot(c-a)}{|c-a|} = \frac{\delta (b-a)\cdot (c-a)}{(1+\delta)|c-a|}+\frac{|c-a|}{1+\delta},$$ $$ \frac{(b-a)\cdot(d-a)}{|b-a|} = \frac{\delta |b-a|}{(1+\delta)}+\frac{(c-a)\cdot (b-a)}{(1+\delta)|b-a|}.$$ Equate and solve for $\delta$ to get the first part. For the second part, we use a similar approach. For any two angle bisectors, the incentre will have the general form $$ \frac{a+\mu d}{1+\mu} = \frac{b+\eta e}{1+\eta}.$$ We get two equations to solve for two variables. This will give the result.
How can define irrational numbers in general
The Liouville numbers $$\sum_{k=1}^\infty \frac{a_k}{2^{k!}},$$ where $\{a_k\}$ is any binary sequence that is not eventually zero, already provide you with an uncountable family of irrational numbers.
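For concreteness, here is a small sketch computing partial sums of one such number exactly (taking $a_k=1$ for all $k$; purely illustrative):

```python
# Exact partial sums of sum_k 1/2^(k!); the terms shrink super-fast,
# which is exactly what makes the limit a Liouville number.
from fractions import Fraction
from math import factorial

s = Fraction(0)
for k in range(1, 5):
    s += Fraction(1, 2**factorial(k))
    print(k, s, float(s))
```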
Did I do the composition of $R_1$ and $R_2$ properly?
Both of your answers are correct.
Find the field by the its multiplicative group
The additive case is straightforward: every field is a vector space over its prime subfield (the subfield generated by $1$), which is either $\mathbb{Q}$ or $\mathbb{F}_p$ for some $p$, and in both cases fields exist which have every possible dimension as vector spaces. So the problem reduces to characterizing which abelian groups are vector spaces over a prime field (either $\mathbb{Q}$ or $\mathbb{F}_p$ for some $p$). Note that this is a property, not a structure. The abelian groups which are $\mathbb{Q}$-vector spaces are precisely those which are both divisible and torsion-free, and the abelian groups which are $\mathbb{F}_p$-vector spaces are precisely those for which every element has order (dividing) $p$, what group theorists call the elementary abelian $p$-groups. In the multiplicative case, every finite subgroup of the multiplicative group of a field is cyclic, which is a pretty strong restriction. For more discussion see this MO question.
Computing the longest element of the Weyl group
How are you describing the elements of these Weyl groups? Take $B_2$. I've drawn the standard picture below. I've picked a couple of simple reflections, $s_\alpha$ and $s_\beta$. They bound the fundamental Weyl chamber, which contains the point $v$. (Ok, I've added a square since I think of this group as symmetries of a square, but you can delete the square if you want.) I've applied the simple reflections $s_\alpha, s_\beta$ to $v$ in all possible ways, recording the shortest expression(s) for each point in the orbit. We can just read off that there is a unique longest expression, $s_\alpha s_\beta s_\alpha s_\beta(v) = s_\beta s_\alpha s_\beta s_\alpha(v)$. This is hence the longest element.
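If a picture is not at hand, the same orbit-and-words computation can be done by brute force. The sketch below is my own construction (with one standard choice of simple roots for $B_2$, an assumption not taken from the answer): it enumerates the group generated by the two simple reflections and reads off the longest element.

```python
# Enumerate W(B2) from its two simple reflections acting on R^2 and
# find the (unique) longest element via breadth-first search over words.
from collections import deque
from fractions import Fraction

def reflection(alpha):
    ax, ay = alpha
    n2 = Fraction(ax*ax + ay*ay)
    # matrix of v -> v - 2 (v.alpha)/(alpha.alpha) alpha
    return ((1 - 2*ax*ax/n2, -2*ax*ay/n2),
            (-2*ax*ay/n2, 1 - 2*ay*ay/n2))

def mul(A, B):
    return tuple(tuple(sum(A[i][k]*B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

# assumed simple roots for B2: alpha = (1, 0) (short), beta = (-1, 1) (long)
gens = {'a': reflection((1, 0)), 'b': reflection((-1, 1))}
identity = ((Fraction(1), Fraction(0)), (Fraction(0), Fraction(1)))

shortest = {identity: ''}
queue = deque([identity])
while queue:
    g = queue.popleft()
    for name, s in gens.items():
        h = mul(s, g)
        if h not in shortest:
            shortest[h] = name + shortest[g]
            queue.append(h)

longest = max(shortest.values(), key=len)
print(len(shortest), longest)   # 8 elements; longest word has length 4 (baba = abab)
```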
Beth fixed points and transitive models of ZFC minus Replacement
Claim. The following are equivalent for a limit ordinal $\alpha$:

1. $\alpha$ is a beth fixed point.
2. $V_\alpha$ thinks every set is equipotent with an ordinal.

For $1\to 2$, it suffices to show that for every $\beta<\alpha$, $V_\alpha$ thinks $V_\beta$ is equipotent with an ordinal. (This is because every set in $V_\alpha$ is a subset of some $V_\beta$, $\beta<\alpha$.) Let $f:V_\beta\to\beth_\beta$ be a bijection. Then $f$ is a subset of $V_\beta\times \beth_\beta$, which is a member of $V_\alpha$. Since $V_\alpha$ is closed under Cartesian products and power sets, $f\in V_\alpha$.

For $2\to 1$, observe that the assumption implies $|V_\beta|\in V_\alpha$ for every $\beta<\alpha$, so $\beth_\beta<\alpha$ for all $\beta<\alpha$, which means $\alpha$ is a beth fixed point.

I finally prove that the above characterization is equivalent to the validity of $\Sigma_1$-Replacement over $V_\alpha$:

Claim. If $\alpha$ is a beth fixed point, then $V_\alpha$ satisfies $\Sigma_1$-Replacement.

The main ingredient is the following version of the Levy reflection principle (provable by the same proof as the usual Levy reflection principle $H_\kappa\prec_{\Sigma_1} V$):

Theorem. Let $\lambda<\kappa$ be cardinals with $\lambda$ regular. Then $H_\lambda\prec_{\Sigma_1} V_\kappa$.

Moreover, it is known that $H_\lambda$ is a model of ZFC without Power Set if $\lambda$ is regular. Now let $F$ be a $\Sigma_1$-class function over $V_\alpha$ with a parameter $p$, and take $x\in V_\alpha$. Choose $\xi<\alpha$ such that $p,x\in V_\xi$. Since $\alpha$ is a beth fixed point, $\lambda:=|V_\xi|^+<\alpha$, and we can see that $V_\xi\subseteq H_\lambda\subseteq V_\alpha$. Observe that $F$ is absolute between $V_\alpha$ and $H_\lambda$, and that $H_\lambda$ satisfies Replacement for $F$. Let $H_\lambda\models F''[x]=y$ for some $y\in H_\lambda$, where $F''[x]$ denotes the image of $x$ under $F$. Since the formula $$[\forall v\in y\,\exists u\in x\, (F(u)=v)]\land [\forall u\in x\,\exists v\in y\, (F(u)=v)]$$ is a $\Sigma_1$-formula, it also holds over $V_\alpha$. This shows that $y$ witnesses the instance of Replacement for $F$, $x$ and $p$.
Where is the mistake in this reasoning?
The problem when $m$ and $n$ are not coprime is that some positions in the table cannot be filled; for instance, if $m=6$ and $n=10$, $a_{01}$ is undefined. Your argument correctly shows that the entries that can be defined must all be distinct.
Showing that an implicitly defined function is analytic on $(0,\infty)$
Note that $x\in (-1,1)$ here, since $|\tanh|<1$ always. Consider the function $$g(x) = \tanh^{-1} x -2\beta x,\quad -1<x<1$$ which is real analytic. (Indeed, $\tanh x$ is the quotient of two functions given by power series, and the inverse of a real analytic function is real analytic). Note that $g$ is strictly convex on $(0,1)$, since $\tanh$ is strictly concave on $(0,\infty)$. Therefore, either $g$ strictly increases from $g(0)=0$ onward, or it initially dips into negative territory and then strictly increases. Let $a$ be the largest value of $x$ for which $g(x)=0$. Restrict attention to the interval $(a,1)$, on which $g$ is strictly increasing by the above. This interval is mapped bijectively onto $(0,\infty)$. It remains to observe that $g^{-1}$ is your function; and as mentioned above, taking the inverse of a strictly monotone function preserves real-analyticity.
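To make this concrete, here is a small numerical sketch (my own illustration, with an assumed value $\beta=1>\tfrac12$ so that $g$ dips below zero) that locates $a$ and inverts $g$ by bisection:

```python
# Locate the largest root a of g(x) = atanh(x) - 2*beta*x on (0, 1),
# then invert g on (a, 1) by bisection.
from math import atanh

def g(x, beta):
    return atanh(x) - 2 * beta * x

def bisect(f, lo, hi, tol=1e-12):
    # assumes f(lo) < 0 < f(hi) with a single sign change in between
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

beta = 1.0
a = bisect(lambda x: g(x, beta), 1e-9, 1 - 1e-12)   # largest root of g
y = 0.5                                             # solve g(x) = y, y > 0
x = bisect(lambda t: g(t, beta) - y, a, 1 - 1e-12)
print(a, x, g(x, beta))                             # g(x) ≈ y
```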
how to solve for length of diagonal board between columns
$L=\sqrt{D^2+H^2-W^2}$; thank you, Jaap. It now occurs to me that knowing the angle $A$ is also important (or at least, knowing the other leg of the little triangles to cut off the ends): $$A=\arctan\left(\frac{H}{D}\right)-\arctan\left(\frac{W}{L}\right).$$
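Plugging in some hypothetical dimensions (all numbers made up):

```python
# Board length and end-cut angle from span D, height H, board width W.
from math import sqrt, atan, degrees

D, H, W = 48.0, 96.0, 5.5          # made-up span, height, board width
L = sqrt(D**2 + H**2 - W**2)
A = atan(H / D) - atan(W / L)
print(round(L, 2), round(degrees(A), 1))
```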
Simplest way to solve n in a series in which $n = 4\%$ of $ n-1$
So if $a_0\in\mathbb R$ is your initial value and $a_n=0.04\cdot a_{n-1}$ for $n\geq 1$, i.e. $a_n$ is $4\%$ of the previous number $a_{n-1}$, then $$ a_n=0.04^n\cdot a_0,\quad n\geq 1. $$
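In code, for an assumed initial value:

```python
# The closed form a_n = 0.04**n * a_0 (illustrative).
a0 = 100.0
a = [a0 * 0.04**n for n in range(5)]
print(a)   # ≈ [100.0, 4.0, 0.16, 0.0064, 0.000256]
```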
Finding the maximum of a ratio of binomial coefficients
You ask for the maximum value of your expression with respect to $d$. Let us name your expression $$ B(d) = \dfrac{\displaystyle\binom{100-D_L}{K-d}\binom{D_L}{d}}{\displaystyle\binom{100}{K}} $$ Then we have the quotient, after evaluating the factorials: $$ Q(d) = \frac{B(d)}{B(d+1)} = \frac{(d+1)(101-D_L - K + d)}{(D_L - d)(K-d)} $$ The claim is that the maximum of $B(d)$ will be obtained at some value of $d$ which we may call $d_m$. This is equivalent to (1) having that $Q(d) > 1$ for all $d \ge d_m$ and (2) having that $Q(d-1) < 1$ for all $d\le d_m$ Starting with (1), $Q(d) > 1$ is equivalent to $$ (d+1)(101-D_L - K + d) > (D_L - d)(K-d) $$ or $$ d > \frac{-101 +K + D_L (K+1)}{102} $$ With the same argument, for (2), $Q(d-1) < 1$ is equivalent to $$ d < 1 + \frac{-101 +K + D_L (K+1)}{102} $$ So both conditions give the solution, rounding to the next highest integer, $$ d_m = \lceil{\frac{K D_L + (-101 + K+D_L)}{102}} \rceil $$ Some discussion: For large $K$ and $D_L$, the approximate value is $K D_L / 100$, as expected (see also the comment by polfosol). For not so large numbers of $K$ and $D_L$, the linear term $(-101 + K+D_L)/102$ induces small shifts. We will not always obtain positive values for $d_m$, since $d_m$ becomes zero for $-101 +K + D_L (K+1) <0$ or $$ D_L < \frac{101 - K}{K+1} $$
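The closed form can be checked against brute-force maximization; an illustrative sketch (the search ranges are arbitrary, chosen small so the check is instant):

```python
# Compare the ceiling formula for the mode against direct maximization.
from math import comb, ceil

def num(d, K, DL, N=100):
    # numerator of B(d); the common denominator C(N, K) doesn't affect argmax
    return comb(N - DL, K - d) * comb(DL, d)

for K in range(1, 20):
    for DL in range(1, 40):
        brute = max(range(min(K, DL) + 1), key=lambda d: num(d, K, DL))
        dm = max(0, ceil((K * DL + (-101 + K + DL)) / 102))
        assert brute == dm, (K, DL, brute, dm)
print("closed form matches brute force on the tested range")
```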
Show that $(\sum_{k=1}^{n}k^{-1}-\sum_{k=1}^{n}k^{-2})/(\log n)\longrightarrow 1$ as $n\longrightarrow\infty$
Note that \begin{align*} \int_{1}^{n}\dfrac{1}{t}\,dt&\leq\sum_{k=1}^{n}\dfrac{1}{k},\\ \sum_{k=2}^{n}\dfrac{1}{k}&\leq\int_{1}^{n}\dfrac{1}{t}\,dt, \end{align*} and that \begin{align*} \sum_{k=1}^{\infty}\dfrac{1}{k^{2}}<\infty. \end{align*} So \begin{align*} \dfrac{\displaystyle\sum_{k=1}^{n}\dfrac{1}{k}-\sum_{k=1}^{n}\dfrac{1}{k^{2}}}{\log n}&\geq\dfrac{\displaystyle\int_{1}^{n}\dfrac{1}{t}\,dt-\sum_{k=1}^{\infty}\dfrac{1}{k^{2}}}{\log n}\\ &=1-\dfrac{1}{\log n}\sum_{k=1}^{\infty}\dfrac{1}{k^{2}}\\ &\rightarrow 1. \end{align*} On the other hand, \begin{align*} \dfrac{\displaystyle\sum_{k=1}^{n}\dfrac{1}{k}-\sum_{k=1}^{n}\dfrac{1}{k^{2}}}{\log n}&\leq\dfrac{1+\displaystyle\int_{1}^{n}\dfrac{1}{t}\,dt-\sum_{k=1}^{n}\dfrac{1}{k^{2}}}{\log n}\\ &=\dfrac{1}{\log n}+1-\dfrac{1}{\log n}\sum_{k=1}^{n}\dfrac{1}{k^{2}}\\ &\rightarrow 1, \end{align*} so the result follows by the Squeeze Theorem.
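A numerical illustration (note the convergence is slow, of order $1/\log n$):

```python
# The ratio creeps toward 1 roughly like 1 - const/log(n).
from math import log

for n in [10**2, 10**4, 10**6]:
    h = sum(1.0 / k for k in range(1, n + 1))
    s = sum(1.0 / k**2 for k in range(1, n + 1))
    print(n, (h - s) / log(n))
```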
Why is statistics considered a different discipline than mathematics rather than as a branch of mathematics?
You won't get any consensus in answers here because probability and statistics can be both theoretical and applied. Here, roughly speaking, is how I think about these things; I'm posting this to clarify the confusion mentioned in the comments and in the edited question, not to answer the historical question of why statistics departments are often separate from math departments. (Note, however, that MIT, for example, has no statistics department. All probability and statistics courses are in the math department, or done within the various scientific or engineering departments.) Theoretical statistics (also called mathematical statistics) -- e.g. what can be found in the book by Schervish -- is probability theory applied to particular theoretical problems: inference and estimation. You could construe this as a branch of probability theory, but in practice the theory of statistical inference usually follows more foundational courses in probability theory and the theory of stochastic processes. Theoretical statistics deals with the sampling distributions, interval estimation, hypothesis testing, alternative modes of inference (Bayesian, non-parametric), etc. Statistics also has an applied component, of course. Applied statisticians work with actual data sets and use the theory done by theoretical statisticians to draw conclusions about particular empirical problems. Theoretical statisticians hardly ever look at data; they prove mathematical theorems.
Step function approximation with Henstock–Kurzweil integral.
I think you cannot exchange the absolute value with the integral here: by the triangle inequality, $\left|\int (f-\varphi)\right|\leq\int|f-\varphi|$, so the quantity with the absolute value inside is the larger one, and the condition written above does not bound it in the direction you need.
Prove for any vector norm $\| \cdot \|$ that $\left| \|x\| -\|y\| \right| \leq \|x - y\|$
You have the wrong substitution. Try $x=z-y$.
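Spelling the hint out: in the triangle inequality $\|x+y\|\le\|x\|+\|y\|$, substitute $x=z-y$ to get
$$\|z\| = \|(z-y)+y\| \le \|z-y\|+\|y\|,$$
hence $\|z\|-\|y\|\le\|z-y\|$. Swapping the roles of $z$ and $y$ gives $\|y\|-\|z\|\le\|y-z\|=\|z-y\|$, and the two bounds together yield $\big|\|z\|-\|y\|\big|\le\|z-y\|$ (rename $z$ to $x$ to match the statement).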
3D solids of constant width from platonic solids
It turns out, you are not! More specifically, what you're talking about is a generalization of the so-called Reuleaux triangle, and in the simplest case, namely when the solid is a tetrahedron, the resulting surface isn't of constant width; the problem is that an edge-to-edge span can be greater than the distance to a vertex. For a regular tetrahedron of side $s$, the distance between the midpoints of opposite edges of the Reuleaux tetrahedron is $\left(\sqrt{3}-\frac{\sqrt{2}}{2}\right)s\approx 1.0249\,s$, slightly more than the width $s$ in the vertex-to-face direction. For more details, see https://en.wikipedia.org/wiki/Reuleaux_tetrahedron .
Counting the total number of possible passwords
He is computing the number of passwords if there were no restriction ($36^6$) less the number of passwords made up only of letters ($26^6$). Your computation doesn't take into account the position of the number within the password, or the fact that there can be more than one number (and therefore you have to be careful about double-counting).
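Concretely (assuming the standard setup of 6 characters drawn from 26 letters and 10 digits, with at least one digit required):

```python
# Total passwords minus all-letter passwords.
total = 36**6 - 26**6
print(total)   # 1867866560
```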
Solving inequality for all even integers .
See this:$$ 2(n - 1) \leqslant \frac{n(n + 2)}{4} \Leftrightarrow 8(n - 1) \leqslant n(n + 2) \Leftrightarrow n^2 - 6n + 8 \geqslant 0 \Leftrightarrow (n - 4)(n - 2) \geqslant 0. $$ Equality holds iff $n = 2$ or $n = 4$, and the inequality fails only for $2 < n < 4$. The only integer in that range is $n = 3$, so the inequality holds true for each $n \in \mathbb{Z} \setminus \{3\}$; in particular, for every even integer.
Doubt regarding sum of a subring and an ideal need not be subring.
Let $R$ denote a not-necessarily commutative ring. Assume that $A \subseteq R$ is a subring and that $I \subseteq R$ is a two-sided ideal. Let us show that $A+I$ is closed under products, since closure under addition is trivial, and so too is closure under unary negation (by which I mean the function $x \in R \mapsto -x \in R$). So suppose $x,y \in A+I$. Then $x=a+i$ and $y=a'+i'$ for appropriate choices of $a,a' \in A$ and $i,i' \in I$. So $$xy = (a+i)(a'+i') = \underbrace{aa'}_{\in A}+\underbrace{ia'+ai'+ii'}_{\in I} \in A+I$$ So $A+I$ is closed under products. We can write the above argument in more compact notation as follows: $$(A+I)(A+I) = \underbrace{AA}_{\subseteq A}+\underbrace{IA+AI+II}_{\subseteq I} \subseteq A+I$$ Remark. If $R$ has a unity element $1,$ and if $1 \in A,$ then since $1+0 \in A+I$, hence $1 \in A+I$.