About definition of "ordered semi-ring"
I found three slightly different definitions of ordered semiring. This one may be the oldest: Definition: A semiring $R$ is ordered if there is a partial order $\preceq$ on $R$ such that (1) $a+c\preceq b+c$ whenever $a,b,c\in R$ and $a\preceq b$, and (2) $ac\preceq bc$ and $ca\preceq cb$ whenever $a,b,c\in R$, $0\preceq c$, and $a\preceq b$. $R$ is positive if $0\preceq a$ for all $a\in R$. Another definition removes the restriction that $0\preceq c$ from (2); the two are clearly equivalent for positive ordered semirings. The third adds to the second the requirement that $0\preceq a$ for all $a\in R$, so it’s equivalent to the notion of positive ordered semiring in either of the first two senses of ordered semiring. All of these definitions explicitly include additive monotonicity, your first property. Your second property follows from (2) above, so it holds no matter which of these definitions is in use: if $0\preceq a$ and $0\preceq b$, then $0=0\cdot b\preceq a\cdot b$. Thus, under any of these definitions of ordered semiring you can use both of the properties that you listed.
Besides 1 and 11, is $\sum_{i=0}^n 10^i$ composite for every $n\in \mathbb{N}$?
The next-smallest counterexample, as indicated in the links from the comments, is given by $$ R_{19} = \sum_{i=0}^{18}10^i = \overbrace{1 \cdots 1}^{19} $$ it is conjectured (but has not been proven) that there are infinitely many such primes. It is notable that $R_n = \sum_{i=0}^{n-1}10^i$ can be written in the form $$ R_n = \frac{10^{n} - 1}{9} $$ From this presentation, one may more easily deduce that $R_n$ can only be prime if $n$ is prime (though the converse doesn't hold).
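If you want to check small cases numerically, here is a minimal sketch (assuming `sympy` is available; the cutoff $n\le 30$ is arbitrary):

```python
from sympy import isprime

# R_n = (10**n - 1)//9 is the repunit with n ones
prime_indices = [n for n in range(2, 31) if isprime((10**n - 1) // 9)]
print(prime_indices)   # expected: [2, 19, 23]
```

In this range only $n=2,19,23$ give primes, consistent with the remark that $n$ must itself be prime.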
Are term #$n$ and term #$n+1$ always the highest terms in the Maclaurin series for $e^n$?
Yes, they are. For a fixed $n$, consider how we get from term number $k$ to term number $k+1$: $$ \frac{n^{k-1}}{(k-1)!}\to \frac{n^{k}}{k!} $$ We see that we multiply by $\frac nk$. This is larger than $1$ as long as $k$ is smaller than $n$, so for those $k$ the sequence increases, and for $k$ larger than $n$ what we multiply by is smaller than $1$, so the sequence decreases. For the transition from term number $n$ to term number $n+1$, we multiply by $\frac{n}{n} = 1$, so those two terms are equal, and indeed the maximum.
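A quick check with exact rational arithmetic, taking $n=5$ for concreteness (any positive integer behaves the same way):

```python
from fractions import Fraction

n = 5
# term number k (counting from 1) is n^(k-1)/(k-1)!
terms = []
t = Fraction(1)
for k in range(1, 3 * n):
    terms.append(t)      # term number k
    t = t * n / k        # multiply by n/k to get term number k+1
print(max(terms) == terms[n - 1] == terms[n])   # True: terms #n and #n+1 tie for the maximum
```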
If $f$ and $g$ are Riemann integrable on $[a,b]$, $g$ is non-negative and $f$ is bounded
Define $ h(y) := \int_a^y \left(f(x) - m \right) g(x) \, dx - \int_y^b \left( M - f(x) \right) g(x) \, dx $ on $[a,b] $. Then $h$ is continuous and $ h(a) \le 0 $ while $ h (b) \ge 0 $. Apply the intermediate value theorem to get $c$ such that $h(c) = 0 $, simplify and conclude. You may note that $h$ is suggested by taking the difference between the sides of the equality you have to establish.
On the structure of outer automorphism group of $A_6$.
As stated on the wikipedia page, $$Out(A_6) \cong C_2 \times C_2$$ Thus, it is not cyclic.
Basis for $U + V$ in terms of bases
Recall that in general $$\dim (U+V)=\dim U + \dim V - \dim (U\cap V)$$ and that we have $$B_{U + V}\subseteq B_{V} \cup B_{U}$$ Therefore, to find a basis for $U + V$, that is, a set of linearly independent vectors which spans $U+V$, we start from the spanning set $B_{V} \cup B_{U}$ and proceed by elimination, as in the sketch below.
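As a concrete sketch of the elimination step (the vectors below are hypothetical and `sympy` is just one convenient tool; keeping the pivot columns of the row-reduced matrix is one way to "proceed by elimination"):

```python
from sympy import Matrix

# hypothetical bases of two subspaces of R^3
B_U = [Matrix([1, 0, 0]), Matrix([0, 1, 0])]
B_V = [Matrix([1, 1, 0]), Matrix([0, 0, 1])]

candidates = B_U + B_V               # spanning set for U + V
M = Matrix.hstack(*candidates)       # put the vectors in as columns
pivots = M.rref()[1]                 # pivot columns pick out an independent subset
basis = [candidates[i] for i in pivots]
print(len(basis))                    # 3 = dim U + dim V - dim(U ∩ V) = 2 + 2 - 1
```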
2 attempts to draw 2 kings
$P(Case 1)=\frac{4}{52}\times\frac{3}{51}=\frac{12}{2652}$ (this is equal to what you calculated) Split up case 2 into two cases: 2.1: Draw 0 kings on first try ($P=\frac{48}{52}\times\frac{47}{51}=\frac{2256}{2652}$), 2 kings on 2nd try. 2.2: Draw 1 king on first try ($P=2\times\frac{48}{52}\times\frac{4}{51}=\frac{384}{2652}$), 2 kings on 2nd try. $P(Case 2.1)=\frac{2256}{2652}\times\frac{4}{50}\times\frac{3}{49}=\frac{27072}{6497400}$ $P(Case 2.2)=\frac{384}{2652}\times\frac{3}{50}\times\frac{2}{49}=\frac{2304}{6497400}$ $P(Case1)+P(Case2.1)+P(Case2.2)\approx 0.0090461$ This is how I would have done it.
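A quick Monte Carlo sanity check of the $\approx 0.0090461$ figure, assuming the game is: draw two cards, and if they are not both kings, draw two more from the remaining fifty:

```python
import random

def success():
    deck = ["K"] * 4 + ["x"] * 48
    random.shuffle(deck)
    if deck[0] == deck[1] == "K":          # case 1: two kings on the first try
        return True
    return deck[2] == deck[3] == "K"       # case 2: two kings on the second try

N = 500_000
print(sum(success() for _ in range(N)) / N)   # should hover around 0.009
```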
Another Approach to Long division
What you did was:

1. $8=2(4)+0$
2. $7=2(3)+1$
3. $6=2(3)+0$

Any remainder in step 1 is in the hundreds column, any remainder in step 2 is in the tens column, and any remainder in step 3 is in the units column. So, in your case, you had a remainder of $1$ in step 2, which means your remainder is effectively $1*10=10$. So, to complete your division, you would say: $$\frac{876}{2}=433+\frac{10}{2}$$ Now the problem you will end up with if you continue your strategy to divide $10$ by $2$ is as follows:

1. $1=2(0)+1$ (i.e. remainder $=1*10=10$)
2. $0=2(0)+0$

This means you are left with: $$\frac{10}{2}=0+\frac{10}{2}$$ i.e. you will recurse infinitely without reaching a result. But, if you modify your strategy to stop at this point and do the final division in the classical manner, then you get: $$\frac{876}{2}=433+\frac{10}{2}=433+5=438$$ As another example, consider $876/3$:

1. $8=3(2)+2$
2. $7=3(2)+1$
3. $6=3(2)+0$

This gives you: $$\frac{876}{3}=222+\frac{210}{3}=222+70=292$$
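Here is a small sketch of this digit-by-digit strategy, with the final division done classically on the accumulated remainder (the function name is mine, just for illustration):

```python
def digitwise_div(n, d):
    """Divide digit by digit; return the partial quotient and the leftover
    (place-value weighted remainders), so that n/d = partial + leftover/d."""
    digits = str(n)
    partial, leftover = 0, 0
    for i, ch in enumerate(digits):
        place = 10 ** (len(digits) - 1 - i)
        q, r = divmod(int(ch), d)
        partial += q * place
        leftover += r * place
    return partial, leftover

p, rem = digitwise_div(876, 2)
print(p, rem, p + rem // 2)    # 433 10 438
p, rem = digitwise_div(876, 3)
print(p, rem, p + rem // 3)    # 222 210 292
```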
About negligible terms in a limit
The reasoning of the first problem is dangerous. For example, $n^2+2n+1$ is "asymptotic" to $n^2$. But $\frac{e^n}{e^{\sqrt{n^2+2n+1}}}=e^{-1}$ for all $n$. Reasoning of the type used in 2.1 and 2.2 is quite safe. If we have a product or sum, we can separate out "well-behaved" portions.
How to calculate standard deviation and use three-sigma rule for couple variables?
I will consolidate my comments here. Since the standard deviation ($\sigma$) is calculated from the mean of the data, usually the three sigma rule is also based on the mean of the data. From the data given, the mean is $\dfrac{.49+.41}{2}=.45$. The standard deviation would be $\sigma=\sqrt{\dfrac{(.49-.45)^2+(.41-.45)^2}{2}}=.04$. If there is a different mean, there must be other data, which might mean a different $\sigma$. Otherwise, you can compute the distance between the two samples in terms of their standard deviation using their difference ($.49-.41$) and the value of $\sigma$ computed above.
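In code, the same computation looks like this (a minimal sketch, using the population standard deviation as above):

```python
from math import sqrt

data = [0.49, 0.41]
mean = sum(data) / len(data)                                   # 0.45
sigma = sqrt(sum((x - mean) ** 2 for x in data) / len(data))   # 0.04
print(mean, sigma, (0.49 - 0.41) / sigma)                      # the two samples sit 2 sigma apart
```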
Alternative ways to determine the centre of $GL_n(K)$
First, you can show that the center of $M_n(k)$ is $k$ (embedded as the diagonal matrices) using the double centralizer theorem. This part is very conceptual. Next, you can show that any element in the center of $GL_n(k)$ must lie in the center of $M_n(k)$ by showing that every element of $M_n(k)$ is a linear combination of elements of $GL_n(k)$. If $k$ is an infinite field this is easy: if $X \in M_n(k)$ is any matrix, the polynomial $\det (X + t I)$ in $t$ is nonzero and hence takes nonzero values for all but finitely many $t \in k$. If $t_0 \in k$ is nonzero and such that $\det (X + t_0 I) \neq 0$, then $X$ is the difference of the invertible matrices $X + t_0 I$ and $t_0 I$. I don't have quite so clean a way to handle the finite-field case. It suffices to write down a basis of $M_n(k)$ consisting of invertible matrices. For starters we can consider the identity matrix $I$ together with matrices of the form $I + E_{i,j}$ where $E_{i,j}$ is the matrix with a $1$ in the $i,j$ entry and $0$ otherwise, and also $i \neq j$. Next, to handle the diagonal, if the characteristic of $k$ isn't equal to $2$ we can consider the diagonal matrices with entries $(1, -1, 1, \dots), (1, 1, -1, \dots)$, etc. If the characteristic is $2$ we can consider matrices of the form $I - E_{i,i} + E_{(i-1),i} + E_{i,i-1}$ for $i = 2, \dots n$.
sum of pairs and sum of squares $\left(\sum x_i\right)^2-2\sum_{cyc}x_ix_j=\sum x_i^2$
If you expand $$(x_1 + \cdots + x_n)(x_1 + \cdots + x_n)$$ you get $n^2$ addends: $x_1^2, \ldots, x_n^2$, as well as two each of $x_i x_j$ for $i \ne j$. The cyclic sum accounts for some of these latter terms, but misses terms like $x_1 x_3$ or $x_2 x_4$.
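For a concrete look at the missing terms, here is a small check with `sympy` for $n=4$:

```python
from sympy import symbols, expand

x1, x2, x3, x4 = symbols("x1 x2 x3 x4")
cyc = x1*x2 + x2*x3 + x3*x4 + x4*x1                    # cyclic sum of x_i x_j for n = 4
lhs = expand((x1 + x2 + x3 + x4)**2 - 2*cyc)
print(expand(lhs - (x1**2 + x2**2 + x3**2 + x4**2)))   # 2*x1*x3 + 2*x2*x4, the missed terms
```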
Using a gamma distribution correctly?
I don't think you're correctly using $\Gamma$ here. Let's use $\lambda$ as the rate of the process instead of $X$ for aesthetic reasons. Note $$P(N(T) \geq 100) = P(N(T) - N(0) \geq 100) =P(\operatorname{Pois}(\lambda(T-0))\geq 100) \\= P(\operatorname{Pois}(\lambda T) \geq 100) = \cdots$$
Trigonometric Identity $\arcsin(\alpha+\beta)=\arcsin(\alpha|\sec(\beta)|)+\arcsin(\beta|\sec(\alpha)|)$
No. Try $\alpha = \beta =\frac12$.
A question about floor functions and series
The increment $$a_{n+1}-a_n=1+\left\lfloor\sqrt{n+1}+\dfrac12\right\rfloor-\left\lfloor\sqrt{n}+\dfrac12\right\rfloor$$ is $1$, or $2$ when $\sqrt{n+1}$ rounds differently than $\sqrt n$.
What does the magnitude and direction of a larger vector projected onto a smaller vector depend on?
When one speaks of projecting $\vec a$ onto $\vec b$ (I will write $p_{\vec b}(\vec a)$), I think it is understood that you really mean "the projection of $\vec a$ in the direction of $\vec b$", that is, $\vec b$ is only used to get a direction, and its (nonzero) magnitude is irrelevant. The direction of $p_{\vec b}(\vec a)$ depends only on the direction of $\vec b$ (in fact, it is the direction of $\vec b$). The magnitude of $p_{\vec b}(\vec a)$ depends on the relative direction of $\vec a$ with respect to $\vec b$ (said differently, the angle between $\vec a$ and $\vec b$) as well as the magnitude of $\vec a$.
Poisson distribution problem..
Let $X_1$ and $X_2$ be respectively the number of men and women arriving at the ticket office in a one-minute period. Then $X_1$ is Poisson with mean $2$ and $X_2$ is Poisson with mean $1$. Since $X_1$ and $X_2$ are independent, $X = X_1 + X_2$ is Poisson with mean $3$. Thus $P(X \leq 2) = e^{-3}(1 + 3 + \frac{9}{2!}) \approx 0.423$
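As a numerical cross-check (assuming `scipy` is available; the manual line just re-evaluates the formula above):

```python
from math import exp
from scipy.stats import poisson

lam = 3  # combined rate per minute: 2 men + 1 woman
manual = exp(-lam) * (1 + lam + lam**2 / 2)
print(manual, poisson.cdf(2, lam))   # both about 0.4232
```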
Building matrix expressions for product of sum, isolating vector of constants
After some attempts, I've figured it out: $$ \left.\prod\limits_{k=1}^P \sum\limits_{j=1}^M a_j \cdot \mathcal{T}_k \left[f_{i,j} \right]\;\right|_{i=1}^N = \left[\begin{array}{} \prod\limits_{k=1}^P \mathcal{T}_k \left[f_{1,1} \right] & \ldots & \prod\limits_{k=1}^P \mathcal{T}_k \left[f_{1,M}\right] \\ \vdots & \ddots & \vdots \\ \prod\limits_{k=1}^P \mathcal{T}_k \left[f_{N,1}\right] & \ldots & \prod\limits_{k=1}^P \mathcal{T}_k \left[f_{N,M}\right] \end{array}\right] \cdot \left[\begin{array}{} a_1 \\ \vdots \\ a_M \end{array}\right] $$ Thank you to those who have given a thought to this :-D
System of generators of a homogeneous ideal
I think it is true, and we assume those generators are homogeneous; otherwise, the ideal $(x,x^n+y)=(x,y)$ of $k[x,y]$ gives an example where the degrees are not uniquely determined. Write $I=I_1+I_2+I_3+\cdots$, where $I_k$ is the degree-$k$ part of $I$. Then $I$ is generated by the homogeneous elements of degree $1,2,3,\ldots$. Viewing $I_1$ as a $k$-vector space, we see that the generators of degree $1$ form a $k$-basis of $I_1$, so the number of generators of degree $1$ is uniquely determined. Similarly, in degree $2$, $I_2=(I_1S_1,f_{21},f_{22},\ldots)$ (as a $k$-vector space). Since the generating set is minimal, the generators of degree $2$ are not in $I_1S_1$ and form a basis of $I_2/I_1S_1$. (Here $S_i$ is the set of homogeneous polynomials of degree $i$.) We can continue step by step to see that your claim is true. Hence the number of minimal generators of $I$ is $\sum_n\operatorname{dim}_k\left(I_n\Big/\sum_{i=1}^{n-1}I_iS_{n-i}\right)$. (By the Noetherian property, only finitely many terms in the sum are nonzero.) Edit: If we do not require the generators to be homogeneous, the statement is not true in general. In $k[x,y]$, the ideal $J=(x+y^3,y^2+y^3,y^4)=(x,y^2)$ is a homogeneous ideal; let us verify that $\{x+y^3,y^2+y^3,y^4\}$ is a minimal generating set. If $J= (x+y^3,y^4)$, then $J=(x+y^3,y^4,x)=(x,y^3)$, a contradiction. If $J=(x+y^3,y^2+y^3)$, then there exist $f,g\in k[x,y]$ such that $x=(x+y^3)f+(y^2+y^3)g$; setting $x=1,y=-1$ gives $1=0$, also a contradiction.
Computing a sum involving the floor function
Look at the difference between sums of $n$ and $n-1$ terms. It is fairly trivial once you can see that and find the pattern. (DO NOT READ BELOW THIS IF YOU WANT TO TRY AND SOLVE IT YOURSELF AGAIN WITH THIS HINT) Here is a solution Let the sum be $S_n$. By looking at the difference between sums : $$S_n - S_{n-1} = \Bigl\lfloor \frac{x}{n}\Bigr\rfloor + \Bigl\lfloor \frac{x+1}{n}\Bigr\rfloor+...+\Bigl\lfloor \frac{x+n-1}{n}\Bigr\rfloor$$ $$ = \Bigl\lfloor \frac{x}{n} \Bigr\rfloor + \Bigl\lfloor \frac{x}{n} + \frac{1}{n}\Bigr\rfloor + ... + \Bigl\lfloor \frac{x}{n} + \frac{n-1}{n}\Bigr\rfloor$$ and, by Hermite’s identity, $$S_n - S_{n-1} = \Bigl\lfloor n\frac{x}{n} \Bigr\rfloor = \Bigl\lfloor x \Bigr\rfloor$$ As $S_1 = \Bigl\lfloor x \Bigr\rfloor$, it follows that $S_n = n\Bigl\lfloor x \Bigr\rfloor$ for all $n \geqslant 1$.
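A quick numerical check is possible if (as the telescoping above suggests) the sum in question is $S_n=\sum_{m=1}^{n}\sum_{i=0}^{m-1}\left\lfloor\frac{x+i}{m}\right\rfloor$; that reading is an assumption on my part:

```python
from math import floor

def S(n, x):
    # assumed form of the sum: S_n = sum_{m=1}^{n} sum_{i=0}^{m-1} floor((x+i)/m)
    return sum(floor((x + i) / m) for m in range(1, n + 1) for i in range(m))

x = 3.7
print([S(n, x) for n in range(1, 8)])   # 3, 6, 9, ...: equals n * floor(x)
```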
Permutation of multiple sets of elements where adjacent elements are different.
I will give a hint, but not a complete answer. Let $f(R,W,B)$ be the number of ways to order $R$ red balls, $W$ white balls, and $B$ black balls so that no two adjacent balls are of the same color. Let $f(R,W,B;\text{Red})$ be the number of ways to order $R$ red balls, $W$ white balls, and $B$ black balls so that the first ball is Red. Similarly for $f(R,W,B;\text{White})$ and $f(R,W,B;\text{Black})$. We will try to build a recurrence relation. $$f(R,W,B) = f(R,W,B;\text{Red})+f(R,W,B;\text{White})+f(R,W,B;\text{Black})$$ Let's try to break down each number further. If a sequence starts with red, then the balls after it must be a valid sequence of balls with one fewer red. However, it cannot start with red. This gives us the following three formulas: $$f(R,W,B;\text{Red}) = f(R-1,W,B)-f(R-1,W,B;\text{Red})$$ $$f(R,W,B;\text{White}) = f(R,W-1,B)-f(R,W-1,B;\text{White})$$ $$f(R,W,B;\text{Black}) = f(R,W,B-1)-f(R,W,B-1;\text{Black})$$ And the following boundary conditions: $$f(0,W,B) = \begin{cases}0, & |W-B|>1 \\ 1, & |W-B|=1 \\ 2, & |W-B|=0\end{cases}$$ and similarly for $f(R,0,B)$ and $f(R,W,0)$. $$f(0,W,B;\text{Red}) = 0, f(R,0,B;\text{White}) = 0, f(R,W,0;\text{Black})=0$$ These four formulas give the recurrence relations that can be used to solve the problem. See what you can do with this (or another approach). If you still need help, get back to us and we can assist further. Edit: Using an Excel program to evaluate this, I get $f(5,6,7) = 31632$. Edit #2: Using this link: Permutations Calculator I got the same amount (31632).
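If you want to check the numbers, here is a direct memoized count (it tracks the color of the last ball placed rather than using the exact boundary conditions above, but it counts the same arrangements):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def count(r, w, b, last=None):
    # arrangements of r red, w white, b black balls, given the color just placed
    if r == w == b == 0:
        return 1
    total = 0
    for color, left in (("R", r), ("W", w), ("B", b)):
        if left > 0 and color != last:
            total += count(r - (color == "R"), w - (color == "W"), b - (color == "B"), color)
    return total

print(count(5, 6, 7))   # 31632, matching the value quoted above
```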
"A Queer Coincidence," riddle from Dudeney's book
It is essentially bookkeeping (or literally bookkeeping, since they are playing for money). Working backwards, each player has half as much as the subsequent round, except the "winner" of the next round additionally has half the total of $7\times 32=224$, so $112$ more than half.
Suppose $f$ is differentiable at $\overline{x}\in D$; then there exist $M>0$ and $r>0$ such that...
1) The $r>0$ means the statement is local: in general we do not have such a nice bound $\dfrac{|f(x+h)-f(x)|}{|h|}\leq M$ globally in $|h|$; for large $|h|$ the inequality may fail. 2) Because you have restricted to a neighborhood, and for a bounded linear operator $T$ one has $|Tx|\leq M|x|$. 3) Not for all $x\in{\bf{R}}$: take for example $T(x)=x$; saying that $|T(x)|<M$ for all $x$ would mean $|x|<M$ for all $x\in{\bf{R}}$, which is clearly not true. As noted in 2), $|Tx|=|x|\leq M|x|$ is true, here with $M=1$.
Inclusion Exclusion combinatorics problem.
A permutation of $n$ objects such that no object remains in its original position is called a derangement. So you are asking for the number of derangements of seven objects, i.e. the $7$-th derangement number, which is $1854$. Your approach to compute this number by means of inclusion-exclusion is a good one. Your first argument, however, is wrong. You indeed vastly miscount the numbers $A_i$. For example, as you later note, it is impossible for exactly six people to get their own jacket back. After all, the only remaining person must then get the only remaining jacket, which must then be correct as well. For another example, the number of ways at least $5$ people can get their own jacket is $$1+\tbinom{7}{5}=22.$$ After all, either everyone gets their own jacket back, or exactly $5$ people get their own jacket back. Your second argument starts off in the right direction. But it is entirely unclear to me how you get: So by this logic, the answer should be $$= 7! - (\binom{7}{1}6! - \binom{7}{2}5! + \binom{7}{3}4! -\binom{7}{4}3! + \binom{7}{5}2! -\binom{7}{6}1! + \binom{7}{7}0! ) = 1854.$$ The number that pops out is correct, but the argument is missing.
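For what it's worth, the inclusion-exclusion count is easy to verify directly:

```python
from math import comb, factorial

n = 7
derangements = sum((-1)**k * comb(n, k) * factorial(n - k) for k in range(n + 1))
print(derangements)   # 1854
```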
General process to find global extrema of a function?
If the function isn't continuous then you don't know that it has extreme values on the interval. For example, consider the function $f(x)=\frac{1}{x}$ for $x\neq 0$ and $f(x)=0$ when $x=0$ on the interval $[-1, 1]$. That function has no maximum or minimum value on that interval. If the interval is open you are also not guaranteed a maximum; consider $g(x)=x$ on $(-1, 1)$... no max or min. The assumption that $f(x)$ is continuous on a closed interval is so that you know (by the Extreme Value Theorem) that the function attains a global min and a global max on the interval. If the conditions are not met, no guarantee exists.
If $R$ is a commutative ring with ideals $A$ and $B$ such that $AB$ is principal, then $A$ and $B$ are finitely generated.
Consider $R=\oplus_{n\geq 0}\mathbb{Z}$ endowed with the componentwise product. Let $A$ be the ideal of elements of the form $(n,0,0,\ldots)$ and $B$ the ideal of elements of the form $(0,n_1,n_2,\ldots)$. Then $AB=0$ is principal, but $B$ is not finitely generated.
Any ideas on how I can prove this expression?
Note: Here are at least a few hints which may help to solve this nice identity. In fact it's hardly more than a starter. But hopefully some aspects are nevertheless useful for the interested reader. Introduction: The following information is provided: Definition of partial Bell polynomials $B_{n,k}$: I state the definition of Bell polynomials according to Comtet's classic Advanced Combinatorics, section 3.3, Bell Polynomials as coefficients of generating functions. The idea is that a proper representation via generating functions could help to find the solution. The convolution product $x \diamond y$ demystified We will observe that the convolution product is strongly related to an iterative representation of the generating functions of the Bell numbers. This enables us to transform the stated identity to gain some more insight. Representation of the identity with the help of generating functions In fact it's just another representation, regrettably without clever simplifications. Verification of the identity for $n=2,3$ In order to better see what's going on, the identity is also verified for small $n=2,3$. An analysis of these examples could provide some hints on how to appropriately transform the generating functions in the general case. Definition of partial Bell polynomials $B_{n,k}$: According to Comtet's Advanced Combinatorics, section 3.3 Bell Polynomials, we define as follows: Let $\Phi(t,u)$ be the generating function of the (exponential) partial Bell polynomials $B_{n,k}=B_{n,k}(x_1,x_2,\ldots,x_{n-k+1})$ in an infinite number of variables $x_1,x_2,\ldots$ defined by the formal double series expansion: \begin{align*} \Phi(t,u)&:=\exp\left(u\sum_{m\geq 1}x_m\frac{t^m}{m!}\right)=\sum_{n,k\geq 0}B_{n,k}\frac{t^n}{n!}u^k\\ &=1+\sum_{n\geq 1}\frac{t^n}{n!}\left(\sum_{k=1}^{n}u^kB_{n,k}(x_1,x_2,\ldots)\right) \end{align*} or, what amounts to the same, by the series expansion: \begin{align*} \frac{1}{k!}\left(\sum_{m\geq 1}x_m\frac{t^m}{m!}\right)^k=\sum_{n\geq k}B_{n,k}\frac{t^n}{n!},\qquad k=0,1,2,\ldots\tag{1} \end{align*} In the following the focus is put on the representation (1). Let's use the coefficient of operator $[t^n]$ to denote the coefficient $a_n=[t^n]A(t)$ of a formal generating series $A(t)=\sum_{k\geq 0}a_kt^k$. We observe for $n\geq 0$: \begin{align*} B_{n,k}=\frac{n!}{k!}[t^n]\left(\sum_{m\geq 1}x_m\frac{t^m}{m!}\right)^k,\qquad k\geq 0\tag{2} \end{align*} Note: In the following it is sufficient to consider $B_{n,k}$ for $n,k\geq 1$.
The convolution product $x\diamond y$ (somewhat demystified) The convolution product $x \diamond y$ for sequences $x=(x_j)_{j\geq 1}$ and $y=(y_j)_{j\geq 1}$ is defined according to this link as \begin{align*} x \diamond y :=\sum_{j=1}^{n-1}\binom{n}{j}x_jy_{n-j}\tag{3} \end{align*} The polynomial $B_{n,k}$ can be written using the $k$-fold product $$x^{k_\diamond}=(x_n^{k\diamond})_{n\geq 1}:=\underbrace{x\diamond \ldots \diamond x}_{k \text{ factors}}$$ as \begin{align*} B_{n,k}=\frac{x_n^{k\diamond}}{k!}, \qquad n,k\geq 1\tag{4} \end{align*} We obtain according to the definition: \begin{align*} x\diamond y&=\left(0,\sum_{j=1}^{1}\binom{2}{j}x_jy_{2-j},\sum_{j=1}^{2}\binom{3}{j}x_jy_{3-j},\sum_{j=1}^{3}\binom{4}{j}x_jy_{4-j},\ldots\right)\\ &=\left(0,2x_1y_1,3x_1y_2+3x_2y_1,4x_1y_3+6x_2y_2+4x_3y_1,\ldots\right)\\ \end{align*} which implies \begin{align*} x^{1\diamond}&=\left(x_1,x_2,x_3,x_4,\ldots\right) &=&1!(B_{1,1},B_{2,1},B_{3,1},B_{4,1}\ldots)\\ x^{2\diamond}&=\left(0,2x_1^2,6x_1x_2,8x_1x_3+6x_2^2,\ldots\right) &=&2!(0,B_{2,2},B_{3,2},B_{4,2},\ldots)\\ x^{3\diamond}&=\left(0,0,6x_1^3,36x_1^2x_2,\ldots\right) &=&3!(0,0,B_{3,3},B_{4,3}\ldots)\\ \end{align*} Observe, that the multiplication of exponential generating functions $A(x)=\sum_{k\geq 0}a_k\frac{x^k}{k!}$ and $B(x)=\sum_{l\geq 0}b_l\frac{x^l}{l!}$ gives: \begin{align*} A(x)B(x)&=\left(\sum_{k\geq 0}a_k\frac{x^k}{k!}\right)\left(\sum_{l\geq 0}b_l\frac{x^l}{l!}\right)\\ &=\sum_{n\geq 0}\left(\sum_{{k+l=n}\atop{k,l\geq 0}}\frac{a_k}{k!}\frac{b_l}{l!}\right)x^n\\ &=\sum_{n\geq 0}\left(\sum_{k=0}^n\binom{n}{k}a_kb_{n-k}\right)\frac{x^n}{n!} \end{align*} According to the definition of $B_{n,k}$ the sequences $x^{k_\diamond}$ generated via the convolution product are simply the coefficients of the vertical generating functions for the Bell polynomials $B_{n,k}, n\geq 1$: \begin{align*} \frac{1}{1!}\sum_{m\geq 1}x^{1_\diamond}_n\frac{t^n}{n!}&= \frac{1}{1!}\left(\sum_{m\geq 1}x_m\frac{t^m}{m!}\right)=\sum_{m\geq 1}B_{n,1}\frac{t^n}{n!}\\ \frac{1}{2!}\sum_{m\geq 1}x^{2_\diamond}_n\frac{t^n}{n!}&= \frac{1}{2!}\left(\sum_{m\geq 1}x_m\frac{t^m}{m!}\right)^2=\sum_{m\geq 2}B_{n,2}\frac{t^n}{n!}\\ &\qquad\qquad\cdots\\ \frac{1}{k!}\sum_{m\geq 1}x^{k_\diamond}_n\frac{t^n}{n!}&= \frac{1}{k!}\left(\sum_{m\geq 1}x_m\frac{t^m}{m!}\right)^k=\sum_{m\geq k}B_{n,k}\frac{t^n}{n!}\\ \end{align*} We observe: A convolution with $x^{1\diamond}$ corresponds essentially to a multiplication of the generating function $\sum_{m\geq 1}x_m\frac{t^m}{m!}$ In order to keep complex expressions better manageable, we introduce some abbreviations: $$B_{n,k}^{f}(x) := B_{n,k}(f^\prime(x),f^{\prime\prime}(x),\ldots,f^{(n-k+1)}(x))$$ The $n$-th derivatives will be abbreviated as $$f_n:=\frac{d^n}{dx^n}f(x)\qquad\text{ and }\qquad g_n:= \frac{d^n}{dx^n}g(x),\qquad n \geq 1$$ we also use OPs shorthand $a=(f_1,f_2,\ldots)$ and $b=(g_1,g_2,\ldots)$. 
According to the statements above the expression $$B_{n,k}\left(f^\prime(x),f^{\prime\prime}(x),\ldots,f^{(n-k+1)}\right)_{(f\rightarrow g)^c} =\frac{\left(a^{(k-c)_{\diamond}}\diamond b^{c_\diamond}\right)_n}{(k-c)!c!}$$ can now be written as coefficients of the product of the generating functions \begin{align*} \sum_{n\geq k}&B^f_{{n,k}_{(f\rightarrow g)^c}}\frac{t^n}{n!} =\frac{1}{(k-c)!}\left(\sum_{m\geq1}f_m\frac{t^m}{m!}\right)^{k-c} \frac{1}{c!}\left(\sum_{m\geq1}g_m\frac{t^m}{m!}\right)^{c} \end{align*} Representation of the identity via generating functions We are now in a state to represent OPs identity with the help of generating functions based upon (1). To simplify the notation somewhat I will often omit the argument and write e.g. $(\ln\circ g)^k$ instead of $\left(\ln(g(x))\right)^k$. Now, putting the Complete Bell polynomial in OPs question on the left hand side and the other terms to the right hand side we want to show Following identity is valid: \begin{align*} \sum_{k=1}^{n}B_{n,k}^{f\cdot (\ln\circ g)}&=\sum_{k=1}^{n}(\ln\circ g)^{k}B_{n,k}^f\tag{5}\\ &\qquad+\sum_{k=1}^{n}\sum_{m=0}^{n-k}\sum_{j=0}^{m}\binom{m}{j} \frac{(\ln\circ g)^{m-j}}{g^k}\frac{d^j}{d(f)^j}[(f)_k]B_{{n,m+k}_{(f\rightarrow g)^k}}^f\qquad\qquad n\geq 1 \end{align*} Please note, the following abbreviations are used in (5) and the expressions below: \begin{align*} &f:= f(x), \qquad g := g(x), \qquad f_k := \frac{d^k}{dx^k}f(x), \qquad g_k :=\frac{d^k}{dx^k}g(x)\\ &d_k := \frac{d^k}{dx^k}\left(f(x)\ln(g(x)\right),\qquad\frac{d^j}{d(f)^j}:= \frac{d^j}{d(f(x))^j}\\ &(f)_k := f(x)\left(f(x)-1\right)\cdot\ldots\cdot\left(f(x)-k+1\right) \end{align*} Using the generating function (1) we observe: \begin{align*} \sum_{k=1}^{n}B_{n,k}^{f\cdot (\ln\circ g)}&=n!\sum_{k=1}^n\frac{1}{k!}[t^n]\left(\sum_{m\geq 1}d_m\frac{t^m}{m!}\right)^k\\ \\ \sum_{k=1}^{n}(\ln\circ g)^{k}B_{n,k}^f&=n!\sum_{k=1}^{n}(\ln\circ g)^{k}\frac{1}{k!}[t^n]\left(\sum_{m\geq 1}f_m\frac{t^m}{m!}\right)^k \\ \\ \sum_{k=1}^{n}\sum_{m=0}^{n-k}\sum_{j=0}^{m}\binom{m}{j}& \frac{(\ln\circ g)^{m-j}}{g^k}\frac{d^j}{d(f)^j}[(f)_k]B_{{n,m+k}_{(f\rightarrow g)^k}}^f\\ &=n!\sum_{k=1}^n\frac{1}{k!}\frac{1}{g^k}\sum_{m=0}^{n-k}\frac{1}{m!}\sum_{j=0}^m \binom{m}{j}\left(\ln\circ g\right)^{m-j}\frac{d^j}{d(f)^j}[(f)_k]\\ &\qquad\cdot[t^n]\left(\sum_{j\geq 1}f_{j}\frac{t^{j}}{j!}\right)\left(\sum_{j\geq 1}g_{j}\frac{t^{j}}{j!}\right) \end{align*} Putting all together gives following reformulation of the identity: \begin{align*} \sum_{k=1}^n&\frac{1}{k!}[t^n]\left(\sum_{m\geq 1}d_m\frac{t^m}{m!}\right)^k\\ &=\sum_{k=1}^{n}\left(\ln \circ g\right)^k\frac{1}{k!}[t^n]\left(\sum_{m\geq 1}f_m\frac{t^m}{m!}\right)^k\\ &\qquad+\sum_{k=1}^n\frac{1}{k!}\frac{1}{g^k}\sum_{m=0}^{n-k}\frac{1}{m!} \sum_{j=0}^m\binom{m}{j}\left(\ln \circ g\right)^{m-j}\frac{d^j}{d(f)^j}[(f)_k]\\ &\qquad\qquad\cdot[t^n]\left(\sum_{j\geq 1}f_{j}\frac{t^{j}}{j!}\right)\left(\sum_{j\geq 1}g_{j}\frac{t^{j}}{j!}\right) \end{align*} Note: Maybe this alternative representation could help to show OPs identity. Verification of the identity for $n=2,3$ In order to verify the identity for small $n$, we need some polynomials $B_{n,k}$ in variables $f_j$ and $g_j$ ($j$-th derivative of $f$ and $j$). We do so by applying the $\diamond$ operator to $a=(f_1,f_2,\ldots)$ and $b=(g_1,g_2,\ldots)$. 
\begin{array}{rlllll} a^{1\diamond}&=\left(f_1,\right.&f_2,&f_3,&f_4,&\left.\ldots\right)\\ b^{1\diamond}&=\left(g_1,\right.&g_2,&g_3,&g_4,&\left.\ldots\right)\\ \\ a^{2\diamond}&=\left(0,\right.&2f_1^2,&6f_1f_2,&8f_1f_3+6f_2^2,&\left.\ldots\right)\\ a^{1\diamond}\diamond b^{1\diamond}&=\left(0,\right.&2f_1g_1,&3f_1g_2+3f_2g_1,& 4f_1g_3+6f_2g_2+4f_3g_1,&\left.\ldots\right)\\ b^{2\diamond}&=\left(0,\right.&2g_1^2,&6g_1g_2,&8g_1g_3+6g_2^2,&\left.\ldots\right)\\ \\ a^{3\diamond}&=\left(0,\right.&0,&6f_1^3,&36f_1^2f_2,&\left.\ldots\right)\\ a^{2\diamond}\diamond b^{1\diamond}&=\left(0,\right.&0,&6f_1^2g_1,&12f_1^2g_2+24f_1f_2g_1,&\left.\ldots\right)\\ a^{1\diamond}\diamond b^{2\diamond}&=\left(0,\right.&0,&6f_1g_1^2,&24f_1g_1g_2+12f_2g_1^2,&\left.\ldots\right)\\ b^{3\diamond}&=\left(0,\right.&0,&6g_1^3,&36g_1^2g_2,&\left.\ldots\right)\\ \end{array} Case $n=2$: Each of the three sums of the identity is calculated separately. \begin{align*} \sum_{k=1}^{2}&B_{2,k}^{f\cdot(\ln \circ g)}\\ &=B_{2,1}^{f\cdot(\ln \circ g)}+B_{2,2}^{f\cdot(\ln \circ g)}\\ &=\frac{d^2}{{dx}^2}\left(f (\ln \circ g)\right)+\left(\frac{d}{dx}\left(f(\ln \circ g)\right)\right)^2\\ &=\left(f_2(\ln \circ g)+2f_1\frac{g_1}{g}+f\frac{g_2}{g}-f\frac{g_1^2}{g^2}\right)+\left(f_1(\ln \circ g)+f\frac{g_1}{g}\right)^2\\ &=\left(f_2(\ln \circ g)+2f_1\frac{g_1}{g}+f\frac{g_2}{g}-f\frac{g_1^2}{g^2}\right)+\left(f_1^2(\ln \circ g)^2+2ff_1(\ln \circ g)\frac{g_1}{g}+f^2\frac{g_1^2}{g^2}\right)\\ \\ \sum_{k=1}^{2}&(\ln \circ g)^kB_{2,k}^f\\ &=(\ln \circ g)B_{2,1}^f+(\ln \circ g)^2B_{2,2}^f\\ &=(\ln \circ g)f_2+(\ln \circ g)^2f_1^2\\ \\ \sum_{k=1}^{2}&\sum_{m=0}^{2-k}\sum_{j=0}^m\binom{m}{j} \frac{(\ln \circ g)^{m-j}}{g^k}\frac{d^j}{d(f)^j}[\left(f\right)_k]\frac{\left(a^{m_\diamond}\diamond b^{k_{\diamond}}\right)_2}{m!k!}\\ &=\sum_{k=1}^{2}\frac{1}{g^k}\sum_{m=0}^{2-k}\frac{\left(a^{m_\diamond}\diamond b^{k_{\diamond}}\right)_2}{m!k!} \sum_{j=0}^m\binom{m}{j}\frac{(\ln \circ g)^{m-j}}{g^k}\frac{d^j}{d(f)^j}[\left(f\right)_k]\\ &=\frac{1}{g}\sum_{m=0}^1\frac{\left(a^{m_\diamond}\diamond b^{1_{\diamond}}\right)_2}{m!1!} \sum_{j=0}^m\binom{m}{j}(\ln \circ g)^{m-j}\frac{d^j}{d(f)^j}(f)\\ &\qquad+\frac{1}{g^2}\sum_{m=0}^0\frac{\left(a^{m_\diamond}\diamond b^{2_{\diamond}}\right)_2}{m!2!} \sum_{j=0}^m\binom{m}{j}(\ln \circ g)^{m-j}\frac{d^j}{d(f)^j}\left(f(f-1)\right)\\ &=\frac{1}{g}\left[\frac{\left(b^{1_\diamond}\right)_2}{1!}\binom{0}{0}f+\frac{\left(a^{1_\diamond}\diamond b^{1_\diamond}\right)_2}{1!1!}\left(\binom{1}{0}(\ln \circ g)f+\binom{1}{1}\frac{d}{d(f)}f\right)\right]\\ &\qquad+\frac{1}{g^2}\left[\frac{\left(b^{2_\diamond}\right)_2}{2!}\binom{0}{0}f\left(f-1\right)\right]\\ &=\frac{1}{g}\left[g_2f+2f_1g_1\left((\ln \circ g) f+1\right)\right]+\frac{1}{g^2}\left[g_1^2f\left(f-1\right)\right] \end{align*} Comparison of the results of these sums shows the validity of the claim: \begin{align*} \sum_{k=1}^{2}B_{2,k}^{f\cdot(\ln \circ g)}&=\sum_{k=1}^{2}(\ln \circ g)^kB_{2,k}^f\\ &\qquad+\sum_{k=1}^{2}\sum_{m=0}^{2-k}\sum_{j=0}^m\binom{m}{j} \frac{(\ln \circ g)^{m-j}}{g^k}\frac{d^j}{d(f)^j}[\left(f\right)_k]\frac{\left(a^{m_\diamond}\diamond b^{k_{\diamond}}\right)_2}{m!k!}\\ \end{align*} Case $n=3$: Each of the three sums of the identity is calculated separately. 
\begin{align*} \sum_{k=1}^{3}&B_{3,k}^{f\cdot(\ln \circ g)}\\ &=B_{3,1}^{f\cdot(\ln \circ g)}+B_{3,2}^{f\cdot(\ln \circ g)}+B_{3,3}^{f\cdot(\ln \circ g)}\\ &=\frac{d^3}{{dx}^3}\left(f(\ln \circ g)\right)+3\frac{d}{dx}\left(f(\ln \circ g)\right)\frac{d^2}{{dx}^2}\left(f(\ln \circ g)\right) +\left(\frac{d}{dx}(\ln \circ g)\right)^3\\ &=\left(f_3(\ln \circ g)+3f_2\frac{g_1}{g}+3f_1\frac{g_2}{g}-3f_1\frac{g_1^2}{g^2}+f\frac{g_3}{g} +2f\frac{g_1^3}{g^3}-3f\frac{g_1g_2}{g^2}\right)\\ &\qquad+3\left(f_1(\ln \circ g)+f\frac{g_1}{g}\right)\left(f_2(\ln \circ g)+2f_1\frac{g_1}{g}+f\frac{g_2}{g}-f\frac{g_1^2}{g^2}\right)\\ &\qquad+\left(f_1(\ln \circ g)+f\frac{g_1}{g}\right)^3\\ &=\left(f_3(\ln \circ g)+3f_2\frac{g_1}{g}+3f_1\frac{g_2}{g}-3f_1\frac{g_1^2}{g^2}+f\frac{g_3}{g} +2f\frac{g_1^3}{g^3}-3f\frac{g_1g_2}{g^2}\right)\\ &\qquad+\left(3f_1f_2(\ln \circ g)^2+3ff_2(\ln \circ g)\frac{g_1}{g}+6f_1^2(\ln \circ g)\frac{g_1}{g}+6ff_1\frac{g_1^2}{g^2}\right.\\ &\qquad\qquad\left.+3ff_1(\ln \circ g)\frac{g_2}{g}+3f^2\frac{g_1g_2}{g^2}-3ff_1(\ln \circ g)\frac{g_1^2}{g^2}-3f^2\frac{g_1^3}{g^3}\right)\\ &\qquad+\left(f_1^3(\ln \circ g)^3+3ff_1^2(\ln \circ g)^2\frac{g_1}{g}+3f^2f_1(\ln \circ g)\frac{g_1^2}{g^2}+f^3\frac{g_1^3}{g^3}\right) \\ \\ \sum_{k=1}^{3}&(\ln \circ g)^kB_{3,k}^f\\ &=(\ln \circ g)B_{3,1}^f+(\ln \circ g)^2B_{3,2}^f+(\ln \circ g)^3B_{3,3}^f\\ &=(\ln \circ g)f_3+3(\ln \circ g)^2f_1f_2+(\ln \circ g)^3f_1^3 \end{align*} \begin{align*} \sum_{k=1}^{3}&\sum_{m=0}^{3-k}\sum_{j=0}^m\binom{m}{j} \frac{(\ln \circ g)^{m-j}}{g^k}\frac{d^j}{d(f)^j}[\left(f\right)_k]\frac{\left(a^{m_\diamond}\diamond b^{k_{\diamond}}\right)_3}{m!k!}\\ &=\sum_{k=1}^{3}\frac{1}{g^k}\sum_{m=0}^{3-k}\frac{\left(a^{m_\diamond}\diamond b^{k_{\diamond}}\right)_3}{m!k!} \sum_{j=0}^m\binom{m}{j}\frac{(\ln \circ g)^{m-j}}{g^k}\frac{d^j}{d(f)^j}[\left(f\right)_k]\\ &=\frac{1}{g}\sum_{m=0}^2\frac{\left(a^{m_\diamond}\diamond b^{1_{\diamond}}\right)_3}{m!1!} \sum_{j=0}^m\binom{m}{j}(\ln \circ g)^{m-j}\frac{d^j}{d(f)^j}(f)\\ &\qquad+\frac{1}{g^2}\sum_{m=0}^1\frac{\left(a^{m_\diamond}\diamond b^{2_{\diamond}}\right)_3}{m!2!} \sum_{j=0}^m\binom{m}{j}(\ln \circ g)^{m-j}\frac{d^j}{d(f)^j}\left(f(f-1)\right)\\ &\qquad+\frac{1}{g^3}\sum_{m=0}^0\frac{\left(a^{m_\diamond}\diamond b^{3_{\diamond}}\right)_3}{m!3!} \sum_{j=0}^m\binom{m}{j}(\ln \circ g)^{m-j}\frac{d^j}{d(f)^j}\left(f(f-1)(f-2)\right)\\ &=\frac{1}{g}\left[\frac{\left(b^{1_\diamond}\right)_3}{1!}\binom{0}{0}f +\frac{\left(a^{1_\diamond}\diamond b^{1_\diamond}\right)_3}{1!1!}\left[\binom{1}{0}(\ln \circ g) f +\binom{1}{1}\frac{d}{d(f)}f\right]\right.\\ &\qquad\qquad+\left.\frac{\left(a^{2_\diamond}\diamond b^{1_\diamond}\right)_3}{2!1!} \left[\binom{2}{0}(\ln \circ g)^2f+\binom{2}{1}(\ln \circ g)\frac{d}{df}f +\binom{2}{2}\frac{d^2}{{df}^2}f\right]\right]\\ &\qquad+\frac{1}{g^2}\left[\frac{\left(b^{2_\diamond}\right)_3}{2!}\binom{0}{0}f\left(f-1\right)\right.\\ &\qquad\qquad+\left.\frac{\left(a^{1_\diamond}\diamond b^{2_\diamond}\right)_3}{1!2!}\left[\binom{1}{0}(\ln \circ g) f(f-1) +\binom{1}{1}\frac{d}{d(f)}f(f-1)\right]\right]\\ &\qquad+\frac{1}{g^3}\left[\frac{\left(b^{3_\diamond}\right)_3}{3!}\binom{0}{0}f\left(f-1\right)(f-2)\right]\\ &=\frac{1}{g}\left[g_3f+\left(3f_1g_2+3f_2g_1\right)\left[(\ln \circ g)f+1\right] +\left(3f_1^2g_1\right)\left[(\ln \circ g)^2f+2(\ln \circ g)\right]\right]\\ &\qquad+\frac{1}{g^2}\left[3g_1g_2f\left(f-1\right) +3f_1g_1^2\left[(\ln \circ g) f(f-1)+2f-1\right]\right]\\ &\qquad+\frac{1}{g^3}\left[g_1^3f(f-1)(f-2)\right]\\ \end{align*} Comparison of 
the results of these sums shows the validity of the claim: \begin{align*} \sum_{k=1}^{3}B_{3,k}^{f\cdot(\ln \circ g)}&=\sum_{k=1}^{3}(\ln \circ g)^kB_{3,k}^f\\ &\qquad+\sum_{k=1}^{3}\sum_{m=0}^{3-k}\sum_{j=0}^m\binom{m}{j} \frac{(\ln \circ g)^{m-j}}{g^k}\frac{d^j}{d(f)^j}[\left(f\right)_k]\frac{\left(a^{m_\diamond}\diamond b^{k_{\diamond}}\right)_3}{m!k!} \end{align*}
What are the Betti numbers of a double pinched torus?
The Betti numbers of a space are really the ranks of the integral homology groups of that space, so let's try to work those out. If you know about the Mayer-Vietoris homology long exact sequence, you should be able to extract the result fairly quickly. Let $X$ denote the twice pinched torus. Notice that both "halves" of $X$ are homeomorphic to 2-spheres, and that we may decompose $X$ into two open subsets $U \simeq S^2$, $V \simeq S^2$, the union of whose interiors is all of $X$, by taking each open to be one half of $X$ extended a bit into the other half in such a way that it deformation retracts back to a 2-sphere. You should now be familiar with the homology of $U$, $V$ and $U \cap V$, the latter of which deformation retracts to the disjoint union of two points, and you may infer the homology of $X$, hence the Betti numbers of $X$, from the Mayer Vietoris homology long exact sequence associated to this situation, which looks like: $... \to H_n(U \cap V) \to H_n(U) \oplus H_n(V) \to H_n(X) \xrightarrow{\delta} H_{n-1}(U \cap V) \to ...$ Note: From the mere fact that $X$ is connected, you know that $H_0(X) = \mathbb{Z}$, hence that the zeroth Betti number is 1; because $X$ admits a CW complex structure of dimension 2 composed of two 0-cells, two 1-cells and two 2-cells, you know that all the Betti numbers $\geq$ 3 must vanish, and that the first and second Betti numbers must be either 0, 1 or 2. The above method allows you to get to the end of the story.
What is the difference between a vector and its transpose?
Basically, the difference is one of dual spaces. Low-level explanation: a vector is acted on by matrices by $$ v \mapsto Av. $$ The transpose of a vector (also called a covector) is acted on by $$ a \to aA, $$ i.e. we multiply on the left for vectors and the right for covectors. The product of a covector $u^T$ and a vector $v$, in that order, is a number, which is the same as $\langle u, v \rangle$. Mid-level, linear algebra explanation: Let $V$ be a vector space over a field $k$. The transpose takes the vectors $v \in V$ to linear maps in the dual space, normally called $V^* $. This is originally thought of as the set of linear maps on $V$: that is, $\alpha \in V^*$ is a function $V \to k$ such that $$ \alpha(\lambda u+\mu v) = \lambda \alpha(u) + \mu \alpha(v) $$ for all $u,v \in V$, $\lambda,\mu \in k$. Now, if $V$ is also equipped with an inner product $\langle \cdot , \cdot \rangle : V \times V \to k$, and satisfies some other conditions (finite-dimensional's good enough for now) one can show that every element of $V^*$ can be written in the form $$ \alpha(\cdot) = \langle u , \cdot \rangle, $$ for some $u \in V$. This is basically what $u^{T}$ is. To see how matrices act on these things, consider the member of $k$ given by $u^T A v$, where $A$ is a linear map $V \to V$ (or matrix, if you choose a basis). This can either be written as $$ u^T (Av) = \langle u, Av \rangle, $$ or, using $u^T A = (A^T u)^T$, as $$ (u^T A) v = (A^T u)^T v = \langle A^T u, v \rangle. $$ (This is normally actually the definition of the adjoint/transpose of $A$, but I'm trying to explain from a basic point-of-view.) You can go on to talk about bases, how to change them, and so on. Wikipedia no doubt has some good articles on this, since its general linear algebra coverage is excellent. (There's a stupidly high-powered category-theoretic answer too, but I'll spare all of us the details: basically the transpose is a contravariant functor that takes the category of vector spaces over a field to its opposite category, and so on...)
Expected value of rolling a $5$ followed by a $6$ different than rolling two consecutive sixes?
When you roll a six and then a non-six, there is no way you can "complete" your two sixes with one more roll. But if you roll a five, and then a non-six, occasionally that non-six is a five, in which case you can complete your pair in the next roll. This makes it easier to get a $56$ earlier than a $66$. For example, the ways to get your first $66$ in exactly three rolls is the five possible sequences $[1-5]66$, while the ways to get your first $56$ in exactly three rolls is the six sequences $[1-6]56$. One reason this is counter-intuitive: We play the following game with two players. A die is tossed repeatedly, until we get a six immediately followed by a 5 or 6. Player 1 wins if the last two rolls are 66, player 2 if they were 65. (The pattern $65$ behaves exactly like $56$ here, since neither pattern overlaps itself.) This game is "even" - each player has a probability of $\frac{1}{2}$ of winning. So this even-ness makes us intuit that the expectation of the first occurrence of each is also the same. But that is a fallacy. When the first is 66 at roll $n$, it is possible for the first 65 to be at roll $n+1$. On the other hand, if the first is 65 at roll $n$, it takes a minimum of $n+2$ rolls to get 66. So when $66$ wins, the first $65$ is expected to follow a little faster than the reverse case.
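A quick simulation makes the gap in expected waiting times visible (the number of trials is arbitrary):

```python
import random

def average_rolls_until(pattern, trials=200_000):
    total = 0
    for _ in range(trials):
        prev, n = None, 0
        while True:
            r = random.randint(1, 6)
            n += 1
            if (prev, r) == pattern:
                break
            prev = r
        total += n
    return total / trials

print(average_rolls_until((5, 6)))   # roughly 36
print(average_rolls_until((6, 6)))   # roughly 42, noticeably longer
```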
$ \lim_{x \to 0} \cos^{-1}(1-x)/\sqrt x$
Use L'Hospital's rule: $$\lim_{x \to 0} \frac{\cos^{-1}(1-x)}{\sqrt{x}}=\lim_{x \to 0} \frac{-\frac{1}{\sqrt{1-(1-x)^2}}\cdot (-1)}{\frac1{2\sqrt{x}}}=\lim_{x \to 0} \frac{2\sqrt{x}}{\sqrt{2x-x^2}}=\lim_{x \to 0} \frac{2}{\sqrt{2-x}}=\sqrt{2}.$$
Doubt on proof of equivalence of conditions for uniform integrability in $L^1$
It seems that the choice $\delta=\varepsilon/(2N)$ works well. Finiteness of the measure space is used to show boundedness in $\mathbb L^1$ of a uniformly integrable family. Indeed, if the measure space is infinite, then a constant function is not integrable. Therefore, we cannot show that the family $\left(\int \left|X\right|\mathbf 1\{\left|X\right|\lt R\}\right)_{X\in A}$ is bounded. For example, assume that we work with the real line with Lebesgue measure and $X_n=\mathbf 1([n,2n))$. This family would be uniformly integrable if we consider the standard definition, but this is not bounded in $\mathbb L^1$. What can be misleading in the statement in the opening post is that we talk about random variable, which supposes that we work with a probability space. In particular, there is no need to specify that the measure is finite. Note that the concept of uniform integrability can be extended to infinite measure spaces. A family of functions $\left(f_i\right)_{i\in I}$ is uniformly integrable if for each positive $\varepsilon$, we can find an integrable function $g$ such that $$\sup_{i\in I}\int\left|f_i\right|\mathbf 1\left\{\left|f_i\right|\gt g\right\}\lt \varepsilon.$$
System of Multiplicative Differential Equations
Hint: I assume $s,c,n$ to be constants. From the last equation, you obtain $z(t)=z(0)\exp(n t)$ (hence $z(t)$ is known from now on). Then $$y'/x'=\dfrac{dy}{dx}=\dfrac{cx^{2/3}z^{1/3}}{sx^{1/2}y^{1/6}z^{1/3}}=\dfrac{c}{s}\dfrac{x^{1/6}}{y^{1/6}}$$ $$y^{1/6}dy=\dfrac{c}{s}x^{1/6}dx \implies \dfrac{6}{7}y^{7/6}=\dfrac{6}{7}\dfrac{c}{s}x^{7/6}+\dfrac{6}{7}k \implies y(x)=\left[k+\dfrac{c}{s}x^{7/6} \right]^{6/7}$$ Substitute this into the first equation $$x'=sx^{1/2}\left[\left[k+\dfrac{c}{s}x^{7/6} \right]^{6/7} \right]^{1/6}\left[ z(0)\exp(n t)\right]^{1/3}$$ $$\dfrac{dx}{dt}=sx^{1/2}\left[k+\dfrac{c}{s}x^{7/6} \right]^{1/7} \left[ z(0)\exp(n t)\right]^{1/3}.$$ Note that the last equation is separable. The solution of this differential equation is now only a problem of integration. I don't see a method to write the solution in a closed form. But if you are able to determine $x(t)$ then $y(t)$ is given by $$y(t)=y(x(t))=\left[k+\dfrac{c}{s}x(t)^{7/6} \right]^{6/7}$$
Additive basis of order $2$ (II)
I finally found this article : Cassels bases (cf. Theorem 6, p. 11).
Are there infinite prime numbers of the form $q = p^2 -2$, with $p$ prime?
We have no idea. We don't even know if there are infinitely many primes of the form $X^2 - 2$. In fact, we don't have a single example of a quadratic polynomial that we can prove takes infinitely many prime values. This is far beyond current scope. This is closely related to Schinzel's Hypothesis H.
Joint distribution table
(a) If $B$ in your question denotes "binomial" then your answer is correct. (b) For convenience let $Z$ denote the number of tails obtained in the second toss. Then $X-Z$ is the number of tails in the first toss and $Y-Z$ is the number of tails obtained in the last $2$ tosses. This indicates that they are independent and also makes clear how they are distributed. Now for every pair $\left(i,j\right)$ with $i\in\left\{ 0,1,2\right\} $ and $j\in\left\{ 0,1,2,3\right\} $ find $P\left(X=i,Y=j\right)$ using: $$\begin{aligned}P\left(X=i,Y=j\right) & =\sum_{k=0}^{1}P\left(X=i,Z=k,Y=j\right)\\ & =\sum_{k=0}^{1}P\left(X-Z=i-k,Z=k,Y-Z=j-k\right)\\ & =\sum_{k=0}^{1}P\left(X-Z=i-k\right)P\left(Z=k\right)P\left(Y-Z=j-k\right)\\ & =\frac{1}{2}\sum_{k=0}^{1}P\left(X-Z=i-k\right)P\left(Y-Z=j-k\right) \end{aligned} $$ The marginals can be calculated by means of: $P(X=i)=\sum_{j=0}^3P(X=i,Y=j)$ for $i=0,1,2$ $P(Y=j)=\sum_{i=0}^2P(X=i,Y=j)$ for $j=0,1,2,3$ Actually the marginals are calculated in (a) already, so check them.
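If it helps, the whole table can be enumerated by brute force; here I am assuming the experiment is four fair coin tosses with $X$ the number of tails among tosses 1-2 and $Y$ the number of tails among tosses 2-4, which is what the decomposition via $Z$ suggests:

```python
from itertools import product
from fractions import Fraction

table = {}
for outcome in product("HT", repeat=4):          # four fair coin tosses
    x = outcome[:2].count("T")                   # tails among tosses 1-2
    y = outcome[1:].count("T")                   # tails among tosses 2-4
    table[x, y] = table.get((x, y), 0) + Fraction(1, 16)

for i in range(3):                               # rows X = 0, 1, 2; columns Y = 0, 1, 2, 3
    print([table.get((i, j), Fraction(0)) for j in range(4)])
```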
Product/coproduct properties: If $N_1\simeq N_2$ in some category, then $N_1\times N_3\simeq N_2\times N_3$?
You're mixing things up. Given two modules $N$, $N'$, their product is a module $N \times N'$ that satisfies some universal property that you certainly know. It turns out that this product is given by the external direct product $N \oplus N'$, which is the set of pairs $\{ (x,x') \mid x \in N, x' \in N' \}$, endowed with a module structure that's easy to define. It does not matter whether or not $N$ and $N'$ are submodules of the same module. This is not the same thing as the internal direct product of two submodules $N, N' \subset M$. Given two such submodules, if they satisfy $N \cap N' = \{0\}$, then they form a direct sum, unfortunately denoted by the same symbol, $N \oplus N'$. This is the set of elements of $M$ that can be written as a sum of an element of $N$ with an element of $N'$. For this construction it's of course crucial that $N$ and $N'$ are submodules of the same module. It is then the case that the internal direct product, if it exists, is isomorphic to the external direct product – phew! Now, it is indeed the case that if $N_1$ is isomorphic to $N_2$, then the products (which are external direct products in the category of modules) $N_1 \times N'$ is isomorphic to $N_2 \times N'$. This follows from the universal property and it's a good exercise. It works in every category. However, it is not the case that if $N_1$, $N_2$, $N'$ are submodules of a module $M$ and $N_1$ is abstractly isomorphic to $N_2$, then the internal direct products $N_1 \oplus N'$ and $N_2 \oplus N'$ are both defined at the same time and isomorphic. For example, consider the $\mathbb{Z}$-module $M = \mathbb{Z}^2$, and $N_1 = \langle (1,0) \rangle$, $N_2 = \langle (0,1) \rangle = N'$. Then the internal direct product $N_1 \oplus N'$ is defined and equal to $M$, whereas $N_2$ and $N'$ do not form an internal direct product (their sum $N_2 + N'$ is just $N_2$ itself).
Prove that $|x^3|$ is continuous.
We know that $g(x) = x^{3}$ is continuous. Let us show that the function $f(x) = |x|$ is continuous at an arbitrary point $a \in \mathbb{R}$. To do this, let $\delta = \epsilon$ and suppose $|x-a| \le \delta$. Then, because of the triangle inequality, we have: $$||x|-|a||\le |x-a|\le \delta = \epsilon$$ which proves $f$ is continuous at $a$. Now, we know that the composite of continuous functions is continuous, so if $h(x) = |x^{3}|$ then $$h(x) = (f\circ g)(x)$$ is continuous.
Equivalence of gluability axiom for sheaves
Why do you think the first map needs to be surjective? This is easily seen to be false by example: consider holomorphic functions on $\mathbb C$. The natural map $\mathcal O_{\mathbb C}(\mathbb C) \to \mathcal O_{\mathbb C}(\mathbb C \setminus \left\{0\right\})$ is not surjective since $f(z) = 1/z$ is not in its image, so pairing with the restriction map to any other open set $U \not \ni 0$ you get a map of this type which is not surjective. The hypothesis (existence of two sections that agree on $U \cap V$) is that we have a pair $(f_u,f_v)$ in the middle spot which equalizes the double arrows (i.e. lies in the difference kernel of the two arrows, if I'm getting my terminology correct). Now the sheaf axiom is equivalent to exactness, i.e. the difference kernel of the double arrows is equal to the image of the first arrow. In other words, the conclusion (existence of a section that glues $f_u$ and $f_v$) is that we can lift the pair to a section on $X$, i.e. the pair is in the image of the first map.
Find the range of values in a geometric progression for which $g_1<0$ and $g_3>4g_2-3g_1$
Assuming $g_1=a$, $g_2=ar$, $g_3=ar^2$, we have $$a <0$$ and $$ar^2>4ar-3a$$ Therefore we can write $$r^2-4r+3<0$$ which has solution $$ 1 <r <3$$ Note that the possibility $a >0$ and $r <0$, assuming $g_1=ar$, $g_2=ar^2$, $g_3=ar^3$, would lead to the same solution $ 1 <r <3\,$, which in this case contradicts the assumption $r <0$. So the solution above is the only possible one.
A bicentric quadrilateral $ABCD$
Let $\alpha,\beta,\gamma,\delta$ be the angles at vertices $A,B,C,D$, respectively. Without loss of generality we can assume that the angles $\alpha,\beta$ are acute. In this case both circumcenter $O$ and incenter $I$ lie inside $\triangle ABK$, where $K$ is the intersection point of the diagonals. We have: $$\angle BID=2\pi-\frac12\beta-\gamma-\frac12\delta=\frac\pi2+\alpha, \quad \angle BOD=2\alpha.$$ Let $\rho(P,P_1P_2)$ denote the distance of a point $P$ to a line drawn through the points $P_1$ and $P_2$. Then $$ \frac{\rho(O,BD)}{\rho(I,BD)}=\frac{A_{BOD}}{A_{BID}} =\frac{OB\cdot OD\cdot\sin(\angle BOD)}{IB\cdot ID\cdot\sin(\angle BID)} =\frac{R^2\sin2\alpha}{\frac{r}{\sin\frac\beta2}\frac{r}{\sin\frac\delta2}\sin\left(\frac\pi2+\alpha\right)}=\frac{R^2}{r^2}\sin\alpha\sin\beta.\tag1 $$ The same result will be obtained for the ratio of the distances to the diagonal $AC$. Thus: $$ \frac{\rho(O,BD)}{\rho(I,BD)}=\frac{\rho(O,AC)}{\rho(I,AC)}\iff \frac{\rho(O,AC)}{\rho(O,BD)}=\frac{\rho(I,AC)}{\rho(I,BD)}, $$ which means that the points $K,I,O$ are collinear. As a side result we obtain: $$\begin{align} \frac{KO}{KI}&=\frac{R^2}{r^2}\sin\alpha\sin\beta\\ &=\frac{(ab+cd)(ac+bd)(ad+bc)}{16\,abcd}\frac{(a+c)(b+d)}{abcd}\frac{2\sqrt{abcd}}{ad+bc}\frac{2\sqrt{abcd}}{ab+cd}\\ &=\frac{(ac+bd)(a+c)(b+d)}{4abcd},\tag2 \end{align} $$ where formulas from wikipedia page were used to express the radii and angles via the sides of the quadrilateral.
Any rational map can be extended to codimension one.
The statement you have written is not correct, for several reasons. There are two related (true) theorems, which I'll state now. (Both can be found in Shafarevich Chapter II --- I don't have the book to hand, so I can't give precise references.) Theorem A: Let $f: X \dashrightarrow \mathbf A^1$ be a rational function on a smooth variety $X$. If $f$ is regular along every codimension-1 subvariety $Y \subset X$, then $f$ is regular. Sketch of proof: Everything is local on $X$, so we can assume $X$ is affine, embedded in some $\mathbf A^n$. Then $f=g/h$ for some polynomials $g$ and $h$ with no common factor. The indeterminacy locus of $f$ is then the zero locus of $h$, which is either empty or codimension 1 in $X$; by hypothesis the latter cannot happen, so $f$ is regular. $\ \square$ Theorem B: Let $f: X \dashrightarrow \mathbf P^m$ be a rational map from a smooth variety $X$ to projective space. Then the indeterminacy locus of $f$ has codimension at least 2 in $X$. Sketch of proof: Let's stick to the case $m=1$ for simplicity of notation; the general case works in the same way. Reducing to the affine case as above, we can write $f$ as $[g,h]$ where $g$ and $h$ are polynomials. If $g$ and $h$ have a common factor we can remove it, so again we may assume they don't. The indeterminacy locus of $f$ is $\{g=0\} \cap \{h=0\}$, which therefore has codimension at least 2 in $X$. $\ \square$ It seems as though you somehow mixed up these two theorems. Let me close with a couple of remarks: Theorems A and B almost seem to be in conflict with each other! The key point to notice is that in Theorem B we are talking about a map to projective space. Roughly speaking, in this case there are "more places" for the codimension-1 subsets of $X$ to map to, hence less indeterminacy. Neither Theorem A nor B is true without some hypotheses on the singularities of $X$. (I assumed "smooth" for simplicity; you can check that "normal" would also suffice.) For example, let $C$ be a nodal cubic curve and $n: \mathbf P^1 \rightarrow C$ its normalisation. Then Theorem B fails for the rational map $n^{-1}$. (A counterexample for Theorem A is a bit harder to find!) Update: This addresses the extra questions that the OP added later. If the linear system $|L|$ decomposes into mobile and fixed part as $|M|+F$, then we extend it to all codimension-1 points by simply "forgetting" about $F$. That is: outside $F$, the maps given by $|L|$ and $|M|$ are the same, so the map given by $|M|$ extends that given by $|L|$. Since $|M|$ is mobile, it has no fixed components, so the corresponding map only fails to be defined in codimension 2. (I note that none of this is specific to surfaces. One thing that is specific to surfaces is that a mobile linear system has finite base locus, and such a complete linear system is actually semi-ample (some multiple is basepoint-free) by the Zariski--Fujita theorem.) In your example of the pencil of conics on a cubic surface, the pencil $|M|$ of conics is basepoint-free, so the corresponding map is actually a morphism extending your original rational map. (To see that the pencil of conics is actually basepoint-free, let $C$ be a conic in the pencil. Then $C+l$ is a plane section of the cubic, so $(C+l)^2=3$. On the other hand $C \cdot l = 2$ by Bezout, and $l^2=-1$, so we get $C^2=0$. Hence two distinct conics in the pencil are disjoint.)
Lambert W function calculation?
Let us consider the function $$f(x)=x \log(x)-a$$ Effectively, the solution of $f(x)=0$ is given by $$x=\frac{a}{W(a)}$$ and, if I properly understood, you look for a computation method for getting $W(a)$. By definition, $W(a)$ satisfies $a=W(a)e^{W(a)}$, so Newton's method seems to be (and is) very good. I strongly suggest you have a look at http://en.wikipedia.org/wiki/Lambert_W_function. In the paragraph entitled "Numerical evaluation", they give Newton and Halley formulae (the latter has been used extensively by Corless et al. to compute $W(a)$). In the same Wikipedia page, you will find very nice and efficient approximations of $W(a)$ for small and large values. These estimates will allow you to start really close to the solution. If I may underline one thing which is really nice: all derivatives of $W(a)$ can be expressed as functions of $a$ and $W(a)$ itself, and this is extremely convenient. You might be interested in http://people.sc.fsu.edu/~jburkardt/cpp_src/toms443/toms443.html where the source code is available.
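As a minimal sketch of the Newton iteration $w \leftarrow w - \dfrac{we^w-a}{(w+1)e^w}$ (the starting guess $\log(1+a)$ and the tolerance are my own pragmatic choices, not taken from the references above):

```python
from math import exp, log

def lambert_w(a, tol=1e-12, max_iter=50):
    """Newton's method on w*exp(w) - a = 0 (principal branch, a > 0)."""
    w = log(1.0 + a)                          # rough starting guess
    for _ in range(max_iter):
        e = exp(w)
        step = (w * e - a) / (e * (w + 1.0))  # f / f' for f(w) = w*e^w - a
        w -= step
        if abs(step) < tol:
            break
    return w

a = 5.0
w = lambert_w(a)
print(w, w * exp(w))   # the second number should reproduce a
print(a / w)           # the solution x of x*log(x) = a, as above
```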
Linear Programming Books
The other classics besides Winston are Hillier and Lieberman's Introduction to Operations Research and Chvátal's Linear Programmming. I learned linear programming out of Bob Vanderbei's Linear Programming: Foundations and Extensions, which is also a fine book. The last time I taught linear programming I used Dave Rader's new book, Deterministic Operations Research, and was happy with it. As for a comparison, Winston focuses on how the different methods work and gives lots of examples but doesn't spend much time on theory. Hillier and Lieberman is at a slightly higher level than Winston, with a more leisurely pace and a little more theory but with fewer examples. Chvátal and Vanderbei have a more noticeable focus on the theory. Rader takes a different approach in that the simplex method does not appear until about halfway through the book. Instead, he spends a lot of time early on algorithm design and on what an algorithm for solving linear programs might look like so that when you finally do see the simplex method the reaction is closer to "Of course" than to "Where in the world did that come from?" He doesn't do the tableau form of the simplex method, which, while a plus in my opinion, may make it hard to understand his version of the simplex method if you're used to the tableau. Many LP books spend little time on how to construct linear programming models (i.e, how to come up with variables, an objective function, and constraints that describe the problem you're trying to solve). Of these five, Winston and Rader discuss construction of LP models the most.
Let $ \tau(\omega) =\inf \{ t \in (0, \infty): B_t(\omega) = 10\}$. Does it follow $P( \tau = s) = 0?$
Note that $P(\tau = s) \leq P(B_s=10)=0$ since the event $\{\tau=s\}$ is contained in the event $\{B_s=10\}$, and since $B_s$ has a continuous distribution. Note that this only shows that the law of $\tau$ is atomless, which doesn't necessarily imply that a density exists with respect to Lebesgue measure (think Cantor-type or local-time measures). If you want an explicit density for $\tau$ see theorem 2.32 on page 56 of this book.
$(\mathbb{Q},|\cdot|_{\mathbb{Q}})$ is not connected
The symbol $|\cdot|_{\mathbb{Q}}$ is not a discrete metric. It is not a metric at all since it only takes 1 argument! It is most likely a norm, probably the Euclidean one that induces a metric $d(x,y)=|x-y|_{\mathbb{Q}}$. Assuming that $|x|_{\mathbb{Q}}$ is the standard Euclidean norm, i.e. the absolute value then the induced metric is not bounded, since $d(n,0)=n$ for natural $n$. And to conclude that $\mathbb{Q}$ is not connected it is enough to consider the sphere centered at $0$ with radius $\sqrt{2}$.
What does this logic question mean?
You can abbreviate the statements as $M$: multi-user state $N$: operating normally $K$: kernel functioning $I$: in interrupt mode so that your statments are $$M \iff N, \quad N \implies K,\quad \neg K \vee I, \quad \neg M \implies I, \quad I.$$ You want truth values so that all these statments are true. Clearly $I$ must be true, and in this case so are the third and fourth statements. Just choose values for $M$, $N$, and $K$ that make the first two statements true. One possibility is if $M$, $N$, and $K$ all share the same truth value.
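If you prefer to enumerate rather than reason it out, a brute-force check over all truth values does the job:

```python
from itertools import product

# M: multi-user state, N: operating normally, K: kernel functioning, I: interrupt mode
for M, N, K, I in product([False, True], repeat=4):
    statements = [M == N, (not N) or K, (not K) or I, M or I, I]
    if all(statements):
        print(M, N, K, I)   # prints every consistent assignment, e.g. all four True
```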
How to see there exists const $C$ such that $\frac{d^{n-m}}{dx^{n-m}}(1-x^2)^n=C(1-x^2)^m\frac{d^{n+m}}{dx^{n+m}}(1-x^2)^n$
Suppose we seek to determine the constant $Q$ in the equality $$ Q_{n,m} \left(\frac{d}{dz}\right)^{n-m} (1-z^2)^n = (1-z^2)^m \left(\frac{d}{dz}\right)^{n+m} (1-z^2)^n$$ where $n\ge m.$ We will compute the coefficients on $[z^q]$ on the LHS and the RHS. Writing $1-z^2 = (1+z)(1-z)$ we get for the LHS $$\sum_{p=0}^{n-m} {n-m\choose p} {n\choose p} p! (1+z)^{n-p} \\ \times {n\choose n-m-p} (n-m-p)! (-1)^{n-m-p} (1-z)^{m+p} \\ = (n-m)! (-1)^{n-m} \sum_{p=0}^{n-m} {n\choose p} {n\choose n-m-p} (1+z)^{n-p} (-1)^p (1-z)^{m+p}.$$ Extracting the coefficient we get $$(n-m)! (-1)^{n-m} \sum_{p=0}^{n-m} {n\choose p} {n\choose n-m-p} (-1)^p \\ \times \sum_{k=0}^{n-p} {n-p\choose k} (-1)^{q-k} {m+p\choose q-k}.$$ We use the same procedure on the RHS and merge in the $(1-z^2)^m$ term to get $$(n+m)! (-1)^{n+m} \sum_{p=0}^{n+m} {n\choose p} {n\choose n+m-p} (-1)^p \\ \times \sum_{k=0}^{n+m-p} {n+m-p\choose k} (-1)^{q-k} {p\choose q-k}.$$ Working in parallel with LHS and RHS we treat the inner sum of the LHS first, putting $${m+p\choose q-k} = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q-k+1}} (1+z)^{m+p} \; dz$$ to get $$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} (1+z)^{m+p} \sum_{k=0}^{n-p} {n-p\choose k} (-1)^{q-k} z^k \; dz \\ = \frac{(-1)^q}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} (1+z)^{m+p} (1-z)^{n-p} \; dz.$$ Adapt and repeat to obtain for the inner sum of the RHS $$\frac{(-1)^q}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} (1+z)^{p} (1-z)^{n+m-p} \; dz.$$ Moving on to the two outer sums we introduce $${n\choose n-m-p} = \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{1}{w^{n-m-p+1}} (1+w)^n \; dw$$ to obtain for the LHS $$\frac{1}{2\pi i} \int_{|w|=\gamma} \frac{1}{w^{n-m+1}} (1+w)^n \\ \times \frac{(-1)^q}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} (1+z)^{m} (1-z)^{n} \sum_{p=0}^{n-m} {n\choose p} (-1)^p w^p \frac{(1+z)^p}{(1-z)^p} \; dz\; dw \\ = \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{1}{w^{n-m+1}} (1+w)^n \\ \times \frac{(-1)^q}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} (1+z)^{m} (1-z)^{n} \left(1-w\frac{1+z}{1-z}\right)^n \; dz\; dw \\ = \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{1}{w^{n-m+1}} (1+w)^n \\ \times \frac{(-1)^q}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} (1+z)^{m} (1-z-w-wz)^n \; dz\; dw.$$ Repeat for the RHS to get $$\frac{1}{2\pi i} \int_{|w|=\gamma} \frac{1}{w^{n+m+1}} (1+w)^n \\ \times \frac{(-1)^q}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{q+1}} (1-z)^{m} (1-z-w-wz)^n \; dz\; dw.$$ Extracting coefficients from the first integral (LHS) we write $$(1-z-w-wz)^n = (2-(1+z)(1+w))^n \\ = \sum_{k=0}^n {n\choose k} (-1)^k (1+z)^k (1+w)^k 2^{n-k}$$ and the inner integral yields $$(-1)^q \sum_{k=0}^n {n\choose k} (-1)^k {m+k\choose q} (1+w)^k 2^{n-k}$$ followed by the outer one which gives $$(-1)^q \sum_{k=0}^n {n\choose k} (-1)^k {m+k\choose q} {n+k\choose n-m} 2^{n-k}.$$ For the second integral (RHS) we write $$(1-z-w-wz)^n = ((1-z)(1+w)-2w)^n \\ = \sum_{k=0}^n {n\choose k} (1-z)^k (1+w)^k (-1)^{n-k} 2^{n-k} w^{n-k}$$ and the inner integral yields $$(-1)^q \sum_{k=0}^n {n\choose k} {m+k\choose q} (-1)^q (1+w)^k (-1)^{n-k} 2^{n-k} w^{n-k}$$ followed by the outer one which produces $$\sum_{k=0}^n {n\choose k} {m+k\choose q} {n+k\choose k+m} (-1)^{n-k} 2^{n-k}.$$ The two sums are equal up to a sign and the RHS for the coefficient on $[z^q]$ is obtained from the LHS by multiplying by $$\frac{(n+m)!}{(n-m)!} (-1)^{n-q}.$$ Observe that powers of $z$ that are present in the LHS and the RHS always have the same parity, the coefficients being zero otherwise 
(either all even powers or all odd). Therefore $(-1)^{n-q}$ is in fact a constant not dependent on $q$, the question is which. The leading term has degree $2n-(n-m)=n+m=(2n-(n+m))+2m$ on both sides and the sign on the LHS is $(-1)^n$ and on the RHS it is $(-1)^{n+m}.$ The conclusion is that the queried factor is given by $$\bbox[5px,border:2px solid #00A000]{ Q_{n,m} = (-1)^m \frac{(n+m)!}{(n-m)!}.}$$
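For what it's worth, the boxed constant is easy to spot-check by computer algebra. The snippet below is only a sanity check of the identity for small $n,m$, assuming `sympy` is available; it is not part of the derivation above.

```python
# Spot check of Q_{n,m} (d/dz)^{n-m} (1-z^2)^n = (1-z^2)^m (d/dz)^{n+m} (1-z^2)^n
# with Q_{n,m} = (-1)^m (n+m)!/(n-m)!, for small n and m.  Illustrative only.
import sympy as sp

z = sp.symbols('z')

def Q(n, m):
    return (-1)**m * sp.factorial(n + m) / sp.factorial(n - m)

for n in range(1, 6):
    for m in range(0, n + 1):
        lhs = Q(n, m) * sp.diff((1 - z**2)**n, z, n - m)
        rhs = (1 - z**2)**m * sp.diff((1 - z**2)**n, z, n + m)
        assert sp.expand(lhs - rhs) == 0, (n, m)
print("verified for all n <= 5, 0 <= m <= n")
```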
Parenthesis in function notation?
No, there isn't. It is all about readability and, perhaps, reminding yourself occasionally that $R$ depends on $t$. In the second equation such a reminder is superfluous, as a derivative is being taken.
Simplify $\sqrt{8-\sqrt{63}}$
Note that $63=9 \times 7$ and $8=\frac{1}{2}(9+7)$. Therefore, $$ 9+7-2\sqrt{9 \times 7} = (\sqrt{9}-\sqrt{7})^2, $$ so that $$ 8-\sqrt{63} = \frac{1}{2}(16-2\sqrt{63}) = \frac{1}{2}(3-\sqrt{7})^2 $$ and $$ \sqrt{8-\sqrt{63}} = \frac{3-\sqrt{7}}{\sqrt{2}} = \frac{3\sqrt{2}-\sqrt{14}}{2}. \quad \blacksquare $$
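(Just as a quick numerical sanity check of the final form, not needed for the argument:)

```python
# Both expressions evaluate to the same number, approximately 0.25049.
from math import sqrt

lhs = sqrt(8 - sqrt(63))
rhs = (3 * sqrt(2) - sqrt(14)) / 2
print(lhs, rhs, abs(lhs - rhs) < 1e-12)
```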
Extended mapping class group of $S^p \times S^q$
I guess "extended mapping class group" means $\pi_0 \text{Diff}(M)$, the set of isotopy classes of all diffeomorphisms. (I have never heard this notation in my life.) What you call the mapping class group is what I would call the oriented mapping class group $\pi_0 \text{Diff}^+(M)$, the set of orientation-preserving isotopy classes of diffeomorphisms. I will continue to use this standard notation instead of yours, writing $MCG(M)$ and $MCG^+(M)$, respectively. In my previous post, I wrote $MCG(M)$ everywhere, but what I meant was $MCG^+(M)$ - I made that restriction at the start of the post. It seems that this is what has caused the notation confusion; I am sorry about that. I hope that you can use what follows to answer questions like this yourself from now on (as they are not too difficult). If $M$ is an orientable manifold, the set $\text{Diff}(M)$ carries a homomorphism to $\Bbb Z/2$ (by whether or not a map preserves orientation), whose kernel is $\text{Diff}^+(M)$. So long as the map to $\Bbb Z/2$ is surjective, there is a short exact sequence $$0 \to \text{Diff}^+(M) \to \text{Diff}(M) \to \Bbb Z/2 \to 0.$$ For the map to $\Bbb Z/2$ to be surjective means precisely that $M$ carries some orientation-reversing diffeomorphism. If not, then $\text{Diff}^+(M) = \text{Diff}(M).$ Whenever you have a short exact sequence of topological groups, it is a fibration, and you may apply the long exact sequence of homotopy groups of a fibration: $$\cdots \to \pi_1(\Bbb Z/2) \to \pi_0 \text{Diff}^+(M) \to \pi_0 \text{Diff}(M) \to \Bbb Z/2 \to 0.$$ Of course, $\pi_1 (\Bbb Z/2) = 0$, so we see we have a short exact sequence $$0 \to MCG^+(M) \to MCG(M) \to \Bbb Z/2 \to 0.$$ That is, the full mapping class group is always an extension of $\Bbb Z/2$ by the oriented mapping class group. Deciding what the extension is tends to be quite difficult. Because every manifold $S^p \times S^q$ has an orientation-reversing diffeomorphism, we see that their mapping class groups and oriented mapping class groups fit into an extension as above. It is not always trivial: there is a surjection $MCG(S^{2n+1} \times S^{2n+1}) \to GL_2 \Bbb Z$, given by taking the induced map on $H_{2n+1}$. The oriented mapping class group surjects onto $SL_2 \Bbb Z$, and if the extension $$0 \to MCG^+(S^{2n+1} \times S^{2n+1}) \to MCG(S^{2n+1} \times S^{2n+1}) \to \Bbb Z/2 \to 0$$ was trivial, then so too would be the extension $$1 \to SL_2 \Bbb Z \to GL_2 \Bbb Z \to \Bbb Z/2 \to 0.$$ But this extension is nontrivial.
Integrating angular momentum, velocity and acceleration on two touching, spinning discs
The situation seems basically similar to a "dry clutch" in engineering. The simplest model of that, I think, would be almost identical to a block sliding on a flat surface, subject to frictional forces. And the common assumption there is that the kinetic friction force is proportional to the normal force applied to the block (which might be its weight), and to a coefficient of friction, but it is independent of the speed. This is the angular equivalent of that. The frictional torque between the two discs will act on the relative motion, and will conserve total angular momentum. So we can start by writing $$ I_1 \dot{\omega}_1(t) = -\tau , \qquad I_2 \dot{\omega}_2(t) = +\tau $$ where the dot represents the time derivative and $\tau$, the frictional torque, is a constant. We need to integrate these equations in time, from $t=0$, up to the point where $\omega_1(t)=\omega_2(t)=\omega_f$. Beyond that point, the discs will rotate together. This is simple enough to do analytically. The angular velocities will change linearly in time $$ \omega_1(t) = \omega_1(0) -\frac{\tau t}{I_1 } , \qquad \omega_2(t) = \omega_2(0) +\frac{\tau t}{I_2 } $$ An easy way to get a solution is to write an equation for the relative angular velocity $\omega=\omega_1-\omega_2$, set $\omega(t)=0$, and solve it for $\tau t$: \begin{align*} \omega(t) &= \omega(0) - \tau t \left(\frac{I_1+I_2}{I_1I_2}\right) = 0 \\ \quad\Rightarrow\quad \tau t_f &= \left(\frac{I_1I_2}{I_1+I_2}\right) \omega(0) = \left(\frac{I_1I_2}{I_1+I_2}\right) [ \omega_1(0)-\omega_2(0)] \end{align*} where I've called this final time $t_f$. We can check that this gives the correct answers by substituting back in \begin{align*} \omega_1(t_f) &= \omega_1(0) -\frac{1}{I_1 }\left(\frac{I_1I_2}{I_1+I_2}\right) [ \omega_1(0)-\omega_2(0)] = \frac{I_1\omega_1(0) + I_2\omega_2(0)}{I_1+I_2} \\ \omega_2(t_f) &= \omega_2(0) +\frac{1}{I_2 } \left(\frac{I_1I_2}{I_1+I_2}\right) [ \omega_1(0)-\omega_2(0)] = \frac{I_1\omega_1(0) + I_2\omega_2(0)}{I_1+I_2} \end{align*} So, both equal to $\omega_f$ at that time $t=t_f$. Of course, if you want to make the frictional torque depend on relative angular velocity in some complicated way, the solution may require a computer. But the underlying equations will be similar to the above. [Edit following OP comments] Note that I wrote the equation for $t_f$, time needed to reach equal angular velocities, as an expression for the product $\tau\, t_f$: this product is equal to a function of the initial angular velocities and the moments of inertia. You need to know $\tau$ before you can calculate $t_f$. To evaluate $t_f$ for a particular physical case, you need to multiply my equation for $\tau t_f$ by $1/\tau$ on both sides, i.e. take $\tau$ over to the right hand side. This illustrates that $t_f$ is inversely proportional to $\tau$, if the other parameters are kept constant. If the friction between the two discs is zero, it will take an infinite time to reach the same angular velocity, because the two discs will have no effect on each other. If the friction is very large, the time taken will be very short. In any case, the integration of those equations should stop at $t=t_f$, since after that time the two discs are not rotating relative to each other. So, to solve the problem you want to solve, you must provide the physics of the interaction between the discs. Let me emphasise that my solution is only based on the simplest assumption about this. Your situation might be more complicated. 
However, some aspects of my solution will still apply, e.g. the torque between the discs must still conserve total angular momentum. For more discussion of friction in general, see for example https://physics.stackexchange.com/questions/2408/does-the-force-of-kinetic-friction-increase-with-the-relative-speed-of-the-objec and https://physics.stackexchange.com/questions/154443/why-is-the-equation-for-friction-so-simple .
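If it helps, here is a minimal numerical sketch of the constant-torque model above. The moments of inertia, initial angular velocities and torque are illustrative values of my own, not taken from the question; the point is just that a crude Euler integration of the two equations reproduces the closed-form $t_f$ and $\omega_f$.

```python
# Constant-torque coupling of two discs: integrate until they co-rotate and
# compare with the closed-form results derived above.  Values are illustrative.
I1, I2 = 2.0, 3.0        # assumed moments of inertia
w1, w2 = 10.0, 2.0       # assumed initial angular velocities (w1 > w2)
tau = 0.5                # assumed constant frictional torque

t_f = (I1 * I2 / (I1 + I2)) * (w1 - w2) / tau        # closed-form stopping time
w_f = (I1 * w1 + I2 * w2) / (I1 + I2)                # common final angular velocity

dt, t = 1e-4, 0.0
while w1 > w2:           # stop once the discs rotate together
    w1 -= tau / I1 * dt
    w2 += tau / I2 * dt
    t += dt

print("numerical t_f =", round(t, 3), " closed form t_f =", round(t_f, 3))
print("numerical w_f =", round(w1, 3), " closed form w_f =", round(w_f, 3))
```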
Meaning of the supra- and sub-indexes in tensor notation
I'm not entirely sure I followed what your issue is, but it seems to be notational in nature, so perhaps the following exposition will be helpful. Let $V$ be a real linear space with associated dual $V^*$. If $T$ is a $(k,l)$-tensor, then $T$ is a multilinear map $$T:\underbrace{V^*\times\cdots\times V^*}_{k-\text{times}}\times\underbrace{V\times\cdots\times V}_{l-\text{times}}\to\mathbb{R}.$$ In particular, if $T$ is a $(2,3)$-tensor, and $\{e_j\}$ is a basis for $V$ with associated dual basis $\{\epsilon^j\}$ for $V^*$, i.e., $\epsilon^j(e_i)=\delta_i^j$, then in component form, we have that (using Einstein's summation notation) $$T=T_{abc}^{ij} e_i\otimes e_j\otimes\epsilon^a\otimes\epsilon^b\otimes\epsilon^c.$$ If $X=X^je_j,$ $Y=Y^je_j$, $Z=Z^je_j$ are three vectors and $\alpha=\alpha_j\epsilon^j,$ $\beta=\beta_j\epsilon^j$ are two covectors, then $$T(\alpha,\beta,X,Y,Z)=T_{abc}^{ij}\alpha_i\beta_jX^aY^bZ^c.$$
How find this $I=\int_{0}^{\infty}\frac{x\sin{(2x)}}{x^2+4}dx$
This is a duplicate of this Functions defined by integrals (problem 10.23 from Apostol's Mathematical Analysis). We have that $$ F(y) = \int_0^{\infty} \frac{\sin(xy)}{x(x^2+1)}\mathrm{d}x = \frac{\pi}{2}(1-e^{-y}) $$ A generalization of your integral is given as $$ G(a,y) = \int_0^{\infty} \frac{\sin(xy)}{x(x^2+a^2)}\mathrm{d}x = a^{-2} F(ay) = \frac{\pi}{2a^2}(1-e^{-ay}) $$ Differentiating twice yields $$ \frac{\mathrm{d}^2 G}{\mathrm{d}y^2} = -\int_0^{\infty} \frac{x\sin(xy)}{x^2+a^2}\mathrm{d}x = - \frac{\pi}{2} e^{-ay} $$ Hence $$ \int_{0}^{\infty}\dfrac{x\sin{(2x)}}{x^2+4}\mathrm{d}x = -G''(2,2) = \frac{\pi}{2} e^{-4} $$ Where $G''(a,y)$ means differentiation with respect to $y$.
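As a numerical cross-check of the final value (illustrative only, assuming `scipy` is available; `quad`'s oscillatory `sin` weight handles the infinite range):

```python
import numpy as np
from scipy.integrate import quad

# Integrate x/(x^2+4) against sin(2x) over [0, inf) and compare with (pi/2) e^{-4}.
val, err = quad(lambda x: x / (x**2 + 4), 0, np.inf, weight='sin', wvar=2)
print(val, np.pi / 2 * np.exp(-4))   # both about 0.02877
```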
Find the limit of a real sequence $a_n = \dfrac{(n^2+n+1)^{10}-(n+1)^{20}}{(n^2+1)^{10}-(n+1)^{20}}$
First, change variable $n = m-1$. That makes it look simpler: $$\frac{(m^2-m+1)^{10}-m^{20}}{(m^2-2m+2)^{10}-m^{20}}$$ Then use the Binomial theorem to expand, and finally apply the method you were using. $$\frac{-10m^{19} + ...\text{lower degree terms}}{-20m^{19}+...\text{lower degree terms}}\to\frac{1}{2}$$
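(A one-line symbolic check of the result, assuming `sympy` is available:)

```python
import sympy as sp

n = sp.symbols('n', positive=True)
a_n = ((n**2 + n + 1)**10 - (n + 1)**20) / ((n**2 + 1)**10 - (n + 1)**20)
print(sp.limit(a_n, n, sp.oo))   # 1/2
```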
Abbott's proof that any rearrangement of an absolutely convergent series converges to the same limit as the original
What should be written is "$t_m - s_N$ consists of a finite sum of terms, the absolute values of which appear in $\sum_{k=N+1}^\infty|a_k|$." This is simply because $t_m=s_N + \text{other terms $a_i$ where $i\notin\{1,\dots,N\}$.}$ This was the point of picking a sufficiently large partial sum of the series $\sum b_k$, so that we have included the summands in $s_N$.
Let $X\sim\mathrm{Normal}(1,4)$. If $Y=0.5^X$, find $\Bbb E[Y^2]$.
You have $Y^2 = 0.5^{2X} = \left( \frac{1}{4} \right)^X,$ so $\mathbb{E}[Y^2] = \mathbb{E}[e^{tX}]$ where $ t= - \log 4.$ The moment generating function of a normal variable with mean $\mu$ and variance $\sigma^2$ is equal to $\exp(\mu t + \sigma^2 t^2/2)$ so $\mathbb{E}[Y^2] = \exp(8 \log^2 2)/4.$
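A quick Monte Carlo sanity check of this value (illustrative only, assuming `numpy` is available):

```python
import numpy as np

# Sample X ~ Normal(1, 4) (standard deviation 2) and average Y^2 = (1/4)^X.
rng = np.random.default_rng(0)
X = rng.normal(loc=1.0, scale=2.0, size=10**7)
estimate = np.mean(0.5 ** (2 * X))
exact = np.exp(8 * np.log(2) ** 2) / 4
print(estimate, exact)   # both roughly 11.7 (the estimate is noisy)
```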
Derivative of an L1 norm of transform of a vector.
Solving in coordinates, use the formula $\frac{\partial}{\partial x_k} \|\mathbf{x}\|_p = \frac{x_k |x_k|^{p-2}}{\|\mathbf{x}\|_{p}^{p-1}}$ for $p=1$ and with obvious existence conditions. See also the answer to Taking derivative of $L_0$-norm, $L_1$-norm, $L_2$-norm.
Count the number of ways of painting the strip with 3 colors
If it is as stated (exactly $A$ yellow, $B$ red, $C$ blue), it is just a multinomial coefficient. Consider it like this: Number the yellow strips $1$ to $A$, the red ones $1$ to $B$, and the blue ones $1$ to $C$. Then the problem is how to paint the strip with $N$ different color swatches, which can be done in $N!$ ways. But this overcounts by the reorderings of the swatches of each color ("turn off their numbers"), so you have to divide by $A! \, B! \, C!$, in all: $$ \frac{(A + B + C)!}{A! \, B! \, C!} = \binom{A + B + C}{A, B, C} $$
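In code, the count is just the multinomial coefficient; a minimal sketch (the function name is mine):

```python
from math import factorial

def strip_colourings(A, B, C):
    """Strips of length A+B+C with exactly A yellow, B red and C blue cells."""
    return factorial(A + B + C) // (factorial(A) * factorial(B) * factorial(C))

print(strip_colourings(2, 1, 1))   # 4!/(2! 1! 1!) = 12
```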
$T(x_1,x_2,x_3,x_4,\ldots)=\left(x_2,\frac{x_3}{2},\frac{x_4}{3},\frac{x_5}{4},\ldots\right)$, spectral radius and spectrum
Note that $$\|T^nx\|_2^2 = \sum_{i=1}^\infty \left(\frac{(i-1)!}{(i-1+n)!}\right)^2 x_{i+n}^2 \le \sum_{i=1}^\infty \left(\frac1{n!}\right)^2 x_{i+n}^2 \le \frac{1}{n!^2}\|x\|_2^2$$ so $\|T^n\| \le \frac1{n!}$. Therefore the spectral radius is $$r(T) = \lim_{n\to\infty} \|T^n\|^{\frac1n} \le \limsup_{n\to\infty} \frac{1}{\sqrt[n]{n!}} = 0$$ and hence $\sigma(T) = \{0\}$. For the norm of $T$ we got $\|T\| \le 1$ and considering $Te_2 = e_1$ gives $\|T\|=1$.
Will adding a constant to a random variable change its distribution?
I feel like I'm making a huge mistake because of how simple this feels, but... If $X$ and $X+c$ had the same law, they would have the same expectation. But alas, $\Bbb E [X + c] = \Bbb E X + c$ so unless $c=0$, they have different laws.
How to find $ \lim_{(x,y)\to (0,0)} \frac{\sin(x-y)}{x+y} $?
It does not exist. If we take the limit along the line $y=mx$ (with $m\neq -1$), we get $$\lim_{x\to 0}\frac{\sin(x-mx)}{x+mx}=\lim_{x\to 0}\frac{\sin((1-m)x)}{(1+m)x}=\frac{1-m}{1+m},$$ which depends on $m$; this could not happen if the limit existed.
Question About Part of the Proof of a Lemma to the Church-Rosser Theorem in "Lectures on the Curry-Howard Isomorphism"(1998)
Lemma 1.4.2 and its proof use two different symbols, $>^*$ and $↠_\beta$, to represent the transitive closure of the relation $>$. Note that in the sequel of the section, for instance in lemma 1.4.6 and in theorem 1.4.7, the symbol $↠_\beta$ represents a specific relation on the set $\Lambda$ of $\lambda$-terms, i.e. the so called multi-step $\beta$-reduction (which is the reflexive-transitive closure of $\beta$-reduction $\to_\beta$). The use of the symbol $↠_\beta$ in lemma 1.4.2 is misleading because it clashes with the interpretation of that symbol in the sequel of the section, but strictly speaking it is not an error. The statement in the second paragraph in the proof of lemma 1.4.2 is if $N_1 > \cdots > N_m$ and $N_1 >^* M_1$, then there are $M_2, \dots, M_m$ such that $M_1 > M_2 > \cdots > M_m$ and $N_m>^*M_m.\qquad$ (*) The proof of (*) is by induction on $m \geq 1$. The base case is for $m = 1$. We have to show that if $N_1 >^* M_1$ then $N_1 >^* M_1$, which is trivially true. For the inductive case, we suppose that the property ( * ) is true for some $m \geq 1$ (this is the inductive hypothesis), and we want to prove that it is true for $m +1$. So, suppose $N_1 > \cdots > N_m > N_{m+1}$ and $N_1 >^* M_1$. By the induction hypothesis applied to $N_1 > \cdots > N_m$, we know that there exist $M_2, \dots, M_m$ such that $M_1 > M_2 > \cdots > M_m$ and $N_m>^*M_m$. Thus, we have $N_m > N_{m+1}$ and $N_m >^* M_m$: by the property stated in the first paragraph of the proof of lemma 1.4.2, there exists $M_{m+1}$ such that $N_{m+1} >^* M_{m+1}$ and $M_m > M_{m+1}$. Therefore, $M_1 > M_2 > \cdots > M_m > M_{m+1}$ and $N_{m+1}>^*M_{m+1}$, and so the property (*) holds for $m+1$.
Unable to solve logarithm question
Just like in your other question, $$\log a = \lambda\cdot a(b+c-a)$$ and so on give: $$ b\log a +a\log b = \lambda\cdot\left(ab(b+c-a)+ab(a+c-b)\right)=2\lambda\cdot abc$$ that is symmetric in $a,b,c$, so by exponentiating the previous line $$ a^b b^a = a^c c^a = b^c c^b $$ follows.
Probability of at least one king and at least one ace when 5 cards are lifted from a deck?
We need to subtract from $C^{52}_{5}$ (the total number of ways of choosing $5$ cards from $52$) the number of ways of selecting no aces and the number of ways of selecting no kings, but we must add the number of ways of selecting no aces and no kings (else we'll have counted these twice). The number of ways of selecting neither aces nor kings is $C^{52-8}_{5}=C^{44}_{5}$. The number of ways of selecting no aces is $C^{52-4}_{5}=C^{48}_{5}$. This is equal to the number of ways of selecting no kings. So we have: $C^{52}_{5} - 2C^{48}_{5}+C^{44}_{5}$ ways of selecting $5$ cards from a deck such that we have at least one ace and at least one king. The probability of this is then $\dfrac{C^{52}_{5} - 2C^{48}_{5}+C^{44}_{5}}{C^{52}_{5}} \approx 10\%$.
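The same arithmetic, spelled out with `math.comb` (illustrative only):

```python
from math import comb

total = comb(52, 5)
favourable = total - 2 * comb(48, 5) + comb(44, 5)
print(favourable / total)   # approximately 0.1002
```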
A question related to improper integrals
The thing is that it need not be the case that $g' = f \to 0$. Consider, for instance, the case where in each interval $[n, n+1)$, the graph of $f$ looks like a triangle of height $n$ but width $1/n^3$, outside of which $f$ is $0$. Such an $f$ of course satisfies the claim, but it scuppers your strategy. On the other hand, we know that $\liminf f = 0$. Otherwise there is some $\varepsilon > 0$ such that $f > \varepsilon$ for large $x$, which makes the integral of $f$ blow up. Since $\liminf$s are approached arbitrarily closely infinitely many times, this tells us that a sequence of $x_n$ such that $f(x_n) \to 0$ must exist. One can be quite explicit in identifying such a sequence. For naturals $n$, let $m_n := \min_{x \in [n,n+1]} f(x),$ let $y_n$ be the corresponding minimiser (which exists because a continuous function on a compact set attains its minimum), and consider the step function $h := \sum m_n \mathbf{1}\{x \in [n, n+1)\}.$ Then we know that $h > 0$ and $h \le f$, so $h$ is integrable. So we know that $m_n \to 0$. This gives a sequence of values $y_n$ which diverge (since $y_n \ge n$), and for which $f(y_n) \to 0$. The only problem with the $y_n$ is that some of the consecutive $y_n$ may coincide - for instance if the minima are at the even naturals. However, it is trivially true that $y_n \neq y_{n+2}$ for any $n$, since they live in disjoint sets. The simple fix then is to go in steps of two. So $x_n = y_{2n}$ serves as a sequence that satisfies the claim.
Problems on span of col and row space
You’re correct about the size of $A$. Here’s a hint for the rest: what do the columns/rows of $A$ look like relative to $c$ and $r$?
Countable Basis and an Uncountable Set
It cannot be the case that $\mathcal{B}_x = \mathcal{B}_y$ for $x \neq y$ in a $T_0$ space: in that case some $O$ exists such that $x \in O, y \notin O$ (or the other way round) and no element of $\mathcal{B}_y$ can be contained in $O$ but some element of $\mathcal{B}_x$ must be, so $\mathcal{B}_x \neq \mathcal{B}_y$. Your approach should be changed. Let $N$ be the set of points of $A$ that are not limit points of $A$ (the uncountable set). So for each $x \in N$ we have some open set $O$ such that $x \in O$ and $O \cap A = \{x\}$, and we can pick a base element $B_x \in \mathcal{B}$ such that $x \in B_x \subseteq O$ and so $B_x \cap A = \{x\}$. For $x \neq y$ this implies $B_x \neq B_y$. So the map $x \to B_x$ is 1-1 from $N$ to $\mathcal{B}$ and the latter is a countable set, so $N$ is at most countable. It follows that $A \setminus N$ is an uncountable subset of $A$ consisting of limit points of $A$.
Trick to find multiples mentally
One needn't memorize motley exotic divisibility tests. There is a universal test that is simpler and much easier recalled, viz. evaluate a radix polynomial in nested Horner form, using modular arithmetic. For example, consider evaluating a $3$ digit radix $10$ number modulo $7$. In Horner form $\rm\ d_2\ d_1\ d_0 \ $ is $\rm\: (d_2\cdot 10 + d_1)\ 10 + d_0\ \equiv\ (d_2\cdot 3 + d_1)\ 3 + d_0\ (mod\ 7)\ $ since $\rm\ 10\equiv 3\ (mod\ 7)\:.\:$ So we compute the remainder $\rm\ (mod\ 7)\ $ as follows. Start with the leading digit then repeatedly apply the operation: multiply by $3$ then add the next digit, doing all of the arithmetic $\rm\:(mod\ 7)\:.\:$ For example, let's use this algorithm to reduce $\rm\ 43211\ \:(mod\ 7)\:.\:$ The algorithm consists of repeatedly replacing the first two leading digits $\rm\ d_n\ d_{n-1}\ $ by $\rm\ d_n\cdot 3 + d_{n-1}\:\ (mod\ 7),\:$ namely $\rm\qquad\phantom{\equiv} \color{red}{4\ 3}\ 2\ 1\ 1$ $\rm\qquad\equiv\phantom{4} \color{green}{1\ 2}\ 1\ 1\quad $ by $\rm\quad \color{red}4\cdot 3 + \color{red}3\ \equiv\ \color{green}1 $ $\rm\qquad\equiv\phantom{4\ 3} \color{royalblue}{5\ 1}\ 1\quad $ by $\rm\quad \color{green}1\cdot 3 + \color{green}2\ \equiv\ \color{royalblue}5 $ $\rm\qquad\equiv\phantom{4\ 3\ 5} \color{brown}{2\ 1}\quad $ by $\rm\quad \color{royalblue}5\cdot 3 + \color{royalblue}1\ \equiv\ \color{brown}2 $ $\rm\qquad\equiv\phantom{4\ 3\ 5\ 2} 0\quad $ by $\rm\quad \color{brown}2\cdot 3 + \color{brown}1\ \equiv\ 0 $ Hence $\rm\ 43211\equiv 0\:\ (mod\ 7)\:,\:$ indeed $\rm\ 43211 = 7\cdot 6173\:.\:$ Generally the modular arithmetic is simpler if one uses a balanced system of representatives, e.g. $\rm\: \pm\{0,1,2,3\}\ \:(mod\ 7)\:.$ Notice that for modulus $11$ or $9\:$ the above method reduces to the well-known divisibility tests by $11$ or $9\:$ (a.k.a. "casting out nines" for modulus $9\:$).
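In code the same test is a three-line loop; here is a minimal sketch for base-10 digit strings (the function name is mine):

```python
def remainder(digits: str, m: int) -> int:
    """Return int(digits) mod m by Horner evaluation, reducing mod m at each digit."""
    r = 0
    for d in digits:
        r = (r * 10 + int(d)) % m   # 10 may itself be reduced to 10 % m first
    return r

print(remainder("43211", 7))   # 0, so 7 divides 43211
print(43211 % 7)               # the same check done directly
```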
Uniqueness of the Frechet Derivative: the role of $x \in int_X(T)$
It is easier to see this using an $\epsilon$-$\delta$ argument. $\Phi$ is differentiable at $x$ iff there exists some continuous linear $L$ such that for all $\epsilon>0$ there is some $\delta>0$ such that if $\|x-y\| < \delta$ then $\|\Phi(y)-\Phi(x) - L(y-x) \| \le \epsilon \|y-x\|$. Suppose $K,L$ satisfy the equation, then $\|(K-L)(x-y)\| \le \|\Phi(y)-\Phi(x) - K(y-x) \| + \|\Phi(y)-\Phi(x) - L(y-x) \|$. Now choose $\epsilon>0$ and get some $\delta_L,\delta_K >0$ such that the above holds. Then if $\|x-y\| < \min(\delta_L,\delta_K)$ we have $\|(K-L)(x-y)\| \le 2 \epsilon \|y-x\|$. Since $x$ is in the interior, there is some $B(x,\eta) \subset T$; shrinking $\eta$ if necessary, we may also take $\eta \le \min(\delta_L,\delta_K)$. So for any $h \in B(0,1)$ we have $\|(K-L) \eta h\| \le 2\epsilon \eta \|h\|$, or $\|(K-L) h\| \le 2\epsilon \|h\|$, since $(K-L)$ is linear. In particular, $\|K-L\| \le 2\epsilon$. Since $\epsilon>0$ was arbitrary we have the desired result $K=L$.
Laurent Expansion of a Composition (with the inverse)
Suppose $f(z) = a_1z + a_0 + O(\frac{1}{z})$ as $|z| \rightarrow \infty$ with $a_1>0$. Consider $g(z) = \frac{1}{f(\frac{1}{z})}$. Then $g(0) = 0$ and $g'(0) \neq 0$. So $g$ is analytic near $0$ and has the form $g(z) = g_1z + g_2z^2 + O(z^3)$ near $0$. $$ \frac{1}{f\left(\frac{1}{z}\right)} = g(z) $$ $$ \therefore g_1 = \frac{1}{a_1} \text{ and } g_2 = -\frac{a_0}{a_1^2}$$ Now as the comment (above) suggests, for $g(z) = \frac{1}{f\left(\frac{1}{z}\right)}$ we have $g^{-1} = \frac{1}{f^{-1}\left(\frac{1}{z}\right)}$ and $g^{-1}$ is also analytic at $0$. So $g^{-1}(z)$ is of the form $b_1z + b_2z^2 + O\left(z^3\right)$ near $0$. $$ (g(g^{-1}(z)))' = 1 $$ $$ \left( \frac{1}{a_1} - \frac{2a_0}{a_1^2} (b_1z + b_2z^2) + O(z^3)\right)\left( b_1 + 2b_2z + O(z^3)\right) = 1 $$ Which gives us $b_1 = a_1$ and $b_2 = a_0a_1$. Therefore we get that $\frac{1}{f^{-1}(\frac{1}{z})} = a_1z + a_0a_1z^2 + O(z^3)$. Now since ord$_0 f^{-1}\left(\frac{1}{z}\right) = -1$ we have $f^{-1}\left(\frac{1}{z}\right)$ is of the form $\frac{c_{-1}}{z} + c_0 + c_1z + O(z^2)$ near $0$. Thus $$ \left( \frac{c_{-1}}{z} + c_0 + c_1z + O(z^2)\right)\left(a_1z + a_0a_1z^2 + O(z^3)\right) = 1 $$ Which yields $c_{-1} = \frac{1}{a_1}$ and $c_0 = -\frac{a_0}{a_1}$. So, $f^{-1}\left(\frac{1}{z}\right) = \frac{1}{a_1z} - \frac{a_0}{a_1} + O(z)$. Now, $L(z) = W \circ f^{-1}(z) = f^{-1}(z) + \frac{1}{f^{-1}(z)}$. Both of these can be found by taking the reciprocals inside $f^{-1}\left(\frac{1}{z}\right)$ and $\frac{1}{f^{-1}\left(\frac{1}{z}\right)}$.
Real-valued function with kind of additivity
Let $f:\mathbb{N}^k \rightarrow \mathbb{R}$ be an injective function. Then there is a function $g:\mathbb{R} \rightarrow \mathbb{R}^k$ such that for all $X\in \mathbb{N}^k$, $g(f(X))=X$. For any $1 \leq i \leq k$ define the functions $\phi_i: \mathbb{R} \rightarrow \mathbb{R}$ as follows: $$\phi_i(t)=f(g(t)+E_i)$$ for all $t\in f(\mathbb{N}^k)$ and $\phi_i(t)=0$ otherwise, where for any $1 \leq i \leq k$, $E_i=(a_1,...,a_k)$ with $a_i=1$ and $a_j=0$ for $j\neq i$. Now for any $X=(n_1,...,n_k)\in \mathbb{N}^k$: $$\phi_i(f(n_1, n_2, \dotsc, n_k))=\phi_i(f(X))=f(g(f(X))+E_i)=f(X+E_i)=f(n_1, n_2, \dotsc,n_i+1, \dotsc, n_k).$$ Therefore every real-valued injective function $f:\mathbb{N}^k \rightarrow \mathbb{R}$ satisfies the condition in your question. Hint. If $f:\mathbb{N}^k \rightarrow \mathbb{R}$ is not an injective function but $f(\mathbb{N}^k)$ is a countable set, then with the axiom of choice you can build functions $\phi_i$ that satisfy your condition. Proof. Let $h_i:\mathbb{R} \rightarrow \mathbb{R}$ be functions such that $f(X+E_i)-f(X)=h_i(f(X))$ for any $1 \leq i \leq k$ and $X\in \mathbb{N}^k$. Then we can define $\phi_i(t)=h_i(t)+t$ for any $1 \leq i \leq k$. Example. Let $f(n_1, n_2, \dotsc, n_k)=n_1n_2\dots n_k$; then we have $$f(X+E_i)-f(X)=\frac{n_1n_2\dots n_k}{n_i}=\frac{1}{n_i}f(X)=h_i(f(X)),$$ therefore we get that $$\phi_i(t)=\left(\frac{1}{n_i}+1\right)t.$$
Std Dev of large set
The M/M/1 model assumes that job sizes are exponentially distributed with rate parameter $\mu$; in your setup the job sizes are uniformly distributed, so you need to consider an M/G/1 queueing model. What is the difference between 'Mean time it takes to send a packet through the line' and 'Average latency'? They sound like a description of the same phenomenon to me. Use the result for waiting/response time in an M/G/1 queue, recalling that the mean can be found by differentiating the Laplace–Stieltjes transform using the formula $$\mathbb E [X] = -\left.\frac{\text{d} \{\mathcal{L}^*F\}(s)}{\text{d}s} \right|_{s=0}.$$ The times taken to send a packet (service times) are uniformly distributed, so the 'Std dev for the time it takes to transmit a packet' will be the standard deviation of the uniform distribution. 'Time a packet spends waiting in the queue' is the response time less the service time. You can use the waiting/response time result for this also, can you see how?
How to figure out if there is an actual horizontal tangent without a graph
One of the answers is $(0,0)$. However, at this point $\frac{dy}{dx}$ takes the indeterminate form $\frac{0}{0}$, so I think you should not consider the point $(0,0)$.
How to rotate / transform a vector so that it is parallel to a plane?
Okay - I get now that you are talking about changing the ground. But still this principle applies: the same transformation that you applied to the ground is the transformation that applies to the ball. You rotated the plane. I don't know how you have accomplished that rotation in your program, but mathematically it is expressed as a matrix $M$ that satisfies $M^TM = I$, the identity matrix (where $M^T$ is the transpose of $M$). If $\bf n$ is the normal vector to the original plane, then $M\bf n$ is the normal vector to the rotated plane. You also said you moved the plane (in the example, along the $x$-axis). This means that the new plane no longer passes through the origin (at least in your example, the original plane did). This is a translation, but we need to be careful which translation we use. Suppose the new plane must pass through some point $\bf p$. In your example you indicated that it passes through $(\sim 0.5, 0, 0)$. You can use that to be the point $\bf p$. Since the normal vector is $M\bf n$ and it passes through $\bf p$, a point $\bf r$ is on the new plane if $$(M{\bf n})^T{\bf r} = {\bf n}^TM^T{\bf r} = d$$ where $d = {\bf n}^TM^T\bf p$. The translation vector we need is ${\bf b} = dM\bf n$, which is the closest point on the new plane to the origin. So the transformation applied to the original plane ${\bf n}^T{\bf r} = 0$ to get the new plane ${\bf n}^TM^T{\bf r} = d$ is to rotate by $M$, then add the translation vector $\bf b$: $$ {\bf r} \mapsto M{\bf r} + \bf b$$ (In case you don't know, $\mapsto$ means "maps to". I.e., the original value on the left is transformed into the value on the right.) This is the same transformation you need to apply to your sphere to get its new location after the transformation. If $\bf c$ is the original center of the sphere, the transformed center will be $${\bf c} \mapsto M{\bf c} + \bf b$$ Let $\bf v$ be the "input direction vector" of the horizontal ball movement. Since $\bf v$ is a direction, not a point, it does not translate. So the new direction vector that the ball will travel up the sloped ground will be given by $${\bf v} \mapsto M\bf v$$ So the inputs you need to solve your problem are the rotation matrix $M$, and a point $\bf p$ that the new plane passes through. If you have these, then you can use them to transform the ground plane, the sphere's position, and the direction of movement, all three.
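Here is a minimal `numpy` sketch of that recipe; the example rotation, the point $\bf p$, and all names are illustrative assumptions of mine, and $\bf n$ is taken to be a unit vector.

```python
import numpy as np

def transform_scene(M, n, p, c, v):
    """Apply the plane's rotation M and translation to the sphere and its direction."""
    n_new = M @ n            # normal of the rotated plane
    d = n_new @ p            # plane equation: n_new . r = d  (n assumed unit length)
    b = d * n_new            # translation vector: closest point of the new plane to O
    c_new = M @ c + b        # sphere centre transforms like a point
    v_new = M @ v            # a direction rotates but does not translate
    return n_new, d, b, c_new, v_new

theta = np.radians(20)                                   # assumed tilt about the y-axis
M = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])
n = np.array([0.0, 0.0, 1.0])                            # original ground normal
p = np.array([0.5, 0.0, 0.0])                            # point the moved plane passes through
c = np.array([0.0, 0.0, 1.0])                            # original sphere centre
v = np.array([1.0, 0.0, 0.0])                            # original movement direction

for name, value in zip(["normal", "d", "b", "centre", "direction"],
                       transform_scene(M, n, p, c, v)):
    print(name, value)
```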
Eigenspace of finite abelian group
No. For a counterexample, take any faithful representation of a nontrivial group (simplest is an action of $\Bbb Z_2$ on $\Bbb C^2$, say, by reflection through a line), and consider the unit element, that has the whole space as eigenspace, while other elements can have more complex eigenspace decomposition.
Why is important for a manifold to have countable basis?
There is one point that is mentioned in passing in Moishe Cohen's nice answer that deserves a bit of elaboration, which is that a lot of the time it is not important for a manifold to have a countable basis. Rather, what is important in most applications is for a manifold to be paracompact: this is what gives you partitions of unity, which are essential to an enormous amount of the theory of manifolds (for instance, as the other answer mentioned, proving that any manifold admits a Riemannian metric). Paracompactness follows from second-countability, which is the main reason why second-countability is useful. Paracompactness is weaker than second-countability (for instance, an uncountable discrete space is paracompact), but it turns out that it isn't weaker by much: a (Hausdorff) manifold is paracompact iff each of its connected components is second-countable. To put it another way, a general paracompact manifold is just a disjoint union of (possibly uncountably many) second-countable manifolds. So if you care mainly about connected manifolds (or even just manifolds with only countably many connected components), you lose no important generality by assuming second-countability rather than paracompactness. There are also a few situations where it really is convenient to assume second-countability and not just paracompactness. For instance, in the theory of Lie groups, it is convenient to be able to define a (not necessarily closed) Lie subgroup of a Lie group $G$ as a Lie group $H$ together with a smooth injective homomorphism $H\to G$. If you allowed your Lie groups to not be second-countable, you would have the awkward and unwanted example that $\mathbb{R}$ as a discrete space is a Lie subgroup of $\mathbb{R}$ with the usual $1$-dimensional smooth structure (via the identity map). For instance, this example violates the theorem (true if you require second-countability) that a subgroup whose image is closed is actually an embedded submanifold.
Probability rules with two conditionals only
I'm afraid you cannot find $P(B)$ without knowing $P(A)$. If you do know this, you can use the expression $P(B)=P(B|A)P(A)+P(B|\overline{A})P(\overline{A})$ and the fact that, since $P(\cdot|\overline{A})$ is a probability, $P(B|\overline{A})=1-P(\overline{B}|\overline{A}).$ (Edit: And of course $P(\overline{A})=1-P(A).$)
Precalculus Vector + Matrix Problem
You can take the scalars of a and b into the vector and add the two of them together to get vector $v=\begin{bmatrix} 2a+3b \cr -a+b \end{bmatrix}$. To get P from Pv=a refer to this: How to find matrix $A$ given $Ax=b$. Also $det(A)$ & $sum(A)$ are known.
Solution in general for a seemingly simple problem
The problem reminded me of some type of epigraph form. For example, for the following problem (P1) $\qquad\,\,\,\,\,\min_x f_0(x)\\ \text{subject to } f_i(x)\le0,\,i=1,\ldots,n\\ \qquad \qquad \,\,\,h_i(x)=0,\,i=1,\ldots,m$ The epigraph form is $\qquad\,\,\,\,\,\min_{x,t} t\\ \text{subject to } f_0(x)\le t\\ \qquad \qquad \,\,\,f_i(x)\le0,\,i=1,\ldots,n\\ \qquad \qquad \,\,\,h_i(x)=0,\,i=1,\ldots,m$ Which is equivalent to (P1); $(x,t)$ is optimal for the epigraph form if and only if $x$ is optimal for (P1) and $t=f_0(x)$. A geometrical interpretation can be seen below (from Boyd) However, in your problem it seems as though we wish to find the smallest scalar $t$ which defines some orthant (not epigraph), such that all $x_i$ live in this orthant and the set $\mathbb{S}$. In hindsight, the epigraph form may not be of much use here, but hopefully provides some intuition.
Show that the projection vector of $\vec b$ on $\vec a$, $a \ne 0$ is $\bigg( \frac{\vec a .\vec b}{|\vec a|^2}\bigg)\vec a$.
You need to show this vector, when subtracted from $\vec{b}$, gives something orthogonal to $\vec{a}$. Indeed $$(\vec{b}-\lambda\vec{a})\cdot\vec{a}=\vec{a}\cdot\vec{b}-\lambda|\vec{a}|^2,$$ which is $0$ for $\lambda=\frac{\vec{a}\cdot\vec{b}}{|\vec{a}|^2}$.
Math notation please explain
I can see how you can be confused because the choice of symbols in this expression is poor. They are using $C$ both for the indexed terms in sums ($C_i$) and for a set that the indexes come from. Let's dissect the expression. I think what confuses you is $i\in A\cup B\cup C$. $A\cup B\cup C$ is the set where the index of the sum takes its values. Let's take a simple example, where the set is just the three numbers 1,2,3. $$\sum_{i\in\{1,2,3\}}C_i = C_1 +C_2 +C_3$$ I hope this is clear to you. Now in our case, $A, B, C$ are just sets (of natural numbers one would assume). For example $A$ could be $\{3,4,5\}$, $B$ could be $\{2,1\}$, and $C$ could be $\{3,4,10,20\}$. $A\cup B\cup C$ is the union of these sets, so for our example, it would be $\{1,2,3,4,5,10,20\}$. As pointed out in the comments, if we have a set $S$, then $|S|$ is the cardinality of the set, i.e., how many elements the set has. So for our example above, the expression you give becomes: $$ C_0\left(|A|+ |B|+ |C|\right)-\sum_{i\in A\cup B\cup C}C_i=$$ $$ C_0(3+2+4) - \sum_{i\in\{1,2,3,4,5,10,20\}}C_i = $$ $$ 9C_0 - (C_1+C_2+C_3+C_4+C_5+C_{10}+C_{20})$$ As pointed out in the comments, the equation you gave only holds if the sets $A, B, C$ are mutually disjoint (they do not share any elements). You can find out why by exploring the example I just gave you, where the sets are not mutually disjoint. Can you see why the equation does not hold?
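The same example in code, in case it helps (the values $C_0$ and $C_i$ are arbitrary placeholders of mine):

```python
A = {3, 4, 5}
B = {2, 1}
C = {3, 4, 10, 20}
C0 = 1
Ci = {i: i for i in A | B | C}   # pretend C_i = i, just to have concrete numbers

union = A | B | C
lhs = C0 * (len(A) + len(B) + len(C)) - sum(Ci[i] for i in union)
print(union)   # {1, 2, 3, 4, 5, 10, 20}
print(lhs)     # 9*C0 - (C_1 + ... + C_20) = 9 - 45 = -36
```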
Using Complex Analysis to Compute $\int_0 ^\infty \frac{dx}{x^{1/2}(x^2+1)}$
For this, you want the keyhole contour $\gamma=\gamma_1 \cup \gamma_2 \cup \gamma_3 \cup \gamma_4$, which passes along the positive real axis ($\gamma_1$), circles the origin at a large radius $R$ ($\gamma_2$), and then passes back along the positive real axis $(\gamma_3)$, then encircles the origin again in the opposite direction along a small radius-$\epsilon$ circle ($\gamma_4$). Picture (borrowed from this answer): $\gamma_1$ is green, $\gamma_2$ black, $\gamma_3$ red, and $\gamma_4$ blue. It is easy to see that the integrals over the large and small circles tend to $0$ as $R \to \infty$, $\epsilon \to 0$, since the integrand times the length is $O(R^{-3/2})$ and $O(\epsilon^{1/2})$ respectively. The remaining integral tends to $$ \int_{\gamma_1 \cup \gamma_3} = \int_0^{\infty} \frac{x^{-1/2}}{1+x^2} \, dx + \int_{\infty}^0 \frac{(xe^{2\pi i})^{-1/2}}{1+(xe^{2\pi i})^2} \, dx, $$ because we have walked around the origin once, and chosen the branch of the square root based on this. This simplifies to $$ (1-e^{-\pi i})\int_0^{\infty} \frac{x^{-1/2}}{1+x^2} \, dx = 2I. $$ Now you need to compute the residues of the function at the two poles, using the same branch of the square root. The poles of $1/(1+z^2) = \frac{1}{(z+i)(z-i)}$ are at $z=e^{\pi i/2},e^{3\pi i/2}$, so you find $$ 2I = 2\pi i \left( \frac{(e^{\pi i/2})^{-1/2}}{2i} +\frac{(e^{3\pi i/2})^{-1/2}}{-2i} \right) = 2\pi \sin{\frac{1}{4}\pi} = \frac{2\pi}{\sqrt{2}} $$ However, I do recommend that you don't attempt to use contour integration for all such problems: imagine trying to do $$ \int_0^{\infty} \frac{x^{s-1}}{(a+bx^n)^m} \, dx, $$ for general $a,b,s,m,n$ such that it converges, using that method! No, the useful thing to know is that $$ \frac{1}{A^n} = \frac{1}{\Gamma(n)}\int_0^{\infty} \alpha^{n-1}e^{-\alpha A} \, d\alpha, $$ which enables you to do more general integrals of this type. Contour integration's often a quick and cheap way of doing simple integrals, but becomes impractical in some general cases.
computing the Kauffman bracket with the given relation
You need to close it off at the end, since if I understand what you mean, $((1-A^4)\langle x\rangle + A^{-2}\langle y\rangle)^n$ will just be a tangle. So, you would expand this out, then close off $\langle x\rangle$ and $\langle y\rangle$, giving $\delta$ and $1$, respectively, where $\delta=-A^2-A^{-2}$. The relations you need are $\langle x\rangle\langle x\rangle=\langle x\rangle$ $\langle x\rangle\langle y\rangle=\langle y\rangle$ $\langle y\rangle\langle x\rangle=\langle y\rangle$ $\langle y\rangle\langle y\rangle=\delta\langle y\rangle$ Notice the horizontal composition of the brackets of these tangles is commutative, so you can expand your expression out like a polynomial: \begin{align} ((1-A^4)\langle x\rangle + A^{-2}\langle y\rangle)^n &= \sum_{k=0}^n \binom{n}{k}(1-A^4)^kA^{-2(n-k)}\langle x\rangle^k \langle y\rangle^{n-k} \\ &= (1-A^4)^n\langle x\rangle + \sum_{k=0}^{n-1} \binom{n}{k} (1-A^4)A^{2(k-n)} \delta^{n-k} \langle y\rangle \end{align} Hence, \begin{align} \langle D_n\rangle &= (1-A^4)^n\delta + \sum_{k=0}^{n-1} \binom{n}{k} (1-A^4)A^{2(k-n)} \delta^{n-k}. \end{align} That is, this is the bracket polynomial modulo any errors I might have made. Another approach I like is to see what the clasp does to $a\langle x\rangle+b\langle y\rangle$ when attached to one side. In the $\langle x\rangle,\langle y\rangle$ basis, this can be represented by the matrix $$\begin{pmatrix} 1-A^4 & 0 \\ A^{-2} & -A^4-A^{-4} \end{pmatrix}$$ This matrix is diagonalizable as $PDP^{-1}$ with \begin{align} P&=\begin{pmatrix} A^2+A^{-2} & 0 \\ 1&1\end{pmatrix} &D&=\begin{pmatrix} 1-A^4 & 0 \\ 0 & -A^4-A^{-4} \end{pmatrix} \end{align} Hence, we may take the $n$th power of the matrix by computing $PD^nP^{-1}$, which is \begin{align} PD^nP^{-1} &= \begin{pmatrix} \left(1-A^4\right)^n & 0 \\ \frac{\left(1-A^4\right)^n-\left(-A^4-A^{-4}\right)^n}{A^2+A^{-2}} & \left(-A^4-A^{-4}\right)^n \\ \end{pmatrix} \end{align} The first column of this is what $n$ clasps does to $\langle x\rangle$, which is what we want to then close off to get $\langle D_n\rangle$. Hence, \begin{align} \langle D_n\rangle &= \left(1-A^4\right)^n \delta + \frac{\left(1-A^4\right)^n-\left(-A^4-A^{-4}\right)^n}{A^2+A^{-2}}\\ &= \left(-A^4-A^{-4}\right)^n \end{align}
The convergence of $\sum_{k=n}^{\infty}\frac{(m^2+1)^k}{k^m}$, and a related series
For $m = 1$ we have $(m^2 + 1)^k = 2^k > k$ for any $k$. For $m > 1$ we have $m^2 + 1 > m$ and since $m^k > k^m$ for large enough $k$, it follows that $(m^2 + 1)^k > k^m$ for all sufficiently large $k$. This means the terms of $$\sum_{k=1}^{\infty} \frac {(m^2+1)^k} {k^m}$$ are eventually greater than $1$, so in particular they do not tend to $0$, and the series diverges for all $m \geqslant 1$.
Equation for the line joining north pole and a point on the complex plane
They are the same up to a change of variables $t \mapsto 1-t$. Combining the vectors in your equation, you get: $$\langle 0,0,1\rangle + t\langle x,y,-1\rangle = \langle tx,ty,1-t\rangle.$$ Combining the vectors in your professor's equation: $$t\langle 0,0,1\rangle + (1-t) \langle x,y,0\rangle = \langle (1-t)x,(1-t)y, t\rangle.$$ Notice that these are the same except $t$ has been replaced by $1-t$. In general the line segment between $\bf x$ and $\bf y$ can be parametrized by ${\bf x} + t({\bf y}-{\bf x})$ or by $t{\bf x} + (1-t){\bf y}$ for $0 \leq t \leq 1$. For a fixed value of $t$, the first equation gives you the point "$t$ of the way" to $\bf y$ from $\bf x$, while the second gives you the point "$t$ of the way" to $\bf x$ from $\bf y$.* You can think of it as a describing a weighted average of the points $\bf x$ and $\bf y$, for every possible combination of weights between $0$ and $1$ that sum to $1$. *Here when I say "$t$ of the way" I don't mean in terms of absolute distance, but as a fraction of the distance between $\bf x$ and $\bf y$.
Proof relation between diagonalization and eigenvectors
If we take a basis then the vectors of the basis are linearly independent, so if we take the vectors as column vectors of a matrix $A$ then the matrix is of full rank. We find the corresponding characteristic equation, and its roots are the eigenvalues of the matrix. All of the roots lie in $\Bbb C$, as $\Bbb C$ is algebraically closed. Once you have the eigenvalues you can easily get the corresponding eigenvectors and create the corresponding matrix $P$. $D$ is the diagonal matrix with the eigenvalues as its diagonal entries. Thus $$A = PDP^{-1}$$ Hence $A$ is diagonalizable.
Finding the solution for $Ax=0$
You have shown that matrix is non singular (or invertible), so that the only solution, as you state, is ${\bf x}=0$.
Integration by substitution called Weierstrass substitution?
The Weierstrass substitution is precisely $$t = \tan \dfrac{x}{2},$$ so that $$\cos x = \dfrac{1 - t^2}{1 + t^2}$$ $$\sin x = \dfrac{2t}{1 + t^2}$$ and $$dx = \dfrac{2\,dt}{1+ t^2}.$$ It rationalizes the expressions that contain trigonometric functions. For example, $$\int\frac{\cos x}{\cos x+1}dx=\int\frac{\dfrac{1 - t^2}{1 + t^2}}{\dfrac{1 - t^2}{1 + t^2}+1}\frac{2\,dt}{t^2+1}=\int\dfrac{1 - t^2}{1 + t^2}dt.$$ Note that Weierstrass is also useful to find the roots of trigonometric polynomials. E.g. with the classical linear equation $$a\cos x+b\sin x+c=0$$ we obtain $$a(1-t^2)+2bt=c(1+t^2),$$ which is quadratic.
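A quick numerical check (illustrative only) that the substitution really rewrites the integrand of the example: with $t=\tan(x/2)$ one should have $\frac{\cos x}{\cos x+1}=\frac{1-t^2}{1+t^2}\frac{dt}{dx}$ pointwise.

```python
import numpy as np

x = 0.7                             # any point where the integrand is defined
t = np.tan(x / 2)
dt_dx = 0.5 / np.cos(x / 2) ** 2    # derivative of tan(x/2)
lhs = np.cos(x) / (np.cos(x) + 1)
rhs = (1 - t**2) / (1 + t**2) * dt_dx
print(lhs, rhs)                     # identical up to rounding
```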
Image of a subgroup under the projection to the projective general linear group is isomorphic to the group quotiented by its centre
I'll make a try. From the first isomorphism theorem we have $$\pi(\rho(G))\cong \rho(G)/ker\pi|_{\rho(G)}$$ It suffices to show that $ker\pi|_{\rho(G)}=Z(\rho(G))$. It is $$ker\pi|_{\rho(G)}=\{\rho(g)\in \rho(G)|\ \rho(g)\in Z(V)\}=\\ =\{\rho(g)\in \rho(G)|\ \rho(g)=k\cdot Id\}=\\=Z(\rho(G))$$ where the second equality follows from Schur's Lemma.
What is the probability of drawing 6 unique cards from 6 different decks?
Your answer to this problem is not correct. In fact, the correct answer should be $$ \frac{52}{52} * \frac{51}{52} * \frac{50}{52} * \frac{49}{52} * \frac{48}{52} * \frac{47}{52} $$ This is how I got this answer - Consider event A to be the set of all outcomes in which all of the six cards are pairwise-distinct. Using combinatorics, this is equivalent to $$ 52*51*50*49*48*47 $$ The total number of ways in which 6 cards can be drawn from 6 decks is $$ 52^{6} $$ Thus by probability theory, the answer should be equal to the number of favourable outcomes divided by total outcomes. Hence $$ \mathbb{P}(A) = \frac{52}{52} * \frac{51}{52} * \frac{50}{52} * \frac{49}{52} * \frac{48}{52} * \frac{47}{52} $$ Hope it helps you!!!
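The product can be evaluated directly, and a short simulation agrees with it (illustrative only):

```python
import random
from functools import reduce

exact = reduce(lambda p, k: p * k / 52, range(47, 53), 1.0)
print(exact)                       # approximately 0.7414

trials, hits = 100_000, 0
for _ in range(trials):
    draws = [random.randrange(52) for _ in range(6)]   # one card from each of 6 decks
    hits += len(set(draws)) == 6                       # all six pairwise distinct
print(hits / trials)               # close to 0.74
```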
Easy criteria to determine isomorphism of fields?
If $f,g$ are quadratic polynomials (and $K$ is any field of characteristic not 2), then by the quadratic formula the isomorphism classes are classified by the discriminant "$b^2 - 4ac$" (if $f = ax^2 + bx + c$), modulo squares. Ie, the isomorphism classes are in bijection with $K^\times/(K^\times)^2$. In any other situation things become a lot more difficult. If $K$ is $\mathbb{Q}$, then $\mathbb{Q}[X]/(f)\cong\mathbb{Q}[X]/(g)$ if and only if the action of $G_\mathbb{Q} := \text{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ on the roots of $f$ is isomorphic to the action of $G_\mathbb{Q}$ on the roots of $g$. Ie, if $X_f,X_g$ denote the sets of roots of $f,g$, then $\mathbb{Q}[X]/(f)\cong\mathbb{Q}[X]/(g)$ if and only if there is a bijection $\phi : X_f\stackrel{\sim}{\rightarrow}X_g$ such that $\phi(\sigma x) = \sigma\phi(x)$ for all $x\in X_f$, $\sigma\in G_\mathbb{Q}$. This is a consequence of the Galois correspondence, which says that the association sending any finite extension $L := \mathbb{Q}[X]/(h)$ of $\mathbb{Q}$ (with $h$ irreducible) to $X_h$ (as a set with $G_\mathbb{Q}$-action) gives an equivalence of categories between the category of finite field extensions of $\mathbb{Q}$ and the category of finite sets with a transitive $G_\mathbb{Q}$-action. The result follows from the fact that two finite extensions of $\mathbb{Q}$ are isomorphic as fields if and only if they are isomorphic as extensions of $\mathbb{Q}$ (any abstract isomorphism between them must fix their prime subfields). The same result will also be true if $K$ is replaced by any $\mathbb{F}_p$ (though this situation is trivial since finite extensions of $\mathbb{F}_p$ are uniquely determined by degree). With suitable modifications the result is also true when $K$ is any finite extension of $\mathbb{Q}$, and with more care, even true when $K$ is an algebraic extension of $\mathbb{Q}$. If $K$ is not an algebraic extension over its prime subfield, then things can get weird, as we move into the world of arithmetic geometry. For example, for any field $k$, you can set $K := k(t)$, then $K = K[X]/(X) = k(t)$ is isomorphic to $K[X]/(X^2-t)\cong k(\sqrt{t})$. If $K$ is finite over $\mathbb{Q}$, then in certain cases you may also be able to use class field theory. Though, I should mention that in practice it's rare that you would care about isomorphisms between fields (as abstract fields). You generally will want to restrict yourself to "nice" extensions of a fixed base field, in which case a number of the conditions above can be relaxed.
proof of the inequality?
This can be done in three steps: Prove or quote that $|x+y| \le |x| + |y|$. Prove that $f(z) = \frac{z}{z+1}$ is increasing on $[0,\infty)$. This can be done with Calculus or with an algebra step. Prove that $\frac{a+b}{1+a+b} \le \frac{a}{1+a} + \frac{b}{1+b}$ for all $a, \, b \ge 0$. That is also an algebra step. Then put these three arguments together to make a proof.
True if sigma-compact
I think $\omega_1$ should supply a counterexample. Compact sets are countable, so every set $E$ has the property that $E \cap K$ is Borel for all compact $K$. But I'm pretty sure $\omega_1$ contains non-Borel sets (assuming enough choice), though I don't know a proof off the top of my head.
How do I prove that function is solution of the Laplace equation?
Just like for any differential equation, you check whether it's a solution by plugging the alleged solution into the equation and see whether the difference of the two sides simplifies to $0$. Here it doesn't, so this is not a solution.
machine learning model vs time series model?
Check out this paper on arXiv. It expounds on the following: We demonstrate how by applying deep learning techniques to forecasting, one can overcome many of the challenges faced by widely-used classical approaches to the problem. I think you're right in terms of the one or several time series being predicted: at least Amazon Web Services claims the following for the above arXiv-presented model: When your dataset contains hundreds of feature time series, the DeepAR+ algorithm outperforms the standard ARIMA and ETS methods I hope that this gives you some (at least close to) cutting edge work done on machine learning for time series. After all, if "classical" time series models e.g. ARIMA didn't have room for improvement, there would be no comparison to make!
Difference between Norm and Distance
All norms can be used to create a distance function as in $d(x,y) = \|x-y\|$, but not all distance functions have a corresponding norm, even in $\mathbb{R}^k$. For example, a trivial distance that has no equivalent norm is $d(x,x) = 0$, and $d(x,y) = 1$, when $x\neq y$. Another distance on $\mathbb{R}$ that has no equivalent norm is $d(x,y) = | \arctan x - \arctan y|$. However, in general, when working in $\mathbb{R}^k$ the distance used is one induced by a norm, and 'unusual' distances are typically used to illustrate other mathematical concepts (eg, the $\arctan$ distance gives an example of an incomplete metric space).
Evaluating quadratic residue $(6/p) $
I would simply go with your first method; the second is problematic as $6$ is not prime. For $\left( \frac{2}{p} \right)$ use the second supplement law of quadratic reciprocity, to get it is $1$ for $\pm 1 \mod 8$. For $\left( \frac{3}{p} \right)$ you can use quadratic reciprocity to reduce to $ (-1)^{(p-1)/2}\left( \frac{p}{3} \right)$. Since $1$ is a quadratic residue $\bmod 3$ while $2$ is not, you get $\left( \frac{p}{3} \right) =1$ for $p \equiv 1 \mod 3$ and $-1$ otherwise. Now, note that $(p-1)/2$ is even for $p \equiv 1 \pmod 4$, so $(-1)^{(p-1)/2}$ is $1$ there and $-1$ for $p \equiv 3 \pmod 4$. So, $\left( \frac{3}{p} \right)$ is $1$ for $ p \equiv 1 \pmod{12}$ (as $1 \times 1$) and $ p \equiv 11 \pmod{12}$ (as $(-1)(-1)$), and $-1$ for $ p$ congruent to $5,7 \pmod{12}$. Now, combine the two and get a condition modulo $24$.
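Carrying out that combination gives $\left(\frac{6}{p}\right)=1$ exactly for $p \equiv \pm 1, \pm 5 \pmod{24}$; a quick empirical check, assuming `sympy` is available:

```python
from sympy import primerange
from sympy.ntheory import legendre_symbol

for p in primerange(5, 500):
    predicted = 1 if p % 24 in (1, 5, 19, 23) else -1
    assert legendre_symbol(6, p) == predicted, p
print("(6/p) = 1  iff  p = +/-1, +/-5 (mod 24), checked for all primes 5 <= p < 500")
```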
Textbook Questions to Do while Self-learning
Allow me to point out the obvious fact that knowing how to solve a problem and writing out the answer are quite different. I would suggest reading the exercises/problems and seeing if you have an idea about how to do them. If you are pretty sure you know how, then just do a couple to check that your intuition is accurate. If you aren't sure how the problems would be done, then ask yourself how you might broach them initially, and try to write out a (reasonably) full solution. Totally separately (and depending on what level you are trying to get to): I have a certain fondness for the book Abstract Algebra by Dan Saracino. Among introductory texts to Modern Algebra (which is the level I assume you are at, since you are starting with the group axioms) I find it to be quite accessible and very well-organized. The problems range nicely from straightforward/doable to tricky/puzzle-like.
Inverse limit of sequence of fibrations
To be honest, I have trouble finding out what is the needed argument, but I'll try to give another one. I will consistently use radial coordinates on the disk, ie. $(r, \phi) \in D^{k}$, where $r$ is the distance from the centre and $\phi \in S^{k-1}$. Let $\tilde{F}_{n-1}: S^{i} \times I \rightarrow X_{n-1}$ given by $\tilde{F}_{n-1}(\phi, t) = F_{n-1}(t-1, \phi)$ be the contracting homotopy. By the fibration property there exists a lift of this homotopy to $X_{n}$, say $G_{n}$, such that $G_{n} |_{S^{i} \times \{ 0 \}} = f_{n}$. Let $g_{n} = G_{n} |_{S^{i} \times \{ 1 \}}$. Since $\tilde{F}_{n-1} |_{S^{i} \times \{ 1 \}}$ maps to the basepoint, we see that the image of $g_{n}$ is contained in the fibre $F \subseteq X_{n}$. The map $\pi_{i+1}(X_{n}) \rightarrow \pi_{i+1}(X_{n-1})$ is surjective and hence by the long exact sequence of homotopy the map $\pi_{i}(F) \rightarrow \pi_{i}(X_{n})$ is injective. Since $g_{n}$ is null in $\pi_{i}(X_{n})$ (being homotopic to $f_{n}$), it must be also null in $F$ and thus there is a contracting homotopy $H_{n}: S^{i} \times I \rightarrow F$. Now define a map $F ^\prime _{n}: D^{i+1} \rightarrow X_{n}$ by $F ^\prime _{n}(r, \phi) = G_{n}(\phi, 2(1-r))$ for $r \geq \frac{1}{2}$ $F ^\prime _{n}(r, \phi) = H_{n}(\phi, 2(\frac{1}{2}-r))$ for $r \leq \frac{1}{2}$, After a moment of thought, we see that $p_{n} F^\prime _{n}$ is homotopic to $F_{n-1}$. Using the homotopy lifting for $(D^{i+1}, S^{i})$ we may deform it $rel \ S^{i}$ to a map $F_{n}$ such that $p_{n} F_{n} = F_{n-1}$, which is what we wanted to do.