Self adjoint operators on a Hilbert space
If $n$ is even then $0=\left<T^n x,x\right>=\left<T^{n/2} x,T^{n/2}x\right>$ so $T^{n/2}x=0$. What if $n$ is odd?
What is the easiest way to find the maximum of $9\cos(t) - 8\cos(3t)$?
Express $\cos(3t)$ in terms of $\cos(t)$ and differentiate with respect to $t$ or $\cos(t)$.
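For instance, carrying this out with $c=\cos t$ (which ranges over $[-1,1]$): $$\cos(3t)=4c^3-3c\quad\Longrightarrow\quad 9\cos t-8\cos(3t)=33c-32c^3.$$ The derivative $33-96c^2$ vanishes at $c=\pm\sqrt{11/32}$, and comparing these with the endpoint values at $c=\pm1$ (which give $\pm1$) shows the maximum is $$22\sqrt{\tfrac{11}{32}}=\frac{11\sqrt{22}}{4}\approx 12.90.$$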
Values of $p$ for which a quadratic possesses at least one positive root.
Hint: Consider any $p$ for which the roots are real. The quadratic will have a positive and a negative root if the constant term, $p+5$, is negative. If the constant term is positive, the two roots have the same sign, and we can use the fact that their sum is $-2(p-1)$.

Full solution: First note that the roots are real exactly when the discriminant, $4(p-1)^2-4(p+5)=4(p-4)(p+1)$, is nonnegative, i.e. when $p \leq -1$ or $p \geq 4$. By my first statement: if $p < -5$, then the roots have opposite signs, so we have at least one positive root. If $p = -5$, then we have the quadratic $x^2 - 12x$, which has a positive root. If $-5 < p \leq -1$, then we have two roots of the same sign. Since the sum of the roots is $-2(p-1) > 0$, we may conclude that both roots are positive. If $p \geq 4$, then both roots have the same sign. Since the sum of the roots is $-2(p-1) < 0$, we may conclude that both roots are negative. So, we have at least one positive root exactly when $p \leq -1$.

Quicker solution: If $p \leq -1$, the sum of the roots is $-2(p - 1) > 0$. Conclude that we have a positive root. If $p \geq 4$, then both roots have the same sign since $p+5 > 0$. Since the sum of the roots is $-2(p-1) < 0$, we may conclude that both roots are negative.
Integral $\int_{0}^{\pi/3}\ln^4\left(\frac{\sin x}{\sin(x+\pi/3)}\right)dx$
$$I=\int_{0}^{\pi/3}\ln^4\left(\frac{\sin(x+\pi/3)}{\sin x}\right)dx=\int_0^\frac{\pi}{3}\ln^4 \left(\frac12 +\frac{\sqrt 3}{2}\cot x\right)dx=\frac{\sqrt 3}{2}\int_0^1 \frac{\ln^4 x}{x^2-x+1}dx$$ Above follows by the substitution $\frac12+\frac{\sqrt{3}}{2}\cot x\to x$. We also have for $t\in(0,\pi), x\in (-1,1)$: $$\frac{\sin t}{x^2-2x\cos t+1}=\sum_{n=1}^\infty x^{n-1}\sin(nt)$$ $$\Rightarrow I=\sum_{n=1}^\infty \sin\left(\frac{n\pi}{3} \right)\int_0^1 x^{n-1} \ln^4 x dx=24\sum_{n=1}^\infty \frac{\sin\left(\frac{n\pi}{3} \right)}{n^5}$$ Similarly to here, we have for $x\in(0,2\pi)$: $$\frac{\pi-x}{2}=\sum_{n=1}^\infty\frac{\sin(nx)}{n}$$ Integrating the above with respect to $x$ gives: $$\sum_{n=1}^\infty \frac{\cos(nx)}{n^2}=\frac{(\pi-x)^2}{4}+C_1$$ Setting $x=\pi$ gives $C_1=-\frac{\pi^2}{12}$ and integrating again produces: $$\sum_{n=1}^\infty \frac{\sin(nx)}{n^3}=-\frac{(\pi-x)^3}{12}-\frac{\pi^2}{12}x+C_2$$ Putting $x=\pi $ yields $C_2= \frac{\pi^3}{12}$. One more time: $$\sum_{n=1}^\infty \frac{\cos(nx)}{n^4}=-\frac{(\pi-x)^4}{48}+\frac{\pi^2}{24}x^2-\frac{\pi^3}{12}x+C_3$$ For $x=\pi \Rightarrow C_3=\frac{23\pi^4}{720}$. And finally, one more similar step gives for $x \in(0,2\pi)$: $$S(x)=\sum_{n=1}^\infty \frac{\sin(nx)}{n^5}=\frac{(\pi-x)^5}{240}+\frac{\pi^2}{72}x^3-\frac{\pi^3}{24}x^2+\frac{23\pi^4}{720}x-\frac{\pi^5}{240}$$ And well, the value of the integral is just $24S\left(\frac{\pi}{3}\right)$. Doing the algebra yields: $$\boxed{\int_{0}^{\pi/3}\ln^4\left(\frac{\sin x}{\sin(x+\pi/3)}\right)dx=\frac{17\pi^5}{243}}$$
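For anyone wanting a numerical sanity check of the boxed value, here is a sketch using mpmath (whose tanh-sinh quadrature copes with the logarithmic singularity at $x=0$):

from mpmath import mp, quad, log, sin, pi

mp.dps = 30
f = lambda x: log(sin(x) / sin(x + pi / 3)) ** 4
print(quad(f, [0, pi / 3]))   # approximately 21.4088
print(17 * pi ** 5 / 243)     # the boxed value, also approximately 21.4088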
When are $(a_1,...,a_n)\in \mathbb{A}_i^n$ and $(b_1,...,b_n)\in \mathbb{A}_j^n$ for some $i < j$ equal to each other?
For simplicity, let us assume $i<j$ (if $i=j$ then the problem is trivial). The point in $\Bbb P^n$ associated to $(a_1, \dots, a_n)$ is $[a_1, \dots, a_{i-1}, 1, a_i, \dots, a_n]$ with $1$ in the $i$-th position and $[\,\cdot\,]$ denoting projective coordinates. The point in $\Bbb P^n$ associated to $(b_1, \dots, b_n)$ is $[b_1, \dots, b_{j-1}, 1, b_j, \dots, b_n]$ with $1$ in the $j$-th position. Now remember that two $(n+1)$-tuples represent the same point if and only if one is a non-zero multiple of the other. Therefore, it is necessary and sufficient to require that there exist $\lambda \ne 0$ such that $$[a_1, \dots, a_{i-1}, 1, a_i, \dots, a_n] = [\lambda b_1, \dots, \lambda b_{j-1}, \lambda , \lambda b_j, \dots, \lambda b_n] .$$ Equating the $i$-th coordinates gives us $1 = \lambda b_i$. This imposes that $b_i \ne 0$ and $\lambda = \dfrac 1 {b_i}$, whence the above condition may be rewritten as $$[a_1, \dots, a_{i-1}, 1, a_i, \dots, a_n] = \left[ \frac {b_1} {b_i}, \dots, \frac {b_{j-1}} {b_i}, \frac 1 {b_i}, \frac {b_j} {b_i}, \dots, \frac {b_n} {b_i} \right] ,$$ whence by a componentwise comparison it follows that the necessary and sufficient conditions are $$\begin{cases} a_k = \dfrac {b_k} {b_i}, & k \le i-1, \\ a_k = \dfrac {b_{k+1}} {b_i}, & i \le k \le j-2, \\ a_{j-1} = \dfrac 1 {b_i}, \\ a_k = \dfrac {b_k} {b_i}, & k \ge j .\end{cases}$$
If $\mu$ equals Haar measure on the 3-dimensional unit sphere $S^2$, then $\hat{\mu}(\varepsilon) = \dfrac{2\sin(2\pi |\varepsilon|)}{|\varepsilon|}$.
In this case the Haar measure is simply the rotation-invariant surface (Lebesgue) measure on the sphere. So, with the convention that matches the stated formula, you have $$ \hat \mu(\varepsilon)=\int_{S^2} e^{-2\pi i\,\varepsilon\cdot x}\,d\sigma(x). $$
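To carry the computation out (a sketch): by rotational invariance we may take $\varepsilon=|\varepsilon|e_3$ and use spherical coordinates, so that $$\hat\mu(\varepsilon)=\int_0^{2\pi}\!\!\int_0^{\pi} e^{-2\pi i|\varepsilon|\cos\theta}\,\sin\theta\,d\theta\,d\varphi =2\pi\int_{-1}^{1}e^{-2\pi i|\varepsilon|u}\,du =\frac{e^{2\pi i|\varepsilon|}-e^{-2\pi i|\varepsilon|}}{i|\varepsilon|} =\frac{2\sin(2\pi|\varepsilon|)}{|\varepsilon|}.$$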
Show that for all integers $a,b$ and every $n>0$, $(a+b)^n ≡ a^n + b^n \pmod 2$
Never mind the binomial expansion or induction. There are 3 cases: both $a$ and $b$ are even, both are odd, or one is odd and the other is even...
Can a finite subbasis create an infinite topology / topological space?
No. If $\mathcal{S}$ is a finite set, then there are only finitely many (nonempty) finite subcollections of that set, and so the family $\mathcal{I}$ of intersections of (nonempty) finite subcollections of $\mathcal{S}$ is also finite (saying "finite" here is somewhat redundant). As there are only finitely many subcollections of $\mathcal{I}$, there are only finitely many distinct unions of subcollections of $\mathcal{I}$. Thus any topology generated by a finite subbasis is itself finite. In particular, the usual metric topology on $\mathbb{R}$ has no finite subbase. (Even more, given an arbitrary subbasis $\mathcal{S}$ on a set $X$, either $\mathcal{S}$ and $\mathcal{I}$ (as described above) are both finite, or both are infinite and of the same cardinality. It follows that the topology generated by $\mathcal{S}$ has cardinality bounded above by some cardinal related to $| \mathcal{S} |$. In the finite case, it would be $2^{2^{|\mathcal{S}|}}$, and in the infinite case it is $2^{|\mathcal{S}|}$.)
Definition of $\ell^p$ space and some confusions with norm
Question 2: compare $d(x,y)$ with $d(x,0)+d(y,0)$, where $x = (1,0,0,\dots)$ and $y = (0,1,0,\dots)$. Note that this is a finite dimensional counterexample, i.e. basically the same problem occurs for $\mathbb R^2$ equipped with $d(x,y)^p = |x_1-y_1|^p + |x_2-y_2|^p$. Re: question 1: firstly, no one ever said that a definition needs a point. I can of course make the definition $\texttt{LJNG}:=42$. What's the point? Who knows. It does make sense, however. And here there is a point: the $\ell^p$ spaces are basic examples of Banach spaces, which are very important and heavily studied. Having examples is good because the theory is hard and intuition from finite dimensions does not carry over.
Proving that f is continuous
Here's a hint: Suppose that $0<x<a$. Since $g$ is decreasing, we have $$\frac{f(a)}{a}\leq\frac{f(x)}{x}$$ However, $f$ is increasing, and since $x<a$, $f(x)\leq f(a)$, so $$\frac{f(a)}{a}\leq\frac{f(x)}{x}\leq\frac{f(a)}{x}$$ What happens as $x\to a^-$? A similar argument can be done when $x\to a^+$. This justifies that $g$ is continuous. Then, just justify that $f$ is continuous.
Find range of values of k for which f(x) <= k for all real values of x
Rewrite $$f(x) =-2(x-2)^2+25\le 25$$ Thus, $k\ge 25$. In your approach, you should set $D\le 0$, because the graphs of $y=f(x)$ and $y = k$ must either not intersect or be tangent to each other; this leads to the same range for $k$.
Bonus scheme calculation
With the data provided there is no way to distinguish work on hard or long projects from work on easy or short ones. Perhaps the employees who did fewer projects did so because those were long or hard. You ask for a "fair" bonus calculation, but "fair" is not a mathematically well defined term. I think that given the data the "fair" assumption is that each of the five ratings on each employee's projects contributes the same weight to that employee's bonus, independent of what the other employees have done. The easiest calculation is to find the average rating for each employee. For the first one in the question that would be $$ \frac{3 + 14 \times 2}{15} = 2.07. $$ Then you could award that employee a bonus of $2.07 \times 10\% = 20.7\%$. With this algorithm the minimum bonus is $10\%$ and the maximum is $30\%$. You could do a little more work to count ratings of $3$ as more than three times as good as ratings of $1$ or to scale the result so that the bonus range was from $0\%$ to $30\%$. I suspect that any scheme would look "unfair" to someone. The simplest one would be the easiest to defend.
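For concreteness, a small sketch of the simplest scheme in Python, using the first employee's ratings from the question:

ratings = [3] + [2] * 14              # one rating of 3, fourteen ratings of 2
avg = sum(ratings) / len(ratings)     # 31 / 15 = 2.0666...
bonus = avg * 0.10                    # average rating times 10%
print(f"bonus = {bonus:.1%}")         # bonus = 20.7%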
Calculate percentage error of pendulum
You are expected to compute $\frac {\Delta T}{T}$ as a function of $\frac {\Delta L}L$ by taking the derivative. Then you are given $\frac {\Delta L}{L}=0.05$
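A sketch of that computation, assuming the usual small-oscillation formula $T=2\pi\sqrt{L/g}$: taking logarithms, $\ln T=\ln 2\pi+\frac12(\ln L-\ln g)$, so differentiating gives $$\frac{\Delta T}{T}\approx\frac12\,\frac{\Delta L}{L}=\frac{0.05}{2}=2.5\%.$$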
Artin-Rees property in commutative Noetherian rings with unit
Suppose that $EI= Q_{1}\cap \cdots \cap Q_{n}$ where $Q_{i}$ is primary and $P_{i}=\text{rad}(Q_{i})$ for $i\in \{1,\cdots,n\}$. Suppose that $E$ is not contained in $Q_{i}$. Then there exists $x\in E$ such that $x\notin Q_{i}$. Let $z\in I$. Then $xz\in EI=Q_{1}\cap \cdots \cap Q_{n}$, so $xz\in Q_{i}$; since $Q_{i}$ is primary and $x\notin Q_{i}$, it follows that $z\in P_{i}$. Hence $I\subseteq P_{i}$. Since $R$ is a commutative Noetherian ring, there exists $k_{i}\in \mathbb{N}$ such that $P_{i}^{k_{i}}\subseteq Q_{i}$. Let $k=\max\{k_{i}\mid i\in\{1,\cdots,n\}\}$. Then for each $i$, either $E\subseteq Q_{i}$, or $I^{k}\subseteq P_{i}^{k}\subseteq P_{i}^{k_{i}}\subseteq Q_{i}$; in either case $E\cap I^{k}\subseteq Q_{i}$, and therefore $E\cap I^{k}\subseteq Q_{1}\cap\cdots\cap Q_{n}=EI$.
About the $\lim_{x\to a} \frac{f(x)}{1+|f(x)|}$
You're making it overly complicated by setting $g(x) = f(x) / |f(x)|$. Set $g(x) = 1$ instead. $|f(x)| + 1$ is always positive, and since $f(x) \le |f(x)| < 1+|f(x)|$, $$\frac{f(x)}{1+|f(x)|} \le 1$$ To critique what you already have: as the comments mention, for the case when $f(x) = x$, $$\lim_{x \to 0} \frac{f(x)}{|f(x)|}$$ does not exist. So the proof, as you currently have it, is incorrect (since you can't satisfy the hypotheses for the theorem you're using).
Jump discontinuities
Why not? Consider $f(x) = x - \lfloor x\rfloor$, where $\lfloor x\rfloor$ is the greatest integer less than or equal to $x$. This has infinitely many discontinuities, at the integers. Now consider $g(x) =f(x-a)$ where $a \in (0,1)$. I might add that the set of jump discontinuities can be at most countable.
Test to determine whether a huge series of integers is random or there is a pattern?
If you are only concerned about the frequency, then a simple computer program which goes through all the integers and counts the frequency of each will suffice. Once you have the frequency table, you can do further analysis if required. (From a frequency table, you can calculate the mean, median, standard deviation, and mode of the data, and see whether it looks like a normal distribution, etc.)
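A minimal sketch of such a program (assuming the integers sit one per line in a file, here hypothetically named data.txt):

from collections import Counter
from statistics import mean, stdev

with open("data.txt") as fh:
    values = [int(line) for line in fh]

freq = Counter(values)               # the frequency table
print(freq.most_common(10))          # the ten most frequent values
print(mean(values), stdev(values))   # summary statistics for further analysis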
How to prove this rank identity $r(A)=r(B)$
Let $M = 20142014I - A - B$. Then $AM = 20142014A - A^2 - AB = -AB$ and $MB = 20142014B - AB - B^2 = -AB$. So, $\text{rank}(AM) = \text{rank}(-AB) = \text{rank}(MB)$. Since $M$ is invertible, $\text{rank}(AM) = \text{rank}(A)$ and $\text{rank}(MB) = \text{rank}(B)$, hence $\text{rank}(A) = \text{rank}(B)$.
The angle between two unit vectors is not what I expected
If you start with the vectors $(1,0,0)$ and $(1/\sqrt{2},0,1/\sqrt{2})$, and rotate both by $45^\circ$ about the $z\text{-axis}$, then you end up with $(1/\sqrt{2},1/\sqrt{2},0)$ and $(1/2,1/2,1/\sqrt{2})$. The second point is not $(1/\sqrt{3},1/\sqrt{3},1/\sqrt{3})$ as you imagined. If you think about it, the $z\text{-coordinate}$ cannot be changed by this rotation. If the $z\text{-axis}$ is vertical, and the $x\text{-}y$ plane is horizontal, then the height of the point above the plane is not changed by rotation about the $z\text{-axis}$. The height remains $1/\sqrt{2}$, and the length of the horizontal coordinate remains $1/\sqrt{2}$ as well. That would not be the case if the final vector were what you thought it was.
Eigenvalues and eigenvectors in a symmetric matrix
You know that $A$ is a square symmetric matrix. So, from the Spectral Theorem: $A$ is symmetric $\iff$ $A$ is orthogonally diagonalizable. From that you have the following: $A$ is symmetric $\implies$ the eigenspaces of $A$ are pairwise orthogonal. This means that $V_\alpha \bot V_\beta$ for every pair of eigenvalues $\alpha \neq \beta$. From the previous you know that option (3) is false. You can find the Cartesian equation of $V_5$: $$2x - y - z = 0$$ and the parametric equation of $V_{2_1}$: $$ \begin{cases} x = 2 \alpha \\ y = -\alpha \\ z = -\alpha \end{cases} $$ So: $$n[V_5] = d[V_{2_1}] = \begin{pmatrix}2 \\ -1 \\ -1 \end{pmatrix}$$ This means that $V_{2_1} \bot V_5$, so option (1) is true. Option (4) is false because: $$n[V_5] = \begin{pmatrix}2 \\ -1 \\ -1 \end{pmatrix} \neq d[V_{2_4}] = \begin{pmatrix}1 \\ 1 \\ 1 \end{pmatrix}$$ Option (2) is true because: $A$ is symmetric $\implies$ $A$ is diagonalizable.
If a vector is a combination of two unit vectors, is this vector still a unit vector?
No. For instance, take $$(1,0),\ (0,1),\ \alpha=1/2.$$ What is $\vec{f}$? Note that $\|\vec{f}\|\neq 1$. Is it clear? Happy studying!
Find the centroid of a lamina
This problem can be simplified by recalling that the centroid of any isosceles triangle divides the height in two parts with ratio $2:1$ (with the longest part towards the vertex). In this problem we have two right isosceles triangles. The height of the large one is included between $(-4,0)$ and $(3,0)$, so the centroid is at $(-\frac{1}{2},0)$. The height of the small one is included between $(3,5)$ and $(2,6)$, so the centroid is at $(\frac{7}{3},\frac{17}{3})$. Now simply apply the standard method of averaging the two centroids, taking into account that the two areas are $49$ and $2$ and that the area of the small triangle has to be considered negative.
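Carrying out that weighted average with the numbers above (the small area counted as $-2$): $$\bar x=\frac{49\cdot\left(-\frac12\right)-2\cdot\frac73}{49-2}=-\frac{175}{282}\approx-0.62,\qquad \bar y=\frac{49\cdot 0-2\cdot\frac{17}{3}}{49-2}=-\frac{34}{141}\approx-0.24.$$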
Finitely generated projective module , of constant rank, over semi-local ring
As you say, one may reduce to the case when $R$ is a finite direct product of fields. Say $R = \prod_{i=1}^{n} F_{i}$, and $M$ has constant rank $r \in \mathbb{N}$. Let $e_{1}, \ldots, e_{n}$ be the corresponding standard idempotents of $R$, and put $J_{i} = e_{i}R$; it is both an ideal of $R$ and a ring (not a subring of $R$, though!) which is isomorphic to $F_{i}$. The maximal ideals $P_{1}, \ldots, P_{n}$ of $R$ are given by $P_{i} = \bigoplus_{j \neq i} e_{j}R$. Here is the outline of an approach, whose details I leave to you. $(1)$ Show that the localization $R_{P_{i}} \cong F_{i}$, so that $M_{P_{i}}$ is a free $F_{i}$-module of rank $r$. Observe that the isomorphism $M_{P_{i}} \cong F_{i}^{r}$ is also an isomorphism of $R$-modules, suitably interpreted. $(2)$ Show that $M \cong \prod_{i=1}^{n} J_{i}M$ as $R$-modules. $(3)$ Show that $M_{P_{i}} \cong J_{i}M$ as $R$-modules. $(4)$ Use $(1)$-$(3)$ to conclude that $M \cong R^{r}$ as $R$-modules.
Give a direct proof of the fact that $a^2-5a+6$ is even for any $a \in \mathbb Z$
$$a^2 - 5a + 6 = (a-2)(a-3) = (a-3)((a-3)+1)$$ Now can you show that one of the two factors must be even? (It would be a good exercise to establish to your satisfaction that given any two consecutive integers, exactly one of them is even.) Sketch of proof by cases: either $a$ is odd, or $a$ is even. If $a$ is odd, then $(a-3)$ is even. If $a$ is even, then $(a-2)$ is even. Since one of the factors in $(a-2)(a-3)$ is necessarily even, whatever the value of $a$, the entire product must be even.
How to prove an interval $[0, 1]$ is not a null set?!
Given a subset $X$ of $\mathbb{R}^n$, we can say that $X$ isn't a null set whenever $\operatorname{int}(X) \neq \emptyset$ (this is a sufficient criterion, not the definition: a set with nonempty interior contains an open box, which has positive volume). So, since $\operatorname{int}([0,1]) = (0,1) \neq \emptyset$, we have that $[0,1]\subset \mathbb{R}$ isn't a null set.
Double Integral $\iint\limits_D\frac{dx\,dy}{(x^2+y^2)^2}$ where $D=\{(x,y): x^2+y^2\le1,\space x+y\ge1\}$
Among the technical (as opposed to conceptual) skills used in calculus, one of those in which most students are weakest is the detail-work of trigonometry. When changing this to polar coordinates, $x^2+y^2$ becomes $r^2$ and $dx\,dy$ becomes $r\,dr\,d\theta$ and the boundary at $x^2+y^2=1$ becomes $r=1$, but what is one to do with $x+y\ge 1$? Drawing the picture, you see that that's a diagonal line going through both the highest point on the circle (the one with the largest $y$-coordinate) and the rightmost point on the circle (the one with the largest $x$-coordinate). And the constraint says $r\cos\theta+r\sin\theta \ge 1$. So you want $\theta$ to go from $0$ to $\pi/2$ and $r$ to go from something to $1$. You can write $r\ge\dfrac 1 {\cos\theta+\sin\theta}$. Then you have $$ \int_0^{\pi/2} \int_{1/(\cos\theta+\sin\theta)}^1 \frac{r \, dr \,d\theta}{r^4} = \int_0^{\pi/2}\left( \int_{1/(\cos\theta+\sin\theta)}^1 \frac{dr}{r^3} \right) d\theta. $$ The inside integral is $$ \left.\frac{-1}{2r^2} \right|_{r:=1/(\cos\theta+\sin\theta)}^1 = \frac{-1} 2 + \frac {(\cos\theta+\sin\theta)^2} {2}. $$ Then you need to integrate that from $0$ to $\pi/2$. Alternatively, you could note that by circular symmetry, the integral is the same as $$ \iint_E \frac{dx\,dy}{(x^2+y^2)^2} \text{ where } E=\left\{(x,y): x^2+y^2 \le 1\ \&\ x\ge \frac{\sqrt 2} 2 \right\}. $$
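For completeness, that last integration goes through smoothly with the identity $(\cos\theta+\sin\theta)^2=1+\sin 2\theta$: $$\int_0^{\pi/2}\left(-\frac12+\frac{1+\sin 2\theta}{2}\right)d\theta=\int_0^{\pi/2}\frac{\sin 2\theta}{2}\,d\theta=\left[-\frac{\cos 2\theta}{4}\right]_0^{\pi/2}=\frac12,$$ so the value of the double integral is $\frac12$.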
Math software to calculate in different rings.
Many computer algebra systems are capable of such computation. I mainly deal with GAP (http://www.gap-system.org), where you can do this, for example, in a very simple way like this:

gap> x:=Indeterminate(Rationals,"x");
x
gap> f:=x^6+x;
x^6+x
gap> Value(f,11);
1771572
gap> Value(f,11) mod 19;
12

just to create a polynomial with rational coefficients and then calculate a remainder using mod. Note that here $12$ is in $\mathbb{Z}$, not in $\mathbb{Z}_n$. Along the way, one should know the function PowerModInt( r, e, m ), which returns r^e mod m for integers r, e and m (e non-negative): compare the runtime of the brute-force calculation

gap> 111122^131113345 mod 139; time;
2
33203

which takes about 33 seconds, with the instantly returned result of PowerModInt:

gap> PowerModInt(111122,131113345,139); time;
2
0

This happens because PowerModInt can reduce intermediate results and thus will generally be faster than using r^e mod m, which would compute r^e first and reduce the result afterwards. A more "algebraically clean" approach could be to create a proper ring $\mathbb{Z}_n$ using Integers mod n, so that both $a$ and $f(a)$ will belong to $\mathbb{Z}_n$, for example:

gap> z42:=Integers mod 42;
(Integers mod 42)
gap> x:=Indeterminate(z42,"x");
x
gap> f:=x^6+x;
x^6+x
gap> a:=Random(z42);
ZmodnZObj( 2, 42 )
gap> Value(f,a);
ZmodnZObj( 24, 42 )

This corresponds to the fact that (2^6+2) mod 42 = 24. It depends on the circumstances of the experiment whether to use one approach or another, or to use GAP or another system, such as for example PARI/GP (see also this question). Remark: note the usage of parentheses because of the rules of precedence of mod in GAP: compare

gap> (2^6+2) mod 42;
24

with

gap> 2^6+2 mod 42;
66

because the latter is equivalent to 2^6 + (2 mod 42).
Heavily stuck on Newton-Cotes integration
First, please be more specific on a number of points: 1) are you using open or closed Newton-Cotes formulae? 2) which formulae exactly have you tried, and which terms don't you understand? Still, here is the general outline for calculating the error terms. Assume we have order $n$, and let the quadrature term be $Q_{n}(f)$. Then for the error we have $\mathrm{err}_{n}=\left|\int_{a}^{b}f(x)dx-Q_{n}(f)\right|.$ The upper bound for the error term is given in terms of $M_{n+1}:=\max_{a\leq x\leq b}|f^{(n+1)}(x)|.$ For example, for the Trapezoid rule we have $n=1$ and $\mathrm{err}_{1}\leq\frac{(b-a)^{3}}{12}M_{2}$. Hope this helps.
Function f(x) such that integral area is divided in half at the same point k where f(k)=half of the max function value
Great question! Your functions all have the property that they are of the form $$ f(x) = a b^x $$ for some constants $a$ and $b$ (with $b < 1$). It's tempting to think that something special is going on with this class of "exponential" functions. Alas, that's not true. Consider a function like $$ h(x) = \begin{cases} c & 0 \le x < 1 \\c/2 & x = 1\\ c/3 & 1 < x < 4 \\ 0 & \text{otherwise} \end{cases} $$ for any $c > 0$. It, too, has your property: the maximum value is $c$, and if we split at height $c/2$, then half the area is to the left and half the area is to the right of the split-point. We can adjust the maximum value by changing $c$, and can move the split-point by considering $k(x) = h(x-a)$, which has its "split" at $x = a+1$. So there are a lot of these functions! (And it's pretty clear how to construct others like them.) You might object that my function is not continuous, but it shouldn't be too hard to add some sloping bits to connect the straight parts and still maintain the property you describe. As for "why do your functions have this property?", let's look at $f(x) = a b^x = a \exp(x \ln b)$. Its maximum (for $0 < b < 1$) is at $x = 0$, and is $a$; the half-max is at the point where $a b^x = a/2$, so $b^x = 0.5$, so $x = \log_b \frac12$. The area to the left of this split point is \begin{align} A_1 &= \int_0^{\log_b \frac12} a \exp(x \ln b)dx \\ &= a \int_0^{\log_b \frac12} \exp(x \ln b)dx \\ &= a \left(\left. \frac{\exp(x \ln b)}{ \ln b} \right|_0^{\log_b \frac12}\right) \\ &= a \left(\left. \frac{b^x}{ \ln b} \right|_0^{\log_b \frac12}\right) \\ &= a \left(\frac{b^{{\log_b \frac12}}}{ \ln b} - \frac{b^0}{ \ln b} \right) \\ &= a \left(\frac{\frac12}{ \ln b} - \frac{1}{ \ln b} \right) \\ &= a \left(\frac{-\frac12}{ \ln b} \right) \end{align} which makes sense because for $b < 1$, we have $\ln b < 0$, so this number $A_1$ is positive. The other area is \begin{align} A_2 &= \int_{\log_b \frac12}^\infty a \exp(x \ln b)dx \\ &= a \left(\left. \frac{\exp(x \ln b)}{ \ln b} \right|_{\log_b \frac12}^\infty\right) \\ &= a \left(\left. \frac{b^x}{ \ln b} \right|_{\log_b \frac12}^\infty\right) \\ &= a \left( \lim_{c \to \infty} \frac{b^c}{ \ln b} - \frac{b^{{\log_b \frac12}}}{ \ln b} \right) \\ &= a \left( 0 - \frac{b^{{\log_b \frac12}}}{ \ln b} \right) \\ &= a \left(\frac{-\frac12}{ \ln b} \right) \end{align} which is the same as $A_1$. So your exponential functions do indeed have this unusual property. It's pretty cool that you happened to notice this -- that's how discoveries get made. It's too bad that it's not as special as it might have first appeared.
Convert statement from English to logic: "to pass philosophy it is not necessary to make notes every week"
If this was given as part of a logic assignment, it was a poor example. Many statements in English cannot be converted into logical form using the standard propositional logic ($p$, $q$, $\land$, $\lor$, $\lnot$, $\to$, etc.). This is true in your case, and the reason is the use of the term necessary. Usually, statements about necessity can only be correctly expressed with Modal logic (or some other form of quantifiers). We use $\Box$ to mean "it is necessary that". In this case, we could write the statement: $$ \lnot \Box (p \to m) $$ which we read "It is not necessarily true that if you pass, you took notes." The problem with your suggestion, $$ (m \lor \lnot m) \to p, $$ is that it says "whether you take notes or not, you will pass", which the speaker certainly did not intend to imply! User @fleablood also had a suggestion, $$ \lnot (\lnot m \to \lnot p). $$ This one is better; it seems to read as "it is not the case that if you don't take notes, you won't pass". But taking a closer look, it's logically equivalent to $\lnot m \land p$, so it just says "you won't take notes and you'll pass", which makes no sense. (The reason @fleablood's translation didn't work has a lot to do with the word "if". "If" in English usually doesn't correspond very well to the logical connective $\to$. There are whole areas of philosophy devoted to translating English statements correctly, but one popular way that seems quite effective is to use the $\Box$ I mentioned earlier.)
To be ordered or not to be ordered
If nothing is said about the order, then you can assume the order doesn't count.
Three boats and their fishing habits
The area is a determinant $a = \det (\vec{AB}, \vec{AC})$. That gives you a polynomial in $t$ whose variations you can study. After trying that on Wolfram Alpha, it looks as if the sign of the area flips, which would mean that at some point they are collinear and hence the area is zero.
density function and measure null
Your notation is slightly wrong. Instead of $P = fd\mu$ you should correctly write $dP = fd\mu$; additionally we then get $dQ = gd\mu$, $dP = hdQ$. Simply plugging in leads to: $$fd\mu = dP = hdQ = hgd\mu$$ So you get: $$f = hg$$ which is slightly different from $h = \frac{f}{g}$. But if $g \neq 0$ holds everywhere, it is true. So in general you cannot conclude $$\mu(g = 0) = 0$$ You need some further assumptions on $P$ and $Q$, or on $f$ and $h$, for that.
"Maybe" case for nth-Term Test
I'm going to take a stab at it. Consider the power series $$ \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} (x-1)^n. $$ With $x=2$ you get $$ \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} $$ which converges, and $c_n \to 0$. With $x=0$ you get $$ -\sum_{n=1}^\infty \frac{1}{n} $$ which now diverges even though $c_n \to 0$. In both cases, the terms $a_n=\frac{(-1)^{n+1}}{n} (x-1)^n$ go to $0$ termwise. Therefore, be very careful. Is this an answer?
sum of terms in rows of triangle
The $n$th row of Pascal's Triangle contains, reading left to right, $$\binom{n}{0},\binom{n}{1},\binom{n}{2},\cdots,\binom{n}{n-1},\binom{n}{n}$$ where $$\binom{n}{r}=\frac{n!}{r!(n-r)!}$$ Hence the sum you're looking for is $$ \sum_{r=0}^{n}{\binom{n}{r}}$$ which evaluates to $2^n$. Hence the sum of the $n$th row of Pascal's Triangle is $2^n$.
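One way to see that evaluation is to apply the Binomial Theorem to $(1+1)^n$: $$2^n=(1+1)^n=\sum_{r=0}^{n}\binom{n}{r}1^{r}\,1^{n-r}=\sum_{r=0}^{n}\binom{n}{r}.$$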
Probability of an invalid "minefield" in riichi mahjong
There are $9$ choices for the non-honor cards in each suit, so we have $9^3=729$ ways to choose which tile values will occur in the hand. In any event there will be $25$ distinct tile values. In the first case, there are $\binom{25}{3}$ ways to choose which tiles will be four-of-a-kinds, and $4^{22}$ ways to choose the tiles that occur only once. Of course there's only one way to choose which tiles make up a four-of-a-kind. Therefore, case $1$ gives $$9^3\cdot4^{22}\binom{25}{3}$$ possibilities. In the second case, we have $\binom{25}{2}$ ways to choose the ranks of the four-of-a-kinds, and then $\binom{23}{3}$ ways to choose the ranks of the pairs. For the pairs, we have $\binom{4}{2}=6$ ways to choose which tiles make up the pair. This gives $$9^3\cdot3^6\cdot4^{20}\binom{25}{2}\binom{23}{3}$$ possibilities for case $2$. I computed the probability to be $$2.408361017451432\cdot10^{-9}$$
What is a principal formula??
See page 15: The formula with the connective in a rule is the principal formula of that rule, and its components in the premisses are the active formulas. The Greek letters denote possible additional assumptions that are not active in a rule; they are called the contexts of the rules. The principal formula is the one "acted on" by the rule: reading the rule bottom-up, the principal formula is "decomposed" by the rule according to the corresponding connective, while the formulae in the context are left unchanged.
GCD as linear combination of two numbers
That result is known as Bézout's identity and it is very useful to solve many problems in number theory, as for example the calculation of the modular inverse. Take also a look here: What is the importance of Bézout's identity?
Considering the differential exact sequence $I/I^2\to\Omega_{S/R}\otimes_S S'\to\Omega_{S'/R}\to0$ as a chain of $S'$-modules
Let $\varphi:R\to S$ be any surjective map of rings with $I=\ker\varphi$. Then we give $I/I^2$ a natural $S$-module structure as follows: for $\overline{x}\in I/I^2$ and $\varphi(r)\in S$, we set $\varphi(r)\overline{x} = \overline{rx}$, and note that $\varphi(r)=0$ implies that $r\in I$, so that $rx\in I^2$ and hence $\overline{rx}=0$; this shows the action is well-defined.
How to compute the following formula?
Let $X_n$ denote the state of the system after $n$ broadcasts, encoding $(k_A,k_B)$, where $k_A$ and $k_B$ are the numbers of messages received by $A$ and $B$ respectively. The termination condition is $$ \Omega = \{ (k_A,k_B) \colon \min(k_A, k_B) = k \} $$ Let $T = \inf\{n \colon X_n \in \Omega, X_{n-1} \not\in \Omega \}$. We are interested in determining $\mathbb{E}(T)$. The probability that $T$ equals $n$ is computed by conditioning on the transition $\Omega \not\ni X_{n-1} \to X_n \in \Omega$: $$ \begin{eqnarray} \mathbb{P}\left(T=n\right) &=& \mathbb{P}(K_A=k-1)\mathbb{P}(K_B \geqslant k) p_1 \\ &\phantom{=}& + \mathbb{P}(K_B=k-1)\mathbb{P}(K_A \geqslant k) p_2\\ &\phantom{=}& + \mathbb{P}(K_A=k-1) \mathbb{P}(K_B=k-1) p_1 p_2 \\ &=& \binom{n-1}{k-1} p_1^{k} q_1^{n-k} \sum_{m=k}^{n-1} \binom{n-1}{m} p_2^m q_2^{n-1-m} \\ &+& \binom{n-1}{k-1} p_2^{k} q_2^{n-k} \sum_{m=k}^{n-1} \binom{n-1}{m} p_1^m q_1^{n-1-m} \\ &+& \binom{n-1}{k-1} p_2^{k} q_2^{n-k} \binom{n-1}{k-1} p_1^{k} q_1^{n-k} \end{eqnarray} $$ I am not able to compute the expected value in closed form; however, I am able to use Mathematica to find means for specific values of $k$:

pdf[{k_Integer, p1_, p2_}, n_] := FullSimplify@FunctionExpand@PiecewiseExpand[
  Refine[p1 PDF[BinomialDistribution[n - 1, p1], k - 1] SurvivalFunction[BinomialDistribution[n - 1, p2], k - 1], k \[Element] Integers] +
  Refine[p2 PDF[BinomialDistribution[n - 1, p2], k - 1] SurvivalFunction[BinomialDistribution[n - 1, p1], k - 1], k \[Element] Integers] +
  Refine[p1 p2 PDF[BinomialDistribution[n - 1, p2], k - 1] PDF[BinomialDistribution[n - 1, p1], k - 1], k \[Element] Integers]]

The density can be computed in closed form for explicit values of $k$, and Sum can be used to find the mean:

In[61]:= With[{k = 1},
  Sum[n Assuming[n >= k, FullSimplify@Refine@FunctionExpand[pdf[{k, p1, p2}, n]]],
    {n, k, \[Infinity]}, Assumptions -> 0 < p1 < 1 && 0 < p2 < 1]]

Out[61]= (-p1^2 - p1 p2 + p1^2 p2 - p2^2 + p1 p2^2)/(p1 p2 (-p1 - p2 + p1 p2))
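For readers without Mathematica, here is a rough Python equivalent of that density (a sketch using SciPy; the name pmf_T is my own):

from scipy.stats import binom

def pmf_T(n, k, p1, p2):
    # P(T = n): one receiver reaches k on broadcast n while the other
    # already has at least k, or both reach k simultaneously.
    a = p1 * binom.pmf(k - 1, n - 1, p1) * binom.sf(k - 1, n - 1, p2)
    b = p2 * binom.pmf(k - 1, n - 1, p2) * binom.sf(k - 1, n - 1, p1)
    c = p1 * p2 * binom.pmf(k - 1, n - 1, p1) * binom.pmf(k - 1, n - 1, p2)
    return a + b + c

# Truncated mean for k = 1, p1 = 0.3, p2 = 0.5; about 3.79,
# matching the closed form in Out[61].
print(sum(n * pmf_T(n, 1, 0.3, 0.5) for n in range(1, 5000)))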
Simple everyday math
The website http://www.cut-the-knot.org/ is a wonderful resource for engaging puzzles and lessons in basic mathematics. And even if you really just prefer a physical book as opposed to a website, Cut-the-Knot also has a great list of interesting problem books you can order off Amazon: http://astore.amazon.com/ctksoftwareinc.
A Geometrical Interpretation of the line integral of $f$ along $C$ with respect to $x$ and $y$.
As you said, the curve goes back and forth in the $y$ direction. Let's assume the curve $C$ does not have that little twist in the $x$ direction at the beginning. This means that along $C$, $y$ is a function of $x$. Then the line integral with respect to $x$ would be $$\int_{x_1}^{x_2} f(x,y(x))\sqrt{1+\left(\frac{dy}{dx}\right)^2}\,dx$$ where $\sqrt{1+\left(\frac{dy}{dx}\right)^2}\,dx$ is the infinitesimal arc length of each subinterval. So it is exactly as in the picture. Now for the $y$ direction, since $x$ is not a function of $y$ along all of $C$, we have to separate the curve into parts. In this case, we have three parts $C_1,C_2,C_3$, as shown below. On each subcurve, we can do the same thing as before: $$\int_{y_1}^{y_2} f(x(y),y)\sqrt{1+\left(\frac{dx}{dy}\right)^2}\,dy$$ where $\sqrt{1+\left(\frac{dx}{dy}\right)^2}\,dy$ is the arc length of a subinterval.
Finding supporting planes and bounds for function
The normal vector of the supporting plane at the point $p=(p_1,p_2,p_3)$ is $$\nabla f(p) = \{2p_1, 4p_2, 6p_3\}.$$ The equation of the supporting plane at the point $p_1 = (1,1,1)$ is $$f(p_1) + \nabla f(p_1)\cdot(x-p_1) = 0,$$ $$1^2 + 2\cdot1^2 + 3\cdot 1^2 + \{2,4,6\}\cdot\{x_1-1,x_2-1,x_3-1\}=0,$$ or $$f_1(x) = 2x_1+4x_2+6x_3-6=0.$$ The equation of the supporting plane at the point $p_2 = (-1,2,1)$ is $$f(p_2) + \nabla f(p_2)\cdot(x-p_2) = 0,$$ $$(-1)^2 + 2\cdot2^2 + 3\cdot 1^2 + \{-2,8,6\}\cdot\{x_1+1,x_2-2,x_3-1\}=0,$$ or $$f_2(x) = -2x_1+8x_2+6x_3-12=0.$$ The least value of the function is attained either at a stationary point or on the boundary of the region. The stationary points of $f(x)$ can be found from the equation $\nabla f(p_m) = 0$; then $$p_m = \{0,0,0\},\quad f(p_m) = 0.$$ The boundary of the region lies on the planes $x_1 = \pm 10,\ x_2=\pm10,\ x_3=\pm10.$ \begin{vmatrix} x_1 & x_2 & x_3 & f(x) & \in\\ -10 & [-10,10] & [-10,10] & 2x_2^2+3x_3^2+100 & [100,600]\\ 10 & [-10,10] & [-10,10] & 2x_2^2+3x_3^2+100 & [100,600]\\ [-10,10] & -10 & [-10,10] & x_1^2+3x_3^2+200 & [200,600]\\ [-10,10] & 10 & [-10,10] & x_1^2+3x_3^2+200 & [200,600]\\ [-10,10] & [-10,10] & -10 & x_1^2+2x_2^2+300 & [300,600]\\ [-10,10] & [-10,10] & 10 & x_1^2+2x_2^2+300 & [300,600]\\ \end{vmatrix} The least value $\color{brown}{f(p_m)=0}$ is attained at the stationary point (minimum) $\color{brown}{p_m=\{0,0,0\}}.$ The function $f(x)$ is convex, so the upper bound is $\color{brown}{\sup(\min f(x))=100}.$ Let us find the lower bounds for $\min f(x).$ The least value of a linear function on the rectangular region is attained at a vertex of the region. \begin{vmatrix} x_1 & x_2 & x_3 & f_1(x) & f_2(x)\\ -10 & -10 & -10 & -126 & -132\\ -10 & -10 & 10 & -6 & -12\\ -10 & 10 & -10 & -46 & 28\\ -10 & 10 & 10 & 74 & 148\\ 10 & -10 & -10 & -86 & -172\\ 10 & -10 & 10 & 34 & -52\\ 10 & 10 & -10 & -6 & -12\\ 10 & 10 & 10 & 114 & 108\\ \end{vmatrix} The supporting plane $f_1(x)$ leads to the lower bound $\inf(\min f(x)) = \min f_1(x) = -126,$ and the supporting plane $f_2(x)$ leads to the lower bound $\inf(\min f(x)) = \min f_2(x) = -172.$ The best estimate is $\color{brown}{\inf(\min f(x)) = -126}.$
Why does this equality imply that the inverse image is open?
Each $g_i^{-1}(I_i)$ is open in $X$ because each $g_i:X\to E$ is continuous and each $I_i$ is open in $E.$ And $\prod_{i=1}^nI_i$ is open in $E^n.$ Therefore, by the last equality in the text, $g^{-1}\left(\prod_{i=1}^nI_i\right)=\bigcap_{i=1}^n g_i^{-1}(I_i)$ is open in $X.$ So we have a base $B$ for $E^n$ such that $g^{-1}b$ is open in $X$ for every $b\in B.$ Every open $S$ in $E^n$ is equal to $\cup C$ for some $C\subset B$; therefore $g^{-1}S=\cup_{b\in C}g^{-1}b$ is open in $X.$
Calculate work of machines
Hint: Let $A$, $B$, $C$, $D$ represent the number of jobs each machine can do individually per 120 hours. Then: $$A+B+C={120 \over 24}=5$$ $$B+C+D={120 \over 60}=2$$ $$D={120 \over 120}=1$$ Add the first and third equations to yield $A+B+C+D=6$. Then subtract the second equation to yield $A=4$. So machine A does 4 jobs in 120 hours, so one job takes 30 hours.
a/b c/b d/r steps to follow
Let $d$ divide $n$. Therefore, for some natural $c$, $n=cd$. Assume $d$ also divides $n+1$, and therefore, for some natural $f$, $n+1=fd$. Since $n+1>n$ it follows $f>c$ so $f=c+g$ for some natural $g$. From this it follows $n+1=d(c+g)=cd+gd$. However $cd=n$ so we have $n+1=n+gd \Rightarrow 1=gd$. This is a contradiction! We are given $d>1$ and we have defined $g$ as a natural number, so $gd>1$, which contradicts our result $gd=1$. Thus, for all $d>1$ and natural $n$ if $d$ divides $n$ then $d$ does not divide $n+1$.
Some questions about $\sum_{n=1}^{\infty} (2z)^{-n^2}$
For convenience, let $G(z)=F(\frac1z)$. Proving the fact that $F$ is holomorphic in $|z|>\frac12$ is equivalent to proving that $G$ is holomorphic in $|z|<2$ (we'll call this region $\Gamma$). To prove it, consider $\Gamma_k:=\{|z|\le k\}$ for $k<2$. Then, $\forall z\in \Gamma_k$, we have $|z|\le k$, $$\sum_{n=1}^{\infty}\left(\frac{k}{2}\right)^{n^2}<\infty,\qquad G(z)=\sum_{n=1}^\infty \left(\frac z2\right)^{n^2}.$$ By the Weierstrass M-test, the series converges uniformly on all of the $\Gamma_k$. Since every compact subset of $\Gamma$ is contained in at least one of the $\Gamma_k$, we have proved the uniform convergence, on every compact subset of $\Gamma$, of the partial sums $G_n$, which are analytic (since they are polynomials), and thus we have proved that $G$ is analytic in $\Gamma$. To compute the integral, let us use the substitution $u=\frac{1}{z}; -\frac{du}{u^2}=dz$: $$\oint_{C_1(0)}z^kF(z)dz=\oint_{C_1(0)}\frac{1}{u^{k+2}}G(u)du=\\ =\oint_{C_1(0)}\sum_{n=1}^{\infty} \frac{{u}^{n^2-k-2}}{2^{n^2}}du=\sum_{n=1}^{\infty}\frac{1}{2^{n^2}}\oint_{C_1(0)}u^{n^2-k-2}du$$ (we used the fact that the transformation $z=\frac{1}{u}$ changes the orientation of the unit circle and that, since $C_1(0)\subset \Gamma_{1}$, the convergence of the series is uniform, so we can exchange the integral and the sum). Remembering that $\oint z^adz=2\pi i\cdot \delta^a_{-1}$, the result is $$\sum_{n=1}^{\infty}\frac{1}{2^{n^2}}\oint_{C_1(0)}u^{n^2-k-2}du=\sum_{n=1}^{\infty}\frac{2\pi i}{2^{n^2}}\delta^{n^2-k-2}_{-1}=\begin{cases}\frac{\pi i}{2^k}\ \ \text{if}\ \exists n:n^2=k+1\\0\ \ \text{otherwise}\end{cases}$$
Rationalizing mixed denominators?
Multiply top and bottom by all the "relatives" $5+\sqrt{2}-\sqrt{3}$, $5+\sqrt{2}+\sqrt{3}$ and $5-\sqrt{2}-\sqrt{3}$. The new denominator is invariant under replacement of $\sqrt{2}$ by $-\sqrt{2}$, also under replacement of $\sqrt{3}$ by $-\sqrt{3}$, so it must be rational.
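Concretely, assuming the denominator in question is $5-\sqrt2+\sqrt3$ (the remaining "relative"), pairing conjugate factors gives $$(5-\sqrt2+\sqrt3)(5-\sqrt2-\sqrt3)=(5-\sqrt2)^2-3=24-10\sqrt2,$$ $$(5+\sqrt2+\sqrt3)(5+\sqrt2-\sqrt3)=(5+\sqrt2)^2-3=24+10\sqrt2,$$ so the new denominator is $(24-10\sqrt2)(24+10\sqrt2)=576-200=376$, which is indeed rational.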
How does the representation of co-vectors change if we change the basis of a vector space $V$?
Let us use upper indices for row(-vectors) and lower index for column(-vectors). Let $x$ be an arbitrary vector, and $[x]$ its coordinates in the standard basis. Let $V=\left([v_1]\mid [v_2]\mid\ldots [v_n]\right)$ be the matrix of stacked coordinate columns-vectors of the first basis, and similarly $W$ for the second basis $\{w_i\}$. If the coordinates of a vector in the two bases are related by $$[x]_w=T[x]_v,$$ then, identifying left-most and right-most sides in $V[x]_v=[x]=W[x]_w=W(T[x]_v)=(WT)[x]_v$ we have $$V=WT$$ Let $V^{\prime}=\left(\begin{smallmatrix} [v^1]\\ [v^2] \\ \cdots\\ [v^n]\end{smallmatrix}\right)$ be the matrix of stacked row-vectors of the first dual basis, and similarly $W^{\prime}$ for the second dual basis $\{w^i\}$. Using the dual basis we can write $[x]_v=\left(\begin{smallmatrix} v^1(x)\\ v^2(x) \\ \cdots\\ v^n(x)\end{smallmatrix}\right)=V^{\prime}[x]$ and similarly for the $[x]_w$. From $TV^{\prime}[x]=T[x]_v=[x]_w=W^{\prime}[x]$ we have $$W^{\prime}=TV^{\prime}$$ Finally, for any dual vector (co-vector) $\alpha$ we can write $[\alpha]=[\alpha]_{w^{\prime}}W^{\prime}=[\alpha]_{w^{\prime}}TV^{\prime}$ and therefore, $$[\alpha]_{v^{\prime}}=[\alpha]_{w^{\prime}}T$$
Use the definition to prove that $\lim_{n \rightarrow \infty} b^{\frac1n} = 1$, where $b>0$.
Let $b>0$. Consider the following cases: Case 1. $b=1$. The result is trivial. Case 2. $b>1$. In this case, we need to show the following claim: Claim: For all $n\in\Bbb N$ and $b>1$, we have $b^{\frac{1}{n}}-1\leq \frac{b-1}{n}.$ Proof. For each $n\in\Bbb N$, write $d_n=b^{\frac{1}{n}}-1$. Because $b>1$, it follows that $d_n>0$ for each $n\in\Bbb N$. Thus, by using the Bernoulli inequality, we get $$b=(1+d_n)^n\geq 1+nd_n\quad \forall n\in\Bbb N.$$ Hence for each $n\in\Bbb N$, we get $$d_n\leq\frac{b-1}{n}$$ and this proves our claim. We shall now prove the problem under case 2. Let $\epsilon>0$. Since $b>1$, we have $b-1>0$ and so $\frac{\epsilon}{b-1}>0$. By using the Archimedean Property, $\exists N\in\Bbb N$ such that $$\frac{1}{N}<\frac{\epsilon}{b-1}.$$ Thus, if $n\geq N$ then $$\begin{align} |b^{\frac{1}{n}}-1|&=b^{\frac{1}{n}}-1\\ &\leq\frac{b-1}{n}\\ &\leq\frac{b-1}{N}<\epsilon. \end{align}$$ Hence, if $b>1$ then $$\lim_{n\to\infty}b^{\frac{1}{n}}=1.$$ This proves case 2. Case 3. $0<b<1$. Then $\frac{1}{b}>1$ and then applying Case 2, we get $$\lim_{n\to\infty}\bigg(\frac{1}{b}\bigg)^{\frac{1}{n}}=1,$$ that is, $$\lim_{n\to\infty}\frac{1}{b^{\frac{1}{n}}}=1,$$ that is, if $0<b<1$ then $$\lim_{n\to\infty}b^{\frac{1}{n}}=1.$$
Understanding Lemma 13.2 in Munkres' Topology
I'm not entirely sure what you're asking, but I think the following should clarify whatever you're confused about. If $\mathcal{T}$ is a specific topology, then "$\mathcal{C}$ is a basis for $\mathcal{T}$" is defined to mean that $\mathcal{C}$ is a basis for a topology on $X$ and $\mathcal{T}_\mathcal{C}=\mathcal{T}$. So Lemma 13.2 is stating that if a topology $\mathcal{T}$ is given (namely, the given topology on $X$) and $\mathcal{C}$ satisfies the stated hypotheses, then $\mathcal{C}$ is a basis for a topology and in fact the topology it generates is $\mathcal{T}$.
Prove that any subspace of $V$ that contains both $W_{1}$ and $W_{2}$ must also contain $W_{1}+W_{2}$.
Your solution is almost correct, except it seems you've only shown that $W_1+W_2$ is indeed a subspace in part a), when it also asks you to show it contains $W_1$ and $W_2$. This is straightforward to show and I'll leave that for you to try.
What is meant by a transformation being linear?
A linear transformation is one where $T(\alpha x) = \alpha T(x)$ and $T(x + y ) = T(x) + T(y)$. That's all there is to it. $T((x, y)) = (2x + y, y, 0)$ is linear because $T(c(x,y)) = T((cx,cy)) = (2cx + cy, cy, 0) = c(2x + y, y, 0) = cT((x,y))$ and $T((x,y) + (w,z)) = T((x+w, y + z)) = (2x + 2w + y + z, y + z, 0) = (2x +y,y,0) + (2w + z, z,0) = T((x,y)) + T((w,z))$. But $T((x,y)) = (2x + y, xy, 0)$ is not, because $T(c(x,y)) = (2cx + cy, c^2xy, 0) \ne (2cx + cy, cxy, 0) = cT((x,y))$. Also $T((x,y) + (w,z)) \ne T((x,y)) + T((w,z))$. ==== A trick to notice: if the transformation involves adding a scalar other than 0, it will not be linear. ($f(x) = x + c \Rightarrow f(2x) = 2x + c; c \ne 2c; f(x + y) = x + y + c \ne x + c + y + c$.) If the transformation involves multiplying two terms together or taking a power of a term, it will not be linear. ($f(x) = x^2 \Rightarrow f(2x) = (2x)^2 \ne 2(x^2); f(x + y) = (x +y)^2 \ne x^2 + y^2$.) But if the transformation only multiplies terms by scalars and adds terms, it is linear. $f((x,y)) = \sum_i (c_ix + d_iy) \implies f(a(x,y) + e(w,z)) = \sum_i (c_iax + d_iay + c_iew + d_iez) = a\sum_i(c_ix + d_iy) + e\sum_i(c_iw + d_iz) = af((x,y)) + ef((w,z))$ ==== Intuition. If you travel along a line, the distance you change vertically is proportional to the distance you change horizontally. Lines are consistent. If you double the input, you double the output. If you add two inputs together you get the two outputs added together. If you travel along anything that isn't a line, that isn't true any more. That's why they call them "linear". BUT there is a caveat. In high school algebra lines could have y-intercepts, which throw these proportions off. It's only the "steepness" that is linear. In linear algebra they call those types of functions "affine". They are sort of "linear" but with a constant "slide" displacement added to them.
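A quick numeric sanity check of the two properties for the linear example above (a sketch):

import numpy as np

def T(v):
    x, y = v
    return np.array([2 * x + y, y, 0.0])   # T((x, y)) = (2x + y, y, 0)

u, w, c = np.array([1.0, 2.0]), np.array([-3.0, 0.5]), 4.0
assert np.allclose(T(c * u), c * T(u))       # homogeneity
assert np.allclose(T(u + w), T(u) + T(w))    # additivity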
Expand $(\frac{2}{x}-x)^8$
The function is: $$ f(x) = \left( \frac{2}{x} - x \right)^8 = \frac{(2-x^2)^8}{x^8} = x^8 \, \left(\frac{2}{x^2} - 1 \right)^8. $$ These have the expansions: \begin{align} f(x) &= \sum_{k=0}^{8} \binom{8}{k} \, \left(\frac{2}{x}\right)^{8-k} \, (-x)^k \\ &= \frac{1}{x^8} \, \sum_{k=0}^{8} \binom{8}{k} \, 2^{8-k} \, (-1)^k \, x^{2k} \end{align} other expansions follow in a similar manner.
Pigeonhole principle based algorithm
Your argument is correct and nice. I would generalize it: "Let $S$ be a finite set. Let $d$ and $k$ be two nonnegative integers. Let $\sim$ be an equivalence relation on $S$ such that there are at most $d$ equivalence classes with respect to $\sim$. Assume that $\left|S\right| \geq d + 2k-1$. Then, we can find $2k$ distinct elements $s_1, s_2, \ldots, s_k, t_1, t_2, \ldots, t_k$ of $S$ such that $s_1 \sim t_1$, $s_2 \sim t_2$, ..., $s_k \sim t_k$." Now that you have turned the $k$ into a proper variable, you can do induction over $k$; this gives a clearer writeup than the algorithm (but of course is just a recursive rewording of the latter). Yes, because $11 \geq 4 + 2 \cdot 4 - 1$. Note that introducing a $4$-th integer into the original problem does not make it stronger, because $a^2 + b^2 + c^2 + d^2 \equiv e^2 + f^2 + g^2 + h^2 \mod 12$ does not imply $a^2 + b^2 + c^2 \equiv e^2 + f^2 + g^2 \mod 12$. Only once you have strengthened $a^2 + b^2 + c^2 \equiv e^2 + f^2 + g^2 \mod 12$ to $a \equiv e \mod 12,\ b \equiv f \mod 12,\ c \equiv g\mod 12$ does the $4$-th integer become an obvious boon.
Antiderivative of $e^{2\arctan{x}}$
The "really weird expression" is one involving Gaussian hypergeometric functions, which means that there is no way of representing this antiderivative by elementary functions only. Here is the result.
Solving an equation to find an inverse function: $x=(e^{y}-e^{-y})/2$
Here you multiply through by $2 e^{y}$ on both sides to get $$e^{2 y} - 2 x e^{y} - 1 = 0$$ Solve for $e^{y}$: $$e^y = x \pm \sqrt{x^2+1}$$ and get $$y = \log{\left ( x \pm \sqrt{x^2+1} \right ) }$$ Which sign to use? If $y$ is real, then of course use the positive sign, since $e^y>0$ while $x - \sqrt{x^2+1}<0$. This is, of course, $\sinh^{-1} x$.
Prove the intersection of a compact set and a set with no accumulation points is finite
Your approach works if you combine your two ideas: Form an open set around each point in $K$, such that it contains only finitely many points of $S$. These open sets cover $K$, and since $K$ is compact, extract a finite subcover, each element of which only contains finitely many points of $S$.
Die that never rolls the same number consecutively
An impressive array of heavy machinery has been brought to bear on this question. Here's a slightly more pedestrian solution. The $6\cdot5^{999}$ admissible sequences of rolls are equiprobable, so we need to count how many of them contain $k$ occurrences of a given number, say, $6$. Let $a_{jn}$ be the number of admissible sequences of length $n$ with exactly $j$ $6$s that begin and end with a non-$6$. Fix a non-$6$ at the beginning, glue a non-$6$ to the right of each of the $j$ $6$s and choose $j$ slots for the resulting glued blocks out of $n-j-1$ objects ($j$ glued blocks and $n-2j-1$ unglued non-$6$s). There are $5^{j+1}4^{n-2j-1}$ options for the non-$6$s, so that makes $$ a_{jn}=\frac54\cdot4^n\left(\frac5{16}\right)^j\binom{n-j-1}j $$ sequences. A sequence with $k$ $6$s can begin and end with non-$6$s, for a contribution of $a_{k,1000}$, begin with a $6$ and end with a non-$6$ or vice versa, for a contribution of $2a_{k-1,999}$, or begin and end with a $6$, for a contribution of $a_{k-2,998}$, for a total of $$ a_{k,1000}+2a_{k-1,999}+a_{k-2,998}=4^{1000}\left(\frac5{16}\right)^k\left(\frac54\binom{999-k}k+2\binom{999-k}{k-1}+\frac45\binom{999-k}{k-2}\right)\;. $$ Dividing by the total number of admissible sequences yields the probability of rolling $k$ $6$s: $$ \frac56\left(\frac45\right)^{1000}\left(\frac5{16}\right)^k\left(\frac54\binom{999-k}k+2\binom{999-k}{k-1}+\frac45\binom{999-k}{k-2}\right)\;. $$ We could use this to calculate the expectation and variance, but this is more easily done using the linearity of expectation. By symmetry, the expected number of $6$s rolled is $\frac{1000}6=\frac{500}3=166.\overline6$. With $X_i$ denoting the indicator variable for the $i$-th roll being a $6$, the variance is \begin{align} \operatorname{Var}\left(\sum_iX_i\right) &amp;= \mathbb E\left(\left(\sum_iX_i\right)^2\right)-\mathbb E\left(\sum_iX_i\right)^2 \\ &amp;= 1000\left(\frac16-\frac1{36}\right)+2\sum_{i\lt j}\left(\textsf{Pr}(X_i=1\land X_j=1)-\textsf{Pr}(X_i=1)\textsf{Pr}(X_j=1)\right) \\ &amp;= 1000\left(\frac16-\frac1{36}\right)+2\sum_{i\lt j}\textsf{Pr}(X_i=1)\left(\textsf{Pr}(X_j=1\mid X_i=1)-\textsf{Pr}(X_j=1)\right)\;. \end{align} The conditional probability $p_{j-i}=\textsf{Pr}(X_j=1\mid X_i=1)$ depends only on $j-i$; it satisfies $p_{k+1}=\frac15(1-p_k)$ and has initial value $p_1=0$, with solution $p_k=\frac{1-\left(-\frac15\right)^{k-1}}6$. Thus \begin{align} \operatorname{Var}\left(\sum_iX_i\right) &amp;= 1000\left(\frac16-\frac1{36}\right)-\frac1{18}\sum_{k=1}^{999}(1000-k)\left(-\frac15\right)^{k-1} \\ &amp;= 1000\left(\frac16-\frac1{36}\right)-\frac1{18}\cdot\frac{25}{36}\left(\frac65\cdot1000-1+\left(-\frac15\right)^{1000}\right) \\ &amp;=\frac{60025-5^{-998}}{648} \\ &amp;\approx\frac{60025}{648} \\ &amp;\approx92.63\;, \end{align} in agreement with mercio's result.
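A quick Monte Carlo sanity check of the mean and variance (a sketch; the sample size is arbitrary):

import random

def count_sixes(n=1000):
    # Roll n times, never repeating the previous face; count the 6s.
    prev, count = None, 0
    for _ in range(n):
        roll = random.choice([v for v in range(1, 7) if v != prev])
        count += (roll == 6)
        prev = roll
    return count

samples = [count_sixes() for _ in range(5000)]
m = sum(samples) / len(samples)
v = sum((s - m) ** 2 for s in samples) / (len(samples) - 1)
print(m, v)   # should land near 166.67 and 92.63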
Finding singular points and computing dimension of tangent spaces (only for the brave)
I'll use the more comfortable notations $w,x,y,z$ in place of $x_0,x_1,x_2,x_3$ and assume the base field has characteristic zero. i) The curve $V$ is isomorphic to the plane curve $V'$ in the $x,y$ plane defined by $y^2=x^3$. The isomorphism is $p:V\to V':(x,y,z)\mapsto (x,y)$ with inverse $p^{-1}:V'\to V:(x,y)\mapsto (x,y,x^3)$. The curve $V'$ is irreducible because the polynomial $y^2-x^3$ is irreducible (or because it is the image of $\mathbb A^1\to \mathbb A^2: t\mapsto (t^2,t^3)$). Hence the isomorphic curve $V$ is irreducible too. The only singularity of $V'$ is $(0,0)$, where the tangent space has dimension $2$ (immediate from the Jacobian criterion you mention), so that $V$ has $(0,0,0)$ as its only singularity, with tangent space of dimension $2$. ii) The surface $Y$ is irreducible because the polynomial $xy^2-z^3\in k[w,x,y,z]$ that defines it is irreducible (notice that it is of degree $1$ in $x$). The absence of the variable $w$ in the equation indicates that $Y$ is the cone with vertex $[1:0:0:0]$ over the curve $C\subset \mathbb P^2_{x:y:z}=V(w)$ with equation $xy^2-z^3=0$. The curve $C$ has $[1:0:0]$ as its only singularity (cf. Jacobian), hence your surface has the line $V(y,z)\subset \mathbb P^3$ as its set of singularities. At every point of that line the tangent space has dimension $3$. Tricks of the trade: a) You can use the Jacobian also in projective space. b) In an affine or projective space of dimension $n$, a hypersurface has tangent space of dimension $n$ at a singularity and of dimension $n-1$ at all non-singular points. Edit: Cones. Since Jonathan asks, here is why a homogeneous polynomial $f(x,y,z)$ not involving $w$ defines the cone $C\subset \mathbb P^3$ with vertex $S=[1:0:0:0]$ over the projective curve $V(f)\subset \mathbb P^2_{0:x:y:z}$: A point $R$ on the line joining $S$ to a point $Q=[0:a:b:c]\in V(f)$ has coordinates $[u:va:vb:vc]$ for some $[u:v]\in \mathbb P^1$. Since $f(R)=f(va,vb,vc)= v^{\deg(f)} f(a,b,c)=0$, we see that indeed $R\in C$ and $C$ is the claimed cone.
Question in proving Nakayama's Lemma
Even in the general case, the Jacobson radical is still a proper ideal, and no proper ideals contain units. Why do you think your argument is supposed to work for the case of a single maximal ideal? Here is an example of a proper ideal $I$ such that $IM = M$. Take $R = C[0,1]$, the continuous real-valued functions on $[0,1]$, and let $M = \mathbb{R}$, with the action $f*a = f(0)a$. Exercise: check that $M$ is a finitely generated $C[0,1]$-module. Let $I$ be the ideal of functions vanishing at $1$. Then $IM = M$. But $I$ contains no units. Of course, this kind of example won't work for a local ring, since any ideal is already contained in the Jacobson radical (i.e. the maximal ideal), but it serves to show that you can't conclude the statement merely by appealing to an ideal lacking units.
How to find the length of a side using properties of triangle?
Since $AF//ED$, we have that $\frac{EC}{CF}=\frac{DC}{CA} \Rightarrow \frac{18}{24}=\frac{12}{CA} \Rightarrow CA=16\ (1).$ Now since $GB//EF$, we have that $GB//FC$. Thus in the triangle $ACF$ we get that $\frac{AG}{GF}=\frac{AB}{BC}$, and so $AB=BC\ (2)$, because it is given that $AG=GF$. Finally, combining $(1)$ and $(2)$, we get $16=CA=AB+BC=2AB\Rightarrow AB=8$.
Integration start idea
The antiderivative is not an elementary function. This can be proven using the Risch algorithm.
how to find derivative of $x^2\sin(x)$ using only the limit definition of a derivative
\begin{align*} &\dfrac{(x+h)^{2}\sin(x+h)-x^{2}\sin x}{h}\\ &=\dfrac{(x+h)^{2}\sin(x+h)-x^{2}\sin(x+h)+x^{2}\sin(x+h)-x^{2}\sin x}{h}\\ &=\dfrac{((x+h)^{2}-x^{2})\sin(x+h)}{h}+\dfrac{x^{2}(\sin(x+h)-\sin x)}{h}\\ &=\dfrac{(2hx+h^{2})\sin(x+h)}{h}+\dfrac{x^{2}(\sin x\cos h+\cos x\sin h-\sin x)}{h}\\ &=(2x+h)\sin(x+h)+x^{2}\sin x\cdot\dfrac{\cos h-1}{h}+x^{2}\cos x\cdot\dfrac{\sin h}{h}\\ &\rightarrow 2x\sin x+x^{2}\cos x. \end{align*}
Search for an element $g$ of $S_{4}$ such that $\langle g \rangle H = H\langle g \rangle$
We have $|H|=4$, $|N|=8$, with $H \lhd N$. So any element $g \in N \setminus H$ satisfies $H\langle g \rangle = \langle g \rangle H$. If $h$ is any element of $S_4$ of order $3$, then $|\langle h \rangle| =3$, so $N\langle h \rangle$ and $\langle h \rangle N$ must have order $24$ and hence they are the whole of $S_4$, so they are equal.
Calculus - limit of a function: $\lim\limits_{x \to {\pi \over 3}} {\sin (x-{\pi \over 3})\over {1 - 2\cos x}}$
Let $y=x-\pi/3$. Then, the limit of interest becomes $$\begin{align} \lim_{y\to 0}\frac{\sin y}{1-2\cos (y+\pi/3)}&amp;=\lim_{y\to 0}\frac{\sin y}{1-\cos y+\sqrt 3 \sin y}\\\\ &amp;=\lim_{y\to 0}\frac{2\sin (y/2)\cos(y/2)}{2\sin^2(y/2)+2\sqrt{3}\sin(y/2)\cos(y/2)}\\\\ &amp;=\lim_{y\to 0}\frac{\cos(y/2)}{\sin(y/2)+\sqrt 3 \cos(y/2)}\\\\ &amp;=\frac{\sqrt 3}{3} \end{align}$$
Global extrema of $f(x,y)=2x^2+2y^2+2xy+4x-y$ on the region $x\leq 0, y\geq 0, y\leq x+3$
A solution without calculus. Adopt new variables $x = u - v$, $y = u + v$. The objective function: \begin{align*} 2x^2 + 2y^2 + 2xy + 4x - y &amp;= 2(u^2 - 2uv + v^2) + 2(u^2 + 2uv + v^2) + 2(u^2 - v^2) + 4(u-v) - (u+v) \\ &amp;= 6u^2 + 2v^2 + 3u - 5v \\ &amp;= 6 \left( u + \frac{1}{4} \right)^2 + 2 \left(v - \frac{5}{4}\right)^2 - \frac{7}{2}. \end{align*} The constraints: \begin{align*} x \leq 0 &amp;\Longrightarrow u \leq v \\ y \geq 0 &amp;\Longrightarrow u \geq -v \\ y \leq x + 3 &amp;\Longrightarrow v \leq \frac{3}{2}. \end{align*} We may combine the constraints as an easier-to-visualize $|u| \leq v \leq \frac{3}{2}$. The objective function clearly has a global minimum over all of $\mathbb{R}^2$ at $(u, v) = (-\frac{1}{4}, \frac{5}{4})$, i.e. $(x, y) = (-\frac{3}{2}, 1)$, which is within the constraints. Furthermore, as the objective function is convex, its maximum on a convex polygon must be at a vertex, and it is easy to check all three possible vertices. The maximum is $(u, v) = (\frac{3}{2}, \frac{3}{2})$ or $(x, y) = (0, 3)$.
What is the probability of monotonically increasing outcome ratios in repeated Bernoulli trials?
Here is an approximate answer. Instead of Bernoulli trials, suppose we have uniform random variables $X$ and $Y$ on the unit interval. Then what is $$Pr( X \le Y)$$ The sample space is the unit square and the upper half is the space of positive outcomes, so the probability is $.5$. Similarly, if we have three uniform random variables $X$, $Y$ and $Z$ on the unit interval, the volume of the space defined by the intersection of $$x \le y \le z$$ with the unit cube is equal to $$Pr( X \le Y \le Z)$$ By symmetry arguments, it looks like that volume is $1/6$, or $1/3!$: since there are $3!$ ways to arrange $x$, $y$, and $z$ in the inequality, and they are mutually exclusive, they must divide the cube into equal parts. By the same argument, for $n$ uniform r.v. on the unit interval, the probability $$Pr(X_i \le X_{i+1})$$ for all $i$ is $1/n!$. If the common distribution has no point masses, then ties occur with probability zero, and I think the same symmetry argument applies. So a partial answer to the question would be: as $k$ goes to infinity, the probability is $1/n!$. In the case of small $k$, for example $k = 2$ and $n = 2$, clearly the approximation is not so good (the probability of getting at least one head is $3/4$, not $1/2$). To get a better handle on the problem we need to compute the probability $$Pr(X = Y)$$ where $X$ and $Y$ are Bernoulli r.v. on $k$ trials with probability $p$ of success: $$\sum_{i=0}^k \left(\binom{k}{i} p^{i} (1-p)^{k-i}\right)^2 $$
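The last sum is easy to evaluate numerically; a tiny sketch:

from math import comb

def prob_tie(k, p):
    # P(X = Y) for independent X, Y ~ Binomial(k, p), per the formula above.
    return sum((comb(k, i) * p**i * (1 - p)**(k - i)) ** 2 for i in range(k + 1))

print(prob_tie(2, 0.5))   # 0.375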
Finding $\lim_{x\to0}\int_0^1\frac{xf(t)}{x^2+t^2}\,dt$
Assume for the moment that $x>0$. Then the change of variable $t=xu$ turns the integral into $$ \int_{[0,1/x]} \frac{f(xu)}{1+u^2}du. $$ If $$d\mu = \frac{du}{1+u^2},$$ then the dominated convergence theorem with respect to the finite measure $\mu$ (so that constants are integrable!) shows that the last integral converges to $f(0)\mu (\mathbb{R}^+)=\frac{\pi}{2}f(0)$. If $x$ is allowed to range over all of $\mathbb{R}$, the constant function $f =1$ shows that the two-sided limit does not exist.
What is the cofinality of $\omega+1$?
There's no contradiction; cofinality is not a monotone function of the ordinal. Indeed, the cofinality of every successor ordinal (i.e., every ordinal of the form $\alpha = \beta + 1$ for some ordinal $\beta$) is $1$. One way to think about the cofinality of an ordinal $\alpha$ is that it measures how long a sequence needs to be in order to "reach" $\alpha$, not how big $\alpha$ is.
Show that $z_{k-1}z_{k+1}=z_k+1 \rightarrow z_5=\frac{z_1+1}{z_2}$
By the recurrence relation, $z_{k+1}=\dfrac{z_k+1}{z_{k-1}}$, so $z_3=\dfrac{z_2+1}{z_1}$, $z_4=\dfrac{z_3+1}{z_2}=\dfrac{z_1+z_2+1}{z_1z_2}$, and $$ z_5=\frac{z_4 + 1}{z_3}. $$ Note that: $$ z_4 + 1= \frac{z_1 + z_2 + z_1 z_2 + 1}{z_1 z_2} = \frac{(z_1 + 1) + z_2(z_1 +1)}{z_1 z_2} = \frac{(z_1 +1)(z_2 + 1)}{z_1 z_2} $$ Then: $$ z_5 = \frac{(z_1 +1)(z_2 + 1)}{z_1 z_2} \cdot \frac{z_1}{1 + z_2} = \frac{z_1 + 1}{z_2} $$
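The whole computation can be verified symbolically in a few lines (a sketch assuming sympy):

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
z = [z1, z2]
for _ in range(3):  # build z3, z4, z5 from z_{k+1} = (z_k + 1)/z_{k-1}
    z.append(sp.simplify((z[-1] + 1) / z[-2]))

print(sp.simplify(z[4] - (z1 + 1) / z2))  # 0, i.e. z5 = (z1 + 1)/z2
```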
Exercise 6.27 from Introduction to probability (Anderson,Seppalainen,Valko)
After finding the pmf of $Y$, let $x,y\in \{1,-1\}$. Since $Y=X_1X_2$ and $x^{-1}=x$ for $x\in\{1,-1\}$, the event $\{X_2=x,\,Y=y\}$ is the same as $\{X_2=x,\,X_1=xy\}$, so \begin{align} P(X_2 = x, Y=y) &= P(X_2=x, X_1=xy) \\ &= P(X_2=x)P(X_1=xy)\\ &= P(X_2=x) \cdot \frac12\\ &= P(X_2=x)P(Y=y) \end{align} Hence they are independent.
Symbolic coordinates for a hyperbolic grid?
Where concepts break

There is no such thing as a parallel-transport isometry in hyperbolic geometry. You can have a translation along a geodesic axis, but points off that axis will be transported along a curve. That curve has constant distance to the axis, but is itself not a geodesic. Therefore your concept of a vector as a combination of length and direction won't translate to the hyperbolic plane.

Transformations as building blocks

The other formulation you use in that comment, "what takes you from A to B", is more readily applicable. You can think of vectors as representatives of specific isometries, and you can think of the grid points as the orbit of a given point under these isometries. You can come up with isometries of the hyperbolic plane which yield similar results. Instead of a vector you'd describe this class the way you'd describe an arbitrary isometry of the hyperbolic plane. How you do that very much depends on preferences and the model you employ, but for the Poincaré disc model you'd probably use 4 complex numbers to describe a Möbius transformation. You can reduce this to 4 real numbers by using the upper half plane, or by exploiting the specific class of possible Möbius transformations, and perhaps even to 3 numbers if you dehomogenize in some way. Expressed in $\mathbb{CP}^1$, the complex projective line, composition of Möbius transformations is simply multiplication of matrices, so that's what would become of your vector addition (a small matrix sketch of this follows at the end of this answer).

Congruent polygons

Or you can think of the integer grid in the Euclidean plane as a grid made up of squares, and generalize that. In general you have regular $m$-gons, with $n$ of them meeting at each corner. Take $2m+2n<mn$ and you are in hyperbolic geometry. The numbers $m$ and $n$ determine all the angles and therefore all the lengths as well, as there is no global scaling isometry in hyperbolic geometry either. This paragraph should give you a good intuition as to what grids can look like, while the previous paragraph gives you an idea of how to do computations. Together I believe these two concepts give you as good a toolbox as you can hope for, given the nature of the question and the time it has remained unanswered.

Representation as a group

Here is something I do in practice: take the polygons described above, and split them along their axes of symmetry to obtain elementary right triangles. Now you can take a single one of these triangles, label its edges $a$, $b$ and $c$, and obtain any triangle of your tessellation using a composition of the reflections in these edges (see my current avatar). This corresponds to a finitely presented group: $$\left<a,b,c\;\middle|\;1=a^2=b^2=c^2=(ab)^2=(bc)^m=(ca)^n\right>$$ After that, you can do computations in that group. The problem of comparing two labels corresponds to the word problem in this group, which can in fact be solved by computing a canonical form using a deterministic finite-state automaton obtained from a string rewriting system by applying the Knuth–Bendix procedure. There might be a slight modification required if you want to label vertices instead of triangles, but in general this approach should work for the tessellations you referred to (i.e. those by Don Hatch). I'm writing up large portions of this as part of my PhD, so I'll likely post a follow-up on this one day.
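Here is the promised matrix sketch for the "transformations as building blocks" idea, assuming the disc-model convention that a Möbius transformation $z\mapsto\frac{az+b}{cz+d}$ is stored as the matrix $\begin{pmatrix}a&b\\c&d\end{pmatrix}$ (the function names are made up for illustration):

```python
import numpy as np

def mobius_apply(M, z):
    """Apply the Mobius transformation with matrix [[a, b], [c, d]] to z."""
    a, b, c, d = M.ravel()
    return (a * z + b) / (c * z + d)

def translation_to(p):
    """Disc-model isometry z -> (z + p)/(conj(p) z + 1), taking 0 to p."""
    return np.array([[1, p], [np.conjugate(p), 1]], dtype=complex)

# Composition of isometries is matrix multiplication, so a "grid step"
# repeated along one direction is just an iterated matrix product
step = translation_to(0.3 + 0.2j)
M = np.eye(2, dtype=complex)
for _ in range(4):
    M = step @ M
    print(mobius_apply(M, 0))  # orbit of the base point 0
```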
Showing that $\Gamma(X,\mathcal{O}_X)=k$
I'm unsure what the author meant, but there's a standard proof as follows: a global section $f \in \mathcal{O}_X(X)$ induces a natural morphism $\hat{f}: X \rightarrow \mathbb{A}^1$. Composing with the natural open immersion from the affine line into the projective line, we get a morphism of proper $k$-schemes $\hat{f}': X \rightarrow \mathbb{P}^1$, so it is closed. Thus the (topological) image $I$ of this morphism is a closed irreducible subset of $\mathbb{P}^1$ (irreducible because $X$ is irreducible) which is not all of $\mathbb{P}^1$, since it misses the point at infinity. Therefore $I$ is a $k$-point of $\mathbb{P}^1$ (because $k$ is algebraically closed), so $\hat{f}$ is a morphism to a $k$-point, and thus $f$ is an element of $k$ plus a nilpotent; as $X$ is reduced, $f \in k$ and we are done.
Chain Rule For Limits
For $p=\infty$ it is not true. A counterexample: $$ f(x,y)=\frac{1}{x^2|y|+1}\to b(y)=\left\{ \begin{array}{ll} 0, & \text{if } y\ne 0,\\ 1, & \text{if } y= 0. \end{array} \right. $$ $$ g(x)=\frac{1}{x^2+1}\to c=0. $$ So we have $f(x,c)=f(x,0)=1$, but $$ f(x,g(x))=\frac{1}{\frac{x^2}{x^2+1}+1}\to \frac12. $$
Why -since this set is also closed in $E$, and $E$ is connected, it follows that it must be all of $E$-?
A set $E$ is connected if and only if the only subsets of $E$ that are both open and closed in $E$ (clopen) are $\varnothing$ and $E$ itself; this is a standard characterization of connectedness, proved in most point-set topology texts. So if the set in question is nonempty and clopen in the connected space $E$, it must be all of $E$.
Show that the pivots of A are positive if and only if A is symmetric positive definite
You know that $a$ and $c-b^2/a$ are the pivots, so it remains to show that $A$ is SPD iff $a>0$ and $ac-b^2>0$ (using the definition of positive definiteness). We have that $x^TAx>0$ for all nonzero $x$ if (why?) and only if $$ \begin{split} 0&<\pmatrix{1\\0}^T\pmatrix{a&b\\b&c}\pmatrix{1\\0}=a,\\ \quad 0&<\pmatrix{t\\1}^T\pmatrix{a&b\\b&c}\pmatrix{t\\1}=at^2+2bt+c\quad\text{for all $t\in\mathbb{R}$}. \end{split} \tag{1} $$ Note that $at^2+2bt+c$ (with $a>0$) is positive for all $t$ if and only if it has no real roots, that is, if the discriminant $(2b)^2-4ac=4(b^2-ac)$ is negative, which gives $ac-b^2>0$.
A symbolic combination of an inequality concerning convergent series and Brun's theorem: improving the upper bound
Not exactly what you want, but it can be a starting point. It is not difficult to see that $$t_{k}=\sum_{m\leq k,\,p_{m}\in\mathcal{T}}p_{m}=\sum_{m\leq k,\,p_{m}\in\mathcal{T}}\frac{p_{m}\left(2p_{m}+2\right)}{2p_{m}+2}>\sum_{m\leq k,\,p_{m}\in\mathcal{T}}\frac{p_{m}\left(p_{m}+2\right)}{2p_{m}+2}$$ then, by the D'Aurizio exercise, we have $$\sum_{k\leq n}\left(2k+1\right)\left(\frac{1}{t_{k}}+\frac{1}{t_{k}+2k}\right)<2\sum_{k\leq n}\frac{2k+1}{t_{k}}$$ $$<8\sum_{k\leq n,\,p_{k}\in\mathcal{T}}\frac{2p_{k}+2}{p_{k}\left(p_{k}+2\right)}<8\sum_{k\geq1,p_{k}\in\mathcal{T}}\left(\frac{1}{p_{k}}+\frac{1}{p_{k}+2}\right)=8B.$$ Maybe the result can be improved, but at the moment I don't see a good way to do it.
How to prove positive semi-definiteness using eigenvalues
Step I. If a matrix $A\in \text{Mat}_n(\mathbb R)$ has eigen-values $\lambda_i$, then $A-aI$ has eigen-values $\lambda_i-a$ for all $a\in \mathbb R$, and $A^2$ has eigen-values $\lambda_i^2$.

Step II. If all the eigen-values of a matrix are $\ge0$, then the matrix is positive semi-definite.

Now let $A$ have characteristic polynomial $f(x)=\prod_{i=1}^k(x-\lambda_i)^{m_i}$, where the $m_i$ are the multiplicities of the $\lambda_i$ and $k$ is the number of distinct eigen-values of $A$. Then $A^2$ satisfies the equation $g(A^2)=0$, where $g(x)=\prod_{i=1}^{k}(x-\lambda_i^2)^{m_i}$. So the minimal polynomial of $A^2$ must divide $g(x)$; in particular, all the eigen-values of $A^2$ lie in the set $\{\lambda_i^2\mid i=1,\cdots,k\}$. So the eigen-values found in Step I are all the eigen-values of $A-aI$ and of $A^2$ respectively. This concludes our proof.
Prove that a formal system is absolutely inconsistent
First, let's find a proof of $X$ where $X$ is a propositional variable rather than an arbitrary formula. Afterwards, if we want to prove an arbitrary wff $A$, we can take our proof of $X$ and replace every $X$ in that proof with $A$. This works because the system we're working with here has no axioms or inference rules that allow us to do something with a propositional variable that we can't also do with a more complex formula. (A small sketch of this substitution step follows after the proof.) We can find a proof of $X$ from the conclusion up by a sort of semi-smart brute-force search. Every premise we need must be concluded either by modus ponens or as an instance of the axiom scheme, so whenever a formula we need does not have the right shape to be an instance of the axiom, it must be concluded by modus ponens. This is, for example, the case for the final conclusion $X$, so the last step in the proof must be modus ponens. We then have to prove $$ {\sim} A \lor X \qquad\text{and}\qquad A $$ for some $A$ whose structure we're free to choose. The first of these cannot be an axiom no matter which $A$ we choose, so it itself must be concluded by modus ponens. So what we now need to prove is $$ {\sim}B \lor [{\sim}A\lor X] \qquad\qquad B \qquad\qquad A $$ The first one of these looks promising because it will be an instance of the axiom if only we decide that $B$ is going to be $C\lor{\sim}A$. That leaves us with the following to prove: $$ C\lor{\sim}A \qquad\qquad A $$ The first of these again needs to be proved by modus ponens (the negation is in the wrong place and we don't have any rule saying that $\lor$ is commutative), so we need $$ {\sim}D \lor [C\lor {\sim}A] \qquad\qquad D \qquad\qquad A $$ and the first of these becomes an axiom if $D$ is $E\lor C$. That converts our proof obligations to $$ E\lor C \qquad\qquad A $$ and these can both be axioms by choosing appropriate unfoldings for $E$, $C$, and $A$, namely $A = {\sim}[X\lor X]\lor [X\lor X]$, $E = {\sim}[X\lor X]$, $C = X\lor X$. Putting everything together we can write down our proof in standard order:

1. ${\sim}[X\lor X]\lor [X\lor X] \qquad$ (axiom)
2. $A \qquad$ (1, definition of $A$)
3. $E\lor C \qquad$ (1, definition of $E$ and $C$)
4. ${\sim}[E\lor C]\lor[C\lor {\sim}A] \qquad$ (axiom)
5. $C \lor {\sim}A \qquad$ (3, 4, MP)
6. ${\sim}[C\lor {\sim}A]\lor[{\sim}A\lor X] \qquad$ (axiom)
7. ${\sim}A\lor X \qquad$ (5, 6, MP)
8. $X \qquad$ (2, 7, MP)
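The substitution step in the first paragraph can be made concrete. Below is a minimal sketch (the exact axiom scheme of the system isn't restated above, so this only demonstrates the uniform replacement of $X$, not full proof checking); the tuple encoding of formulas is my own invention:

```python
def substitute(formula, var, replacement):
    """Replace every occurrence of the propositional variable `var` in a
    formula built from ('var', name), ('not', f) and ('or', f, g)."""
    tag = formula[0]
    if tag == 'var':
        return replacement if formula[1] == var else formula
    if tag == 'not':
        return ('not', substitute(formula[1], var, replacement))
    if tag == 'or':
        return ('or', substitute(formula[1], var, replacement),
                      substitute(formula[2], var, replacement))
    raise ValueError(f'unknown tag {tag!r}')

X = ('var', 'X')
A = ('or', ('not', ('or', X, X)), ('or', X, X))  # the A chosen in the proof
line7 = ('or', ('not', A), X)                    # line 7 above: ~A v X

# The same proof shape with X replaced by an arbitrary formula P
print(substitute(line7, 'X', ('var', 'P')))
```

Since axiom instances stay axiom instances and modus ponens steps stay modus ponens steps under such a substitution, the eight-line proof of $X$ becomes a proof of any wff.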
Does $SO(7)$ have a universal covering by $SU(6)$?
The universal cover of $SO(n)$ is the group $Spin(n)$. For small values of $n$, there are isomorphisms between various compact simply connected Lie groups that you might not expect, corresponding to accidental isomorphisms between small Dynkin diagrams. We have the one you mentioned, $Spin(3)=SU(2)$. There is also $Spin(4)=SU(2) \times SU(2)$, $Spin(5)=Sp(2)$, and $Spin(6)=SU(4)$. However, $Spin(7)$ isn't $SU(6)$: $SU(6)$ has dimension $35$, while $Spin(7)$, like $SO(7)$, has dimension $21$.
consider the group $G=\mathbb Q/\mathbb Z$. For $n>0$, is there a cyclic subgroup of order n
Hint: consider the element $\dfrac{1}{n}+\mathbb{Z}$.
Matrix representation of a bilinear form
Hint: Write $u = \sum_{i=1}^n u_i e_i$, $v = \sum_{i=1}^n v_i e_i$ and use the bilinearity of $a$ to expand $$a(u,v)=\sum_{i,j} u_i v_j\, a(e_i,e_j).$$ Now compare this to $$v^T A u.$$ This should give you an idea of what $A$ looks like.
Found $x^8$ while calculating inverse of $(x^6+1)$ in finite field $GF(2^8)$. Help???
Judging from the calculation at the link you provided, you're taking $\ x\ $ to be a root of the polynomial $\ x^8 + x^4 + x^3 + x + 1\ $. There's nothing particularly wrong about having terms of degree $8$ or more in an expression for the inverse of an element of the field, but you can always replace them with a combination of terms of smaller degree by using the equation $\ x^8 = x^4 + x^3 + x + 1\ $. As it happens, when I multiplied your putative inverse $\ x^8+x^7+x^6+x^5+x^3+1= x^7+x^6+x^5+x^4+x\ $ by $\ x^6+1\ $ I didn't get $1$. I got $\ x^5\ $ instead. In fact, there appears to be an error on line $2$ of the calculation pointed to by your link. I believe the remainder on the right side of the equation should be $\ x^4 + x^3 +x^2 + x + 1\ $ rather than $\ x^4 + x^3 + x + 1\ $.
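If you want to double-check such computations mechanically, here is a sketch of $GF(2^8)$ multiplication with the reduction $x^8=x^4+x^3+x+1$, plus a brute-force inverse search (bytes encode polynomials, bit $i$ being the coefficient of $x^i$):

```python
def gf_mul(a, b, mod=0x11B):  # 0x11B encodes x^8 + x^4 + x^3 + x + 1
    """Multiply two field elements, reducing with x^8 = x^4 + x^3 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:   # degree 8 appeared: substitute for x^8
            a ^= mod
        b >>= 1
    return r

a = 0b01000001            # x^6 + 1
claimed = 0b11110010      # x^7 + x^6 + x^5 + x^4 + x (the reduced candidate)

print(bin(gf_mul(a, claimed)))  # 0b100000 = x^5, not 1, as observed above
inv = next(c for c in range(1, 256) if gf_mul(a, c) == 1)
print(bin(inv))                 # the actual inverse of x^6 + 1
```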
Operators on a Tensor Product Space
Naive answer: in finite dimension, the vector space $\rm{End}(V)\otimes \rm{End}(W)$ has dimension $(\dim V)^2\cdot(\dim W)^2$, while $\rm{End}(V\otimes W)$ has dimension $(\dim V\cdot\dim W)^2$. So they are isomorphic. The real question is: is there a canonical isomorphism? In the finite-dimensional case, yes. In the infinite-dimensional case, we only get a canonical injection from $\rm{End}(V)\otimes \rm{End}(W)$ into $\rm{End}(V\otimes W)$. That's the map constructed in the "without duals" part below. As pointed out by Martin Brandenburg, it is still injective in infinite dimension, but it fails to be surjective. So from now on, I will assume that $V$ and $W$ are finite-dimensional.

With duals: each step below follows from canonical isomorphisms $$ \rm{End}(V)\otimes \rm{End}(W)\simeq (V\otimes V^*)\otimes (W\otimes W^*)\simeq (V\otimes W)\otimes (V^*\otimes W^*) $$$$ \simeq (V\otimes W)\otimes (V\otimes W)^*\simeq \rm{End}(V\otimes W).$$ Let's verify these steps now. First, we will show that $\rm{End}(V)$ is canonically isomorphic to $V\otimes V^*$, where $V^*$ denotes the dual of $V$. Indeed, for every $v\in V$ and $v^*\in V^*$, we can define $\theta_{v,v^*}$ in $\rm{End}(V)$ by $\theta_{v,v^*}(x):=v^*(x)v$. This yields a bilinear map $(v,v^*)\longmapsto \theta_{v,v^*}$ on $V\times V^*$ which factors through $V\otimes V^*$ by the universal property of the tensor product. This gives us $$\overline{\theta}:V\otimes V^*\longrightarrow \rm{End}(V).$$ Now take $\{v_i\}$ a basis of $V$, and denote by $\{v_j^*\}$ the basis of $V^*$ canonically associated with it: that is, $v_j^*$ is the linear form which takes $v_j$ to $1$ and the other $v_i$'s to $0$. Observe that the $\overline{\theta}(v_i\otimes v_j^*)=\theta_{v_i,v_j^*}=v_j^*(\cdot)v_i$ constitute the canonical basis of $\rm{End}(V)$ associated with $\{v_i\}$. So $\overline{\theta}$ is linear and takes a basis onto a basis: this is an isomorphism from $V\otimes V^*$ onto $\rm{End}(V)$. Since its definition does not depend on the choice of a basis, this is a canonical isomorphism. In particular, people usually invoke it implicitly, writing $v\otimes v^*$ for what is actually $v^*(\cdot)v$. So we have three canonical isomorphisms $\rm{End}(V)\simeq V\otimes V^*$, $\rm{End}(W)\simeq W\otimes W^*$, and $\rm{End}(V\otimes W)\simeq (V\otimes W)\otimes (V\otimes W)^*$.

Now we need to show that $ V^*\otimes W^*\simeq (V\otimes W)^*$ canonically. For every $v^*\in V^*$ and $w^*\in W^*$, the bilinear form $(v,w)\longmapsto v^*(v)w^*(w)$ factors through the tensor product to give a linear form $\psi_{v^*,w^*}\in (V\otimes W)^*$. It is easy to see that $(v^*,w^*)\longmapsto \psi_{v^*,w^*}$ is bilinear, as bilinearity can be checked pointwise on the spanning set of elementary tensors $v\otimes w$. This yields a canonical linear map $$\overline{\psi}:V^*\otimes W^*\longrightarrow (V\otimes W)^*.$$ It only remains to see that, for our earlier choices of bases, $\{v_i^*\}$ and $\{w_j^*\}$ are the corresponding bases of $V^*$ and $W^*$, while $\{v_i\otimes w_j\}$ is the corresponding basis of $V\otimes W$, and $\{v_i^*\otimes w_j^*\}$ the one of $V^*\otimes W^*$. This yields also $\{(v_i\otimes w_j)^*\}$, the associated basis of $(V\otimes W)^*$. Now it is easy to see that $\overline{\psi}(v_i^*\otimes w_j^*)=(v_i\otimes w_j)^*$, as they coincide on the basis elements $v_k\otimes w_l$. So $\overline{\psi}$ is linear and takes a basis onto a basis: that's our canonical isomorphism between $V^*\otimes W^*$ and $(V\otimes W)^*$.
Finally, for any vector spaces $X,Y, Z$, we have canonical isomorphisms $X\otimes Y\simeq Y\otimes X$ and $(X\otimes Y)\otimes Z\simeq X\otimes (Y\otimes Z)$. That is: the tensor product is commutative and associative.

Without duals: Consider the bilinear map $$ \phi:\rm{End}(V)\times \rm{End}(W)\longrightarrow \rm{End}(V\otimes W) $$ defined by $$ \phi(S,T)(u\otimes v):=Su\otimes Tv. $$ The fact that this yields a well-defined endomorphism $\phi(S,T)$ on the tensor product follows from the universal property of the tensor product. Indeed, the map $(u,v)\longmapsto Su\otimes Tv$ is bilinear, so it factors through the tensor product and yields $\phi(S,T)$ in $\rm{End}(V\otimes W)$. Since $(S,T)\longmapsto\phi(S,T)(u\otimes v)$ is bilinear for each $u\otimes v$, and since the elementary tensors $u\otimes v$ span $V\otimes W$, the map $(S,T)\longmapsto \phi(S,T)$ is bilinear. So another application of the universal property yields a linear factorization $$ \overline{\phi}:\rm{End}(V)\otimes \rm{End}(W)\longrightarrow \rm{End}(V\otimes W). $$ This finishes the construction of our canonical map. It remains to check that it is indeed bijective. To see that, we will fix two bases $\{v_i\}$ for $V$ and $\{w_j\}$ for $W$. Then $\{v_i\otimes w_j\}$ is a basis of $V\otimes W$. Now denote by $v_i\otimes v_k^*$ the endomorphism of $V$ which sends $v_k$ to $v_i$ and is null elsewhere on the basis. The family $\{v_i\otimes v_k^* \}$ is a basis of $\rm{End} (V)$. Likewise, $\{w_j\otimes w_l^*\}$ is a basis of $\rm{End} (W)$. Therefore $\{ (v_i\otimes v_k^*)\otimes(w_j\otimes w_l^*)\}$ is a basis of $\rm{End} (V)\otimes\rm{End} (W)$. On the other side, $\{ (v_i\otimes w_j)\otimes (v_k\otimes w_l)^*\}$ is a basis of $ \rm{End}(V\otimes W)$. Now $$ \overline{\phi}((v_i\otimes v_k^*)\otimes(w_j\otimes w_l^*))=\phi(v_i\otimes v_k^*,w_j\otimes w_l^*)=(v_i\otimes w_j)\otimes (v_k\otimes w_l)^*. $$ The first equality is by definition of $\overline{\phi}$. The second one holds on the basis of $V\otimes W$, so the operators coincide. Hence $\overline{\phi}$ is linear and takes a basis onto a basis: that's an isomorphism.
Find the isolated singularities of the function and calculate their residues
HINT: $$z^3=-1\implies z^3=e^{i(2n-1)\pi}\implies z=e^{i(2n-1)\pi/3}$$ for $n=0,1,2$. Alternatively, write $z^3+1=(z+1)(z^2-z+1)$ and find the roots of the quadratic.
How can we check if the elements consist a set?
I assume that $\langle ~,~\rangle$ means an ordered pair. No, in standard set theory there is no set which has everything of the form $\langle A,\varnothing\rangle$ as elements. Namely, if such a thing existed -- let's call it $X$ -- we could make a variant of Russell's paradox, by considering the subset $$ Y = \{ \langle A,B\rangle \in X \mid \langle A,B\rangle \notin A \} $$ Now $\langle Y,\varnothing\rangle$ is in $Y$ if and only if $\langle Y,\varnothing\rangle$ isn't in $Y$, which is impossible. So $X$ cannot have existed. Alternatively, if we're using Kuratowski pairs, we can also just say that $\bigcup \bigcup X$ would be a set of all sets, which is known to be impossible by the standard Russell paradox.
Probability of collecting 3 balls of a kind
Let us consider the general problem: there are $N_1,N_2,\dots,N_k$ balls of $k$ colors. What is the probability of the event that the first color with $n$ collected balls is color "1"? Obviously the above event is realized if the sequence of balls drawn before the $n$-th ball of color "1" contains, besides $n-1$ balls of this color, at most $n-1$ balls of any other color. We shall refer to such sequences as "good". With this observation the probability in question reads: $$\small \frac{\sum\limits_{0\le n_2,n_3,\dots n_k<n}\color{red}{\binom{N_1}{n-1}\binom{N_2}{n_2}\cdots\binom{N_{k}}{n_{k}}(n-1+n_2+\cdots+n_k)!}\color{magenta}{(N_1-n+1)}\color{blue}{(N_1-n+N_2-n_2+\cdots+N_k-n_k)!}}{(N_1+N_2+\cdots+N_k)!}\\ =\frac{n}{N_1+N_2+\cdots+N_k}\sum\limits_{0\le n_2,n_3,\dots n_k<n}\frac{\binom{N_1}{n}\binom{N_2}{n_2}\cdots\binom{N_k}{n_k}}{\binom{N_1+N_2+\cdots+N_k-1}{n+n_2+\cdots+n_k-1}}, $$ where the $\color{red}{\text{red}}$ term counts the number of good sequences, the $\color{magenta}{\text{magenta}}$ term counts the number of ways to choose the $n$-th ball of the 1st color, and the $\color{blue}{\text{blue}}$ term counts the number of ways to arrange the remaining balls. The second line is a trivial simplification of the first one. I did not check if the expression can be further simplified.
integration of $(r\,du/dx)^2$
Assuming $\rho$ and $u$ are arbitrary analytic functions (so that the Taylor expansions below converge), then $$\int \rho(x)\,\left(\frac{\partial u}{\partial x}(x)\right)^2 dx = \sum_{n=0}^\infty\frac{x^{n+1}}{n+1}\sum_{k=0}^n\frac{\rho^{(k)}(0)}{k!}\sum_{j=0}^{n-k}\frac{u^{(j+1)}(0)\,\,u^{(n-j-k+1)}(0)}{j!(n-j-k)!}$$ where $f^{(n)}(x)$ is the $n$-th derivative of $f$ at $x$. You can approximate this by truncating the first sum at a finite value of $n$. To obtain this we expand both $u^{(1)}(x)$ and $\rho(x)$ using their Taylor expansions, then multiply these three series together (two for $u^{(1)}$ and one for $\rho$) and gather together terms of the same order in $x$, giving $$\rho(x)\,\left(\frac{\partial u}{\partial x}(x)\right)^2 = \sum_{n=0}^\infty x^{n}\sum_{k=0}^n\frac{\rho^{(k)}(0)}{k!}\sum_{j=0}^{n-k}\frac{u^{(j+1)}(0)\,\,u^{(n-j-k+1)}(0)}{j!(n-j-k)!}$$ Then we can integrate the series easily by bringing the integral inside the first sum $$\int \rho(x)\,\left(\frac{\partial u}{\partial x}(x)\right)^2 dx = \sum_{n=0}^\infty\int x^n dx \sum_{k=0}^n\frac{\rho^{(k)}(0)}{k!}\sum_{j=0}^{n-k}\frac{u^{(j+1)}(0)\,\,u^{(n-j-k+1)}(0)}{j!(n-j-k)!}$$ which gives us the answer above. I do not know if this is useful; it will depend on what you know about the two functions.
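The pre-integration series identity can be spot-checked symbolically for concrete test functions; a sketch assuming sympy, with $\rho=e^x$ and $u=\sin x$ as arbitrary choices:

```python
import sympy as sp

x = sp.symbols('x')
rho, u = sp.exp(x), sp.sin(x)   # arbitrary analytic test functions
N = 6                           # truncation order

series_sum = sum(
    x**n * sum(
        sp.diff(rho, x, k).subs(x, 0) / sp.factorial(k) * sum(
            sp.diff(u, x, j + 1).subs(x, 0)
            * sp.diff(u, x, n - j - k + 1).subs(x, 0)
            / (sp.factorial(j) * sp.factorial(n - j - k))
            for j in range(n - k + 1))
        for k in range(n + 1))
    for n in range(N))

direct = sp.series(rho * sp.diff(u, x)**2, x, 0, N).removeO()
print(sp.expand(series_sum - direct))  # 0
```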
How many $4$ digit integers have the product of their digits equal to $5!$?
Hints: $120 = 1\cdot 2^{3}\cdot 3\cdot 5$, so the possible digits are $1,2,3,4,5,6,8$ (the digit $5$ must occur, since $5\mid 120$ and $5$ is prime). You may first assume there is an $8$ in your number and count the possible arrangements; then assume the largest remaining digit is a $6$, and so on.
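The casework can be cross-checked by brute force (a three-line sketch):

```python
from math import prod

count = sum(1 for n in range(1000, 10000)
            if prod(int(d) for d in str(n)) == 120)
print(count)  # number of 4-digit integers with digit product 5! = 120
```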
How do I find the sum of the series?
Consider \begin{align} S_{n} = \sum_{k=1}^{n} x^{k-1} \end{align} which can be seen as the following. \begin{align} S_{n} &= \sum_{k=1}^{n} x^{k-1} + \sum_{k=n+1}^{\infty} x^{k-1} - \sum_{k=n+1}^{\infty} x^{k-1} \\ &= \sum_{k=1}^{\infty} x^{k-1} - \sum_{k=0}^{\infty} x^{n+k} \\ &= \frac{1}{1-x} - \frac{x^{n}}{1-x} = \frac{1-x^{n}}{1-x}, \end{align} valid for $|x|<1$. Now, when $x=1/2$ this becomes \begin{align} \sum_{k=1}^{n} \left(\frac{1}{2}\right)^{k-1} = 2-2^{1-n} = \frac{2^{n+1}-2}{2^{n}}. \end{align} When $n=7$ this is further reduced to \begin{align} \sum_{k=1}^{7} \left(\frac{1}{2}\right)^{k-1} = \frac{127}{64}. \end{align} Multiplying by $40$ yields the answer desired, namely, \begin{align} 40 \sum_{k=1}^{7} \left(\frac{1}{2}\right)^{k-1} = \frac{635}{8}. \end{align}
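A one-line exact check with rational arithmetic, in case you want to confirm the final value:

```python
from fractions import Fraction

s = sum(Fraction(1, 2)**(k - 1) for k in range(1, 8))
print(s, 40 * s)  # 127/64 and 635/8
```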
How does one verify a ruler-compass construction is valid?
You have asked several questions. First How exactly does one know they've achieved a valid construction? You do that by writing a proof using Euclid's axioms and previously proved propositions. For example, Proposition 10 of Book 1 is a construction that bisects a given segment, with a proof that it's correct: https://mathcs.clarku.edu/~djoyce/elements/bookI/propI10.html For this one For instance, how did Ramanujan verify this side length is good approximation, was it simply empirical measurement with a ruler? Well, with a ruler, surely not. This construction finds an approximation. To see why it's a good one you would have to read it carefully. My guess is that it's a Euclidean construction based on a few terms of a series that converges quickly. (Since $\pi$ is not constructible with Euclidean tools an exact construction is impossible.) Finally Also, where does one get the intuition for geometric construction? With practice. Study how to bisect angles, erect perpendiculars, divide segments in given rational proportions, find square roots and mean proportionals (all these are in Euclid). Then you can use those tools to build what you are asked for.
Absolutely continuous but not monotone
$G(x)=\int_0^x \chi_A(t) - \chi_{[0,1]\setminus A}(t)\,dt $, so $G'(x) = \chi_A(x) - \chi_{[0,1]\setminus A}(x)$ almost everywhere. (Are you familiar with this? It is the Lebesgue differentiation theorem.) By the hypothesis on $A$, in every subinterval of $[0,1]$ the derivative $G'$ takes the value $1$ on a set of positive measure and the value $-1$ on a set of positive measure. This implies that $G$ is nonmonotone in every subinterval of $[0,1]$.
Even Fibonacci terms
Since $F_{n+3}=F_{n+2}+F_{n+1}=2F_{n+1}+F_n$, the term $F_{n+3}$ leaves the same remainder as $F_n$ on division by $2$. So the $61$st term has the same parity as the $1$st, etc. (because $3\mid 60$).
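A quick check of the parity pattern (using the convention $F_1=F_2=1$):

```python
fib = [1, 1]
while len(fib) < 61:
    fib.append(fib[-1] + fib[-2])

print(fib[0] % 2, fib[60] % 2)  # F_1 and F_61: both odd
print([i + 1 for i, f in enumerate(fib[:12]) if f % 2 == 0])  # [3, 6, 9, 12]
```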
almost everywhere Vs. almost sure
In a probability space (equipped with a probability measure $P$), we say that an event $A$ occurs almost surely if $P(A)=1$. On the other hand, on a measure space equipped with a measure $\mu$, we say that a property $\mathcal{P}$ is satisfied almost everywhere if the set where $\mathcal{P}$ fails has measure zero. Note that "a.s." is equivalent to "a.e." in probability spaces, since if $A$ occurs almost surely, then the probability that $A$ does not occur is zero. However, in the case of a general measure space $X$ we cannot say that a property is satisfied almost everywhere if it is satisfied on a set of measure $\mu(X)$ (which would correspond to an event having probability $1$), since in many cases this measure is infinite. This is why in the case of measure spaces we formulate the definition of "almost everywhere" in terms of complements of sets.
Finding the supremum of the set $\{\frac{n^2}{2^n}: n\in \Bbb N\}$
You can show by induction that $2^n \geq n^2$ for $n \geq 4$. It then follows that $\frac{9}{8}$, the value at $n=3$, is the supremum (indeed the maximum): the values for $n=1,2,3$ are $\frac12, 1, \frac98$, and for $n \geq 4$ all entries in the set satisfy $\frac{n^2}{2^n}\leq\frac{2^n}{2^n}=1<\frac{9}{8}$.
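A quick numerical confirmation that the maximum occurs at $n=3$:

```python
vals = [(n, n**2 / 2**n) for n in range(1, 21)]
print(max(vals, key=lambda t: t[1]))  # (3, 1.125), i.e. 9/8 at n = 3
```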
Residue of $f(z) = e^z \csc^2 z $
If $z=a$ is a pole of order $2$ of $f(z)$, then the residue there is $$\lim_{z\to a}\left((z-a)^2f(z)\right)'.$$ So with $z-k\pi=w$ (note that $\sin^2z=\sin^2w$), \begin{align} \operatorname{Res}_{z=k\pi}\dfrac{e^z}{\sin^2z} &= \lim_{z\to k\pi}\Big[(z-k\pi)^2\dfrac{e^z}{\sin^2z}\Big]'\\ &= \lim_{w\to0}\Big[w^2\dfrac{e^we^{k\pi}}{\sin^2w}\Big]'\\ &= e^{k\pi}\lim_{w\to0}\Big[e^w\left(\dfrac{w}{\sin w}\right)^2\Big]'\\ &= e^{k\pi}\times1\\ &= e^{k\pi}, \end{align} the last limit being $1$ because $e^w\left(\frac{w}{\sin w}\right)^2=1+w+O(w^2)$, whose derivative at $0$ is $1$.
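The result can be cross-checked symbolically; a sketch assuming sympy's residue function handles these poles:

```python
import sympy as sp

z = sp.symbols('z')
f = sp.exp(z) / sp.sin(z)**2

for k in (0, 1, 2):
    print(k, sp.residue(f, z, k * sp.pi))  # expect exp(k*pi)
```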
Functional optimization with constraints that are not continuously differentiable
We can use Hölder's inequality here rather than Euler–Lagrange. Take $L^3[-1,1]$ and define $f(x) = \int_{-1}^1 t^3 x(t)\, dt$. We see that $f$ is linear, and the constraint is $\|x\|_3 \le \sqrt[3]{2}$. The problem becomes: $\max \{ f(x) : \|x\|_3 \le \sqrt[3]{2} \}$. Hölder's inequality gives $|f(x)| \le \|\phi\|_{3 \over 2} \| x\|_3$, where $\phi(t) = t^3$. Note that $\|\phi\|_{3 \over 2} = \sqrt[3]{4^2 \over 11^2}$. We have equality iff $ | {x(t) \over \|x\|_3 }|^3 = |{ \phi(t) \over \| \phi \|_{3 \over 2} } |^{3 \over 2}$ for a.e. $t \in [-1,1]$, with $x\phi\ge0$. Letting $x(t) = (\operatorname{sgn} t)\sqrt[3]{22 \over 4}\,|t|^{3/2}$, we obtain $\max \{ f(x) : \|x\|_3= \sqrt[3]{2} \}= \sqrt[3]{2^5 \over 11^2}$ (since $f$ is linear, the maximum over the ball is attained on the sphere). Since $x$ is continuous, it solves the original problem.
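The norm constraint and the maximal value can be verified exactly; a sketch assuming sympy, using the evenness of both integrands to work on $(0,1)$:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
c = sp.Rational(11, 2)**sp.Rational(1, 3)
x = c * t**sp.Rational(3, 2)   # x(t) for t > 0; |x|^3 and t^3*x(t) are even

norm3 = (2 * sp.integrate(x**3, (t, 0, 1)))**sp.Rational(1, 3)
value = 2 * sp.integrate(t**3 * x, (t, 0, 1))

print(norm3)                                                       # 2**(1/3)
print(sp.N(value), sp.N(sp.Rational(32, 121)**sp.Rational(1, 3)))  # both ~0.642
```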
Proof of a simple formula
Let $\frac{m'\xi}r=b$ and $\frac{l'\eta}r=c$. \begin{eqnarray*} -(B+C)\dfrac{m'\xi+l'\eta}{r} + B\dfrac{m'\xi}{r} + C\dfrac{l'\eta}{r}&=&\dfrac{B-C}{2}\dfrac{m'\xi+l'\eta}{r}\\ -(B+C)(b+c)+ Bb + Cc&=&\dfrac{B-C}{2}(b+c)\\ -Bb-Cb-Bc-Cc+Bb+Cc&=&\frac{Bb}2+\frac{Bc}2-\frac{Cb}2-\frac{Cc}2\\ -Bc-Cb&=&\frac{Bb}2+\frac{Bc}2-\frac{Cb}2-\frac{Cc}2\\ 0&=&\frac{Bb}2+\frac{3Bc}2+\frac{Cb}2-\frac{Cc}2\\ \frac{Cc}2&=&\frac{Bb}2+\frac{3Bc}2+\frac{Cb}2 \end{eqnarray*} So, you need the last equality to hold to get what you want. By multiplying both sides by $2r$ you get $$Cl'\eta=Bm'\xi+3Bl'\eta+Cm'\xi$$ or $$C(l'\eta-m'\xi)=B(m'\xi+3l'\eta)$$ or $$l'\eta(C-3B)=m'\xi(B+C)$$
Probability of two or more identical values in a set of random values
This is essentially the birthday paradox. There are $200^8 = 2.56 \times 10^{18}$ possible rows and you're randomly generating $10^6$ of them. A rough heuristic for the birthday paradox is that if there are $N$ equally likely possibilities then you need about $\sqrt{2N}$ samples for there to be a non-negligible probability of a collision, which here is $\sim 10^9$. So we expect a very small probability of a collision here. Formally, if there are $N$ equally likely possibilities and you've generated $n$ samples, the probability of a collision is $1$ minus the probability of no collision. This is the probability that all of the $n$ samples are distinct, which is $$\frac{N(N-1) \dots (N-(n-1))}{N^n} = \prod_{i=0}^{n-1} \left( 1 - \frac{i}{N} \right).$$ This gives $$\mathbb{P}(\text{collision}) = 1 - \prod_{i=0}^{n-1} \left( 1 - \frac{i}{N} \right).$$ This is annoying to calculate exactly but for large $N$ and small $n$ compared to $N$ it's possible to estimate pretty precisely, as follows. We take the logarithm $$\log \prod_{i=0}^{n-1} \left( 1 - \frac{i}{N} \right) = \sum_{i=0}^{n-1} \log \left( 1 - \frac{i}{N} \right).$$ If $n$ is small compared to $N$ then $\frac{i}{N}$ is small and then we can use the first-order Taylor approximation $\log (1 - x) \approx -x$, which gives $$\sum_{i=0}^{n-1} \log \left( 1 - \frac{i}{N} \right) \approx \sum_{i=0}^{n-1} - \frac{i}{N} = -\frac{1}{N} {n \choose 2} $$ and hence $$\mathbb{P}(\text{collision}) \approx 1 - \exp \left( - \frac{1}{N} {n \choose 2} \right).$$ (This is where the $\sqrt{2N}$ heuristic comes from: we want ${n \choose 2}$ to be about the same size as $N$.) If we now further assume that ${n \choose 2}$ is small compared to $N$ (which is the case here) then we can use the first-order Taylor approximation $\exp(x) \approx 1 + x$ to get $$\mathbb{P}(\text{collision}) \approx \frac{1}{N} {n \choose 2}.$$ This actually has a nice intuitive interpretation: $\frac{1}{N} {n \choose 2}$ is exactly the expected number of colliding pairs, which is an upper bound on the probability of a collision. Plugging in $N = 200^8, n = 10^6$ gives $$\boxed{ \mathbb{P}(\text{collision}) \le 1.95... \times 10^{-7} }.$$
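Both the pair-counting bound and the more precise product formula are easy to evaluate; a short sketch:

```python
import math

N = 200**8
n = 10**6

pair_bound = math.comb(n, 2) / N                        # expected colliding pairs
log_p_none = sum(math.log1p(-i / N) for i in range(n))  # exact product, via logs
p_collision = -math.expm1(log_p_none)

print(pair_bound)    # ~1.95e-07
print(p_collision)   # essentially identical here
```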