Probability to obtain more than X with 3 dice.
Your concrete problem is already solved here: If we throw three dice. The general problem is equivalent to counting the number of ways of distributing $X-Y$ balls into $Y$ bins with limited capacity $Z-1$. This problem is solved at Balls In Bins With Limited Capacity using inclusion-exclusion. The result is $$ \sum_{t=0}^Y(-1)^t\binom Yt\binom{X-tZ-1}{Y-1}\;, $$ where, contrary to convention, the binomial coefficient is taken to be zero for negative upper index. This is the count of outcomes with sum exactly $X$; to get the probability of a sum of more than $X$, we need to sum from $X+1$ to $YZ$ and divide by the number $Z^Y$ of equiprobable outcomes: $$ Z^{-Y}\sum_{x=X+1}^{YZ}\sum_{t=0}^Y(-1)^t\binom Yt\binom{x-tZ-1}{Y-1}=Z^{-Y}\sum_{t=0}^Y(-1)^t\binom Yt\left(\binom{YZ-tZ}Y-\binom{X-tZ}Y\right)\;. $$ For $Y=3$, $Z=6$, this is \begin{align} &\frac1{216}\sum_{t=0}^3(-1)^t\binom 3t\left(\binom{18-6t}3-\binom{X-6t}3\right)\\ ={}&\frac1{216}\left(\binom{18}3-\binom X3-3\left(\binom{12}3-\binom{X-6}3\right)+3\left(\binom63-\binom{X-12}3\right)\right)\\ ={}&1-\frac1{216}\left(\binom X3-3\binom{X-6}3+3\binom{X-12}3\right)\;, \end{align} where again binomial coefficients with negative upper index are taken to be zero. Distinguishing the three cases, we can write this as $$ \frac1{1296}\begin{cases} -X^3+3X^2-2X+1296&3\le X\lt9\;,\\ 2X^3-60X^2+436X+288&6\le X\lt15\;,\\ -X^3+57X^2-1082X+6840&12\le X\le18 \end{cases} $$ (where I intentionally wrote the maximal overlapping ranges to exhibit the symmetry more clearly). As far as I checked, the results coincide with those of the concrete calculation linked to above.
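The inclusion-exclusion formula above is easy to sanity-check by brute force over all $6^3 = 216$ outcomes. Here is a small Python script (not part of the original answer; the function names are mine) that compares the closed form with direct enumeration:

```python
from itertools import product
from math import comb

def p_more_than(X, Y=3, Z=6):
    """Closed form from the answer: P(sum of Y dice with Z faces > X)."""
    total = 0
    for t in range(Y + 1):
        a = (Y - t) * Z          # upper index of the first binomial
        b = X - t * Z            # upper index of the second binomial
        term = (comb(a, Y) if a >= 0 else 0) - (comb(b, Y) if b >= 0 else 0)
        total += (-1) ** t * comb(Y, t) * term
    return total / Z ** Y

def p_brute(X, Y=3, Z=6):
    """Direct enumeration over all Z**Y equiprobable rolls."""
    rolls = list(product(range(1, Z + 1), repeat=Y))
    return sum(1 for r in rolls if sum(r) > X) / len(rolls)

for X in range(3, 19):
    assert abs(p_more_than(X) - p_brute(X)) < 1e-12
```

The guards implement the convention that binomial coefficients with negative upper index are zero.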
Finding the total number of proper subfields of $F$?
I take it you mean, proper subfield. Can you show that any subfield of $F$ contains the field of $5$ elements? Can you show that any subfield must contain $5^r$ elements, for some $r$? Can you show that the degree of such a subfield (over the field of $5$ elements) must be $r$? and must be a divisor of the degree of the field of $5^{12}$ elements? Can you show that a finite field has at most one subfield of any given number of elements? If you can do all those, you have your answer.
Show that $\frac{ n^{1/3} }{n-1} > \frac{ (n+1)^{1/3} }{n}$
Let $$f(x)=\frac{\sqrt[3]{x+1}}{x}$$ Take logarithms: $$g(x)=\ln f(x)=\frac13\ln(x+1)-\ln x$$ Now, $$g'(x)=\frac1{3(x+1)}-\frac1x$$ which is negative for $x\ge 1$. That is, $f$ is decreasing.
Infinite sequence series. Limit
Using the factorization $1 - x^{2^{k+1}} = (1 - x^{2^k})(1 + x^{2^k})$ we decompose \begin{align}&\frac{x}{1 - x^2} + \frac{x^2}{1 - x^4} + \cdots + \frac{x^{2^n}}{1 - x^{2^{n+1}}}\\ &= \left(\frac{1}{1 - x} - \frac{1}{1 - x^2}\right) + \left(\frac{1}{1 - x^2} - \frac{1}{1 - x^4}\right) + \cdots + \left(\frac{1}{1 - x^{2^n}} - \frac{1}{1 - x^{2^{n+1}}}\right)\\ &=\frac{1}{1 - x} - \frac{1}{1 - x^{2^{n+1}}}. \end{align} Since $0 < x < 1$, $x^{2^{n+1}} \to 0$ as $n\to\infty$, so the last expression converges to $$\frac{1}{1 - x} - 1 = \frac{x}{1 - x}.$$ Therefore, the answer is a).
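A quick numerical check of the telescoping identity (Python; not part of the original answer). Each term is taken in the general form $x^{2^k}/(1-x^{2^{k+1}})$, which is what the factorization produces:

```python
def lhs(x, n):
    # Partial sum of the series: terms x^(2^k) / (1 - x^(2^(k+1))), k = 0..n.
    return sum(x ** (2 ** k) / (1 - x ** (2 ** (k + 1))) for k in range(n + 1))

def rhs(x, n):
    # Telescoped form: 1/(1-x) - 1/(1 - x^(2^(n+1))).
    return 1 / (1 - x) - 1 / (1 - x ** (2 ** (n + 1)))

for x in (0.1, 0.5, 0.9):
    for n in (1, 3, 6):
        assert abs(lhs(x, n) - rhs(x, n)) < 1e-9
    # For 0 < x < 1 the tail vanishes, leaving the limit x/(1-x).
    assert abs(lhs(x, 40) - x / (1 - x)) < 1e-9
```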
Set is Convex regardless of b
Let $x, y \in S$. By the definition of a convex set, you need to show that the point $λx+(1-λ)y$ is in $S$ for every $λ\in[0,1]$. By the definition of $S$, this means showing that $f(λx+(1-λ)y)\le b$. Since the function $f$ is convex, you have $$f(λx+(1-λ)y)\overset{f\text{ convex}}\le λf(x)+(1-λ)f(y)\overset{x,y\in S}\le λb+(1-λ)b=b$$
Find the tangents to circle
HINT: The equation of any non-vertical straight line passing through $D(8,7)$ can be written as $$\dfrac{y-7}{x-8}=m\iff y=mx+7-8m.$$ Substitute this expression for $y$ into the equation of the circle to form a quadratic equation in $x$, whose roots are the abscissas of the intersection points. For tangency, the two roots must coincide (zero discriminant). This gives the two possible values of $m$. (Check separately whether the vertical line $x=8$ is tangent.)
Improper integral depending on parameter
Hint: $$ \int_0^a\Big(\frac{1}{\sqrt{x^2+4}}-\frac{C}{x+2}\Big)\,dx=\left[\ln(x+\sqrt{x^2+4})-C\ln(x+2)\right]_0^a=\ln\left(\frac{a+\sqrt{a^2+4}}{(a+2)^C}\right)+\text{const} $$
$LC$ ladder circuit: find the poles in a recursively defined complex sequence
I assume that you want to calculate $\lim_{n \to \infty} Z_n$. By taking limits on both sides, you find that any limit $Z$ would have to satisfy $$Z = Z_L + \frac{Z Z_C}{Z + Z_C}.$$ Doing some algebra: $$Z = \frac{Z_L Z + Z_L Z_C + Z Z_C}{Z + Z_C} \\ Z^2 + Z Z_C = Z (Z_L + Z_C) + Z_L Z_C \\ Z^2 - Z_L Z - Z_L Z_C = 0.$$ This is a quadratic equation whose roots are $$Z = \frac{Z_L \pm \sqrt{Z_L^2 + 4 Z_L Z_C}}{2}.$$ So if the limit exists, it is one or the other of these two numbers. Whether the limit exists, and if so which of the two values is achieved, depends on the initial condition. Usually you will converge to the possible limit which is closer to the initial condition, and there is always a domain of attraction for each limit (although it could be just the limit itself, if the limit is unstable). To continue the calculation, it's convenient to make the variables real, so let's do that: $$Z = \frac{iL \pm \sqrt{-L^2 + 4 L/C}}{2}.$$ Your initial condition is $Z_1 = i(L-1/C)$, so the difference between the roots and the initial condition is $$Z-Z_1=\frac{iL \pm \sqrt{-L^2 + 4L/C}}{2} - i(L-1/C) = \frac{iL - 2 i L + 2 i/C \pm \sqrt{-L^2 + 4L/C}}{2} \\ = \frac{-i L + 2 i/C \pm \sqrt{-L^2 + 4L/C}}{2}.$$ This means that at least when $L/C$ is small enough, you will get the $+$ root, since $$\frac{-iL + 2i/C + \sqrt{-L^2 + 4L/C}}{2} = i \frac{-L + 2/C + \sqrt{L^2-4L/C}}{2} \\ \approx i \frac{-L + 2/C + L - \frac{4L/C}{2L}}{2} = 0.$$ This follows by Taylor expanding the square root. I emphasize that it requires that $L/C$ is small enough; in particular it requires that $4L/C \ll L^2$.
Is the cardinality of a set necessarily a natural number?
Fundamentally, cardinals and real numbers are different things. You can think of the cardinality of a set as some abstract object ("cardinal") assigned to it, in such a way that two sets get assigned the same cardinal if and only if there is a bijection between them. But there is a natural way to identify the finite cardinals with the natural numbers: namely, identify a cardinal $a$ with the natural number $n_a$ such that the set $\{1, 2, \dots, n_a\}$ has cardinality $a$ (i.e. any set with cardinality $a$ has a bijection with $\{1,2,\dots, n_a\}$). (In this discussion, "natural numbers" includes 0.) This is a nice identification because it makes cardinal arithmetic match up with the arithmetic of natural numbers. For instance:

Ordering: $n_a \le n_b$ if and only if any set of cardinality $a$ has an injection into any set of cardinality $b$.

Addition: If $A,B$ are disjoint and have cardinalities $a,b$ respectively, then the cardinality $c$ of their union $C = A \cup B$ has $n_c = n_a + n_b$.

Multiplication: If $A,B$ have cardinalities $a,b$ and $C = A \times B$ has cardinality $c$, then $n_c = n_a n_b$.

Exponents: If $A,B$ have cardinalities $a,b$, and $C = A^B$ is the set of all functions from $B$ to $A$, then $n_c = n_a^{n_b}$.

One could imagine a system that identified other (infinite) cardinals with real numbers other than the natural numbers. For instance, nobody could stop me from proposing a system in which the cardinality $\aleph_0$ of the set of integers is identified with the real number $22/7$. But this system wouldn't have the properties listed above. For instance, $22/7 \le 4$, but there is no injection from $\mathbb{Z}$ to $\{1,2,3,4\}$, and $\mathbb{Z} \times \{1,2,3,4,5,6,7\}$ definitely does not have the same cardinality as $\{1,2,\dots, 21,22\}$. In fact, it's not hard to see that there would be no way of identifying infinite cardinals with real numbers that would preserve the above list of properties.
The properties above are very useful, and so in order to preserve them, we generally do not attempt to identify any other real numbers with cardinals. In principle, there could be another system that, although it didn't satisfy the above properties, had some other useful properties. But I've never heard of one that was useful enough to attract much attention.
Is my proof of Abel's Theorem correct?
Given $\epsilon >0,$ take the least (or any) $k$ such that $\sup_{n>k}|A_n|<\epsilon /2.$ This is possible because $\lim_{n\to \infty}A_n=\sum_{m=0}^{\infty}a_m=0.$ Let $M_k=\max_{n\leq k}|A_n|.$ For $x\in (0,1)$ we have $|(1-x)\sum_{n=0}^kA_nx^n|\leq (1-x)(k+1)M_k.$ Take $\delta_k \in (0,1)$ such that $\delta_k(k+1)M_k<\epsilon /2.$ Then for all $x\in (1-\delta_k,1)$ we have $$|(1-x)\sum_{n\leq k}A_nx^n|<\epsilon /2$$ $$\text {and also }\quad |(1-x)\sum_{n>k}A_nx^n|\leq (1-x)\sum_{n>k}(\epsilon /2)x^n=$$ $$=x^{k+1}(\epsilon/2)<\epsilon /2.$$ So the theorem holds in the first case, which is when $\sum_{n=0}^{\infty}a_n=0 .$ For all other cases observe that if $a^*_0=a_0-\sum_{n=0}^{\infty}a_n$ and $a^*_n=a_n$ for all $n>0,$ then the first case applies to $f^*(x)=\sum_{n=0}^{\infty} a^*_nx^n.$ That is, $f^*$ is continuous from below at $x=1.$ But $f(x)$ differs from $f^*(x)$ by a constant so $f$ is also continuous from below at $x=1.$ We can do a direct proof for all cases at once. It's basically the same proof, but the intermediate formulas get a bit "messy" or confusing.
Easiest possible way to think of tensor products of modules.
That is correct (for $\alpha$ an element of the base ring). However, using that as a definition hides the most important property of the tensor product, namely the universal property that it satisfies.
Prove the absolute and exponential equation
First, some cleanup. As the property must hold for every $x$, and $x$ is only used in $|x-1|$, it suffices to prove the statement for an arbitrary positive $y$. Then $6/2=3$ and the identity simplifies to $$(y^3)^n=y^{3n}.$$ This is a basic property of exponentiation.
Let $f$ be uniformly continuous on $(a,b)$, where $a<b$. Prove that $\lim_{x\to a}f(x)$ and $\lim_{x\to b}f(x)$ exist.
You don't know what the limit of $f(x)$ from the right at $a$ might be. The Cauchy criterion comes to the rescue: Let $x_n\to a.$ Use uniform continuity to show $f(x_n)$ is Cauchy.
Isolating a variable that appears twice on one side of the equation
Of course: $$A=se\cdot P + 1-P - sp + sp\cdot P\\ A-1+sp = P(se-1+sp)$$
prove $\phi$ is ring homomorphism
If $y \in R'$, choose $x \in R$ such that $\phi(x)=y$ (this is where we use surjectivity). Then $$ \phi(1) y=\phi(1)\phi(x)=\phi(1 \cdot x)=\phi(x)=y, $$ and similarly $y \phi(1)=y$. Hence $\phi(1)$ must be $1$.
Three people each choose five distinct numbers at random from the numbers 1, 2, . . . , 25, independently of each other.
Well, you don't have to use the inclusion-exclusion principle to solve this. Just use the fact that $$\text{Probability}=\frac{\text{favorable outcomes}}{\text{total outcomes}}.$$ The number of favorable outcomes (the three sets of five numbers being pairwise disjoint) turns out to be $\binom{25}{5}\binom{20}{5}\binom{15}{5}$: the first person chooses freely, the second must avoid the first person's five numbers, and the third must avoid the ten numbers already taken. The total number of outcomes is $\binom{25}{5}^3$. Thus your answer is $$\text{Probability}=\frac{\binom{25}{5}\binom{20}{5}\binom{15}{5}}{\binom{25}{5}^3}.$$
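As a sanity check (my own addition, not part of the original answer), the same counting argument can be written in general form and verified against a brute-force enumeration of a small instance:

```python
from itertools import combinations
from math import comb
from fractions import Fraction

def p_disjoint(N, k, people=3):
    # Counting argument: each successive person must avoid all numbers
    # already taken by the previous ones.
    num = 1
    for i in range(people):
        num *= comb(N - i * k, k)
    return Fraction(num, comb(N, k) ** people)

# The numbers from the answer: N = 25, k = 5, three people.
p = p_disjoint(25, 5)

# Brute-force check on a small instance: 3 people pick 2 numbers from 6.
small = list(combinations(range(6), 2))
hits = sum(1 for a in small for b in small for c in small
           if not (set(a) & set(b) or set(a) & set(c) or set(b) & set(c)))
assert Fraction(hits, len(small) ** 3) == p_disjoint(6, 2)
```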
Why is the inverse of an orthogonal matrix equal to its transpose?
Let $A$ be an $n\times n$ matrix with real entries. The matrix $A$ is orthogonal if the column and row vectors are orthonormal vectors. In other words, if $v_1,v_2,\cdots,v_n$ are column vectors of $A$, we have $v_i^Tv_j=\begin{cases}1 \quad\text{if }i=j\\ 0\quad\text{if } i\neq j\end{cases}$ If $A$ is an orthogonal matrix, using the above information we can show that $A^TA=I$. Since the column vectors are orthonormal vectors, the column vectors are linearly independent and thus the matrix $A$ is invertible. Thus, $A^{-1}$ is well defined. Since $A^TA=I$, we have $(A^TA)A^{-1}=IA^{-1}=A^{-1}$. Since matrix multiplication is associative, we have $(A^TA)A^{-1}=A^T(AA^{-1})$, which equals $A^T$. We therefore have $A^T=A^{-1}$.
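As an illustration (my own addition, plain Python, no libraries), here is the check for a $2\times 2$ rotation matrix, whose columns are orthonormal:

```python
import math

def matmul(A, B):
    # 2x2 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

t = 0.7  # any angle gives an orthogonal matrix
A = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]

I = [[1.0, 0.0], [0.0, 1.0]]
# A^T A = I and A A^T = I, so A^T is a two-sided inverse of A.
for M in (matmul(transpose(A), A), matmul(A, transpose(A))):
    for i in range(2):
        for j in range(2):
            assert abs(M[i][j] - I[i][j]) < 1e-12
```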
Why is a Riemannian metric $g$?
I always thought it was $g$ for the Gram matrix, which, after picking a basis, specifies the components of an inner product: $g_{ij} = \langle \hat{e}_i, \hat{e}_j \rangle$. On a Riemannian manifold, the metric is just an inner-product-valued field.
Is there a word for a contradictory set of linear system of equations?
They are called inconsistent equations: any values of the variables that solve the first equation $x+y=1$ cannot solve the second, $x+y=5$.
Multiple faithful representation?
I don't know if there is a special name for this property, but I doubt it, as it is a very common property. Every finite group $G$ has a faithful transitive permutation representation on $|G|$ points, by Cayley's theorem. Given any subgroup $H$ of $G,$ there is a transitive permutation action of $G$ on the (say right) cosets of $H$ in $G$; however, this action is not necessarily faithful. Its kernel is $\cap_{g \in G} g^{-1}Hg$, so the action is faithful just when $H$ contains no non-trivial normal subgroup of $G$. There are non-Abelian finite groups in which every proper non-trivial subgroup contains a proper non-trivial normal subgroup (such as (non-Abelian) Hamiltonian groups), but they are very much the exception, rather than the rule (to be precise, the exceptions are those non-Abelian groups in which every subgroup of prime order is normal). All other finite non-Abelian groups $G$ have a proper non-trivial subgroup $H$ such that $G$ acts faithfully and transitively on the $[G:H]$ right cosets of $H$ in $G.$
Solve trigonometric equation $ 3 \cos x + 2\sin x=1 $
Try this: $$3 \cos x + 2\sin x=\sqrt{13}\left(\frac{3}{\sqrt{13}}\cos x+\frac{2}{\sqrt{13}}\sin x\right)\\ =\sqrt{13}\sin(\arcsin\frac{3}{\sqrt{13}}+x)=1$$ You can try to solve it from there.
Why this harmonic sequence is not contractive
If $h_n$ were indeed contractive, there would be some $c\in (0,1)$ such that for all $n\geq 1$, $|h_{n+2}-h_{n+1}|\leq c|h_{n+1}-h_{n}|$ which is the same as $\frac{1}{n+2}\leq c \frac{1}{n+1}$. This would imply $\frac{n+1}{n+2}\leq c$, and letting $n$ go to $\infty$, $$1\leq c$$ a contradiction.
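Numerically (my own addition), the ratio of consecutive gaps of the harmonic numbers is $(n+1)/(n+2)$, which creeps up to $1$, so no single $c<1$ can dominate all of them:

```python
def gap_ratio(n):
    # |h_{n+2} - h_{n+1}| / |h_{n+1} - h_n| = (1/(n+2)) / (1/(n+1)) = (n+1)/(n+2)
    return (1 / (n + 2)) / (1 / (n + 1))

assert gap_ratio(10) < gap_ratio(100) < gap_ratio(10000) < 1
assert gap_ratio(10 ** 6) > 0.999999   # already within 1e-6 of 1
```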
Find the Laplace function of a piecewise function
Well, you know the definition of the Laplace transform: $$\text{F}\left(\text{s}\right)=\mathcal{L}_t\left[\text{f}\left(t\right)\right]_{\left(\text{s}\right)}=\int_0^\infty\text{f}\left(t\right)\exp\left(-\text{s}t\right)\space\text{d}t\tag1$$ So, for your function we get: $$\text{F}\left(\text{s}\right)=\int_0^1t^2\exp\left(-\text{s}t\right)\space\text{d}t+\int_1^5\left(2t-1\right)\exp\left(-\text{s}t\right)\space\text{d}t+\int_5^\infty9\exp\left(-\text{s}t\right)\space\text{d}t\tag2$$
Are there Möbius transformations of arbitrary group-theoretic order?
The composition of Möbius transforms is naturally associated with their matrix of coefficients: $$x \rightarrow f(x)=\dfrac{ax+b}{cx+d} \ \ \ \leftrightarrow \ \ \ \begin{bmatrix} a & b\\ c & d \end{bmatrix}$$ This correspondence is in particular a group isomorphism between the group of (invertible) homographic transforms of the real projective line and $PGL(2,\mathbb{R})$ (composition $\circ$ mapped to matrix product $\times$). Thus, your question boils down to the following: for a given $n$, does there exist a $2 \times 2$ matrix $A$ such that $A^n=I_2$? The answer is yes for real coefficients. It suffices to take the rotation matrix: $$\begin{bmatrix} \cos(a) & -\sin(a) \\ \sin(a) & \cos(a) \end{bmatrix} \ \ \ a=\dfrac{2\pi}{n}$$ Edit: If you are looking for integer coefficients, the answer is no. In fact, with integer coefficients, only homographies of order 2, 3, 4 and 6 can exist. (I correct here an error that was pointed out, and add some information.) See for that the very nice paper (http://dresden.academic.wlu.edu/files/2017/08/nine.pdf), in particular its Lemma 1.
What is the limit of this function as x approaches 2?
Yes, it is true that $$ \lim_{x \to 2} F(x) = 4. $$ To see why, note that $F(x) = x^2$ whenever $x \neq 2$. That is, $F(x) = x^2$ for $x < 2$ and for $x > 2$. So, the left and right limits are given (respectively) by \begin{align*} \lim_{x \to 2^-} F(x) = \lim_{\substack{x \to 2\\x < 2}} F(x) = \lim_{\substack{x \to 2\\x < 2}} x^2 = \lim_{x \to 2^-} x^2 = 4 \end{align*} and \begin{align*} \lim_{x \to 2^+} F(x) = \lim_{\substack{x \to 2\\x > 2}} F(x) = \lim_{\substack{x \to 2\\x > 2}} x^2 = \lim_{x \to 2^+} x^2 = 4. \end{align*} Looking at the graph of $F(x)$ helps to understand why the limit behaves this way.
Units in number fields with complex embeddings
As has been noted in comments, the important result here is Dirichlet's Unit Theorem. E.g., (taken from Daniel Marcus's Number Fields): Dirichlet's Unit Theorem. Let $U$ be the group of units in a number ring $\mathcal{O}_K = \mathbb{A}\cap K$ (where $\mathbb{A}$ represents the ring of all algebraic integers). Let $r$ and $2s$ denote the number of real and non-real embeddings of $K$ in $\mathbb{C}$. Then $U$ is the direct product $W\times V$, where $W$ is a finite cyclic group consisting of the roots of $1$ in $K$, and $V$ is a free abelian group of rank $r+s-1$. In particular, there is some set of $r+s-1$ units, $u_1,\ldots,u_{r+s-1}$ of $\mathcal{O}_K$, called a fundamental system of units, such that every element of $V$ is a product of the form $$u_1^{k_1}\cdots u_{r+s-1}^{k_{r+s-1}},\qquad k_i\in\mathbb{Z},$$ and the exponents are uniquely determined for a given element of $V$.
Finding a counter example to a basic question about integrability
We define $f:\left[0,1\right]\to\left[0,1\right]$ by: $$f\left(x\right)=\begin{cases} 0 & 0\le x\le\frac{1}{2}\\ 1 & \frac{1}{2}<x\le1 \end{cases}$$ Let $P,Q$ be the following partitions: $$P=\left\{ 0,\frac{1}{3},\frac{2}{3},1\right\},\;\;\;\;\; Q=\left\{ 0,\frac{1}{2},1\right\}$$ We have $\Delta\left(P\right)=\frac{1}{3}\le\frac{1}{2}=\delta\left(Q\right)$ as required, but: $$\overline{S}\left(f,P\right)=\sum_{k=1}^{3}M_{k}\left(x_{k}-x_{k-1}\right)=0+1\left(\frac{2}{3}-\frac{1}{3}\right)+1\left(1-\frac{2}{3}\right)=\frac{2}{3}$$ $$\overline{S}\left(f,Q\right)=\sum_{k=1}^{2}M_{k}\left(x_{k}-x_{k-1}\right)=0+1\left(1-\frac{1}{2}\right)=\frac{1}{2}$$ where $M_{k}=\sup\left\{ f\left(x\right):x_{k-1}\le x\le x_{k}\right\}$.
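The two upper sums can be checked exactly with rational arithmetic (Python sketch, my own addition):

```python
from fractions import Fraction

def f(x):
    # The step function from the answer: 0 on [0, 1/2], 1 on (1/2, 1].
    return 0 if x <= Fraction(1, 2) else 1

def upper_sum(partition):
    # Upper Darboux sum; since f is non-decreasing, the supremum on each
    # subinterval [a, b] is just f(b).
    return sum(f(b) * (b - a) for a, b in zip(partition, partition[1:]))

P = [Fraction(0), Fraction(1, 3), Fraction(2, 3), Fraction(1)]
Q = [Fraction(0), Fraction(1, 2), Fraction(1)]
assert upper_sum(P) == Fraction(2, 3)
assert upper_sum(Q) == Fraction(1, 2)
```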
Show that if $\lambda A + \mu B$ is nilpotent therefore $tr(A^kB)=0$.
Hint: Let $k\ge 0$, look at $f(t)=\text{Tr}((A+tB)^{k+1})$. What kind of function is this, what are its properties? If you need another hint, let me know!
Optimization: Via manifolds point of view of Lagrange multipliers method
The Hessian of a function is a generalization of the 2nd derivative of a single-variable function $f(x)$. If $x=a$ is a critical point of $f(x)$, a sufficient criterion for $a$ to be a maximum is that the second derivative of $f$ at $a$ be negative: $f''(a) < 0$ (https://en.wikipedia.org/wiki/Derivative_test#Second_derivative_test). For multivariable functions, "second derivative" generalizes to "Hessian," and "negative (scalar)" generalizes to "negative definite (linear transformation)". A linear transformation $A$ is said to be negative definite if $-A$ is positive definite. Positive definiteness can be characterized using eigenvalues: https://en.wikipedia.org/wiki/Positive-definite_matrix#Characterizations. To get comfortable with manifolds, I would recommend the relevant sections in "Ordinary Differential Equations" by V. Arnol'd and, for a more thorough calculus context, "Mathematical Analysis" by Zorich (both published by Springer).
Supremum and greatest element
It would, if $\sqrt{5}$ were in the subset. But it isn't, since $\sqrt{5}\notin \mathbb Q$.
Cross product angle formula
No to both questions. It's usually easier to use the dot product $A\cdot B = |A|\,|B|\,\cos\theta$. It is only equivalent if either $\sin\theta=0$ or $A\cdot B=|A|\,|B|$, which happens when $\cos\theta=1$. (So, almost never.)
For $0 < x < y \leq \frac{\pi}{2}$, prove that $\frac xy < \frac {\sin x}{\sin y} < \frac{\pi}{2y}$
$$\frac{x}{y} < \frac{\sin x}{\sin y} \Leftrightarrow \frac{\sin y}{y} < \frac{\sin x}{x}$$ which makes sense because $x < y$. (Think about the unit circle: $x$ is the length of the arc while $\sin x$ is the length of the vertical; their ratio becomes smaller as $x$ becomes larger.) $$\frac{\sin x}{\sin y} < \frac{\pi}{2y} \Leftrightarrow \sin x < \frac{\pi}{2} \cdot \frac{\sin y}{y}$$ $\frac{\sin y}{y}$ is smallest when $y = \pi/2$ by the same argument as above, and this gives $\sin x < 1$, which is true since $0 < x < \pi/2$. If you want to formalize the argument, consider the derivative of $f(x) = \frac{\sin x}{x}$ on $(0, \pi/2)$: $\frac{d}{dx} \left( \frac{\sin x}{x} \right) = \frac{x \cos x - \sin x}{x^2}$. Claim: this derivative is $< 0$ on $(0, \pi/2)$, i.e. $x \cos x < \sin x$, i.e. $x < \tan x$. There is a neat geometric proof of this last fact, but since you are studying derivatives, here is a straightforward way to prove it: at $x=0$, we have $x = \tan x$. For $0 < x < \pi/2$, compare their derivatives: we have $1 < \frac{1}{\cos^2 x}$ since $\cos^2 x \in (0,1)$. Thus $\tan x > x$ on this interval.
Maximal Ergodic Theorem for flows?
I don't think the Hardy-Littlewood theorem is used in the standard setting to prove that $\int_{f^* > 0} f\,d\mu \ge 0$. See Walters's book on ergodic theory. All that's used is that (the analogue of) $A_r$ is a positive operator.
Coloring the distance 2 graph of a bipartite graph
Here is a counterexample to both of your conjectures. Let $A = \{a_{ij} : 1 \le i,j \le n\}$ and $B = \{b_{ij} : 1 \le i,j \le n\}$. Define a bipartite graph $G$ between $A$ and $B$ by adding an edge between $a_{ij}$ and $b_{kl}$ whenever $i=k$ or $j=l$. This graph is regular of degree $2n-1$. The distance-$2$ graph $G_2$ consists of two cliques of size $n^2$: one with vertex set $A$, and one with vertex set $B$. Any two vertices $a_{ij}, a_{kl}$ in $A$ are connected by the length-$2$ path $a_{ij} \to b_{kj} \to a_{kl}$ in $G$, so they are adjacent in $G_2$. Therefore $\chi(G_2) = \omega(G_2) = n^2$, which is much larger than $\Delta_1 + \Delta_2 - 1 = 4n-3$, at least for large $n$. Probably the extremal graph of this type is the incidence graph of a finite projective plane. For every prime power $q$, we can take a projective plane of order $q$, with $q^2+q+1$ points and $q^2+q+1$ lines. Then let $G$ be the bipartite graph with points on one side, lines on the other, and an edge connecting each point to the lines through it. This has $\Delta_1 = \Delta_2 = q+1$, and $\chi(G_2) = \omega(G_2) = q^2+q+1$, because once again $G_2$ consists of two cliques. Through any two points, there is a line, and any two lines intersect in a point - when translated to a statement about $G$, these two properties just say that any two vertices in the same part are distance $2$ apart. Also, note that if Conjecture 1 were true even when $\Delta_2 = 2$, it would imply not only Vizing's theorem (your Theorem 2) but also the same result for edge colorings of multigraphs with maximum degree $\Delta$. But that result is false: consider the multigraph $H$ on three vertices $a,b,c$ with $k$ copies of each possible edge. Here, $\Delta(H) = 2k$ but $\chi'(H) = 3k$. (A theorem of Shannon says that this example is worst possible: $\chi'(H) \le \frac32 \Delta(H)$ for multigraphs.)
Is this NFA correct?
Delete the $\lambda$-edge from $q_1$ to $q_8$. As it stands, the automaton accepts every string over $\{0,1\}$. In your question, delete the set brackets in $\{ \Sigma^* \cdots\Sigma^* \}$. In itself, $\Sigma^*$ is a set (of strings), so it is not necessary (in fact wrong) to add another 'level' of sets.
Vector subspace of functions where $f'(x) = 2f(x)$
$(\lambda\cdot f)' = \lambda \cdot f' = \lambda \cdot (2f) = 2 \cdot (\lambda\cdot f) \Rightarrow \lambda \cdot f \in V$
Uniqueness of Monic Polynomial of Least Degree in Extension Field $K$
WLOG $\deg g(x)\le\deg f(x)$. By the division algorithm, $f(x)=g(x)q(x)+r(x)$ where $r(x)=0$ or $\deg r(x)<\deg g(x)$. Since $f(a)=0$ and $g(a)=0$, we get $r(a)=0$. If $r(x)\neq 0$, this contradicts the minimality of $\deg g$, because $\deg r(x)<\deg g(x)$. Hence $r(x)=0$, i.e. $f(x)=g(x)q(x)$. Since $f$ and $g$ are both of least degree, $\deg f=\deg g$, so $q(x)=c$ for some constant $c$; and as $f,g$ are monic, $c=1$. So $f=g$.
Is the norm of an integral operator the essential supremum norm of its kernel?
I don't think that what you want a reference for is true. The estimate $\Vert T_k \Vert \leq \Vert K \Vert_{L^\infty}$ is trivial, but the other estimate fails in general, which can be seen as follows: By Fubini, \begin{align*} \Vert T_k f \Vert_{L^1} &\leq \int |f(x)| \int |k(t,x)| d\mu(t) d\mu(x) \\ & \leq \Vert f \Vert_{L^1} \cdot \mu-\mathrm{esssup}_x \int |k(t,x)| d\mu(t). \end{align*} Now consider on $(0,1)$ with Lebesgue measure the kernel $$ k(t,x) = \frac{1}{x} \cdot 1_{t < x}. $$ Then the integral from above satisfies $$ \mu-\mathrm{esssup}_x \int |k(t,x)| d\mu(t)=1, $$ but $\Vert k\Vert_{L^\infty} = \infty$. I noticed that you require $k \in L^\infty$, but it should be easy to adapt my counterexample accordingly.
Is a special module homomorphism injective?
Unless $N=0$, the answer is "no" -- simply because the $0$-homomorphism will not be injective.
Estimation covariance function?
If you are using a GP model, and you have a parametric form for the covariance operator, then you can just use maximum likelihood or any other estimation approach. However, you need to have observations associated with the covariates (observations $Y_1,\dots,Y_n$ associated with covariates $X_1,\dots,X_n$), or vice versa: if $X_1,\dots,X_n$ are the observations, some set of covariates $t_1,\dots,t_n$. Moving forward we'll use $X_i$ to denote a covariate and $Y_i$ to denote its associated observation. Suppose that $Y \sim \textrm{GP}(0,K_\theta(s,t))$, where $K_\theta(s,t)=\exp(-\frac{1}{2\theta}\|s-t\|^2)$. Then the marginal distribution of $Y_1,\dots,Y_n$ is jointly normal: \begin{equation} \begin{split} \textrm{N}(0, K_\theta(X,X)) \end{split} \end{equation} where $K_\theta(X,X) = [\exp(-\frac{1}{2\theta}\|x_i - x_j\|^2)]_{ij}$. You can optimize the resulting likelihood over $\theta$ to get the MLE. The marginal Gaussianity gives you some asymptotic properties of the estimator $\hat{\theta}$, but I'm not sure off the top of my head if it's the same as in standard parametric settings. You could also use some sort of empirical auto-covariance to get a nonparametric estimator of the auto-covariance operator. I don't know much about this, but I think various types exist, and you can try to model different types of auto-covariances (stationary, isotropic or not, etc.) using different formulations. I would imagine that studying properties of these estimators is fairly technical; I couldn't find many references from a brief search.
Necessity of hypothesis in distance from a set in an inner product space
Let $H$ be a Hilbert space with complete orthonormal set $\{ e_{n}\}_{n=0}^{\infty}$. Define a curve $C(t)$ on $[1,\infty)$ in such a way that for $n=1,2,3,\ldots$, $$ C(t)=(1+2^{-t})\left[\cos(\pi (t-n)/2)e_{n}+\sin(\pi (t-n)/2)e_{n+1}\right], \;\;\; n \le t \le n+1. $$ Then $\|C(t)\|=(1+2^{-t})$ for all $t \ge 1$, and $$ \|e_{0}-C(t)\|^{2}=1+\|C(t)\|^{2} $$ satisfies $\inf_{t \ge 1}\|e_{0}-C(t)\|^2=2$, i.e. $\inf_{t \ge 1}\|e_{0}-C(t)\|=\sqrt{2}$, a value which is not achieved for any $t \in [1,\infty)$. So there is no closest point to $e_{0}$ on the curve $C$. The image of the curve $C$ is a complete subset of $H$ because (a) $[1,\infty)$ is a complete subset of $\mathbb{R}$, and (b) $\{ C(t_{n})\}_{n=1}^{\infty}$ is a Cauchy sequence iff $\{ t_{n}\}_{n=1}^{\infty}$ is a Cauchy sequence.
If p ⇒ q and q ⇔ r how can I prove p ⇒ r at Fitch?
In line 5, you assume $q$. Conditional on that, you show $p \implies r$. So what you have shown is $q \implies (p \implies r)$. (The running set of assumptions is what is being tracked by the vertical lines between the row numbers and your expressions.) I haven't used Fitch. Can you also show $\lnot q$ implies $p \implies r$? Then combine these to eliminate conditioning on $q$? (Or use the more direct methods explained in other answers...)
What are the three subgroups of $\mathbb{Z}_4\times\mathbb{Z}_6$ of 12 elements?
The first two 12-element subgroups are easy to find, by crossing the whole of one of the components with a subgroup of the other that has half the elements. Thus we have $$ G_1 = \{0, 1, 2, 3\} \bmod 4 \,\times\, \{0, 2, 4 \} \bmod 6 \\ G_2 = \{0, 2\} \bmod 4 \,\times\, \{0, 1, 2, 3, 4, 5 \} \bmod 6 $$ The third is more subtle: $$ G_3 = \{(k \bmod 4,\, m \bmod 6) : k+m \text{ even}\} = \\ \{ (0,0), (0,2), (0,4), (1,1), (1,3), (1,5), (2,0), (2,2), (2,4), (3,1), (3,3), (3,5) \} $$ Having guessed the form, it is easy to see that $G_3$ is closed under the group operation (addition), and since it is finite, it thus must form a subgroup of $G$. As to what would motivate one to guess that there are three and only three subgroups of order 12, that is a tougher issue.
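Closure of the even-coordinate-sum subset, and its order, are quick to verify by machine; a small Python check (my own addition):

```python
from itertools import product

# Elements of Z4 x Z6 whose coordinates have an even sum.
G3 = {(k, m) for k, m in product(range(4), range(6)) if (k + m) % 2 == 0}
assert len(G3) == 12

# Closed under componentwise addition mod (4, 6); being finite and
# non-empty, it is therefore a subgroup.
for (a, b), (c, d) in product(G3, repeat=2):
    assert ((a + c) % 4, (b + d) % 6) in G3
```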
Linear algebra question on similarity of linear transformations
Let's take it for granted that the polynomial $p(x) = x^3 + x^2 - 1$ is irreducible over $\Bbb Q[x]$, as you say, "by some known test" (the rational roots test will suffice). Usually, you can't determine $c(x)$ completely just by knowing that $m(x) \mid p(x)$. However, since $p$ is irreducible, we actually know that $m(x) = p(x)$ (assuming we take $m$ to be monic). Moreover, for any linear transformation: any irreducible factor of $c(x)$ must also be a factor of $m(x)$. So, we can deduce that $c(x) = [p(x)]^n$ for some integer $n \geq 1$. Alternatively: if you had determined $m(x)$, then you'd have $V\cong \bigoplus_{i=1}^{k} \frac{\mathbb{Q}[x]}{(a_i(x))}$ where each $a_i(x)=m(x)$. Since $m(x)$ has degree $3$, $\frac{\mathbb{Q}[x]}{(a_i(x))}$ has dimension $3$, and so the dimension of the direct sum is $3k$. Finally: for $\dim V = 3$, it suffices to consider an isomorphism $\Bbb Q[x]/m(x) \cong V[T]$ with $x \mapsto T$.
Why topology is called Rubbersheet Geometry?
IMO, the rubber sheet analogy is really just to help you visualize a physical surface for which things like "the distance between two points" isn't really meaningful. And maybe visualizing stretched and deformed open discs might help get you used to the idea of working in terms of an open basis for a topology when previously you're only familiar with working with open discs (e.g. the $\epsilon-\delta$ definition of limit for metric spaces). If you really want to take the rubberness somewhat more literally, you probably want to look into things like homotopy or deformations.
logarithm of a complex number?
The function $z \mapsto iz^4$ is one branch of $\ln e^{iz^4}$. All branches of $\ln e^{iz^4}$ are therefore $$f_k(z) = iz^4 + 2\pi ik,\quad k\in\mathbb{Z}.$$ The possible branches of $F$ are hence $$F_k(z) = iz + \frac{2\pi ik}{z^3},\quad k \in\mathbb{Z}.$$ $F_0(z) = iz$ is the only entire branch; all others have a pole (of order $3$) at $0$. As for your question ("can we simplify $\ln(e^{iz})=\ln(e^{ix-y})$ to $ix-y$?"): if you are free to choose the branch of the logarithm, you can choose that branch. Otherwise, either list all possible branches, $ix - y + 2\pi ik,\; k\in\mathbb{Z}$, or, if a specific branch is prescribed, use that.
Prove that $r(k,k) + k \leq r(k + 1, k + 1) $
Complements to Brian's answer. When $G'$ has a $(k+1)$-clique: Since $G'$ is the disjoint union of $G$ and $K_k$, and $K_k$ doesn't contain a $(k+1)$-clique, we get that $G$ has a $(k+1)$-clique, and thus $G$ clearly has a $k$-clique. When $G'$ has a stable set of $k+1$ vertices: A stable set contains at most one vertex of $K_k$ (if it contained two, they would be adjacent by the completeness of $K_k$, a contradiction). Thus $G$ has a stable set of $(k+1)-1=k$ vertices. Hence $G$ has either a $k$-clique or a stable set of $k$ vertices.
Showing that there are infinitely many integer solutions to $x^2+y^2=z^2$
It is easy to notice that $2s+1=(s+1)^2-s^2$, i.e., every odd number is a difference of two squares. Now if I take any odd square, I can get a Pythagorean triangle from this. There are infinitely many odd squares. If I want to write this explicitly, I can put $(2k+1)^2=4k^2+4k+1=2[2k(k+1)]+1$, so I just plug $s=2k(k+1)$ into the above formula and I get $$(2k+1)^2=(2k^2+2k+1)^2-(2k^2+2k)^2$$ i.e., $$(2k+1)^2+(2k^2+2k)^2=(2k^2+2k+1)^2.$$ Of course I do not obtain all Pythagorean triples in this way. These are only the triples where the difference of the hypotenuse and one of the legs is one. (Note that all such triples are primitive, since $\gcd(s,s+1)=1$.)
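As a quick numerical check of the explicit formula above, a Python sketch (the `triple` helper is mine, not part of the answer):

```python
def triple(k):
    # (2k+1, 2k^2+2k, 2k^2+2k+1): the triple obtained by plugging s = 2k(k+1)
    return 2 * k + 1, 2 * k**2 + 2 * k, 2 * k**2 + 2 * k + 1

for k in range(1, 1000):
    a, b, c = triple(k)
    assert a**2 + b**2 == c**2          # every triple is Pythagorean
    assert c - b == 1                   # hypotenuse exceeds one leg by 1

print(triple(1), triple(2), triple(3))  # (3, 4, 5) (5, 12, 13) (7, 24, 25)
```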
equation $\displaystyle a_{n+1}x^2-2x\sqrt{a^2_{1}+a^2_{2}+\cdots+a^2_{n}+a^2_{n+1}}+\left(a_{1}+a_{2}+\cdots+a_{n}\right) = 0$ has real roots
Let $x=a_{n+1}$; then the last inequality is $x^2-xT+S \ge 0$, where $T=a_1+a_2+\cdots+a_n$ and $S=a_1^2+a_2^2+\cdots+a_n^2$. For this to hold for every real $x$, the discriminant must satisfy $\Delta=T^2-4S\le 0$. As you try increasing the number of $a_i$'s, you see that up to $4$ elements the inequality can be made to work, but not after that. Here is how it turns out after a bit of algebra: for $n=1$ we have $-\Delta=3a_1^2\ge 0$. For $n=2$ we have $-\Delta=2(a_1^2+a_2^2)+(a_1-a_2)^2 \ge 0$. For $n=3$ we have $-\Delta=1(a_1^2+a_2^2+a_3^2)+(a_1-a_2)^2 +(a_2-a_3)^2+(a_3-a_1)^2\ge 0$. For $n=4$ we have $-\Delta=0(a_1^2+a_2^2+a_3^2+a_4^2) \\ +(a_1-a_2)^2+(a_1-a_3)^2+(a_1-a_4)^2 +(a_2-a_3)^2+(a_2-a_4)^2+(a_3-a_4)^2\ge 0$. For $n=5$ the inequality does not work for all possible $a_i$'s. Use the suggestion of @vonbrand, above, of setting all of them equal to say $t$ to see $-\Delta=4(5t^2)-(5t)^2=-5t^2 < 0$ for $t\ne 0$.
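The sign pattern of $-\Delta = 4S - T^2$ can be spot-checked numerically; a Python sketch (helper name is mine, and the small tolerance absorbs floating-point roundoff):

```python
import random

def neg_disc(a):
    # -Δ = 4*S - T^2 with S = sum of squares, T = sum of the a_i
    return 4 * sum(x * x for x in a) - sum(a) ** 2

random.seed(0)
for n in (1, 2, 3, 4):                       # holds for up to four terms
    for _ in range(10_000):
        a = [random.uniform(-5, 5) for _ in range(n)]
        assert neg_disc(a) >= -1e-9

print(neg_disc([1.0] * 5))  # -5.0: fails at five equal terms, as noted above
```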
Expected number of trials until success, with multiple variables per trial
Let $X_i$ be the number of test suite runs until the test $T_i$ is successfully finished. $X_i$ follows a geometric distribution with success probability $p_i$, i.e. $P(X_i = k) = (1-p_i)^{k-1} p_i$. This yields $P(X_i \le k) = p_i \sum \limits_{j=0}^{k-1}(1-p_i)^j = 1 - (1-p_i)^k$. Now you are interested in the time until all tests were successful, i.e. $N = \max(X_1, \ldots, X_n)$. To calculate this expectation, note that $P(N \le k) = P( X_i \le k \text{ for all } i) = \prod \limits_{i= 1}^n (1-(1-p_i)^k)$. Finally, as $N$ is a discrete random variable, the formula $E[N] = \sum \limits_{k=0}^{\infty} P(N > k) = \sum \limits_{k=0}^{\infty} \left(1-P(N \le k)\right)$ holds. It is probably not possible to give a closed expression for this term, but you can calculate approximations numerically.
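The truncated tail sum and a Monte Carlo simulation agree; a Python sketch (the function names, truncation tolerance, and example probabilities are all mine):

```python
import random

def expected_runs(ps, tol=1e-12):
    # E[N] = sum_{k>=0} (1 - prod_i (1 - (1-p_i)^k)), truncated when negligible
    total, k = 0.0, 0
    while True:
        cdf = 1.0
        for p in ps:
            cdf *= 1.0 - (1.0 - p) ** k
        term = 1.0 - cdf
        total += term
        if term < tol:
            return total
        k += 1

def simulate(ps, trials=200_000):
    # Monte Carlo estimate of E[max(X_1, ..., X_n)]
    random.seed(1)
    s = 0
    for _ in range(trials):
        s += max(next(k for k in range(1, 10_000) if random.random() < p)
                 for p in ps)
    return s / trials

ps = [0.5, 0.3, 0.8]
print(expected_runs(ps), simulate(ps))  # the two estimates should be close
```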
Prove that a conic section is symmetrical with respect to its principal axis.
The question, at the time I write this, does not define "principal axis". In the definitions of a conic using distances to two foci, or to a focus and a directrix, there is a natural line of symmetry where reflection across that line preserves the distances and is therefore a symmetry of the conic. This is the line through the two foci, or the line through the focus and perpendicular to the directrix. This line also happens to be the principal axis, usually by definition. If there is some other definition of the principal axis in use, such as the $x$ axis when the ellipse has equation $\frac{x^2}{a^2} + \frac{y^2}{b^2}=1$ with $|a| > |b|$, a separate argument would be needed to demonstrate that it coincides with the line of symmetry.
Calculate profit formula and reverse it back
Your formula, in mathematical notation, is: $$ X = P_{1} - (\frac{P_{1}}{100}*m + 0.3) - P_{2}$$ where $X$ is profit, $P_{1}$ and $P_{2}$ are first and second price, and $m$ is margin. $X$, the profit, is the subject of the formula, because it's all on its own on one side while all the other parameters are on the other. What you want to do is make $P_{1}$ the subject. To do this, you first need to group all the terms involving $P_{1}$ together. These terms are $P_{1}$ and $-\frac{P_{1}m}{100}$ (the minus sign is important), and when you divide each by $P_{1}$ you get $1$ and $-\frac{m}{100}$ respectively. So you can rewrite your formula as: $$X = P_{1}\left(1 - \frac{m}{100}\right) - 0.3 - P_{2}$$ Now you add $P_{2}$ and $0.3$ to both sides so the term involving $P_{1}$ is on its own (note I've swapped the two sides of the formula): $$ P_{1}\left(1-\frac{m}{100}\right) = X + P_{2} + 0.3$$ Now you've got $P_{1}$ times $\left(1-\frac{m}{100}\right)$ on one side, so just divide both sides by $\left(1-\frac{m}{100}\right)$: $$ P_{1} = \frac{(X + P_{2} + 0.3)}{\left(1 - \frac{m}{100}\right)}$$ Or, in code: FirstPrice = ( Profit + SecondPrice + 0.3 ) / ( 1 - ( margin / 100 ) );
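The forward formula and the rearranged one should round-trip; a Python sketch (the function names and example numbers are mine):

```python
def profit(first_price, second_price, margin):
    # X = P1 - (P1/100 * m + 0.3) - P2
    return first_price - (first_price / 100.0 * margin + 0.3) - second_price

def first_price(profit_, second_price, margin):
    # P1 = (X + P2 + 0.3) / (1 - m/100)
    return (profit_ + second_price + 0.3) / (1.0 - margin / 100.0)

p1, p2, m = 120.0, 80.0, 15.0
x = profit(p1, p2, m)
assert abs(first_price(x, p2, m) - p1) < 1e-9  # recovers the original price
```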
Trace of a linear map
Let $A$ be the matrix representation of $f$ with respect to the basis $\{e_i\}$. Then, the trace of $f$ is the sum of the diagonal entries of $A$, of which the $i$-th diagonal entry is $e_i^T A e_i = (e_i^T A) e_i = (A e_i)^T e_i = \langle A e_i, e_i\rangle = \langle f(e_i),e_i\rangle$. So, you sum up all the $\langle f(e_i),e_i\rangle $'s to get the trace.
Probability node A in a graph is isolated given that we know at least 1 node is isolated?
This isn't an elegant solution, but I guess it is one. Maybe someone can come up with a better solution later. The only way I can think of solving this is to explicitly count it out. There are 8 edges. We cannot isolate any nodes without removing at least 2 edges. There are 3 ways to remove 2 edges so that a node is isolated (B,D,F) There are 20 ways to remove 3 edges so that a node is isolated (1 way to isolate A,C each. 6 ways to isolate each of B,D,F). For removing 4 edges, we need some casework. There are 5 ways to isolate A, one of which also isolates B and another isolates D There are $\binom{6}{2}=15$ ways to isolate B. Of these, A,C,D and F are each isolated once. There are 5 ways to isolate C. One of these isolates B and another isolates F. There are $\binom{6}{2}=15$ ways to isolate D. Of these, A,F and B are each isolated once. There's one way to isolate E; no other node is isolated along with it. There are $\binom{6}{2}=15$ ways to isolate F. Of these, C,D and B are each isolated once. Putting these all together, there are $1+5\times 2 + 15\times 3 - \frac{1}{2}(2+4+2+3+0+3) = 49$ ways to isolate a point. There are only 2 ways to remove 5 edges so that no vertices are isolated. The remaining edges are shown below: (AB)(CF)(DE), (AD)(BC)(EF) So, there are $\binom{8}{5}-2 = 54$ ways to remove 5 edges so that at least one node is isolated. If more than 5 edges are removed, then at least one vertex will be isolated. There are $\binom{8}{6} = 28$ ways to remove 6 edges. There are $\binom{8}{7} = 8$ ways to remove 7 edges. There's only one way to remove all edges. Let $q=1-p$. Then, \begin{align} P(\text{at least one node isolated}) &= 3p^2q^6 + 20p^3q^5 + 49p^4q^4 + 54p^5q^3 + 28p^6q^2 + 8p^7q + p^8\\ P(\text{A is isolated}) &= p^3\\ P(\text{A isolated}|>0\text{ nodes isolated}) &= \frac{p}{3q^6 + 20pq^5 + 49p^2q^4 + 54p^3q^3 + 28p^4q^2 + 8p^5q + p^6}. \end{align} Hope there were no mistakes. Make of that what you will.
3D Epicycle Drawing of a Space Curve Using a Quaternion Fourier Transform
One can do a Fourier series of every element of a multidimensional closed parametric curve $\vec{f}(t) = (f_1(t),f_2(t),\cdots,f_N(t))\in\mathbb{R}^N$ with $$ f_i(t) = \sum_{k=0}^\infty a_{i,k} \sin(k\,\omega\,t) + b_{i,k} \cos(k\,\omega\,t). \tag{1} $$ The contribution of each frequency $k\,\omega$ to $\vec{f}(t)$ can be written as $$ \vec{f}_k(t) = \begin{bmatrix} a_{1,k} & b_{1,k} \\ a_{2,k} & b_{2,k} \\ \vdots & \vdots \\ a_{N,k} & b_{N,k} \end{bmatrix} \begin{bmatrix} \cos(k\,\omega\,t) \\ \sin(k\,\omega\,t) \end{bmatrix}, \tag{2} $$ such that $\vec{f}(t) = \sum_{k=0}^\infty \vec{f}_k(t)$. It can be noted that each $\vec{f}_k(t)$ forms an ellipse in the plane spanned by the vectors $\vec{a}_k = (a_{1,k},a_{2,k},\cdots,a_{N,k})$ and $\vec{b}_k = (b_{1,k},b_{2,k},\cdots,b_{N,k})$. This ellipse can also be obtained by adding two counter-rotating circles using $$ \vec{f}_k(t) = \alpha_k \begin{bmatrix} \vec{x}_k & \vec{y}_k \end{bmatrix} \begin{bmatrix} \cos(k\,\omega\,t + \varphi_k) \\ \sin(k\,\omega\,t + \varphi_k) \end{bmatrix} + \beta_k \begin{bmatrix} \vec{x}_k & \vec{y}_k \end{bmatrix} \begin{bmatrix} \cos(-k\,\omega\,t + \theta_k) \\ \sin(-k\,\omega\,t + \theta_k) \end{bmatrix}, \tag{3} $$ where $\alpha_k,\beta_k\geq0$ are the radii of the circles, $\{\vec{x}_k,\vec{y}_k\}$ form an orthonormal basis for $\{\vec{a}_k,\vec{b}_k\}$ and $\varphi_k,\theta_k\in\mathbb{R}$ represent the starting angle of each circle with respect to the used orthonormal basis. For example $\{\vec{x}_k,\vec{y}_k\}$ could be obtained using the Gram–Schmidt process \begin{align} \vec{x}_k &= \frac{\vec{a}_k}{\|\vec{a}_k\|}, \\ \vec{y}_k &= \frac{\vec{b}_k - \big\langle\vec{x}_k , \vec{b}_k\big\rangle\,\vec{x}_k}{\|\vec{b}_k - \big\langle\vec{x}_k , \vec{b}_k\big\rangle\,\vec{x}_k\|}. 
\end{align} If $\|\vec{a}_k\|=0$ you could swap $\vec{a}_k$ with $\vec{b}_k$ (if both are zero then the entire $\vec{f}_k(t)$ term could be omitted) and if $\|\vec{b}_k - \big\langle\vec{x}_k , \vec{b}_k\big\rangle\,\vec{x}_k\|=0$ one could pick any vector which is orthonormal to $\vec{x}_k$ (the resulting contribution of $\vec{y}_k$ is zero after adding the two circles). By using the trigonometric identities $\cos(x + \psi) = \cos(\psi)\cos(x) - \sin(\psi)\sin(x)$ and $\sin(x + \psi) = \sin(\psi)\cos(x) + \cos(\psi)\sin(x)$, equation $(3)$ can also be written as $$ \vec{f}_k(t) = \begin{bmatrix} \vec{x}_k & \vec{y}_k \end{bmatrix} \begin{bmatrix} \alpha_k \cos(\varphi_k) + \beta_k \cos(\theta_k) & \beta_k \sin(\theta_k) - \alpha_k \sin(\varphi_k) \\ \alpha_k \sin(\varphi_k) + \beta_k \sin(\theta_k) & \alpha_k \cos(\varphi_k) - \beta_k \cos(\theta_k) \end{bmatrix} \begin{bmatrix} \cos(k\,\omega\,t) \\ \sin(k\,\omega\,t) \end{bmatrix}. \tag{4} $$ Equating $(4)$ to $(2)$ allows for the time-varying terms to be factored out. Combining this with the fact that $\{\vec{x}_k,\vec{y}_k\}$ are orthonormal, it can be rewritten as $$ \begin{bmatrix} \big\langle\vec{a}_k,\vec{x}_k\big\rangle \\ \big\langle\vec{a}_k,\vec{y}_k\big\rangle \\ \big\langle\vec{b}_k,\vec{x}_k\big\rangle \\ \big\langle\vec{b}_k,\vec{y}_k\big\rangle \end{bmatrix} = \begin{bmatrix} \alpha_k \cos(\varphi_k) + \beta_k \cos(\theta_k) \\ \alpha_k \sin(\varphi_k) + \beta_k \sin(\theta_k) \\ \beta_k \sin(\theta_k) - \alpha_k \sin(\varphi_k) \\ \alpha_k \cos(\varphi_k) - \beta_k \cos(\theta_k) \end{bmatrix}. 
\tag{5} $$ Solving $(5)$ for $\alpha_k$, $\beta_k$, $\varphi_k$ and $\theta_k$ yields \begin{align} \alpha_k &= \frac{1}{2}\sqrt{ \left(\big\langle\vec{a}_k,\vec{x}_k\big\rangle + \big\langle\vec{b}_k,\vec{y}_k\big\rangle\right)^2 + \left(\big\langle\vec{a}_k,\vec{y}_k\big\rangle - \big\langle\vec{b}_k,\vec{x}_k\big\rangle\right)^2}, \tag{6a} \\ \beta_k &= \frac{1}{2}\sqrt{ \left(\big\langle\vec{a}_k,\vec{x}_k\big\rangle - \big\langle\vec{b}_k,\vec{y}_k\big\rangle\right)^2 + \left(\big\langle\vec{a}_k,\vec{y}_k\big\rangle + \big\langle\vec{b}_k,\vec{x}_k\big\rangle\right)^2}, \tag{6b} \\ \varphi_k &= \text{arctan2}\left( \big\langle\vec{a}_k,\vec{y}_k\big\rangle - \big\langle\vec{b}_k,\vec{x}_k\big\rangle, \big\langle\vec{a}_k,\vec{x}_k\big\rangle + \big\langle\vec{b}_k,\vec{y}_k\big\rangle\right), \tag{6c} \\ \theta_k &= \text{arctan2}\left( \big\langle\vec{a}_k,\vec{y}_k\big\rangle + \big\langle\vec{b}_k,\vec{x}_k\big\rangle, \big\langle\vec{a}_k,\vec{x}_k\big\rangle - \big\langle\vec{b}_k,\vec{y}_k\big\rangle\right). \tag{6d} \end{align} So any multidimensional closed parametric curve can be written as a sum of pairs of counter-rotating circles in the same plane. Hopefully it is clear from $(1)$ and $(2)$ that each frequency component forms an ellipse in a certain plane. The decomposition of an ellipse into two counter-rotating circles can also be demonstrated with an animation.
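The formulas $(6a)$–$(6d)$ can be sanity-checked numerically: for random $\vec{a}_k$, $\vec{b}_k$ (almost surely nonzero and non-parallel, so the Gram–Schmidt step is well defined), the two counter-rotating circles of $(3)$ must reproduce $\vec{a}_k\cos(\omega t)+\vec{b}_k\sin(\omega t)$. A Python sketch with $k=\omega=1$ and $N=3$ (all variable names are mine):

```python
import math, random

random.seed(0)
N = 3
a = [random.gauss(0, 1) for _ in range(N)]
b = [random.gauss(0, 1) for _ in range(N)]

dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
norm = lambda u: math.sqrt(dot(u, u))

# Gram-Schmidt orthonormal basis {x, y} for span{a, b}
x = [ai / norm(a) for ai in a]
y0 = [bi - dot(x, b) * xi for bi, xi in zip(b, x)]
y = [yi / norm(y0) for yi in y0]

A1, A2, B1, B2 = dot(a, x), dot(a, y), dot(b, x), dot(b, y)
alpha = 0.5 * math.hypot(A1 + B2, A2 - B1)   # (6a)
beta  = 0.5 * math.hypot(A1 - B2, A2 + B1)   # (6b)
phi   = math.atan2(A2 - B1, A1 + B2)         # (6c)
theta = math.atan2(A2 + B1, A1 - B2)         # (6d)

# the two counter-rotating circles must reproduce a*cos(t) + b*sin(t)
for t in (0.0, 0.3, 1.1, 2.7):
    direct = [ai * math.cos(t) + bi * math.sin(t) for ai, bi in zip(a, b)]
    circles = [alpha * (xi * math.cos(t + phi) + yi * math.sin(t + phi))
               + beta * (xi * math.cos(-t + theta) + yi * math.sin(-t + theta))
               for xi, yi in zip(x, y)]
    assert all(abs(d - c) < 1e-12 for d, c in zip(direct, circles))
```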
Average of middle number
If you have a list of $12$ numbers, the first six numbers are in positions $1-6$ and the last six numbers are in positions $7-12$. As stated, the data you are given is inconsistent with the list being $12$ numbers. If the first six average $10.4$ and the last six average $11.5$, the whole list would have to average $10.95$. Please check the problem and state it clearly.
In $\Delta ABC$, $AB:AC = 4:3$ and $M$ is the midpoint of $BC$ . $E$ is a point on $AB$ and $F$ is a point on $AC$ such that $AE:AF = 2:1$
Rename the points like on a picture: $AE:AD =2:1$ Draw a parallel through $B$ to $DE$ then if $AB = 4y$ then $AF = 2y$ and $AC = 3y$ so $AF:CF = 2:1$. Draw a parallel through $F$ to $AM$ then $CI:IM = 1:2$, so if $MB = 3z$ then $MI = 2z$. So $IM:MB = 2:3$ and thus $FH:HB = 2:3$. But $DG:GE = FH:HB$ so $x=54$.
$G$ acts faithfully on $\Omega$, $A\leq G$, $A$ transitive on $\Omega$. Then $|C_G(A)|$ is a divisor of $|\Omega|$.
First, we need that $C_G(A)$ acts semi-regularly on $\Omega$, i.e., no non-identity element of $C_G(A)$ stabilizes a point of $\Omega$. To see this, let $g\in C_G(A)$ stabilize a point $x\in \Omega$. Since $A$ is transitive on $\Omega$, for any $y\in \Omega$ there exists $a\in A$ such that $xa=y$. Now $g^a=g$ because $g$ centralizes $A$, and, as is standard, if $g$ stabilizes $x$ then $g^a$ stabilizes $xa=y$. Thus $g$ fixes every point of $\Omega$, and since the action is faithful, $g=1$. This proves that $C_G(A)$ acts semi-regularly on $\Omega$. Each orbit has length $|C_G(A)|$, by the orbit-stabilizer theorem, and so $|C_G(A)|$ divides $|\Omega|$, as needed.
How do I determine cardinalities, given the cardinality of other sets?
$|A\times B|$ is indeed $10\cdot5=50$; the fact that $A$ and $B$ have some elements in common doesn’t matter. Look at a smaller example: say $A=\{1,2,3\}$, and $B=\{1,2\}$. Then $$A\times B=\{\langle 1,1\rangle,\langle 1,2\rangle,\langle 2,1\rangle,\langle 2,2\rangle,\langle 3,1\rangle,\langle 3,2\rangle\}$$ has $3\cdot2=6$ elements, as you can see by inspection. $A\cap(A\times B)$ need not be empty: we might, for instance, have $A=\{0,\langle 0,0\rangle\}$ and $B=\{0\}$, so that $A\times B=\left\{\langle 0,0\rangle,\big\langle\langle 0,0\rangle,0\big\rangle\right\}$, and $A\cup(A\times B)=\left\{0,\langle 0,0\rangle,\big\langle\langle 0,0\rangle,0\big\rangle\right\}$ has cardinality $3$ instead of $2+2\cdot1=4$. In fact you haven’t enough information to answer (3). I suspect, however, that you were supposed to assume that $A\cap(A\times B)=\varnothing$, in which case your calculations are all correct.
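The smaller example can be checked directly; a Python sketch (here the integer elements are not ordered pairs, so $A$ and $A\times B$ happen to be disjoint):

```python
from itertools import product

A = {1, 2, 3}
B = {1, 2}
AxB = set(product(A, B))

assert len(AxB) == len(A) * len(B) == 6  # overlap between A and B is irrelevant
assert A & AxB == set()                  # ints vs. tuples: disjoint in this case
```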
Informal derivation (and interpretation) of Substitution Rule from Chain Rule?
Informally, you might think of the substitution rule this way: \begin{align} &\begin{array}{c}\text{the accumulated}\\ \text{value due to} \\ \text{integration of}\end{array} \left( \begin{array}{c}\text{the rate of}\\ \text{accumulation per}\\ \text{unit change in $x$}\end{array} \times \begin{array}{c}\text{the small}\\ \text{change}\\ \text{in $x$}\end{array} \right)\\[2ex] &\qquad = \begin{array}{c}\text{the accumulated}\\ \text{value due to} \\ \text{integration of}\end{array}\left( \begin{array}{c}\text{the rate of}\\ \text{accumulation per}\\ \text{unit change in $u$}\end{array} \times \begin{array}{c}\text{the change in $u$}\\ \text{caused by a small} \\ \text{change in $x$}\end{array} \times \begin{array}{c}\text{the small}\\ \text{change}\\ \text{in $x$}\end{array} \right)\\[2ex] &\qquad = \begin{array}{c}\text{the accumulated}\\ \text{value due to} \\ \text{integration of}\end{array}\left( \begin{array}{c}\text{the rate of}\\ \text{accumulation per}\\ \text{unit change in $u$}\end{array} \times \begin{array}{c}\text{the small}\\ \text{change}\\ \text{in $u$}\end{array} \right)\\[2ex] \end{align} When we write $\int f(x)\,dx = \int g(u)\, du$ the middle line is not explicitly included in the equation, but we invoke it implicitly when we write $f(x) = g(u) \frac{du}{dx}$ and $du = \frac{du}{dx} dx.$ Also implicitly assumed in all of this is that "the small change in $u$" is tied in a lockstep manner to "the small change in $x$" by defining $u$ as a function of $x$. (That is why I wrote "the small change" rather than "a small change". You cannot have just any small change in each place; each small change must be the change that corresponds to the small changes that occur in the other parts of the equation.) The relationship between the chain rule and the substitution rule is evident here.
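A concrete instance of $\int f(x)\,dx = \int g(u)\,du$ can be checked numerically; a Python sketch using a standard example of my choosing, $u=x^2$ in $\int_0^1 2x\cos(x^2)\,dx$:

```python
import math

def trapezoid(f, a, b, n=100_000):
    # composite trapezoid rule on [a, b]
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

# substitute u = x^2 (so du = 2x dx) in  ∫_0^1 2x cos(x^2) dx  =  ∫_0^1 cos(u) du
lhs = trapezoid(lambda x: 2 * x * math.cos(x * x), 0.0, 1.0)
rhs = trapezoid(math.cos, 0.0, 1.0)
assert abs(lhs - rhs) < 1e-6
assert abs(lhs - math.sin(1.0)) < 1e-6  # both equal sin(1)
```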
A step in verifying a stopping time.
The equality $$\{\sigma_H \leq t\} = \{X_0 \in H\} \cup \{X_s \in H \, \text{or} \, X_{s-} \in H \, \text{for some} \, s \in (0,t]\} \tag{1}$$ does not hold. Consider the following counterexample: take the (deterministic) càdlàg process $$X_t := \begin{cases} t & t<1 \\ 0 & t \geq 1. \end{cases}$$ and set $H:=\{1\}$. Then it follows easily from the definition of $\sigma_H$ that $\sigma_H = \infty$ and therefore $$\{\sigma_H \leq t\} = \emptyset$$ for all $t \geq 0$. On the other hand, the right-hand side of $(1)$ does not equal $\emptyset$ for $t \geq 1$; this follows from the fact that $X_{1-} = 1 \in H$.
Looking for a simpler solution to a problem about the divisibility of combinatorial numbers
We have the identity$$ (n+1)\, \text{lcm}\left(\binom{n}{1}, \binom{n}{2}, \ldots, \binom{n}{n-1}\right) = \text{lcm}(1,2,\ldots, n+1).$$If $p$ is a prime number that does not divide $n + 1$ and $p^r \le n$, then the right-hand side is divisible by $p^r$ which forces some $\binom{n}{k}$ to be divisible by $p^r$. Such a prime $p$ exists for all large $n$ because the smallest $p$ not dividing $n + 1$ is at most $(1 + \text{o}(1))\log(n)$.
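The identity can be verified for small $n$; a Python sketch (the helper `lcm_list` is mine):

```python
from math import comb, gcd
from functools import reduce

def lcm_list(xs):
    # least common multiple of an iterable of positive integers
    return reduce(lambda x, y: x * y // gcd(x, y), xs, 1)

for n in range(2, 60):
    lhs = (n + 1) * lcm_list(comb(n, k) for k in range(1, n))
    rhs = lcm_list(range(1, n + 2))
    assert lhs == rhs  # (n+1) * lcm of middle binomials = lcm(1, ..., n+1)
```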
Help making sense of setbuilder notation found in Real Mathematical Analysis by Pugh
Though Robert Shore has answered your basic question, I thought a specific example might help. Let's do the classic: $\mathcal C = \{x \in \Bbb R : x < 0 \text{ or }x^2 < 2\}$. We can express this in terms of cuts in $\Bbb Q$ as $$\mathcal C = \{A|B \in \Bbb R : \text{ for all }a \in A, a < 0 \text{ or }a^2 < 2\}$$ Now for $a \in C$, there is some $A|B \in \mathcal C$ with $a \in A$. But by the definition of $\mathcal C$, this means that either $a < 0$ or $a^2 < 2$. We note that the cut $X|Y = \{x \in \Bbb Q: x < 1\}|\{y \in \Bbb Q: y \ge 1\} \in \mathcal C$. Hence every $x < 1$ is in $X$, and therefore in $C$. Now suppose $r$ is any positive rational number with $r^2 < 2$. Then there must be a $q > r$ with $q^2 < 2$ (because there is a perfect rational square between any two rational numbers, which I'll leave to you to prove). But then $U|V = \{u \in \Bbb Q: u < q\}|\{v \in \Bbb Q: q \le v\}\in\mathcal C$, and $r \in U$. So $r \in C$. Therefore $C = \{r \in \Bbb Q: r < 0 \text{ or } r^2 < 2\}$. What is significant about this example is that there is no $A|B\in \mathcal C$ with $A = C$. $A \subset C$, but there is no single $A$ which is all of $C$.
Logarithmic commission?
Rather than trying to guess at a function, I suggest your try to translate your requirements into mathematical terms. As I understand it, in your example, you want $$\begin{align} 60r_1+40r_2&=15\\ 60r_1&<40r_2,\end{align}$$ where $r_1,r_2$ are the commission rates. Obviously, if the commissions were equal, we would have $$\begin{align} 60r_1&=40r_2=7.5\\r_1&={7.5\over60}=.125\\r_2&={7.5\over40}=.1875\end{align}$$ If you really need the commission on the smaller part to be larger, just choose $r_1$ to be any convenient number less than $12.5\%$ and use the equation $60r_1+40r_2=15$ to determine $r_2.$
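The recipe above in a few lines of Python (the chosen $r_1=0.10$ is just one convenient value below $0.125$):

```python
part1, part2, commission = 60.0, 40.0, 15.0

r1 = 0.10                                   # any rate below 7.5/60 = 0.125
r2 = (commission - part1 * r1) / part2      # forced by 60*r1 + 40*r2 = 15

assert abs(part1 * r1 + part2 * r2 - commission) < 1e-12
assert part1 * r1 < part2 * r2              # smaller part earns more commission
```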
Proving $\sum\limits_{n = 0}^{\infty} \frac{x^n}{n!}$ converges
One way to prove it is as follows: $(s_n)_{n\geqslant 1}$ is a non-decreasing sequence, therefore it suffices to show that it is bounded above. There exists $C>0$ such that $\frac{|x|^n}{n!}\leqslant\frac{C}{2^n}$; this is because $\lim\limits_{n\rightarrow +\infty}\frac{(2|x|)^n}{n!}=0$, thus the sequence $\left(\frac{(2|x|)^n}{n!}\right)_{n\geqslant 1}$ is bounded. Finally, $$ s_n\leqslant\sum_{k=1}^n\frac{C}{2^k}\leqslant C $$ and this ends the proof.
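Numerically the partial sums do converge (to $e^x$, as is well known); a Python spot-check with a helper of my own naming:

```python
import math

def partial_sum(x, n):
    # sum of x^k / k! for k = 0 .. n, accumulating the term iteratively
    s, term = 0.0, 1.0
    for k in range(n + 1):
        s += term
        term *= x / (k + 1)
    return s

for x in (-3.0, 0.5, 2.0, 10.0):
    assert abs(partial_sum(x, 80) - math.exp(x)) < 1e-9
```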
If $fg$ is smooth, are $f$ and $g$ necessarily smooth?
Let $f(x)=1$ if $x\in\Bbb Q$ and $f(x)=2$ otherwise, and let $g(x):=1/f(x)$.
Linear independence of certain vectors of $\mathbf{C}^2$ over $\mathbf{R}$
Just follow the rules: Let $a,b,c,d \in \mathbb R$ and $ae_1+be_2+ce_1i+de_2i=0$. Then we have: $$ae_1+be_2+ce_1i+de_2i=0 \iff (a+ci)e_1+(b+di)e_2=0 \iff a,b,c,d=0$$ since we have $$(a+ci)\left(\begin{array}{c}1 \\0 \end{array}\right)+(b+di)\left(\begin{array}{c}0 \\1 \end{array}\right)=\left(\begin{array}{c}0 \\0 \end{array}\right) \iff a+ci=0,b+di=0$$ and $a+ci,b+di$ are just complex numbers since $a,b,c,d$ are reals, and a complex number $z=x+yi$ with $x,y \in \mathbb R$ equals $0$ if and only if its real and imaginary parts ($\operatorname{Re}(z)=x$, $\operatorname{Im}(z)=y$) are equal to $0$. So we have that $\mathbb C^2$ over $\mathbb R$ is a $4$-dimensional vector space. Using the fact $\mathbb C \cong \mathbb R^2 $ you can conclude the claim. Edit: Even though there is a separate answer above on how to deal with the span, I'll give a proof too for the sake of completeness and because I know that sometimes it's good to see the same story twice, so here it is: Let $z \in \mathbb C^2$ be arbitrary and of the form $z=(z_1,z_2)$ with $z_1,z_2 \in \mathbb C$. Since elements in the complex plane consist of a real and an imaginary part, we can 'decompose' our $z$ even further by $$z_1=x_1+y_1i; \space x_1,y_1 \in \mathbb R$$ $$z_2=x_2+y_2i, \space x_2,y_2 \in \mathbb R$$ So we can express our arbitrary $z \in \mathbb C^2$ as follows: $$z=(z_1,z_2)=(z_1,0)+(0,z_2)=z_1e_1+z_2e_2=(x_1+y_1i)e_1+(x_2+y_2i)e_2=x_1e_1+x_2e_2+y_1e_1i+y_2e_2i$$ And as you can see, we were able to find a way to express an arbitrary (and hence any!) element in $\mathbb C^2$ in terms of real-valued coefficients and the linearly independent vectors $\{e_1,e_2,e_1i,e_2i\}$.
direct sum of $T$-invariant subspaces
It is not true. Consider $$ A=\left(\begin{array}{ccc} 0&0&0\\1&0&1\\0&0&0\end{array}\right). $$ For the standard basis $\{e_1,e_2,e_3\}$, it holds that $I(e_1)=\text{span}\{e_1,e_2\}$ and $I(e_3)=\text{span}\{e_2,e_3\}$. Hence $I(e_1)\cap I(e_3)\neq\{0\}.$
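The counterexample is easy to verify by applying $A$ to the basis vectors; a Python sketch (helper name is mine):

```python
A = [[0, 0, 0],
     [1, 0, 1],
     [0, 0, 0]]

def apply(M, v):
    # matrix-vector product M v
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

e1, e2, e3 = [1, 0, 0], [0, 1, 0], [0, 0, 1]
assert apply(A, e1) == e2        # A e1 = e2, so I(e1) = span{e1, e2}
assert apply(A, e2) == [0, 0, 0]
assert apply(A, e3) == e2        # A e3 = e2, so I(e3) = span{e2, e3}
# e2 lies in both invariant subspaces, so the sum I(e1) + I(e3) is not direct
```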
How can I solve $\lim_{(x,y) \rightarrow (0,0)} \frac{xy\sin(x+y)}{x^2+y^2+|xy|}$?
Your argument is not correct because the estimate $$ \bigl|\frac{xy\sin(x+y)}{x^2+y^2+|xy|} \bigr|<|xy\sin(x+y)| $$ is valid only if the denominator is greater than one, and that is not the case for $x, y$ close to zero. But your result is correct, the limit is indeed zero: From $$ \frac {|xy|}{x^2 + y^2 + |xy|} \le 1 \tag 1 $$ it follows that $$ \bigl| \frac{xy\sin(x+y)}{x^2+y^2+|xy|} \bigr| \le \ |\sin(x+y)| \le |x + y| $$ which tends to zero for $(x, y) \to (0,0)$. Remark: From the AM-GM inequality you have $|xy| \le \frac{x^2 + y^2}2$ and therefore the inequality $(1)$ can be improved to $$ \frac {|xy|}{x^2 + y^2 + |xy|} \le \frac{|xy|}{2 |xy| + |xy|} = \frac 13 \, . $$ But since the other factor $\sin(x+y)$ tends to zero, this better estimate is not needed to compute the limit.
Differential equation system involving square.
If $D=\dfrac{d}{dx}$, you have: $$\dfrac{d^2}{dx^2}f(x)=f(x)^2$$ So the solution is: $$f(x)=6\,\wp(x+c_1;\,0,\,c_2),$$ where $\wp(z;g_2,g_3)$ is the Weierstrass elliptic function, here with invariants $g_2=0$ and $g_3=c_2$. In more dimensions the solution is similar.
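The reduction behind the Weierstrass solution can be sanity-checked numerically: multiplying $f''=f^2$ by $f'$ and integrating gives the conserved quantity $\tfrac12 f'^2 - \tfrac13 f^3$, the same kind of first integral that the $\wp$-function satisfies. A Python sketch with a hand-rolled RK4 integrator (all names and the initial condition are mine):

```python
def rk4_step(f, y, h):
    # one classical Runge-Kutta step for y' = f(y)
    k1 = f(y)
    k2 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k1)])
    k3 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k2)])
    k4 = f([yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6.0 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# f'' = f^2 as a first-order system (u, v): u' = v, v' = u^2
rhs = lambda y: [y[1], y[0] ** 2]
y = [-1.0, 0.0]                              # f(0) = -1, f'(0) = 0
E0 = y[1] ** 2 / 2 - y[0] ** 3 / 3           # conserved energy, = 1/3 here
for _ in range(1000):                        # integrate to t = 1
    y = rk4_step(rhs, y, 1e-3)
assert abs(y[1] ** 2 / 2 - y[0] ** 3 / 3 - E0) < 1e-9
```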
Probability of intersection of two events
The probability of the intersection of two events A and B is $P(A) * P(B)$ when A and B are independent. In this case the event "picking a truck" changes the probability of the event "the picked car is red" from $\frac{2}{5}$ to $P(R|T) = \frac{1}{3}$, so they are not independent. In this formula $P(R|T)$ means "probability that a random car is red knowing (the | symbol) that it is a truck". You actually proved that they are not independent when you saw that $P(T) * P(R)$ differs from the probability of picking at random a red truck. This is called conditional probability: https://en.wikipedia.org/wiki/Conditional_probability More info on independent events: https://en.wikipedia.org/wiki/Independence_(probability_theory)
How can I transform this system into a first order system?
Let $z = (u,u',v,v') = (z_1,z_2,z_3,z_4)$. Then $z_1' = z_2$ and $z_3' = z_4$ by definition, and the two second-order equations become $$ z_2' = \frac{z_3}{1+t^2} - \sin(r(t))$$ $$ z_4' = \frac{-z_1}{1+t^2} + \cos(r(t))$$
Congruence if and only if Left and Right Congruence
If $$\begin{array}{rcll} x &\sim& x' \\ x \circ y &\sim& x' \circ y & \text{right congruence} \\ y &\sim& y' \\ x' \circ y &\sim& x' \circ y' & \text{left congruence} \\ x \circ y &\sim& x' \circ y' & \text{equivalence relation is transitive} \\ \end{array}$$ Only if Let $x,y,z \in M$ such that $x \sim y$. Also, $z \sim z$ because equivalence relation is reflexive. Then, by the given assumption, we have $x \circ z \sim y \circ z$ (right congruence) and $z \circ x \sim z \circ y$ (left congruence). Remarks Note how the two properties of equivalence relation are used here. Also, note how symmetry is not used. We can construct a relation that is reflexive and transitive but not symmetric, on a set of two elements $S = \{a,b\}$, as follows: $R = \{(a,a), (a,b), (b,b)\}$, i.e. for all $x,y \in S$, $xRy$ if and only if $(x,y) \ne (b,a)$. Reflexive We have $aRa$ and $bRb$, so $R$ is reflexive. Symmetric We have $aRb$ but not $bRa$, so $R$ is not symmetric. Transitive We have $8$ statements to check: $(x,y,z) = (a,a,a): aRa \land aRa \implies aRa$ (true and true implies true) $(x,y,z) = (a,a,b): aRa \land aRb \implies aRb$ (true and true implies true) $(x,y,z) = (a,b,a): aRb \land bRa \implies aRa$ (true and false implies true) $(x,y,z) = (a,b,b): aRb \land bRb \implies aRb$ (true and true implies true) $(x,y,z) = (b,a,a): bRa \land aRa \implies bRa$ (false and true implies false) $(x,y,z) = (b,a,b): bRa \land aRb \implies bRb$ (false and true implies true) $(x,y,z) = (b,b,a): bRb \land bRa \implies bRa$ (true and false implies false) $(x,y,z) = (b,b,b): bRb \land bRb \implies bRb$ (true and true implies true) Congruence There are 16 binary operations that can be defined on $S$. 
I'll only pick one: $$\begin{array}{c|c} \circ&a&b\\\hline a&a&a\\\hline b&b&b \end{array}$$ The verification that $\circ$ and $\sim$ satisfy all three of left congruence, right congruence, and congruence, is left to the reader as an exercise, ending this exploration.
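The three properties of the example relation $R$ can also be brute-forced; a Python sketch (variable names are mine):

```python
S = ['a', 'b']
R = {('a', 'a'), ('a', 'b'), ('b', 'b')}

reflexive  = all((x, x) in R for x in S)
symmetric  = all((y, x) in R for (x, y) in R)
transitive = all((x, z) in R
                 for (x, y) in R for (y2, z) in R if y == y2)

assert reflexive and transitive and not symmetric
```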
Set Theory: Union over sets satisfying criterion, rigorous definition?
You can take $$ b = \{B\in \mathcal{B}\colon B\subseteq U\}. $$ Obviously, $$C = \bigcup b\subseteq U.$$ For each $x\in U$ there exists some $B_x\in b$ such that $x\in B_x$, hence $x\in C$. You don't have to make any choices. You should just take more sets in $b$ than you really need.
A simple question related to Complex Numbers?
When dealing with complex numbers, for integer values of n, $\sqrt[n]z$ is not a single number, but rather a set of n numbers, each of which has the property that its n-th power is z. For instance, $\sqrt1=\pm1$, $\sqrt[3]1=\left\{1,\dfrac{-1\pm i\sqrt3}2\right\}$, $\sqrt[4]1=\{\pm1,\pm i\}$, etc. In other words, for complex numbers, the n-th root is a binary relation rather than an actual function. Which is why the property that the n-th root of a product is the same as the product of n-th roots, ultimately no longer holds true anymore.
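The set of $n$-th roots can be computed explicitly from the polar form; a Python sketch using the standard library's `cmath` (the helper name is mine):

```python
import cmath

def nth_roots(z, n):
    # all n complex numbers w with w**n == z
    r, phi = cmath.polar(z)
    return [cmath.rect(r ** (1 / n), (phi + 2 * cmath.pi * k) / n)
            for k in range(n)]

roots = nth_roots(1, 4)
assert len(roots) == 4                         # {1, i, -1, -i} up to roundoff
assert all(abs(w ** 4 - 1) < 1e-12 for w in roots)
```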
Example for an operator that is strictly monotone but not maximally monotone (or the other way)
The word 'maximal' means: the operator is maximal with respect to inclusions of monotone operators. You cannot add points to its graph without destroying monotonicity. An example of some non-trivial maximal monotone operator is the subdifferential of $x\mapsto |x|$: $$ A(x) = \begin{cases} -1 & \text{ if } x<0 \\ [-1,1] & \text{ if } x=0\\ +1 & \text{ if } x>0 \end{cases} $$ So this is a monotone set, which is connected from left to right. If $A$ is maximal monotone then $A(x)$ is convex and closed. If we modified the above operator to $A(0)=\{-1,+1\}$ then it would no longer be maximal.
Showing an upper bound on $\kappa(G)$
Try to think of an interpretation of $\frac{2m}{n}$ and then relate it to the minimum degree $\delta(G)$. Hint: The interpretation has to do with degree. Note that $\frac{2m}{n}=\frac{\sum_{v\in V}\deg(v)}{n}$ and you can think about the right hand side as the average degree of vertices in $G$.
how to show that this is a sigma algebra?
Since $X$ is the domain of the function $\Psi$ with $Y$ as its codomain, we have that $\Psi^{-1}(Y)=\{x\in X\ |\ \Psi(x)\in Y\}=X\in \Sigma$, because $\Sigma$ is a $\sigma$-algebra.
Orthogonal group is a regular submanifold of $GL(n,\Bbb R)$
Consider the function $f:A\mapsto A^tA$ defined on the vector space $V=M_n(\mathbb R)$ of $n\times n$ matrices. Let $A_0\in O(n,\mathbb R)$. Let $B\in V$ and let us compute the derivative of $f$ at $A_0$ in the direction of $B$, namely, $$D_{A_0}f(B):=\lim_{h\to0}\frac{f(A_0+hB)-f(A_0)}{h}.$$ Using the definition of $f$, we have \begin{align}\frac{f(A_0+hB)-f(A_0)}{h}&=\frac{(A_0+hB)^t(A_0+hB)-A_0^tA_0}{h}\\ &=\frac{A_0^tA_0+h(B^tA_0+A_0^tB)+h^2B^tB-A_0^tA_0}{h}\\ &=B^tA_0+A_0^tB+hB^tB, \end{align} so that obviously the limit above is $$D_{A_0}f(B)=B^tA_0+A_0^tB.$$ Can you compute the rank of the differential $D_{A_0}f:V\to V$ that we have just computed? Notice that $D_{Id}f(A^tB)=D_Af(B)$. As long as $A$ is invertible, this implies that the maps $D_{Id}f$ and $D_Af$ have the same rank, so it is enough to compute the rank of $D_{Id}f$.
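The formula $D_{A_0}f(B)=B^tA_0+A_0^tB$ can be checked against a finite-difference quotient; a pure-Python sketch with random matrices (all helper names are mine):

```python
import random

random.seed(0)
n = 3

def rand_mat():
    return [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]

def mat_t(M):      return [[M[j][i] for j in range(n)] for i in range(n)]
def mat_mul(P, Q): return [[sum(P[i][k] * Q[k][j] for k in range(n))
                            for j in range(n)] for i in range(n)]
def mat_add(P, Q, s=1.0):
    return [[P[i][j] + s * Q[i][j] for j in range(n)] for i in range(n)]

f = lambda M: mat_mul(mat_t(M), M)           # f(A) = A^t A

A0, B = rand_mat(), rand_mat()
h = 1e-6
fA0, fAh = f(A0), f(mat_add(A0, B, h))
fd = [[(fAh[i][j] - fA0[i][j]) / h for j in range(n)] for i in range(n)]
exact = mat_add(mat_mul(mat_t(B), A0), mat_mul(mat_t(A0), B))
assert all(abs(fd[i][j] - exact[i][j]) < 1e-4
           for i in range(n) for j in range(n))
```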
Is there a meaningful way to define $ij$, where $i$ is the imaginary unit and $j$ is the split-complex unit?
No. What you are talking about is tessarines, a 4-dimensional algebra that combines complex and split-complex numbers. In that system $ij$ is a separate unit vector, similar to $i$ in that $(ij)^2=-1$. But not equal to $i$. It is irreducible. The algebra of tessarines is commutative and associative.
Order of automorphism group of cyclic group
Since an automorphism of $G$ should map a generator of $G$ to a generator of $G$, it's enough to know how many generators $G$ has. If $G=\{e,g,g^2,...,g^{m-1}\}$ then $g^i$ generates $G$ if and only if $\operatorname{gcd}(i,m)=1$. $\lvert \operatorname{Aut}(G)\rvert=\phi(m)$ where $\phi(m)$ is Euler's function. For a more detailed proof: Let $G=\langle g\rangle$ and $f\in\operatorname{Aut}(G)$. Then $f(g)=g^i$ for some $i$. If $f$ is an isomorphism $\langle g^i\rangle =G$ and this happens only if $\operatorname{gcd}(i,m)=1$. On the other hand every homomorphism $f:G\rightarrow G$ with $f(g)=g^i$ is an isomorphism when $\operatorname{gcd}(i,m)=1$, so $\lvert \operatorname{Aut}(G)\rvert=\phi(m)$.
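Modeling the cyclic group as $\mathbb Z/m\mathbb Z$, one can count the bijective maps $x\mapsto ix$ directly and compare with $\phi(m)$; a Python sketch (helper names are mine):

```python
from math import gcd

def aut_count(m):
    # count the maps x -> i*x (mod m) on Z/mZ that are bijections
    return sum(1 for i in range(m)
               if len({(i * x) % m for x in range(m)}) == m)

def euler_phi(m):
    return sum(1 for i in range(1, m + 1) if gcd(i, m) == 1)

for m in range(2, 40):
    assert aut_count(m) == euler_phi(m)
```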
How to figure out the contrapositive and negation of statements.
Following my hint: $(P\lor Q)\Rightarrow\neg R$ Contrapositive is: $\neg (P\lor Q)\Leftarrow\neg(\neg(R))$ Which simplifies to: $((\neg P)\land(\neg Q))\Leftarrow R$ or in the usual order: $R\Rightarrow((\neg P)\land(\neg Q))$. For $P\Rightarrow(Q\Rightarrow R)$, the leftmost $\Rightarrow$ is the outermost, so: $\neg P\Leftarrow\neg(Q\Rightarrow R)$ Which simplifies to $\neg P\Leftarrow(Q\land\neg R)$ or in the usual order: $(Q\land\neg R)\Rightarrow\neg P$.
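Both contrapositives can be confirmed by checking all truth assignments; a Python sketch encoding $A\Rightarrow B$ as $\neg A\lor B$:

```python
from itertools import product

for P, Q, R in product([False, True], repeat=3):
    stmt1 = (not (P or Q)) or (not R)          # (P ∨ Q) ⟹ ¬R
    contra1 = (not R) or (not P and not Q)     # R ⟹ (¬P ∧ ¬Q)
    assert stmt1 == contra1

    stmt2 = (not P) or ((not Q) or R)          # P ⟹ (Q ⟹ R)
    contra2 = (not (Q and not R)) or (not P)   # (Q ∧ ¬R) ⟹ ¬P
    assert stmt2 == contra2
```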
An irreducible $f\in \mathbb{Z}[x]$, whose image in every $(\mathbb{Z}/p\mathbb{Z})[x]$ has a root?
No. In fact, if $f$ is an irreducible polynomial of degree at least $2$ then there are infinitely many primes $p$ such that $f$ does not have a root $\bmod p$. The argument is standard and goes as follows. A theorem of Dedekind asserts that if $f$ factors as a product $\prod f_i(x) \bmod p$ of irreducible polynomials, and if $p$ does not divide the discriminant of $f$, then some element of the Galois group of $f$ has cycle type $(\deg f_1, \deg f_2, ...)$. The Frobenius density theorem (a weaker version of the Chebotarev density theorem) asserts that the converse holds in the following sense: if some element of the Galois group of $f$ has a given cycle type, then a positive proportion of primes has the property that the factorization of $f \bmod p$ has the same cycle type. In particular, if some element of the Galois group of $f$ has no fixed points, then a positive proportion of primes has the property that $f$ has no roots $\bmod p$. But the Galois group of $f$ acts transitively on its roots, and we have the following general result. Lemma: Let $G$ be a finite group acting transitively on a set $S$ of size greater than $1$. Then some $g \in G$ does not have a fixed point. Proof. By Burnside's lemma, $$1 = \frac{1}{|G|} \sum_{g \in G} |\text{Fix}(g)|$$ so the average number of fixed points of a random element of $G$ is $1$. On the other hand, $\text{id} \in G$ has $|S| > 1$ fixed points. Hence some element must have fewer than $1$ fixed point, which can only mean that it has no fixed points. $\Box$ The conclusion follows.
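The lemma's Burnside count is easy to verify computationally for a concrete transitive action, here $S_4$ on four points (a Python illustration of my choosing, not part of the original proof):

```python
from itertools import permutations

S = range(4)
G = list(permutations(S))            # S_4 acting transitively on {0,1,2,3}

# Burnside: average number of fixed points = number of orbits = 1
avg_fixed = sum(sum(1 for x in S if g[x] == x) for g in G) / len(G)
assert avg_fixed == 1.0

# so some element must fix nothing: the derangements
derangements = [g for g in G if all(g[x] != x for x in S)]
assert derangements                  # nonempty, as the lemma predicts
```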
Prove that if $X$ is a partition, then the partition obtained from its associated equivalence relation equals $X$.
You need to use the fact that $X$ is a partition, which you haven't.

So, you have an equivalence class $[a]_{E_X}$. You want to show that this class is equal to some element of $X$. Since $a\in \mathrm{dom}(X)$, there exists $P\in X$ such that $a\in P$. Now show that $[a]_{E_X} = P$; since they are both subsets of $\mathrm{dom}(X)$, you do this by showing that $[a]_{E_X}\subseteq P$ and that $P\subseteq [a]_{E_X}$. For example: if $p\in P$, then since $a,p\in P$ we have $(a,p)\in E_X$, hence $p\in [a]_{E_X}$. Thus, $P\subseteq [a]_{E_X}$. Etc.

In this part you want to prove that $c\in \mathrm{dom}(E_X)/E_X$. That is, you want to show that there exists $a\in\mathrm{dom}(E_X)$ such that $c=[a]_{E_X}$. You already have a candidate; so now you must show that $c=[a]_{E_X}$. To do this, since both $c$ and $[a]_{E_X}$ are subsets of $\mathrm{dom}(E_X)$, you show that $c\subseteq [a]_{E_X}$ and that $[a]_{E_X}\subseteq c$. For example, suppose $x\in c$; then $x,a\in c\in X$, so by definition of $E_X$ you have $(a,x)\in E_X$; therefore, $x\in[a]_{E_X}$, proving $c\subseteq [a]_{E_X}$. Etc.
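The round trip "partition $\to$ equivalence relation $\to$ partition" being argued here can also be sanity-checked computationally. A small Python sketch (function names mine):

```python
def equivalence_from_partition(X):
    # E_X: the set of pairs (a, b) lying together in some block of X
    return {(a, b) for block in X for a in block for b in block}

def quotient(E):
    # dom(E_X)/E_X: the set of equivalence classes [a]_{E_X}
    dom = {a for (a, b) in E}
    return {frozenset(b for (x, b) in E if x == a) for a in dom}

X = {frozenset({1, 2}), frozenset({3}), frozenset({4, 5, 6})}
E = equivalence_from_partition(X)
assert quotient(E) == X  # the quotient recovers the original partition
print("round trip ok")
```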
Power series expansion involving Lambert-W function
Use the MATHEMATICA commands

    f = -E/(2 z) ProductLog[-2 z/E^2] (Log[-E/(2 z) ProductLog[-2 z/E^2]] - 1)
    Series[f, {z, 1, 10}] // N

and you will obtain $$ f(z)=-0.880194-0.22445 (z-1.)-0.153651 (z-1.)^2-0.164248 (z-1.)^3-0.224172 (z-1.)^4-0.355448 (z-1.)^5-0.622729 (z-1.)^6-1.17077 (z-1.)^7-2.31929 (z-1.)^8-4.78304 (z-1.)^9-10.1829 (z-1.)^{10}+O\left((z-1.)^{11}\right) $$ or, if you prefer the lengthy exact form (only four terms shown), $$ f(z) = \frac{1}{2} e W\left(-\frac{2}{e^2}\right) \left(W\left(-\frac{2}{e^2}\right)+2\right)-\frac{1}{2} \left(e W\left(-\frac{2}{e^2}\right)^2\right) (z-1)+\frac{e W\left(-\frac{2}{e^2}\right)^3 (z-1)^2}{2 W\left(-\frac{2}{e^2}\right)+2}-\frac{\left(e W\left(-\frac{2}{e^2}\right)^4 \left(3 W\left(-\frac{2}{e^2}\right)+4\right)\right) (z-1)^3}{6 \left(W\left(-\frac{2}{e^2}\right)+1\right)^3}+\frac{e W\left(-\frac{2}{e^2}\right)^5 \left(2 W\left(-\frac{2}{e^2}\right) \left(6 W\left(-\frac{2}{e^2}\right)+17\right)+25\right) (z-1)^4}{24 \left(W\left(-\frac{2}{e^2}\right)+1\right)^5}+O\left((z-1)^5\right) $$ etc. You can also use the fact that $$ f(z) = y(z)(\ln y(z) -1) $$ with $$ y(z) = -\frac{e W\left(-\frac{2 z}{e^2}\right)}{2 z} $$ and $$ y_0 = y(1) = -\frac{1}{2} e W\left(-\frac{2}{e^2}\right) $$ and then find the expansion for $y$ first. This can be done with MATHEMATICA or by hand.
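If you do not have MATHEMATICA at hand, the constant term $f(1)\approx-0.880194$ is easy to reproduce numerically. The sketch below hand-rolls the principal branch of $W$ by solving $we^w=t$ with Newton's method (a stand-in for a library `lambertw`; valid here since $-2/e^2 > -1/e$):

```python
import math

def lambert_w(t, w=0.0):
    # Principal branch of W via Newton's method on g(w) = w*e^w - t.
    # Starting at 0 stays on the principal branch for t > -1/e.
    for _ in range(100):
        e = math.exp(w)
        w -= (w * e - t) / ((w + 1.0) * e)
    return w

def f(z):
    # f(z) = y(z) (ln y(z) - 1) with y(z) = -e W(-2z/e^2) / (2z)
    y = -math.e / (2 * z) * lambert_w(-2 * z / math.e ** 2)
    return y * (math.log(y) - 1)

print(f(1.0))  # about -0.880194, matching the constant term of the series
```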
Formula for $\sum_{i\geq 0} i{n \choose 2i}$?
So for $n$ even you can use ${n \choose 2i} = {n \choose n-2i}$ to rewrite the sum as $\sum_i i {n \choose n-2i}$. Reindexing with $i\mapsto \frac n2-i$ turns this into $\sum_i \left(\frac n2 - i\right){n\choose 2i}$; adding it to the original sum gives $2\sum_i i{n \choose 2i} = \frac n2\sum_i {n\choose 2i} = \frac n2\, 2^{n-1}$, so $\sum_i i{n \choose 2i} = n\,2^{n-3}$.
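The closed form $\sum_{i\ge 0} i\binom n{2i}=n\,2^{n-3}$ that this symmetrization yields is easy to verify numerically:

```python
from math import comb

def s(n):
    return sum(i * comb(n, 2 * i) for i in range(n // 2 + 1))

# The symmetrization argument gives s(n) = n * 2^(n-3); it checks out
# (and, as it happens, also for odd n >= 2, via 2i*comb(n,2i) = n*comb(n-1,2i-1)).
for n in range(2, 20):
    assert s(n) == n * 2 ** n // 8

print("verified for n = 2..19")
```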
Proving a necessary and sufficient condition for compactness of a subset of $\ell^p$
The properties 1) and 2) are quite obvious when $A$ is a finite set, so we see that a compact set "almost behaves like a finite set". Here is a formal proof.

As $\ell^p$ is complete, $A$ is complete whenever it's closed, so we just have to show pre-compactness. Fix $\varepsilon>0$, and apply 2) with $\frac{\varepsilon^p}{2^p}$. This gives an integer $N$ such that for all $x\in A$, $\sum_{j\geq N}|x_j|^p<\frac{\varepsilon^p}{2^p}$. As $A$ is bounded, we can find $M>0$ such that $\lVert x\rVert\leq M$ for all $x\in A$. Since $[-M,M]^N$ is pre-compact, we can find an integer $K$ and vectors $x^{(0)},\dots,x^{(K)}\in[-M,M]^N$ such that for all $v\in [-M,M]^N$, there exists $i\in \{0,\dots,K\}$ such that $\sum_{j=0}^{N-1}|v_j-x^{(i)}_j|^p\leq \frac{\varepsilon^p}{2^p}$. Define $y^{(i)}:=(x^{(i)}_0,\dots,x^{(i)}_{N-1},0,0,\ldots)\in \ell^p$ to see that $A$ is pre-compact.

Conversely, assume $A$ compact. A compact subset of a Hausdorff space is closed. It's bounded, as we can extract from the open cover $\{B(x,1)\}_{x\in A}$ a finite subcover $\{B(x,1)\}_{x\in F}$, where $F$ is finite; then for all $x\in A$, $\lVert x\rVert\leq 1+\max_{y\in F}\lVert y\rVert$. Fix $\varepsilon>0$; by pre-compactness, we can find an integer $K$ and $x^{(1)},\dots,x^{(K)}$ such that $\bigcup_{j=1}^KB(x^{(j)},\varepsilon/2)\supset A$. For each $j\leq K$, take $N_j$ such that $\sum_{i\geq N_j}|x_i^{(j)}|^p<\frac{\varepsilon^p}{2^p}$. Then take $N:=\max_{1\leq j\leq K}N_j$.
Free normal subgroup of an HNN-extension
As in the comments, we factor a map through the abelianization of $F$, which is finitely generated free abelian, and then project onto $\Bbb Z$, making sure we map $a$ to a nontrivial element. We now have a map $\varphi:F \to \Bbb Z$ with $\varphi(a)=\varphi(b) \neq 0$. We extend this to a map $\Phi:G\to \Bbb Z$ by killing $t$. By construction of $\varphi$ we have $\ker\Phi \cap \langle a\rangle = 1$, which means that $\ker\Phi$ acts freely on the edges of the Bass–Serre tree of the HNN extension (the edge stabilizers are the conjugates of $\langle a\rangle$ in $G$). This means that $\ker\Phi$ decomposes as a free product whose factors are free, since its vertex stabilizers are subgroups of conjugates of $F$ and subgroups of free groups are free. Hence $\ker\Phi$ is free.
Determine the smallest disc in which all the eigen values of a given matrix lie
Yes, it's an application of the Gershgorin circle theorem, which states that every eigenvalue lies in the union of the closed discs \begin{equation} K_i=\Big\{\, |z-a_{ii}|\le\sum_{j\neq i}|a_{ij}|\,\Big\}. \end{equation} Here we have $$A=\left[\begin{matrix}1&-2&3&-2\\1&1&0&3\\-1&1&1&-1\\0&-3&1&1\end{matrix}\right]$$ The discs are all centered at $1$ since every $a_{ii}=1$, so summing along each row you obtain the following restrictions on the eigenvalues: $$K_1=\left\{|z-1|\le |-2|+|3|+|-2|=7 \right\} $$ $$K_2=\left\{|z-1|\le 4 \right\}$$ $$K_3=\left\{|z-1|\le 3\right\}$$ $$K_4=\left\{|z-1|\le 4\right\}$$ The union of all the $K_i$ is $K_1$, so $K=\left\{|z-1|\le 7 \right\}$ seems to be the smallest disc containing all the eigenvalues. But since the eigenvalues of the transpose are the same, you can redo the calculation along the columns instead of the rows, using $$|z-a_{ii}|\le {\sum_{j\ne i}}|a_{ji}|\;,$$ and obtain the maximum radius $$|z-1|\le |-2|+|-3|+|1|=6.$$ So the correct answer is $\bf (2)$, and all eigenvalues are included in $K=\left\{|z-1|\le 6 \right\}$.
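The row/column comparison is easy to automate. A NumPy sketch for this matrix, with a check that the actual eigenvalues do lie in the smaller disc:

```python
import numpy as np

A = np.array([[ 1, -2, 3, -2],
              [ 1,  1, 0,  3],
              [-1,  1, 1, -1],
              [ 0, -3, 1,  1]])

absA = np.abs(A)
row_radii = absA.sum(axis=1) - np.diag(absA)  # off-diagonal row sums
col_radii = absA.sum(axis=0) - np.diag(absA)  # off-diagonal column sums

# All discs are centered at 1 here; take the better (smaller) of the two bounds.
radius = min(row_radii.max(), col_radii.max())
print(row_radii.max(), col_radii.max(), radius)  # 7 6 6

eigs = np.linalg.eigvals(A)
assert np.all(np.abs(eigs - 1) <= radius + 1e-9)
```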
Prove that $E(X) = \int_{0}^{a} (1-F_X(x))\,dx$
\begin{align} E(X) & = \int_0^\infty x f(x) \, dx \\ & = \int_0^\infty \int_0^x f(x) \, dy\, dx \\ & = \int_0^\infty \int_y^\infty f(x) \, dx\, dy \\ & = \int_0^\infty [1-F(y)] \, dy. \end{align}
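A numerical sanity check of the identity for a concrete bounded case, $X\sim\mathrm{Uniform}(0,a)$ with $a=2$ (so $E(X)=1$ and $1-F(x)=1-x/a$), using a plain midpoint Riemann sum:

```python
a = 2.0
n = 100000
dx = a / n

def survival(x):
    # 1 - F(x) for Uniform(0, a)
    return 1.0 - x / a

# midpoint Riemann sum of the survival function over [0, a]
integral = sum(survival((k + 0.5) * dx) for k in range(n)) * dx

expected = a / 2  # E(X) for Uniform(0, a)
print(integral)   # about 1.0
assert abs(integral - expected) < 1e-6
```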
Complex mapping, plotting such functions step by step on paper
Since you have a function, $u$ and $v$ are presumably given by explicit formulas in $x$ and $y$, e.g., $u = x^3 - 3xy^2$ and $v = 3x^2y - y^3$. For each real number $y_0$, the horizontal line $y = y_0$ maps to the image of the parametric curve $\gamma(x) = u(x, y_0) + iv(x, y_0)$; this curve can be plotted by calculating $\gamma(x)$ at several equally-spaced values of $x$ and connecting the resulting dots in order. Doing this for several equally-spaced values of $y_0$ gives a picture of the image of equally-spaced horizontal grid lines in the $z$ plane. Similarly, fixing $x = x_0$ and varying $y$ plots the images of vertical grid lines. The end result depicts the image in the $w$ plane of a rectangular grid in the $z$ plane. More algorithmically, suppose you want to draw the image under $f$ of the rectangle with corners $a_1 + ib_1$ and $a_2 + ib_2$ using an $n_1 \times n_2$ grid. Put $\Delta x = (a_2 - a_1)/n_1$ and $\Delta y = (b_2 - b_1)/n_2$. Plot the $(n_1 + 1)(n_2 + 1)$ points $p(k_1, k_2) := f\bigl((a_1 + k_1\, \Delta x) + i(b_1 + k_2\, \Delta y)\bigr)$ for integers $0 \leq k_1 \leq n_1$ and $0 \leq k_2 \leq n_2$, and connect them by joining $p(k_1, k_2)$ to $p(k_1 + 1, k_2)$ for $k_1 &lt; n_1$ (and all $k_2$), and by joining $p(k_1, k_2)$ to $p(k_1, k_2 + 1)$ for $k_2 &lt; n_2$ (and all $k_1$).
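The algorithm above, minus the actual drawing, in Python (here with the sample map $f(z)=z^2$; `segments` is the list of dot-connecting strokes you would hand to a plotting library):

```python
def grid_image(f, a1, b1, a2, b2, n1, n2):
    dx = (a2 - a1) / n1
    dy = (b2 - b1) / n2
    # p[k1][k2] = f((a1 + k1*dx) + i(b1 + k2*dy))
    p = [[f(complex(a1 + k1 * dx, b1 + k2 * dy)) for k2 in range(n2 + 1)]
         for k1 in range(n1 + 1)]
    segments = []
    for k1 in range(n1 + 1):
        for k2 in range(n2 + 1):
            if k1 < n1:
                segments.append((p[k1][k2], p[k1 + 1][k2]))  # image of a horizontal step
            if k2 < n2:
                segments.append((p[k1][k2], p[k1][k2 + 1]))  # image of a vertical step
    return p, segments

p, segments = grid_image(lambda z: z * z, -1.0, -1.0, 1.0, 1.0, 2, 2)
print(p[2][2])  # f(1 + 1j) = 2j
```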
Question about the Structure group of a circle bundle over a Riemann surface
This all comes down to the lift of the action of $PSL(2,R)$ on $RP^1$ to the 2-fold cover: The unique connected 2-fold cover of $PSL(2,R)$ is $SL(2,R)$; it acts naturally on the unit circle $S^1$, which is understood as the set of oriented lines in $R^2$. The circle $S^1$ is the unique 2-fold cover of $RP^1$ (defined by forgetting the orientation of the lines). Thus, the group $PSL(2,R)$ lifts to $SL(2,R)$ under the covering map $S^1\to RP^1$. The kernel consists of all $SL(2,R)$ matrices which send each line to itself, i.e. equals $\{\pm I\}$. In particular, the diagram (where the horizontal arrows are group actions and the vertical arrows are the covering maps and their products) $$ \begin{array}{ccc} SL(2,R)\times S^1 & \longrightarrow & S^1\\ \downarrow &~& \downarrow\\ PSL(2,R)\times RP^1 & \longrightarrow &RP^1 \end{array} $$ is commutative. The rest is done by applying this consideration to each fiber. Hence, the bundle which is the 2-fold cover Milnor is talking about is $$ (H\times S^1)/G $$ where $G$ acts on $S^1$ via its isomorphic lift from $PSL(2,R)$ to $SL(2,R)$. The confusion stems from the fact that $S^1$ is homeomorphic to $RP^1$, but in this situation they should be regarded as different entities.
Cayley's Theorem for Semigroups
In essence this matter is not particularly mysterious, but it is basic, so it is worth taking the trouble to understand well. The main point to keep in mind is that the self-maps of a set $W$, which as a set you may denote by $\mathrm{Hom}(W,W)=\mathrm{End}(W)$, can be composed, and this composition is naturally associative. If you restrict attention to the automorphisms of $W$ (the self-maps which are monic and epic, i.e. bijective), then the structure is the symmetric group $S_W$, and any group structure defined on $W$ can be naturally identified with a corresponding subgroup of $S_W$. For a semigroup you still have associativity, but the maps may fail to be injective or surjective, and you may or may not have an element which acts as an identity. Maybe try to restate your question in more specific terms.
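The Cayley-style construction for semigroups can be made concrete: adjoin an identity (to guarantee faithfulness) and send each element to the self-map "multiply on the right". A small Python sketch with a two-element right-zero semigroup ($xy = y$); all names are mine:

```python
def mul(x, y):
    # Right-zero semigroup on {'a','b'} (xy = y), with an adjoined identity 'e'
    if x == 'e':
        return y
    if y == 'e':
        return x
    return y

S1 = ['e', 'a', 'b']

def rep(s):
    # the self-map of S1 given by right multiplication by s
    return {x: mul(x, s) for x in S1}

def compose(g, f):
    # (g after f), as self-maps of S1
    return {x: g[f[x]] for x in S1}

# rep turns multiplication into composition: rep(st) = rep(t) o rep(s),
# by associativity: x(st) = (xs)t.
for s in S1:
    for t in S1:
        assert rep(mul(s, t)) == compose(rep(t), rep(s))

# ...and it is faithful, since rep(s)['e'] = s recovers s.
assert all(rep(s)['e'] == s for s in S1)
print("embedding verified")
```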
For what translation groups $G$ of $\mathbb C$ is $\mathbb C / G$ compact?
We may as well ask the more general question "For which subgroups $G \subset (\mathbb{R}^n, +)$ is $\mathbb{R}^n / G$ compact?". The answer is that it is compact if and only if the linear span of $G$ is all of $\mathbb{R}^n$.

If $G$ spans $\mathbb{R}^n$, it contains a basis, so we can find a linear isomorphism $f: \mathbb{R}^n \to \mathbb{R}^n$ such that $\mathbb{Z}^n \subset f[G]$. Then $\mathbb{R}^n/G \cong (\mathbb{R}^n/\mathbb{Z}^n) / (f[G]/\mathbb{Z}^n)$, which must be compact because $\mathbb{R}^n/\mathbb{Z}^n \cong \mathbb{T}^n$ is already compact.

If $G$ does not span $\mathbb{R}^n$, there is a linear isomorphism $h: \mathbb{R}^n \to \mathbb{R} \times\mathbb{R}^{n-1}$ such that $h[G] \subset \{0\} \times \mathbb{R}^{n-1}$. In that case we have $\mathbb{R}^n/G \cong \mathbb{R} \times \bigl((\{0\} \times \mathbb{R}^{n-1})/h[G]\bigr)$, which is not compact because $\mathbb{R}$ is not.

Since $\mathbb{C}$ is two-dimensional as a vector space over $\mathbb{R}$, this means that $\mathbb{C}/G$ is compact if and only if there are two elements of $G$ that are linearly independent over $\mathbb{R}$, as suggested in Blake's comments.
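For a finitely generated $G$, the criterion is mechanical to test: the span of the subgroup equals the span of its generators, so $\mathbb{R}^n/G$ is compact iff the matrix of generators has rank $n$. For $\mathbb{C}\cong\mathbb{R}^2$:

```python
import numpy as np

def quotient_is_compact(generators, n):
    # R^n / <generators> is compact iff the generators span R^n
    return np.linalg.matrix_rank(np.array(generators)) == n

# G generated by 1 and i in C = R^2: the quotient is the torus, compact.
assert quotient_is_compact([[1, 0], [0, 1]], 2)

# G generated by 1 and 2 (both real): the quotient is R x (R/Z), not compact.
assert not quotient_is_compact([[1, 0], [2, 0]], 2)

print("span criterion checks out")
```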
I cannot prove that $f \notin L^\infty(\mathbb{R}^n)$. $f$ is defined as follows:
In fact, Prop. If $g\in L^2(\Bbb R^d)$, $g\ge0$ and $\int g=\infty$ then $\hat g\notin L^\infty$. Hint: It follows from Plancherel that $$\int\widehat g(\xi)\frac {y}{(|\xi|^2+y^2)^{(d+1)/2}}\,d\xi=\int g(x)e^{-y|x|}\,dx.$$(There may be some constants missing there that I leave to you to get straight.) Let $y\to0$. (If you're not familiar with the Poisson kernel and/or its Fourier transform you could see here, or you could use something else like a Gaussian or a Schwartz function...)
Square Root Algorithm
Method 1

Newton's method converges quadratically. For $a \ge 0$, $x_0 \ne \sqrt{a}$: $$x_{k+1} = \dfrac{1}{2}\left(x_{k} +\dfrac{a}{x_{k}}\right)$$

Method 2

The same iterates satisfy the error recursion (which shows $x_k > \sqrt{a}$ for all $k > 0$): $$x^2_{k+1}-a = \left[\dfrac{x^2_{k} - a}{2x_k}\right]^2$$

Method 3

A third-order method, for $k \ge 0$: $$x_{k+1} = \dfrac{x_k(x_k^2 + 3a)}{3x^2_k+a}$$

Method 4

See Math World's Bhaskara-Brouncker algorithm.

Additional Methods (also see references)

Wiki: Methods of computing square roots
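A sketch of the quadratic and third-order iterations in Python, comparing step counts (the cubically convergent iteration used here is $x_{k+1}=x_k(x_k^2+3a)/(3x_k^2+a)$, which has $\sqrt a$ as its fixed point):

```python
import math

def newton_sqrt(a, x=1.0, tol=1e-14):
    # Method 1: quadratically convergent
    steps = 0
    while abs(x * x - a) > tol:
        x = 0.5 * (x + a / x)
        steps += 1
    return x, steps

def third_order_sqrt(a, x=1.0, tol=1e-14):
    # Method 3: cubically convergent
    steps = 0
    while abs(x * x - a) > tol:
        x = x * (x * x + 3 * a) / (3 * x * x + a)
        steps += 1
    return x, steps

xn, sn = newton_sqrt(2.0)
xh, sh = third_order_sqrt(2.0)
print(xn, sn, xh, sh)
assert abs(xn - math.sqrt(2)) < 1e-10 and abs(xh - math.sqrt(2)) < 1e-10
assert sh <= sn  # the third-order method needs no more steps
```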
Proving that $\int_{0}^{1}\ln^{2n}\left(\ln\left({1-\sqrt{1-x^2}\over x}\right)\over \ln\left({1+\sqrt{1-x^2}\over x}\right)\right)dx=(-\pi^2)^n$
Since the principal value of $\ln(-1)$ is $i\pi$: $$\int_0^{\pi/2}\ln^{2n}(-1)\cos u\,du$$ $$=\int_0^{\pi/2}(i\pi)^{2n}\cos u\,du$$ $$=(i\pi)^{2n}\int_0^{\pi/2}\cos u\,du$$ $$=(i\pi)^{2n}=(-\pi^2)^n$$ However, as the complex logarithm is multivalued, this answer is not well-defined (as mentioned by Chappers in the comments).
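Multivaluedness aside, the key observation, namely that the two inner logarithms are negatives of each other (the two fractions are reciprocals, since their product is $\frac{1-(1-x^2)}{x^2}=1$), is easy to confirm numerically, along with the principal value $\ln(-1)=i\pi$:

```python
import cmath, math

x = 0.5
s = math.sqrt(1 - x * x)

# The two fractions are reciprocals, so their logs are negatives of each other:
ratio = math.log((1 - s) / x) / math.log((1 + s) / x)
print(ratio)  # -1.0 up to rounding

# Hence the outer log of the ratio is log(-1) = i*pi on the principal branch,
# making the integrand the constant (i*pi)^(2n) = (-pi^2)^n.
assert abs(cmath.log(-1) - 1j * math.pi) < 1e-15
assert abs(ratio + 1) < 1e-12
```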
Goldbach's conjecture
There is a succinct and accessible account of the latest on Goldbach and the twin prime conjecture by Chris Linton in his October 2013 Editorial for Mathematics Today, published by the Institute of Mathematics and its Applications (IMA) here: http://www.ima.org.uk/_db/_documents/MT_Editorial_Oct13.pdf
Bounding the basins of attraction of Newton's method
Chee Yap's book "Fundamental Problems of Algorithmic Algebra", chapter 6, deals with this problem precisely. Yap cites Friedman's "On the convergence of Newton's method" and Smale's "The fundamental theorem of algebra and complexity theory" as references for the following result (Yap, Theorem 6.37): Let $f(X) \in \mathbb{Z}[X]$ be square-free with $m = \deg f$ and $M = 1 + \|f\|_{\infty}$. Then Newton iteration for $f(X)$ is guaranteed to converge to a root $X^*$ provided the initial approximation is at most $$\delta_0 = [m^{3m+9}(1+M)^{6m}]^{-1} $$ from $X^*$, where $\|f\|_{\infty}$ is the maximum absolute value of the coefficients of $f(X)$. Hope that helps.
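To get a feel for how conservative this bound is, here is the radius it yields for the sample polynomial $f(X)=X^2-2$ ($m=2$, $\|f\|_\infty=2$, so $M=3$), together with a check that Newton started within $\delta_0$ of $\sqrt 2$ does converge:

```python
from fractions import Fraction
import math

m, M = 2, 3  # deg f and 1 + max |coefficient| for f(X) = X^2 - 2
delta0 = Fraction(1, m ** (3 * m + 9) * (1 + M) ** (6 * m))
print(delta0)  # 1/549755813888, i.e. about 1.8e-12

# Newton iteration for f(X) = X^2 - 2, started within delta0 of the root
x = math.sqrt(2) + float(delta0) / 2
for _ in range(10):
    x = x - (x * x - 2) / (2 * x)
assert abs(x - math.sqrt(2)) < 1e-12
```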
Find a basis for a space of functions
More precisely, it is made up of the functions $f_i$ defined by \begin{align*} f_i\colon X &\longrightarrow \mathbf R\\ x_i&\longmapsto 1\\ x_j&\longmapsto 0\quad(j\ne i). \end{align*}
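In code: represent a function on a finite set $X$ as a dict of its values, and check that any $g$ expands as $g=\sum_i g(x_i)\,f_i$ in this basis (a sketch; all names are mine):

```python
X = ['x1', 'x2', 'x3']

def basis(i):
    # f_i: sends x_i to 1 and every other point to 0
    return {x: (1.0 if x == X[i] else 0.0) for x in X}

def expand(g):
    # reconstruct g from its coordinates g(x_i) in the basis (f_i)
    out = {x: 0.0 for x in X}
    for i in range(len(X)):
        fi = basis(i)
        for x in X:
            out[x] += g[X[i]] * fi[x]
    return out

g = {'x1': 2.5, 'x2': -1.0, 'x3': 0.0}
assert expand(g) == g  # the f_i span, with coordinates g(x_i)
print("basis check ok")
```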