Prove there are no integer solutions
For $N=2:$ The exponent of $2$ when factoring $2(n^2+m^2+nm)$ will be odd, while the exponent of $2$ when factoring $(k^2+l^2+kl)$ will be even. They cannot be equal. Still with $N=2$ fixed: we do not really need quadratic reciprocity for this. Indeed, $x^2 + xy + y^2$ is odd unless both $x,y$ are even. If both are even, we can divide both by $2$, with the result that we have divided $x^2 + xy + y^2$ by $4.$ Do this as many times as necessary until at least one of the variables is odd. The result is that the exponent of $2$ in the prime factorization of the original $x^2 + xy + y^2$ was even! There is a solution if and only if we can express $$ N = u^2 + uv + v^2 $$ in integers. This is the same as saying that, whenever a prime $q \equiv 2 \pmod 3$ divides $N,$ the exponent of $q$ in the prime factorization of $N$ is even. The same characterization applies to both $m^2 + mn + n^2$ and $k^2 + kl + l^2.$ In turn, this is the same as saying that $h(-3) = 1:$ there is only one equivalence class of (positive) binary quadratic forms of discriminant $-3.$
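This characterization is easy to cross-check numerically. Below is a small sketch (mine, not from the answer); `representable_brute` searches within the bound $|u|,|v|\le\sqrt{4N/3}$, which follows from $u^2+uv+v^2=(u+\tfrac v2)^2+\tfrac34 v^2$:

```python
def representable_brute(N):
    # exhaustive search for integers u, v with N = u^2 + u*v + v^2;
    # the form forces |u|, |v| <= sqrt(4N/3)
    bound = int((4 * N / 3) ** 0.5) + 1
    return any(u * u + u * v + v * v == N
               for u in range(-bound, bound + 1)
               for v in range(-bound, bound + 1))

def representable_criterion(N):
    # N = u^2 + uv + v^2 iff every prime q = 2 (mod 3) divides N to an even power
    n, p = N, 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            if p % 3 == 2 and e % 2 == 1:
                return False
        p += 1
    # a leftover n > 1 is prime and occurs with exponent 1
    return not (n > 1 and n % 3 == 2)

assert all(representable_brute(N) == representable_criterion(N)
           for N in range(1, 300))
```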
Where is the mistake while solving $\frac{|x-3|}{x^2-5x+6}\ge2$
Hint: for $x \ge 3$ we have $\frac{|x-3|}{x^2-5x+6}=\frac{1}{x-2}$ and for $x < 3$ and $x \ne 2$ we have $\frac{|x-3|}{x^2-5x+6}=-\frac{1}{x-2}.$
Determining number of isomorphism classes of indecomposable modules of length $k$
Here's how one could answer (b). In fact we'll prove the following (equivalent) statement: If $F$ is infinite, then the set of isomorphism classes of modules of finite length has cardinality at most $|F|$. Assume that $F$ is infinite, and let $M$ be a module of finite length over $A$. Then there exists an epimorphism $$ A^k \to M \to 0, $$ where $k$ is a non-negative integer. This is just a way of saying that there exists a finite set $\{m_1, \ldots, m_k\}$ which generates $M$ as an $A$-module. In fact, since $A$ is finite-dimensional over $F$, the existence of such an epimorphism is equivalent to the requirement that $M$ is of finite length. This characterization of finite-length modules will give us the upper bound we are looking for. Indeed, it implies that the set of isomorphism classes of finite length modules has cardinality bounded by that of the set of quotient modules of free modules $A^k$, with $k\geq 0$. Now, fixing $k\geq 0$, a quotient of $A^k$ has the form $A^k/L$ for some $A$-submodule $L$ of $A^k$. Thus quotients of $A^k$ are in bijection with submodules of $A^k$. Any $A$-submodule of $A^k$ is in particular an $F$-subvector space of $A^k$ (viewed as a vector space). The set of subvector spaces of $A^k$ of a given dimension $d$ is a projective variety called the Grassmannian $Gr(d,A^k)$. In particular, since $F$ is infinite, $|Gr(d,A^k)|$ is at most $|F|$ (see below for another argument). Therefore, the cardinality of the set of isomorphism classes of $A$-modules of finite length is bounded above by the cardinality of the union of all $Gr(d,A^k)$, where $d$ and $k$ are non-negative integers. Since each $Gr(d,A^k)$ has cardinality bounded above by $|F|$, their (countable) union over all $d$ and $k$ will have cardinality bounded by $|F|$ as well. This finishes the proof. Here's an argument for the fact that $|Gr(d,A^k)|$ is at most $|F|$ when $F$ is infinite. First, if $d=0$, then $|Gr(d,A^k)|=1$, and if $d > \dim A^k$, then $|Gr(d,A^k)|=0$. 
Assume that $d>0$ and $d\leq \dim A^k$. Any subvector space of dimension $d$ of $A^k$ is given by a basis. Such a basis can be expressed as a $(\dim A^k \times d)$ matrix whose columns are linearly independent (each column is a basis vector, all expressed in some fixed basis). The cardinality of the set of such $(\dim A^k \times d)$ matrices is $|F|$. Thus $|Gr(d,A^k)|$ is bounded above by $|F|$. Similar ideas will prove (a): if $M$ is an $A$-module of length $k$, then there exists an epimorphism $$ A^k \to M \to 0 $$ (note that the $k$ in $A^k$ is the length of $M$). Thus the number of isomorphism classes of $A$-modules of length $k$ is bounded above by the number of subvector spaces of $A^k$, which is finite.
homomorphism from group G to group H
I. $K$ is a normal subgroup of $G$. III. $G/K$ is isomorphic to a subgroup of $H$.
Group theory commutator and solvable groups
Assume that $G$ is solvable and let $G=G_1 > G_2 > \cdots > G_n > G_{n+1}=1$ be the derived series of $G$. There is a unique $k$ with $1 \le k \le n$ such that $a \in G_k \setminus G_{k+1}$. If $b$ is conjugate to $a$ in $G$, then we must also have $b \in G_k \setminus G_{k+1}$ (the terms of the derived series are characteristic, hence preserved by conjugation). But then $[a,b] \in [G_k,G_k] = G_{k+1}$, so $[a,b]$ cannot be conjugate to $a$. So the answer to the question is no.
Jordan Canonical Forms: Different Approaches
Check Halmos's classic "Finite-Dimensional Vector Spaces" (1958, or the 1974 reprint), section 58. It may be phrased differently from your approach, but it is essentially the same method.
Proof that the product of two differentiable functions is also differentiable
You can prove a lemma which says that differentiable implies continuous in your context. Then the $\phi(x)$ terms naturally factor out in view of the identity $\lim_{x \rightarrow c} f(x) = f(c)$. Of course, you also need to use the differentiability of $\phi$, but I gather you are aware of this, as you used the analog for $f$ already in the first half. Moreover, I would encourage you to also check that your proposed derivative is linear. Usually, the definition requires a linear function which satisfies the Fréchet quotient. Nice work thus far.
About Cohen's proof for Goldbach's conjecture
The typesetting is quite poor but they seem to write the following in step 2 of Theorem 1.1.: Given that A and B are odd, Let us assume that both A and B are composite numbers. That is, A and B can be divided only by odd numbers. It means that {∀* : A ≠2* } and {∀& : B ≠2$^\&$ }. Therfore {∀* ∀& : A + B ≠2*+2$^\&$ }, and it follows that {∀* ∀& ∀y : 2*+2$^\&$≠ 2$^y$ }. Contradiction! What they seem to claim is that if $A$ and $B$ are not powers of $2$, then the sum $A+B$ cannot be written as two powers of $2$. This claim is, of course, not true. Consider $A = B = 9$ as an example. Neither is a power of $2$ but $$9 + 9 = 2^1 + 2^4.$$ It could also be the case that they are using $2^*$ and $2^\&$ to denote multiplication. In that case, the claim is even more easily proven wrong because then they seem to say that the sum of two odd numbers cannot equal the sum of two even numbers. That is, once again, incorrect. $$2 + 8 = 3 + 7.$$ In fact, a clearer way of seeing that the step is wrong is by noting that they never use the assumption that $A$ and $B$ are composite. They already started off with the knowledge that $A$ and $B$ are odd and then tried to use compositeness to conclude that they can only be divided by odd factors. Of course, that was true to begin with. As MoonKnight pointed out in the comments, the author assumes $A+B = 2^Y$ for some $Y$. A counterexample is $A = 39$ and $B = 25$.
How fast does a matrix expression go to zero?
I would use eigenvectors and eigenvalues. These are pairs $v$ and $\lambda$ that fulfill $Av=\lambda v$, with $A$ being your matrix. Since you have a $2\times2$ matrix, I assume you will end up with a complex pair of values $\lambda_{1,2}$. With this, you can show that $\left\|\begin{bmatrix} X_n \\Y_n\end{bmatrix}\right\| \leq |\lambda|\left\|\begin{bmatrix}X_{n-1} \\ Y_{n-1} \end{bmatrix}\right\|.$ You will get geometric convergence to zero for both entries and therefore for the sum.
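As an illustration (the question's matrix is not reproduced in the answer, so the matrix below is a made-up example of mine with a complex eigenvalue pair inside the unit circle):

```python
import cmath

# hypothetical 2x2 matrix; it is a scaled rotation, so its eigenvalues
# form a complex conjugate pair
a11, a12, a21, a22 = 0.5, -0.4, 0.4, 0.5

# eigenvalues from trace and determinant: lambda = (tr +/- sqrt(tr^2 - 4 det)) / 2
tr = a11 + a22
det = a11 * a22 - a12 * a21
disc = cmath.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
rho = max(abs(lam1), abs(lam2))       # spectral radius, here < 1

# iterate (X_n, Y_n)^T = A (X_{n-1}, Y_{n-1})^T and record the Euclidean norm
x, y = 1.0, 1.0
norms = []
for _ in range(30):
    x, y = a11 * x + a12 * y, a21 * x + a22 * y
    norms.append((x * x + y * y) ** 0.5)

assert rho < 1                        # contraction
assert norms[-1] < 1e-3 * norms[0]    # geometric convergence to zero
```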
Materials for self-study (problems and answers)
Schaum's Outlines. And forums like this one.
Calculating the Elo rating if x beats y z % of the time
The solution is as follows \begin{align}e &= \frac{1} {1 + 10^{(b-a)/400}}\\ e\left(1+10^{(b-a)/400}\right)&=1\tag{1}\\ 1+10^{(b-a)/400}&=\frac 1e\tag{2}\\ 10^{(b-a)/400}&=\frac1e -1\tag{3}\\ \frac{b-a}{400}&=\log_{10}\left(\frac 1e -1\right)\tag{4}\\ b-a&=400\log_{10}\left(\frac 1e -1\right)\tag{5}\\ b&=400\log_{10}\left(\frac 1e -1\right)+a\tag{6}\end{align} where the steps are: (1) Multiply both sides by $1+10^{(b-a)/400}$. (2) Divide both sides by $e$. (3) Subtract $1$ from both sides. (4) Take the logarithm of both sides; if we have $x=10^y$ then we can say that $\log_{10}(x)=y$. (5) Multiply both sides by $400$. (6) Add $a$ to both sides. You can then use Wolfram|Alpha to solve the equation by filling in your values for $a$ and $e$ to find that $$b=-275.452$$
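The final formula is easy to implement directly; the values $a=1000$ and $e=0.75$ below are illustrative, not the asker's:

```python
import math

def elo_rating_b(a, e):
    # invert e = 1 / (1 + 10**((b - a) / 400)) for b, following steps (1)-(6)
    return 400 * math.log10(1 / e - 1) + a

# hypothetical inputs: player rated a = 1000 should score e = 75% against b
a, e = 1000.0, 0.75
b = elo_rating_b(a, e)

# round trip: plugging b back into the expected-score formula recovers e
e_back = 1 / (1 + 10 ** ((b - a) / 400))
assert abs(e_back - e) < 1e-12
```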
Can the cube of every perfect number be written as the sum of three cubes?
As pointed out by E. Schmidt, the sequence A023042 shows that a large percentage of cubes $N^3$ are a sum of three positive cubes. OEIS lists only $N<1770$, but we can extend that: $$\begin{array}{|c|c|} \hline N&\text{%}\\ \hline 2000&85.8\text{%}\\ 4000&89.8\text{%}\\ 6000&92.1\text{%}\\ 8000&93.3\text{%}\\ 10000&94.2\text{%}\\ \hline \end{array}$$ This means that $94.2\text{%}$ of all $N<10000$ have a solution to $a^3+b^3+c^3=N^3$ in positive integers. Note that $N=10000$ is still small. Extrapolating the table, it can be seen that the percentage may easily reach $99\text{%}$ if we go into the millions. Thus, if we pick a random $N$ in the high end of the range, there is a very good chance that there is an $a,b,c$. For the next perfect number $N=8128$, it is just mere statistics that suggests $N^3$ will be the sum of three positive cubes, and not because it is perfect. In fact, like $496$, it is in several ways, $$2979^3 + 4005^3 + 7642^3 = 8128^3$$ $$2^6(102^3 + 673^3 + 2007^3) = 8128^3$$ $$2^9(197^3 + 198^3 + 1011^3) = 8128^3$$ And it was almost certain for the next perfect number which is in the millions, $$2^{27}(3042^3 + 56979^3 + 45845^3) = 33550336^3$$ $$2^{30}(821^3 + 32590^3 + 8227^3) = 33550336^3$$ $$2^{36}(4543^3 + 6860^3 + 5104^3) = 33550336^3$$ Both can be expressed in many more ways than this, and I have only chosen a sample. For the cube of the next perfect number, or $137438691328^3$, chances are even greater that it is a sum of three positive cubes in many ways as well. Update: Yes, it is: $$2^{54}(425664^3 + 358719^3 + 275140^3)= 137438691328^3$$ $$2^{54}(432204^3 + 386604^3 + 177535^3)= 137438691328^3$$ Note: Jarek Wroblewski has calculated $a^3+b^3+c^3 = d^3$ with $\color{brown}{\text{co-prime}}$ $a,b,c$, and $d<1000000$ in his website. Using his database and some help with Mathematica and Excel, I came up with the table above which counts all $N$, regardless of whether $a,b,c$ is co-prime or not. 
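The decompositions quoted above for $8128^3$ can be verified with exact integer arithmetic; note that $2^6=4^3$ and $2^9=8^3$ are themselves cubes, so the scaled identities are genuine sums of three positive cubes:

```python
# first decomposition, exactly as quoted
assert 2979**3 + 4005**3 + 7642**3 == 8128**3

# scaled decompositions: 2^6 = 4^3 and 2^9 = 8^3 can be absorbed into the bases
assert 2**6 * (102**3 + 673**3 + 2007**3) == 8128**3
assert 2**9 * (197**3 + 198**3 + 1011**3) == 8128**3
assert (4 * 102)**3 + (4 * 673)**3 + (4 * 2007)**3 == 8128**3
```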
P.S: An interesting question, I believe, is: "Are there infinitely many $N^3$ (especially for prime $N$) that cannot be expressed as a sum of three positive cubes?" For example, there are no positive integers, $$a^3+b^3+c^3 = 999959^3$$ even though the percentage of $N<1000000$ with solutions should be close to $99\text{%}$.
what is the value of the following sum?
$$ \sum_{k \ge 0} \binom {n}{2k} (\sqrt{5})^{2k} + \sum_{k \ge 0} \binom {n}{2k+1} (\sqrt{5})^{2k+1} = (1+\sqrt{5})^n$$ $$ \sum_{k \ge 0} \binom {n}{2k} (\sqrt{5})^{2k} - \sum_{k \ge 0} \binom {n}{2k+1} (\sqrt{5})^{2k+1} = (1-\sqrt{5})^n$$
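Adding the two identities gives $2\sum_{k}\binom n{2k}5^k=(1+\sqrt5)^n+(1-\sqrt5)^n$. Since $1\pm\sqrt5$ are the roots of $x^2-2x-4=0$, the quantity $s_n=(1+\sqrt5)^n+(1-\sqrt5)^n$ satisfies the integer recurrence $s_n=2s_{n-1}+4s_{n-2}$, which allows an exact check (a sketch of mine):

```python
from math import comb

def even_sum(n):
    # sum_{k >= 0} C(n, 2k) * 5^k, i.e. the even-index sum with (sqrt 5)^(2k) = 5^k
    return sum(comb(n, 2 * k) * 5 ** k for k in range(n // 2 + 1))

def s(n):
    # s_n = (1 + sqrt 5)^n + (1 - sqrt 5)^n via s_n = 2 s_{n-1} + 4 s_{n-2}
    a, b = 2, 2  # s_0 = 2, s_1 = 2
    for _ in range(n - 1):
        a, b = b, 2 * b + 4 * a
    return a if n == 0 else b

assert all(2 * even_sum(n) == s(n) for n in range(20))
```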
What the boundary of the disc in Poincaré disk model exactly is by a geometric point of view?
The boundary of the Poincaré disk is most definitely not a topological object. It is inherently a geometric object, and it is described here. Here is a picture of a single equivalence class of rays in the Poincaré disc model, i.e. all lines in the Poincaré disc sharing a common point at infinity.
Number of realizations of particular triad type
If you have, for example, a large random Bernoulli digraph $G$ and are looking for the number of "copies" of a subgraph $H$, we can write something along the lines of: "...the number of induced subgraphs of $G$ isomorphic to $H$ is...". This is the typical definition in the network motif literature (which seems to be the modern spin on dyads, triads, etc.). Variations on this theme are not unheard of (e.g. to account for the fact that the above definition counts overlapping "copies" of $H$ separately). Note the word "induced", which is frequently (and, from a graph theory perspective, erroneously) omitted from many publications in this area. For example, the subgraph labelled "003" would occur exactly ${n \choose 3}$ times as a subgraph in any $n$-node network, whereas it would likely occur fewer times as an induced subgraph. Note: it's also fairly normal (at least in graph theory) to call these random graphs Erdős-Rényi random graphs, since they are generated in the same spirit as the undirected model. This terminology is used, for example, in: B. Bollobás, O. Riordan, Mathematical results on scale-free random graphs, in Handbook of graphs and networks, 2002. Here's the default reference when using Erdős-Rényi random graphs: P. Erdős, A. Rényi, On the evolution of random graphs, Publ. Math. Inst. Hung. Acad. Sci., Vol. 5 (1960), pp. 17-61.
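A small pure-Python illustration of the induced vs. non-induced count for the triad "003" (the graph size and arc probability below are arbitrary choices of mine):

```python
import random
from itertools import combinations
from math import comb

random.seed(1)
n, p = 12, 0.3
# Bernoulli digraph: each ordered pair (u, v), u != v, is an arc with probability p
arcs = {(u, v) for u in range(n) for v in range(n)
        if u != v and random.random() < p}

# triad "003" (no arcs among the three nodes) occurs C(n, 3) times as a plain
# subgraph, but as an *induced* subgraph only in arc-free triples
induced_003 = sum(
    1 for trio in combinations(range(n), 3)
    if not any((u, v) in arcs for u in trio for v in trio if u != v)
)

assert 0 <= induced_003 <= comb(n, 3)
```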
Sum of reciprocals of the triangle numbers
Let $r=2$: we can see that $$\frac{2n-2}n=2-\frac2n<2.$$ Moreover, as $n\to\infty$, the limit is $2$, so $2$ is the least rational number satisfying the inequality.
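With $T_k=\frac{k(k+1)}2$, the partial sums can be checked exactly; the $\frac{2n-2}n$ above is the telescoping sum of the first $n-1$ reciprocals (a sketch of mine):

```python
from fractions import Fraction

def partial_sum(n):
    # sum of 1/T_k for k = 1..n, where T_k = k(k+1)/2 is the k-th triangle number
    return sum(Fraction(2, k * (k + 1)) for k in range(1, n + 1))

# telescoping: 2/(k(k+1)) = 2/k - 2/(k+1), so the n-term partial sum is 2 - 2/(n+1)
for n in (1, 5, 50, 500):
    assert partial_sum(n) == 2 - Fraction(2, n + 1)

# every partial sum stays below 2, and the limit is exactly 2
assert partial_sum(1000) < 2
```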
Radical ideal of $\langle x+y+z, -xy+yz+z^2 \rangle$
I believe that it can be done without the assumption that $k$ is algebraically closed. Suppose that $p$ is a prime ideal containing $I$. Your computation shows that $$ (x+y+z, -xy + yz + z^2, x^2) \subset p. $$ Using that $p$ is prime, this inclusion implies that $$ (x+y+z, -xy + yz + z^2, x) = (y+z, yz + z^2, x) = (y+z, z(y+z), x) = (y+z, x) $$ is also in $p$. Since $(y+z,x)$ is prime, it is the radical of $I$.
Continuous extension of differential operator to sobolev spaces
Luckily, we can extend the bounded operator $T$ on smooth sections to a bounded operator on Sobolev spaces, given that $D(T)$ is dense in $W^s(X,E)$. So it suffices to show that $T$ is bounded on the normed space of smooth sections. Now it is easy: use a partition of unity; since the cover is finite, we reduce to the $\mathbb{R}^n$ case.
convergence of autocorrelation function and existence of Fourier transform
There is a fundamental mistake in both answers. It is not necessary to assume the autocovariance function is integrable. For a non-ergodic stationary process, it will not be integrable. The reason why Wiener gets credit for this theorem, instead of physicists like Schuster or Einstein, is that he was able to rigorously make sense of its Fourier transform anyway, in a new way, which he called «Generalised Harmonic Analysis», instead of the usual notion of the Fourier transform as given by the integral you write down. (In fact, he even anticipated Laurent Schwartz's notion of a distribution in his work on this.) So the Wiener-Khintchine theorem states that as long as the original process $f$ is stationary and has an auto-covariance function at all, then in this new sense of Fourier transform (which works even when the Dirichlet conditions are not satisfied), the power spectral density function (which can have infinities since it is the derivative of a function which is not differentiable, and so only makes rigorous sense as a distribution) is the Fourier transform of the auto-covariance function. About power and finite-energy signals: if the signal has finite energy, the power is zero, as follows from your formulas below. But only a transient signal can have finite energy. The probability of sampling a transient signal from a stationary process is zero. Transient is the exact complete opposite of stationary. A simple unit square wave lasting one cycle only has finite energy, but zero power, and as you can easily calculate its sample auto-covariance function, you see it is zero. It has to be, since the Wiener-Khintchine theorem says the Fourier transform of the auto-covariance is the power spectral density, and we just saw the power is zero. Summarizing: finite energy (which means transient) ==> zero power ==> zero sample auto-covariance function.
The Nash equilibrium, an existence proof
If $p_i(s,\alpha)\leq 0$ for every $s$, then $\alpha'_{i} = \alpha_{i}$, thus we get equality (proving "if" but not "only if"). Suppose there exists an $\hat{s}$ where $p_i(\hat{s},\alpha)> 0$, then for any $s$ where $p_i(s,\alpha)\leq 0$ and $\alpha_{i}^{(s)} > 0$ we have $\alpha_{i}^{\prime(s)} < \alpha_{i}^{(s)}$, which implies: $$\sum_{s:p_{i}(s, \alpha)\leq0} \alpha_{i}^{\prime(s)} < \sum_{s:p_{i}(s, \alpha)\leq0} \alpha_{i}^{(s)}$$ (Note, there must exist such an $s$ because $\alpha_{i}$ cannot result in strictly lower payoff than all pure strategies in its support.) $$\Rightarrow \sum_{s:p_{i}(s, \alpha)>0} \alpha_{i}^{\prime(s)} > \sum_{s:p_{i}(s, \alpha)>0} \alpha_{i}^{(s)}$$ The last implication follows because $\sum\limits_{s}\alpha_{i}^{\prime(s)}=\sum\limits_{s}\alpha_{i}^{(s)}=1$. Hence, we can say equality is achieved if and only if $p_i(s,\alpha)\leq 0$ for every $s$.
What is the eigenvalue for characteristic polynomial $(1+\lambda^2)\lambda$?
You can factorize your characteristic polynomial as $$p(\lambda)=\lambda(\lambda+i)(\lambda-i)$$ So, the eigenvalues (roots of the polynomial) are $0,i,-i$
Matsumura Commutative Ring Theory 6.9 on coprimary modules of finite length.
Here's a proof that $\mathrm{Ass}(M) = \mathrm{Att}(M)$. First we show $\mathrm{Att}(M)= \{\sqrt{\mathrm{Ann}(M)}\}$. It's clear that any quotient of a secondary module is secondary, so we just need to show that if $N < M$ is a proper submodule, $\sqrt{\mathrm{Ann}(M/N)} = \sqrt{\mathrm{Ann}(M)}$. If $a \in \mathrm{Ann}(M/N)$, then $aM \subseteq N$, so multiplication by $a$ on $M$ isn't surjective, and thus is nilpotent. Thus $a \in \sqrt{\mathrm{Ann}(M)}$. Thus what we really need to show is that $\mathrm{Ass}(M) = \{\sqrt{\mathrm{Ann}(M)}\}$. We may assume wlog that $\mathrm{Ann}(M) = 0$ by replacing $A$ with $A/\mathrm{Ann}(M)$. Any associated prime of $M$ contains the annihilator, so this doesn't really change $\mathrm{Ass}(M)$, and it's clear that $M$ is still secondary and artinian/noetherian since it has the same submodules over this new $A$. Then $M$ is a faithful artinian module over $A$, and so $A$ is artinian. Thus every prime ideal of $A$ is maximal. Since $M$ is secondary, $\sqrt{\mathrm{Ann}(M)} = \sqrt{0}$ is prime, and thus maximal, and thus the only prime ideal of $A$. Thus any associated prime of $M$ is equal to $\sqrt{\mathrm{Ann}(M)}$. Since $M$ is coprimary, it has an associated prime, and thus $\mathrm{Ass}(M) = \{\sqrt{\mathrm{Ann}(M)}\}$.
Data Processing Inequality for sufficient statistic case
Basically, if you put some unrelated information in $Y$, then you'll have your inequality. For instance, suppose you get $Z$ by pushing $X$ through a channel. Let $W$ be independent of $X$ and of $Z$. Then take $Y = (Z,W)$. $X - Y - Z$ holds because $$ P(Z = z | Y = (z',w), X = x) = \delta_{z,z'} = P(Z = z| Y = (z',w)).$$ Also, $$ I(Y;X) = I(Z;X) + I(W;X|Z) = I(Z;X), $$ since $H(W|X,Z) = H(W|Z) = H(W)$ due to the independence. But now $H(Y) = H(Z) + H(W) > H(Z)$.
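Here is a toy numerical version of this construction (my own sketch: $X$ a fair bit, the channel taken to be the identity so $Z=X$, and $W$ an independent fair bit):

```python
from math import log2

# joint distribution p(x, z, w) with Z = X and W independent of (X, Z)
p = {(x, x, w): 0.25 for x in (0, 1) for w in (0, 1)}

def marginal(coords):
    """Marginal distribution over the given coordinate positions."""
    m = {}
    for outcome, q in p.items():
        key = tuple(outcome[i] for i in coords)
        m[key] = m.get(key, 0.0) + q
    return m

def H(dist):
    """Shannon entropy in bits."""
    return -sum(q * log2(q) for q in dist.values() if q > 0)

def I(ca, cb):
    # mutual information I(A;B) = H(A) + H(B) - H(A,B)
    return H(marginal(ca)) + H(marginal(cb)) - H(marginal(ca + cb))

# coordinates: 0 -> X, 1 -> Z, 2 -> W;  Y = (Z, W) is coordinates (1, 2)
assert abs(I((0,), (1, 2)) - I((0,), (1,))) < 1e-12   # I(Y;X) = I(Z;X)
assert H(marginal((1, 2))) > H(marginal((1,)))        # H(Y) > H(Z)
```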
Make a proportion by cyclic angle
Okay, so I will add point $E$ as the intersection of diameter $AB$ and chord $CD$, and point $F$ as the intersection of $OC$ and $MD$. Since $CD$ and $AB$ are perpendicular, $CE=ED$ and $BC=BD$. Denote $\angle CAB = \alpha $ and $\angle DQB= \beta $. Then $\angle CAB = \angle DAB = \angle CDB= \angle DCB = \alpha $ (inscribed angles over the same or equal chords). $\angle COB = 2 \angle CAB = 2 \alpha $ and, because triangle $AOC$ is isosceles, $ \angle BAC = \angle ACO = \alpha $. Since $\angle DQB= \angle OQF = \beta$, angle $\angle MFC= \angle OFQ= 180-(2\alpha+\beta)$. Then $\angle MDO= \beta$ and $ \angle MDA= \beta - \alpha$ and $ \angle BCP=180-90-(\beta - \alpha)= 90+\alpha - \beta$. Law of sines on triangles $QAC$ and $QCB$: $\frac{QA}{QC}= \frac{\sin(\beta-\alpha)}{\sin\alpha}$ and $\frac{QB}{QC}= \frac{\sin(90-\beta+\alpha)}{\sin(90-\alpha)}= \frac{\cos(\beta-\alpha)}{\cos \alpha} $, from which it follows that $\frac{QA}{QB}=\frac{\tan(\beta-\alpha)}{\tan \alpha} $. Again the law of sines on triangles $PBD$ and $PAD$: $\frac{PA}{PD}= \frac{\sin(180-(\beta-\alpha))}{\sin \alpha}= \frac{\sin (\beta-\alpha)}{\sin \alpha}$ and $ \frac{PB}{PD}= \frac{\sin(90-(\beta-\alpha))}{\sin(90+\alpha)}= \frac{\cos(\beta-\alpha)}{\cos \alpha} $. It follows that $ \frac{PA}{PB}= \frac{\tan(\beta-\alpha)}{\tan \alpha}$, so $ \frac{PA}{PB}=\frac{QA}{QB}$.
free bounded lattice
The construction you describe does produce a bounded lattice, but it does not produce the bounded lattice freely generated by the poset. One way to define the bounded lattice freely generated by the poset ${\bf P}=\langle P; \leq \rangle$ is the bounded lattice $L({\bf P})$ with the presentation $$ \langle P\;|\;\{(a,a\wedge b)\;|\;a\leq b\}\rangle. $$ This is a presentation relative to the variety of bounded lattices. $L({\bf P})$ has the property that any order-preserving function $f\colon {\bf P}\to B$ from ${\bf P}$ to a bounded lattice $B$ extends uniquely to a bounded lattice homomorphism $\widehat{f}\colon L({\bf P})\to B$. As I claimed in the first paragraph, the construction in the question, when applied to a finite poset, produces a bounded lattice, but it is not free as a bounded lattice generated by ${\bf P}$. You can see this from the fact that when the construction is applied to a finite poset it always produces a finite bounded lattice, while the free bounded lattice generated by a poset is almost never finite. The specific poset in the question is called the $2$-crown by some folks. The construction in the question starts with a $2$-crown and constructs a $7$-element lattice. The free bounded lattice generated by a $2$-crown has size $8$. If we had started with two independent chains instead of a $2$-crown (say, $A<X$ and $B<Y$ with no other relations), then the construction of the question would produce a $9$-element bounded lattice, while the free bounded lattice generated by $2$ independent chains is infinite. (This latter fact is a theorem of Howard Rolf from 1958.) The main paper that is relevant to this question is R. A. Dean, Sublattices of free lattices, in Lattice Theory Amer. Math. Soc., Providence, Rhode Island, 1961, Proc. Symp. Pure Math. II, pp. 31–42, while a modern treatment can be found in the book Ralph Freese, Jaroslav Ježek and J. B. Nation, Free Lattices, Amer. Math. Soc. (Mathematical Surveys and Monographs 42), 1995.
How to solve this partial differential equation?
Set $$ u=\frac{x-y}{2},\ v=\frac{x+y}{2},\ F(u,v)=Z(u+v,v-u). $$ Then \begin{eqnarray} \frac{\partial Z}{\partial x}(x,y)&=&\frac{\partial F}{\partial u}(u,v)\cdot\frac{\partial u}{\partial x}+\frac{\partial F}{\partial v}(u,v)\cdot\frac{\partial v}{\partial x}\cr &=&\frac12\left(\frac{\partial F}{\partial u}(u,v)+\frac{\partial F}{\partial v}(u,v)\right)\cr \frac{\partial Z}{\partial y}(x,y)&=&\frac{\partial F}{\partial u}(u,v)\cdot\frac{\partial u}{\partial y}+\frac{\partial F}{\partial v}(u,v)\cdot\frac{\partial v}{\partial y}\cr &=&\frac12\left(\frac{\partial F}{\partial v}(u,v)-\frac{\partial F}{\partial u}(u,v)\right). \end{eqnarray} Now the PDE reads: $$ \frac12\frac{\partial }{\partial u}F^2(u,v)=4v^2+F^2(u,v). $$ After integration we get $$ \frac12\ln(4v^2+F^2(u,v))=u+\frac12 A(v), $$ where $A$ is an arbitrary function. It follows $$ F(u,v)=\pm \sqrt{e^{2u}e^{A(v)}-4v^2}. $$ Hence $$ Z(x,y)=F(\frac{x-y}{2},\frac{x+y}{2})=\pm\sqrt{f(x+y)e^{x-y}-(x+y)^2}, $$ where $f(t)=\exp(A(t/2))$.
Finding Principal Branch Value of Lambert W function
By the definition of the Lambert W function, the value $y = W_0\left(x \mathrm{e}^{x}\right)$ satisfies: $$ y \mathrm{e}^{y} = x \mathrm{e}^{x} $$ For each $x<-1$ there are two real solutions to this equation: one with $y\leqslant -1$ and another with $-1 < y < 0$. The solutions are $W_{-1}\left(x \mathrm{e}^{x}\right) = x$ and $W_0\left( x \mathrm{e}^{x} \right)$. The latter is a non-trivial function of $x$.
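Numerically this is easy to see; `scipy.special.lambertw` computes both branches, but a self-contained Newton iteration for the principal branch suffices (a sketch of mine; the function name and tolerance are arbitrary):

```python
from math import exp

def lambert_w0(t, tol=1e-14):
    # principal branch on (-1/e, 0): Newton's method on g(y) = y e^y - t from y = 0
    y = 0.0
    for _ in range(100):
        g = y * exp(y) - t
        if abs(g) < tol:
            break
        y -= g / (exp(y) * (1 + y))   # g'(y) = e^y (1 + y)
    return y

x = -2.0                 # any x < -1
t = x * exp(x)           # t = x e^x lies in (-1/e, 0)
y = lambert_w0(t)

# y solves y e^y = x e^x, but it is the branch value in (-1, 0), not x itself
assert abs(y * exp(y) - t) < 1e-10
assert -1 < y < 0 and y != x
```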
Solving a system of equations in Maple
If you're looking for a solution where the value of $x$ doesn't involve $a$, you can solve for $a$ and $x$ in terms of $b$ and $y$: solve(..., [a,x]); $$[[a = y-b, x = 2 y]]$$ and then rewrite the first equation to give $y$ in terms of $a$ and $b$: isolate(%[1][1],y);
Function field of an affine hypersurface
By assumption $k[x_1,\dotsc,x_n]/(f) \cong k[x_2,\dotsc,x_n][x_1]/(f)$ is an integral domain. It follows that $f$ is irreducible over $k[x_2,\dotsc,x_n]$, as a polynomial in $x_1$. By assumption it is not constant. Hence, Gauss' Lemma tells us that it stays irreducible over $k(x_2,\dotsc,x_n)$. This means that $k(x_2,\dotsc,x_n)[x_1]/(f)$ is a field. Since it contains $k[x_1,\dotsc,x_n]/(f)$ as a subring and is generated by it, it must be its field of fractions.
Change of Basis of Matrix: Two points of view
Let $A=(a_{ij})_{1\leqslant i,j\leqslant k}$ be the matrix such that the entries of its $i$th column are the coefficients of $e_i$ in the basis $\{b_1,b_2,\ldots,b_k\}$. Then$$(\gamma_1,\ldots,\gamma_k)=A.(\lambda_1,\ldots,\lambda_k).$$On the other hand, $y=A.x$. My guess is that it is in this sense that Halmos claims that the two things are the same. Note that $A$ is the change of basis matrix from $\{b_1,b_2,\ldots,b_k\}$ to $\{e_1,e_2,\ldots,e_k\}$. As far as I am concerned, I had never thought about the second point of view that you mentioned.
How do you evaluate this integral $I =\int_{0}^{1}x^{m-n}(x^2-1)^n dx$
With substitution $x^2=t$ : \begin{align} I &= \int_{0}^{1}x^{m-n}(x^2-1)^n dx \\ &= \dfrac12(-1)^n\int_{0}^{1}t^{\frac12(m-n-1)}(1-t)^n dt \\ &= \dfrac12(-1)^n\beta\left(\dfrac{m-n+1}{2},n+1\right) \\ &= \dfrac12(-1)^n\dfrac{\Gamma\left(\dfrac{m-n+1}{2}\right)\Gamma(n+1)}{\Gamma\left(\dfrac{m+n+3}{2}\right)} \end{align} where $\beta(x,y)$ is the Beta function.
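The closed form can be spot-checked numerically (a sketch of mine; for $m=5$, $n=2$ the integrand expands to $x^7-2x^5+x^3$, so the integral is $\frac18-\frac13+\frac14=\frac1{24}$):

```python
from math import gamma

def closed_form(m, n):
    # (1/2) (-1)^n Gamma((m-n+1)/2) Gamma(n+1) / Gamma((m+n+3)/2)
    return 0.5 * (-1) ** n * gamma((m - n + 1) / 2) * gamma(n + 1) \
        / gamma((m + n + 3) / 2)

def midpoint_quadrature(m, n, steps=20000):
    # midpoint rule for I = int_0^1 x^(m-n) (x^2 - 1)^n dx
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += x ** (m - n) * (x * x - 1) ** n
    return total * h

assert abs(closed_form(5, 2) - 1 / 24) < 1e-12
assert abs(midpoint_quadrature(5, 2) - closed_form(5, 2)) < 1e-6
```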
Prove that $\cos x=1-\frac{x^2}{2!}+\frac{x^4}{4!}-\frac{x^6}{6!}+R_8(x)$ where $|R_8(x)|\leq \frac{x^8}{8!}$
The formula you have is the Taylor polynomial of order $7$; use the Lagrange error term and that $|\cos(x)|\leq 1$ for all $x$. The second derivative of $\cos(x)$ is $-\cos(x)$, so the eighth derivative is $\cos(x)$.
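A quick numerical check of the resulting bound (my sketch; $P_7$ is the degree-7 Taylor polynomial from the statement, whose odd-degree terms vanish):

```python
from math import cos, factorial

def p7(x):
    # degree-7 Taylor polynomial of cos at 0 (the odd terms are zero)
    return 1 - x**2 / factorial(2) + x**4 / factorial(4) - x**6 / factorial(6)

# Lagrange form: R_8(x) = cos^{(8)}(c) x^8 / 8! for some c, and |cos| <= 1
for x in (0.1, 0.5, 1.0, 2.0):
    assert abs(cos(x) - p7(x)) <= x ** 8 / factorial(8) + 1e-15
```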
product formula for the sum of cosine and sine
I think there is typo in Conway's book. It should be $$\cos\left(\frac{\pi z}{4}\right)-\sin\left(\frac{\pi z}{4}\right)= \prod_{n=1}^{\infty}\left(1+\frac{(-1)^n z}{2n-1}\right).$$ Note that $$\cos\left(\frac{\pi z}{4}\right)-\sin\left(\frac{\pi z}{4}\right)=\frac{\sin\left(\frac{\pi (z-1)}{4}\right)}{\sin\left(-\frac{\pi}{4}\right)}$$ Then by using $$\sin(\pi z) = \pi z \prod_{n\neq 0} \left(1-\frac{z}{n}\right)e^{z/n},$$ we obtain \begin{align*} \cos\left(\frac{\pi z}{4}\right)-\sin\left(\frac{\pi z}{4}\right) &=\frac{\frac{\pi (z-1)}{4} \prod_{n\neq 0} \left(1-\frac{(z-1)}{4n}\right)e^{(z-1)/(4n)}}{\frac{\pi (-1)}{4} \prod_{n\neq 0} \left(1-\frac{(-1)}{4n}\right)e^{(-1)/(4n)}}\\ &=(1-z) \prod_{n\neq 0} \left(\frac{4n-(z-1)}{4n+1}\right)e^{z/(4n)}\\ &=(1-z) \prod_{n\neq 0} \left(1-\frac{z}{4n+1}\right)e^{z/(4n)}\\ &=(1-z)\prod_{n\geq 1} \left(1-\frac{z}{4n+1}\right)e^{z/(4n)}\left(1-\frac{z}{-4n+1}\right)e^{-z/(4n)}\\ &=(1-z)\prod_{n\geq 1} \left(1-\frac{z}{4n+1}\right)\left(1+\frac{z}{4n-1}\right) \end{align*} which is equal to \begin{align*} \prod_{n=1}^{\infty}\left(1+\frac{(-1)^n z}{2n-1}\right) &=\prod_{k\geq 1}\left(1+\frac{(-1)^{2k-1} z}{2(2k-1)-1}\right)\prod_{k\geq 1}\left(1+\frac{(-1)^{2k} z}{2(2k)-1}\right) \\ &=\prod_{k\geq 1}\left(1-\frac{z}{4k-3}\right)\prod_{k\geq 1}\left(1+\frac{z}{4k-1}\right)\\ &=(1-z)\prod_{k\geq 2}\left(1-\frac{z}{4(k-1)+1}\right)\prod_{k\geq 1}\left(1+\frac{z}{4k-1}\right)\\ &=(1-z)\prod_{k\geq 1} \left(1-\frac{z}{4k+1}\right)\left(1+\frac{z}{4k-1}\right). \end{align*}
Hodge dual on orthonormal basis: two inconsistent answers
Actually, if $\Lambda$ is a boost operator, they are equal. $\Lambda_1^1 = \Lambda_2^2$ and $\Lambda_1^2 = \Lambda_2^1$. You know that $\eta^{11} = - \eta^{22}$ also. These simplifications make it clear that the two results, while appearing different, are actually the same for the kind of linear operator used here. Edit: I answered for the (1,1) signature case. In the (2,0) or (0,2) cases, the off-diagonal components of the $\Lambda$ matrix are no longer equal, but the diagonal terms of the $\eta$ matrix are, and this accomplishes the same result.
An inequality for positive definite matrix with trace 1
In order to investigate the behavior of the function $f$, we first consider the following auxiliary problem. Let $0<x,y,a$ and $x+y+a=1$. If $a$ is fixed then $$h(x,y)=\frac{(1-x)^2(1-y)^2}{xy}=\frac{(a+xy)^2}{xy}=\frac{a^2}{xy}+2a+xy.$$ The function $\frac{a^2}{t}+t$ has derivative $-\frac{a^2}{t^2}+1$, so it decreases when $0<t<a$ and increases when $a<t<1$. By the AM-GM inequality, $xy\le \frac{(1-a)^2}4$, and equality is attained iff $x=y$. In particular, if $\frac{(1-a)^2}4\le a$ then $h(x,y)$ attains its minimum at $(x,y)$ iff $x=y$. It is easy to check that $\frac{(1-a)^2}4\le a$ iff $$a\ge 3-2\sqrt{2}=0.1715\dots.$$ Consider the function $f(x_1,\dots, x_n)$. Fixing the smallest $x_l$, we restrict the domain of $f$ to a compact set $C$ consisting of $x$ such that $x_j\ge x_l$ for each $j$ and $\sum x_j=1$. Since $f$ is continuous on $C$, it attains its minimum at some point $x$. Let $x_i$ be the largest coordinate of $x$. Then $$x_j+x_k\le \frac 23(x_i+x_j+x_k)\le \frac 23<1-(3-2\sqrt{2})$$ for each remaining distinct $j$ and $k$, so the minimality of $f(x)$ on $C$ and the properties of the function $h$ imply that $x_j=x_k=t$ for each remaining $j$ and $k$. Then $x_i=1-(n-1)t$ and $$f(x)=r(t)=\frac{((n-1)t)^2}{1-(n-1)t}\left(\frac{(1-t)^2}{t}\right)^{n-1}=\frac{(n-1)^2(1-t)^{2n-2}}{(1-(n-1)t)t^{n-3}}.$$ If $t$ tends to $0$ or to $\frac 1{n-1}$ then $r(t)$ tends to infinity. So $r(t)$ attains its minimum at some point $s$ of a compact subset of the interval $(0, \frac 1{n-1})$. Then $r'(s)=0$. This easily implies that $$\left((1-s)^{2n-2}\right)' (1-(n-1)s)s^{n-3}=(1-s)^{2n-2}\left((1-(n-1)s)s^{n-3}\right)'$$ $$(2-2n)(1-(n-1)s)s^{n-3}=(1-s)\left((1-(n-1)s)s^{n-3}\right)'$$ The following cases are possible: 1) $n=3$. Then $-4(1-2s)=(1-s)\left(1-2s\right)'$ and $s=\frac 13$. 2) $n>3$. 
Then $$(2-2n)(1-(n-1)s)s^{n-3}=(1-s)\left((n-3)s^{n-4}-(n-1)(n-2)s^{n-3}\right)$$ $$(2-2n)(1-(n-1)s)s=(1-s)\left((n-3)-(n-1)(n-2)s\right)$$ $$s^2(n^2-n)+s(n^2-4n+1)-n+3=0$$ $$s=\frac 1n\mbox{ or }s=-\frac {n-3}{n-1}<0.$$ Thus in any case $s=\frac 1n.$ Then all $x_j$ are equal to $\frac 1n$ and $f(x)=\frac{(n - 1)^{2n}}{n^n}$.
Existence of Positive Solutions to Constrained Linear Elliptic Second-Order PDE
This is a long comment, not an answer. I just want to mention a technicality that suggests your problem might need to be rephrased. You will want to assume that the system itself is compatible. Let me illustrate with $n=1$ dimensions. If $n=1$, we can write your system as \begin{align} u''+\Gamma(x)u'+\alpha(x)u=0,\\ u'+\beta(x)u=0, \end{align} where, using the low dimensional structure, we set some coefficients equal to 1. The problem then becomes: if $u|_{\partial D}>0\implies u|_D>0$, does this imply anything about $\Gamma,\alpha,\beta$? Here's my observation: if no positive solutions exist, then the answer must be negative. So the problem is only interesting in the case when the system is compatible. In higher dimensions, the precise conditions are unclear to me, but for $n=1$ it is clear. To test for compatibility, we differentiate the second equation and substitute the $u'$ terms: \begin{align} 0=u''+\beta u'+\beta' u=u''+(\beta'-\beta^2)u. \end{align} Let us also eliminate $u'$ from the "main" equation: \begin{align} u''+(\alpha-\Gamma\beta)u=0. \end{align} Solving both equations for $u''$, we get \begin{align} [\alpha-\Gamma\beta-(\beta'-\beta^2)]u=0. \end{align} For nontrivial solutions, we thus need \begin{align} \alpha=\Gamma\beta+\beta'-\beta^2. \end{align} So before even addressing your question about positive solutions, we need conditions on $\alpha,\beta,\Gamma$.
Show $\int_a^b \lambda f(x)dx = \lambda\int_a^b f(x)dx$
It depends how you define $\int_a^bg(x)dx$, but I'll sketch a technique with one definition, as $\lim_{n\to\infty}\frac{b-a}n\sum_{k=0}^{n-1}g(a+k(b-a)/n)$, that can be adapted for others. Note that$$\frac{b-a}n\sum_{k=0}^{n-1}\lambda f(a+k(b-a)/n)=\lambda\frac{b-a}n\sum_{k=0}^{n-1}f(a+k(b-a)/n)$$so, taking the limit,$$\int_a^b\lambda f\,dx=\lambda\int_a^bf\,dx.$$
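Numerically, the identity already holds for each partial sum before any limit is taken (a sketch of mine with an arbitrary test function):

```python
def riemann(g, a, b, n=100000):
    # left-endpoint Riemann sum ((b - a)/n) * sum_k g(a + k (b - a)/n)
    h = (b - a) / n
    return sum(g(a + k * h) for k in range(n)) * h

lam, a, b = 3.0, 0.0, 1.0
f = lambda t: t * t

# linearity of each Riemann sum carries over to the limit
lhs = riemann(lambda t: lam * f(t), a, b)
rhs = lam * riemann(f, a, b)
assert abs(lhs - rhs) < 1e-9
```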
EDITED: Find the derivative of $f(x)=a^x$, using the definition of the derivative.
There are some subtleties here.

How is $a^x$ defined, for a generic $a>0$ and some $x\in\mathbb{R}$? The most common ways are to define $a^x$ directly as $\exp\left(x\log a\right)$, or to consider a sequence of rational numbers $\left\{\frac{p_n}{q_n}\right\}_{n\geq 0}$ convergent to $x$ and let $a^x=\lim_{n\to +\infty}a^{\frac{p_n}{q_n}}$.

If we have $\lim_{n\to +\infty}f(n)=L$ (limit of a sequence) it is not granted that $\lim_{x\to +\infty} f(x)=L$ (limit of a function) without further assumptions on $f$. For instance $\lim_{n\to +\infty}\sin(\pi n)=0$ but $\lim_{x\to +\infty}\sin(\pi x)$ does not exist, so we have to be careful in deriving $\lim_{x\to 0}\frac{a^x-1}{x}=\log a$ from $\lim_{n\to +\infty}\frac{a^{1/n}-1}{1/n}=\log a$.

On the other hand, there is no need to over-complicate things: $\lim_{x\to 0}\frac{a^x-1}{x}$ is just the value of the derivative of $a^x$ at the origin. If we know/prove that $a^x=\exp\left(x\log a\right)$, then $\lim_{x\to 0}\frac{a^x-1}{x}=\log a$ is a straightforward consequence of the chain rule, given the differentiability of the exponential function.
How to prove that a morphism of schemes preserves the surjectivity of ring endomorphisms?
The question is unclear as stated (what is $q$? what's the purpose of $s,t$? what does "correspond to the definition of $f^{*}:\mathcal{O}_{S}^{n}\rightarrow \mathcal{O}_{T}^{n}$" mean?). So here is what perhaps was meant: Set $f : T \to S$ be a morphism of arbitrary ringed spaces. Let $q : \mathcal{O}_S^n \to \mathcal{O}_S^n$ be a homomorphism of $\mathcal{O}_S$-modules such that $q(U) : \mathcal{O}_S(U)^n \to \mathcal{O}_S(U)^n$ is surjective for some open subset $U$. Let $V \subseteq f^{-1}(U)$ be some open subset. Why is then $(f^* q)(V) : \mathcal{O}_S(V)^n \to \mathcal{O}_S(V)^n$ surjective? First, I claim that actually $q|_U : \mathcal{O}_U^n \to \mathcal{O}_U^n$ is an isomorphism. In fact, $q(U)$ is an isomorphism since it is a surjective endomorphism of a f.g. free module. The determinant of $q(U)$ is therefore a unit global section of $\mathcal{O}_U$. This stays a unit global section if we restrict to open subsets $V$ of $U$, which shows that $q(V)$ is also an isomorphism. Let $f_U$ denote the corestriction $f^{-1}(U) \to U$. Since $(f_U)^*$ is a functor, it maps isomorphisms to isomorphisms. Thus, $(f_U)^* q|_U : \mathcal{O}_{f^{-1}(U)}^n \to \mathcal{O}_{f^{-1}(U)}^n$ is an isomorphism. Now we may restrict this to an open subset $V \subseteq f^{-1}(U)$ and take sections on $V$ to see that $(f^* q)(V)$ is an isomorphism. (Notice that we really need isomorphisms for this kind of reasoning; epimorphisms don't restrict to epimorphisms on sections, since sheaf cohomology is an obstruction for this.)
How to derive the higher order linear ODE from autonomous $1$st order ODE?
Just apply Cayley-Hamilton, $$ \chi_A(\frac d{dt})y(t)=\chi_A(A)y(t)=0. $$
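A small numerical sanity check of Cayley–Hamilton for a sample $2\times 2$ matrix (the matrix is an arbitrary choice): $\chi_A(t)=t^2-\operatorname{tr}(A)\,t+\det(A)$, and $\chi_A(A)=0$.

```python
# Verify chi_A(A) = A^2 - tr(A)*A + det(A)*I = 0 for a sample 2x2 matrix.
A = [[2, 1], [3, -1]]
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A2 = mul(A, A)
chi_of_A = [[A2[i][j] - tr * A[i][j] + det * (1 if i == j else 0)
             for j in range(2)] for i in range(2)]
```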
Show that $p$ isn't a prime in $Q[\sqrt{-1}]$
Let $p$ be a prime number. The ideal $(p)= p\Bbb Z[i]$ is prime (splits) in $\Bbb Z[i]$ if and only if the quotient ring $$ A_p=\Bbb Z[i]/(p) $$ is (is not) a field. But since $\Bbb Z[i]\simeq\Bbb Z[X]/(X^2+1)$ we easily have $$ A_p\simeq\Bbb F_p[X]/{(X^2+1)} $$ where $\Bbb F_p$ denotes the field with $p$ elements. Thus $$ \text{$p$ factorizes in $\Bbb Z[i]$} \Leftrightarrow \text{$X^2+1$ factorizes in $\Bbb F_p[X]$} \Leftrightarrow \text{$-1$ is a square in $\Bbb F_p$}. $$ When $p$ is odd the latter condition is itself equivalent to $\left(\frac{-1}p\right)=(-1)^{\frac{p-1}2}=1$ (Legendre symbol) and the latter holds if and only if $p\equiv1\bmod4$.
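The last equivalence is easy to spot-check by brute force for small odd primes (illustrative only):

```python
# Verify: for odd primes p, -1 is a square mod p exactly when p ≡ 1 (mod 4).
def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

def minus_one_is_square(p):
    return any((x * x) % p == p - 1 for x in range(1, p))

odd_primes = [p for p in range(3, 200) if is_prime(p)]
ok = all(minus_one_is_square(p) == (p % 4 == 1) for p in odd_primes)
```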
Need help with proving equivalence relation on Z x Z
To prove $R$ is an equivalence relation by definition is to prove it is reflexive, symmetric, and transitive.

Reflexivity: $(a,b)R(a,b) \iff a+b^3=a+b^3$; this one is trivial.

Symmetry: $(a,b)R(c,d) \implies (c,d)R(a,b)$; this one is also trivial since it's the same equality.

Transitivity: $(a,b)R(c,d) \land (c,d)R(e,f) \implies (a,b)R(e,f)$: $$\begin{align}(a,b)R(c,d)\land(c,d)R(e,f) &\iff (a+b^3=c+d^3) \land (c+d^3=e+f^3) \\&\implies a+b^3=e+f^3 \iff (a,b)R(e,f)\end{align}$$
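A brute-force spot check of the three properties on a finite sample of $\mathbb Z\times\mathbb Z$ (illustrative, not a proof):

```python
# Spot-check reflexivity, symmetry, transitivity of
# (a,b) R (c,d)  <=>  a + b**3 == c + d**3  on a finite sample of Z x Z.
def R(p, q):
    (a, b), (c, d) = p, q
    return a + b ** 3 == c + d ** 3

pairs = [(a, b) for a in range(-3, 4) for b in range(-3, 4)]
reflexive = all(R(p, p) for p in pairs)
symmetric = all(R(q, p) for p in pairs for q in pairs if R(p, q))
transitive = all(R(p, r) for p in pairs for q in pairs for r in pairs
                 if R(p, q) and R(q, r))
```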
Another question about "all odd moments vanish"
Let $X$ have density $$ f(x) = \frac1{48}\left(1-\mathsf{sign}(x)\sin\left(|x|^{\frac14}\right)\right) e^{-|x|^{\frac14}},\ x\in\mathbb R. $$ Then for each positive integer $n$ we have $$ \mathbb E[X^n] = \int_{-\infty}^\infty x^n f(x)\ \mathsf dx = \frac{(1+(-1)^n)(4n+3)!}{12}. $$ It follows immediately that $\mathbb E[X^{2n+1}]=0$ for all nonnegative integers $n$. It is clear though that $f$ is not an even function, so $X$ is not symmetric about zero.
Fundamental group in flat families
If the fibres are all smooth, then by Ehresmann's theorem they are diffeomorphic, in particular homotopy equivalent, and so the answer is yes. Instead, if you allow singular fibres the answer is in general no. Think of a flat family $\mathscr{X}_t$ of smooth plane cubic curves degenerating to a cuspidal curve $X_0$. Then $X_0$ is homeomorphic to $S^2$, in particular it is simply connected, whereas $\pi_1(X_t)\simeq\mathbb{Z} \times \mathbb{Z}$.
Motion in 3D Space: Finding Velocity from Distance, Launch Angle
You've made mistakes in the components of the initial velocity: $\displaystyle u_{x}=v\cos 45^{\circ}=\frac{v}{\sqrt{2}}$ and $\displaystyle u_{y}=v\sin 45^{\circ}=\frac{v}{\sqrt{2}}$.
Lebesgue measure of a subset of the unit circle
Convolution is a powerful tool. Let $A = \{ e^{i\theta} : \lvert \theta\rvert < 0.1\}$, and $f = \chi_A \ast \chi_M$. Then you have $X = f^{-1}([0,0.1])$, and \begin{align} \int_{S^1} f(\theta)\,d\theta &= \int_{S^1} \int_{S^1}\chi_A(\theta-\varphi)\chi_M(\varphi)\,d\varphi\,d\theta\\ &= \int_{S^1}\int_{S^1} \chi_A(\theta-\varphi)\chi_M(\varphi)\,d\theta\,d\varphi\\ &= \int_{S^1} m(A)\chi_M(\varphi)\,d\varphi\\ &= m(A)\cdot m(M)\\ &= 0.2 m(M)\\ &\geqslant 0.2\cdot\frac{3\pi}{2}. \end{align} On the other hand, \begin{align} \int_{S^1} f(\theta)\,d\theta &= \int_X f(\theta)\,d\theta + \int_{S^1\setminus X} f(\theta)\,d\theta\\ &\leqslant 0.1\cdot m(X) + 0.2\cdot m(S^1\setminus X)\\ &= 0.2\cdot 2\pi - 0.1\cdot m(X), \end{align} so, combining with the above $$0.2\cdot\frac{3\pi}{2} \leqslant 0.2\cdot 2\pi - 0.1\cdot m(X) \iff m(X) \leqslant 2\bigl(2\pi - \tfrac{3\pi}{2}\bigr) = \pi.$$
$ n $ lines intersections
Let $\ell$ be say the $x$-axis, and let $P$ be a point not on the $x$-axis. Pick $n-1$ points $Q_1,\dots, Q_{n-1}$ on $\ell$. Consider the set of $n$ lines consisting of $\ell$ and the $n-1$ lines $PQ_i$. These determine $n$ intersection points. In this case, $\binom{n}{2}$ minus the number of points is large. (We assume $C_n^2$ is intended to be a name for the binomial coefficient.) If I understand what you mean by $L(n)$, $\lim\frac{L(n)}{n^2}=\frac{1}{2}$. The above is for the projective plane. For the Euclidean plane, one can make the number of intersection points $0$.
solving Legendre equation using the Frobenius method around a singular point
I'll continue where you stopped. Replacing $n$ by $n+r$ and $n+r-2$ in the first and second sums respectively you get $$ \sum_{n=0}^\infty a_{n+r}(n+r)((n+r)-1)(x-1)^{n+r-2}$$ $$+\sum_{n=2}^\infty{}[(n+r-2)((n+r-2)-1)-2(n+r-2)+1]a_{n+r-2}(x-1)^{n+r-2}=0 .$$ Simplify the above and you should be able to proceed.
An embedding of a projective variety.
The statement is false. A correct statement is: Suppose $X$ is reduced. Then $H^0(\mathbb P^n,O(1))\to H^0(X,L)$ is injective if and only if the image of the morphism $f: X\to \mathbb P^n$ is not contained in a hyperplane. Proof. Let $s\in H^0(\mathbb P^n,O(1))$, let $t\in H^0(X, L)$ be its image. Then $t=0$ if and only if $t(x)=0$ for all $x\in X$ (we use the fact that $X$ is reduced). Now $t(x)=s(f(x))$. So $t=0$ if and only if $f(X)\subseteq (s)_0$ (the zero locus of $s$). As $(s)_0$ is a hyperplane if $s\ne 0$, our statement is proved. In particular, for any finite morphism from a curve $X$ to $\mathbb P^1$, the map on the $H^0$'s is always injective, but $f$ is not an embedding in general.
$P$ vs $NP$ characterization confusion
$P$ is the class of problems which can be solved by a deterministic Turing machine in polynomial time and $NP$ is the class of problems which can be solved by a non-deterministic Turing machine in polynomial time. Since every deterministic Turing machine can be simulated by a non-deterministic Turing machine, you have the corresponding inclusion. You may view the deterministic Turing machine as a branch in the tree of computations given by the non-deterministic Turing machine.
An invertible matrix minus the diagonal is nilpotent
No. Here is a counterexample for every $n\ge3$: $$ A=\pmatrix{0&0&1\\ 0&1&-1\\ 1&1&1\\ &&&I_{n-3}} =\pmatrix{0&0&1\\ 0&0&-1\\ 1&1&0\\ &&&0_{n-3}}+\pmatrix{0\\ &1\\ &&1\\ &&&I_{n-3}}. $$ When $n=3$, we have $$ (A-D)^2=\pmatrix{0&0&1\\ 0&0&-1\\ 1&1&0}^2=\pmatrix{1&1&0\\ -1&-1&0\\ 0&0&0} \text{ and } (A-D)^3=0. $$
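One can verify the $3\times 3$ example directly in exact integer arithmetic:

```python
# Check the 3x3 example: A is invertible while N = A - D satisfies N^3 = 0.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

A = [[0, 0, 1], [0, 1, -1], [1, 1, 1]]
D = [[0, 0, 0], [0, 1, 0], [0, 0, 1]]   # the diagonal part of A
N = [[A[i][j] - D[i][j] for j in range(3)] for i in range(3)]
N2 = matmul(N, N)
N3 = matmul(N2, N)
```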
(Dice), population mean and variance of the odd numbers.
Another way to think about the variance: Because we know the rolls are independent, $$Var(\sum\limits_{i=1}^{100}x_i)=\sum\limits_{i=1}^{100}Var(x_i)$$ We can then look at just $Var(x_i)$ $$Var(x_i) = E(x_i^2) - E(x_i)^2$$ $$E(x_i^2) = \frac{1}{6}*1^2 + \frac{1}{6}*3^2 + \frac{1}{6}*5^2 = \frac{35}{6}$$ $$Var(x_i)=\frac{35}{6}-(\frac{3}{2})^2=\frac{43}{12}$$ and then just plug back into the initial equation $$Var(\sum\limits_{i=1}^{100}x_i)=\sum\limits_{i=1}^{100}\frac{43}{12}=100*\frac{43}{12}$$
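The computation can be checked in exact arithmetic, assuming (as the moment computations above imply) that $x_i$ equals the face value when the roll is odd and $0$ otherwise:

```python
# Exact computation over the six equally likely faces, assuming x_i equals
# the face value when the face is odd and 0 otherwise.
from fractions import Fraction

faces = range(1, 7)
vals = [Fraction(f if f % 2 else 0) for f in faces]
E = sum(vals) / 6                       # E(x_i)
E2 = sum(v * v for v in vals) / 6       # E(x_i^2)
var = E2 - E ** 2                       # Var(x_i)
total_var = 100 * var                   # Var of the sum of 100 rolls
```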
Property of Binomial Coefficient
Substitute $x=1$ into $(1+x)^{2n+1}=\sum_{k=0}^{2n+1}\binom{2n+1}{k}x^k$ to obtain $2^{2n+1}=\sum_{k=0}^{2n+1}\binom{2n+1}{k}=2\sum_{k=0}^{n}\binom{2n+1}{k}$ since $\binom{2n+1}{k}=\binom{2n+1}{2n+1-k}$.
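A quick brute-force check of the identity for small $n$:

```python
# Check 2^(2n+1) = 2 * sum_{k=0}^{n} C(2n+1, k) for small n.
from math import comb

ok = all(2 ** (2 * n + 1) == 2 * sum(comb(2 * n + 1, k) for k in range(n + 1))
         for n in range(0, 30))
```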
What is the convergence or divergence for $\sqrt[n]{1+c_n}$
No. We have $1 \le \sqrt[n]{1+c_n} \le \sqrt[n]{2}$. Hence $\sqrt[n]{1+c_n} \to 1$ as $n \to \infty$.
solving differential equation $\frac{dy}{dx}=(x+y)\ln(x+y)-1$
Hint: What is the derivative of the natural logarithm? This should suggest a substitution for your left-hand integral.
Tensor product of $\mathbb Z_3$ and $\mathbb Z $.
Indeed, you are right, and this is a general property: for a commutative ring $R$ and an $R$-module $M$ the following isomorphism holds: $$M \cong M \otimes_R R$$ via a pair of mutually inverse maps $$ \begin{align} m & \mapsto m \otimes 1_R \\ r\cdot m & \leftarrow\!\shortmid m \otimes r \end{align}$$
How to sketch/draw a set of imaginary numbers?
Polar form makes a lot of problems easier. $$ \begin{aligned} 2\arg(c)&=n\pi, \quad n\in\mathbb{Z}\\ \\ \arg(c)&=\frac{n\pi}{2}, \quad n\in\mathbb{Z} \end{aligned} $$ Therefore, the solutions $c$ are exactly the complex numbers on the real and imaginary axes.
The problem related with the map $L\colon \mathbb R^{2}\rightarrow \mathbb R^{2}$ given by $L(x,y)=(x,-y)$
Hint: For a map $f \colon \mathbb R^2 \to \mathbb R^2$ to be differentiable at $(x,y)$ you need to find a linear map $Df(x,y) \colon \mathbb R^2 \to \mathbb R^2$ such that $$ \frac{f(x+ h, y+k) - f(x,y) - Df(x,y)(h,k)}{\|(h,k)\|} \to 0, \qquad (h,k) \to 0 $$ Your $f = L$ is linear. Can you spot a map $Df(x,y)$ that makes the numerator "small"? Compute $L(x+h, y+k) - L(x,y)$ and try to identify something that is linear in $h$ and $k$.
Cost function $C(x) = 8x+150$, and it has gone up by $19\%$
If $\;Y\;$ is any quantity, if it goes up by $\;t\;$% then the new quantity is $$Y\left(1+\frac t{100}\right)$$
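For the cost function in the title, a $19\%$ increase gives $1.19\,(8x+150)$; a tiny numeric example:

```python
# A 19% increase multiplies the cost by (1 + 19/100) = 1.19.
C = lambda x: 8 * x + 150
C_new = lambda x: C(x) * (1 + 19 / 100)
val = C_new(10)   # 1.19 * (80 + 150)
```

For $x=10$ this gives $1.19\cdot 230 = 273.7$.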
$\sqrt a+\sqrt b$ is a root of polynomial, then $\sqrt a -\sqrt b$ is so over finite field
Consider $\Bbb F_5$ and consider $a=2$, $b=3$ and $\sqrt b=2\sqrt a$. Then, $x^2+2$ has roots $\pm(\sqrt a+\sqrt b)$, and neither $\sqrt a-\sqrt b$ nor $\sqrt b-\sqrt a$ are among them.
Riemann zeta function and modulus
(a) if $Re(s)=1/2$, then $|f(s)|=1$ : this is true and allows us to find a simple expression of the phase of the Riemann zeta function on the critical line. (b) the function $f(s)$ itself doesn't contain $\zeta$ since $f(s)$ is, from the functional equation : $$\tag{1}f(s)=2^s\pi^{s-1}\sin\left(\frac {\pi s}2\right)\Gamma(1-s)$$ but $|f(s)|$ may be $1$ out of the critical line as you may see on this picture of $|f(x+iy)|-1$ : The vertical line at $x=\frac 12$ gives the solution $(a)$ but two solutions to $|f(x+iy)|=1$ exist for $y$ near $2\pi$ and $-2\pi$ (for $x$ near $\frac 12$ see also this discussion concerning Siegel-$\theta$). You'll have to exclude these two solutions to make your implication correct! $\qquad$Another perspective view : And with this picture for $x$ between $-30$ and $30$ a kind of symbol for your 'superquest' :-) APPROXIMATIONS: In the remainder we will always suppose that $x\in[0,1]$ and will only consider the case $y>0$. Let's remember $6.1.30$ from A&S : $$\tag{2}\left|\Gamma\left(\frac 12+iy\right)\right|^2=\frac {\pi}{\cosh(\pi y)}$$ An excellent approximation for other values of $x$ (when $y>2$) is given by : $$\tag{3}\left|\Gamma\left(x+iy\right)\right|^2\approx\frac {\pi\;y^{2x-1}}{\cosh(\pi y)}$$ (the error is less than $0.4\%$ and quickly decreasing with $y$) Since $\ \left|\sin\left(\frac {\pi (x+iy)}2\right)\right|^2=\cosh^2\bigl(\frac{\pi\;y}2\bigr)-\cos^2\bigl(\frac {\pi\;x}2\bigr)\ $ and $\ \left|(2\pi)^{x+iy}\right|^2=(2\pi)^{2x}\;$ we get : \begin{align} |f(x+iy)|^2&\approx\left|\frac{(2\pi)^{x+iy}}{\pi}\sin\left(\frac {\pi (x+iy)}2\right)\right|^2\frac {\pi\;y^{1-2x}}{\cosh(\pi y)}\\ &\approx\frac{(2\pi)^{2x}}{\pi}\left(\cosh^2\left(\frac{\pi\;y}2\right)-\cos^2\left(\frac {\pi\;x}2\right)\right)\frac {\;y^{1-2x}}{\cosh(\pi y)}\\ \tag{4}&\approx \left(\frac{2\pi}y\right)^{2x}\frac y{\pi}\frac{\cosh^2\left(\frac{\pi\;y}2\right)-\cos^2\left(\frac {\pi\;x}2\right)}{\cosh(\pi y)}\\ \end{align} Now for $y\gg 1$ the fraction at
the right will converge to $\frac 12$ (since $|\cos|\le 1$) and we will have : $$\tag{5}|f(x+iy)|\sim \left(\frac{2\pi}y\right)^{x-1/2}\quad\text{for}\ y\gg 1$$ which shows clearly all the things of interest for us : for $x=\frac 12$ we get $|f(x+iy)|=1$ independently of $y$ there is only one other solution : $y\approx 2\pi$ (considering $y>0$) the visual aspect of the approximation shows no real difference : Of course this is not a complete proof : the error term in $(3)$ to next order should be found (possibly using A&S' expansions and propagated) proving that $x\mapsto |f(x+iy)|$ is decreasing for $y>K$ and increasing for $y<K$ (with $K$ some constant near $2\pi$). $|f(x+iy)|$ should be studied with more care for $y$ near $0$ and so on. But, at least in principle, it appears ok to me so : Fine continuation!
Maximal element of $(I : x)$, where $x$ is in $A - I$, is prime belonging to $I$
To show that a maximal $(I:x)$ is prime: Choose $x$ such that $(I:x)$ is maximal. If $(I:x)$ is not prime, then we can choose $a$ and $b$ such that $ab \in (I:x)$ but $a \not \in (I:x)$, $b \not \in (I:x)$. Now what can you say about $(I:ax)$?
Relation between Sobolev Space $W^{1,\infty}$ and the Lipschitz class
"Locally" is ambiguous here

> $f$ is locally Lipschitz in $\Omega$ if and only if $f \in W^{1,\infty}_{loc}(\Omega)$

The validity of this claim depends on the interpretation of "locally Lipschitz". Does it mean 1) every point of $\Omega$ has a neighborhood in which $f$ is Lipschitz, or 2) there is $L$ such that every point of $\Omega$ has a neighborhood in which $f$ is $L$-Lipschitz (i.e., satisfies $|f(x)-f(y)|\le L|x-y|$)? With interpretation 1) the above would be false, because $f(x)=1/x$ is locally Lipschitz on the interval $(0,1)$. The authors meant interpretation 2), but you should be aware that it may be less common. Saying "locally $L$-Lipschitz" would be more precise.

Counterexample and quasiconvexity

> $W^{1,\infty}(\Omega) = C^{0,1}(\Omega)$

This is not always true. For example, let $\Omega$ be the plane with the slit along the negative $x$-axis. Using polar coordinates $r,\theta$, with $-\pi<\theta<\pi$, define $u(r,\theta) = r\theta$. You can check that this is a $W^{1,\infty}$ function (it is locally $10$-Lipschitz, say), but it is not in $C^{0,1}(\Omega)$ because the values of $f$ just above the slit and just below it are far apart. The counterexample is taken from here, where you can also find a result in the positive direction; for sufficiently nice $\Omega$ the equality holds. Also, see the answer and references in relation between $W^{1,\infty}$ and $C^{0,1}$.

Vanishing on the boundary

The definition of $W^{1,\infty}_0(\Omega)$ is a bit tricky since the usual approach (complete the space of smooth compactly supported functions with respect to the Sobolev norm) does not apply. Instead one can define $W_0^{1,\infty}(\Omega)$ as follows. Every element of $W^{1,\infty}(\Omega)$ has a continuous representative $u$. If $u$ satisfies $\lim_{x\to a}u(x)= 0$ for every $a\in \partial \Omega$, then we say that $u\in W_0^{1,\infty}(\Omega)$. This is a closed subspace, because convergence in $W^{1,\infty}$ norm implies uniform convergence.
As an aside: one can give a unified definition of $W^{1,p}_0(\Omega)$ that works for all $1\le p\le \infty$: a function is in $W^{1,p}_0(\Omega)$ if its zero extension to $\mathbb R^n$ is in $W^{1,p}(\mathbb R^n)$. For every domain, $W_0^{1,\infty}(\Omega)$ is the same as the set of Lipschitz functions on $\Omega$ that tend to $0$ at the boundary. Indeed, we can extend by zero to the rest of $\mathbb R^n$ and use the fact that $\mathbb R^n$ is convex.
If $\cos A = \sin B$ , does $A+B = 90$ degrees?
Sometimes a picture says more than 1000 words:
A number when successively divided by $9$, $11$ and $13$ leaves remainders $8$, $9$ and $8$ respectively
Note $\ $ A newer duplicate question with cited source makes it clear that this is not a CRT problem (as was inferred in the above comments) but, more simply, a question on iterated division with remainder. The answer below is for the CRT interpretation. Hint $\ $ Note that $\rm\ x\equiv 8\:\ (mod\ 9),\ x\equiv 8\:\ (mod\ 13)$ $\iff$ $\rm x\equiv 8\:\ (mod\ 9\cdot 13),\:$ follows from CCRT = special constant case $\rm\,a\! =\! b\,$ of Easy CRT (below), reducing the $3$ equations to these $2$: $$\begin{array}{ll}\rm x\ \equiv\ 9\ \ (mod\ \ 11)\\ \rm x\ \equiv\ 8\ \ (mod\ 9\cdot\! 13)\end{array}$$ Applying Easy CRT (below) with $\rm\ a=9,\ b = 8,\ n=\:9\cdot 13,\ m=11,\ $ noting that $$\rm mod\:\ m\!=\!11\!:\ \ \frac{a-b}{n}\ =\ \frac{9-8}{9\cdot 13}\ \equiv\ \frac{1}{-2\cdot 2}\ \equiv\ \frac{12}{-4}\ \equiv\ {-3}\ \equiv\ \color{#C00}8 $$ we quickly obtain the unique solution: $\rm\ \ x\, \equiv\, 8 + 9\cdot 13\cdot [\color{#C00}8]\,\equiv\, 944 \ \ (mod\,\ 11\cdot9\cdot 13)$ Theorem (Easy CRT) $\rm\ \ $ If $\rm\ m,n\:$ are coprime integers then $\rm\ \color{#0a0}{n^{-1}}\ $ exists $\rm\ (mod\ m)\ \ $ and $\rm\displaystyle\quad\quad\quad\quad\quad \begin{eqnarray}\rm x&\equiv&\rm\ a\ \ (mod\ m) \\ \rm x&\equiv&\rm\ b\ \ (mod\ n)\end{eqnarray} \ \iff\ \ x\ \equiv\ b + n\ \bigg[\frac{a-b}{\color{#0a0}n}\ mod\ m\:\bigg]\ \ (mod\ mn)$ Proof $\rm\ (\Leftarrow)\ \ \ mod\ n\!:\,\ x\equiv b + n\ [\cdots]\equiv b\:,\ $ and $\rm\ mod\ m\!:\,\ x\equiv b + (a-b)\ n/n\: \equiv\: a\:.$ $\rm\ (\Rightarrow)\ \ $ The solution is unique $\rm\ (mod\ mn)\ $ since if $\rm\ x',x\ $ are solutions then $\rm\ x'\equiv x\ $ mod $\rm\:m,n\:$ therefore $\rm\ m,n\ |\ x'-x\ \Rightarrow\ mn\ |\ x'-x\ \ $ since $\rm\ \:m,n\:$ coprime $\rm\:\Rightarrow\ lcm(m,n) = mn\:.\ \ $ QED Note $\ $ The constant case optimization of CRT = Chinese Remainder Theorem frequently proves handy in practice, e.g. see also this answer, where it shortens a few-page proof to a few lines.
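The final answer is easy to verify by brute force under the CRT interpretation: $944$ is the unique solution modulo $9\cdot 11\cdot 13 = 1287$.

```python
# Exhaustive check: 944 is the unique x in [0, 1287) with
# x ≡ 8 (mod 9), x ≡ 9 (mod 11), x ≡ 8 (mod 13).
sols = [x for x in range(9 * 11 * 13)
        if x % 9 == 8 and x % 11 == 9 and x % 13 == 8]
```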
What is the structure of the multiplicative group of the ring $\mathbb{Z}_{p^k } $?
I have found the answer in Ireland&Rosen's book "A Classical Introduction to Modern Number Theory" (GTM Vol.84). If $n=2^a p_1^{a_1 } \dotsb p_l^{a_l } $, then $$U(\mathbb{Z} /n\mathbb{Z} )\cong U(\mathbb{Z} /2^a \mathbb{Z} )\times\dotsb\times U(\mathbb{Z} /p_l^{a_l } \mathbb{Z} )$$ where $U(\mathbb{Z} /p_i^{a_i } \mathbb{Z} )$ is a cyclic group, and $U(\mathbb{Z} /2^a \mathbb{Z} )$ is a cyclic group for $a=1$ or $2$ but a product of two cyclic groups with one of order $2$ and the other of order $2^{a-2 } $. Its proof is in chapter $4$.
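A small empirical check of the structure statements by computing maximal element orders (illustrative only): a finite abelian group is cyclic exactly when its exponent equals its order.

```python
# U(Z/p^a) is cyclic for odd prime powers, while U(Z/2^a) with a >= 3
# has maximal element order 2^(a-2) (here a = 4, so 16 and max order 4).
from math import gcd

def max_order(n):
    """Return (largest element order, group order) of U(Z/nZ)."""
    units = [u for u in range(1, n) if gcd(u, n) == 1]
    best = 0
    for u in units:
        x, k = u, 1
        while x != 1:
            x = x * u % n
            k += 1
        best = max(best, k)
    return best, len(units)

cyclic_27 = max_order(27) == (18, 18)   # U(Z/27) is cyclic of order 18
mod_16 = max_order(16)                  # U(Z/16): order 8, exponent 4
```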
analogue of the Jordan curve theorem for closed curve
Can you specify what the components you're talking about are? I am pretty sure there is no such neat generalization. I know this from ODEs, where in two dimensions dynamical systems are easily classified (Poincaré–Bendixson theorem), but chaos arises in three dimensions and above.
Can someone please help me find examples of the following if they exist or not.
Take $S=\{0,1,2,3,4,5,6\}$ and define multiplication $\odot$ by $a\odot b := a\cdot b \text{ mod } 7$ Clearly $1$ is the identity with respect to that multiplication. But $0$ has no inverse.
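The claims about $(S,\odot)$ can be checked exhaustively:

```python
# S = {0,...,6} with a ⊙ b = a*b mod 7: closed, associative, 1 is an
# identity, but 0 has no inverse.
S = range(7)
mul = lambda a, b: (a * b) % 7
closed = all(mul(a, b) in S for a in S for b in S)
assoc = all(mul(mul(a, b), c) == mul(a, mul(b, c))
            for a in S for b in S for c in S)
identity = all(mul(1, a) == a == mul(a, 1) for a in S)
zero_has_inverse = any(mul(0, b) == 1 for b in S)
```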
How to show that the two extensions are isomorphic?
$\newcommand{\Q}{\mathbb{Q}}$You are trying to do something impossible. The only automorphism of the field $\Q$ is the identity. Rather, note that if $\alpha$ is a root of $t^{2} - 2$, then $\alpha + 2 \in \Q(\alpha)$ is a root of $t^{2} - 4 t + 2$, as $$ (\alpha + 2)^{2} - 4 (\alpha + 2) + 2 = \alpha^{2} + 4 \alpha + 4 - 4 \alpha - 8 + 2 = 0. $$
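A floating-point sanity check of the computation (with $\alpha=\sqrt 2$):

```python
# If alpha is a root of t^2 - 2, then alpha + 2 is a root of t^2 - 4t + 2.
alpha = 2 ** 0.5
val = (alpha + 2) ** 2 - 4 * (alpha + 2) + 2
```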
$G$ a group and $H$,$K$ subgroups, $kHk^{-1} \subseteq H \implies kHk^{-1} = H$?
No. If $H = \mathbb{Z}$, it might be the case that $kHk^{-1} = 2 \mathbb{Z} \subsetneq \mathbb{Z}$. In fact this can actually occur in $G = \text{GL}_2(\mathbb{Q})$, where we can take $$H = \left[ \begin{array}{cc} 1 & \mathbb{Z} \\ 0 & 1 \end{array} \right]$$ $$k = \left[ \begin{array}{cc} 2 & 0 \\ 0 & 1 \end{array} \right].$$ As Berci observes in the comments, $K$ plays no role in this question.
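The conjugation computation can be checked in exact rational arithmetic: conjugating $\begin{bmatrix}1&n\\0&1\end{bmatrix}$ by $k$ doubles the upper-right entry, so $kHk^{-1}$ corresponds to $2\mathbb Z\subsetneq\mathbb Z$.

```python
# k h_n k^{-1} = h_{2n} for h_n = [[1, n], [0, 1]] and k = diag(2, 1).
from fractions import Fraction

def mat(a, b, c, d):
    return [[Fraction(a), Fraction(b)], [Fraction(c), Fraction(d)]]

def mul2(X, Y):
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

k = mat(2, 0, 0, 1)
k_inv = mat(Fraction(1, 2), 0, 0, 1)
conj = {n: mul2(mul2(k, mat(1, n, 0, 1)), k_inv) for n in range(-5, 6)}
```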
Expectation with $-1$
$(-1)^{a+b+c+d} = (-1)^a (-1)^b (-1)^c (-1)^d$. Assuming $a,b,c,d$ are independent (which you didn't say but is probably what is meant), the expected value of the product is the product of the expected values. $(-1)^a$ is $-1$ if $a$ is odd and $1$ if $a$ is even. So if the values $0,1,\ldots,n$ are all equally likely (which again you didn't say but you probably meant), it all comes down to counting how many odd and even numbers there are in $0,1,\ldots,n$.
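A brute-force check of the factorization of the expectation for independent $a,b,c,d$ uniform on $\{0,\dots,n\}$ (exact arithmetic):

```python
# Verify E[(-1)^(a+b+c+d)] = (E[(-1)^a])^4 by exhaustive enumeration.
from itertools import product
from fractions import Fraction

def check(n):
    vals = range(n + 1)
    lhs = sum(Fraction((-1) ** (a + b + c + d))
              for a, b, c, d in product(vals, repeat=4)) / ((n + 1) ** 4)
    single = sum(Fraction((-1) ** a) for a in vals) / (n + 1)
    return lhs == single ** 4

checks = [check(n) for n in range(1, 5)]
```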
Solve the Diophantine equation $x^6 + 3x^3 + 1 = y^4$
$x^6+3x^3+1-y^4=0$ is a quadratic equation. Thus, there is an integer $n$ for which $$3^2-4(1-y^4)=n^2$$ or $$n^2-4y^4=5$$ or $$(n-2y^2)(n+2y^2)=5$$ and we have four cases only.
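A brute-force search over a box is consistent with the case analysis: the case $n-2y^2=1$, $n+2y^2=5$ gives $y=\pm1$ and then $x^3(x^3+3)=0$, i.e. $x=0$.

```python
# Search |x|, |y| <= 50 for integer solutions of x^6 + 3x^3 + 1 = y^4.
sols = [(x, y) for x in range(-50, 51) for y in range(-50, 51)
        if x ** 6 + 3 * x ** 3 + 1 == y ** 4]
```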
Prove $[n\alpha]-[(n-1)\alpha]$ is $0$ or $1$.
$n\alpha - (n-1)\alpha = \alpha \in(0, 1)$. If there exists an integer $k$ such that $n\alpha \ge k > (n-1)\alpha$, then $[n\alpha] - [(n-1)\alpha] = k - (k-1) = 1$. Otherwise, there is no integer in $((n-1)\alpha,n\alpha]$ in which case $[n\alpha] = [(n-1)\alpha]$.
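A spot check with exact rational $\alpha\in(0,1)$ (floats could misround the floors):

```python
# floor(n*alpha) - floor((n-1)*alpha) takes only the values 0 and 1
# for alpha in (0, 1).
from fractions import Fraction
from math import floor

diffs = {floor(n * alpha) - floor((n - 1) * alpha)
         for alpha in (Fraction(p, q) for q in range(2, 12) for p in range(1, q))
         for n in range(1, 100)}
```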
How many persons eat at least two out of the three dishes?
Something is wrong with the data in this problem, assuming that each person eats at least one dish: vegetables, fish or eggs. The number of persons which eat at least two out of the three dishes is $$N:=|F\cap V|+|V\cap E|+|E\cap F|-2|F\cap V\cap E|$$ By the inclusion-exclusion principle $$N=|F|+|V|+|E|-|F\cup V\cup E|-|F\cap V\cap E|\\=10+9+7-21-5=0.$$ This can't be because $N\geq |F\cap V\cap E|=5$. On the other hand, if there are persons that do not eat vegetables, fish or eggs then $$10=\max(|F|,|V|,|E|)\leq |F\cup V\cup E|\leq 21$$ and the above equality implies $$5=|F\cap V\cap E|\leq N=21-|F\cup V\cup E|\leq 11.$$ P.S. By Sander De Dycker's comments, $|F\cup V\cup E|\not=10$, therefore $|F\cup V\cup E|\geq 11$ and $$5\leq N\le 10.$$
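The set identity used for $N$ is easy to test on random sets:

```python
# Random-set check of the identity
# |F∩V| + |V∩E| + |E∩F| - 2|F∩V∩E| = |F| + |V| + |E| - |F∪V∪E| - |F∩V∩E|.
import random

random.seed(0)
ok = True
for _ in range(200):
    U = range(30)
    F = {x for x in U if random.random() < 0.4}
    V = {x for x in U if random.random() < 0.4}
    E = {x for x in U if random.random() < 0.4}
    lhs = len(F & V) + len(V & E) + len(E & F) - 2 * len(F & V & E)
    rhs = len(F) + len(V) + len(E) - len(F | V | E) - len(F & V & E)
    ok = ok and lhs == rhs
```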
Set of recurring decimals has an supremum?
As you noted, the values of the sequence $$f(n) = \frac{10^n - 1}{3\cdot 10^{n-1}}$$ are $3, 3.3, 3.33, 3.333, \ldots$. From this we see that $f(n)$ is strictly increasing. We can also prove this formally by observing that for all $n \in \mathbb N$ we have $$\begin{aligned} \frac{f(n+1)}{f(n)} &= \left(\frac{10^{n+1}-1}{3\cdot 10^{n}}\right)\left(\frac{3\cdot 10^{n-1}}{10^{n}-1}\right)\\ &= \frac{1}{10}\left(\frac{10^{n+1}-1}{10^{n}-1}\right)\\ &= \frac{10^{n+1} - 1}{10^{n+1}-10} \\ &> 1 \end{aligned}$$ and therefore $f(n+1) > f(n)$. Consequently, the supremum of the set $\{f(n) \mid n \in \mathbb N\}$ is the limit of the sequence, which is $$\begin{aligned} \lim_{n \to \infty} f(n) &= \lim_{n \to \infty}\left(\frac{10^n - 1}{3\cdot 10^{n-1}}\right) \\ &= \lim_{n \to \infty}\left(\frac{10^{n}}{3\cdot 10^{n-1}}\right) - \lim_{n \to \infty}\left(\frac{1}{3\cdot 10^{n-1}}\right) \\ &= \frac{10}{3} - 0 \\ &= \frac{10}{3} \end{aligned}$$
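A quick exact-arithmetic check of the monotonicity and the limit: $\frac{10}{3}-f(n)=\frac{1}{3\cdot 10^{n-1}}$, so $f(n)$ increases to $\frac{10}{3}$.

```python
# f(n) = (10^n - 1) / (3 * 10^(n-1)) is increasing and bounded above by 10/3.
from fractions import Fraction

f = lambda n: Fraction(10 ** n - 1, 3 * 10 ** (n - 1))
increasing = all(f(n + 1) > f(n) for n in range(1, 50))
below = all(f(n) < Fraction(10, 3) for n in range(1, 51))
gap = Fraction(10, 3) - f(50)   # should be 1 / (3 * 10^49)
```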
Series counterexample
The canonical example is $$a_n=b_n=\frac{(-1)^n}{\sqrt n}$$ ADD Given two sequences $a_n,b_n$, the sequence $c_k=\sum_{i=1}^k a_ib_{k-i}$ is usually called the Cauchy product or convolution of $a_n$ with $b_n$. It is a good exercise (and not an easy one) to prove that if $a_n$ is absolutely summable - that is $$\sum |a_n| $$ exists - and $b_n$ is summable, then the Cauchy product is summable and it converges to the product of the sums. This is known as Mertens' theorem. ADD There is a theorem (found in Spivak's calculus) that says that if both $a_n$ and $b_n$ are absolutely summable, then any sum of the form $$\sum_{i,j} c_{i,j}$$ where each product $a_\ell b_k$ appears exactly once will converge to $$\sum a_n\cdot \sum b_n$$
Question Regarding the Commutativity of F-Algebras when the Algebra is finite dimensional over F.
Do you know of any noncommutative algebras?

Matrix algebras $M_n(F)$ over fields $F$ (when $n>1$). Dimension $n^2$.

Group algebras $F[G]$ (when $G$ is nonabelian). Dimension is $|G|$.

Quaternions $\mathbb{H}$ are noncommutative $\mathbb{R}$-algebras. Dimension is $4$.

These are all associative algebras. (Dietrich gives two canonical examples of nonassociative.)
Second cohomology group of a perfect group
Can't happen, because of the universal coefficient theorem for homology: $0\to H_2(G,\mathbb Z) \otimes U(1) \to H_2(G,U(1)) \to \mathrm{Tor}(H_1(G,\mathbb Z),U(1)) \to 0$. Since $G$ is perfect, $H_1(G,\mathbb Z)$ is zero; since $G$ is finite, $H_2(G,\mathbb Z)$ is finite; since $U(1)$ is a divisible group and $H_2(G,\mathbb Z)$ is finite, $H_2(G,\mathbb Z) \otimes U(1) = 0$. So $H_2(G,U(1)) = 0$.
Problem with computing numerical Gradient with Matlab
Do not transpose the matrices x and y. Remove the statements: x = x'; y = y'; See what happens.
Coupon Collector Problem with multiple copies and X amount of coupons already collected
What follows is a computational contribution where we derive a closed form (as opposed to an infinite series) of the expected number of draws required to see all coupons at least twice when a number $n'$ of coupons from the $n$ types where $n' < n$ have already been collected in two instances. We then observe that the expectation does not simplify. It seems like a rewarding challenge to compute the asymptotics for these expectations using probabilistic methods and compare them to the closed form presented below. Using the notation from this MSE link we have from first principles that $$P[T = m] = \frac{1}{n^m}\times {n-n'\choose 1}\times (m-1)! [z^{m-1}] \exp(n'z) \left(\exp(z) - 1 - z\right)^{n-n'-1} \frac{z}{1}.$$ We verify that this is a probability distribution. We get $$\sum_{m\ge 2} P[T=m] \\ = (n-n') \sum_{m\ge 2} \frac{1}{n^m} (m-1)! [z^{m-2}] \exp(n'z) \left(\exp(z) - 1 - z\right)^{n-n'-1} \\ = (n-n') \frac{1}{n^2} \sum_{m\ge 0} \frac{1}{n^m} (m+1)! [z^{m}] \exp(n'z) \left(\exp(z) - 1 - z\right)^{n-n'-1} \\ = (n-n') \frac{1}{n^2} \sum_{m\ge 0} \frac{1}{n^m} (m+1)! [z^{m}] \exp(n'z) \\ \times \sum_{p=0}^{n-n'-1} {n-n'-1\choose p} \exp((n-n'-1-p)z) (-1)^{p} (1+z)^p \\ = (n-n') \frac{1}{n^2} \sum_{m\ge 0} \frac{1}{n^m} (m+1)! \\ \times [z^{m}] \sum_{p=0}^{n-n'-1} {n-n'-1\choose p} \exp((n-1-p)z) (-1)^{p} (1+z)^p \\ = (n-n') \frac{1}{n^2} \sum_{m\ge 0} \frac{1}{n^m} (m+1)! \\ \times \sum_{p=0}^{n-n'-1} {n-n'-1\choose p} \sum_{q=0}^{m} [z^{m-q}] \exp((n-1-p)z) (-1)^{p} [z^q] (1+z)^p \\ = (n-n') \frac{1}{n^2} \sum_{m\ge 0} \frac{1}{n^m} (m+1)! \\ \times \sum_{p=0}^{n-n'-1} {n-n'-1\choose p} \sum_{q=0}^{m} \frac{(n-1-p)^{m-q}}{(m-q)!} (-1)^{p} {p\choose q}.$$ Re-arranging the order of the sums now yields $$(n-n') \frac{1}{n^2} \sum_{p=0}^{n-n'-1} {n-n'-1\choose p} \\ \times \sum_{m\ge 0} \frac{1}{n^m} (m+1)! 
\sum_{q=0}^{m} \frac{(n-1-p)^{m-q}}{(m-q)!} (-1)^{p} {p\choose q} \\ = (n-n') \frac{1}{n^2} \sum_{p=0}^{n-n'-1} {n-n'-1\choose p} \\ \times \sum_{q\ge 0} (-1)^{p} {p\choose q} \sum_{m\ge q} \frac{1}{n^m} (m+1)! \frac{(n-1-p)^{m-q}}{(m-q)!}.$$ Simplifying the inner sum we get $$\frac{1}{n^q} \sum_{m\ge 0} \frac{1}{n^m} (m+q+1)! \frac{(n-1-p)^{m}}{m!} \\ = \frac{(q+1)!}{n^q} \sum_{m\ge 0} \frac{1}{n^m} {m+q+1\choose q+1} (n-1-p)^m \\ = \frac{(q+1)!}{n^q} \frac{1}{(1-(n-1-p)/n)^{q+2}} = (q+1)! n^2 \frac{1}{(p+1)^{q+2}}.$$ We thus obtain for the sum of the probabilities $$\sum_{m\ge 2} P[T=m] = (n-n') \sum_{p=0}^{n-n'-1} {n-n'-1\choose p} (-1)^{p} \sum_{q=0}^p {p\choose q} (q+1)! \frac{1}{(p+1)^{q+2}}.$$ Repeat to instantly obtain for the expectation $$\bbox[5px,border:2px solid #00A000]{ E[T] = n (n-n') \sum_{p=0}^{n-n'-1} {n-n'-1\choose p} (-1)^{p} \sum_{q=0}^p {p\choose q} \frac{(q+2)!}{(p+1)^{q+3}}.}$$ Now to simplify these we start with the inner sum from the probablity using the fact that $$\sum_{q=0}^p {p\choose q} (q+1)! \frac{1}{(p+1)^{q+1}} = 1$$ which was proved by residues at the cited link from the introduction. We then obtain $$(n-n') \sum_{p=0}^{n-n'-1} {n-n'-1\choose p} \frac{(-1)^{p}}{p+1} \\ = \sum_{p=0}^{n-n'-1} {n-n'\choose p+1} (-1)^p = - \sum_{p=1}^{n-n'} {n-n'\choose p} (-1)^p \\ = 1 - \sum_{p=0}^{n-n'} {n-n'\choose p} (-1)^p = 1 - (1-1)^{n-n'} = 1$$ which confirms it being a probability distribution. We will not attempt this manipulation with the expectation, since actual computation of the values indicates that it does not simplify as announced earlier. 
For example, these are the expectations for the pairs $(2n', n'):$ $$4,11,{\frac {347}{18}},{\frac {12259}{432}}, {\frac {41129339}{1080000}},{\frac {390968681}{8100000}}, {\frac {336486120012803}{5717741400000}}, \ldots$$ and for pairs $(3n', n'):$ $${\frac {33}{4}},{\frac {12259}{576}},{\frac {390968681}{10800000}}, {\frac {2859481756726972261}{54646360473600000}}, \ldots$$ The reader who seeks numerical evidence confirming the closed form or additional clarification of the problem definition used is asked to consult the following simple C program whose output matched the formula on all cases that were examined.

    #include <stdlib.h>
    #include <stdio.h>
    #include <assert.h>
    #include <time.h>

    int main(int argc, char **argv)
    {
        int n = 6, np = 3, j = 3, trials = 1000;

        if(argc >= 2){ n = atoi(argv[1]); }
        if(argc >= 3){ np = atoi(argv[2]); }
        if(argc >= 4){ j = atoi(argv[3]); }
        if(argc >= 5){ trials = atoi(argv[4]); }

        assert(1 <= n);
        assert(1 <= np && np < n);
        assert(1 <= j);
        assert(1 <= trials);

        srand48(time(NULL));

        long long data = 0;
        for(int tind = 0; tind < trials; tind++){
            int seen = np;
            int steps = 0;
            int dist[n];

            for(int cind = 0; cind < n; cind++){
                if(cind < np) dist[cind] = j;
                else dist[cind] = 0;
            }

            while(seen < n){
                int coupon = drand48() * (double)n;
                steps++;
                if(dist[coupon] == j-1) seen++;
                dist[coupon]++;
            }

            data += steps;
        }

        long double expt = (long double)data/(long double)trials;
        printf("[n = %d, np = %d, j = %d, trials = %d]: %Le\n",
               n, np, j, trials, expt);

        exit(0);
    }
Solving $y'(x)\left(4-3y(x)x^2\right)=4x$
I will rewrite the ODE $(4-3x^2y)\frac{dy}{dx} = 4x$ as $$4x \frac{dx}{dy} + 3x^2 y = 4.$$ Substituting $u = x^2$ gives a linear equation for $u$: $$2\frac{du}{dy} + 3uy = 4.$$ You can take it from here: use an integrating factor or variation of parameters.
Evaluate a complex integral using Cauchy formula
Hint: Since $\gamma$ is the unit circle and $w$ is on $\gamma$ you have $$\bar{w}=\frac{1}{w}$$ Your integral is then $$\int_\gamma \frac{\overline{w-z}}{w-z}\, dw=\int_\gamma \frac{1}{w(w-z)}\, dw - \bar{z}\int_\gamma \frac{1}{w-z}\, dw.$$ Now the second integral is known, while the first is done either by partial fraction decomposition, or by putting small nonintersecting curves around $0$ and $z$. Note that with this approach you should discuss the case $z=0$ separately...
Why don't surjective functions form a group under composition?
Composition of functions is always associative. However, while surjective functions always have right inverses, these right inverses need not be surjective as well. In fact, let $f : X\to X$ be a function. Then $f$ is surjective if and only if there exists $g : X\to X$ such that $f\circ g = id_X$. Dually, $g : X\to X$ is injective if and only if there exists $f : X\to X$ such that $f\circ g = id_X$. So you always have a function which is a right inverse to a surjective function, but this inverse is not necessarily surjective. So the set $\operatorname{Surj}(X)$ of surjections from $X\to X$ actually does not always contain right inverses (even though these right inverses exist in the full set of functions $X\to X$). If $g$ is a right inverse of $f$, and $g\in \operatorname{Surj}(X)$, then $g$ is both injective and surjective (hence bijective). The set of bijections $X\to X$ does form a group under function composition (this is the permutation group $S_X$ of $X$). If $X$ is finite, then a surjection is automatically an injection as well, so you have $\operatorname{Surj}(X) = S_X$, and $\operatorname{Surj}(X)$ actually is a group.
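For a concrete illustration (names are mine, taking $X = \mathbb{N}$): $f(n) = \lfloor n/2 \rfloor$ is surjective, and $g(n) = 2n$ is a right inverse of $f$, yet $g$ is not surjective since no odd number is in its image. A finite-sample sketch:

```python
f = lambda n: n // 2   # surjective on the naturals
g = lambda n: 2 * n    # a right inverse of f, but not surjective

# f(g(n)) = n for every n, so g is a right inverse of f ...
assert all(f(g(n)) == n for n in range(1000))

# ... but g misses every odd number, so g is not itself surjective:
# Surj(N) contains f without containing any surjective right inverse of it.
image = {g(n) for n in range(1000)}
assert 1 not in image and 3 not in image
```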
Solving number divisibility problem using cardinal number of sets!
Hint: The cardinal you want to find is that of the set $A_7\setminus(A_{10}\cup A_{12}\cup A_{25})$, for which you can use the following formula: $$|T|=|\overline{A}\cap \overline{B} \cap \overline{C}|=|A_7|-|A|-|B|-|C|+|A\cap B|+|A\cap C|+|B\cap C|-|A\cap B\cap C|$$ with $A=A_7\cap A_{10}=A_{70}$, $B=A_7\cap A_{12}=A_{84}$ and $C=A_7\cap A_{25}=A_{175}$ (the complements being taken inside $A_7$).
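Since the range being counted is not restated here, the following sketch assumes we count integers $1 \le n \le N$ for an arbitrary $N$ (hypothetical, $N = 10^5$ below), and checks the inclusion–exclusion formula against brute force:

```python
from math import lcm

N = 100_000  # assumed range; adjust to the actual problem

# Brute force: multiples of 7 divisible by none of 10, 12, 25.
brute = sum(1 for n in range(1, N + 1)
            if n % 7 == 0 and n % 10 and n % 12 and n % 25)

def count(d):            # |A_d ∩ [1, N]|
    return N // d

# Inclusion-exclusion with A = A_70, B = A_84, C = A_175.
a, b, c = lcm(7, 10), lcm(7, 12), lcm(7, 25)
formula = (count(7) - count(a) - count(b) - count(c)
           + count(lcm(a, b)) + count(lcm(a, c)) + count(lcm(b, c))
           - count(lcm(a, b, c)))

assert brute == formula
```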
Potential Game: Solving Prisoner's Dilemma
We want to define a function $P$ from the strategy space $\{(c, c), (q, c), (c, q), (q, q)\}$ to $\mathbb R$ such that when either player deviates unilaterally, i.e., changes from $q$ to $c$ or from $c$ to $q$ while the other player keeps the same strategy, the change in the potential function is the same as the change in that player's utility. Say the row player plays $q$ and the column player plays $c$, which means we are in $(q, c)$. The row player has payoff 10 and the column player has payoff 0. Now, if the column player changes her strategy from $c$ to $q$, we end up in $(q, q)$ and her payoff increases from 0 to 2. Then the potential should also increase by 2, i.e., $P((q, q)) - P((q, c)) = 2$. If instead the row player switched from $q$ to $c$, so that we go from $(q, c)$ to $(c,c)$, her payoff changes by $-4$, from 10 to 6, and since the potential has to reflect this it must be that $P((c,c)) - P((q, c)) = -4$. Doing this also for the move from $(c, q)$ to $(q,q)$, we obtain the equations \begin{align*} P((q, q)) - P((q, c)) &= 2 \\ P((c,c)) - P((q, c)) &= -4 \\ P((q, q)) - P((c, q)) &= 2 \\ \end{align*} Observe that if one of the terms is fixed, all others are determined by the equations. Also, we only care about the change in potential, not about the value of the potential itself. Suppose we set $P((c, c)) = C$. Then, by the second equality, $P((q, c)) = C+4$. By the first equality we obtain $P((q,q)) = C+6$, and finally by the last equality $P((c, q)) = C+4$. In the matrix notation from Rahul Savani's post $$ \begin{pmatrix} C & C+4 \\ C+4 & C+6 \end{pmatrix} $$ by taking $C=0$ we get the same potential matrix as Rahul Savani, but we could have taken any other real value.
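The defining property of a potential function can be checked mechanically; a minimal sketch with the payoffs described above and $C = 0$:

```python
# Payoffs (row, column) for the prisoner's dilemma described above.
payoff = {
    ('c', 'c'): (6, 6),
    ('q', 'c'): (10, 0),
    ('c', 'q'): (0, 10),
    ('q', 'q'): (2, 2),
}

# Potential with C = 0: P(c,c)=0, P(q,c)=P(c,q)=4, P(q,q)=6.
P = {('c', 'c'): 0, ('q', 'c'): 4, ('c', 'q'): 4, ('q', 'q'): 6}

# Every unilateral deviation changes P by exactly the deviator's payoff change.
for s_row in 'cq':
    for s_col in 'cq':
        for alt in 'cq':
            # Row player deviates from s_row to alt.
            d_u = payoff[(alt, s_col)][0] - payoff[(s_row, s_col)][0]
            d_P = P[(alt, s_col)] - P[(s_row, s_col)]
            assert d_u == d_P
            # Column player deviates from s_col to alt.
            d_u = payoff[(s_row, alt)][1] - payoff[(s_row, s_col)][1]
            d_P = P[(s_row, alt)] - P[(s_row, s_col)]
            assert d_u == d_P
```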
How can I solve thins problem by applying the Radon-Nikodym Theorem?
Hint: Suppose that $f\geq 0$. Let $\nu$ be the measure defined on $(X,M)$ by $$ \nu(E):=\int_E f\,d\mu. $$ Now, let $\mu_0=\mu\vert_{M_0}$ and $\nu_0=\nu\vert_{M_0}$ be the restriction of these $M$-measures to $M_0$. These are both measures on $M_0$; and, since $\nu\ll\mu$, we also have $\nu_0\ll\mu_0$.
Maximum and minimum value of an inequality
The inequality $$\sum_{cyc}\frac{a^2 + a b + 2 a + b^2 + 3}{a^2 + a b - 2 a + b^2 + 3}\leq \sum_{cyc}\frac{a + \sqrt{a b} + 2 \sqrt{a} + b + 3}{a + \sqrt{a b}- 2 \sqrt{a} + b + 3}.$$ is wrong. Try $b=c\rightarrow0^+$ and $a=1.1$.
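One can confirm the counterexample numerically; a quick sketch with $a = 1.1$ and $b = c = 10^{-9}$ (standing in for $b = c \to 0^+$):

```python
from math import sqrt

def lhs_term(x, y):
    return (x*x + x*y + 2*x + y*y + 3) / (x*x + x*y - 2*x + y*y + 3)

def rhs_term(x, y):
    return ((x + sqrt(x*y) + 2*sqrt(x) + y + 3)
            / (x + sqrt(x*y) - 2*sqrt(x) + y + 3))

a, b, c = 1.1, 1e-9, 1e-9
lhs = lhs_term(a, b) + lhs_term(b, c) + lhs_term(c, a)
rhs = rhs_term(a, b) + rhs_term(b, c) + rhs_term(c, a)

# The claimed inequality lhs <= rhs fails here: lhs ≈ 5.19, rhs ≈ 5.10.
assert lhs > rhs
```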
How to determine when to split integral when calculating area between two curves? Calculus 1
To expand on Fakemistake's comment: If $\color{red}{f(x) \ge g(x)}$ for all $x \in [a,b]$, then the area between the two curves is given by $$A = \int_a^bf(x) - g(x)\,dx$$ But your two curves do not meet that if clause. Part of the problem is linguistic. The wording here is not precise. What exactly does "area between two curves" mean in this case? One could take it to mean the area measured horizontally between the curves, or the area measured vertically. Because we are expressing these curves as functions, they probably mean the vertical version. But the area formula above holds only when one curve is always above the other. In this case, the curve on top changes at $0$. So to use that area formula, you have to break the area up into the two regions where one curve is always on top, and calculate those areas separately: $$A = A_\text{left} + A_\text{right}\\A_\text{left} = \int_{-3}^0 \sqrt{3-x} - \sqrt{x + 3}\,dx\\A_\text{right}= \int_0^3 \sqrt{x + 3} - \sqrt{3-x}\,dx$$ Alternatively, you can express the area vertically between two curves without regard to which curve is on top by $$A = \int_a^b |f(x) - g(x)|\, dx$$ But this differs from the above only superficially. You'll still end up calculating the integral of the absolute value in the same fashion.
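Presumably the curves are $f(x)=\sqrt{x+3}$ and $g(x)=\sqrt{3-x}$ on $[-3,3]$ (inferred from the integrals above); a quick numerical sketch checking that the split integrals agree with the absolute-value form and with the exact value $8(\sqrt 6 - \sqrt 3)$:

```python
from math import sqrt

def f(x): return sqrt(x + 3)
def g(x): return sqrt(3 - x)

def midpoint_integral(h, a, b, n=100_000):
    # Midpoint rule; midpoints stay strictly inside the domain.
    dx = (b - a) / n
    return sum(h(a + (i + 0.5) * dx) for i in range(n)) * dx

a_left = midpoint_integral(lambda x: g(x) - f(x), -3, 0)
a_right = midpoint_integral(lambda x: f(x) - g(x), 0, 3)
a_abs = midpoint_integral(lambda x: abs(f(x) - g(x)), -3, 3)

exact = 8 * (sqrt(6) - sqrt(3))   # antiderivatives evaluated by hand
assert abs(a_left + a_right - exact) < 1e-3
assert abs(a_abs - exact) < 1e-3
```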
Does it make sense to define a "metric topological space" $(M, d, \tau)$
Both sequential and covering compactness are purely topological notions. Moreover, if the metric $d$ induces the topology $\tau$ then there is no confusion.
Confusion in notation, Action of $Gal(\bar{K}/K)$
$P^{\sigma}$ is simply $\sigma$ acting on $P$. $P = P^{\sigma}$ means that $P$ is invariant under the action of the Galois group.
Solution to a certain moment problem
If you set $f(x)=1/x^6$ for $x\ge m>0$ then you have taken care of the tail. The remaining conditions can be met by a polynomial of degree 4 on $-m\le x\le m$ with $f(x):=0$ for $x<-m$. However, the condition $f(x)\ge 0$ discards many solutions. In general, $m$ will depend on $\delta$. Another way of obtaining a solution is to define $f$ constant on certain intervals. For example, if $\delta=3$ you can set $m=1$ and $$ f(x):=\begin{cases}0&\text{if }\qquad\quad\ x<-3\\ \frac{2}{75}&\text{if }\ -3\le x<-2\\ \frac{3}{25}&\text{if }\ -2\le x<-1\\ \frac{33}{100}&\text{if }\ -1\le x<0\\ \frac{97}{300}&\text{if }\ \quad\ 0\le x<1\\ \frac{1}{x^6}&\text{if }\ \ \quad 1\le x \end{cases} $$ Moreover, you can construct solutions that are $C^{\infty}$ by approximating the locally constant solutions with such functions. ${\bf{EDIT:}}$ Playing around with the limits and locally constant functions, one can set for example $$ f(x):=\begin{cases}0&\text{if }\qquad\quad\ x<-2\delta\\ \frac{k}{6\delta^6}&\text{if }\ -2\delta\le x<-\delta\\ a&\text{if }\ 0\le |x|<1\\ b&\text{if }\ \quad\ 1\le |x|<(\delta+1)/2\\ c&\text{if }\ \quad\ (\delta+1)/2\le |x|<\delta\\ \frac{k}{x^6}&\text{if }\ \ \quad \delta\le x \end{cases} $$ Then $a,b,c$ can be determined by the conditions 2., 4. and 5. If one sets $k:=\delta^2/800$, then $a,b,c$ are positive for $\delta\in[1.0026,6.9]$.
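The numbered conditions from the original question are not restated in this answer, but the $\delta = 3$ example is consistent with $f$ being a probability density with mean $0$ and second moment $1$ (plus the $1/x^6$ tail). Assuming those are the moment conditions, they can be checked exactly with rational arithmetic:

```python
from fractions import Fraction as F

# Constant pieces of the delta = 3 example: (left endpoint, value) on unit intervals.
pieces = [(-3, F(2, 75)), (-2, F(3, 25)), (-1, F(33, 100)), (0, F(97, 300))]

def moment(k):
    # Each constant piece contributes v * ((a+1)^(k+1) - a^(k+1)) / (k+1),
    # and the tail contributes the exact integral of x^(k-6) over [1, inf).
    total = sum(v * F((a + 1)**(k + 1) - a**(k + 1), k + 1) for a, v in pieces)
    tail = F(1, 5 - k)      # valid for k = 0, 1, 2
    return total + tail

assert moment(0) == 1   # total mass
assert moment(1) == 0   # mean
assert moment(2) == 1   # second moment
```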
Calculating path of gradual turn given two wheel speeds
The path of travel will be a circle with center to the right of the moving robot. You can measure radii from that center to each wheel, thus obtaining circles with two different circumferences to be traversed in equal time. If the speed of the outer wheel is twice the speed of the inner wheel, then the circumference of the outer circle must be twice the circumference of the inner circle. Therefore also, the outer radius is twice the inner radius. Since the distance between the wheels is 2cm, the center of the robot's circle should be another 2cm to its right. Since the outer wheel has diameter 1cm, it rolls a distance of $\pi$cm with each revolution. We then calculate: $\pi$ cm/rev * 2 rev/sec * 5 sec = 10$\pi$ cm. Since the outer wheel is traversing a circle with radius 4cm, thus with circumference 8$\pi$cm, the robot ought to complete $1.25$ circles in $5$ seconds.
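The arithmetic above can be laid out as a short computation (a sketch of the same steps; the variable names are mine):

```python
from math import pi

wheel_gap = 2.0          # cm between the wheels
wheel_diameter = 1.0     # cm
speed_ratio = 2.0        # outer wheel turns twice as fast as the inner one

# Equal traversal times force r_outer / r_inner = speed_ratio,
# while r_outer - r_inner = wheel_gap.
r_inner = wheel_gap / (speed_ratio - 1)      # 2 cm
r_outer = r_inner + wheel_gap                # 4 cm

# Outer wheel: pi cm per revolution, 2 rev/sec, for 5 seconds.
distance = pi * wheel_diameter * 2 * 5       # 10*pi cm rolled
circles = distance / (2 * pi * r_outer)      # circumference is 8*pi cm

assert r_outer == 4.0
assert abs(circles - 1.25) < 1e-12
```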
How to calculate the length of arcs in the given periodic function?
If we have a curve, $y=f(x)$, then the length of the curve between $x=a$ and $x=b$ is equal $$\int_a^b\sqrt{\left(\frac{dy}{dx}\right)^2+1}~dx$$ Does that help?
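The function from the question isn't reproduced here, so as an illustration take $y = \sin x$ over one period; the arc-length integral can be evaluated numerically and cross-checked against a fine polygonal approximation of the curve:

```python
from math import sin, cos, sqrt, pi

a, b, n = 0.0, 2 * pi, 100_000

# Arc length via the formula: integrate sqrt((dy/dx)^2 + 1) with the midpoint rule.
dx = (b - a) / n
integral = sum(sqrt(cos(a + (i + 0.5) * dx) ** 2 + 1) for i in range(n)) * dx

# Cross-check: total length of a fine polyline through points on the curve.
pts = [(a + i * dx, sin(a + i * dx)) for i in range(n + 1)]
polyline = sum(sqrt((x1 - x0) ** 2 + (y1 - y0) ** 2)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))

assert abs(integral - polyline) < 1e-6
```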
R-REF and final row of 0's
This only occurs in some cases. Consider the matrix $A=\begin{bmatrix}2 & 3 \\ 2 & 3\\ 0 & 0\end{bmatrix}$ Consider the linear transformation $L_A:\mathbb{R}^2\to\mathbb{R}^3$ given by $L_A(\vec{v})=A\vec{v}$. If $\vec{v}=\begin{bmatrix}x \\ y\end{bmatrix}$, then $$A\vec{v}=\begin{bmatrix}2x+3y\\2x+3y\\ 0 \end{bmatrix}$$ Set $\vec{b}=\begin{bmatrix}5\\6\\0\end{bmatrix}$. If we want to solve $A\vec{v}=\vec{b}$, then upon row reducing the augmented matrix $$\left[A\,\big|\,\vec{b}\right]= \left[\begin{array}{cc|c} 2 & 3 & 5\\ 2 & 3 & 6\\ 0 & 0 & 0 \end{array}\right] \leadsto \left[\begin{array}{cc|c} 1 & \frac{3}{2} & \frac{5}{2}\\ 0 & 0 & 1\\ 0 & 0 & 0 \end{array}\right]$$ we would require $0x+0y=1$, so this system has no solution.
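The inconsistency can be exhibited mechanically: subtracting row 1 from row 2 of the augmented matrix leaves the row $[\,0\ \ 0 \mid 1\,]$. A small sketch with exact arithmetic:

```python
from fractions import Fraction as F

# Augmented matrix [A | b] for 2x + 3y = 5, 2x + 3y = 6, 0 = 0.
M = [[F(2), F(3), F(5)],
     [F(2), F(3), F(6)],
     [F(0), F(0), F(0)]]

# Eliminate: R2 <- R2 - R1 (using the original R1), then scale R1 by 1/2.
M[1] = [u - v for u, v in zip(M[1], M[0])]
M[0] = [u / 2 for u in M[0]]

assert M[0] == [F(1), F(3, 2), F(5, 2)]
assert M[1] == [F(0), F(0), F(1)]   # the row 0x + 0y = 1: no solution
```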
How to solve this system of non linear trigonometric equations.
Hint: Use $(3)$ to simplify $(4)$ and $(1)$ to simplify $(2)$. Square and add to eliminate $\theta_2$. Can you proceed?
inversion of the Euler totient function
Warning: you are going to have to do some work to get your hands around this answer, but I provided enough details for you to work through it, and it answers your question. Goal: Given an integer $n$, find the smallest integer $x$ such that $\varphi(x) = n$. Approach One can calculate all of the possible integers with a totient of $n$ using "Inverse Totient Trees." For instance, take the integers with a totient of $24$. Then, $\varphi(N) = 24$ $\varphi(24) = 8$ $\varphi(8) = 4$ $\varphi(4) = 2$ $\varphi(2) = 1$ There are $5$ "links" (designate this as $L$) in the totient chain, so to speak, with $4$ intervals. In general, the greatest integer that can have a totient of $n$ is $2\cdot 3^{L-1}$, which means that $2\cdot 3^{5-1} = 162$ is the upper bound of an integer with a totient of $24$. In fact, via a simple proof by exhaustion, one can easily check a table and see that the smallest integer where $\varphi(x) = 24$ is $x = 35$ (see the list below). $$\varphi(x)= n = 24 \rightarrow x = 35, 39, 45, 52, 56, 70, 72, 78, 84, 90$$ Here are a couple of related number sequences which include references for you to investigate this approach. A032447 Inverse function of phi( ) A058811 Number of terms on the n-th level of the Inverse-Totient-Tree (ITT) So, the next obvious question: is there a program, related or tangentially related, that can generate all of those values in the range specified using some approach? Yes, see Solving $\varphi^{-1}(x) = n$, where $\varphi(x) = n$ is Euler's totient function - testing Carmichael's conjecture. Please see the references there for further details on this program - including the C source code. Let's do the example above and then some examples within your range (note that valid $n$ are never odd, because $\varphi(x)$ is even for all $x \ge 3$). $n = 24$: Enter $n = 24, e = 0, f = 0$ and look at the resulting $10$ bolded numbers; compare that list to the one above.
If you sort that list from low to high, it is all of the $x$'s that produce the desired $n$. Of course, you only want the minimal one (the last bolded number in the display). So the minimal $x$ with $\varphi(x) = 24$ is $x = 35$. $n = 10^{5}$: Enter $n = 100000, e = 0, f = 0$: see the metrics for how many numbers satisfy this, but your desired result is $x = 100651$. $n = 10^5 + 1$: Enter $n = 100001, e = 0, f = 0$: Odd numbers are not permissible, by the remark above. $n = 10^{5} + 2$: Enter $n = 100002, e = 0, f = 0$: see the metrics for how many numbers satisfy this, but your desired result is $x = 100003$. $n = 5\cdot 10^{5}$: Enter $n = 500000, e = 0, f = 0$: see the metrics for how many numbers satisfy this, but your desired result is $x = 640625$. ... $n = 10^{8}$: Enter $n = 100000000, e = 0, f = 0$: see the metrics for how many numbers satisfy this, but your desired result is $x = 100064101$. Asides This is a very interesting problem and has been asked before, so I am summarizing some of the other discussions and references I found in case others want to consider more approaches. There are several papers on the topic of finding the inverse of the Euler totient function: Euler's Totient Function and Its Inverse, by Hansraj Gupta The number of solutions of $\phi(x) = m$, by Kevin Ford On the image of Euler's totient function, R. Coleman Complexity of Inverting the Euler Function, by Scott Contini, Ernie Croot, Igor Shparlinski There have been several questions along these lines, for example, the-inverse-of-the-euler-totient-function, inverting-the-totient-function and finding-the-maximum-number-with-a-certain-eulers-totient-value There are some other code examples that use different methods for you to consider, see, for example: Inversion of Euler Totient Function by Max Alekseyev, and you can experiment with this since PARI/GP is free. Play around with the function and see if you can modify your approach with this approach.
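Meanwhile, the $n = 24$ example can be confirmed by brute force (not the efficient algorithms referenced above, just a sketch to verify the list and the minimum):

```python
def phi(x):
    # Euler's totient via trial-division factorization.
    result, d = x, 2
    while d * d <= x:
        if x % d == 0:
            while x % d == 0:
                x //= d
            result -= result // d
        d += 1
    if x > 1:
        result -= result // x
    return result

# All preimages of 24 lie at or below the bound 2 * 3^(L-1) = 162 discussed above.
preimages = [x for x in range(1, 163) if phi(x) == 24]
assert preimages == [35, 39, 45, 52, 56, 70, 72, 78, 84, 90]
assert min(preimages) == 35
```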
Discussion and implementation of an efficient algorithm for finding all the solutions to the equation EulerPhi[n] = m, by Maxim Rytin, is a nice article from Wolfram that gives an efficient algorithm for computing the inverse of the Euler totient function. Download the invphi.nb file at the bottom and get MathReader. As an alternative, you can see OEIS A006511. MAGMA - see FactoredEulerPhiInverse(n) Regards
Partial sum square root of reciprocal of primes
If $\pi(x)$ is the number of primes not greater than $x$, then $\pi(x)$ is continuous from the right and the Riemann-Stieltjes integral over $[2, 2 + \epsilon]$ will tend to zero. The first equation should be $$\sum_{p \leq x} \frac 1 {\sqrt p} = \frac 1 {\sqrt 2} + \int_2^x \frac {d \pi(t)} {\sqrt t} = \frac {\pi(x)} {\sqrt x} + \frac 1 2 \int_2^x \frac {\pi(t)} {t^{3/2}} dt.$$ To prove the asymptotic equivalence of the integrals, show that l'Hopital's rule applies. Then $$\lim_{x \to \infty} \frac {\int_2^x t^{-3/2} \, \pi(t) \, dt} {\int_2^x t^{-1/2} \ln^{-1} t \, dt} = \lim_{x \to \infty} \frac {x^{-3/2} \, \pi(x)} {x^{-1/2} \ln^{-1} x} = 1.$$ To estimate the integral in the denominator, apply integration by parts and l'Hopital's rule again: $$\frac 1 2 \int_2^x \frac {dt} {\sqrt t \ln t} = \frac {\sqrt t} {\ln t} \bigg\rvert_{t = 2}^x + \int_2^x \frac {dt} {\sqrt t \ln^2 t} = \frac {\sqrt x} {\ln x} + o {\left( \frac {\sqrt x} {\ln x} \right)}.$$ Therefore your final result is correct.
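The corrected first equation can be verified numerically, since $\pi(t)$ is a step function and $\int_2^x \pi(t)\,t^{-3/2}\,dt$ can be evaluated exactly interval by interval using $\int t^{-3/2}\,dt = -2t^{-1/2}$; a sketch with a small sieve:

```python
from math import sqrt

X = 100_000

# Sieve of Eratosthenes up to X.
sieve = bytearray([1]) * (X + 1)
sieve[0] = sieve[1] = 0
for i in range(2, int(X ** 0.5) + 1):
    if sieve[i]:
        sieve[i * i :: i] = bytearray(len(range(i * i, X + 1, i)))
primes = [i for i in range(2, X + 1) if sieve[i]]

lhs = sum(1 / sqrt(p) for p in primes)

# pi(t) = k is constant on [p_k, p_{k+1}) (and on [p_last, X]), so the
# integral reduces to a finite sum over these intervals.
integral = 0.0
for k, p in enumerate(primes, start=1):
    right = primes[k] if k < len(primes) else X
    integral += k * (2 / sqrt(p) - 2 / sqrt(right))

rhs = len(primes) / sqrt(X) + 0.5 * integral
assert abs(lhs - rhs) < 1e-6
```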
How to plot a plane with the given normal vector (numerically)?
The following might be a useful approach. Starting with the general expression for a plane in 3D, solve for $z$. $$ n_{1}\left(x-x_{0}\right)+n_{2}\left(y-y_{0}\right)+n_{3}\left(z-z_{0}\right)=0 $$ $$ n_{1}\left(x-x_{0}\right)+n_{2}\left(y-y_{0}\right)=-n_{3}\left(z-z_{0}\right) $$ $$ -\frac{n_{1}}{n_{3}}\left(x-x_{0}\right)-\frac{n_{2}}{n_{3}}\left(y-y_{0}\right)+z_{0}=z $$ Once you have this, you can generate a grid of $x,y$ values, then compute the corresponding $z=f\left(x,y\right)$ and plot. Below is an example code which does this, along with a figure from the code. I hope this helps.

    import numpy as np
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import axes3d

    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')

    # Plane parameters
    n_1 = 2
    n_2 = 0
    n_3 = 1
    x_0 = 0
    y_0 = 0
    z_0 = 10

    # Create a grid of x and y
    x = np.arange(-5, 5, 0.1)
    y = np.arange(-5, 5, 0.1)
    xg, yg = np.meshgrid(x, y)

    # Compute z = f(x,y)
    z = -n_1/n_3*(xg - x_0) - n_2/n_3*(yg - y_0) + z_0

    ax.plot_wireframe(xg, yg, z, rstride=10, cstride=10)
    plt.xlabel('x')
    plt.ylabel('y')
    plt.title('Plane: z=f(x,y)')
    plt.show()
Positive integer solutions to $\frac{x}{y+z}+\frac{y}{x+z}+\frac{z}{x+y}=4$
Yes, there are BIG positive solutions. For example:

    x := 16666476865438449865846131095313531540647604679654766832109616387367203990642764342248100534807579493874453954854925352739900051220936419971671875594417036870073291371;
    y := 184386514670723295219914666691038096275031765336404340516686430257803895506237580602582859039981257570380161221662398153794290821569045182385603418867509209632768359835;
    z := 32343421153825592353880655285224263330451946573450847101645239147091638517651250940206853612606768544181415355352136077327300271806129063833025389772729796460799697289;

See Oleg567's answer here: Find integer in the form: $\frac{a}{b+c} + \frac{b}{c+a} + \frac{c}{a+b}$
Question of convexity in $\mathbb{C}^2$
Let $\zeta$ be any unit vector in $\mathbb{C}^2$. Then the set $S$ contains $r\sqrt{1-r^2}e^{i\theta}\zeta$ for every $r\in [0,1]$ and all $\theta\in\mathbb{R}$. Indeed, we can write $r\zeta = (\bar a, \bar c)$ and get $|b| = \sqrt{1-|a|^2-|c|^2} = \sqrt{1-r^2}$. So, $$ S = \{r\sqrt{1-r^2}e^{i\theta}\zeta : 0\le r\le 1, \theta \in\mathbb{R}, \zeta \text{ a unit vector in } \mathbb{C}^2\} $$ Since $r\sqrt{1-r^2}$ takes on all values in $[0,1/2]$ as $r$ runs through $[0, 1]$, the description of $S$ simplifies further to $$ S = \{w\zeta : w\in \mathbb{C}, |w|\le 1/2, \zeta \text{ a unit vector in } \mathbb{C}^2\} $$ But this is precisely the set of all vectors $\xi\in \mathbb{C}^2$ such that $|\xi|\le 1/2$, namely the closed ball of radius $1/2$. It is convex.