All faces of the n-dimensional hypercube
You've said it all. If you want a set-builder description of each individual face, you can consider the set $S$ of all functions $$f:\quad[n]\to\bigl\{\{-1\}, \>[{-1},1]\>,\>\{1\}\bigr\}\ ,$$ and for any $f\in S$ define the face $F_f$ of $C_n$ by $$F_f:=\bigl\{(x_1,x_2,\ldots, x_n)\in{\mathbb R}^n\>\bigm|\>x_i\in f(i) \ \ (1\leq i\leq n)\bigr\}\ .$$
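For small $n$ one can enumerate these faces directly; here is a quick sketch (the string encodings of the three options are just labels I chose for illustration):

```python
from itertools import product

# Each face F_f corresponds to a function f: [n] -> {{-1}, [-1,1], {1}},
# i.e., to one of three choices per coordinate, so there are 3^n faces
# (counting the cube itself, where f(i) = [-1,1] for every i).
options = ["{-1}", "[-1,1]", "{1}"]

def faces(n):
    return list(product(options, repeat=n))

print([len(faces(n)) for n in range(1, 5)])  # [3, 9, 27, 81]
```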
Proof of multiple angle trigonometric ratios
Add $1$ to both sides of the tangent condition. We get $\tan^2A+1=2+2\tan^2B$. Simplifying with the identity $\tan^2\theta+1=\sec^2\theta$, we have: $$\sec^2A=2\sec^2B$$ Taking the reciprocals of both sides and multiplying by $4$, we get: $$4\cos^2A=2\cos^2B$$ Remembering the identity $2\cos^2\theta-1=\cos2\theta$, we can subtract $1$ from each side to get: $$2(2\cos^2A-1)+1=2\cos^2B-1$$ which simplifies to: $$2\cos2A+1=\cos2B$$
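A quick numerical sanity check of the final identity (a sketch; the choice $B=0.7$ is arbitrary):

```python
import math

# Choose any B, then pick A satisfying the hypothesis tan^2 A + 1 = 2 + 2 tan^2 B
B = 0.7
A = math.atan(math.sqrt(1 + 2 * math.tan(B) ** 2))

print(2 * math.cos(2 * A) + 1)  # ≈ 0.16997
print(math.cos(2 * B))          # ≈ 0.16997
```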
A small query about the integration by substitution formula: $\int f(u) \, du= \int f(g(x))g'(x) \, dx, \ u=g(x) \, .$
To be precise, and under some conditions, you can write $$\int_{g(a)}^{g(x)}f(u)\,du=\int_a^xf(g(u))g'(u)\,du=\int_a^xf(g(t))g'(t)\,dt,$$ the substitution made being $$u=g(t).$$
Closed form for a sum of values of a quadratic?
Your question amounts to evaluating $$\sum_{x=1}^n x^2\,\,\,\text{and}\;\;\sum_{x=1}^n x$$ (upon expanding your original sum). Specifically, $$\sum_{x=1}^nx=\frac{n(n+1)}{2}$$ $$\sum_{x=1}^n x^2=\frac{n(n+1)(2n+1)}{6}$$ which can be shown by induction.
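Both closed forms are easy to confirm numerically; a minimal check:

```python
# Verify the two closed forms against brute-force sums
for n in range(1, 200):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
    assert sum(x * x for x in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
print("closed forms verified for n = 1..199")
```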
How to prove two of the following central extensions are isomorphic?
Here is a proof that $b=1$ in $G_2$, so $|G_2|=6$. $$rsrs=b \Rightarrow r^{-2}srs=b \Rightarrow r^{-1}srsr^{-1}=rbr^{-1}=b \Rightarrow r^{-1}sr=brs = (rs)^3.$$ So then $s^2=1 \Rightarrow (rs)^6=1$, but $(rs)^4=b^2=1$, so $(rs)^2=b=1$.
Trace of the transformation
For $(i, j) \in \{1, \dots, n\}^2 =: [n]^2$, let $E_{i,j}$ be the $n\times n$ matrix with a $1$ in the $i,j$ entry and zeros elsewhere. The set $\{ E_{i,j} : (i,j) \in [n]^2\}$ forms a basis of $M(n, \mathbb R)$. Note $M(n, \mathbb R) \simeq \mathbb R^{n^2}$, and hence $E_{i,j}$ may be identified with the standard orthonormal basis of the inner-product space $\mathbb R^{n^2}$ (once we enumerate $[n]^2$). This turns $M(n, \mathbb R)$ into an inner-product space, and hence $\operatorname{tr}(T) = \sum_{i, j} \langle T E_{i,j}, E_{i,j} \rangle$, where the inner-product on $M(n, \mathbb R)$ is computed by sending matrices in $M(n, \mathbb R)$ to vectors in $\mathbb R^{n^2}$ and computing the standard inner-product there. Let $e_1, \dots, e_n$ be the standard basis of $\mathbb R^n$. Since $A$ is nilpotent, it's traceless, i.e., $\sum_{i=1}^n \langle A e_i, e_i \rangle = 0$, from which we deduce $ \sum_{i,j} \langle A E_{i,j}, E_{i,j} \rangle = 0. $ It's not hard to verify that $\sum_{i,j} \langle I E_{i,j}, E_{i,j} \rangle = n^2$. Using the above formula for $\operatorname{tr}(T)$, we find $$ \operatorname{tr}(T) = \sum_{i,j} \langle (A - I) E_{i,j}, E_{i,j} \rangle = \sum_{i,j} \langle A E_{i,j}, E_{i,j} \rangle - \sum_{i,j} \langle I E_{i,j}, E_{i,j} \rangle = -n^2. $$ An alternative way to see this: if the isomorphism $M(n, \mathbb R) \to \mathbb R^{n^2}$ is given by vertically stacking column $1$, $\dots$, column $n$ of a matrix, then the matrix representation of "$T = A-I$" as an $M(n, \mathbb R)$ operator is $A \otimes I - I \otimes I$ (where $\otimes$ is the tensor or Kronecker product), and using the property $\operatorname{tr}(U \otimes V) = \operatorname{tr}(U) \operatorname{tr}(V)$, we have $ \operatorname{tr}(T) = \operatorname{tr}(A \otimes I - I \otimes I) = 0 \cdot n - n \cdot n = -n^2. $
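One can confirm the Kronecker-product computation numerically; a sketch with a random strictly upper-triangular (hence nilpotent and traceless) $A$:

```python
import numpy as np

n = 4
A = np.triu(np.random.rand(n, n), k=1)  # strictly upper triangular => nilpotent, traceless
I = np.eye(n)

# T = A - I as an operator on M(n, R) ~ R^{n^2}; either Kronecker ordering
# has trace tr(A)*n - n*n = -n^2
T = np.kron(A, I) - np.kron(I, I)
print(np.trace(T), -n**2)  # -16.0 -16
```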
Integral solutions $(a,b,c)$ for $a^\pi + b^\pi = c^\pi$
The Wikipedia article on Fermat's last theorem has a full section about it, with plenty of references. Here are a few results (see the article for precise references): The equation $a^{1/m} + b^{1/m} = c^{1/m}$ has solutions $a = rs^m$, $b = rt^m$ and $c = r(s+t)^m$ with positive integers $r,s,t>0$ and $s,t$ coprime. When $n > 2$, the equation $a^{n/m} + b^{n/m} = c^{n/m}$ has integer solutions iff $6$ divides $m$. The equation $1/a + 1/b = 1/c$ has solutions $a = mn + m^2$, $b = mn + n^2$, $c = mn$ with $m,n$ positive and coprime integers. For $n = -2$, there are again an infinite number of solutions. For $n < -2$ an integer, there can be no solution, because that would imply that there are solutions for $|n|$. I don't know if anything is known for irrational exponents.
Antiderivative Theory Problem
The first question is much easier than you're making it. If $f$ is differentiable, it has to be continuous. If $f$ is continuous, then $f(0) = \lim_{x \to 0}f(x)$. For the second question: note that $$ \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} = \lim_{h \to 0} \frac{1}{h}\left(f(x+h) - f(x)\right) = \lim_{h \to 0} \frac{1}{h}\left(\frac{f(x)+f(h)}{1-f(x)f(h)} - f(x)\right) $$
Question on a proof that there are infinitely many primes
I disagree with the use of "division" in any proof in elementary number theory. The concept of division is usually only formally introduced much later in a course than where you appear to be at the moment. So, we get to letting $N=\prod\limits_{i=1}^mp_i + 1$ and we determined that $N>p_i$ for all $i$, and so $N$ is not one of the elements in our list of primes. Ergo, $N$ must be composite (by a theorem proved earlier, every natural number is either $0$, $1$, prime, or composite). That is, there are natural numbers $j$ and $a$ such that $N=a\cdot p_j$. That is, $a\cdot p_j = p_1\cdot p_2\cdots p_j\cdots p_m + 1$. Now, by subtracting and factoring, we have $1 = p_j\cdot(a - p_1\cdot p_2\cdots p_{j-1}\cdot p_{j+1}\cdots p_m)$. Note, however, that $(a - p_1\cdot p_2\cdots p_{j-1}\cdot p_{j+1}\cdots p_m)$ is an integer and so too is $p_j$. This would then imply that $p_j$ is a divisor of $1$, but $1$ has no positive divisors except itself. This is our contradiction. Note, the above argument completely bypassed the need to refer to division, though it does make use of divisibility (something which is perfectly acceptable to refer to and use at this level of proof).
Graph with two strongly connected components
I take it that the basis of $G$ is the underlying undirected graph, that is, $G$ with the orientation of the edges removed. Say the two strongly connected components are $S$ and $T$. Since the basis is connected there must be an edge joining a vertex in $S$ and a vertex in $T$; WLOG we may say $e=st$, where $s\in S,t\in T$. If $e$ were the only edge between $S$ and $T$ it would be a bridge in the basis, so there is another such edge $f$. If we had $f=\tau\sigma$ with $\tau\in T,\sigma\in S$, then $G$ would be strongly connected, a contradiction; so $f$ also goes from $S$ to $T$. Changing the direction of either $e$ or $f$ would then make $G$ strongly connected.
Why is the digit sum of a non-negative number $n$, taken repeatedly until only one digit remains, the remainder after division by $9$?
(Hint): Use modular arithmetic. Note that $10^n \equiv 1\ (\textrm{mod}\ 9)$ for any positive integer $n$, and every number can be uniquely represented as $10^na_1 + 10^{n - 1}a_2 + ... + a_n$ where $a_1,a_2,...,a_n$ are digits from the set $\{0,1,2,3,4,5,6,7,8,9\}$. You have actually done the same thing in the case of $689$. Since you got $$(9 + 8 + 6) + 9 \cdot ((8\cdot 1) + (6\cdot 11))$$ we get $689 \equiv (9 + 8 + 6)\ (\textrm{mod}\ 9)$ and $(9 + 8 + 6) \equiv 5\ (\textrm{mod}\ 9)$, which is the same as taking the remainder after dividing the sum of digits by $9$. Edit: As mentioned by @Toby Mak, the single digit reached by repeatedly taking digit sums is called the number's digital root.
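In code (a small illustration; `digit_sum` and `digital_root` are hypothetical helper names):

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

def digital_root(n):
    # iterate the digit sum until a single digit remains
    while n >= 10:
        n = digit_sum(n)
    return n

n = 689
print(n % 9, digit_sum(n) % 9, digital_root(n))  # 5 5 5
print(18 % 9, digital_root(18))  # 0 9  (the one mismatch: multiples of 9)
```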
Showing $\lor$ in terms of $\to$ and $\lnot$
$p\lor q=\lnot(\lnot p\land\lnot q)=\dots$
Conditions that allow Integration by Substitution
Since the chain rule is used here, the function must be continuously differentiable. And when you substitute, don't forget to change the bounds (limits of integration). As a formula, $$\int_a^b f(\varphi(t)) \cdot \varphi'(t) \, \mathrm{d}t = \int_{\varphi(a)}^{\varphi(b)} f(x)\, \mathrm{d}x $$ In higher dimensions you have to use continuously differentiable bijections.
How to show a rational polynomial is irreducible in $\mathbb{Q}[a,b,c]$?
Suppose to the contrary that $p(a,b,c)$ is reducible over $\mathbb{Q}$. You can write $p(a,b,c)$ as $$a^3+b^3+c^3-3(b+c)a^2-3(c+a)b^2-3(a+b)c^2-5abc\,.$$ It suffices to regard $p(a,b,c)$ as a polynomial over $\mathbb{F}_3$ (why?). Over $\mathbb{F}_3$, $$p(a,b,c)=a^3+b^3+c^3+abc=a^3+(bc)a+(b+c)^3\,.$$ Since $p(a,b,c)$ is homogeneous of degree $3$ and reducible, it has a linear factor $a+ub+vc$ for some $u,v\in\mathbb{F}_3$. Clearly, we must have $ub+vc \mid (b+c)^3$, whence $u=v=1$ or $u=v=-1$. However, both choices are impossible via direct computation.
Question on closure in the product and box topologies
The box topology is finer (has more open sets) than the product topology. So if $A$ were closed in the product topology its complement would be open in that topology and it would remain open in the box topology, and so $A$ would still be closed in the box topology. So the meta-hint gives away that $A$ must be closed in the box topology, and dense in the product topology... Suppose we have a point $(x_n)$ not in $A$. This means that there is no tail of the coordinates that is $0$ and this means, if you think about it, exactly that there are infinitely many coordinates $k$ such that $x_k \neq 0$. We can find open sets around those coordinates that also miss $0$ and use those to build a box-open basic set that has the property that all its members are also non-zero at these same coordinates. When we have a basic product open set, we only have "control" over finitely many coordinates, and all values beyond those finitely many can be anything (all of $\mathbb{R}$ is allowed). In particular, they could all be $0$...
Does solving the complexity class ALL collapse all Turing degrees?
Yes, your understanding is correct. Note that the "programs" allowed in this paper cannot be described using a finite amount of information, since they include an infinite sequence of "quantum advice states" (one for each possible length of input). Note that if you allow such extra information with no bound on its length, it is trivial that you can compute any language: just let your advice state for inputs of length $n$ encode what all of the outputs should be on inputs of length $n$. In this case the result is more surprising since the size of the advice state is limited to being polynomial in $n$, but it is still not too shocking.
Calculate limit of sequence of Riemann Sums
Regarding "there is no something over $n$ which could define $\Delta x=(b-a)/n$": the factor $\frac1n$ appears once $n$ is pulled out of the square root. Hint. One may observe that $$ \frac{1}{\sqrt{n^2+i^2}}=\frac{1}{n}\cdot\frac{1}{\sqrt{1+\frac{i^2}{n^2}}} $$ leading, as $n \to \infty$, to $$ \int_0^1\frac{dx}{\sqrt{1+x^2}} $$ using a Riemann sum (with $\Delta x=\frac1n$ and $x_i=\frac in$).
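A quick numerical sanity check (a sketch; the truncation at $n=10^5$ is arbitrary): the sums approach $\int_0^1 dx/\sqrt{1+x^2}=\log(1+\sqrt2)$.

```python
import math

def riemann_sum(n):
    return sum(1 / math.sqrt(n**2 + i**2) for i in range(1, n + 1))

# The limit equals arcsinh(1) = log(1 + sqrt(2)) ≈ 0.881374
print(riemann_sum(10**5))
print(math.log(1 + math.sqrt(2)))
```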
Isomorphism between $U$ and $\mathbb R^n$
Hints: Choose a basis $\;\{u_1,...,u_n\}\;$ of $\;U\;$, and then define $$Tu_i:=e_i\in\Bbb R^n\;,$$ where $e_i=(0,\ldots,0,1,0,\ldots,0)^T$ has its single $1$ in the $i$-th entry, and extend the definition of $\;T\;$ by linearity. Check that it is a linear transformation and bijective.
Proving that $Y$ is independent of $X$ if $Y=XZ$ for some random variables $X,Y,Z$
$X$ and $Y$ are independent iff $\mathbb{P}(X=1,Y\leq y) = \mathbb{P}(X=1)\mathbb{P}(Y\leq y)$ for every real $y$. Now, denoting by $\Phi(y) := \mathbb{P}(Z\leq y)$ and using the symmetry of $Z$ (so that $1-\Phi(-y)=\Phi(y)$), $$\mathbb{P}(X=1) = \frac{1}{2}$$ $$\mathbb{P}(Y \leq y) = \mathbb{P}(Z\leq y,X=1) + \mathbb{P}(Z \geq -y,X=-1) = \frac{1}{2}(\Phi(y) + 1-\Phi(-y)) = \Phi(y)$$ $$\mathbb{P}(X=1,Y\leq y) = \mathbb{P}(X=1,Z\leq y) = \frac{1}{2} \Phi(y)$$
Understanding a plot of a complex plane
I think the following is adequate evidence that your plots are actually showing hyperbolas that arise when several cells happen to fall in a straight line. Lacking exact details of your algorithm, I wrote a program to find all the $1\times 1$ open square lattice cells in the plane that overlap a circle of given radius. (This number appears to be asymptotic to $8r$ as $r\to\infty,$ consistent with what you found.) For each cell with corner-coordinates $(i,j),(i,j+1),(i+1,j),(i+1,j+1)$, I then computed the distance between the circle and the point $(i,j).$ As an example with $r=459$, the following plot on the left shows distance vs cell index for the first $2000$ cells (there being exactly $3660$ cells overlapping the circle), the cells being indexed in counter-clockwise sequence around the circle from angle $0$ back to $2\pi:$ The plot on the right is the result of re-ordering the cells in the manner you have done (as you explained in comments), so that the first four cells are the ones at angles $0,\pi/2,\pi,3\pi/2$, the next four are the next ones counter-clockwise after those respective locations, and so on around the circle. This "interleaving" is what causes various hyperbolas to get matched up with inverted hyperbolas, giving the appearance of closed curves. Why hyperbolas? It's a consequence of the alignment of several cells that overlap the circle. For example, letting $d_n$ be the distance between the circle and the corner of the $n$th such cell (in counter-clockwise order), I find $d_n = r - \sqrt{(r-1)^2 + n^2}$, or $(d_n-r)^2 - n^2 = (r-1)^2,$ which is the equation of a hyperbola.
Simulation of bouncing circles
The link given by @Rahul has formulas, but I'd like to add a geometric interpretation (and have some answer recorded here). Suppose the circles/spheres are identical: same mass, same radius $r>0$. Let $(x_1,x_2)$ and $(x_3,x_4)$ be the coordinates of their centers. Then the configuration of both circles is encoded by a point in 4-dimensional space, namely $(x_1,x_2,x_3,x_4)$. This point can move freely (with constant velocity) within the domain $U=\{x: (x_1-x_3)^2+(x_2-x_4)^2\ge 4r^2\}$. This is simply the exterior of tubular neighborhood of a line in $\mathbb R^4$. When the configuration point reaches the boundary of $U$, it bounces back just as an elastic point mass would. Indeed, the speed must be preserved by the conservation of energy, and the tangential component of velocity is preserved by the conservation of momentum. Together these two facts imply that the incident angle is equal to the reflected angle. It is not hard to find the equation of the normal vector to $\partial U$ (hint: gradient) and use it to recalculate the velocity of the configuration point after collision. With $n\gg 2$ circles/spheres we have a game of billiards in a high-dimensional space with a lot of concave boundaries: a good recipe for mixing things up. Hence, the behavior of gas molecules.
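Here is a minimal sketch of that velocity update for two identical circles at the moment of contact (the function name `bounce` and the sample data are made up for illustration). The unit normal is the gradient direction of the constraint, and for equal masses the reflection simply swaps the normal velocity components, which is exactly the elastic bounce of the configuration point off $\partial U$:

```python
import numpy as np

def bounce(p1, p2, v1, v2):
    # Unit normal along the line of centers (the gradient of
    # (x1-x3)^2 + (x2-x4)^2 points in this direction).
    n = (p1 - p2) / np.linalg.norm(p1 - p2)
    u1, u2 = v1 @ n, v2 @ n  # normal components of the velocities
    # Equal masses: swap normal components, keep tangential components.
    return v1 + (u2 - u1) * n, v2 + (u1 - u2) * n

v1, v2 = bounce(np.array([0.0, 0.0]), np.array([2.0, 0.0]),
                np.array([1.0, 1.0]), np.array([-1.0, 0.0]))
print(v1, v2)  # [-1. 1.] [1. 0.]: momentum and kinetic energy are preserved
```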
Transitive Closure and Composite relations in set builder notation
Note that even if you correct $R^t=\{(x,y,z) \mid (y-x>0)\land(z-y>0)\}$ to $\{(x,y):\exists z(y-z>0 \wedge z-x>0)\}$, this only gives you a transitive relation because you already knew it was transitive. If you started instead with $R':=\{(x,y):y=x+1\}$, the analogous construction would only ensure that the resulting relation held for numbers whose difference was one, and those whose difference was two; $1\:R'^t\:5$ wouldn't hold. Instead, you want to know that chains like $x_1 R x_2 \wedge x_2Rx_3 \ldots\wedge x_{n-1}Rx_n$ always give you $x_1Rx_n$. To get at this, think about a set $X$ such that $X$ contains $x_1$ from the above list, and for all $y\in X$ and all $z$, if $yRz$ then $z\in X$. It should be clear that $x_n$ from the above list is also in $X$. In fact for absolutely any set with the property we've described for $X$, $x_n$ will be a member, even if it contains extra junk like elements that $R$ doesn't relate to anything. So you want to translate these two facts into logical notation to get the transitive closure. As for the composite, it really depends on exactly what's being asked. The half-repaired version of the "transitive closure" I gave in the first paragraph is exactly $R^2$. That will tell you how to generalize it to more terms. I don't know if you're supposed to know anything about recursion at this point, but there is also a recursive definition that will give you a way to define it for every $n$ at once, too. But that'd be a mouthful and I'm not sure if that's what's being asked for.
Why does the Binomial Theorem use combinations and not permutations for its coefficients?
The reason combinations come in can be seen using a special example. The same logic applies in the general case but it becomes murkier through the abstraction. Consider $$(a+b)^3$$ If we were to multiply this out, and not group terms according to multiplication rules (for example, let $a^3$ remain as $aaa$ for the sake of our exercise), we see $$(a+b)^3 = aaa + aab + aba + baa + abb + bab + bba + bbb$$ Notice that we can characterize the sum this way: $$(a+b)^3 = (\text{terms with 3 a's}) + (\text{terms with 2 a's}) + (\text{terms with 1 a}) + (\text{terms with no a's})$$ (You can also do the same for $b$; the approach is equivalent.) Well, we see from our weird expansion that we have every possible sequence of length $3$ made up of only $a$'s and $b$'s. We also know some of these terms are going to group together, as, for example, $aba = aab = baa$. So how many summands are actually equal? Well, since they all have the same length, two summands are equal if and only if they have the same number of $a$'s (or $b$'s, same thing). And we also know that every possible sequence of length $3$ in only $a$'s and $b$'s is here. So we can conclude that $$\begin{align} (\text{# of terms with 3 a's}) &= \binom{3}{3} = 1\\ (\text{# of terms with 2 a's}) &= \binom{3}{2} = 3\\ (\text{# of terms with 1 a}) &= \binom{3}{1} = 3\\ (\text{# of terms with no a's}) &= \binom{3}{0} = 1 \end{align}$$ Thus, we conclude: There will be only one $aaa = a^3$ term. There will be $3$ $aba=aab=baa=a^2b$ terms. There will be $3$ $abb = bab = bba = ab^2$ terms. There will be $1$ $bbb=b^3$ term. Thus, $$(a+b)^3 = \sum_{k=0}^3 \binom{3}{k}a^k b^{3-k}$$ and in general, for positive integers $n$, $$(a+b)^n = \sum_{k=0}^n \binom{n}{k}a^k b^{n-k}$$ In short, the reason we use combinations is because the order does not matter: we will get terms like $aab, baa, bab$ which are all equal in the expansion. Since multiplication is a commutative operation over the real numbers, we can say they're equal. Thus, the number of terms of that "type" (characterized by how many $a$'s or $b$'s they have) is given precisely by the number of sequences of length $n$ ($n=3$ in our example), made of only $a$ and $b$, that have exactly $k$ $a$'s (or $b$'s). Of course this all relies on the central premise that multiplication commutes in the reals and thus ensures that the order of the factors does not matter. That suggests that it does not always hold in situations where multiplication does not commute; for example, the multiplication of a type of numbers known as quaternions is not commutative, and thus the binomial theorem does not hold there as it does here (since there $ab$ need not equal $ba$). The nature of this commutativity, or the lack of it, and the consequences of each are better explored in a discussion on abstract algebra, and this tangent is long enough as it is.
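The counting argument can be replayed mechanically; a small sketch:

```python
from itertools import product
from math import comb

n = 3
counts = {k: 0 for k in range(n + 1)}
for word in product("ab", repeat=n):  # the 2^n ungrouped terms of (a+b)^n
    counts[word.count("a")] += 1

print(counts)                                 # {0: 1, 1: 3, 2: 3, 3: 1}
print({k: comb(n, k) for k in range(n + 1)})  # matches the binomial coefficients
```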
What is word reversal $w^R$?
If $w=a_1a_2\cdots a_n$, then $w^R=a_na_{n-1}\cdots a_2a_1$
Why is $\{(0,x,z)\mid x,z\in \mathbb R\}$ a two-dimensional subspace of $\mathbb R^{3}$ over $\mathbb R$, but $\{(0,0,z)\mid z\in \mathbb R\} \cup \{(0,x,0)\mid x\in \mathbb R\}$ is not?
$B$ is defined as the union of two subsets of $\mathbb R^{3}$, not as a sum of subspaces. $(0,0,1)+(0,1,0)=(0,1,1)$ is not in $B$ so it is not a subspace of $\mathbb R^{3}$.
Non-homology-based polynomial-time computable invariants
There are several questions you ask, and I think I can say something about the last question. It is worth thinking about the origins of homology, see for example essays in IM James (editor) "History of Topology". The early writers on what we now call algebraic topology wanted to take "cycles modulo boundaries" but were not too clear about the meanings of those words. It was Poincaré who brought in the idea of "formal sums" of oriented simplices, and so the famous equation $\partial \partial=0$, defined on what we now call a free abelian group. The work of E Noether led to the abelian group theoretic formulation of homology we know today. The topologists of the early 20th century, such as Dehn, were well aware, particularly by the 1920s, that for a connected space $X$ the first homology $H_1 X$ was $\pi_1(X,x)$ made abelian, and that the nonabelian nature of the fundamental group was important in applications in geometry and analysis. So they looked for higher dimensional versions of the fundamental group. In 1932, E. Čech submitted a paper on higher homotopy groups to the ICM at Zurich, but Alexandroff and Hopf persuaded Čech to withdraw his paper on the grounds of their abelian nature (it is not clear if this was known by Čech or was proved by Alexandroff and Hopf). Later, worries about the abelian nature of the homotopy groups would be seen as a quirk of history. However work was done on the generally nonabelian second relative homotopy groups $\pi_2(X,A,a)$ by JHC Whitehead, who introduced in 1946 the notion of crossed module, and in 1949 the notion of free crossed module. This allows one to say that for the standard diagram of a Klein bottle, we can write $\delta \sigma = a+b-a +b$, with values in the free group on generators $a,b$. He pursued these ideas in his 1949 paper "Combinatorial Homotopy II". Now the reason for the abelian nature of the higher homotopy groups can be put as follows: group objects in the category of groups are just abelian groups. There are arguments for phrasing the theory of the fundamental group in terms of groupoids. It then turns out that group objects in the category of groupoids, or groupoid objects in the category of groups, are equivalent to crossed modules, see this 1976 paper. This was part of a search for Higher van Kampen Theorems started about 1965, whose results are described in this 2011 book Nonabelian algebraic topology and this 1992 expository article, which deals also with some algebraic models, cat$^n$-groups, of homotopy $(n+1)$-types, and calculations involving these. Note that the book does not require the development of singular homology, even for a statement and proof of a Relative Hurewicz Theorem! What may be seen as a catch is that the strict higher homotopy groupoids used are defined for filtered spaces and $n$-cubes of spaces, not directly for spaces. (The original question does not say of what we should have invariants!) My own view, influenced by spending 9 years on trying and failing to define a homotopy double groupoid of a space, is that the emphasis on theories for spaces without additional structure is likely to be a mistake: this is kind of confirmed by remarks of Grothendieck in Section 5 of his 1984 "Esquisse d'un Programme". (See my comment below for a link to this work.) Just to develop the last point, the usual method in singular homology is to define singular homology of a space and then after a lot of palaver to get the cellular chain complex of a CW-complex, based on its filtration by skeleta.
Instead, one can define homotopically defined invariants of any filtered space directly; one then needs quite a lot of work to show how to calculate them for a CW-filtration, but in the process one gets more information, since we deal with nonabelian structures in dimension 2 and keep the operations of the fundamental groupoid. As regards computation, this is always a question in dealing with nonabelian methods; but the above cited book has many calculations involving crossed modules, and so second relative homotopy groups, while one spinoff from the work on cat$^2$-groups, a nonabelian tensor product of groups, has had lots of work mainly by group theorists, see this bibliography, with 130 items. What does seem to be lacking in this work is the vision of the workers of the early 20th century, namely the applications of these particular nonabelian methods in geometry and analysis! I have attempted to indicate some work "between homotopy and homology"; it seems to me to deserve this appellation, because it involves homotopically defined functors, and honest (partial) compositions of mappings defined by using gluings of cubes, not the "trick" of using free abelian groups. I am not sure if this answers your question.
Prove a certain holomorphic function does not exist.
Hint: Suppose there were such an $f.$ Then $$\sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}z^n $$ has positive radius of convergence.
Prove that $\sum_{k=0}^{n-1}\frac{n^{k-1}}{k!}(n-k)(n-k+1)=\frac{n^{n-1}}{(n-1)!}+\sum_{k=0}^{n-1}\frac{n^k}{k!}$
We have that $$\begin{align}\sum_{k=0}^{n-1}\left[\frac{n^{k-1}}{k!}(n-k)(n-k+1)-\frac{n^k}{k!}\right]&=\sum_{k=0}^{n-1}\frac{n^{k-1}((n-k)^2+n-k-n)}{k!}\\&=\sum_{k=0}^{n-1}\frac{n^{k-1}(n^2-2kn+k^2-k)}{k!}\\&=\sum_{k=0}^{n-1}\frac{n^{k+1}}{k!}-2\sum_{k=1}^{n-1}\frac{n^k}{(k-1)!}+\sum_{k=2}^{n-1}\frac{n^{k-1}}{(k-2)!}\\&=\sum_{j=1}^{n}\frac{n^{j}}{(j-1)!}-2\sum_{k=1}^{n-1}\frac{n^k}{(k-1)!}+\sum_{l=1}^{n-2}\frac{n^{l}}{(l-1)!}\end{align}$$ which can be written as $$\sum_{j=1}^{n}\frac{n^{j}}{(j-1)!}-2\left(-\frac{n^{n}}{(n-1)!}+\sum_{k=1}^{n}\frac{n^k}{(k-1)!}\right)-\frac{n^n}{(n-1)!}-\frac{n^{n-1}}{(n-2)!}+\sum_{l=1}^{n}\frac{n^{l}}{(l-1)!}$$ or $$\frac{n^{n}}{(n-1)!}-\frac{n^{n-1}(n-1)}{(n-1)!}=\frac{n^{n-1}}{(n-1)!}$$ so $$\sum_{k=0}^{n-1}\frac{n^{k-1}}{k!}(n-k)(n-k+1)=\frac{n^{n-1}}{(n-1)!}+\sum_{k=0}^{n-1}\frac{n^k}{k!}$$ as desired.
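The chain of manipulations can be double-checked with exact rational arithmetic (a sketch; note $n^{k-1}$ is written as $n^k/n$ to stay in integers):

```python
from fractions import Fraction
from math import factorial

def lhs(n):
    return sum(Fraction(n**k, n * factorial(k)) * (n - k) * (n - k + 1)
               for k in range(n))

def rhs(n):
    return (Fraction(n**(n - 1), factorial(n - 1))
            + sum(Fraction(n**k, factorial(k)) for k in range(n)))

assert all(lhs(n) == rhs(n) for n in range(1, 12))
print("identity verified for n = 1..11")
```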
Measure inequality in $L^2$
Let $g = 1_{E_{\beta}}$. By Hölder, $\|fg \|_1 \leq \| f\|_2 \|g\|_2$, which (using the hypothesis $\|f\|_2\le 1$) reads $\int_{E_{\beta}} f \leq m(E_{\beta})^{1/2}$. Notice that $\alpha< \int_{[0,1]} f = \int_{E_{\beta}}f + \int_{E_{\beta}^c}f$, but then $\int_{E_{\beta}}f > \alpha - \int_{E_{\beta}^c}f \geq \alpha -\beta$, so $m(E_{\beta})^{1/2} \geq \alpha -\beta$, i.e. $m(E_{\beta}) \geq (\alpha -\beta)^{2}$.
Is the image of $\overline{\rho_{E,p}}$ in $PGL_2$ always isomorphic to $A_5$ if $p$ does not divide the order of the image of $\rho_{E,p}$?
You have slightly misunderstood the situation. There is an additional assumption, namely that the image is non-solvable. Now there is a classification of the subgroups of $\mathrm{PGL}_2(\mathbb{F}_p)$. (Probably the paper of Serre that they cite has it --- did you look at that? Otherwise, one place it is discussed is in Swinnerton-Dyer's article in LNM 350.) My memory (but you should check) is that a subgroup either contains the image of $\mathrm{SL}_2(\mathbb F_p)$ (in which case its order is divisible by $p$), is contained in the image of a Borel (in which case it is solvable), is contained in the normalizer of a Cartan (in which case it is solvable), or is contained in $A_4,$ $S_4$, or $A_5$, the first two of which are again solvable. Since any proper subgroup of $A_5$ is also solvable, we are reduced to either being contained in $A_5$, or containing the image of $\mathrm{SL}_2(\mathbb F_p)$, which are the two cases considered in the argument. As for the vanishing of cohomology, isn't it just a consequence of the fact that the module has $p$-power order, while the group has order prime to $p$ by assumption?
Is it trivial that I will always find a solution to Laplace's equation via finite-difference method
I will outline the typical method of showing that elliptic problems are solvable, applied to your particular PDE. Since you mentioned you have not taken any PDE classes, I have included links to the facts I will use. In particular, if you are not familiar with weak derivatives and Sobolev spaces, you might want to take a glance at the respective Wikipedia pages. The operator you have mentioned is just the Laplacian expressed in polar coordinates. So you are trying to show that $-\Delta u = f$ is solvable with Dirichlet boundary conditions (also assume $f$ is square-integrable). Let $\Omega$ denote your region in question (all that is really important about $\Omega$ is that it is open and bounded). In the weak sense, this equation is equivalent to the statement $$\int_\Omega -u \Delta v \,dx = \int_\Omega fv \,dx \text{ for all } v \in D(\Omega)$$ If we apply Green's identity then this is equivalent to $$\int_\Omega \nabla u \cdot \nabla v \,dx = \int_\Omega fv \,dx \text{ for all } v \in D(\Omega)$$ Here I used $D(\Omega) = C^{\infty}_0(\Omega)$ to denote the space of test functions. But the term on the left hand side defines an inner product on $H^1_0(\Omega)$ (because of Friedrichs' inequality) and the term on the right is a bounded linear functional $F$ on $H^1_0(\Omega)$ by first Hölder's and then Friedrichs' inequality. So in the weak form, our equation is $$\langle u,v \rangle = F(v) \text{ for all } v \in D(\Omega)$$ But by the Riesz Representation Theorem, there exists a $u \in H^1_0(\Omega)$ such that the above equation is satisfied for all $v \in H^1_0(\Omega)$, so in particular it is satisfied for all $v \in D(\Omega)$. So we have proved the existence of a weak solution. As of now we can only ensure that our solution $u$ lies in $H^1_0(\Omega)$, but we would like to know that at least $u \in H^2(\Omega)$. For this we will need the Elliptic Regularity Theorem. For our purposes, the following simplified version will suffice: Let $k \geq 0$. If $f \in H^k(\Omega)$ and $\Delta u = f$, then $u \in H^{k+2}(\Omega)$. In particular, if $f$ is smooth, then $f \in H^{k}(\Omega)$ for every $k$, so $u \in H^{k+2}(\Omega)$ for every $k$, which by the Sobolev Embedding Theorem implies that $u$ is smooth. So we have proven that your PDE has a unique smooth classical solution whenever the data is smooth.
If the tangent at $(x_0,y_0)$ to the curve $x^3+y^3=a^3$ meets the curve again at $(x_1,y_1)$ then prove that $\frac{x_0}{x_1}+\frac{y_0}{y_1}=1$
$$x^3+y^3=a^3$$ $$3x^2+3y^2 y'=0$$ $$y'=-\frac{x^2}{y^2}$$ Thus the tangent at $(x_0,y_0)$ can be written as $$(y-y_0)=-\frac{x_0^2}{y_0^2}(x-x_0)$$ $$yy_0^2=-x_0^2x+x_0^3+y_0^3$$ The intersection of this line and the curve $x^3+y^3=a^3$ is at the point $(x_1,y_1)$, thus $(x_1,y_1)$ satisfies both the line and the curve. Thus we can write $$x_1^3+y_1^3=a^3$$ and $$y_1y_0^2=-x_0^2x_1+x_0^3+y_0^3$$ $$y_1y_0^2+x_0^2x_1=x_0^3+y_0^3$$ also $x_0^3+y_0^3=a^3$. Thus $$y_1y_0^2+x_0^2x_1=a^3$$ $$\frac{y_1}{y_0}y_0^3+\frac{x_1}{x_0}x_0^3=a^3$$
Big subalgebras of the free polynomial algebra
This is true when $n=2$ and the ground ring is an algebraically closed field of characteristic $0$ by https://projecteuclid.org/download/pdf_1/euclid.ojm/1200773129
Finding distribution of $Y=-1/\log X$
$\frac{dX}{dY}=X\log^2(X)=e^{-1/Y}\left(\frac{-1}Y\right)^2=\frac{e^{-1/Y}}{Y^2}$ Or you can do $X=e^{-1/Y}\implies \frac{dX}{dY}=\frac1{Y^2}e^{-1/Y}$. The exponent is $-1/Y,$ not $-Y$. $Y=-1/\log X$ ranges from $0^+\to\infty$ as $X$ ranges from $0^+\to1^-$.
Hypothesis Testing for the Binomial Distribution
The answer depends on the definition of the rejection region. Here are four possibilities, each taking the rejection region to be $[0,c_1]\cup [c_2,25]$ such that:

1. $P(X\leq c_1)\leq \frac{\alpha}{2}$ and $P(X\geq c_2)\leq\frac{\alpha}{2}$
2. $P(X\leq c_1)+P(X\geq c_2)\leq \alpha$
3. $P(X\leq c_1)\approx \frac{\alpha}{2}$ and $P(X\geq c_2)\approx\frac{\alpha}{2}$
4. $P(X\leq c_1)+P(X\geq c_2)\approx \alpha$

You see that definitions 3 and 4 are a little bit more handwaving. For definition 1, you have to take $c_1=0$ and $c_2=10$. But with definition 2 you can choose $c_1=1$ and $c_2=10$.
Determining if a vector is in the row space
A vector $\vec b$ is in the row space of $A$ if and only if $\vec b$ is in the column space of $A^\top$. Thus, to determine if the vector $\vec b=\left[\begin{array}{r} 0 \\ 7 \\ 4 \end{array}\right]$ is in the row space of $A = \left[\begin{array}{rrr} 1 & 2 & 0 \\ 3 & -1 & 4 \\ 1 & -5 & 4 \end{array}\right]$, form the augmented matrix $$ \left[\begin{array}{rrr|r} 1 & 3 & 1 & 0 \\ 2 & -1 & -5 & 7 \\ 0 & 4 & 4 & 4 \end{array}\right] $$ Row reducing gives $$ \DeclareMathOperator{rref}{rref}\rref\left[\begin{array}{rrr|r} 1 & 3 & 1 & 0 \\ 2 & -1 & -5 & 7 \\ 0 & 4 & 4 & 4 \end{array}\right]= \left[\begin{array}{rrr|r} 1 & 0 & -2 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right] $$ This gives an inconsistent system. Hence $\vec b$ is not in the row space of $A$.
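Equivalently, $\vec b$ lies in the row space iff appending it as an extra row does not increase the rank; a quick numerical check:

```python
import numpy as np

A = np.array([[1, 2, 0],
              [3, -1, 4],
              [1, -5, 4]])
b = np.array([0, 7, 4])

# rank grows from 2 to 3, so b is not in the row space of A
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(np.vstack([A, b])))
```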
$\lim_{x\rightarrow\infty}e^{-x}\cosh(\alpha x)$
Knowing that $\cosh(x) = \frac{1}{2}(e^x + e^{-x})$ by definition, we can take \begin{aligned} \lim_{x \to \infty} e^{-x}\cosh(\alpha x) &= \lim_{x \to \infty}e^{-x}\frac{1}{2}(e^{\alpha x} + e^{-\alpha x}) \\ &= \lim_{x \to \infty}\frac{1}{2}(e^{(\alpha-1) x} + e^{(-\alpha-1) x}) \end{aligned} which converges to $0$ for $|\alpha|<1$, to $\frac{1}{2}$ for $|\alpha|=1$, and diverges otherwise.
Translating Logic into English Sentence
Use these: $\exists x$ = "there is a student" $\forall y$ = "any student" $x\ne y$ = "other" Combine the first two statements with "such that" So then the first sentence becomes "There is a student such that any other student who has a class with the first will not eat lunch with the first student". Or, in a simplified version, "There is a student who does not eat lunch with anyone else in his class"
Mean hitting times with 2 absorbing states.
Hint: There is a positive probability that the walk goes from state 2 immediately to state 1 (an absorbing state) while never visiting any other. Similarly, there is a positive probability that the walk goes from state 2 to 3 to 5 to 6 (the other absorbing state) while never visiting any other. Thus: for any state $x$, there is a positive probability that the walk never travels from state 2 to $x$. What does that imply about the hitting times?
Is $((a \mod n) + (b \mod n) ) = (a + b) \mod n$?
You can add, subtract and multiply congruences all day as long as you mind the "wrap-around." Look at an analog clock or watch; let's do some arithmetic modulo 12. Suppose you work an 8-hour shift at the factory starting at 1300 (1 p.m.). Indeed $(13 \mod 12) + (8 \mod 12) = 9$ and $13 + 8 \equiv 9 \mod 12$. Suppose you're asked to work 4 hours overtime immediately after your regular shift. We have $9 + 4 \equiv 1 \mod 12$ and likewise $((21 \mod 12) + (4 \mod 12)) \mod 12 = 1$. Now, I don't know what your lunch arrangements would be in this hypothetical scenario, but the math can easily be changed to accommodate it. You might complain that I'm being cagey in not exactly copying your usage of the equal sign. Truth is, I think it could be misleading. Maybe you manage to punch in at exactly 1300 one day and punch out exactly at 0100 the next day. The clock's hands are in the exact same position, and indeed $1 = 1$, but in some ways, the 1 p.m. that you started your regular shift at is very different from the 1 a.m. you ended your overtime shift at. But if we were to declare that we're going to be working in, say, $\mathbb{Z}_{12}$, a finite ring which consists of precisely 12 integers, I'd have no problem at all with statements like $9 + 4 = 1$ and $1 = 9 + 4$. Equality is unequivocally commutative. Equivalence is commutative with caveats.
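In code, the only caveat is reducing once more after adding the residues (the clock's "wrap-around"):

```python
import random

n = 12
for _ in range(10_000):
    a, b = random.randrange(10**6), random.randrange(10**6)
    assert (a % n + b % n) % n == (a + b) % n  # always holds

print((13 % 12 + 8 % 12) % 12)  # 9, the end of the regular shift
print((21 % 12 + 4 % 12) % 12)  # 1, the end of the overtime shift
```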
Linear Algebra : Eigenvalues and rank
Your answer is correct (but it seems for the wrong reason; see below). The equation $A\mathbf{x}=0\mathbf{x}$ (implying $\mathbf{x}$ is an eigenvector with eigenvalue $0$, or $\mathbf{x}=\mathbf{0}$) is the same as $A\mathbf{x}=\mathbf{0}$ (implying $\mathbf{x}$ is in the null space of $A$). In other words: The eigenspace corresponding to eigenvalue $0$ is the null space of the matrix. The eigenvalue $0$ has algebraic multiplicity $1$ (since the characteristic polynomial will have $4$ roots, and three of them are non-zero) and hence has geometric multiplicity $1$ (since $1 \leq$ geometric multiplicity $\leq$ algebraic multiplicity). Hence the null space is $1$-dimensional. The Rank-Nullity Theorem implies the rank is therefore $3$. i. No, the column space of $M$ is $\mathrm{span}\{a,\alpha a,b,\beta b\}=\mathrm{span}\{a,b\}$ which is $2$-dimensional, since $a$ and $b$ are linearly independent. Hence the rank is $2$. ii. For example: $(-\alpha,1,0,0)^T$ and $(0,0,-\beta,1)^T$. These can be found by inspecting the linear dependencies among the columns of $M$. It seems that the rank will correspond to the number of non-zero eigenvalues. This is untrue in general; a counter-example is $$\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix}$$ has rank $2$ but characteristic polynomial $x^3(x-1)$, so only one non-zero eigenvalue (even when multiplicities are accounted for). Another way to phrase this is that the algebraic multiplicity of $0$ is $3$, whereas the geometric multiplicity of $0$ (i.e., the nullity) is $2$.
Use $\alpha, \beta, \gamma $ roots of a polynomial to construct another polynomial
Consider $(x-1+\alpha^{-1})(x-1+\beta^{-1})(x-1+\gamma^{-1})$. Using Vieta's formulas, determine its coefficients given the coefficients of $x^3-3x+1=(x-\alpha)(x-\beta)(x-\gamma)$.
Differentiation product of functions in multidimensional Analysis
Assuming that $f: \mathbb{R}^d \to \mathbb{R}^m$, $g: \mathbb{R}^d \to \mathbb{R}^m$ are differentiable, the fact that $k$ is differentiable follows from Theorem 7 on p. 7 here applied to each component of $g$ multiplied with $f^T$; the fact that differentiability of each component of $gf^T$ suffices for the differentiability of $k=gf^T$ itself follows from Theorem 4.3 on p. 5 here (the whole document looks like a good refresher on total derivatives in general). This result also gives us a formula for calculating the derivative of $k=gf^T$: $$Dk(x)=D(g \cdot f^T)(x)= Dg(x)\cdot f^T(x)+g(x)\cdot (Df(x))^T$$ By using the definitions, you should get the desired expression when evaluating $$k'(x)(v) = Dk(x)(v)= \left[ D(g \cdot f^T)(x) \right](v) = \left[Dg(x) \cdot f^T(x)\right](v) + \left[g(x) \cdot (Df(x))^T\right](v)$$ I would go into more detail, but you said you were looking primarily for ideas/hints.
Probability of the waiting time M/G/1 and G/M/1 queue
1) For a general M/G/1 you only have an expression for the Laplace transform of the waiting time (Pollaczek-Khinchine formula), and so typically there are no explicit expressions for the probability $\mathbb{P}(W>t)$. The exception is the atom at zero, which can be computed for example by Little's law: $$ \mathbb{P}(W>0)=\mathbb{E}\mathbf{1}(W>0)=\lambda \mathbb{E}[B]=\rho. $$ Other probabilities you can only approximate by numerical inversion of the Laplace transform. 2) For the G/M/1 it is actually easier because the sojourn time is exponentially distributed (as in the M/M/1 queue). This can be shown by using the fact that the queue length upon arrival is geometric with parameter $\sigma$ that is the solution of $$ \sigma=\tilde{A}(\mu(1-\sigma)), $$ where $\tilde{A}(s)=\mathbb{E}[e^{-sA}]$ is the LST of the inter-arrival times. The sojourn time is then given by a geometric sum of exponential random variables with rate $\mu$ and so it is also exponential with rate $\mu(1-\sigma)$. The waiting time distribution is then a mixture of an atom at zero with probability $1-\sigma$ and an exponential distribution with probability $\sigma$ (for M/M/1 one has $\sigma=\rho$, recovering the usual formula). Therefore, $$ \mathbb{P}(W> t)=\sigma e^{-\mu(1-\sigma)t}. $$ *The above assumes that you are familiar with the standard transform analysis of these queues; otherwise some more details are required and I suggest going over the relevant chapters in a queueing theory textbook.
How is full pivoting more stable than partial pivoting
There is a paper by Trefethen from 1990 that describes the average-case stability of Gaussian elimination using random matrices. It does note at the beginning that average-case analysis excludes worst-case possibilities. There are types of matrices mentioned at the beginning which have a large growth factor under Gaussian elimination with partial pivoting. You can generate these matrices in Matlab: Nick Higham created a bunch of test matrices. It looks like the following using Python (I don't have Matlab anymore).

```python
import numpy as np
import scipy.linalg as slin

n = 8
# generate the matrix
A = np.eye(n, n) - np.tril(np.ones(n), -1)
A[:, n - 1] = 1
P, L, U = slin.lu(A)
```

This generates the following matrices:

```
A =
array([[ 1.,  0.,  0.,  0.,  0.,  0.,  0.,  1.],
       [-1.,  1.,  0.,  0.,  0.,  0.,  0.,  1.],
       [-1., -1.,  1.,  0.,  0.,  0.,  0.,  1.],
       [-1., -1., -1.,  1.,  0.,  0.,  0.,  1.],
       [-1., -1., -1., -1.,  1.,  0.,  0.,  1.],
       [-1., -1., -1., -1., -1.,  1.,  0.,  1.],
       [-1., -1., -1., -1., -1., -1.,  1.,  1.],
       [-1., -1., -1., -1., -1., -1., -1.,  1.]])

L =
array([[ 1.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [-1.,  1.,  0.,  0.,  0.,  0.,  0.,  0.],
       [-1., -1.,  1.,  0.,  0.,  0.,  0.,  0.],
       [-1., -1., -1.,  1.,  0.,  0.,  0.,  0.],
       [-1., -1., -1., -1.,  1.,  0.,  0.,  0.],
       [-1., -1., -1., -1., -1.,  1.,  0.,  0.],
       [-1., -1., -1., -1., -1., -1.,  1.,  0.],
       [-1., -1., -1., -1., -1., -1., -1.,  1.]])

U =
array([[  1.,   0.,   0.,   0.,   0.,   0.,   0.,   1.],
       [  0.,   1.,   0.,   0.,   0.,   0.,   0.,   2.],
       [  0.,   0.,   1.,   0.,   0.,   0.,   0.,   4.],
       [  0.,   0.,   0.,   1.,   0.,   0.,   0.,   8.],
       [  0.,   0.,   0.,   0.,   1.,   0.,   0.,  16.],
       [  0.,   0.,   0.,   0.,   0.,   1.,   0.,  32.],
       [  0.,   0.,   0.,   0.,   0.,   0.,   1.,  64.],
       [  0.,   0.,   0.,   0.,   0.,   0.,   0., 128.]])
```

You can then test the residual error like this:

```python
import numpy as np
import scipy.linalg as slin

n = 20
# generate the matrix
A = np.eye(n, n) - np.tril(np.ones(n), -1)
A[:, n - 1] = 1
P, L, U = slin.lu(A)

b = np.random.rand(n)
x = np.linalg.solve(A, b)
r = b - np.dot(A, x)
relative_residual = np.linalg.norm(r) / (np.linalg.norm(A) * np.linalg.norm(x))
```

Keep on increasing $n$ and see what happens. How does full pivoting make it better? There is something called the growth factor, denoted $\rho$: $$ \rho = \frac{\max_{i,j,k} | a_{i,j}^{(k)}|}{\max_{i,j} |a_{i,j}| } $$ where $a_{i,j}^{(k)}$ denotes the $(i,j)$ element of the matrix $A$ after the $k$th step of the elimination. If $\rho$ is not too large, the elimination is deemed stable. The matrix above has a growth factor of at least $2^{n-1}$ under partial pivoting; you can see this with the matrix size being $n=8$, i.e. $2^{8-1} = 128$, the last entry of $U$. So I believe when you test it in average-case analysis it is almost the same, however in the worst-case analysis you will have much worse residual error.
Cookie Clicker Chocolate Egg strategy
Not a real answer to your question or a complete answer, but: it's more about the number of cookies you want to bake before resetting than the time you want to wait before resetting. Suppose you want to bake $n$ more cookies before selling all your buildings (without buying the supplementary prism). Suppose the boost given by a supplementary prism takes your production from $P$ to $P(1+\alpha)$. If you play as hard and as long with that extra prism as without, you'll produce $n(1+\alpha)$ cookies before selling your buildings. So the extra cookies given by the extra prism will amount to $\alpha \cdot n$. Now the amount of cookies lost by the extra prism in the use of the chocolate egg is (without the auras) $0.05\times0.5\times$ the cost of the extra prism. If this quantity is smaller than $\alpha\cdot n$ then it's more efficient to buy the prism. With the auras (and believe me, it's more efficient to switch to the aura to sell at 85%) there is a shifting of one prism, and the amount of cookies lost is $0.05\times 0.15 \times$ the cost of the prism before the extra prism. For the other buildings it's just $0.05\times 0.15 \times$ the cost of the extra building. I think it's better to think in terms of number of cookies before resetting because with this technique you can take into account the cookies given by the golden cookies (which account for the majority of your cookies in an active gameplay). If you just play by idling, then number of cookies before reset and time before reset are roughly the same thing; you can deduce one from the other easily as they're proportional. However this is all theoretical and it doesn't really matter. The amount of cookies lost by buying lots of buildings is usually negligible, especially with the update with auras. The difference between the best chocolate egg strategy and the "buying everything I can" strategy is only about $0.75\%$ of your bank.
st.petersburg paradox in python
In Python, there's usually an easy way. Here's what you could do, using the $U(0,\,1)$-sampling numpy.random.random:

```python
from numpy.random import random

def St(number):
    result = 2
    for i in range(number):
        if random() < 0.5:
            return result
        result *= 2
    return result
```
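As a usage sketch (building on `St` above): capping the game at `cap` doublings gives expected payoff $\text{cap}+2$, so the empirical mean keeps growing as the cap is raised, which illustrates the paradox.

```python
# Empirical means grow roughly linearly with the cap (expected value is cap + 2)
for cap in (10, 20, 30):
    trials = 100_000
    print(cap, sum(St(cap) for _ in range(trials)) / trials)
```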
Find the Galois group of $x^3-2x-1$ over $\mathbb{Q}$ and over $\mathbb{Q}(\sqrt{5})$
As you said, $p = x^3-2x-1 \in \mathbb{Q}[x]$ has roots $-1, \frac{1+\sqrt{5}}{2}, \frac{1-\sqrt{5}}{2}$. So, the splitting field of $p$ over $\mathbb{Q}$ is $E = \mathbb{Q}(-1, \frac{1+\sqrt{5}}{2}, \frac{1-\sqrt{5}}{2})$. But see: $E = \mathbb{Q}(\sqrt{5})$ (why?) and by Eisenstein's Criterion $q = \min(\sqrt{5}, \mathbb{Q}) = x^2 - 5 \in \mathbb{Q}[x]$, so $[E:\mathbb{Q}] = \deg q = 2$. $\mathbb{Q}$ has characteristic zero, so the extension is separable and we have $|Gal(E/\mathbb{Q})| = [E:\mathbb{Q}] = 2$. $$Gal(\mathbb{Q}(\sqrt{5})/\mathbb{Q}) \simeq \mathbb{Z}_2$$ Let $\sigma \in Gal(E/\mathbb{Q})$. Note that $\{1, \sqrt{5}\}$ is a $\mathbb{Q}-$basis of the $\mathbb{Q}-$vector space $E$ and $Gal(E/\mathbb{Q}) \subset GL_{\mathbb{Q}}(E)$. Therefore, we just need to know $\sigma(1)$ and $\sigma(\sqrt{5})$. Of course $\sigma(1) = 1$. We also know that $\sigma(\sqrt{5})$ is a root of $q$, so $\sigma(\sqrt{5}) \in \{\sqrt{5}, -\sqrt{5}\}$. If $\sigma(\sqrt{5}) = \sqrt{5}$, then $\sigma = id_E$. If $\sigma(\sqrt{5}) = - \sqrt{5}$, then $$\sigma: E \to E: \,a+b \sqrt{5} \mapsto a - b \sqrt{5}$$ For your last question: of course $Gal(E/E) = \{id_E\}$, since $E=\mathbb{Q}(\sqrt{5})$.
Need help showing $\left(\frac{n^2 - 2n + 1}{n^2-4n+2}\right)^n \to e^2$
$$A=\left (1+\frac{2n-1}{n^2-4n+2}\right)^n\implies \log(A)=n\log \left (1+\frac{2n-1}{n^2-4n+2}\right)$$ Now, using equivalents since $n$ is large $$\log \left (1+\frac{2n-1}{n^2-4n+2}\right)\sim \frac{2n-1}{n^2-4n+2}\sim \frac 2n$$ making $\log(A)\sim 2 \implies A \sim e^2$
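A numerical check of the limit (a sketch; $n=10^6$ is arbitrary):

```python
import math

def a(n):
    return ((n**2 - 2*n + 1) / (n**2 - 4*n + 2)) ** n

print(a(10**6), math.exp(2))  # both ≈ 7.389056
```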
"for almost all" symbol
I use (and I often see in others' writing) $\forall^\infty$ meaning "for all but finitely many" and $\exists^\infty$ for the dual quantifier, "for infinitely many".
Related rates of change - concentric spheres
The inner radius $r$ and outer radius $R$ (both in mm) depend on time elapsed $t$ (in s) according to $$ r = 3t \\ R = 5t, $$ and the volume $V$ (in mm$^3$) is given by $$ V = \frac{4 \pi}{3} \left( R^3 - r^3 \right). $$ The multivariable chain rule gives $$ \begin{align} \frac{dV}{dt} &= \frac{dV}{dR} \cdot \frac{dR}{dt} + \frac{dV}{dr} \cdot \frac{dr}{dt} \\ &= 4 \pi R^2 \cdot 5 - 4 \pi r^2 \cdot 3, \end{align} $$ (note the minus sign, since $\frac{dV}{dr} = -4\pi r^2$), and when $t = 4$, the radii are $R = 20$ and $r = 12$, so $$ \left. \frac{dV}{dt} \right|_{t = 4} = 6272 \pi \approx 19704 \text{ mm}^3/\text{s}.$$
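A numerical sanity check of this rate via a central difference (a sketch):

```python
from math import pi

def V(t):
    r, R = 3 * t, 5 * t
    return 4 * pi / 3 * (R**3 - r**3)

h = 1e-6
print((V(4 + h) - V(4 - h)) / (2 * h))  # central difference ≈ 19704.07
print(6272 * pi)                         # exact value 6272*pi ≈ 19704.07
```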
Integer solution for $n_1 k_1 + n_2 k_2 + n_3 k_3 = 1$
Bezout's Identity says that for any given integers $n_1$ and $n_2$ there are integers $k_1$ and $k_2$ so that $$ k_1n_1+k_2n_2=\gcd(n_1,n_2) $$ Simply extending this, we get that for any given integers $n_1$, $n_2$, and $n_3$ there are integers $k_1$, $k_2$, and $k_3$ so that $$ k_1n_1+k_2n_2+k_3n_3=\gcd(n_1,n_2,n_3) $$ Thus, there is an integer solution for $$ k_1n_1+k_2n_2+k_3n_3=1 $$ if and only if $$ \gcd(n_1,n_2,n_3)=1 $$
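The extension is constructive, since $\gcd(n_1,n_2,n_3)=\gcd(\gcd(n_1,n_2),n_3)$: apply the extended Euclidean algorithm twice. A sketch (the helper names are mine):

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def bezout3(n1, n2, n3):
    g12, x, y = ext_gcd(n1, n2)   # g12 = x*n1 + y*n2
    g, u, k3 = ext_gcd(g12, n3)   # g = u*g12 + k3*n3
    return g, u * x, u * y, k3    # g = k1*n1 + k2*n2 + k3*n3

g, k1, k2, k3 = bezout3(6, 10, 15)
print(g, k1 * 6 + k2 * 10 + k3 * 15)  # 1 1
```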
How is $\exp(i 10 \pi)^{\frac56}=\exp(\frac{i \pi}{3})$ true in the paper below? And how is de Moivre's formula applied for a non-integer exponent?
$\exp(i 10 \pi)^{5/6} = \exp(i \pi \cdot 50/6) = \exp(8 i \pi) \cdot \exp (i \pi /3) = \exp (i \pi /3)$ Indeed $1^{5/6}$ should refer to a real root of $1$. But I think this is why the authors write "new" solution (in quotation marks).
How to prove that the diagonal subgroup being normal implies the group is abelian?
Suppose $H \unlhd G$. Then for all $(a,b)\in G\times G$, $(a,b)H(a,b)^{-1}=H$. In particular, if we choose $(a,b)=(a,e) \in G \times G$ and $(h,h) \in H$, then $(a,e)(h,h)(a,e)^{-1}\in H$, so it equals $(k,k)$ for some $(k,k) \in H$. Thus \begin{align*} (a,e)(h,h)(a,e)^{-1}&=(k,k)\\ (a,e)(h,h)(a^{-1},e^{-1})&=(k,k)\\ (aha^{-1},ehe^{-1})&=(k,k). \end{align*} Do you see where to go from here?
Prove that the polynomial is $g(x,y)(x^2 + y^2 -1)^2 + c$
Treating $f$ as a polynomial in $(\mathbb{R}[y])[x]$, there exist polynomials $p(x,y) \in (\mathbb{R}[y])[x]$ and $q(y), r(y) \in \mathbb{R}[y]$ such that $f(x,y)=(x^2+y^2-1)p(x,y)+xq(y)+r(y)$. We have that $\frac{\partial f}{\partial x}(x,y)=(x^2+y^2-1)\frac{\partial p}{\partial x}(x,y)+2xp(x,y)+q(y)$ and $\frac{\partial f}{\partial y}(x,y)=(x^2+y^2-1)\frac{\partial p}{\partial y}(x,y)+2yp(x,y)+xq'(y)+r'(y)$ are divisible by $(x^2+y^2-1)$. Therefore $2xp(x,y)+q(y)$ and $2yp(x,y)+xq'(y)+r'(y)$ are divisible by $(x^2+y^2-1)$. As we did earlier, there exist polynomials $s(x,y) \in (\mathbb{R}[y])[x]$ and $t(y),u(y) \in \mathbb{R}[y]$ such that $p(x,y)=(x^2+y^2-1)s(x,y)+xt(y)+u(y)$. We have that \begin{align} 2xp(x,y)+q(y)&=2x(x^2+y^2-1)s(x,y)+2x^2t(y)+2xu(y)+q(y) \\ &=(x^2+y^2-1)(2xs(x,y)+2t(y))+x(2u(y))+(q(y)-2(y^2-1)t(y))\\ \end{align} is divisible by $x^2+y^2-1$. Thus $2u(y)=0$ and $q(y)-2(y^2-1)t(y)=0$. We have $q'(y)=4yt(y)+2(y^2-1)t'(y)$. Next we have \begin{align} 2yp(x,y)+xq'(y)+r'(y)&=2y(x^2+y^2-1)s(x,y)+2xyt(y)+2yu(y)+xq'(y)+r'(y)\\ &=2y(x^2+y^2-1)s(x,y)+x(2yt(y)+4yt(y)+2(y^2-1)t'(y))+r'(y)\\ \end{align} is divisible by $x^2+y^2-1$. Thus $6yt(y)+2(y^2-1)t'(y)=0$ and $r'(y)=0$. Thus $r(y)=c$ for some constant $c$. We shall show $t(y)=0$. Assume on the contrary that $t$ is not identically $0$, and let $t$ have degree $n$ with nonzero leading coefficient $a$. Comparing the coefficients of $y^{n+1}$ in $6yt(y)+2(y^2-1)t'(y)=0$ gives $(6+2n)a=0$, which is impossible since $a\neq 0$ and $n\geq 0$, a contradiction. Thus $t(y)=0$, and so $q(y)=2(y^2-1)t(y)=0$. Thus \begin{align} f(x,y)&=(x^2+y^2-1)p(x,y)+xq(y)+r(y)\\ &=(x^2+y^2-1)^2s(x,y)+x(x^2+y^2-1)t(y)+(x^2+y^2-1)u(y)+xq(y)+r(y)\\ &=(x^2+y^2-1)^2s(x,y)+c\\ \end{align} and we are done.
Write a proof that it is possible to obtain the product rule from the chain and sum rules
The hint $(x+y)^2-(x-y)^2$ was enough for the OP, great. Decided to make the hint into an "answer" for the sake of pointing out that it was an interesting question, at least to me. Interesting because the same trick is a standard thing in an inner-product space, deducing things about the inner product from things about the norm. Never realized it also had application to calculus... Details added on request: Say $f$ and $g$ are differentiable. Then $$4(fg)'=((f+g)^2-(f-g)^2)'=2(f+g)(f+g)'-2(f-g)(f-g)'=4f'g+4fg'.$$
Why this theorem implies the equicontinuity of the second derivative?
Functions that are uniformly bounded in a Hölder space are equicontinuous; in this case, the second derivatives are uniformly bounded in $C^\alpha$.
Galois action on the character group of a torus
$\newcommand{\Hom}{\mathrm{Hom}}$$\newcommand{\Z}{\mathbb{Z}}$$\newcommand{\Spec}{\mathrm{Spec}}$$\newcommand{\ov}[1]{\overline{#1}}$$\newcommand{\bb}[1]{\mathbb{#1}}$ When I write $\overline{F}$ below I mean the separable closure of $F$. So, by definition $X^\ast(T):=\mathrm{Hom}_{\ov{F}-\mathrm{grps}}(T_{\ov{F}},\mathbb{G}_{m,\ov{F}})$. The action of $\sigma\in\Gamma_F$ (I'll use $\Gamma_F$ for the absolute Galois group) is defined as follows: $$\sigma\cdot f:= \sigma_{\bb{G}_{m,\ov{F}}}\circ f\circ \sigma_{T_{\ov{F}}}^{-1}$$ where for any $F$-scheme $X$ we define $\sigma_{X_{\ov{F}}}$ to be the natural map $X_{\ov{F}}\to X_{\ov{F}}$ which, when one writes $X_{\ov{F}}:= X\times_{\Spec(F)}\Spec(\ov{F})$, is trivial on the $X$-component and is $\Spec(\sigma^{-1})$. So, let's do an example very explicitly. Let's set $F$ to be any field and $E/F$ to be a degree $2$ Galois extension. Then, we can form the group $U(1)_{E/F}$, or just $U(1)$, defined as the kernel of the natural map $\mathrm{Res}_{E/F}\bb{G}_{m,E}\to \bb{G}_{m,F}$ given by the norm. This is a one-dimensional torus. If we write $E=F(\sqrt{d})$ where $d\in F$, then as an affine scheme we can write $U(1)\cong \Spec(F[a,b]/(a^2-db^2-1))$ where the Hopf algebra structure is the map $$F[a,b]/(a^2-db^2-1)\to F[a_1,b_1,a_2,b_2]/(a_1^2-db_1^2-1,a_2^2-db_2^2-1)$$ (where we have made the identification of $F[a,b]/(a^2-db^2-1)\otimes_F F[a,b]/(a^2-db^2-1)$ with $F[a_1,b_1,a_2,b_2]/(a_1^2-db_1^2-1,a_2^2-db_2^2-1)$) given by $$a\mapsto a_1a_2+db_1b_2,\quad b\mapsto a_1b_2+b_1a_2$$ Let us calculate its character lattice. Well, we have a natural isomorphism $\varphi:U(1)_{\ov{F}}\to \bb{G}_{m,\ov{F}}$ given on coordinate rings (using $t$ for the parameter of $\bb{G}_{m,F}$) by $$\ov{F}[t]\to \ov{F}[a,b]/(a^2-db^2-1):t\mapsto a+b\sqrt{d}$$ with inverse $$\Spec(\varphi^{-1}):\ov{F}[a,b]/(a^2-db^2-1)\to \ov{F}[t]$$ given by $$a\mapsto \frac{1}{2}(t+t^{-1}),\quad b\mapsto \frac{1}{2\sqrt{d}}(t-t^{-1})$$ It's then not hard to see that $\Hom_{\ov{F}-\mathrm{grps}}(U(1)_{\ov{F}},\bb{G}_{m,\ov{F}})=\Z\cdot\varphi\cong \Z$. Let's now see how $\Gamma_F$ acts on $\Z\cdot \varphi$. Of course, it suffices to specify how it acts on $\varphi$. Now, by definition we have that $\sigma\cdot \varphi$ is the map $ \sigma_{\bb{G}_{m,\ov{F}}}\circ \varphi\circ \sigma_{T_{\ov{F}}}^{-1}$. Let's see how this acts on coordinate rings. Namely, this map corresponds to a map $$\ov{F}[t]\to \ov{F}[a,b]/(a^2-db^2-1)$$ which can be thought of as factorized as $$\ov{F}[t]\xrightarrow{\sigma^{-1}}\ov{F}[t]\xrightarrow{\Spec(\varphi)}\ov{F}[a,b]/(a^2-db^2-1)\xrightarrow{\sigma}\ov{F}[a,b]/(a^2-db^2-1)$$ and we'll call these arrows 1, 2, and 3 (labeled left to right). Well, let's see where $t$ goes. In the first map it maps to $t$ again. Under the second map it maps to $a+b\sqrt{d}$ and under the last map it maps to $a+b\sigma(\sqrt{d})$. In particular, note that $\Gamma_F$ acts through the quotient $\mathrm{Gal}(E/F)$ and if we denote by $\sigma$ the non-trivial element of $\mathrm{Gal}(E/F)$ this satisfies that $\sigma\cdot \varphi$ sends $t$ to $a-b\sqrt{d}$. Note though that this is just $(a+b\sqrt{d})^{-1}$ and so it's not hard to see that $\sigma\cdot \varphi=-\varphi$ (our groups are multiplicatively written so it might be more natural to write $\varphi^{-1}$ but this could be confused with the inverse of $\varphi$ so I wrote it additively).
In other words, $X^\ast(U(1))$ is the $\Gamma_F$-module $\mathbb{Z}$ where the $\Gamma_F$ structure is the composition $$\Gamma_F\twoheadrightarrow \mathrm{Gal}(E/F)\to \mathrm{Aut}(\mathbb{Z})\cong \mathbb{Z}/2\mathbb{Z}$$ where the arrow $\mathrm{Gal}(E/F)\to\mathbb{Z}/2\mathbb{Z}$ is the obvious isomorphism. To see the relationship between $\sigma_{T_{\ov{F}}}$ and $T(\ov{F})$ for a torus $T$ over $F$ note that $T(\ov{F})$ is nothing other than $\mathrm{Hom}_F(\mathrm{Spec}(\ov{F}),T)$. But, this is the same as $\Hom_{\ov{F}}(\Spec(\ov{F}),T_{\ov{F}})$. By the above we then know how $\Gamma_F$ should act on $f\in\Hom_{\ov{F}}(\Spec(\ov{F}),T_{\ov{F}})$: $$\sigma\cdot f:= \sigma_{T_{\ov{F}}}\circ f\circ \sigma_{\Spec(F)_{\ov{F}}}^{-1}$$ Let's do two examples to see that this is what you should get: If $T$ is $\mathbb{G}_{m,F}$ then $T(\ov{F})=\ov{F}^\times$ and for $\alpha\in T(\ov{F})$ you expect that $\sigma_{T_{\ov{F}}}(\alpha)=\sigma(\alpha)$. Well, note that in scheme world this $\alpha\in\ov{F}$ corresponds to the map $\Spec(\ov{F})\to T_{\ov{F}}$ that corresponds to $\ov{F}$-algebra map $\ov{F}[t,t^{-1}]\to \ov{F}$ that takes $t$ to $\alpha$. Note then that $\Spec(\sigma\cdot f)=\Spec(\sigma_{\Spec(F)_{\ov{F}}}^{-1})\circ \Spec(f)\circ\Spec(\sigma_{T_{\ov{F}}})$ takes (using the sequence of maps on the right hand side) $t$ iteratively to $t$, then $\alpha$, then finally $\sigma(\alpha)$. So you get what you expect. If $T=U(1)_{E/F}$ then $T(\ov{F})=\{(\alpha,\beta)\in\ov{F}^2:\alpha^2-d\beta^2=1\}$. You expect that $\sigma$ acts on $(a,b)$ as $(\sigma(a),\sigma(b))$. To see this, note that $(\alpha,\beta)$ corresponds to the $\ov{F}$-algebra map $\ov{F}[a,b]/(a^2-db^2-1)\to \ov{F}$ taking $a$ to $\alpha$ and $b$ to $\beta$. Note then that if you consider $\Spec(\sigma\cdot f)=\Spec(\sigma_{\Spec(F)_{\ov{F}}}^{-1})\circ \Spec(f)\circ\Spec(\sigma_{T_{\ov{F}}})$ this takes $a$ to $\sigma(\alpha)$ and $b$ to $\sigma(\beta)$, just as you'd expect.
Analyzing a Diophantine equation: $A^k + 1 = B!$ Efficient way to solve.
Given a positive integer $A$, if $k$ and $B$ are positive integers such that $$A^k+1=B!,$$ it is clear that $A^k$ and $B!$ are coprime (they are consecutive integers). Then also $A$ and $B!$ are coprime, and so $B$ is strictly smaller than the smallest prime factor of $A$. If $A$ is not too large, an effective approach is to determine the smallest prime factor of $A$, and then simply try all values of $B$ up to that prime. In particular, for your example with $A=99$ we see that the smallest prime factor is $3$, so we only need to try $B=2$ to see that there are no solutions. Note that if you intend to test this for many values of $A$, it may be worthwhile to verify that $B!-1$ is not a perfect power for any small value of $B$. (Thanks to Peter in the comments: $B!-1$ is not a perfect power for any $B\leq10^4$.) Some more general results: A quick check shows that every solution with $B\leq3$ is of the form $$(A,k,B)=(1,k,2)\qquad\text{ or }\qquad(A,k,B)=(5,1,3).$$ For $B\geq4$ we have $A^k=B!-1\equiv7\pmod{8}$ and so $k$ is odd and $A\equiv7\pmod{8}$. Then $$B!=A^k+1=(A+1)(A^{k-1}-A^{k-2}+A^{k-3}-\ldots+A^2-A+1),$$ which shows that $A+1$ divides $B!$, so in particular $A+1$ is $B$-smooth. So for every prime $p$ dividing $A$ and every prime $q$ dividing $A+1$ we have $q\leq B<p$.
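The search strategy from the first paragraph is easy to implement; a sketch (the function name is mine, and it assumes $A\geq 2$):

```python
from math import factorial

def solutions(A, B_max=20):
    """All (k, B) with A**k + 1 == factorial(B), for A >= 2."""
    out = []
    for B in range(2, B_max + 1):
        target = factorial(B) - 1
        power, k = A, 1
        while power < target:
            power *= A
            k += 1
        if power == target:
            out.append((k, B))
    return out

print(solutions(5))   # [(1, 3)] since 5 + 1 = 3!
print(solutions(99))  # [] (only B = 2 can matter, as 3 divides 99)
```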
Can you raise a number to the power of another number being raised to a power?
Any power of a positive number is just another number. You can do anything with that number that you can do with numbers, including using it as the exponent of another positive number. However $y=$3E-15$e^{0.0197x}$ would stand for $y=3\times 10^{-15}\times e^{0.0197x}$, since the E notation only accepts an explicit integer, and the power of $e$ just has to be multiplied with the number that precedes it. So in this case there is no repeated exponentiation.
integral with pdf of a gaussian
Hint: define $Q(x) = x^2/2$. $$x\phi(x) = \frac 1{\sqrt{2\pi}}Q'(x)\exp(-Q(x)) $$
Modular exponentiation problem
For such a small exponent, just computing it directly works: \begin{align} 10^2 = 100 &\equiv 23 \pmod{77} \\ 10^3 \equiv 230 &\equiv -1 \pmod{77} \\ 10^6 \equiv (-1)^2 &\equiv 1 \pmod{77} \\ 10^7 \equiv 10 \cdot 1 &\equiv 10 \pmod{77} \end{align} Another way is to notice that $1000 = 1001 - 1 = 7\cdot 11 \cdot 13 - 1 \equiv -1 \pmod{77}$, which gives $10^3\equiv-1\pmod{77}$ without any computation.
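For comparison, the computation is a one-liner with built-in modular exponentiation in Python:

    # Python's three-argument pow does modular exponentiation directly
    print(pow(10, 3, 77))   # 76, i.e. -1 (mod 77)
    print(pow(10, 6, 77))   # 1
    print(pow(10, 7, 77))   # 10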
Is a smooth function sending algebraic numbers to algebraic numbers a polynomial?
This is a CW answer to remove this question from the unanswered list -- this question has been answered on Mathoverflow. The answer is no; see this paper.
Not a normal subgroup by left and right coset
Let $g=$ $\begin{pmatrix} a & b \\ 0 & 1 \\ \end{pmatrix}$ with $b\ne 0$. Then $g^{-1}=\begin{pmatrix} \frac{1}{a} & -\frac{b}{a} \\ 0 & 1 \\ \end{pmatrix}$ For $k=\begin{pmatrix} s & 0 \\ 0 & 1 \\ \end{pmatrix}$ we get $gkg^{-1}=\begin{pmatrix} a & b \\ 0 & 1 \\ \end{pmatrix}\begin{pmatrix} s & 0 \\ 0 & 1 \\ \end{pmatrix}\begin{pmatrix} \frac{1}{a} & -\frac{b}{a} \\ 0 & 1 \\ \end{pmatrix}=\begin{pmatrix} as & b \\ 0&1\\ \end{pmatrix}\begin{pmatrix} \frac{1}{a} & -\frac{b}{a} \\ 0 & 1 \\ \end{pmatrix}=\begin{pmatrix} s & -sb+b \\ 0&1\\ \end{pmatrix}\notin K$ for $s\ne 1$, since the upper-right entry equals $b(1-s)\ne 0$.
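A quick symbolic check of the conjugation (Python with sympy; variable names ours):

    import sympy as sp

    a, b, s = sp.symbols('a b s')
    g = sp.Matrix([[a, b], [0, 1]])
    k = sp.Matrix([[s, 0], [0, 1]])

    print(sp.simplify(g * k * g.inv()))
    # Matrix([[s, b - b*s], [0, 1]]): the (1, 2) entry b*(1 - s) is nonzero
    # whenever b != 0 and s != 1, so the conjugate lies outside K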
Understanding the Stone-Weierstrass Theorem in Rudin's Principle of Mathematical Analysis
It wouldn't change much, since $$(1-x^4)^n \geq 1-nx^4$$ for the same reason (i.e., the derivative of $f(x)=(1-x^4)^n-1+nx^4$ is $-4nx^3(1-x^4)^{n-1}+4nx^3$, which is positive on $(0,1)$), and then you can integrate up to $1/\sqrt[4]{n}$ and get a similar analysis for $c_n$. Then, the only times that we see $(1-x^2)^n$ appearing again are in justifying the uniform convergence of $Q_n$ outside any small interval around $0$, which will hold by essentially the same argument, and in the last inequality, where it will not make a difference. However, there is more to it than "it wouldn't change anything". The idea of the proof by Rudin is that the convolution of two functions should preserve the best properties of the two functions. One good example of this is convolution with a smooth compactly supported function, which is a good way to prove uniform density of the smooth functions in $C^0(I)$, for example. So he aims for a convolution with a polynomial. The point is: if $f$ is a function (continuous, $L^1_{loc}$ or whatever depending on context), we expect that after making a convolution with a function $\phi$ which is normalized so that $\int \phi =1$, if the support is small enough and concentrated near $0$, then the convolution $\widetilde{f}$ is near $f$ (think of a discrete averaging where $\phi$ is a "weight", and the convolution at a point is the average of the function according to that weight centered at the point; if we put more weight at the point, and less on nearby values, then the average shouldn't shake things that much). This is easy to arrange for $C^{\infty}$ functions, since we have bump functions. But here we would like to prove density of polynomials, which are not so malleable. In effect, we can't have a polynomial with compact support integrating to $1$ (we can't have non-trivial polynomials with compact support at all!). So we try to emulate a compact support by taking a polynomial which is highly concentrated near the origin. The polynomial $p(x)=1-x^2$ is maybe the most trivial example of something concentrated near the origin. An important observation is that since the function is defined on $[0,1]$, what matters for our averaging polynomial is its behaviour on $[-1,1]$, so the fact that $1-x^2$ explodes outside of $[-1,1]$ is irrelevant. Then we raise it to the power of $n$ since, on $(-1,1)$, this will concentrate things even more near the origin (you can plot a few powers to try and see this). The estimates that Rudin does are there to guarantee that everything goes well; they represent the technical difficulty of not having compact support (nor support as small as desired). The polynomial $1-x^4$ also satisfies the property that if we raise it to the power $n$, things will concentrate near the origin. The only difference is that it is less concentrated than $1-x^2$, so the error you make with the convolution will probably be bigger than if you used $1-x^2$.
Approximating the function $G(x,y) = \int_x^\infty \frac{\ln(t)^2}{(t^2 -1) ( \ln(t)^2 + y^{2} )^2} \,dt$
Accurate approximations of the integral can be obtained from the asymptotic series below. One has to choose the numbers of terms $K$ and $H$: $$ G(x,y) = \int_x^\infty \frac{\ln(t)^2}{(t^2 -1) ( \ln(t)^2 + y^2 )^2} \,dt $$ $$G(x,y)\simeq \sum_{k=0}^K\sum_{h=0}^H(-1)^h(h+1)(2k+1)^{2h+1}\,y^{2h}\,\Gamma\left(-2h-1\:,\: (2k+1)\ln(x) \right)$$ where $\Gamma$ is the (upper) incomplete Gamma function. This comes from expanding $\frac{1}{t^2-1}=\sum_{k\geq0}t^{-2k-2}$ and $\frac{\ln(t)^2}{(\ln(t)^2+y^2)^2}=\sum_{h\geq0}(-1)^h(h+1)\,y^{2h}\ln(t)^{-2h-2}$, then integrating termwise with the substitution $t=e^s$. The first terms are: $$G(x,y)\simeq \Gamma\left(-1\:,\:\ln(x)\right) -2y^2\Gamma\left(-3\:,\:\ln(x)\right) +3y^4\Gamma\left(-5\:,\:\ln(x)\right)+...\\+3\,\Gamma\left(-1\:,\:3\ln(x)\right)-54\,y^2\Gamma\left(-3\:,\:3\ln(x)\right)+...$$ For $x$ large, all the Gamma terms are small. Note: For a good approximation it is better to choose large $H$ and small $K$ than large $K$ and small $H$.
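As a numerical sanity check of the truncated series against direct quadrature, one can use mpmath in Python (the test point and the truncation orders $K=H=4$ below are arbitrary choices of ours):

    from mpmath import mp, mpf, gammainc, quad, log, inf

    mp.dps = 30
    x, y = mpf(5), mpf('0.5')

    # direct numerical evaluation of G(x, y)
    integrand = lambda t: log(t)**2 / ((t**2 - 1) * (log(t)**2 + y**2)**2)
    direct = quad(integrand, [x, inf])

    # truncated series with K = H = 4
    series = sum((-1)**h * (h + 1) * (2*k + 1)**(2*h + 1) * y**(2*h)
                 * gammainc(-2*h - 1, (2*k + 1) * log(x))
                 for k in range(5) for h in range(5))

    print(direct, series)   # the two printed values should agree to several digits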
How to calculate relative error when true value is zero?
If this is based on any kind of real-world situation, then there should be multiple $x_{test}$ measurements, i.e. a distribution. Then it will have a standard deviation, or at least quantiles, and you can define the distance from the mean of the $x_{test}$ to $x_{true}$ in terms of these. E.g., $(\mu_{test} - x_{true}) / \sigma_{test}$ will give you a sort of 'relativized error'. You can also apply standard statistical tests for significance, e.g. the t-test.
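For instance, a minimal sketch in Python with numpy (the measurements are made up for illustration):

    import numpy as np

    x_true = 0.0
    x_test = np.array([0.012, -0.003, 0.007, 0.001, 0.009])   # repeated measurements

    relativized = (x_test.mean() - x_true) / x_test.std(ddof=1)
    print(relativized)   # distance from the true value in sample standard deviations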
Can we deduce that $M_0$ is a submodule of the limit of the following diagram?
If you have not yet done so, convince yourself (by element chasing) that the pushout of an injective module homomorphism is injective. Then, since $A\to B$ is injective, each $M_i$ embeds into $M_{i+1}$, and it's easy to see that $M':=\bigcup_iM_i$ satisfies the universal property of the colimit; hence $M\cong M'$ and the natural arrow $M_i\to M$ is injective for all $i$.
Showing that a certain function is a local diffeomorphism
Hint: Calculate the Jacobian matrix of the mapping $f: (x,y) \mapsto (e^x(x \cos y - y \sin y),\,e^x(x \sin y + y \cos y))$ and check whether its determinant vanishes.
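The hint can be carried out symbolically (Python with sympy):

    import sympy as sp

    x, y = sp.symbols('x y', real=True)
    f = sp.Matrix([sp.exp(x) * (x * sp.cos(y) - y * sp.sin(y)),
                   sp.exp(x) * (x * sp.sin(y) + y * sp.cos(y))])

    det = sp.simplify(f.jacobian([x, y]).det())
    print(det)
    # exp(2*x)*((x + 1)**2 + y**2), up to how sympy groups terms,
    # so the determinant vanishes only at (x, y) = (-1, 0)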
Proving a Summation Equation by Induction
$$ \sum_1^n i\cdot i! = \sum_1^{n - 1} i\cdot i! + n\cdot n! $$ Now plug in the formula for $\sum i\cdot i!$ for $n - 1$, add the next term and see if you get the correct formula for $n$: $$ ((n - 1) + 1)! - 1 + n\cdot n! = n! - 1 + n\cdot n! = (n + 1)n! - 1 = (n+1)! - 1 $$ That's the inductive step. If the formula holds for $n - 1$ then it also holds for $n$.
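A quick numerical spot check of the closed form (Python):

    from math import factorial

    for n in range(1, 9):
        assert sum(i * factorial(i) for i in range(1, n + 1)) == factorial(n + 1) - 1
    print("formula verified for n = 1, ..., 8")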
Dedekind cuts: Why is the union the supremum?
Because the set over which you take the supremum doesn't need to have a single largest element. Consider for example the Dedekind cuts $$M_n = \{r \in \mathbb{Q} \mid r < -\tfrac{1}{n}\}$$ for $n \in \mathbb{N}$. The supremum of these cuts is given by the cut $\{r \in \mathbb{Q} \mid r < 0\}$, which is not of the form $M_n$ for some $n$.
On differentiability of continuous square root
Notice that for all $z\in U$, $f(z)\ne0$ and therefore, by continuity of $f$, $$\lim_{h\to0}\frac{f(z+h)-f(z)}h=\lim_{h\to 0}\frac{(f(z+h))^2-(f(z))^2}{h(f(z+h)+f(z))}=\frac1{2f(z)}$$
How can a unit step function be differentiable??
The derivative of the unit step function (or Heaviside function) is the Dirac delta, which is a generalized function (or a distribution). This wikipedia page on the Dirac delta function is quite informative on the matter. One way to define the Dirac delta function is as a measure $\delta$ on $\mathbb{R}$ defined by $$ \delta(A) = \begin{cases} 0 &: \text{ if } 0 \notin A \\ 1 &: \text{ if } 0 \in A \end{cases} $$ Then one can write down precisely what is meant by the expression $$ \int fd \delta = f(0) $$
Coincidental Trigonometric Identity for Two Particular Values
The left-hand side of $(1)$ equals $\cos^2 b-\cos^2 a$. Hence we have: $\cos b = x \cos a$, where $x^2-x-1=0$. The answer to your second question looks like "no". At least Wolfram Alpha says nothing about the rationality of $\frac{1}{\pi}\arccos \frac{\sqrt{5}-1}{2}$.
Find all primes $p$ such that $x^2\equiv 10\pmod p$ has a solution
Everything you do is correct (modulo noting that $\left(\frac{5}{p}\right) = \left(\frac{p}{5}\right)$ by Quadratic Reciprocity, which is why you are checking for quadratic residues modulo $5$) and dealing with the primes $p=2$ and $p=5$. If $p=2$ or $p=5$, then $x^2\equiv 10\pmod{p}$ clearly has solutions. Otherwise, your development is correct. You need either
- $p\equiv 1,-1\pmod{5}$ and $p\equiv 1,-1\pmod{8}$; hence $p\equiv \pm 1\pmod{40}$ or $p\equiv \pm 31\equiv\pm 9\pmod{40}$; or
- $p\equiv 2,-2\pmod{5}$ and $p\equiv 3,-3\pmod{8}$; hence $p\equiv \pm 27\equiv \pm13\pmod{40}$ or $p\equiv\pm37\equiv \pm3\pmod{40}$; or
- $p=2$; or
- $p=5$.
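One can confirm these residue classes with a short scan over primes (Python with sympy; the bound $2000$ is an arbitrary choice):

    from sympy import primerange
    from sympy.ntheory import legendre_symbol

    good = {1, 3, 9, 13, 27, 31, 37, 39}   # i.e. +-1, +-3, +-9, +-13 (mod 40)

    for p in primerange(3, 2000):
        if p == 5:
            continue                       # p = 5 is one of the special cases
        assert (legendre_symbol(10, p) == 1) == (p % 40 in good)
    print("residue classes mod 40 confirmed")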
Are these integrals convergent?
Look at the asymptotics as $x\rightarrow \infty$ in the first integral: \begin{equation} \frac{x\sin(\ln(x))}{x^2+\cos(x)} \sim \frac{x\sin(\ln(x))}{x^2} = \frac{\sin(\ln(x))}{x} \end{equation} If we change variables, \begin{equation} \int_{N}^{\infty} \frac{\sin(\ln(x))}{x} dx = \int_{\ln(N)}^{\infty} \sin(u) du \end{equation} For the second integral, use a change of variables: \begin{equation} \int_{N}^{\infty} \frac{\sin(\ln(x))}{(\ln(x))^3} dx = \int_{\ln(N)}^{\infty} \frac{e^{u}}{u^3}\sin(u)du \end{equation} It seems that both diverge, but this is all imprecise; the details would need to be filled in. If you restrict to intervals of the form $[e^{\pi M},e^{\pi M+\pi}]$, then the sign of the numerator will be constant (either positive or negative) and you can make the necessary comparisons.
How to go from $5\pi/4$ to $\pi + \pi/4$?
Hint, what if you write: $$\pi = (4 \pi )/4$$
how many triples $(a,b,c)$ of even positive integers satisfy $a^3 +b^2+c \leq 50$
Hint: You know that $a^3 < a^3 + b^2 + c \leq 50$. If $a \geq 4$, this doesn't hold, so $a=2$. Now you have to count solutions to $b^2 + c \leq 42$. Do this in a similar way: Find an upper bound for $b$ and then, for each possible $b$, count the possibilities for $c$.
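Following the hint through by brute force (Python; the loop bounds are deliberately generous, the inequality does the filtering):

    count = sum(1
                for a in range(2, 8, 2)
                for b in range(2, 8, 2)
                for c in range(2, 52, 2)
                if a**3 + b**2 + c <= 50)
    print(count)   # 35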
Show $\lim_{N\to\infty}\int_0^\pi\left(\frac1{\sin\frac{x}2}-\frac2x\right)\sin\left((N+\frac12)x\right)dx=0$
First note that as $x \to 0^{+}$ the function $$f(x) = \dfrac{1}{\sin\left(\dfrac{x}{2}\right)} - \frac{2}{x}$$ tends to a definite limit $0$ and hence can be assumed continuous on $[0, \pi]$. Therefore $f(x)$ is Riemann-integrable on the interval $[0, \pi]$. It now follows by the Riemann-Lebesgue Lemma (related to coefficients of Fourier series of $f(x)$; a proof is available in Tom M. Apostol's "Mathematical Analysis", 2nd Ed., Page 313) that $$\lim_{N \to \infty}\int_{0}^{\pi}f(x)\sin(Nx + b)\,dx = 0$$ This settles the hard part of the question. The limit of $f(x)$ as $x \to 0^{+}$ is calculated as follows: $\displaystyle \begin{aligned}\lim_{x \to 0^{+}}f(x) &= \lim_{x \to 0^{+}}\dfrac{1}{\sin\left(\dfrac{x}{2}\right)} - \frac{2}{x}\\ &= \lim_{x \to 0^{+}}\dfrac{x - 2\sin(x/2)}{x\sin(x/2)}\\ &= \lim_{x \to 0^{+}}\dfrac{x - 2\sin(x/2)}{x\dfrac{\sin(x/2)}{x/2}\cdot(x/2)}\\ &= 2\lim_{x \to 0^{+}}\dfrac{x - 2\sin(x/2)}{x^{2}}\\ &= 2\lim_{x \to 0^{+}}\dfrac{1 - \cos(x/2)}{2x}\text{ (by L'Hospital's Rule)}\\ &= \lim_{x \to 0^{+}}\dfrac{2\sin^{2}(x/4)}{x}\\ &= 2\lim_{x \to 0^{+}}\dfrac{\sin^{2}(x/4)}{(x/4)^{2}}\cdot\frac{(x/4)^{2}}{x} = 0\end{aligned}$ The observation which you have made about representing $\sin(N + 1/2)x$ as a sum can help you out in showing that $$\int_{0}^{\pi}\dfrac{\sin\left(N + \dfrac{1}{2}\right)x}{\sin(x/2)}\,dx = \pi$$ Using this result together with the earlier established limit $$\lim_{N \to \infty}\int_{0}^{\pi}\left(\dfrac{1}{\sin(x/2)} - \frac{2}{x}\right)\sin\left(N + \frac{1}{2}\right)x\,dx = 0$$ gives us $$\lim_{N \to \infty}\int_{0}^{\pi}\dfrac{\sin\left(N + \dfrac{1}{2}\right)x}{x}\,dx = \frac{\pi}{2}$$ The last part of the question can be easily deduced by putting $(N + 1/2)x = t$ in the above integral.
Problem with denominator in transformation
Having looked at your previous question, I see the issue here. The key point is that we are computing a partial derivative here. $\partial$ is not a number that gets canceled out. The derivative of $f(k) = k^{\frac12}$ is \begin{equation*} f'(k) = \frac12 k^{-\frac12}. \end{equation*}
Can the partial fraction method of integration be used with trig functions contained inside the function to be integrated?
It is better to change a variable first: $$\int\frac{dx}{\cos(x)\left(\sin^2(x)+4\right)}=\int\frac{\cos x \,dx}{\cos^2(x)\left(\sin^2(x)+4\right)}=\left[\begin{array}{c}t=\sin x \\ dt=\cos x\,dx\end{array}\right]=\int\frac{dt}{(1-t^2)\left(t^2+4\right)}$$ Now you can use partial fractions. You can also see that the denominators you'll get are not exactly what you'd expect.
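The partial-fraction step after the substitution can be delegated to a CAS (Python with sympy):

    import sympy as sp

    t = sp.symbols('t')
    print(sp.apart(1 / ((1 - t**2) * (t**2 + 4)), t))
    # 1/(5*(t**2 + 4)) - 1/(10*(t - 1)) + 1/(10*(t + 1)), or an equivalent form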
Change of variable, calculation of new partial derivative expression
You don’t “suddenly have additional terms.” The expression for ${\partial\over\partial\tau}$ in your second example is simply an application of the chain rule for multivariable functions. The reason there is only one term in the expression for ${\partial\over\partial\eta}$ is, as i squared pointed out, that ${\partial\tau'\over\partial\eta}=0$, so the second term drops out. That’s the same reason that there’s only one term in the expression for ${\partial\over\partial t}$ in your first example—the other terms in the full expression vanish because it depends on only one of the variables.
Find A and B for a continuous random variable with the following density
You are almost right, if $A\ne 0$ then $\int_0^1\frac{A}{x^2}\,dx$ does not converge. So we must have $A=0$. Now finding $B$ is a routine integration.
Proving by definition that sequence is Cauchy
Hint. We have that $$\lim_{k\to +\infty} \frac{{\arccos(1/k) +3k-1\over (2k-1)^4}}{1/k^3}=\frac{3}{16}<1$$ which implies that there exists $N>0$ such that for $k> N$, $$0\leq {\arccos(1/k) +3k-1\over (2k-1)^4}\leq \frac{1}{k^3}$$ Hence for $n\geq m> N$, $$ 0\leq a_n-a_m\leq \sum_{k=m+1}^n\frac{1}{k^3}=b_n-b_m,$$ where $b_n:=\sum_{k=1}^{n}\frac{1}{k^3}$. Now note that $(b_n)_n$ is a convergent sequence.
How to sample continuous signal correctly?
Scale your frequency axis by the sampling rate fs: freq = 2*np.pi/N * fs * np.arange(N)
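In context, a fuller sketch might look like the following (Python with numpy; the signal parameters are made up for illustration):

    import numpy as np

    fs = 1000.0                       # sampling rate in Hz
    N = 1024
    t = np.arange(N) / fs
    x = np.sin(2 * np.pi * 50.0 * t)  # a 50 Hz test tone

    X = np.fft.fft(x)
    freq = 2 * np.pi / N * fs * np.arange(N)   # angular frequencies (rad/s), as above
    freq_hz = np.fft.fftfreq(N, d=1/fs)        # or in Hz, with negative bins handled

    peak_bin = np.argmax(np.abs(X[:N // 2]))
    print(freq_hz[peak_bin])          # ~50, up to the bin spacing fs/N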
Using inverse image for morphism definitions
It does not directly answer your general question (which has no general answer probably anyway), but the confusion or awkwardness about the definition of continuous maps just disappears when you choose a different, equivalent definition of topological spaces. In other words, you can work with a category which is isomorphic to $\mathbf{Top}$. For example, you can work with Kuratowski spaces: A map $f : X \to Y$ between Kuratowski spaces is continuous by definition if the following holds: If $x \in X$ touches $A \subseteq X$, then $f(x) \in Y$ touches $f(A) \subseteq Y$. So as you can see, no inverse image, and the definition of continuity is much more geometric! (See also Vectornaut's answer here.) You can also work with net convergence spaces: A map $f : X \to Y$ between such spaces is continuous by definition if the following holds: If $x_{\alpha} \to x$ in $X$, then $f(x_{\alpha}) \to f(x)$ in $Y$. With this, you can even see topological spaces as "multialgebraic structures", see Edgar, The class of topological spaces is equationally definable (here), and homomorphisms of multialgebraic structures are defined just as for algebraic structures. In any case, the "right definition" of the morphisms between topological spaces should not be a consequence of some formal properties of set-theoretic operations. Topological spaces should model geometric spaces, and continuous maps should model maps which are "graphically continuous" (which can be formalized pretty well with Kuratowski spaces above). Edit. For the general question, something which comes to my mind is that algebraic structures are built up from their elements using algebraic operations, and simplicial complexes are built up from their simplices. This might explain that we want to consider maps which preserve these building blocks. But topological spaces are not built from open sets. In fact, open sets are just auxiliary gadgets (again, see here). This might be an explanation why we do not consider maps which preserve open sets. However, when we see topological spaces as being "built from" from their points and the touch relation / convergence relation as above, it makes sense to consider maps of points which preserve this relation. Similarly, measurable sets can be considered to be auxiliary gadgets to define integration.
How is the free group on $S$ generators a cogroup?
You just need to follow the isomorphisms involved. You know what the group structure on $G^S$ is, so you know, for example, the multiplication function $$m : UG^S × UG^S → UG^S.$$ You know what the isomorphism $UG^S ≅ \mathrm{Hom}(FS, G)$ is, so you can calculate $$m' : \mathrm{Hom}(FS, G) × \mathrm{Hom}(FS, G) → \mathrm{Hom}(FS, G),$$ and from there $$m'' : \mathrm{Hom}(FS ⊔ FS, G) → \mathrm{Hom}(FS, G).$$ This is natural in $G$, so by Yoneda lemma it comes from a morphism $$m''' : FS → FS ⊔ FS.$$ Now just remembering the "by Yoneda lemma" part won't do you any good, but Yoneda lemma or at least its proof tells you exactly how to calculate $m'''$ from $m''$. Of course there's another method which is much faster, but offers less insight, and isn't guaranteed to work -- guessing :)
Show that $\frac{1+\sin A}{\cos A}+\frac{\cos B}{1-\sin B}=\frac{2\sin A-2\sin B}{\sin(A-B)+\cos A-\cos B}$
Since $\dfrac{\cos B}{1-\sin B}=\dfrac{\cos B\,(1+\sin B)}{1-\sin^2 B}=\dfrac{1+\sin B}{\cos B}$, the $LHS$ can be written as $\dfrac {1+\sin A}{\cos A} +\dfrac{1+\sin B}{\cos B}$ $=\dfrac{\cos A+\cos B+\sin A\cos B+\cos A\sin B}{\cos A\cos B}$ Multiply the numerator and denominator by $\sin(A-B)+\cos A-\cos B=(\cos A+\sin A\cos B)-(\cos B+\cos A\sin B)$ $=\dfrac{(\cos A+\sin A\cos B)^2-(\cos B+\cos A\sin B)^2}{\cos A\cos B[...]}$ $=\dfrac{\cos^2 A+\sin^2 A\cos^2B+2\sin A\cos A\cos B-\cos^2B-\cos^2A\sin^2B-2\cos A\cos B\sin B}{\cos A\cos B[...]}$ Note that $\sin^2A\cos^2B-\cos^2A\sin^2B=(1-\cos^2A)\cos^2B-\cos^2A(1-\cos^2B)=\cos^2B-\cos^2A$ So the LHS becomes $\dfrac{2\sin A \cos A\cos B-2\cos A\cos B\sin B}{\cos A\cos B[...]}$ $=\dfrac{2\sin A-2\sin B}{\sin(A-B)+\cos A-\cos B}=RHS$
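A numerical spot check of the identity at a generic point (Python with sympy; a symbolic simplify may also work but can be slow):

    import sympy as sp

    A, B = sp.symbols('A B')
    lhs = (1 + sp.sin(A)) / sp.cos(A) + sp.cos(B) / (1 - sp.sin(B))
    rhs = (2*sp.sin(A) - 2*sp.sin(B)) / (sp.sin(A - B) + sp.cos(A) - sp.cos(B))

    diff = (lhs - rhs).subs({A: sp.Rational(7, 10), B: sp.Rational(3, 10)})
    print(sp.N(diff))   # ~0 up to rounding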
Are arbitrary finite subsets of $\mathbb{N}$ sets?
As mentioned in the comments, Specification (or Separation) will work via the formula $$x=a_1\vee ...\vee x=a_n.$$ But in order to apply this we also need a given set $A$ which we already know contains each of the $a_i$s. In this context we have such a set - namely, $\mathbb{N}$ - but leaves open the question of whether arbitrary finite collections of sets are sets. To show that the answer is yes, we'll actually use a different approach: Pairing and Union. For example, suppose I have sets $a_1,a_2,a_3,a_4, a_5$. Then: For each $i$, Pairing gives us $\{a_i,a_i\}=\{a_i\}.$ We now repeatedly apply Pairing and Union: By Pairing applied to $\{a_1\}$ and $\{a_2\}$, we get $\{\{a_1\},\{a_2\}\}$; applying Union to that gives $\{a_1,a_2\}$. By Pairing applied to $\{a_1,a_2\}$ and $\{a_3\}$, we get $\{\{a_1,a_2\},\{a_3\}\}$; applying Union to that gives $\{a_1,a_2,a_3\}$. Continuing in this way we ultimately get $\{a_1,...,a_n\}$. There's actually a bit of subtlety here with either approach: when we sit down to make this rigorous what we wind up doing is using proof by induction in the metatheory to show that ZF proves each of a family of sentences - in the original case the family of sentences of the form $$\forall x_1,...,x_n\in\mathbb{N}\exists y\forall z(z\in y\iff (z=x_1\vee...\vee z=x_n))$$ for $n$ finite, and in the more general case the family of sentences of the form $$\forall x_1,...,x_n\exists y\forall z(z\in y\iff (z=x_1\vee...\vee z=x_n))$$ for $n$ finite (this is a more precise way of saying "For all $x_1,...,x_n$ the set $\{x_1,...,x_n\}$ exists"). We're not constructing a single proof in ZF here. This isn't something to pay much attention to at first, but when you dive into the model theory more it will become very important: the issue is that what a given model of ZF thinks is finite may not actually line up with what "real" finiteness is. Such "anomalous" models are called non-$\omega$-models; it will probably be easier to first study their "number" analogues, the nonstandard models of various theories of arithmetic. So is there a "purely internal" version of this? Well, when we try to precisely formulate the statement "Every finite subcollection of a set is a set" in ZF, what we get (since "finite" means "in bijection with some natural number") is "For every $n\in\mathbb{N}$, every set $X$, and every function $f$ with domain $n$ each of whose outputs is in $X$, the range of $f$ is a set." This is an immediate consequence of Separation ("is in the range of $f$"), and an even more immediate consequence of Replacement (since then $X$ doesn't even enter into the situation). But this isn't actually a very useful thing to prove, since constructing the required function $f$ won't be any easier than outright building the desired set itself.
What are the residues of $\frac{z^2 e^z}{1+e^{2z}}$?
You wouldn't have to do the rewrite, but it makes the residue calculation easier. The only thing you have to see is that the numerator doesn't have any singularities and neither does the denominator (except at $\infty$). The only singularities are due to the denominator being zero. Then you use Euler's formula (using $z=x+iy$ with real $x$ and $y$): $$1 + e^{2z} = 1 + e^{2x}(\cos(2y)+i\sin(2y))$$ For this to be zero you will first have to have $\sin(2y) = 0$, which means that $\cos(2y) = \pm1$, but also that $\cos(2y)<0$ since $e^{2x} > 0$. It follows that $\cos(2y) = -1$. Now that means that $e^{2x}=1$. So we have that $x=0$ and $y=(n-1/2)\pi$. That is $$z = i(n-1/2)\pi$$ That you get a different $z$ for each value of $n$ is as it should be, as you have infinitely many values for which the denominator is zero. For the residue calculation we rewrite it as $${z^2e^{z}\over1+e^{2z}} = {z^2\over e^{-z}+e^{z}} = {z^2\over2\cos(iz)}$$ Now to calculate the residue we can note that we could cancel the poles by multiplying with $z-i(n-1/2)\pi$, which would reveal the $c_{-1}$ Laurent coefficient as the limit: $$\operatorname{Res} = \lim_{z\to i(n-1/2)\pi} {z^2(z-i(n-1/2)\pi)\over 2\cos(iz)}$$ Then use $\lim_{z\to0}\sin(z)/z = 1$, or l'Hospital's rule; since the derivative of $2\cos(iz)$ is $-2i\sin(iz)$, this gives $$\operatorname{Res} = {(i(n-1/2)\pi)^2\over -2i\sin\bigl(-(n-1/2)\pi\bigr)} = {-(n-1/2)^2\pi^2\over 2i\,(-1)^{n-1}} = {i\,(-1)^{n-1}(n-1/2)^2\pi^2\over 2}$$
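One can verify the first of these residues, the $n=1$ pole at $z=i\pi/2$ where the formula gives $i\pi^2/8$, numerically (Python with sympy; we shift by a real variable $t$ so the limit is one-dimensional):

    import sympy as sp

    z = sp.symbols('z')
    t = sp.symbols('t', real=True)
    f = z**2 * sp.exp(z) / (1 + sp.exp(2*z))
    pole = sp.I * sp.pi / 2                   # the n = 1 pole

    res = sp.limit(t * f.subs(z, pole + t), t, 0)
    print(sp.simplify(res))                   # I*pi**2/8, matching the formula at n = 1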
Help proving a recursive formula involving planes and lines
Good news is you're on the right track! For a) when you add the $n$th line, imagine the line is coming from infinity to intersect the first line. On its way, it is cutting the region in two pieces, and we previously had one, so it is creating one new region indeed. In fact, you can see each intersection corresponds to adding a new region. After that line crosses the $(n-1)$th (and last) of the old lines, it keeps going forever and creates one more region, which you can think of as the intersection "at infinity"! For b) Let $f(x) = \sum_{n=0}^{\infty}r_nx^n$ be the generating function of our recursion $r_n = r_{n-1} + n.$ Simply plugging in yields $$f(x) = \sum_{n=0}^{\infty}r_nx^n = r_0 + \sum_{n=1}^{\infty}r_nx^n = r_0 + \sum_{n=1}^{\infty}(r_{n-1} + n)x^n$$ which is $$r_0 + x \underbrace{\left(\sum_{n=1}^{\infty}r_{n-1}x^{n-1} \right)}_{=\sum_{n=0}^\infty r_nx^n} + \sum_{n=1}^\infty nx^n=r_0 + xf(x) + \dfrac{x}{(1-x)^2}$$ so $f(x) = 1 + xf(x) + \dfrac{x}{(1-x)^2},$ thus $f(x) = \dfrac{x^2-x+1}{(1-x)^3}.$ Note: As you said, $\sum_{n=0}^\infty x^n = \dfrac{1}{1-x}.$ By taking derivative from both sides and multiplying both sides by $x,$ we will get $\sum_{n=1}^\infty nx^n = \dfrac{x}{(1-x)^2}.$
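Checking the closed form for $f(x)$ against the first few values $r_0,r_1,r_2,r_3,\dots = 1,2,4,7,\dots$ of the recursion (Python with sympy):

    import sympy as sp

    x = sp.symbols('x')
    f = (x**2 - x + 1) / (1 - x)**3
    coeffs = sp.series(f, x, 0, 8).removeO()
    print([coeffs.coeff(x, n) for n in range(8)])   # [1, 2, 4, 7, 11, 16, 22, 29]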
Is the function $f(n)=\varphi(n)+\varphi(n+1)-n$ surjective?
Below are the numbers up to $1000$, along with $998$, that cannot be represented with any $n \leq 100000.$ It would appear that there is little hope for a proof by inequalities; we keep getting larger and larger numbers $n$ such that $\phi(n) + \phi(n+1) - n$ is one of these. In the output of the search below, each line shows the value attained, then $n$ and $n+1$ with their factorizations:

    jagy@phobeusjunior:~$ ./mse
    473   138855 = 3 * 5 * 9257             138856 = 2^3 * 17 * 1021
    774   403490 = 2 * 5 * 157 * 257        403491 = 3 * 11 * 12227
    206   435714 = 2 * 3 * 101 * 719        435715 = 5 * 7 * 59 * 211
    774   736434 = 2 * 3^2 * 163 * 251      736435 = 5 * 7 * 53 * 397
    539   2301765 = 3 * 5 * 173 * 887       2301766 = 2 * 17 * 67699
    774   2493914 = 2 * 43 * 47 * 617       2493915 = 3 * 5 * 53 * 3137
    473   2778215 = 5 * 11 * 50513          2778216 = 2^3 * 3 * 7 * 23 * 719
    206   2915474 = 2 * 19 * 73 * 1051      2915475 = 3 * 5^2 * 38873
    473   3063423 = 3 * 11 * 92831          3063424 = 2^7 * 7 * 13 * 263
    774   4182954 = 2 * 3 * 31 * 43 * 523   4182955 = 5 * 7 * 119513
    774   4372794 = 2 * 3^2 * 29 * 8377     4372795 = 5 * 7 * 101 * 1237
    158   5075570 = 2 * 5 * 507557          5075571 = 3 * 17 * 23 * 4327
    206   5357090 = 2 * 5 * 535709          5357091 = 3 * 17 * 23 * 4567
    774   6368810 = 2 * 5 * 7 * 37 * 2459   6368811 = 3 * 2122937
    158   38029730 = 2 * 5 * 29 * 71 * 1847     38029731 = 3 * 17 * 79 * 9439
    774   39871314 = 2 * 3^2 * 7 * 316439       39871315 = 5 * 11^2 * 59 * 1117
    progress  50000000   Wed Jan 23 12:56:43 PST 2019
    158   64981730 = 2 * 5 * 11 * 47 * 12569    64981731 = 3 * 37 * 149 * 3929
    473   79627911 = 3 * 11 * 389 * 6203        79627912 = 2^3 * 7 * 13 * 109379
    progress 100000000   Wed Jan 23 13:47:31 PST 2019
    206   121764914 = 2 * 17 * 3581321          121764915 = 3^2 * 5 * 137 * 19751
    158   145708130 = 2 * 5 * 53 * 89 * 3089    145708131 = 3 * 19 * 47 * 137 * 397
    progress 150000000   Wed Jan 23 14:51:08 PST 2019
    473   194010879 = 3 * 239 * 270587          194010880 = 2^8 * 5 * 7 * 59 * 367
    progress 200000000   Wed Jan 23 16:02:15 PST 2019
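The search is easy to reproduce on a smaller scale (Python; this is our own reimplementation rather than the ./mse program above, using a standard totient sieve):

    LIMIT = 100_000

    # totient sieve: phi[i] ends up equal to Euler's phi(i)
    phi = list(range(LIMIT + 2))
    for p in range(2, LIMIT + 2):
        if phi[p] == p:                       # p is prime
            for m in range(p, LIMIT + 2, p):
                phi[m] -= phi[m] // p

    hit = {phi[n] + phi[n + 1] - n for n in range(1, LIMIT + 1)}
    missing = sorted(v for v in range(1, 1001) if v not in hit)
    print(missing)   # should reproduce the exceptional values quoted above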
Intuitive explanation of complex-valued function and a notation
$f: A \to B$ just means that $f$ is a function defined on the set $A$ with values in the set $B$. $\mathbb C$ is the set of complex numbers.
$X$,$Y$,$Z$ mutually independent implies $X+Y$ independent of $Z$
Let $g:\mathbb R^2\to\mathbb R$ be the map $(a,b)\mapsto a+b$. Clearly for any $t\in\mathbb R$, $$g^{-1}((-\infty,t])=\{(a,b)\in\mathbb R^{2}:a+b\leqslant t\}$$ is a measurable set. Therefore $g$ is a measurable map, so $X+Y=g(X,Y)$ and $\sigma(g(X,Y))\subset \sigma(X,Y)$. It follows that if $E\in\sigma(g(X,Y))$, $F\in\sigma(Z)$, then $E\in\sigma(X,Y)$ so that $$\mathbb P(E\cap F)=\mathbb P(E)\mathbb P(F),$$ from which we conclude.
How to represent an integer $N$ as $x^a y^b$
Rob and lulu basically answered it in the comments. 1) Find the prime factorization of $N = p_1^{n_1}p_2^{n_2}\cdots p_m^{n_m}$. 2) If any of the $n_i$ are equal to one, it can not be done. (This will happen about $60\%$ of the time!) 3) If you have only two primes you are done: $N = p_1^{n_1}\cdot p_2^{n_2}$. 4) If you have only one prime and its exponent is at most $3$, i.e. $N= p^2$ or $p^3$, it can not be done; otherwise you are done: $N = p^n = p^a\cdot p^b$ for any $a+b=n$ with $a,b\ge 2$. 5) If all the $n_i$ are even then you are done: $N=(p_1)^{n_1}\cdot\left(p_2^{n_2/2}\cdots p_m^{n_m/2}\right)^2$. 6) Otherwise, separate the primes with even exponents from those with odd exponents (each odd exponent is at least $3$, so it can be written as $2l+3$ with $l\geq0$): $N = p_1^{2k_1}\cdots p_j^{2k_j}\cdot q_1^{2l_1+3}\cdots q_r^{2l_r+3}$. 7) Break this into $N = \left(p_1^{k_1}\cdots p_j^{k_j}\cdot q_1^{l_1}\cdots q_r^{l_r}\right)^2\cdot\left(q_1\cdots q_r\right)^3$. 8) The only case not covered above is $N= p_1^3\cdots p_n^3$ with more than one prime. In this case $N = (p_1)^3\cdot(p_2\cdots p_n)^3$. So for example, if I take $N=16810159716000$, the prime factorization is $N=2^5 \cdot 3^6 \cdot 5^3 \cdot 7^8$ $= (2^2\cdot3^6\cdot7^8)\cdot(2^3\cdot5^3) = (2\cdot3^3\cdot7^4)^2\cdot(2\cdot5)^3 = 129654^2\cdot10^3$, which is not the only way to do it. E.g. $(2\cdot7)^2(2\cdot3^2\cdot5\cdot7^2)^3$ will also work. It's just a matter of splitting the prime factors so that all the exponents are multiples of one of two numbers, which sounds hard but is easy to do in practice. That is, if it can be done at all; usually it can't, but it is easy to see when it can't.
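Here is a sketch of steps 5)-7) in code (Python with sympy; the function name is ours, the special cases 3), 4) and 8) would need separate handling, and we assume both exponents must be at least $2$ as in the discussion above):

    from sympy import factorint

    def square_cube_split(N):
        """Steps 5)-7): try to write N = s**2 * c**3."""
        f = factorint(N)
        if any(e == 1 for e in f.values()):
            return None                       # step 2): a bare prime factor kills it
        s, c = 1, 1
        for p, e in f.items():
            if e % 2 == 0:
                s *= p ** (e // 2)            # even exponent: goes entirely into the square
            else:                             # odd exponent e = 2l + 3 with l >= 0
                s *= p ** ((e - 3) // 2)
                c *= p
        return s, c

    print(square_cube_split(16810159716000))  # (129654, 10), as in the example above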
Differential 1-form for $f(x,y) = \sin(x^2 + y^2)$ on vector field $\mathbf{A}=x\partial_x + y\partial_y$
Formula (definition of differential): $$dx (\alpha (x,y) \partial_x + \beta (x,y) \partial_y) = \alpha (x,y) $$ Analogously for $dy $.
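For the concrete computation, pairing $df$ with $\mathbf{A}$ is then the same as applying the vector field to $f$ (Python with sympy):

    import sympy as sp

    x, y = sp.symbols('x y', real=True)
    f = sp.sin(x**2 + y**2)

    # df(A) = A(f) = x * f_x + y * f_y
    print(sp.simplify(x * sp.diff(f, x) + y * sp.diff(f, y)))
    # 2*(x**2 + y**2)*cos(x**2 + y**2), up to how sympy groups the factors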
further decomposing the canonical decomposition of a representation
Consider the trivial representation on $V\cong k^n$ with $k$ infinite and $n>1$. Writing $V$ as a sum of irreducibles means finding one-dimensional subspaces $U_1,\cdots,U_n$ of $V$ such that $V$ is the internal direct sum of the $U_i$. Of course there is an obvious way to do this: select the subspaces generated by the coordinate vectors under some assumed basis. However (say $V=k^n$ so the coordinate vectors are our assumed basis) this hardly exhausts all the ways of writing $V$ in this way. For instance with $n=2$ we could write $V=k(1,0)\oplus k(0,1)$, or we could instead have a different decomposition $V=k(1,1)\oplus k(1,-1)$, which is completely different. In general, each choice of some decomposition $V=U_1\oplus U_2$ corresponds to a choice of two distinct lines through the origin, and there are infinitely many pairs of distinct lines through the origin. Now consider $U$ an arbitrary irrep of $G$. Since its only subreps are $0$ and $U$, every $u\in U\setminus\{0\}$ spans all of $U$ (as a $k[G]$-module). Let $V=U_1\oplus U_2$ with each $U_1,U_2\cong U$ and $U_i=\langle u_i\rangle$. Just as before, there is a distinct decomposition $V=k[G](u_1+u_2)\oplus k[G](u_1-u_2)$. In general, if $V=U_1\oplus\cdots\oplus U_n=W_1\oplus\cdots\oplus W_n$ are two distinct decompositions of $V$, with each $U_i,W_i\cong U$ (only one irrep at play again for simplicity), then gluing isomorphisms $U_i\cong W_i$ together we obtain a map $A:V\to V$ such that the image of $U_i$ is $W_i$ as $k[G]$-submodules of $V$: thus we conclude that every decomposition of $V$ can be written as $V=AU_1\oplus\cdots\oplus AU_n$ for various automorphisms $A\in{\rm Aut}_G(V)$ and a given decomposition $V=\bigoplus U_i$. Schur's lemma tells us that $A\in{\rm Aut}_G(V)$, where $V$ has a given decomposition, comes in the form of a block matrix with each block a scalar multiple of an identity matrix. Thus, the ways of rewriting $V$ from one internal decomposition to another are in natural correspondence with the ways of rewriting $k^n$ from its standard decomposition into other decompositions.
Ideals generated by irreducible elements in a UFD.
I assume by $\langle x\rangle$ you mean the principal ideal generated by $x\in R$. Consider $R=\mathbb{Z}$ and let $a=3$, $b=5$; then $a$ and $b$ are distinct irreducibles.
1. $(1+a)=(4)$ is not prime.
2. $(a+b)=(8)$ is not prime.
3. $(1+ab)=(1+15)=(16)$ is not prime.
For 4, you are right. An example is $R=\mathbb{R}[x][y]$, and let $a=x$.
Getting a closed form from $\sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \sum_{k=1}^{j} 1$
Thanks @Winther for pointing out the previous mistake. \begin{equation} \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \sum_{k=1}^{j} 1 \end{equation} We know that \begin{equation} \sum_{k=1}^{j} 1 = j \end{equation} So \begin{equation} \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \sum_{k=1}^{j} 1 = \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} j \end{equation} \begin{equation} \sum_{j=i+1}^{n} j = \sum_{j=1}^{n} j - \sum_{j=1}^{i} j = \frac{n(n+1)}{2} - \frac{i(i+1)}{2} \end{equation} \begin{equation} \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \sum_{k=1}^{j} 1 = \sum_{i=1}^{n-1} ( \frac{n(n+1)}{2} - \frac{i(i+1)}{2}) = \frac{n(n+1)(n-1)}{2} - \frac{1}{2} \sum_{i=1}^{n-1} (i+i^2) \end{equation} But \begin{align} \sum_{i=1}^{n-1} i &= \frac{(n-1)n}{2} \\ \sum_{i=1}^{n-1} i^2 &= \frac{(n-1)n(2n-1)}{6} \end{align} So \begin{equation} \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \sum_{k=1}^{j} 1 = \frac{n(n+1)(n-1)}{2} - \frac{1}{2} (\frac{(n-1)n}{2} + \frac{(n-1)n(2n-1)}{6}) \end{equation} Let's rearrange: \begin{equation} \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \sum_{k=1}^{j} 1 = \frac{6n(n+1)(n-1) - 3n(n-1) - n(n-1)(2n-1)}{12} \end{equation} We arrive at the most compact form, \begin{equation} \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \sum_{k=1}^{j} 1 = \frac{(n-1)n(6n + 6 - 3 - 2n + 1)}{12} = \frac{(n-1)n(n + 1)}{3} \end{equation}
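A brute-force check of the closed form (Python):

    for n in range(2, 10):
        triple = sum(1
                     for i in range(1, n)
                     for j in range(i + 1, n + 1)
                     for k in range(1, j + 1))
        assert triple == (n - 1) * n * (n + 1) // 3
    print("closed form verified for n = 2, ..., 9")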
Number of integer solutions to the equation $x_1+x_2+x_3+x_4=100$
We are defining $y_1=x_1-1, y_2=x_2-2,y_3=x_3-5,y_4=x_4$ and demanding that all the $y$'s be $\ge 0$. Once we defined $y_3=x_3-5$, there is no constraint on $y_3$ except that it be nonnegative, so it is not listed in the constraints.
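If the original constraints are, as the substitutions suggest, $x_1\ge1$, $x_2\ge2$, $x_3\ge5$ and $x_4\ge0$, the count is $\binom{92+3}{3}=\binom{95}{3}=138415$, which a brute-force count confirms (Python):

    from math import comb

    brute = sum(1
                for x1 in range(1, 101)
                for x2 in range(2, 101)
                for x3 in range(5, 101)
                if x1 + x2 + x3 <= 100)   # then x4 = 100 - x1 - x2 - x3 >= 0
    print(brute, comb(95, 3))             # both 138415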