Why Null Set is zero Dimensional
Note the distinction between saying "The empty set is zero-dimensional" (which is false, as Tobias Kildetoft says, because the empty set does not normally admit the structure of a vector space), and "A zero-dimensional vector space (i.e., a one-element vector space) has the empty set as basis" (which is true, and the reason a one-element vector space is "zero-dimensional").
Does the gradient of a function ever exist on the boundary (prove or disprove)?
You're right about the problem that $p_i + h$ lies outside the boundary. It shouldn't. By the definition of a function $f$ differentiable at a point $x$, the function should be defined on an open set $U$ with $x \in U$ (https://en.wikipedia.org/wiki/Differentiable_function), and then there are some further conditions. So you can't even say your function is differentiable on all of $D$ if it is not defined outside of $D$. It would be more correct to say it's continuous on $D$ and differentiable on $D \setminus \partial D$ (obviously provided $D \neq \partial D$). But sometimes, for simplicity, people do call a function differentiable on all of $D$ (especially in mathematical physics), meaning $\forall x_0 \in \partial D \,\,\, \exists \lim\limits_{x \rightarrow x_0} \nabla f(x)$, where the function $\nabla f(x)$ is defined on $D \setminus \partial D$.
Prove that $x^{2} \equiv 1 \pmod{2^k}$ has exactly four incongruent solutions
Note that $(x - 1)(x + 1) \equiv 0 \pmod{2^k}$ implies that $x$ must be an odd integer. So we may assume that $x = 2m + 1$. Putting this value of $x$ in the equation we have $4m(m+1) \equiv 0 \pmod{2^k}$. This means that $2^{k-2}$ divides $m(m+1)$, assuming $k>2$. Note that if $k \leq 2$ then there is no condition on $m$, so all residue classes of odd integers satisfy the above equation. So now assume that $k > 2$. If $m$ is even then $m$ is divisible by $2^{k-2}$ and $x = 2^{k -1}t +1$ for $t \in \mathbb{Z}$. But if $m$ is odd, then $m+1$ is divisible by $2^{k - 2}$ and in this case $x= 2(m +1)-1 = 2^{k-1}t - 1$ for $t \in \mathbb{Z}$. In the first case we have only two incongruent solutions, namely $1$ and $2^{k -1} +1$, whereas in the second case the incongruent solutions are $-1$ and $2^{k-1} - 1$.
Maple: How to convert Cylindrical coordinates to Cartesian coordinates?
I don't speak Maple, but it looks like your eval takes you from Cartesian to cylindrical coordinates. The inverse is $x=r \cos \phi, y=r \sin \phi, z=z$. The Wikipedia link you have gives this, though using $\rho$ instead of $r$.
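If it helps to sanity-check the conversion outside Maple, here is a minimal sketch in Python (the point values are made up):

    import math

    # Hypothetical cylindrical point (r, phi, z); phi in radians.
    r, phi, z = 2.0, math.pi / 3, 5.0

    # Cylindrical -> Cartesian, as in the formulas above.
    x = r * math.cos(phi)
    y = r * math.sin(phi)

    # Round-trip back to cylindrical to confirm the inverse relationship.
    assert math.isclose(math.hypot(x, y), r)
    assert math.isclose(math.atan2(y, x), phi)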
det function is concave
Hint 1: You should prove it first for the case where neither of the two matrices is symmetric positive definite, and then assume one is positive definite.
Hint 2: You can simultaneously diagonalise a symmetric positive definite matrix and a symmetric one. Can you prove it?
Hint 3: You can consider the positive definite one as the matrix of a scalar product. What does it look like in an orthonormal basis? And what form can the other matrix take in an orthonormal basis?
Hint 4: Try using the spectral theorem.
distributional identity implies pathwise identity
Notice that if $X(\omega )=Y(\omega )$ almost surely, then of course $X\overset{d}{=}Y$. The converse doesn't hold, since $X$ and $Y$ don't necessarily live on the same probability space. Moreover, to say that $X(\omega )=Y(\omega )$ for all $\omega \in \Omega $ is a very strong assumption; we normally rather have the equality almost surely. Now, $B_t(\omega )-B_s(\omega )=B_{t-s}(\omega )$ will normally never hold, simply because $B_{t-s}$ is $\mathcal F_{t-s}$ measurable (where $\mathcal F_t=\sigma (B_s\mid s\leq t)$ is the natural filtration of $(B_t)$), whereas $B_t-B_s$ is not.
Second order differential equation particular solution of a product
You can just use $y_p=(Ax+B)e^x$. Expanding, you'll get $Axe^x+Be^x$. Then find the first and second derivatives and plug them into the original equation.
Why does $8$ not divide both the divisor and remainder?
The Euclidean algorithm stops when the remainder divides both divisor and quotient (which is thus shown to be the $\gcd$). So the statement that you question is odd, unless it is referring to some context outside the scope of the text and the link. Still it is certainly true that at any point in the process, the $\gcd$ is one of the factors of the latest remainder. I write the extended Euclidean algorithm as a table: $\begin{array}{c|c} n & s & t & q \\ \hline 3084 & 1 & 0 & \\ 1424 & 0 & 1 & 2 \\ 236 & 1 & -2 & 6 \\ 8 & -6 & 13 & 29 \\ \color{red}{4} & 175 & -379 & 2 \\ 0 & -356 & 771 & 0 \\ \end{array}$ ... each line solving the equation $n=3084\cdot s + 1424 \cdot t$, set up in the first two lines with the trivial equations for $n=3084$ and $1424$, and then subtracting $q$ times the line from the line above to get the smaller $n$ value on the next line. The last non-zero $n$ is the $\gcd$ of the opening pair. This also gives the appropriate Bézout's identity for the original pair for a linear combination that gives their $\gcd$: $175\cdot 3084+(-379)\cdot 1424=4$.
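For anyone who wants to reproduce the table programmatically, here is a small Python sketch of the same computation (a generic extended Euclidean algorithm; the test values are the ones from the table):

    def extended_gcd(a, b):
        # Each row is (n, s, t) with n = a*s + b*t, as in the table above.
        r0, s0, t0 = a, 1, 0
        r1, s1, t1 = b, 0, 1
        while r1 != 0:
            q = r0 // r1
            r0, s0, t0, r1, s1, t1 = r1, s1, t1, r0 - q*r1, s0 - q*s1, t0 - q*t1
        return r0, s0, t0  # gcd and the Bezout coefficients

    g, s, t = extended_gcd(3084, 1424)
    print(g, s, t)              # 4 175 -379
    assert 3084*s + 1424*t == g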
infinity, p2 and p1 Norm and its associated unit ball that has negative y values
The comment was a little sloppy. Let $N$ be a norm in $\mathbb{R}^2$. Then the unit ball is the set of all points in $\mathbb{R}^2$ whose norm is one. So for example if we consider the Euclidean norm, then $$N(-1,0)=N(0,-1)=N(0,1)=N(1,0)=N\left(-\frac{1}{2},-\frac{\sqrt{3}}{2}\right)=1$$ so all five of these points are on the unit ball. Since the norm is defined for all of the points in the plane, the input to the norm function can be anything, positive, zero, negative. The output must always be nonnegative. To disambiguate my comment, when I wrote the comment I was thinking only of $\mathbb{R}$ with $N(x)=|x|$ to show that the input to a norm function can be negative. For a generic dimension, we can always use $N(\vec{x})$ where $\vec{x}$ is a vector of whatever type we need it to be.
Setting Up An Integral for Moment of Inertia of Wire
It is not clear to me why you are doing a surface integral when there is no surface at all. On a curve we can move in two directions, pretty much just like on a wire. On a surface we can move in an infinite amount of directions. You are interested in computing the line integral, $$\oint_{x^2+y^2=R^2} (x^2+y^2)\delta ds$$ This is not hard, just parametrize with $x=R \cos (\theta)$ and $y=R \sin (\theta)$ with $\theta \in [0,2\pi]$.
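As a numerical sanity check (with made-up values $R=2$ and $\delta=3$), the parametrization gives $\int_0^{2\pi}\delta R^2\cdot R\,d\theta = 2\pi\delta R^3$:

    import numpy as np

    R, delta = 2.0, 3.0  # hypothetical radius and linear density
    theta = np.linspace(0, 2*np.pi, 100001)
    x, y = R*np.cos(theta), R*np.sin(theta)
    ds = R  # |r'(theta)| for this parametrization
    integral = np.trapz((x**2 + y**2) * delta * ds, theta)
    print(integral, 2*np.pi*delta*R**3)  # both ~ 150.796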
Prove there are no simple groups of even order $<500$ except orders $2$, $60$, $168$, and $360$.
Hint Recall that Burnside's Theorem implies that the order of any non-Abelian, finite, simple group has at least three distinct prime factors. (Burnside's Theorem is stated in $\S$6 but only proved later, in $\S$19, to take advantage of some representation theory.) If $2$ divides the order $n$ of a group $G$ exactly once, then $G$ has a subgroup of index $2$ ($\S$4.2, Exercise 12), but any such subgroup is normal ($\S$3.2, Example (2)), so unless $n = 2$, we have $2^2 \mid n$. These two restrictions together leave $38$ possibilities besides $n = 2$, and so $35$ candidates to be eliminated. Applying Exercise 25--- Let $G$ be a simple group of order $p^2 q r$ where $p$, $q$ and $r$ are primes. Prove that $|G| = 60$. ---leaves just $16$ to eliminate, which is already doable manually with (considerable) effort. (Alas, Exercise 25 comes after the one in the question statement, but it's in the same section, and anyway it is much more efficient to prove this general statement than to handle separately the $19$ cases it eliminates.) Additional hint: The text eliminates several of the remaining possibilities in previous examples and exercises: $264$ and $396$ ($\S$6.2, in the subsection Permutation Representation), $312$ ($\S$4.5, Exercise 14), $336$ ($\S$6.2, Exercise 9), $420$ ($\S$6.2, Exercise 17(a)). This leaves just $11$ numbers: $120$, $180$, $240$, $252$, $280$, $300$, $408$, $440$, $456$, $468$, $480$. Probably some of these can be eliminated by $\S$4.5, Exercise 48, though that exercise asks you to write a program.
I just want to know if this is correct, without a proof
$30\mid 60$ and $20\mid 60$ but $(30\cdot20)\nmid 60$. The statement that if $a\mid c$ and $b\mid c$ then $ab\mid c$ is true if $a$ and $b$ have no prime factors in common. But when they do share common factors, then there are many counterexamples like the one above.
Proving commutativeness of an operator using structural induction
Yes, this is fine. Formally, your induction hypothesis is (for the first case): "$\forall m, sum(Sn,m) = S(sum(n,m))$", so substituting $Sm$ for $m$ is valid. You may want to make this clear in the second problem as you do induct on both variables. So in the second example, your IH1 is "$\forall n, sum(n,m)=sum(m,n)$", and IH2 is "$sum(n,Sm)=sum(Sm,n)$", with $n,m$ fixed.
'Coprime' problem related to integer rings
Yes, $u$ and $v'$ are coprime. This is equivalent to the statement: $$\gcd(N, r-s) = \gcd(N, r'-s')$$ whenever $r', s'$ satisfy $rr'\equiv ss'\equiv 1\mod{N}$. To prove this, notice that $r'-s'\equiv -(r-s)r's' \mod{N}$. Since $\gcd(-r's',N)=1$, it follows that $$\gcd(N, r'-s')=\gcd(N, (-r's')(r-s))=\gcd(N, r-s)$$ and the result follows $\square$
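A quick numeric spot-check of this identity in Python (the values of $N$, $r$, $s$ are arbitrary, chosen coprime to $N$):

    from math import gcd

    N, r, s = 35, 4, 9             # hypothetical; gcd(r, N) = gcd(s, N) = 1
    r_inv = pow(r, -1, N)          # r' with r*r' = 1 (mod N)
    s_inv = pow(s, -1, N)
    print(gcd(N, r - s), gcd(N, r_inv - s_inv))  # 5 5 -- equal, as claimed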
Numerical integration of nonlinear second order equation
This problem wasn't so hard to solve, after all, as soon as I remembered how to classify it; it's implicit, which is the biggest part of the reason I struggled. This is how to solve it: Rewrite it as a first-degree problem, using substitutions as suggested by Arkamis. I had a system of two second-degree equations, so I ended up with four first-degree equations. $ \left\{\begin{array}{rcl} I\dot\alpha-Mlh\dot\beta\sin(\theta+\phi)-Mlh\beta^2\cos(\theta+\phi)-V\cos\theta & = & 0 \\ h\dot\beta-l\dot\alpha\sin(\theta+\phi)-l\alpha^2\cos(\theta+\phi)+g\sin\phi & = & 0 \\ \dot\theta-\alpha & = & 0 \\ \dot\phi - \beta & = & 0 \end{array}\right. $ This system is of the form $f(t,y,\dot y)=0$, with $y=(\alpha,\beta,\theta,\phi)$, and can be solved with any implicit ODE solver, such as Matlab's ode15i. That's what I did =)
Finding a pair of Orthogonal Vectors
You got lost in your computations while the answer is quite easy - if you keep in mind what kind of spaces etc. you are manipulating instead of staying at the level of the equations themselves. First find any non-zero vector $b$ orthogonal to $a = \tiny\begin{pmatrix} 1\\1\\-2\\3\end{pmatrix}$. Such a vector is found as an element of the hyperplane with equation $x_1+x_2-2x_3+3x_4 = 0$; you could for instance use $b = \tiny\begin{pmatrix}1\\-1\\0\\0\end{pmatrix}$. Now the beauty is that since the space spanned by $a, b$ has dimension two, its orthogonal also has dimension two; therefore, you will certainly be able to find one more vector $c$ orthogonal to both $a$ and $b$, using the same method I used above for $b$. (Left as an exercise for now!). EDIT: Modified $a$ to match problem statement.
Self-referential integral simplifies to an exponential function
Any continuous function on a closed interval $[a,b]$ is integrable. To differentiate note that $f(x)=h(x^{3})$ where $h(x)=\int_8^{x} f(t^{1/3}) dt$ and apply Chain Rule.
Bianchi identity of linear connection on vector bundle
As Anthony Carapetis suggests, following the book "From Calculus to Cohomology": it is shown there that $d^\nabla F=0$ (Theorem 17.13, p. 178), where one treats $F \in \Omega^2(M, End(E))$, and $d^\nabla$ is the induced connection on $End(E) \cong E^* \otimes E$. So it doesn't really mean $(d^\nabla)^3=0$.
1-1 correspondence theorem
Let $\;B\;$ be an ideal of $\;R\;$ containing $\;A\;$, and now define : $$B/A:= \{r+a\in R/A\;\;;\;\;r\in B\}$$ Prove that $\;B/A\;$ as defined above is an ideal of the factor ring $\;R/A\;$ . Important: don't forget that $\;A\le B\;$ ! Now, let $\;\overline B\le R/A\;$ (an ideal of $\;R/A\;$ ), and define $$B:=\{r\in R\;\;:\;\;r+A\in\overline B\}$$ Prove that $\;B\;$ as defined above is an ideal of $\;R\;$ containing $\;A\;$ . There you have the main part of the very important Correspondence Theorem for Rings, which allows us to write any ideal of the factor ring $\;R/A\;$ in the form $\;B/A\;$ , for some ideal $\;A\le B\le R\;$ . You could also prove that both maps determined by the definitions above are inverse to each other, and that the correspondence respects the indexes: $$[R:B]=\left[R/A:B/A\right]$$
Is the number of Lebesgue covering dimensions a topological property?
Isn't it obvious from the definition that the covering dimension is a topological invariant? It's defined purely in terms of open sets and set-theoretic concepts.
Moving between different ellipse representations
It's rather obvious that if $\vert\vert Ax + b \vert\vert \leqslant 1$, then $\vert\vert Ax + b \vert\vert^2 \leqslant 1$. Here we go: $\vert\vert Ax + b \vert\vert^2 = (Ax+b, Ax+b) = (Ax+b)^{T}(Ax+b) = (x^{T}A^{T} + b^{T})(Ax+b) = x^{T}(A^{T}A)x+ x^{T}A^{T}b + b^{T}Ax + b^{T}b=x^{T}(A^{T}A)x + 2(b^{T}A)x + (b^{T}b) = x^{T} \tilde{A} x + 2\tilde{b}^{T}x + \tilde{C}$. This is how you can move from one representation to another.
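A short numpy check of the expansion, with randomly chosen $A$, $b$, $x$ (the variable names mirror $\tilde A$, $\tilde b$, $\tilde C$ above):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((2, 2))
    b = rng.standard_normal(2)
    x = rng.standard_normal(2)

    lhs = np.linalg.norm(A @ x + b) ** 2
    A_t = A.T @ A                     # \tilde{A}
    b_t = A.T @ b                     # \tilde{b}
    C_t = b @ b                       # \tilde{C}
    rhs = x @ A_t @ x + 2 * b_t @ x + C_t
    assert np.isclose(lhs, rhs)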
Problem with showing that every Contractive Sequence is Cauchy
Instead of explicitly demonstrating some natural $K$ making the Cauchy condition satisfied, the proof is indirectly doing that by using a known fact from Real Analysis that the sequence $\{C^{n - 1}\}_{n \in \Bbb Z_+}$ tends to $0$ as $\Bbb Z_+ \ni n \to \infty$ as long as $0 < C < 1$ (which indeed holds for contractive sequences). In other words, the natural number $K$ is "secretly" being chosen like this: If $x_2 = x_1$, then there is nothing to prove. So assume $|x_2 - x_1| > 0$. Now fix some $\varepsilon > 0$. And note that because $C^{n - 1} \to 0$ as long as $0 < C < 1$, there must be some natural $K$ such that $\Bbb Z_+ \ni \forall n > K$ we have $$|C^{n - 1}| = C^{n - 1} < \varepsilon \frac{1 - C}{|x_2 - x_1|}$$ (note $1 - C > 0$ as $0 < C < 1$). Hence continuing the proof from the last inequality, we get that for our epsilon above and for all naturals $m > n > K$: $$ |x_m - x_n| < C^{n - 1}\frac{1}{1 - C}|x_2- x_1| < \varepsilon \frac{1-C}{|x_2 - x_1|} \cdot \frac{1}{1 - C}|x_2 - x_1| = \varepsilon $$ demonstrating that the sequence is Cauchy.
Show that for each of the following graphs G there exists up to isomorphism precisely one category A with G(A) = G.
Let me show you how to do the first graph; the second graph uses similar ideas but is more complicated. Let's let $X$ and $Y$ be the two objects, with $f:X\to Y$, $g:Y\to X$, and $h:Y\to Y$ the non-identity morphisms. The only compositions that are not automatically determined are $fg$ and $h^2$: each of them could be either $h$ or $1_Y$. Note that $gf:X\to X$ must be $1_X$, since there are no non-identity morphisms $X\to X$. Now if $fg$ were $1_Y$, then $f$ and $g$ would be inverse isomorphisms, so $X\cong Y$. This is impossible, since there is a nonidentity morphism $Y\to Y$ but not a non-identity morphism $X\to X$. (Explicitly, note that $ghf=1_X$ and so if $fg=1_Y$, then $h=1_Yh1_Y=(fg)h(fg)=f(ghf)g=f(1_X)g=fg=1_Y$, which is a contradiction.) Thus $fg\neq 1_Y$, so $fg=h$. The only composition that remains to be determined is $h^2$. But since $fg=h$, $$h^2=(fg)(fg)=f(gf)g=f1_Xg=fg=h.$$ So we have determined the entire composition operation for our category, and so it is unique up to isomorphism.
Metric space question
Let $ε>0$. Then there is $n_1\in \Bbb N$ such that $p(x_n,x)<ε/2$ for every $n\geq n_1$. Also there is $n_2\in \Bbb N$ such that $p(x_n,y)<ε/2$ for every $n\geq n_2$. Let $k=\max(n_1,n_2)$. Then we have that $p(x,y)\leq p(x_n,x) +p(x_n,y)<ε/2+ε/2=ε$ for every $n\geq k$. Since $ε>0$ was arbitrary, $p(x,y)=0$ and thus $x=y$.
Odd coefficient in $M\in \mathcal{M}_n(\Bbb{Z})$ satisfies $n\le m\le n^2-n+1$.
Such an estimate is not true in general. For $n=1$ we have $m=\pm 1$, and $m=-1$ is smaller than $n=1$. In general, we may take $M=-I_n$ to have $m<n$. For $n\ge 2$ we can write down a matrix $M$ in $GL_n(\mathbb{Z})$ with arbitrarily large odd coefficients: take, for arbitrary $m\in \mathbb{Z}$, $$ M=\begin{pmatrix} m & m-1 \\ m+1 & m \end{pmatrix} . $$
Confusion on Riemann integrable functions and further confusion about properties.
Q1) Yes, of course. Q2) Yes, you can suppose that $h^{\sigma _n}\to 0$. Q4) Notice that you can define $$\underline{S}=\sup\left\{\int_a^b \varphi\mid \varphi\leq f,\ \varphi\ step\ function\right\},$$ $$\overline{S}=\inf\left\{\int_a^b \varphi\mid f\leq \varphi,\ \varphi\ step\ function\right\},$$ but if $f\in R[a,b]$ you won't necessarily have a sequence of step functions $(f_n)$ s.t. $f_n(x)\to f(x)$ for all $x$ (it will only be almost everywhere). Q5) Closure in which sense? In $L^1$? Unfortunately, you will only have $$R[a,b]\subset Closure\{step\ functions\}$$ since there are sequences of step functions that converge to functions that are not Riemann integrable (for example, there is a sequence of step functions that converges to $\boldsymbol 1_{\mathbb Q\cap [a,b]}$, which is not Riemann integrable.)
Prove that $e^x\geq x^{a}$
For the first part, take logs and plug in $x=e$ (as the inequality is given for all $x$ in the positive reals). Next use the fact that $x^{\frac{1}{x}}\leq e^{\frac{1}{e}}$ for positive real $x$. Then $e^{\frac{1}{e}}\leq e^{\frac{1}{a}}$ by the provided inequality on the left-hand side, and we are done: $x^a\leq e^x$.
Prove using complex numbers $e^{(x+y)} = e^x e^y$ for all $x,y$ complex.
For complex numbers I would start with $\displaystyle \exp(z)=\sum_{n=0}^\infty \frac{1}{n!}z^n$. Then use the Cauchy product to get $$\displaystyle \exp(z)\exp(w)=\left(\sum_{ n=0}^\infty \frac{1}{n!}z^n\right)\left( \sum_{k=0}^\infty \frac{1}{k!}w^k \right)=\sum_{n=0}^\infty \frac{1}{n!} \sum_{p=0}^n \frac{n!}{p!(n-p)!}z^pw^{n-p}=\sum_{n=0}^\infty \frac{1}{n!}(z+w)^n=\exp(z+w).$$
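If you want to see the identity numerically, here is a small Python check with arbitrary complex inputs (truncating the series at 60 terms):

    from math import factorial

    def exp_series(z, terms=60):
        # Partial sum of the power series for exp(z).
        return sum(z**n / factorial(n) for n in range(terms))

    z, w = 1.2 + 0.7j, -0.4 + 2.1j  # arbitrary complex numbers
    assert abs(exp_series(z) * exp_series(w) - exp_series(z + w)) < 1e-12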
Discuss the solution to 5x ≡ 8(mod 10)
The easiest way is to notice that $5x \equiv 0$ or $5 \pmod{10}$, so it can never be $8$. This can be shown formally. If $x$ is even, let $x=2k$; then $5x = 10k \equiv 0 \neq 8 \pmod{10}$. If $x$ is odd, let $x=2k+1$; then $5x = 10k+5 \equiv 5 \neq 8 \pmod{10}$. Therefore the equation $5x \equiv 8 \pmod{10}$ has no solutions.
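The exhaustive check behind the first sentence is a one-liner in Python:

    print(sorted({(5 * x) % 10 for x in range(10)}))  # [0, 5] -- never 8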
Lie Group of $C^k$-curves?
Infinite-dimensional Lie groups are usually required to be Banach manifolds, which $C^k(\mathbb{R})$ isn't. In any case, pointwise multiplication cannot be used to give a group structure, unless you only take the non-vanishing functions (and get an abelian topological group). Pointwise addition gives $C^k(\mathbb{R})$ the structure of an abelian topological group. Also, pointwise multiplication does not make $C^k(\mathbb{R})$ into a groupoid, as there is no object inclusion map.
If $P(A\setminus B)\geq P(B\setminus A)$, then $P(A) \leq P(B):\;$ Why false?
$$P(A)=P(A\cap B)+\underline{\qquad}\qquad P(B)=P(A\cap B)+\underline{\qquad}$$
Find the value of $c$ if the vectors lie in the same plane
Okay, @AlexeyBurdin has pointed out my mistake: essentially I got the cross product wrong in both methods, but they should both work. I'm just going to put this correction up, and what the subsequent answer should be, for anyone else who runs into this problem. The reason is that for the matrix $$ A=\begin{pmatrix} a & b & c\\ d & e & f\\ g & h & i\\ \end{pmatrix} $$ $$|A| = a(ei - fh) - b(di - fg) + c(dh - eg)$$ and the minus in the middle is what I forgot about, which led to the rest of my workings being wrong; finding the determinant of a $3 \times 3$ matrix is part of what the cross product is. First method: $$ \overrightarrow b × \overrightarrow c = (4-6)\overrightarrow i - (2-12) \overrightarrow j + (2-8) \overrightarrow k $$ $$ \overrightarrow b × \overrightarrow c = -2\overrightarrow i +10 \overrightarrow j -6 \overrightarrow k $$ $$ \overrightarrow a⋅(\overrightarrow b × \overrightarrow c) = (2\cdot(-2)) + (-2 \cdot 10) + (c\cdot (-6)) = 0 $$ $$ \overrightarrow a⋅(\overrightarrow b × \overrightarrow c) = -4 - 20 -6c = 0 $$ $$ -6c = 24 $$ $$ c = -4 $$ Second method: $$ \overrightarrow {PQ} = \overrightarrow Q - \overrightarrow P = (1,2,3) $$ $$ \overrightarrow {PR} = \overrightarrow R - \overrightarrow P = (4,2,2) $$ Next I crossed $\overrightarrow {PQ} × \overrightarrow {PR}$ to find the normal vector, whose components are the coefficients for the equation of the plane: with normal vector $(a,b,c)$ the equation of the plane is $a(x-x_0) + b(y-y_0) + c(z-z_0) = 0$. $$\overrightarrow {PQ} × \overrightarrow {PR}= -2\overrightarrow i +10 \overrightarrow j -6 \overrightarrow k $$ Thus, the equation of the plane is: $$-2(x-x_0) +10(y-y_0) -6(z-z_0) = 0$$ Substituting an easy point on the plane, $(0,0,0)$: $$-2x +10y -6z = 0$$ Substituting the point in question that must be on the plane, $(2, -2, c)$: $$-2(2) +10(-2) -6c = 0$$ $$-24-6c = 0$$ $$-6c = 24$$ $$c = -4$$
Claims regarding dimensions of vector space and subspaces
There is a counterexample to the second statement: take $\dim(V) = 10$ and subspaces $U, W \subset V$ with $U = W$ and $\dim(U) = \dim(W) = 1$. Then $U$ and $W$ intersect, and $U \cap W = U \neq \{0_V\}$.
A question concerning showing that $V(I)$ is open if $I$ is a radical ideal
$f(x) = 0$ implies $f_m(x)=0$ for all $m$, since $\sum_{m} |f_m(x)|$ is a sum of non-negative numbers, which is zero.
Solve this Riccati equation
The general method I believe calls for the substitution $$y=\pm{u'\over u},$$ such that $$ y' = \pm{u''\over u} \mp {(u')^2\over u^2} $$ and the equation becomes $$ \pm{u''\over u} \mp {(u')^2\over u^2} \pm {(u')^2\over u^2} = \eta \cos{\theta} + \xi \cos^2 \theta $$ or $$ u'' +F(\theta)u = 0, $$ where $$ F(\theta) = \pm \left(\eta \cos{\theta} + \xi \cos^2 \theta\right) $$
$f(x)=\prod_{j=1}^n(x-x_j)=0$ when $x=x_j$. But, how can a variable be equal to an object tied to the index?
I am not sure I got your question right, but maybe my explanation helps anyway. You have a given pool of numbers, which we call $x_1,\dots,x_n$. For any $i \in \{1,\dots,n\}$ you define a function $f_i$ with the property that $f_i(x_i) = 1$ and $f_i(x_j) = 0$ for $j \neq i$. When defining $f_i$ the variable $j$ is not in use, so we can use it as an index. However, when evaluating $f_i(x_j)$, the variable $j$ is in use, so you should use another index like $k$ when plugging $x_j$ into the defining term...
Examples of division algebras
This is a summary of the construction of a non-cyclic division algebra of degree four from Nathan Jacobson's book Finite-Dimensional Division Algebras over Fields. Jacobson tells us that Albert was the first to construct such division algebras, and the presented construction may (?) be a modification of his method. The construction: Let $D(F,\alpha,\beta)$ be the quaternion algebra with center $F$, and basis $\{1,u,v,uv\}$, where $u^2=\alpha$, $v^2=\beta$ and $uv=-vu$. Let $F_0$ be a subfield of the reals, and $F=F_0(\xi,\eta)$ a purely transcendental extension with $\xi,\eta$ algebraically independent over $F_0$. Let $D_1=D(F,\eta,\eta)$ and $D_2=D(F,\xi,\xi\eta)$. Then $D=D_1\otimes_FD_2$ is a non-cyclic division algebra of degree four with center $F$ (variations are possible, and we may use as $\beta$-elements polynomials in $\xi$ and $\eta$ such that the parities of the exponents of the leading term are as above). Why is it a division algebra? Here the key step is that the tensor product of two quaternion division algebras over a field $F$ is NOT a division algebra if and only if $D_1$ and $D_2$ contain isomorphic quadratic extensions of $F$ as subfields. This condition can be re-expressed in terms of the reduced norms as follows. Let $D_i', i=1,2$ be the kernels of the reduced trace maps of $D_1,D_2$ respectively, that is, the $F$-spans of the respective sets $\{u,v,uv\}$. Then on $D_1'\oplus D_2'$ we can define a quadratic form $n$ by the recipe $n(x_1,x_2)=n_1(x_1)-n_2(x_2)$. The reformulation says that $D_1\otimes_F D_2$ is a division algebra, iff $n$ is anisotropic. The equivalence of these two conditions is easy to believe. For if $n_1(x_1)=n_2(x_2)$ for some $x_i\in D_i', i=1,2,$ then the quadratic fields $F(x_1)$ and $F(x_2)$ are isomorphic. The other direction is not too difficult either. Why is it not cyclic? This depends on a Lemma due to Albert: If $F$ is a field, $\sqrt{-1}\notin F$, and $E/F$ is a cyclic quartic extension, then the unique quadratic intermediate field is of the form $F(\sqrt{u^2+v^2})$, where $u,v\in F$ and (obviously) $u^2+v^2$ is a non-square of $F$. This leads to the idea. If $D\otimes_F K$ remains a division algebra in every extension of scalars from $F$ to $K=F(\sqrt{u^2+v^2})$, then $D$ can't contain a copy of $K$, and hence, by Albert's Lemma, won't contain a cyclic quartic extension field either. Jacobson then proceeds to prove that this holds with the above $D$. The parity constraint mentioned above saves the day, as using it allows one to show that the quadratic form $n$ remains anisotropic under quadratic extensions of scalars of the prescribed type. I'm afraid this is about as far as I have ever made it in Jacobson's book. I'm not very conversant with the details here. Anyway, I hope this gives you an idea of what tools and tricks the construction requires. All this takes a bit over four pages in the book.
Tell whether $\dfrac{10^{91}-1}{9}$ is prime or not?
Just think through the actual number. $10^{91}$ is a $1$ with $91$ $0$'s after it. $10^{91}-1$ is therefore $91$ $9$'s in a row. $\frac{10^{91}-1}{9}$ is therefore $91$ $1$'s in a row. Due to the form of this number, $x$ $1$'s in a row will divide it, where $x$ is a divisor of $91$. For example $1111111$ is a divisor, so is $1111111111111$. Hence the number is not prime.
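You can confirm both the divisibility and the compositeness directly in Python:

    repunit = lambda n: (10**n - 1) // 9   # n ones in a row

    N = repunit(91)
    assert N % repunit(7) == 0 and N % repunit(13) == 0  # 7 and 13 divide 91
    print(1 < repunit(7) < N)  # True: a proper divisor, so N is not prime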
Make the entries of a matrix positive with linear algebra
Given an $m \times n$ matrix $A$ and an $m$-dimensional column vector $b$, asking whether there is an $n$-dimensional column vector $x$ such that $$Ax=b$$ is a fundamental question in linear algebra. You can use Gaussian elimination to work out whether $b$ is in the column space of $A$ and find a solution vector $x$ in the case that it is. Seemingly more generally, you could start with $A$ an $m \times n$ matrix and $B$ an $m \times k$ matrix and ask whether there exists an $n \times k$ matrix $X$ such that $$AX=B.$$ This isn't actually a more general question though. You just have to ask the above question $k$ times, once for each column of $B$. As long as all $k$ columns of $B$ belong to the column space of $A$, you just populate the columns of $X$ with any $k$ solution vectors. It seems to me that your question fits into this framework. Added: To spell things out a bit more, given $A$ and $B$, there exists $X$ such that $AX=B$ if and only if the column space of $B$ is contained in the column space of $A$. Similarly (this is the same statement, up to taking the transpose) there exists $X$ such that $XA=B$ if and only if the row space of $B$ is contained in the row space of $A$. It's not always going to work out that way in the situations you are interested in: Example: The column space of $M=\begin{bmatrix}1 \\-1\end{bmatrix}$ does not contain the column space of $\operatorname{abs}(M) =\begin{bmatrix}1 \\1\end{bmatrix}$, so there does not exist $Q_M$ with $MQ_M=\operatorname{abs}(M)$. Another example: the row space of $M=\begin{bmatrix}1 & 2 \end{bmatrix}$ does not contain the row space of $\operatorname{sgn}(M)=\begin{bmatrix} 1 & 1 \end{bmatrix}$, so there does not exist $P_M$ with $P_MM=\operatorname{sgn}(M)$.
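Numerically, one way to test whether such an $X$ exists is to solve in the least-squares sense and inspect the residual; a minimal numpy sketch using the first counterexample above:

    import numpy as np

    M = np.array([[1.0], [-1.0]])
    B = np.abs(M)  # abs(M) = [[1], [1]]

    # Solve M @ X = B in the least-squares sense and check the residual.
    X, *_ = np.linalg.lstsq(M, B, rcond=None)
    residual = np.linalg.norm(M @ X - B)
    print(residual)  # ~1.414, far from 0: no exact solution exists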
Probability that a five-card poker hand contains exactly two aces if it has exactly one face card
Let $T$ be the event the hand has exactly $2$ Aces, and let $F$ be the event the hand has exactly $1$ face card. We want $\Pr(T\mid F)$. By the definition of conditional probability, we have $$\Pr(T\mid F)=\frac{\Pr(T\cap F)}{\Pr(F)}.$$ It remains to find the probabilities on the right. For the probability of exactly one face card, note that there are $\binom{52}{5}$ equally likely poker hands. To count the hands with exactly $1$ face card, note that the face card can be chosen in $\binom{12}{1}$ ways, and for each such way the $4$ non-face cards can be chosen in $\binom{40}{4}$ ways. Thus $$\Pr(F)=\frac{\binom{12}{1}\binom{40}{4}}{\binom{52}{5}}.$$ We now calculate $\Pr(T\cap F)$. The $2$ Aces can be chosen in $\binom{4}{2}$ ways, and then the face card in $\binom{12}{1}$ ways. The $2$ remaining non-face, non-Ace cards can be chosen in $\binom{36}{2}$ ways. Thus $$\Pr(T\cap F)=\frac{\binom{4}{2}\binom{12}{1}\binom{36}{2}}{\binom{52}{5}}.$$ Divide. There is a great deal of cancellation, and we end up with $$\Pr(T|F)=\frac{\binom{4}{2}\binom{36}{2}}{\binom{40}{4}}.$$ Remark: There are slicker ways to do the calculation. For given that there is one face card, the only thing that matters is the remaining $4$ cards. There are $\binom{40}{4}$ ways to choose these $4$ cards from the $40$ non-face cards. And there are $\binom{4}{2}\binom{36}{2}$ ways to choose $2$ Aces and $2$ other non-face cards. That gives a quick path to the final answer. However, it is useful to just let the full machinery operate.
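The final expression is easy to evaluate with Python's math.comb; both the long form and the slick form give the same value:

    from math import comb

    p_long = (comb(4, 2) * comb(12, 1) * comb(36, 2)) / (comb(12, 1) * comb(40, 4))
    p_slick = comb(4, 2) * comb(36, 2) / comb(40, 4)
    print(p_long, p_slick)  # both ~ 0.0414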
How to round "correctly" (to certain level of accuracy)
If it were 0.11488 you would round to 0.115, and that does not imply that it is exactly 0.115. I think you are just confused because the rounded number ends in 0. So, in your example, you should write 0.740.
Relationship between Spectral Decomposition, Positive Definite Matrix and Quadratic Form
The connection is that every quadratic form can be written in the form $$ Q(x) = x^TAx $$ for the correct real symmetric matrix $A$. For example, for quadratic forms in $2$ variables, we can write $$ Q(x_1,x_2) = ax_1^2 + bx_1x_2 + cx_2^2 = \\ \pmatrix{x_1&x_2} \pmatrix{a&b/2\\b/2&c} \pmatrix{x_1\\x_2} $$ Spectral decompositions give us a "change of variables" that makes quadratic forms easier to understand. In particular, suppose that we have $Q(x) = x^TAx$. $A$ has spectral decomposition $A = UDU^T$ for orthogonal matrix $U$ and diagonal matrix $D$ whose diagonal entries are $\lambda_1,\dots,\lambda_n$. We now have $$ Q(x) = x^T U DU^Tx = (U^Tx)^T D (U^Tx) $$ that is, if we make the substitution $y = U^Tx$ (which is to say $x = Uy$) and define the simpler quadratic form $Q'(x) = x^TDx$, we have $$ Q(x) = Q'(y) = y^TDy = \lambda_1 y_1^2 + \cdots + \lambda_n y_n^2 $$ For example, an important consequence we can gather is Rayleigh's theorem, which says that if $\lambda_1$ is the lowest eigenvalue, then the expression $Q(x)/\|x\|^2$ is minimized when $y_2,\dots,y_{n} = 0$.
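A small numpy illustration of the change of variables (the symmetric matrix is random; numpy's eigh supplies $U$ and the $\lambda_i$):

    import numpy as np

    rng = np.random.default_rng(1)
    B = rng.standard_normal((3, 3))
    A = (B + B.T) / 2                      # random real symmetric matrix
    lam, U = np.linalg.eigh(A)             # A = U @ diag(lam) @ U.T

    x = rng.standard_normal(3)
    y = U.T @ x                            # the substitution y = U^T x
    Q = x @ A @ x
    Q_diag = np.sum(lam * y**2)            # lambda_1 y_1^2 + ... + lambda_n y_n^2
    assert np.isclose(Q, Q_diag)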
Inverse Laplace transform of complicated function shift
By using \begin{align} \delta(t - a) &\Doteq e^{-as} \\ e^{at} \, \sin(b t) &\Doteq \frac{b}{(s-a)^2 + b^2} \\ \int_{0}^{t} f(t-u) \, g(u) \, du &\Doteq \overline{f}(s) \, \overline{g}(s) \end{align} then \begin{align} \mathcal{L}^{-1}\left\{ \frac{e^{-as}}{(s-a)^2 + 2} \right\} &= \frac{1}{\sqrt{2}} \, \mathcal{L}^{-1}\left\{ e^{-as} \cdot \frac{\sqrt{2}}{(s-a)^2 + (\sqrt{2})^2} \right\} \\ &= \frac{1}{\sqrt{2}} \, \int_{0}^{t} \delta(u-a) \, e^{-3(t-u)} \, \sin(\sqrt{2}(t-u)) \, du \\ &= \frac{1}{\sqrt{2}} \, e^{-3(t-a)} \, \sin(\sqrt{2} \, (t-a)). \end{align}
Why are right angles important
Right angles give us a convenient system of orthogonality, which helps us break down bigger things into components that can be analyzed independently. Think of how in physics, when we calculate "work done", we can neglect all components of a force which are orthogonal to the direction of displacement. When we started the primitive business of measuring things, we encountered tons of objects which stood "perpendicularly" on the ground, in some loose sense of the word. Tall trees, hills, take your pick. People realized that taller objects create larger shadows, and at some point, this correlation led them to the question of whether this relation could be used to measure how tall things are. If a $1$m stick creates a $10$cm shadow in the afternoon, how tall is the huge tree which has a $10$m shadow at the same point of the day? This gave rise to trigonometry. People built homes, pyramids, and so on using all these clever techniques.
How to linearize a weighted average with a decision variable?
This can be linearized, but with some effort. The ratio $$y=\frac{\sum_i f_i w_i x_i}{\sum_i w_i x_i}$$ with $x_i \in \{0,1\}$ can be written as a (nonlinear) constraint: $$ y \sum_i w_i x_i = \sum_i f_i w_i x_i$$ where $y$ is an additional continuous variable. The nonlinear expression $(y x_i)$ is a continuous variable times a binary variable. I assume $y\ge 0$. We can now linearize $z_i=y x_i$ as: $$\begin{align} &z_i \le M x_i\\ &z_i \le y \\& z_i \ge y - M(1-x_i)\\ &z_i \ge 0\end{align} $$ Here $M$ is an upper bound on $y$. We have $M=1$ because of the values that $w_i$ and $f_i$ can assume. So putting things together we have: $$\begin{align} \max\> & y - \sum_i c_i x_i\\ & \sum_i w_i z_i = \sum_i f_i w_i x_i\\ & 0 \le z_i \le x_i \\ & y-(1-x_i) \le z_i \le y \\ &y\ge 0\\ & x_i \in \{0,1\} \end{align}$$ I cannot replicate your stated optimal solution. Your solution has:

---- 30 VARIABLE obj.L = -0.240
---- 30 VARIABLE x.L original variables
i2 1.000, i4 1.000

When I solve it, I get a better solution:

---- 48 VARIABLE obj.L = -0.200
---- 48 VARIABLE x.L original variables
i4 1.000, i5 1.000
---- 48 VARIABLE y.L = 0.600 ratio
---- 48 VARIABLE z.L products y*z(i)
i4 0.600, i5 0.600

The objective for $x=[0,1,0,1,0]$ is $-0.24$ while my optimal $x=[0,0,0,1,1]$ gives an objective value of $-0.2$. (Assuming no typos in transcribing the problem and data.) A similar problem is formulated here.
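For completeness, here is a minimal sketch of the linearized model in Python with PuLP; the data vectors below are placeholders, not the original data from the question:

    import pulp

    # Placeholder data with f_i, w_i in [0, 1] so that M = 1 is a valid bound.
    f = [0.3, 0.9, 0.2, 0.8, 0.4]
    w = [0.5, 0.7, 0.2, 0.9, 0.6]
    c = [0.4, 0.3, 0.5, 0.2, 0.2]
    n = len(f)

    prob = pulp.LpProblem("weighted_average", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n)]
    z = [pulp.LpVariable(f"z{i}", lowBound=0) for i in range(n)]  # z_i = y*x_i
    y = pulp.LpVariable("y", lowBound=0)

    prob += y - pulp.lpSum(c[i] * x[i] for i in range(n))   # objective
    prob += pulp.lpSum(w[i] * z[i] for i in range(n)) == pulp.lpSum(
        f[i] * w[i] * x[i] for i in range(n))                # ratio constraint
    for i in range(n):
        prob += z[i] <= x[i]                # z_i <= M x_i with M = 1
        prob += z[i] <= y
        prob += z[i] >= y - (1 - x[i])      # z_i >= y - M(1 - x_i)

    prob.solve()
    print(pulp.value(prob.objective), [pulp.value(v) for v in x])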
Simple Set theory question and reference request
Wrong. Counterexample for $C\cup B = B$ (while $A\cap (C\cup B)=A\cap B$): $A=\{1,2,3\}, B=\{1,2\}$ and $C=\{1,4\}$. $A\cap (C\cup B)=(A\cap C)\cup (A\cap B)=A\cap B$. This implies $(A\cap C)\subset (A\cap B)$. Reference for set theory: Thomas Jech's book, named "Set Theory". See also here.
Simplify the ring $\mathbb Z[\sqrt{-13}]/(2)$
Make the substitution $y = x + 1$ to get an isomorphism $(\mathbb Z/2\mathbb Z)[x] \cong (\mathbb Z/2\mathbb Z)[y]$. The answer is $(\mathbb Z/2\mathbb Z)[y]/(y^2)$.
Is there an efficient algorithm to decide whether a particular subset of given integers exists?
1. If a good subsequence of $a_1,\ldots,a_k$ exists then the sum of all $a_i$ must be greater than or equal to $A$. So in the following we assume that the sum of the sequence is greater than or equal to $A$.

2. If the sum of the sequence is greater than or equal to $A$ then a good subsequence can be calculated in linear time. Simply scan through the elements of the sequence and remove an element from the sequence if the remaining elements sum up to a value greater than or equal to $A$. When you have scanned through the whole sequence, the remaining elements sum to a value greater than or equal to $A$, but if you remove any one of them the sum of the remaining elements will be less than $A$. The sequence you find with this algorithm may have 1, 2, 3 or more elements. If you end with 1 or 2 elements you don't know whether a good sequence with 3 or more elements exists.

3. If the sum of each pair of elements (with different indexes) of a sequence is less than $A$, then the algorithm described in 2 will find a good sequence with 3 or more elements in linear time, because it cannot end with one or two elements.

4. If $s_3$ is a good subsequence with three or more elements then it cannot have two elements that sum up to a value greater than or equal to $A$.

5. If $s_1$ is a subsequence of $s$ and $s_2$ is a subsequence of $s_1$ (and therefore a subsequence of $s$), then $s_2$ is good with respect to $A$ and $s$ if and only if $s_2$ is good with respect to $s_1$ and $A$.

6. Define $ss(s,i,A)$ as the subsequence of a sequence $s$ such that $a_k$ is in this subsequence either if $k=i$ or if $i \lt k$ and $a_i+a_k \lt A$.

7. If $s_2$ is a good subsequence of a sequence $s$ with respect to $A$ and $a_i$ is its largest element, then $s_2$ is a good subsequence of $ss(s,i,A)$ with respect to $A$.

8. A good subsequence of $ss(s,i,A)$ can be found in linear time as described in 3. A good subsequence of $ss(s,i,A)$ with respect to $A$ must have three or more elements.

9. For all indexes $i$ of $s$ check if there is a good subsequence in $ss(s,i,A)$. If there is none for any $i$, then there is no such subsequence. If there is one, you have found it.

Example 1: If $s$ is the sequence $8,8,7,6,1,1$ and $A=9$ this works in the following way: we investigate $$ss(s,1,9)=8$$ $$ss(s,3,9)=7,1,1$$ $$ss(s,4,9)=6,1,1$$ $$ss(s,5,9)=1,1$$ The first two have less than 3 elements and the third and the following one do not sum up to 9 or higher. So there is no good subsequence.

Example 2: If $s$ is the sequence $8,8,7,6,2,2,1,1$ and $A=9$ this works in the following way: we investigate $$ss(s,1,9)=8$$ $$ss(s,3,9)=7,1,1$$ $$ss(s,4,9)=6,2,2,1,1$$ $$ss(s,5,9)=2,2,1,1$$ $$ss(s,7,9)=1,1$$ Only the third one is of interest. We scan through $6,2,2,1,1$ from the beginning. We cannot drop $6$, but we can drop the first $2$ and the first $1$, and we end with the good subsequence $6,2,1$.

From this we construct the following algorithm, which needs $O(k)$ time.

Input: All numbers are integers. We are given $k$ such that $$k \ge 3 \tag{1}$$ a finite sorted sequence $a_1,\ldots, a_k$, so $$a_1\ge a_2\ge\ldots\ge a_k \gt 0 \tag{2}$$ and a number $A$ such that $$A \gt a_1$$

Decision Algorithm

Initialization

Initialize $u$: we set \begin{align} &u:=1\\ &\text{while }(a_u+a_{k-1}\ge A) \text{ and } (u\lt k-2) \\ &\qquad u:=u+1 \\ \end{align} If now $a_u+a_{k-1}\ge A$, then there exists no $u$ with $a_u+a_{k-1} \lt A$, and therefore there exists no good subset with at least $3$ elements. The algorithm will terminate.
Otherwise it continues:

Initialize $v$: \begin{align} &\text{tailsum}:=a_{k-1}+a_k\\ &v:=k-1\\ &\text{while }(u \lt v-1) \text{ and } (a_u+a_{v-1} \lt A)\\ &\qquad v:=v-1\\ &\qquad \text{tailsum}:=\text{tailsum}+a_{v}\\ \end{align} loop invariant: $$\text{tailsum}=\sum_{t=v}^{k}a_t$$

Loop: \begin{align} &\text{while }(u\lt v-1) \text{ and } (a_u+\text{tailsum} \lt A)\\ &\qquad u:=u+1\\ &\qquad\text{while } (u \lt v-1) \text{ and } (a_u+a_{v-1} \lt A)\\ &\qquad\qquad v:=v-1\\ &\qquad\qquad\text{tailsum}:=\text{tailsum}+a_v\\ \end{align} Note that $v$ is decremented but never incremented in this loop. It can only be decremented $k$ times. So this block is executed in $O(k)$ time.

Decision: If we now have $$a_u+\text{tailsum}\lt A$$ then $u=v-1$. We haven't found a pair $(u,v)$ such that $$a_u+\sum_{t=v}^k a_t\ge A$$ until now, and we will not find one when we further increase $u$, because this will decrease $a_u$ and therefore the sum $a_u+\sum_{t=v}^k a_t$. So no good set with three or more elements will exist and the algorithm will terminate here. If the loop terminates with $a_u+\text{tailsum}\ge A$ then $$\{u, v, v+1, \ldots, k\}$$ will have a good subset and this will have $3$ or more elements.

Construction of the good set: \begin{align} &\text{if } \text{tailsum} \lt A \text{ then}\\ &\qquad G:=\{u\}\\ &\qquad\text{sum}:=a_u+\text{tailsum}\\ &\text{else}\\ &\qquad G:=\{\}\\ &\qquad\text{sum}:=\text{tailsum}\\ &t=v\\ &\text{while } (t \le k)\\ &\qquad \text{if } \text{sum} - a_t \lt A \text{ then}\\ &\qquad\qquad G:=G\cup \{t\}\\ &\qquad\text{else}\\ &\qquad\qquad\text{sum}:=\text{sum}-a_t\\ &\qquad t=t+1\\ \end{align} loop invariants: $$\sum_{t\in G}a_t+\sum_{t=v}^k a_t=\text{sum}$$ $$\sum_{t\in G}a_t+\sum_{t=v}^k a_t \ge A$$ $$\sum_{t\in G\setminus \{r\}}a_t+\sum_{t=v}^k a_t \lt A,\; \forall r \in G$$

    def find_good(a):
        A = a[0]  # python lists start with index 0
        k = len(a) - 1
        if k < 3:
            return None
        # initialize u
        u = 1
        while (a[u] + a[k-1] >= A) and (u < k-2):
            u = u + 1
        if a[u] + a[k-1] >= A:
            return None
        # initialize v
        v = k - 1
        tailsum = a[k-1] + a[k]
        while (a[u] + a[v-1] < A) and (u < v-1):
            v = v - 1
            tailsum = tailsum + a[v]
        # loop
        while (u < v-1) and (a[u] + tailsum < A):
            u = u + 1
            while (u < v-1) and (a[u] + a[v-1] < A):
                v = v - 1
                tailsum = tailsum + a[v]
        # decision
        if (a[u] + tailsum) < A:
            # no solution exists
            return None
        # construction of a good set
        if tailsum < A:
            G = [u]
            total = a[u] + tailsum
        else:
            G = []
            total = tailsum  # as in the pseudocode: sum := tailsum
        t = v
        while t <= k:
            if total - a[t] < A:
                G.append(t)
            else:
                total = total - a[t]
            t = t + 1
        # prepare return value
        H = [a[i] for i in G]  # H is a[g1], a[g2], ...
        return (G, H)

Here is a link to the program: https://repl.it/@miracle173/findgood2
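For instance, calling the function on Example 2 (note that the first list entry plays the role of $A$, followed by the sorted sequence):

    print(find_good([9, 8, 8, 7, 6, 2, 2, 1, 1]))
    # ([3, 7, 8], [7, 1, 1]): index positions and values of a good subsequence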
Calculate $100^{1207} \mod 63$
Note that $$100^3\equiv 1 \mod 63$$
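A quick check of the hint, and of what it yields (since $1207 = 3\cdot 402 + 1$, we get $100^{1207}\equiv 100\equiv 37 \pmod{63}$):

    print(pow(100, 3, 63), pow(100, 1207, 63))  # 1 37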
A function on an LCH space that is sequentially continuous but nowhere continuous
Let $X=\beta\omega\setminus\omega$; $X$ is compact Hausdorff. Moreover, $X$ has no non-trivial convergent sequences, so every function on $X$ is sequentially continuous. Finally, $w(X)=2^\omega$, so let $\mathscr{B}=\{B_\xi:\xi<2^\omega\}$ be a base for $X$. Let $\{\langle\alpha_\xi,i_\xi\rangle:\xi<2^\omega\}$ enumerate $2^\omega\times 2$. Given $\eta<2^\omega$ and distinct points $x_\xi\in X$ for $\xi<\eta$, let $x_\eta$ be any point of $B_{\alpha_\eta}\setminus\{x_\xi:\xi<\eta\}$; this is possible, since $|B_{\alpha_\eta}|=2^{\mathfrak{c}}$. Thus, we can recursively construct $X_0=\{x_\xi:\xi<2^\omega\}$ such that the points $x_\xi$ are distinct, and $x_\xi\in B_{\alpha_\xi}$ for each $\xi<2^\omega$. Now define $$f:X\to 2:x\mapsto\begin{cases} i_\xi,&\text{if }x=x_\xi\\ 0,&\text{if }x\in X\setminus X_0\;. \end{cases}$$ Then $f^{-1}[\{0\}]$ and $f^{-1}[\{1\}]$ are both dense in $X$, so $f$ is not continuous.
How can I make a series expansion of $F(x) = \int_0^x \exp(-t^2)\, dt$?
\begin{align} F'(x) &= \exp\left(- \frac {x^2}2\right) = \sum_{n = 0}^\infty \frac{(-x^2/2)^n}{n!} \\ &= \sum_{n = 0}^\infty \frac{(-1)^n}{2^nn!} x^{2n} \\ \implies F(x) &= \sum_{n = 0}^\infty \frac{(-1)^n}{2^nn!} \frac{x^{2n+1}}{2n+1} \end{align}
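A sympy cross-check of the first few terms (note that, as above, this expands the antiderivative of $\exp(-x^2/2)$):

    import sympy as sp

    x, t = sp.symbols('x t')
    F = sp.integrate(sp.exp(-t**2 / 2), (t, 0, x))
    print(sp.series(F, x, 0, 8))  # x - x**3/6 + x**5/40 - x**7/336 + O(x**8)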
Understanding the Pauli-Y gate in the Bloch sphere
The state vector $$ |\Psi\rangle=\cos \psi/2 |0\rangle + \sin \psi/2 ~e^{i\theta} |1\rangle = \begin{pmatrix} \cos \psi/2 \\ e^{i\theta} \sin \psi/2 \end{pmatrix}$$ defines a pure state density matrix through its projection operator, $$\bbox[yellow]{ |\Psi\rangle \langle \Psi | = \begin{pmatrix} \cos^2 \psi/2 & \sin \psi/2 ~ \cos\psi/2 ~e^{-i\theta} \\ \sin \psi/2 ~ \cos\psi/2 ~e^{i\theta} & \sin^2 \psi/2 \end{pmatrix}=\rho }~. $$ Note the manifest invariance under over-all rephasing of $|\Psi\rangle$. The general principles' expression of this idempotent hermitean density matrix is also, evidently, $$ \rho=\frac{1}{2}(1\!\! 1 + \hat n \cdot \vec \sigma) , $$ with $\hat n = (\sin \psi \cos \theta, \; \sin \psi \sin \theta, \; \cos \psi)^T. $ It is now obvious that $-i|0\rangle$ corresponds to the same point of the sphere as $|0\rangle$: it is just its rephasing by an over-all angle of $\pi/2$. On the Bloch sphere, Y has sent (rotated by π/2) the x-axis (1,0,0) to the z-axis (0,0,1).
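A tiny numpy check that $-i|0\rangle$ and $|0\rangle$ define the same density matrix, hence the same Bloch point:

    import numpy as np

    ket0 = np.array([1, 0], dtype=complex)
    psi = -1j * ket0                       # rephased state
    rho0 = np.outer(ket0, ket0.conj())
    rho = np.outer(psi, psi.conj())
    assert np.allclose(rho, rho0)          # the phase drops out of |psi><psi|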
Why is $5 + 5z + 5z^2 + ... + 5z^{11} = \frac{(5z^{12} - 5)}{(z - 1)}$?
For $z\neq 1$ $$5 + 5z + 5z^2 + … + 5z^{11} = \frac{(5z^{12} - 5)}{(z - 1)}\iff \\\iff (z-1)(5 + 5z + 5z^2 + … + 5z^{11}) = (5z^{12} - 5)$$ which is true by direct inspection; indeed $$z\cdot (5 + 5z + 5z^2 + … + 5z^{11} ) = 5z + 5z^2 + … + 5z^{12} $$ $$-1\cdot (5 + 5z + 5z^2 + … + 5z^{11}) = -5 - 5z - 5z^2 - … - 5z^{11} $$ then sum up.
Exponential function word problem
You need to pay more attention to the wording of the problem. You're given the average cost and are asked to find the marginal cost; you made the mistake of thinking that you had to differentiate the average cost to get the marginal cost. The first thing you need to do is find the cost function given that we know the average cost function. Recall that average cost is defined as $$\overline{C}(x) = \frac{C(x)}{x}$$ where $C(x)$ is your cost function. So, from your problem, it follows that if the average cost function is $$\overline{C}(q) = \frac{870}{q} + 3500\frac{e^{(3q+4)/820}}{q}$$ then the cost function $C(q)$ is $$C(q) = q\cdot\overline{C}(q) = 870 + 3500e^{(3q+4)/820}.$$ Now you can go ahead and find the marginal cost when $q=99$; i.e. compute $C^{\prime}(99)$.
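If you want to check the resulting number, a short sympy computation of $C'(99)$:

    import sympy as sp

    q = sp.symbols('q', positive=True)
    C = 870 + 3500 * sp.exp((3*q + 4) / 820)
    marginal = sp.diff(C, q)               # (10500/820) * exp((3q+4)/820)
    print(marginal.subs(q, 99).evalf())    # about 18.48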
Prove that the order of an element in a cyclic group must divide the order of the group
Let the order of the group be $n$. Since the group is cyclic, any element is of the form $g^k$; we then need to find the smallest value $l$ so that $(g^k)^l=1$, i.e. $g^{kl}=1$. This only happens if $n$ divides $kl$, since $g$ has order $n$. Therefore we need to find the smallest positive $l$ so that $n\mid kl$; this is clearly going to be a divisor of $n$, since otherwise we could take away some unnecessary primes from $l$ to make it into $l'$ so that $kl'$ is still a multiple of $n$ but smaller.
Show that $N(A^T A) \subset N(A)$
If $ A^T A b = 0, $ then $$ b^T A^T A b = 0, $$ so $$ (Ab)^T (Ab) = 0. $$ What does that tell you about $Ab?$ If $v$ is any column vector, what is $v^T v?$
Why the dramatic difference in the arc tangent?
There are parentheses missing. You probably intended ATAN((y2 - y1) / (x2 - x1)).
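As an aside, most languages (and many spreadsheets) offer a two-argument arctangent that also handles the quadrant and the vertical case; e.g. in Python:

    import math

    x1, y1, x2, y2 = 0.0, 0.0, -1.0, 1.0    # sample points
    print(math.atan((y2 - y1) / (x2 - x1)))  # -pi/4: wrong quadrant
    print(math.atan2(y2 - y1, x2 - x1))      # 3*pi/4: the correct direction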
How to prove this using a combinatorial proof?
Imagine choosing $q+r+1$ numbers from $1,...,p+q+r+1$ such that the $(r+1)$-th smallest number is $r+k+1$. Choose $r$ numbers from $1,...,r+k$: $\binom{r+k}{r}$ ways. $r+k+1$ is already determined as the $(r+1)$-th number. Then choose the remaining $q$ numbers from $r+k+2,...,p+q+r+1$: $\binom{p+q-k}{q}$ ways. In total, there are $\binom{p+q-k}{q}\binom{r+k}{r}$ possibilities. Now what if we sum over all possible values of $k$? We get the number of all possible $q+r+1$ combinations from $1,...,p+q+r+1$. Isn't that $\binom{p+q+r+1}{q+r+1}=\binom{p+q+r+1}{p}$?
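The identity is easy to spot-check numerically (small values of $p$, $q$, $r$, chosen arbitrarily):

    from math import comb

    p, q, r = 4, 3, 2  # arbitrary small values
    lhs = sum(comb(p + q - k, q) * comb(r + k, r) for k in range(p + 1))
    print(lhs == comb(p + q + r + 1, q + r + 1) == comb(p + q + r + 1, p))  # True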
What does this notation mean for scheme morphisms?
The category of affine schemes is equivalent to the opposite of the category of commutative rings, so to specify a morphism between two affine schemes it suffices to specify a morphism in the other direction between their rings of functions. For affine space over an arbitrary base ring $S$, a morphism $$f : \mathbb{A}^n \ni (x_1, ... x_n) \mapsto (f_1, ... f_m) \in \mathbb{A}^m$$ where $f_1, ... f_m$ are polynomials with coefficients in $S$ is merely the morphism corresponding to the ring morphism $S[y_1, ... y_m] \to S[x_1, ... x_n]$ sending $y_i$ to $f_i$. Note that $S$ does not even need to be a field, much less an algebraically closed field.
Tangent bundle of an algebraic group
The canonical map $k[\epsilon]\to k$ induces a map $ G(k[\epsilon])\to G(k)$, and we have an exact sequence $$ 0\to \mathrm{Lie}(G) \to G(k[\epsilon])\to G(k)\to 1.$$ This can be taken as a definition of $\mathrm{Lie}(G)$. It is known that $\mathrm{Lie}(G) $ has canonically a structure of $k$-vector space and is isomorphic to the tangent space $T_{G, e}$ of $G$ at the unit $e\in G$. On the other hand, the tangent bundle $T_{G/k}$ (the dual of the differentials) is free (as $O_G$-module) and satisfies $$T_{G/k}\simeq \mathrm{Lie}(G)\otimes_k O_G \simeq T_{G,e}\otimes_k O_G.$$
Maximization of $f(\theta)=\frac{1}{\sqrt{2\pi c\theta}}e^{-\frac{1}{2c\theta}(x-\theta)^2}$ with inequality constraints
Hint. What you have done is right. Then solving the quadratic equation gives two potential solutions $$ \theta_0=-\frac12 \left(\sqrt{c^2+4 x^2}+c\right), \qquad \theta_1=\frac12 \left(\sqrt{c^2+4 x^2}-c\right). $$ Which $\theta_i$ satisfies $0\le\theta_i \le1$ ?
Proving for every odd number $x$, $x^2$ is always congruent to $1$ or $9$ modulo $24$
The original poster, Bob, said in a comment: The question asks if $x^2$ is equivalent to 1mod24 or 9mod24 where $x$ is an odd integer. However, based on the responses here it is actually asking if $x^2$mod24 is equivalent to 1mod24 or 9mod24. I think this comment is revealing the real source of OP's difficulty with this question. First, the "mod" does not apply to a number or to an expression like $x$ or $x^2$. It applies to the equivalence. When we say that 289 is equivalent to 1 (mod 24) we are not talking about some special kind of 1 called "1mod24". We are not talking about a special kind of 289 either. They are the same 1 and the same 289 as always. The "mod 24" applies to "equivalent": it is the equivalence that is mod 24: 289 and 1 are equivalent (mod 24) but are not equivalent (mod 17); that is a different kind of equivalence. A better way to say it, maybe, would be to say "1 is equivalent, mod 24, to 289". There is a special kind of "equivalence mod 24", and that is what is meant here: If $n$ is odd, then $n^2$ is equivalent (mod 24) to 1 or 9. Similarly, the notation is misleading. We write $$289\equiv1\pmod{24},$$ which suggests that the $\pmod{24}$ applies to the 1. But it doesn't. It applies to the $\equiv$. A better notation might be something like $$\def\mtf{\stackrel{\mod\!\!{24}}\equiv}289\mtf1.$$ But that's not how we write it, because that's not how it has been written in the past. So that's one problem: you are confused by the notation and the terminology. There is no such thing as "1 mod 24" or "$x^2$ mod 24", because the "mod 24" is talking about whether two things are equivalent in a certain way. Specifically, two things are equivalent (mod 24) when they differ by a multiple of 24. A second problem is that we abuse the notation and we do sometimes speak of “the value of $x^2$, mod 24”. I did this myself in the header of the table in my other answer: $$\begin{array}{r|r|l} n & n^2 & \color{darkred}{n^2\pmod{24}} \\\hline \end{array}$$ When we speak of something like “the value of $x^2$, mod 24”, what we mean is this: It is not hard to see that every number is equivalent (mod 24) to one of $0, 1, 2, \ldots,\text{ or }23$, which is called its "residue". For all mod-24-related purposes the residue behaves just like the original number, so for mod-24 purposes, we can pretend that the residue is the original number. Any time you have some equivalence mod 24 that involves the number 289 somewhere, you can replace the 289 with its mod-24 residue 1, because 289 is equivalent (mod 24) to 1. And that last bit may be the essential piece you are missing in all this, a deep theorem that is crucial to all work with modular equivalence: If $x$ and $y$ are equivalent (mod 24), and you have some expression involving $x$, then the value of that expression is equivalent (mod 24) to the value you get if you replace $x$ with $y$, or with anything else that is equivalent (mod 24) to $x$. If $x$ and $y$ are equivalent (mod 24), they are interchangeable as far as equivalence-mod-24 is concerned. In particular, if $x$ and $y$ are equivalent (mod 24) then so are their squares $x^2$ and $y^2$. Because certainly $$x^2\mtf x^2$$ and if we replace that $x$ on the right-hand side with the equivalent (mod 24) value $y$ we get $$x^2\mtf y^2.$$ Similarly, suppose we want to know if $$289\mtf 1\text{ or } 9 ?$$ is true. Instead of dealing with 289, we can deal with its mod-24 residue 1, because 289 and 1 are equivalent (mod 24).
So we can replace the 289 with its mod-24 residue 1, and the result is $$1\mtf 1\text{ or } 9 ?$$ which is obviously true. And again: suppose we know that some number $n$ is equivalent (mod 24) to 17: $$n\mtf 17.$$ Then $$\begin{array}{rcl} n^2& \mtf & 17^2 \\ & = & 289\\ & \mtf & 1 \end{array}$$ So if $n$ is any number equivalent (mod 24) to 17, then $n^2$ is equivalent (mod 24) to 1. This is why we can argue like this: “Since any odd number is equivalent (mod 24) to one of 1, 3, 5, …, or 23, it's enough to check that each of these 12 numbers has the required property, of having a square that is equivalent (mod 24) to 1 or to 9. Because if all 12 of those do, then so will any number that is equivalent (mod 24) to one of them, and therefore so will any odd number at all.” For example, how do I now know that $1111^2$ is equivalent (mod 24) to 1 or to 9? It's because $1111$ is equivalent (mod 24) to 7, and so its square, $1111^2$, must be equivalent (mod 24) to $7^2$, which is 49, which is equivalent (mod 24) to 1. So I can know that $1111^2\mtf 1$, without having to do a long calculation. And I can know that once I check the claim (that $n^2$ is equivalent (mod 24) to 1 or to 9) for the 12 residues $1, 3, 5, \ldots, 23$, I know that it is true for every odd integer, because every odd integer is equivalent (mod 24) to its (mod 24) residue which is one of $1, 3, 5, \ldots, 23$.
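The whole argument can be confirmed by brute force in Python:

    print(pow(1111, 2, 24))                               # 1
    print(sorted({x * x % 24 for x in range(1, 24, 2)}))  # [1, 9]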
Viscosity solution for Hamilton-Jacobi equations and local extrema
Here is a short argument proving density of the set of points at which admissible test functions exist. The result requires some continuity of $u$ (semi-continuity is sufficient). Let $u$ be continuous and bounded and fix $x_0$. For $\varepsilon>0$ small let $x_\varepsilon$ be a point at which $$u - \frac{1}{\varepsilon}|x-x_0|^2$$ attains its max (this max exists because $u$ is continuous and bounded; here, upper semi-continuity of $u$ is actually sufficient). So there is an admissible test function for the subsolution property at $x_\varepsilon$. Now show that as $\varepsilon\to 0$ we have $x_\varepsilon \to x_0$, which establishes density.
Proof about uniform continuity
Let $\epsilon > 0$ and $L$ be the limit of $f$ at $\infty$. Choose $M$ so that $|x| > M \Rightarrow |f(x) - L| <\epsilon$. Notice that $\{x\in C\mid |x|\le M\}$ is compact, so $f$ is uniformly continuous there. Can you do the rest?
Length of the perimeter for a quadric surface
You have to find the lengths of the four parabolic arcs formed by the intersection of the planes $$ x=\pm 1, y= \pm 1$$ with a hyperbolic paraboloid $$ z= x^2 - 2 y^2 $$ (there is an error in the given example, but you can take the coefficient of $y^2$ correctly as $2$). The orthogonal projections of the parabolas are of the form: $$ z = a + b x^2,\, z = c + d y^2. $$ The perimeter length is that of the four arcs, $$ 2 (AB+BC). $$ Hope you can take it further.
Proving an inequality without an integral: $\frac {1}{x+1}\leq \ln (1+x)- \ln (x) \leq \frac {1}{x}$
Let $f(x):=\ln(x)$. Then the Mean Value Theorem (for differentiation) says that there exists some $\xi\in (x,x+1)$ such that $$\ln (1+x)- \ln (x) = \frac{\ln (1+x)- \ln (x)}{(1+x)-x}= \frac{f (1+x)- f (x)}{(1+x)-x} =\frac{df}{dx}(\xi) $$ As $\frac{df}{dx}(x)= 1/x$ and the function $1/x$ is monotonic, we know that $$\frac {1}{x+1}\leq\frac{df}{dx}(\xi)\leq \frac {1}{x}$$
How to compute the following relationship with tensor notation?
Note that the indices of the metric tensor are raised in the left-most expression, but lowered in the middle expression. Recall that $$ g^{ij}\,g_{js} = \delta^i_s $$ Take the partial derivative with respect to $q^k$ of both sides and you'll find where the negative sign went. As for the right-most expression, it's simply the middle one where the time term is separated out from the space terms, and written out explicitly. Note the indices! Always pay attention to the indices! :) On a side note, bad notation in that expression. Generally, Greek indices run from 0 to 3 and Latin indices from 1 to 3. That distinction makes it easier to see when the time index is written out explicitly and when not. In the expression you quoted, every index is Latin.
Is this function continuous and if so, what is its norm?
We have by definition $\|p\|\leq 1$ iff $|p(t)|\leq 1$ for all $t\in [0,1]$. So we get $$\sup_{\|p\|\leq 1}|\psi(p)|=\sup_{\|p\|\leq 1}|p(1/2)|= 1,$$ and the last "=" is correct because there are polynomials, e.g. $p=1$, with $p(1/2)=1$ and $\|p\|\leq 1$, while $\|p\|\leq 1$ implies $|p(1/2)|\leq 1$.
Simplify using boolean algebra laws/formulas
The following general equivalence principles are used:

Adjacency: $p = (p \land q) \lor (p \land q')$ and $p = (p \lor q) \land (p \lor q')$

Idempotence: $p = p \lor p$ and $p = p \land p$

Applied to your statement: $ (x'\land y \land z' ) \lor (x' \land z) \lor (x \land y) \overset{Adjacency: \ x' \land z = (x' \land y \land z) \lor (x' \land y' \land z)}{=}$ $ (x'\land y \land z' ) \lor (x' \land y \land z) \lor (x' \land y' \land z) \lor (x \land y) \overset{Idempotence: \ (x' \land y \land z) = (x' \land y \land z) \lor (x' \land y \land z)}{=}$ $ (x'\land y \land z' ) \lor (x' \land y \land z) \lor (x' \land y \land z) \lor (x' \land y' \land z) \lor (x \land y) \overset{Adjacency: \ (x'\land y \land z' ) \lor (x' \land y \land z) = x' \land y}{=}$ $ (x'\land y) \lor (x' \land y \land z) \lor (x' \land y' \land z) \lor (x \land y) \overset{Adjacency: \ (x' \land y \land z) \lor (x' \land y' \land z) = x' \land z}{=}$ $ (x'\land y) \lor (x' \land z) \lor (x \land y) \overset{Adjacency: \ (x'\land y) \lor (x \land y) = y}{=}$ $y \lor (x' \land z)$
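A brute-force truth-table check (in Python) that the simplified form is equivalent to the original:

    from itertools import product

    for x, y, z in product([False, True], repeat=3):
        lhs = ((not x) and y and (not z)) or ((not x) and z) or (x and y)
        rhs = y or ((not x) and z)
        assert lhs == rhs  # equal on all eight assignments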
A question about reductive groups' structure.
The subgroup generated by two unipotent subgroups need not be itself unipotent, since the product of two unipotent elements need not be unipotent. For example, let $G = \operatorname{SL}_2(K)$ for some field $K$, and choose $T=\left\{ \begin{bmatrix} \lambda & 0 \\ 0 &\lambda^{-1} \end{bmatrix} : \lambda \in K^\times \right\} \cong K^\times$ to be the maximal torus. Then there are only two non-identity closed unipotent subgroups of $G$ that are normalized by $T$: $$U_1 = \left\{ \begin{bmatrix} 1 & \lambda \\ 0 & 1 \end{bmatrix} : \lambda \in K \right\} \cong K_+, \qquad U_{-1} = \left\{ \begin{bmatrix} 1 & 0 \\ \lambda & 1 \end{bmatrix} : \lambda \in K \right\} \cong K_+$$ Then $$\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} 1 & 0\\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}$$ has minimal polynomial $$(x-2)(x-1)-1 = x^2-3x+1 \neq (x-1)^2,$$ so it is not unipotent, even though it is a product of unipotent elements. In the case of $\operatorname{SL}_2$ we actually get a simpler equality: $G=\langle U_1, U_{-1} \rangle$. The subgroup $T$ is redundant here. However for $\operatorname{GL}_2$ it is needed, and also I believe for $\operatorname{PGL}_2$ over many fields.
Prove that $\sum\limits_{cyc}\sqrt{\frac{a+b}{c}}\ge2\sum\limits_{cyc}\sqrt{\frac{c}{a+b}}$
It is a consequence of Chebyshev's inequality: $$ \sum_{cyc}\sqrt{\frac{a+b}{c}}≥2\sum_{cyc}\sqrt{\frac{c}{a+b}}\iff\sum_{cyc}\frac{a+b-2c}{\sqrt{c(a+b)}}≥0. $$ Since $a+b-2c$ and $\frac{1}{\sqrt{c(a+b)}}$ are ordered in the same way, we can apply Chebyshev's inequality to obtain: $$ \sum_{cyc}\frac{a+b-2c}{\sqrt{c(a+b)}}≥\frac{1}{3}\left(\sum_{cyc}(a+b-2c)\right)\left(\sum_{cyc}\frac{1}{\sqrt{c(a+b)}}\right)=0. $$ Edit: In case you are not familiar with this approach: if we consider two real sequences $a_1,a_2,…,a_n$ and $b_1,b_2,…,b_n$ for which $a_1≤a_2≤…≤a_n$ and $b_1≤b_2≤…≤b_n$, then Chebyshev's inequality tells us that: $$ \frac{a_1b_1+a_2b_2+…+a_nb_n}{n}≥\frac{a_1+a_2+…+a_n}{n}\cdot\frac{b_1+b_2+…+b_n}{n}. $$ We are allowed to use it in this case because, by symmetry, we can assume $a≥b≥c>0$. This implies: $$ a+b-2c≥a+c-2b≥b+c-2a $$ and: $$ ab≥ac≥bc\iff a(b+c)≥b(a+c)≥c(a+b) \iff\\ \frac{1}{\sqrt{c(a+b)}}≥\frac{1}{\sqrt{b(a+c)}}≥\frac{1}{\sqrt{a(b+c)}}. $$ So the two sequences we have in the above inequality are indeed ordered in the same way.
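This is a proof, so no computation is needed, but a random spot check (a throwaway sketch) can be reassuring:

```python
import math
import random

# Spot-check the inequality for random positive a, b, c.
for _ in range(10000):
    a, b, c = (random.uniform(0.01, 10) for _ in range(3))
    cyc = [(a, b, c), (b, c, a), (c, a, b)]
    lhs = sum(math.sqrt((x + y) / z) for x, y, z in cyc)
    rhs = 2 * sum(math.sqrt(z / (x + y)) for x, y, z in cyc)
    assert lhs >= rhs - 1e-12, (a, b, c)
print("no counterexample found")
```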
Find a prime $p$ satisfying $p \equiv 1338 \mod 1115$
Hint: If $p \equiv 1338 \pmod{1115}$ then $p = 1338+1115n$ for some integer $n$. Now, note that $1338 = 6 \cdot 223$ and $1115 = 5 \cdot 223$. Hence, $p = 223(6+5n)$. What does this tell you about $n$?
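If you want to confirm your conclusion afterwards (warning: this spoils the hint), a brute-force search along the arithmetic progression is a quick sketch:

```python
from sympy import isprime

# Search small n (including n = -1) with 1338 + 1115*n prime.
for n in range(-1, 100):
    p = 1338 + 1115 * n
    if p > 1 and isprime(p):
        print(n, p)
        break
```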
Find the solution set for $[\sin^{-1}x]>[\cos^{-1}x]$, where $[.]$ is greatest integer function
Note that $$\lfloor \sin^{-1}x \rfloor=\begin{cases}1, & \sin 1\le x\le 1 \\ 0,& 0\le x< \sin 1 \\ -1,& -\sin 1\le x< 0 \\ -2,& -1\le x< -\sin 1 \end{cases} $$ and $$\lfloor \cos^{-1}x \rfloor=\begin{cases} 0, &\cos 1< x\le 1 \\ \vdots\end{cases} $$ We don't need to worry about the other values of $x$: there $\lfloor \cos^{-1}x \rfloor \ge 1$, so the inequality would force $\lfloor \sin^{-1}x \rfloor \ge 2$, which is impossible since $\sin^{-1}x \le \pi/2 < 2$. Your inequality is therefore true exactly when $$\lfloor \sin^{-1} x\rfloor =1 \land \lfloor \cos^{-1} x\rfloor =0.$$ That is, we need to take the intersection of the ranges of values on which the two equalities hold. The answer is hence $$[\sin 1,1] \cap (\cos 1, 1]\\=\color{purple}{[\sin 1, 1]} \\ (\because \sin 1> \cos 1).$$
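A quick numerical sanity check of the solution set (a sketch, not a proof):

```python
import numpy as np

x = np.linspace(-1, 1, 200001)
mask = np.floor(np.arcsin(x)) > np.floor(np.arccos(x))
print(x[mask].min(), np.sin(1))  # both approximately 0.8415
print(x[mask].max())             # 1.0
```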
If $T$ is self-adjoint, then $T+iI$ is 1-1
Here are the main facts: $T-\lambda I$ is not injective iff $\lambda$ is an eigenvalue of $T$. If $T$ is self-adjoint and $\lambda$ is an eigenvalue of $T$, then $\lambda$ is real. Combining the two: since $-i$ is not real, it is not an eigenvalue of a self-adjoint $T$, so $T+iI=T-(-i)I$ is injective.
Intuition behind triple integrals between two surfaces
We are given this (a priori infinitely long) vertical cylinder whose base is an elliptical disc $E$ in the $(x,y)$-plane with semi-axes $2$ and $3$. In addition we are given a sphere of radius $4$. Note that $E$ is completely inside this sphere, but the sphere cuts off the cylinder, making round top and bottom surfaces of the resulting cylindrical body. Actually we don't want the full cylindrical body, but only the part ${\cal W}$ of it in the first octant. We are then told to compute the integral $$\int_{\cal W} x z\,{\rm d}V=\int_{E'}\left( \int_0^{\sqrt{16-x^2-y^2}} x z\,dz\right){\rm d}(x,y)\ ,\tag{1}$$ whereby $E'$ denotes the part of $E$ in the first quadrant. On the RHS at each point $(x,y)\in E'$ a vertical stalk has been erected, with lower end at $z=0$ and upper end at $z=\sqrt{16-x^2-y^2}$ on the sphere. The body ${\cal W}$ is the union of these stalks. Now $$\int_0^{\sqrt{16-x^2-y^2}} z\,dz={z^2\over2}\biggr|_{z=0}^{z=\sqrt{16-x^2-y^2}}={1\over2}(16-x^2-y^2)\ .\tag{2}$$ At this point the spherically shaped top boundary of ${\cal W}$ is completely taken care of. We now plug $(2)$ into the RHS of $(1)$, and obtain $$\eqalign{\int_{\cal W} x z\,{\rm d}V&=\int_{E'} {x\over2} (16-x^2-y^2)\,{\rm d}(x,y)\cr &=\int_0^3\left(\int_0^{{2\over3}\sqrt{9-y^2}}{x\over2}(16-x^2-y^2)dx\right)dy\ .\cr}$$ Note that the $\int_{E'}$ integral is completely in the $(x,y)$-plane. In order to compute it "by reduction" we have drawn at each level $y\in[0,3]$ a horizontal beam beginning at $x=0$ and ending at $x={2\over3}\sqrt{9-y^2}$ on the right half of the elliptical boundary. The reason for this choice of integration order is that we wanted to make good use of the factor $x$ in the integral. In this way no square roots will appear in the calculation. Calculate the inner integral (with respect to $x$, while $y$ is held constant). You obtain a polynomial in $y$, which you then have to integrate from $0$ to $3$. The result is indeed ${126\over5}$.
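If you want to confirm the value ${126\over5}$ symbolically, here is a SymPy sketch of the iterated integral in the order used above:

```python
from sympy import Rational, integrate, sqrt, symbols

x, y = symbols('x y', nonnegative=True)
inner = integrate(x / 2 * (16 - x**2 - y**2),
                  (x, 0, Rational(2, 3) * sqrt(9 - y**2)))
print(integrate(inner, (y, 0, 3)))  # 126/5
```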
Solving the characteristic equation of a $3\times 3$ matrix to find the eigenvalues.
Your approach is fine, but not your computations. The characteristic polynomial of $A$ is $-\lambda^3+10\lambda^2-33\lambda+36$. Using the rational root theorem, you can see that its roots are $3$ (twice) and $4$. By the way, note that the characteristic polynomial is not $(4-\lambda)^2(2-\lambda)-(4-\lambda)$. In fact, it is $(4-\lambda)^2(2-\lambda)+(4-\lambda)$, and\begin{align}(4-\lambda)^2(2-\lambda)+(4-\lambda)&=(4-\lambda)\bigl((4-\lambda)(2-\lambda)+1\bigr)\\&=(4-\lambda)(\lambda^2-6\lambda+9)\\&=(4-\lambda)(\lambda-3)^2.\end{align}
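A quick SymPy check of the factorization and the roots (a throwaway sketch):

```python
from sympy import factor, solve, symbols

lam = symbols('lam')
p = -lam**3 + 10*lam**2 - 33*lam + 36
print(factor(p))      # -(lam - 3)**2*(lam - 4)
print(solve(p, lam))  # [3, 4]
```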
approximate greatest common divisor
I asked a similar question here, where I propose a partial solution: How to find the approximate basic frequency or GCD of a list of numbers? In summary, I came up with the following, where $v$ is the list $\{v_1, v_2, \ldots, v_n\}$:

$\operatorname{mean}_{\sin}(v, x) = \frac{1}{n}\sum_{i=1}^n\sin(2\pi v_i/x)$

$\operatorname{mean}_{\cos}(v, x) = \frac{1}{n}\sum_{i=1}^n\cos(2\pi v_i/x)$

$\operatorname{gcd}_{appeal}(v, x) = 1 - \frac{1}{2}\sqrt{\operatorname{mean}_{\sin}(v, x)^2 + (\operatorname{mean}_{\cos}(v, x) - 1)^2}$

The goal is to find the $x$ which maximizes $\operatorname{gcd}_{appeal}$. Using the formulas and code described there, in CoCalc/Sage you can experiment with them and, in the case of your example, find that the optimum GCD is ~100.18867794375123:

testSeq = [399, 710, 105, 891, 402, 102, 397]
gcd = calculateGCDAppeal(x, testSeq)
find_local_maximum(gcd, 90, 110)
plot(gcd, (x, 10, 200), scale = "semilogx")
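Since the formulas are given in full above, here is a self-contained Python version (my own sketch, not the Sage code behind `calculateGCDAppeal`; SciPy does the maximization):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gcd_appeal(x, v):
    # gcd_appeal(v, x) = 1 - (1/2) * sqrt(mean_sin^2 + (mean_cos - 1)^2)
    v = np.asarray(v, dtype=float)
    mean_sin = np.mean(np.sin(2 * np.pi * v / x))
    mean_cos = np.mean(np.cos(2 * np.pi * v / x))
    return 1 - 0.5 * np.sqrt(mean_sin**2 + (mean_cos - 1)**2)

test_seq = [399, 710, 105, 891, 402, 102, 397]
# Maximize the appeal by minimizing its negative on the same bracket as above.
res = minimize_scalar(lambda x: -gcd_appeal(x, test_seq),
                      bounds=(90, 110), method="bounded")
print(res.x)  # ~100.1887
```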
Number of zeros of $ z^7+4z^4+z^3+1$
Note that if $|z| \ge 2$, then $|z|^7 - (4 |z|^4+|z|^3+1) \ge 55$, hence all of the roots lie inside $|z|<2$. Note that if $|z|=1$, then $4 |z|^4+|z|^3+1 - |z|^7 \ge 5$. Hence $z \mapsto 4z^4+z^3+1$ and $z \mapsto z^7+4z^4+z^3+1$ have the same number of zeros inside $|z|=1$. Note that if $|z|=1$, then $4 |z|^4-(|z|^3+1) \ge 2$. Hence $z \mapsto 4z^4+z^3+1$ and $z \mapsto 4z^4$ have the same number of zeros inside $|z|=1$ (that is, four).
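A numerical confirmation of the count (a sketch; NumPy computes all seven roots):

```python
import numpy as np

# Coefficients of z^7 + 4z^4 + z^3 + 1, highest degree first.
roots = np.roots([1, 0, 0, 4, 1, 0, 0, 1])
print(sum(abs(r) < 1 for r in roots))  # 4 roots inside |z| = 1
print(sum(abs(r) < 2 for r in roots))  # all 7 roots inside |z| = 2
```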
Finding all group homomorphisms $(\mathbb{Q},+)\to (\mathbb{Q}-\lbrace 0\rbrace,\cdot)$
There is exactly one such homomorphism, namely the constant map $x\mapsto 1$, because $1$ is the only number whose $k$th roots are all rational. In more detail: Let $x$ be arbitrary. Since $f(x)=f(\frac x2+\frac x2)=f(\frac x2)^2$, it must be positive; write $f(x)=n/m$ in lowest terms. Suppose that $n/m\ne 1$; then there is some prime $p$ that divides either $n$ or $m$. Suppose it is $p\mid n$; the other case is similar. Let $k$ be some integer such that $p^k>n$. Then what can $f(\frac{x}{k})$ be? We know that $f(x) = f(k\frac{x}{k}) = f(\frac{x}{k})^k$. But if $f(\frac{x}{k})=a/b$ in lowest terms, then we must have $a^k=n$. Since $p$ divides $n$, $p$ must divide $a$ too, but then $p^k$ divides $a^k=n$, which contradicts $p^k$ being larger than $n$.
Summation to n terms of series
Let $$f(x):=\sum_{r=1}^n\frac{x^r}r,$$ so that $$f'(x)=\sum_{r=1}^n x^{r-1}=\frac{1-x^n}{1-x}.$$ Then by integration, $$f(\tfrac12)=-\log(\tfrac12)-\int_0^{1/2}\frac{x^n}{1-x}\,dx.$$ The last integral is known as an incomplete Beta function, and has no closed-form expression (other than the explicit summation for integer $n$).
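A quick numerical check of the identity for a sample value of $n$ (a sketch):

```python
import numpy as np
from scipy.integrate import quad

n = 6
lhs = sum(0.5**r / r for r in range(1, n + 1))
tail, _ = quad(lambda x: x**n / (1 - x), 0, 0.5)
print(lhs, -np.log(0.5) - tail)  # the two values agree
```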
Limit of derivative of a differentiable function.
Actually the derivative need not tend to zero. You can construct a counterexample in the following way: For each natural number $n\ge 2$ the function $f_n:\left[-\frac{1}{n^2},\frac{1}{n^2}\right]\to \mathbb{R}$ given by $f_n(x)=\begin{cases} \frac1{n^2}-n^2\left(x+\frac1{n^2}\right)^2 & x\le 0\\ -\frac1{n^2}+n^2\left(x-\frac1{n^2}\right)^2 & x> 0\\ \end{cases}$ is differentiable over its domain. It is strictly decreasing and satisfies $f_n'(0)=-2$. Also the difference between its maximum and minimum values is $\frac{2}{n^2}$ (the total "drop" in the function is $\frac{2}{n^2}$). So start with a large $f(0)$, say $10$ will do. Define $f$ piecewise, inserting copies of $f_n$ around each natural number $n\ge 2$. If you want a strictly decreasing $f$, insert linear pieces in between with extremely small negative derivative, so that their "drop" is of the order of $\frac{1}{n^2}$. You will have to adjust the ends of the pieces accordingly. The total "drop" in the function from $0$ to $\infty$ will then be dominated by the series $\sum \frac{1}{n^2}$ and thus will be finite. So you get a strictly decreasing positive function, but the derivative at each natural number is $-2$, and therefore the derivative does not approach $0$. This is only a rough idea; I leave it to you to explicitly write down the formulae for each piece of the function. Edit: Explicit construction for the function. The following function is, however, not strictly decreasing. Explicit formulae for the strictly decreasing case would be quite complicated. Let $S_n=2\left(\dfrac1{2^2}+\dfrac1{3^2}+\cdots+\dfrac1{(n-1)^2}\right)+\dfrac1{n^2}$, $\left(S_2=\dfrac1{2^2}\right)$, and $K=10$ ($K$ could be anything greater than $\frac{\pi ^2}{3}$). Define $f(x)=\begin{cases} K & 0\le x<2-\dfrac1{2^2}\\ K-S_n+f_n(x-n) & n-\dfrac1{n^2}\le x \le n+\dfrac1{n^2},\quad n\ge 2\\ K-S_n-\dfrac1{n^2} & n+\dfrac1{n^2}< x < (n+1)-\dfrac1{(n+1)^2},\quad n\ge 2\\ \end{cases}$ Try drawing a rough graph of this function. It is easy to see that $f$ is (not strictly) decreasing, always positive (as $f(x)\ge K-\lim_n S_n\ge K-\frac{\pi ^2}{3}$), but $f'(n)=-2\quad \forall n\in \mathbb{N}$, and so $f'$ does not approach $0$ as $x$ approaches $\infty$.
Why is $\sum_{j=1}^n (n-j+1) \ge \sum_{j=\lceil n/2 \rceil}^n (n/2) \ge n^2/4$?
\begin{align} \sum_{j=1}^n \sum_{k=j}^n 1 &= \sum_{j=1}^n (n-j+1)&&\text{put $i=n-j+1$}\\ &= \sum_{i=1}^n i\\ &\geq \sum_{i=\lceil n/2 \rceil}^n i&&\text{by $1\leq\lceil n/2 \rceil\leq n$}\\ &\ge \sum_{i=\lceil n/2 \rceil}^n (n/2)\\ &=(n-\lceil n/2 \rceil+1)(n/2)\\ &\ge n^2/4&&\text{by $\lceil n/2 \rceil<n/2+1$} \end{align}
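A brute-force check of the whole chain for small $n$ (a throwaway sketch):

```python
import math

for n in range(1, 51):
    total = sum(n - j + 1 for j in range(1, n + 1))
    middle = sum(n / 2 for _ in range(math.ceil(n / 2), n + 1))
    assert total >= middle >= n**2 / 4, n
print("holds for n = 1..50")
```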
Lemma 2 - p.204 (Differential topology - Guillemin and Pollack)
Suppose $A\cap V_I$ is not contained in $I\times U$, for any interval $I$. Let $I_j = (c-\frac{1}{j},c+\frac{1}{j})$. Then there is $(c_j,x_j)\in A\cap V_{I_j}$ but $(c_j,x_j)\notin I_j\times U$. (I switched the roles of the $x$ and the $c$; I think it fits better with the established notation.) Well, from the definition of $V_{I_j}$ we must have $c_j\in I_j$. Therefore if $(c_j,x_j)\notin I_j\times U$ it follows that $x_j\notin U$. As $j\to\infty$, since the $I_j$-s shrink to $c$ we see that $c_j\to c$. Where do the $x_j$-s go? If we project $A$ to $\mathbb{R}^{n-1}$ (call the image of the projection $\pi(A)$), then $x_j\in \pi(A)$ for all $j$. Since $A$ is compact the projection $\pi(A)$ is compact. Therefore the $x_j$ have a convergent subsequence, and the limit is in $\pi(A)$. Replacing $(c_j,x_j)$ by this convergent subsequence, which we also call $(c_j,x_j)$, we have $$ (c_j,x_j)\to (c,x), $$ and since $A$ is compact (hence closed), $(c,x)\in A\cap V_c$. But wait! Each $x_j\notin U$, i.e. $x_j\in \mathbb{R}^{n-1}\setminus U$. $\mathbb{R}^{n-1}\setminus U$ is a closed set, so it contains its limit points. Since $x_j\to x$, $x$ is a limit point, so $x\in\mathbb{R}^{n-1}\setminus U$, i.e. $x\notin U$. But $A\cap V_c \subset \{c\}\times U$, so this is a contradiction. (There is some abuse of notation, which I guess I inherited from G&P: $U$ can be taken to mean the open set in $V_c = \{c\}\times\mathbb{R}^{n-1}$ containing $A\cap V_c$, or it can mean the set $U$ in $\{c\}\times U$, $U$ open in $\mathbb{R}^{n-1}$, such that $A\cap V_c \subset \{c\}\times U$. I'm using the latter.)
Showing that a set is meager
For each of $A_{+}$ and $A_{-}$, we can show that it is its own closure (that is, it is closed) and that its interior is empty. In other words, it is nowhere dense and therefore, immediately, a meager set. First, we note what an $\varepsilon$-ball around a point $f$ looks like. It is the set of all continuous functions that differ from $f$ by less than $\varepsilon$ at any point in $[0,1]$. That is, the set of all continuous functions that lie within an $\varepsilon$ "tube" around $f$. To see that $A_{+}$ is closed, we show its complement $U$ is open. If $g \in U$, then $\exists T \in [0,1]$ such that $g(-T) \neq g(T)$. Define $\delta = |g(T)-g(-T)|$. Now for any continuous function $h$ in the $\frac{\delta}{3}$ ball around $g$, it's clear that $h(T) \neq h(-T)$. In other words, the entire $\frac{\delta}{3}$ ball around $g$ lies within $U$. So $U$ is open and therefore $A_{+}$ is closed. To show its interior is empty, pick any $f\in A_{+}$ and consider an $\varepsilon$-ball $B$ around $f$. Since this is an $\varepsilon$-tube around $f$, it is easy to see that there are continuous functions $s\in B$ such that for some $T$, $s(T) \neq s(-T)$. In other words, $A_{+}$ does not contain any open ball, so it has an empty interior. The arguments are analogous for $A_{-}$.
What exactly the Ellipsoid method does?
You can use it to determine whether a certain objective value is attainable or not, and thus home in on the optimal value. Suppose the problem is $\max(c^T x : Ax \leq b, x \geq 0)$. We can check whether $\{c^T x \geq \mbox{guess},\ Ax \leq b,\ x \geq 0\}$ is feasible and, if so, obtain a point in it. You can then do binary search on the guess to converge to an optimal solution.
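Here is a minimal sketch of that binary-search reduction, assuming a feasibility oracle `is_feasible(guess)` (the name is mine) that answers whether $\{c^Tx\ge \text{guess},\ Ax\le b,\ x\ge 0\}$ is non-empty, which is exactly what the ellipsoid method provides:

```python
def optimize(is_feasible, lo, hi, tol=1e-6):
    # Invariant: the optimal value lies in [lo, hi].
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if is_feasible(mid):
            lo = mid  # guess attainable: optimum is at least mid
        else:
            hi = mid  # guess not attainable: optimum is below mid
    return lo
```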
Why is the roll of a die considered random?
A die roll is only considered random if the external factors are not controlled. Practiced dice cheats can roll the numbers they want to roll, so talk about nerves and blood vessels and quantum effects is just wrong. These cheats control the meaningful factors such that they influence the outcome of the roll, predictably. Even if someone only increases their chance of rolling a certain number by a few percentage points, that's huge in gambling terms. That's why there are rules on how the dice must be rolled at casinos, and why there are inventions such as the dice tower.
Using Parsevals formula to calculate a sum
The Fourier series of $g$ converges to $g$ uniformly, hence pointwise. For $x=\frac{\pi}{2}$ you have that $\sin\left((2n+1)\frac{\pi}{2}\right)=(-1)^n$.
Condition For Existence Of Phase Flows
Arnold defines a phase flow as a one-parameter group of diffeomorphisms $\mathbb{R}^2 \to \mathbb{R}^2$ that map points of the phase space to the points they evolve to after time $t$. In the case of $U = -x^4$, the map $g(t,M)$ is not differentiable on the separatrix, defined by $E = 0$, in the second and fourth quadrants of the phase plane. Physically, particles with negative energy do not cross the potential barrier. Thus, a particle having $E<0$, $x>0$ and $p<0$ will be "reflected" off the point where $U(x) = E$ and continue to have $x > 0$ for all time. Particles with positive energy do cross the barrier, which is to say that a particle with $E>0$, $x>0$, $p<0$ will cross over to negative $x$ in finite time. However, a particle with $E = 0$, $x > 0$ and $p < 0$ will only go up to the point $x = 0$, and it will take an infinitely long time to do so. So if you look at a point on the separatrix having $x > 0$, $p < 0$, or $x < 0$ and $p > 0$, in any neighbourhood of that point there will lie points which will evolve to be far away in the phase plane for a large enough time $t$.
Finding the equation of a curve using two points.
At all points on the curve the distance from $P$ to $A$ should be three times the distance from $P$ to $B$. I recommend first sketching this by hand. Then use the distance formula to find the equation of the curve. Finally, make sure both answers agree! Let $d(P,Q)$ represent the distance between points $P$ and $Q$. The problem statement tells us that $$d(P,A) = 3 d(P,B).$$ Plug in $(x,y)$ for $P$ and the given values of $A$ and $B$ to find the equation for the curve. Hint: After some algebra you will find it is one of the conic sections.
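To see why a conic appears, write $A=(a_1,a_2)$ and $B=(b_1,b_2)$ (placeholders, since the given coordinates are not repeated here) and square both sides: $$ (x-a_1)^2+(y-a_2)^2 = 9\left[(x-b_1)^2+(y-b_2)^2\right]. $$ The quadratic terms do not cancel (their coefficients are $1$ on the left and $9$ on the right), so collecting everything on one side leaves $8x^2+8y^2+\text{(linear terms)}+\text{const}=0$: the equation of a circle, classically known as a circle of Apollonius.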
How to correctly understand submanifolds?
First of all, your definition 1 is a bit faulty: you should say that $\operatorname{rank}(df)_x$ is constant, equal to $m=\dim(M)$, at every point $x\in M$. With this in mind, these are equivalent definitions. However, I do not like either one of these definitions, for several reasons. Spivak's definition - because it depends on a nontrivial theorem (the immersion theorem), while a definition this basic should not depend on anything nontrivial. Also, for the reason that you stated. More importantly, I do not like either definition because they utterly fail in other, closely related situations. For instance, if I were to define the notion of a topological submanifold in a topological manifold along these lines, Spivak's will fail immediately (what is the rank of the derivative if I do not have any derivatives to work with?); Carroll's definition will fail because it will yield in some cases rather unsavory objects, like Alexander's horned sphere in the 3-space. The same happens if I were to use triangulated manifolds and triangulated submanifolds, algebraic (sub)varieties and analytic (sub)varieties. Here is the definition that I prefer. First of all, what are we looking for in an $n$-dimensional manifold $N$ (smooth or not): We want something which is locally isomorphic (in whatever sense of the word isomorphism) to an $n$-dimensional real vector space (no need for particular coordinates, but if you like, just $R^n$). Then an $m$-dimensional submanifold should be a subset which locally looks like an $m$-dimensional vector subspace in an $n$-dimensional vector space. This is our intuition of a submanifold in any category (smooth, topological, piecewise-linear, holomorphic, symplectic, etc) we work with. Once you accept this premise, the actual definition is almost immediate: Definition. Let $N$ be a smooth $n$-dimensional manifold. A subset $M\subset N$ is called a smooth $m$-dimensional submanifold if for every $x\in M$ there exists an (open) neighborhood $U$ of $x$ in $N$ and a diffeomorphism $\phi: U\to V\subset R^n$ ($V$ is open) such that $\phi(M\cap U)= L\cap V$, where $L$ is an $m$-dimensional linear subspace in $R^n$. (If you like coordinates, assume that $L$ is given by the system of equations $y_1=\dots=y_{n-m}=0$.) This is completely intrinsic. Next, you prove a lemma which says that such $M$ has a natural structure of an $m$-dimensional smooth manifold, with topology equal to the subspace topology and local coordinates near points $x\in M$ given by the restrictions $\phi|(U\cap M)$. Then you prove that with this structure, $M$ satisfies the other two definitions that you know. Remark. Note that this definition will work almost verbatim if I were to deal with topological manifolds: I would just replace "a diffeomorphism" with "a homeomorphism". If I were to work with, say, complex (i.e. holomorphic) manifolds, I would replace $R^n$ with $C^n$ (of course), use complex vector subspaces and replace "diffeomorphism" with "a biholomorphic map". And so on. Now, to the question why is it so much more complicated than the concept of a subgroup or a submodule or any other algebraic concept you can think of. This is because manifolds have a much richer structure. To begin with, they are topological spaces. (Notice that every submanifold is equipped with the subspace topology, so this has to be built in.) Then, the notion of vector spaces has to be used at some point. Next, there is the "local" thing (local charts)....
Convergence of the series - best criertion
In my opinion, the simplest is a Riemann ($p$-series) comparison: for $n \geq 1$, $$e^{\frac{1}{n}} \leq e \leq 3$$ and $$\frac{1}{n^e} \leq \frac{1}{n^2},$$ so the general term is bounded by $\frac{3}{n^2}$ and the series converges by comparison with $\sum \frac{1}{n^2}$.
Question about non essential singulariy
It could be either. For example, $z^2$ has a pole at $\infty$, $1/z$ has a removable singularity there.
Differential Equation - Falling Projectile - Help getting started?
The equation for $v$ (see my comment above) is $$m\dot v=-mg-kv.$$ Let $v=\dot y$. Then$$y=y_0-\frac{mg}kt+\frac mk\left(v_0+\frac{mg}k\right)\left(1-e^{-\frac kmt}\right).$$Given $y_0=30$ and $v_0=20$, one solves (by a numerical method) $$y=0$$ with respect to $t$ and finds $t=5.1285\ \mathrm{s}$.
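For the numerical step, here is a minimal SciPy sketch; the values of $m$, $k$ and $g$ below are placeholders (the original problem's constants are not shown here), so the printed time will match $5.1285\ \mathrm{s}$ only with the problem's actual data:

```python
import numpy as np
from scipy.optimize import brentq

g, m, k = 9.8, 1.0, 0.5   # hypothetical constants, not the problem's data
y0, v0 = 30.0, 20.0

def y(t):
    return y0 - (m * g / k) * t + (m / k) * (v0 + m * g / k) * (1 - np.exp(-k * t / m))

# y(t) is eventually negative (the terminal velocity points downward),
# so the root can be bracketed:
print(brentq(y, 0.1, 60.0))
```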
Prove $\mathbb{Z}^{*}_{14}$ is cyclic
You are asked for a generator of $\mathbb{Z}_{14}^*$. Since $\mathbb{Z}_{14}^*$ has $6$ elements, the order of any subgroup (and so the order of the group generated by any element) is $1$, $2$, $3$ or $6$ (a divisor of $6$). Now you just have to find one element of order $6$: $1$ has order $1$; $3^1\equiv 3$, $3^2\equiv 9$, $3^3=27\equiv 13 \pmod{14}$. Since the order of $3$ is bigger than $3$, it has to be $6$, and so $3$ is a generator. But you may want to give all the elements as powers of $3$: $3^4\equiv 11$, $3^5\equiv 5$, and finally $3^6\equiv 1 \pmod{14}$.
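A quick computational check of the orders of all units modulo $14$ (a throwaway sketch):

```python
from math import gcd

def order(a, n=14):
    k, x = 1, a % n
    while x != 1:
        x = x * a % n
        k += 1
    return k

units = [a for a in range(1, 14) if gcd(a, 14) == 1]
print({a: order(a) for a in units})  # {1: 1, 3: 6, 5: 6, 9: 3, 11: 3, 13: 2}
```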
Any manifold admits a morse function with one minimum and one maximum
Here's a direct proof. Let $M$ be a smooth $n$-manifold and $f$ a Morse function on $M$. Let $p_i$ be the local minima, $q_j$ be the index-$1$ critical points, $f(p_i)<f(q_j)$ for all $i,j$, and $\gamma_{ij}$ the gradient flow line from $p_i$ to $q_j$. Connect each $p_i$ to $p_0$ by a direct path of $\gamma_{ij}$s. Note in particular the graph we have defined is contractible. Homotope $f$ so the $q_j$ which are not on any paths have a greater value than the $q_j$ which are on the path, say (by scaling and translating) $f^{-1}(1)$ separates the greater $q_j$ and the $q_j$ on the path. Now consider $B = f^{-1}((-\infty,1])$. By construction, this is diffeomorphic to an $n$-ball. In $B$, replace $f$ by the radial distance function (possibly homotoped to match smoothly with $f$ on $\partial B$). This new $f$, call it $\widetilde{f}$, has exactly one index $0$ critical point. Repeat the procedure for $-\widetilde{f}$ to get exactly one index $n$ critical point. Intuitively, think of a level surface flow. It starts with a bunch of dots, the index $0$ critical points, expanding. Then they send out tendrils which join together at the index $1$ critical points. Controlling the Morse function is tantamount to controlling the level surface flow. We slow down the level surfaces near some of the index $1$ points so that the level surfaces all join into a giant sphere. This is our motivation; inside the sphere, we modify the function so that the sphere has expanded from a single dot, instead of many dots. Here's another proof using handlebody decompositions. By duality between Morse theory and handlebody theory, "Every manifold admits a Morse function with exactly one local maximum and exactly one local minimum" is equivalent to saying that every closed manifold admits a handlebody decomposition with exactly one $0$-handle and exactly one $n$-handle. To see this, take a smooth handlebody decomposition of $M$ ("smooth" so that the attaching maps are all smooth maps). Since this is done by gluing successive handles, we may focus our attention on the $0$-handles $M^0$ and the manifold obtained by gluing $1$-handles, $M^1$. Since $M$ is connected and the only handles with disconnected attaching spheres are $1$-handles, $M^1$ is connected (that is, the gluing of $1$-handles kills all elements of $\pi_0$). Therefore, we may pick a single $0$-handle and connect it to each other $0$-handle by a path of $1$-handles (possibly through other $0$-handles). This union of $0$- and $1$-handles is homeomorphic to a ball, so we may replace it with a single $0$-handle and regard the remaining $1$-handles as attaching to the single $0$-handle. Turn the handlebody decomposition upside down and repeat to obtain a single $n$-handle. References to look into: Milnor's Morse Theory, Milnor's "Killing paper", Ranicki's book on surgery.
Finite Subgroups of $GL(n,\mathbb{C})$
Preserving the $G$-invariant inner product $\langle , \rangle_1$ implies preserving the original $\langle , \rangle .$ To see this requires knowing how $\langle , \rangle_1$ was constructed (via Weyl's averaging trick.) For $u,v \in \mathbb{C}^n,$ $g\in G,$ and $A \in PGP^{-1},$ we have $\langle A(gu), A(gv) \rangle_1 = \langle gu, gv \rangle_1 =\langle u, v \rangle_1 =\frac{1}{|G|} \displaystyle\sum_{g\in G} \langle u , v \rangle = \langle u, v \rangle .$
Minimize the minimum - Linear programming
As far as I know the answer is negative: the point-wise minimum of affine functions is concave rather than convex, and thus you cannot cast the problem as an LP.
Logarithm, LUT and how to make it non-linear
$$\log(m \cdot 2^e) = \log m + e \log 2$$
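A minimal lookup-table sketch built on that identity (the table size and the interpolation-free indexing are my own choices here):

```python
import math

TABLE_SIZE = 256
# log(m) sampled for m in [0.5, 1.0), the mantissa range returned by frexp.
TABLE = [math.log(0.5 + i * 0.5 / TABLE_SIZE) for i in range(TABLE_SIZE)]

def fast_log(x):
    m, e = math.frexp(x)                  # x = m * 2**e with m in [0.5, 1)
    i = int((m - 0.5) * 2 * TABLE_SIZE)   # nearest-lower table entry
    return TABLE[i] + e * math.log(2)     # log m + e*log 2

print(fast_log(100.0), math.log(100.0))  # close; exact when m hits a table node
```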
Rank of $M=R^{\oplus A}$ as an $R$-module equals the dimension of $V=(K(R))^{\oplus A}$ as a $(K(R))$-vector space.
You've proved that $A$ is linearly independent in $V$ as a $K$-vector space, so $\dim_R M \leq \dim_K V$. Now fix a basis of $V$. Crucially, we can multiply each of its elements by a common denominator of its coordinates to get a basis with all of its elements lying in $M$. By the statement you proved, this set must be linearly independent over $R$ too, so $\dim_K V \leq \dim_R M$, as required.
Finding a recursive formula for a sequence
From the fact that $a_n=1+\frac{1}{2^{n-1}}$ we have that $a_{n+1}=1+\frac{1}{2^{n}}$. Now notice that $$\frac{a_n}{2}=\frac{1}{2}+\frac{1}{2^{n}}=a_{n+1}-\frac{1}{2}.$$ Therefore we get the following recurrence relation $$a_{n+1}=a_n/2+1/2$$
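A quick check of the recurrence against the closed form (a throwaway sketch):

```python
a = 2.0  # a_1 = 1 + 1/2**0
for n in range(1, 8):
    assert abs(a - (1 + 1 / 2**(n - 1))) < 1e-12
    a = a / 2 + 0.5
print("recurrence matches the closed form")
```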