Derivative of vector with respect to change in length
As you state in the comments, you have some function $$f \left(|\mathbf r| \right),$$ and you would like to know if you could take some sort of derivative of this, so that you have another vector. Well, since you also say you're in $\mathbb R^3,$ we can write $\mathbf r=(x,y,z).$ Then we have that $f=f(x,y,z).$ Thus what you want is the gradient of $f,$ which you can calculate as $$\nabla f=\left(\frac{\partial f}{\partial x},\frac{\partial f}{\partial y},\frac{\partial f}{\partial z}\right).$$
Prove that another matrix is inverse of $A+BCD$
For square matrices, if $AB = I$ then $BA = I$.
Prove one group is the subgroup of another under a specific condition
Since $e\in H$ and $Hg_1\subset Kg_2$, we have $eg_1=g_1\in Kg_2$. Therefore, $g_1=kg_2$ for some $k\in K$. This gives $Hkg_2=Hg_1\subset Kg_2=Kkg_2\implies Hkg_2(kg_2)^{-1}\subset Kkg_2(kg_2)^{-1}\implies H\subset K$
Does $\{(1-x)x^k\}$ converge uniformly on $[0,1]?$
If the uniform limit exists, then it is $f(x)=0$. The maximum of $f_n(x):=(1-x)x^n$ occurs where $$f_n'(x)=nx^{n-1} - (n+1)x^n=0 \Longrightarrow x=n/(n+1), $$ and there $$f_n\left(\frac{n}{n+1}\right)=\frac1{n+1}\cdot\frac{n^n}{(n+1)^n}=\frac{n^n}{(n+1)^{n+1}}\to 0$$ as $n\to +\infty$.
Math behind a "fling"? (i.e. on a mobile touch device)
In the physical world, once an object is released from all external forces, it will travel in a straight line. UI design strongly suggests that interfaces work better when the user can apply previous knowledge of motor control, eye movement, physics, etc. Unless you have a legitimate reason to curve after the object is released, I would personally go with linear motion only. In that case, take the last few coordinates and timestamps, and average the velocity components together. Then run a quadratic or exponential function of time on each component until the change of position is negligibly small. The quadratic corresponds to constant deceleration, like shoving an object on the ground and watching friction bring it to a stop (or throwing an object upward and watching gravity slow it); the exponential corresponds to velocity-proportional drag. Since the exponential function seems like the best for the job, here's an example. Find the last few velocities for the $x$ and $y$ components, and average them: $$v_x = \frac{\Delta x}{\Delta t}, \qquad \bar v_x = \frac{\sum v_x}{n},$$ and similarly for $y$. Set $t$ as the time since release, $\beta$ as some friction constant, $v_x$ as the average velocity for each component, and $(x_0,y_0)$ as the initial release point. Then $$x = x_0 + \frac{v_x}{\beta}\left(1 - e^{-\beta t}\right).$$ The purpose of the $1$ is to align the function so the displacement starts at zero when time is zero. So I hope this helps. I'm interested in what you're doing, so you'll have to show me when you're done. ;)
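For concreteness, here is a minimal Python sketch of the scheme above (average the last few sample velocities, then decay exponentially); the sample data and the friction constant `beta` are made-up illustrations, not values from any particular platform.

```python
# A minimal sketch of the exponential-decay "fling"; `samples` is a
# hypothetical list of (x, y, t) touch samples ending at release.
import math

def average_velocity(samples):
    """Average the velocities over the last few touch samples."""
    vxs, vys = [], []
    for (x0, y0, t0), (x1, y1, t1) in zip(samples, samples[1:]):
        dt = t1 - t0
        vxs.append((x1 - x0) / dt)
        vys.append((y1 - y0) / dt)
    return sum(vxs) / len(vxs), sum(vys) / len(vys)

def fling_position(x0, y0, vx, vy, beta, t):
    """Position t seconds after release: starts at (x0, y0) with velocity
    (vx, vy), which decays exponentially with friction constant beta."""
    s = (1.0 - math.exp(-beta * t)) / beta
    return x0 + vx * s, y0 + vy * s

samples = [(0, 0, 0.00), (8, 4, 0.02), (18, 9, 0.04), (30, 15, 0.06)]
vx, vy = average_velocity(samples)
for t in (0.0, 0.1, 0.5, 2.0):
    print(t, fling_position(30, 15, vx, vy, beta=4.0, t=t))
```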
Proof for a certain binomial identity
Here we have Chu-Vandermonde's Identity in disguise. We obtain for $1\leq m\leq n$ \begin{align*} \color{blue}{\sum_{k=n}^{n+m}}&\color{blue}{(-1)^k\binom{k-1}{n-1}\binom{n}{k-m}}\\ &=\sum_{k=0}^m(-1)^{k+n}\binom{n+k-1}{n-1}\binom{n}{k+n-m}\tag{1}\\ &=\sum_{k=0}^m(-1)^{k+n}\binom{n+k-1}{k}\binom{n}{m-k}\tag{2}\\ &=(-1)^n\sum_{k=0}^m\binom{-n}{k}\binom{n}{m-k}\tag{3}\\ &=(-1)^n\binom{0}{m}\tag{4}\\ &\,\,\color{blue}{=0} \end{align*} Comment: In (1) we shift the index to start with $k=0$. In (2) we apply the binomial identity $\binom{p}{q}=\binom{p}{p-q}$ twice. In (3) we apply the binomial identity $\binom{-p}{q}=\binom{p+q-1}{q}(-1)^q$. In (4) we finally apply the Chu-Vandermonde identity.
Showing $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ are not continuous for $f(x,y)=xy\sin\frac {1}{x^2+y^2}$.
For $2$): observe that for $(x,y)\neq (0,0)$ we get $$\frac{\partial f}{\partial x}=y\sin\left(\tfrac1{x^2+y^2}\right)-\tfrac{2x^2y}{(x^2+y^2)^2}\cos\left(\tfrac1{x^2+y^2}\right)\qquad\text{and}$$ $$\frac{\partial f}{\partial y}=x\sin\left(\tfrac1{x^2+y^2}\right)-\tfrac{2xy^2}{(x^2+y^2)^2}\cos\left(\tfrac1{x^2+y^2}\right).$$ As $(x,y)$ tends to $(0,0)$ on the $x$ axis, i.e., with $y=0$, we get $$\frac{\partial f}{\partial x}(x,0)=0,\quad x\neq 0.$$ However, if we take the sequence $\mathbf{x}_n=(x_n,y_n)=\left(\tfrac1{\sqrt{4n\pi}},\tfrac1{\sqrt{4n\pi}}\right)$ we see that \begin{align*} \frac{\partial f}{\partial x}(\mathbf{x}_n)&=\frac1{2\sqrt{n\pi}}\sin(2n\pi)-2(2n\pi)^2(4n\pi)^{-3/2}\cos(2n\pi)\\ &=-\sqrt{n\pi}. \end{align*} So $\partial f/\partial x$ is not continuous at $(0,0)$, since it tends to $0$ along the first path but is unbounded along the second. Similarly it can be proved that $\partial f/\partial y$ is not continuous at $(0,0)$.
How to evaluate transition rates of a Markov Jump Process
Hints: The average waiting time from the time of Bob’s accident until the time of his surgery is just the average length of time he spends in state $1$. The length of time he spends in state $1$ has cumulative distribution function $\ 1-e^{-(\alpha+\beta)t}\ $ (for $\ t\ge0\ $)—that is, probability density function $\ (\alpha+\beta)e^{-(\alpha+\beta)t}\ $. The probability that Bob gets admitted to hospital A is $\ \frac{\alpha}{\alpha+\beta}\ $, and the probability that he gets admitted to hospital B is $\ \frac{\beta}{\alpha+\beta}\ $, so you're correct that if the former is twice the latter, then $\ \alpha=2\beta\ $. The average waiting time from admission to Hospital A until full recovery is just the average length of time Bob will spend in state $2$, given that he enters that state. The length of time he will spend in that state after entering it has cumulative distribution function $\ 1-e^{-\mu t}\ $ (for $\ t\ge0\ $)—that is, probability density function $\ \mu e^{-\mu t}\ $. Similarly, the waiting time from admission to Hospital B until full recovery has cumulative distribution function $\ 1-e^{-\nu t}\ $ (for $\ t\ge0\ $)—that is, probability density function $\ \nu e^{-\nu t}\ $.
Classical presentation of fundamental group of surface with boundary
OK, my bad, Fulton's Algebraic topology: A First Course only deals with the closed case. I'll suppose that you know this case quite well. Let's do the bounded case by hand. First case: one boundary component Keep in mind the classical decomposition of the closed surface $F_{g,0}$ of genus $g$: you have 1 vertex, $2g$ edges, and that $2$-cell whose boundary gives the complicated $[a_1,b_1]\cdots[a_g,b_g] = 1$ relation. Now, take a needle, and pierce a hole in the middle of the 2-cell. You get $F_{g,0} \setminus \textrm{a point}$. Deformation retract the pierced 2-cell on its boundary: that creates a movie whose opening scene is this pierced surface, and whose closing scene is the $1$-skeleton, which is a wedge of $2g$ circles (the $a_i$'s and the $b_i$'s). What happens in the middle of the movie? Well, you have a surface with a disc-shaped hole which expands with time. Topologically, it's exactly the surface $F_{g,1}$ of genus $g$ with 1 boundary component. So we have learned two things: Piercing a surface (i.e. taking a point out) or making a true hole in it (i.e. take an open disc out) gives the same result up to homotopy equivalence [that's quite irrelevant for our discussion, but it's good to know nevertheless. Of course it works for many other spaces: they only have to be locally not too complicated]. A pierced surface has the homotopy type of a graph. This is quite important for the study of surfaces. In particular, it gives the wanted presentation: $$ \pi_1(F_{g,1}) = \left\langle a_1, \ldots, a_g, b_1, \ldots, b_g\right\rangle.$$ Of course, because the boundary of the surface is associated to the word $[a_1, b_1]\ldots[a_g, b_g]$, you can choose to write this group $$ \pi_1(F_{g,1}) = \left\langle a_1, \ldots, a_g, b_1, \ldots, b_g,x \middle| x = [a_1,b_1]\cdots [a_g, b_g]\right\rangle$$ but this quite obfuscates the fact that this group is free. Second case: the sphere with holes Take now $F_{0,b+1}$, the sphere with $b+1 > 0$ boundary components. You can see it as a disc with $b$ holes. This amounts to choosing one of the boundary components and declaring it the "outer" one. It's quite easy to retract that onto a wedge of $b$ circles, so that $$\pi_1(F_{0, b+1}) = \left\langle z_1, z_2, \ldots, z_{b}\right\rangle.$$ In this presentation, the (carefully oriented) outer boundary component is simply the product $z_1\cdots z_b$. The general case $F_{g,b}$ You can write the surface $F_{g,b}$ of genus $g$ as the union of $F_{g,1}$ and $F_{0,b+1}$, gluing the boundary of the former with the outer boundary of the latter. Since we have computed the fundamental groups of the two pieces and we know the expression of the gluing curve in both of them ($[a_1, b_1]\cdots[a_g,b_g]$ and $z_1\cdots z_b$, respectively), the Van Kampen theorem gives us the answer $$\pi_1(F_{g,b}) = \frac{\left\langle a_1, \ldots, a_g, b_1, \ldots, b_g\right\rangle * \left\langle z_1, \ldots, z_b \right\rangle}{\langle\langle [a_1, b_1]\cdots[a_g,b_g] \cdot (z_1\cdots z_b)^{-1}\rangle\rangle} = \left\langle a_1, \ldots, a_g, b_1, \ldots, b_g, z_1, \ldots, z_b \middle| [a_1, b_1]\cdots[a_g,b_g] =z_1\cdots z_b \right\rangle.$$ It is probably worth noting that you can rewrite the relation so that it expresses $z_b$ (say) as a word in the other generators. You can then eliminate it and notice that this is also a free group (again, as long as $b > 0$, $F_{g,b}$ deformation retracts to a graph).
Equation of a line maintaining equal ratio distance between two points.
As DanielFischer said, the locus is a circle if the ratio isn't $1:1$. This is known as Apollonius' Circle Theorem.
Find UMVUE of $P(X_1+X_2+X_3 =2)$ Where $X_1, ... X_n \sim$ Poisson$(\lambda)$ (independent)
You want to solve for $h(\cdot)$ where for every $\lambda>0$, $$E[h(S)]=\sum_{k=0}^\infty h(k)\frac{e^{-n\lambda}(n\lambda)^k}{k!}=c\,e^{-3\lambda}\lambda^2$$ for some positive constant $c$. That is, $$ \sum_{k=0}^\infty \frac{h(k)n^k}{k!}\lambda^k=c\,e^{(n-3)\lambda}\lambda^2 =c\sum_{j=0}^\infty \frac{(n-3)^j}{j!}\lambda^{j+2} =\sum_{k=2}^\infty \frac{c(n-3)^{k-2}}{(k-2)!}\lambda^{k} $$ Now compare coefficients of $\lambda^k$.
Convergence of $\sum_{n=2}^\infty {1\over(\log n)^{\log n}}$.
Hint: $$(\log n)^{\log n} = n^{\log\log n} $$ and as soon as $\log\log n > 1$, you can compare your series with a converging one.
Are these statements regarding isomorphism of a linear mapping True or False?
1) Remember that $L$ is linear. Since $L$ is linear and $V$ and $W$ are both finite dimensional, then $L$ can be represented by a matrix. What does this matrix look like? What properties does it have?
Solving Laplace's equation using finite differences
In the 3D case, you have a 7-point stencil (the stencil picture from Wikipedia is not reproduced here); the center node is $(i,j,k)$. It has six neighbor nodes: $(i+1,j,k)$, $(i-1,j,k)$, $(i,j+1,k)$, $(i,j-1,k)$, $(i,j,k+1)$, and $(i,j,k-1)$. Then the central difference approximation to the second derivative is: $$ \frac{\partial^2 u}{\partial x^2}\Bigg|_{(i,j,k)} \approx \frac{u_{i+1,j,k} -2u_{i,j,k}+ u_{i-1,j,k}}{h^2}, $$ where the second and third indices are held fixed; the second derivatives in $y$ and $z$ are approximated similarly.
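As an illustration (not part of the original answer), here is a small Python/NumPy sketch of solving the resulting discrete Laplace equation by Jacobi iteration: each interior node is repeatedly replaced by the average of its six stencil neighbours, which is the 7-point scheme above with the $h^2$ factors cancelled. The grid size and boundary condition are arbitrary examples.

```python
# A sketch of a Jacobi sweep for Laplace's equation on a 3-D grid using
# the 7-point stencil; boundary values are held fixed and interior
# values are replaced by the average of their six neighbours.
import numpy as np

def jacobi_step(u):
    """One Jacobi update of the interior of a 3-D grid u."""
    v = u.copy()
    v[1:-1, 1:-1, 1:-1] = (
        u[2:, 1:-1, 1:-1] + u[:-2, 1:-1, 1:-1] +
        u[1:-1, 2:, 1:-1] + u[1:-1, :-2, 1:-1] +
        u[1:-1, 1:-1, 2:] + u[1:-1, 1:-1, :-2]
    ) / 6.0
    return v

u = np.zeros((20, 20, 20))
u[0, :, :] = 1.0                # an example Dirichlet boundary condition
for _ in range(500):
    u = jacobi_step(u)
```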
Sum of Euler's phi function
As you noted, what we're interested in is really $$1+\sum_{k=1}^n (p^k - p^{k-1}). $$ Writing it out, we can see that it is a telescoping sum: $$(p^n - p^{n-1}) + (p^{n-1} - p^{n-2}) + \dots + (p - 1) + 1,$$ in which everything cancels except $p^n$.
If $(A_1,<_1)$ and $(A_2,<_2)$ are linearly ordered sets and $|A_1|=|A_2|<\aleph_0$. Then $(A_1,<_1)$ and $(A_2,<_2)$ are isomorphic
I don't know how rigorous you want to be or how much set theory you know. Here is one approach: Suppose $A$ and $B$ are well-ordered sets of equal cardinality. If $A=B=\emptyset$ then $A$ and $B$ are trivially order isomorphic. Otherwise, let $|A|=|B|=n$ and let $a\in A$ and $b\in B$ be maximal in $A$ and $B$ respectively. Then, $|A\setminus \left \{ a \right \}|=|B\setminus \left \{ b \right \}|$ so applying the inductive hypothesis, we get an order isomorphism $f:A\setminus \left \{ a \right \}\to B\setminus \left \{ b \right \}.$ Now $g:A\to B$ defined by $$ g(x) = \left\{ \begin{array}{ll} f(x) & \quad x <a \\ b& \quad x =a \end{array} \right. $$ is an order isomorphism. Another way to do this would be to define $E:A\to n\in \omega$ by $E(a_0)=\emptyset$ where $a_0$ is the least element in $A$ and in general, $E(t)=\left \{ E(x):x<t \right \}.$ Then, $E$ is by construction an order isomorphism from $A$ to $n$, where, of course, $n$ is well-ordered by $\in.$ To finish, observe that the same function works for $B$, so $A$ and $B$ are both order isomorphic to $n$, hence to each other.
The comparison functor for the adjoint $F\dashv G$ with $G$ fully faithful is an equivalence
There's an almost one-liner for that: the counit of $F\dashv G$ is an isomorphism, since $G$ is fully faithful, so $FG\cong 1$, and $K$ is such that $U^TK=G$, $KF=F^T$ if $F^T\dashv U^T$ denotes the free-forgetful adjunction generated by $T$. If you put these things together, it's pretty easy to show that $FU^T$ is an inverse for $K$: $FU^TK = FG\cong 1$, because $G$ is fully faithful. $KFU^T=F^T U^T \cong 1$, because the monad $T$ is idempotent ($T$ is idempotent iff $U^T$ is fully faithful, if and only if the counit $\epsilon^T : F^TU^T\Rightarrow 1$ is invertible, if and only if the multiplication of $T$ is an isomorphism).
How many feet of rope would be needed to wrap this device?
The diagram doesn't have enough information to solve the problem. The answer is $$2\pi\cdot{R_i + R_o\over 2} N$$ where $R_i$ and $R_o$ are the radii of the innermost and outermost circles of rope, and $N$ is the number of windings around. We have $R_i$, which is 1 foot (1 foot left unwrapped as shown in the upper left of the picture), or maybe 1 foot plus the thickness of the inner stem, and $R_o$, which is 14 feet (8+6 feet as shown in the upper right of the picture). If the umbrella were flat, we would have $N$ equal to $R_o-R_i = 13$ feet divided by the thickness of the rope, which is 156 times around. But the ribs are curved, so they are more than 13 feet long. We need to know the length of the ribs, as measured with a flexible tape measure from the central stem to the end of one of the ribs. If this length is $L$, then $N= L\div 1\text{ inch}$, and you should be able to use the formula above. But there is nothing in the diagram to tell us what $L$ is. Still we can be sure that the rope will have to go around considerably more than 156 times, so each umbrella will need considerably more than $2\pi\cdot7\frac12\text{ feet }\cdot156 = 7,351$ feet of rope.
What is $\int f(3y +3) dy$
You want to compute$$\int_1^9f(3x+3)\,\mathrm dx.$$Let $u=3x+3$ and $\mathrm du=3\,\mathrm dx$. Then\begin{align}\int_1^9f(3x+3)\,\mathrm dx&=\frac13\int_1^9f(3x+3)\,3\,\mathrm dx\\&=\frac13\int_{3\times1+3}^{3\times9+3}f(u)\,\mathrm du\\&=\frac13\int_6^{30}f(u)\,\mathrm du\\&=10.\end{align}
Developing a general expression for powers of matrices
If you assume that $$P=\begin{pmatrix}a&b\\b&c\end{pmatrix}$$ then you compute $$P^2=\begin{pmatrix}a^2+b^2&ab+bc\\ab+bc&b^2+c^2\end{pmatrix}=\begin{pmatrix}0.6&0.4\\0.4&0.6\end{pmatrix}$$ From this, you get the system of equations $$a^2+b^2=0.6,~ b(a+c)=0.4,~ b^2+c^2=0.6$$ Subtracting the third from the first, we get $a^2-c^2=0$, or $a=\pm c$. $a=-c$ is inconsistent with the second equation, so we must have $c=a$, and now two equations: $$a^2+b^2=0.6, 2ab=0.4$$ Adding, we get $a^2+2ab+b^2=(a+b)^2=1$, so $a+b=\pm 1$. Also, subtracting we get $a^2-2ab+b^2=(a-b)^2=0.2$, so $a-b=\pm \sqrt{0.2}$. Hence, there are four cases, corresponding to $$a+b=1, a-b=\sqrt{0.2}$$ $$a+b=-1, a-b=\sqrt{0.2}$$ $$a+b=1, a-b=-\sqrt{0.2}$$ $$a+b=-1, a-b=-\sqrt{0.2}$$ The first gives $a\approx 0.72, b\approx 0.28$. The second gives $a\approx -0.28, b\approx -0.72$. The third gives $a\approx 0.28, b\approx 0.72$. The fourth gives $a\approx -0.72, b\approx -0.28$. Hence there are just four answers, two of which are the negatives of the other two.
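A quick numerical sanity check of the four cases (my addition, not part of the derivation):

```python
# Check that the four symmetric square roots found above do square to
# the given matrix.
import numpy as np

target = np.array([[0.6, 0.4],
                   [0.4, 0.6]])
s = np.sqrt(0.2)
for a_plus_b, a_minus_b in [(1, s), (-1, s), (1, -s), (-1, -s)]:
    a = (a_plus_b + a_minus_b) / 2
    b = (a_plus_b - a_minus_b) / 2
    P = np.array([[a, b],
                  [b, a]])              # c = a, as derived above
    print(np.allclose(P @ P, target))   # True for all four cases
```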
Showing that $|||A-B|||\geq \frac{1}{|||A^{-1}|||}$?
Hint for operator norm: notice that $\frac{1}{|||A^{-1}|||}=\inf_{v\neq 0} \lVert Av\rVert/\lVert v\rVert$, and then consider $(A-B)v$ for $v\in \ker B$.
Linear elliptic partial diff. operator
If you already know how to show that $AH$ is negative semi-definite, then just note that $$\sum_{i,j=1}^n a_{ij}u_{x_ix_j} = \text{Trace}(AH)\leq 0.$$
Trying to convert a nasty logarithm into an exponential
First solve for $\ln\left(\frac{r}{R}\right)$. We get $$\ln\left(\frac{r}{R}\right)=\frac{2\pi\epsilon_0 \Delta V}{\lambda}.$$ Now take the exponential of both sides, and it's nearly over. Remark: Under other circumstances, your observation that $\ln\left(\frac{r}{R}\right)=\ln r-\ln R$ would be useful. Here it can be used, but there are quicker ways.
Questions about harmonic functions and distribution.
Convergence in the sense of distributions has a very nice property: the limit of derivatives is the derivative of the limit. Precisely, if $f_n\to f$ as distributions, then for every test function $\phi$ we have $\langle f_n,\nabla \phi\rangle \to \langle f,\nabla \phi\rangle$ (since $\nabla \phi$ is just a bunch of test functions), which by definition of distributional derivative means $\nabla f_n\to \nabla f$. In particular, $f_n\to f$ implies $\Delta f_n\to \Delta f$ in the sense of distributions. But $\Delta f_n\equiv 0$, hence $\Delta f\equiv 0$. Weyl's lemma tells us that both $f$ and $f_n$ are harmonic functions. It remains to upgrade the convergence to locally uniform. Harmonic functions have another nice property: $f(a)=\int f\varphi$ for any test function of the form $\varphi(x)=\psi(|x-a|)$ such that $\int \varphi =1$. (Proof is a sub-exercise.) By distributional convergence, $\int f_n\varphi\to \int f \varphi$, hence $f_n(a)\to f(a)$. This is pointwise convergence; I leave it for you to upgrade it to locally uniform.
Given $5$ white balls, $8$ green balls and $7$ red balls. Find the probability of drawing a white ball then a green one.
There is no need to divide by $2$. You need to multiply the probability of drawing a white ball on the first draw by the probability of drawing a green ball on the second draw. Since the draws are independent and the first ball is replaced, the probability of drawing a green ball on the second draw given that a white ball was drawn on the first draw is just the probability of drawing a green ball from the urn. Hence, $$\Pr(\text{drawing white, then green}) = \Pr(W)\Pr(G \mid W) = \Pr(W)\Pr(G) = \frac{5}{20} \cdot \frac{8}{20} = \frac{1}{10}$$
Sum without an index
You will sometimes see it used that way, but in my view it’s a dismally poor abuse of notation. At the very least the index should appear somewhere in the expression: $\sum_ia_i$ is fine, given a reasonable context, or even $\sum a_i$, but $\sum a$ is at best annoying and at worst confusing, especially since $\sum_{k=1}^na$ has the completely different unambiguous meaning $na$. Added: It occurs to me belatedly that there is one context in which I would not at all object to the notation $\sum a$: if $a$ is a finite set of real numbers, say, $\sum a$ is perfectly acceptable shorthand for $\sum\{x:x\in a\}$, just as in set theory $\bigcup a$ is unambiguously $\bigcup\{x:x\in a\}$ if $a$ is a set of sets.
Dividing $k$ distinct balls into $n$ distinct cells
This means that $n$ divides $k$. If the order of balls were unimportant, you'd have $\binom{k}{\frac{k}{n}} \binom{k-\frac{k}{n}}{\frac{k}{n}} \cdots \binom{\frac{k}{n}}{\frac{k}{n}}$ ways. Since the order of balls in each bin is important, you need to multiply by $\left(\left(\frac{k}{n}\right)!\right)^n$, which gives $k!$
$\max\{f_n(x):x\in[a,b]\}\to \max\{f(x):x\in[a,b]\}$
Your (1) is too complicated and the required conclusion does not follow. There is no need for the $x_k$. Try this: let $x_0$ be as in your attempt; by the uniform convergence, $$|f_n(x)-f(x)|<\epsilon$$ for $n\ge k(\epsilon)$. Said inequality is equivalent to $$f(x)-\epsilon < f_n(x) < f(x)+\epsilon.$$ As $f(x_0)=\max f$, $$f(x)-\epsilon < f_n(x) < f(x_0)+\epsilon.$$ By definition of max, $$f(x)-\epsilon < f_n(x)\le\max f_n \le f(x_0)+\epsilon.$$ In particular, $$f(x_0)-\epsilon \le\max f_n \le f(x_0)+\epsilon.$$ That is equivalent to $$|\max f_n-f(x_0)|\le\epsilon.$$
continuous onto function from irrationals in [0,1] onto rationals in [0,1]
Let $I = [0,1]\setminus \mathbb{Q}$, the set of irrationals in the unit interval. Partition $I$ into countably many nonempty pieces, each given by the intersection of an open set in $[0,1]$ and $I$, such that $I = \cup_{k=0}^\infty I_k$. For example, $I_0 = \left(\frac{1}{2},1\right) \cap I$, $I_1 = \left(\frac{1}{4},\frac{1}{2}\right)\cap I$, and in general $I_k = \left(\frac{1}{2^{k+1}},\frac{1}{2^k}\right)\cap I$. Note that each $I_k$ is open in $I$. Enumerate $\mathbb{Q} \cap [0,1]$ (in any way you like): $\{q_0, q_1, q_2,\dots\}$. For any $x\in I$, $x\in I_k$ for exactly one $k$. Define $f(x) = q_k$. Note that $f$ is surjective, since each $I_k$ is nonempty. We want to show that $f$ is continuous. Let $O\subseteq \mathbb{Q}\cap [0,1]$ be an open set. We can write $O = \bigcup_{q_k\in O} \{q_k\}$. $$f^{-1}[O] = \bigcup_{q_k\in O}\,f^{-1}[\{q_k\}] = \bigcup_{q_k\in O} I_k.$$ This is a union of open sets in $I$, so it is open. Note that we didn't actually use that $O$ was an open set: our function $f$ is continuous even if we give $\mathbb{Q}$ the discrete topology!
$U_{n+1} = a \cdot U_n \cdot (1-U_n)$ with $3<a<3.6$: what are some values of $a$ such that $U_n$ changes its periodicity?
Find a book that has the bifurcation diagram of the logistic map (the diagram itself is not reproduced here). In fact, $3.3$ is in the period-$2$ region. Here is a table of the period-doubling points: $3,\ 3.4494897,\ 3.5440903,\ 3.5644073,\ 3.5687594,\ 3.5696916, \dots$
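If you don't have such a book at hand, you can locate these period-doublings numerically; here is a rough Python sketch (my addition) that iterates the map past a transient and counts the distinct values the orbit settles on. The parameters (`n_transient`, `tol`, the sample points) are ad hoc choices.

```python
# Estimate the period of the logistic map orbit at parameter a by
# discarding a transient and counting distinct orbit values.
def period(a, n_transient=10_000, n_sample=256, tol=1e-6):
    x = 0.5
    for _ in range(n_transient):
        x = a * x * (1 - x)
    orbit = []
    for _ in range(n_sample):
        x = a * x * (1 - x)
        orbit.append(x)
    distinct = []
    for v in orbit:
        if all(abs(v - w) > tol for w in distinct):
            distinct.append(v)
    return len(distinct)

for a in (3.2, 3.3, 3.5, 3.55, 3.567):
    print(a, period(a))   # expect 2, 2, 4, 8, 16
```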
Rank of a (3x5) matrix
Rank = 3. I removed three columns when I should have removed only two: removing just the two linearly dependent columns leads us to a 3x3 matrix with 3 linearly independent columns.
Linear impulse needed to follow a desired trajectory
To make things a bit simpler, I'll assume that $(x_0, y_0)=(0, 0)$. We can always translate things back once we're done. Also, to save writing subscripts I'll rename your $(x_1, y_1)$ to be $(a, b)$. Describing the parabola. The first thing we'll do is get the equation of the parabola in Cartesian coordinates. Once that's done we'll try to parametrize it as a trajectory, which of course is your ultimate goal. It's almost immediate that a downward-opening parabola through $(0,0)$ with apex coordinates $(m, n)$ is given by $$ y=n-(x-m)^2 $$ In this case, $n=2y_1$ (since $y_0=0$) which in our terms gives $n=2b$. Also, since the parabola contains the origin, we'll have $0=2b-(0-m)^2$, so $m=\sqrt{2b}$ so the parabola will be described by $$ y=2b-(x-\sqrt{2b})^2=2\sqrt{2b}\; x-x^2 $$ The apex will be at $(\sqrt{2b}, 2b)$. Note this, we'll use it shortly. [Although we won't use it in what follows, we note that since the parabola contains $(a, b)$ it's not hard to show that $a=\sqrt{b}\;(\sqrt{2}-1)$.] Parametrizing the parabola. For initial velocity $V$ and elevation angle $\alpha$, the trajectory will be given by $$ \begin{align} x(t) &= V\cos(\alpha)\;t\\ y(t) &= V\sin(\alpha)\;t-\frac{1}{2}gt^2 \end{align} $$ For this trajectory, the apex will be when the derivative $y'(t_{apex})=0$. This gives us $$ t_{apex}=\frac{V}{g}\sin\alpha $$ which gives us $$ \begin{align} x(t_{apex})&=\frac{V^2}{g}\sin\alpha\cos\alpha \\ y(t_{apex})&=\frac{V^2}{2g}\sin^2\alpha \end{align} $$ So the apex in these terms will be $((V^2/g)\sin\alpha\cos\alpha, (V^2/2g)\sin^2\alpha)$. Now recall that the apex is $(\sqrt{2b}, 2b)$, and $(\sqrt{2b})^2=2b$, so we'll have $$ \left(\frac{V^2}{g}\sin\alpha\cos\alpha\right)^2=\frac{V^2}{2g}\sin^2\alpha $$ from which we find a relation between $V\text{ and }\alpha$: $$ V=\sqrt{\frac{g}{2}}\frac{1}{\cos\alpha} $$ Substituting this for $V$ in the original parametrized equations gives us $$ \begin{align} x(t) &= \sqrt\frac{g}{2}\;t\\ y(t) &= \sqrt\frac{g}{2}\;\tan(\alpha)\;t-\frac{g}{2}t^2 \end{align} $$ Now note that $x^2=(g/2)\;t^2$ and so we now have two equations for our curve, the parametrized version and the Cartesian version: $$ y=\tan(\alpha)\;x-x^2=2\sqrt{2b}\;x-x^2 $$ so we'll have $\tan\alpha = 2\sqrt{2b}$, so, finally, we have the elevation and the initial velocity given by $$ \alpha=\tan^{-1}(2\sqrt{2b})\qquad V=\sqrt{\frac{g}{2}(1+8b)} $$ (since, if $\alpha=\tan^{-1}(2\sqrt{2b})$, we'll have [Edit: $\cos\alpha=\underline{1/\sqrt{1+8b}}$]). These seem pretty reasonable, for example (in English units, rather than metric, since it makes the math a touch tidier), If $b=1$ foot, then $V \approx 12$ ft/sec, max height = 2 feet, horizontal travel $\approx$ 2.8 feet If $b=6$ feet, then $V \approx 28$ ft/sec, max height = 12 feet, horizontal travel $\approx$ 6.9 feet That was fun. Thanks for posing the question.
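As a check of the final formulas (my addition), the following Python snippet recomputes the apex from $\alpha$ and $V$ and compares it with $(\sqrt{2b}, 2b)$, using $g = 32.2$ ft/s² as in the numerical examples:

```python
# Recover the launch angle and speed from b, then confirm the resulting
# trajectory has its apex at (sqrt(2b), 2b).
import math

g = 32.2
for b in (1.0, 6.0):
    alpha = math.atan(2 * math.sqrt(2 * b))
    V = math.sqrt(g / 2 * (1 + 8 * b))
    t_apex = V * math.sin(alpha) / g
    x_apex = V * math.cos(alpha) * t_apex
    y_apex = V * math.sin(alpha) * t_apex - 0.5 * g * t_apex**2
    print(f"b={b}: V={V:.1f} ft/s, apex=({x_apex:.3f}, {y_apex:.3f}),"
          f" expected=({math.sqrt(2*b):.3f}, {2*b:.3f})")
```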
A question related to Chain rule
Since $f_x+f_zz_x=0$ and $g_x+g_zz_x=0$, your first try is missing two minus signs: $$f_zz_xg_w-f_wg_zz_x=z_x\implies -f_xg_w-(-f_wg_x)=z_x,$$ so your two results are the same.
Showing that the quotient of a function belonging to Schwartz Space by a strictly positive polynomial remains in the Schwartz Space
Since $P$ is strictly positive, all you need to show is that $x^m D^k (f/P)$ is bounded on all of $\mathbb{R}$ for all $m$ and $k$; in particular, by continuity of everything in sight, it suffices to show that $x^m D^k (f/P)$ is bounded outside of some neighbourhood $[-\epsilon,\epsilon]$ of $0$. Suppose that $P$ is of degree $n$, and write $P = a_n x^n + a_{n-1}x^{n-1} + \cdots + a_0$. Observe that $$ P(x) = a_n x^n \left(1+ \frac{a_{n-1}}{a_n} x^{-1} + \cdots + \frac{a_0}{a_n} x^{-n} \right) = a_n x^n (1 + g(x)), $$ where $g$ is continuous, indeed smooth, for $x \neq 0$, and satisfies $g(x) \to 0$ as $|x| \to \infty$. Then, for any $m \in \mathbb{N}$, when $x \neq 0$, $$ x^m \frac{f(x)}{P(x)} = x^m \frac{f(x)}{a_n x^n (1+g(x))} = \frac{x^{m-n}f(x)}{a_n(1+g(x))}, $$ which vanishes as $|x| \to \infty$ if $m < n$ and is bounded outside of $[-\epsilon,\epsilon]$ if $m \geq n$, precisely because $f \in S(\mathbb{R})$ and $\lim_{|x| \to \infty} g(x) = 0$. Now, in general, along exactly the same lines, one can check that $$ P^{(k)}(x) = a_{n,k} x^{n-k}(1+g_k(x)), \quad a_{n,k} := n(n-1) \cdots (n-k+1)a_n, $$ where, once more, $g_k$ is smooth for $x \neq 0$ and satisfies $g_k(x) \to 0$ as $|x| \to +\infty$. Do you see how to show that all the derivatives of $f/P$ vanish quickly enough, using the quotient rule?
Finding the other end of the Diameter
Let $(a, b)$ be the other end of the diameter; then the center $(3, -2)$ is the mid-point of the line joining the end points of the diameter, $(a, b)$ and $(7, 2)$. Hence the coordinates of the center $(3, -2)$ are given as $$\left(\frac{a+7}{2}, \frac{b+2}{2}\right)\equiv(3, -2)$$ By comparing the corresponding coordinates, we get $$\frac{a+7}{2}=3\iff a=6-7=-1$$ $$\frac{b+2}{2}=-2\iff b=-4-2=-6$$ Hence the other end of the diameter is $(a, b)\equiv\color{blue}{ (-1, -6)}$
Geometrization of the positive integers
First, it is clear that if the answer is positive for some $c_0$, then it must be for every $c < c_0$, too: if we keep the same configuration we have $A_{n,x} \geq c_0 x > cx$. Similarly, if the answer is negative for $c_0$ it must be negative for every $c > c_0$, too. We will now prove that the answer is positive for every $c < 1$. Indeed, let $n$ be any prime number such that $$ \frac{n - 1}{n} \geq c $$ and consider two parallel lines: one for the multiples of $n$ and one for every other number. Now, clearly for every $k \geq 1$ we have $A_{kn,x} = x > cx$. If instead $m$ is not a multiple of $n$ we have $$ A_{m,x} = x - \left\lfloor \frac{x}{n} \right\rfloor \geq x - \frac{x}{n} = \left(\frac{n-1}{n}\right) x \geq cx. $$ As a side note, you are quite right in thinking that this problem hides some notion of density. This answer hinges heavily on the fact that the natural density of $n\Bbb{N}$ in $\Bbb{N}$ is precisely $\frac{1}{n}$. Update: Interestingly, this argument can be tweaked to construct a solution with countably many lines. Indeed, start by choosing a sequence of primes $\{p_i\}_{i \geq 1}$ such that $$ 1 - \sum_{i = 1}^{\infty} \frac{1}{p_i} > c. $$ For example, we could simply choose $p_i > n^i$ for $n$ large enough, because this gives $$ \sum_{i = 1}^{\infty} \frac{1}{p_i} < \sum_{i = 1}^{\infty} \frac{1}{n^i} = \frac{1}{n-1}. $$ Now, for every $i \geq 1$ put every number divisible by $p_i$ but not $p_j$ for every $j \neq i$ on, say, the vertical line through $(p_i,0)$ in the Cartesian plane, and put every other number on the vertical line through $(1,0)$. Then for every positive integer $m$ we have $$ \begin{align} A_{m,x} &\geq x - \sum_{i = 1}^{\infty} \left\lfloor \frac{x}{p_i} \right\rfloor\\ &\geq x - \sum_{i = 1}^{\infty} \frac{x}{p_i}\\ &= \left( 1 - \sum_{i = 1}^{\infty} \frac{1}{p_i} \right) x\\ &> cx. \end{align} $$
Sequence of a number
Solve the recurrence for $a_{n-1}$ in terms of $a_n$, and work backwards: calculate $a_4$ from the known value of $a_5$, then use that to calculate $a_3$, and finally use that to calculate $a_2$. You can check your formula for $a_{n-1}$ in terms of $a_n$ by verifying that if you substitute $191$ for $a_6$, it really does give you $95$ for $a_5$.
Really large birthday problem as a spacefaring question.
The number is close enough to $0$ that you can make some approximations: given any pair, the probability they coincide is $1/(18 \times 10^{18})$ and there are about $12.5 \times 10^{12}$ possible pairs so the expected number of coincidences is about $6.944 \times 10^{-7}$ and since this is small it is also the approximate probability of any coincidence. Multiply the reciprocal of this by the logarithm of $\frac{1}{1-0.01}$ and you get about $14472$ for the number of attempts needed to get the probability of a single coincidence up to $1\%$.
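Spelling the computation out in Python (my addition; the population size $n = 5\times10^6$ is inferred from the "about $12.5 \times 10^{12}$ possible pairs" figure above):

```python
# Numeric version of the approximation above.
import math

ids = 18e18                 # number of possible IDs (18 x 10^18)
n = 5e6                     # assumed population size, giving ~12.5e12 pairs
pairs = n * (n - 1) / 2     # about 12.5e12 pairs
expected = pairs / ids      # expected coincidences, ~6.94e-7
attempts = math.log(1 / (1 - 0.01)) / expected
print(expected, attempts)   # ~6.9e-7 and ~14472
```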
If for a complex function $f^{(n)}(1/k)=0$ $\forall k \in \mathbb{N}$, then $f(z)$ is a polynomial
What's the point of all this over-complication? $f$ is entire $\implies f^{(n)}$ is entire. Now $\{\frac{1}{k}\}_{k \ge 1} \subset \Bbb C$ has a limit point in $\Bbb C$ (which is connected), and $f^{(n)}$ being entire vanishes on $\{\frac{1}{k}:{k \ge 1}\} $ . Hence by Uniqueness principle, $f^{(n)} \equiv 0 \implies$ $f^{(n-1)}$ is a constant $\implies$ $f^{(n-2)}$ is a polynomial of degree at most 1 $\implies \dots \implies f$ is a polynomial of degree at most $(n-1)$
Counting Ordered Pairs
Hint: You are correct that to maximize the number of pairs you should have $|C_4|=1$, which gives $|C_3|=n-3$ But when you add the third element to $C_3$ you add more than two ordered pairs. $c$ can be paired with each of $a$ and $b$ in either order, so you add four pairs. Can you determine what happens for larger $n$?
Inscribed Quadrilateral: Collinear Points
Let the second point of intersection of $AE$ with the circumcircle be called $P$ and the second point of intersection of $AF$ with the circumcircle be called $Q$ (the first being $A$ for both lines). Then $O \in BQ$ because $\angle \, BAQ = 90^{\circ}$ implies that $BQ$ must be a diameter. Analogously, $O \in DP$ because $\angle \, DAP = 90^{\circ}$ implies that $DP$ must be a diameter. Consequently, $O = BQ \cap DP$. Apply Pascal's theorem to the non-convex inscribed hexagon $BCDPAQ$. Then the intersection points of the opposite edges of the hexagon, $E = BC \cap AP, \,$ $F = CD \cap AQ$ and $O = BQ \cap DP$ must be collinear.
I am trying to find the zeroes of this polynomial
You can ignore the $x$ term at the right as it is tiny compared to everything else. The leading factor of $8.0 \cdot 10^{29}$ makes sure of it. The largest term in $x$ that comes from expanding the product is $8.0 \cdot 10^{29}\cdot 0.010 \cdot 4 \cdot (1.99 \cdot 10^{-7})^3 \cdot -4\approx -1.0\cdot 10^9$, so changing that by $1$ makes no difference. Then you can see the roots are $0.010$ and a quadruple root at $\frac 14\cdot 1.99\cdot 10^{-7}\approx 5.0\cdot 10^{-8}$
What is the most efficient way to determine if a matrix is invertible?
Gauss-Jordan elimination can be used to determine when a matrix is invertible, and it runs in polynomial (in fact, cubic) time. The same method (applying the same row operations to the identity matrix) computes the inverse in polynomial time as well.
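Here is a minimal sketch of such a test in Python/NumPy (my addition): eliminate with partial pivoting and report singularity when no usable pivot remains. The tolerance is an arbitrary choice for floating-point input.

```python
# Invertibility test by Gaussian elimination with partial pivoting, O(n^3).
import numpy as np

def is_invertible(M, tol=1e-12):
    A = np.array(M, dtype=float)
    n = A.shape[0]
    for col in range(n):
        pivot = col + np.argmax(np.abs(A[col:, col]))
        if abs(A[pivot, col]) < tol:
            return False                    # no usable pivot: singular
        A[[col, pivot]] = A[[pivot, col]]   # swap pivot row into place
        A[col + 1:] -= np.outer(A[col + 1:, col] / A[col, col], A[col])
    return True                             # full set of pivots

print(is_invertible([[1, 2], [3, 4]]))      # True
print(is_invertible([[1, 2], [2, 4]]))      # False
```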
What is the distribution of $Z^2$ if $Z$ is the sum of a Gaussian and a Rayleigh variable?
If the distribution of $Z$ has density $f$, then the distribution of $T=Z^2$ has density $g$ where, for every $t\gt0$, $$ \color{green}{g(t)=\frac1{2\sqrt{t}}\left(f(\sqrt{t})+f(-\sqrt{t})\right)}. $$ This is a simple consequence of the definition of $f$ and of the change of variables formula applied to the expression of $\mathrm E(u(Z^2))$ as an integral, for every bounded measurable function $u$. Since @Sasha gave you the density $f$, you are done.
Book recommendations for algebraic number theory with usage of algebraic geometry
You'll find that Milne has a collection of (in my opinion) excellent notes here. He indicates which of his documents depend on one another by means of a 'required' and 'useful' column. His Abelian varieties notes require AG and ANT, and suggest the CFT notes would be useful. I believe it would be feasible to gain a deep appreciation for the ANT you care about by using the Abelian varieties notes to give yourself direction, and the CFT and ANT notes to understand the content.
Sum of polynomials with no common factors
Let $k$ be a field. Notice that the $g_i$'s have no factor in common, and that $k[x]$ is a PID. If $R$ is a PID, and therefore a UFD, and $a_1,\ldots,a_k\in R$ share no common factor in $R$, then there exist $b_1,\ldots,b_k\in R$ such that $\sum a_ib_i=1$. This is exactly the result you need. In general, if $d=(a_1,\ldots,a_k)$, then $d$ is a linear combination of the $a_i$. An explicit argument, still using the fact that $k[x]$ is a PID. Since $k[x]$ is a PID, we know there is some $p(x)$ such that $(g_1(x),\ldots,g_k(x))=(p(x))$. This means that $p(x)$ divides each $g_i(x)$. Since the $g_i(x)$ are relatively prime, $p(x)$ must be a unit, so $(g_1(x),\ldots,g_k(x))=(p(x))=k[x]$. By definition of an ideal generated by elements $g_i(x)$, there must be $f_i(x)$ such that $\sum f_i(x)g_i(x)=1$.
prove $x = (b-a) \mod m$
Just use the definitions. $A\equiv B\pmod m$ means $m\mid A-B$, which means there exists an integer $k$ so that $mk = A- B$, which means there exists an integer $k$ so that $A = B + mk$. So if $ai+x \equiv b \pmod m$, then there exists a $k$ so that $ai+x = b + mk$, and then there exists an integer $k$ so that $x=b - ai + mk$. So $x\equiv b-ai\pmod m$.
restricting invertible maps to get new maps
Let $V,V',W$ be finite-dimensional vector spaces over $k$ and let $T : V \otimes_k W \to V' \otimes_k W$ be a linear map. Then the following are equivalent: $\bullet$ $T$ commutes with the $\mathrm{GL}(W)$-action. $\bullet$ There is some linear map $S : V \to V'$ such that $T = S \otimes \mathrm{id}_W$. Proof. Clearly $S \otimes \mathrm{id}_W$ commutes with the $\mathrm{GL}(W)$-action. Now let us assume that $T$ commutes with the $\mathrm{GL}(W)$-action. By writing $V$ and $V'$ as a finite direct sum of copies of $k$, we may obviously assume that $V=V'=k$, i.e. $T : W \to W$ commutes with the $\mathrm{GL}(W)$-action. Now use my proof at SE/630842 to show that $T(w)=\lambda w$ for some $\lambda \in k$. $\square$
Does there exist a function which is holomorphic on the open unit disc but goes to infinity on the boundary?
Correct. Such a function would have finitely many zeros in the unit disk. Dividing by the product of $z - r_j$ where $r_j$ are the zeros (listed by multiplicity), you would get an analytic function $g$ on the disk which has no zeros there but still has $|g| \to \infty$ as $|z| \to 1-$. And then $1/g$ would violate the maximum modulus principle.
Linear representation of $\mathbb{Z}/{p^k}\mathbb{Z}$
Warning: Compared to computations in $\mathbb Z/p^k \mathbb Z$, computations in $\mathrm{GL}(p^k,\mathbb C)$ under multiplication (which is what is done when you take group representations) are really, really slow on any platform I have ever heard of. Computations in the first run in linear time, while in the second there are no known algorithms that work in even quadratic time. That being said, we can generate a subgroup of $\mathrm{GL}(p^k,\mathbb C)$ isomorphic to $\mathbb Z/p^k \mathbb Z$ with the $p^k\times p^k$ matrix $$X=\begin{pmatrix} 0 & 0& \cdots & 0& 1\\ 1 & 0 & \cdots & 0 & 0\\ 0 & 1 & \cdots & 0 & 0\\ \vdots & & \ddots& \vdots & \vdots\\ 0 & 0 & \cdots & 1& 0\end{pmatrix}$$ which is just the identity matrix $I$ with all but the last row shifted down one, and the last row placed at the top. The isomorphism is then simply $h(n)=X^n$, as $I=X^0=X^{p^k}$. Edit: Note that we are really representing $\mathbb Z/p^k \mathbb Z$ as a subgroup of $S_{p^k}$, namely that generated by the cycle $(12\cdots p^k)$, and then representing $S_{p^k}$ as a subgroup of $\mathrm{GL}(p^k,\mathbb C)$ via the map $$\sigma\mapsto \begin{pmatrix}e_{\sigma(1)}\\ \vdots\\ e_{\sigma(p^k)}\end{pmatrix}$$ where $e_n$ denotes the $n$th row of the identity matrix.
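For what it's worth, here is a small NumPy sketch (my addition) that builds $X$ and confirms it has order $p^k$:

```python
# Build the cyclic-shift matrix X and verify its multiplicative order.
import numpy as np

p, k = 2, 3
m = p**k
# identity with all rows shifted down one, last row wrapped to the top
X = np.roll(np.eye(m, dtype=int), 1, axis=0)

acc = np.eye(m, dtype=int)
order = 0
while True:
    acc = X @ acc
    order += 1
    if np.array_equal(acc, np.eye(m, dtype=int)):
        break
print(order == m)   # True: X generates a cyclic group of order p^k
```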
How do you solve $4x^2=-16x$? I get different answers depending on the method used.
Your first method is not wrong, but notice that you can only divide by $4x$ if $x\neq0$. If $x=0$, then it is already a solution, and this is how you can add $x=0$ as a solution in the first method as well.
Reference request for complex variables
I would use Bak & Newman's "Complex Analysis" for an introduction to the above topics except for "CR in polar form", "harmonic functions" and "Milne-Thomson".
Stereographic Projection: Cartography Applications
The benefit to cartography is significant, and the mappings are easy to calculate. If we consider a small patch, like a state on the globe, its geographical mapping from the sphere's surface onto a plane at the south pole (call it SPp) has approximately constant magnification, while angles are preserved (the projection is conformal but not isometric). This is useful for GIS and GPS-assisted navigation on land: when the earth's curvature can be neglected, a small-area map is represented faithfully on the flat SPp tangent plane. Corresponding angles do not change in magnitude, except that $\phi \to \pi -\phi $ reckoned from the meridian. Large loxodromes are mapped to logarithmic spirals along rhumb lines, which is very useful in ship navigation. Small circles (radius small compared to the earth's radius) project to perfect circles on SPp. When the entire latitude/longitude net is slid by $ 90^{\circ}$ (so the N-S polar axis lies along a diameter of the equatorial plane), projection from a point light at the north pole (NPl) casts a bipolar coordinate net. The properties of circle inversions are useful and interesting here. 3D-printed surface models cast interesting angle-preserving shadow projections from NPl onto SPp.
Complete and unabridged proof of the theorem of acyclic models
You should look at Barr, M. Acyclic models, CRM Monograph Series, Volume 17. American Mathematical Society, Providence, RI (2002), and see if that satisfies you for completeness. A theorem involving crossed complexes rather than chain complexes is in Section 10.4 of the book partially titled Nonabelian Algebraic Topology (2011). This version gives in some instances homotopy equivalences rather than just homology equivalences.
Proving Uniform Convergence 2
For $x \in[-1,1]$ we have $$ \frac{|x^n|}{n^2}=\frac{|x|^n}{n^2} \le \frac{1}{n^2},$$ so the series converges uniformly on $[-1,1]$ by the Weierstrass $M$-test, since $\sum 1/n^2$ converges.
Does this system of PDE's always admit a solution?
Differentiating $(1)$ w.r.t. $v$ gives $$ F_{uuv} + F_{v} = P_{v} . $$ Differentiating $(2)$ w.r.t. $u$ gives $$ F_{uvu} = F_{u} , $$ so that the equality of mixed derivatives gives $F_{u} + F_{v} = P_v$. The method of characteristics provides solutions of the form $$ F(u,v) = F_0(v-u) + \int_0^u P_v(u-\tau, v - \tau) \, \text d \tau $$ where $F_0$ is an arbitrary function. The unknown $P$ solves a second-order linear hyperbolic PDE in its canonical form $P_{uv} = P$. Some solutions of the form $P(u,v) = A e^{ku + v/k}$ can be found by separation of variables.
The subcomplex of degenerate simplices has trivial homology
In Weibel's book on homological algebra, in the proof of theorem 8.3.8, there is a direct proof that $DA$ is acyclic. It is very natural: one filters the complex with the $p$th layer of $DA_q$ being $F_pD A_q = \sum_{i=0}^{p} \textrm{im}(s_i : A_{q-1} \to A_q)$ and looks at the corresponding spectral sequence. He then shows that the $E_0$ terms of the spectral sequence are acyclic by exhibiting an actual contraction. This does not (immediately!) give a contraction of $DA$, though.
Upper bound for Ramsey number of four-cycles
Consider any fixed coloring of the edges of a complete graph $K_n$ with $n>1$ vertices into colors $1,\dots, k$ without monochromatic copies of $C_4$. Let's double count the quantity $N$ of pairs $(e,v)$ where $e=uw$ is an edge of $K_n$ such that the edges $uv$ and $wv$ have the same color. Since there are no monochromatic copies of $C_4$, each edge $e$ of $K_n$ can participate in at most $k$ pairs $(e,v)$ which we count (at most one vertex $v$ per color, since two such vertices for the same color would yield a monochromatic $C_4$). Thus $N\le k{n\choose 2}$. On the other hand, let $v$ be any vertex of $K_n$. For each color $i$ let $n_i$ be the number of vertices $u$ such that the edge $uv$ is colored in the color $i$. Then $v$ participates in $N_v=\sum_{i=1}^k {n_i\choose 2}$ pairs $(e,v)$ which we count. Using the inequality between the arithmetic and quadratic means, we have $$N_v=\sum_{i=1}^k {n_i\choose 2}=\frac 12\sum_{i=1}^k (n_i^2-n_i)\ge$$ $$ \frac 12\left(\frac{(\sum n_i)^2}{k}- \sum n_i \right) =\frac {(n-1)^2-k(n-1)} {2k}.$$ Thus $N=\sum_{v} N_v\ge\frac {n(n-1)(n-k-1)}{2k}$ and we have a contradiction provided $\frac {n(n-1)(n-k-1)}{2k}> k{n\choose 2}$, that is when $\frac {n-k-1}{k}> k$, or $n>k^2+k+1$. Thus $R_k(C_4,\dots,C_4)\le k^2+k+2$.
How to find the minimizer of the following problem?
Note that there are some other sub-problems in my project, for which I have already got the solution. The given sub-problem can be written in the continuous setting as: $$\min_P\frac{r}{2} \int_\Omega|P-Z|^2dx+\frac{\mu}{2}\int_\Omega(|P|-1)^2dx,$$ where $\Omega= [1,N]\times[1,M]$ is a set of $N \times M$ points in $\mathbb{R}^2$. However, my question is still: how does one find the closed-form solution of the problem with respect to the variable $P$?
Set conjectures concerning the asymptotic behaviour of erratic arithmetic functions, related to the Möbius function and the Liouville function
Let $h(n) = \sum_{k=1}^n \mu(k) (n \bmod k)$. Note that $(n \bmod k) - ((n-1) \bmod k) = 1-k\, 1_{k |n}$, thus $$h(n)-h(n-1) = \mu(n)(n \bmod n) +\sum_{k=1}^{n-1} \mu(k)(1-k\, 1_{k |n}) = M(n-1)- (f(n)-\mu(n)n)$$ where $f(n) = \sum_{k | n} k\mu(k) $ is the Dirichlet inverse of $\varphi(n)=\sum_{d | n} \mu(d) \frac{n}{d}$. So understanding $f(n)$ and $M(n)=\sum_{k=1}^n \mu(k)$ is enough for understanding $h(n)$, and that's what you should have seen before posting this.
Finding the normal distribution given a specific amount of objects
Note that if $X_1,X_2,\dots,X_n$ are iid $\mathcal N(\mu,\sigma^2)$ random variables, then \begin{equation} \overline X = \dfrac{\sum\limits_{i=1}^nX_i}{n}\sim \mathcal N\left(\mu,\frac{\sigma^2}{n}\right) \end{equation} Given that $n=9, \mu = 0.96$ and $\sigma = 0.045$, plug in the values to get the distribution of the mean.
Proving a particular identity for differentiable functions.
$$\frac{f(ax)-f(bx)}{cx}=\frac{f(ax)-f(0)}{cx}-\frac{f(bx)-f(0)}{cx}=\frac ac\frac{f(ax)-f(0)}{ax}-\frac bc\frac{f(bx)-f(0)}{bx}$$ tends to $$\frac acf'(0)-\frac bcf'(0).$$
Counting permutations : too many separate possibities
For a given student, let $p_k$ be the probability that the student is adjacent to $k$ parents, where $0\le k\le 2$. Then we get $$ \left\lbrace \begin{align*} p_0&= \left(\frac{2}{15}{\,\cdot\,}\frac{4}{14}\right) + \left(\frac{13}{15}{\,\cdot\,}\frac{4}{14}{\,\cdot\,}\frac{3}{13}\right) = \frac{2}{21} \\[6pt] p_1&= \left(\frac{2}{15}{\,\cdot\,}\frac{10}{14}\right) + \left(\frac{13}{15}{\,\cdot\,}\frac{4}{14}{\,\cdot\,}\frac{10}{13}{\,\cdot\,}2\right) = \frac{10}{21} \\[6pt] p_2&= \frac{13}{15}{\,\cdot\,}\frac{10}{14}{\,\cdot\,}\frac{9}{13} = \frac{3}{7} \\[4pt] \end{align*} \right. $$ The contribution of each student to the expected number of student-parent adjacencies is equal to $$ 0{\,\cdot\,}p_0\;+\;1{\,\cdot\,}p_1\;+\;2{\,\cdot\,}p_2 = 0{\,\cdot\,}\frac{2}{21}\;+\;1{\,\cdot\,}\frac{10}{21}\;+\;2{\,\cdot\,}\frac{3}{7} = \frac{4}{3} $$ hence since there are $5$ students, the expected number of student-parent adjacencies is equal to $$ 5{\,\cdot\,}\frac{4}{3} = \frac{20}{3} \approx 6.67 $$
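A quick Monte Carlo check of the answer (my addition, assuming the setup is a random row of $5$ students and $10$ parents, which matches the $\frac{2}{15}$, $\frac{4}{14}$, $\frac{10}{14}$ factors above):

```python
# Simulate random seatings and count student-parent adjacent pairs.
import random

def adjacencies():
    row = ['S'] * 5 + ['P'] * 10
    random.shuffle(row)
    return sum(1 for a, b in zip(row, row[1:]) if {a, b} == {'S', 'P'})

trials = 200_000
print(sum(adjacencies() for _ in range(trials)) / trials)  # ~ 6.67
```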
Show that $P=\frac{3W}{5+3\sin\theta -\cos\theta}$
Compute the moments around the middle of the disc (the center of mass). The normal forces of walls A and B give no contribution to the resulting moment, because both lines of action pass through the center of mass. You will have the equation, for disc radius $r$ (is the sign correct?), $$F_a r + F_b r - P r = 0,$$ and the radius cancels out. Then you can solve a system of equations to obtain $P$.
System of Recurrence Relations
One way is to define generating functions $A(z) = \sum_{n \ge 0} a_n z^n$ and $B(z) = \sum_{n \ge 0} b_n z^n$, write your recurrences with indices shifted so that there are no subtractions in indices: $\begin{align} a_{n + 1} &= 2 a_n - b_n + 2 \\ b_{n + 1} &= - a_n + 2 b_n - 1 \end{align}$ Multiply both recurrences by $z^n$, sum over $n \ge 0$, and recognise some sums: $\begin{align} \sum_{n \ge 0} a_{n + 1} z^n &= 2 \sum_{n \ge 0} a_n z^n - \sum_{n \ge 0} b_n z^n + 2 \sum_{n \ge 0} z^n \\ \sum_{n \ge 0} b_{n + 1} z^n &= - \sum_{n \ge 0} a_n z^n + 2 \sum_{n \ge 0} b_n z^n - \sum_{n \ge 0} z^n \end{align}$ $\begin{align} \frac{A(z) - a_0}{z} &= 2 A(z) - B(z) + \frac{2}{1 - z} \\ \frac{B(z) - b_0}{z} &= - A(z) + 2 B(z) - \frac{1}{1 - z} \end{align}$ The solution to this system of equations is: $\begin{align} A(z) &= \frac{z - 2 z^2}{1 - 5 z + 7 z^2 - 3 z^3} \\ &= \frac{1}{4 (1 - 3 z)} - \frac{3}{4 (1 - z)} + \frac{1}{2 (1 - z)^2} \\ B(z) &= \frac{1 - 4 z + 2 z^2}{1 - 5 z + 7 z^2 - 3 z^3} \\ &= - \frac{1}{4 (1 - 3 z)} + \frac{3}{4 (1 - z)} + \frac{1}{2 (1 - z)^2} \end{align}$ Note that: $$ (1 - z)^{-r} = \sum_{n \ge 0} (-1)^n \binom{-r}{n} z^n = \sum_{n \ge 0} \binom{n + r - 1}{r - 1} z^n $$ Also, $\binom{n + r - 1}{r - 1}$ is a polynomial of degree $r - 1$ in $n$. In particular, $\binom{n + 1}{1} = n + 1$. Picking the coefficients of $z^n$ of the above: $\begin{align} a_n &= \frac{3^n}{4} - \frac{3}{4} + \frac{n + 1}{2} \\ &= \frac{3^n - 3}{4} + \frac{n + 1}{2} \\ b_n &= - \frac{3^n}{4} + \frac{3}{4} + \frac{n + 1}{2} \\ &= - \frac{3^n - 3}{4} + \frac{n + 1}{2} \end{align}$
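A quick check of the closed forms against the recurrence (my addition), using the initial values $a_0=0$, $b_0=1$ that are consistent with the generating functions above:

```python
# Iterate the recurrence and compare with the closed-form solutions.
a, b = 0, 1
for n in range(10):
    ca = (3**n - 3) / 4 + (n + 1) / 2
    cb = -(3**n - 3) / 4 + (n + 1) / 2
    assert (a, b) == (ca, cb), (n, a, b, ca, cb)
    a, b = 2 * a - b + 2, -a + 2 * b - 1
print("closed forms match")
```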
An equivalence of categories
If $G$ is a groupoid, there is a simple but useful equivalence between The category of actions of $G$ on sets, i.e. of functors $G \to Sets$, and The category of groupoid covering morphisms of the groupoid $G$. Note that a covering morphism $q: H \to G$ of groupoids is a morphism such that for each $y \in Ob(H)$ and $g \in G$ starting at $q(y)$ there is unique $h \in H$ starting at $y$ such that $q(h)=g$. Now your question rephrases as: given a covering morphism $q: H \to \pi_1(X)$, how might we construct a covering space $Y$ of $X$? Clearly we want to take the set $Y$ to be $Ob(H)$. To get the topology on $Y$ you need a local condition on $X$: this is that for any $y \in Y$ there exists a path connected neighbourhood $N$ of $q(y)$ in $X$ such that the morphism $\pi_1(N) \to \pi_1(X) $ lifts uniquely to a morphism $\phi: \pi_1(N) \to H$ such that $\phi(q(y))=y$. The details are in Topology and Groupoids, Chapter 10. The advantage of this approach to that of actions is that liftings are more easily described in terms of covering morphisms than in terms of actions. This illustrates a general principle in mathematics: two notions may be equivalent, even in a categorical sense, but where you should operate for a particular situation may be easier to understand in one notion than in the other. The more difficult the equivalence between the two notions, the more advantage there is in being able to move from one to the other according to convenience.
How to denote $\underbrace{(1+n(1+n(1+\cdots)))}_{x-1 \text{times}}$ in a single expression?
Multiply the left hand side by $(1 - n)$. This will collapse it down to $n - n^{x+1}$. Then divide by $(1 - n)$ to get a final answer of: $$S = \frac{n - n^{x+1}}{1 - n}.$$ For more on this collapsing trick, look up telescoping series.
Show that $C-C=[-1,1].$
A proof not starting from $C+C$ uses the balanced ternary system, in which every number $x\in [-1/2, 1/2]$ is represented as $$ x = \sum_{n=1}^\infty \epsilon_n 3^{-n},\quad \epsilon_n\in \{-1, 0, 1\} $$ Write each such $\epsilon_n$ as $(1-0)$ or $(0-1)$ or $(0-0)$ as appropriate, and redistribute the terms: $$ x = \sum_{n=1}^\infty a_n 3^{-n} - \sum_{n=1}^\infty b_n 3^{-n} $$ with $a_n\in \{0, 1\}$ and $b_n\in \{0, 1\}$. It remains to multiply this by $2$, and the result follows: $2x = a-b$ where $a, b$ are in the Cantor set $C$.
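The digit manipulation in this proof is completely constructive; here is an illustrative Python sketch (my addition) that carries it out to finitely many digits:

```python
# Split x in [-1/2, 1/2] into balanced-ternary digits, rewrite each digit
# as a difference of {0,1}-digits, and check 2x = a - b with a, b in the
# Cantor set (doubling {0,1}-digits gives ternary expansions using 0, 2).
def cantor_difference(x, n_digits=40):
    a = b = 0.0
    for n in range(1, n_digits + 1):
        # greedy balanced-ternary digit: eps in {-1, 0, 1}
        eps = min((-1, 0, 1), key=lambda e: abs(x - e * 3.0**-n))
        x -= eps * 3.0**-n
        an, bn = {1: (1, 0), -1: (0, 1), 0: (0, 0)}[eps]
        a += 2 * an * 3.0**-n   # base-3 digits 0/2: a point of C
        b += 2 * bn * 3.0**-n
    return a, b

x = 0.37
a, b = cantor_difference(x)
print(a - b, 2 * x)   # approximately equal
```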
Given an exact differential; How do you find the function that satisfies the differential?
We have two equations \begin{align*} u &= xy + f(y) && \color{#180}{(\mathrm{A})}\\ u &= xy + y^2 + g(x) && \color{#180}{(\mathrm{B})} \end{align*} Both of the expressions on the right hand side are equal to $u$ and hence each other (we are 'matching' the two expressions for $u$), so we see that \begin{align*} xy + f(y) &= xy + y^2 + g(x)\\ f(y) &= y^2 + g(x). \end{align*} As the left hand side depends only on $y$, but not on $x$, the same must be true of the right hand side. It follows that $g(x)$ must be a constant function, $g(x) = c$, so $f(y) = y^2 + c$. Substituting into either $\color{#180}{(\mathrm{A})}$ or $\color{#180}{(\mathrm{B})}$, we obtain the solution $u = xy + y^2 + c$. As for your second query, given the equation $\frac{\partial u}{\partial x} = y$, we would like to determine $u$. That is, we want a function $u$ such that its partial derivative with respect to $x$ is $y$. Of course $xy$ is one such function, but so is $xy + f(y)$ for any function $f$ because $\frac{\partial}{\partial x}f(y) = 0$. This is precisely the same reason why we add a constant of integration when antidifferentiating a function of one variable. Here's the other approach I mentioned in my comment. We have equation $\color{#180}{(\mathrm{A})}$ and we know $\frac{\partial u}{\partial y} = x + 2y$. Differentiating $\color{#180}{(\mathrm{A})}$ with respect to $y$, we now have the two equations \begin{align*} \frac{\partial u}{\partial y} &= x + f'(y)\\ \frac{\partial u}{\partial y} &= x + 2y. \end{align*} Therefore \begin{align*} x + f'(y) &= x + 2y\\ f'(y) &= 2y\\ f(y) &= y^2 + c. \end{align*} Substituting this back into $\color{#180}{(\mathrm{A})}$, we see that $u = xy + y^2 + c$.
Strong operator convergence of unilateral shift.
Since $LR=I$, you immediately get that $L^nR^n=I$ for all $n$, no limits required. Formally, this is shown by induction: if $L^nR^n=I$, then $L^{n+1}R^{n+1}=L^nLRR^n=L^nR^n=I$. The other way around, the composition $R^nL^n$ equals the orthogonal projection onto $\overline{\operatorname{span}}\{e_n,e_{n+1},\ldots\}$. So indeed it goes to zero in the strong operator topology.
If $\det(A + t_i B) = 0$ for $n + 1$ distincts $(t_i)_i$, can I find $V, W$ such that $A(V), B(V) \subset W$ and $\dim W < \dim V$?
Note that $\det(A+tB)$ is a polynomial of degree $n$ in $t$. Hence if it has $n+1$ many zeros, then it must be the $0$ polynomial. Now since $\det(A+tB)=0$ for all $t$, there exists a kernel element in $\mathbb{R}[t]\otimes\mathbb{R}^n$, i.e. there exist vectors $v_0,\dots,v_m$ such that $(A+tB)(\sum_i t^i v_i)=0$ (w.l.o.g. $v_m\neq 0$). Comparing coefficients of $t$, this implies that $Av_0=0$, $B v_m=0$, and $B v_i = -Av_{i+1}$ for $i<m$. Hence $A,B$ together send the space $\mathbb{R}\cdot\{v_0,\dots,v_m\}$ to $\mathbb{R}\cdot\{Bv_0,\dots,Bv_{m-1}\}$ and the result follows. Edit: In general, a tuple of matrices $(A_1,\dots,A_m)$ is called a compression space if it satisfies 2. It is always true that 2 implies $\det(t_1A_1+\dots+t_m A_m)=0$ for all $t_i\in\mathbb{R}$, but the converse statement holds if and only if $m\leq 2$ or $n\leq 2$. The $m=2$ case is dealt with in the Kronecker-Weierstrass theory of pencils. Edit 2: Before giving a proof, maybe an intuition would help. For each $t\in\mathbb{R}$, there is a vector $v(t)$ such that $(A+tB)v(t)=0$. This is because $\det(A+tB)=0$. All we need to show is that $v(t)$ can be picked as a polynomial in $t$. Here is the proof: Considering $t$ as an indeterminate, observe that $A+tB$ is a matrix with entries in $\mathbb{R}[t]$. It is a $\mathbb{R}[t]$ module map from $\mathbb{R}[t]^n$ to $\mathbb{R}[t]^n$. Take the field of fractions $\mathbb{R}(t)=\{\frac{f}{g}\mid f,g\in\mathbb{R}[t], g\neq 0\}$. Then we can also consider $A+tB$ as a linear map from $\mathbb{R}(t)^n$ to $\mathbb{R}(t)^n$. Its determinant is still $0$ so there is a kernel element of the form $$v=\begin{bmatrix} \frac{f_1}{g_1}\\ \frac{f_2}{g_2}\\ \vdots\\ \frac{f_n}{g_n}\\ \end{bmatrix}$$ In other words $(A+tB)v=0$ in $\mathbb{R}(t)^n$. Multiply $v$ by $g_1g_2\dots g_n$ so it is in $\mathbb{R}[t]^n$. The equation $(A+tB)v=0$ still holds, hence you obtain a kernel vector where each entry is a polynomial. Thus this is a vector of the form $v=v_m t^m+\dots+v_0\in\mathbb{R}[t]\otimes \mathbb{R}^n\cong\mathbb{R}[t]^n$ such that $(A+tB)v=0$.
Understanding an equality involving Cauchy's integral formula
Cauchy's integral formula says $\int_\gamma \frac{f(z)}{z-a}\, dz =2\pi i f(a)$. Divide both sides by $a-b$ you should have your expression.
Proof of Isomorphism from $\mathbb R/\mathbb Z$ to $\mathbb R/2\pi\mathbb Z$
One strategy is the following: Define an isomorphism $\varphi:\mathbb{R}\to \mathbb{R}$ which sends $\mathbb{Z}$ to $2\pi \mathbb{Z}$. Then, you can show that in general if $f:A\to A$ is an automorphism of an Abelian group taking a subgroup $N$ to $N'$, then $f$ induces an isomorphism $A/N\to A/N'$.
How to solve this simple Algebraic equation? (Wolfram & SymboLab both stumped...)
I may have a solution. The bad news: you're not going to like it because it's nasty. It ain't pretty. So first, we get all of the $p$-based terms on one side of the equation, probably the most trivial step. This is kosher since all of the constants are nonzero. This gives us $$(p-1)r^{(p-1)} = \frac{d}{c}$$ This is where it goes downhill. For simplicity, I let $x = p-1$ to make this a bit prettier, and we reduce this problem to finding $x$: $$xr^{x} = \frac{d}{c}$$ If you've ever heard of the Lambert $W$-function, this expression absolutely screams its use. If not, the $W$ function is essentially introduced in scenarios where we have $xe^x = c$ for some number $c$ and want to find $x$. Then the $W$ function is applied to both sides to give $x = W(c)$. The $W$ function is basically the "inverse" to $xe^x$, i.e. $W(xe^x) = x$. I'll be honest and admit I don't know a whole lot about it, so I'll link to the Wikipedia article. The article does give us a way to solve $z = xr^x$ when $r$ is not necessarily $e$: the identity that $$z = xa^x \;\;\; \Leftrightarrow \;\;\; x = \frac{W(z \ln(a))}{\ln(a)}$$ Take $z = d/c, a = r.$ Then, $$\frac{d}{c} = xr^x \;\;\; \Leftrightarrow \;\;\; x = \frac{1}{\ln(r)}W\left(\frac{d}{c} \cdot \ln(r) \right)$$ Then, since $x = p-1,$ $$p = 1 + \frac{1}{\ln(r)}W\left(\frac{d}{c} \cdot \ln(r) \right)$$ Per the Wikipedia article, "the Lambert W relation cannot be expressed in terms of elementary functions," so unless there's a method I overlooked in solving this, this might be the best you get. I'm not sure how it would be evaluated. The Wikipedia article notes some special values and approximation via Newton's method.
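If you need to evaluate this numerically, SciPy exposes the Lambert $W$ function; here is a short sketch (my addition) with made-up values of $c$, $d$, $r$:

```python
# Evaluate p = 1 + W((d/c) ln r)/ln r and check (p-1) r^(p-1) = d/c.
from scipy.special import lambertw
import numpy as np

c, d, r = 2.0, 5.0, 1.5                  # illustrative values only
p = 1 + lambertw(d / c * np.log(r)).real / np.log(r)
print(p, (p - 1) * r**(p - 1))           # second value equals d/c = 2.5
```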
Show that MNPQ and ABCD have the same centroid.
Let $\varepsilon = e^{-i{\pi\over 3}} = \cos {\pi\over 3} - i\cdot \sin{\pi\over 3}$ and let $G$, $G'$ be the centroids of $ABCD$ and $MNPQ$, so $$ G ={1\over 4}(A+B+C+D)$$ and $$G'= {1\over 4}(M+N+P+Q).$$ Now, since the vector $\vec{AM}$ is obtained by rotating $\vec{AB}$ around $A$ through the angle $-{\pi\over 3}$, we have (and similarly for the others): \begin{eqnarray} M-A &=& \varepsilon(B-A) \\ N-B &=& \varepsilon(C-B) \\ P-C &=& \varepsilon(D-C) \\ Q-D &=& \varepsilon(A-D) \end{eqnarray} If we sum these equations and divide by 4, we get $$ G'-G = 0$$ and we are done.
If $c\mid ab$ and $\gcd(c,a)=d$, then $c\mid db$
Try using Bézout's identity. Since $\gcd(c, a) = d$, we know that there exist $s,t \in \mathbb Z$ such that: $$ cs + at = d \iff bcs + abt = bd $$ Since $ c \mid ab$, we know that there exists some $x \in \mathbb Z$ such that $cx = ab$, so: $$ bd = bcs + (cx)t = c(\underbrace{bs + xt}_{\in ~ \mathbb Z}) $$ Hence, $c \mid bd$, as desired.
Finding the determinant of a matrix given determinants of other matrices
$$\mathbf P=\begin{pmatrix}a&2d&1\\b&2e&-2\\c&2f&-1\end{pmatrix},\,\mathbf U=\begin{pmatrix}a&b&c\\2&3&2\\d&e&f\end{pmatrix},\,\mathbf V=\begin{pmatrix}a&b&c\\d&e&f\\1&5&3\end{pmatrix}$$ For later use, we take the transpose of $\mathbf P$: $$\mathbf P^\top=\begin{pmatrix}a&b&c\\2d&2e&2f\\1&-2&-1\end{pmatrix}$$ Recall that $\det\mathbf P=\det(\mathbf P^\top)$. Denote by $\mathbf P_{i,j}$ the permutation matrix that, upon multiplication by a matrix $\mathbf A_{m\times n}$, swaps rows $i$ and $j$ in $\mathbf A$. So we can write $$\mathbf P_{2,3}\mathbf U=\begin{pmatrix}1&0&0\\0&0&1\\0&1&0\end{pmatrix}\mathbf U=\begin{pmatrix}a&b&c\\d&e&f\\2&3&2\end{pmatrix}$$ Recall that swapping any pair of rows/columns of a square matrix negates its determinant, so that $\det\mathbf U=-\det(\mathbf P_{i,j}\mathbf U)$ when $i\neq j$. Now, observe that we can write the last row of $\mathbf V$ as a linear combination of the last rows of $\mathbf P^\top$ and $\mathbf P_{2,3}\mathbf U$: $$\begin{pmatrix}1&5&3\end{pmatrix}=(-1)\begin{pmatrix}1&-2&-1\end{pmatrix}+\begin{pmatrix}2&3&2\end{pmatrix}$$ The determinant has the property that it is linear with respect to any given row, which is to say: focusing our attention on a single row, if we can write it as a linear combination of other row vectors, then we can expand the determinant as the sum of the corresponding component determinants. To illustrate in practice, we can write $$\begin{vmatrix}a&b&c\\d&e&f\\1&5&3\end{vmatrix}=\begin{vmatrix}a&b&c\\d&e&f\\(-1)1&(-1)(-2)&(-1)(-1)\end{vmatrix}+\begin{vmatrix}a&b&c\\d&e&f\\2&3&2\end{vmatrix}$$ Aside: I highly recommend watching this lecture from MIT if you ever feel the need to brush up on the properties of the determinant. Strang does a great job of explaining them. Next, we can pull out a factor of $-1$ from the third row of the first determinant and, at the cost of an overall factor of $\frac12$, distribute a factor of $2$ along its second row, $$\begin{vmatrix}a&b&c\\d&e&f\\1&5&3\end{vmatrix}=-\frac12\begin{vmatrix}a&b&c\\2d&2e&2f\\1&-2&-1\end{vmatrix}+\begin{vmatrix}a&b&c\\d&e&f\\2&3&2\end{vmatrix}$$ and we see that we have written $\det\mathbf V$ in terms of known determinants. We get $$\det\mathbf V=-\frac12\det(\mathbf P^\top)+\det(\mathbf P_{2,3}\mathbf U)=-\frac12\det\mathbf P-\det\mathbf U=-5-(-3)=-2$$
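As a sanity check, the row-linearity identity $\det\mathbf V=-\frac12\det\mathbf P-\det\mathbf U$ can be verified symbolically (a quick sketch using sympy, with the entries as free symbols):

```python
import sympy as sp

a, b, c, d, e, f = sp.symbols('a b c d e f')
P = sp.Matrix([[a, 2*d,  1],
               [b, 2*e, -2],
               [c, 2*f, -1]])
U = sp.Matrix([[a, b, c],
               [2, 3, 2],
               [d, e, f]])
V = sp.Matrix([[a, b, c],
               [d, e, f],
               [1, 5, 3]])
# det V = -det(P)/2 - det(U), identically in a..f
print(sp.simplify(V.det() + P.det() / 2 + U.det()))  # prints 0
```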
Prove or disprove: analytic $N$-th roots are unique up to multiplication by an $N$-th root of unity
On the set $\Omega\setminus f^{-1}(0)$, the map $\lambda\colon z\mapsto \frac{g_2(z)}{g_1(z)}$ is continuous with values in the $N$-th roots of unity, and by discreteness of the roots of unity it is locally constant. If $f$ is not identically zero, then $f^{-1}(0)$ is discrete, and therefore $\Omega\setminus f^{-1}(0)$ is still connected; hence $\lambda$ is constant (and extends continuously, with the same constant value, to all of $\Omega$).
Determine the number of ordered pairs of integers $(m, n)$
Hint: $$m^3 + n^3 + 99 m n - 33^3 = \left( m+n-33 \right) \left( {m}^{2}-mn+{n}^{2}+33\,m+33\,n+1089 \right) $$
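For a quick machine check of this factorisation (just a verification sketch, not part of the hint itself):

```python
import sympy as sp

m, n = sp.symbols('m n')
lhs = m**3 + n**3 + 99*m*n - 33**3
rhs = (m + n - 33) * (m**2 - m*n + n**2 + 33*m + 33*n + 1089)
print(sp.expand(lhs - rhs))  # prints 0
```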
Maximization of a determinant
As $A$ has orthonormal columns, its singular value decomposition must be of the form $A=U\pmatrix{I\\ 0}V^T$. Hence maximising $\det(A^T\Lambda A)$ is equivalent to maximising the determinant of $P_k$, the $k\times k$ leading principal submatrix of $P=U^T\Lambda U$. However, by the interlacing property, the $i$-th largest eigenvalue of $P_k$ is bounded above by the $i$-th largest eigenvalue of $P$. Since $P$ and $\Lambda$ have identical spectra, it follows that $\det P_k\le\lambda_1\cdots\lambda_k$. Therefore $U=I$ is a maximiser and in turn, every $A=\pmatrix{I\\ 0}V^T$ is a maximiser whenever $V\in O_k(\mathbb R)$.
$x^{1/2}-4x^{1/4}=0 \Rightarrow x = 0, 256$ - cannot arrive at this.
Let $\sqrt[4]x=y$, so $x=y^4$. Then $$0=y^2-4y=y(y-4),$$ so $y=0$ or $y=4$. Finally, $4^4=(4^2)^2=\,?$
Proof by induction that I'm stumped on.
Let $S=\{a_1, a_2, \dots, a_n\}$. The number of $k$-subsets is equal to $C_n^k$, so the number of elements counted over all $k$-subsets is equal to $kC_n^k$, and in total $\sum \limits_{k=1}^{n}kC_n^k$. But on the other hand, this total is equal to $n2^{n-1}$: pair each $k$-subset with its complement, obtained by adding the $(n-k)$ missing elements. A subset and its complement together contribute exactly $n$ elements, and among all $2^n$ subsets each such pair is counted twice, so the total is $\dfrac{n2^n}{2}=n2^{n-1}$. We get $\sum \limits_{k=1}^{n}kC_n^k=n2^{n-1}.$ That was a combinatorial proof. Now for the inductive proof. We see that for $n=1$ it's true. Suppose it is true for $n-1$; then we have $\sum \limits_{k=1}^{n-1}kC_{n-1}^{k}=(n-1)2^{n-2}$. Now we will prove it for $n$. Here we will use this property of binomial coefficients: $C_n^k=C_{n-1}^k+C_{n-1}^{k-1}$ (note that the $k=n$ term of the first sum below vanishes, since $C_{n-1}^n=0$). $$\sum \limits_{k=1}^{n}kC_n^k=\sum \limits_{k=1}^{n}kC_{n-1}^{k}+\sum \limits_{k=1}^{n}kC_{n-1}^{k-1}=(n-1)2^{n-2}+\sum \limits_{k=1}^{n}((k-1)+1)C_{n-1}^{k-1}=(n-1)2^{n-2}+\sum \limits_{k=1}^{n}(k-1)C_{n-1}^{k-1}+\sum\limits_{k=1}^{n}C_{n-1}^{k-1}=(n-1)2^{n-2}+(n-1)2^{n-2}+2^{n-1}=n2^{n-1}.$$ Q.E.D.
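A quick numerical sanity check of the identity (an added verification, not part of the proof):

```python
from math import comb

for n in range(1, 12):
    assert sum(k * comb(n, k) for k in range(1, n + 1)) == n * 2**(n - 1)
print("identity holds for n = 1..11")
```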
Pushforward of a representation?
There are two natural things you can do, corresponding to the left and right adjoint of restriction. Explicitly, the left adjoint sends a $G$-representation $V$ to the $G/N$-representation $$V/(nv - v, n \in N)$$ and the right adjoint sends a $G$-representation to the $G/N$-representation $$\{ v \in V : nv = v \forall n \in N \}.$$ In other words, the left adjoint takes $N$-coinvariants and the right adjoint takes $N$-invariants. Both of these are equipped with a $G/N$-action, and the left / right adjointness describes a universal property of this $G/N$-action that you should write down if you haven't already. More generally, there are left and right adjoints to pullback along a functor between two groupoids, not necessarily finite or connected, or for that matter between two categories. They're called left and right Kan extension, and loosely speaking they correspond to taking fiberwise colimits and fiberwise limits respectively. It's a good exercise to write down what this looks like for the restriction of scalars functor $\text{Mod}(S) \to \text{Mod}(R)$ coming from a map $f : R \to S$ of rings (e.g. group rings). The left adjoint is extension of scalars / induction and the right adjoint is a different thing that you might call coextension of scalars or coinduction.
Proving sum of product forms a pattern in n * nnnnnn....
One can note a more general pattern - let us, for convenience, define $$R_n=\underbrace{11\ldots 11}_{n\text{ times}}=\frac{10^n-1}9.$$ Then, we can note that, for fixed $k$, after a point, the sum of the digits of $k\cdot R_n$ increases linearly. Note that, if the digits were $8$'s, then we can express the desired product as $8^2\cdot R_n$ since multiplying by $8$ once gives $88\ldots 88$ and doing it again gives $8\cdot 88\ldots 88$. One will see the pattern if they go far enough: $$64\cdot 1 = 64$$ $$64\cdot 11 = 704$$ $$64\cdot 111 = 7104$$ $$64\cdot 1111 = 71104$$ $$64\cdot 11111 = 711104$$ $$64\cdot 111111 = 7111104$$ $$64\cdot 1111111 = 71111104$$ $$64\cdot 11111111 = 711111104$$ So, obviously, there's a pattern here - which is that we get a new $1$ inserted after the $7$ each time we add another $1$ on the left. There are a few productive ways to prove this. The one I think is most elegant would be simply to write this as long multiplication: $$\begin{array}{cccccccccc} & & & 1 & 1 & 1 & \ldots & 1 & 1 & 1 \\ \times & & & & & & & 6 & 4 \\ \hline && 4 & 4 & 4 &\ldots & 4 & 4 & 4 \\ +&6 & 6 & 6 & 6 &\ldots & 6 & 6 & 0\\\hline \end{array}$$ and now we can carry out this addition by hand too, writing out carries in the small text above: $$\begin{array}{cccccccccc} & &\scriptstyle{1} & \scriptstyle{1} & \scriptstyle{1} & \scriptstyle{1} &\ldots & \scriptstyle{1} & & \\ & & 4 & 4 & 4 &\ldots & 4 & 4 & 4 \\ +&6 & 6 & 6 & 6 &\ldots & 6 & 6 & 0\\\hline & 7 & 1 & 1 & 1 &\ldots & 1 & 0 & 4 \end{array}$$ One may notice that every column elided looks exactly like this: $$\begin{array}{cc} &\scriptstyle{1} \\ &4 \\ (+)&6\\ \hline & 1\end{array} $$ and, obviously, causes a $1$ to be carried into the column to its left. This suffices to prove that, once these columns begin, they will not end until we reach the end of the $4$'s in the top row - this is proved inductively, if we wish to be formal. Then, once we have this, we know that the beginning (here $7$) will always be the same, as it always looks like: $$\begin{array}{cc} &\scriptstyle{1} \\ & \\ (+)&6\\ \hline & 7\end{array} $$ and the ending will always look like: $$\begin{array}{ccc} & \\ &4&4 \\ (+)&6&0\\ \hline (1) & 0& 4\end{array} $$ where the $1$ in parentheses is carried, causing the inner columns to begin. This proof can be redone in whole to show the property for any sequence of the form $k\cdot 11\ldots 11$. With a bit more care, we can note that a sequence of identical columns begins after at most $2\lceil\log_{10}(k)\rceil$ digits (for a rough bound) since the carry into each such column is an increasing function of the carry into the previous column. Since the carry does not exceed the number of digits of $k$, it must hit a fixed point after at most $\lceil\log_{10}(k)\rceil$ steps, and this follows the "irregular" portion of the sum (when zeros are still being added from some terms), which has length at most $\lceil\log_{10}(k)\rceil$ as well. More directly, however, would be an algebraic approach to prove some identity of the form: $$k\cdot R_n = \alpha \cdot 10^n + \beta \cdot R_{n-m} \cdot 10^m + \kappa$$ for positive integers $\alpha$, $\beta$, $m$, and $\kappa$ with $\kappa < 10^m$ and $0\leq \beta < 10$. 
Knowing the form of $R_n=\frac{10^n-1}{9}$ makes it easy to verify such an identity - for instance: $$64\cdot R_n = 7\cdot 10^n + 1\cdot R_{n-2} \cdot 10^2 + 4$$ $$64\cdot \underbrace{11\ldots 11}_{n\text{ times}}=7\underbrace{11\ldots 11}_{n-2\text{ times}}04$$ holds, as can be checked purely algebraically. Note that this expresses that the last few digits are the digits of $\alpha$ (which is $7$ in the example) followed by a number of repetition of $\beta$ (which is $1$ in the example) followed by the digits of $\kappa$ (which is $04$ in the example). Obviously, the sum of the digits will increase by $\beta$ at each step, since the only change in the expression is the insertion of another one of that digit. To find such an expression, one extracts things in parts. For instance, for $64\cdot R_n=\frac{64\cdot 10^n-64}9$, we first take out the largest multiple of $10^n$ that we can which is $\lfloor\frac{64}9\rfloor=7$. Subtracting this out gives $$\frac{64\cdot 10^n-64}{9}-7\cdot 10^n=\frac{1\cdot 10^n-64}9$$ From here, we want the numerator to be of the form $\beta(10^n - 10^m) + 9\kappa$ which is easy to do. Here, for instance, we just need to choose $m$ such that $10^m$ is bigger than $64$ - so $m=2$ is the smallest that suffices. This gives $$\frac{64\cdot 10^n-64}{9}-7\cdot 10^n=\frac{1\cdot (10^n-10^2)+36}9=1\cdot R_{n-2}\cdot 10^2+4$$ which yields the expression when we add the $7\cdot 10^n$ back to both sides.
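A quick numerical check of both the identity and the digit-sum pattern (just a sanity test):

```python
def R(n):
    """The repunit 11...1 with n ones."""
    return (10**n - 1) // 9

for n in range(2, 10):
    v = 64 * R(n)
    assert v == 7 * 10**n + R(n - 2) * 10**2 + 4
    print(v, sum(map(int, str(v))))  # digit sum grows by 1 at each step
```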
Inverse functions (elementary)
You are at $$h^{-1}(3x+1)=x-5.$$ So now you substitute $y=3x+1$ or $x=\frac{y-1}3$ and you get $$h^{-1}(y)=\frac{y-1}3-5.$$ You now simply invert this to $$h(x)=3(x+5)+1=3x+16.$$
If $T= T_1 \cup T_2$ is inconsistent, then can we conclude that $T_1 \models \neg \phi$ for some $\phi\in T_2$
Not necessarily. Let $T_1=\emptyset$ and $T_2=\{\phi,\lnot\phi\}$ where neither $\phi$ nor $\lnot \phi$ is valid.
What is the acute angle $A$ if $\log_4(\sin^2A)=-1$?
Hint: Unchain the various operations from each other: $$\log_4 q = -1\\ r^2 = q\\ \sin A = r$$ and finally, $A$ is an acute angle. Different direction hint: Taking the rule $\log a^b = b\log a$, we can start with $\log_4(\sin^2 A) = -1 = 2\log_4(\sin A)$.
Running programs in nonstandard models of PA
You do need to prove this syntactically. One way is to prove the following lemma: For each $\Sigma^0_1$ formula $A(x_1,\ldots,x_k)$ there is an index $e$ such that PA proves $$(\forall n_1)\cdots(\forall n_k)[ A(n_1,\ldots,n_k) \leftrightarrow \phi_e(n_1,\ldots,n_k)\downarrow]$$ This is proved by induction on the structure of the formula $A$, using the definition of the formalization of "$\phi_e(n_1,\ldots,n_k)\downarrow$" in PA, which is actually Kleene's T predicate. It is true that if $e,n$ are standard and the standard model thinks that $\phi_e(n)$ converges then so will any nonstandard model. But a nonstandard model could think that $\phi_e(n)$ converges (in a nonstandard number of steps), even when $n$ is standard, even if that computation does not actually converge.
Isomorphic algebraic closures.
There are lots of algebraic closures of $\Bbb R$ in $\Bbb H$ (Hamilton's quaternions). For example, you have $\Bbb R[i]$ and $\Bbb R[j]$ and $\Bbb R[k]$. You can adjoin any element that squares to $-1$. These are all isomorphic to $\Bbb C$, of course.
$a+b+c=2,\ a+b^2+c^2=12$, what is the maximum value of c?
Subtracting the first equation from the second, we get $b^2-b+c^2-c=10$, and that's equivalent to $(b-1/2)^2+(c-1/2)^2=21/2$. Since $(b-1/2)^2\ge0$, we have $(c-1/2)^2\le21/2$, i.e. $c\le1/2+\sqrt{21/2}$, with equality when $b=1/2$ (and $a=2-b-c$); so this is the maximum value of $c$.
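To double-check attainability: with $b=\tfrac12$ and $c=\tfrac12+\sqrt{\tfrac{21}2}$ we get $$b^2+c^2=\tfrac14+\tfrac14+\sqrt{\tfrac{21}2}+\tfrac{21}2=11+\sqrt{\tfrac{21}2},\qquad a=2-b-c=1-\sqrt{\tfrac{21}2},$$ and indeed $a+b^2+c^2=12$.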
Optimizing $\rm \frac{x^TAx}{c^T x}$ subject to $\rm1^T x = 1$
Just noticed this question! One field where quadratic-over-linear appears is optimal transportation. Check out this paper for one example: http://arxiv.org/pdf/1304.5784.pdf
How do I compute speed based on acceleration and drag?
I'm not sure I understood the question: is $a(t)$ given? If yes, write $$\frac{dV}{dt}+d_1V^2=a(t).$$ Now if $a(t)=a$ is a constant, you can separate variables and get $$\frac{dV}{a-d_1V^2}=dt,$$ then integrate from $V_0$ to $V$ and from $t=0$ to $t$, with initial condition $V(0)=V_0$ and $d_1=d_0/M$, where $M$ is the mass of the body.
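For instance, in the simplest case (assuming constant $a>0$ and $V_0=0$; this worked case is added for illustration), the integral evaluates to $$\int_0^V\frac{dV'}{a-d_1V'^2}=\frac1{\sqrt{ad_1}}\operatorname{artanh}\!\left(V\sqrt{\frac{d_1}{a}}\right)=t,$$ so $$V(t)=\sqrt{\frac{a}{d_1}}\,\tanh\!\left(\sqrt{ad_1}\,t\right),$$ which approaches the terminal speed $\sqrt{a/d_1}$ as $t\to\infty$.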
Is there a close form solution for parallel transport on 2 sphere along the great circles.
Yes, you want to apply the unique rotation matrix that sends $p$ to $q$ while preserving their cross product $p\times q$, i.e. the rotation about the axis of the great circle through $p$ and $q$. Computing that rotation matrix is a well-known problem, e.g.: Calculate Rotation Matrix to align Vector A to Vector B in 3d? Stack Overflow: Finding quaternion representing the rotation from one vector to another
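Here is a small numpy sketch of that rotation via Rodrigues' formula (a generic construction, not code from the linked answers); it transports a tangent vector $v$ at $p$ along the great circle to $q$:

```python
import numpy as np

def transport(p, q, v):
    """Parallel-transport tangent vector v at unit vector p to unit
    vector q along the great circle, by rotating about the axis p x q."""
    axis = np.cross(p, q)
    s, c = np.linalg.norm(axis), np.dot(p, q)  # sin and cos of the angle
    if s < 1e-12:
        if c > 0:
            return v.copy()  # p == q: nothing to do
        raise ValueError("antipodal points: the great circle is not unique")
    k = axis / s
    # Rodrigues' rotation formula: v cos(t) + (k x v) sin(t) + k (k.v)(1 - cos(t))
    return v * c + np.cross(k, v) * s + k * np.dot(k, v) * (1 - c)

p = np.array([1.0, 0.0, 0.0])
q = np.array([0.0, 1.0, 0.0])
v = np.array([0.0, 0.0, 1.0])   # tangent at p
print(transport(p, q, v))       # [0, 0, 1]: fixed, since it lies on the axis
```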
How to alternate alternating negatives in a series (summation)?
[Comment converted to answer] Is it as simple as throwing in a factor of $(-1)^n$? I guess that would change your factor to $(-1)^{1-i+n}$, right?
If $ (1+3+5+\cdots +p)+ (1+3+5+\cdots +q)= (1+3+5+\cdots +r)$, Find the minimum value of $p+q+r$, $p>6$
Writing $S(m)=1+3+\cdots+m=\left(\frac{m+1}{2}\right)^2$ for odd $m$, here are some solutions of $S(p)+S(q)=S(r)$:

$p=7$, $S(7)=16$; $\quad q=5$, $S(5)=9$; $\quad r=9$, $S(9)=25$

$p=23$, $S(23)=144$; $\quad q=17$, $S(17)=81$; $\quad r=29$, $S(29)=225$

$p=29$, $S(29)=225$; $\quad q=15$, $S(15)=64$; $\quad r=33$, $S(33)=289$

The first solution gives the minimum value $p+q+r=7+5+9=21$.
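A brute-force search confirms that $(p,q,r)=(7,5,9)$ minimises $p+q+r$ under the constraint $p>6$ (a quick sketch; the search bounds are generous enough here):

```python
def S(m):
    """1 + 3 + ... + m for odd m."""
    return ((m + 1) // 2) ** 2

best = min(
    (p + q + r, p, q, r)
    for p in range(7, 60, 2)
    for q in range(1, 60, 2)
    for r in range(1, 120, 2)
    if S(p) + S(q) == S(r)
)
print(best)  # (21, 7, 5, 9)
```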
Why there isn't lexicographically smallest solution to a bounded linear program?
Consider the linear programming problem: minimize $y$ subject to $y \ge 0$, $y \le 1$, $x \le 0$, where there are optimal solutions $(x, 0)$ for $-\infty < x \le 0$. The point is that although the problem is bounded, the feasible region is not.
Having trouble figuring out when to use induction or direct proof.
Since I'm not very knowledgeable about graph theory, I won't try to recommend which method(s) to use with your example of the "max length of a path in a connected graph with $n$ vertices". For what it's worth though, in answer as to when to use induction or direct proof (or possibly other techniques) in general, I believe there are no hard & fast rules. Note that many (likely most) conjectures can be proven in more than one way. The "best" technique generally depends on what you know & understand, plus what seems to be simplest & easiest. However, there may be exceptions, such as if a longer & more involved technique gives a more general result that can be applied to many other situations. Although general guidelines can be determined, or provided, as to what proof technique is best to use in various situations, I don't believe anybody can ever reasonably cover every possible case. Instead, your knowledge & experience will often help guide you in terms of how to best approach trying to solve a given problem. Specifically regarding using induction, though, I don't fully agree with your initial statement of "I know for simple induction you generally want to use this technique when the domain of the conjecture is in the Naturals." A more accurate statement would be that one of the first techniques you should consider using is simple induction. There are many conjectures with domains of the natural numbers where simple induction does not work, or at least does not work well, but either some other type of induction (e.g., strong) could work, or perhaps induction is not really applicable at all (e.g., for many prime, congruence, polynomial, etc., related conjectures). However, a good clue that a type of induction will likely work well is if you have some type of relation of one natural number value, defined by values at other earlier natural number(s) (e.g., a sequence value at $n$ is defined in terms of the values at $n - 1$ and $n - 2$). This will help to allow applying the required base and inductive steps (e.g., as explained in Mathematical induction).
Rankin-Selberg zeta function
There are various constructions known as Rankin--Selberg $\zeta$-functions (or Rankin--Selberg $L$-functions), and you should be able to find much more literature on this than the one paper you linked to (which doesn't address the particular $L$-function you are asking about, as far as I can tell). You might want to begin by looking at de Weger's reference [10], since this is what he cites when he discusses Conjecture 7. I did look there, and there was no direct definition of the particular Rankin--Selberg $\zeta$-function in question, but following the references there ([5] and [9]; see p. 76), you might want to look at the Inventiones paper of Kohnen and Zagier, where various Rankin--Selberg-type computations take place. (This is reference [9] of Goldfeld--Szpiro.) In any case, the Shintani--Shimura lift associates a weight $k+{1/2}$ form to an eigenform $f$ of weight $2k$ (so a weight $3/2$-form gets associated to the weight two eigenform attached to a given elliptic curve $E$). The lifted form has the same system of Hecke eigenvalues as the original form, but because of the way Hecke operators work in the half-integral weight case, it has not just one leading coefficient (unlike in the integral weight case, where there is a single leading coefficient $a_1$ which, together with all the Hecke eigenvalues, determines the $q$-expansion) but a collection of "leading coefficients", indexed by fundamental discriminants $D$. The theorem of Waldspurger is that, up to an overall proportionality factor, the $D$th leading coefficient is equal to the $L$-value $L(f,\chi_D,1)$, where $\chi_D$ is the quadratic character of disc. $D$. It is reproved (under restricted hypotheses, I think) in the Kohnen--Zagier paper. As BR indicates, the Rankin--Selberg $L$-function is determined from this half-integral weight modular form by multiplying by a certain Eisenstein series and integrating over a fundamental domain. It is (I'm pretty sure) one of the Rankin-type constructions given in the Kohnen--Zagier paper. The growth rates of its coefficients are related to growth rates of the values of $L(E,\chi_D,1)$, which, assuming BSD, are related to growth rates of Sha of quadratic twists of the elliptic curve $E$. This is why it comes up in the kind of problems you are reading about.
If $H\leq G$ is of finite index, then we have a normal subgroup $N\leq G$ of finite index.
Your solution is correct provided you add the easy verification that the stabiliser of the coset $eH$ in the action of $G$ on $G/H$ is $H$, so that $\ker\eta$ is contained in $H$. If you don't want to mention group actions, you can define the morphism $\eta:G\to \operatorname{Sym}(G/H)$ explicitly. Identifying the stabilisers of the other cosets as conjugates of $H$, and $\ker\eta$ as their intersection, you get the argument in the comment by Martin Brandenberg.
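Explicitly, the standard formulas (spelled out here for convenience) are $$\eta(g)\colon xH\mapsto gxH,\qquad \ker\eta=\bigcap_{x\in G}xHx^{-1},$$ so $\ker\eta$ is a normal subgroup contained in $H$, and $G/\ker\eta$ embeds into $\operatorname{Sym}(G/H)$, a group of order $[G:H]!$; hence $N=\ker\eta$ has finite index.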
Suppose $\lim_{n\to\infty} {\frac{a_{n+1}}{a_n}} = \frac{1}{2}$, prove $\lim_{n\to\infty}{a_n}=0$
You're on the right track. Now WLOG assume $\epsilon < {1\over 2}$, so that after you reach the appropriate $N\in\Bbb N$ where your hypothesis starts to hold, with $0<a=\epsilon + {1\over 2} < 1$, you have $|a_{N+k}|<a^k|a_N|\to 0$ as $k\to\infty$, proving the result. You can even quantify it if you like, since $$a^k|a_N|<\epsilon\iff k>{\log\epsilon -\log|a_N|\over\log a}$$ where the inequality switches direction because we divide by the negative number $\log a$.