What will be yielded by gluing the faces of the tetrahedron in the picture?
The theorem says that the G-side-pairing used for "gluing" such a manifold should be proper. Proper means that each cycle under the side-pairing is finite and has a solid angle sum of $4 \pi$. Now, the question is to determine what kind of cycles and solid angles you've got in the picture. If you consider a point inside this tetrahedron, then there is no problem circumscribing a small sphere around it, and thus the solid angle will be $4 \pi$. The respective cycle consists of one point. If you take a point on a face (in the interior of that face), then you automatically have a half-sphere centred at this very point (the solid angle is $2 \pi$). However, this point is identified with some other point on the second face. The point there also has a half-sphere. Thus, the respective cycle consists of two points, and the solid angle is $2\pi + 2\pi = 4 \pi$ again. Now the most delicate thing is to verify what happens to a point sitting on an edge (in its interior) or a point that coincides with one of the vertices. Actually, you've got to know the respective dihedral angles to check whether the solid angle in question is $4\pi$. Even if you know the dihedral angles, this task needs some labour. However, by Poincaré's polyhedron theorem, you have to verify that the dihedral angle sum over each equivalence class of edges is $2\pi$. Check Ratcliffe's book for the theorem and its proof. The theorem is pretty nice and often in use. Also there are examples of how to glue the figure-eight knot complement out of regular ideal hyperbolic tetrahedra (in the same book, sorry for the lack of exact references to paragraphs). This could be a good illustration of how to check the properness of a side-pairing (maybe even without Poincaré's theorem). Edit: For example, here you say that AC is always glued to itself and there are no other edges in its equivalence class under the gluing (am I right here?). Thus, the cycle of every point sitting on that edge involves only the edge AC. Thus, the respective angle sum from Poincaré's theorem equals the dihedral angle along AC. Supposedly, this angle is never $2\pi$, since you may want your tetrahedron to be convex (although you do not specify the angles in your question). Thus, you will never have a manifold. The same answer, "not a manifold", results if you try to compute the solid angle for a point from the interior of the edge AC. Hope my reply helps a bit. Cheers!
Find the smallest $a>1$ such that $\frac{a+\sin x}{a+\sin y} \leq e^{(y-x)}$ for all $x \leq y$
Let $b = e^x$ and $c = e^y$. Then the problem is to find the supremum of all numbers of the form $(f(c) - f(b))/(c-b)$ for $c > b > 0$, where $f(t) = - t \sin \log t$. By the mean value theorem, such numbers are always of the form $f'(d)$. Conversely, any value $f'(d)$ of the derivative is itself the limit of numbers of the given form. So the problem amounts to finding the supremum of $f'(t) = -\sin \log t - \cos \log t$ for $t > 0$. That means the maximum value of $-\cos u - \sin u = - \sqrt{2}\sin(u + \pi/4)$ over all $u$, which is $\sqrt 2$. Therefore take $a = \sqrt{2}$. EDIT: A comment questioned the use of the mean value theorem. An alternative is to prove the following lemma: If $f'(t) \leq M$ for all $t$, then $(f(c) - f(b))/(c-b) \leq M$, which can fulfill the same purpose in this problem. This can be proved using only the fact that a function with nonnegative derivative is increasing (i.e., nondecreasing).
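A quick numerical sanity check of both the supremum and the original inequality (a Python sketch; the grid, sample ranges, and tolerance are arbitrary choices, not part of the argument):

```python
import numpy as np

# sup of f'(t) = -sin(log t) - cos(log t) over t > 0 should be sqrt(2)
t = np.linspace(1e-6, 100, 1_000_000)
print(np.max(-np.sin(np.log(t)) - np.cos(np.log(t))), np.sqrt(2))

# spot-check the original inequality with a = sqrt(2)
a = np.sqrt(2)
rng = np.random.default_rng(0)
x = rng.uniform(-50, 50, 100_000)
y = x + rng.uniform(0, 50, 100_000)          # guarantees x <= y
assert np.all((a + np.sin(x)) / (a + np.sin(y)) <= np.exp(y - x) + 1e-12)
```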
Intersection of Plane Through Origin and Sphere
From the given system derive that $(\frac{y}{2}+z)^2+\frac{3y^2}{4}=\frac{R^2}{4}$ (solve the first equation for $x$ and substitute it into the second). Then set $y=\frac{R}{\sqrt{3}}\cos{t}$ and $z=\frac{R}{2}\sin{t}-\frac{R}{2\sqrt{3}}\cos{t}$. Then from the equation $x+y+z=0$ we have $x=-\frac{R}{2\sqrt{3}}\cos{t}-\frac{R}{2}\sin{t}$.
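One can verify the parametrization symbolically (an illustrative sympy sketch):

```python
import sympy as sp

R, t = sp.symbols('R t', positive=True)
y = R/sp.sqrt(3)*sp.cos(t)
z = R/2*sp.sin(t) - R/(2*sp.sqrt(3))*sp.cos(t)
x = -R/(2*sp.sqrt(3))*sp.cos(t) - R/2*sp.sin(t)

print(sp.simplify(x + y + z))                         # 0  (the plane)
print(sp.simplify((y/2 + z)**2 + 3*y**2/4 - R**2/4))  # 0  (the derived ellipse)
```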
$V dV = \frac{1}{2} d(V^2)$?
Start with what you know such as $$\frac{d(V^2)}{dV} = 2V$$ then rearrange to $$\frac12\frac{d(V^2)}{dV} = V$$ and if you think it meaningful $$\frac12 \; d(V^2) = V \; dV.$$ If you want to take the derivative with respect to $t$ then by the chain rule $\frac{d(V^2)}{dt} = 2V\frac{dV}{dt}$ so: $$\frac12 \frac{d(V^2)}{dt} = V\frac{dV}{dt}.$$
Inverse Tangent Function
Let us look at a general trigonometric function of the form $$\sin(\tan^{-1}(x))\tag{1}$$ Let $$\theta = \tan^{-1}(x)\tag{2}$$ Substitute $(2)$ in $(1)$ to get $$\sin(\theta)\tag{3}$$ Let's go back to $(2)$. $$\theta = \tan^{-1}(x)\implies\tan(\theta) = x = \frac x1\tag{4}$$ Imagine a right triangle with adjacent side of length $A = 1$ and opposite side of height $O = x$. Using the Pythagorean Theorem, we can deduce the length $H = \sqrt{1^2 + x^2} = \sqrt{1 + x^2}$. Recall that $$\sin(\theta) = \frac OH = \frac {x}{\sqrt{1 + x^2}}$$ Great! We have an expression for $\sin(\theta)$. Let's convert it to the original form. $$\sin(\theta) = \sin(\tan^{-1}(x)) = \frac {x}{\sqrt{1 + x^2}}$$ Plug in $x = 2$ to get $$\sin(\tan^{-1}(2)) = \frac 2{\sqrt{1 + 2^2}} = \frac2{\sqrt{5}} = \frac{2\sqrt{5}}{5}$$
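A quick numerical check of the resulting identity (illustrative Python; the sample points are arbitrary):

```python
import math

for x in (-3.0, -0.5, 0.0, 1.0, 2.0, 10.0):
    assert abs(math.sin(math.atan(x)) - x/math.sqrt(1 + x*x)) < 1e-12

print(math.sin(math.atan(2)), 2/math.sqrt(5))  # both ~0.8944271909999159
```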
Why $\mathbb E\left[\sup\frac{|Y_t-Y_s|}{|t-s|^\alpha }\right]<\infty$ imply $(Y_t)_t$ continuous?
Assume that $Y = (Y_t)_{t\in[0, 1]}$ is realized over a complete probability space $(\Omega, \mathcal{F}, \mathbb{P})$ such that the function $C : \Omega \to [0, \infty]$ defined by $$ C(\omega) = \sup_{\substack{s, t \in [0, 1] \\ s \neq t}} \frac{|Y_s(\omega) - Y_t(\omega)|}{|s-t|^{\alpha}} $$ is measurable. Then: If $\omega \in \Omega$ is such that $C(\omega) < \infty$, then $|Y_t(\omega) - Y_s(\omega)| \leq C(\omega)|t - s|^{\alpha}$ for all $s, t \in [0, 1]$ and hence $t \mapsto Y_t(\omega)$ is $\alpha$-Hölder continuous. In particular, $$ E := \{ \omega : t \mapsto Y_t(\omega) \text{ is continuous} \}$$ satisfies $$ E^{c} \subseteq \{ \omega : C(\omega) = \infty \}. $$ If in addition $\mathbb{E}[C] < \infty$, then $\{ C = \infty\}$ is measurable and $\mathbb{P}[C = \infty] = 0$. Since the probability space is complete, this implies that $E^c$ is also a measurable null set. Therefore $E$ is also measurable and $\mathbb{P}[E] = 1$, i.e. the sample path is $\mathbb{P}$-a.s. continuous. It is worth noticing that $C$ and $E$ need not be measurable for an arbitrary choice of the base probability space. For instance, under the naive choice $(\Omega, \mathcal{F}) = (\mathbb{R}^{[0,1]}, \mathcal{B}(\mathbb{R}^{[0,1]}))$, one can prove that $$ C([0, 1], \mathbb{R}) = \{ \omega \in \mathbb{R}^{[0,1]} : \omega \text{ is continuous} \} \notin \mathcal{B}(\mathbb{R}^{[0,1]}). $$ (This is because, for any $A \in \mathcal{B}(\mathbb{R}^{[0,1]})$, there exists $I \subset [0, 1]$ such that $I$ is at most countable and $A$ is determined only by the information at times $t \in I$, i.e., $A \in \sigma( \pi_t : t \in I)$, where $\pi_t : \mathbb{R}^{[0,1]} \to \mathbb{R}$ is the projection $\pi_t(\omega) = \omega_t$.) This tells us that, in order to discuss regularity of sample paths, one needs to choose a suitable base probability space to begin with. If one is given a stochastic process $X$, the usual workaround is to show that $X$ satisfies some weaker notion of stochastic continuity, and then show that this notion allows one to construct a modification which has continuous sample paths. For instance: if $W = (W_t)_{t\geq 0}$ is Brownian motion, then it satisfies $\mathbb{E}[|W_s - W_t|^4] = 3|s - t|^2$, hence by the Kolmogorov–Chentsov theorem, $W$ has a modification whose sample paths are continuous. Then one may realize this modification on a nice probability space where sample-path continuity can be discussed as a measurable event.
convergence towards infinity of jumping times of Levy processes
Actually the assertion is a direct consequence of the fact that Lévy processes have càdlàg sample paths. Recall the following elementary statement. Lemma. If $f: [0,\infty) \to \mathbb{R}$ is a càdlàg function, then $$\sharp\{t \in [0,T]; |\Delta f(t)| \geq \epsilon\} < \infty$$ for any $\epsilon>0$ and $T>0$. Using this statement one can easily deduce that the sequence of stopping times converges almost surely to infinity. Since $0 \notin \bar{\Lambda}$ there exists $\epsilon>0$ such that $$\Lambda \subseteq B :=\{x \in \mathbb{R}; |x| \geq \epsilon\}.$$ Now suppose that $\lim_{n \to \infty} T_{\Lambda}^n(\omega)<\infty$ for some $\omega \in \Omega$. As $\Lambda \subseteq B$, this implies $T:=\lim_{n \to \infty}T_B^n(\omega)<\infty$; in particular, $$\sharp\{t \in [0,T]; |\Delta X_t(\omega)| \geq \epsilon\} = \infty.$$ It follows from the above lemma that $t \mapsto X_t(\omega)$ is not càdlàg. Since $(X_t)_{t \geq 0}$ has almost surely càdlàg sample paths we conclude $$\mathbb{P}(\lim_n T_{\Lambda}^n < \infty) \leq \mathbb{P}(t \mapsto X_t \, \, \text{is not càdlàg}) = 0.$$
Rigorous Understanding of Behavior around Equilibrium Points of this Dynamical System
I hope Strogatz didn't actually write "the nearest zero". It's actually the nearest stable equilibrium (coloured black in the picture), assuming it doesn't start at an unstable equilibrium. And "nearest" is rather misleading, because that's only correct when the equilibria are regularly spaced, as in this example. Closeness is not important; rather, what happens in general is that when you start in an interval between two consecutive equilibria, you go to an endpoint of that interval; which endpoint is determined by the sign of $\dot{x}$ in that interval. EDIT: The point is basically this. Consider a differential equation $\dot{x} = f(x)$, where $f$ is continuously differentiable, and initial condition $x(0) = x_0$ where $x_0$ is in an interval $(a,b)$ where $f(x) > 0$, with $f(b) = 0$. As long as $x(t)$ stays in that interval, the differential equation says it must be increasing. But $x(t)$ can never get to $b$, because the constant $x(t)=b$ is a solution and the Uniqueness Theorem says two solutions can never meet or cross, so $x(t)$ must be in the interval for all $t > 0$. The next fact is that an increasing function that is bounded above has a limit as $t \to \infty$. Let $\lim_{t \to \infty} x(t) = L$. Of course, $L \le b$. But it's impossible that $L < b$, because then $f(L) > 0$ and (by continuity) there would be some $\delta > 0$ and $\epsilon > 0$ such that $f(x) > \epsilon$ whenever $|x - L| < \delta$. But that means that when $x(t)$ gets within distance $\delta$ of $L$, it is increasing at a speed of more than $\epsilon$, so within time $\delta/\epsilon$ it will have gone past $L$, and since it must keep increasing this contradicts the assumption that the limit is $L$. So the only possibility is that $L = b$.
Are there positive integral solutions for this simple system?
There are none. Well, except some trivial stuff. The third condition essentially means that $n_1$ and $n_2$ are coprime (all right, it means more than that, but the rest we won't need). The second condition is not needed at all. The first condition says: $$m_1n_2 + \underbrace{m_2n_1 = n_1n_2}_{\text{divisible by }n_1}$$ which means $m_1n_2$ is also divisible by $n_1$, which means $m_1$ alone is, which in turn means that $m_1\geqslant n_1$ (unless $m_1=0$, which we'll discuss later). By similar logic, $m_2\geqslant n_2$, and so the LHS of the first equation hopelessly exceeds the RHS. Now what if $m_1=0$? From the first equation it follows that $m_2=n_2$, and the question boils down to: $n_1\mid n_2^2+1,\; n_2\mid 1$, which gives the only (up to permutation) solution: $n_1=2,\;m_1=0,\;n_2=m_2=1$.
Solve the following differential equation $x^2 y''-(2x+1)y'+(x+1)y=4e^{2x} x^2$!
So, as you said, we guess the solution is somehow related to $e^{2x}$. If you plug in $y=e^{2x}$, the left side is $$(4x^2-2(2x+1)+(x+1))e^{2x}=(4x^2-3x-1)e^{2x}$$ while the right side is $$4x^2 e^{2x},$$ so we almost get the solution, up to a polynomial error. Hence we would like to guess a more complicated solution: instead of $e^{2x}$ we can guess it is $P(x)e^{2x}$ for a function $P$. Now substitute $P(x)e^{2x}$ and divide by $e^{2x}$; you get a differential equation involving only $P$: $$x^2P''(x)+(4x^2-2x-1)P'(x)+(4x^2-3x-1)P(x)=4x^2$$ At this point I would (wrongly) guess that $P$ is a polynomial function, but as Dr. Sonnhard Graubner said, there doesn't seem to be a nice solution for this.
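A symbolic check of the substitution (a sympy sketch; `P` is an undetermined function, matching the derivation above):

```python
import sympy as sp

x = sp.symbols('x')
P = sp.Function('P')
y = P(x)*sp.exp(2*x)

expr = sp.expand((x**2*y.diff(x, 2) - (2*x + 1)*y.diff(x) + (x + 1)*y)
                 / sp.exp(2*x))
for d in (P(x).diff(x, 2), P(x).diff(x), P(x)):
    print(d, '->', expr.coeff(d))
# coefficients: x**2,  4*x**2 - 2*x - 1,  4*x**2 - 3*x - 1
```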
Probability of an "anti-run" in a sequence of draws of RV?
A contiguous segment of length $k$ and value $i$ located at $x=1,\dots,M-k+1$ has probability $$ S_{ikx}=p_i^k. $$ The positions flanking the segment must not continue it: this happens with probability $1-p_i$ if the segment touches a border and $(1-p_i)^2$ if it lies in between. The other values do not matter, supposing $k>M/2$, i.e. supposing the segments are large enough to occur at most once per trial. Hence, summing over faces and locations, a segment of length exactly $k$ occurs with probability $$ S_{k}=(M-k-1)\sum_{i=1}^n (1-p_i)^2 p_i^k+2\sum_{i=1}^n (1-p_i)p_i^k $$ And summing over lengths, including the pathological case where the segment covers everything: $$ S=\sum_{k=L}^{M-1}\left((M-k-1)\sum_{i=1}^n (1-p_i)^2 p_i^k+2\sum_{i=1}^n (1-p_i)p_i^k\right)\\ +\sum_{i=1}^n p_i^{M} $$ If the length $k$ is small enough to allow more than one contiguous segment per trial (up to $\lfloor M/k \rfloor$ of them), one must include the factor for the additional placements and exclude the case of two equal, contiguous segments, which would really be a single longer sequence. Of course, this is left as an exercise to the OP.
What are the roots of the polynomial $x^{3}+3x-2\pi$ $?$
Things are really not as bad as people make them out to be. The first thing to recall is that polynomial equations up to and including degree $4$ can always be solved in terms of radicals. For the fully general case the formulas can get messy, but there are two simplifications: One can always assume that the second highest coefficient is zero by a Tschirnhaus transformation (linear substitution). This is already the case here and significantly simplifies the formulas in the cubic case. One can try to work with field-theoretic methods directly. This should simplify things if the Galois group is sufficiently small compared to the fully symmetric group. This is not necessary here. For this polynomial, it seems easiest to straightforwardly apply Cardano's formula. (Cardano's formula) Let $K$ be a field with $\operatorname{char}(K) \ne 2,3$. Consider the cubic equation $x^3 + px + q =0$ where $p$, $q \in K$. The solutions are given by $x_1 = u + v$, $x_2 = \zeta^2 u + \zeta v$, $x_3 = \zeta u + \zeta^2 v$ where $\zeta$ is an arbitrary primitive cubic root of unity, and $$ \begin{align*} u &= \sqrt[3]{-q/2 + \sqrt{(p/3)^3 + (q/2)^2}} & v &=\sqrt[3]{-q/2 - \sqrt{(p/3)^3 + (q/2)^2}}, \end{align*} $$ where the cube roots must be chosen subject to the sign condition $uv=-p/3$. The square root can be taken arbitrarily, but it has to be the same one both times. Note that a priori we would have three possibilities for each cube root, giving a total of 9 possibilities. But the side condition on $uv$ reduces that to at most three distinct solutions. Now, plugging in your equation we see $p=3$, $q=-2\pi$. (Incidentally, the coefficients are chosen in such a way that the formulas become particularly simple. We conclude that this was an exercise in a book or course.) Now choose $u = \sqrt[3]{\pi + \sqrt{1 + \pi^2}}$ and $v=\sqrt[3]{\pi - \sqrt{1 + \pi^2}}$ to be the real cube roots. Then $u > 0$ and $v < 0$, so the sign condition is verified: $uv = \sqrt[3]{\pi^2 - (1+\pi^2)} = -1 = -p/3$. A third root of unity is given by $\zeta = \frac{-1 + i \sqrt{3}}{2}$. The roots are now determined as above; in particular, $$ x_1=\sqrt[3]{\pi + \sqrt{1 + \pi^2}} + \sqrt[3]{\pi - \sqrt{1 + \pi^2}} $$ (with the cube roots taken in $\mathbb R$) is the real root. Note: Since $\pi$ is in fact transcendental over $\mathbb Q$, this is the same as factoring $g=x^3 + 3x -2y$ in the splitting field of $g$ over the rational function field $\mathbb Q(y)$.
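A numerical sanity check of the real root (plain Python; the real cube root of the negative radicand is taken by hand):

```python
import math

u = (math.pi + math.sqrt(1 + math.pi**2))**(1/3)
v = -(math.sqrt(1 + math.pi**2) - math.pi)**(1/3)   # real cube root of the negative radicand
x1 = u + v
print(u*v)                          # -1.0 = -p/3, the sign condition
print(x1**3 + 3*x1 - 2*math.pi)     # ~0
```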
Taylor series of $\frac{x-2}{\sqrt{x^2-4x+8}}$ at $x = 2$. Where is the mistake?
There's no error in what you did, you only "wasted" two orders of magnitude in your remainder term. The terms of the textbook's answer look different from yours, but are in fact the same, since $$(2k)!! = 2^k\cdot k!$$ for one definition of the double factorial of even natural numbers (there's another definition that differs from the used one by a factor of $\sqrt{\frac{2}{\pi}}$), thus $$2^{3k+1}\cdot k! = 2^{2k+1}(2k)!!$$ and consequently we see that the terms are identical. You wasted one order of magnitude in the remainder term by writing $$\biggl(1 + \Bigl(\frac{t}{2}\Bigr)^2\biggr)^{-1/2} = \sum_{k = 0}^n \binom{-\frac{1}{2}}{k} \biggl(\frac{t}{2}\biggr)^{2k} + o(t^{2n})\tag{$\ast$}$$ and interpreting the polynomial as the Taylor polynomial of order $2n$. It is also the Taylor polynomial of order $2n+1$, and therefore one can write the remainder term as $o(t^{2n+1})$. The next order of magnitude you wasted in the remainder term was when you multiplied $(\ast)$ with $\frac{t}{2}$, since $\frac{t}{2}o(t^m) = o(t^{m+1})$, and you didn't increase the exponent. So with these orders taken into account, we find $$\frac{x-2}{\sqrt{x^2-4x+8}} = \frac{x-2}{2} + \sum_{k = 1}^m \frac{(-1)^k (2k-1)!!}{2^{2k+1}(2k)!!}(x-2)^{2k+1} + o\bigl((x-2)^{2m+2}\bigr)$$ for any fixed $m\in \mathbb{N}$. So if we want to have a remainder term in $o(t^{2n})$, it suffices to take $m = n-1$.
Solving a 1st order nonlinear ODE
Hint. Try to solve $$ \frac{dx}{(2x-4)(2x+4)} = dt $$
Find the imaginary part of $\ln(1-re^{ix})$
Let $z = |z|e^{i\alpha} = 1 - re^{ix} = (1 - r\cos x) -ir\sin x$. Note that $$\log z = \log\left(|z|e^{i\alpha}\right) = \log |z| + i\alpha$$ whose imaginary part is $\alpha$. So we need only evaluate $\alpha$. But $\alpha = \arg(1-re^{ix})$ and we have $1-re^{ix}= (1-r\cos x) - ir\sin x$, so $\tan\alpha = \frac{-r\sin x}{1-r\cos x}$, from which we can conclude $\alpha = -\arctan \frac{r\sin x}{1-r\cos x}$ under suitable restrictions (e.g. $0<r<1$, so that the real part $1-r\cos x$ is positive).
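A numerical check (illustrative Python; the sample values $r=0.7$, $x=1.3$ are arbitrary, and `atan2` sidesteps the branch restrictions):

```python
import cmath, math

r, x = 0.7, 1.3
z = 1 - r*cmath.exp(1j*x)
print(cmath.log(z).imag)                                  # imaginary part of log z
print(math.atan2(-r*math.sin(x), 1 - r*math.cos(x)))      # same, valid in general
print(-math.atan(r*math.sin(x)/(1 - r*math.cos(x))))      # same when 1 - r*cos(x) > 0
```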
Proving that $\mathbb{Q}$ is neither open or closed in $\mathbb{R}$
You are certainly right that all you need to show that $\mathbb{Q}$ is not open is to show that given a rational $q$, you can find irrationals arbitrarily close to $q$. A very simple way of doing this is to take your favorite irrational, say $\sqrt{2}$, and then consider $\sqrt{2}/n$ with $n$ an integer. These numbers are never rational, and get arbitrarily small; so if $q$ is a given rational, then $q+\frac{\sqrt{2}}{n}$ will be as close as you want to $q$ by taking $n$ large enough.
solving equations with log and polynomials.
This equation has a solution in terms of the Lambert $W$ function: $$K \log(x) + x^\beta - r=0$$ gives $$x=\left(\frac{K W\left(\frac{\beta e^{\frac{\beta r}{K}}}{K}\right)}{\beta }\right)^{\frac{1}{\beta }}$$ It might have been more obvious after defining $x^\beta=y$ and rewriting the equation as $$\frac{K }{\beta }\log (y)+y-r=0$$ $$y=\frac{K }{\beta }W\left(\frac{\beta e^{\frac{\beta r}{K}}}{K}\right)$$ It helps to remember that any equation which can be written $$A+Bx+C\log(D+Ex)=0$$ has a solution in terms of the Lambert function. Now, for large values of the argument, there are very good approximations of $W(x)$. Just look at the asymptotic expansions at http://en.wikipedia.org/wiki/Lambert_W_function
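A numerical check using SciPy's Lambert function (the values of $K$, $\beta$, $r$ below are arbitrary test inputs):

```python
import numpy as np
from scipy.special import lambertw

K, beta, r = 2.0, 1.5, 3.0                                    # arbitrary test values
x = (K/beta*lambertw(beta/K*np.exp(beta*r/K)).real)**(1/beta)
print(K*np.log(x) + x**beta - r)                              # ~0
```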
Show an infinite series of complex number converges
(To update my comment to an answer.) By the Cauchy–Schwarz inequality, $(\sum |z_n|/n)^2\leq (\sum |z_n|^2)(\sum 1/n^2)$. (Use it for sums from $1$ to $N$ and take $N \to \infty$, or use it directly for infinite sums, if you know it also in this form.)
Convert affine coordinates to projective coordinates?
Multiply by $y(x^2+1)$ so your map is $((x^4+3y)y, (x+1)(x^2+1), y(x^2+1))$. Now further you want $X,Y,Z$ to be homogeneous so set $x=\frac{X}{Z}$ and $y=\frac{Y}{Z}$ and now multiply by the lowest power of $Z$ to clear the denominators.
What is the difference of the following two mathematical sentences? (Walter Rudin's Principles of Mathematical Analysis)
There's no fundamental difference; to say "the set $E$" or "a set $E$" are both equally valid, grammatically, and Rudin isn't referring to a special or particular set either. It's just how some people word things sometimes. While not particularly mathematical, this did remind me of another Stack Exchange question I saw recently. You might find it worth looking at: here ya go
Find the determinant of the matrix A with this linear transformation.
Clearly, you need to assume $v_1,v_2$ are nonzero. Then $v_1$ and $v_2$ are independent, because otherwise $v_1=kv_2$ and then $-6k^2 v_2 =k Av_2 = Av_1= 5v_2 $ which gives $k^2<0$. Then write $A$ in the basis $(v_1,v_2)$: since $Av_1 = 5v_2$ and $Av_2 = -6v_1$, it has the form $$\pmatrix{0 & -6 \\ 5 & 0}$$ so the determinant is $30$.
Irreducible radical ideals are prime
I'll just add a solution that continues your approach: Try to show that $I_x\cap I_y=I$, I'm not sure what you are trying to do with the product $I_xI_y$. So suppose we have an element in the intersection $$ax+i = a'y+i'\in I_x\cap I_y.$$ Now we just need a little trick: Since we know that $xy\in I$, multiply by $y$ so you get $$ a'y^2+i'y= axy+iy. $$ From this you should be able to conclude that $a'y^2\in I$ and then use (2) to get $a'y\in I$ and hence $a'y+i'\in I$. This shows that $I_x\cap I_y=I$. Now apply (1) to conclude.
the birthday paradox for $p=0.5$
You can't conclude any exact result from this, since the expected number includes contributions from cases with more than one collision. However, when the probability of at least one collision is $\frac12$, the probability of more than one collision is still rather low, so you can get an estimate by taking the expected number as an approximation of the probability of at least one collision. Then $$ \frac{m(m-1)}{2n}\ge\frac12 $$ yields $$ m^2-m-n\ge0 $$ and thus $$ m\ge\frac12+\sqrt{n+\frac14}\;, $$ which for $n=365$ yields $m\ge20$, not a bad estimate.
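The arithmetic for $n=365$ (a trivial Python check):

```python
import math

n = 365
m = 0.5 + math.sqrt(n + 0.25)
print(m, math.ceil(m))   # 19.61..., so m >= 20
```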
Integral of $\int \frac {\sqrt {x^2 - 4}}{x} dx$
You are correct. First note that you have not carried a factor of $2$, since your integral is $2 \int \tan^2(\theta)\, d \theta$. Hence your solution should read $$2 \tan(\operatorname{arcsec}(x/2)) - 2 \operatorname{arcsec}(x/2) + c$$ You may want to rewrite your solution to match the solution in your text. For instance, $$\operatorname{arcsec}(x/2) = \arctan \left( \dfrac{\sqrt{x^2-4}}{2} \right)$$ So why is the above identity true? If we let $\operatorname{arcsec}(x/2) = \theta$, then we get that $\sec(\theta) = x/2$, i.e. $\sec^2(\theta) = \dfrac{x^2}{4}$. We have that $\tan^2(\theta) = \sec^2(\theta) - 1 = \dfrac{x^2}{4} - 1 = \dfrac{x^2-4}{4}$, i.e. $\tan(\theta) = \dfrac{\sqrt{x^2 - 4}}{2}$. Hence, $$\theta = \arctan \left( \dfrac{\sqrt{x^2-4}}{2}\right)$$ Hence, we have the identity $$\operatorname{arcsec}(x/2) = \arctan \left( \dfrac{\sqrt{x^2-4}}{2} \right)$$ If you use this, then your solution will read $$\sqrt{x^2 - 4} - 2 \operatorname{arcsec}(x/2) + c = \sqrt{x^2 - 4} - 2 \arctan \left( \dfrac{\sqrt{x^2-4}}{2} \right) + c$$
Tangent portfolio weights without short sales?
To understand how to proceed, you have to dispense with the formula and look at the derivation of the tangent portfolio from first principles. The multiobjective model is $$\begin{array}{ll} \text{maximize} & (\bar{R}^T x + R_f x_f, - \tfrac{1}{2} x^T V x) \\ \text{subject to} & \vec{1}^T x + x_f = 1 \end{array}$$ The zero-risk solution is of course $x_f=1$, and the maximum-return solution is $x_i=1$ where $i=\textrm{argmax}_i \bar{R}_i$. To examine the rest of the tradeoff curve we scalarize the model with $\gamma^{-1}\in(0,+\infty)$ as the weight of the risk term. (The reason we use the reciprocal will become clear later.) $$\begin{array}{ll} \text{maximize} & \bar{R}^T x + R_f x_f - \tfrac{1}{2} \gamma^{-1} x^T V x \\ \text{subject to} & \vec{1}^T x + x_f = 1 \end{array}$$ The Lagrangian is $$L(x,x_f,\lambda) = -\bar{R}^T x - R_f x_f + \tfrac{1}{2} \gamma^{-1} x^T V x - \lambda ( \vec{1}^T x + x_f -1 )$$ The optimality conditions are $$-\bar{R} + \gamma^{-1} V x - \lambda \vec{1} = 0 \quad - R_f - \lambda = 0 \quad \vec{1}^T x + x_f = 1$$ Eliminating $\lambda$ and solving for $x$ yields $$x = \gamma V^{-1} ( \bar{R} - R_f \vec{1} ) \quad x_f = 1 - \vec{1}^T x.$$ The tangent portfolio is found by finding the value of $\gamma$ for which $x_f=0$: $$\gamma=\left(\vec{1}^TV^{-1}(\bar{R}-R_f\vec{1})\right)^{-1}.$$ This coincides with the solution you have offered in your post. Now that the general principle is illuminated, we can apply it to the model with a short-sale restriction. The zero-risk and maximum-return solutions are identical, so we immediately return to the scalarized model with a nonnegativity constraint added: $$\begin{array}{ll} \text{maximize} & \bar{R}^T x + R_f x_f - \tfrac{1}{2} \gamma^{-1} x^T V x \\ \text{subject to} & \vec{1}^T x + x_f = 1 \\ & x \succeq 0 \end{array}$$ The Lagrangian is $$L(x,x_f,\lambda,z) = -\bar{R}^T x - R_f x_f + \tfrac{1}{2} \gamma^{-1} x^T V x - \lambda ( \vec{1}^T x + x_f -1 ) - z^T x$$ where $z \geq 0$. The optimality conditions are $$-\bar{R} + \gamma^{-1} V x - \lambda \vec{1} - z = 0 \quad - R_f - \lambda = 0 \quad \vec{1}^T x + x_f = 1 \quad z \geq 0$$ Eliminating $\lambda$ and $z$ yields $$V x \geq \gamma(\bar{R} - R_f\vec{1}) \quad \quad \vec{1}^T x + x_f = 1$$ Unfortunately, even for fixed $\gamma>0$, it is not likely that there is an analytic solution for $x$ and $x_f$ (unless, say, $V$ is diagonal). To determine the tangent portfolio, we need the value of $\gamma$ for which $x_f=0$. It also cannot be determined analytically, but it can be determined computationally: $$\begin{array}{ll} \text{maximize} & \gamma \\ \text{subject to} & V x \geq \gamma ( \bar{R} - R_f \vec{1} ) \\ & x \geq 0, ~ \gamma \geq 0 \\ & \vec{1}^T x = 1 \end{array}$$ The efficient frontier with a risk-free asset and a short-sale restriction is no longer a straight line; I believe it is piecewise linear. So calling this a "tangent" portfolio may be a bit misleading. But it is still the portfolio found at the point where the tradeoff curves with and without the risk-free asset touch. It is not necessarily the case that this new portfolio is strictly positive in all assets. I do not believe you can offer an a priori condition that ensures they will be.
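For completeness, here is a minimal numerical sketch of the final linear program using `scipy.optimize.linprog`; the returns, covariance matrix, and risk-free rate below are made-up illustrative data, not from the question:

```python
import numpy as np
from scipy.optimize import linprog

# hypothetical data (purely illustrative)
Rbar = np.array([0.08, 0.12, 0.10])      # expected returns
V = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.06]])       # covariance
Rf = 0.03                                # risk-free rate
n = len(Rbar)
d = Rbar - Rf

# variables (x_1, ..., x_n, gamma); maximize gamma  <=>  minimize -gamma
c = np.r_[np.zeros(n), -1.0]
A_ub = np.c_[-V, d]                      # V x >= gamma d  <=>  -V x + gamma d <= 0
b_ub = np.zeros(n)
A_eq = np.r_[np.ones(n), 0.0].reshape(1, -1)
b_eq = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)]*(n + 1))
assert res.success
print(res.x[:n])                         # tangent weights with x >= 0

x_uc = np.linalg.solve(V, d)             # unconstrained formula, for comparison
print(x_uc/x_uc.sum())                   # coincides here: the bound is inactive
```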
How can I find a non-negative interpolation function?
Positivity-preserving interpolation is hard, as searching for the italicized text will tell you. There are several things you could do, and the choice will depend on the practical context of your problem (not present in the question). A cheap and dirty solution is to interpolate $(x_i,\sqrt{y_i})$ and square the interpolating function. If you interpolate with a polynomial, you'll have a nonnegative polynomial of twice the degree. The obvious drawback is that the squared function will have somewhat unnatural behavior in the regions where the interpolant was negative before squaring. Not to mention that interpolation by a high-degree polynomial is rarely desirable at all. If you use spline interpolation (e.g., a cubic spline), the behavior of the interpolant is much better; it takes a bit of effort to produce an example where the interpolant becomes negative. Starting with the values $1, 90, 1, 1, 90, 1$ at equally spaced points, one can interpolate their square roots with a natural cubic spline and square the spline; the result is nonnegative, but the bump it develops around the midpoint is unpleasant. It would be better to use the logarithm instead of the square root, but of course then we don't have a piecewise polynomial as a result. Still, in many cases the reason for data being positive is that it's naturally $\exp$ of something, so the above may be the best solution. A completely different approach is to turn to approximation instead of interpolation. Using positive compactly supported functions such as $B$-splines, you can approximate positive data by a positive function. A related question: Polynomial fitting where polynomial must be monotonically increasing. Related SE site: Computational Science.
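A sketch of the square-root trick with SciPy (the data are the values quoted above; `bc_type='natural'` matches the natural spline used there):

```python
import numpy as np
from scipy.interpolate import CubicSpline

t = np.linspace(0, 1, 6)
y = np.array([1, 90, 1, 1, 90, 1], dtype=float)
tt = np.linspace(0, 1, 1001)

direct = CubicSpline(t, y, bc_type='natural')(tt)            # plain natural spline
squared = CubicSpline(t, np.sqrt(y), bc_type='natural')(tt)**2

print(direct.min())    # dips below zero between the data points
print(squared.min())   # nonnegative by construction
```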
Intuitively understanding the Poincare metric
The simplest way to move between the disc and half-plane models is using complex analysis: the Möbius transformation $z \mapsto \frac{z+i}{iz+1}$ maps the unit disk to the upper half-plane conformally, inducing an isometry between the Poincaré metrics of the half-plane and the unit disk. Another, more explicit way to think about it is as follows: Consider the stereographic projection $\varphi: (x,y,z) \mapsto (\frac{x}{1-z}, \frac{y}{1-z})$ from the unit sphere $S^2$ to the plane. The preimage of the upper half-plane $\{(x,y) : y > 0\}$ under $\varphi$ is the "northern" half-sphere with $y > 0$. The preimage of the unit disc $\{(x,y) : x^2+y^2<1\}$ is the "bottom" half-sphere with $z < 0$. Now consider the map $r: (x,y,z) \mapsto (x,-z,y)$ that rotates the unit sphere by $90^\circ$. It maps the bottom half-sphere to the northern half-sphere. The composition $\varphi \circ r \circ \varphi^{-1}$ maps the Poincaré disk to the Poincaré half-plane, inducing an isometry of the Poincaré metrics. The relation between these two explanations is the Riemann sphere: the stereographic projection maps complex numbers to their corresponding points on the Riemann sphere, and the particular Möbius transformation $z \mapsto \frac{z+i}{iz+1}$ translates to a rotation of the Riemann sphere by $90^\circ$.
Correct way of finding $\delta $ for $\lim_{x \to a} \sqrt{x} = \sqrt{a}$
That certainly works. There isn't really a "right" way to do these, but some approaches are shorter than others. In some instances it may be important to find the largest possible $\delta$, but to verify the definition of limit you only need to demonstrate one. For instance, if $a > 0$ you can use $x-a = (\sqrt x - \sqrt a)(\sqrt x + \sqrt a)$ to get $$|\sqrt x - \sqrt a| = \frac{|x-a|}{|\sqrt x + \sqrt a|} \le \frac{|x-a|}{\sqrt a}.$$ Thus if $\delta = \epsilon \sqrt a$ then $|x-a| < \delta$ implies $|\sqrt x - \sqrt a| < \epsilon$.
Finding characteristic function and differentiate to get expectation
$$\phi_X(t)=E(e^{itX})$$ $$\phi_X'(t)=E(iXe^{itX})$$ $$\phi_X'(0)=E(iX)=iE(X)$$ In other words, you found $\phi_X(t)$ correctly, but we must differentiate with respect to $t$ to find the expectation. In your case we have: $$\phi_X'(t)=\frac{d}{dt}\Bigg(\frac{1}{(1-it)^2}\Bigg)=\frac{2i}{(1-it)^3}$$ $$\phi_X'(0) = 2 i=iE(X)$$ $$\therefore E(X)=2$$
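A one-line symbolic confirmation (sympy):

```python
import sympy as sp

t = sp.symbols('t', real=True)
phi = 1/(1 - sp.I*t)**2
print(sp.simplify(sp.diff(phi, t).subs(t, 0)/sp.I))   # 2
```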
$z =\frac{\sqrt{2}(1-i)}{1-i\sqrt{3}}$. Show that the set $X = \{z^n : n \in \mathbb{Z}\}$ is finite and find the number of its elements.
Note that $$z =\frac{\sqrt{2}(1-i)}{1-i\sqrt{3}} =\frac{\frac{1}{\sqrt{2}}-\frac{1}{\sqrt{2}}i}{\frac{1}{2}-\frac{\sqrt{3}}{2}i}=\frac{e^{-\frac{\pi}{4}i}}{e^{-\frac{\pi}{3}i}}=e^{\frac{\pi}{12}i}$$ Thus $$z^n=e^{\frac{n\pi }{12}i}$$ The claim follows from the fact that $z^{24}=1$: the powers of $z$ repeat with period $24$, so $X=\{e^{n\pi i/12} : n=0,1,\dots,23\}$ has exactly $24$ elements.
Set of n dice are thrown...
This is a double counting question. Note that $$\sum_{i=1}^N X_i = \sum_{i=1}^n N_i,$$ where $N_i$ is the number of times the $i$th die is rolled. Try to finish the solution from here. The answer is $6n$.
What is the Cantor Set? How do I write it mathematically?
The Cantor set $\mathcal C$ can be defined by the following recursive sequence of sets: $$\mathcal C_0 = [0, 1]$$ $$\mathcal C_{n+1}= \frac{\mathcal C_n}3\cup\left(\frac23+\frac{\mathcal C_n}3\right)$$ $$\mathcal C := \bigcap^\infty_{n=0}\mathcal C_n$$ This definition should reflect the intuitive notion of what the Cantor set is and how it can be constructed. To explain this further, I will explain the math in layman's terms. Start with the closed unit interval, $[0, 1]$. The next iteration is the previous one divided by three, together with the previous one divided by three with its origin shifted to $\frac23$. This yields $[0, \frac13]\cup[\frac23, 1]$, then $[0, \frac19]\cup[\frac29, \frac13]\cup[\frac23, \frac79]\cup[\frac89, 1]$, etc. The set itself is the set of points that are present in every iteration of this process. In case you were wondering how the definition you gave ties into mine (which I find much more intuitive), here is an explanation: observe that as $\mathcal C_n$ is iterated, the endpoints of each interval ($0, \frac13, \frac23, \frac19,$ etc.) remain in the set. All of these numbers can be expressed by the sum $$\sum^\infty_{n=1}\frac{a_n}{3^n}$$ where the sequence $a$ is all $0$s and $2$s. For example, $\frac13$ is the sum when $a_n = \langle 0, 2, 2, 2, 2, \cdots\rangle$, or $\frac29+\frac2{27}+\frac2{81}+\cdots$ I'll leave it to you to find the reason for this connection, but a hint is that it has to do with $\frac23$ being added.
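A small Python sketch of the iteration (exact arithmetic via `Fraction`; three steps shown, purely illustrative):

```python
from fractions import Fraction

def cantor_step(intervals):
    """Replace each interval [a, b] by its left and right thirds."""
    out = []
    for a, b in intervals:
        w = (b - a)/3
        out += [(a, a + w), (b - w, b)]
    return out

C = [(Fraction(0), Fraction(1))]
for _ in range(3):
    C = cantor_step(C)
print(C)   # the 8 intervals of C_3, each of length 1/27
```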
Prove convergence of a sequence.
If $\lim na_n=0$ then clearly $a_n\sim \dfrac{a_n}{1+na_n}$, and the two series $\sum a_n$ and $\sum \dfrac{a_n}{1+na_n}$ converge or diverge together. Thus $\sum \dfrac{a_n}{1+na_n}$ is divergent. If $\lim na_n=\ell$ with $\ell>0$ or $\ell=+\infty$, then $\frac{a_n}{1+na_n}\sim\frac{k}{n}$ where $k=1$ if $\ell=+\infty$ and $k=\ell/(1+\ell)$ otherwise. But $\sum\frac{1}{n}$ is divergent, so $\sum \dfrac{a_n}{1+na_n}$ will also be divergent.
Poisson distribution, the meaning of the parameter lambda
A Poisson RV is commonly used for modelling the number of occurrences of an event within a particular time interval. And, since $E[X]=\lambda$, its unique parameter is referred to as the mean number of event occurrences within our particular time interval.
Construction of $p^n$ field
There are finite fields of order $p^n$ for all prime powers $p^n$. They are often called Galois fields. But their additive groups are not cyclic (when $n\ne 1$). So for order $4$ one can have an addition table $$\begin{array}{cccc} 0&1&a&b\\ 1&0&b&a\\ a&b&0&1\\ b&a&1&0 \end{array}$$ but not an addition table $$\begin{array}{cccc} 0&1&2&3\\ 1&2&3&0\\ 2&3&0&1\\ 3&0&1&2 \end{array}.$$
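One way to see the first table concretely: identify $0,1,a,b$ with the $2$-bit labels $0,1,2,3$ over $\mathbb F_2$, so that addition becomes XOR of labels (a tiny Python sketch):

```python
# GF(4) addition: identify 0, 1, a, b with the 2-bit labels 0, 1, 2, 3;
# addition is coordinatewise mod 2, i.e. XOR of the labels.
labels = ['0', '1', 'a', 'b']
for i in range(4):
    print(' '.join(labels[i ^ j] for j in range(4)))
# reproduces the first table above
```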
Is it incorrect to factor a square root of a positive number into two negative roots?
$\sqrt x$ is a single-valued function which is always nonnegative. Therefore, the answer to $\sqrt {16}$ is $\color{red}{\text{only}\; 4}$. What you're doing wrong is that $$ \sqrt {a \times b} =\sqrt a \times \sqrt b \; \text{ only if } \; a,b > 0.$$ Hence, $$\sqrt{(-4) \times (-4)}\neq \sqrt{-4}\times \sqrt{-4}$$
Compound interest formula property
As given, the equation is not correct. The limit is zero, which is also the limit of the right side as $r \to 0$. What you have is the leading order (smallest power of $r$) term in the expansion. The hint you are given is correct, but I don't know what use it is. You are expected to be expanding $\left(1+\frac rn\right)^n$ using the binomial theorem. The first two terms will cancel with the stuff before the minus sign. The limit you are quoting will be the next term.
Why is 1 + 1 = 0 when we make the addition table for F = {0, 1} (F = field)
This is known as the finite field $\mathbb{F}_2$. Here, we are thinking of doing the operations "mod 2". Let's look at the field axioms your professor gave: using the addition table he wrote out, we can show that the addition he defines is commutative, associative, and multiplication distributes over it. As well, with this addition, $1$ is its own additive inverse. I would check out the modular arithmetic link above to learn more about this, and if you would like to know more about finite fields in general, here is a link to the Wikipedia page.
Divisibility rule for 43
Your proof is perfectly fine. Maybe a faster way to prove it is to multiply everything by $13$ in the first step. So you have: $$10x + a_0 \equiv 0 \pmod{43} \iff 130x + 13a_0 \equiv 0 \pmod{43} \iff x - 30a_0 \equiv 0 \pmod{43}$$ If you wonder how we came up with $13$, note that $10 \cdot 13 \equiv 1 \pmod {43}$, so $13$ is the multiplicative inverse of $10$ modulo $43$ (and $13 \equiv -30 \pmod{43}$).
Find two triangles of longest side length 25?
All Pythagorean triples, i.e. triples of positive integers such that $a^2+b^2=c^2$, can be expressed as $a=k(m^2-n^2)$, $b=2kmn$, $c=k(m^2+n^2)$, where $m$ and $n$ are coprime, $m>n$, and $m$ and $n$ have opposite parity. (Up to interchanging $a$ and $b$.) So in your case you want $k(m^2+n^2)=25$. This gives you the possibilities: $k=1$ and $m^2+n^2=25$; $k=5$ and $m^2+n^2=5$; $k=25$ and $m^2+n^2=1$. For such small numbers, it is easy to find all expressions as a sum of two squares by hand; you will get $25=0^2+5^2=3^2+4^2$, $5=1^2+2^2$ and $1=1^2+0^2$. Since you are interested only in non-zero values, you get only two possibilities for $(k,m,n)$, namely $(1,4,3)$ and $(5,2,1)$. From the first one you get $a=7$, $b=24$, $c=25$. The second one gives you $a=15$, $b=20$, $c=25$.
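A brute-force confirmation (plain Python):

```python
print([(a, b, 25) for a in range(1, 25) for b in range(a, 25)
       if a*a + b*b == 625])   # [(7, 24, 25), (15, 20, 25)]
```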
$\text{Hom}_{k}(U \oplus V,W) \cong \text{Hom}_{k}(U,W) \oplus \text{Hom}_{k}(V,W)$
Hint: If $\eta\in \mathrm{Hom}(U\oplus V, W)$, then $\eta(u,v) = \eta(u,0) + \eta(0,v)$. What can we now define using $\eta(u,0)$ and $\eta(0,v)$?
How to best approach a problem of this kind (problem solving and simple linear equations)
I would probably use a table and/or Venn diagram to track the various different sub-populations cleanly. You have three attributes for your universe of 190 people $U$: passed the final test $|P|=75, |\bar P|=115$ passed all three earlier tests $|T|=161, |\bar T|=29$ got full bonus points $|B|=107, |\bar B|=83$ You want $ |P \cap T \cap B|$ You also know $|T \cap P|=74, |B \cap P|=63, |T \cap B|=99,$ and $ |\bar P \cap \bar T \cap \bar B|=21$ You therefore know $|\bar T \cap P|=75-74=1$ and $|\bar T \cap \bar P|=29-1=28$ This gives you $|B \cap \bar T \cap \bar P| = |\bar T \cap \bar P| - |\bar B \cap \bar T \cap \bar P| = 28-21=7$ Then $|B \cap \bar P|= 107-63=44$ gives you $|B \cap T \cap \bar P| = 44-7=37$ Finally then you have $ |P \cap T \cap B| = |T \cap B|- |B \cap T \cap \bar P| = 99-37 = 62$
What IS conditional convergence?
"Conditional" is a bit of a strange adjective to use. After all, a series either converges or it doesn't: what is conditional about that? The reason for the word "conditional" is that, given any series which converges but does not converge absolutely, it is possible to rearrange the series (i.e., reorder the terms) in such a way that the series no longer converges. It is also possible, given any desired value $V$, to find a rearrangement of the series which converges to $V$. This is known as the Riemann rearrangement theorem. Note that this phenomenon does not occur with absolutely convergent series. Given any absolutely convergent series, we can rearrange the terms any way we like, and it will still converge to the same value.
Use of congruence arithmetic laws to solve linear congruences
How do you get the scalar 4 in order to obtain $8x+8y$. Is it because $8≡1\mod7$ and therefore we need an $8$, $8/2 = 4$, and that's it? Or is there a totally different logic behind this step? The Congruence Product Rule implies that congruences are preserved under integer scalings, i.e. $$ b\equiv c\!\!\pmod{\!n}\, \Rightarrow\, ab\equiv ac\!\!\pmod{\!n}$$ Thus the idea is to scale $\, 2x+2y\equiv 0\,$ by some integer $\,a\,$ to simplify it by making the coefficients smaller. Here we can make them $1$ because $2$ is invertible: $\,2a\equiv 1\equiv 8\iff a\equiv 4\pmod{\!7}.\,$ Therefore scaling by $\,4\equiv 2^{-1}$ simplifies both coefficients to $\,4\cdot 2\equiv 8\equiv 1,\,$ i.e. $$ 2x+2y\equiv 0\!\!\pmod{\!7}\iff x+y\equiv 0\!\!\pmod{\!7}$$ Beware that generally scaling yields only the direction $(\Rightarrow)$, but scaling by an invertible $\,a\,$ means the direction $(\Leftarrow)$ holds too (by scaling the RHS by $\,a^{-1}\equiv 4^{-1}\equiv 2,\,$ which is obvious in this case). When the scale factor $\,a\,$ is not invertible then we need to check that the solutions of the scaled equations are not extraneous, i.e. they actually satisfy the original equation. Here is an extraneous example. I assume you get rid of the $8$s simply by dividing the whole congruence by $8$? No, we used $\,8\equiv 1\,$ so $\,8x\equiv 1x\equiv x\,$ by the Congruence Product Rule. In the final solution it is stated that $y=-x+7k$; to obtain the $-x$, can you simply move it to the other side of the equation? So if we had anything else, could we just move it, like in normal equations? The Congruence Sum Rule implies that congruences are preserved under integer shifts, i.e. $$ b\equiv c\!\!\pmod{\!n}\, \Rightarrow\, a+b\equiv a+c\!\!\pmod{\!n}$$ Thus shifting $\,y+ x\equiv 0\,$ by adding $\,a\equiv -x\,$ to both sides yields $\, y\equiv -x\pmod{\!7}$. Remark $\ $ In more advanced contexts we don't usually explicitly mention invocation of these basic congruence rules (laws). But it is essential to know the scope of such laws to avoid mistakes (e.g. such sum and product rules don't apply analogously to exponentiation). By induction, the congruence rules imply that we can replace arguments of sums and products by any congruent argument and we will obtain a congruent result (this is the congruence generalization of equalities being preserved upon replacing function arguments with equal arguments). In particular this holds true for all polynomial expressions, because they are composed of sums and products (see the Polynomial Congruence Rule). We can think of a congruence as a generalized equality. Generally congruences are equivalence relations that are also compatible with the ambient arithmetical operations (here addition and multiplication in a ring), which is the gist of the Sum and Product Rules, i.e. addition and multiplication don't depend on which congruence class rep is chosen (which implies that they induce well-defined operations on the congruence classes - which is reified algebraically in the study of quotient rings - above, the ring $\,\Bbb Z_7 \cong \Bbb Z\bmod 7 = $ integers modulo $7)$.
Does the Border (Boundary) Points of a convex shape in the positive quadrant make a convex function?
It seems that the answer is trivially positive. Let $x_1,x_2\in [0,c]$, $f(x_1)=y_1$, $f(x_2)=y_2$, $\lambda_1,\lambda_2\ge 0$, and $\lambda_1+\lambda_2=1$. Then $\lambda_1 x_1+\lambda_2 x_2\in [0,c]$. Since $\mathbb S$ is convex, $\lambda_1(x_1,y_1)+ \lambda_2(x_2,y_2)\in\mathbb S$. Then $f(\lambda_1 x_1+\lambda_2 x_2)\ge \lambda_1 y_1+\lambda_2 y_2$. But I remark that the boundary of $\mathbb S$ in the positive quadrant need not be the graph of a function; e.g. think about a circular disk with center $(0,1)$ and radius $2$.
Two-point equidistant projection of the sphere
You can reconstruct it by yourself. Choose two reference points and obtain their Cartesian coordinates. Now take any other point and convert it to Cartesian coordinates. The dot products of the vectors give the cosines of the terrestrial distances (on a unit sphere). Now you have the three sides of a flat triangle on the projected map. The process can be inverted, but for now I don't see a better way than by solving the spherical triangle. The terrestrial distances are given by $$d_{ij}=\arccos(\cos\theta_i\sin\phi_i\cos\theta_j\sin\phi_j+\sin\theta_i\sin\phi_i\sin\theta_j\sin\phi_j+\cos\phi_i\cos\phi_j).$$ To solve the flat triangle in a Cartesian system, you can compute the angle formed with the base by the cosine law and use polar coordinates (angle, and length of the side). Denoting by $a,b$ the reference points, the angle at $a$ is $$\psi=\hat{bap}=\arccos\frac{d_{ap}^2+d_{ab}^2-d_{bp}^2}{2d_{ap}d_{ab}}$$ and $$\rho=d_{ap}.$$ I cannot resist giving the complete expression in terms of the spherical coordinates of the point to be projected. $$\begin{cases} \psi_p=\arccos\dfrac{\arccos^2(\cos\theta_p\sin\phi_px_a+\sin\theta_p\sin\phi_py_a+\cos\phi_pz_a)+d_{ab}^2-\arccos^2(\cos\theta_p\sin\phi_px_b+\sin\theta_p\sin\phi_py_b+\cos\phi_pz_b)}{2\,d_{ab}\arccos(\cos\theta_p\sin\phi_px_a+\sin\theta_p\sin\phi_py_a+\cos\phi_pz_a)},\\ \rho_p=\arccos(\cos\theta_p\sin\phi_px_a+\sin\theta_p\sin\phi_py_a+\cos\phi_pz_a) .\end{cases}$$ In Cartesian coordinates (with $a$ at the origin and $b$ at $(d_{ab},0)$), $$\begin{cases} x_p=\dfrac{\arccos^2(\cos\theta_p\sin\phi_px_a+\sin\theta_p\sin\phi_py_a+\cos\phi_pz_a)+d_{ab}^2-\arccos^2(\cos\theta_p\sin\phi_px_b+\sin\theta_p\sin\phi_py_b+\cos\phi_pz_b)}{2\,d_{ab}},\\ y_p=\sqrt{\arccos^2(\cos\theta_p\sin\phi_px_a+\sin\theta_p\sin\phi_py_a+\cos\phi_pz_a)-x_p^2}.\end{cases}$$
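A numerical sketch of the forward projection (Python; the reference points and test point are arbitrary, and the final prints confirm the defining property that planar distances to the projected reference points equal the spherical ones):

```python
import numpy as np

def cart(theta, phi):
    """(longitude theta, colatitude phi) -> Cartesian point on the unit sphere."""
    return np.array([np.cos(theta)*np.sin(phi),
                     np.sin(theta)*np.sin(phi),
                     np.cos(phi)])

def dist(u, v):
    """Great-circle distance on the unit sphere."""
    return np.arccos(np.clip(u @ v, -1.0, 1.0))

def project(p, a, b):
    """Plane coordinates of p, with a at the origin and b at (d_ab, 0)."""
    d_ap, d_bp, d_ab = dist(a, p), dist(b, p), dist(a, b)
    x = (d_ap**2 + d_ab**2 - d_bp**2)/(2*d_ab)
    y = np.sqrt(max(d_ap**2 - x**2, 0.0))   # choose the upper half-plane
    return x, y

a, b = cart(0.0, np.pi/2), cart(1.0, np.pi/2)   # two reference points on the equator
p = cart(0.4, 1.0)
x, y = project(p, a, b)
print(np.hypot(x, y), dist(a, p))                # equal: distance to a is preserved
print(np.hypot(x - dist(a, b), y), dist(b, p))   # equal: distance to b is preserved
```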
Is it possible to represent the natural number "1" as the sum of p-series in this way?
So what you are saying is $1 =\sum_{m=2}^{\infty} \sum_{k=2}^{\infty} \dfrac1{k^m}$. Let's check. $\begin{array}\\ \sum_{m=2}^{\infty} \sum_{k=2}^{\infty} \dfrac1{k^m} &=\sum_{k=2}^{\infty} \sum_{m=2}^{\infty} \dfrac1{k^m} \qquad\text{(reverse order of summation)}\\ &=\sum_{k=2}^{\infty}\dfrac{1/k^2}{1-1/k} \qquad\text{(just a geometric series)}\\ &=\sum_{k=2}^{\infty}\dfrac{1}{k^2-k} \qquad\text{(multiply num and denom by }k^2)\\ &=\sum_{k=2}^{\infty}\dfrac{1}{k(k-1)} \qquad\text{(rewrite)}\\ &=\sum_{k=2}^{\infty}\left(\dfrac1{k-1}-\dfrac1{k}\right) \qquad\text{(now we can telescope)}\\ &=1\\ \end{array} $ Yup.
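A truncated numerical check (plain Python; the cutoffs are arbitrary):

```python
total = sum(1/k**m for k in range(2, 2001) for m in range(2, 60))
print(total)   # ~0.9995; the missing 1/2000 is the telescoping tail
```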
Finding limits of rational logarithmic functions
Hint: Observe that $$\log(1+x(x-2))=\log(x-1)^2=2\log|x-1|$$
Existence of $f_{xy}$ and $f_{yx}$
Straight from Baby Rudin (Theorem 9.41, pages 235–6), the theorem is thus: Theorem: Suppose $f:E\to \mathbb{R}$ for $E \subseteq \mathbb{R}^2$ open, and suppose $\dfrac{\partial f}{\partial x}$, $\dfrac{\partial f}{\partial y}$, and $\dfrac{\partial^2 f}{\partial y\partial x}$ exist on $E$, with $\dfrac{\partial^2 f}{\partial y\partial x}$ continuous at some $(a,b)\in E$. Then $\dfrac{\partial^2 f}{\partial x\partial y}$ exists at $(a,b)$ and the two mixed partials are equal there.
prove that $a_n$ is convergent if $\limsup a_n \cdot \limsup \frac1{a_n} = 1$
Can you show that $$\limsup\frac1{a_n}=\frac1{\liminf a_n}$$ (for any sequence $(a_n)$ such that $a_n>0$)? Once you prove this, the equality you are given simply says that $\limsup a_n=\liminf a_n$. You can find this also in the book Wieslawa J. Kaczor, Maria T. Nowak: Problems in Mathematical Analysis, Volume 1: Real Numbers, Sequences and Series, as Problems 2.4.22 and 2.4.23. The problems are given on p. 45 and solved on pp. 203–204.
Stabilization of embedding?
He explicitly says in 4.48(ii) that the stabilization of $f: M \to \Bbb R^k$ is the map $\tilde f: M \to \Bbb R^k \times \Bbb R^k$ given by $\tilde f(x) = (0,f(x))$. That is, you compose with the inclusion $i: \Bbb R^k \to \Bbb R^k \times \Bbb R^k$ given by $i(x) = (0,x)$. (Normally I would say that composing with the obvious linear embedding $\Bbb R^k \to \Bbb R^{k+1}$ is a stabilization, and would phrase this theorem as "after stabilizing sufficiently many times, every embedding is isotopic"; if $M$ is an $n$-manifold, it suffices to stabilize $n+2$ times by the relative Whitney embedding theorem, as applied to $M \times I$: Every embedding of $M$ into $\Bbb R^{2n+3}$ is isotopic. But Freed prefers to bake in that last statement into the definition of stabilization, so that you only need to stabilize once.)
Open/closed intervals and infinity
The definition is just the first sentence: "A closed interval is an interval that includes all of its limit points." The section you quote is helping to explain the definition. The article does not state that $(-\infty,\infty)$ is open because it is about closed intervals, but you are correct that it is open.
Why do you use $\mbox{min}$ (rather than $\mbox{max}$) in a quadratic limit epsilon-delta proofs?
When you limit $\delta$ in more than one way by introducing different upper bounds, you need all upper bounds to be satisfied to ensure that all the steps based on these upper bounds remain valid. By taking the smallest upper bound, the other ones are automatically satisfied (but this is not the case when taking the largest upper bound!). So if you need: $$\delta \le a_1 \;\mbox{and}\; \delta \le a_2 \;\mbox{and}\; \ldots \;\mbox{and}\; \delta \le a_n$$ then all these inequalities are satisfied by taking: $$\delta \le \mbox{min}\left\{ a_1,a_2,\ldots,a_n \right\}$$ Simple example: if somewhere along the proof you require $\delta \le 2$ and a bit later you also require $\delta \le 1$, then you are sure both inequalities are satisfied by taking $\delta \le 1$, i.e. $\delta \le \mbox{min}\left\{ 1,2 \right\}$. Note that it would not be sufficient to take $\delta \le 2 = \mbox{max}\left\{ 1,2 \right\}$ because then $\delta = 1.5$ would be possible, but that doesn't satisfy $\delta \le 1$. Of course in this example, it is clear that $1 < 2$ so that we could simply take $\delta \le 1$. However, you don't always know which upper bound is the smallest one, as it may contain variables such as $\varepsilon$. By using min, we are sure to take the smallest upper bound and hence satisfy all conditions on $\delta$. In your example, the bound will depend on the value of $\varepsilon$; for example: if $\varepsilon = 8$, then $\delta = \text{min} \{1,\frac{\varepsilon}{4}\} = \text{min} \{1,\frac{8}{4}\} = \text{min} \{\color{red}{1},\color{blue}{2}\} = \color{red}{1}$; if $\varepsilon = 2$, then $\delta = \text{min} \{1,\frac{\varepsilon}{4}\} = \text{min} \{1,\frac{2}{4}\}= \text{min} \{\color{blue}{1},\color{red}{\tfrac{1}{2}}\} = \color{red}{\tfrac{1}{2}}$. The smallest upper bound is automatically 'selected', whatever the value of $\varepsilon$. Remark: note that this is not only the case for limits of quadratic functions (as mentioned in your question). For any limit proof, or more generally even for any context where you need multiple upper bounds to be simultaneously satisfied, this is a way to do it.
Colored blocks and towers
For C), we choose the $2$ positions (from the $10$) that will be occupied by colour 1. Then we choose the $2$ positions, from the remaining $8$, that are occupied by colour $2$. And so on. For B), for every way of selecting where the whites will go, there are $4^4$ ways to fill in the rest. So you need to multiply, not add. The answer to A) is correct.
What are the poles and zeros of the Euler Beta function?
The Beta function can be defined by $$B(x,y)=\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$$ So it has zeroes where the denominator tends to $\infty$ in absolute value. The Gamma function tends to an infinite absolute value only when the argument is a negative integer or zero, hence we need $$x+y\in\mathbb{Z}_{\le0}$$ $$x,y\in\mathbb{C}\setminus\mathbb{Z}_{\le0}$$ There are infinitely many such choices, including $x=k\in\mathbb{C}\setminus\mathbb{Z}$, $y=-k$.
Is there a closed form?
A closed form solution can only exist if m is a rational power of n, and/or $abc=0$. If such is not the case, let $\gamma=\dfrac1{\ln m-\ln n},\quad\alpha=\dfrac cb,\quad\beta=-\dfrac ab$ . Then $k=-x$, where x is the solution to the recursive equation $x=\gamma\ln(\alpha m^x+\beta)$, which can be computed using the following iterative algorithm: $x_0=\ldots$ , and $x_{n+1}=\gamma\ln(\alpha m^{x_n}+\beta)$.
A counterexample for an integral inequality
We can show that if $f\geq 0$ satisfies $\int f= 0$, then $f$ vanishes almost everywhere. Thus, there are no $f,g\in L^1$ with $f\leq g$ such that $f< g$ on a set of positive measure and $\int f=\int g$.
Modeling drunkeness over time
Using this rather oversimplified model, the drunkenness function $$D(T,t)=T-ct$$ is indeed not linear in $T$. We have $$\frac{D(mT,t)}{D(T,t)}=\frac{mT-ct}{T-ct}$$ which will only equal $m$ at $t=0$. The part about piecewise definition for $T$ is totally unnecessary. $T=T(t)$ is just the sum of all the alcohol drunk by time $t$. For the record, using both $T$ and $t$ in the same math problem is a clear warning sign of drunkenness.
Verification of change of basis calculation
You should be careful with "the" matrix for a change of basis, since the order matters: the matrix converting coordinates from basis $A$ to basis $A'$ is the inverse of the matrix converting coordinates in the other direction. You filled a matrix with, as columns, the base vectors of $A$ expressed w.r.t. the base vectors of $A'$, the standard basis. This gives you a change of basis matrix $P_{A'A}$ that converts coordinates in the following direction: $$[\vec x]_{A'} = P_{A'A}[\vec x]_{A}$$ where $[\vec x]_B$ denotes the coordinate vector of $\vec x$ w.r.t. a basis $B$. You used it the other way around; there was no need to find the inverse matrix for the conversion in the direction asked. If a vector $\vec c$ has coordinate vector $$[\vec c]_{A} = \begin{bmatrix} -1 \\ 2 \end{bmatrix}$$ with respect to $A$, then the coordinate vector with respect to $A'$ is given by: $$[\vec c]_{A'} = P_{A'A}[\vec c]_{A} = \begin{bmatrix} 2 & 2 \\ 1 & 5 \end{bmatrix}\begin{bmatrix} -1 \\ 2 \end{bmatrix}= \begin{bmatrix} 2 \\ 9 \end{bmatrix} $$ If you are given a coordinate vector w.r.t. the standard basis and you would want to know its coordinate vector w.r.t. the initially given basis $A$, then you would need $P_{AA'} = P_{A'A}^{-1}$.
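A quick numerical confirmation (numpy):

```python
import numpy as np

P = np.array([[2, 2],
              [1, 5]])              # columns: basis A in standard coordinates
c_A = np.array([-1, 2])
print(P @ c_A)                      # [2 9], coordinates w.r.t. A'
print(np.linalg.solve(P, P @ c_A))  # [-1. 2.], back to coordinates w.r.t. A
```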
Trigonometric/Logarithmic Integration
Putting $\ln x=y$, $x=e^y$, $dx=e^y dy$. So $\int \sin(\ln x)\,dx=\int \sin y\cdot e^y\, dy$. Use integration by parts, with $e^y$ as the first term. Alternatively, using Euler's formula $e^{iy}=\cos y+i\sin y$, the integral $\int \sin y\cdot e^y dy$ is the imaginary part of $\int e^y\cdot e^{iy}dy$: $$\int e^y\cdot e^{iy}dy=\int e^{y(1+i)}dy=\frac{e^y(e^{iy})}{(1+i)}=\frac{(1-i)e^y(\cos y+i\sin y)}2=\frac{e^y\{(\cos y+\sin y)+i(\sin y-\cos y)\}}2$$ $$\implies \int \sin y\cdot e^y dy=\frac{e^y(\sin y-\cos y)}2+C$$ $$\implies \int \sin(\ln x)dx=\frac{x(\sin(\ln x)-\cos(\ln x))}2+C$$
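A symbolic check of the antiderivative (sympy):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
F = x*(sp.sin(sp.log(x)) - sp.cos(sp.log(x)))/2
print(sp.simplify(sp.diff(F, x) - sp.sin(sp.log(x))))   # 0
```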
Need help with taylor series.
Hint. You have, near $x=1$, $$1-x + \ln x = 1 -x + (x-1) -(x-1)^2/2+ O(x-1)^3$$ $$1-x + \ln x = -(x-1)^2/2+ O(x-1)^3$$ and $$1+\cos πx = 1 -1+\frac{\pi^2}2 (x-1)^2 + O(x-1)^3$$ $$1+\cos πx = \frac{\pi^2}2 (x-1)^2 + O(x-1)^3$$ thus $$\frac{1-x + \ln x}{1+ \cos πx} =\frac{-(x-1)^2/2+ O(x-1)^3}{\frac{\pi^2}2 (x-1)^2 + O(x-1)^3}=\frac{-1/2+ O(x-1)}{\pi^2/2 + O(x-1)}=-\frac{1}{\pi^2}+ O(x-1)$$ then $$\lim\limits_{x \to 1} \frac{1-x + \ln x}{1+ \cos πx}=-\frac{1}{\pi^2}.$$
Find the general solution of $e^y (\cos xy - y \sin xy)dx + e^y (\cos xy - x \sin xy)dy = 0$
I got the same answer without an integrating factor, so your answer is correct. $$e^y (\cos xy - y \sin xy)dx + e^y (\cos xy - x \sin xy)dy = 0$$ Since $e^y \ne 0$, $$ (\cos xy - y \sin xy)dx + (\cos xy - x \sin xy)dy = 0$$ $$ \cos xy\,(dx+dy) - \sin xy\,(x\,dy+y\,dx) = 0$$ We have $x\,dy+y\,dx=d(xy)$, so $$ \cos xy\, d(x+y) - \sin xy\, d(xy) = 0$$ $$ d(x+y) - \tan xy\, d(xy) = 0$$ Integrate: $$x+y+\ln |\cos xy|=C$$
Solve for c of PDF $f(x,y)=cxy1_{0<x<y<1}$
HINTS As for $c$, your idea is right. For $\mathbb{P}[X+Y<1]$, you need to compute $$\iint_A f(x,y)\,dA$$ with $A = \{0 < x < y < 1 \text{ and } x+y < 1\}$. If $V$ and $W$ are independent random variables with joint pdf $f_{V,W}$ and individual pdfs $f_V$ and $f_W$, then $$f_{V,W}(v,w) = f_V(v) f_W(w).$$ UPDATE Since your pdf contains the factor $\mathbb{I}_{(0 <x<y<1)}$, which is an inseparable mixture of both $x$ and $y$, $X$ and $Y$ are not independent.
Given a ring R how to obtain a ring $R^1$ with an identity?
To obtain an extension of $R$ with identity $e$, consider $R^1=R[e]=\{r+me : r\in R,\ m\in \mathbb{Z}\}$, where $e$ is a formal identity, i.e. multiplication is determined by $er=re=r$ and $e^2=e$, so that $(r+me)(s+ne)=(rs+nr+ms)+mne$.
Game theory problem - two towers
You can divide through by $k$, equivalently making $k=1$: let $m=xk+a$, $n=yk+b$, with both $a$ and $b$ between $0$ and $k-1$. Then the $a$ and $b$ stones left over play no part in the game, and every round you reduce either $x$, $y$, or both by $1$. So let $k=1$. Which positions with $y=0$ are wins, and which are losses? If $y=1$, can you put the position into a lost position for your opponent? So which positions are wins when $y=1$? Then do $y=2$, and so on.
Issues with $\lim_{n\rightarrow\infty}(a_n*b_n) = \lim_{n\rightarrow\infty}(a_n) * \lim_{n\rightarrow\infty}(b_n)$
Usually, one says that a sequence $(a_n)_{n\in\Bbb N}$ of real numbers converges if it converges to a real number $l$. Then we say that the limit of the sequence is $l$, and this is expressed by $\lim_{n\to\infty}a_n=l$. However, this does not mean that this is the only case in which we can talk about the limit of the sequence. There are also the notions of the limit of a sequence being $\infty$ or $-\infty$, which is expressed by $\lim_{n\to\infty}a_n=\infty$ and by $\lim_{n\to\infty}a_n=-\infty$ respectively. However, such a sequence does not converge. And, yes, it is true that, for instance, if $\lim_{n\to\infty}a_n=\infty$ and if $\lim_{n\to\infty}b_n=2$, then $\lim_{n\to\infty}(a_nb_n)=\infty$.
Find the sum $\sum_{j=0}^{n}j$
HINT: Add the first and last terms to get $n+1$; the next pair (the second and the next-to-last) gives the same result. See Gauss's trick.
Variables as Elements of Sets
If someone asked you, "Is it true?" you'd need more information, and you'd probably reply with a question: "Is what true?" If you were asked, "Is he going to the party?", you couldn't answer without knowing whom "he" refers to. Variables are like pronouns. If I were told that $A = \{1,2,6\}$ and then asked, "Is $x\in A$?", without more information about $x$ no reply is possible. If I'm told that $x=4$ then I can say, no, $x\notin A$; if I'm told that $1\le x\le 2$ and that $x$ is an integer, then I know that, yes, $x\in A$. If $x$ and $y$ have values, denote entities, then $\{x,y,100\}$ is well-defined; if they don't, it isn't. Variables are not members of sets; sets (/mathematical entities) are, and variables denote them. Pronouns don't go to parties; people do, and pronouns denote them. Re your last two examples: For all $x,y$, if $B = \{x,y,100\}$ then $y\in B$. This is true. Notice that $x,y$ are bound variables here. In your first example, $x$ is free — it's not bound by a quantifier, and has no value. If $C = \{x,y,z\}$, then $a\in C \iff (a=x \lor a=y \lor a=z)$, so if you negate both sides the results are equivalent too. If you know $a\notin C$ then you know that $a$ is not equal to any of $x,y,z$, and if you know the latter, then you know $a\notin C$. This holds for all $x,y,z,a$ (bound variables again).
Explicit kernel for diffusion processes.
In general, no. Identifying the transition kernel comes down to solving the corresponding Fokker-Planck or Kolmogorov forward equation, which is a partial differential equation for the transition kernel, and one cannot write down a general solution. For a diffusion $$dX_t = \mu(X_t) dt + \sigma(X_t) dB_t,$$ the forward equation is $$ \frac{\partial p(x,t)}{\partial t} = -\frac{\partial}{\partial x}(\mu(x)p(x,t))+\frac{1}{2}\frac{\partial^2}{\partial x^2}(\sigma(x)^2 p(x, t)).$$ As you point out, in the case $\mu = 0$ and $\sigma = 1$, $p(x, t) = \frac{1}{\sqrt{2\pi t}} e^{-x^2/2t}$ solves the forward equation with boundary conditions determined by $p(x, t) \rightarrow 0$ as $t\rightarrow 0$ for any $x \neq 0$. The second most popular example of a process for which the transition kernel can be identified is the Ornstein-Uhlenbeck process, where $\mu(x) = -ax$ and $\sigma = c$ for some constant $c>0$. For various reasons, one usually writes $\sigma$ in terms of the diffusion coefficient, $\sigma =\sqrt{2D}$. In this case, $$p(x, t) = \sqrt{\frac{a}{2\pi D(1-e^{-2at})}} \exp\left(-\frac{a x^2}{2D(1-e^{-2at})}\right),$$ with appropriate initial conditions. Possibly the only other thing one can say for "general" systems is that gradient systems with $\mu(x) = -\nabla V(x)$ and $\sigma = c$ admit an explicit steady-state solution to the forward equation, namely $$\lim_{t\rightarrow\infty} p(x,t) \propto e^{-2V(x)/c^2}.$$ Observe that this applies to the Ornstein-Uhlenbeck process above, with $V(x) = ax^2/2$ (so the stationary distribution is Gaussian).
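As a quick symbolic check of the first example (a SymPy sketch confirming the heat kernel solves the forward equation with $\mu=0$, $\sigma=1$):

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
p = sp.exp(-x**2/(2*t)) / sp.sqrt(2*sp.pi*t)   # heat kernel

# the forward equation with mu = 0, sigma = 1 reads p_t = (1/2) p_xx
residual = sp.diff(p, t) - sp.Rational(1, 2)*sp.diff(p, x, 2)
print(sp.simplify(residual))   # 0
```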
Find a collection of pairwise independent events each with probability $< p$
Roll a fair $n$-sided die a total of $n+1$ times. Let $X_i$ be the outcome of the $i^{th}$ roll, and for each $i\neq j$, let $E_{i,j}$ be the event that $X_i=X_j$. Then $P(E_{i,j})=1/n$, the events $E_{i,j}$ are pairwise independent, and at least one of them always occurs by the pigeonhole principle.
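A brute-force check of the construction for a small case (exact probabilities via fractions; this only verifies the claims, it is not part of the proof):

```python
from itertools import product, combinations
from fractions import Fraction

n = 3                                   # n-sided die, n + 1 rolls
rolls = list(product(range(n), repeat=n + 1))
total = Fraction(len(rolls))

def prob(event):
    return Fraction(sum(event(r) for r in rolls)) / total

pairs = list(combinations(range(n + 1), 2))
for (i, j), (k, l) in combinations(pairs, 2):
    E  = lambda r: r[i] == r[j]
    F  = lambda r: r[k] == r[l]
    EF = lambda r: r[i] == r[j] and r[k] == r[l]
    assert prob(E) == prob(F) == Fraction(1, n)
    assert prob(EF) == Fraction(1, n)**2          # pairwise independent

# pigeonhole: some pair always matches
assert all(any(r[i] == r[j] for i, j in pairs) for r in rolls)
print("all checks passed")
```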
Calculate $\lim_{x \longrightarrow 0^{-}}(1-2^x)^{\sin(x)}$ by Taylor expansion.
Note that $$(1-2^x)^{\sin(x)}=e^{\sin(x)\log(1-2^x)}\to 1$$ Indeed, $${\sin(x)\log(1-2^x)}={\frac{\sin(x)}{1-2^x}\,(1-2^x)\log(1-2^x)}\to 0$$ since by standard limits $(1-2^x)\log(1-2^x)\to 0$ and $\frac{\sin(x)}{1-2^x}=\frac{\sin(x)}{x}\cdot\frac{x}{1-2^x}\to1\cdot\frac1{-\log 2}=-\frac1{\log 2}$.
Lebesgue Integration and Complex Integration
About the problem: one way is to find, for every $z\in\mathbb D$, a function $\phi_z$ on $\mathbb D$ such that for every holomorphic $f$ on $\mathbb D$ we have $$f(z)=\langle \phi_z,f\rangle.$$ For example, use the mean value property: choose a small closed disk $D_z\subset\mathbb D$ centered at $z$, say with radius $(1-|z|)/2$, and set $\phi_z=c_z^{-1}\chi_{D_z}$, where $c_z$ is the area of $D_z$ ($\chi$ stands for the characteristic function). We then have $\Vert\phi_z\Vert_2^2=c_z^{-1}$, hence $$|f_n(z)|\leq C\,{c_z}^{-1/2} = c\ (1-|z|)^{-1} $$ for every $n$ (for a suitable $c$). And here we are: the functions are locally uniformly bounded, so they form a normal family (Montel's theorem).
Use Poincaré-Bendixson to show that a limit cycle exists in the first quadrant.
Let me repeat, with suitable modifications, my answer to the OP's earlier question. Assume that all solutions starting in $\mathbb{R}^2_{+} := \{\, (x, y) : x \ge 0,\ y \ge 0 \,\}$ are bounded for $t > 0$ (this would require a separate proof). It follows then that for any such solution, its domain contains $[0,\infty)$ and its $\omega$-limit set is compact and nonempty. Let $L$ stand for the $\omega$-limit set of some point, $(x_0,y_0)$, sufficiently close to the unstable focus $(1,3)$. By the Poincaré–Bendixson theorem, as there are finitely many equilibria, $L$ is either a limit cycle, or a heteroclinic cycle (EDIT: or a homoclinic loop), or an equilibrium. There are two equilibria, $(4,0)$ and $(1,3)$. The equilibrium $(1,3)$ is an unstable focus, so it cannot belong to any heteroclinic cycle (because a heteroclinic cycle (EDIT: or a homoclinic loop) must contain an equilibrium that is an $\omega$-limit point for some other point). Consequently, there are no heteroclinic cycles (EDIT: or homoclinic loops) at all. So, $L$ is either a periodic orbit, or equals $\{(4,0)\}$. We proceed now to excluding the latter. The linearization of the vector field at $(4,0)$ has matrix $$ \begin{bmatrix} -1 & -\frac{2}{5} \\ 0 & 3 \end{bmatrix}. $$ There are two real eigenvalues, $-1$ and $3$, of opposite signs, so $(4,0)$ is a hyperbolic saddle. Its stable manifold is tangent to an eigenvector corresponding to $-1$, that is, to $(1,0)$. I claim that the stable manifold is the $x$-axis, minus $(4,0)$. Indeed, on the $x$-axis we have $\dot{x} = 4 - x$, $\dot{y} \equiv 0$, so for any $(x_1,0)$ we have $\omega((x_1,0)) = \{(4,0)\}$. Now, for a hyperbolic saddle its stable manifold is just the set of those points whose (unique) $\omega$-limit point is the saddle. Hence, if $L = \{(4,0)\}$ then the positive semitrajectory of $(x_0, y_0)$ must belong to the $x$-axis, which contradicts the uniqueness of the initial value problem (notice that $y_0 > 0$). We have thus shown that $L$ is a periodic orbit.
How can I integrate the following function?
Assuming that there is no typo in the integrand, we already know (because of the $\sqrt{x^3+1}$ term) that the antiderivative will contain elliptic integrals of the first and second kind (with nasty arguments). Concerning the definite integral, $$\int_0^2 x\sqrt{x^3+1}\,dx=2 \, _2F_1\left(-\frac{1}{2},\frac{2}{3};\frac{5}{3};-8\right)\approx 3.91613$$ To get a reasonable approximation, you could develop the integrand as a Taylor series around $x=1$ and integrate termwise. $$x\sqrt{x^3+1}=\sqrt{2}+\frac{7 (x-1)}{2 \sqrt{2}}+\frac{39 (x-1)^2}{16 \sqrt{2}}+\frac{47 (x-1)^3}{64 \sqrt{2}}-\frac{277 (x-1)^4}{1024 \sqrt{2}}+\frac{321 (x-1)^5}{4096 \sqrt{2}}+\frac{1891 (x-1)^6}{32768 \sqrt{2}}-\frac{12737 (x-1)^7}{131072 \sqrt{2}}+\frac{220851 (x-1)^8}{4194304 \sqrt{2}}+\frac{263341 (x-1)^9}{16777216 \sqrt{2}}-\frac{6669319 (x-1)^{10}}{134217728 \sqrt{2}}+O\left((x-1)^{11}\right)$$ leading to the value $\frac{143031897707}{25836912640 \sqrt{2}}\approx 3.91451$, which is not too bad. Edit Just to answer a question in one of your comments: expanding around $x=0$, we would obtain a very divergent expansion, since $$x\sqrt{x^3+1}=x+\frac{x^4}{2}-\frac{x^7}{8}+\frac{x^{10}}{16}-\frac{5 x^{13}}{128}+\frac{7 x^{16}}{256}-\frac{21 x^{19}}{1024}+O\left(x^{21}\right)$$ and, integrating termwise, the terms would increase more and more (because of the powers of $2$) with alternating signs, leading even to negative values for the definite integral! Expanding around $x=1$ (that is to say, at the midpoint of the integration interval), the coefficients become smaller and smaller and the result tends to converge, even if slowly. I hope this makes things clearer for you.
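The two values quoted above are easy to reproduce (a SymPy sketch):

```python
import sympy as sp

x = sp.symbols('x')
f = x*sp.sqrt(x**3 + 1)

# direct numerical quadrature
print(sp.Integral(f, (x, 0, 2)).evalf())       # 3.91613...

# Taylor polynomial about the midpoint x = 1, integrated termwise
poly = sp.series(f, x, 1, 11).removeO()
print(sp.integrate(poly, (x, 0, 2)).evalf())   # 3.91451...
```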
Differentiating functions of scalar products
You can basically apply the chain rule. For the first example, define $u_1 = k \cdot x$ and $u_2 = b \cdot x$; then we have $$ \vec{\nabla} S_1(u_1, u_2) = \frac{\partial S}{\partial u_1} \vec{\nabla}u_1 + \frac{\partial S}{\partial u_2} \vec{\nabla}u_2 = \vec{k} \frac{\partial S}{\partial u_1} + \vec{b} \frac{\partial S}{\partial u_2}. $$ If this needs to be true for all vectors, and the dimension of the space is greater than 1, then we can pick $\vec{b}$ to be linearly independent of $\vec{k}$; and so the equation $$ \vec{\nabla} S_1(u_1, u_2) = [g(u_1)] \vec{b} + [0] \vec{k} $$ implies that $$ \frac{\partial S}{\partial u_1} = 0, \qquad\frac{\partial S}{\partial u_2} = g(u_1). $$ Assuming that $S$ is $\mathcal{C}^2$, this is a contradiction unless $g(u_1)$ is constant, since taking the mixed partials implies that $$ \frac{\partial^2 S}{\partial u_2 \partial u_1} = 0 \qquad \text{but} \qquad \frac{\partial^2 S}{\partial u_1 \partial u_2} = g'(u_1). $$ If $g(u_1) = g_0$ is a constant, then the general solution is $S_1 = g_0 u_2 + C = g_0 (b\cdot x) + C$. Similar logic can be applied to show that if the second equation must hold for all vectors $\vec{k}$, $\vec{K}$ and $\vec{b}$, and the dimension of the space is greater than 2, then there is no $\mathcal{C}^2$ solution unless $h(k\cdot x, K\cdot x) = h_0$ is a constant function, in which case the general solution is $S_2 = h_0 (b\cdot x) + C$.
Find the general solution of $x''(t) + 5x'(t) + 4x(t) = 0$
Both $x_1(t)=e^{-4t}$ and $x_2(t)=e^{-t}$ are linearly independent solutions of the given differential equation. Thus, the complete general solution is, $x(t) = c_1 x_1(t)+c_2x_2(t)=c_1e^{-4t}+c_2e^{-t}$, where $c_1$ and $c_2$ are arbitrary constants.
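For completeness, the two exponents come from the characteristic equation: substituting the ansatz $x(t)=e^{rt}$ gives $$r^2+5r+4=(r+1)(r+4)=0 \quad\Longrightarrow\quad r=-1 \ \text{ or } \ r=-4.$$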
Showing sum of squares of products of all non-empty subsets of $\{1, 2, 3, \ldots, n\}$ having no consecutive elements is $(n + 1)! − 1$
It follows immediately by induction. Any such subset of $\{1,\dots,n+1\}$ is either a subset of $\{1,\dots,n\}$, or the union of a subset of $\{1,\dots,n-1\}$ with $\{n+1\}$ (the element $n$ cannot accompany $n+1$), or the singleton $\{n+1\}$ itself. The three families contribute $(n+1)!-1$, then $(n+1)^2[n!-1]$, then $(n+1)^2$, and $$\bigl[(n+1)!-1\bigr] + (n+1)^2\bigl[n!-1\bigr] + (n+1)^2 = (n+1)! - 1 + (n+1)^2\, n! = (n+2)!-1,$$ since $(n+1)^2\,n! = (n+1)\,(n+1)!$.
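The identity is easy to verify by brute force for small $n$ (a Python sketch):

```python
from itertools import combinations
from math import prod, factorial

def check(n):
    total = 0
    for r in range(1, n + 1):
        for s in combinations(range(1, n + 1), r):
            # keep only subsets with no two consecutive elements
            if all(b - a >= 2 for a, b in zip(s, s[1:])):
                total += prod(s)**2
    return total == factorial(n + 1) - 1

print(all(check(n) for n in range(1, 9)))   # True
```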
Product of a linearly dependent and a linearly independent matrix
The matrix you are considering seems to be $A^*B$, where $A^*$ is the Hermitian transpose. Thus $A^*B$ is a $4\times 4$ matrix. The given data is insufficient to conclude, because $A^*B$ can have any rank from $0$ to $4$. Indeed, taking $B=A$, we can prove that the rank of $A^*A$ is the same as the rank of $A$.
Does a statement have to be true for all conditions to be transitive, symmetric, reflexive?
The notation you see here can be expanded to $$R = \{(x,y) : |x-y| = 2\}$$ Now you should see that this relation is symmetric but neither reflexive nor transitive. Generally the notations $xRy: P(x,y)$ and $R = \{(x,y) : P(x,y)\}$, where $P$ is a predicate, are equivalent. Sometimes you also see $xRy :\Leftrightarrow P(x,y)$ (read the $:\Leftrightarrow$ as "is defined equivalent to")
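Restricted to a small finite carrier set, the three properties of this particular relation can be checked mechanically (a sketch; of course this only tests the finite restriction):

```python
S = range(-5, 6)
R = {(x, y) for x in S for y in S if abs(x - y) == 2}

reflexive  = all((x, x) in R for x in S)                     # |x-x| = 0, never 2
symmetric  = all((y, x) in R for (x, y) in R)                # |x-y| = |y-x|
transitive = all((x, z) in R
                 for (x, y) in R for (w, z) in R if y == w)  # fails: (0,2),(2,4)

print(reflexive, symmetric, transitive)   # False True False
```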
Likelihood function of a gamma distributed sample
If $X$ follows a gamma distribution with shape $\alpha$ and scale $\beta$, then its probability density is $$p_{\alpha, \beta}(x) = \frac{ x^{\alpha-1} e^{-x/\beta}}{\Gamma(\alpha) \beta^\alpha }$$ Sometimes this is re-parameterized with $\beta^{\star} = 1/\beta$, in which case you will need to change this accordingly. The likelihood function is just the density viewed as a function of the parameters. So, the log-likelihood function for an IID sample $X_1, ..., X_n$ from this distribution with realized values $x_1, ..., x_n$ is $$ L(\alpha, \beta) = \sum_{i=1}^{n} \log \big( p_{\alpha, \beta}(x_i) \big) = (\alpha-1) \sum_{i=1}^n \log(x_i) - \frac{1}{\beta} \sum_{i=1}^{n}x_i - n\alpha \log(\beta) - n\log( \Gamma(\alpha) )$$ which can be maximized jointly as a function of $\alpha, \beta$ to get the MLE.
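A numerical sketch of the joint maximization (simulated data; the optimizer, bounds, and starting point are illustrative choices, not part of the answer):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=3.0, size=5000)
n = len(x)

def neg_log_lik(theta):
    a, b = theta   # shape alpha, scale beta
    return -((a - 1)*np.log(x).sum() - x.sum()/b
             - n*a*np.log(b) - n*gammaln(a))

res = minimize(neg_log_lik, x0=[1.0, 1.0], bounds=[(1e-6, None)]*2)
print(res.x)   # close to the true (2.0, 3.0)
```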
Number Of Uncompleted Tic Tac Toe Games
http://www.se16.info/hgb/tictactoe.htm and http://www.mathrec.org/old/2002jan/solutions.html show there are $255168$ possible completed games (before symmetries reduce this to perhaps $26830$), but there will be fewer board positions. The numbers on those sites can easily be transferred to incomplete games, so your number of incomplete games of $294777$ is in a sense almost correct. It is made up of:
9 incomplete games with 1 move
72 incomplete games with 2 moves
504 incomplete games with 3 moves
3024 incomplete games with 4 moves
13680 incomplete games with 5 moves
49392 incomplete games with 6 moves
100224 incomplete games with 7 moves
127872 incomplete games with 8 moves
though I think you should also add in 1 incomplete game with 0 moves. Even before symmetries, this overstates the number of positions: for example, there are only $252$ positions after $3$ moves ($38$ taking account of symmetries), all incomplete. Overall there are $4520$ incomplete positions and $958$ completed positions ($627$ incomplete positions and $138$ completed positions taking account of symmetries).
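The per-move counts are straightforward to reproduce with a depth-first enumeration (a sketch; an "incomplete game" here is a move sequence after which nobody has won and the board is not full):

```python
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def won(board, p):
    return any(all(board[i] == p for i in line) for line in LINES)

counts = {}

def dfs(board, player, moves):
    if won(board, 'X') or won(board, 'O') or moves == 9:
        return                                   # game over, not incomplete
    counts[moves] = counts.get(moves, 0) + 1     # an incomplete game
    nxt = 'O' if player == 'X' else 'X'
    for i in range(9):
        if board[i] == ' ':
            board[i] = player
            dfs(board, nxt, moves + 1)
            board[i] = ' '

dfs([' ']*9, 'X', 0)
print(counts)                 # {0: 1, 1: 9, 2: 72, 3: 504, 4: 3024, ...}
print(sum(counts.values()))   # 294778, i.e. 294777 plus the empty board
```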
Is $\mathbb{Z}[\frac{1}{p}]$ a lattice in $\mathbb{R} \times \mathbb{Q}_p$?
Consider $\Bbb{R\times Q_p}$ with the norm $\|(a,b)\|=|a|_\infty+|b|_p$ and the diagonal embedding $\iota(c)=(c,c)$. Then $\iota(\Bbb{Z}[p^{-1}])$ is a discrete subgroup, and $[0,1)\times\Bbb{Z}_p$ is a fundamental domain of $\Bbb{R\times Q_p}/\iota(\Bbb{Z}[p^{-1}])$, which is compact.
Norm of functional on $L^4[0, 1]$
Applying Hölder's inequality with $p=4$ and $q=\frac 4 3$, we see that $|f(x)| \leq 5^{-3/4} \|x\|$. Hence the norm is at most equal to $5^{-3/4}$. To see that equality holds, just take $x(t)=t$. I will let you verify that $\frac {|f(x)|} {\|x\|} =5^{-3/4}$ in this case. Note: the choice of $x(t)$ is dictated by the condition for equality in Hölder's inequality.
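The functional $f$ is not restated above; the constants are consistent with, for instance, $f(x)=\int_0^1 t^3 x(t)\,dt$, which is only an assumption used here to illustrate the verification with $x(t)=t$:

```python
import sympy as sp

t = sp.symbols('t')
x = t                                     # the extremal candidate x(t) = t
f_x  = sp.integrate(t**3 * x, (t, 0, 1))  # hypothetical kernel t**3; gives 1/5
norm = sp.integrate(x**4, (t, 0, 1))**sp.Rational(1, 4)   # ||x||_4 = 5**(-1/4)
print(sp.simplify(f_x/norm - 5**sp.Rational(-3, 4)))      # 0: ratio is 5**(-3/4)
```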
Evans PDE p.714 Change of variable and change of integration region
By definition, $\eta$ is supported in the unit ball $\overline{B(0,1)}$, i.e. $\eta(x) = 0$ for $x \in \mathbb{R}^N \setminus \overline{B(0,1)}$. Again by definition, $$\eta_{\epsilon}(x) = \epsilon^{-N}\eta(x\epsilon^{-1}).\tag 1$$ Now I claim that $\eta_{\epsilon}(x) = 0$ for $x \in \mathbb{R}^N \setminus \overline{B(0,\epsilon)}$. Indeed, fix such an $x$, notice that $x\epsilon^{-1} \in \mathbb{R}^N \setminus \overline{B(0,1)}$ and use $(1)$ to prove the claim. This implies that $\eta_{\epsilon}(x - y) = 0$ for $y \in \mathbb{R}^N \setminus \overline{B(x,\epsilon)}$. Moreover, if $x \in U_{\epsilon}$ then $B(x,\epsilon) \subset U$. Then we have $$ \int_U\eta_{\epsilon}(x - y)f(y)\,dy = \int_{B(x,\epsilon)}\eta_{\epsilon}(x - y)f(y)\,dy. $$ Now it should be much easier to understand how the domain of integration changes with the change of coordinates $z = x - y$.
Why does $\binom{10}{7} = \frac{10!}{(10-7)!7!}$
This is quite difficult to communicate with text, but I will try. It's important to understand each point in turn, as each one follows from the previous one. Please comment if you have any questions about anything that isn't clear: First, think about this: how many ways are there to arrange ABCDE? Perhaps it's clear that there are $5\times 4\times 3\times 2\times 1 = 5! = 120$ permutations? If not, we could maybe start with a shorter example, like ABC. So then (once you've understood the first point!), how many ways are there to arrange AACDE? Well, there are still $5! = 120$ permutations. But some of these arrangements are the same. In fact, we are double counting, because the two A-s can appear in either order. So, in fact there are $5!/2 = 60$ combinations. How many ways are there to arrange AAADE? There are still $5! = 120$ permutations, but now each arrangement is counted $6$ times, because there are $3! = 6$ ways to arrange the A-s. So, there are $5!/3! = 20$ combinations. How many ways are there to arrange AAADD? There are still $5! = 120$ permutations, but there are 6 ways to arrange the A-s (each counted six times), and 2 ways to arrange the D-s (each counted twice). In fact, each distinct arrangement is now counted $6\times 2 = 12$ times. In other words, there are $5!/(3!2!)$ combinations here. Whether we are interested in permutations or combinations depends on whether we care about the order of the results. If we are just counting the number of 6s, we don't care about the order; we just care about how many 6s we got. So, we should count the combinations. If you call A the result of getting a 6, and B the result of getting anything other than a 6, then we are trying to find the number of combinations of A-s and B-s, as in part 4 above. We can extend this idea to any number, like 10 dice and counting 7 sixes. Hopefully it's clear how this leads to the idea of a binomial coefficient?!
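The counts in the answer are quick to confirm (a Python sketch):

```python
from itertools import permutations
from math import factorial

# AAADD: 5!/(3!2!) = 10 distinct arrangements
print(len(set(permutations("AAADD"))))                # 10
print(factorial(5) // (factorial(3)*factorial(2)))    # 10

# 7 sixes among 10 dice = arrangements of AAAAAAABBB = C(10, 7)
print(factorial(10) // (factorial(7)*factorial(3)))   # 120
```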
Projective special linear group
The group is generated by two elements. This was proved by L.E. Dickson, and the proof can be found in D. Gorenstein's book on finite groups. Strictly, that proof does not cover the cases $q < 4$ and $q = 9.$ The cases $q=2,3,4$ can be checked directly. Since ${\rm PSL}(2,9) \cong A_{6},$ this can be checked directly too. Added later: Perhaps Dickson required the prime power $q$ to be odd, I can't remember. But in any case, when $q$ is a power of $2$ and $q > 2,$ then $G = {\rm PSL}(2,q)$ can be generated by two elements, one of order $q-1$ and one of order $q+1.$ The first, $C$ say, can be taken to normalize a chosen Sylow $2$-subgroup, and permute its non-identity elements transitively by conjugation, and the second, $D$ say, permutes the Sylow $2$-subgroups transitively by conjugation. Now $\langle C,D \rangle$ must have odd order, for otherwise it would contain a full Sylow $2$-subgroup, and hence have order divisible by $q(q-1)(q+1) = |{\rm PSL}(2,q)|$ in this case. Hence $CD = DC$ is a group of order $(q+1)(q-1).$ However, all odd order subgroups of ${\rm PSL}(2,q)$ are Abelian when $q$ is a power of $2$. There are several ways to finish: e.g., N. Ito proved that a product of Abelian finite groups is metabelian, and we have $G = S(CD)$ for $S \in {\rm Syl}_{2}(G)$ (also $S$ is Abelian), and ${\rm PSL}(2,q)$ is not metabelian for $q = 2^{n}$ when $n > 1$. Alternatively, one can just see that $G$ contains no element of order $(q-1)(q+1)$, and $CD$ would have to be cyclic. (The $2$-generation of ${\rm PSL}(2,q)$ for $q > 3$ also follows from a theorem of R. Steinberg.)
Convergence of Sequence with factorial
Yes, it's fine; maybe a little more care is needed for the $(\frac{3e}{n})^n$ term... but you could simply say that $\displaystyle \sum_0^{\infty} \frac{3^n}{n!}=e^3$; in particular the series converges, and hence its terms must go to zero.
Why does a singular matrix have a (non-zero) eigenvector?
If the matrix is invertible, then you can use Gaussian elimination to transform it to an upper triangular one which has nonzero determinant; hence the original matrix had nonzero determinant as well. So, if the determinant is zero, the matrix is singular. You can easily show that a singular matrix has linearly dependent columns, so $\sum_j A_{\cdot, j}\, x_j=0$ for some coefficients $x_j$ not all zero, where $A_{\cdot, j}$ denotes the $j$-th column of the matrix. So, for $x=(x_1,\ldots, x_n)^T \ne 0$, you get that $Ax=0$.
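A small numerical illustration (NumPy; the matrix is an arbitrary singular example):

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 4.]])            # second column = 2 * first, so singular
print(np.linalg.det(A))             # 0.0 up to rounding

w, V = np.linalg.eig(A)
v = V[:, np.argmin(np.abs(w))]      # eigenvector for the eigenvalue ~0
print(w)                            # eigenvalues 0 and 5
print(A @ v)                        # ~ [0, 0]: a nonzero v with Av = 0
```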
How many numbers end in the four digits 1995 and become an integer number of times smaller when these digits are erased?
If $n$ is the number remaining after removing the last four digits, then we are given that $$\frac{10000n + 1995}{n}$$ is an integer. But this is equal to $$10000 + \frac{1995}{n}$$ So the answer is simply the number of divisors of $1995$.
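Counting those divisors explicitly (a SymPy sketch): $1995 = 3\cdot 5\cdot 7\cdot 19$, so there are $2^4 = 16$ such numbers.

```python
from sympy import factorint, divisors

print(factorint(1995))       # {3: 1, 5: 1, 7: 1, 19: 1}
print(len(divisors(1995)))   # 16 admissible values of n
```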
Show that the radius of convergence of a sum of series is at least as big as minimum of radii of these series.
You have the definition: $\sum c_nx^n$ has radius of convergence $T$ if for all $x$ with $|x|<T$ the series $\sum c_nx^n$ converges and for all $x$ with $|x|>T$ the series $\sum c_nx^n$ diverges. From this definition you can deduce: $\sum c_nx^n$ has radius of convergence at least $T$ if for all $x$ with $|x|<T$ the series $\sum c_nx^n$ converges. This you can use to solve your problem. Hint: If $\sum a_nx^n$ converges and $\sum b_nx^n$ converges, then $\sum (a_n+b_n)x^n$ converges (maybe try to prove this hint yourself).
Does $\pi \ | \ 2 \pi$
Yes, it makes sense. It is not widely used, but it is clear and useful.
Why variance in kalman?
Why (co-)variances? Because no system, no model, and no measurement is perfect. Every position measurement provides more evidence to prefer or reject various models and parameter values for those models. However, no amount of evidence can absolutely select a "correct" model or "correct" parameters. The filter is designed to partition the deviation between predicted and observed measurement into the part that conforms with the model (i.e., is compatible with the predicted observables distribution from the prior time step) and the part that is "noise" (i.e., whatever disagreement with the prediction is observed). Note that measurement noise, unconverged model parameters, unmodeled influences, and mismatch between model and system all contribute to the "noise". You don't have independent (marginal) distributions of $x_t$ and $\dot{x}_t$. You have a joint PDF of $(x_t, \dot{x}_t)$ pairs. When you use this joint PDF to predict the distribution for the next time step, you introduce correlation (and covariance) because, for each choice of $x_t$, the image of that vertical slice of the PDF is angled: the parts corresponding to larger $\dot{x}_t$ are shifted further to the right and those to lesser $\dot{x}_t$ are shifted further to the left. At each prediction, the independent variances get mixed into the covariance. This is illustrated and explained in different words (and with equations) at [B]. Also, at each update, the more extreme parts of the previous PDFs that are incompatible with the new observation are suppressed. (That is, if the observation keeps being a little to the right for several updates, both the left tail of the position and the "to the left" tail of the velocity are suppressed, tending to push the prediction to "catch up".) It is not that $x_t$ and $\dot{x}_t$ are dependent; they are not: one can easily imagine prior histories giving any pair of current positions and velocities. (Note that specific models may implement a constraint causing a dependency, but your question doesn't indicate that you are thinking of model constraints.) However, $x_{t+1}$ is strongly dependent on both $x_t$ and $\dot{x}_t$. In addition $\dot{x}_{t+1}$ is weakly dependent on $\dot{x}_t$ since infinite acceleration is very expensive. Since one has only a simultaneous distribution of $(x_t, \dot{x}_t)$ pairs, one pushes that distribution forward in time, subject to model constraints, to produce a prediction of the next state. Then the system is observed and that new information is used to update the predicted distribution. The Kalman filter does not bother to represent any possible joint distribution; it uses a simple joint Gaussian, so it only needs to track the mean and the covariance matrix. [B] Babb, Tim, "How a Kalman filter works, in pictures", http://www.bzarg.com/p/how-a-kalman-filter-works-in-pictures/ .
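A minimal sketch of a one-dimensional constant-velocity filter, showing the covariance coupling described above (all noise parameters and measurements are illustrative assumptions):

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
H = np.array([[1.0, 0.0]])              # only position is observed
Q = 0.01 * np.eye(2)                    # process noise (assumed)
R = np.array([[0.5]])                   # measurement noise (assumed)

x = np.array([[0.0], [1.0]])            # state: position, velocity
P = np.eye(2)                           # initially uncorrelated components

for z in [1.1, 2.0, 2.9, 4.2]:          # made-up position measurements
    # predict: F mixes the variances into the covariance,
    # so P[0, 1] becomes nonzero even though it started at 0
    x = F @ x
    P = F @ P @ F.T + Q
    # update: split the innovation between model and noise
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    print(P[0, 1])                      # position-velocity covariance
```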
Primitivity implies transitivity?
Suppose $G$ acts on a set $\Omega$. Then the orbits of $G$ form a partition of $\Omega$, and each orbit is a block of $G$; in fact, each orbit $B$ is a minimal fixed block, so that $B \cap B^g=B, \forall g \in G$. If $G \ne 1$, then there is an orbit $B$ of length at least 2, and in addition if $G$ is intransitive, then $|B| < |\Omega|$, so that $B$ is a nontrivial block. Thus, every intransitive group $G \ne 1$ has a nontrivial block. Given a nontrivial block $B$, if $G$ is transitive, then $\Sigma:=\{B^g: g \in G\}$ is a partition of $\Omega$, and $G$ acts on $\Sigma$. As the authors of the text mention in p. 12, we can sometimes obtain useful information about $G$ by considering this action. If the group is intransitive, the resulting $\{B^g: g \in G\}$ is not a partition of $\Omega$. Every intransitive group $G \ne 1$ has a nontrivial block and hence (by definition of primitivity) cannot be primitive. Thus, if the group is intransitive, there is no question as to whether it is primitive or imprimitive.
Finding the order of an element
The cyclic group indeed has order $n$ because it is the group $\langle x\rangle$ generated by an element $x$ of order $n$. However, I think there is a problem with the product of groups, because you have to show that it (the product) is isomorphic to a group of order $420$.
Is there any trig identity which can be used for $\int \sin^4x$?
Use the identity $$\sin(x)^4=\frac{1}{8} (-4 \cos (2 x)+\cos (4 x)+3).$$
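With that identity the antiderivative follows term by term: $$\int \sin^4 x \,dx = \frac{1}{8}\int \bigl(3 - 4\cos (2x) + \cos(4x)\bigr)\,dx = \frac{3x}{8} - \frac{\sin(2x)}{4} + \frac{\sin(4x)}{32} + C.$$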
Show that if $||\cdot||_1$, $||\cdot||_2$ are equivalent norms then $(V,||\cdot||_1)$ is a banach space iff $(V,||\cdot||_2)$ is.
It suffices to show that: (1) a sequence $\left\{v_n\right\}$ is Cauchy in $\left\Vert\cdot\right\Vert_1$ iff it is Cauchy in $\left\Vert\cdot\right\Vert_2$; (2) a sequence $\left\{v_n\right\}$ converges in $\left\Vert\cdot\right\Vert_1$ iff it converges in $\left\Vert\cdot\right\Vert_2$. Why is this sufficient? Because then the argument goes like this: Suppose $\left\Vert\cdot\right\Vert_1$ is complete; that is, $V$ is a Banach space in the $\left\Vert\cdot\right\Vert_1$ metric. Let $\left\{v_n\right\}$ be a Cauchy sequence in $\left\Vert\cdot\right\Vert_2$. By (1) the sequence is Cauchy in $\left\Vert\cdot\right\Vert_1$. Since $\left\Vert\cdot\right\Vert_1$ is complete, the sequence converges in $\left\Vert\cdot\right\Vert_1$, and by (2) the sequence converges in $\left\Vert\cdot\right\Vert_2$. Therefore $\left\Vert\cdot\right\Vert_2$ is complete.
Proof of sequence formula
Use an ansatz of the type $a_n=A^n$ and plug this into the equation to get $A^n=2A^{n-1}+4A^{n-2}$ or $A^2=2A+4$. Now, solve for $A$ by solving the quadratic equation to get the two solutions $A_{1,2}$. The general solution is given by $a_n=\alpha A_1^n+\beta A_2^n$. Determine $\alpha$ and $\beta$ by using $a_0=a_1=1$.
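Carrying the recipe out symbolically (a SymPy sketch) gives $A_{1,2}=1\pm\sqrt 5$ and $\alpha=\beta=\tfrac12$:

```python
import sympy as sp

A, n, alpha, beta = sp.symbols('A n alpha beta')
A1, A2 = sp.solve(A**2 - 2*A - 4, A)               # 1 - sqrt(5), 1 + sqrt(5)
coef = sp.solve([alpha + beta - 1,
                 alpha*A1 + beta*A2 - 1], [alpha, beta])
a_n = coef[alpha]*A1**n + coef[beta]*A2**n
print(sp.simplify(a_n))                 # ((1-sqrt(5))**n + (1+sqrt(5))**n)/2
print([sp.expand(a_n.subs(n, k)) for k in range(5)])   # [1, 1, 6, 16, 56]
```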
Riemann problem of non-homogeneous Burgers equation $u_t+uu_x=u$
This PDE is a particular scalar balance law. Following this post, the entropy condition would imply that the solution is a shock if $u_l>u_r$, and a rarefaction otherwise. Firstly, let us compute the classical solution by applying the method of characteristics. We are left with $\frac{\text d x}{\text dt} = u =\frac{\text d u}{\text dt}$ along the characteristic curves, leading to the sets of curves $$ x = \left\lbrace \begin{aligned} u_l (e^t - 1) + x_0, \quad x_0<0 \\ u_r (e^t - 1) + x_0, \quad x_0> 0\end{aligned} \right. $$ along which $u=u_l e^t$ if $x_0<0$, and $u=u_re^t$ if $x_0>0$. Case $u_l>u_r$. The solution is classical on both sides of the discontinuity located at the abscissa $x_s$. The discontinuity satisfies the Rankine-Hugoniot condition $\dot x_s(t) = \frac{1}{2}(u_l+u_r)e^t$ with the initial location $x_s(0)=0$, i.e., $x_s(t) = \frac{1}{2}(u_l+u_r)(e^t - 1)$. Therefore, the entropy solution reads $$ u(x,t) = \left\lbrace \begin{aligned} &u_le^t &&\text{if}\quad x< x_s(t) \\ &u_re^t &&\text{if}\quad x> x_s(t) \end{aligned}\right. ,\qquad x_s(t) = \frac{u_l+u_r}{2}(e^t - 1) . $$ Case $u_l<u_r$. The solution is classical on both sides of the rarefaction located between the curves $x=u_l (e^t - 1)$ and $x=u_r (e^t - 1)$ along which $u = u_l e^t$ or $u_r e^t$ is growing exponentially. We look for smooth solutions of the PDE of the form $u(x,t) = v(\xi,t)$ where $\xi$ is a function of $x$ and $t$ to be determined. This assumption yields $$ v_t + v_\xi \left(\xi_t + v \xi_x\right) = v \, , $$ and the problem can be reduced by imposing $\xi_t + v \xi_x = 0$ with $v_t = v$. One concludes that $\xi$ is constant along each characteristic curve starting at the origin, while $v$ is exponentially increasing in time. In other words, solving the PDE $v_t = v$ tells us that $v = A(\xi)e^t$ where $A$ is an arbitrary function. Now, we use the method of characteristics to solve the transport equation satisfied by $\xi$ with the boundary condition $A(\xi) = u_l$ at the left side $x = u_l (e^t - 1)$ of the rarefaction wave. Along the characteristic curves, we have \begin{aligned} \bullet\; &\dot t(s) = 1 \text{ and } t(0) = t_0 \text{ yields } t=s+t_0 \\ \bullet\; &\dot \xi(s) = 0 \text{ and } \xi(0) = \xi_0 \text{ yields } \xi = \xi_0 \text{ s.t. } A(\xi_0) = u_l \\ \bullet\; &\dot x(s) = A(\xi) e^t \text{ and } x(0) = u_l (e^{t_0} -1) \text{ yields } x = u_l (e^{s+t_0} -1) \end{aligned} and therefore, we know $A(\xi) = x/(e^t - 1)$. Thus, the entropy solution reads $$ u(x,t) = \left\lbrace \begin{aligned} &u_le^t &&\text{if}\quad x< u_l (e^t - 1) \\ &\frac{x e^t}{e^t - 1} && \text{if}\quad u_l (e^t - 1)\leq x\leq u_r (e^t - 1) \\ &u_re^t &&\text{if}\quad x> u_r (e^t - 1) \end{aligned}\right. $$ For generalizations, see e.g. D. Fusco, N. Manganaro: "A reduction approach for determining generalized simple waves", Z. angew. Math. Phys. 59 (2008) 63-75. doi:10.1007/s00033-006-5128-1
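As a consistency check, the rarefaction fan indeed satisfies the PDE (a SymPy sketch):

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
u = x*sp.exp(t)/(sp.exp(t) - 1)     # the fan from the solution above

residual = sp.diff(u, t) + u*sp.diff(u, x) - u
print(sp.simplify(residual))        # 0, so u_t + u u_x = u holds
```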
Can there be arbitrarily many Sophie Germain primes?
No, the Green-Tao theorem is not enough (not even with Dirichlet's theorem) to prove that there are infinitely many Sophie Germain primes. The Sophie Germain pairs have infinite complexity in the Gowers norm and thus current methods do not yet apply to them. For more information see Linear Equations in Primes. Recent advances (Zhang, Maynard, Polymath) toward the twin prime conjecture makes progress on the Sophie Germain conjecture plausible, but we're not there yet.
transformation of trigonometric graphs
To illustrate how $|B|$ is the period, simply evaluate $f(x+\lvert B \rvert)$, \begin{align} f(x+\lvert B \rvert) &= A \sin\Big(\frac{2\pi}{B}(x+|B|-C)\Big) +D \\ &=A\sin\Big(\frac{2\pi}{B}(x-C) \pm 2\pi \Big) + D \\ &=f(x) \end{align} so that $|B|$ is a period. The $\pm$ enters the equation because $B$ and $|B|$ may have different signs and the period by convention is usually taken to be positive. Strictly, you need to show $|B|$ is the least such number, which you can try to do, possibly using the addition formula for the sine function. That $C$ is the $x$-shift should be obvious from your own definition, since it applies a fixed subtraction directly and only to the $x$ variable.