The geometry of level sets of solutions of elliptic PDE
If the solution u is positive, then by a well-known theorem due to Gidas, Ni and Nirenberg, it has to be radially symmetric. Then the level sets are concentric spheres. When u is not positive, in general nothing can be said about the level sets.
Solving a Linear Constant Coefficient Difference Equation?
For the homogeneous part $$ y_h^n+\alpha y_h^{n-1} = 0\Rightarrow y_h^n = (-\alpha)^n C_0 $$ now proposing for the particular solution $$ y_p^n = (-\alpha)^n C_n $$ after substituting into the complete recurrence $y^n+\alpha y^{n-1}=\beta u_n$ we get $$ C_n-C_{n-1} = \beta\left(-\frac {1}{\alpha}\right)^n u_n\Rightarrow C_n=\beta\sum_{k=-1}^{n-1}\left(-\frac {1}{\alpha}\right)^{k+1}u_{k+1} $$ and finally $$ y_n = y_h^n+y_p^n = (-\alpha)^n C_0 + (-\alpha)^n \left(\beta\sum_{k=-1}^{n-1}\left(-\frac {1}{\alpha}\right)^{k+1} u_{k+1}\right) $$
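As a numerical sanity check (a Python sketch; it assumes the complete recurrence is $y_n+\alpha y_{n-1}=\beta u_n$, and all values are arbitrary illustrations):

```python
import numpy as np

a, b, C0, N = 0.7, 1.3, 2.0, 20
u = np.random.default_rng(1).standard_normal(N + 1)  # input u_0, ..., u_N

# direct iteration of y_n = -a*y_{n-1} + b*u_n, seeded by the homogeneous part
y, prev = np.empty(N + 1), C0 / (-a)   # y_{-1} = (-a)^{-1} C_0
for n in range(N + 1):
    y[n] = -a * prev + b * u[n]
    prev = y[n]

# closed form: y_n = (-a)^n ( C_0 + b * sum_{m=0}^n (-1/a)^m u_m )
m = np.arange(N + 1)
closed = (-a) ** m * (C0 + b * np.cumsum((-1 / a) ** m * u))
print(np.max(np.abs(y - closed)))  # essentially zero (~1e-12)
```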
How much ketchup is on the table? (Ketchup flow rate problem)
If we let $x=0$ be the table top and measure upwards, the bottle position is $x(t)=vt$. Ketchup that leaves the bottle at time $t$ takes $t'$ to fall, where $vt=\frac 12g(t')^2$, so it hits the table at $t+\sqrt{\frac {2vt}g}$. At time $u$, the ketchup that just arrived at the table left the bottle at time $t$, where $u=t+\sqrt{\frac {2vt}g}$. We would like to invert this equation: $$u=t+\sqrt{\frac {2vt}g}\\(u-t)^2=\frac {2vt}g\\t^2-2ut-\frac {2v}gt+u^2=0\\ t=\frac 12\left(2u+\frac {2v}g-\sqrt{\left(2u+\frac {2v}g\right)^2-4u^2}\right)$$ and the amount of ketchup on the table at time $u$ is $Q$ times this. If $Q$ is not a constant, you need to integrate $$\int_0^{t(u)}Q(t)\,dt$$ to get the amount on the table at time $u$.
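A small Python sketch of the result (the values of $g$, $v$, $Q$ are arbitrary illustrations; it also re-checks that $t(u)$ inverts the landing-time relation):

```python
import numpy as np

g, v, Q = 9.8, 0.05, 2.0   # gravity, bottle speed, constant flow rate

def t_of_u(u):
    """Departure time of the ketchup landing at time u."""
    s = 2*u + 2*v/g
    return 0.5 * (s - np.sqrt(s*s - 4*u*u))

for u in (1.0, 2.0, 5.0):
    t = t_of_u(u)
    # consistency: ketchup leaving at time t lands at t + sqrt(2 v t / g) = u
    print(u, t + np.sqrt(2*v*t/g), Q*t)   # middle column reproduces u
```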
Evaluating $\lim\limits_{n\to \infty }\sum\limits_{k=0}^n\:(2n)^{-k}\binom{n}{k}$
First use the binomial theorem: $$\sum_{k=0}^n\binom{n}k\left(\frac1{2n}\right)^k=\left(1+\frac1{2n}\right)^n\;.$$ Now $$\lim_{n\to\infty}\left(1+\frac1{2n}\right)^n=\lim_{n\to\infty}\left(\left(1+\frac1{2n}\right)^{2n}\right)^{1/2}=\left(\lim_{n\to\infty}\left(1+\frac1{2n}\right)^{2n}\right)^{1/2}\;,$$ and you should know what the last limit there is.
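A quick numerical check (Python) that the sum approaches $\sqrt e\approx1.6487$:

```python
from math import comb, e, sqrt

for n in (10, 100, 1000):
    s = sum(comb(n, k) * (2*n)**(-k) for k in range(n + 1))
    print(n, s)
print(sqrt(e))   # 1.6487...
```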
Minkowski Addition of Sets
Use equal instead of equivalent, as this is the correct terminology for sets. Now to the answer: yes, this is a good approach. One direction/inclusion is immediate, namely $4\mathbb{Z}+\mathbb{N}\subset \mathbb{Z}$. For the other, just argue why the following holds: any integer can be written as $$z=4k+l, \quad k\in\mathbb{Z},\ l\in\{0,1,2,3\},$$ from which you can conclude. The keyword here is Euclidean division.
Show that when written in terms of $ t$, where $t = \tan(x/2)$, the expression $2(1 + \cos(x))(5\sin(x) + 12\cos(x) + 13)$ is a perfect square.
$$\begin{align} 2(1+\cos(x))(5\sin(x)+12\cos(x)+13)&=2\left(1+\frac{1-t^2}{1+t^2}\right)\left(\frac{10t}{1+t^2}+\frac{12-12t^2}{1+t^2}+13\right)\\ &=\frac{4}{(1+t^2)^2}(t+5)^2 \end{align}$$
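The algebra can be confirmed with sympy (using the standard tan half-angle substitutions):

```python
import sympy as sp

t = sp.symbols('t')
cosx = (1 - t**2) / (1 + t**2)   # cos(x) with t = tan(x/2)
sinx = 2*t / (1 + t**2)          # sin(x) with t = tan(x/2)
expr = 2*(1 + cosx)*(5*sinx + 12*cosx + 13)
print(sp.factor(sp.cancel(expr)))  # 4*(t + 5)**2/(t**2 + 1)**2 = (2(t+5)/(1+t^2))^2
```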
Given an inhabited type in simply-typed lambda calculus (w/no basics, just variables), is there a combinator of that type that is no longer than it?
As we discussed in the comments, the answer to your question is negative. The type $(A \rightarrow B) \rightarrow (B \rightarrow B \rightarrow C) \rightarrow A \rightarrow C$ has length 7 according to your definition. We use the injection from simply-typed lambda terms to proofs of the corresponding formula in Herbelin's focusing sequent calculus LJT [1]. Exhaustive proof search yields a unique LJT proof of the intuitionistic tautology $(A \rightarrow B) \rightarrow (B \rightarrow B \rightarrow C) \rightarrow A \rightarrow C$, but the corresponding term $\lambda f. \lambda g. \lambda a. g (f a) (f a)$ has length 8. [1] Hugo Herbelin. A Lambda-calculus Structure Isomorphic to Gentzen-style Sequent Calculus Structure. Computer Science Logic, Sep 1994, Kazimierz, Poland. pp.61–75.
Equation includes floor function, where is the mistake?
Except for neglecting to specify (when you first use the symbol $n$) that $n$ is an integer, your method was fine up until the very last step. Indeed, given the initial problem statement, it is true that $$ 4n + 3\{x\} = 1.$$ Now this is possible to solve by a combination of reasoning and trial-and-error: try $n = 1,$ and you may observe that $n \geq 1$ is too high; try $n = -1,$ and you observe that $n \leq -1$ is too low; but $n = 0$ works. Another possible approach is to divide both sides of the equation by $4.$ We then have $$ n + \frac34\{x\} = \frac14.$$ Now for the two sides to be equal, their integer parts must be equal, and their fractional parts also must be equal. Let's look at the fractional parts. On the right side, of course, the fractional part is $\frac14.$ On the left side, since $n$ is an integer, its fractional part is zero. The fractional part of the entire left side of the equation therefore is just the fractional part of $\frac34\{x\}.$ But since $0 \leq \{x\} < 1,$ it follows that $0 \leq \frac34\{x\} < \frac34,$ so the integer part of $\frac34\{x\}$ is $0$ and the fractional part is $\frac34\{x\}$ itself. Therefore $$ \frac34\{x\} = \frac14.$$ Solve for $\{x\}$: $$ \{x\} = \frac13.$$ Now put that value of $\{x\}$ into any of the previous equations involving $n,$ and you can show that $n = 0.$ Hence $$ x = n + \{x\} = 0 + \frac13 = \frac13.$$ Alternatively, after showing that $n + \frac34\{x\} = \frac14,$ you could look at the integer part on both sides of the equation. Since $0 \leq \frac34\{x\} < \frac34$ (for the same reasons as before), the integer part of $\frac34\{x\}$ can only be $0.$ Of course the integer part of $\frac14$ also is $0,$ so setting the integer parts of both sides of the equation equal, we have $$ n + 0 = 0.$$ Therefore $n=0,$ and putting this in any earlier equation involving $n$ and $\{x\},$ you can show that $ \{x\} = \frac13.$
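As a quick machine check, assuming the original equation was $3x+\lfloor x\rfloor=1$ (which is what $4n+3\{x\}=1$ unfolds from when $x=n+\{x\}$):

```python
import math

x = 1/3
print(3*x + math.floor(x))   # 1.0

# coarse scan of [-3, 3] confirms no other solution on that range
hits = [k/3000 for k in range(-9000, 9001)
        if abs(3*(k/3000) + math.floor(k/3000) - 1) < 1e-9]
print(hits)   # only 1/3
```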
An example of an expectation operator that is uniformly bounded
Let $I_{j, k} := [2^{-j}k, 2^{-j}(k+1)) $. Observe we have \begin{align} P_j f(x) = \sum^\infty_{k=-\infty} \left(\frac{1}{|I_{j, k}|}\int_{I_{j, k}}f(t)\ dt\right)\chi_{I_{j, k}}(x) \end{align} then it follows \begin{align} \|P_jf\|^2_2=&\ \int^\infty_{-\infty}\left| \sum^\infty_{k=-\infty} \left(\frac{1}{|I_{j, k}|}\int_{I_{j, k}}f(t)\ dt\right)\chi_{I_{j, k}}(x) \right|^2\ dx\\ =&\ \int^\infty_{-\infty} \sum^\infty_{k=-\infty} \frac{1}{|I_{j, k}|^2}\left(\int_{I_{j, k}} f(t)\ dt\right)^2 \chi_{I_{j, k}}(x)\ dx\\ =&\ \sum^\infty_{k=-\infty} \frac{1}{|I_{j, k}|} \left(\int_{I_{j, k}} f(t)\ dt\right)^2 \leq \sum^\infty_{k=-\infty} \int_{I_{j, k}}|f(t)|^2\ dt =\|f\|_2^2. \end{align} Since the inequality is purely a consequence of the Cauchy-Schwarz inequality, it's not hard to concoct an example where the inequality is strict. Edit: Consider \begin{align} f(x) = 2x\cdot \chi_{I_{j, 0}}(x), \end{align} then we see that \begin{align} P_jf(x) = 2^{-j}\chi_{I_{j, 0}}(x). \end{align} Then it follows \begin{align} \|P_j f\|_2=2^{-3j/2} \ \ \ \text{ and } \ \ \ \|f\|_2 = \sqrt{\frac{4}{3}}2^{-3j/2}. \end{align}
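A numerical rendering of the example (a sketch with $j=2$, so $I_{j,0}=[0,\tfrac14)$):

```python
import numpy as np

j, n = 2, 100_000
h = 2.0**(-j)                              # |I_{j,0}|
dx = h / n
x = (np.arange(n) + 0.5) * dx              # midpoints of a fine grid on I_{j,0}
f = 2 * x                                  # f = 2x there, zero elsewhere

avg = np.sum(f) * dx / h                   # the average defining P_j f
print(avg, 2.0**(-j))                      # both 2^{-j}
print(np.sqrt(avg**2 * h), 2.0**(-1.5*j))  # ||P_j f||_2 = 2^{-3j/2}
print(np.sqrt(np.sum(f**2) * dx), np.sqrt(4/3) * 2.0**(-1.5*j))  # ||f||_2
```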
Doubt in proof of theorem proving analyticity of $\Delta(\tau) $ in H.
Take the inequality after (6) to the power $-\alpha/2$, then you get (6) with $M=K^{-\alpha/2}$.
Determine the upper and lower bounds of the expected value of a random variable
If you increase some of the probabilities of success, the expected number of trials will decrease; if you decrease some of them, it will increase. Thus the case where all probabilities are $\beta$ and the one where they're all $\alpha$ provide bounds on the expected number of trials.
Geometrical principle used in Fourier's paper "Theory of Heat"
$$m\,dx+n\,dy+p\,dz=0$$ is the equation of the tangent plane, thus $(m,n,p)$ is the direction of the surface normal, which is orthogonal to all directions in the tangent plane. As $(δx,δy,δz)$ has the same direction, it has to be a multiple of the normal direction, which can also be formulated as the condition that all the $2\times2$ minors of $$\pmatrix{m&n&p\\δx&δy&δz}$$ have to be zero, which gives the cited equations.
For which values of $a$ the function $u(x,y)=x^2+x^4+axy+y^2$ is convex? concave?
For convexity, check that the Hessian $d^2u(x,y)$ is positive semidefinite for every $(x,y)$; for concavity, check that it is negative semidefinite.
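A sympy sketch making this concrete for the given $u$ (the conclusion about $a$ falls out of the printed determinant):

```python
import sympy as sp

x, y, a = sp.symbols('x y a', real=True)
u = x**2 + x**4 + a*x*y + y**2
H = sp.hessian(u, (x, y))
print(H)        # Matrix([[12*x**2 + 2, a], [a, 2]])
print(H.det())  # 24*x**2 - a**2 + 4, minimal at x = 0
# diagonal entries are always positive, so u is convex iff 4 - a**2 >= 0,
# i.e. |a| <= 2; the Hessian is never negative semidefinite, so u is never concave
```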
some questions about combinations
There are too many questions. I will deal with the card questions, since they form a connected collection. (A spot-check of these counts appears after this list.)

(a) The suit (one of $\spadesuit$, $\heartsuit$, $\diamondsuit$, $\clubsuit$) can be chosen in $4$ ways. Instead, call this $\binom{4}{1}$. For every choice of suit, the actual $5$ cards can be chosen in $\binom{13}{5}$ ways, for a total of $\binom{4}{1}\binom{13}{5}$. I assume you know how to compute from this point on. Please note that our count included the straight flushes, and the royal flushes, which in poker are much better hands than a plain flush. If we want to just count the plain flushes, we need to subtract the number of straight flushes. But we were not asked to count the plain flushes.

(b) The aces can be chosen in only one way, though one could call the number $\binom{4}{4}$. The remaining card can be chosen in $48$ ways. So the total is $48$, but we could call that $\binom{4}{4}\binom{48}{1}$.

(c) We need to choose the kind. There are $\binom{13}{1}$ ways of doing this. For each such choice, there are $\binom{48}{1}$ ways to choose the remaining card, for a total of $\binom{13}{1}\binom{48}{1}$.

(d) There are $\binom{4}{3}$ ways to choose the three Aces. For each such choice, there are $\binom{4}{2}$ ways to choose the Jacks, for a total of $\binom{4}{3}\binom{4}{2}$.

(e)

(f) There are $\binom{13}{1}$ ways to choose the kind we will have three of. For each such choice, there are $\binom{4}{3}$ ways to choose the actual three cards. For every way of doing these two things, there are $\binom{12}{1}$ ways to choose the kind we will have two of, and $\binom{4}{2}$ ways to choose the actual cards, for a total of $\binom{13}{1}\binom{4}{3}\binom{12}{1}\binom{4}{2}$.

(g) Like in (f), there are $\binom{13}{1}\binom{4}{3}$ ways to choose the kind we will have three of, and the actual cards. There remain two cards to choose. It is tempting to think that there are $\binom{48}{2}$ ways to choose these. (Of course we have to avoid the last card of the kind we have three of.) So it is tempting to think that the total is $\binom{13}{1}\binom{4}{3}\binom{48}{2}$. However, this would count the full house hands, which are a much better hand. So the actual total is $\binom{13}{1}\binom{4}{3}\binom{48}{2}-\binom{13}{1}\binom{4}{3}\binom{12}{1}\binom{4}{2}$. We can also express this as $\binom{13}{1}\binom{4}{3}\left(\binom{48}{2}-\binom{12}{1}\binom{4}{2}\right)$.

(h) This one is quite tricky. It is all too easy to give a plausible argument that leads to an answer that is wrong by a factor of $2$. There are $\binom{13}{2}$ ways to choose the two kinds that we will have two cards from. For each such choice, there are $\binom{4}{2}$ ways to choose the actual two cards of the higher ranking kind that we chose, and then $\binom{4}{2}$ ways to choose the two cards of the lower ranking kind. Once this has been done, there are $\binom{44}{1}$ ways to choose the "odd" card, for a total of $\binom{13}{2}\binom{4}{2}\binom{4}{2}\binom{44}{1}$.
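The counts above are quick to spot-check with Python's math.comb:

```python
from math import comb

flush = comb(4, 1) * comb(13, 5)                              # (a)
four_aces = comb(4, 4) * comb(48, 1)                          # (b)
four_of_a_kind = comb(13, 1) * comb(48, 1)                    # (c)
three_aces_two_jacks = comb(4, 3) * comb(4, 2)                # (d)
full_house = comb(13, 1)*comb(4, 3)*comb(12, 1)*comb(4, 2)    # (f)
three_of_a_kind = comb(13, 1)*comb(4, 3)*(comb(48, 2) - comb(12, 1)*comb(4, 2))  # (g)
two_pair = comb(13, 2)*comb(4, 2)**2*comb(44, 1)              # (h)
print(flush, four_aces, four_of_a_kind, three_aces_two_jacks,
      full_house, three_of_a_kind, two_pair)
# 5148 48 624 24 3744 54912 123552
```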
Order of a factor group and whether it is cyclic
The element $(1,1) + H$ has order $2$. The element $(1,2) + H$ has order $\infty$. Therefore, the group is not cyclic.
Solve for $x: \sin 2x = - \frac 12$
There is a general formula: $$\sin x = \sin \theta\implies x =\begin {cases} 2 k \pi + \theta,& k \in \mathbb Z \\\\ \text{or}\\\\ 2k\pi +\pi - \theta,&k \in \mathbb Z \end{cases}$$ It is sufficient to find a $\theta_0$ such that $\sin\theta_0=-\dfrac 12$ and then you can solve the inequalities $$0\le 2k\pi +\theta_0\le2 \pi $$ and $$0\le 2k\pi +\pi -\theta_0\le 2 \pi$$ to find all suitable $ k \in \mathbb Z$. Notice that in your case you have $\sin(2x)$.
definition of primes for higher hyperoperations
Another quick comment on your Java program: concerning the distribution of "hyper-primes", we find that non-hyper-primes are so rare that the series of their inverses converges: $$\sum_{2\leq a^b}\frac{1}{a^b} = \sum_{a=2}^\infty \sum_{b=2}^\infty \frac{1}{a^b}=\sum_{a=2}^\infty \frac{1}{a(a-1)} = 1.$$ This gives you a measure for the density of hyper-primes in the natural numbers. Note that for "normal" prime numbers, both series diverge, meaning that there are quite a lot of both of them: $$\sum_{p \; \text{prime}}\frac{1}{p} = \infty, \qquad \sum_{p \; \text{not prime}} \frac{1}{p} = \infty$$
What is 6.5 in binary?
$13$ in binary is $8 + 4 + 1 = 1101_2$. $6.5$ is half of thirteen, so move the binary point one place to the left: $110.1_2$.
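The same splitting into integer and fractional parts can be scripted; a small Python sketch (the helper name is mine, and it assumes the expansion terminates within the requested number of bits):

```python
def to_binary(x, frac_bits=8):
    n, f = divmod(x, 1)            # integer and fractional parts
    out = bin(int(n))[2:] + '.'
    for _ in range(frac_bits):
        f *= 2                      # shift one binary place
        out += str(int(f))
        f -= int(f)
        if f == 0:
            break
    return out

print(to_binary(6.5))   # 110.1
```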
Solving a set of linear equations for variables with non-constant values
Linear least squares is a method of solving the equations $\mathbf{Ax}=\mathbf{b}$ wherein the vector $\mathbf{b}$ may or may not be in $\mathrm{R}(\mathbf{A})$. The solution that minimizes the squared error $|\mathbf{Ax}- \mathbf {b}|^2$ when $\mathbf{A}$ is full rank is given by the least squares solution: $$\hat{\mathbf{x}}=(\mathbf{A}^\mathrm{T}\mathbf{A})^{-1}\mathbf{A}^\mathrm{T}\mathbf{b},$$ which can be computed in MATLAB quickly as x=A\b. If $\mathbf{b}$ were in the range space, then the solution obtained is indeed the exact solution; this may not happen too often because of the errors in measured quantities in the real world.
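The same computation in numpy (a sketch with illustrative data; np.linalg.lstsq plays the role of MATLAB's backslash):

```python
import numpy as np

A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])   # tall example matrix
b = np.array([1.0, 2.0, 2.0])                        # b need not lie in R(A)

x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
x_normal = np.linalg.solve(A.T @ A, A.T @ b)         # (A^T A)^{-1} A^T b
print(x_hat, x_normal)                               # same answer
```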
Condition on matrix to ensure nontrivial Jordan canonical form
You've sort of answered your own question. Suppose $A$ has an eigenvalue $\lambda$ with algebraic multiplicity $m_a$. If $$ \dim\operatorname{null}(A-\lambda I)\neq m_a $$ then $A$ is not diagonalizable. So, to check if $A$ is not diagonalizable, we need only find such an eigenvalue.
find $P\{P\{0\}\}$. $P$ represents the power set.
Almost. You forgot one subset of $\{0,\{0\}\}.$ (Hint: It isn't proper.)
Automorphism that preserves Kahler class
There is only one Kahler class on the projective space (up to positive constant multiples): the one of the Fubini-Study metric $\omega$. If $F$ is an automorphism of the projective space, then $\omega' := F^*\omega$ is again a Kahler metric. The hypothesis that $F$ preserves the volume form implies that the Ricci-form of $\omega'$ is equal to the Ricci-form of the Fubini-Study metric. By uniqueness of the Ricci-positive Kahler metric on the projective space, $\omega' = \omega$. Scaling by positive reals then treats the case when we consider a multiple of the Fubini-Study class.
How do you calculate the odds that the odds will be right?
The number of ways, in a deck of $n$ cards, to win this game (out of the $n!$ total ways for the deck to have been shuffled) is given in OEIS as sequence A144188. Playing with only 13 cards your chances of winning are about 5.246%. Playing with half a deck, yet using suit order to break ties, your chances are down to 0.095%. What I find amazing is that this is much less than the square of the chances for a 13-card deck (which we would expect because it involves 25 guesses and the 13-card deck involves only 12) and is even 1.6 times less than the square of the winning chances with a 14 card deck and is less than the chances of winning two consecutive games, with a 15 card deck and a 14 card deck. For large $n$ the chances of winning drop off by about a factor of 0.737 for each additional guess needed.
Harmonic Series Paradox
If your concern is the apparent paradox about an infinite length of paint and a finite area, then you might want to consider what Wikipedia says about Gabriel's horn, with its infinite surface area and finite volume, and then take it down a dimension: Since the Horn has finite volume but infinite surface area, it seems that it could be filled with a finite quantity of paint, and yet that paint would not be sufficient to coat its inner surface – an apparent paradox. In fact, in a theoretical mathematical sense, a finite amount of paint can coat an infinite area, provided the thickness of the coat becomes vanishingly small "quickly enough" to compensate for the ever-expanding area, which in this case is forced to happen to an inner-surface coat as the horn narrows. However, to coat the outer surface of the horn with a constant thickness of paint, no matter how thin, would require an infinite amount of paint. Of course, in reality, paint is not infinitely divisible, and at some point the horn would become too narrow for even one molecule to pass.
There is only one distinct pair of numbers that multiply to a given number and sum to a given number.
You start with $$ x + y = c_1 \\ x \, y = c_2 $$ and transform to $$ y = c_1 - x \\ x (c_1 - x) = c_2 $$ where the second equation can be transformed to $$ 0 = x^2 - c_1 x + c_2 = (x - c_1/2)^2 + c_2 - c_1^2/4 \iff \\ x = \frac{c_1 \pm \sqrt{c_1^2 - 4 c_2}}{2} $$ so depending on $\Delta = c_1^2 - 4 c_2$ we have zero, one or two solutions $(x, y)$.
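The formula is easy to wrap in code; a small Python sketch (using cmath so a negative discriminant gives the complex pair):

```python
import cmath

def solve_pair(c1, c2):
    """Return the pair (x, y) with x + y = c1 and x*y = c2."""
    d = cmath.sqrt(c1**2 - 4*c2)   # discriminant; complex if no real pair exists
    x = (c1 + d) / 2
    return x, c1 - x               # the pair {x, y} is unique up to order

print(solve_pair(7, 12))   # (4, 3): 4 + 3 = 7, 4 * 3 = 12
```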
Irreducibility of $f(x)=x^{q+1}+x+1$ over $\mathbb{F}_q$
I guess that the question is about the relation between the polynomial $f(x)$ and the minimal polynomials of its roots. You see, there is no reason for $f(x)$ to be irreducible. In fact, when $q>2$ the calculations show that it cannot be!! As a simple example consider the case $q=3$. We are in characteristic three so $f(1)=1+1+1=0$, and therefore $f(x)$ factors as $$ f(x)=x^4+x+1=(x-1)(x^3+x^2+x-1). $$ Here the cubic factor is irreducible, because otherwise it would have a linear factor and hence also a zero in $\Bbb{F}_3$. As a way of confirming the OP's findings let me include the following calculation. If $f(\alpha)=0$, then $$ \alpha^q=\frac{f(\alpha)-\alpha-1}\alpha=-\frac{\alpha+1}\alpha=-1-\frac1\alpha. $$ Consequently $1/\alpha^q=-\alpha/(\alpha+1)$ and $$ \alpha^{q^2}=-\frac{\alpha^q+1}{\alpha^q}=-1-\frac1{\alpha^q}=-\frac1{\alpha+1}. $$ Repeating the dose we then get $$ \alpha^{q^3}=-1-\frac1{\alpha^{q^2}}=-1+(\alpha+1)=\alpha. $$ The conclusion is that $\alpha\in\Bbb{F}_{q^3}$. This means that the minimal polynomial of $\alpha$ over $\Bbb{F}_q$ is either cubic or linear. The polynomial $f(x)$ is a product of linear and cubic factors from $\Bbb{F}_q[x]$. In particular it is reducible whenever $q>2$. In other words, we are working with the familiar cyclic group of three fractional linear transformations. The other question seems to be whether the zeros of $f(x)$ are primitive elements of the cubic extension. It may happen that no zero of $f(x)$ would be a primitive element of $\Bbb{F}_{q^3}$. As an example let me proffer the case of $q=4$. In this case we have the factorization $$ f(x)=x^5+x+1=(x^2+x+1)(x^3+x^2+1) $$ over $\Bbb{F}_2$. The quadratic factor $x^2+x+1$ splits into a product of irreducibles over $\Bbb{F}_4$, but zeros of the cubic factor $x^3+x^2+1$ are elements of $\Bbb{F}_8$, thus all of multiplicative order seven. Adjoining a seventh root of unity to $\Bbb{F}_4$ gives $\Bbb{F}_{4^3}$ as promised, but none of the roots of $f(x)$ have order $63$. In general we can do the following. If $\alpha$ is a zero of $f(x)$ then $$ \begin{aligned} &\alpha^{q+1}=-1-\alpha\\ \implies &\alpha^{q(q+1)}=-1-\alpha^q\\ \implies &\alpha^{q^2+q+1}=-\alpha-\alpha^{q+1}\\ \implies &\alpha^{q^2+q+1}=1. \end{aligned} $$ So we can conclude that the order of $\alpha$ is a factor of $q^2+q+1$. This is a proper factor of $q^3-1=(q-1)(q^2+q+1)$ unless $q=2$. If $q>2$ then none of the zeros of $f(x)$ are primitive elements of $\Bbb{F}_{q^3}$.
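These small-field factorizations are easy to reproduce with sympy (a quick check over the prime fields $\Bbb F_3$ and $\Bbb F_2$; sympy prints mod-3 coefficients in symmetric form):

```python
from sympy import symbols, factor

x = symbols('x')
print(factor(x**4 + x + 1, modulus=3))  # (x - 1)*(x**3 + x**2 + x - 1)
print(factor(x**5 + x + 1, modulus=2))  # (x**2 + x + 1)*(x**3 + x**2 + 1)
```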
sum (difference) of polynomials to the power n
By the binomial theorem $$ (f_1+f_2)^n -(f_1-f_2)^n = \sum_{k=0}^{n}\binom{n}{k}f_1^{n-k}f_2^k -\sum_{k=0}^{n}\binom{n}{k}f_1^{n-k}(-f_2)^k. $$ Notice that since we have $(-f_2)^k$ the second sum is alternating, so every other term cancels, leaving $$ 2\sum_{j=0}^{\lfloor (n-1)/2\rfloor}\binom{n}{2j+1}f_1^{n-(2j+1)}f_2^{2j+1}=2\left( \binom{n}{1}f_1^{n-1}f_2 + \binom{n}{3}f_1^{n-3}f_2^3 + \dots \right) $$ where $\lfloor\ \rfloor$ is the floor function.
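A sympy spot-check for a few values of $n$:

```python
from sympy import symbols, expand, binomial

f1, f2 = symbols('f1 f2')
for n in (3, 4, 5):
    lhs = expand((f1 + f2)**n - (f1 - f2)**n)
    rhs = expand(2*sum(binomial(n, 2*j + 1) * f1**(n - 2*j - 1) * f2**(2*j + 1)
                       for j in range((n + 1)//2)))   # j up to floor((n-1)/2)
    print(n, lhs == rhs)   # True, True, True
```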
Big-O notation always holds for these two functions?
That's not right. Let $f(x)=\cos x$ and $g(x)=\sin x$. If $f(x)=O(g(x))$, then there would be some constant $C>0$ so that for $x$ sufficiently large, we would have $$\tag{1} |f(x)|\le C|g(x)|. $$ But for any positive integer $n$, we have $f(2n\pi)=1$ and $g(2n\pi)=0$. Since we can make $2n\pi$ arbitrarily large, this shows that $(1)$ cannot hold. I'll leave it to you to show that $g(x)\ne O(f(x))$.
Calculating matrix derivatives with MATLAB or MATHEMATICA?
Write the function in terms of the Frobenius product (:) and find its differential $$\eqalign{ L &= (ff^+-I)U:(ff^+-I)U \cr\cr dL &= 2\,(ff^+-I)U:d(ff^+)U \cr &= 2\,(ff^+-I)UU^T:(df\,f^+ + f\,df^+) \cr &\equiv W:(df\,f^+ + f\,df^+) \cr &= Wf^{+T}:df \,+\, f^TW:df^+ \cr }$$ Now the trick is to know that $f^T(ff^+-I)=0$ and therefore $f^TW=0$. This reduces the differential to $$\eqalign{ dL &= Wf^{+T}:df \cr }$$ Since $dL = (\frac{\partial L}{\partial f}):df$ the derivative must be $$\eqalign{ \frac{\partial L}{\partial f} &= Wf^{+T} \cr &= 2\,(ff^+-I)UU^T f^{+T} }$$ To get the derivative with respect to $C$ I will make a few assumptions, since you didn't tell us anything about the function $f(C)$. Since the shape of $C$ is not square, I assume that you are applying a scalar function element-wise to the matrix components. And that you know the derivative of the function in the scalar case, i.e. $f'(x)= \frac{df(x)}{dx}\,\,\,$ When applied element-wise to a matrix argument ($C$), the differential of a function can be expressed using the Hadamard product ($\circ$) as $$\eqalign{ df &= f' \circ dC \cr }$$ The differential of $L$ can be written as $$\eqalign{ dL &= \Big(\frac{\partial L}{\partial f}\Big):df \cr &= \Big(\frac{\partial L}{\partial f}\Big):(f' \circ dC) \cr &= \Big(\frac{\partial L}{\partial f}\circ f'\Big): dC \cr }$$ That means the derivative of $L$ with respect to $C$ is $$\eqalign{ \frac{\partial L}{\partial C} &= \Big(\frac{\partial L}{\partial f}\Big)\circ f' \cr }$$ If you are uncomfortable with the Frobenius product, you can replace it with the trace function, since $\,\,A:B = {\rm tr}(A^TB)$.
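The question asks about MATLAB/Mathematica; here is the same check done numerically in numpy instead (a sketch assuming $f$ is a tall matrix of full column rank, so $f^+$ is well behaved; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
m, k, p = 5, 3, 4
f = rng.standard_normal((m, k))
U = rng.standard_normal((m, p))
I = np.eye(m)

def L(f):
    R = (f @ np.linalg.pinv(f) - I) @ U
    return np.sum(R * R)                 # Frobenius norm squared

fp = np.linalg.pinv(f)
grad = 2 * (f @ fp - I) @ U @ U.T @ fp.T  # the closed form derived above

# central finite differences, entry by entry
num, eps = np.zeros_like(f), 1e-6
for i in range(m):
    for j in range(k):
        E = np.zeros_like(f); E[i, j] = eps
        num[i, j] = (L(f + E) - L(f - E)) / (2 * eps)

print(np.max(np.abs(grad - num)))   # small (~1e-9), confirming the formula
```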
Time continuity in $H^1_0$ for weak solution to $u_t - \Delta u = u \log |u|$
By the Sobolev inequality, $$ \|u\|_p \le C_1 \|u\|_{H_0^1} ,$$ for any $p>2$ satisfying $\frac1p \ge \frac12 - \frac1n$. Also $$ \| u \log |u| \|_2 \le C_2 \| u\|_p .$$ Let $\phi_k$ be the eigenfunctions of the Laplacian on $\Omega$ with Dirichlet boundary conditions. Then every function in $L^2$ can be written as $$ u = \sum_{k=1}^\infty (u, \phi_k) \phi_k .$$ Let $$ S_n u = u_n = \sum_{k=1}^n (u, \phi_k) \phi_k .$$ Thus $\Delta S_n = S_n \Delta$, and $S_n^2 = S_n$. Now put $-\Delta u_n$ into the weak PDE to get $$ \frac12 \frac{\partial}{\partial t} {\|\nabla u_n\|}_2^2 + {\|\Delta u_n\|}_2^2 \le {\|\Delta u_n \|}_2 {\| u \|}_{H^1_0} \le \frac12 {\|\Delta u_n \|}_2^2 + 2{\| u \|}_{H^1_0}^2 ,$$ that is $$ \frac12 {\|\nabla u_n(t)\|}_2^2 - \frac12 {\|\nabla u_n(0)\|}_2^2 + \frac12 \int_0^t {\|\Delta u_n(s) \|}_2^2 \, ds \le 2 \int_0^t {\| u(s) \|}_{H^1_0}^2 \, ds .$$ Let $n \to \infty$. Then $$ {\|u\|}_{L^\infty H^1_0}^2 + {\| u\|}_{L^2 H^2_0} ^2 \le {\| u(0)\|}_{H^1_0}^2 + 4 {\| u \|}_{L^2 H^1_0}^2 .$$ So $u \in L^2 H^2_0$. Now put $v = \Delta u_n(t_1) - \Delta u_n(t_0)$ into the weak PDE, and integrate with respect to $t$ from $t_0$ to $t_1$. \begin{align} \|u_n(t_1) - u_n(t_0)\|_{H^1_0}^2 &= \int_{t_0}^{t_1} ( v,\partial_t u(t))_{L^2} \, dt \\&= - \int_{t_0}^{t_1} ( \nabla v, \nabla u(t) )_{L^2} dt + \int_{t_0}^{t_1} ( v, u(t)\log|u(t)| )_{L^2} dt \\&\le \| v \|_{H_0^2} \int_{t_0}^{t_1} \|u(t)\|_{H_0^2} \, dt + C_1 C_2 \|v\|_{H_0^2} \int_{t_0}^{t_1} \|u(t)\|_{H_0^1} \, dt . \end{align} (I got a bit lazy at the end, so you might want to double check it.) And $$ \int_{t_0}^{t_1} \|u(t)\|_{H_0^1} \, dt \to 0 $$ as $t_1 \to t_0$.
A question on complex conjugate
Hint: a) In your condition, replace $z$ by $\overline{z}$ and take the conjugate to see a necessary condition on $m$. b) Use $P(z)=cz$ to see that the condition found is sufficient. (And using the remark by Shahar Even-Dar-Mandel, you can simply take $P(z)=c$ for a nonzero constant $c$.)
Estimate on expression involving elements in the unit disc in $\Bbb C$
Let $x = -\Re(\lambda), y = \Im(\lambda)$ and $c = \frac{2\pi}{n+2}$. The above problem is then equivalent to $$\begin{equation*} \begin{aligned} & {\text{minimize}} & & -x -cy\\ & \text{subject to} & & x^2 + y^2 < 1 \\ & & & x>0 \\ & & & y>0 \end{aligned} \end{equation*}$$ This is a pretty tame convex program, and has a standard solution via the KKT conditions. In particular, the infimum is $-\sqrt{1+c^2}$, and the point $\frac{1}{\sqrt{1+c^2}} (-1,c)$ is the limit point of the region where this value is attained.
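A quick numeric confirmation of the infimum (Python, with an arbitrary illustrative $n$; the constraint closure is sampled along the quarter circle):

```python
import numpy as np

n = 6
c = 2*np.pi/(n + 2)
theta = np.linspace(1e-6, np.pi/2 - 1e-6, 100_000)
vals = -np.cos(theta) - c*np.sin(theta)   # -x - c*y on x^2 + y^2 = 1, x, y > 0
print(vals.min(), -np.sqrt(1 + c**2))     # agree to many decimals
```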
A problem with rounding
That really depends on the semantics of these values. For example, if you compute the prices of individual items in a shopping basket, and also compute the sum, you'd want to compute the sum from the rounded values. Otherwise, the final amount on the invoice might not equal the sum of the individual amounts, which will get you into trouble with your users and in most countries also with the law... In other cases, where the intermediate values have purely informational character, you'd probably want to continue the computation with the raw values, and only round for display purposes.
Determining transversality
The intersection of the graph of $f$ (I'll denote this by $G(f)$) and $\Delta$ is the set of all points in the diagonal $(x,y,x,y)$ such that $$x=x^2+x-2y^2+1,\qquad y=-x^2+y^2+3y-2.$$ Solving this, you obtain $$G(f)\cap\Delta=\{(1,1,1,1),(-1,1,-1,1)\}.$$ From here it is just linear algebra: We want to show that for each $x\in G(f)\cap\Delta$, the map $T_xG(f)\oplus T_x\Delta\to T_x\mathbb R^4$, $(u,v)\mapsto u+v$, has full rank. The tangent space of $G(f)$ at $(x,f(x))$ is the set of all vectors $(u,df_xu)$, where $u\in T_x\mathbb R^2$, and the tangent space of $\Delta$ at $(x,x)$ is the set of all vectors $(v,v)$, where $v\in T_x\mathbb R^2$. Thus, we need to show that the matrices $$\begin{pmatrix}I_2& I_2\\df_{(1,1)}&I_2 \end{pmatrix}$$ and $$\begin{pmatrix}I_2& I_2\\df_{(-1,1)}&I_2 \end{pmatrix}$$ are full rank, where $I_2$ is the $2\times 2$ identity matrix.
Continuity on the boundary of analytic map does not allow the assumption of specific values in the interior
Yes, $f$ can assume a real value on $G$. Suppose that $G=\left\{re^{i\theta}\,\middle|\,r\in(0,\infty)\wedge\theta\in\left(0,\frac{2\pi}3\right)\right\}$ and that $f(z)=z^2$. Then $\delta G\cap\mathbb R=(0,\infty)$, which is an open subset of $\mathbb R$, and $f$ is injective. But $i\in G$ and $f(i)=-1\in\mathbb R$.
Prove if $f$ and $g$ are coprime then $C(fg) \sim C(f) \oplus C(g)$
If you have seen the $k[X]$-module point of view: the result you are looking for is just saying that if $f,g$ are coprime, then we have an isomorphism of $k[X]$-modules $k[X]/(fg)\simeq k[X]/(f)\times k[X]/(g)$. If not, let $u$ be the endomorphism of your $k$-vector space $E$, which is cyclic with characteristic (or minimal, it's the same here) polynomial $fg$. Since $f,g$ are coprime and $\chi_u=fg$, we have $E=\ker g(u)\oplus \ker(f(u))$. Let $v$ be a vector which is $u$-cyclic (meaning $(v,u(v),\ldots, u^{n-1}(v))$ is a $k$-basis of $E$). Then using a Bézout relation $Uf+Vg=1$, you should be able to prove that $\ker(g(u))=Im((Uf)(u)), \ker(f(u))=Im((Vg)(u))$, and that $w=(Vg)(u)(v)$ is a cyclic vector for the restriction of $u$ to $\ker(f(u))$, and $w'=(Uf)(u)(v)$ is a cyclic vector for the restriction of $u$ to $\ker(g(u))$ (or something like that, I didn't do the computations...but it is the translation of the $k[X]$-module isomorphism above in terms of endomorphisms, so it should work that way). Of course, you should also convince yourself along the way that the characteristic polynomials over each subspace are $f$ and $g$ respectively.
Calculus 7th Ed (Stewart) - Chapter 4 solution 2 page 332
If $u=\sqrt{2x+1}$, then the derivative is $\frac{du}{dx}=\frac{1}{2\sqrt{2x+1}} \cdot 2$, so the book is right.
In how many ways can one partition (ordered partition) a natural number $n$ so that none of the parts is greater than $k$?
First, we can work with ordered partitions in $k$ parts with size limit $w$. This is explained in Stars and bars with restriction of size between bars via generating functions: you have $k$ parts, each of which may have $1, \dots, w$, which is represented by the generating function $x+x^2+\dots+x^w = \frac{x(1-x^w)}{1-x}$. With $k$ piles, you find that the number of ways to generate $n$ is the $n$th coefficient of $f(x) = \frac{x^k(1-x^w)^k}{(1-x)^k}$. This is \begin{align*} [x^n]f(x) &= [x^{n-k}]\frac{(1-x^w)^k}{(1-x)^k} = [x^{n-k}]\sum_r(-1)^rx^{wr}\binom{k}{r}\sum_s\binom{s+k-1}{s}x^s \\ &= \sum_{wr+s=n-k} (-1)^r\binom{k}{r}\binom{s+k-1}{k-1} = \sum_r (-1)^r\binom{k}{r}\binom{n-wr-1}{k-1}. \end{align*} Now, just sum over $k$ to allow for any number of parts, so your desired answer is $$ \sum_k\sum_r (-1)^r\binom{k}{r}\binom{n-wr-1}{k-1}. $$ Hope that hideous sum helps! I'm not sure if there's a simpler form, I'd love for someone else to verify/correct me.
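Here is a brute-force verification of that hideous sum for small cases (Python; $w$ is the size limit, called $k$ in the title):

```python
from math import comb
from itertools import product

def brute(n, w):
    """Count compositions of n with all parts in 1..w by enumeration."""
    return sum(1 for k in range(1, n + 1)
                 for parts in product(range(1, w + 1), repeat=k)
                 if sum(parts) == n)

def formula(n, w):
    return sum((-1)**r * comb(k, r) * comb(n - w*r - 1, k - 1)
               for k in range(1, n + 1)
               for r in range(k + 1)
               if n - w*r - 1 >= 0)   # terms with s < 0 are absent

for n, w in [(6, 2), (7, 3), (8, 4)]:
    print(n, w, brute(n, w), formula(n, w))   # the two counts agree
```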
Does $\tan (x)$ equal $\frac{-1}{x-\frac{\pi}{2}}+\frac{-1}{x+\frac{\pi}{2}}+\frac{-1}{x-\frac{3\pi}{2}}+\frac{-1}{x+\frac{3\pi}{2}}+...$?
Recall that the infinite product representation of the cosine function is given by $$\cos z=\prod_{n=1}^{\infty}\left(1-\frac{z^2}{\pi^2(n-1/2)^2}\right) \tag 1$$ Now, just take the logarithmic derivative of both sides of $(1)$ and multiply by $-1$ to expose that $$\bbox[5px,border:2px solid #C0A000]{\tan z=\sum_{n=1}^{\infty}\frac{-2z}{z^2-(n-1/2)^2\pi^2}} \tag 2$$ NOTE: We can write the sum in $(2)$ as $$\tan z=\sum_{n=1}^{\infty}\left(\frac{-1}{z-(n-1/2)\pi}+\frac{-1}{z+(n-1/2)\pi}\right)$$
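A quick numerical check of $(2)$ in Python (the tail of the series is $O(1/N)$, so partial sums agree with $\tan$ to several digits):

```python
import math

def tan_series(z, terms=100_000):
    return sum(-2*z / (z*z - (n - 0.5)**2 * math.pi**2)
               for n in range(1, terms + 1))

for z in (0.3, 1.0, 1.3):
    print(tan_series(z), math.tan(z))   # agree to ~5 decimal places
```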
How should I understand $R[x]/(f)$ for a ring $R$?
Let $I = (f)$. Then "let $R[\alpha]$ denote the ring $R[x]/I$" means that $\alpha := x+I\,$ and $\,R[\alpha]\,$ denotes the subring of $R[x]/I$ generated by $R$ and $\alpha$, i.e. the smallest subring containing both. Clearly this is the whole ring $R[x]/I$ since $\,g(x)+I = g(x+I) = g(\alpha)\in R[\alpha].\ $ Furthermore, notice that $\,0 = g(\alpha) = g(x+I) = g(x)+I$ $\iff$ $g\in I = (f)$ $\iff$ $\,f\mid g\,$ in $R[x].\,$ Thus $\,\alpha\,$ serves as a "generic" root of $\,f\,$ over $R$ since it satisfies $f$ but no smaller degree polynomials. We can view the ring $R[x]/(f)$ as the most general (universal) way of "adjoining" a root of $f$ to $R$. Here "adjoining" has a technical meaning, which I elaborate on below (from a prior answer). More generally, if $\rm\,R \subset S\,$ are rings and $\rm\,s\in S\,$ then $\rm\,R[s]\,$ denotes the ring-adjunction of $\rm\,s\,$ to $\rm\,R\,,\,$ i.e. the smallest subring of $\rm\,S\,$ containing both $\rm\,R\,$ and $\rm\,s\,.\,$ Equivalently $\,\rm R[s]$ is the image of $\rm\,R[x]\,$ under the evaluation map $\rm\,x\mapsto s.\,$ It is the set of all elements that can be written as polynomials in $\rm\,s\,$ with coefficients in $\rm\,R.\,$ The notation for the polynomial ring $\rm\,R[x]\,$ is the special case where $\rm\,x\,$ is transcendental over $\rm\,R\ $ (an "indeterminate" in old-fashioned language),$\ $ i.e. $\rm\, x\,$ isn't a root of any polynomial with coefficients in $\rm\,R\,$. One may view $\rm\,R[x]\,$ as the adjunction of a universal (or generic) element $\rm\,x\,$, in the sense that any other adjunction $\rm\,R[s]\,$ is a ring-image of $\rm\,R[x]\,$ under the evaluation homomorphism $\rm\, x\to s\,.\ $ For example, if $\rm\,R \subset S\,$ are fields then $\rm\,R[s]\cong R[x]/(f(x))\,$ where $\rm\,f(x)\,$ is the minimal polynomial of $\rm\,s\,$ over $\rm\,R\,.\,$ Essentially this serves to faithfully ring-theoretically model $\rm\,s\,$ as a "generic" root $\rm\,x\,$ of the minimal polynomial $\rm\,f(x)\,$ for $\rm\,s\,.\,$ Polynomial rings may be characterized by the existence and uniqueness of such evaluation maps ("universal mapping property"), e.g. see any textbook on Universal Algebra, e.g. Bergman.
Determine concentration
Solve the following system: $x+y = 260$ $0.1x+0.15y = (0.14)(260)$
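In matrix form this is a two-line numpy computation (assuming, as the setup suggests, that $x$ and $y$ are the amounts of the $10\%$ and $15\%$ solutions mixing to $260$ units at $14\%$):

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.10, 0.15]])
b = np.array([260.0, 0.14 * 260])
print(np.linalg.solve(A, b))   # [ 52. 208.]
```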
Question regarding Poisson Process
Note that none of the information about the expected amounts of time between losses appears in the solution. This is because the information that the $9$th loss is observed at time $31.62$ years fixes the number of losses up to that time to $8$. The answer you provide is slightly wrong, since it regards the times of $9$ losses as unknown whereas the time of the $9$th loss is known and only the times of $8$ losses are unknown. The correct answer is $$ \Pr[S_3\le8]=\sum_{i=3}^8\binom8i\left(\frac8{31.62}\right)^i\left(1-\frac8{31.62}\right)^{8-i}\;. $$ Given that exactly $8$ losses occurred in $31.62$ years, we can regard their times of occurrence as independently uniformly distributed over the interval. Thus each of them has an independent probability of $8/31.62$ to have occurred before $t=8$ years, so the number of such occurrences follows a binomial distribution, and the answer adds up the cases in which the number of losses before $t=8$ years was between $3$ and $8$.
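Numerically (Python):

```python
from math import comb

p = 8 / 31.62   # probability a given loss falls before t = 8 years
prob = sum(comb(8, i) * p**i * (1 - p)**(8 - i) for i in range(3, 9))
print(prob)     # ≈ 0.33
```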
Prove that $\sum |x_k| < \infty \iff \sum |x_k|^p < \infty$
The implication is obvious since $|x_k|^p\leq |x_k|$ for all $k$ big enough, but the converse is not true, as $|x_k|=\frac{1}{k}$ shows.
Tangent Line of a Parametric Curve
Look, gang, I must be missing something serious, or have a critical screw loose, because I am having an inordinately hard time getting the gist as stated of the seemingly straightforward question. Seriously, I kid you not. So if what I say below is off the mark, please set me straight! Not to put too fine a point on it, but I believe that, despite NasuSama's comment, the given formulas $x = \cosh t + t \sinh t; \; y = \sinh t + t \cosh t; \; z = 2ct \tag{1}$ do not describe any tangent line to the curve $\mathbf r(t) = (\cosh t, \sinh t, ct), \tag{2}$ simply because (1) is not the equation of any line. But before going any further, I would like to say that we can obtain a solution with one equation if we allow it to be a vector equation. I'll take this as the intent of the question, and show how to derive a vector equation for the tangent line to the given curve through any point of this curve. For any $t_0 \in \Bbb R$, the point $\mathbf r(t_0)$ given by taking $t = t_0$ in (2) is a point on the curve; the tangent vector to the curve at this point is clearly $\mathbf r'(t_0) = (\sinh t_0, \cosh t_0, c). \tag{3}$ The tangent line to this curve at the point given by $t = t_0$ is, as I am given to understand it, the line through the point $\mathbf r(t_0) = (\cosh t_0, \sinh t_0, ct_0)$ whose tangent vector is given by (3). To write an equation for the points on such a line, we need to introduce a second variable $r \in \Bbb R$ which parametrizes that line along its extent, just as $t$ parametrizes a given point on the curve via (2). If $\mathbf l$ is a point on the line, then the vector $\mathbf l - \mathbf r(t_0)$ must be collinear with $\mathbf r'(t_0)$, whence $\mathbf l - \mathbf r(t_0) = r \mathbf r'(t_0) \tag{4}$ for some $r \in \Bbb R$, whence we can write the vector equation of the line as $\mathbf l(r) = \mathbf r(t_0) + r \mathbf r'(t_0) = (\cosh t_0, \sinh t_0, ct_0) + r (\sinh t_0, \cosh t_0, c), \tag{5}$ which depends on two parameters, $t_0$ for the point on the curve, and $r$ for the point on the resulting line. As $t_0, r$ vary over $\Bbb R$, (5) describes all points on all lines tangent to the curve $\mathbf r(t)$; holding $t_0$ fixed, we obtain all points on a given tangent line. Hope this helps. Cheers, and as always, Fiat Lux!!!
Problem with commutator relations
I get that, $H(\lambda)=e^{-\lambda D}Ce^{\lambda D}$, $H'(\lambda)=-De^{-\lambda D}Ce^{\lambda D}+e^{-\lambda D}CDe^{\lambda D}$, $H''(\lambda)=D^2e^{-\lambda D}Ce^{\lambda D}-2De^{-\lambda D}CDe^{\lambda D}+e^{-\lambda D}CD^2e^{\lambda D}$. Now, we know that $D$ and $e^{-\lambda D}$ commute and we have that $H(\lambda) = \sum_{n=0}^\infty \frac{H^{(n)}(0)}{n!} \lambda^n$ $H(0)=C$, $H'(0)=-DC+CD=[C,D]$, $H''(0)=D^2C-2DCD+CD^2=DDC-DCD-DCD+CDD=D[D,C]-[D,C]D=[D,[D,C]]=0$ So $H(\lambda)=C+\lambda[C,D]$
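A concrete sanity check with sympy, using $2\times2$ matrices for which $[D,[D,C]]=0$ holds (illustrative matrices, not from the original problem):

```python
import sympy as sp

lam = sp.symbols('lam')
D = sp.Matrix([[0, 1], [0, 0]])
C = sp.Matrix([[1, 0], [0, 0]])

comm = C*D - D*C                               # [C, D]
print(D*(D*C - C*D) - (D*C - C*D)*D)           # [D,[D,C]] = zero matrix here

H = sp.simplify((-lam*D).exp() * C * (lam*D).exp())
print(sp.simplify(H - (C + lam*comm)))         # zero matrix: H = C + lam*[C,D]
```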
Constant Rank Theorem for Manifolds with Boundary
Lee's Introduction to Smooth Manifolds deals with the case of local immersions for manifolds with boundary in Theorem 4.15. So let us suppose that $F:\mathbb{H}^m \rightarrow \mathbb{R}^n$ has $\mathrm{rank}(F)=k<m$. For $\mathbb{H}^m=\{(x^1,\dots,x^m)\in \mathbb{R}^m, x^m\geq 0\}$, the assumption $\ker dF_p\not\subseteq T_p\partial\mathbb{H}^m$ in Lee's answer to the original question means that $dF_p(\partial/\partial x^m+a_1\partial/\partial x^1+\dots+a_{m-1}\partial/\partial x^{m-1})=0$ for some numbers $a_i$. The search for $k$ linearly independent tangent vectors in the image can therefore be restricted to $dF_p(\partial/\partial x^i), i<m$. Let's suppose that $k$ is the rank of $(\partial F_i/\partial x^j)_p, 1\leq i, j \leq k$. The coordinate change $(x^1,\dots,x^m) \rightarrow (F_1,\dots,F_k,x^{k+1},\dots,x^m)$ produces another boundary chart $x^m\geq 0$ for which the rest of the proof works as in the case of manifolds without a boundary.
Is $\mathcal P(A) \times \mathcal P(B)=\mathcal P(A\times B)$?
Your counting argument is a good one. $$|P(A)\times P(B)|=|P(A)||P(B)|=2^{|A|}2^{|B|}=2^{|A|+|B|}$$ while $$|P(A\times B)|=2^{|A||B|}$$ Hence so long as $|A|+|B|\neq |A||B|$, the two sets will be of different sizes.
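A tiny itertools experiment makes the size mismatch concrete (here $|A|=2$, $|B|=1$, so $8\neq4$):

```python
from itertools import chain, combinations, product

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

A, B = {1, 2}, {3}
lhs = len(powerset(A)) * len(powerset(B))   # |P(A) x P(B)| = 2^(|A|+|B|)
rhs = len(powerset(product(A, B)))          # |P(A x B)|    = 2^(|A||B|)
print(lhs, rhs)   # 8 4
```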
I have trouble finding limits at infinity.
I'll walk through how I would approach the first one (showing all work), and leave the rest up to you. Find $a$ and $b$ such that $$\lim _{x\to\infty}\left(\frac{x^2+1}{x+1}-ax-b\right)=0.$$ I would first find a common denominator between all three terms and write it as one ratio: \begin{align} \lim_{x\to\infty}\left(\frac{x^{2} + 1 - ax(x+1) - b(x+1)}{x+1}\right) &= \lim_{x\to\infty}\left(\frac{x^{2} + 1 - ax^{2} - ax - bx - b}{x+1}\right)\\ &=\lim_{x\to\infty}\left(\frac{(1-a)x^{2}-(a+b)x+1-b}{x+1}\right) \end{align} Dividing each term by the highest power of $x$ in the denominator: \begin{align} \lim_{x\to\infty}\left(\frac{(1-a)x^{2}-(a+b)x+1-b}{x+1}\right) &= \lim_{x\to\infty}\left(\frac{(1-a)x - a - b + (1-b)x^{-1}}{1+x^{-1}}\right) \end{align} Now, the $x^{-1}$ terms approach zero as $x\to \infty$, so we can rewrite this as $$\lim_{x\to\infty}\left(\frac{(1-a)x - a - b + (1-b)x^{-1}}{1+x^{-1}}\right) = \lim_{x\to\infty}\left((1-a)x - a - b\right).$$ So, we're left with $$-(a+b) +(1-a)\lim_{x\to\infty}x = 0$$ which implies that $a = 1$ so that $1-a = 0$ so we eliminate $\lim_{x\to\infty}x = \infty$. That then leaves us with $$-(1+b) = 0\implies b = -1.$$
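A one-line sympy confirmation that $a=1$, $b=-1$ indeed give limit $0$:

```python
import sympy as sp

x = sp.symbols('x')
expr = (x**2 + 1)/(x + 1) - 1*x - (-1)   # a = 1, b = -1
print(sp.limit(expr, x, sp.oo))          # 0
```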
Every power of adjacency matrix contains zeroes
Hint: if the graph is bipartite (with parts $A$ and $B$), any walk starting in part $A$ will be in part $B$ after an odd number of steps and in part $A$ after an even number of steps.
Basis of the following vector space
Yes, your derivation is correct: you have found a basis, and the dimension of that space is $2$. The fact that $B$ is fixed doesn't matter; we only need that it is a subspace, and of course that condition is fulfilled.
How can I derive these $3$ fundamental $2\times 2$ matrices:
The first column is $F\left[\begin{array}{c}1\\0\end{array}\right]$. The second column is $F\left[\begin{array}{c}0\\1\end{array}\right]$. So just work out where $(1,0)^T$ and $(0,1)^T$ end up.
Rudin RCA Theorem 7.1
You need to use sequences to remedy your problem. In the first place, you may as well assume $A=0.$ Then, there is a $\delta>0$ such that $|y-x|<\delta\Rightarrow |f(y)-f(x)|<\epsilon |y-x|.$ Choose $(a,b)$ containing $x$ such that $|b-a|<\delta$ and a sequence $(t_n)$ such that $t_n<x$ for each $n$ and such that $(t_n)$ decreases to $a$. Then, $\mu([t_n,b))=|f(b)-f(t_n)|\le |f(b)-f(x)|+|f(x)-f(t_n)|\le \epsilon|b-t_n|\le \epsilon |b-a|$. And since $\bigcup_n[t_n,b)=(a,b)$, we have $\mu((a,b))=\mu\left (\bigcup_n[t_n,b)\right)=\lim \mu([t_n,b))\le \epsilon\lim |b-a|=\epsilon|b-a|.$ This finishes the proof. For the reverse implication, first note that the hypothesis implies that $\mu(\{x\})=0$ so $f$ is continuous at $x$. Now choose $a<x<b$ such that $|b-a|<\delta.$ Then, $|f(b)-f(a)|=\mu([a,b))$ and by hypothesis, if $n$ is large enough, we have $|\mu((a-1/n,b))|\le \epsilon|(b-a)+1/n|$ so since $\mu([a,b))=\lim|\mu((a-1/n,b))|,$ it follows that $|f(b)-f(a)|\le \epsilon|b-a|$. To finish, let $a\to x$ and use continuity of $f$ at $x$ to conclude that $f'(x)=0.$
Verifying Transformation is Skew Symmetric
The matrix represents a bilinear form $B$ if you take $B(u,v) = u^\top s(v)$.
$d(a,X) = 0 \iff X\cap U\neq \emptyset$ (distance from set equals 0 iff is adherent point)
"$\Rightarrow$" Suppose $d(a,X)=0$, then for any $\epsilon&gt;0$, there exists $b_\epsilon\in X$ such that $d(a,b_\epsilon)&lt;\epsilon$. Let $B(a,\epsilon)$ denote the open ball centered at $a$ with radius $\epsilon$, then $B(a,\epsilon)\cap X\neq \emptyset$. As $\epsilon&gt;0$ is arbitrary, we conclude that every open ball centered at $a$ intersects $X$. Thus this direction is proved. "$\Leftarrow$": Suppose $X\cap U\neq\emptyset$ for every open set containing $a$. Then we consider two cases: If $a\in X$, then there is nothing to prove. If $a\notin X$, note that for each $\epsilon&gt;0$, we have $X\cap (B(a,\epsilon)\setminus\{a\})\neq\emptyset$, so there exists $b_\epsilon\in X\cap(B(a,\epsilon)\setminus\{a\})$, in particular $d(a,b_\epsilon)&lt;\epsilon$. By letting $\epsilon\to 0$, the claim follows.
Show compactness of an operator with Arzelà–Ascoli
Notice that the range of the unit ball is contained in the compact set $$K:=\left\{\sum_{j=1}^na_j\phi_j, |a_j|\leqslant \lVert \psi_j\rVert_2\right\}.$$
Tangent circles in right triangle
Construction: add lines as indicated. Circle ABC has center O and circle MNS has center X. Let the required radius be r. Note that AMXN is a square with side r. Then:

CP = 0.5CA = 2 and OP = … = 1.5

OQ = PM = … = 2 – r

QX = r – 1.5

Apply the Pythagorean theorem to ⊿OQX to obtain OX = F(r), a function of r. Then F(r) + r = … = 2.5. The above is a quadratic in r and the roots are 2 or 0 (rejected). Note: the coincidental simple answer makes me think that there should be a simpler method of handling this problem.
Limit at $\infty$ of a polynomial multiplied by a negative exponential
L'Hôpital works perfectly: $$\lim_{x\to\infty}x^2e^{-2x} = \lim_{x\to\infty}\frac{x^2}{e^{2x}} = \lim_{x\to\infty}\frac{2x }{2e^{2x}}=\lim_{x\to\infty}\frac{2 }{4e^{2x}}=0.$$
Is $f:M_n(\mathbb{C})\longrightarrow M_n(\mathbb{C})$ continuous?
There is no way to order the eigenvalues so that the function $f$ (as a function into $M_n({\mathbb C})$) is continuous. Consider the matrices $A(t) = \pmatrix{0 & 1\cr e^{2it} & 0\cr}$. Note that $A(0) = A(\pi)$. The eigenvalues are $\pm e^{it}$. But if you take the eigenvalue that is $1$ at $t=0$ and follow it continuously as $t$ goes from $0$ to $\pi$, it will be $-1$ at $t=\pi$.
unit u and primes in $\mathbb{Z}[i]$ with criteria
Your claim that $u=i$ is impossible does not follow. You know that each $b_i$ is positive; but that does not tell you that the final product $a+bi$ must have $b$ positive. For example, you can have $(a_1+b_1i)(a_2+b_2i)$ with $b_1,b_2\gt 0$, but with product $(a_1a_2-b_1b_2) + (a_1b_2+b_1a_2)i$ having imaginary part negative. Just take $a_1=0$, $b_1=2$, $a_2=-2$, $b_2=1$. Then you have $(0+2i)(-2+i)$, and the product is $-2-4i$, with negative real and negative imaginary parts. The other conclusions are likewise incorrect. Finally, your claim that $7+i$ is irreducible is also incorrect. Note that $N(7+i) = 50 = 2\times 5^2$; if it is reducible, you could look for an element with norm $5$ and one with norm $10$, which quickly leads to $7+i = (2+i)(3-i)$. Since neither factor is a unit (their norms are indeed $5$ and $10$), then this shows $7+i$ is indeed reducible. (You can also look for elements with norms $2$ and $25$).
How to classify equilibrium points
I am going to sketch the steps for you and have you fill in the details. We are given the system: $$\frac{dN_1}{dt} = N_1(2 - N_1 - 2N_2)$$ $$\tag 1 \frac{dN_2}{dt} = N_2(3 - N_2 - 3N_1).$$ We need to find the critical points. We find that there are four critical points at: $$\displaystyle (N_1, N_2) = (0, 0), (0, 3), (2, 0), \left(\frac{4}{5},\frac{3}{5}\right).$$ In order to classify these, we find the Jacobian of the system in $(1)$ and evaluate it at each of those critical points by finding the eigenvalues. Do you know how to find the Jacobian? Do you know how to evaluate the eigenvalues at each of these four points? Do you know how to classify the eigenvalues and the type of point each is? We have two stable and two unstable points. Next, we need to draw the phase plane, null clines, and so on. Lastly, we can take all of this information, draw a phase plane with the direction fields and classified eigenvalues, and then superimpose example solution curves. You have all of the answers to the questions above in such a graphic; a sketch of the eigenvalue step follows.
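If you want to check your hand computations, here is a sympy sketch of the Jacobian/eigenvalue step (sympy is my choice here, not something from the original question):

```python
import sympy as sp

N1, N2 = sp.symbols('N1 N2')
F = sp.Matrix([N1*(2 - N1 - 2*N2), N2*(3 - N2 - 3*N1)])
J = F.jacobian([N1, N2])

pts = [(0, 0), (0, 3), (2, 0), (sp.Rational(4, 5), sp.Rational(3, 5))]
for p in pts:
    print(p, J.subs({N1: p[0], N2: p[1]}).eigenvals())
# (0,0): eigenvalues 2, 3          -> unstable node
# (0,3): eigenvalues -4, -3        -> stable node
# (2,0): eigenvalues -2, -3        -> stable node
# (4/5,3/5): one positive, one negative eigenvalue -> saddle (unstable)
```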
How can one find $\mu$?
The convergence in probability to a constant $\mu$ is equivalent to weak convergence to $\mu$. And the latter, by Lévy's criterion, is equivalent to the pointwise convergence of characteristic functions to $e^{i\mu t}$. Thus, $S_n/n \overset{P}{\longrightarrow} \mu$ $\iff$ $\varphi_{S_n/n}(t)\to e^{i\mu t}, t\in \mathbb{R}$ $\iff$ $\varphi_{X}(t/n)^n \to e^{i\mu t},t\in\mathbb{R}$ $\iff$ $n\log \varphi_{X}(t/n)\to it\mu,t\in\mathbb{R}$ $\iff$ $\frac{n}{t} (\varphi_{X}(\frac{t}{n})-1) \to i\mu, t\in \mathbb{R}$, where $\varphi_X$ is the characteristic function of one summand. The last convergence says roughly that the derivative of $\varphi_{X}(t)$ at $t=0$ is equal to $i\mu$; clearly, it is implied by $\varphi_{X}'(0) = i\mu$. It turns out that this convergence is equivalent to $\varphi_{X}'(0) = i\mu$. Although this provides a way to identify $\mu$, the usefulness of this result is quite limited; e.g. in your case it is not clear at all how to find the derivative of $\varphi_X(t)$ at $t=0$ (maybe, someone knows the answer, but I don't). An alternative approach is to use Feller's weak law of large numbers, which says that if $x P(|X|>x)\to 0$, $x\to+\infty$, then $S_n/n -\mu_n \overset{P}{\longrightarrow} 0$, $n\to\infty$, where $\mu_n = E[X\mathbf{1}_{|X|\le n}]$. In our case $$ nP(|X|>n) = C n\sum_{k=n+1}^\infty \frac{1}{k^2 \log k}\sim \frac{C}{\log n}, n\to\infty, $$ where the last follows e.g. from the Stolz rule. Therefore, $S_n/n -\mu_n \overset{P}{\longrightarrow} 0$, $n\to\infty$. But now the sequence $\mu_n$ has a limit $$ \mu = C\sum_{k=2}^\infty \frac{(-1)^k}{k\log k}, $$ whence $S_n/n\overset{P}{\longrightarrow} \mu $, $n\to\infty$.
Confusion about an example in Miles Reid Undergraduate Algebraic Geometry
In $\Bbb P^2\Bbb R$, we identify points $(u,v,w)$ and $(\lambda u,\lambda v,\lambda w)$ for all $\lambda\in\Bbb R\setminus\{0\}$. In particular, $(a,b,0)$ is the same point as $(-a,-b,0)$ and $(a,-b,0)$ is the same point as $(-a,b,0)$. So, these four solution are two points, and it is $(a,b,0)$ instead of $(b,a,0)$ if $|a|\ne |b|$.
Determining the value of A given $Z_4=Z_8\oplus Z_2/A$
Well, the notation is a bit dangerous, since you seem to be conflating $=$ with $\cong$. Other than that, we know that any subgroup of $Z_8\oplus Z_2$ must be of the form $B\oplus C$, where $B\le Z_8$ and $C\le Z_2$. So for $C$ our only choices are the trivial group and the whole group. Also note that $|Z_4|=4$, $|Z_8\oplus Z_2|=16$, hence $|A|=|B||C|=4$. So, if $|C|=1$, we have $|B|=4$, and the only subgroup of order $4$ of $Z_8$ is the one generated by the square of a generator. Then we would have $(Z_8\oplus Z_2)/A\cong Z_2 \oplus Z_2$, which is false; hence $|C|=2$, i.e. $C=Z_2$. Then $|B|=2$, which gives the only subgroup that you identified. In other words, your $A$ is correct and unique.
Books in the spirit of Problems and Theorems in Analysis by George Pólya and Gábor Szegő
Note: Your question is really a challenge, because the book you're pointing to as reference is a first-class evergreen of highest rank. So, I was thinking: which books give me a similar feeling when I am going through them as the classic by Pólya and Szegő, and which of them could also play in the same league? One other criterion was that they should provide a reasonably thorough survey through a part of mathematics. The first two books which came into my mind: Enumerative Combinatorics by Richard P. Stanley An outstanding classic to study combinatorics with an enormous wealth of examples and solutions from easy to really hard. E.g. example 6.19 of Volume $2$ provides you with $66$ different combinatorial structures related with the ubiquitous Catalan Numbers. Applied and Computational Complex Analysis by P. Henrici is a classic on Complex Analysis from 1977. The keyword in this 3 volume set is applied. You will be guided through lots of enlightening examples, which help you to study complex analysis and become familiar with this part of mathematics. In fact I found this book only a few years ago. I was curious why so many papers had referenced P. Henrici's book. But since I've bought it and read parts of it with great pleasure I know the reason! :-)
Computing the volume inside a surface S, using a seemingly unrelated result,
Hint for the volume inside the surface $S$. Find a linear transformation of the space such that the image of the surface $S$ is the sphere centered on the origin with radius equal to $1$. Then use the change of variables theorem for integrals. Giving some more details $$(x,y,z) \to x^2 + xy +y^2 + z^2$$ is an inner product. Diagonalizing the quadratic form (its eigenvalues are $\frac32, \frac12, 1$, with orthogonal eigenvectors along $(1,1,0)$, $(1,-1,0)$, $(0,0,1)$), you'll see that the linear application $$A = \begin{pmatrix} \frac{\sqrt{3}}{2} &amp; \frac{\sqrt{3}}{2} &amp; 0 \\ \frac{1}{2} &amp; -\frac{1}{2} &amp; 0 \\ 0 &amp; 0 &amp; 1 \end{pmatrix}$$ transforms the set $B= \{(x,y,z): x^2 + xy + y^2 + z^2 \le 1\}$ into the set $B^\prime= \{(u,v,w): u^2 + v^2 + w^2 \le 1\}$; indeed $u^2+v^2 = \frac34(x+y)^2+\frac14(x-y)^2 = x^2+xy+y^2$. $B^\prime$ is the 3D unit ball. Then you can use the Substitution for multiple variables $$\int_{\varphi(U)} f(R) dR=\int_Uf(\varphi(T))\vert \det (D \varphi)(T)\vert dT$$ with $U=B$, $\varphi=A$, $\varphi(U)=B^\prime$ and $f=1$ you get $$\int_{B^\prime} 1\,dR=\frac{\sqrt{3}}{2}\int_B 1 \,dT \text{ as } \vert\det A\vert =\frac{\sqrt{3}}{2}$$ What you're looking for is $\displaystyle \int_B 1 \,dT$ which is equal to $\displaystyle \frac{2}{\sqrt{3}}\int_{B^\prime} 1 \,dR = \frac{2}{\sqrt{3}} \cdot \frac{4}{3} \pi = \frac{8\pi}{3\sqrt{3}}$ as the volume of a sphere of radius $r$ is equal to $\frac{4}{3} \pi r^3$
Different Trigonometric Equations have different general solutions
What do you mean? Just factor out $π/2,$ to get $$x=π/2(4n\pm 1).$$ The last factor contains all odd numbers since $4n+3$ can be taken instead of $4n-1$ (this is just a shift), and all odd numbers are of either of the forms, since $4n,4n+2$ can never be odd. So the solutions are actually equivalent, only apparently different.
A question on FLT and Taniyama Shimura
The elliptic curve is not a modular form. The idea is that there is a modular form $f$ associated to $E$. It satisfies $$f(z)=\sum_{n=1}^\infty c_n q^n$$ where $q=\exp(2\pi i z)$, $c_1=1$ and (with finitely many exceptions) for prime $p$, the equation $y^2=x^3+ax+b$ has $p-c_p$ solutions $(x,y)$ considered modulo $p$. It also satisfies various other conditions that I won't spell out (that it's a "newform" for a modular group $\Gamma_0(N)$ etc.)
Show that $\left|\int_a^b f(x) dx\right|\leqslant \frac{M}{12}(b-a)^3$?
Let $c:=\dfrac{a+b}{2}$. Integration by parts (using $f(a)=0$ and $f(b)=0$) gives $$\int_a^b\,f(x)\,\text{d}x=-\int_a^b\,(x-c)\,f'(x)\,\text{d}x\,.$$ By the Mean Value Theorem, $$f'(x)=f'(c)+(x-c)\,f''\big(\xi(x)\big)\text{ for all }x\in[a,b]\,,$$ where $\xi(x)$ is a number (inclusively) between $x$ and $c$. That is, we obtain $$\left|\int_a^b\,f(x)\,\text{d}x\right|=\left|\int_a^b\,(x-c)\,f'(c)\,\text{d}x+\int_a^b\,(x-c)^2\,f''\big(\xi(x)\big)\,\text{d}x\right|\,.$$ Since $\displaystyle \int_a^b\,(x-c)\,f'(c)\,\text{d}x=0$ and $\big|f''(t)\big|\leq M$ for all $t\in[a,b]$, we conclude that $$\left|\int_a^b\,f(x)\,\text{d}x\right|\leq M\,\left|\int_a^b\,(x-c)^2\,\text{d}x\right|=M\,\left(\frac{(b-a)^3}{12}\right)\,.$$ The inequality becomes an equality if and only if $f(x)=+\frac{M}{2}(x-a)(b-x)$ for all $x\in[a,b]$, or $f(x)=-\frac{M}{2}(x-a)(b-x)$ for all $x\in[a,b]$, so that $|f''|\equiv M$.
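A sympy check of the computation and of the extremal function (a sketch, not part of the original proof):

```python
import sympy as sp

x, a, b, M = sp.symbols('x a b M', real=True, positive=True)
f = M/2 * (x - a) * (b - x)                 # f(a) = f(b) = 0 and f'' = -M
print(sp.simplify(sp.diff(f, x, 2)))        # -M, so |f''| = M
lhs = sp.integrate(f, (x, a, b))
print(sp.simplify(lhs - M*(b - a)**3/12))   # 0: the bound is attained
```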
Finding a Basis for S$^\perp$
Longer answer: Your parameterization is: $x_1 = 2x_3 - x_4$; $x_2 =-3x_3+2x_4$; $x_3$ and $x_4$ are arbitrary. Then your vector $\vec x$ is $\left(\begin{matrix}x_1\\x_2\\x_3\\x_4\end{matrix}\right) = x_3 \cdot \left(\begin{matrix}2\\-3\\1\\0\end{matrix}\right) + x_4\cdot \left(\begin{matrix}-1\\2\\0\\1\end{matrix}\right)$ That means $\left\{ \left(\begin{matrix}2\\-3\\1\\0\end{matrix}\right),\left(\begin{matrix}-1\\2\\0\\1\end{matrix}\right)\right\}$ is a basis for $S^\perp$.
Generalized function with 2nd species singularity integrable
We don't need to talk about distributions here. This is a simple function, taking its values in $[0,+\infty]$, which is common in measure theory. To answer your question, yes you can use the same argument. This function is equal to $0$ $\lambda$-almost everywhere, so it's integrable.
If $M$ maps all probability vectors on a subspace to some probability vectors, is $M$ the restriction of a column stochastic matrix?
Here is a minimal counterexample for $n = 3$. Let $S$ and $S'$ be the subspaces defined by $x_1 = x_2 + x_3$ and $x_1 = 0$ respectively. It is easy to see that $S \cap \Delta$ and $S' \cap \Delta$ are two segments with endpoints $a_2 := (1/2, 1/2, 0), a_3 := (1/2, 0, 1/2)$ and respectively $b_2 := (0,1,0), b_3 := (0,0,1)$. Clearly the linear map $M\colon S \to S'$ defined by $M(x_1, x_2, x_3) := (0, 2x_2, 2x_3)$ satisfies the condition $M(S \cap \Delta) \subset S' \cap \Delta$ as $M$ sends $a_2$ to $b_2$ and $a_3$ to $b_3$. It is then easy to see that if a $3 \times 3$ non-negative matrix $Q$ satisfies $$\begin{pmatrix}Q_{11} & Q_{12} & Q_{13} \\ Q_{21} & Q_{22} & Q_{23}\\ Q_{31} & Q_{32} & Q_{33}\end{pmatrix}\begin{pmatrix}1/2 & 1/2 \\ 1/2 & 0\\ 0 & 1/2\end{pmatrix} = \begin{pmatrix}0 & 0 \\ 1 & 0\\ 0 & 1\end{pmatrix},$$ then $Q_{ij} = 0$ for all $(i,j) \neq (2,2), (3,3)$. Hence there is no column stochastic matrix $Q$ that sends $a_2$ to $b_2$ and $a_3$ to $b_3$.
Inequality involving prime numbers
Hint: $p_k\cdot p_{k+1}\geq 2p_{k+1}$
Can you solve $3-x^2=2^x$ analytically?
Even with special functions, this equation cannot be solved analytically. But we can make approximations. Knowing that $x=1$ is a trivial solution, consider that we look for the negative root of the function $$f(x)=2^x+x^2-3$$ for which $$f'(x)=2^x \log (2)+2 x$$ $$f''(x)=2^x \log ^2(2)+2 \quad > 0\quad \forall x$$ The first derivative cancels at $$x_*=-\frac 1{\log(2)} W\left(\frac{\log ^2(2)}{2}\right)$$ where $W(.)$ is Lambert function. Build a second order Taylor expansion around this point and make it equal to zero. This would give, as an approximation $$x_0=x_*-\sqrt{-2 \frac {f(x_*)}{f''(x_*)}}$$ and, numerically, this is $\sim -1.60832$ while the exact solution, given by Newton method, is $-1.63658$. We can go much further using a Taylor expansion up to order $O((x-x_*)^{p+1})$ but this requires the solution of higher order polynomials. Using a cubic expansion would give $-1.64396$ and a quartic would give $-1.63518$; this would be obtained using radicals (nasty formulae). After, it is just a numerical method. Edit We can obtain good approximations using one single iteration of Newton-like methods. In order to avoid overshoots, use a starting point $x_0$ such that $f(x_0)>0$ (Darboux theorem). Since by inspection $f(-2)>0$, $x_0=-2$ is a simple good choice. Depending on the order $n$ of the method, the estimate will write $$x_{(n)}=\frac {P_n(t)}{Q_n(t)}$$ where $P_n(t)$ and $Q_n(t)$ are polynomials of degree $n$ in $t=\log(2)$. Below are the polynomials for Newton $(n=2)$, Halley $(n=3)$ and Householder $(n=4)$ methods $$\left( \begin{array}{ccc} n & P_n(t) & Q_n(t)\\ 2 & -2t+27 & t-16 \\ 3 & -6 t^2-118 t+784 & 3 t^2+64 t-472 \\ 4 & -2 t^3-339 t^2-7776 t+34392 & t^3+192 t^2+4368 t-20736 \end{array} \right)$$ and the numerical values are $x_{(2)}=-1.67335$ and $x_{(3)}=-1.64085$, $x_{(4)}=-1.63709$. I did not report here the results for higher order because the formulae are quite lengthy, but just to give you an idea $x_{(6)}=-1.63657615$ while the "exact" solution is $-1.63657604$.
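For completeness, here is how one could reproduce the two key numbers with scipy (brentq for the "exact" root, Lambert $W$ for $x_*$ and the quadratic estimate $x_0$):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import lambertw

f = lambda x: 2**x + x**2 - 3
root = brentq(f, -2, -1)                    # the negative root, ≈ -1.63658

c = np.log(2)
xstar = -lambertw(c**2 / 2).real / c        # where f' vanishes
x0 = xstar - np.sqrt(-2 * f(xstar) / (2**xstar * c**2 + 2))
print(root, x0)                             # matches the values quoted above
```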
Who can help me estimate the operator norm of this integral operator?
Fix $x \geq 0$. Let's first suppose that $\psi, F \geq 0$, $\psi \in L^1$. Then, by Jensen's inequality and Tonelli's theorem, $$\begin{align*} \int_{\mathbb{R}} |T_F(x)\psi(y)|^p \, dy &= \|\psi\|_1^p \cdot \int_{\mathbb{R}} \left| \int_0^{\infty} F(t+x+y) \cdot \frac{\psi(t)}{\|\psi\|_1} \, dt \right|^p \, dy \\ &\leq \|\psi\|_1^p \cdot \int_{(0,\infty)} \underbrace{\int_{\mathbb{R}} |F(x+t+y)|^p \, dy}_{=\|F(x+\cdot)\|_p^p} \frac{\psi(t)}{\|\psi\|_1} \, dt = \|F\|_p^p \cdot \|\psi\|_1^p \end{align*}$$ for any $p \geq 1$. This shows that $T_F(x) \psi < \infty$ almost surely and $\|T_F(x)\| \leq \|F(x+\cdot)\|_p$, i.e. $T_F(x):L^1 \to L^1$ and $T_F(x): L^1 \to L^2$ are bounded operators. Splitting up $\psi = \psi^+-\psi^-$ and $F= F^+-F^-$ proves this for not necessarily non-negative $\psi$, $F$. In order to find the operator norm, we first consider the case $F \in C_c$ (i.e. $F$ has bounded support and is (uniformly) continuous). We choose a sequence $\psi_k \in L^1$ such that the following conditions are satisfied: $$\psi_k \geq 0 \qquad \int \psi_k = 1 \qquad \{x; \psi_k(x) \neq 0\} \subseteq [0,k^{-1}]$$ Then $$\begin{align*} \left| \int_0^{\infty} \psi_k(t) \cdot F(t+x+y) \, dt - F(x+y) \right| &= \left| \int_0^{\infty} (F(x+y+t)-F(x+y)) \cdot \psi_k(t) \, dt \right| \\ &\leq \sup_{t \leq k^{-1}} |F(x+y+t)-F(x+y)| \cdot \underbrace{\int_0^{k^{-1}} \psi_k(t) \, dt}_{1} \end{align*}$$ Since $F$ is uniformly continuous, this shows that $$T_F(x)\psi_k \to F(x+\cdot)$$ uniformly, hence in particular $\|T_F \psi_k\|_p \to \|F(x+\cdot)\|_p$. Thus $\|T_F(x)\| \geq \|F(x+\cdot)\|_p$. Since $C_c$ is dense in $L^1 \cap L^2$, this inequality holds for any $F \in L^1 \cap L^2$. Consequently, $$\|T_F(x)\| = \|F(x+\cdot)\|_p$$ As far as I can see, a similar reasoning applies if $\psi \in L^2$ (basically, we can interchange the roles of $F$ and $\psi$ in the first calculation).
Why does the mollified function converge uniformly to the original $W^{1,\infty}$ function
Continuity of $u$ follows from Morrey's inequality, which was proved a few pages earlier in the book: a function in $W^{1,p}$, $p>n$, has a continuous representative, with which it is identified. For completeness, a proof of uniform convergence. Continuity and compact support imply uniform continuity. So, for any fixed $\epsilon>0$ there is $\delta$ such that if $r<\delta$, all the values of $u$ involved in the integral $\eta_r* u(x)$ are within $\epsilon$ of $u(x)$. Since mollification averages these values, $|\eta_r* u(x)-u(x)|\le \epsilon$ as desired.
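The mechanism is easy to see numerically. Below is a minimal sketch (NumPy assumed; the hat function $u$ and the grid are arbitrary illustrative choices) that convolves a continuous, compactly supported $u$ with the standard bump mollifier and reports the sup-norm error as the radius $r$ shrinks:

```python
import numpy as np

x = np.linspace(-2, 2, 4001)
dx = x[1] - x[0]
u = np.maximum(0.0, 1 - np.abs(x))            # continuous, compact support

def sup_error(r):
    t = np.arange(-r, r + dx / 2, dx)
    s = t / r
    eta = np.zeros_like(s)
    inside = np.abs(s) < 1
    eta[inside] = np.exp(-1.0 / (1 - s[inside] ** 2))   # standard bump
    eta /= eta.sum() * dx                     # normalize: integral one
    u_r = np.convolve(u, eta, mode="same") * dx
    return np.max(np.abs(u_r - u))

for r in (0.5, 0.1, 0.02):
    print(r, sup_error(r))                    # sup-error shrinks with r
```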
What exactly *is* the Riemann zeta function?
The Riemann zeta function $\zeta(s)$ is a sum of reciprocals of powers of natural numbers, $$\zeta(s) = \sum_{n \geq 1} \frac{1}{n^s}.$$ As written, this makes sense for complex numbers $s$ so long as $\text{Re } s > 1$. For these numbers, there is little more to be said. But you've asked about an interesting number: $\zeta(1 + i)$, and $\text{Re }(1 + i) \not > 1$. What's happening there is a bit subtle, and a bit abusive in terms of notation. It turns out there is another function (let's call it $Z(s)$) which makes sense for all complex numbers $s$ except for $s = 1$, and which exactly agrees with $\zeta(s)$ when $\text{Re } s > 1$. If you're familiar with some calculus or complex analysis, then you should also know that the function $Z(s)$ is complex differentiable everywhere except for $s = 1$. This is a very special property that distinguishes $Z(s)$. The theory of complex analysis (in particular, the theory of "analytic continuation") gives that there can be at most one function that extends $\zeta(s)$ to a larger region in the way $Z(s)$ does. In this sense, $Z(s)$ is uniquely determined by $\zeta(s)$. As it agrees with $\zeta(s)$ everywhere $\zeta(s)$ (initially) makes sense, it might even be reasonable to just use the name $\zeta(s)$ instead of $Z(s)$. That is, when I write $\zeta(s)$, what I'm really saying is $$\zeta(s) = \begin{cases} \zeta(s) & \text{if Re }s > 1 \\ Z(s) & \text{otherwise } \end{cases}$$ It is this function that W|A computes when you ask it for $\zeta(1 + i)$. Although what I've written is true (and important), it doesn't answer one aspect of your question: What is it even calculating? I mentioned that this function $Z(s)$ exists, or rather that it is possible to give meaningful values to $\zeta(s)$ for all $s \neq 1$. But how? Stated differently, you're asking: what is the analytic continuation of the Riemann zeta function? The continuation is unique, but the steps to get there are not. I'll give a very short, incomplete argument that describes one way to calculate $\zeta(1+i)$. We start by considering $\displaystyle h(s) = \sum_{n \geq 1} \frac{2}{(2n)^s}$. Performing some rearrangements, $$\begin{align} h(s) &= \sum_{n \geq 1} \frac{2}{(2n)^s} \\ &= \frac{1}{2^{s - 1}} \sum_{n \geq 1} \frac{1}{n^s} \\ &= \frac{1}{2^{s - 1}} \zeta(s) \end{align}$$ Let's subtract this from the regular zeta function. On the one hand, $$ \zeta(s) - h(s) = \zeta(s)\left(1 - \frac{1}{2^{s-1}}\right).$$ On the other hand, $$ \begin{align}\zeta(s) - h(s) &= \sum_{n \geq 1} \left( \frac{1}{n^s} - \frac{2}{(2n)^s} \right) \\ &= \sum_{n \geq 1} \frac{(-1)^{n+1}}{n^s}, \end{align}$$ and this last series makes sense for $\text{Re } s > 0$. (If you haven't looked at alternating series before, this might not be obvious. But the idea is that the sign changes cancel out a lot of the growth, so much so that the series converges on a larger region.) In total, this means that $$\zeta(s) = (1 - 2^{1 - s})^{-1} \sum_{n \geq 1} \frac{(-1)^{n+1}}{n^s},$$ and you can just "plug in" $1+i$ here. [Notice that the problem when $s = 1$ is apparent here, as you cannot divide by $0$.] In practice, it's an infinite sum, so you take the first very many terms to get the value of $\zeta(1+i)$ to any precision you want. For completeness, it also turns out that $$\pi^{-s/2} \zeta(s) \Gamma(\tfrac{s}{2}) = \pi^{(s-1)/2} \zeta(1-s) \Gamma(\tfrac{1-s}{2}),$$ which relates $\zeta(s)$ to $\zeta(1-s)$; together with the series above (valid for $\text{Re } s > 0$), this determines the remaining values with $\text{Re } s \leq 0$.
The function $\Gamma(z)$ here is the "Gamma function" (defined by an integral, a generalization of the factorial), and this equation is called the symmetric functional equation of the zeta function.
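To make the "plug in $1+i$" step concrete, here is a minimal Python sketch of the alternating-series formula above (averaging two consecutive partial sums is a standard convergence trick, not part of the original argument; the printed value is what W|A reports for $\zeta(1+i)$):

```python
s = 1 + 1j

partial, prev = 0j, 0j
for n in range(1, 100001):
    prev = partial
    partial += (-1) ** (n + 1) / complex(n) ** s

eta = (partial + prev) / 2            # average damps the alternating tail
zeta = eta / (1 - 2 ** (1 - s))       # zeta(s) = eta(s) / (1 - 2^(1-s))
print(zeta)                           # ~ 0.58216 - 0.92685j
```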
Why does the spectral radius satisfy $r(A^{2})=r(A)^{2}$?
Let $A^2v = \lambda v$ for some vector $v \ne 0$. Then $(A^2-\lambda I)v = 0$. If $\omega, -\omega$ are the complex square roots of $\lambda$, we have $$0 = (A^2-\lambda I)v = (A-\omega I)(A+\omega I)v$$ If $(A+\omega I) v = 0$, then $Av = -\omega v$, so $-\omega$ is an eigenvalue of $A$. On the other hand, if $(A+\omega I) v \ne 0$, then $$A\big((A+\omega I)v\big) = \omega \big((A+\omega I)v\big)$$ so $\omega$ is an eigenvalue of $A$. In either case, $\lambda = (\pm\omega)^2$ is a square of an eigenvalue of $A$, i.e. $\lambda \in \sigma(A)^2$. Conversely, if $Av = \mu v$ with $v \neq 0$, then $A^2 v = \mu^2 v$, so $\sigma(A)^2 \subseteq \sigma(A^2)$. Together the two inclusions give $\sigma(A^2) = \sigma(A)^2$, and taking maximal moduli yields $r(A^2) = r(A)^2$.
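A quick numerical sanity check in Python (random matrix, NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))

r = lambda M: np.abs(np.linalg.eigvals(M)).max()   # spectral radius
print(r(A) ** 2, r(A @ A))                         # the two values agree
```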
proof using a recursive definition
Take your recursive definition of strings and attach a "value" function to each clause of the definition: Base: $1$, with val(1) = 1. Recursion: $S1$, with val(S1) = 2 val(S) + 1, or $S0$, with val(S0) = 2 val(S). This usually works for any recursive definition to which you want to attach additional properties or interpretations.
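A direct transcription in Python (the helper name `val` is just illustrative, chosen to make the definition executable):

```python
def val(s: str) -> int:
    if s == "1":                     # base case: val(1) = 1
        return 1
    head, last = s[:-1], s[-1]
    if last == "1":                  # S1: val(S1) = 2*val(S) + 1
        return 2 * val(head) + 1
    return 2 * val(head)             # S0: val(S0) = 2*val(S)

print(val("1101"))   # 13, i.e. the string read as a binary numeral
```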
Show that $|e^z| \le 1$ if $Re [z] \le 0$
It's a useful fact that $$|e^{z}| = |e^{a+bi}| = |e^{a}e^{bi}| = |e^{a}||e^{bi}| = |e^{a}| = e^{a}$$ Since $a = \Re z$, if $a \leq 0$ then $e^{a} \leq 1$, which in turn means $|e^{z}| \leq 1$.
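A two-line numerical check with Python's standard `cmath`/`math` modules:

```python
import cmath, math

z = -0.3 + 7.0j
print(abs(cmath.exp(z)), math.exp(z.real))   # both ~ 0.7408, and <= 1 since Re z <= 0
```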
Solving integral (by substitution?)
Use the substitution $x=\sqrt{b}\,t$ (note that $b>0$ is needed for the integrand to be real, and then $dx=\sqrt{b}\,dt$); this gives: $$\int\frac{1}{\sqrt{b-x^2}}dx=\int \frac{\sqrt{b}}{\sqrt{b-bt^2}}dt=\int \frac{1}{\sqrt{1-t^2}}dt$$
Is Proof By Induction Necessary?
What set of axioms are you using? If you use the Peano axioms, you can't prove much without induction. In particular, the commutative and associative properties of addition and multiplication require induction. The inverted-addition proof that you mention needs a lot of work to justify if you go all the way back to the axioms; if we don't go back to the axioms, we rely on a lot of intuitive understanding of the naturals.
Input in Differential Equations and Difference Equations
I am a Mathematica user and know the history of the package rather well. Difference equations were introduced in version 7.0, while differential equations have been there since the first version, so there is a big difference in how much attention each has received. In the documentation for DifferenceRoot there is a last section that is of high interest for the intent of your question: GeneratingFunction[ DifferenceRoot[ Function[{y, n}, {-y[n] - y[n + 1] + y[n + 2] == 0, y[0] == 0, y[1] == 1}]][n], n, z] $$-\frac{z}{-1 + z + z^2}$$ The built-in is DifferenceRoot; the difference equation in this example is the list in the second argument of the function. In Mathematica terms, this is about all the similarity there is between difference equations and differential equations: not much. Differential equations arise as limits of difference quotients; difference equations are part of the theory of generating functions (GeneratingFunction). There are differential equations for generating functions and there are difference equations, and either one determines the generating function completely. Most generating functions are rational or transcendental functions, and there are also exponential generating functions. Both are mathematics, but very different in nature; do not mix them up. Mathematica knows some more of these function categories: infinite sums and recurrence equations. The solutions of difference equations are holonomic sequences; the solutions of differential equations need satisfy nothing more than differentiability of a suitable order. But that is only the Mathematica side of things. Difference equations appear in very different contexts, for example: as schemes for setting up solvers for differential equations (the Mathematica interpretation detailed above deals with homogeneous equations); as definitions of schemes for interpolating functions or data; in school mathematics, as equations with at least one subtraction; and as recurrences, which is the relation used by Mathematica (What-is-a-difference-equation). sol = RSolve[{y[n + 2] - 7 y[n + 1] + n y[n] == 0, y[0] == 5, y[1] == 10}, y, n] gives a solution in Mathematica; with the $f$ from the question there is none. The solution is very rapidly increasing, which shows that the homogeneous part diverges much faster than the inhomogeneity does; it is plausible that this indicates an error in the question. I made use of "What is a difference equation?" on google.com
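For readers without Mathematica, the same Fibonacci-type computation can be sketched in Python with SymPy (`rsolve` plays roughly the role of RSolve here; this is an illustration, not a one-to-one translation):

```python
from sympy import Function, rsolve, symbols

n = symbols('n', integer=True)
y = Function('y')

# y(n+2) = y(n+1) + y(n), y(0) = 0, y(1) = 1  (the Fibonacci recurrence)
sol = rsolve(y(n + 2) - y(n + 1) - y(n), y(n), {y(0): 0, y(1): 1})
print(sol)                                           # Binet-style closed form
print([sol.subs(n, k).expand() for k in range(8)])   # 0, 1, 1, 2, 3, 5, 8, 13
```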
Solution to this equation?
Simplifying your expression yields $$2^n=\dfrac{2}{3}n(n-1)(n-2)$$ and clearly $n=4$ is the only integer solution. Over the reals the equation has exactly one further solution, at approximately $x\approx 7.44$: the cubic on the right overtakes $2^x$ just past $x=4$ and is overtaken again for good near $7.44$, while for $x<0$ the right-hand side is negative, so there can be no negative root. A sketch of the graph of $f(x)=2^x-\dfrac{2x}{3}(x-1)(x-2)$ shows the two crossings.
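A short Python check of both statements (SciPy's `brentq` assumed available for the real root):

```python
from scipy.optimize import brentq

f = lambda x: 2**x - (2 / 3) * x * (x - 1) * (x - 2)

# Integer solutions in a generous window: only n = 4 shows up.
print([n for n in range(0, 60) if 3 * 2**n == 2 * n * (n - 1) * (n - 2)])

# The remaining real root, bracketed by the sign change on [7, 8].
print(brentq(f, 7, 8))        # ~ 7.44
```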
Number of possibilities to draw from a card deck isn't an integer - where's my error?
Basically you’ve incorrectly identified the nature of the problem. Let $n_A,n_K,n_Q,n_J$, and $n_{10}$ be the numbers of aces, kings, queens, jacks, and tens in your set of $4$ cards. In effect you’re asking for the number of solutions in non-negative integers to the equation $$n_A+n_K+n_Q+n_J+n_{10}=4\;.$$ This is a standard stars and bars problem, and the solution is given by the binomial coefficient $$\binom{4+5-1}{5-1}=\binom84=70\;.$$ A fairly clear explanation of the formula and its derivation is given in the linked article.
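Both the brute-force count and the stars-and-bars formula are easy to confirm in Python:

```python
from itertools import product
from math import comb

# All 5-tuples (nA, nK, nQ, nJ, n10) of non-negative integers summing to 4
count = sum(1 for t in product(range(5), repeat=5) if sum(t) == 4)
print(count, comb(8, 4))      # 70 70
```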
How to draw the following configuration space (manifold)?
I have a representation to propose; whether it is interesting for your purpose, I don't know... Consider the set of 3D oriented straight lines that are parallel to the vertical plane xOz and intersect the base plane xOy. They can be parameterized by $(x,y,\theta)$, with $(x,y)$ their point of intersection with xOy and $\theta$ the angle they make with the horizontal plane. Edit: One can give slightly different representations of this manifold by replacing the oriented lines by their unit vectors, or (probably better) by line segments connecting a point $(x_1,y,0)$ to a point $(x_2,y,1)$ with the same $y$, the angle being obtained as $\theta=\operatorname{atan}(1/(x_2-x_1))$. This representation has the advantage of making the connection with the affine representation mentioned here and there. A drawback: the angles $\pi/2$ and $3\pi/2$ aren't represented; they have to be "added manually" (!)
Proving $|\sin x - \sin y| < |x - y|$
You don't actually need calculus to prove it: $$|\sin x - \sin y| = \left| 2 \sin \frac{x-y}{2} \cos\frac{x+y}{2} \right| \,.$$ The inequality $\left| \sin \frac{x-y}{2}\right| < \left|\frac{x-y}{2}\right|$ is well known, while $\left|\cos\frac{x+y}2\right|\leq 1$ is even better known. The first inequality is strict whenever $x-y \neq 0$.
Support of a faithful representation
In most reasonable cases $\tilde{\pi}$ is not faithful. Consider the diagonal embedding $\pi\colon \ell_\infty \to B(\ell_2)$ with respect to any orthonormal basis. Certainly there is no faithful representation $\ell_\infty^{**}\to B(\ell_2)$ as $\ell_\infty^{**}$ is not $\sigma$-finite. Consequently, $\tilde{\pi}$ is not faithful. This can be generalised to the setting where $A$ acts on a separable Hilbert space and contains an element with uncountable spectrum.
Expected Value of Maximum Likelihood Estimator for $\operatorname{Beta}(\theta,1)$
"But I really don't know how to calculate this expected value" — observe that $$Y=-\log X \sim\operatorname{Exp}(\theta).$$ Thus $\sum_i Y_i\sim \text{Gamma}(n,\theta)$. Is this enough for you to derive your UMVU estimator? Here is how to calculate your expectation using the gamma distribution: $$\mathbb{E}[\hat{\theta}]=n\int_0^{\infty}\frac{1}{y}\,\frac{\theta^n}{\Gamma(n)}y^{n-1}e^{-\theta y}\,dy=n\theta\frac{\Gamma(n-1)}{\Gamma(n)}\underbrace{\int_0^{\infty}\frac{\theta^{n-1}}{\Gamma(n-1)}y^{(n-1)-1}e^{-\theta y}\,dy}_{=1}=\frac{n}{n-1}\theta$$
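A Monte Carlo sanity check in Python (NumPy assumed; the inverse-CDF sampling $X = U^{1/\theta}$ is valid because $F(x) = x^\theta$ for $\operatorname{Beta}(\theta,1)$):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.0, 5, 200_000

u = rng.random((reps, n))
x = u ** (1 / theta)                     # X ~ Beta(theta, 1)
mle = n / (-np.log(x)).sum(axis=1)       # theta_hat = n / sum(-log X_i)

print(mle.mean(), n * theta / (n - 1))   # both ~ 2.5: E[theta_hat] = n*theta/(n-1)
```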
Fitting exponential curve to data
I assume you are looking for a curve of the form $y=Ae^{kx}$. For all your data points $(x_i,y_i)$, compute $w_i=\ln(y_i)$. Find in the usual way constants $a,b$ such that the line $w=a+bx$ is a line of best fit to the data $(x_i,w_i)$. Then $e^a$ and $b$ are good estimates for $A$ and $k$ respectively. Added: "Line of best fit" is a huge subject. We describe a basic method, least squares. The idea is to find numbers $a$ and $b$ which minimize $$\sum_{i=1}^n \left(w_i -(a+bx_i)\right)^2.$$ It turns out that the best $b$ and $a$ are given by the following formulas: $$b=\frac{\sum_{i=1}^n x_iw_i -\frac{1}{n}\left(\sum_{i=1}^n x_i\right)\left(\sum_{i=1}^n w_i\right)}{\sum_{i=1}^n x_i^2 -\frac{1}{n}\left(\sum_{i=1}^n x_i\right)^2}$$ and $$a =\frac{1}{n}\sum_{i=1}^n w_i -\frac{1}{n}b\sum_{i=1}^n x_i,$$ where $b$ is given by the previous formula. I suggest you look for "least squares" elsewhere for a more detailed discussion.
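A minimal NumPy sketch of the procedure (the data here are made up to follow $y \approx 2e^{x\log 2}$ with a little noise):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 8.2, 16.5, 32.8])   # roughly doubles per step

b, a = np.polyfit(x, np.log(y), 1)          # least-squares line w = a + b x, w = ln y
A, k = np.exp(a), b
print(A, k)                                 # A ~ 2, k ~ 0.69 ~ ln 2
```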
If $g(z)^3$ is analytic and $g(z)$ is continous, then $g(z)$ is analytic
We need $n \neq 0$ of course, since $g^0$ is the constant, hence analytic, function $z \mapsto 1$, whatever $g$ is. For $n \neq 0$, we can conclude that $g$ is analytic if $f$ is. If $f \equiv 0$, then clearly $n > 0$ and $g \equiv 0$. Otherwise, $f$ has only isolated zeros, and for $n < 0$ it can't have any. If $f(z)$ is nonzero on a small disk $D \subset S$, then there are $\lvert n\rvert$ holomorphic branches of $f^{1/n}$ on $D$. By continuity, $g$ must be one of those branches, hence $g$ is holomorphic on $D$. Thus we found that $g$ is holomorphic on $S\setminus f^{-1}(0)$. Since the zeros of $f$ are isolated, it follows that $g$ is holomorphic on $S$ by the Riemann removable singularity theorem.
how to find a vector?
You can simply write two equations for both conditions and then solve the system. They would be $2b_1-b_2=0$ and $2b_1^2-2b_1b_2+5b_2^2=1$.
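For instance, with SymPy (the symbol names are just placeholders):

```python
from sympy import symbols, solve

b1, b2 = symbols('b1 b2', real=True)
sols = solve([2*b1 - b2, 2*b1**2 - 2*b1*b2 + 5*b2**2 - 1], [b1, b2])
print(sols)   # [(-sqrt(2)/6, -sqrt(2)/3), (sqrt(2)/6, sqrt(2)/3)]
```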
Comparing Statements and predicates using Truth Tables
You have to read the "matrix" $p_1, p_2, p_3, p_4$ as interpretations of the predicate letter $p$, which is $T/F$ according to the values assigned to the variables $x$ and $y$, where the values of $x$ are read on the left and the values of $y$ on top. Consider e.g. $S_1 : ∃x ∀y\, p(x,y)$, and consider the matrix $p_1$. Is it true that there is a value for $x$ such that $p(x,y)$ is $T$ for all values of $y$? YES. Consider $p_1(0,y)$: it is $T$ for both values $0,1$ of $y$. Thus the predicate $p_1$ satisfies the formula $S_1$. The same goes for $S_4$, and these results are consistent with solution 1: "if we take $p$ to be $p_1$, then $S_1$ and $S_4$ are true, but $S_2$ and $S_3$ are false". Note: in principle, the approach is the same if we have to read the "matrix" in the other "direction" (as suggested by you: with the values for $x$ on top and the values for $y$ on the left), but in that case you can see that the predicate $p_1$ no longer satisfies the formula $S_1$. In the same way, you can verify that $p_2$ satisfies $S_1$.
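Over a finite domain these quantifier checks are literally nested loops; a small Python sketch (the table below is a hypothetical completion of $p_1$ — the discussion above only pins down the row $x=0$, which already settles $S_1$):

```python
D = (0, 1)
p1 = {(0, 0): True, (0, 1): True,     # row x = 0: T for every y
      (1, 0): False, (1, 1): False}   # hypothetical values

S1 = any(all(p1[x, y] for y in D) for x in D)   # exists x, forall y: p(x, y)
print(S1)                                       # True, witnessed by x = 0
```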
Is there any expansion for $\log(1+x)$ when $x\gt 1$?
You can expand the function $\log (1+x)$ around any point at which it is defined. This means there exists an expansion of $\log (1+x)$ around the point $x=2$, for example; however, it will be of the form $$\log(1+x) = \sum_{i=0}^\infty a_i (x-2)^i,$$ and the expansion will be valid for $x\in(-1,5)$. However, I imagine you want the expansion to be of the form $$\log(1+x)=\sum_{i=1}^\infty a_i x^i,$$ for which you will have a problem. The problem is that any power series $$\sum_{i=1}^\infty a_i (x-x_0)^i$$ converges on an interval symmetric around $x_0$ (meaning an interval of the type $(x_0-\delta, x_0+\delta)$). This means that if the series expansion of $\log(1+x)$ converged for some $x>1$, it would also converge for some $x<-1$, which is impossible.
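A quick numerical illustration of the failure at $x=2$: the partial sums of the Maclaurin series $\sum_{i\ge1}(-1)^{i+1}x^i/i$ blow up instead of approaching the true value.

```python
import math

x, s = 2.0, 0.0
for i in range(1, 25):
    s += (-1) ** (i + 1) * x ** i / i   # Maclaurin series of log(1 + x)
print(s, math.log(1 + x))               # partial sums diverge; log(3) ~ 1.0986
```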
Descending sequence of vector spaces
Let $V=\ell^2$, the space of all sequences $\{ a_i \}$ such that $\sum a_i^2$ converges, and let $V_n$ be the subspace defined by $a_0=a_1=\cdots=a_n=0$. Then $\bigcap_n V_n=\{0\}$, and $V$ has no countable (Hamel) basis.
Proving question is true with big o notation
Take $f(n) \in O(1)$. There exist $N_0$ and $M_0$ so that for all $n \geq N_0$, $|f(n)| \leq M_0$; hence $f$ is eventually a bounded function. Now take $g(n) \in O(\log(n))$. This says that there exist $N_1$ and $M_1$ so that for all $n \geq N_1$, we have $|g(n)| \leq M_1 \log(n)$. Here's an issue: we don't know what $f$ and $g$ look like. It could be that $g(n) \in O(1)$ as well, say $g(n) = M_1$ with $M_1 \geq M_0$; then eventually we have $g(n) \geq |f(n)|$. On the other hand, what if $M_1 \leq M_0$ and $g(n) = M_1$? It still could be that $g(n) \geq |f(n)|$, since all we know is an upper bound on $f$, or it could be that eventually $f(n) = M_0$, so that $g(n) \leq f(n)$. So we can't say anything about $f(n) \in O(1)$ and $g(n) \in O(\log(n))$ being such that $f(n) \leq g(n)$ or vice versa. Maybe another way to interpret your problem is this, though: is it true that $O(1) \subseteq O(\log(n))$? If $f(n) \in O(1)$, then there are $N$ and $M$ so that for all $n \geq N$ we have $|f(n)| \leq M$. Since $\log(n) \geq 1$ for all $n \geq 3$, taking $N_1 = \max\{N,3\}$ gives $|f(n)| \leq M \leq M\log(n)$ for all $n \geq N_1$, so that $f(n) \in O(\log(n))$. Note this is a strict subset, since $\log(n) \in O(\log(n))$ but it is unbounded, so $\log(n) \notin O(1)$. We actually have $O(1) \subsetneq O(\log(n))$. Maybe this is what you were wanting? Edit: If you're just trying to prove the inequalities are true, the first follows from the monotonicity of the logarithm (meaning if $m \leq n$, then $\log(m) \leq \log(n)$). Note $\log(e) = 1$, so for $n \geq 3$ (since $2 < e < 3$) we have $\log(n) > 1$. For $\log(n) < n$, let's examine the function $f(x) = x - \log(x)$. Its derivative is $f'(x) = 1 - \frac{1}{x}$, which is $0$ when $x = 1$, positive (meaning $f$ is increasing) for $x > 1$, and negative (meaning $f$ is decreasing) for $x < 1$. So the difference is increasing for $x > 1$, and since $f(e) = e - \log(e) = e - 1 > 0$, for all $n \geq 3$ we have $n > \log(n)$. Next, since $\log(n) > 1$ for $n \geq 3$, multiplying by $n$ gives $n < n\log(n)$ for $n \geq 3$. Finally, we know $\log(n) < n$ for $n \geq 3$, so $n \log(n) < n \cdot n = n^2$ for $n \geq 3$. Notice I'm implicitly using the fact that, for non-negative functions with $g_1(n) > 0$, if $f_1(n) \leq g_1(n)$ and $f_2(n) < g_2(n)$, then $f_1(n) f_2(n) < g_1(n) g_2(n)$. Make sure this makes sense.
What's the difference between explicit and implicit Runge-Kutta methods?
Runge-Kutta methods are methods for numerically estimating solutions to differential equations of the form $y^\prime=f(x,y)$. One is interested in both explicit and implicit methods, as they have quite different applications. To simplify things, I'll consider the two simplest Runge-Kutta methods that are usually ascribed to Euler. The (usual) Euler method is the simplest example of an explicit method: $$y_1=y_0+h\;f(x_0,y_0)$$ The backward Euler method is the simplest implicit method: $$y_1=y_0+h\;f(x_1,y_1)$$ To explain the notation: $(x_0,y_0)$ is the initial point, from which the Runge-Kutta method "launches" itself to generate a new point, $(x_1,y_1)$, where $x_1=x_0+h$ and $h$ is a so-called "step size". The Euler method is an explicit method in that the expression for $y_1$ depends only on $x_0$ and $y_0$. On the other hand, backward Euler is an implicit method, since the right-hand side also contains $y_1$; that is, $y_1$ is implicitly defined. Why would we need to consider both, when the explicit methods look simpler? That is because the implicit methods are in fact the most efficient way to handle so-called stiff differential equations, which are differential equations that usually feature a rapidly decaying solution. Explicit methods need to take very tiny values of $h$ to accurately estimate the solution, and this takes lots of time. Implicit methods allow for a more reasonably sized $h$, but you are now required to use an associated method for solving the implicit equation, like Newton-Raphson. Even with that overhead, implicit methods are more efficient for stiff equations. Of course, if the equations are not stiff, one uses explicit RK methods.
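The stiffness phenomenon is easy to reproduce. Here is a minimal Python sketch on the standard test problem $y' = -50y$, $y(0)=1$ (exact solution $e^{-50x}$), with a step size $h = 0.1$ that is too large for the explicit method; since $f$ is linear here, the implicit equation can be solved for $y_1$ in closed form:

```python
lam, h, steps = -50.0, 0.1, 10     # y' = lam * y, exact solution exp(lam * x)

y_explicit = y_implicit = 1.0
for _ in range(steps):
    y_explicit += h * lam * y_explicit   # forward Euler: y1 = y0 + h f(x0, y0)
    y_implicit /= (1 - h * lam)          # backward Euler, solved for y1

print(y_explicit)   # (1 + h*lam)^10 = (-4)^10 ~ 1.05e6: blows up
print(y_implicit)   # (1 - h*lam)^-10 = 6^-10 ~ 1.7e-8: decays, like the truth
```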
Finding the loci of $\arg\dfrac{z-a}{z-b}=\theta$
Since you are aware that the locus would be a circle, let me just proceed to how you can get the radius of the circle. Let $A$, $B$ and $C$ be the points representing the complex numbers $a$, $b$ and $z$. Then, in $\Delta ABC$, $\angle C = \theta$. By applying the sine rule, i.e. $$ {BC \over \sin A} = {AC \over \sin B} = {AB \over \sin C} = 2R $$ where $R$ is the circumradius of $\Delta ABC$, we get $$ R = \frac{AB}{2\sin C} = \frac{|a - b|}{2\sin \theta}. $$ If $\arg\left( \frac{z-a}{z-b} \right) = -\theta$, then the resulting circle will be the reflection of the circle mentioned before about the line $AB$. This happens because a change from $+\theta$ to $-\theta$ only changes the sense of rotation. I leave it up to you to verify this. EDIT: The locus will not be the whole circle, but only an arc of it; moreover, the endpoints $z = a$ and $z = b$ themselves are excluded.
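A numerical check in Python (the specific $a$, $b$, $\theta$ and the sampled arc angles are illustrative choices; the centre is placed on the perpendicular bisector so that the chord $ab$ subtends the central angle $2\theta$, per the inscribed angle theorem):

```python
import cmath, math

a, b, theta = 0 + 0j, 1 + 0j, math.pi / 3

R = abs(a - b) / (2 * math.sin(theta))          # predicted circumradius
mid = (a + b) / 2
d = abs(a - b) / (2 * math.tan(theta))          # centre-to-chord distance
centre = mid - 1j * d * (b - a) / abs(b - a)    # this side gives arg = +theta

for t in (3.5, 4.7, 5.9):                       # sample points on the major arc
    z = centre + R * cmath.exp(1j * t)
    print(cmath.phase((z - a) / (z - b)))       # each ~ 1.0472 = pi/3 = theta
```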
Existence of a free ultrafilter with nonempty countable intersections.
This is not entirely correct. Using the axioms of $\sf ZF$ we cannot even prove that free ultrafilters exist, so there cannot be an explicit definition of a free ultrafilter on any set. Even if we assume the axiom of choice, and can therefore prove the existence of a free ultrafilter, we cannot prove the existence of a countably complete free ultrafilter. If $\kappa$ is the least cardinality of a set $X$ on which there is a countably complete free ultrafilter, then in fact there is a $\kappa$-complete ultrafilter on $X$, namely one in which the intersection of fewer than $\kappa$ large sets is large. Such a $\kappa$ is called a measurable cardinal. One can prove that $\kappa$ is a strongly inaccessible cardinal, so its existence implies the consistency of $\sf ZFC$ as a whole. By Gödel's second incompleteness theorem, this means we cannot prove from $\sf ZFC$ the existence of such an ultrafilter.
How to prove $x_n$ converges as $n \to \infty$.
Hint: A proof without using integration concepts. Show that $$\frac{1}{n + 1} < \log\left(1 + \frac{1}{n}\right) < \frac{1}{n}. \tag{1}$$ You might need the facts that the sequence $\left(1 + \frac{1}{n}\right)^n$ increasingly converges to $e$ and the sequence $\left(1 + \frac{1}{n}\right)^{n + 1}$ decreasingly converges to $e$. Using $(1)$, show that $$\frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n + 1} < \log(n + 1) < 1 + \frac{1}{2} + \cdots + \frac{1}{n}. \tag{2}$$ Using $(2)$, show that the sequence $\{x_n'\}$ is increasing and bounded above, where $$x_n' = 1 + \frac{1}{2} + \cdots + \frac{1}{n} - \log (n + 1).$$ Hence $x_n'$ converges. Show that $x_n$ and $x_n'$ have the same limit.
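Numerically (assuming, as the notation suggests, $x_n = 1 + \frac12 + \cdots + \frac1n - \log n$), both sequences head to the Euler–Mascheroni constant $\gamma \approx 0.5772$; a quick Python check:

```python
import math

H = lambda n: sum(1.0 / k for k in range(1, n + 1))   # harmonic number

for n in (10, 1_000, 100_000):
    print(n, H(n) - math.log(n), H(n) - math.log(n + 1))   # both -> 0.5772...
```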
X=[0,1], consider the measure space (X,$F$,m), if there is no point in infinitely many Borel sets $E_n$, show that $m(E_n)=0$ for some n.
Define $$A_k := \{x \in E_k : \forall n > k~~x \notin E_n\}$$ It is easy to see that the $A_k$ are disjoint and $$\bigcup\limits_n E_n = \bigcup\limits_k A_k$$ From disjointness (and some work to see that the $A_k$ are measurable) you get $\sum_k m(A_k) \le m(X) < \infty$, so given $\varepsilon > 0$ you can find $N$ such that $$m\left(\bigcup\limits_{k=N}^{\infty} A_k\right) < \varepsilon$$ But $E_n \subseteq \bigcup\limits_{k=N}^{\infty} A_k$ for every $n > N$, a contradiction to the assumption that $m(E_n) \geq c > 0$ for all $n$.