How to compute $\partial \frac{1}{z^*}$?
The key here is to understand that $\partial_z\frac{1}{z^*}=0$ everywhere but at the origin. This is how a delta-singularity can arise in this context. Once one is aware of this, integrating this function on a square shouldn't be too difficult, and can be done without use of multivariable tools like Stokes' theorem. To wit, using the fundamental theorem of calculus and Fubini's theorem we obtain: $$\int_{[-a,a]\times[-a,a]}\partial_z\frac{1}{z^*}dxdy=\frac{1}{2}\int_{-a}^{a}\int_{-a}^{a}(\partial_x-i\partial_y)\frac{1}{x-iy}dxdy\\\begin{align}&=\frac{1}{2}\int_{-a}^ady\frac{1}{x-iy}\Bigg|_{(-a,y)}^{(a,y)}-\frac{i}{2}\int_{-a}^adx\frac{1}{x-iy}\Bigg|_{(x,-a)}^{(x,a)}\\&=\frac{1}{2}\int_{-a}^a{dy}\frac{2a}{a^2+y^2}-\frac{i}{2}\int_{-a}^{a}dx\frac{2ia}{a^2+x^2}\\&=2a\int_{-a}^a\frac{dy}{a^2+y^2}\\&=\pi\end{align}$$ which shows there is some kind of mass at the origin since the integral is non-zero for arbitrary sizes of the square. Of course, a full proof of the fact that this is a delta-function requires applying this to a test function, but for illustration purposes, this calculation is generally enough.
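The computation above collapses to the single real integral $2a\int_{-a}^a\frac{dy}{a^2+y^2}$, so one can sanity-check numerically that the result is $\pi$ independently of the size of the square. A minimal sketch (not part of the original argument):

```python
import numpy as np
from scipy.integrate import quad

# the boundary terms collapse to 2a * integral of 1/(a^2+y^2) over [-a, a];
# the value should be pi for every half-width a
for a in (0.5, 1.0, 3.0):
    val, _ = quad(lambda y: 2*a/(a**2 + y**2), -a, a)
    print(a, val, np.pi)   # val ~ 3.14159... in each case
```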
An equality with vectors
Your problem can be summarized as: $$\min_{x \in \mathbb{R},y \in [-\alpha,\alpha]^n} \{ x : xy + x(1-\alpha)b = a\}$$ Substitute $xy=z$: $$\min_{x \in \mathbb{R},z \in [-\alpha x,\alpha x ]^n} \{ x : z + x(1-\alpha)b = a\}$$ The dual problem is: $$ \max_{v \in \mathbb{R}^n,w_1 \in \mathbb{R}_+^n,w_2 \in \mathbb{R}_+^n} \{ a^Tv : (1-\alpha)b^Tv + \alpha e^T(w_1+w_2) = 1, \; e^T(v + w_1-w_2) = 0\}$$ I do not see an immediate closed-form solution to any of these problems, but you can feed either of the last two directly to a linear optimization solver.
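For concreteness, here is a minimal sketch of feeding the substituted primal to `scipy.optimize.linprog`. The data `a`, `b`, `alpha` are made up for illustration, and $x\ge0$ is assumed so that the box $[-\alpha x,\alpha x]^n$ is well-oriented; neither assumption comes from the original question.

```python
import numpy as np
from scipy.optimize import linprog

n, alpha = 3, 0.5
a, b = np.ones(n), np.ones(n)     # made-up data; the optimal x is 1 here

# variables v = (x, z_1, ..., z_n); minimize x
c = np.zeros(n + 1)
c[0] = 1.0

# equality constraints: z + x*(1 - alpha)*b = a
A_eq = np.hstack([((1 - alpha)*b)[:, None], np.eye(n)])

# box on z: z_i - alpha*x <= 0 and -z_i - alpha*x <= 0
A_ub = np.vstack([np.hstack([-alpha*np.ones((n, 1)),  np.eye(n)]),
                  np.hstack([-alpha*np.ones((n, 1)), -np.eye(n)])])
b_ub = np.zeros(2*n)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=a,
              bounds=[(0, None)] + [(None, None)]*n)
print(res.x[0])   # optimal x (equals 1.0 for this toy data)
```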
how can we compute the homology of these groups without using topology?
If you look in the book by Cartan-Eilenberg on Homological Algebra (and surely many others, but each of us has a favorite!) you will find explicit projective resolutions for both free groups and for free abelian groups. From that one can immediately compute homology and cohomology. You can also use technology. For example, it is easy to show that the (co)homology of a free product of groups is (in positive degrees) the direct sum of that of the factors (you can find this in Hilton-Stammbach's book on homological algebra, for example) and this easily reduces the computation of the (co)homology of free groups to that of infinite cyclic groups. Likewise, there is a Künneth formula for the (co)homology of direct products of groups which reduces the computation of (co)homology of free abelian groups to, again, that of infinite cyclic groups. Infinite cyclic groups are easily handled by writing down an explicit resolution: if we write $G$ for the infinite cyclic group generated by $\sigma$, we can use $$\mathbb ZG\xrightarrow{d}\mathbb ZG\stackrel\varepsilon\twoheadrightarrow\mathbb Z$$ with $\varepsilon$ the usual augmentation and $d$ the unique $\mathbb ZG$-linear map such that $d(1)=\sigma-1$. Alternatively, every textbook includes an explicit description of the functors $H^0(G,\mathord-)$ and $H^1(G,\mathord-)$, which in the case of the infinite cyclic group makes them immediately computable, and then you can check without any problems that $H^1(G,\mathord-)$ is right exact. It follows that the higher $H^p$'s are zero, and that we have computed the whole cohomology. One can proceed in a similar way for homology.
$\mathbb E[\boldsymbol 1_{\{(H,T)\}}\mid \mathcal F]$ here?
Two fair coins are thrown and $X$ denotes the number of heads thrown in total. In this answer I use probability space $(\Omega,\wp(\Omega),P)$ where $\Omega=\{(H,H),(H,T),(T,H),(T,T)\}$ and $P(\{\omega\})=\frac14$ for every $\omega\in\Omega$. Further $\mathcal F$ denotes the $\sigma$-algebra generated by $\{\{(H,H)\},\{(T,T)\},\{(H,T),(T,H)\}\}\subseteq\wp(\Omega)$. Then your question can be interpreted as: $$\text{"what is }\mathbb E[\mathbf1_{\{(T,H)\}}\mid X]\text{?"}$$ This is because $\sigma(X)=\mathcal F$. We find: $\mathbb E[\mathbf1_{\{(T,H)\}}\mid X=0]=0$ $\mathbb E[\mathbf1_{\{(T,H)\}}\mid X=1]=P(\{(T,H)\}\mid X=1)=\frac12$ $\mathbb E[\mathbf1_{\{(T,H)\}}\mid X=2]=0$ This allows the conclusion:$$\mathbb E[\mathbf1_{\{(T,H)\}}\mid X]=\frac12\mathbf1_{\{X=1\}}=\frac12\mathbf1_{\{(H,T),(T,H)\}}$$
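As a sanity check, here is a quick Monte Carlo sketch estimating $\mathbb E[\mathbf1_{\{(T,H)\}}\mid X=k]$ (the $0/1$ coin encoding is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
c1 = rng.integers(0, 2, n)       # 1 = heads, 0 = tails
c2 = rng.integers(0, 2, n)
X = c1 + c2                      # total number of heads
TH = (c1 == 0) & (c2 == 1)       # the outcome (T, H)
for k in range(3):
    print(k, TH[X == k].mean())  # approximately 0, 0.5, 0
```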
9-digit ternary sequences with no three consecutive digits that are the same
I think you would do better to use inclusion/exclusion. If all the $0$s are consecutive, "glue" them like you said. You have $1+3+3$ items and the number of arrangements is $$\frac{7!}{1!\,3!\,3!}\ .$$ Same for $1$s, same for $2$s. If all the $0$s are consecutive and all the $1$s are consecutive, a similar argument gives $$\frac{5!}{1!\,1!\,3!}\ .$$ Likewise for $0$s and $2$s, likewise for $1$s and $2$s. If all three digits occur in blocks of $3$ there are $3!$ possibilities. By inclusion/exclusion, your final answer will be $$\frac{9!}{3!\,3!\,3!}-3\times\frac{7!}{1!\,3!\,3!}+3\times\frac{5!}{1!\,1!\,3!} -3!\ .$$
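A brute-force enumeration confirms the count (a sketch; it assumes, as the baseline $9!/(3!\,3!\,3!)$ does, that each digit appears exactly three times, in which case "three consecutive equal digits" is the same as "all three copies of some digit form a block"):

```python
from itertools import permutations
from math import factorial

seqs = {"".join(p) for p in permutations("000111222")}
good = [s for s in seqs if all(d*3 not in s for d in "012")]

formula = (factorial(9)//factorial(3)**3
           - 3*factorial(7)//factorial(3)**2
           + 3*factorial(5)//factorial(3)
           - factorial(3))
print(len(good), formula)   # both 1314
```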
Result for $A^{-\frac{1}{2}}A(A^{-\frac{1}{2}})^T$
Yes, since for every symmetric non-negative definite matrix $A$, there exists a square-root factor $A^{1/2}$ such that $ A^{1/2}\left(A^{1/2}\right)^T = A$. If $A$ is positive definite, $A$ is invertible and $A^{1/2}$ is invertible as well. Therefore, you have $$A^{-1/2} A \left(A^{-1/2}\right)^T =A^{-1/2} A^{1/2}\left(A^{1/2}\right)^T \left(A^{-1/2}\right)^T = I_n I_n = I_n. $$
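A quick numerical illustration (a sketch with a randomly generated positive definite matrix, taking the Cholesky factor $L$ as the square-root factor, so $A=LL^T$):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4))
A = M @ M.T + 4*np.eye(4)          # symmetric positive definite
L = np.linalg.cholesky(A)          # A = L @ L.T
Linv = np.linalg.inv(L)
print(np.allclose(Linv @ A @ Linv.T, np.eye(4)))   # True
```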
Decimal expansions of rational numbers.
For $\dfrac {1}{6} \dots$ $\begin {align} 1\operatorname {mod} 6 \equiv 1 \\ 10\operatorname {mod} 6 \equiv 4 \\ 40\operatorname {mod} 6 \equiv 4 \end {align}$ $4$ is the repeating remainder, so we say $\color{red}{10^2 - 4} = 96 = 6 \cdot 16$, and $(10^2 - 4)x = 16 \rightarrow x = \dfrac {16}{10^2-4} \rightarrow x = \dfrac {16}{96} \rightarrow \bbox [2px, border: 2px solid black]{x = .1\overline{6}}$. For $\dfrac {1}{37} \dots$ $\begin {align} 1\operatorname {mod} 37 \equiv 1 \\ 10\operatorname {mod} 37 \equiv 10 \\ 100\operatorname {mod} 37 \equiv 26 \\ 260\operatorname {mod} 37 \equiv 1 \end {align}$ $1$ is the repeating remainder, so we say $10^3 - 1 = 999 = 37 \cdot 27$, and $\color{red}{(10^3 - 1)}x = 27 \rightarrow x = \dfrac {27}{10^3-1} \rightarrow x = \dfrac {27}{999} \rightarrow \bbox [2px, border: 2px solid black] {x = \overline{.027}}$. It's equivalent to what we do when we write out the fraction in long division, because in each case we're multiplying by $10$ when we reduce the modulo, then stopping when we've found a repeating remainder. The power of $10$ at which a remainder repeats is where we find the equivalent fraction.
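The remainder-tracking procedure is easy to mechanize. A small sketch (the function name is made up):

```python
def repeating_decimal(p, q):
    """Digits of p/q (p < q): returns (non-repeating part, repeating part)."""
    seen, digits, r = {}, [], p % q
    while r and r not in seen:
        seen[r] = len(digits)     # remember where this remainder occurred
        r *= 10                   # same "multiply by 10, reduce mod q" step
        digits.append(r // q)
        r %= q
    if not r:                     # terminating decimal
        return "".join(map(str, digits)), ""
    i = seen[r]                   # the remainder repeats: cycle starts here
    return ("".join(map(str, digits[:i])),
            "".join(map(str, digits[i:])))

print(repeating_decimal(1, 6))    # ('1', '6')   i.e. 0.1(6)
print(repeating_decimal(1, 37))   # ('', '027')  i.e. 0.(027)
```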
Maximum likelihood estimator doesn't exist
The likelihood function, as a function of the two variables $\theta$ and $\sigma$, takes on arbitrarily large values when evaluated at $\theta=x_1$ and at $\sigma$ very close to $0$, because of the division by $\sigma$ in the second term in your formula for the density. Other values of $\theta$ close to any of the $x_i$ and with $\sigma\ge\min_i|x_i-\theta|$ will similarly give rise to large values of the likelihood.
Immersion is a diffeomorphism
Once you know that $f$ is a local diffeomorphism, to conclude that it's a global diffeomorphism you just need to show that it's bijective. Surjectivity is pretty easy: Because $X$ is compact, $f(X)$ is also compact, and because $S^n$ is Hausdorff, $f(X)$ is closed in $S^n$. On the other hand, the fact that $f$ is a local diffeomorphism implies that it's an open map, and thus $f(X)$ is open. Since $S^n$ is connected, $f(X)$ is all of $S^n$. Injectivity is quite a bit harder. The only proof I know uses the theory of covering spaces. Because $f$ is a proper local homeomorphism, it's a covering map (which is another way to prove surjectivity), and because $S^n$ is simply connected, it follows that $f$ is injective. One place to read about covering spaces is in my book Introduction to Topological Manifolds.
If the sum $A_1 + A_2 + \cdots + A_n$ is equal to the negative of the identity operator on $V$, show that $\dim_{\mathbb{R}} V$ is even.
Suppose $\dim_{\mathbb{R}}(V)$ is odd. Then for each $i$ there exists a real eigenvalue $r_i$ of $A_i$, and hence an eigenspace $W_i$ corresponding to $r_i$. Now since the $A_i$'s pairwise commute, for each fixed $i$ the space $W_i$ is invariant under all of $\{A_j \mid j\in \{1,2,\dots,n\}\}$. Now if each of the $W_i$, $i\in \{1,2,\dots,n\}$, equals $V$, then we have $\sum_{i=1}^{n} r_i =-1$, which is a contradiction, since none of the operators $A_i$ has a negative real eigenvalue. So for some $i$ we have $0 \subsetneq W_i \subsetneq V$. Say $W=W_i$. Now $V/W$ is also invariant under all of the $A_i$'s, so all of the $A_i|_{W}$ and $A_i|_{V/W}$ have the same properties as the $A_i$'s (I mean the same properties as given in the problem). So, by the induction hypothesis, $\dim_{\mathbb{R}}(W)$ and $\dim_{\mathbb{R}}(V/W)$ are even, which forces $\dim_{\mathbb{R}}(V)$ to be even.
Doubt in The inverse function theorem of Rudin's Principles of mathematical Analysis (Theorem 9.24)
Note that $\varphi(x)$ is the sum of $x$ with $A^{-1}\bigl(y-f(x)\bigr)$. Well, $x$ is just the identity function, and $A^{-1}$ is linear. Finally, $y$ is constant. Therefore, by the rule for differentiating sums and by the chain rule (together with the fact that $(A^{-1})'=A^{-1}$, since $A^{-1}$ is linear), we get that\begin{align}\varphi'(x)&=\operatorname{Id}+A^{-1}\circ(y-f)'(x)\\&=\operatorname{Id}+A^{-1}\bigl(-f'(x)\bigr)\\&=\operatorname{Id}-A^{-1}f'(x).\end{align}
A difference in a formula of theorem 4.2(e) on congruence relations.
Well, they both hold. By definition of modular congruence, $a\equiv b\pmod n$ means $n|b-a$, that is, $b-a=dn$ for some $d$. But then $bc-ac=dnc$, showing $ac\equiv bc\pmod{nc}$. Generally, if $x\equiv y\pmod{nc}$, we always have $x\equiv y\pmod n$, too, because the condition means $nc|y-x$ which implies $n|y-x$.
Need help with this question.
Using Double Integration Let $\displaystyle I = \int^{\infty}_{0}\frac{e^{-x}-e^{-7x}}{x}dx = \int^{\infty}_{0}\bigg(\int^{7}_{1}e^{-tx}dt\bigg)dx$ So $\displaystyle I = \int^{7}_{1}\bigg(\int^{\infty}_{0}e^{-xt}dx\bigg)dt = \int^{7}_{1}\frac{1}{t}dt=\ln(t)\bigg|^{7}_{1} = \ln(7).$ Using Differentiation under integral sign Let $\displaystyle I(\alpha) = \int^{\infty}_{0}\frac{e^{-\alpha x}-e^{-x}}{x}dx.$ Then $\displaystyle I'(\alpha) = -\int^{\infty}_{0}\frac{x\cdot e^{-\alpha x}}{x}dx = \frac{e^{-\alpha x}}{\alpha}\bigg|^{\infty}_{0}=-\frac{1}{\alpha}$ So $\displaystyle I (\alpha) = -\ln(\alpha)+\mathcal{C}.$ Put $\alpha =1$: we get $I(1)=0$, so $\mathcal{C}=0$ and $\displaystyle I (\alpha) = \int^{\infty}_{0}\frac{e^{-\alpha x}-e^{-x}}{x}dx= -\ln(\alpha).$ So $\displaystyle \int^{\infty}_{0}\frac{e^{-x}-e^{-7x}}{x}dx = -I(7) = \ln(7).$
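A numerical sanity check of the final value (a sketch):

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: (np.exp(-x) - np.exp(-7*x))/x, 0, np.inf)
print(val, np.log(7))   # both ~ 1.945910...
```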
Why the same position u for two different functions applying mean-value theorem?
Hint: It can be the same $u$. I think that this refers implicitly to Cauchy's mean value theorem (see link).
A closed form for $\sum _{j=0}^{\infty } -\frac{\zeta (-j)}{\Gamma (j)}$
The reflection formula for $\zeta(s)$ transforms the sum into $\sum_{n=1}^{\infty}\left(2\pi i\right)^{-2n}\left(2-4n\right)\zeta\left(2n\right)$. The latter can be computed by differentiating the well-known generating function $\sum_{n=0}^{\infty}\zeta\left(2n\right)z^{2n}=-\frac{\pi z\cot \pi z}{2}$, with the result $$1-\frac{1}{2\cosh1-2}.$$
Angle between tangents and angle subtended by radii are supplementary
Hint: Try to use congruence. If you can prove that the two triangles are congruent then it is a kite. Now, you can prove it easily.
Normally distributed $\log x$
If $$Y=\log X\sim N(\mu;\sigma^2)$$ Then $$X\sim \text{Log-Normal}$$ To find the pdf of $X$, simply use the fundamental transformation theorem (EDIT), that is, the following: $$f_X(x)=f_Y(g(x))\left|\frac{d}{dx}g(x)\right|$$ where $g(x)=\log x$, so that $Y=g(X)=\log X$. Thus $$f_X(x)=\frac{1}{\sigma x\sqrt{2\pi}}e^{-(\log x-\mu)^2/(2\sigma^2)},\qquad x >0.$$ Here read more on Log-normal distribution
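As a quick check, the derived density agrees with scipy's log-normal (a sketch; $\mu$ and $\sigma$ are arbitrary illustrative values, and scipy's parametrization uses `s=sigma`, `scale=exp(mu)`):

```python
import numpy as np
from scipy.stats import lognorm

mu, sigma = 0.3, 0.8

def f_X(x):
    return np.exp(-(np.log(x) - mu)**2/(2*sigma**2))/(sigma*x*np.sqrt(2*np.pi))

x = np.array([0.5, 1.0, 2.0, 4.0])
print(f_X(x))
print(lognorm.pdf(x, s=sigma, scale=np.exp(mu)))   # same values
```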
Diagonalization without computing the inverse
Calculate the normalised eigenvectors of your matrix and make these eigenvectors the columns of a new matrix $U$. Provided $P$ is symmetric, its eigenvectors can be chosen orthonormal, so $U$ is orthogonal and $U^T = U^{-1}$, which is what lets you avoid computing an inverse. Now calculate $U^T P U$ using matrix multiplication; you will obtain a diagonal matrix with the eigenvalues of $P$ as the entries on the diagonal. NB: $U^T$ is the transpose of $U$.
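In numpy this looks as follows (a sketch with a made-up symmetric matrix; `eigh` returns orthonormal eigenvectors, so no inverse is ever computed):

```python
import numpy as np

P = np.array([[2., 1.],
              [1., 3.]])
eigvals, U = np.linalg.eigh(P)   # columns of U: orthonormal eigenvectors
D = U.T @ P @ U                  # U^T plays the role of U^{-1}
print(np.round(D, 10))           # diagonal, entries = eigenvalues of P
print(eigvals)
```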
Gateaux Differentiation in Infinite Dimensional Space
Take $X$ an infinite dimensional Banach space, and $Y:=\Bbb R$. There exists a non-continuous linear functional $f$. Let $F(x):=f(x)^2$ and let $x_0$ be such that $f(x_0)\neq 0$. Then \begin{align} F(x_0+th)-F(x_0)&=(f(x_0)+tf(h))^2-f(x_0)^2\\ &=2tf(x_0)f(h)+t^2f(h)^2, \end{align} so the $A$ which would work is $A(h):=2f(x_0)f(h)$, which is not continuous.
Algebra with proportionalities?
I'm not entirely sure where you are coming from on this, so tell me if I am answering a different question here. Proportionality is defined as follows. $P\propto \rho$ means $P=c\rho$ for a constant $c$. It must mean that and that is all it ever means. It is literally a shorthand way of writing $P=c\rho$. Thus, you would not logically get $Pm\propto \rho$. This would imply that $Pm=c\rho$ for some constant $c$, or $P=\frac{c}{m}\rho$, but you already said that $P\propto \rho$. Since $m$ is not a constant (at least, not mathematically, although mass usually stays constant), we have a contradiction. Going on from that, your last line would also give a contradiction. You write $Pm=\gamma m$, which implies of course that $P=\gamma$, a constant. Hopefully that helps.
Intersection of distinct maximal subgroups in a finite simple group
HINT: Let $H$ be the normalizer of $M\cap N$ in $G$. Since $M$ and $N$ are Abelian, $M\subseteq H$ and $N\subseteq H$. What does this tell you about $H$?
Show that if $f_x \rightarrow \eta$ and $g_x \rightarrow \zeta$ then $f_x+g_x \rightarrow \eta + \zeta$
It is almost correct, but you shouldn't define $\nu=2\epsilon$. You should start with $\nu >0$ and take $\epsilon =\frac { \nu} 2$ in your argument.
tangential and normal projection of a vector in the ambient vector field of a sphere
At $(1,1,1)$ the unit normal vector is $n = \frac{1}{\sqrt 3}(1,1,1)$. The projection of $v$ onto $n$ is $$ v^N = (v \cdot n) n = \frac{1}{3}(1,1,1). $$ Now since $v^T + v^N = v$ we can solve $$ v^T = v - v^N = (2/3, -1/3,-1/3). $$
Flux of a vector field.
It would be the latter, $(x,y,z)=(a\sin\phi\cos\theta,...)$; here $r=a$ is fixed, because this is a surface integral $(dS)$, rather than a volume integral $(dV)$. Now with regards to the first integral, I think we're in luck: $4\pi\int_0^a(rg'(r)+3g(r))r^2dr=4\pi\int_0^a r^3g'(r)+3r^2g(r)dr=4\pi\int_0^a \frac{d}{dr}(r^3g(r))dr=4\pi a^3g(a)$
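One can confirm the exact-derivative step with sympy for a concrete $g$, say $g(r)=\sin r$ (an arbitrary illustrative choice):

```python
import sympy as sp

r, a = sp.symbols('r a', positive=True)
g = sp.sin(r)
lhs = 4*sp.pi*sp.integrate((r*sp.diff(g, r) + 3*g)*r**2, (r, 0, a))
print(sp.simplify(lhs - 4*sp.pi*a**3*g.subs(r, a)))   # 0
```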
Complex Fourier series and its representation
For A., you don't need anything related to Fourier analysis. Just write $$f(x) = \frac{1}{2}[f(x) + f(-x)] + \frac{1}{2}[f(x) - f(-x)] := h(x) + g(x).$$ It is easily checked that $h$ and $g$ so defined are even and odd, respectively. For B., it can be shown by definition that the Fourier coefficients of any even real-valued function are real and those of any odd real-valued function are purely imaginary. For example, the $n$th Fourier coefficient for $h$ is given by \begin{align} & c_n = \frac{1}{2\pi}\int_{-\pi}^\pi h(x)e^{-inx} dx = \frac{1}{2\pi}\int_{-\pi}^\pi h(x)\cos(-nx)dx \in \mathbb{R}, \end{align} since $\sin(-nx)h(x)$ is odd, and the integration interval is symmetric about $0$. C. also follows easily by definition (no convergence issues are involved): using integration by parts, the $n$th Fourier coefficient for $f'$ is calculated by \begin{align} c_n' = & \frac{1}{2\pi}\int_{-\pi}^\pi f'(x)e^{-inx} dx \\ = & \frac{1}{2\pi}f(x)e^{-inx}\big |_{-\pi}^\pi - \frac{1}{2\pi}\int_{-\pi}^\pi f(x)(-in) e^{-inx} dx \\ = & in\frac{1}{2\pi}\int_{-\pi}^\pi f(x)e^{-inx} dx \\ = & inc_n, \end{align} where we used that $f$ is of period $2\pi$.
Characteristically simple subgroup
Hint: if $K \text{ char } H \trianglelefteq G $, then $K \trianglelefteq G$.
Mathematics, Philosophy and writing.
I mentioned in the comments:

- Charles Sanders Peirce
- W.V.O. Quine
- Bertrand Russell
- A.N. Whitehead

But also:

- Bernard Bolzano
- George Boolos
- Alonzo Church
- René Descartes
- Solomon Feferman
- Gottlob Frege
- Kurt Gödel
- David Hilbert
- Pierre-Simon Laplace
- Jan Łukasiewicz
- Blaise Pascal
- Henri Poincaré
- Hilary Putnam
- Frank P. Ramsey
- Raymond Smullyan
- Alfred Tarski

And this list omits many people who were philosophers who considered mathematics (such as Wittgenstein or Hintikka) or mathematicians who thought philosophically about mathematics (such as Brouwer) or logicians who published mathematical logic articles in philosophy journals, or mathematicians who found themselves doing early research on AI and therefore became philosophers of mind (such as Turing or Yehoshua Bar-Hillel), or a large number of scholars (such as Pythagoras, Galileo, or Newton) who lived before the modern separation of mathematics and philosophy, as well as many mathematician-philosophers who are not truly famous.
Manipulation of Taylor expansion of $e^x$
Note that differentiating and multiplying by $x$ in the power series $\sum_{n=0}^\infty a_nx^n$ gives the power series $(xD)\sum_{n=0}^\infty a_nx^n=\sum_{n=0}^\infty na_nx^n$, where $D$ is the differentiation operator. So in general for any polynomial $p(n)$, we have the identity $$p(xD)\sum_{n=0}^\infty a_nx^n=\sum_{n=0}^\infty p(n)a_nx^n.$$ In this case taking $a_n=1/n!$ gives that the desired expression is $(xD)^2e^x$, or $$ (xD)^2e^x=(xD)xe^x=x(e^x+xe^x)=xe^x+x^2e^x. $$
Finding $f(x)$ for any $x$ by assuming $a_{n+1}$
For the induction base case, note that the RHS must be in the range $[0,2/3]$, so the LHS, and therefore the solution, must also be in that range. Letting our first guess be $1/3$, we are at most $1/3$ away.
Joining or concatenating two different vectors into a one vector
I would write $(x,y)=(x_0, \dots , x_n, y_0, \dots, y_n)$ the first time, and indicate to the reader that from now on this is what $(x,y)$ is going to mean.
Find $a$ so that $a(e^{-2x}-e^{-3x})$ is a probability density function.
$a$ is determined by the equation: $$\int_{-\infty}^\infty f(x) \ dx = 1$$So for this problem:$$a\int_{0}^\infty e^{-2x}-e^{-3x}\ dx = 1$$Now you can solve for $a$. For the second question, $$P(X\le 1)=\int_{0}^1 f(x) \ dx$$ Plug the values in and you can solve for the probability.
Find new point of tangency on circular arc having second point which is known but unknown center
Your drawing is very... unusual. What's wrong with capital letters? Note that $\angle dea=90^\circ-6^\circ=84^\circ$. $$\overline{bc} = dx+R\sin \angle dea\tag{1}$$ $$\overline{ga}=\overline{df}\cos \angle bfd-R\cos \angle dea + R\tag{2} $$ This leads to: $$\overline{df} \sin 6^\circ+R\sin 84^\circ=W\tag{3}$$ $$\overline{df}\cos 6^\circ-R\cos 84^\circ + R=W+H\tag{4} $$ This linear system of two equations, (3) and (4), has two unknowns, $\overline{df}$ and $R$, and can be easily solved in terms of $W$ and $H$. The rest is easy: $$dx=\overline{df}\sin 6^\circ$$ $$dy=\overline{df}\cos 6^\circ-H$$
Computation of a series.
First, we will transform $S_n$ to a form easier to manipulate. Let $C \subset \mathbb{C}$ be a circle of radius $r \ll 1$ centered at $0$. For any $n \in \mathbb{N}$, $m \in \mathbb{N}^n$, we can single out those $m \in \mathfrak{M}_n$ with help of contour integrals of the form: $$\delta_n(m) \stackrel{def}{=} \frac{1}{2\pi i} \oint_{C} s^{\sum_{k=1}^n k m_k} \frac{ds}{s^{n+1}} = \begin{cases}1, & m \in \mathfrak{M}_n\\ 0, & \text{ otherwise }\end{cases}$$ Together with following integral representation of factorial: $$n! = \Gamma(n+1) = \int_0^\infty t^n e^{-t}dt$$ We have $$\begin{align} S_n &= \sum_{m \in \mathbb{N}^n} \delta_n(m) \int_0^\infty \prod_{k=1}^n \frac{1}{m_k!} \left(\frac{t}{k+1}\right)^{m_k} t^n e^{-t} dt\\ &= \frac{1}{2\pi i}\sum_{m \in \mathbb{N}^n} \oint_C \left[\int_0^\infty \prod_{k=1}^n \frac{1}{m_k!} \left(\frac{ts^k}{k+1}\right)^{m_k} \left(\frac{t}{s}\right)^ne^{-t} dt \right] \frac{ds}{s}\\ &= \frac{1}{2\pi i}\oint_C \left[\int_0^\infty \prod_{k=1}^n \left(\sum_{m_k=0}^\infty \frac{1}{m_k!} \left(\frac{ts^k}{k+1}\right)^{m_k}\right) \left(\frac{t}{s}\right)^ne^{-t} dt \right] \frac{ds}{s}\\ &= \frac{1}{2\pi i}\oint_C \left[\int_0^\infty \prod_{k=1}^n \exp\left(\frac{ts^k}{k+1}\right) \left(\frac{t}{s}\right)^ne^{-t} dt \right] \frac{ds}{s}\\ &= \frac{1}{2\pi i}\oint_C \left[\int_0^\infty \exp\left(\sum_{k=1}^n\frac{ts^k}{k+1}\right) \left(\frac{t}{s}\right)^ne^{-t} dt \right] \frac{ds}{s}\\ &\stackrel{\color{blue}{[1]}}{=} \frac{1}{2\pi i}\oint_C \left[\int_0^\infty \exp\left(\sum_{k=1}^\infty\frac{ts^k}{k+1}\right) \left(\frac{t}{s}\right)^ne^{-t} dt \right] \frac{ds}{s}\\ &= \frac{1}{2\pi i}\oint_C \left[\int_0^\infty \exp\left[-t\left(\frac{\log(1 - s)}{s} + 2\right)\right]\left(\frac{t}{s}\right)^n dt \right] \frac{ds}{s}\tag{*1}\\ \end{align} $$ Next, let $S(x) \stackrel{def}{=} \sum_{n=0}^\infty S_n \frac{x^n}{n!}$ be the EGF (exponential generating function) for $S_n$. $\Delta(x) = \sum_{n=0}^\infty S_n \frac{x^{n+1}}{(n+1)!}$ be the series we want to study its convergence. They are related by the relation $\Delta(x) = \int_0^x S(t) dt$. For any $x$ with $|x| \ll r$, $(*1)$ implies $$\begin{align} S(x) &= \frac{1}{2\pi i}\oint_C \left[\int_0^\infty \exp\left[-t\left(\frac{\log(1 - s)}{s} + 2 - \frac{x}{s}\right)\right] dt \right] \frac{ds}{s}\\ &= \frac{1}{2\pi i} \oint_C \frac{ds}{\log(1-s) + 2s - x} \end{align} $$ Change variable to $y = -\log(1-s) \iff s = 1 - e^{-y}$. When $r$ is small, the image of $C$ in $y$-space is close to circle $C$. We can deform the contour back to $C$ without changing the integral. This leads to $$S(x) = \frac{1}{2\pi i}\oint_C \frac{dy}{P(x,y)} \quad\text{ where }\quad P(x,y) = (2-x-y)e^y - 2 $$ Under the condition $|x| < |y| = r \ll 1$, we have $$P(x,y) \approx (2 - x - y) (1 + y + O(r^2)) - 2 \approx y - x + O(r^2)$$ This means for fixed $x$ and as a function in $y$, $P(x,y)$ has only one root inside $C$. Furthermore, the root in $y$ is close to $x$. Let $\eta$ be that root, we have $$\begin{align} P(x,\eta) = 0 &\iff (2-x-\eta)e^\eta - 2 = 0 \iff (\eta + x - 2)e^{\eta + x - 2} = -2e^{x-2}\\ & \implies 2 - x - \eta = -W(-2e^{x-2}) \end{align} $$ where $W(z)$ is a branch of the Lambert-W function. 
In terms of $\eta$, we have $$\begin{align} S(x) &= \text{Res}_{y=\eta}\left(\frac{1}{P(x,y)}\right) = \left.\frac{1}{\frac{\partial}{\partial y}P(x,y)}\right|_{y=\eta} = \frac{1}{(1 - x - \eta)e^\eta}\\ &= \frac{2-x-\eta}{2(1-x-\eta)} = \frac{W(-2e^{x-2})}{2(1+W(-2e^{x-2}))} \end{align} $$ Since $S(0) = 1$, we need to choose a branch of the Lambert W function with $W(-2e^{-2}) = -2$. The correct branch is the "lower branch" described in the above wiki link. It is usually denoted as $W_{-1}(\cdot)$. In terms of it, we find $$S(x) = \frac{W_{-1}(-2e^{x-2})}{2(1+W_{-1}(-2e^{x-2}))}$$ Notice the branches of the Lambert W function satisfy the ODE $$z\frac{d}{dz}W(z) = \frac{W(z)}{1+W(z)}\tag{*2}$$ We can integrate $(*2)$ and deduce a closed form expression for $\Delta(x)$: $$\Delta(x) = \frac12 \int_0^x \left[ z\frac{dW_{-1}(z)}{dz} \right]_{z=-2e^{t-2}} dt = 1 + \frac12 W_{-1}(-2e^{x-2})\tag{*3}$$ $W_{-1}(z)$ has two branch cuts, one terminating at $z = -\frac1e$, the other at $z = 0$. The singularity of $\Delta(x)$ closest to the origin is located at $x = 1 - \log(2)$. As a result, the radius of convergence $r_0$ of the power series expansion of $\Delta(x) = \sum\limits_{n=0}^\infty S_n\frac{x^{n+1}}{(n+1)!}$ equals $1 - \log(2)$. A corollary of this is$\color{blue}{{}^{[2]}}$ $$\frac{S_n}{(n+1)!} = o(\rho^n)\quad\text{ for any }\; \rho > \frac{1}{1-\log(2)} \approx 3.258891353270929$$ As a double check, we evaluate the power series expansion of $\Delta(x)$ using the following command Series[1+1/2*LambertW[-1,-2*Exp[x-2]],{x,0,8}] on WA (Wolfram Alpha). WA returns $$\begin{align} \Delta(x) = & x+\frac{{x}^{2}}{2}+\frac{5\,{x}^{3}}{6}+\frac{41\,{x}^{4}}{24}+\frac{469\,{x}^{5}}{120}+\frac{6889\,{x}^{6}}{720}\\ & +\frac{24721\,{x}^{7}}{1008}+\frac{2620169\,{x}^{8}}{40320}+\frac{64074901\,{x}^{9}}{362880} + \cdots \end{align}$$ Translated back to $S_n$, this is equivalent to $$( S_0,S_1,\ldots ) = (1, 1, 5, 41, 469, 6889, 123605, 2620169, 64074901,\ldots )$$ For $n \le 5$, I have checked by hand this is indeed the correct value. An OEIS search returns the sequence OEIS A032188. Up to $n = 18$, I've verified the $S_n$ extracted from the expansion of $(*3)$ match the numbers on OEIS. Look at the references there and see whether there is anything useful for your purposes. Notes $\color{blue}{[1]}$ - As a function of $s$, $$\exp\left(\sum_{k=1}^n\frac{ts^k}{k+1}\right) \left(\frac{t}{s}\right)^n\frac{e^{-t}}{s} = \frac{1}{s^{n+1}}A(s)\quad\text{ and }\quad \exp\left(\sum_{k=n+1}^\infty\frac{ts^k}{k+1}\right) = 1 + s^{n+1}B(s) $$ where $A(s), B(s)$ are analytic over the disc bounded by $C$. This implies $$\exp\left(\sum_{k=1}^\infty\frac{ts^k}{k+1}\right) \left(\frac{t}{s}\right)^n\frac{e^{-t}}{s} = \exp\left(\sum_{k=1}^n\frac{ts^k}{k+1}\right) \left(\frac{t}{s}\right)^n\frac{e^{-t}}{s} + A(s)B(s)$$ Changing the upper bound in the sum within the exponent from $n$ to $\infty$ modifies the integrand by a function analytic over the disc bounded by $C$. The value of the contour integral over $C$ remains the same. $\color{blue}{[2]}$ - A more detailed analysis suggests that for large $n$, $S_n$ has the following approximation: $$S_n \approx \frac{(2n)!}{\sqrt{8r_0}n!(4r_0)^n}\left( 1 - \frac{r_0}{6(2n-1)} + \cdots \right)\quad\text{ where }\quad r_0 = 1 - \log(2)$$ For $n$ as small as $4$, this formula gives a relative error below $10^{-4}$ (checked against numbers from OEIS). The leading behavior of the coefficients of $\Delta(x)$ should be: $$\frac{S_n}{(n+1)!} = O\left(\frac{r_0^{-n}}{\sqrt{8\pi r_0} n^{3/2}}\right)$$
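The same cross-check can be reproduced without WA; here is a sketch with mpmath, expanding the closed form $(*3)$ numerically on the branch $W_{-1}$ and recovering the $S_n$ (it assumes the numerical Taylor coefficients are accurate enough at this precision to round to the exact integers):

```python
from mpmath import mp, lambertw, exp, taylor, factorial

mp.dps = 30
f = lambda x: 1 + lambertw(-2*exp(x - 2), -1)/2   # Delta(x), formula (*3)
coeffs = taylor(f, 0, 9)                          # coefficients of x^0..x^9
S = [coeffs[n + 1]*factorial(n + 1) for n in range(9)]
print([int(round(float(s.real))) for s in S])
# [1, 1, 5, 41, 469, 6889, 123605, 2620169, 64074901]
```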
Number of solutions of $\ln(|a-x|)=x$
The equation can be split into two parts $$\ln(a-x)=x,\quad x<a$$ $$\ln(x-a)=x,\quad x>a$$ The first equation has one and only one solution for any value of $a$. Consider $f(x)=\ln(a-x)-x$. Its derivative is $f'(x)=\frac{1}{x-a}-1$, which is negative for $x<a$. Thus $f(x)$ is decreasing and its range is $(-\infty,\infty)$, so by the intermediate value theorem it assumes once and only once the value $f(x)=0$. For the second equation consider $g(x) = \ln(x-a)-x$. $$ \mathop {\lim }\limits_{x \to a^ + } \ln \left( {x - a} \right) - x = - \infty $$ $$ \mathop {\lim }\limits_{x \to + \infty } \ln \left( {x - a} \right) - x = - \infty $$ We have $$g'(x) = \frac{1}{x-a}-1=\frac{1-x+a}{x-a}$$ $g'(x) = 0 $ for $x=1+a$, and we can see that this is a point of maximum since $g'(x)<0$ for $x>1+a$ and $g'(x)>0$ for $a<x<1+a$. For $a=-1$ we have $x=0$, $g(0)=0$, and the tangent at $(0,0)$ is $y=0$: one single intersection. The $y$-value of the point of maximum is $g(1+a)=\ln(1+a-a)-1-a=-1-a$, which is positive exactly when $-1-a>0$, that is, $a<-1$. So $g(x)$ has two roots for $a<-1$: the second derivative is $g''(x)=-\frac{1}{(x-a)^2}$, negative for every $x$, which means the concavity is downward on all of the function's domain; therefore $g(x)$ must cross the $x$-axis at exactly two points, because the limits at $a^+$ and $+\infty$ are both $-\infty$ and the maximum is positive. On the other hand $g(x)$ has no roots for $a>-1$, since then the maximum is negative. Summing up, the second equation has no roots for $a>-1$, one root for $a=-1$ and two roots for $a<-1$. In conclusion we can say that the given equation $$\ln|a-x|=x$$ has three roots for $a<-1$, two roots for $a=-1$ and one root for $a>-1$.
writing proof for greatest common divisor and least common multiple of fractions
Let $\frac ab$ and $\frac nm$ be positive fractions in lowest terms. $$\frac ab = \left[\frac {a}{\gcd(a,n)}\cdot \frac {\operatorname{lcm}(b,m)}b\right]\times \frac {\gcd(a,n)}{\operatorname{lcm}(b,m)}$$ And $$\frac nm = \left[\frac {n}{\gcd(a,n)}\cdot \frac {\operatorname{lcm}(b,m)}m\right]\times \frac {\gcd(a,n)}{\operatorname{lcm}(b,m)}$$ And since both $\frac {a}{\gcd(a,n)}\cdot \frac {\operatorname{lcm}(b,m)}b$ and $\frac {n}{\gcd(a,n)}\cdot \frac {\operatorname{lcm}(b,m)}m$ are integers, $\frac {\gcd(a,n)}{\operatorname{lcm}(b,m)}$ is a common divisor of $\frac ab$ and $\frac nm$. Now suppose $\frac \gamma\delta$ (assume in lowest terms) were another common divisor, so there are $k,j$ so that $\frac ab = \frac {k\gamma}{\delta}$ and $\frac nm = \frac {j\gamma}{\delta}$. That means $\frac {a\delta}\gamma = bk$, so $\gamma$ divides $a\delta$; but as $\delta$ and $\gamma$ are relatively prime, that means $\gamma$ divides $a$. Similarly $\frac {n\delta}\gamma = mj$, so $\gamma$ divides $n$, so $\gamma$ is a common divisor of $a,n$. So $\gamma\le \gcd(a,n)$. Likewise $\frac {a\delta}b = k\gamma$ and $\frac {n\delta}m = j\gamma$. So $a \delta$ is a multiple of $b$ and $n\delta$ is a multiple of $m$. But as $a,b$ and $n,m$ are relatively prime, that means $\delta$ is a multiple of $b$ and $\delta$ is a multiple of $m$, so $\delta$ is a common multiple of $b$ and $m$. So $\delta \ge \operatorname{lcm}(b,m)$. So $\frac \gamma \delta \le \frac {\gcd(a,n)}{\operatorname{lcm}(b,m)}$. So $\frac {\gcd(a,n)}{\operatorname{lcm}(b,m)} = \gcd(\frac ab, \frac nm)$. Proving that $\operatorname{lcm}(\frac ab, \frac nm) = \frac {\operatorname{lcm}(a,n)}{\gcd(b,m)}$ is done similarly.
Are four random variables independent if pairs and sums are?
Here's what I imagine is the minimal counterexample: set $X_4 = 0$ to be a constant and forget about it. Write $X \sim B$ if $X$ is either $0$ or $1$ with probability $1/2$. Now choose $X_1,X_2 \sim B$ independently. Then choose $$X_3 \qquad \begin{cases} = X_1 & \text{if } X_1 \neq X_2 \\ \sim B \text{ (independently)} & \text{if } X_1 = X_2 \end{cases}$$ Now the pairs are independent by construction. Now consider $X_1 + X_2$. If this sum is anything even then necessarily $X_1 = X_2$, so $X_3 \sim B$. Otherwise, the sum is odd, so $X_1 \neq X_2$ and $X_3 = X_1 \sim B$ still. Hence $X_1 + X_2$ is indeed independent of $X_3$. However, clearly $X_3$ is not independent from $(X_1,X_2)$. In fact, $X_3$ is not independent from $X_1$ alone: $3/4$ of the time they must agree.
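A simulation sketch of this construction (dropping the constant $X_4$); for pairs of Bernoulli variables, uncorrelated is equivalent to independent, so checking correlations and conditional means suffices here:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
X1 = rng.integers(0, 2, n)
X2 = rng.integers(0, 2, n)
X3 = np.where(X1 != X2, X1, rng.integers(0, 2, n))

print(np.corrcoef(X1, X3)[0, 1], np.corrcoef(X2, X3)[0, 1])  # both ~ 0
S = X1 + X2
for s in range(3):
    print(s, X3[S == s].mean())   # ~ 0.5 each: X3 independent of the sum
print((X1 == X3).mean())          # ~ 0.75: X3 not independent of (X1, X2)
```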
Principal non-prime ideal whose extension is prime
I don't think so. Take $A = k[x,xy,xy^2,y^3] \subset B = k[x,y]$. The ideal $xA$ is not prime, but $xB$ is; $x(xy^2) = (xy)^2$, but $xy \notin xA$ in $A$.
differentiability of $(x,y)\mapsto\max\{x,y\}$
If it could help we can observe that $$\max(x,y)=\frac 12(x+y+|x-y|)$$
$2^{2^n}+5^{2^n}+7^{2^n}$ is always divisible by $39$
You first find $2^2$, $5^2$ and $7^2$, $\bmod 39$, to be $4$, $25$ and $10$ respectively. These add to $39$. Their squares are $16$, $1$, and $22$, which also add to $39$; this is the power $2^2$. The squares of these are $22$, $1$, $16$, which is the same set as before. Thus if the statement is true for the exponent $2^n$, it's true for $2^{n + 1}$. Therefore $2^a + 5^a + 7^a$ is a multiple of $39$ if $a = 2^n$ (it's actually a multiple whenever $a \bmod 12 \in \{4, 8\}$).
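A direct check of the claim for the first several exponents (a sketch using Python's three-argument `pow` for modular powers):

```python
for n in range(1, 10):
    a = 2**n
    print(n, (pow(2, a, 39) + pow(5, a, 39) + pow(7, a, 39)) % 39)  # all 0
```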
Probability for many coin flips
Note that $P\big(\frac{N}{2}\big) = \binom{N}{N/2}\frac{1}{2^{N}}$, not $\binom{2N}{N}\frac{1}{2^{N}}$. This error is why your probability did not converge earlier. Anyways, applying the asymptotic approximation $\binom{2x}{x}\approx \frac{4^{x}}{\sqrt{\pi x}}$: $P\big(\frac{N}{2}\big)\approx \frac{4^{\frac{N}{2}}}{\sqrt{\frac{\pi N}{2}}}\frac{1}{2^{N}} = \sqrt{\frac{2}{\pi N}}$ Thus, $P\big(\frac{N}{2}\big)\rightarrow 0$ for large values of $N$. $\blacksquare$
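The $\sqrt{2/(\pi N)}$ decay is easy to see numerically (a sketch):

```python
from math import comb, pi, sqrt

for N in (10, 100, 1000, 10000):
    exact = comb(N, N//2)/2**N
    print(N, exact, sqrt(2/(pi*N)))   # approximation improves as N grows
```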
Basics in Complex Analysis
Is this one question or four questions? (1). The derivative of a complex function is not at all (other than superficially) like that of a real function. For example, if a complex function has one derivative, it has infinitely many derivatives! (2). A contour integral could represent various physical entities (for example work) depending on the context. Green's theorem and Stokes' theorem relate a contour integral to an "area" or "volume" integral. In any case, the integral needs no physical analogue. (3). The term $dz$ is a differential, similar to the role of differential in a real integral. (4). In Cauchy's theorem, $f(z)$ is holomorphic (it has a derivative). Please read "Chapter zero" of Stein and Shakarchi, Complex Analysis for more motivation.
Find the values of $\epsilon$ at the two pair bifurcation points of the polynomial: $\phi(x,\epsilon):=(x-2)^2(x-3)+\epsilon^2=0$.
Algebraically, we can compute the discriminant $\Delta$ of the cubic polynomial $P(x) = (x - 2)^2 (x - 3) + \epsilon^2$. There are three roots when $\Delta > 0$ and one root when $\Delta < 0$. The sign of $\Delta$ changes at the values of $\epsilon$ corresponding to pair bifurcation points. Or, since varying $\epsilon$ is the same as shifting the graph of $P(x)$ vertically, we can conclude that a pair bifurcation occurs when the sign of a local extremum of $P(x)$ changes.
No group can be a union of two proper subgroups
Hint: Prove that for the union of two subgroups to be a group, they must be nested.
The proof of $e^x \leq x + e^{x^2}$
Note the inequality $e^t \ge t + 1$ for all $t \in \mathbb{R}$. In particular $e^{x^2} \ge x^2 + 1$. If $x \le -1$, then $$ e^{x^2} - e^x + x \ge x^2 + 1 - e^0 + x = x(x+1) \ge 0. $$ If $-1 < x < 1$, then \begin{align*} e^{x^2} - e^x + x &\ge x^2 + x + 1 - \sum_{k \ge 0} \frac{x^k}{k!} \\ &= \frac{x^2}{2!} - \sum_{k \ge 3} \frac{x^k}{k!} \\ &\ge \frac{x^2}{2!} - \sum_{k \ge 3} \frac{x^2}{k!} \\ &= x^2 \left( \frac12 - \left[e - 1 - \frac{1}{1!} - \frac{1}{2!} \right]\right) \\ &= x^2 \left( 3 - e \right) \ge 0. \end{align*} Finally, if $x \ge 1$, then $$ e^{x^2} - e^x + x > e^{x} - e^x + x > 0. $$
Finding the CDF given a PDF
The CDF $F$ is defined by $$F(x) = \int_{-\infty}^{x}f(t)\text{ d}t$$ for every $x$. As long as $x < 0$, $$F(x) = \int_{-\infty}^{x}f(t)\text{ d}t = \int_{-\infty}^{x}\dfrac{\alpha}{2}e^{\alpha t}\text{ d}t\text{.}$$ For $x \geq 0$, $$F(x) = \int_{-\infty}^{x}f(t)\text{ d}t = \int_{-\infty}^{0}\dfrac{\alpha}{2}e^{\alpha t}\text{ d}t + \int_{0}^{x}\dfrac{\beta}{2}e^{-\beta t}\text{ d}t\text{.}$$ I will leave the integration up to you.
How many of the 32 potential geometric systems are actually useful or practical to study/understand?
First of all, Euclid's postulates are not complete, as many commenters already said. But then some ideas in an attempt to answer your question: The first postulate: "To draw a straight line from any point to any point." Negating this: there is at least one pair of points that are not on a common line. Difficult to imagine; some kind of black hole between the two points? I am not sure what to make of it. The second postulate: "To produce [extend] a finite straight line continuously in a straight line." There is discussion about what this means: does it mean that lines have infinite length, or just that they are boundless (a circle, for example: you can go round it forever)? Negating this: not sure what to make of it either, but it would fit nicely with a negation of the first postulate (two points are just too far away from each other to be connected). The third postulate: "To describe a circle with any centre and distance [radius]." This can be negated in two different ways: there are points around which a certain circle cannot be drawn (seems hopeless), or there is a maximum size of circle (for example, spherical geometry). The fourth postulate: "That all right angles are equal to one another." Negating this: some right angles are not equal to each other? Not sure what to make of this. (Some geometers would even claim this postulate was unneeded in the first place.) More generally, it is noteworthy that these first 4 postulates are much shorter than the fifth postulate and much more difficult to deny (what kind of strange geometry would you be left with?). But on the other hand, just try the (surface) geometry of a torus: it breaks postulates 2 and 3, but then doing geometry on a torus is rather difficult. In Heath's translation of Euclid's Elements (Dover paperback) there are long sections on each postulate; maybe it is best to study it all. Good luck!
Multiplying eigenvalues with characteristic polynomial
Your formulation is not very clear. The characteristic polynomial is a polynomial, so it involves an indeterminate that is usually written $x$ or $X$, not $\lambda$ which in your setup is a scalar (one particular eigenvalue). What it seems your question is about is how the characteristic polynomial changes if we scale a matrix $A$ by a scalar $t$. One way to see this is to check the definition of the characteristic polynomial. Being the determinant of $XI_n-A$, every contribution is obtained by multiplying $n$ factors, each of which either comes from $XI_n$ or from $-A$. To get a term involving a power $X^k$, one needs $k$ factors from $XI_n$ and the remaining $n-k$ from $-A$; then scaling $A$ by $t$ results in scaling the coefficient of $X^k$ in the characteristic polynomial by $t^{n-k}$. A similar argument can be made using the known effect of scaling of eigenvalues. If the characteristic polynomial of $A$ splits as $(X-\lambda_1)\ldots(X-\lambda_n)$ then the characteristic polynomial of $At$ splits as $(X-t\lambda_1)\ldots(X-t\lambda_n)$, and again we see that the coefficient of $X^k$ gets multiplied by $t^{n-k}$.
On solutions of a certain $n\times n$ linear system whose coefficients are all $\pm1$
Edit. The hypothesis is false. The smallest counterexample I found was $5\times5$. We have $\det(A)=-16$ and $Av=b$ for $$ A=\pmatrix{-1&1&-1&-1&-1\\ 1&-1&1&-1&1\\ -1&-1&-1&1&1\\ -1&1&1&1&-1\\ 1&1&-1&1&1}, \ b=\pmatrix{4\\ 2\\ 3\\ 2\\ 1}, \ v=\frac12\pmatrix{-11\\ 9\\ 4\\ -6\\ 14}. $$
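Verifying the counterexample takes a few lines of numpy (a sketch):

```python
import numpy as np

A = np.array([[-1,  1, -1, -1, -1],
              [ 1, -1,  1, -1,  1],
              [-1, -1, -1,  1,  1],
              [-1,  1,  1,  1, -1],
              [ 1,  1, -1,  1,  1]], dtype=float)
b = np.array([4, 2, 3, 2, 1], dtype=float)
v = np.array([-11, 9, 4, -6, 14], dtype=float)/2

print(np.linalg.det(A))        # -16.0, as claimed
print(np.allclose(A @ v, b))   # True
```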
Proving the Weierstrass M-Test with topology
The essential fact used in the proof of the $M$-test is that the space of bounded continuous functions $$f:X\longrightarrow\Bbb R\qquad\hbox{or}\qquad f:X\longrightarrow\Bbb C$$ is complete. Completeness is beyond topology.
Is the series $\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n}$ convergent or divergent?
Notice that $$\begin{align}\sum_{n=1}^\infty\frac{(-1)^{n+1}}n&=1-\frac12+\frac13-\frac14+\dots\\&=\left(1-\frac12\right)+\left(\frac13-\frac14\right)+\dots\\&>\frac12+0+0+0+\dots\\&=\frac12\\1-\frac12+\frac13-\frac14+\dots&=1+\left(-\frac12+\frac13\right)+\left(-\frac14+\frac15\right)+\dots\\&<1+0+0+0+\dots\\&=1\end{align}$$ Lastly, note that $\lim\limits_{n\to\infty}\frac{(-1)^{n+1}}n=0$, and that $$\frac12<S_{2n}<S_{2n+1}<1,\quad S_{2n}-S_{2n+1}\to0\text{ as }n\to\infty$$ where $S_n$ is the $n$th partial sum. Thus, it is convergent and $$\frac12<\sum_{n=1}^\infty\frac{(-1)^{n+1}}n<1$$ The exact value can be found from the Taylor expansion of the natural logarithm: $$\ln(1+x)=\sum_{n=1}^\infty\frac{(-1)^{n+1}}nx^n$$ Plugging in $x=1$ yields $\sum_{n=1}^\infty\frac{(-1)^{n+1}}n=\ln(2)$
How to prove that at least one solution to $x^2 + y^2 \equiv -1 \pmod p$ exists?
The result is clear for $p=2$, so let $p$ be odd. For $i=0$ to $\frac{p-1}{2}$, the numbers $i^2$ are distinct modulo $p$. Let $S$ be this set of numbers. Then $S$ has $\frac{p+1}{2}$ elements. Similarly, let $T$ be the set of numbers of the shape $-j^2-1$, where $j$ ranges from $0$ to $\frac{p-1}{2}$. Then $T$ contains $\frac{p+1}{2}$ numbers that are distinct modulo $p$. The sum of the cardinalities of $S$ and $T$ is $p+1$. Thus by the Pigeonhole Principle there exist $i$, $j$ such that $i^2\equiv -j^2-1\pmod{p}$. This completes the proof.
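The pigeonhole argument is constructive enough to turn into a tiny search (a sketch; the function name is made up):

```python
def solution(p):
    """Find (x, y) with x^2 + y^2 = -1 (mod p) for an odd prime p."""
    S = {(i*i) % p: i for i in range((p + 1)//2)}   # the set S, with witnesses
    for j in range((p + 1)//2):                     # scan the set T
        t = (-j*j - 1) % p
        if t in S:                                  # pigeonhole guarantees a hit
            return S[t], j

for p in (3, 5, 7, 11, 13, 101):
    x, y = solution(p)
    print(p, x, y, (x*x + y*y + 1) % p)   # last column is always 0
```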
Thermodynamic relation
When dealing with thermodynamic equalities, it is very useful to introduce Jacobian determinants $$ \frac{\partial (u,v)}{\partial (x,y)} = \begin{vmatrix} (\partial_x u)_y & (\partial_y u)_x \\ (\partial_x v)_y & (\partial_y v)_x \end{vmatrix}.$$ Here, and in what follows, we implicitly assume that $u$ and $v$ are functions of $x$ and $y$. But because we often change the variables a function depends on, we always keep the variable which is held constant as a subscript to the bracket surrounding the partial derivative. In thermodynamics, you should always think of functions as defined implicitly: all quantities live on a 2D surface, so specifying two independent coordinates lets you figure out a third one. Jacobians have the following relevant properties: $$\frac{\partial (u,y)}{\partial (x,y)} = \left( \frac{\partial u}{\partial x} \right)_y,$$ $$\frac{\partial (u,v)}{\partial (x,y)} = \left(\frac{\partial (x,y)}{\partial (u,v)} \right)^{-1}, \qquad\text{(inverse function theorem)}$$ and $$\frac{\partial (u,v)}{\partial (x,y)} = \frac{\partial (u,v)}{\partial (s,t)} \frac{\partial (s,t)}{\partial (x,y)} . \qquad\text{(chain rule)}$$ In your case $$\left(\frac{\partial U}{\partial T} \right)_p= \frac{\partial (U,p)}{\partial (T,p)} = \frac{\partial(U,p)/\partial(T,V) }{\partial(T,p)/\partial(T,V)} = \frac{(\partial U/\partial T)_V (\partial p/\partial V)_T - (\partial U/\partial V)_T (\partial p/\partial T)_V}{\left(\partial p/\partial V\right)_T}. $$ Now, we evaluate $$\frac{(\partial p/\partial T)_V}{(\partial p/\partial V)_T} = \frac{\partial(p,V)}{\partial(T,V)} \Big/\frac{\partial(p,T)}{\partial(V,T)} = - \frac{\partial(p,V)}{\partial(V,T)} \frac{\partial(V,T)}{\partial(p,T)} = - \frac{\partial(p,V)}{\partial(p,T)} = -\left(\frac{\partial V}{\partial T}\right)_p .$$ So in total, we have $$\left(\frac{\partial U}{\partial T} \right)_p =\left(\frac{\partial U}{\partial T}\right)_V +\left( \frac{\partial U}{\partial V}\right)_T \left(\frac{\partial V}{\partial T}\right)_p. $$ Note that we did not use any Maxwell relation. Your relation is just a fact about partial derivatives.
What is the most surprising result that you have personally discovered?
I was very happy to find out that if we look at a notebook with a magnifying glass, then the lines become curves, and the fact that they are parallel is retained (especially if you keep them at the focal point of the magnifier). However, the curves all meet at the "edge" of the glass. So we can have a sense of geometry where parallel lines meet at infinity. I remember telling my brother, who was an engineering student, about this (I was merely 16), and he said that it's impossible. Some years later I learned that this was already known as non-Euclidean geometry and played an important part in Einstein's relativity.
Eulerian connected graph
You should note that if every vertex has degree at least $2$, then there exists a cycle. There is another fact you should be using: if every vertex in a graph has an even degree, then the graph has a decomposition into edge-disjoint cycles. The proof sketch for this is as follows. Suppose every vertex in $G$ has an even degree, and take a maximal list of edge-disjoint cycles within the graph $G$. Now remove all the edges of these cycles from $G$, and suppose $G$ still contains an edge. Removing the edges of a cycle preserves the parity of every degree, so every vertex still has even degree; in particular, any vertex incident to a remaining edge has degree at least $2$, so the remaining graph contains a cycle, contradicting the maximality of the list. So if $G$ is Eulerian, every vertex has even degree, and the forward direction follows from this fact. Now for the converse, suppose $G$ has a decomposition into edge-disjoint cycles but not every vertex is even. Then you have a contradiction, since each cycle contributes an even amount to the degree of every vertex it passes through.
Using Ratio Test for Taylor Series of Natural Log At $x = 1$
If $x\leq 1,$ then $x\leq 2.$ Your argument, which is sufficient to prove convergence for $0<x\leq 2,$ is certainly also sufficient to prove convergence for $0<x\leq 1.$ After all, there is no $x$ in the interval $(0,1]$ that is not in the interval $(0,2].$ It is not true that the series converges if and only if $0<x\leq 1.$ But you were not asked to prove that. You were only asked to prove the “if” part. And the “if” part is true. One does have to wonder why the exercise asked for a proof of a theorem so much weaker than the one you are able to prove. Usually we would want the stronger theorem in a case like this where the stronger statement is just as simple as the weaker one. Possibly the author had in mind that you could use a weaker argument (but I have no guess what that would be), or perhaps there will be a need later to know this for $0<x\leq1$ and this led the author not to think about the possibility of convergence on a larger interval. Or perhaps it was just a transcription error and it should have been $2$ instead of $1$. But I am just guessing now. Another way to think about it is, suppose you were asked to show that the series converges when $x=\frac12$? Surely you could do that. What the question actually asked is somewhere in between asking you to show convergence for just one value, and asking you to find all values of $x$ for which the series converges.
GCD identity for univariate polynomials
The gcd $(a,b)\,$ is characterized by the following fundamental gcd Universal Property $$ c\mid a,b\,\color{#0a0}\Leftarrow\!\color{#c00}\Rightarrow\, c\mid (a,b)\quad \text{[gcd Universal Property]}\qquad$$ For $\,c = (a,b)\,$ arrow $\color{#0a0}{(\Leftarrow)}$ shows $\,(a,b)\mid a,b,\,$ i.e. $\,(a,b)\,$ is a common divisor of $\,a,b,\,$ and the opposite arrow $\color{#c00}{(\Rightarrow)}$ shows that $\,(a,b)\,$ is divisible by every common divisor of $\,a,b,\,$ so $\,(a,b)\,$ is a greatest (degree) common divisor (unique if we normalize it to be monic, i.e. lead coef $=1).\,$ Therefore generally $\,(f,g)\mid f,g\,\Rightarrow\, (f,g)\mid f,gh\,\color{#c00}\Rightarrow\, \color{#90f}{(f,g)}\mid \color{#0a0}{(f,gh)}$. Further, in your particular case we are given that $\,\color{#0a0}{(f,gh)\!=1}\,$ thus $\color{#90f}{(f,g)}\! =\! 1$ Your use of the Bezout identity to deduce this is essentially repeating the Bezout-based proof of the gcd Universal Property in this particular case. Instead you should abstract out and prove this basic property (whose proof is easier), then invoke it as above (by name). This will work more generally in rings where there is no Bezout identity (e.g. $\,\Bbb Z[x]\,$ or $\,\Bbb Q[x,y]\:\!)\,$ but the gcd universal property still holds true (it is the definition of the gcd in general domains).
Long division in integration by partial fractions
You did not do the long division correctly.

```
          x^2 + x + 2
        _______________
x - 1 |  x^3       + x
       - x^3 + x^2
       -----------
             x^2 + x
           - x^2 + x
           ---------
                  2x
                 -2x + 2
                 -------
                    + 2
```

So the quotient is $x^2 + x + 2$, and the remainder is $2$. You can verify this by doing the product and adding the remainder: $$(x-1)(x^2+x+2) = x^3 + x^2 + 2x - x^2 - x -2 = x^3 + x - 2$$ so $$(x-1)(x^2+x+2) + 2 = x^3 + x -2 + 2 = x^3 + x.$$ Whereas you claim a quotient of $x^2-1$ and a remainder of $-1$, which would give $$(x-1)(x^2-1) -1 = x^3 -x - x^2 + 1 -1 = x^3 - x^2 - x \neq x^3 + x.$$ (Even if you tried with $x+1$ instead of $x-1$, your answer is still incorrect, since $$(x+1)(x^2-1)-1 = x^3 - x + x^2 -1 -1 = x^3 + x^2 - x -2 \neq x^3+x.$$ If you divide by $x+1$ correctly, you'll get a quotient of $x^2-x+2$ and a remainder of $-2$, $$(x+1)(x^2-x+2)-2 = x^3 -x^2 +2x +x^2 -x + 2 -2 = x^3 +x$$ which is the correct total.)
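You can double-check such divisions with sympy (a sketch):

```python
import sympy as sp

x = sp.symbols('x')
q, r = sp.div(x**3 + x, x - 1, x)
print(q, r)   # x**2 + x + 2, 2
```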
Examples of complete families of functions forming an absolutely convergent series
According to the definition of complete system in the link provided, $\{\phi_m\}$ must be an orthonormal system. This implies among other things that $\int_0^T|\phi_m|^2\,dt=1$. Suppose that there is a constant $M>0$ such that $$ |\phi_m(t)|\leq M\quad\forall t\in[0,T]\quad\text{and}\quad \sum_{m=1}^\infty\int_0^T|\phi_m(t)|\,dt\leq M. $$ This holds in particular if $|\phi_m(t)|\leq c_m$ with $\sum_{m=1}^\infty c_m<\infty$. Then $$ \sum_{m=1}^\infty|\phi_m(t)|^2\le M\sum_{m=1}^\infty|\phi_m(t)|,\quad0\le t\le T. $$ Integrating we get $\infty$ on the left hand side, and $M^2$ on the right hand side, a contradiction. This shows that no such families exist.
Prove that the if $\sum{a_k}$ converges, then $\sum{a_k^2}$ converges
Here is a brief answer to a brief question: Justify that there is an $N$ such that for all $n > N$, $a_n < 1$. Ignore the finite sum from $1$ to $N$. Use your observation that $a_n^2 < a_n$ and basic comparison to conclude that $\sum a_n^2$ is finite.
Are these polynomial expressions valid or invalid?
From Wikipedia: In mathematics, a polynomial is an expression consisting of variables and coefficients which only employs the operations of addition, subtraction, multiplication, and non-negative integer exponents. An example of a polynomial of a single variable $x$ is $x^2 − 4x + 7$. An example in three variables is $x^3 + 2xyz^2 − yz + 1$. Something like $x^x$ cannot be written as the sum of non-negative integer powers of $x$ multiplied by coefficients, so it is not a polynomial. This should help.
Integration Shortcut to Avoid Tedious Integration by Parts
Do enough integrals of the form $\int x^ne^x\,dx$ and you'll notice the answer is $e^x$ times a polynomial of degree $n$. So you could cheat and guess a solution of this form, differentiate it, and match coefficients. For example, guess $$\int x^3e^x\,dx=(ax^3+bx^2+cx+d)e^x + C. $$ Differentiate, using the product rule on the RHS: $$ x^3e^x=[(ax^3+bx^2+cx+d) + (3ax^2+2bx+c)]e^x $$ Match coefficients on $x^3, x^2, \dots$ on down and read off the answer: $$\begin{aligned}a&=1\\b&=-3a=-3\\c&=-2b=6\\d&=-c=-6\end{aligned}$$ (Notice the pattern $-3, -2, -1$ !) For integrals like $\int x^ne^{ax}\,dx$, reduce to this simpler case by substituting $u:=ax$.
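A sketch verifying the matched guess with sympy:

```python
import sympy as sp

x = sp.symbols('x')
guess = (x**3 - 3*x**2 + 6*x - 6)*sp.exp(x)
print(sp.simplify(sp.diff(guess, x) - x**3*sp.exp(x)))   # 0, so it works
```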
Can any functions be expressed in one formula? (without conditions)
If you allow the floor function (as you do in your example), one can easily express indicator functions of intervals: $${\bf 1}_{[0,\infty)}(x) =\begin{cases}1&\text{if $x\ge0$}\\0&\text{if $x<0$}\end{cases}=1+\left\lfloor\frac{x}{2+x^2}\right\rfloor$$ and then $$\begin{align} {\bf 1}_{[a,\infty)}(x)&={\bf 1}_{[0,\infty)}(x-a)\\ {\bf 1}_{(-\infty,a)}(x)&=1-{\bf 1}_{[a,\infty)}(x)\\ {\bf 1}_{(a,\infty)}(x)&={\bf 1}_{(-\infty,-a)}(-x)\\ {\bf 1}_{(-\infty,a]}(x)&={\bf 1}_{[-a,\infty)}(-x)\\ {\bf 1}_{[a,b]}(x)&={\bf 1}_{[a,\infty)}(x)\cdot{\bf 1}_{(-\infty,b]}(x)\\ {\bf 1}_{(a,b]}(x)&={\bf 1}_{(a,\infty)}(x)\cdot{\bf 1}_{(-\infty,b]}(x)\\ {\bf 1}_{[a,b)}(x)&={\bf 1}_{[a,\infty)}(x)\cdot{\bf 1}_{(-\infty,b)}(x)\\ {\bf 1}_{(a,b)}(x)&={\bf 1}_{(a,\infty)}(x)\cdot{\bf 1}_{(-\infty,b)}(x)\\ \end{align}$$ so that we can express indicator functions of arbitrary intervals. Then in principle you can build your function by multiplying with such indicator functions. However, you may need to be careful with parts that are undefined at points outside the interval in which they are used.
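A quick spot check of the first identity (a sketch; the function name is made up):

```python
from math import floor

def ind_halfline(x):                      # 1_{[0, oo)}(x)
    return 1 + floor(x/(2 + x*x))

for x in (-5.0, -0.1, 0.0, 0.3, 7.0):
    print(x, ind_halfline(x))             # 0 for x < 0, 1 for x >= 0
```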
How do you prove this set is convex?
Let $x=(x_1,x_2),y=(y_1,y_2)\in C$. If $t\in (0,1)$, then $$tx_1 + (1-t)y_1\leq tp+(1-t)p=p,$$ and $$tx_2 + (1-t)y_2\leq tq+(1-t)q=q.$$ Therefore $tx+(1-t)y\in C$.
An example of uncountable set with zero Lebesgue measure
What if $f$ maps $X_0$ homeomorphically to $\left[0,\frac12\right]$ and $\Bbb I^2\setminus X_0$ bijectively to $\left(\frac12,1\right]$? Added: It’s quite possible that $f$ maps some of the verticals $\{x\}\times\Bbb I$ to measurable sets of different measures and others to non-measurable sets. For example, $f$ might map $X_0$ homeomorphically onto $\left[0,\frac12\right]$ (of measure $1/2$), $\left\{\frac12\right\}\times\Bbb I$ homeomorphically onto $\left[\frac34,1\right]$ (of measure $1/4$), and (assuming the axiom of choice) the rest of $\Bbb I^2$ bijectively to $\left(\frac12,\frac34\right)$ in such a way that no other vertical $\{x\}\times\Bbb I$ maps to a measurable set at all. The behavior of $f$ on one vertical says nothing about its behavior on another (save that the images have to be disjoint, of course, and the measures of any measurable images cannot sum to more than $1$).
Does one get any set of positive roots by "cutting" with a hyperplane?
Let $V$ be the real vector space spanned by the root system, and let $L: V \rightarrow \mathbb R$ be any linear functional with the property that $L(\Phi^+) \subset \mathbb R_{> 0}$; then $\ker(L)$ is a hyperplane that does the job. One obvious choice for $L$ is as follows: From your $\Phi^+$, choose a set of simple roots $\alpha_i$, $1 \le i \le n$. It is well known that the $\alpha_i$ are a basis of $V$, and a root $\alpha = \sum_{i=1}^n c_i \alpha_i$ is in $\Phi^+$ if and only if all $c_i \in \mathbb Z_{\ge 0}$. Now set $L(\sum_{i=1}^n c_i \alpha_i) := \sum_{i=1}^n c_i$. (With respect to the basis $(\alpha_i)_i$, this is the functional $(1,1, ...,1)^{tr}$. Notice that because of the fact that for all roots, the coefficients $c_i$ are either all $\ge 0$ or all $\le 0$, one actually has a lot of "wiggle room": any $(\ell_1, ..., \ell_n)^{tr}$ with all $\ell_i > 0$ gives another suitable hyperplane.)
To prove that the denominator has discriminant 0.
Statement (3) is simply not true. Counterexample: $f(x)=\frac{x^2}{x^2+bx}\ \ {\rm for\ }b\neq 0$ Then: (i): all coefficients are real (ii): leading coefficients non-zero (iii): $(b)(0)=(-0)^2$: true. (iv): $(1)(b)\neq (1)(0)$: true. (v): The function simplifies to $\frac{x}{x+b}=1-\frac{b}{x+b}$, and so its range is clearly $\Bbb R\setminus\{1\}$. However, (3) is false, since $b^2-(1)(0)\neq 0$ whenever $b\neq 0$.
Mazur Theorem implies that if $x_n \rightharpoonup x$ then $|x| \leq \liminf |x_n|$.
Let $a=\liminf|x_n|$ and $\varepsilon>0$. Then there exists $N\in\mathbb N$ such that, for all $n\geq N$, $|x_n|<a+\varepsilon$. Set now $$K_N=\overline{{\rm conv}}\{x_N,x_{N+1},\dots\},$$ the closure of the convex hull of the $x_n$, for $n\geq N$. Since $x_n\rightharpoonup x$ and $K_N$ is convex, we obtain that $x\in K_N$. But, since all the $x_n$ for $n\geq N$ belong to the ball $B(a+\varepsilon)$, centered at $0$ and with radius $a+\varepsilon$, and this ball is convex, we obtain that $$K_N\subseteq\overline{B(a+\varepsilon)}.$$ This implies that $x\in \overline{B(a+\varepsilon)}$, so $|x|\leq a+\varepsilon$ for all $\varepsilon>0$. Hence $|x|\leq a$.
Normalize $X$ to $0$ to $10$ scale with asymptotes at either end
You can use the arctangent function, which has horizontal asymptotes going to $\pm \infty$. As it ranges from $\frac {-\pi}2$ to $\frac \pi 2$ we need to rescale to get your range of $(0,10)$, so we want $10\left(\frac 1\pi \arctan X_{scaled} + \frac 12\right)$. Now all we have to do is figure out how to scale $X$. This is not too hard as $X_{scaled}=\pm 1$ gives $2.5, 7.5$, so $X_{scaled}=\frac {2(X-(X_1+X_2)/2)}{X_2-X_1}$. The final answer is $Y=10\left(\frac 1\pi \arctan \frac {2(X-(X_1+X_2)/2)}{X_2-X_1} + \frac 12\right)$
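Directly in code (a sketch; `squash` is a made-up name, and $X_1<X_2$ is assumed):

```python
from math import atan, pi

def squash(X, X1, X2):
    Xs = 2*(X - (X1 + X2)/2)/(X2 - X1)    # rescale so X1, X2 map to -1, 1
    return 10*(atan(Xs)/pi + 0.5)

print(squash(0, 0, 10), squash(10, 0, 10))      # 2.5 and 7.5
print(squash(-1e9, 0, 10), squash(1e9, 0, 10))  # near 0 and near 10
```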
Existence of function $f:\mathbb{R}^{2}\rightarrow\mathbb{R}$ which has partial derivates as given functions.
Hint: You should check the Schwarz theorem https://en.wikipedia.org/wiki/Symmetry_of_second_derivatives Edit: if there were such an $f$, then $\frac{\partial^2 f}{\partial x_1 \partial x_2}=\frac{\partial^2 f}{\partial x_2 \partial x_1}$, which is not the case because $\frac{\partial F_1}{\partial x_2}=-\frac{\partial F_2}{\partial x_1}$
Closed formula for a sequence
For $x$ close to $1$, $x^{1/2^n}\approx 1+\dfrac{x-1}{2^n}$ and the sum of the $n$ first terms is about $n+x-1$. For large $x$, after sufficiently many square roots, the value comes close to $1$. An approximation is for instance $$x+\sqrt[2]x+\sqrt[4]x+\sqrt[8]x+\sqrt[16]x+n-4+\sqrt[16]x-1.$$
How is this expansion of $2^{ab} - 1$ obtained?
Use $$x^n - y^n = (x-y)(x^{n-1} + x^{n-2} y + ... + x y^{n-2} + y^{n-1})$$ with $x = 2^b,y =1, n = a$
Find minimum sentence in terms of quantifier depth.
For this question, the universe of discourse is not fully specified, which makes it hard to give a definite answer. Three possible universes of discourse spring to mind. The first one, which corresponds to the kind of sentence you wrote for your solution, assumes that the graph vertices are the only things that exist, the only predicate is the edge predicate, and no constants are allowed. For that one, your solution is indeed correct. There are 8 possible combinations of quantifiers that form sentences of QD 3. We will examine each one in turn and see why they are not sufficient to distinguish between the graphs: 1) $\exists\exists\exists$ - Every subgraph of size 3 of the 15 graph is isomorphic to a subgraph of the 11 graph, so we cannot rely on identifying a dissimilar subgraph. 2) $\exists\exists\forall$ - $\exists\forall\exists$ - $\forall\exists\exists$ - there is no pair of vertices which holds a special position with respect to the rest of the graph in one graph but not in the other. 3) $\exists\forall\forall$ - $\forall\forall\exists$ - $\forall\exists\forall$ - there is no single vertex with a privileged position with respect to the rest of the graph. 4) $\forall\forall\forall$ - there is no special relation between every three vertices of one graph which does not hold in the other. In a second universe, we are allowed to use constants. Then a possible solution becomes $\forall x \neg e(x,6)$. There is no possible solution with QD zero, since the constants which are valid in both universes are related to one another in the same way in both graphs. In a final universe, the underlying elements are natural numbers with the usual arithmetic, plus special predicates to identify the vertices and edges. In this kind of universe, it is possible to "compress" the four existential quantifiers of your solution into a single one, by forming functions which convert natural numbers into 4-tuples of natural numbers through an enumeration. This is the reason why in the arithmetical hierarchy we only count alternating quantifiers for depth; we can always compress adjacent existential quantifiers and adjacent universal quantifiers.
Formula for $\prod\limits_{k=-\infty}^\infty \frac{1}{e}\left(1+\frac{1}{k+z}\right)^{k+z+\frac{1}{2}}$
You can calculate $\log A(z)$ explicitly, by differentiating twice with respect to $z$ and carrying out the summation. You'll then obtain $$\frac{{\rm d}^2}{{\rm d}z^2} \, \log A(z) = \Psi'(z) - \frac{1}{z} - \frac{1}{2z^2} \\ \Longrightarrow \quad \log A(z) = \log \Gamma(z+1) + z - z\log z - \log \sqrt{z} + c_1 z + c_2 \, .$$ By comparing with the limit value of the original sum for $z \rightarrow \infty$ (which is $0$), you find that $c_1=0$ and $c_2=-\log\sqrt{2\pi}$, so $$\log A(z) = \log \Gamma(z+1) + z - z\log z - \log \sqrt{2\pi z} \\ A(z) = \frac{\Gamma(z+1)}{\sqrt{2\pi z}} \left(\frac{e}{z}\right)^z \, .$$ I presume you can use this backwards to prove your equality. Note that the exponent $1/2$ is crucial in the sum/product; for any other value the sum/product diverges. To be honest, I still don't know precisely what you want :-(. Have you tried \begin{align} &\qquad \prod_{k=-N}^N \frac{1}{e} \, \frac{(k+z+1)^{k+z+1/2}}{(k+z)^{k+z+1/2}} \\ &=(1+N+z)^{N+z+1/2} (z-N)^{N-z+1/2} \frac{e^{-2N-1}}{z} \prod_{k=1}^N \frac{1}{z^2-k^2} \\ &=\left(1+\frac{z+1}{N}\right)^{N+z+1/2} N^{N+z+1/2} \cdot \left(1-\frac{z}{N}\right)^{N-z+1/2} N^{N-z+1/2} \\ &\quad \cdot (-1)^{1/2-z} \, \frac{e^{-2N-1}}{z} \cdot \frac{1}{N!^2} \prod_{k=1}^N \frac{1}{1-\left(\frac{z}{k}\right)^2} \\ &\sim \frac{(-1)^{1/2-z}}{2\pi z} \prod_{k=1}^\infty \frac{1}{1-\left(\frac{z}{k}\right)^2} \\ &= \frac{i e^{-i\pi z}}{2\sin(\pi z)} = \frac{1}{1-e^{i2\pi z}} \, ? \end{align}
Cardinality of order topology?
We can calculate. Why does the topology on $\Bbb R$ have the same cardinality as $\Bbb R$? Because we have a basis of size $\aleph_0$, and each open set is the union of $\aleph_0$ basic open sets. So if $(X,\leq)$ is a linear order, then its order topology has size $\leq\kappa$ if (and only if) there exist cardinals $\mu$ and $\lambda$ and a basis of size $\mu$ such that every open set is the union of at most $\lambda$ basic open sets, and $\mu^\lambda\leq\kappa$. Some observations:

- $|X|\leq\mu^\lambda$.
- We can assume $\lambda\leq\mu$, since any union of more than $\mu$ elements of a set of size $\mu$ is really just the same union of at most $\mu$ elements.

So if $|X|=\mu^\lambda$ then the topology has exactly the wanted size.
Exponential functions with negative base
Rewrite $$ f(x) = (-2)^x = 2^x \mathbf{i}^{2x} = 2^x \Big( \cos(\pi x) + \mathbf{i} \sin(\pi x ) \Big). $$ So it is real for $$ \sin(\pi x) = 0 \quad \Longrightarrow \quad x = k, \quad k \in \mathbb{Z}. $$ For a general base $b>0$, $$ f(x) = (-b)^x = b^x \mathbf{i}^{2x} = b^x \Big( \cos(\pi x) + \mathbf{i} \sin(\pi x ) \Big), $$ which again is real exactly for $$ \sin(\pi x) = 0 \quad \Longrightarrow \quad x = k, \quad k \in \mathbb{Z}. $$
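A quick numerical illustration (a Python sketch using cmath; the principal branch of the power is assumed):

    import cmath

    def neg_base_pow(b, x):
        # Principal value of (-b)^x for b > 0, via b^x * e^{i*pi*x}.
        return (b ** x) * cmath.exp(1j * cmath.pi * x)

    print(neg_base_pow(2, 3))    # about (-8+0j): integer exponent, real value
    print(neg_base_pow(2, 0.5))  # about (0+1.414j): not real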
Random variable as a vector
We are treating each random variable as a vector. If we shift the random variable so that its mean is zero, the definition of covariance matches the definition of an inner product, and it is very close to the Euclidean inner product: $$\operatorname{cov}(X,Y) = \frac 1n\sum_{i=1}^n X_iY_i, \qquad \operatorname{StDev}(X) = \sqrt{\operatorname{cov}(X,X)},$$ which compares to the Euclidean inner product $\langle \mathbf x,\mathbf y\rangle = \sum_{i=1}^n x_iy_i$ and norm $\|\mathbf x\|^2 = \langle \mathbf x,\mathbf x\rangle$.
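A tiny Python sketch of the analogy (the sample values are made up and already centered at mean zero):

    # Covariance of two centered samples is just a scaled dot product.
    xs = [1.0, -2.0, 3.0, -2.0]
    ys = [2.0, -1.0, 0.0, -1.0]

    n = len(xs)
    cov = sum(x * y for x, y in zip(xs, ys)) / n   # the "inner product"
    std_x = (sum(x * x for x in xs) / n) ** 0.5    # the norm it induces
    print(cov, std_x)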
Find the integral $\int_{0}^{1} f(x)dx$ for $f(x)+f(1-{1\over x})=\arctan x\,,\quad \forall \,x\neq 0$.
Apparently $f(x)=f(1/x)$ does not hold. Let $g(x) = 1-1/x$; then we have $$g^2 (x) = 1/(1-x) \quad \quad \color{red}{g^3 (x) = x}$$ Hence $f(x) + f(g(x)) = \arctan x$ implies $$f(g(x)) + f(g^2(x)) = \arctan(g(x))$$ $$f(g^2(x)) + f(x) = \arctan(g^2(x))$$ Solving for $f(x)$ from these three equations gives $$f(x) = \frac{1}{2}\left[\arctan x - \arctan(1-\frac{1}{x}) + \arctan(\frac{1}{1-x})\right]$$ and routine integration gives $$\int_0^1 f(x) dx = \frac{3\pi}{8}$$
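The formula is easy to check numerically; here is a sketch (assuming scipy is available for the quadrature):

    import math
    from scipy.integrate import quad

    def f(x):
        return 0.5 * (math.atan(x)
                      - math.atan(1 - 1 / x)
                      + math.atan(1 / (1 - x)))

    # The functional equation: f(x) + f(1 - 1/x) should equal arctan(x).
    x = 0.3
    print(f(x) + f(1 - 1 / x) - math.atan(x))   # about 0

    # The integral over (0, 1); expect 3*pi/8 ~ 1.1781.
    print(quad(f, 0, 1)[0], 3 * math.pi / 8)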
Prove that $int(A)=A\setminus bd(A)$
Hint: $x\in bd(A)$ if and only if for every open neighborhood $V$ of $x$, we have that $$A\cap V\neq \emptyset \text{ and } (M\setminus A)\cap V\neq\emptyset.$$
Relation between two answered problems in Lebesgue Integral
As mentioned by David Mitra, the two problems are rather different. The first property states that $f$ is small enough that it takes small values on sets of infinite measure; a rough analogy on $\Bbb R^n$ would be that $f(x)\to 0$ as $x\to\infty$. In contrast, the second property says that the measure $\mathrm d\nu = f\;\mathrm d\mu$ is absolutely continuous w.r.t. $\mu$, which of course uses the fact that $f$ is not incredibly big, but in essence the issues here may arise even if $\int f\,\mathrm d\mu$ is finite. Yet again, roughly it means that the first property is violated if $f = \infty$ on sets of positive $\mu$-measure, whereas the second is violated if $f = \infty$ on $\mu$-null sets. That said, in math one can rarely safely claim that two results are completely unrelated.
Indefinite integral of normal distribution
In general, the integral $$\int e^{-x^2} dx$$ cannot be expressed in terms of elementary functions. For definite integrals from $0$ to $x$, we can define the error function, $$\operatorname{erf} x = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2} dt.$$ In order to introduce constants as in your function, a simple substitution and rescaling can be done. On the other hand, if you want to compute the number $$\int_{\mathbb{R}} e^{-x^2} dx,$$ the usual trick is to square the integral, convert into polar coordinates, and evaluate.
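Concretely, the squaring trick goes as follows: $$\left(\int_{\mathbb{R}} e^{-x^2}\,dx\right)^2 = \int_{\mathbb{R}}\int_{\mathbb{R}} e^{-(x^2+y^2)}\,dx\,dy = \int_0^{2\pi}\!\!\int_0^\infty e^{-r^2}\,r\,dr\,d\theta = 2\pi\cdot\frac12 = \pi,$$ so $\int_{\mathbb{R}} e^{-x^2}\,dx = \sqrt{\pi}$.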
Are simple commutative monoids monogeneous?
Let $L=\{0,1\}$, considered as a commutative monoid under multiplication. If $M$ is any commutative monoid, there is a homomorphism $f:M\to L$ which sends all invertible elements to $1$ and all non-invertible elements to $0$. If $M$ is simple, then either $f$ must be an isomorphism or $f$ must fail to be surjective. If $f$ is not surjective, that means every element of $M$ is invertible, so $M$ is a group. Then $M$ must also be simple as an abelian group, so it is cyclic of prime order. So, the only simple commutative monoids (up to isomorphism) are $L$ and the cyclic groups of prime order. In particular, they are all generated by a single element.
Convergence in distribution for changing domains.
Convergence in distribution refers to a (well defined) convergence of the distributions, that is, of some probability measures defined on the target space. Thus the target space must be fixed (in your case this seems to be the real line for every $n$, hence everything is fine), but the source spaces are simply irrelevant.
How can I construct a variation problem whose solution is $f^{(3)}=0$?
Any even number of differentiations is easy: $f^{(2n)}=0$ is the (higher) Euler-Lagrange equation for the functional $\int \! dx~|f^{(n)}|^2$. Here $n\in\mathbb{N}_0$. However OP wants to consider an odd number of differentiations: $f^{(2n+1)}~=~0$. The remedies are standard: If $f$ is a complex variable, then use the functional $\int \! dx~f^{\ast} f^{(2n+1)}$. If a Lagrange multiplier field $\lambda$ is allowed, then use the functional $\int \! dx~\lambda f^{(2n+1)}$.
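For instance, in the simplest even case $n=1$, the Lagrangian is $L=(f')^2$ and the Euler-Lagrange equation $$0 = \frac{\partial L}{\partial f} - \frac{\mathrm d}{\mathrm dx}\frac{\partial L}{\partial f'} = -\frac{\mathrm d}{\mathrm dx}\left(2f'\right) = -2f''$$ reproduces $f''=0$.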
Orthogonal Basis and orthogonal projection
If $(a,b,c,d)$ is the point of the plane defined by $x_1+3x_2-5x_3-x_4=36$ closest to the origin, then the vector $(a,b,c,d)$ is orthogonal to that plane. Therefore, $(a,b,c,d)=\lambda(1,3,-5,-1)$ for some $\lambda\in\mathbb R$. So, solve the equation$$\lambda+3\times(3\lambda)-5\times(-5\lambda)-(-\lambda)=36.$$In other words, take $\lambda=1$.
Convergence of a series where terms involve a convergent sequence
The usual way to prove that a series converges is to find an asymptotic equivalent ($\sim$, $O$, ...) of the general term of the series. Here, since $L>0$, $$ \frac{n^2}{n^4 x_n+1} \sim \frac{n^2}{n^4L}=\frac{1}{n^2L}, $$ and using the fact that $\sum \frac{1}{n^2}$ converges, we have the result. Another way: for large $n$, $$ 0 \le \frac{n^2}{n^4x_n+1} \le \frac{1}{n^2 x_n} \le \frac{1}{n^2(L/2)} $$ (indeed, $x_n \to L$, so taking $\epsilon=L/2 >0$ there exists $n_0$ such that for all $n \ge n_0$, $x_n \ge L/2$). And using the fact that $\sum \frac{1}{n^2}$ converges, we have the result.
I am having trouble fitting this curve - Does anyone know what a good equation to fit could be?
You can go for broke on questions like this. The fact is, there's no one function that's going to be "the best". There are measures of the goodness of fit, such as the coefficient of determination $R^2$; it's useful, but not the be-all and end-all. You should always plot your residuals. For example, in Excel, I found that a fourth-order polynomial wasn't too bad: $$\operatorname{prod}=-0.0029\operatorname{flow}^4+1.2195\operatorname{flow}^3 -175.89\operatorname{flow}^2+9889.9\operatorname{flow}+7797.1$$ with $R^2=0.4024$. An $R^2$ closer to $1$ would be better, certainly, but this function captures the basic shape, I think. Now if you plot the residuals, you're looking to make sure there's no definite pattern (a pattern would indicate your model is leaving something out). I've got the basic plot plus the residuals shown here. You can see that there's definitely a pattern, mostly corresponding to the small cluster of points originally located near $\operatorname{flow}=17,\;\operatorname{prod}=25000$. So you could certainly do better than this, but this will hopefully get you started. What you're doing is called linear regression, but a proviso: it's called linear even if you're fitting odd shapes like polynomials or exponentials to your data. It's linear because the coefficients show up in a linear fashion.
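If you want to reproduce this kind of fit outside Excel, here is a Python sketch (the data arrays are placeholders standing in for your own flow/production measurements):

    import numpy as np

    flow = np.array([5.0, 8.0, 10.0, 12.0, 15.0, 17.0, 20.0, 25.0])
    prod = np.array([42000, 55000, 61000, 66000, 70000, 25000, 74000, 69000])

    coeffs = np.polyfit(flow, prod, deg=4)   # least-squares 4th-order fit
    fitted = np.polyval(coeffs, flow)
    residuals = prod - fitted                # plot these and look for patterns
    r_squared = 1 - residuals.var() / prod.var()
    print(coeffs, r_squared)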
Estimate length of a curve, using left sum
ANSWER: Your integral is $$\int_0^2\sqrt{1+4x^2}\,dx.$$ Now use left Riemann sums to estimate it. ADDENDUM: When you calculate an integral, you're technically calculating the "area" underneath the function. But the integral $\int_a^b\sqrt{1+(f'(x))^2}\,dx$ is the length of the curve $f(x)$ on the interval $[a,b]$. Next time, look at the context of the question instead of relying on preconceived intuitive notions.
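A left-sum sketch in Python (the choice of $n=100$ subintervals is just illustrative):

    import math

    def left_sum(g, a, b, n):
        # Left Riemann sum: sample g at the left endpoint of each subinterval.
        h = (b - a) / n
        return h * sum(g(a + i * h) for i in range(n))

    est = left_sum(lambda x: math.sqrt(1 + 4 * x * x), 0, 2, 100)
    print(est)  # tends to the true arc length ~4.6468 as n grows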
How can I check whether the given function or vector field is path-dependent or path-independent?
If you're working in 3-D Euclidean space, then a vector field $\vec{A}: \mathbb{R}^3 \to \mathbb{R}^3$ can be written as the gradient of a scalar field if and only if its curl is zero: $$ \vec{\nabla} \times \vec{A} = 0 \Leftrightarrow \vec{A} = \vec{\nabla} f \text{ for some $f: \mathbb{R}^3 \to \mathbb{R}$.} $$ Proving the arrow going to the left above is relatively easy (it follows from the equality of the mixed partials of $f$). Proving the arrow going to the right is noticeably harder, but can still be done. This can easily be extended to 2-D Euclidean space by briefly "pretending" that you have a vector field that doesn't depend on a third Euclidean coordinate, taking the curl, and seeing if it vanishes. In higher-dimensional Euclidean spaces, the idea of the "curl" must be replaced by something called the exterior derivative, but a similar statement still holds. And in non-Euclidean spaces, the failure of the above statement to be true tells you something deep and beautiful about the topology of the space; the study of this phenomenon is called de Rham cohomology, which I never miss an opportunity to mention if I'm given an opening because I think it's so darn cool.
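As a concrete check, here is a small sympy sketch that tests whether a sample field is curl-free (the field $\vec A=(yz,\,xz,\,xy)$ is just an illustrative choice; it is the gradient of $xyz$):

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    A = sp.Matrix([y * z, x * z, x * y])  # example field, equals grad(x*y*z)

    curl = sp.Matrix([
        sp.diff(A[2], y) - sp.diff(A[1], z),
        sp.diff(A[0], z) - sp.diff(A[2], x),
        sp.diff(A[1], x) - sp.diff(A[0], y),
    ])
    print(curl)  # Matrix([[0], [0], [0]]): the field passes the test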
What is meant by $B(X,Y)$ and (how) are these theorems equivalent?
I doubt that the two versions are logically equivalent, but the one from your course implies the other one, where the assumption is that ALL orbits are bounded.
Let $f:D(0,R)\rightarrow D(0,M)$ be a holomorphic function with $f(0)=0$; prove that $|f(z)|\leq \frac{M}{R}|z|$ for every $z \in D(0,R)$
Hint: Consider $g(z)=f(Rz)/M,$ which satisfies the usual Schwarz Lemma hypotheses.
Moment Generating function
Indeed, the MGF is a particular case of a theorem related to the Fourier transform. The idea behind it is the concept of convergence in distribution for a sequence of random variables. That definition generalizes the MGF, and it works by means of defining the set of continuity points of a distribution function. I hope this helps.
How can I generate random coordinates for left, right, bottom, and top positions?
You already have a method that will return four random numbers within a specified range. An integer point in your rectangle is given by coordinates $$ (x, y) $$ where $1\le x\le\text{MaxX}$ and $1\le y\le\text{MaxY}$. It's not completely obvious why this should be the case, but uniformly choosing a random such coordinate is equivalent to uniformly choosing $x$ at random and then uniformly choosing $y$ at random. Then the probability of choosing any given pair $(x,y)$ is $$ \frac{1}{\text{MaxX}\cdot\text{MaxY}}, $$ i.e., it is the reciprocal of the total number of integer points in the rectangle. As an example of choosing four points at random within this rectangle (note that the array must be allocated and each Point constructed before use):

    int[] xCoordinates = generateRandomNumbers(1, MaxX);
    int[] yCoordinates = generateRandomNumbers(1, MaxY);
    java.awt.Point[] randomPoints = new java.awt.Point[4];
    for (int i = 0; i < 4; ++i) {
        randomPoints[i] = new java.awt.Point(xCoordinates[i], yCoordinates[i]);
    }

Of course, a better solution might be to modify your generateRandomNumbers method so that it returns points rather than numbers. Now you might want to choose four random points in the top half of the plane. In that case, you only need to modify the parameters used when choosing the y coordinates, so you would use

    int[] yCoordinates = generateRandomNumbers(1, MaxY / 2);

instead, while if you wanted to choose four random points in the bottom half of the plane, you'd use

    int[] yCoordinates = generateRandomNumbers((MaxY / 2) + 1, MaxY);
A proof for validity of disjunction of sentences by completeness theorem
Suppose for finite $\Gamma \subseteq \Sigma$, $ \bigvee_{\sigma \in \Gamma} \sigma$ is not valid. Then $\bigwedge_{\sigma \in \Gamma} \neg \sigma$ is satisfiable. (This is just De Morgan's law.) If $\bigwedge_{\sigma \in \Gamma} \neg \sigma$ is satisfiable, then $\{\neg \sigma \mid \sigma \in \Gamma \}$ is a satisfiable finite set of sentences. If this holds for all finite $\Gamma \subseteq \Sigma$, compactness says that $\{\neg \sigma \mid \sigma \in \Sigma\}$ is satisfiable. That is, $\{ \neg\sigma \mid \sigma \in \Sigma\}$ is a set of true sentences in some $\mathcal{L}$-structure $\mathcal{A}$. Now, this structure must be a model of all the negations of sentences in $\Sigma$; that is, $\mathcal{A} \models \neg \sigma$ for all $\sigma \in \Sigma$. However, this contradicts the assumption that in every $\mathcal{L}$-structure, including $\mathcal{A}$, at least one sentence $\sigma \in \Sigma$ is true. We have to reject the assumption that there is no finite subset of $\Sigma$ such that the disjunction of all its members is valid.
How are Lagrangian mechanics equivalent to Newtonian mechanics?
The Euler-Lagrange equation $$\frac{\partial\mathcal{L}}{\partial x}=\frac{\mathrm d}{\mathrm dt}\frac{\partial\mathcal{L}}{\partial\dot{x}}$$ is equivalent to Newton's second law of motion, which states $$\mathbf{F}=m\vec a.$$ I will not give a fully rigorous derivation; I will only present the steps showing that the first equation is equivalent to Newton's second law. The Lagrangian (denoted $\mathcal L$) is simply the kinetic energy of a body minus its potential energy, which can be written as $\mathcal L=\frac12 mv^2-V(x)$. Now if you take the derivative of this Lagrangian w.r.t. $x$, you get minus the derivative of the potential energy $V(x)$ w.r.t. $x$, which is precisely the force: $F=-V'(x)$ for a conservative force. So the LHS of the first equation matches the LHS of the second. $\dot{x}$ just means the derivative of $x$ w.r.t. time, so if you differentiate the Lagrangian with respect to $\dot{x}$, which is the velocity, you get $m\dot{x}$, since $$\frac{\partial\mathcal{L}}{\partial\dot{x}}=\frac{\partial}{\partial\dot{x}}\left(\frac12 m\dot{x}^2-V(x)\right)=m\dot{x}.$$ And the derivative of the latter w.r.t. time is $m\ddot{x}$, which is of course $m\vec{a}$. So the RHS of the first equation matches the RHS of the second. Therefore $$\frac{\partial\mathcal{L}}{\partial x}=\frac{\mathrm d}{\mathrm dt}\frac{\partial\mathcal{L}}{\partial\dot{x}}\iff \mathbf{F}=m\vec a.$$
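As a concrete illustration, take the harmonic oscillator with $V(x)=\frac12 kx^2$: $$\mathcal L = \frac12 m\dot x^2 - \frac12 kx^2,\qquad \frac{\partial\mathcal{L}}{\partial x} = -kx,\qquad \frac{\mathrm d}{\mathrm dt}\frac{\partial\mathcal{L}}{\partial\dot{x}} = m\ddot x,$$ so the Euler-Lagrange equation reads $m\ddot x = -kx$, which is Newton's second law with the spring force $F=-kx$.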
Finding independence of X and Y from joint pdf
No, they aren't. If they were independent, then we would have $f_{X,Y}= f_X f_Y$, which is not the case here. In fact, the values of $X$ and $Y$ are dependent because, looking at the pdf, we can see that if $X>1$ then necessarily $Y<1$.
Demonstrate equalities are correct using trigonometric identities
Note that $\sin^3\theta + \cos^3\theta = (\sin\theta + \cos\theta)(1 - \sin\theta\cos\theta)$ and $2\sin^2\theta - 1= \sin^2\theta-\cos^2\theta = (\sin\theta - \cos\theta)(\sin\theta + \cos\theta)$, so $$LHS = \dfrac{1-\sin\theta\cos\theta}{\sin\theta - \cos\theta}= RHS$$ after dividing top and bottom by $\cos\theta$.
Minkowski sum and vectors
First of all, a good reference (CGAL is a very powerful library): http://doc.cgal.org/latest/Minkowski_sum_2/ I think the point is that you have to turn the definition of a convex polygon around: there is a perfect equivalence between two definitions of a convex polygon, through a list of points or through a set of vectors (sorted by their polar angle).

- Given a list of points $P_k$, the associated set of vectors is $P_{k+1}-P_k=\overrightarrow{P_kP_{k+1}}$ (it is a kind of derivative, denoted $\partial P$, part of a vast theory called "homology").

- In the reverse direction, given a list of vectors $V_k$ (sorted by their polar angle), one takes an arbitrary origin point $P_1$, then $P_2=P_1+V_1$, $P_3=P_2+V_2$, etc.

The second way gives an immediate definition: the Minkowski sum of two convex polygons is the polygon associated with the (sorted) union of the lists of vectors of the two polygons, as sketched below.

Philosophical note: it wouldn't be the first time in mathematics that a definition and a property take advantage of being interchanged...
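A minimal sketch of this edge-vector construction in Python (it assumes both polygons are convex and listed counterclockwise, and it ignores degenerate cases such as parallel edges):

    import math

    def minkowski_sum(p, q):
        # p, q: convex polygons as counterclockwise lists of (x, y) vertices.
        def edge_vectors(poly):
            n = len(poly)
            return [(poly[(i + 1) % n][0] - poly[i][0],
                     poly[(i + 1) % n][1] - poly[i][1]) for i in range(n)]

        # Sort the union of the edge vectors by polar angle in [0, 2*pi) ...
        vs = sorted(edge_vectors(p) + edge_vectors(q),
                    key=lambda v: math.atan2(v[1], v[0]) % (2 * math.pi))

        # ... and re-integrate, starting from the sum of the two bottom-most
        # vertices (which is the bottom-most vertex of the Minkowski sum).
        def lowest(poly):
            return min(poly, key=lambda pt: (pt[1], pt[0]))

        (px, py), (qx, qy) = lowest(p), lowest(q)
        result = [(px + qx, py + qy)]
        for vx, vy in vs[:-1]:  # the last vector just closes the polygon
            x, y = result[-1]
            result.append((x + vx, y + vy))
        return result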
How to prove that $(\sqrt{3} + 1)b/2 < a < 1 + (\sqrt{3} + 1)b/2$?
The condition gives: $$2a^2-2a(b+1)-b^2+b=0,$$ which gives $$a=\frac{b+1+\sqrt{3b^2+1}}{2}$$ or $$a=\frac{b+1-\sqrt{3b^2+1}}{2}.$$ In the first case we need to prove that $$\frac{(1+\sqrt3)b}{2}<\frac{b+1+\sqrt{3b^2+1}}{2}<1+\frac{(1+\sqrt3)b}{2}$$ or $$\sqrt3b<1+\sqrt{1+3b^2}<2+\sqrt3b,$$ which is obvious. In the second case we need $$\frac{b+1-\sqrt{3b^2+1}}{2}>0,$$ which gives $$0<b<1.$$ Can you end it now? I got that there is a problem with $$\frac{(1+\sqrt3)b}{2}<\frac{b+1-\sqrt{3b^2+1}}{2},$$ which is wrong for $b\rightarrow1^-$, because in this case we would need $$\frac{1+\sqrt3}{2}\leq\frac{2-2}{2},$$ which is false. If $a\geq1$ and $b\geq1$, then the second case is impossible because the second root is negative, and we are done!
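A quick numerical sweep confirms the bounds for the first root (plain Python, purely as a sanity check):

    import math

    for b in [0.1, 0.5, 1.0, 2.0, 10.0]:
        a = (b + 1 + math.sqrt(3 * b * b + 1)) / 2   # the first root
        lo = (1 + math.sqrt(3)) * b / 2
        hi = 1 + (1 + math.sqrt(3)) * b / 2
        print(b, lo < a < hi)   # expect True for every b > 0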
Is the exponential function on Hermitian matrices injective?
Just follow the proof there, replacing transpose with conjugate transpose. Note that the exponential function remains injective on the eigenvalues, because a Hermitian matrix has real eigenvalues.
Knowing that $a_{n+1}=(n+3)a_n$, how can I find $a_n$?
$$a_{n+1}=(n+3)a_n=(n+3)(n+2)a_{n-1}=(n+3)(n+2)(n+1)a_{n-2}=\cdots$$ Can you find the pattern now?
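If you conjecture a closed form, you can check it numerically; for example, taking $a_1=1$ as an illustrative initial value and $a_n = a_1\,(n+2)!/3!$ as the guess:

    import math

    a = 1  # a_1, chosen arbitrarily for the check
    for n in range(1, 10):
        assert a == math.factorial(n + 2) // 6   # conjectured closed form
        a = (n + 3) * a                          # the recurrence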