Positive Elements: Characterization
Every C$^*$-algebra has a faithful representation into $B(H)$ for some $H$. So the question can be reduced to $\mathcal A\subset B(H)$.
Relative discriminant of extension of number field
In general you can't find a basis for $O_L$ over $O_K$ because it isn't always free, but what you can do is this: Find a basis $b_1, \ldots , b_n$ for $L$ over $K$ and fractional ideals $I_1, \ldots , I_n$ of $K$, such that $O_L = b_1 I_1 + \ldots + b_n I_n$ (direct sum). Then the relative discriminant is the product $\Delta\{b_1,\ldots, b_n\} \cdot (I_1 \cdots I_n )^2$. The easiest example is this: let $P$ be a maximal ideal of $K$ whose order in the class group is $2$, so $P^2$ is the principal ideal generated by an element $\alpha$. Then you can define $L= K (\alpha^{1/2})$. In most cases $O_L$ can be written in the form $1\cdot O_K + \alpha^{1/2}\cdot P^{-1}$, so the relative discriminant is $(4\alpha)\cdot P^{-2}$.
Is an axiom a proof?
One can say that a proof of a statement $X$ is a finite sequence of statements where a) each statement in the sequence is either an axiom or follows from some previous statements via one of a handful of rules of inference, and b) the last statement in the sequence is $X$. So if the statement $X$ is already an axiom, the one-term sequence with only term $X$ constitutes a proof of statement $X$. Hence if you do not distinguish between $X$ and the one-term sequence with only term $X$, the claim is correct and an axiom fulfills the definition of a proof.
Separable and irreducible polynomials over field with characteristic $p$
Your approach does not make much sense: $x \mapsto x^{p^e}$ is not surjective for $e \geq 1$. What you do is the following: you choose the maximal $e \geq 0$ such that $f$ lies in the image of $x \mapsto x^{p^e}$ (you should prove that such a maximal $e$ exists). Then you can write $f(x) = g(x^{p^e})$ by construction of $e$. It is a straightforward verification that $f$ factors into two polynomials of positive degree if $g$ does, so $g$ has to be irreducible. You are left to show that $g$ is separable. Use that we cannot write $g(x)=h(x^p)$ for any $h$, by the construction of $e$.
Regarding expressing $j_p $ as a polynomial in $\Phi $
You have to look at the fundamental region of $\Gamma_0(p)$: it is the union of $ST^j(R)$ for $0\le j <p$, where $R$ is the fundamental region of $\Gamma$ and $S$ and $T$ are the usual $S\tau=-1/\tau$ and $T\tau = \tau+1$. The only points of concern (outside $\mathbb H$) are easily seen to be $0$ and $\infty$. As $f$ is automorphic for $\Gamma_0(p)$, analytic in all of $\mathbb H$ and at $\tau = 0$, it is enough to look at $\tau \to \infty$. But $\Phi(\tau)$ has a zero at $\tau=\infty$, so looking at the expression for $f$ it is enough to check that $j_p(\tau)$ is bounded as $\tau \to \infty$; but then you can use the relation for $j_p(-1/\tau)$ on top of the first page you have quoted to see that it is also bounded as $\tau \to \infty$.
Oblate Spheroidal coordinate system graphic representation of ellipse
Neither or both, depending on your point of view. Refer to the illustration: the surfaces with $\eta$ held constant are oblate spheroids. From the equations of the transformation to Cartesian coordinates, these spheroids have the parameterization $$\begin{align}x &= (a\cosh\eta)\sin\theta\cos\psi \\ y &= (a\cosh\eta)\sin\theta\sin\psi \\ z &= (a\sinh\eta)\cos\theta.\end{align}$$ I’ve added parentheses to emphasize the constant quantity in each equation. The half-axis lengths of the spheroid are therefore $a\cosh\eta$ and $a\sinh\eta$. Similarly, the surfaces with constant $\theta$ are hyperboloids of revolution with half-axis lengths $a\cos\theta$ and $a\sin\theta$. The angle $\theta$ represents the aperture half-angle of the hyperboloid’s asymptotic cone. Surfaces with constant $\psi$ are half-planes that make an angle of $\psi$ with the positive $x$-$z$ half-plane.
Differentiating a triangular wave
The key observation is that a sine wave is the same as a cosine wave, but shifted by $\frac \pi 2$. As the triangle wave is odd, its derivative, the square wave, is even (plot it), so it should be a sum of cosines.
Trouble with fixed point iteration.
As you rightly pointed out, the fixed points are the solution of $x=\frac{1}{2}(x^2+x)$ which are $0,1$. Doing the iteration $x_{n+1}=\frac{1}{2}(x_n^2+x_n)$ starting at $x_0=\frac{1}{2}$ gives a sequence of numbers that seems to converge to 0. [$x_1=\frac{3}{8},x_2=\frac{33}{128},...$] So here the error terms are just $x_i-0=x_i$, so $e_0=x_0=\frac{1}{2},e_1=x_1=\frac{3}{8},...$
Evaluating $\int_0^\infty\operatorname{erfi}(x)e^{-x^2}\frac{dx}x$
Introduce $$ I(\lambda)=\int_0^\infty\mathrm{erfi}(\lambda x)\frac{\exp(-x^2)}{x}\,\mathrm{d}x $$ for $\lambda\in[0,1]$. Note that $I(\lambda)$ is continuous (and even real-analytic on $(0,1)$) and differentiating with respect to $\lambda$ under the integral sign, we have $$ I'(\lambda)=\frac{2}{\sqrt{\pi}}\int_0^\infty\exp(-(1-\lambda^2) x^2)\,\mathrm{d}x=\frac{1}{\sqrt{1-\lambda^2}} $$ for $\lambda\in (0,1)$. Furthermore, $I(0)=0$ and so $$ I=I(1)=\int_0^1\frac{1}{\sqrt{1-\lambda^2}}\,\mathrm{d}\lambda=\frac{\pi}{2} $$
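For a quick numerical sanity check (assuming SciPy is available), note that $\operatorname{erfi}(x)\,e^{-x^2}=\tfrac{2}{\sqrt\pi}D(x)$ with $D$ the Dawson function, which avoids overflow for large $x$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import dawsn

# erfi(x)*exp(-x^2) = (2/sqrt(pi)) * dawsn(x), a numerically stable form
integrand = lambda x: 2.0 / np.sqrt(np.pi) * dawsn(x) / x
val, err = quad(integrand, 0, np.inf)
print(val, np.pi / 2)   # both ~ 1.5707963
```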
How would i put this into polynomial equation really struggling ive tried alot
Let $H_n=1+\frac12+...+\frac1n$. There is no polynomial $f$ such that $f(n)=H_n$ for all $n$. One way of seeing this is that $H_n$ grows slower and slower as $n\to\infty$, but polynomials grow faster and faster as $n\to\infty$. Specifically, we have $H_{n+1}-H_n\to 0$, but if $f$ is a non-constant polynomial, then $f(n+1)-f(n)$ is also a polynomial, and either converges to plus or minus infinity if $\deg f\geq 2$, or is constant if $\deg f=1$.
Prove this group is Abelian
One pretty easy way: note that any matrix of the given form may be written $\begin{bmatrix} \cos \theta & \sin \theta \\ -\sin \theta & \cos \theta \end{bmatrix} = \cos \theta I + \sin \theta J, \tag{1}$ where $J = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \tag{2}$ and $I$ is the $2 \times 2$ identity matrix. Now note that any two matrices of the form $aI + bJ, \tag{3}$ commute: $(aI + bJ)(cI + dJ) = (cI + dJ)(aI + bJ), \tag{4}$ which easily follows from the fact that $I$ and $J$ themselves commute, since $I$ is the identity: $IJ = JI$. Or simply write the products in (4) out in full: $(aI + bJ)(cI + dJ) = (ac - bd)I + (bc + ad)J, \tag {5}$ where we have used the fact that $J^2 = -I. \tag{6}$ Nota Bene: at this point one might notice the curious similarity between (4) and the ordinary multiplication of the complex numbers $a + bi$ and $c + di$!!! ;) Anywaaayyy . . . . It is easy to see $I$ is of the form (1), and since it can easily be checked that $(\cos \theta I + \sin \theta J)^{-1} = \cos \theta I - \sin \theta J = \cos (-\theta) I + \sin (-\theta) J, \tag{7}$ the set $O_2$ contains the identity and inverses. Closure under multiplication is had by applying standard trigonometric identities to (5), with $a = \cos \theta_1$, $b = \sin \theta_1$, $c = \cos \theta_2$, $d = \sin \theta_2$. I'll leave that part of the algebra to you. If we apply (4) to two matrices of the form (1), the next result pops right out: $\begin{bmatrix} \cos \theta_1 & \sin \theta_1 \\ -\sin \theta_1 & \cos \theta_1 \end{bmatrix} \begin{bmatrix} \cos \theta_2 & \sin \theta_2 \\ -\sin \theta_2 & \cos \theta_2 \end{bmatrix} = \begin{bmatrix} \cos \theta_2 & \sin \theta_2 \\ -\sin \theta_2 & \cos \theta_2 \end{bmatrix} \begin{bmatrix} \cos \theta_1 & \sin \theta_1 \\ -\sin \theta_1 & \cos \theta_1 \end{bmatrix}; \tag{8}$ your "set" $O_2$ is an Abelian group. Hope this helps. Cheers, and as always Fiat Lux!
A difficult functional series (still unsolved)
Since $\sin(\pi x/k) \sim \pi x/k$ as $k \to \infty$, the summand has nonzero limit $1/(\pi^2 x^2)$, and the series diverges.
Binomial coeff having bounded residue
Let $\chi:\Bbb F_p^*\to\Bbb C^*$ be a quartic character. Then $\chi^2$ is the Legendre symbol modulo $p$. By definition the Jacobi sum $$J(\chi,\chi^2)=\sum_{a=0}^{p-1}\chi(a)\chi^2(1-a)$$ where we follow the usual convention that $\chi(a)=0$ when $p\mid a$. It is well-known (see for instance Ireland and Rosen's A Classical Introduction to Modern Number Theory) that $J(\chi,\chi^2)=\pi$ where $\pi$ is a Gaussian integer of norm $p$. Moreover $$\chi(a)\equiv a^k\pmod\pi$$ in the Gaussian integers. Now consider congruences modulo $\overline\pi$. Then $$\chi(a)\equiv a^{3k}\pmod{\overline\pi}$$ and $$\chi^2(a)\equiv a^{2k}\pmod{\overline\pi}.$$ Then \begin{align} \pi&=J(\chi,\chi^2)\equiv\sum_{a=0}^{p-1}a^{3k}(1-a)^{2k}\\ &\equiv\sum_{j=0}^{2k}\sum_{a=0}^{p-1}(-1)^j\binom{2k}ja^{3k+j}\pmod{\overline\pi} \end{align} A sum $\sum_{a=0}^{p-1}a^r$ is divisible by $p$ unless $r$ is a multiple of $p-1=4k$. In the above sum all the terms in the $j$ sum are divisible by $p$ and so also $\pi$ save for the $j=k$ term. Thus $$\pi\equiv\sum_{a=0}^{p-1}(-1)^k\binom{2k}k a^{4k}\equiv-(-1)^k\binom{2k} k\pmod{\overline\pi}.$$ Write $\pi=b+ci$. Then $b^2+c^2=p$ and $\pi\equiv 2b\pmod{\overline\pi}$. Therefore $$\binom{2k}k\equiv\pm2b\pmod{\overline\pi}$$ and as bot sides are integers then $$\binom{2k}k\equiv\pm2b\pmod p.$$ As $\binom{2k}k$ is even, then $$\frac12\binom{2k}k\equiv\pm b\pmod p.$$ The difference of $\frac12\binom{2k}{k}$ and the nearest multiple of $p$ is $\pm b$ and $b^2=p-c^2<p$.
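A brute-force check of the conclusion (my own sketch, assuming SymPy only for the primality test): for each prime $p=4k+1$ it reduces $\frac12\binom{2k}k$ modulo $p$ to the residue nearest $0$ and confirms that this residue is a summand of a representation $p=b^2+c^2$.

```python
from math import comb, isqrt
from sympy import isprime

for p in range(5, 200):
    if p % 4 != 1 or not isprime(p):
        continue
    k = (p - 1) // 4
    r = (comb(2 * k, k) // 2) % p
    b = r if r <= p // 2 else r - p      # centered residue, should be +-b
    c2 = p - b * b                       # should be a perfect square c^2
    assert c2 >= 0 and isqrt(c2) ** 2 == c2, (p, b)
    print(p, "=", b * b, "+", c2)
```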
What algorithm will maximize utility when assigning of students to practicum locations
Hint: First place to start is reading about the Nobel Prize-winning Gale-Shapley algorithm which is used to match med students to first residencies, for example.
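To make the hint concrete, here is a minimal sketch of the one-to-one (capacity-one) version of Gale-Shapley in Python; the real residency match is many-to-one, the names are made up, and complete preference lists of equal length are assumed:

```python
def gale_shapley(student_prefs, site_prefs):
    """Stable matching: students propose, each site holds its best proposal so far."""
    rank = {s: {stu: i for i, stu in enumerate(prefs)} for s, prefs in site_prefs.items()}
    free = list(student_prefs)              # students without a tentative match
    next_choice = {stu: 0 for stu in student_prefs}
    match = {}                              # site -> student
    while free:
        stu = free.pop()
        site = student_prefs[stu][next_choice[stu]]
        next_choice[stu] += 1
        if site not in match:
            match[site] = stu
        elif rank[site][stu] < rank[site][match[site]]:
            free.append(match[site])        # current occupant is bumped
            match[site] = stu
        else:
            free.append(stu)                # rejected, proposes to next choice later
    return {stu: site for site, stu in match.items()}

students = {"ann": ["x", "y"], "bob": ["x", "y"]}
sites = {"x": ["bob", "ann"], "y": ["ann", "bob"]}
print(gale_shapley(students, sites))        # e.g. {'bob': 'x', 'ann': 'y'}
```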
Show event is in a tail field
You have already shown that for any $k \in \mathbb{N}$, we have $$A:=\left\{ \lim_{n \to \infty} X_n = \infty \right\} \in \sigma(X_k,X_{k+1},\ldots).$$ Since this holds for any $k \in \mathbb{N}$, this yields $$A \in \bigcap_{k \in \mathbb{N}} \sigma(X_k,X_{k+1},\ldots) = \tau.$$
Series $\sum_{n=0}^\infty \frac{a^n}{\Gamma(b+nc)}$
Providing solution in the $c=1$ case. Let me know in the comments if $a,b,c$ are natural numbers, in that case it can be solved without any such assumption, but even there the underlying idea won't change. $$ \sum_{n=0}^\infty \frac{a^n}{\Gamma(b+n)} $$ Now as $$\Gamma(x+1)=x\Gamma(x)$$ This gives us on repeated application $$\Gamma(b+n)=\prod_{k=0}^{n-1 }{(b+k)} \Gamma(b)$$ Therefore our sum becomes $$\frac{1}{\Gamma(b)} \sum_{n=0}^{\infty}{\frac{a^n}{\prod_{k=0}^{n-1 }{(b+k)}}}$$ Can you take it from here; does the sum look familiar now?
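If you want to double-check the rewriting numerically before identifying the sum, here is a quick sketch (assuming mpmath; `rf` is the rising factorial $\prod_{k=0}^{n-1}(b+k)$):

```python
from mpmath import mp, nsum, gamma, rf, inf

mp.dps = 30
a, b = mp.mpf('2.5'), mp.mpf('1.7')     # arbitrary test values
original = nsum(lambda n: a**n / gamma(b + n), [0, inf])
rewritten = nsum(lambda n: a**n / rf(b, n), [0, inf]) / gamma(b)
print(original, rewritten)              # agree to working precision
```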
Is there a name for the value $x$ such that $f(x)$ is maximum?
It's often called the 'arg max' ('argument maximum'; similarly 'arg min'): http://en.wikipedia.org/wiki/Arg_max
if you were to roll a four-sided die 100 times what is the probability that you will not roll a four?
For each roll the probability of not getting a $4$ is $\frac34$; then for $100$ independent rolls we have $$p=\left(\frac34\right)^{100}\approx 3.21\cdot10^{-13}\approx 3.21\cdot10^{-11}\%$$
Reference request: Behavior of power series at endpoints
You can find this, a proof of it, and an in depth exploration of the behavior of power series at their boundaries in E.C. Titchmarsh's "Theory of Functions."
$|a-b|+|b-c|+|c-a|=2(\max\{a,b,c\}-\min\{a,b,c\})$
$1$. Step: Show that both sides of your equation are invariant under permutations of $a,b,c$. Hence, we may assume without loss of generality $a \leq b \leq c$. $2$. Step: Show that if $a \leq b \leq c$, both sides equal $2(c-a)$.
Finding $\tan x$ from $24\sin x\cos x-5=14\cos^2(x)$
Divide both sides by $\cos^2{x}$ to get $24\tan{x} -5\sec^2{x}=14$ Then use $\sec^2{x}=\tan^2{x}+1$ to get $5\tan^2{x}-24\tan{x}+19=0$
Fun with distributivity of graph intersection over graph union
This is a very broad answer, but here are some things I can think of. First of all, there are two common versions of 'graph union'. One is the general union of the vertex-sets and the edge-sets, and the other is the disjoint union where the operation remains the same but the graphs are seen to be disjoint (empty intersection). Rough example uses There are more interesting things you can do with disjoint union. For instance, any graph can be seen as the disjoint union of its connected components and that decomposition is unique among decompositions into the disjoint union of connected graphs. In this sense, anything that applies component-wise can be seen to distribute over the disjoint union. Many things can be done component-wise, in particular, the search for any connected subgraph such as paths, cycles, trees, cliques is such an example and graph intersection is one way to 'select' subgraphs from a graph. So if the connected components $G_1, \cdots, G_k$ of a graph $G$ have some special structure which makes intersection easier, than you can use that to compute $(G_1 \cap H) \cup (G_2 \cap H) \cdots (G_k \cap H)$ faster than $G \cap H$ vertex-wise and edge-wise. Direct computational advantages cannot seem to be generalizable to any class of graphs. Problems Unfortunately, a big hurdle against finding very many interesting things to do with graph union and intersection is that union is not invertible (there is such a thing as a $-5$ number, but there is no $-G$ graph) and graph intersection does not even have a unit (there is no '1 graph' $G$ such that $H \cap G = H$ for all graphs $H$ (at least not nice, finite graphs). To see why these cause problems, consider the case on integers where multiplication by 10 is easy. This multiplication is considered easy because the digits of 10 are 1, the multiplicative identity, so you don't change the digits and 0, multiplicative absorbing element, so the multiplication by 0 is 0 allowing you to say that $16 \times 10 = 16 0$ the same digits, but shifted. Other operations Since you appear to be searching for interesting uses of some properties of graph operations more broadly, I would like to direct you toward a generalization of these graph operations in category theory as products and sums. Here, union and intersection can be seen as categorical sums and products in the category where arrows are inclusion (in other words where functions can be seen as indicating the subgraph relation), but there are other distributive operations that might resemble those on numbers more, namely, the tensor product and the edge-disjoint union. EDIT: Categorical products don't have to distribute over categorical sums Distributivity of categorical product and sum
Inclusion-Exclusion question
Let $d_n$ be the number of derangements (permutations without fixed points) on $n$ points, that is $$ d_n = n! \left(1 - \frac{1}{1!} + \frac{1}{2!} - \frac{1}{3!} + \cdots \pm \frac{1}{n!}\right). $$ The number of permutations of $n$ elements having exactly $k$ fixed points is $$ \binom{n}{k} d_{n-k}. $$ The first factor gives the number of choices for the fixed points, and the second factor gives the number of choices for the other coordinates. The sequence $d_n$ starts (indexed from $0$) $$ 1, 0, 1, 2, 9, 44, 265, 1854, \ldots. $$ Therefore the number of permutations of $n = 7$ with $k = 0,\ldots,7$ fixed points is $$ 1854, 1855, 924, 315, 70, 21, 0, 1. $$
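A brute-force check for $n=7$ (a quick sketch enumerating all $5040$ permutations):

```python
from itertools import permutations
from math import comb, factorial

def d(n):   # derangement numbers via inclusion-exclusion
    return sum((-1) ** j * factorial(n) // factorial(j) for j in range(n + 1))

n = 7
counts = [0] * (n + 1)
for perm in permutations(range(n)):
    counts[sum(perm[i] == i for i in range(n))] += 1

print(counts)                                          # [1854, 1855, 924, 315, 70, 21, 0, 1]
print([comb(n, k) * d(n - k) for k in range(n + 1)])   # same list
```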
Calculate one side of a triangle, given: other sides, sum of two sides
By law of cosines we obtain: $$AB=\sqrt{1+1+2\sqrt{1-\frac{64}{81}}}=\frac{1}{3}(1+\sqrt{17}).$$
Proof: dimension of the vector space of solutions to the system Bx=0
Since the rank of a product is less than or equal to the rank of each factor, the hypothesis $\operatorname{rank}(B)\le\operatorname{rank}(AB)$ implies $\operatorname{rank}(B)=\operatorname{rank}(AB)$. As a consequence, $$ \dim P(B)=n-\operatorname{rank}(B)=n-\operatorname{rank}(AB)=\dim P(AB) $$ It is clear that $P(B)\subseteq P(AB)$ (if $x\in P(B)$, then $Bx=0$, so also $ABx=0$); equality of dimensions yields $P(B)=P(AB)$. Note that your argument is flawed: just proving that $\dim U_1\le\dim U_2$ doesn't imply $U_1\subseteq U_2$, for generic subspaces.
evaluate: $ \lim_{n→∞} \frac1n ((n+1)(n+2)(n+3)⋯(2n))^{\frac1n}$.
$$L=\lim_{n\rightarrow \infty}\frac{1}{n} \left ((n+1) (n+2)(n+3)....(2n) \right)^{1/n}$$ $$L=\lim_{n \rightarrow \infty} \left( (1+1/n) (1+2/n) (1+3/n) (1+4/n).....\right)^{1/n}$$ $$\ln L= \lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^n\,\ln\left(1+\frac{k}{n} \right)= \int_{0}^{1} \ln (1+x) dx=(1+x)\ln (1+x)-(1+x)|_{0}^{1}=\ln (4/e)$$ $$\Rightarrow L= 4/e. $$
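A quick numerical confirmation of the Riemann-sum step (working with logarithms to avoid overflow):

```python
import math

n = 10**6
mean_log = sum(math.log(1 + k / n) for k in range(1, n + 1)) / n
print(math.exp(mean_log), 4 / math.e)   # ~1.471518 vs 1.471518
```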
Question about multiplicity of an eigenvalue for $d-$regular graph
1) $V_c$ is the subset of $V(H)$ consisting of the vertices of $H$ whose entry in the eigenvector $x$ is maximal among the vertices in $H$; it is nonempty by definition. Suppose $V(H)\setminus V_c$ is nonempty. If there were no edge $(i,j)\in E(H)$ with $i\in V_c$ and $j\in V(H)\setminus V_c$, then $V(H)$ could be partitioned into two components with no edges between them, namely $V(H)\setminus V_c$ and $V_c$, which would violate $H$ being a connected component. 2) $dx_i=\sum_{k\in N(i)} x_k$ follows from the eigenvalue equation, as you supposed $d$ is the eigenvalue for $x$. Namely, this is the $i$th entry of the eigenvalue equation $Ax=dx$.
discrete math binary relations - why is set transitive
The only pairs $(a,b)$ and $(b,c)$ that would be useful to refute transitivity of $R\cap S$ would be those where $(a,b),(b,c)\in R\cap S$, but $(a,c)\not\in R\cap S$. In your example, $(2,4)\in R$ but $(2,4)\not\in S$ so $(2,4)\not\in R\cap S$. Similarly, $(4,5)\in S$ but $(4,5)\not\in R$, therefore $(4,5)\not\in R\cap S$. This means those two pairs are not useful in determining (non)-transitivity. And, actually $R\cap S$ is transitive because $R\cap S=\{(3,3),(4,4)\}$. I am also completely at loss about why you claim $R$ and $S$ are transitive in the first place. $(2,4), (4,3)\in R$ but $(2,3)\not\in R$, and, similarly, $(3,4),(4,5)\in S$ but $(3,5)\not\in S$.
Encryption using modular addition and a key
Consider an alphabet of 32 letters: '  ' = 0; 'a' = 1; 'b' = 2; 'c' = 3; . . . 'v' = 22; 'w' = 23; 'x' = 24; 'y' = 25; 'z' = 26; and other symbols - up to 31: '+' = 27; '-' = 28; '(' = 29; ')' = 30; '=' = 31. Now, to encrypt the text is to add the value 27 to each symbol value. Let the plain text be "a+b=c". Digital interpretation: (1,27,2,31,3). Encryption: $1+27 \equiv 28 \mod 32$; $27+27 \equiv 22 \mod 32$; $2+27 \equiv 29 \mod 32$; $31+27 \equiv 26 \mod 32$; $3+27 \equiv 30 \mod 32$; So, the cipher text is $(28,22,29,26,30)$. Or, in letter format: "-v(z)". Edit. If you need to encrypt black/white images like $\Box\Box\Box\blacksquare\Box \;,\; \Box\blacksquare\Box\blacksquare\blacksquare \;,\;$ ... then proceed the same way: 1-st symbol: $00010_2 = 2_{10}$; $\quad$ $2+27\equiv 29 \mod 32$; $\quad$ encrypted : $29_{10}=11101_2$; 2-nd symbol: $01011_2 = 11_{10}$; $\quad$ $11+27\equiv 6 \mod 32$; $\quad$ encrypted : $6_{10}=00110_2$; So, the encrypted image will look like this: $\blacksquare\blacksquare\blacksquare\Box\blacksquare \;,\; \Box\Box\blacksquare\blacksquare\Box \;,\;$... Finally, to decrypt, it is necessary to subtract $27$, or (the same) to add $5$ (because $32-27=5$).
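Here is a small Python sketch of the same scheme (alphabet and key exactly as above), just to see it run:

```python
ALPHABET = " abcdefghijklmnopqrstuvwxyz+-()="   # 32 symbols with values 0..31
KEY = 27

def encrypt(text, key=KEY):
    return "".join(ALPHABET[(ALPHABET.index(ch) + key) % 32] for ch in text)

def decrypt(text, key=KEY):
    return encrypt(text, 32 - key)      # adding 5 undoes adding 27 (mod 32)

c = encrypt("a+b=c")
print(c)             # -v(z)
print(decrypt(c))    # a+b=c
```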
Absolute Value of Complex Integral
We first state the Cauchy-Schwarz inequality for definite integrals: Let $u$ and $v$ be real functions which are continuous on the closed interval $[a,b].$ Then $$\left(\int_{a}^{b} u(t)v(t)\,dt \right)^2 \leq \left(\int_{a}^{b} u(t)^2\,dt \right) \cdot \left(\int_{a}^{b} v(t)^2\,dt \right)$$ Since $F$ is complex-valued, we can write $F(t) = u(t) + iv(t),$ with $u(t), v(t)$ real-valued. Thus $|F(t)|^2 = u^2+v^2.$ Also \begin{align}\left(\int_{a}^{b} F(t)dt \right)^2 = \left(\int_{a}^{b} u(t) + iv(t)\, dt \right)^2. \end{align} Observe that \begin{align*} \left(\int_{a}^{b} u ~dt \right)^2 &= \left(\int_{a}^{b} \dfrac{u}{(u^2+v^2)^{1/4}}(u^2+v^2)^{1/4}dt \right)^2 \\ &\overset{\mathrm{C-S \ ineq.}}{\leq} \left(\int_{a}^{b} \dfrac{u^2}{\sqrt{u^2+v^2}}dt \right)\left(\int_{a}^{b} \sqrt{u^2+v^2} ~dt \right) \end{align*} and \begin{align*} \left(\int_{a}^{b} v ~dt \right)^2 &= \left(\int_{a}^{b} \dfrac{v}{(u^2+v^2)^{1/4}}(u^2+v^2)^{1/4}dt \right)^2 \\ &\overset{\mathrm{C-S \ ineq.}}{\leq} \left(\int_{a}^{b} \dfrac{v^2}{\sqrt{u^2+v^2}}dt \right)\left(\int_{a}^{b} \sqrt{u^2+v^2} ~dt \right). \end{align*} So \begin{align*} \bigg|\int_{a}^{b} F(t) ~dt \bigg|^2 &= \bigg|\int_{a}^{b} u(t) + iv(t) ~dt \bigg|^2 \\ &= \bigg|\int_{a}^{b} u(t) ~dt + i\int_{a}^{b}v(t) ~dt \bigg|^2 \\ &= \left(\int_{a}^{b} u ~dt \right)^2 + \left(\int_{a}^{b} v ~dt \right)^2 \\ &\leq \left(\int_{a}^{b} \dfrac{u^2}{\sqrt{u^2+v^2}}dt \right)\left(\int_{a}^{b} \sqrt{u^2+v^2} ~dt \right) \\ &+ \left(\int_{a}^{b} \dfrac{v^2}{\sqrt{u^2+v^2}}dt \right)\left(\int_{a}^{b} \sqrt{u^2+v^2} ~dt \right) \\ &= \left(\int_{a}^{b} \sqrt{u^2+v^2} ~dt \right)\left(\int_{a}^{b} \dfrac{u^2+v^2}{\sqrt{u^2+v^2}}dt \right) \\ &= \left(\int_{a}^{b} \sqrt{u^2+v^2} ~dt \right)^2 \\ &= \left(\int_{a}^{b} \bigg|F(t)\bigg| ~dt\right)^2 \end{align*} Now, recalling that the modulus is always non-negative, we can take the square root of both sides and we arrive at the result.
Complex line integral
You could parametrise the curve. I think you will then find that the integral is pretty messy. A better approach: the curve in question does not cross the negative part of the real axis (draw a sketch to confirm this). So we can use an antiderivative to integrate: $$\int_\alpha \frac{d\zeta}{\zeta}=[{\rm Log}\,\zeta]_{\rm start}^{\rm end} ={\rm Log}\,(4+4i)-{\rm Log}\,(-4+4i)\ .$$ I'll leave you to calculate the logarithms for yourself. The logarithm I am referring to is the principal complex logarithm defined by $${\rm Log}\,(re^{i\theta})=\ln r+i\theta\ ,\quad -\pi<\theta\le\pi\ .$$
Algorithm for taking square root of a matrix
Find an invertible matrix $B$ and diagonal matrix $D$ such that $D=B^{-1}AB$. Then take all square roots $D_1,...,D_m$ (all of them diagonal, there are $m\le 2^n$ of these where $n$ is the size of $A$ because every non-negative number has at most 2 square roots) of $D$ and all matrices $BD_iB^{-1}$. Look which ones of these matrices are over natural numbers.
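Here is a rough numerical sketch of that procedure (my own, assuming $A$ is diagonalizable with nonnegative eigenvalues; it rounds each candidate and keeps the ones whose square really is $A$):

```python
import numpy as np
from itertools import product

def integer_square_roots(A, tol=1e-8):
    """Candidates S = B diag(+-sqrt(eigenvalue)) B^{-1}; keep those with integer
    entries, then inspect which of them have natural-number entries."""
    w, B = np.linalg.eig(A)
    Binv = np.linalg.inv(B)
    found = []
    for signs in product([1, -1], repeat=len(w)):
        D = np.diag(np.array(signs) * np.sqrt(w.astype(complex)))
        S = np.rint((B @ D @ Binv).real)
        if np.allclose(S @ S, A, atol=tol):
            found.append(S.astype(int))
    return found

A = np.array([[9, 12], [24, 33]])       # equals [[1, 2], [4, 5]] squared
for S in integer_square_roots(A):
    print(S)
```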
Case analysis in theorem 4.1.3 of the HoTT book
The point is that $f$ takes path concatenation to composition of equivalences, so that $f^{-1}$ does the opposite. So you can pull out the $f^{-1}$s and reduce your goal to the obvious fact $\mathrm{id}_2 \circ e = e \circ \mathrm{id}_2$.
Working with piecewise answer in MATLAB
From the online MATLAB R2014b documentation : Access Methods ... expression — Object in a specific branch expression(p, i) Instead of piecewise::expression(p, i), the index operator p[i] can be used synonymously. In your example, p would be your original answer, and i would be 2. You can also access the condition of a branch using condition(p, i), the whole branch by branch(p, i), or the number of branches by numberOfBranches(p).
Question On Fourier Series and Continuous Functions
If it is uniform approximation that you want, then you can use Fejer's Theorem.
For more explanation about the difference between the following equations.
There is no difference, because $$D^{\alpha}(x(t)-x_0)=D^{\alpha}x(t)-D^{\alpha}x_0=D^{\alpha}x(t)$$
Product of Chebyshev polynomials of the second kind?
The sequence $(U_n)$ may be defined by the identity, valid for every $t$ such that $\sin t\ne0$, $$U_n(\cos t)=\frac{\sin((n+1)t)}{\sin t}. $$ Thus, $$ \sin t\cdot\sum_{k=0}^nU_{m-n+2k}(\cos t)=\Im\left(\sum_{k=0}^n\mathrm e^{\mathrm i(m-n+2k+1)t}\right)=\Im(z), $$ where $$ z=\mathrm e^{\mathrm i(m-n+1)t}\sum_{k=0}^n\mathrm e^{2k\mathrm it}=\mathrm e^{\mathrm i(m-n+1)t}\frac{\mathrm e^{2(n+1)\mathrm it}-1}{\mathrm e^{2\mathrm it}-1}=\mathrm e^{\mathrm i(m+1)t}\frac{\sin((n+1)t)}{\sin t}. $$ Thus, $$ \sum_{k=0}^nU_{m-n+2k}(\cos t)=\frac{\sin((m+1)t)}{\sin t}\cdot\frac{\sin((n+1)t)}{\sin t}=U_m(\cos t)\cdot U_n(\cos t). $$
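A symbolic spot-check of the resulting identity $U_m(\cos t)\,U_n(\cos t)=\sum_{k=0}^n U_{m-n+2k}(\cos t)$, viewed as a polynomial identity (assuming SymPy and $m\ge n$):

```python
from sympy import symbols, chebyshevu, expand

x = symbols('x')

def check(m, n):   # assumes m >= n >= 0
    lhs = sum(chebyshevu(m - n + 2 * k, x) for k in range(n + 1))
    return expand(lhs - chebyshevu(m, x) * chebyshevu(n, x)) == 0

print(all(check(m, n) for m in range(7) for n in range(m + 1)))   # True
```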
Conditional pdf $f(y|x,w)$ where $Y = XW$
The Dirac delta impulse function has the property, for constant $c$ and function $g$, that $$\int_\Bbb R \delta_{w-c}\, g(w)\,\mathrm d w= g(c)$$ Also, for non-zero $x$, we have $\delta_{y-xw}=\frac1{|x|}\delta_{w-y/x}$ ... when integrating over $w$ the impulse occurs when $y-xw=0$, which is exactly when $w-y/x=0$, and the scaling of the argument contributes the factor $\frac1{|x|}$. Thus for a positive constant $X$... $$\begin{align}f_{Y\mid X}(y\mid x)&=\mathbf 1_{\small x=X}\,\int_{\Bbb R} \delta_{y-xw}\,\mathcal N(w;0,1^2)\,\mathrm d w\\[1ex]&=\mathbf 1_{\small x=X}\,\frac1x\int_{\Bbb R}\delta_{w-y/x}\,\mathcal N(w;0,1^2)\,\mathrm d w\\[1ex]&=\frac1x\,\mathcal N(y/x;0,1^2)\,\mathbf 1_{\small x=X}\\[2ex]&=\mathcal N(y;0,x^2)\,\mathbf 1_{x=X}\end{align}$$
Given a sequence ${a_k}, k=1,2,3...$, find the generating function for ${a_2k}$
$$G(x) = \frac12 \left [F \left ( \sqrt{x} \right ) + F \left (- \sqrt{x} \right ) \right ] $$ I feel the answer is self-explanatory. As an additional exercise, try computing the G.F. for the sequence $a_1, a_3,\ldots$.
solution of $\int_0^{\infty}x^{-n}e^{-ax}$
For small $x$, the exponential factor is about $1$. Then $$\int_0^\epsilon x^{-n}dx=\left.\frac{x^{1-n}}{1-n}\right|_0^\epsilon.$$ Do you see the problem ?
How can I predict missing values based on existing data?
There are plenty of reasonable ways of predicting values, and you should check which one of them adapts best to the nature of your data. For instance, if it seems to be converging over time, I would start with moving averages. But then again, predicting 5 values over 80 is prone to huge error.
Proof of $\mathcal{L}(V,W)$ is a vector space
You are 100% correct. It's an easy trap to fall into, to only verify the subspace conditions instead of all the many conditions for a full vector space, but just verifying subspace conditions is definitely not enough! I've seen countless students (as well as a few teachers) fall into this trap. Now, there is one potential saving grace here: perhaps it was proven that the set $W^V$ of functions from $V$ to $W$ (linear or not) is a vector space. That is, $W^V$ under the given operations, all of the 8 or so properties of vector spaces were proven previously. Then, proving $\mathcal{L}(V, W)$ is a subspace $W^V$ is a valid way of proving $\mathcal{L}(V, W)$ is a vector space in its own right, as it will inherit almost all the properties from $W^V$. But otherwise, the proof is incorrect.
Prove Identity Function is Isomophic
Well, you can't exactly "choose" what operation $G$ has. It's part of the pack "$G$ is a group", which is actually the short form for "$G$ is a group with operation $\bot$, and we write $\bot(a,b)$ as $ab$". I've used a generic symbol but it could have been any other, this depends on the context. Anyways, the correct way of concluding is to actually take the image of $ab$ under $\epsilon$ (as we probably would, were $\epsilon$ any other map): $$\epsilon(ab) = ab$$ Now how can you rewrite both $a$ and $b$? The answer is that, since $\epsilon$ is the identity, $a=\epsilon(a)$ and $b = \epsilon(b)$. Therefore the last expression is equal to (by transitivity of the equals sign) $\epsilon(a)\epsilon(b)$.
Differential Equations: Solve $(x^2-1){dy\over dx} + 2xy = x$
Your solution is correct. A nice way to get there is to notice that $2x=(x^2-1)^{'}$ and thus rewrite the equation $${d\over dx}\left((x^2-1)y\right)=x$$ And therefore $$(x^2-1)y={x^2+C\over 2}$$ And we get $$y={x^2+C\over 2(x^2-1)}$$
Why is the following matrix NOT diagonalizable
Such a matrix is nilpotent, that is, there is an $n$ such that $A^n=0$. Therefore, if it were diagonalizable, that diagonal matrix would also be nilpotent, since we would have $$0=A^n=(PDP^{-1})^n=PD^nP^{-1}$$ and so $D^n=0$. The only nilpotent diagonal matrix is $0$, but that would imply $A=0$.
Linear independent set of functionals makes certain map surjective
Suppose $(\lambda_1(v),\ldots, \lambda_n(v)) = 0$. Then $\lambda_1(v)=\cdots=\lambda_n(v)=0$. Therefore every linear combination $a_1\lambda_1(v)+\cdots+a_n\lambda_n(v)$ is $0$. If $v$ is in the kernel of every linear combination of $\lambda_1,\ldots,\lambda_n$ then $v$ is in the kernel of every linear functional on $V$, since $\{\lambda_1,\ldots,\lambda_n\}$ spans the whole space of linear functionals on $V$. (Here I'm assuming you've seen it proved somewhere that the dimension of the space of linear functionals on $V$ is the same as the dimension of $V$.) The only vector in $V$ that is in the kernel of every linear functional on $V$ is $0$. (In other words if $0\ne v\in V$ then some linear functional maps $v$ to some nonzero scalar.) Therefore, if $(\lambda_1(v),\ldots, \lambda_n(v)) = 0$, then $v=0$. So the kernel of $v\mapsto(\lambda_1(v),\ldots, \lambda_n(v))$ is trivial.
Automorphisms of $k[x_1,x_2,\dots,x_n]$ that fix $k$
Let $X = (x_1, x_2, \ldots , x_n)$. Automorphisms are polynomial substitutions $f(X) \to f(P(X))$ where $P$ has a polynomial inverse. Structure theory of the automorphism group is complicated for $n \geq 3$. A recent breakthrough was the proof that not all of the group is generated by substitutions $x_i \to ax_i +b$ where $a$ is a constant and $b$ is a polynomial in the other variables. http://www.pnas.org/content/100/22/12561.full For $n=2$ these substitutions (called tame automorphisms) generate the full automorphism group and the structure of the group is explained in Cohn's book on free rings. The structure of the tame subgroup for $n=3$ is determined in http://arxiv.org/abs/math/0607028 This is all very closely connected to the Jacobian conjecture.
Prove that a simple connected graph with exactly 2 cut vertices has a cut edge
What you want to prove is not true. Here is a counterexample: O O O / \ / \ / \ O---O---O---O In this case the two cut vertices happen to be adjacent, but the edge between them is not a cut edge. They don't need to be adjacent, though: O O O / \ / \ / \ O---O O---O \ / O
Proof that $e^x$ is a transcendental function of $x$?
One could use the growth at infinity of the function $f:x\mapsto\mathrm e^x$. Assume that $f$ is algebraic and choose a real number $x\geqslant0$. Then $|f(x)|\geqslant1$ and $$ |c_n(x)|\,|f(x)|^n\leqslant b(x)|f(x)|^{n-1},\qquad b(x)=\sum\limits_{k=0}^{n-1}|c_k(x)|. $$ Hence, for every real number $x\geqslant0$ such that $c_n(x)\ne0$, $|f(x)|\leqslant b(x)/|c_n(x)|$. But indeed, $c_n(x)\ne0$ for every real number $x$ large enough and $b(x)/|c_n(x)|$ can grow at most polynomially when the real number $x$ goes to $+\infty$ while $|f(x)|=\mathrm e^x$ grows... well, exponentially. This is a contradiction.
The probability that random removal of members from a set results in the empty set
For a single element, the probability to survive up to $A_l$ is $(1-p)^l$. Thus $Pr\{|A_l|=0\}=(1-(1-p)^l)^k$. $Pr\{|A_l|\ge 2\}=1-Pr\{|A_l|=0\}-Pr\{|A_l|=1\}=1-(1-(1-p)^l)^k-k(1-p)^l(1-(1-p)^l)^{k-1}$ is a bit more difficult. $Pr\{|A_i|=1\} = k(1-p)^i(1-(1-p)^i)^{k-1}$ and $Pr\{|A_i|=1, |A_{i+1}|=0\} = k(1-p)^i(1-(1-p)^i)^{k-1}p$. Thus the answer is $$\sum_{i=0}^{l-1} k(1-p)^i(1-(1-p)^i)^{k-1}p+ k(1-p)^l(1-(1-p)^l)^{k-1}$$ (The first sum counts the probability that at step $i$ there is one element for the last time; the extra summand accounts for exactly one element at the end of the experiment).
Finding the algebraic elements of $\mathbb{F}_3(x,y)$ over $\mathbb{F}_3$, where $y^2+x^4-x^2+1=0$.
Let $K\subset \mathbb{F}_3(x,y)$ be the field of elements algebraic over $\mathbb{F}_3$. $\left(\frac{x^2+1}{y}\right)^2 +1 = 0$ provides $[K:\mathbb{F}_3]\geq 2$. $y^2 = x^4 -x^2+ 1$ indicates $[\mathbb{F}_3(x,y):\mathbb{F}_3(x)]\leq 2$. Let's assume $[K:\mathbb{F}_3]> 2$. Because there is only one field extension with $[L:\mathbb{F}_3] = 2$, we can choose $u\in K$ with minimal polynomial $f\in\mathbb{F}_3[T]$ and $\deg(f)\geq 3$. Because $x$ is transcendental, $f\in \mathbb{F}_3(x)[T]$ is irreducible, too, therefore $[\mathbb{F}_3(x,u):\mathbb{F}_3(x)]\geq 3$. This contradicts $[\mathbb{F}_3(x,y):\mathbb{F}_3(x)]\leq 2$.
Relating a binary and an integer variable
If you want that $y$ and $z$ take just one value in the range $1,\ldots,10$ $$\lambda_1,\ldots,\lambda_{10} \in\{0,1\}$$ $$\mu_1,\ldots,\mu_{10} \in \{0,1\}$$ $$y=\sum_{i=1}^{10}i\lambda_i$$ $$z=\sum_{i=1}^{10}i\mu_i$$ $$\sum_{i=1}^{10}\lambda_i=1$$ $$\sum_{i=1}^{10}\mu_i=1$$ Then add the incompatible constraints you need. In the specific example you report $$\lambda_3+\mu_7 \leq 1$$ Hope this can help you
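If it helps, here is how that model could be written with PuLP (a sketch; the variable names and the dummy objective are mine):

```python
import pulp

prob = pulp.LpProblem("incompatible_pair", pulp.LpMinimize)
lam = pulp.LpVariable.dicts("lam", range(1, 11), cat="Binary")
mu = pulp.LpVariable.dicts("mu", range(1, 11), cat="Binary")

y = pulp.lpSum(i * lam[i] for i in range(1, 11))
z = pulp.lpSum(i * mu[i] for i in range(1, 11))

prob += pulp.lpSum(lam.values()) == 1   # y takes exactly one value in 1..10
prob += pulp.lpSum(mu.values()) == 1    # z takes exactly one value in 1..10
prob += lam[3] + mu[7] <= 1             # forbid y = 3 together with z = 7
prob += y + z                           # dummy objective, feasibility is what matters

prob.solve()
print(pulp.value(y), pulp.value(z))
```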
Decide whether two lines are parallel
Natural approaches would be to find two vectors $\vec{a}, \vec{b}$ on the lines and then either check $\Big|\frac{\vec{a} \cdot \vec{b}}{|\vec{a}| |\vec{b}|}\Big| > \cos \epsilon$ or $\Big|\frac{\vec{a} \times \vec{b}}{|\vec{a}| |\vec{b}|}\Big| < \sin \epsilon$, where $\cdot$ is the dot product, $\times$ is the cross product and $\epsilon$ is the angle of error.
Complete non compact riemannian manifolds and cylinders
You can answer this by classifying the subgroups of $\operatorname{Aut}(\mathbb{C})$ that act freely and properly discontinuously on $\mathbb{C}$. Alternatively, you can use uniformization and treble's comment. I'll sketch out the subgroup classification. Conformal automorphisms of $\mathbb{C}$ are Möbius transformations that fix $\infty$, rotations and translations. Since $\pi_1M$ acts freely on $\mathbb{C}$, it must act by translations and not rotations, so identify it with a discrete subgroup $\pi$ of $\mathbb{R}^2$. Argue that this subgroup can have no more than two generators, otherwise it wouldn't be discrete in $\mathbb{R}^2$. (If $x,y,z$ generate the group, and $z = ax + by$, then you can find elements of $\mathbb{Z}x\oplus\mathbb{Z}y$ arbitrarily close to elements of $\mathbb{Z}z$.) So now $\pi$ is either $\{0\}$, $\mathbb{Z}x$, or $\mathbb{Z}x+\mathbb{Z}y$ (for some linearly independent $x,y\in\mathbb{R}^2-\{0\}$). You have ruled out $\{0\}$ (since $M$ is not isomorphic to $\mathbb{C}$) and $\mathbb{Z}x+\mathbb{Z}y$ (since $M$ is not compact), so you're left with $\pi = \mathbb{Z}x$. The quotient $\mathbb{C}/\pi$ is a cylinder.
Is $\forall I: I(f)=1$ where $f$ stands for a formula the meaning / definition of a tautology?
There are two notions kicking around here, one syntactic and the other semantic. Syntactic tautologies: $f$ is a theorem of the empty theory, with respect to whatever proof system is being used. Semantic tautologies: $f$ is true in every structure in the language of $f$ (or, every interpretation). In my experience, "tautology" refers primarily to the former notion, while sentences with the latter property are called "validities." You will need to look at whatever text you're using to tell whether the definition of tautology you've given is appropriate. However, by the completeness theorem, these two notions are in fact the same (as long as we're using a reasonable proof system). This is one reason that it's not a big deal that texts sometimes define "tautology" in different ways. ... At least, in the context of first-order logic (or propositional logic, for that matter). The notions above make sense in much more general contexts: "syntactic tautology" makes sense in the context of any proof system, and "semantic tautology" makes sense in the context of any satisfaction notion (even if the structures involved are quite different from what we're used to!). In general these notions may not line up; e.g. second-order logic (with the standard semantics) has no complete proof system.
Probability of picking exactly needed number of useful balls
The chance of getting $1$ is $$\frac {{4 \choose 1}{36 \choose 19}}{40 \choose 20}=\frac {120}{481}$$ The chance of getting $3$ is equal to the chance of getting $1$. If you draw and get $1$ the remaining balls are a draw that would give you $3$.
What is my mistake when evaluating this limit?
I believe l'Hopital's rule works only for indeterminate forms.
Diffeomorphism of open intervals in $\mathbb{R}$ with specified values
Claim: for any finite sequences $x_1<x_2<\dots<x_n$ and $y_1<y_2<\dots<y_n$ there is a $C^\infty$-diffeomorphism $f:\mathbb R\to\mathbb R$ such that $f(x_j)=y_j$ for all $j$. (Your statement follows by considering $a-\varepsilon, a, b, b+\varepsilon$ and $c-\delta,c,d,d+\delta$.) Proof of the claim: Let $$\delta=\frac{1}{3}\min_{1\le j\le n-1}(x_{j+1}-x_{j})$$ Let $g:\mathbb R\to \mathbb R$ be a continuous piecewise affine function with the following properties: strictly increasing; affine on each interval $[x_j-\delta,x_j+\delta]$; $g(x_j)=y_j$ for all $j$. Such a function is easy to find. Next, let $\psi:\mathbb R\to [0,\infty)$ be a function which is $C^\infty$-smooth; even, that is, $\psi(-x)=\psi(x)$; supported on $[-\delta,\delta]$; and satisfies $\int_{\mathbb R} \psi=1$. Such a mollifier is also easy to find. Finally, let $f=g*\psi$ and observe that $f$ is $C^\infty$-smooth; $f'=g'*\psi$ is strictly positive; and $f(x_j)=y_j$, because $g$ is affine in the $\delta$-neighborhood of $x_j$, and the linear term cancels out due to the symmetry of $\psi$.
Is there a simple explanation on the multinomial theorem?
The best way is to connect multinomial coefficients to a certain counting problem - and this can be done very naturally. Note that if we want to calculate an expression such as $$(x_1+x_2+\ldots+x_k)^n$$ we could really just imagine writing $n$ copies of this term side-by-side and then distributing everything we could possibly distribute. For instance, suppose we wanted to calculate $$\newcommand{\x}{{\color{red} x}}\newcommand{\y}{{\color{blue} y}}\newcommand{\z}{{\color{green} z}}(\x+\y+\z)(\x+\y+\z)(\x+\y+\z)$$ where I've now colored the terms for a reason we will soon see. When you distribute, what you are really doing is taking a term from each of the three sums and multiply those terms together - and then summing that up over all possible combinations of three terms. We could, of course, just write out every single possible sequence of three terms from $\{\x,\y,\z\}$ and we would get a correct expression for $(x+y+z)^3$: \begin{align*}&\x\x\x+\x\x\y+\x\x\z+\x\y\x+\x\y\y+\x\y\z+\x\z\x+\x\z\y+\x\z\z\\ +&\y\x\x+\y\x\y+\y\x\z+\y\y\x+\y\y\y+\y\y\z+\y\z\x+\y\z\y+\y\z\z\\ +&\z\x\x+\z\x\y+\z\x\z+\z\y\x+\z\y\y+\z\y\z+\z\z\x+\z\z\y+\z\z\z\end{align*} However, this is not a very efficient way, because we see that some terms are listed multiple times! For instance $\x^2\y =\x\x\y = \x\y\x = \y\x\x$ is listed three times - and $\x\y\z$ is listed six times! So, the question becomes: how many times is $\x^a\y^b\z^c$ listed in the sum resulting from distributing $(\x+\y+\z)^n$? Well, that's the number of ways we can arrange a string of length $n$ from $a$ copies of $\x$ and $b$ copies of $\y$ and $c$ copies of $\z$. Otherwise said: it's the number of ways to color a set of $n$ distinct elements with three colors, specifying how many are to be red, green, and blue. How might we calculate that quantity? Well, one approach is to simply define the multinomial coefficient to calculate that. A more useful approach is to think of a procedure for generating all such colorings. As an example to generalize from, suppose we wished to calculate how many ways we could arrange four terms, so that two of them were $\x$ and one each was $\y$ and $\z$. We could imagine that we start with an empty string consisting of four empty spaces, which we'll refer to as positions one through four:$$\cdot\cdot\cdot\,\cdot$$ We know that we first need to fill two of the positions with red $\x$'s, so we'll choose an empty position and put an $\x$ in it. There are $4$ ways to do this. $$\cdot\cdot\x\,\cdot$$ Now we need another red $\x$ somewhere. There are $3$ places to put it - so let's choose one. $$\x\cdot \x\,\cdot$$ Next, we want to put a blue $\y$ somewhere and we have $2$ choices $$\x\y \x\,\cdot$$ Then, finally, in the remaining space, we must put a green $\z$ $$\x\y \x\z$$ Essentially, our process is that we pick an exhaustive sequence of positions and greedily fill them by the first color that we don't yet have enough of. There are $4!$ choices total in this process, but some lead to the same solution - for instance, we could have started by putting an $\x$ in the first position and then put one in the third position. In general, there are two ways to reach any given sequence of two $\x$'s, one $\y$ and one $\z$, since we can choose in which order to place the $\x$'s - hence there will be $\frac{4!}2 = 12$ sequences with the given number of each symbol. 
More broadly, if we wanted to do this process to generate $a$ red symbols, $b$ blue symbols and $c$ green symbols, we would have to (according to the process) place the red ones first, then the blue ones, then the green ones, but it wouldn't matter in which order we placed symbols within each group - hence each sequence with the desired counts of symbols would be generated by $a!b!c!$ processes of choosing one symbol at a time. If we have $n=a+b+c$, then there are $n!$ ways to pick one element at a time; this gives a total of $\frac{n!}{a!b!c!}={n\choose a,b,c}$ sequences with the desired outcome. But remember! We're really counting the number of terms in the expansion of $(x+y+z)^{n}$ that reduce to $\x^a\y^b\z^c$ - so this would exactly be the coefficient of $\x^a\y^b\z^c$ in the expansion of $(x+y+z)^n$. From this, generalizing to sums of arbitrarily many variables is simply a matter of adding more colors - and all the reasoning works out likewise to show that $$(x_1+x_2+\ldots+x_k)^n = \sum_{\substack{a_1,a_2,\ldots,a_k\\a_1+a_2+\ldots+a_k=n}}{n \choose a_1,a_2,\ldots,a_k}x_1^{a_1}x_2^{a_2}\ldots x_k^{a_k}$$ where ${n \choose a_1,a_2,\ldots,a_k} = \frac{n!}{a_1!a_2!\ldots a_k!}$. Note: this argument can be made rigorous in a fairly straightforward manner, but it very quickly runs into notational difficulty which would obscure the intuition (although it's not as bad as trying to put together an inductive argument, as students are sometimes asked to do!). The important lemma here is that if $I_1,\ldots,I_k$ are finite sets used as indices to a sum and for each $i\in I_j$ there is a value $v_i$, we have $$\prod_{j=1}^k\sum_{i\in I_j}v_i=\sum_{(i_1,\ldots,i_k)\in I_1\times \ldots \times I_k}\prod_{j=1}^k v_{i_j}$$ which is a rather opaque equation to come across if not explained well! What it says is that taking a product over a sum is the same as summing up all the possible products of terms from the sums. The rest of the argument is then looking at the set $I_1\times \ldots \times I_k$ which represents a choice of term for each sum in the product, and dividing it up based on the value of the inner product $\prod_{j=1}^k v_{i_j}$. The most literal translation of the argument above is that, since we can regard $I_1=I_2=\ldots=I_k$ since all the sums in the product are the same, we can, given the counts of appearances of each $i\in I_1$ we want in the tuple $(i_1,\ldots,i_k)$, essentially construct a function which takes a permutation of the indices $(1,\ldots,k)$ to the subset of $I_1\times \ldots \times I_k$ having the appropriate counts, and then calculate how many permutations map to each such tuple. It's easy to see how a proof involving so many indices could easily turn into an unreadable mess - and how it might involve a fair breadth of somewhat distant concepts to make things worse - especially if the author doesn't wish to use ellipses as I have done in this sketch.
Proof of Urysohn's lemma (or what my teacher called with that name)
Let $Y=X\cup\{\infty\}$ be the $1$-point compactification of $X$, and apply the usual Urysohn's lemma on $Y$ to $K$ and $Y\setminus V$. Since $K$ is not just closed but compact, it is still closed in $Y$.
What is the number of rooted planar decreasing trees on n vertices?
Let's call these ordered decreasing trees as the term planar is also used for the cyclic group acting at the root. Now we have the following recursive combinatorial construction. To assemble one of these we need a root node, which receives the label $n$ and an ordered sequence of subtrees, each of some size ranging from one to $n-1$ with a total of $n-1$ nodes (composition into one to $n-1$ parts). We then partition the remaining $n-1$ labels into an ordered sequence of sets, one for each subtree, having the matching number of labels. The key observation here is that these subtrees say of some size $q$ correspond bijectively to ordered decreasing trees on $q$ nodes where the elements of the set of labels for these subtrees are placed according to the ordering induced by the source tree, which is ordered and decreasing and has labels from $1$ to $q.$ E.g. if we select the labels $4,7,11$ for one of the subtrees then $4$ will replace $1$, $7$ will replace $2$ and $11$ will replace $3$ in the source tree that is being attached recursively. At this point we win because this is the canonical construction that supports all cartesian products of exponential generating functions. This yields for $n\ge 2$ (we have $T_1=1$) the recursive relation $$T_n = \sum_{k=1}^{n-1} \sum_{q_1+q_2+\cdots+q_k = n-1} {n-1\choose q_1, q_2, \ldots, q_k} \prod_{p=1}^k T_{q_p}.$$ These are standard compositions with no zero elements. We introduce the EGF as promised and obtain for $$T(z) = \sum_{q\ge 1} T_q \frac{z^q}{q!}$$ with $n\ge 2$ the relation $$n! [z^n] T(z) = (n-1)! [z^{n-1}] \frac{T(z)}{1-T(z)} \quad\text{or}\quad n [z^n] T(z) = [z^{n-1}] \frac{T(z)}{1-T(z)}$$ which yields $$[z^{n-1}] T'(z) = [z^{n-1}] \frac{T(z)}{1-T(z)}.$$ Multiply by $z^{n-1}$ and sum over $n\ge 2$ to get $$\sum_{n\ge 2} z^{n-1} [z^{n-1}] T'(z) = \sum_{n\ge 2} z^{n-1} [z^{n-1}] \frac{T(z)}{1-T(z)}.$$ Now $T'(z)$ has a constant coefficient which we miss on the left while $T(z)/(1-T(z))$ does not and we find $$T'(z) - 1 = \frac{T(z)}{1-T(z)}$$ so that $$\bbox[5px,border:2px solid #00A000]{ T'(z) = \frac{1}{1-T(z)}.}$$ Solving this by separation of variables we get $$-\frac{1}{2} (1 - T(z))^2 = z + C_1 \quad\text{or}\quad T(z) = 1 - \sqrt{C_2-2z}.$$ Since $T(z)$ has no constant coefficient we obtain $$\bbox[5px,border:2px solid #00A000]{ T(z) = 1 - \sqrt{1-2z}.}$$ Extracting coefficients from this we conclude with (again for $n\ge 2$) $$n! [z^n] T(z) = - n! {1/2\choose n} (-1)^n 2^n = - (1/2)^{\underline n} (-1)^n 2^n \\ = (-1)^{n+1} 2^n \prod_{p=0}^{n-1} (1/2-p) = (-1)^{n+1} \prod_{p=0}^{n-1} (1-2p) = (-1)^{n-1} \prod_{p=1}^{n-1} (1-2p) \\ = \prod_{p=1}^{n-1} (2p-1)$$ which is $$\bbox[5px,border:2px solid #00A000]{ 1\times 3\times\cdots\times (2n-3)}$$ and we have the claim. Readings. This set of notes by M. Drmota has the elementary combinatorial argument for this as well as some additional generating functions. We also find OEIS A001147 which offers a considerable number of references. There is additional material on page 531 of Flajolet / Sedgewick (page number refers to PDF). Here is how I approached these concepts. with(combinat); T := proc(n) option remember; local k, comp, res; if n=1 then return 1 fi; res := 0; for k to n-1 do for comp in composition(n-1, k) do res := res + (n-1)!*mul(T(q)/q!, q in comp); od; od; res; end;
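If you want to verify the recurrence and the closed form numerically, here is a small Python re-implementation of the Maple snippet above:

```python
from functools import lru_cache
from math import factorial

def compositions(m):
    """Ordered sequences of positive integers summing to m."""
    if m == 0:
        yield ()
        return
    for first in range(1, m + 1):
        for rest in compositions(m - first):
            yield (first,) + rest

@lru_cache(maxsize=None)
def T(n):
    if n == 1:
        return 1
    total = 0
    for comp in compositions(n - 1):
        ways, prod = factorial(n - 1), 1      # multinomial coefficient, product of subtrees
        for q in comp:
            ways //= factorial(q)
            prod *= T(q)
        total += ways * prod
    return total

def odd_double_factorial(n):                  # 1 * 3 * ... * (2n - 3)
    out = 1
    for p in range(1, n):
        out *= 2 * p - 1
    return out

print(all(T(n) == odd_double_factorial(n) for n in range(2, 9)))   # True
```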
difficulty in solving the total differential equation $cos(2x+2y+z)(dx+dy)+ cos(x+y)cos(x+y+z)dz =0.$
By using the cosine product identity $\cos(x+y)\cos(x+y+z)=\frac12\big[\cos(z)+\cos(2x+2y+z)\big]$, we can rewrite the equation as $$\cos(2x +2y +z)\left[dx +dy +\tfrac{1}{2}\,dz\right] + \tfrac{1}{2}\cos(z)\,dz = 0.$$ Can you solve it from here?
Arc length in polar coordinates: Why isn't $dS=r×d\theta$
Your first formula is incorrect. For instance, consider the line $y=x$ and you want to get the length of the line segment from $x=0$ to $x=1$. The length is $\sqrt2$, and the equation in polar coordinates is $\theta=\dfrac{\pi}4$. If we use the first formula, we get the length to be $0$. In terms of $x$, $y$, we have $$S = \int \sqrt{(dx)^2+(dy)^2} = \int \sqrt{1+y'^2} dx$$ Setting $x=r \cos(t)$ and $y=r\sin(t)$, we get $$dx = dr \cos(t) - r \sin(t)dt$$ and $$dy = dr \sin(t) + r \cos(t)dt$$ Hence, $$(dx)^2 + (dy)^2 = (dr)^2 + (rdt)^2$$
Rewrite $(\det A)^{1/n}=\min\left\{\frac{\mathrm{tr}(AC)}{n}:C \in \Bbb{C}^{n×n},C>0,\det C=1\right\}$ in terms of $\frac{\rm{tr}(CAC)}{n}$
The statement is true if $A$ is positive definite or $A=0$. Statement $(2)$ can be easily seen to be equivalent to statement $(1)$, because when $C>0$, $\operatorname{tr}(AC)=\operatorname{tr}(AC^{1/2}C^{1/2})=\operatorname{tr}(C^{1/2}AC^{1/2})$ and $\det(C)=1$ if and only if $\det(C^{1/2})=1$. In other words, the $C$ in $(2)$ is just the square root of the $C$ in $(1)$. When $A$ is nonzero, singular and positive semidefinite, since $C^{1/2}AC^{1/2}$ is congruent to $A$, $\operatorname{tr}(AC)=\operatorname{tr}(C^{1/2}AC^{1/2})$ is always positive. Therefore both statements $(1)$ and $(2)$ are false, but they can be corrected by taking infima instead of minima (which do not exist).
Show the subset $A$ of $\mathbb{R}^n$ is compact
Let $\{a_i\}$ be a sequence in $A$ which converges in $\mathbb{R}^n$. Let $a$ be the limit of this sequence. If we can show that $a \in A$, then we have shown that $A$ is closed (in fact sequentially closed, but this is enough as we are in a metric space). Letting $a_i = (a_i^1, \dots, a_i^n)$ and $a = (a^1, \dots, a^n)$, as $a_i \to a$, we know that $a_i^j \to a^j$ for $j = 1, \dots, n$. As $a_i \in A$, we have $$-1 \leq a_i^1 \leq \dots \leq a_i^n \leq 1$$ for each $i$. Now take the limit as $i \to \infty$ and use the fact that if $x_i \leq y_i$ for all $i$, then $\lim\limits_{i\to\infty}x_i \leq\lim\limits_{i\to\infty}y_i$. This will allow you to conclude that $A$ is closed. As $A$ is contained in the cube $[-1,1]^n$, $A$ is bounded. Therefore, by the Heine-Borel Theorem, $A$ is compact. The second part of your question has been answered in the comments.
Given the function $f(x,y)=\frac{x^3-y^3}{x^2-y^2}$, determine whether the $\lim \limits_{(x,y) \rightarrow (a,a)} f(x,y)$ exists or not.
HINT: For $x\not=y$ $$\frac{x^3-y^3}{x^2-y^2}=\frac{x^2+xy+y^2}{x+y}$$ Now taking $\lim_{(x,y)\to(a,a)}$ should be easy because the function is continuous.
What holds for invertible matrices?
It seems correct. Indeed:
1. $A+A^{-1}$ is invertible: just consider $A=\begin{bmatrix}0 & -1 \\ 1 & 0\end{bmatrix}$, for which $A+A^{-1}=0$.
2. $(ABA^{-1})^7 = AB^7A^{-1}$: true by inspection.
3. $(A+B)(A-B) = A^2-B^2$: note that $(A+B)(A-B)=A^2-AB+BA-B^2$, and see 6.
4. $(A+A^{-1})^8=A^8+A^{-8}$: note that $(A+A^{-1})^2=A^2+2I+A^{-2}\ldots$
5. $A^9$ is invertible: $A^9A^{-9}=I$.
6. $AB=BA$: just consider $A=\begin{bmatrix}0 & 1 \\ 1 & 0\end{bmatrix}$, $B=\begin{bmatrix}0 & -1 \\ 1 & 0\end{bmatrix}$.
Elementary proof for $\lim_{n\to \infty}\sqrt[2n-1]{\dfrac{1}{2^n}}=\dfrac1{\sqrt 2}$
With $m=2n-1$, the expression is $$\left(2^{-(m+1)/2}\right)^{1/m}=2^{-1/2}2^{-1/2m}.$$ Clearly the second factor tends to $1$. If this needs to be proven, let $2^{1/2m}=1+\epsilon>1$. Then by the binomial theorem, $$(1+\epsilon)^{2m}=2=1+2m\epsilon+\cdots$$ (all terms are positive) and $2m\epsilon<1$. Hence $$1<2^{1/2m}<1+\frac1{2m}$$ which squeezes to $1$.
Need help understanding graph theory question
$$r-1=e-v+1=(2p+3l)-(p+2l)+1=p+l+1$$ You can tell $p-l+1$ is wrong if you draw multiple lines without intersection. Number of regions cannot be $\le 0$.
Finding set of possible values of a constant in a function definition.
When you talk of $y$, presumably you mean $f(x)$. Since $f$ is continuous, to avoid having $f(x)=x$ for some $x$ you must either have $f(x)\gt x$ for all $x$ or have $f(x) \lt x$ for all $x$. This explicitly keeps $f(x)$ from meeting $y=x$. You can set $f(x)=x$ and find what values of $m$ make it impossible to satisfy. The condition that $f(x) \ne f^{-1}(x)$ is covered, as you must have $f(x)\gt x \gt f^{-1}(x)$ or the reverse.
L'hopital rule fails with limits to infinity?
The rule of L'Hospital states that the limit of $\dfrac fg$ equals that of $\dfrac{f'}{g'}$ if the latter exists. You precisely found a case where this does not hold. We can simplify the example as $$\lim_{n\to\infty}\frac{n+\sin n}n=1$$ but $$\lim_{n\to\infty}\frac{1+\cos n}1$$ is undefined.
What is the derivative of $f(x)=e^{f^{\prime \prime}}$
Such kind of problems usually ask for the function $f$ itself (of course, then its derivative can be calculated, too). And, such is called a Differential equation.
How to find presentation of $G'$ for $G=\langle a,b\ |\ a^4,b^4,a^2=b^2 \rangle$.
Your group $G$ is presented as the amalgamated product $\mathbb{Z}_4*_{\mathbb{Z}_2}\mathbb{Z}_4$ which is a central extension of order $2$ of the infinite dihedral group $\mathbb Z_2*\mathbb Z_2$. The latter is isomorphic to $\mathbb Z \rtimes \mathbb Z_2$ which is clearly metabelian. A central extension of a metabelian group is metabelian. The commutator subgroup of $\mathbb Z_2*\mathbb Z_2$ is cyclic and generated by $ABAB$, where $A$ and $B$ are the generators of the two free factors and are the images of $a$ and $b$ under the quotient of $G$ by $\langle a^2\rangle$. The element $ABAB$ lifts in $G$ to the commutator $[a,b]=[a,b^{-1}]$ which generates the commutator subgroup $G'$ (this is easy to check). Whence $G'$ is infinite cyclic and generated for example by $[a,b]$.
Stochastic integral with respect to a stochastic integral
Yes, this is abuse of notation. One way to prove this statement rigorously is to use the characterization of the stochastic integral by its (predictable) covariation: Theorem Let $X$ a square integrable martingale and $f$ predictable such that $$\mathbb{E} \left( \int_0^{\infty} f^2 \, d\langle X \rangle_s \right) < \infty.$$ Then for any square integrable martingale $Y$, we have $$\langle \int_0^{\cdot} f \, dX, Y \rangle_t = \int_0^t f(s) \, d\langle X,Y \rangle_s \tag{1}.$$ On the other hand, if for some square integrable martingale $I$ the equality $$\langle I,Y \rangle_t = \int_0^t f(s) \, d\langle X,Y\rangle_s \tag{2}$$ holds for all $Y$, then $$I_t-I_0 = \int_0^t f(s) \, dX_s.$$ This means that, in view of $(2)$, we have to prove that $L$ satisfies $$\langle L,Y \rangle_t = \int_0^t H_s K_s \, d\langle M,Y\rangle_s$$ for any square integrable martingale $Y$. This follows from $(1)$: $$\begin{align*} \langle L,Y \rangle_t &\stackrel{(1)}{=} \int_0^t K_s \, d\langle N,Y \rangle_s \stackrel{(1)}{=} \int_0^t K_s H_s \, d\langle M,Y \rangle_s. \end{align*}$$ This finishes the proof. Note that we have to assume that $K$ is suitable integrable; otherwise the integral $\int_0^t K_s dN_s$ might not exist.
Suppose $0<a<1$. Show $\{a^n\}$decreasing, converges.
It needs way more explanation. If you want to check convergence formally, you need to show $$∀ε > 0~∃ N ∈ ℕ~∀n > N~\lvert a^n \rvert < ε,$$ since the limit of $(a^n)_n$ for $\lvert a \rvert < 1$ is always zero as you can tell by inspecting some well-known examples (so set $L = 0$ in your formulation). You wrote “$\lvert a^n - L\rvert ≤ 1/an - L$”. Do you mean “$\frac 1 {an}$” or “$\frac 1 a n$”? Probably the former. But how did you conclude that? This is not clear at all. The usual way of showing the convergence of the geometric sequence is a two-step process: Show that a sequence $(a_n)_n$ of positive real numbers converges to zero if and only if its reciprocal sequence $(1/a_n)_n$ increases without bounds, i. e. $$∀κ > 0~∃N∈ℕ~∀n > N~a_n > κ.$$ Show that for $b > 1$, $(b^n)_n$ increases without bounds by invoking Bernoulli’s Inequality. For the decreasing part: See John Ma’s comment (use $a^n > 0$ and $a < 1$ to conclude $a^n·a < a^n·1$ using the order axioms on $ℝ$).
Why am I getting derivative of $y = 1/x$ function as $0$?
Do not remove $(dx)^2$: $$\frac{\frac{1}{x+dx} - \frac{1}{x}}{dx}=\frac{x-(x+dx)}{(x+dx)xdx}=-\frac{1}{x(x+dx)} \to -\frac{1}{x^2}$$
Is there a way to show that $\int x^ndx = \frac{x^{n+1}}{n+1}$ using induction?
I'm not sure what you mean by induction; here is my interpretation. Assume that $\int x^{n}dx = \frac{x^{n+1}}{n+1} $ (forget about the constant). Then we want to prove that $\int x^{n+1}dx = \frac{x^{n+2}}{n+2}$. Integrating by parts: $$ \int x^{n+1} dx= \int x\cdot x^{n} dx = x\frac{x^{n+1}}{n+1} - \frac{1}{n+1}\int x^{n+1}dx $$ so $$ \frac{n+2}{n+1}\int x^{n+1} dx =\frac{x^{n+2}}{n+1} \implies \int x^{n+1}dx = \frac{x^{n+2}}{n+2}$$
Is $f(x)=|x\sin x|$ positive and not bounded function
You can't say that $f$ is positive, as it frequently takes the value $0$ (when $x_k = k\cdot\pi$ for $k\in \Bbb Z$), so you can only say that $f$ is non-negative. $f$ is unbounded since for $y_k = \frac\pi2+k\cdot\pi$ you have $f(y_k) = y_k$ as $|\sin y_k| =1$. I am not sure what you meant by saying "and doesn't have a point that converges to infinity", but what can be said is that the limit $\lim _{x\to\infty}f(x)$ does not exist. Just consider the values of $f$ over the sequences $x_k$ and $y_k$. Even worse (better?): for any $a\in [0,\infty)$ there exists a sequence $a_n$ such that $\lim_{n\to\infty}f(a_n) = a.$
Why did no student correctly find a pair of $2\times 2$ matrices with the same determinant and trace that are not similar?
If $A$ is a $2\times 2$ matrix with determinant $d$ and trace $t$, then the characteristic polynomial of $A$ is $x^2-tx+d$. If this polynomial has distinct roots (over $\mathbb{C}$), then $A$ has distinct eigenvalues and hence is diagonalizable (over $\mathbb{C}$). In particular, if $d$ and $t$ are such that the characteristic polynomial has distinct roots, then any other $B$ with the same determinant and trace is similar to $A$, since they are diagonalizable with the same eigenvalues. So to give a correct example in part (2), you need $x^2-tx+d$ to have a double root, which happens only when the discriminant $t^2-4d$ is $0$. If you choose the matrix $A$ (or the values of $t$ and $d$) "at random" in any reasonable way, then $t^2-4d$ will usually not be $0$. (For instance, if you choose $A$'s entries uniformly from some interval, then $t^2-4d$ will be nonzero with probability $1$, since the vanishing set in $\mathbb{R}^n$ of any nonzero polynomial in $n$ variables has Lebesgue measure $0$.) Assuming that students did something like pick $A$ "at random" and then built $B$ to have the same trace and determinant, this would explain why none of them found a correct example. Note that this is very much special to $2\times 2$ matrices. In higher dimensions, the determinant and trace do not determine the characteristic polynomial (they just give two of the coefficients), and so if you pick two matrices with the same determinant and trace they will typically have different characteristic polynomials and not be similar.
Differentiation; Maxima and Minima
I assume the variable is $x$, and $n$ is any real constant (an integer will do, but the following is valid for any real $n$). If $x$ is free to take any value in $]-\infty,0[\,\cup\,]0,+\infty[$ (as $y$ is not defined for $x=0$), then there is no minimum value. Just pick an arbitrarily small positive number $\epsilon$ and let $x=-\epsilon$; then $y$ takes on arbitrarily large negative values. There is no infimum in the reals, nor a minimum, since the set of values of $y$ has no lower bound. If $x$ is restricted to positive values, that is $x>0$, then there is no minimum either, but there is an infimum: let $x$ be an arbitrarily large positive number, then $y$ takes on arbitrarily small positive values but never reaches $0$, so the inf is zero; it is not attained, so the minimum does not exist. Last, there is no maximum. Let $x=\epsilon$; then $y$ takes on arbitrarily large positive values. There is no supremum in the reals, nor a maximum, since the set of values of $y$ has no upper bound. Worth reading: https://en.wikipedia.org/wiki/Infimum_and_supremum
Question about commutator of groups.
Define $[g,h] = ghg^{-1}h^{-1}$. Let $g \in G$, $h,k \in H$. We need to prove that $h^{-1}k^{-1}hk$ commutes with $g$, which is equivalent to $[hk,g] = [kh,g]$. Now $$ hgh^{-1}g^{-1}kgk^{-1}g^{-1} = hgh^{-1}(g^{-1}kgk^{-1})g^{-1} = hg(g^{-1}kgk^{-1})h^{-1}g^{-1} = [hk,g]$$ but also $$ hgh^{-1}g^{-1}kgk^{-1}g^{-1} = (hgh^{-1}g^{-1})kgk^{-1}g^{-1} = k(hgh^{-1}g^{-1})gk^{-1}g^{-1}= [kh,g].$$
Finding the actual vector
As I indicated in the comments, we get $\begin{pmatrix}-1&0&3\\0&0&-2\\1&1/2&0\end{pmatrix}\begin{pmatrix}5\\-4\\1\end{pmatrix}=\begin{pmatrix}-2\\-2\\3\end{pmatrix}$.
Let $a$, $b$ and $c$ be integers. Prove that if $a^2 \mid b$ and $b^3 \mid c$, then $a^4b^5 \mid c^3$.
Using your notations: $a^2 k_1 = b$ and $b^3 k_2 = c$ for some $k_1, k_2 \in \mathbb{Z}$. Then $$c^3=b^9k_2^3=b^4b^5k_2^3=a^8k_1^4b^5k_2^3=a^4b^5\left(a^4k_1^4k_2^3\right),$$ and since $a^4k_1^4k_2^3$ is an integer, $a^4b^5 \mid c^3$.
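As a quick numerical sanity check (with numbers of my own choosing, not from the question): take $a=2$, $b=8$, $c=512$. Then $a^2=4 \mid 8$ and $b^3=512 \mid 512$, while $c^3=2^{27}$ and $a^4b^5=2^4\cdot 2^{15}=2^{19}$, which indeed divides $2^{27}$.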
Would Relational Calculus be Turing-Complete if it Allowed Unsafe Queries?
I don't know if this is correct, but I have a hypothesis. If we allowed free variables in the formula, then we could think of queries as functions. For example, consider the following query: { <A, B, C> | <A, B, C> \in R and A = x } If the above query has a result M, we can consider that to be a function of the form lambda x.M. If we allow relations to be free variables, we can define a function of the form lambda R.R. Now, if we allow "higher-order queries", i.e. queries that can query queries, we can represent the Church numerals (with q being a query):
0 = lambda qR.R
1 = lambda qR.q R
2 = lambda qR.q (q R)
3 = lambda qR.q (q (q R))
Therefore, I think that relational calculus can also be a lambda calculus, and therefore be Turing-complete. Can anyone tell me if I'm on the right path?
How do I prove that $\cos{\frac{2\pi}{7}}\notin\mathbb{Q}$?
If $\cos \frac{2 \pi}{7}$ were rational, then $i\sin \frac{2 \pi}{7}=\sqrt{\cos^2 \frac{2 \pi}{7}-1}$ would lie in an extension of $\mathbb{Q}$ of degree at most $2$, and hence so would $\cos \frac{2 \pi}{7}+i\sin \frac{2 \pi}{7}$. But $\cos \frac{2 \pi}{7}+i\sin \frac{2 \pi}{7}$ is a primitive $7$th root of $1$, and so has minimal polynomial $x^6+x^5+x^4+x^3+x^2+x+1$, which has degree $6$, not at most $2$.
Integral with Euler-Mascheroni constant
As already mentioned by TheSimpliFire, you cannot simply change from real to complex variables without further justification within your solution. I will present a solution which does not rely on complex analysis at all. Recently, reading this post dealing with related integrals, I finally found a way to evaluate your integral. The crucial ingredient we need is precisely the Mellin Transform of the sine function, which is given by $$\mathcal M_s\{\sin(x)\}~=~\int_0^\infty x^{s-1}\sin(x)\mathrm dx~=~\Gamma(s)\sin\left(\frac{\pi s}2\right)\tag1$$ Here $\Gamma(z)$ denotes the Gamma Function. There are different possible ways to show this relation; for myself I prefer using Ramanujan's Master Theorem, as it is done here for example (just substitute $s$ by $-s$), but I will leave this out for now since it is not of our concern. Note that we can use a variation of Feynman's Trick, i.e. Differentiation under the Integral Sign. Before applying this technique we may rewrite the RHS of $(1)$ in the following way $$\Gamma(s)\sin\left(\frac{\pi s}2\right)=\Gamma(s)\sin\left(\frac{\pi s}2\right)\frac{2\cos\left(\frac{\pi s}2\right)}{2\cos\left(\frac{\pi s}2\right)}=\frac1{{2\cos\left(\frac{\pi s}2\right)}}\Gamma(s)\sin(\pi s)=\frac\pi2\frac1{\Gamma(1-s)\cos\left(\frac{\pi s}2\right)}$$ Here we used Euler's Reflection Formula. Even though the new form seems to be more complicated in the context of taking derivatives, it actually prevents us from running into indeterminate expressions which are harder to deal with. Anyway, differentiating w.r.t. $s$ leads us to \begin{align*} \frac{\mathrm d}{\mathrm ds}\int_0^\infty x^{s-1}\sin(x)\mathrm dx&=\frac{\mathrm d}{\mathrm ds}\frac\pi2\frac1{\Gamma(1-s)\cos\left(\frac{\pi s}2\right)}\\ \int_0^\infty \frac{\partial}{\partial s}x^{s-1}\sin(x)\mathrm dx&=\frac\pi2\frac{\mathrm d}{\mathrm ds}\frac1{\Gamma(1-s)\cos\left(\frac{\pi s}2\right)}\\ \int_0^\infty x^{s-1}\log(x)\sin(x)\mathrm dx&=\frac\pi2\left[\frac1{\cos\left(\frac{\pi s}2\right)}\frac{-(-1)\Gamma'(1-s)}{\Gamma^2(1-s)}+\frac1{\Gamma(1-s)}\frac{-\left(-\frac\pi2\right)\sin\left(\frac{\pi s}2\right)}{\cos^2\left(\frac{\pi s}2\right)}\right]\\ \int_0^\infty x^{s-1}\log(x)\sin(x)\mathrm dx&=\frac\pi2\left[\frac1{\cos\left(\frac{\pi s}2\right)}\frac{\psi^{(0)}(1-s)}{\Gamma(1-s)}+\frac\pi2\frac1{\Gamma(1-s)}\frac{\sin\left(\frac{\pi s}2\right)}{\cos^2\left(\frac{\pi s}2\right)}\right] \end{align*} Now we are basically done. Since every occurring term is defined at $s=0$, we can simply plug in this value. Utilizing that the Digamma Function $\psi^{(0)}(z)$ satisfies $\psi^{(0)}(1)=-\gamma$, where $\gamma$ is the Euler-Mascheroni Constant, we can deduce that $$\int_0^\infty x^{0-1}\log(x)\sin(x)\mathrm dx=\frac\pi2\left[\underbrace{\frac1{\cos\left(\frac{\pi\cdot0}2\right)}\frac{\psi^{(0)}(1-0)}{\Gamma(1-0)}}_{=-\gamma}+\underbrace{\frac\pi2\frac1{\Gamma(1-0)}\frac{\sin\left(\frac{\pi\cdot0}2\right)}{\cos^2\left(\frac{\pi\cdot0}2\right)}}_{=0}\right]$$ $$\therefore~\int_0^\infty \frac{\log(x)\sin(x)}x\mathrm dx~=~-\frac{\gamma\pi}2$$
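A quick numerical sanity check (not part of the derivation above; this is my own snippet, assumes SciPy is available, and splits the integral at $x=1$ so the oscillatory tail can be handled by SciPy's dedicated Fourier-weight routine):

    import numpy as np
    from scipy.integrate import quad

    gamma = 0.5772156649015329  # Euler-Mascheroni constant

    # head: ordinary quadrature on [0, 1]; tail: QAWF routine via weight='sin' on [1, inf)
    head, _ = quad(lambda x: np.log(x) * np.sin(x) / x, 0, 1)
    tail, _ = quad(lambda x: np.log(x) / x, 1, np.inf, weight='sin', wvar=1)

    print(head + tail)         # approximately -0.9069
    print(-gamma * np.pi / 2)  # approximately -0.9069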
How do I find the point equidistant from three points $(x, y, z)$ and belonging to the plane $x-y+3z=0?$
In $\Delta ABC$, \begin{align*} a &= BC \\ &= \sqrt{5} \\ b &= CA \\ &= \sqrt{3} \\ c &= AB \\ &= \sqrt{2} \\ a^2 &= b^2+c^2 \\ \angle A &= 90^{\circ} \\ O &= \frac{B+C}{2} \tag{circumcentre of $\Delta ABC$} \\ &= \left( 1,0,\frac{3}{2} \right) \\ \vec{AB} \times \vec{AC} &= (1, -1, 0) \times (-1, -1, 1) \\ &= (-1,-1,-2) \end{align*} The set of points equidistant from $A$, $B$ and $C$ is the line through the circumcentre $O$ perpendicular to the plane $ABC$: $$\mathbf{r}=\left( 1,0,\frac{3}{2} \right)+t(-1,-1,-2)$$ Substitute into $x-y+3z=0$: \begin{align*} (1-t)-(-t)+3\left( \frac{3}{2}-2t \right) &= 0 \\ t &= \frac{11}{12} \\ (x,y,z) &= \left( \frac{1}{12},-\frac{11}{12},-\frac{1}{3} \right) \end{align*}
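Since the original problem statement is not reproduced above, note that the vectors used are consistent with $A=(1,1,1)$, $B=(2,0,1)$, $C=(0,0,2)$ (an assumption reconstructed from $\vec{AB}$, $\vec{AC}$ and $O$, so treat it as such). With those coordinates, a few lines of Python confirm the answer:

    import numpy as np

    A, B, C = np.array([1., 1., 1.]), np.array([2., 0., 1.]), np.array([0., 0., 2.])
    P = np.array([1/12, -11/12, -1/3])

    print([round(np.linalg.norm(P - X), 6) for X in (A, B, C)])  # three equal distances
    print(P[0] - P[1] + 3*P[2])                                  # essentially 0: P lies on the plane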
connection laplacian on general vector bundles
The answer to both of your questions is yes, and I think that it works completely analogously to what you describe in the first part of your question. More specifically, the idea of the trace is the same one that you discuss in the first part of your question. The "$E$-Hessian" $\nabla^2 \varphi$ is a section of $E \otimes T^\ast M \otimes T^\ast M$. Use the metric (the musical isomorphism $\sharp$) to identify $T^\ast M$ with $TM$ and obtain a section of $E \otimes T^\ast M \otimes TM \cong E \otimes \text{End}(TM)$. Take the trace of the endomorphism piece to obtain a section of $E$. Note that $\nabla^2 \varphi$ may not be symmetric in its entries, so we have a choice as to which factor of $T^\ast M$ we apply $\sharp$ to, but this choice is irrelevant once we take the trace. In terms of a local frame $\{ e_i \}$ for $TM$, we have $$ \Delta \varphi = \text{tr}_g \nabla^2 \varphi = g^{ij} [\nabla^2 \varphi] (e_i, e_j).$$ Of course, if the frame is orthonormal, this reduces to $[\nabla^2 \varphi] (e_i, e_i)$, as you stated. We have a connection $\nabla^E$ on $E$ and the Levi-Civita connection $\nabla^{LC}$ on $T^\ast M$. These induce a connection $\tilde{\nabla}^E$ on $E \otimes T^\ast M$, which is defined by the "product rule": $$\tilde{\nabla}^E (\varphi \otimes \omega) = (\nabla^E \varphi) \otimes \omega + \varphi \otimes (\nabla^{LC} \omega)$$ where $\varphi$ is a section of $E$ and $\omega$ is a one-form. This connection gives a map $\tilde{\nabla}^E: \Gamma(E \otimes T^\ast M) \to \Gamma(E \otimes T^\ast M \otimes T^\ast M)$, and as you guessed, $\nabla^2$ as you defined it is precisely the composition of the connections $$\tilde{\nabla}^E \circ \nabla^E: \Gamma(E) \to \Gamma(E \otimes T^\ast M) \to \Gamma(E \otimes T^\ast M \otimes T^\ast M).$$ It's a good exercise to prove this! A note, as much for my own understanding as anything: $\tilde{\nabla}^E$ as defined above is an extension of $\nabla^E$ which maps $\Gamma(E \otimes T^\ast M) \to \Gamma(E \otimes T^\ast M \otimes T^\ast M)$. There is another interesting extension of $\nabla^E$ to $E \otimes T^\ast M$, which I will call $d^E$. $d^E$, in contrast to $\tilde{\nabla}^E$, is a map $$d^E: \Gamma(E \otimes T^\ast M) \to \Gamma(E \otimes \Lambda^2 T^\ast M),$$ i.e., $d^E$ gives a two-form with $E$-coefficients. More generally, one can define $d^E$ as a map on $E$-valued forms of any degree: $$d^E: \Gamma(E \otimes \Lambda^k T^\ast M) \to \Gamma(E \otimes \Lambda^{k+1} T^\ast M),$$ defined by $$d^E(\varphi \otimes \omega) = (\nabla^E \varphi) \wedge \omega + \varphi \otimes (d\omega),$$ where $\wedge$ means "wedge the one-form part of $\nabla^E \varphi$ with $\omega$". I suppose one should think of $d^E$ as a generalization of the de Rham exterior derivative $d$ on forms. (If $E$ is the trivial bundle $M \times \mathbb{R}$ with trivial connection $\nabla^E = d$, we recover $d$.) Note that $d^E$ does not require a connection on $TM$.
The curvature $R^E$ of the connection $\nabla^E$ is the $\text{End}(E)$-valued two-form defined by the composition $$R^E:=(d^E)^2 \text{ (or }d^E \circ \nabla^E): \Gamma(E) \to \Gamma(E \otimes T^\ast M) \to \Gamma(E \otimes \Lambda^2 T^\ast M) .$$ It's a standard computation to show that $R^E$ really lives in $\text{End}(E)$, i.e., that it's $C^\infty(M)$-linear, and that $$ R^E(X,Y) = \nabla^E_X \nabla^E_Y - \nabla^E_Y \nabla^E_X - \nabla^E_{[X,Y]} $$ As a final remark to relate this back to the Hessian, notice that the antisymmetric part of the Hessian is precisely the curvature, i.e., $$[\nabla^2 \varphi](X, Y) - [\nabla^2 \varphi](Y, X) = R^E(X,Y) \varphi.$$ The Hessian of a smooth function is symmetric, which is equivalent to the fact that the de Rham "curvature" $d^2$ is zero. References: Here are a couple of books I found useful in reminding myself how some of this works: Jost, Riemannian Geometry and Geometric Analysis. See chapter 4. Taylor, Partial Differential Equations I. See Appendix C (on his website) and chapter 2.
Normed vector space and absolute value
The Wikipedia article you cite is written from the perspective of an analyst, so it takes a narrow interpretation of a norm. A search for "normed linear/abstract space" together with "absolute value" or "valued field" shows that this term is also used for linear spaces over fields other than subfields of $\mathbb C$. Here is one example and here is another, with a screenshot of the relevant part.
Prove there is no homomorphism from $Z_{16} \oplus Z_2 $ onto $Z_4 \oplus Z_4$
If there were such a surjective homomorphism $\phi$, its kernel would be a subgroup of order $2$, since $Z_4 \oplus Z_4$ has order $16$ while $Z_{16} \oplus Z_2$ has order $32$, and the first isomorphism theorem implies that $(Z_{16} \oplus Z_2)/\ker \phi \cong Z_4 \oplus Z_4$. The subgroups of order $2$ are generated by the elements of order $2$, namely $(8,0)$, $(0,1)$ and $(8,1)$. In each of the three corresponding quotients the image of $(1,0)$ has order at least $8$, whereas every element of $Z_4 \oplus Z_4$ has order at most $4$, so none of these quotients is isomorphic to $Z_4 \oplus Z_4$.
How do I update position using velocity, acceleration and friction with variable time?
Generally, two types of frictional force are encountered. In the first, the frictional force is proportional to the normal force on the object, i.e. $F=\mu\cdot N$ where $N$ is the normal force. For example, in the case of a moving car it is $\mu\cdot Mg$ where $M$ is the mass of the car. So if there is an external force $F_e$ already acting on the car, provided by the engine, then the net force on the car is $F_e-\mu\cdot Mg$ and hence the net acceleration is $\frac{F_e}{M}-\mu\cdot g$. So $$velocity_n=velocity_o+\left(\frac{F_e}{M}-\mu\cdot g\right)\cdot timedelta$$ But there are also cases where the resisting acceleration, or drag, is proportional to the velocity, and hence $$\frac{dv}{dt}=-kv$$ $$\Rightarrow \frac{dv}{v}=-k\,dt$$ $$\Rightarrow v=v_0e^{-kt}$$ $$\Rightarrow v_{next}=v_{now}\cdot e^{-k\cdot timedelta}$$ This also leads to the position update $$\frac{ds}{dt}=v_0e^{-kt}$$ $$\Rightarrow s=\frac{v_0}{k}(1-e^{-kt})$$ Consolidating the above (note that the current acceleration is $-k\cdot v_{now}$, so $-k = \frac{acceleration_{now}}{v_{now}}$), you get $$v_{next}=v_{now}\cdot e^{\frac{acceleration_{now}}{v_{now}}\cdot timedelta}$$
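To make the variable-time-step point concrete, here is a minimal Python sketch (my own illustration; the helper name `step` and the drag constant `k` are choices of mine, not from the question). Because it uses the exact solution of $\frac{dv}{dt}=-kv$ over each frame, the final state does not depend on how the elapsed time is sliced up:

    import math

    def step(pos, vel, k, dt):
        """Advance one frame under velocity-proportional drag dv/dt = -k*v.

        Uses the exact solution over the frame (assumes k > 0 and that no
        other forces act during the frame), so results are frame-rate independent.
        """
        new_vel = vel * math.exp(-k * dt)
        new_pos = pos + (vel / k) * (1.0 - math.exp(-k * dt))
        return new_pos, new_vel

    # Same end state whether we take one 0.1 s step or ten 0.01 s steps:
    print(step(0.0, 5.0, k=2.0, dt=0.1))
    p, v = 0.0, 5.0
    for _ in range(10):
        p, v = step(p, v, k=2.0, dt=0.01)
    print((p, v))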
formal power series ring over field is m-adic complete
Consider the ring of polynomials $S=k[x_1,\ldots,x_n]$ over a field $k$. Then $\mathfrak{m}:=(x_1,\ldots ,x_n)$ is a maximal ideal in $S$, and the $\mathfrak{m}$-adic completion of $S$ can be identified with the formal power series ring $R=k[[x_1,\ldots,x_n]]$. So $R$ is complete. To see this, define $\varphi\colon R\rightarrow \widehat{S_{\mathfrak{m}}}$ by $$ \varphi(f) = (f+\mathfrak{m},f+\mathfrak{m}^2,f+\mathfrak{m}^3,\cdots ) $$ (truncate the power series $f$ modulo each power of $\mathfrak{m}$). The preimage of any $(f_1+\mathfrak{m}, f_2+\mathfrak{m}^2,\ldots)$ is the power series $f_1 +(f_2 -f_1)+(f_3 -f_2)+\cdots$, which makes sense in $R$ because $f_{k+1}-f_k\in\mathfrak{m}^k$, and $\varphi$ is easily checked to be a ring homomorphism. Hence these rings are isomorphic.
How to explain why a parabola opens up or down
If $x$ is big and positive, and $a$ is positive, then $ax^2$ will be very big and positive, overwhelming any effect from $bx+c$. If $x$ is big and negative, and $a$ is positive, then $ax^2$ will again be very big and positive. So if $a$ is positive, the parabola opens upwards. If $a$ is negative then if $x$ is big (positive or negative) the opposite occurs, and $ax^2$ will be very big and negative with the parabola opening downwards.
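For a concrete illustration (numbers of my own choosing): take $y = 2x^2 - 30x + 5$. At $x = 100$, the quadratic term contributes $20000$ while $bx+c = -2995$, so $y = 17005 > 0$; at $x = -100$ we get $20000 + 3000 + 5 = 23005 > 0$. Both ends of the graph point upward, so the parabola opens up; with $a = -2$ instead, both values become large and negative and the parabola opens down.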
Is the union of two locally closed sets in real line, locally closed?
Consider $A = \{0\} \cup \bigcup_{n=1}^\infty (2^{-n}, 2^{-n+1})$. This is a union of two locally closed sets, since $\{0\}$ is closed and $\bigcup_{n=1}^\infty (2^{-n}, 2^{-n+1})$ is open (and closed sets and open sets are in particular locally closed). Suppose $A$ is locally closed, so $A = U \cap C$ where $U$ is open and $C$ is closed. Since $0 \in A \subseteq U$, $(-\epsilon, \epsilon) \subseteq U$ for some $\epsilon > 0$. Now if $2^{-n} < \epsilon$, then $2^{-n} \in U \setminus A$, so $2^{-n} \notin C$. But $2^{-n} + s \in A \subseteq C$ for all sufficiently small $|s| > 0$, so $2^{-n}$ is a limit point of $C$ that does not belong to $C$, contradicting the fact that $C$ is closed.
How to solve $\int \frac{\,dx}{(x^3 + x + 1)^3}$?
Hint: the polynomial $x^3+x+1$ has one real root, say $\alpha$. Then $x^3+x+1=(x-\alpha)(x^2+\alpha x+\alpha^2+1)$ and then apply integration techniques of rational expressions of polynomials with repeated factors, see for example https://math.la.asu.edu/~surgent/mat271/parfrac.pdf
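If it helps, the factorization in the hint can be checked numerically with a few lines of SymPy (the variable names here are my own):

    import sympy as sp

    x = sp.symbols('x')
    alpha = sp.nsolve(x**3 + x + 1, x, -1)  # the unique real root, roughly -0.6823
    # expanding the claimed factorization reproduces x**3 + x + 1 (up to rounding):
    print(sp.expand((x - alpha) * (x**2 + alpha*x + alpha**2 + 1)))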
How to find a Graph's $K_v$?
Let the enemy erase $2n-1$ vertices and select two of the remaining vertices. Put a token on each of the selected vertices; our job is now to move the tokens along the graph until they meet. (This will prove that there was a path between the two selected vertices). If the two tokens have no coordinate in common, show that it is possible to move one of them a single step such that they now have a coordinate in common. We can thus assume that the two tokens have at least one coordinate in common. Without loss of generality let's suppose they are both in layer $0$. If fewer than $2(n-1)$ nodes have been erased in layer $0$, then we can (by induction) connect the two vertices in layer $0$ alone. But otherwise there can be at most one erased node in layers $1$ and $2$, so one of these layers is completely without erasures. Move each token up to that layer and then the rest is easy. Specialized base cases will be needed for $n\le 2$; I will leave those to you.
Conjugacy classes in space of trace zero 2*2 matrices
The orbits of $\operatorname{GL}_2$ on $\mathfrak{sl}_2$ are labelled by the Jordan type of the matrices, which (because we are acting on traceless matrices) fall into two families: $\begin{pmatrix}\lambda & 0 \\ 0 & -\lambda\end{pmatrix}$ for $\lambda \in \mathbb{C}$ (with $\lambda$ and $-\lambda$ giving the same orbit), and also the nilpotent matrix $\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$. The orbits of $SL_2$ are the same, since over $\mathbb{C}$ every element of $\operatorname{GL}_2$ is a scalar multiple of an element of $SL_2$, and scalar matrices act trivially by conjugation.
If $a \mid c$ and $b \mid c$ where $a, b, c \in \mathbb{N}$, under what conditions does it follow that $a \mid b$?
$c=1$ is the only possibility. Indeed, if $c\neq 1$, take $a=c$ and $b=1$: then $c \mid c$ and $1 \mid c$, but obviously you don't have $c \mid 1$.
The spectrum of a linear operator
This is not true in general. For example, take $$A = \left(\begin{array}{cc} 0 & \frac{1}{4} \\ 2 & 0\end{array}\right), \quad v = (0,0)$$ and $f$ the corresponding map $x \mapsto Ax + v$. You have $$\text{Spec} (A) = \left\{\frac1 {\sqrt 2}, -\frac1 {\sqrt 2}\right\} \subset \{\lambda : |\lambda| < 1\}$$ However, $$\left\|f(e_1) - f(0)\right\| = \left\|Ae_1\right\| = \left\|2e_2\right\| = 2 > \left\|e_1\right\|,$$ which proves that $f$ is not a contraction.
Inverting the cumulative distribution function to solve for boundary conditions
Define $u:=\sigma \sqrt{\tau}$ and $v:=b+c e^{-d\tau}$, so $f(x)=a\Phi(u-x)-v\Phi(-x)$. For $f$ to have a root, clearly $a$ and $v$ must have identical signs. I assume this in the following. Given this, $f(x)=0$ implies $\log{|a|}+\log{\Phi(u-x)}=\log{|v|}+\log{\Phi(-x)}$, so if we additionally define $w=\log{|v|}-\log{|a|}$, then any root of $g$ defined by $$g(x):=\log{\Phi(u-x)}-\log{\Phi(-x)}-w$$ is also a root of $f$. Now, if $u>0$, then $\log{\Phi(u-x)}>\log{\Phi(-x)}$, so there can only be a root if $w>0$. If $u=0$, then $g(x)=-w$ and there can only be a root if $w=0$ (in which case all points are roots). Finally, if $u<0$, then $\log{\Phi(u-x)}<\log{\Phi(-x)}$, so there can only be a root if $w<0$. Henceforth then, I will also assume that $\operatorname{sign}u=\operatorname{sign}w$. In line with your desire for solutions which are accurate when $|x|$ is large, the natural next step is to compute asymptotic expansions of $g(x)$ as $x\rightarrow+\infty$ and as $x\rightarrow-\infty$. I confess I did these in Maple to save time. (The Maple code is included at the bottom.) These series expansions depend on whether $u>0$ or $u<0$. The expressions are particularly simple around $+\infty$. In particular, as $x\rightarrow \infty$: $$g(x)=ux-\frac{u^2}{2}-w+O\left(\frac{1}{x}\right).$$ Thus, when the root $x$ of $f$ is large: $$x\approx \frac{u^2+2w}{2u},$$ i.e. this approximation is valid for large $u$, large $w$ or large $-w$ with small $-u$. Approximating around $-\infty$ produces a slightly messy expression, but its inverse is simpler. In particular, when the root $x$ of $f$ is negative and large in magnitude, and either $u>0$, or $u<0$ and small in magnitude: $$x\approx -\sqrt{\operatorname{LambertW}\left(\frac{1}{2w^2\pi}\right)}$$ Since the $\operatorname{LambertW}$ function (see Maple's definition here) tends to $\infty$ as its argument tends to $+\infty$, this approximation is valid for small $|w|$ and small $|u|$. ($|u|$ must also be small both because it was an additional assumption in deriving this expression in the $u<0$ case, and because for large $|u|$, the previous approximation is better.) If $\operatorname{LambertW}$ is not sufficiently "closed-form", note that for small $|w|$, from the asymptotic approximation to the $\operatorname{LambertW}$ function given on Wikipedia here, we have that: $$x\approx-\sqrt{-\log{(2w^2\pi)}-\log{(-\log{(2w^2\pi)})}-\frac{\log{(-\log{(2w^2\pi)})}}{\log{(2w^2\pi)}}}.$$ Maple code:

    phi:=x->1/sqrt(2*Pi)*exp(-x^2/2):
    Phi:=z->int(phi(x),x=-infinity..z):
    g:=x->log(Phi(u-x))-log(Phi(-x))-w:
    "u>0, x>>0";
    g(x): simplify(series(%,x=infinity,3)) assuming u>0 and w>0 and x>0;
    convert(%, polynom): simplify(solve(%,x)) assuming u>0 and w>0;
    "u<0, x>>0";
    g(x): simplify(series(%,x=infinity,3)) assuming u<0 and w<0 and x>0;
    convert(%, polynom): simplify(solve(%,x)) assuming u<0 and w<0;
    "u>0, x<<0";
    g(x): simplify(series(%,x=-infinity,3)) assuming u>0 and w>0 and x<0;
    convert(%, polynom): simplify(solve(%,x)) assuming u>0 and w>0;
    "u<0, x<<0";
    g(x): simplify(series(%,x=-infinity,3)) assuming u<0 and w<0 and x<0;
    eval(%,u=0);
    convert(%, polynom): simplify(solve(%,x)) assuming u<0 and w<0;
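For readers who prefer a quick numerical cross-check of the first approximation (this is my own snippet, not part of the answer above; `u` and `w` are sample values chosen with $\operatorname{sign}u=\operatorname{sign}w$, and SciPy is assumed to be available):

    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import norm

    u, w = 1.0, 10.0  # sample parameters chosen so that the root is fairly large

    def g(x):
        # log-CDFs avoid underflow in the Gaussian tails
        return norm.logcdf(u - x) - norm.logcdf(-x) - w

    root = brentq(g, 0.0, 100.0)
    approx = (u**2 + 2*w) / (2*u)
    print(root, approx)  # the two should agree to within roughly one percent here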