How to expand a harmonic function in terms of eigensolutions for bipolar / toroidal coordinates?
Identifying those coefficients might be a challenging task, since the expansion does not involve orthogonal functions; however, transforming the hyperbolic functions into trigonometric functions via the substitution $\xi = i \zeta$ might help to solve your problem.
Prove that $\operatorname{lcm}(r,s) = ab$ with $a,b$ coprime, $a\mid r$ and $b\mid s$
Let $d=\gcd(r,s)$ with factorization $\prod p_i^{a_i}$. Consider each prime factor $p_i$ of $d$ in turn in order to build two components $e,f$ with $ef=d$. If the multiplicity of $p_i$ in $r$ is $a_i$, assign $p_i^{a_i}$ to $e$, otherwise assign it to $f$. Now set $a=r/e$ and $b=s/f$. This gives $ab = rs/d = \text{lcm}(r,s)$ and for each common prime factor of $r$ and $s$ eliminates that from one of the terms $a,b$, giving $\gcd(a,b) = 1$.
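The $e,f$ construction above can be sketched in code; a minimal illustration (the function name `split_coprime` is mine, not from the question):

```python
from math import gcd

def split_coprime(r, s):
    """Build a, b with a | r, b | s, gcd(a, b) = 1 and a * b = lcm(r, s).

    For each prime power p^{a_i} in d = gcd(r, s): if the multiplicity of p
    in r equals a_i, assign p^{a_i} to e (so a = r/e loses p entirely);
    otherwise assign it to f, mirroring the proof's construction.
    """
    d = gcd(r, s)
    e, f = 1, 1
    m, p = d, 2
    while m > 1:
        if m % p == 0:
            pk = 1
            while m % p == 0:
                m //= p
                pk *= p
            # multiplicity of p in r equals its multiplicity in d
            # exactly when p^{a_i + 1} does not divide r
            if r % (pk * p) != 0:
                e *= pk
            else:
                f *= pk
        p += 1
    return r // e, s // f

a, b = split_coprime(12, 18)          # gcd = 6, lcm = 36
print(a, b, a * b, gcd(a, b))         # 4 9 36 1
```

Here $a=4$ divides $12$, $b=9$ divides $18$, they are coprime, and $ab=36=\operatorname{lcm}(12,18)$, exactly as the proof predicts.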
How to characterize self-adjoint operators in terms of orthogonal diagonalizability
In terms of physics, there's a simple reason why you rule out normal operators: physical quantities are things that you can measure, and therefore the corresponding eigenvalues should be real. Normal operators in general admit complex eigenvalues. If the self-adjoint operator is compact, then you know what the eigenfunctions are (the orthonormal basis you get from the spectral theorem; Kato may have meant his self-adjoint operators to be compact, but I doubt it). In the more general cases, what Kato (I assume) was thinking of is perhaps more along the lines of "generalized" eigenfunctions. Two examples: On $L^2(\mathbb{R})$, the Laplacian is an unbounded self-adjoint operator (or rather, has a self-adjoint extension yada yada). From solving the ODE, you see that the functions $e^{ikx}$ satisfy $\triangle e^{ikx} = -k^2 e^{ikx}$, so they look like eigenfunctions, but of course $e^{ikx}$ is not in $L^2(\mathbb{R})$. On $L^2([0,1])$, the operator $f(x) \mapsto x f(x)$ is bounded and self-adjoint. (But it is not compact.) It is easy to see by inspection that the Dirac distribution $\delta_{x_0}$ "solves" the eigenfunction equation with eigenvalue $x_0$, but of course the delta function is not an element of $L^2$. In fact, in terms of the measure formulation, the eigenfunctions are precisely objects supported on a point $\lambda$. So if you apply the Lebesgue decomposition theorem, you see that for every $\lambda$ that is in the pure-point part of the measure $P_A$, the characteristic function of $\{\lambda\}$ is measurable, and its integral corresponds to a projection onto some subspace of your Hilbert space. Elements of those subspaces are eigenfunctions. In any case, whenever you see sweeping statements like this made in books or articles, you should always take them with a grain of salt and treat them more as guiding principles than as precise definitions.
Finding operator norm
If you don't want to use the Fourier transform techniques suggested in the comments, here's something you can do. Recall that $L^2([-\pi, \pi])$ has an orthogonal basis consisting of the functions $1$, $\cos(mx)$ for $m\geq 1$ and $\sin(nx)$ for $n\geq 1$. You can explicitly compute what $A$ does to this orthogonal basis. For instance, if $f = \sin(nx)$, then $$(Af)(x) = \int_{-\pi}^\pi\cos^2\left(\frac{x-t}{2}\right)\sin(nt)\,dt = \begin{cases}\frac{1}{2}\pi\sin(x) & n = 1.\\ 0 & n>1.\end{cases}$$ Similarly, if $f = \cos(nx)$, then $$(Af)(x) = \int_{-\pi}^\pi\cos^2\left(\frac{x-t}{2}\right)\cos(nt)\,dt = \begin{cases}\frac{1}{2}\pi\cos(x) & n = 1.\\ 0 & n>1.\end{cases}$$ Finally, when $f \equiv 1$, you have $$(Af)(x) = \int_{-\pi}^\pi \cos^2\left(\frac{x-t}{2}\right)\,dt \equiv \pi.$$ (You should check these computations, as I could easily have made a mistake.) If these computations are indeed correct, this implies the norm of $A$ is $\|A\| = \pi$, and moreover $\|Af\| = \pi\|f\|$ when $f$ is a constant function.
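The three integrals above are easy to sanity-check numerically; a minimal sketch using the midpoint rule (the helper `A` and the sample point $x=0.7$ are mine):

```python
import math

def A(f, x, m=20000):
    """Numerically apply (Af)(x) = integral over [-pi, pi] of
    cos^2((x - t)/2) * f(t) dt, via the midpoint rule."""
    h = 2 * math.pi / m
    total = 0.0
    for i in range(m):
        t = -math.pi + (i + 0.5) * h
        total += math.cos((x - t) / 2) ** 2 * f(t) * h
    return total

x = 0.7
print(A(math.sin, x), math.pi / 2 * math.sin(x))   # n = 1: both ~ (pi/2) sin x
print(A(lambda t: math.sin(2 * t), x))             # n = 2: ~ 0
print(A(lambda t: 1.0, x), math.pi)                # constant: both ~ pi
```

The midpoint rule is very accurate here because the integrand is smooth and periodic over a full period.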
Integer Partition, with restriction
The generating function of $P_{N_k}(n)$ is $$\prod_{j=1}^k\frac1{1-t^j}.$$ This is a rational function, and for any given $k$ it can be expanded in partial fractions, from which a formula for $P_{N_k}(n)$ can be read off. The dominant contribution comes from the term $$\frac1{k!(1-t)^k}$$ in the partial fraction expansion, so $P_{N_k}(n)\sim n^{k-1}/(k!(k-1)!)$.
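A quick numerical check of the asymptotic, extracting the coefficient of $t^n$ in the product by dynamic programming (the function name and the sample values $k=4$, $n=2000$ are mine):

```python
import math

def partitions_max_part(n, k):
    """Count partitions of n into parts of size at most k, i.e. the
    coefficient of t^n in prod_{j=1}^{k} 1/(1 - t^j)."""
    dp = [1] + [0] * n
    for j in range(1, k + 1):          # multiply in the factor 1/(1 - t^j)
        for m in range(j, n + 1):
            dp[m] += dp[m - j]
    return dp[n]

k, n = 4, 2000
exact = partitions_max_part(n, k)
approx = n ** (k - 1) / (math.factorial(k) * math.factorial(k - 1))
print(exact, approx, exact / approx)   # ratio tends to 1 as n grows
```

For instance `partitions_max_part(5, 3)` returns 5, matching the partitions $3+2$, $3+1+1$, $2+2+1$, $2+1+1+1$, $1+1+1+1+1$.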
Polynomial expansion
No, they meant $\Theta(d^2)$. They describe a process for going from a polynomial of degree $k$ to one of degree $k+1$, or rather, from the coefficients of a degree $k$ polynomial to the coefficients of a degree $k+1$ polynomial. This takes $O(k)$ multiplications, and $\sum_{k=1}^dO(k) = \Theta(d^2)$.
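The degree-by-degree process can be sketched as follows, with an explicit multiplication counter showing the $\Theta(d^2)$ total (the function name is mine; I take the polynomial to be a product of linear factors $x - r$ for concreteness):

```python
def expand_roots(roots):
    """Coefficients of prod (x - r) for r in roots, lowest degree first.
    Going from degree k to degree k+1 touches O(k) coefficients, so the
    whole expansion costs sum_{k=1}^{d} O(k) = Theta(d^2) multiplications."""
    coeffs = [1]          # the constant polynomial 1
    mults = 0
    for r in roots:
        new = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i + 1] += c        # contribution of x * (c x^i)
            new[i] -= r * c        # contribution of -r * (c x^i)
            mults += 1
        coeffs = new
    return coeffs, mults

coeffs, mults = expand_roots([1, 2, 3])
print(coeffs)   # (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6 -> [-6, 11, -6, 1]
print(mults)    # 1 + 2 + 3 = 6 multiplications for d = 3
```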
Prove that $S_n = 5^n - 1$
For the base case, you should start by computing $s_2$ using both methods and showing they're the same. That is, $s_2 = 6\cdot 4 -5\cdot 0 = 24 = 5^2 - 1$. For the inductive step, you use strong induction by assuming the closed form holds for all indices up to a certain $n$ (meaning it also holds for $n-1$ and $n$), and then show that it holds for $n+1$ as well. That is, assume $s_{n-1} = 5^{n-1}-1$ and $s_n = 5^n-1$. Now use $s_{n+1} = 6s_n - 5s_{n-1}$ and manipulate the result so that you get $s_{n+1} = 5^{n+1}-1$.
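The argument can be spot-checked numerically; a minimal sketch, assuming the seed values $s_0 = 0$ and $s_1 = 4$ implied by the base-case computation $s_2 = 6\cdot 4 - 5\cdot 0$:

```python
# verify that s_{n+1} = 6 s_n - 5 s_{n-1} with s_0 = 0, s_1 = 4
# reproduces the closed form s_n = 5^n - 1
s_prev, s_cur = 0, 4
for n in range(1, 20):
    s_next = 6 * s_cur - 5 * s_prev
    assert s_next == 5 ** (n + 1) - 1
    s_prev, s_cur = s_cur, s_next
print("recurrence matches 5^n - 1 up to n = 20")
```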
For any two ordinals $x$ and $y$, either $x\in y$, or $x=y$, or $y\in x$
For given ordinals $x$ and $y$ the following claims hold true: $x\subsetneq y\implies x\in y$ (I presented a proof of this at If $\alpha\ne\beta$ are ordinals and $\alpha\subset\beta$ show $\alpha\in\beta$), and $x\cap y$ is an ordinal (this is quite easy to prove). For $x=y$, the theorem is trivially true. For $x\neq y$, let $z=x\cap y$; then $z$ is an ordinal. We prove this case by contradiction: assume $x\not\subsetneq y$ and $y\not\subsetneq x$. Then $z\neq x$ (otherwise $x\subseteq y$, and since $x\neq y$ this would give $x\subsetneq y$), so $z\subsetneq x$, and since $z$ is an ordinal, $z\in x$. Similarly, $z\in y$. Thus $z\in x\cap y=z$, which contradicts the fact that $z$, being an ordinal, is well-ordered under $\in$ (no set can be a member of itself here). As a result, either $x\subsetneq y$ or $y\subsetneq x$, and by the first claim $x\in y$ or $y\in x$.
the convergence of Fourier series
The Cesàro means $\sigma_n(x)$ of the Fourier series of an $L^1$ function $f(x)$ converge almost everywhere to $f(x)$. At any point where the partial sums $S_n(x)$ converge, the Cesàro means $\sigma_n(x)$ converge to the same value. So if the partial sums converge pointwise almost everywhere, the limit must almost everywhere be $f(x)$. Of course, as user16892 noted, the limit might not be $f(x)$ everywhere (but that's obvious, because you can change an $L^1$ function on a set of measure 0 without affecting the Fourier series).
Calculate number of combinations for picking up and dropping off passengers
First, having $N$ different passengers picked up at $N$ different stations can be done in $N!$ ways: the first passenger is picked up at one of $N$ stations, after which the next one has $N-1$ options left, etc. After that, you need to 'associate' each of the $N$ pickup stations with a different dropoff station, making sure that no two passengers go to the same dropoff station, so that means you multiply by the number of derangements, i.e. $!N$. Total: $$N! \cdot !N$$ EDIT: Seeing Riley's answer I see that I interpreted the question differently than Riley: I was merely looking at how many different ways you can assign pickup and dropoff stations to passengers, without looking at the order in which passengers would be picked up or dropped off, which is what Riley did. So ... what exactly was the question?
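The derangement count $!N$ used above can be sanity-checked by brute force over permutations with no fixed point; a minimal sketch (both function names are mine):

```python
from math import factorial
from itertools import permutations

def derangements(n):
    """!n via the recurrence !n = (n-1)(!(n-1) + !(n-2)), !0 = 1, !1 = 0."""
    a, b = 1, 0
    for k in range(2, n + 1):
        a, b = b, (k - 1) * (a + b)
    return b if n >= 1 else a

def brute(n):
    """Count permutations of {0,...,n-1} with no fixed point."""
    return sum(1 for p in permutations(range(n))
               if all(p[i] != i for i in range(n)))

for n in range(2, 8):
    assert derangements(n) == brute(n)
n = 5
print(factorial(n) * derangements(n))   # N! * !N for N = 5: 120 * 44 = 5280
```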
Prove that $(P_{ij})^{-1}=P_{ij}$ in matrices
It is enough to show that $P_{ij}^2=I_n$. That means for any vector $x$, we have to show $P^2_{ij}x=x$, that is, $P_{ij}(P_{ij}x)=x$. For a vector $x$, $P_{ij}x$ is the vector obtained by interchanging the $i$th and $j$th entries of $x$. Applying $P_{ij}$ again interchanges them back, and hence you get back $x$. Thus proved!
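The "swap twice and you are back where you started" argument in one line of code (a minimal sketch; the helper `P` and the sample vector are mine):

```python
def P(i, j, x):
    """Apply the swap matrix P_ij to a vector x: interchange entries i and j."""
    y = list(x)
    y[i], y[j] = y[j], y[i]
    return y

x = [10, 20, 30, 40]
assert P(1, 3, P(1, 3, x)) == x    # P_ij(P_ij x) = x for every x
print("P_ij is its own inverse on this vector")
```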
Let $p$ be a prime number. For all $1 \le k,r < p$, there exists $n \in \mathbb N$ such that $nk \equiv r \pmod{p}$
Consider the $p-1$ numbers $nk\bmod p$ for $n=1,2,\ldots,p-1$. These numbers are all different, because if $nk=mk\bmod p$, that means that $(n-m)k$ is a multiple of $p$, which can only happen if $n-m$ is a multiple of $p$ (because $p$ is prime). This in turn can only happen if $m=n$ (because both $m$ and $n$ are less than $p$). Note also that none of them can be zero, for a similar reason. So the $p-1$ numbers $nk\bmod p$ are equal to the $p-1$ numbers $1,2,\ldots,p-1$ in some order. Hence one of them must be equal to $r$.
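A direct check of the pigeonhole argument for one small prime (the choice $p = 11$ is mine):

```python
# for every k in 1..p-1, the values n*k mod p (n = 1..p-1) are a
# permutation of 1..p-1, so every residue r is hit exactly once
p = 11
for k in range(1, p):
    residues = sorted(n * k % p for n in range(1, p))
    assert residues == list(range(1, p))
print("for p = 11, every k hits every residue r exactly once")
```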
Sublinear functional as supremum of linear functionals
Yes, every sublinear functional $p$ is the supremum of all linear $\varphi$ with $\varphi \leqslant p$. And conversely, the supremum of any family of linear $\varphi \colon V \to \mathbb{R}$ is a sublinear functional (if we don't allow the value $+\infty$ for sublinear functionals, that holds only for pointwise bounded families). It's an easy consequence of one of the Hahn-Banach theorems, Theorem 3.2 in Rudin (FA): Suppose (a) $M$ is a subspace of a real vector space $X$, (b) $p \colon X \to \mathbb{R}$ satisfies $$p(x+y) \leqslant p(x) + p(y)\quad \text{and}\quad p(tx) = tp(x)$$ $\quad$ if $x \in X,\, y \in X,\, t \geqslant 0$, (c) $f \colon M \to \mathbb{R}$ is linear and $f(x) \leqslant p(x)$ on $M$. Then there exists a linear $\Lambda \colon X \to \mathbb{R}$ such that $$\Lambda x = f(x) \qquad (x \in M)$$ and $$-p(-x) \leqslant \Lambda x \leqslant p(x) \qquad (x \in X).$$ For any $x \in V$, define $f \colon \mathbb{R}\cdot x \to \mathbb{R}$ by $f(\lambda x) = \lambda p(x)$. For $\lambda \geqslant 0$ we have $f(\lambda x) = p(\lambda x)$, and for $\lambda < 0$ we have $f(\lambda x) \leqslant p(\lambda x)$ by the sublinearity of $p$ (that gives $0 = p(0) \leqslant p(\lambda x) + p(\lvert\lambda\rvert x)$, whence $p(\lambda x) \geqslant -p(\lvert\lambda\rvert x) = -f(\lvert\lambda\rvert x) = f(\lambda x)$ for $\lambda < 0$). Extending $f$ to all of $V$ as guaranteed by the theorem yields $$p(x) = \sup \{ \Lambda x : \Lambda \leqslant p\}.$$
A question about the automorphisms of the alternating group $A_n$.
For $n\le 3$, $H_1=1$ and the result has to be checked by hand (easy, since $A_2$ is trivial and $A_3$ is cyclic of order 3). Now assume $n\ge 4$. The set $C=\{H_1,\dots,H_n\}$ is a single $G$-conjugacy class of subgroups of $A_n$. Hence if $\phi(H_1)=H_i$, then $\phi(C)=C$. The set of $\phi$ such that $\phi(C)=C$ forms a subgroup $I_n$ of $\mathrm{Aut}(A_n)$, containing the automorphisms induced by elements of $S_n$. In particular, for $\phi\in I_n$ there is a permutation $s=s(\phi)$ of $\{1,\dots,n\}$ such that $\phi(H_j)=H_{s(j)}$ for all $j$. Then $\phi\mapsto s(\phi)$ is a homomorphism $I_n\to S_n$. If $\phi\in S_n$ then $s(\phi)=\phi$. Hence to show that $I_n=S_n$ it is enough to show that any $\psi$ in the kernel of $s$ is trivial. By assumption $\psi(H_j)=H_j$ for all $j$; let us show that $\psi$ is the identity. In particular, $\psi$ preserves the intersections of any $n-3$ of the $H_j$; thus it maps any 3-cycle to either itself or its inverse. Since all cyclic subgroups generated by 3-cycles are conjugate (because $A_n$ acts transitively on 3-element subsets), if $\psi$ is not the identity, then $\psi$ maps all 3-cycles to their inverses. But (using the right action, so reading permutations from left to right when composing) this yields a contradiction, since it would imply $$(134)=\psi((143))=\psi((123)(243))=\psi((123))\psi((243))=(321)(342)=(142).$$
Converge to Brownian Motion problem
What follows is surely not an exam proof. We will use Theorem 5.4.1 from Ethier and Kurtz, Markov Processes, Wiley, 1986 (EK86); the important part is quoted below. Let us first write down the generator $A_n$ of the process $X^{(n)}$: \begin{equation} A_n f(x) = \sin (nx) f'(x) + \frac{1}{2} f''(x), \quad x \in \mathbb{R}. \end{equation} The expected limit process, Brownian motion, has generator $A f(x) = \frac{1}{2}f''(x)$, $x \in \mathbb{R}$. Both definitions are understood for functions $f \in C_c^\infty(\mathbb{R})$, the compactly supported, infinitely differentiable functions on $\mathbb{R}$. Lemma 1: The sequence $\{(X^{(n)}_t)_{t\geq 0} : n \in \mathbb{N}\}$ is tight in the Skorohod space $D([0,\infty), \mathbb{R})$. Lemma 2: For any $f \in C_c^\infty(\mathbb{R})$ we find a function $f_n$ such that \begin{equation} \| f_n - f\|_\infty \to 0, \qquad \| A_n f_n - A f \|_\infty \to 0 \end{equation} as $n \to \infty$. Proof of your statement: Let $Y=(Y_t)_{t\geq 0}$ be a limit point of the sequence $\{(X^{(n)}_t)_{t\geq 0} : n \in \mathbb{N}\}$, which exists by Lemma 1. Then Lemma 2 (combined with Theorem 5.4.1 in EK86) tells us that $Y$ solves the martingale problem related to the generator $A$. That means that $Y$ is a Brownian motion. Since this holds for any limit point $Y$, we know that $\{(X^{(n)}_t)_{t\geq 0} : n \in \mathbb{N}\}$ converges weakly in Skorohod space to Brownian motion. This implies convergence in finite-dimensional distributions by Theorem 3.7.5 in EK86. Proof of Lemma 2: Fix $f \in C_c^2(\mathbb{R})$ and let $a := \inf \text{supp } f$, $A:= \sup \text{supp } f$, so that $f(x) = 0$ for all $x \not\in [a,A]$. Set \begin{equation} f_n (x) := f(x) - 2\int_a^x \exp (2n^{-1} \cos (ny) ) \left(-C + \int_a^y \exp(-2n^{-1}\cos(nz)) \sin(nz) f'(z) d z \right) dy . \end{equation} Here $C = C(n) = \int_a^A \exp(-2n^{-1}\cos(nz)) \sin(nz)f'(z) d z$. It is easy to see that $A_n f_n(x) - Af(x) = 0$ for any $x \in \mathbb{R}$. 
So we are left with the verification of $\sup_{x} | f_n(x) -f(x)| \to 0$ as $n\to \infty$. Note that since $1-(2/n) \leq \exp(-n^{-1} \xi) \leq 1+(4/n)$ for any $\xi \in [-2,2]$, $n \geq 4$, we have \begin{align} \int_a^y & \exp(-2n^{-1}\cos(nz)) 2\sin(nz) f'(z) d z \\ & = - \int_a^y \exp(-2n^{-1}\cos(nz)) f''(z) dz + f'(y) \exp(-2n^{-1}\cos(ny)) - f'(a) \exp(-2n^{-1}) \\ & = -\int_a^y f''(z) dz + f'(y) - f'(a) + \mathcal{O}\left(n^{-1} \|f''\|_\infty + n^{-1} \|f'\|_\infty \right) \\ & = \mathcal{O}\left(n^{-1} \|f''\|_\infty + n^{-1} \|f'\|_\infty \right) \end{align} for any $y \in \mathbb{R}$. This allows us to say that for $x \in [a,A]$: \begin{align} f(x) - f_n(x) & = 2\int_a^x \exp(-n^{-1}\cos (ny)) dy \ \cdot \mathcal{O}\left(n^{-1} \|f''\|_\infty + n^{-1} \|f'\|_\infty \right) \\ & \leq 4 |A-a| \cdot \mathcal{O}\left(n^{-1} \|f''\|_\infty + n^{-1} \|f'\|_\infty \right) \to 0 \end{align} as $n \to \infty$. If we are not in the interval of support, i.e. $x \not\in [a,A]$, then we have for $x \geq A$: \begin{align} f(x) -f_n(x) &= -f_n(x) = 2\int_a^x \exp (2n^{-1} \cos (ny) ) \left(-C + \int_a^y \exp(-2n^{-1}\cos(nz)) \sin(nz) f'(z) d z \right) dy \\ & = |A-a| \cdot \mathcal{O}\left(n^{-1} \|f''\|_\infty + n^{-1} \|f'\|_\infty \right) \\ & \qquad + 2\int_A^x \exp (2n^{-1} \cos (ny) ) \left( -C + \int_a^A \cdots dz + \int_A^y \cdots dz \right) dy \, . \end{align} The first term vanishes as before, uniformly in $x$, which leaves us with the terms of the last line. By the definition of $C$ we are only left with the last term, which is zero anyway, since we are no longer in the support of $f$. A similar, even easier reasoning applies for $x < a$. The previous lines include the observation that $f_n$ is a bounded function. Proof of Lemma 1: Use Remark 5.4.2 in EK86, which requires standard conditions on $A$ that are fulfilled here, plus the compact containment condition for $\{(X^{(n)}_t)_{t\geq 0} : n \in \mathbb{N}\}$. 
For the compact containment condition to hold, define two processes $Z^{i}$, $i = -1, 1$, solving $dZ_t^i = i\, dt + dB_t$. Then Theorem 5.2.18 in Karatzas and Shreve, Brownian Motion and Stochastic Calculus, 1991, allows us to construct probability spaces $(\Omega^n,\mathcal{A}^n,P^n)$ such that $Z_t^{-1} \leq X_t^n \leq Z_t^1$ for all $t \geq 0$, $P^n$-almost surely, for all $n \in \mathbb{N}$. Theorem 5.4.1 in EK86: Let $A \subset C_b(\mathbb{R}) \times C_b(\mathbb{R})$ and $A_n \subset \mathbb{B}_b(\mathbb{R}) \times \mathbb{B}_b(\mathbb{R})$, the bounded measurable functions, $n \in \mathbb{N}$. Suppose that for each $(f,g) \in A$ there is an $(f_n,g_n) \in A_n$ such that \begin{equation} \| f_n -f \| \to 0, \quad \| g_n -g\| \to 0. \end{equation} If for each $n\in \mathbb{N}$, $X^{(n)}$ is a solution of the $A_n$-martingale problem with paths in the Skorohod space and $X^{(n)} \Rightarrow X$, then $X$ is a solution to the $A$-martingale problem. Note: The authors of EK86 prefer to denote a generator $A: D(A) \to \mathbb{B}_b(E)$ by its graph $\{(f,Af): f \in D(A) \} \subset (\mathbb{B}_b(E),\mathbb{B}_b(E))$.
show that $f$ is differentiable for $x=0$
Let $x\in ]0, 1[$. Then $f$ is continuous on $[0, x]$ and differentiable on $]0, x[$. Thus the mean value theorem applies, and there is some $c_x \in ]0, x[$ with $$ \frac{f(x) - f(0)}{x - 0} = f'(c_x) \to -\infty. $$ Hence the difference quotient has no finite limit and $f$ is not differentiable at $0$.
Prove with the $(\varepsilon, \delta)$ limit proof that a function is continuous
For each $t \in \mathbb R$ we have $|\sin(t)| \le |t|$. Hence $|f(x,y)| \le |y|$ for all $(x,y)$.
What is the lowest positive integer multiple of $7$ that is also a power of $2$ (if one exists)?
No power of 2 is a multiple of 7; by the uniqueness of prime factorization, a number $n$ cannot have two prime factorizations $$n=2^a$$ and $$n=7^{b}p_1^{c_1}p_2^{c_2}\cdots p_r^{c_r}$$ where $b\geq1$, because the factorizations must be different (the power of 7 occurring in the first is $7^0=1$, while the power of 7 occurring in the second is $7^b$ where $b\geq1$). Thus, a number cannot be both a power of 2 and divisible by 7.
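A quick empirical illustration of the same fact (the range of exponents checked is my choice):

```python
# powers of 2 modulo 7 cycle through 2, 4, 1 and never hit 0
seen = [2 ** a % 7 for a in range(1, 20)]
print(sorted(set(seen)))   # [1, 2, 4]
assert 0 not in seen
```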
Prove Complex Limits from first principle definition
Consider the modulus: $$\left|\frac{z-1}{z^2 + 1}\right| = \left|\frac{z-1}{(z-i)(z+i)}\right| = \frac{|z-1|}{|z-i|\,|z+i|}.$$ As $z\to i$, the numerator tends to $|i-1| = \sqrt{2} \neq 0$, while $|z-i|\to 0$ and $|z+i|\to 2$. Hence $$\left|\frac{z-1}{z^2+1}\right| \to \infty,$$ so the limit does not exist.
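The blow-up of the modulus is easy to see numerically; a minimal sketch approaching $i$ along $z = i + 1/m$ (the approach path is my choice):

```python
# |f(z)| along z = i + 1/m: the numerator tends to |i - 1| = sqrt(2) != 0
# while |z - i| -> 0, so |f(z)| grows without bound
f = lambda z: (z - 1) / (z * z + 1)
values = [abs(f(1j + 1 / m)) for m in (10, 100, 1000)]
print(values)
assert values[0] < values[1] < values[2]
```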
Analytic continuation of a real function
Necessary and sufficient condition for existence of an entire function $g$ extending $f$ to the whole complex plane: $f$ is infinitely differentiable at $0$, and the power series for $f$ at the origin converges to $f$ on the real line. A counterexample for something more is (as noted in a comment) $f(x) = 1/(1+x^2)$. It is real analytic on the real line, but cannot be extended analytically to any connected region containing both $0$ and either $i$ or $-i$. The power series at $0$ only has radius of convergence $1$.
An Application of Open Mapping theorem and counting measure
To show $X$ is a subspace, just use the triangle inequality, as mentioned in the comment (by Vaidyanathan). We can think of $L^1(\mu)$ as $l^1$. Observe that the space of eventually zero sequences is contained in $X$. Since the space of eventually zero sequences is dense in $Y=l^1$, $X$ is dense. Moreover, $X$ is a proper subspace: $f(n)=\frac{1}{n^2}$ lies in $l^1$ but not in $X$ (as $\sum \frac{1}{n}$ is not convergent). Hence $X$ is not complete. Consider $\{f_j\}$ where $f_j=(0,0,\dots,0,1,0,\dots)$, i.e. $1$ is in the $j$-th position. Clearly $\{f_j\}$ is a bounded sequence, while $Tf_j=(0,0,\dots,0,j,0,\dots)$, so $\|Tf_j\|\to \infty$ as $j\to\infty$ and $T$ is not bounded. To show $T$ is closed you can use the sequential criterion. Here $Sf(n)=\frac{1}{n}f(n)$. Then it is easy to see that $\sum \frac{1}{n}|f(n)|\leq \sum |f(n)|$, i.e. $\|S f\|_1\leq \|f\|_1$, so $S$ is bounded. Surjectivity follows from the definition of $X$. It is not open by (b).
Calculation of integral including exponentials
Go with your first instinct: assume an order for the $x_n$ and multiply your result by $n!$ to account for this. To deal with the min function, you only need to break up the integral for $x_1$, assumed to be the smallest. This inner integral will be $$\int _1^c dx_1 + \int_c^{x_2}e^{-x_1} dx_1 = (c-1) + e^{-c}-e^{-x_2} $$ So $$I= n! \int_1^\infty \int_1^{x_n} ... \int_1^{x_3} \left ( c-1+e^{-c}-e^{-x_2} \right ) e^{-(x_2+...+x_n)} dx_2 ... dx_{n-1} dx_n $$
Can't understand these notations
$(r,\infty)$ is all the real numbers above $r$. Then $(r, \infty)\cup\{\infty\}$ means a set that includes all the real numbers above $r$, and also $\infty$ as well. Since $\infty$ is not a real number, the meaning depends on what else is happening in the current problem. $\mathbb{R}_+\cup\{\infty\}$ is the same, but $\mathbb{R}_+$ is all the positive real numbers; $\mathbb{R}_+=(0,\infty)$
Vertices and orthocentre
This is correct; the orthocentre of $ABH$ is always $C$. Indeed, we have $AC \perp BH$ (since $BH$ is an altitude) and $BC \perp AH$ (since $AH$ is an altitude). So the perpendiculars from $A$ to $BH$ and $B$ to $AH$ meet at $C$, and $C$ is the orthocentre of $ABH$.
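The symmetry can be verified numerically by computing the orthocentre as the intersection of two altitudes; a minimal sketch (the solver and the sample triangle are mine):

```python
def orthocentre(A, B, C):
    """Intersection of two altitudes, found by solving the linear
    conditions (H - A).(C - B) = 0 and (H - B).(C - A) = 0 for H."""
    (ax, ay), (bx, by), (cx, cy) = A, B, C
    a1, b1, r1 = cx - bx, cy - by, ax * (cx - bx) + ay * (cy - by)
    a2, b2, r2 = cx - ax, cy - ay, bx * (cx - ax) + by * (cy - ay)
    det = a1 * b2 - a2 * b1
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
H = orthocentre(A, B, C)
H2 = orthocentre(A, B, H)          # orthocentre of triangle ABH
print(H, H2)                       # H2 coincides with C = (1, 3)
```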
Characteristic function of a standard normal random variable
I will give two answers. First, without complex numbers: notice that $$ \begin{eqnarray} \mathcal{F}(\omega) = \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}} \mathrm{e}^{-\frac{x^2}{2}} \mathrm{e}^{j \omega x} \mathrm{d} x &=& \int_{0}^\infty \frac{1}{\sqrt{2\pi}} \mathrm{e}^{-\frac{x^2}{2}} \mathrm{e}^{j \omega x} \mathrm{d} x + \int_{-\infty}^0 \frac{1}{\sqrt{2\pi}} \mathrm{e}^{-\frac{x^2}{2}} \mathrm{e}^{j \omega x} \mathrm{d} x \\ &=& \int_{0}^\infty \frac{1}{\sqrt{2\pi}} \mathrm{e}^{-\frac{x^2}{2}} \mathrm{e}^{j \omega x} \mathrm{d} x + \int_{0}^{\infty} \frac{1}{\sqrt{2\pi}} \mathrm{e}^{-\frac{x^2}{2}} \mathrm{e}^{-j \omega x} \mathrm{d} x \\ &=& 2 \int_{0}^\infty \frac{1}{\sqrt{2\pi}} \mathrm{e}^{-\frac{x^2}{2}} \cos(\omega x) \mathrm{d} x \end{eqnarray} $$ Now, compute $\mathcal{F}^\prime(\omega)$, and integrate by parts: $$\begin{eqnarray} \mathcal{F}^\prime(\omega) &=& -\frac{2}{\sqrt{2\pi}} \int_0^\infty \mathrm{e}^{-\frac{x^2}{2}} x \sin(\omega x) \mathrm{d} x = \frac{2}{\sqrt{2\pi}} \int_0^\infty \sin(\omega x) \mathrm{d} \left( \mathrm{e}^{-\frac{x^2}{2}} \right) \\ &=& \frac{2}{\sqrt{2\pi}} \left. \mathrm{e}^{-\frac{x^2}{2}} \sin(\omega x) \right|_0^\infty - \frac{2}{\sqrt{2\pi}} \int_0^\infty \mathrm{e}^{-\frac{x^2}{2}} \omega \cos(\omega x) \mathrm{d} x \\ &=& - \omega \mathcal{F}(\omega) \end{eqnarray} $$ The solution of the ODE so obtained, $\mathcal{F}^\prime(\omega) = - \omega \mathcal{F}(\omega)$, is $\mathcal{F}(\omega) = c \exp\left( - \frac{\omega^2}{2} \right)$, and the integration constant is seen to be one from the normalization requirement $\mathcal{F}(0)=1$ of the Gaussian probability density. 
Complex integration: As you have started, complete the square: $$ \left( -\frac{x^2}{2} + j \omega x \right) = \left( -\frac{x^2}{2} + j \omega x + \frac{\omega^2}{2} \right) - \frac{\omega^2}{2} = -\frac{1}{2} \left( x - j \omega \right)^2 - \frac{\omega^2}{2} $$ We then have: $$ \mathcal{F}(\omega) = \mathrm{e}^{-\frac{\omega^2}{2}} \cdot \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}} \mathrm{e}^{-\frac{1}{2} \left( x - j \omega \right)^2 } \mathrm{d} x $$ The integral $I = \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}} \mathrm{e}^{-\frac{1}{2} \left( x - j \omega \right)^2 } \mathrm{d} x$ is indeed $1$. To see this, consider $$ \begin{eqnarray} I_L &=& \int_{-L}^L \frac{1}{\sqrt{2\pi}} \mathrm{e}^{-\frac{1}{2} \left( x - j \omega \right)^2 } \mathrm{d} x = \int_{-L-j \omega}^{L-j \omega} \frac{1}{\sqrt{2\pi}} \mathrm{e}^{-\frac{z^2}{2} } \mathrm{d} z \\ &=& \left(\int_{-L-j \omega}^{L-j \omega} \frac{1}{\sqrt{2\pi}} \mathrm{e}^{-\frac{z^2}{2} } \mathrm{d} z - \int_{-L}^{L} \frac{1}{\sqrt{2\pi}} \mathrm{e}^{-\frac{z^2}{2} } \mathrm{d} z\right) + \int_{-L}^{L} \frac{1}{\sqrt{2\pi}} \mathrm{e}^{-\frac{z^2}{2} } \mathrm{d} z \\ &=& \left(\int_{-L-j \omega}^{L-j \omega} \frac{1}{\sqrt{2\pi}} \mathrm{e}^{-\frac{z^2}{2} } \mathrm{d} z - \int_{-L}^{L} \frac{1}{\sqrt{2\pi}} \mathrm{e}^{-\frac{z^2}{2} } \mathrm{d} z\right) + \mathcal{I}_L \end{eqnarray} $$ Here we denoted $\mathcal{I}_L = \int_{-L}^{L} \frac{1}{\sqrt{2\pi}} \mathrm{e}^{-\frac{z^2}{2} } \mathrm{d} z$. Notice that $\lim\limits_{L \to \infty} \mathcal{I}_L = 1$. 
Consider a complex contour $\mathcal{C}$, $ -L \to L \to L - j \omega \to -L - j \omega \to -L$: $$ \begin{eqnarray} I_L - \mathcal{I}_L &=&\left(\int_{-L-j \omega}^{L-j \omega} \frac{1}{\sqrt{2\pi}} \mathrm{e}^{-\frac{z^2}{2} } \mathrm{d} z - \int_{-L}^{L} \frac{1}{\sqrt{2\pi}} \mathrm{e}^{-\frac{z^2}{2} } \mathrm{d} z\right) \\ &=& -\int_\mathcal{C} \frac{1}{\sqrt{2\pi}} \mathrm{e}^{-\frac{z^2}{2} } \mathrm{d} z - \int_{L}^{L-j \omega} \frac{1}{\sqrt{2\pi}} \mathrm{e}^{-\frac{z^2}{2} } \mathrm{d} z - \int_{-L-j \omega}^{-L} \frac{1}{\sqrt{2\pi}} \mathrm{e}^{-\frac{z^2}{2} } \mathrm{d} z \end{eqnarray} $$ The integral over $\mathcal{C}$ is zero, since the integrand is holomorphic. Therefore: $$ I-1 = \lim_{L \to \infty} (I_L-\mathcal{I}_L) = \lim_{L \to \infty} \left( - \int_{L}^{L-j \omega} \frac{1}{\sqrt{2\pi}} \mathrm{e}^{-\frac{z^2}{2} } \mathrm{d} z - \int_{-L-j \omega}^{-L} \frac{1}{\sqrt{2\pi}} \mathrm{e}^{-\frac{z^2}{2} } \mathrm{d} z \right) $$ And the limit above is easily seen to vanish: on the vertical segments, $z = \pm L - j \omega t$ with $t \in [0,1]$, and $$ \lim_{L\to\infty} \left| \mathrm{e}^{-\frac{(\pm L - j \omega t)^2}{2}} \right| = \lim_{L\to\infty} \mathrm{e}^{-\frac{L^2 - \omega^2 t^2}{2}} =0 $$ uniformly in $t$.
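Both derivations can be sanity-checked numerically via the real-integral form $\mathcal{F}(\omega) = 2\int_0^\infty \frac{1}{\sqrt{2\pi}}\mathrm{e}^{-x^2/2}\cos(\omega x)\,\mathrm{d}x$; a minimal sketch (the helper name, cutoff $L=12$, and grid size are my choices):

```python
import math

def cf(w, L=12.0, m=200000):
    """Midpoint-rule approximation of 2 * int_0^inf phi(x) cos(wx) dx,
    phi the standard normal density; the tail beyond L = 12 is negligible."""
    h = L / m
    total = 0.0
    for i in range(m):
        x = (i + 0.5) * h
        total += math.exp(-x * x / 2) * math.cos(w * x) * h
    return 2 * total / math.sqrt(2 * math.pi)

for w in (0.0, 0.5, 1.0, 2.0):
    print(w, cf(w), math.exp(-w * w / 2))   # the two columns agree
```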
Do there exist true statements with only one proof?
As saulspatz stated, this is a rather vague question if you don't pin down a proof system. Once you do pin down a (formal) proof system, then there are still questions about what "effectively the same" means, and there are multiple answers to that question. At that point though, the problem falls into the field of proof theory. For many systems, especially constructive/intuitionistic systems, we have a good notion of "normal form" for proofs which means one way of defining "effectively the same" is when two proofs can be reduced to the same normal form. We talk about "normal proofs" in natural deduction systems and "cut-free proofs" in sequent calculi. One particular form of the Curry-Howard correspondence links terms in typed lambda calculi to proofs in various logics. The archetypal example being the simply typed lambda calculus and intuitionistic propositional logic. From this perspective, your question is equivalent (using the same notion of "effectively the same" as the previous paragraph) to asking if there are types that have only one normal form inhabitant. The answer to this is definitely "yes" though it can depend on which logic you're using and some other details. A widely used example is the type $\forall A.A\to A$ in the polymorphic lambda calculus has only one (normal form) inhabitant, namely the polymorphic identity function (formally: $\Lambda\tau.\lambda x\!:\!\tau.x$). The polymorphic lambda calculus corresponds to intuitionistic second-order propositional (not predicate) logic. For contrast, $\forall A.A\to(A\to A)$ has two distinct normal form proofs corresponding to the terms $\Lambda\tau.\lambda x\!:\!\tau.\lambda y\!:\!\tau.x$ and $\Lambda\tau.\lambda x\!:\!\tau.\lambda y\!:\!\tau.y$. This notion of "effectively the same" is somewhat reasonable but probably not perfect. On the one hand, it's not so coarse as to make all provable propositions equivalent, and the proof reductions it's based on generally do look "bureaucratic". 
On the other hand, there can be an exponential increase in the size of a proof when reducing to its normal form, and relatively minor details can still make proofs distinct even if they use the same "ideas". It's unlikely there is a (clear) formal analogue to the intuitive, informal notion of two proofs being "effectively the same". This likely depends on aspects of human psychology. Different intuitive perspectives can easily lead to the same proofs. To be very reductionistic about it, intuition is usually a guide to proof search (especially when you are working in a fixed formal system).
Discriminant of a quadratic form on $V_1 \oplus V_2$
Hint If $(v_1, \ldots, v_k)$ is a basis of $V_1$ and $(w_1, \ldots, w_{\ell})$ is a basis of $V_2$, then $$((v_1, 0), \ldots, (v_k, 0), (0, w_1), \ldots, (0, w_{\ell}))$$ is a basis of $V_1 \oplus V_2$. How can you relate the respective Gram matrices $[q_1], [q_2], [q]$?
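A toy numerical illustration of where the hint leads (hedged: the $2\times 2$ Gram matrices `G1`, `G2` are made up, and the determinant routine is mine): in the combined basis the Gram matrix of $q$ is block diagonal, so its determinant is the product of the blocks' determinants.

```python
def det(M):
    """Determinant by Laplace expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

G1 = [[2, 1], [1, 3]]       # Gram matrix of q1 on V1, det = 5
G2 = [[1, 0], [0, -4]]      # Gram matrix of q2 on V2, det = -4
# Gram matrix of q on V1 + V2 in the combined basis: block diagonal
G = [[G1[i][j] if i < 2 and j < 2 else
      G2[i - 2][j - 2] if i >= 2 and j >= 2 else 0
      for j in range(4)] for i in range(4)]
print(det(G), det(G1) * det(G2))   # both -20
```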
Need help finding Green's function for $x^2y''-2xy'+2y=x\ln x$
$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ The Green's function derivative 'jump' at $\ds{x = x'}$ is $\ds{\color{#f00}{1 \over x'^{2}}}$ $\pars{~\mbox{instead of}\ \color{#f00}{1}}$. That yields $\ds{-\,{x \over x'^{2}} + {x^{2} \over x'^{3}}}$ when $\ds{x > x'}$. The result is given by $$ \int_{1}^{x}\pars{-\,{x \over x'^{2}} + {x^{2} \over x'^{3}}}x'\ln\pars{x'} \,\dd x' = \bbox[#ffe,15px,border:1px dotted navy]{\ds{% -x + x^{2} - x\ln\pars{x} - {1 \over 2}\,x\ln^{2}\pars{x}}} $$ Note that $\ds{\int_{x'^{-}}^{x'^{+}}x^{2}\,\partiald[2]{\mrm{G}\pars{x,x'}}{x}\,\dd x = \color{#f00}{x'^{2}}\bracks{\left.\partiald{\mrm{G}\pars{x,x'}}{x} \right\vert_{\ x\ =\ x'^{+}} - \left.\partiald{\mrm{G}\pars{x,x'}}{x} \right\vert_{\ x\ =\ x'^{-}}} = 1}$. Moreover, with your 'original Green's function', the 'right integration' is given by $$ \int_{1}^{x}\pars{-x + {x^{2} \over x'}}\,{\ln\pars{x'} \over x'}\,\dd x' = \bbox[#ffe,15px,border:1px dotted navy]{\ds{% -x + x^{2} - x\ln\pars{x} - {1 \over 2}\,x\ln^{2}\pars{x}}} $$ which is the right result.
Translation of uniformly continuous function again uniformly continuous?
Yes. Since $x\mapsto g^{-1}xg$ is continuous, if $U$ is as above there exists a neighborhood $U'$ of the identity such that $x(x')^{-1}\in U'$ implies $g^{-1}x(g^{-1}x')^{-1}=g^{-1}(x(x')^{-1})g\in U$.
Continuous partial derivatives implies continuous differential
The partial derivatives $\frac{\partial f}{\partial x^1},\dots,\frac{\partial f}{\partial x^m}$ are only assumed continuous at $x$. Nothing is said about the continuity of these derivatives at points of $U(x) \setminus \{x\}$. So $f$ need not be continuously differentiable.
Prove any ring homomorphism between $M_2(\mathbb{R})$ and $\mathbb{R}$ is trivial
A lot of ring properties aren't preserved under homomorphism - unless that homomorphism happens to be injective, or maybe surjective. So, to control that, we look at the map as a whole. The kernel of a ring homomorphism is a two-sided ideal. What are the two-sided ideals of the matrix ring? (Definition: a left ideal is closed under left multiplication by ring elements ($rx\in I$ if $x\in I$ and $r\in R$), a right ideal is closed under right multiplication by ring elements ($xr\in I$ if $x\in I$ and $r\in R$), and a two-sided ideal is closed under both)
Proof to motivate the need for measure theoretic probability
Let $x$ be an element of the probability space $X$. Let $p$ be the probability of the event $\{x\}$. Consider the $2^n$ pairwise disjoint sets of the form $A_1^{\varepsilon_1}\cap\dots\cap A_n^{\varepsilon_n}$ where for each $i\in\{1,\dots,n\}$, $\varepsilon_i\in\{0,1\}$, $A_i^0=A_i$ and $A_i^1=X\setminus A_i$. Each of these sets has probability $2^{-n}$, and $x$ is an element of one of them. It follows that $p\leq 2^{-n}$. Since this holds for every $n$, $p=0$. So every singleton in your space has probability $0$ and hence the space is not discrete.
Show that a limit does not exist
It's all good, but note that for this particular purpose you don't need to compute limits; all you need is to observe that $$\cases{\frac{1}{x-1}<-1&for $x\in (0,1), $\\\frac{1}{x-1}>1&for $x\in (1,2). $}$$
Advice on how to read a mathematical paper
Of course the answers to these questions are highly personal: people read papers in very different ways. I would say that probably the most important thing is to understand the structure of the arguments, and why they work. This may be a difficult task, because certain authors don't bother to make this clear to the reader, and instead just pile up technicalities on top of each other without explaining why. From a well-written and informative introduction, you should be able to loosely follow the logic behind the proofs, and you should get a very general idea of what is going on in the paper. This is of course not enough if you want to understand everything carefully. Personally, I tend to read the same sections several times, maybe on different days. Thinking about them in the meantime helps digest the math, and usually at some point things become clear. With regards to your specific questions: 1) Essentially, yes. Of course it is also your supervisor's responsibility to give you a paper (at the beginning of your PhD) that doesn't require billions of new definitions to understand. On your side, you should definitely have a solid grasp of the basic concepts. Then it's part of your work to bridge the gap. With time you'll learn that certain definitions are crucial, and others mostly cosmetic. 2) Usually no. Again, if a paper is well written then the arguments explained in it, plus the statements of the theorems quoted from other papers, should be enough to understand everything. Certain people hate to use other papers as "black boxes" though, and go through the quoted papers, even if quickly. 3) This is a delicate point. I think what your supervisor means is that there are certain big motivating questions behind math, and so also behind number theory.
One of them is certainly to understand number fields as deeply as possible, and knowing how many number fields there are with a given Galois group is definitely at least a good start. That doesn't necessarily mean that $D_4$ is a particularly interesting case. But very often in math, in order to tackle a big open problem, one cuts it into small, doable pieces, hoping then to recover the big picture. Certainly ordering number fields by discriminant is a natural thing to do, but in the $D_4$ case one also has Artin representations attached to them, so ordering by conductor is a reasonable thing to do as well. Nobody says that one way is better than the other; they are just two different instances of the same problem. There is no systematic way of "understanding the big picture". My very personal recommendation is to read different things, and go to many seminars, even if they seem a bit far from your research topic. Hearing the same thing from different points of view sometimes yields a better understanding of why mathematicians work on certain problems rather than others. 4) I think this question has no answer. Just spend as much time as you think is reasonable!
$p$ prime, $1 \le k \le p-2$ there exists $x \in \mathbb{Z} \ : \ x^k \neq 0,1 $ (mod p)
Consider the polynomial $x^k(x^k-1)$. You are asked to show that some $x$ is not a root modulo $p$. Since $x^k\equiv 0 \pmod p$ exactly when $x\equiv 0$, that's the same as asking whether some $x$ is not a root of $x(x^k-1)$. But this is a polynomial of degree $k+1$, so it has at most $k+1$ distinct roots modulo $p$. Since $k+1<p$, there is at least one residue mod $p$ that is not a root.
Probability that rolling X dice with Y sides and summing the highest Z values is above some value k
Let $(S_1, \ldots, S_n)$ be a random vector of i.i.d. outcomes of the die throws, and let $S_{k \colon n}$ denote the order statistics of that sample. The total score of the $n$-keep-$k$ scheme equals $T = \sum_{i=n-k+1}^n S_{i \colon n}$. Because the $S_i$ are positive discrete random variables, finding the probability generating function is the most promising technique: $$ \mathcal{P}_T(z) = \mathbb{E}\left(z^T\right) $$ Once the probability generating function is known, probabilities of possible outcomes can be read off as series coefficients: $$ \Pr(T=t) = [z^t] \mathcal{P}_T(z) $$ Moreover, the probabilities $\Pr(T \leqslant t)$ and $\Pr(T > t)$ can be read off as well: $$ \Pr(T \leqslant t) = \sum_{k=0}^{t} [z^k] \mathcal{P}_T(z) = [z^t] \frac{\mathcal{P}_T(z)}{1-z} $$ $$ \Pr(T > t) = 1 - \Pr(T \leqslant t) = [z^t] \frac{1 - \mathcal{P}_T(z)}{1-z} $$ For all of the cases of interest $\mathcal{P}_T(z)$ is a polynomial or rational function in $z$, but finding it requires knowledge of the joint distribution of the order statistics $\{S_{i\colon n}\}$. Using Mathematica, the distribution of the total score can be found as follows:

```mathematica
TotalHighestScoreDistribution[{x_, z_}, dist_] := Block[{v},
  TransformedDistribution[Total[Array[v, z]],
    Distributed[Array[v, z], OrderDistribution[{dist, x}, Range[x - z + 1, x]]]]]

TotalLowestScoreDistribution[{x_, z_}, dist_] := Block[{v},
  TransformedDistribution[Total[Array[v, z]],
    Distributed[Array[v, z], OrderDistribution[{dist, x}, Range[z]]]]]
```

Here I can provide the answer for explicit choices of the dice systems: for a $6$-keep-$3$ non-exploding 10-sided die, and similarly for the keep-lowest scores system (plots omitted). The exploding-die case I did by simulation, mostly because the probability generating function could not be computed in closed form.
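For small systems, the order-statistics machinery above can be cross-checked by brute-force enumeration. A minimal Python sketch (the function name `keep_highest_dist` is my own, not from the question):

```python
from itertools import product
from fractions import Fraction

def keep_highest_dist(x, y, z):
    """Exact distribution of the sum of the highest z of x fair y-sided dice,
    found by enumerating all y**x equally likely outcomes."""
    counts = {}
    for roll in product(range(1, y + 1), repeat=x):
        t = sum(sorted(roll)[-z:])  # keep the z highest values
        counts[t] = counts.get(t, 0) + 1
    total = y ** x
    return {t: Fraction(c, total) for t, c in sorted(counts.items())}

# 2-keep-1 with six-sided dice: the kept value is the maximum of two rolls,
# so Pr(T = 6) = 11/36.
dist = keep_highest_dist(2, 6, 1)
print(dist[6])  # 11/36
```

This is only feasible for small `y**x`, but it is a handy sanity check against the generating-function answer.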
How can I use the relationship between the Fibonacci numbers and the EA to fill squares?
I will address your first question. While I think that the question is poorly phrased, I think I know what they mean by "What squares could you use to fill up a 6765$\times$4181 rectangle". I will illustrate this on a scaled-down version of the problem. We have that $F_6=8$ and $F_5=5$. Let's fill up a $8\times5$ rectangle with squares. One can see that we can fill up the rectangle perfectly with squares whose sides' lengths are all Fibonacci numbers up to $F_5$. Why does this work for every pair of consecutive Fibonacci numbers? Imagine that you are starting from one corner (e.g. the top right one as in the image) of the rectangle by placing a $1\times1$ square in it. Then, along the smaller side of the rectangle, place another $1\times1$ square. The two squares form another rectangle. Now place another square ($2\times2$) next to the longer side of that rectangle. What results is yet another rectangle. Now just repeat the process until you use up all the Fibonacci numbers up to $F_5$. Notice that by adding new squares, we will always get a new rectangle. What I mean by this is that the resulting figure will never be "jagged". This follows from the way that Fibonacci numbers are defined. I think that this is what the question was going for, although it was not very well phrased, since there are many ways to fill up a rectangle with squares. EDIT: The second part of the question. As Jair Taylor suggested in a comment, the number $F_{2018}/F_{2017}$ is rational and therefore its decimal representation will either have a finite number of digits or be eventually periodic. This means that the decimal digits will contain repeating "chunks" of numbers. For example, in the decimal representation of the number $70/111$, the sequence "$630$" keeps repeating: $$70/111=0.\underbrace{630}\underbrace{630}\underbrace{630}...$$ Assuming that $\phi$ is the golden ratio, $\phi=\frac{1+\sqrt5}{2}$, its decimal representation will be unpredictable, because $\phi$ is an irrational number.
Looking at the pictures you provided, one can see that the first one is much more "chaotic" than the second one and one can hardly find any repeating patterns, whereas the second one obviously contains repeating patterns. This gives us good reason to conclude that the first image represents $\phi$ and the second one represents $F_{2018}/F_{2017}$.
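One can also check directly, with exact integer arithmetic, how close $F_{2018}/F_{2017}$ is to $\phi$; a quick stdlib sketch:

```python
def fib(n):
    """n-th Fibonacci number, with F_0 = 0, F_1 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Exact leading digits of the ratio via integer division; since
# F_2018/F_2017 differs from phi by roughly phi^(-2*2017), the first
# fifteen digits agree with phi = 1.6180339887498948...
leading = fib(2018) * 10**15 // fib(2017)
print(leading)  # 1618033988749894
```

So the two numbers agree to far more decimal places than either picture could display; the visible difference is precisely the eventual periodicity of the rational one.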
how to convert $dS$ into $dxdy$
General method: Parametrize the surface as $$ \vec x(s,t) = (x_1(s,t),x_2(s,t),x_3(s,t)).$$ The surface element is given by $$ \left| \frac{\partial \vec x}{\partial s} \times \frac{\partial \vec x}{\partial t}\right|dsdt$$ and you integrate over the parameters $(s,t).$ The $\times$ is a cross product and $|\cdot|$ is the length of the vector. Your question: Your surface is just a rectangle in the $x-y$ plane. $dS = dxdy.$ Connection between them: Let $\vec x(s,t) = (s,t,0)$ for $0&lt;s&lt;2$ and $0&lt;t&lt;3.$ Verify this is a correct parametrization of the surface. Plugging in the above formula you'll find the surface element is $dsdt$. Then just call $s$ and $t$ by their more natural names $x$ and $y.$
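As a numerical illustration of the general formula (my own example, not from the question): for the standard unit-sphere patch $\vec x(s,t)=(\cos s\sin t,\sin s\sin t,\cos t)$ the exact surface element is $\sin t\,ds\,dt$, and finite-difference partials recover it:

```python
from math import sin, cos

def x(s, t):
    # unit sphere: s = longitude, t = colatitude
    return (cos(s) * sin(t), sin(s) * sin(t), cos(t))

def surface_element(s, t, h=1e-6):
    # numerical partial derivatives dx/ds and dx/dt (central differences)
    xs = [(x(s + h, t)[i] - x(s - h, t)[i]) / (2 * h) for i in range(3)]
    xt = [(x(s, t + h)[i] - x(s, t - h)[i]) / (2 * h) for i in range(3)]
    # cross product and its length
    cross = (xs[1] * xt[2] - xs[2] * xt[1],
             xs[2] * xt[0] - xs[0] * xt[2],
             xs[0] * xt[1] - xs[1] * xt[0])
    return sum(c * c for c in cross) ** 0.5

print(surface_element(0.7, 1.1))  # ~ 0.8912, i.e. sin(1.1)
```

For the flat rectangle of the question the same computation gives $1$, confirming $dS=dx\,dy$.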
Can Symmetric group on $n$ letters act non-trivially on a set with $k$ elements where $k<n$?
An action of a group $G$ on a set with $k$ elements induces a homomorphism $\phi\colon G\to \Sigma_k$. If $\Sigma_n$ acts on a set with $k\lt n$ elements, the action induces a morphism $\Sigma_n\to\Sigma_k$. Because the map cannot be one-to-one from size considerations, the kernel is nontrivial. If you want the action to not be trivial, at least for $n\geq 5$ you need the kernel to be $A_n$. In that case, the action factors through $\Sigma_n/A_n\cong C_2$. We can do this with no fixed points if $k$ is even; if $k$ is odd, though, the action must have a fixed point. For example, we can let $\Sigma_5$ act on $\{1,2,3,4\}$ by letting odd permutations act as $(12)(34)$, and even permutations act trivially. But $S_5$ cannot act on $\{1,2,3\}$ without having at least one fixed point. For $n=4$ the action can also factor through $\Sigma_4/V$, where $V=\{e,(12)(34),(13)(24),(14)(23)\}$; this is a group of order $6$, isomorphic to $S_3$, and so $\Sigma_4$ can act on $\{1,2,3\}$ with no point being fixed by the action. For $n=3$, the only nontrivial normal subgroup is $A_3$, so it works as it does for $n\geq 5$. In summary: It cannot be done for $n=2$. For $n\gt 2$, $n\neq 4$, there are actions of $\Sigma_n$ on $\{1,2,\ldots,k\}$ with $k\lt n$, $k$ even, with no points fixed by the action: just let odd permutations act as a product of transpositions that permute all points. For $n\gt 2$, $n\neq 4$, any action of $\Sigma_n$ on $\{1,2,\ldots,k\}$ with $k\lt n$, $k$ odd, has a fixed point. For $n=4$, $\Sigma_4$ can act nontrivially on a set with two elements (let odd permutations exchange the two points, even permutations fix both points), and on a set with $3$ elements (acting as $S_3$ via $\Sigma_4/V$).
Tangential component of a vector field to a spherical surface.
Because this is in relation to a sphere, where $\hat R$ is the direction of the radius from the center of the sphere, $\hat R$ will always be perpendicular to the surface of the sphere. Hope this helps.
$G$ conservative iff counit components are extremal epi.
For the "if" direction, consider the diagram below: $$\require{AMScd} \begin{CD} F G Y @&gt;{\epsilon_Y}&gt;&gt; Y \\ @V{F G f}VV @VV{f}V \\ F G Y^\prime @&gt;&gt;{\epsilon_{Y^\prime}}&gt; Y^\prime \end{CD}$$ Since $G$ is known to be faithful (because the counit components are epimorphisms), it reflects monomorphisms, so $f : Y \to Y'$ is a monomorphism. But $F G f : F G Y \to F G Y'$ is an isomorphism and $\epsilon_{Y'} : F G Y' \to Y'$ is an extremal epimorphism, so $f : Y \to Y'$ must be an isomorphism. On the other hand, for the "only if" direction, observe that $G \epsilon_Y : G F G Y \to G Y$ is a split epimorphism, and $G$ preserves monomorphisms, so $G i$ must be an isomorphism; but $G$ is also conservative, so $i$ is an isomorphism.
Is there any meaning or basis-free description for coevaluation in FDVect?
For a finite-dimensional vector space $V$, the tensor product $V \otimes V^{\ast}$ is naturally isomorphic to $\text{End}(V)$. Its action on $V$ is given by the map $$(V \otimes V^{\ast}) \otimes V \cong V \otimes (V^{\ast} \otimes V) \to V$$ given by applying the evaluation map $V^{\ast} \otimes V \to k$. Now, $\text{End}(V)$ has a distinguished element in it, namely the unit $\text{id}_V \in \text{End}(V)$, and if you transport this map across the isomorphism above you get exactly the coevaluation map (keeping in mind the natural identification between vectors in $W$ and linear maps $k \to W$). This is just the statement that the "coevaluation tensor" $\sum v_i \otimes v_i^{\ast}$ satisfies $$ \left( \sum_i v_i \otimes v_i^{\ast} \right) \left( \sum_j c_j v_j \right) = \sum_{i, j} c_j v_i ( v_i^{\ast}(v_j)) = \sum_{i, j} c_j v_i \delta_{ij} = \sum_i c_i v_i$$ which is equivalent to the definition of the dual basis. This is also a simple explanation why the evaluation and coevaluation maps are sometimes called the counit and unit.
I don't understand how Kirchhoff's Theorem can be true
I think you've made a mistake computing the Laplacian matrix. I find $$\begin{bmatrix}2 & -1 &0 &-1 \\ -1 & 2 & -1 & 0 \\ 0 & -1 & 2 & -1 \\ -1 & 0 & -1 & 2 \end{bmatrix}$$ You can check that any cofactor has determinant $4$, which is indeed the number of spanning trees, as you can verify by inspection.
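For this particular graph (the $4$-cycle) both sides of the theorem are easy to verify by brute force; a small stdlib-Python check:

```python
from itertools import combinations

# 4-cycle on vertices 0..3
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

# Cofactor of the Laplacian: delete row/column 0, take the 3x3 determinant.
M = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
det = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
       - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
       + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

# Count spanning trees directly: 3-edge subsets that connect all 4 vertices.
def connects(subset):
    reach = {0}
    changed = True
    while changed:
        changed = False
        for u, v in subset:
            if (u in reach) != (v in reach):
                reach |= {u, v}
                changed = True
    return len(reach) == 4

trees = sum(connects(s) for s in combinations(edges, 3))
print(det, trees)  # 4 4
```

Both counts agree, as Kirchhoff's theorem predicts.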
Closed-form expression for this definite integral
$$ {\rm I}\left(z\right) \equiv \int_{0}^{\infty}\sqrt{\vphantom{\LARGE A^{A}}\frac{1}{2x} \left\lbrack% \frac{1}{\left(1 + x\right)^{2}} + \frac{z}{\left(1 + xz\right)^2} \right\rbrack\,}\ {\rm d}x\,, \qquad \begin{array}{|rclcl} \,\,{\rm I}\left(0\right) & = & {\rm I}\left(\infty\right) & = & {\sqrt{2\,} \over 2}\,\pi \\[1mm] \,\,{\rm I}\left(1\right) & = & \pi&& \end{array} $$ \begin{align} {\rm I}\left(z\right) &= {\sqrt{2} \over 2}\int_{0}^{\infty}\!\! {\sqrt{z\left(z + 1\right)x^{2} + 4zx + 1 + z\,} \over \left(x + 1\right)\left(1 + xz\right)}\, {{\rm d}x \over \sqrt{x\,}} \\[3mm]&= {\sqrt{2} \over 2}\left(z + 1 \over z\right)^{1/2}\int_{0}^{\infty} {\sqrt{x^{2} + 4\left(z + 1\right)^{-1}\,x + z^{-1}\,} \over \left(x + 1\right)\left(x + z^{-1}\right)} {{\rm d}x \over \sqrt{x\,}} \end{align} Equation $x^{2} + 4\left(z + 1\right)^{-1}\,x + z^{-1} = 0$ has the complex roots: $$ x_{\pm} = -\,{2 \over z + 1} \pm {\rm i}\,{\left\vert z -1\right\vert \over \left(z + 1\right)\,\sqrt{z\,}}\,, \qquad\mbox{Notice that}\quad x_{+}x_{-} = \left\vert x_{\pm}\right\vert^{2} = z^{-1} $$ Then, ${\rm I}\left(z\right) = \left(\sqrt{2}/2\right)\sqrt{a^{2} + 1\,}\ {\cal I}\left(a\right)$ where $a = z^{-1/2}$.
\begin{eqnarray*} {\cal I}\left(a\right) & \equiv & \int_{0}^{\infty} {\sqrt{x^{2} + 4a^{2}\left(a^{2} + 1\right)^{-1}\,x + a^{2}\,} \over \left(x + 1\right)\left(x + a^{2}\right)} {{\rm d}x \over \sqrt{x\,}} \\ x_{\pm} & = & {-2a \pm {\rm i}\,\left\vert\,a^{2} - 1\right\vert \over a^{2} + 1}\,a \end{eqnarray*} $$ {\rm I}\left(z\right) = {\sqrt{2\,} \over 2}\,\left(z + 1 \over z\right)^{1/2}{\cal I}\left(1 \over \sqrt{z\,}\right)\,, \qquad\qquad {\cal I}\left(a\right) = {\sqrt{2\,} \over \sqrt{a^{2} + 1\,}}\,{\rm I}\left(1 \over a^{2}\right) $$ $$ {\cal I}\left(a\right) = \int_{-\infty}^{\infty} {\sqrt{x^{4} + 4a^{2}\left(a^{2} + 1\right)^{-1}\,x^{2} + a^{2}\,} \over \left(x^{2} + 1\right)\left(x^{2} + a^{2}\right)} \,{\rm d}x $$ Integration over the complex plane is possible. You have to take into account that $$ x^{4} + 4a^{2}\left(a^{2} + 1\right)^{-1}\,x^{2} + a^{2} = \left(x - x_{-}^{1/2}\right)\left(x + x_{-}^{1/2}\right) \left(x - x_{+}^{1/2}\right)\left(x + x_{+}^{1/2}\right) $$ which introduces branch-cuts in the complex plane.
Choose the correct option.
$f(1,0,0,\cdots)=f(0,1,1,\cdots)$, so $f$ is not one-to-one. $\{(x_i):x_1=0\}\equiv \{(x_i):x_1 \neq 1\}$ is open and its image is $[0,\frac 1 2]$, which is not open. Hence $f$ is not an open map.
Clarification on a question involving ratios
You are correct. The key is indeed assuming integer values for $m$, $h$, and $p$, and working from the divisibility relations that come out of the ratios you are given: $4 \mid (11m) \implies 4 \mid m$ $7 \mid (13(m+p)) \implies 7 \mid (m+p)$ $11 \mid (4h) \implies 11 \mid h$ $13 \mid (7h) \implies 13 \mid h$.
Thinking more intuitively about complex analysis, in particular regarding Schwarz's Lemma
You'd like to say $|g(z)| \le 1$ on $|z|=1$, except that $g$ is only defined for $|z| < 1$, so you have to use $r$ slightly less than $1$, then take the limit as $r \to 1-$. Bigger $r$'s produce better bounds. Of course, $f(0)=0$ is needed for the conclusion of the Lemma to be true. There is a slightly more general version of the Lemma where you assume $|f(z)| \le 1$ for $|z| < 1$ and $f(c) = 0$ for some $c$ with $|c| < 1$, and conclude $|f(z)| \le \left| \frac{z-c}{1-\overline{c}z}\right|$ (this follows from the ordinary Lemma after composing $f$ with a fractional linear transformation). On the other hand, if $f(z)$ is never $0$ for $|z| < 1$, I don't know what kind of a bound you could hope for: e.g. $f$ could be constant.
How to solve $y' = \sqrt {x+y+1}$
Hints: Make the substitution: $$v = x + y$$ So, $$v' = 1+ y' \rightarrow y' = v'-1$$ Substitute into original DEQ and get: $$v' = \sqrt{v+1} + 1$$ Can you take it from here?
Co-ordinate of extremities of major axis
The line perpendicular to the directrix and passing through the focus, intersects the ellipse at the points you are looking for.
A positive integer a is self-invertible modulo p if and only if a ≡ ±1 (mod p).
It means that $a^2 \equiv 1 \pmod p$, i.e. $p \mid (a-1)(a+1)$. Since $p$ is prime, $p \mid a-1$ or $p \mid a+1$, which is exactly $a \equiv \pm 1 \pmod p$.
Some questions about the proof of the properties of the uniform equivalent.
Take two different discrete metrics on an infinite set, e.g. the usual metric on $\Bbb Z$ and the metric $\rho(x,y)=1$ if $x\ne y$.
If $(a_n)_n$ is a convergent sequence, then $\lim n(a_{n+1} -a_n)=?$
As already remarked by Clement C, the limit can be $0$ (see $a_n=1$), or it may not exist (see $a_{n+1} = a_{n} + \frac{(-1)^n}{n}$). I show that if the limit exists then it is $0$. Assume that it is positive (including $+\infty$); then there exist $L>0$ and $N>0$ such that for all $n\geq N$, $$a_{n+1} - a_n\geq \frac{L}{n}$$ which implies $$a_{n}=a_N+\sum_{k=N}^{n-1}(a_{k+1} - a_k)\geq a_N+L\sum_{k=N}^{n-1}\frac{1}{k}\implies \lim_{n\to \infty} a_n=+\infty$$ contradicting the fact that the sequence $(a_n)_n$ is convergent (to a finite limit). The case of a negative limit is symmetric.
Find upper and lower bounds to the function $f(n)=1\cdot3\cdot5\cdot\ldots\cdot(2n-1)$ where $n\in\Bbb N$
Using factorials, we can write $$ f(n) =\frac{(2n)!}{2^n\cdot n!}$$ and then using Stirling's $n!\approx n^ne^{-n}\sqrt{2\pi n}$ find $$\tag1 f(n)\approx \frac{(2n)^{2n}e^{-2n}\sqrt{4\pi n}}{2^nn^ne^{-n}\sqrt{2\pi n}}=\left(\frac{2n}{e}\right)^n\cdot \sqrt 2$$ so your $f(n)$ grows quite fast (not surprisingly). For an explicit upper/lower bound function, use explicit bounds for Stirling's approximation. Edit: It seems you essentially did that to obtain your bounds, with one additional term in the Stirling approximation. To find something better one might try to look at higher-order terms in Stirling, but admittedly the expressions turn out ugly (and probably not very helpful). Note that we can also observe that $f(n)={2n\choose n}\cdot \frac{n!}{2^n}$ and then from ${2n\choose n}\le (1+1)^{2n}=4^n$ find $$\tag2f(n)\le 2^nn! $$ as a simple upper bound (and it's only by a factor $\approx\sqrt {\pi n}$ bigger than the estimate in $(1)$).
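A quick numerical comparison of the exact value with estimate $(1)$ and bound $(2)$ (plain Python, my own variable names):

```python
from math import factorial, e, sqrt, pi

def f(n):
    # 1*3*5*...*(2n-1) = (2n)!/(2^n n!)
    return factorial(2 * n) // (2 ** n * factorial(n))

n = 10
approx = (2 * n / e) ** n * sqrt(2)   # Stirling estimate (1)
upper = 2 ** n * factorial(n)         # simple upper bound (2)

print(f(n), f(n) <= upper)            # 654729075 True
print(f(n) / approx)                  # just below 1
print(upper / f(n), sqrt(pi * n))     # bound (2) is off by about sqrt(pi*n)
```

The ratio in the second line tends to $1$ as $n$ grows, and the last line shows bound $(2)$ overshooting by roughly the predicted $\sqrt{\pi n}$ factor.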
Let $f_n(x) = \frac{x^2}{(1+x^2)^n}$. If $\sum_{n=0}^\infty f_n$ converges pointwise, find its limit.
Note that$$\sum_{n=0}^N\frac{x^2}{(1+x^2)^n}=x^2\frac{1-\frac1{(1+x^2)^{N+1}}}{1-\frac1{1+x^2}}=1+x^2-\frac1{(1+x^2)^N}.$$So, your sum converges pointwise to $$\begin{array}{rccc}s\colon&\mathbb{R}&\longrightarrow&\mathbb{R}\\&x&\mapsto&\begin{cases}1+x^2&\text{ if }x\neq0\\0&\text{ if }x=0.\end{cases}\end{array}$$Since $s$ is not a continuous function, the convergence is not uniform.
Linear operator image subspace chain
First direction, $Im(A^{p+2}) \subseteq Im(A^{p+1})$: If $v = A^{p+2}(w)$ for some $w \in V$, then, using $Im(A^{p+1})=Im(A^{p})$, for some $w' \in V$ $$ v = A(A^{p+1}(w)) = A(A^p(w')) = A^{p+1}(w') \in Im(A^{p+1})$$ Other direction, $Im(A^{p+1}) \subseteq Im(A^{p+2})$: If $ v = A^{p+1}(w)$ for some $w \in V$, then, since $A^p(w)\in Im(A^p)=Im(A^{p+1})$, for some $w' \in V$ $$v = A(A^p(w)) = A(A^{p+1}(w')) = A^{p+2}(w') \in Im(A^{p+2}) $$ Shorter version: the above is instructive, but there's a one-liner, using the hint that states $Im(A\circ B) = A(Im(B))$: $$ Im(A^{p+2}) = Im(A \circ A^{p+1}) = A(Im(A^{p+1})) = A(Im(A^p)) = Im(A^{p+1}) $$
Verify that a function is a solution to the 3-dimensional wave equation.
For a tempered distribution $\,f\in\mathcal{S}'$, its Fourier transform $\,\hat{f}=F[f]\,$ is a linear functional $\,\hat{f}\in\mathcal{S}'\,$ defined by the identity $$ \langle\hat{f},\varphi\rangle=\langle f,\hat{\varphi}\rangle\quad\forall\, \varphi\in\mathcal{S} $$ where $\,\hat{\varphi}\,$ designates the classical integral Fourier transform $$ \hat{\varphi}(\xi)=\!\int\limits_{\mathbb{R}^3}\!\varphi(x)e^{-ix\cdot\xi}dx $$ of a rapidly decreasing test function $\,\varphi\in\mathcal{S}(\mathbb{R}^3)$. Denote $\,\mathbb{R}_{+}=\{t\in\mathbb{R}\colon\;t>0\}$, and let $\,u\in C^1(\mathbb{R}_{+};\mathcal{S}')\,$ be a solution of Cauchy problem $(\ast\ast)$. Notice that the Fourier transform $\,\hat{u}\in C^1(\mathbb{R}_{+};\mathcal{S}')$ solves the Cauchy problem $$ \begin{cases} \hat{u}_{tt}+|\xi|^2\hat{u}=0, \quad t>0,\\ \hat{u}|_{t=0}=\hat{f},\quad \hat{u}_t|_{t=0}=\hat{g}, \end{cases} $$ whose solution is of the form $$ \hat{u}(\xi,t)=\hat{f}(\xi)\cos{(|\xi|t)}+ \hat{g}(\xi)\frac{\sin{(|\xi|t)}}{|\xi|}.\tag{1} $$ Consider a tempered distribution $$ \Phi_t=F^{-1}\Bigl[\frac{\sin{(|\xi|t)}}{|\xi|}\Bigr]\tag{2} $$ and notice that the Fourier transform $$ \hat{\Phi}_t(\xi)=\frac{\sin{(|\xi|t)}}{|\xi|}= \sum_{n=0}^{\infty}(-1)^n t^{2n+1}\frac{(\xi_1^2+\xi_2^2+\xi_3^2)^{n}}{(2n+1)!} $$ is an entire function in $\,\xi=(\xi_1,\xi_2,\xi_3)\,$ on $\,\mathbb{C}^3$. Hence, the distribution $\,\Phi_t\in\mathcal{S}'\,$ has compact support $\,{\rm supp\,}\Phi_t\Subset\mathbb{R}^3$. By the convolution theorem, representation $(1)$ immediately implies the required representation of solution $(\ast)$. Definition $(2)$ of the tempered distribution $\Phi_t$ implies that $$ \langle \Phi_t,h\rangle=\frac{1}{4\pi t}\langle\delta_{S_t},h\rangle= \frac{t}{4\pi} \int\limits_{S^2} h(t\omega)\, d\omega\quad \forall\,h\in \mathcal{S} $$ (for details see The issue of treating an inverse Fourier transform in terms of a tempered distribution).
Chromatic polynomial of non-tree graphs?
In my answer to the Chromatic polynomial of connected graph $ \leq x(x-1)^{n-1}$, I explain how it's not true for the polynomials (giving a counterexample). It's true if we restrict to non-negative integers, but this means we're counting $x$-colorings for $x \geq 0$ (and the polynomial is just a distraction). In this question, $\chi(G, x) < x(x−1)^{n−1}$ is not true when $G$ is a tree. After changing $<$ to $\leq$ and restricting to non-negative integers, yes, it's not difficult (an $x$-coloring of $G$ gives an $x$-coloring of a spanning tree). If we allow $G$ to be a disconnected graph, then it's not true even when $x$ is restricted to positive integers. An $n$-vertex graph with no edges has the chromatic polynomial $x^n$.
Combining an outcome of a score
You can do this using generating functions. This is analogous to the classic coin change problem. In your case, you want to look at the term for $x^{133}$ in the expansion of $$ f(x) = \frac{1}{(1-x^1)(1-x^2)(1-x^3)}.$$ We may rewrite $f$ as $$ f(x) = \frac{1/6}{(1-x)^3} + \frac{1/4}{(1-x)^2} + \frac{17/72}{1-x} + \frac{1/8}{1+x} + \frac{-j/9}{x-j} + \frac{-j^2/9}{x-j^2},$$ with $j^2 + j + 1 = 0$. By expanding each fraction $(a-x)^{-d} = a^{-d} (1-x/a)^{-d} = a^{-d} \sum \binom{d}{n} (x/a)^n$, we may obtain an exact formula for the coefficient $a_n$ of $x^n$ in $f$; namely $$ a_n = \frac{1}{6} (-1)^n\binom{-3}{n} + \frac{1}{4} (-1)^n\binom{-2}{n} + \frac{17}{72} + \frac{1}{8} (-1)^n + \frac{1}{9} (j^n + j^{2n}). $$ Using this, I find $a_{133} = 1541$ and $a_{75} = 507$.
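The coefficients can be cross-checked with a short dynamic program, the classical way to count coin-change combinations:

```python
# DP check of the coefficients of 1/((1-x)(1-x^2)(1-x^3)):
# ways[n] = number of ways to write n as a sum of 1s, 2s and 3s (order ignored).
N = 133
ways = [1] + [0] * N
for part in (1, 2, 3):
    for n in range(part, N + 1):
        ways[n] += ways[n - part]

print(ways[75], ways[133])  # 507 1541
```

Both values match the coefficients extracted from the partial-fraction expansion above.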
Is $\mathbb R$ terminal among Archimedean fields?
I defer to Proposition 12 (well... the second Proposition 12...) and Theorems 14 and 15 in this answer of mine. It is not hard to construct an argument that $\mathbb{R}$ is a final object in the category of Archimedean fields from these results. For example see the notes that Pete L. Clark links to on the same page.
A Basic Question About Sample Space
It entirely depends on whether the coins are considered different. Whether you should consider them different depends on the context though. For example, say one coin is blue and the other is red. Then your sample space consists of pairs $(B,R)$, where $B,R$ can represent a heads or tails value for the blue and red coins respectively. Then clearly there are four possibilities: $(H,H),(H,T),(T,H),(T,T)$. The important bit here being that $(H,T) \ne (T,H)$ - which would make sense, if the coins are considered different. If the coins are not considered different or distinct, then what can appear? Two heads, one of each, or two tails. You might write this sample space as $\{HH,TT,TH\}$ then. (Note that we're not using ordered pairs: indeed, the "ordered" in that term is very important in our earlier discussion.) Of course, bear in mind that this distinction also bears consequences on the probability. In the first case, each event in the sample space is equally likely, and in the second they are not! Introductory probability students can often make this mistake.
Meaning of constant $\{N_{i}\}$
As it turns out, I found the answer while looking for an example for Paul Sinclair. It does mean each species is at constant number. Sorry if the question is unclear, and thank you.
target hitting problem
Indeed, use Bayes' theorem. If $B$ means 'Bill hits the target', $G$ means 'Gates hits the target' and $T$ means 'the target is hit', you have $$P(G|T) = \frac{P(T|G)P(G)}{P(T)}$$ Note that $G \Rightarrow T$, hence $P(T|G)=1$, and you know $P(G)$. You only have to find $P(T)$. To find it, note that the probability that the target is hit is $1-P(\lnot T)$, where $P(\lnot T)$ is very easy to find. And it's done.
How many labeled rooted trees are there on 12 nodes where no node has exactly 4 children?
I do not think you are applying the principle of inclusion exclusion quite correctly. Let $E_i$ be the set of trees where vertex $i$ has exactly four children. According to the principle of inclusion exclusion, the number of rooted trees where no vertex has four children is $$ 12^{12-1}-\sum_{i=1}^{12} |E_i|+\sum_{1\le i<j\le 12}|E_i\cap E_j| $$ To count $|E_i|$, use Prufer codes. Rooted trees are in bijection with lists of length $n-1$ of labels of vertices, and the number of times each vertex appears equals the number of children of that vertex in the corresponding tree. Therefore, you want to count strings of length $12-1=11$ where the element $i$ appears exactly four times. The number of these is $\binom{11}411^{7}$; choose the locations of the $i$'s, then assign the other seven entries to be anything except $i$. Finally, to count $|E_i\cap E_j|$, both $i$ and $j$ must appear exactly four times, which can be done in $\binom{11}4\binom{7}410^3$ ways. Putting this all together, the answer is $$ 12^{11}-\binom{12}1\binom{11}411^7+\binom{12}2\binom{11}4\binom7410^3 $$
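The same inclusion-exclusion pattern can be sanity-checked by brute force on a smaller analogue (my own choice of parameters: $6$ nodes, forbidden child count $2$), enumerating all Prufer-style strings directly:

```python
from itertools import product
from math import comb

n, forbidden = 6, 2  # smaller analogue: 6 nodes, no node with exactly 2 children

# Brute force over all n^(n-1) Prufer-style strings: in the bijection with
# rooted trees, the multiplicity of a label equals that node's child count.
brute = sum(
    all(s.count(v) != forbidden for v in range(n))
    for s in product(range(n), repeat=n - 1)
)

# Inclusion-exclusion (at most two labels can appear exactly twice in 5 slots,
# so the sum terminates after two corrections and is exact here).
ie = (n ** (n - 1)
      - n * comb(n - 1, forbidden) * (n - 1) ** (n - 1 - forbidden)
      + comb(n, 2) * comb(n - 1, forbidden)
        * comb(n - 1 - forbidden, forbidden) * (n - 2) ** (n - 1 - 2 * forbidden))

print(brute, ie)  # 2076 2076
```

The agreement of the two counts is evidence that the same two-term inclusion-exclusion is the right shape for the $12$-node, four-children problem.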
Complex matrix eigenvalues
In general, no. Take, for instance,$$A=\begin{bmatrix}-1 & -1-i \\ 1-i & 1\end{bmatrix},$$whose eigenvalues are $\pm i$. The eigenvectors corresponding to the eigenvalue $i$ are the multiples of $(1,-1)$, whereas the eigenvectors corresponding to the eigenvalue $-i$ are the multiples of $(1,i)$.
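A direct check of these eigenpairs, using plain complex arithmetic:

```python
A = [[-1, -1 - 1j], [1 - 1j, 1]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

# (eigenvalue, eigenvector) pairs claimed above
pairs = [(1j, [1, -1]), (-1j, [1, 1j])]
residual = max(abs(matvec(A, v)[i] - lam * v[i])
               for lam, v in pairs for i in range(2))
print(residual)  # 0.0
```

Every entry of $Av - \lambda v$ vanishes exactly, confirming both eigenpairs.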
Point of intersection between two circles how do I get the point?
Now I see that you've changed the question since I posted this answer. The answer below is correct for the question as initially stated. The same technique will find the points of intersection to the later modified problem. If you just draw the picture you can see that they don't intersect. But one can also look at their two equations: $$ (x-1)^2 + (y-3)^2 = 1^2 = 1 \tag 1 $$ $$ (x-1)^2 + (y-2.5)^2 = 2^2 = 4 \tag 2 $$ Subtracting the left side of $(1)$ from the left side of $(2)$ and likewise on the right sides, we get $$ (y-2.5)^2 - (y-3)^2 = 4-1 = 3. $$ Expanding the two squares, we get $$ (y^2 - 5y + 6.25) - (y^2 - 6y + 9) = 3. $$ Then: $$ y - 2.75 = 3 \quad\text{so}\quad y = 5.75. $$ Plugging $5.75$ in place of $y$ in $(1)$ we get $$ (x-1)^2 + (5.75-3)^2 = 1 $$ or $$ (x-1)^2 = 1 - 2.75^2. $$ So a square equals a negative number. Hence there is no solution. The two circles do not intersect.
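The arithmetic of the elimination can be double-checked in a couple of lines:

```python
# (x-1)^2 + (y-3)^2 = 1,  (x-1)^2 + (y-2.5)^2 = 4
# Subtracting: (y-2.5)^2 - (y-3)^2 = 3  =>  y - 2.75 = 3  =>  y = 5.75
y = 5.75
rhs = 1 - (y - 3) ** 2   # the value that (x-1)^2 would have to take
print(rhs)               # -6.5625 : negative, so no real x exists
```

Since a real square cannot be negative, the circles indeed have no point of intersection.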
Showing $\lvert\mathscr{B}_X\rvert=2^{n^2}$.
Hint. A binary relation on $X$ can be represented by a Boolean $n \times n$ matrix. What is the number of Boolean $n \times n$ matrices?
password probability
You want words of 8 to 20 characters containing at least one capital and at least one numeral. Permutations are not the way to go, but the Principle of Inclusion and Exclusion is. There are $\sum\limits_{n=8}^{20} 62^n$ ways to form words of 8 to 20 characters, when each character is selected from $(26+26+10)$ options, with repetitions allowed. Simplify using the Geometric Series closed form. There are --how many?-- ways to form such words which contain no numerals. There are --how many?-- ways to form such words which contain no capitals. There are --how many?-- ways to form such words which contain neither capitals nor numerals. Now put it together using PIE, and then simplify.
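Filling in the blanks gives $52^n$ (no numerals), $36^n$ (no capitals) and $26^n$ (neither), so the PIE count is as below; the brute-force comparison on a tiny 4-character alphabet (my own toy analogue) confirms the inclusion-exclusion pattern:

```python
from itertools import product

# PIE count for the real problem: lengths 8..20 over a 62-character alphabet,
# at least one capital and at least one numeral.
total = sum(62 ** n - 52 ** n - 36 ** n + 26 ** n for n in range(8, 21))

# Sanity check of the same pattern on a tiny analogue:
# alphabet {a, b, A, 1}, lengths 1..3.
alphabet = ['a', 'b', 'A', '1']
brute = sum(
    any(c.isupper() for c in w) and any(c.isdigit() for c in w)
    for n in range(1, 4) for w in product(alphabet, repeat=n)
)
small = sum(4 ** n - 3 ** n - 3 ** n + 2 ** n for n in range(1, 4))
print(brute, small)  # 20 20
```

The exact enumeration and the PIE formula agree, so the same formula with the full alphabet and length range gives the answer.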
What does this double summation with mod evaluate to?
Note you have $$\begin{equation}\begin{aligned} \sum_{x}f(x,0) & = 2^{-1} - 2^{-2} + 2^{-3} - 2^{-4} + \ldots \\ & = \frac{1}{2}\sum_{i=0}^{\infty}\left(-\frac{1}{2}\right)^{i} \\ & = \frac{\frac{1}{2}}{1 - \left(-\frac{1}{2}\right)} \\ & = \frac{\left(\frac{1}{2}\right)}{\left(\frac{3}{2}\right)} \\ & = \frac{1}{3} \end{aligned}\end{equation}\tag{1}\label{eq1A}$$ $$\begin{equation}\begin{aligned} \sum_{x}f(x,1) & = -2^{-1} + 2^{-1} - 2^{-3} + 2^{-3} + \ldots \\ & = (-2^{-1} + 2^{-1}) + (-2^{-3} + 2^{-3}) + \ldots \\ & = 0 \end{aligned}\end{equation}\tag{2}\label{eq2A}$$ Note I was able to do the bracketing above due to the series being absolutely convergent (as the sum of the absolute values would be that of a geometric series with $r = \frac{1}{4}$, so its sum would be $\frac{4}{3}$), as explained in the Rearrangements and unconditional convergence section of Wikipedia's "absolute convergence" article. Thus, $$\begin{equation}\begin{aligned} \sum_{y}\sum_{x}f(x,y) & = \sum_{y}\left(\sum_{x}f(x,y)\right) \\ & = \sum_{x}f(x,0) + \sum_{x}f(x,1) \\ & = \frac{1}{3} \end{aligned}\end{equation}\tag{3}\label{eq3A}$$
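A quick exact-arithmetic check of $(1)$ with the stdlib `fractions` module:

```python
from fractions import Fraction

# Partial sums of 2^-1 - 2^-2 + 2^-3 - ... approach 1/3.
partial = Fraction(0)
for i in range(40):
    partial += Fraction((-1) ** i, 2 ** (i + 1))

error = abs(partial - Fraction(1, 3))
print(error == Fraction(1, 3 * 2 ** 40))  # True
```

The error after $N$ terms is exactly $\frac{1}{3\cdot 2^N}$, as the geometric-series formula predicts.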
Standard Deviation and Variance
Try putting the comparison in a ratio: $\dfrac{\text{standard deviation in minutes}}{\text{average time of ____ in minutes}},\;$ in terms of golf, and then in terms of cabinet meetings. Then simplify and compare the two ratios. Which is larger than the other? The larger of the two will correspond to which is the most variable: golf minutes vs. cabinet minutes.
Direct proof that Pr[2 immediately follows 1] in a random permutation is 1/n
You need to ensure the last element in the $n$-element permutation is not $1$, since that would mean it has no immediate successor. The chance of this occurring is $(n-1)/n$. In a scenario for which $1$ appears among the first $n-1$ entries, there are still $n-1$ elements left that could follow it, viz., $2, \ldots, n$. The probability in a given scenario that $2$ is the immediate successor, then, is $1/(n-1)$. Then the probability in question is $\frac{n-1}{n} \cdot \frac{1}{n-1} = \frac{1}{n}$ as desired. QED
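For a small $n$ the count can be verified exhaustively:

```python
from itertools import permutations
from math import factorial

n = 5
# Count permutations of 1..n in which 2 immediately follows 1.
hits = sum(
    any(p[i] == 1 and p[i + 1] == 2 for i in range(n - 1))
    for p in permutations(range(1, n + 1))
)
print(hits, factorial(n) // n)  # 24 24
```

Gluing "$1,2$" into a single block leaves $(n-1)!$ permutations, so the probability is $(n-1)!/n! = 1/n$, matching the enumeration.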
Line Integral to Find Work on Slope (Without Explicit Use of Vector Calculus Format)
Note that the integral in the quote has $s$ as parameter and integration variable, not $x$. Written in really long form this would be $$\int_0^S R(\vec P(s))\cos\epsilon(s)\,ds$$ where $\vec P(s)=(x(s),y(s),z(s))$ and $R(\vec P(s))=\|\vec R(\vec P(s))\|$ and $$\cos\epsilon(s)=\frac{\langle \vec R(\vec P(s)), \dot{\vec P}(s)\rangle}{\|\vec R(\vec P(s))\|\,\|\dot{\vec P}(s)\|}$$ and since it is assumed that $s$ corresponds to the path length, $\|\dot{\vec P}(s)\|=1$. Thus another form to write the line integral is $$\int_\Gamma \langle \vec R(\vec P), d\vec P\rangle$$ So you see that there is no area directly involved, it is all more complicated. Think about the path $\Gamma$ consisting of many small straight segments $[\vec P_{k-1},\vec P_k]$, $k=1,...,N$, and that the force $\vec R$ is nearly constant along each segment. Then you can compute the work along each small segment without integration, and summing up all small segments gives an approximation of the integral formula, $$\sum_{k=1}^N \langle \vec R(\vec P_k), \vec P_k-\vec P_{k-1}\rangle.$$
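The segment sum at the end can be illustrated numerically; a minimal sketch with a constant force field and a straight ramp (the numbers are my own, not from the question):

```python
# Segment-sum approximation of the line integral for a constant force field
# along a straight ramp from (0,0,0) to (3,0,4).
F = (0.0, 0.0, -9.8)          # constant force, e.g. gravity per unit mass
P0, P1 = (0, 0, 0), (3, 0, 4)
N = 1000

work = 0.0
for k in range(1, N + 1):
    prev = tuple(P0[i] + (k - 1) / N * (P1[i] - P0[i]) for i in range(3))
    curr = tuple(P0[i] + k / N * (P1[i] - P0[i]) for i in range(3))
    # inner product <F, P_k - P_{k-1}> for this small segment
    work += sum(F[i] * (curr[i] - prev[i]) for i in range(3))

print(work)  # ~ -39.2, i.e. F_z * (z1 - z0), exact for a constant field
```

For a constant field the sum telescopes to $\langle \vec F, \vec P_1 - \vec P_0\rangle$; for a varying field the same loop converges to the line integral as $N$ grows.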
If $a$ is a positive integer and $p$ is prime, then $\log_{p}(a)$ is rational if and only if $a$ is a power of $p$.
This is a reasonable outline to start with. The major problem with what you've written is that you assume that $\log_p a$ is rational and then assign an integer value to it. You should instead say that there are relatively prime integers $x,y\in\mathbb Z$ such that $\log_p a=\frac xy$ and then show that it must be that $y=1$. Your strategy of thinking about the prime factorization of $a$ is a good idea that will help here. You also need to note that you are asked to prove an equivalence, so you must also prove the converse of the statement you proved. This is not difficult, but it is essential in "if and only if" proofs.
$\sigma_p(f(A))=f(\sigma_p(A))$?
Let $f$ be a constant polynomial. Then $f(A) = c I$ for some scalar $c$, so $\sigma_p(f(A)) = \{c\}$. But $\sigma_p(A)$ might be empty, in which case $f(\sigma_p(A))=\emptyset\ne\{c\}$. These are the only counterexamples. If $\lambda \in \sigma_p(A)$, there is nonzero $v \in H$ such that $A v = \lambda v$, and then $f(A) v = f(\lambda) v$, so $f(\lambda) \in \sigma_p(f(A))$. Thus $f(\sigma_p(A)) \subseteq \sigma_p(f(A))$. On the other hand, suppose $\lambda \in \sigma_p(f(A))$. Thus there is nonzero $v \in H$ such that $f(A) v = \lambda v$. If $f$ has degree $d > 0$, the polynomial $f(z) - \lambda$ can be factored over the complex numbers as $$f(z) - \lambda = \prod_{j=1}^d (z - \alpha_j)$$ so that $$ 0 = (f(A) - \lambda) v = \prod_{j=1}^d (A - \alpha_j I) v $$ For some $k$, $1 \le k \le d$, we must have $\prod_{j=k}^d (A - \alpha_j I)v = 0$ but $\prod_{j=k+1}^d (A - \alpha_j I) v \ne 0$, i.e. with $w = \prod_{j=k+1}^d (A - \alpha_j I) v$ (which is just $v$ in the case $k=d$) we have $w\ne 0$ and $(A - \alpha_k I) w = 0$, so $\alpha_k \in \sigma_p(A)$, and $f(\alpha_k) = \lambda$. So $\sigma_p(f(A)) \subseteq f(\sigma_p(A))$.
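In finite dimensions the two inclusions can be checked directly. A small numerical sketch ($2\times 2$ matrices only, assuming real eigenvalues; the matrix and the polynomial $f(z)=z^2+1$ are made-up illustrative choices):

```python
import math

def eig2(m):
    """Eigenvalues of a 2x2 matrix via its characteristic polynomial
    (assumes real eigenvalues)."""
    (a, b), (c, d) = m
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)
    return sorted(((tr - disc) / 2, (tr + disc) / 2))

def matmul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2.0, 1.0], [0.0, 3.0]]  # eigenvalues {2, 3}
sq = matmul(A, A)
f_A = [[sq[i][j] + (1.0 if i == j else 0.0) for j in range(2)]
       for i in range(2)]     # f(A) = A^2 + I

assert eig2(A) == [2.0, 3.0]
assert eig2(f_A) == [t * t + 1 for t in eig2(A)]  # spectral mapping: {5, 10}
```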
Inequality on a general convex normed space
There are one or two things you have to check before the following proof. Since you don't assume $C$ to be closed, what happens if $x_0\in\overline{C}$? (What is a closed ball of radius $0$?) This is a special case which should be dealt with separately. Edit: (A little note on balls of radius $0$.) Let $x\in X$, and let $r>0$. Denote the open ball of centre $x$ and radius $r$ by $B(x,r)$, and the corresponding closed ball by $\bar B(x,r)$. Then, since $X$ is a normed space, the closure $\overline{B(x,r)}$ is equal to $\bar B(x,r)$. There are two candidates for the closed ball at $x$ of radius $0$, namely $\{x\}$ and $\emptyset$. Now, certainly, the open ball at $x$ of radius $0$ should be empty; there are no elements $y$ of $X$ satisfying $\|y-x\|<0$. The closure of $\emptyset$ is $\emptyset$. On the other hand, there is exactly one element $y\in X$ which satisfies $\|y-x\|\leq 0$, namely $y=x$. In terms of this question, it makes no difference which is used, since $\{x_0\}\cap C=\emptyset\cap C=\emptyset$ if $x_0\notin C$. By translation and scaling, we may assume that $x_0=0$ and $r=1$. Set $B=\{y\in X:\|y\|\leq 1\}$ (the closed unit ball of $X$). Now $E:=B\cap\overline{C}$ is a closed convex set in $X$. (If $E$ is empty then there is nothing to prove, so assume not.) Let $x_1,x_2\in E$. Assume, towards a contradiction, that $x_1\neq x_2$. Then $y:=(x_1+x_2)/2\in E$ (why?). Now, we have $\|x_1\|=\|x_2\|=1$ (why?), and so, by strict convexity, $\|y\|<1$. This is impossible, so we must have $x_1=x_2$. This shows that $E$ can contain at most one point. The result you seek follows from this. I've deliberately left out one or two details to be filled in. You should also justify that it is indeed OK to translate and scale to the origin; this is a common trick in normed space theory, but it should be justified. Interestingly enough, strict convexity is not enough for the intersection to be non-empty; see Least norm in convex set in Banach space.
How to flip one point of a triangle
Let $$A=(x_1,y_1)$$ $$B=(x_2,y_2)$$ $$C=(x_3,y_3)$$ and call the point after flipping $$A'=(x,y).$$ Then $$\vec{BC}=(x_2-x_3,y_2-y_3)$$ $$\vec{AA'}=(x_1-x,y_1-y)$$ So your first equation becomes $$\vec{BC}\cdot \vec{AA'}=0.$$ Your second equation becomes $$|AC|=|A'C|,$$ because, since you are taking the mirror image, the side length must stay the same. Solving both equations together gives the values of $x,y$.
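Instead of solving the two equations symbolically, one can also compute $A'$ directly as the reflection of $A$ across the line through $B$ and $C$; a sketch (the coordinates are illustrative, and the function picks the solution $A'\ne A$):

```python
def reflect_across_line(A, B, C):
    """Reflect point A across the line through B and C, i.e. the solution
    A' != A of the two equations BC . AA' = 0 and |AC| = |A'C|."""
    ax, ay = A
    bx, by = B
    cx, cy = C
    dx, dy = cx - bx, cy - by            # direction of BC
    t = ((ax - bx) * dx + (ay - by) * dy) / (dx * dx + dy * dy)
    fx, fy = bx + t * dx, by + t * dy    # foot of the perpendicular from A
    return (2 * fx - ax, 2 * fy - ay)

# Example: BC is the x-axis, so flipping A = (0, 2) gives (0, -2).
A, B, C = (0.0, 2.0), (0.0, 0.0), (1.0, 0.0)
flipped = reflect_across_line(A, B, C)   # (0.0, -2.0)
```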
Proving the Unit Sphere without the North Pole is Homeomorphic to the Plane
Let $\Phi(x,y,z)=(\frac{x}{1-z},\frac{y}{1-z})$ $$\|\Phi(x,y,z)-\Phi(x',y',z')\|^2=\frac{x^2}{(1-z)^2}+\frac{x'^2}{(1-z')^2}-2\frac{xx'}{(1-z)(1-z')}$$ $$+\frac{y^2}{(1-z)^2}+\frac{y'^2}{(1-z')^2}-2\frac{yy'}{(1-z)(1-z')}$$ $$=\frac{(x^2+y^2)}{(1-z)^2}+\frac{(x'^2+y'^2)}{(1-z')^2}-2\frac{xx'+yy'}{(1-z)(1-z')}=\frac{1+z}{1-z}+\frac{1+z'}{1-z'}-2\frac{xx'+yy'}{(1-z)(1-z')}$$ $$=\frac{2-2zz'-2(xx'+yy')}{(1-z)(1-z')}=\frac{[x(x-x')+y(y-y')+z(z-z')]+[x'(x'-x)+y'(y'-y)+z'(z'-z)]}{(1-z)(1-z')}$$ $$=\frac{(x-x')^2+(y-y')^2+(z-z')^2}{(1-z)(1-z')}$$ So we have continuity as long as we stay away from $N=(0,0,1)$. However, clearly as we approach $N$, $\|(\frac{x}{1-z},\frac{y}{1-z})\|\to\infty$ (the numerators stay bounded while the denominators go to zero), and this is the whole point. It is called a pole for good reason.
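A quick numerical sketch of the round trip, using the standard inverse formula for stereographic projection (the inverse is not derived in the answer above, so take it as an assumption here):

```python
import random

def phi(x, y, z):
    """Stereographic projection from the north pole N = (0,0,1)."""
    return (x / (1 - z), y / (1 - z))

def phi_inv(u, v):
    """Standard inverse: plane -> unit sphere minus N."""
    s = u * u + v * v
    return (2 * u / (s + 1), 2 * v / (s + 1), (s - 1) / (s + 1))

random.seed(0)
for _ in range(1000):
    # random point in the plane, mapped onto the sphere and back
    u, v = random.uniform(-5, 5), random.uniform(-5, 5)
    x, y, z = phi_inv(u, v)
    assert abs(x * x + y * y + z * z - 1) < 1e-12      # lands on the sphere
    u2, v2 = phi(x, y, z)
    assert abs(u - u2) < 1e-9 and abs(v - v2) < 1e-9   # round trip
```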
Proving that the function $\frac{1}{q^2}$ is essential to Dirichlet's Approximation Theorem
The main idea is just to note that Liouville's lemma tells you that you can only get so close to an irrational algebraic number. So if you could improve Dirichlet's theorem, you would eventually end up violating the lemma. More rigorously, suppose $f(q)$ is a faster decaying function than $\frac1{q^2}$, so that $\lim_{q\to\infty}q^2f(q)=0$, and suppose for the sake of contradiction that we can replace $\frac1{q^2}$ with $f(q)$ in Dirichlet's theorem. Now let $\alpha$ be an irrational algebraic number of degree two, as hinted. Liouville's lemma tells us that there exists some $M$ so that $\left\lvert\alpha-\frac pq\right\rvert>\frac1{Mq^2}$ for every $\frac pq$. By hypothesis, we know, however, that for infinitely many $\frac pq$, we must have $\left\lvert\alpha-\frac pq\right\rvert<f(q)$. In particular, it follows that there are infinitely many fractions $\frac pq$ such that $$1=\frac{\left|\alpha-\frac pq\right|}{\left\lvert\alpha-\frac pq\right\rvert}<\frac{f(q)}{\frac1{Mq^2}}=Mq^2f(q).$$ But because $q^2f(q)\to0$, we know that there exists some $Q\in\mathbb N$ so that for all $q>Q$ we have $|q^2f(q)|<\frac1M$. However, this would mean that $q\le Q$ for each of the infinitely many fractions $\frac pq$ with $\left|\alpha-\frac pq\right|<f(q)$. But since $f(q)$ is finite, there are only finitely many $p$ for any given $q$ which satisfy this. And since we just said that $q\in\{1,2,\dots,Q\}$, it follows that there are only finitely many pairs $(p,q)$ which satisfy that inequality. Obviously, this is impossible, so the $\frac1{q^2}$ in Dirichlet's theorem is the best possible.
differentials that I can't solve correctly
Use the integrating factor method. See the answer to your earlier question Differential Equation $ty'= 3t^2-y$ solution incorrect: $$\frac{dy}{dx}+P(x)y=Q(x) \Rightarrow I=e^{\int P(x)dx}$$ Then $$yI=\int IQ\,dx +c$$
Complex parametrization of a line
Yes, $$ L(t)= P+tV = (a+ib)+ t(v+iw)$$ is a parametrization of a straight line passing through $P=a+ib$ at $t=0$ with direction vector $V=v+iw$.
Trigonometry identities sum and substraction of cosine
Using the angle sum formula for $\cos$: $$\frac{\cos(a-b)}{\cos(a+b)}= \frac{\cos a \cos b + \sin a \sin b}{\cos a \cos b - \sin a \sin b}$$ Hint: Now all you need to do is to divide the numerator and denominator by $(\cos a\cos b)...$
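A quick numerical spot-check of the resulting identity $\frac{\cos(a-b)}{\cos(a+b)}=\frac{1+\tan a\tan b}{1-\tan a\tan b}$ (the values of $a,b$ are arbitrary):

```python
import math

def lhs(a, b):
    return math.cos(a - b) / math.cos(a + b)

def rhs(a, b):
    # after dividing numerator and denominator by cos(a)cos(b)
    return (1 + math.tan(a) * math.tan(b)) / (1 - math.tan(a) * math.tan(b))

for a, b in [(0.3, 0.5), (1.0, -0.2), (0.7, 0.7)]:
    assert abs(lhs(a, b) - rhs(a, b)) < 1e-9
```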
Does every null set have a superset which is an $F_{\sigma}$ null set?
The answer, perhaps surprisingly, is no. You write null sets I had ever seen have been constructed from finite or countable sets or Cantor set But that doesn't exhaust the full variety of the null sets at all. There are quite a lot of very strange null sets out there, and one of the most important examples (and a counterexample to many seemingly-plausible claims) is the existence of a comeager null set - that is, a null set whose complement is the union of countably many nowhere-dense sets. At first glance such a thing may seem impossible, but they do exist - e.g. take the set of all non-absolutely-normal numbers. (See e.g. the discussion at this MO question. Basically, category and measure are completely orthogonal, although they're both "notions of size," and the interplay between the two (and other notions of size) gives rise to a lot of interesting analysis, topology, and descriptive set theory.) By the Baire category theorem, comeager null sets can't be covered "efficiently" by $F_\sigma$ sets. Specifically, BCT implies that (in $\mathbb{R}$) meager $G_\delta$ sets are nowhere dense, so comeager $F_\sigma$ sets contain intervals and hence aren't null. So any comeager null $A$ gives a counterexample to your guess.
Is $f^{-1} (\alpha) = \bigcap_{m=0}^\infty \bigcup_{n=m+1}^\infty f_n^{-1} (\alpha)$ correct?
False. Take $f_n(x)=\frac 1 n$ for all $n$, $f(x)=0$ and $\alpha =0$. Then $f^{-1}(0)=\mathbb R$, while $f_n^{-1}(0)=\emptyset$ for every $n$, so the right-hand side is empty.
How can I determine the end behavior of a polynomial based on Taylor series?
If you truncate a Taylor series to a polynomial, you get something that's a good approximation of the function ... if you're close enough. For larger $x$, the fact that it's just a polynomial takes over, and it separates from the function completely to do its own thing. Since it's a polynomial, that thing is to go to $\pm\infty$ as $x\to\infty$ or $x\to -\infty$. Which sign? That depends on the sign of the coefficient of the largest-degree term and whether that degree is even or odd.
Equivalence of definition of product in a category
Just write down everything carefully... There's no funny business going on. The map $\phi_B$ is surjective: this means that for every element $(\beta_i) \in \prod_i [B,A_i]$, i.e. for every collection of morphisms $\beta_i : B \to A_i$, there exists some $\beta : B \to A$ such that $\phi_B(\beta) = (\beta_i)$. In other words, there exists a map $\beta : B \to A$ such that $p_i \circ \beta = \beta_i$. This is the first part of the universal property of a product. And the map $\phi_B$ is also injective, meaning that if $\beta, \beta' : B \to A$ are such that $p_i \circ \beta = p_i \circ \beta'$ for all $i$, then $\beta = \beta'$. If you combine the two, you get that for every collection of morphisms $(\beta_i : B \to A_i)_i$, there exists (part 1) a unique (part 2) morphism $\beta : B \to A$ such that $p_i \circ \beta = \beta_i$. This is exactly the definition of $(p_i : A \to A_i)_i$ being the product of the collection $(A_i)_i$.
Is $a_{0} + a_{1}y_{1} + a_{2}y_{2} + ... + a_{n}y_{n}$ a linear combination of vectors $y_{1},..., y_{n}$?
No, it isn't. In fact, it doesn't make sense in general, since $1$, appearing in the first term as $a_0\cdot 1$, is not supposed to be a vector of your vector space. For instance, if your vector space is $\mathbb{R}^2$: can you add $1$ to $(5,-3)$?
example of knot diagram colored by dihedral quandle of non-prime order, if any
Let $D(n)$ denote the dihedral quandle of order $n$. Then for any $m$ and $n$, the quandle $D(mn)$ contains copies of both $D(m)$ and $D(n)$. (Indeed, $D(mn)$ is the union of $m$ copies of $D(n)$, as well as $n$ copies of $D(m)$.) It follows that any coloring of a knot by either $D(m)$ or $D(n)$ also gives a coloring of the knot by $D(mn)$. By the way, note that dihedral quandles of even order are disconnected. In particular, every coloring of a knot by a dihedral quandle of order $2n$ is really a coloring by one of its two dihedral subquandles of order $n$. Of course, this doesn't answer the question of whether there are any onto colorings of knots by dihedral quandles of composite order. For example, is it possible to color a knot with $D(9)$ so that all nine colors are used? One can also ask whether there are any essential colorings of knots by $D(9)$, i.e. colorings in which the colors used do not all lie in some proper subquandle.
On the number of fixed points of the differential equation $x' = x^3 +ax - b$ depending on $(a,b)$
Hint: $x'=0=x^3+ax-b$ is a cubic equation, which can have one or three real solutions. You can even calculate them with Cardano's formula. To show that there is always at least one real solution, look at $p(x)=x^3+ax-b$ as $|x|\to \infty$ and use the intermediate value theorem. If $p(x)$ has a real root $x_1$, then we can factor the polynomial as $p(x)=(x-x_1)(x^2+\alpha x+ \beta)$. The quadratic factor again has real coefficients, so it has either no real roots or two real roots. Edit: If you want to find the curve you can use the cubic discriminant $\Delta = -4a^3-27b^2$. If the discriminant is zero then we have a double root, which is the boundary case separating three real solutions from one real solution with two complex conjugate ones. So the curve of separation is given by the implicit curve $F(a,b)=4a^3+27b^2=0$.
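The relation between the sign of the discriminant and the number of fixed points can be spot-checked numerically; a rough sketch (the sampling resolution and the test values of $a,b$ are illustrative choices):

```python
def count_real_roots(a, b):
    """Count real roots of p(x) = x^3 + a*x - b by sampling sign changes
    on an interval large enough to contain every real root."""
    f = lambda x: x ** 3 + a * x - b
    M = 2 + abs(a) + abs(b)        # crude bound: all real roots lie in [-M, M]
    n = 20000
    xs = [-M + 2 * M * i / n for i in range(n + 1)]
    return sum(1 for x0, x1 in zip(xs, xs[1:])
               if f(x0) == 0 or f(x0) * f(x1) < 0)

# The sign of the discriminant -4a^3 - 27b^2 predicts the root count.
for a, b in [(-3.0, 1.0), (1.0, 1.0), (-1.0, 5.0)]:
    disc = -4 * a ** 3 - 27 * b ** 2
    assert count_real_roots(a, b) == (3 if disc > 0 else 1)
```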
Characteristic polynomial of diagonal matrix with two rank-one updates
You seem to have assumed that $\det(1-k^T(\lambda I-A)^{-1}b)=1$, i.e. $k^T(\lambda I-A)^{-1}b=0$. Although $k^Tb=0$, it isn't true that $k^T(\lambda I-A)^{-1}b$ always vanishes. E.g. when $D=0,\,u=k$ and $v=b$, $$ k^T(\lambda I-A)^{-1}b =u^T(\lambda I-uv^T)^{-1}v =u^T\left[\frac{1}{\lambda}\left(I+\frac{uv^T}{\lambda-v^Tu}\right)\right]v =\frac{1}{\lambda}\left(u^Tv+\frac{(u^Tu)(v^Tv)}{\lambda-v^Tu}\right). $$ Since $v^Tu=0$, we obtain $k^T(\lambda I-A)^{-1}b=\frac{(u^Tu)(v^Tv)}{\lambda^2}$, which is nonzero when $u^Tu\ne0$ and $v^Tv\ne0$.
What is $\sum_{r=1}^\infty\frac{r+2}{2^{r+1}(r)(r+1)}$?
If you do the partial fraction expansion, the summand becomes $$\frac{1}{2^{r+1}}\left( \frac{2}{r} - \frac{1}{r+1}\right) = \frac{1}{2^r r} - \frac{1}{2^{r+1}(r+1)}, $$ so the series telescopes. All terms cancel except the first, and the sum equals $\frac12$.
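The telescoping can be confirmed with exact rational arithmetic; a short sketch:

```python
from fractions import Fraction

def partial_sum(N):
    """Exact partial sum of (r+2) / (2^(r+1) * r * (r+1)) for r = 1..N."""
    return sum(Fraction(r + 2, 2 ** (r + 1) * r * (r + 1))
               for r in range(1, N + 1))

# Telescoping predicts S_N = 1/2 - 1/(2^(N+1) * (N+1)).
for N in [1, 5, 20]:
    assert partial_sum(N) == Fraction(1, 2) - Fraction(1, 2 ** (N + 1) * (N + 1))
```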
Checking an answer from How To Prove It
Because the theorem says: for every $a,b,c$ (real numbers), the equation $ax^2+bx+c=0$ has exactly two real solutions iff $b^2 > 4ac$. Thus, an instance of the above equation is obtained by instantiating the leading universal quantifiers with three individual real numbers, say $a=1, b=3, c=2$. The result is the individual equation $x^2+3x+2=0$, whose roots are $-1,-2$. The roots must not be "specified" because they must be computed from the individual equation. In fully symbolic form the theorem is: $\forall a,b,c \ \exists x_1,x_2 [x_1 \ne x_2 \land \ldots \land \forall z (az^2+bz+c=0 \to z=x_1 \lor z=x_2)]$.
Prove that $\int \limits_{0}^{\infty} x^p e^{-g(x)/x} dx \leq e^{p+1} \int \limits_{0}^{\infty} x^p e^{-g'(x)}dx$.
A proof can be found here, at section 2, but I'll go over how it works here. First, the convexity condition on $g$ is used, in particular fact 5 here, to show that $$g(k x) \geq g(x) + (k-1) x g'(x)$$ for any $k>1$. Then, consider the integral $$J = \int_0^A x^p \exp\left(-\frac{g(kx)}{kx}\right) dx$$ for $A>0$. By a substitution, you can show $$\begin{align}J &= k^{-p-1}\int_0^{Ak} x^p \exp\left(-\frac{g(x)}{x}\right) dx \\ &\geq k^{-p-1}\int_0^{A} x^p \exp\left(-\frac{g(x)}{x}\right) dx\end{align}$$ On the other hand, use the convexity inequality on $g$ to show $$J \leq \int_0^A x^p \exp\left(-\frac{g(x)}{kx} - \frac{(k-1) g'(x)}{k}\right) dx$$ from which you can use Holder's inequality here in integral form to get $$J \leq \left(\int_0^A x^p \exp\left(-\frac{g(x)}{x}\right) dx \right)^{1/k} \left(\int_0^A x^p \exp\left(-g'(x)\right) dx\right)^{(k-1)/k}$$ which should start to look familiar. Putting our bounds on $J$ together, we get $$k^{-p-1}\left(\int_0^A x^p \exp\left(-\frac{g(x)}{x}\right) dx \right)^{(k-1)/k} \leq \left(\int_0^A x^p \exp\left(-g'(x)\right) dx\right)^{(k-1)/k}$$ So take the limit as $A \to \infty$ and rearrange to get $$\int_0^\infty x^p \exp\left(-\frac{g(x)}{x}\right) dx \leq \left(k^{\frac k{k-1}}\right)^{p+1} \int_0^\infty x^p \exp\left(-g'(x)\right) dx$$ and taking the $k\to 1$ limit finishes off the answer!
Does every non-elementary subgroup of the additive group of rationals contain prime multiples of elements in its complement?
Suppose $\frac mn\in \mathbb Q\setminus H$ with $\gcd(m,n)=1$ and let $n=\prod p_i^{a_i}$ be the prime factorization of $n$. We multiply by the $p_i$, one after another. If we reach an element of $H$ we are done; otherwise we may assume that this process leads to an integer $r \in \mathbb Q\setminus H$. Let $s\in H\cap \mathbb Z$. (Note: if $\frac lk\in H$ then $l\in H\cap \mathbb Z$.) Then of course $sr\in H$. If $s=\prod q_i^{b_i}$ is the prime factorization of $s$, then we divide $sr$ by each $q_i$ in turn until we get to an element not in $H$.
confusion about differential eq question
The $C_1$ is an arbitrary constant. It appears during the integration. Let's try to solve the equation (writing $y$ for $f(x)$ for simplicity): $-5y+y'=2$, so $\frac{dy}{dx}=2+5y$, hence $\frac{1}{2+5y}\,dy=dx$. Then we integrate both sides. Notice that an arbitrary constant appears, since the integration is not definite: $\frac{1}{5}\ln(5y+2)=x+C$, so $5y+2=e^{5x+C}=e^{5x}e^C$. Since $e^C$ is a constant, we can replace it with an arbitrary constant $C_1$. You can solve for $y$ and you will get the result.
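A quick numerical check that $y=(C_1 e^{5x}-2)/5$ really satisfies $y'=2+5y$ for any choice of the constant (the test values of $C_1$ and $x$ are arbitrary):

```python
import math

def y(x, C1):
    """The general solution found above: 5y + 2 = C1 * e^(5x)."""
    return (C1 * math.exp(5 * x) - 2) / 5

def deriv(f, x, h=1e-6):
    """Central-difference numerical derivative."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Check y' = 2 + 5y for several constants C1 and points x.
for C1 in [1.0, -3.0, 0.5]:
    for x in [0.0, 0.2, 0.5]:
        assert abs(deriv(lambda t: y(t, C1), x) - (2 + 5 * y(x, C1))) < 1e-3
```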
How to prove $\int_0^\infty xdF = \int_0^\infty 1-F dx$
Hint: $\int_{0}^{\infty}1\ dx$ gives an area of rectangle with length $\infty$ and width $1$. $\int_{0}^{\infty}F\ dx$ gives area of a region in the rectangle below curve $F$ $\int_{0}^{1}x\ dF$ gives area of a region in the rectangle above curve $F$
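For a concrete case, take the $\mathrm{Exp}(2)$ distribution, whose mean is $1/2$; a numerical sketch of $\int_0^\infty (1-F)\,dx$ (the integration cutoff and step count are arbitrary choices):

```python
import math

def tail_integral(F, upper=50.0, n=200000):
    """Numerically integrate (1 - F) over [0, upper] by the midpoint rule."""
    h = upper / n
    return sum((1 - F((i + 0.5) * h)) * h for i in range(n))

lam = 2.0
F = lambda x: 1 - math.exp(-lam * x)   # Exp(lambda) CDF, with mean 1/lambda
area = tail_integral(F)                # close to 0.5, the mean
```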
Integral of power and exponentials
"Could you help me on the following integral?" The short answer is no. Giving various values to the two exponents $a$ and $b$, we notice that the result yields a completely different type of special function each time ($\Gamma$ functions, Bessel functions, Airy functions, Anger functions, etc.), and this is only for $a,~b\le4$. Beyond that, we have hypergeometric series and Meijer G-functions, etc.
Mandelbrot set perturbation theory: When do I use it?
First answer: "the $\delta^3$ term remains significantly smaller than the $\delta^2$ term" means that $\left|C_n \delta^3\right| \ll \left|B_n \delta^2\right|$; usually a few orders of magnitude is a good amount (a factor of $10^3$ or so). What the final $n$ will be for this stage depends on both $X_0$ and the largest $\delta$ among the pixels in the image. Often it will be obvious how many per-pixel iterations it is safe to "skip" by this series approximation technique, because $\left|C_n\right|$ will suddenly increase, but sometimes you get more subtle image distortion first. A common technique is to use regular perturbed iterations for some "probe points" in the image and stop the series approximation iterations once they deviate too much. The final $n$ in the series approximation step is usually significantly smaller than the minimum iteration count for the first escaping pixel in the view, so you combine it with perturbed iterations for the remaining count. Second answer: I'm not sure what you mean by seed.
Usually it works like this:

- Pick a reference point $X_0$ and some probe points $\Delta_0$ in the image.
- While the series approximation is accurate (determined by the worst relative error among all the probe points, $|e| \ll 1$):
  - step the high-precision reference one iteration using $$X_{n+1} = X_n^2 + X_0$$
  - step the series approximation coefficients one iteration using $$\begin{aligned}A_{n+1} &= 2 X_n A_n + 1 \\ B_{n+1} &= 2 X_n B_n + A_n^2 \\ C_{n+1} &= 2 X_n C_n + 2 A_n B_n \end{aligned}$$
  - step the probe points one iteration using $$\Delta_{n+1} = 2 X_n \Delta_n + \Delta_n^2 + \Delta_0$$
- Initialize all the image points from their $\Delta_0$ with the last good series approximation coefficients using $$\Delta_n = A_n \Delta_0 + B_n \Delta_0^2 + C_n \Delta_0^3$$
- Step the reference $X_n$ until it escapes or the maximum iteration count is reached, using $$X_{n+1} = X_n^2 + X_0$$
- Step all image points $\Delta_n$ using the stored reference iterations $X_n$, via $$\Delta_{n+1} = 2 X_n \Delta_n + \Delta_n^2 + \Delta_0$$ If you detect "glitches" in the perturbed iterations by $|X_n + \Delta_n| \ll |X_n|$, set these pixels aside.
- If there are glitched pixels remaining (or the reference escaped too early), repeat this whole process for the glitched pixels (pick a different $X_0$).

"Glitches" occur when the dynamics of a pixel are too different from the dynamics of the reference. They can (usually) be detected by Pauldelbrot's criterion $\left|z_n+\delta_n\right| \ll \left|z_n\right|$ and corrected by picking a better reference. This is all a bit ad hoc and seems to work well, but rigorous numerical analysis proofs are still lacking as far as I know. There are two arbitrary thresholds (for series vs probe point relative error tolerance, and for glitch detection), which is very unsatisfactory.

Asymptotics: Suppose $M$ iterations are skipped by series approximation, and $N$ iterations are needed in total. The image size is $W \times H$.
Suppose the cost of high precision operations is $K$ times the cost of low precision operations. Then the cost of the traditional method using high precision for all pixels is $O(K \times N \times W \times H)$. With perturbation, the cost is $O(K \times N + N \times W \times H)$. With perturbation and series approximation the cost is $O(K \times N + (N - M) \times W \times H)$.
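The perturbation recurrence itself is easy to sanity-check in ordinary double precision: iterating a nearby pixel via $\Delta_n$ around a reference orbit $X_n$ must agree with iterating that pixel directly. A minimal sketch (the values of $X_0$ and $\Delta_0$ are arbitrary illustrative choices, and the reference would be high precision in a real renderer):

```python
X0 = complex(-0.2, 0.3)          # reference point (bounded orbit)
d0 = complex(1e-6, -2e-6)        # pixel offset delta_0

X, d = X0, d0
direct = X0 + d0                 # the pixel's own orbit, iterated directly
for _ in range(50):
    d = 2 * X * d + d * d + d0   # delta_{n+1} = 2 X_n delta_n + delta_n^2 + delta_0
    X = X * X + X0               # X_{n+1}     = X_n^2 + X_0
    direct = direct * direct + (X0 + d0)
    assert abs((X + d) - direct) < 1e-9 * max(1.0, abs(direct))
```

Because the perturbed update is exact algebra (the $\Delta_n^2$ term is kept), the two orbits differ only by floating-point rounding.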
Applied Probability- Bayes theorem
I find the easiest way to do these problems is with a contingency table like this one. Pick a large population so you can deal with whole numbers rather than percents. I used 10000 here:

|         | has disease | healthy | total  |
|---------|------------:|--------:|-------:|
| tests + |          95 |    49.5 |  144.5 |
| tests - |           5 |  9850.5 | 9855.5 |
| total   |         100 |    9900 |  10000 |

Now the probability that someone who tests positive actually has the disease is easy to calculate from the numbers in the first row: $$ \frac{95}{144.5} = 0.65743944636 \approx 0.657 . $$ That answer is much too precise. The data have just one or two significant digits, so it makes no sense to report anything more precise than $0.66$, or even $0.7$. In fact "about $2/3$" would be the most informative. (If I'd used $100,000$ I wouldn't have had half a person.) This is in fact just Bayes' theorem, but with less chance to make a mistake and (I think) more understanding. Can you set up the table for the second problem?

References:

- Chances Are: https://opinionator.blogs.nytimes.com/2010/04/25/chances-are/
- Natural frequencies improve Bayesian reasoning in simple and complex inference tasks: http://journal.frontiersin.org/article/10.3389/fpsyg.2015.01473/full
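The table arithmetic can be reproduced with exact fractions; a sketch (the prevalence, sensitivity, and false-positive rate below are the values implied by the table, treated here as assumptions):

```python
from fractions import Fraction

pop = 10000
sick = Fraction(1, 100) * pop                  # prevalence 1%  -> 100 people
true_pos = Fraction(95, 100) * sick            # sensitivity 95% -> 95
false_pos = Fraction(5, 1000) * (pop - sick)   # 0.5% of 9900    -> 49.5

# Pr[disease | positive test] = true positives / all positives
posterior = true_pos / (true_pos + false_pos)  # 95 / 144.5, about 0.657
```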
How does $\sqrt{2}^{\log n}$ become $n^{\log \sqrt{2}}$
Using the rule $a=2^{\log_2 a}$, we have $\sqrt{2}^{\log_2 n}=2^{\log_2(\sqrt{2})\log_2 n}=n^{\log_2\sqrt{2}}$.
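A quick numerical confirmation that both sides equal $\sqrt n$:

```python
import math

for n in [2, 10, 1000, 10 ** 6]:
    lhs = math.sqrt(2) ** math.log2(n)
    rhs = n ** math.log2(math.sqrt(2))   # exponent is 1/2, so this is sqrt(n)
    assert abs(lhs - rhs) < 1e-9 * rhs
```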
Can you give me a good explanation and an example of a topological linear space?
It's a generalization of a normed linear space. Basically, a TVS is a vector space with a topology on it which is compatible with the linear structure. Specifically, that means that the addition $+: X \times X \to X$ is continuous w.r.t. the product topology, and so is scalar multiplication $\cdot : \mathbb{F} \times X \to X$. A reference to learn more is Rudin's Functional Analysis.
Introductory texts for weak $\omega$-categories
I learned from Baez's article http://arxiv.org/abs/q-alg/9705009v1, which is also loaded with references.