Triangle inequality for distance along a smooth path
(a) Minor issue: it's better to write $\dots < d_\rho(z_1,w)+\varepsilon/2$ (instead of $=$) because this is what the definition of infimum tells you. (b) Major issue: How do you know you can arrange $\dot \gamma_1(1)=\dot \gamma_2(0)$? Not clear at all. One way to fix this is to take $\gamma_1$ and $\gamma_2$ so that the inequalities of the form $\int \dots < d_\rho(z_1,w)+\varepsilon/3$ hold, and then insert a connecting path in between. This extra path -- call it $\gamma_{3/2}$ -- will begin and end at $w$ and will satisfy $\dot\gamma_{3/2}(0)=\dot \gamma_1(1)$ and $\dot\gamma_{3/2}(1)=\dot \gamma_2(0)$. It should also be smooth and very short -- so short that the integral over it will be $<\varepsilon/3$. One way to cook up this connector is to modify the familiar "polar flower" $r=\sin n\theta$ from calculus.
The Upward and Downward Löwenheim-Skolem Theorems together imply the axiom of choice (in ZF)
Well, given an infinite set $A$, in order to prove $|A\times A|=|A|$, you only really care about the cardinality of $A$: in other words, it suffices to prove that $|B\times B|=|B|$ for some $B$ such that $|B|=|A|$ (since you can transport a bijection $B\times B\to B$ along a bijection between $B$ and $A$). So it doesn't matter what specific sets we get in our models, as long as we hit every possible cardinality. Unfortunately, though, your argument doesn't work: starting from $\omega$ and going up and down as your statement of Löwenheim-Skolem allows, you cannot reach all infinite cardinalities in the absence of AC. In particular, your version of Downward Löwenheim-Skolem will never guarantee the existence of a model of any cardinality that is not greater than or equal to $\aleph_0$ (because the conclusion has $|N|\leq |A|+\aleph_0+|L|$ rather than just $|N|\leq |A|$). Without AC, it is not necessarily true that every infinite cardinality is greater than or equal to $\aleph_0$. Here, then, is a more careful version of the argument you propose in the special case that $|A|\geq \aleph_0$. Starting from the model $\omega$, Upward Löwenheim-Skolem gives a model $M$ of cardinality at least $|A|$. Picking a subset of $M$ which is in bijection with $A$, Downward Löwenheim-Skolem then gives a submodel $N$ of $M$ such that $|A|\leq |N|$ (since $N$ contains our chosen subset of size $|A|$) and $|N|\leq |A|+\aleph_0$. But since $|A|\geq \aleph_0$, $|A|+\aleph_0=|A|$ (since $|A|\geq\aleph_0$, we can write $|A|=\aleph_0+|B|$ for some $B$, and then $|A|+\aleph_0=(|B|+\aleph_0)+\aleph_0=|B|+(\aleph_0+\aleph_0)=|B|+\aleph_0=|A|$). Thus $|N|=|A|$, and since we have $|N\times N|=|N|$ we conclude that $|A\times A|=|A|$. Of course, this still leaves the issue: what if $|A|\not\geq\aleph_0$? Well, it turns out that if you look at the proof that $|A\times A|=|A|$ for all infinite $A$ implies AC, it actually only ever uses sets $A$ such that $|A|\geq\aleph_0$. (Specifically, it uses $A$ of the form $X\sqcup \aleph(X)$ where $X$ is an infinite set and $\aleph(X)$ is its Hartogs number, and $\aleph(X)$ always contains $\omega$.) So actually, the weaker conclusion obtained above is still enough to deduce AC.
Solutions to the complex equation $z^n=w$ with one solution given
Between the Fundamental Theorem of Algebra and the Factor Theorem, if you find $n$ distinct roots to $z^n=w$ then you are done and there are no more. Look at each of $(-z_0)^4$, $(iz_0)^4$ and $(-iz_0)^4$. In general, if $z_0^n=w$ then multiplying $z_0$ by the $n-1$ $n$-th roots of unity (that aren't one) gives $n-1$ more solutions to $z^n=w$ and so you have them all by the above theorems.
Change of base logarithms
We get $$9\log_5(x)=\frac{25}{\log_5(x)},$$ and multiplying both sides by $\log_5(x)$ gives $$\left(\log_5(x)\right)^2=\frac{25}{9}.$$ Can you proceed? Taking the square root we get $$|\log_5(x)|=\frac{5}{3}.$$
Does the random variable follow a Poisson distribution?
If the parameter is the rate $\mu$ at which events (traditionally called "births") happen, then $N$ is Poisson distributed with mean $\mu$ times the amount of time the process has been running.
How to derive inverse of x^x to be log(x)/W(log(x))
We have \begin{align*} x &= y^y \\ \iff \log x &= y \log y \tag+\\ \iff \log x &= \log y \exp(\log y)\\ \iff W(\log x) &= \log y\\ \iff W(\log x) &= \frac{\log x}y & \text{by $(+)$}\\ \iff y &= \frac{\log x}{W(\log x)} \end{align*}
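As a quick numerical sanity check (my addition; it assumes SciPy, whose scipy.special.lambertw computes $W$), one can confirm that $y = \log x / W(\log x)$ really satisfies $y^y = x$:

import math
from scipy.special import lambertw

x = 7.3                                       # any x > 1, so log(x) > 0 and W is real
y = math.log(x) / lambertw(math.log(x)).real
print(y ** y)                                 # ~7.3, recovering x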
Closure of a set in $l^2$ in the weak topology
Some hints: For the first part, you need to prove that for all $x_1,x_2, \ldots, x_k \in l^2$ and for all $\epsilon >0$ there are $n,m$ such that $$ \lvert \langle x_i, e_{n,m} \rangle \rvert < \epsilon $$ for all $1 \leq i \leq k$. First try to find an $m$ such that $\lvert \langle x_i, e_m \rangle \rvert < \epsilon/2$ for all $1 \leq i \leq k$. Fix such an $m$, then try to find an $n$ such that $\lvert \langle x_i, e_n \rangle \rvert < \epsilon/(2m)$. For the second part, note that a sequence $(x_n)$ in $l^2$ converges weakly to zero if and only if $\lim_n \langle x_n, y \rangle =0$ for all $y \in l^2$. So if $e_{f(n), g(n)}$ is a sequence in $A$ converging weakly to zero, this means that $$ \lim_n (y_{g(n)} + g(n) \cdot y_{f(n)}) =0 $$ for all $y \in l^2$. I'm positive you can now find a $y \in l^2$ where this doesn't hold.
Meaning of finite expectation for an absolute value of random variable
The way I learned things, it's part of the definition that in order for the mean $E(X)$ to exist we must have $E(|X|)<\infty$ (which, under the hood, is just the usual definition of Lebesgue integrability). This is the standard definition, I think. It's possible that by modifying the definition of the integral you can define the expectation without having $E(|X|)<\infty.$ For instance, if you take it to be a principal value integral, then a centered Cauchy distribution with PDF proportional to $\frac{1}{1+x^2}$ would have "mean zero," but this is not standard, nor is it useful, as far as I know. So I would say it's just put in that definition for emphasis. Otherwise, people might erroneously think the example above has mean zero since it's symmetric and try to apply the LLN to it.
Independence of Brownian motion
Suppose $W_t$ and $W_s$ are independent for $t\neq s$. Then for $0\leq s<t$ we would have that $W_t$ is independent of $\mathcal{F}^W_s=\sigma(W_u\mid u\leq s)$. Now the martingale property of the Brownian motion yields $$ W_s=E[W_t\mid\mathcal{F}^W_s]=E[W_t]=0\quad\text{a.s.}, $$ which certainly isn't true.
How to represent EigenValues of covariance matrix
The covariance matrix is symmetric, so it has an orthonormal eigenvector basis by the spectral theorem. To recover $\Sigma$ from normalised column-eigenvectors and eigenvalues, just form a matrix $P$ whose columns are your eigenvectors in order, and a diagonal matrix $D$ with the eigenvalues along the diagonal, in the corresponding order to $P$. Since $P$ is orthogonal, $P^{-1}=P^t$ and $\Sigma = PDP^{-1} = PDP^t$. Also see https://en.wikipedia.org/wiki/Matrix_decomposition#Eigendecomposition. The only statistical fact used here is that $\Sigma^t=\Sigma$; everything else is general linear algebra. Re terminology: Some authors call $\Sigma$ the variance because it is the reasonable generalisation of univariate variance to multivariate distributions: The variance of each component of your random vector sits on the diagonal of $\Sigma$, while covariances are off-diagonal - in the univariate case the 1-by-1 "covariance matrix" is trivially diagonal, so just equal to the variance. (See "Conflicting nomenclatures and notations" on https://en.wikipedia.org/wiki/Covariance_matrix)
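To make the reconstruction concrete, here is a short NumPy sketch (my addition, not part of the original answer): numpy.linalg.eigh returns the eigenvalues and an orthogonal matrix $P$ whose columns are eigenvectors, and $PDP^t$ recovers $\Sigma$.

import numpy as np

Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])            # any symmetric (covariance) matrix
vals, P = np.linalg.eigh(Sigma)           # eigh is for symmetric matrices
D = np.diag(vals)
print(np.allclose(P @ D @ P.T, Sigma))    # True: Sigma = P D P^t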
Number of ways to choose a closed path of given length on a square lattice
Consider taking $n$ steps on the square grid, each step being either East or North. There are $2^n$ ways of doing this. You can create a self-avoiding polygon by connecting together four copies of this set of $n$ edges, each copy being rotated 90 degrees from the previous one. You will need 4 extra edges to space them out, to make sure the copies don't overlap. So there are at least $2^n$ self-avoiding polygons of perimeter $4n+4$. Edit: Actually, the above procedure double-counts a few of the self-avoiding polygons. This can be fixed by using two spacer edges per copy (eight altogether), e.g. appending one East and one North step after the $n$ freely chosen steps. The conclusion doesn't really change: There are at least $2^n$ self-avoiding polygons of perimeter $4n+8$, so the count grows (at least) exponentially.
Usage of finite fields or Galois fields in real world
Finite fields are extensively used in design of experiments, an active research area in statistics that began around 1920 with the work of Ronald Fisher. Fisher was a major pioneer in the theory of statistics and one of the three major founders of population genetics. I've heard of the use of finite fields in scheduling tournaments. Problems in that area may be the same mathematical problems as some of those that occur in design of experiments.
What is the n-th sum that has binomial coefficients?
The sum $\displaystyle\sum_{k=2}^n \frac {k-1}{k!}$ is not related to the harmonic series. Observe that $$\frac {k-1}{k!} = \frac k{k!}-\frac 1{k!} = \frac{1}{(k-1)!} - \frac 1{k!}$$ so that sum actually telescopes to $1-\dfrac 1{n!}$.
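A quick check of the telescoped closed form with exact rational arithmetic (my addition):

from fractions import Fraction
from math import factorial

n = 8
s = sum(Fraction(k - 1, factorial(k)) for k in range(2, n + 1))
print(s == 1 - Fraction(1, factorial(n)))   # True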
Function with a continuous domain but a discrete range
It makes sense and your example is a good one. A function is required to return a single value for each element of the domain, but it doesn't have to be continuous. A couple of other examples of functions on $\Bbb R$ are $\lfloor x \rfloor$ and the function that is $1$ if $x$ is rational and $0$ otherwise.
Raising complex number to high power - Cartesian form
For the factor of $(1 - i)^9,$ I think the de Moivre form yields some insight, because $1 - i = \sqrt2 e^{-i\pi/4}.$ Hence $$(1 - i)^9 = 2^{9/2} e^{-i9\pi/4} = 16\sqrt2 e^{-i\pi/4} = 16 - 16i.$$ For the factor of $(3-2i)^3$ I am not convinced by the other answers that computing $\cos(3\arctan(-2/3))$ is simpler than just doing two complex multiplications by the algebraic method.
Application of MVT to find the limit
You need to show uniform convergence to zero, not just convergence to zero. Applying MVT to the function $f(x) = e^{-x}$ tells us for a given $x\in[0,1],$ we have $$ \frac{|e^{-x/n}-1|}{x/n} = e^{-c/n}$$ for some $c\in[0,x].$ This means that $$\frac{|e^{-x/n}-1|}{x/n} \le 1 $$ and therefore we have $$ \left|\frac{\sqrt n (e^{-x/n}-1)}{x}\right| \le \frac{1}{\sqrt{n}}$$ for all $x\in [0,1].$ This shows that the sequence of functions converges uniformly to zero.
Show that $\log(r)$ minus an integral involving the $\log$ function is less than $\frac{1}{6r^2}$
$$\int_{n}^{n+1}\log(x)\,dx -\log(n) = \int_{n}^{n+1}\log\left(\frac{x}{n}\right)\,dx = \int_{0}^{1}\log\left(1+\frac{x}{n}\right)\,dx \tag{1}$$ but for every $z>-1$: $$ z-\frac{z^2}{2}\leq\log(1+z)\leq z \tag{2} $$ hence by termwise integration: $$ \frac{1}{2n}-\frac{1}{6n^2}\leq \int_{n}^{n+1}\log(x)\,dx -\log(n) \leq \frac{1}{2n}.\tag{3}$$ We may also notice that, by integration by parts: $$\begin{eqnarray*}\int_{0}^{1}\left[\frac{x}{n}-\log\left(1+\frac{x}{n}\right)\right]\,dx&=&\frac{1}{n}-\log\left(1+\frac{1}{n}\right)-\int_{0}^{1}\frac{x^2}{n^2+nx}\,dx\\&=&\color{red}{\int_{0}^{1}\frac{x(1-x)}{n(n+x)}\,dx}\\&\leq&\frac{1}{n^2}\int_{0}^{1}x(1-x)\,dx = \frac{1}{6n^2}\tag{4} \end{eqnarray*}$$ and that upper bound can be improved up to $\color{red}{\frac{4n+1}{12n^2(2n+1)}}$ by splitting the red integral over $\left[0,\frac{1}{2}\right]$ and $\left[\frac{1}{2},1\right]$. Moreover, $\color{red}{\frac{1}{6n^2+6n}}$ can be taken as a lower bound.
$\nabla \, \times ( {\bf u} \times {\bf v} ) = (\nabla . {\bf v}) \, {\bf u} - (\nabla {\bf v}) \, {\bf u}$???
Notice that $\nabla v$ is not the same as $\nabla\cdot v$. Do you know how $\nabla v$ is defined?
What are the restriction maps of the inverse image presheaf?
Yes, this is correct. You're using the fact that all of the objects in the diagram to calculate the value of $\pi^{-1}\mathscr{G}(V)$ are also objects in the diagram to calculate the value of $\pi^{-1}\mathscr{G}(U)$, and thus we have a map between the diagrams and thus a map between the limits. As to why this isn't mentioned, it's because you don't need to check it often and if you have to, you can cook it up from the definition on sections without too much fuss just like you did. One big reason you might not think about this much is that $f^{-1}$ as a functor is not so common - usually, one deals with $f^*$, the composition of $f^{-1}$ and $-\otimes_{f^{-1}\mathcal{O}_Y}\mathcal{O}_X$, in order to get $\mathcal{O}_X$-modules out. With both $f^{-1}$ and $f^*$, needing to consider the specific form of a restriction map in order to make a proof work is not common. And even in the scenarios where you might need to consider such a thing, the fact that the map is induced by the properties of the inverse limit means that it's natural and thus easy to work with. The big idea here about sheaves is that they're a tremendous amount of data, and we usually like to work with some sort of easier or less wordy representative (like when you call your friend - where I'm from, you usually just say their first name). For instance, when we talk about a quasicoherent sheaf on an affine scheme, we know that every such sheaf is of the form $\widetilde{M}$ for some module $M$. We hardly ever specify all of the restriction maps, even in this particularly easy case, because it would require us to say something about all the open sets. That's often difficult! Even in the Zariski topology, where there are far fewer open sets than the standard topology, we usually don't work explicitly with very many of our open sets.
Deciphering Hill Cipher with only cipher text
Step 1 is to convert the text to numbers in $\mathbb{Z}_{27}$. Call the result $(c_0, c_1, c_2, c_3, \ldots)$. Call the decryption matrix (which you need) $$D = \begin{bmatrix}x &y \\ u & v \end{bmatrix}$$ for definiteness. For every possibility of $(x,y) \in \mathbb{Z}_{27}^2$, compute the supposed partial plain text (we get the even-indexed plain text, because we only do one row at a time) $P(x,y) = (p_0, p_2 ,p_4, \ldots)$ where $p_{2i} = x\cdot c_{2i} + y \cdot c_{2i+1} \bmod 27$. Assign a point score of $+2$ or more to spaces and 'e's that occur in $P(x,y)$, and minus points to 'q's and 'x's etc. Or do a $\chi^2$-matching score if you prefer. Probably one $(x,y)$ row will jump out, and be correct. Then do the same for $(u,v)$ and the odd-places plaintext. Alternatively, if you know the text is English and not too strange, you can for each position $i$ suppose that $c_{i} c_{i+1} c_{i+2} c_{i+3}$ corresponds to $(19, 7, 4, 26)$ (or "the "), which gives 4 assumed equations in the 4 unknowns $(x,y,u,v)$, which you can solve modulo $27$, and if solvable test it on another part of the text. If it fails try another position. Etc. Both solutions will involve some programming of course, unless you're very patient.
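Here is a minimal Python sketch of the row-by-row attack (my addition; it assumes the 27-letter alphabet a-z plus space mapped to 0..26, and uses a toy scoring table rather than a real $\chi^2$ statistic):

def score(nums):
    pts = {26: 3, 4: 2, 19: 1, 0: 1}      # reward ' ', 'e', 't', 'a'
    bad = {16, 23, 25}                    # penalize 'q', 'x', 'z'
    return sum(pts.get(p, 0) - (2 if p in bad else 0) for p in nums)

def best_row(cipher_nums):
    """Brute-force one row of D over Z_27. The top-scoring (x, y) recovers
    the even-indexed plaintext p_{2i} = x*c_{2i} + y*c_{2i+1} mod 27;
    rerun the same search to pick (u, v) for the odd positions."""
    pairs = list(zip(cipher_nums[0::2], cipher_nums[1::2]))
    return max((score([(x * c1 + y * c2) % 27 for c1, c2 in pairs]), (x, y))
               for x in range(27) for y in range(27))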
What would the notation G/H mean in terms of groups and subgroups?
If $H$ is normal in $G$, it means the quotient group. In the more general subgroup case, it could just mean the set of (left) cosets of $H$ in $G$.
A subtle point regarding the axiom of induction
Well, sure. After all, starting a sentence with "iff" makes no sense. Added: You are quite correct that the axiom of induction (in that form) is equivalent to $$\bigl(S\subseteq\Bbb N\wedge 1\in S\wedge(n\in S\implies\sigma(n)\in S)\bigr)\iff S=\Bbb N.$$ The form given is the less trivial direction of this biconditional.
showing that an R-module is Noetherian if a submodule is Noetherian and their quotient is Noetherian
Let $(I_n)$ be an ascending chain of submodules of $M$, $I_n\subset I_{n+1}$, and let $p:M\rightarrow M/N$ be the projection. $(p(I_n))$ is an ascending chain of submodules of $M/N$, so there exists $n_0$ such that $n>m>n_0$ implies $p(I_n)=p(I_m)$. There also exists $n_0'$ such that $n>m>n_0'$ implies $I_n\cap N=I_m\cap N$. Let $n>m>\max(n_0,n_0')$ and let $x\in I_n$. There exists $y\in I_m$ such that $p(y)=p(x)$ since $p(I_n)=p(I_m)$; then $p(x-y)=0$ implies $x-y\in N$, and $x-y\in I_n$ since $y\in I_m\subset I_n$. Thus $x-y\in I_n\cap N=I_m\cap N\subset I_m$, so $x=y+(x-y)\in I_m$. This shows $I_n=I_m$, so the chain stabilizes.
Strong convexity of quadratic functional $x\mapsto \|Ax-b\|^2$
$f$ is twice continuously differentiable, and its Hessian is constant: $\nabla^2 f(x) = 2A^TA$. So $f$ is strongly convex if and only if the Hessian is positive definite, $\nabla^2 f(x) \succ 0$. Now $2A^TA \succ 0$ if and only if $\text{Null}(A) = \{0\}$, that is, if and only if $A$ is injective.
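A quick numerical illustration (my addition): for an injective $A$, the constant Hessian $2A^TA$ has strictly positive eigenvalues.

import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])                 # full column rank, i.e. Null(A) = {0}
H = 2 * A.T @ A                            # Hessian of f(x) = ||Ax - b||^2
print(np.all(np.linalg.eigvalsh(H) > 0))   # True: f is strongly convex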
Constructing a transformation satisfying the given properties
Let the transformation be $f(x,y) = (f_1(x,y), f_2(x,y))$. The line $y = 3$ is mapped to $x = 0$, so $f_1(x,3) = 0$ for all $x$. This means that we can set $f_1(x,y) = M_1(y-3)$ for some constant $M_1$. Similarly, we can set $f_2(x,y) = M_2(x^2 - y)$. Then we have $f(x,y) = (M_1(y-3), M_2(x^2 - y))$.
What is the significance of $[t/ \Delta t]$ in Ross' definition of Brownian motion?
Note that $X_n$ denotes the direction you move on the $n$th step, not at time $n$. So the fact that the sum goes up to $[t/\Delta t]$ means that you take $[t/\Delta t]$ steps. Why does this make sense? Well, you take a step every $\Delta t$ units of time. So by time $t$, $[t/\Delta t]$ is exactly the number of steps you will have taken.
Question concerning inequality of norms of matrices
Here's a proof of the first inequality: Let $A_i$ be the $i$th row of $A$. Then, for any $x \in \mathbb{R}^n$, we have $$\|Ax\|_2^2 = \sum_i\left|\sum_ja_{ij}x_j\right|^2=\sum_i\left|\langle A_i,x\rangle \right|^2 \\\leq \sum_i\langle A_i,A_i\rangle\langle x,x\rangle = \|x\|_2^2\sum_{i,j}a_{ij}^2,$$ where the inequality comes from Cauchy-Schwarz. Then, taking the square root of each side, we get $$\|Ax\|_2 \leq \|x\|_2 \sqrt{\sum_{i,j}a_{ij}^2}.$$ Thus, $$\max_{\|x\|_2=1}\|Ax\|_2 \leq \max_{\|x\|_2=1}\|x\|_2 \sqrt{\sum_{i,j}a_{ij}^2} = \sqrt{\sum_{i,j}a_{ij}^2}.$$ Note that the Frobenius norm is defined as $\|A\|_F = \sqrt{\sum_{i,j}a_{ij}^2}.$ Therefore, for part two, we want to show $$\|A\|_F \leq \sqrt{n}|A|.$$ Now, note $$\|A\|_F^2 = \sum_i\|Ae_i\|_2^2 \leq \sum_i|A|^2\|e_i\|_2^2 = |A|^2\sum_i1 = n|A|^2,$$ where $e_i$ is the vector of all zeros except for the $i$th position, which has a $1$. Taking square roots gives the desired result. Note, the inequality here comes from the "compatibility" of induced norms. That is, for a vector norm and its induced matrix norm $\|\cdot\|$, we know that $$\|Ax\| = \frac{\|Ax\|}{\|x\|}\|x\| \leq \max_{y\neq 0}\left\{\frac{\|Ay\|}{\|y\|}\right\}\|x\| \equiv\max_{\|y\|=1}\{\|Ay\|\}\|x\| =:\|A\|\cdot\|x\|.$$
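Both inequalities are easy to see numerically (my addition):

import numpy as np

n = 5
A = np.random.randn(n, n)
spectral = np.linalg.norm(A, 2)            # induced 2-norm, |A| above
frobenius = np.linalg.norm(A, 'fro')       # sqrt of the sum of squared entries
print(spectral <= frobenius <= np.sqrt(n) * spectral)   # True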
Showing that the coordinate mapping is one-to-one and onto.
That $B=\{b_1,\dotsc,b_n\}$ is a basis for $V$ means two things: $B$ spans $V$, so for every $v\in V$ there exist $\lambda_1,\dotsc,\lambda_n\in\Bbb R$ such that $v=\lambda_1b_1+\dotsb+\lambda_nb_n$; and $B$ is linearly independent, so $\lambda_1b_1+\dotsb+\lambda_nb_n=\vec 0$ if and only if $\lambda_1=\dotsb=\lambda_n=0$. We immediately have the following proposition. Proposition. If $$ \alpha_1b_1+\dotsb+\alpha_nb_n=\beta_1b_1+\dotsb+\beta_nb_n $$ then $\alpha_j=\beta_j$ for $j=1,\dotsc,n$. Can you prove the proposition? Now, our linear map $T:V\to\Bbb R^n$ is given by $$ T(v)=\langle \lambda_1,\dotsc,\lambda_n\rangle $$ where $v=\lambda_1b_1+\dotsb+\lambda_nb_n$. This map is well-defined by the fact that $B$ spans $V$ and the above proposition. To show that $T$ is one-to-one, can you prove that $\ker T=\{\vec 0\}$? To show that $T$ is onto, let $\langle \lambda_1,\dotsc,\lambda_n\rangle\in\Bbb R^n$. Then let $$ v=\lambda_1b_1+\dotsb+\lambda_nb_n $$ What is $T(v)$?
Is unblurring an image possible
You are looking for deconvolution. In your case, $a,b,c$ can be solved for via the system of equations $$\begin{align*} a + b &= 3f[S]_0 \\ a + b + c &= 3f[S]_1 \\ b + c &= 3f[S]_2 \end{align*}$$ In general, if you know the convolution kernel (usually a Gaussian matrix in the case of image blurring), and the Fourier transform of the kernel has no zeros, then the image can be deconvolved using the Fourier transform.
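For intuition, here is a 1-D circular-convolution sketch of Fourier deconvolution (my addition; it assumes the kernel's DFT has no zeros, and it ignores the noise that makes naive deconvolution fragile in practice):

import numpy as np

signal = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0])
kernel = np.array([2.0, 1.0, 1.0, 0.0, 0.0, 0.0]) / 4   # a simple blur, zero-padded
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))
recovered = np.real(np.fft.ifft(np.fft.fft(blurred) / np.fft.fft(kernel)))
print(np.allclose(recovered, signal))                   # True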
Pseudo-random generator of same "family" numbers
Maybe this is good for you:

abstract class MyRandom {
    private Random rnd;
    private long startSeed;

    protected MyRandom(long seed) {
        rnd = new Random(seed);
        startSeed = seed;
    }

    public long nextLong() {
        while (true) {
            long nextTry = rnd.nextLong();
            if (F(startSeed, nextTry)) return nextTry;  // keep only values in the "family"
        }
    }

    // accept nextTry as belonging to the family determined by the seed
    protected abstract boolean F(long n1, long n2);
}
Infinite series containing positive and negative terms
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} &\bbox[10px,#ffd]{1 - \sum_{n = 0}^{\infty}\pars{-1}^{n}\, {\prod_{k = 0}^{n}\pars{3k + 1} \over \prod_{k = 0}^{n}\pars{5k + 5}}} \\[5mm] = &\ 1 - \sum_{n = 0}^{\infty}\pars{-1}^{n}\, {3^{n + 1}\prod_{k = 0}^{n}\pars{k + 1/3} \over 5^{n + 1}\prod_{k = 0}^{n}\pars{k + 1}} \\[5mm] = &\ 1 + \sum_{n = 0}^{\infty}\pars{-\,{3 \over 5}}^{n + 1}\, {\pars{1/3}^{\overline{n + 1}} \over \pars{n + 1}!} \\[5mm] = &\ 1 + \sum_{n = 0}^{\infty} {\Gamma\pars{1/3 + n + 1}/\Gamma\pars{1/3} \over \pars{n + 1}!}\, \pars{-\,{3 \over 5}}^{n + 1} \\[5mm] = &\ 1 + \sum_{n = 0}^{\infty} {\pars{n + 1/3}! \over \pars{n + 1}!\pars{-2/3}!}\, \pars{-\,{3 \over 5}}^{n + 1} = 1 + \sum_{n = 0}^{\infty}{n + 1/3 \choose n + 1} \pars{-\,{3 \over 5}}^{n + 1} \\[5mm] = &\ 1 + \sum_{n = 0}^{\infty} \bracks{{-1/3 \choose n + 1}\pars{-1}^{n + 1}} \pars{-\,{3 \over 5}}^{n + 1} \\[5mm] = &\ \sum_{n = 0}^{\infty} {-1/3 \choose n}\pars{3 \over 5}^{n} = \pars{1 + {3 \over 5}}^{-1/3} = \bbx{5^{1/3} \over 2} \approx 0.8550 \end{align}
Tensor product of simple modules
Here are some things that you probably know, but just to say something ... : Suppose that $R$ is a $k$-algebra for a field $k$, and that $M$ is finite dimensional over $k$. Then $M$ is simple as a right $R$-module if and only if $Hom_k(M,k)$ (with the transpose $R$-action) is simple as a left $R$-module. Then $M\otimes_R N$ is a $k$-vector space, and since it is the universal recipient of an $R$-balanced pairing from $M\times N$, it is non-zero if and only if there is a non-zero $R$-balanced pairing $M \times N \to k$, if and only if there is a non-zero $R$-module homomorphism $N \to Hom_k(M,k)$. If $N$ is simple, such a non-zero homomorphism has to be an isomorphism, and so we see that $M\otimes_R N \neq 0$ for simple $M$ and $N$ if and only if $N = Hom_k(M,k)$. Of course my assumptions imply that $M\otimes_R N = Hom_R(Hom_k(M,k),N),$ and so this reduces to the "Hom" case that is motivating your whole question. But maybe it suggests a way of thinking which might work in a more general setting (trying to replace $k$ by some kind of "minimal" quotient of $M\otimes_R N$). I didn't actually succeed yet in saying anything more general, though ... .
Could we simplify the log determinant's concavity proof?
Actually, we cannot simplify the proof. Indeed, $Z^{-1}V$ is not necessarily symmetric, and thus an eigendecomposition is not guaranteed. This explains the use of the trick with $Z^{1/2}$.
Show that the map $f(a)=a^2$ is an isomorphism of groups from G to itself.
What's the kernel? If $a^2=e$, then $a=e$: otherwise $\vert a\vert=2$, a contradiction ($\Rightarrow\Leftarrow$).
Notation question regarding field extensions (What does $K^2 \subseteq k$ mean)
The squares in $K$. Using that $k$ is of characteristic $2$, you can show that if $K$ is an extension of $k$ generated by square roots of elements in $k$ (like in your case), then every square of element in $K$ is in $k$. This is very specific to characteristic $2$: think of $\mathbb{Q}(\sqrt{2},\sqrt{3})$ and $x = \sqrt{2} + \sqrt{3}$.
Shortest path between two vertices
First perform a DFS to remove all the vertices that are not accessible from the source. Then apply the same idea as Dijkstra's algorithm:

dist[source] := 0                          // Distance from source to source
stack := [source]                          // Stack that we use to remember vertices
for each vertex v in Graph:                // Initializations
    if v != source
        dist[v] := infinity                // Unknown distance from source to v
        previous[v] := undefined           // Previous node in optimal path from source
    end if
end for
while stack is not empty:                  // The main loop
    u := pop(stack)                        // take the first vertex in the stack
    for each edge (u,v) in E
        alt := dist[u] + length(u, v)
        if alt < dist[v]:                  // A shorter path to v has been found
            dist[v] := alt
            previous[v] := u
        end if
        remove (u,v) from E                // That edge is useless now
        if there is no (u',v) in E         // v cannot be reached by another edge
            push(v,stack)                  // store v for later
        end if
    end for
end while
return dist[], previous[]

Why it works: when you push a vertex, you are sure that you have found the shortest distance to it. Why it is in $O(E+V)$: because each time you use an edge you remove it. I hope it helps.
what would the detailed solution to this attached problem be?
Let $w$ be the weight of the body; then $$\frac{dw}{dx}=k$$ where $k$ is a constant. When the bucket starts, its weight is $w(0)=5+60=65$ lb; by the top, $\frac13(60)=20$ lb of sand has leaked, so the weight of the bucket is $w(10)=5+40=45$ lb. It follows that $$\int_{65}^{45}dw=\int_0^{10}k\,dx\qquad\implies\qquad-20=10k\qquad\implies\qquad k=-2$$ The weight of the bucket is given by $w(x)=65-2x$. Thus, the work is given by $$W=\int_0^{10}(65-2x)\,dx=650-10^2=550\quad\text{ft-lb.}$$
Sum of all values along a line
For all the cells except the end ones, you can just compute the length of the segment passing through the cell and multiply by the concentration in the cell. For the end cells you need to figure out the distance from the endpoint to the edge of the cell. If the concentration is in g/m^3 the result will have units g/m^2. It represents the number of grams that would be in a column of area 1 m^2 around your line. Depending on how the concentration varies, you might be able to average the concentrations in all the cells on the route, then multiply by the total length. That will overstate the contributions of the cells where the line just cuts the corner, but if the concentration varies slowly that will not be too bad.
Can you cross two cross products with a cross product?
The vector triple product formula is $$a \times (b \times c)=(a\cdot c)b-(a\cdot b)c$$ where you need the parentheses because the cross product is not associative. If you apply this you get $$(n \times a)\times(n \times b)=((n\times a)\cdot b)n-((n \times a)\cdot n)b\\ =((n\times a)\cdot b)n$$ because the second dot product is zero.
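A quick numerical check of this identity (my addition):

import numpy as np

n = np.array([1.0, 2.0, -1.0])
a = np.array([0.5, -1.0, 3.0])
b = np.array([2.0, 0.0, 1.0])
lhs = np.cross(np.cross(n, a), np.cross(n, b))
rhs = np.dot(np.cross(n, a), b) * n
print(np.allclose(lhs, rhs))   # True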
The difference between ω and A?
A probability space is a triple $(\Omega,\mathcal F,P)$ where $\Omega$ is the set of outcomes, $\mathcal F$ is a $\sigma$-algebra on $\Omega$, and $P\colon \mathcal F\to [0,1]$ is a probability measure. Now, $\omega$ is usually an element of $\Omega$, while $A$ is an element of $\mathcal F$. We measure elements of $\mathcal F$, not elements of $\Omega$. Let me give an example. Let $\Omega = [0,1]$, let $\mathcal F$ be the Borel $\sigma$-algebra on $[0,1]$ (the $\sigma$-algebra generated by open intervals), and let $\mu$ be Lebesgue measure (in simple terms, length). Now, it makes sense to ask what is the length of $[0,1]$, $\langle 0.2,0.75\rangle$ and even $\{1\}$, but it doesn't make sense to ask what is the length of the number $1$, for example. Notice how $[0,1]$, $\langle 0.2,0.75\rangle$ and $\{1\}$ are elements of $\mathcal F$, while $1$ is not; rather, $1\in\Omega$.
What's the meaning of algebraic data type?
Think of an algebraic data type as a type composed of simpler types, where the allowable composition operators are AND (written $\cdot$, often referred to as product types) and OR (written $+$, referred to as union types or sum types). We also have the unit type $1$ (representing a null type) and the basic type $X$ (representing a type holding one piece of data - this could be of a primitive type, or another algebraic type). We also tend to use $2X$ to mean $X+X$ and $X^2$ to mean $X\cdot X$, etc. For example, the Haskell type data List a = Nil | Cons a (List a) tells you that the data type List a (a list of elements of type a) is either Nil, or it is the Cons of a basic type and another list. Algebraically, we could write $$L = 1 + X \cdot L$$ This isn't just pretty notation - it encodes useful information. We can rearrange to get $$L \cdot (1 - X) = 1$$ and hence $$L = \frac{1}{1-X} = 1 + X + X^2 + X^3 + \cdots$$ which tells us that a list is either empty ($1$), or it contains 1 element ($X$), or it contains 2 elements ($X^2$), or it contains 3 elements, or... For a more complicated example, consider the binary tree data type: data Tree a = Nil | Branch a (Tree a) (Tree a) Here a tree $T$ is either nil, or it is a Branch consisting of a piece of data and two other trees. Algebraically $$T = 1 + X\cdot T^2$$ which we can rearrange to give $$T = \frac{1}{2X} \left( 1 - \sqrt{1-4X} \right) = 1 + X + 2X^2 + 5X^3 + 14X^4 + 42X^5 + \cdots$$ where I have chosen the negative square root so that the equation makes sense (i.e. so that there are no negative powers of $X$, which are meaningless in this theory). This tells us that a binary tree can be nil ($1$), that there is one binary tree with one datum (i.e. the tree which is a branch containing two empty trees), that there are two binary trees with two datums (the second datum is either in the left or the right branch), that there are 5 trees containing three datums (you might like to draw them all), etc.
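The series coefficients for $T$ can also be generated directly from $T = 1 + X\cdot T^2$ (my addition, in Python rather than Haskell): equating coefficients gives $t_m = \sum_{i} t_i t_{m-1-i}$, the Catalan recurrence.

def tree_counts(n):
    t = [1]                                    # one empty tree
    for m in range(1, n):
        t.append(sum(t[i] * t[m - 1 - i] for i in range(m)))
    return t

print(tree_counts(6))   # [1, 1, 2, 5, 14, 42], matching the series above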
Double absolute value proof: $||a|-|b||\le |a-b|$
$$ ||a|-|b||\le |a-b| $$ iff $$ ||a|-|b||^2\le |a-b|^2 $$ iff $$ a^2 -2|ab| +b^2 \leq a^2-2ab+b^2 $$ iff $$ ab \leq |ab|, $$ and the last inequality is always true.
Find Res $(f,0)$ if $f(z) = \frac{\sin z}{z^8}$.
It is usually much easier to use the Taylor series of $\sin(z)$. Write $$\frac{\sin(z)}{z^8}=\frac{1}{z^8}\sum_{n=0}^{\infty}\frac{(-1)^nz^{2n+1}}{(2n+1)!}=\sum_{n=0}^{\infty}\frac{(-1)^nz^{2n-7}}{(2n+1)!}$$ This gives a Laurent series of a meromorphic function. The residue is the coefficient of $z^{-1}$, which in our case corresponds to $n=3$. So we get: $$\operatorname{Res}(f,0)=\frac{(-1)^3}{(2\cdot 3+1)!}=-\frac{1}{7!}$$ And as you see, this is identical to your result.
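One can confirm this with SymPy (my addition):

from sympy import residue, sin, symbols

z = symbols('z')
print(residue(sin(z) / z**8, z, 0))   # -1/5040, i.e. -1/7!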
Cholesky factorization
Here is a partial answer: why is such a routine needed? A Cholesky factorization is $XX^t = A$. But not all $A$ have such a factorization. Whenever matrices get you down, try the 1×1 matrices: numbers. The transpose of $[x]$ is just $[x]$, and so we get: $$[x][x]^t = [x^2] = [a]$$ which only works (for real numbers $x$) if $a \geq 0$. You can't take the square root of a negative (and get a real number), and you can't take the square root of a "negative" (definite) matrix. So what happens if we ask for the Cholesky decomposition of $[-2]$? Well, the routine says $-2$ is too small, so we'll just choose $d = 2$, and find the Cholesky factorization of $[a] + [d] = [a + d] = [-2+2] = [0]$. $$[0][0]^t = [0] = [-2] + [2]$$ Given any symmetric matrix, it can be thought of (after finding some other special decompositions) as a diagonal matrix: in other words, as a few separate numbers. For example take $a = \operatorname{diag}(1,-2,3)$. We can almost find a nice Cholesky decomposition: $$\begin{bmatrix}1 & 0 & 0 \\ 0 & \sqrt{-2} & 0 \\ 0 & 0 & \sqrt{3} \end{bmatrix} \begin{bmatrix}1 & 0 & 0 \\ 0 & \sqrt{-2} & 0 \\ 0 & 0 & \sqrt{3} \end{bmatrix}^t = \begin{bmatrix}1 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 3 \end{bmatrix}$$ except for the pesky $\sqrt{-2}$ not being a real number. Never fear, $d = \operatorname{diag}(0,2,0)$ is here! $$\begin{bmatrix}1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & \sqrt{3} \end{bmatrix} \begin{bmatrix}1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & \sqrt{3} \end{bmatrix}^t = \begin{bmatrix}1 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 3 \end{bmatrix} + \begin{bmatrix}0 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 0 \end{bmatrix} $$ Things are a little messier when $A$ and $D$ are not both diagonal (we can assume one is diagonal, but not both, for this algorithm), but I think the basic idea is the same. Fix negative diagonal entries; well, negative eigenvalues.
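A crude version of such a routine is easy to sketch in NumPy (my addition; real modified-Cholesky codes choose the shift $d$ far more carefully than this doubling loop):

import numpy as np

def shifted_cholesky(A, step=1.0):
    """Return (L, shift) with L @ L.T equal to A + shift*I."""
    shift = 0.0
    while True:
        try:
            return np.linalg.cholesky(A + shift * np.eye(len(A))), shift
        except np.linalg.LinAlgError:          # not positive definite yet
            shift = step if shift == 0.0 else 2 * shift

L, d = shifted_cholesky(np.diag([1.0, -2.0, 3.0]))
print(d)   # a shift large enough to fix the -2 on the diagonal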
Normal vector of $\Gamma \times \mathbb{R}^+$ where $\Gamma$ is compact hypersurface
$Q$ is a hypersurface in $\mathbb R^{n+1}$ with boundary (a cylinder, to be more specific). On the interior of $Q$, the normal vector is nothing else than the normal vector to $\Gamma$ with a zero $(n+1)$st component. The boundary of $Q$ has codimension $2$, hence you have two independent normal vectors. One is again the vector defined before, and the other one is $e_{n+1}$, the unit vector of the last coordinate in $\mathbb R^{n+1}$. Addendum (after more details given in the comments): It seems that you are interested in the vector which is tangent to the hypersurface $Q$ and normal to the boundary $\partial Q$. Such a vector is $(0,-1)$ with $0\in \mathbb R^n$. If the variables of $\mathbb R^n \times \mathbb R^+$ are $(y,x)$, then the normal derivative of a function $u$ would be $-\partial u / \partial x$.
Does it hold that $f \ge 0$ almost everywhere if $\int_a^b f \varphi dx = 0$ for all $\varphi \in C^\infty_0((a,b); [0,1])$?
I'm going to assume that $f$ is integrable on $(a,b)$. Consider the function $\mathbb{1}_{\{f<0\}}$. Multiplying by a cut-off function and mollifying, we see that there exists a sequence $(\varphi_n)\subseteq C^\infty_0((a,b);[0,1])$ that converges to $\mathbb{1}_{\{f<0\}}$ in $L^1((a,b))$. Thus, up to a subsequence, $\varphi_n \to \mathbb{1}_{\{f<0\}}$ almost everywhere. By the dominated convergence theorem $$ \int_{(a,b)} f\varphi_n \to \int_{\{f < 0\}} f = 0. $$ So the theory you need is on mollifiers!
Let $f$ be continuous on $\mathbb{R}$, and $\inf_\mathbb{R}f(x)<0$. Prove that $\exists c$ such that $f(c)<0$
An option: Assume $f(x) \ge 0$ for all $x \in \mathbb{R}$. Then $0$ is a lower bound for $f$, and $\inf_{\mathbb{R}} (f) \ge 0$, a contradiction.
Compact but not sequentially compact question
The existence of the real number $r\in[0,1]=I$ such that its binary expansion has $k$th digit $0$ if $k$ is odd and $1$ if $k$ is even does not depend on the Axiom of Choice in any way whatsoever. This is nothing more than the number $$r = \sum_{i=0}^{\infty}\frac{1}{2^{2i+1}},$$ which is a convergent series of real numbers, being a series of positive terms that is bounded above by $$\sum_{i=1}^{\infty}\frac{1}{2^i} = 1.$$ Where exactly do you believe that this requires the Axiom of Choice? If your real numbers are defined as equivalence classes of Cauchy sequences, then $r$ is the equivalence class of the sequence of its partial sums, which can be defined using the induction/recursion theorem. If your real numbers are defined as Dedekind cuts, then $r$ is the cut determined by the union of the cuts determined by the partial sums, which again can be defined using recursion/induction. (P.S.: $$\begin{align*} r &= \sum_{i=0}^{\infty}\frac{1}{2^{2i+1}} = \frac{1}{2}\sum_{i=0}^{\infty}\frac{1}{2^{2i}}\\ &= \frac{1}{2}\sum_{i=0}^{\infty}\left(\frac{1}{4}\right)^i\\ &=\frac{1}{2}\left(\frac{1}{1-\frac{1}{4}}\right)\\ &= \frac{1}{2}\left(\frac{4}{3}\right)\\ &=\frac{2}{3}. \end{align*}$$ and, luckily, $\frac{2}{3}$ exists, even without the Axiom of Choice...)
Riemann Integral defined as limit of Riemann sum
Let $f, g \in R[a,b]$ and let $P=\{[x_0,x_1],\dots,[x_{n-1},x_n]\}$ be a partition with $x_0=a$ and $x_n=b$. Let $f$ and $g$ be continuous with $f\ge g$. Then $g(x)-f(x)\le 0$, and $g-f \in R[a,b]$, so $$\int_a^b(g(x)-f(x))\,dx=\int_a^{x_1}(g(x)-f(x))\,dx+\int_{x_1}^{x_2}(g(x)-f(x))\,dx+\dots+\int_{x_{n-1}}^{b}(g(x)-f(x))\,dx \le 0\cdot(b-a)=0,$$ so $$\left(\int_a^{x_1}g+\int_{x_1}^{x_2}g+\dots+\int_{x_{n-1}}^{b}g\right)-\left(\int_a^{x_1}f+\int_{x_1}^{x_2}f+\dots+\int_{x_{n-1}}^{b}f\right)\le 0,$$ that is, $$\int_a^b g-\int_a^b f\le 0.$$
Let $G$ be an $n$-vertex graph with at most $100n$ triangles. Prove that $G$ has a triangle-free...
Is there a better bound? Here is an upper bound of $\frac{n}{13}$, i.e. a graph such that every $\frac{n}{13}+1$ vertices induce a subgraph containing a triangle. Let $n$ be a multiple of $26$, and let $G$ be $\frac{n}{26}$ disjoint copies of $K_{26}$. The number of triangles is ${26 \choose 3}\frac{n}{26} = 100n$, and clearly the largest number of vertices such that the induced subgraph is triangle-free is $2\frac{n}{26} = \frac{n}{13}$.
Recurrence relation involving floor function.
$$x_n = \left\lfloor \frac{x_{n-1}}{3}\right\rfloor - 2$$ This sequence is strictly decreasing. Since $x_1, x_2, x_3, \dots$ will all be integers, we may as well study what happens starting from the integer $x_1$. Theorem. There are $3^k$ values of $x_1$ that will result in a given integer value of $x_{k+1}$. In particular (read $\{a \mid b \mid c\}$ as $``a$ or $b$ or $c"$) $$x_1 = 3^kx_{k+1} + \sum_{i=0}^{k-1} 3^i \{6 \mid 7 \mid 8\}$$ Proof. If $x_n = \left\lfloor \dfrac{x_{n-1}}{3}\right\rfloor - 2$, then $x_{n-1} = 3(x_n+2) + \{0 \mid 1 \mid 2\}$. So $x_{n-1} = 3x_n + \{6 \mid 7 \mid 8\}$, $x_{n-2} = 9x_n + 3\{6 \mid 7 \mid 8\} + \{6 \mid 7 \mid 8\}$, $x_{n-3} = 27x_n + 9\{6 \mid 7 \mid 8\} + 3\{6 \mid 7 \mid 8\} + \{6 \mid 7 \mid 8\}$. It follows by induction that $x_{n-k} = 3^k x_n + \sum_{i=0}^{k-1} 3^i \{6 \mid 7 \mid 8\}$. In particular, $$x_1 = 3^k x_{k+1} + \sum_{i=0}^{k-1} 3^i \{6 \mid 7 \mid 8\}$$ Hence there are $3^k$ values of $x_1$ that will result in a particular value of $x_{k+1}$. For example, if I wanted to make $x_4=-2$, then one number I could use would be $$x_1 = 27(-2) + 9(7) + 3(6) + 8 = 35$$ \begin{align} x_1 &= 35 \\ x_2 &= \left\lfloor \frac{35}{3} \right\rfloor - 2 = 9\\ x_3 &= \left\lfloor \frac 93\right\rfloor - 2 = 1\\ x_4 &= \left\lfloor \frac 13\right\rfloor - 2 = -2\\ \end{align}
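The worked example is easy to check in Python, where // is floor division (my addition):

x, seq = 35, [35]
for _ in range(3):
    x = x // 3 - 2          # matches the floor in the recurrence
    seq.append(x)
print(seq)                  # [35, 9, 1, -2]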
Need an explanation for binomial theorem question
I’m assuming that you have no problem with the fact that $(2x^2+5x^{-2})^8=(x^{-2})^8(2x^4+5)^8$, but just in case: $2x^2+5x^{-2}=x^{-2}(2x^4+5)$. We want the coefficient of $x^0$ in $(2x^2+5x^{-2})^8$; this is of course the same as the coefficient of $x^0$ in $(x^{-2})^8(2x^4+5)^8$. Note that $(x^{-2})^8(2x^4+5)^8=x^{-16}(2x^4+5)^8$, and suppose that we’ve multiplied out $(2x^4+5)^8$ and got some polynomial with terms like $ax^k$. When we finish the calculation by multiplying this polynomial by $x^{-16}$, each of its terms $ax^k$ will become $ax^{k-16}$. We want the term with exponent $0$ in the final product, which we’ll get when $k=16$. The coefficient won’t change when we multiply by the simple power $x^{-16}$, so the coefficient of $x^{16}$ in $(2x^4+5)^8$ will become the coefficient of $x^0$ in $x^{-16}(2x^4+5)^8$: the exponent will be reduced by $16$. Knowing this, we can apply the binomial theorem to $(2x^4+5)^8$: it’s equal to $$\sum_{i=0}^8\binom8{8-i}(2x^4)^{8-i}5^i=\sum_{i=0}^8\binom8{8-i}2^{8-i}x^{32-4i}5^i\;.$$ The exponent on $x$ is $16$ when $i=4$, and the coefficient is then $$\binom8{8-4}2^{8-4}5^4=\binom842^45^4=10000\binom84=700,000\;.$$
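SymPy confirms the coefficient (my addition):

from sympy import symbols, expand

x = symbols('x')
print(expand((2 * x**2 + 5 / x**2)**8).coeff(x, 0))   # 700000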
Let p be prime. If a group has more than $p-1$ elements of order $p$, why can't the group be cyclic?
An infinite cyclic group has no elements of finite order, except for $e$, which has order $1$. Thus, your group cannot be infinite cyclic. A finite cyclic group of order $n$ has exactly one subgroup of order $d$ for each $d$ dividing $n$. A subgroup of order $p$ in any group has exactly $p-1$ elements of order $p$. Therefore, if a group has more than $p-1$ elements of order $p$, then it has more than one subgroup of order $p$ and so cannot be cyclic.
$\sum X_n$ converges a.s.
Example: Let $X_n\sim N(0,n^{-2})$ (independent) and $Y_n=n^{-1}Z$ where $Z\sim N(0,1)$. Then $X_n\overset{d}{=} Y_n$ and $\sum X_n\to X$ in distribution (and hence $a.s.$), where $X\sim N\left(0,\pi^2/6\right)$. However, $$ \sum_{n=1}^N Y_n(\omega)=Z(\omega)\sum_{n=1}^N n^{-1} $$ which does not converge in $\mathbb{R}$ for $Z(\omega)\ne 0$.
Prove that $ES \leq \sigma$, with S being a random sample
Let $g(y) = y^{1/2}$. Since this is a concave function, we can apply Jensen's inequality with the direction of the inequality flipped: $E(g(Y)) \leq g(E(Y))$. Setting $Y = S^2$ and using that $E(S^2)=\sigma^2$ (the sample variance is unbiased), the result $E(S) \leq \sigma$ follows naturally.
Can the natural embedding $K\to K[X]/(f)$ be extended to form an isomorphism $L/K\to K[X]/(f)$?
$L$ is viewed as an extension $L/K$, but in fact $L$ just contains an isomorphic copy $K'$ of $K$ (as a field), which we identify with $K$: since they are isomorphic as fields, we think of $K'$ as $K$ and say $L/K$ rather than $L/K'$. So when you write about the restriction to $K$, it is important to note what this theorem gives you: $L:=K[x]/\langle f\rangle$, and $K$ is not a subset of $L$ - but the constant polynomials in the quotient are an isomorphic copy of $K$. Regarding the root $\alpha$ of $f$ in $L$: let $$\alpha=\bar{x}\in L= K[x]/\langle f\rangle;$$ then $$f(\alpha)=\sum_{i=0}^{n}a_{i}\bar{x}^{i}=\bar{f}\equiv_{\langle f\rangle}0.$$
Langevin equation: Why is the autocorrelation of the noise term infinite?
The noise term, $\eta$, is a Gaussian white noise. That is, it is a stationary mean-zero Gaussian process with flat spectral density $S_{\eta}\equiv p$ ($p$ is the "power level" of $\eta$). The autocorrelation function of $\eta$ is the inverse Fourier transform of $S_{\eta}$, i.e. $$ R_{\eta}(s)=\mathcal{F}^{-1}(S_{\eta})=p\delta(s), $$ where $\delta$ is the Dirac delta function. Specifically, the variance of $\eta(t)$ is infinite because otherwise the power spectrum would be a null function.
Weighting a cubic hermite spline
It's a Bézier curve. (No, Unity does not have a copyright or patent on Bézier curves.) Cubic Bézier curves are widely used in the computer graphics industry to create smooth curves like this one. You give the two endpoints of the curve $P_0, P_3$ and two intermediate "control points" $P_1,P_2$, and it produces a smooth curve that is tangent to the line $P_0P_1$ at the start and $P_2P_3$ at the end, where the magnitude of the derivative at these points is proportional to the distances $|P_0-P_1|$ and $|P_2-P_3|$. $$x(t)=(1-t)^3x_0+3(1-t)^2tx_1+3(1-t)t^2x_2+t^3x_3$$ $$y(t)=(1-t)^3y_0+3(1-t)^2ty_1+3(1-t)t^2y_2+t^3y_3$$ In this case, $P_0=(0,0)$ and $P_3=(1,1)$, and you are adjusting the locations of the control points $(x_1,y_1)$ and $(x_2,y_2)$. The "weights" are the values $x_1$ and $1-x_2$. One thing which makes the Unity graph a bit interesting is that it's been re-parameterized as a function $y(x)$, by inverting $x(t)$ and plugging in to $y(t)$, so it's not a simple cubic anymore in its expression (hence why the derivative goes to infinity in the limiting example). The function $x(t)$ stops being injective if you make the weights larger than 1, which is why that's the upper limit. When $x_1=1/3$ and $x_2=2/3$ then $x(t)=t$, so $y(x)$ is just a cubic function. This is the "unweighted" case. (Note that in the Unity interface the handles have a fixed length in "unweighted" mode, so the location of the handles is not the location of the control points, which threw me off a bit. To find the location of the control point, extend the line from the handle until it meets the line $x=1/3$.) When you turn on "weighted" mode, the handles are actually on the control points. Here's a Mathematica script that you can play with to see how moving the control points affects the curve, along with pictures for the "default" and "extreme" cases:

Manipulate[
  Graphics[{BezierCurve[pts, SplineDegree -> 3], Dashed, Green, Line[pts]},
    PlotRange -> {{0, 1}, {0, 1}}, Frame -> True, AspectRatio -> 1/1.6,
    ImageSize -> 600],
  {{pts, {{0, 0}, {1/3, 0}, {2/3, 1}, {1, 1}}}, Locator, LocatorAutoCreate -> True}]
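To see the re-parameterization concretely, here is a small Python sketch (my addition): it evaluates the two cubics and inverts $x(t)$ by bisection, which works because $x(t)$ is increasing when the weights are at most 1.

def bezier(t, p0, p1, p2, p3):
    s = 1 - t
    return s**3 * p0 + 3 * s**2 * t * p1 + 3 * s * t**2 * p2 + t**3 * p3

def y_of_x(x, x1, y1, x2, y2, tol=1e-9):
    lo, hi = 0.0, 1.0
    while hi - lo > tol:                        # solve x(t) = x by bisection
        mid = (lo + hi) / 2
        if bezier(mid, 0.0, x1, x2, 1.0) < x:
            lo = mid
        else:
            hi = mid
    return bezier(lo, 0.0, y1, y2, 1.0)

print(y_of_x(0.5, 1/3, 0.0, 2/3, 1.0))          # 0.5: the "unweighted" cubic y = 3x^2 - 2x^3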
Problem to find the matrix of linear transformation
My suggestion would be to evaluate the transformation at $e_1$, $e_2$ and $e_3$. Also note that the matrix associated to a transformation in a given basis $\{e_1,e_2,e_3\}$ is $[T(e_1)\,T(e_2)\,T(e_3)]$, where each $T(e_i)$ is a column vector.
Prove that : $\frac{10^{18n+12}-7}{3}\equiv 0\pmod{19}$
Let $a=10^{18n+12}-7$. If $19\mid a$ and $3\mid a$, then $19\mid 3\cdot\frac a3$, and $19\nmid 3$, so, by Euclid's lemma, $19\mid\frac a3$.
Natural Numbers as Vectors via Factorization?
I hope the idea is clear from the following example. $m=51450=2\times3\times5^2\times7^3$, think of $m$ as the vector $(1,1,2,3,0,0,\ldots)$. $n=16500=2^2\times3\times5^3\times11$, think of $n$ as the vector $(2,1,3,0,1,0,0,\ldots)$. The gcd of $m,n$ is $2\times3\times5^2$, think of this as the vector $(1,1,2,0,0,\ldots)$ in which each element is the minimum of the two corresponding elements above. And similarly for the lcm.
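This componentwise min/max view translates directly into code (my addition; factorint is SymPy's factorization routine, returning a prime-to-exponent dictionary):

from math import prod
from sympy import factorint

def gcd_lcm(m, n):
    fm, fn = factorint(m), factorint(n)
    primes = set(fm) | set(fn)
    g = prod(p ** min(fm.get(p, 0), fn.get(p, 0)) for p in primes)
    l = prod(p ** max(fm.get(p, 0), fn.get(p, 0)) for p in primes)
    return g, l

print(gcd_lcm(51450, 16500))   # (150, 5659500)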
Does $X^{C_2} \simeq * \simeq X/{C_2}$ imply $X \simeq *$?
Here is a counterexample, though it may not be very interesting to you because the space involved is not Hausdorff. Let $X=\{a,b,c,d,e\}$, with topology generated by the sets $\{a\}$, $\{b\}$, $\{c\}$, $\{d,a,b,c\}$, and $\{e,a,b,c\}$. Define $\sigma:X\to X$ by $\sigma(a)=b$, $\sigma(b)=a$, $\sigma(c)=c$, $\sigma(d)=e$, and $\sigma(e)=d$. Then $\sigma$ is a homeomorphism and gives an action of $C_2$ on $X$. The fixed points and quotient are contractible (the fixed points are just $\{c\}$ and the quotient has a point $[d]$ that is in the closure of every point), but $X$ is not contractible (in fact, it has the weak homotopy type of $S^1\vee S^1$).
Recurrence relation without modulo
$$ f(n+1) = 3-f(n)\qquad f(1) = 2 $$
Definition of Aut(G) in the graph theory and group theory
There is definitely a concept of an automorphism group of a set. It's just the permutation group of that set (the set of all bijections from that set to itself, under the operation of function composition). The following may or may not be helpful: Most objects we study in math form these things called categories. The definition of a category is pretty simple, but very abstract. Essentially, a category is a collection of objects and "maps" between these objects called morphisms. For example, the category of groups has groups as objects and homomorphisms as morphisms. The category of sets has sets as objects and functions as morphisms. Morphisms which have two-sided inverses are called isomorphisms. An isomorphism between an object and itself is called an automorphism. For example, in the category of groups, the only morphisms which have two-sided inverses are the bijective ones, and so automorphisms are bijective homomorphisms from a group to itself. A very important property of morphisms is that some morphisms can be composed together (and this composition is associative). Given any category $\mathcal C$ and an object $X$ in $\mathcal C$, we can form the automorphism group of $X$, denoted $\mathrm{Aut}(X)$, which is the set of all automorphisms of $X$ under composition. Groups, graphs, and sets all form categories, so each of these objects has an automorphism group. The automorphism group $\mathrm{Aut}(X)$ really is a group, even if the object $X$ is not a group (as we've seen, it could be a set or a graph).
Integration of improper integrals
Neither of your answers are correct unfortunately. It would actually be easier to write $\frac{4}{7e^{-2x} + 1} = \frac{4e^{2x}}{7 + e^{2x}}$. Take $u = e^{2x}, \; du = 2e^{2x}dx$. We therefore get $$ \int \frac{4e^{2x}}{7+e^{2x}} dx \;\; =\;\; \int \frac{2}{7+u} du \;\; =\;\; 2\ln(7+u) + c \;\; =\;\; 2\ln(7+e^{2x})+c. $$
Why is the Fourier transform called a 'transform', and not a 'transformation'?
I think the operation which assigns to the function $x \mapsto f(x)$ another function $p \mapsto \int f(x) e^{-ipx} dx = \widetilde {f}(p)$ should be called the Fourier transformation, as you suggest. On the other hand, the result of this operation, i.e. the function $\widetilde f$, is called the Fourier transform (a noun) of $f$.
Size of closed loop on a (bipartite) hexagonal lattice with equal number of enclosed A and B sublattice sites.
Yes, this is true. In fact this is also true for any closed loops which enclose an even number of interior lattice points. Let $p$ be the number of edges of a nontrivial simple loop on a hexagonal lattice. Let $k$ be the number of interior hexagons, and $x$ be the number of interior lattice points. Lemma: $2x+p-2 = 4k$. We can simply prove this by induction over $k$. Clearly the statement is true for a loop containing exactly one hexagon. Now consider a loop containing more than one hexagon. Choose an interior hexagon that touches the loop in exactly one connected subpath, i.e. an interior hexagon that does not separate the interior of the loop. Such a hexagon always exists. Now consider changing the path to exclude that hexagon, leaving all nonadjacent path edges intact. In doing so, $p$ changes by $-4, -2, 0, 2$ or $4,$ and $x$ changes accordingly, namely by $0, -1, -2, -3$ and $-4$ respectively. This means that $2x+p-2$ always changes by $-4$, just like $4k$ changes by $-4$. Apply the lemma to the new path (by induction), and we get $2x+p-2-4 = 4k-4$, hence $2x+p-2 = 4k$. $\square$ Corollary: $p$ is always even, and $x$ is even iff $p/2$ is odd. This answers your question. EDIT: There is a much nicer way to prove the lemma which does not require induction or involve postulating the existence of a hexagonal face with certain properties: Let $z$ denote the number of obtuse loop vertices, and let $y$ denote the number of reflex loop vertices. We know that $z+y = p$ and that $z-y = 6$. Consider cutting each interior hexagon into 12 right triangles by cutting along all its axes of symmetry. We can count the total number $t$ of such right triangles in two ways: $t = 12k$, but also $t = 6x + 4y + 2z$. So we have $12k = t = 6x+4y+2z = 6x+3(y+z)+(y-z) = 6x+3p-6$. Dividing by three yields $4k = 2x + p - 2$. $\square$
What is the smallest natural number divisible by the first $n$ natural numbers?
You are looking for A003418. The starting terms are as follows: 1, 1, 2, 6, 12, 60, 60, 420, 840, 2520, 2520, 27720, 27720, 360360, 360360, 360360, 720720, 12252240, 12252240, 232792560, 232792560
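The terms are just running least common multiples, e.g. (my addition):

from math import lcm

terms, running = [1], 1        # a(0) = 1
for n in range(1, 20):
    running = lcm(running, n)
    terms.append(running)
print(terms)   # [1, 1, 2, 6, 12, 60, 60, 420, 840, 2520, 2520, 27720, ...]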
Countably Infinite, Uncountable or Finite
A set is "finite" if it can be placed in 1-1 correspondence with the set of natural numbers $&lt; n$ for some $n$. More generally, it is sufficient to place it in 1-1 correspondence with some set of integers which is bounded both above and below. A set is "countable" if it can be placed in 1-1 correspondence with some subset of the natural numbers. Note that this includes finite sets, but also some infinite sets. Since the sets of all rationals is in 1-1 correspondence with the set of naturals, it is sufficient to place a set in 1-1 correspondence with some set of rationals to show that it is countable. A set is "infinite" if it is not finite. Since any finite set of real numbers is bounded, to prove a set is infinite, it is sufficient to put it in 1-1 correspondence with any unbounded set of real numbers. A set is "countably infinite" or "denumerable", if it is both countable and infinite. From the above remarks, it follows that to prove denumerability, it is sufficient to put a set in 1-1 correspondence with some unbounded set of rational numbers. A set is "uncountable" if it is not countable. Since all finite sets are countable, uncountable sets are all infinite. Per Cantor's theorem, the real numbers are uncountable. Further, since any interval can be put in 1-1 correspondence with the entire set of real numbers, to show a set is uncountable, it is sufficient to put it in 1-1 correspondence with some interval. The size of any set is obviously greater than or equal to the size of any subset. This gives us some inheritance relations: Any subset of a finite set is also finite. Any subset of a countable set is also countable. Any superset of an infinite set is also infinite. Any superset of an uncountable set is also uncountable. For your problems: for each real number in the interval $(0,1)$, we can consider its binary expansion. Certain numbers have two binary expansions, one ending in repeating $0$s and one ending in repeating $1$s. For example: $$0.10000\ldots_2 = 0.01111\ldots_2$$ For definiteness, we always choose the one with repeating $0$s. Each $x \in (0,1)$ can be translated into the function $f_x : \Bbb N \to \{\text{true},\ \text{false}\}$ that carries $i$ to the value $\text{true}$ if the $i^\text{th}$ bit of $x$ is $1$, and $\text{false}$ otherwise. The map $x \mapsto f_x$ is a 1-1 correspondence between the interval (0,1) and a subset of your set of functions. Since the subset is therefore uncountable, your set is uncountable. The map $x \mapsto (x, 0, 0, 0)$ is a 1-1 correspondence between $\Bbb R$ and a subset of $\Bbb R^4$. Hence $\Bbb R^4$ is uncountable. Any constant function of $n$ is $O(n^2)$. For each $a \in \Bbb R$, define $g_a\ :\ \Bbb N \to \Bbb R\ :\ n \mapsto a$. I.e., $g_a$ is the constant map with image $\{a\}$. Then $a \mapsto g_a$ is a 1-1 correspondence between $\Bbb R$ and a subset of the set of $O(n^2)$ functions. Hence the set of $O(n^2)$ functions is also uncountable.
Determining the pattern in a basic numerical series
As you noticed, the numerators correspond to double factorial of odd numbers and the denominators to factorial of odd numbers. So the general term of your numbers seems to be $$-\frac{(2 n-3)\text{!!}}{(2 n+1)!}$$
Sand-Timer egg boiling
If all times are assumed to be integers, then in general, if you have two sand timers that measure $a$ minutes and $b$ minutes respectively, and you seek to measure $c$ minutes: if $\operatorname{gcd}(a,b)$ divides $c$, you can use the Extended Euclidean Algorithm to find integers $m$ and $n$ such that $$am + bn = c.$$ If both $m$ and $n$ are positive, then run the first timer $m$ times and the second timer $n$ times after you're done with the first timer. If one of $m$ and $n$ is negative, then start both timers at the same time, run the first $|m|$ times back to back and the second $|n|$ times back to back, and the time you seek to measure will be the time between when whichever timer finishes its iterations first does so and the finishing of all of the iterations of the final timer. Since the time to measure is assumed to be positive, these are the only cases.
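For illustration, here is the extended Euclidean algorithm in Python applied to a made-up example, timers of $4$ and $7$ minutes measuring $9$ minutes (my addition):

def extended_gcd(a, b):
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

a, b, c = 4, 7, 9
g, x, y = extended_gcd(a, b)        # g = gcd(a, b), with a*x + b*y = g
assert c % g == 0                   # the divisibility condition above
m, n = x * (c // g), y * (c // g)
print(m, n, a * m + b * n)          # 18 -9 9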
Maximization of ratio of two functionals
Clearly the maximum is $+\infty$. Consider for example $r=f(\theta)=\epsilon |\cos(\theta/2)|$. Then $$ \frac{\int_0^{2\pi}r^3\cos\theta d\theta}{\int_0^{2\pi}r^4 d\theta}=\frac{32}{15\pi\epsilon} $$ and this tends to $+\infty$ as $\epsilon$ goes to $ 0^+$.
Singularity of a surface
I know a surface is singular at a point when gradient vanishes at that point. $\newcommand{\Reals}{\mathbf{R}}\newcommand{\grad}{\nabla}$To clarify: If $F:U \to \Reals$ is a real-valued function on some non-empty open subset $U$ of $\Reals^{3}$, and if $\grad F(p) \neq 0$ at some point $p$ of $U$, the implicit function theorem guarantees that the level set of $F$ through $p$ is a regular surface in some neighborhood of $p$. There are at least two snags with your proposed example: Your surface is expressed as a graph, not as a level surface. If you express the defining equation $z = f(x, y)$ in "level set form" $$ 0 = F(x, y, z) := z - f(x, y), $$ you find that $\grad F = (-f_{x}, -f_{y}, 1) \neq (0, 0, 0)$. The converse of the implicit function theorem is not true. The level surface $$ 0 = F(x, y, z) = z^{3} $$ is the image of a regular surface, but $\grad F \equiv 0$ everywhere on the level surface.
Relation between the distribution functions of random variables $Y$ and $-Y$
Let $Y$ have exponential distribution. It looks as if we are defining a new random variable $-Y$. We want the cumulative distribution function of $-Y$. The interesting part of the distribution function $F_{-Y}(w)$ is when $w$ is negative. We have $$F_{-Y}(w)=\Pr(-Y\le w)=\Pr(Y\ge -w)=1-F_Y(-w).\tag{1}$$ Note that this is different from what in OP is described as the book's claim. Now differentiate to find $f_{-Y}(w)$. The differentiation introduces two cancelling minus signs, and from (1) we get $f_{-Y}(w)=f_Y(-w)$. Perhaps the book mistakenly used $F$ instead of $f$.
Let $G$ be a finite abelian group with elements $a_1,a_2,\dots,a_n$. If $G$ has more than one element of order $2$ then $a_1a_2\dots a_n=1$.
To finish your inductive proof, first look at the elements in $H$ that don't contain a factor $b_k$ (we'll include $1$ here for counting purposes). This is just the product of all elements of $K$, so by the inductive hypothesis, they all multiply to $1$. Now look at the elements of $H$ that do contain a factor $b_k$. This is the same as above, except that each element now carries an additional factor $b_k$ (remember to include the lone $b_k$ as well). The product of all those elements is therefore $b_k^{|K|}$ times the product of all elements of $K$, which simplifies to just $b_k^{|K|}$ by the inductive hypothesis. Lastly, note how many elements there are in $K$, and you see that $b_k^{|K|} = 1$. One could also try to appeal to some sort of symmetry moral with this problem (this is not a valid proof, by any measure, but I personally like to think along these lines when pondering the eternal question "yeah... but why?"). Note that $(a_1a_2\cdots a_n)^2 = 1$, so $a_1a_2\cdots a_n$ is either the identity or some order-$2$ element. If there is only one order-$2$ element in the group, then there's nothing wrong with $a_1a_2\cdots a_n$ being that one element. But if there are more than one, how would the group know which one to pick? The group being abelian means that there is no algebraic property that distinguishes any of the order-$2$ elements, but "being the product of all the elements in the group" is a pretty distinguishing feature. The most (/ only?) consistent choice for $a_1a_2\cdots a_n$ is therefore $1$.
Show that $\sin 10^\circ$ is irrational
Use the identity $\sin(3a)=3\sin(a)-4\sin^3(a)$ with $a=10^\circ$: $$1/2 = \sin 30^\circ = 3 \sin 10^\circ - 4\sin^3 10^\circ$$ $$1=2\sin 30^\circ = 6 \sin 10^\circ - 8\sin^3 10^\circ$$ Then if you set $x=2\sin 10^\circ$ you get $$x^3 - 3x+1 = 0.$$ By the rational root theorem, the only possible rational roots of this polynomial are $\pm 1$, and neither works; so $x$, and hence $\sin 10^\circ$, is irrational.
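A quick numerical check of the cubic and of the rational-root computation (a sketch in Python):

```python
import math
from fractions import Fraction

x = 2 * math.sin(math.radians(10))
print(x**3 - 3 * x + 1)            # ~ 0, so x = 2 sin(10 deg) is a root

# rational root theorem: any rational root of x^3 - 3x + 1 must be +-1
for c in (Fraction(1), Fraction(-1)):
    print(c, c**3 - 3 * c + 1)     # gives -1 and 3, so there are no rational roots
```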
When does $\ln(x)=\sin(x)$?
If $\log x = \sin x$, then $x>0$ (otherwise the logarithm is not defined) and $x\leq e$ (otherwise $\log(x)>1\geq\sin(x)$), so we just have to find the roots of $f(x)=\sin x-\log x$ over $I=(0,e]$. There is at least one root since $e<\pi$ implies that $f$ has opposite signs near the endpoints of $I$. Such a root is unique since $f(x)$ is decreasing over $I$, as a consequence of: $$ f'(x) = \cos x-\frac{1}{x} < 0.\tag{1}$$ To prove $(1)$ it is sufficient to study the function $g(x)=x\cos x-1$ over $I$. By computing its derivative, we see that it has a maximum where $x=\cot x$, hence at some $x<1$. But if $x<1$, then $g(x)<0$. Since $f$ is concave (by computing $f''$) and negative in a right neighbourhood of the root, we can find such a root by choosing $x=e$ as a starting point for Newton's method.
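A minimal sketch of that Newton iteration in Python, starting from $x=e$ as suggested:

```python
import math

def f(x):  return math.sin(x) - math.log(x)
def fp(x): return math.cos(x) - 1.0 / x     # f'(x)

x = math.e                  # starting point suggested above
for _ in range(8):          # Newton's method
    x -= f(x) / fp(x)
print(x, f(x))              # root ~ 2.2191..., residual ~ 0
```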
What is the difference between bounded boundary and a bounded domain?
The boundary of a set $\Omega \subseteq \def\R{\mathbb R} \R^n$ is usually defined as $\partial\Omega := \bar\Omega - \Omega^\circ$. $\Omega$ has bounded boundary iff $\partial \Omega \subseteq \R^n$ is a bounded set. Every bounded set $\Omega$ has bounded boundary (as $\Omega$ bounded by $R$ implies $\bar\Omega$ bounded by $R$), but the converse is false: since $\partial\Omega = \partial (\R^n - \Omega)$ holds for every $\Omega$, the complement of a bounded set has bounded boundary without being bounded itself. For instance, $\Omega = \{x \in \R^n : |x| > 1\}$ is unbounded, yet $\partial\Omega$ is the unit sphere, which is bounded.
Follow up to a question, why does proof $\rho(x,y) = \dfrac{d(x,y)}{1+d(x,y)}$ work
$0 \le a < b \implies 0 \le \frac{a}{b} \le\frac{a+n}{b+n}< 1$ where $n\ge0$, with $\lim_{n\to\infty}\frac{a+n}{b+n}=1$. We can use this since, of course, the difference between the numerators is the same as the difference between the denominators. See this question if you want a proof of the inequality.
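A brute-force sanity check of the resulting triangle inequality, sketched in Python with $d(x,y)=|x-y|$ on $\mathbb{R}$ (the sample count and range are arbitrary):

```python
import random

def rho(d):
    return d / (1 + d)

random.seed(0)
for _ in range(10**5):
    x, y, z = (random.uniform(-10, 10) for _ in range(3))
    # d(x, y) = |x - y|, the usual metric on R
    assert rho(abs(x - z)) <= rho(abs(x - y)) + rho(abs(y - z)) + 1e-12
print("no violations found")
```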
Matroids and greedy algorithms - how can a singleton subset be dependent?
What you're describing is called a loop, that is, an element of the matroid that by itself has rank $0$. Thinking of a representable matroid, where the elements are vectors, the zero vector is a loop: the span of the zero vector is just $\{0\}$. In a graphic matroid, a loop is a graph loop (matroid terminology borrows heavily from graph theory and linear algebra), that is, an edge from a vertex to itself, since it contributes nothing to the connectivity. Intuitively, loops are redundant elements, so they can never add anything to the rank.
A real analysis problem on continuous functions
Since $f$ is continuous and maps $[0,1]$ to $[0,1]$, you can apply the intermediate value theorem to the function $g$ defined by $g(x) = f(x) - x$ for all $x \in [0,1]$: $g$ is continuous, and $g(0) = f(0) \geq 0$ while $g(1) = f(1) - 1 \leq 0$, so $g(0)g(1) \leq 0$ by hypothesis.
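To see the hint in action, here is a small bisection sketch with the hypothetical choice $f=\cos$, which does map $[0,1]$ into $[0,1]$:

```python
import math

f = math.cos                    # one sample f mapping [0,1] into [0,1]
g = lambda x: f(x) - x          # g(0) >= 0 >= g(1), so the IVT applies

lo, hi = 0.0, 1.0
for _ in range(60):             # bisection on g
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
print(lo)                       # fixed point ~ 0.7390851
```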
Is my modification to this solution valid?
The first two circles determine two intersections. To check whether the third circle passes through one of them, it suffices to compute the distance $d$ from that point to the third circle's center and compare it with the expected radius. The condition should be $$|d-r_3|<\theta$$ where $\theta$ is some tolerance (which depends on the accuracy of your data).
Find a power series solution to $xy''(x) + 2y'(x) + xy(x) = 1, \quad y(0) = 0$
If I may suggest, do not change the index and write first $$x\sum_{k=0}^{\infty} k(k-1) a_k x^{k-2}+2\sum_{k=0}^{\infty} k a_kx^{k-1}+x\sum_{k=0}^{\infty} a_kx^{k}-1=0$$ that is to say $$\sum_{k=0}^{\infty}\big[k(k-1)+2k \big] a_kx^{k-1}+\sum_{k=0}^{\infty} a_kx^{k+1}-1=0$$ $$\sum_{k=0}^{\infty}k(k+1)a_kx^{k-1}+\sum_{k=0}^{\infty} a_kx^{k+1}-1=0$$ To match the coefficient of $x^m$, for the first summation you need $k-1=m$, that is to say $k=m+1$, and for the second $k+1=m$, that is to say $k=m-1$. The constant term ($m=0$) gives $2a_1-1=0$, so $a_1=\frac12$, while for $m\ge1$ $$(m+1)(m+2)a_{m+1}+a_{m-1}=0,$$ and you already know that $a_0=0$. Now, work out the very first terms.
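A short sketch that runs the recurrence; the coefficients it prints are those of $(1-\cos x)/x$, which one can check solves the equation, since $xy''+2y'=(xy)''$:

```python
from fractions import Fraction

# a_0 = 0 from y(0) = 0; the constant term forces 2 a_1 - 1 = 0;
# then a_{m+1} = -a_{m-1} / ((m+1)(m+2)) for m >= 1
a = [Fraction(0), Fraction(1, 2)]
for m in range(1, 8):
    a.append(-a[m - 1] / ((m + 1) * (m + 2)))
print(a)   # 0, 1/2, 0, -1/24, 0, 1/720, ... : the Maclaurin coefficients of (1 - cos x)/x
```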
When does a Path Algebra give a unique Quiver?
I'll assume that the quiver $Q$ has finitely many vertices and arrows. Then even if $Q$ has oriented cycles, it is still true that the path algebra $\mathbb{C}Q$ determines the quiver $Q$.

(1) In the case of a quiver with no oriented cycles, a quick way to recover the quiver is as follows: there is a simple module $S_i$ associated with each vertex $i$, and the number of arrows from vertex $i$ to vertex $j$ is $\dim_{\mathbb{C}}\operatorname{Ext}^1_{\mathbb{C}Q}(S_i,S_j)$. [I'm not sure that this is the proof that Derksen was asking for, as the book doesn't seem to assume knowledge of $\operatorname{Ext}$.] So knowing the simple $\mathbb{C}Q$-modules and the extensions between them lets you recover the quiver.

(2) If the quiver has oriented cycles but no loops (arrows with target equal to source), then there are more simple modules, but the obvious simples associated to the vertices are the only $1$-dimensional simples, and the same method of recovering the quiver works, if you only consider the $1$-dimensional simples.

(3) If the quiver has loops, then there are more $1$-dimensional simples (consider the representation with $\mathbb{C}$ at vertex $i$, zero at every other vertex, with the loops at vertex $i$ acting by multiplication by arbitrary scalars). But the same method works if we can pick out one $1$-dimensional simple module for each vertex. There may be a simpler method, but one way to do this is to consider the abelianization of $\mathbb{C}Q$. This is a product of polynomial algebras $\mathbb{C}[x_1,\dots,x_{r_i}]$, one for each vertex $i$, where $r_i$ is the number of loops at vertex $i$. So it has one primitive idempotent for each vertex, and for each of these idempotents we can choose any $1$-dimensional simple module (it doesn't matter which) that is not annihilated by that idempotent. As before, the quiver can then be recovered by considering $\operatorname{Ext}^1_{\mathbb{C}Q}$ between these simple modules.
Can a non-constant analytic function have infinitely many zeros on a closed disk?
The relevance of the fact that the set is closed is that the limit point must be in the set. For example, $\sin (1/(z+1))$ has infinitely many zeros in the open unit disc but is not identically zero. (The sole accumulation point is $-1$, outside the domain, and the function is not analytic there.) On the multiplicity: first, I think distinct zeros are meant. Second, a non-zero analytic function (on a connected domain) cannot vanish to infinite order anywhere: that would force the Taylor series at such a point to be identically $0$, and hence the function to vanish identically.
Rewrite the equation as a system of equations
I think they are writing the third-order linear ODE as a system of first-order equations, but there are some problems in what is written. We have $x_1 = y$, so $x'_1 = y' = x_2$, $x'_2 = y'' = x_3$, and $$x'_3 = y''' = 2y'' - y' - 3y + e^t = -3x_1 -x_2 + 2 x_3 + e^t.$$ In matrix form, we can write this system as: $$X'(t) = \begin{bmatrix} {x_1}'\\ {x_2}'\\ {x_3}' \end{bmatrix} = \begin{bmatrix} {y}'\\ {y}''\\ {y}''' \end{bmatrix} = AX(t) + F(t) = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -3 & -1 & 2 \end{bmatrix}\begin{bmatrix} {x_1}\\ {x_2}\\ {x_3} \end{bmatrix} + \begin{bmatrix} 0\\ 0\\ e^t \end{bmatrix}$$
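A minimal numerical sketch of this system with scipy (the initial data $y(0)=1$, $y'(0)=y''(0)=0$ are hypothetical, just to have something to integrate):

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0, 1, 0],
              [0, 0, 1],
              [-3, -1, 2]], dtype=float)

def rhs(t, X):
    # X' = A X + F(t), with F(t) = (0, 0, e^t)
    return A @ X + np.array([0.0, 0.0, np.exp(t)])

# hypothetical initial data: y(0) = 1, y'(0) = 0, y''(0) = 0
sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 0.0, 0.0], dense_output=True)
print(sol.y[:, -1])   # (y, y', y'') at t = 1
```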
Can we calculate derivatives in terms of matrix-by-matrix?
In index notation $$\eqalign{ Y_{ij} &= H_{ik}\,W_{kj} + B_{ij} \cr\cr dY_{ij} &= H_{ik}\,dW_{kj} \cr\cr \frac{\partial Y_{ij}}{\partial W_{ps}}&=H_{ik}\,\delta_{kp}\,\delta_{js} \,\,= H_{ip}\,\delta_{js} \cr\cr }$$ In matrix notation $$\eqalign{ Y &= HW + B \cr\cr dY &= H\,dW \cr \operatorname{vec}(dY) &= \operatorname{vec}(H\,dW\,I) \cr dy &= (I\otimes H)\,dw \cr\cr \frac{\partial y}{\partial w} &= I\otimes H \cr }$$
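A quick numpy check of the identity $\operatorname{vec}(H\,dW)=(I\otimes H)\operatorname{vec}(dW)$, with $\operatorname{vec}$ the column-stacking operator (the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
m, k, n = 3, 4, 2
H = rng.standard_normal((m, k))
dW = rng.standard_normal((k, n))

vec = lambda M: M.reshape(-1, order='F')        # column-stacking vec
lhs = vec(H @ dW)                               # vec(dY)
rhs = np.kron(np.eye(n), H) @ vec(dW)           # (I kron H) vec(dW)
print(np.allclose(lhs, rhs))                    # True
```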
Spanning Sets in Inner Product Spaces
a) One appropriate choice of $v$ suffices. Try $v=x$. b) Write $x=\lambda_1v_1+\ldots+\lambda_nv_n$ and use the bilinearity of the inner product when computing $(x,x)=(x,\lambda_1v_1+\ldots+\lambda_nv_n)$. c) Apply b) to $x-y$ instead of $x$.
A hard series convergence question
If $\sum_{n=1}^\infty t_n$ converges, $t_n \rightarrow 0$. But $$ \lim_{x \rightarrow 0} 1 - 2 \mathrm{e}^{-2 x^2} = -1 \text{,} $$ so your product cannot converge by having terms $\rightarrow 1$. Maybe we can get this to work the other way around, having the product "diverge to $0$"... Suppose we want, for $n > N \in \mathbb{N}$, $-1 < 1 - 2 \mathrm{e}^{-2 t_n^2} < -1 + \epsilon$ so that (eventually) all the terms are less than unit magnitude and we can arrange for them to converge to $-1$ so that the sum may converge. Then $$ 0 < t_n < \frac{1}{\sqrt{2}} \sqrt{-\ln \frac{2-\epsilon}{2}} \text{.} $$ If we try $\epsilon \mapsto n^{-n}$, $t_n = \frac{1}{\sqrt{2}} \sqrt{-\ln \frac{2-n^{-n}}{2}}$, and then the product is $\prod_{n=1}^\infty (-1+n^{-n}) = 0$, which converges by diverging to $0$. Now the logarithm is concave down everywhere and passes through $(1,0)$, so \begin{align*} \frac{1}{\sqrt{2}} \sqrt{-\ln \frac{2-n^{-n}}{2}} &= \frac{1}{\sqrt{2}} \sqrt{\ln \frac{2}{2-n^{-n}}} \\ &= \frac{1}{\sqrt{2}} \sqrt{\ln \left( 1 + \frac{n^{-n}}{2-n^{-n}}\right)} \\ &< \frac{1}{\sqrt{2}} \sqrt{\frac{n^{-n}}{2-n^{-n}}} \\ &< \frac{1}{\sqrt{2}} \sqrt{\frac{n^{-n}}{1}} \\ &= \frac{1}{\sqrt{2}} n^{-n/2} \text{.} \end{align*} Then, since $\sum_{n=1}^\infty n^{-n/2}$ converges, so does $\sum_{n=1}^\infty t_n$. In retrospect, $\epsilon \mapsto 2n^{-2n}$ might have been a little nicer to push through the comparison test. As I noted in a comment, $$ t_n = \begin{cases} -2, &n = 1 \\ 3^{1-n}, &n > 1 \end{cases} $$ makes your product $-3 \cdot \prod_{n=1}^\infty \left(1 - 2 \mathrm{e}^{-2 (3^{-n})^2}\right) = -3 \cdot -(1/3; 1/3)_\infty = 1.68038\dots$ and your sum still converges (since a finite initial sequence has no impact on convergence). We can make the product converge to whatever value you like by altering the one inserted sequence member.
Pigeonhole Principle Problem
There are only ten possible values for $a^k\bmod10$ for all natural numbers $a$, but infinitely many possible values for $k$.
What is wrong with that counting of $S_{3}\times S_{3}$ subgroups?
Not every pair of order $2$ elements generates a $2$-Sylow subgroup. For example $((1 \ 2), 1)$ and $((2 \ 3), 1)$ generate $S_3 \times 1$.
Continuity of $ f(x,y) = \frac{|x|^a|y|^b}{\sqrt{x^2 + y^2}} $ at the point (0,0) using $\epsilon, \alpha$ definition.
Suppose first that $a+b>1$. We want $\frac{1}{\epsilon} < \frac{||(x,y)||_2}{|x|^a|y|^b}$. Since $|x|,|y| \leq ||(x,y)||_2$, it suffices to make sure that $\frac{1}{\epsilon} < \frac{||(x,y)||_2}{||(x,y)||_2^{a+b}}$. This is equivalent to $||(x,y)||_2^{a+b-1} < \epsilon$, or $||(x,y)||_2 < \epsilon^{1/(a+b-1)}$. Hence, we can take $\alpha = \epsilon^{1/(a+b-1)}$. If $a+b <1$, then $\lim_{r \to 0} f(r,r) = +\infty$, so surely $f$ is not continuous. If $a+b=1$, then $\lim_{r \to 0} f(r,r) = \frac{1}{\sqrt{2}}$, so surely $f$ is not continuous.
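A small numerical illustration of the three regimes along the diagonal $x=y$, where $f(r,r)=r^{a+b-1}/\sqrt{2}$ (the exponent pairs are arbitrary samples):

```python
import numpy as np

def f(x, y, a, b):
    return abs(x)**a * abs(y)**b / np.hypot(x, y)

r = 10.0 ** -np.arange(1, 8)                        # r -> 0 along the diagonal
for a, b in [(0.8, 0.6), (0.5, 0.5), (0.3, 0.3)]:   # a+b > 1, = 1, < 1
    print((a, b), f(r, r, a, b))
# a+b > 1: values -> 0;  a+b = 1: values -> 1/sqrt(2);  a+b < 1: values blow up
```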
Solve in $\mathbb{C}$ the equation $z^2-(1+m)(1+i)z+i(m^2+1)=0$
By completing the square, we can obtain $$\left(z-\frac{(1+m)(1+i)}{2}\right)^2=\frac{(m-1)^2}{2}~{\rm cis}\left(\frac{3\pi}{2}+2k\pi\right),$$ where $k$ is an integer. Then we apply de Moivre's theorem, which states that for a positive integer $n$, the set of solutions of $z^n=r\bigg(\cos(\theta+2k\pi) + i~\sin(\theta+2k\pi)\bigg)$ is $$\begin{align} z_0 &= \sqrt[n]{r}\bigg(\cos(\theta/n) +i~\sin(\theta/n)\bigg) \\ z_1 &= \sqrt[n]{r}\bigg(\cos\Big(\frac{\theta}{n}+\frac{2\pi}{n}\Big) +i~\sin\Big(\frac{\theta}{n} +\frac{2\pi}{n}\Big) \bigg) \\ z_j &= \sqrt[n]{r}\bigg(\cos\Big(\frac{\theta}{n}+\frac{2\pi j}{n}\Big) +i~\sin\Big(\frac{\theta}{n} +\frac{2\pi j}{n}\Big)\bigg) \text{ for } 0 \le j \leq n-1 \end{align}$$ So for your situation, $j=0,1$. Just do the computation and you will find the two answers in terms of $m$. If someone finds a simpler method then please tell me.
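For what it's worth, carrying the computation through appears to give $z=1+mi$ and $z=m+i$; a sympy check that both satisfy the original equation:

```python
import sympy as sp

z = sp.symbols('z')
m = sp.symbols('m', real=True)
p = z**2 - (1 + m) * (1 + sp.I) * z + sp.I * (m**2 + 1)

for root in (1 + sp.I * m, m + sp.I):
    print(sp.expand(p.subs(z, root)))   # both print 0
```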
Does the Poincare homology sphere smoothly embed in $\mathbb{C}P^2$?
No. Splitting $\Bbb{CP}^2$ into two pieces $X_1, X_2$ along the Poincaré homology sphere is such that $H_*(X_1) \oplus H_*(X_2) = H_*(\Bbb{CP}^2)$ for $* < 3$ (run Mayer-Vietoris). In particular we split the intersection form. Therefore one of the two pieces is a homology ball and the other is positive-definite. $\Sigma(2,3,5)$ cannot bound the first piece, as it never bounds any negative-definite manifold with diagonal intersection form by Donaldson's theorem: if it did, then capping off the $-E_8$-manifold (a smooth manifold with intersection form $-E_8$ and boundary $-\Sigma(2,3,5)$) would give a smooth closed oriented 4-manifold with negative-definite, non-diagonalizable intersection form. In a related but different direction, Proposition 3.4 of this great paper shows that the minimal $k$ such that $\Sigma(2,3,5)$ embeds in $\#^k S^2 \times S^2$ is $k = 8$.
Unique manifold structure and differentiable structure on submanifold
The claim that the manifold structure on $(A,i)$ is the only one making it equivalent to $(N,\phi)$ is a typical instance of transport of structure: If $A$ is given some smooth manifold structure such that $(A,i)$ is equivalent to $(N,\phi)$, then in particular there is a diffeomorphism $\alpha\colon N\to A$ such that $i\circ\alpha = \phi$, which is to say that $\phi$ itself (with its codomain restricted to $A$) is a diffeomorphism from $N$ to $A$ with the given manifold structure. Thus the topology of $A$ is uniquely determined by declaring its open sets to be those of the form $\phi(U)$ for $U$ open in $N$, and the smooth structure of $A$ is uniquely determined by declaring the coordinate maps to be all maps of the form $\psi\circ\phi$, where $\psi$ is any coordinate map for $N$.

The second claim, "given a subset $A$, there is at most one differentiable structure on $A$ such that $(A,i)$ is a submanifold of $M$," doesn't quite make sense the way you stated it, because a "differentiable structure" only makes sense on a topological space, not on a mere set. I can think of three ways of interpreting it:

1. Given a subset $A$ with the subspace topology, there is at most one differentiable structure on $A$ such that $(A,i)$ is a submanifold of $M$.
2. Given a subset $A$ and some fixed topology on it, there is at most one differentiable structure on $A$ such that $(A,i)$ is a submanifold of $M$.
3. Given a subset $A$, there is at most one manifold structure on $A$ such that $(A,i)$ is a submanifold of $M$.

Statements 1 and 2 are true (and 1 is a special case of 2). This is Theorem 5.32 in my Introduction to Smooth Manifolds, 2nd ed. (The proof, which is left as a problem for the reader, is basically a straightforward application of Theorem 5.29 on restricting the codomain of a smooth map.) But statement 3 is false. A standard counterexample is a figure-eight curve in the plane: There are two different manifold structures that turn it into a smooth submanifold, each diffeomorphic to an open interval.
Does this vector product, based on indexing with a powerset, have a name?
This is a version of the exterior product, but missing some signs. It can be described as the multiplication in a certain ring, namely the ring $$\mathbb{R}[x_1, x_2, \dots x_n]/(x_1^2 = x_2^2 = \dots = 0)$$ where the set $X$ has $n$ elements, and for convenience we'll identify it with the set $\{ 1, 2, \dots n \}$. A "vector indexed by $2^X$," which I'll interpret as a function $f : 2^X \to \mathbb{R}$, is sent to the element $$\sum_S f(S) \prod_{i \in S} x_i$$ in this ring. The point of the relations $x_i^2 = 0$ is that the product of two monomials $x_S = \prod_{i \in S} x_i$ and $x_T = \prod_{i \in T} x_i$ vanishes as soon as $S$ and $T$ have nontrivial intersection, and if their intersection is trivial then the result is $x_{S \cup T}$.
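A minimal sketch of this multiplication in Python, with vectors stored as dicts from frozensets (subsets of $X$) to reals; overlapping index sets are killed, exactly as the relations $x_i^2=0$ dictate:

```python
def multiply(f, g):
    """Product of two vectors indexed by subsets of X, i.e. multiplication
    in R[x_1..x_n]/(x_i^2 = 0), without the exterior-algebra signs."""
    h = {}
    for S, fS in f.items():
        for T, gT in g.items():
            if S & T:                 # overlapping monomials die: x_i^2 = 0
                continue
            h[S | T] = h.get(S | T, 0.0) + fS * gT
    return h

# example over X = {1, 2}: (1 + x1) * (x1 + x2) = x1 + x2 + x1*x2
f = {frozenset(): 1.0, frozenset({1}): 1.0}
g = {frozenset({1}): 1.0, frozenset({2}): 1.0}
print(multiply(f, g))
```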
Sum of $2009$ roots of unity
You have that $$\sum_{r=1}^{2008}r(\alpha_r+\alpha_{2009-r})=\sum_{r=1}^{2008}r\alpha_r+\sum_{r=1}^{2008}r\alpha_{2009-r}=\sum_{r=1}^{2008}r\alpha_r+\sum_{r=1}^{2008}(2009-r)\alpha_r$$ Can you take it from here?
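A numeric sanity check of where this is heading, assuming $\alpha_r=e^{2\pi i r/2009}$ (so the $2008$ nontrivial roots sum to $-1$):

```python
import numpy as np

alpha = np.exp(2j * np.pi * np.arange(2009) / 2009)  # alpha[r] = e^{2 pi i r / 2009}
r = np.arange(1, 2009)
s = np.sum(r * (alpha[r] + alpha[2009 - r]))
print(s)   # ~ -2009 = 2009 * (-1), since the nontrivial roots sum to -1
```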
$f_n$ converges in measure implies $\liminf f_n = 0$ a.e.
In the first part you can apply (or repeat) the Borel-Cantelli lemma. Since $f_n\to0$ in measure, there is an increasing sequence $n_k$ such that $\mu\big(\{f_{n_k}\ge\tfrac1{2^k}\}\big)<\tfrac1{2^k}$. We will show that $f_{n_k}\to0$ a.e. Let $A_k=\big\{x\colon f_{n_k}(x)\ge\tfrac1{2^k}\big\}$, so $\mu(A_k)<\tfrac1{2^k}$. If $x\notin\bigcup_{\ell=m}^\infty A_\ell$ for some $m$, then $f_{n_\ell}(x)<\tfrac1{2^\ell}$ for all $\ell\ge m$, hence $f_{n_k}(x)\to0$. Therefore, for every $m$, $$ \big\{ x\colon f_{n_k}(x)\not\to0 \big\} \subset\bigcup_{\ell=m}^\infty A_\ell, $$ and so $$ \mu\Big(\big\{ x\colon f_{n_k}(x)\not\to0 \big\} \Big) \le \sum_{\ell=m}^\infty \mu(A_\ell) < \tfrac2{2^m}. $$ Letting $m\to\infty$ we get $$ \mu\Big(\big\{ x\colon f_{n_k}(x)\not\to0 \big\} \Big) =0, $$ so $f_{n_k}\to0$ a.e., and in particular $\liminf_{n\to\infty} f_n=0$ a.e. For the second part, for every $m\ge1$ and $1\le k\le m$, let $$ f_{m,k}(x) = \begin{cases} 1 & \tfrac{k-1}{m} \le x \le \tfrac{k}{m}\\ 0 & \text{otherwise}\end{cases} $$ and order them in any order. This sequence tends to $0$ in measure, but for every $x\in[0,1]$, $f_{m,k}(x)=1$ infinitely many times.
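A small sketch of the "typewriter" sequence in the second part: the support of each block-$m$ function has measure $1/m\to 0$, yet any fixed $x$ is hit in every block:

```python
# enumerate f_{m,k} block by block and watch a fixed point x
x, hits, n = 0.5, [], 0
for m in range(1, 16):
    for k in range(1, m + 1):          # support [(k-1)/m, k/m] has measure 1/m
        n += 1
        if (k - 1) / m <= x <= k / m:
            hits.append(n)
print(hits)   # x = 0.5 is hit in every block, so f_n(x) = 1 infinitely often
```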
What's wrong with my "proof" of the existence of the intersection of the void set?
We understand intuitively that a set $B$ is the intersection of a family $S$ iff for all $x$ we have that $$ x \in B \iff (\forall X \in S)\,x \in X. $$ It's obvious that we can derive $(\forall X \in S)\,x \in X$ from $x \in \{x \in \bigcup S : (\forall X \in S)\,x \in X\}$. Conversely, if $(\forall X \in S)\,x \in X$ and $S \neq \varnothing$, then there is some $X_0 \in S$. So $(\exists X \in S)\,x \in X$ and thus $x \in \bigcup S$. We conclude that $x \in \{x \in \bigcup S : (\forall X \in S)\,x \in X\}$. What if $S = \varnothing$? Then this last step breaks down, because we can no longer pick a concrete element of $S$. Technically your first definition is valid, but the second one is simpler and more transparent thanks to the hypothesis $S \neq \varnothing$.
How to calculate HarmonicNumber(x, 1.6)?
I have found the answer: http://functions.wolfram.com/GammaBetaErf/HarmonicNumber2/02/0003/ describes that $$H_n^{(r)} = \sum_{k=1}^n \frac{1}{k^r},$$ with $r = 1.6$ here.
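A one-liner version in Python (the printed value is just an illustration at $n=10$):

```python
def harmonic(n, r):
    """Generalized harmonic number H_n^{(r)} = sum_{k=1}^n k^{-r}."""
    return sum(k ** -r for k in range(1, n + 1))

print(harmonic(10, 1.6))    # H_10^{(1.6)} ~ 1.879
```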