Find a recurrence for $a_n$, the number of integer compositions of $n$ which only have 1s and 2s as parts.
Hint: what are the choices for the last part? For each choice, how many compositions are there for what is left?
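If you want to sanity-check the recurrence this hint leads to, here is a small brute-force sketch in Python (the recurrence line below is of course the intended answer, so skip it if you want to derive it yourself):

```python
# Brute-force check of the recurrence suggested by the hint: removing the
# last part (a 1 or a 2) leaves a composition of n-1 or n-2, so
# a_n = a_{n-1} + a_{n-2}.
from itertools import product

def compositions_12(n):
    """Count compositions of n with all parts in {1, 2} by enumeration."""
    count = 0
    for k in range(n // 2, n + 1):          # k parts, each equal to 1 or 2
        for parts in product((1, 2), repeat=k):
            if sum(parts) == n:
                count += 1
    return count

a = [1, 1]                                   # a_0 = 1 (empty composition), a_1 = 1
for n in range(2, 12):
    a.append(a[-1] + a[-2])
    assert a[n] == compositions_12(n)        # recurrence matches enumeration
print(a)                                     # Fibonacci-like: 1 1 2 3 5 8 ...
```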
Give a big-$\mathcal{O}$ estimate for the number of times the algorithm needs to determine whether an integer is in one of the subsets.
The "if $a_i \lt a_j$" line is a mistake: $a_i,a_j$ are never defined and the algorithm does not need them. The $i$ loop is executed $n-1$ times, the $j$ loop is executed an average of about $\frac n2$ times, and the $k$ loop is executed $n$ times, so the number of inclusion checks is about $2\cdot (n-1)\cdot \frac n2\cdot n$, with the $2$ coming from checking both $S_i$ and $S_j$. We don't care about multiplicative constants, so the algorithm is $\mathcal O(n^3)$.
If $T_{U,V}=\inf\{t:B_t \notin (U,V)\}$ does $\int E[B_{T_{U,V}}|U,V]dP=\int E[B_{T_{u,v}}]dP_{U,V}$
Since stopping times typically depend on a path of a process (and not only the value at some particular time), your formula for the conditional expectation is not too useful. However, there is a "functional" version of this formula which comes in handy; you can find it for instance in the book Brownian motion - an introduction to stochastic processes by Schilling & Partzsch (Lemma A.3). Let $(\Omega,\mathcal{A})$ and $(S,\mathcal{S})$ be measurable spaces. Let $\mathcal{F},\mathcal{H}$ be sub-$\sigma$-algebras of $\mathcal{A}$ and let $X: \Omega \to S$, $\Psi: S \times \Omega \to \mathbb{R}$ be random variables with the following properties: $\mathcal{F}$ and $\mathcal{H}$ are independent, $X: (\Omega,\mathcal{F}) \to (S,\mathcal{S})$ is measurable, $\Psi: (S \times \Omega, \mathcal{S} \otimes \mathcal{H}) \to (\mathbb{R},\mathcal{B}(\mathbb{R}))$ is measurable and bounded. Then $$\mathbb{E}(\Psi(X(\cdot),\cdot) \mid \mathcal{F}) = h(X)$$ where $$h(x) := \mathbb{E}(\Psi(x,\cdot)), \qquad x \in S.$$ Let's check that we can apply this result in your framework. Set $S := \mathbb{R}^2$, $\mathcal{S} := \mathcal{B}(\mathbb{R}^2)$, $X(\omega) := (U(\omega),V(\omega))$, $\Psi((u,v),\omega) := B_{T_{u,v}}(\omega)$, $\mathcal{F} := \sigma(X)$, $\mathcal{H}:=\sigma(B_t; t \geq 0)$. By assumption, $\mathcal{F}$ and $\mathcal{H}$ are independent. Moreover, it is trivial that $X$ is $\sigma(X)$-measurable. Let's assume for the moment that we already know that $\Psi$ is measurable (as specified above) and that $\Psi$ is bounded, which is e.g. satisfied if $U$ and $V$ are both bounded. Applying the above result we then get $$\mathbb{E}(B_{T_{U,V}} \mid U,V) = \mathbb{E}(B_{T_{u,v}}) \mid_{(u,v) = (U,V)}$$ which implies in particular that $$\int \mathbb{E}(B_{T_{U,V}} \mid U,V) \, d\mathbb{P} = \int \mathbb{E}(B_{T_{u,v}}) \, dP_{U,V}(u,v).$$ The assumption on the boundedness of $U$ and $V$ is quite natural since it ensures that all appearing integrals are well-defined. Proof of the measurability of $\Psi$: For $x \neq 0$ set $$\tau_x := \inf\{t >0; \text{sgn}(x) \cdot B_t > \text{sgn}(x) \cdot x\}.$$ Let's first consider $x \geq 0$. Since $$\tau_x = \inf\{t>0; \sup_{s \leq t} B_s> x\}$$ we see that $x \mapsto \tau_x(\omega)$ is the generalized inverse of the increasing and right-continuous mapping $t \mapsto \sup_{s \leq t} B_s(\omega)$, and therefore $x \mapsto \tau_x$ is also right-continuous. As $\omega \mapsto \tau_x(\omega)$ is $\mathcal{H}$-measurable for each fixed $x \geq 0$, this implies that $$([0,\infty) \times \Omega, \mathcal{B}[0,\infty) \otimes \mathcal{H}) \ni (x,\omega) \mapsto \tau_x(\omega) \in ([0,\infty),\mathcal{B}[0,\infty))$$ is measurable (it's an approximation argument, e.g. as in the proof of joint measurability of Brownian motion). A similar reasoning applies for $x \leq 0$, and so we get that $$(\mathbb{R} \times \Omega, \mathcal{B}(\mathbb{R}) \otimes \mathcal{H}) \ni (x,\omega) \mapsto \tau_x(\omega) \in ([0,\infty),\mathcal{B}[0,\infty))$$is measurable. This entails that $$(\mathbb{R}^2 \times \Omega, \mathcal{B}(\mathbb{R}^2) \otimes \mathcal{H}) \ni ((u,v),\omega) \mapsto \tau_u(\omega) \in ([0,\infty),\mathcal{B}[0,\infty))$$ is measurable which, in turn, gives that $$(\mathbb{R}^2 \times \Omega, \mathcal{B}(\mathbb{R}^2) \otimes \mathcal{H}) \ni ((u,v),\omega) \mapsto T_{u,v}(\omega)=\min\{\tau_u(\omega),\tau_v(\omega)\} \in ([0,\infty),\mathcal{B}[0,\infty)) \tag{1}$$ is measurable.
On the other hand, the mapping $$(\mathbb{R}^2 \times \Omega, \mathcal{B}(\mathbb{R}^2) \otimes \mathcal{H}) \ni ((u,v),\omega)\mapsto \omega \in (\Omega,\mathcal{H}) \tag{2}$$ is also measurable. By basic properties of the product $\sigma$-algebra we conclude that $$(\mathbb{R}^2 \times \Omega, \mathcal{B}(\mathbb{R}^2) \otimes \mathcal{H}) \ni ((u,v),\omega) \mapsto (T_{u,v}(\omega),\omega) \in ([0,\infty) \times \Omega,\mathcal{B}[0,\infty) \otimes \mathcal{H}) \tag{3}$$ is measurable. Finally, we recall that Brownian motion is progressively measurable, i.e. $$([0,\infty) \times \Omega,\mathcal{B}[0,\infty) \otimes \mathcal{H}) \ni (t,\omega) \mapsto B_t(\omega) \in (\mathbb{R},\mathcal{B}(\mathbb{R})) \tag{4}$$ is measurable. Hence, the composition of the mappings in $(3)$ and in $(4)$ is measurable, i.e. $$(\mathbb{R}^2 \times \Omega, \mathcal{B}(\mathbb{R}^2) \otimes \mathcal{H}) \ni ((u,v),\omega) \mapsto B_{T_{u,v}}(\omega)$$ is measurable.
If $f(t)$ is periodic, is there any $t$ at which $f(t)$ equals the DC component?
Yes. Given a continuous function $f(t)$ that is periodic with period $T$, with Fourier representation $F(\omega)$, there will necessarily always be some time $t$ where $f(t)$ equals $F(0)$. (That time $t$ will be different for different functions $f(t)$.) $F(0)$ is also called the DC component, i.e. the average value of $f(t)$ over one period. This can be proven using the intermediate value theorem, as AnonSubmitter85 pointed out: a non-constant continuous periodic function takes values both above and below its average, so it must pass through the average. The only way for a periodic function to not have such a point is for the periodic function to not be continuous. One simple example is a rectangle wave where $g(t) = 1$ for $0 \le t < T/3$, and $g(t) = 0$ for $T/3 \le t < T$, and $g(t)$ is periodic with period $T$. The average value of such a rectangle wave, $G(0)$, is $1/3$. But the function $g(t)$ never exactly equals $1/3$: its value is always either $0$ or $1$, with discontinuities where it jumps from $0$ to $1$ and from $1$ to $0$.
Do bounded nonempty countable sets necessarily contain their supremum?
False: $\left\{1-\frac1n\right\}_{n\in\Bbb N}$ is bounded, nonempty, and countable, but its supremum $1$ does not belong to it. "Does it have something to do with closure?" Yes: a closed bounded nonempty set necessarily contains its supremum, even if it is uncountable.
Inverse Laplace transform of $2/((s-1)^2+1)^2$
While there is a defined inverse Laplace transform, the integral is often difficult. Usually, it is easier to take the transforms we know and noodle with them until we find a fit. $L\{e^t\cos t\} = \frac {s-1}{(s-1)^2+1}\\ L\{e^t\sin t\} = \frac 1{(s-1)^2+1} = \frac {s^2-2s +2}{((s-1)^2+1)^2}\\ L\{te^t\cos t\} = -\frac {d}{ds} \frac {(s-1)}{(s-1)^2+1} = \frac {s^2 - 2s}{((s-1)^2+1)^2}\\ L\{te^t\sin t\} = -\frac {d}{ds} \frac 1{(s-1)^2+1} = \frac {2s-2}{((s-1)^2+1)^2}$ We need a combination of the numerators $(s^2 - 2s + 2)$, $(s^2-2s)$, $(2s-2)$ equal to $2$; note $(s^2-2s+2)-(s^2-2s)=2$. Hence the inverse transform is $e^t \sin t - te^t \cos t$.
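If sympy is available, the match can also be verified symbolically; a sketch (not part of the derivation above):

```python
# Verify L{e^t sin t - t e^t cos t} = 2/((s-1)^2+1)^2 with sympy.
from sympy import symbols, exp, sin, cos, laplace_transform, simplify

t = symbols('t', positive=True)
s = symbols('s')

f = exp(t)*sin(t) - t*exp(t)*cos(t)
F = laplace_transform(f, t, s, noconds=True)

# Should print 0, confirming the two expressions agree.
print(simplify(F - 2/((s - 1)**2 + 1)**2))
```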
Prove that problems like these can always be solved - Inequalities problems
The issue with your proof is that it essentially assumes its result. You're basically claiming that each ordering of "$\lt$" and "$\gt$" has a solution, because you can always pick numbers from the set that have that order, but that's exactly what "has a solution" means, so your argument is circular. A better proof would be to simply construct a general solution, like so: Each arrangement of "$\lt$" and "$\gt$" can be broken down as follows: Max positions - either "$\_\_\gt$", "$\lt\_\_$" or "$\lt\_\_\gt$" Min positions - either "$\_\_\lt$", "$\gt\_\_$" or "$\gt\_\_\lt$" Chain positions - either "$\gt\_\_\gt$" or "$\lt\_\_\lt$" To form a solution, first fill the max and min positions with the largest and smallest numbers of the set, respectively. Note that if you have, say, two max positions, it doesn't matter which one has the highest value and which has the second highest. Next, fill in the chain positions with sequential numbers from the set in the required order, moving from left to right, and starting at the top of the (remaining) set for "$\gt\_\_\gt$" chains and the bottom for "$\lt\_\_\lt$" chains. Since this process is always possible, and satisfies an arbitrary problem of this type, all such problems have a solution.$\blacksquare$ Note: the above process does not necessarily generate all possible solutions for a given problem (in fact it usually won't), but it will always produce at least one solution, which is all that is required for the proof.
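For what it's worth, such a construction can be automated. The sketch below uses a closely related greedy rule rather than the exact max/min/chain procedure above: for a "$\lt$" slot take the smallest unused number, for a "$\gt$" slot the largest. It also always succeeds, for the same reason: the number placed compares correctly with everything still remaining.

```python
def fill_pattern(pattern, numbers):
    """Greedy fill of a '<'/'>' pattern with distinct numbers: before a '<'
    place the smallest remaining value, before a '>' the largest.  The value
    placed then compares correctly with *every* remaining value, so the next
    slot can always be filled."""
    assert len(numbers) == len(pattern) + 1
    pool = sorted(numbers)
    seq = []
    for c in pattern:
        seq.append(pool.pop(0) if c == '<' else pool.pop())
    seq.append(pool.pop())               # one value left for the last slot
    return seq

pattern = '<><<>'
seq = fill_pattern(pattern, [1, 2, 3, 4, 5, 6])
assert all((x < y) if c == '<' else (x > y)
           for x, y, c in zip(seq, seq[1:], pattern))
print(seq)                               # e.g. [1, 6, 2, 3, 5, 4]
```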
Idempotents: if $e$ is a primitive and $f$ an arbitrary idempotent, does $\dim eA\mid\dim fA$?
I will assume you mean that $A$ is associative, unital, artinian, and simple; if some of these assumptions are dropped, more can go wrong. For every idempotent $e\in A$, $eA$ is a (right) $A$-module, because it is a right ideal. Since $A$ is simple artinian, its modules are semisimple, i.e. direct sums of simples. Now $e$ is primitive, so $eA$ is indecomposable, hence simple. A simple artinian algebra has exactly one simple module up to isomorphism, so $fA$ is a direct sum of copies of $eA$. Hence $\dim fA=m\cdot \dim eA$ for some integer $m$.
Let $f(x)$ be defined over all rationals $x$ in $[0,1]$ and let $F(n) = \sum_{i=1}^n f(\frac in)$
Here is a standard derivation of the result. To help us in the derivation we start by introducing the $\delta$-function $\delta_{m,n}:\mathbb{Q}\times \mathbb{Q} \to \{0,1\}$ by $$\delta_{m,n} = \left\{\matrix{1 & m=n\\0 & m\not= n}\right.$$ By construction, the $\delta$-function satisfies the following identity: Identity 1: If $d\mid n$ and $i\leq n/d$ then $\sum_{j=1}^n\delta_{i,j/d}h(j) = h(id)$ for any function $h$. Another useful identity is the Möbius function sum property in the following form: Identity 2: $\sum_{d\mid n}\mu(d)\sum_{i=1}^{n/d}\delta_{i,j/d} = 1$ if $(n,j)= 1$ and $0$ otherwise. This follows by writing out $$\sum_{d\mid n}\mu(d)\sum_{i=1}^{n/d}\delta_{i,j/d} = \sum_{d\mid n,\,d\mid j}\mu(d) = \sum_{d\mid (n,j)}\mu(d) = \delta_{(n,j),\, 1}$$ Using the two identities above, the derivation is straightforward: $$ \begin{align}(\mu * F)(n) &\equiv \sum_{d\mid n}\mu(d)\sum_{i=1}^{n/d}f(id/n) \\&= \sum_{d\mid n}\mu(d)\sum_{i=1}^{n/d}{\sum_{j=1}^{n}\delta_{i,j/d}\,f(j/n)}~~~~~~~\text{(using identity 1)}\\&= \sum_{j=1}^{n}f(j/n){\sum_{d\mid n}\mu(d)\sum_{i=1}^{n/d}\delta_{i,j/d}} ~~~~~~~\text{(switched the summation order)}\\&= \sum_{j=1,\,(n,j)=1}^nf(j/n)~~~~~~~~~~~~~~~~~~~~~~~~~\text{(using identity 2)} \end{align} $$
Radon transform expressed via delta distribution
This follows directly from the definition of the Radon transform in $\mathbb R^n$. The Radon transform $Rf$ of a function $f:\mathbb R^2 \to \mathbb R$ is a function defined on the space of straight lines in $\mathbb R^2$ by the line integral along each such line, $$Rf: [0, 2\pi]\times\mathbb R \to \mathbb R$$ $$Rf(\alpha, s)= \int_{-\infty}^{\infty}f(s\alpha + z\alpha^{\perp})dz$$ where $\alpha=(\cos \alpha,\sin \alpha)$ is the normal vector of the line and $\alpha^{\perp}$ is the tangent vector. More generally, the Radon transform $Rf$ of a function $f:\mathbb R^n \to \mathbb R$ is a function defined on the space of all hyperplanes in $\mathbb R^n$. If one parametrizes these hyperplanes by $\{x\in\mathbb R^n : x \cdot \alpha=s\}$ where $\alpha \in S^{n-1}$ is a unit vector of $\mathbb R^n$ and $s\in\mathbb R$, one obtains a function defined on $S^{n-1}\times\mathbb R$ by $$Rf: S^{n-1}\times\mathbb R \to \mathbb R$$ $$Rf(\alpha, s)= \int_{x \cdot \alpha=s}f(x)dx=\int_{\alpha^{\perp}}f(s\alpha + y)dy$$ where $\alpha$ represents angles: one angle in 2D (points on the unit circle or equivalently tangent lines to the unit circle), two angles in 3D (points on the unit sphere or equivalently tangent planes to the unit sphere), and so on. The number $s$ is the (signed) distance between these hyperplanes and the origin. Notice that this is equivalent to your definition of the Radon transform as $$ Rf(\theta , s) = \int _{\{ x^T\theta =s \}} f(x)dx=\int_{x \cdot \theta=s}f(x)dx =\int_{\theta^{\perp}}f(s\theta + y)dy $$ which we could now compare to $$Rf(\theta , s) = \int _{\mathbb{R}^n}f(x)\delta(s-x^T\theta)dx=\int _{\mathbb{R}^n}f(x)\delta(s-x \cdot \theta)dx$$ As $f\in C_c^\infty(\mathbb{R}^n)$, we know from a property of convolutions that $$f(x)=\int_{\mathbb{R}^n}\delta(x-y)f(y)dy$$ and since we have the constraint that ($x^T\theta =s$) or ($x \cdot \theta=s$) we see that the convolution property allows us to write $$Rf(\theta , s) = \int _{\mathbb{R}^n}f(x)\delta(s-x^T\theta)dx=\int _{\mathbb{R}^n}f(x)\delta(s-x \cdot \theta)dx = \int_{\theta^{\perp}}f(s\theta + y)dy = \int _{\{ x^T\theta =s \}} f(x)dx$$
Does expectation distribute over $|X-Y|$?
If $X \ge Y$ almost surely, then $$E|X-Y| = EX - EY.$$ If instead $Y \ge X$ almost surely, then $$E|X-Y| = EY - EX.$$ Neither of these is necessarily equal to $$E|X| - E|Y|.$$
Past exam question: probabilistic method
HINT: The function $$g(v) \doteq f_v + \sum_{u: uv \in e(G)} f_u \mod (n+1) $$ is uniformly distributed amongst $\{0,1,\ldots, n \}$, so for any graph $G$ on $n$ vertices the following holds: for each vertex $v \in G$, the probability that $g(v)$ is $0$ is $\frac{1}{n+1}$. Thus by the union bound, the probability that this algorithm fails to find a good solution (for any graph $G$) is at most $n \times \frac{1}{n+1} = \frac{n}{n+1} = 1 -\frac{1}{n+1}$, so the probability of success is at least $\frac{1}{n+1}>0$. So for any $G$, there is indeed a positive probability that this algorithm will find a solution, which of course implies that there indeed is a solution to be found in the first place. Furthermore, from the above paragraph the probability that this algorithm fails to find a good solution after $\ell$ iterations is at most $\left(\frac{n}{n+1} \right)^{\ell}$. Thus the expected number of iterations of this algorithm is at most $\sum_{\ell=1}^{\infty} \ell \left(\frac{n}{n+1} \right)^{\ell-1}$. I leave it to you to show that this is polynomial in $n$. Now if $g(v)$ were (say) $\sum_{uv \in E(G)} (f_u + f_v)$ then it would not necessarily hold that $g(v)$ is uniformly distributed amongst $\{0,1,\ldots, n\}$. If e.g. $G$ were a multigraph and each edge appeared twice and $n+1$ were even, then $g(v)$ would be even as well. [If $n+1$ is even then the parity of $m \mod (n+1)$ is well-defined for all integers $m$.]
Evaluate $\int x^2\,dx$ using Darboux sums
The computation of the sum of squares is wrong. The correct formula is $$\sum_{i=1}^n i^2 = \frac{n(n+1)(2n+1)}{6}$$
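With the corrected formula, the upper Darboux sums of $x^2$ on $[0,1]$ are $\frac{1}{n^3}\sum_{i=1}^n i^2 = \frac{(n+1)(2n+1)}{6n^2} \to \frac13$; a quick numerical sketch:

```python
# Upper Darboux sum of x^2 on [0, 1] with n equal subintervals:
# (1/n^3) * sum_{i=1}^n i^2 = (n+1)(2n+1)/(6 n^2), which tends to 1/3.
def upper_darboux(n):
    return sum(i * i for i in range(1, n + 1)) / n**3

for n in (10, 100, 1000, 10**6):
    print(n, upper_darboux(n))   # approaches 1/3
```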
Proof of $(a^2b+b^2c+c^2a)(ab^2+bc^2+ca^2)>\frac{(ab+bc+ca)^3}{3}$
By Cauchy-Schwarz: $(a^2b+b^2c+c^2a)(ab^2+bc^2+ca^2) \ge (a^{1.5}b^{1.5}+b^{1.5}c^{1.5}+c^{1.5}a^{1.5})^2$ By the power mean inequality: $(a^{1.5}b^{1.5}+b^{1.5}c^{1.5}+c^{1.5}a^{1.5})^2 \ge \frac{(ab+bc+ca)^3}{3}.$ (Note: The > sign in your post is false; let $a=b=c.$)
How often does a power of 3 occur compared to power of 6
The number of integers of the form $6^n$ below $m$ is equal to $ \lfloor \log_6{m} \rfloor + 1$. So the number of powers of $6$ below the $n$th power of $3$ is equal to $\lfloor \log_6{3^n}\rfloor + 1$. You can interpret this intuitively as being the number of times we must multiply $1$ by $6$ before we reach $m$, or just overshoot it. If we overshoot it, then the floor function ensures that we only count the powers that actually lie below $m$. We add one to each of these expressions because we want to include $1$ as a power of $6$. This is based on your example, where you include $1$ as a power, but it’s worth noting that this potentially contradicts your original description of $6^n$ ‘where $n$ is a natural number’, as the natural numbers are often, although not always, assumed to begin at $1$. Of course, you can add $1$ or not, depending on what you want.
Question regarding the formal definition of NP
I may not be the best person to answer this as I only have a cursory background, but as I understand it, condition two states that the size of the certificate is bounded by a polynomial, not the number of certificates. For the example given above, the certificate might look like "$030215$", where the odd digits correspond to negative or positive according as they are $0$ or $1$, and the even digits correspond to the numbers to be added. The Turing machine $M$ would then use this string, add the numbers together, and then spit out $1$ as $(-3)+(-2)+5=0$. Another way you could feed these numbers in, using one digit per element, is $01110$. Here, a $1$ corresponds to the second, third, and fourth entries in the set and tells the machine to add those entries. Depending on the Turing machine $M$, this might be a better way to do it. More generally, let's say our machine $M$ accepts binary as an input language; then using the second option we would need at most one bit per element of our set to express the certificate, which is certainly polynomial in the input size.
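Here is a sketch (with made-up names and a toy instance) of the second encoding: the certificate is one bit per element, and the verifier $M$ just sums the selected entries:

```python
# Sketch of an NP verifier for "some subset of the given integers sums to 0".
# The certificate is a bitmask selecting elements; its size is linear (hence
# polynomial) in the size of the instance, and verification is one pass.
def verifier(instance, certificate):
    """Polynomial-time check: does the selected subset sum to zero?"""
    assert len(certificate) == len(instance)
    total = sum(x for x, bit in zip(instance, certificate) if bit == '1')
    return total == 0

# Hypothetical instance: the second, third and fourth entries sum to 0.
print(verifier([7, -3, -2, 5, 8], '01110'))   # (-3) + (-2) + 5 == 0 -> True
```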
How to solve an 8th-degree polynomial?
Getting all roots of an arbitrary degree polynomial to arbitrary precision is very involved and hardly a reasonable question for this site. I can only suggest the paper "How to find all roots of complex polynomials by Newton's method" by Hubbard, Schleicher, and Sutherland which is available for download by the authors.
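For purely practical purposes (without the global guarantees that the paper is about), the standard numerical shortcut is the companion-matrix eigenvalue method, e.g. numpy.roots; a sketch with an arbitrary example polynomial:

```python
# Practical (non-rigorous) route: numpy.roots computes the eigenvalues of
# the companion matrix, which are the roots of the polynomial.
import numpy as np

# Coefficients, highest degree first, of an arbitrary illustration:
# x^8 - 2x^5 + 3x^2 - 1  (not a polynomial from the question)
coeffs = [1, 0, 0, -2, 0, 0, 3, 0, -1]
roots = np.roots(coeffs)
print(roots)
print(np.polyval(coeffs, roots))   # residuals, should be near zero
```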
Is there a name for the result that homotopy groups are insensitive to removing submanifolds of sufficiently large codimension?
Your original result can be proved by a simple argument involving transversality (which is the same argument used for the general case, I guess). First check that the fundamental group of a manifold can be computed using only smooth paths and smooth homotopies. Then check that every smooth path is homotopic to one which avoids your union of subspaces of codimension at least $2$, and that homotopies can be moved away from such a union when the codimension is at least $3$.
Some implicit differentiation questions. Please check them [see desc.]
1) As John pointed out, there is a mistake... 2) The second example seems correct to me. 3) For the third differentiation: $$y \cdot \cos{x} = x^2 + y^2$$ Differentiating both sides with respect to $x$: $$y'\cos(x)-y\sin(x)=2x+2yy'$$ $$y'\cos(x)-2yy'=y\sin(x)+2x$$ $$y'(\cos(x)-2y)=y\sin(x)+2x$$ $$\frac {dy}{dx}=\frac {y\sin(x)+2x}{\cos(x)-2y} $$ ...
How to define the dirac delta as the limit of an arbitrary probability distribution?
By definition, $\frac{1}{\alpha} \rho(x/\alpha) \to \delta$ as $\alpha \to 0^+$ if $\int \frac{1}{\alpha} \rho(x/\alpha) \, \varphi(x) \, dx \to \varphi(0)$ for every $\varphi \in C_c^\infty.$ Therefore, let $\varphi \in C_c^\infty.$ Then, $$ \int \frac{1}{\alpha} \rho(x/\alpha) \, \varphi(x) \, dx = \{ \text{ substitution $x = \alpha y$ } \} = \int \rho(y) \, \varphi(\alpha y) \, dy\ . $$ Now, since $|\rho(y)\,\varphi(\alpha y)| \leq |\rho(y)|\,\sup|\varphi| \in L^1$ we can apply the dominated convergence theorem and get the limit $$ \int \rho(y) \, \varphi(0) \, dy = \int \rho(y) \, dy \ \varphi(0) = \varphi(0) . $$ Thus, $\frac{1}{\alpha} \rho(x/\alpha) \to \delta.$
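A numerical illustration of this computation, assuming $\rho$ is a standard Gaussian and taking a smooth, rapidly decaying $\varphi$ with $\varphi(0)=1$:

```python
# As alpha -> 0+, the integral of (1/alpha) rho(x/alpha) phi(x) tends to
# phi(0).  Here rho is a standard Gaussian (total integral 1).
import numpy as np
from scipy.integrate import quad

rho = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
phi = lambda x: np.cos(x) * np.exp(-x**2)       # phi(0) = 1

for a in (1.0, 0.1, 0.01):
    val, _ = quad(lambda x: rho(x / a) / a * phi(x), -np.inf, np.inf)
    print(a, val)                                # tends to phi(0) = 1
```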
Complex Numbers and Hyperbolic Functions
We know that $$ \sin\left(\dfrac{(2+i)\pi}{4}\right)=\sin\left(\dfrac{\pi}{2}+\frac{i\pi}{4}\right)=\cos\left(\frac{i\pi}{4}\right), $$ then $$ \begin{align} (1+i)\sin\left(\dfrac{(2+i)\pi}{4}\right)&= (1+i)\cos\left(\frac{i\pi}{4}\right)\\ &=(1+i)\left(\dfrac{e^{i\frac{i\pi}{4}}+e^{-i\frac{i\pi}{4}}}{2}\right)\\ &=(1+i)\left(\dfrac{e^{-\frac{\pi}{4}}+e^{\frac{\pi}{4}}}{2}\right)\\ &=\left(\dfrac{e^{-\frac{\pi}{4}}+e^{\frac{\pi}{4}}}{2}\right)+i\left(\dfrac{e^{-\frac{\pi}{4}}+e^{\frac{\pi}{4}}}{2}\right). \end{align} $$
Probability of process success
Let's see if this helps. If each activity has probability of success $p_i = \frac{s_i}{t_i}$, and you can reasonably assume that the activities are independent, then the probability that at least one of $a_1,a_2$ succeeds is $\mathbb{P}(\text{success of } a_1 \text{ or } a_2) = p_1 + p_2 - p_1 p_2$. This formula extends to $k$ activities by iteration (let $p_{i_1,\ldots, i_k}$ denote $\mathbb{P}(\text{success of } a_{i_1} \text{ or } \ldots \text{ or } a_{i_k})$): $$\mathbb{P}(a_1 \text{ or } a_2 \text{ or } a_3) = p_{1,2,3} = p_1 + p_{2,3} - p_1 p_{2,3} = p_1 + p_2 + p_3 -p_2 p_3 - p_1(p_2 + p_3 -p_2 p_3) = p_1 + p_2 + p_3 - p_2 p_3 -p_1p_2 -p_1p_3 + p_1p_2p_3 $$ and in general $$\mathbb{P} (a_1 \text{ or } \ldots \text{ or } a_k) = \sum_{j \leq k} (-1)^{j+1} \sum_{i_1 < \ldots < i_j} p_{i_1} \cdots p_{i_j}.$$ In the case where these events are not independent, you might consider the general approach $$ \mathbb{P}(a_1 \cup a_2) = \mathbb{P}(a_1) + \mathbb{P}(a_2) - \mathbb{P}(a_1 \cap a_2)$$ and use the inclusion-exclusion principle $$\mathbb{P} \left(\bigcup_{i=1}^k a_i\right) = \sum_{j \leq k} (-1)^{j+1} \sum_{i_1 <\ldots< i_j}\mathbb{P}(a_{i_1}\cap \cdots \cap a_{i_j}).$$ If you would like the entire process to succeed, I would consider the probability of achieving the milestones one at a time: that is, consider the probabilities $p^j_l = \mathbb{P}(k_j \text{ succeeds under } a_l)$ and apply the same arguments (independence, if that is reasonable, or the inclusion-exclusion principle together with a case-by-case analysis if you suspect there are relevant interdependencies between milestones or activities).
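In the independent case the inclusion-exclusion expansion collapses to the complement rule $1-\prod_i(1-p_i)$; a small sketch comparing the two (the numbers are illustrative):

```python
# For independent activities, inclusion-exclusion over all subsets agrees
# with the much simpler complement rule 1 - prod(1 - p_i).
from itertools import combinations
from math import prod

def at_least_one_ie(ps):
    """P(at least one success) via inclusion-exclusion (independent case)."""
    k = len(ps)
    return sum((-1)**(j + 1) * sum(prod(c) for c in combinations(ps, j))
               for j in range(1, k + 1))

ps = [0.5, 0.25, 0.8]
print(at_least_one_ie(ps))           # 0.925
print(1 - prod(1 - p for p in ps))   # 0.925
```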
Basic Statistics Question (sample, normal distribution)
Yes, $Z$ is a standardization of $\bar{X}_n$ (I put an index on your $\bar{X}$), i.e. $Z\sim\mathcal{N}(0,1)$, and so $$ P(a\leq \bar{X}_n\leq b)=P\left(\frac{a-\mu}{\sigma/\sqrt{n}}\leq Z\leq \frac{b-\mu}{\sigma/\sqrt{n}}\right)=\Phi\left(\frac{b-\mu}{\sigma/\sqrt{n}}\right)-\Phi\left(\frac{a-\mu}{\sigma/\sqrt{n}}\right), $$ where $\Phi$ is the cumulative distribution function for an $\mathcal{N}(0,1)$ distribution. So you're looking to compute $$ P(n):=P(19.5\leq \bar{X}_n\leq 20)=\Phi\left(\frac{20-19}{3/\sqrt{n}}\right)-\Phi\left(\frac{19.5-19}{3/\sqrt{n}}\right). $$ The Stata function normal() is exactly the $\Phi$ from above. The following expression in Stata yields the correct result for me. disp normal((20-19)/(3/sqrt(100)))-normal((19.5-19)/(3/sqrt(100))) Your prediction about the probability tending to $0$ is correct.
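For reference, a Python equivalent of that Stata line (assuming scipy is available):

```python
# Same computation as the Stata disp normal(...) - normal(...) line.
from math import sqrt
from scipy.stats import norm

n, mu, sigma = 100, 19, 3
p = (norm.cdf((20 - mu) / (sigma / sqrt(n)))
     - norm.cdf((19.5 - mu) / (sigma / sqrt(n))))
print(p)
```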
Condition for a finite sequence to be a basis of a subspace of the dual space
The inclusion $i : G' \hookrightarrow E^*$ yields $^ti : E^{**} \to (G')^*$, and then $^ti \circ c_E : E \to (G')^*$ where $c_E : E \to E^{**}$ is the canonical injection. Now note that \begin{align} \ker({}^ti \circ c_E) &= \{x \in E : c_E(x) \circ i = 0\} \\ &= \{x \in E : (\forall z \in G') \ c_E(x)(z)=0\} = F, \end{align} so that $^ti \circ c_E$ induces an isomorphism $E/F \to (G')^*$ that sends the class of $x \in E$ to $c_E(x) \circ i$, the restriction of $c_E(x)$ on $G'$.
A question on the solution for the lifeguard problem (or Snell's law)
Snell's law was derived from the basic proposition that light follows the path of least time (Fermat's principle). Snell's law is: $$\dfrac{c}{v_1}\sin(\theta_1)=\dfrac{c}{v_2}\sin(\theta_2)$$ where $c, v_1, v_2$ are the speeds of light in vacuum, medium $1$ and medium $2$ respectively. Thus in your problem, we may say $$\dfrac{\sin(\theta_1)}{v_r}=\dfrac{\sin(\theta_2)}{v_s}.$$ Now you haven't been provided with either of the angles, but only the angle that the straight line joining the two people makes with the normal. It turns out that this information is not enough to solve the problem explicitly.
Turning the Limit of a Complex Contour Integral into the Integral of a Limit
The limit $R \rightarrow \infty$ is irrelevant. Inside the integral, $r$ is not the variable of integration nor is it in the integration bounds, so under certain conditions, the limit of $r \rightarrow 0$ and the integration operation can be exchanged. So can you try making the substitutions $$r = \dfrac{1}{n}$$ $$f_n(\omega) = \frac{e^{i\frac{1}{n}e^{i\omega}t}}{\left(\frac{e^{i\omega}}{n}+\omega_0-\omega_1\right)\left( \frac{e^{i\omega}}{n}+\omega_0 -\omega_2\right)}$$ $$f(\omega) = \frac{1}{\left(\omega_0 -\omega_1\right)\left(\omega_0 -\omega_2\right)}$$ $$|f_n(\omega)| = \frac{1}{\left|\frac{e^{i\omega}}{n}+\omega_0-\omega_1\right|\left| \frac{e^{i\omega}}{n}+\omega_0 -\omega_2\right|}$$ assuming $$n > \dfrac{1}{\min\left(|\omega_0-\omega_1|,|\omega_0-\omega_2| \right)}$$ and try applying the Dominated Convergence Theorem? Namely, find an integrable function $g(\omega)$ such that $$|f_n(\omega)| \le g(\omega)$$ for all finite real $\omega_0$ and all finite, distinct, complex $\omega_1$, $\omega_2$. (For any specific choice of distinct $\omega_0$, $\omega_1$, and $\omega_2$, it should be straightforward to select a suitable $g(\omega)$.) I didn't, and still haven't, done this rigorously myself. I'm thinking the constant function $$g(\omega) = \dfrac{1}{\left[\min\left(|\omega_0-\omega_1|,|\omega_0-\omega_2| \right) - \dfrac{1}{n_{min}}\right]^2}$$ might work. Update In my original answer, for "Term 3", instead of parameterizing the small semi-circular contour around $\omega_0$ and going through the above gyrations, I could have left "Term 3" as $$\lim_{r \to 0} \int_{C_r}{\dfrac{ie^{iz t}}{\alpha\left(z-\omega_0\right)\left(z-\omega_1\right)\left(z-\omega_2\right)}}dz$$ and then used the lemma in this answer. Knowing that the clockwise orientation of the contour simply changes the sign of the answer, one can obtain $$\lim_{r \to 0} \int_{C_r}{\dfrac{ie^{iz t}}{\alpha\left(z-\omega_0\right)\left(z-\omega_1\right)\left(z-\omega_2\right)}} dz = {\dfrac{\pi e^{i\omega_0 t}}{\alpha\left(\omega_0-\omega_1\right)\left(\omega_0-\omega_2\right)}}$$ right away.
Calculate $\sum_{|S|=k}(n-|\cup S|)^m$ where $S$ is a subset of $X=\{\{a_1,a_2\},\{a_2,a_3\},\cdots,\{a_{n-1},a_n\},\{a_n,a_1\}\}$
$|\cup S|=k+r$, where $r$ is the number of runs in $S$, that is, the number of consecutive stretches $\{a_k,a_{k+1}\}, \{a_{k+1},a_{k+2}\},\ldots$ in $S$ (possibly wrapping around at $n$). To find the number of subsets $S$ with given $k$ and $r$, let's first count the ones in which a run starts at $\{a_1,a_2\}$. We have $r$ runs with at least one element, separated by $r$ gaps, also with at least one element. Thus we need to distribute $k$ balls into $r$ non-empty bins and $n-k$ balls into $r$ non-empty bins, for a total of $\binom{k-1}{r-1}\binom{n-k-1}{r-1}$ different ways. In any given set $S$ with $r$ runs, a run starts at $r$ out of $n$ elements. Thus, to make up for the restriction that a run starts at a specific element, we need to multiply by $\frac nr$. Then your sum comes out as $$ \sum_{|S|=k}(n-|\cup S|)^m=\sum_{r=1}^k\frac nr\binom{k-1}{r-1}\binom{n-k-1}{r-1}(n-k-r)^m $$ (where if $2k\gt n$ some of the terms are zero because the second binomial coefficient is zero, since in this case there aren't enough elements for $k$ runs and $k$ gaps).
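The closed form is easy to confirm by brute force for small parameters; a sketch (building the edge set of the $n$-cycle as in the question):

```python
# Brute-force check of the closed form for small n, k (here with m = 3).
from itertools import combinations
from math import comb
from fractions import Fraction

def lhs(n, k, m):
    edges = [frozenset({i, (i + 1) % n}) for i in range(n)]
    return sum((n - len(frozenset().union(*S)))**m
               for S in combinations(edges, k))

def rhs(n, k, m):
    # Fractions keep the n/r factor exact; the total is always an integer.
    return sum(Fraction(n, r) * comb(k - 1, r - 1)
               * comb(n - k - 1, r - 1) * (n - k - r)**m
               for r in range(1, k + 1))

for n in range(4, 9):
    for k in range(1, n):
        assert lhs(n, k, 3) == rhs(n, k, 3)
print("identity verified for n = 4..8, m = 3")
```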
Distance between the two points $P$ and $Q$
Yes, your answers are all correct. Nice work! In the future, you might want to post what methods you used to determine your answers, so that if an answer isn't correct, we can point out where and why there's a flaw. For example, I'm assuming you computed distance using the Euclidean distance function: $$\text{distance}(P, Q)\;=\;d(P, Q)=\sqrt{(x_2-x_1)^2 + (y_2-y_1)^2}$$ That would have been good to include in your post: it makes checking work easier for us, and if there are errors in answers, we can save you a lot of time by pointing out where you may have gone wrong. In mathematics, getting the correct answer is usually less important than using appropriate methods and reasoning.
Combinatorics: How should we count lists?
The brute force solution is to list the $3$ possibilities $(e,p,m) =(3,3,4),(2,4,4),(2,3,5)$ then calculate \begin{eqnarray*} \binom{5}{3} \binom{7}{3} \binom{9}{4} +\binom{5}{2} \binom{7}{4} \binom{9}{4} + \binom{5}{2} \binom{7}{3} \binom{9}{5}. \end{eqnarray*} Now observe that $ \binom{5}{3}= \binom{5}{2}=10$ , $\binom{7}{3} =\binom{7}{4}=35$ and $\binom{9}{4}=\binom{9}{5}=126$ (all three terms give the same value) so the answer is $ 3 \times 10 \times 35 \times 126= \color{blue}{132300}$.
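A one-line check of the arithmetic:

```python
# Confirm the three binomial products and their total.
from math import comb

total = (comb(5, 3)*comb(7, 3)*comb(9, 4)
         + comb(5, 2)*comb(7, 4)*comb(9, 4)
         + comb(5, 2)*comb(7, 3)*comb(9, 5))
print(total, total == 3 * 10 * 35 * 126)   # 132300 True
```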
How to prove that a set is universe of a subgroup?
There is no need to define a set $X$. Instead, what you need to show is that: The restriction of $\cdot$ to $N\times N$ is a binary operation on $N$; The restriction of ${}^{-1}$ to $N$ is a unary operation on $N$; The co-restriction of $1$ to $N$ is still well-defined (that is, the image lies in $N$); The group identities are satisfied when all operations are restricted as above. (4 may be moot, or trivial). Once you do that, you will have that $\langle N, \cdot|_{N\times N},{}^{-1}|_N,1\rangle$ is a group (hence $N$ is the universe/underlying set of a group), and since $N\subseteq A$ by construction, it is in fact a subgroup of $\langle A,\cdot,{}^{-1},1\rangle$.
Two conditional expectations equal almost everywhere
No. If $X,Y,Z$ are independent, $\mathcal F=\sigma (Y)$ and $\mathcal G =\sigma (Z)$ then $\mathbb E( X|\mathcal F)=\mathbb E(X|\mathcal G)=\mathbb EX$.
What is the meaning of $<$ in a preorder?
It's the second. (1) turns out to be a pretty bad relation; it's usually neither antisymmetric nor transitive! You can fix (1) by changing it to $a \not\equiv b$, where $a \equiv b$ is defined to be $a \leq b \wedge b \leq a$.
Example of group of order $p^3$
Using semidirect products is almost always the easiest way to construct groups that are non-abelian of a given order. In this case, look at $C_p\rtimes_\varphi C_{p^2}$.
eigenvalue differential operator
Here's a hint. Try multiplying both sides by $e^{\frac{x^2}{2}} $. You can rework the operator on the left and you will get something much nicer.
Is $f(x) = 1 + x + x^2 + x^3 + x^4 + ...$ continuous and differentiable?
This is not a polynomial because it has infinitely many terms. When $|x|<1$, it is a convergent geometric series with sum $1/(1-x)$, which is continuous and (infinitely) differentiable. But for $|x|\geq1$ it diverges.
Can you have a numeral system with infinite digits?
The set $${\mathbb N}[X]$$ is exactly a system as you describe. The constant polynomials are the whole numbers ${\mathbb N}$, and they represent the infinitely many "symbols" in your system, while $a_0a_1$ is actually the polynomial $a_0+a_1X$. If you replace $\mathbb N$ by $\mathbb Z$ or $\mathbb Q$ you get some rings which are actually often studied in mathematics. Added: When studying the prime factorization of integers, the same type of system actually comes into play. Look at the primes $p_1=2,p_2=3,\ldots$. Then any $n > 2$ can be written as $2^{a_1}3^{a_2}5^{a_3}\cdots p_k^{a_k}$ where $p_k$ is the last prime appearing in the prime factorization of $n$. Then the "symbols" would correspond to the powers of $2$, the elements of the form $a_0a_1$ correspond to the integers divisible by no prime other than (maybe) $2$ and $3$, and so on. Interestingly, this example is similar to $\mathbb N[X]$, and the addition of polynomials in $\mathbb N[X]$ corresponds to multiplication of positive integers.
Average of limsup and liminf
Here's a counterexample: $a_{3k}=1$ and $a_{3k+1}=-1$ and $a_{3k+2}=1$. $b_{3k}=1$ and $b_{3k+1}=1$, $b_{3k+2}=-1$. Then $\limsup$ of either is $1$ and $\liminf$ of either is $-1$. Thus, RHS is zero. As for LHS, $\limsup a_n+b_n =2$ and $\liminf a_n+b_n = 0$. Therefore LHS is $1$.
Probability of product of two numbers having a specific property
Hint: Condition on $n=0,1,\cdots 9$. For each value of $n \neq 0$, the valid values $m$ can take are from $0$ to $\min\{\lfloor\frac{25}{n}\rfloor,9\}$. For $n=0$, $m$ can vary of course over all $10$ values.
Does there exist $f:M \rightarrow S^n$ s.t. $f_{*}([M])=[S^n]$?
Let $U \subseteq M$ be an open subset of $M$ homeomorphic to $\mathbb{R}^n$. Let $V \subseteq U$ be the homeomorphic image of the unit disk (so that the closure of $V$ is inside $U$). Define a continuous $f: M \to S^n$ by mapping $M-V$ to a fixed point $p \in S^n$, and mapping $\bar{V}$ to the sphere via any homeomorphism $\bar{V}/\partial V \to S^n$ which maps the boundary of $V$ to $p$. To prove that the pushforward of a fundamental class is actually the fundamental class, observe that $f: (M, \ast) \to (S^n, p)$ (for $\ast$ any point outside $V$) factors through $(M, M-V)$. Since $M-V$ is an $n$-manifold with boundary, $\tilde{H}_n(M-V) = 0$, so $\tilde{H}_n(M) \to H_n(M, M-V)$ is injective. Since the map from $\tilde{H}_n(M) \to H_n(M,M-q)$ factors through this map for any $q \in V$, and the latter is an isomorphism, we see that the image of the fundamental class generates a direct summand of $\tilde H_n(M, M-V)$. By excision, $H_n(M, M-V) \cong H_n(U, U-V)$, and the map $(U, U-V) \to (S^n, p)$ is an isomorphism on homology. Since the isomorphism by excision is induced by the inclusion $(U, U-V) \to (M, M-V)$, the map $f$ induces on $(M, M-V) \to (S^n, p)$ is an isomorphism on homology. So the image of the fundamental class of $[M]$ under $M \to (M, M-V)$ is the generator of $\tilde{H}_n(M, M-V)$, and hence $f_\ast[M]$ is the fundamental class of the sphere.
If $2xy$ is a perfect square, then $x^2+y^2$ cannot be
This conjecture is true. Assume for contradiction such $x$, $y$ exist. We may assume that $\gcd(x,y) = 1$ without loss of generality. Then according to $x^2+y^2=M^2$ there exist $m$ and $n$ of different parity with $\gcd(m,n) = 1$ such that $x = m^2-n^2$ and $y = 2mn$ (Euclid's formula), so we would need $mn(m-n)(m+n)$ to be a square (since $2xy = 4mn(m-n)(m+n)$). As $m$, $n$, $m-n$, $m+n$ are all pairwise coprime (note that exactly one of them is even), this can only happen if each is a perfect square; if $m = a^2$ and $n = b^2$, then $(m-n)(m+n)=m^2-n^2$ is also a square, say $c^2$, and we get $c^2 = a^4-b^4$. But this equation has no solution in positive integers (a classical result of Fermat, proved by infinite descent).
How to calculate distance from the International Space Station given coordinates?
If $r$ is the distance from the Earth's centre, $\phi$ is the longitude in radians (increasing from zero at Greenwich as one goes eastwards) and $\theta$ is the latitude in radians (increasing from zero at the Equator as one goes northwards), then the $(x,y,z)$ coordinates are given by: $$p(r,\phi, \theta) = r(\cos \theta \cos \phi, \cos \theta \sin \phi, \sin \theta)$$ Given two sets of coordinates $(r_k,\phi_k, \theta_k)$, the distance is given by $\|p(r_1,\phi_1, \theta_1)-p(r_2,\phi_2, \theta_2)\|$. In your case, we can take $r_1=R$, $r_2=R+A$, where $R$ is the radius of the Earth (I'm assuming a nice sphere, of course) and $A$ is the altitude of the station above the surface of the Earth.
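A sketch of this computation in code (spherical Earth assumed; the radius, altitude, and coordinates are illustrative values):

```python
# Straight-line (chord) distance from a ground observer to the ISS,
# using the Cartesian coordinates from the formula above.
from math import radians, cos, sin, dist

R_EARTH = 6371.0   # mean Earth radius in km (an approximation)

def xyz(r, lon_deg, lat_deg):
    """Cartesian coordinates from radius, longitude, latitude (degrees)."""
    lon, lat = radians(lon_deg), radians(lat_deg)
    return (r*cos(lat)*cos(lon), r*cos(lat)*sin(lon), r*sin(lat))

def distance_to_iss(obs_lon, obs_lat, iss_lon, iss_lat, altitude_km=420):
    p1 = xyz(R_EARTH, obs_lon, obs_lat)              # observer on surface
    p2 = xyz(R_EARTH + altitude_km, iss_lon, iss_lat)  # station at altitude
    return dist(p1, p2)   # Euclidean norm of the difference

print(distance_to_iss(0, 51.5, 10, 48))   # example coordinates, result in km
```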
Finding the minimum distance from the origin to the surface $xyz^2=2$
None of $x,y,z$ can be zero, and $xy$ must be positive, since on that surface $z^2=2/(xy)$. Now, by AM-GM $$x^2+y^2+z^2=x^2+y^2+\frac{2}{xy}\geq2xy +\frac{2}{xy}\geq 2\sqrt{4}=4,$$ so the distance is at least $2$. Study the conditions for equality to happen and show that they actually happen for the points you already suspect.
Group action on finite set gives rise to a linear representation?
There is a group homomorphism $\psi:Bij(\Omega) \to GL(V_\Omega)$ (where $Bij(\Omega)$ is the group of bijections of $\Omega$) given by $\psi(f)(e_x)=e_{f(x)}$, extended by linearity. $\psi(f)$ is in $GL(V_\Omega)$ because it has inverse $\psi(f^{-1})$. Now, a $G$-action on $\Omega$ is just a group homomorphism $G \to Bij(\Omega)$. It is then trivial to see that a $G$-action on $\Omega$ induces a group representation structure on $V_\Omega$: just consider the composite group homomorphism $$G \to Bij(\Omega) \to GL(V_\Omega).$$
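Concretely, $\psi(f)$ is just a permutation matrix in the basis $(e_x)_{x\in\Omega}$; a small sketch verifying the homomorphism property:

```python
# The permutation representation of an action on a finite set, realized
# as 0/1 matrices acting on the basis (e_x) indexed by Omega.
import numpy as np

def perm_matrix(f, omega):
    """Matrix of psi(f): e_x -> e_{f(x)} in the basis indexed by omega."""
    idx = {x: i for i, x in enumerate(omega)}
    M = np.zeros((len(omega), len(omega)), dtype=int)
    for x in omega:
        M[idx[f(x)], idx[x]] = 1   # column x has a 1 in row f(x)
    return M

omega = [0, 1, 2]
f = lambda x: (x + 1) % 3          # a 3-cycle acting on omega
g = lambda x: (x + 2) % 3

# psi is a homomorphism: psi(f o g) = psi(f) psi(g)
fog = lambda x: f(g(x))
assert (perm_matrix(fog, omega)
        == perm_matrix(f, omega) @ perm_matrix(g, omega)).all()
```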
Surjectivity of morphisms of sheaves on a base
Here is an example of a surjective morphism of sheaves $F\to G$ on $X$ such that for any open covering $\{ U_i\}_i$ of $X$, there is an index $i$ such that $F(U_i)\to G(U_i)$ is not surjective. Let $X=\mathrm{Spec}(\mathbb C[t])$ be the affine line (say over $\mathbb C$) with origin $e$. Let $F=O_X$ be the structural sheaf of $X$. Let $R=O_{X,e}=\mathbb C[t]_{t\mathbb C[t]}$. Let $G$ be defined by $$ G(U)= \left\lbrace\matrix{ 0 & \text{if } e\notin U \\ R & \text{if } e\in U.}\right.$$ We can check that $G$ is actually a sheaf and $G_x=0$ if $x\ne e$ and $G_e=R$. Consider the morphism $F\to G$ which on $U$ is the zero map if $e\notin U$ and is the localization map at $e$ otherwise. Then $F_x\to G_x$ is surjective for all $x\in X$. Hence $F\to G$ is surjective. But for any open subset $U\ni e$, $F(U)\to G(U)=R$ is not surjective because if $U=D(f)$, then $F(U)=\mathbb C[t]_f$ is clearly strictly smaller than $R$.
Proof that the incidence matrix of a laminar family is TU.
$\textbf{Claim:}$ $A_S$ can be permuted into a matrix having the consecutive ones property. $\textbf{Proof:}$ Permute the rows of $A_S$ so that they have descending row-wise sums. $\textit{Base Step:}$ Permute the columns so that the first row has the consecutive ones property. $\textit{Inductive Step:}$ Assume the first $N-1$ rows have the consecutive ones property. Then consider the $N^{th}$ row, $r_l$, and an arbitrary row $r_u$ from the first $N-1$ rows. For the inductive step it will suffice to show that we can permute the columns of our permuted $A_S$ matrix to ensure the consecutive ones property in $r_l$, while maintaining this property in $r_u$. By the laminar property and the ordering of the rows of $A_S$, the $l^{th}$ set in $S$, call it $s_l$, satisfies either $s_l \subset s_u$ or $s_l \cap s_u = \emptyset$. Thus, over all the indices $i$ where $r_l$ is $1$, the entries of $r_u$ are either all $1$ or all $0$. Hence we can freely permute the columns where $r_l$ is $1$ without altering the consecutive ones property in any of the rows above $r_l$. Thus, we can permute this subset of columns of $A_S$ to give $r_l$ the consecutive ones property, without compromising the consecutive ones property of any of the first $N-1$ rows. $\textbf{Note:}$ As shown here, if $M$ has the consecutive ones property then $M$ is TU. In addition, all permutations of $M$ are also TU.
Proving that $f:\mathbb{R}_0^+ \to \mathbb{R},~f(x) = x^2 + 4x + 4$ is injective using direct proof from definition
Your argument is fine, but you can end a step earlier: $x_1\ge0$ and $x_2\ge0$ imply $x_1+x_2\ge0$, so $x_1+x_2=-4$ is a contradiction. However, you should make it evident where you're using the hypothesis $x_1\ne x_2$. Proofs of injectivity are often simpler with the contrapositive: “if $f(x_1)=f(x_2)$ then $x_1=x_2$”. Suppose $f(x_1)=f(x_2)$; then, as in your steps, $$ (x_1^2-x_2^2)+4(x_1-x_2)=0 $$ so $$ (x_1-x_2)(x_1+x_2+4)=0 $$ Since $x_1+x_2+4\ge 4>0$, we conclude $x_1-x_2=0$.
Help understanding Proof of the equality of the normal and the extended integrals
Spivak is summing up over all the partition of unity functions that are zero on $C$. Therefore, by the definition of a partition of unity, the sum is $1$ or less at every point of $A-C$ (less because conceivably some of the functions $\varphi$ that are nonzero on $C$ may be nonzero at points off $C$ as well).
What is the purpose of an oracle in optimization?
In optimization theory and computational practice, there's a difference between having a formula for the function that you want to minimize and having access to a routine that can compute values of that function. If you're given an explicit formula for the objective function then you can compute its gradient and Hessian (if the function is twice continuously differentiable), determine a Lipschitz constant for the gradient, etc. If all you've got is the ability to call the function to compute values of $f(x)$ and perhaps $\nabla f(x)$, then you have limited information to work with. In computational practice, this is important because you often are optimizing functions that are implemented by very complicated programs for which this kind of analysis is difficult. Or, you may not even have access to the source code. For this reason, you want algorithms that can work with minimal information. Most library routines for optimization have interfaces in which the user supplies subroutines to compute the function and perhaps its gradient or even its Hessian. Of course, if you have a mathematical formula for the function that you want to minimize, it is relatively simple to write code to implement that function. In theoretical analysis of optimization algorithms, it is also interesting to see how much can be done with limited information. If you had perfect information about the function you were minimizing, then you could simply write down the minimum; the problem is theoretically interesting precisely because you have limited information. There is a desire to find the best possible optimization algorithm, but this can only be determined with respect to a specific model of the properties of the function and information about the function that is available to the algorithm. For example, it can be shown that for smooth convex functions, using only values of $f(x)$ and $\nabla f(x)$ at specific points, the best possible algorithm has $f(x^{k})-f(x^{*})$ decreasing at a rate of $O(1/k^{2})$, where $k$ counts the number of evaluations of $f(x)$ and $\nabla f(x)$. Furthermore, Nesterov's accelerated gradient method is optimal in that it achieves this $O(1/k^{2})$ convergence.
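For concreteness, here is a minimal sketch of Nesterov's accelerated gradient method written against exactly this kind of oracle interface: the algorithm only calls $\nabla f$, and the Lipschitz constant $L$ of the gradient is assumed known. The quadratic test oracle is illustrative.

```python
# Minimal sketch of Nesterov's accelerated gradient method for a smooth
# convex f, using only a gradient oracle and a known Lipschitz constant L.
import numpy as np

def nesterov(grad, x0, L, iters=100):
    x_prev = x = np.asarray(x0, dtype=float)
    for k in range(1, iters + 1):
        y = x + (k - 1) / (k + 2) * (x - x_prev)   # momentum/extrapolation
        x_prev, x = x, y - grad(y) / L             # gradient step at y
    return x

# Example oracle: f(x) = 1/2 x^T A x with A = diag(1, 10); minimum at 0.
A = np.diag([1.0, 10.0])
grad = lambda x: A @ x
print(nesterov(grad, [5.0, 5.0], L=10.0))          # close to [0, 0]
```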
The eigenvalues of the product of a positive definite and a symmetric matrix.
Edit: Yes, the numbers of positive and negative eigenvalues of $AB$ and $B$ are the same. The spectrum of $AB$ is identical to the spectrum of $\sqrt{A}B\sqrt{A}$, and by Sylvester's law of inertia, $\sqrt{A}B\sqrt{A}$ and $B$ have the same number of positive/negative/zero eigenvalues.
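A quick numerical check of this (numpy assumed available, random matrices illustrative):

```python
# AB and B have the same inertia when A is positive definite and B symmetric.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)        # positive definite
B = rng.standard_normal((5, 5))
B = (B + B.T) / 2                  # symmetric

def inertia(eigs, tol=1e-9):
    return (int(np.sum(eigs > tol)), int(np.sum(eigs < -tol)))

eig_AB = np.linalg.eigvals(A @ B).real   # real, since AB ~ sqrt(A) B sqrt(A)
eig_B = np.linalg.eigvalsh(B)
print(inertia(eig_AB), inertia(eig_B))   # same counts
```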
If a $K$-algebra is finite-dimensional and locally unital, then it is unital.
Yes, that's exactly right. In fact, you even can say "every finitely generated $K$-subalgebra", because a similar argument shows that $u_Rs=su_R=s$ for any $s$ in the $K$-subalgebra generated by $R$. (For maximal generality, you can say that if $u_R$ is a unit for $R$, then $u_R$ is also a unit for the intersection of the left $K$-ideal and the right $K$-ideal generated by $R$, where by $K$-ideal I mean an ideal that is also a vector subspace. I don't know any snappy way to describe all sets of this form you can get for finite $R$, though.)
Do weak limits coincide with the corresponding limit if they exist?
Actually, it is easy to see that every inhabited set is a weak terminal object in the category of sets, but only the singleton sets are terminal objects. Hence, there are proper weak terminal objects in the category of sets.
Why does the cube have the fewest facets among (centrally) symmetric polytopes in $\mathbb{R}^n$?
By symmetry, any facet has an opposite facet, so your polytope has at least two opposite facets. But the region bounded between two opposite hyperplanes is not bounded: you still have $n-1$ degrees of freedom inside the "slice" of $\mathbb{R}^n$. Choose another pair of independent (opposite) hyperplanes. These bound your region in an independent direction, but you still have $n-2$ directions along which it is unbounded. You have to continue until you have at least $2n$ facets to get a bounded region. Since a cube has exactly $2n$ facets, it is the polytope with the minimum possible number of them.
Number of sequences which follow a given pattern
Any pattern: $$_NP_K$$ Only decreasing: $$_NC_K$$ Only increasing: $$_NC_K$$ Mixed pattern: $$_NP_K-2\cdot _NC_K=(K!-2)\cdot _NC_K$$
How are euclidean rings and norms related?
This is the definition of Euclidean rings I know: it is a commutative integral domain with unity ($1$), which we will denote $E$, where there exists a function $\phi: E \rightarrow \Bbb Z$ satisfying (1) If $a,b \in E$ are nonzero and $a \mid b$, then $\phi(a) \leq \phi(b)$; (2) For $a,b \in E$, $a\neq 0$, there are elements $q,r \in E$ such that $b = aq + r$ and $\phi(r) \lt \phi(a)$. Such a function $\phi$ is what I mean by a norm. $\phi(0)$ has to be less than any other norm, because for any $a \neq 0$ we have that $a = aq + r$ for some $r,q \in E$ with $\phi(r) \lt \phi(a)$, which means that $a - aq = a(1-q) = r$, so $a \mid r$, which in turn means that $\phi(a) \leq \phi(r)$; a contradiction, unless $r = 0$. We have, in summary, $\phi(r) = \phi(0) \lt \phi(a)$. That the norm of an invertible element (unit) is the same as the norm of $1$ in a Euclidean ring (domain): call the Euclidean ring $E$, let the norm be $\phi$, and let $a$ be invertible in $E$. Then for every element $b \neq 0$ in $E$, $\phi(a) \leq \phi(b)$, since a unit divides every element in any ring with $1$. In particular, $\phi(a) \leq \phi(1)$. But $1$ is also a unit [$1\ast1 = 1$], so $\phi(1) \leq \phi(a)$. Therefore $\phi(a) = \phi(1)$, since two numbers can't both be greater than the other. Constant norm in a field: in a field you could define the norm to be $1$ for each element except $0$, since every nonzero element in a field is a unit and therefore each one of them divides each other. By defining the norm of $0$ to be the number zero, you are assured of having a remainder with norm less than the norm of the element you "divide" with. Of course, you can let the norm of all the nonzero elements of the field be any constant $c$, as long as the norm of $0$ is less than this constant. Local PIDs are Euclidean. Before I go through a proof of this result, there are some things I will assume you know, and if you don't know these things, I expect you to look them up. First of all, for a commutative ring $R$ with unity, every ideal in $R$ not equal to $R$ is contained in a maximal ideal. This is a consequence of Zorn's lemma. This, in turn, means that every non-unit is contained in a maximal ideal, because the ideal generated by this non-unit is unequal to $R$; otherwise the non-unit would divide $1$. An element not contained in any maximal ideal must therefore be a unit. I expect you to know what an $R$-module is, what it means for it to be finitely generated, and most importantly, this: Nakayama's lemma: If $M$ is a finitely generated $R$-module, and $A$ is an ideal contained in the Jacobson radical $\Bbb J$ of $R$, then $AM = M \implies M=0$. [The Jacobson radical of $R$ is the intersection of all the maximal ideals in $R$.] Now for the good stuff: a local PID is a PID with only one maximal ideal. Therefore, its Jacobson radical is equal to this one ideal. The fact that a field is a local PID I suspect you can deduce: $\{0\}$ is the maximal ideal and its generator is $0$. I already provided a norm on fields. To show condition (2), let $a,b \in R$ ($R$ a field), $a \neq 0$. Then $b = a(a^{-1}b) + 0$. Now $\phi(a) = 1$ and $\phi(0) = 0$, so $\phi(0) \lt \phi(a)$. Let $R$ be a local PID. Let $pR$ be the only maximal ideal in $R$. For $a \neq 0$, define $\phi:R \to \Bbb N$ by $\phi(a) =n$ where $n$ is the least $n$ such that $a \notin p^nR$, and let $\phi(0) = 0$. Is this well-defined?
We have to prove that such an $n$ exists for each nonzero element in $R$. Let $M =\bigcap_{n=1}^{\infty}p^nR$. This is the intersection of all ideals $p^nR$, where $n \in \Bbb N$. Now, $pM \subseteq M$, because $M$ is an ideal, and $M \subseteq pM$, because $pM =\bigcap_{n=2}^{\infty}p^nR \supseteq \bigcap_{n=1}^{\infty}p^nR$. In effect, $pM = M$, which means that $(pR)M = M$, since $RM = M$, since $1 \in R$. But $M$ is finitely generated, because every ideal in a PID is finitely generated (of the form $cR$). Because the ideal $M$ can be viewed as a module, and $pR$ is the Jacobson radical, and therefore contained in itself, we have by Nakayama that $M = 0$. What do we do with this? Well, $M = 0$ means that only $0$ is in every ideal of the form $p^nR$. So if $a \neq 0$, then $a$ can't be in $p^nR$ for all $n$. There must be a least such $n$ by the well-ordering principle. We have confirmed that $\phi$ is well-defined. Do we have a Euclidean norm though? Let $a$ be a unit. Then $a \notin pR$, so $\phi(a) = 1$, and for $b \neq 0$ we clearly have $\phi(b) \geq \phi(a)$. Anyway, let $a \neq 0$, $b \neq 0$. Further, let $\phi(a) = m+1$ and $\phi(b) = n+1$. Then $a = p^mr$ for some $m \in \Bbb N$ (or $m = 0$) and $r \in R$, and $b= p^ns$ for some $n \in \Bbb N$ (or $n = 0$) and $s \in R$. Let $p^0$ mean $1$. Now $a \mid b$ means $p^mr \mid p^ns$. If $m \gt n$, this gives us $p^{m-n}r \mid s$ $\implies$ $b = p^np^{m-n}rk$ for some $k \in R$, which is basically saying that $b = p^mrk$; but $b$ can't be in $p^mR$, because $p^mR \subseteq p^{n+1}R$. Therefore, $\phi(a) \leq \phi(b)$. Property (2): let $a, b \in R$, $a \neq 0$, and let $\phi(a) = n$. We have $a = p^{n-1}u$, where $u$ is a unit. If $u$ weren't, it would have to be in $pR$ [in a local ring, every element outside the maximal ideal is a unit], say $u = pr$, and we would get $a = p^{n-1}pr = p^nr$; a contradiction. So now, if $\phi(b) \lt n$, consider $b = a\cdot0 + b$. This is in accordance with (2). If $\phi(b) \geq n$, then $b = p^mx$ for some $x \in R$ and $m \in \Bbb N \cup \{0\}$ with $m \geq n-1$, which means that $b$ is also of the form $p^{n-1}y$. Then $b = a(u^{-1}y) + 0$, since $au^{-1}y = p^{n-1}u \cdot u^{-1}y = p^{n-1}y = b$. Of course, $\phi(0) \lt \phi(a)$ when $a \neq 0$. It seems to me that property (2) is the only thing that requires there to be only one maximal ideal. The maximal ideal in a field is the zero ideal, generated by $0$; that is, let $p=0$ and write $\{0\} = 0 \cdot R$. Now, any nonzero element is not in this ideal, so its norm would be $1$.
Why should automorphism groups of compact hyperbolic curves be finite
All compact complex surfaces $X$ of general type have a finite group $Aut(X)$ of automorphisms. If $X$ is minimal, even numerical bounds are known for the number $\#Aut(X)$, e.g. a bound linear in $c_1^2(X)$. Cf. "Xiao, Gang: Bound of automorphisms of surfaces of general type, I. Ann. of Math. (139) 1994, 51-77"
Find the limit $\lim_{x\to \infty} \sqrt{x^2+x}-\sqrt{x^2-x}$
Write your limit in the form $$\frac{2x}{x\left(\sqrt{1+1/x}+\sqrt{1-1/x}\right)}$$ (valid for $x>0$); cancelling the $x$ gives $\frac{2}{\sqrt{1+1/x}+\sqrt{1-1/x}}\to\frac{2}{1+1}=1$ as $x\to\infty$.
Counting the number of ways (variants)
The candy bars are identical, all $10$ of them must be given out, and each kid gets $0$ or more. You are correct that the number of ways of distributing the candy bars to four children is the number of solutions of the equation $$x_1 + x_2 + x_3 + x_4 = 10$$ in the non-negative integers. The candy bars are identical, it's allowed to give out fewer than $10$ candy bars, and each kid gets $1$ or more. You are correct that the number of ways of distributing the candy bars is the number of solutions of the inequality $$x_1 + x_2 + x_3 + x_4 \leq 10 \tag{1}$$ in the positive integers. If we let $x_5 = 10 - (x_1 + x_2 + x_3 + x_4)$, which could represent the number of candy bars the person distributing them keeps for himself, then inequality 1 can be transformed into the equation $$x_1 + x_2 + x_3 + x_4 + x_5 = 10 \tag{2}$$ where $x_1, x_2, x_3, x_4 \geq 1$ and $x_5 \geq 0$. If we let $y_k = x_k - 1$, $1 \leq k \leq 4$, and let $y_5 = x_5$, then we can transform equation 2 into the equation $$y_1 + y_2 + y_3 + y_4 + y_5 = 6 \tag{3}$$ Equation 3 is an equation in the non-negative integers. The candy bars are distinct, all of them must be given out, and each kid gets $1$ or more. There are four choices of recipient for each of the ten candy bars. Hence, the number of ways we could distribute the candy bars is $4^{10}$. However, this counts distributions in which fewer than four children receive a candy bar. To eliminate those cases, use the Inclusion-Exclusion Principle. The number of ways to exclude $k$ children from receiving a candy bar is $\binom{4}{k}$. The number of ways to distribute the candy bars to the remaining children is $(4 - k)^{10}$. Hence, the number of ways of distributing the candy bars so that each child receives at least one is $$4^{10} - \binom{4}{1}3^{10} + \binom{4}{2}2^{10} - \binom{4}{3}1^{10}$$ Next problem: The candy bars are distinct, exactly $4$ of them will be given out, and each kid gets exactly $1$. We select four of the ten candy bars. Next we line up the four children in some order, such as by age. Now we distribute the four selected candy bars. We have four options for the first child in line, after which three options remain for the second child in line, and so forth. $$\binom{10}{4} \cdot 4!$$ Finally, The candy bars are distinct; this time, instead of giving them out to the kids, we'll simply put $4$ of them inside a single basket. You are correct that this can be done in $\binom{10}{4}$ ways since we are simply selecting four of the ten candy bars to place in the basket.
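The inclusion-exclusion count in the distinct-bars scenario is small enough to confirm by brute force ($4^{10}\approx 10^6$ assignments):

```python
# Confirm: number of ways to give 10 distinct bars to 4 kids with every
# kid getting at least one bar, by enumeration vs. inclusion-exclusion.
from itertools import product
from math import comb

formula = 4**10 - comb(4, 1)*3**10 + comb(4, 2)*2**10 - comb(4, 3)*1**10
brute = sum(1 for assign in product(range(4), repeat=10)
            if len(set(assign)) == 4)          # all 4 kids appear
print(formula, brute, formula == brute)        # 818520 818520 True
```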
Quick Fourier Series Question about Cn Integration
The complex Fourier series is defined as: \begin{equation} f(x)=\sum_{-\infty}^{+\infty} c_n e^{i n x \frac{2 \pi}{L}} \end{equation} where \begin{equation} c_n=\frac{1}{L} \int_{0}^{L} f(x) e^{-i n x \frac{2 \pi}{L}} dx \end{equation} In your case $L=3$. Of course the Fourier series is periodic.
Q: Volume involving spherical and polar coordinates
Your working for spherical coordinates is correct. In cylindrical coordinates, the lower bound of $y$ is incorrect; everything else is correct. Taking the lower bound of $y$ as zero for all values of $r$ would mean you are integrating over a cylinder. Please note that $y$ is bounded below by the surface of the cone and hence, $y^2 \tan^2 a = r^2 \implies y = r \cot a$. So the integral should be $ \displaystyle \int_0^{2\pi}\int_0^{b\sin a}\int_{ \color {blue} {r \cot a}}^{\sqrt{b^2-r^2}} r \ dy \ dr \ d\theta$
Homotopy Equivalence on Fibers implies the same on total spaces?
Hint: Assume that all spaces here are connected $CW$-complexes. The long exact homotopy sequence associated to the fibrations implies that $\pi_n(E)\simeq \pi_n(E')$ for $n\geq 0$, via the morphism induced by $g$. Since $E$ and $E'$ are connected $CW$-complexes, the Whitehead theorem then implies that $g$ is a homotopy equivalence.
Christoffel Symbols in Flat Space-Time
Yes, it makes sense to talk about Christoffel symbols in flat spacetime. Every coordinate system has associated Christoffel symbols. On Minkowski spacetime in the standard coordinates, the Christoffel symbols are all zero. But in different coordinates (e.g., spherical coordinates), they will not be zero. The Christoffel symbols contain information about the intrinsic curvature of the spacetime and about the "curvature of the coordinates".
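For example, in the plane (the $2$D analogue of the spherical-coordinates remark) with polar coordinates, $ds^2 = dr^2 + r^2\,d\theta^2$, the nonzero symbols can be computed directly from the metric; a sketch with sympy (assumed available):

```python
# Christoffel symbols of flat 2D space in polar coordinates, from the
# standard formula Gamma^k_{ij} = (1/2) g^{kl} (d_i g_{lj} + d_j g_{li} - d_l g_{ij}).
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])   # metric ds^2 = dr^2 + r^2 dtheta^2
ginv = g.inv()

def christoffel(k, i, j):
    return sp.simplify(sum(
        ginv[k, l] * (sp.diff(g[l, j], coords[i])
                      + sp.diff(g[l, i], coords[j])
                      - sp.diff(g[i, j], coords[l])) / 2
        for l in range(2)))

print(christoffel(0, 1, 1))   # Gamma^r_{theta theta} = -r
print(christoffel(1, 0, 1))   # Gamma^theta_{r theta} = 1/r
```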
Borel Zero-One Law for a sequence of constants
Let $A_n = \{X_n > a_n\}$. Then you'll agree that what we are trying to calculate is $P(A_n \text{ i.o})$. These events are independent (since the random variables are independent) so by the Borel–Cantelli lemmas, $P(A_n \text{ i.o})$ is $0$ or $1$ depending on whether or not the sum $$ \sum_{n = 1}^\infty P(A_n) $$ converges. But since $P(A_n) = P(X_n > a_n) = P(X_1 > a_n)$ what we have is $$ \sum_{n = 1}^\infty P(A_n) = \sum_{n = 1}^\infty P(X_1 > a_n). $$
Use the definition to prove that for all natural numbers $m$, $4m+7$ is odd.
"When giving an explanation I'm not sure if I should say $4i+6$ is an integer (even) and $+1$ is odd." No, this is not what you want to say! You are told to use the definition, so you need to use the definition. That is, the only way you can prove a number is odd (in this context) is by proving it satisfies the definition: that is, there exists an integer $k$ such that the number is $2k+1$. That means you need to find some integer $k$ such that $4m+7=2k+1$. In other words, you want to solve that equation for $k$ and show that the value of $k$ you obtain is indeed an integer. Can you see a way to do that? (Also, when writing a proof, you want to be very clear and precise about all your notation. What do you mean when you write $4n+7=4i+7$? Where did these variables $n$ and $i$ come from? The problem statement, at least in the title of your post, uses $m$ instead, so you should probably use $m$. If you use a different variable, you should explain what it means and how it is related to $m$.)
Polynomials of degree $n$ over $\Bbb{F}_p$ are always reducible in $\Bbb{F}_{p^n}$
This is true. If $P(x)$ is irreducible over $\mathbb F_{p^n}$ then it is certainly irreducible over $\mathbb F_{p}$. But that means that $\mathbb F_p(\alpha)$ for some root $\alpha$ of $P$ is a degree $n$ extension of $\mathbb F_p$, and hence equal to $\mathbb F_{p^n}$, and so $P(x)$ would have a root in $\mathbb F_{p^n}$, contradicting the initial assumption that it was irreducible.
Equivariance of lifts to universal cover
Consider a lift $\widetilde f : \widetilde X \to \widetilde Y$ of $f$. By definition of the deck group, each of the maps $\widetilde f \circ \gamma$ and $f_*(\gamma) \circ \widetilde f$ is a lift of the map $$F : \widetilde X \to X \xrightarrow{f} Y $$ The lifting lemma has a uniqueness provision built into it: given two lifts of $F$, if they have equal values at one point of $\widetilde X$, then those two lifts are equal. So, all one needs to do is pick a point $p \in \widetilde X$ and prove that $\widetilde f \circ \gamma(p) = f_*(\gamma) \circ \widetilde f(p)$; probably that proof is easy with the appropriate choice of $p$. At this stage, since your question does not specify base points, and since base points are needed downstairs in $X$ and in $Y$ in order to define the fundamental group, and since base points are needed upstairs in $\widetilde X$ and $\widetilde Y$ in order to define the deck groups, I think I'll end with a hint: with good choices of all these base points, the appropriate point $p$ which makes the proof easy is probably closely associated to the choices of the base points.
minimize $x_1$ subject to $f_1(x_1,x_2) \le C_1$, $f_2(x_1,x_2) \le C_2$
The KKT conditions are NOT satisfied at the optimum $[0;0]$, which, as you said, is the only feasible point. The nonlinear inequality constraints cannot be strictly satisfied, hence, as you said, the Slater condition fails. The gradient of constraint 1 at the optimum is $[0;-2]$; the gradient of constraint 2 at the optimum is $[0;2]$. These gradients are not linearly independent at the optimum, hence the Linear Independence Constraint Qualification (LICQ) also fails there. Indeed, there are no (non-negative) Lagrange multipliers which make the gradient of the Lagrangian the zero vector at the optimum. However, for a very slight perturbation away from the optimum, (non-negative) Lagrange multipliers can be found (via constrained linear least squares) which make the gradient of the Lagrangian very close to the zero vector. Therefore, such a near-optimal point can achieve a very good KKT optimality score and be declared optimal for some reasonable KKT optimality tolerance. As an illustration of coming close to satisfying the KKT conditions, I ran my own nonlinear optimizer and got an "optimal" point, which in reality is slightly infeasible, of $10^{-5}\cdot[-0.997888668876936;\,0.000000000003149]$. Using Lagrange multipliers of $10^{4}\cdot[2.505289510463797;\,2.505289510463796]$, the gradient of the Lagrangian is $10^{-8}\cdot[-0.589922777294305;\,-0.002182787284255]$, and the constraints are each violated by about $9.96\times10^{-11}$. From a practical optimization perspective, that optimization problem is solved and the KKT conditions are satisfied to a reasonable tolerance. However, the theoretical failure of the KKT conditions to hold exactly can add some difficulty for certain solvers.
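To illustrate the least-squares point concretely (the constraint functions themselves aren't reproduced here, so this sketch uses only the gradients stated above; `scipy.optimize.nnls` finds the best non-negative multipliers):

```python
import numpy as np
from scipy.optimize import nnls

# Gradient of the objective x_1 at the candidate optimum [0; 0].
g = np.array([1.0, 0.0])
# Columns are the stated constraint gradients at [0; 0]: [0; -2] and [0; 2].
G = np.array([[0.0, 0.0],
              [-2.0, 2.0]])

# Best non-negative multipliers for stationarity:
# minimize ||G @ lam + g||_2 subject to lam >= 0.
lam, residual = nnls(G, -g)
print(lam, residual)  # lam = [0, 0], residual = 1.0:
                      # the Lagrangian gradient cannot be made zero at [0; 0]
```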
Geometric series; Bernoulli Random Variable
Using your formula we obtain $p(1)=\frac{1}{2}$ and $p(2)=\frac{3}{4}$. Thus $p(1)+p(2)\gt 1$, which is impossible. If we define the distribution of a random variable $X$ by saying that $\Pr(X\le k)$ is given by your sum, then we will have defined a legitimate distribution. The probability mass function is then given by $p(k)=mn^k+nm^k$. Since $m=n=\frac{1}{2}$, it would be clearer to say $p(k)=\frac{1}{2^k}$ for every positive integer $k$.
Hoffman and Kunze, Linear Algebra Sec 3.4 exercise 3
Since $\{\alpha_1,\dots,\alpha_n\}$ is a basis of $F^n$, we know that $\{T\epsilon_1,\dots,T\epsilon_n\}$ generates the range of $T$. But $T\epsilon_i$ equals the $i$-th column vector of $A$. Thus the column vectors of $A$ generate the range of $T$ (where we identify $F^n$ with $F^{n\times1}$). We can also conclude that some subset of the columns of $A$ gives a basis for the range of $T$.
Fixed points and periodic orbits of $F(x)=x^2-1.1$
Compute the fixed points of $F^2(x)$: the set $$X_2= \{x: x=F^2(x)\}.$$ This set contains both the fixed points of $F$ and the points of prime period $2$ (the $2$-cycles). Then compute the set $X_1$ of fixed points of $F$: $$ X_1= \{x: x=F(x)\}.$$ Removing the points of $X_1$ from $X_2$ leaves exactly the points of the cycles of prime period $2$. Generally: the roots of $F^p(x)=x$ are the points of the cycles whose period is $p$ or a divisor of $p$.
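For the map in the title this procedure is easy to carry out with sympy (my own illustration):

```python
import sympy as sp

x = sp.symbols('x')
F = x**2 - sp.Rational(11, 10)          # F(x) = x^2 - 1.1

X1 = sp.solve(sp.Eq(F, x), x)           # fixed points: F(x) = x
F2 = F.subs(x, F)                       # F^2(x) = F(F(x))
X2 = sp.solve(sp.Eq(F2, x), x)          # fixed points of F^2

# Prime period 2 = fixed points of F^2 that are not fixed points of F.
period2 = [p for p in X2 if all(sp.simplify(p - q) != 0 for q in X1)]
print(X1)        # (1 +/- sqrt(5.4))/2, the two fixed points
print(period2)   # roots of x^2 + x - 0.1, a genuine 2-cycle
```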
When proving $\liminf(a_n)>L$, proving $\liminf(a_n)>a$ with $a<L$ is enough?
You proved that, for every $\epsilon>0$, $$\liminf\sqrt[n]{a_n}\geq L-\epsilon.$$ We can then take the supremum over $\epsilon$ on the right-hand side: $$\liminf\sqrt[n]{a_n}\geq\sup_{\epsilon>0}\,(L-\epsilon)=L.$$
Number of ways of partitioning $a+b$ objects into $k $ partitions such that every partition has at least one object
It would be surprising if a closed form could be given for this number, since setting $b=0$ would give the number of partitions of $a$ into $k$ parts, for which no closed form is known. But we can readily write down a generating function by analogy with the partition number generating function: The desired number is the coefficient of $x^ay^bz^k$ in $$\prod_{{\scriptstyle l,m=0}\atop{\scriptstyle l+m\ne 0}}^\infty\frac1{1-x^ly^mz}\;.$$
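If one only needs the numbers, the coefficient can be computed directly: reading a "part" as a pair $(l,m)\ne(0,0)$ recording how many objects of each kind it contains, the count is the number of multisets of $k$ such pairs with $\sum l=a$ and $\sum m=b$. A memoized brute-force sketch (my own illustration of the generating function above):

```python
from functools import lru_cache

def count(a, b, k):
    """Number of multisets of k pairs (l, m) != (0, 0) with sum of l's = a
    and sum of m's = b, i.e. the coefficient of x^a y^b z^k above."""
    pairs = [(l, m) for l in range(a + 1) for m in range(b + 1) if (l, m) != (0, 0)]

    @lru_cache(maxsize=None)
    def rec(i, a_left, b_left, k_left):
        if k_left == 0:
            return 1 if a_left == b_left == 0 else 0
        if i == len(pairs):
            return 0
        l, m = pairs[i]
        total = 0
        for reps in range(k_left + 1):            # how often pair i is used
            if reps * l > a_left or reps * m > b_left:
                break
            total += rec(i + 1, a_left - reps * l, b_left - reps * m, k_left - reps)
        return total

    return rec(0, a, b, k)

print(count(4, 0, 2))   # 2: with b = 0 this is partitions of 4 into 2 parts
```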
Does the following inner product converge?
No. Consider $X=Y=\ell^2(\Bbb N)$ and let $A_n$ be the projection onto $e_n$. The $A_n$ are compact and converge strongly to zero, however with $x_n=z_n=e_n$ you've got $$\langle A_nx_n,z_n\rangle=\langle e_n,e_n\rangle=1$$ for all $n$. Indeed, if $A_n\not\to0$ in the norm topology then you will always find sequences $x_n,z_n$ so that $\langle A_nx_n,z_n\rangle$ does not converge to zero. For if $A_n\not\to0$ in norm you've always got a subsequence $A_{n_k}$ so that $\|A_{n_k}\|≥C$ for some $C>0$. Then for each $k$ you can find a non-zero $x_{n_k}\in \ell^2$ so that $\|A_{n_k}x_{n_k}\|≥\tfrac C2\|x_{n_k}\|$. Let $z_{n_k}=A_{n_k}x_{n_k}$; then $$\left\langle A_{n_k}\frac{x_{n_k}}{\|x_{n_k}\|},\frac{z_{n_k}}{\|z_{n_k}\|}\right\rangle=\left\|A_{n_k}\frac{x_{n_k}}{\|x_{n_k}\|}\right\|≥\frac C2.$$
Calculate $f_Y(y)$ for all $0 < y < 1$
Yes, just calculate $f_Y(y)$ by integrating over $x$ between the appropriate limits. But as far as plugging values into a pdf is concerned: that doesn't actually give you probabilities; it just gives you the density of the function at that point. Now to the last part of your question: if you want the probability, just integrate your density between $0$ and $1$. NOTE: the probability at a single point for a continuous random variable is $0$.
triangle circumcircle geometry question
Triangle $PKB$ is isosceles and right-angled, so $\angle PBK=\angle PBC=45^\circ$. Hence $\angle PAC=45^\circ$. We can calculate $\angle A$ by the cosine rule, getting approximately $57.9^\circ$. Hence $\angle PAB=\angle A-\angle PAC\approx12.9^\circ$, and so $\angle PCB\approx12.9^\circ$. Let $BK=x$. Then $CK=17-x$, and $PK=x=(17-x)\tan12.9^\circ\approx3.89-0.229x$. So $x\approx3.168$. So finally $$\frac{\triangle BPK}{\triangle ABC}=\frac{x^2/2}{\frac12\cdot12\cdot20\,\sin A}=\frac{x^2}{12\cdot20\,\sin A}\approx0.049$$
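A numerical check of these values (assuming, as the numbers above suggest, $AB=12$, $AC=20$, $BC=17$, with $K$ the foot of $P$ on $BC$):

```python
from math import acos, degrees, radians, tan, sin

AB, AC, BC = 12, 20, 17                 # assumed side lengths
A = acos((AB**2 + AC**2 - BC**2) / (2 * AB * AC))
print(degrees(A))                       # ~57.9 degrees

PCB = A - radians(45)                   # angle PCB = A - 45deg ~ 12.9deg
x = BC * tan(PCB) / (1 + tan(PCB))      # solves x = (17 - x) tan(12.9deg)
print(x)                                # ~3.17

print(x**2 / (AB * AC * sin(A)))        # area ratio, ~0.049
```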
Show that if $Q$ is orthogonal, then $Q^{-1}$ is orthogonal?
$$ (Q^{-1})(Q^{-1})^\intercal = (Q^{-1})(Q^\intercal)^{-1} = (Q^\intercal Q)^{-1} = I^{-1} = I $$ where the identities $(MN)^{-1} = N^{-1}M^{-1}, (M^\intercal)^{-1} = (M^{-1})^\intercal$ are used.
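A quick numerical sanity check (my addition, not part of the proof), using a random orthogonal $Q$ obtained from a QR factorization:

```python
import numpy as np

Q, _ = np.linalg.qr(np.random.randn(4, 4))     # random orthogonal matrix
Qinv = np.linalg.inv(Q)
print(np.allclose(Qinv @ Qinv.T, np.eye(4)))   # True: Q^{-1} is orthogonal
```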
Find harmonic function s.t. $\int_{\mathbb R^d}|D^2 u|^2<\infty $
Suppose that $u$ is a harmonic function on $\mathbb{R}^n$. Then by the mean-value property we know that for each $x \in \mathbb{R}^n$ and $r>0$ we have that $$ u(x) = \frac{1}{\omega_n r^{n}} \int_{ B(x,r)} u(y) dy $$ where $\omega_n$ is the volume of the unit ball in $\mathbb{R}^n$. From this and Cauchy-Schwarz we get the estimate $$ |u(x)| \le \frac{1}{\omega_n r^n} \int_{B(x,r)} |u(y)| dy \le \frac{1}{\omega_n r^n} \left( \int_{B(x,r)} |u(y)|^2 dy \right)^{1/2} |B(x,r)|^{1/2} \\ =\frac{1}{\sqrt{\omega_n} r^{n/2}} \left( \int_{B(x,r)} |u(y)|^2 dy \right)^{1/2} $$ Now, if $u$ is such that $$ \int_{\mathbb{R}^n} | u(y)|^2 dy = M^2 < \infty, $$ then we in turn find that $$ |u(x)| \le \frac{1}{\sqrt{\omega_n} r^{n/2}} \left( \int_{B(x,r)} |u(y)|^2 dy \right)^{1/2} \le \frac{1}{\sqrt{\omega_n} r^{n/2}} \left( \int_{\mathbb{R}^n} |u(y)|^2 dy \right)^{1/2} = \frac{M}{\sqrt{\omega_n} r^{n/2}}. $$ Then for any fixed $x \in \mathbb{R}^n$ we can send $r \to \infty$ to get that $$ |u(x)| \le \lim_{r \to \infty}\frac{M}{\sqrt{\omega_n} r^{n/2}} =0, $$ and hence we find that $u(x) =0$ for all $x$. Thus the only harmonic function that is square-integrable over all of $\mathbb{R}^n$ is $0$. Having established this, we can now answer your question. Suppose now that $u$ is harmonic on $\mathbb{R}^n$ and $D^2 u$ is square-integrable. Since $u$ is harmonic, it's smooth, and so we can apply arbitrarily many derivatives to the PDE $\Delta u =0$. In particular we find that if $|\alpha|=2$ then $$ 0 = \partial^\alpha \Delta u = \Delta \partial^\alpha u, $$ which means that all of the second-order partial derivatives are also harmonic. Since $$ \int_{\mathbb{R}^n} |\partial^\alpha u |^2 \le \int_{\mathbb{R}^n} |D^2 u |^2 < \infty $$ we find that $\partial^\alpha u$ is a harmonic function that is square-integrable. By the above analysis we then know that $\partial^\alpha u =0$ for all multi-indices with $|\alpha |=2$. In other words, we know that $D^2 u =0$, and so basic calculus tells us that $u$ is affine, i.e. $u(x) = a + b \cdot x$ for some $a \in \mathbb{R}$ and $b\in \mathbb{R}^n$. All such functions are trivially harmonic. Thus a harmonic function has $D^2 u$ square-integrable if and only if it is affine.
Suppose $\mathcal{V}$ is a subspace of $\mathbb{R}^n$
I would go and try to prove both inclusions separately. So let's focus on $$ (\mathcal{V}\cap\operatorname{im} P) \subset (P^{-1}\mathcal{V}\cap\operatorname{im} P) $$ first. $$ x \in (\mathcal{V}\cap\operatorname{im} P) \Rightarrow x \in \mathcal V \land x \in \operatorname{im} P $$ $$ \Rightarrow x \in \mathcal V \land \exists y \in \mathbb{R}^n : Py=x $$ By the definition of a projection ($P^2=P$) we get: $$ \Rightarrow x \in \mathcal V \land Px = P^2y = Py = x $$ $$ \Rightarrow x \in (P^{-1}\mathcal{V}\cap\operatorname{im} P) $$ Now for the second part I'll give you the following hint: because $P$ is a projection we know $$ x \in \operatorname{im} P \iff Px = x $$ I hope that will help you a bit.
Equation of a subspace given basis
Hint: All vectors in $S$ are perpendicular to the normal of the plane.
How does it follow from Pascal's Triangle that binomial coefficients are integers
I suppose that, in this context, $\binom nm$ is defined as$$\binom nm=\frac{n!}{m!(n-m)!}.$$With this definition, it is clear that $\binom nm\in\mathbb Q$, but it is not clear that it is an integer. However, it is clear that $\binom n0=\binom nn=1$, which is an integer. And, since$$\binom n{m-1}+\binom nm=\binom{n+1}m,$$it follows (by induction on $n$), that each $\binom nm$ is an integer.
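The induction is exactly how one builds Pascal's triangle numerically: each row is produced from the previous one using only integer additions, so every entry is an integer by construction. A minimal sketch:

```python
def pascal_row(n):
    """Row n of Pascal's triangle, built using only integer additions."""
    row = [1]
    for _ in range(n):
        row = [1] + [row[i] + row[i + 1] for i in range(len(row) - 1)] + [1]
    return row

print(pascal_row(5))   # [1, 5, 10, 10, 5, 1], all integers by construction
```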
Removing balls from an urn in pairs: expected number of pairs in which both are red
As Didier pointed out in a comment, you can see that the probabilities are the same for each pair by noting that each pair contains two uniformly chosen balls, that is, since the balls are by definition identical (except for the colour, which doesn't influence their selection), in a given draw every pair of balls is equally likely to be drawn, and this determines all probabilities related to a single draw. Regarding the variance, your calculation is correct (the factor $\binom n2$ is the number of pairs in the summation over the covariances), but I find it easier to calculate variances using expectation values, like this: \begin{align} \operatorname{Var}(X)&=E[X^2]-E[X]^2\\ &=nE[X_1^2]+n(n-1)E[X_1X_2]-(nE[X_1])^2\\ &=nE[X_1]+n(n-1)E[X_1X_2]-(nE[X_1])^2\\ &=n\frac{\binom r2}{\binom{2n}2}+n(n-1)\frac{\binom r4}{\binom{2n}4}-n^2\left(\frac{\binom r2}{\binom{2n}2}\right)^2\;. \end{align}
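For anyone who wants to double-check the formula, here is a small Monte Carlo sketch (my own addition) against the exact expression:

```python
import random
from math import comb

def exact_var(n, r):
    e1 = comb(r, 2) / comb(2 * n, 2)      # E[X_1] = E[X_1^2]
    e12 = comb(r, 4) / comb(2 * n, 4)     # E[X_1 X_2]
    return n * e1 + n * (n - 1) * e12 - (n * e1) ** 2

def mc_var(n, r, trials=200_000):
    balls = [1] * r + [0] * (2 * n - r)   # 1 = red ball
    counts = []
    for _ in range(trials):
        random.shuffle(balls)
        counts.append(sum(balls[2 * i] & balls[2 * i + 1] for i in range(n)))
    mean = sum(counts) / trials
    return sum((c - mean) ** 2 for c in counts) / trials

print(exact_var(5, 4), mc_var(5, 4))      # the two values should agree closely
```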
connected components of open sets are open
A space $X$ is locally connected iff every component of every open set is open. See here e.g. So this proof is not correct. Consider the rationals $\mathbb{Q}$: all components are singletons, which are never open in $\mathbb{Q}$ (nor in any of its open sets). Logical fallacies: "not open" does not imply "closed" (sets aren't doors). Components are always closed, so the complement of a component is a union of closed sets, namely all the other components; but a union of closed sets need not be closed, only finite unions are. So the complement need not be closed, etc.
For a discrete stopping time $\tau$, $\mathcal{F}_\tau^X = \sigma(X(t\wedge\tau):t\ge 0).$
Let $(X_n)_{n \in \mathbb{N}}$ be a stochastic process and $\tau: \Omega \to \mathbb{N}$ an $\mathcal{F}^X$-stopping time. You have already shown that $X(\tau \wedge n)$ is $\mathcal{F}_{\tau}^X$-measurable for all $n \geq 1$, and therefore it just remains to show that $$\mathcal{F}_{\tau}^X \subseteq \sigma(X(\tau \wedge n); n \geq 1) =: \mathcal{H}. \tag{1} $$ Let us first recall the factorization lemma: Let $Y:\Omega \to \mathbb{R}^d$ be a random variable. Then a mapping $T: \Omega \to \mathbb{R}$ is $\sigma(Y)$-measurable if, and only if, there exists a Borel-measurable mapping $f:\mathbb{R}^d \to \mathbb{R}$ such that $T = f \circ Y$. Proof of $(1)$: Since $\tau$ is a discrete stopping time, we have $$\begin{align*} \mathcal{F}_{\tau}^X &\stackrel{\text{def}}{=} \{A; \forall k \geq 1: \, \, A \cap \{\tau \leq k\} \in \mathcal{F}_k^X\} \\ &= \{A; \forall k \geq 1: \, \, A \cap \{\tau = k\} \in \mathcal{F}_k^X\}. \tag{2} \end{align*}$$ Let $A \in \mathcal{F}_{\tau}^X$. We show by induction that $A \cap \{\tau = k\} \in \mathcal{H}$ for all $k \geq 1$. $k=1$: $$A \cap \{\tau = 1\} \in \mathcal{F}_1^X = \sigma(X_1) = \sigma(X_{\tau \wedge 1}) \subseteq \mathcal{H}.$$ $k-1 \to k$: By $(2)$ we have $A \cap \{\tau =k\} \in \mathcal{F}_k^X = \sigma(X_1,\ldots,X_k)$. Applying the above theorem shows that there exists a mapping $f: \mathbb{R}^k \to \mathbb{R}$ such that $$1_{A \cap \{\tau = k\}} = f(X_1,\ldots,X_k). $$ Using that $1_{\{\tau=k\}} = 1_{\{\tau=k\}} 1_{\{\tau \geq k\}}$ we find $$\begin{align*} 1_{A \cap \{\tau = k\}} = f(X_1,\ldots,X_k) 1_{\{\tau \geq k\}} &= f(X_{\tau \wedge 1},\ldots,X_{\tau \wedge k}) 1_{\{\tau \geq k\}}. \end{align*}$$ By the induction hypothesis (applied also with $A = \Omega \in \mathcal{F}_\tau^X$), we have $\{\tau \geq k\} = \{\tau \leq k-1\}^c \in \mathcal{H}$, and therefore the right-hand side is $\mathcal{H}$-measurable. This, however, is equivalent to saying that $A \cap \{\tau=k\} \in \mathcal{H}$. Finally, we conclude that $$A = \bigcup_{k \geq 1}(A \cap \{\tau=k\}) \in \mathcal{H}.$$ Remarks: It is also possible to show that $$\sigma(X_{n \wedge \tau}; n \geq 1) = \sigma(\mathcal{F}_{\tau \wedge n}; n \geq 1),$$ see this question. See this question for the analogous result for time-continuous processes.
Vector Fields as Differential Operators
We begin by defining the scalar product of a covector and a vector as \begin{align*} \alpha\cdot\xi=\alpha_1\xi^1+\alpha_2\xi^2+\cdots+\alpha_n\xi^n \end{align*} Since $\alpha=d\Phi$, we have \begin{align*} \alpha\cdot\xi&=d\Phi\cdot\xi \end{align*} The covector $d\Phi$ is equal to \begin{align*} d\Phi=\left(\frac{\partial\Phi}{\partial x^1},\frac{\partial\Phi}{\partial x^2},\ldots,\frac{\partial\Phi}{\partial x^n}\right) \end{align*} Using the expression for the scalar product defined earlier, \begin{align*} d\Phi\cdot\xi&=\frac{\partial\Phi}{\partial x^1}\xi^1+\frac{\partial\Phi}{\partial x^2}\xi^2+\cdots+\frac{\partial\Phi}{\partial x^n}\xi^n \end{align*} The vector field $\xi$ can be represented by its set of components in terms of partial differential operators, and we will of course choose the coordinates to be the same as the previously used coordinates for $\Phi$: \begin{align*} \xi=\xi^1\frac{\partial}{\partial x^1}+\xi^2\frac{\partial}{\partial x^2}+\cdots+\xi^n\frac{\partial}{\partial x^n} \end{align*} Therefore, \begin{align*} \xi(\Phi)&=\left(\xi^1\frac{\partial}{\partial x^1}+\xi^2\frac{\partial}{\partial x^2}+\cdots+\xi^n\frac{\partial}{\partial x^n}\right)\Phi\\ &=\xi^1\frac{\partial\Phi}{\partial x^1}+\xi^2\frac{\partial\Phi}{\partial x^2}+\cdots+\xi^n\frac{\partial\Phi}{\partial x^n} \end{align*} Hence, the definition of the scalar product implies $d\Phi\cdot \xi=\xi(\Phi)$ as required.
Continuity at $x=0$ of this function
As written here, you have made a minor mistake. I recommend passing from the first line to the third line directly, omitting the $x^2$s.
if $F$ is closed in a compact set $K \subseteq \mathbb R$, then $F$ is compact. proof problem
This is true in a general topological space: a closed subset $F$ of a compact set $K$ is compact. We can directly check the definition of compactness. Indeed, suppose that $\{O_i\}$ is an open covering of $F$. Since $F$ is closed, $O:=F^c$ is open, and moreover $\{O\}\cup\{O_i\}$ is an open covering of $K$. Therefore, we can extract a finite subcovering $O,O_{i_1},\cdots,O_{i_n}$ of $K$. Necessarily, $O_{i_1},\cdots,O_{i_n}$ is a covering of $F$.
Balls and boxes probability question.
There are $5!$ ways to arrange the five balls into the five boxes in total. "I have figured out that the number of ways of putting 2 balls in the correct numbered box is $\binom52$, but I can't figure out how to calculate for the remaining 3 balls." To arrange for 2 balls to be "good" and 3 balls "bad": select which two of the five balls are good and put each in its own box. To make sure the three remaining balls are all bad, take any one of them and select one of the two other balls' boxes to put it in; then the ball belonging to that box must go in the last ball's box, so that the last ball also goes into a wrong box. Basically you want to count the ways to rearrange $ABC$ so they are all in the wrong position, such as $BCA$, $CAB$, or... wait, that's it: there are only $2$ such derangements. So the count is $\binom52\cdot2=20$, and the probability is $20/5!=1/6$.
Conjugate of negative normalized entropy
In case $\sum_{i=1}^n e^{y_i} > 1$, setting $x_i = r \cdot e^{y_i}$ yields $r \cdot \sum_{i=1}^n e^{y_i} \cdot \log \sum_{i=1}^n e^{y_i}$, which goes to infinity as $r \rightarrow \infty$. Write $S(x) = \sum_{j=1}^n x_j$. The first derivatives of the $\sup$ objective are \begin{align} \frac{\partial}{\partial x_i} &\left( y^T x - x^T \log x + \sum_{j} x_j \log S(x) \right) = y_i - \log x_i + \log S(x) \ ; \end{align} the gradient being zero thus means that $\forall i: \frac{x_i}{S(x)} = e^{y_i}$. This can be fulfilled if and only if $\sum_i e^{y_i} = 1$, and then with $x_i = r \cdot e^{y_i}$ for some $r>0$. The second derivative of the objective is \begin{align} \frac{\partial^2 \left( y^T x - x^T \log x + \sum_{j} x_j \log S(x) \right)}{\partial x_i \partial x_j}(x) = -\frac{\delta_{ij}}{x_j} + \frac{1}{S(x)} \end{align} which is negative semidefinite: For $v \in \mathbb{R}^n$, \begin{align} \sum_{ij} v_i \left(-\frac{\delta_{ij}}{x_j} + \frac{1}{S(x)}\right) v_j &= - \sum_{i}\frac{v_i^2}{x_i} + \frac{\left(\sum_{i} v_i\right)^2}{S(x)} \leq 0 \end{align} where we used Jensen's inequality on the squaring function: \begin{align} \sum_{i}\frac{v_i^2}{x_i} &= \frac{1}{S(x)}\sum_{i}\frac{v_i^2S(x)}{x_i} \\ &= \frac{1}{S(x)}\sum_{i}\frac{x_i}{S(x)}\left(\frac{v_iS(x)}{x_i}\right)^2 \geq \frac{1}{S(x)}\left(\sum_{i}\frac{x_i}{S(x)}\frac{v_i S(x)}{x_i}\right)^2 = \frac{\left(\sum_{i} v_i\right)^2}{S(x)} \ . \end{align} Thus, in case $\sum_i e^{y_i} = 1$ we have a global maximum with value $h^*(y) = \sum_{j=1}^n r \cdot e^{y_j} y_j - \sum_{i=1}^n r \cdot e^{y_i}\log e^{y_i} = 0$. In the last case $\sum_i e^{y_i} < 1$, there is $\hat{y} \succ y$ with $\sum_i e^{\hat{y}_i} = 1$. For any $x\succ0$ the term $x^T y - \sum_{i=1}^n x_i\log \frac{x_i}{S(x)}$ is increasing in $y_i$. Therefore, $h^*(y) \leq h^*(\hat{y}) = 0$. Again taking $x_i = r \cdot e^{y_i}$ (also possible: simply $x_i = r$), we see that $0$ is realized as the supremum with $r \rightarrow 0$.
On permutations and combinations and probability
Case 1) At least 2 spades in the first 5 cards. For the sample space there are $$\binom {52}{5}\cdot5!$$ ways to select and then draw the cards without any restriction. We first find the number of ways in which there are exactly 2 spades in the first 5, then exactly 3 spades, and so on. For exactly two spades, using the reasoning in case 2 below, the number of ways is $$\binom {13}{2}\binom {39}{3}\cdot5!$$ The same reasoning gives the answers for 3, 4 and 5 spades. Hence we simply need $$\sum_{k=2}^5 \binom {13}{k}\binom {39}{5-k}5!$$ which by Vandermonde's identity simplifies to $$5!\sum_{k=2}^5 \binom {13}{k}\binom {39}{5-k}=5!\cdot\left[\binom {52}{5}- \binom {39}{5}-\binom {13}{1}\binom {39}{4}\right]$$ Hence the probability is $$1- \frac {\binom {39}{5}+\binom {13}{1}\binom {39}{4} }{\binom {52}{5}}$$ Case 2) Exactly 2 spades in the first 5 cards. Again there are $$\binom {52}{5}\cdot5!$$ ways to select and then draw the cards without any restriction. But for our event we need exactly two of the five cards to be spades. We can select these spades in $\binom {13}{2}$ ways, while the remaining 3 cards can be chosen in $\binom {39}{3}$ ways. Since the cards are drawn successively, we also need to arrange them. Hence the number of ways of drawing 5 cards such that exactly 2 of them are spades is $$\binom {13}{2}\binom {39}{3}\cdot5!$$ Hence the probability is $$\frac {\binom {13}{2}\binom {39}{3}}{\binom {52}{5}}$$
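Both probabilities are quick to evaluate (a numerical check of the formulas above):

```python
from math import comb

total = comb(52, 5)
p_exactly_2 = comb(13, 2) * comb(39, 3) / total
p_at_least_2 = 1 - (comb(39, 5) + comb(13, 1) * comb(39, 4)) / total
print(p_exactly_2)    # ~0.2743
print(p_at_least_2)   # ~0.3670
```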
Evaluation of integral $\int_{-\infty}^{+\infty} xe^{-|x|}\,dx$ is not $0$
The integrand is odd and absolutely integrable, so the integral is indeed $0$; substituting $x\mapsto -x$ in the first piece makes this explicit: $$ \begin{align} \int_{-\infty}^{\infty}x\mathrm{e}^{-|x|}dx & = \int_{-\infty}^{0}x\mathrm{e}^{x}dx + \int^{\infty}_{0}x\mathrm{e}^{-x}dx \\[6pt] & =\int_{\infty}^{0}x\mathrm{e}^{-x}dx + \int^{\infty}_{0}x\mathrm{e}^{-x}dx \\[6pt] & =-\int^{\infty}_{0}x\mathrm{e}^{-x}dx + \int^{\infty}_{0}x\mathrm{e}^{-x}dx \\[6pt] & =0 \end{align} $$
Proving that a function is measurable
The mapping $d:Y\times Y \to \mathbb{R}$ is continuous in the product topology, so given any open set $U\subset \mathbb{R}$, $d^{-1}(U)$ is open in $Y\times Y$. Hence, $\phi^{-1}(d^{-1}(U))$ is $\mu$-measurable, where $\phi: X\to Y\times Y$ is the map $x\mapsto (f(x),g(x))$, which is measurable since its component maps $f$ and $g$ are. Hence, $(d\circ \phi)^{-1}(U)$ is measurable, giving us that $d\circ \phi$ is a measurable function.
Limit of ln summation
If you know about Pochhammer symbols, $$\sum_{i=a}^n\log\left(\dfrac{n}{i}\right)=\log \left(\frac{n^{n-a+1}}{a (a+1)_{n-a}}\right)$$ $$\dfrac{1}{n}\sum_{i=\left\lceil \sqrt{n}\right\rceil}^n\log\left(\dfrac{n}{i}\right)=\frac 1n\log\left(\frac{\Gamma \left(\left\lceil \sqrt{n}\right\rceil \right)}{\Gamma (n)}\,n^{n-\left\lceil \sqrt{n}\right\rceil }\right) $$ Now, use Stirling's approximation.
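A quick numerical check (my addition) that the closed form matches the raw sum; both drift toward $1$, consistent with what Stirling's approximation gives in the limit:

```python
from math import lgamma, log, ceil, sqrt

for n in [100, 10_000, 1_000_000]:
    a = ceil(sqrt(n))
    raw = sum(log(n / i) for i in range(a, n + 1)) / n
    # (1/n) * log( Gamma(a) / Gamma(n) * n^(n - a) ), via log-gamma
    closed = (lgamma(a) - lgamma(n) + (n - a) * log(n)) / n
    print(n, raw, closed)   # the two columns agree and approach 1
```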
Calculate $\lim_{n \to \infty} \ln \frac{n!^{\frac{1}{n}}}{n}$
First let's write $$\ln \frac{n!^{\frac{1}{n}}}{n} = \frac{1}{n} (\ln n! - n\ln n) = \frac{a_n}{b_n}$$ where $a_n = \ln n! - n\ln n$ and $b_n = n$. Then $$\begin{align*} \frac{a_{n+1} - a_n}{b_{n+1} - b_n} &amp; = \quad \frac{\ln (n+1)! - (n+1) \ln(n+1) - (\ln n! - n\ln n)}{1} \\ &amp; = \quad \ln(n+1) - (n+1)\ln(n+1) + n \ln n \\ &amp; = \quad -\ln(1 + 1/n)^n \\ &amp; \longrightarrow -1 \ \text{ as } n \to \infty \end{align*}$$ Hence by the Cesàro theorem (a.k.a. Stolz-Cesàro theorem) $$ \lim_{n\to\infty} \ln \frac{n!^{\frac{1}{n}}}{n} = \lim_{n\to\infty} \frac{a_n}{b_n}= \lim_{n\to\infty} \frac{a_{n+1} - a_n}{b_{n+1} - b_n} = -1$$
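As a numerical corroboration (my addition, not part of the argument), one can evaluate $\frac1n(\ln n! - n\ln n)$ directly, using `lgamma` for $\ln n!$:

```python
from math import lgamma, log

# lgamma(n + 1) = ln(n!); the ratio should tend to -1.
for n in [10, 1_000, 100_000]:
    print(n, (lgamma(n + 1) - n * log(n)) / n)
```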
Bringing a constant out of double integral (polar coordinates)
You ignored the angular part of the integral: $$\int\int_R\sin(x^2+y^2)\,dx\,dy=\int_0^{2\pi}\int_1^9r\sin(r^2)\,dr\,d\theta\\=\left(\int_0^{2\pi}\,d\theta\right)\left(\int_1^9r\sin(r^2)\,dr\right)=2\pi\times(-0.118\dots)=-0.74\dots$$
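Since $r\sin(r^2)$ has the elementary antiderivative $-\cos(r^2)/2$, the value is quick to reproduce:

```python
from math import cos, pi

radial = (cos(1) - cos(81)) / 2     # integral of r*sin(r^2) from r = 1 to 9
print(radial)                       # ~ -0.1177
print(2 * pi * radial)              # ~ -0.7398, the full double integral
```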
Independence of two r.v.
Hint: Use the second Borel–Cantelli lemma for sets like $A_{3k}$, or a similar family built from non-overlapping string pieces! Solution: Define $B_k = A_{3k}$. One sees easily that the $\{B_k\}_{k=1}^\infty$ are independent and $\sum {\mathbb P}(B_k)=\infty$, so the second Borel–Cantelli lemma gives ${\mathbb P}(\{B_k \text{ i.o.}\})=1$. Since $\{B_k \text{ i.o.}\}\subset \{A_k \text{ i.o.}\}$, the result follows.
Exponential distribution probability calculation
$$\mathbb{P}[Y \leq1]=\mathbb{P}[X^3 \leq1]=\mathbb{P}[X \leq1]=F_X(1)=1-e^{-4}\approx 0.981684$$
Duplicate each digit in a whole number with arithmetic operations
I'm going to say no. The hitch is that $112233=11\cdot10203$, but any function smart enough to do $12\mapsto102$, $123\mapsto10203$, $1234\mapsto1020304$ and so on is going to need a loop in it. Contrast with making $123123=123\cdot 1001$, which is achievable in a relatively simple function, since $$1001=1+10^{\lfloor\log_{10}123\rfloor+1}$$
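For contrast, the $123\mapsto123123$ trick really is a single closed-form function; a small sketch:

```python
from math import floor, log10

def repeat_block(n):
    """123 -> 123123: multiply by 1 + 10^(number of digits of n)."""
    return n * (1 + 10 ** (floor(log10(n)) + 1))

print(repeat_block(123))   # 123123
print(repeat_block(7))     # 77
```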
Function on equivalence relation
Your work is correct. As was mentioned in the comments, it could be improved readability-wise by giving some intuition behind the steps. For example, what is the big idea?
formula for derivative of differentiable mapping
Hint: In general, when you have a function $f:\mathbf R^n\times\mathbf R^p\to\mathbf R^m$ and functions $a:\mathbf R\to\mathbf R^n$ and $b:\mathbf R\to\mathbf R^p$, and you want to find the derivative of $\phi(t)=f(a(t),b(t))$, you have $$ \frac{\mathrm d\phi}{\mathrm dt} = \frac{\partial f}{\partial x}(a(t),b(t))\cdot a'(t) + \frac{\partial f}{\partial y}(a(t),b(t))\cdot b'(t),$$ where $\partial f/\partial x$ is the differential of the function $f_y:\mathbf R^n\to\mathbf R^m$ defined by $f_y(x)=f(x,y)$, and similarly for $\partial f/\partial y$. Here you have a slightly more general case (with manifolds), but it's basically the same thing.
convergent sequence in set of p-adic numbers
Hint: What is $\lim_{n\to\infty}5^n$ in $\mathbb Q_5$?
Quadratic forms linearized by real matrices
If $(x, y)$ is a scalar product on a vector space, and $A$ a linear transformation on that space, then the mapping that takes $x$ to $(Ax, x)$ is called a quadratic form, and is said to be realized by $A$.