Express width as a function of area
The trick is to use the fact that the width of the rectangle is half the length to express $l$ in terms of $w$, i.e. $$l = 2w.$$ Now you know that the area of a rectangle is length $\times$ width, so you can write $$A = wl.$$ Substituting the above equation, we get $$A= w(2w) = 2w^2.$$ It remains to express $w$ in terms of $A$; noting that $A,l,w \geq 0$, we get: $$A=2w^2 \iff \frac{A}{2} = w^2 \iff \sqrt\frac{A}{2} =w.$$
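A one-line Python check of the final formula (the area value $50$ is just an arbitrary example):

```python
import math

def width_from_area(A):
    # w = sqrt(A/2), solved from A = 2*w**2
    return math.sqrt(A / 2)

print(width_from_area(50))  # 5.0, and indeed 2 * 5**2 = 50
```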
The Frob-id morphism on an abelian variety
The homomorphism $\pi_A$ acts as the identity on $\mathbb{F}_q$-points, but highly nontrivially on points which are defined over extensions of $\mathbb{F}_q$. As homomorphisms, $\pi_A$ and $\text{Id}$ are very different. To see that $\pi_A - \text{Id}$ is finite, one can argue as follows. First, $\pi_A - \text{Id}$ is a homomorphism. Next, the induced map on the tangent space is $-\text{Id}$. This is because $\pi_A$ induces the zero map on the tangent space (the derivative of Frobenius vanishes in characteristic $p$). Since the differential of $\pi_A - \text{Id}$ is invertible, its kernel is finite; its image is then an abelian subvariety of the same dimension as $A$, so $\pi_A - \text{Id}$ is an isogeny, and in particular finite.
Let $f(z)$ be a complex function whose value is $\sqrt{z^2-1}\in\Bbb R$ when $z\in \Bbb R$ and $z>1$, and $f(z)$ is holomorphic if $1<|z|<\infty$
For $z\in\Bbb{C}\setminus[-1,1]$, define $f(z):= z\exp\left(\frac{1}{2}\log\left(1-\frac{1}{z^2}\right)\right)\equiv z\sqrt{1-\frac{1}{z^2}}$, where $\log: \Bbb{C}\setminus(-\infty,0]\to \Bbb{C}$ denotes the principal branch, where the argument lies strictly between $-\pi$ and $\pi$. This is a well-defined holomorphic function because whenever $z\notin [-1,1]$, we have $1-\frac{1}{z^2}\notin (-\infty,0]$. Next, let $c_r:[0,2\pi]\to \Bbb{C}$ denote the curve $c_r(t)=re^{it}$. Now, we're going to transfer the problem to a calculation about the point at infinity. To do so, we introduce the coordinate $\zeta=\frac{1}{z}$ about the point at infinity. Note that $c_r$ is the image under the mapping $\zeta\mapsto z=\frac{1}{\zeta}$ of the curve $c_{1/r,\text{opp}}$, which means the oppositely oriented curve $t\mapsto \frac{1}{r}e^{-it}$. Therefore, the change of variables yields \begin{align} \int_{c_r}f(z)\,dz &=\int_{\zeta\circ c_{1/r,\text{opp}}}f(z)\,dz\\ &=\int_{c_{1/r,\text{opp}}}f(z(\zeta))\,d(z(\zeta))\\ &=\int_{c_{1/r,\text{opp}}}f\left(\frac{1}{\zeta}\right)d\left(\frac{1}{\zeta}\right)\\ &=\int_{c_{1/r,\text{opp}}}\frac{\sqrt{1-\zeta^2}}{\zeta}\left(-\frac{d\zeta}{\zeta^2}\right)\\ &=\int_{c_{1/r}}\frac{\sqrt{1-\zeta^2}}{\zeta^3}\,d\zeta \end{align} Note that in the end, the minus sign cancels the opposite orientation. At this stage, the radius of the circle is irrelevant (as long as it is small enough, i.e. strictly less than $1$). This is a typical residue calculation about $\zeta=0$ (keep in mind the square root has been defined using the principal branch of the logarithm), so I'll leave this bit to you.
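The residue calculation that is left to the reader can be spot-checked with sympy; this is only a verification of the last step, not part of the argument:

```python
import sympy as sp

z = sp.symbols('zeta')
# residue of sqrt(1 - zeta^2)/zeta^3 at 0: the 1/zeta coefficient of the Laurent series
res = sp.residue(sp.sqrt(1 - z**2) / z**3, z, 0)
print(res)                     # -1/2
print(2 * sp.pi * sp.I * res)  # the integral: -I*pi
```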
Help: how to calculate the amount of money earned?
How much does he earn in a week where he works with a partner for 20 hours and alone for 12 hours? When he works with a partner for $20$ hours, he earns $\$11.40$ per hour. So, the total amount earned while working with a partner is $$\$11.40\times20= \$228.00 $$ while the total amount earned while working alone is $\$11.40 + \$2.20=\$13.60$ an hour. As he worked 12 hours we have $$\$13.60\times12= \$163.20 $$ therefore to get the total amount earned in a week we add both of these totals and add the $\$14.50$ clothing allowance $$\$228.00+ \$163.20+\$14.50=\$405.70$$ which means that the suggested answer of $\$405.40$ is off by thirty cents. Did you state everything in the problem properly?
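A quick Python check of the arithmetic:

```python
partner = 11.40 * 20             # 20 hours with a partner
alone = (11.40 + 2.20) * 12      # 12 hours alone at the higher rate
total = partner + alone + 14.50  # plus the clothing allowance
print(total)  # 405.7 (up to floating-point rounding)
```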
Method of characteristics for Burgers' equation with rectangular data
The variable $u$ is constant along the characteristic curves, which satisfy \begin{aligned} x'(t) & = u(x(t),t) \, , \\ & = u(x(0),0) \, . \end{aligned} Thus, the latter are straight lines in the $x$-$t$ plane, determined by the initial data. Here, the initial data is piecewise constant, i.e. we solve Riemann problems. Sketching the characteristics in the $x$-$t$ plane shows that they separate in the vicinity of $x=0$, where a rarefaction wave occurs, and cross in the vicinity of $x=1$, where a shock wave occurs. The rarefaction wave is a continuous self-similar solution, deduced from the self-similarity Ansatz $u (x,t)=v (\xi)$ with $\xi = x/t$. Indeed, $$ \partial_t u(x,t) + u(x,t)\, \partial_x u(x,t) = \left(v(\xi) - \xi\right) \frac{v'(\xi)}{t} \, , $$ and thus, $v(\xi) = \xi$ or $u(x,t) = x/t$. The shock speed $s$ is given by the Rankine-Hugoniot jump condition: $$ s = (1+0)/2\, . $$ As long as the rarefaction and the shock don't interact, the solution is therefore $$ u(x,t) = \left\lbrace \begin{aligned} & 0 &&\text{if }\; x\leq 0 \, , \\ & x/t &&\text{if }\; 0\leq x\leq t \, , \\ & 1 &&\text{if }\; t \leq x < 1+ t/2 \, ,\\ & 0 &&\text{if }\; 1+t/2 < x \, , \end{aligned} \right. $$ valid for times $t<t^*$ such that $t^* = 1 + t^*/2 = 2$. At the time $t^*$, both waves interact. The new shock trajectory is determined from the Rankine-Hugoniot condition $$ x'(t) = (x(t)/t+0)/2 \, , $$ with initial position $x(t^*) = 2$ and hence initial shock speed $x'(t^*) = s$. Hence, the solution for $t\geq t^*$ is $$ u(x,t) = \left\lbrace \begin{aligned} & 0 &&\text{if }\; x\leq 0 \, , \\ & x/t &&\text{if }\; 0\leq x< \sqrt{2t} \, , \\ & 0 &&\text{if }\; \sqrt{2t} < x \, . \end{aligned}\right. $$
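A minimal Python transcription of the pre-interaction solution (valid for $0 < t < t^* = 2$); the sample points are chosen arbitrarily:

```python
def u(x, t):
    # piecewise solution before the rarefaction and shock interact
    assert 0 < t < 2
    if x <= 0:
        return 0.0
    if x <= t:
        return x / t          # rarefaction fan
    if x < 1 + t / 2:
        return 1.0            # plateau behind the shock
    return 0.0                # ahead of the shock

print([u(x, 1.0) for x in (-0.5, 0.5, 1.2, 2.0)])  # [0.0, 0.5, 1.0, 0.0]
```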
Why are trace-class unit-trace linear operators (like density operators) necessarily positive semidefinite?
The definition is the same as in the finite-dimensional case. Note that the general density matrix (an operator) is given by \begin{align} \rho = \sum_n \lambda_n |\psi_n\rangle \langle \psi_n| \end{align} where $\lambda_n\geq 0$ and $\sum_n\lambda_n = 1$. We say that $\rho$ is positive semidefinite if \begin{align} \langle \Psi\mid \rho\mid\Psi\rangle \geq 0 \end{align} for all $\Psi$. Observe \begin{align} \langle \Psi\mid \rho\mid\Psi\rangle = \sum_n \lambda_n \langle \Psi \mid\psi_n\rangle \langle \psi_n\mid \Psi\rangle = \sum_n \lambda_n |\langle \psi_n \mid\Psi\rangle|^2\geq 0. \end{align}
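A small numpy sanity check of this computation: build a $\rho$ of the stated form from random $\lambda_n$ and orthonormal $|\psi_n\rangle$, and test positivity on a random $\Psi$ (all the specific numbers here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
# lambda_n >= 0 summing to 1, and an orthonormal set psi_n (columns of a unitary)
lam = rng.random(d); lam /= lam.sum()
psi = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))[0]
rho = sum(l * np.outer(v, v.conj()) for l, v in zip(lam, psi.T))

Psi = rng.normal(size=d) + 1j * rng.normal(size=d)
print(np.vdot(Psi, rho @ Psi).real >= 0)  # True: <Psi|rho|Psi> >= 0
```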
Does this series converge pointwise or uniformly on $\Bbb R$?
Your series is the Fourier series of the function $f(x)=-\frac{x}{2}$ over $(-\pi,\pi)$, extended by periodicity. The proof is straightforward: you just have to compute $$ \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\sin(nx)\,dx $$ through integration by parts. Then the situation is the following (picture the periodic sawtooth extension of $f$, which jumps at odd multiples of $\pi$): the series converges pointwise at every point of $I=(-\pi,\pi)$. By virtue of Gibbs' phenomenon, the convergence on $I$ is not uniform.
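The integration by parts can be delegated to sympy; a minimal sketch (the simplified coefficient should come out as $(-1)^n/n$, so the series in question is $\sum_n (-1)^n \sin(nx)/n$):

```python
import sympy as sp

x = sp.symbols('x')
n = sp.symbols('n', integer=True, positive=True)
# Fourier sine coefficient of f(x) = -x/2 on (-pi, pi)
b_n = sp.integrate(-x/2 * sp.sin(n*x), (x, -sp.pi, sp.pi)) / sp.pi
print(sp.simplify(b_n))  # (-1)**n/n (possibly displayed as cos(pi*n)/n)
```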
Static Optimization problem assumptions
Solving with the help of Lagrange multipliers and introducing some slack variables $(\epsilon_i)$ to reduce the inequalities to equalities, we have, with $f(x,y) = -2(x+y)^2+5x-y$, $$ L(x,y,\lambda,\epsilon) = f(x,y)+\lambda_1(x-\beta+\epsilon_1^2)+\lambda_2(-x+2y-3-\epsilon_2^2)+\lambda_3(y-\epsilon_3^2) $$ The stationary conditions are $$ L_x = 5+\lambda_1-\lambda_2-4(x+y)=0\\ L_y = -1+2\lambda_2+\lambda_3-4(x+y) = 0\\ L_{\lambda_1} = -\beta+x+\epsilon_1^2 = 0\\ L_{\lambda_2} = 2y-3-x-\epsilon_2^2 = 0\\ L_{\lambda_3} = y-\epsilon_3^2 = 0\\ L_{\epsilon_1} = \lambda_1\epsilon_1 = 0\\ L_{\epsilon_2} = \lambda_2\epsilon_2 = 0\\ L_{\epsilon_3} = \lambda_3\epsilon_3 = 0 $$ Solving those equations, we have the possible stationary points, as well as the $f(x,y)$ value, according to the table $$ \begin{array}{ccccccccc} x & y & \lambda_1 & \lambda_2 &\lambda_3 &\epsilon_1 &\epsilon_2 &\epsilon_3 & f\\ \beta & \frac{\beta +3}{2} & 9 \beta +\frac{9}{2} & 3 \beta +\frac{7}{2} & 0 & 0 & 0 & -\frac{\sqrt{\beta +3}}{\sqrt{2}} & -\frac{3}{2} (3 \beta (\beta +1)+4) \\ \beta & \frac{\beta +3}{2} & 9 \beta +\frac{9}{2} & 3 \beta +\frac{7}{2} & 0 & 0 & 0 & \frac{\sqrt{\beta +3}}{\sqrt{2}} & -\frac{3}{2} (3 \beta (\beta +1)+4) \\ \beta & 0 & 4 \beta -5 & 0 & 4 \beta +1 & 0 & -\sqrt{-\beta -3} & 0 & (5-2 \beta ) \beta \\ \beta & 0 & 4 \beta -5 & 0 & 4 \beta +1 & 0 & \sqrt{-\beta -3} & 0 & (5-2 \beta ) \beta \\ \beta & -\beta -\frac{1}{4} & -6 & 0 & 0 & 0 & -\sqrt{-3 \beta -\frac{7}{2}} & -\frac{1}{2} \sqrt{-4 \beta -1} & 6 \beta +\frac{1}{8} \\ \beta & -\beta -\frac{1}{4} & -6 & 0 & 0 & 0 & -\sqrt{-3 \beta -\frac{7}{2}} & \frac{1}{2} \sqrt{-4 \beta -1} & 6 \beta +\frac{1}{8} \\ \beta & -\beta -\frac{1}{4} & -6 & 0 & 0 & 0 & \sqrt{-3 \beta -\frac{7}{2}} & -\frac{1}{2} \sqrt{-4 \beta -1} & 6 \beta +\frac{1}{8} \\ \beta & -\beta -\frac{1}{4} & -6 & 0 & 0 & 0 & \sqrt{-3 \beta -\frac{7}{2}} & \frac{1}{2} \sqrt{-4 \beta -1} & 6 \beta +\frac{1}{8} \\ -3 & 0 & 0 & 17 & -45 & -\sqrt{\beta +3} & 0 & 0 & -33 \\ -3 & 0 & 0 & 17 & -45 & \sqrt{\beta +3} & 0 & 0 & -33 \\ -\frac{1}{2} & \frac{5}{4} & 0 & 2 & 0 & -\sqrt{\beta +\frac{1}{2}} & 0 & -\frac{\sqrt{5}}{2} & -\frac{39}{8} \\ -\frac{1}{2} & \frac{5}{4} & 0 & 2 & 0 & -\sqrt{\beta +\frac{1}{2}} & 0 & \frac{\sqrt{5}}{2} & -\frac{39}{8} \\ -\frac{1}{2} & \frac{5}{4} & 0 & 2 & 0 & \sqrt{\beta +\frac{1}{2}} & 0 & -\frac{\sqrt{5}}{2} & -\frac{39}{8} \\ -\frac{1}{2} & \frac{5}{4} & 0 & 2 & 0 & \sqrt{\beta +\frac{1}{2}} & 0 & \frac{\sqrt{5}}{2} & -\frac{39}{8} \\ \end{array} $$ NOTES: 1. Some values are duplicated due to the adoption of $\epsilon_i$ squared. 2. Solutions with at least one $\epsilon_i = 0$ are located at the boundary; as we can observe, all solutions are at the feasible region boundary. 3. Once the $\beta$ value is fixed, the maximum and minimum can be chosen.
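A sympy spot-check of one row of the table; the parameter value $\beta=-1$ is an arbitrary choice for illustration:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = -2*(x + y)**2 + 5*x - y

beta = -1  # arbitrary illustrative parameter value
# first row of the table: (x, y) = (beta, (beta + 3)/2)
val = f.subs({x: beta, y: sp.Rational(beta + 3, 2)})
print(val, sp.Rational(-3, 2)*(3*beta*(beta + 1) + 4))  # both -6
```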
How many injective relations exist between $A$ and $B$?
It should be clear that if $|A|> |B|$, then by the pigeonhole principle there is no injective mapping. Suppose $n=|A| = |B|$; then we see that the class of injective mappings is the same as the class of bijective mappings, which is just the permutations on $n$ elements. Hence we have $n!$ injective maps. Lastly, consider the case $n=|A|<|B|=m$. We have $n$ balls and $m$ bins. So we need to first select the $n$ bins from $m$ bins to put our balls in, for which we have $\binom{m}{n}$ choices. Once we fix our bins, then there are exactly $n!$ ways to put the balls into the bins, just like in the case when $|A|=|B|$. Hence we have exactly $\binom{m}{n}n!$ injective mappings.
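A brute-force Python check of the $\binom{m}{n}n!$ count for one small case (the values $n=3$, $m=5$ are arbitrary):

```python
from itertools import product
from math import comb, factorial

def count_injective(n, m):
    # enumerate all maps from an n-set to an m-set, keep the injective ones
    return sum(len(set(f)) == n for f in product(range(m), repeat=n))

n, m = 3, 5
print(count_injective(n, m), comb(m, n) * factorial(n))  # both 60
```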
To show that $d(A,B) > 0$ and there exist points $a \in A$ and $b \in B$ such that $d(A,B) = d(a,b)$.
This is a good approach, but it is incomplete in an important way. You must know that the map $(a,b) \mapsto d(a,b)$ is continuous for the argument to work. Put differently, it is not quite clear where "$S$ is closed and bounded" comes from. Separately: I would also add at the very end that $d(a,b)>0$ as $A$ and $B$ are disjoint. (This is where you need disjointness; where you mention it is in fact not relevant.)
Separability of a group and its dual
Let $G$ be a locally compact abelian topological group, and suppose $G$ has a countable topological basis $(U_n)_{n \in \mathbb{N}}$. We show $\hat{G}$ has a countable topological basis. For every finite subset $I$ of $\mathbb{N}$, let $O_I=\cup_{i \in I}U_i$. We define $B:=\{\bar{O_I} \mid \bar{O_I}$ is compact $\}$. $B$ is countable, because its cardinality is at most the cardinality of the set of finite subsets of $\mathbb{N}$, which is countable. $U(1)$ has a countable topological basis $(V_n)_{n \in\mathbb{N}}$. Let $O(K,V)=\{ \chi \in \hat{G} \mid \chi(K) \subset V\}$, with $K$ compact in $G$ and $V$ open in $U(1)$. $O(K,V)$ is open in the compact-open topology on $\hat{G}$. Let $B'=\{O(K,V_n)\mid K \in B, n \in \mathbb{N}\}$. $B'$ is countable and is a topological basis of $\hat{G}$. If $\hat{G}$ has a countable topological basis, so does $\hat{\hat{G}}$. But $\hat{\hat{G}}=G$ (Pontryagin duality), so $G$ has a countable topological basis.
Confusion about the Yoneda lemma
Here's one possible answer to this question. Let's take the viewpoint that functors are representations of categories. First, why is this sensible? Well, recall that categories are generalizations of monoids (and consequently groups as well), since a one object category is the same thing as a monoid. If $M$ is a monoid, then we can define a category, $C$, with one object, $*$, hom set $C(*,*)=M$, and unit and composition given by the unit and multiplication in $M$. Conversely, given a one object category $C$, $C(*,*)$ is a monoid with composition as multiplication, and these constructions are inverse to each other. From now on, if $M$ is a monoid, or $G$ is a group, I'll write $BM$ or $BG$ for the corresponding one object category.

Now, what about functors? Well, what are functors $[BG,k\newcommand\Vect{\text{-}\mathbf{Vect}}\Vect]$? Well, we need to pick a vector space $V$ to send $*$ to, and we need to pick a monoid homomorphism $G\to \newcommand\End{\operatorname{End}}\End V$. Since $G$ is a group, this is equivalent to a group homomorphism $G\to \operatorname{GL}(V)$. In other words, functors from $BG$ to $k\Vect$ are exactly the same as linear group representations, and you can check that natural transformations of functors correspond exactly to the $G$-equivariant linear maps. Similarly, when we replace $k\Vect$ with $\newcommand\Ab{\mathbf{Ab}}\Ab$, or $\newcommand\Set{\mathbf{Set}}\Set$, we get $G$-modules and $G$-sets respectively. Specifically, these are all left $G$-actions, since a functor $F:BG\to \Set$ must preserve composition, so $F(gh)=F(g)F(h)$, and we define $g\cdot x$ by $F(g)(x)$. Thus $(gh)\cdot x = g\cdot (h\cdot x)$. A contravariant functor $\newcommand\op{\text{op}}BG^\op\to \Set$ gives a right $G$-action, since now $F(gh)=F(h)F(g)$, so if we define $x\cdot g = F(g)(x)$, then we have $$x\cdot (gh) =F(gh)(x) = F(h)F(g)x = F(h)(x\cdot g) = (x\cdot g)\cdot h.$$ Thus we should think of covariant functors $[C,\Set]$ as left $C$-actions in $\Set$, and we should think of contravariant functors $[C^\op,\Set]$ as right $C$-actions in $\Set$.

Yoneda Lemma in Context

Representable presheaves now correspond to free objects in a single variable in the following sense. The Yoneda lemma is that we have a natural isomorphism $$ [C^\op,\Set](C(-,A),F)\simeq F(A)\simeq \Set(*,F(A)). $$ In other words, $C(-,A)$ looks a lot like the left adjoint to the "forgetful" functor that sends a presheaf $F$ to its evaluation at $A$, $F(A)$, but evaluated on the singleton set $*$. In fact, we can turn $C(-,A)$ into a full left adjoint by noting that $$\Set(S,F(A)) \simeq \prod_{s\in S} F(A) \simeq \prod_{s\in S}[C^\op,\Set](C(-,A),F) \simeq [C^\op,\Set](\coprod_{s\in S} C(-,A), F),$$ and $\coprod_{s\in S} C(-,A)\simeq S\times C(-,A)$. Thus one way of stating the Yoneda lemma is that $S\mapsto S\times C(-,A)$ is left adjoint to the evaluation at $A$ functor (in the sense that the two statements are equivalent via a short proof). Incidentally, there is also a right adjoint to the evaluation at $A$ functor, see here for the argument.

Relating this back to more familiar notions

First thing to notice in this viewpoint is that we now have notions of "free on an object" rather than just "free." I.e., I tend to think of $C(-,A)$ as being the free presheaf in one variable on $A$ (this is not standard terminology, just how I think of it). Now we should be careful: a free object isn't just an object, it's an object and a basis.
In this case, our basis (the element that freely generates the presheaf) is the identity element $1_A$. Thinking about it this way, the proof of the Yoneda lemma should hopefully be more intuitive. After all, the proof of the Yoneda lemma is the following: $C(-,A)$ is generated by $1_A$, since $f^*1_A=f$ for any $f\in C(B,A)$, so natural transformations from $C(-,A)$ to $F$ are uniquely determined by where they send $1_A$. (Analogous to saying $1_A$ spans $C(-,A)$.) Moreover, any choice $\alpha\in F(A)$ of where to send $1_A$ is valid, since we can define a natural transformation by "extending linearly": $f=f^*1_A \mapsto f^*\alpha$ (this is analogous to saying $1_A$ is linearly independent, or forms a basis). The covariant version of the Yoneda lemma is the exact same idea, except that we are now working with left representations of our category.

Examples of the Yoneda lemma in more familiar contexts

Consider the one object category $BG$; then the Yoneda lemma says that the right regular representation of $G$ is the free right $G$-set in one variable (with the basis element being the identity, $1_G$). (The free one in $n$ variables is the disjoint union of $n$ copies of the right regular representation.) The embedding statement is now that $G$ can be embedded into $\operatorname{Sym}(G)$ via $g\mapsto -\cdot g$. This also works in enriched contexts. A ring is precisely a one object category enriched in abelian groups, and the Yoneda lemma in this context says that the right action of $R$ on itself (often denoted $R_R$) is the free right $R$-module in one variable, with the basis being the unit element $1_R$. (The free one in $n$ variables is now the direct sum of $n$ copies of $R_R$.) The embedding statement here is that $R$ can be embedded into the endomorphism ring of its underlying abelian group via $r\mapsto (-\cdot r)$.
Behaviour of the function $\ln(1+ x^2)$
You stumbled upon a classic counterexample to the "theorem" that $f\colon \mathbb{R} \to \mathbb{R}$ has a horizontal asymptote if and only if $f'(x) \to 0$ as $x\to \infty$. Now, your function $f(x) = \log(1+x^2)$ is a counterexample to the "$\Leftarrow$" implication, since $\lim_{x\to \infty} f(x) = + \infty$. This can be shown by many means; but let's try with the definition, which is: for all $\varepsilon > 0$ there exists a $\delta > 0$ such that $\left| \log(1+x^2) \right| > \varepsilon$ if $x > \delta$. Now, $\log(1+x^2) > \varepsilon$ means that $x^2 > e^\varepsilon -1$; so we only have to take $\delta = \sqrt{e^\varepsilon -1}$ to see that if $x > \delta = \sqrt{e^\varepsilon -1}$, then $\log(1+x^2)> \varepsilon$. What about the other direction, "$\Rightarrow$"? Well, that is an actual theorem and we can prove it: to have a horizontal asymptote means to have a finite limit at infinity. Let's now remember that the tangent to the graph of a function is the line $y = f(x_0) + f'(x_0)(x-x_0)$; if $y = c$ is a horizontal asymptote for $f$, then it must be tangent to the graph of $f(x)$ "at infinity". So now $c = c + \lim_{x_0 \to \infty} f'(x_0)(x-x_0)$, from which we conclude that $\lim_{x_0 \to \infty} f'(x_0) = 0$. Edit: of course, for all this to be true we must assume that $\lim_{x \to \infty} f'(x)$ exists; otherwise we can't prove anything, as a comment points out.
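A quick numeric illustration in Python, using $f'(x)=\frac{2x}{1+x^2}$:

```python
import math

f = lambda x: math.log(1 + x**2)
fprime = lambda x: 2*x / (1 + x**2)
for x in (10.0, 1e3, 1e6):
    print(x, f(x), fprime(x))  # f keeps growing while f' tends to 0
```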
Determination of Invertibility
There is no straightforward, always-working, follow-these-steps-to-get-what-you-need answer to your question. In general, proving that a function is invertible can be a very hard thing to do. However, in your case, it's simple. Since $f$ is independent of $x_2$, it is not invertible (for example, $f(0,0,0,0) = f(0,1,0,0)$). The same is true for all its derivatives.
Continuous function $f$
If we assume $f$ is discontinuous at some point, let's say $m \in [a,b]$, then $\lim_{x\to m}f(x) \neq f(m)$ by definition. Notice that if the left-hand limit and right-hand limit at $m$ are equal, then $f(m)$ cannot be larger or smaller than the limit, as that would break monotonicity. Therefore, intuitively, the left-hand limit must be strictly less than the right-hand limit, $\lim_{x\to m^-}f(x) < \lim_{x\to m^+}f(x)$, and $\lim_{x\to m^-}f(x) \leq f(m) \leq \lim_{x\to m^+}f(x)$. Then if there's a value $v \in [c,d]$ such that $\lim_{x\to m^-}f(x) < v < \lim_{x\to m^+}f(x)$ and $v\neq f(m)$, there is no point $z\in [a,b]$ with $f(z)=v$, which contradicts the assumption that $f$ is surjective. Therefore, $f$ must be continuous.
Find dominant eigenvector
$x_0$ is close to the dominant eigenspace. $A^4x_0$ is even closer to the dominant eigenspace. Choose an entry (say, the first entry), and calculate the ratio between that entry in $A^5x_0$ and $A^4x_0$.
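A minimal numpy sketch of this power-iteration step; the matrix and starting vector are arbitrary stand-ins, since the original problem's data isn't shown:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])  # sample matrix: eigenvalues 3 and 1
x0 = np.array([1.0, 0.0])               # initial guess

x4 = np.linalg.matrix_power(A, 4) @ x0
x5 = np.linalg.matrix_power(A, 5) @ x0
print(x5[0] / x4[0])             # ratio of first entries ~ dominant eigenvalue 3
print(x5 / np.linalg.norm(x5))   # ~ dominant eigenvector (1,1)/sqrt(2)
```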
Find the sum of $\sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n^{p}}$
We will subtract the terms in the series which are even, and add the terms that are odd. $$\sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n^p} = -\sum_{n=1}^{\infty} \frac{1}{(2n)^p} + \sum_{n=1}^{\infty} \frac{1}{(2n-1)^p}.$$ Now compare with $\zeta(p)=\sum_{n=1}^{\infty}\frac{1}{n^p}$: the even terms contribute $\sum_{n=1}^{\infty}\frac{1}{(2n)^p} = 2^{-p}\zeta(p)$, so the odd terms contribute $\zeta(p)-2^{-p}\zeta(p)$. Can you find the sum from here?
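Carrying the hint to its end gives $(1-2^{1-p})\zeta(p)$ (the Dirichlet eta function); a numeric check with mpmath, with $p=3$ chosen arbitrarily:

```python
from mpmath import mp, zeta, altzeta

mp.dps = 30
p = 3
print(altzeta(p))                  # sum_{n>=1} (-1)^(n-1)/n^p
print((1 - 2**(1 - p)) * zeta(p))  # matches
```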
Finding the mode of a distribution
The question you refer to in the link seems to have been the subject of some bickering. Anyway, I gather that you're asking how to find the mode of a distribution in general, not just for the binomial distribution. The mode is the outcome(s) which arises most frequently. This is easy to understand for a discrete random variable. If $X$ is a random variable which takes values in $\Omega$, then the *mode* is the value $x \in \Omega$ for which $Pr[X= x]$ is maximised, in other words the point $x$ at which the p.m.f. $p(x)$ is a maximum. There may be many such points, in which case there is more than one mode. These points are not necessarily always next to each other. With a continuous random variable, the *mode* is the point(s) $x$ at which the density function $f(x)$ is a maximum. This can usually be found by differentiating the density function to find the points where the derivative is zero and then, importantly, also checking whether such points are actually maxima.
Where can I find difficult trigonometric formulas?
Once you get past the introductory right angle approach to trigonometry, and identities such as the Pythagorean and complementary angle identities, you should start seeing the "trig" triangles and arbitrary $(a,b,c)$ triangles as all similar. After some practice, we only consider the "trig" triangles when doing proofs. Later on, we take the unit circle approach, so that we can analyse and prove identities for angles larger than a quarter revolution, as well as negative angles. To actually look up proofs, it may be productive to learn their names (e.g. Pythagorean identity, double angle formula, sum of angles formula, half angle formula, etc.), then just Google it.
$f(0)=0$ and $f'(x)=f(x)^2$
Maybe I'm looking at this wrong, but it seems like we could easily do this with a separable ODE. Let $y = f(x)$. Then, $\frac{dy}{dx} = y^2$. Wherever $y \neq 0$ we may separate: $\frac{dy}{y^2} = dx$, $\frac{-1}{y} = x + C$, $\frac{-1}{x + C} = y$. However, upon using the initial condition, we find a problem: $\frac{-1}{C} = 0$, i.e. $-1 = 0$, which is obviously not true. Therefore, no solution of this separated form can satisfy the initial condition, and the only possible solution to the ODE is the trivial one (i.e. $f(x) = 0$ is the only solution). And this is exactly what you want. You mentioned you are in an Analysis class, so this may not suffice, but it seems like a decent way to go about it using methods from ODEs.
Discrete Probability Lottery Ticket Question
The number of ways to choose $k$ balls from $n$ is $${n\choose k}={n!\over k!(n-k)!}$$ If none of your numbers is chosen, all $k$ choices must come from the other $n-k$ balls, and there are ${n-k\choose k}$ ways this can happen. So, the answer, along the lines you suggested is $$1-{{n-k\choose k}\over{n\choose k}}=1-{(n-k)(n-k-1)\cdots(n-2k+1)\over n(n-1)\cdots(n-k+1)}$$
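In Python, with `math.comb`; the parameters $n=49$, $k=6$ are an arbitrary example (a 6-of-49 lottery):

```python
from math import comb

def p_match_at_least_one(n, k):
    # 1 - P(none of your k numbers is drawn)
    return 1 - comb(n - k, k) / comb(n, k)

print(p_match_at_least_one(49, 6))  # ~ 0.564
```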
Examples of modules satisfying $(0:_{M/N}r):=\{m+N\in M/N:r(m+N)\in N\}=\{m\in M:rm\in N\}=(N:_{M}r)$
What you know is $m+N\in (0:_{M/N} r)\iff rm\in N\iff m\in(N:_M r)$, which does not mean the same thing as the equality you wrote. It would be correct to write $(N:_M r)/N=(0:_{M/N} r)$, though. As far as I can see that holds for all rings and modules. As for a nontrivial case, why not keep things simple and just try $R=\mathbb Z$ and $M=\mathbb Z/4\mathbb Z$, $N=2\mathbb Z/4\mathbb Z$. With $r=2$ you have, for example, $$ (N:_M r)=M $$ $$ (0:_{M/N} r)= M/N $$ and $m=1 +4\mathbb Z$ is nonzero such that $rm\neq 0+4\mathbb Z$.
Optimize $f(x,y) = \sqrt{3} x + y$ over the upper half of a circle.
How about using angles, $(\cos\theta, \sin\theta) = (x,y)$? $$g(\theta) = \sqrt3\cos\theta + \sin\theta$$ $$g'(\theta) = -\sqrt3\sin\theta + \cos\theta$$ For the maximum, solve $g'(\theta)=0$: $$\sqrt3\sin\theta = \cos\theta$$ $$\tan\theta = {1 \over \sqrt3} = \tan({\pi \over 6})$$ $$\max f(x,y) = g({\pi \over 6}) = \sqrt3\times{\sqrt3 \over 2} + {1\over2} = 2$$ $$\min f(x,y) = f(-1,0) = -\sqrt3$$ Another way, without doing differentiation: $$g(\theta)=\sqrt3\cos\theta + \sin\theta = 2 \cos(\theta-{\pi\over6})$$ $$\max f(x,y) = \max g(\theta) = 2$$
Why is this trig function never undefined
For this to be undefined, the denominator has to be zero. For the denominator to be $0$ we must have $1=-0.5\cos(x)$, which gives $\cos(x)=-2$. But $|\cos(x)| \leq 1$ for all $x$, and hence the denominator can never be $0$.
Is $\mathbb{R}$ a projective $\mathbb{Q}$-module?
The answer is more general. If $\mathbb{F}$ is a field, then the category of left (resp. right) modules over $\mathbb{F}$ has global dimension $0$, which means that every object in this category is projective. Note that in the case of a field, modules over it are simply vector spaces over it. Now in general, if $R$ is any ring, then the projective (right or left) modules over $R$ are exactly the direct summands of free modules over $R$: this means that an $R$-module $M$ is projective if and only if there exists some index-set $I$, as large as it may be, and another $R$-module $N$ such that $\bigoplus_{i\in I}R\cong N\oplus M$, where $$\bigoplus_{i\in I}R:=\{(r_i)_{i\in I}: r_i\in R,\text{ only finitely many of the }r_i\text{ are non-zero}\} $$ Now $\mathbb{R}$ is obviously a module over $\mathbb{Q}$ via the usual number multiplication, i.e. $\mathbb{R}$ is a vector space over $\mathbb{Q}$. To check whether it is a projective module, one should wonder if we can find an index set $I$ and another vector space over $\mathbb{Q}$ such that $V\oplus\mathbb{R}\cong\bigoplus_{i\in I}\mathbb{Q}$. So take a basis of $\mathbb{R}$ as a vector space over $\mathbb{Q}$ (by linear algebra, every vector space has a basis). Use this as an index set and take $V=0$ and you will see that this gives you the desired isomorphism. (yes, $\mathbb{R}$ is a projective $\mathbb{Q}$-module).
For what exactly we need that the submartingale is a finite sequence in theorem 6.7.3 of Ash's book?
Lots of typos in the question---just to point out a couple: "...$\{\omega :T_n(\omega )\leqslant j\}\in \mathscr{F}_n...$" should be "...$\{\omega :T_n(\omega )\leqslant j\}\in \mathscr{F}_j$...". "...$X_{T}(\omega ):=T(\omega )$..." should be "...$X_{T}(\omega ):=X_{T(\omega )}(\omega)$...". Now to answer your question: ...where exactly was the assumption that $m < \infty$ (in $\{X_n:n=1,\ldots ,m\}$) used in the argument? The answer is nowhere, really. Note that the argument only involves two adjacent elements $Y_{n+1} = X_{T_{n+1}}$ and $Y_{n} = X_{T_{n}}$. This would remain the same regardless of whether $m$ is finite or infinite. Rather, the key to the argument is the assumption that the sequence of stopping times $\{ T_n \}$ is bounded. For example, the assumption you mention that $T_n < K_n\;\; a.s.$ would suffice (where the sequence $K_n$ may diverge to $\infty$). Under the assumption that $T_n < K_n\;\; a.s.$, the same argument goes through verbatim for a submartingale $\{ X_n \}_{n \geq 1}$: Proof: Integrability of $Y_n$ holds exactly the same as before: $$ \int |Y_n| dP = \sum_{j = 1}^{K_n} \int_{\{T_n = j\}} |X_{j}| dP < \infty. $$ Let $A \in \mathscr{F}_{T_n}$, then $A\stackrel{(1)}{=}\bigsqcup\limits_{j=1}^{K_n } A\cap \{T_n=j\}$. Therefore it suffices to consider $$ D_j = A \cap \{T_n=j\} $$ for some $1 \leq j \leq K_n$. So $Y_n \cdot 1_{D_j} = X_{j} \cdot 1_{D_j}$. On the other hand, $$ \int_{ D_j } Y_{n+1} dP \stackrel{(2)}{=} \sum_{i = j }^{K_{n+1}} \int_{ D_j \cap \{ T_{n+1} = i \} } Y_{n+1} dP = \sum_{i = j }^{K_{n+1}} \int_{ D_j \cap \{ T_{n+1} = i \} } X_i dP \geq \int_{ D_j } X_j dP = \int_{ D_j } Y_{n} dP. $$ End of Proof. With the additional assumption that $\{ X_n \}_{n \geq 1}$ is, say, dominated by a $Z \in L^1_+$, i.e. $|X_n| \leq Z \; a.s.$, each element of the stopping time sequence $\{ T_n \}$ can be allowed to be unbounded: $\int |Y_n| dP = \sum\limits_{j = 1}^{\infty} \int_{\{T_n = j\}} |X_j| dP \leq \int Z dP < \infty$. "$\stackrel{(1)}{=}$" becomes $A = \bigsqcup\limits_{j=1}^{\infty } A\cap \{T_n=j\}$. Therefore it still suffices to consider $$ D_j = A \cap \{T_n=j\}. $$ "$\stackrel{(2)}{=} $" becomes $\int\limits_{ D_j } Y_{n+1} dP = \sum\limits_{i = j }^{\infty} \int\limits_{ D_j \cap \{ T_{n+1} = i \} } Y_{n+1} dP$. Now, for each finite $m$, $$ \sum\limits_{i = j }^{m} \int\limits_{ D_j \cap \{ T_{n+1} = i \} } Y_{n+1} dP \geq \sum\limits_{i = j }^{m} \int\limits_{ D_j \cap \{ T_{n+1} = i \} } Y_{n} dP. $$ So the submartingale property follows from the Dominated Convergence Theorem. One can probably construct counterexamples where the $L^1$-dominated condition does not hold, each $T_n$ is unbounded, and $\{ X_{T_n} \}$ is no longer a submartingale.
Solutions of $u_{xx} + 2 \mathrm i u_{xy} + u_{yy} = 0$
There appears to be a typo in the equation Friedman treats. Whenever $f(z)$ is analytic, the function $u(z)=(1-|z|^2)f(z)$ does satisfy $$ u_{xx}+2{\rm i}u_{xy} - u_{yy}=0.$$ The symbol of this equation is $$ a(\xi)=\xi_1^2+2{\rm i}\xi_1\xi_2-\xi_2^2 = (\xi_1+{\rm i}\xi_2)^2, $$ which does not vanish for any real vector $\xi=(\xi_1,\xi_2)$ except for $(0,0)$. This makes the PDE elliptic in the sense that I believe Friedman's book uses (which I take from a 1955 paper of Nirenberg titled Remarks on strongly elliptic partial differential equations). But it is not "strongly elliptic," since it is false that $$\Re a(\xi)=\xi_1^2-\xi_2^2 \ge c(\xi_1^2+\xi_2^2)$$ for some positive constant $c$.
Does this integral have any closed form? $\int\frac{1}{x+\sin(x+1)}\mathop{\mathrm dx}$
Does this integral have any closed form? No, at least not an elementary one, according to Liouville's theorem on elementary antiderivatives and the Risch algorithm.
$f(x)=x+(1-x)x^2+(1-x)(1-x^2)x^3+....(1-x)(1-x^2)....(1-x^{n-1})x^n, (n \ge 4)$
Perhaps this will help: $f(0)=0$, $f(1) =1$, $f'(0)=1$. Now we do elimination: the 1st can't be right, since there we get $f(0)=1$. The 2nd can't be right, since there we get $f(1)=0$. The 4th can't be right, since there we get $f'(0) = f(0)$. Or, you can try answering the question for $n=4$ (or less).
Prove that $a_{n+2}a_n-a_{n+1}^2=2^n$ $\forall n \in \mathbb{N}$
We calculate the initial terms of the sequence and obtain $1, 3, 10, 34, 116$. This leads to the guess that $a_{n+2} = 4a_{n+1} - 2 a_n$, which has solution $a_n = \frac { (2+\sqrt{2})^{n+1} + (2-\sqrt{2})^{n+1}} {4}$. Step 1: Verify by induction that $a_n$ has the closed form above. Direct Step 2: Conclude that $a_{n+2}a_n -a_{n+1}^2 = 2^n$ (we're now done). Alternative Step 2: We can also further show that $a_{n+2} = 4 a_{n+1} - 2a_n$, to conclude that each of the $a_n$ are integers, and hence the floor function isn't necessary. Thus $a_{n+2}a_n - a_{n+1}^2 = 2^n$.
How can the number $\left\langle \matrix {3&3\\3&3}\right\rangle $ be described?
You can't really visualize planar arrays easily in terms of linear arrays other than to understand the rules of reduction, just as a multivariable Ackermann number like A(9,9,9,9) can't be easily expressed as a binary Ackermann number - the numbers just get too large. It's true that a nonlinear planar array with first entry $3$ will eventually reduce to a linear array of the form $<3,3,\ldots,3>$, but the number of entries will usually be so large as to be indistinguishable from the number defined by the planar array itself. We have $ \left< \begin{array}{cc} 3 & p \\ 2 \\ \end{array} \right> = <3,3,\ldots,3>$ with $p$ entries, and then $ \left< \begin{array}{c@{}c@{}c} 3 & p+1 & 2 \\ 2 \\ \end{array} \right> = \left< \begin{array}{c@{}c@{}c} 3 & \left< \begin{array}{ccc} 3 & p & 2 \\ 2 \\ \end{array} \right> & \\ 2 \\ \end{array} \right> $ so basically $ \left< \begin{array}{c@{}c@{}c} 3 & p & 2 \\ 2 \\ \end{array} \right>$ iterates over the function $ \left< \begin{array}{c@{}c@{}c} 3 & p \\ 2 \\ \end{array} \right>$. Similarly, $ \left< \begin{array}{c@{}c@{}c} 3 & p & 3 \\ 2 \\ \end{array} \right>$ iterates over the function $ \left< \begin{array}{c@{}c@{}c} 3 & p & 2\\ 2 \\ \end{array} \right>$ and so on. Then, $ \left< \begin{array}{c@{}c@{}c@{}c} 3 & p & 1 & 2 \\ 2 \\ \end{array} \right>$ iterates over the function $ \left< \begin{array}{c@{}c@{}c@{}c} 3 & 3 & p \\ 2 \\ \end{array} \right>$, and so on for more and more entries in the first row. This is the same recursive construction as linear arrays, except it starts from the base function $ \left< \begin{array}{cc} 3 & p \\ 2 \\ \end{array} \right> = <3,3,\ldots,3>$, rather than $<b,p> = b^p$. Next we have $ \left< \begin{array}{cc} 3 & p \\ 3 \\ \end{array} \right> = \left< \begin{array}{cccc} 3 & 3 & \ldots & 3 \\ 2 \\ \end{array} \right> $ which leads to a third linear array hierarchy starting from the base function $ \left< \begin{array}{cc} 3 & p \\ 3 \\ \end{array} \right>$. So that should give you a general idea about arrays of the form $\left< \begin{array}{ccc} b & p & \ldots \\ n \\ \end{array} \right> $; they form an infinite sequence of linear array hierarchies, each one diagonalizing over the previous. Next is $\left< \begin{array}{cc} 3 & p+1 \\ 1 & 2 \\ \end{array} \right> = \left< \begin{array}{cccc} 3 & 3 & \ldots & 3 \\ \left< \begin{array}{cc} 3 & p \\ 1 & 2 \\ \end{array} \right> \\ \end{array} \right> $ so $\left< \begin{array}{cc} 3 & p \\ 1 & 2 \\ \end{array} \right>$ iterates over the previously described sequence of linear array hierarchies. This leads to another sequence of hierarchies of the form $\left< \begin{array}{ccc} b & p & \ldots \\ n & 2 \\ \end{array} \right>$, which $\left< \begin{array}{cc} 3 & p \\ 1 & 3 \\ \end{array} \right>$ iterates over, and so on. Then $\left< \begin{array}{ccc} 3 & p \\ 1 & 1 & 2 \\ \end{array} \right>$ iterates over $\left< \begin{array}{ccc} 3 & 3 & \ldots \\ 1 & n \\ \end{array} \right>$, and similarly for further entries in the second row.
Then $ \left< \begin{array}{cc} 3 & p \\ 1 \\ 2\\ \end{array} \right> = \left< \begin{array}{cccc} 3 & 3 & \ldots & 3 \\ 3 & 3 & \ldots & 3 \\ \end{array} \right>$ with $p$ entries in each row, so $ \left< \begin{array}{cc} 3 & p \\ 1 \\ 2\\ \end{array} \right>$ diagonalizes over the first two rows. I hope this gives a good idea for how the rules work. You can see that there's no way to describe any but the simplest planar arrays in terms of linear arrays without lots and lots of recursion.
Variance associated with sampling one outcome of a multinomial distribution
If you are only focusing on a particular outcome of a multinomial distribution, then the scenario reduces to the simple binomial setting, where a "success" is "getting the $k$th output" and a "failure" is "getting any other output." Specifically, $N \sim \text{Binomial}(n, p_k)$ where $p_k$ is the true unknown probability of the $k$th output, and $n$ is the number of trials. $N$ has variance $np_k(1-p_k)$, so your estimate $\hat{p}_k = N/n$ has variance $\frac{p_k(1-p_k)}{n}$.
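A short numpy simulation confirming the variance formula (all parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p_k, reps = 500, 0.3, 20000
N = rng.binomial(n, p_k, size=reps)  # count of the k-th outcome across n trials
p_hat = N / n
print(p_hat.var(), p_k * (1 - p_k) / n)  # both ~ 4.2e-4
```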
Show continuity or uniform continuity of $\phi: (C([0,1];\Bbb R ),||\cdot||_\infty )\to (\Bbb R, |\cdot | )$
$\phi$ will be continuous. Indeed, fix any $u\in C_{[0,1],\mathbb{R}}$ and any $\varepsilon > 0$, and let $\delta > 0$ be such that $\delta(2\lVert u\rVert_\infty + \delta) \leq \varepsilon$. If $v$ is such that $\lVert u - v\rVert_\infty \leq \delta$, then $$\begin{align} \lvert \phi(u)-\phi(v) \rvert &= \left\lvert \int_{[0,1]} u^2-v^2\right\rvert \leq \int_{[0,1]} \left\lvert u^2-v^2\right\rvert \\ &= \int_{[0,1]} \underbrace{\left\lvert u-v\right\rvert}_{\leq \lVert u - v\rVert_\infty} \cdot \underbrace{\left\lvert u+v\right\rvert}_{\leq 2\lVert u\rVert_\infty+\delta} \leq \delta(2\lVert u\rVert_\infty + \delta) \leq \varepsilon \end{align}$$ i.e. $\phi$ is continuous. It is easy to adapt the above to show that $\phi$ will be Lipschitz (and therefore uniformly continuous) on any subset of any $C_{[0,1],[-a,a]}\subseteq C_{[0,1],\mathbb{R}}$ (i.e., for functions bounded by some universal constant). The Lipschitz constant will then be $2a$ (replacing the $2\lVert u\rVert_\infty + \delta$ term).
If $f$ is an integer coefficient polynomial and $n$ is square free, does $f(\frac{1+\sqrt n}{2})=f(\frac{1-\sqrt n}{2})$ imply that $f$ is an integer?
If $f(x) =(x-\frac{1+\sqrt n}{2})(x-\frac{1-\sqrt n}{2}) =x^2-x+\frac{1-n}{4}$, then $f(\frac{1+\sqrt n}{2})=f(\frac{1-\sqrt n}{2}) =0$, and if $n=4m+1$ then $f(x)$ has integer coefficients.
Describe the units of $\mathbb {Z}_4[i]$
Your base ring is $\Bbb Z_4=\{0,1,2,3\}$, with addition and multiplication calculated modulo $4$. Your extension has an $i$, a square root of $-1$. The ring has only sixteen elements, so I think you would find it very useful to write out the full multiplication table. As to your question, you see that $(i)(3i)=3i^2=-3=1$ in $\Bbb Z_4$. So the inverse of $i$ is $3i$. It happens that among those sixteen elements, there are eight units and eight nonunits. Now you go find them.
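A brute-force Python enumeration confirming the eight units (here an element $a+bi$ is encoded as the pair `(a, b)`):

```python
# Z_4[i] = {a + b*i : a, b in Z_4}
elements = [(a, b) for a in range(4) for b in range(4)]

def mul(x, y):
    # (a + bi)(c + di) = (ac - bd) + (ad + bc)i, coefficients mod 4
    a, b = x; c, d = y
    return ((a*c - b*d) % 4, (a*d + b*c) % 4)

units = [x for x in elements if any(mul(x, y) == (1, 0) for y in elements)]
print(len(units), units)  # 8 units
```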
Not Assuming Choice, Nilradical Not Equal to Intersection of Prime Ideals
Sure. For instance, let $A=\mathcal{P}(\mathbb{N})/\mathrm{fin}$ be the Boolean ring of subsets of $\mathbb{N}$ modulo the ideal of finite subsets. So explicitly, an element of $A$ is an equivalence class of subsets of $\mathbb{N}$ where two sets are equivalent if they differ by finitely many elements, addition is symmetric difference, and multiplication is intersection. A prime ideal in $A$ is equivalent to a nonprincipal ultrafilter on $\mathbb{N}$. There are models of ZF in which there are no nonprincipal ultrafilters on $\mathbb{N}$, and so $A$ has no prime ideals. So the intersection of all prime ideals is all of $A$, but not all of $A$ is nilpotent (in fact, the only nilpotent element is $0$).
How does one prove if a multivariate function is constant?
Yes, it does, as long as the function is continuous on a connected domain and the partials exist (let's not get into anything pathological here). And the proof is the exact same as in the one variable case. (If there are two points whose values we want to compare, they lie on the same line. Use the multivariable mean value theorem to show that they must have the same value. This proof is easier if you know directional derivatives and/or believe that you can assume that the partial derivative in the direction of this line is zero because all other basis-partials are zero.)
Time it takes for a car to stop given a ratio of deceleration
Car A never stops moving, but it covers a finite distance that is shorter than the distance car $B$ covers. By the way, car $B$ travels a distance of $120$ meters, not $130$. Car $B$ covers a larger distance while braking, but it stops in a finite time. So the answer really depends on what "stopping first" really means. As for your argument with the professor: If you want to solve only questions which are completely practical and only such that actually happen in reality, then you should really only solve problems from quantum physics and should go nowhere without Schroedinger's equation. After all, when you throw a ball, you never really throw a ball, you just change its wave function such that the mean of its distribution moves at some velocity relative to the earth's center. What I am trying to point out is this: ALL REAL LIFE PROBLEMS ARE APPROXIMATIONS. And you have to first learn how to solve simple approximations (like the approximation that a car's speed decreases by $6$ km/hr per second, which, as far as approximations go, is not really that bad since the force on the car is pretty constant) before you can learn to understand very complex approximations (like quantum theory).
Are there $2^{\aleph_{0} }$ sets of natural numbers such that each two have finite intersection
Yes, your approach works. For $\alpha$ a real number, choose a single sequence of distinct rationals $x_{\alpha}=(x_{\alpha,1},x_{\alpha,2},\dots)$ that converge to $\alpha.$ Then the sets of elements in $x_{\alpha}$ and $x_{\beta}$ can only have finitely many rationals in common when $\alpha\neq \beta$ (two sequences converging to distinct limits can share only finitely many terms). So if $X_{\alpha}=\{x_{\alpha,i}\mid i\in\mathbb N\}$ then we have a family of size $|\mathbb R|=2^{\aleph_0}.$ If, in the case $\alpha$ is rational, we make sure $\alpha\not\in X_{\alpha}$, then we have that, for any $\alpha,$ $X_{\alpha}$ is a discrete bounded subspace of $\mathbb Q$ with $\alpha$ as the only limit point.
Is $ABC - ADC = A(B-D)C$?
Assuming the matrices have compatible dimensions, $$ABC-ADC=(AB-AD)C=(A(B-D))C=A(B-D)C$$ None of the matrices have to be invertible.
Determining the distance between two points on the surface of the earth.
In answer to question 3. Computing distance on an ellipsoid of revolution (oblate or prolate) is addressed in "Algorithms for geodesics". The algorithms given there are available in several different languages using GeographicLib. The methods are essentially exact for $|f|<1/50$ where $f$ is the flattening. (There's a C++ version of the algorithms which works for arbitrary values of $f$.)
Trace map from finite surjective morphism of normal irreducible varieties
As $Y\to X$ is finite, $K(X)\hookrightarrow K(Y)$ is a finite extension and therefore there is a trace map $Tr:K(Y)\to K(X)$, which comes from the trace of multiplication by an element $a\in K(Y)$ acting as a $K(X)$-linear endomorphism of $K(Y)$ viewed as a finite-dimensional $K(X)$-vector space. The goal is to show that this gives rise to a morphism of sheaves $f_*\mathcal{O}_Y\to \mathcal{O}_X$, which we can do by showing that this is true on each affine open subset $\operatorname{Spec} A = U\subset X$. Since normal + irreducible implies integral, we can take $A$ to be a normal integral domain, and then $f^{-1}(U)=\operatorname{Spec} B$ for $B$ a normal integral domain which is a finite (thus integral) $A$-module. As the characteristic polynomial of $b$ as an endomorphism of $K(Y)$ over $K(X)$ is a power of its minimal polynomial (think about $K(X)\subset K(X)(b)\subset K(Y)$) and has the trace as a coefficient, all we need to do is to show that this minimal polynomial lives in $A[t] \subset K(X)[t]$. The following lemma proves this: Lemma: Let $R$ be an integrally closed domain and $F=Frac(R)$. Let $F\subset K$ be a finite field extension. Then for any $\alpha\in K$ integral over $R$, the minimal polynomial $m(x)$ of $\alpha$ is actually in $R[x]$. Proof: As $\alpha$ is integral, there's a monic polynomial $f\in R[x]$ with $f(\alpha)=0$. But by the inclusion $R[x]\subset F[x]$, $f$ is also a polynomial over $F$ which vanishes on $\alpha$. So it's divisible by $m(x)$, and we may write $f=gm$. As all roots of $m$ are roots of $f$, all roots of $m$ are integral over $R$. As the coefficients of $m$ are the elementary symmetric polynomials in these roots, the coefficients of $m$ are again integral over $R$. As $R$ is integrally closed, these coefficients are actually in $R$, and thus we've shown that $m(x)\in R[x]$. $\blacksquare$ As a result of this lemma, we get that the trace of any element $b\in B$ actually lands in $A$ and therefore gives us a morphism $f_*\mathcal{O}_Y\to \mathcal{O}_X$. After averaging by dividing by $\deg K(Y)/K(X)$, we get that the composite $\mathcal{O}_X\to f_*\mathcal{O}_Y \to \mathcal{O}_X$ is the identity, demonstrating a splitting as required in the question.
Trivial normal bundle $NS$ equivalence
We are in $\mathbb{R}^n$ all the way, so the computations are more concrete. To see why $(1) \implies (2)$, we can use the fact that we have a very explicit derivative for $\Phi$: $$\Phi'_p=\begin{pmatrix} \nabla \Phi_1 \\ \nabla\Phi_2 \\ \cdots \\ \nabla \Phi_k \end{pmatrix}. $$ Since we are supposing $S$ is a regular level set, we have that those $\nabla \Phi_i$ are all linearly independent along $S$. They are also all normal to $S$, since $\Phi$ is constant there. This gives a global framing of the normal bundle. To see why $(2) \implies (1)$, you simply use the fact that by assumption there exists a diffeomorphism $\Psi: NS \to S \times \mathbb{R}^k$, and consider $\Phi:=\pi_2 \circ \Psi \circ T,$ where $T: U \to V \subset NS$ is a diffeomorphism of a neighbourhood of $S$ onto a neighbourhood of the zero section of the normal bundle (such a diffeomorphism is given by the tubular neighbourhood theorem).
Calculate the solution of the differential equation
Start with the substitution $$ x(t) = t^r $$ Plug this into the non-homogeneous ODE and see where you can go from there. After this substitution, your ODE becomes the following: $$ r(r-1)t^r + 2rt^r - 6t^r = t^3 $$ Do some algebra, and get the characteristic polynomial. Solve the homogeneous case and non-homogeneous case separately. Use initial conditions to find the constants, and then have a good day! Hope this helps
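For reference, sympy can carry out the whole computation; note the ODE below is reverse-engineered from the characteristic-polynomial display above, so treat it as an assumption about the original problem:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
x = sp.Function('x')
# Euler equation consistent with the display above: t^2 x'' + 2t x' - 6x = t^3
ode = sp.Eq(t**2*x(t).diff(t, 2) + 2*t*x(t).diff(t) - 6*x(t), t**3)
print(sp.dsolve(ode, x(t)))  # x(t) = C1*t**2 + C2/t**3 + t**3/6, up to constant labels
```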
Application of Central Limit Theorem for geometrical random variables
There are two conventions for geometric random variables; one counts trials, the other one counts failures. From the context I think you are using the number of trials; the answer changes a bit if it is the other convention. Anyway, the mean and variance of a Geom(p) distribution under this convention are $1/p$ and $(1-p)/p^2$ respectively. Plugging in $p=1/2$ you have $2$ and $2$ respectively. Thus the distribution of a sum of $k$ iid Geom(0.5) variables has mean $2k$ and variance $2k$. So $P(\sum_{i=1}^k X_i \geq 2k)$ for large $k$ behaves like $P(Z \geq 0)$ where $Z$ is a standard normal random variable. I got $0$ from $\frac{2k-2k}{\sqrt{2k}}$. Thus it converges to $1/2$. In effect CLT is telling us that the distribution eventually looks symmetric about its mean (even though the original Geom(p) distribution is not at all symmetric about its mean).
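A quick numpy simulation of the limit (parameter values arbitrary; note `rng.geometric` uses the number-of-trials convention assumed above):

```python
import numpy as np

rng = np.random.default_rng(1)
k, reps = 500, 4000
# sum of k iid Geom(0.5) variables, repeated `reps` times
sums = rng.geometric(0.5, size=(reps, k)).sum(axis=1)
print((sums >= 2 * k).mean())  # close to 1/2, as the CLT argument predicts
```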
Lipschitz function $f(x)=x \log|x|$
Lipschitz continuity requires a fixed $K$ such that $|f(x)-f(y)|\le K|x-y|$ for all $x$ and $y$ in the interval. In particular, since $y=0\in[-b,b]$, we would need $$|x\ln|x||=|f(x)|=|f(x)-0|=|f(x)-f(0)|\le K|x-0|=K|x|$$ for all $x\not=0$, which is to say, we would need $|\log x|\le K$ for all $0\lt x\le b$. But that's clearly not the case, so the function is not Lipschitz.
My conjecture on almost integers.
Well, we have that $$(\sqrt{x}+\lfloor\sqrt{x}\rfloor)^{n} + (-\sqrt{x}+\lfloor\sqrt{x}\rfloor)^{n} $$ is an integer (by the Binomial Theorem), but $(-\sqrt{x}+\lfloor\sqrt{x}\rfloor)^{n}\to 0$ if $\sqrt{x}$ was not already an integer.
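A small mpmath illustration with $x=2$, so that the base is $1+\sqrt2$ and the conjugate term $(1-\sqrt2)^n$ shrinks geometrically:

```python
from mpmath import mp, mpf, sqrt, floor

mp.dps = 40
r = sqrt(mpf(2)) + floor(sqrt(mpf(2)))  # 1 + sqrt(2)
for n in (5, 10, 20):
    print(n, r**n)  # fractional parts approach 0 or 1: almost integers
```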
Definition of the ring of weakly modular forms over $\mathbb{Z}_{(p)}$
This was going to be a comment but got way too long. I'm not familiar with weakly modular forms, but there are a couple potential motivations that come to mind. Over $\mathbb{C}$ there is a ring of modular forms (of all weights), and this is generated as a $\mathbb{C}$-algebra by forms $E_4, E_6$ of weights 4 and 6 respectively. There is also a cusp form commonly called $\Delta$ of weight 12 that satisfies the relation $E_4^3 - E_6^2 = 1728 \Delta$, or something similar depending on your normalization. This can be found in any reasonable book about modular forms and doesn't require knowledge of algebraic geometry. I recommend Diamond and Shurman. Another motivation behind the definition is the idea of a moduli space. You may be aware that modular forms can have many equivalent definitions. One is as sections of a sheaf over some moduli space (be that a scheme, stack, or something else) of elliptic curves. A detailed explanation of this can be found in Arithmetic Moduli by Katz and Mazur, but this may be impenetrable until you learn more algebraic geometry. The rough idea is that points of your space correspond to elliptic curves. Each elliptic curve $E$ has a discriminant $\Delta(E)$, and this discriminant is invertible since $E$ is smooth (by definition). I recommend taking a look at Silverman's book on elliptic curves if you're not comfortable with this. We can then think of $\Delta$ as a function on our moduli space of elliptic curves, that takes a point $x$ (necessarily associated to some elliptic curve $E_x$) to the value $\Delta(E_x)$ in our base ring/field. If we call our sheaf $\omega$ and pretend it's the structure sheaf (which it is locally), then sections of $\omega$ are just functions on our space. So $\Delta$ is an element of the ring of global sections on our moduli space. Similarly, we can define certain quantities $c_4, c_6$ for every elliptic curve, and view these as functions as well. I think the idea should be that every generalized elliptic curve $E$ (up to iso) is determined by the values $c_4(E), c_6(E), \Delta(E)$, with the caveat that we always know $$ c_4(E)^3 - c_6(E)^2 = 1728 \Delta(E). $$ A good example to keep in mind is that the (coarse) moduli space of elliptic curves over $\mathbb{C}$ is isomorphic to $\mathbb{A}^1_{\mathbb{C}}$ via the $j$-invariant; the moduli space is just $\operatorname{Spec}\mathbb{C}[j]$. In the paper you linked, they consider a moduli space of generalized elliptic curves (a reference for this is the paper by Deligne-Rapoport, but it may not be accessible to you yet), so not all points will be smooth curves. Some points will be singular, and these correspond to where $\Delta$ vanishes. Inverting $\Delta$ corresponds to taking the complement of the singular locus, i.e., only considering the points corresponding to smooth curves. Finally, I would also check out the paper referenced as [8] in the link you gave (P. Deligne, Courbes elliptiques: formulaire d'après J. Tate), as this appears immediately after the definition you copied and will probably be quite helpful.
Bounds on a quadratic form
If $A$ is symmetric, you have not lost anything, since for such a matrix the operator norm is equal to $\sup\{|x^TAx| : \|x\|=1\}$, which is also the largest absolute value of an eigenvalue. For general $A$, you can gain something by writing it as $$A = \frac12(A+A^T)+\frac12(A-A^T)$$ where the second term does not contribute to $x^TAx$. Hence $$|x^TAx| \le \frac12\|A+A^T\|\ \|x\|^2$$
Prove the continuity on an open interval
This function is a ratio. A ratio is continuous wherever its numerator and denominator are continuous and the denominator is not zero. (In symbols, $\frac{f(x)}{g(x)}$ is continuous at $x$ if $f$ and $g$ are continuous at $x$ and $g(x) \neq 0$. This is an application of the "quotient law" for limits to the ratio.) Your given numerator and denominator are polynomials, so are continuous for all values of $x$. Your denominator, $x-2$, is zero precisely when $x = 2$, so the ratio is continuous for all $x$ except $x=2$. Written formally and specifically for the interval you give: Let $x \in (2,\infty)$. Then using the product and sum laws of limits, $2x+3 = \lim_{t \rightarrow x} 2t+3$ and $x-2 = \lim_{t \rightarrow x} t-2$. Furthermore, since $x \neq 2$, the quotient law for limits gives $\frac{2x+3}{x-2} = \lim_{t \rightarrow x} \frac{2t+3}{t-2}$.
Nodes and edges with combinatorics!
You are asking several questions of various difficulties, which I will rephrase in common graph theory vocabulary: How many labeled graphs have no isolated nodes? How many labeled graphs are trees with every vertex having degree at most $4$? How many unlabeled graphs have no isolated nodes? How many unlabeled graphs are trees with every vertex having degree at most $4$? Your solution to $(1.)$ does not quite work. For each node, you are choosing the edges out of each node, which would mean you have to multiply the numbers $(2^{N-i}-1)$, not add them. Furthermore, there are more choices than you have accounted for. You used $2^{N-i}-1$ so every node is connected to some node after it, but it is OK for a node to have no connections after it as long as it had some connections before. The choice is therefore sometimes $2^{N-i}-1$ and sometimes $2^{N-i}$. The only way to resolve these complicated dependencies is to use the principle of inclusion exclusion. Namely, count the number of graphs, then for each vertex subtract the graphs where that vertex is isolated, then for each pair of vertices add back in graphs where both of those vertices are isolated, etc. The result (see the sketch after this answer) is $$ \sum_{k=0}^n (-1)^k\binom{n}k 2^{\binom{n-k}{2}} $$ For $(2.)$, things are even trickier. I think you have to use Prüfer codes, which are a bijection between labeled trees on $\{1,2,\dots,n\}$ and lists of length $n-2$ with entries in $\{1,2,\dots,n\}$, such that the degree of each vertex $i$ is one plus the number of times $i$ appears in the list. Therefore, the number of such labeled trees is $$ \sum_{\substack{\sum_{i}d_i=n-2\\\max d_i\le 3}} \binom{n-2}{d_1,d_2,\dots,d_n} $$ This is a large summation of multinomial coefficients which I believe cannot be simplified. Unfortunately, I have little hope of getting closed form expressions for $(3.)$ or $(4.)$. There is no known expression for the number of unlabeled graphs, full stop. As you said, the number of symmetries makes things complicated. You may be able to do something using Polya's enumeration method to handle the symmetries, but this will result in a summation over $n!$ terms which will quickly get computationally infeasible.
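A direct Python evaluation of the inclusion-exclusion formula for $(1.)$, with the first few values as a sanity check:

```python
from math import comb

def graphs_no_isolated(n):
    # inclusion-exclusion over the set of forced-isolated vertices
    return sum((-1)**k * comb(n, k) * 2**comb(n - k, 2) for k in range(n + 1))

print([graphs_no_isolated(n) for n in range(1, 6)])  # [0, 1, 4, 41, 768]
```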
check if series $\sum^\infty_{n=1} \frac{(-1)^n}{\sqrt{n}+{(-1)^n}}$ converges
The series $\sum_{n=2}^N\frac{(-1)^n}{\sqrt{n}+(-1)^n}$ diverges. To show this we write the partial sums of the series as $$\begin{align} \sum_{n=2}^N\frac{(-1)^n}{\sqrt{n}+(-1)^n}&=\color{blue}{\underbrace{\sum_{n=2}^N\frac{(-1)^n}{\sqrt{n}}}_{\text{Converges as}\,\,N\to \infty}}-\color{red}{\underbrace{\sum_{n=2}^N\left(\frac{1}{\sqrt{n}(\sqrt{n}+(-1)^n)}\right)}_{\text{Diverges as}\,\,N\to \infty}}\tag 1 \end{align}$$ Leibniz's test guarantees that the first term converges as $N\to \infty$. However, for the second term on the right, we have $$\sum_{n=2}^N \frac{1}{\sqrt{n}(\sqrt{n}+(-1)^n)}\ge \frac12 \sum_{n=2}^N\frac1{n}$$ which shows that the second term diverges by comparison with the harmonic series. Inasmuch as the partial sums of the series of interest are comprised of the partial sums of a convergent series and the partial sums of a divergent series, the series of interest diverges.
How to tell if a numerical solution of a differential equation is accurate
What you can do to estimate the error when you do not know the exact solution is to solve the equation with stepsize $h$ and then with stepsize $h/2$. If you are using an order 1 method, this should approximately halve the error, and you can estimate the error as the difference between these two solutions. So if the difference between these two solutions is $5.13\times 10^{-3}$, then your error estimate is something like $10^{-2}$. If you have an order 2 method, halving the stepsize should quarter the error, so you can do a similar back-of-the-envelope calculation. Some little tidbits: this approximation becomes more accurate as $h$ gets smaller (indeed, order of accuracy is defined in the limit as $h$ goes to zero), so you want $h$ small enough. Instead of halving you can make this more accurate by quartering or more, and adjust the math accordingly. Lastly, since we're talking about PDEs, you have to vary both the time step $h$ and the space step $\Delta x$, knowing the order of the method in each, to get a rough estimate. Also, in this case, to even calculate the error you have to choose a norm, like the $l^2$ or maximum norm, and you should choose a norm the method is stable in.
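A toy Python illustration of the stepsize-halving estimate, using forward Euler (order 1) on the arbitrary test problem $y'=y$, $y(0)=1$:

```python
import math

def euler(f, y0, t1, n):
    # forward Euler with n steps on [0, t1]
    h, y = t1 / n, y0
    for _ in range(n):
        y += h * f(y)
    return y

f = lambda y: y                      # exact value at t = 1 is e
coarse = euler(f, 1.0, 1.0, 100)     # stepsize h
fine = euler(f, 1.0, 1.0, 200)       # stepsize h/2
print("error estimate:", abs(fine - coarse))
print("actual error:  ", abs(math.e - coarse))  # same order of magnitude
```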
How to determine the odds of a matching pattern where any single bit can vary
The number of patterns of 16 bits which differ from the given pattern by a single bit is 16 (one for each bit you can modify). So you have a false positive in $16/2^{16}$ of cases.
Prove that f is either strictly increasing or decreasing
Think of it like this. Since $f$ is one-one, w.l.o.g., let $x_1\lt x_2$ be two elements of $\textrm{Dom}(f)$. Then, $x_1\neq x_2\implies f(x_1)\neq f(x_2)$. By the trichotomy principle, either $f(x_1)\lt f(x_2)$ or $f(x_1)\gt f(x_2)$. This is the key step in proving that $f$ is either strictly increasing or strictly decreasing (to rule out a mixture of the two behaviours at different pairs of points, one also uses the continuity of $f$ and the intermediate value theorem).
Stirling numbers of the second kind for small numbers of partition
Picture three bins and start throwing your $n$ elements into them however you like; there are $3^n$ ways to do so. However, that also counts the cases where some bin is left empty! You're going to need to subtract those. Then you should promptly add back all the ways that left two bins empty, since you subtracted each of them twice at the previous step. Finally, the order of the bins doesn't really matter, does it? Edit your question with your progress if you need extra help.
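If you want to verify your final count afterwards, the steps above give $S(n,3)=\frac{3^n-3\cdot 2^n+3}{3!}$; here is a hedged sketch of the general version:

```python
from math import comb, factorial

def stirling2(n, k):
    """Surjections onto k labeled bins by inclusion-exclusion,
    then divide by k! since the bins are unordered."""
    surj = sum((-1) ** j * comb(k, j) * (k - j) ** n for j in range(k + 1))
    return surj // factorial(k)

print(stirling2(4, 3))  # 6 ways to split 4 elements into 3 nonempty parts
```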
Is 35%=0.35 not as a percentage of a number but just on its own?
The symbol % means "per cent". In this case, "cent" means $100$, so % means $/100$. 35% means $35/100$ or $0.35$. We often multiply percents by other quantities, like 80% of 200. Here, "of" designates multiplication. In the Daily Mail story there is no "of", so no multiplication and no ambiguity.
prove existence of integers $a,q$ which satisfy the following inequality
What you want is $|qx - a| < \frac{1}{Q}$. For a given $q$, you can find an $a$ satisfying this iff the fractional part of $qx$ is either less than $\frac{1}{Q}$ or greater than $1-\frac{1}{Q}$. Now consider the $Q$ candidates for $q$ that you have. For each, compute the integer part of (fractional part of $qx$) times $Q$. This can take $Q$ possible values, from $0$ to $Q-1$. If each of these values is attained, then use the value of $q$ which gives you $0$ here. If not, by the pigeonhole principle there must be two candidates for $q$ which give you the same value here. Their difference will also be a candidate and will give $0$ here, which makes it a valid candidate.
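Here is a brute-force rendering of this pigeonhole guarantee (names are mine):

```python
from math import sqrt

def dirichlet(x, Q):
    """Return (a, q) with 1 <= q <= Q and |q*x - a| < 1/Q;
    the pigeonhole argument above guarantees one exists."""
    for q in range(1, Q + 1):
        a = round(q * x)
        if abs(q * x - a) < 1 / Q:
            return a, q
    raise AssertionError("impossible by the pigeonhole principle")

print(dirichlet(sqrt(2), 100))  # (99, 70): |70*sqrt(2) - 99| < 1/100
```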
Help Needed With Pascal's Triangle
Yes: $$ \frac{(3k-1)!}{(k-1)! \; (2k)!} = \frac{1}{2k} \; \frac{(3k-1)!}{(k-1)! \; (2k-1)!} $$ $$ \frac{(3k-1)!}{k! \; (2k-1)!} = \frac{1}{k} \; \frac{(3k-1)!}{(k-1)! \; (2k-1)!} $$ so the second is twice the first. Row $14$ gives the only occurrence of three consecutive entries in proportion $(1,2,3)$, namely $1001, 2002, 3003$. Indeed, $$ \left( \begin{array}{c} 3k-1 \\ k-1 \\ \end{array} \right) = \frac{1}{2k} \; \frac{(3k-1)!}{(k-1)! \; (2k-1)!} $$ $$ \left( \begin{array}{c} 3k-1 \\ k \\ \end{array} \right) = \frac{1}{k} \; \frac{(3k-1)!}{(k-1)! \; (2k-1)!} $$ so $$ 2 \left( \begin{array}{c} 3k-1 \\ k-1 \\ \end{array} \right) = \left( \begin{array}{c} 3k-1 \\ k \\ \end{array} \right) $$ always. IF, in addition, $$ 3 \left( \begin{array}{c} 3k-1 \\ k-1 \\ \end{array} \right) = \left( \begin{array}{c} 3k-1 \\ k+1 \\ \end{array} \right) $$ then $$ \frac{3}{2k(2k-1)} = \frac{1}{k(k+1)}, $$ so $$ 3 k^2 + 3k = 4 k^2 - 2k $$ and $$ 0 = k^2 - 5k = k(k-5). $$ With $k=5$ we get $3k-1=14$.
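A one-line check of row $14$ (sketch):

```python
from math import comb

row14 = [comb(14, k) for k in (4, 5, 6)]
print(row14)  # [1001, 2002, 3003], in proportion 1:2:3
print(2 * comb(14, 4) == comb(14, 5),
      3 * comb(14, 4) == comb(14, 6))  # True True
```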
Treatment of Linear Equations: Fields vs Polynomials over a field
A coordinate-free way to study linear equations over a vector space $V$ is to use linear functionals. A linear functional is just a linear map $\varphi \colon V \rightarrow \mathbb{F}$, and a linear equation on $V$ is just an expression of the form $\varphi(v) = b$ where $b \in \mathbb{F}$, with $\{ v \in V \, | \, \varphi(v) = b \} = \varphi^{-1}(b)$ as our solution space. More generally, we can consider the solution space of $k$ equations as $\cap_{i=1}^k \varphi_i^{-1}(b_i)$. The collection of all linear functionals is denoted by $V^{*}$ and called the dual space of $V$. It is itself a vector space (reflecting, among other things, the fact that we can add equations and multiply them by scalars). Let us see how this works in $\mathbb{F}^n$: a linear map $\varphi \colon \mathbb{F}^n \rightarrow \mathbb{F}$ can be written as $\varphi(x_1, \dots, x_n) = a_1 x_1 + \dots + a_n x_n$ for uniquely determined coefficients $a_i \in \mathbb{F}$. The parameters $x_1, \dots, x_n$ are the variables of the function $\varphi$, so we can also think of the elements of the dual space $(\mathbb{F}^n)^{*}$ as linear polynomials in the variables $x_1, \dots, x_n$. A homogeneous linear equation has the form $$ \varphi(x_1, \dots, x_n) = a_1 x_1 + \dots + a_n x_n = 0_{\mathbb{F}} $$ and so the corresponding solution space is $\varphi^{-1}(0) = \ker(\varphi)$. The linear functional $\varphi$ can be multiplied by a scalar $c \in \mathbb{F}$, resulting in a new linear functional $c\varphi$ acting on $(x_1,\dots,x_n)$ by $$ (c\varphi)(x_1,\dots,x_n) = c a_1 x_1 + \dots + c a_n x_n. $$ Thus, we see that the equation $(c\varphi)(x_1,\dots,x_n)=0$ corresponds to the original equation multiplied by a scalar. Similarly we can add two equations, consider the solution space of many equations (in the case the equations are homogeneous, the space of solutions is called the annihilator of the functionals/"equations"), etc.
If two lines are skew, then $\overrightarrow{PQ}\cdot\left(\mathbf{u}\times\mathbf{v}\right)\neq0$?
The quantity $\overrightarrow{PQ} \cdot (\mathbf{u} \times \mathbf{v})$ is known as the scalar triple product. It equals the determinant of the matrix formed by putting the three vectors as rows (or columns, equivalently). As such, it is $0$ if and only if the vectors are linearly dependent. Note that, if two lines are skew, then $\mathbf{u}$ is independent from $\mathbf{v}$, that is, neither is a scalar multiple of the other. Suppose for the sake of contradiction that $\overrightarrow{PQ} \cdot (\mathbf{u} \times \mathbf{v}) = 0$. Then the vectors are linearly dependent, hence $$\overrightarrow{PQ} = a \mathbf{u} + b \mathbf{v}$$ for some $a, b \in \mathbb{R}$. Consider the plane $\Pi$ through $P$ generated by the directions $\mathbf{u}$ and $\mathbf{v}$. That is, $$\Pi = \left\lbrace \overrightarrow{OP} + \lambda \mathbf{u} + \mu \mathbf{v} : \lambda, \mu \in \mathbb{R}\right\rbrace.$$ Then both lines lie in $\Pi$. The first line lies in $\Pi$ because we can choose $\mu = 0$. The second line lies in $\Pi$ because $$\overrightarrow{OQ} + \lambda \mathbf{v} = \overrightarrow{OP} + \overrightarrow{PQ} + \lambda \mathbf{v} = \overrightarrow{OP} + a\mathbf{u} + (\lambda + b) \mathbf{v} \in \Pi.$$ The lines are coplanar, hence they follow the Euclidean dichotomy: they intersect or they are parallel. Either way, they are not skew, which is a contradiction.
The circle bundle of $S^2$ and real projective space
$SO_3$ is the space of triples $(v_1,v_2,v_3)$ of elements of $\mathbb R^3$ which form an oriented orthonormal basis. Given an element $x\in SS^2$, you construct an orthonormal pair from it: $v_1$ is the point of $S^2$ your vector $x$ is tangent to, $x \in T_{v_1}S^2$, and $v_2$ is the vector in $\mathbb R^3$ that is the image of $x \in T_{v_1}S^2$ under the inclusion of vector spaces $T_{v_1}S^2 \subset \mathbb R^3$. But given $v_1$ and $v_2$ orthonormal, $v_3 = v_1 \times v_2$ is determined. So that's essentially why $SO_3$ and $SS^2$ are diffeomorphic / homeomorphic. There are a lot of fun ways to see that $\mathbb RP^3$ and $SO_3$ are diffeomorphic. There are arguments using the quaternions. I prefer the exponential map $T_ISO_3 \to SO_3$: consider it restricted to balls of various radii and stop at the first radius where the map is onto.
How to approximate the shape of a human face
A rather convenient parametric representation of a head is: $$\tag{1}x=a \cos(t) (1+c \sin(t)), \ \ \ y=b \sin(t)$$ (modified ellipse) with $a=3, b=4$ and $c=0.15$. See plot below. All you have to do now is to convert $(1)$ into a Cartesian representation, by eliminating parameter $t$ (essentially by using $\cos(t)^2+\sin(t)^2=1$): $$\tag{2}\dfrac{(x/a)^2}{(1+cy/b)^2}+\dfrac{y^2}{b^2}=1$$ Thus a point $(x,y)$ is inside the "head" iff the following inequality is verified: $$\tag{3}\dfrac{(x/a)^2}{(1+cy/b)^2}+\dfrac{y^2}{b^2}<1$$ Edit: I obtained a slightly flatter head by adding in $(1)$ an extra $c \sin(t)^2$ inside the parenthesis, i.e. taking $x=3 \cos(t)(1+c \sin(t)+c \sin(t)^2)$.
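The membership test $(3)$ is a one-liner; a minimal sketch with the constants above:

```python
def inside_head(x, y, a=3.0, b=4.0, c=0.15):
    """Point-in-'head' test from the Cartesian inequality (3)."""
    return (x / a) ** 2 / (1 + c * y / b) ** 2 + (y / b) ** 2 < 1

print(inside_head(0, 0), inside_head(3, 0), inside_head(0, 4.1))
# True False False
```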
Why is a slice category a category?
There are two ways you can define the morphism part of a category $\mathcal{C}$. Either: Define a set $\mathrm{mor}(\mathcal{C})$ of morphisms and functions $\mathrm{dom}, \mathrm{cod} : \mathrm{mor}(\mathcal{C}) \to \mathrm{ob}(\mathcal{C})$; or Define sets $\mathrm{Hom}_{\mathcal{C}}(A,B)$ for each $A,B \in \mathrm{ob}(\mathcal{C})$. To translate from the first definition into the second, you can define $\mathrm{Hom}_{\mathcal{C}}(A,B)$ to be $\{ f \in \mathrm{mor}(\mathcal{C}) \mid \mathrm{dom}(f) = A \text{ and } \mathrm{cod}(f) = B \}$. Then the hom sets are automatically disjoint since the domain and codomain are encoded in the morphism. In the second case, however, the hom sets don't need to be disjoint, since we haven't explicitly required disjointness of the hom sets. Thus the second case is more general than the first case. This is why some people require hom sets to be disjoint—namely, so that $\mathrm{dom}(f)$ and $\mathrm{cod}(f)$ are well-defined. However, you can always turn a specification of a category without disjoint hom sets into a specification of a category with disjoint hom sets, by replacing $\mathrm{Hom}_{\mathcal{C}}(A,B)$ by $\{A\} \times \{ B \} \times \mathrm{Hom}_{\mathcal{C}}(A,B)$, so that a morphism $f : A \to B$ is then 'officially' a triple $(A,B,f)$. This is equivalent to defining $\mathrm{mor}(\mathcal{C})$ to be the disjoint union $\bigsqcup\limits_{(A,B) \in \mathrm{ob}(\mathcal{C}) \times \mathrm{ob}(\mathcal{C})} \mathrm{Hom}_{\mathcal{C}}(A,B)$ and taking $\mathrm{dom}$ and $\mathrm{cod}$ to be the respective projection maps to $\mathrm{ob}(\mathcal{C})$. Under this encoding, a morphism $\sigma : (f_1 : Z_1 \to A) \to (f_2 : Z_2 \to A)$ in $\mathcal{C}/A$ is 'officially' a triple $(f_1,f_2,\sigma)$ where $\sigma : Z_1 \to Z_2$ and $f_2 \circ \sigma = f_1$. The morphisms $\mathrm{id}_Z : f_1 \to f_1$ and $\mathrm{id}_Z : f_2 \to f_2$ are then 'officially' triples $(f_1,f_1,\mathrm{id}_Z)$ and $(f_2,f_2,\mathrm{id}_Z)$, so you don't have any issues. However, in practice, it's easier to define $\mathrm{Hom}_{\mathcal{C}}(A,B)$ without worrying about whether the hom sets are disjoint, and rest easy with the knowledge that the hom sets could be made disjoint if you wanted them to be.
A limit of a sequence statistifying $S_{n} = \frac{1}{2}(a_{n}+\frac{1}{a_{n}})=a_1+a_2+...+a_n$
Hint: $a_{n}=S_{n}-S_{n-1}$ for $n\ge 2$, so $$2S_{n}=S_{n}-S_{n-1}+\dfrac{1}{S_{n}-S_{n-1}}\Longrightarrow S^2_{n}-S^2_{n-1}=1.$$ Since $S_1=a_{1}=1$, this gives $$S_{n}=\sqrt{n},$$ so $$S_{n+1}(S_{n}-S_{n-1})=\sqrt{n+1}(\sqrt{n}-\sqrt{n-1})\to \dfrac{1}{2}$$
Number of classes of k-digit strings when digit order and identity doesn't matter
If I've read this right, you're looking to count k-tuples of integers in {1,..,n} that are inequivalent under the operations (a) permute the coordinates and (b) permute the integers. So, for example, when k=7 and n=2, 1122212 would be equivalent to 2211121, 1112222 and 1111222 (and many others). I believe it would be useful to construct a canonical form for each class: in this case, we can take the lexicographically first element in the equivalence class. So these would be in canonical form: 1111222 1111122 1111112 1111111 whereas 1112222 would not be, since it belongs to the same class as 1111222. These canonical forms are equivalent to ordered partitions of k into n parts (some of which can be zero). For example 1111222 <-> 4+3 since there are four 1's and three 2's. [If instead n=3, then 1111222 <-> 4+3+0 since there are no 3's in the string.] The number of these is the number $p_n(k)$ of partitions of k into at most n parts (afterwards, we can append the zeroes to achieve n parts). We can compute $p_n(k)$ using the recurrence relation $p_n(k)=p_n(k-n)+p_{n-1}(k)$, with boundary conditions $p_n(0)=1$ and $p_0(k)=0$ for $k\ge1$: either the partition has exactly $n$ positive parts (subtract $1$ from each part), or it has at most $n-1$ parts. [see: Enumerative combinatorics, Volume 1 By Richard P. Stanley p. 28]
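A memoized implementation of $p_n(k)$ with the recurrence above (a sketch; names are mine):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p_at_most(k, n):
    """Partitions of k into at most n (positive) parts."""
    if k == 0:
        return 1
    if k < 0 or n == 0:
        return 0
    # exactly n parts (subtract 1 from each), or at most n-1 parts
    return p_at_most(k - n, n) + p_at_most(k, n - 1)

print(p_at_most(7, 2))  # 4, matching the canonical forms listed above
```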
Proving sequence result using integrals
$$\lim _{n\rightarrow \infty }\sum_{k=1}^{n}\frac{n}{n^{2}+k^{2}}=\lim_{n\to +\infty}\frac{1}{n}\sum_{k=1}^{n}\frac{1}{1+\left(\frac{k}{n}\right)^2}=\int_{0}^{1}\frac{dx}{1+x^2}=\arctan 1=\color{red}{\frac{\pi}{4}}.$$
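A quick numerical check of the Riemann sum (sketch):

```python
n = 10**6
print(sum(n / (n**2 + k**2) for k in range(1, n + 1)))  # ~ 0.7853981... = pi/4
```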
Remainder when product is divided by 11
$a_k\equiv 1\pmod{11}$, so their product will also have remainder 1. Are you sure you've stated the problem correctly?
The Normal approximation to the Binomial (I cant find where im going wrong)
I see you have set up the calculation $$ \frac{17.5 - 16}{\sqrt{3.2}}, $$ which looks like the correct approach to me. Giving only two digits after the decimal place for a $Z$-score near $0.83$ is not enough precision to give a result accurate to four places, such as the probability $0.2005.$ You should carry more digits in your calculations. Even if you did not increase the number of digits, you should be more careful how you round. The result should have been closer to $0.84$ than $0.83.$ Once you have a more accurate answer, you can look it up in a table of the normal distribution. Note that many such tables require you to add $0.5$ to the result in the table in order to get a probability, and in this case it will give you the probability of $17$ or fewer drivers wearing seatbelts. I get $0.2009,$ so I suspect that the person writing the answer sheet reported more significant digits than their own methods justified.
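If, as the numbers $np=16$ and $npq=3.2$ suggest, the underlying problem is Binomial with $n=20$ and $p=0.8$ (my inference, not stated above), the full-precision computation looks like this:

```python
from math import sqrt
from scipy.stats import norm

z = (17.5 - 16) / sqrt(3.2)   # continuity-corrected Z-score
print(z)                      # 0.8385..., closer to 0.84 than 0.83
print(norm.sf(z))             # P(X >= 18) ~ 0.2009
```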
calculating $\int_{-\infty}^\infty e^{iat^2}\,{\rm d}t$
First integral, for $s=1$: with the substitution $x=t\sqrt{i}$ (so that $x^{2}=it^{2}$ and $dt=dx/\sqrt{i}$), \begin{align} 2\int \mathrm{e}^{it^{2}} dt &= 2\frac{1}{\sqrt{i}} \int \mathrm{e}^{x^{2}} dx \\ &= \frac{1}{\sqrt{i}} \sqrt{\pi} \mathrm{erfi}(x) \\ &= \frac{1}{\sqrt{i}} \sqrt{\pi} \mathrm{erfi}(t\sqrt{i}) \end{align} Thus \begin{align} 2\int\limits_{0}^{\infty} \mathrm{e}^{it^{2}} dt &= \frac{1}{\sqrt{i}} \sqrt{\pi} \mathrm{erfi}(t\sqrt{i}) \Big|_{0}^{\infty} \\ &= \frac{1}{\sqrt{i}} \sqrt{\pi} (i-0) = \sqrt{i}\sqrt{\pi} = \mathrm{e}^{i\pi /4}\sqrt{\pi} \end{align} Second integral, for $s=-1$: with the same substitution, \begin{align} 2\int \mathrm{e}^{-it^{2}} dt &= 2\frac{1}{\sqrt{i}} \int \mathrm{e}^{-x^{2}} dx \\ &= \frac{1}{\sqrt{i}} \sqrt{\pi} \mathrm{erf}(x) \\ &= \frac{1}{\sqrt{i}} \sqrt{\pi} \mathrm{erf}(t\sqrt{i}) \end{align} Thus \begin{align} 2\int\limits_{0}^{\infty} \mathrm{e}^{-it^{2}} dt &= \frac{1}{\sqrt{i}} \sqrt{\pi} \mathrm{erf}(t\sqrt{i}) \Big|_{0}^{\infty} \\ &= \frac{1}{\sqrt{i}} \sqrt{\pi} (1-0) = \frac{1}{\sqrt{i}} \sqrt{\pi} = \mathrm{e}^{-i\pi /4}\sqrt{\pi} \end{align} And we have \begin{equation} \int\limits_{-\infty}^{\infty} \mathrm{e}^{ist^{2}} dt = \mathrm{e}^{is\pi /4}\sqrt{\pi} \end{equation} for $s=\pm 1$
Convergence of series $\sum\limits_{n=1}^\infty\int\limits_{1}^{+\infty}e^{-x^n}\,dx$
Note that $$\alpha_n=\int_1^\infty\,\exp\left(-x^n\right)\,\text{d}x=\frac{1}{n}\,\int_1^\infty\,t^{-\left(1-\frac{1}{n}\right)}\,\exp(-t)\,\text{d}t\geq \frac{1}{n}\,\int_1^\infty\,\frac{\exp(-t)}{t}\,\text{d}t\,,$$ by setting $t:=x^{n}$. Therefore, $$\alpha_n\geq \frac{\lambda}{n}\,,\text{ where }\lambda:=\int_1^\infty\,\frac{\exp(-t)}{t}\,\text{d}t=-\text{Ei}(-1)\approx 0.21938\,.$$ Here, $\text{Ei}$ is the exponential integral. (We do not need the value of $\lambda$, just that it is a finite positive real number.) Thus, the sum $\sum\limits_{n=1}^\infty\,\alpha_n$ diverges due to divergence of the harmonic series. On the other hand, we can also see that $$\alpha_n\leq \frac{1}{n}\,\int_1^\infty\,\exp(-t)\,\text{d}t=\frac{1}{n}\,\exp(-1)=\frac{1}{n\,\text{e}}\,.$$ Therefore, $\alpha_n \in \Theta\left(\dfrac{1}{n}\right)$ as $n\to\infty$, with $$-\text{Ei}(-1)\leq \liminf_{n\to\infty}\,n\,\alpha_n\leq \limsup_{n\to\infty}\,n\,\alpha_n\leq \frac{1}{\text{e}}\,.$$ I expect that $\lim\limits_{n\to\infty}\,n\,\alpha_n$ exists, though, and conjecture that the limit is precisely $-\text{Ei}(-1)$. Let $f:[1,\infty)\to\mathbb{R}$ and, for each $n\in\mathbb{Z}_{>0}$, $f_n:[1,\infty)\to\mathbb{R}$ be the functions defined by $$f(t):=\frac{\exp(-t)}{t}\text{ and }f_n(t):=t^{-\left(1-\frac{1}{n}\right)}\,\exp(-t)$$ for all $t\geq 1$. Then, $f_n\to f$ as $n\to \infty$ pointwise, $\left|f_n\right|=f_n\leq g$, where $g:[1,\infty)\to\mathbb{R}$ is the integrable function given by $$g(t)=\exp(-t)\text{ for all }t\geq 1\,.$$ Moreover, since $t\mapsto t^{\frac{1}{n}}$ is concave and so lies below its tangent line at $t=1$, we have $t^{\frac{1}{n}}-1\leq\frac{t-1}{n}$ for $t\geq 1$, whence $$\begin{align}\int_1^\infty\,\left|f_n(t)-f(t)\right|\,\text{d}t&=\int_1^\infty\,\left(t^{\frac{1}{n}}-1\right)\,\frac{\exp(-t)}{t}\,\text{d}t\\&\leq \int_1^\infty\,\left(t^{\frac{1}{n}}-1\right)\,\exp(-t)\,\text{d}t\\&\leq\frac{1}{n}\,\int_1^\infty\,(t-1)\,\exp(-t)\,\text{d}t=\frac{1}{n\,\text{e}}\underset{n\to\infty}{\longrightarrow}0\,.\end{align}$$ By the Dominated Convergence Theorem, $$\lim_{n\to\infty}\,\int_1^\infty\,f_n(t)\,\text{d}t=\int_1^\infty\,f(t)\,\text{d}t\,.$$ Therefore, $n\,\alpha_n$ does indeed converge to $-\text{Ei}(-1)$, as $n$ grows to infinity.
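As a numerical cross-check (a sketch; after the substitution $t=x^n$, the quantity $n\,\alpha_n$ is an integral SciPy handles directly, and `expi` is the exponential integral $\text{Ei}$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expi

for n in (1, 10, 100, 1000):
    # n * alpha_n = integral over [1, inf) of t^(1/n - 1) * exp(-t)
    val, _ = quad(lambda t: t ** (1.0 / n - 1.0) * np.exp(-t), 1, np.inf)
    print(n, val)
print("-Ei(-1) =", -expi(-1))  # ~ 0.21938, the limit found above
```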
Fatou's lemma on bounded domain and bounded range
No, even with bounded functions on a space of finite measure we can find examples for the strict inequality, if the space has two disjoint sets $A,B$ with positive measure. Then one can simply oscillate $$f_i = \begin{cases}\chi_A &, i \equiv 0 \pmod{2}\\ \chi_B &, i \equiv 1 \pmod{2} \end{cases}$$ and one has $f = \liminf f_i = 0$, but $\liminf \int f_i = \min \{\mu(A),\,\mu(B)\} > 0$.
To show that $\mathcal{B}\left(\mathbb{R}^d\right)=\left\{\mathcal{B}\left(\mathbb{R}\right)\right\}^d$
The claim is not true; just look at an open set of $\mathbb{R}^d$ which is not the product of open sets of $\mathbb{R}$, like the unit ball. It would be true that the Borel subsets of $\mathbb{R}^d$ are generated by $\mathcal{B}(\mathbb{R})^d$.
$\vec{a}\cdot\vec{x} = \vec{b}\cdot\vec{x} = \vec{c}\cdot\vec{x} = ct.$ then $\vec{a},\vec{b},\vec{c}$ lie on the same plane.
Hint: if $(\vec a - \vec b) \cdot \vec x = 0$ for all $\vec x$ in a plane, then $\vec a - \vec b$ is normal to that plane. Therefore $\vec a - \vec b = \lambda \vec u\,$, and by symmetry also $\vec b - \vec c = \mu \vec u$ where $\vec u$ is a normal to the given plane, then the linear dependency between $\vec a , \vec b, \vec c\,$ follows easily.
Proving the inequality $\|A\|_2 ^2 \le \|A\|_1 \|A\|_\infty$
$$ \|A\|_2^2=\lambda_{\max}(A^*A)\leq\|A^*A\|_\infty\leq\|A^*\|_\infty\|A\|_\infty=\|A\|_1\|A\|_\infty $$
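A quick random test of the inequality (sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    A = rng.standard_normal((6, 6))
    lhs = np.linalg.norm(A, 2) ** 2                    # spectral norm squared
    rhs = np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf)
    assert lhs <= rhs + 1e-12
print("inequality holds on random samples")
```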
A closed form for a series
This is simply a Bernoulli polynomial $\;B_2(x)\;$ up to the factor $\pi^2$, for $\,x \in (0,2\pi)$ : \begin{align} S&:=\sum_{k=1}^\infty \frac {\cos (kx)}{k^2}\\ &=\pi^2\;B_2\left(\frac x{2\pi}\right)\\ &=\pi^2\;\left(\left(\frac x{2\pi}\right)^2-\frac x{2\pi}+\frac 16\right)\\ &=\frac{\pi^2}6-\frac{\pi x}2+\frac{x^2}4\\ \end{align} On the other hand, if you replace the $\cos$ function by a $\sin$, you get the non-elementary Clausen function $\mathrm{Cl}_2$. For some (nontrivial) intuition about all this you may see this answer or this thread or this answer.
Equation of a triangle in three dimensional complex plane.
Given a triangle constructed as described in the question, the complex number $\beta - \alpha$ corresponds to the side of the triangle opposite the vertex $\gamma,$ so it has modulus $c.$ The complex number $\gamma - \alpha$ corresponds to the side of the triangle opposite the vertex $\beta,$ so it has modulus $b.$ The complex number $\gamma - \alpha$ also has argument $\arg(\gamma - \alpha) = \arg(\beta - \alpha) + A,$ because the magnitude of the angle between the two sides represented by $\beta - \alpha$ and $\gamma - \alpha$ is $A$ and because $\gamma$ is anticlockwise from $\beta$ when viewed from the position of $\alpha.$ In other words, if $\psi = \arg(\beta - \alpha),$ then \begin{align} \beta - \alpha &= c e^{i\psi}, \\ \gamma - \alpha &= b e^{i(\psi + A)} = be^{iA} e^{i\psi}. \end{align} Now evaluate $c(\gamma - \alpha) - be^{iA}(\beta - \alpha)$ and compare the result to the fact that is to be proved.
Markov chain restricted to a subset
Fix $x \in F$ and consider any fixed path $X_n$ which hits $x$ infinitely often. Consider the subpath $X_{n_k}$ at the times where $X_n$ hits $x$. Because $x \in F$, each of the $n_k$ is some $T_j$. So $X_{T_n}$ hits $x$ infinitely often. But $x$ was an arbitrary element of $F$, and the original path has the required property with probability $1$, so the conclusion follows.
What is the probability of an event happening in some interval given probability of it in x interval?
It depends on the distribution. If you say that an event happens with probability $y$ in an interval $x$, I take it to mean that the probability that the event happens within the first $x$ time units is $y$; in other words, the probability that the time instant $t$ at which the event happens lies in $[0,x]$. It thus depends on the distribution of the time instant $t$ at which the event happens. If we assume that this time is always positive (i.e., that $t$ is a positive random variable), then we can say that the probability that the event happens before $x$ is equal to $F_t(x)$, where $F_t$ is the cumulative distribution function of the random variable $t$. Now, in your question you say that $y=F_t(x)$ and you ask in which case you have $y/2=F_t(x/2)$. For a fixed $x$, this might happen for many random variables. But if you want this property to hold for all $x$ within a given range, then the uniform distribution is the only solution.
Why is $f(x)=x^{2}+1$ a primitive recursive function?
Instead of giving a proof, let me give you a guideline. (See how far you get, and if you are stuck, feel free to ask for additional help.) (1) Show that $add(m,n)$ is primitive recursive. (2) Note that $mult(m,0) = 0$ and $mult(m,S(n)) = add(mult(m,n),m)$. (3) Use primitive recursion (and projection) together with $add(m,n)$ to show that $mult(m,n)$ is primitive recursive. (4) Conclude that $f$ is primitive recursive.
Galois Groups are isomorphic to subgroups of symmetric groups.
Assume $\sigma_X = \varphi_X$, i.e. $\forall i, \sigma(\alpha_i) = \varphi(\alpha_i)$. We shall show that $\sigma = \varphi$, i.e. $\forall x \in E, \sigma(x) = \varphi(x)$. Now note that $E = F(\alpha_1, \cdots, \alpha_n)$, and every $\alpha_i$ is algebraic over $F$, so every $x \in E$ can be expressed as a polynomial in $(\alpha_i)_{i=1}^n$ with coefficients in $F$, say $x = \sum f_j \alpha_{1}^{v_{j1}} \alpha_{2}^{v_{j2}} \cdots \alpha_{n}^{v_{jn}}$. Then: $$\begin{array}{rcll} \sigma(x) &=& \sigma \left( \sum f_j \alpha_{1}^{v_{j1}} \alpha_{2}^{v_{j2}} \cdots \alpha_{n}^{v_{jn}} \right) \\ &=& \sum \sigma \left( f_j \alpha_{1}^{v_{j1}} \alpha_{2}^{v_{j2}} \cdots \alpha_{n}^{v_{jn}} \right) & \text {$\sigma$ preserves addition} \\ &=& \sum \sigma \left( f_j \right) \sigma \left( \alpha_{1} \right)^{v_{j1}} \sigma \left( \alpha_{2} \right)^{v_{j2}} \cdots \sigma \left( \alpha_{n} \right)^{v_{jn}} & \text {$\sigma$ preserves multiplication} \\ &=& \sum f_j \sigma \left( \alpha_{1} \right)^{v_{j1}} \sigma \left( \alpha_{2} \right)^{v_{j2}} \cdots \sigma \left( \alpha_{n} \right)^{v_{jn}} & \text {$\sigma$ fixes $F$} \\ &=& \sum f_j \varphi \left( \alpha_{1} \right)^{v_{j1}} \varphi \left( \alpha_{2} \right)^{v_{j2}} \cdots \varphi \left( \alpha_{n} \right)^{v_{jn}} & \forall i, \sigma(\alpha_i) = \varphi(\alpha_i) \\ &=& \sum \varphi \left( f_j \right) \varphi \left( \alpha_{1} \right)^{v_{j1}} \varphi \left( \alpha_{2} \right)^{v_{j2}} \cdots \varphi \left( \alpha_{n} \right)^{v_{jn}} & \text {$\varphi$ fixes $F$} \\ &=& \sum \varphi \left( f_j \alpha_{1}^{v_{j1}} \alpha_{2}^{v_{j2}} \cdots \alpha_{n}^{v_{jn}} \right) & \text {$\varphi$ preserves multiplication} \\ &=& \varphi \left( \sum f_j \alpha_{1}^{v_{j1}} \alpha_{2}^{v_{j2}} \cdots \alpha_{n}^{v_{jn}} \right) & \text {$\varphi$ preserves addition} \\ &=& \varphi(x) \end{array}$$ which is what was to be demonstrated.
Given two sequences of positive numbers with $\sum_{n\ge1}\frac{x_n}{y_n}$ and $\sum_{n\ge1}y_n$ convergent, is $\sum_{n\ge1}\sqrt{x_n}$ convergent?
Hint: $$\sqrt{x_n}\leqslant\frac12\left(\frac{x_n}{y_n}+y_n\right).$$
$P$, $Q$ are projections on a Hilbert space such that $|P-Q|<1$ then $\dim(\operatorname{Range}(P))=\dim(\operatorname{Range}(Q))$
I suppose you are considering finite-rank projections. Let their ranges be $M$ and $N$. Define $T: M \to N$ by $Tx=Qx$. Then $T$ is linear. If $Tx=0$ with $\|x\|=1$ then $\|Px-Qx\|=\|x-0\| =\|x\|=1$ and $\|Px-Qx\| \leq \|P-Q\|<1$, which is a contradiction. Hence $T$ is a one-to-one linear map from $M$ into $N$. Since $M$ and $N$ are finite-dimensional, this implies $\dim(\operatorname{Range}(P)) \leq \dim(\operatorname{Range}(Q))$. Now just reverse the roles of $P$ and $Q$ to get the reverse inequality.
Why do we use Riemann approximations when we can find actual area by using integrals
The Riemann sums are used to construct the integral, that is, to define the object. When the functions to be integrated are "nice enough" you have learned a simple formula to compute the integral (involving primitives), but this rule does not define the integral, nor does it allow one to compute every integral.
Need a counterexample related to Interior and Closure
HINT: Do it in pieces. Let $A_1=[0,1)\cup(1,2)\cup\{3\}$; then $\operatorname{cl}A_1=[0,2]\cup\{3\}$, $\operatorname{int}\operatorname{cl}A_1=(0,2)$, and $\operatorname{cl}\operatorname{int}\operatorname{cl}A_1=[0,2]$. Moreover, $\operatorname{int}A_1=(0,1)\cup(1,2)$, so we’ve taken care of everything except $\operatorname{cl}\operatorname{int}A$ and $\operatorname{int}\operatorname{cl}\operatorname{int}A$. Now let $A_2=(-1,4)\setminus A_1=(-1,0)\cup\{1\}\cup[2,3)\cup(3,4)$. Show that $A_2$, $\operatorname{cl}A_2$, $\operatorname{int}A_2$, $\operatorname{cl}\operatorname{int}A_2$, and $\operatorname{int}\operatorname{cl}\operatorname{int}A_2$ are all distinct. Now what if you had a set $A$ that was like a discrete union of $A_1$ and $A_2$? And how can you make such a set?
Unspecific limits in a summation sign
Whatever $a$ is, it holds that $\sum_{i=1}^{k-1}a=\underbrace{a+a+\dots+a}_{k-1 \text{ times}}=a\cdot(k-1)$, since the summand does not depend on the index $i$.
Show that $\| y\| =1$ and $\| y-x_j\|\geq \| x_j\|$
You are on the right track. For $1 \le j \le n$ let $f_j \in X^*$ be a bounded functional such that $f_j(x_j) = \|x_j\|$ and $\|f_j\| =1$. Then the intersection of kernels $$\bigcap_{j=1}^n \ker f_j$$ is a nontrivial subspace of $X$. Indeed, we can define a linear map $f : X \to \Bbb{K}^n$ by $f(x) = (f_1(x), \ldots, f_n(x))$. Then $\ker f = \bigcap_{j=1}^n \ker f_j$ so if $\bigcap_{j=1}^n \ker f_j = \{0\}$ we would have that $f$ is an injective linear map from an infinite-dimensional space $X$ to a finite-dimensional space $\Bbb{K}^n$. This is a contradiction so $\bigcap_{j=1}^n \ker f_j$ is nontrivial. Pick $y \in \bigcap_{j=1}^n \ker f_j$ such that $\|y\|=1$ and notice $$\|y-x_j\| = \|f_j\|\|y-x_j\| \ge |f_j(y-x_j)| = |f_j(y)-f_j(x_j)| = |0-\|x_j\||=\|x_j\|$$ which proves the claim.
Generators of the intersection of prime monomial ideals
$I$ is generated by the monomials that are products of $n-d+1$ (distinct) indeterminates. Take $\Delta_m=\{G\subset[1,n]:|G|\le m\}$. This is a simplicial complex on $[1,n]$ whose facets are the subsets $F$ of $[1,n]$ with $|F|=m$. Define $I_{\Delta_m}$ as being the ideal of $K[X_1,\dots,X_n]$ generated by all the monomials $X_{i_1}\cdots X_{i_s}$ with $\{i_1,\dots,i_s\}\notin\Delta_m$. From Bruns and Herzog, Cohen-Macaulay Rings, Theorem 5.1.4, one knows that $$I_{\Delta_m}=\bigcap P_F,\;\;\;\; (*)$$ where the intersection is taken over all facets $F$ of $\Delta_m$, and $P_F$ denotes the ideal generated by all $X_i$ with $i\notin F$. In our case $\{i_1,\dots,i_s\}\notin\Delta_m$ means that $s\ge m+1$, and then $I_{\Delta_m}$ is generated by all the monomials $X_{i_1}\cdots X_{i_s}$ with $s\ge m+1$. (In fact one can say that $I_{\Delta_m}$ is generated by all the monomials $X_{i_1}\cdots X_{i_s}$ with $s=m+1$.) On the other hand, $P_F$ is generated by all $X_i$ with $i\notin F$, that is, $P_F$ is generated by all $X_i$ with $i\in [1,n]-F$. Looking back to $(*)$ we can deduce that the right hand side coincides with $\bigcap P_G$ where $G$ runs over all the subsets of $[1,n]$ with $|G|=n-m$. Now take $m=n-d$ and get the conclusion. Remark. Here is a simpler answer, which I found later: one knows that a finite intersection of monomial ideals is a monomial ideal, therefore $I=\bigcap_{|F|=d} P_F$ is a monomial ideal. A monomial $X_{i_1}^{a_{i_1}}\cdots X_{i_s}^{a_{i_s}}$, denoted by $X_G^a$, where $G=\{i_1,\dots, i_s\}$ and $a=(a_{i_1},\dots,a_{i_s})$, belongs to $P_F$ iff $G\cap F\neq\emptyset$, so $I$ contains all the monomials $X_G^a$ with $G\cap F\neq\emptyset$ for all $F\subset[1,n]$ with $|F|=d$. This shows that $|G|\ge n-d+1$ and gives the (same) conclusion.
product for two conditional probability
The Law of Total Probability is: $$\mathsf P(A\mid C) ~=~ \mathsf P(A\mid B, C)~\mathsf P(B\mid C)+\mathsf P(A\mid \neg B, C)~\mathsf P(\neg B\mid C)$$ And likewise : $$p_{X\mid Z}(x\mid z) ~=~\sum\limits_{y\in \textsf{support}(Y)}p_{X\mid Y,Z}(x\mid y,z)~p_{Y\mid Z}(y\mid z)$$
When is $\cos (x) \geq \frac{1}{2}$?
First, we solve $\cos x=\frac12$, which in $[0,2\pi)$ holds for $x=\frac{\pi}{3}$ and $x=\frac{5\pi}{3}$. Then, we take a look at the unit circle. The cosine value is the $x$-coordinate, so the question we ask is: "for which $x$ is the $x$-coordinate greater than or equal to $\frac12$?". In the first quadrant, we have $0\leq x\leq\frac{\pi}{3}$ or $[0,\frac{\pi}{3}]$. In the fourth quadrant, we have $\frac{5\pi}{3}\leq x\leq2\pi$ or $[\frac{5\pi}{3},2\pi]$. Now, we add the periodicity. We know that $\cos(x)=\cos(x+2\pi)$, so we can always add $2\pi$ to our answer. Therefore, the final answer is: $\cos(x)\geq\frac12$ for $x\in[2\pi n,\frac{\pi}{3}+2\pi n]\cup[\frac{5\pi}{3}+2\pi n,2\pi+2\pi n]$ or, preferably for some, $x\in[-\frac{\pi}{3}+2\pi n, \frac{\pi}{3}+2\pi n]$, ($n\in\mathbb{Z}$)
Presentations of Amalgamated Free Products of Two Groups.
A presentation for $H*_LK$ is $\langle X,Y\mid R,S,T\rangle$ where $T$ is as follows. For each $\ell\in L$, choose words $x_\ell$ and $y_\ell$ representing $\ell$ using generators from $X$ and $Y$, respectively. Then $T$ is the set of the words $x_\ell y_\ell^{-1}$ for all $\ell\in L$ (or, it suffices to just take a collection of $\ell$s that generate $L$). In other words, for each element (or generator) of $L$, we add a relation saying that its representation in $H$ is the same as its representation in $K$. (More precisely, it is rather rare that $L$ is actually literally a subgroup of both $H$ and $K$. Rather, we have a subgroup $L$ of $H$, a subgroup $L'$ of $K$, and an isomorphism $f:L\to L'$. Then we would take $T$ to consist of words $x_\ell y^{-1}_\ell$ where $x_\ell$ is a word representing $\ell\in L$ and $y_\ell$ is a word representing $f(\ell)\in L'$.) For a simple example, suppose $H$ is cyclic of order $4$, $K$ is cyclic of order $6$, and we are amalgamating them over their subgroup of order $2$. Then $\langle x\mid x^4\rangle$ is a presentation of $H$ and $\langle y\mid y^6\rangle$ is a presentation of $K$. A generator of $L$ is $x^2$ in $H$ and $y^3$ in $K$. So, we obtain the presentation $\langle x,y\mid x^4,y^6,x^2y^{-3}\rangle$ for $H*_L K$.
Hint Taylor Series $x^{2}\ln(x)$ about $a=1$
Your answer is not completely correct. As you would like the expansion around $a=1$, you should also expand $x^2$ around this point. In fact, you can easily show that $$x^2 = 1 +2 (x-1) + (x-1)^2\;.$$ As a result, we have that $$x^2 \ln x = \Bigl[1 +2 (x-1) + (x-1)^2 \Bigr] \sum_{k=1}^\infty \frac{(-1)^{k+1}}k (x-1)^k\;.$$ Or, after expanding, $$x^2 \ln x = \sum_{k=1}^\infty \frac{(-1)^{k+1}}k (x-1)^k + 2 \sum_{k=1}^\infty \frac{(-1)^{k+1}}k (x-1)^{k+1} + \sum_{k=1}^\infty \frac{(-1)^{k+1}}k (x-1)^{k+2}\\ = \sum_{k=1}^\infty \frac{(-1)^{k+1}}k (x-1)^k + 2 \sum_{k=2}^\infty \frac{(-1)^{k}}{k-1} (x-1)^{k} + \sum_{k=3}^\infty \frac{(-1)^{k+1}}{k-2} (x-1)^{k}\\ = (x-1) + \frac32 (x-1)^2 +\sum_{k=3}^\infty (-1)^{k+1}\left[\frac{1}{k} - \frac{2}{k-1} + \frac{1}{k-2} \right] (x-1)^k.$$
The counting problem of paths
Method I (brute force): Clearly we must have seven $+1$'s and three $-1$'s. Also clearly, $X_1=1$. If the third $+1$ occurs after the third $-1$ then we violate positivity, so the third $+1$ can only occur at $X_3,X_4,X_5$. It is easy to count each case. If it occurs at $X_3$, all subsequent paths are good, so this case contributes $\binom 74=35$. If it occurs at $X_4$, then there are $2$ ways to place the first $-1$, and after that all subsequent paths are good, so this case contributes $2\times \binom 64=30$. If it occurs at $X_5$, then there are $2$ ways to place the second $+1$, and after that all subsequent paths are good, so this case contributes $2\times \binom 54=10$. Thus the final answer is $$35+30+10=\boxed {75}$$ Method II (reflection): if you ignore positivity there are $\binom {10}7=120$ paths. The bad ones touch the line $y=-1$. From the first point of contact with that line we can reflect the rest of the path to get a path that ends at $(10,-6)$. Thus there is a bijection between bad paths in our problem and paths that end at $(10,-6)$. There are $\binom {10}8=45$ paths that end at $(10,-6)$, so the answer is $$\binom {10}7-\binom {10}8=120-45=\boxed {75}$$
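Both methods are easy to confirm by exhaustive enumeration (a throwaway sketch):

```python
from itertools import product

count = 0
for steps in product((1, -1), repeat=10):
    partial, ok = 0, True
    for s in steps:
        partial += s
        if partial <= 0:          # positivity: every partial sum >= 1
            ok = False
            break
    if ok and sum(steps) == 4:    # seven +1's and three -1's
        count += 1
print(count)  # 75
```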
Birth-death process: Calculate number of clients that can really enter the system for each unit of time
I haven't seen the G used at the end to describe a queueing node before, but will assume that by M/M/1/4/∞/G we mean an M/M/1 queue which has maximum capacity 4, an unbounded potential calling population (our source will never run out of customers) and G here I assume denotes a general service discipline (e.g. FCFS, LCFS, SIRO). This process can be described as continuous-time Markov chain with 5 states {0, 1, 2, 3, 4} describing the number of customers in the queue and transition rate matrix $$Q = \begin{pmatrix} -\lambda & \lambda \\ \mu & -(\lambda + \mu) & \lambda \\ &\mu & -(\lambda + \mu) & \lambda \\ &&\mu & -(\lambda + \mu) & \lambda \\ &&&\mu & -\mu \end{pmatrix}.$$ I'm not entirely clear what you wish to compute as time isn't discrete in this model, so describing how many customers enter in each time unit would seem rather arbitrary. Perhaps you want to describe the long term average amount of time the queue spends in each of the 5 states. When it is in state 4 there are no arrivals (they are blocked because the system is at maximum capacity), but in all other states arrivals happen at rate $\lambda$. So if you compute the stationary probability distribution $\pi_i$ for each of the states then the long term average arrival rate is given by $$\lambda(\pi_0+\pi_1+\pi_2+\pi_3) = \lambda(1-\pi_4).$$
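To make this concrete, here is a sketch computing the stationary distribution and the long-term average arrival rate for illustrative values $\lambda=2$, $\mu=3$ (my numbers, not from the question):

```python
import numpy as np

lam, mu = 2.0, 3.0
Q = np.array([
    [-lam,       lam,        0.0,        0.0,  0.0],
    [  mu, -(lam+mu),        lam,        0.0,  0.0],
    [ 0.0,        mu,  -(lam+mu),        lam,  0.0],
    [ 0.0,       0.0,         mu,  -(lam+mu),  lam],
    [ 0.0,       0.0,        0.0,         mu,  -mu],
])

# Solve pi Q = 0 with sum(pi) = 1: replace one balance equation
# by the normalization condition.
A = Q.T.copy()
A[-1, :] = 1.0
b = np.zeros(5); b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)
print("effective arrival rate:", lam * (1 - pi[-1]))
```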
Stochastic integrals and stopping times
Let $M(t) = \int_0^{t \wedge T} H(s) dB(s)$. Then $M(t)$ is a stopped local martingale and hence is also a local martingale. Further, $\langle M \rangle_t = \int_0^{t \wedge T} H(s)^2 ds$ and so your assumption gives $\mathbb{E}[\langle M \rangle_\infty] < \infty$. It is a standard result that this implies that $M$ is an $L^2$-bounded martingale and $\mathbb{E}[M(\infty)^2] = \mathbb{E}[M(0)^2] + \mathbb{E}[\langle M \rangle_\infty]$. Hence $$\mathbb{E}[\int_0^T H(s)dB(s)] = \mathbb{E}[M(\infty)] = \mathbb{E}[M(0)] = 0$$ and similarly $$\mathbb{E}[\bigg(\int_0^T H(s)dB(s)\bigg)^2] = \mathbb{E}[\langle M \rangle_\infty] = \mathbb{E}[\int_0^T H(s)^2 ds]$$ as desired.
When do two subbases generate the same topology
$\newcommand{\S}{\mathcal{S}}$ One criterion is the following: Two subbases $\S$ and $\S'$ generate the same topology iff for every $A\in\S$ and $x\in A$ there are $A_1',\dots, A_n'\in\S'$ such that $x\in A_1'\cap\dots \cap A_n'\subseteq A$, and conversely. I think you won't find something simpler.
How do I calculate the remainder (mod)
You could go about it like this: $$ 87^{17}\equiv_{77} 10^{17}=10\cdot(10^2)^8=10\cdot100^8\equiv_{77}10\cdot23^8 $$ then, since $23^2=529\equiv_{77}67$, we have $$ 10\cdot(23^2)^4\equiv_{77}10\cdot67^4 $$ and you keep on like this, always reducing the base by breaking up the exponent. Hope this helps
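You can confirm the hand computation with Python's three-argument `pow` (sketch):

```python
print(pow(87, 17, 77))  # 54
print(pow(10, 17, 77))  # 54, since 87 = 10 (mod 77)
```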
(True/False)? If $R$ is a commutative ring, then prove that $(s \cdot t) x =(s \cdot x)(t \cdot x) ; x \in R~,~s,t \in Z$
The statement is in general false in the absence of special restrictions on $x$. As a counterexample, simply take $R = Z$, $0, 1 \ne x \in Z$, and $0 \ne s, t \in Z$; then if $(st)x = (sx)(tx) = (st)x^2, \tag{1}$ we have, since $st \ne 0$, that $x = x^2; \tag{2}$ but in $Z$, (2) implies that $x =0$ or $x = 1$, contradicting our assumption that $x \ne 0, 1$. The same conclusion binds if $Z$ is replaced by any integral domain. It strikes me that Jyrki Lahtonen is "morally correct" when he indicates in his comment that what is intended here might not be what is written. My take on this is that what is wanted is a demonstration of either $(s(tx)) = (st)x$ or $(sx)(tx) = (st)x^2$, and that some careless reader/writer (present company of course excluded! ;-)) got the two equations mixed up. I also surmise that the underlying point of this exercise was to show that when the integers act most obviously on the elements of arbitrary rings, viz. via $nr = (r + r + \ldots + r), \; n \; \text{ times}, \tag{3}$ where $r \in R$, $n \in \Bbb N$, the natural numbers, and for $n < 0$ we take $nr = (-n)(-r)$ (and of course $0r = 0$), then multiplication of integers behaves in the "obvious" manner as well; such assertions are easy to prove, by e.g. simple inductive arguments. But if such was the original intention of this question, it was morphed and mutated into something else altogether. But such is the nature of evolution, I guess. In closing, I would like to address our OP VHP's second question: Be careful about believing what you read on the 'Net! To One and All: May this 4th of July find your variables independent! Hope this helps! Cheers, and as always, Fiat Lux!!!
Probability of being highest OR second highest draw from a distribution
I will assume that the distribution is continuous, so that the probability of ties is $0$. The probability that a specific draw, say the seventh, is the highest is $\frac{1}{n}$, because all draws are equally likely to be highest. Likewise, the probability that it is the second highest is $\frac{1}{n}$. Since the two events are disjoint, add: the probability of being highest or second highest is $\frac{2}{n}$.
Chord $x+y=1$ Subtends $45^{\circ}$ at the centre
It's not that you couldn't get the second answer of the second method $\sqrt{2+\sqrt2}$ in the first method. It's that $\sqrt{2+\sqrt2}$ is wrong: drawing the associated circle and line and measuring the subtended angle gives you well over $90^\circ$. There is only one (positive) solution $a$.