Find max $x\cdot y$ where $x\in S= \{(x_1,\ldots, x_n)\in \mathbb{R}^n$
Define $\newcommand{\sgn}{\operatorname{sgn}}$ $$ F(x)=\sum_{k=1}^n|x_k|^p\tag{1} $$ then $$ \nabla F(x)=\left(p\sgn(x_k)|x_k|^{p-1}\right)_{k=1}^n\tag{2} $$ You want to find a point on the surface where $\nabla F\,||\,y$. Therefore, $$ x=\left(\sum_{k=1}^n|y_k|^{\frac{p}{p-1}}\right)^{-\frac{1}{p}}\left(\sgn(y_k)|y_k|^{\frac{1}{p-1}}\right)_{k=1}^n\tag{3} $$ should be the point where $T(x)$ is the greatest. Computing $T(x)$ yields $$ T(x)=\left(\sum_{k=1}^n|y_k|^{\frac{p}{p-1}}\right)^{\frac{p-1}{p}}\tag{4} $$
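A quick numerical sanity check of $(3)$ and $(4)$ (a sketch in Python, assuming $T(x)=x\cdot y$ as in the question, with the surface $F(x)=1$):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 3.0
q = p / (p - 1)                                   # conjugate exponent
y = rng.normal(size=5)

# Candidate maximizer from (3)
x = np.sign(y) * np.abs(y) ** (1 / (p - 1))
x /= np.sum(np.abs(y) ** q) ** (1 / p)

print(np.sum(np.abs(x) ** p))                     # ~1, so x lies on the surface
print(x @ y, np.sum(np.abs(y) ** q) ** (1 / q))   # T(x) matches (4)

# No random point on the surface should do better
for _ in range(10_000):
    z = rng.normal(size=5)
    z /= np.sum(np.abs(z) ** p) ** (1 / p)
    assert z @ y <= x @ y + 1e-9
```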
Equivalence of definitions of oriented atlas.
A chart is a homeomorphism $\phi : U \to U'$, where $U \subset M$ is open and $U' \subset \mathbb R^n$ is open. Given another chart $\psi : V \to V'$, we get the transition map $$ \psi \circ \phi^{-1} : \phi(U \cap V) \to \psi(U \cap V) .$$ It has the Jacobian matrix $$\begin{pmatrix}\frac{\partial (\psi \circ \phi^{-1})_1}{\partial x_1}&\dots&\frac{\partial (\psi \circ \phi^{-1})_1}{\partial x_n}\\\vdots &\ddots & \vdots\\\frac{\partial(\psi \circ \phi^{-1})_n}{\partial x_1}&\dots& \frac{\partial (\psi \circ \phi^{-1})_n}{\partial x_n}\end{pmatrix}$$ where the $x_j$ are the standard coordinates in $\mathbb R^n$ and the $(\psi \circ \phi^{-1})_i$ are the coordinate functions of $\psi \circ \phi^{-1}$. This is the same as in Tu if we correctly interpret his notation. He writes $\phi = (x^1,\ldots,x^n)$ with $x^i : U \to \mathbb R$ and $\psi = (y^1,\ldots,y^n)$ with $y^i : V \to \mathbb R$. We may regard the $x^j(p)$ and $y^j(p)$ as the local coordinates of the point $p$ with respect to the given charts. This implies $\psi \circ \phi^{-1} = (y^1 \circ \phi^{-1}, \ldots, y^n \circ \phi^{-1})$, having Jacobian matrix in standard coordinates $$\begin{pmatrix}\frac{\partial (y^1 \circ \phi^{-1})}{\partial x_1}&\dots&\frac{\partial (y^1 \circ \phi^{-1})}{\partial x_n}\\\vdots &\ddots & \vdots\\\frac{\partial(y^n \circ \phi^{-1})}{\partial x_1}&\dots& \frac{\partial (y^n \circ \phi^{-1})}{\partial x_n}\end{pmatrix}$$ For local coordinates Tu defines $$\frac{\partial y^i}{\partial x^j} = \frac{\partial(y^i \circ \phi^{-1})}{\partial x_j}$$ or more precisely $$\frac{\partial y^i}{\partial x^j}(p) = \frac{\partial(y^i \circ \phi^{-1})}{\partial x_j}(\phi(p))$$ for $p \in U \cap V$. See chapter "6.6 Partial Derivatives".
Integral involving binomial expression of an exponential
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \mrm{M}\pars{b,k} & \equiv \int_{0}^{\infty}\expo{-kt}\pars{2\expo{t} - 1}^{b}\,\dd t = 2^{k}\int_{0}^{\infty}\pars{{1 \over 2}\,\expo{-t}}^{k - b} \pars{1 - {1 \over 2}\expo{-t}}^{b}\,\dd t \end{align} Set $\ds{x \equiv \expo{-t}/2 \iff t = -\ln\pars{2x}}$ \begin{align} \mrm{M}\pars{b,k} & = 2^{k}\int_{1/2}^{0}x^{k - b}\pars{1 - x}^{b}\pars{-\,{\dd x \over x}} = 2^{k}\int_{0}^{1/2}x^{k - b - 1}\pars{1 - x}^{b}\,\dd x \\[5mm] & = \bbx{2^{k}\,\mrm{B}_{1/2}\pars{k - b,b + 1}} \end{align} where $\ds{\,\mrm{B}_{z}}$ is the $Incomplete\ Beta\ Function$.
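A numerical cross-check of the boxed result (a sketch; scipy's betainc is the regularized incomplete beta, so it is rescaled by $\mathrm{B}(k-b,b+1)$, and the integral converges only for $k>b$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta, betainc

b, k = 1.5, 4.0                      # convergence requires k > b

lhs, _ = quad(lambda t: np.exp(-k * t) * (2 * np.exp(t) - 1) ** b, 0, np.inf)

# betainc(a, b, x) is regularized, so multiply back by beta(a, b)
rhs = 2 ** k * betainc(k - b, b + 1, 0.5) * beta(k - b, b + 1)

print(lhs, rhs)                      # the two values agree
```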
Fibonacci primes vs Mersenne primes
The Fibonacci numbers, as well as the Mersenne numbers, are both members of the family of Lucas sequences. Presently, no Lucas sequence (or companion Lucas sequence) has been shown to contain an infinite number of primes. Nevertheless, I was reading recently in the 2nd edition of Richard K. Guy's "Unsolved Problems in Number Theory" that many are of the mindset that what Guy refers to as Lucas-Lehmer sequences do contain infinitely many prime terms. I would suspect that most mathematicians who have given it some serious thought also believe that the specific sequences you mention contain an infinite number of primes; in the case of the Mersenne numbers, I am quite certain of it. That being said, if it is true that both of these sequences contain an infinite number of primes, then without being given any additional information, what we do know is that primes in both of these sequences will occur only at terms whose index is prime. Now, more Mersenne primes than Fibonacci primes have been discovered to date, probably because of the unified interest (i.e., GIMPS) in these numbers. GIMPS has made it so simple to look for them that even a child of elementary school age can participate in the search. Also, the monetary prizes that were attached in the past to the discovery of the first prime above a given magnitude, as well as the attractive bounties still in place today for the discovery of Mersenne primes of a hundred million digits or more and one billion digits or more, add further incentive to maintaining interest in these primes. To my knowledge, such is not the case for Fibonacci primes. The largest Fibonacci prime discovered to date has six digits in its index (104911); on the other hand, the most recently discovered Mersenne prime has eight digits in its exponent (82,589,933). Again, this is surely attributable to the concerted effort being waged for the discovery of the latter type. Finally, although the structure of the Mersenne primes is appealing because of its base (i.e., $2^p - 1$) and the ease with which they are converted into a binary representation consisting of $p$ ones, the Fibonacci primes are obtained from Binet's formula, which involves the golden mean. Hence, when studying Fibonacci numbers there are countless interesting things to try to prove and draw conclusions from; whereas, with Mersenne numbers, the big thing on the minds of those interested in them often seems to be just their primality.
Is it possible to prove a contradiction with natural deduction by negating the formula?
1. is a contradiction, because it is of the form $\varphi \land \neg \varphi$. 3. is a tautology, because it is of the form $\varphi \lor \neg \varphi$. Therefore, 2. is a tautology as well, since it is equivalent to 3. So all of that makes sense: if 1 is a contradiction, then 2, being the negation of 1, is a tautology. Now, as far as formally demonstrating these things: in many systems, pointing out that a statement is of the form $\varphi \land \neg \varphi$ is sufficient to prove that it is a contradiction. And in many systems, showing that a statement is of the form $\varphi \lor \neg \varphi$ is sufficient to show it is a tautology. Some systems, however, have an explicit contradiction symbol, $\bot$, and so to show that a statement is a contradiction you either show that it is equivalent to $\bot$, or derive $\bot$ from it (any statement from which a contradiction can be derived is a contradiction itself). In some systems you can go straight to $\bot$ from $\varphi \land \neg \varphi$ as either a rule of equivalence or a rule of inference, but in other systems you derive $\bot$ from a statement $\varphi$ and a second statement $\neg \varphi$. Likewise, some systems have an explicit symbol for the tautology, $\top$, and they will typically have an equivalence rule to go from $\varphi \lor \neg \varphi$ to $\top$. Unlike $\bot$, though, formal systems typically do not have an inference rule to derive $\top$, because a tautology logically follows from any statement, and thus the fact that a tautology can be inferred from some statement says nothing about that statement. Instead, many systems will demonstrate a statement to be a tautology by demonstrating that its negation is a contradiction. This is, of course, the proof-by-contradiction proof technique. Now, you actually do something very unusual: you negate statement 1, and show that the result is equivalent to a tautology. And yes, while that does indeed show that the original statement is a contradiction, there would be few systems that would demonstrate statements to be contradictions in this manner. As explained above, this only works if you work with equivalence rules only, and most such systems have a much more direct way to demonstrate a statement to be a contradiction. Indeed, if you are talking about natural deduction, then you are probably talking about a system of inference, and remember that if you end up inferring a tautology from the negation of some statement, then that tells us nothing about the nature of that negation, and thus nothing about the original statement either. So no, you typically do not demonstrate a statement to be a contradiction in natural deduction by negating it and seeing what happens. Or, put differently: while a proof by contradiction demonstrates a statement to be a tautology by showing that the negation of that statement leads to a contradiction, in natural deduction there is no parallel proof technique that shows a statement to be a contradiction by showing that its negation leads to a tautology.
How can I rewrite equations with conflicting indices?
I'm not from a computing background, but from a mathematical standpoint it seems to me that you would be right to use different variables, such as $f(k)$ instead of $f(n)$. Alternatively, you might distinguish the uses with subscripts, such as $k_T$ for temporal. Personally, I would double-check with your advisor too.
$\pi$ permutation decomposed in $k$ disjoint cycles of lengths $n_1, \dots, n_k$. Find the order of $\pi$
Since the order of an $n_i$-cycle is $n_i$, you need $\operatorname{lcm}(n_1,\dots,n_k)$. This is pretty much immediate, since disjoint cycles commute. For instance, the product of a disjoint $2$-cycle and $3$-cycle has order $\operatorname{lcm}(2,3)=6$.
Two descriptions of locally increasing functions, are they equivalent?
First, obviously $A$ must be a connected set (even if you assume $f$ to be as regular as you want, not just continuous); so it must be an interval - open or closed or half-open, half-closed, bounded or half-bounded or all of $\mathbb R$. NO: Let $f(x) = 0 \text{ for } x\le0$, $f(x) = 1 \text{ for } x\ge 1$, and $f(x) = x - 1/2^n$ for $x \in [1/2^n, 1/2^{n-1})$, $n \in \mathbb N$, $n \ge 1$. Then 2. is true for $a=0$, but 1. isn't. YES: for $f$ continuous. Take any $a$ and let $c = \sup \{b > a \mid f(x) \ge f(a)\ \forall x \in [a,b)\}$. Then $c$ is the right endpoint (finite or infinite) of $A$. Indeed, by continuity $f(a) \le f(c)$, and if $c$ is not already the right endpoint of $A$, apply condition 2. to $c$. Any new "$b$" greater than $c$ that satisfies condition 2. for $c$ will satisfy it for $a$ as well, but $c$ was supposed to be the $\sup$ - contradiction. This proves that $f$ is (globally) non-decreasing on all of $A$.
$p_n > 0$ and $p_{n+1} \ge p_n$. Prove $\sum \frac{ p_n-p_{n-1}}{p_np_{n-1}^a}$ is convergent where $a > 0$.
The case where $\{p_n\}$ is bounded and, hence, $p = \lim_{n \to \infty}p_n$ exists is relatively easy. We have $$\frac{p_n - p_{n-1}}{p_np_{n-1}^a} \leqslant \frac{1}{p_1p_0^a}(p_n - p_{n-1})$$ The series with terms on the RHS is telescoping. Thus, by the comparison test, $$\sum_{n=1}^\infty\frac{p_n - p_{n-1}}{p_np_{n-1}^a} \leqslant \frac{1}{p_1p_0^a}\sum_{n=1}^\infty(p_n - p_{n-1}) = \frac{1}{p_1p_0^a}(p - p_0)$$ On the other hand, if $\{p_n\}$ is unbounded, then $1/p_n \to 0$ as $n \to \infty$. Since $a > 0$, there exists a positive integer $q$ such that $1/q < a$. Eventually, $p_{n-1} > 1$ and $p_{n-1}^{1/q} < p_{n-1}^a$. Thus, there exists $m$ such that for all $n \geqslant m$ we have, $$\frac{p_n - p_{n-1}}{p_np_{n-1}^a} < \frac{p_n - p_{n-1}}{p_np_{n-1}^{1/q}} = \frac{1 - \frac{p_{n-1}}{p_n}}{1 - \frac{p_{n-1}^{1/q}}{p_n^{1/q}}}\left(\frac{1}{p_{n-1}^{1/q}}- \frac{1}{p_n^{1/q}} \right)$$ If we can show that there exists a constant $C$ such that $$\tag{*}\frac{1 - \frac{p_{n-1}}{p_n}}{1 - \frac{p_{n-1}^{1/q}}{p_n^{1/q}}} \leqslant C,$$ then we have convergence again by the comparison test, since in this case, $$\sum_{n=m}^\infty\frac{p_n - p_{n-1}}{p_np_{n-1}^a} \leqslant C\sum_{n=m}^\infty\left(\frac{1}{p_{n-1}^{1/q}}- \frac{1}{p_n^{1/q}} \right) = C\left(\frac{1}{p_m^{1/q}} - \lim_{n \to \infty} \frac{1}{p_n^{1/q}} \right) = \frac{C}{p_m^{1/q}}$$ Proof of inequality (*) Take $x = p_{n-1}^{1/q}/p_n^{1/q} = (p_{n-1}/p_n)^{1/q}$ and note that $0 < x \leqslant 1$. We have, $$(1-x)q \geqslant (1-x)\sum_{k=0}^{q-1}x^k = 1 - x^q,$$ which implies $$\frac{1 - \frac{p_{n-1}}{p_n}}{1 - \frac{p_{n-1}^{1/q}}{p_n^{1/q}}}= \frac{1- x^q}{1-x} \leqslant q$$ Therefore, the inequality (*) holds with $C = q$.
Minimum distance between closed sets.
Let one set be $\mathbb{Z}$. Let the other one be $\{i+1/(i+1);\,i\in\mathbb{Z},i>0\}$. The main point here is that the two closed sets must both be unbounded to get what you want; otherwise one of them is compact (closed and bounded), and the minimum distance is attained. (I'm talking about subsets of $\mathbb{R}$ here.)
The Range of Taxi Fares
The answer was a minimum of $2.50+\max(0.50\cdot 60t,\ 2.50\cdot m)$ and a maximum of $2.50+0.50\cdot 60t+2.00\cdot m$.
Calculate exactly and as $N \rightarrow \infty$ of $\sum_{k=1}^{\left\lfloor{N/2}\right\rfloor} \left[{\sqrt{{N}^{2}+{k}^{2}} \in \mathbb{Z}}\right]$
This is a hyperbolic version of the Gauss circle problem (see also Chapter 8 of my notes). By mimicking Gauss' approach it is pretty simple to prove that $$ \sum_{k\leq N/2}\lfloor \sqrt{N^2+k^2}\rfloor = \int_{0}^{N/2}\sqrt{N^2+x^2}\,dx + O\left(\int_{0}^{N/2}\sqrt{\frac{N^2+2x^2}{N^2+x^2}}\,dx\right) $$ i.e. that $$ \sum_{k\leq N/2}\lfloor \sqrt{N^2+k^2}\rfloor = \left(\frac{\sqrt{5}}{8}+\frac{1}{2}\log\frac{1+\sqrt{5}}{2}\right)N^2+O(N). $$ Improving the error term beyond $O(N)$ is tricky but doable due to the smoothness of $\sqrt{N^2+x^2}$. By mimicking Voronoi's approach we get $$ \sum_{k\leq N/2}\lfloor \sqrt{N^2+k^2}\rfloor = \left(\frac{\sqrt{5}}{8}+\frac{1}{2}\log\frac{1+\sqrt{5}}{2}\right)N^2+O\left(N^{2/3}\right). $$ This is almost the best we can get. As in the Gauss circle problem, it can be shown that the error term is $\gg N^{1/2}$ for infinitely many $N$.
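A quick numerical check of the leading term (a sketch; the constant is $\frac{\sqrt5}{8}+\frac12\log\frac{1+\sqrt5}{2}\approx 0.5201$):

```python
import numpy as np

C = np.sqrt(5) / 8 + 0.5 * np.log((1 + np.sqrt(5)) / 2)
for N in (100, 1000, 10000):
    k = np.arange(1, N // 2 + 1)
    S = np.sum(np.floor(np.sqrt(N ** 2 + k ** 2.0)))
    print(N, S / N ** 2, C)          # the ratio S / N^2 approaches C
```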
Properties of rational function of two monotone functions
No. Here is a counterexample: $$ f(x)=e^{2x+\sin(x)}\quad g(x)=e^{2x-\sin(x)+\log(1+x^2)},$$ for $x>0$. It is easy to check that both are monotone increasing, and that their ratio is not.
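A quick numerical confirmation (the ratio simplifies to $e^{2\sin x}/(1+x^2)$, which oscillates):

```python
import numpy as np

x = np.linspace(0.01, 10, 2000)
f = np.exp(2 * x + np.sin(x))
g = np.exp(2 * x - np.sin(x) + np.log(1 + x ** 2))

print(np.all(np.diff(f) > 0), np.all(np.diff(g) > 0))  # True True: both increasing
r = f / g                                              # = exp(2 sin x) / (1 + x^2)
print(np.all(np.diff(r) > 0), np.all(np.diff(r) < 0))  # False False: not monotone
```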
Finite integral of convolution
For $g$ bounded and supported on $t > 0$, and $tf \in L^1$, the integral $ \int_{-\infty}^t \int_{-\infty}^\infty f(u)g(v-u)\,du\,dv$ converges absolutely, thus you can swap the order of integration: $= \int_{-\infty}^\infty \int_{-\infty}^\infty f(u) 1_{v< t}\, g(v-u)\,du\,dv=\int_{-\infty}^\infty \int_{-\infty}^\infty f(u) 1_{v< t}\, g(v-u)\,dv\,du$ $=\int_{-\infty}^\infty \int_{-\infty}^\infty f(u) 1_{w< t-u}\, g(w)\,dw\,du= \int_{-\infty}^\infty f(u) G(t-u)\,du$ where $G(t) = \int_{-\infty}^t g(u)\,du$. If only $f \in L^1$ but $g(t) = \Theta(t)C e^{iat}$, then $G$ is bounded; replace the integral by a series to obtain something absolutely convergent and to swap $\int$ and $\sum$.
Given the sequence $x_{n+1}=x_n + \frac{2}{x_n}$ and $x_0 = 1$, find $\lim\limits_{n \to \infty} \frac{x_n}{\sqrt{n}}$
Your first part is not rigorous. When you write "let $\lim_{n\to\infty} x_n=a$" you have made an implicit assumption that the limit exists, and unless this assumption is justified the approach can't be considered rigorous. Also, one can't write $a=\pm\infty$. First note that the sequence consists of positive terms and is increasing. Therefore it either tends to a limit or to $\infty$. If it tends to a limit $L$ then we must have $L\geq x_0=1$, and taking the limit in the recurrence relation we get $L=L+(2/L)$, which can't hold. Thus $x_n\to\infty$. For the next part use the hint given in the comments. We have $$x_{n+1}^2=x_n^2+4+\frac{4}{x_n^2}$$ so that $$x_{n+1}^2-x_n^2\to 4$$ By Cesàro-Stolz we have $x_n^2/n\to 4$ and hence $x_n/\sqrt{n} \to 2$.
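A quick numerical check of the conclusion (not a substitute for the Cesàro-Stolz argument):

```python
import math

x = 1.0
n = 10 ** 6
for _ in range(n):
    x += 2.0 / x
print(x / math.sqrt(n))              # ~2.0000, consistent with the limit
```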
From the result of Hilbert's tenth problem, does it follow that there exist infinitely many equations...
There are algorithms for determining the solvability in integers of certain classes of Diophantine equations. For example, we can deal quite easily with linear equations in arbitrarily many variables, and there are infinitely many of those. There is also an algorithm for quadratic Diophantine equations in arbitrarily many variables, though this is less obvious. The answer for cubics is not known. There is no general algorithm for determining the solvability of quartic equations in arbitrarily many variables. There is a polynomial $P(w,x_1,x_2,\dots,x_n)$ with integer coefficients such that there is no algorithm that will determine, given input $w$, whether there are integers $x_1,x_2,\dots,x_n$ such that $P(w,x_1,x_2,\dots,x_n)=0$. As a consequence of this, there are infinitely many $w$ such that the equation has no solution, but the fact that it has no solution is not provable in first-order Peano arithmetic. A similar remark could be made about ZFC. I do not think that this result can be equated with "impossible to determine," but it goes some distance toward that. The above results are very beautiful. But they do not necessarily have a direct connection with classical questions in Diophantine equations, such as questions about elliptic curves. The work on Hilbert's $10$th problem in fact produced interesting number-theoretic results, and generated new number-theoretic problems. It has enriched Number Theory, and in no way diminished it.
How do I find the $\theta$ in this isosceles triangle?
Let $AC=y$ and $EC=x$. Hence, from $\Delta ABC$ we obtain $$\sin\frac{\theta}{2}=\frac{x+y}{2y}$$ and by the law of sines for $\Delta AEB$ we obtain $$\frac{y}{\sin(50^{\circ}+\theta)}=\frac{x+y}{\sin50^{\circ}},$$ which gives $$2\sin\frac{\theta}{2}\sin(50^{\circ}+\theta)=\sin50^{\circ},$$ which gives $$\theta=100^{\circ}$$ (indeed, for $\theta=100^{\circ}$ the left-hand side is $2\sin50^{\circ}\sin150^{\circ}=2\sin50^{\circ}\cdot\tfrac12=\sin50^{\circ}$).
Problem 5.3 Character theory of finite groups (M. Isaacs)
Recall that given a class function $f$ of $H$, the induced class function $f^G$ is defined as $g \mapsto \sum_{s\in S} f_0(s^{-1}gs)$, where $S$ is a set of representatives for the cosets in $G/H$, and $f_0(h)=f(h)$ if $h\in H$ and $0$ otherwise. Now applying this to $\varphi \psi_H$, we get $g \mapsto \sum_{s\in S} \varphi_0(s^{-1}gs)(\psi_H)_0(s^{-1}gs)$. Notice that $\varphi_0(s^{-1}gs)(\psi_H)_0(s^{-1}gs)=\varphi_0(s^{-1}gs)\psi_H(s^{-1}gs)$ (just check the cases $s^{-1}gs\in H$ or $\not \in H$). In turn, $\psi_H(s^{-1}gs)=\psi(s^{-1}gs)$, and since $\psi$ is a class function for $G$, this equals $\psi(g)$. It follows that you can factor out $\psi(g)$ on the right, so you get $(\varphi \psi_H)^G=\varphi^G\psi$.
What is the smallest element $a$ in $\mathbb{Z}/36\mathbb{Z}$ such that $\langle21\rangle=\langle a\rangle$?
For every element $a\in\mathbb{Z}_n$ we have $\langle a\rangle=\langle \gcd(a,n)\rangle$. I'll give a short proof. Let $d=\gcd(a,n)$. $d\mid a$, so $a\in\langle d\rangle$, and that implies $\langle a\rangle\leq \langle d\rangle$. As for the other direction, note that there are integers $k,l\in\mathbb{Z}$ such that $d=ka+ln$. If we write this equation in $\mathbb{Z}_n$ then we get $d=ka$. Hence $d\in\langle a\rangle$, and that implies $\langle d\rangle\leq \langle a\rangle$. So $\langle d\rangle=\langle a\rangle$. Now, in your specific example $\gcd(21,36)=3$, so $\langle 21\rangle=\langle 3\rangle$. So the $a$ you need to find is at most $3$. It is easy to check that smaller elements of $\mathbb{Z}_{36}$ generate different subgroups (look at their orders).
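A brute-force check of the specific example (a sketch in Python):

```python
n = 36
subgroup = lambda a: {a * k % n for k in range(n)}

print(subgroup(21) == subgroup(3))                           # True: <21> = <3>
print(len(subgroup(1)), len(subgroup(2)), len(subgroup(3)))  # 36 18 12
# 0, 1 and 2 generate different subgroups, so a = 3 is the smallest choice
```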
Prove continuity of $T(\varphi)=\int_\Omega f\varphi +\int_{\partial \Omega } g\varphi$
Hint: The trace operator is continuous $H^1(\Omega )\longrightarrow H^{1/2}(\partial \Omega )$.
Solution to $f(x)=g(y)$
Suppose that $h(x,y)$ is some function such that $h(x,y)=f(x)=g(y)$ for all $x$ and $y$. Label the value $h(0,0)$ with the letter $c$. Then for an arbitrary point $(s,t),$ $$h(s,t)=f(s)=h(s,0)=g(0)=h(0,0)=c.$$
Trouble expanding a Taylor series about a point other than zero using geometric series
You can expand $2\log(t+4)$ around $t=0$ and then plug in $t = x-2$.
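Concretely (a sketch of the geometric-series route, assuming the target is $2\log(x+2)$ expanded about $x=2$): for $|t|<4$, $$\frac{2}{t+4}=\frac{1/2}{1+t/4}=\frac12\sum_{n=0}^{\infty}\left(-\frac{t}{4}\right)^n,$$ and integrating term by term gives $$2\log(t+4)=2\log 4+\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n\,2^{2n-1}}\,t^n,$$ into which one substitutes $t=x-2$.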
Question regarding Notes on Strong Markov Property
Well, in the first case, when you consider a Brownian motion $B_t$, the following is usually understood: You are considering a background probability triple $(\Omega,\mathcal{F},P)$, with $\Omega$ being some set, $\mathcal{F}$ being some $\sigma$-algebra and $P$ being some probability measure, and you consider a measurable mapping $B:\Omega\to C[0,\infty)$ such that the distribution of $B$ - technically, the pushforward measure $B(P)$ - is the Brownian motion distribution (given by a specification of its Gaussian finite-dimensional distributions). When you define $B^x_t = x + B_t$, you are defining this mapping on $\Omega$, that is, you define $$ B^x : \Omega \to C[0,\infty), \\ B^x_t(\omega) = x + B_t(\omega). $$ You now have a single probability space $(\Omega,\mathcal{F},P)$ endowed with many mappings $B^x$. The other case you describe, with the probability measures $P^x$ and so forth, is the following. Consider the set $\Omega' = C[0,\infty)$. Let $w:\Omega'\to\Omega'$ be the identity mapping. For each $t\ge0$, $w_t$ is then a mapping from $\Omega'$ to $\mathbb{R}$, the projection mapping corresponding to the index $t$. We define a $\sigma$-algebra $\mathcal{F}'$ by letting $\mathcal{F}'$ be the smallest $\sigma$-algebra making each $w_t$ measurable. Now let $P^x$ be a probability measure on $\Omega'=C[0,\infty)$, specifically, let $P^x$ be the distribution of Brownian motion starting in $x$. This is a probability measure which is uniquely specified through its finite-dimensional distributions $$ P^x(w_{t_1}\in A_1,\ldots,w_{t_n}\in A_n) $$ and whose existence, by the way, is rather non-trivial. For each $x$, the probability measure $P^x$ is a probability measure on $(\Omega',\mathcal{F}')$. Thus, you now have a measure space with one actual mapping $w:\Omega'\to\Omega'$ but with many measures $P^x$. For each $x$, the distribution of $w$ relative to the probability triple $(\Omega',\mathcal{F}',P^x)$ is $w(P^x) = P^x$ (the equality holds as $w$ is the identity), thus, the distribution is the Brownian motion starting in $x$. In other words, in this case, you have a construction where you have "one process $w$" and many probability measures on the space where $w$ is defined, each giving rise to a different distribution of $w$. All this may seem like a rather bizarre construction. However, it can be convenient when working with distributional properties of stochastic processes, as the space $\Omega'$, and not the abstract, somewhat unspecified space $\Omega$, is where the stochastic process distributions really "live". This methodology is particularly useful in the context of, for example, Markov processes and marked point processes.
Intuition for Artin's lemma
I was recently trying to gain intuition for the same proof, so I will share my findings on this question which has remained unanswered for a few years now. My starting point was to read Noam Elkies' version. After reading it, my feeling is that there is really only one trick being used: arbitraging between symmetry of an object, and the size of the space it lives in. You can think of it like a fancy version of an averaging procedure. A physicist might even describe this as an inverse of symmetry breaking. I'll employ a trick novelists use and start in medias res, halfway through Elkies' argument: In fact, we show that in every nonzero $E$-vector subspace of $E^m$ that is invariant under $G$, a minimal-weight vector is proportional to one in $F^m$. Without even knowing what "minimal-weight" means, we can already see what this claim is saying: we can go from a big space with a little bit of symmetry into a little space with lots of symmetry - swapping size for symmetry. Indeed, elements of $F$ are nothing but highly symmetric elements of $E$, and the little space I am referring to is the $1$-dimensional $E$-vector space spanned by any such minimal-weight vector. The only property of $m$ used in the proof of the highlighted claim is that it is finite - for the proof uses a variation of Fermat's method of descent. We use the weight to track how "large" an element of $E^m$ is. Each symmetry (i.e. element of $G$) corresponds to an operation which reduces the weight of a vector $b\in E^m$ (after normalizing $b$ such that one of its entries equals $1$). The operation returns zero only when we have satisfied the corresponding symmetry. Since the weight can only decrease finitely many times before hitting zero, if we stop this procedure right before this happens we will have found an element with the desired amount of symmetry. Great, so now we should have some intuition about why the highlighted claim is true (and the full details are spelled out precisely in the page I linked). But how is this related to the original question? Remember, we are trying to prove an upper bound on the dimension of a vector space. This means we are trying to find lots of linear dependencies between its elements. The only thing we know about the base field is that it is highly symmetric, so we had better find a way of exploiting symmetry. Here we will use the same idea as before, to trade off size vs symmetry. We are trying to find a non-trivial $F$-linear dependence between $m$ arbitrary elements of $E$, where $m>|G|$. Equivalently, we are trying to find a non-zero element in the kernel of an arbitrary $F$-linear map from $F^m$ to $E$. But this map is not symmetric, so we have to find some way of introducing symmetry - at the cost of making the map higher dimensional. The new map goes from $F^m$ to $E^{|G|}$, so it can be thought of as a collection of maps from $F^m$ to $E$ indexed by elements of $G$, where the map corresponding to $g\in G$ is obtained from our original map by applying $g$ to all of its coefficients. We've taken a single equation (which should be easy to satisfy, but hard to solve explicitly since it is asymmetric) and replaced it by a family of equations (which in principle is harder to solve since it is more constrained, but whose solution set is highly symmetric). We can guarantee non-trivial solutions (for dimensional reasons) by extending the map from $F^m\to E^{|G|}$ to $E^m\to E^{|G|}$ - it has the same coefficients, but now ranging over $E$. 
We can apply the highlighted claim to the solution set of this extended map, obtaining a non-trivial highly symmetric element. Due to its symmetries, it will actually belong to the kernel of the $F^m\to E^{|G|}$ map, which in particular yields the desired linear dependence (by considering the equation corresponding to the identity element of $G$), and hence the dimension bound. In summary, my intuition for this result is that "size can be exchanged for symmetry". There are many ways to formulate such a vague principle, but in particular Elkies found a nice formulation (the highlighted quote above) and a slick proof (using Fermat descent). And once you know what you are looking for, you can more or less follow your nose to reduce the original problem to here.
boys and girls arrangement in a row
Hint: If there are two girls in front (GGBBB) then the case is excluded. If exactly one boy and one girl are in front of a girl, then the case is excluded, i.e., any case starting with (BGG) or (GBG). Examples of favorable cases include: (BBBGG), (BBGBG), (BBGGB). If you notice, as long as there is a string 'BB' somewhere and 'GG' does not start the queue, then the case is counted.
Show that $\lambda^2$ is an eigenvalue of $A^2$.
The first proof is correct. The other one works only if $A$ is diagonalizable.
Paper from 1969 not available ? is it possible?
If the university has access to Springer, then more or less recent papers (say, up to 20 years old; I don't remember the details) are available to download from the university domain. Older papers are paid. My university has such access. Of course, I was also surprised that old articles are impossible to download without extra money.
Am I correct regarding this probability exercise
I think you're almost there. For the first question, you might consider balls 1, 2, 3 as one "big ball". Then you would have $n-2$ total balls. There are $(n-2)!$ ways to arrange these balls and precisely 1 way to rearrange the balls in the "big ball". You can get the probability from here. For the second exercise, you seem to have the right answer and justification. My reasoning would be that we have $n \choose 3$ ways to choose the positions of 1, 2, and 3. There is only one way to arrange these balls. Then there are $(n-3)!$ ways to rearrange the rest of the balls, giving the answer you provided.
Finding the distribution with greatest Hellinger distance
Note $$H^2(p,q) = 1 - \sum_i \sqrt{p_i q_i}.$$ So we want to choose $q$ to minimize $\sum_i \sqrt{p_i q_i}$, with the constraint $\sum_i q_i=1$. If we rewrite this as $x_i=\sqrt{p_i}$ and $y_i=\sqrt{q_i}$, note that we are simply finding $y$ to minimize $x^\top y$ subject to $\|y\|_2^2=1$ and $y_i \ge 0$. Geometrically, we are finding the unit vector $y$ in the nonnegative orthant $\{v:v_i \ge 0\}$ that makes the largest angle with $x$. Intuitively, $y$ should be a standard basis vector (all zeros except one component). If this intuition holds, then $y$ should be the indicator vector for the smallest component of $x$. In terms of $p$ and $q$, this means $q$ should have all its mass on the least likely outcome under $p$, as you suspected. Let us verify the "intuition" above by proving the minimization more explicitly. The Lagrangian is \begin{align} \sum_i x_i y_i - \sum_i \lambda_i y_i + \mu\left(\sum_i y_i^2 - 1\right) . \end{align} The KKT conditions imply that for an optimal $y$ there exist $\mu \in \mathbb{R}$ and $\lambda_i \ge 0$ such that the following hold. \begin{align} x_i-\lambda_i + 2 \mu y_i &=0,\\ \lambda_i y_i &= 0. \end{align} So, if $y_i \ne 0$, then $\lambda_i=0$ and thus $y_i = -\frac{x_i}{2\mu}$. Also, $1 = \sum_i y_i^2 = \frac{1}{4\mu^2} \sum_{i : y_i \ne 0} x_i^2$ implies $\mu^2 = \frac{1}{4} \sum_{i:y_i \ne 0} x_i^2$. Finally, $0 \le y_i = - \frac{x_i}{2\mu}$ implies $\mu = -\frac{1}{2}\sqrt{\sum_{i:y_i \ne 0} x_i^2}$. Then, the quantity to minimize can be rewritten as $$\sum_i x_i y_i = -\frac{1}{2\mu}\sum_{i : y_i \ne 0} x_i^2 = \sqrt{\sum_{i:y_i \ne 0} x_i^2}.$$ To make this right-hand side quantity as small as possible, we need many of the $y_i$ to equal zero. The best we can do is have all but one be zero (since we need $\sum_i y_i^2=1$) so that the sum $\sum_{i:y_i \ne 0} x_i^2$ consists of only one term, and in particular we should make this term be the smallest of the $x_i$.
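A quick numerical check of this conclusion (a sketch; a random search over the simplex should never beat the point mass on $\arg\min_i p_i$):

```python
import numpy as np

rng = np.random.default_rng(1)
p = rng.dirichlet(np.ones(6))

H2 = lambda p, q: 1 - np.sum(np.sqrt(p * q))

q_star = np.zeros_like(p)
q_star[np.argmin(p)] = 1.0           # all mass on the least likely outcome

best_random = max(H2(p, rng.dirichlet(np.ones(6))) for _ in range(100_000))
print(H2(p, q_star), best_random)    # the first value is the larger one
```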
Recurrence equation similar to a geometric progression
Note that being able to find the general solution of $T(i)=\sqrt{T(i-1)T(i+1)}$ is just luck: $T(i)=\sqrt{T(i-1)T(i+1)}$ $(T(i))^2=T(i-1)T(i+1)$ Let $T(i)=e^{U(i)}$. Then $T(i-1)=e^{U(i-1)}$, $T(i+1)=e^{U(i+1)}$ $\therefore(e^{U(i)})^2=e^{U(i-1)}e^{U(i+1)}$ $e^{2U(i)}=e^{U(i-1)+U(i+1)}$ $2U(i)=U(i-1)+U(i+1)$ $U(i+1)-2U(i)+U(i-1)=0$ $U(i)=\Theta_1(i)i+\Theta_2(i)$, where $\Theta_1(i)$ and $\Theta_2(i)$ are arbitrary periodic functions with unit period $\therefore T(i)=e^{\Theta_1(i)i+\Theta_2(i)}$, where $\Theta_1(i)$ and $\Theta_2(i)$ are arbitrary periodic functions with unit period. It does not mean that $T(i)=\sqrt{T(i-1)\left(T(i+1)+k\right)}$ is as lucky as $T(i)=\sqrt{T(i-1)T(i+1)}$.
Two ways of approximating a function?
By direct computation, $$V(x_0+h)=\frac{(x_0+h)^3}3-l^2(x_0+h)=V(x_0)+x_0^2h-l^2h+x_0h^2+\frac{h^3}3$$ and by Taylor, $$V(x_0+h)=V(x_0)+(x_0^2-l^2)h+2x_0\frac{h^2}2+2\frac{h^3}{3!}.$$ The two expressions coincide.
Is finding the maximum of a polynomial of degree one a linear programming problem?
Your objective is a piecewise linear convex function, and maximization of such cannot be expressed as a linear program. We can, however, use binary variables to accomplish this. Let me point out that you can break your model apart into the $x,y$ variables and $u,v$ variables, because they are not coupled. Furthermore, the small size of the problem means that you could in theory just enumerate the vertices $x,y,u,v\in\{0,1\}$ and try them all. Thanks to the decoupling, that's just 4 tests for $x,y$, and 4 tests for $u,v$ (a quick sketch of this enumeration appears at the end of this answer). So there's really no point in going through all the trouble to convert this problem into a mixed-integer problem, which we're about to do. I am going to do so anyway, however, in case this is actually just a simplification of a more complex model. First of all, let's do some simplification. Note that $$ \begin{aligned} &\max\left\{\max\left\{a_1x+a_2y,b_1x+b_2y\right\}-(c_1x+c_2y),0\right\} \\ &\qquad=\max\left\{\max\left\{a_1x+a_2y,b_1x+b_2y\right\},c_1x+c_2y\right\}-(c_1x+c_2y) \\ &\qquad=\max\{a_1x+a_2y,b_1x+b_2y,c_1x+c_2y\}-(c_1x+c_2y) \end{aligned} $$ If we repeat this process for the other term, and insert the result into your model, we get this: $$\begin{array}{ll} \text{maximize} &\max\{a_1x+a_2y,b_1x+b_2y,c_1x+c_2y\}-2(c_1x+c_2y) \\ & + \max\{\alpha_1u+\alpha_2v,\beta_1u+\beta_2v, \gamma_1u+\gamma_2v\}-2(\gamma_1u+\gamma_2v) \\ \text{subject to} & 0 \le x,y,u,v \le 1 \end{array}$$ Now let's linearize the objective: $$\begin{array}{ll} \text{maximize} &z_1-2(c_1x+c_2y)+z_2-2(\gamma_1u+\gamma_2v) \\ \text{subject to} & 0 \le x,y,u,v \le 1 \\ & \max\{a_1x+a_2y,b_1x+b_2y,c_1x+c_2y\} \geq z_1 \\ & \max\{\alpha_1u+\alpha_2v,\beta_1u+\beta_2v, \gamma_1u+\gamma_2v\} \geq z_2 \end{array}$$ So the question is this: how to handle an inequality of this form: $$\max\{x_1,x_2,x_3\}\geq x_4$$ What we do is introduce three binary variables $s_1,s_2,s_3$ (named $s_i$ to avoid a clash with the coefficients $b_1,b_2$), and then do this: $$\begin{gathered} x_1 + s_1 M \geq x_4, \quad x_2 + s_2 M \geq x_4, \quad x_3 + s_3 M \geq x_4 \\ s_1 + s_2 + s_3 = 2, \quad s_1,s_2,s_3\in\{0,1\} \end{gathered}$$ where $M$ is a very large number. When $s_i=1$, the term $+s_i M$ makes the inequality trivial to satisfy (assuming $M$ is large enough). The equation $s_1+s_2+s_3=2$ ensures that exactly two of the values of $s_i$ are one. Here are a couple of things I'll leave you to prove to yourself: If you use $s_1+s_2+s_3\leq 2$ instead of $s_1+s_2+s_3=2$, this transformation will still work. If other constraints in the problem ensure that $L\leq x_1,x_2,x_3\leq U$, then choosing any $M\geq U-L$ will be sufficient. In practice, you want $M$ as small as possible while ensuring equivalence, so this knowledge helps. Applying this approach to your problem yields the following model: $$\begin{array}{ll} \text{maximize} &z_1-2(c_1x+c_2y)+z_2-2(\gamma_1u+\gamma_2v) \\ \text{subject to} & 0 \le x,y,u,v \le 1 \\ & a_1x+a_2y+s_1 M_1 \geq z_1 \\ & b_1x+b_2y+s_2 M_1 \geq z_1 \\ & c_1x+c_2y+s_3 M_1 \geq z_1 \\ & \alpha_1u+\alpha_2v+s_4 M_2 \geq z_2 \\ & \beta_1u+\beta_2v+s_5 M_2 \geq z_2 \\ & \gamma_1u+\gamma_2v+s_6 M_2 \geq z_2 \\ & s_1+s_2+s_3\leq 2 \\ & s_4+s_5+s_6\leq 2 \\ & s_1,s_2,s_3,s_4,s_5,s_6\in\{0,1\} \end{array}$$ where $M_1$ and $M_2$ are constants that satisfy $$M_1\geq \max\{a_1+a_2,b_1+b_2,c_1+c_2\}, \quad M_2\geq \max\{\alpha_1+\alpha_2,\beta_1+\beta_2,\gamma_1+\gamma_2\}.$$ When I originally read this problem, I misread the model as a minimization, not a maximization.
If it is a minimization, then we do not need the binary variables at all; the problem can, in fact, be converted to an LP. I'm providing the equivalent LP here without derivation. $$\begin{array}{ll} \text{minimize} &z_1 - 2(c_1x+c_2y) + z_2-2(\gamma_1u+\gamma_2v) \\ \text{subject to} & 0 \le x,y,u,v \le 1 \\ & a_1x+a_2y \leq z_1 \\ & b_1x+b_2y \leq z_1 \\ & c_1x+c_2y \leq z_1 \\ & \alpha_1u+\alpha_2v \leq z_2 \\ & \beta_1u+\beta_2v \leq z_2 \\ & \gamma_1u+\gamma_2v \leq z_2 \end{array}$$
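For completeness, here is the decoupled vertex enumeration mentioned at the start (a sketch with hypothetical coefficient values; the $u,v$ half is handled identically):

```python
from itertools import product

# Hypothetical coefficients, for illustration only
a1, a2, b1, b2, c1, c2 = 3, 1, 1, 4, 2, 2

def term(x, y):
    inner = max(a1 * x + a2 * y, b1 * x + b2 * y)
    return max(inner - (c1 * x + c2 * y), 0)

# The term is convex in (x, y), so its maximum over the unit square
# is attained at one of the four vertices
print(max((term(x, y), (x, y)) for x, y in product((0, 1), repeat=2)))
```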
A good pictorial explanation of separation of variables?
I urge you not to teach your students to multiply by $dx$, as this can cause conceptual problems for weaker students in the future. Specifically, $dx$ is not a real number, but represents part of a limiting process. I recommend that you stress the application of the chain rule of differentiation. Specifically, given \begin{equation} f(x) = g(y(x)) y'(x) \end{equation} I would instead stress the need to find a function $G$ such that \begin{equation} \frac{dG}{dy}(y) = g(y) \end{equation} so that the chain rule can be applied, collapsing the equation to \begin{equation} f(x) = \frac{d}{dx} \left[ G(y(x)) \right] = g(y(x))y'(x) \end{equation} which will motivate your students to find an antiderivative for $f$.
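For instance (a small worked example of this viewpoint): the equation $y\,y'=x$ has $g(y)=y$, so take $G(y)=y^2/2$; the chain rule collapses the equation to \begin{equation} x=\frac{d}{dx}\left[\frac{y(x)^2}{2}\right], \end{equation} and integrating both sides in $x$ gives $y^2=x^2+C$, with no manipulation of $dx$ as a free-standing object.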
How is Nakagami function derived from Gamma function?
It is very simple. Let's start from $$f_Y(y)=\frac{1}{\theta^k\Gamma(k)}y^{k-1}e^{-y/\theta}$$ Let's set $$x=\sqrt{y}$$ that is, $$y=x^2$$ with derivative $y'=2x$. Immediately you get $$f_X(x)=\frac{1}{\theta^k\Gamma(k)}x^{2k-2}e^{-x^2/\theta}2x=\frac{2}{\theta^k\Gamma(k)}x^{2k-1}e^{-x^2/\theta}$$ Fundamental Transformation Theorem: remember that, if the transformation $y=g(x)$ is monotone, the density of $Y$ can be derived by simply transforming the density of $X$ with the following formula $$f_Y(y)=f_X[g^{-1}(y)]\Bigg|\frac{d}{dy}g^{-1}(y)\Bigg|$$
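A quick simulation check of the computation (a sketch: sample $Y\sim\Gamma(k,\theta)$, set $X=\sqrt{Y}$, and compare a histogram with the derived $f_X$):

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(0)
k, theta = 2.5, 1.3
x = np.sqrt(rng.gamma(shape=k, scale=theta, size=10 ** 6))

hist, edges = np.histogram(x, bins=200, range=(0, 6), density=True)
c = (edges[:-1] + edges[1:]) / 2                   # bin centers
fx = 2 / (theta ** k * gamma(k)) * c ** (2 * k - 1) * np.exp(-c ** 2 / theta)

print(np.max(np.abs(hist - fx)))                   # small: Monte Carlo noise only
```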
are elementary symmetric polynomials concave on probability distributions?
(I know this question is ancient, but I happened to run into it while looking for something else.) While I am not sure if $S_{n,k}$ is concave on the probability simplex, you can prove the result you want and many other similar useful things using Schur concavity. A sketch follows. A vector $y\in \mathbb{R}_+^n$ majorizes $x \in \mathbb{R}_+^n$ if the following inequalities are satisfied: $$ \sum_{j=1}^i{x_{(j)}} \leq \sum_{j=1}^i{y_{(j)}} $$ for all $i$, and $\sum_{i=1}^n x_i = \sum_{i=1}^n y_i$. Here $x_{(j)}$ is the $j$-th largest coordinate of $x$ and similarly for $y$. Let's write this $x \prec y$. For intuition it's useful to know that $x \prec y$ if and only if $x$ is in the convex hull of vectors you get by permuting the coordinates of $y$. A function is Schur-concave if $x \prec y \implies f(x) \geq f(y)$. A simple sufficient condition for Schur concavity is that $\partial f(x)/\partial x_i \ge \partial f(x)/\partial x_j$ whenever $x_i \le x_j$. It is easy to verify that $S_{n,k}$ satisfies this condition for any $n$, $k$. Notice that $x=(1/n, \ldots, 1/n)$ is majorized by every vector $y$ in the probability simplex. You can see this for example by noticing that the average sum of $i$ randomly chosen coordinates of $y$ is $i/n$, so surely the sum of the $i$ largest coordinates is at least as much. Equivalently, $x$ is the average of all permutations of $y$. This observation, and the Schur concavity of $S_{n,k}$, imply $S_{n,k}(x) \ge S_{n,k}(y)$. In fact, $S_{n,k}^{1/k}$ is concave on the positive orthant, and this implies what you want. This is itself a special case of much more powerful results about the concavity of mixed volumes. But the Schur concavity approach is elementary and pretty widely applicable.
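A quick numerical check of the consequence (a sketch; $S_{n,k}$ evaluated by brute force as the elementary symmetric polynomial $e_k$):

```python
import numpy as np
from itertools import combinations

def S(x, k):
    # elementary symmetric polynomial e_k(x), by brute force
    return sum(np.prod(c) for c in combinations(x, k))

rng = np.random.default_rng(0)
n, k = 6, 3
uniform = np.full(n, 1 / n)
assert all(S(uniform, k) >= S(rng.dirichlet(np.ones(n)), k) for _ in range(1000))
```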
Summation of series- substitution
Let $f(n)=0$ for $n>4$. Then what we have to solve is: $$\left\{\begin{matrix} (0\cdot f(0)+1\cdot f(1))+2f(2)+3f(3)+4f(4)&=&C \\ (f(0)+f(1))+f(2)+f(3)+f(4)&=&K \end{matrix}\right.$$ These simultaneous equations have infinitely many solutions $(f(2),f(3),f(4))$. What I want to say is, can $f(n)$ be determined uniquely?
Problem: Prove that in a set of 17 people, where all talk to each other in one of 3 languages, there exists a threesome...
Pick a person, $X$, at random and divide the other $16$ up by the language in which they converse with that person. One of those groups must have at least $6$ people in it. Wlog let's say that common language is English. Now if any two of those $6+$ speak to each other in English we are done, so assume they only use the other two languages between them. Take a person $Y$ from the group and divide the remaining $5+$ up according to the language they speak with $Y$. One of those two groups must have at least $3$ people in it. Let's say those $3+$ all speak Russian with $Y$. If any two of the $3+$ speak Russian to each other, we are done. But if they all speak French to each other we are also done.
Is it possible to plot a function with a vertical tangent line while the plot of the function has no vertical line segment?
Yes. For example, $f(x)=x^{1/3}$ (interpreted as a real-valued function of a real variable, not complex) has a vertical tangent at $x=0$, while its graph contains no vertical line segment.
Superadditive but not convex function
We can assume that $x+y\leq 1-y$. Then consider $$f(x)=\begin{cases}\frac{3}{2}x, & 0\leq x\leq \frac{1}{3},\\ \frac{1}{2}, & \frac{1}{3} <x<\frac{2}{3},\\ \frac{3}{2}x-\frac{1}{2}, & \frac{2}{3}\leq x\leq 1.\end{cases}$$
Closed Convex Subsets of $\Bbb R^2$; Find them all!
I think that the other four examples are: The closed upper half-plane $ \mathbb{R} \times [0,\infty) $. The line $ \mathbb{R} \times \{ 0 \} $. The ray $ [0,\infty) \times \{ 0 \} $. The infinite strip $ [-1,1] \times \mathbb{R} $.
Covariance of geometric Brownian motion
Assuming that $W_0 = 0$, \begin{align} E[e^{W_s}e^{W_t}] &= e^{\frac{t}{2}}E[e^{W_s}e^{W_t - \frac{t}{2}}] \\ &= e^{\frac{t}{2}}E[e^{W_s}E[e^{W_t - \frac{t}{2}}|\mathcal{F}_s]] \quad \text{$W_s$ is $\mathcal{F}_s$-measurable and thanks to b)}\\ &= e^{\frac{t}{2}}E[e^{2W_s - \frac{s}{2}}] \\ &= e^{\frac{t+3s}{2}}E[e^{2W_s - 2s}] \quad \text{thanks to c) with $\lambda=2$}\\ &= e^{\frac{t+3s}{2}}\\ \end{align} Now: \begin{align} E[e^{W_s}]E[e^{W_t}] &= e^{\frac{t+s}{2}}E[e^{W_s- \frac{s}{2}}]E[e^{W_t - \frac{t}{2}}] \\ &= e^{\frac{t+s}{2}}\\ \end{align} You have all the ingredients to compute the covariance.
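A Monte Carlo sanity check of $E[e^{W_s}e^{W_t}]=e^{(t+3s)/2}$ for $s\le t$ (a sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
s, t, N = 0.7, 1.5, 10 ** 6
Ws = rng.normal(0.0, np.sqrt(s), N)
Wt = Ws + rng.normal(0.0, np.sqrt(t - s), N)   # independent increment

print(np.mean(np.exp(Ws + Wt)), np.exp((t + 3 * s) / 2))   # approximately equal
```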
Number of partitions of list
If your list has length $n$, there are $n-1$ places to put a divider between elements. Each choice of a set of dividers makes a different partition. As each divider can be there or not, there are $2^{n-1}$ ways to partition the list. In your example $n=3$ and $2^{3-1}=4$, which is the number of partitions you have found.
Hyperbolic motion and circular motion
If a body of negligible mass (like a space probe) travels without active propulsion in the gravity field of a much heavier object (like a planet or the sun) which is traveling at constant speed (a good approximation for the short duration of a swing-by maneuver), then the path of the first body in the reference frame moving with the second is a conic section. This includes hyperbolas and ellipses, with parabolas as the limiting case between them and circles as another special case. As a general rule, the orbit is a hyperbola if the speed of the object is enough to break free of the gravitational pull, as the asymptotes of the hyperbola are lines going to infinity. A body describing an ellipse is gravitationally bound and can't escape unless it gains more energy. Once you have more than two bodies in your system, or comparable masses, things become a lot more difficult. But at least for the general idea of why a swing-by works, going from one two-body situation with the sun in its center to one with a planet and back to one with the sun works well enough. For actual mission planning, more work would be needed. In case you do read German, I wrote a Facharbeit in my final year at school about the subject of swing-by, deriving the properties mentioned above and many more. I would write it differently these days, but it might still be interesting.
composition of functions and concavity
Use the definition of concave: $$f((1-\alpha )x+\alpha y)\geq (1-\alpha ) f(x)+\alpha f(y)$$ So in this case you would show $$h(g((1-\alpha)x_1+\alpha y_1),...,g((1-\alpha)x_n+\alpha y_n))$$ $$\ge h((1-\alpha)g(x_1)+\alpha g(y_1),...,(1-\alpha)g(x_n)+\alpha g(y_n))$$ $$\ge (1-\alpha)h(g(x_1),...,g(x_n))+\alpha h(g(y_1),...,g(y_n))$$ using, for the first inequality, that $g$ is concave and $h$ is non-decreasing, and, for the second, that $h$ is concave.
Geometric Probability/Joint Variable Problem
Let the arrival time of Naomi depend on the arrival time of Samir. Since they arrive within $10$ minutes of each other, Naomi has to arrive between $10$ minutes earlier and $10$ minutes later than Samir. This corresponds to the inequality $n-10 \le s \le n+10$. When we graph it, we find that the area equals $60 \times 60 - 2 \cdot \frac{50 \cdot 50}{2} = 1100$. Can you find the probability from here?
Volume of the solid enclosed by curves.
First, find the area enclosed by a parabolic cross section in the $xz$-plane enclosed by the curves $$z = f(x) = 7-x^2, \quad z = g(x) = -2:$$ this is simply $$B = \int_{x=-3}^3 f(x) - g(x) \, dx = \int_{x=-3}^3 9-x^2 \, dx = 36.$$ Thus the total volume is simply the area of the base $B$ times the height of the parabolic cylinder, which is the distance between $y = -1$ and $y = 4$, which is $h = 5$. Thus the total volume is $V = Bh = 5(36) =180$, as you found.
Examining the nature of mapping of $f(x) = \frac{x}{x^2 - 2}$.
$$x\in \mathbb{Q} \Rightarrow x^2-2\in \mathbb{Q} \Rightarrow \frac{x}{x^2-2}\in \mathbb{Q}.$$ So actually $f$ maps $\mathbb{Q}\rightarrow\mathbb{Q}$, hence it is not onto. $$f(x)=f(y) \Longleftrightarrow x(y^2-2)=y(x^2-2) \Longleftrightarrow (y-x)(xy+2)=0.$$ So if $xy=-2$ and $x\neq y$, we also have $f(x)=f(y)$, hence it is not one-one.
Is $\mathbb{Z}$ isomorphic to a direct subproduct of the family $\left\{ \mathbb{Z}_{n}\right\} _{n>1}$?
I'm assuming that $\mathbb Z_n$ means $\mathbb Z/n\mathbb Z$. I'm also assuming that "subproduct" means "subgroup of the product". If this is the case then yes, the subgroup generated by $(1, 1, \ldots) \in \prod_{n > 1}\mathbb Z_n$ is isomorphic to $\mathbb Z$. To see that this is the case note that we can always define a homomorphism out of $\mathbb Z$ by picking an element $x$ and declaring $m \mapsto m\cdot x$. So what we need is for this to be injective. When $x = (1, 1, \ldots)$ we have $m\cdot x = (m, m, \ldots)$, and if $m \neq 0$ then looking at the coordinate corresponding to $\mathbb Z_{|m| + 1}$ gives $m\cdot x \neq 0$. Another way to see this: if $p$ is prime then the coordinate of $(m, m, \ldots)$ corresponding to $\mathbb Z_p$ is zero if and only if $p$ divides $m$. So if $m\cdot x = (0, 0, \ldots)$ then every prime divides $m$, so $m$ must be $0$.
Proving the cofinite topology is coarser than the Euclidean topology.
Being contained in an open set isn't enough. What must be proved is that every element of $\tau_c$ is an open subset of $\mathbb R$ with respect to the usual topology. Let $A\in\tau_c$. Then either $A=\emptyset$ or $A^\complement$ is finite. If $A=\emptyset$, it's trivial. And if $A^\complement$ is finite, then $A^\complement$ is closed with respect to the usual topology. But then $A$ is open with respect to the usual topology.
Solving an equation with the same variable twice
You want to find the root $r>0$ of $$ f(r) = \bigg[1 - \frac{1}{{(1 + r)^{36} }}\bigg]\bigg/r - 27.397, $$ that is the $r > 0$ for which $f(r)=0$. Using Wims Function Calculator, I get $r \approx 0.0155764414039$, which is what you want.
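Any standard root finder reproduces this; a sketch with scipy ($f(r)\to 36-27.397>0$ as $r\to 0^+$ and $f(1)<0$, so a sign change is bracketed):

```python
from scipy.optimize import brentq

f = lambda r: (1 - (1 + r) ** -36) / r - 27.397
print(brentq(f, 1e-9, 1.0))          # ~0.0155764414
```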
About the question of Truth Table
Repetitions of, say '$q$,' count as the same variable, so it would be considered one variable. The definition is the same as for an equation in algebra, like $x^2+x =0.$ The unknown 'x' counts as just one unknown.
Expected number of semicircles
Firstly note that we can represent each semi-circle using its mid-point. Therefore, choosing a semi-circle uniformly at random is equivalent to choosing a point uniformly in the interval $[0, 2\pi)$. Consider a sample of $n$ such points given by $\{X_1, X_2, \dots, X_n\}$ and let $x_{\min}(n)$ and $x_{\max}(n)$ be the minimum and maximum values in this sample. Let us try to find the probability that this sample will not be able to cover the circle. This sample will not be able to cover the circle if the semi-circles corresponding to $x_{\min}(n)$ and $x_{\max}(n)$ do not intersect. Since we are dealing with a circle, the interval of $[0, 2\pi)$ can be thought of as a periodic interval and the above condition can be written as $(2\pi + x_{\min}(n)) - x_{\max}(n) > \pi$ which is equivalent to $x_{\max}(n) - x_{\min}(n) < \pi$. To compute the probability of the above event, first let us compute $\Pr(x_{\max}(n) < a + \pi | x_{\min}(n) = a )$ for some $a \in [0, 2\pi)$. Since the distribution is uniform, this evaluates to $\displaystyle \left(\frac{\pi}{(2\pi - a)}\right)^{n - 1}$. The exponent is $n - 1$ because only $n-1$ points remain after setting the minimum one equal to $a$. Using the distribution of the minimum of $n$ uniformly distributed random variables, one can write \begin{align} \Pr(x_{\max}(n) - x_{\min}(n) < \pi) & = n \int_{0}^{2\pi} \left(\frac{\pi}{(2\pi - a)}\right)^{n - 1} \left(1 - \frac{a}{2\pi}\right)^{n-1} \frac{da}{2 \pi} \\ & = \frac{n}{2^{n-1}} \end{align} If $T$ denotes the random number of steps, then $P(T> n) = \Pr(x_{\max}(n) - x_{\min}(n) < \pi) = \frac{n}{2^{n-1}}$. Using the formula, $\displaystyle \mathbb{E}[T] = \sum_{n = 0}^{\infty} \Pr(T > n)$, you can compute the required value.
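A Monte Carlo check (a sketch: the semicircles cover the circle precisely when every cyclic gap between consecutive midpoints is less than $\pi$; with $P(T>0)=1$, the formula above gives $\mathbb{E}[T]=1+\sum_{n\ge 1}n/2^{n-1}=5$):

```python
import numpy as np

rng = np.random.default_rng(0)

def steps_to_cover():
    pts = []
    while True:
        pts.append(rng.uniform(0, 2 * np.pi))
        m = np.sort(pts)
        gaps = np.diff(m, append=m[0] + 2 * np.pi)  # cyclic gaps of midpoints
        if gaps.max() < np.pi:                      # every gap < pi: covered
            return len(pts)

print(np.mean([steps_to_cover() for _ in range(20_000)]))   # ~5
```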
Counting marbles (partitions into $n$ summands)
First you have to place a marble in each pile. Since the marbles are all the same and we aren't counting piles differently, this simply removes $k$ marbles from the original $n$. Next, we have to take the remaining $n-k$ marbles and place them into the $k$ piles in some way. This is the same as counting the ways of partitioning $n-k$ as a sum of nonnegative integers, where the number of summands of the partition is less than or equal to $k$. In particular, the number of ways to place your marbles (call it $N$) will satisfy $N\leq p(n-k)$ (here $p(m)$ is the partition function seen in the wiki link). I'm not sure if there is a known closed form for the number of partitions of a number $n - k$ into $k$ or fewer summands; perhaps someone else with more experience in that area of number theory would be able to provide a reference or formula with that information (or even just confirm or deny existence of such a formula).
Using MATLAB to solve a system of two PDEs encountered in Mathematical Biology
First order PDEs can generally be solved by the method of characteristics. To solve your first PDE using this method, put $$ \phi_z(x) = y_1(z + x, x)\ . $$ Then \begin{eqnarray} \frac{d\phi_z}{dx}(x)&=&\frac{\partial y_1}{\partial t}(z+x,x)+\frac{\partial y_1}{\partial a}(z+x,x)\\ &=& f_1(z+x)\,\phi_z(x)\ . \end{eqnarray} This is a first-order ODE for $\ \phi_z(x)\ $, which has the solution: \begin{eqnarray} \frac{\phi_z(x)}{\phi_z(0)}&=&e^{\int_0^xf_1(z +u)\,du}\\ &=& e^{\int_z^{x+z}f_1(u)\,du}\ . \end{eqnarray} Now, $\ \phi_z(0)=y_1(z, 0) = c_1\ $, and $\ y_1(t,a)=\phi_{t-a}(a)\ $, so $$ y_1(t,a)=c_1 e^{\int_{t-a}^tf_1(u)\,du}\ . $$ Your second PDE can now be solved similarly. When solving the corresponding ODE, however, you will need to integrate over the interval from $\ x\ $ to $\ \tau_1\ $, rather than from $\ 0\ $ to $\ x\ $ as I did above.
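A symbolic spot-check of the formula for $y_1$ (a sketch with sympy, using the hypothetical choice $f_1(u)=u$):

```python
import sympy as sp

t, a, u, c1 = sp.symbols('t a u c1')
f1 = lambda v: v                       # assumed example coefficient

y1 = c1 * sp.exp(sp.integrate(f1(u), (u, t - a, t)))
residual = sp.diff(y1, t) + sp.diff(y1, a) - f1(t) * y1
print(sp.simplify(residual))           # 0, so y1 satisfies y_t + y_a = f1(t) y
```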
If $n^a+mn^b+1$ divides $n^{a+b}-1$, prove $m=1$ and $b=2a$
Suppose $k(n^a + mn^b + 1) = n^{a+b} - 1$. Since $k$ is the ratio of two positive integers (using $n\ge 2$), it is certainly positive, and also clearly $k < n^{a+b}/(mn^b) = n^a/m$. In particular $0 < k < n^a$. On the other hand, looking at this equation modulo $n^a$ (using the fact that $a \le b$ and hence $n^a \mid n^b$) we see that $k \equiv -1 \pmod{n^a}$. Combining this with the known range of $k$, this uniquely identifies $k = n^a-1$. It also forces $m=1$ since otherwise $n^a/m$ would be too small. We therefore have $(n^a - 1)(n^a + n^b + 1) = n^{a+b} - 1$ which easily simplifies to $n^{2a} = n^b$, hence $b = 2a$ (since $n > 1$).
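A brute-force check over a small range (a sketch; every divisibility hit indeed has $m=1$ and $b=2a$, consistent with $n^{3a}-1=(n^a-1)(n^{2a}+n^a+1)$):

```python
for n in range(2, 8):
    for a in range(1, 5):
        for b in range(a, 9):
            for m in range(1, 6):
                if (n ** (a + b) - 1) % (n ** a + m * n ** b + 1) == 0:
                    assert m == 1 and b == 2 * a
```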
Mini Tetris Winning Configuration
Let $U_n$ be the number of ways to cover a $2\times n$ board with one corner square missing. Clearly $$U_1=0\ ,\quad U_2=1\ .$$ In the first of your five diagrams, we start the $2\times n$ board with an L-shaped piece at the bottom. The number of ways to complete the covering is then $U_{n-1}$. In the fourth, we start with a two-square piece placed vertically; then the one next to it is forced; then there are $T_{n-2}$ ways to complete the covering of the $2\times n$ board. In fact, the five diagrams show all ways of placing the initial piece. See if you can use arguments similar to those I have given to show that the total number of ways to cover the board is $$T_n=U_{n-1}+U_{n-1}+T_{n-2}+T_{n-2}+T_{n-1}\ ,$$ that is, $$T_n=T_{n-1}+2T_{n-2}+2U_{n-1}\ .\tag{$*$}$$ Starting with a $2\times(n-1)$ board with a bottom corner missing, show similarly that $$U_{n-1}=U_{n-2}+T_{n-3}\ .\tag{$*{*}$}$$ Now from $(*)$ we get $$\eqalign{ 2U_{n-1}&=T_n-T_{n-1}-2T_{n-2}\cr 2U_{n-2}&=T_{n-1}-T_{n-2}-2T_{n-3}\ ;\cr}$$ substitute into $(*{*})$ to eliminate the $U$ terms, and then simplify.
Predicate formula to propositional formula
Try to eliminate quantifiers step by step: $$ \begin{aligned} \exists x\forall yP(x,y) &\iff \exists x (P(x,a)\land P(x,b))\\ &\iff \left(P(a,a)\land P(a,b)\right)\lor \left(P(b,a)\land P(b,b)\right). \end{aligned}$$
Factoring the polynomial $5c^2 - 52c + 20$
$$5c^2-52c+20=0$$ $$5c^2-50c-2c+20=0\\5c(c-10)-2(c-10)=0$$ $$(5c-2)(c-10)=0$$ Or you can use the quadratic formula $$x=\dfrac{-b\pm \sqrt{b^2-4ac}}{2a}$$ where here $a=5$, $b=-52$, $c=20$ (the coefficients of the quadratic, not to be confused with the variable $c$).
How to find the center of a circle, given a point on the circumference of the circle, radius and arc length?
There are infinitely many possible answers. The figure shows which direction $(x_1, y_1)$ and $(x_2, y_2)$ are, relative to the centre. However, the text does not reference this information, so it is not clear whether we can rely on it. Assuming we cannot rely on this information, Seyed’s comment applies (with a slight correction): The center of the circle can be anywhere on the perimeter of a circle with its center at $(x_1, y_1)$ and radius $R$.
A question on open and closed sets
If all convergent sequences of points of $C$ have their limits in $C$, then yes, $C$ must be closed. You can prove it by showing that $\Bbb R\setminus C$ is open. If $\Bbb R\setminus C$ is not open, there is a point $x\in\Bbb R\setminus C$ such that for every $\epsilon>0$, $(x-\epsilon,x+\epsilon)\nsubseteq\Bbb R\setminus C$, or in other words, $(x-\epsilon,x+\epsilon)\cap C\ne\varnothing$. Thus, for each $n\in\Bbb Z^+$ there is a point $$x_n\in\left(x-\frac1n,x+\frac1n\right)\cap C\;.$$ Can you show that $\langle x_n:n\in\Bbb Z^+\rangle$ is a convergent sequence of points of $C$ whose limit is $x$? Your hypothesis would then imply that $x\in C$, contradicting our choice of $x$ and proving that $\Bbb R\setminus C$ must be open, so that $C$ must be closed.
Is there a simple function, where I input 1 and it outputs 0, and if I input 0 it outputs 1?
The fastest way is to use the bitwise expression ~x & 1 (bitwise NOT, then mask off the lowest bit).
GRE Cumulative addition problem.
The question is how much Molly earned in total, not in the last two-week period. The mysterious 'additional $\$1287$' is simply the sum of all earlier earnings: $$1287 = 160+161+322+644$$ hence her total earnings are $$160+161+322+644 + 1288 = 2575$$
Does the power rule come from the generalised binomial theorem or the other way around?
I'm guessing: $\lim_{h\rightarrow 0} \frac{(x+h)^r-x^r}{h}$ $=\lim_{h\rightarrow 0} \frac{e^{r\ln(x+h)}-x^r}{h}$ Now we use L'Hospital's rule (differentiating with respect to $h$): $=\lim_{h\rightarrow 0} e^{r\ln(x+h)}\, r\, \frac{1}{x+h}$ $= rx^{r-1}$ So the generalised power rule does not depend on the generalised binomial theorem.
Is the p-value sufficient to compare significance
Doing three paired t tests would not tell you what you want to know. Each paired test would tell you whether students in one of the three groups (tutorial systems) show improvement. The answer to that is Yes for all groups, because each group shows all five students with positive improvement scores ($1/2^5 < .05$). But this says nothing at all about the relative effectiveness of the three groups. One appropriate procedure would be to find an improvement score for each person (e.g., 14 for student 1a). Then do a one-factor (one-way) ANOVA on the three sets of improvement scores. If the F-test for the ANOVA shows significance (i.e., that not all three tutorial systems are equally effective), then you would need to do some kind of multiple comparison procedure to see what pattern of difference is plausible. A complication is that these are percentages and so not really normally distributed. There are several possibilities if you suspect (or find from a test on residuals) that the data are far from normal: Kruskal-Wallis and permutation tests are popular alternatives. The title of the Question asks whether the P-value is sufficiently small to declare significant differences. But you don't show a P-value, so that is not a good title for this Question. I put these data into Minitab statistical software (quickly and without proofreading, better check). The resulting ANOVA is as follows:

Source  DF     SS    MS     F      P
Group    2   26.1  13.1  0.42  0.668
Error   12  375.6  31.3
Total   14  401.7

The P-value shown is not sufficient to reject the null hypothesis that improvement scores are the same for all three tutorial programs. I don't know whether you have studied ANOVA or not. If not, I wonder about the purpose of the question. If so, I suppose you will know how to interpret the ANOVA table. The largest difference among the three sample means is a little over 3 (not shown). This ANOVA procedure assumes that the variances are the same in the three groups. It estimates the common group variance to be about 31 (MS for Error). Note: When significance is not found, it is good statistical practice to do power computations to see whether the study had any chance of success in detecting differences among group population means. (a) A one-way ANOVA with 3 groups, 5 replications within each group, and a common variance of 31 has only about one chance in ten of detecting a difference of 3 in improvement scores, even if it is real. (b) It is difficult to say how large a difference investigators would find interesting. In order reliably to detect a difference of 5 points between the best and worst tutorial system one would need over 30 students in each group.
An optimization problem
Here is my attempt: $$ \frac{\partial}{\partial x_i} f = a_i - 3 \left( \lVert x\rVert_2^{2/3} + \lVert y\rVert_2^{2/3} + \lVert z\rVert_2^{2/3} \right)^2 \frac{2 x_i}{3\lVert x \rVert_2^{4/3}} $$ We have critical points for $$ 0 = \frac{\partial}{\partial x} f = a - \frac{2 \left( \lVert x\rVert_2^{2/3} + \lVert y\rVert_2^{2/3} + \lVert z\rVert_2^{2/3} \right)^2}{\lVert x \rVert_2^{4/3}} x = a - \lambda x \\ 0 = \frac{\partial}{\partial y} f = b - \frac{2 \left( \lVert x\rVert_2^{2/3} + \lVert y\rVert_2^{2/3} + \lVert z\rVert_2^{2/3} \right)^2}{\lVert y \rVert_2^{4/3}} y = b - \mu y \\ 0 =\frac{\partial}{\partial z} f = c - \frac{2 \left( \lVert x\rVert_2^{2/3} + \lVert y\rVert_2^{2/3} + \lVert z\rVert_2^{2/3} \right)^2}{\lVert z \rVert_2^{4/3}} z = c - \nu z $$ for some non-negative numbers $\lambda, \mu, \nu$. If $a=0$ then $x=0$, if $b=0$ then $y=0$ and if $c=0$ then $z=0$. Otherwise choose from below for the non-zero $a, b, c$ cases: \begin{align} x &= \alpha\, a \quad (\alpha = 1 / \lambda > 0) \\ y &= \beta\, b \quad (\beta = 1 / \mu > 0) \\ z &= \gamma\, c \quad (\gamma = 1 / \nu > 0) \\ \end{align} This leads to up to three equations in the real unknowns $\alpha$, $\beta$, $\gamma$: \begin{align} \frac{1}{\alpha} &= \frac{2 \left( \lVert \alpha a \rVert_2^{2/3} + \lVert \beta b \rVert_2^{2/3} + \lVert \gamma c \rVert_2^{2/3} \right)^2}{\lVert \alpha a \rVert_2^{4/3}} \\ &= \frac{2 \left( \alpha^{2/3} \lVert a \rVert_2^{2/3} + \beta^{2/3} \lVert b \rVert_2^{2/3} + \gamma^{2/3} \lVert c \rVert_2^{2/3} \right)^2}{\alpha^{4/3} \lVert a \rVert_2^{4/3}} \iff \\ \alpha^{1/3} \lVert a \rVert_2^{4/3} &= 2 \left( \alpha^{2/3} \lVert a \rVert_2^{2/3} + \beta^{2/3} \lVert b \rVert_2^{2/3} + \gamma^{2/3} \lVert c \rVert_2^{2/3} \right)^2 \end{align} which is equivalent to \begin{align} 2^{1/2} \left( \lVert a \rVert_2^{2/3} \alpha^{2/3} + \lVert b \rVert_2^{2/3} \beta^{2/3} + \lVert c \rVert_2^{2/3} \gamma^{2/3} \right) &= \lVert a \rVert_2^{2/3} \alpha^{1/6} \quad (*) \\ &= \lVert b \rVert_2^{2/3} \beta^{1/6} \\ &= \lVert c \rVert_2^{2/3} \gamma^{1/6} \end{align} Example: For non-zero $a, b, c$ the RHS give $$ \lVert a \rVert_2^4 \alpha = \lVert b \rVert_2^4 \beta = \lVert c \rVert_2^4 \gamma $$ Expressing $\beta$ and $\gamma$ in terms of $\alpha$ in $(*)$ gives \begin{align} 2^{1/2} \left( \lVert a \rVert_2^{2/3} + \lVert b \rVert_2^{2/3} \left( \frac{\lVert a \rVert_2}{\lVert b \rVert_2} \right)^{8/3} + \lVert c \rVert_2^{2/3} \left( \frac{\lVert a \rVert_2}{\lVert c \rVert_2} \right)^{8/3} \right) \alpha^{2/3} &= \lVert a \rVert_2^{2/3} \alpha^{1/6} \iff \\ 2^{1/2} \left( 1 + \frac{\lVert b \rVert_2^{2/3}}{\lVert a \rVert_2^{2/3}} \left( \frac{\lVert a \rVert_2}{\lVert b \rVert_2} \right)^{8/3} + \frac{\lVert c \rVert_2^{2/3}}{\lVert a \rVert_2^{2/3}} \left( \frac{\lVert a \rVert_2}{\lVert c \rVert_2} \right)^{8/3} \right) \alpha^{1/2} &= 1 \iff \\ 2^{1/2} \left( 1 + \frac{\lVert a \rVert_2^2}{\lVert b \rVert_2^2} + \frac{\lVert a \rVert_2^2}{\lVert c \rVert_2^2} \right) \alpha^{1/2} &= 1 \iff \\ \alpha = \frac{\lVert b \rVert_2^4 \, \lVert c \rVert_2^4} { 2 \left( \lVert b \rVert_2^2\, \lVert c \rVert_2^2 + \lVert a \rVert_2^2\, \lVert c \rVert_2^2 + \lVert a \rVert_2^2\, \lVert b \rVert_2^2 \right)^2 } \\ \beta = \frac{\lVert a \rVert_2^4 \, \lVert c \rVert_2^4} { 2 \left( \lVert b \rVert_2^2\, \lVert c \rVert_2^2 + \lVert a \rVert_2^2\, \lVert c \rVert_2^2 + \lVert a \rVert_2^2\, \lVert b \rVert_2^2 \right)^2 } \\ \gamma = \frac{\lVert a \rVert_2^4 \, \lVert b \rVert_2^4} { 2 \left( 
\lVert b \rVert_2^2\, \lVert c \rVert_2^2 + \lVert a \rVert_2^2\, \lVert c \rVert_2^2 + \lVert a \rVert_2^2\, \lVert b \rVert_2^2 \right)^2 } \end{align} The maximum is: \begin{align} f(\alpha\, a, \beta\, b, \gamma\, c) &= \alpha\, \lVert a \rVert_2^2 + \beta\, \lVert b \rVert_2^2 + \gamma\, \lVert c \rVert_2^2 - \left( \lVert \alpha a \rVert_2^{2/3} + \lVert \beta b \rVert_2^{2/3} + \lVert \gamma c \rVert_2^{2/3} \right)^3 \\ &= \alpha\, \lVert a \rVert_2^2 + \beta\, \lVert b \rVert_2^2 + \gamma\, \lVert c \rVert_2^2 - \left( \lVert a \rVert_2^{2/3} \alpha^{2/3} + \lVert b \rVert_2^{2/3} \beta^{2/3} + \lVert c \rVert_2^{2/3} \gamma^{2/3} \right)^3 \\ &= \alpha\, \lVert a \rVert_2^2 + \beta\, \lVert b \rVert_2^2 + \gamma\, \lVert c \rVert_2^2 - \left( \frac{1}{2^{1/2}} \lVert a \rVert_2^{2/3} \alpha^{1/6} \right)^3 \\ &= \alpha\, \lVert a \rVert_2^2 + \beta\, \lVert b \rVert_2^2 + \gamma\, \lVert c \rVert_2^2 - \frac{1}{2^{3/2}} \lVert a \rVert_2^{2} \alpha^{1/2} \\ &= \lVert a \rVert_2^2 \alpha + \frac{\lVert a \rVert_2^4}{\lVert b \rVert_2^2} \alpha + \frac{\lVert a \rVert_2^4}{\lVert c \rVert_2^2} \alpha - \frac{1}{2^{3/2}} \lVert a \rVert_2^{2} \alpha^{1/2} \\ &= \lVert a \rVert_2^2 \alpha \left( 1 + \frac{\lVert a \rVert_2^2}{\lVert b \rVert_2^2} + \frac{\lVert a \rVert_2^2}{\lVert c \rVert_2^2} \right) - \frac{1}{2^{3/2}} \lVert a \rVert_2^{2} \alpha^{1/2} \\ &= \lVert a \rVert_2^2 \alpha \frac{ \lVert b \rVert_2^2 \, \lVert c \rVert_2^2 + \lVert a \rVert_2^2 \, \lVert c \rVert_2^2 + \lVert a \rVert_2^2 \, \lVert b \rVert_2^2 }{\lVert b \rVert_2^2 \, \lVert c \rVert_2^2} - \frac{1}{2^{3/2}} \lVert a \rVert_2^{2} \alpha^{1/2} \\ &= \lVert a \rVert_2^2 \alpha \frac{1}{(2\alpha)^{1/2}} - \frac{1}{2^{3/2}} \lVert a \rVert_2^{2} \alpha^{1/2} \\ &= \left( \frac{1}{2^{1/2}} - \frac{1}{2^{3/2}} \right) \lVert a \rVert_2^{2} \alpha^{1/2} = \frac{1}{2^{3/2}} \lVert a \rVert_2^{2} \alpha^{1/2} \\ &= \frac{1}{4} \frac{\lVert a \rVert_2^{2} \, \lVert b \rVert_2^2 \, \lVert c \rVert_2^2} { \lVert b \rVert_2^2\, \lVert c \rVert_2^2 + \lVert a \rVert_2^2\, \lVert c \rVert_2^2 + \lVert a \rVert_2^2\, \lVert b \rVert_2^2 } \end{align} Example: $a \ne 0$, $b = c = 0$: $$ f(x,y,z) = a^T x - \left( \lVert x\rVert_2^{2/3} + \lVert y\rVert_2^{2/3} + \lVert z\rVert_2^{2/3} \right)^3 $$ It seems reasonable that $y = z = 0$ at a maximum, since non-zero $y, z$ would only decrease the value of $f$. So we continue maximizing $$ f(x,0,0) = a^T x - \lVert x\rVert_2^2 \quad (**) $$ From the above we assume $x = \alpha a$ and use equation $(*)$ without the terms in $\beta, b, \gamma, c$ and have $$ 2^{1/2} \lVert a \rVert_2^{2/3} \alpha^{2/3} = \lVert a \rVert_2^{2/3} \alpha^{1/6} \iff \\ 2^{1/2} \alpha^{2/3} = \alpha^{1/6} \iff \\ 2 \alpha^{4/3} = \alpha^{1/3} \iff \\ \alpha = \frac{1}{2} $$ which is what we would get from analyzing equation $(**)$. The maximum is: $$ f((1/2)a,0,0) = \frac{1}{4} \lVert a \rVert_2^2 $$
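As a numerical sanity check on the closed form above (a quick sketch with NumPy; the random vectors are arbitrary, and this only verifies the value of $f$ at the interior critical point, not that that point is a global maximiser):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.normal(size=(3, 3))  # three arbitrary nonzero vectors in R^3

def f(x, y, z):
    s = sum(np.linalg.norm(v) ** (2 / 3) for v in (x, y, z))
    return a @ x + b @ y + c @ z - s ** 3

na2, nb2, nc2 = (np.linalg.norm(v) ** 2 for v in (a, b, c))
denom = nb2 * nc2 + na2 * nc2 + na2 * nb2

# alpha, beta, gamma from the formulas derived above
alpha = nb2**2 * nc2**2 / (2 * denom**2)
beta = na2**2 * nc2**2 / (2 * denom**2)
gamma = na2**2 * nb2**2 / (2 * denom**2)

value = f(alpha * a, beta * b, gamma * c)
closed_form = na2 * nb2 * nc2 / (4 * denom)
assert np.isclose(value, closed_form)
```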
How to compute $\left ( \frac{\partial y}{\partial x} \right )_{x\rightarrow 0}$?
You have to compute the partial derivative of $y$ with respect to $x$, and then compute the limit of this expression as $x$ approaches zero.
Can this summation be proved to equal 1 for any upper bound 0 or greater?
We can show the identity with the help of generating functions. It is convenient to use the coefficient of operator $[z^n]$ to denote the coefficient of $z^n$ in a series. This way we can write for instance \begin{align*} [z^n]e^{kz}=[z^n]\left(1+kz+\frac{(kz)^2}{2!}+\cdots\right)=\frac{k^n}{n!}\tag{1} \end{align*} We obtain for integers $n\geq 0$ \begin{align*} \color{blue}{\sum_{i=0}^n\frac{(n-i+1)^n(-1)^i}{i!(n-i)!}} &=\frac{1}{n!}\sum_{i=0}^n\binom{n}{i}(-1)^i(n-i+1)^n\tag{2}\\ &=\frac{1}{n!}\sum_{i=0}^n\binom{n}{i}(-1)^{n-i}(i+1)^n\tag{3}\\ &=\frac{1}{n!}\sum_{i=0}^n\binom{n}{i}n![z^n]e^{(i+1)z}(-1)^{n-i}\tag{4}\\ &=[z^n]e^z\sum_{i=0}^n\binom{n}{i}\left(e^{z}\right)^i(-1)^{n-i}\tag{5}\\ &=[z^n](e^z-1)^ne^z\tag{6}\\ &=[z^n]\left(z+\frac{z^2}{2!}+\cdots\right)^n\left(1+z+\frac{z^2}{2!}+\cdots\right)\tag{7}\\ &\color{blue}{=1} \end{align*} and the claim follows. Comment: In (2) we introduce binomial coefficients since we want to apply the binomial theorem later on. In (3) we change the order of summation $i\rightarrow n-i$. In (4) we use the coefficient of operator as shown in (1). In (5) we do a small simplification. In (6) we apply the binomial theorem. In (7) we see the smallest power of $z$ is $n$.
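A quick exact check of the identity for small $n$ (a sketch using Python's `fractions` to avoid floating-point issues):

```python
from fractions import Fraction
from math import factorial

def lhs(n: int) -> Fraction:
    # Sum_{i=0}^{n} (n-i+1)^n (-1)^i / (i! (n-i)!)
    return sum(Fraction((-1) ** i * (n - i + 1) ** n,
                        factorial(i) * factorial(n - i)) for i in range(n + 1))

assert all(lhs(n) == 1 for n in range(0, 12))
```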
Find the range of $x$ for which the sequence $\dfrac{n!} {k!(n-k)!}x^n $ converges to $0$ for a fixed $k\in\mathbb{N}$
An idea: using d'Alembert's (ratio) test, $$\left|\frac{\binom{n+1}kx^{n+1}}{\binom nk x^n}\right|=|x|\frac{n+1}{n+1-k}\xrightarrow[n\to\infty]{}|x|\implies\text{the infinite series}\; \sum_{n=k}^\infty \binom nk x^n$$ converges if $\;|x|<1\;$, and thus its general term $\binom nk x^n$ converges to zero for $|x|<1$. Conversely, for $|x|\ge1$ we have $\left|\binom nk x^n\right|\ge\binom nk\ge1$ for all $n\ge k$, so the sequence cannot converge to zero; hence $|x|<1$ is the full range.
Dividing foods, does anybody know what kind of problem this is?
We have $$n+\frac n3+\frac n5+\frac n7+\frac n9=3378\ .$$ Multiply by $9$ and $7$ and $5$: $$315n+105n+63n+45n+35n=3378\times5\times7\times9\ .$$ Collect terms: $$563n=1064070\ .$$ Divide: $$n=1890\ .$$
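A one-line check of the arithmetic (a sketch using exact rationals):

```python
from fractions import Fraction

n = 1890
# n (1 + 1/3 + 1/5 + 1/7 + 1/9) should reproduce the stated total.
assert n * (1 + Fraction(1, 3) + Fraction(1, 5) + Fraction(1, 7) + Fraction(1, 9)) == 3378
```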
Complex Conjugate in the form $x+iy$
$$\overline{e^z}=e^{\overline z}$$ because $$\overline{e^z}=e^{x}\overline{(\cos y+i\sin y)}=e^{x}{(\cos y-i\sin y)}=e^{\overline z}.$$ More generally, if $f$ is a holomorphic function taking real values on the real axis, then $\overline{f(z)}=f(\overline z)$.
Are the computable reals finitary?
A real number $r$ is defined to be computable if there is an algorithm that, given any natural number $n$, will produce a rational number within distance $1/n$ of $r$. Thus each individual computable number is defined using a finite amount of information (the algorithm). On the other hand, there is a well-known intuitive sense in which an arbitrary real number can contain an infinite amount of information. In this sense, computable real numbers are finitary in a way that arbitrary real numbers are not. The definition of a computable real number can be stated in either classical mathematics or in intuitionistic mathematics. Nothing in the concept of a computable real number ties you to one logic or the other.
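For instance, here is a sketch of such an algorithm for the computable real $\sqrt2$ (bisection with exact rationals; the function name is just illustrative):

```python
from fractions import Fraction

def sqrt2_approx(n: int) -> Fraction:
    # Computable-real style algorithm: return a rational within 1/n of sqrt(2),
    # by bisection on [1, 2] until the bracket is shorter than 1/n.
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    # sqrt(2) lies in [lo, hi], so lo is within 1/n of it.
    return lo

print(sqrt2_approx(1000))
```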
Help with proving: period estimation for concatenated sequences
Yes. I assume both $s$ and $d$ have the same length, otherwise it's not clear how to zip them. If $X$ is a period, then any integer multiple $kX$ is also a period: $s[n+kX] = s[n+(k-1)X] = \ldots = s[n]$. Since $\text{LCM}(X,Y)$ is by definition a multiple of both $X$ and $Y$, it is also a period of both $s$ and $d$. If two strings share a period $Z$, then their bitwise concatenation also has this period: $e[n+Z] = (s[n+Z], d[n+Z]) = (s[n], d[n]) = e[n]$. "If $X=Y$, then $Z=X=Y$. Else..." You don't need this special case; the formula $Z=\text{LCM}(X,Y)$ also holds for $X=Y$, since $\text{LCM}(X,X)=X$. Without more information about the strings $s$ and $d$, it can be shown that $Z=\text{LCM}(X,Y)$ is the best that can be said. Consider the strings of length $X\cdot Y$ $s[N] = \begin{cases} 1 &\mbox{if } X \mid N \\ 0 &\mbox{otherwise} \end{cases}$ $d[N] = \begin{cases} 1 &\mbox{if } Y \mid N \\ 0 &\mbox{otherwise} \end{cases}$ Their periods are $X$ and $Y$. Let $e$ be their bitwise concatenation. For $Z$ to be a period of $e$, $e[Z]$ must be equal to $e[0]$. For this to happen, $s[Z]$ and $d[Z]$ must be 1, which happens exactly when $X \mid Z$ and $Y \mid Z$; therefore, $Z$ must be a common multiple of $X$ and $Y$, so the LCM is optimal.
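A small sketch of the optimality example (the concrete $X=4$, $Y=6$ are arbitrary choices):

```python
from math import lcm

def has_period(w, p):
    # p is a period of w if every symbol repeats with lag p.
    return all(w[i] == w[i - p] for i in range(p, len(w)))

X, Y = 4, 6
s = [1 if i % X == 0 else 0 for i in range(X * Y)]
d = [1 if i % Y == 0 else 0 for i in range(X * Y)]
e = list(zip(s, d))  # position-wise ("bitwise") concatenation

# LCM(X, Y) is a period of e, and no smaller Z works for this example.
assert has_period(e, lcm(X, Y))
assert not any(has_period(e, z) for z in range(1, lcm(X, Y)))
```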
Easier way to solve this problem of trigonometry.
HINT: $$2\sin x\sin y=\cos(x-y)-\cos(x+y)$$ $$\implies2(2\sin x\sin y)\sin(x-y)=2\sin(x-y)[\cos(x-y)-\cos(x+y)]=\sin(2x-2y)-(\sin2x-\sin2y)$$ $$\sum 2(2\sin x\sin y)\sin(x-y)=\sum\sin(2x-2y)$$ Now setting $x-y=A$ etc., $$\sin2A+\sin2B+\sin2C=2\sin(A+B)\cos(A-B)+2\sin C\cos C$$ As $A+B+C=0\implies \sin(A+B)=\sin(-C)=-\sin C,\cos(A+B)=\cdots=\cos C$
Finding $\iint_S {z \:ds}$ for some $S$
This problem is actually best done in cylindrical coordinates. Here when we let $x = r\cos\theta$ and $y = r\sin\theta$ we can rewrite our $z$-limits as $z = r$ and $r^2 +z^2 = 1$. Note that when $z =1$ we have $r = 0$, and similarly when $z = r$ we get that $r^2 + r^2 = 1$ implying that $r = \frac{1}{\sqrt{2}}$. Therefore our integral can be written \begin{eqnarray*} \int_0^{2\pi} d\theta \int_0^{1/\sqrt{2}} \int_r^{\sqrt{1-r^2}}rzdzdr & = & 2\pi\int_0^{1/\sqrt{2}}\frac{r}{2}(1-r^2 - r^2)dr \\ & = & \pi\int_0^{1/\sqrt{2}}r - 2r^3 dr \\ & = & \pi\left (\frac{1}{2}\cdot \frac{1}{2} - \frac{1}{2}\cdot \frac{1}{4} \right )\\ & = & \pi\left (\frac{2}{8} - \frac{1}{8} \right ) \\ & = & \frac{\pi}{8}. \end{eqnarray*}
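For a numerical cross-check of the value $\pi/8$, here is a sketch using SciPy's `tplquad` (note SciPy's convention: the integrand takes the innermost variable first, here $z$, then $r$, then $\theta$):

```python
import numpy as np
from scipy.integrate import tplquad

# Integrand z times the cylindrical volume element r dz dr dtheta.
val, err = tplquad(lambda z, r, theta: z * r,
                   0, 2 * np.pi,                        # theta limits
                   0, 1 / np.sqrt(2),                   # r limits
                   lambda theta, r: r,                  # z lower limit
                   lambda theta, r: np.sqrt(1 - r**2))  # z upper limit
print(val, np.pi / 8)  # should agree to within err
```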
Find the sum of a series using Fourier series
Solution: Use Parseval's identity. Set $f(x)=\sin\left(ax\right)$ with $a=\dfrac 1 2$, for all $x\in [-\pi, \pi]$. Find $b_n=\dfrac{(-1)^{n+1}8n}{\pi(4n^2-1)}$ for all $n\in \mathbb N$ and $\displaystyle \dfrac 2 \pi\int \limits_0^\pi\left(\sin\left(\dfrac x 2\right)\right)^2\mathrm dx=1$. Parseval's identity then gives $\displaystyle\sum_{n=1}^\infty b_n^2=1$, that is, $\displaystyle\sum_{n=1}^\infty \frac{n^2}{(4n^2-1)^2}=\frac{\pi^2}{64}$.
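A numerical sanity check of the resulting identity (a sketch; the truncation point is arbitrary, and the tail of the sum is $O(1/N)$):

```python
import math

# Partial sum of n^2 / (4n^2 - 1)^2, which should approach pi^2 / 64.
s = sum(n * n / (4 * n * n - 1) ** 2 for n in range(1, 200_000))
print(s, math.pi ** 2 / 64)
```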
Any subset of a metric space is an infinite union of some individual elements of the space?
The statement is true, but your argument is false: what if $S$ is uncountable? The key is that a union of arbitrarily many open sets is still open - can you see a way of writing $S$ as a union of open sets? HINT: each $\{x\}\subseteq S$ is open . . .
Subsequence of Measurable Functions
Hint: This is just a statement about real sequences, namely that for a real sequence $(x_n)$, there exists a subsequence $(x_{n_k})$ such that $\lim_{k \to \infty} x_{n_k} = \liminf x_n$. Can you show this instead?
Beginner help at proving two sets are equal
Usually the easiest way to show that two sets are equal is to show that each one is a subset of the other. A common proof format is as follows. Let $x\in LHS$. ... Therefore $x\in RHS$. Conversely, let $x\in RHS$. ... Therefore $x\in LHS$. We have shown that $LHS\subseteq RHS$ and $RHS\subseteq LHS$, therefore $LHS=RHS$. In your case I think you are assuming too readily the relation between $\wedge$ and $\cap\,$. Try this. Let $v\in[A\wedge B]$. Then by definition $v(A\wedge B)=1$. Therefore $v(A)=1$ and $v(B)=1$. $(*)$ Hence $v\in[A]$ and $v\in[B]$. By definition of intersection, $v\in[A]\cap[B]$. Thus $[A\wedge B]\subseteq[A]\cap[B]$. In line $(*)$ you may need to fill in some details depending on exactly what you have been taught about valuations. You will also need to prove $[A]\cap[B]\subseteq[A\wedge B]$. Give it a try. Good luck!
Is a map with invertible differential a diffeomorphism onto its image near a boundary point?
A map $f : M \to N$ is smooth if for each $p \in M$ there exist charts $\varphi : U \to U' \subset \mathbb{H}^n$ around $p$ and $\psi : V \to V' \subset \mathbb{H}^n$ around $f(p)$ such that $f(U) \subset V$ and $\tilde{f} =\psi f \varphi^{-1}$ is smooth. Here $\mathbb{H}^n = \mathbb{R}^{n-1} \times [0,\infty)$. For an interior point $p$ of $M$ we have $U' \subset int \mathbb{H}^n = \mathbb{R}^{n-1} \times (0,\infty)$, similarly for $f(p)$. Concerning smoothness we may regard $\tilde{f}$ as a map $U' \to \mathbb{R}^n$. If $\varphi$ is a boundary chart, then smoothness of $\tilde{f}$ means that there exists an open $U'' \subset \mathbb{R}^n$ such that $U'' \cap \mathbb{H}^n = U'$ and a smooth extension $F : U'' \to \mathbb{R}^n$ of $\tilde{f}$. $df_p$ is invertible if and only if $dF_{\varphi(p)}$ is. In this case $F$ is a local diffeomorphism at $\varphi(p)$. Let $W'' \subset U''$ be open such that $\varphi(p) \in W''$ and $F : W'' \to F(W'')$ is a diffeomorphism onto an open $F(W'') \subset \mathbb{R}^n$. W.l.o.g. assume that $F(W'') \cap \mathbb{H}^n \subset V'$. Now define $W' = W'' \cap \mathbb{H}^n \subset U'$. Then $F(W')$ is a manifold with boundary (note that $W'' \cap (\mathbb{R}^{n-1} \times \{ 0 \})$ is a submanifold of $W''$). Next define $U^\ast = \varphi^{-1}(W')$, which is an open neighborhood of $p$. We conclude that $f(U^\ast) = (\psi^{-1} F \varphi)(U^\ast) = \psi^{-1} F(W')$ is a manifold with boundary. By construction $f : U^\ast \to f(U^\ast)$ is a diffeomorphism.
Ellipse to Standard Form
The following is correct. $$\frac{(x-4)^2}{4} + \frac{(y+5)^2}{9}= 1$$
What does $X_n(\mathbb P)$ mean, for $X_n$ a r.v. and $\mu$ a measure on $\mathbb R$?
The distribution of $X_n$ under $\mathbb P$. Remember $X_n$ will be a function from $\Omega$ to $\mathbb R$, so it "pushes" the measure $\mathbb P$ forward to the measure $\nu = X_n(\mathbb P)$ defined by $\nu(A) = \mathbb P( \{\omega\in\Omega: X_n(\omega)\in A\})$; what the book claims is that $X_n$ can be picked so this image measure $\nu$ is equal to the specified $\mu$, for all $n$. This is described in the Wikipedia article with a slightly different notation with subscripted stars, as if Y&R had written ${(X_n)}_*(\mathbb P)$ or something.
Transforming FOL sentences
Yes and yes. Here are some general equivalence principles involving quantifiers you can use to show this: Distribution of $\forall$ over $\land$ $$\forall x (\phi(x) \land \psi(x)) \Leftrightarrow \forall x \ \phi(x) \land \forall x\ \psi(x)$$ Prenex Laws Where $\varphi$ is any formula and where $x$ is not a free variable in $\psi$: $$ (\exists x \ \varphi) \rightarrow \psi \Leftrightarrow \forall x (\varphi \rightarrow \psi)$$ Applied to your case: $$\forall x \forall y : (overlap(x,y) \equiv (\exists z : part(z,x) \land part(z,y))) \Leftrightarrow \text{ (Equivalence)}$$ $$\forall x \forall y : ((overlap(x,y) \rightarrow (\exists z : part(z,x) \land part(z,y))) \land ((\exists z : part(z,x) \land part(z,y))\rightarrow overlap(x,y))) \Leftrightarrow \text{ (Distribution of } \forall y \text{ over } \land \text{)}$$ $$\forall x\, (\forall y\, (overlap(x,y) \rightarrow (\exists z : part(z,x) \land part(z,y))) \land \forall y\, ((\exists z : part(z,x) \land part(z,y))\rightarrow overlap(x,y))) \Leftrightarrow \text{ (Distribution of } \forall x \text{ over } \land \text{)}$$ $$\forall x \forall y\, (overlap(x,y) \rightarrow (\exists z : part(z,x) \land part(z,y))) \land \forall x \forall y\, ((\exists z : part(z,x) \land part(z,y))\rightarrow overlap(x,y))$$ So $(1)$ is equivalent to the conjunction of $(1a)$ and $(1b)$, meaning that the latter two can indeed be derived from the former. And, once you have $(1b)$: $$\forall x \forall y\, ((\exists z : part(z,x) \land part(z,y))\rightarrow overlap(x,y)) \Leftrightarrow \text{ (Prenex Law)}$$ $$\forall x \forall y \forall z\, ((part(z,x) \land part(z,y))\rightarrow overlap(x,y)) $$ So $(1b)$ is equivalent to $(1b')$, meaning that the latter logically follows from the former as well.
A linear algebra problem from Berkeley Problems in Mathematics, Spring 1986.
$$AB+BA=I$$ $$A(AB+BA)=A$$ $$ABA=A$$ $$\operatorname{rank}(ABA)=\operatorname{rank}(A)$$ Combining with $$\operatorname{rank}(ABA)\leq \min(\operatorname{rank}(AB),\operatorname{rank}(A))$$ gives that $$\operatorname{rank}(A)\leq \min(\operatorname{rank}(AB),\operatorname{rank}(A))$$ If $\operatorname{rank}(AB)<\operatorname{rank}(A)$, then $\operatorname{rank}(A)$ is $\leq$ something strictly smaller than itself, a contradiction. So $\operatorname{rank}(A)\leq \operatorname{rank}(AB)$. $$\operatorname{rank}(A)\leq \operatorname{rank}(AB)\leq \min(\operatorname{rank}(A),\operatorname{rank}(B))$$ Reusing the previous argument, we find that $\operatorname{rank}(A)\leq \operatorname{rank}(B)$. We can start back up at the top with $$AB+BA=I$$ $$B(AB+BA)=B$$ $$BAB=B$$ and carry through the whole previous argument again (swapping $B$ and $A$ essentially) to find that $\operatorname{rank}(B)\leq \operatorname{rank}(A)$. Both of these together imply that $\operatorname{rank}(A)=\operatorname{rank}(B)$. Since the nullspaces intersect trivially (if $Ax=Bx=0$ then $x=ABx+BAx=0$) and together span the space (every $x$ equals $ABx+BAx$, where $ABx\in\ker A$ because $ABA=A$ and $BA=I-AB$ force $A^2B=0$, and likewise $BAx\in\ker B$), the space must be of even dimension.
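A concrete instance for intuition (a sketch; the nilpotent pair below is one standard example satisfying $AB+BA=I$ in even dimension):

```python
import numpy as np

A = np.array([[0, 1], [0, 0]])
B = np.array([[0, 0], [1, 0]])

# AB + BA = I, and the two ranks agree (both 1) in dimension n = 2.
assert np.array_equal(A @ B + B @ A, np.eye(2, dtype=int))
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B) == 1
```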
Is there an irrational number whose digits never repeat anywhere, and in which all 10 digits appear everywhere?
The sequence of digits you want is an infinite square-free word on the alphabet 0123456789. EDIT: Consider an infinite square-free word on the alphabet 012, which we know exists by Thue's construction. Let the positions of $2$ in this word be $i_1, i_2, \ldots$: there must be infinitely many, otherwise after some point we would have an infinite square-free word on the alphabet 01, which is impossible. For each $k$, change the letter in position $i_k$ to $3$, $4$, \ldots, $9$, or leave it as $2$, according as $k \equiv 0, 1, \ldots, 6, 7 \pmod 8$ respectively. The resulting infinite word is still square-free, and now has infinitely many of each of $0,1, \ldots, 9$.
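Here is a sketch of the construction (deriving a square-free ternary word from the Thue-Morse sequence is one classical route; the brute-force checker verifies square-freeness of the generated prefix, and the relabelling below cycles through $2,3,\ldots,9$, which order is immaterial to the argument):

```python
from itertools import count

def thue_morse_bit(n: int) -> int:
    # Parity of the number of 1 bits of n.
    return bin(n).count("1") % 2

def ternary_squarefree(length: int) -> list[int]:
    # Classical fact: the number of 1s between consecutive 0s of the
    # Thue-Morse sequence forms a square-free word over {0, 1, 2}.
    word, run = [], 0
    for n in count(1):
        if thue_morse_bit(n) == 0:
            word.append(run)
            run = 0
            if len(word) == length:
                return word
        else:
            run += 1

def is_squarefree(w) -> bool:
    n = len(w)
    return not any(w[i:i + l] == w[i + l:i + 2 * l]
                   for l in range(1, n // 2 + 1) for i in range(n - 2 * l + 1))

w = ternary_squarefree(200)
assert is_squarefree(w)

# Replace the k-th occurrence of 2 by 2 + (k mod 8): any square in the new
# word would project back (sending 2..9 to 2) to a square in the old one.
k = 0
for i, letter in enumerate(w):
    if letter == 2:
        w[i] = 2 + k % 8
        k += 1
assert is_squarefree(w)
```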
Limit of recursive sequence $ a_{n+1} = 1+a_n^2/2 $
Since $(a_n)$ is increasing, $\frac12a_n^2\geqslant\frac12$ for every $n\geqslant1$ hence $a_{n+1}\geqslant a_n+\frac12$ for every $n$ hence $a_n\geqslant$ $____$ for every $n\geqslant1$, in particular, $a_n\to$ $____$ when $n\to\infty$.
Terminology of adjacent points
East, North, West, South (of A). North-East, North-West, South-West, South-East (of A).
Linear Algebra Subspaces and dimension
I am going to assume the space $P_4(\mathbb{R})$ is the set of polynomials of degree $4$ or less, with the regular polynomial addition. Also, I assume you are working with $\mathbb{R}$ as your scalar field. To begin with, saying that $W_1$ and $W_2$ are subspaces of $P_4(\mathbb{R})$ just means that $W_1\subset P_4(\mathbb{R})$ and $W_2\subset P_4(\mathbb{R})$. You probably mean linear subspaces. If that is the case, you have three conditions to work with, as you may already know. First, $0\in W_1$ by taking $a=b=c=0$. Secondly, if you have $p_1,p_2\in W_1$, then they are of the form $$p_1=a_1x^4+b_1x^3+a_1x^2+c_1x$$ $$p_2=a_2x^4+b_2x^3+a_2x^2+c_2x$$ with coefficients in $\mathbb{R}$. For any $\lambda\in\mathbb{R}$, you have $$p_3=p_1+\lambda p_2=(a_1+\lambda a_2)x^4+(b_1+\lambda b_2)x^3+(a_1+\lambda a_2)x^2+(c_1+\lambda c_2)x.$$ Since you can write $p_3$ in the required form, $p_3\in W_1$, as wanted. I will leave $W_2$ to you. For the basis, your answer is wrong. Be careful with your representation of polynomials as a tuple. One can show that $$ \lbrace\; x^4+x^2,x^3,x\;\rbrace $$ is a basis for $W_1$. With your notation, this gives $$\lbrace\; (1,0,1,0,0),(0,1,0,0,0),(0,0,0,1,0)\;\rbrace $$ as a basis. Don't forget you need to show that every element of $W_1$ is a linear combination of the basis by explicit calculations, and that the proposed basis is linearly independent. Again, I leave $W_2$ to you. The answer you gave for the dimension is correct. It is the size of the basis. I hope this helps you.
If $f:\mathbb R \to \mathbb R$ satisfies $f'(x) <L$ for some $L<1$, then $f$ has a unique fixed point
Consider the function $g(x)=x-f(x)$, which is differentiable and satisfies $g'(x) > \delta$ for some $\delta>0$ (namely $\delta=1-L$). Fixed points of $f$ are precisely the zeros of $g$. Now $g$ is increasing everywhere, so it can't have two distinct zeros (thanks to Rolle's theorem). On the other hand, for $x>0$, we have $g(x)-g(0) = \int_0^x g'(t) \,dt > \int_0^x \delta\,dt = \delta x$; therefore $g(x)>0$ when $x$ is sufficiently positive. Similarly, $g(x)<0$ when $x$ is sufficiently negative. It follows from the intermediate value theorem that $g$ does have a zero.
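As an illustration (a sketch; the particular $f(x)=\tfrac12\cos x$ is an arbitrary choice satisfying $|f'|\le\tfrac12<1$):

```python
import numpy as np
from scipy.optimize import brentq

f = lambda x: 0.5 * np.cos(x)   # f' = -0.5 sin x, so |f'| <= 0.5 < 1
g = lambda x: x - f(x)          # fixed points of f are the zeros of g

root = brentq(g, -10, 10)       # g is increasing, so this zero is unique
assert abs(f(root) - root) < 1e-12
```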
Number of Perfect Powers in Pascal's Triangle
There's a theorem by Erdős which (partially) solves your problem: Theorem. The equation $$\binom{n}{k}=m^l$$ has no integer solutions with $l \geq 2$ and $4 \leq k \leq n-4$. You can find a proof in Proofs from the BOOK or directly in Erdős's paper here http://www.renyi.hu/~p_erdos/1951-05.pdf This means that on row $n$ there are at most $8$ perfect powers and so: $$f(n)-f(n-1) \leq 8$$ It seems plausible that $f(n)-f(n-1) \leq 6$ but this would mean to prove that $\binom{n}{1}$, $\binom{n}{2}$, $\binom{n}{3}$ can't be perfect powers simultaneously (and I don't know how to do this).
Understanding smooth manifolds
You can cover $E$ with an atlas of 2 charts. One is $E - \{(0,0,\sqrt{2})\}$, and the other is $E - \{(0,0,-\sqrt{2})\}$. For both of these charts, the associated map is a stereographic projection, which is a diffeomorphism to a plane. This is, of course, modulo a lot of details. You should check that the transition map is smooth. Also, disclaimer: I work with manifolds very little so some of my phrasing and terminology might be a bit off.
Invertible Proof with transposed matrices
Use $(M^T)^{-1} = (M^{-1})^T$, $(M_1M_2)^T = M_2^TM_1^T$, $(M_1M_2)^{-1}= M_2^{-1}M_1^{-1}$ to see that $$(C^{-1}B^{-1})^{T}= ((BC)^{-1})^T = ((BC)^T)^{-1} = (C^TB^T)^{-1}. \quad (*)$$ Now note that $$(A^{T}A)^{-1}(X +B^ {T})(C^{-1}B^{-1})^{T} = I \overset{(1)}{\iff} (X +B^ {T})(C^{-1}B^{-1})^{T} = A^TA \\ \overset{(*)}{\iff} (X+B^{T})(C^{T}B^{T})^{-1} = A^TA \overset{(2)}{\iff} X+B^{T}=A^TAC^{T}B^{T} \overset{(3)}{\iff} X =(A^TAC^{T}-I)B^{T}.$$ (1) Multiply on the left by $A^TA$ (2) Multiply on the right by $C^{T}B^{T}$ (3) Subtract $B^T$ and factorize In particular if $A^TA=I,$ then $X = (C^T-I)B^T$.
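A quick numerical spot-check of the final formula with random matrices (a sketch; `n = 4` is arbitrary, and random Gaussian matrices are generically invertible):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A, B, C = (rng.normal(size=(n, n)) for _ in range(3))

# Candidate solution X = (A^T A C^T - I) B^T from the derivation above.
X = (A.T @ A @ C.T - np.eye(n)) @ B.T

# Plug back into (A^T A)^{-1} (X + B^T) (C^{-1} B^{-1})^T and compare with I.
lhs = np.linalg.inv(A.T @ A) @ (X + B.T) @ (np.linalg.inv(C) @ np.linalg.inv(B)).T
assert np.allclose(lhs, np.eye(n))
```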
How many 5 digit numbers contain all the digits 1,2,3,4,5 and have the property that each pair of adjacent digits has a difference of at least 2?
To quickly find all possibilities, consider $5$ cases: $1. \ 3\, \_\, \_\, \_\, \_$ $2. \ \_\, 3\, \_\, \_\, \_$ $3. \ \_\, \_\, 3\, \_\, \_$ $4. \ \_\, \_\, \_\, 3\, \_$ $5. \ \_\, \_\, \_\, \_\, 3$ Explanation: $3$ is very convenient since only $1$ and $5$ can be next to it. Also, note that if we apply $x\mapsto 6 - x$ to each numeral, we will get a new solution from the old one; and that is why we like $3$, it is fixed under this map. So, why do we consider this? Because, for each of the above cases, there will always be two subcases where $1$ and $5$ switch places. This is exactly what $x\mapsto 6-x$ does. Thus, our cases become $1'. \ 3\, 1\, \_\, \_\, \_$ $2'. \ 5\, 3\, 1\, \_\, \_$ $3'. \ \_\, 5\, 3\, 1\, \_$ $4'. \ \_\, \_\, 1\, 3\, 5$ $5'. \ \_\, \_\, \_\, 1\, 3$ but we can ignore numbers $4.$ and $5.$ since they are just mirror images of $1.$ and $2.$ If we denote with $n_i$ the number of possibilities in $i$-th case, we get total number of $$n = n_1+n_2+n_3+n_4+n_5 = 2n_1 + 2n_2 + n_3 = 4n_1'+4n_2' + 2n_3'.$$ Now, to counting: $1.'$ Since $4$ and $5$ cannot be next to each other, we must have $2$ sitting on the $4$-th place, i.e. $3\, 1\, \_\, 2\, \_$ and we have two possibilities from here: $3\, 1\, 4\, 2\, 5$ and $3\, 1\, 5\, 2\, 4$. Thus $n_1' = 2$. $2.'$ and $3.'$ Obviously, there is only one possibility in each case: $5\, 3\, 1\, 4\, 2$ and $2\, 5\, 3\, 1\, 4$. Thus $n_2'=n_3' = 1$. Finally, $n = 4\cdot 2+4\cdot 1+2\cdot 1 = 14$. Addendum: You can just ignore the stuff with $x\mapsto 6-x$ and simply write it down. However, if you had similar problem with $2k+1$ numerals, it would halve your work.
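For skeptics, a brute-force confirmation of the count (a minimal sketch; $14$ is the value derived above):

```python
from itertools import permutations

# Count arrangements of 1..5 where every adjacent pair differs by at least 2.
count = sum(1 for p in permutations((1, 2, 3, 4, 5))
            if all(abs(a - b) >= 2 for a, b in zip(p, p[1:])))
assert count == 14
```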
Sturm-Liouville and Continuity
The usual theorems on existence and uniqueness of solutions for an equation $$ y''=f(x,y,y') $$ require $f$ to be continuous. When the equation is $$ y'' = -R(x)y' - (Q(x) - \lambda P(x))y $$ this means that you want $P$, $Q$ and $R$ continuous. Also, in general you want your solution to be $C^2$, which means that the right hand side must be continuous. Without the continuity condition, the definition of solution and the results of existence and uniqueness of solution are more complicated. Consider for instance the equation $y'=p(x)y$ where $p(x)=0$ if $x<0$, $p(x)=1$ if $x\ge0$. The solution is $$ y(x)=\begin{cases} C & x<0,\\ C\,e^x & x\ge0,\end{cases} $$ which is not differentiable at $x=0$ unless $C=0$. So, in what sense does $y$ satisfy the equation at $x=0$?
What is the quickest way to simplify $\left(\frac{2u^{-5}v^2}{8w}\right)^{-2}$? (Barton's College Placement Test)
Here is my attempt: $$\begin{align*}\left(\dfrac{2u^{-5}v^2}{8w}\right)^{-2} & = \left(\dfrac{u^{-5}v^2}{4w}\right)^{-2} \text{ (Cancel 2 from top and bottom)} \\ & = \left( u^{-5}v^24^{-1}w^{-1} \right)^{-2} \text{ }\left(\text{Because }\dfrac{1}{a} = a^{-1} \right) \\ & = u^{(-5)(-2)}v^{2(-2)}4^{(-1)(-2)}w^{(-1)(-2)} \left(\text{Because }(a^bc^d)^e = a^{be}c^{de} \right) \\ & = 16u^{10}v^{-4}w^2 \left(\text{By commutativity of multiplication}\right)\end{align*}$$ Is this what you are looking for?
$ \lim_{x \to \infty} x(\sqrt{2x^2+1}-x\sqrt{2})$
While one way to proceed here is to write the term $\sqrt{2x^2+1}-\sqrt{2}x=\frac{1}{\sqrt{2x^2+1}+\sqrt{2}x}$, a second way is to write $$\sqrt{2x^2+1}=\sqrt{2}x\left(1+\frac{1}{4x^2}+O\left(\frac{1}{x^4}\right)\right)$$ so that $$\begin{align} x\left(\sqrt{2x^2+1}-\sqrt{2}x\right)&=\sqrt{2}x^2\left(1+\frac{1}{4x^2}+O\left(\frac{1}{x^4}\right)\right)-\sqrt{2}x^2\\\\ &=\frac{\sqrt{2}}{4}+O\left(\frac{1}{x^2}\right)\\\\ &\to \frac{\sqrt{2}}{4} \end{align}$$
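Incidentally, the rationalised form from the first method is also the numerically stable way to evaluate this expression; a small sketch (the value of `x` is arbitrary):

```python
import numpy as np

x = 1e8
naive = x * (np.sqrt(2 * x**2 + 1) - np.sqrt(2) * x)   # catastrophic cancellation
stable = x / (np.sqrt(2 * x**2 + 1) + np.sqrt(2) * x)  # rationalised form
print(naive, stable, np.sqrt(2) / 4)  # only the stable form is near sqrt(2)/4
```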
Integrals over derived Harmonic functions
As @r9m pointed out in the comments section, the assumption in the question follows from Green's theorem.
Let $F$ be a field in which $1+1=0$. Prove that for any $x \in F$, we have $x=-x$ (i.e. any element in F equals its own negative.)
What's $(1+1)x$? Can you do some algebra on that?
A group is simple if and only if its homomorphic images are the trivial group and $G$ itself (up to isomorphism)
This is not true. It holds for finite groups, but not in general. The issue is that a surjective homomorphism $G\rightarrow G$ need not be an isomorphism. If there exists a non-injective surjective homomorphism $G\rightarrow G$ then $G$ is called non-Hopfian. See here for more details and explicit examples (easiest example: $\mathbb{Z}\times\mathbb{Z}\times\mathbb{Z}\times\cdots$). The Prüfer group $\mathbb{Z}[\frac{1}{p}]/\mathbb{Z}$ gives a counterexample to your specific question: it is not simple, but all homomorphic images are either isomorphic to itself or are trivial (see here). If you restate your question to be about finite groups instead, then it is a relatively straightforward exercise on the first isomorphism theorem, as the comments and the other answer are getting at.
Family of maps from inverse system to another space
It will certainly be the case whenever you have $\phi_\alpha = \phi_\beta\circ \psi^\alpha_\beta$, where $\psi^\alpha_\beta:X_\alpha\to X_\beta$ is the bonding map of the inverse system $\{X_\alpha\}$, $\alpha \ge\beta$. In fact this follows by the same argument which one can use to prove the case you provided as motivation... You can represent $Z$ as the limit of the trivial inverse system $\{Z_\alpha\}$, where $\alpha$ belongs to the same indexing set as in the case of $\{X_\alpha\}$, all $Z_\alpha$ are $Z$ itself, and the respective projections are identities. Then if $\phi_\alpha = \phi_\beta\circ \psi^\alpha_\beta$ holds, your family of mappings satisfies the definition of a morphism of inverse systems over the same indexing set. Every such morphism induces a map of the limits.
Prove Pascal's formula by induction
This is probably a mistake. The textbook exercise says to use (3) to prove (3) as a hint, which is kind of dumb. It probably meant to tell you to first prove (3), then prove the rest of the identities using (3). Most natural proofs of Pascal's identity do not use induction. There are trivial proofs "by induction". That is, we can turn a normal proof into an inductive proof. For example: We induct on $n$. For $n=1$, we have $\binom 1r = \binom0r + \binom{0}{r-1}$ since this is either saying $1=0+1$ when $r=1$, $1=1+0$ when $r=0$, or $0=0+0$ for all other $r$. Now suppose that Pascal's identity holds for $n-1$ instead of $n$. Without using this hypothesis in the least, we check that $$\binom{n-1}r + \binom{n-1}{r-1} = \frac{(n-1)!}{r! (n-1-r)!} + \frac{(n-1)!}{(r-1)!(n-r)!} = \frac{n!}{r!(n-r)!}\left(\frac{n-r}{n} + \frac rn\right) = \binom nr.$$ Therefore by induction the formula holds for all $n$. This is stupid. There are non-trivial proofs by induction if we allow ourselves some good starting hypotheses. To do a decent induction proof, you need a recursive definition of $\binom nr$. Usually, that recursive definition is the formula $\binom nr = \binom{n-1}r + \binom{n-1}{r-1}$ we're trying to prove here. But if we start with something else, we can prove Pascal's identity. (Usually, the proof goes the other way, though.) Here's one example: Let $\binom nr$ be defined by the recursive formula $\binom nr = \sum_{k&lt;n} \binom k{r-1}$, with base case $\binom n0 = 1$ for all $n$ and $\binom0r = 0$ for $r&gt;0$. Then $\binom nr = \binom{n-1}r + \binom{n-1}{r-1}$. Proof. We induct on $r$. When $r=1$, we can check from the recursive formula that $\binom n1 = n$, and this satisfies $\binom n1 = \binom{n-1}{1} + \binom{n-1}{0}$. Now suppose that Pascal's identity holds for all binomial coefficients $\binom{n'}{r'}$ with $r'&lt;r$. Then \begin{align} \binom nr &amp;= \sum_{k&lt;n} \binom k{r-1} \\ &amp;= \sum_{k&lt;n} \binom{k-1}{r-1} + \sum_{k&lt;n} \binom{k-1}{r-2} \\ &amp;= \sum_{j&lt;n-1} \binom j{r-1} + \sum_{j&lt;n-1} \binom{j}{r-2} &amp; \text{(setting $j=k-1$)} \\ &amp;= \binom{n-1}{r} + \binom{n-1}{r-1}. \end{align} Therefore $\binom nr = \binom{n-1}r + \binom{n-1}{r-1}$ as well, and by induction the identity holds for all $n,r$.
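A quick sketch (the helper `binom` is illustrative, memoised for speed) checking that the alternative recursive definition used in the last proof reproduces the usual binomial coefficients, and hence satisfies Pascal's identity:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def binom(n: int, r: int) -> int:
    # Recursive definition from the proof: C(n, r) = sum_{k<n} C(k, r-1),
    # with base cases C(n, 0) = 1 and C(0, r) = 0 for r > 0.
    if r == 0:
        return 1
    if n == 0:
        return 0
    return sum(binom(k, r - 1) for k in range(n))

assert all(binom(n, r) == comb(n, r) for n in range(12) for r in range(12))
assert all(binom(n, r) == binom(n - 1, r) + binom(n - 1, r - 1)
           for n in range(1, 12) for r in range(1, 12))
```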