Arc Length Parametrization and Unit Tangent
When you have a regular curve $r\colon I \to \Bbb R^3$, you can write $r(t) = \widetilde{r}(s(t))$, where $\widetilde{r}$ has unit speed and $s = s(t)$ denotes arclength. When you write $T(t)$ and $T(s)$, you're supposed to think of "$T_r(t)$" and "$T_{\widetilde{r}}(s)$". They're related by $T_{r}(t) = T_{\widetilde{r}}(s(t))$ for all $t \in I$; the point is that these vectors are based at the same point on the curve (which is described by different parameters, depending on the parametrization you use).
How to express in closed form?
Does ${\rm sgn}(x) = 0$ if $x = 0$? If so, then using Zhen's indicator-function idea and simplifying, $$f(z) = {\rm sgn}({\rm Re}(z)) - {\rm sgn}({\rm Im}(z)) (1-{\rm sgn}({\rm Re}(z))^2)$$ But you may find that with $\rm sgn$, any derivation you attempt will probably want to be split into the very cases that are already conveniently split up. That is, even though at first the definition by cases seems more obscure and not 'closed form', it may in the end be easier to manipulate in the 'by cases' form. ('Closed form' has many interpretations, and 'by cases' can fall on either side.)
Calculating time from ratios given? (Algebra)
Work done by Tom in $3$ hours $= 1$ fence, so work done by Tom in $1$ hour $= \frac13$ fence. Similarly, work done by John in $1$ hour $= \frac12$ fence. Work done in $1$ hour by the two of them $= \frac13 + \frac12 = \frac56$ fence. Time taken for $1$ fence $= \frac{1}{5/6} = \frac65$ hours.
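The same computation as a quick Python check, using exact rational arithmetic (the variable names are mine):

```python
from fractions import Fraction

# Work rates in fences per hour
tom = Fraction(1, 3)   # Tom builds 1 fence in 3 hours
john = Fraction(1, 2)  # John builds 1 fence in 2 hours

combined = tom + john              # 5/6 of a fence per hour
time_for_one_fence = 1 / combined  # hours needed for 1 fence

print(time_for_one_fence)  # 6/5
```

So together they need $6/5$ hours, i.e. $1$ hour $12$ minutes.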
Let $f:\mathbb{R}\rightarrow\mathbb{R}$ be a continuous function such that $\int_{x}^{1}f(t)dt\geq(1-x)^{2}$. Prove that $f(1)=0$
This is quite elementary. For $x<1$ the hypothesis gives $\int_{x}^{1}f(t)\,dt\geq(1-x)^{2}\geq0$; dividing by $1-x>0$, $$\frac{1}{1-x}\int_{x}^{1}f(t)\,dt\geq 1-x,$$ and as $x\to1^-$ the left side tends to $f(1)$ (by continuity of $f$ and the fundamental theorem of calculus), so $f(1)\geq0$. For $x>1$ we switch the limits, picking up a negative sign: the hypothesis becomes $-\int_{1}^{x}f(t)\,dt\geq(1-x)^{2}$, i.e. $\int_{1}^{x}f(t)\,dt\leq-(1-x)^{2}\leq0$. Dividing by $x-1>0$ and letting $x\to1^+$ gives $f(1)\leq0$. Combining the two inequalities, $f(1)=0$.
Osborne's rule for hyperbolic functions?
You also have to look at when $4n+1$ powers of $\sinh$ occur: for example, if such a power only occurs when expressing $\sinh(a+b)$, then you divide by $i$ once, leaving the identity unchanged. For $4n+2$ powers occurring only in identities involving $\cosh(a+b)$ you don't divide, because $\cos(ix) = \cosh(x)$. For similar reasons, with $4n+3$ you divide by $-i$. This rule can be proven case by case, but in my opinion the cases require far too much unnecessary work to describe with any single intuition.
Understanding complex functions in w - and z - plane
$$x+y=1\iff y=1-x$$ and you have the straight line $\;y=1-x\;$ in the complex plane, which you can also express as the set $$\{z\in\Bbb C\;:\;\;z=x+(1-x)i\;,\;\;x\in\Bbb R\}$$ If you take a general element of this set and apply the transformation $\;w\;$ to it, we get $$w(x+(1-x)i):=x-(1-x)i$$ and the image is the straight line $\;y=-(1-x)=x-1\;$.
Evaluating arithmetic sum using prime factorization
To start, you should verify that if the prime factorization of $m$ is equal to $p_1^{\alpha_1}p_2^{\alpha_2}\cdots p_i^{\alpha_i}$, then $f(m) = f(p_1^{\alpha_1}) \times f(p_2^{\alpha_2})\times \dots \times f(p_i^{\alpha_i})$. In other words, $f$ is what is termed a multiplicative function in number theory. Use the Chinese remainder theorem for this.
Find the complement of the expression
What you got doesn't simplify any further! If you were to draw a K-map, you'd find that your expression $a'c'+a'b+ab'c$ is already the 'simplest' one.
Legendre polynomial problem
I may have figured it out, actually; it's very simple. If $$x=\cos\theta$$ then $$dx/d\theta=-\sin\theta,$$ and then just invert to get $d\theta/dx$.
How to factor intricate polynomial $ ab^3 - a^3b + a^3c -ac^3 +bc^3 - b^3c $
$ab^3 - a^3b + a^3c -ac^3 +bc^3 - b^3c =$ $ab^3 - a^3b + a^3c -b^3c -ac^3 + bc^3=$ $ab(b^2-a^2)+c(a^3-b^3)-c^3(a-b)=$ $ab(a+b)(b-a)+c(a-b)(a^2+ab+b^2)-c^3(a-b)=$ $(a-b)[-ab(a+b)+c(a^2+ab+b^2)-c^3]=$ $(a-b)(-a^2b-ab^2+ca^2+cab+cb^2-c^3)=$ $(a-b)(ca^2-a^2b+cab-ab^2+cb^2-c^3)=$ $(a-b)[a^2(c-b)+ab(c-b)+c(b^2-c^2)]= $ $(a-b)(c-b)[a^2+ab-c(b+c)]=$ $(a-b)(c-b)(a^2+ab-bc-c^2)=$ $(a-b)(c-b)(a^2-c^2+ab-bc)=$ $(a-b)(c-b)[(a-c)(a+c)+b(a-c)]=$ $(a-b)(c-b)(a-c)(a+b+c)$ The suggestion of replacing $b$ with $a$ was the following: if in such an expression when you replace $b$ with $a$ you get a $0$ then one of the factors may be $(a-b)$. In the future if you want an idea of the factors that appear you could write on wolframalpha factor$(ab^3 - a^3b + a^3c -ac^3 +bc^3 - b^3c)$
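The final factorization is also easy to double-check by brute force; here is a throwaway verification script (mine, not part of the derivation):

```python
# Compare the original expression with (a-b)(c-b)(a-c)(a+b+c)
# on every integer triple in [-5, 5]^3; integer arithmetic is exact.
for a in range(-5, 6):
    for b in range(-5, 6):
        for c in range(-5, 6):
            lhs = a*b**3 - a**3*b + a**3*c - a*c**3 + b*c**3 - b**3*c
            rhs = (a - b)*(c - b)*(a - c)*(a + b + c)
            assert lhs == rhs, (a, b, c)
print("factorization verified")
```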
If $h$ has positive derivative and $\varphi$ is continuous and positive, where is $f$ increasing and decreasing?
Overall, your approach looks fine. But the devil is in the details… First of all, your derivative obtained via the Chain Rule doesn't look quite right. It should be $$f'(x)=h'\left(\int_{0}^{\frac{x^4}{4}-\frac{x^2}{2}}\varphi(t)\,dt\right)\cdot\varphi\left(\frac{x^4}{4}-\frac{x^2}{2}\right)\cdot(x^3-x).$$ You were missing the last part of the Chain Rule when applied to integrals with variable upper limit. Second, for some reason you were solving inequalities with $\frac{x^4}{4}-\frac{x^2}{2}$ being either greater or less than zero, as if it's a factor in the derivative — but it isn't! You have $\varphi$ OF $\left(\frac{x^4}{4}-\frac{x^2}{2}\right)$, not multiplied by it. And third, you don't need to think much about $h'(\cdots)$ and $\varphi(\cdots)$ being positive or negative — simply because both are given to be always positive. Therefore, $f'(x)$ has the same sign as $(x^3-x)$, and so the only inequalities you need to solve are $x^3-x>0$ and $x^3-x<0$.
Galerkin method for nonlinear ode
You missed nothing: the product is non-linear. However, why don't you extend your polynomial expansion as $$\int \Big(1+\sum_{i=1}^n\alpha_i x^i\Big)^2dx\equiv\int \Big(1 + \sum_{i=1}^{2n}\tilde{\alpha}_i x^i\Big) dx.$$ The product $u\cdot u$ is still a polynomial, only with a higher degree, namely $2n$. You then get the Galerkin solution if you integrate $$\int \Big(1 + \sum_{i=1}^{2n}\tilde{\alpha}_i x^i\Big) dx.$$ The Galerkin solution consists of the first $n$ coefficients $\tilde{\alpha}_i$. Simply put: you only consider the first $n$ coefficients; coefficients beyond $n$ are neglected. The truncation of the additional $n$ modes can be interpreted as a projection from a $2n$-dimensional space onto an $n$-dimensional one, in which the residual is orthogonal to the chosen subspace. This is the key property of the Galerkin approach. Regards
Triangles question and Area
First, you should notice that: $$P_{ABD}=2P_{DEF}\tag{1}$$ Why is that so? Obviously $AD=DF$. But the height of triangle $ABD$ drawn from point $B$ perpendicular to the side $AD$ is twice the height of triangle $DEF$ drawn from point $E$ perpendicular to the side $DF$ (because $DB=2DE$). A strict proof is elementary and I leave it up to you. In the same way you can show that: $$P_{AFC}=P_{AEB}=2P_{DEF}\tag{2}$$ From (1) and (2) it follows that the area $P$ of triangle $ABC$ is: $$P=7P_{DEF}=0.07\text{m}^2=7\text{dm}^2$$ Triangle $ABC$ is isosceles with $AC=b=\sqrt{14}\text{dm}$. Denote with $a$ the length of the base $AB$ and with $h$ the corresponding height drawn from point $C$ perpendicular to segment $AB$. We have the following equations: $$\frac{ah}2=P\tag{3}$$ $$\left(\frac a2\right)^2+h^2=AC^2=b^2\tag{4}$$ From (3): $$h=\frac{2P}a\tag{5}$$ Replace that into (4): $$\frac{a^2}{4}+\frac{4P^2}{a^2}=b^2$$ $$a^4-4b^2a^2+16P^2=0$$ $$a^2=\frac{4b^2\pm\sqrt{16b^4-64P^2}}{2}$$ $$a^2=2b^2\pm2\sqrt{b^4-4P^2}$$ $$a^2=2\cdot14\pm2\sqrt{14^2-4\cdot7^2}=28$$ $$a=2\sqrt{7}\text{dm}$$
If $T$ admits quantifier elimination in $\mathcal{L}$, does it admit quantifier elimination in $\mathcal{L}(c)$?
Yes, the converse is true, and I think it's actually the easier direction. Hint: If the $\mathcal{L}$-formulas $\varphi(x,y)$ and $\psi(x,y)$ are equivalent modulo $T$, then the $\mathcal{L}(c)$-formulas $\varphi(x,c)$ and $\psi(x,c)$ are equivalent modulo $T$. Alternatively, you can give a proof using the test for QE quoted in your question. Similarly to the hint above, the idea is to take an $\mathcal{L}(c)$-formula $\varphi(x,c)$ and consider the $\mathcal{L}$-formula $\varphi(x,y)$ obtained by replacing the constant $c$ by a variable $y$.
Differential of definite integral
$$...=\exp(-y(y+1))+\int_0^y\frac{\partial }{\partial y}\exp(-y(x+1))\,\mathrm d x.$$ In general, $$\frac{d}{dt}\int_{g(t)}^{h(t)}f(x,k(t))\,\mathrm dx=h'(t)\,f(h(t),k(t))-g'(t)\,f(g(t),k(t))+k'(t)\int_{g(t)}^{h(t)}\frac{\partial }{\partial y}f(x,k(t))\,\mathrm d x,$$ where $\frac{\partial}{\partial y}$ denotes the partial derivative with respect to the second argument of $f$.
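The differentiation-under-the-integral rule can be sanity-checked numerically. In the sketch below (my own script; the closed form for $\int_0^y e^{-y(x+1)}\,dx$ and the midpoint quadrature are just convenient choices), the numerical derivative of the integral matches the boundary term plus the integral of the partial derivative:

```python
import math

def midpoint(f, a, b, n=4000):
    """Simple midpoint-rule quadrature of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# F(y) = integral_0^y exp(-y(x+1)) dx has the closed form (e^{-y} - e^{-y(y+1)})/y
F = lambda y: (math.exp(-y) - math.exp(-y * (y + 1))) / y

y, h = 1.0, 1e-5
lhs = (F(y + h) - F(y - h)) / (2 * h)  # numerical d/dy of the integral

# Boundary term plus the integral of the partial derivative w.r.t. y
rhs = math.exp(-y * (y + 1)) + midpoint(lambda x: -(x + 1) * math.exp(-y * (x + 1)), 0.0, y)

assert abs(lhs - rhs) < 1e-6
```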
Ratio of decreasing functions
No, take $g(x) = e^{-2x}$ and $f(x) = 2e^{-x}$. The reason for this is that for $\frac{f(x)}{g(x)}$ to decrease, we must have $$\frac{f'g - fg'}{g^2} < 0$$ which holds if and only if $\frac{f'}{f} < \frac{g'}{g}$, because $f, g$ are positive. Clearly $f' < g'$ is not sufficient, since the weights $\frac{1}{f}, \frac{1}{g}$ can change the inequality sign. In other words, the condition equivalent to $\frac{f}{g}$ being decreasing is that the relative decrease of $f$ be faster than the relative decrease of $g$, while the assumption only ensures this for the absolute decreases of $f$ and $g$.
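The counterexample is easy to confirm numerically (a throwaway check of mine):

```python
import math

f = lambda x: 2 * math.exp(-x)        # decreasing
g = lambda x: math.exp(-2 * x)        # decreasing
fp = lambda x: -2 * math.exp(-x)      # f'
gp = lambda x: -2 * math.exp(-2 * x)  # g'

for x in [0.5, 1.0, 2.0, 5.0]:
    assert fp(x) < 0 and gp(x) < 0   # both functions decrease
    assert fp(x) < gp(x)             # f' < g' at these points, yet...
    # ...the ratio f/g = 2e^x still increases:
    assert f(x + 0.1) / g(x + 0.1) > f(x) / g(x)
print("counterexample confirmed")
```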
Recurrence theorem (Poincaré)
After reading more carefully: this is not Poincaré's recurrence theorem; it is much weaker. One can prove the much stronger statement that the set of points which do not come back to $U$ infinitely often has measure $0$. The claim here is simply that some point comes back to $U$ at some time (which immediately implies it comes back infinitely often, by repeating the argument). A function $g\colon M\to M$ is said to conserve volume if for each Jordan-measurable subset $J\subset M$ we have $\text{vol}(g^{-1}(J))=\text{vol}(J)$. Proposition: Let $M\subset\mathbb{R}^N$ be a Jordan-measurable set and $g\colon M\to M$ a bijective, continuous function that conserves volume and whose inverse is also continuous. Then in each open Jordan-measurable set $U\subset M$ there is a point $x\in U$ and a number $n\geqslant 1$ such that $g^n (x)\in U$. Proof: Because $M$ is Jordan-measurable we have $\text{vol}(M) < \infty$. (This is important, and we use it strongly; in fact the result is false if $\text{vol}(M) = \infty$: consider a translation on $\mathbb{R}$, where a small interval has no recurrent point.) Furthermore, since $g$ conserves volume, we have $$ \text{vol}(U)=\text{vol}(g^{-n}(U))~\forall~n\in\mathbb{N}.~~~~~~~~~~~(*) $$ Suppose, by way of contradiction, that $U,g^{-1}(U), g^{-2}(U), \ldots$ are pairwise disjoint (since $g$ is continuous, these sets are all open, hence measurable). By $(*)$, all of these sets have the same volume. Note that $V_m = \cup_{i=1}^{m} g^{-i}(U)$ is open (so measurable), being a finite union of open sets. By disjointness, $$\text{vol}(V_m) = \text{vol}\Big(\cup_{i=1}^{m} g^{-i}(U)\Big) = \sum_{i=1}^{m} \text{vol}(g^{-i}(U)) = \sum_{i=1}^{m} \text{vol}(U) = m\,\text{vol}(U).$$ Why is this impossible? We have $\text{vol}(V_m) = m\,\text{vol}(U)$ and $V_m \subset M$, so $\text{vol}(V_m) \leq \text{vol}(M)$. Taking $m$ large enough, this is a contradiction, using that $\text{vol}(U) >0$ (the result is false without this assumption; take $U$ to be, say, the empty set). 
So there are $j>k\geqslant 0$ with $$ g^{-j}(U)\cap g^{-k}(U)\neq\emptyset. $$ From this it follows, using that $g$ is a bijection, that $$ \emptyset\neq g^{k}(g^{-j}(U)\cap g^{-k}(U))=g^{k-j}(U) \cap U. $$ So for $n=j-k \geqslant 1$ there is a point $x\in U$ with $g^{-n}(x)\in U$, equivalently a point of $U$ that returns to $U$ under $g^n$. From here I would recommend looking at any ergodic theory book (I recommend the one by Paul Halmos), finding Poincaré's recurrence theorem and reading it. It will be enlightening: even if you do not fully understand the definitions, it will show you that the result above can be strengthened in a big way, by a (much) better idea than just looking at the $V_m$.
Problem with notation in algebraic topology.
That is an example of a slice category. The objects are all arrows in the category of topological spaces with codomain $B$, and for two arrows $f \colon X \rightarrow B$ and $g \colon Y \rightarrow B$ the class of morphisms $\text{Hom}(f,g)$ in the slice category is given by all arrows $h \colon X \rightarrow Y$ in the category of topological spaces such that $f = g \circ h$. For example, the category of pointed topological spaces can be seen as the category of topological spaces under a point (if you fix a domain). Analogously for the category of $R$-algebras over a given ring $R$. This means you have probably already seen coslice categories, and the category you are asking about is the dual notion. In tom Dieck's book you find a detailed explanation in chapter 2.2, "Further Homotopy Notions", page 32.
Why are these all the eigenvalues/eigenvectors of this transformation?
Let $T$ denote the transformation. Note that the eigenspace associated with the eigenvalue $0$ consists of the set of all odd functions within the domain. Now, show that $Tf$ is itself an odd function for all $f$. It follows that for all functions $f$ from the domain, we have $T(T(f)) = 0$. Now, suppose that $\lambda$ is an eigenvalue of $T$ and that $f \neq 0$ is an associated eigenvector. We have $T(T(f)) = 0 \implies \lambda^2 f = 0$. This implies that $\lambda^2 = 0$, which means that $\lambda = 0$. So, the only possible eigenvalue of $T$ is $0$.
Calculate repayment amount on loan
Let $x$ be the starting loan amount, and $d$ be the monthly interest expressed as a decimal (i.e. $22$% interest would be $d = 1.22$ so that multiplying by $d$ corresponds to adding $22$% of interest). We are looking to find $y$ where $y$ is a fixed payment per month subject to: $$ d\cdot\left(d\cdot\left((d\cdot x) -y\right)-y\right)-y = 0 $$ Expressing this equation as an equation in terms of $y$ gives $$d^3x-d^2y-dy-y = 0$$ So $$y = \frac{d^3x}{1+d+d^2}$$ With a starting value of $200$ and monthly interest of $22$%, this becomes $$y=\frac{1.22^3 \times 200}{1+1.22+1.22^2}\approx 97.93$$
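The same calculation as a small Python function (the function name and the generalization to an arbitrary number of `months` are my own):

```python
def fixed_payment(principal, monthly_rate, months=3):
    """Fixed monthly payment y that clears the loan after `months` payments,
    with interest applied before each payment (the convention used above)."""
    d = 1 + monthly_rate
    # y * (1 + d + ... + d**(months-1)) = d**months * principal
    geometric = sum(d**k for k in range(months))
    return d**months * principal / geometric

print(round(fixed_payment(200, 0.22), 2))  # 97.93
```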
Show that if $a ≡ b \pmod m$ and $c ≡ d \pmod m$, where $a, b, c, d$, and $m$ are integers with $m ≥ 2$, then $a − c ≡ b − d \pmod m$.
Write $$a = mk + b, \qquad c = mk' + d$$ for integers $k, k'$. Subtracting, $$a - c = m(k - k') + (b - d) = mk'' + (b-d), \qquad k'' = k - k',$$ which says exactly that $a - c \equiv b-d \pmod m$. For example, try it with $m=7$, $a=101$, $c=71$: then $k=14$, $b=3$, $k'=10$, $d=1$, and indeed $a-c=30=7\cdot 4+2$ with $k''=4=k-k'$ and $b-d=2$.
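Here is the worked example as a quick Python check, using `divmod` to produce the quotients and remainders at once:

```python
m, a, c = 7, 101, 71
k, b = divmod(a, m)    # a = m*k  + b  ->  k = 14, b = 3
kp, d = divmod(c, m)   # c = m*k' + d  ->  k' = 10, d = 1

# a - c = m*(k - k') + (b - d), hence a - c = b - d (mod m)
assert a - c == m * (k - kp) + (b - d)
assert (a - c) % m == (b - d) % m
print(k, kp, b, d)  # 14 10 3 1
```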
Procedure for 3 by 3 Nonhomogeneous Linear Systems (Differential Equations)
Given: $$A = \begin{bmatrix} 5 & -3 & -2 \\ 8 & -5 & -4 \\ -4 & 3 & 3 \end{bmatrix}, ~~ F(t) = \begin{bmatrix} - \sin t \\ 0 \\ 2 \end{bmatrix}$$ We can find the solution to this system using: $$X(t) = e^{At}X_0 + \int_{t_0}^t e^{A(t-s)}F(s)~ds$$ If we find the Jordan Normal Form of the matrix, we have: $$A = PJP^{-1} = \begin{bmatrix} 0 & -2 & 0 \\ 1 & -4 & 0 \\ -\dfrac{3}{2} & 2 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} -2 & 1 & 0 \\ -\dfrac{1}{2} & 0 & 0 \\ -2 & \dfrac{3}{2} & 1 \end{bmatrix} $$ Update How did we find these eigenvectors? $A$ has only two Jordan blocks for only one eigenvalue $\lambda = 1$, so: $$A-I \ne 0 \\ (A-I)^2 = 0$$ Thus, we can take any vector $v_3$ such that $(A-I)v_3 \ne 0$, which becomes a generalized eigenvector of rank $2$. So, take: $$v_3 = (0,0,1) \implies (A-I)v_3 \ne 0$$ Now, we have: $$v_2 = (A-I)v_3 = (-2,-4,2)$$ Lastly, we need a third linearly independent eigenvector such that: $$(A-I)v_1 = 0 \implies v_1 = \left(0,1,-\dfrac{3}{2}\right)$$ Note: $P$ is made up of three generalized eigenvectors, $P = [v_1~|~v_2~|~v_3]$, from the single eigenvalue, since we need three linearly independent eigenvectors. 
We can now find $e^{Jt}$ as: $$e^{Jt} =e^t\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & t \\ 0 & 0 & 1 \end{bmatrix}$$ We can now write the fundamental matrix as: $\phi(t) = Pe^{Jt}P^{-1} = \begin{bmatrix} 0 & -2 & 0 \\ 1 & -4 & 0 \\ -\dfrac{3}{2} & 2 & 1 \end{bmatrix} e^t\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & t \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} -2 & 1 & 0 \\ -\dfrac{1}{2} & 0 & 0 \\ -2 & \dfrac{3}{2} & 1 \end{bmatrix}$ So, $$ \phi(t) = e^t\begin{bmatrix} 4t+1 & -3t & -2t \\ 8t & 1-6t & -4t \\ -4t & 3t & 2t+1 \end{bmatrix}$$ We now have: $$\phi^{-1}(t) = e^{-t}\begin{bmatrix} 1-4t & 3t & 2t \\ -8t & 6t+1 & 4t \\ 4t & -3t & 1-2t\end{bmatrix}$$ Find $\phi^{-1}(t) \cdot F(t) = \begin{bmatrix} e^{-t} (4 t+(4 t-1) \sin (t)) \\ 8 e^{-t} t (\sin (t)+1) \\ e^{-t} (-4 \sin (t) t-4 t+2) \\ \end{bmatrix}$ Now we integrate the previous result, that is $w = \int \phi^{-1}(t) F(t)\,dt$, and this gives us $w$. Next, we find: $X(t) = X_h + X_p = \begin{bmatrix} x(t) \\ y(t) \\z(t) \end{bmatrix} = e^{At}X_0 + \phi(t) w = \begin{bmatrix} c_1 e^t (4 t+1)-3 c_2 e^t t-2 c_3 e^t t-4 t (2 t+t \sin (t)+(t+1) \cos (t)+1)+12 t (2 t+t \sin (t)+(t+1) \cos (t)+2)-\frac{1}{2} (4 t+1) (8 (t+1)+(4 t-1) \sin (t)+(4 t+3) \cos (t)) ~~~\\ ~~~8 c_1 e^t t-c_2 e^t (6 t-1)-4 c_3 e^t t-8 t (2 t+t \sin (t)+(t+1) \cos (t)+1)+4 (6 t-1) (2 t+t \sin (t)+(t+1) \cos (t)+2)-4 t (8 (t+1)+(4 t-1) \sin (t)+(4 t+3) \cos (t)) ~~~ \\ ~~~-4 c_1 e^t t+3 c_2 e^t t+c_3 e^t (2 t+1)+2 (2 t+1) (2 t+t \sin (t)+(t+1) \cos (t)+1)-12 t (2 t+t \sin (t)+(t+1) \cos (t)+2)+2 t (8 (t+1)+(4 t-1) \sin (t)+(4 t+3) \cos (t)) \end{bmatrix}$
Can first order logic be defined on the domain of existing and non-existing objects?
In first-order logic proper, this can be handled by tweaking the language a bit. We add to the language we're considering a new distinguished unary predicate symbol $Real$. We think of this as describing the collection of objects which exist. As usual, existential and universal quantifiers range over the whole domain, but we can bound them to $Real$ if we so choose, writing "$\exists x(Real(x)\wedge ...)$" and "$\forall x(Real(x)\rightarrow ...)$" respectively. So for example $$\mbox{Everything which exists has property $P$}$$ would be expressed as $$\forall x(Real(x)\rightarrow P(x))$$ and $$\mbox{Everything which has property $P$ exists}$$ would be expressed as $$\forall x(P(x)\rightarrow Real(x)).$$ The point is that all your distinction between "existent"/"nonexistent" objects is really doing is partitioning a broad universe (of "all objects" in some sense) into two pieces, and we can forget the fancy language of existence and just talk about such a partition. Alternatively, we could replace first-order logic with a close relative more suited, at least philosophically, to the idea of nonexistent objects. Free logic is the standard candidate in this context. But per the above, we can translate appropriately between free logic and standard first-order logic; the distinction is one of form rather than essential expressivity.
Show a function's distributional derivative as a summation of delta functions
It is a well-defined distribution when the test functions have compact support. Formally, it is an element of $C_c(\mathbb R)^*$, the dual of the space of continuous functions with compact support. Given such a function $f$, the result of applying your distribution is $$ \sum_{n=1}^{\infty}f(n), $$ which is finite since the set $\mathbb N\cap \textrm{supp}(f)$ is both compact and discrete, hence finite, so the sum above has only finitely many nonzero terms.
Is the intersection of dense subsets of a metric space also dense?
HINT: What’s a very familiar countable dense subset of $\Bbb R$? Is its complement dense?
Given $n \in \mathbb N$, prove that $f: [0, \infty) \rightarrow [0, \infty)$, $f(x) = x^n$, is increasing without calculus (derivatives)
You have $$x_2^n - x_1^n = (x_2-x_1) (x_2^{n-1} + x_2^{n-2} x_1 + \cdots + x_2 x_1^{n-2}+ x_1^{n-1}) \gt 0$$ for $x_2 \gt x_1 \ge 0$.
If $\gamma$ is a coupling of $\delta_x$ and $\delta_y$, can we show that $\int f\:{\rm d}\gamma=f(x,y)$?
Let us prove that the only coupling between $\delta_x$ and $\delta_y$ is the product measure $\delta_x\otimes \delta_y$. Consider such a coupling $\pi\in \Pi(\delta_x, \delta_y)$. Since the products of Borel sets form a $\pi$-system, it suffices to check that $\forall (A,B)\in \mathcal B(E)\times \mathcal B(E), \pi(A\times B)=\delta_x\otimes \delta_y(A\times B)=\delta_x(A) \delta_y(B)$. When $x\notin A$ or $y\notin B$: since $\pi(A\times B)\leq \pi(A\times E) = \delta_x(A)$ and $\pi(A\times B)\leq \pi(E\times B) = \delta_y(B)$, we have $\pi(A\times B) \leq \min(\delta_x(A),\delta_y(B))= 0$, hence $\pi(A\times B) = 0 = \delta_x(A) \delta_y(B)$. When $x\in A$ and $y\in B$: we have $\begin{aligned}[t] 1-\pi(A\times B) &= \pi((A\times B)^c) = \pi((A^c\times E)\cup (E\times B^c))\\ &\leq \pi(A^c\times E) + \pi(E\times B^c)\\ &= \delta_x(A^c) + \delta_y(B^c) = 0, \end{aligned}$ hence $\pi(A\times B)=1=\delta_x(A) \delta_y(B)$. This proves $\pi=\delta_x\otimes \delta_y$.
Relation between Domain and Range of any function.
You can take $f: \mathbb{R} \to \{\triangle, \square \}$ defined by $f(x) = \triangle$ if $x \ge 0$ and $f(x) = \square$ if $x < 0$. So the range doesn't have to consist of real or complex values at all...
Integral of $\int e^{2x} \sin 3x\, dx$
You're correct: the integral does indeed require integration by parts. But there's a little trick: you have to use the method twice, each time differentiating the same kind of factor (the trig one or the exponential, it doesn't matter, as long as you're consistent). Here's a sketch of the idea, in the general case. $$\int e^{ax}\sin(bx)dx=\frac{1}{a}e^{ax}\sin(bx)-\frac{1}{a}\int be^{ax}\cos(bx)dx$$ Now, we do it again. $$\frac{b}{a} \int e^{ax}\cos(bx)dx=\frac{b}{a}\left(\frac{1}{a}e^{ax}\cos(bx)-\frac{1}{a}\int e^{ax}\,[-b\sin(bx)]\,dx\right)= \dots$$ Now, you take it from here, noticing that the last integral is your original one (with a negative sign). Set $\displaystyle I=\int e^{ax}\sin(bx)dx$, and solve for $I$ after substituting the above expression into the original one.
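Carrying this through and solving for $I$ gives, up to a constant, the standard result $I=\frac{e^{ax}\,(a\sin bx - b\cos bx)}{a^2+b^2}$, which a quick numerical differentiation confirms (the checking script is mine):

```python
import math

a, b = 2.0, 3.0

def I(x):
    """Candidate antiderivative of e^{ax} sin(bx)."""
    return math.exp(a * x) * (a * math.sin(b * x) - b * math.cos(b * x)) / (a**2 + b**2)

# The derivative of I should reproduce the integrand e^{ax} sin(bx).
h = 1e-6
for x in [0.0, 0.3, 0.7, 1.5]:
    numeric = (I(x + h) - I(x - h)) / (2 * h)
    assert abs(numeric - math.exp(a * x) * math.sin(b * x)) < 1e-6
print("antiderivative checked")
```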
Dimension of the dual image space
To make it an "official" answer. If $V,W$ are finite dimensional, $T\colon V\to W$, then let $\beta$ be a basis for $V$, $\gamma$ a basis for $W$, and let $\beta^*$, $\gamma^*$ be the dual bases. Then $\dim(\mathrm{Im}(T^*)) = \mathrm{rank}([T^*]_{\gamma^*}^{\beta^*})$, where $[T^*]_{\gamma^*}^{\beta^*}$ is the matrix of $T^*$ with respect to $\gamma^*$ and $\beta^*$. But $$[T^*]_{\gamma^*}^{\beta^*} = \left([T]_{\beta}^{\gamma}\right)^*,$$ the conjugate transpose of the matrix of $T$ with respect to $\beta$ and $\gamma$. Since $\mathrm{rank}(A) = \mathrm{rank}(A^*)$ for any matrix, it follows that $$\begin{align*} \dim(\mathrm{Im}(T^*)) &= \mathrm{rank}([T^*]_{\gamma^*}^{\beta^*})\\ &= \mathrm{rank}(([T]_{\beta}^{\gamma})^*)\\ &=\mathrm{rank}([T]_{\beta}^{\gamma})\\ &= \dim(\mathrm{Im}(T))\\ &= \dim((\mathrm{Im}(T))^*), \end{align*}$$ the last equality because in the finite dimensional case, the dual has the same dimension as the original space.
Integer values of $m$ for which $x^2-(m-3)x+m = 0$ has both roots greater than $2$
You need to have both roots at least two. The first part of the solution is right: $$\begin{cases} \alpha, \beta \in \mathbb{R}, \\ \alpha, \beta \ge 2; \end{cases} \begin{cases} D \ge 0, \\ \alpha + \beta \ge 4, \\ \alpha \cdot \beta \ge 4, \\ \alpha, \beta \ge 2; \end{cases} \begin{cases} (m-3)^2 - 4m \ge 0, \\ m-3 \ge 4, \\ m \ge 4, \\ \alpha, \beta \ge 2; \end{cases} \begin{cases} (m-9)(m-1)\ge 0, \\ m \ge 7, \\ \alpha, \beta \ge 2; \end{cases} \begin{cases} m \ge 9, \\ \alpha, \beta \ge 2; \end{cases} \begin{cases} m \ge 9, \\ \alpha = \left((m-3) - \sqrt{(m-3)^2 - 4m}\right) / 2, \\ \beta = \left((m-3) + \sqrt{(m-3)^2 - 4m}\right) / 2, \\ \alpha, \beta \ge 2; \end{cases} \begin{cases} m \ge 9, \\ \left((m-3) - \sqrt{m^2-10m+9}\right) / 2 \ge 2; \\ \end{cases} \begin{cases} m \ge 9, \\ m - 7 \ge \sqrt{m^2-10m+9}; \\ \end{cases} \begin{cases} m \ge 9, \\ (m - 7)^2 \ge m^2-10m+9; \\ \end{cases} \begin{cases} m \ge 9, \\ 40 \ge 4m; \\ \end{cases} \begin{cases} m \ge 9, \\ m \le 10. \\ \end{cases} $$ So $m$ should be $9$ or $10$. Both are correct: $m = 9$ gives $\alpha = \beta = 3$, and $m = 10$ gives $\alpha = 2$, $\beta = 5$.
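The conclusion can be brute-force checked (with the convention that a root equal to $2$ counts):

```python
import math

def roots(m):
    """Real roots of x^2 - (m-3)x + m = 0, or None if they are complex."""
    disc = (m - 3)**2 - 4 * m
    if disc < 0:
        return None
    r = math.sqrt(disc)
    return ((m - 3 - r) / 2, (m - 3 + r) / 2)

for m in range(0, 20):
    sol = roots(m)
    both_at_least_two = sol is not None and min(sol) >= 2
    assert both_at_least_two == (m in (9, 10)), m
print("only m = 9 and m = 10 work")
```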
Probability that the connection broke down in this grid
This is an old chestnut (though I couldn't readily locate an MSE duplicate) and best tackled by a symmetry argument. To illustrate, consider the following problem: Say all the cables are low, so a boat wanting to go from left to right will get stopped by them. Now, if enough cables are broken, there will be a path for such a boat to cross. In fact the possible paths of the boat (marked in your diagram in red as below) is a rotated copy of the cable network itself. Note the symmetry of the boat crossing problem with your island connectivity problem, and also consider that these events are disjoint and one of them is bound to hold true. It follows that the answer to the probability of either occurring is $\frac12$.
Is $(\mathbb{Z}/5\mathbb{Z} \times \mathbb{Z}/3 \mathbb{Z})/\left<([0]_5,[2]_3)\right> \cong \mathbb{Z}/2\mathbb{Z} $?
Hint: Note that we can find an isomorphism between $(\mathbb{Z}/5\mathbb{Z} \times \mathbb{Z}/3 \mathbb{Z})/(\;([0]_5,[2]_3)\;)$ and $\Bbb Z/ 5\Bbb Z$. So, the statement that you are trying to show is equivalent to saying that $\Bbb Z/5 \Bbb Z \cong \Bbb Z / 2\Bbb Z$. What is an easy way to see that these two rings (or groups) are not isomorphic?
Studying the intersection $(X)\cap (X^{2}-Y+1)\subseteq\mathbb{R}[X,Y]$.
This is not a direct answer to your question, but an answer to your final statement (which I suspect is your true motivation). So, we want to know whether $\mathbb{R}[x,y]/I$ has any non-trivial idempotents, where $$I=(x)\cap (x^2-y+1)$$ Well, it's well known that $\mathbb{R}[x,y]/I$ has no non-trivial idempotents if and only if $$V(I)\subseteq\mathbb{A}^2_\mathbb{R}=\text{Spec}(\mathbb{R}[x,y])$$ is connected. But, note that if we can show that $$V(I)_\mathbb{C}=V(I_\mathbb{C})\subseteq\mathbb{A}^2_\mathbb{C}$$ is connected, then we're OK since there is a surjection $V(I)_\mathbb{C}\to V(I)$. Note though that $$V(I)_\mathbb{C}=V((x))\cup V((x^2-y+1))$$ and, moreover, $(x)$ and $(x^2-y+1)$ are prime ideals in $\mathbb{C}[x,y]$. So, certainly $V((x))$ and $V((x^2-y+1))$ are connected. Thus, $V(I)_\mathbb{C}$, being the union of these two connected spaces, will be connected if they have a point in common. But, certainly the point $(0,1)$ (or $(x,y-1)$ if you want to think in terms of primes) is such a point, since $0^2-1+1=0$.
How can one prove $\lim \frac{1}{(n!)^{\frac 1 n}} = 0$?
Elementary solution: to estimate $n!$, instead of multiplying the numbers from $1$ to $n$ in order, pair them up as $(1 \cdot n) \cdot (2 \cdot (n - 1)) \cdot (3 \cdot (n - 2)) \cdots$ Each pair's product is greater than or equal to $n$; there may be one number left over in the middle, which is at least $\sqrt n$. That makes $n! \geq (\sqrt n)^n$, and the limit follows immediately. Or you could multiply only the larger half of the numbers, giving $n! > (n/2)^{n/2}$.
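The pairing bound is easy to verify computationally (a quick script of mine):

```python
import math

# Pairing k with n+1-k: each product k*(n+1-k) is at least n,
# which yields n! >= n^(n/2) = (sqrt(n))^n.
for n in range(1, 30):
    for k in range(1, n + 1):
        assert k * (n + 1 - k) >= n
    assert math.factorial(n) >= n ** (n / 2)
print("bound holds for n = 1..29")
```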
Expressing the Cantor function on $[0,1]$ as a function on $\text{Ternary}([0,1])$
In the usual (probabilistic) construction of the uniform distribution $\mu$ on the standard Cantor set $C$, one introduces a sequence $(X_n)_{n\geqslant1}$ of i.i.d. random variables with uniform distribution on $\{0,2\}$, and one considers $$ X=\sum\limits_{n\geqslant1}3^{-n}X_n. $$ Then $X$ is uniformly distributed on $C$, hence the Fourier transform $\widehat\mu$ of the measure $\mu$ is the function defined by $$ \widehat\mu(t)=\int_\mathbb R\mathrm e^{\mathrm itx}\mathrm d\mu(x)=\mathbb E(\mathrm e^{\mathrm itX}). $$ (Different normalizations are used but the steps below can be easily adapted.) Now, by definition of stochastic independence, $$ \mathbb E(\mathrm e^{\mathrm itX})=\prod_{n\geqslant1}\mathbb E(\mathrm e^{\mathrm it3^{-n}X_n})=\prod_{n\geqslant1}\varphi(t/3^{n}), $$ where $\varphi(t)=\mathbb E(\mathrm e^{\mathrm itX_1})$, that is, $\varphi(t)=\frac12(1+\mathrm e^{2\mathrm it})$. Finally, $$ \widehat\mu(t)=\prod_{n\geqslant1}\frac{1+\mathrm e^{2\mathrm it/3^n}}2=\mathrm e^{\mathrm it/2}\cdot\prod_{n\geqslant1}\cos(t/3^n). $$ Nota: Indeed, $\mu(\{x\})=0$ for every $x$, as can be seen by including $\{x\}$ in smaller and smaller triadic intervals of length $3^{-n}$, whose measure is, by definition, either $2^{-n}$ or zero, for every $n\geqslant1$. Likewise, recall that $C$ has Lebesgue measure zero hence $\mu$ and the Lebesgue measure are mutually singular.
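As a numerical sanity check, the two forms of $\widehat\mu$ agree when the infinite products are truncated (the truncation depth below is an arbitrary choice of mine):

```python
import cmath

def mu_hat_product(t, terms=60):
    """Truncated product of (1 + e^{2it/3^n})/2."""
    prod = 1 + 0j
    for n in range(1, terms + 1):
        prod *= (1 + cmath.exp(2j * t / 3**n)) / 2
    return prod

def mu_hat_cosine(t, terms=60):
    """Truncated e^{it/2} * product of cos(t/3^n)."""
    prod = 1 + 0j
    for n in range(1, terms + 1):
        prod *= cmath.cos(t / 3**n)
    return cmath.exp(1j * t / 2) * prod

for t in [0.0, 1.0, 3.7, 10.0]:
    assert abs(mu_hat_product(t) - mu_hat_cosine(t)) < 1e-12
print("both forms agree")
```

The agreement just reflects the identity $(1+\mathrm e^{2\mathrm i\theta})/2=\mathrm e^{\mathrm i\theta}\cos\theta$ together with $\sum_{n\geqslant1}3^{-n}=\tfrac12$.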
Maximal exercise (Set Theory).
"but I cannot compare an element of a with a subset of A" Yes, you can. Because one element of $a\in f^*(b)$ can be a representative element. Prove that if $a\in f^*(b)$ then $a$ must be maximal in $A$.... As $a$ was utterly arbitrary in $f^{*}(b)$ it must be true that all elemetns of $f^*(b)$ are maximal. And, yes, a proof by contradiction is an excellent way to go. Assume $a\in f^*(b)$ so $f(a) = b$. And assume $a$ is not maximal in A. So there is a $c \in A$ so that $c > a$. Then...... $f(c) > f(a)=b$ because $f$ is increasing. So $f(c) > b$.
Name for mappings where there is at least one y for every x
A relation from $X$ to $Y$ in which every element of $X$ is mapped to an element of $Y$ is called a "total function". Relations where some elements of $X$ are unmapped are called "partial functions". However, formally all functions are total, and people say "total function" only where it would otherwise be ambiguous.
Prove that the equation $x^3+2y^3+4z^3=9w^3$ has no solution $(x,y,z,w)\neq (0,0,0,0)$
Suppose there is an integer solution $(x, y, z, w)$ to your equation; then we have $$ x^3+2y^3+4z^3\equiv 0 \pmod 9$$ However, $0^3\equiv 3^3\equiv 6^3\equiv 0 \pmod 9$, $1^3\equiv 4^3\equiv 7^3\equiv 1 \pmod 9$, $2^3\equiv 5^3\equiv 8^3\equiv -1 \pmod 9$. So this implies there exist $a, b, c \in \{-1,0,1\}$ such that $a+2b+4c\equiv 0 \pmod 9$. By enumerating all possible combinations of $a$, $b$, and $c$, we see that $a=b=c=0$ is necessary. This means $x$, $y$, $z$ are multiples of three. The same is true for $w$: if $x=3k$, $y=3l$ and $z=3m$, the original equation implies $$ 27(k^3+2l^3+4m^3)=9w^3 $$ Hence $3$ divides $w^3$, and since $3$ is a prime number, $3$ divides $w$. Now, if non-zero solutions exist, let $(x_0, y_0, z_0, w_0)$ be one of them such that $|x_0|+|y_0|+|z_0|+|w_0|$ is smallest. By the above, $(x_0/3, y_0/3, z_0/3, w_0/3)$ is another non-zero integer solution. But $|x_0/3|+|y_0/3|+|z_0/3| + |w_0/3|< |x_0|+|y_0|+|z_0| + |w_0|$ holds, a contradiction.
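The enumeration step can be done mechanically:

```python
# All (a, b, c) in {-1, 0, 1}^3 with a + 2b + 4c = 0 (mod 9).
# Since |a + 2b + 4c| <= 7, the only possibility is a + 2b + 4c = 0.
sols = [(a, b, c)
        for a in (-1, 0, 1)
        for b in (-1, 0, 1)
        for c in (-1, 0, 1)
        if (a + 2 * b + 4 * c) % 9 == 0]
print(sols)  # [(0, 0, 0)]
```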
If $A$ is a subset of $B \cup C$, then must $A \subset B$ or $A \subset C$?
You could go for a counterexample. $A= \{1,3\}$, $B=\{1,2\}$, $C=\{3,4\}$ would for instance do the trick. (Also, often for this type of questions, drawing helps.)
Prove that all constant functions are in a linear subspace of $C([a,b])$.
The sequence you're looking for is, for example, given by $$f_n(x) = \frac{\mu_n}{\mu_n - x}$$ for any sequence $\mu_n \rightarrow \infty$ (with $\mu_n \notin [a,b]$, so the denominator never vanishes). Do you think you can show this works?
How can I prove this property of a root's multiplicity also for non-polynomial functions?
HINT: Thanks to Taylor series, we can expand many non-polynomial (analytic) functions locally as power series, so near a root such a function behaves like a polynomial and the notion of multiplicity carries over. However, one must check beforehand whether the function possesses multiple roots at all.
How does the law of quadratic reciprocity work?
If $p\equiv 2 \mod 3$ then $p$ is not a square mod $3$. That is the justification for the second line. As you correctly calculated, if $p$ is $5 \mod 12$ (and $q$ is odd) then $(-1)^{\frac {p-1}{2} \cdot \frac {q-1}{2}}$ is $(-1)^{\text{even}} = 1$. You have an incorrect statement of the main case of quadratic reciprocity, for $p$ and $q$ odd primes. Instead of an equal sign between $(\frac p q)$ and $(\frac q p)$ there should be no symbol, or perhaps $\times$. The product $(\frac p q) (\frac q p)$ equals $(-1)^{\frac {p-1}{2} \cdot \frac {q-1}{2}}$, which is to say the product is $1$ except when $p$ and $q$ are both $3 \mod 4$, in which case it is $-1$.
How do I use Weierstrass Approximation Theorem?
One path for proving the Weierstrass theorem, that comes with a specific estimate, is to use the Bernstein polynomials $$B_n(f)(x) = 2^{-n}\sum_{k=0}^n f\left(\frac {2k-n}n\right)\cdot \binom{n}{k}(1+x)^k(1-x)^{n-k}$$ For uniformly continuous $f$ on $[-1,1]$, these converge uniformly to $f$ as $n\to\infty$. It's a probabilistic estimate; given a uniform continuity estimate $|f(x)-f(y)|\le g(t)$ whenever $|x-y|\le t$, we estimate \begin{align*}\left|B_n(f)(x)-f(x)\right| &= 2^{-n}\left|\sum_{k=0}^n \left(f\left(\frac {2k-n}n\right)-f(x)\right)\cdot \binom{n}{k}(1+x)^k(1-x)^{n-k}\right|\\ &\le\sum_{k=0}^n\left|f\left(\frac {2k-n}n\right)-f(x)\right|\cdot 2^{-n}\binom{n}{k}(1+x)^k(1-x)^{n-k}\\ &\le \sum_{|(2k-n)/n-x|\le t}g(t)\cdot (*) + \sum_{|(2k-n)/n-x|> t}g(2)\cdot (*)\\ &\le g(t)\cdot P(|X-x|\le t) + g(2)\cdot P(|X-x| > t)\\ \left|B_n(f)(x)-f(x)\right| &\le g(t)\cdot 1 + \frac1{nt^2}g(2)\end{align*} In this, the random variable $X$ is the scaled binomial distribution with probability mass function $2^{-n}\binom{n}{k}(1+x)^k(1-x)^{n-k}$. That expression is also abbreviated with $(*)$ in the third line, for typographic reasons. This $X$ has mean $x$ and variance $\frac{(1+x)(1-x)}{n}\le \frac1n$. For the final inequality, we estimate that the first probability is $\le 1$, and that the second probability is $\le \frac1{nt^2}$ by Chebyshev's inequality. For the function $f(x)=|x|$ we're dealing with, we can take $g(t)=t$. The error estimate we get is then $$|B_n(f)(x)-f(x)|\le \inf_t\left(t+\frac{2}{nt^2}\right) = \left(\frac4n\right)^{\frac13} + \frac{2}{n\left(\frac4n\right)^{\frac23}} = \frac{3}{\sqrt[3]{2n}}$$ To make that less than $\epsilon$, we take $n>\frac12\cdot\left(\frac{3}{\epsilon}\right)^3$. This is a theoretical estimate, that gives away more than it has to. It's also not the only way to find approximating polynomials. So, then, some follow-up questions: How accurate are the Bernstein polynomials, really? 
Can you find a sequence of polynomials that converge significantly faster than that?
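As a partial answer to the first follow-up question, a quick experiment (my own sketch; the value of $n$ and the sample grid are arbitrary choices) suggests the actual error sits well below the theoretical $3/\sqrt[3]{2n}$ bound:

```python
from math import comb

def bernstein_abs(n, x):
    # Bernstein polynomial B_n(f)(x) on [-1, 1] for f(s) = |s|,
    # following the formula in the answer above.
    return sum(abs((2 * k - n) / n) * comb(n, k)
               * (1 + x) ** k * (1 - x) ** (n - k)
               for k in range(n + 1)) / 2 ** n

n = 200
grid = [i / 50 - 1 for i in range(101)]   # 101 sample points in [-1, 1]
max_err = max(abs(bernstein_abs(n, x) - abs(x)) for x in grid)
bound = 3 / (2 * n) ** (1 / 3)            # the theoretical estimate
```

In practice `max_err` (largest near $x=0$, where $|x|$ has its kink) is an order of magnitude smaller than `bound`.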
What Is The Sum of All of The Real Roots
$$x^3-4x^2+x=-6 \implies x^3-4x^2+x+6=0 \implies (x-2)(x-3)(x+1)=0 $$ You missed one solution, $x=-1$. Thus, the answer is $4$. Some people may suggest that you use Vieta's formula, but IMO that would be unwise. This is because Vieta's formula adds all the solutions, even the complex ones, but the question at hand explicitly asks for only real solutions. So this would probably be the best way to do it.
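A throwaway brute-force check of my own, confirming the three real roots and their sum:

```python
# Brute-force the integer roots of x^3 - 4x^2 + x + 6 over a small range.
roots = [x for x in range(-10, 11) if x**3 - 4 * x**2 + x + 6 == 0]
total = sum(roots)
```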
Upper bound in an integral with exponential
Observe that since $|S(t)|\le C$, we have the estimate $$\left|\int_{0}^\infty K(u)S(t-u)\,du\right|\leq C\int_0^\infty |K(u)|\,du = C\int_0^\infty |e^{-u}(u-u^2/2)|\,du$$ Now to finish your proof, you have to show that $\int_{0}^\infty |e^{-u}(u-u^2/2)|\,du$ exists and is finite. Edit: Consider the derivative of $\frac{1}{2}e^{-u}u^2$ and the limit for $u\to\infty$.
Euclidean distance on R and Q
(i) Let $x \in \mathbb B(a, \epsilon)$. We have to show that there exists a $n \in \mathbb N$, such that for each $y \in \mathbb B\left( x, \frac 1 n \right)$ we have $y \in \mathbb B(a, \epsilon)$. Since $d(x,a) < \epsilon$ you can choose $n \in \mathbb N$ big enough, so that $d(x,a) + \frac 1 n < \epsilon$. By the triangle inequality, we have now for each $y \in \mathbb B\left(x, \frac 1 n \right)$ $$ d(y,a) \leq d(y,x) + d(x,a) < \frac 1 n + d(x,a) < \epsilon \; ,$$ i.e. $y \in \mathbb B(a,\epsilon)$. (ii) Let $a = (a_1, \ldots, a_p) \in \mathbb R^p$. Since $\mathbb Q$ is dense in $\mathbb R$, we find for each $i \in \{1, \ldots, p\}$ a $q_i \in \mathbb Q$, such that $\vert a_i - q_i \vert < \frac{\epsilon}{\sqrt{p}}$. Now calculate the distance $\Vert a - q \Vert$, where $q := (q_1, \ldots, q_p) \in \mathbb Q^p$. (iii) Let $U \subset \mathbb R$ be an open subset of $\mathbb R$. For each $x \in U$ you find an $\epsilon > 0$, such that $U_x := (x - \epsilon, x + \epsilon) \subset U$. Observe that $$ U = \bigcup_{x \in U} U_x \; .$$ Now think of how you can make this union countable.
History of notation of sets: Why $\mathbb{Z}$ and $\mathbb{Q}$ for integers and rationals, but $\mathbb{R}$ and $\mathbb{N}$ for reals and naturals?
From the comments above. The notation $\mathbb{Z}$ for the set of integers comes from the German word Zahlen for numbers. The notation $\mathbb{Q}$ for the set of rational numbers was chosen to indicate that $\mathbb{Q}$ is the set of quotients of integers. You might find this site informative on the history of these notations.
Reverse Intermediate Value Theorem
Sorry, I was right in my first guess: it was just to state and prove the fact that a continuous function on a closed and bounded interval attains its bounds. The IVT has nothing to do with it; the IVT is used in a later part of the question which I have not included here.
Black-Scholes equation
I have checked my derivation thoroughly, and I believe now that it is a typo.
Staircase spectrum: is there a known solution for this problem?
Let me write \begin{align} V &= I_q \otimes A = \text{blkdiag}(A, ..., A), \\ W &= I_p \otimes B = \text{blkdiag}(B, ..., B). \end{align} We can write $$ V^TW = I_q \otimes (A^TB)=\text{blkdiag}(A^TB,...,A^TB) $$ only if $p=q$. But if $p \neq q$, we can still do the following: $$ R=V^TW = I_n \otimes (V_{\text{sub}}^TW_{\text{sub}})=\text{blkdiag}(V_{\text{sub}}^TW_{\text{sub}},...,V_{\text{sub}}^TW_{\text{sub}}), $$ with \begin{align} n &= \text{GCD}(p, q), \\ V_{\text{sub}} &= I_{q/n} \otimes A = \text{blkdiag}(A,...,A), \\ W_{\text{sub}} &= I_{p/n} \otimes B = \text{blkdiag}(B,...,B). \end{align} So in the end, it turns out that $R$ is a block diagonal matrix with $n=\text{GCD}(p,q)$ identical matrices. To compute the singular values of a block diagonal matrix, it suffices to compute the singular values of its blocks. Hence, the singular values of $R$ are simply the singular values of $V_{\text{sub}}^TW_{\text{sub}}$, each repeated $n$ times. Note that the fact that $A$ and $B$ are orthogonal is not used for this derivation. It also holds in case $A$ and $B$ are non-orthogonal.
Is Euler's formula valid for complex arguments
You are missing nothing: for every complex number $z$, $e^{iz}=\cos z+i\sin z$.
Group Theory - Prime Index
We have that $$H\lhd G\;\text{and}\;[G:H]=\left|G/H\right|=p\implies\color{red}{\forall\,x\in G\setminus H}\;,\;\;(xH)^p:=x^pH=H\iff x^p\in H$$ and thus we have that (observe that $\;x\notin H\iff x^{-1}\notin H\;\ldots$) : $$\forall\;a\in G\setminus H\;:\;\;G/H=\langle x^{-1}H\rangle=\langle aH\rangle\implies \exists n\in\Bbb Z\;\;s.t.\;\; x^{-1}H=a^nH\iff xa^n\in H$$
Coordinates of the intersection of two tangents to a circle
To begin with we may assume the circle $C$, center $(0,0),$ has radius $1$ (divide all coordinates by the circle radius, then remultiply at the end). Rewrite each of $A,B$ in polar coordinates as $(r_a,\theta_a),\ (r_b,\theta_b).$ For each of $A,B$ there are then two points on $C$ where the tangents from $A$ or $B$ meet $C$, say they are at $t_a,t_b$ in the sense that their coordinates are e.g. $P(t_a)=(\cos(t_a),\sin(t_a))$ for $A$ and similarly for $B.$ The values for these $t$ are $$t_a=\theta_a \pm \cos^{-1}(1/r_a) \tag{1}$$ and similarly for $t_b$, as may be seen from a right triangle with one vertex at $A$, another at $P(t_a)$ and the third at the origin $O=(0,0).$ [right angle is at $P(t_a).$] The $\pm$ choice reflects which of the two tangents from $A$ one wants to use. Once we have made choices for which tangent to use at each point, which amounts to choosing the sign in $(1),$ we have specific values for the angles $t_a$ and $t_b.$ Then the polar coordinates for the intersection point of the two tangents has its angle as $(t_a+t_b)/2$ and its radius as $\sec[(t_a-t_b)/2].$
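Here is a sketch of the recipe in Python for the unit circle (the example points $A=(3,0)$, $B=(0,2)$ and the "$+$" sign choices are my own); the check uses the fact that the tangent line at $P(t)$ is $x\cos t + y\sin t = 1$:

```python
from math import cos, sin, acos, atan2, hypot, isclose

def tangent_intersection(A, B, sign_a=1, sign_b=1):
    # Unit circle centred at the origin; A, B are points outside it.
    ra, ta = hypot(*A), atan2(A[1], A[0])
    rb, tb = hypot(*B), atan2(B[1], B[0])
    # Tangency angles, formula (1): t = theta ± arccos(1/r).
    ca = ta + sign_a * acos(1 / ra)
    cb = tb + sign_b * acos(1 / rb)
    # Intersection in polar form: angle (t_a+t_b)/2, radius sec((t_a-t_b)/2).
    m, d = (ca + cb) / 2, (ca - cb) / 2
    r = 1 / cos(d)
    return (r * cos(m), r * sin(m)), ca, cb

Q, ca, cb = tangent_intersection((3.0, 0.0), (0.0, 2.0))
# Q must lie on both tangent lines x cos t + y sin t = 1.
on_a = isclose(Q[0] * cos(ca) + Q[1] * sin(ca), 1.0)
on_b = isclose(Q[0] * cos(cb) + Q[1] * sin(cb), 1.0)
```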
Hitting time of Brownian Motion combined with another Brownian Motion gives "Cauchyprocess"
Distributions are uniquely characterized by their characteristic function, and therefore it suffices to calculate the characteristic function. Using the independence of $(B_t)_{t \geq 0}$ and $\tau(a)$ we find $$\mathbb{E}\exp(i \xi X(a)) = \int \mathbb{E}\exp(i \xi B_t) \, d\mathbb{P}_{\tau(a)}(t) = \int \exp \left(- \frac{t}{2} |\xi|^2 \right) \, d\mathbb{P}_{\tau(a)}(t).$$ (Here $\mathbb{P}_{\tau(a)}$ denotes the distribution of $\tau(a)$.) Plugging in the distribution of $\tau(a)$, we obtain $$\begin{align*} \mathbb{E}\exp(i \xi X(a)) &= \frac{a}{\sqrt{2\pi}} \int_{(0,\infty)} \frac{1}{t^{3/2}} \exp \left(- \frac{t}{2} \xi^2- \frac{a^2}{2t} \right) \, dt \\ &= \frac{a}{\sqrt{2\pi}} e^{-|\xi| a}\int_{(0,\infty)} \frac{1}{t^{3/2}} \exp \left(- \left[\sqrt{\frac{t}{2}} |\xi|- \frac{a}{\sqrt{2t}} \right]^2 \right) \, dt. \tag{1} \end{align*}$$ If we change the variables according to $t=a^2/(|\xi|^2 s)$, i.e. $dt=-a^2/(\xi^2 s^2) ds$, we find $$\begin{align*} \mathbb{E}\exp(i \xi X(a)) &= \frac{a}{\sqrt{2\pi}} e^{-|\xi| a} \frac{a^2}{\xi^2} \int_{(0,\infty)} \left( \frac{\xi^2 s}{a^2} \right)^{3/2} \exp \left(- \left[ \frac{a}{\sqrt{2s}} - |\xi| \sqrt{\frac{s}{2}} \right]^2 \right) \frac{ds}{s^2} \\ &= \frac{1}{\sqrt{2\pi}} e^{-|\xi| a} |\xi| \int_{(0,\infty)} \frac{1}{\sqrt{s}} \exp \left(- \left[ \frac{a}{\sqrt{2s}} - |\xi| \sqrt{\frac{s}{2}} \right]^2 \right)\,ds. \tag{2} \end{align*}$$ Writing $$\mathbb{E}\exp(i \xi X(a)) = \frac{1}{2} \mathbb{E}\exp(i \xi X(a)) + \frac{1}{2} \mathbb{E}\exp(i \xi X(a))$$ we find from $(1)$ and $(2)$ $$\begin{align*} \mathbb{E}\exp(i \xi X(a)) &= \exp(-|\xi| a) \frac{1}{2\sqrt{2\pi}} \int_{(0,\infty)} \left( \frac{a}{t^{3/2}} + \frac{|\xi|}{\sqrt{t}} \right) \exp \left(- \left[\sqrt{\frac{t}{2}} |\xi|- \frac{a}{\sqrt{2t}} \right]^2 \right) \, dt. \end{align*}$$ Now we perform a further change of variables; we set $y := \sqrt{t/2} |\xi| - a/\sqrt{2t}$, i.e. 
$$dy = \frac{1}{2 \sqrt{2}} \left( \frac{|\xi|}{\sqrt{t}}+ \frac{a}{t^{3/2}} \right) \, dt,$$ and get $$\begin{align*} \mathbb{E}\exp(i \xi X(a)) &= \exp(-|\xi| a) \frac{1}{\sqrt{\pi}}\int_{\mathbb{R}} \exp(-y^2) \, dy = e^{-|\xi|a}. \end{align*}$$ The right-hand side is the characteristic function of the Cauchy distribution (with parameter $a$), and this finishes the proof.
An irreducible polynomial of degree $m$ over $\Bbb{F}_p$ remains irreducible over $\Bbb{F}_{p^n}$ iff $\gcd(m,n) = 1$.
Observe that $\mathbb{F}_{p^n}$ and $\mathbb{F}_{p^m}$ are both contained in $\mathbb{F}_{p^k}$ where $k=\frac{mn}{\gcd(m,n)}$ is the least common multiple of $m$ and $n$. Since $\beta\in\mathbb{F}_{p^m}$, it follows that $\mathbb{F}_{p^n}(\beta)\subseteq \mathbb{F}_{p^k}$, and so $mn\leq k$. But $k=\frac{mn}{\gcd(m,n)}$, so this can only happen if $\gcd(m,n)=1$.
Notation for sets of unordered pairs
Your first answer is correct. If an element (a single element or a set of elements) belongs to a set, it is represented using the element of operator; therefore: $$\{1, 2\} \in A$$ You would use the element of operator. Also, option 3 is correct because that element is a set itself, and it is a subset of the finite set $A$. So, also $$\{\{1, 2\}\} \subsetneq A$$ The first and third are therefore equally appropriate.
Finding the reaction force for a motorbike on an banked curve
When we have an object either stationary on an inclined plane or moving straight up or down the incline, the reactive force perpendicular to the plane does not have to "hold up" the entire weight of the object. The remaining part of the weight of the object is counteracted by friction, or is allowed to accelerate the object downward along the incline, or goes to some combination of those two effects. In short, in an inclined plane exercise we desire the reactive force to be less than the object's weight. Multiplying the weight by $\cos\theta$, which is less than $1$, fits that desire. In a typical "banked turn" exercise, such as with your motorbike, the reactive force has to hold up all the weight of the motorbike (since it is considered undesirable to require friction to supply any of that necessary force, and the motorbike is not supposed to "slide down"), and the reactive force also has to push the motorbike toward the center of the turn in order to make the motorbike follow a curved path. In short, in a banked turn exercise we need the reactive force to be more than the object's weight; in other words, the weight $mg$ must be less than the reactive force $R$, which is accomplished when $mg = R \cos\theta$. The reason the factor is $\cos\theta$ and not some other function of $\theta$ is a matter of the shapes of the force diagrams for the two exercises.
Help with total probability of a transmitted message (Total probability?)
I think you want Bayes Theorem. Let $I$ be the message intended and $R$ be the message received: \begin{align*} P(I = 111 \, | \, R = 010 ) & = \frac{ P( R = 010 \, | \, I = 111 )P(I = 111)}{P(R = 010 \, | \, I = 111)P(I = 111) + P(R = 010 \, | \, I = 000)P(I = 000)} \\ & = \frac{(0.3 \times 0.7 \times 0.3) \times 0.5}{(0.3 \times 0.7 \times 0.3) \times 0.5 + (0.7 \times 0.3 \times 0.7) \times 0.5} \end{align*} Whilst you are correct that, given no evidence to the contrary, $P(I = 111) = 0.5$, the fact that you received $010$ makes the intention of $111$ less likely. Imagine you walk in a room with three men, one of them dead. There's a 50% chance one of the remaining men is the murderer. But then you notice one of them is holding a smoking gun. Is it still 50% chance? $R=010$ is the smoking gun being held by $I = 000$
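The arithmetic, spelled out (assuming, as the numbers above suggest, that each bit flips independently with probability $0.3$):

```python
# Bayes' theorem for P(I = 111 | R = 010), following the answer's numbers.
p_r_given_111 = 0.3 * 0.7 * 0.3   # bits 1 and 3 flipped, bit 2 kept
p_r_given_000 = 0.7 * 0.3 * 0.7   # bits 1 and 3 kept, bit 2 flipped
prior = 0.5
posterior = (p_r_given_111 * prior) / (
    p_r_given_111 * prior + p_r_given_000 * prior)
```

So the posterior probability that $111$ was intended drops from $0.5$ to $0.3$.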
$P$ is at constant distance $2$ from point $(3,5)$. Find the equation of the locus of $P$.
It's simply a circle of centre $C(3,5)$ and radius $r=2$, so the equation becomes: $$(x-3)^2+(y-5)^2=4$$
Stone-Weierstrass Theorem (Lattices)
As stated, the assertion is not true. Take $X=[-1,1]$ and $$ \mathcal A=\{\lambda 1 + f:\ f|_{[0,1]}=0\}. $$ Then $\mathcal A$ satisfies your conditions, and it contains no polynomials other than the constants. Edit: with access to the link, it looks like you have misunderstood a few things. What happens is this: the algebra $\mathcal A$ is closed; lemma 5.10 guarantees that for any $\varepsilon>0$ there exists a polynomial $P_\varepsilon$ such that $$\tag{$*$}\sup_{s\in[-1,1]}|\,|s|-P_\varepsilon(s)|<\varepsilon.$$ as $|g|\leq1$, you have $g(t)\in[-1,1]$ for all $t$. Taking $s=g(t)$ in $(*)$ you get $$ |\,|g(t)|-P_\varepsilon(g(t))|<\varepsilon. $$ As the inequality holds for all $t\in X$, you get that $$\tag{$**$}\||\,|g|-P_\varepsilon(g)\|_\infty<\varepsilon. $$ Since $\mathcal A$ is an algebra that contains the constants you have, for any polynomial $p=\sum_{j=0}^ma_jx^j$, that $$ p(g)=a_0+a_1g+a_2g^2+\cdots+a_mg^m\in\mathcal A. $$
For any integers $a,b$ let $N_{a,b}$ denote the number of positive integers $x<1000$ satisfying $x\equiv a \pmod{27}$ and $x\equiv b \pmod{37}$. Then
The congruences $n\equiv a \pmod{m_1}$ and $n\equiv b \pmod{m_2}$ are solvable only when $a \equiv b \pmod{\gcd(m_1,m_2)}$, and the solution is then unique modulo $\operatorname{lcm}(m_1,m_2)$. Here $\gcd(27,37)=1$, so the condition $a\equiv b \pmod 1$ holds for all values of $a$ and $b$, and the solution is unique modulo $\operatorname{lcm}(27,37)=999$. Since the positive integers $x<1000$ are exactly $1,2,\ldots,999$, each residue class modulo $999$ contains exactly one of them. So for all $a,b$ we have $N_{a,b}=1$.
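The count is small enough to verify exhaustively (my own check):

```python
from collections import Counter

# For every pair (x mod 27, x mod 37), count how many positive x < 1000 hit it.
counter = Counter((x % 27, x % 37) for x in range(1, 1000))
all_ones = set(counter.values()) == {1} and len(counter) == 27 * 37
```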
How do I factorise this to easily find the roots?
Since the coefficient of $x^3$ is $1$, the integer roots of $f(x)$ (if any exist) are divisors of the constant term; in this case, divisors of $12$. Let's try $x=1$. Luckily, we get $f(1)=1^3-13\cdot1+12=0$. So $x=1$ is a root of $f(x)$ and $x-1$ is a factor of $f(x)$. Divide $f(x)$ by $x-1$ to get the other factor. $$f(x)=(x-1)(x^2+x-12)$$ Since $x^2+x-12=(x+4)(x-3)$, we get $$f(x)=(x-1)(x+4)(x-3)$$
Calculate an infinite continued fraction
You can start from the difference equation for the modified Bessel functions: $$Z_{n+1}(x)=-\frac{2n}{x}Z_n(x)+Z_{n-1}(x)$$ Let $x=\dfrac2{m}$: $$Z_{n+1}\left(\frac2{m}\right)=-nm\,Z_n\left(\frac2{m}\right)+Z_{n-1}\left(\frac2{m}\right)$$ Divide both sides of the recurrence with $Z_n\left(\frac2{m}\right)$ and rearrange: $$\frac{Z_n\left(\tfrac2{m}\right)}{Z_{n-1}\left(\tfrac2{m}\right)}=\cfrac1{nm+\cfrac{Z_{n+1}\left(\tfrac2{m}\right)}{Z_n\left(\tfrac2{m}\right)}}$$ Now, replace $n$ with $n+\frac{b}{m}$: $$\frac{Z_{n+\tfrac{b}{m}}\left(\tfrac2{m}\right)}{Z_{n+\tfrac{b}{m}-1}\left(\tfrac2{m}\right)}=\cfrac1{nm+b+\cfrac{Z_{n+\tfrac{b}{m}+1}\left(\tfrac2{m}\right)}{Z_{n+\tfrac{b}{m}}\left(\tfrac2{m}\right)}}$$ One can do something similar for $\cfrac{Z_{n+\tfrac{b}{m}+1}\left(\tfrac2{m}\right)}{Z_{n+\tfrac{b}{m}}\left(\tfrac2{m}\right)}$; iterating this transformation yields the CF $$\frac{Z_{n+\tfrac{b}{m}}\left(\tfrac2{m}\right)}{Z_{n+\tfrac{b}{m}-1}\left(\tfrac2{m}\right)}=\cfrac1{nm+b+\cfrac1{(n+1)m+b+\cfrac1{(n+2)m+b+\cdots}}}$$ Letting $n=1$, we have $$\frac{Z_{\tfrac{b}{m}+1}\left(\tfrac2{m}\right)}{Z_{\tfrac{b}{m}}\left(\tfrac2{m}\right)}=\cfrac1{m+b+\cfrac1{2m+b+\cfrac1{3m+b+\cdots}}}$$ Now, $Z$ can either be $I$ (first kind) or $K$ (second kind) (or a linear combination of those two); we use a big gun to decide which of the two solutions of the modified Bessel recurrence should be taken, in the form of Pincherle's theorem. Before using Pincherle's theorem, though, we have to make sure that the continued fraction we are dealing with is convergent; one could use, e.g. Śleszyński–Pringsheim to show that the continued fraction being considered is indeed well-defined. Having shown the convergence, Pincherle says that $Z$ should be the so-called minimal solution of the modified Bessel recurrence. 
Roughly speaking, the minimal solution of a difference equation is the unique solution that "decays" as the index $n$ increases (all the other solutions, meanwhile, are termed dominant solutions). The asymptotics of $I$ and $K$ show that $I$ is the minimal solution; thus Pincherle says that $$\frac{I_{\tfrac{b}{m}+1}\left(\tfrac2{m}\right)}{I_{\tfrac{b}{m}}\left(\tfrac2{m}\right)}=\cfrac1{m+b+\cfrac1{2m+b+\cfrac1{3m+b+\cdots}}}$$ Your desired continued fraction is not too hard to obtain from this form. (This solution is more or less an adaptation of the method presented here to more general arithmetic progressions.)
$f_n(x) = \frac{x}{n^2} e^{-\frac{x}{n}}$ converges uniformly to zero
It appears that you are talking about uniform convergence on $[0,\infty)$. The function $ye^{-y}$ is a bounded continuous function on this interval. (It is continuous and tends to $0$ as $y \to \infty$.) Let $ye^{-y} \leq C$ for all $y \geq 0$. Put $y=\frac x n$ and conclude that $0 \leq f_n(x) \leq \frac C n $. This proves uniform convergence of $f_n$ to $0$ on $[0, \infty)$.
How long do I have to own my Hybrid Prius in order to see the saving?
If you drive the same number of miles $d$ in a year, you would use $d/45$ gallons of gas in the Prius, or $d/25$ in the other car. If gas costs $c$ dollars per gallon, the savings would be $s = cd/25 - cd/45$ dollars per year. It would then take $5000/s$ years to save enough on gas to make up for the difference in initial cost. This rather crude calculation doesn't take into account the interest that you could have earned if you invested the 5000 dollars rather than putting it into the Prius, or possible differences in costs of other things (e.g. insurance or maintenance).
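As a worked instance with assumed numbers (gas at $4 per gallon and 12,000 miles per year are my inputs, not given in the question):

```python
# Yearly gas savings s = cd/25 - cd/45, then payback time 5000/s.
c, d = 4.0, 12000.0
s = c * d / 25 - c * d / 45   # about $853 per year
years = 5000 / s              # just under 6 years to break even
```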
Is torus w. disc removed homotopic to Klein bottle w. disc removed?
Look at the polygonal representations of the two spaces. Removing a disc from the middle, the rest of the space deformation retracts onto the boundary, which is nothing but a wedge of two circles. (Just draw the picture of the polygonal presentation; you can actually see what is happening.)
two limit questions related to $\sin$
We will use unnecessarily explicit inequalities to prove the result. In the first limit, the general term on top can be rewritten as $\dfrac{\sin(1/k)}{1/k}$. This reminds us of the $\frac{\sin x}{x}$ whose limit as $x\to 0$ we needed in beginning calculus. Note that for $0\lt x\le 1$, the power series $$x-\frac{x^3}{3!}+\frac{x^5}{5!} -\frac{x^7}{7!}+\cdots$$ for $\sin x$ is an alternating series. It follows that for $0\lt x\le 1$, $$x-\frac{x^3}{6}\lt \sin x\lt x$$ and therefore $$1-\frac{x^2}{6}\lt \frac{\sin x}{x}\lt 1.$$ Put $x=1/k$. We get $$1-\frac{1}{6k^2}\lt \frac{\sin(1/k)}{1/k} \lt 1.\tag{$1$}$$ Add up, $k=1$ to $k=n$, and divide by $n$. Recall that $$\frac{1}{1^2}+\frac{1}{2^2}+\frac{1}{3^2}+\cdots =\frac{\pi^2}{6}.$$ We find that $$1-\frac{\pi^2}{36n}\lt \frac{\sin1+2\sin\frac{1}{2}+3\sin\frac{1}{3}+\cdots+n\sin\frac{1}{n}}{n}\lt 1.$$ From this, it follows immediately that our limit is $1$. A very similar argument works for the second limit that was asked about. It is convenient to consider instead the reciprocal, and calculate $$\lim_{n \to \infty }\frac{\frac{1}{\sin1}+\frac{1/2}{\sin1/2}+\frac{1/3}{\sin1/3}+\cdots+\frac{1/n}{\sin1/n}}{n}.$$ We can then use the inequality $$1\lt \frac{1/k}{\sin(1/k)} \lt \frac{1}{1-\frac{1}{6k^2}},$$ which is simple to obtain from the inequalities $(1)$. Having the $1-\frac{1}{6k^2}$ in the denominator is inconvenient, so we can for example use the inequality $\dfrac{1}{1-\frac{1}{6k^2}}\lt 1+\dfrac{1}{k^2}$ to push through almost the same proof as the first one.
Maximum value of function exists or not
Take $f(x)=-e^{-x}$ and see what happens.
Need a reference book on stokes theorem other than rudin
Try these books: Calculus on Manifolds by Spivak. Geometric approach to differential forms by Bachman.
$X$ and $Y$ are uniformly distributed in $[0,1]$ with $P(\max(X,Y)\le z)=P(\min(X,Y)\le(1-z))$. Find $z$.
Hint: If the maximum of two values is at most $z$ then they both are. If the minimum of two values is at most $(1-z)$ then at least one is. $$\begin{align} \mathsf P\big(\max(X,Y)\leq z\big) & = \mathsf P(X\leq z\cap Y\leq z) \\[2ex] \mathsf P\big(\min(X,Y)\leq (1-z)\big) & =\mathsf P\big(X\leq (1-z)\cup Y\leq (1-z)\big) \end{align}$$ Now you have independent and identically distributed uniform random values. Evaluate and find the value of $z$ which makes these quantities equal.
Evaluate the integral $\int\int _{[0,1]^2} \max \left\{x,y\right\} dx\, dy$
This is not a surface integral, but simply an integral over a 2D region. To evaluate it, simply break the integral up into two pieces: one where $x \gt y$, and vice-versa. The integral is then equal to $$\int_0^1 dx \, x \, \int_0^x dy + \int_0^1 dx \, \int_x^1 dy \, y = \frac13 + \frac13 = \frac23$$
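A brute-force midpoint-rule evaluation over the unit square confirms the value (my own cross-check; the grid size is arbitrary):

```python
# Midpoint rule for the integral of max(x, y) over [0,1]^2.
N = 400
h = 1 / N
total = sum(max((i + 0.5) * h, (j + 0.5) * h)
            for i in range(N) for j in range(N)) * h * h
# total should be close to 2/3
```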
Show that the quotient group $T/N$ is abelian
Define a map $\phi:T\to\mathbb{R}^{\times}\times\mathbb{R}^{\times}$ by $$ \phi\Big(\begin{bmatrix}a&b\\0&d\end{bmatrix}\Big)=(a,d) $$ This is a homomorphism because if $t_j=\begin{bmatrix}a_j&b_j\\0&d_j\end{bmatrix}\in T$, $j=1,2$, then $$ \begin{bmatrix}a_1&b_1\\0&d_1\end{bmatrix}\begin{bmatrix}a_2&b_2\\0&d_2\end{bmatrix}=\begin{bmatrix}a_1a_2&a_1b_2+b_1d_2\\0&d_1d_2\end{bmatrix}$$ hence $\phi(t_1t_2)=(a_1a_2,d_1d_2)=(a_1,d_1)(a_2,d_2)=\phi(t_1)\phi(t_2)$. Since $\mathbb{R}^{\times}\times\mathbb{R}^{\times}$ is an abelian group, all that's left to do is to show that $\phi$ is surjective, and that the kernel is $N$, then use the first isomorphism theorem.
How is $f(x)=x+1$ not backwards stable if I consider the error propagated in the addition?
The fundamental problem is that the domain is not clearly stated. We have two functions which are relevant in this context. Let $\mathcal F$ denote our set of floating point numbers. Then the relevant functions are $$ f : \mathcal F \times \mathcal F \rightarrow \mathbb R, \quad f(x,y) = x +y$$ and $$ g : \mathcal F \to \mathbb R, \quad g(x) = 1 + x.$$ These two functions are closely related, yet decidedly different as evidenced by their different domains. Let $\hat{f}$ and $\hat{g}$ denote the computed value of $f$ and $g$. In the absence of floating point exceptions, we have $$ \hat{f} = (x+y)(1+\delta), \quad |\delta| \leq u,$$ where $u$ is the unit roundoff. This can also be expressed as $$\hat{f} = f(\bar{x},\bar{y}), \quad \bar{x} = x(1+\delta), \quad \bar{y} = y(1+\delta).$$ We conclude that $f$ is backward stable. In contrast, we have $$ \hat{g} = (1 + x)(1+ \nu), \quad |\nu| \leq u.$$ It is clear that $$ \hat{g} = 1 + \nu + x(1 + \nu) = 1 + z, \quad z = \nu + x(1+\nu) = x\left( 1 + \left[\frac{\nu}{x} + \nu \right] \right) .$$ Now while $z$ is a good approximation of $x$ in the absolute sense, the relative error is large, when $x$ is tiny. This is the point where we discover that backwards stability has been lost.
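The loss of backward stability is visible directly in double precision; a tiny demonstration of my own (the value of $x$ is an arbitrary choice below the rounding threshold):

```python
# For tiny x, the computed 1 + x rounds to exactly 1.0, so the implied
# backward perturbation z of x carries a 100% relative error.
x = 1e-20
g_hat = 1.0 + x                            # rounds to exactly 1.0
z = g_hat - 1.0                            # the z with g_hat = 1 + z
rel_backward_error = abs(z - x) / abs(x)   # equals 1: total loss of x
```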
How to express a contour
Maybe use $y(t)=-e^{it}$ with the same $t \in [-\pi/2,\pi/2].$ The minus sign in front makes it go in the diametrically opposite points from where your version went. Note: at $t=-\pi/2$ this gives the right starting point $i$ as required. Then at $t=+\pi/2$ it has gotten to $-i.$ The version in the OP actually went from $-i$ to $i$ along the right half of the circle, so that by reflecting it in the origin via $z \to -z$ it ends up as required in the left half of the circle, from $i$ ending at $-i$.
Sketch the Solid of Integration
You can not simply change the order of the two integrals. You need to consider the 2D area that you are integrating over. Sketch out the area given by: $y$ goes from $0$ to $2$ and $x$ goes from $0$ to $1-y$. Note that the top half of it is actually going from $0$ to a negative number so it is backwards. We'll need to take that into account later. Next let's look at the range of $x$ - it goes from $-1$ to $0$ then from $0$ to $1$. We need to treat them separately as $y$ is different in each half. Next (for each half) look at how $y$ depends on $x$. For the positive half we have that $y$ goes from $0$ to $1-x$. For the negative half we have that $y$ goes from $1-x$ to $2$. The negative part was backwards so we'll put a negative sign in front of that (or we could reverse the limits). So the equivalent integral (order reversed) is: $$\int_0^1 \int_0^{1-x} (xy)dydx-\int_{-1}^0\int_{1-x}^2(xy)dydx$$ Now to evaluate it: $$\int_0^1 \int_0^{1-x} (xy)dydx-\int_{-1}^0\int_{1-x}^2(xy)dydx$$ $$=\int_0^1 \left(\frac{xy^2}{2}\right)_0^{1-x}dx-\int_{-1}^0 \left(\frac{xy^2}{2}\right)_{1-x}^2dx$$ $$=\int_0^1 \left(\frac{x(1-x)^2}{2}-\frac{x\cdot0^2}{2}\right)dx-\int_{-1}^0 \left(\frac{x\cdot2^2}{2}-\frac{x(1-x)^2}{2}\right)dx$$ $$=\int_0^1 \left(\frac{x-2x^2+x^3}{2}-\frac{x\cdot0^2}{2}\right)dx-\int_{-1}^0 \left(\frac{x\cdot2^2}{2}-\frac{x-2x^2+x^3}{2}\right)dx$$ $$=\left(\frac{x^2}{4}-\frac{2x^3}{6}+\frac{x^4}{8}\right)_0^1-\left(x^2-\frac{x^2}{4}+\frac{2x^3}{6}-\frac{x^4}{8}\right)_{-1}^0$$ $$=\left(\frac{1}{4}-\frac{2}{6}+\frac{1}{8}\right)-0-0+\left(1-\frac{1}{4}-\frac{2}{6}-\frac{1}{8}\right)$$ $$=\frac{1}{24}+\frac{7}{24}$$ $$=\frac13$$
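As a sanity check on the final value (my own verification): in the original order the oriented inner integral is $\int_0^{1-y} x\,dx = (1-y)^2/2$, so the double integral reduces to $\int_0^2 \frac{y(1-y)^2}{2}\,dy$, which a midpoint rule evaluates to $1/3$:

```python
# Midpoint rule for ∫_0^2 y (1-y)^2 / 2 dy, which equals the double integral.
N = 20000
h = 2 / N
val = sum(((i + 0.5) * h) * (1 - (i + 0.5) * h) ** 2 / 2
          for i in range(N)) * h
```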
Describing: $\{x\in \mathbb R\mid\forall y\left [(y\in \{t\in \mathbb N\mid t>3\})\to (y>x) \right ] \}$
You can write the main set as $$\{x \in \mathbb{R} : \forall y \in \{4,5,\ldots\} \quad y>x \}.$$ Then, it is clear that the main set is equal to $$\{x \in \mathbb{R} : x < 4\}.$$
The formula for the $n^\text{th}$ term of $\frac{x^2}6 -\frac{x^4}9 + \frac{3x^6}{80} - \frac{71 x^8}{15120} + \frac{ 10361 x^{10}}{10886400} \dots$?
Essentially you are asking after a perturbation series for large $a_0$ as then with $f(x)=a_0u(x)^3$ we get $$ 9u^6-18u^4u'^2+3(6u'^2u^4+3u''u^5)=\frac1{a_0} \iff u+u''=\frac1{9a_0u^5}\\ $$ The unperturbed equation with the initial values $u(0)=1$, $u'(0)=0$ has the solution $u(x)=\cos x$, as already found out. For the first perturbation in $u(x)=\cos x+\frac1{a_0}u_0+\frac1{a_0^2}u_1+\dots$ we get the equation $$ u_0''(x)+u_0(x)=\frac1{9\cos^5x},~~ u_0(0)=u_0'(0)=0. $$ The power series on the right starts with $\frac19(1+\frac52x^2\pm\dots)$ so that for the first coefficients one gets \begin{align} 2c_2+c_0&=\frac19&\implies c_2&=\frac1{18}\\ 6c_3+c_1&=0&\implies c_3&=0\\ 12c_4+c_2&=\frac5{18}&\implies c_4&=\frac1{54} \end{align} and so on. Then compute $g_0=3\cos^2(x)\,u_0(x)$,...
Convergence of the double series $\sum\limits_{k=1}^\infty\mu(k)e^{1/k}\sum\limits_{n=1}^\infty\frac{B_n}{n!}\frac{1}{k^n}$
Using $$\sum_{n=0}^\infty B_n\frac{x^n}{n!}=\frac{x}{e^x-1}$$ for $|x|<2\pi$, the inner sum is $$\frac{1+1/k-e^{1/k}}{e^{1/k}-1}$$ and this is $$-\frac1{2k}+O(k^{-2}).$$ As $e^{1/k}=1+O(1/k)$ as $k\to\infty$, your overall sum is $$\sum_{k=1}^\infty\mu(k)\left(-\frac{1}{2k}+O(k^{-2})\right) =-\frac12\sum_{k=1}^\infty\frac{\mu(k)}k+\sum_{k=1}^\infty O(k^{-2}).$$ The last sum converges, and it's a consequence of the Prime Number Theorem that $\sum_{k=1}^\infty\mu(k)/k$ converges (to zero).
Why $\sum_{k=0}^m\sum_{r=k}^{k+n} = \sum_{r=0}^{m+n}\sum_{k=0}^r$?
Iverson notation can be very useful for reordering summations. $$\begin{align} &=\sum_{k=0}^m\sum_{r=k}^{k+n}\cdots &\\ &=\sum_k \sum_r [0 \le k] [k \le m] [k \le r] [r \le k + n] \cdots \\ &=\sum_k \sum_r [0 \le k] [k \le r] [0 \le r] [k \le m] [r \le k + n] [r \le m + n] \cdots \\ &=\sum_r [0 \le r] [r \le m + n] \sum_k [0 \le k] [r - n \le k] [k \le r] [k \le m] \cdots \\ &=\sum_{r=0}^{m+n} \sum_{k=\max(0,r-n)}^{\min(m,r)} \cdots \end{align}$$ The further simplification is not general and relies on the summed term being zero outside the more explicit range.
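A concrete check of the reindexing, with an arbitrary non-symmetric summand (the term and the values of $m, n$ are my own choices):

```python
# Both iteration orders run over the same set of (k, r) pairs,
# so the sums agree for any summand.
def term(k, r):
    return 3 ** k * (r + 1) ** 2

m, n = 4, 3
lhs = sum(term(k, r)
          for k in range(m + 1) for r in range(k, k + n + 1))
rhs = sum(term(k, r)
          for r in range(m + n + 1)
          for k in range(max(0, r - n), min(m, r) + 1))
```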
What is the logic behind this answer?
I am not sure what kind of answer you are looking for, but perhaps this will help: $$\left(3^{x}\right)^{2}=3^{x}\cdot3^{x}=3^{x+x}=3^{2x} $$ and $$\left(3^{2}\right)^{x}=\left(3\cdot3\right)^{x}=3^{x}\cdot3^{x}=3^{2x}.$$ In general, $$\left(x^y\right)^z=x^{yz}=(x^z)^y.$$
Does associativity imply commutativity?
Well done - you've essentially rediscovered free objects. In particular, what you've observed is that in the semigroup freely generated by one element (call it $1$), addition happens to be commutative. In other words: you've essentially explained why addition in $\mathbb{Z}_{\geq 1}$ is commutative. But, I wouldn't say that associativity implies commutativity in general. For instance, multiplication of $2 \times 2$ matrices provides a fundamental example of an associative operation that isn't commutative. Furthermore, the non-equivalence of associativity and commutativity is clear even if we stick to free algebras: In the semigroup freely generated by two elements (call them $A$ and $B$), commutativity fails in a big way; for example, $A + B \neq B+A$ in this context. In the commutative magma freely generated by one element (call it $1$), associativity fails in a big way; for example $(1+1)+(1+1) \neq ((1+1)+1)+1$ in this context. Free algebras are pretty tricky, so don't feel discouraged if you don't 'get' them right away - nobody ever does. It can take a year or two before the idea truly 'sinks in,' and almost discovering them yourself will only help a little in this regard. You may find this question helpful to get you started.
How do I evaluate this integral using cauchy's residue theorem.
By symmetry, our integral is just: $$ \color{red}{I}=4\int_{0}^{\pi/2}\frac{\cos(2\theta)}{1+\sin^2\theta}\,d\theta = 4\int_{0}^{\pi/2}\frac{1-2\cos^2\varphi}{1+\cos^2\varphi}\,d\varphi \tag{1}$$ ($\varphi=\frac{\pi}{2}-\theta$) and through the substitution $\varphi=\arctan t$ we get: $$ I = 4\int_{0}^{+\infty}\frac{t^2-1}{(1+t^2)(2+t^2)}\,dt =\color{red}{\pi(3\sqrt{2}-4)}\tag{2}$$ where the last step follows from partial fraction decomposition: $$ \frac{t^2-1}{(t^2+1)(t^2+2)} = \frac{3}{2+t^2}-\frac{2}{1+t^2}.\tag{3}$$
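A quick numerical verification of the final value (my own check, independent of the residue computation):

```python
from math import cos, sin, pi, sqrt

# Midpoint rule for I = 4 ∫_0^{π/2} cos(2θ)/(1+sin²θ) dθ, compared
# against the closed form π(3√2 - 4).
N = 20000
h = (pi / 2) / N
I = 4 * sum(cos(2 * ((i + 0.5) * h)) / (1 + sin((i + 0.5) * h) ** 2)
            for i in range(N)) * h
target = pi * (3 * sqrt(2) - 4)
```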
A problem based on pigeonhole
This is from IMO 1978. The original problem is An international society has members from six different countries. The list of members contains 1978 names, numbered 1, 2, . . . , 1978. Prove that there is at least one member whose number is the sum of the numbers of two members from his own country or twice as large as the number of one member from his own country. Here is the solution from Arthur Engel's Problem Solving Strategies, which is where I first saw it
Show that any vector can be written as a sum of a vector mapped to zero and a constant multiple of a fixed vector
You have the right idea: $F(v_0)\neq 0$ because $v_0\not\in W$, so for any $v\in V$ there is a constant $c$ such that $F(v)=cF(v_0)$. Since $F$ is linear, this can be rewritten as $F(v-cv_0)=0$. Now what can be said about $v-cv_0$?
pmf of a generalization of binomial distribution
Note that $(1-p_i + p_ix)$ can also be written as $(1-p_i)x^0 + p_ix^1$. Now imagine multiplying $n$ of those together and collecting the terms in $x^m=x^{(n-m)\times 0 +m\times 1 }$. Each of the terms in the sum will have $n-m$ factors of the form $(1-p_i)$ and $m$ factors of the form $p_i$, with all ${n \choose m}$ combinations of possible $0$s and $1$s for the different $i$ appearing such that exactly $m$ of them are $1$. But this is precisely how you would work out the probability in answer to question 2. For example, if $n=3$ and the $p_i$ are $0.1,0.2,0.3$, and you wanted the probability of exactly two $1$s, then you would calculate $0.9 \times 0.2 \times 0.3 + 0.1 \times 0.8 \times 0.3 + 0.1 \times 0.2 \times 0.7$, but this is the same as the coefficient of $x^2$ in $(0.9+0.1 \times x)(0.8+0.2 \times x)(0.7+0.3 \times x)$.
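The coefficient extraction described above can be carried out directly by multiplying the polynomial factors one at a time; here is a short sketch in plain Python (no libraries assumed):

```python
# pmf of the number of successes among independent Bernoulli(p_i) trials
# (a "Poisson binomial"), computed as the coefficients of prod_i ((1-p_i) + p_i*x).
def poisson_binomial_pmf(ps):
    coeffs = [1.0]  # the polynomial starts as the constant 1
    for p in ps:
        new = [0.0] * (len(coeffs) + 1)
        for m, c in enumerate(coeffs):
            new[m]     += c * (1 - p)  # trial contributes a 0 (factor 1-p_i)
            new[m + 1] += c * p        # trial contributes a 1 (factor p_i*x)
        coeffs = new
    return coeffs  # coeffs[m] = P(exactly m successes)

pmf = poisson_binomial_pmf([0.1, 0.2, 0.3])
print(pmf[2])  # ≈ 0.092, matching the hand computation above
```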
Linear Independence of n functions - Wronskian help
Suppose $\sum_{k=2}^{n+1} c_k Y_k = 0$. Then $$ \left(\frac{\sum_{k=2}^{n+1} c_ky_k}{y_1}\right)'=0. $$ Thus, the expression inside the parentheses must be a constant function, say $C$. Then $$ \frac{\sum_{k=2}^{n+1} c_ky_k}{y_1}=C. $$ This gives a linear relation $$ \sum_{k=2}^{n+1}c_ky_k - Cy_1 = 0. $$ Since $y_1,\ldots, y_{n+1}$ are linearly independent, we must have $$ C=c_2=\cdots=c_{n+1}=0. $$ This shows that $Y_2, \ldots, Y_{n+1}$ are linearly independent.
Solving Initial Value Problems 2nd Order DEs
I believe the way to solve this is to multiply both sides by $y'$: $$3(y-1)^2y'=y'y''$$ Integrating both sides with respect to $t$: $$(y-1)^3=\frac12(y')^2+C$$ Applying the initial conditions: $$(3-1)^3=\frac12(4)^2+C,\quad C=0$$ $$y'=\sqrt2(y-1)^{3/2}$$ Separating variables: $$(y-1)^{-3/2}\,dy=\sqrt2\,dt$$ $$-\frac2{\sqrt{y-1}}=\sqrt2t+C$$ $$-\frac2{\sqrt{3-1}}=C,\quad C=-\sqrt2$$ $$\sqrt{y-1}=-\frac2{\sqrt2t-\sqrt2}$$ $$y-1=\frac2{t^2-2t+1}$$ $$y=1+\frac2{t^2-2t+1}$$
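A symbolic check of the final answer. The original ODE isn't quoted in the answer; from the step $3(y-1)^2y'=y'y''$ it is inferred to be $y''=3(y-1)^2$ with $y(0)=3$, $y'(0)=4$, which the solution above satisfies:

```python
import sympy as sp

# Check that y(t) = 1 + 2/(t-1)^2 solves y'' = 3(y-1)^2 with y(0)=3, y'(0)=4.
# (The ODE is an inference from the answer's step 3(y-1)^2 y' = y' y''.)
t = sp.symbols('t')
y = 1 + 2/(t - 1)**2

residual = sp.simplify(sp.diff(y, t, 2) - 3*(y - 1)**2)
print(residual)                                 # 0
print(y.subs(t, 0), sp.diff(y, t).subs(t, 0))   # 3 4
```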
If $V_{1} \subset V \subset V_1 + V_2\subset \mathbb{R}^{n}$. Is it true $V = V \cap V_{1} + V \cap V_2$?
You reach the wrong conclusion $V_2 \subset V$ as an intermediate step. Your approach can be fixed, though: The inclusion $$ V \supset V\cap V_1 + V\cap V_2 $$ is trivial. For the other inclusion, let $v\in V$. Since $V\subset V_1+V_2$, there are $v_1\in V_1\subset V$ and $v_2\in V_2$ such that $v=v_1+v_2$. As you said, this yields $v_2=v-v_1\in V$, and hence $v=v_1+v_2$ with $v_1\in V_1 = V\cap V_1$ and $v_2\in V\cap V_2$. Since this works for any $v\in V$, we get $$ V \subset V\cap V_1 + V\cap V_2. $$ Putting together both inclusions, we get the desired identity.
Finding a Taylor Series representation of $f(x)=\ln(\frac{1+2x}{1-2x})$ centered at $0$.
In what sense is it wrong? $\displaystyle \frac{(-1)^{n}\left( (2x)^{n+1}- (-2x)^{n+1}\right)}{n+1}$ is zero when $n+1$ is even, i.e. when $n$ is odd. Meanwhile, when $n$ is even $(-1)^{n}=1$ and $(2x)^{n+1}=- (-2x)^{n+1}$ so $\displaystyle \frac{(-1)^{n}\left( (2x)^{n+1}- (-2x)^{n+1}\right)}{n+1} = \frac{2^{n+2}x^{n+1}}{n+1}$ which means you could rewrite your answer as $$ \sum_{n=0}^{\infty}\frac{2^{2n+2}x^{2n+1}}{2n+1} = \sum_{n=0}^{\infty} \frac{2}{{2n+1}}(2x)^{2n+1}$$ or numerous other possibilities. The problem may be that computerised marking of such expressions is not an exact science.
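The simplified form can be cross-checked symbolically: the Maclaurin series of $\ln\frac{1+2x}{1-2x}$ agrees term by term with $\sum_{n\ge0}\frac{2}{2n+1}(2x)^{2n+1}$:

```python
import sympy as sp

# Compare the Maclaurin series of ln((1+2x)/(1-2x)) with the claimed form
# sum_{n>=0} 2/(2n+1) * (2x)^(2n+1) = 4x + (16/3)x^3 + (64/5)x^5 + ...
x = sp.symbols('x')
series = sp.series(sp.log((1 + 2*x)/(1 - 2*x)), x, 0, 8).removeO()
claimed = sum(sp.Rational(2, 2*n + 1) * (2*x)**(2*n + 1) for n in range(4))
print(sp.expand(series - claimed))  # 0
```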
Proof that $2^n-1$ is divisible by $3$ for all $n\in\mathbb{N}$ even
Induction step: Suppose the given statement holds for $k$, an even number. To show: the statement holds for $k+2$. $$\begin{align}2^{k+2} - 1 &= 4\cdot2^k - 1\\&=3\cdot2^k + (2^k - 1)\\ &=3\cdot2^k + 3a \textrm{, since } 3\mid(2^k-1) \\&=3\cdot(2^k +a) \end{align}$$
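A quick empirical check of the statement alongside the proof: $3$ divides $2^n-1$ exactly when $n$ is even (for odd $n$ the remainder is $1$):

```python
# Empirical check: 3 | 2^n - 1 for even n; for odd n the remainder mod 3 is 1.
even_ok = all((2**n - 1) % 3 == 0 for n in range(2, 101, 2))
odd_rem = all((2**n - 1) % 3 == 1 for n in range(1, 101, 2))
print(even_ok, odd_rem)  # True True
```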
Prove that $\mathscr B=\{D(z,\epsilon)\} \cup \{E((x,0),\epsilon)\}$ is a basis for a topology on $A.$
Note that in order to check a collection $\mathscr B$ of subsets of $A$ is a basis, you have to check the following two things: 1) For each $a\in A$ there is at least one basis element $B$ containing $a$; this follows trivially. 2) If $a$ belongs to the intersection of two basis elements $B_1$ and $B_2$, then there is a basis element $B_3$ containing $a$ such that $B_3\subseteq B_1\cap B_2$. You have taken $a$ only from the set $\{(x,y)\in \mathbb R^2 : y>0\}$; you also have to consider the case when $a\in \{(x,y)\in \mathbb R^2 : y=0\}$.
$T$ has no eigenvalues
Another approach: The matrix representation of $T$ w.r.t. the standard basis $\{(1,0),(0,1)\}$ of $\mathbb{R}^2$ is $A=\begin{pmatrix}0&-3\\1&0\end{pmatrix}$. So the characteristic equation $|A-\lambda I|=0$ gives $\lambda^2+3=0$, which has no real roots. Thus $T$ has no (real) eigenvalues.
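Numerically, the eigenvalues of this matrix come out purely imaginary, $\pm i\sqrt{3}$, confirming there are no real eigenvalues:

```python
import numpy as np

# Matrix of T in the standard basis; its characteristic polynomial is λ² + 3,
# so the eigenvalues are ±i√3 — purely imaginary, hence no real eigenvalues.
A = np.array([[0.0, -3.0],
              [1.0,  0.0]])
eigvals = np.linalg.eigvals(A)
print(eigvals)  # two purely imaginary values, approximately ±1.732j
```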
How do I read this equation related to Combinations with repetitions in natural language?
The notation is a bit unusual, but the intent is clear. The comma is a statement separator, the parenthetical expression is a quantification, and the overlined pair is a way of writing an interval. $$x_1+x_2+\ldots + x_n = k ,\quad x_i\geq 0\; (i\in \overline {1,n})$$ Read: "The series $x_1+x_2+$ and so on to $x_n$, is equal to $k$, where any term $x_i$ is greater or equal to $0$, for all indices $i$ in the integer interval of $1$ to $n$ (inclusive)." You might more commonly find this expressed as something like: $$\sum_{i=1}^n x_i = k\quad, \text{ where } \forall i\in\{1..n\} : x_i\geq 0$$ In other words, we're counting the non-negative integer solutions for $n$ terms that equal $k$ when they are summed.
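For combinations with repetition, the count of such solutions is $\binom{n+k-1}{k}$ (stars and bars); a brute-force enumeration for small $n,k$ agrees with the formula:

```python
from math import comb
from itertools import product

# The number of non-negative integer solutions of x1+...+xn = k is
# C(n+k-1, k) by stars and bars; brute-force check for small n, k.
def brute(n, k):
    return sum(1 for xs in product(range(k + 1), repeat=n) if sum(xs) == k)

n, k = 4, 6
print(brute(n, k), comb(n + k - 1, k))  # both 84
```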
Real analysis - Series
If $m,n\in\mathbb N$ and $m\geqslant n$, then$$\sum_{k=n}^m\left|\frac{a_k}k\right|\leqslant\left(\sum_{k=n}^m{a_k}^2\right)^{1/2}\left(\sum_{k=n}^m\frac1{k^2}\right)^{1/2}$$by the Cauchy-Schwarz inequality. Now, apply Cauchy's convergence test.
How can we show a data set satisfies the manifold assumption?
I am not aware of any necessary and/or sufficient condition to prove that a finite set of data in $\mathbb R^n$ is actually contained in a smooth submanifold $\mathcal M$ of low dimension. Actually, modern studies try to identify the topological structure lying behind a given set of data (yes, we have to move down to the topological level) using algebraic topology and, in particular, information coming from persistent homology. This is some sort of backward reconstruction: the topological information gathered by the methodology is used for inference or to further characterize clusters of data. A survey is contained in this nice paper. This new machinery is quite powerful, as it is more flexible than MDS and PCA and allows the user to introduce functions to control the simplicial-complex definition which is at the core of the method itself. In some applications the authors showed that the given data lie on a smooth manifold; the machinery works at the algebraic-topology level, though. If you are interested in this backward reconstruction, then I would start by considering the nice case of the noisy circle introduced here.
Mixed strategy problem - game theory
The indifference condition in mixed-strategy NE does not imply $a=b=1/2$. Here is an example: [payoff matrix] First, note that since there are three actions for player 1, there are technically seven possible supports for his strategy: UCD, UC, UD, CD, U, C, D. Similarly, there are seven possible supports for player 2. Therefore, there are in fact 49 different combinations of mixed strategies that we could consider in looking for mixed equilibria. Since that sounds like an unpleasant exercise, let's try to narrow it down. First note that $M$ strictly dominates $L$ for player 2. Also, a 50/50 mix of $U$ and $C$ strictly dominates $D$ for player 1. Then, we are left with: [reduced payoff matrix] Note that we can do this elimination because strictly dominated actions are never played with positive probability in mixed equilibria. First, the underlining above for best responses shows that there is no pure-strategy NE. Looking at mixed equilibria, since, for each player, the best response to each of his opponent's actions is unique, neither player wants to mix unless the other is mixing. Therefore, both players must mix to make the other indifferent. Let P1 put probability $p$ on $U$, i.e. $\alpha_1(U)=p$, and probability $1-p$ on $C$. Let P2 put probability $q$ on $M$, i.e. $\alpha_2(M)=q$, and probability $1-q$ on $R$. \begin{align*} \text{To make P2 indiff: } && 5p+5(1-p) &= 3p + 8(1-p) &\Leftrightarrow p=\frac{3}{5}\\ \text{To make P1 indiff: } && 3q+6(1-q) &= 5q + 4(1-q) &\Leftrightarrow q=\frac{1}{2} \end{align*} Therefore, the unique mixed equilibrium is $\left((\frac{3}{5},\frac{2}{5},0),(0,\frac{1}{2},\frac{1}{2}) \right)$. As you can see, depending on the game, none of the following is guaranteed: $a=1/2$, $b=1/2$, $a=b$. You often will see $1/2$ in there, just because those problems are easiest to calculate, but there's really no reason $1/2$ should appear more often than any other fraction.
The key is that P1 must be mixing to make P2 indifferent between his actions. Sometimes this will involve mixing evenly across P1's actions, but often it won't.
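The two indifference conditions above are linear in one unknown each, so they can be solved mechanically; a short symbolic check (using the payoffs from the example) recovers $p=3/5$ and $q=1/2$:

```python
import sympy as sp

# Solve the indifference conditions from the example (after eliminating
# the dominated strategies L and D):
#   P2 indifferent:  5p + 5(1-p) = 3p + 8(1-p)
#   P1 indifferent:  3q + 6(1-q) = 5q + 4(1-q)
p, q = sp.symbols('p q')
p_star = sp.solve(sp.Eq(5*p + 5*(1 - p), 3*p + 8*(1 - p)), p)[0]
q_star = sp.solve(sp.Eq(3*q + 6*(1 - q), 5*q + 4*(1 - q)), q)[0]
print(p_star, q_star)  # 3/5 1/2
```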
Importance of osculating plane.
Alexander Brodsky (not to be confused with the famous Russian architect by that name) gives a nice exposition of the physical meaning of the osculating plane here: https://www.youtube.com/watch?v=coahLyiATuA
Regular expression a's and b's
`(a*(baa)*)*` should work. We enforce the constraint, as well as allow the whole pattern to be repeated any number of times to allow generation of other legal strings.
Proof by Induction: If $x_1x_2\dots x_n=1$ then $x_1 + x_2 + \dots + x_n\ge n$
Proof by Mathematical Induction: The base case $n=1$ is immediate, since $x_1=1\ge 1$. For the induction step, first note that if $a_1\le1$ and $a_2 \ge1$ then $$(a_2-1)(1-a_1)\ge0 \implies \color{red}{a_1a_2 \le a_1 + a_2 -1}.\tag{1}$$ Now, if $x_1 \cdot x_2 \cdots x_n \cdot x_{n+1} = 1$, then one of the $x_i$'s must be less than or equal to $1$ and another must be greater than or equal to $1$. Without loss of generality, we may assume $x_n \le 1$ and $x_{n+1} \ge 1$. Then, applying the induction hypothesis to the $n$ factors $x_1,\ldots,x_{n-1},x_nx_{n+1}$, $$x_1 \cdot x_2 \cdots x_{n-1} \cdot\color{red}{x_{n}\cdot x_{n+1}}=1 \implies x_1+x_2+\cdots+x_{n-1}+\color{red}{x_n\cdot x_{n+1}}\ge n.$$ Using $(1)$ with $a_1=x_n$ and $a_2=x_{n+1}$, we arrive at the desired result.
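An empirical sanity check of the statement (essentially AM-GM): random positive tuples rescaled so that their product is $1$ always have sum at least $n$:

```python
import random

# Empirical check: if x1*...*xn = 1 with xi > 0, then x1+...+xn >= n.
# Random positive tuples are divided by their geometric mean so the
# product is (numerically) 1, and the worst slack sum(xs) - n is tracked.
random.seed(0)
min_slack = float("inf")
for _ in range(1000):
    n = random.randint(2, 8)
    xs = [random.uniform(0.1, 10.0) for _ in range(n)]
    prod = 1.0
    for x in xs:
        prod *= x
    gm = prod ** (1.0 / n)          # geometric mean
    xs = [x / gm for x in xs]       # product is now ~1
    min_slack = min(min_slack, sum(xs) - n)
print(min_slack >= -1e-9)  # True: the sum never drops below n
```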