Show that a function is trivial on the unit circle
Let $z=1$, then $f(1z')=f(1)+f(z')$ and so $f(1)=0$. So, you know at least one element is always mapped to $0$. That's a good starting point for you I think. Next, try looking at the case that $z=z'$. What about $z'=z^2$, $z'=z^n$?
A Theorem Regarding Maps, the Trivial Topology and the Discrete Topology
Your proofs are fine (the idea is good). Maybe he/she wants a more pedantic/formal proof:

Suppose $f:X \rightarrow Y$ is a function such that $X$ is a space and $Y$ is indiscrete. Then $f$ is continuous. Proof: let $O$ be open in $Y$. By the definition of the indiscrete topology, $O = Y$ or $O = \emptyset$. Then $f^{-1}[Y] = X$ is open in $X$ and $f^{-1}[\emptyset] = \emptyset$ is open in $X$ too. So $f^{-1}[O]$ is open in $X$ for all open subsets $O$ of $Y$. So $f$ is continuous.

Suppose $f:X \rightarrow Y$ is a function such that $Y$ is a space and $X$ is discrete. Then $f$ is continuous. Proof: let $O$ be open in $Y$. Then $f^{-1}[O] \subset X$ and so, as $\mathcal{T}_X = \mathscr{P}(X)$ (definition of a discrete space), $f^{-1}[O]$ is open in $X$ for any $O$ open in $Y$, so $f$ is continuous.

This spells out all the relevant definitions, so your professor knows you know them too.
Prove $(1-(1-q)/n)^n > q$ for $0<q<1$ and $n \geq 2$ a natural number
Bernoulli's inequality states that $$ (1+x)^n\ge 1+nx$$ for $n\in\Bbb N$ and $x\ge -1$, and we have "$>$" if $n\ge2$ and $x\ne0$. Let $x=-\frac{1-q}{n}$.
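Spelling the substitution out: since $0<q<1$ and $n\ge 2$, this $x$ satisfies $x\ge -1$ and $x\ne 0$, so the strict form applies and $$\left(1-\frac{1-q}{n}\right)^n > 1+n\left(-\frac{1-q}{n}\right) = 1-(1-q) = q.$$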
Bolzano–Weierstrass theorem
Hint: Note that $$\frac{n^2}{2n+3}=\frac{n^2+3n/2}{2n+3}-\frac{3n/2}{2n+3}=\frac{n}{2}-\frac{3n/2}{2n+3}.$$ Let $n$ be large. The second term has limit $3/4$ as $n\to \infty$. Now multiply by $\pi$. Now let us explore the values of the cosine. We are subtracting something very close to $3\pi/4$ from $n\pi/2$. So our angle has the same cosine as $\pi(n-4)/2$ plus something that approaches $\pi/4$. Look at what happens when $n$ is a multiple of $4$. Then our cosine is very close to the cosine of $\pi/4$, which is $\sqrt{2}/2$. When $n$ is $2$ more than a multiple of $4$, then our cosine is very close to the cosine of $3\pi/4$, so it is near $-\sqrt{2}/2$. Thus we can use as subsequences the integers $n$ of the form $4k$, and the integers $n$ of the form $4k+2$. (There is a similar alternation with the odd $n$, but we don't need to look at them, since we already have non-convergence.) Remark: Informally, $n^2/(2n+3)$ is "about" $n/2$ when $n$ is large, and $\cos(n\pi/2)$ is quite different when $n$ is even than when $n$ is odd. However, "about" is much too vague for our purposes. Even though it is not necessary for the argument, we can go further in our decomposition, by noting that $$\frac{3n/2}{2n+3}=\frac{3}{4}-\frac{9/4}{2n+3}.$$ The best approach to our first decomposition is not the "magic" adding and subtracting of $3n/2$. Instead, divide the polynomial $x^2$ by the polynomial $2x+3$ using the ordinary "long division" process. Alternately, let $m=2n+3$. Then $n=(m-3)/2$, and therefore $n^2=(1/4)(m-3)^2$. Expand. Dividing by $m$ is easy.
Proof of the definition corollary of curl
If $\Delta A$ is an element of surface area bounded by a simple closed curve $C$, let $P$ be an interior point in $C$ and $\mathbf{n}$ a unit normal at $P$. By Stokes' theorem, $\iint(\nabla\times \mathbf{F})\cdot \mathbf{n}~dS = \oint_{C} \mathbf{F} \cdot d\mathbf{r}.$ Using the mean value theorem for integrals* we can write this as mean$[(\nabla \times \mathbf{F} )\cdot \mathbf{n}] = \frac{\oint_{C} \mathbf{F} \cdot d\mathbf{r}}{\Delta A},$ and the result follows from taking the limit as $\Delta A \to 0.$ In words, the expression for $(\nabla\times\mathbf{F})\cdot\mathbf{n}$ reaches a limiting value as the area $\Delta A$ shrinks around the point $P$. *The MVT for integrals is: if a function $f$ is continuous on $[a,b]$, there exists a point $c$ on $[a,b]$ such that $$f(c) = \frac{1}{b-a}\int_a^b f(x)dx.$$
Solve the following integral $\iiint \left(x+y+z\right)\left(x-y-z\right)\left(x+y-z\right)dV$
Multiply out the polynomial and integrate each term to find the triple antiderivative: $$\frac{xyz}{12}\left(3x^3+2x^2y-2x^2z-2xy^2-3xyz-2xz^2-3y^3-2y^2z+2yz^2+3z^3\right)$$
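If you want a machine check of that algebra, here is a short SymPy sketch (assuming SymPy is available; the names are just for illustration):

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    integrand = sp.expand((x + y + z) * (x - y - z) * (x + y - z))
    # integrate term by term: first in x, then y, then z
    F = sp.integrate(integrand, x, y, z)
    print(sp.factor(F))  # should match the antiderivative above, up to ordering
    # differentiating back must recover the integrand
    assert sp.simplify(sp.diff(F, x, y, z) - integrand) == 0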
Let $f(x,y)=1/(x^2+y^2)$ for $(x,y)\neq 0$. Determine whether $f$ is integrable over $U-0$ and over $\mathbb{R}^2-\bar{U}$; if so, evaluate.
If you want a rigorous argument you can compute the integrals $$\int_{\epsilon \le x^2 + y^2 \le 1} f(x,y) \mathrm d x\mathrm d y$$ and $$\int_{1 \le x^2 + y^2 \le R} f(x,y) \mathrm d x\mathrm d y$$ and let $\epsilon \to 0$, $R \to \infty$. To compute the above integrals you can use the polar coordinate substitution, and you have $$\int_{\epsilon \le x^2 + y^2 \le 1} f(x,y) \mathrm d x\mathrm d y = \int_0^{2\pi}\int_{\sqrt \epsilon}^1 \frac1r \,\mathrm dr\, \mathrm d \theta = -\pi \ln \epsilon \to_{\epsilon \to 0} \infty$$ and $$\int_{1 \le x^2 + y^2 \le R} f(x,y) \mathrm d x\mathrm d y = \int_0^{2\pi}\int_1^{\sqrt R} \frac1r \,\mathrm dr\, \mathrm d\theta = \pi \ln R \to_{R\to\infty} \infty$$ This proves that $f$ is not integrable on the two sets.
Showing $LU$ is impossible...
We have $$\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} a & 0 \\ b & c \end{bmatrix}\begin{bmatrix} x & y \\ 0 & z \end{bmatrix} = \begin{bmatrix} ax & ay \\ bx & by+cz \end{bmatrix}.$$ From $bx = ay = 1$ it follows that $a,x\ne 0$, but this contradicts $ax = 0$.
Find an equation for the plane parallel to the x-axis and passing through two given points
Think about what the equation of a plane means. If $$-2y+z=-3,$$ can we show that $$2y-z=3$$ (and vice versa), so that the two equations describe the same plane?
Integration of surface differential form that differs by a sign.
You're only getting a negative sign because you're not paying attention to the $2$-form when you do the integral. You're just doing the double integral of a scalar function. Recall that the convention is that $$\int_\Omega f(x)dx_1\wedge\dots\wedge dx_n = \int_\Omega f\,dV$$ when we give $\Omega$ the standard orientation as a region in $\Bbb R^n$. As you pointed out, the parametrization of the lower hemisphere by the disk in the $xy$-plane is orientation-reversing, so you must pull back the area $2$-form $dS$ on the hemisphere and integrate it as a $2$-form $f\, dy\wedge dx=-f\,dx\wedge dy$ and then do the double integration. (It's always a point of confusion that you can do iterated integrals in any order by Fubini, but you must get the sign right at the beginning by having the form agree with the endowed orientation on the submanifold.) To make this explicit, think about what happens when you analogously calculate the arclength of the upper and lower semicircles in $\Bbb R^2$. There the orientation issue is quite clear.
Determine $\sum_{i=0}^n \frac{(i^2 - 1)}{6} $
$$\sum_{i=0}^n\frac{i^2-1}{6} = \frac 16\left[\sum_{i=0}^n i^2 -\sum_{i=0}^n1\right] = \frac16\left[\frac{n(n+1)(2n+1)}{6} -(n+1)\right]$$
Orthonormal basis. Can I have more than one basis for the subspace?
Yes: any $k$ linearly independent vectors form a basis of a space of dimension $k$ (provided, of course, that they come from this space).
How to solve the linear programming with constraints of $L_0$-norm?
You can make it into a MILP by introducing binary variables indicating if the corresponding $x$ values are zero or not. In your case, assuming $x$ are nonnegative: $$ \begin{array}{l} x_i\leq Mz_i\\ z_i\in\{0,1\}\\ z_0+z_1+z_2\leq 2 \end{array} $$ where $M$ is a reasonable constant, will give you the bound you are looking for. See for instance https://docs.mosek.com/modeling-cookbook/mio.html#implication-of-positivity for more on this. In general, problems with the $L_0$ norm are hard. A typical approximation is to replace the $0$-norm with the $1$-norm and include the $1$-norm as a penalty in the objective. This is just a heuristic, but it tends to prefer sparse solutions. See for instance from slide 17 in https://web.stanford.edu/class/ee364b/lectures/l1_slides.pdf.
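To make this concrete, here is a minimal sketch in Python using the PuLP modeling library; the objective, the value $M=100$, and the variable names are made-up illustrations, not part of your original problem:

    from pulp import LpProblem, LpVariable, LpMaximize, lpSum

    M = 100  # big-M constant: an a-priori upper bound on each x_i (assumption)
    prob = LpProblem("l0_cardinality", LpMaximize)
    x = [LpVariable(f"x{i}", lowBound=0, upBound=M) for i in range(3)]
    z = [LpVariable(f"z{i}", cat="Binary") for i in range(3)]
    prob += lpSum(x)                    # hypothetical objective, for illustration
    for xi, zi in zip(x, z):
        prob += xi <= M * zi            # x_i > 0 forces z_i = 1
    prob += lpSum(z) <= 2               # at most two of the x_i may be nonzero
    prob.solve()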
Change of variables in limits (Part 3)
Specifying the theorem for all cases in one place is clumsy. It is best if the reader formulates the rules in his/her mind for individual cases. They need not be stated explicitly in a textbook but can be given as exercises. This is easily done provided one really understands the case when $a, b, c$ are finite. As an example let's take $b=-\infty, c=\infty$. Then we can state: Theorem: If $\lim_{x\to a} g(x) =-\infty$ and $\lim_{x\to-\infty} f(x) =\infty$, then $\lim_{x\to a} f(g(x)) =\infty$. Here condition $(2)$ holds automatically, as $g(x)$ cannot equal $-\infty$ anyway.
Chance of failure of a machine in a year - Probability? (Interview Question)
As the comments mentioned, the worst case scenario is it never works; the best case is it works every day. Some information that may be of interest: Let $A$ be the event that $A$ works (thus $A^c$ means it does not work), and $M$ be that the machine works (define similarly for $B,C$). Assuming $A,B,$ and $C$ are independent, on any given day the chance the machine doesn't work is (this formula comes from the fact that if you add the probabilities you are double counting when 2 of them don't work, and when you subtract those away, you double count the time when all 3 don't work): $$\begin{align} P(M^c) & = {P(A^c)+P(B^c)+P(C^c)-P(A^c)P(B^c)-P(B^c)P(C^c)-P(A^c)P(C^c)+P(A^c)P(B^c)P(C^c)} \\[1ex] & =0.03-0.0003+0.000001 \\[1ex] & = 0.029701 \\[3ex] & = 1-P(A)P(B)P(C) \\[1ex] & = 1-0.99^3 \end{align}$$ Thus the expected number of times it doesn't work in a year is $365P(M^c)\approx 10.8$ days. From here you can give confidence intervals, say with confidence $p=.01$ (or $.05$), that the machine will not work some number of days in the range $(x_{min},x_{max})$ out of the year with probability $1-p$, where $x_{min}$ and $x_{max}$ are some number of days (probably around 5-20, depending on the required level of confidence; this is what I would expect to be a reasonable answer). Perhaps the interviewer wanted you to recognize that anything could happen, but with whatever certainty you decide it will be in the range given.
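If numbers help, here is a quick sanity check in Python (SciPy assumed available) of the daily probability and the binomial spread of failure days over a year:

    from scipy.stats import binom

    p_fail = 1 - 0.99**3                  # daily failure probability = 0.029701
    n_days = 365
    print(n_days * p_fail)                # expected failure days: ~10.84
    # central 95% interval for the number of failure days in a year
    print(binom.ppf([0.025, 0.975], n_days, p_fail))  # roughly 5 to 17 days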
Question on the span of a tangent plane
I’m not sure how you’re calculating your tangent plane above. The general approach is to observe that the tangent plane should contain every tangent line: given any direction $(dx, dy)$, restrict $F$ to the line passing through $(x_0, y_0)$ in that direction: $$G(t) = F(x_0 + tdx, y_0+ tdy).$$ This is now a one-dimensional function whose derivative gives the slope of a tangent line to $G$; therefore the vector $(dx, dy, G’(0))$ should lie on the tangent plane of $F$. To find a basis for the tangent plane, pick any linearly independent pair of directions $(dx, dy)$. $(1,0)$ and $(0,1)$ are particularly convenient.
How to show $\left|f(x)-T_{N} f(x, 0)\right| \leq 10^{-6}$
For all positive integers $n$ we have $$ f(x) = \sum_{k=0}^n \frac{(1-i)^k+(1+i)^k}{(2k)!}x^k + R_n(x), $$ where $$R_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}x^{n+1} $$ for some $\xi$ between $0$ and $x$. Taking $x=\frac1{10}$ and $\xi=0$, we have $$ R_n(x) = \frac{1}{2} \left((1-i)^{n+1}+(1+i)^{n+1}\right)\frac{(1/10)^n}{(n+1)!}. $$ Now, since $|1-i|^{n+1} = |1+i|^{n+1}=2^{(n+1)/2}$, it follows that $$ |R_n(x)| \leqslant \frac{2^{(n+1)/2}}{10^n\cdot (n+1)!}. $$ For $n=4$ we have $$ |R_4(x)| \leqslant \frac{1}{150000 \sqrt{2}}\approx 4.714045\cdot 10^{-6}, $$ which is greater than $10^{-6}$, and for $n=5$ we have $$ |R_5(x)| \leqslant \frac{1}{9000000} = \frac19\cdot 10^{-6} < 10^{-6}. $$ Hence the minimal $n$ such that the remainder is less than $10^{-6}$ is $n=5$.
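As a quick numeric check of the two displayed bounds (plain Python):

    from math import factorial

    for n in (4, 5):
        bound = 2 ** ((n + 1) / 2) / (10 ** n * factorial(n + 1))
        print(n, bound)   # n=4: ~4.714e-6,  n=5: ~1.111e-7 = (1/9)*1e-6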
Existence of smooth partitions of unity without open condition
It seems to me that you can w.l.o.g. assume that $U$ is open and all the $V_i$ are open. In other words (making this wlog a bit more formal) you can just take the interiors $\mathring{U}$ and $\mathring{V}_i$ and solve the problem with the interiors. Once you have solved the problem with the $\mathring{U}$ and the $\mathring{V}_i$ instead of $U$ and $V$ you have found functions $\zeta_i \in C^{\infty}$ such that $$ \begin{cases} 0 \leq \zeta_i \leq 1\\ \text{supp}(\zeta_i) \subset \mathring{V}_i\\ \sum_{i} \zeta_i = 1 \text{ on } \mathring{U}. \end{cases} $$ Now you can use continuity of the $\zeta_i$ and $\sum_{i} \zeta_i$ to conclude that you actually solved the problem for the original $U$ and $V_i$. Solving the problem for the interiors can be done in a few steps (I think! It has been some time since I did this proof, and it can probably be found online, but some hints might be helpful if you want to figure it out without full solutions first): First conclude that $U \subset \subset \cup_i V_i$ also holds when you substitute $U$ and $V_i$ with the interiors. Use problem 5 in your image to first find $\xi_i$ for every $V_i$ (actually the interior); this can be done because you are solving the problem with the interiors (which are of course open!). Some tricks to define $\zeta_i$ using a smart division of the form $\zeta_i := \frac{\xi_{i}}{\sum_{j} \xi_j}$. Edit: An update after your comment. Indeed, we have $\cup_i(\text{int}(V_i)) \subset \text{int}(\cup_i(V_i))$, but in general not the other way around, so your comment is valid and I did not think of this beforehand. Luckily, however, things work out in the following way: Suppose we have a point $x \in \text{int}(\cup_i(V_i)) - \cup_i(\text{int}(V_i))$ and $x \in \overline{U}$. This means that $x \not \in \text{int}(V_i)$ for all $i \in \{1,\dots,n\}$ and $x \in \partial V_i$ for at least one $i$. However, in this case a partition of unity subordinate to $\{V_i\}_i$ cannot exist: Indeed for arbitrary $i \in \{1,\dots,n\}$ we have $\zeta_i$ with support in $V_i$, which means that $\zeta_i(x) = 0$, since either $x \in \partial V_i$ or $x \not \in V_i$. This implies we have $\sum_i \zeta_i(x) = 0.$ However, since $x \in \overline{U}$ we get by continuity $\sum_i \zeta_i(x) = 1$, a contradiction. So we are not interested in this case, since no interesting partition of unity exists, so we can wlog assume that $x \in \overline{U} \implies x \in \text{int}(V_i)$ for some $i$, and I believe this solves the problem you correctly stated in your comment.
Idea for computing integral (similar to beta function)
One can compute it in a "beta-function way". I'll suppose $s=0$ (as it depends only on the difference $t-s$). Substitution $x_1=u_1$, $x_2=u_2-u_1$,... ,$x_p=u_p-u_{p-1}$ brings the integral to $$I(t)=\int_{x_i>0,\,\sum_i x_i<t}x_1^{\alpha_1/2}\dots x_p^{\alpha_p/2}dx_1\dots dx_p.$$ Clearly $$I(t)=t^{\sum_i(\alpha_i/2+1)}I(1)$$ and thus $$dI(t)/dt=\sum_i(\alpha_i/2+1)t^{\sum_i(\alpha_i/2+1)-1}I(1).$$ Let us now compute $$J:=\int_{x_i>0}x_1^{\alpha_1/2}\dots x_p^{\alpha_p/2}e^{-\sum_i x_i}dx_1\dots dx_p$$ in two ways. On one hand, it is (by definition of $\Gamma$) $$J=\prod_i\Gamma(\alpha_i/2+1).$$ On the other hand (setting $t=\sum_i x_i$) it is $$J=\int_0^\infty \frac{dI(t)}{dt} e^{-t}dt=\sum_i(\alpha_i/2+1)I(1)\int_0^\infty t^{\sum_i(\alpha_i/2+1)-1}e^{-t}dt$$ $$=\sum_i(\alpha_i/2+1)I(1)\Gamma\Big(\sum_i(\alpha_i/2+1)\Big)=I(1)\Gamma\Big(\sum_i(\alpha_i/2+1)+1\Big).$$ We thus got (modulo mistakes on the way) $$I(t)=\frac{\prod_i\Gamma(\alpha_i/2+1)}{\Gamma(\sum_i(\alpha_i/2+1)+1)}t^{\sum_i(\alpha_i/2+1)}.$$
If $f$ is diagonalisable then its minimal polynomial is the product of distinct linear factors
This is very basic, and you do not need to use the characteristic polynomial or the fact that the minimal polynomial divides it at all. You just need to realise that "diagonalisable" means that the sum of the eigenspaces fills the whole space, so a linear operator is zero if (and obviously only if) it is zero on each of the eigenspaces. Now on the eigenspace for an eigenvalue$~\lambda$, our $f$ acts by scalar multiplication by$~\lambda$. It easily follows that on this eigenspace any polynomial $P[f]$ acts by scalar multiplication by$~P[\lambda]$ (just check that $f^k$ acts by multiplication by $\lambda^k$, and then combine the monomials of the polynomial $P$ linearly). So by the above, $P[f]=0$ iff $P[\lambda]=0$ for every eigenvalue$~\lambda$. The minimal monic polynomial$~P$ with that property is the product of (just) one factor $X-\lambda$ for each distinct eigenvalue$~\lambda$ of$~f$; this is a product of distinct linear factors.
can i solve this quadratic equation this way
As a proportion: $\dfrac{A}{B} = \dfrac{C}{D} \to AD = BC$, we have $36(2n-1) = 11n^2$. So $11n^2 - 72n + 36 = 0$, and $(11n-6)(n-6) = 0$. So $n = 6$ or $n = \dfrac{6}{11}$.
How to solve 4 variables
You can write the unknowns like this: $a$ ; $a-9$ $12-a$ ; $-2-a$ This helps you keep the number of unknowns minimal (just $a$). Now the horizontal ones are satisfied and so is the 1st vertical. Then (from the 2nd vertical) you get $(a-9) + (-2-a) = 2$ which can never be true. So this puzzle has no solution indeed.
Distribution on a collision variant of Hypergeometric distribution?
I do not know whether the distribution of $X$ has a formal name. From your question, you aim for a concentration bound about the expectation, and you may be interested in the following easy derivation, though it is not a complete answer to your question (it is too long to put as a comment). Let the $K$ marked elements be $o_1$, $o_2$, $\cdots$, $o_K$. For $o_i$, define $$ X_i = \begin{cases} 1\quad \text{if }\ o_i \in A\cap B \\ 0\quad \text{otherwise} \end{cases} $$ Then $$ X = X_1 + X_2 + \cdots + X_K $$ We have $$ \mathbb{E}[X_i] = \Pr[X_i = 1] = \Pr[o_i \in A]\cdot \Pr[o_i \in B] = \frac{n}{N}\cdot\frac{n}{N} $$ and $$ \mathbb{E}[X_i^2] = \mathbb{E}[X_i] = \frac{n^2}{N^2} $$ and $$ \mathbb{E}[X_iX_j] = \Pr[o_i \in A \wedge o_j \in A]\cdot \Pr[o_i \in B \wedge o_j \in B] = \left(\frac{\binom{N-2}{n-2}}{\binom{N}{n}}\right)^2 = \left(\frac{n(n-1)}{N(N-1)}\right)^2 $$ We conclude that $$ \mathbb{E}[X] = \sum_{i}\mathbb{E}[X_i] = \frac{Kn^2}{N^2} $$ and $$ \mathbb{E}[X^2] = \sum_{i}\mathbb{E}[X_i^2] + \sum_{i\neq j}\mathbb{E}[X_iX_j] = \frac{Kn^2}{N^2} + \frac{K(K-1)n^2(n-1)^2}{N^2(N-1)^2} $$ and thus we can compute $\operatorname{Var}[X]$, and Chebyshev's inequality can be used for bounding the concentration probability.
Calculating $\sin 45^\circ$ two ways gives $1/\sqrt{2}$ and $\sqrt{1/2}/1$. What went wrong?
Nothing went wrong. $$ \frac{\sqrt{\frac{1}{2}}}{1} = \sqrt{\frac12}=\frac1{\sqrt2} $$
General Concept of an Isomorphism
Indeed. It is a fact from model theory that isomorphic structures are elementarily equivalent. This fact is easy to prove by induction on the structure of the formula. Just to give you an example: if $G$ and $H$ are isomorphic groups and $G$ satisfies "every element of order $3$ is central", then the same holds in $H$.
How to get derivative of a function without using the product, quotient, or chain rule?
Try the definition. Just lots of calculations! \begin{align} \frac{g(x+h)-g(x)}{h} &= \frac{1}{h} \left(\frac{(x+h)^2+2(x+h)-1}{\sqrt{x+h}}-\frac{x^2+2x-1}{\sqrt{x}}\right) \\ &= -{\frac {\sqrt {x+h}{x}^{2}-\sqrt {x}{h}^{2}-2\,{x}^{3/2}h-{x}^{5/2}+2 \,\sqrt {x+h}x-2\,\sqrt {x}h-2\,{x}^{3/2}-\sqrt {x+h}+\sqrt {x}}{ \sqrt {x+h}\sqrt {x}h}} \end{align} Rationalize the numerator: $$ \frac{g(x+h)-g(x)}{h} ={\frac {x{h}^{3}+4\,{x}^{2}{h}^{2}+6\,h{x}^{3}+3\,{x}^{4}+4\,x{h}^{2}+ 12\,h{x}^{2}+8\,{x}^{3}+2\,xh+2\,{x}^{2}-1}{\sqrt {x+h}\sqrt {x} \left( {x}^{5/2}+2\,{x}^{3/2}h+2\,{x}^{3/2}+\sqrt {x}{h}^{2}+\sqrt {x +h}{x}^{2}+2\,\sqrt {x}h+2\,\sqrt {x+h}x-\sqrt {x}-\sqrt {x+h} \right) }} $$ Set $h=0$: $$ \lim_{h\to 0}\frac{g(x+h)-g(x)}{h} = {\frac {3\,{x}^{4}+8\,{x}^{3}+2\,{x}^{2}-1}{x \left( 2\,{x}^{5/2}+4\,{ x}^{3/2}-2\,\sqrt {x} \right) }} $$ This is a perfectly good answer. If you like, you can then rationalize the denominator: $$ g'(x) = {\frac {3\,{x}^{2}+2\,x+1}{2{x}^{3/2}}} $$
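If you want to double-check the algebra by machine, a short SymPy sketch (SymPy assumed available):

    import sympy as sp

    x = sp.symbols('x', positive=True)
    g = (x**2 + 2*x - 1) / sp.sqrt(x)
    gprime = sp.simplify(sp.diff(g, x))
    print(gprime)  # (3*x**2 + 2*x + 1)/(2*x**(3/2)), matching the final answer
    assert sp.simplify(gprime - (3*x**2 + 2*x + 1) / (2*x**sp.Rational(3, 2))) == 0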
I am trying to find a function with specific criteria
This is not a general solution, but it does show the limits of a 5th degree polynomial and gives an idea of where to try for other things. If you assume a 5th degree polynomial, the fact that $f,f',f''$ are all $0$ at $0$ tells us that it must be of the form $$f(x)=ax^5+bx^4+cx^3$$ so $$f'(x)=5ax^4+4bx^3+3cx^2$$ $$f''(x)=20ax^3+12bx^2 +6cx=2x(10ax^2+6bx+3c)$$ Plugging in $f(1)=1$ and $f'(1)=0=f''(1)$ gets us $$a+b+c=1$$ $$5a+4b+3c=0$$ $$10a+6b+3c=0$$ Solving this gets us to $a=6,b=-15, c=10$. Plugging this into our $f'$ gets us $$f'(x)=30x^4-60x^3+30x^2=30x^2(x-1)^2$$ so we have the required positive first derivative except at $0$ and $1$. Likewise, plugging into $f''$ gets us to $$f''(x)=2(60x^3-90x^2+30x)=60x(x-1)(2x-1)$$ which gets us to $0$'s at $0,1,\frac 1 2$, as required. Since there was nothing left for us to play with, the only possible $k$ that would work as a 5th degree polynomial would be the value of integrating this on $[0,1]$, which gets us to $$\frac 6 6 - \frac {15} 5 + \frac {10} 4=\frac 1 2$$ You would probably need to add a 6th degree term to give a free variable to solve for other values of $k$.
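A quick SymPy check (assuming SymPy is available) that this quintic meets all the stated conditions:

    import sympy as sp

    x = sp.symbols('x')
    f = 6*x**5 - 15*x**4 + 10*x**3
    # f, f', f'' all vanish at 0; f(1)=1 with f'(1)=f''(1)=0
    assert all(v.subs(x, 0) == 0 for v in (f, f.diff(x), f.diff(x, 2)))
    assert f.subs(x, 1) == 1
    assert f.diff(x).subs(x, 1) == 0 and f.diff(x, 2).subs(x, 1) == 0
    print(sp.integrate(f, (x, 0, 1)))  # 1/2, the only k a quintic can achieve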
Help with finding cosets for cyclic subgroups
Pick an element $x$ in $G$, not in $H$. Then $H+x=\{h+x|h\in H\}$ gives you a new coset. If there are still elements in $G$ that have not appeared in any coset, pick one and make its coset. To be specific, we might choose $(1,0,0)\notin H$ and form the coset $$H+(1,0,0)=\{(1,0,0),(1,0,1),(1,1,0),(1,2,0),(1,1,1),(1,2,1)\}$$ There are still elements of $G$ that don't appear in $H$ or $H+(1,0,0)$, so you pick one of these elements and start a new coset. Note: since each coset has $6$ elements, and $G$ has $24$ elements, there will be a total of $4$ cosets (including $H$).
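If you would like to see the whole coset decomposition at once, here is a small Python enumeration; note that taking $G=\Bbb Z_4\times\Bbb Z_3\times\Bbb Z_2$ with $H=\{0\}\times\Bbb Z_3\times\Bbb Z_2$ is an assumption on my part, chosen only to be consistent with the coset displayed above:

    from itertools import product

    moduli = (4, 3, 2)                        # assumed: G = Z4 x Z3 x Z2
    G = set(product(*(range(m) for m in moduli)))
    H = {g for g in G if g[0] == 0}           # assumed: H = {0} x Z3 x Z2

    def add(a, b):
        return tuple((ai + bi) % m for ai, bi, m in zip(a, b, moduli))

    remaining, cosets = set(G), []
    while remaining:
        x = min(remaining)                    # pick any element not yet covered
        coset = frozenset(add(h, x) for h in H)
        cosets.append(coset)
        remaining -= coset
    print(len(cosets), [len(c) for c in cosets])   # 4 cosets, each of size 6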
what is the sum of the digits of 111111x111111?
The simple way is to type it into a calculator. The more clever way is to note that there are $6 \cdot 6=36$ pairs of $1$'s and, when you do the addition part of the long multiplication, there are no carries.
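Or let Python do both the multiplication and the digit sum:

    n = 111111 * 111111
    print(n, sum(int(d) for d in str(n)))  # 12345654321, digit sum 36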
Find all functions $f:\mathbb R\to\mathbb R$ such that $f\left(x^2+f(y)\right)=y+f(x)^2$.
Suppose $a$ is such that $f(a)=0$ (we know there is such $a$, it is enough to consider $y=-f(x)^2$ for some $x$), then substituting $x=a$ and $y=a$, we have $$f(a^2)=a.$$ Then, set $x=0$ and $y=a^2$, to obtain $$0=f(f(a^2))=a^2+f(0)^2.$$ Since we are only dealing with real numbers, it means that $f(0)=0$, and no other number has $0$ as image. As consequences, we have $f(f(y))=y$ and $f(y^2)=f(y)^2$ for every $y\in\mathbb{R}$. This shows $f$ is a bijection. Now plug $y=f^{-1}(\zeta)$, we see $$f(x^2+\zeta)=f^{-1}(\zeta)+(f(x))^2\geq f^{-1}(\zeta)=f(\zeta)$$ Hence $f$ is monotone increasing. Now suppose for some $x_0$ we have $f(x_0)> x_0$, then $x_0=f(f(x_0))\geq f(x_0)> x_0$, a contradiction. Thus we have $f(x)\leq x$. Similarly we have $f(x)\geq x$, so $f(x)=x$ is the only solution.
Check this topology exercise.
Yes: to show that $\mathcal{T}$ includes all open subsets of $\mathbb{R}^2$, it's enough to show that it contains some basis of the standard topology, e.g. the rectangles.
Proof of the Theorem of sign permanence
Hint: That follows from the continuity of $f$. Prove by contradiction: assume that it is not true and see what happens. Notes. If the conclusion is not true, then there exists (why? exercise!) a sequence $y_n\in B(x,\frac{1}{n})$ such that $f(y_n)\le 0$. Now think about the limit of $(y_n)$ and $(f(y_n))$.
Volume between two surfaces?
Yes, that's one way. Another way is to slice the volume by planes $x + y = -\sqrt2t$, where $0 \leq t \leq 1$. Such a plane is distant $t$ from the origin and cuts off a chord of length $2\sqrt{1 - t^2}$ in the circle. Hence the volume is $\int_0^1\sqrt2t\cdot2\sqrt{1 - t^2}dt$, which gives the same answer as your integral, namely $2\sqrt2/3$.
prove that $f(\bigcap^n_i X_i) \subset\bigcap^n_if(X_i) $
Let $y \in f(\bigcap X_i)$. This means that there is some $x \in \bigcap X_i$ such that $f(x)=y$. Since $x \in \bigcap X_i$, $x \in X_i$ for all $i$. So $y \in f(X_i)$ for all $i$, so $y \in \bigcap f(X_i)$. Therefore, $f(\bigcap X_i) \subset \bigcap f(X_i)$.
Recursive algorithm for calculating powers
Solution for the original formulation $p^x=?$ Note that $$p^x = \exp(x \log p) = \sum_{i=0}^\infty \frac{\log^ip}{i !}x^i.$$ You can now truncate the series at some $i$ and recursively compute the partial sum up to that point. If you cannot use the logarithm or exponential functions, you could also express $x$ in binary and adapt the exponentiation by squaring algorithm. Solution for the correct formulation: For the corrected version of the problem, where you are asked to compute $x^p$, you can use exponentiation by squaring directly using $$\begin{align} x^0&=1\\ x^n&=(x^{n/2})^2 \ \text{for even }n\\ x^n&=x \cdot (x^{(n-1)/2})^2 \ \text{for odd }n \end{align} $$
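A minimal recursive Python sketch of the squaring scheme above, for nonnegative integer exponents:

    def power(x: float, n: int) -> float:
        # exponentiation by squaring, following the recurrence above;
        # uses O(log n) multiplications
        if n == 0:
            return 1.0
        half = power(x, n // 2)
        return half * half if n % 2 == 0 else x * half * half

    print(power(2.0, 10))  # 1024.0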
Condition for writing an infinite summation of a sequence as a limit
Under the condition that such a limit exists. "Summation" can be formally written for just any sequence, even if it does not converge. However, writing something like $$\lim_{n \rightarrow \infty} \sum_{j=1}^n a_j$$ implies that such a limit exists. If it converges, then such a limit is by definition the sum of the series. How else could you define the result of such a summation? Still, in this very notation you can write a series that does not converge, for example $$\sum_{k=1}^\infty \frac 1 k$$ This one has no sum (by taking enough of its terms you can make the partial sums larger than any given real number $x$), but the notation is valid.
Equivalence of maximisers
Yes, it is true. Suppose, for contradiction, that $y^*$ is not a maximizer, and let $\bar{y} \in \underset{y \in \mathcal{Y}}{\mathrm{arg\max}} \: f(y,s^*)$, so $\bar y \neq y^*$. Then we have by the optimality of $\bar{y}$: $$f(\bar{y},s^*) > f(y^*,s^*) \implies f(\bar{y},s^*) - g(s^*) > f(y^*,s^*) - g(s^*),$$ which contradicts the fact that $(y^*,s^*) \in \underset{(y,s) \in \mathcal{Y} \times \mathcal{S}}{\mathrm{arg\max}} \: f(y,s) - g(s)$.
an example that $X_n \to 0$ almost surely but $\mathbb{E}(X_n | \mathcal{G} ) \not\to 0$.
No, this does not need to be true. Let $(Z_n)$ be a sequence of i.i.d. random variables with $P(Z_n = 2) = P(Z_n = 0) = \frac 12$, and $X_n := \prod_{k=1}^n Z_k$. Then $X_n \ge 0$, and $X$ is a martingale so $\mathbb{E}[X_n] = 1$, but $(X_n) \rightarrow 0$ almost surely. Taking $\mathcal G$ to be the trivial $\sigma$-algebra gives the counterexample. More generally, if we let $(\mathcal F_n)$ be the natural filtration of $(X_n)$ then we have for $\mathcal G = \mathcal F_m$, where $m$ is some fixed integer, that $\mathbb{E}[X_n|\mathcal G] = X_{n \wedge m}$, so $$\mathbb{E}[\mathbb{E}[X_n|\mathcal G]^2] \le \mathbb{E}[X_m^2] \le 4^m < \infty,$$ i.e. $\mathbb{E}[X_n|\mathcal G]$ is bounded in $\mathcal L^2$, but $\lim_{n \rightarrow \infty} \mathbb{E}[X_n|\mathcal G] = X_m$, which is nonzero with positive probability.
Sum of elements in basis form a basis
You just proved that the set $\{ v_1 + v_2, v_2 + v_3, ..., v_{n-1} + v_n, v_n \}$ is linearly independent. The next thing you should remember about vector space bases is that if $B_1$ and $B_2$ are bases of a vector space $V$, then $|B_1| = |B_2| = \dim(V)$. So, if your set $\{ v_1 + v_2, v_2 + v_3, ..., v_{n-1} + v_n, v_n \}$ (let's call it $B'$) is linearly independent and the number of elements in that set is the same as the number of elements in $\{ v_1, v_2, ..., v_n \}$ (i.e., the dimension of your vector space), then $B'$ must be a basis.
A question of divisibility.
Let $S = \sum_{i=1}^{q-1} \{ \frac{ip}{q} \}$. Since $p$ and $q$ are relatively prime, multiplication by $p$ permutes the nonzero residues mod $q$, and so $$ S = \sum_{i=1}^{q-1} \{ \frac{ip}{q} \} = \sum_{i=1}^{q-1} \{ \frac{i}{q} \} = \frac 1q \sum_{i=1}^{q-1} i = \frac 1q \cdot \frac{q(q-1)}{2} = \frac{q-1}{2} . $$
Let $(X,Y)$ be uniform $(0, 1) \times (0, 1)$ random vector and $Z=\min \{X,Y\}$. Find $M(1)$
Let $z \in (0,1)$. Then $$Pr(Z \le z) = 1 - Pr(Z > z)=1-Pr(X>z)Pr(Y>z)=1-\left( 1-z\right)^2$$ $$f_Z(z)=\begin{cases} 2(1-z) & z \in (0,1) \\ 0 & z \notin (0,1) \end{cases}$$ $$M_Z(t)=E(\exp(tZ))=\int_0^12(1-z)\exp(tz) \, dz$$ \begin{align}M_Z(1)&=2\left(\int_0^1 \exp(z)\, dz-\int_0^1 z\exp(z) \, dz\right) \\ &= 2\left(\int_0^1 \exp(z)\, dz-z\exp(z)|_0^1+\int_0^1 \exp(z) \, dz\right) \\ &=2\left(2\int_0^1 \exp(z)\, dz-z\exp(z)|_0^1\right)\\ &=2\left(2(\exp(1)-1)-\exp(1)\right)\\ &=2(\exp(1)-2)\\ &=2(e-2)\end{align}
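A quick Monte Carlo sanity check of the final value $2(e-2)\approx 1.4366$ (NumPy assumed available):

    import numpy as np

    rng = np.random.default_rng(0)
    x, y = rng.random(10**6), rng.random(10**6)   # (X, Y) uniform on the unit square
    print(np.exp(np.minimum(x, y)).mean())        # ~1.4366 = 2(e - 2)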
Statements equivalent to $A\subset B$
Most of them will become clearer if you draw Venn diagrams to illustrate them. For (2), note that it’s always true that $B\subseteq A\cup B$, so what you really need to show is that $A\subseteq B$ if and only if $A\cup B\subseteq B$. Certainly if $A\subseteq B$, then $A\cup B\subseteq B$, since it’s trivially true that $B\subseteq B$. On the other hand, $A\subseteq A\cup B$, so if $A\cup B\subseteq B$, then $A\subseteq B$. For (3), suppose that $A\subseteq B$. If $x\in B^c$, then $x\notin B$, so certainly $x\notin A$; but that means that $x\in A^c$, and we’ve shown that $B^c\subseteq A^c$. Now suppose that $B^c\subseteq A^c$. By what we just proved, we know that $(A^c)^c\subseteq(B^c)^c$. But $(A^c)^c=A$ and $(B^c)^c=B$, so $A\subseteq B$, as desired. (5) is really very clear if you draw a Venn diagram, but you can argue as follows. Suppose that $A\subseteq B$. Let $x\in U$ be arbitrary. If $x\in B$, then certainly $x\in B\cup A^c$. If $x\notin B$, then $x\notin A$, so $x\in A^c$, and again we see that $x\in B\cup A^c$. Thus, everything in $U$ is in $B\cup A^c$, and since $B\cup A^c\subseteq U$, we must have $B\cup A^c=U$. Conversely, suppose that $B\cup A^c=U$, and let $x\in A$. Then $x\notin A^c$, but certainly $x\in U=B\cup A^c$, so it can only be that $x\in B$. This shows that $A\subseteq B$.
Find intervals where f is continuous, $f(x)=\sqrt{x^{2}+x+1}$
It seems like you already have a few of the ingredients needed to prove the continuity of this function. A rather crucial property of continuity is that the composition of two continuous functions is continuous - this is certainly something you should seek to prove if you haven't done so before. So, if you can prove both that $x^2+x+1$ is continuous and that $\sqrt{x}$ is, so must their composition be. Now, the other consideration is that the square root function requires a nonnegative input, so we need to figure out where $x^2+x+1$ is positive in case the function strays out of the domain of $\sqrt{\cdot}$. There are various ways to do this - the simplest is to plug it into the quadratic formula, see that its roots are $\frac{-1\pm \sqrt{-3}}{2}$, neither of which is real, implying that it never crosses zero and thus must be positive everywhere. Another somewhat more general way would be to note that $x^2+x+1 \geq 1+x$ (the tangent line at $x=0$) and $x^2+x+1 \geq -x$ (a tangent line at $x=-1$), and since one of $1+x$ or $-x$ is positive, $x^2+x+1$ must be as well, as it is greater than both. Putting all this together should allow you to show that $\sqrt{x^2+x+1}$ is a continuous function.
General form of multiplication of a matrix by its transpose
Per the comment, we generally have $$ \begin{pmatrix} a&b\\c&d\end{pmatrix} \begin{pmatrix} a&c\\b&d\end{pmatrix} = \begin{pmatrix} a^2+b^2 & ac+bd \\ ac+bd & c^2+d^2\end{pmatrix} \overset{?} = \pmatrix{\alpha & -\frac 12 \beta\\ - \frac 12 \beta & \beta} $$ Your question amounts to asking when $a,b,c,d$ are such that $$ ac + bd = - \frac 12 (c^2 + d^2) $$ Given any $c,d$, there is a line of solutions $(a,b)$ to this equation.
$\liminf_{n\to\infty} \frac{\varphi(n)}{n} = 1$, not $0$
About the lim inf: if we can provide a sequence of numbers $n$ such that $\phi(n)/n$ goes to zero, then your liminf is, in fact, zero. The sequence we use is the primorials, the products of the consecutive primes up to some bound. So, the sequence is $$ 2,6,30, 210,\ldots $$ If $$ N = \prod_{p \leq x} p $$ for some positive real $x,$ then $$ \frac{\phi(N)}{N} = \prod_{p \leq x} \left( 1 - \frac{1}{p} \right). $$ As $x \rightarrow \infty,$ this product goes to $0.$ Taking logarithms, this follows from the fact that the harmonic sum of the primes diverges. Indeed, for $x > 1,$ from Rosser and Schoenfeld (1962), (3.26) on page 70, we have $$ \frac{\phi(N)}{N} = \prod_{p \leq x} \left( 1 - \frac{1}{p} \right) < \frac{e^{-\gamma}}{\log x} \left(1 + \frac{1}{2 \log^2 x} \right). $$ Here $\gamma = 0.5772156649...$ is the Euler-Mascheroni constant.
Uniform distribution
Hint: Let $B$ be the time (in minutes) elapsed since 10 o'clock. Then $B$ is uniformly distributed on the interval $[0,30]$ with density $\dfrac{1}{30}$. The probability that the waiting time is longer than 10 minutes is $P(B ??)$. $$P(B>10) = \frac{30-10}{30-0} = \frac23.$$ If the bus hasn't arrived at 10:15 (we're given a condition, so think of the definition of conditional probability), the probability that one will have to wait for at least an additional 10 minutes is $P(B?? \mid B??)$. $$P(B>25|B>15) = \frac{P(B>25,B>15)}{P(B>15)} = \frac{P(B>25)}{P(B>15)} = \frac{\frac16}{\frac12} = \frac13$$
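If you want to convince yourself of the conditional-probability step, a short simulation (NumPy assumed available):

    import numpy as np

    rng = np.random.default_rng(1)
    b = 30 * rng.random(10**6)           # bus arrival times, uniform on [0, 30]
    late = b[b > 15]                     # condition: no bus by 10:15
    print((late > 25).mean())            # ~1/3, matching P(B>25 | B>15)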
Show that $\sum \frac{a_n}{n^p}$ is absolutely convergent.
By Cauchy-Schwarz, we know that $$\Big(\sum {a_nb_n}\Big)^2 \leq \sum a_n^2 \sum b_n^2.$$ Taking $b_n = \frac{1}{n^p}$ and applying this with $|a_n|$ in place of $a_n$, we get that $$\Big(\sum \Big|\frac{a_n}{n^p}\Big|\Big)^2 \leq \sum a_n^2 \sum \frac{1}{n^{2p}} = L.$$ Thus $\sum \big|\frac{a_n}{n^p}\big| \leq \sqrt{L}$, so we have absolute convergence.
SVM maximum-margin distance
The closest distance of a hyper-plane $w\cdot x + b = 0$ to the origin is given by $\frac{|b|}{||w||}$. If you want to see the proof, look here: https://en.wikipedia.org/wiki/Distance_from_a_point_to_a_plane Thus the closest distance to the origin for $w\cdot x + b = 1$ is given by $\frac{|1-b|}{||w||}$ and for $w\cdot x + b = -1$ it is given by $\frac{|-1-b|}{||w||}$. Thus the distance between the two hyper-planes is given by the difference, which is $\frac{2}{||w||}$.
Show that $f(x)$ can't be written like this
The negation is not correct. Indeed, the correct negation is given by: There exists a polynomial $f(x)=(x−a_1)(x−a_2)⋯(x−a_n)−1$ that can be written as $g(x)h(x)$, where $g$ and $h$ are nonconstant polynomials with integer coefficients. Hint: Now compare the degrees of $g$ and $h$. What happens if the degree of one of them is smaller than $\frac{n}{2}$? What happens if they are both exactly $\frac{n}{2}$?
Affine function is a diffeomorphism?
It is a diffeomorphism iff $T$ is invertible. It is easy to see that $f$ is invertible iff $T$ is invertible with inverse $$f^{-1}(x) = T^{-1}(x - a).$$ As $T^{-1}$ is a linear transformation, it is differentiable everywhere.
How to compute the following set of ODEs numerically (with ICs)
Your system of ODEs seems to be dependent. After applying the Laplace transform we get $$ 0=\left\{ \begin{array}{l} k \Theta _1(s)+s \Theta _0(s)+s \Phi (s)-\Phi (0) -\Theta _0(0)\\ -k \Theta _0(s)+\frac{1}{3} k (s \Phi (s)-\Phi (0))+s \Theta _1(s)-\Theta _1(0) \\ s \delta (s)-\delta (0)+k v(s)+3 (s \Phi (s)-\Phi (0)) \\ H v(s)-i k \Phi (s)+s v(s)-v(0) \\ -4 \pi a^2 G \delta (s) \rho _m-16 \pi a^2 G \rho \Theta _0(s)+3 H^2 \Phi (s)+3 H (s \Phi (s)-\Phi (0))+k^2 \Phi (s) \\ \end{array} \right. $$ with characteristic matrix $$ M = \left( \begin{array}{ccccc} s & 0 & s & 0 & 0 \\ -k & 0 & \frac{k s}{3} & 0 & 0 \\ 0 & 0 & 3 s & s & k \\ 0 & 0 & -i k & 0 & H+s \\ -16 \pi a^2 G \rho & 0 & 3 H^2+3 H s+k^2 & -4 \pi a^2 G \rho _m & 0 \\ \end{array} \right) $$ and $\det(M) = 0$.
Number of values for a such that the following polynomial has only integral roots: $P(x) =X^3-X+a^2+a$?
Let $(X-\alpha)(X-\beta)(X-\gamma) = X^3 - X + a^2 + a$ where $\alpha,\beta,\gamma \in \mathbb{Z}$. Then $$\alpha + \beta + \gamma = 0, \qquad \alpha\beta + \alpha\gamma + \beta\gamma = -1, \qquad \alpha\beta\gamma = -a^2 -a.$$ From the first equation, $\gamma = -(\alpha + \beta)$. Then from the second equation $$\alpha^2 + \alpha\beta + \beta^2 - 1 = 0$$ so $$\beta = \dfrac{-\alpha \pm \sqrt{4 -3\alpha^2}}{2}$$ but since $\alpha,\beta\in \mathbb{Z}$, then $\alpha$ must be either $0$ or $\pm 1$. This gives the solutions $$(\alpha,\beta,\gamma) \in \{(-1,0,1), (-1,1,0),(0,-1,1),(0,1,-1),(1,-1,0),(1,0,-1)\}$$ Which means that $X^3 - X + a^2 + a = X(X-1)(X+1) = X^3 - X$, i.e. $a^2 + a = 0$, which can only happen if $a = 0$ or if $a = -1$, so there are only two possible values of $a$ for which the equation has only integral roots.
Given a point on a circle, find it on a given square
Suppose the circle and square are centered at $(0,0)$, the circle has radius $r$, and the square has sides $2r$. The midpoints of the sides of the square are $(0,r), (r,0), (0, -r), (-r,0)$ and the corners are $(-r,r),(r,r), (r,-r)$ and $(-r,-r)$. Okay, a point on the circle is $(x_1,y_1)$. So the line from $(0,0)$ to $(x_1, y_1)$ will have slope $m=\frac {y_1}{x_1}$, and the equation of the line is $y = \frac {y_1}{x_1}x$. Now there are four cases to consider based on what side of the square the line will intersect. Case 1: The line goes through the right side of the square. That means $x_1 > 0$ and $|y_1| \le x_1$, and so $-1 \le m \le 1$. So if $x_1 > 0$ and $-1 \le m \le 1$, then the line will go through the right side of the square. So let that point be $(r, y_2)$. But then we have the formula $y = mx$, so $y_2 = mr =\frac {y_1}{x_1}r$, and the point on the square is $(r, \frac {y_1}{x_1}r)$. Case 2: The line passes through the top of the square. Then $y_1 > 0$ and $|x_1| \le y_1$, and so $|m| =|\frac {y_1}{x_1}| \ge 1$. So if $m > 1$ or $m < -1$, then the line goes through the top of the square, and the point is $(x_2, r)$. I should point out that if the point of the circle was $(x_1, y_1) = (0, r)$ then we'd have a case where $m = \frac {y_1}{x_1} =\frac r0 =\infty$. This is the case where the line is completely vertical. In this case it's obvious the point on the square is the exact same point as on the circle, so the point is $(0,r)$. Otherwise.... The equation of our line is $y = mx =\frac{y_1}{x_1}x$, so $r = m x_2$ and $r = \frac {y_1}{x_1} x_2$, so $\frac {x_1}{y_1}r = x_2$. And the point is $(\frac {x_1}{y_1}r, r)$. The cases where the line goes through the left side and the bottom are similar. If $-1 \le m \le 1$ but $x_1 < 0$, then the line goes through the left side of the square and the point is $(x_2, y_2) = (-r, -r\frac {y_1}{x_1})$. And if $m \le -1$ or $m \ge 1$ (or $x_1 = 0$ and $m = \infty$) but $y_1 < 0$, then the line goes through the bottom of the square and the point is $(x_2, y_2) = (-r\frac {x_1}{y_1} ,-r)$.
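Putting the four cases together, here is a small Python helper; it is a sketch under the setup above (circle of radius $r$ and square of side $2r$, both centered at the origin):

    def circle_to_square(x1: float, y1: float, r: float) -> tuple:
        # maps a point (x1, y1) on the circle to the point where the ray
        # from the origin through it meets the square
        if abs(x1) >= abs(y1):               # ray exits a vertical side
            s = r if x1 > 0 else -r
            return (s, s * y1 / x1)
        s = r if y1 > 0 else -r              # ray exits a horizontal side
        return (s * x1 / y1, s)

    print(circle_to_square(1.0, 0.0, 1.0))   # (1.0, 0.0)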
Rationality of $a^2+b^2$
No. Proof by counter-example: take any positive irrational number $c$ and let $a = \sqrt c$ and $b = -\sqrt c$. Then $a+b=0$ is rational, but $a^2 + b^2=2c$ is irrational.
What is the derivative of $f(x)=|x|^\frac{3}{2}$?
Some functions can be defined in a piecewise manner for convenience when the given closed form can't be analyzed directly ($|x|$ can be split like @Martin R says in the comments): $$|x|^{\frac 32 } = \begin{cases} x^{3/2} & x>0 \\ (-x)^{3/2} & x\leq0 \end{cases}$$ Beyond this point, I think finding the derivative wouldn't be a problem. It's always better to check it, though, at points where the piecewise definition switches between pieces. You may use the definition of differentiability to check, and you'll end up with $f'(0) = 0$. You can also generalize: $$ \frac {df(|x|)}{dx} = \frac {df(|x|)}{d|x|} \cdot \frac {d|x|}{dx} \ ; \ x \neq 0$$
Is this a valid proof that the boundary of a set on a metric space is closed?
Yes, that’s fine. You can also do it a little more directly: the definition of $\operatorname{bdry}A$ says that $x\in\operatorname{bdry}A$ iff $x\in\operatorname{cl}A\cap\operatorname{cl}(X\setminus A)$, i.e., that $\operatorname{bdry}A=\operatorname{cl}A\cap\operatorname{cl}(X\setminus A)$, and since this is the intersection of two closed sets, it is closed. While your definition of the boundary is stated in terms of metric spaces, it’s just a special case of the definition for topological spaces in general, and the same argument shows that $\operatorname{bdry}A$ is closed in all topological spaces.
Checking if a number is expressible as $x^2+y^4$
Every perfect square is $0$ or $1$ modulo $4$. So $x^2+y^4\equiv0,1,2 \pmod 4$, and this implies that if $n\equiv3 \pmod 4$ then there is no solution.
Successor of integers in System F
Your calculations are wrong. In fact, they just don't make any sense. Your initial expansion of $S0$ doesn't match the definition, and then you seem to be "reducing" things when your terms have no $\beta$ redexes. $\alpha$-conversion correctly applied never changes the normal form, so it would never lead you to get different answers. Note $tXxy$ means $((tX)x)y$. $\beta$-reduction is $(\lambda x:X.E)y \to E[x\mapsto y]$ where $E[x\mapsto y]$ means $E$ with all occurrences of $x$ replaced with $y$ in a capture-avoiding way. Similarly for $\Lambda$. Also, while they are often omitted, compound terms are implicitly wrapped in parentheses. So $0 \equiv (\Lambda X.\lambda x:X.\lambda y:X\to X.x)$ or, to be completely explicit, $(\Lambda X.(\lambda x:X.(\lambda y:(X\to X).x)))$. Your expansion of $S0$ may be a mixture of a typo and failing to include parentheses. The Church-numeral for the natural number $n$ is the $\lambda$-term of the form $$\Lambda N.\lambda z:N.\lambda s:N\to N.s^n z$$ where $s^n z$ is shorthand for $s$ applied to $z$ $n$ times, e.g. $s^3z = s(s(sz))$. $s^0 z = z$ and so you can verify that the $\lambda$-term for $0$ is indeed the Church-numeral for the number $0$. The calculation proceeds as follows; note that in this case at every point there is only one possible $\beta$-redex: $$\begin{align} S0 & \equiv \Lambda X.\lambda x : X.\lambda y:X\to X.y(0Xxy) & \text{by definition of }S \\ & \equiv \Lambda X.\lambda x : X.\lambda y:X\to X.y((\Lambda U.\lambda u:U.\lambda v:U\to U.u)Xxy) & \text{by definition of 0 and }\alpha \\ & \to \Lambda X.\lambda x : X.\lambda y:X\to X.y((\lambda u:X.\lambda v:X\to X.u)xy) & \beta\text{ reduction} \\ & \to \Lambda X.\lambda x : X.\lambda y:X\to X.y((\lambda v:X\to X.x)y) & \beta\text{ reduction} \\ & \to \Lambda X.\lambda x : X.\lambda y:X\to X.yx & \beta\text{ reduction} \\ \end{align}$$ This is its normal form. Further, it is $\alpha$-equivalent to the Church-numeral for $1$.
Taylor Expansion of a Differential
First we need a preamble on multivariate calculus. The following formula gives the differential of a bivariate function:$$df={\partial f\over\partial x}dx+{\partial f\over\partial y}dy$$(this generalizes to more than two variables, but we won't need that here) and if we substitute $x=t$ and $y=S_t$ we obtain$${df\over dt}={\partial f\over\partial t}+{\partial f\over\partial S_t}{dS_t\over dt}$$also$${d^2f\over dt^2}={d\over dt}{df\over dt}={\partial^2 f\over \partial t^2}+2{\partial^2 f\over \partial t\partial S_t}{dS_t\over dt}+{\partial^2 f\over \partial S_t^2}\left({dS_t\over dt}\right)^2+{\partial f\over \partial S_t}{d^2 S_t\over d t^2}$$Now we are able to write the 2nd-order Taylor series around $0$:$$f(t,S_t)=f(0,S_0)+\left\{{\partial f\over\partial t}+{\partial f\over\partial S_t}{dS_t\over dt}\right\}_{t=0}\cdot t+{1\over 2}\left\{{\partial^2 f\over \partial t^2}+2{\partial^2 f\over \partial t\partial S_t}{dS_t\over dt}+{\partial^2 f\over \partial S_t^2}\left({dS_t\over dt}\right)^2+{\partial f\over \partial S_t}{d^2 S_t\over d t^2}\right\}_{t=0}\cdot t^2+O(t^3)$$where $O(\cdot)$ denotes Big-O notation.
Prove that $\mathcal L=\{a^ib^jc^k|i+k=j\}$ is context-free
Your current grammar will not work because it only allows for a single $a$ on the left side. You could allow for more $a$'s on the left side with the rule: $$A \rightarrow aB|aA|B$$ but then we would be allowing an infinite number of $a$'s on the left side. Here's a hint: $\mathcal{L}$ is the concatenation of two context-free languages.
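Once you have tried the hint yourself, you can compare against one possible grammar built exactly this way: $$S \rightarrow AB, \qquad A \rightarrow aAb \mid \varepsilon, \qquad B \rightarrow bBc \mid \varepsilon,$$ where $A$ generates $\{a^ib^i\}$ and $B$ generates $\{b^kc^k\}$, so their concatenation gives $a^ib^{i+k}c^k$, i.e. exactly $j=i+k$.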
If $f $ is right continuous at $c $, can I claim that $|sup _{x \in [c,x _i ]} f(x) -inf_{x \in [c,x _i ]}f(x)|<\epsilon $?
Yes, to both questions. In essence, you are restricting the domain of the function to the part which is continuous. To get your first claim, we have that for every $\epsilon > 0$, there exists some $x_i > c$ such that $|f(c) - f(x)| < \epsilon$ whenever $x \in [c,x_i]$, so certainly $\sup_{x \in [c,x_i]} |f(c) - f(x)| \le \epsilon$. Now note that $|f(c) - \sup_{x \in [c, x_i]} f(x)| \leq \sup_{x \in [c,x_i]} |f(c) - f(x)|$. This is a response to "So to answer ...", but I cannot respond to it because of reputation restrictions. Note that the $x_i$ may be different for the sup and inf. It may be more appropriate to use $x_1$ for the sup and $x_2$ for the inf. Let $x_i$ be the minimum of $x_1$ and $x_2$ and you have your conclusion - just as you argued below.
Prove If Statement Is Valid or Invalid
I think it is not "denying the antecedent"; rather, we can conclude by using contraposition. We know that $\forall x[S(x) \implies E(x)]$; we are given $\lnot E(M)$, and the conclusion drawn from $\lnot E(M)$ is $\lnot S(M)$ (not the other way around), so the inference is $\lnot E(M) \implies \lnot S(M)$. From the first premise, we know that $S(M) \implies E(M)$ is valid. Therefore, the contrapositive $\lnot E(M) \implies \lnot S(M)$ should also be valid. If we were given "Maggie is not a student" and the conclusion was "Maggie does not have an e-mail account", then it would be denying the antecedent.
Why can we restate the standard definition of almost sure convergence in terms of the limit infimum of sets?
Fix $\varepsilon>0$. Denote $$A=\{\omega \in \Omega:\lim_{n \to \infty}X_n(\omega)=X(\omega)\},$$ $$A_n^\varepsilon=\{\omega \in \Omega \, : \,|X_k(\omega)-X(\omega)|< \varepsilon\text{ for all } k \ge n\}.$$ Then the original statement can be written as $P(A)=1$, and the second statement can be written either as $\lim\limits_{n \to \infty} P(A_n^\varepsilon)=1$ or as $P(A^\varepsilon)=1$, where $A^\varepsilon \equiv \lim\limits_{n \to \infty}A_n^\varepsilon=\bigcup\limits_{n=1}^{\infty}A_n^\varepsilon$. (Notice that $\{A_n^\varepsilon\}$ is an increasing sequence of events. By the continuity theorem for monotone events, $P(A^\varepsilon)=\lim\limits_{n \to \infty} P(A_n^\varepsilon)$.) Suppose $P(A)=1$. Note that $A\subseteq A^\varepsilon$. Indeed, in detail, $A^\varepsilon$ is the following set of outcomes: \begin{align*} A^\varepsilon & = \{\omega \in \Omega: \exists \,n_\varepsilon(\omega) \text{ such that } \omega \in A_k^\varepsilon, \ \forall k\, \ge n_\varepsilon(\omega)\}\\ & =\{\omega \in \Omega: \exists \,n_\varepsilon(\omega) \text{ such that } |X_k(\omega)-X(\omega)| < \varepsilon, \ \forall \, k \ge n_{\varepsilon}(\omega)\} \end{align*} Let $\omega \in A$. Then for a chosen $\varepsilon>0$, $\exists \, n_\varepsilon(\omega)$ such that $\forall \,(k \ge n_\varepsilon(\omega)) \ |X_k(\omega)-X(\omega)|<\varepsilon$. Clearly then, $\omega \in A^\varepsilon$. The fact that $A\subseteq A^\varepsilon$ implies that $P(A^\varepsilon) \ge P(A)=1$ and, thus, because probability cannot be greater than $1$, $P(A^\varepsilon) = 1$. Suppose now that $P(A^\varepsilon)=1$ for any $\varepsilon>0$. Take $\varepsilon=1,\frac{1}{2},\frac{1}{3},\ldots$ Then $A^1\supset A^\frac{1}{2} \supset A^\frac{1}{3}\supset \ldots$ $-$ a decreasing sequence of events, for which $\lim\limits_{m \to \infty}A^\frac{1}{m}=\bigcap\limits_{m=1}^{\infty}A^\frac{1}{m}$. By the continuity theorem, $$P\left(\bigcap\limits_{m=1}^{\infty}A^\frac{1}{m}\right) = P\left(\lim\limits_{m \to \infty}A^\frac{1}{m}\right)=\lim\limits_{m \to \infty}P\left(A^\frac{1}{m}\right)=1.$$ Now notice that $A=\bigcap\limits_{m=1}^{\infty}A^\frac{1}{m}$ and, thus, $P(A)=1$.
Why is the direction of the coordinate axes often not indicated in diagrams?
It's a common convention, one that you have been familiar with since your school days. So unless and until a different convention is being used, it is assumed that all directions follow the common convention, and problems are solved keeping this in mind. However, if there is a difference, then, as far as the books I have seen go, the author(s) always bring it to the reader's notice without fail.
Values of Inverse Trigonometic Functions without a Calculator
HINT: What is the value of $\tan(-\pi/6)$? (You should know from your earlier education, when $\tan(x)$ has the value $1/\sqrt{3}$, $1$ or $\sqrt{3}$.)
True or false statement about a $C^\infty$ function
What does $\mathbb{C}^\infty$ mean? If you meant to say just $C^\infty$, then the answer to your question is negative, as Matthew Leingang showed. Another classical example is the following: consider the function $f(x)=\exp(-\frac{1}{x})$ if $x>0$, and $f(x)=0$ otherwise. It is not so difficult to show that $f$ is a $C^\infty$ function on the real line. Moreover, this kind of function is very interesting because such functions play an important role in the theory of $C^\infty$ functions. As you can easily note, the examples given here are functions that are not analytic; they are just smooth functions.
First order logic,Step in Derivation of Non standard real numbers
Your approach is overcomplicating the task at hand. Instead, just show that $M_{a(\Sigma')}\vDash \sigma'$ for each $\sigma' \in\Sigma'$ separately. Such a $\sigma'$ will either be something that is true in $M_R$, and therefore also in every $M_a$, or it will assert that $0<b<c_r$ for some $c_r$ where the interpretation of $c_r$ is by construction larger than $a(\Sigma')$. Actually there's something fishy about the construction which I think looks like typos that are easily fixed, but it's not quite clear to me which direction they're supposed to be fixed in. Your $\Sigma$ adds new axioms $0<b<r$ for some set of $r$s -- which looks like $b$ is supposed to become an infinitesimal. But what you have written is actually only that $b$ must be squeezed between $0$ and $r$ for every $r\in \mathbb N$, which is easily achievable without any nonstandard elements -- as written, $M_{1/2}$ will satisfy all of $\Sigma$. What must be meant is either to aim for $b$ being infinitesimal and add axioms $$ 0<b \land b<c_r \qquad\text{for every }r\in(0,\infty) $$ or to aim for $b$ being infinite and add axioms $$ c_r <b \qquad\text{for every }r\in\mathbb N $$ The definition of $a(\Sigma')$ that works in each of those cases will be either (say) half of the smallest positive $r$ such that $\Sigma'$ mentions $c_r$, or (say) one plus the largest $r$ such that $\Sigma'$ mentions $c_r$.
Mathematical Logic and Connectives
Since $(\neg p) \iff (p\oplus \top)$ and $\oplus$ is both commutative and associative, this shouldn't be possible.
Solving $\sqrt[3]{x^2} + \sqrt[3]{x} = 2$
Put $x= y^{3}$; then your equation reduces to $$y^{2}+y -2 = (y+2)(y-1)=0.$$ So from here you get $y=-2$ or $y=1$. If $y=1$, then you get $x=1$; and if $y=-2$, you get $x=-8$, which also satisfies the equation.
Ratio Test for $\sum \limits_{n=0}^\infty \frac{n!}{n^n}z^n$ -- need to use squeeze theorem?
The question is resolved in the comments. To summarize the argument, we have $$ \rho = \left( \lim \limits_{n \to \infty} \frac{n^n}{(n+1)^n} \right)^{-1} = \lim \limits_{n \to \infty} \frac{(n+1)^n}{n^n} = \lim \limits_{n \to \infty} \Big(1+\frac{1}{n} \Big)^n = \mathrm e, $$ just from the definition of $\mathrm e$.
Dirichlet series expansion?
Yes, it can be found in Apostol's book. Given $D(s) := \sum_{n \geq 1} f(n) / n^s$, we can extract $f(n)$ by performing the integral, for $x \in \mathbb{Z}^{+}$ and any $\sigma > \sigma_D$ (the abscissa of convergence for the series), as $$f(x) = \lim_{T \rightarrow \infty} \frac{1}{2T} \int_{-T}^{T} D(\sigma+\imath t) x^{\sigma+\imath t} dt.$$
About linear transformation
For a simpler example, consider the space $X = \bigoplus_{i=1}^\infty\mathbb{R} e_i$. The linear map $X \to \mathbb{R}$ defined by $e_n \to n$ is not continuous; it isn't bounded. Similarly, the map $X \to X$ defined by $e_n \to n e_n$ is not continuous. The case of $\ell^2$ follows similarly; take a Hamel basis of it.
What's wrong with this "backwards" definition of limit?
I'm a little confused when you ask whether it is correct or not. Do you ask as opposed to the well-known definition? That is, as opposed to We say that $\lim\limits_{x\to a}f(x)=\mathscr L$ if for every $\epsilon >0$ there exists a $\delta >0$ such that, for all $x$, if $0<|x-a|<\delta$ then $|f(x)-\mathscr L|<\epsilon$. If so, you can just see $\epsilon$ and $\delta$ are reversed. However, suppose this definition was not given, and we want to define what we mean by limit. First, we clearly want to understand that we're concerned about what happens near $a$, but not at $a$. The idea of a "limit" near $a$ is then that of a value $\mathscr L$ a function $f(x)$ approaches when $x$ approaches $a$. So the idea we want to capture is that a function $f(x)$ has a number $\mathscr L$ as a limit when $x$ approaches $a$ if we can make $\mathscr L$ and $f(x)$ as near as we wish, by taking $x$ sufficiently near $a$. So when do we say that two numbers are near? We need to formalize our idea of proximity of numbers. $\bf A$. So, we can associate to each $x,y\in \bf R$ the real number $d(x,y)=|x-y|$ and call it the distance from $x$ to $y$. Note that this distance has some properties we really want any notion of distance to have: $(1)$ The distance is symmetric. $$d(x,y)=d(y,x)$$ $(2)$ It is always positive, unless $x=y$. That is, the distance of two numbers is zero if and only if they are the same number. $$d(x,y)\geq 0 \text{ and } d(x,y)=0\iff x=y$$ $(3)$ The distance from a point $x$ to a point $y$ will always be less than or equal to the distance from $x$ to another $z$ plus the distance from that $z$ to $y$. We're just saying the shortest distance from $x$ to $y$ is precisely the straight line that joins them (this generalizes to higher dimensions). $$d(x,y)\leq d(x,z)+d(z,y)$$ $\bf B$. Now, consider this silly theorem: Suppose $x,y\in \Bbb R$. Then, for every $\epsilon>0$, $d(x,y)<\epsilon$ if and only if $x=y$. One direction is easy: if $x=y$, then clearly $d(x,y)=0<\epsilon$ for positive $\epsilon$. Now suppose that for any $\epsilon>0$, we have that $d(x,y)<\epsilon$. We aim to prove $x=y$. We will argue by contradiction: suppose $x\neq y$. Since the distance is symmetric, we might assume $x<y$. Then $d(x,y)=|x-y|>0$. So $|x-y|$ is an $\epsilon>0$, which would mean that $|x-y|<|x-y|$, which is absurd. Thus, it must be that $x=y$. $\bf C$. Now, we have a formal definition of the notion of near, and we can say that two numbers are near if $d(x,y)$ is small. In particular we just showed that if $x=y$, $d(x,y)$ is smaller than any positive quantity given, since it is zero. It is but only logical to say that making $\mathscr L$ as near as we wish to $f(x)$ is making $|f(x)-\mathscr L|<\epsilon$, no matter how small $\epsilon$ is, with $\epsilon>0$. $\bf D$. We can now try and think about a degree of closeness. Given a number $\delta >0$, we can standardize and say that $x$ and $a$ are sufficiently near if $d(x,a)<\delta$. Think of it as the ruler we use to tell whether you can go on a rollercoaster ride or not. If $d(x,a)\geq \delta$, then $x$ is "bad" and we discard it. $\bf E$. It is important to note this: we are concerned about making $f(x)$ and $\mathscr L$ close, and we want to succeed in doing so by making $x$ close to $a$. It is not our objective to make $x$ and $a$ close, but our means.
So our definition must capture this: given a desired proximity of $f$ and $\mathscr L$, there must be a moment in which any $x$ close enough to $a$ will make $f(x)$ be close to $\mathscr L$. We don't forget that $x\neq a$, which only means $0<|x-a|$. So, let's try and write something down, considering what we discussed in $\bf B,C,D$. We say that $f(x)$ has $\mathscr L$ as a limit when $x$ approaches $a$ if for any prescribed degree of closeness, making $x$ sufficiently close to $a$, but not equal to $a$, will imply that for all those $x$, $f(x)$ and $\mathscr L$ will be within that prescribed degree of closeness. Applying $\bf B,C$, we can write We say that $f(x)$ has $\mathscr L$ as a limit when $x$ approaches $a$ if for any $\epsilon >0$, making $x$ sufficiently close to $a$, but not equal to $a$, will imply that for all those $x$, $|f(x)-\mathscr L|<\epsilon$. Using $\bf D$, we can write We say that $f(x)$ has $\mathscr L$ as a limit when $x$ approaches $a$ if for any $\epsilon >0$, there is a $\delta$ such that for all $x\neq a$, $|x-a|<\delta$ implies $|f(x)-\mathscr L|<\epsilon$. Finally, since $x\neq a$ is the same as $0<|x-a|$, we can be more succinct and write: We say that $f(x)$ has $\mathscr L$ as a limit when $x$ approaches $a$ if for any $\epsilon >0$, there is a $\delta$ such that for all $x$, $0<|x-a|<\delta$ implies $|f(x)-\mathscr L|<\epsilon$. You can prove, or find proofs, that if the limit of $f$ exists for some $a$, then it is unique. And this is important. As others have pointed out, this definition, giving freedom on $d(f(x),\mathscr L)$, for example, makes limits lose their uniqueness. Now, your definition clearly doesn't capture our original idea: It is somehow saying $f$ has $\mathscr L$ as a limit if for any prescribed degree of closeness $\delta$, there will exist some positive number $\epsilon$ such that making $d(x,a)<\delta$ will imply $d(f(x),\mathscr L)<\epsilon$. But this is giving us a lot of freedom on $d(f(x),\mathscr L)$, which we are most worried about. Of course we can make $x$ and $a$ as near as we wish, but the question is, can we make $f$ and $\mathscr L$ as close as we wish?
$\partial {f} / \partial{x} = xy, \partial {f} / \partial{y} = x + y $, what is $f$
$$f(x,y)=\int\frac{\partial f}{\partial x}(x,y)\,\text{d} x=\frac12x^2y+h_1(y)$$ and $$f(x,y)=\int\frac{\partial f}{\partial y}(x,y)\,\text{d}y=xy+\frac12y^2+h_2(x)$$ and there are no $h_1(y)$ and $h_2(x)$ such that $xy+\frac12y^2+h_2(x)=\frac12x^2y+h_1(y)$, so there is no such function $f$. (Another way to see this: for a $C^2$ function the mixed partials must agree, but here $\frac{\partial}{\partial y}(xy)=x$ while $\frac{\partial}{\partial x}(x+y)=1$.)
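A minimal symbolic sanity check of that obstruction (a sketch using sympy; the names `fx`, `fy` are mine):

```python
import sympy as sp

x, y = sp.symbols('x y')
fx = x * y    # the given ∂f/∂x
fy = x + y    # the given ∂f/∂y

# Clairaut's theorem: a C^2 potential f would need diff(fx, y) == diff(fy, x).
print(sp.diff(fx, y))   # x
print(sp.diff(fy, x))   # 1  -> mismatch, so no such f exists
```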
Prove true or false: If $A$ and $B$ are $n\times n$ invertible matrices and $(AB)^2=A^2B^2$, then $AB=BA$
HINT: Multiply $(AB)^2=A^2B^2$ on the left by $A^{-1}$ and on the right by ...
Are the units of a ring $\Bbb {Z}_n, +, •$ also the generators of the cyclic group $\Bbb{Z}_n, + $
The criterion is the same for each. That $\operatorname {gcd}(a,n)=1$ is equivalent to $a$ being a unit, since by Bezout this holds exactly when there exist $x,y$ such that $xa+yn=1$, i.e. when $x=a^{-1}$ exists. The second fact is because $1$ generates $(\Bbb Z_n,+)$, and hence so does $a=a\cdot 1$ for any $a$ with $\operatorname{gcd}(a,n)=1$: indeed $\vert a\cdot 1\vert=\dfrac n{\operatorname {gcd}(a,n)}=n$.
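A brute-force check of the equivalence for one small modulus (a sketch; the choice $n=12$ is arbitrary):

```python
from math import gcd

n = 12
units = [a for a in range(1, n) if gcd(a, n) == 1]
# a generates (Z_n, +) exactly when its multiples hit every residue
generators = [a for a in range(1, n) if len({a * k % n for k in range(n)}) == n]
print(units, generators, units == generators)   # [1, 5, 7, 11] twice, True
```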
Joint Probability Density Function where Y depends on X
Examine the support. We have $0&lt;x&lt; 1$ and $-x&lt;y&lt;x$ exactly when $\lvert y\rvert &lt; x &lt; 1$ and $-1&lt;y&lt;1$. $$\large\begin{align}f_{X,Y}(x,y) &amp;=\mathbf 1_{x\in(0,1), y\in(-x,x)}\\[1ex] &amp;=\mathbf 1_{y\in(-1,1), x\in (\lvert y\rvert,1)}\end{align}$$ So $$\begin{align}f_X(x) &amp;=\int_{(-x,x)} \mathbf 1_{x\in(0,1)}~\mathrm d y=2x\,\mathbf 1_{x\in(0,1)}\\[2ex]f_Y(y)&amp;=\int_{(\lvert y\rvert,1)}\mathbf 1_{y\in(-1,1)}~\mathrm d x=(1-\lvert y\rvert)\,\mathbf 1_{y\in(-1,1)}\end{align}$$ You say you are frustrated because you cannot visualize what is going on when you compute these objects. The support is the triangle $\triangle\langle 0,0\rangle\langle 1,-1\rangle\langle 1,1\rangle$. The marginal density for $X$ at some $x\in(0,1)$ integrates over a vertical line, where $y$ ranges from $-x$ to $x$. Since the joint density is uniform over the triangle, the marginal is proportional to the length of this line, $2x$: smallest near the origin, widest near the opposite base. The marginal density for $Y$ at some $y\in(-1,1)$ integrates over a horizontal line, where $x$ ranges from $\lvert y\rvert$ to $1$. Since the joint density is uniform over the triangle, the marginal is proportional to the length of this line, $1-\lvert y\rvert$: smallest near the tips, largest near the central axis. Oh, and it should now be clear what $\mathsf E(Y\mid X)$ equals: by the symmetry $y\mapsto-y$ of each vertical slice, it is $0$.
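A Monte Carlo sketch of this picture (my own; it samples the triangle by rejection):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 500_000)
y = rng.uniform(-1, 1, 500_000)
keep = np.abs(y) < x            # keep only points inside the triangle
x, y = x[keep], y[keep]

print(x.mean())   # ~ 2/3, matching E[X] = ∫ x · 2x dx over (0,1)
print(y.mean())   # ~ 0, consistent with E(Y | X) = 0 by symmetry
```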
What are good resources on idempotent semirings?
Apart from the books mentioned above, there is a recent article, "Finite simple additively idempotent semirings", with more references.
Solving $e^{iz} = 1+i$, where have I gone wrong?
Your answer is correct except for the extra $i$ you wrote. The answer is $$ z = x + i y = (\pi/4 + 2n \pi) + i ((-1/2) \ln 2).$$ You can check this answer is correct by plugging it in: Since $$iz = (1/2) \ln 2 + i (\pi/4 + 2n \pi),$$ we do have $$ e^{iz} = e^{(1/2)\ln 2} e^{i(\pi/4 + 2n \pi)} = \sqrt{2} e^{i(\pi/4 + 2n \pi)} = 1 + i$$ The answer $z = (\pi/4 + 2 n \pi) - i \pi/4$ in the marking scheme is wrong, because you can check to see you get $$iz = \pi/4 + i (\pi/4 + 2n \pi),$$ so $$ e^{iz} = e^{\pi/4} e^{i(\pi/4 + 2n \pi)} = e^{\pi/4} (1 + i)/\sqrt{2},$$ which does NOT equal $1 + i$.
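A quick numerical confirmation, taking the $n=0$ branch of each answer (a sketch):

```python
import cmath, math

z = math.pi / 4 - 0.5j * math.log(2)        # the corrected answer, n = 0
print(cmath.exp(1j * z))                    # ≈ (1+1j)

z_wrong = math.pi / 4 - 1j * math.pi / 4    # the marking scheme's answer
print(cmath.exp(1j * z_wrong))              # ≈ 1.55 + 1.55j, not 1 + 1j
```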
compute the singular value decomposition of the matrix
First off, $\Sigma$ has the wrong dimensions. Since $C$ is a $3 \times 2$ matrix, so should $\Sigma$ be: $$\Sigma=\left[\begin{matrix}1 &amp; 0 \\ 0 &amp; \sqrt 6 \\ 0 &amp; 0\end{matrix}\right]$$ Next, you need to normalize $V$: $$V=\left[\begin{matrix}-\frac{2}{\sqrt 5} &amp; \frac{1}{\sqrt 5} \\ \frac{1}{\sqrt 5} &amp; \frac{2}{\sqrt 5}\end{matrix}\right]$$ Now, we have: $$U\Sigma=CV=\left[\begin{matrix}0 &amp; \sqrt 5 \\ \frac{1}{\sqrt 5} &amp; \frac{2}{\sqrt 5} \\ \frac{2}{\sqrt 5} &amp; -\frac{1}{\sqrt 5}\end{matrix}\right]$$ Now, since the above is $U\Sigma$ and the first column of $\Sigma$ is $e_1$, the first column of this matrix must be the first column of $U$. Also, since the second column of $\Sigma$ is $\sqrt 6e_2$, the second column of $U$ must be the second column of $CV$ divided by $\sqrt 6$, so we get: $$U=\left[\begin{matrix}0 &amp; \sqrt{\frac 5 6} &amp; ?? \\ \frac{1}{\sqrt 5} &amp; \sqrt{\frac 2 {15}} &amp; ?? \\ \frac{2}{\sqrt 5} &amp; -\frac{1}{\sqrt 30} &amp; ??\end{matrix}\right]$$ Finally, since $C$ has rank $2$ but $m=3$ rows, you need to append to $U$ a third column: a unit vector from the cokernel, orthogonal to the first two columns, so that $U$ is an orthogonal matrix. Since the last column is orthogonal to the first two columns, it must be in the null space of the following matrix: $$\left[\begin{matrix} 0 &amp; \frac 1 {\sqrt 5} &amp; \frac{2}{\sqrt 5} \\ \sqrt{\frac 5 6} &amp; \sqrt{\frac 2 {15}} &amp; -\frac{1}{\sqrt 30}\end{matrix}\right]$$ Basically, I just turned the two columns of $U$ into row vectors of the above matrix so that the null space of the above matrix is orthogonal to the columns of $U$. Now, find the RREF of the above matrix: $$\left[\begin{matrix}1 &amp; 0 &amp; -1 \\ 0 &amp; 1 &amp; 2\end{matrix}\right]$$ From here, it should be clear that $(1,-2,1)^t$ spans the null space of this matrix. However, for $U$ to be orthogonal, we need this to be a unit vector, so normalize it by dividing by $\sqrt 6$: $$\frac{1}{\sqrt{6}}\left[\begin{matrix}1 \\ -2 \\ 1\end{matrix}\right]=\left[\begin{matrix}\frac{1}{\sqrt 6} \\ -\sqrt{\frac 2 3} \\ \frac{1}{\sqrt 6}\end{matrix}\right]$$ Finally, we insert this column into $U$ to get: $$U=\left[\begin{matrix}0 &amp; \sqrt{\frac 5 6} &amp; \frac{1}{\sqrt 6} \\ \frac{1}{\sqrt 5} &amp; \sqrt{\frac 2 {15}} &amp; -\sqrt{\frac 2 3} \\ \frac{2}{\sqrt 5} &amp; -\frac{1}{\sqrt 30} &amp; \frac{1}{\sqrt 6}\end{matrix}\right]$$
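If you want to double-check the factorization numerically, here is a sketch. The original matrix $C$ is not restated above; since $V$ is orthogonal, it can be recovered as $C=(CV)V^{\mathsf T}$, which gives $C=\begin{bmatrix}1&amp;2\\0&amp;1\\-1&amp;0\end{bmatrix}$ (an inference, so treat it as an assumption):

```python
import numpy as np

s5, s6, s30 = np.sqrt(5), np.sqrt(6), np.sqrt(30)
C = np.array([[1., 2.], [0., 1.], [-1., 0.]])  # recovered from CV, an assumption
U = np.array([[0.0,   np.sqrt(5/6),   1/s6],
              [1/s5,  np.sqrt(2/15), -np.sqrt(2/3)],
              [2/s5, -1/s30,          1/s6]])
S = np.array([[1., 0.], [0., s6], [0., 0.]])
V = np.array([[-2/s5, 1/s5], [1/s5, 2/s5]])

print(np.allclose(U @ S @ V.T, C))        # True: C = U Σ V^T
print(np.allclose(U.T @ U, np.eye(3)))    # True: U is orthogonal
print(np.allclose(V.T @ V, np.eye(2)))    # True: V is orthogonal
```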
Assumptions leading to the mutual independence of random variables
Counter-example: Throw a fair die and define events: \begin{align} A &amp;= \{1,2,3,4\} \\ B &amp;= \{2,3,4,5\} \\ C &amp;= \{1,2,3,5\}. \\ \end{align} Then \begin{align} &amp; P(ABC)=1/3 \\ &amp; P(AB)=P(AC)=P(BC)=1/2 \\ &amp; P(A)=P(B)=P(C)=2/3 \\ &amp; \\ \text{So}\quad &amp;P(ABC)=P(A)P(BC)=P(B)P(AC)=P(C)P(AB) \\ \text{But}\quad &amp;P(ABC)=1/3\neq 8/27 = P(A)P(B)P(C). \end{align}
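The arithmetic can be verified exactly (a small sketch using exact fractions):

```python
from fractions import Fraction

A, B, C = {1, 2, 3, 4}, {2, 3, 4, 5}, {1, 2, 3, 5}
P = lambda E: Fraction(len(E), 6)   # fair die: each outcome has probability 1/6

print(P(A & B & C))            # 1/3
print(P(A) * P(B & C))         # 1/3, and similarly for the other pairings
print(P(A) * P(B) * P(C))      # 8/27 != 1/3
```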
Given positive integers $n, k, i,$ prove $\binom{n}{k} = \sum_{j=i}^{n-k+i}\binom{j-1}{i-1}\binom{n-j}{k-i}$
The identity (which was originally in the question) $${n \choose k} = \sum_{j = i}^{n - k + 1} {j - 1 \choose i - 1}{n - j \choose k - i}$$ is not true. You can take essentially any values of $n, k,$ and $i$ with $1 \leq i \leq k \leq n$ and see this. Here is a combinatorial proof of the following identity, which I believe is what you meant: $${n \choose k} = \sum_{j = i}^{n - k + i} {j - 1 \choose i - 1}{n - j \choose k - i}$$ We will count the number of $n$-bit bitstrings with $k$ zeroes, which is clearly ${n \choose k}$. From here, we will number the bits of an $n$-bit bitstring from left to right as $1$ to $n$. The right-hand side can be interpreted as follows. First fix some $i$ with $1 \leq i \leq k$. Then, to choose a bitstring with $k$ zeroes and $n - k$ ones, choose some bit $j \geq i$ to be a dividing point, and set it to zero. Then, choose $i - 1$ of the $j - 1$ bits to the left of the dividing point to be zero, and choose $k - i$ bits to the right of the dividing point to be zero. Note that if we read the bitstring from left to right, the $i$th zero bit will be in the $j$th slot. This is the crucial observation that shows we have counted every bitstring with $k$ zeroes exactly once. If you want to think in terms of partitioning the bitstrings, the terms of the summation partition these bitstrings into classes according to the slot $j$ holding the $i$th zero bit. Note also that $j$ can be at most $n - k + i$ (i.e. there must be at least $k - i$ bits to the right of the dividing point), since otherwise it is not possible to choose $k - i$ bits from the right side of the dividing point to be zeroes. Similar logic shows why $j \geq i$ is also required (otherwise it is not possible to select $i - 1$ bits to the left of $j$). In other words, the $i$th zero bit in a bitstring with $k$ zeroes lies between bit $i$ and bit $n - k + i$. Note also that the condition $i \leq k$ is crucial: if $i \geq k + 1$, then our procedure would have to choose at least $k$ zero bits to the left of the dividing point and a negative number of zero bits to the right of it, which is clearly impossible; we must select a nonnegative number of bits on each side of the dividing point. In other words, there cannot exist an $i$th zero (reading from left to right) in a bitstring with $k$ zeroes unless $i \leq k$. So this procedure works for all $1 \leq i \leq k$ as given in the question, and summing over the allowed values of $j$ for any fixed $i$, we get $${n \choose k} = \sum_{j = i}^{n - k + i} {j - 1 \choose i - 1}{n - j \choose k - i}$$ as desired. $\square$
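The corrected identity is also easy to confirm exhaustively for small parameters (a brute-force sketch):

```python
from math import comb

def rhs(n, k, i):
    return sum(comb(j - 1, i - 1) * comb(n - j, k - i)
               for j in range(i, n - k + i + 1))

print(all(rhs(n, k, i) == comb(n, k)
          for n in range(1, 13)
          for k in range(1, n + 1)
          for i in range(1, k + 1)))   # True
```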
Is it possible to have simultaneously $\int_I(f(x)-\sin x)^2 dx\leq \frac{4}{9}$ and $\int_I(f(x)-\cos x)^2 dx\leq \frac{1}{9}$?
Hints: What is $\int_I (\sin(x)-\cos(x))^2\; dx$? Triangle inequality
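To see where the hints lead (a sketch, assuming $I=[0,2\pi]$, since the interval is not restated here): $$\int_0^{2\pi}(\sin x-\cos x)^2\,dx=\int_0^{2\pi}(1-\sin 2x)\,dx=2\pi,$$ so $\lVert\sin-\cos\rVert_{L^2(I)}=\sqrt{2\pi}$. If both displayed bounds held, the triangle inequality in $L^2(I)$ would give $$\sqrt{2\pi}=\lVert\sin-\cos\rVert_2\leq\lVert f-\sin\rVert_2+\lVert f-\cos\rVert_2\leq\tfrac23+\tfrac13=1,$$ a contradiction. The same argument works for any $I$ with $\int_I(\sin x-\cos x)^2\,dx&gt;1$.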
Number of labelled graphs.
With $n$ labelled vertices, there are $\dfrac{n(n-1)}{2}$ potential edges, each one of which may be present or not, so there are $2^{n(n-1)/2}$ possible different graphs.
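A one-liner tabulates the count for small $n$ (a sketch):

```python
from math import comb

for n in range(1, 6):
    print(n, 2 ** comb(n, 2))   # 1, 2, 8, 64, 1024 labelled simple graphs
```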
Character of representation of $D_5$ on $\mathbb{R}^5$.
Just write down the matrices. For the identity, $\iota (a_1, a_2, a_3, a_4, a_5) = (a_1, a_2, a_3, a_4, a_5)$, the matrix is $$ \begin{bmatrix} 1&amp;0&amp;0&amp;0&amp;0 \\ 0&amp;1&amp;0&amp;0&amp;0 \\ 0&amp;0&amp;1&amp;0&amp;0 \\ 0&amp;0&amp;0&amp;1&amp;0 \\ 0&amp;0&amp;0&amp;0&amp;1 \end{bmatrix}$$ whose trace is $5$. For the rotation $r(a_1, a_2, a_3, a_4, a_5) = (a_2, a_3, a_4, a_5, a_1)$, the matrix is $$ \begin{bmatrix} 0&amp;0&amp;0&amp;0&amp;1 \\ 1&amp;0&amp;0&amp;0&amp;0 \\ 0&amp;1&amp;0&amp;0&amp;0 \\ 0&amp;0&amp;1&amp;0&amp;0 \\ 0&amp;0&amp;0&amp;1&amp;0 \end{bmatrix}$$ whose trace is $0$. (The other three non-trivial rotations have the same trace.) For the reflection $s(a_1, a_2, a_3, a_4, a_5) = (a_5, a_4, a_3, a_2, a_1)$, the matrix is $$ \begin{bmatrix} 0&amp;0&amp;0&amp;0&amp;1 \\ 0&amp;0&amp;0&amp;1&amp;0 \\ 0&amp;0&amp;1&amp;0&amp;0 \\ 0&amp;1&amp;0&amp;0&amp;0 \\ 1&amp;0&amp;0&amp;0&amp;0\end{bmatrix}$$ whose trace is $1$. (The other four reflections have the same trace.) Finally, I'll remark that, although the elements of the group $D_5$ can be thought of as rotations and reflections of a pentagon, they are not acting as rotations and reflections in this representation on $\mathbb R^5$. An actual reflection in $\mathbb R^5$ would have eigenvalues $1, -1, -1, -1, -1$, and hence a trace of $-3$.
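The traces are quick to confirm with permutation matrices (a numpy sketch):

```python
import numpy as np

identity = np.eye(5)
rotation = np.roll(np.eye(5), 1, axis=0)   # cyclic shift: the matrix of r above
reflection = np.fliplr(np.eye(5))          # reversal: the matrix of s above

print(np.trace(identity), np.trace(rotation), np.trace(reflection))  # 5.0 0.0 1.0
```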
Looking for intriguing applications of martingales
I came across the following game a while ago: You start with $\$100$. At each timestep $t = 1,...,n$ you flip a fair coin. If you get heads, you receive $10\%$ of what you currently have, and if you get tails, you pay $10\%$ of what you currently have. What is the expected value of the game as $n \to \infty$? You may be tempted to say $\$0$, since $-10\%$ at $t = k$ followed by $+10\%$ at $t = k+1$ (or the reverse order) always leaves you worse off than before ($0.9 \times 1.1 = 0.99$). However, the sequence of game values is a martingale: each step multiplies the current value by $1.1$ or $0.9$ with equal probability, so the conditional expectation of the next value given the current one is $\frac12(1.1+0.9)=1$ times the current value. From this it is easy to conclude that the expected value of the game is $\$100$ for every $n$.
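A simulation makes the contrast vivid (a sketch): the mean stays at $\$100$ while the typical outcome decays.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 100, 100_000
factors = rng.choice([1.1, 0.9], size=(trials, n))   # fair coin at each step
final = 100 * factors.prod(axis=1)

print(final.mean())       # ≈ 100: the value process is a martingale
print(np.median(final))   # ≈ 100 * 0.99**50 ≈ 60: typical outcomes shrink
```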
Problem about the upper bound of the modulus of a root of a monic polynomial
Suppose $z^{n}+a_1z^{n-1}+...+a_n=0$. If $|z| \leq 1$ then the conclusion is trivially true. Suppose $|z| &gt;1$. Then $$|z|^{n}=|a_1z^{n-1}+...+a_n|\leq |a_1||z|^{n-1}+\cdots+|a_n|\leq |z|^{n-1} \left(|a_1|+|a_2|+\cdots+|a_n|\right),$$ where the last step uses $|z|^{k}\leq|z|^{n-1}$ for $0\leq k\leq n-1$, valid because $|z|&gt;1$. Now divide by $|z|^{n-1}$.
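The resulting bound, $|z|\leq\max(1,\,|a_1|+\cdots+|a_n|)$, is easy to test numerically (a sketch; the cubic below is an arbitrary example):

```python
import numpy as np

coeffs = [1, 0.3, -0.5, 0.2]    # a monic cubic, chosen arbitrarily
roots = np.roots(coeffs)
bound = max(1, sum(abs(c) for c in coeffs[1:]))
print(max(abs(roots)), "<=", bound)
```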
last non-zero digit sum
The last non-zero digits follow this pattern: $\underbrace{123456789}_{u_1}$ $\underbrace{u_1 1 u_1 2 u_1 3 u_1 4 u_1 5 u_1 6 u_1 7 u_1 8 u_1 9 u_1}_{u_2}$ $\underbrace{u_2 1 u_2 2 u_2 3 u_2 4 u_2 5 u_2 6 u_2 7 u_2 8 u_2 9 u_2}_{u_3}$ So you just have to calculate the sum for $u_1$, $u_2$, $u_3$, $\cdots$ and find where your sum begins and ends. Then you can be smart and calculate the number of each $u_n$ you'll need. For example, between 3456 and 11578, I have the sequence $67896u_17u_18u_19u_1 5 u_2 6 u_2 7 u_2 8 u_2 9 u_2 4 u_3 5 u_3 6u_3 7u_3 8u_39u_31$ (we're at 10000) $u_31 u_22u_23u_2 4u_2 5 u_1 1 u_1 2 u_13 u_1 4 u_1 5 u_1 6 u_1 7 12345678$ So, I need: "6789" to get to the next 10, "6789" + 4*$u_1$ to get to the next 100, "56789" + 5*$u_2$ to get to the next 1000, "45678911" + 8*$u_3$ to get to 11000, etc. By the same reasoning, if I want the sequence between 684551 and 767894, I will need: "23456789" to get to 684560 (the numbers after the 1) "6789" +4$u_1$ to get to 684600 (the numbers after the 5) "6789" +4$u_2$ to get to 685000 (the numbers after the 5) "56789" +5$u_3$ to get to 690000 (the numbers after the 4) "9" +1$u_4$ to get to 700000 (the numbers after the 8) "7" (we're at 700000) "123456" + 6$u_4$ to get to 760000 "1234567" + 7$u_3$ to get to 767000 "12345678" + 8$u_2$ to get to 767800 "123456789" + 9 $u_1$ to get to 767890 "1234" to get to 767894
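When in doubt, the block bookkeeping can be checked against brute force (a helper sketch; the function name is mine):

```python
def last_nonzero_digit_sum(lo, hi):
    """Sum of the last non-zero digits of every integer in [lo, hi]."""
    total = 0
    for n in range(lo, hi + 1):
        while n % 10 == 0:   # strip trailing zeros
            n //= 10
        total += n % 10
    return total

print(last_nonzero_digit_sum(3456, 11578))
```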
The $\limsup X_n \leq X$ almost surely
Let $\Omega'=\{\limsup_n X_n\le X\}$, so that $\mathsf{P}(\Omega')=1$ by assumption. For a fixed $n\ge 1$, let $$ A_{n}=\cup_{m\ge n}\{X_m\ge X+\epsilon\}\cap\Omega'. $$ $\{A_{n}\}_{n\ge 1}$ is a decreasing sequence of sets with $\cap_{n\ge 1}A_{n}=\emptyset$ (on $\Omega'$, the event $\{X_m\ge X+\epsilon\}$ occurs for only finitely many $m$). Hence, $\mathsf{P}(A_{n})\to 0$, so that we may choose $n_\epsilon$ s.t. $\mathsf{P}(A_{n_\epsilon})&lt;\epsilon$. Take $A=\Omega'^{c}\cup A_{n_\epsilon}$. Then $\mathsf{P}(A)\le \mathsf{P}(\Omega'^{c})+\mathsf{P}(A_{n_\epsilon})&lt;\epsilon$ and $X_n&lt;X+\epsilon$ on $A^{c}$ for all $n\ge n_\epsilon$.
Combinatoric Proof of $\sum_0^n\binom{k-1+i}{k-1} = \binom{n+k}{k}$
Suppose $x_1,x_2,...\in \mathbb{N}\cup \{0\}$ and we need to count the number of solutions of the inequality $$x_1+x_2+x_3+...+x_k\leq n.$$ One method is partitioning by the value of the sum: $$x_1+x_2+x_3+...+x_k\leq n \\\to\begin{cases}x_1+x_2+x_3+...+x_k = 0 &amp; \left(\begin{array}{c}0+k-1\\ k-1\end{array}\right)\\x_1+x_2+x_3+...+x_k = 1 &amp; \left(\begin{array}{c}1+k-1\\ k-1\end{array}\right)\\x_1+x_2+x_3+...+x_k =2&amp;\left(\begin{array}{c}2+k-1\\ k-1\end{array}\right)\\\vdots\\x_1+x_2+x_3+...+x_k = n&amp; \left(\begin{array}{c}n+k-1\\ k-1\end{array}\right)\end{cases} $$ The sum of these counts is $$\sum_0^n\binom{k-1+i}{k-1}.$$ A second method to find the number of solutions is to add a slack variable $\bf{x_{k+1}}$ and convert the inequality into an equation: $$x_1+x_2+x_3+...+x_k \leq n \iff x_1+x_2+x_3+...+x_k +\color{red} {\bf{x_{k+1}}}=n \to \binom{n+(k+1)-1}{(k+1)-1}$$ Hence $$\sum_0^n\binom{k-1+i}{k-1} =\binom{n+(k+1)-1}{(k+1)-1}=\binom{n+k}{k}$$
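A quick numerical spot check of the identity (a sketch with $n=7$, $k=3$):

```python
from math import comb

n, k = 7, 3
lhs = sum(comb(k - 1 + i, k - 1) for i in range(n + 1))
print(lhs, comb(n + k, k))   # 120 120
```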
Is there anything I could read that talks about dimensionality of prime/composite numbers?
Assuming you meant that 27 would be "three dimensional" but 9 would be "two dimensional", then the "dimension" of a number $n$ is the total number of prime factors, denoted $\Omega(n)$ (often read "big omega"). It is not usually called the dimension of a number, but if you really like to think geometrically, you can think about it as the greatest dimension in which the number is the "hypervolume" of a hyperrectangle with side lengths greater than 1. See the following links for some information. https://oeis.org/A001222 https://en.wikipedia.org/wiki/Prime_factor#Omega_functions https://en.wikipedia.org/wiki/Arithmetic_function#.CE.A9.28n.29.2C_.CF.89.28n.29.2C_.CE.BDp.28n.29_.E2.80.93_prime_power_decomposition
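If you want to experiment, $\Omega(n)$ is a one-liner with sympy (a sketch; the helper name is mine):

```python
from sympy import factorint

def big_omega(n):
    # total number of prime factors, counted with multiplicity
    return sum(factorint(n).values())

print(big_omega(27), big_omega(9), big_omega(12))   # 3 2 3
```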
UMP of a Beta($\theta,1$) distribution
One knows that the UMP test of size $\alpha$ of $H_0:\theta\lt\theta_0$ vs $H_1:\theta\gt\theta_0$ is $\phi(x)=\mathbf 1_{x\gt x_0}$ where $x_0$ solves $P_{\theta_0}[X\gt x_0]=\alpha$. Here, $P_{\theta}[X\gt x]=1-x^\theta$ hence $x_0=(1-\alpha)^{1/\theta_0}$. For $\theta_0=1$, this shows that the UMP test of size $\alpha$ of $H_0:\theta\lt1$ vs $H_1:\theta\gt1$ is $\phi(x)=\mathbf 1_{x\gt1-\alpha}$.
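A simulation of the size of the test at the boundary $\theta=\theta_0=1$, where Beta$(1,1)$ is uniform (a sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, theta0 = 0.05, 1.0
x0 = (1 - alpha) ** (1 / theta0)

X = rng.beta(theta0, 1, 1_000_000)   # Beta(theta0, 1) samples
print((X > x0).mean())               # ≈ 0.05 = alpha
```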
Confusion with fourier coeffients
You have $$ c_0=\frac1{2\pi}\int_0^{2\pi}\frac{\pi-t}2\,dt=\frac1{4\pi}\,\left(\pi\times2\pi-\frac{(2\pi)^2}2 \right) =0. $$
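(A symbolic one-liner confirms this, if you like:)

```python
import sympy as sp

t = sp.symbols('t')
print(sp.integrate((sp.pi - t) / 2, (t, 0, 2 * sp.pi)) / (2 * sp.pi))   # 0
```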
Show that if simple graph diameter is bounded then max degree is unbounded
@bof's hint points in the right direction. If both the maximum degree and the diameter are bounded, you can find an upper bound for the number of nodes in any network of the sequence by doing a BFS from any node. With maximum degree $d$ and diameter at most $k$, the total number of nodes is at most the Moore bound: $$ n \leq 1 + d \sum_{i=0}^{k-1}(d-1)^{i}$$ Thus if the diameters stay bounded while the number of nodes grows without bound, the maximum degrees must be unbounded.
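A tiny helper for the bound (the name is mine); for instance $d=3$, $k=2$ gives $10$, attained by the Petersen graph:

```python
def moore_bound(d, k):
    # max nodes reachable by BFS: 1 root, then d, d(d-1), d(d-1)^2, ...
    return 1 + d * sum((d - 1) ** i for i in range(k))

print(moore_bound(3, 2))   # 10 (the Petersen graph attains this)
```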
Categorical Pasting Lemma
Let $\{ U_i : i \in I \}$ be an open cover of $X$. Let $\mathcal{J}$ be the poset of subsets of $I$ of size $1$ or $2$, ordered by inclusion, and consider the diagram $V : \mathcal{J}^\mathrm{op} \to \mathbf{Top}$ sending each $J \subseteq I$ to $\bigcap_{j \in J} U_j$. Then $X \cong \varinjlim_{\mathcal{J}} V$. (Just check the universal property!) What is essential in the above is that each $U_i$ is open in $X$. On the other hand, you might like to verify for yourself that we can replace the poset of subsets of $I$ of size $1$ or $2$ with the poset of non-empty finite subsets of $I$.
$16$ natural numbers from $0$ to $9$, and square numbers: how to use the pigeonhole principle?
If the sequence contains a 0 then we are done, so assume that's not the case (1, 4 and 9 can also be ignored, and the distinction between 2 and 8 can also be ignored, but let's forget that). As you observed, only the primes 2, 3, 5 and 7 can occur as factors. We want a block of consecutive terms such that all four of these primes occur in its product with even multiplicity. Hmm. $2^4=16$. Getting warmer! Let $n_1,n_2,\ldots, n_{16}$ be the numbers. Hint #1: There shall be 16 pigeonholes. A number of the form $2^a3^b5^c7^d$ goes to one of the 16 holes according to which (if any) of the exponents $a,b,c,d$ are even and which are odd. Two choices (even vs. odd) for each of the four exponents, so altogether 16 holes. Hint #2: The pigeons shall be the initial products $p_k=n_1 n_2\cdots n_k$, for $k=1,2,\ldots,16$. Turn up the thermostat!! There are a couple of observations that you need to make :-) ============================== Edit: Maybe the penny didn't drop? Hint #3: If one of these pigeons falls into the hole where $a,b,c,d$ are all even, then we are done. If not, then only 15 pigeonholes are available to our initial products. Therefore two of them, say $p_k$ and $p_\ell$ with $k&lt;\ell$, will go to the same hole. What can you say about $p_\ell/p_k$?
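For experimentation, here is a direct search mirroring the pigeonhole argument (a sketch, assuming no zeros in the sequence; the function name is mine). With sixteen digits there are seventeen prefixes, counting the empty one, and only sixteen parity signatures, so a repeat, and hence a square block, is guaranteed.

```python
def square_block(digits):
    """Return (i, j) so that the product of digits[i:j] is a perfect square."""
    def sig(n):
        # parity of the exponents of 2, 3, 5, 7, packed into 4 bits
        s = 0
        for bit, p in enumerate((2, 3, 5, 7)):
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            s ^= (e % 2) << bit
        return s

    seen = {0: 0}   # the empty prefix has an all-even signature
    x = 0
    for idx, d in enumerate(digits, 1):
        x ^= sig(d)
        if x in seen:              # two prefixes share a signature ...
            return seen[x], idx    # ... so their quotient is a square
        seen[x] = idx

print(square_block([2, 3, 5, 7, 2, 3, 5, 7, 6, 6, 2, 2, 3, 3, 5, 5]))  # (0, 8)
```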
How is the quotient group related to the direct product group?
Your solution is not wrong but it has unnecassary steps. You can simply use following arguments. Let $\pi:G\times H\to G$ be projection map .i.e. $\pi(g,h)=g$. It is clear that map is onto. Claim$1:$ $\pi$ is an homomorphism; $$\pi((g_1,h_1)(g_2,h_2))=\pi((g_1g_2,h_1h_2))=g_1g_2=\pi((g_1,h_1))\pi((g_2,h_2))$$ and since it is onto map it is an epimorphism. Claim$2$: $Ker(\pi)=e_G\times H$ Let $\pi(x,y)=e_g$ then you can say that $x=e_g$ and $y$ is any element in $H$ so result follows. By the way it also show that $e_G\times H$ is normal in $G\times H$. Result: By first isomorphim theorem; $$(G\times H)/ker(\pi)=(G\times H)/(e_G\times H)\cong G$$ Notes: By using the other projection you can show smiliar argument for $G\times e_H$ and $H$. I hope you are familiar with isomorphism theorems.
Why does Hoeffding's Lemma do taylor expansion in the exponent?
Part of the motivation for this approach is the more general Chernoff approach to large deviation bounds, where, by Markov's inequality, one has (for $t&gt;0$): $$\mathbb{P}(X&gt;x) = \mathbb{P}(e^{tX}&gt;e^{tx}) \le \frac{\mathbb{E}[e^{tX}]}{e^{tx}} = \exp(-[tx-C(t)])$$ with $C(t)=\log \mathbb{E}[e^{tX}]$, the cumulant-generating function of $X$. To proceed, one then calculates the convex conjugate of $C$: $$r(x) = \sup_{t&gt;0}[tx-C(t)]$$ to deduce that $\mathbb{P}(X&gt;x) \le \exp(-r(x))$ (by taking the $t$ which attains the supremum above). The idea of this standard proof of Hoeffding's inequality is to: upper bound $\mathbb{P}(X&gt;x)$, by ... lower bounding $r(x)$, by ... upper bounding $C(t)$, which is what the proof does. The choice to do this by a Taylor expansion is partially motivated by the fact that $C(t)$ is convex (this is true for all cumulant-generating functions), and as such, we have information about its derivatives and curvature.
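As a concrete instance (a sketch): for a sum $S_n$ of $n$ independent Rademacher $\pm1$ variables, this recipe yields $\mathbb{P}(S_n&gt;x)\le e^{-x^2/2n}$, and a simulation sits comfortably below the bound.

```python
import numpy as np

rng = np.random.default_rng(0)
n, x, trials = 100, 25, 100_000
S = rng.choice([-1, 1], size=(trials, n)).sum(axis=1)

print((S > x).mean())            # empirical tail probability, well below ...
print(np.exp(-x**2 / (2 * n)))   # ... the Hoeffding/Chernoff bound, ≈ 0.044
```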
Correlation between Beta distributions
You can apply ranks and then use Pearson correlation on the ranks; ranks have a rectangular (uniform) distribution rather than a Gaussian (normal) one. Pearson correlation computed on the ranks is precisely the Spearman rank correlation, so use that.
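A small illustration of the difference (a sketch; the cubic transform is an arbitrary monotone map):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.beta(0.5, 0.5, 10_000)
y = x ** 3                       # a monotone transform, still supported on (0, 1)

print(stats.pearsonr(x, y)[0])   # < 1: linear correlation misses the curvature
print(stats.spearmanr(x, y)[0])  # 1.0: the ranks agree exactly
```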
Relative homotopy and composition of maps
For the first one, let $Y = I$ and $Z = \star$, the singleton space, so that $q \colon Y → Z$ is the constant map. Let furthermore $Φ$ and $Ψ$ be different constant maps $I^n → Y$. Then $Φ$ and $Ψ$ cannot be homotopic relative to any non-empty subspace, but $qΦ = qΨ$. For the second one, take the first counterexample while regarding $Z = \star$ as a subspace of $Y = I$.