Creating a function based on limits
Take the following function: $f:\Bbb R \setminus \{5\} \to \Bbb R$ given by $f(x) =3$
Neumann boundary conditions for PDE
It's the one sided limit : $$ \frac{\partial u}{\partial \vec{n}}(x) = \lim_{\epsilon \to 0^-} \frac{f(x+\epsilon\vec{n}) - f(x)}{\epsilon}$$ For $\epsilon$ small enough (and negative), $f(x+\epsilon\vec{n})$ is in $\Omega$ (because $\partial \Omega$ is smooth enough)
$(a_n) $ is a sequence of positive real numbers. The series $\sum a_n$ will converge if
If $\sum \frac {a_{n+1}} {a_n}$ converges then $\frac {a_{n+1}} {a_n} \to 0$ so $\sum a_n$ converges by ratio test. If $\sum \frac {a_n} {a_{n+1}}$ converges then $\frac {a_n} {a_{n+1}} \to 0$ and $\frac {a_{n+1}} {a_n} \to \infty $, so ratio test tells you that $\sum a_n$ diverges.
Is $(p, q, r)$ linearly independent?
Apologies for my originally incorrect answer (which claimed that $p,q,r$ cannot be linearly independent): I mistakenly thought $P_2$ was over the integers (I had been working with algebraic integers all day!). The right question is: can we show that $p,q,r$ form a basis for $P_2$? The answer is yes, and we can show this by observing that $p,q,r$ generate the natural basis of $P_2$: $(x^2,x,1)$. So since $p,q,r$ form a basis of $P_2$, they are linearly independent.
Special chains in $\mathbb{R}^n$
(I'm taking the reading that the chain itself is uncountable, not the sets appearing in the chain.) Note that $\mathbb{N}^n$ is an infinite discrete subset of $\mathbb{R}^n$, and therefore so is any of its subsets. Hence, if we can find an uncountable chain in $\mathcal{P}(\mathbb{N}^n)$ we are done. As $\mathbb{N}^n$ is countable and we no longer care about any internal structure, we may as well replace it with any countable set we like, and look for a chain in its powerset. In particular, we may as well look for a chain in $\mathcal{P}(\mathbb{Q})$. But then, each $r\in \mathbb{R}$ determines a unique subset of $\mathbb{Q}$: $$D_r = \{ q\in\mathbb{Q} : q<r\} $$ Clearly if $r_1 < r_2$ then $D_{r_1}\subsetneq D_{r_2}$. So then $\{D_r : r\in \mathbb{R}\}$ is a chain in $\mathcal{P}(\mathbb{Q})$ of size continuum, and so we are done. The above construction is the standard Dedekind cut method of constructing $\mathbb{R}$ from $\mathbb{Q}$.
Prove that there exists $A\subset \mathbb R^2$ with $y$-projection dense in $\mathbb R$ and single $x$-projection.
A basis of $\mathbb{R}$ over $\mathbb{Q}$ (a Hamel basis) has cardinality $\mathrm{Card}(\mathbb{R})$. Let $(e_y)_{y \in \mathbb{R}}$ be such a basis. Then $A=\{ (q+e_y,y) \vert q \in \mathbb{Q}, y \in \mathbb{R} \}$ has the property you are looking for.
Continuity of Passage Time
For almost all $\omega$, $W_t$ is continuous in $t$. First, this implies that the infimum defining the passage time is attained: $T_b=\min\{t: W_t=b\}$. So if $b'<b$, by the intermediate value theorem $T_{b'}< T_b$. On the other hand, for any $\epsilon>0$, there is $\delta>0$ so that $\sup\{W_t: t\in [0, T_b-\epsilon]\}\leq b-\delta$; thus if $b'>b-\delta$ we see $T_{b'}\geq T_b-\epsilon$, so $\lim_{b'\to b-}T_{b'}=T_b$. Next, it is easy to see that if $b'>b$ then $S_{b'}\geq S_b$. On the other hand, for any $\epsilon>0$ there is $\delta>0$ such that there is $t\in [S_b, S_b+\epsilon]$ with $W_t>b+\delta$. So if $b'\in [b, b+\delta]$, we see $S_{b'}\leq S_b+\epsilon$. This proves $\lim_{b'\to b+}S_{b'}=S_b$.
Sequence which converges pointwise but not uniformly?
Consider $f_n(x)=x^n$ on $[0,1]$. It converges pointwise to the function $f$ with $f(x)=0$ for $x\in[0,1)$ and $f(1)=1$, but not uniformly, since uniform convergence preserves continuity. (Your example, by contrast, converges uniformly on $[0,1]$.)
Hazard function definition
I presume $T$ is a random variable representing a failure time. And your equation should probably read $$h(t) = \lim_{\delta t\downarrow 0} \frac{P(t \le T < t+\delta t\mid T\ge t)}{\delta t}.$$ The hazard function represents an instantaneous rate of failure. So if we're at time $t$ and it hasn't failed and we look out into the immediate future, the numerator represents the probability that the failure will happen within time $\delta t.$ The bigger $\delta t$ is, the more time there is for the failure to occur, so the probability of failure within time $\delta t$ is going to be bigger when $\delta t$ is big and smaller when it's small. If $\delta t$ is vanishingly tiny, the probability of failure will be vanishingly tiny as well. In fact in the limit $\delta t\rightarrow 0$ we might expect that the probability vanishes proportional to $\delta t.$ Thus the hazard function tells you the probability of failure per time, expressed as an instantaneous rate.
Can an $n$-dimensional smooth manifold $M$ in $\mathbb R^{n+k}$ ($k\ge 1$) be dense in $\mathbb R^{n+k}$?
If $M$ is embedded in $\Bbb R^{n+k}$ then $M$ is a submanifold of $\Bbb R^{n+k}$. If $k=0$, then the only submanifolds of the same dimension as the surrounding manifold are its open subsets. It follows that $M$ is dense in $\Bbb R^n$ if and only if it is a dense open subset of $\Bbb R^n$. There are many examples of such subsets: just remove a finite number of points, or linear subspaces of dimension $\le n-1$, from $\Bbb R^n$. If $k \ge 1$ then pick an arbitrary point $p \in M$. Since $M$ is a submanifold of $\Bbb R^{n+k}$, there exist an open subset $U \subseteq \Bbb R^{n+k}$ around $p$, an open subset $V \subseteq \Bbb R ^{n+k}$, and a diffeomorphism $f : U \to V$, such that $f (M \cap U) = V \cap (\Bbb R^n \times \{0_k\})$, with $0_k$ being the $0$ vector in $\Bbb R^k$. (Essentially, this tells us that, modulo a diffeomorphism, every $n$-dimensional submanifold is locally an $n$-dimensional linear subspace.) Pick now a point $q \in V \setminus \Big( V \cap (\Bbb R^n \times \{0_k\}) \Big)$. Since $V$ is open, you may find a small enough open subset $W$ around $q$ such that $W \cap \Big( V \cap (\Bbb R^n \times \{0_k\}) \Big) = \emptyset$. Now, the core fact: $f^{-1} (W)$ is open (because $f$ is a diffeomorphism) and, by construction, $f^{-1} (W) \cap M = \emptyset$. So, you have found an open subset of $\Bbb R^{n+k}$ that does not intersect $M$. But one definition of topological density is: $M$ is dense if and only if it intersects every open subset. This shows that $M$ is not dense in $\Bbb R^{n+k}$.
calculate $a/b \bmod p$ where $p$ is a prime and $a,b$ can be very large
Find the maximum power $p^k$ that divides both $a$ and $b$. This can be done with binary search. Now compute $(a/p^k)/(b/p^k)$ using the algorithm you give in the question itself. (Note that dividing a large number by a small one is efficiently implementable.) This assumes that it is valid to have multiples of $p$ in the denominator, which some commenters have claimed is invalid. The solution is as inelegant as solutions get. I don't see a cleaner way.
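A minimal Python sketch of this procedure (the helper names are mine, not from the answer); it assumes that after stripping the common power of $p$ the remaining denominator is coprime to $p$, so a modular inverse exists:

```python
def strip_common_p_power(a, b, p):
    # Remove the largest p^k dividing both a and b (the answer suggests
    # binary search; a simple loop is enough for illustration).
    while a % p == 0 and b % p == 0:
        a //= p
        b //= p
    return a, b

def div_mod_p(a, b, p):
    # Compute (a/b) mod p by multiplying with the modular inverse of b.
    a, b = strip_common_p_power(a, b, p)
    if b % p == 0:
        raise ValueError("denominator still divisible by p")
    return a * pow(b, -1, p) % p  # pow(b, -1, p) needs Python 3.8+

# 21/14 = 3/2, and 3 * 2^{-1} = 3 * 4 = 12 = 5 (mod 7)
assert div_mod_p(21, 14, 7) == 5
```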
Application of Lehmann–Scheffé theorem in a sample with normal distribution
First observe that the Gaussian distribution belongs to the exponential family, thus $T$ is not only sufficient but complete (and minimal) too. To use the Lehmann–Scheffé theorem you only have to show that your estimator is unbiased for the variance: $$\mathbb{E}[T]=\frac{1}{n}\, n\,\mathbb{E}[Y_1^2]=\sigma^2,$$ and that's all.
Relations and functions
Well \begin{align} (g \circ f)(x) &= 1-(ax+b)+(ax+b)^2 \\ &= 1-ax-b+a^2x^2+2axb+b^2 \end{align} From which you need to show that \begin{align} 3 &= 1-b+b^2 \\ -12 &= 2ab-a \\ 16 &= a^2 \end{align} From which you can find the values for $a, b$.
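A quick brute-force check (my own verification, not part of the answer) that the three equations pin down $a$ and $b$:

```python
# Verify which integer pairs (a, b) satisfy the system
#   3 = 1 - b + b^2,  -12 = 2ab - a,  16 = a^2.
def satisfies(a, b):
    return (1 - b + b * b == 3) and (2 * a * b - a == -12) and (a * a == 16)

solutions = [(a, b) for a in range(-10, 11) for b in range(-10, 11) if satisfies(a, b)]
assert set(solutions) == {(4, -1), (-4, 2)}
```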
Two quadratic programs
First, we can solve for $g(x_1)$ explicitly. If $x_1 < 0$, the domain for the second optimization problem includes the origin, hence $g(x_1) = 0$. If $x_1 \geq 0$, the distance from the origin to the line $u_1+2u_2=x_1$ is $\frac{|1(0)+2(0)-x_1|}{\sqrt{1+2^2}}.$ Hence $g(x_1)=\frac{x_1^2}{5}$. In summary, $$g(x_1)= \begin{cases} \frac{x_1^2}{5} & \text{ if } x_1 \geq 0 \\ 0 & \text{ if } x_1 < 0 \end{cases}$$ (Credit: Rodrigo de Azevedo) In the first optimization problem, we are interested in the case where $x_1 \geq 0$, hence $f(x_1,x_2)=\frac{x_1^2}{5}-x_1^2+x_2^2=-\frac{4x_1^2}{5}+x_2^2$. Are you able to complete the problem now?
Finding the intersection points of two circles algebraically
$$\begin{align*}&x^2-2x+y^2+2y=2\\{}\\ &x^2+2x+y^2-2y=-1\end{align*}$$ Subtract: $$-4x+4y=3$$ and this means all the intersection points of the circles are on the above line, so if we take a point on it, say $\left(x,\; x+\frac34\right)$, we can substitute back into one of the circles: $$(x-1)^2+\left(x+\frac74\right)^2=4\iff2x^2+\frac32x=4-1-\frac{49}{16}\implies$$ Solve the quadratic to find the points (it doesn't look like a nice answer).
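A numeric check of the algebra (mine, not part of the answer): the quadratic above simplifies to $2x^2+\frac32x+\frac1{16}=0$, and its roots, paired with $y=x+\frac34$, should lie on both circles.

```python
import math

# Solve 2x^2 + (3/2)x + 1/16 = 0 and verify the intersection points.
A, B, C = 2.0, 1.5, 1.0 / 16.0
disc = B * B - 4 * A * C
roots = [(-B + s * math.sqrt(disc)) / (2 * A) for s in (1, -1)]

points = [(x, x + 0.75) for x in roots]
for x, y in points:
    assert abs((x - 1) ** 2 + (y + 1) ** 2 - 4) < 1e-9   # first circle
    assert abs((x + 1) ** 2 + (y - 1) ** 2 - 1) < 1e-9   # second circle
```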
How to calculate an area enclosed by two parametric curves?
$\textbf{Hint:}$ Use the relations $$ y^{2/3}+x^{2/3}=1 \qquad \text{for the astroid} \implies y_1(x)=\cdots $$ and $$ y^2+x^2 =1 \qquad \text{for the circle} \implies y_2(x)=\cdots $$ So you can integrate directly over the first quadrant without worrying much about $dx$. If you want to do it by integrating over $t$, you should do the curves separately to avoid confusion. That is $$ A = \Big|\int_0^1 y_1 dx - \int_0^1 y_2 dx\Big| = \Big|\int_{t_0}^{t_1} \sin^3 t \, d (\cos^3 t) - \int_{s_0}^{s_1} \sin s \, d(\cos s) \Big| $$ for the area in the first quadrant.
Find a series convergent $\sum a_n$ such that $\sum\sqrt{a_n/n}$ diverges.
Try $a_n=\frac{1}{n\log^2 n}$.
determine the convergence region of a complex series
HINT Rewrite first $\sum_{n=1}^\infty\frac{1}{n^2}\exp(\frac{nz}{z-2})$ as $\sum_{n=1}^\infty\frac{1}{n^2}\exp(\frac{z}{z-2})^n$. So you face an expression which looks like $\sum_{n=1}^\infty\frac{x^n}{n^2}$ with $x =\exp(\frac{z}{z-2})$. I am sure that you can take it from here. By the way, the summation is just $\text{Li}_2\left(e^{\frac{z}{z-2}}\right)$.
Finding the unbiased estimator of variance
Hint: The (raw, biased) sample variance of $n$ measurements $x_i$ can be obtained by the formula $$\sigma^2=\left(\frac1n\sum x^2\right)-\left(\frac1n\sum x\right)^2$$
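A small numeric illustration (my own, with made-up data) that the hinted formula agrees with the usual population variance:

```python
import statistics

# Biased (population) variance = mean of squares minus square of the mean.
xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(xs)
var_formula = sum(x * x for x in xs) / n - (sum(xs) / n) ** 2
assert abs(var_formula - statistics.pvariance(xs)) < 1e-12
```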
Trouble finding the upper bound for a certain sum.
Note that since $a_n>0$ for any $n$ we have $$ a_{n+1} = a_n(1-\sqrt{a_n})\frac{1+\sqrt{a_n}}{1+\sqrt{a_n}} = \frac{a_n(1-a_n)}{1+\sqrt{a_n}} < a_n(1-a_n) = a_n - a_{n}^{2}. $$ Therefore we get $$ a_1 > a_2 + a_{1}^{2} > \ldots> a_n + a_{n}^2 + \ldots + a_{1}^{2} > a_{n}^2 + \ldots + a_{1}^{2}. $$
Multivariable Calculus: Surface Area
I don't see what you did wrong here... To me, to solve these types of problems you have to think geometrically--there isn't going to be some way to do it just from a knowledge of multivariable calculus. First, how to find the total surface area of the sphere--that will help. You need to break the sphere up into circles stacked on top of each other, then find the $dA$: $$ dA = 2\pi r_{\phi} h $$ $h$ is easy to find: it's just $r\,d\phi$. $r_\phi$ is the radius at the given azimuth, $r_\phi = r\sin(\phi)$, which gives: $$ dA = 2\pi r^2\sin(\phi)d\phi\\ A = \int dA = 2\pi r^2\left.\int_{0}^{\pi}\sin(\phi)d\phi = -2\pi r^2\cos(\phi)\right|_0^\pi = 4\pi r^2 $$ So the correct integral should be: $$ A = 2\pi r^2\left.\int_{0}^{\phi_0} \sin(\phi)d\phi = -2\pi r^2 \cos(\phi)\right|_{0}^{\phi_0} = 2\pi r^2\left(1 - \cos(\phi_0)\right) $$ In this case, $\phi_0$ satisfies $z = r\cos(\phi_0) = 2\cos(\phi_0) = 1 \rightarrow \cos(\phi_0) = \frac{1}{2}$ and thus: $$ A = 2\pi 2^2\left(1 - \frac{1}{2}\right) = 4\pi $$ So I agree with your answer...
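A numeric sanity check (mine, not from the answer): integrating $dA = 2\pi r^2\sin\phi\,d\phi$ with a Riemann sum should reproduce the closed form $2\pi r^2(1-\cos\phi_0)$, which here equals $4\pi$.

```python
import math

# Midpoint Riemann sum of the spherical-cap area integral for r = 2, z = 1.
r, phi0 = 2.0, math.acos(0.5)  # z = r*cos(phi0) = 1  =>  cos(phi0) = 1/2
N = 100_000
dphi = phi0 / N
riemann = sum(2 * math.pi * r ** 2 * math.sin((i + 0.5) * dphi) * dphi
              for i in range(N))
closed_form = 2 * math.pi * r ** 2 * (1 - math.cos(phi0))
assert abs(riemann - closed_form) < 1e-6
assert abs(closed_form - 4 * math.pi) < 1e-12
```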
Verification and Wording
Yes, you are told that the length of the address is normally distributed, that $X \sim \text{Normal}(\mu = 51.75, \sigma = 14.37)$. From this you can conclude that the length of the next randomly selected address follows $X$, and since $X$ is a normally distributed random variable, you can standardize it to $Z \sim \text{Normal}(\mu = 0, \sigma^2 = 1)$ and calculate the probability.
Computing $\int_{\gamma}e^zdz$, where $\gamma$ is a particular semicircle
Hint Consider the integral of $e^z$ around the closed contour given by concatenating $\gamma$ with the line segment from $-3$ to $3$ (oriented rightward), and apply Cauchy's Integral Theorem.
Positive scalar curvature in dimension 4
$1.$ Lichnerowicz proved that if $M$ is a closed spin manifold which admits a positive scalar curvature metric, then $\hat{A}(M) = 0$. In dimensions $4k$, $\alpha(M) = 2\hat{A}(M)$, so it follows that $\alpha(M) = 0$. In fact, Hitchin proved that $\alpha(M) = 0$, regardless of dimension. $2.$ In dimension four, $\hat{A}(M) = -\frac{1}{24}p_1(M) = -\frac{1}{8}\tau(M)$, so $\alpha(M) = 0$ if and only if $\tau(M) = 0$. If $M$ is a simply connected, closed, spin four-manifold with signature zero, then by Freedman's classification, there is an integer $k \geq 0$ such that $M$ is homeomorphic to $k(S^2\times S^2)$, the connected sum of $k$ copies of $S^2\times S^2$; note, the connected sum of zero copies of $S^2\times S^2$ is defined to be $S^4$. If $M$ is diffeomorphic to $k(S^2\times S^2)$, then $M$ admits positive scalar curvature metrics. Said another way, the topological manifolds $k(S^2\times S^2)$ all admit positive scalar curvature for their standard smooth structure. A closed four-manifold can potentially admit a countably infinite number of smooth structures, and the existence of a positive scalar curvature metric depends on which smooth structure you choose. So it is possible that for some choice of $k$, some smooth structure on $k(S^2\times S^2)$ does not admit a positive scalar curvature metric. Such examples exist. In Hirzebruch Surfaces: Degenerations, Related Braid Monodromy, Galois Covers (MR 1720873), Teicher constructs examples of simply connected general-type complex surfaces which are spin and have signature zero; see Theorem $5.8$. These are homeomorphic to $k(S^2\times S^2)$ for some $k$, but general type surfaces do not admit positive scalar curvature metrics. $3.$ In dimension four, unlike in other dimensions, there are gauge theoretic obstructions to the existence of positive scalar curvature metrics.
In particular, on a closed smooth four-manifold with $b^+ \geq 2$, the existence of a positive scalar curvature metric implies that all of its Seiberg-Witten invariants vanish. This is a necessary condition, but it is not sufficient. Note that these invariants are usually too complicated to calculate, except in ideal situations such as when the four-manifold in question is the underlying four-manifold of a Kähler surface.
Arbitrary Fundamental Group and Surfaces
A presentation of a group $G$ consists of generators and relations. The usual construction actually goes like this (the third step is what you're missing): Take a presentation of $G$ with generators $\{g_\alpha\}_{\alpha \in A} \subset G$ (it's not necessary to wedge a circle for every $g \in G$) and relations $\{r_\beta\}_{\beta \in B}$; For every generator $g_\alpha$, take a circle $C_\alpha \simeq S^1$, and wedge all these to get $X'$. Then $\pi_1(X') = F_A$ is the free group on the indexing set $A$; For every relation $r_\beta \in F_A$, choose a loop $\gamma_\beta \in [S^1, X']$ whose homotopy class represents $r_\beta \in \pi_1(X')$. Attach a $2$-disk $D^2$ to $X'$ along $\gamma_\beta$ (i.e. consider $X' \cup_{\gamma_\beta} D^2$). Then each such disk "kills" the relation $r_\beta$. Let $X$ be the space where you've attached a disk for each $\gamma_\beta$. Then in the end, the fundamental group of $X$ will be $F_A / \langle\langle r_\beta : \beta \in B \rangle\rangle \cong G$ (the quotient by the normal closure of the relations) because we had chosen a presentation of $G$ at the beginning. This is Corollary 1.28 in Hatcher's book Algebraic Topology, if you'd like a reference with some examples too.
Example of non-reversible folding of a thin chamber complex.
Start with a regular triangulation of the plane, say the one by equilateral triangles. Note that this is a thin chamber complex. Moreover, you can reflect through any of the lines in the triangulation. Now take two vertices which are of distance at least three apart and glue them together. The result is still a thin chamber complex. (I needed the vertices to be sufficiently far apart for the result to still be a simplicial complex.) Now fix a line which has both identified vertices to the same side of it (call this half-space with two points identified $\Phi$). Then one can fold across the line in one direction, toward $\Phi$, by first identifying two vertices on one side and then folding. You can't fold in the other direction because the fold was not a bijection on vertices. (To go the other way, you'd have to disidentify two vertices, which is not even going to give you a function.) This fold works fine on chambers and does everything you want it to on codimension 1 faces. It messes up at codimension 2.
Number of submodules of a semisimple module
I know that you did this exam today so you might not care, but just in case you do: as in question 4 on your second representation theory sheet, if we have an isomorphism $\phi : S_i \to S_j$ then we have a homomorphism given by $x \mapsto x + \lambda \phi(x)$, where $\lambda$ is an element of our field. All these images are isomorphic to $S_i$ but distinct for different $\lambda$ in the field.
consequence of $E|X| < \infty$ on conditional expectation of $X$ on sets $A$ of small size.
You may need to double-check the notation. $E(X|I_{A})$ is not a conditional expectation; it should be the usual expectation $E(XI_{A})=\int XI_{A}\,dP$. The results can be proved immediately by the dominated convergence theorem. Note that the results continue to hold even if the measure under consideration is not a finite measure. Proof: In the following proof, we do not need the fact that $P(\Omega)<\infty$. That is, the same argument works for a general measure space. For each $n$, let $A_{n}=\{\omega\in\Omega\mid|X(\omega)|\leq n\}$. Define $X_{n}=XI_{A_{n}}$. Obviously, $X_{n}\rightarrow X$ pointwise. Moreover, $|X_{n}|\leq|X|$. By the Dominated Convergence Theorem, we have $\int|X_{n}-X|\,dP\rightarrow0$. Let $\varepsilon>0$ be given. Fix an $n$ such that $\int|X_{n}-X|\,dP<\varepsilon/2$. Define $\delta=\frac{\varepsilon}{2n}>0$. For any $B\in\mathcal{F}$ with $P(B)<\delta$, we have \begin{eqnarray*} \int_{B}|X|\,dP & \leq & \int_{B}|X-X_{n}|\,dP+\int_{B}|X_{n}|\,dP\\ & \leq & \int|X-X_{n}|\,dP+nP(B)\\ & < & \frac{\varepsilon}{2}+n\delta\\ & = & \varepsilon \end{eqnarray*}
Differential equation- need help!
This equation is separable. Rewrite as $$y\sqrt{1+y^2}\,dy=-x\sqrt{1+x^2}\,dx,$$ integrate both sides, and solve for $y$. Don't forget that you'll need a constant of integration on one side.
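Carrying out the integration (a worked step consistent with the answer, not quoted from the original):
$$\int y\sqrt{1+y^2}\,dy=\tfrac13(1+y^2)^{3/2},\qquad -\int x\sqrt{1+x^2}\,dx=-\tfrac13(1+x^2)^{3/2}+C,$$
so the solutions satisfy $(1+x^2)^{3/2}+(1+y^2)^{3/2}=C'$ with $C'$ a constant, which can then be solved for $y$.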
How to use an ML estimate to show the solution to an integral
The four poles of $\frac{z^2+4z+7}{(z^2+4)(z^2+2z+2)}$ inside $|z-2|<5$ are at $z=\pm 2i$ and $z=-1\pm i$. The residues are $$\begin{align} &-\frac{7}{20}-i\frac{13}{40} \cdots \text{at } z=2i\\ &-\frac{7}{20}+i\frac{13}{40} \cdots \text{at } z=-2i\\ \end{align}$$ and $$\begin{align} &\frac{7}{20}-i\frac{1}{5} \cdots \text{at } z=-1+i\\ &\frac{7}{20}+i\frac{1}{5} \cdots \text{at } z=-1-i \end{align}$$ Inasmuch as these add to zero, the residue theorem guarantees that $$\oint_C \frac{z^2+4z+7}{(z^2+4)(z^2+2z+2)} dz=0$$ where $C$ is defined by $|z-2|=5$.
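A numeric double-check (mine, not from the answer) using the simple-pole formula $\operatorname{res}_{z_0} f = \mathrm{num}(z_0)/\mathrm{den}'(z_0)$:

```python
# Residues of (z^2+4z+7)/((z^2+4)(z^2+2z+2)) at its four simple poles.
def num(z):
    return z * z + 4 * z + 7

def dden(z):
    # derivative of (z^2+4)(z^2+2z+2), by the product rule
    return 2 * z * (z * z + 2 * z + 2) + (z * z + 4) * (2 * z + 2)

poles = [2j, -2j, -1 + 1j, -1 - 1j]
residues = [num(p) / dden(p) for p in poles]
assert abs(residues[0] - (-7 / 20 - 13j / 40)) < 1e-12
assert abs(sum(residues)) < 1e-12  # the residues cancel, so the integral is 0
```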
Finding coordinate of vertex of triangle given 2 other vertices and angle (Coordinate Geometry)
For the point $$C(x, y)$$ you have two conditions: $$(x+1)^2+y^2=(x-7)^2+(y-2)^2$$ and $$y=-4x+13$$ Can you solve it now?
Universal Property of free objects
This is not a definition, it is a property of free objects. Every object surjecting onto a free object satisfies this too, and these do not have to be free. (Consider the group $S_n \times \mathbb{Z}$ for instance.) I advise you to read any text / book on universal algebra. There, you can find concise definitions and constructions of free objects.
Find the sign of $\int_{0}^{2 \pi}\frac{\sin x}{x} dx$
$$ \begin{align*} \int_0^{2\pi}\frac{\sin x}{x}\,dx&=\int_0^{\pi}\frac{\sin x}{x}\,dx+\int_\pi^{2\pi}\frac{\sin x}{x}\,dx\\ &=\int_0^{\pi}\frac{\sin x}{x}\,dx+\int_0^{\pi}\frac{\sin(x+\pi)}{x+\pi}\,dx\\ &=\int_0^{\pi}\Bigl(\frac{1}{x}-\frac{1}{x+\pi}\Bigr)\sin x\,dx\\ &=\pi\int_0^{\pi}\frac{\sin x}{x(x+\pi)}\,dx\\ &>0 \end{align*} $$
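A numeric confirmation (my own, not part of the answer) that the two sides of the derivation agree and are positive, using a simple midpoint rule:

```python
import math

def midpoint(f, a, b, n=200_000):
    # Midpoint-rule quadrature; never evaluates at the endpoints,
    # so the removable singularity of sin(x)/x at 0 causes no trouble.
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

lhs = midpoint(lambda x: math.sin(x) / x, 0.0, 2 * math.pi)
rhs = math.pi * midpoint(lambda x: math.sin(x) / (x * (x + math.pi)), 0.0, math.pi)
assert lhs > 0 and rhs > 0
assert abs(lhs - rhs) < 1e-6
```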
A question about Brown Numbers $(m, n) : n! + 1 = m^2$
$m = 2r +1$ is enough to generate all odd numbers, no need for the $\pm$. $$n! = m^2 - 1 = (m+1)(m-1) = (2r + 2)(2r) = 4r(r+1)$$ The problem with your argument is that $4r(r+1) = m^2 - 1$, but you're trying to argue that $m^2$ is divisible by 4, I believe because you've forgotten the $-1$ and continued with $4r(r+1) = m^2$.
Modulus calculation for big numbers
Using the Chinese Remainder Theorem in addition to other bits. Observe that your modulus factors like $m=2^8\cdot3\cdot 5^9$. Your number is very obviously divisible by $2^8$, so we can forget about that factor until the end. Modulo $3$? The number $2^{2^{\text{ZILLION}}}$ is clearly a power of $4$, so its remainder modulo $3$ is equal to $1$. Modulo $5^9$? Because $2$ is coprime with $5^9$ we can use the Euler totient function $\phi$. We have $\phi(5^9)=(5-1)5^8=4\cdot5^8.$ Call this number $K$. We know from elementary number theory that $2^K\equiv1\pmod{5^9}$. Consequently also $2^N\equiv 2^n\pmod{5^9}$ whenever $N\equiv n\pmod{K}$. Therefore we want to calculate the remainder of $M=2^{100000000}$ modulo $K$. Let's repeat the steps. $M$ is clearly divisible by $4$, so we concentrate on the factor $5^8$. Euler's totient gives $\phi(5^8)=4\cdot5^7$. Clearly $100000000=10^8=2^8\cdot5^8$ is divisible by $4\cdot5^7$. This implies that $M\equiv 2^0=1\pmod{5^8}$. Now we begin to use the Chinese Remainder Theorem. We know that $M\equiv 0\pmod 4$ and $M\equiv 1\pmod {5^8}$. The CRT says that these congruences uniquely determine $M$ modulo $K=4\cdot5^8$. As $5^8\equiv1\pmod4$, we see that $3\cdot5^8+1$ is divisible by four. As it is clearly also congruent to $1\pmod{5^8}$ we can conclude that $M\equiv 3\cdot5^8+1=1171876\pmod K$. This, in turn, means that $$ 2^M\equiv 2^{1171876}\pmod{5^9}. $$ This exponent, finally, is small enough for square-and-multiply. I cheat and use Mathematica instead. The answer is that $$ 2^{1171876}\equiv1392761\pmod{5^9}. $$ Now we know everything we need about the remainders: $$ \begin{aligned} 2^M&\equiv0\pmod{2^8},\\ 2^M&\equiv1\pmod3,\\ 2^M&\equiv 1392761\pmod{5^9}. \end{aligned} $$ All that remains is to put these bits together by yet another application of CRT. Have you implemented those routines? Edit: I did this run of CRT with Mathematica.
Barring an earlier error (in the above calculations) the answer is that $$ X=2^{2^{100000000}}\equiv 741627136 \pmod{1500000000}. $$ The observations leading to this are: The integer $256$ has remainder $0\pmod {256}$ and $256\equiv1\pmod3$. Therefore CRT says that $X\equiv256\pmod{3\cdot256}$. Here $3\cdot256=768$. The extended Euclidean algorithm tells us that $-928243\cdot768+365\cdot5^9=1$. Consequently the integer $A=365\cdot5^9$ has remainder $0$ modulo $5^9$ and remainder $1$ modulo $768$. Similarly the integer $B=-928243\cdot768$ is divisible by $768$ and has remainder $1$ modulo $5^9$. Therefore $$X\equiv 256\,A+1392761\,B\pmod{1500000000}.$$ Calculating the remainder of that `small' number gives the answer.
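The whole chain can be re-checked in a few lines of Python (my re-computation of the answer's Mathematica steps; three-argument `pow` does modular exponentiation, and `pow(n, -1, m)` gives a modular inverse in Python 3.8+):

```python
K = 4 * 5 ** 8
assert pow(2, 100_000_000, K) == 1_171_876        # M = 2^(10^8) mod K

r9 = pow(2, 1_171_876, 5 ** 9)                    # 2^M mod 5^9

# CRT: combine X = 0 (mod 256), X = 1 (mod 3), X = r9 (mod 5^9).
congruences = ((0, 256), (1, 3), (r9, 5 ** 9))
total = 256 * 3 * 5 ** 9                          # 1_500_000_000
x = 0
for r, m in congruences:
    n = total // m
    x += r * n * pow(n, -1, m)
answer = x % total
assert answer == 741_627_136
```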
Proving the closure and boundary of a specific open set
The closure is the set together with its limit points. So take a point in the limit set. By definition, any neighborhood of that point contains a point of $S$, so for any epsilon, via the triangle inequality, you get that the distance from that point is less than or equal to $1+$ epsilon. Since this works for every epsilon, the distance is less than or equal to $1$. Now take a point in the closed disk, and show it's either in $S$ or a limit point (easy). The boundary is the closure minus the interior; the interior is the open ball, the closure is the closed disk, so the boundary is the circle.
Help to simplify $\arctan\left(\frac{\sqrt{1 + x^2} -1}{x}\right)$
It might be helpful to take the inverse of this function by solving for $x$. We have the following: $$y=\arctan\left(\frac{\sqrt{1+x^2}-1}{x}\right)$$ Take the $\tan$ of both sides: $$\tan y=\frac{\sqrt{1+x^2}-1}{x}$$ Multiply both sides by $x$ and add $1$: $$x\tan y+1=\sqrt{1+x^2}$$ Square both sides: $$x^2\tan^2 y+2x\tan y+1=1+x^2$$ Subtract $1+x^2$ from both sides: $$x^2(\tan^2 y-1)+2x\tan y=0$$ Now, clearly, the original function is undefined for $x=0$, so we can conclude that $x \neq 0$. Therefore, divide by $x$: $$x(\tan^2 y-1)+2\tan y=0$$ Subtract $2\tan y$ from both sides and divide by $\tan^2 y-1$: $$x=-\frac{2\tan y}{\tan^2 y-1}$$ Now, this looks very similar to the $\tan(2y)$ identity. However, the denominator is flipped and there is a $-$ sign out front. Therefore, distribute the negative across the denominator: $$x=\frac{2\tan y}{1-\tan^2 y}=\tan 2y$$ Now, we can solve this equation for $y$ to get a much simpler form of our original equation. Take the $\arctan$ of both sides: $$\arctan x=2y$$ Divide both sides by $2$ and switch the sides of the equation: $$y=\frac{\arctan x}{2}$$
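A numeric spot-check of the simplification (mine, not part of the answer):

```python
import math

# arctan((sqrt(1+x^2)-1)/x) should equal arctan(x)/2 for every x != 0.
for x in (-10.0, -1.0, -0.3, 0.5, 1.0, 7.0):
    lhs = math.atan((math.sqrt(1 + x * x) - 1) / x)
    assert abs(lhs - math.atan(x) / 2) < 1e-12
```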
Are there more rectangles than squares?
There are the same number of rectangles as squares for the reason you mention, even though the set of squares is a proper subset of the set of rectangles. It's no stranger than the fact that there are the same number of even integers as there are integers. Every infinite set can be placed in one-to-one correspondence with a proper subset of itself.
Uncountably many pre-measure extensions
This statement is false in the given setting; see my answer here. Unfortunately I can't flag your question as a duplicate because the questioner there never accepted my answer…
existence of a complex sequence
NOT AN ANSWER - JUST SOME COMPUTATIONS THAT WERE TOO LONG FOR A COMMENT. Let's compute the inverse recurrence relation using the general solution to a quadratic equation. $$z_{n+1}=z_n^2+z_n$$ So: $$z_{n-1}= -\frac{1}{2} \pm \frac{\sqrt{1 + 4z_n}}{2}$$ Since we know the sequence converges to the RHS for any real $-1 < z_n < 1$, we could try to solve this problem by just showing the LHS convergence starting at some $-1 < z_n < 1$ and following the above recurrence relation, which decrements $n$. Now, if we have a sequence converging to the LHS, then $z_n$ will tend towards $0$, so $\frac{\sqrt{1 + 4z_n}}{2}$ will tend towards $\frac12$, and beyond a certain point, as we go left, we will no longer be able to choose the minus side of the plus-or-minus sign in the above recurrence relation. In other words, for any arbitrarily small $\epsilon$, we'll need to reach a point $m$ such that $|z_m| < \epsilon$ and for all $i < m$: $$z_{i}= -\frac{1}{2} + \frac{\sqrt{1 + 4z_{i+1}}}{2}$$ We'd also be forced to have $|z_i| < \epsilon$ for all such $i$, all the way to $-\infty$.
Values of $a$ for which $(a+4)x^2-2ax+2a-6 <0$ for all $x \in \mathbb R$
The quadratic form $(a+4)x^2-2ax+2a-6=\begin{bmatrix} x & 1\end{bmatrix}\begin{bmatrix}a+4& -a \\ -a & 2a-6\end{bmatrix}\begin{bmatrix}x\\ 1\end{bmatrix}$. So if you want the quadratic form to be negative for all $x$, you need $\begin{bmatrix}a+4& -a \\ -a & 2a-6\end{bmatrix}$ to be negative definite. By Sylvester's criterion, this means $a+4<0$ and $(a+4)(2a-6)-a^2=a^2+2a-24>0$, i.e. $(a+6)(a-4)>0$, together with $a<-4$. Together these conditions mean $a\in(-\infty,-6)$.
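A quick spot-check of the conclusion (my own, not in the answer): the form should be negative for every $x$ when $a<-6$ and fail for, say, $a=-5$:

```python
# (a+4)x^2 - 2ax + (2a-6) < 0 for all x  iff  a+4 < 0 and the
# discriminant 4a^2 - 4(a+4)(2a-6) is negative.
def always_negative(a):
    return a + 4 < 0 and 4 * a * a - 4 * (a + 4) * (2 * a - 6) < 0

assert always_negative(-7)        # a < -6: works
assert not always_negative(-5)    # -6 < a < -4: fails
```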
Finding value of k for which fg(x)=k has equal roots?
Your equation could be rewritten as $$10x^2-20x+(k-3)=0.$$ Recall that the roots of a quadratic equation $ax^2+bx+c=0$ are given by $$\frac{-b\pm\sqrt{b^2-4ac}}{2a}.$$ So the two roots are equal when $b^2-4ac=0$. Can you apply this to your problem?
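Applied to the equation above (the numeric value of $k$ is my computation, not quoted from the original):

```python
# For 10x^2 - 20x + (k - 3) = 0 to have equal roots: b^2 - 4ac = 0,
# i.e. (-20)^2 - 4*10*(k - 3) = 0, which gives k = 13.
a, b = 10, -20
k = b * b / (4 * a) + 3
assert k == 13
disc = b * b - 4 * a * (k - 3)
assert disc == 0
```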
two variable functions with positive laplacian
This particular function can be written as $f(x,y)=(x^3-1/2)+(y-1/2)$ where both summands are 1-variable functions that are convex and strictly positive on $(1,2)$. It is not hard to extend such functions to $(0,2)$ (or the entire line $\mathbb R$, if you wish) while keeping both convexity and positivity. (And of course, convexity implies subharmonicity). Here is a concrete extension to $\Omega$, using the Iverson bracket: $$f(x,y)=\left((x^3-1/2)+10(1-x)^3[x<1]\right)+\left((y-1/2)+(1-y)^3[y<1]\right)$$ You can check directly that both expressions in big parentheses are positive on $(0,1)$. Convexity is clear, as is $C^2$ smoothness.
Is there an alternative notation for a volume integral?
Normally one would use $dV$ as the measure indicator, that is $$\int_D f(x)\, dV$$ I even think this is a better notation, as the multiple integral notation suggests a specific order of integration, which is not generally guaranteed to give the same value (unless the requirements of Fubini's theorem are fulfilled).
What is the expected value of the number of random letters needed to produce my string?
Here's a concise recursive version of the main theorem proved in Nielsen's article. Suppose $X=X_1X_2X_3\ldots$ is an infinite sequence of i.i.d. random letters from a finite alphabet $\Sigma$. Let $T_w$ denote the first-occurrence time of the word $w$ in the sequence $X$; i.e., $T_w$ is the least $t$ such that $w$ occurs as a subword in $X_1\ldots X_t$. Theorem. For any nonempty word $w\in\Sigma^*$, $$\mathbb{E}T_w=|\Sigma|^{|w|}+\mathbb{E}T_{w'}$$ where $w'$ denotes the longest proper prefix of $w$ that's also a suffix of $w$, and by convention $\mathbb{E}T_\text{empty string} = 0$. NB: Here the first-occurrence time of a word is taken to be the index of the last letter in that occurrence, whereas Nielsen takes it to be the index of the first letter in that occurrence. Example (supposing letters $a,b\in\Sigma$) $$\begin{align} \mathbb{E}T_{abaaba} &amp;=|\Sigma|^6 + \mathbb{E}T_{aba}\\ &amp;=|\Sigma|^6 + |\Sigma|^3 + \mathbb{E}T_{a}\\ &amp;=|\Sigma|^6 + |\Sigma|^3 + |\Sigma|^1 + \mathbb{E}T_\text{empty string}\\ &amp;= |\Sigma| \times(100101)_{|\Sigma|} \end{align} $$ E.g., taking $\Sigma=\{a,b\}$, this gives $\mathbb{E}T_{abaaba}=2^6+2^3+2^1=\color{blue}{74}$. Consistent with this, I ran $10^6$ simulations of $T_{abaaba}$ (using Sage) and found a sample mean equal to $\color{blue}{74}.0$ with standard error $0.1$.
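The recursion is easy to implement; here is a short Python version (mine, not from Nielsen's article) together with the worked example:

```python
def longest_border(w):
    """Longest proper prefix of w that is also a suffix of w."""
    for k in range(len(w) - 1, 0, -1):
        if w[:k] == w[-k:]:
            return w[:k]
    return ""

def expected_time(w, alphabet_size):
    # E[T_w] = |Sigma|^|w| + E[T_w'], with w' the longest border of w.
    if not w:
        return 0
    return alphabet_size ** len(w) + expected_time(longest_border(w), alphabet_size)

# Borders of "abaaba" are "aba", then "a", then "": 2^6 + 2^3 + 2^1 = 74.
assert expected_time("abaaba", 2) == 74
```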
Can ratio of smooth numbers approach 1?
Yes, if $q\ge 3$. This is because $\log_2(3)$ is irrational, so the fractional parts of $\log_2(3^n)$ lie densely in $[0,1)$. In particular they can become arbitrarily small.
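A concrete illustration (my own, not from the answer) with $q=3$: choosing $a\approx b\log_2 3$ makes the ratio $2^a/3^b$ close to $1$, and density of the fractional parts lets it get arbitrarily close:

```python
import math

# For each b, pick the exponent a making 2^a closest to 3^b in log scale.
best = min(abs(2 ** round(b * math.log2(3)) / 3 ** b - 1) for b in range(1, 200))
assert best < 0.02   # e.g. b = 12, a = 19: 2^19/3^12 = 524288/531441
```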
Analysis Question? Fixed Point?
The fact that was proven in class is different from what you are trying to prove. By the diagonal, I assume you mean the line $y=x$. Remember, that when you are graphing a function, you are graphing the equation $y=f(x)$. So, if $f(x)=x$, then $y=x$. In other words the point $(x,x)$ is on the graph of $f(x)$ and on the line $y=x$.
Are these permutation groups, defined by asymptotic properties, isomorphic?
(I always find it depressing to see group theory classified as "abstract" algebra in some places.) It's correct that they're pairwise non-isomorphic (namely, the groups $H_r=\{\sigma\in S_\mathbf{N}: c_\sigma(n)=o(n^r)\}$ for $r\in \mathopen]0,1]$, denoted $G_{x^r}$ in the question). The first observation is that they're not conjugate: this is easy (using that for every permutation $f$ of $\mathbf{N}$ there's an infinite subset $I$ such that $f(n)\ge n$ for all $n\in I$). Edit: this non-conjugation claim is unclear at this point. So for the moment this post reduces the problem to showing that they're not conjugate, and in turn this means: for $0<r<s\le 1$, does there exist a permutation $f$ of $\mathbf{N}$ such that $f(N_r)=N_s$, where $N_u$ is the set of subsets $I$ of $\mathbf{N}$ that are $n^u$-negligible: $\#(I\cap [n])=o(n^u)$? Next, we see that an isomorphism between two such groups would be implemented by a conjugation. Namely we have the following lemma (which is probably a very particular case of M. Rubin's theorems): Lemma. Let $X$ be an infinite set; let $U,V$ be subgroups of $S_X$ that contain the group $S_X^\#$ of finitely supported permutations of $X$. Let $f$ be an isomorphism $U\to V$. Then $f$ is the restriction of a conjugation of $S_X$: there exists $\alpha\in S_X$ such that $f(g)=\alpha g\alpha^{-1}$ for all $g\in U$. In particular, $V=\alpha U\alpha^{-1}$. To prove the lemma, the first step is to check that $f(S_X^\#)=S_X^\#$: namely we need to "recognize" $S_X^\#$ within both $U$ and $V$. This is done in "Lemma*" below. The second step is a classical one: $\mathrm{Aut}(S_X^\#)=S_X$ (acting by conjugation). This can be found in the books by Scott or Dixon–Mortimer. So, there exists $\alpha\in S_X$ such that $f(w)=\alpha w\alpha^{-1}$ for all $w\in S_X^\#$.
The third and concluding step: for $g\in U$ and $w\in S_X^\#$, we compute $f(gwg^{-1})$ in two ways: first it equals $f(g)f(w)f(g)^{-1}=f(g)\alpha w\alpha^{-1}f(g)^{-1}$, second it equals $\alpha gwg^{-1}\alpha^{-1}$. Since the centralizer of $S_X^\#$ is trivial, we deduce $\alpha g=f(g)\alpha$ for all $g\in U$, which is the desired form. Lemma$^*$ Let $U$ be a subgroup of $S_X$ containing $S_X^\#$. Let $T_U$ be the set of elements $s$ of order 2 in $U$ such that for every $U$-conjugate $t$ of $s$, either $t$ commutes with $s$ or $(st)^3=1$. Then $T_U$ is the set of transpositions of $X$. In particular, $\langle T_U\rangle=S_X^\#$ is a characteristic subgroup of $U$, and for any two such groups $U_1,U_2$, every isomorphism $U_1\to U_2$ induces an automorphism of $S_X^\sharp$. It is clear that a transposition satisfies this property. Conversely, suppose that $s$ has order 2 and is not a transposition. Say, $s(a)=b$ and $s(c)=d$ with $|\{a,b,c,d\}|=4$. First case, $s$ fixes at least two points. Then there exists a $S_X^\#$-conjugate $t$ of $s$ fixing $a,d$ and exchanging $b,c$. Then $st$ has order 4 on $\{a,b,c,d\}$. Second case, $s$ fixes at most one point. Then there exist $e,f,g,h$ such that $s(e)=f$ and $s(g)=h$, and $|\{a,b,c,d,e,f,g,h\}|=8$. Hence there exists a $S_X^\#$-conjugate $t$ of $s$ exchanging all the pairs $\{b,c\}$, $\{d,e\}$, $\{f,g\}$, $\{h,a\}$. Then $st$ has order 4 on $\{a,b,c,d,e,f,g,h\}$. In both cases, we see that $t$ and $s$ don't commute and $(st)^3\neq 1$.
sum of two closed subspaces of a Banach space need not be closed.
To show that $M+N$ is dense, show that $c_{00}\subset M+N$. Let $(a_n)\in c_{00}$, i.e. there exists $n_0$ such that $a_n=0$ for all $n> 2n_0$. Now we need to write $(a_n)$ as a sum in $M+N$. We want to find complex numbers $x_i,y_i$ such that $$(a_1,a_2,\dots,a_{2n_0},0,0,0,\dots)=$$ $$=(y_1,2y_1,y_3,4y_3,y_5,6y_5,\dots,2n_0y_{2n_0-1},0,0,0,\dots)+$$ $$(0,x_2,0,x_4,0,\dots,0,x_{2n_0-2},0,x_{2n_0},0,0,0\dots)$$ so we want solutions to the system of equations $a_1=y_1$ and $a_2=x_2+2y_1$, $a_3=y_3$ and $a_4=x_4+4y_3$ and so on. Well, this shows exactly how we can write $(a_n)$ as a sum in $M+N$. To be more specific: $$(a_n)=$$ $$=(a_1,2a_1,a_3,4a_3,a_5,6a_5,\dots,a_{2n_0-1},2n_0a_{2n_0-1},0,0,0,\dots)+$$ $$(0,a_2-2a_1,0,a_4-4a_3,0,a_6-6a_5,0,\dots,0,a_{2n_0}-2n_0a_{2n_0-1},0,0,0,\dots)$$ The first sequence in this sum belongs to $M$ and the second belongs to $N$. Since $c_{00}\subset M+N$ as we just showed, we have $\overline{c_{00}}\subset\overline{M+N}$, but $c_{00}$ is dense in $c_0$, so $c_0\subset\overline{M+N}$ and we are done. To show that it is a proper subspace, consider the sequence $a=(\frac{1}{n})_{n=1}^\infty$. If $a\in M+N$, we have $$(\tfrac{1}{n})=(y_1,2y_1,y_3,4y_3,\dots)+(0,x_2,0,x_4,0,\dots)$$ so $y_{2k-1}=\frac{1}{2k-1}$ and $2ky_{2k-1}+x_{2k}=\frac{1}{2k}$, so $x_{2k}=\frac{1}{2k}-\frac{2k}{2k-1}=\frac{-4k^2+2k-1}{4k^2-2k}\longrightarrow-1$, which is impossible since $(x_k)\in c_0$, so $x_{2k}\to 0$.
A Nontrivial Subgroup of a Solvable Group
A simple solvable group must be cyclic of prime order (since it must be abelian, and so cannot have proper [normal] subgroups). But a simple solvable group would not contain nontrivial normal subgroups, so the proposition would be true for such a group by vacuity (the hypotheses are never satisfied). Alternatively, if you allow the whole group to be a "nontrivial normal subgroup", your $H$ can only be $G$ itself, which is already abelian, so you can set $A=H=G$; either way, the proposition is true for such a group. (If $G$ is solvable, then $[G,G]$ is a proper subgroup of $G$; since it is always normal in $G$, if $G$ is also simple, then we must have $[G,G]=\{1\}$, hence $G$ is abelian). Hint for the question. If $H$ is abelian, you are done. If not, then $[H,H]$ is nontrivial; use the fact that $H^{(n)}\subseteq G^{(n)}$, where $G^{(k)}$ is the $k$th term of the derived series of $G$, to show that $H$ is solvable, and use the fact that $H\triangleleft G$ to show $[H,H]\triangleleft G$. Then replace $H$ with $[H,H]$, lather, rinse, and repeat. Different hint. Let $1=G_0\triangleleft G_1\triangleleft\cdots\triangleleft G_s = G$, with $G_i/G_{i+1}$ abelian. By Problem 8, you can pick the $G_i$ normal in $G$. Look at $H_i=H\cap G_i$. Added. Sigh. I didn't notice that Problem 8 assumes $G$ is finite; the result is true, as witnessed by the derived series, but again it's probably not what you want. Added 2. Okay, this should work; it takes some of the ideas of the hint in Problem 8 of the same page, so it should be "reasonable". Let $i$ be the largest index such that $G_i\cap H$ is a proper subgroup of $H$. Then $H\subseteq G_{i+1}$, hence $G_i\cap H\triangleleft H$. Moreover, $H/(H\cap G_i) \cong (HG_i)/G_i \leq G_{i+1}/G_i$, so $H/(H\cap G_i)$ is abelian. As in problem 8, this means that $x^{-1}y^{-1}xy\in H\cap G_i$ for all $x,y\in H$. 
Show that this is also true for all $G$-conjugates of $H\cap G_i$, hence their intersection, which is normal in $G$, contains all $x^{-1}y^{-1}xy$ with $x,y\in H$. If this is trivial, then $H$ is abelian and we are done. If it is not trivial, then this intersection is nontrivial, and normal in $G$. Replace $H$ with this intersection, and note that the largest index $j$ such that the intersection with $G_j$ is proper is strictly smaller than $i$; so you can set up a descent. Lather, rinse, and repeat.
Can product of all pairwise sums be computed faster than the naive method?
Well, a repeated product is best done by recursively splitting the set of terms in half*, computing the products in each half, then multiplying the results, but only if the products become big enough that fast multiplication algorithms become relevant. Of course, this is the same number of arithmetic operations either way, but the balanced operand sizes still reduce the overall computational work. *: "half" is best measured by the total size of the numbers rather than simply by how many there are We could construct the polynomial $$ f(x) = \prod_{s \in S} (x + s) $$ (keep in mind my advice about repeated products when computing this product!) Now, the product you seek to compute is $$ \prod_{s \in S} f(s) $$ The set of values $\{ f(s) \mid s \in S \}$ could be computed by a fast "multipoint evaluation" algorithm. Recall that the resultant $\mathop{\text{Res}}(g(x),h(x))$ of two monic polynomials is the product $$ \prod (\alpha - \beta) $$ where $\alpha$ and $\beta$ range over the roots of $g(x)$ and $h(x)$ respectively. So we could also compute the product by $$ \mathop{\text{Res}}(f(x), f(-x)) $$ I haven't attempted to compute the time complexity of these algorithms. I expect the resultant to be the best of the methods I describe, both in terms of time complexity and number of arithmetic operations. All of these techniques can be adapted directly, I think, to any commutative ring. I know very little about computational algebra in semirings, but I find it very plausible the middle algorithm would only require a little bit of adaptation (although I'd be worried about the existence of fast polynomial multiplication algorithms), and a little bit plausible that there is a variation on resultants that would work.
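A quick sketch (mine, not part of the answer) of the identity underlying the polynomial approach: the product over all ordered pairs $(s,t)$ of $(s+t)$ equals $\prod_{s\in S} f(s)$ for $f(x)=\prod_{t\in S}(x+t)$. The function names are my own, and this only illustrates the identity; the asymptotic speedup would additionally need fast polynomial multiplication and multipoint evaluation.

```python
from math import prod

def pairwise_sum_product_naive(S):
    # direct double loop over ordered pairs (s, t), diagonal included
    return prod(s + t for s in S for t in S)

def pairwise_sum_product_poly(S):
    # evaluate f(s) = prod_t (s + t) at each s; a fast multipoint
    # evaluation of f would replace this inner loop asymptotically
    def f(x):
        return prod(x + t for t in S)
    return prod(f(s) for s in S)

S = [1, 3, 4, 9]
assert pairwise_sum_product_naive(S) == pairwise_sum_product_poly(S)
```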
What is the algebra generated by a set of functions?
The subalgebra generated by $S$ is the smallest subalgebra that contains all elements of $S$. It can also be described as the intersection of all such subalgebras. Or as the set of all finite expressions formed from elements of $S$ under which the algebra must be closed, that is polynomial expressions in the generators. With two functions defines on different metric spaces $E,F$, you may consider both as functions on $E\times F$.
correlation between $\sum_{i=1}^{98}X_i$ and $\sum_{i=3}^{100}X_i$
(B) is correct and here is one formal demonstration. I'll write $C$ below for covariance and $V$ for variance. Also, let $S=\sum_{i=1}^{98}X_i$ and $T=\sum_{i=3}^{100} X_i$. Due to independence, we have $$ V(S)=\sum_{i=1}^{98}V(X_i)=98,\quad V(T)=\sum_{i=3}^{100}V(X_i)=98. $$ Moreover, using $C(X_i,X_j)=0$ for $i\neq j$, we have $$ C(S,T)=C\left(\sum_{i=1}^{98}X_i,\sum_{j=3}^{100} X_j\right)=\sum_{i=1}^{98}\sum_{j=3}^{100}C(X_i,X_j)=\sum_{i=3}^{98}C(X_i,X_i)=96. $$ It follows that $\frac{C(S,T)}{\sqrt{V(S)V(T)}}=\frac{96}{98}\cdot$
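A small sanity check (my own, not part of the answer): with unit variances and pairwise independence, the covariance simply counts the indices the two sums share.

```python
# S = X_1 + ... + X_98 and T = X_3 + ... + X_100; each shared index
# contributes Var(X_i) = 1 to Cov(S, T), everything else contributes 0.
S_idx = set(range(1, 99))
T_idx = set(range(3, 101))
cov = len(S_idx & T_idx)
var_S, var_T = len(S_idx), len(T_idx)
corr = cov / (var_S * var_T) ** 0.5
assert (cov, var_S, var_T) == (96, 98, 98)
assert abs(corr - 96 / 98) < 1e-12
```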
Generating function counting quaternary sequence.
You want sequences, so you have to use exponential generating functions (the positions label your digits). Assuming there are "infinitely many" of each digit available, to simplify, you have: Even number of 1 (or 2): $1 + \frac{z^2}{2!} + \frac{z^4}{4!} + \dotsb = \frac{\mathrm{e}^z + \mathrm{e}^{-z}}{2}$ At least one 3: $\frac{z}{1!} + \frac{z^2}{2!} + \frac{z^3}{3!} + \dotsb = \mathrm{e}^z - 1$ Any number of 4: $1 + \frac{z}{1!} + \frac{z^2}{2!} + \frac{z^3}{3!} + \dotsb = \mathrm{e}^z$ Multiply all, extract $n!$ times the coefficient of $z^n$: $\begin{align} n! [z^n] \frac{\mathrm{e}^z + \mathrm{e}^{-z}}{2} \cdot \frac{\mathrm{e}^z + \mathrm{e}^{-z}}{2} \cdot (\mathrm{e}^z - 1) \cdot \mathrm{e^z} &= n! [z^n] \frac{1}{4} \left( \mathrm{e}^{4 z} - \mathrm{e}^{3 z} + 2 \mathrm{e}^{2 z} - 2 \mathrm{e}^{z} + 1 - \mathrm{e}^{-z} \right) \\ &= \frac{1}{4} \left( 4^n - 3^n + 2 \cdot 2^n - 2 + [n = 0] - (-1)^n \right) \end{align}$ Here $[P]$ is Iverson's bracket: $1$ if the proposition $P$ is true, $0$ otherwise.
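An independent sanity check (mine): the even-count series is $\cosh z=(\mathrm{e}^z+\mathrm{e}^{-z})/2$, which yields the closed form $\frac14\left(4^n-3^n+2\cdot2^n-2+[n=0]-(-1)^n\right)$; brute force over small $n$ agrees.

```python
from itertools import product

def count_direct(n):
    # sequences over {1,2,3,4}: even # of 1s, even # of 2s, at least one 3
    return sum(
        1 for w in product("1234", repeat=n)
        if w.count("1") % 2 == 0 and w.count("2") % 2 == 0 and "3" in w
    )

def count_formula(n):
    # n! [z^n] cosh(z)^2 (e^z - 1) e^z, expanded into exponentials
    return (4**n - 3**n + 2 * 2**n - 2 + (n == 0) - (-1)**n) // 4

for n in range(0, 8):
    assert count_direct(n) == count_formula(n)
```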
Prove that $(an+bm)!$ is divisible by $m!.n!$
I am not sure about the case $a,b,m,n\in\mathbb{Z}$, but yes, we can prove it when $a,b,m,n\in\mathbb{Z}^+$. Note that $\binom{m+n}{n}=\frac{(m+n)!}{m!\,n!}$ is an integer and hence $m!n!\mid(m+n)!$. Now, for any $a,b\in\mathbb{Z}^+$, $(m+n)!\mid(am+bn)!$ because $am+bn\geq m+n$ and a factorial divides every larger factorial. Thus, $m!n!\mid(am+bn)!$. Indeed, if you allow $a,b,m,n\in\mathbb{Z}$, then, as pointed out by @Sabhrant, this is not true.
Bounded subset proof
Your approach is correct and completely fine. It's valid to use $\max\{|m|,|y|\}$ and I expect no errors with its usage in your complete proof. The Archimedean property can be used to construct an integer less than $m$. Maybe it will be useful to prove that if $N$ is an integer, $m$ is a real number, and $|m| \leq N$ then $-N < m$. The statement $-|m| \leq m \leq |m|$ might also be useful to prove.
Is a function $f:\mathbb{Z}^2\rightarrow\mathbb{Z}$ injective?
Usually $\mathbb{Z}^2$ denotes the cartesian product of the integers with the integers. For a proof that there is a bijection between $\mathbb{Z}^2$ and $\mathbb{Z}$, see here. We now discuss the case where $Z^2$ denotes the set of perfect squares: The function that sends every integer $k$ to $k^2$ is not injective, since $1^2=(-1)^2$. If you restrict it to non-negative elements then that function is indeed an injection. There is a bijection between them though: just map the positive integers to the even squares and the negative integers to the odd squares. So $f(0)=0$ and $f(n)=(2n)^2$ if $n$ is positive and $f(n)=(-2n-1)^2$ if $n$ is negative.
Using geometry to show that $\int_{0}^{b} \sqrt{1-x^2} dx = \frac{1}{2}b\sqrt{1-b^2}+\frac{1}{2}\sin^{-1}b$
The area of the triangle with sides $b$ and $a=\sqrt{1-b^2}$ forming the right angle is $$ \frac 12b\sqrt{1-b^2}\ . $$ The angle, marked in black, can be extracted from the triangle with sides $a,b,r=1$; it is $\arcsin (b/1)$. So the sector has area $$\frac 12\arcsin b\cdot 1^2\ . $$ Adding the two areas we get the answer.
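The identity is easy to confirm numerically; here is a midpoint-rule check (my own sketch, not part of the answer).

```python
from math import sqrt, asin

def left_side(b, n=200000):
    # midpoint rule for the integral of sqrt(1 - x^2) over [0, b]
    h = b / n
    return sum(sqrt(1 - ((k + 0.5) * h) ** 2) for k in range(n)) * h

# triangle area + sector area from the geometric argument above
for b in (0.25, 0.5, 0.9):
    right_side = 0.5 * b * sqrt(1 - b * b) + 0.5 * asin(b)
    assert abs(left_side(b) - right_side) < 1e-6
```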
Taylor expansion of $f(x(t),y(t))$ around the point $(x_0,y_0)$.
You might want to set $g\left(t\right)=f\left(x\left(t\right),y\left(t\right)\right)$; then $$ g\left(t\right)\approx g\left(t_{0}\right)+g'\left(t_{0}\right)\left(t-t_{0}\right) $$ where $$ g'\left(t_{0}\right)=\left(\partial_{x}f\right)\left(x_{0},y_{0}\right)x'\left(t_{0}\right)+\left(\partial_{y}f\right)\left(x_{0},y_{0}\right)y'\left(t_{0}\right). $$
Why is the matrix positive semidefinite matrix?
First, argue that $E$ is positive-definite. This is because for any non-zero vector $v$, $$v^TEv = (Zv)^T A (Zv) > 0$$ since $A$ is SPD and $Zv \neq 0$ since $Z$ has full rank. Next, argue that $P$ is semidefinite, by the exact same reasoning: for any vector $v$, $$v^TPv = (Z^TAv)^T E^{-1} (Z^TAv) \geq 0$$ since the inverse $E^{-1}$ of a positive-definite matrix $E$ is positive-definite.
Change the double integral into a single integral in polar coordinates
In polar coordinates, you have,$$x^2+y^2\leqslant x\iff r\leqslant\cos\phi\quad\text{and}\quad x^2+y^2\leqslant y\iff r\leqslant\sin\phi.$$So, clearly (assuming that $\phi\in[0,2\pi]$), $\phi\in\left[0,\frac\pi2\right]$; otherwise, one of the numbers $\cos\phi$ or $\sin\phi$ is smaller than $0$. On the other hand, if $\phi\in\left[0,\frac\pi2\right]$, then $r$ can go from $0$ to the smallest of the numbers $\cos\phi$ and $\sin\phi$. So, the largest value that $r$ can take is $\frac{\sqrt2}2$ (that's when $\phi=\frac\pi4$). For each $r\in\left[0,\frac{\sqrt2}2\right]$, $\phi$ can go from $\arcsin r$ to $\arccos r$. To see why, note that if $(x,y)$ is such that $x^2+y^2=y$ and $\sqrt{x^2+y^2}=r$, then$$r\sin\phi=y=x^2+y^2=r^2,$$and therefore $\sin\phi=r(\iff\phi=\arcsin r)$; a similar argument shows that the largest value that $\phi$ can take is $\arccos r$. Therefore, you have\begin{align}\iint_Gf\left(\sqrt{x^2+y^2}\right)\,\mathrm dx\,\mathrm dy&=\int_0^{\sqrt 2/2}\int_{\arcsin r}^{\arccos r}f(r)r\,\mathrm d\phi\,\mathrm dr\\&=\int_0^{\sqrt2/2}f(r)r\bigl(\arccos(r)-\arcsin(r)\bigr)\,\mathrm dr.\end{align}
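As a check (mine, not part of the answer): taking $f\equiv1$, the single integral should give the area of $G$, the lens where the disks $x^2+y^2\le x$ and $x^2+y^2\le y$ overlap; the standard two-circle intersection formula gives $\pi/8-1/4$ for that area.

```python
from math import acos, asin, pi, sqrt

# midpoint rule for the r-integral with f = 1
n = 200000
top = sqrt(2) / 2
h = top / n
total = 0.0
for k in range(n):
    r = (k + 0.5) * h
    total += r * (acos(r) - asin(r))
area = total * h

# lens area of two radius-1/2 disks whose centers are sqrt(2)/2 apart
assert abs(area - (pi / 8 - 0.25)) < 1e-6
```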
Calculate the integral $\int_{-\infty}^\infty\frac{dy}{(1+y^2)(1+[x-y]^2)}$
The proposed formula is based on the approximation $$[t]\approx t-0.5,$$ so it isn't exact. There is another way, based on the exact formula $$[x-y]=-k\quad\text{for}\quad y\in(x+k-1,x+k).$$ Then $$J(x)=\int_{-\infty}^\infty\dfrac{dy}{(y^2+1)([y-x]^2+1)}=\sum_{k=-\infty}^\infty\int_{x+k-1}^{x+k}\dfrac{dy}{(y^2+1)(k^2+1)}$$ $$=\sum_{k=-\infty}^\infty\dfrac{\arctan(x+k)-\arctan(x+k-1)}{k^2+1}$$ $$=\sum_{k=-\infty}^\infty\dfrac{\arctan(x+k)}{k^2+1} -\sum_{k=-\infty}^\infty\dfrac{\arctan(x+k-1)}{k^2+1}$$ $$=\sum_{k=-\infty}^\infty\left(\dfrac1{k^2+1}-\dfrac1{(k+1)^2+1}\right)\arctan(x+k)$$ $$=\sum_{k=-\infty}^\infty\left(\dfrac1{k^2+1}-\dfrac1{(k+1)^2+1}\right)\arctan(k+[x]+\{x\})$$ $$=\sum_{k=-\infty}^\infty\left(\dfrac1{(k-[x])^2+1}-\dfrac1{(k-[x]+1)^2+1}\right)\arctan(k+\{x\}).$$ Calculations with Wolfram Alpha give $$J(3.7)\approx0.446,\quad\text{while}\quad\frac{2\pi}{(3.7-0.5)^2+4}\approx0.441.$$
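The series is easy to evaluate numerically; a truncation (my own check, not from the answer) reproduces the quoted value $J(3.7)\approx0.446$.

```python
from math import atan

def J(x, K=2000):
    # truncation of sum_k (arctan(x+k) - arctan(x+k-1)) / (k^2 + 1);
    # the terms decay like 1/k^4, so a modest K suffices
    return sum((atan(x + k) - atan(x + k - 1)) / (k * k + 1)
               for k in range(-K, K + 1))

assert abs(J(3.7) - 0.446) < 5e-3
```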
If $xa=xb$ then $a=b$
Now I wonder why we are allowed to do so. Is it a general property of equivalence relationships that if $a=b,c=d \Rightarrow ac=bd$, or is it a group property? It is a property of binary operations. A binary operation on a set $G$ is defined as a function $G\times G\to G$. This means that the result $ac$ is uniquely determined by $a$ and $c$. Which is just a rephrasing of the implication you wrote: $a=b \land c=d \Rightarrow ac=bd$. (For functions we have $x=y$ $\Rightarrow$ $f(x)=f(y)$.)
Proof by induction that recursive function $\text{take}$ satisfies $\text{take}(n) = 100 - 2n$
PARENTHESES! You want to prove ${\rm take}(n)=100-2n$ in the special case when $n=k+1$. So, substitute $k+1$ in place of the $n$, but not only formally: it is a separate unit and needs parentheses: ${\rm take}(k+1)=100-2(k+1)$ Then expand the parentheses, so what you have to prove is ${\rm take}(k+1)=100-2k-2$ and you're nearly there.
What does $\mathbb{Z}/n \mathbb{Z}$ mean in abstract algebra?
$\Bbb Z / 8\Bbb Z$ means the group of integers modulo $8$. (Up to isomorphism) it consists of the eight elements $$ \{0, 1, 2, 3, 4, 5, 6, 7\} $$ and the group operation is addition followed, whenever the sum reaches $8$, by subtraction of $8$. For instance, $1+3 = 4$, and $6+7 = 5$ ($13$ becomes $5$ after subtracting $8$). This may be done for any integer $n$. We have $\Bbb Z/3\Bbb Z$ which consists of the elements $\{0, 1, 2\}$ with the addition table $$ \begin{array}{c|ccc} +&0&1&2\\ \hline0&0&1&2\\ 1&1&2&0\\ 2&2&0&1 \end{array} $$ and $\Bbb Z/186\,432\,515\,384\,315\Bbb Z$ which is quite large, but just as straightforward.
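In code the construction is just addition reduced mod $n$; a quick sketch (mine) rebuilds the $\Bbb Z/3\Bbb Z$ table above and checks the two $\Bbb Z/8\Bbb Z$ examples.

```python
# build the addition table of Z/nZ by reducing ordinary sums mod n
n = 3
table = [[(a + b) % n for b in range(n)] for a in range(n)]
assert table == [[0, 1, 2],
                 [1, 2, 0],
                 [2, 0, 1]]

# the two examples from Z/8Z: 1+3 = 4, and 6+7 = 13, which becomes 5
assert (1 + 3) % 8 == 4
assert (6 + 7) % 8 == 5
```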
Parametric equation of a cone
Let $z=\sqrt{a^2 x^2 + b^2 y^2}$, where $a>0$ and $b>0$. Then let $z=r$, $x=\frac{r}{a} \cos(\theta)$ and $y= \frac{r}{b} \sin(\theta)$.
Suppose $a_n \to a$ and $a_n \geq b$ for all $n.$ Show that $a \geq b$
Your idea is correct, but you’ve not handled the absolute value carefully enough, and the wording could be considerably improved. For instance, you should say where $N$ comes from. Here’s a better version that sticks reasonably closely to what you wrote. Assume that $b>a$, and let $\epsilon=b-a>0$. Since $a_n\to a$, there must be some positive integer $N$ such that $|a_n-a|<\epsilon$ whenever $n\ge N$. In particular, $|a_N-a|<b-a$. There are now two possibilities. If $a_N\ge a$, then $|a_N-a|=a_N-a$, and we can add $a$ to both sides to get $a_N<b$, a contradiction. And if $a_N<a$, then $a_N<a<b$, with the same contradiction. Thus, it’s impossible that $b>a$, and hence $a\ge b$.
Surjective Morphism of Schemes
The truth of the claimed statement hinges on the precise definition of 'dominant', of which there are two variations: 1) Topologically dominant: $f(X)$ is topologically dense in $Y$. This is not enough for the statement to be true - consider, for a field $k$, the morphism $$k[T]/(T^2) \to k$$ which is a surjective ring homomorphism inducing a homeomorphism on underlying topological spaces of the associated affine schemes. More generally, any nilimmersion will give you a counterexample. 2) Scheme-theoretically dominant: $Y$ is the scheme theoretic image of $f$. By definition, the scheme theoretic image $Z$ of a morphism $f: X \to Y$ is the initial closed subscheme of $Y$ through which $f$ factors. In other words, whenever you factor $f$ into $X \to Z' \to Y$ with $Z' \to Y$ a closed immersion, you get a unique factorization $Z \to Z' \to Y$ of the inclusion $Z \to Y$. If you use this definition, your statement is true, albeit somewhat tautological. In fact, for a morphism $f: X \to Y$ of affine schemes, coming from $\phi: R \to A$, the scheme theoretic image is given by $Spec(R/ker(\phi)) \to Spec(R)$ (the universal property is just the isomorphism theorem). Hence, for a morphism of affine schemes, being scheme theoretically dominant is always equivalent to $\phi$ being injective on the level of rings (without any assumption of surjectivity).
Having problems with determining the path over which a line integral is to be evaluated
You are right with that parametrization. Here's what this intersection curve looks like when plotted in Mathematica: img1 http://puu.sh/lINR8/862057487a.png What it means to be in the clockwise orientation is that, if you looked at this from above, you would be going around the "circloid" in the counterclockwise direction. See the image below: img2 http://puu.sh/lINUY/e76e85992f.png I hope that makes sense. Another way to think about the counterclockwise direction is increasing $\theta$. Think about the unit circle; going counterclockwise yields more positive values of $\theta$, while clockwise gives negative values of $\theta$. In this case, you can imagine $t$ as $\theta$. As $t$ increases, you move counterclockwise on the parametrization. If you plug $C$ into your line integral, you should be able to calculate it quite easily. I will show the first term, $y\,dx$, and let you go from there. We have $y=\sin t$ and, since $x=\cos t$, $\frac{dx}{dt}=-\sin t$, so $dx=-\sin t\,dt$. Therefore, $y\,dx=-\sin^2 t\,dt$. Do the same for the rest of the terms, factor out the $\,dt$ and integrate. One final note: this parametrization has $t\in[0, 2\pi]$. That means as $t$ ranges from $0$ to $2\pi$, the curve is traversed just once. This is generally the case for parametric equations like the one in this example.
Is it a solution of Heat Equation?
It's not quite right. I think you mean $$ T(x,t) = T_0 + \dfrac{\phi}{\lambda} \left[ \sqrt{\dfrac{4at}{\pi}} \exp \left(-\dfrac{x^2}{4at}\right) - x \left(1 - \text{erf}\left(\dfrac{x}{\sqrt{4at}}\right)\right)\right] $$ for the heat equation in the form $$ \dfrac{\partial T}{\partial t} = a \dfrac{\partial^2 T}{\partial x^2} $$
Relations and transitivity
Transitivity simply means if $(x,y) \in R$ and $(y,z) \in R$, then $(x,z)\in R$. In the first case, $(2,1), (1,2) \in R$, but $(2,2) \notin R$. The last one is not transitive because $(2,1), (1,4) \in R$, but $(2,4)\notin R$.
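The definition translates directly into a checker. A sketch (mine; the relation literals below only reproduce the failing pairs mentioned, since the full relations aren't written out here):

```python
def is_transitive(R):
    # R is a set of ordered pairs: check (x,y), (y,z) in R => (x,z) in R
    return all((x, z) in R
               for (x, y) in R
               for (y2, z) in R
               if y == y2)

# (2,1) and (1,2) present but (2,2) missing
assert not is_transitive({(2, 1), (1, 2), (1, 1)})
# (2,1) and (1,4) present but (2,4) missing
assert not is_transitive({(2, 1), (1, 4)})
# a transitive example for contrast
assert is_transitive({(1, 2), (2, 3), (1, 3)})
```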
linear algebra - proofs of property of positive definite and symmetric real matrix
I had some pretty bad typos in my comment. Here are some corrections. Any submatrix $A_I$ obtained by taking the rows and columns of $A$ indexed by some subset $I\subset \{1,\ldots,n\}$ is positive definite. For example, $I=\{i,j\}$ shows that $A_I=\begin{bmatrix}a_{ii} & a_{ij}\\a_{ji} & a_{jj}\end{bmatrix}$ is positive definite. Similarly taking $I=\{i\}$ shows that $a_{ii} > 0$. To see why $A_I$ is positive definite, consider $v^\top A v$ where the only nonzero entries of $v$ are in the components $I$. Consequently, for any such $A_I$, we have $\det A_I > 0$: the determinant is the product of the eigenvalues of $A_I$, which are all positive. Thus a) follows, since for $I=\{i,j\}$, $0<\det A_I = a_{ii} a_{jj} - a_{ij}^2$. Together with $a_{ii}>0$, as we noted above, this implies $|a_{ij}| < \max(|a_{ii}|,|a_{jj}|)$ for all $i\ne j$, so $\max_{i,j} |a_{ij}| = \max_i |a_{ii}| = \max_i a_{ii}$ (note that diagonal entries are positive, as we showed above), which proves b).
what will be the derivative
At first we recall that the differential operator $\frac{\partial}{\partial p_i}$ is linear. So, the following is valid \begin{align*} \frac{\partial}{\partial p_i}\sum_{j}\alpha_j f(p_j) =\sum_{j}\alpha_j \frac{\partial}{\partial p_i}f(p_j) =\alpha_i\frac{\partial}{\partial p_i}f(p_i) \end{align*} Note that all summands with $i\ne j$ vanish, since $p_j$ is treated as constant for $i\ne j$ and \begin{align*} \frac{\partial}{\partial p_i}f(p_j)=0\qquad\qquad i\ne j \end{align*} Since we focus on the variable $p_i$ the summands have the structure \begin{align*} \frac{a}{\sigma^{2}+\sum_{n=1,n\neq k}^N p_{n}b + p_{k}c}=\frac{A}{B+C p_i}\tag{1} \end{align*} with $A,B,C$ constants. Differentiation yields \begin{align*} \frac{\partial}{\partial p_i}\left(\frac{A}{B+C p_i}\right)=-\frac{AC}{\left(B+C p_i\right)^2}\tag{2} \end{align*} In the following we use the linearity of the differential operator and conveniently split the sums according to occurrences of $p_i$. We obtain \begin{align*} \frac{\partial}{\partial p_{i}}&\sum_{k=1}^N\frac{a}{\sigma^{2}+\sum_{n=1,n\neq k}^N p_{n}b +p_{k}c}\\ &=\frac{\partial}{\partial p_i}\sum_{{k=1}\atop{k\ne i}}^N \frac{a}{\underbrace{\sigma^{2} +\displaystyle{\sum_{{1\leq n\leq N}\atop{n\ne k, n\ne i}}p_{n}b} +p_{k}c}_B+p_ib} +\frac{\partial}{\partial p_i}\left(\frac{a}{ \sigma^{2}+\displaystyle{\sum_{{1\leq n\leq N}\atop{n\ne i}} p_{n}b} +p_{i}c}\right)\tag{3}\\ &=\sum_{{k=1}\atop{k\ne i}}^N\frac{\partial}{\partial p_i}\left( \frac{a}{\sigma^{2} +\displaystyle{\sum_{{1\leq n\leq N}\atop{n\ne k, n\ne i}}p_{n}b} +p_{k}c+p_ib}\right) +\frac{\partial}{\partial p_i}\left(\frac{a}{ \sigma^{2}+\displaystyle{\sum_{{1\leq n\leq N}\atop{n\ne i}} p_{n}b} +p_{i}c}\right)\tag{4}\\ &=-\sum_{{k=1}\atop{k\ne i}}^N \frac{ab}{\left(\sigma^{2} +\displaystyle{\sum_{{1\leq n\leq N}\atop{n\ne k, n\ne i}}p_{n}b} +p_{k}c+p_ib\right)^2} -\frac{ac}{\left( \sigma^{2}+\displaystyle{\sum_{{1\leq n\leq N}\atop{n\ne i}} p_{n}b} +p_{i}c\right)^2}\\ 
&=-\sum_{{k=1}\atop{k\ne i}}^N \frac{ab}{\left(\sigma^{2} +\displaystyle{\sum_{{1\leq n\leq N}\atop{n\ne k}}p_{n}b} +p_{k}c\right)^2} -\frac{ac}{\left( \sigma^{2}+\displaystyle{\sum_{{1\leq n\leq N}\atop{n\ne i}} p_{n}b} +p_{i}c\right)^2}\\ \end{align*} Comment: In (3) we extract in the outer sum as well as in each sum in the denominator the term with index $n=i$. Observe the structural similarity of both summands with (2), whereby in the first term the constant part of the denominator is marked as $B$. In (4) we use the linearity of the differential operator and can now apply the differentiation according to (1).
Approximation of a series expansion
Let's try series inversion. If we have: $$\delta=z-\frac{1}{2^{3/2}} z^2+\frac{1}{3^{3/2}} z^3-\dots$$ And we assume: $$z=a_1\delta+a_2 \delta^2+a_3 \delta^3+\dots$$ Then we can substitute: $$(a_1\delta+a_2 \delta^2+a_3 \delta^3+\dots)-\frac{1}{2^{3/2}} (a_1\delta+a_2 \delta^2+a_3 \delta^3+\dots)^2+ \\ +\frac{1}{3^{3/2}} (a_1\delta+a_2 \delta^2+a_3 \delta^3+\dots)^3-\dots=\delta$$ Since we are only interested in the first and second order terms, let's compare the coefficients in front of $\delta$ and $\delta^2$: $$a_1=1 \\ a_2-\frac{a_1^2}{2^{3/2}}=0$$ Which means we have, up to 2nd order: $$z=\delta+\frac{1}{2^{3/2}} \delta^2+\dots$$
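A quick numeric check (mine) of the inversion against the truncated cubic map: the second-order inverse should be accurate to $O(\delta^3)$.

```python
def delta_of(z):
    # the truncated series delta(z) = z - z^2/2^(3/2) + z^3/3^(3/2)
    return z - z**2 / 2**1.5 + z**3 / 3**1.5

z = 0.01
d = delta_of(z)
z_second_order = d + d**2 / 2**1.5   # the inverted series up to 2nd order
assert abs(z_second_order - z) < 1e-6  # leftover error is O(delta^3)
```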
Why Doesn't This Integral $\int \frac{\sqrt{x^2 - 9}}{x^2} \ dx$ Work?
Added: It has been pointed out that there may be some ambiguity about the integral the OP is trying to find. The introduction has an $x^2$ at the bottom, but in the solution the OP seems to be using $x^3$. That is much easier. The substitution $x=3\sec\theta$ transforms the integrand into a constant times $\frac{\tan^2\theta}{\sec^2\theta}$, which is just $\sin^2\theta$. Original post: If we are going to use a trigonometric substitution, I would let $x=3\sec\theta$. (Secant is a bit more familiar than cosecant. You are therefore less likely to differentiate $\sec\theta$ incorrectly; incorrect differentiation of $\csc\theta$ spoiled our integration.) Then $dx=3\sec\theta\tan\theta\,d\theta$. Plug in. From now on, I will forget about constants. So everything is a little wrong, but only by a constant factor. On top we get $(\tan\theta)(\sec\theta\tan\theta)$. On the bottom we get $\sec^2\theta$. So we are interested in $$\frac{\tan^2\theta\sec\theta}{\sec^2\theta}.$$ Put everything in terms of sines and cosines. We get $\dfrac{\sin^2\theta}{\cos\theta}$. There is a trick that one can use when there is an odd power of sine or cosine. In this case, multiply top and bottom by $\cos\theta$, and replace the $\cos^2\theta$ now at the bottom by $1-\sin^2\theta$. We have arrived at $$\int \frac{\cos\theta \,\sin^2\theta}{1-\sin^2\theta}\,d\theta.$$ Make the substitution $u=\sin\theta$. We arrive at $$\int \frac{u^2}{1-u^2}\,du.$$ The integrand is a rational function. It can be rewritten as $$\frac{1}{1-u^2}-1.$$ But (partial fractions) we find that $\dfrac{1}{1-u^2}=\dfrac{1}{2}\left(\dfrac{1}{1+u}+\dfrac{1}{1-u}\right)$. Remark: As usual, there are many other approaches. For example, we can change $\frac{\tan^2\theta}{\sec\theta}$ to $\frac{\sec^2\theta-1}{\sec\theta}$, and then to $\sec\theta -\cos\theta$. If we happen to know the integral of $\sec\theta$, we are essentially finished. 
More strikingly, if we guess somehow that $F(x)$ is an antiderivative, then the magic substitution $u=F(x)$ solves the problem.
Is it true that a 2x2 matrix is diagonalizable iff it has two distinct eigenvalues?
A typical 2 x 2 non-diagonalizable matrix is $$\pmatrix{ 1 &amp; 1 \\ 0 &amp; 1} $$ Its characteristic polynomial has one double-root, but its minimal polynomial is also $(x-1)^2$, which makes it different from the identity, whose char. poly has a double root, but whose minimal polyonomial is $(x-1)$. What your prof. said was correct, but you negated it incorrectly. :) By the way, I applaud your questioning this. Asking questions like this, even ones that seem stupid, is part of how you learn to recognize certain classes of errors and learn not to make them again. Go, you!
Evaluate $ \int_{-2}^{1}\sqrt{\frac{\left ( 1-x \right )\left ( x+2 \right )^2}{x+3}}\,dx$
Hint: Notice that $$\int_{-2}^{1}\sqrt{\frac{\left ( 1-x \right )\left ( x+2 \right )^2}{x+3}}\,dx = \int_{-2}^{1} (x+2)\sqrt{\frac{1-x}{x+3}}\,dx, \text{because $ x\geq -2$}.$$ Take $t^2 = \frac{1-x}{x+3} \implies x= \frac{1-3t^2}{t^2+1}\implies dx = \frac{-8t}{(t^2+1)^2}\,dt.$ Hence, $$\int_{-2}^{1}\sqrt{\frac{\left ( 1-x \right )\left ( x+2 \right )^2}{x+3}}\,dx = 8\int_{0}^{\sqrt{3}}\frac{(3-t^2)t^2}{(t^2+1)^3}\,dt.$$ Now, let $f(t) = t^3$ and $g(t) = (t^2+1)^2$. Notice that $$\frac{(3-t^2)t^2}{(t^2+1)^3} = \frac{f'(t)g(t) - f(t)g'(t)}{g(t)^2}.$$ I think you can finish now!
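Finishing the hint, I compute $8\left[\frac{t^3}{(t^2+1)^2}\right]_0^{\sqrt3}=\frac{3\sqrt3}{2}$; a midpoint-rule check of the original integral (my own sketch, not part of the hint) agrees.

```python
from math import sqrt

def integrand(x):
    # (x + 2) * sqrt((1 - x)/(x + 3)), valid on [-2, 1]
    return (x + 2) * sqrt((1 - x) / (x + 3))

n = 200000
a, b = -2.0, 1.0
h = (b - a) / n
approx = sum(integrand(a + (k + 0.5) * h) for k in range(n)) * h
assert abs(approx - 3 * sqrt(3) / 2) < 1e-4
```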
Testing a series using the ratio test.
When $|x|=1$, your sum is either $$\sum_{n=1}^{\infty}1^n\quad\text{or}\quad \sum_{n=1}^{\infty}(-1)^n.$$ In both cases, $\displaystyle\lim_{n\to\infty}a_n\neq 0$ so they diverge.
invertible polynomials over non-commutative rings
An expansion of my comment. No condition exists which can be stated only in terms of the $a_i$ separately and which is invariant under conjugation. To see this, take $R = \mathcal{M}_2(\mathbb{Q})$ and $$a_0 = I, a_1 = \left[ \begin{array}{cc} 1 & 1 \\ 1 & -1 \end{array} \right], a_2 = I, a_3 = \left[ \begin{array}{cc} 0 & 0 \\ 1 & 0 \end{array} \right].$$ Then $$a_0 + a_1 t + a_2 t^2 + a_3 t^3 = \left[ \begin{array}{cc} 1 + t + t^2 & t \\ t + t^3 & 1 - t + t^2 \end{array} \right]$$ has determinant $1$ and so is invertible in $R[t]$. (And $a_1, a_2$ are not nilpotent!) However, letting $$b_3 = \left[ \begin{array}{cc} 0 & 0 \\ -1 & 0 \end{array} \right]$$ which is conjugate to $a_3$ we have $$a_0 + a_1 t + a_2 t^2 + b_3 t^3 = \left[ \begin{array}{cc} 1 + t + t^2 & t \\ t - t^3 & 1 - t + t^2 \end{array} \right]$$ which has non-invertible determinant.
Incorrect use of Borel Cantelli?
Since the series $\sum\limits_n\mathrm P(X_n=1)$ diverges and the random variables $X_n$ are independent, the second Borel-Cantelli lemma tells you that $X_n=1$ infinitely often, that is, that the random set $$ I=\{n\geqslant1\mid X_n=1\} $$ is almost surely infinite. The series $\sum\limits_n\mathrm P(X_n=0)$ diverges as well, hence $X_n=0$ infinitely often, that is, the random set $\{n\geqslant1\mid X_n=0\}$ is almost surely infinite. In other words $I$ is almost surely infinite and co-infinite. Regarding the question in your last paragraph, note that one does not need to assume that the probability space is $\Omega=(0,1)$ or $\Omega=[0,1)$ or anything else. In fact, as almost always in probability, the specification of the probability space $\Omega$ is irrelevant. In your case, for any independent sequence $(X_n)_{n\geqslant1}$ of Bernoulli random variables defined on whichever probability space you want and such that $\mathrm P(X_n=1)=1/n$ for every $n\geqslant1$, the event $[X_n\to0]$ has probability zero. A word of caution: if, however, one insists on specifying a probability space and one chooses $\Omega=(0,1)$ with the Lebesgue measure, then the random variables $Y_n$ defined by $Y_n(\omega)=1$ if $\omega\leqslant1/n$ and $Y_n(\omega)=0$ otherwise, which you seem to have in mind as a realization of the sequence $(X_{n\geqslant1})$, are in fact far from being independent. To wit, $[Y_2=0]=(1/2,1)$ and $[Y_3=1]=(0,1/3]$ hence $[Y_2=0,Y_3=1]=\emptyset$, hence $$ \mathrm P(Y_2=0,Y_3=1)=0\ne1/6=\mathrm P(Y_2=0)\mathrm P(Y_3=1). $$ Other, cleverer, definitions of the random variables $Y_n$ on $\Omega=(0,1)$ exist, which produce the correct joint distribution, but this one does not.
Cross product and right hand rule
By the linearity and anticommutativity, it suffices to prove that $i\times j=k$. $$\left|\begin{array}{ccc}i&j&k\\1&0&0\\0&1&0\end{array}\right|=k$$ A similar computation proves that $j\times k=i$ and $k\times i=j$.
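The determinant expands into the usual component formula; a quick check of all three basis products (my own sketch):

```python
def cross(u, v):
    # component form of the 3x3 determinant expansion
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

i, j, k = (1, 0, 0), (0, 1, 0), (0, 0, 1)
assert cross(i, j) == k
assert cross(j, k) == i
assert cross(k, i) == j
```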
How to solve $5^n - 5^{n-3} = 5^{n-3} *124$
Recall that $$a^b\cdot a^c = a^{b+c}$$ So here we use the fact that $$5^n = 5^{(n - 3)+ 3} = 5^{n -3}\cdot 5^3$$ Factor out $5^{n-3}$ from the left hand side: $$\begin{align}5^n - 5^{n-3} &amp; = \underbrace{\color{blue}{5^{n-3}}\cdot 5^3}_{\large =\,5^n} - \color{blue}{5^{n-3}}\cdot 1 \\ \\ &amp; = \color{blue}{5^{n-3}}(5^3 - 1) \\ \\&amp; = 5^{n - 3}\cdot 124\end{align}$$ For (free) online tutorials dealing with equations and exponents, see the Khan Academy.
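The same factoring, checked numerically for a few exponents (my own check):

```python
# 5^n - 5^(n-3) = 5^(n-3) * (5^3 - 1) = 5^(n-3) * 124
for n in range(3, 10):
    assert 5**n - 5**(n - 3) == 5**(n - 3) * 124
```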
Let $(G, \cdot)$ be a group, and $H \leqslant G$. Let $x \in H$; what does $C_H(x) < C_G(x)$ mean? ($C_A(x)$ is notation for "centralizer of $x$ in $A$")
It means that $C_H(x)$ is a proper subgroup of $C_G(x)$, i.e. $C_H(x) \subseteq C_G(x)$ and $C_H(x) \neq C_G(x)$.
Constructing a divergent sequence
For all $i\in\mathbb{N}$, define the following family $$ A_i:(c_0,\|\cdot\|_\infty)\to \mathbb{R},(s_j)_j\mapsto \sum_j a_{i,j}s_j. $$ Since the summation $\sum_j a_{i,j}s_j$ is finite whenever $s_j$ admits a limit, by picking $s_j=1$ we find that $\sum_j a_{i,j}<\infty$, that is $(a_{i,j})_j\in l^1$ for all $i\in\mathbb{N}$ (I am assuming that the summation is the Lebesgue integral with respect to the counting measure, so that absolute convergence implies convergence and vice versa). $A_i$ is clearly linear and continuous since $$ |A_i (s_j)_j| = \left| \sum_j a_{i,j}s_j\right| \leq \sum_j |a_{i,j}||s_j|\leq \|(s_j)_j\| \sum_j |a_{i,j}| \leq C_i \|(s_j)_j\|. $$ To recover the opposite inequality, let $(s_j^i)_j = (\operatorname{sign}(a_{i,j}))_j$. Truncate the sequence up to the $n$-th term, so that $$(s^{i,n}_j)_j = (\operatorname{sign}(a_{i,j}) \chi_{j\leq n})_j \in c_0, $$ as it is identically equal to $0$ for $j> n$. We can compute $$ \|A_i\|\geq \lim_n |A_i(s_j^{i,n})_j| \overset{\text{monotone convergence}}{=} \lim_n \sum_{j=0}^n |a_{i,j}| = C_i, \quad \|A_i\| = C_i = \sum_j |a_{i,j}|. $$ Notice that $c_0 = \{(s_j)_j\in l^\infty: \lim_j s_j=0\}$ is a Banach space (easy to prove) w.r.t. the $\|\cdot\|_\infty$ norm. Fix now any $(s_j)_j\in c_0$; since $s_j\to 0$, then $A_i(s_j)_j\to 0$ as $i\to\infty$ by hypothesis on the matrix, so that $$ \sup_i |A_i(s_j)_j| <\infty. $$ Apply Banach–Steinhaus to conclude that $$ \sup_i \|A_i\| = \sup_i \sum_j |a_{i,j}| <\infty. $$
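The norm identity $\|A_i\| = \sum_j |a_{i,j}|$ can be illustrated in finite dimensions: a brute-force sketch (the row `a` is made up) checking that the sign vector attains the supremum over the sup-norm unit ball.

```python
from itertools import product

# For the functional s -> sum_j a_j s_j, the supremum of |A s| over
# ||s||_inf <= 1 is attained at s_j = sign(a_j) and equals sum_j |a_j|.
a = [3.0, -1.5, 0.5, -2.0]  # made-up finite row of the matrix
best = max(abs(sum(x * s for x, s in zip(a, signs)))
           for signs in product([-1, 1], repeat=len(a)))
assert abs(best - sum(abs(x) for x in a)) < 1e-12
```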
Understanding application of trig identity $\cos^2\theta = \frac{1+\cos2\theta}{2}$ in integration
Yes, you are correct. Anything can be substituted for $\theta$ and treated this way.
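A quick numerical check of the identity, with arbitrary angles substituted for $\theta$:

```python
import math

# cos^2(t) == (1 + cos(2t)) / 2 for any t, up to floating-point error
for t in [0.0, 0.3, 1.0, 2.5, -4.0, math.pi / 7]:
    assert abs(math.cos(t) ** 2 - (1 + math.cos(2 * t)) / 2) < 1e-12
```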
Equivalence relation with a language represented by a regular expression and meaning of ≈$_{L}$
See this post on the Myhill-Nerode relation. If I understand correctly, the equivalence relation is a relation on $Σ^∗ =\{ 0,1 \}^∗$ and not on the language $L$. If so, the two strings $0$ and $00$ are not equivalent because we can find a string $z$ such that $0z$ is in $L$ while $00z$ is not. It is enough to choose $z=00$. The same goes for the third case. Choosing $z=0$, we have that $110$ is in $L$ while $11010$ is not. The issue is with the "parity": if $11$ works with a string of one digit (to get three digits), the second produces a string of five digits, which is not in $L$. Why is $1 \sim_L 1001$ fine? Because if the string $1z$ is fine, it means that it has a length that is a multiple of $3$, i.e. $\text{len}(1z)=3k$. But then $\text{len}(1001z)=3k+3=3(k+1)$, which is again a multiple of $3$.
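Assuming, as the reasoning above suggests, that $L$ is the set of strings over $\{0,1\}$ whose length is a multiple of $3$, the distinguishing strings can be found by brute force (the search may return a shorter witness than the $z$ used above):

```python
from itertools import product

def in_L(w):
    # Assumption (suggested by the answer's reasoning): L is the set of
    # strings over {0,1} whose length is a multiple of 3.
    return len(w) % 3 == 0

def witness(x, y, max_len=6):
    """A string z with exactly one of xz, yz in L, or None if no short one exists."""
    for n in range(max_len + 1):
        for z in map("".join, product("01", repeat=n)):
            if in_L(x + z) != in_L(y + z):
                return z
    return None

assert witness("0", "00") is not None        # 0 and 00 are not equivalent
assert witness("110", "11010") is not None   # the "parity" clash from the answer
assert witness("1", "1001") is None          # 1 ~ 1001: lengths agree mod 3
```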
Evaluate $\int\frac{\sqrt{1+x^2}}{x}\, dx$
Rewrite the integral as $$\int \frac{x\sqrt{1+x^2}}{x^2}\,dx,$$ and let $u^2=1+x^2$. Then $x\,dx=u\,du$, and we end up with $$\int \frac{u^2}{u^2-1}\,du,$$ which is probably familiar. Start by noting that $\frac{u^2}{u^2-1}=1+\frac{1}{u^2-1}$.
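Carrying the substitution through gives one possible final form (not stated in the answer, so take it as an assumption to verify): $\int \frac{u^2}{u^2-1}\,du = u + \frac12\ln\left|\frac{u-1}{u+1}\right| + C$ with $u=\sqrt{1+x^2}$. A numeric spot check against a Riemann sum:

```python
import math

def F(x):
    # From  ∫ u^2/(u^2-1) du = u + (1/2) ln|(u-1)/(u+1)|,  with u = sqrt(1+x^2)
    u = math.sqrt(1 + x * x)
    return u + 0.5 * math.log((u - 1) / (u + 1))

def f(x):
    return math.sqrt(1 + x * x) / x

# Midpoint-rule estimate of the integral of f over [1, 2] versus F(2) - F(1)
a, b, n = 1.0, 2.0, 100_000
h = (b - a) / n
riemann = sum(f(a + (i + 0.5) * h) for i in range(n)) * h
assert abs((F(b) - F(a)) - riemann) < 1e-6
```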
Euclid's lemma for non-prime numbers.
$3$ divides $a^2$ $\implies$ $3$ divides $a$ (Euclid's lemma, since $3$ is prime). $2$ divides $a^2$ $\implies$ $2$ divides $a$. Since $2$ and $3$ are coprime, $2\cdot 3=6$ divides $a$.
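A brute-force check of the claim $6 \mid a^2 \Rightarrow 6 \mid a$:

```python
# If 6 divides a^2 then 2 and 3 each divide a^2, hence a (both are prime),
# and since gcd(2, 3) = 1, 6 divides a.
for a in range(1, 5000):
    if (a * a) % 6 == 0:
        assert a % 6 == 0
```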
Is this implication true:$ f(a)=0 \implies f'(a)=0$ with $f$ apolynomial of second degree and $\Delta=0$?
$x=a$ is a multiple root of $f(x)$, so $(x-a)^{2}$ divides $f(x)$, say, $f(x)=(x-a)^{2}g(x)$, then $f'(x)=2(x-a)g(x)+(x-a)^{2}g'(x)$, so $f'(a)=2(a-a)g(a)+(a-a)^{2}g'(a)=0$.
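A concrete instance (a made-up quadratic with $\Delta = 0$): $f(x) = x^2 - 4x + 4 = (x-2)^2$, whose double root $a = 2$ is also a root of $f'$.

```python
def f(x):
    return x * x - 4 * x + 4       # (x - 2)^2; discriminant 16 - 16 = 0

def f_prime(x):
    return 2 * x - 4               # f'(x) = 2(x - 2)

a = 2
assert f(a) == 0
assert f_prime(a) == 0
```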
Property of abelian groups without using Lagrange's theorem
The proof of the Euler-Phi theorem works by exhibiting a bijection in order to calculate $a^{\phi(n)}$. This proof can be done by considering the bijection $\varphi_a : G \rightarrow G$ sending $b$ to $ab$. In particular, note that $$\prod_{b \in G} b = \prod_{b \in G} ab$$ and see if you can conclude the proof. This last step is where the hypothesis that $G$ is abelian is essential.
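To see the hint in action, here is a sketch with $G$ the (abelian) group of units modulo $n$: since $b \mapsto ab$ merely permutes $G$, the two products agree, and cancelling $\prod_b b$ leaves $a^{|G|} = e$.

```python
from math import gcd

def units(n):
    """The abelian group of units modulo n."""
    return [b for b in range(1, n) if gcd(b, n) == 1]

for n in (10, 12, 35):
    G = units(n)
    for a in G:
        prod_b, prod_ab = 1, 1
        for b in G:
            prod_b = (prod_b * b) % n
            prod_ab = (prod_ab * a * b) % n
        assert prod_b == prod_ab        # b -> ab merely permutes the factors
        assert pow(a, len(G), n) == 1   # cancelling the product: a^{|G|} = e
```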
How is the twice continuous differentiability inherited by $g$?
Hint: $$ \frac{\mathrm{d}}{\mathrm{d}t}f(\mathbf{x}+t\mathbf{z})=\mathbf{z}\cdot\nabla f(\mathbf{x}+t\mathbf{z}) $$ lather, rinse, repeat.
Why was set theory inadequate as a foundation to the emerging new fields and why category theory isn't?
It is not that set theory is inadequate for some aspects of homological algebra, but rather that the language provided by category theory is very powerful and very expressive for the purposes of some areas in mathematics, in particular homological algebra. Logically speaking set theory is perfectly adequate since all of homological algebra can be stated in $ZFC$. But, just like some programming languages are better suited for certain tasks than others (but ultimately they are all equivalent to machine code) so is it in mathematics that the choice of base language may be suited to certain things more than to others. Traditional set theory, since Cantor, serves as a very rigorous foundation and provides a strong and consistent common language to discuss mathematics. But, it is centered on sets and so everything in set theory is (warning: getting into philosophy now) static. Even the notion of a function as a relation satisfying a condition is a very static view of what a function is. In category theory, in (philosophical) contrast, the emphasis is on the morphisms. These are the undefined terms and the resulting language is very powerful in expressing the interrelations between structures and between constructions in mathematics. This does not say that one is good and the other is bad or that one is adequate and the other is not. It is to say that for certain things one formalism may be better suited than the other. Without a doubt the category theory formalism is superior when considering homological algebra (or algebraic topology more generally). In other areas of mathematics (such as, to the best of my knowledge, combinatorics) there seems to be little to gain from the categorical formalism. Now, it was discovered that category theory can be used also as a foundation for logic and there are many differences between categorical logic and classical logic. Here again one formalism may be better suited than another, depending on the purpose.
For instance, it would seem that for constructive and intuitionistic logic topos theory provides a very natural setting, more so than classical set theory would. For classical logic it is debatable how much merit there is in adopting the topos-theoretic approach. Classical techniques such as Cohen forcing can be understood quite nicely via topos theory but one may argue that the insight provided is not significant enough to justify abandoning standard techniques for new ones. It's a matter of taste. Particularly in homology theory a language that allows one to easily speak of processes, processes between processes, and comparisons between such is crucial. Everything can be stated in $ZFC$ but then one would not see the forest for the trees (or rather one would not see the structure for the objects). When the language talks about morphisms and not objects one can ignore the objects and concentrate on the processes and thus start seeing the structure.
Testing the endpoints of the interval of convergence
When $x = 3/4$, the general summand is \begin{align*} & \phantom{={}} ((-1)^k + 3)^k (-1/4)^k \\ &= ((-1)^k(-1/4) + 3(-1/4))^k \\ &= ((-1)^{k+1}/4 - 3/4)^k \text{,} \end{align*} which alternates between $(-1)^k$ and $(-1/2)^k$. In particular, when $k$ is even, the general summand is $(-1)^k$, which does not decrease to zero, so the sum does not converge. When $x = 5/4$, the general summand is \begin{align*} & \phantom{={}} ((-1)^k + 3)^k (1/4)^k \\ &= ((-1)^k(1/4) + 3(1/4))^k \\ &= ((-1)^k/4 + 3/4)^k \text{,} \end{align*} which is $1^k$ when $k$ is even, so again the terms do not go to zero, so the sum does not converge. (This $n^\text{th}$ term test for divergence is frequently effective at the endpoints of a power series, although we can find series which are not resolved by it.)
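Numerically, the even-index terms at both endpoints have absolute value exactly $1$:

```python
def term(k, x):
    return ((-1) ** k + 3) ** k * (x - 1) ** k

# At x = 3/4 and x = 5/4 every even-index term has absolute value 1,
# so the terms do not tend to 0 and the series diverges at both endpoints.
for k in range(0, 40, 2):
    assert abs(term(k, 3 / 4)) == 1.0
    assert abs(term(k, 5 / 4)) == 1.0
```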
Special case of graph-theory cycle
I think you're talking about a chordless cycle, or sometimes an induced cycle. A face in a planar graph doesn't necessarily conform to this, so there is some conflict in your description.
Find the sum of p+q?
Calculating $p$: First, note that the number of trailing zeros is the number of $5$s in the product. Every multiple of $5$ less than $96$ contributes one such factor, giving $19$ factors, and the numbers $25, 50, 75$ contribute an extra factor, giving $22$ trailing zeroes in total. So we want to find the value of $$\frac{96!}{10^{22}} \bmod 10.$$ We will later use the Chinese Remainder Theorem on $$\begin{align} \frac{96!}{10^{22}} &\bmod 2, \\ \frac{96!}{10^{22}} &\bmod 5,\end{align}$$ so that we now only have to find those two values separately. First, $96!$ contains many more factors $2$ than $5$, so $$\frac{96!}{10^{22}} \equiv 0 \bmod 2.$$ Next, for the value modulo $5$, we first factor the number in two integers: $$\frac{96!}{10^{22}} = \frac{1 \cdot 2 \cdot 3 \cdot 4 \cdot 6 \cdot \ldots \cdot 96}{2^{22}} \cdot \frac{5 \cdot 10 \cdot 15 \cdot \ldots \cdot 95}{5^{22}}.$$ Let us first look at the first term. Since $$1 \cdot 2 \cdot 3 \cdot 4 \cdot 6 \cdot \ldots \cdot 96 \equiv (1 \cdot 2 \cdot 3 \cdot 4)^{19} \cdot 1 \equiv 4^{19} \equiv (16)^9 \cdot 4 \equiv 4 \bmod 5,$$ and $$2^{22} \equiv 16^{5} \cdot 4 \equiv 4 \bmod 5,$$ we get $$\frac{1 \cdot 2 \cdot 3 \cdot 4 \cdot 6 \cdot \ldots \cdot 96}{2^{22}} \equiv 1 \bmod 5.$$ For the other term, we have to divide out all the factors $5$ in the product to get $$\begin{align}\frac{5 \cdot 10 \cdot 15 \cdot 20 \cdot \ldots \cdot 95}{5^{22}} &\equiv (1 \cdot 2 \cdot 3 \cdot 4 \cdot 1) (1 \cdot 2 \cdot 3 \cdot 4 \cdot 2) (1 \cdot 2 \cdot 3 \cdot 4 \cdot 3) (1 \cdot 2 \cdot 3 \cdot 4) \\ &\equiv (-1) \cdot (-2) \cdot 2 \cdot (-1) \\ &\equiv 1 \bmod 5.\end{align}$$ Combining these two terms, we get $$\frac{96!}{10^{22}} = \frac{1 \cdot 2 \cdot 3 \cdot 4 \cdot 6 \cdot \ldots \cdot 96}{2^{22}} \cdot \frac{5 \cdot 10 \cdot 15 \cdot \ldots \cdot 95}{5^{22}} \equiv 1 \cdot 1 \equiv 1 \bmod 5.$$ So $$\begin{align}\frac{96!}{10^{22}} &\equiv 0 \bmod 2, \\ \frac{96!}{10^{22}} &\equiv 1 \bmod
5.\end{align}$$ Solving this using CRT gives us $$\frac{96!}{10^{22}} \equiv 6 \bmod 10.$$ So $p = 6$. Calculating $q$: Let $n = 121212\ldots12$ with the pattern $(12)$ repeated $300$ times. Using the Chinese Remainder Theorem, we only need to find $n \bmod 9$ and $n \bmod 11$ to recover $n \bmod 99$. Since the sum of the digits is $900$ and is divisible by $9$, we have $$n \equiv 0 \bmod 9.$$ For $n \bmod 11$, we can use the following rule: Lemma: If $n = d_k d_{k-1} d_{k-2} \ldots d_2 d_1 d_0$ is the decimal representation of $n$, then $n \equiv d_0 - d_1 + d_2 - d_3 \pm \ldots \pm d_k \equiv \sum_{i=0}^k d_i (-1)^i \bmod 11$. Proof: $d_k d_{k-1} d_{k-2} \ldots d_2 d_1 d_0 = \sum_{i=0}^k d_i (10)^i \equiv \sum_{i=0}^k d_i (-1)^i \bmod 11$, where we used the fact that $10 \equiv -1 \bmod 11$. Applying this to our $n$, we get $n \equiv - 1 + 2 - 1 + 2 \mp \ldots + 2 \equiv -300 + 600 \equiv 300 \equiv 3 \bmod 11$. So we apply the Chinese Remainder Theorem to $$\begin{align} n &\equiv 0 \bmod 9, \\ n &\equiv 3 \bmod 11,\end{align}$$ to obtain $$n \equiv 36 \bmod 99.$$ So $q = 36$. Combining the results: Combining the two answers above, we get $p + q = 42$.
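Both digits can be cross-checked directly with integer arithmetic (a brute-force verification, not the intended contest solution):

```python
from math import factorial

# p: last nonzero digit of 96!
m = factorial(96)
while m % 10 == 0:
    m //= 10
p = m % 10

# q: remainder of 1212...12 (300 copies of "12") modulo 99
q = int("12" * 300) % 99

assert (p, q, p + q) == (6, 36, 42)
```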
How do I prove the parrallelogram law?
Try expanding the left hand side and then factoring.
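Expanding $\|x+y\|^2 + \|x-y\|^2 = \langle x+y, x+y\rangle + \langle x-y, x-y\rangle$ makes the cross terms $\pm 2\langle x, y\rangle$ cancel, leaving $2\|x\|^2 + 2\|y\|^2$. A random spot check in $\mathbb{R}^3$ (a sketch, not the proof):

```python
import random

def norm_sq(v):
    return sum(t * t for t in v)

rng = random.Random(1)
for _ in range(100):
    x = [rng.uniform(-5, 5) for _ in range(3)]
    y = [rng.uniform(-5, 5) for _ in range(3)]
    # ||x + y||^2 + ||x - y||^2 == 2 ||x||^2 + 2 ||y||^2
    lhs = norm_sq([a + b for a, b in zip(x, y)]) + norm_sq([a - b for a, b in zip(x, y)])
    rhs = 2 * norm_sq(x) + 2 * norm_sq(y)
    assert abs(lhs - rhs) < 1e-9
```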
Two manifolds $X$ and $Y$ with two bijective continuous functions from $X$ to $Y$ and $Y$ to $X$, but not homeomorphic
To understand this answer you should know how one can obtain a torus or a cylinder via a projection from the unit square. To do this, one simply identifies opposite sides of the square: 2 for the cylinder, and 4 for the torus. On Wikipedia there is a nice gif which illustrates this. Now for the answer. Denote by $R$ the half open unit rectangle, by $C$ the corresponding half open cylinder, and by $T$ the torus. One has projections $$ \pi_{R, C} : R \to C, \quad \pi_{C, T} : C \to T, \quad \pi_{R, T} = \pi_{C,T} \circ \pi_{R, C} : R \to T $$ The spaces we need are the following $$ X = \amalg_{\mathbb{N}}R ~\cup~ C ~\cup~ \amalg_{\mathbb{N}} T\\ Y= \amalg_{\mathbb{N}}R ~\cup R ~\cup~ \amalg_{\mathbb{N}} T $$ The maps are the following: From $X$ to $Y$, you project to the right, i.e. $C$ maps to $T$, and then you continue, by mapping the remaining $T$ components to themselves, and the $R$ to themselves as well: $$ X = \amalg_{\mathbb{N}}R ~\cup~ C ~\cup~ \amalg_{\mathbb{N}} T\\ ~~\downarrow~~~~ \searrow \quad \searrow ~~~\downarrow\\ Y= \amalg_{\mathbb{N}}R ~\cup R ~\cup~ \amalg_{\mathbb{N}} T $$ From $Y$ to $X$ you just project upward, i.e. the middle $R$ projects to the single $C$, and the rest is just identities. $$ X = \amalg_{\mathbb{N}}R ~\cup~ C ~\cup~ \amalg_{\mathbb{N}} T\\ ~~\uparrow~~~~~~ \uparrow ~~~~~~~~~\uparrow\\ Y= \amalg_{\mathbb{N}}R ~\cup R ~\cup~ \amalg_{\mathbb{N}} T $$ Since $X$ and $Y$ are not homeomorphic (left as an exercise), this would be a counterexample.
Let $G$ be a Lie group. Show that there is a diffeomorphism $TG \cong G \times T_e G$.
I assume you mean to prove that $T_e G\times G$ and $TG$ are isomorphic as bundles over $G$. As Jason DeVito points out you have the right idea--you just use the translation maps to slide the local triviality at $e$ around the entire group so that it is globally trivial. So I'm guessing you are having difficulty just getting the details nailed down. Define $\Phi:G\times T_e G\to TG$ by $\Phi(g,X)=(L_g)_\ast(X)$. Clearly $\Phi$ respects fibers and since $(L_g)_\ast$ is a linear isomorphism (since $L_g$ is a diffeomorphism) it follows that $\Phi$ is bijective. Let us now prove that $\Phi$ is smooth. We shall see that everything shall be almost tautological once we choose charts correctly. Of course, the choosing of the charts is going to be slightly messy, but not too bad. Choose any $(g,X)\in G\times T_e G$. Choose then a centered chart $(U,\varphi)$ of $G$ at $e$ and note that this naturally gives us a chart $(V,\psi)$ at $g$ with $V=L_g(U)$ and $\psi=\varphi\circ L_{g^{-1}}$. Now, the combination of these charts gives us a chart $(V\times T_e G,\phi)$ at $(g,X)$ defined by $$\phi\left(h,\sum_{i=1}^{n}\alpha_i \left.\frac{\partial}{\partial x_i}\right|_e\right)=(\psi(h),\alpha_1,\cdots,\alpha_n)$$ Where, as usual, $\displaystyle \left.\frac{\partial}{\partial x_i}\right|_e$ is the pushforward of $\displaystyle \left.\frac{\partial}{\partial x_i}\right|_0\in T_0\mathbb{R}^n$ through $\varphi^{-1}$. Recall that the choice of chart $(V,\psi)$ at $g$ gives us a chart $(\pi^{-1}(V),\tau)$ (where $\pi:TG\to G$ is the canonical projection) given on the fiber $T_h G$ by $$\sum_{i=1}^{n}\alpha_i\left.\frac{\partial}{\partial x_i}\right|_h\ \longmapsto\ (\psi(h),\alpha_1,\cdots,\alpha_n)$$ where, as above, $\displaystyle \left.\frac{\partial}{\partial x_i}\right|_h$ is the pushforward of $\displaystyle \left.\frac{\partial}{\partial x_i}\right|_{\psi(h)}\in T_{\psi(h)}\mathbb{R}^n$. So, now we want to show that $\tau\circ\Phi\circ\phi^{-1}$ is smooth.
That said, $$\begin{aligned}\tau\left(\Phi\left(\phi^{-1}\left(\psi(h),\alpha_1,\cdots,\alpha_n\right)\right)\right) &= \tau\left(\Phi\left(h,\sum_{i=1}^{n}\alpha_i\left.\frac{\partial}{\partial x_i}\right|_e\right)\right)\\ &=\tau\left((L_h)_\ast\left(\sum_{i=1}^{n}\alpha_i\left.\frac{\partial}{\partial x_i}\right|_e\right)\right)\\ &= \tau\left(\sum_{i=1}^{n}\alpha_i (L_h)_\ast\left(\left.\frac{\partial}{\partial x_i}\right|_e\right)\right)\\ &= \tau\left(\sum_{i=1}^{n}\alpha_i \left.\frac{\partial}{\partial x_i}\right|_h\right)\\ &= (\psi(h),\alpha_1,\cdots,\alpha_n)\end{aligned}$$ where we have used the fact that, as defined, $\displaystyle (L_h)_\ast\left(\left.\frac{\partial}{\partial x_i}\right|_e\right)=\left.\frac{\partial}{\partial x_i}\right|_h$ (check this!). Thus, it's pretty obvious that $\tau\circ\Phi\circ\phi^{-1}$ is smooth--it's the identity map! Since $\Phi$ is a bijective bundle map it follows from basic topology that $\Phi$ is a bundle isomorphism. Of course, if you are privy to a little more bundle theory than the definitions there is a simpler way to do all of the above. Namely, recall that a global frame for a vector bundle $\pi:B\to M^n$ is a set of global sections $\sigma_i:M\to B$, where $i=1,\cdots,n$, such that for each $x\in M$ we have that $\{\sigma_i(x)\}$ is a basis for $B_x$. A common fact then is that a vector bundle is trivial if and only if it admits a global frame (see here if this is unknown to you). So, the above idea of sliding the local trivializations around the entirety of $G$ becomes even more concrete since we can use local frames (local trivializations) to construct global ones. Namely, let us choose a chart $(U,\varphi)$ at $e\in G$ and note that we have a natural map $\sigma:U\to TG$ defined by taking a point $h\in U$ to the basis $\displaystyle \left\{\frac{\partial}{\partial x_i}\mid_h\right\}$ (as defined above).
Let us then extend $\sigma$ to a global section $\sigma:G\to TG$ by defining $\sigma$ on $L_g(U)$ for every $g\in G$ to send $x$ to $(L_g)_\ast(\sigma(g^{-1}x))$. It's not hard to see that this definition is well-defined, and moreover that $\sigma$ is then a global frame for $TG\to G$. Thus, $TG$ is trivial. You'll note that this is practically the same proof as above, just a lot of work hidden in the theorem "trivial bundles are exactly those with global frames". One last thing I can't help to point out. This theorem (which really was not too hard to prove!) says that every Lie group is parallelizable (has a trivial tangent bundle). This precludes a lot of spaces from possessing a Lie group structure at all! For example, it is the statement of the famous Hairy Ball Theorem that every even dimensional sphere is not parallelizable and thus can't (for any attempted definition of an operation!) be given the structure of a Lie Group. That is pretty cool. By the way, the only spheres which are Lie groups are $S^0$, $S^1$, and $S^3$.
Solution verification: Proving that a local homeomorphism maps open sets to open sets.
You cannot simply take $U$ to be any open nbhd of $x$: all we know is that $x$ has some open nbhd $U$ such that $f\upharpoonright U$ is a homeomorphism of $U$ onto an open subset of $Y$. Later on you say that $(f\upharpoonright U)[B^*\cap U]=f[B^*]\cap f[U]$; this needs more justification. You know that $f\upharpoonright U$ is injective, so you know that $$(f\upharpoonright U)[B^*\cap U]=(f\upharpoonright U)[B^*]\cap (f\upharpoonright U)[U]\,,$$ but you don’t know that $f[B^*]=(f\upharpoonright U)[B^*]$: there might be some $x\in B^*$ and $y\in X\setminus U$ such that $f(y)=f(x)$. And in any case, the fact that $f[U]$ is a subspace of $Y$ is not enough to ensure that $f[B^*]\cap f[U]$ is open in $Y$: you need the fact that $f[U]$ is an open subset of $Y$. You can simplify things and avoid these difficulties as follows. For each $x\in X$ let $U_x$ be an open nbhd of $x$ such that $f\upharpoonright U_x$ is a homeomorphism of $U_x$ onto an open subset of $Y$. Let $\mathscr{B}_0=\{B\in\mathscr{B}:B\subseteq U_x\text{ for some }x\in X\}$. Show that $\mathscr{B}_0$ is still a base for $X$. Let $B\in\mathscr{B}_0$; there is an $x\in X$ such that $B\subseteq U_x$. Now show that $f[B]=(f\upharpoonright U_x)[B]$ is open in $Y$.
There are infinitely many powers of $2$ that are at least $10^6$ away from any square and cube
This is not a full proof, but I would start like this. A power of 2 that is neither square nor cube must be $2^{6n+1}$ or $2^{6n-1}$, let's take $x=2^{6n+1}$. Now let us find the integers nearest to the square root, which are $\lfloor2^{3n+\frac{1}{2}}\rfloor$ and $\lfloor2^{3n+\frac{1}{2}}\rfloor+1$. First you have to prove that for infinitely many $n$, the squares of these numbers are far enough from $x$. In other words, for infinitely many $n$, $2^{3n}\sqrt{2}$ are far enough from integers -- it is enough to prove that they are contained in $\Bbb Z+[\frac{1}{8}, \frac{7}{8}]$, for example. Here you need to use the irrationality of $\sqrt{2}$: in particular the binary representation of $\sqrt{2}$ contains infinitely many times the string "10". So, for infinitely many $n$, $2^{3n}\sqrt{2}$ looks, in binary, like $\mathrm{something}.*10\ldots$ where $*$ stands for a (possibly empty) string of at most 2 digits. Further, you need to prove that for infinitely many $n$, both $2^{3n}\sqrt{2}$ and $2^{2n} \sqrt[3]{2}$ are far enough from integers $\ldots$