$x+\ln(x)=0$, what is $x$?
$$x+\ln x=0$$ $$e^{x+\ln x}=e^0=1$$ $$xe^x=1$$ This can be solved with the Lambert $W$ function: $$x=W(1)$$ This constant has a special name: the omega constant. You can find a numerical approximation via Newton's method. Let $x_1=0.5$ and $$x_{n+1}=x_{n}-\frac{x_n+\ln x_n}{1+\frac{1}{x_n}}$$ As $$n \to \infty$$ you get closer to the approximate value $$x=.56714329...$$
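The iteration above is easy to run in code. A minimal sketch in Python (the starting point $x_1=0.5$ and the tolerance are arbitrary choices):

```python
import math

def omega_newton(x=0.5, tol=1e-12, max_iter=100):
    """Newton's method for f(x) = x + ln(x), with f'(x) = 1 + 1/x."""
    for _ in range(max_iter):
        step = (x + math.log(x)) / (1 + 1 / x)
        x -= step
        if abs(step) < tol:
            break
    return x

x = omega_newton()
print(x)  # roughly 0.567143, the omega constant W(1)
```

The convergence is quadratic, so only a handful of iterations are needed.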
What does $5\mathbb{Z}_{25}$ mean? (Notation help)
Both ways of thinking about $5\mathbb Z_{25}$ are essentially the same. Modulo $25$, $$5\times0\equiv5\times5\equiv5\times10\equiv5\times15\equiv5\times20\equiv0,$$ $$5\times1\equiv5\times6\equiv5\times11\equiv5\times16\equiv5\times21\equiv5,$$ $$5\times2\equiv5\times7\equiv5\times12\equiv5\times17\equiv5\times22\equiv10, $$ $$5\times3\equiv5\times8\equiv5\times13\equiv5\times18\equiv5\times23\equiv15,$$ and $$5\times4\equiv5\times9\equiv5\times14\equiv5\times19\equiv5\times24\equiv20,$$ so the elements of $\mathbb Z_{25}$ multiplied by $5$ are represented by the residue classes of the multiples of $5$; there are $5$ elements. $5\mathbb Z_{25}$ can also be thought of as the image of $5\mathbb Z$ under the homomorphism from $\mathbb Z$ to $\mathbb Z_{25}.$
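The five residue classes listed above can also be checked by brute force; a one-line sketch in Python:

```python
# Multiply every element of Z_25 by 5 and reduce mod 25.
image = sorted({(5 * k) % 25 for k in range(25)})
print(image)  # [0, 5, 10, 15, 20]
```

The image has exactly $5$ elements, the residue classes of the multiples of $5$, as claimed.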
Determine where a function is holomorphic
The standard calculus formulas for derivatives of $f+g$, $f-g$, $fg$, $f/g$ are valid everywhere $f$ and $g$ are differentiable, with the exception that for $f/g$ you need to avoid points where $g=0$. At such points $f/g$ is undefined, and thus of course not differentiable.
How should I solve this integral with changing parameters?
Start by making a drawing of your domain. You can see that the $v$-axis lies along the diagonal in the first quadrant, and the $u$-axis along the diagonal in the second quadrant. You can also see that the line between $(0,2)$ and $(2,0)$ is parallel to the $u$-axis and intersects the $v$-axis at $v=1$. So $v$ varies between $0$ and $1$, and $u$ varies between $-v$ and $v$.
Compute the determinant of the following matrix
The determinant of a $3\times 3$ matrix $M$: $$M= \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$ is given by: $$\det(M) = a_{11} \Bigg|\begin{matrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{matrix}\Bigg| - a_{12}\Bigg| \begin{matrix} a_{21} & a_{23} \\ a_{31} & a_{33}\end{matrix} \Bigg| + a_{13} \Bigg| \begin{matrix} a_{21} & a_{22} \\ a_{31} & a_{32}\end{matrix} \Bigg|$$ Applying this to your given matrix $A$ should yield the result: $$\det(A) = -b^3 + 3bc - c^3 - 1$$ If you need help finding where your solution went wrong, please edit your question to include your work. Here's a cool widget to help you check your $3\times 3$ determinant results!
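The cofactor expansion translates directly into code. A sketch in Python for a generic $3\times 3$ matrix (the specific matrix $A$ from the question is not reproduced here, so a numeric example is used instead):

```python
def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def det3(m):
    """Cofactor expansion along the first row, as in the formula above."""
    a11, a12, a13 = m[0]
    return (a11 * det2([[m[1][1], m[1][2]], [m[2][1], m[2][2]]])
            - a12 * det2([[m[1][0], m[1][2]], [m[2][0], m[2][2]]])
            + a13 * det2([[m[1][0], m[1][1]], [m[2][0], m[2][1]]]))

print(det3([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -3
```

This is handy for checking your hand computation on a numeric instance of your matrix.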
Is $\mathbb{Z \times Q }$ with lexicographical order isomorphic to $\mathbb{Z \times Z }$ with lexicographical order?
Yes, to prove two totally ordered sets are not isomorphic it is enough to show that one is dense and the other is not. Depending how much detail is required of your proof, though, you may need to actually prove that $\mathbb{Z}\times\mathbb{Q}$ is dense and that $\mathbb{Z}\times\mathbb{Z}$ is not dense, rather than just stating it.
$a \times \mathbf 1 \cong a$ in categories admitting products and having a terminal object $\mathbf 1$
This is a small nitpick, but since there's a unique choice of $f_1$, $f_2$ which makes the diagram commute, I don't think it's wise to say "$f_1$ and $f_2$ are some morphisms we don't have any prior information about, they'll be determined later." Better to just tell the reader $f_1 = \mathbf{1}_a \circ \pi_a$ and $f_2 = \langle{1_a, \mathbf{1}_a}\rangle \circ \pi_a$, since we do have enough prior information to conclude this if we want the diagram to commute! That's just my opinion on proof style, though, and you should feel free to ignore it. This leads into a discussion of the only small error in your proof. You say "taking $f_2 = 1_{a \times \mathbf{1}}$ makes the diagram commute" but you can only conclude this if you already know that $1_{a \times \mathbf{1}} = \langle 1_a, \mathbf{1}_a \rangle \circ \pi_a$! The reason for this is that of course $1_{a \times \mathbf{1}} = \langle 1_a, \mathbf{1}_a \rangle \circ \pi_a$ must hold if the diagram does commute, and the composition $\langle 1_a, \mathbf{1}_a \rangle \circ \pi_a$ cannot be written as a different composition of maps in the diagram (in particular, $\pi_a$ is the unique incoming arrow to $a$ and $\langle{1_a, \mathbf{1}_a}\rangle$ is the unique incoming arrow to $a \times \mathbf{1}$). You're on the right track here, though. You should make explicit the proof that $1_{a \times \mathbf{1}} = \langle 1_a, \mathbf{1}_a \rangle \circ \pi_a$ by using the fact that $1_{a \times \mathbf{1}} = \langle{\pi_a, \pi_{\mathbf{1}}}\rangle = \langle{\pi_a, f_1}\rangle$. Hope this helps!
Good examples of Ansätze
I am not a great fan of the notion of ansatz, simply because it seems magical. One can but imagine that most ansätze are the result of whole series of unsuccessful tries, and that we only hear of the successful ones. So whenever I read «let us try the ansatz such and such» I understand «ok, I spent a couple of weeks trying stuff that did not go anywhere but, who knows how, finally managed to make it work, so let me just tell you the short story and, by the by, make myself look like I came up with this stuff out of the blue».
How do I show that $n$ is prime if and only if $\phi(n) = n − 1$?
If $n$ is prime then every positive integer less than $n$ is coprime to $n$, and there are $n-1$ such numbers, so $\phi(n)=n-1$. Conversely, if $\phi(n)=n-1$, then all $n-1$ positive integers less than $n$ are coprime to $n$; in particular no $d$ with $1<d<n$ divides $n$, which means $n$ is prime.
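The equivalence can be sanity-checked by brute force, using a naive $\phi$ that counts coprime residues:

```python
from math import gcd

def phi(n):
    # Naive totient: count integers in 1..n-1 coprime to n.
    return sum(1 for k in range(1, n) if gcd(k, n) == 1)

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# phi(n) == n - 1 exactly at the primes.
assert all((phi(n) == n - 1) == is_prime(n) for n in range(2, 200))
print("checked n = 2..199")
```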
Which branch of $f(x)$ use to find the limit?
If the limit exists at $a$, it means that for every sequence $\{x_n\}$ converging to $a$, the sequence $\{f(x_n)\}$ converges to the limit. Now if we take $a=1$ and $\{x_n\}$ to be a sequence of rational numbers converging to $1$, then $\{f(x_n)\}$ converges to $0.5$, and the same thing happens if we take $\{x_n\}$ to be a sequence of irrational numbers converging to $1$. So we can naively conclude that the limit probably exists (we haven't yet checked sequences containing both rationals and irrationals). If we take $a=-1$ and $\{x_n\}$ a sequence of rational numbers converging to $-1$, then $\{f(x_n)\}$ converges to $0.5$ again, but if we take $\{x_n\}$ to be an irrational sequence converging to $-1$, then $\{f(x_n)\}$ converges to $-0.5$. So we can definitely say that the limit doesn't exist at $a=-1$. The same line of reasoning shows that if the limit exists at some $a$, then $a$ must satisfy $$1+a^2=2a\Rightarrow(a-1)^2=0\Rightarrow a=1.$$
Fast multipole method: help on tutorial
On your first question: That sentence is false; the text itself goes on to state in the very next paragraph under which circumstances the method can be used for singular potentials. The case in which it can't be used is with targets and sources spatially intermingled and a singular potential, because in this case there's no single expansion that will work for all source-target pairs. On your second question: The aim is not to minimize the computational effort. (If it were, we'd choose $p=0$ and have $0$ computational effort and an estimate of $0$ for the total interaction energy). The aim is to minimize the computational effort given a target accuracy. Depending on circumstances, it may be that using three expansion centres is more efficient in boosting the accuracy than increasing $p$ in the expansion. This will be the case in particular if the sources and/or the targets are located in clusters.
Show that two functions have the same minimizer
The set of minimisers is almost the same; basically they can differ by a set of measure $0$ when $v_1$ and $v_2$ are not collinear with different sign. Assume first that $v_1$ and $v_2$ are not collinear with different sign. In essence, we assume that $v_1 \ne -c v_2$ for any $c > 0$. We observe that $g(x)$ and $f(x)$ have image in $[0,\infty)$, so that if there exists $x$ such that $f(x) = 0$, then the minimising set for $f$ is its zero set, and similarly for $g$. Set $$\hat x = \left(\frac{v_1}{\|v_1\|} + \frac{v_2}{\|v_2\|}\right)$$ and $$x^\star = \frac{\hat x}{\|\hat x\|}$$ It is not too hard to see that $f(x^\star) = g(x^\star) = 0$ and that $\|x^\star\| = 1$, the latter condition holding because $v_1 \ne -c v_2$. Therefore, since both $f$ and $g$ are nonnegative functions that attain the value $0$, the minimising set of each is precisely its zero set. Now, it is clear that $g(x) = 0$ implies $f(x) = 0$. However, there are always vectors such that $f(x) = 0$ while $g(x) \ne 0$. To have $g(x) = 0$, we need two conditions: $\langle x,v_1 \rangle < 0$ and $\langle x,v_2\rangle < 0$, whereas to have $f(x) = 0$ we need $\langle x,v_1\rangle \le 0$ and $\langle x, v_2 \rangle \le 0$. We therefore have that $$ \{x : f(x) = 0 \text{ and } g(x) \ne 0\} = (\operatorname{span}(v_1)^\perp \cap \{x : \langle x,v_2\rangle \le 0\}) \cup (\operatorname{span}(v_2)^\perp \cap \{x : \langle x,v_1\rangle \le 0\}). $$ This is the union of two intersections of a half-space with a codimension 1 subspace, so it has codimension 1 in the unit sphere. If $v_1 = -c v_2$ for some $c > 0$, the picture is completely different. Then, $$ f(x) = \begin{cases} 0 & \text{if } x \in \operatorname{span}(v_1)^\perp = \operatorname{span}(v_2)^\perp \\ > 0 & \text{otherwise }, \end{cases} $$ whereas $$ g(x) = \begin{cases} 2 & \text{if } x \in \operatorname{span}(v_1)^\perp = \operatorname{span}(v_2)^\perp \\ 1 & \text{otherwise }. \end{cases} $$ This means that in this situation their minimising sets are completely disjoint.
How do I evaluate $k'$ from $\int_{k'}^{\infty} \sqrt{n/2\pi} \, \, e^{(-n/2 \, (\bar x-\mu_0)^2)} \, d\bar x = Z_\alpha$?
Take a look here: Proving $\int_{0}^{\infty} \mathrm{e}^{-x^2} dx = \dfrac{\sqrt \pi}{2}$ You can use the same technique as Ross Millikan. To be more specific, call $$I=\sqrt{\frac{n}{2\pi}}\int_{k'}^{\infty}e^{\frac{-n(x-\mu_{0})^{2}}{2}}dx$$ so that $I=Z_{\alpha}$ and hence $I^{2}=Z_{\alpha}^{2}$. But $$I^{2}=\frac{n}{2\pi}\int_{k'}^{\infty}\int_{k'}^{\infty}e^{-\bigl[\sqrt\frac{n}{2}(x-\mu_{0})\bigr]^{2}-\bigl[\sqrt\frac{n}{2}(y-\mu_{0})\bigr]^{2}}\,dx\,dy$$ Define $u=\sqrt\frac{n}{2}(x-\mu_{0})$ and $v=\sqrt\frac{n}{2}(y-\mu_{0})$; then $dx\,dy=\frac{2}{n}\,du\,dv$ and $$I^{2}=\frac{1}{\pi}\int_{\sqrt\frac{n}{2}(k'-\mu_{0})}^{\infty}\int_{\sqrt\frac{n}{2}(k'-\mu_{0})}^{\infty}e^{-u^2-v^2}\,du\,dv$$ Now use polar coordinates and conclude.
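As a numerical sanity check: the one-dimensional tail integral $\sqrt{n/2\pi}\int_{k'}^{\infty} e^{-n(\bar x-\mu_0)^2/2}\,d\bar x$ has the closed form $\tfrac12\operatorname{erfc}\bigl(\sqrt{n/2}\,(k'-\mu_0)\bigr)$, which can be compared against a simple quadrature (the values of $n$, $\mu_0$ and $k'$ below are arbitrary test choices):

```python
import math

def tail(n, mu0, kp, steps=200000, cutoff=10.0):
    """Midpoint rule for sqrt(n/(2*pi)) * integral_{k'}^{inf} exp(-n(x-mu0)^2/2) dx."""
    hi = mu0 + cutoff / math.sqrt(n)  # integrand is negligible beyond here
    h = (hi - kp) / steps
    s = sum(math.exp(-n * (kp + (i + 0.5) * h - mu0) ** 2 / 2) for i in range(steps))
    return math.sqrt(n / (2 * math.pi)) * s * h

n, mu0, kp = 4.0, 1.0, 1.5
approx = tail(n, mu0, kp)
exact = 0.5 * math.erfc(math.sqrt(n / 2) * (kp - mu0))
print(approx, exact)
```

In practice one inverts this relation to get $k'=\mu_0+z_\alpha/\sqrt n$ from a table of the normal distribution.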
Does $\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}a_{mn}^3=\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}a_{mn}^3$ occur here?
The third power is just a distraction – if you have a doubly indexed sequence $b_{mn}$ for which the sums don't commute, you can obtain a counterexample for your equation by taking third roots, $a_{mn}=\sqrt[3]{b_{mn}}$. Can you find such a sequence $b_{mn}$? Edit in response to the comment: Examples abound. Take any convergent series $\sum_nc_n$ and any divergent series $\sum_nd_n$, and set $$ b_{mn}=\begin{cases}d_m&n=1\;,\\-d_m&n=2\;,\\2^{-m}c_{n-2}&n\gt2\;.\end{cases} $$ Then $\sum_n\sum_m b_{mn}$ doesn't exist (since $\sum_mb_{m1}$ doesn't exist), whereas $\sum_m\sum_nb_{mn}=\sum_nc_n$.
Simple functional equation: $\frac 1 2 [\alpha(x - 1) + \alpha(x + 1)]$, $\alpha(0) = 1$, $\alpha(m) = 0$
Hint 1: $1=\frac{1}{2}+\frac{1}{2}$ Hint 2: $a=1 \cdot a = (\frac{1}{2}+\frac{1}{2}) \cdot a$
How to solve parabolic equation via implicit Euler in 2 dimensions?
As user_of_math suggested, you should be consistent with your $k$ index. Let $k$ denote the index for time, $i$ for the $x$-direction in space and $j$ for the $y$-direction in space. You also used the wrong discretization: since your scheme is implicit, the time index should always be $k+1$ except in the time derivative. \begin{align} \frac{u^{k+1}_{i,j}-u^{k}_{i,j}}{\Delta t} = \frac{u^{k+1}_{i+1,j}-2u^{k+1}_{i,j}+u^{k+1}_{i-1,j}}{\Delta x^2} + \frac{u^{k+1}_{i,j+1}-2u^{k+1}_{i,j}+u^{k+1}_{i,j-1}}{\Delta y^2} \end{align} or, when you want to solve for the next time step, \begin{align} u^{k+1}_{i,j} = u^{k}_{i,j}+\Delta t\Bigl(\frac{u^{k+1}_{i+1,j}-2u^{k+1}_{i,j}+u^{k+1}_{i-1,j}}{\Delta x^2} + \frac{u^{k+1}_{i,j+1}-2u^{k+1}_{i,j}+u^{k+1}_{i,j-1}}{\Delta y^2}\Bigr) \end{align} and since $\Delta x =\Delta y$ \begin{align} &u^{k+1}_{i,j} = u^{k}_{i,j}+\frac{\Delta t}{\Delta x^2}\Bigl({u^{k+1}_{i+1,j}+u^{k+1}_{i-1,j}} -4u^{k+1}_{i,j} + {u^{k+1}_{i,j+1}+u^{k+1}_{i,j-1}}\Bigr) \\ \Leftrightarrow& u^{k+1}_{i,j}-\frac{\Delta t}{\Delta x^2}\Bigl({u^{k+1}_{i+1,j}+u^{k+1}_{i-1,j}} -4u^{k+1}_{i,j} + {u^{k+1}_{i,j+1}+u^{k+1}_{i,j-1}}\Bigr) =u^k_{i,j} \end{align} You get this for each pair $(i,j)$, giving you $N_x \times N_y$ equations. Assume $N=N_x=N_y$. Now you need to solve a linear system in each time step. You have to come up with some ordering for your matrix and right-hand side. The standard (lexicographic) ordering gives you a matrix $A \in \mathbb{R}^{N^2\times N^2}$ which looks like this: \begin{align} A = \begin{pmatrix} \tilde A &-\tilde I &0 & ...&...&0 \\ -\tilde I & \tilde A&-\tilde I &0 & .. & 0 \\ 0 &-\tilde I & \tilde A &-\tilde I & 0 &..\\ &&& \ddots \\ 0 &...&...&0&-\tilde I & \tilde A \end{pmatrix} \end{align} where $\tilde I$ is the $N\times N$ identity matrix scaled by $c:=\Delta t / \Delta x^2$, and $\tilde A$ is the $N\times N$ matrix \begin{align} \tilde A = \begin{pmatrix} 1+4c&-c &0 & ...&...&0 \\ -c & 1+4c&-c &0 & .. & 0 \\ 0 &-c & 1+4c &-c & 0 &..\\ &&& \ddots \\ 0 &...&...&0&-c & 1+4c \end{pmatrix} \end{align} Now you solve \begin{align} Au^{k+1}=u^k \end{align} during each time step. Edit: Of course you have to take care of the boundary values during the computation; I did not consider this here.
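For concreteness, here is a small pure-Python sketch of one implicit Euler step on an $N\times N$ interior grid with homogeneous Dirichlet boundary values (the grid size and $c$ are arbitrary test choices; a real implementation would use a sparse solver):

```python
def build_A(N, c):
    """Assemble the (N*N) x (N*N) implicit Euler matrix for an N x N interior
    grid with zero Dirichlet boundary values, lexicographic ordering."""
    n = N * N
    A = [[0.0] * n for _ in range(n)]
    for i in range(N):
        for j in range(N):
            r = i * N + j
            A[r][r] = 1 + 4 * c
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < N and 0 <= jj < N:
                    A[r][ii * N + jj] = -c
    return A

def solve(A, b):
    """Gaussian elimination with partial pivoting (fine for tiny systems)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for j in range(k, n + 1):
                M[r][j] -= f * M[k][j]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][j] * x[j] for j in range(k + 1, n))) / M[k][k]
    return x

N, c = 4, 0.5
A = build_A(N, c)
u0 = [0.0] * (N * N)
u0[(N // 2) * N + N // 2] = 1.0   # one hot grid point
u1 = solve(A, u0)                 # one implicit Euler step: A u^{k+1} = u^k
```

Note that, unlike explicit Euler, this step is unconditionally stable: the max norm of the solution does not grow, regardless of how large $c$ is.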
Is E(XY) = E(YX) for any two continuous random variables X and Y?
$X$ and $Y$ are random variables. This implies that both are functions with codomain $\mathbb R$. If moreover the functions have the same domain $\Omega$, then it makes sense to speak of functions like $XY$ and $YX$, prescribed by $\omega\mapsto X(\omega)Y(\omega)$ and $\omega\mapsto Y(\omega)X(\omega)$ respectively. Now observe that in that situation the functions $XY$ and $YX$ coincide, because for every $\omega\in\Omega$ we have $$X(\omega)Y(\omega)=Y(\omega)X(\omega)$$ purely by the commutativity of multiplication on $\mathbb R$.
Compute $\int_{\Gamma}\omega$ where $\omega=(y-2z)dx+(x-z)dy+(2x-y)dz$
$\Gamma$ is the intersection between a sphere centred on the origin and a plane passing through the origin, thus a circle centred on the origin. The normal vector of this circle is $(1,-1,1)$, so find two unit vectors perpendicular to each other and to $(1,-1,1)$. One possible choice is $\frac{\sqrt2}2(1,1,0)$ and $\sqrt{\frac23}\left(-\frac12,\frac12,1\right)$. The two new vectors are an orthonormal basis for the plane the circle lies in, so we can use the ordinary parametrisation of the circle, just with different basis vectors: $$\sqrt{\frac23}\left(-\frac12,\frac12,1\right)r\cos t+\frac{\sqrt2}2\left(1,1,0\right)r\sin t$$ $$=r\left(-\frac{\sqrt2}{2\sqrt3}\cos t+\frac{\sqrt2}2\sin t,\frac{\sqrt2}{2\sqrt3}\cos t+\frac{\sqrt2}2\sin t,\sqrt{\frac23}\cos t\right)$$ Of course, $t$ runs from 0 to $2\pi$.
Particular holomorphic mappings of the Riemann sphere to itself
If $h$ is analytic and non-zero at $e^{it}$ then (since $h(z)=\sum_k c_k (z-e^{it})^k$) so is $g(z)=1/\overline{h(\overline{z}^{\ -1})}$. If $h(e^{it}) = e^{i\theta}$ then $g(e^{it}) = e^{i\theta}$. Thus, if $h$ is meromorphic $\Bbb{C\to C}$ and $|h(z)|=1$ on $|z|=1$, then $g-h$ is analytic around $e^{it}$ and vanishes on $e^{ix}$, $x-t\in (-\epsilon,\epsilon)$, i.e. $g=h$. Next, since $h$ is analytic $\Bbb{P^1(C)\to P^1(C)}$, it is a rational function $ h(z)=C\frac{\prod_j(z-a_j)}{\prod_i(z-b_i)}$; that $h$ sends each hemisphere to itself gives $|h(z)|=1$ on $|z|=1$, so that $h=g$ and hence $$h(z)=e^{i\theta}\prod_{j=1}^J \frac{z-a_j}{1-\overline{a_j}z}$$ Finally, each $|a_j|<1$, since otherwise $h$ would have a zero on the $\infty$-hemisphere.
Calculus banner word problem
Try to express $w = \dfrac{24}{l}$ in terms of $l = \sqrt{18} = 3\sqrt 2$ to get the precise value of $w$: $w = \dfrac{24}{l} = \dfrac{24}{3\sqrt 2} = \dfrac 8{\sqrt 2}$. And note that indeed, $A = w\cdot l = \dfrac{24}{3\sqrt 2}\cdot 3{\sqrt 2} = 24$. Using a calculator and rounding loses you a bit of precision. Lastly, your calculation gives $l = \sqrt {18} = 3\sqrt 2$ as the only possible solution to $A'(l) = 0$, since the alternative is negative and, as $l$ represents a length, $l > 0$. But even though the only place an extremum can occur in this case is at your solution $l$, and we can guess that since the problem asks for a maximization your solution must be a maximum, you really do need to show in your work that $A(\sqrt {18})$ is indeed the maximum value: show that $A' > 0 $ on $0 < l < \sqrt{18}$, and $A' < 0$ for $l > \sqrt{18}$.
Union and intersection of two different relations
Reflexivity holds for both $\mathcal{R} \cup \mathcal{S}$ and $\mathcal{R} \cap \mathcal{S}$, obviously. For symmetry, I think it is inconclusive. For example take $A = \{1,2,3\}$ and $\mathcal{R} = (A, \{(1,1),(2,2),(3,3),(1,2),(2,1)\})$ and $\mathcal{S} = (A, \{(1,1),(2,2),(3,3),(1,2)\})$. Now $\mathcal{R} \cup \mathcal{S} = \mathcal{R}$ (equivalence relation) and $\mathcal{R} \cap \mathcal{S} = \mathcal{S}$ (partial ordering). But if we take $\mathcal{R} = (A, \{(1,1),(2,2),(3,3),(1,2),(2,1)\})$ and $\mathcal{S} = (A, \{(1,1),(2,2),(3,3),(1,3)\})$, we have $\mathcal{R} \cup \mathcal{S} = (A, \{(1,1),(2,2),(3,3),(1,2),(2,1),(1,3)\})$ (neither equivalence relation nor partial ordering) and $\mathcal{R} \cap \mathcal{S} = (A, \{(1,1),(2,2),(3,3)\})$ (equivalence relation). For transitivity, it is inconclusive for $\mathcal{R} \cup \mathcal{S}$ by the example given in symmetry. It holds for $\mathcal{R} \cap \mathcal{S}$ because if you define a relation $\mathcal{T} = \mathcal{R} \cap \mathcal{S}$, this is equivalent to the following statement: $x \mathcal{T}y$ if and only if $x \mathcal{R} y$ and $x \mathcal{S} y$. Now, we know that transitivity holds for both $\mathcal{R}$ and $\mathcal{S}$. So for all $x,y,z$ we have $$(x\mathcal{R}y \land y\mathcal{R}z \land x\mathcal{R}z) \land (x\mathcal{S}y \land y\mathcal{S}z \land x\mathcal{S}z) = (x\mathcal{R}y \land x\mathcal{S}y) \land (y\mathcal{R}z \land y\mathcal{S}z) \land (x\mathcal{R}z \land x\mathcal{S}z)$$ (In this notation, $x\mathcal{R}y$ means $x$ is related to $y$) which is equivalent to $$x\mathcal{T}y \land y\mathcal{T}z \land x\mathcal{T}z$$ So $\mathcal{T}$ is transitive.
Stuck on tough integral
After your first change of variable you get $$ac \int \dfrac{y}{\frac{cd}{b}-y} \dfrac{1}{(1-y)}dy$$ Now forgetting about the constants for a moment, $$\int \dfrac{y}{(A-y)(B-y)} dy = \dfrac{1}{A-B}\int \dfrac{B}{B-y} - \dfrac{A}{A-y} dy$$ Then go back and substitute.
Examples of cubic graphs in which every cycle of length divisible by 3 has a chord
I think I found an example, but I would be happy if someone could help me check if it really is one: More examples are still welcome of course.
Given $\sigma,\tau\in\operatorname{Gal}(L/K)$, is there an $\alpha\in L$ such that $\sigma\alpha=\alpha$ and $\tau\alpha\neq\alpha$?
$\newcommand{\Q}{\mathbb{Q}}$$\newcommand{\Span}[1]{\left\langle #1 \right\rangle}$(You assume that neither $\sigma$ nor $\tau$ is the identity, I guess - see the comment by drhab below.) Not necessarily. Let $K = \Q(\omega)$, where $\omega$ is a primitive cube root of $1$. Let $L = K(\sqrt[3]{2})$, the splitting field over $K$ of $x^{3} - 2$. The Galois group is cyclic of order $3$, say $\Span{\sigma}$. Clearly the only elements fixed by $\sigma$ are those of $K$, and the same holds for $\tau = \sigma^{2}$.
why it will be finished when we take lcm of $a$ and $b$ = $da_0b_0?$
This is basically the following:$\DeclareMathOperator{\gcd}{\text{gcd}}$ $\DeclareMathOperator{\lcm}{\text{lcm}}$ Let $a, b~$ be as described in the question. Consider $\gcd(a, b) = d$. Then by the definition of $\gcd$ we have that, $$ d|a ~\text{ and }~ d|b $$ Thus, we have that $a = dk_1$ and $b = dk_2$, where $\gcd(k_1, k_2) = 1$ (otherwise $d$ would not be the *greatest* common divisor). Since $k_1$ and $k_2$ are coprime, from the definition of $\lcm$ we have $$ \lcm(a, b) = d \cdot k_1 \cdot k_2 $$ Hence: $$ a \cdot b = d k_1 \cdot d k_2 = d \cdot (d \cdot k_1 \cdot k_2) = \gcd(a, b) \cdot \lcm(a, b) $$
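The identity $a\cdot b = \gcd(a,b)\cdot\lcm(a,b)$ is easy to sanity-check in code (here $\lcm$ is computed by brute force, so the check is not circular):

```python
from math import gcd

def lcm(a, b):
    """Smallest common multiple by brute force: step through multiples of a."""
    m = a
    while m % b:
        m += a
    return m

for a, b in [(12, 18), (7, 5), (100, 64), (21, 21)]:
    d = gcd(a, b)
    k1, k2 = a // d, b // d
    assert gcd(k1, k2) == 1          # the cofactors are coprime
    assert lcm(a, b) == d * k1 * k2  # lcm(a, b) = d * k1 * k2
    assert a * b == gcd(a, b) * lcm(a, b)
print("identity verified on the sample pairs")
```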
Entire function and odd/even function
Recall the Schwarz reflection principle: if $g$ is any entire function that maps $\mathbb{R}$ to $\mathbb{R}$, then $g(\overline{z}) = \overline{g(z)}$ for all $z \in \mathbb{C}$. For the first case, it follows that $g(z) + g(-z) = 0$ on the imaginary axis. It must then be zero everywhere, because zeros of (nonzero) analytic functions are isolated, and so $g$ is odd. The second case is analogous.
Logarithm calculation result
The base of the logarithm is $2^b$. You want to find an $x$ such that $(2^b)^x = N$, i.e. $2^{bx} = N$. You can rewrite that as $$x = \dfrac{\log_2 N}{b}$$ where $\log_2$ denotes the base-$2$ logarithm.
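In code this is just the change-of-base identity (the values of $N$ and $b$ below are arbitrary examples):

```python
import math

def log_base_2b(N, b):
    """Solve (2**b)**x = N for x, i.e. x = log2(N) / b."""
    return math.log2(N) / b

x = log_base_2b(4096, 3)  # base 2^3 = 8, and 8**4 = 4096
print(x)  # 4.0
```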
Prove or Disprove $\sum_{x=-(n+2c)}^n x+c = 0$
$$\begin{align*} \sum_{x=-(n+2c)}^n(x+c)&=\sum_{x=0}^{2n+2c}(x-n-2c+c) &&\textsf{shift index variable}\\\\ &=\sum_{x=0}^{2n+2c}(x-n-c)&&\textsf{simplify}\\\\ &=\left(\sum_{x=0}^{2n+2c}x\right)-\left(\sum_{x=0}^{2n+2c}(n+c)\right)&&\textsf{separate constant terms}\\\\ &=\frac{(2n+2c)(2n+2c+1)}{2}-(2n+2c+1)(n+c)&&\textsf{apply well-known formulas}\\\\ &=0&&\textsf{simplify} \end{align*}$$ The well-known formulas in particular would be $$\sum_{x=0}^{r}x=\frac{r(r+1)}{2}\qquad \sum_{x=0}^rc=(r+1)c$$
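The identity is also easy to verify numerically for integer $n\ge 0$ and $c\ge 0$ (so that the summation range is nonempty):

```python
def s(n, c):
    # Direct evaluation of sum_{x=-(n+2c)}^{n} (x + c).
    return sum(x + c for x in range(-(n + 2 * c), n + 1))

assert all(s(n, c) == 0 for n in range(10) for c in range(10))
print("identity holds on the tested range")
```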
Simply connected covering space is a covering of other covering
A simply connected covering space is called a universal covering space, and the reason for calling it so is exactly the fact you are asking for a proof of. Anyway, for a proof, use the lifting criterion. In this case it tells you that any map from a simply connected space to another space lifts to any covering space of the target space. Check that the lift is itself a covering map. (You might need a locally path-connected assumption on the base space.)
Proving functions are in $L_1(\mu)$.
Ashamed of myself for not realizing the problem was so elementary, I'll register my solution here, in case it helps someone. We have that for all $a,b \in \Bbb R$: $$\sqrt{a^2 + b^2} \leq \sqrt{|a|^2 + 2|ab| + |b|^2} = \sqrt{(|a|+|b|)^2} = ||a|+|b|| = |a|+|b|$$ and: $$(|a|-|b|)^2 \geq 0 \implies a^2 - 2|ab| + b^2 \geq 0 \implies |ab| \leq 2|ab| \leq a^2 + b^2 \implies \sqrt{|ab|} \leq \sqrt{a^2 + b^2}.$$ Applying this to the functions, we obtain: $$\int_{\mathcal{X}} \sqrt{f^2 + g^2} \ \mathrm{d}\mu \leq \int_{\mathcal{X}} |f| + |g| \ \mathrm{d}\mu = \int_{\mathcal{X}} |f| \ \mathrm{d}\mu + \int_{\mathcal{X}} |g| \ \mathrm{d}\mu < +\infty$$ since $f,g \in L^1(\mu) \implies |f|,|g| \in L^1(\mu)$. We conclude that $\sqrt{f^2+g^2} \in L^1(\mu)$. And now, we just use that: $$\int_{\mathcal{X}} \sqrt{|fg|} \ \mathrm{d}\mu \leq \int_{\mathcal{X}} \sqrt{f^2 + g^2} \ \mathrm{d}\mu < +\infty$$ and hence $\sqrt{|fg|} \in L^1(\mu)$.
Solve $3^{x+1} = 2^x$
Hint $$3^{x+1}=2^x\iff \left(\frac{2}{3}\right)^x=3.$$
How to prove $C=2πr$?
As the rectification of the circle ultimately relies on triangulation, this follows from the corresponding proportionality for triangles, hence ultimately from the intercept theorem (not a single axiom).
How to determine the closure of a subset and prove it is actually the closure?
First note that in order to prove that a set $B$ is the closure of a set $A$, it suffices to prove that the intersection of every open ball around an arbitrary $p\in B$ with $A$ is not empty (you can prove that a set is closed iff it is equal to its closure). Now, let $x$ be any point in the complement of $E=[-\sqrt2, \sqrt2]$ and define $r:=\min\{|x-\sqrt2|,|x+\sqrt2|\}.$ What can you say about the open ball $B_r(x)$ (radius $r$, centre $x$)?
Showing that $\lim_{x \to 0}\frac{f(x)}{g(x)} = \frac{f'(0)}{g'(0)}$.
\begin{align} \lim_{x \to 0}\frac{f(x)}{g(x)} &= \lim_{x \to 0}\frac{f(0+x) - f(0)}{x} \cdot \frac{x}{g(0+x) - g(0)} \end{align} by simple algebra and substitution. Now the limit of a product is the product of the limits if both exist. So to perform the split you wanted, you'll need to show that. The first limit certainly exists, because $f$ is known to be differentiable at 0. What about the second? Well, if $\lim_{x \to 0} h(x) = m \ne 0$, then $\lim_{x \to 0} \frac{1}{h(x)} = \frac{1}{m}$. (This is Theorem 2 of chapter 5 of Spivak's Calculus, if you need a reference). We'd like to apply this to the function $$ h(x) = \frac{g(0+x) - g(0)}{x} $$ but to do so, we need to know that $\lim_{x \to 0} h(x)$ exists and is nonzero. Fortunately for us, $g$ is differentiable at 0 (by assumption), and its derivative, $g'(0)$, is exactly defined to be $\lim_{x \to 0} h(x)$, which therefore exists. Furthermore, the hypothesis $g'(0) \ne 0$ tells us that it's nonzero. So we can conclude that \begin{align} \lim_{x \to 0} \frac{1}{\frac{g(0+x) - g(0)}{x}} &= \frac{1}{\lim_{x \to 0} \frac{g(0+x) - g(0)}{x}}\\ &= \frac{1}{g'(0)}. \end{align} Finally, the limit on the left is, through a little algebra, just \begin{align} \lim_{x \to 0} \frac{1}{\frac{g(0+x) - g(0)}{x}} &= \lim_{x \to 0} \frac{x}{g(0+x) - g(0)}. \end{align} So both the limits in the product into which we want to split our original expression exist and hence we have \begin{align} \lim_{x \to 0}\frac{f(x)}{g(x)} &= \lim_{x \to 0}\frac{f(0+x) - f(0)}{x} \cdot \frac{x}{g(0+x) - g(0)} \\ &= \lim_{x \to 0}\frac{f(0+x) - f(0)}{x} \cdot \frac{1}{ \lim_{x \to 0}\frac{g(0+x) - g(0)}{x}} \\ & = f'(0) \cdot \frac{1}{g'(0)} = \frac{f'(0)}{g'(0)} \end{align}
2 different results using associative property in boolean expression
Order of operations matters. $Q\wedge(P\vee Q\vee T)$ and $(Q\wedge P)\vee Q\vee T$ are quite different expressions. I simplified this like so: $Q \wedge P \vee Q \vee T \equiv Q \wedge P \vee (Q\vee T) \equiv Q\wedge P \vee T \equiv Q \wedge (P\vee T) \equiv Q\wedge T \equiv Q. $ You are treating that as ${Q\wedge (P\vee Q\vee T)\\Q\wedge((P\vee Q)\vee T)\\Q\wedge T\\ Q}$ But then I noticed there might also be another way to proceed, which would be like so: $Q \wedge P \vee Q \vee T \equiv (Q \wedge P \vee Q)\vee T \equiv T $ Here you treat it as: $(Q\wedge P)\vee Q\vee T\\ ((Q\wedge P)\vee Q)\vee T\\ T$ Which is the usual convention: $\land$ has precedence over $\lor$, in the same way $\times$ has precedence over $+$. Not everyone uses the same convention, so the best practice is to always use explicit bracketing to ensure the expression is read as intended. Compare with: $q\times p+q+1 = (q\times p)+q+1$
Getting rid of a fractional power over f(x)
We have $\int y^{-1/2}\,dy=-\int h \,dt$. This gives us $2y^{1/2}=-ht+C$.
Combinatorics problem, contacts increase by 5 how many times to reach 1 billion?
$1$ degree: $5$ contacts. $2$ degrees: $5 + 5^2$ contacts. $3$ degrees: $5 + 5^2 + 5^3$ contacts. So we need to solve $$\sum_{k=1}^x 5^k = 1000000000$$ Using the formula for geometric series, we have $$\frac 54 (5^x - 1)= 1000000000$$
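Rather than inverting the geometric-series formula by hand, one can simply accumulate the sum until it crosses the target:

```python
target = 1_000_000_000
total, x = 0, 0
while total < target:
    x += 1
    total += 5 ** x  # the x-th degree adds 5^x new contacts
print(x, total)  # 13 1525878905
```

So $13$ degrees suffice, matching $\frac54(5^{13}-1) \approx 1.53\times 10^9 \ge 10^9$, while $12$ degrees give only about $3.05\times 10^8$.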
Min of concave symmetric function on a convex set
I think I got the answer (thanks S.B. for the comments), but I'm not sure, so I'm placing it as a community wiki; feel free to edit: The minimum occurs at $\left(\underbrace{\frac{1-\frac 1k}{N-1},\ldots,\frac{1-\frac 1k}{N-1}}_{k\text{ times}},\underbrace{\frac 1{N-1},\ldots,\frac 1{N-1}}_{N-k\text{ times}}\right)$ for some $k$. First show the minimum must have either $x_1=0$ or $x_N=\frac 1{N-1}$. Assume $z\in C$ such that $0<z_1\le\ldots\le z_N<\frac 1{N-1}$. Pick $\varepsilon>0$ and let $\hat z$,$\tilde z$ coincide with $z$ for all entries but the first and last, $\hat z_1=z_1-\varepsilon$ and $\hat z_N=z_N+\varepsilon$, and $\tilde z_1=z_1+\varepsilon$ and $\tilde z_N=z_N-\varepsilon$. Notice that we can choose $\varepsilon >0$ such that $\hat z\in C$, but we cannot guarantee that $\tilde z$ is in $C$. Let's assume $\boldsymbol\phi$ is strictly concave. We have $\phi(z)>\frac12\phi(\hat z)+\frac 12\phi(\tilde z)$ since $z=\frac12\hat z+\frac12 \tilde z$. Even when $\tilde z\not\in C$ we can find a permutation of its entries (call it $\tilde z^\pi$) that is in $C$. Now since $\boldsymbol \phi$ is symmetric, $\phi(\tilde z)=\phi(\tilde z^\pi)$. So clearly $z$ cannot minimize $\phi$ in $C$. If $x_1=0$ we must have $x_k=\frac{1}{N-1}$ for all $k\ge2$. If $x_1>0$ and $x_N=\frac 1{N-1}$ we claim either $x_2=x_1$ or $x_{N-1}=\frac{1}{N-1}$, and again the proof of the claim is as above. Applying the same reasoning we tackle all the intermediate entries.
$\int_{0}^{\infty} \frac{\cos x - e^{-x^2}}{x} \ dx$ Evaluate Integral
Related problems: (I), (II). Recall the Mellin transform of a function $f$: $$ F(s) = \int_{0}^{\infty} x^{s-1}f(x) \,dx .$$ We then consider the more general integral $$ F(s) = \int_{0}^{\infty} x^{s-1}\left(\cos x - e^{-x^2}\right) \, dx \,. $$ The value of the integral in our problem follows by taking the limit as $s\to 0 $ in the above integral. Evaluating the above integral gives $$ F(s) = \Gamma \left( s \right) \cos \left( \frac{\pi \,s}{2} \right) - \frac{1}{2}\, \Gamma \left(\frac{s}{2} \right) \,.$$ Taking the limit as $s \to 0 \,,$ we get the desired result $$ \int_{0}^{\infty} \frac{\cos x - e^{-x^2}}{x} \ dx = -\frac{\gamma}{2}\,. $$
$A\subset\mathbb R$ is open dense, then $\mathbb R=\{x+y\mid x, y\in A\}$
Let $I =(c-r, c+r)$ be an open interval included in $A$, with $r>0$ and let $z\in {\mathbb R}$. Let $x_n\in A$ be a sequence such that $x_n\to z-c$. When $n$ is large enough, the interval $x_n + I\subset A+A$ contains $z$. As $z$ is arbitrary, it follows that $A+A = {\mathbb R}$.
Time Complexity through recurrence relation
$$f_n=\frac{\text{Fibonacci}(n)+\text{Lucas}(n)}{2}$$ Since $$\lim_{n\to \infty }\frac{f_n}{\phi ^n}=\frac{\sqrt{5}+5}{10}, $$ we can say that the $c$ you are looking for is the golden ratio $$\phi=\frac{\sqrt{5}+1}{2}$$
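A quick numeric check of both claims (in fact $(F_n+L_n)/2 = F_{n+1}$, since $L_n = F_{n-1}+F_{n+1}$):

```python
import math

def fib_lucas(n):
    """Return (Fibonacci(n), Lucas(n)) by simultaneous iteration."""
    f0, f1 = 0, 1   # Fibonacci: 0, 1, 1, 2, ...
    l0, l1 = 2, 1   # Lucas:     2, 1, 3, 4, ...
    for _ in range(n):
        f0, f1 = f1, f0 + f1
        l0, l1 = l1, l0 + l1
    return f0, l0

phi = (1 + math.sqrt(5)) / 2
n = 40
F, L = fib_lucas(n)
f_n = (F + L) // 2
print(f_n / phi ** n)  # ~0.7236, i.e. (sqrt(5) + 5) / 10
```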
Properties of injective modules
This is not really an answer to the question, but is too long for a comment. You mention the module $\mathbb{Q}/\mathbb{Z}$, and this actually provides a nice algebraic duality between the injective modules and the flat modules over noetherian rings. First off, this module is an injective cogenerator, so we have a faithful functor $(-)^{d}:=\text{Hom}_{\mathbb{Z}}(-,\mathbb{Q}/\mathbb{Z}):R\text{-Mod}\to \text{Mod-}R$ between the left and right $R$-modules (and vice versa). Moreover there are natural isomorphisms $$\text{Ext}_{R}^{j}(M,N^{d})\simeq \text{Tor}_{j}^{R}(M,N)^{d}$$ for all $R$-modules $M$ and $N$ and $j<\infty$, and $$\text{Ext}_{R}^{j}(A,B)^{d}\simeq \text{Tor}_{j}^{R}(A,B^{d})$$ for all finitely generated $R$-modules $A$, all $R$-modules $B$ and $j<\infty$. From these, you can indeed see that if $M$ is injective, then $M^{d}$ is flat and similarly if $N$ is flat then $N^{d}$ is injective. Obviously projective modules are also flat. From this, you can actually recover the cofree situation: if $M$ is an injective $R$-module, then $M^{d}$ is flat and is therefore a direct limit of finitely generated free modules. In particular, there is a pure quotient $$\bigoplus_{I}F_{i}\to M^{d}\to 0$$ with each $F_{i}$ a finitely generated free module. Applying $(-)^{d}$ to this gives a split sequence $$0\to M^{dd} \to \prod_{I}F_{i}^{d},$$ hence $M^{dd}$ is a direct summand of a cofree module as each $F_{i}^{d}$ is cofree. Moreover, $M$ is a direct summand of $M^{dd}$ as it is injective, so is also a direct summand of a cofree module. In fact, the duality $(-)^{d}$ applies to many more classes than flat and injective modules, and is a very useful object. It can also be replaced by any injective cogenerator for $R$-Mod.
How to linearize a piecewise objective function into a single linear program
Provided that the piecewise linear function to be maximized is continuous and concave down, we can transform it into \begin{align} \mbox{maximize }\,&y+2x_2-5x_3\\ \mbox{subject to }\,&y\leq 2x_1+1,\\ & y\leq x_1+2,\\ & y\leq -2x_1+8.\\ \end{align} The variables are $x_1,x_2,x_3,y$. The nonnegativity constraints are $\,x_1\geq 0, x_2\geq 0, x_3\geq 0\,$ as usual. If the piecewise function is not concave down, the maximization requires trial and error. For problem b, one can study two cases: $x_1\leq 1$ and $x_1\geq 1$. In each of the two ranges of $x_1$, the problem is concave down and has a unique maximum. Then compare the two maxima to see which one is the global maximum. The method is called branch and bound, which is useful for solving non-convex (because optimization theory often deals with minimization problems) or integer programming problems.
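Why the epigraph trick is exact: assuming the concave objective is the pointwise minimum of the three lines in the constraints above, then for each fixed $x_1$ the largest feasible $y$ is exactly that minimum, so maximizing $y$ recovers the piecewise function. A brute-force check:

```python
def f(x):
    # Concave piecewise-linear objective as the min of its linear pieces.
    return min(2 * x + 1, x + 2, -2 * x + 8)

# For each fixed x, the largest y satisfying all three constraints
# y <= 2x+1, y <= x+2, y <= -2x+8 is exactly f(x).
for x in [0.0, 0.5, 1.0, 2.0, 3.0]:
    y = f(x)
    assert y <= 2 * x + 1 and y <= x + 2 and y <= -2 * x + 8
print(f(1.0), f(2.0))  # 3.0 4.0
```

Any LP solver can then be handed the four-variable problem directly.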
Continuous mapping and fixed points
Suppose $f(a) = b$ for some $a \neq b$; then $f(b) = f(f(a)) = a$. Define $g(x) = f(x) - x$, and we have $$g(a) = f(a) - a = b-a$$ and $$g(b) = f(b) - b = a-b.$$ Since $b-a$ and $a-b$ have opposite signs and $g$ is continuous, by the intermediate value theorem there exists $c$ between $a$ and $b$ such that $$g(c) = f(c) - c = 0,$$ i.e. $f(c) = c$.
Prove by contradiction that If $R$ is a transitive relation on set $A$ then $R^2$ is transitive.
$R\circ R$ is a relation such that if $R(x,y)$ and $R(y,z)$, then $R\circ R(x,z)$. In general, if $R_1$ and $R_2$ are relations on $A$, and $R_1(x,y)$ and $R_2(y,z)$, then $R_2R_1(x,z)$ (note the order of composition; we apply $R_1$ first, then $R_2$). So say $R$ is transitive, $R\circ R(x,y)$ and $R\circ R(y,z)$. You want to show that $R\circ R(x,z)$. To start you off, by definition, $R\circ R(x,y)$ basically means that $x$ relates to something that relates to $y$ and $R\circ R(y,z)$ means that $y$ relates to something that relates to $z$. Using transitivity of $R$, can you see how you get what you want? EDIT: I just noticed the "proof by contradiction" part in the title. The argument should be very similar, except you also assume that for some $x,y,z$, $R\circ R(x,y)$, and $R\circ R(y,z)$, but $\neg R\circ R(x,z)$.
If $\|T\| < 1$, then $I-T$ is invertible and $\|(I-T)^{-1}\| \leq (1-\|T\|)^{-1}$
By the hypothesis the Neumann series $\sum\limits_{n=0}^\infty T^n$ is absolutely convergent, hence convergent (the space of bounded operators is complete), and we have $$(I-T)\sum\limits_{n=0}^\infty T^n=\left(\sum\limits_{n=0}^\infty T^n\right)(I-T)=I$$ so $I-T$ is invertible and its inverse is $\sum\limits_{n=0}^\infty T^n$. Moreover we have $$\left\|\sum\limits_{n=0}^\infty T^n\right\|\le\sum\limits_{n=0}^\infty \|T\|^n=\frac{1}{1-\|T\|}$$ Remark: In the previous inequality we also use the continuity of the norm.
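As a quick numerical illustration (my addition, not part of the proof), for a small matrix $T$ with $\|T\|<1$ the partial sums of the Neumann series do converge to $(I-T)^{-1}$:

```python
# Partial sums of I + T + T^2 + ... for a 2x2 matrix T with norm < 1.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

I = [[1.0, 0.0], [0.0, 1.0]]
T = [[0.1, 0.2], [0.0, 0.3]]  # any induced norm of this T is < 1

S = I          # running partial sum  I + T + T^2 + ...
P = T          # current power T^n
for _ in range(200):
    S = mat_add(S, P)
    P = mat_mul(P, T)

# Check that (I - T) S is (numerically) the identity, i.e. S = (I - T)^{-1}.
IminusT = [[I[i][j] - T[i][j] for j in range(2)] for i in range(2)]
print(mat_mul(IminusT, S))
```

The product comes out as the identity matrix up to roundoff, since the tail $T^{201}$ is negligible.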
How to deal with equivalence relations and equivalence classes
The point of a relation is that you have a set $X$ of things and you declare that two of them are "related" or "the same" if some sort of property holds between them. An equivalence relation is one that satisfies some extremely nice properties (ones that you would expect to want to use) such as:

Reflexivity: "anything is related to itself", i.e. $a\sim a$ for any $a\in X$.

Symmetry: "relatedness is irrelevant of order", i.e. if $a\sim b$ then $b\sim a$.

Transitivity: "relations can be glued", i.e. if $a\sim b$ and $b\sim c$ then $a\sim c$.

For example the notion of "$=$" is always an equivalence relation. In some sense equivalence relations are generalizations of this (where we don't necessarily want to measure being the exact same thing but some weaker property). Another example that isn't equality is the notion of being congruent mod $n$. Here $X = \mathbb{Z}$ and we say $a\sim b$ if $n\mid (a-b)$. We write this as $a\equiv b \bmod n$.

Now given an equivalence relation we can play the following game. Write down an element and find all things related to it. This is like the "family" of the element (all of its relations). We call this the equivalence class of the element. It turns out that for equivalence relations a nice thing happens. Everything in $X$ lies in a unique equivalence class, i.e. the equivalence classes partition $X$ into pieces. For example there are two equivalence classes on $X=\mathbb{Z}$ under the mod $2$ equivalence relation, the sets consisting of odd numbers and even numbers. The equivalence classes of $=$ are always of the form $\{x\}$ since only $x$ can equal $x$.

So on to your question. I see a relation but no set to put it on! This really does matter (do you have any chairs in your family?). If we assume the set is $\mathbb{Z}$ with the relation $m\sim n$ if $m^3 = n^3$ then clearly the equivalence classes are just singleton sets $\{m\}$ (since $m^3 = n^3$ implies $m=n$ in $\mathbb{Z}$).
However if instead we place this relation on $\mathbb{C}$ then the equivalence classes are of the form $\{z, \zeta_3 z, \zeta_3^2 z\}$ for $z\neq 0$ and $\{0\}$ (since now taking cube roots gives three answers unless $z=0$). So you see to fully answer your question I need to know a set! You could be placing this relation on numbers, matrices, polynomials, ... and all would give different equivalence classes.
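To make the partition concrete, here is a small Python sketch (my addition, not from the original answer) that groups $X=\{0,\dots,7\}$ into its classes under the mod-$2$ relation mentioned above:

```python
# Group a finite set into equivalence classes of a given relation.
# Example: X = {0, ..., 7} with a ~ b iff a and b have the same parity.
X = range(8)

def related(a, b):
    return (a - b) % 2 == 0

classes = []
for x in X:
    for cls in classes:
        # Comparing with one representative suffices, by transitivity.
        if related(x, cls[0]):
            cls.append(x)
            break
    else:
        classes.append([x])  # x starts a new equivalence class

print(classes)  # [[0, 2, 4, 6], [1, 3, 5, 7]]
```

Every element lands in exactly one class, illustrating that the classes partition $X$.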
extending an isometry to the completion
Let $\bar f:\bar X_1 \to \bar X_2$ be the map constructed via limits of Cauchy sequences. It remains to show that $\bar f$ is an isometry. Let $a,b\in \bar X_1$ be given. Then there are sequences $(a_k), (b_k)$ in $X_1$ converging to $a$ and $b$, respectively. Moreover, it holds that $$ \bar f(a) = \lim_{k\to\infty} f(a_k), \quad \bar f(b) = \lim_{k\to\infty} f(b_k). $$ Let $d_i$ denote the metric of both $X_i$ and $\bar X_i$. Then we have $$ d_2(\bar f(a),\bar f(b)) = d_2( \lim_{k\to\infty}f(a_k), \lim_{k\to\infty}f(b_k)) = \lim_{k\to\infty} d_2(f(a_k),f(b_k)) = \lim_{k\to\infty} d_1(a_k,b_k) = d_1(\lim_{k\to\infty}a_k,\lim_{k\to\infty}b_k) = d_1(a,b), $$ where we used the continuity of the metric and the isometry property of $f$.
Finding $\frac{1}{d_1}+\frac{1}{d_2}+\frac{1}{d_3}+...+\frac{1}{d_k}$
If you write: $$ \sum_{d \mid n} \frac{1}{d} = \frac{1}{n} \sum_{d \mid n} \frac{n}{d} = \frac{1}{n} \sum_{d \mid n} d = \frac{72}{n} $$ The divisor sum $\sum_{d \mid n} d = \sigma(n)$ is easily seen to satisfy $\sigma(n) \ge n + 1$ for $n > 1$ ($1$ and $n$ are always divisors; they are the only ones if $n$ is prime, otherwise there are more). So you'd only have to check up to $n = 71$. To find out which $n$ satisfy $\sigma(n) = 72$, a fast trip to http://oeis.org/A000203 shows that the full list is 30, 46, 51, 55, 71.
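The finite search described above can be carried out directly (my addition, a brute-force check of the OEIS list):

```python
# Find all n with sigma(n) = 72. Since sigma(n) >= n + 1 for n > 1,
# it suffices to search n <= 71.
def sigma(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)

solutions = [n for n in range(1, 72) if sigma(n) == 72]
print(solutions)  # [30, 46, 51, 55, 71]
```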
Disproving unique factorization of $\mathbb{Q}(\sqrt{-6})$
We have $$ 10=2\cdot 5=(2-\sqrt{-6})(2+\sqrt{-6}). $$ Now show that these elements are irreducible in $\Bbb Z[\sqrt{-6}]$, so that this ring is not a UFD.
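A small numerical aside (my addition, not in the original answer): irreducibility here is usually checked with the norm $N(a+b\sqrt{-6})=a^2+6b^2$, which is multiplicative. A nontrivial factorization of $2$, $5$, or $2\pm\sqrt{-6}$ (norms $4$, $25$, $10$) would force an element of norm $2$ or $5$, and no such element exists:

```python
# Norms a^2 + 6b^2 realized by elements of Z[sqrt(-6)]. Any element of norm
# 2 or 5 would satisfy a^2 + 6b^2 <= 5, so the small range below is exhaustive.
norms_hit = {a * a + 6 * b * b for a in range(-10, 11) for b in range(-10, 11)}
print(2 in norms_hit, 5 in norms_hit)  # False False
```

Since $10$ is a norm (of $2\pm\sqrt{-6}$) but $2$ and $5$ are not, all four factors in the displayed factorizations of $10$ are irreducible.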
Corollary to Lemma of Nakayama
Hint: if $I$ is nilpotent, it is contained in the Jacobson radical, so you can deduce that $N+J(A)N'=N+IN'=M$. Indeed, $I\subset J(A)$, thus $IN'\subset J(A)N'$ and $M=N+IN'\subset N+J(A)N'\subset M$. Now you can apply Nakayama's lemma, statement 3: https://en.wikipedia.org/wiki/Nakayama_lemma#Statement
Direct proof of classification of $\textrm{Spec}(k[X])$, $k$ an algebraically closed field
Let $(f)$ be a nonzero principal prime ideal of $R = k[X,Y]$. Since $R$ is a unique factorization domain, $f$ is irreducible if and only if it is prime, and in any integral domain, a principal ideal is prime if and only if some (equivalently each) generator is prime. Let $P$ be a prime ideal of $R$ which is not principal. It is easy to see that any set of generators for $P$ can be replaced by a set of irreducible generators. In particular, there exist $f, g \in P$ which are irreducible and nonassociate. (Gauss's Lemma): Let $S$ be a unique factorization domain with quotient field $K$. Let $a \in S[X]$ be nonconstant. Then $a$ is irreducible in $S[X]$ if and only if it is irreducible in $K[X]$. By Gauss's Lemma, $f$ and $g$, which are irreducible in $k[X,Y] = k[X][Y]$, remain irreducible in $k(X)[Y]$, and they are obviously still nonassociate there: otherwise, $f(X,Y) = \frac{h_1(X)}{h_2(X)} g(X,Y)$ for some nonzero $h_i \in k[X]$, or $h_2(X)f(X,Y) = h_1(X)g(X,Y)$. The irreducible factors of the $h_i$ remain irreducible in $k[X,Y]$. Writing $h_1$ and $h_2$ as products of irreducibles, we contradict the unique factorization of $k[X,Y]$. So there exist $m_1, m_2 \in k(X)[Y]$ such that $m_1 f + m_2 g = 1$. Clearing denominators, we obtain a nonzero $h \in k[X]$ which lies in $P$. Since $k$ is algebraically closed, $h$ splits into linear factors, and since $P$ is prime, some irreducible factor $X-a$ of $h$ lies in $P$. By the same argument, there exists a $b \in k$ such that $Y-b \in P$. Then $(X-a,Y-b) \subseteq P$, and since $(X-a,Y-b)$ is maximal, $(X-a,Y-b) = P$.
Q about 2nd isomorphism Th for rings: Is this true $(S+I)/I\cong S/(S\cap I)\cong S/I$, for subring $S$ and ideal $I$?
$I$ is not necessarily contained in $S$, so the expression $S/I$ does not necessarily make sense. When it does, it is equal to $S/(S \cap I)$ so nothing new is gained by your version of the theorem.
Triangle inequality problem.
$x^3+y^3-z^3+3xyz=(x+y-z)(x^2+y^2+z^2-xy+yz+xz)$. Since $x^2+y^2+z^2-xy+yz+xz>0$, we have $x^3+y^3-z^3+3xyz>0 \iff x+y-z>0$. Now $x^3+y^3-z^3+3xyz>0 \iff -(a-b)(b-c)(a+c+2b)+3xyz>0 \iff 27(a-b)^2(b-c)^2(c-a)^2(a+b)(b+c)(a+c)> (a-b)^3(b-c)^3(a+c+2b)^3 \iff 27(c-a)^2(a+b)(b+c)(a+c) >(a-b)(b-c)(a+c+2b)^3$. If $(a-b)(b-c)<0$, then it is proved. In the case $(a-b)(b-c)>0$, note that $a,c$ are symmetric, so WLOG let $c=\min\{a,b,c\}$ and write $a=c+u$, $b=c+v$. Then $$27(c-a)^2(a+b)(b+c)(a+c) -(a-b)(b-c)(a+c+2b)^3=(32v^2-32uv+108u^2)c^3+(48v^3-24uv^2+84u^2v+108u^3)c^2+(24v^4+9u^2v^2+75u^3v+27u^4)c+4v^5+2uv^4-3u^2v^3+11u^3v^2+13u^4v.$$ Since $32v^2-32uv+108u^2>0$, $48v^3-24uv^2+84u^2v+108u^3>0$, and $4v^5+2uv^4-3u^2v^3+11u^3v^2+13u^4v>0$, it follows that $27(c-a)^2(a+b)(b+c)(a+c) -(a-b)(b-c)(a+c+2b)^3>0$. With the same method, we have $y+z>x$ and $x+z>y$. QED.
Family of curves $x^n+y^n=a^n$ as $n$ goes from $1$ to $\infty$ (integers) and from $1$ down to $0$
$x^n+y^n=a^n\iff\left(\frac xa\right)^n+\left(\frac ya\right)^n=1\Longrightarrow$ If $n$ is even, then the function $y(x)=\pm\sqrt[n]{a^n-x^n}$ can only exist for $x\in(-a,+a)$. But if $n$ is odd, such restrictions no longer apply. Nevertheless, if we focus solely on the interval $x\in(0,a)$, then all observations that apply to even powers go for odd ones as well (i.e., the form will ultimately degenerate into a rectangle with its height twice the size of its width as $n\to\infty$). $n\to\infty\Longrightarrow\left(\frac xa\right)^n\to0\Longrightarrow\left(\frac ya\right)^n\to1\Longrightarrow y\to a\Longrightarrow$ For $x\in(-a,a)$ we have two straight line segments characterized by $y(x)=\pm a$. $n\to0\Longrightarrow\left(\frac xa\right)^n\to1\Longrightarrow\left(\frac ya\right)^n\to0\Longrightarrow$ $y$ can only be $0$, since any other number raised to the power $0$ is $1$, whereas $0^0$ is indeterminate; hence $y(x)=0$ for all $x\in(-a,a)\setminus\{0\}$. At $x=0$, however, we have $\underset{n\to0}\lim0^n=0$, hence $y=a$. As an aside, for $n=1$ we have a square with its four corners in $\pm a$ and $\pm ai$. For $n\to\infty$ we have another square, this time with its four corners in $a\pm ai$ and $-a\pm ai$. For $n=2$ we have the circle centered at the origin with radius $a$.
Finding volume between plane and paraboloid
As before, we divide the boundary $S$ into the two pieces, $S_1=\{(x,y,z)| (z=4) \wedge (x^2+y^2 \le 4)\}$ and $S_2=\{(x,y,z)| (z\le4) \wedge (x^2+y^2 = z) \}.$ We would like $\vec{F} \cdot \hat{n} = 0$ on $S_2$. The symmetry of the paraboloid suggests we want each $\vec{F}$ to lie in a common plane with the $z$ axis, and to be tangent to the parabola that is the intersection of $K_2$ and that plane. In fact that parabola has equation $z = r^2$ where $r = \sqrt{x^2 + y^2}$. This suggests $F_z = 2rF_r$ where $F_r$ is the radial component of $\vec{F}$ in cylindrical coordinates. So let's try $$\vec{F}=\left(\frac x 4,\frac y 4, \frac z 2\right).$$ Then $\nabla \cdot \vec{F} = 1$, so that the integral of $\nabla \cdot \vec{F}$ over the solid region $K$ bounded by $S$ gives the volume of that region. And anywhere on $S_2$, $$ \begin{eqnarray} {\sqrt{4r^2+1}} (\vec{F} \cdot \hat{n}) &amp;=&amp; (-2r\cos\theta)\frac x 4 + (-2r\sin\theta)\frac y 4 + \frac z 2 \\ &amp;=&amp; -\frac{x^2}{2} - \frac{y^2}{2} + \frac{x^2+y^2}{2} \\ &amp;=&amp; 0. \end{eqnarray} $$ Now, $S_1$ is a circular disk of radius $2$, so its area is $4\pi$. Everywhere on $S_1$, we have $z = 4$, and so $$\vec{F} \cdot \hat{n} = F_z = \frac z 2 = 2.$$ Integrating this over $S_1$ gives us $2\cdot(4\pi) = 8\pi$.
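As a numerical sanity check (my addition, not part of the argument), the volume produced by the flux through $S_1$, namely $\int_0^{2\pi}\int_0^2 (4-r^2)\,r\,dr\,d\theta = 8\pi$, can be approximated with a midpoint rule:

```python
from math import pi

# Midpoint-rule approximation of the volume between z = x^2 + y^2 and z = 4:
# V = int_0^{2pi} int_0^2 (4 - r^2) r dr dtheta = 8 pi.
N = 2000
dr = 2.0 / N
V = 0.0
for i in range(N):
    r = (i + 0.5) * dr           # midpoint of the i-th subinterval
    V += (4.0 - r * r) * r * dr
V *= 2.0 * pi                    # the integrand is independent of theta

print(V, 8.0 * pi)
```

The two printed values agree to many digits, confirming the $8\pi$ obtained from the divergence theorem.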
Correlation problem involving diagram
More or less by definition, the marginal pdfs $f_X$ and $f_Y$ satisfy\begin{align*} f_Y(y) = \int_{-\infty}^{\infty} \;f_Y(y|x)f_X(x)\, dx. \end{align*} (I'm calling this essentially a definition, but compare it to the discrete case or consider Fubini's theorem and related results.) Thus \begin{align*} h(y) &amp;= \int_{-\infty}^{\infty} \;f_Y(y|x)g(x)\, dx = \int_{-\frac{1}{2}}^{\frac{1}{2}} \;f_Y(y|x)\, dx, \end{align*} and churning through the integral as indicated gives $h$.
Universal Cover of $\mathbb{R}P^{2}$ minus a point
So it's a Moebius band, as you say. The Moebius band is a quotient of a cylinder, which is a quotient of the real plane.
Inequality $\frac{x_1^2}{x_1^2+x_2x_3}+\frac{x_2^2}{x_2^2+x_3x_4}+\cdots+\frac{x_{n-1}^2}{x_{n-1}^2+x_nx_1}+\frac{x_n^2}{x_n^2+x_1x_2}\le n-1$
Let $\frac{x_2x_3}{x_1^2}=\frac{a_1}{a_2}$,... and similar, where $a_i>0$ and $a_{n+1}=a_1$. Thus, we need to prove that: $$\sum_{i=1}^n\frac{1}{1+\frac{a_i}{a_{i+1}}}\leq n-1$$ or $$\sum_{i=1}^n\left(\frac{1}{1+\frac{a_i}{a_{i+1}}}-1\right)\leq-1$$ or $$\sum_{i=1}^n\frac{a_i}{a_i+a_{i+1}}\geq1,$$ which is true because $$\sum_{i=1}^n\frac{a_i}{a_i+a_{i+1}}\geq\sum_{i=1}^n\frac{a_i}{a_1+a_2+...+a_n}=1$$
limsup and liminf and it's relation to number of the indices
Your first remark (1) actually proves statement (2)! You've argued correctly that there are points of $\{x_n\}$ (indeed, infinitely many points) somewhere in the interval $(B-e,B+e)$; in particular, those infinitely many points are all greater than $B-e$. Note that this proof works for any limit point $B$ of $\{x_n\}$, not just the lim sup. To prove statement (1), we need to incorporate the fact that $B$ is the greatest limit point of $\{x_n\}$. So suppose (1) is false, so that $x_n&gt;B+e$ holds for infinitely many $n$. Those infinitely many points still form a bounded set, and so they have a limit point, which is then a limit point of the original sequence. However, since all of those points exceed $B+e$, the limit point we just found is $\ge B+e$—contradicting the fact that $B$ is the greatest limit point.
Finding a power series solution
I am not sure what your definition of the order of a power series is, but I solved this exercise simply by making the ansatz \begin{align} u(t,x)=\sum_{n,k}c_{n,k}x^nt^k \end{align} You can plug this expression into the PDE and set the coefficients of $x^nt^k$ equal to zero. Using the initial conditions it's not very difficult to work out the coefficients $c_{n,k}$. The solution is given by \begin{align} u(t,x)=xe^{-t} \end{align}
Proving a limit equals $0$ using a given theorem
Let $f(x,y):= \frac{ax}{\sqrt{x^2 + y^2}}$. For $x \ne 0$ we have $f(x,x)=\frac{ax}{\sqrt{2}|x|}.$ Hence $ \lim_{x \to 0+}f(x,x)=\frac{a}{\sqrt{2}}$ and $ \lim_{x \to 0-}f(x,x)=-\frac{a}{\sqrt{2}}$. Conclusion: if $\lim_{(x, y) \to (0,0)} \frac{ax}{\sqrt{x^2 + y^2}} = 0$, then $\frac{a}{\sqrt{2}}=-\frac{a}{\sqrt{2}}$ and therefore $a=0.$
In how many ways can a sequence not converge?
A sequence of real numbers can fail to converge in only one way: it is not a Cauchy sequence. It can fail to be a Cauchy sequence by being unbounded, or by "oscillating" between two bounding values (possibly over- or undershooting, but by diminishing amounts). An oscillating sequence has subsequences converging to either of the bounding values (the limes superior, $\limsup$, and the limes inferior, $\liminf$), and possibly to (all) values in between. In $\mathbb{R}$, the Bolzano-Weierstraß theorem asserts that every bounded sequence has a convergent subsequence (it has many, of course). So in your situation, your bounded sequence $(a_n)$ has a subsequence $(a_{n_k})$ converging to some real number $A$. If the entire sequence does not converge to $A$, there is an $\varepsilon > 0$ and a subsequence $(a_{n_m})$ such that $$\lvert a_{n_m} - A\rvert \geqslant \varepsilon$$ for all $m$. Then the subsequence $(a_{n_m})$ is also bounded, hence has a convergent subsequence ...
Can the intersection of a sequence of nested open intervals be nonempty and have finite cardinality?
An infinite intersection of open sets may not be open. Proof that $b \in \cap_{n=1}^\infty I_n$: since $$b - \frac{1}{n} < b < b + \frac{1}{n} \quad \forall n \in \mathbb{N},$$ we have $b \in I_n$ for all $n \in \mathbb{N}$, hence $b \in \cap_{n=1}^\infty I_n$. $$\tag*{$\blacksquare$}$$ Can you prove that $b$ is the only element in $\cap_{n=1}^\infty I_n$?
Show that $a^k w b^k$ when $|w|_a$ is divisible by $3$ is not regular
How sure are you that the language isn't regular? If I understand your description of it correctly, it is generated by the regular expression $$B \mid a B b \mid aa B bb \qquad\text{where }B = b^* (ab^* ab^* ab^*)^*$$ For $k\ge 3$, write $k=3n+m$ with $n\ge 1$ and $0\le m\le 2$; then you can just reinterpret $a^kwb^k$ as $a^m(a^{3n}wb^{3n})b^m$, since the inner string still has an $a$-count divisible by $3$.
Differential Equations: Calculating Barometric Pressure at an Altitude
Pressure decreases exponentially with altitude, and that's what you will get provided you fix your signs in the equation. Your equation will look like $\frac{dP}{da} = -kP$, where $k$ is some positive constant. Solve this to get $P(a) = P(0) e^{-ka}$, and at $a = 0$, $P(0) = 29.92$. Make sure to keep track of all your units.
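A small sketch (my addition; the rate constant $k$ below is an illustrative assumption, not from the problem) comparing a step-by-step Euler integration of $\frac{dP}{da}=-kP$ with the closed form $P(a)=P(0)e^{-ka}$:

```python
from math import exp

k = 3.7e-5       # ASSUMED decay constant, units 1/ft (illustrative only)
P0 = 29.92       # pressure at a = 0, inches of mercury

# Euler steps for dP/da = -k P up to altitude a_end.
a_end, steps = 10000.0, 100000
da = a_end / steps
P = P0
for _ in range(steps):
    P += -k * P * da

print(P, P0 * exp(-k * a_end))  # the two values agree closely
```

The numerical solution tracks the exponential, and in particular decreases with altitude as expected.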
How to solve a surface integral
The trace of the paraboloid $$ z = x^2+y^2$$ in the plane $ z= 1$ projects onto the circle $$ x^2+y^2 =1 $$ in the $xy$ plane, and the portion of the paraboloid that lies below the plane $ z= 1 $ projects onto the region $ R $ that is enclosed by the circle. Thus, it follows from the definition of surface area for surfaces of the form $ z = f(x,y)$ that $$ |S| =\iint_{(R)}\sqrt{f'_{|x}(x,y)^2 + f'_{|y}(x,y)^2+1}\,dA = \iint_{(R)}\sqrt{ 4x^2 +4y^2 +1}\,dA. $$ The expression $4x^2 +4y^2 +1 = 4(x^2+y^2)+1 $ in the integrand suggests that we evaluate the integral in polar coordinates. We substitute $ x =r\cos(\phi),\ \ y = r\sin(\phi)$ in the integrand, replace $dA = r\,dr\,d\phi $, and find the limits of integration by expressing the region $ R $ in polar coordinates. This yields $$ |S| = \int_{0}^{2\pi}\int_{0}^{1}\sqrt{4r^2+1}\,r\,dr\,d\phi = \int_{0}^{2\pi}\left[ \frac{1}{12}(4r^2 +1)^{\frac{3}{2}}\right]_{0}^{1}d\phi = \int_{0}^{2\pi}\frac{1}{12}(5\sqrt{5}-1)d\phi = \frac{1}{6}\pi(5\sqrt{5}-1).$$
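As a numerical check (my addition), a midpoint-rule approximation of the polar integral reproduces $\frac{\pi}{6}(5\sqrt{5}-1)$:

```python
from math import pi, sqrt

# Approximate int_0^{2pi} int_0^1 sqrt(4r^2 + 1) r dr dphi with the midpoint rule.
N = 20000
dr = 1.0 / N
area = 0.0
for i in range(N):
    r = (i + 0.5) * dr
    area += sqrt(4.0 * r * r + 1.0) * r * dr
area *= 2.0 * pi   # the integrand does not depend on phi

print(area, pi * (5.0 * sqrt(5.0) - 1.0) / 6.0)  # both about 5.3304
```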
How to Calculate Integral $\int_0^1 \theta^{-1} (1 - \theta)^{n-1} d\theta$
The integral as currently stated is divergent. If you meant to have different bounds, this is how you can express the indefinite integral: $$\int \frac{(1-x)^{n-1}}{x} dx \\ =\int \frac 1x \sum_{k=0}^{n-1} {{n-1}\choose k} (-x)^k dx \\ =\int\frac{dx}{x} +\int\sum_{k=1}^{n-1} {{n-1}\choose k}(-1)^k x^{k-1} dx \\ =\ln x +\sum_{k=1}^{n-1}{{n-1}\choose k}(-1)^k\frac{x^{k}}{k} $$ See how evaluating the term at $x=0$ paired with the minus sign would give $-\ln(0) =\infty$?
Understanding no free lunch theorem
The "output" of deterministic search algorithm $a$ referred to in the Wikipedia article is a sequence in $Y^{|X|},$ not an element of $X.$ You're headed down a blind alley with your map $a:Y^X \rightarrow X.$ Olle Häggström ("Intelligent Design and the NFL Theorems") has observed that the "NFL condition" is exchangeability of the random values $F(x).$ The source on exchangeability that he cites may well provide a rigorous measure-theoretic treatment.
Hard combinatorics and probability question.
Hint: Try to find first the probability that the corner cubes are put into the corners, the face cubes into the faces and the edge cubes into the edges. Then the probability that they have the correct orientation.

Complete answer: First, number the small cubes to make them distinguishable. Without taking into account the orientation of the smaller cubes, there are 27! possible ways to reassemble them into a big cube.

There is 1 way to place the center cube correctly.
There are 6! ways to place the face cubes correctly.
There are 8! ways to place the corner cubes correctly.
There are 12! ways to place the edge cubes correctly.

So there are 6!8!12! correct cubes (without taking into account the orientation). Now take such a cube.

The center cube has probability 1 of being in the correct orientation.
Each face cube has probability 1/6 of being in the correct orientation.
Each corner cube has probability 1/8 of being in the correct orientation.
Each edge cube has probability 1/12 of being in the correct orientation.

This means that the probability of getting a red cube must be $$\frac{6!8!12!}{27!}\left(\frac{1}{6}\right)^6\left(\frac{1}{8}\right)^8\left(\frac{1}{12}\right)^{12}$$
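The counting above can be assembled into an exact rational number (my addition, just evaluating the displayed formula):

```python
from fractions import Fraction
from math import factorial

# Placement probability times orientation probability, exactly as counted above.
placements = Fraction(factorial(6) * factorial(8) * factorial(12),
                      factorial(27))
orientations = (Fraction(1, 6) ** 6) * (Fraction(1, 8) ** 8) * (Fraction(1, 12) ** 12)
p = placements * orientations

print(float(p))  # astronomically small, on the order of 1e-37
```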
Find at least two ways to find $a, b$ and $c$ in the parabola equation
Here is one way. Plug the coordinates in the equation for each point. Then solve the system. You get, for $P,Q$ and $R$ respectively $$ \matrix{a\cdot0^2+b\cdot 0+c=1\\ a\cdot1^2+b\cdot 1+c=0\\a\cdot(-1)^2+b\cdot (-1)+c=3}\quad\Leftrightarrow\quad \matrix{c=1\\ a+b+c=0\\a-b+c=3} $$ Note that this has a unique solution $(a,b,c)$. You already have $c$ from the first equation. Now find $a$ and $b$ from the last two. You may add them to get $a$, and subtract them to get $b$.
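Carrying out the elimination with exact rationals (my addition, just finishing the computation sketched above):

```python
from fractions import Fraction

# Plugging P(0,1), Q(1,0), R(-1,3) into y = a x^2 + b x + c gives
#   c = 1,   a + b + c = 0,   a - b + c = 3.
c = Fraction(1)
a = (Fraction(0) + Fraction(3)) / 2 - c   # half the sum of the last two equations is a + c
b = (Fraction(0) - Fraction(3)) / 2       # half their difference is b

print(a, b, c)  # 1/2 -3/2 1
```

So the parabola is $y = \frac{1}{2}x^2 - \frac{3}{2}x + 1$, and one can verify all three points satisfy it.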
Quadratic approximation of $tan(x)$ at 0.
Hint Use the Taylor series: $$(1-x)^\alpha=1-\alpha x+\cdots$$ with obviously $\alpha=-1$ in your case.
Directional derivative and unit vectors
If $n$ is a unit vector, then $n=(\cos\theta,\sin\theta)$, for some $\theta\in\Bbb R$. And the directional derivative of $f$ at $(0,0)$ in the direction given by $n$ is\begin{align}\lim_{h\to0}\frac{f(hn+(0,0))-f(0,0)}h&amp;=\lim_{h\to0}\frac{h^3\cos^3\theta+2h^3\sin^3\theta}{h^3}\\&amp;=\cos^3\theta+2\sin^3\theta.\end{align}It is not hard to prove that the maximum value of this expression is $2$, attained when $\theta=\frac\pi2$.
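A quick numerical confirmation (my addition, not a proof) that $\cos^3\theta + 2\sin^3\theta$ attains its maximum $2$ at $\theta = \pi/2$:

```python
from math import cos, sin, pi

# Evaluate the directional-derivative expression on a fine grid of angles.
def g(theta):
    return cos(theta) ** 3 + 2.0 * sin(theta) ** 3

best_theta = max((k * 2.0 * pi / 100000 for k in range(100000)), key=g)
print(best_theta, g(best_theta))  # close to pi/2 and 2
```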
Epsilon delta prove for continuïty$ (1-\cos(|xy|))/y^2$
This statement is confusing: $$\begin{align}\left|\frac{1-\cos(|xy|)}{y^2}-\frac{x_0^2}{2}\right|<\epsilon&\iff\left|\frac{2\sin^2(|xy|/2)}{y^2}-\frac{x_0^2}{2}\right|<\epsilon\\&\Leftarrow \left|\frac{|xy|^2}{2y^2}-\frac{x_0^2}{2}\right|<\epsilon\\&\iff\left|\frac{x^2}{2}-\frac{x_0^2}{2}\right|<\epsilon\end{align}$$ How does: $$\left|\frac{|xy|^2}{2y^2}-\frac{x_0^2}{2}\right|<\epsilon$$ imply: $$\left|\frac{2\sin^2(|xy|/2)}{y^2}-\frac{x_0^2}{2}\right|<\epsilon?$$ Answer: It doesn't. The idea is correct, but the actual statement in that line is dead wrong. The better thing is to start over and use that for small enough $w$ you have: $$\cos w = 1-\frac{1}{2}w^2 + O(\left|w\right|^4)$$ In particular, there is a $C>0$ and a $w_0>0$ so that if $|w|<w_0$ then $$\left|1-\cos w -\frac{1}{2}w^2\right|<C\left|w\right|^4$$ Now let $w=xy$ and divide by $\left|y\right|^2$ and you get: $$\left|\frac{1-\cos (xy)}{y^2} -\frac{1}{2}x^2\right|<C\left|x^2y\right|^2$$ It's still going to take some more steps after that. In particular, you are going to need: $$\left|\frac{1-\cos (xy)}{y^2} -\frac{1}{2}x_0^2\right|\leq \left|\frac{1-\cos (xy)}{y^2} -\frac{1}{2}x^2\right|+\left|\frac{1}{2}x^2-\frac{1}{2}x_0^2\right|$$ So you need $|x^2y|<\sqrt{\frac{\epsilon}{2C}}$ and $|x^2-x_0^2|<\epsilon$. To find a $\delta$, you will have to deal with the cases $x_0=0$ and $x_0\neq 0$ slightly differently. If $x_0=0$ you first need $\delta<\sqrt{\epsilon}$. You also need $|x^2y|<\sqrt{\frac{\epsilon}{2C}}$. Letting $$\delta=\min\left\{1,\epsilon,\sqrt[4]{\frac{\epsilon}{2C}}\right\}$$ gives you the condition you need. If $x^2+y^2<\delta^2$ then $|x|<\delta$ and $|y|<\delta$. Therefore $|x|<\min\{1,\epsilon\}$, so $|x^2-x_0^2|=|x^2|=|x|\cdot |x|<1\cdot \epsilon=\epsilon$, and $|x^2y|<\sqrt{\frac{\epsilon}{2C}}\cdot 1$, so $C|x^2y|^2<\frac{\epsilon}2$. This next step needs some fixing.
If $x_0\neq 0$, then we first start with the condition that $|x-x_0|<\frac{|x_0|}{2}$. This means that $$\left|x+x_0\right|\le\left|x-x_0\right|+2\left|x_0\right|<\frac{5}{2}\left|x_0\right|$$ and $$|x|<2|x_0|.$$ Then you need $$|y|<\frac{1}{4\left|x_0\right|^2}\sqrt{\frac{\epsilon}{2C}}$$ Now $$\left|x^2-x_0^2\right| = \left|(x-x_0)(x+x_0)\right| <\frac{5}{2} \left|x-x_0\right| \left|x_0\right|$$ So we choose $$\delta=\min\left\{\frac{|x_0|}{2}\,,\,\frac{1}{4\left|x_0\right|^2}\sqrt{\frac{\epsilon}{2C}}\,,\,\frac{2\epsilon}{5\left|x_0\right|}\right\}$$
Hyperbolic isometry and line segments
A Möbius transformation is uniquely defined by 3 points and their images. If you have $z_1\mapsto z_1'$ and $z_2\mapsto z_2'$ mapping the endpoints of the line segments, then add $\overline{z_1}\mapsto \overline{z_1'}$, i.e. map the complex conjugates for one point and its image. If the segments $(z_1,z_2)$ and $(z_1',z_2')$ are indeed of equal length, then the map defined by these three points will also map $\overline{z_2}\mapsto \overline{z_2'}$ and it will have a representation using real coefficients only, so that it preserves the real axis. If some other reader wants the same for the Poincaré disk, use inversion in the unit circle instead of complex conjugate i.e. reflection in the real axis. The idea is that in a way, the upper and the lower half plane in the half plane model, or the inside and the outside (including the point at infinity) of the disk in the disk model, are algebraically pretty much equivalent. It makes sense to think of a hyperbolic point in the half plane model not as a single point in the upper half plane, but as a pair of points reflected in the real axis. Using this helps adding additional constraints for the Möbius transformation.
How to solve this system of equations of degree 3?
This may look like a hard question, but in the end it is not! $$a^3-2a^2+a(b^2+1)+8b-2b^2=0$$ is the same as: $$a(a^2+b^2+1)=2(a^2+(b-4)b)$$ Solving the quadratic in $b$ (for $a\neq 2$), you get: $$b=\frac{-((64-4(a-2)(a^3-2a^2+a)))^{1/2}-8}{2(a-2)}$$ or: $$b=\frac{((64-4(a-2)(a^3-2a^2+a)))^{1/2}-8}{2(a-2)}$$ So you get a list of solutions: $$a=-1,b=2$$ $$a=0,b=0$$ $$a=0,b=4$$ $$a=1,b=0$$ $$a=1,b=8$$ $$a=3,b=-6$$ $$a=3,b=-2$$ $$a=2,b=-\frac{1}{4}$$
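A brute-force confirmation of the list (my addition; the search bound is safe because for $a\ge 4$ or $a\le -2$ the discriminant $64-4(a-2)(a^3-2a^2+a)$ is negative, so no real $b$ exists):

```python
from fractions import Fraction

def F(a, b):
    return a**3 - 2*a**2 + a*(b**2 + 1) + 8*b - 2*b**2

# All integer solutions in a generous window.
integer_solutions = sorted((a, b) for a in range(-100, 101)
                           for b in range(-100, 101) if F(a, b) == 0)
print(integer_solutions)
print(F(2, Fraction(-1, 4)))  # 0: the one rational solution with a = 2
```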
Number theory problem on divisors!
So I saw the comments talking about $31$ and $47$ and eventually $61$ that cannot be found. And I started assembling a proof that $31$ or $47$ cannot be found. But in the end I arrived at solutions for these numbers. Extending the search, I end up with a somehow efficient algorithm for finding the solutions. Here are solutions to all primes under $1000$. (3, [2, 1]) (5, [2, 1, 1]) (7, [8, 5, 4]) (11, [8, 4, 2, 1]) (13, [8, 5, 4, 3]) (17, [4, 2, 2, 1, 1]) (19, [14, 7, 2, 1, 1]) (23, [40, 6, 3, 2, 2]) (29, [19, 17, 16, 11, 7]) (31, [54, 52, 47, 31, 27]) (37, [14, 9, 7, 2, 1, 1]) (41, [10, 8, 5, 4, 2, 1]) (43, [90031, 33, 3, 2, 2, 2]) (47, [82, 8, 6, 4, 3, 2]) (53, [13, 2, 2, 2, 1, 1, 1]) (59, [162, 5, 3, 2, 1, 1, 1]) (61, [76, 4, 2, 2, 2, 1, 1]) (67, [184, 10, 5, 2, 2, 1, 1]) (71, [2112, 8, 5, 3, 3, 1, 1]) (73, [2281, 12, 3, 3, 2, 2, 1]) (79, [59, 8, 5, 4, 4, 2, 1]) (83, [62, 2, 2, 1, 1, 1, 1, 1]) (89, [22, 2, 2, 2, 1, 1, 1, 1]) (97, [121, 2, 2, 2, 2, 1, 1, 1]) (101, [25, 4, 2, 2, 2, 1, 1, 1]) (103, [283, 5, 2, 2, 2, 1, 1, 1]) (107, [5002, 17, 16, 8, 7, 6, 5]) (109, [27, 8, 4, 2, 2, 1, 1, 1]) (113, [28, 14, 7, 2, 2, 1, 1, 1]) (127, [1111, 14, 7, 3, 2, 2, 1, 1]) (131, [2456, 4, 4, 4, 2, 2, 2, 1]) (137, [445, 8, 6, 4, 2, 2, 2, 1]) (139, [382, 6, 5, 4, 3, 2, 2, 1]) (149, [37, 2, 2, 2, 1, 1, 1, 1, 1]) (151, [51151, 526, 406, 2, 2, 2, 2, 2]) (157, [2551, 8, 6, 5, 4, 2, 2, 2]) (163, [122, 12, 2, 2, 1, 1, 1, 1, 1]) (167, [292, 3, 2, 2, 2, 1, 1, 1, 1]) (173, [43, 7, 2, 2, 2, 1, 1, 1, 1]) (179, [134, 67, 2, 2, 2, 1, 1, 1, 1]) (181, [45, 12, 3, 2, 2, 1, 1, 1, 1]) (191, [143, 12, 10, 2, 2, 1, 1, 1, 1]) (193, [1041862, 8984, 6, 5, 5, 5, 5, 4]) (197, [49, 8, 4, 2, 2, 2, 1, 1, 1]) (199, [348, 10, 4, 2, 2, 2, 1, 1, 1]) (211, [3323, 17, 4, 4, 2, 2, 1, 1, 1]) (223, [204212, 240, 5, 4, 3, 2, 1, 1, 1]) (227, [15606, 19, 17, 16, 16, 12, 12, 11]) (229, [3034, 26, 4, 4, 2, 2, 2, 1, 1]) (233, [78812, 24, 20, 19, 19, 16, 16, 16]) (239, [5437, 7, 6, 6, 3, 2, 2, 1, 1]) (241, [475312, 11, 6, 
3, 3, 3, 2, 1, 1]) (251, [86281, 148, 121, 2, 2, 2, 2, 2, 1]) (257, [39062, 321, 2, 1, 1, 1, 1, 1, 1, 1]) (263, [177196, 58, 5, 3, 3, 3, 2, 2, 1]) (269, [67, 2, 2, 2, 2, 1, 1, 1, 1, 1]) (271, [2281, 474, 12, 3, 3, 3, 2, 2, 1]) (277, [2562, 92, 2, 2, 1, 1, 1, 1, 1, 1]) (281, [3442, 6, 6, 4, 3, 3, 2, 2, 2]) (283, [212, 4, 2, 2, 2, 1, 1, 1, 1, 1]) (293, [12379, 229, 6, 6, 4, 2, 2, 2, 2]) (307, [10361, 40, 13, 6, 4, 3, 2, 2, 2]) (311, [544, 8, 8, 6, 4, 4, 3, 2, 2]) (313, [391, 7, 2, 2, 2, 2, 1, 1, 1, 1]) (317, [79, 13, 2, 2, 2, 2, 1, 1, 1, 1]) (331, [1572, 9, 4, 2, 2, 2, 1, 1, 1, 1]) (337, [84, 12, 3, 3, 2, 2, 1, 1, 1, 1]) (347, [607, 6, 3, 2, 2, 2, 2, 1, 1, 1]) (349, [87, 8, 8, 6, 5, 5, 4, 4, 3]) (353, [1147, 6, 4, 2, 2, 2, 2, 1, 1, 1]) (359, [279571, 44, 3, 2, 2, 2, 2, 1, 1, 1]) (367, [25231, 22, 5, 2, 2, 2, 2, 1, 1, 1]) (373, [93, 8, 4, 4, 2, 2, 2, 1, 1, 1]) (379, [51070, 24, 5, 3, 2, 2, 2, 1, 1, 1]) (383, [38587, 30, 15, 8, 6, 5, 5, 5, 4]) (389, [97, 8, 5, 4, 3, 2, 2, 1, 1, 1]) (397, [1687, 8, 6, 5, 3, 2, 2, 1, 1, 1]) (401, [18947, 13, 8, 4, 3, 2, 2, 1, 1, 1]) (409, [10327, 656, 4, 3, 2, 2, 2, 2, 1, 1]) (419, [144031, 364, 170, 2, 2, 2, 2, 2, 1, 1]) (421, [526, 8, 5, 4, 3, 2, 2, 2, 1, 1]) (431, [1616, 76, 13, 4, 2, 2, 2, 2, 1, 1]) (433, [5737, 26, 6, 4, 3, 2, 2, 2, 1, 1]) (439, [2524, 11, 8, 4, 4, 2, 2, 2, 1, 1]) (443, [97792, 97571, 4, 4, 3, 3, 2, 2, 1, 1]) (449, [112, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1]) (457, [1028, 8, 8, 4, 4, 4, 2, 2, 1, 1]) (461, [1037, 62, 2, 2, 1, 1, 1, 1, 1, 1, 1]) (463, [6366, 722, 5, 4, 4, 4, 2, 2, 1, 1]) (467, [4531, 2218, 13, 4, 2, 2, 2, 2, 2, 1]) (479, [262372, 156, 40, 3, 3, 2, 2, 2, 2, 1]) (487, [33481, 25, 5, 4, 4, 3, 2, 2, 2, 1]) (491, [30319, 256, 7, 6, 4, 2, 2, 2, 2, 1]) (499, [198976, 101, 5, 4, 4, 4, 2, 2, 2, 1]) (503, [4904, 144, 6, 4, 4, 4, 2, 2, 2, 1]) (509, [127, 4, 2, 2, 2, 2, 1, 1, 1, 1, 1]) (521, [130, 7, 2, 2, 2, 2, 1, 1, 1, 1, 1]) (523, [14011562, 862, 17, 3, 1, 1, 1, 1, 1, 1, 1]) (541, [25562, 102, 10, 2, 2, 1, 1, 1, 1, 1, 
1]) (547, [597187, 992, 5, 3, 2, 1, 1, 1, 1, 1, 1]) (557, [2112, 1810, 3, 2, 2, 2, 1, 1, 1, 1, 1]) (563, [422, 12, 3, 3, 2, 2, 1, 1, 1, 1, 1]) (569, [142, 14, 7, 2, 2, 2, 1, 1, 1, 1, 1]) (571, [1570, 87, 5, 2, 2, 2, 1, 1, 1, 1, 1]) (577, [144, 4, 4, 2, 2, 2, 2, 1, 1, 1, 1]) (587, [2788, 943, 9, 6, 4, 3, 3, 3, 2, 2]) (593, [148, 8, 8, 6, 5, 4, 4, 3, 2, 2]) (599, [1048, 58, 3, 2, 2, 2, 2, 1, 1, 1, 1]) (601, [30200, 33, 8, 6, 4, 4, 4, 3, 2, 2]) (607, [1062, 6, 4, 3, 2, 2, 2, 1, 1, 1, 1]) (613, [5057, 5, 4, 4, 2, 2, 2, 1, 1, 1, 1]) (617, [615920, 1822, 6, 5, 5, 5, 3, 3, 3, 2]) (619, [144072, 85, 3, 3, 2, 2, 2, 1, 1, 1, 1]) (631, [1735, 22, 5, 3, 2, 2, 2, 1, 1, 1, 1]) (641, [722, 160, 4, 4, 2, 2, 2, 1, 1, 1, 1]) (643, [3697, 11, 7, 4, 2, 2, 2, 1, 1, 1, 1]) (647, [728, 485, 364, 2, 2, 2, 2, 1, 1, 1, 1]) (653, [13876, 8, 7, 7, 2, 2, 2, 1, 1, 1, 1]) (659, [1812, 7, 6, 5, 3, 2, 2, 1, 1, 1, 1]) (661, [1487, 8, 5, 4, 4, 2, 2, 1, 1, 1, 1]) (673, [841, 8, 4, 4, 2, 2, 2, 2, 1, 1, 1]) (677, [107812, 17, 8, 6, 6, 6, 5, 4, 3, 3]) (683, [1878, 5, 4, 4, 3, 2, 2, 2, 1, 1, 1]) (691, [47506, 17, 5, 4, 2, 2, 2, 2, 1, 1, 1]) (701, [175, 8, 5, 4, 3, 2, 2, 2, 1, 1, 1]) (709, [13648, 84, 5, 3, 3, 2, 2, 2, 1, 1, 1]) (719, [1977, 367, 8, 8, 6, 5, 5, 5, 4, 4]) (727, [2726, 14, 10, 8, 8, 7, 5, 5, 4, 4]) (733, [4581, 12, 8, 4, 4, 2, 2, 2, 1, 1, 1]) (739, [10900, 3687, 6, 3, 3, 3, 2, 2, 1, 1, 1]) (743, [1300, 6, 4, 4, 3, 2, 2, 2, 2, 1, 1]) (751, [2065, 6, 5, 4, 3, 2, 2, 2, 2, 1, 1]) (757, [23656, 2065, 5, 4, 2, 2, 2, 2, 2, 1, 1]) (761, [2473, 121, 6, 4, 2, 2, 2, 2, 2, 1, 1]) (769, [192, 8, 8, 5, 4, 4, 2, 2, 1, 1, 1]) (773, [118462, 306, 6, 3, 3, 2, 2, 2, 2, 1, 1]) (787, [221737, 2281, 11, 3, 3, 2, 2, 2, 2, 1, 1]) (797, [4981, 10, 8, 5, 4, 2, 2, 2, 2, 1, 1]) (809, [65731, 16906, 4, 4, 2, 2, 2, 2, 2, 2, 1]) (811, [15206, 9392, 4, 4, 4, 3, 2, 2, 2, 1, 1]) (821, [10057, 37, 6, 6, 3, 3, 2, 2, 2, 1, 1]) (823, [12962, 76, 6, 4, 4, 3, 2, 2, 2, 1, 1]) (827, [2281, 620, 12, 4, 3, 3, 2, 2, 2, 1, 1]) (829, 
[207, 62, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1]) (839, [6502, 263, 6, 4, 4, 4, 2, 2, 2, 1, 1]) (853, [63335, 39062, 49, 2, 1, 1, 1, 1, 1, 1, 1, 1]) (857, [214, 8, 8, 5, 4, 4, 3, 2, 2, 1, 1]) (859, [2362, 5, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1]) (863, [74002, 17, 8, 6, 4, 3, 3, 2, 2, 1, 1]) (877, [90031, 1096, 33, 4, 3, 2, 2, 2, 2, 2, 1]) (881, [220, 12, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1]) (883, [662, 13, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1]) (887, [1552, 40, 6, 6, 3, 3, 2, 2, 2, 2, 1]) (907, [1587, 857, 12, 2, 2, 1, 1, 1, 1, 1, 1, 1]) (911, [13437, 32, 29, 2, 2, 1, 1, 1, 1, 1, 1, 1]) (919, [39287, 112, 47, 2, 2, 1, 1, 1, 1, 1, 1, 1]) (929, [18812, 32, 12, 3, 2, 1, 1, 1, 1, 1, 1, 1]) (937, [4539062, 1076, 62, 20, 1, 1, 1, 1, 1, 1, 1, 1]) (941, [2373437, 147, 47, 47, 1, 1, 1, 1, 1, 1, 1, 1]) (947, [1657, 4, 3, 2, 2, 2, 2, 1, 1, 1, 1, 1]) (953, [238, 13, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1]) (967, [2659, 49, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1]) (971, [728, 364, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1]) (977, [244, 122, 12, 2, 2, 2, 1, 1, 1, 1, 1, 1]) (983, [737, 162, 5, 3, 2, 2, 1, 1, 1, 1, 1, 1]) (991, [6689, 5312, 4, 4, 2, 2, 1, 1, 1, 1, 1, 1]) (997, [6013156, 21326, 5, 5, 4, 3, 3, 2, 2, 2, 2]) Some explanation on the algorithm: Since my original aim was to prove that some numbers cannot be expressed, I started with the observation that the number of factors is kind of limited, because every factor is somewhat $2 - \epsilon$. I then rewrite the formula as $$\prod_{i = 1}^k(1 - \frac{1}{4b_i + 2}) = \frac{p}{2^k}.$$ Now the observation is that one of the factors must be at most $\frac{p^{\frac{1}{k}}}{2}$. So there is an upper bound to the smallest $b_i$. I just go through all possibilities of this smallest $b_i$, and use a recursive call on the remaining $k - 1$ factors. One key parameter is $k$, the number of factors. I use multithreading on different $k$, and kill all threads once one thread finds a solution. Using three threads, I find solutions to all primes under $1000$ within a minute or so. 
It is worth noting that using only two threads (i.e. only checking the optimal number or (optimal number + 1) of factors) one can also find solutions to all primes under $1000$, just taking longer. I guess it would even work with only one thread, but the current algorithm would be too slow for that. You may try my code by pasting the following into this website and pressing "Evaluate".

```python
# Sage code: RealField, Integer, floor, is_prime, etc. come from Sage
RR = RealField(200)
giveup = False

def Find(x, k, lbd):
    if giveup:
        return None
    if k == 1:
        n = (1/(1 - x) - 2) / 4
        if n.denominator() == 1:
            return [n]
        else:
            return None
    bd = Integer(floor((1 / (1 - RR(x)^(RR(1) / k)) - 2) / 4))
    for n in range(bd, lbd - 1, -1):
        newx = x / (1 - 1 / (4 * n + 2))
        if newx >= 1:
            break
        res = Find(newx, k - 1, n)
        if res is None:
            continue
        res.append(n)
        return res
    return None

import threading, time

TLIMIT = 3

for p in range(3, 1000, 2):
    if is_prime(p):
        k = ceil(log(RR(p)) / log(RR(2)))
        res = None
        giveup = False

        def ThreadFunc(x, k, lbd):
            global res
            myRes = Find(x, k, lbd)
            if myRes is not None:
                res = myRes
            return

        threads = []
        for j in range(TLIMIT):
            newThread = threading.Thread(target=ThreadFunc, args=(p/2**k, k, 1))
            threads.append(newThread)
            newThread.start()
            k += 1
        while True:
            allEnd = True
            for t in threads:
                if t.is_alive():
                    allEnd = False
                    break
            if allEnd:
                break
            if res is not None:
                giveup = True
            time.sleep(0.1)
        print(p, res)
```
Quantifier elimination for theory of equivalence relations
The equivalence holds because the equivalence classes are infinite. If $(x_1,\dots,x_n)$ is such that the RHS holds, the set $\{x_j \mid j\in J\}$ is finite. If it is nonempty, there is an element $y$ such that $y\sim x_j$ for all $j\in J$ and $y\neq x_j$ for every $j$; this uses that each equivalence class is infinite. If the set is empty, the existence of $y$ is implied by the fact that there are infinitely many equivalence classes, so you can pick a $y$ that is not $\sim$-equivalent to any $x_h$, $h\in H$.
Odds of getting equal amount of each color in a bag of M&Ms
Compute the chance of getting exactly $9$ of the first color with $54$ draws with replacement. Conditional on that, the remaining $45$ draws are each uniform over the remaining $5$ colors; compute the chance of getting exactly $9$ of the second color within these $45$. Keep going.
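To make this concrete, here is a sketch in Python (the bag size $54 = 6\times 9$ and the uniform-color assumption are my reading of the question): it compares the peel-off-one-color-at-a-time product above against the direct multinomial count $\frac{54!/(9!)^6}{6^{54}}$.

```python
from fractions import Fraction
from math import comb, factorial

def equal_split_multinomial(n_colors=6, per_color=9):
    # direct count: 54!/(9!)^6 favorable orderings out of 6^54
    n = n_colors * per_color
    ways = factorial(n) // factorial(per_color) ** n_colors
    return Fraction(ways, n_colors ** n)

def equal_split_stepwise(n_colors=6, per_color=9):
    # peel off one color at a time, as described above
    p = Fraction(1)
    remaining = n_colors * per_color
    for colors_left in range(n_colors, 1, -1):
        p *= (comb(remaining, per_color)
              * Fraction(1, colors_left) ** per_color
              * Fraction(colors_left - 1, colors_left) ** (remaining - per_color))
        remaining -= per_color
    return p

assert equal_split_multinomial() == equal_split_stepwise()
```

Using exact `Fraction` arithmetic avoids any floating-point doubt that the two computations agree.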
fractal dimension of the Sierpinski carpet
Note that $N(k)$ is a nondecreasing function of $k$ (since if we can cover something with $N$ squares of size $(1/k)\times (1/k)$, we can also cover it with $N$ squares of any larger size). For any positive integer $k$, there exists a nonnegative integer $j$ such that $3^{j}\le k < 3^{j+1}$. Therefore, $$8^j = N(3^{j})\le N(k) \le N(3^{j+1}) = 8^{j+1}.$$ Take logarithms and divide by $\ln k$: $$\frac{j \ln 8}{\ln k}\le \frac{\ln N(k)}{\ln k} \le \frac{(j+1) \ln 8} {\ln k}.$$ Use the inequalities $\ln k \ge j\ln 3$ and $\ln k\le (j+1)\ln 3$: $$\frac{j \ln 8}{(j+1) \ln 3}\le \frac{\ln N(k)}{\ln k} \le \frac{(j+1) \ln 8} {j \ln 3}.$$ The middle term is squeezed between two things that both approach $\ln 8 / \ln 3$.
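The squeeze can be checked numerically; the bounds below are exactly the two displayed inequalities, and their gap shrinks like $1/j$:

```python
from math import log

target = log(8) / log(3)  # ≈ 1.8928, the claimed dimension
for j in (1, 10, 100, 1000):
    lower = j * log(8) / ((j + 1) * log(3))
    upper = (j + 1) * log(8) / (j * log(3))
    assert lower <= target <= upper
    assert upper - lower <= 2 * target / j  # gap -> 0, forcing the limit
```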
A different proof of Urysohn's Lemma
All claims except the continuity of $g$ are clear. For this, suppose $x_n\to x$. Then
\begin{align*}
m(B_\epsilon)\, \lvert g(x_n) - g(x) \rvert
&= \lvert m( V_\epsilon \cap B_\epsilon(x_n)) - m(V_\epsilon \cap B_\epsilon(x)) \rvert \\
&\le m( V_\epsilon \cap (B_\epsilon(x_n) \mathbin{\square} B_\epsilon(x) ) ) \\
&\le m(B_\epsilon(x_n) \mathbin{\square} B_\epsilon(x)) \\
&= m(B_\epsilon(x_n)) + m( B_\epsilon(x)) - 2\, m(B_\epsilon(x_n) \cap B_\epsilon(x)) \\
&= 2 \bigl( m( B_\epsilon(x)) - m(B_\epsilon(x_n) \cap B_\epsilon(x)) \bigr),
\end{align*}
where the last step uses $m(B_\epsilon(x_n)) = m(B_\epsilon(x))$ by translation invariance. Here I have written $\square$ for symmetric difference and used the inclusion-exclusion principle. Hence the result will follow if we can prove that $m(B \cap (B+x)) \to m(B)$ as $x\to 0$. This follows from this result.
A function with partial derivatives but not continuous
For example: $$\frac{\partial f}{\partial x}(0,0):=\lim_{x\to0}\frac{f(x,0)-f(0,0)}x=\lim_{x\to0}\frac{0-1}x,$$ which does not exist. So no: the partial derivatives don't exist at the origin. They do at any other point, though.
Probability that $0$ won't be chosen out of $k$ picks from the set $\{0, 1, \ldots, 9\}$
It's because you aren't accounting for order. Suppose we take $k=2$. Then it is clear there should be $10^2 = 100$ different ordered selections of $2$ digits. However, $\dbinom{10+2-1}{2} = 55$, which counts unordered selections with repetition, not the $100$ ordered ones. The correct way to do this: there are $10^k$ ordered selections from the ten-digit set, and only $9^k$ of them remain once $0$ is removed. Thus the probability is $\frac{9^k}{10^k}$, as desired.
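A brute-force enumeration for $k=2$ (plain Python, just to confirm the counts above):

```python
from itertools import product

k = 2
selections = list(product(range(10), repeat=k))  # ordered selections
assert len(selections) == 10 ** k                # 100, not 55
no_zero = [s for s in selections if 0 not in s]
assert len(no_zero) == 9 ** k                    # 81
# so P(no zero) = 81/100 = (9/10)^k
```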
Identifying functions equivalent to $y=-3\sin x+2$
The correct answers are $A$, $C$ and $D$. First, $\sin(-\theta) = -\sin(\theta)$, and second, $\cos(\frac{\pi}{2}+\theta) = -\sin(\theta)$. Also $-3\cos(x-\frac{\pi}{2}) = -3\cos(\frac{\pi}{2}-x) = -3\sin(x)$. The other answers do not simplify to $-3\sin(x)+2$.
Convergence of power series around another point
Contrary to what hardmath's comment says, your way of expanding out $f(z+w)$ will work. Assume that $|z| < R-r$. In particular $|z+w| < R$, so the series for $f(z+w)$ converges: \begin{align*} f(z+w) &= \sum_{n=0}^\infty a_n (z+w)^n \\ &= \sum_{n=0}^\infty \sum_{i=0}^n a_n \binom{n}{i} z^i w^{n-i} \end{align*} With proper justification$^{1}$, as the series converges absolutely, we may switch the order of summation: \begin{align*} &= \sum_{i=0}^\infty \sum_{n=i}^\infty a_n \binom{n}{i} w^{n-i} z^i \end{align*} So let $b_i = \sum_{n=i}^\infty a_n \binom{n}{i} w^{n-i}$, and we have what we wanted: $$ f(z+w) = \sum_{i=0}^\infty b_i z^i. $$ $^{1}$Specifically, to justify this, it suffices (see this note) to show that $$\sum_{n=0}^\infty \sum_{i=0}^n |a_n| \binom{n}{i} |z|^i |w|^{n-i} < \infty.$$ But the expression is equal to $$\sum_{n=0}^\infty |a_n| \left(|z| + |w|\right)^n,$$ and since $\big||z| + |w|\big| = |z| + |w| < (R-r) + r = R$, the original series converges absolutely for $|z| + |w|$, which is what the above says.
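As a numeric sanity check of the re-expansion (my own example, not from the question): take $f(z)=\frac1{1-z}$, so $a_n=1$ and $R=1$. With $w=0.3$, the coefficients $b_i=\sum_{n\ge i}\binom{n}{i}w^{n-i}$ can be truncated numerically, and the re-expanded series at $z=0.2$ should reproduce $f(z+w)=\frac1{1-0.5}=2$:

```python
from math import comb

def b(i, w, terms=200):
    # b_i = sum_{n >= i} a_n * C(n, i) * w^(n-i), truncated; here a_n = 1
    return sum(comb(n, i) * w ** (n - i) for n in range(i, i + terms))

w, z = 0.3, 0.2
lhs = 1 / (1 - (z + w))                         # f(z + w) directly
rhs = sum(b(i, w) * z ** i for i in range(60))  # re-expanded series
assert abs(lhs - rhs) < 1e-9
```

(The truncation lengths are ad hoc but comfortably sufficient here, since the tails decay geometrically.)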
Prove $\bigcap_{i=1}^{\infty} J_i \subseteq \bigcap_{i=a}^{b} J_i$
Notice first that $$\left|\bigcap_{i=1}^{\infty}J_i\right| \leq \left|\bigcap_{i=a}^{b}J_i\right|.$$ If the inclusion were not true, there would exist $x$ such that $$\forall i: x \in J_i \quad\land\quad \exists i \in [a,b]: x \notin J_i,$$ which is a clear contradiction. How does this translate into the LHS being a subset of the RHS? Can you form an equivalent logical statement by assuming the converse, $$\bigcap_{i=1}^{\infty}J_i \not\subseteq\bigcap_{i=a}^{b}J_i\,?$$
Can this be taken as a derivative?
The derivative of $\sqrt{f(x)}$ at $x=2$ is, by definition, $$\lim_{x\to 2}\frac{\sqrt{f(x)} -\sqrt{f(2)}}{x-2}.$$ So your expression gives the derivative of $\sqrt{f(x)}$ at $x=2$ only if $f(2)$ happens to be $4$.
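A quick numerical illustration with a made-up $f$ satisfying $f(2)=4$ (say $f(x)=x^2$): then $\sqrt{f(x)}=x$ near $2$, and the difference quotient from the question tends to $1$, which is indeed the derivative of $\sqrt{f(x)}$ at $x=2$.

```python
def f(x):
    return x * x  # sample choice with f(2) = 4

def quotient(x):
    # the expression from the question: (sqrt(f(x)) - 2) / (x - 2)
    return (f(x) ** 0.5 - 2) / (x - 2)

assert abs(quotient(2 + 1e-3) - 1.0) < 1e-6
assert abs(quotient(2 - 1e-3) - 1.0) < 1e-6
```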
Question about proof of L'Hospital's Rule with indeterminate limits
Since $\lim_{x\to b}g(x)=\infty$, $\lim_{x\to b}\frac B{g(x)}=0$ and therefore, if $x_1$ is close enough to $b$, then $\frac B{g(x)}<\varepsilon$. And if $\lim_{x\to b}f(x)=0$ and $\lim_{x\to b}g(x)=\infty$, then $\lim_{x\to b}\frac{f(x)}{g(x)}=0$.
Largest value of a third order determinant whose elements are 0 or 1
Since $$\begin{vmatrix} a_{1} & b_{1} & c_{1} \\a_{2} & b_{2} & c_{2} \\ a_{3} & b_{3} & c_{3} \end{vmatrix}=a_1b_2c_3+b_1c_2a_3+c_1a_2b_3-a_3b_2c_1-b_3c_2a_1-c_3a_2b_1,$$ each of these terms is either $0$ or $1$ (depending on the entries chosen to be $1$ or $0$). So to maximize, one may want to choose the entries so that the positive terms become $1$ and the negative terms become $0$. However that is NOT possible, because then all entries become $1$ and the determinant takes on the value $0$. So the maximum value cannot be $3$. Since we cannot have all the positive terms be $1$ (without having the negative terms also being equal to $1$), we try to see if we can have two of the positive terms as $1$. Observe that we can have the following: $$\begin{vmatrix} 1 & 1 & c_{1} \\a_{2} & 1 & 1 \\ 1 & b_{3} & 1 \end{vmatrix}=1+1+c_1a_2b_3-c_1-b_3-a_2.$$ But now to maximize we need to have $c_1=a_2=b_3=0$. This gives the maximum value of the determinant to be $2$. It might also be good to look at the determinant as the volume of a parallelepiped to convince yourself of the validity of the result.
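The claim is easy to confirm by brute force over all $2^9$ zero-one matrices, using the same cofactor expansion as above:

```python
from itertools import product

def det3(m):
    (a1, b1, c1), (a2, b2, c2), (a3, b3, c3) = m
    return (a1 * b2 * c3 + b1 * c2 * a3 + c1 * a2 * b3
            - a3 * b2 * c1 - b3 * c2 * a1 - c3 * a2 * b1)

best = max(det3((bits[0:3], bits[3:6], bits[6:9]))
           for bits in product((0, 1), repeat=9))
assert best == 2
```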
Conceptual problem regarding distance between two sets.
Let $X = \mathbb{R}^2$ with the usual metric. Take $A = \{(x,e^x) \}_{x \in \mathbb{R}}, B= \{ (x,-e^x) \}_{x \in \mathbb{R}}$. Then $d(A,B) = 0$ as is seen by the sequences $a_n = (-n, e^{-n}), b_n = (-n, -e^{-n})$, but clearly neither sequence converges.
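A quick numerical look at the witness sequences (standard Euclidean metric): the gap $d(a_n,b_n)=2e^{-n}$ shrinks to $0$ even though both sequences escape to infinity.

```python
from math import exp, hypot

def a(n):
    return (-n, exp(-n))

def b(n):
    return (-n, -exp(-n))

def dist(p, q):
    return hypot(p[0] - q[0], p[1] - q[1])

gaps = [dist(a(n), b(n)) for n in (1, 5, 10, 20)]
assert all(g > h for g, h in zip(gaps, gaps[1:]))  # strictly shrinking
assert gaps[-1] < 1e-8                             # 2 * e^(-20) ≈ 4.1e-9
```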
Box topology and axiom of choice
This is an issue if you don't allow the empty set to be a topological space (and if you do allow it, then there might be issues elsewhere). But generally if the product is empty, then it is empty. Note that this has nothing to do with the particular topology (box, or otherwise). If the entire product is empty, then every subset of it is also empty, because even if somehow there is a sequence of open sets which does admit a choice function, what really happens is this: the basis is defined as $\{f\in\prod X_\alpha\mid \forall\alpha:f(\alpha)\in U_\alpha\}$. In either case, this is a well-defined construction. If you allow the empty set to be a space, then it's also a compact space; if you do not allow it to be a space, then you might say that the product is undefined because it's not a space. But it has nothing to do with the topology. Do note, however, that as with all things, being well-defined and having the same properties as the objects have in $\sf ZFC$ universes are two different things. For example, it might be that $\prod X_\alpha$ is non-empty, but for some sequence of open sets, $U_\alpha$, you have that $\prod U_\alpha$ is in fact empty. Topologically it doesn't matter since the empty set is open. But look at this strange example. For each $n\in\Bbb N$ consider $P_n$ to be a pair, such that $\prod P_n=\varnothing$ over any infinite subset of $\Bbb N$ (this is essentially Russell's socks). Now let $X_n=P_n\cup\{n\}$ and give it the topology $\{\varnothing,X_n,P_n\}$. What is the product of these spaces? It's not empty, since we can always choose the natural number from $X_n$. But an open set is non-empty if and only if it only chooses from $P_n$ on finitely many coordinates. Namely, the box product and the Tychonoff product coincide. Strange!
Calculating monthly payments of a savings plan
Firstly we need a reference date for your payments and the payments of the bank. I've made a time line. The converted monthly interest rate is $i_{12}=\frac{0.04}{12}$. Your payments start at the last day of January. We calculate the future value of your 12 payments at 31.12. We pretend that you start at 01.01 but make the payments at the end of each month; then the future value is at $\color{red}{t=0} \ (31.12)$. This is the left-hand side of the equation. Then we calculate the present value of the payments made by the bank. Since the payments ($+x$) start one month after $t=0$, we just discount the future value 6 times. The equation is $$250\cdot\frac{(1+\frac{0.04}{12})^{12}-1}{\frac{0.04}{12}}=x\cdot\frac{(1+\frac{0.04}{12})^{6}-1}{\frac{0.04}{12}}\cdot\frac1{\left(1+\frac{0.04}{12}\right)^6}$$ $$3055.6157=x\cdot 5.930618 \Rightarrow x=£\,515.23$$
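The same computation in a few lines (values per the equation above; `i` is the monthly rate):

```python
i = 0.04 / 12  # monthly rate

# future value of twelve 250 deposits, valued at 31.12
fv_deposits = 250 * ((1 + i) ** 12 - 1) / i              # ≈ 3055.62
# present value (at 31.12) of six payments of x, first one month later
annuity_factor = ((1 + i) ** 6 - 1) / i / (1 + i) ** 6   # ≈ 5.9306
x = fv_deposits / annuity_factor
assert abs(x - 515.23) < 0.01                            # ≈ £515.23
```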
Plotting $||x|-|y||=1$: How to verify the extra line segments aren't included in the plot?
For each of the four equations you obtained, the signs of $x$ and $y$ must be considered. We have: $$x-y=1 \text{ for } x,y \ge 0 \text { (satisfying $|x|-|y|=1$) or } x,y \le 0 \text { (satisfying $|y|-|x|=1$) }$$ Notice that the part of the line $x-y=1$ in $x \ge 0, y \le 0$ (Quadrant IV) is omitted. Similarly: $$x+y=1 \text{ for } x \ge 0, y \le 0 \text{ or } x \le 0, y\ge 0$$ $$-x-y=1 \text{ for } x \ge 0, y \le 0 \text{ or } x \le 0, y\ge 0$$ $$-x+y=1 \text{ for } x, y \ge 0 \text{ or } x , y \le 0$$ These extra constraints conveniently correspond to pairs of quadrants, omitting Quadrants I, III and II respectively. Plotting these lines according to these constraints will avoid the extra lines in the 'undesired' quadrants.
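A small membership check (plain Python) confirms the quadrant restrictions, e.g. that the Quadrant-IV piece of $x-y=1$ is not on the curve:

```python
def on_curve(x, y, tol=1e-9):
    # does (x, y) satisfy ||x| - |y|| = 1 ?
    return abs(abs(abs(x) - abs(y)) - 1) < tol

assert on_curve(2, 1)            # x - y = 1 with x, y >= 0: kept
assert on_curve(-1, -2)          # x - y = 1 with x, y <= 0: kept
assert not on_curve(0.5, -0.5)   # x - y = 1 in Quadrant IV: omitted
```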
Find matrix $P$ such that $P^{-1}AP=B$
The computation is correct, and we cannot expect that $P$ is unique in general. A somewhat extreme case is $A=B=0$, where any invertible $P$ will do.
The Polynomials $P_n(x+y)$
If we already know that $P_{n}(x)=(1+x)^{n}$ then it is straightforward: $$ \begin{aligned} P_{n}(x+y)&=((1+x)+y)^{n}\\ &=\sum_{k=0}^{n}{\binom{n}{k}(1+x)^{k}y^{n-k}}\\ &=\sum_{k=0}^{n}{\binom{n}{k}P_{k}(x)y^{n-k}} \end{aligned} $$
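The identity is easy to spot-check numerically with $P_n(x)=(1+x)^n$:

```python
from math import comb

def P(n, x):
    return (1 + x) ** n

n, x, y = 5, 2.0, 3.0
lhs = P(n, x + y)  # (1 + x + y)^n = 6^5 = 7776
rhs = sum(comb(n, k) * P(k, x) * y ** (n - k) for k in range(n + 1))
assert abs(lhs - rhs) < 1e-9
```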
Number of Generators and Localization
The comment above should also answer 1) and 2): Let $V$ be an $n$-dimensional vector space over $\mathbb F_2$. Take $R=\prod_{i \geq 1} \mathbb F_2$ and $M=\prod\limits_{1 \leq i \leq N} V \times \prod\limits_{i > N} 0$.
Conditions for a topological group to be a Lie group.
Look at Terence Tao's notes on Hilbert's fifth problem: http://terrytao.wordpress.com/books/hilberts-fifth-problem-and-related-topics/ The first chapter contains an overview and an explanation of the philosophy behind all this.
Solving set of equations
Consider the system $S$ of the first five equations $pa_{i-1} + (1 - p)a_{i+1} = a_i$. Since $S$ is clearly invariant under the map $a_i \mapsto a_i + t$ for any fixed $t$, assume without loss of generality that $a_0 = 0$. It's then easy to verify that the only solution of $S$ with $a_0 = 0$ is $a_i = 0$ for all $i$. Thus the general solution to $S$ is $a_i = t$ for all $i$, for some fixed $t$, and the only solution to the entire given system is $a_i = \frac{1}{5}$ for all $i$.
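A quick numeric check that a constant sequence satisfies each equation $pa_{i-1}+(1-p)a_{i+1}=a_i$ (the value of $p$ below is an arbitrary placeholder):

```python
p = 0.3                 # any fixed 0 < p < 1
a = [1 / 5] * 7         # the constant solution a_i = 1/5
for i in range(1, len(a) - 1):
    assert abs(p * a[i - 1] + (1 - p) * a[i + 1] - a[i]) < 1e-12
```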
Show $g^{-1} \wedge h^{-1}=(g\vee h)^{-1}$
Recall that $g \vee h$ is the least upper bound (supremum) of $g , h$. This means that anything which is $\geq$ both $g$ and $h$ must also be $ \geq g \vee h$.
Show that the matrix is positive definite
Try to verify that its leading principal minors are all positive (Sylvester's criterion).
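A small sketch of the criterion (the matrix from the question is not shown here, so the matrices below are my own examples):

```python
def det(M):
    # Laplace expansion along the first row; fine for small matrices
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def is_positive_definite(A):
    # Sylvester's criterion for a symmetric matrix A:
    # positive definite iff every leading principal minor is positive
    return all(det([row[:k] for row in A[:k]]) > 0 for k in range(1, len(A) + 1))

assert is_positive_definite([[2, 1], [1, 2]])        # minors 2, 3
assert not is_positive_definite([[1, 2], [2, 1]])    # det = -3
assert is_positive_definite([[4, 1, 1], [1, 3, 0], [1, 0, 2]])  # minors 4, 11, 19
```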