Find $ \lim_{n\rightarrow \infty}\int_{a}^{b} \frac{\sin (nx)}{x} dx$
$$\int_a^b\frac{\sin nx}x\,dx=\left[-\frac{\cos nx}{nx}\right]_a^b- \int_a^b\frac{\cos nx}{nx^2}\,dx =\frac{\cos na}{na}-\frac{\cos nb}{nb}-\frac1n\int_a^b\frac{\cos nx}{x^2}\,dx.$$ There are some very convenient denominators here!
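Those convenient denominators make the decay quantitative: every term above is $O(1/n)$, so the limit is $0$ whenever $0 < a < b$. A quick numerical sketch (with $a=1$, $b=2$ as sample endpoints of my own choosing):

```python
import numpy as np

a, b = 1.0, 2.0  # sample endpoints with 0 < a < b

def I(n, m=200_001):
    # composite trapezoidal rule on a grid fine enough to resolve sin(nx)
    x = np.linspace(a, b, m)
    y = np.sin(n * x) / x
    return float(np.sum((y[:-1] + y[1:]) * 0.5) * (b - a) / (m - 1))

for n in (10, 100, 1000):
    print(n, I(n))  # magnitudes shrink roughly like 1/n
```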
Find a metric on $X = \mathbb{R}^2 - \{(0,0), (0,1)\}$ so $(X,D)$ is complete and topologically equivalent to $(X,d)$.
HINT: Let $p=\langle 0,0\rangle$ and $q=\langle 0,1\rangle$, for $x\in X$ let $$f(x)=\frac1{\min\{d(x,p),d(x,q)\}}\,,$$ and for $x,y\in X$ let $$D(x,y)=d(x,y)+|f(x)-f(y)|\,.$$ Show that $D$ generates the same topology on $X$ as $d$ and that $\langle X,D\rangle$ is complete. More generally, if $G$ is open in a complete metric space $\langle X,d\rangle$, let $$f(x)=\frac1{d(x,X\setminus G)}$$ and $$D(x,y)=d(x,y)+|f(x)-f(y)|\,;$$ then $D$ generates the same topology on $X$ as $d$, and $\langle X,D\rangle$ is complete.
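A concrete way to see why the correction term $|f(x)-f(y)|$ works (a sketch; the sequence $x_k=(1/k,0)$ is my own sample): it is $d$-Cauchy and $d$-converges to the deleted point $p$, but $f$ blows up along it, so it is not $D$-Cauchy and completeness of $D$ is not threatened.

```python
import math

p, q = (0.0, 0.0), (0.0, 1.0)

def d(u, v):
    return math.hypot(u[0] - v[0], u[1] - v[1])

def f(u):
    # 1 / (distance to the nearer deleted point)
    return 1.0 / min(d(u, p), d(u, q))

def D(u, v):
    return d(u, v) + abs(f(u) - f(v))

x = lambda k: (1.0 / k, 0.0)  # d-converges to the deleted point p

print(d(x(100), x(200)))  # tiny: the sequence is d-Cauchy
print(D(x(100), x(200)))  # huge: it is not D-Cauchy
```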
A Problem From My Exam
Ratio of diameters: $$ \begin{align} AB : BC : CD = 1 : 2 : 3 = r_{1} : r_{2} : r_{3} \end{align} $$ The outer semicircular arc has length $\pi r = 24 \pi$, so $$ r = 24 $$ Add up the segments: $$ \begin{align} r &= r_{1} + r_{2} + r_{3} \\ &= r_{1} + 2 r_{1} + 3 r_{1} = 6 r_{1} \\ 24 &= 6 r_{1} \\ 4 &= r_{1} \end{align} $$ Shaded area: $$ \begin{align} A &= \frac{\pi}{2} \left( r^{2} - \left( r^{2}_{1} + r^{2}_{2} + r^{2}_{3} \right) \right) \\ &=\frac{\pi}{2} \left( 24^{2} - \left( 4^{2} + 8^{2} + 12^{2} \right) \right) \\ &= 352 \frac{\pi}{2} = \boxed{176 \pi} \end{align} $$
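The arithmetic checks out (a quick sketch):

```python
import math

r1 = 24 / (1 + 2 + 3)      # radii in ratio 1:2:3 summing to r = 24
r2, r3 = 2 * r1, 3 * r1
shaded = (math.pi / 2) * (24**2 - (r1**2 + r2**2 + r3**2))
print(shaded / math.pi)    # approximately 176
```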
Definition of the score function.
but $$f(x;p) = \prod_{i=1}^{n} f(x_i; p) $$ no? ... so both definitions are actually the same. I do not see the difference, the log of a product is the sum of the logs...
Minimising expected square of differences to random variables
I think I have an answer. First case, $\theta_P$. The first observation is that since $M$ is uniform and independent over $\{1,2,3\}$, we have that $$ \mathbb E[(\theta_P(X_1, X_2, X_3) - X_M)^2] = \frac 13 \mathbb E \left[\sum_{m=1}^3 (\theta_P(X_1, X_2, X_3) - X_m)^2 \right], $$ where the latter does not involve the random $M$ any more. The second fact (very obvious!) is that if $A \le B$ then $\mathbb E[A] \le \mathbb E[B]$. Hence, we can try to find the solution as $$ \theta_P(x_1, x_2, x_3) = \arg \min_t \left( \sum_{m=1}^3 (t-x_m)^2 \right) $$ where $x_1, x_2, x_3$ are treated as constants. In other words, we try to minimise $\theta_P$ pointwise. By using basic calculus, the optimal $t$ is then $$ t = \frac{x_1 + x_2 + x_3}3, $$ which is also the expression for the optimal $\theta_P(x_1, x_2, x_3)$. Surprisingly enough, the solution does not depend on the distribution of $(X_1, X_2, X_3)$. Second case, $\xi_P$. Again, we do the optimisation for each triple of fixed values $(X_1, X_2, X_3)$ separately, i.e. $$ \xi_P(x_1, x_2, x_3) := \arg \min_t \left( \max \left( (t-x_1)^2, (t-x_2)^2, (t-x_3)^2 \right) \right). $$ We observe that $$ \max \left( (t-x_1)^2, (t-x_2)^2, (t-x_3)^2 \right) = \max \left( (t-x_{\min})^2, (t-x_{\max})^2 \right), $$ where $x_{\min} = \min(x_1, x_2, x_3)$ and $x_{\max} = \max(x_1, x_2, x_3)$. Finally, $$ \max( (t-x_\min)^2, (t-x_\max)^2) = \begin{cases} (t-x_\max)^2, & t < \frac{x_\min + x_\max}{2},\\ (t-x_\min)^2, & t \ge \frac{x_\min + x_\max}{2}, \end{cases} $$ and the global minimum is achieved at $t = \frac{x_\min + x_\max}{2}$. Altogether, $$ \xi_P(x_1, x_2, x_3) = \frac{\min(x_1, x_2, x_3) + \max(x_1, x_2, x_3)}2. $$ Again, it does not depend on the distribution of $X_1, X_2, X_3$.
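Both pointwise minimisations are easy to confirm by brute force (a sketch with one sample triple of my own choosing):

```python
x1, x2, x3 = 0.3, 1.7, 4.0   # arbitrary sample observations

def sum_sq(t):
    # objective for theta_P: sum of squared deviations
    return (t - x1)**2 + (t - x2)**2 + (t - x3)**2

def max_sq(t):
    # objective for xi_P: worst squared deviation
    return max((t - x1)**2, (t - x2)**2, (t - x3)**2)

grid = [i / 1000 for i in range(-1000, 6001)]
t_mean = min(grid, key=sum_sq)
t_mid  = min(grid, key=max_sq)

print(t_mean)  # 2.0  = (x1 + x2 + x3)/3
print(t_mid)   # 2.15 = (min + max)/2
```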
Strange closed forms for hypergeometric functions
I'm going to only cover conjecture $(4)$. We can use elliptic functions to show it is true. To simplify the setup, we will adopt all conventions and notation as in this answer. For $x \in (0, 1)$, let $F(x) = x\,_3F_2(1,1,\frac54;2,\frac74;x)$; we have $$ F(x) = \sum_{k=0}^\infty \frac{x^{k+1}}{k+1}\frac{(\frac54)_k}{(\frac74)_k} = \frac{\Gamma(\frac74)}{\Gamma(\frac12)\Gamma(\frac54)} \sum_{k=0}^\infty \frac{x^{k+1}}{k+1}\int_0^1 t^{\frac14+k}(1-t)^{-\frac12}dt\\ = -\frac{3}{2\sqrt{2}\omega}\int_0^1 \log(1-xt) t^{-\frac34}(1-t)^{-\frac12} dt $$ Let $x = \alpha^2$, $\beta = \frac{1-\alpha}{1+\alpha}$, substitute $t$ by $\left(\frac{p-\frac12}{p + \frac12}\right)^2$ and then $p$ by $\wp(z)$; we have: $$\begin{align} F(\alpha^2) &= -\frac{3}{\omega}\int_{\infty}^{\frac12} \log\left[(1-\alpha^2)\frac{(p+\frac12\beta)(p+\frac12\beta^{-1})}{(p+\frac12)^2}\right] \frac{dp}{\sqrt{4p^3-p}}\\ &= -\frac{3}{2\omega}\int_{-\omega}^\omega \log\left[(1-\alpha^2)\frac{(\wp(z)+\frac12\beta)(\wp(z)+\frac12\beta^{-1})}{(\wp(z)+\frac12)^2}\right] dz \end{align}\tag{*1} $$ Noticing $1 - (\sqrt{2}-1)^2 = 2 (\sqrt{2}-1)$, conjecture $(4)$ can be rewritten as $$F((\sqrt{2}-1)^2) \stackrel{?}{=}\frac34\left[ \frac{\pi}{2} - 3\log 2\right] - 3 \log\left[1 - (\sqrt{2}-1)^2\right]$$ Comparing this with $(*1)$, we find conjecture $(4)$ is equivalent to $$\frac{1}{2\omega}\int_{-\omega}^\omega \log\left[\frac{(\wp(z)+\frac12\beta)(\wp(z)+\frac12\beta^{-1})}{(\wp(z)+\frac12)^2}\right] dz \stackrel{?}{=}\frac{1}{4}(3\log 2 - \frac{\pi}{2})\tag{*2} $$ when $\alpha = \beta = \sqrt{2}-1$. As in the other answer, we can express the LHS of $(*2)$ using the Weierstrass sigma function.
Before we do that, I would like to point out $$\left(\frac{d}{dz}\wp(z)\right) ^2 = 4 \wp(z)^3 - \wp(z) \;\;\implies\;\; \left(\frac{d}{dz}\frac{1}{4\wp(iz)}\right)^2 = 4 \left(\frac{1}{4\wp(iz)}\right)^3 - \left(\frac{1}{4\wp(iz)}\right) $$ One consequence of this is that if we pick $\rho$ such that $\wp( \pm ( \omega' + \rho) ) = -\frac12\beta$, then $$\wp(\pm ( \omega' + i\rho)) = \wp(\pm( \omega' - i\rho)) = -\frac12\beta^{-1} \quad\text{ and }\quad\wp( \pm( \omega' - \rho)) = -\frac12\beta$$ When $\alpha = \beta = \sqrt{2}-1$, we can pick $\rho = \frac{\omega}{2}$. The LHS of $(*2)$ becomes $$\frac{1}{2\omega}\int_{-\omega}^{\omega} \left\{ \sum_{k=0}^3\log\left[ \frac{\sigma(z + \omega' + i^k\rho)}{\sigma(\omega' +i^k\rho)} \right] - 4 \log\left[ \frac{\sigma(z + \omega')}{\sigma(\omega')} \right] \right\} dz$$ One can perform the same analysis as in my other answer by playing with the $\varphi_{\pm}(\tau)$ functions. Because of the symmetry of $\wp(z)$ around the point $z = \omega' = i\omega$, their contributions cancel each other out. The net result is $$\text{LHS}[*2] = \log\left[\frac{\sigma(\omega')^4}{\prod_{k=0}^3\sigma(\omega' + i^k\rho)}\right]\tag{*3}$$ For Weierstrass elliptic functions with general $g_2, g_3$, we know $$\begin{cases} \wp'(z) &= -\frac{\sigma(2z)}{\sigma(z)^4}\\ \sigma(z+2\omega) &= -e^{2\eta(z+\omega)}\sigma(z)\\ \sigma(z+2\omega') &= -e^{2\eta'(z+\omega')}\sigma(z) \end{cases}$$ This implies $$\sigma(z+\omega)^4 = -\frac{\sigma(2z+2\omega)}{\wp'(z+\omega)} = e^{2\eta(2z+\omega)}\frac{\sigma(2z)}{\wp'(z+\omega)} \quad\implies\quad \sigma(\omega)^4 = e^{2\eta\omega}\frac{2}{\wp''(\omega)}$$ For our case where $(g_2,g_3) = (1,0)$, we know $\eta = \frac{\pi}{4\omega}$; this leads to $\sigma(\omega) = e^{\frac{\pi}{8}}\sqrt[4]{2}$. Let me call this number $\Omega$. When $g_3 = 0$, the double poles of $\wp(z)$ form a square lattice. This 4-fold symmetry around the origin gives us $\sigma(i\omega) = i\sigma(\omega) = i\Omega$.
The values of the sigma function at the other points in $(*3)$ can be deduced in a similar manner: $$\begin{align} \left|\sigma\left(\frac{\omega'}{2}\right)\right| &= \left|\frac{\sigma(\omega')}{\wp'(\frac{\omega'}{2})}\right|^{1/4} = \left(\frac{\Omega}{\sqrt{2}+1}\right)^{1/4}\\ \left|\sigma\left(\frac{3\omega'}{2}\right)\right| &= \left|e^{\eta'\omega'}\sigma\left(-\frac{\omega'}{2}\right)\right| = e^{\pi/4}\left(\frac{\Omega}{\sqrt{2}+1}\right)^{1/4}\\ \left|\sigma\left(\omega'\pm\frac{\omega}{2}\right)\right| &= \left|\frac{\sigma\left(2\omega'\pm\omega\right)}{\wp'(\omega'\pm\frac{\omega}{2})}\right|^{1/4} = \left|e^{2\eta'(\omega'\pm\omega)}\frac{\Omega}{\sqrt{2}-1}\right|^{1/4} = e^{\pi/8}\left(\frac{\Omega}{\sqrt{2}-1}\right)^{1/4} \end{align}$$ Combining all this, we find $$\text{LHS}[*2] = \log\left(\frac{\Omega^4}{e^{\pi/2}\Omega}\right) = 3\log\Omega - \frac{\pi}{2} = 3\left(\frac14\log 2 + \frac{\pi}{8}\right) - \frac{\pi}{2} = \frac14\left(3\log 2 - \frac{\pi}{2}\right)$$ i.e. conjecture $(4)$ is true. Update: It turns out there is a cleaner algebraic relation between the hypergeometric function in conjecture $(4)$ and elliptic functions.
Using the addition formula for the sigma function: $$\wp(z)-\wp(u) = - \frac{\sigma(z+u)\sigma(z-u)}{\sigma(z)^2\sigma(u)^2}$$ we find $$\frac{\sigma(\omega')^4}{\prod_{k=0}^3\sigma(\omega'+ i^k\rho)} =\left(\frac{1}{\sigma(\rho)^2(\wp(\omega')-\wp(\rho))}\right) \left(\frac{1}{\sigma(i\rho)^2(\wp(\omega')-\wp(i\rho))}\right) $$ Notice $$\begin{cases} \wp(\omega'\pm\rho) &= -\frac12\beta\\ \wp(\omega'\pm i\rho) &= -\frac12\beta^{-1} \end{cases} \quad\implies\quad \begin{cases} \wp(\pm \rho) &= \frac{1}{2\alpha}\\ \wp(\pm i\rho) &= -\frac{1}{2\alpha}\\ \end{cases} $$ We get $$\frac{\sigma(\omega')^4}{\prod_{k=0}^3\sigma(\omega'+ i^k\rho)} = \frac{4\alpha^2}{\sigma(\rho)^4(1-\alpha^2)} $$ which in turn implies a simple relation between $F(\alpha^2)$ and $\sigma(\rho)$: $$_3F_2\left(1,1,\frac54; 2,\frac74; \alpha^2\right) = \frac{F(\alpha^2)}{\alpha^2} = \frac{12}{\alpha^2}\log\left(\frac{\sigma(\rho)}{\sqrt{2\alpha}}\right)$$ When $\alpha = \sqrt{2}-1$, $\rho = \frac{\omega}{2}$, and since $\sigma(\frac{\omega}{2}) = \left(\frac{\Omega}{\sqrt{2}+1}\right)^{1/4}$, we again obtain $$_3F_2\left(1,1,\frac54; 2,\frac74; (\sqrt{2}-1)^2\right) = \frac{3}{(\sqrt{2}-1)^2}\log\left(\frac{e^{\pi/8}\sqrt[4]{2}}{(\sqrt{2}+1)4(\sqrt{2}-1)^2}\right) = \frac{3}{4(\sqrt{2}-1)^2}\left[\frac{\pi}{2} - 7\log 2 - 4\log(\sqrt{2}-1)\right] $$ Setting $\rho$ to other rational multiples of $\omega$, we can deduce a bunch of similar identities.
For example, when $\rho = \omega$, it is clear that $\alpha = 1$ and $\sigma(\omega) = \Omega$; this leads to $$ _3F_2\left(1,1,\frac54; 2,\frac74; 1\right) = 12\log\left(\frac{\Omega}{\sqrt{2}}\right) = \frac32\left(\pi- 2\log 2\right) $$ When $\rho = \frac{2\omega}{3}$, one can use the triplication formula for the sigma function $$\frac{\sigma(3z)}{\sigma(z)^9} = 3\wp(z)\wp'(z)^2 - \frac14 \wp''(z)^2$$ to show $$\alpha = \sqrt{2\sqrt{3}-3} \quad\text{ and }\quad \sigma\left(\frac{2\omega}{3}\right) = e^{\pi/18} 3^{1/8} (2-\sqrt{3})^{1/12}$$ This leads to the identity $$_3F_2\left(1,1,\frac54; 2,\frac74; 2\sqrt{3}-3\right) = \frac{1}{2\sqrt{3}-3}\left(\frac{2\pi}{3} - 6\log 2 - 2\log(2-\sqrt{3})\right) $$
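As a numerical sanity check of the $\alpha = \sqrt{2}-1$ identity, one can sum the defining series directly (a plain-Python sketch, no special-function library needed; the series for $_3F_2(1,1,\frac54;2,\frac74;x)$ is $\sum_k \frac{x^k}{k+1}\frac{(5/4)_k}{(7/4)_k}$, as in the derivation of $F$ above):

```python
import math

def f32(x, terms=200):
    # 3F2(1,1,5/4; 2,7/4; x) = sum_{k>=0} x^k/(k+1) * (5/4)_k / (7/4)_k
    s, poch = 0.0, 1.0
    for k in range(terms):
        s += x**k / (k + 1) * poch
        poch *= (1.25 + k) / (1.75 + k)   # update the Pochhammer ratio
    return s

x = (math.sqrt(2) - 1) ** 2
lhs = f32(x)
rhs = 3 / (4 * x) * (math.pi / 2 - 7 * math.log(2) - 4 * math.log(math.sqrt(2) - 1))
print(lhs, rhs)  # both approximately 1.06774
```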
Indefinite summation of polynomials
Your "diff" is actually called the (backward) finite difference. \begin{align} \nabla_1 [ P ](x) &= P(x) - P(x-1) \\ &= P(x-1+1) - P(x-1) \\ &= \Delta_1[ P ](x-1) \end{align} The inverse of the forward finite difference is called the indefinite sum. Extracted from Wikipedia, the useful formulae for polynomials are: \begin{align} \Delta^{-1}_1 x^n &= \frac{B_{n+1}(x)}{n+1} + C \\ \Delta^{-1}_1 af(x) &= a \Delta^{-1}_1 f(x) \end{align} ($B_{n+1}(x)$ is the Bernoulli polynomial.) The $\Delta^{-1}_1$ can be converted back to your "int" by substituting $x \mapsto x + 1$.
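For instance, with $n=2$ the first formula gives $\Delta^{-1}_1 x^2 = \frac{B_3(x)}{3} + C$, where $B_3(x) = x^3 - \frac32 x^2 + \frac12 x$; telescoping this against the explicit sum of squares is a quick check (a sketch using exact fractions):

```python
from fractions import Fraction

def B3_over_3(x):
    # B_3(x)/3 with B_3(x) = x^3 - (3/2)x^2 + (1/2)x
    x = Fraction(x)
    return (x**3 - Fraction(3, 2) * x**2 + Fraction(1, 2) * x) / 3

# Delta^{-1} x^2 evaluated between 0 and m telescopes to sum_{k=0}^{m-1} k^2
for m in range(1, 10):
    assert B3_over_3(m) - B3_over_3(0) == sum(k * k for k in range(m))
print("ok")
```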
Computation of a likelihood for a discrete variable
As Did laconically remarked, the answer is "Yes." Note that you can also write the product as $$ L(Z,\lambda)=\left(\mathrm e^{-\lambda c}\right)^{n_0}\left(1-\mathrm e^{-\lambda c}\right)^{n_1}\;. $$
Why is $\equiv$ used for functions that are Identically Zero?
The sign you are referring to is the standard symbol for "identically equal to". For example, a more usual context in which to see that symbol (at least in pre-calculus classes) might be: $$(x-1)^{2} \equiv x^{2}-2x+1,$$ where the presence of the symbol indicates that the equation holds for all values of $x$. The reason for its use in this specific context is that the author wants to emphasize that the function is vanishing everywhere, i.e., identically vanishing, i.e., "identically equal to" $0$. Note that many authors intentionally do not use $\equiv$ when they are defining things; I presume they maintain this practice so that questions like this don't come up. Instead, such authors might use $:=$, which has the advantage that it is meaningfully reversible, and many authors don't use a specific symbol and just use $=$, stating that the equation is meant to be taken as a definition in prose.
Find the $n$ value when sum of first $n$ terms in GP modulo $p$ is given
This question is from a live contest (HackerEarth June Circuit). You could ask it after the contest. Edit: sorry, but I don't have enough reputation to comment.
Probability of an event that can trigger off itself
The expected number of lever pulls with one coin is: $E=1\cdot P(1)+2\cdot P(2)+3\cdot P(3)+ \dots = \sum_{n=1}^\infty n\cdot P(n)$ where $P(n)$, the probability of pulling the lever $n$ times, is $(\frac {1}{10})^{n-1}\cdot\frac {9}{10}$. The $\frac {9}{10}$ occurs when you don't get another free pull, terminating the run. You can multiply the $\frac {1}{10}$s together because the probability of getting another free lever pull is independent of the past results. This gives us $E=\sum_{n=1}^\infty \frac{9n}{10^n} = 9\cdot\sum_{n=1}^\infty \frac{n}{10^n} =9\cdot(\frac {1}{10}+\frac{2}{100}+\frac{3}{1000}+\dots)$ This is an arithmetico-geometric series, which sums to $\frac{10}{81}$, so $E = 9\cdot \frac{10}{81}=\frac{10}{9}$ as your intuition suggested.
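The series and a direct simulation of the process agree (a quick sketch; sample size and seed are arbitrary):

```python
import random

# direct evaluation of the series E = sum 9n/10^n
E_series = sum(9 * n / 10**n for n in range(1, 60))

# Monte Carlo: each pull grants another free pull with probability 1/10
random.seed(0)
def run():
    pulls = 0
    while True:
        pulls += 1
        if random.random() >= 0.1:   # no free pull: the run ends
            return pulls

E_sim = sum(run() for _ in range(200_000)) / 200_000
print(E_series, E_sim)  # both near 10/9 = 1.111...
```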
Finding a non-recursive closed from expression for repeating patterns with gaps
Let the period be $p$ (not necessarily prime). As you noticed, you can break down your recurrence into a purely periodic aspect, and a 'constant' on consecutive terms aspect. The constant part is easily dealt with using $ r \lfloor \frac{n-1}{p} \rfloor $, where $r$ is the amount by which it increases after each period. Let $\omega$ be a primitive $p$th root of unity. Consider the $p$-tuple $A_k = (\omega^k, \omega^{2k}, \ldots , \omega^{pk} ) $, for $1 \leq k \leq p$. Then, these $p$ tuples are linearly independent, which follows from considering the Vandermonde determinant. Hence, the initial terms can be written as a linear combination of these tuples. E.g. in your example of $1, 2, 6, 7, \ldots$, we have $p = 2$ and $r=5$. Here $-1$ is the primitive 2nd root of unity, and so $A_1 = (-1,1) $ and $A_2 = (1,1)$. We can form $(1,2) = \frac{1}{2} A_1 + \frac{ 3}{2} A_2$. Hence, your general term is $\frac{1}{2} (-1)^n + \frac{3}{2} (1)^n + 5\lfloor \frac{n-1}{2} \rfloor$ If you do not want the floor function to be involved, you can simply use $\frac{rn}{p}$, which will change your initial values slightly. In this case, the terms have the form $$ -\frac{3}{2} + \frac{5}{2}, -3 + \frac{5\times 2}{2}, -\frac{3}{2} + \frac{5\times 3}{2}, -3 + \frac{5\times 4}{2}, \ldots $$ We can form $(-\frac{3}{2}, -3) = \frac{-3}{4} A_1 + \frac{-9}{4} A_2$. Hence, your general term is $\frac{-3}{4} (-1)^n + \frac{-9}{4} (1)^n + \frac{5n}{2}$, which agrees with Wolfram.
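Both closed forms can be checked against the sequence $1, 2, 6, 7, 11, 12, \ldots$ (a quick sketch):

```python
def a_floor(n):
    # (1/2)(-1)^n + 3/2 + 5*floor((n-1)/2)
    return 0.5 * (-1)**n + 1.5 + 5 * ((n - 1) // 2)

def a_linear(n):
    # (-3/4)(-1)^n - 9/4 + 5n/2
    return -0.75 * (-1)**n - 2.25 + 2.5 * n

expected = [1, 2, 6, 7, 11, 12, 16, 17]
print([a_floor(n) for n in range(1, 9)])   # [1.0, 2.0, 6.0, 7.0, 11.0, 12.0, 16.0, 17.0]
print([a_linear(n) for n in range(1, 9)])  # same values
```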
Why is the middle third Cantor set written as this?
It's called the middle-thirds Cantor set because in general you can construct a class of sets with similar properties using a similar but scaled construction. For example, you can start with $[0,1]$ and remove an interval of length $\frac{1}{2}$ from the center. Then you have $2$ intervals of length $\frac{1}{4}$; remove an interval of length $\frac{1}{8}$ from the center of each. Inductively, at each step you have a disjoint union of intervals of length $l$ remaining in your set, and you remove an interval of length $l/2$ from each interval. This set, which we might call the Cantor "middle-halves" set, has many of the same properties as the Cantor middle-thirds set. From this you can imagine constructing the "middle-fourths" set and many other Cantor-type sets. The middle-thirds set is sort of standard because it's the easiest Cantor-type set to construct (mainly, it's easy to figure out the lengths of the intervals at each step). The reason for the ternary expansion is precisely as you stated: the middle-thirds construction removes exactly those numbers whose every ternary expansion contains a $1$. You should check this for yourself for a few cases: for example, check if $0.1abcd...$ (ternary) can be in the middle-thirds set, then $0.01abcd...$, $0.21abcd...$, and so on. Then you'll see why the Cantor set construction removes them. This excludes cases like $1/3$, which lies in the Cantor set and has ternary expansion $0.1000...$, because it can also be written $0.0222...$ .
Is every normal subgroup of $G$ of this form?
No. Consider for example the group $G=\Bbb Z/2\Bbb Z\times \Bbb Z/2\Bbb Z$. It has the (normal) subgroup $\{(0,0),(1,1)\}$ which is not of the desired form. However under certain conditions it is true, for example when $H,K$ are finite of coprime order.
Calculate the Kernel of $D$ with char $F=p$
$f'(x)=0$ iff ($n a_n = 0$ for all $n$) iff ($p \mid n$ or $a_n=0$, for all $n$) iff ($p \nmid n \Rightarrow a_n = 0$ for all $n$) iff $f(x)$ is a polynomial in $x^p$.
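A quick check over $\mathbb F_p$ (a sketch with $p=5$ and polynomials as coefficient lists):

```python
import random
p = 5
random.seed(1)

def deriv(coeffs):
    # formal derivative of f = sum a_n x^n, computed over F_p
    return [(n * a) % p for n, a in enumerate(coeffs)][1:]

for _ in range(100):
    f = [random.randrange(p) for _ in range(12)]
    df_is_zero = all(c == 0 for c in deriv(f))
    is_poly_in_xp = all(a == 0 for n, a in enumerate(f) if n % p)
    assert df_is_zero == is_poly_in_xp

# e.g. f(x) = 1 + 2x^5 + 3x^10 is a polynomial in x^5, so f' = 0 in F_5[x]:
g = [1, 0, 0, 0, 0, 2, 0, 0, 0, 0, 3, 0]
assert all(c == 0 for c in deriv(g))
print("ok")
```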
Expression of a matrix in the basis formed by its own eigenvectors
If $Av_i= \lambda_i v_i$ for $i=1,...,n$, then the representation matrix $R$ of $A$ with respect to the basis $\{ v_i \}_{i=1}^n$ is given by $$R=\operatorname{diag} (\lambda_1,...,\lambda_n).$$
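A numeric sketch with numpy (the matrix is my own sample; its columns of eigenvectors form the change-of-basis matrix):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # sample symmetric matrix, eigenvalues 1 and 3
lam, V = np.linalg.eig(A)           # columns of V are the eigenvectors v_i

R = np.linalg.inv(V) @ A @ V        # representation of A in the eigenbasis
print(np.round(R, 10))              # diagonal matrix with the eigenvalues of A
```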
Logic - "Small model property" for a signature with only binary predicates
It is relatively easy to construct a language that has no constants and only one binary predicate $<$, and a single formula in that language that captures what you know about $<$ on $\mathbb{N}$, namely that it is a total strict ordering with no largest element. You should try this yourself. You can then show that any model must be infinite. Similarly, there is a formula in a language with just one unary function that can only be satisfied by infinite models. Again, we can find such a formula by looking at the properties of the successor function on the naturals.
Probability within probability
If you know that $X_0=n$, then each $X_k$ has probability $(1-p)^n$ of failing to be bigger than $X_0$, thus (assuming the $X_k$ are independent) $T$ is distributed geometrically with parameter $1-(1-p)^n$. More precisely, $T$ is a discrete random variable and $X_0$ is a discrete random variable, so $E(T|X_0)$ here is simply a discrete random variable with $$E(T|X_0)(n)=\sum_{t=0}^\infty tP(T=t|X_0=n)$$
Identical metric spaces
Triangle inequality: If $x,y,z$ are all distinct then $x \leq \max \{x,z\}\leq d(x,z)+d(z,y)$ and $y \leq \max \{y,z\}\leq d(x,z)+d(z,y)$. Hence $d(x,y) =\max \{x,y\} \leq d(x,z)+d(z,y)$. The case when the points are not distinct is trivial.
If $E$ and $K$ are splitting fields, then $E\cap K$ is a splitting field for what polynomial?
Try the above for example with $$E=\Bbb Q(\sqrt2)\;,\;\;K=\Bbb Q(\sqrt3)\;\;\text{over}\;\;\Bbb Q\implies E\cap K=\Bbb Q$$ So the intersection is the splitting field of all the rational linear polynomials. But if $$E=\Bbb Q(\sqrt[4]2,i)\;,\;\;K=\Bbb Q(i)\;\;\text{over}\;\;\Bbb Q\implies\;E\cap K=K\;\ldots$$ Thus I don't think there's a general rule for this.
How to measure the length of a given curve precisely?
$$ ds = \text{infinitesimal increment of arc length} = \sqrt{(dx)^2+(dy)^2}. $$ That is an application of the Pythagorean theorem. Therefore $$ \text{arc length} = \int_{x=2.2}^{x=33.64} \sqrt{(dx)^2+(dy)^2}. $$ Then do a bit of algebra: \begin{align} & dy = f'(x)\,dx, \text{ so } (dy)^2 = f'(x)^2(dx)^2, \text{ and consequently } \\[10pt] & \sqrt{(dx)^2+(dy)^2} = \sqrt{(dx)^2+f'(x)^2(dx)^2} = \sqrt{1+f'(x)^2} \sqrt{(dx)^2} = \sqrt{1+f'(x)^2} \ dx. \end{align}
Determine for which values the series $\sum_{k=1}^\infty \frac{b^{k^{2}}}{k!}$ is convergent.
From your attempt one gets that, for $|b|\le1$, as $k \to \infty$, $$ \left| \frac{u_{k+1}}{u_k}\right|=\left|\frac{b^{2k+1}}{k+1}\right|\le \frac{|b|^{2k+1}}{k+1} \le \frac1{k+1} \to 0 $$ Then, by the ratio test the series is convergent for $|b|\le 1$. One may observe that, for $|b|>1$, $$ \lim_{k \to \infty}\left| \frac{u_{k+1}}{u_k}\right|= \lim_{k \to \infty}\frac{|b|^{2k+1}}{k+1} =\infty $$ and the series is divergent for $|b|>1$.
Completion of local ring
The completed local ring depends on information which is a lot more "local" than what the Zariski topology sees. For instance, for any smooth point $x$ in a variety $X$ with $\dim_x X = n$, the completed local ring will just be $K[[x_1,\cdots,x_n]]$. So the only chance of nontrivial $f$ would be if the point were singular. Even here, the completed local ring doesn't see much global information. Consider the following two examples: $V(xy)$ and $V(y^2=x^2(x^2-1)(x-2))$ both as subschemes of $\Bbb A^2_K$. It can be computed that the completed local ring at the origin of both of these is $K[[x,y]]/(xy)$, but there's no isomorphism between any open sets of the two varieties - up to shrinking the open set in $V(xy)$ so that it is only contained in one of the irreducible components, this would correspond to a birational map between curves of different genus, which is clearly nonsense.
Sum of two multinomial random variables
It would be easier to use characteristic functions. \begin{equation} CF_{\text{Multinomial}(n,(p_1,...,p_k))}(t_1,...,t_k) = \bigg(\sum_{j=1}^k p_je^{it_j}\bigg)^n \end{equation} As the CF of a sum of random variables is a product of their CFs, it is easy to spot that \begin{equation} X \sim \text{Multinomial}(n_1+n_2,(p_1,p_2...p_k)) \end{equation} as the equality of CFs induces equality of distributions and \begin{equation} CF_X = CF_{Y_1+Y_2} = CF_{Y_1}CF_{Y_2} = \bigg(\sum_{j=1}^k p_je^{it_j}\bigg)^{n_1}\bigg(\sum_{j=1}^k p_je^{it_j}\bigg)^{n_2} = \bigg(\sum_{j=1}^k p_je^{it_j}\bigg)^{n_1 + n_2}= CF_{\text{Multinomial}(n_1 + n_2,(p_1,...,p_k))}(t_1,...,t_k). \end{equation}
Example of direct decomposition of a Banach space such as $C[0,1]$ into closed linear subspaces
$\{0\}$ means the subset of $X$ which consists of one element, namely the zero element of the space. In the context of functions, $0$ is the function that is identically zero. Regarding "we would now have three zero functions": no, it's the same zero function. Since $X_1\subset X$, the elements of $X_1$ are functions defined on all of $[0,1]$. We don't actually change their domain. "Splitting the domain" isn't really what is happening; it's more like "splitting the support set of functions", making them zero or not on some part of their domain. Construction. If we define $$X_1=\{f\in X:f\equiv 0 \text{ on } [1/2,1]\}$$ $$X_2=\{f\in X:f\equiv 0 \text{ on } [0, 1/2]\}$$ then it's true that $X_1\cap X_2 = \{0\}$. But the property $X=X_1\oplus X_2$ fails: every element $f\in X_1\oplus X_2$ satisfies $f(1/2)=0$, so we don't get, for example, the constant function $1$. To get a direct sum, try $$X_1=\{f\in X:f\equiv 0 \text{ on } [1/2,1]\}$$ $$X_2=\{f\in X:f\equiv c \text{ on } [0, 1/2] \text{ for some constant } c\}$$ This still satisfies $X_1\cap X_2 = \{0\}$. But now every $f\in X$ can be written as the sum according to the above: $$f_1(x)=\begin{cases} f(x) - f(1/2), & x\in [0,1/2] \\ 0, & x\in [1/2,1] \end{cases}$$ and $$f_2(x)=\begin{cases} f(1/2), & x\in [0,1/2] \\ f(x), & x\in [1/2,1] \end{cases}$$
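A quick numeric sketch of this decomposition for one sample $f$ (my choice $f(x)=x^2+1$):

```python
f = lambda x: x**2 + 1          # sample element of C[0,1]
c = f(0.5)

def f1(x):
    # f(x) - f(1/2) on [0,1/2], zero on [1/2,1]
    return f(x) - c if x <= 0.5 else 0.0

def f2(x):
    # the constant f(1/2) on [0,1/2], f(x) on [1/2,1]
    return c if x <= 0.5 else f(x)

xs = [i / 100 for i in range(101)]
assert all(abs(f1(x) + f2(x) - f(x)) < 1e-12 for x in xs)   # f = f1 + f2
assert all(f1(x) == 0.0 for x in xs if x >= 0.5)            # f1 lies in X1
assert all(f2(x) == c for x in xs if x <= 0.5)              # f2 constant on [0,1/2]
print("ok")
```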
Find A Polynomial Solution For The Legendre equation.
$$\alpha=1\qquad\to\qquad (1-x^2)y''-2xy'+2y=0 \tag 1$$ You found a particular solution $y=x$, which is correct. Or, more generally, a family of solutions: $\quad y=C\:x\quad$ where $C$ is a constant. In order to find the general solution of the ODE, one can use the method of variation of parameters. In the present case, replace the parameter $C$ by an unknown function $u(x)$: $y=u(x)\:x \quad\to\quad y'=xu'+u \quad\to\quad y''=xu''+2u'$ Putting them into $(1)$ leads to : $$x(1-x^2)u''+2(1-2x^2)u'=0$$ $$\frac{u''}{u'}=2\frac{2x^2-1}{x(1-x^2)}$$ $$\ln|u'|=2\int \frac{2x^2-1}{x(1-x^2)}dx =-\ln|1-x^2|-2\ln|x|+\text{constant}$$ $$u'=\frac{c_1}{x^2(1-x^2)}$$ $$u=c_1\int \frac{dx}{x^2(1-x^2)} = c_1\left(-\frac{1}{x}+\frac{1}{2}\ln|1+x|-\frac{1}{2}\ln|1-x| \right)+c_2$$ The general solution of $(1)$ is then $y=x\,u(x)$: $$y(x)=c_1\left(-1+\frac{1}{2}x\ln|1+x|-\frac{1}{2}x\ln|1-x| \right)+c_2x$$
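One can sanity-check the general solution numerically by plugging it back into $(1)$ (a sketch using central differences, with arbitrary sample constants $c_1, c_2$ of my own choosing):

```python
import math

def y(x, c1=1.0, c2=0.5):  # sample constants
    return c1 * (-1 + 0.5 * x * math.log(1 + x) - 0.5 * x * math.log(1 - x)) + c2 * x

h, x0 = 1e-5, 0.3
yp  = (y(x0 + h) - y(x0 - h)) / (2 * h)            # approximates y'(x0)
ypp = (y(x0 + h) - 2 * y(x0) + y(x0 - h)) / h**2   # approximates y''(x0)

residual = (1 - x0**2) * ypp - 2 * x0 * yp + 2 * y(x0)
print(residual)  # ~0 up to finite-difference error
```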
For $a,b,c$ positive real numbers can it be true that: $(ab+bc+ca)^3 \ge (a^2+2b^2)(b^2+2c^2)(c^2+2a^2)$
$\prod\limits_{cyc}(a^2+2b^2)\geq(ab+ac+bc)^3\Leftrightarrow\sum\limits_{cyc}(2a^4b^2+4a^4c^2-a^3b^3-3a^3b^2c-3a^3c^2b+a^2b^2c^2)\geq0$, which is true by Schur and AM-GM: $\sum\limits_{cyc}(2a^4b^2+4a^4c^2-a^3b^3-3a^3b^2c-3a^3c^2b+a^2b^2c^2)\geq$ $\geq\sum\limits_{cyc}(2a^4c^2+3a^3b^3-3a^3b^2c-3a^3c^2b+a^2b^2c^2)=$ $=3\sum\limits_{cyc}(a^3b^3-a^3b^2c-a^3c^2b+a^2b^2c^2)+2\sum\limits_{cyc}(a^4c^2-a^2b^2c^2)\geq0$.
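So the product side dominates (the title's inequality holds with the $\ge$ reversed, with equality at $a=b=c$); a quick random numeric check (sketch, with an arbitrary seed and sampling range):

```python
import random
random.seed(0)

for _ in range(10_000):
    a, b, c = (random.uniform(0.01, 10) for _ in range(3))
    prod = (a*a + 2*b*b) * (b*b + 2*c*c) * (c*c + 2*a*a)
    cube = (a*b + b*c + c*a)**3
    assert prod >= cube * (1 - 1e-9)   # tiny slack for floating-point rounding
print("ok")
```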
Set of rotations and translations in $\mathbb{R}^2$ is a normal subgroup of isometries group
$\mathcal{M}$ is the set of products of finitely many reflections about lines (not necessarily through the origin). $\mathcal{M_+}$ is the set of all products of an even number of reflections about lines. What you have to do is take a member $f$ of $\mathcal{M}$ and a member $g$ of $\mathcal{M_+}$ and show that there is a member $g'$ of $\mathcal{M_+}$ such that $f \circ g=g' \circ f$. This result is an easy consequence of the fact that if $\ell_1$ and $\ell_2$ are lines there exists a line $\ell_3$ in the pencil determined by $\ell_1 \text { and }\ell_2$ such that $$\omega_{\ell_1}\circ\omega_{\ell_2}=\omega_{\ell_3}\circ\omega_{\ell_1}$$ where $\omega_{\ell}$ denotes reflection about the line $\ell$.
Lebesgue's Singular Function
Show that the points where the derivative is $0$ are the interior points of the set of all points that have a base-$3$ expansion in which at least one digit is $1,$ i.e. one of the numbers you called $a_k$ is $1.$ The set of points at which $a_1=1$ has measure $1/3,$ and is a closed interval. The function described is not differentiable at the two endpoints of that interval, but it is differentiable everywhere in the interior. The set of points at which $a_2=1$ is a union of two intervals and similar comments apply. Its measure is $2/9.$ The set of points at which $a_3=1$ is a union of four intervals and similar comments apply. Its measure is $4/27.$ The sequence of these measures is geometric, so you can find their sum, and when you've done that you will see the answer to part of your question. That this function is continuous surprised me when I first saw it. But remember that the only kind of discontinuity that a monotone function can have is a jump. You just need to show that there are no jumps.
If $\vec{u}+\vec{v}+\vec{w}=\vec{0}$, then triangle $ABC$ exists with $\vec{AB}=\vec{u},\vec{BC}=\vec{v},\vec{CA}=\vec{w}$
The opposite statement is not true. It suffices to consider $A,B,C$ collinear. Set $$\vec{AB}=\vec{u},\vec{BC}=\vec{v},\vec{CA}=\vec{w}$$ It holds $$\vec{u}+\vec{v}+\vec{w}=\vec{0},$$ but $ABC$ is not a triangle.
Area of the intersection of four circles of equal radius
To figure out the area of the shape you described, we can split it into two parts: the area in the square, and the part just outside it. For both, we need to find out the angles between which each circle intersects. We can find these easily by using the equations of the circles. To find the top intersection, consider $x^2+y^2=1$ and $(x-1)^2+y^2=1$. These circles intersect at $(\frac12,\frac{\pm\sqrt3}{2})$, which is at an angle of $\frac\pi3$ with the x-axis. Similarly, the coordinates of the right point make an angle of $\frac\pi6$ with the x-axis. Hence, each circular arc subtends an angle of $\frac\pi6$. Now to find the areas. Each of the four smaller sections between the square and the circles is the area of a sector minus the triangle in the sector. Hence the area of each is $$\frac12\cdot\frac\pi6-\frac12\sin\left(\frac\pi6\right)=\frac\pi{12}-\frac14.$$ Now for the square. If the side length of the square is $s$, then by connecting the top and right points with the bottom left point of the big square, we create a triangle with sides $1,1$ and $s$, and the angle between the $1$s is $\frac\pi6$. By the cosine rule, the area of the square is $$s^2=1^2+1^2-2\cos\left(\frac\pi6\right)=2-\sqrt3$$ Hence, the area of the shape between the four circles is $$4\left(\frac\pi{12}-\frac14\right)+(2-\sqrt3)=\frac\pi3+1-\sqrt3$$ No calculus required. Alternatively: Consider the set of parametric equations $x(t)=\cos(t)-\frac12,y(t)=\sin(t)-\frac12$. This is the equation of the circle centred at the bottom left point. Point $D$ is at $t=0$, $E$ at $t=\frac\pi6$, and $F$ at $t=\frac\pi3$. Hence, the centre area is 4 times the area between $F$ and $E$, so its area is $$4\int_{t=\frac\pi3}^{t=\frac\pi6} y \,dx=-4\int_{\frac\pi6}^{\frac\pi3}y(t)x'(t)\,dt=4\int_{\frac\pi6}^{\frac\pi3}\left(\sin(t)-\frac12\right)\sin(t)\,dt=\frac\pi3+1-\sqrt3$$
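The value $\frac\pi3+1-\sqrt3 \approx 0.3151$ is easy to confirm with a Monte Carlo sketch (my own setup: unit circles centred at the four corners of the unit square; seed and sample size are arbitrary):

```python
import math, random
random.seed(0)

corners = [(0, 0), (1, 0), (0, 1), (1, 1)]
N, hits = 200_000, 0
for _ in range(N):
    x, y = random.random(), random.random()
    # the point lies in the intersection iff it is inside all four circles
    if all((x - cx)**2 + (y - cy)**2 <= 1 for cx, cy in corners):
        hits += 1

estimate = hits / N
exact = math.pi / 3 + 1 - math.sqrt(3)
print(estimate, exact)  # both near 0.315
```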
How to find the range of a complex number's argument?
If $z=x+iy$ where $x,y$ are real, then using atan2, $$x-2>0\iff x>2\ \ \ \ (1)$$ $$y-2\ge0\iff y\ge2\ \ \ \ (2)$$ So, $$\arg(x-2+i(y-2))=60^\circ\implies\dfrac{y-2}{x-2}=\tan 60^\circ=\sqrt3,$$ honoring $(1),(2)$.
Neighbour Points in N-Dimensional Space
$3^n-1$ I.e., the $3\times 3 \times 3 \times ...$ cube minus the point in the middle.
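A brute-force count over offset vectors confirms this (a quick sketch):

```python
from itertools import product

def neighbours(n):
    # offsets in {-1,0,1}^n, excluding the all-zero offset (the point itself)
    return sum(1 for off in product((-1, 0, 1), repeat=n) if any(off))

for n in range(1, 7):
    assert neighbours(n) == 3**n - 1
print([3**n - 1 for n in range(1, 7)])  # [2, 8, 26, 80, 242, 728]
```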
Calculate $\iiint_\omega (x+y+z)^2\,dxdydz$.
Using cylindrical coordinates, $$x=\rho\cos\theta,\quad y=\rho\sin\theta,\quad z=z$$ We know the paraboloid intersects the sphere at $z=a$, and thus the integral becomes $$\int_0^a\int_0^{\sqrt{2az}}\int_0^{2\pi}(\rho\sin\theta+\rho\cos\theta+z)^2\rho\,\mathrm d\theta\, \mathrm d\rho\, \mathrm dz\\+ \int_a^{\sqrt{3}a}\int_0^{\sqrt{3a^2-z^2}}\int_0^{2\pi}(\rho\sin\theta+\rho\cos\theta+z)^2\rho\,\mathrm d\theta\, \mathrm d\rho\, \mathrm dz$$ where the first term is the integral up to the intersection of the sphere and the paraboloid, and the second term is the rest of the integral. Even though this looks messy, a lot of the terms immediately die when you integrate with respect to $\theta$.
How do I plot an inequality containing absolute values?
plots:-inequal(abs(x)+abs(y)<=1,x=-2..2,y=-2..2);

plots:-implicitplot(abs(x)+abs(y)<=1,x=-2..2,y=-2..2, gridrefine=1,filledregions, view=[-2..2,-2..2]);
Convex Optimization - Derive Conjugate function
$$f^*(y) := \sup_{x \in \mathbb{R}^n} \{y^\top x - h(\|x\|)\} = \sup_{c \ge 0} \sup_{z : \|z\|=1} \{y^\top (cz) - h(c)\}.$$ The maximizing $z$ in the inner supremum is $z = y / \|y\|$ (why?) which yields $$\sup_{c \ge 0} \{c\|y\| - h(c)\} =: h^*(\|y\|).$$
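A brute-force sketch with the sample choice $h(c)=c^2/2$ (for which $h^*(s)=s^2/2$, so $f^*(y)=\|y\|^2/2$) and an arbitrary $y$ of my own choosing:

```python
import math

y = (0.5, -0.3)              # sample y
h = lambda c: c * c / 2      # sample h; h*(s) = s^2/2

# brute-force the supremum of y.x - h(||x||) over a grid of x in R^2
best = max(
    y[0] * (i / 100) + y[1] * (j / 100) - h(math.hypot(i / 100, j / 100))
    for i in range(-200, 201)
    for j in range(-200, 201)
)
closed_form = (y[0] ** 2 + y[1] ** 2) / 2   # h*(||y||) = ||y||^2/2
print(best, closed_form)  # agree up to grid and rounding error
```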
$\sum_{n=1}^{\infty} \frac{n^2}{ n!}$ equals
You may write, for $n =2,3,4,...$: $$ \frac{n^2}{ n!}=\frac{n^2-n+n}{ n!}=\frac{n(n-1)}{ n!}+\frac{n}{ n!}=\frac{1}{ (n-2)!}+\frac{1}{ (n-1)!} $$ and use a change of indices in the new infinite sums.
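Each reindexed sum converges to $e$, so the total is $2e$; a quick numeric sketch:

```python
import math

# sum n^2/n! over n >= 1; terms beyond n = 39 are negligible
total = sum(n * n / math.factorial(n) for n in range(1, 40))
print(total, 2 * math.e)  # both approximately 5.43656
```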
Flatness of finitely generated (/finitely presented) module carries to power series module?
I think this is true. Let $0\to K\to N$ be an inclusion of $R[[x]]$ modules. Then, $0\to K\otimes_R M\to N\otimes_R M$ is exact, since $M$ is flat over $R$. Now, $K\otimes_R M=K\otimes_{R[[x]]} R[[x]]\otimes_R M=K\otimes_{R[[x]]} M[[x]]$ and similarly for $N$, proving flatness of $M[[x]]$ over $R[[x]]$.
Proving $(1+a_1)(1+a_2)...(1+a_n) \geq 2^n$
Use that $$(1+a_1)(1+a_2)\cdot…\cdot (1+a_n)\geq 2^n\sqrt{a_1\cdot a_2\cdot a_3\cdot…\cdot a_n}=2^n,$$ which follows by applying AM-GM to each factor ($1+a_i\geq 2\sqrt{a_i}$), since $$\prod_{i=1}^na_i=1.$$
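A quick random check (sketch), normalising each sample so that the product of the $a_i$ is exactly $1$:

```python
import math, random
random.seed(0)

for _ in range(1000):
    n = random.randint(2, 8)
    a = [random.uniform(0.1, 5) for _ in range(n)]
    g = math.prod(a) ** (1 / n)
    a = [v / g for v in a]                 # rescale so prod(a) = 1
    p = math.prod(1 + v for v in a)
    assert p >= 2**n * (1 - 1e-9)          # small slack for rounding
print("ok")
```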
A problem with notation on B. Maccluer's book
Any subspace of a Banach space which is complete is closed. A Banach algebra is complete so $\mathcal B$ is closed.
Computing consecutive $p$ Bell numbers modulo $p$ (a prime)
For $(1)$, we have the following identity in $\mathbb{F}_p[x]$, where $\mathbb{F}_p=\mathbb{Z}/p\mathbb{Z}$: $$\sum_{n=0}^{p-1}B_n x^n=x^{p-1}+\sum_{n=0}^{p-1}x^n\prod_{k=n+1}^{p-1}(1-kx).$$ It is obtained from the formal power series $\sum_{n=0}^{\infty}B_n x^n$${}=\sum_{n=0}^{\infty}\prod_{k=1}^{n}\frac{x}{1-kx}$ using the identity $\prod_{k=1}^{p-1}(1-kx)\equiv 1-x^{p-1}$. (My initial idea was to use $\sum_{n=0}^{\infty}B_n x^n/n!=\exp(e^x-1)$ with fast composition; the above is a better one.) Rewriting, $\sum_{n=0}^{p-1}B_n x^n=x^{p-1}+Q_{0,p}(x)$, where $$P_{u,v}(x)=\prod_{k=u}^{v-1}(1-kx),\qquad Q_{u,v}(x)=\sum_{n=u}^{v-1}x^n P_{n+1,v}(x),$$ and, for $u\leqslant v\leqslant w$, we have $$P_{u,w}(x)=P_{u,v}(x)P_{v,w}(x),\qquad Q_{u,w}(x)=Q_{u,v}(x)P_{v,w}(x)+Q_{v,w}(x),$$ which gives a divide-and-conquer approach. Assuming multiplication of degree-$d$ polynomials done in $\mathcal{O}(d\log d)$ operations, this results in an $\mathcal{O}\big(p(\log p)^2\big)$ algorithm. For $(2)$, let's reformulate Touchard's congruence in operator terms. Consider the vector space (over $\mathbb{F}_p$) of all sequences in $\mathbb{F}_p$, its subspace $\mathscr{B}_p$ generated by $e_k : n\mapsto B_{n+k}\bmod p$, and the "step operator" $S$ on $\mathscr{B}_p$ that sends $e_k$ to $e_{k+1}$ for each $k$. Then the congruence says $S^p=S+I$, where $I$ is the identity operator. So, the arithmetics of polynomials in $S$ is the one of $\mathbb{F}_p[x]$ modulo $x^p-x-1$; in particular, $S^{p^m}=S+mI$ (already stated in the OP) and, more generally, $$n=\sum_{k=0}^{d}n_k p^k\implies S^n=\prod_{k=0}^{d}(S+kI)^{n_k}.$$ Again, this can be computed by divide-and-conquer, this time in $\mathcal{O}\big(p(\log p)^3\big)$ operations.
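As a sanity check on Touchard's congruence $B_{n+p}\equiv B_n + B_{n+1} \pmod p$ (the relation $S^p = S + I$ above), here is a quick sketch using the Bell triangle:

```python
def bell_numbers(m):
    # Bell triangle: each row starts with the last entry of the previous row;
    # B_n is the first entry of row n
    B, row = [1], [1]
    for _ in range(m - 1):
        new = [row[-1]]
        for v in row:
            new.append(new[-1] + v)
        row = new
        B.append(row[0])
    return B

B = bell_numbers(40)
for p in (2, 3, 5, 7, 11, 13):
    for n in range(20):
        assert B[n + p] % p == (B[n] + B[n + 1]) % p
print("ok")
```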
proving that if $s_{n} \leq t_{n}$ for $n\geq N$ then $\liminf_{n\to \infty} s_{n} \leq \liminf_{n \to \infty}t_{n}$
Let $n\geq N$ be arbitrary. Let $k\geq n$ be arbitrary. We have that $\inf_{m\geq n}s_{m}\leq s_{k}\leq t_{k}$. Since $k$ is arbitrary, we conclude that $\inf_{m\geq n}s_{m}$ is a lower bound of the set $\{t_{n},t_{n+1},\ldots\}$. Therefore, $\inf_{m\geq n}s_{m}$ is smaller than or equal to the greatest lower bound of $\{t_{n},t_{n+1},\ldots\}$, i.e., $\inf_{m\geq n}s_{m}\leq\inf_{m\geq n}t_{m}$. Denote $S_{n}=\inf_{m\geq n}s_{m}$ and $T_{n}=\inf_{m\geq n}t_{m}$. Note that $(S_{n})_{n\geq N}$ and $(T_{n})_{n\geq N}$ are increasing and $S_{n}\leq T_{n}$, so $\lim_{n}S_{n}\leq\lim_{n}T_{n}$. Hence, $\liminf s_{n}\leq\liminf t_{n}$. The proof for limsup is similar.
Solve in $\mathbb R$ : $x^4-2x^{3}-3x^{2}+4x+\frac{15}{16}=0$
Suppose that instead of a polynomial in $x$ we had a polynomial in $2x$. $$\begin{align} x^4 - 2x^3-3x^2+4x+\frac{15}{16}&=0\\ 16x^4 - 16\cdot2x^3-16\cdot3x^2+16\cdot4x+15&=16\cdot0\\ (2x)^4 - 4(2x)^3-12(2x)^2+32(2x)+15&=0\\ y^4 - 4y^3-12y^2+32y+15&=0\\ \end{align}$$ Where $y=2x$. Can you take it from here? :)
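One way to finish (this factorization is my continuation, not part of the hint above): with $u=y^2-2y$ one finds $y^4-4y^3-12y^2+32y+15=(y^2-2y-1)(y^2-2y-15)$, giving $y\in\{5,\,-3,\,1\pm\sqrt2\}$, i.e. $x\in\{5/2,\,-3/2,\,(1\pm\sqrt2)/2\}$. A quick numerical check:

```python
import math

def p(x):
    # the original quartic in x
    return x**4 - 2*x**3 - 3*x**2 + 4*x + 15/16

def q(y):
    # the transformed quartic in y = 2x
    return y**4 - 4*y**3 - 12*y**2 + 32*y + 15

# candidate roots from the factorization (y^2 - 2y - 1)(y^2 - 2y - 15) = 0
ys = [5.0, -3.0, 1 + math.sqrt(2), 1 - math.sqrt(2)]
xs = [y / 2 for y in ys]

assert all(abs(q(y)) < 1e-9 for y in ys)
assert all(abs(p(x)) < 1e-9 for x in xs)
```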
Find the minimum length of a line segment with endpoints on the coordinate axes that passes through the point $(1, 1)$
$\frac{y-1}{x-1}=m$ defines the line. $A$ is the $x$ intercept, which is $(1-\frac{1}{m}, 0)$, and $B$ is the $y$ intercept, which is $(0,1-m)$. By minimizing the distance between these two points we can calculate $m$, which will then give us the distance. To make the algebra simple we will actually minimize the distance squared; call it $D$. Then $D=(\frac{1}{m}-1)^2+(1-m)^2$, and taking the derivative with respect to $m$ gives $\frac{dD}{dm}=-\frac{2}{m^2}(\frac{1}{m}-1)-2(1-m)=\frac{-2+2m}{m^3}-\frac{2m^3(1-m)}{m^3}=\frac{-2+2m-2m^3+2m^4}{m^3}$ Setting this equal to $0$ gives $-2+2m-2m^3+2m^4=0\implies -1+m-m^3+m^4=0\implies (m-1)(m^3+1)=0$, so $m = 1$ or $m=-1$. If $m=1$ then both the $x$ and $y$ intercepts will be $0$, so $A=B$ and the distance is $0$. If $m=-1$ then $A=(2,0)$ and $B=(0, 2)$, so the distance is $\sqrt{2^2+2^2}=\sqrt{8}=2\sqrt{2}$. Therefore, the minimum distance is $0$ when $A=B$. If we change the problem to say that $A$ and $B$ are distinct then the minimum distance is $2\sqrt{2}$.
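A quick numerical sanity check of the non-degenerate case (a brute-force grid scan, not part of the argument above): restricted to negative slopes $m$, the distance $|AB|$ is minimized at $m=-1$ with value $2\sqrt2$.

```python
import math

def dist(m):
    # |AB| for the line of slope m through (1, 1):
    # A = (1 - 1/m, 0), B = (0, 1 - m)
    return math.hypot(1 - 1/m, 1 - m)

# scan negative slopes only (positive slopes allow the degenerate A = B case)
ms = [-3 + 2.9 * i / 100000 for i in range(100001)]  # m in [-3, -0.1]
best_m = min(ms, key=dist)

assert abs(dist(best_m) - 2 * math.sqrt(2)) < 1e-6
assert abs(best_m + 1) < 1e-3
```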
Properties of an interesting summation $\sum_{k=1}^{\infty} \frac{(-1)^{k+1} }{2k+1}$
Let $f(x)=\arctan(x)$. Then, we have $f'(x)=\frac{1}{1+x^2}=\sum_{n=0}^\infty (-1)^nx^{2n}$. Then, we have $$\begin{align} \arctan(x)&=f(x)=\int_0^x \sum_{n=0}^\infty (-1)^n x'^{2n}\,dx'\\\\ &=\sum_{n=0}^\infty \frac{(-1)^nx^{2n+1}}{2n+1} \end{align}$$ Hence, $$\pi/4=\arctan(1)=\sum_{n=0}^\infty \frac{(-1)^n}{2n+1}$$ and so, we have $$\sum_{n=1}^\infty \frac{(-1)^{n-1}}{2n+1}=1-\frac\pi4$$
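A quick numerical check of the final value (for an alternating series with decreasing terms, the truncation error is at most the first omitted term):

```python
import math

# partial sum of sum_{n >= 1} (-1)^(n-1) / (2n + 1)
N = 200000
s = sum((-1) ** (n - 1) / (2 * n + 1) for n in range(1, N + 1))

# alternating-series bound: |error| <= first omitted term = 1/(2N + 3)
assert abs(s - (1 - math.pi / 4)) < 1 / (2 * N + 3)
```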
Sum of Complex series
You are indeed almost there. Next write $$\sin\theta = {e^{i\theta}-e^{-i\theta}\over 2i}$$ and conclude $${1-e^{10i\theta}\over 1-e^{2i\theta}}={e^{10i\theta}-1\over e^{i\theta}{2i\over 2i}(e^{i\theta}-e^{-i\theta})}$$ $$={e^{-i\theta}(e^{10i\theta}-1)\over 2i\sin\theta} = {e^{9i\theta}-e^{-i\theta}\over 2i\sin\theta}$$ From there you just do $$\sin(2\theta)+\sin(4\theta)+\sin(6\theta)+\sin(8\theta)= {1\over 2i}\bigg((1+e^{2i\theta}+e^{4i\theta}+e^{6i\theta}+e^{8i\theta})-(1+e^{-2i\theta}+e^{-4i\theta}+e^{-6i\theta}+e^{-8i\theta})\bigg)$$ and from what you've already done, that's just $${1\over 2i}\bigg({e^{9i\theta}-e^{-i\theta}\over 2i\sin\theta}-{e^{-9i\theta}-e^{i\theta}\over 2i\sin(-\theta)}\bigg)$$ since $\sin(-\theta)=-\sin\theta$ we get $$=-{1\over 2\sin\theta}\bigg({(e^{9i\theta}+e^{-9i\theta})-(e^{i\theta}+e^{-i\theta})\over 2}\bigg)$$ Finally use $$\cos\theta = {e^{i\theta} +e^{-i\theta}\over 2}$$ to conclude your identity is $$\sin(2\theta)+\sin(4\theta)+\sin(6\theta)+\sin(8\theta)={\cos\theta-\cos(9\theta)\over 2\sin\theta}$$
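The resulting identity is easy to spot-check numerically at a few angles:

```python
import math

def lhs(t):
    # sin(2t) + sin(4t) + sin(6t) + sin(8t)
    return sum(math.sin(2 * k * t) for k in range(1, 5))

def rhs(t):
    # (cos t - cos 9t) / (2 sin t)
    return (math.cos(t) - math.cos(9 * t)) / (2 * math.sin(t))

# avoid multiples of pi, where the right-hand side is 0/0
for t in (0.3, 1.1, 2.0, 2.9):
    assert abs(lhs(t) - rhs(t)) < 1e-10
```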
$ABCD$, $P$ is any interior point, $PA=24, PB=32, PC=28, PD=45$
$AC \le AP + PC = 24 + 28 = 52$ and $BD \le BP + PD=32+45 = 77$. Adding $AB + BC < AC \le 52$, $BC + CD < BD \le 77$, $CD + DA < CA \le 52$ and $DA + AB < DB \le 77$, we find that the perimeter is bounded above by $129$. Writing $AB = a, BC = b, CD = c, DA = d$, we have $$a + b + c + d \le 129, \qquad ac+bd \le 52\cdot 77=4004.$$ I am stuck and not sure if the Ptolemy inequality helps. Edit after the question was edited: since the area of the quadrilateral is $2002$, which attains the bound $\frac12 AC\cdot BD \le \frac12 \cdot 52\cdot 77 = 2002$, equality must hold throughout: $P$ lies on both diagonals and the diagonals are orthogonal to each other. So the perimeter is $$perimeter = \sqrt{24^2 + 32^2} + \sqrt{32^2 +28^2} + \sqrt{28^2 + 45^2 } + \sqrt{45^2 + 24^2}.$$ The Ptolemy inequality sure helped, but an even easier method is pointed out by user orangekid in the comments below.
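A quick numerical check of the area constraint and the resulting perimeter (the exact value is $40+53+51+\sqrt{1808}=144+4\sqrt{113}$):

```python
import math

PA, PB, PC, PD = 24, 32, 28, 45

# diagonals pass through P and are orthogonal, so the area is
# (1/2) * AC * BD with AC = PA + PC and BD = PB + PD
area = (PA + PC) * (PB + PD) / 2
assert area == 2002

# each side is the hypotenuse of a right triangle with legs at P
perimeter = (math.hypot(PA, PB) + math.hypot(PB, PC)
             + math.hypot(PC, PD) + math.hypot(PD, PA))
assert abs(perimeter - (144 + 4 * math.sqrt(113))) < 1e-9
```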
Probability of selecting a jury
There is $\binom{6}{6}=1$ way to choose the $6$ men, and $\binom{12}{6}=924$ ways to choose the $6$ women. Therefore, there are $\binom{6}{6}\cdot\binom{12}{6}=924$ ways to pick $12$ jurors so that $6$ are women and $6$ are men. In total, there are $\binom{18}{12}=18564$ ways to choose $12$ jurors from $18$ people. Hence, the desired probability is $$ \frac{924}{18564}\approx 4.98\%. $$
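The binomial coefficients are easy to double-check (illustrative only):

```python
from math import comb

favorable = comb(6, 6) * comb(12, 6)   # the 6 men and 6 of the 12 women
total = comb(18, 12)                   # any 12 of the 18 people

assert favorable == 924
assert total == 18564
probability = favorable / total
assert abs(probability - 0.0498) < 5e-4
```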
Help With Proof of Theorem 1.A. in "Topics in Algebra"
First of all, if you have a relation $\sim$ on $A = \bigcup_{i = 1}^n A_i$, where $A_i$'s are mutually disjoint, then for the set of all related pairs $\mathscr{R}$ you have $\mathscr{R} \subseteq A\times A$. This might seem to be a simple property but notice that the statement says "we can define an equivalence relation on A". So if we can construct one such equivalence relation, we are done. Now you have mutually disjoint, nonempty subsets of $A$ so you can construct an equivalence relation $\sim$ = $(A, \mathscr{R})$ where $$\mathscr{R} = \bigcup_{i = 1}^n (A_i \times A_i)$$ Now, let us prove that $\sim$ is an equivalence relation. Here, by definition of $A$, reflexivity holds for the relation $\sim$ because for all $x \in A$, the pair $(x,x) \in \mathscr{R}$ (this is because none of the subsets are empty and they are mutually disjoint). Symmetry also holds because if you have $x,y \in A_j$ where $A_j \subseteq A$ with $1 \le j \le n$, then for all such $x$,$y$ , $(x,y),(y,x) \in A_j \times A_j$. You can show that transitivity also holds with a similar argument. Again say $u,v,t \in A_k$ where $A_k \subseteq A$ with $1 \le k \le n$. Then notice that for all such $u$,$v$,$t$, $(u,v),(v,t),(u,t) \in A_k \times A_k$. Therefore $\sim$ is an equivalence relation. Actually if you write all the elements of $A_j \times A_j$ and $A_k \times A_k$ for $A_j = \{x,y\}$ and $A_k = \{u,v,t\}$, you can see why $\sim$ = $(A, \mathscr{R})$ is an equivalence relation (For example $A_j \times A_j = \{(x,x),(x,y),(y,x),(y,y)\}$).
Check if it is regular language
Hint: this does not seem to be a regular language, since a finite-state automaton would have to keep count of the number of $a$'s it has seen to verify it is different from the number of $b$'s (and similarly with the $c$'s). Using the pumping lemma directly seems difficult, since we would need to pump into a word that is not in the language, and such words are constrained by $i=j=k$, which is difficult to reach. Instead, show that the complement language is not regular: take a word of the above form with $i=j=k$ and pump it so that the part of the word being pumped contains only $a$'s and $b$'s (or only $b$'s and $c$'s), so that pumping will make $j$ bigger than one of the counts of $a$'s or $c$'s.
Explanation on the limitation of the proof of the Law of Cosines
When $\pi/2 < \alpha < \pi$, we have an obtuse angle at $A$ and we must consider the geometry of the figure accordingly. The altitude $h$ from $B$ to $AC$ is no longer "inside" the triangle. It extends to some point, say $B'$, on the line containing $AC$ such that $|B'C| > |AC|$; in other words, the signed distance $r$ would need to be negative and $b-r > b$. That said, the relationship $$r = c \cos \alpha$$ does take this into account, since when $\pi/2 < \alpha < \pi$, $-1 < \cos \alpha < 0$, consequently $-c < r < 0$. You can also see this in the final identity $$a^2 = b^2 + c^2 - 2bc \cos \alpha,$$ since again, when $\cos \alpha < 0$, the RHS exceeds $b^2 + c^2$, which is what we would have if $\alpha$ were a right angle. If we think of $b$ and $c$ as fixed and $\alpha$ allowed to vary continuously from $0$ to $\pi$, you would find that the length of $a$ increases from $0$ to $b+c$.
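A coordinate sketch of the obtuse case (the setup is mine, but it follows the discussion above): place $A$ at the origin, $C=(b,0)$, and $B=(c\cos\alpha, c\sin\alpha)$ for an obtuse $\alpha$; then the signed distance $r=c\cos\alpha$ is indeed negative, and the identity still holds with $a^2 > b^2+c^2$.

```python
import math

b, c, alpha = 3.0, 2.0, 2.2   # alpha obtuse: pi/2 < 2.2 < pi

B = (c * math.cos(alpha), c * math.sin(alpha))
C = (b, 0.0)

# foot of the altitude from B has x-coordinate r = c cos(alpha) < 0,
# i.e. it lies outside segment AC
r = c * math.cos(alpha)
assert r < 0

a = math.dist(B, C)
assert abs(a**2 - (b**2 + c**2 - 2 * b * c * math.cos(alpha))) < 1e-9
# with cos(alpha) < 0, the RHS exceeds b^2 + c^2
assert a**2 > b**2 + c**2
```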
Proof of $-\frac{1}{4}$ upper bound to the spectrum of $\Delta$
In equation $(1)$ the function $f$ is assumed to be smooth and compactly supported. Thus $$ \int_H \partial_x^2 f(x,y) dx dy = \int_0^\infty \int_{-\infty}^\infty \partial_x^2 f(x,y) dx dy =0 $$ since for a fixed $y >0$ $$ \int_{-\infty}^\infty \partial_x^2 f(x,y) dx = \lim_{T \to \infty} \partial_x f(x,y) \Big\vert^{x=T}_{x=-T} =0 $$ due to the fact that the support of $f$ is compact. Hence for such $f$ $$ \int_H \Delta f(x,y) dx dy = \int_H \partial_y^2 f(x,y) dx dy, $$ and so you can use equation $(2)$.
Weak convergence of stochastic processes: $X_n(\cdot)$ converges to zero
First let $Y_n(t)=X_n(t)-X_n(0)$ for all $t\geq 0$ and $n\in\mathbb{N}$. Since $Y_n(0)=0$, your assumption allows us to conclude that $Y_n \Rightarrow_n 0$. Now let $\rho$ denote the Skorokhod metric on $D([0,\infty),\mathbb{R})$. If you can show that $\rho(X_n,Y_n) \stackrel{P}{\to} 0$ as $n\to\infty$, then a Slutsky'ish theorem (Theorem 3.1; Convergence of probability measures - P. Billingsley) yields that $X_n \Rightarrow 0$.
If $b$ and $m$ are positive integers, then $b\mid m$ iff the last $b$-adic digit $d_{0}$ of $m$ is $0$
Hint: Recall that the base-$b$ expansion of $m$ means that $m=\sum_{k=0}^\infty d_k b^k$, where each $d_k\in\{0,\dots,b-1\}$ (and only finitely many of them are non-zero). What happens when you take the RHS modulo $b$? (Remember that $b\mid m$ is equivalent to $m \equiv 0 \pmod b$.)
Integral of a distribution function
$$ \int_{-\infty}^z F(r)dr = \int_{-\infty}^z\int_{-\infty}^rf(u)\,du\,dr = \int_{-\infty}^z\int_u^z dr\,f(u)\,du = \int_{-\infty}^z(z - u)f(u)\,du,$$ where the middle step swaps the order of integration (Fubini–Tonelli) over the region $\{(u,r): u\le r\le z\}$.
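A numerical sanity check with a concrete distribution (an exponential, chosen here just for illustration), using a simple Simpson rule:

```python
import math

def simpson(g, a, b, n=2000):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

# Exp(1): f(u) = e^{-u} and F(r) = 1 - e^{-r} for r >= 0,
# both vanish for negative arguments, so the integrals start at 0
z = 2.0
lhs = simpson(lambda r: 1 - math.exp(-r), 0.0, z)
rhs = simpson(lambda u: (z - u) * math.exp(-u), 0.0, z)

# both sides equal z - 1 + e^{-z} for this distribution
exact = z - 1 + math.exp(-z)
assert abs(lhs - exact) < 1e-9
assert abs(rhs - exact) < 1e-9
```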
Rudin 13.3 zero operator as adjoint
If $\{a,b\}$ is an element in the orthocomplement of $\mathscr{G}(T)$, then for $x\in\mathscr{D}(T)$ we have $(x,a)+(Tx,b)=0$. In particular, for $x=e_n$ you get $(e_n,a)+(x_n,b)=0$. Taking the limit $n\rightarrow\infty$, we see that the first term vanishes because of the remark. What's left is $\lim_{n\rightarrow\infty}(x_n,b)=0$. Using the density of the set $\{x_n\}$ it's easy to show that this implies $b=0$. Indeed: just take a sequence $x_n$ which converges to $b$. This just leaves $(x,a)=0$, from which $a=0$ follows since $\mathscr{D}(T)$ is dense. So the orthocomplement of $\mathscr{G}(T)$ is just the zero subspace, from which the density of $\mathscr{G}(T)$ follows.
limit of a sequence of complex integrals
Let $M=\min\{|e^{3z}-1|\,:\,|z|=2\}$. Clearly, $M>0$. Since $({P_N}^3-1)_{N\in\mathbb{N}}$ converges uniformly to $e^{3z}-1$ on the circle $\{z\in\mathbb{C}\,:\,|z|=2\}$, if $N$ is large enough, then$$\bigl|(e^{3z}-1)-({P_N}^3(z)-1)\bigr|<\frac M2.$$ But this implies that $|{P_N}^3(z)-1|>\frac M2$ and therefore$$\left|\frac1{{P_N}^3(z)-1}\right|<\frac2M.$$Hence, for such $N$,$$\left|\frac1{{P_N}^3(z)-1}-\frac1{e^{3z}-1}\right|=\frac{\bigl|e^{3z}-{P_N}^3(z)\bigr|}{\bigl|{P_N}^3(z)-1\bigr|\,\bigl|e^{3z}-1\bigr|}\le\frac2{M^2}\bigl|e^{3z}-{P_N}^3(z)\bigr|,$$and therefore the sequence$$\left(\frac1{{P_N}^3-1}\right)_{N\in\mathbb{N}}$$converges uniformly to $\frac1{e^{3z}-1}$ on $\{z\in\mathbb{C}\,:\,|z|=2\}$.
Change of basis changes rank of matrix
Well, a base change can be viewed as a left-multiplication with a full-rank (invertible) matrix $B$, i.e., $A' = BA$. By using determinants, $\det (A') = \det(BA) = \det(B)\cdot \det(A)$. Since $A$ has full rank by hypothesis and $B$ has full rank as well, $\det(B)\cdot \det(A)\ne 0$, and so $A'$ has full rank as well. (More generally, multiplying by an invertible matrix never changes the rank, even for non-square $A$: an invertible $B$ maps the column space of $A$ isomorphically onto the column space of $BA$.)
Derivative of a vector valued function
Suppose you want to calculate the derivative $Df(\vec{a})$. Use the "Taylor expansion", $f(\vec{a} + \vec{h}) = \langle \vec{a} + \vec{h}, \vec{a} + \vec{h} \rangle (\vec{a} + \vec{h}) = \| \vec{a} \|^2 \vec{a} + \langle \vec{a}, \vec{a} \rangle \vec{h} + 2 \langle \vec{a}, \vec{h} \rangle \vec{a} + O(\| \vec{h} \|^2) = f(\vec{a}) + \langle \vec{a}, \vec{a} \rangle \vec{h} + 2 \langle \vec{a}, \vec{h} \rangle \vec{a} + O(\| \vec{h} \|^2).$ The derivative is the linear term; thus in this case it is the linear transformation that sends $\vec{h}$ to $\|\vec{a} \|^2 \vec{h} + 2 \langle \vec{a}, \vec{h} \rangle \vec{a}$.
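A finite-difference sanity check of this linear map (a sketch; the helper names are mine):

```python
import math

def f(v):
    # f(v) = <v, v> v
    s = sum(x * x for x in v)
    return [s * x for x in v]

def Df(a, h):
    # claimed derivative: ||a||^2 h + 2 <a, h> a
    s = sum(x * x for x in a)
    d = sum(x * y for x, y in zip(a, h))
    return [s * hx + 2 * d * ax for ax, hx in zip(a, h)]

a = [1.0, -2.0, 0.5]
h = [0.3, 0.1, -0.7]
eps = 1e-6

# centered difference (f(a + eps*h) - f(a - eps*h)) / (2*eps)
plus = f([x + eps * y for x, y in zip(a, h)])
minus = f([x - eps * y for x, y in zip(a, h)])
fd = [(p - m) / (2 * eps) for p, m in zip(plus, minus)]

assert all(abs(u - v) < 1e-6 for u, v in zip(fd, Df(a, h)))
```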
$\boldsymbol{2}^{\kappa} +_{\mathscr{C}} \kappa = \boldsymbol{2}^{\kappa}$ where $\kappa$ is an infinite cardinal
In general, whenever we have two infinite cardinals $A$ and $B$, we have $A +_{\mathscr{C}} B = \max(A, B)$. So in this case, since $2^\kappa > \kappa$, we have $2^\kappa +_{\mathscr{C}} \kappa = 2^\kappa$.
How many ways can a number be written as a sum of two non negative integers?
If you consider $3+4$ and $4+3$ as two different ways then yes, it will be $N+1$. Think of it as placing a partition in a row of $N$ objects: you can place it right at the beginning, right at the end, or in any of the $N-1$ locations in between. However, if $3+4$ and $4+3$ are considered the same, then we have only $4$ ways of writing $7$ as a sum of two numbers. In this case the answer will be $\lceil (N+1)/2 \rceil$, where $\lceil x \rceil$ is the smallest integer greater than or equal to $x$.
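Both counts are easy to confirm by brute force; note that $\lceil (N+1)/2\rceil$ equals `(N + 2) // 2` in integer arithmetic:

```python
for N in range(0, 30):
    ordered = [(i, N - i) for i in range(N + 1)]        # ordered pairs summing to N
    unordered = {tuple(sorted(p)) for p in ordered}     # identify (i, j) with (j, i)
    assert len(ordered) == N + 1
    assert len(unordered) == (N + 2) // 2               # ceil((N + 1) / 2)
```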
Cuboid room, hooks and strings proof
A hint: suppose there is no such triangle, and then see what happens if you look at just one of the hooks. The proof benefits from a picture - if we assume there is no such triangle, the pigeonhole principle (PHP) guarantees that one of the nodes will either be in a triangle or induce a triangle in its neighbors, so we have a contradiction. Suppose we have the given situation ($6$ nodes, every possible edge colored either blue or red). Assume that there is no such triangle, and we will arrive at a contradiction. Work with a fixed node/hook $v$. By the (strong) pigeonhole principle, there are at least $3$ strings of one color extending from this node; WLOG, let this color be blue and three nodes they reach be $w_1,w_2,w_3$. By hypothesis, none of $w_1,w_2,w_3$ can be connected to each other by a blue string because this would give a blue triangle $\{v,w_i,w_j\}$ contradicting the hypothesis. This means they are all connected by red string, but that gives a red triangle $\{w_1,w_2,w_3\}$, so we arrive at a contradiction. Then our hypothesis was incorrect, so there must be some triangle with all blue or all red edges.
What coset intuitively means in this case
$K$ consists of rotations in the $xy$-plane. Consider what happens when you perform one of these, and then follow it with some other (general) rotation $g$ (e.g., rotate the $y$ axis 20 degrees towards the $z$ axis). The resulting rotations constitute the coset $gK$. In general if $g \notin K$, this will not contain the identity rotation. What is $G/K$? (BTW, it's not a "coset", it's a "quotient".) It's the set of rotations where two rotations $g, g'$ are considered equivalent if there's a rotation $h$ of the $xy$-plane with the property that $g = g'h$. To put it differently, they're equivalent if $g'^{-1}g$ is an $xy$-plane rotation. That means that they're equivalent if $g'^{-1}g$ leaves the north pole $N = (0,0,1)$ fixed... or you could say that $g$ and $g'$ both send $N$ to the same place.
Simplifying after using product rule
$$ \begin {align*} -2 \sin 2x \cdot \cos 4x - 4 \sin 4x \cos 2x &= - 2 \sin 2x \cos 4x - 8 \sin 2x \cos^2 2x \\&= -2 \sin 2x \cdot \left( \cos 4x + 4 \cos^2 2x \right) \end {align*}$$Now, use the fact that $ \cos 4x = 2 \cos^2 2x - 1 $ and see if you can finish. To finish things off: $$ \begin {align*} -2 \sin 2x \cdot \left( \cos 4x + 4 \cos^2 2x \right) &= -2 \sin 2x \cdot \left( 2\cos^2 2x - 1 + 4 \cos^2 2x \right) \\&= - 2 \sin 2x \cdot \left( 6 \cos^2 2x - 1 \right). \end {align*} $$
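The simplification can be spot-checked numerically; the expression above is consistent with differentiating $\cos 2x \cos 4x$ (an assumption about the original function, inferred from the product-rule output):

```python
import math

def g(x):
    # candidate original function, consistent with the product-rule output above
    return math.cos(2 * x) * math.cos(4 * x)

def simplified(x):
    # -2 sin(2x) (6 cos^2(2x) - 1)
    return -2 * math.sin(2 * x) * (6 * math.cos(2 * x) ** 2 - 1)

eps = 1e-6
for x in (0.2, 0.9, 1.7, 2.5):
    fd = (g(x + eps) - g(x - eps)) / (2 * eps)   # centered difference
    assert abs(fd - simplified(x)) < 1e-6
```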
Topological invariants
Note that there is a continuous map from any topological space to a single point, so no invariant that can distinguish non-points from points will be preserved in general. A homeomorphism will preserve every invariant (by the definition of invariant, as pointed out by lhf). However, many topological invariants (such as the fundamental group and homology) are preserved by homotopy equivalences, which are not homeomorphisms in general, so there is a middle ground.
Counting words with subset restrictions
Call the answer $x_L$. Then $x_L=Nx_{L-1}-y_{L-1}$, where $y_L$ is the number of allowable words of length $L$ ending in $A$. And $y_L=x_{L-1}-y_{L-1}$. Putting these together we get $Nx_L-x_{L+1}=x_{L-1}-(Nx_{L-1}-x_L)$, which rearranges to $x_{L+1}=(N-1)x_L+(N-1)x_{L-1}$. Now: do you know how to solve homogeneous constant coefficient linear recurrences? EDIT. If all you want is to find the answer for some particular values of $L$ and $N$ then, as leonbloy notes in a comment to your answer, you can use the recurrence to do that. You start with $x_0=1$ (the "empty word") and $x_1=N$ and then you can calculate $x_2,x_3,\dots,x_L$ one at a time from the formula, $x_{L+1}=(N-1)x_L+(N-1)x_{L-1}$. On the other hand, if what you want is single formula for $x_L$ as a function of $L$ and $N$, it goes like this: First, consider the quadratic equation $z^2-(N-1)z-(N-1)=0$. Use the quadratic formula to find the two solutions; I will call them $r$ and $s$ because I'm too lazy to write them out. Now it is known that the formula for $x_L$ is $$x_L=Ar^L+Bs^L$$ for some numbers $A$ and $B$. If we let $L=0$ and then $L=1$ we get the system $$\eqalign{1&=A+B\cr N&=rA+sB\cr}$$ a system of two equations for the two unknowns $A$ and $B$. So you solve that system for $A$ and $B$, and then you have your formula for $x_L$.
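If the restriction is that the letter $A$ never occurs twice in a row (the reading consistent with the two recurrences above, though the original problem statement is not shown here), the recurrence can be checked against brute force:

```python
from itertools import product

def brute(N, L):
    # words of length L over an N-letter alphabet with no "AA" substring;
    # letter 0 plays the role of A
    count = 0
    for w in product(range(N), repeat=L):
        if all(not (w[i] == 0 and w[i + 1] == 0) for i in range(L - 1)):
            count += 1
    return count

def via_recurrence(N, L):
    # x_0 = 1, x_1 = N, x_{L+1} = (N-1) x_L + (N-1) x_{L-1}
    x = [1, N]
    while len(x) <= L:
        x.append((N - 1) * (x[-1] + x[-2]))
    return x[L]

for N in (2, 3, 4):
    for L in range(0, 7):
        assert brute(N, L) == via_recurrence(N, L)
```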
If $f(x) = y$ holds, does $f(x) = x$ also hold?
I think your confusion is stemming from a misunderstanding of what an equation really says, and when it defines a function. When you have an equation with two variables (here $x$ and $y$), it defines a relation between the two. Sometimes (but not always), this allows you to describe one of the variables as a function of the other. When you have an equation of one variable, it defines a restriction on what that variable can be. This defines a set of possible values for that variable - and the set might have just one element, have lots of elements, or be empty. The equation $y=\lvert x\rvert$ is a statement about two variables. It states that they must have a certain relation. It so happens that there are infinitely many pairs of values you could plug in to $y$ and $x$ that have this relation. Moreover, you can actually choose any real value for $x$, and then there will be exactly one value of $y$ making the equation true. That means this equation defines $y$ as a function of $x$. The equation $x=\lvert x\rvert$ is a statement about one variable, saying it must satisfy a certain condition. This condition happens to hold true for all nonnegative real numbers. But it doesn't define a function, because there's no domain and codomain. Even if you consider the set of pairs $(x,y)$ that satisfy $x=\lvert x\rvert$, you still won't have a function, because there's more than one possible $y$ value for each $x$.
Number of terms in factors of polynomial
Definitely false, for example $x^6-1$ has factors $x^2\pm x+1$.
If $X$ is an Ito process, is $\mathbb E(\int X \mathrm d X)$ convex?
I believe that $F$ fails to be convex. First of all, if $\sigma$ is a "nice" function, then the stochastic integral $M_t := \int_0^{t} \sigma(s) \, dW_s$ is a martingale, which implies that $$\mathbb{E} \left( \int_0^T X_s \, dM_s \right)=0,$$ i.e. $$F(X) = \mathbb{E} \left( \int_0^T X_s \mu(s) \, ds \right). \tag{1}$$ Now consider $$X_t := \int_0^t W_s \, ds. $$ It follows from the very definition of $F$ that $$F(W) = \mathbb{E} \left( \int_0^T W_s \, dW_s \right)=0,$$ and $(1)$ yields $$\begin{align*} F(X) = \mathbb{E} \left( \int_0^T X_s W_s \, ds \right) &= \mathbb{E} \left( \int_0^T \int_0^s W_s W_r \, dr \, ds \right) \\ &= \int_0^T \int_0^s \underbrace{\mathbb{E}(W_s W_r)}_{=r} \, dr \, ds \\ &= \frac{T^3}{6}. \end{align*}$$For any $\lambda \in [0,1]$ we have by (1) that $$\begin{align*} F(\lambda X + (1-\lambda) W) &= \mathbb{E} \bigg( \int_0^T (\lambda X_s + (1-\lambda) W_s) (\lambda W_s+0) \, ds \bigg) \\ &=\lambda^2 \underbrace{\mathbb{E}\left( \int_0^T X_s W_s \, ds \right)}_{= T^3/6 \text{ (computed above)}} + \lambda (1-\lambda) \mathbb{E} \left( \int_0^T W_s^2 \, ds \right) \\ &= \lambda^2 \frac{T^3}{6} + \lambda (1-\lambda) \frac{T^2}{2}. \end{align*}$$ If we choose $\lambda=1/2$ and $T$ small enough (e.g. $T=1$), then $$F(\tfrac{1}{2} X + \tfrac{1}{2} W) = \frac{T^3}{24} + \frac{T^2}{8}$$ is strictly larger than $$\frac{1}{2} F(X) + \frac{1}{2} F(W) = \frac{T^3}{12}.$$ This means that $F$ is not convex.
Vector Calculus - Curve trajectory question
1) Looking at $\delta \mathbf{r} = \mathbf{r}' \delta u + o(\delta u)$ How can we have vector = vector + scalar? A1) Read this not as vector + scalar, rather that the $o(\delta u)$ is a small vector. So rather read: vector + vector. 2) When we substitute in $\delta \mathbf{r}$, how do we get to the final result (i.e can someone expand the last equation to show workings in full and reasoning for each step) A2) What have you tried? Where are you at in your investigation?
When is $\sqrt{2}$ in $\mathbb{Q}(\zeta_n)$?
Since the intersection $\mathbb Q(\zeta_m)\cap \mathbb Q(\zeta_n)=\mathbb Q(\zeta_{\gcd(m,n)})$ (for example here), it suffices to show that $\sqrt2$ is not contained in any cyclotomic field strictly inside $\mathbb Q(\zeta_8)$. The largest such field is $\mathbb Q(\zeta_4)=\mathbb Q(i)$, and it's clear that $\sqrt2$ is not in this field (since the only reals in $\mathbb Q(i)$ are the rationals), so the only $n$ for which $\sqrt2\in\mathbb Q(\zeta_n)$ are multiples of $8$. For the more general case of $\sqrt{m}$ with $m=\pm p_1\cdots p_s$, we can use that, for an odd prime $p$, $$\sqrt{(-1)^{(p-1)/2}p}\in\mathbb Q(\zeta_p).$$ (This can be proven, for example, by using Gauss sums.) From this, we definitely get that $$\sqrt m\in\mathbb Q(\zeta_{4|m|}),$$ since $\sqrt{\pm p_i}$ is in this field for all $i$ (since if $2|m$, $8|4m$, this is true for $p=2$ as well). If $m$ is odd and of correct sign, i.e. $$m=\prod_{i=1}^s (-1)^{\frac{p_i-1}{2}}p_i,$$ then we can reduce $4|m|$ to $|m|$. We can prove that this in fact optimal by strong induction on $s$. For the base case, we need only worry about factors of $p$ or $4p$, and so the cases aren't that bad since $\sqrt{\pm p}$ isn't in $\mathbb Q(i)$ regardless of $p$ and $\mathbb Q(\zeta_n)=\mathbb Q(\zeta_{2n})$ for all odd $n$. For the inductive step, using any factor of $|m|$ or $4|m|$ either reduces the numbers of factors of $2$ (in which case arguments similar to the base case work) or removes a subset of the prime factors, in which case our strong inductive hypothesis can be used directly.
$D_1(0)=\{z\in \mathbb{C} \mid |z|< 1\}$ is not compact
Compact subsets of Hausdorff spaces are closed, but $D_1(0)$ is not closed in $\mathbb{C}$. Alternatively, the continuous function $z\mapsto (1-z)^{-1}$ on this set is not uniformly continuous; alternatively, it is unbounded (so it doesn't achieve a maximum in modulus), whereas a continuous function on a compact set is bounded and attains its maximum.
Radial Plane Topology
Prove this by contradiction. Suppose that we have an $x\in\mathbb R^2\setminus A$ and a direction vector $e$ such that there is no line segment $[x,\epsilon e]$ completely contained in $\mathbb R^2\setminus A$ however small we pick $\epsilon>0.$ That means that there exists a sequence $\epsilon_n$ converging to 0 such that the points $x+\epsilon_n e$ belong to $A,$ contradicting the fact that $A$ contains at most two points of the line going through $x$ which is parallel to $e.$
Why is it that we can ignore non-basic variables using the simplex method of linear programming?
"The 1250 is not just the value of z. The value of 1250 is the sum of all of those parameters. Yet we assign 1250 to z by the simple expedient of declaring the other variables to be zero. What is the justification for that?" You are correct. But for a linear program, you know that the optimal solution is at an extreme point. Extreme points are defined by these basic variables. The simplex method, from one iteration to another, just moves from one extreme point to another one with lower cost. You will eventually get to an extreme point where the cost cannot be improved and that would be your best solution. Another way to say it, yes, you could set $z$ not to 1250, and set other non basic variables to non zero value but then it would not be an extreme point and therefore could not be your best solution.
Evaluate the limit of $e^{\pi-\ln \frac{x+4}{-x}}/x$ as $x\to 0$
$$e^{\pi-\ln\frac{x+4}{-x}}=e^{\pi}\cdot\dfrac{1}{\dfrac{x+4}{-x}}=e^{\pi}\dfrac{-x}{x+4}.$$ Hence $$\lim_{x \to 0^{-}}\frac{e^{\pi-\ln\frac{x+4}{-x}}}{x}=\lim_{x \to 0^{-}}\frac{-e^{\pi}}{x+4}=-\frac{e^{\pi}}{4}.$$
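A numerical check that the one-sided limit is $-e^{\pi}/4$:

```python
import math

def f(x):
    # only defined for -4 < x < 0, so that (x + 4)/(-x) > 0
    return math.exp(math.pi - math.log((x + 4) / (-x))) / x

limit = -math.exp(math.pi) / 4

for x in (-1e-4, -1e-6, -1e-8):
    assert abs(f(x) - limit) < 1e-3
```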
Number of zeros in cosine for one period
For any sine/cosine function, the number of zero crossings per period will always be two. If you managed to understand that for any period but the first one, just notice that all the periods are strictly the same; that's why they're called periods ;) Btw, for a function defined on $\mathbb{R}$, what would "first" mean?
Why is this differentiable?
Fix a $t$. Consider $\frac{g(t+h) - g(t)}{h} = \int_{\mathbb{R}} f(x)e^{-itx}(\frac{e^{-ihx} - 1}{h}) dx$. Notice that $\frac{e^{-ihx} -1}{h} = \frac{-i}{h} \int_0^{hx} e^{-is} ds$, from which we obtain the estimate $|\frac{e^{-ihx} - 1}{h}| \le |x|$. So the integrand is bounded by $|x\,f(x)|$, which is integrable under the standing decay assumption on $f$, and the differentiation under the integral sign follows by appealing to the dominated convergence theorem.
A basic integration problem
Using this, set $\displaystyle x=\sec y\implies dx=\sec y\tan y\ dy$ $$\int\frac{x^2}{\sqrt{x^2-1}}dx=\int\frac{\sec^2y\sec y\tan y}{\tan y}\ dy=\int\sec^3y\ dy$$ Use this
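Carrying the standard $\int\sec^3 y\,dy$ result back to $x$ gives $\tfrac12\bigl(x\sqrt{x^2-1}+\ln\bigl(x+\sqrt{x^2-1}\bigr)\bigr)+C$ for $x>1$ (my continuation of the hint); a numerical check against Simpson's rule:

```python
import math

def integrand(x):
    return x * x / math.sqrt(x * x - 1)

def antiderivative(x):
    # valid for x > 1
    return 0.5 * (x * math.sqrt(x * x - 1) + math.log(x + math.sqrt(x * x - 1)))

def simpson(g, a, b, n=2000):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

a, b = 2.0, 5.0
assert abs(simpson(integrand, a, b) - (antiderivative(b) - antiderivative(a))) < 1e-8
```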
How to solve the six elements equations below?
Edit 3: Since we now know that $x_1 = x_4$ and $x_2 = x_5$ and that $x_3$ and $x_6$ are superfluous, I believe that the correct solution to the above system is the set of $(x_1,x_2)$ pairs related by $$ \beta = {{c_1} \over {1-x_1}} + {{c_2} \over {1-x_2}} \\ x_3 = 1 + { {c_3} \over {\beta} } \\ \Rightarrow c_1 \ln \left( { {x_1} \over {1-x_1} } \right) + c_2 \ln \left( { {x_2} \over {1-x_2} } \right) = -c_3 \ln \left( {{-\beta} \over {c_3} } - 1 \right). $$ Since $\ln$ is nonlinear, I don't think the above can be simplified meaningfully.

Edit 2: So I updated the cost function to only use $x_1$, $x_2$, $x_4$, and $x_5$ since both $x_3$ and $x_6$ can be expressed in terms of them. I still get a different solution that depends on the starting point (I choose the four starting points using a random number generator). However, the solution always has the relations that $x_1 = x_4$ and $x_2 = x_5$. This suggests to me that there are still two superfluous relations in the system and I'd guess that the first two equations can be combined with the last two to remove another two variables so that the problem reduces to finding $x_1$ and $x_2$. Until then, I'd say that the problem is currently poorly formulated.

Edit 1: So the below will only get you one particular solution. If I change the initial guess to [0.7; 0.8; rand(1); 0.7; 0.8; rand(1)], I get a different solution each time.

Original Answer: Depending on the values of $c_1$, $c_2$, $c_3$, I can get solutions using the following Nelder-Mead simplex search in Matlab. Note that there is one difference with the equations given above: I assumed that $\ln ( x_6 / (1-x_3) )$ was a typo and that it should be $\ln ( x_6 / (1-x_6) )$, as that fits the pattern of the other equations.
Here is the cost function I used with some arbitrary values chosen for the constants:

```matlab
function f = costfun(x)
c1 = 100; c2 = -54; c3 = 0.354;
if( any( x >= 1 ) || any( x <= 0 ) )
    f = inf;
    return;
end
f1 = ( c1 / ( 1 - x(1) ) + c2 / ( 1 - x(2) ) + c3 / ( 1 - x(3) ) ).^2;
f2 = ( c1 / ( 1 - x(4) ) + c2 / ( 1 - x(5) ) + c3 / ( 1 - x(6) ) ).^2;
f3 = ( c1 * log( x(1) / ( 1 - x(1) ) ) + ...
      c2 * log( x(2) / ( 1 - x(2) ) ) + c3 * log( x(3) / ( 1 - x(3) ) ) ).^2;
f4 = ( c1 * log( x(4) / ( 1 - x(4) ) ) + ...
      c2 * log( x(5) / ( 1 - x(5) ) ) + c3 * log( x(6) / ( 1 - x(6) ) ) ).^2;
f5 = ( x(1)*(1-x(1)) / x(4)/(1-x(4)) - x(2)*(1-x(2)) / x(5)/(1-x(5)) ).^2;
f6 = ( x(1)*(1-x(1)) / x(4)/(1-x(4)) - x(3)*(1-x(3)) / x(6)/(1-x(6)) ).^2;
f7 = ( x(2)*(1-x(2)) / x(5)/(1-x(5)) - x(3)*(1-x(3)) / x(6)/(1-x(6)) ).^2;
f = f1 + f2 + f3 + f4 + f5 + f6 + f7;
return;
```

I then used the following to estimate the solution:

```matlab
[x, costval, exitflag] = fminsearch( @costfun, 0.5 * ones(6,1) );
```

For the above values of $c_1$, $c_2$, and $c_3$, I got the following estimate for $\mathbf{x}$ with a total squared error of $3.3112 \cdot 10^{-6}$:

```matlab
x =
    0.7176
    0.8477
    0.1655
    0.7176
    0.8477
    0.1656
```

The below plot shows the squared error for each iteration of the Nelder-Mead search:
Compactness of a Sphere
As yoyo says in the comments, you can use the compactness criterion that a subset of $\mathbb{R}^n$ is compact if and only if it is closed and bounded. So you can embed the sphere $S^2$ in $\mathbb{R}^3$ via the equation $x^2+y^2+z^2=1$. Satisfying an algebraic equation is "clearly" a closed condition, so $S^2$ is closed (why is this clear? The sphere is the preimage of the closed set $\{1\}$ under the continuous map $(x,y,z)\mapsto x^2+y^2+z^2$, and preimages of closed sets under continuous maps are closed). Why is it bounded? This is clear: the equation says precisely that any point on the sphere has norm $1$. So the sphere is clearly bounded. It follows from Heine-Borel that the sphere is compact as a subset of $\mathbb{R}^3$. But compactness is a topological invariant, so we are done.
Property of Ellipse evolute
Take the standard parameterization of the ellipse: $$P := (a\cos t, b\sin t) \tag{1}$$ where $a$ and $b$ are the "horizontal" and "vertical" radii, not necessarily "major" and "minor". (The major/minor distinction is immaterial.) To avoid sign complications, we'll consider the first-quadrant arc of the ellipse, where $0\leq t\leq \pi/2$. With this, we are assured that an "inward-pointing" normal at $P$ is given by $$n := (-b \cos t, -a \sin t) \tag{2}$$ (which is obtained from exchanging the components of the tangent vector $P'(t)$, and changing signs to ensure the proper orientation). A point $K$ at distance $k$ from $P$ along the normal line has the form $$K := P + \frac{k}{|n|} n = \left(\; \left(a-\frac{bk}{|n|}\right) \cos t,\;\left(b-\frac{ak}{|n|}\right)\sin t\;\right) \tag{3}$$ In particular, setting appropriate coordinates to zero, we find that the point $X$ and $Y$ on the $x$- and $y$-axes corresponds to the distances $$|PX| =\frac{b}{a}|n| \qquad |PY| = \frac{a}{b}|n| \tag{4}$$ Now, the point $Z$ on the evolute has distance from $P$ equal to the radius of curvature of the ellipse at $P$. By the parametric formula, we have $$|PZ| := \frac{\left(P_x'^2 + P_y'^2\right)^{3/2}}{\left|P_x'' P_y'-P_x'P_y''\right|} = \frac{|n|^3}{a b} \tag{5}$$ (where I'm using $P_x$ and $P_y$ to refer to the coordinates of $P$). Thus, $$|n|^3 = \frac{a^3}{b^3}|PX|^3 = \frac{b^3}{a^3}|PY|^3 = ab|PZ| \tag{$\star$}$$ and the result follows. $\square$
Behavior at $\infty$ of the conjugate of an harmonic function defined in the upper half plane that vanish uniformly at $\infty$.
Another approach: Claim: There exists $f=u+iv$ holomorphic in $U$ such that as $z\to 0$ in $U,$ $u(z) \to -\infty,$ $v(z)\to 0.$ Suppose the claim is proved and we have such an $f.$ Then the function $-if(-1/z) = v(-1/z) -iu(-1/z)$ is holomorphic in $U.$ As $z\to \infty$ within $U,$ $-1/z\to 0$ within $U.$ Hence $v(-1/z) \to 0$ and $-u(-1/z)\to \infty.$ Thus we have a counterexample. Proof of claim: Consider the functions $$f_n(z) =\log (z+i/e^n) = \log |z+i/e^n| + i\text {arg }(z+i/e^n),\,\,n=1,2,\dots,$$ where $\log $ denotes the principal value logarithm. These functions are holomorphic in $U$ and are uniformly bounded on compact subsets of $U.$ For $z\in U,$ define $$f(z)=\sum_{n=1}^{\infty}\frac{f_n(z)}{n^2}.$$ By Weierstrass M, the series converges uniformly on compact subsets of $U,$ hence $f$ is holomorphic in $U.$ Writing $f=u+iv,$ we have $$u(z) = \sum_{n=1}^{\infty} \frac{\log |z+i/e^n|}{n^2}.$$ Note all summands are negative on $\{z\in U: |z|<1/2\}.$ Thus for any $N,$ $$\limsup_{z\to 0} u(z) \le \lim_{z\to 0} \sum_{n=1}^{N} \frac{\log |z+i/e^n|}{n^2} =\sum_{n=1}^{N} \frac{\log |i/e^n|}{n^2} = - \sum_{n=1}^{N} \frac{1}{n}.$$ Since $N$ is arbitrary, we see $\lim_{z\to 0} u(z)=-\infty.$ (Just to be clear, these limits are taken as $z\to 0$ within $U.$) As for $v(z),$ we have $$v(z) = \sum_{n=1}^{\infty} \frac{\text {arg }(z+i/e^n)}{n^2}.$$ This series actually converges uniformly on all of $U.$ Thus $$\lim_{z\to 0} v(z) = \sum_{n=1}^{\infty} \lim_{z\to 0}\frac{\text {arg }(z+i/e^n)}{n^2} = \sum_{n=1}^{\infty} \frac{\text {arg }(i/e^n)}{n^2} = \sum_{n=1}^{\infty} \frac{\pi/2}{n^2}.$$ We're not quite done: Letting $c$ denote the last sum, we see $f(z)-ic$ has the claimed properties. Previous answer: I'm pretty sure this is false.
Define $V=\{x+iy: x>0,\ |y|<x^2\}.$ Then $V$ is simply connected, so there is a conformal map $g:U\to V.$ And we should be able to arrange things so that $z\to \infty$ in $U$ iff $g(z)\to 0$ in $V.$ Now define $$h(z) = -i\log g(z)= \text {arg }g(z) - i\ln |g(z)|.$$ As $z\to \infty$ in $U,$ $\text {arg }g(z) \to 0.$ That's because $g(z)\to 0$ tangent to the real axis. But the conjugate function $-\ln |g(z)| \to \infty.$
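Back to the claim proved above: its limits can be illustrated numerically with partial sums of the series (a sketch; the truncation $N=200$ and the sample points $z=10^{-3}, 10^{-9}$ are arbitrary). Here $c=\sum_{n\ge1}(\pi/2)/n^2=\pi^3/12$ is the limit of $v$ before the final $-ic$ correction:

```python
import cmath
import math

def f_partial(z, N=200):
    # Partial sum of f(z) = sum_n log(z + i/e^n) / n^2 (principal log)
    return sum(cmath.log(z + 1j * math.exp(-n)) / n ** 2
               for n in range(1, N + 1))

c = math.pi ** 3 / 12  # = sum_n (pi/2)/n^2, the limit of v(z) as z -> 0

# Approach 0 along the positive real axis: u drops without bound, v -> c
u1, v1 = f_partial(1e-3).real, f_partial(1e-3).imag
u2, v2 = f_partial(1e-9).real, f_partial(1e-9).imag
```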
Finding Entire functions with certain conditions
For the first one $$ f(z)=g(z)=\frac{z}{4}. $$ The second one has no solution. If $e^{4F(z)}+e^{4G(z)}=2(z+1)$, then $$ 1+e^{4(G(z)-F(z))}=2(z+1)e^{-4F(z)}. $$ The left hand side is an entire function that does not take the value $1$. Moreover, it has an essential singularity at $\infty$ (since obviously it is not constant). By Picard's Big Theorem, it takes the value $0$ infinitely often. But the right hand side takes the value $0$ only for $z=-1$.
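A numerical check for the first part (a sketch; I am assuming the first equation reads $e^{4f(z)}+e^{4g(z)}=2e^z$, which is not quoted in the answer but is consistent with the solution $f=g=z/4$):

```python
import cmath

def residual(z):
    # Assumed equation: e^{4 f(z)} + e^{4 g(z)} = 2 e^z, with f = g = z/4
    f = g = z / 4
    return abs(cmath.exp(4 * f) + cmath.exp(4 * g) - 2 * cmath.exp(z))

max_err = max(residual(z) for z in (0, 1 + 2j, -3.5j, 2.7 - 0.4j))
```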
If in a finite group every element is a cube ($\forall x\,\exists y:x=y^3$), then its order can't be divisible by $3$
Consider $\varphi : G \rightarrow G$ defined for every $x \in G$ by $$\varphi(x)=x^3$$ By hypothesis, $\varphi$ is surjective. But because $G$ is finite, a surjective self-map is injective, so $\varphi$ is injective. Now, if $G$ had order divisible by $3$, it would have an element $a$ of order $3$ (by Cauchy's theorem); then $\varphi(a)=a^3=e=\varphi(e)$ with $a\neq e$, which contradicts the fact that $\varphi$ is injective.
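A small sanity check of the statement in the cyclic groups $\mathbb Z_n$ (written additively, so "cubing" is $x\mapsto 3x \bmod n$); this is an illustration, not part of the proof:

```python
from math import gcd

def cube_map_surjective(n):
    # In the additive group Z_n, x^3 corresponds to 3x mod n
    return {(3 * x) % n for x in range(n)} == set(range(n))

# The cubing map is surjective exactly when 3 does not divide |Z_n| = n
for n in range(1, 50):
    assert cube_map_surjective(n) == (gcd(3, n) == 1)
```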
proving that the number of painting options is $\binom{n-k-1}{k-1}\cdot \frac{n}{k}$
Line up the chairs in a row, with the throne on the left. We either (i) do not paint the throne or (ii) paint the throne red. Case (i): We have $n-1$ chairs left in the row, and want to paint $k$ of them red. So $n-k-1$ of them will remain unpainted. Write down $n-k-1$ occurrences of $\times$, like this $$ \times\quad\times\quad\times\quad\times\quad\times\quad\times\quad\times\quad\times\quad\times$$ to represent spots for the unpainted chairs. These determine $n-k$ "gaps" ($n-k-2$ real gaps between $\times$'s, plus the $2$ "endgaps") from which we must choose $k$ to slip a painted chair into. There are $\binom{n-k}{k}$ ways to do this. By a standard result, easily checked by a combinatorial argument or by playing with factorials, we have $$\binom{n-k}{k}=\frac{n-k}{k}\binom{n-k-1}{k-1},\tag{1}$$ Case (ii): We must leave the chair next to the throne, and the one at the right end, unpainted. So we must paint $k-1$ of the remaining $n-3$. That will leave $n-k-2$ unpainted. A "gap" argument like the previous one shows that there are $$\binom{n-k-1}{k-1}\tag{2}$$ ways to do this. Finishing: Add the answers (1) and (2). We get $$\left(\frac{n-k}{k}+1\right)\binom{n-k-1}{k-1},$$ which is what we wanted to show.
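Assuming the problem counts the ways to paint $k$ pairwise non-adjacent chairs among $n$ arranged in a circle (which is what the two throne cases above split on), the formula can be checked by brute force for small values:

```python
from itertools import combinations
from math import comb

def brute_force(n, k):
    # k-subsets of chairs 0..n-1 in a circle, no two chosen chairs adjacent
    count = 0
    for S in combinations(range(n), k):
        if all((a + 1) % n not in S for a in S):
            count += 1
    return count

def formula(n, k):
    # binom(n-k-1, k-1) * n/k, as derived above (always an integer)
    return comb(n - k - 1, k - 1) * n // k

pairs = [(n, k) for n in range(4, 13) for k in range(1, n // 2 + 1)]
all_match = all(brute_force(n, k) == formula(n, k) for n, k in pairs)
```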
How do I show this series diverges?
The infinite series $$ \sum_{n=1}^\infty\frac{n^2+1}{2n^2+5}$$ diverges because the general term $$\frac{n^2+1}{2n^2+5}\to 1/2 \not=0$$ Intuitively, you are adding infinitely many numbers which are very close to $1/2$, so the result does not converge. The so-called divergence test ($n$-th term test) says that if the general term of a series does not tend to zero, then the series diverges. You do not have to prove anything else for the divergence of the above series.
Is it possible to recover $a$ and $b$ from $a^b\equiv x\pmod n$?
The only thing you can say for sure is that $x^1\equiv x \pmod p$. If you want to say more, this will depend strongly on $x$ and $p$: e.g. for a specific $p$ some $x$ are squares, i.e. there exists some $a$ with $a^2\equiv x \pmod p$, and some are not squares. And every $x\not\equiv 0\pmod p$ (for $p$ prime) has an inverse $a$ with $a^{-1}\equiv x \pmod p.$
What is the probability distribution for fitting two equal circles inside a fixed length (L)?
So you shall have that $$ 0 \le x_{\,2} \le x_{\,1} - d\quad \vee \quad x_{\,1} + d \le x_{\,2} \le L $$ which, dividing by $L$, can be reduced to an "adimensional" (dimensionless) form as $$ 0 \le {{x_{\,2} } \over L} \le {{x_{\,1} } \over L} - {d \over L}\quad \vee \quad {{x_{\,1} } \over L} + {d \over L} \le {{x_{\,2} } \over L} \le 1 $$ Geometrically this is represented by the attached sketch. Therefore you get a probability equal to the area of the two white triangles, i.e. $$ P = \left( {1 - {d \over L}} \right)^{\,2} = {{\left( {L - d} \right)^{\,2} } \over {L^{\,2} }} $$ By the way, note that this problem is the complement of the meeting problem (see e.g. this post)
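The closed form can be checked by Monte Carlo: draw the two abscissas uniformly in $[0,L]$ and count the draws with $|x_1-x_2|\ge d$ (a sketch; the values $L=10$, $d=3$ and the trial count are arbitrary):

```python
import random

def estimate(L, d, trials=200_000, seed=1):
    # Fraction of uniform draws (x1, x2) in [0, L]^2 with |x1 - x2| >= d
    rng = random.Random(seed)
    hits = sum(abs(rng.uniform(0, L) - rng.uniform(0, L)) >= d
               for _ in range(trials))
    return hits / trials

L, d = 10.0, 3.0
exact = (1 - d / L) ** 2   # the area of the two white triangles
approx = estimate(L, d)
```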
Principal Part and little o notation
As $x\to 0^+$, in Big $O$ notation, $$e^{x+x^{4/3}}=1+(x+x^{4/3})+O(x^2)=1+x+O(x^{4/3}).$$ Therefore you may write $$f(x)=3x+O(x^{4/3})\quad\mbox{or}\quad f(x)=3x+o(x^{a})$$ with any $1\leq a<4/3$.
$X$ and $Y$ are $n\times n$ matrices such that rank of $X-Y$ is $1$.
This is true over any field of characteristic zero. Since $X-Y$ has rank one, $E:=X-Y=uv^T$ for some nonzero vectors $u$ and $v$. Therefore $2Y^2=XY-YX=EY-YE$ and \begin{align} 2Y^3=Y(EY-YE)&=(EY-YE)Y,\tag{1}\\ 2YEY&=Y^2E+EY^2.\tag{2} \end{align} Since the commutator $EY-YE$ commutes with $Y$ in $(1)$, by Jacobson's lemma it must be nilpotent. Therefore $2Y^2$ is nilpotent and in turn, $Y$ is nilpotent. Let $\{\hat{u}_1,\ldots,\hat{u}_{n-1}\}$ and $\{\hat{v}_1,\ldots,\hat{v}_{n-1}\}$ be two linearly independent sets of vectors such that $\hat{u}_i^Tu=0$ and $v^T\hat{v}_j=0$ for all $i$ and $j$. From $(2)$, we obtain $\hat{u}_i^TYEY\hat{v}_j=0$, i.e. $(\hat{u}_i^TYu)(v^TY\hat{v}_j)=0$. Hence $u$ is a right eigenvector or $v$ is a left eigenvector of $Y$. It follows that $Yu=0$ or $v^TY=0$, because $Y$ is nilpotent. Thus $YE=0$ or $EY=0$. Consequently, $Y(X-Y)Y=YEY=0$, i.e. $Y^3=YXY$. (Actually, since at least one of $YE$ or $EY$ is zero, $(1)$ implies that $Y^3=0$, as observed in loup blanc's answer.)
The $x$-axis is a meager set in $\mathbb{R}^2$
The interior of $\mathbb{R}$ as a subset of $\mathbb{R}^2$ is very different from its interior as a standalone topological space. Given any point $P$ on the $x$-axis, every open neighbourhood of $P$ contains a point not on the $x$-axis. So $P$ is not in the interior of the $x$-axis. To put it another way, every point of the $x$-axis is on the boundary of the $x$-axis.
To find the particular integral of second order ode with nonconstant coefficients
You don't need to do it this way. Just write $$\frac{1}{D^2-1} = \frac{1}{2}\left(\frac{1}{D-1} - \frac{1}{D+1}\right)$$ Now, $\frac{1}{D-1}\frac{1}{e^t+1}$ is just $$e^t \int \frac{e^{-t}}{e^t+1}\,dt$$ (and similarly for the second term). Thus, we can get $y_p$.
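The operational meaning of $\frac{1}{D-1}g$ as $e^t\int_0^t e^{-s}g(s)\,ds$ can be sanity-checked numerically: that function should satisfy $y'-y=g$. A sketch using a trapezoid rule and a central difference (step counts and the test point are arbitrary):

```python
import math

def g(s):
    return 1.0 / (math.exp(s) + 1.0)

def y(t, steps=4000):
    # y(t) = e^t * integral_0^t e^{-s} g(s) ds, by the trapezoid rule
    if t == 0:
        return 0.0
    h = t / steps
    total = 0.5 * (g(0.0) + math.exp(-t) * g(t))
    for i in range(1, steps):
        s = i * h
        total += math.exp(-s) * g(s)
    return math.exp(t) * total * h

t, h = 1.3, 1e-4
lhs = (y(t + h) - y(t - h)) / (2 * h) - y(t)   # approximates y'(t) - y(t)
residual = abs(lhs - g(t))
```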
Can the equivalence of isometry and unitarity of a linear operator be extended to infinite dimensions?
EDIT: In my original answer, I forgot all about bijectivity! As such, I have updated my answer to fix this. (Thank you to Dimitar for pointing this out.) The short answer is almost. The slightly longer answer is that an operator $U$ on a Hilbert space $H$ is unitary if it is surjective and it preserves the inner product---i.e. $\langle Ux,Uy\rangle = \langle x, y\rangle$ for all $x, y \in H$. Since $\|Ux\|^2 = \langle Ux, Ux \rangle = \langle x, x \rangle = \|x\|^2$, it is clear that if an operator is unitary, then it is an isometry. The converse is almost true, because of the polarization identity: $\langle x,y \rangle = \frac{1}{4}\left(\|x + y\|^2 - \|x - y\|^2 + i\|x + iy\|^2 - i\|x - iy\|^2\right)$. So, if an operator preserves the norm (which is the meaning of isometry), then it automatically preserves the inner product. The only problem is that surjectivity is not guaranteed---there are isometries that are not surjective! Here is an example. Let your Hilbert space be the set of sequences of complex numbers $a_0, a_1, a_2, \ldots$ with the condition that $\sum_{n = 0}^\infty |a_n|^2 < \infty$. This is a Hilbert space with the inner product $\langle (a_0, a_1, \ldots), (b_0, b_1, \ldots)\rangle = \sum_{n = 0}^\infty a_n \overline{b_n}$. Now, consider the map $a_0, a_1, \ldots \mapsto 0, a_0, a_1, \ldots$---it is easy to check that this is an isometry, but it is also clear that it is not surjective!
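The shift isometry at the end can be illustrated on finitely supported sequences, modeled here as tuples padded with zeros (a sketch):

```python
import math

def shift(a):
    # The right shift (a0, a1, a2, ...) -> (0, a0, a1, a2, ...)
    return (0.0,) + tuple(a)

def norm(a):
    return math.sqrt(sum(abs(x) ** 2 for x in a))

a = (3.0, -1.0, 4.0, 1.5)
b = shift(a)
# The norm (hence, by polarization, the inner product) is preserved ...
norm_preserved = abs(norm(b) - norm(a)) < 1e-12
# ... but every image starts with 0, so e.g. (1, 0, 0, ...) is never hit:
# an isometry that is not surjective.
```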
Cumulative Distribution Function with Infinite Series
It is the formula for the partial sum of a geometric series, $$S_n=\sum_{i=0}^{n-1}c\cdot x^{i}=c\cdot \frac{1-x^n}{1-x},$$ or equivalently $$S_n=\sum_{i=1}^{n}c\cdot x^{i-1}=c\cdot \frac{1-x^{n}}{1-x}.$$ In your case $x=\frac34$ and $c=\frac14$, so $$S_n=\frac14\cdot \frac{1-\left(\frac34\right)^n}{1-\frac34}=\frac14\cdot \frac{1-\left(\frac34\right)^n}{\frac14}=1-\left(\frac34\right)^n,$$ since the factor $\frac14$ cancels. What happens as $n$ goes to infinity? Since $$\lim_{n \to \infty} \left(\frac34\right)^n=0,$$ we get $\lim_{n\to\infty} S_n=1$.
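These partial sums are easy to check directly (a short sketch):

```python
def S(n, c=0.25, x=0.75):
    # Partial sum: sum_{i=1}^n c * x^(i-1)
    return sum(c * x ** (i - 1) for i in range(1, n + 1))

def closed(n):
    # The closed form: 1 - (3/4)^n
    return 1 - 0.75 ** n

vals = [(S(n), closed(n)) for n in (1, 5, 20, 100)]
```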
Combinatorics question, if $n \geq 3$ then $B(n) < n!$ proof
Since $B(i)\leq i!$ for $i=0,1,\dots, n$ by the (strong) induction hypothesis, and $B(i)<i!$ when $i\geq 3$, you have that if $n\geq 3$ then: $$B(n+1)=\sum_{i=0}^n \binom{n}{i}B(i)< \sum_{i=0}^{n} \binom{n}{i}i!$$ (strict, because the $i=n$ term satisfies $B(n)<n!$). For the rest of the proof: $\binom{n}{i}i!=\frac{n!}{(n-i)!}\leq n!$, so $$\sum_{i=0}^{n} \binom{n}{i}i!\leq (n+1)\cdot n!=(n+1)!$$ Hence $B(n+1)<(n+1)!$.
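The inequality can be checked directly from the Bell-number recurrence $B(n+1)=\sum_i\binom{n}{i}B(i)$ (a sketch):

```python
from math import comb, factorial

def bell_numbers(n_max):
    # B(0) = 1; B(m+1) = sum_{i=0}^m binom(m, i) * B(i)
    B = [1]
    for m in range(n_max):
        B.append(sum(comb(m, i) * B[i] for i in range(m + 1)))
    return B

B = bell_numbers(15)   # B[0], ..., B[15]
```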
Is the series with terms $X_n=\frac{x^n}{n(n+1)}$ convergent or divergent?
If $|x| \leq 1$, $$\left|\sum_{n=1}^{\infty}\frac{x^n}{n(n+1)}\right|\leq\sum_{n=1}^{\infty}\frac{1}{n(n+1)}=\sum_{n=1}^{\infty}\left(\frac{1}{n}-\frac{1}{n+1}\right)=1$$ by telescoping, hence the series converges (absolutely). If $|x|>1$, the terms $X_n$ do not tend to $0$, so the series diverges.
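The telescoping bound is easy to confirm numerically (a short check):

```python
def partial(N):
    # sum_{n=1}^N 1/(n(n+1)) telescopes to 1 - 1/(N+1)
    return sum(1.0 / (n * (n + 1)) for n in range(1, N + 1))

p = partial(10_000)
```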
Why eigenvectors with the highest eigenvalues maximize the variance in PCA?
Let the spectral decomposition of $\Sigma$ be $\Sigma=PDP^T,$ where $P$ is orthogonal and $D$ is diagonal. Then $u^T\Sigma u=\displaystyle\sum_{i=1}^d\lambda_i(p_i^Tu)^2,$ where $p_i$ is the $i^{\text{th}}$ column of $P$, in other words, the $i^\text{th}$ eigenvector of $\Sigma.$ We want to find $u$ such that $\displaystyle\sum_{i=1}^d\lambda_i(p_i^Tu)^2$ is maximized. Since the $p_i$'s form an orthonormal basis and $\|u\|=1$, $\displaystyle\sum_{i=1}^d(p_i^Tu)^2=1.$ Consider the optimization problem: $$\text{Maximize }\displaystyle\sum_{i=1}^d\lambda_iz_i^2\text{ subject to }\sum_{i=1}^dz_i^2=1.$$ Suppose $\lambda_1\ge\lambda_2\ge\dots\ge\lambda_d.$ Then the maximum value is $\lambda_1$, attained at $z_1^2=1,$ $z_i=0$ for the rest (any other feasible $z$ gives a value no larger, and strictly smaller if $\lambda_1$ is strictly the largest). Replacing $u$ by $-u$ if necessary, that would mean $$p_1^Tu=1,\text{ and }p_i^Tu=0\text{ for all }i\neq 1.$$ By the equality case of the Cauchy-Schwarz inequality, $p_1^Tu=1=\|p_1\|\|u\|$ forces $u=c\, p_1$ for some constant $c,$ and the norm-$1$ constraint then gives $u=p_1.$
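A quick NumPy check (a sketch with a random positive semidefinite $\Sigma$): among unit vectors, none beats the top eigenvector, and the maximum of $u^T\Sigma u$ equals $\lambda_1$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
Sigma = A @ A.T                       # random PSD "covariance" matrix

vals, vecs = np.linalg.eigh(Sigma)    # eigenvalues in ascending order
p1 = vecs[:, -1]                      # top eigenvector
top = p1 @ Sigma @ p1                 # should equal lambda_max = vals[-1]

# No random unit vector achieves a larger value of u^T Sigma u
for _ in range(1000):
    u = rng.standard_normal(5)
    u /= np.linalg.norm(u)
    assert u @ Sigma @ u <= top + 1e-9
```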
Periodic function $f: \mathbb R \to \mathbb R$ (continuous if possible) such that the sequence $\{f(n)\}$ is convergent but not constant?
No. Assume $f(n)\to c$. Let $p>0$ be a period of $f$. The sequence $n\bmod p$ is either dense in $[0,p)$ or discrete. In the latter case, $\{f(n)\}$ is a periodic sequence of numbers, which is convergent iff it is constant. On the other hand, in the former case, all tails (i.e., $\{n\bmod p\mid n>N\}$) are also dense, which implies that for any $\epsilon>0$ the set $f^{-1}((c-\epsilon,c+\epsilon))$ is dense in $\mathbb R$. Since $f$ is continuous, this implies $f(x)=c$ for all $x\in\mathbb R$.
Equivalence classes of set of rotations
For $S$ a set and $R$ an equivalence relation, $S\setminus R$ is just the set of equivalence classes, that is, the unique partition of $S$ such that $a$, $b$ belong to the same element of $S\setminus R$ iff $aRb$. I've seen it more often written up as $S/R$. Your answer is correct: let's attempt to write it up cleanly. Two rotations $\varphi_\alpha$ and $\varphi_\beta$ will be related by $R$ iff $\alpha-\beta=2\pi k$ for some integer $k$. To see this, notice that if the latter condition holds, for every $x$, $$\varphi_\alpha(x)=(\varphi_\beta\circ\varphi_{2\pi k})(x)=(\varphi_\beta\circ\text{id})(x)=\varphi_\beta(x),$$ implying the former condition. Furthermore, if the former condition holds, $\varphi_\alpha$ will send the point $(\cos(-\alpha),\sin(-\alpha))$ to $(1,0)$, while $\varphi_\beta$ will send it to $(\cos(\beta-\alpha),\sin(\beta-\alpha))$; since the two rotations agree as maps, the squared distance between these two points is $$0=\left(\cos(\beta-\alpha)-1\right)^2+\sin(\beta-\alpha)^2=2-2\cos(\beta-\alpha),$$ so that $\cos(\beta-\alpha)=1$, implying the latter condition. This is what we wanted to prove. $\blacksquare$
Let $k$ be a natural number. What is the probability of it being equal to a random number in a group?
For the first group, there are $6$ choices for $k$ and $7$ for $A$, so the total number of cases is $42$. The number of successful cases is $6$, for $\{k=1,X[A]=6\},\dots$ Therefore the probability is $\frac{6}{42}=\frac 17$ for the first group. Following the same logic, group $2$ has chance $\frac{5}{36}$, and group $3$ has chance $\frac{4}{30}=\frac{2}{15}$. The probability of $k$ equaling a number in every set is therefore the product of all the chances.