Convergence of $\sum \limits ^{\infty} _{k=1} \ln(1+\frac{1}{k^2})$
Outline: you have $$0 \leq \ln(1+x) \leq x$$ for all $x\geq 0$ (shown e.g. by concavity of the function $f$ given by $f(x) = \ln(1+x)$).${}^{(\dagger)}$ Then use comparison theorems, noting that the series $\sum_{k=1}^\infty \frac{1}{k^2}$ converges. ${}^{(\dagger)}$ By concavity, the continuously differentiable function $f$ stays below any of its tangents. But the tangent at $0$ is given by $g(x) = f(0)+f^\prime(0)x = 0+1\cdot x = x$.
Find $\lim\limits_{x\rightarrow 0^+}\frac{1}{\ln x}\sum\limits_{n=1}^{\infty}\frac{x}{(1+x)^n-(1-x)^n}$
The limit is equal to $-\dfrac{1}{2}$. A LOWER bound: for $0<x<1$, $\log(x)<0$ and $$\begin{align} \frac{1}{\ln x}\sum_{n=1}^{\infty}\frac{x}{(1+x)^n-(1-x)^n} &\geq \frac{1}{2\ln x}\sum_{n=1}^{\infty}\frac{1}{n+\binom{n}{3}x^2}\\ &\geq \frac{1}{2\ln x}\int_{1/2}^{\infty}\frac{ds}{s+\frac{s^3x^2}{6}}\\ &=\frac{\frac{1}{2}\ln(x^2+24)-\ln x}{2\ln x}\to -\frac{1}{2} \end{align}$$ As regards the UPPER bound, for $0<x<1$, we have that $0<\frac{1}{1+x}<1$ and, from your work, $$-\frac{1}{2}\leftarrow \frac{(1+x)\ln\left(\frac{1}{1-\frac{1}{1+x}}\right)}{2\ln x}=\frac{1}{2\ln x}\sum_{n=1}^{\infty}\frac{1}{n{{(1+x)}^{n-1}}}\geq\frac{1}{\ln x}\sum_{n=1}^{\infty}\frac{x}{{{(1+x)}^{n}}-{{(1-x)}^{n}}}.$$
What's the largest number of prime divisors that a number $n$ could have?
For your case, the answer is $3$. In general, if a number has $k$ divisors, what's the maximum number of prime divisors that it could have? Every integer greater than $1$ can be written as the product of distinct primes raised to powers greater than or equal to $1$, say $x = {p_1}^{n_1} {p_2}^{n_2} \ldots {p_m}^{n_m}$, with all $n_j \geq 1$. The number of divisors is $k = (n_1 + 1)(n_2 + 1)\ldots(n_m + 1)$. The number of prime divisors is $m$. $m$ is largest when the $n_j$ are as small as possible: take $k$, factor it into prime factors, and count the prime factors according to their multiplicity. For example, if a number $x$ has $k = 20 = 2 \cdot 2 \cdot 5$ divisors, it can have at most $3$ prime factors, since $2 \cdot 2 \cdot 5$ is a product of three primes. Numbers with $20$ divisors and $3$ prime divisors are the numbers of the form $pqr^4$ where $p$, $q$ and $r$ are distinct primes.
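As a quick computational check of this recipe (a minimal Python sketch; `omega` is my name for counting prime factors with multiplicity):

```python
def omega(k):
    # number of prime factors of k, counted with multiplicity
    count, p = 0, 2
    while p * p <= k:
        while k % p == 0:
            k //= p
            count += 1
        p += 1
    return count + (k > 1)

print(omega(20))                                  # 3 prime divisors at most
x = 3 * 5 * 2 ** 4                                # p*q*r^4 with p, q, r = 3, 5, 2
print(sum(x % d == 0 for d in range(1, x + 1)))   # 20 divisors, as promised
```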
Find a bijection from $[0,1]$ to $[0,1]$ that is not strictly monotone. Is this possible?
Let $f(x) = x$ for rational $x$ and $f(x) = 1-x$ for irrational $x$. This function is not only non-monotone, it is nowhere monotone (meaning that it is non-monotone on any subinterval of $[0,1]$).
All irreducible fractions whose denominators do not exceed 99 are written in ascending order from left to right
First of all, it should be clear that once we solve the problem for one side, the second side will be done similarly. So let $$\frac{a}{b} < \frac{5}{8}\tag{1}$$ with $b \leq 99$. Now, let $a$ be fixed; then from $(1)$ we get $$\frac{8a}{5}<b\tag{2}.$$ In order to maximize $a/b$, we want to minimize $b$ for fixed $a$, so we want to choose the next integer after $8a/5$. To see what integer this will be, we can inspect $5$ cases based on the remainder of $a$ divided by $5$. For example, if $a \equiv 0 \bmod {5}$, then $(2)$ gives us $b \geq \frac{8a}{5}+1$, and so $b$ is minimized for $b=\frac{8a}{5}+1$. But then, considering that $a$ was chosen arbitrarily, we will choose the maximal $a$ such that $b=\frac{8a}{5}+1\leq 99$ and $a \equiv 0 \bmod {5}$. The inequality simplifies to $a \leq 61$ (since $a$ is an integer), and so the maximal such $a$ is $a=60$, and thus $b=97$. Doing this process for all $a\equiv 0,1,2,3,4 \bmod 5$, we will obtain $\frac{a}{b}=\frac{60}{97}$,$\frac{61}{98}$,$\frac{57}{92}$,$\frac{58}{93}$,$\frac{59}{95}$, respectively. Directly comparing the five possibilities, we can see that the maximum is at $$\frac{a}{b}=\frac{58}{93}.$$
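A brute-force check confirms the maximum on this side (a minimal Python sketch; names are mine):

```python
from fractions import Fraction

best = Fraction(0)
for b in range(1, 100):                 # denominators up to 99
    a = (5 * b - 1) // 8                # largest a with a/b strictly below 5/8
    if a >= 1:
        best = max(best, Fraction(a, b))
print(best)                             # 58/93
```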
Continuity and differentiability of $f(x,y)$
You wrote $$\left\lvert \dfrac {xy\sin x}{x^2+y^2}\right\rvert \le \left\lvert \dfrac {xy}{x^2+y^2}\right\rvert.$$ But clearly we have a better estimate: $$\left\lvert \dfrac {xy\sin x}{x^2+y^2}\right\rvert \le |\sin x|\left\lvert \dfrac {xy}{x^2+y^2}\right\rvert.$$ That implies continuity at $(0,0)$. Added later: Suppose $f$ is differentiable at $(0,0).$ Then $$f(x,y) = f(0,0) + \nabla f (0,0)\cdot (x,y) + o((x^2 +y^2)^{1/2}).$$ But note $f(0,0)=0$ and $\nabla f (0,0)=(0,0).$ On the ray $y=x, x>0$ we then have $$\frac{x^2\sin x}{2x^2} = (\sin x)/2 = o((2x^2)^{1/2}) = o(x).$$ Since $(\sin x)/x\to 1,$ we have a contradiction.
A box contains a penny, two nickels, and a dime. If two coins are selected randomly from the box, without replacement, and if X is the sum...
A good notation for the $CDF$ would be $$ F_{X}(x)= \begin{cases} 1 & x \geq 15 \\ \frac{4}{6} & 11 \leq x \lt 15 \\ \frac{3}{6} & 10 \leq x \lt 11 \\ \frac{2}{6} & 6 \leq x \lt 10 \\ 0 & x \lt 6 \end{cases} $$
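For a quick check, one can enumerate the six equally likely pairs directly (minimal Python; the coin values in cents are mine):

```python
from fractions import Fraction
from itertools import combinations

coins = [1, 5, 5, 10]                                # penny, two nickels, dime
sums = [a + b for a, b in combinations(coins, 2)]    # the 6 equally likely pair sums

def F(x):                                            # CDF of X
    return Fraction(sum(s <= x for s in sums), len(sums))

for x in [5, 6, 9, 10, 11, 14, 15]:
    print(x, F(x))    # 0, 1/3, 1/3, 1/2, 2/3, 2/3, 1 -- i.e. 2/6, 3/6, 4/6 as above
```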
Show that the double integral $\iint_R f(x,y) dx dy $ does not exist
Every rectangle $R_{ij} =[x_{i-1},x_i] \times [y_{j-1},y_j]$ contains points where $y$ is irrational $(f(x,y) = x)$ and points where $y$ is rational $(f(x,y) = 1/2)$. If $(x,y) \in R_{ij}$ we have $x \leqslant 1/2$ for $i \leqslant n$, and $x \geqslant 1/2$ for $i > n$. Hence, $$\sup_{R_{ij}} \, f(x,y) = \begin{cases}\max(1/2,x_i) = 1/2, & i \leqslant n \\ \max(1/2,x_i) = x_i, & i > n \end{cases}$$ $$\inf_{R_{ij}} \, f(x,y) = \begin{cases}\min(1/2,x_{i-1}) = x_{i-1}, & i \leqslant n \\ \min(1/2,x_{i-1}) = 1/2, & i > n \end{cases}$$ There appears to be a typographical error in the lower sum printed in the book.
Finding the limit of $x_1=0 , x_{n+1}=\frac{1}{1+x_n}$
It is obvious that $f:x\mapsto\frac1{1+x}$ is a monotonically decreasing continuous function $\mathbf R_{\geq0}\to\mathbf R_{\geq0}$, and it is easily computed that $\alpha=\frac{-1+\sqrt5}2\approx0.618$ is its only fixed point (solution of $f(x)=x$). So $f^2:x\mapsto f(f(x))$ is a monotonically increasing function that maps the interval $[0,\alpha)$ into itself. Since $x_3=f^2(x_1)=\frac12>0=x_1$, one now sees by induction that $(x_1,x_3,x_5,...)$ is an increasing sequence bounded by $\alpha$. It then has a limit, which must be a fixed point of $f^2$ (the function mapping each term of the sequence to the next term). One checks that on $ \mathbf R_{\geq0}$ the function $f^2$ has no other fixed point than the one of $f$, which is $\alpha$, so that must be the value of the limit. The sequence $(x_2,x_4,x_6,...)$ is obtained by applying $f$ to $(x_1,x_3,x_5,...)$, so by continuity of $f$ it is also convergent, with limit $f(\alpha)=\alpha$. Then $\lim_{n\to\infty}x_n=\alpha$.
The name of this metric.
It's the Euclidean metric tensor written for the submanifold, the quadric surface $x_{n+1}= \frac{1}{2}\sum_{i=1}^nx_i^2$, induced by the ambient metric in $\mathbb{R}^{n+1}$, $g=\sum_{i=1}^{n+1}dx_i^2$.
Prove function is Lipschitz
Hint: Consider the function $g(t)=f(X+t(Y-X))$ for $0<t<1$, then calculate $g'(t)$. After calculating $g'(t)$, calculate $f(Y)-f(X)$ ...
Let $f$ be analytic on $D=\{z\in\mathbb C:|z|<1\}.$ Then $g(z)=\overline{f(\bar z)}$ is analytic on $D.$
Here is a more direct approach (which I think is a little clearer): We have $\frac{g(z+h)-g(z)}{h} = \frac{\overline{f}(\overline{z+h})-\overline{f}(\overline{z})}{h} = \overline{ \left( \frac{f(\overline{z+h})-f(\overline{z})}{\overline{h}} \right) } = \overline{ \left( \frac{f(\overline{z}+\overline{h})-f(\overline{z})}{\overline{h}} \right) }$. Since conjugation is continuous, we have \begin{eqnarray} g'(z) &=& \lim_{h \to 0 }\frac{g(z+h)-g(z)}{h} \\ &=& \lim_{h \to 0 }\ \overline{ \left( \frac{f(\overline{z}+\overline{h})-f(\overline{z})}{\overline{h}} \right) } \\ &=& \overline{ \left( \lim_{h \to 0 } \frac{f(\overline{z}+\overline{h})-f(\overline{z})}{\overline{h}} \right) } \\ &=& \overline{ \left( \lim_{h \to 0 } \frac{f(\overline{z}+h)-f(\overline{z})}{h} \right) } \\ &=& \overline{ f'(\overline{z}) } \end{eqnarray}
How to find anti derivative of $e^{-5x}$?
The derivative of $e^{ax}$ is $ae^{ax}$; this can be easily seen by applying the chain rule and using the property of the exponential function that the derivative of $e^x$ is again $e^x$. Since the derivative is a linear operator, the derivative of $be^{ax}$ is $b$ times the derivative of $e^{ax}$, i.e. it is $abe^{ax}$. By choosing $b=a^{-1}$, one gets that the derivative of $a^{-1}e^{ax}$ is $e^{ax}$. The derivative of a function plus a constant is equal to the derivative of the function, because the derivative of the constant is zero. This means that the derivative of $a^{-1}e^{ax}+C$ is also $e^{ax}$, where $C$ is any constant. The antiderivative works the other way around, so the antiderivative of $e^{ax}$ is $a^{-1}e^{ax}+C$. Therefore, for the special case $a=-5$, one gets that the antiderivative of $e^{-5x}$ is: \begin{equation} \frac{e^{-5x}}{-5}+C \end{equation}
if f preserves angle at $z_0$ with some condition, is f holomorphic at $z_0$?
The answer is: yes. For a detailed explanation, see Ahlfors, Complex Analysis (editions 1 to 3), Chapter 3 (Analytic Functions as Mappings), or Nehari, Conformal Mapping, Chapter 5, page 150: if $f'(z_0)=f''(z_0)=\ldots=f^{(n-1)}(z_0)=0$ and $f^{(n)}(z_0) \neq 0$, then the angles at $z_0$ are magnified by $n$.
Strong Mathematical Induction Recursion Inequality
For strong induction your induction hypothesis should be that $a_k\le 2^k$ for all integers $k$ such that $5\le k < n$, and you'll use this to deduce that $a_n\le 2^n$. Start with what you know about $a_n$: $$a_n = a_{n-1}+3a_{n-3}+3\;.\tag{1}$$ In order for this to be useful, you need to make sure that $n-3\ge 5$, so that you can apply your induction hypothesis to it. Thus, you'd better assume that $n\ge 8$. This is not a problem, because you can check by hand that $a_5\le 2^5,a_6\le 2^6$, and $a_7\le 2^7$; that is, in fact, your basis step for this argument. Now go back to $(1)$ and apply your induction hypothesis: $$\begin{align*} a_n &= a_{n-1}+3a_{n-3}+3\\ &\le 2^{n-1}+3\cdot 2^{n-3}+3\;,\tag{2} \end{align*}$$ and you'd like somehow to conclude that the expression in $(2)$ is at most $2^n$. Simply using the fact that $3<4=2^2$ almost works: $$\begin{align*} 2^{n-1}+3\cdot 2^{n-3}+3 &< 2^{n-1}+2^2\cdot 2^{n-3}+3\\ &= 2^{n-1}+2^{n-1}+3\\ &=2^n + 3\;; \end{align*}$$ the only problem is that extra $+3$. But since we used a fairly crude estimate when we replaced $3$ by $4$ in the second term of $(2)$, and since that almost worked, we can reasonably hope that in fact $3\cdot 2^{n-3}+3\le 2^{n-1}$, which would be enough to make $2^{n-1}+3\cdot 2^{n-3}+3\le 2^n$. The inequality $3\cdot 2^{n-3}+3\le 2^{n-1}$ can be written $3+\dfrac3{2^{n-3}}\le 4$, and you're assuming that $n\ge 8$, so ...
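The closing inequality is easy to spot-check numerically (minimal Python):

```python
# 3*2**(n-3) + 3 <= 2**(n-1) is equivalent to 3 + 3/2**(n-3) <= 4,
# which clearly holds once 2**(n-3) >= 3, in particular for all n >= 8.
for n in range(8, 64):
    assert 2 ** (n - 1) + 3 * 2 ** (n - 3) + 3 <= 2 ** n
print("induction step verified for 8 <= n < 64")
```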
Is$\ +\infty$ greater than any other number (surreal, superreal, hyperreal, ...)?
We can make it a convention that we would only use $+\infty$ in that way. For example, the natural interpretation (via transfer) of that symbol in non-standard analysis is, in fact, the largest extended hyperreal. This isn't too different from the idea that we often use symbols $1$ and $0$ to denote the largest and smallest elements in a bounded lattice. It would even be reasonable, in the construction where we take an ordered set and adjoin a new element and extend the ordering so the new element is the largest element, to adopt the convention that we use $+\infty$ as the name for this new element. However, if we are using $+\infty$ to refer to a specific object with a prior meaning, then we can't always assume it is the largest element of an ordered set. For example, we can construct an ordered set $(S, <)$ defined by $S = \overline{\mathbf{R}} \cup \{ \star \}$, where $x < y$ is true if and only if ($x,y \in \overline{\mathbf{R}}$ and $x<y$ as extended real numbers) or ($y = \star$). In this example, we would have $+\infty < \star$. Or for a more amusing example, we could use the ordered set $(\overline{\mathbf{R}}, >)$; that is, we take the extended real numbers and reverse the ordering. This is still an ordered set, and $+\infty$ is its smallest element.
How can I show that $\lim_{\epsilon\to 0+}\int_{B_1\setminus B_\epsilon}\frac{x\cdot\nabla f}{\|x\|^2}dx_1dx_2=Cf(0)$?
If $C$ is a smooth positively oriented Jordan curve in $\mathbb{R}^2$ and $F:U\to \mathbb{R}^2$ is a smooth vector field such that $C$ is contained in $U$, then $$ \int_C F\cdot N ~ ds = \iint_D \nabla \cdot F $$ where $N$ is the outward unit normal and $D$ is the interior of $C$. Now if $C$ is the unit circle, then $N(x,y) = \left( \frac{x}{\sqrt{x^2+y^2}},\frac{y}{\sqrt{x^2+y^2}} \right)$ for any point $(x,y)$ on the circle. This is beginning to look like what you want; see if you can carry this further.
Lusternik-Shnirelman using closed sets
I have now found a reference for the manifold case. Proposition 4.3 of the Rudyak-Schlenk paper http://members.unine.ch/felix.schlenk/Maths/Papers/lus.pdf proves the equality $$LS(X)=LS_{closed}(X)$$ for all binormal ANRs. (A space is binormal if its product with the interval is normal. An ANR is an absolute neighborhood retract. This class of spaces includes all simplicial complexes, hence all smooth manifolds.)
Quantile Regression - Linear Loss Minimization
This results from applying Leibniz's Rule \begin{eqnarray*} \frac{d}{d\hat x} E_{ϱ_τ}(X-\hat x) &=& (τ -1)\frac{d}{d\hat x}\int_{-∞}^{\hat x}(x-\hat x)\,dF(x)+τ\frac{d}{d\hat x}\int_{\hat x}^{∞}(x-\hat x)\,dF(x) \\ &=& (τ -1)\left((x - \hat x)|_{\hat x} + \int_{-∞}^{\hat x} \frac{d}{d\hat x}(x-\hat x)\,dF(x)\right)+τ\left((x - \hat x)|_{\hat x} + \int_{\hat x}^{∞}\frac{d}{d\hat x}(x-\hat x)\,dF(x)\right) \\ &=& (τ -1)\left(0 + \int_{-∞}^{\hat x} (-1)\,dF(x)\right)+τ\left(0 + \int_{\hat x}^{∞}(-1)\,dF(x)\right) \\ &=& (1 - \tau)\int_{-∞}^{\hat x} \,dF(x)-τ\int_{\hat x}^{∞}\,dF(x) \\ \end{eqnarray*}
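As a numerical illustration that setting this derivative to zero picks out the $τ$-quantile, one can minimize the empirical loss on a grid (a minimal numpy sketch; the exponential sample, grid, and names are my choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=50_000)   # sample from F
tau = 0.75

def pinball(xhat):
    # rho_tau(r) = (tau - 1) * r for r < 0, tau * r for r >= 0
    r = x - xhat
    return np.mean(np.where(r < 0, (tau - 1) * r, tau * r))

grid = np.linspace(0.0, 4.0, 401)
best = grid[np.argmin([pinball(g) for g in grid])]
print(best, np.quantile(x, tau))   # both near ln(4) ~ 1.386
```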
Conclude $r$ from a contradiction
Your proof is fine. Note that your given $q \vee \neg q$ is a disjunction. In particular, your proof's whole strategy is based on getting rid of it through disjunction elimination: $\vee$ Elim: $ \qquad \dfrac{\Gamma \vdash \alpha \lor \beta \qquad \Gamma, \alpha \vdash \gamma \qquad \Gamma, \beta \vdash \gamma}{\Gamma \vdash \gamma}$ During 8-11 you are simply using the ex falso or explosion principle EP: $ \qquad \qquad\qquad \qquad \qquad \dfrac{\alpha , \neg\alpha}{\beta}$ in order to obtain $(q \rightarrow r)$ from the assumption that $\neg q$ holds.
Understanding the notion of a connection and covariant derivative
In what sense is the connection enabling one to compare the vector field at two different points on the manifold [...], when the mapping is from the (Cartesian product of) the set of tangent vector fields to itself? I thought that the connection ∇ "connected" two neighbouring tangent spaces through the notion of parallel transport [...] To see a connection only as a mapping $\nabla: \mathcal{X}(M)\times\mathcal{X}(M)\rightarrow\mathcal{X}(M)$ is too restrictive. Often a connection is also seen as a map $Y\mapsto\nabla Y\in\Gamma(TM\otimes TM^*)$, which highlights the derivative aspect. However, the important point is that $\nabla$ is $C^\infty(M)$-linear in the first argument, which results in the fact that the value $\nabla_X Y|_p$ only depends on $X_p$ in the sense that $$ X_p=Z_p \Rightarrow \nabla_X Y|_p = \nabla_Z Y|_p. $$ Hence, for every $v\in TM_p$, $\nabla_vY$ is well-defined. This leads directly to the definition of parallel vector fields and parallel transport (as I think you already know). Vice versa, given parallel transport maps $\Gamma(\gamma)^t_s: TM_{\gamma(s)}\rightarrow TM_{\gamma(t)}$, one can recover the connection via $$ \nabla_X Y|_p = \frac{d}{dt}\bigg|_{t=0}\Gamma(\gamma)_t^0Y_{\gamma(t)} \quad(\gamma \text{ is an integral curve of }X). $$ This is exactly the generalisation of directional derivatives in the sense that we vary $Y$ in the direction of $X_p$ in a parallel manner. In Euclidean space this indeed reduces to the directional derivative: Using the identity chart, every vector field can be written as $Y_p=(p,V(p))$ for $V:\mathbb R^n\rightarrow \mathbb R^n$, and the parallel transport is just given by $$ \Gamma(\gamma)_s^t (\gamma(s),v)=(\gamma(t),v). $$ Hence, we find in Euclidean space: $$ \frac{d}{dt}\bigg|_{t=0}\Gamma(\gamma)_t^0Y_{\gamma(t)} = \frac{d}{dt}\bigg|_{t=0}(p,V(\gamma(t))) = (p,DV\cdot\gamma'(0)), $$ which is exactly the directional derivative of $V$ in the direction $v=\gamma'(0)$. Back to the original question: I think it is hard to see how a connection "connects neighbouring tangent spaces" only from the axioms. You should keep in mind, however, that the contemporary formalism has passed through many abstraction layers since the beginning and is reduced to its core, the axioms (for a survey see also Wikipedia). To get the whole picture, it is essential that one explores all possible interpretations and consequences of the definition, since often they led to the definition in the first place. In my opinion, the connection is defined as it is with the image in mind that it is an infinitesimal version of parallel transport. Starting from this point, properties such as the Leibniz rule are a consequence. However, having such a differential operator $\nabla$ fulfilling linearity, the Leibniz rule and so on, is fully equivalent to having parallel transport in the first place. In modern mathematics, these properties are thus taken as the defining properties/axioms of a connection, mainly because they are easier to handle and easier to generalise to arbitrary vector bundles. Given this, what does the quantity $\nabla_{e_\mu}e_\nu=\Gamma^\lambda_{\mu\nu}e_\lambda$ represent? [...] As you wrote, the connection coefficients / Christoffel symbols $\Gamma^\lambda_{\mu\nu}$ are the components of the connection in a local frame and are needed for explicit computations. I think on this level you can't get much meaning out of these coefficients. However, they reappear in a nicer way if you restate everything in the Cartan formalism and study Cartan and/or principal connections.
The Wikipedia article on connection forms tries to give an introduction to this approach. Nakahara also gives an introduction to connections on principal bundles and the relation to gauge theory later on in his book. In my opinion, this chapter is a bit short and could be more detailed, especially towards the end. But it is a good start.
Arithmetic Job Test question
Weight is proportional to volume. Doubling each of the three dimensions increases the volume by a factor of $2\times 2 \times 2 = 2^3 = 8$. So the weight also increases by a factor of $8$. That means the new weight is $10\times8 = 80 \text{ lbs}$.
Smallest change to a set of vectors that makes them orthogonal?
Hint sequence: Consider the problem of finding the orthogonal matrix $U$ that maximizes $\text{tr} (DU)$, for given diagonal matrix $D$ with non-negative entries. And then consider the problem of maximizing $\text{tr}( AU)$ when $A$ is arbitrary. Finally, consider the problem of minimizing the trace of $(B-U)'(B-U)$ with respect to orthogonal $U$, for given matrix $B$.
Map from $SL(2,\mathbb C)/SU(2)$ to $SL(2,\mathbb C)$
I don't think that the Iwasawa decomposition can help you, because $B \cap SU(2) \neq \{1\}$, so you have to work with the subset of $B$, where the diagonal entries are strictly positive.
How can I prove that without further assumptions Chebyshev's Inequality can not be improved?
Try a discrete distribution with probabilities $\dfrac1{2k^2},1-\dfrac1{k^2},\dfrac1{2k^2}$ at points $-k\sigma,0,k\sigma$.
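A quick check that this distribution makes Chebyshev's inequality an equality (minimal Python with illustrative values $k=3$, $\sigma=2$):

```python
from fractions import Fraction

k, sigma = 3, Fraction(2)
p = Fraction(1, 2 * k ** 2)            # mass at each of -k*sigma and k*sigma
var = 2 * p * (k * sigma) ** 2         # mean is 0 by symmetry
print(var == sigma ** 2)               # True: the variance is exactly sigma^2
print(2 * p == Fraction(1, k ** 2))    # True: P(|X| >= k*sigma) = 1/k^2, the bound is tight
```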
Differentiation with Ito's Formula
Let $Y_t$ satisfy $$dY_t=cX_t dB_t-\frac{c^2}{2}X_t^2dt.$$ Let $f(x)=\exp(x)$. Then $$df(Y_t)=f'(Y_t)dY_t+\frac{1}{2}f''(Y_t)d[Y]_t.$$ Noting that $f(Y_t)=Z_t$ gives $$dZ_t=Z_tdY_t+\frac{1}{2}Z_td[Y]_t.$$ Note that $d[Y]_t=c^2 X_t^2 dt$, so $$dZ_t=Z_t\left(cX_t dB_t-\frac{c^2}{2}X_t^2dt+\frac{c^2}{2} X_t^2 dt\right).$$ Your result follows.
basis of a matrix that is transposed.
I'm not sure what you mean by "basis" for a matrix. If by basis you mean a basis for the column and row spaces of the matrix, then row reducing the original matrix is enough. The non-zero rows of the RREF form a basis for the rowspace of the matrix. The pivot columns in turn serve as an "index" for a basis of the columnspace. For example, suppose we are given a matrix $A$ with RREF $R$, $$A = \begin{pmatrix}\mathbf{a_1} & \mathbf{a_2} & \cdots & \mathbf{a_n}\end{pmatrix} \ \sim \ R=\begin{pmatrix}\mathbf{r_1} & \mathbf{r_2} & \cdots & \mathbf{r_n}\end{pmatrix}$$ where $\mathbf{a_i}$ and $\mathbf{r_i}$ are the respective column vectors of $A$ and $R$. Now further suppose that $$\left\{\mathbf{r_{i_1}},\ \mathbf{r_{i_2}},\ \cdots,\ \mathbf{r_{i_k}}\right\}$$ is the set of pivot columns of $R$. Then the corresponding columns of $A$ $$\left\{\mathbf{a_{i_1}},\ \mathbf{a_{i_2}},\ \cdots,\ \mathbf{a_{i_k}}\right\}$$ form a basis for the columnspace of $A$. Of course this will in general not be the same basis as the one produced by row reducing $A^\mathrm{T}$, but there is rarely a need to prioritize one basis over the other. Of course, all of this is assuming that a basis of the columnspace is what you're after. If you simply want the RREF of the transposed matrix, then row reducing again would probably be the easiest way.
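In practice, sympy's `rref` returns both the reduced form and the pivot column indices, so the recipe above is one call (a small sketch with an example matrix of my choosing):

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 1, 1]])
R, pivots = A.rref()                                # RREF and pivot column indices
row_basis = [R.row(i) for i in range(len(pivots))]  # nonzero rows of R span the row space
col_basis = [A.col(j) for j in pivots]              # matching columns of A span the column space
print(pivots, col_basis)                            # (0, 1) and the first two columns of A
```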
what should the null hypothesis be
The null hypothesis is $$H_0: μ=72$$ and the alternative hypothesis is $$H_1: μ>72$$ If you reject the null hypothesis, then this means that your sample of $50$ comes from a "different population", i.e. from a population with a higher fitness score. You do not need a difference-of-means test. Just use a one-sample mean test; you can use the $z$ statistic since $n=50>30$.
Where is the flaw in my permutations answer?
You forgot to account for all the possible permutations of the other $n-2$ people sitting down. Thus the right answer is indeed $$\frac{2(n-1)}{n!}\cdot (n-2)!=\frac{2}{n}$$
If $\lambda$ is isolated in $\sigma(u)$, then $E(\left\{\lambda\right\})(H)=\ker(u-\lambda)$.
There is a more general result that can be found in Rudin's Functional Analysis in the chapter on bounded operators on a Hilbert space. Suppose $T\in\mathcal{B}(H)$ is normal and $E$ is its spectral decomposition. If $f\in C(\sigma(T))$ and if $\omega_{0}=f^{-1}(0)$, then $\mathcal{N}(f(T))=\mathcal{R}(E(\omega_{0}))$, where $\mathcal{N}(f(T))$ denotes the null space of the operator $f(T)$ and $\mathcal{R}(E(\omega_{0}))$ denotes the range of the operator $E(\omega_{0})$. The proof is as follows: Let \begin{equation} g(\lambda)=\begin{cases} 1, & \text{if $\lambda\in\omega_{0}$}.\\ 0, & \text{on other points of $\sigma(T)$}. \end{cases} \end{equation} Then $fg=0$, hence by the functional calculus, $f(T)g(T)=0$. But $g(T)=E(\omega_{0})$, hence $\mathcal{R}(E(\omega_{0}))\subset\mathcal{N}(f(T))$. For the other inclusion: For each $n\in \mathbb{N}$ let $\omega_{n}=\{\lambda\in \sigma(T):\frac{1}{n}\leq|f(\lambda)|<\frac{1}{n-1}\}$. The complement $\tilde{\omega}$ of $\omega_{0}$ relative to $\sigma(T)$ is then the union of the disjoint Borel sets $\omega_{n}$. Define \begin{equation} f_{n}(\lambda)=\begin{cases} \frac{1}{f(\lambda)}, & \text{on } \omega_{n}\\ 0, & \text{on other points of $\sigma(T)$} \end{cases} \end{equation} Then each $f_{n}$ is a bounded Borel function on $\sigma(T)$ and $f_{n}(T)f(T)=E(\omega_{n})$ for all $n \in \mathbb{N}$. So $f(T)x=0 \Rightarrow E(\omega_{n})x=0$. By the countable additivity of the $H$-valued map $\omega\rightarrow E(\omega)x$, we have $E(\tilde{\omega})x=0$. Hence $E(\omega_{0})x=x$. Thus $\mathcal{N}(f(T))\subset \mathcal{R}(E(\omega_{0}))$. For your question, consider the continuous function $f(z)=z-\lambda$.
Mathematical Puzzle: A Drag Race of Who Wins
I would do it like this. First let the whole distance be $12$ units (you can choose the units to fit and $12$ makes everything an integer). Now at some time $t_1$, Alison has travelled $9$ units so that $9=\frac {at_1^2}2$ and $$t_1^2=\frac {18}a$$ At time $t_2$, Alison has travelled $12$ units, so $$t_2^2=\frac {24}a$$ Now this last quarter of the journey takes $3$ seconds so that $$t_2-t_1=3=\sqrt{\frac {24}a}-\sqrt{\frac {18}a}$$ Squaring this equation and clearing fractions $$9a=24+18-2\sqrt{432}=42-24\sqrt 3$$ and $$a=\frac {14-8\sqrt 3}3$$ Once $a$ is known, $t_2$ is easy to find from the equation for $t_2^2$. And Kevin's journey can be analysed in a similar way. I'd prefer this systematic approach to trying to write down complicated formulae - there is so much less to go wrong, and if you make a slip it is easier to spot where it is.
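A quick numerical check of the value of $a$ (minimal Python, using the formulas above; variable names are mine):

```python
from math import sqrt

a = (14 - 8 * sqrt(3)) / 3        # acceleration found above
t1 = sqrt(18 / a)                 # time to cover the first 9 of 12 units
t2 = sqrt(24 / a)                 # time to cover all 12 units
print(t2 - t1)                    # 3.0: the last quarter indeed takes 3 seconds
print(t2)                         # Alison's total time, about 22.39 seconds
```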
Optimization over a convex cone generated by a set is equal to optimization over the set.
They are not equivalent. For example, in dimension one take $f(x) = x$, $U=\{1\}$, $A=0$, $b=0$, $K=\{0\}$. Then first you have $C= [0, +\infty)$. Thus problem one becomes maximization of $f(x) = x$ over $C= [0, +\infty)$, which is unbounded, but problem two is maximizing $f(x)=x$ over $U=\{1\}$, which has $x=1$ as its maximizer.
Calculating probability of normal distribution
No. It should be $1-2P(Z\geq1.475)$.
Max between two measures
Indeed. Consider the set $A = \{0,1\}$, with $\sigma$-algebra $\mathcal P(A)$ and measures $\mu_0,\mu_1$ defined by $\mu_0(\{0\}) = \mu_1(\{1\}) = 1$. Then if we define $\mu = \max\{\mu_0,\mu_1\}$, we would have $$ \mu(A) = \mu(\{0\}) = \mu(\{1\}) = 1, $$ contradicting countable (even finite) additivity of the measure.
Clarification about the definition for polynomial discriminant?
The subscripts index the roots $r_i$ of the polynomial. Since we restrict to $i < j$ (which we could write as $1 \leq i < j \leq n$), this means that the product (1) runs only over pairs of distinct roots, and (2) counts each pair only once (as opposed to twice, for the two possible orders of two roots). For example, if $n = 3$, the pairs $(i, j)$ of integers such that $1 \leq i < j \leq n$ are $(1, 2)$, $(1, 3)$, and $(2, 3)$, and so the discriminant is $$\Delta = a_3^4 (r_1 - r_2)^2 (r_1 - r_3)^2 (r_2 - r_3)^2 .$$
How to prove range of the following operator is closed?
The range of $TP$ is a finite-dimensional subspace, so it is closed.
Inverse of tridiagonal Toeplitz matrix
Firstly, the matrix is Toeplitz. This means it represents multiplication by a power series expansion, so matrix inversion corresponds to multiplicative inversion. Therefore, consider $$x+x^{-1}=\frac{x^2+1}x$$ Now its multiplicative inverse: $$(x+x^{-1})^{-1}=\frac x{x^2+1}$$ Now you can expand with a geometric series / Taylor expansion for $$\frac 1{x^2+1}=\frac 1{1-(-1\cdot x^2)}$$ and substitute with $$\frac{1}{1-t}=1+t+t^2+\cdots,\quad t=-x^2$$ and then finish. You will notice the alternating sign pattern, and that the odd exponents disappear when you substitute and do the expansion.
Where does this product of random variables converge to?
Hint: By the strong law of large numbers, $$\frac{1}{n} \sum_{i=1}^n X_i \to \mathbb{E}X_1 = 0$$ almost surely. Write $$\exp \left( \sum_{i=1}^n X_i - \frac{n \sigma^2}{2} \right) = \exp \left( n \left[ \frac{1}{n} \sum_{i=1}^n X_i - \frac{\sigma^2}{2} \right] \right)$$ in order to deduce that $M=0$ a.s.
Assuming that it was enciphered with a generalized Caesar Cipher with multiplier r and shift constant s, find r and s and decipher the message.
If you represent the letters as numbers A=0 B=1 C=2 D=3 etc. then based on frequencies, you have two guesses that give you two equations about $r$ and $s$. (Note that in this numbering system, we have E=4, O=14, P=15, T=19.) $$f(k) = rk + s\pmod{26}$$ $$f(\mathtt{E}) =\mathtt{O}\qquad\qquad4r+s \equiv 14\pmod{26}$$ $$f(\mathtt{T}) = \mathtt{P}\qquad\qquad19r+s \equiv 15\pmod{26}$$ Taking the difference, we have $15r \equiv 1 \pmod{26}$. The solution for $r$ is unique: $r=7$ ($15 \cdot 7 = 105 = 4\cdot 26 + 1$). I personally found it by trying $15\cdot r \pmod{26}$ for all possible values of $r$. I'm not sure if there's an easier way to do division in modular arithmetic. (The extended Euclidean algorithm is the standard tool; see the short sketch after the tables.) Using the solution for $r$, we can plug in to one of our equations to find $s$: $4r + s \equiv 14$ means that $14 \equiv 28+s \equiv 26+2+s$, so $s \equiv 12 \pmod{26}$. The overall enciphering function, according to our guess, is $f(k) = 7k + 12\pmod{26}.$ We can create a table to show how each letter is enciphered (note that as a check, our assumed correspondences $\mathtt E\mapsto \mathtt{O}$ and $\mathtt{T}\mapsto\mathtt{P}$ occur in the table): $$\begin{array}{cc|cc} k & && f(k) \\\hline 0 & \mathtt A & \mathtt M & 12 \\ 1 & \mathtt B & \mathtt T & 19 \\ 2 & \mathtt C & \mathtt A & 0 \\ 3 & \mathtt D & \mathtt H & 7 \\ 4 & \mathtt E & \mathtt O & 14 \\ 5 & \mathtt F & \mathtt V & 21 \\ 6 & \mathtt G & \mathtt C & 2 \\ 7 & \mathtt H & \mathtt J & 9 \\ 8 & \mathtt I & \mathtt Q & 16 \\ 9 & \mathtt J & \mathtt X & 23 \\ 10 & \mathtt K & \mathtt E & 4 \\ 11 & \mathtt L & \mathtt L & 11 \\ 12 & \mathtt M & \mathtt S & 18 \\ 13 & \mathtt N & \mathtt Z & 25 \\ 14 & \mathtt O & \mathtt G & 6 \\ 15 & \mathtt P & \mathtt N & 13 \\ 16 & \mathtt Q & \mathtt U & 20 \\ 17 & \mathtt R & \mathtt B & 1 \\ 18 & \mathtt S & \mathtt I & 8 \\ 19 & \mathtt T & \mathtt P & 15 \\ 20 & \mathtt U & \mathtt W & 22 \\ 21 & \mathtt V & \mathtt D & 3 \\ 22 & \mathtt W & \mathtt K & 10 \\ 23 & \mathtt X & \mathtt R & 17 \\ 24 & \mathtt Y & \mathtt Y & 24 \\ 25 & \mathtt Z & \mathtt F & 5 \\ \end{array}$$ For easy deciphering, we may want to sort the table according to $f(k)$.
$$\begin{array}{cc|cc} k & && f(k) \\\hline 2 & \mathtt C & \mathtt A & 0 \\ 17 & \mathtt R & \mathtt B & 1 \\ 6 & \mathtt G & \mathtt C & 2 \\ 21 & \mathtt V & \mathtt D & 3 \\ 10 & \mathtt K & \mathtt E & 4 \\ 25 & \mathtt Z & \mathtt F & 5 \\ 14 & \mathtt O & \mathtt G & 6 \\ 3 & \mathtt D & \mathtt H & 7 \\ 18 & \mathtt S & \mathtt I & 8 \\ 7 & \mathtt H & \mathtt J & 9 \\ 22 & \mathtt W & \mathtt K & 10 \\ 11 & \mathtt L & \mathtt L & 11 \\ 0 & \mathtt A & \mathtt M & 12 \\ 15 & \mathtt P & \mathtt N & 13 \\ 4 & \mathtt E & \mathtt O & 14 \\ 19 & \mathtt T & \mathtt P & 15 \\ 8 & \mathtt I & \mathtt Q & 16 \\ 23 & \mathtt X & \mathtt R & 17 \\ 12 & \mathtt M & \mathtt S & 18 \\ 1 & \mathtt B & \mathtt T & 19 \\ 16 & \mathtt Q & \mathtt U & 20 \\ 5 & \mathtt F & \mathtt V & 21 \\ 20 & \mathtt U & \mathtt W & 22 \\ 9 & \mathtt J & \mathtt X & 23 \\ 24 & \mathtt Y & \mathtt Y & 24 \\ 13 & \mathtt N & \mathtt Z & 25 \\ \end{array}$$ To decipher the message $\mathtt{ZWSTO BPJOG BYQIP JOUWO OZGVS MPJOS MPQAI}$, we look up each letter in the right column of the table, and see what it deciphers to in the left column. For example, we see that $\mathtt{Z}$ deciphers to $\mathtt{N}$, and $\mathtt{W}$ deciphers to $\mathtt{U}$. Continuing in this way, we find that the original, plaintext message is apparently the following statement: NUMBER THEORY IS THE QUEEN OF MATHEMATICS
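On the "easier way to do division" aside: Python's built-in `pow(r, -1, 26)` (Python 3.8+) computes the modular inverse via the extended Euclidean algorithm, which gives a compact deciphering sketch (an illustration of the solution above, not part of the original answer):

```python
r, s = 7, 12
r_inv = pow(r, -1, 26)        # 15, since 7 * 15 = 105 = 1 (mod 26)

def decipher(text):
    # invert f(k) = r*k + s (mod 26) on letters, leaving spaces alone
    return "".join(
        chr(ord("A") + r_inv * (ord(c) - ord("A") - s) % 26) if c.isalpha() else c
        for c in text
    )

print(decipher("ZWSTO BPJOG BYQIP JOUWO OZGVS MPJOS MPQAI"))
# NUMBE RTHEO RYIST HEQUE ENOFM ATHEM ATICS
```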
Topological book which covers applications in the Medical Field (Medicine/Bacteria/Cancer/Viruses)
If you've already studied some topology, you might consider Computational Topology: An Introduction by Edelsbrunner & Harer. It will help with understanding topological data analysis and has a few biological applications in its final chapter. See also Simplicial Models and Topological Inference in Biological Systems by Nanda & Sazdanovic. Here is the abstract from the latter: This article is a user’s guide to algebraic topological methods for data analysis with a particular focus on applications to datasets arising in experimental biology. We begin with the combinatorics and geometry of simplicial complexes and outline the standard techniques for imposing filtered simplicial structures on a general class of datasets. From these structures, one computes topological statistics of the original data via the algebraic theory of (persistent) homology. These statistics are shown to be computable and robust measures of the shape underlying a dataset. Finally, we showcase some appealing instances of topology-driven inference in biological settings – from the detection of a new type of breast cancer to the analysis of various neural structures. For applications to neuroscience, see Two’s company, three (or more) is a simplex: Algebraic-topological tools for understanding higher-order structure in neural data by Giusti, Ghrist and Bassett. For the basics of knot theory, try either The Knot Book by Adams or the brief introduction in Chapter 12 of Introduction to Topology: Pure and Applied by Adams & Franzosa. A useful resource is http://appliedtopology.org/. In fact, the latest entry at the time of writing is Funding opportunities for interdisciplinary research from the Center for Topology of Cancer Evolution and Heterogeneity.
Power Mean Inequality based question
By the power mean inequality, $\begin{align} \left(\frac{a^3 + b^3 + c^3}{3}\right)^2 &\geq \left(\frac{a^2+b^2+c^2}{3}\right)^3, \text{ or} \\ 8\cdot\left(\frac{a^3 + b^3 + c^3}{3}\right)^2 & \geq \left(\frac{2a^2+2b^2+2c^2}{3}\right)^3. \end{align}$ But $2a^2+2b^2+2c^2 \geq a^2+b^2+c^2+ab+bc+ca$ as $a^2+b^2+c^2 \geq ab+bc+ca$. Thus, using AM-GM in the last step, $\begin{align} 8\cdot\left(\frac{a^3 + b^3 + c^3}{3}\right)^2 &\geq \Biggl({\frac{(a^2+bc)+(b^2+ca)+(c^2+ab)}{3}} \Biggr)^3 \geq \Biggl(\sqrt[3]{(a^2+bc)(b^2+ca)(c^2+ab)}\Biggr)^{3}=(a^2+bc)(b^2+ca)(c^2+ab) \\ \therefore 8\cdot(a^3 + b^3 + c^3)^2 &\geq 9(a^2+bc)(b^2+ca)(c^2+ab) \\\end{align} $ And we are done.
Scaling probabilities based on various conditions
You have a fixed $p_3$. You know that $p_1+p_2+p_3=1$. With these two constraints, the new value of $p_1$ must lie between $0$ and $1-p_3$. Let $\Delta p_1$ be the change in $p_1$. That is, $p_{1_{new}}=p_{1_{old}}+\Delta p_1$ You can choose your value of $\Delta p_1$ as long as $-p_1\le \Delta p_1\le 1-p_3-p_1$ The change to $p_2$ follows automatically: $p_{2_{new}}=p_{2_{old}}-\Delta p_1$
finding bases of solutions to ODE's
Hint for the first equation: $$u''' + 6u'' + 12u' + 8u = 0$$ Substitute $z=u'+2u$; then the equation becomes simply $$z''+4z'+4z=0$$ The characteristic polynomial is $R^2+4R+4=(R+2)^2=0 \implies R=-2$. Then $$z=c_1e^{-2x}+c_2xe^{-2x}$$ Substitute back $z=u'+2u$: $$u'+2u=c_1e^{-2x}+c_2xe^{-2x}$$ $$(ue^{2x})'=c_1+c_2x$$ Integrate (the arbitrary constants absorb the numerical factors): $$ue^{2x}=c_1x+c_2x^2+c_3$$ $$\boxed{u(x)=e^{-2x}(c_3+c_1x+c_2x^2)}$$ I'll let you finish the second equation...
Representation of sphere in parametrized form
In order for a smooth function of two variables to constitute a "regular parametrization" at a point, the derivative must have rank two, i.e., have linearly independent columns.$\newcommand{\Vec}[1]{\mathbf{#1}}$ When $\theta = 0$ or $\theta = \pi$ (i.e., when $\Vec{x}(\theta, \varphi) = (0, 0, \pm 1)$ regardless of $\varphi$), the derivative matrix $D\Vec{x}(\theta, \varphi)$ does not have rank two.
Probability of two women giving birth on their birthday
There are two main sources of fuzziness in your question. The first is how many of the 435 contacts are females; some people have a very significant skew in the gender of their contacts. The second is how many children (and in particular daughters) those females have given birth to; the answer is obviously quite different if we are looking at the contacts of a $15$-year old, or at the contacts of a $51$-year old. In general, a mother of $d$ daughters has probability $\approx (\frac{364}{365})^d$ of having none of them born on her birthday. The $\approx$ is due to some years having $366$ days (which increases the probability) and to the fact that children are not born evenly throughout the year (though close enough). For smallish $d$, we can then approximate the probability of a mother of $d$ daughters having at least one born on her own birthday as $1-(\frac{364}{365})^d\approx \frac{d}{365}$. If we have $m$ such mothers, the probability that exactly $s$ have the same birthday as a daughter is then $\approx{m\choose s} (\frac{d}{365})^s (1-\frac{d}{365})^{m-s}$. For smallish $s$ and $d$, and for $md$ significantly smaller than $365$, we can approximate this further as $\approx\frac{1}{s!}(\frac{md}{365})^s$. So, if all the contacts are mothers (i.e. $m=435$), and each has $1$ daughter ($d=1$), for $s=2$ we have a probability $\approx{435\choose 2} (\frac{1}{365})^2 (1-\frac{1}{365})^{435-2}$, or a little over $20\%$, that exactly $2$ of them will have a daughter with the same birthday. Not that unlikely! Note that $md > 365$ in this case, so we can't quite use the most simplified formula. On the other hand, if we only have $3$ mothers ($m=3$), we can use the most simplified formula and compute the probability as $\approx \frac{1}{2!}(\frac{3\cdot 1}{365})^2$, or about $1$ in $30000$ - far less likely, but not quite impossible.
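For reference, the "a little over $20\%$" figure can be computed exactly (minimal Python; variable names are mine):

```python
from math import comb

m, d, s = 435, 1, 2                      # 435 mothers, 1 daughter each, exactly 2 matches
p = d / 365
print(comb(m, s) * p ** s * (1 - p) ** (m - s))   # about 0.216
```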
Is there a power of $2$ which begins, in base 10, with $999...$?
To find many answers, you can start looking for $n,m$ such that $2^n \approx 10^m$, which is equivalent to $n\log2\approx m\log10$. In other words, we look for rational approximations of $\frac{\log10}{\log2}$, in which case $n$ would be the numerator. Using continued fractions, for $$\frac{\log10}{\log2} \approx 3 + \frac1{3 + \frac1{9 + \frac1{2 + \frac1{2 + \frac1{4 + \frac16}}}}} = \frac{13301}{4004}$$ we find $n = 13301$, for which $2^n$ starts with 9999. Note that since the sequence of continued fractions converges to the value, there is no limit on the number of leading 9's a power of 2 can have.
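A brute-force search over exact integers confirms that such powers exist and finds the smallest one (a minimal Python sketch; the bound $5000$ is my arbitrary cutoff, large enough for a first hit):

```python
# Find the smallest n with 2**n starting "999", using exact integer arithmetic.
n = next(n for n in range(1, 5000) if str(2 ** n).startswith("999"))
print(n, str(2 ** n)[:6])   # 2621 999122
```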
is there any formula for this subset generation problem
Well, kind-of, but it could be accused of being binary-in-disguise. And it's only given by recursion. Call the sequences $s_n$ for $n$ members. Now any number in such a series $s_n(j)$ is either the double of a previous number in the same series or one more than twice a number in the preceding series. Or put in formula language: $s_n(j) = 2s_n(k)$ (for some $k<j$) or $s_n(j) = 2s_{n-1}(l)+1$ (for some $l$). Note that by this scheme you don't have to consider all $k$s and $l$s since they will be used in sequence: $$\begin{align} s_3(0) &= 2s_2(0)+1 &= 7 && l = 0\\ s_3(1) &= 2s_2(1)+1 &= 11 && l = 1\\ s_3(2) &= 2s_2(2)+1 &= 13 && l = 2\\ s_3(3) &= 2s_3(0) &= 14 && j = 0\\ s_3(4) &= 2s_2(3)+1 &= 19 && l = 3 \\ s_3(5) &= 2s_2(4)+1 &= 21 && l = 4 \\ s_3(6) &= 2s_3(1) &= 22 && j = 1\\ \vdots \end{align}$$ This is of course binary-in-disguise, since what we do is shift the binary number to the right: if by doing that we lose a $1$, we would find the resulting binary number in the previous sequence; otherwise we would find the number in the same sequence. Or put another way, a binary number has three ones if it's either the double (left shift) of a three-ones binary, or the double of a two-ones binary with the least significant bit set to one.
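Making the binary-in-disguise view explicit, one can generate $s_n$ directly by filtering on the number of set bits (a minimal Python sketch; names are mine):

```python
def s(n, count=10):
    # first `count` integers with exactly n ones in binary, in increasing order
    out, x = [], 0
    while len(out) < count:
        x += 1
        if bin(x).count("1") == n:
            out.append(x)
    return out

print(s(3))   # [7, 11, 13, 14, 19, 21, 22, 25, 26, 28]
```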
How to solve $\sum_{k=0}^{20} k\;\;^kP_k$
One may observe that $$ k \cdot k!=(k+1)!-k! $$ then one may use a telescoping sum.
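Concretely, the telescoping gives $\sum_{k=0}^{20} k\cdot k! = 21! - 0! = 21! - 1$, which is easy to verify (minimal Python; note ${}^kP_k = k!$):

```python
from math import factorial, perm

total = sum(k * perm(k, k) for k in range(21))   # kPk = k!
print(total == factorial(21) - 1)                # True
```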
Expectation of ratio of dot product of independent multivariate gaussians
$$E\left[\left(\frac{w_1^\intercal w_2}{w_1^\intercal w_1}\right)^2 \mid w_1\right ]=\frac{1}{{(w_1^\intercal w_1)^2}} w_1^\intercal E(w_2 w_2^\intercal)w_1= \frac{1}{{(w_1^\intercal w_1)^2}} w_1^\intercal \sigma^2 I w_1=\frac{\sigma^2}{w_1^\intercal w_1} $$ Now, let $Z = w_1^\intercal w_1$. Then $E(Z)=d \sigma^2$ and $Var(Z)=2d \sigma^4$. For large $d$ we can approximate (eg) : $$ \begin{align} E[1/Z] &amp;\approx \frac{1}{E[Z]} + \frac{1}{2} Var(Z) \frac{2}{(E[Z])^3} +\cdots\\ &amp;=\frac{1}{d \sigma^2} + \frac{2 }{d^2 \sigma^2} + \cdots \end{align}$$ Hence $$E\left[\left(\frac{w_1^\intercal w_2}{w_1^\intercal w_1}\right)^2 \right] = E \left[E\left[\left(\frac{w_1^\intercal w_2}{w_1^\intercal w_1}\right)^2 \mid w_1\right ] \right ] \approx \frac{1}{d} + \frac{2 }{d^2 } +\cdots \approx \frac{1}{d}$$ The exact value seems to be $ \frac{1}{d-2}$
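A quick Monte Carlo check of the exact value $\frac{1}{d-2}$ (a minimal numpy sketch; $d=10$ and $\sigma=1$ are illustrative choices of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 10, 200_000
w1 = rng.standard_normal((n, d))        # sigma = 1
w2 = rng.standard_normal((n, d))
ratio_sq = (np.sum(w1 * w2, axis=1) / np.sum(w1 * w1, axis=1)) ** 2
print(ratio_sq.mean(), 1 / (d - 2))     # both close to 0.125
```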
Triple integral $\int_{0}^{2\pi} \int_{0}^{2\cos(\theta)} \int_{0}^{\sqrt{2r\cos(\theta)}} r \ dzdrd\theta$ to find volume of a solid
HINT: Note that $$x^2+y^2=2x\implies x\ge 0 \implies \cos(\theta)\ge 0\implies |\theta|\le \pi/2$$ SPOILER ALERT: Scroll over the highlighted area to reveal the solution. The integral extends in $z$ from $-\sqrt{2x}$ to $\sqrt{2x}$. So, we can write $$V=\iint_{R_{xy}}2\sqrt{2x}\,dx\,dy$$ Now, upon transforming to polar coordinates $(r,\theta)$, we note that the radial variable $r$ extends from $0$ to $2\cos(\theta)$ while the angular variable $\theta$ starts at $-\pi/2$ and ends at $\pi/2$. We can write, therefore, $$\begin{align}V&=\iint_{R_{xy}}2\sqrt{2x}\,dx\,dy\\\\&=\int_{-\pi/2}^{\pi/2}\int_0^{2\cos(\theta)}2\sqrt{2r\cos(\theta)}\,r\,dr\,d\theta\\\\&=\int_{-\pi/2}^{\pi/2}\frac45 (2\cos(\theta))^3\,d\theta\\\\&=\frac45 \times 8\times\frac43\\\\&=\frac{128}{15}\end{align}$$ as expected!
Boolean Algebra: Can this be simplified further?
To go along with my comment above, the minimal form is what you arrived at. Using a Karnaugh map, you will get $AB+CD+AC+AD+BC+BD$, which factors to your answer.
Solve Differential Equation: $y' = \frac{\sqrt{x^2+y^2}-x}{y}$
$x = r \cos\theta, y = r \sin\theta$ $dx = \cos\theta \ dr - r \sin\theta \ d\theta$ $dy = \sin\theta \ dr + r \cos\theta \ d\theta$ Given equation is $dy = \frac{\sqrt{x^2+y^2}-x}{y} dx$ So we get, $r \sin\theta \ (\sin\theta \ dr + r \cos\theta \ d\theta) = r \ (1 - \cos\theta) \ (\cos\theta \ dr - r \sin\theta \ d\theta)$ On simplifying we get, $\displaystyle \frac{1}{r} \ dr = - \frac{\sin\theta}{1-\cos\theta} \ d\theta$ To integrate, simply observe that $1 - \cos\theta = t \implies \sin\theta \ d\theta = dt$ So both sides are simple integration.
Structure of $C(F_5)$ from Rational Points on Elliptic Curves
It is known in general that the group $E(\mathbb{F}_q)$ for an elliptic curve $E$ over $\mathbb{F}_q$ is either cyclic or isomorphic to $\mathbb{Z}/k\mathbb{Z} \times \mathbb{Z}/\ell\mathbb{Z}$ for some $k,\ell$. For $y^2=x^3+x+1$ over $\mathbb{F}_5$ the group is isomorphic to $\mathbb{Z}/9\mathbb{Z}$, so it is cyclic. Indeed, $P=(0,1)$ is a generator of order $9$. Since $\langle P \rangle \subseteq E(\mathbb{F}_5)$, and both groups have $9$ elements, it follows that $\mathbb{Z}/9\mathbb{Z}\simeq \langle P \rangle\simeq E(\mathbb{F}_5)$.
x raised to an exponent mod n
Any non-unit mod $294409$ works. For instance, $x=37$ or even the trivial example $x=294409$. No unit mod $294409$ works: Since $294409 = 37 \cdot 73 \cdot 109$, we have $U(294409) = U(37) \times U(73) \times U(109)$. Therefore, $U(294409)$ has exponent $\operatorname{lcm}(36,72,108)=216$, that is, $x^{216} \equiv 1 \bmod 294409$ for all units mod $294409$. Now $294408$ is a multiple of $216$ and so $x^{294408} \equiv 1 \bmod 294409$ for all units mod $294409$.
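These claims are easy to check directly (minimal Python; `math.lcm` needs Python 3.9+):

```python
from math import lcm

n = 294409                     # = 37 * 73 * 109
assert 37 * 73 * 109 == n
e = lcm(36, 72, 108)           # exponent of U(294409)
print(e, (n - 1) % e)          # 216 0  -> 294408 is a multiple of 216
print(pow(2, n - 1, n))        # 1: units satisfy x^294408 = 1 (mod n)
print(pow(37, n - 1, n))       # not 1, since 37 is a non-unit (37 divides n)
```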
Find the volume of a rotationally symmetric 3D body
$$\int\limits_{z=0}^{1/\sqrt{2}} \pi \left( (1 - z^2) - z^2 \right)\ dz = \int\limits_{z=0}^{1/\sqrt{2}} \pi (1 - 2 z^2)\ dz = \frac{\sqrt{2} \pi }{3}$$ Annular disk of inner radius $z$ and outer radius $\sqrt{1 - z^2}$ and thickness $dz$ (in blue).
$\lim_{x \to 0}\sin(\frac{1}{x})$=?
Recall that when a limit $$\lim_{x\to x_0} f(x)=L$$ exists, it is unique and it is the same for all subsequences, that is, $$\forall x_n \to x_0 \implies f_n=f(x_n) \to L$$ Therefore, to prove that a limit doesn't exist, it suffices to show that at least two subsequences exist with different limits. In this case, let us consider $$x_n=\frac2{\pi n}\to 0^+$$ then $$\sin\left(\frac{1}{x_n}\right)=\sin\left(n\frac \pi 2\right)$$ What can we conclude from here? (Try for example with $n=4k$ and $n=4k+1$.)
Class of Fourier multiplier in $L^1$ is the class of Fourier transform of finite Borel measures. (Stein)
You can define the convolution of two bounded variation Borel measures as follows: $\int h(x) d(\mu \ast \nu) (x) = \int \int h(y+z) d\mu(y) d\nu(z)$ for each bounded measurable $h$. In particular, it is true that $\|\mu\ast\nu\|\leq\|\mu\|\|\nu\|$. In your exercise, you can let $\nu$ be defined by $d\nu = fdx$, where $dx$ is the Lebesgue measure. It then follows that $\| \mu \ast f \|_1 \leq \| fdx \| \| \mu \| = \| \mu \| \|f\|_1$
Probability of sum of two randomly picked numbers from a specific range
Actually, it is as follows. Let's consider instead the remainders modulo $5$ of the numbers $1$ to $100$: we have twenty $1$'s, twenty $2$'s, twenty $3$'s, twenty $4$'s and twenty $0$'s. The combinations of remainders whose sum is divisible by $5$ are: $1$ then $4$, with probability $\frac{20}{100} \times \frac{20}{99}$; $2$ then $3$, with probability $\frac{20}{100} \times \frac{20}{99}$; $3$ then $2$, with probability $\frac{20}{100} \times \frac{20}{99}$; $4$ then $1$, with probability $\frac{20}{100} \times \frac{20}{99}$; and $0$ then $0$, with probability $\frac{20}{100} \times \frac{19}{99}$. So you have $$4 \times \left(\frac{20}{100} \times \frac{20}{99}\right) + \frac{20}{100} \times \frac{19}{99} $$ which is $$\frac{1980}{9900} = 0.2 $$
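The same answer by exhaustive enumeration over all $100\cdot 99$ ordered draws (minimal Python):

```python
from fractions import Fraction
from itertools import permutations

pairs = list(permutations(range(1, 101), 2))    # ordered draws without replacement
hits = sum((a + b) % 5 == 0 for a, b in pairs)
print(hits, len(pairs), Fraction(hits, len(pairs)))   # 1980 9900 1/5
```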
Can all proof systems be generically described?
How would e.g. Smullyan-style tableaux proofs ("truth trees") fit into that picture? Indeed, how would Gentzen-style natural deduction with its non-linear proof trees fit into the picture? And what on earth is meant by talking of "syllogisms" here? It is very unclear what insights could be gained by over-abstraction of this kind.
Determination of injectivity and sujectivity of linear transformation
You showed that there are no polynomials satisfying $xf+f'=0$, hence $\ker T =\{0\}$. This guarantees that $T$ is injective. Now, for surjectivity: what is the dimension of $P_2$ and of $\ker T$? Is there a formula that you can apply to obtain the dimension of $\operatorname{im}(T)$? When you find this dimension and compare it with the dimension of $P_3$, what can you conclude?
Logarithmic and exponential equation.
If the expression is correct, then the only thing you can do is notice that $$\log_x2=\frac{1}{\log_2x}$$ and substitute $y=\log_2x$, which gives $$3^y+3^{1/y}=90$$ Then you have to use a numerical method to solve for $y$ and finally find $x$ by $$x=2^y$$
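For the numerical step, simple bisection works (a minimal Python sketch; the bracket $[1,10]$ is my choice, and the $y\mapsto 1/y$ symmetry of the equation gives a second root):

```python
def g(y):
    return 3 ** y + 3 ** (1 / y) - 90

lo, hi = 1.0, 10.0            # g(1) < 0 and g(10) > 0
while hi - lo > 1e-12:
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)

y = (lo + hi) / 2
print(y, 2 ** y)              # y ~ 4.0826, so x = 2^y ~ 16.94
# By the y -> 1/y symmetry, 1/y is the other root, giving x = 2^(1/y).
```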
Is there a "successor" metric in topology?
Succinct definition of boundary starting from open sets: Let $X$ be a set; a topology on it is just a collection $T\subset P(X)$ of some of its subsets, which are going to be called open. The collection is required to have some properties, which you can check in Wikipedia's article on topology. The set $X$ together with the collection $T$ is called a topological space. Examples: (1) $X=\mathbb{R}$ and $T$ consists of the sets that can be obtained by taking arbitrary unions of intervals of the form $(a,b)$. (2) $X=\mathbb{N}$ and $T$ consists of all possible subsets of $X$. (3) $X=\{3,4,5\}$ and $T$ consists of all possible subsets of $X$.

Interior: Given a topological space $X$ with topology $T$ and given a subset $A\subset X$, the interior of $A$ is the union of all elements of $T$ that are subsets of $A$. Examples: (4) The interior of $[a,b)$ in the topological space (1) is $(a,b)$. (5) The interior of $\{3,4,5\}$ in the topological space (2) is $\{3,4,5\}$.

Closure: Given a topological space $X$ with topology $T$ and given a subset $A\subset X$, the closure of $A$ is the complement of the union of all elements of $T$ that are completely outside of $A$. Examples: The closure in example (4) is $[a,b]$. The closure in example (5) is $\{3,4,5\}$.

Boundary: Given a topological space $X$ with topology $T$ and given a subset $A\subset X$, the boundary of $A$ is its closure minus its interior. Examples: The boundary in example (4) is $\{a,b\}$. The boundary in example (5) is empty.

The case of the boundary of intervals in $\mathbb{R}$: Given a finite interval that may or may not include its extreme points ($A=(a,b)$, $[a,b)$, $(a,b]$, or $[a,b]$), it is always the case that its boundary is the set of its extreme points. Observe that the interior is the interval excluding its extreme points, $(a,b)$. This is an element of the topology (see example (1)) and we cannot fit any more intervals without extreme points inside $A$. The closure is always $[a,b]$. Observe that in the complement of $A$ we can fit the intervals $(b,+\infty)$ and $(-\infty,a)$; any other open set disjoint from $A$ is inside those two. The complement of $(-\infty,a)\cup(b,+\infty)$ is $[a,b]$, and that is the closure. Finally, the boundary is the closure $[a,b]$ minus the interior $(a,b)$. That is $\{a,b\}$.

Topology from a metric: Given a set $X$ and a metric $d$ on it, one can define a topology $T$ to consist of arbitrary unions of balls $B(a,r)=\{x\in X:\ d(a,x)<r\}$. Example: The usual topology on $\mathbb{R}$ is of this form. Observe that the intervals $(a,b)$ are just the balls $B(\frac{a+b}{2},\frac{b-a}{2})$.

Impossibility of a topology on $\mathbb{N}$ such that the boundary of finite intervals consists of their extreme points: Assume we want to put a topology on $\mathbb{N}$ such that for every finite interval (finitely many consecutive integers) the boundary is its extreme points. If the boundary of $A=\{a,a+1,a+2,...,a+n\}$ ought to be $\{a,a+n\}$, then its interior would have to be $\{a+1,a+2,...,a+n-1\}$. Since $a$ and $n$ were arbitrary, that means (taking $n=2$) that all singletons of the form $\{a+1\}$ are open (elements of the topology). By a property required of topologies, which you are supposed to read, this means that all subsets are open. But if all subsets are open, then $A$ is open, and its complement is open too, so $A$ is also closed. Therefore, the interior of $A$ would have to be $A$, and its closure would have to be $A$ as well. This forces the boundary to be empty. Therefore, there is no such topology as we wanted. In particular, there is no such topology defined by a metric.
Showing the complement of an open set is closed using sequences
A subset can be both open and closed, so your contradiction is not correct. However, you are close to a rigorous proof: Suppose by contradiction that $X \backslash U$ is not closed. So there exists a sequence $(x_n)$ in $X \backslash U$ converging to some $x \in U$. Because $U$ is open, there exists a neighborhood $V$ of $x$ included in $U$. For $n$ large enough, $x_n \in (X \backslash U) \cap V \subset (X \backslash U) \cap U= \emptyset$: a contradiction.
Is $n^\frac{1}{n}$ ever rational?
$n^{\frac{1}{n}}$ cannot be rational for any positive integer $n>1$ (no matter whether $n$ is prime or composite). This is because the number $n^{\frac{1}{n}}$ is a root of the polynomial $x^n-n$. The leading coefficient is $1$, hence any rational root would be an integer (by the rational root theorem). If we denote $m:=n^{\frac{1}{n}}$, we get $m^n=n$. $m$ is clearly positive, so it would have to be a positive integer if it were rational. We would have $m\ne 1$, hence $m\ge 2$, but then $m^n\ge 2^n>n$ for $n>1$, hence we arrive at a contradiction.
Find all stationary points of multivariable function
If there are no restrictions on x, from $f_x=0$ you get that $y=(-1\pm \sqrt{65})/2$ or $x=(2n+1)\frac{\pi}{2}$, where n is any integer, and from $f_y=0$ you get that $y=-1/2$ or $x=n\pi$, where n is any integer. Therefore the stationary points are of the form $(n\pi, \frac{-1\pm\sqrt{65}}{2})$ and $((2n+1)\frac{\pi}{2}, -\frac{1}{2}).$ Now you need to test each of these points using the Second Partials Test.
What is the Optimal strategy to win in this game I made?
TLDR - Assuming the "one point less to X" rule, the game is a draw. For example, if X starts in the center and both players play optimal moves, then the following unique position (up to symmetry) can always be forced by X: where no matter what X plays on the next two moves, X will end up with one point more than O. (This is a draw if you lower X's points.) After every move by X (while forcing this position), O has only one correct response! That is, every move by O can be forced by X. (If O makes just a single mistake, X will have multiple ways to get an advantage of more than just one point.)

Solution

I have solved the game computationally. If first player X has the penalty of starting with 1 point less (equivalently, if second player O has the compensation of starting with 1 point more), then the game is a draw under optimal play. Otherwise, the first player wins by 1 point (or more points if the second player doesn't play optimally). I have used alpha-beta pruning with transposition tables for search and bitboards to encode positions. After putting together some quick C++ code, it took me less than one hour to solve the game. (I've shared my code on codereview.) Below are example games using optimal strategy. (To punish non-optimal moves, you can run my code and explore the variations. Note that it can take multiple minutes if you evaluate a position with only a few pieces on the board.)

Terminology

Given a 5 by 5 grid, call a square "adjacent" if it can be reached by one vertical or horizontal step, unless specified otherwise. Let "advantage" be the number of raw points player X has more than player O. Raw meaning that we are ignoring the penalty aka compensation. To account for the penalty aka compensation, simply subtract one from the advantage.

First move

The optimal first move for X is to start in the center. This is the only first move after which X can still force a nonnegative advantage even if they make a mistake (play a random move) on their second move! The second best first move for X is to play in one of the four squares adjacent to the center. Even if O follows up by playing in the center, X can still force a positive advantage in this scenario. The worst first move for X is to play in a corner or in a square adjacent to a corner. Then, O can force a negative advantage after playing in the center. That is, under optimal play, X can secure an advantage of $1$ by starting in any of the five central squares. But the center itself is the best first move, as it forces O to play more precisely. After X plays the first move in the center, the only good move for O is to then play in one of the four adjacent squares. Otherwise, X can force an advantage greater than $1$, which is either $2$ or $3$.

Optimal game

WLOG (due to symmetry), the optimal game starts as follows: (X takes the center and one of the four adjacent squares not opposite the one that O took, while O takes any adjacent square and then one opposite to the one that X took.) The advantage that is forced under optimal play is shown in the left image. That is, X can continue to play into any of the squares marked with the advantage of $1$. The right image shows the number of different optimal responses by O if X plays there. (A lower number in the right image means O needs to be more precise.) Therefore, one can argue that squares containing $1$ in both images are the best optimal moves. It turns out that if X picks one of these two squares, then O must play in the other of the two.
That is, we get one of the following two positions depending on what X picks. X continues up-adjacent to center. X continues left-down-adjacent to center. Where again, in both cases, X can play in any of the $1$'s in the left image, where again $1$'s in the right image are arguably the best as they force O to play an exact move. In both cases, X has two best optimal moves. But, in the first case, both best optimal moves are equivalent due to symmetry. This leads to three possible optimal games. X plays in the 2nd square on the main diagonal, in the previous first case. X makes 3-in-a-row on the main antidiagonal, in the previous second case. X threatens to make a double 3-in-a-row, in the previous second case. Notice that in the new third case, X now has a forced move and needs to play precisely from now on to keep the advantage of $1$! So maybe, picking a move that gives O the fewest options is not always the best optimal move. In the new first case, whatever optimal move X picks, O will still have multiple optimal choices. I won't go into detail on these games as it starts branching a lot. Only in the new second case does X have moves that give no choice to O. Expanding on these two choices from X, we have the following two "best" optimal games. X connects 3-in-a-row vertically, in the new second case. X connects 3-in-a-row horizontally, in the new second case. Continuing to expand this newer second case, we have: X chooses to make 3-in-a-row. X chooses to block the 3-in-a-row threatened by O. The first option is better than the second, so expanding on it: X threatens two 3-in-a-rows. X threatens one 3-in-a-row. The first option is again clearly better, and now it is not hard to also see that the first $1$ is the better continuation for the first option. This leads to one "best" optimal game: Now again, it is not hard to see that the $1$ in the bottom left corner is the best optimal move, which leads to the following position. Once again, the $1$ in the last row is the best optimal move, leading to: And now finally, X can force the best optimal position: Finally, whatever X plays on the next two moves, the advantage of $1$ is kept. Notice that every move by O was forced so far. That is, this unique position (up to symmetry) can always be forced by X! You can verify all this and also explore various branches, alternatives and responses yourself, using my code that I linked. Btw, to increase the speed of solving early positions, you may want to start saving important optimal moves and responses.
Show that $\int_0^1 (\ln x)^n dx =(-1)^n n!$
Note that \begin{align*} \lim_{x \to 0} x (\ln x)^n &= \lim_{x \to 0} \frac{(\ln x)^n}{\frac 1 x} \\ &= \lim_{x \to 0} \frac{n (\ln x)^{n - 1} \frac 1 x}{-\frac 1 {x^2}} \\ &= -n \lim_{x \to 0} x (\ln x)^{n - 1} \end{align*} as an application of L'Hospital's rule. Repeat as needed to reduce the exponent to zero, and one sees that the limit is zero. It follows that, in your notation, $$I_n = -n I_{n - 1}$$ and the result follows.
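To spell out the step from this boundary limit to the recursion: integrating by parts with $u=(\ln x)^n$ and $dv=dx$ gives
$$I_n=\int_0^1 (\ln x)^n\,dx=\Big[x(\ln x)^n\Big]_0^1-n\int_0^1 (\ln x)^{n-1}\,dx=-nI_{n-1},$$
since the boundary term vanishes by the limit above. With $I_0=1$, unwinding the recursion yields $I_n=(-1)^n n!$.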
Confusion regarding proof using Fatou's lemma
It looks like your question boils down to the inequality $$-\liminf_n \int_{\mathbb{R}-A} f_n\, dm \leq -\int_{\mathbb{R}-A} f\, dm$$ But this is nothing more than Fatou's lemma multiplied by $-1$.
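Spelled out (assuming, as in the usual setup, that $f=\liminf_n f_n$ on $\mathbb{R}-A$): Fatou's lemma gives
$$\int_{\mathbb{R}-A} f\, dm \;\le\; \liminf_n \int_{\mathbb{R}-A} f_n\, dm,$$
and multiplying both sides by $-1$ reverses the inequality, which is exactly the display above.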
Prove that $p\mid a^2+b^2\,\Rightarrow\, p\equiv 1\pmod{\! 4}$
$1)$ $\ \,a^2\equiv -b^2\,\Leftrightarrow\, (a/b)^2\equiv -1\,\Rightarrow\, (a/b)^4\equiv 1\pmod{\!p}$, so $\text{ord}_p (a/b)=4$. Fermat's little theorem (FLT) $(a/b)^{p-1}\equiv 1\pmod{\!p}$ then implies $4\mid p-1$ (proof below). Theorem: $a^k\equiv 1\pmod{\!p}\,\Rightarrow\, \text{ord}_p a\mid k$. Proof: If not, then $\,k=m\left(\text{ord}_pa\right)+r\,$ with $\,0<r<\text{ord}_p a$. But then $a^k\equiv (a^{\text{ord}_pa})^m(a^r)\equiv 1^ma^r\equiv a^r\equiv 1\pmod {\!p}$ - contradiction. $2)\ $ By contradiction: if $\, p\equiv 3\pmod{\! 4}$,$\,$ then $\, a^2\equiv -b^2\,\Rightarrow\, (a/b)^2\equiv -1\,\stackrel{(p-1)/2}\Rightarrow$ $ (a/b)^{p-1}\equiv \color{#00F}{(-1)^{(p-1)/2}}\equiv \color{#00F}{-1}\, $ mod $p\,$, contradicting FLT.
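A quick sanity check with small numbers: $p=13$ divides $2^2+3^2=13$; here $3^{-1}\equiv 9$, so $a/b\equiv 2\cdot 9\equiv 5\pmod{13}$, and indeed
$$5^2\equiv -1,\qquad 5^4\equiv 1\pmod{13},$$
giving $\operatorname{ord}_{13}(5)=4$ and $4\mid 12=p-1$, consistent with $13\equiv 1\pmod 4$.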
Finding the residues of the following function $\frac{\cos x}{(x^2+a^2)^2}$
Direct substitution gives $$\lim_{z\to ai}\frac{\cos(z)}{(z+ai)^2}=\frac{\cos(ai)}{(ai+ai)^2}=\frac{\cosh(a)}{-4a^2}.$$ Note, however, that the pole of $\frac{\cos z}{(z^2+a^2)^2}$ at $z=ai$ has order two, so this substitution only evaluates the regular factor $g(z)=\frac{\cos z}{(z+ai)^2}$. The residue itself requires a derivative: $$\operatorname*{Res}_{z=ai}\frac{\cos z}{(z^2+a^2)^2}=g'(ai)=\left.\left(\frac{-\sin z}{(z+ai)^2}-\frac{2\cos z}{(z+ai)^3}\right)\right|_{z=ai}=\frac{i\,(a\sinh a-\cosh a)}{4a^3}.$$
Proof that $2^{10}+5^{12}$ is a composite number
$$5^{12}+4\cdot 2^{8} = (5^6-2\cdot 5^3 \cdot 2^2+2\cdot 2^4)(5^6+2\cdot 5^3 \cdot 2^2+2\cdot 2^4)$$ Using $a^4+4b^4 =(a^2-2ab+2b^2)(a^2+2ab+2b^2)$
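For completeness, the Sophie Germain identity used here comes from completing the square:
$$a^4+4b^4=(a^2+2b^2)^2-(2ab)^2=(a^2-2ab+2b^2)(a^2+2ab+2b^2).$$
With $a=5^3$ and $b=2^2$, both factors exceed $1$ (the smaller one equals $(a-b)^2+b^2$), so $2^{10}+5^{12}$ is composite.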
Searching for a bit-string $x$ such that for $f: \{0,1\}^n \rightarrow \{0,1\}$ we have $f(x) = 1$
The author is being a bit lenient in terms of accuracy. What the author is really trying to say is this: when choosing $k$ elements of $\{0,1\}^n$ at random, you cannot guarantee a probability of finding your $x$ that is higher than $\frac{k}{2^n}$. What the author writes, somewhat inaccurately, is this: if the probability that a subset of $\{0,1\}^n$ whose cardinality is $k$ contains $x$ is larger than $1-\epsilon$, then $1-\frac{k}{2^n}<\epsilon$. The point is that the probability that a subset of $\{0,1\}^n$ whose cardinality is $k$ contains $x$ is equal to $$1-\frac{{2^n-1\choose k}}{{2^n\choose k}}=\frac{k}{2^n}$$ (because in order not to contain $x$ you must choose $k$ elements among the remaining $2^n-1$). Now, if you know that this probability is exactly $1-\epsilon$, then you obviously get $\epsilon=1-\frac{k}{2^n}$; however, if you want to guarantee a probability larger than $1-\epsilon$, then you get $\frac{k}{2^n}>1-\epsilon$, whence $1-\frac{k}{2^n}<\epsilon$.
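The binomial quotient in the display is a one-line cancellation:
$$\frac{\binom{2^n-1}{k}}{\binom{2^n}{k}}=\frac{(2^n-1)!}{k!\,(2^n-1-k)!}\cdot\frac{k!\,(2^n-k)!}{(2^n)!}=\frac{2^n-k}{2^n},\qquad\text{so}\qquad 1-\frac{\binom{2^n-1}{k}}{\binom{2^n}{k}}=\frac{k}{2^n}.$$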
Uniform Continuity on unbounded interval
Hint 1: For $\alpha > 1/2$, take $x_n = (n\pi + 1/n)^{1/\alpha}$ and $y_n = (n\pi)^{1/\alpha}$ and show that as $n \to \infty$ we have $|x_n-y_n| \to 0$ but $|f(x_n) - f(y_n)| \not\to 0$. Hint 2: Using the binomial expansion $$|x_n - y_n| = (n\pi)^{1/\alpha}\left[\left(1 + \frac{1}{n^2\pi}\right)^{1/\alpha} - 1\right]= (n\pi)^{1/\alpha}\left[1 + \frac{1}{\alpha n^2\pi } + O(n^{-4})-1\right]$$
Prove that every group of order $4$ is abelian
Consider a group $G$ of order $4$. Suppose, towards a contradiction, that $G$ is not abelian. Then there must exist some distinct non-identity elements $x,y\in G$ such that $xy\ne yx$. But notice that:

$xy\ne e$ and $yx\ne e$ (since $x$ and $y$ don't commute, $y\ne x^{-1}$)

$xy\ne x$ and $yx\ne x$ (since by hypothesis $y\ne e$)

$xy\ne y$ and $yx\ne y$ (since by hypothesis $x\ne e$)

Thus, it follows that $e,x,y,xy,yx$ are $5$ distinct elements that are all in $G$. But this contradicts the fact that $G$ is of order $4$. Thus, $G$ must be abelian, as desired.
The sum of the bases of $V$ and $V^\perp$ is equal to $n$
Hint: consider the orthogonal projection onto $V$. This is a linear transformation. Now compute its image and its kernel. The orthogonal projection onto $V$ is the linear transformation $\mathbb R^n \to \mathbb R^n$ given by $P(u)=(u\cdot v_1)v_1+\cdots+(u\cdot v_l)v_l$, where $v_1,\dots,v_l$ is an orthonormal basis of $V$ and $u\in \mathbb R^n$.
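To finish from the hint: one checks that $\operatorname{im} P = V$ and $\ker P = V^{\perp}$, so the rank–nullity theorem gives
$$\dim V + \dim V^{\perp} \;=\; \dim(\operatorname{im} P) + \dim(\ker P) \;=\; n.$$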
Decomposition of tall and slim tensors
The multilinear ranks never exceed the dimensions, but the tensor rank may. In your case (i.e., if we assume that you have a random sparse tensor), the multilinear rank is $(\min(a,bf),\min(b,af),\min(f,ab))=(20,20,5)$. The practical way to compute the canonical decomposition: since $a$, $b$, and $f$ are not too large, you can first compute the Higher-Order Singular Value Decomposition of $T$ (the sparsity can be ignored). Then compute the canonical decomposition of the core tensor (it has dimensions $20\times 20\times 5$). Finally, the canonical decomposition of the initial tensor can be recovered from the canonical decomposition of the core tensor.
Subgroups of $E_8$ by using extended Dynkin diagrams
You have to remove vertices, and then all edges which connect to such a removed vertex. You must have done that wrongly, because you get none of the groups you list by removing something from the original Dynkin diagram. Notice that they all have rank $8$, whereas if you remove a vertex from a diagram of type $E_8$, you're left with something of rank $\le 7$. From the extended diagram however (which looks like something one might call "$E_9$"), one can remove one of the vertices and its connecting edges to get diagrams of type $A_4 \times A_4, A_2 \times E_6, A_3 \times D_5, D_8, A_1 \times E_7$ and $A_8$, respectively, which correspond to the six groups you list. So no, it is not enough to use the non-extended diagram, and yes, it is important.
Can irregular polygons with unique vertices be guaranteed of a unique center point? Formula?
For simplicity, I would use the centroid of a finite number of points, which is simply the arithmetic mean of each coordinate. This is as if you placed an equal weight at each of the points and then found the balancing point. It is simple to implement without errors. The other formulations you link to assume the weight is uniformly distributed across the whole area of the polygon. That will (if implemented correctly, and if given the vertices in order around the polygon) also give a uniquely determined centroid, but it is slightly different from the above, and I see no reason why it would be preferred over the simpler one. Note that, as pointed out in the comments, there will always be distinct polygons that have the same centroid. There will be many that have their centroid at the origin (e.g. any polygon with rotational symmetry). There are also ones with matching centroids away from the origin, but those are much rarer. However, given that you are using 80-gons, each vertex with 10 possible values, the probability of getting matching centroids is very small.
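A minimal sketch of both computations: the vertex mean recommended above, and, for comparison, the area centroid via the standard shoelace-based formula. The `Point` type and function names are mine, chosen for the sketch.

```cpp
#include <vector>

struct Point { double x, y; };

// Arithmetic mean of the vertices: equal weight at each point.
Point vertexCentroid(const std::vector<Point>& v) {
    Point c{0.0, 0.0};
    for (const Point& p : v) { c.x += p.x; c.y += p.y; }
    c.x /= v.size(); c.y /= v.size();
    return c;
}

// Area centroid: weight uniformly distributed over the polygon's interior.
// Vertices must be given in order around a simple (non-self-intersecting) polygon.
Point areaCentroid(const std::vector<Point>& v) {
    double A = 0.0, cx = 0.0, cy = 0.0;
    const std::size_t n = v.size();
    for (std::size_t i = 0; i < n; ++i) {
        const Point& p = v[i];
        const Point& q = v[(i + 1) % n];
        const double cross = p.x * q.y - q.x * p.y;  // shoelace term
        A  += cross;
        cx += (p.x + q.x) * cross;
        cy += (p.y + q.y) * cross;
    }
    A *= 0.5;
    return Point{cx / (6.0 * A), cy / (6.0 * A)};
}
```

The vertex mean is the one I would use here: it needs no ordering of the vertices and has no division-by-area edge cases.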
Is $f(x)=\sum \limits_{n=1}^{\infty}\frac{1}{x-c_{n}}$ bounded?
As Peter T.off observed, the limit function will not be bounded. Indeed, one such example is $$\sum_{n=1}^{\infty} \frac{1}{x^2-n^2} = \frac{\pi x \cot(\pi x) - 1}{2x^2},$$ where the sum converges uniformly on compact subsets of $\mathbb{C}\setminus\mathbb{Z}$. The limit function has a pole at every integer.
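For the record, the closed form follows from the standard partial-fraction expansion of the cotangent: rearranging
$$\pi\cot(\pi x)=\frac{1}{x}+\sum_{n=1}^{\infty}\frac{2x}{x^2-n^2}$$
gives the stated sum, and the poles of $\cot(\pi x)$ at the integers make the unboundedness explicit.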
No possible Level Surface?
Level surfaces are defined by $$p(x,y,z) = e^{-x^2-y^2-4z^2}=k\implies -x^2-y^2-4z^2=\log k<0 \implies k\in(0,1),$$ so set $-\log k=c\in (0,+\infty)$; the level surfaces are then expressed by $$x^2+y^2+4z^2=c,$$ which, for fixed $c$, is precisely an ellipsoid. To visualize it, fix $c$ and consider the cross-sections $z=0$, $x=0$ and $y=0$.
Homogeneous parameterization of a line
This parameterization of a line has a fairly straightforward motivation in one of the standard models of the projective plane $\mathbb{RP}^2$. Consider the plane $z=1$ in $\mathbb R^3$. (Other planes serve as well, but this is a common and simple choice.) Each point on this plane defines a unique line through the origin. Conversely, each line through the origin that is not parallel to this plane intersects it in a unique point. Thus, points on the plane $z=1$ can be identified with lines through the origin. The coordinates of the points on the line through $(x,y,1)$ are its scalar multiples $(wx,wy,w)$, which for $w\ne0$ you should recognize as the set of equivalent homogeneous coordinates of $(x,y)$. Three noncolinear points in $\mathbb R^3$ define a plane, so we can similarly identify the line through a pair of points $\mathbf p$ and $\mathbf q$ in the plane $z=1$ with the plane that contains those points and the origin: the line through $\mathbf p$ and $\mathbf q$ is the intersection of this plane with $z=1$. The plane is spanned by $\mathbf p$ and $\mathbf q$, so every point on the plane is of the form $s\mathbf p+t\mathbf q$, $s,t\in\mathbb R$. Obviously, we can replace $\mathbf p$ and $\mathbf q$ by any nonzero scalar multiples and still have the same plane, and thus the same line of intersection with $z=1$. Two nonparallel planes in $\mathbb R^3$ intersect in a line. If that line intersects $z=1$, then it clearly corresponds to the point of intersection of the two lines in $z=1$ represented by the planes. What if their intersection is parallel to $z=1$, i.e., lies in the plane $z=0$? If each plane intersects $z=1$, this means that those intersections are parallel, too. In the projective plane, parallel lines intersect at a point at infinity, so this leads us to identify lines in the plane $z=0$ with points at infinity in $\mathbb{RP}^2$. The plane $z=0$, then, must correspond to the line at infinity. Its intersection with any other plane is a line in $z=0$, i.e., a point at infinity, which is just what we want. With these additional pairings, the identification with $\mathbb{RP}^2$ is complete: every point in $\mathbb R^3\setminus\{0\}$ corresponds to a point in $\mathbb{RP}^2$, and its coordinates are homogeneous coordinates of the corresponding point on the projective plane. The span of any two linearly independent vectors in $\mathbb R^3$ is a plane through the origin, which maps to the line through the corresponding points in the projective plane. Finally, a line through a pair of distinct points $\mathbf p$ and $\mathbf q$ in $\mathbb{RP}^2$ consists of all linear combinations $s\mathbf p+t\mathbf q$ of those points (excluding zero, if working in homogeneous coordinates). Expanding this in terms of coordinates, the line through the points with homogeneous coordinates $[x_0:y_0:w_0]$ and $[x_1:y_1:w_1]$ is $$s[x_0:y_0:w_0]+t[x_1:y_1:w_1]=[sx_0+tx_1:sy_0+ty_1:sw_0+tw_1].$$ For the finite points on this line, $sw_0+tw_1\ne0$, and we can convert to Cartesian coordinates: $$\left({sx_0+tx_1\over sw_0+tw_1},{sy_0+ty_1\over sw_0+tw_1}\right).$$ Incidentally, in projective geometry the set of nonzero linear combinations of a collection of objects is called their join: the join of a pair of points is a line.
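As a small illustration of the final formula, here is a sketch that sweeps out points on the line joining two homogeneous points and converts them back to Cartesian coordinates (the `Homog` type and function name are mine, not from any particular library):

```cpp
#include <array>
#include <cstdio>

using Homog = std::array<double, 3>;  // homogeneous coordinates [x : y : w]

// The point s*p + t*q on the line joining p and q, returned in Cartesian
// coordinates; assumes the result is a finite point (w != 0).
std::array<double, 2> onLine(const Homog& p, const Homog& q, double s, double t) {
    const double x = s * p[0] + t * q[0];
    const double y = s * p[1] + t * q[1];
    const double w = s * p[2] + t * q[2];
    return {x / w, y / w};
}

int main() {
    const Homog p{0, 0, 1}, q{1, 1, 1};      // the origin and the point (1,1)
    for (double t = 0.0; t <= 1.0; t += 0.25) {
        const auto c = onLine(p, q, 1.0 - t, t);  // sweeps from p to q
        std::printf("(%g, %g)\n", c[0], c[1]);
    }
}
```

Taking $s+t=1$ as here traverses the segment from $\mathbf p$ to $\mathbf q$; other ratios $s:t$ reach the rest of the projective line, including the point at infinity when $sw_0+tw_1=0$.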
Name of quantity that is not invariant, but only changes in one direction
The term I know is "monovariant," but I mostly only seen this term used in the context of contest math. See, for example, these notes.
Prove that if $M$ is finitely generated then it is Artinian.
Without any assumptions about $R$, this is false. For example, consider $\mathbb{Z}$ as a $\mathbb{Z}$-module. It's finitely generated (the set $\{1\}$ is a generating set), but the collection of submodules $$\mathbb{Z}\supset 2\mathbb{Z}\supset 4\mathbb{Z}\supset 8\mathbb{Z}\supset\cdots$$ has no minimal element.
$\lim_{ x \to \infty }\sqrt{x\sqrt{x\sqrt{x}}}-\sqrt{x}$
$$\lim_{ x \to +\infty }\left(\sqrt{x\sqrt{x\sqrt{x}}}-\sqrt{x}\right)=\lim_{ x \to +\infty } \left(x^{7/8}-x^{1/2}\right)=\lim_{ x \to +\infty } x^{7/8}\left(1-\dfrac{1}{x^{3/8}}\right)=(+\infty)\cdot 1=+\infty.$$
A problem on a triangle's inradius and circumradius .
Denote the center of the incircle by $O$ and of the circumcircle by $O'$. It's easy to calculate that $$\angle ABC=\angle ACB= 2\angle OBC=2\arctan\frac{r}{BC/2}=2\arctan\frac{1}{2}.$$ Thus the height $h$ on the base $BC$ is $$h=\frac{BC}{2}\tan\angle ABC=24\tan(2\arctan\frac{1}{2})=24\times\frac{2\times\frac{1}{2}}{1-(\frac{1}{2})^2}=32.$$ By symmetry, $O'$ must lie on the height $h$. Using the property of the circumcircle that $O'A=O'B=O'C=R$, $$O'B^2=O'C^2=(\frac{BC}{2})^2+(h-R)^2=R^2,$$ which gives the solution $$R=25.$$
Surjective geodesic exponential map on a Lie group
In general, if $G$ acts transitively by isometries on a Riemannian manifold $X$, then $X$ is complete and in particular the geodesic exponential map for the Levi-Civita connection is surjective. In your case you can take $G=X$ acting on itself by left multiplication with any left-invariant metric on $G$. (Assuming of course that $G$ is connected.) The short version of the proof goes like this: At some point $p \in X$, the exponential map is defined on a ball of radius $r$ centered at $0$ in $T_pX$. Since $X$ is homogeneous, the same holds at every point of $X$. This implies that every geodesic can be extended by some definite amount, say $r/3$, and then by the Hopf-Rinow theorem the exponential map is defined on all of $T_pX$ and moreover is surjective.
Prove that $P,I$ and $C$ are collinear
Let the point $X$ be the intersection of lines $AP$ and $CB$. Since $\triangle{CLK}$ is isosceles and $AX \parallel LK$, $\triangle{CAX}$ is isosceles. The line $CI$ is then perpendicular to line $AX$, and since $AP\perp PI$ it follows that $P$ is on line $CI$ (because there is only one line going through $I$ that is perpendicular to $AX$).
a problem in linear transformation
While $P(\mathbb{R})$ is the codomain, it is also the domain of $T$, so maybe you are confusing yourself a bit. $f(x)$ may be an image, but it is also a member of $P(\mathbb{R})$. The transformation $T$ maps a polynomial function in $P(\mathbb{R})$ to another polynomial function in $P(\mathbb{R})$. For instance: $x+1 \in P(\mathbb{R})$ and $\displaystyle T(x+1)=\int_{0}^{x} t+1 \, dt = \frac{x^2}{2} +x \in P(\mathbb{R})$. This is linear since integration is clearly a linear mapping. For injectivity, suppose that $T(f)=0$. Since $f$ is a polynomial with real coefficients, you can certainly express it in explicit form $$ f(x)=\sum_{k=0}^{n} a_k x^k. $$ Then what is $T(f)$? What does the condition $T(f)=0$ tell you about the coefficients of $f$?
Looking for mathematics contests
STEP papers are excellent: rather than expecting you to memorise a bunch of formulae and apply them to a situation, they walk you through a problem, and then ask a much-more-difficult generalisation of the problem that uses similar methods. They are very challenging (particularly STEP II and STEP III) and are based on:

Calculus

Probability & Statistics

Complex numbers

Mechanics

Coordinate Geometry

The great thing about STEP papers is that they require only very basic knowledge of each topic. The hard part is applying that basic knowledge to the harder problems they set.
Show that this Linear Operator is Self-Adjoint
Hint: Note that if $P^{-1}AP = Q^{-1}AQ$ for all matrices $A$, then it follows that $$ A(PQ^{-1}) = (PQ^{-1})A $$ for all matrices $A$. It follows that $PQ^{-1}$ is a multiple of the identity matrix.
Proof $\sum\limits_{r=1}^{n} r >\frac{1}{2}n^2$ using induction
Your second step should read $$\sum_{r=1}^k r > \dfrac{k^2}{2}$$ Then note that $$\sum_{r=1}^{k+1} r = \underbrace{(k+1) + \sum_{r=1}^{k} r > (k+1) + \dfrac{k^2}{2}}_{\text{Induction hypothesis}} = \underbrace{\dfrac{k^2+2k+2}{2} > \dfrac{k^2+2k+1}{2}}_{a+\frac12 > a} = \dfrac{(k+1)^2}{2}$$
Euler's method setup
Fix a small step $h$ and iterate: $$ t_0 = 1 \\ y_0 = y(1) = 2 \\ t_{n+1} = t_{n} + h \\ y_{n+1} = y_{n} + h (-y_{n} + t_{n} y_{n}^{1/2})$$ with $y_{n} \approx y(t_n).$ Try different small steps $h = 0.1, 0.01, \ldots,$ and compare, at the grid points $t_0, t_0 + h, t_0 + 2h, \ldots,$ the accuracy of the computed $y_n$ against the actual solution $y(t_n)$ you have.
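A minimal sketch of this iteration (assuming, as the update rule above suggests, the ODE $y' = -y + t\sqrt{y}$ with $y(1)=2$; the step size and the stopping point $t=2$ are arbitrary choices for the demo):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double h = 0.01;                 // step size; try 0.1, 0.01, ...
    double t = 1.0;                        // t_0
    double y = 2.0;                        // y(1) = 2
    for (int n = 0; n < 100; ++n) {        // integrate up to t = 2
        y += h * (-y + t * std::sqrt(y));  // Euler update
        t += h;
        std::printf("t = %.2f  y ~ %.6f\n", t, y);
    }
}
```

Halving $h$ should roughly halve the error at each fixed $t$, since Euler's method is first-order.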
Show $\frac{n!}{(n + 1)! + n^2}$ is decreasing
Note that $$\dfrac{a_{n+1}}{a_n}=\dfrac{(n+1)((n+1)!+n^2)}{(n+2)!+(n+1)^2}<1.$$ Indeed, after dividing numerator and denominator by $n+1$ we have $$\dfrac{(n+1)!+n^2}{(n+2)n!+n+1}<1\iff (n+1)!+n^2<(n+2)n!+n+1\iff n^2-n-1<n!,$$ and the last inequality holds for every $n\ge 1$ (check $n=1,2,3$ directly; for $n\ge 4$ one has $n!\ge n(n-1)(n-2)>n^2$). So $a_{n+1}<a_n$, which shows that the sequence is decreasing.
Determining the region given by $(x-ay)(x-by)(x-cy)\dots<0$
To prove the dartboard rule, just take two points, neither on any of the lines, in adjacent regions. The sign of each parenthesis is the same for both points, except for a single parenthesis. This means that one point is in the region and the other is not. The second rule is obvious, since the repeated parenthesis does not change the sign of the product. If you want to use formulas and equations, sort the slopes $a_1<a_2<\cdots <a_n$. For any point $(x,y)$ the quotient $\frac xy$ is between two of these slopes, or before the first, or after the last. Can you determine the sign of the product, given the interval where $\frac xy$ lies? I find your second question impossible to answer without any conditions on the functions. Even for relatively simple examples, like polynomials, the region can become very complicated.
gradient multivariate quadratic expansion
Let $k$ be fixed. Then in the term $$\sum_{i,j=1}^n x_ix_j A_{ij},$$ there are only three types that involve $x_k$: $$\begin{cases}x_kx_k A_{kk} & \text{when } i=j=k, \\ \sum_{i\neq k} x_i x_k A_{ik} & \text{when } j=k,\ i\neq k, \\ \sum_{j\neq k} x_k x_j A_{kj} &\text{when } i=k,\ j\neq k. \end{cases}$$ Taking $\frac{\partial}{\partial x_k}$ of each term, we obtain \begin{align} \frac{\partial }{\partial x_k} \sum_{i,j=1}^n x_ix_j A_{ij}&= 2x_k A_{kk} + \sum_{i\neq k} x_i A_{ik} + \sum_{j\neq k} x_j A_{kj}\\ &= x_k A_{kk} + \sum_{i\neq k} x_i A_{ik} + x_k A_{kk} + \sum_{j\neq k} x_j A_{kj} \\ &= \sum_{i} x_i A_{ik} + \sum_{j} x_j A_{kj}. \end{align}
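In vector form, the componentwise computation above says
$$\nabla_x\!\left(x^{\top} A\, x\right) = (A + A^{\top})\,x,$$
which reduces to the familiar $2Ax$ when $A$ is symmetric.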
What is a transcedental Equation?
A transcendental equation is an equation that is not algebraic; a typical example is $x = \cos x$. The equation $x^{\frac{3}{2}} - 2x + x^{\frac{5}{3}} = 0$ is algebraic, since its solutions are solutions of some polynomial equation (which takes some manipulation to find).
Expected value for an empirical CDF
HINT By inspection, we can see that the pmf of this distribution is $$ f(x)= \left\{ \begin{array}{ll} 0.4 & \mbox{if $x=0$} \\ 0.4 & \mbox{if $x=1$} \\ 0.2 & \mbox{if $x=2$} \\ 0 & \mbox{elsewhere} \end{array} \right. $$
Tossing a biased coin
The probability that we tossed exactly $n$ heads, given there was exactly $1$ tail in the first two tries, is $Q_{1}={{k-2}\choose{n-1}}(\frac{m-1}{m})^{n-1}(\frac{1}{m})^{k-n-1}$, and the probability that we tossed exactly $n$ heads, given there were exactly $2$ tails in the first two tries, is $Q_{2}={{k-2}\choose{n}}(\frac{m-1}{m})^{n}(\frac{1}{m})^{k-n-2}$. The probability that we had exactly $1$ tail in the first two tries, given that we know there was at least $1$ tail, is the conditional probability $P_{1}=\frac{P((\text{exactly one tail})\cap(\text{at least one tail}))}{P(\text{at least one tail})}=\frac{P(\text{exactly one tail}) }{P(\text{at least one tail})}=\frac{2(\frac{1}{m})(\frac{m-1}{m})}{2(\frac{1}{m})(\frac{m-1}{m})+(\frac{1}{m})^{2}}$, since a tail occurs with probability $\frac1m$. Similarly, the probability that we had exactly $2$ tails in the first two tries, given that we know there was at least $1$ tail, is $P_{2}=\frac{(\frac{1}{m})^{2}}{2(\frac{1}{m})(\frac{m-1}{m})+(\frac{1}{m})^{2}}$. The total probability is $P_{1}Q_{1}+P_{2}Q_{2}$.
Which properties does this relation satisfy?
One has that $(x,y)R(a,b)$ if and only if $a\leq x$ and $b\leq y$. Clearly $x\leq x$ and $y\leq y$, thus $(x,y)R(x,y)$ and hence $R$ is reflexive. Notice that $(2,3)R(1,3)$ but not $(1,3)R(2,3)$, hence $R$ is not symmetric. Now try to determine whether $R$ is anti-symmetric and transitive yourself.
Find elements of a group $\operatorname{Aut}(\mathbb{Z_{20}})$ automorphisms of cyclic group $\mathbb{Z_{20}}$
$\operatorname{Aut}(\Bbb Z_{20})\cong\Bbb Z_{20}^\times\cong(\Bbb Z_5\times\Bbb Z_4)^\times\cong\Bbb Z_5^\times\times\Bbb Z_4^\times\cong\Bbb Z_4\times\Bbb Z_2$, and thus it is not cyclic. Explicitly, the automorphisms are the maps $x\mapsto kx$, one for each unit $k\in\Bbb Z_{20}^\times=\{1,3,7,9,11,13,17,19\}$.