Example of operator with spectrum equal to $\mathbb{C}$?
Let $T=\frac{1}{i}\frac{d}{dt}$ be defined on the domain $\mathcal{D}(T)$ consisting of all absolutely continuous functions $f \in L^2[0,1]$ for which $f(0)=0=f(1)$. More precisely, $f \in \mathcal{D}(T)\subset L^2[0,1]$ is an equivalence class of functions equal a.e. with one element $\tilde{f}$ of the equivalence class that is absolutely continuous on $[0,1]$ with $\tilde{f}'\in L^2[0,1]$. Then $T$ is closed and densely-defined. It's not hard to check that $T$ is symmetric: $$ (Tf,g)-(f,Tg) = \frac{1}{i}\int_{0}^{1}f'\overline{g}+f\overline{g}'dt=\left.\frac{1}{i}f\overline{g}\right|_{0}^{1} = 0. $$ The resolvent equation is $(T-\lambda I)f=g$, which means $$ f'-i\lambda f=ig,\;\;\; f(0)=0=f(1). $$ Using an integrating factor $e^{-i\lambda t}$ and the fact that $f(0)=0$ must hold, you can see that the following is necessary: $$ \frac{d}{dt}(fe^{-i\lambda t})=e^{-i\lambda t}ig \\ f(x)e^{-i\lambda x} = \int_{0}^{x}e^{-i\lambda t}ig(t)dt $$ However, this is an actual solution iff $$ \int_{0}^{1}e^{-i\lambda t}g(t)dt = 0. $$ So there is no $\lambda\in\mathbb{C}$ for which a solution of the resolvent equations can be found for all $g$. Therefore $\sigma(T)=\mathbb{C}$.
Find the following partial derivatives?
You must think of $z$ as a function of $x$ and $y$: $z=z(x,y)$. Then $$ \begin{align} 0&=dF(x,y,z)=\frac{\partial F}{\partial x} dx + \frac{\partial F}{\partial y} dy + \frac{\partial F}{\partial z} dz \\ &= \frac{\partial F}{\partial x} dx + \frac{\partial F}{\partial y} dy +\frac{\partial F}{\partial z} \left( \frac{\partial z}{\partial x}dx + \frac{\partial z}{\partial y}dy \right) \\ &=\left( \frac{\partial F}{\partial x} + \frac{\partial F}{\partial z}\frac{\partial z}{\partial x} \right) dx + \left( \frac{\partial F}{\partial y} + \frac{\partial F}{\partial z}\frac{\partial z}{\partial y} \right)dy, \end{align} $$ and therefore $$ \begin{cases} \frac{\partial F}{\partial x} + \frac{\partial F}{\partial z}\frac{\partial z}{\partial x} =0 \\ \frac{\partial F}{\partial y} + \frac{\partial F}{\partial z}\frac{\partial z}{\partial y} =0. \end{cases} $$ Now you can solve this system and get $\partial z/\partial x$ and $\partial z / \partial y$.
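For a concrete example (my own choice, not from the question): take $F(x,y,z)=x^2+y^2+z^2-1$, the unit sphere. Wherever $\frac{\partial F}{\partial z}=2z\neq 0$, the system gives $$\frac{\partial z}{\partial x}=-\frac{\partial F/\partial x}{\partial F/\partial z}=-\frac{x}{z},\qquad \frac{\partial z}{\partial y}=-\frac{\partial F/\partial y}{\partial F/\partial z}=-\frac{y}{z}.$$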
Monotone convergence implies $\mathbb{E}\sum X_n = \sum \mathbb{E}X_n$?
$ \mathbb{E}[X_1+X_2 + \cdots + X_n] = \mathbb{E}[X_1] + \cdots + \mathbb{E}[X_n]$ holds for any finite $n$ by linearity, whereas $\mathbb{E}[\sum_n X_n] = \sum_n \mathbb{E}[X_n]$ involves an infinite summation. In order to exchange the expectation and the infinite sum, you need some form of monotone convergence, dominated convergence, Fubini's theorem, etc.
solve $T(n)=T(n-1)+T(\frac{n}{2})+n$
If you want a final answer, skip to the last line. $T(n) = T(n - 1) + T(n/2) + n$. The solution grows at least as fast as $n^2$. My guess is that the $T(n/2)$ term contributes a $\log(n)$ factor. So let's check whether $T(n) \in O(n^2 \log(n))$: does there exist an $M$ such that, as $n$ gets large, $$\exists M>0, n_0,\ \forall n > n_0:\ T(n) \le M n^2 \log(n)?$$ Known: $T(n) = T(n - 1) + T(n/2) + n$. Substitute the known recursion into $T(n) \le M n^2 \log(n)$: $$T(n - 1) + T(n/2) + n \le M n^2 \log(n)$$ $$T(n - 1) + T(n/2) \le M n^2 \log(n) - n$$ We need two inductive hypotheses to check this, so we'll be using super violent strong induction: $$\text{Inductive hypotheses:}$$ $$T(n - 1) \le M (n - 1)^2 \log(n - 1)$$ $$T(n/2) \le M (n/2)^2 \log(n/2)$$ We have inductive hypotheses of the forms $a \le b$ and $c \le d$, and we wish to use them to prove a statement of the form $a+c \le e$, so it suffices to establish that $b+d \le e$: $$M (n - 1)^2 \log(n - 1) + M (n/2)^2 \log(n/2) \le M n^2 \log(n) - n$$ So we want to find whether there is any $M$, not depending on $n$, that makes the above statement true for all sufficiently large $n$: $$M \bigl((n - 1)^2 \log(n - 1) + (n/2)^2 \log(n/2) - n^2 \log(n)\bigr) \le - n$$ $$M \bigl(n^2 \log(n - 1) - 2n\log(n - 1) + \log(n - 1) + \tfrac14 n^2 \log(n) - \tfrac14 n^2\log(2) - n^2 \log(n)\bigr) \le -n$$ $$M \bigl(n^2 (\log(n - 1) - \tfrac34 \log(n) - \tfrac14\log(2)) - 2n\log(n - 1) + \log(n - 1) \bigr) \le -n$$ The coefficient of $n^2$ is $\log(n - 1) - \frac34 \log(n) - \frac14\log(2)$, which you can see by inspection will be positive for sufficiently large $n$. Unfortunately, since $M$ must be positive and the right-hand side is negative, no value of $M$ will make the inequality true for sufficiently large $n$. Therefore we cannot conclude that $T(n) \in O(n^2 \log(n))$. One step in the above was an implication rather than an equivalence, so we can't necessarily say that it is false either. We'll check another bound. Let's see if any polynomial satisfies this: is $T(n) \in O(n^c)$ for some $c$? $$T(n) \le M n^c$$ Known: $T(n) = T(n - 1) + T(n/2) + n$. Substitute the known recursion into $T(n) \le M n^c$: $$T(n - 1) + T(n/2) + n \le M n^c$$ $$T(n - 1) + T(n/2) \le M n^c - n$$ Two inductive hypotheses again: $$T(n - 1) \le M (n - 1)^c$$ $$ T(n/2) \le M (n/2)^c$$ We have inductive hypotheses of the forms $a \le b$ and $c \le d$, and we wish to use them to prove a statement of the form $a+c \le e$, so it suffices to establish that $b+d \le e$: $$M (n - 1)^c + M (n/2)^c \le M n^c - n$$ So we want to find whether $\exists M\, \forall n > n_0$ the above statement holds. Let's ignore all polynomial terms of degree less than $c$, since we are only concerned with $n$ above a threshold: $$M \bigl((n - 1)^c + (n/2)^c - n^c\bigr) \le - n$$ $$M \bigl(n^c + \tfrac{1}{2^c} n^c - n^c + \dots\bigr) \le -n$$ $$M \tfrac{1}{2^c} n^c + \dots \le -n$$ So we can't find any polynomial which seems to solve this. I think someone suggested checking whether $T(n) \in O(n^{\log(n)})$. That would be an interesting result. $$T(n) \le M n^{\log(n)}$$ Known: $T(n) = T(n - 1) + T(n/2) + n$. Substitute the known recursion into $T(n) \le M n^{\log(n)}$: $$T(n - 1) + T(n/2) + n \le M n^{\log(n)}$$ $$T(n - 1) + T(n/2) \le M n^{\log(n)} - n$$ Two inductive hypotheses again: $$ T(n - 1) \le M (n - 1)^{\log(n - 1)}$$ $$ T(n/2) \le M (n/2)^{\log(n / 2)}$$ ...blah blah blah $$M (n - 1)^{\log(n - 1)} + M (n/2)^{\log(n / 2)} \le M n^{\log(n)} - n$$ $$M \bigl((n - 1)^{\log(n - 1)} + (n/2)^{\log(n / 2)} - n^{\log(n)}\bigr) \le - n$$ $$M \bigl(n^{\log(n)} - (n - 1)^{\log(n - 1)} - (n/2)^{\log(n / 2)}\bigr) \ge n$$ At this point it's pretty obvious that we've found a solution. Yay. Try $M=1$:
$$n^{\log(n)} - (n - 1)^{\log(n - 1)} - (n/2)^{\log(n / 2)} \ge n$$ This holds for $n > 5.4$, which gives a witness for $n_0$. Therefore we can conclude that $T(n) \in O(n^{\log(n)})$. We can all be happy now. Please double check this for typos.
How common is it for a densely-defined linear functional to be closed?
Actually I think I have an answer of sorts. Let $X$ be a Banach space and $\varphi$ a not necessarily bounded linear functional with domain $\operatorname{dom}(\varphi)$ a dense subspace of $X$. Claim: If $\varphi$ is closed, then $\operatorname{dom}(\varphi) = X$ and $\varphi$ is bounded. Proof: First we show that $\ker(\varphi)$ is a closed subspace of $X$. Indeed, suppose $x_n \in \ker(\varphi)$ converge to $x \in X$. We have $\lim \varphi(x_n) = \lim 0 = 0$, so, as $\varphi$ is closed, we have $\varphi(x) = 0$, i.e. $x \in \ker(\varphi)$. Now, $\operatorname{dom}(\varphi)$ is the sum of $\ker(\varphi)$ and a $1$-dimensional subspace. It follows that $\operatorname{dom}(\varphi)$ is closed in $X$ (sums of closed subspaces and finite-dimensional subspaces are closed). Since $\varphi$ is densely-defined, we have $\operatorname{dom}(\varphi) = X$. That $\varphi$ is bounded now follows either from the closed graph theorem, or the usual result that functionals with closed kernels are bounded.
Form of weakly continuous linear functional
By hypothesis, $\omega$ is weakly continuous. Then $\omega^{-1}(\{|z|<1\})$ is open and contains $0$. So there is a neighbourhood $V$ with $0\in V$ and $$ V=\{x:\ |\langle x\xi_j,\eta_j\rangle|<1,\ j=1,\ldots,m\}\subset\omega^{-1}(\{|z|<1\}). $$ Given an arbitrary $x$, the element $x'=\tfrac{x}{2\sum_j|\langle x\xi_j,\eta_j\rangle|}$ is in $V$, so $|\omega(x')|<1$, which gives $$ |\omega(x)|\leq2\sum_j|\langle x\xi_j,\eta_j\rangle|. $$ Denote by $p_j$ the linear functionals $p_j(x)=\langle x\xi_j,\eta_j\rangle$. We have $\bigcap_j\ker p_j\subset\ker\omega$. By removing some elements from the list if necessary, we may assume that $\bigcap_{j}\ker p_j\subsetneq\bigcap_{j\ne k}\ker p_j$ for each $k$. Thus there exists, for each $k$, an $x_k$ such that $p_k(x_k)=1$ and $p_j(x_k)=0$ for all $j\ne k$. Now, given any $x$, consider $x_0=x-\sum_kp_k(x)x_k$. For any $j$ we have $$ p_j(x_0)=p_j(x)-\sum_kp_k(x)p_j(x_k)=p_j(x)-p_j(x)=0. $$ So $x_0\in\bigcap_j\ker p_j\subset\ker\omega$. Then $\omega(x)=\sum_j\omega(x_j)\,p_j(x)$, showing that $$ \omega=\sum_j\omega(x_j)\,p_j=\sum_j c_j\langle\,\cdot\,\xi_j,\eta_j\rangle=\sum_j \langle\,\cdot\,\xi_j',\eta_j\rangle, $$ where $c_j=\omega(x_j)$ and $\xi_j'=c_j\xi_j$. Let $\{\xi_s''\}$ be an orthonormal basis of $\operatorname{span}\{\xi_1',\ldots,\xi_m'\}$ and $\{\eta_t''\}$ be an orthonormal basis of $\operatorname{span}\{\eta_1,\ldots,\eta_m\}$, and write $\xi_j'=\sum_s\alpha_{js}\xi_s''$, $\eta_j=\sum_t\beta_{jt}\eta_t''$. Then $$ \omega=\sum_j\Big\langle x\sum_s \alpha_{js}\xi_s'',\sum_t\beta_{jt}\eta_t''\Big\rangle =\sum_{s,t}\Big(\sum_j\alpha_{js}\overline{\beta_{jt}}\Big)\,\langle x\xi_s'',\eta_t''\rangle =\sum_{s,t}\langle x\xi_{s,t}''',\eta_t''\rangle, $$ where $\xi_{s,t}'''=\big(\sum_j\alpha_{js}\overline{\beta_{jt}}\big)\,\xi_s''$. After relabelling, we have that $\omega$ is of the form $$ \omega=\sum_j\langle\,\cdot\,\xi_j,\eta_j\rangle, $$ with both $\{\xi_1,\ldots,\xi_r\}$ and $\{\eta_1,\ldots,\eta_r\}$ orthogonal. Finally, regarding the norm, we have $$ \|\omega\|\leq\sum_j\|\langle\,\cdot\,\xi_j,\eta_j\rangle\|=\sum_j\|\xi_j\|\,\|\eta_j\|. $$ And if we take $x$ to be the linear operator that maps $\xi_j\longmapsto \tfrac{\|\xi_j\|}{\|\eta_j\|}\eta_j$ and on the orthogonal complement of $\operatorname{span}\{\xi_1,\ldots,\xi_r\}$ is a unitary that maps onto $\{\eta_1,\ldots,\eta_r\}^\perp$, then $x$ is unitary and $$ |\omega(x)|=\sum_j\langle x\xi_j,\eta_j\rangle=\sum_j\|\xi_j\|\,\|\eta_j\|. $$ Thus $$ \|\omega\|=\sum_j\|\xi_j\|\,\|\eta_j\|. $$
Given two circles arbitrarily positioned and oriented in $\mathbb{R}^3$, how can I find the nearest points on each circle?
For a single circle first, with radius $r$: if $\mathbf{n}$ is your normal vector, you can use it to find vectors $\mathbf{u}_1,\mathbf{u}_2$ that form an orthonormal basis of the plane containing the circle. Then, with $\mathbf{c}$ the center of the circle, you can parametrize the circle by $\mathbf{c} + (r\cos\theta)\mathbf{u}_1 + (r\sin\theta)\mathbf{u}_2$. This yields a function $P : [0,2\pi] \rightarrow \mathbb{R}^3$. Doing this with both circles, you get functions $P_1$ and $P_2$. Now if $\mathbf{v}$ and $\mathbf{w}$ are any two points in $\mathbb{R}^3$ (in particular, one on each circle) you can compute the square of the distance between them: $$d(\mathbf{v},\mathbf{w})^2 = \sum_i (v_i - w_i)^2$$ Using the parametrizations, you get a function $f:[0,2\pi]\times [0,2\pi] \rightarrow \mathbb{R}$ given by $f(\theta_1,\theta_2) = d(P_1(\theta_1),P_2(\theta_2))^2$. This is a function of two variables, and if $(\alpha,\beta)$ is a minimum of it, then $P_1(\alpha)$ and $P_2(\beta)$ are a pair of nearest points on your circles. Can you compute these minima using some calculus, the formula for $d(\mathbf{v},\mathbf{w})^2$, and the parametrizations $P_1$ and $P_2$?
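If the calculus gets messy, you can also minimize $f$ numerically. Here is a minimal sketch in R (the function names and the two circles' centers, normals, and radii are made-up example data, not from the question):

    # orthonormal basis {u1, u2} of the plane with normal n
    plane_basis <- function(n) {
      n <- n / sqrt(sum(n^2))
      a <- if (abs(n[1]) < 0.9) c(1, 0, 0) else c(0, 1, 0)  # any vector not parallel to n
      cross <- function(p, q) c(p[2]*q[3] - p[3]*q[2],
                                p[3]*q[1] - p[1]*q[3],
                                p[1]*q[2] - p[2]*q[1])
      u1 <- cross(n, a); u1 <- u1 / sqrt(sum(u1^2))
      list(u1 = u1, u2 = cross(n, u1))
    }
    P <- function(theta, cen, r, b) cen + r*cos(theta)*b$u1 + r*sin(theta)*b$u2

    c1 <- c(0, 0, 0); n1 <- c(0, 0, 1); r1 <- 1   # example circle 1
    c2 <- c(3, 0, 1); n2 <- c(0, 1, 1); r2 <- 2   # example circle 2
    b1 <- plane_basis(n1); b2 <- plane_basis(n2)
    f <- function(th) sum((P(th[1], c1, r1, b1) - P(th[2], c2, r2, b2))^2)

    # f can have several local minima, so minimize from a small grid of starting angles
    starts <- expand.grid(t1 = seq(0, 2*pi, length.out = 5)[-5],
                          t2 = seq(0, 2*pi, length.out = 5)[-5])
    fits <- lapply(seq_len(nrow(starts)),
                   function(i) optim(c(starts$t1[i], starts$t2[i]), f))
    best <- fits[[which.min(sapply(fits, function(z) z$value))]]
    P(best$par[1], c1, r1, b1)   # (approximate) nearest point on circle 1
    P(best$par[2], c2, r2, b2)   # (approximate) nearest point on circle 2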
General Gauss-Markov theorem
The Gauss-Markov Theorem states that the OLS estimator: $$\hat{\boldsymbol{\beta}}_{OLS} = (X'X)^{-1}X'Y$$ is the Best Linear Unbiased Estimator. For the proof, I will focus on conditional expectations and variances; the results extend easily to the unconditional case. Also, for the proof, I take $\Omega = I_{n}$ (spherical errors), but the result extends easily to the general case as well. Proof: Is it unbiased? $$E(\hat{\boldsymbol{\beta}}_{OLS} \mid X) = E[(X'X)^{-1}X'Y \mid X] = E[(X'X)^{-1}X'(X\boldsymbol{\beta} + u) \mid X] = \\ \boldsymbol{\beta} + (X'X)^{-1}X'E(u \mid X) = \boldsymbol{\beta}$$ Yes! Is it, then, among the unbiased estimators, the one with the smallest variance? Consider, for this purpose, a general linear unbiased estimator $\boldsymbol{b}$: $$\boldsymbol{b} = C\boldsymbol{y}$$ where $C$ is a generic $k$ $\times$ $n$ matrix that depends only on the sample information in $X$ and satisfies $CX = I_{k}$, which guarantees unbiasedness. Note that for $\hat{\boldsymbol{\beta}}_{OLS}$, $C_{OLS} = (X'X)^{-1}X'$. It can be proved that: $$Var(\hat{\boldsymbol{\beta}}_{OLS} \mid X) = \sigma^{2}(X'X)^{-1}$$ and, for the generic linear estimator: $$Var(\boldsymbol{b} \mid X) = \sigma^{2}CC'.$$ We can additionally define $D = C - C_{OLS}$. It is immediate that $DX = 0$. From this, we can finally conclude that: \begin{align} Var(\boldsymbol{b} \mid X) &= \sigma^{2}[D + (X'X)^{-1}X'][D' + X(X'X)^{-1}] \\ &= \sigma^{2}(X'X)^{-1} + \sigma^{2}DD' \\ &= Var(\hat{\boldsymbol{\beta}}_{OLS} \mid X) + \sigma^{2}DD' \\ &\geq Var(\hat{\boldsymbol{\beta}}_{OLS} \mid X) \end{align} since $DD'$ is positive semidefinite (the cross terms vanish because $DX = 0$), with equality if and only if $D = 0$, i.e. $\boldsymbol{b} = \hat{\boldsymbol{\beta}}_{OLS}$.
If $A$ is a real $n \times n$ matrix satisfying $A^3 = I$ then Trace of $A$ is always
Here are two matrices $A$ with $A^3 = A$ (the condition in the original version of the question): $$ A = \begin{pmatrix}1 & 0 & 0\\ 0 & 1 & 0 \\ 0 & 0 & -1\end{pmatrix}\qquad A = \begin{pmatrix}-1 & 0 & 0\\ 0 & -1 & 0 \\ 0 & 0 & -1\end{pmatrix} $$ Since these have different traces, the question does not have a definite answer. Edit: The matrix $$ A = \begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & 1 \\ 1 & 0 & 0\end{pmatrix} $$ has the desired property for the edited question ($A^3 = I$) and has trace $0$. Hence, if the question is well-posed, the answer must be $0$.
Heaviest subgraph algorithm
Your problem (apart from the two restrictions) includes the max-clique problem. The second restriction does not reduce the complexity: if it did, you could run the algorithm for each node separately and still end up with a good algorithm for the max-clique problem. The first restriction does not help either, since $N$ is not specified: it could be larger than the largest clique in the system. So, give up all hope of finding an efficient algorithm.
Expected value for jointly Gaussian RV
There is a useful characterization of bivariate Gaussian distributions as the linear transformation of independent univariate Gaussian random variables. Let $Y \sim N(0, 2)$ and $Z \sim N(0, 2)$ be independent. Let $$X := \frac{1}{2} Y + \frac{\sqrt{3}}{2} Z + 1.$$ Then $(X,Y)$ is jointly Gaussian (since any linear combination of $X$ and $Y$ is Gaussian). You can check that $E[X]=1$ and $E[Y] = 0$ and $\text{Var}(Y) = 2$. You can also check that $$\text{Var}(X) = \frac{1}{4} \text{Var}(Y) + \frac{3}{4} \text{Var}(Z) = 2$$ and $$\text{Cov}(X,Y) = \frac{1}{2}\text{Cov}(Y,Y) = 1.$$ Thus $(X,Y)$ has the same distribution as your problem. This particular construction makes conditioning on $Y$ very easy, as we show now. $$E[X \mid Y] = E[\frac{1}{2} Y + \frac{\sqrt{3}}{2} Z + 1 \mid Y] = \frac{1}{2} Y + 1.$$ $$E[X^2 \mid Y] = E[(\frac{1}{2} Y + \frac{\sqrt{3}}{2} Z + 1)^2 \mid Y] = \frac{1}{4}Y^2 + \frac{3}{4} E[Z^2] + 1 + Y = \frac{1}{4} Y^2 + Y + \frac{5}{2}.$$
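A quick numerical sanity check of this construction (a sketch in R, not part of the argument):

    n <- 1e6
    Y <- rnorm(n, mean = 0, sd = sqrt(2))
    Z <- rnorm(n, mean = 0, sd = sqrt(2))
    X <- Y/2 + sqrt(3)/2 * Z + 1
    c(mean(X), var(X), cov(X, Y))   # approximately 1, 2, 1, as claimed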
Stream of numbers: What is the probability that a number is higher than all previous numbers (as a function of its rank)
For the first case: You have $k$ numbers. The highest is equally likely to be first, second, third, ..., $k$th. So the probability is $1/k$ that the last one is the highest. $a>b$ and $a>c$ are not independent: $a>b$ makes $a>c$ more likely. Try it out with just those three numbers; there are six equally likely orderings: $a>b>c$, $a>c>b$, $b>a>c$, $b>c>a$, $c>a>b$, $c>b>a$.
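If you want to see the $1/k$ answer numerically, here is a quick simulation sketch in R (with $k=5$ as a made-up example):

    k <- 5
    mean(replicate(1e5, which.max(runif(k)) == k))   # about 1/k = 0.2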
checking uniform convergence of series $\sum_{n=1}^\infty x^n$
The supremum is $+\infty$ for all $n$, because $$\lim_{x \to 1^-}\frac{x^n}{1-x}=+\infty.$$ The best you can do is uniform convergence on $[-a, a]$ for all $a \in (0, 1)$, with the Weierstrass M-test, for example.
A bound of the harmonic series of squares.
Without using integrals, there's the classical trick: $$\frac{1}{k^2} < \frac{1}{k(k - 1)} = \frac{1}{k - 1} - \frac{1}{k}.$$ Summing from $k=2$ to $n$ telescopes: $$\sum_{k=2}^{n}\frac{1}{k^2} < \sum_{k=2}^{n}\left(\frac{1}{k-1} - \frac{1}{k}\right) = 1 - \frac{1}{n} < 1,$$ which gives what you want (for instance, $\sum_{k\ge 1}\frac{1}{k^2} < 2$).
Navigating through the surface of a hypersphere in a computer game
The $M$ you asked about in your final question is in general non-unique, given the data you specified. The way to see this is as follows: given a point $(x,y,z,w)$ and another $(x',y',z',w')$, add to them the third point $(0,0,0,0)$ in four-dimensional space. Assuming that the two original points do not coincide and that they are not antipodes, these three points are not collinear and so define a plane in four-dimensional space. So you can take a rotation in that plane that carries $(x,y,z,w)$ to $(x',y',z',w')$ while fixing the directions perpendicular to that plane. Call this transformation $M_1$. However, since your ambient space is four-dimensional, the space of directions perpendicular to the given plane is also two-dimensional ($4-2 = 2$). So you can equally take an arbitrary rotation in that perpendicular plane which fixes all directions perpendicular to it. Call such a rotation $O$. Then you can check that $M_2 = OM_1$ also sends $(x,y,z,w)$ to $(x',y',z',w')$. To be explicit: assume your initial coordinate is $(1,0,0,0)$ and the final coordinate is $(0,1,0,0)$. Then we have $$ M_1 = \begin{pmatrix} 0 & -1 & 0 & 0\\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{pmatrix}$$ If you take $O$, parametrized by $\theta$, to be $$ O(\theta) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\\ 0 & 0 & \cos\theta & \sin\theta \\ 0 & 0 & -\sin\theta & \cos\theta\end{pmatrix}$$ then you can check that $O(\theta)M_1$ will send $(1,0,0,0)$ to $(0,1,0,0)$ for any $\theta$, but the matrices $O(\theta)M_1$ are all different for $\theta$ in the range $0 \leq \theta < 2\pi$. So what does this mean physically? Moving "forward" is not as simple an issue as you might think. The analogue of the different $O(\theta)$ in three-dimensional Euclidean space corresponds to rotating the whole space along the axis of travel! In other words, imagine you have a spaceship in Euclidean space and it spins (with the axis of spinning in the same direction as its travel) while it moves forward. This is what $O(\theta)$ captures: the spinning. Without getting too much into the gory details, I will note that this ambiguity lies at the heart of differential/Riemannian geometry, and is closely connected with the notion of parallel transport. In any case, using a bit of differential geometry, you can see that the matrix $M_1$ defined above is the "correct" notion of translation if you do not want "spinning". Okay, enough about that. How to actually implement this? Let's say that the viewer is sitting at coordinates $(x,y,z,w)$. And let's say the viewer is facing in the direction of $(\delta x, \delta y, \delta z, \delta w)$ with $\delta x^2 + \delta y^2 + \delta z^2 + \delta w^2 = 1$. Notice that since the direction the viewer is facing is tangential to the hypersphere, it must be perpendicular to the coordinates, that is $$ x\cdot \delta x + y\cdot \delta y + z \cdot \delta z + w \cdot \delta w = 0$$ Given these two vectors, you can use some linear algebra to complete them to an orthonormal basis of four-dimensional space (which can be obtained by solving some linear equations based on orthogonality to the two given vectors); call the two additional vectors $(a,b,c,d)$ and $(a',b',c',d')$.
Then your translation matrix should be given by $$ M(\phi) = \begin{pmatrix} x & \delta x & a & a'\\ y & \delta y & b & b' \\ z & \delta z & c & c' \\ w & \delta w & d & d'\end{pmatrix} \begin{pmatrix} \cos(\phi) & -\sin(\phi) & 0 & 0\\ \sin(\phi) & \cos(\phi) & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{pmatrix} \begin{pmatrix} x & y & z & w\\ \delta x & \delta y & \delta z & \delta w \\ a & b & c & d \\ a' & b' & c' & d'\end{pmatrix} $$ A quick explanation: by fixing the orthonormal basis, we can use it to construct an orthogonal transformation to a coordinate system in which the viewer sits at $(1,0,0,0)$ and is facing in the $(0,1,0,0)$ direction. Then all we need to do is to conjugate back the translation matrix for that situation (the sign convention in the middle factor is chosen to match $M_1$ above: $M(\pi/2)$ sends the viewer's position to the point it is facing). The $\phi$ parameter measures the (angular) distance you travel on the hypersphere.
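Here is a minimal sketch of this recipe in R (the function name `move` is mine; it assumes unit vectors `p` (position) and `d` (facing direction) with `sum(p*d) == 0`, and completes them to an orthonormal basis via QR instead of solving the linear equations by hand):

    move <- function(p, d, phi) {
      B <- qr.Q(qr(cbind(p, d, rnorm(4), rnorm(4))))  # Gram-Schmidt via QR on (p, d, random, random)
      if (sum(B[, 1] * p) < 0) B[, 1] <- -B[, 1]      # QR fixes columns only up to sign
      if (sum(B[, 2] * d) < 0) B[, 2] <- -B[, 2]
      R <- diag(4)
      R[1:2, 1:2] <- matrix(c(cos(phi), sin(phi), -sin(phi), cos(phi)), 2, 2)
      B %*% R %*% t(B)                                # the translation matrix M(phi)
    }
    p <- c(1, 0, 0, 0); d <- c(0, 1, 0, 0)
    M <- move(p, d, 0.1)
    M %*% p   # new position: cos(0.1)*p + sin(0.1)*d, a small step toward d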
Notation for integration excluding endpoint
You can represent what you want in this way: $$\int_{[a,b)}f(x)\,\mathrm d x.$$ But in many cases (for instance, whenever $f$ is Lebesgue integrable, since the endpoint $\{b\}$ has measure zero) you simply have $$\int_{[a,b)}f(x)\,\mathrm d x=\int_{[a,b]}f(x)\,\mathrm d x.$$
Help with sum of coefficients please!
The given polynomial is $f(x,y) = (1+x-y)^3$. Problem 1: sum of all coefficients is $f(1,1) = 1$. Problem 2: sum of coefficients of the terms not containing $y$ is $f(1,0) = 2^3 = 8$. Problem 3: sum of coefficients of the terms containing $x$ is $f(1,1) - f(0,1) = 1 - 0 = 1$.
If $\Sigma$ is a homotopy sphere, then $\Sigma\#(-\Sigma)$ bounds a contractible manifold.
As far as I got, from the first statement, it is sufficient to show the following: if $\Sigma$ is a homotopy sphere, then the space $\Sigma'$ obtained by deleting an open ball from $\Sigma$ is contractible. I think I can show this (although by a rather cumbersome argument). Firstly, note that $\Sigma$ is in particular a homology sphere by the Hurewicz theorem (i.e. it has homology isomorphic to that of a sphere). We show by the Mayer-Vietoris sequence that $H_{i}(\Sigma',\mathbb{Z}) = 0$ for $i>0$. In the notation of the Wikipedia article on the Mayer-Vietoris sequence, take $X = \Sigma$, $A = \Sigma'$ and $B$ a ball in $\Sigma$ slightly bigger than the one you deleted to get $\Sigma'$; then note that $A \cap B$ will be homotopy equivalent to a sphere of dimension $\dim(\Sigma)-1$. Let $n = \dim(\Sigma)$. The vanishing of $H_{i}(\Sigma',\mathbb{Z})$ follows immediately from the long exact sequence for $0<i<n-1$, since there are zeros on either side. For $i=n-1$, note that the boundary map $H_{n}(X) \rightarrow H_{n-1}(A \cap B)$ is an isomorphism since it maps the fundamental cycle to the fundamental cycle, thus proving that $H_{n-1}(\Sigma',\mathbb{Z})= 0$ (see the description of the boundary map in the Wikipedia article). Finally, all higher homology groups vanish since $\Sigma'$ is a non-compact manifold of dimension $n$ (of finite topological type). Next, it follows from the Hurewicz theorem (plus the fact that $\pi_{1}(\Sigma') \cong \pi_{1}(\Sigma) = \{1\}$) that all of the homotopy groups $\pi_{i}(\Sigma')$ are trivial. Finally, by Whitehead's theorem the inclusion of a point in $\Sigma'$ induces an isomorphism on all homotopy groups, hence is a homotopy equivalence, i.e. $\Sigma'$ is a contractible space.
Convergent sequences to the same limit
Let $a \stackrel{\rm def}{=} \lim_{n\to\infty} a_n$ and $b \stackrel{\rm def}{=} \lim_{n\to\infty} b_n$, which you have proven exist and satisfy $0<a\leq b$. Then, by continuity, we have $$a = \lim_{n\to\infty} a_{n+1} = \lim_{n\to\infty} \sqrt{a_nb_n} = \sqrt{ab} $$ i.e., $\sqrt{a}=\sqrt{b}$. You can conclude. I used the limit on $a_{n+1} = \sqrt{a_nb_n}$. You can use the same argument on $b_{n+1} = \frac{a_n+b_n}{2}$ instead, if you prefer, to get $a = \frac{a+b}{2}$, leading to the same conclusion: $a=b$.
Find a finite extension of $\mathbb{Q}$ in which all primes split
It may be possible to do this without using the hint. The polynomial $(x^2-13)(x^2-17)(x^2-221)$ has a root mod $p$ for all $p$; does that imply that every $p$ splits in ${\bf Q}(\sqrt{13},\sqrt{17})$? Or do I have my wires crossed?
Show that if $\phi$ is a homomorphism, then $\phi(a^{-1}) = [\phi (a)]^{-1}$
Hint: $\phi(aa^{-1})=\phi(1)=1=\phi(a)\phi(a^{-1})$
How to convert North/South and East/West velocities into a compass heading degrees?
The angle you want is $$\tan^{-1}\frac{v_{SN}}{v_{WE}}.$$ As for the quadrants, many programming languages have a two-argument arctangent function (usually called atan2) precisely so as to get you into the right quadrant without any effort, and also to handle the pathological cases of "due north" and "due south". Python has one too (math.atan2).
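For example, in R (the names are mine; this assumes a compass heading measured clockwise from due north, with $v_{SN}$ positive toward north and $v_{WE}$ positive toward east):

    v_n <- -3.2; v_e <- 4.5                     # made-up example velocities
    math_angle <- atan2(v_n, v_e) * 180 / pi    # the answer's angle, measured from due east
    heading <- (90 - math_angle) %% 360         # converted to compass convention
    heading                                     # 0 = N, 90 = E, 180 = S, 270 = W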
Intro to imaginary numbers: If $i$ = $\sqrt{-1}$ and $i^2 = -1$, then when do you use $i^2$ and when $-1$?
No, $\sqrt{-1}$ and $i$ are not the same thing. Actually, since there are two square roots of $-1$, it is not a good idea to use the expression $\sqrt{-1}$, unless you have defined it as something more than "square root of $-1$". On the other hand, I suggest that you use $i^2$ instead of $-1$ whenever that is useful. Such as when solving the equation $z^2=-1$:$$z^2=-1\iff z^2=i^2\iff z=i\vee z=-i.$$
Rank of the incidence matrix of a directed graph.
Let $(G,E)$ refer to our simple, directed graph. Let $(G',E')$ denote the corresponding undirected graph. One proof is as follows: begin with the case where $G'$ is a tree, which can be handled in the manner described in this post. Now, consider the case where $G'$ is any connected graph. We use the following facts:

1. Every undirected connected graph has a spanning tree,
2. Every tree on $n$ nodes has $n-1$ edges,
3. Relabeling the nodes/edges (or equivalently, permuting the rows/columns of the incidence matrix) does not change the rank of the incidence matrix.

Relabel the edges of the graph so that the edges $1,\dots,n-1$ are the edges of our spanning tree. The first $n-1$ columns of the matrix form the incidence matrix of a tree, so these are linearly independent. It follows that the span of these $n-1$ columns is given by the subspace $S \subset \Bbb R^n$, defined by $$ S = \{(x_1,\dots,x_n) : x_1 + \cdots + x_n = 0\}. $$ Indeed, it suffices to observe that the span of these columns is a subspace of $S$ and that the dimension of the span is equal to that of $S$. Now, we see that the remaining columns of the incidence matrix are each elements of $S$. Thus, the column span of the entire incidence matrix of $G$ is $S$, which means that this incidence matrix has rank $n-1$. Finally, consider the case of an arbitrary graph $G$. Let $G_1',\dots,G_k'$ denote the connected components of $G'$. Relabel the vertices so that the vertices of $G_1'$ come first, followed by the vertices of $G_2'$, and so forth. Similarly, relabel the edges so that the edges corresponding to $G_1'$ come first, followed by the edges of $G_2'$, and so forth. I claim that the incidence matrix of $(G,E)$ (under this relabeling) has the block-diagonal form $$ B = \pmatrix{B_1 \\ & B_2 \\ && \ddots \\ &&& B_k}, $$ where $B_j$ is the incidence matrix of $G_j$. Conclude that the rank of $B$ is the sum of the ranks of $B_1,\dots,B_k$, and is therefore given by $n - k$, where $k$ is the total number of connected components.
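As a quick sanity check of the tree case (with the convention that each column has $-1$ at the tail and $+1$ at the head of its edge): the directed path $1\to 2\to 3$ has incidence matrix $$ B = \pmatrix{-1 & 0 \\ 1 & -1 \\ 0 & 1}, $$ whose two columns are linearly independent and lie in $S$, so its rank is $2 = n - 1$.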
Exercise 6.21 Isaacs's Character theory of finite groups
I figured it out. With the same notation as in the question, we have the following: Consider $\psi^G$. Note that $\psi^{G}(1)=|G:H_{n-1}|\,\psi(1)\ge 2^{n}$. We argue by contradiction: suppose every irreducible character of $G$ has degree less than $2^{n}$. Let $\gamma$ be an irreducible constituent of $\psi^G$, that is, $[\psi^G,\gamma]_{G}\ge 1$; by Frobenius reciprocity, we have $$[\psi,\gamma|_{H_{n-1}}\ ]_{H_{n-1}}=[\psi^G,\gamma]_{G}\ge 1. $$ Clifford's theorem gives us $\gamma|_{H_{n-1}}=e\sum_{i=1}^t\psi^{(i)}$, where $e=[\psi^G,\gamma]_G$ and the $\psi^{(i)}$ denote the conjugates of $\psi$. So $\gamma(1)=et\psi(1)\ge et2^{n-1}$. But $\gamma(1)<2^n$, so $e=t=1$. Thus $\gamma|_{H_{n-1}}=\psi$. And we know that $\psi$ is extendible to $G$. Hence $\gamma\beta(1)\ge 2^{n}$ and $\gamma\beta\in \operatorname{Irr}(G)$ by Corollary 6.17, which is a contradiction.
Probability of Selecting 3 Letters from 7 choices
Your attempt would be right... if the actual question was "what are the chances of choosing DCA out of any random assortment of letters?" But you have to realize that the probability in your question is much higher, since the possibilities have been significantly reduced. First, since the first letter is a mandatory D, the question reduces to choosing a sequence of two letters out of a choice of six (A, B, C, E, F, G). Looking at the last two letters, the rest is pretty simple. There are ten possibilities: five with A at the front and each of the remaining five letters at the back, and five of the same in the reverse order. Since "CA" is only one of those ten possibilities, the answer is 1/10.
linear independence of functions
Notice that the sum needs to be the zero constant function. So: $$ \lambda_1 \frac{1}{1+x}+\cdots + \lambda_n \frac{1}{n+x} = 0 \Rightarrow \sum_{i=1}^n \lambda_i\prod_{j\neq i}(j+x) = 0 $$ Taking $x = -i$, we have $$ \lambda_i\prod_{j\neq i}(j-i) = 0 \Rightarrow \lambda_i = 0 $$ since $\prod_{j\neq i}(j-i) \neq 0$.
Cancelling while integrating
The problem occurs in dealing with the two terms in the middle; you can write them as $-\int u_{n}\frac{du_{n+1}}{dt}dt-\int u_{n+1}\frac{du_{n}}{dt}dt=-\int\big(u_{n}\frac{du_{n+1}}{dt}+u_{n+1}\frac{du_{n}}{dt}\big)dt=-\int\frac{d}{dt}(u_nu_{n+1})dt$ $=-u_{n}u_{n+1}+C$, using the product rule. For the answer to your original question, though, the formula $\int u_{n}\frac{du_{n}}{dt}dt=\int u_{n}du_{n}$, which we could also write as $\int f(t)f^{\prime}(t)dt=\int u\,du$, is correct, since it is just an application of $u$-substitution.
$\frac{d}{dt}$ represents an operator or infinitesimal change?
Interestingly enough, the answer is "both, kinda". When Leibniz invented the notation, he had in mind that $dx$ represented in a change in $x$ so small that it was insignificant compared to even the smallest real numbers. It is called an infinitesimal number. In his notion, $a\cdot dx$ was also infinitesimal for any real number $a$. In fact, there are only two things you can do with infinitesimals to make real numbers out of them. You can take the ratio of two of them (like $\frac{dy}{dx}$), and you can add an infinite number of them (like $\int x\cdot dx$). In that time, doing algebra with differentials was allowed. Flash forward to the nineteenth century, when the real numbers were being formally defined and calculus was becoming real analysis. It was decided at that point that a belief or non-belief in the existence of infinitesimals shouldn't impact your ability to do calculus, so all of the definitions became based on limits and were strictly proved. But we were used to Leibniz's notation, so we stuck with it. The downside of that is that we can't treat differentials as algebraic terms anymore, and if we want to use something like the Chain Rule, we have to prove it based on the limit-centric definitions instead of simply cancelling out differentials. In that sense, now you are right that $\frac{d}{dx}$ is an operator. That being said, there are a lot of people who "hack" calculus by leaning back on Leibniz's notions, and to be honest I don't know of any actual times when it gets you into trouble. Note that, as the links in my comments suggest, partial differentials don't behave well.
Finding parameters of Poisson LogNormal.
I finally found an answer to my question. Note the following: Def. A distribution with probability density function $f$ is said to be reproducible if the sum of two independent random variables $X_1$ and $X_2$, each with probability density function $f$, follows a distribution with probability density function of the same form as $f$ but with possibly different parameters. Prop. (Feller, 1943). The sum of two mixed Poisson variables ($MP(f)$) has an $MP(f)$ distribution if the distribution defined by $f$ is itself reproducible. In my case, the normal distribution (which underlies the log-normal distribution) is reproducible.
What kind of purpose do elementary matrices serve?
Here is an example. Given a square symmetric matrix $H,$ we can use elementary matrices to perform one step at a time to construct $P^T H P = D,$ where $D$ is diagonal and $\det P = \pm 1.$ As the inverse of an elementary matrix is another (evident) elementary matrix, we can also use these to construct $Q = P^{-1}$ a step at a time. What follows below is the way I like to display the algorithm discussed at http://math.stackexchange.com/questions/1388421/reference-for-linear-algebra-books-that-teach-reverse-hermite-method-for-symmetr $$ P^T H P = D $$ $$ Q^T D Q = H $$ $$ H = \left( \begin{array}{rrr} 0 & 2 & 3 \\ 2 & 1 & 5 \\ 3 & 5 & 10 \\ \end{array} \right) $$ ============================================== $$\left( \begin{array}{rrr} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) $$ $$ P = \left( \begin{array}{rrr} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) , \; \; \; Q = \left( \begin{array}{rrr} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) , \; \; \; D = \left( \begin{array}{rrr} 1 & 2 & 5 \\ 2 & 0 & 3 \\ 5 & 3 & 10 \\ \end{array} \right) $$ ============================================== $$\left( \begin{array}{rrr} 1 & - 2 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) $$ $$ P = \left( \begin{array}{rrr} 0 & 1 & 0 \\ 1 & - 2 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) , \; \; \; Q = \left( \begin{array}{rrr} 2 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) , \; \; \; D = \left( \begin{array}{rrr} 1 & 0 & 5 \\ 0 & - 4 & - 7 \\ 5 & - 7 & 10 \\ \end{array} \right) $$ ============================================== $$\left( \begin{array}{rrr} 1 & 0 & - 5 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) $$ $$ P = \left( \begin{array}{rrr} 0 & 1 & 0 \\ 1 & - 2 & - 5 \\ 0 & 0 & 1 \\ \end{array} \right) , \; \; \; Q = \left( \begin{array}{rrr} 2 & 1 & 5 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) , \; \; \; D = \left( \begin{array}{rrr} 1 & 0 & 0 \\ 0 & - 4 & - 7 \\ 0 & - 7 & - 15 \\ \end{array} \right) $$ ============================================== $$\left( \begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & - \frac{ 7 }{ 4 } \\ 0 & 0 & 1 \\ \end{array} \right) $$ $$ P = \left( \begin{array}{rrr} 0 & 1 & - \frac{ 7 }{ 4 } \\ 1 & - 2 & - \frac{ 3 }{ 2 } \\ 0 & 0 & 1 \\ \end{array} \right) , \; \; \; Q = \left( \begin{array}{rrr} 2 & 1 & 5 \\ 1 & 0 & \frac{ 7 }{ 4 } \\ 0 & 0 & 1 \\ \end{array} \right) , \; \; \; D = \left( \begin{array}{rrr} 1 & 0 & 0 \\ 0 & - 4 & 0 \\ 0 & 0 & - \frac{ 11 }{ 4 } \\ \end{array} \right) $$ ============================================== $$ P^T H P = D $$ $$\left( \begin{array}{rrr} 0 & 1 & 0 \\ 1 & - 2 & 0 \\ - \frac{ 7 }{ 4 } & - \frac{ 3 }{ 2 } & 1 \\ \end{array} \right) \left( \begin{array}{rrr} 0 & 2 & 3 \\ 2 & 1 & 5 \\ 3 & 5 & 10 \\ \end{array} \right) \left( \begin{array}{rrr} 0 & 1 & - \frac{ 7 }{ 4 } \\ 1 & - 2 & - \frac{ 3 }{ 2 } \\ 0 & 0 & 1 \\ \end{array} \right) = \left( \begin{array}{rrr} 1 & 0 & 0 \\ 0 & - 4 & 0 \\ 0 & 0 & - \frac{ 11 }{ 4 } \\ \end{array} \right) $$ $$ Q^T D Q = H $$ $$\left( \begin{array}{rrr} 2 & 1 & 0 \\ 1 & 0 & 0 \\ 5 & \frac{ 7 }{ 4 } & 1 \\ \end{array} \right) \left( \begin{array}{rrr} 1 & 0 & 0 \\ 0 & - 4 & 0 \\ 0 & 0 & - \frac{ 11 }{ 4 } \\ \end{array} \right) \left( \begin{array}{rrr} 2 & 1 & 5 \\ 1 & 0 & \frac{ 7 }{ 4 } \\ 0 & 0 & 1 \\ \end{array} \right) = \left( \begin{array}{rrr} 0 & 2 & 3 \\ 2 & 1 & 5 \\ 3 & 5 & 10 \\ \end{array} \right) $$
Doubt computing multiple integrals by Monte Carlo method
Let's say we want to estimate $I=\int_{x\in (0,1)} \int_{y\in (-1,1)} g (x,y)\, dy \, dx$. Generate $(x_1,y_1),\cdots ,(x_n,y_n)$ uniformly from $(0,1)\times (-1,1)$:

    # R code
    n <- 1000000
    x <- runif(n, 0, 1)
    y <- runif(n, -1, 1)

Calculate $g(x_1,y_1), \cdots , g(x_n,y_n)$ (here, as an example, $g(x,y)=x+y$):

    gxy <- x + y

The mean of $g(x_1,y_1), \cdots , g(x_n,y_n)$ estimates the average value of $g$ over the domain $D=(0,1)\times(-1,1)$, that is $I/|D|$, where $|D|=1\times 2=2$ is the area of $D$. So the estimator is $\hat{I}=\frac{|D|}{n} \sum_{i=1}^{n} g(x_i,y_i)$:

    mean(gxy)
    # [1] 0.4998572
    2 * mean(gxy)
    # [1] 0.9997144
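As a rough accuracy gauge (my addition, not part of the original recipe): the Monte Carlo standard error of $\hat I$ is $|D|\cdot\operatorname{sd}(g)/\sqrt n$:

    2 * sd(gxy) / sqrt(n)
    # about 0.0013, so 2*mean(gxy) is within a couple of standard errors of the exact value I = 1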
Should an open bounded set have a finite subcover? Shouldn't boundedness be the only criterion for having a finite subcover?
Consider the interval $(-1,1)$. Define a covering set $C=\{(-a,a)\mid 0<a<1\}$. Then no finite subset of $C$ covers $(-1,1)$: a finite subset has a largest value $a_0<1$, so its union is just $(-a_0,a_0)$, which misses all points $x$ with $a_0\le|x|<1$. So boundedness alone is not enough; you also need the set to be closed.
Why $\displaystyle\rho_{\alpha, \beta}(f)=\sup_{x\in\mathbb R^n}|x^\beta \partial^\alpha f(x)|$ is not a norm on $\mathcal{S}(\mathbb R^n)$?
The $\rho_{\alpha,\beta}$ are norms on $\mathcal{S}(\mathbb{R}^n)$. That they are seminorms is straightforward to verify, and $\rho_{\alpha,\beta}(f) = 0$ only for $f \equiv 0$ follows since $x^\beta$ is only zero on a nowhere-dense subset (the union of finitely many coordinate hyperplanes $\{x_i = 0\}$), so $\rho_{\alpha,\beta}(f) = 0 \Rightarrow \partial^\alpha f \equiv 0$, and that implies that $f$ is a polynomial, but the only polynomial in $\mathcal{S}(\mathbb{R}^n)$ is $0$. However, that these seminorms are in fact norms is not important. The topology one is interested in is generated by the family of all these (semi-) norms, so whether or not a single one of these is a norm doesn't matter. The topology induced by a finite subfamily of these seminorms is not as useful.
Is there a contradiction in these two definitions of limit Superior?
I hope that this makes things clear for you. Let $A$ denote a sequence $\left(a_{n}\right)$. Definition 1a) $\limsup A:=\lim_{n\rightarrow\infty}s_{n}$ where $s_{n}:=\sup\left\{ a_{k}\mid k\geq n\right\} $. The fact that the sequence $\left(s_{n}\right)$ is non-increasing makes it possible to define equivalently: Definition 1b) $\limsup A:=\inf\left\{ s_{n}\mid n\in\mathbb{N}\right\} $. Now define $V:=\left\{ v\in\mathbb{R}\mid\text{the set }\left\{ n\in\mathbb{N}\mid v<a_{n}\right\} \text{ is finite}\right\} $ and note: $$v\in V\iff\exists n\; s_{n}\leq v$$ This shows that $V=\bigcup_{n=1}^{\infty}\left[s_{n},\infty\right)$ and consequently $\inf V=\inf\left\{ s_{n}\mid n\in\mathbb{N}\right\} =\limsup A$. In the special case where $A$ is not bounded above, we have $s_n=+\infty$ for each $n$, so that $\limsup A=+\infty$. Secondly, we have $V=\emptyset$ and $\inf\emptyset:=+\infty$. So everything remains valid in that case.
Ring isomorphism between $H^*(G_n(\Bbb F^\infty); R)$ and Ring of characteristic classes
We see that the map $\Gamma:H^*(G_n(\mathbb{F}^\infty);R)\rightarrow \Lambda$ is injective by evaluating it on the universal example, namely $G_n(\mathbb{F}^\infty)$ itself. Thus if $k\in H^r(G_n(\mathbb{F}^\infty);R)$, then $\Gamma(k)$ is a natural transformation $Vect_n(-)\Rightarrow H^r(-;R)$, and the component we are interested in is the map $$\Gamma(k)_{G_n(\mathbb{F}^\infty)}:Vect_n(G_n(\mathbb{F}^\infty))\rightarrow H^r(G_n(\mathbb{F}^\infty);R).$$ Then if $\gamma_n$ denotes the universal $n$-plane bundle over $G_n(\mathbb{F}^\infty)$, it is an element of $Vect_n(G_n(\mathbb{F}^\infty))$ which we know to be classified by the identity map $id:G_n(\mathbb{F}^\infty)\rightarrow G_n(\mathbb{F}^\infty)$. Thus from the definition we have that $$\Gamma(k)_{G_n(\mathbb{F}^\infty)}[\gamma_n]=id^*k=k,$$ and clearly, if $h\in H^*(G_n(\mathbb{F}^\infty);R)$ is a second element, then $\Gamma(k)_{G_n(\mathbb{F}^\infty)}[\gamma_n]=\Gamma(h)_{G_n(\mathbb{F}^\infty)}[\gamma_n]$ if and only if $k=h$. It follows from this that the correspondence $\Gamma$ must be injective.
Why is this a valid proof for the harmonic series?
Consider the following summation: $$ A=1\cdot 1+\frac12\cdot1+\frac13\cdot1+\frac14\cdot1+... $$ That's a sum of the areas of an infinite number of rectangles of height $\frac1n$, where $n\in\mathbb{N}$, and width $1$. Do you agree that's nothing more than the harmonic series? Moreover, the area under the curve $f(x)=\frac{1}{x}$ from $1$ to $\infty$ is clearly less than $A$ and lies totally within $A$ (picture: the rectangles of heights $1,\frac12,\frac13,\dots$ circumscribing the graph of $y=1/x$). However, the area under the curve $f(x)=\frac{1}{x}$ from $1$ to $\infty$ doesn't add up to a finite number; it goes to infinity. What, then, should be happening with a region whose area is even larger than that? Obviously, it must also be infinite. Therefore, the harmonic series diverges.
Plus or minus? Is there a canonical orientation, like counterclockwise?
An orientation is a map that, given a basis of the tangent space, returns either $+1$ or $-1$. By the transversality assumption, the tangent space $T_Y(p)$ of $Y$ at the point $p$ of intersection equals the direct sum of the tangent spaces of $X$ and $Z$, i.e. $T_Y(p)=T_X(p)\oplus T_Z(p)$. Thus a positively oriented basis of $T_X(p)$ and a positively oriented basis of $T_Z(p)$ together give us a basis of $T_Y(p)$. The latter is either positively or negatively oriented, and this does not depend on the specific choice of bases in $T_X(p)$ and $T_Z(p)$. The default orientation on $\mathbb R^n$ is the one for which the standard basis $(e_1,\ldots , e_n)$ is positively oriented. In particular, for $\mathbb R^2$, the basis $(e_1,e_2)$ is positively oriented. With the usual conventions on how to visualize $\mathbb R^2$, $e_1$, and $e_2$, this means that counterclockwise order of basis vectors is positive orientation.
$\frac 00=$ ? Why is it unanswerable?
Because any number multiplied by $0$ gives $0$. So $a \times 0 = 0$ would make $a = \frac{0}{0}$ for any $a$ at all, and no single value can be assigned. So we simply say that $\frac 00$ is an indeterminate form.
standard representation for symmetric groups motivation
The permutation representation of $S_n$ on $\{1,\dots,n\}$ is very natural. Now, it is not irreducible: it is pretty obvious that it has a $1$-dimensional trivial representation inside. Moreover, it has a unique complement: this makes that complement quite a natural object! That complement is the standard representation.
Negative Binomial Distribution Question
Yes. $$\mathsf P(\{6\leq X\leq 10\}^\complement)=1-\mathsf P(6\leq X\leq 10)= 1-\mathsf P(X\leq 10)+\mathsf P(X\leq 5).$$ PS: Though the complement of "$\{6\leq X\leq 10\}$" is "$\{6\gt X\}{~\text{or}~}\{X>10\}$" rather than "and".
Given digits $2,2,3,3,4,4,4,4$ how many distinct $4$ digit numbers greater than $3000$ can be formed?
I will use the fundamental principle of counting to solve this question. Given the set of numbers $D = \{2, 2, 3, 3, 3, 4, 4, 4, 4\}$: here count(2's) = 2, count(3's) = 3, count(4's) = 4. We are required to construct 4-digit numbers greater than 3000. For easy visualisation, we will use dashes on paper: _ _ _ _. Case 1: The thousands place is 3. The remaining set is $D' = \{2, 2, 3, 3, 4, 4, 4, 4\}$. The rest of the 3 places could be filled in $3 \times 3 \times 3$ ways if we had at least 3 copies of each of the three distinct digits. But count(2's) = 2 and count(3's) = 2 in the new set, so the deficiency for each of these digits is 1 (the impossible numbers are 3222 and 3333). Therefore, ways to fill = $3 \times 3 \times 3 - (1+1) = 25$. Case 2: The thousands place is 4. The remaining set is $D' = \{2, 2, 3, 3, 3, 4, 4, 4\}$. Similarly, the only deficiency is in the digit 2, by 1 count (the impossible number is 4222). Therefore, ways to fill = $3 \times 3 \times 3 - 1 = 26$. Summing the cases up, we get $25 + 26 = 51$. Hope this helps.
Proving the convergence/divergence of a seemingly oscillating series
Let $b_n = \frac{1}{\ln{(\ln{(n)})}}$ for $n\ge 3$. Since $\ln(n)$ is increasing, we know $\ln{(\ln{(n)})}$ also increases, thus we have that $b_n = \frac{1}{\ln{(\ln{(n)})}}$ is positive and monotonically decreasing on $[3,\infty)$, and also $$\lim_{n\to \infty}b_n= \lim_{n\to \infty}\frac{1}{\ln{(\ln{(n)})}} = 0.$$ Thus, from Leibniz's test for alternating series, we know $\displaystyle \sum_{n=3}^\infty \frac{(-1)^{n-1}}{\ln{(\ln{(n)})}}$ converges.
Given $f :(a,b)\to\mathbb{R}$ is a monotone increasing function bounded above, show that $\lim_{x\to b^-} f(x)$ exists
An application of monotone sequences. Let $x_n = b -1/n$. Then $x _n \to b^-$. Since $f$ is increasing and bounded above, so is the sequence $(f(x_n))_1^\infty$. Therefore it converges. Let the limit be $A$. By definition, for each $\varepsilon > 0$, there is some $N \in \Bbb N^*$ s.t. $\vert f(x_n) - A\vert < \varepsilon$ whenever $n \geqslant N$. Therefore, for $\underline {\delta = b - x_N}$, whenever $\underline{x_N = b -\delta < x < b}$, there is an $M \in \Bbb N^*$ such that $M \geqslant N$ and $x_M \leqslant x \leqslant x_{M+1}$. Then by monotonicity, $f(x_M) \leqslant f(x) \leqslant f(x_{M+1})$, and $$ \underline {- \varepsilon <}\ f(x_M) - A \leqslant \underline {f(x) - A} \leqslant f(x_{M+1}) - A\ \underline { < \varepsilon}\ , $$ i.e. $$ \lim_{ x \to b^-} f(x) = A. $$
If $(\cos x)f'(x)\leq (\sin x-\cos x)f(x)$ for all $x\geq 0$, can it be said that $f(x)$ is constant?
As you correctly said, the condition is equivalent to $h(x) = e^x \cos(x) f(x)$ being (weakly) decreasing on $[0, \infty)$. Since $h(\pi/2 + k\pi) = 0$ for all non-negative integers $k$, it follows that $h$ is zero for $x \ge \pi/2$, which in turn implies that $f$ is zero for $x \ge \pi/2$. We can also conclude that if $f(x_0) = 0$ for some $x_0 \ge 0$ then $f(x) = 0$ for all $x \ge x_0$. But $f$ can be non-zero on an initial interval: If we choose an arbitrary twice differentiable function $h$ which is strictly decreasing on $[0, \pi/2]$ with $h(\pi/2) = h'(\pi/2) = h''(\pi/2) = 0$ then $$ f(x) = \begin{cases} \frac{h(x)}{e^x \cos(x)} & 0 \le x < \frac \pi 2 \\ 0 & x \ge \frac \pi 2 \end{cases} $$ is differentiable, satisfies $\frac{d}{dx}(e^x\cos xf(x))\leq 0$ for all $x \ge 0$, but is non-zero on $[0, \pi/2)$.
Relationship between a closed operator and an operator with dense image.
The answer to your question is no. You can take any non-closed operator $T$ and restrict its codomain to the closure of $\text{ran}\,(T)$. The image of this new operator is dense, and it is easy to verify (according to your definition) that it remains non-closed.
alternative way of proving $\frac{1}{n}+\frac{1}{n+1}+\frac{1}{n+2}+...+\frac{1}{2n}$ converges to $\ln 2$ without using integrals
Using the inequality $$\frac{1}{n+1}<\ln\Big(\frac{n+1}{n}\Big)<\frac{1}{n}$$ (this follows from $\frac{1}{n+1}\cdot 1<\int_{n}^{n+1}\frac{1}{x}\,dx<\frac{1}{n}\cdot 1$; think of the areas graphically), we can write $$\ln\Big(\frac{n+1}{n}\Big)<\frac{1}{n}<\frac{1}{n-1}<\ln\Big(\frac{n-1}{n-2}\Big).$$ Then sum from $n$ to $2n$: $$\sum_{i=0}^{n}\ln\Big(\frac{n+i+1}{n+i}\Big)<\sum_{i=0}^{n}\frac{1}{n+i}<\sum_{i=0}^{n}\ln\Big(\frac{n+i-1}{n+i-2}\Big).$$ The two outer sums telescope: $$\ln\Big(\frac{n+1}{n}\cdot\frac{n+2}{n+1}\cdots\frac{2n+1}{2n}\Big)<\frac{1}{n}+\frac{1}{n+1}+\cdots+\frac{1}{2n}<\ln\Big(\frac{n-1}{n-2}\cdot\frac{n}{n-1}\cdots\frac{2n-1}{2n-2}\Big),$$ i.e. $$\ln\Big(\frac{2n+1}{n}\Big)<\frac{1}{n}+\frac{1}{n+1}+\cdots+\frac{1}{2n}<\ln\Big(\frac{2n-1}{n-2}\Big).$$ Now take $\lim_{n\to\infty}$: $$\lim_{n\to\infty}\ln\Big(2+\frac{1}{n}\Big)\le\lim_{n\to\infty}\Big(\frac{1}{n}+\frac{1}{n+1}+\cdots+\frac{1}{2n}\Big)\le\lim_{n\to\infty}\ln\Big(\frac{2-\frac{1}{n}}{1-\frac{2}{n}}\Big).$$ You will see by the sandwich theorem that the value comes out to be $\ln 2$.
A question about the regular languages being closed under Boolean operation (how to generalize)
As you noted, regular languages are closed under arbitrary finite Boolean operations, which gives the following easy corollary: Given $n$ regular languages $L_1 , \ldots , L_n$ and any function $f : \mathcal{P} ( \{ 1 , \ldots , n \} ) \to \{ 0 , 1 \}$ the language $L$ given by $w \in L$ iff $f ( \{ i \leq n : w \in L_i \} ) = 1$ is also regular. (To see this, note -- using the notation in the OP -- that all languages of the form $L_1^{x_1} \cap \cdots \cap L_n^{x_n}$, where the $x_i$ are in $\{0,1\}$, are regular. Given $f$ as above, let $( x_{1,1} , \ldots , x_{1,n} )$, $\ldots$, $( x_{k,1} , \ldots , x_{k,n} )$ enumerate the $n$-tuples in the pre-image of $1$. Then the language in question is $$\bigcup_{j=1}^k ( L_1^{x_{j,1}} \cap \cdots \cap L_n^{x_{j,n}} ),$$ which is regular, being a finite union of regular languages.) Without going into other operations (concatenation, Kleene-star, etc.), I'm not certain if anything more general can be said.
explanation about $ \iint_B \frac{xy(x^2-y^2)}{(x^2+y^2)^3}dB$
Use polar coordinates: $x=r\cos\theta$, $y=r\sin\theta$. Then $$\frac{xy(x^2-y^2)}{(x^2+y^2)^3}\,dx\,dy=\frac{1}{2}\frac{r^4\sin 2\theta(\cos^2\theta-\sin^2\theta)}{r^6}\,r\,dr\, d\theta=\frac{f(\theta)}{r}\,dr\, d\theta$$ for some continuous function $f(\theta)$. Since the triangle meets every neighbourhood of $r=0$, you see that the function is not integrable, because $1/r$ is not integrable in a neighbourhood of zero.
How come the Pauli Matrices are the generators of SU(2)
Pauli matrices are not generators of the Lie group $SU(2)$, but after multiplication by$~\mathbf i$ they become the so-called infinitesimal generators. These are elements of the Lie algebra $\mathfrak{su}(2)$ of $SU(2)$, and for such elements $X$ the exponential mapping $t\mapsto\exp(tX)$ gives a one-parameter subgroup of the Lie group. In view of this relation, the determinant of the Pauli matrices is not important; what is relevant is that they are Hermitian (so after multiplication by$~\mathbf i$ they become anti-Hermitian, and the exponential mapping gives unitary matrices) and have trace zero (so that the exponential mapping gives matrices of determinant$~1$). No countable set of group elements could generate the uncountable group $SU(2)$, but the one-parameter subgroups for the Pauli matrices do generate $SU(2)$. Maybe more pertinent is the fact that the Pauli matrices generate the real vector space of traceless Hermitian matrices. Thus every element of $SU(2)$ is the image under the exponential mapping of a real linear combination of the multiples by $~\mathbf i$ of the Pauli matrices (although the fact that $SU(2)$ is connected and compact is instrumental for this fact).
$2$ Dice are rolled, 1 red and 1 blue.
Using counterprobabilities, one obtains $$1-\underbrace{((\underbrace{1-\frac{1}{6}}_{\text{prob. of not rolling a }6\text{ with red die }})(\underbrace{1-\frac{1}{6}}_{\text{prob. of not rolling a }6\text{ with blue die }}))}_{\text{prob. of not rolling a $6$ with any die}}=\frac{11}{36}.$$
Are the torsion elements dense in every compact Lie group?
Yes. Here's a sketch: If $G$ is a torus, the set of torsion points of $G$ is dense in $G$. If $H$ is the closure in $G$ of an arbitrary one-parameter subgroup, then (since $G$ is compact) $H$ is a torus. This means the set of torsion points of the subgroup $H$ is (already) dense in $H$. (Of course, it may be that a one-parameter subgroup itself contains only one torsion point!) Since $G$ is compact and connected, every element of $G$ lies in a one-parameter subgroup.
Find total number of ways to disconnect the following graph
The minimum number of edges that need to be removed to disconnect the graph is $3$, and there are just $4$ ways to do this: remove all of the edges incident at one vertex of degree $3$. There are $9$ ways to remove a set $E$ of edges so that (a) the resulting graph is disconnected, and (b) removing any proper subset of $E$ leaves a connected graph. However, $5$ of these require the removal of $4$ edges, not $3$: $$\begin{align*} &ac,bc,dc,ed\\ &ad,ac,cb,be\\ &ad,dc,ce,eb\\ &ab,bc,ce,ed\\ &ba,ac,cd,de \end{align*}$$ To verify that there really are only $4$ possibilities when you remove just $3$ edges, suppose that you remove $3$ edges, but not all $3$ of the edges incident at any of the corner vertices. If all three of the removed edges are incident at $c$, it’s easy to see that the resulting graph is still connected. If two of them are incident at $c$, removing those $2$ leaves a graph consisting of two cycles that share either one or two edges, depending on which two edges incident at $c$ were removed, and removing one more edge cannot disconnect such a graph. If one of them is incident at $c$, without loss of generality suppose that it’s $ac$. Then at least one of $ab$ and $ad$ was not removed; without loss of generality we may assume that $ab$ is still present. The edges $ab,cd,ce$, and $cb$ are then all present and are sufficient to connect the graph. If none of them is incident at $c$, the $4$ edges incident at $c$ are sufficient to connect what remains.
If a prime number $p$ satisfies $\gcd(a,p−1) = 1$, then for every integer $b$ the congruence relation $x^a \equiv b \pmod{p}$ has a solution.
Using discrete logarithms with respect to a primitive root $g\pmod p$, the congruence becomes $$a\cdot\operatorname{ind}_gx\equiv\operatorname{ind}_gb\pmod{p-1}.$$ By the linear congruence theorem, since $d=(a,p-1)=1$ always divides $\operatorname{ind}_gb$, this linear congruence has exactly $d=1$ solution for $\operatorname{ind}_gx$, and hence $x^a\equiv b\pmod p$ has exactly one solution.
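A small worked example (my own, for illustration): take $p=7$, $a=5$ (so that $(5,6)=1$), $b=3$, and primitive root $g=3$. Then $$5\cdot\operatorname{ind}_3x\equiv\operatorname{ind}_33=1\pmod 6\implies\operatorname{ind}_3x\equiv5\pmod 6\implies x\equiv3^5\equiv5\pmod 7,$$ and indeed $5^5=3125\equiv3\pmod 7$.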
How does Gauss's lemma follow from Nagata's lemma?
By Gauss's lemma he means the fact that if $A$ is factorial, then so is $A[T]$. Samuel explains the proof in his paper: let $S=A\setminus\{0\}\subset A[T]$. Then $S$ is generated by prime elements, and $S^{-1}A[T]=\mathrm{Frac}(A)[T]$ is a PID, hence factorial. Thus, by Nagata's lemma, $A[T]$ is itself factorial.
Laplace transform of a convolution
As @Dylan proposed, you simply need to multiply the Laplace transforms of the individual functions: $$\mathcal{L}\{\exp(t)*\exp(t)\cos t \}=\mathcal{L}\{\exp(t)\}\,\mathcal{L}\{\exp(t)\cos t \}=\dfrac{1}{s-1}\cdot\dfrac{s-1}{(s-1)^2+1}=\dfrac{1}{(s-1)^2+1}$$ In control theory, this cancellation of an unstable pole-zero pair should not be carried out.
how to simplify this moment generating function
If $\mathsf E(e^{tX})=M_X(t)$ then $\mathsf E(e^{tXab})=M_X(tab)$. Now $M_X(t)=\tfrac{1}{2t}(e^t-e^{-t})$, so $M_X(tab)=\ldots$
Prove that $\mathbf{x^b}\in \langle\mathbf{x^{a_1},...,x^{a_k}} \rangle\iff \exists j\in \{1,...,k\}:\mathbf{x^{a_j}\mid x^b} $
Once you have the equality $$x^b = f_1 x^{a_1} + \cdots + f_n x^{a_n}$$ (I won't bother putting the bold font), you know that $x^b$ must occur as a monomial in the expression $$f_1 x^{a_1} + \cdots + f_n x^{a_n}.$$ But all monomials in this expression occur as multiples of some $x^{a_i}$.
Picard group of a Torus
From what you have, you see that there is a short exact sequence of abelian groups $0\to\mathbb C/\Lambda\to \operatorname{Pic}\to\mathbb Z\to 0$, and since $\mathbb Z$ is a free abelian group that sequence splits, so that $\operatorname{Pic}$ is isomorphic to $\mathbb C/\Lambda\oplus \mathbb Z$. In terms of this isomorphism, the maps in the sequence are the obvious ones.
Asymptotics for the tail of $L_p$ norms
I don't think you can get $C_f$ to depend only on the norms of $f$, because $f(\cdot - y)$ has the same $L^1$ and $L^p$ norms as $f$. I think you can use this idea to find a counterexample to your conjecture: let $g$ be a function whose support is in the unit ball, and whose $L^1$ and $L^p$ norms are non-zero. Let $a_n > 0$ be a summable sequence. Pick $y_n \in \mathbb R^d$ to be a sequence with $|y_n - y_m| > 2$. Then by adjusting $a_n$ and $y_n$, you should be able to find an $f(x) = \sum_n a_n g(x - y_n)$ such that $$ \sup_{R>0} {\|f \chi_{|x|>R}\|}_p R^{1/p'} = \infty .$$
Galois group action on etale cohomology groups
Here is an answer that's basically transcribed from a comment by Keerthi Madapusi Pera on this MO question of mine (where I was asking what happens when the properness assumption is removed). Let $f: X \to \operatorname{Spec} \mathbf{Z}_p$ be the structure map. Then we can form the sheaf $R^i f_* \mathbf{Z}_\ell$ on $\operatorname{Spec} \mathbf{Z}_p$, and since $f$ is a smooth proper map by hypothesis, the proper base-change theorem tells us that $R^i f_* \mathbf{Z}_\ell$ is a locally constant sheaf. The fibre of this sheaf at a generic point $\overline{x}$ is $H^i(X_{\overline{\mathbf{Q}}_p}, \mathbf{Z}_\ell)$; and since it is locally constant, the action of Galois on the generic fibre must factor through $\pi_1(\operatorname{Spec} \mathbf{Z}_p, \overline{x})$, which is the Galois group of the maximal unramified extension of $\mathbf{Q}_p$.
Joint continuity of bilinear pairing (with unusual topology)
No, whenever $V$ is infinite dimensional the evaluation isn't continuous with respect to the product of the weak$^*$ and the finest locally convex topology. Indeed, continuity at $0$ would give a finite set $E\subseteq V$, $\varepsilon >0$, and a $0$-neighbourhood $U$ in the finest locally convex topology such that $|f(u)|\le 1$ whenever $|f(e)|\le \varepsilon$ for all $e\in E$ and $u\in U$. Given $v\in V$ not contained in the linear span of $E$ you can choose $\delta>0$ such that $\delta v \in U$ and a linear functional $f$ which vanishes on $E$ with $|f(v)|\ge 2/\delta$ which yields a contradiction.
Specifying domain range of polar coordinates
As you've said: $$x= r\cos\alpha,$$ $$y= r\sin\alpha$$ (you also have $r^2 = x^2 +y^2$). So if you plug this in, you have $$ r\sin\alpha = 2r\cos\alpha +3, $$ $$ r(\sin\alpha - 2\cos\alpha) = 3 \Longrightarrow r(\alpha) = \frac{3}{\sin\alpha - 2\cos\alpha}. $$ Now, for the range of $\alpha$. (figure: plot of the line $y=2x+3$) You can see that when you consider a very distant point on the line, the angle of the ray from the origin to that point approaches some quantity $\alpha_0$ degrees. If you consider a distant point in the other direction, the angle approaches $\alpha_1 = 180^\circ + \alpha_0$. (figure: the same line, with the limiting angle $\alpha_0$ marked) Overall, the range is $r \in [r_0, \infty)$ and $\alpha \in (\alpha_0, \alpha_1)$, where for $r_0$ you need to find the closest point on the line to the origin. If you do the calculations, you get $\alpha_0 = \arctan(2)$ (the ratio of $y$ to $x$ approaches $2$ as we look at distant points) and $r_0 = \frac{3}{\sqrt 5}$ (the distance from the origin to the line $2x - y + 3 = 0$). To reiterate my comment: the range of $r$ and $\alpha$ consists of the values of $r$ and $\alpha$ that you would have to plug into the equation of the line $r(\alpha) = f(\alpha)$ in order to trace out the line. For example, $r = 0$ is never used, so it is not in the range; the same goes for $\alpha = 250^\circ$ or $\alpha = 0$.
What can go wrong with series solutution - conflicting answers for two different but seemingly equally valid methods
In method II, note that the term for $n=-1$ is zero regardless of the value of $a_{-1}$. So $a_{-1}=B$, where $B$ is an arbitrary constant, and together with what you already had, this recovers the answer from method I.
Discussion of the curve x^2+y^2+1 over $\mathbb{R}$
I think you should write $\mathbb{R}[x,y]/(f)$ rather than $\backslash$, since I think the former is the common notation for quotients. For any integral domain you can take its field of fractions. Here if you take the field of fractions of $\mathbb{R}[x,y]/(f)$ you will get a field of transcendence degree $1$ over $\mathbb{R}$. Geometrically the transcendence degree corresponds to the dimension of your variety. In this case you have a curve so it is $1$. The closed points of $\mathrm{Spec}(A)$ are its maximal ideals. In this case the maximal ideals of $\mathbb{R}[x,y]/(x^2+y^2+1)$ are the maximal ideals of $\mathbb{R}[x,y]$ containing the ideal $(x^2+y^2+1)$. Note that the Galois action sends $z\mapsto \bar{z}$. If you identify $\mathbb{C}$ with $\mathbb{R}[x]/(x^2+1)$, then your tensor product there is given by $\mathbb{C} \otimes_{\mathbb{R}} \mathbb{C} \cong \mathbb{C}[x]/(x^2+1) \cong \mathbb{C}[x]/(x+i) \oplus \mathbb{C}[x]/(x-i) \cong \mathbb{C} \oplus \mathbb{C}$. Explicitly, this maps the element $a+bx \mapsto (a-bi,a+bi)$. Your Galois action acts by mapping $x\mapsto -x$, and you can see that under the identification this corresponds to interchanging the two components.
Formula for Contractor's Hourly Rate
Assuming you want a linear relationship between hours worked and multiplier, you have two data points: $(20,2)$ and $(480,1)$. The line through these two points is given by $m=-h/460+2\frac{1}{23}$, where $m$ is the multiplier and $h$ is the number of hours.
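As a minimal sketch in code (exact rational arithmetic, so the two data points come out exactly):

```python
from fractions import Fraction

def multiplier(hours):
    # line through (20, 2) and (480, 1):  m = -h/460 + 2 + 1/23
    return Fraction(-hours, 460) + 2 + Fraction(1, 23)

print(multiplier(20), multiplier(480), multiplier(250))  # 2, 1, 3/2
```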
How to find a vector in nD that is perpendicular to (n-1) linearly independent vectors.
As you've noted, the problem can be reduced to finding the null space of a matrix. But perhaps to others it is not obvious why this is so. Let $v_1,\ldots,v_{n-1},x\in\mathbb{R}^n$ be column vectors, and consider: $$ A = \begin{bmatrix}v_1^T\\\vdots\\v_{n-1}^T\end{bmatrix} \;\;\;\implies\;\;\; Ax =\begin{bmatrix}v_1^T x\\\vdots\\v_{n-1}^T x\end{bmatrix} =\begin{bmatrix}v_1\cdot x\\\vdots\\v_{n-1}\cdot x\end{bmatrix} $$ So the equation $Ax=\vec{0}$ enforces exactly $v_i\cdot x=0$ for each $i$, meaning $x$ is orthogonal to the rest of the set; finding the null space of $A$ solves the problem. It may also be worth mentioning the Gram-Schmidt procedure, which is a little more general. It gives you an easy way to construct a vector $v_n$ orthogonal to a set of orthonormal vectors $v_1,\ldots,v_{n-1}$: just choose a random vector $r\in\mathbb{R}^n$ and take $$ v_n = r - \sum_{k=1}^{n-1} \frac{v_k\cdot r}{v_k\cdot v_k}v_k $$ (normalize $v_n$ afterwards if you want a unit vector). If $r$ turns out to be linearly dependent on the $v_k$, so that $v_n=0$, just choose a different $r$. (The probability of such an event is extraordinarily low; theoretically, zero in fact.)
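As a concrete sketch (numpy; the helper name `perp` is mine): for a full-rank $(n-1)\times n$ matrix, the last right-singular vector spans the null space.

```python
import numpy as np

def perp(V):
    # rows of V are the n-1 linearly independent vectors; returns x with V @ x = 0
    _, _, Vt = np.linalg.svd(V)
    return Vt[-1]                 # last right-singular vector spans the null space

V = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
x = perp(V)
print(x, V @ x)                   # x = ±e_3, residuals ~ 0
```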
Intersection of two subgroups with given information
Lagrange is the way to go. Note that $H\cap K$ is a subgroup of $H$ and $K$, so its order must divide $|H|$ as well as $|K|$...
Integrating function with trigonometric identities, substitution and parts
Hints: A) Bioche's rules say one should use the substitution $$t=\tan 3x,\qquad \mathrm d t=3(1+t^2)\,\mathrm dx. $$ This leads to the integral $$\frac13\int\frac{\mathrm dt}{t^4(1+t^2)},$$ which you should decompose into partial fractions. Note that, as the fraction is even in $t$, the decomposition will have the form $$\frac{1}{t^4(1+t^2)}=\frac A{t^2}+\frac B{t^4}+\frac C{1+t^2}.$$ B) Use the substitution $$u=\sqrt{2x}\iff u^2=2x, \qquad 2u\,\mathrm du=2\,\mathrm dx.$$ C) Substitution $\; t=\sqrt[3]x\iff t^3=x$, $\quad3t^2\,\mathrm dt=\mathrm dx$.
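For A), sympy confirms the decomposition (so $A=-1$, $B=1$, $C=1$ in the notation above):

```python
import sympy as sp

t = sp.symbols('t')
print(sp.apart(1/(t**4*(1 + t**2))))  # -1/t**2 + 1/t**4 + 1/(t**2 + 1)
```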
Gauss curvature of conformal metrics
So it turns out I was just doing the computation wrong the whole time. Anyway, I finally got it and figured I would post my solution here as well. We use orthogonal local coordinates around $p\in M$ so that $g=E(u,v)du^2+G(u,v) dv^2$ and $h=(e^{2w} E) du^2+(e^{2w} G) dv^2$. We use a well-known formula for Gauss curvature in orthogonal coordinates $$ K=-\frac{1}{2\sqrt{EG}}\left(\frac{\partial}{\partial u}\frac{G_u}{\sqrt{EG}}+\frac{\partial}{\partial v}\frac{E_v}{\sqrt{EG}}\right). $$ So \begin{align*} K_h&=-\frac{1}{2\sqrt{(e^{2w}E)(e^{2w}G)}}\left(\frac{\partial}{\partial u}\frac{(e^{2w} G)_u}{\sqrt{(e^{2w}E)(e^{2w}G)}}+\frac{\partial}{\partial v}\frac{(e^{2w}E)_v}{\sqrt{(e^{2w}E)(e^{2w}G)}}\right) \\&= -e^{-2w}\frac{1}{2\sqrt{EG}}\left(\frac{\partial}{\partial u}\frac{2w_uG+G_u}{\sqrt{EG}}+\frac{\partial}{\partial v}\frac{2w_vE+E_v}{\sqrt{EG}}\right)\\&= e^{-2w}K_g-e^{-2w}\frac{1}{\sqrt{EG}}\left(\frac{\partial}{\partial u}\frac{w_uG}{\sqrt{EG}}+\frac{\partial}{\partial v}\frac{w_vE}{\sqrt{EG}}\right). \end{align*} Expanding one of these derivative terms yields \begin{align*} \frac{\partial}{\partial u}\frac{w_uG}{\sqrt{EG}}&=\frac{\sqrt{EG}(w_{uu} G+w_u G_u)-w_u G \frac{1}{2\sqrt{EG}}(EG)_u}{EG}\\&= \frac{w_{uu}G}{\sqrt{EG}}+\frac{w_{u}G_u}{\sqrt{EG}}-\frac{w_uG_u}{2\sqrt{EG}}-\frac{w_uE_u G}{2E\sqrt{EG}}\\&= \frac{w_{uu}G}{\sqrt{EG}}+\frac{w_{u}G_u}{2\sqrt{EG}}-\frac{w_uE_u G}{2E\sqrt{EG}}. \end{align*} The other term is symmetric. So \begin{align*} \frac{1}{\sqrt{EG}}\left(\frac{\partial}{\partial u}\frac{w_uG}{\sqrt{EG}}+\frac{\partial}{\partial v}\frac{w_vE}{\sqrt{EG}}\right)&= \frac{w_{uu}}{E}+\frac{w_uG_u}{2EG}-\frac{w_uE_u }{2E^2}+\frac{w_{vv}}{G}+\frac{w_vE_v}{2EG}-\frac{w_vG_v }{2G^2}\\&= g^{uu}w_{uu}-g^{vv}\Gamma_{vv}^u w_u-g^{uu}\Gamma_{uu}^u w_u+g^{vv}w_{vv}-g^{uu}\Gamma_{uu}^v w_v-g^{vv}\Gamma_{vv}^v w_v\\&= g^{ij}w_{ij}-g^{ij} \Gamma_{ij}^k \partial_k w=\text{tr}_g (\nabla^2 w)=\Delta_g w. \end{align*} Plugging this back in above, we have $K_h=e^{-2w}(K_g-\Delta_g w)$ as desired.
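As a sanity check of the final identity, here is a sympy computation in the special case where $g$ is flat ($E=G=1$), so that $h=e^{2w}(du^2+dv^2)$ and the identity reduces to $K_h=-e^{-2w}\Delta w$:

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)
w = sp.Function('w')(u, v)

E = G = sp.exp(2*w)          # components of h = e^{2w}(du^2 + dv^2)
sqrtEG = sp.exp(2*w)         # sqrt(E*G), written explicitly

# Gauss curvature of h via the orthogonal-coordinates formula
K_h = -sp.Rational(1, 2)/sqrtEG*(
        sp.diff(sp.diff(G, u)/sqrtEG, u)
      + sp.diff(sp.diff(E, v)/sqrtEG, v))

flat_lap = sp.diff(w, u, 2) + sp.diff(w, v, 2)   # Delta_g w for flat g
print(sp.simplify(K_h + sp.exp(-2*w)*flat_lap))  # 0
```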
Calculating the exact value of this infinite series
It's the sum $\sum_{n\ge 0}\left(\frac34\right)^n+\sum_{n\ge 0}\left(\frac14\right)^n$. Each of those is a geometric series, so it equals $$\frac1{1-3/4} + \frac1{1-1/4} = 4 + \frac43 = \frac{16}3.$$
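Assuming the series in question is $\sum_{n\ge0}\frac{3^n+1}{4^n}$ (so that it splits as above), a quick numerical check:

```python
print(sum((3**n + 1) / 4**n for n in range(60)), 16/3)  # both ~5.33333
```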
Every group with 1990 elements is solvable
Hint: what do you know about groups of order $10$?
Integration by substitution, why do we change the limits?
Because the function has changed. Let's do an example: $$\int_{-1}^1 x\,dx =0$$ because the integrand is odd and the interval is symmetric (you can also check directly). Let's do a simple $u=x+1$ so that $du=dx$ then the right way we have: $$\int_0^2 u-1\,du = {1\over 2}(u-1)^2\bigg|_0^2={1\over 2}-{1\over 2}=0$$ but if we fail to change the limits: $$\int_{-1}^1(u-1)\,du = {1\over 2}(u-1)^2\bigg|_{-1}^1=-2$$ The underlying reason is that integration comes from Riemann sums, the function values depend on the interval of integration. When you change the interval, the heights of the rectangles you use in the definition change (remember the heights are the function values) so that you end up adding up different things if you don't change the function to compensate.
Perfect square test for polynomial $(x^2+p)^2 + p^2$
We have $$ f_p(p)=(p^2+p)^2+p^2=p^2[(p+1)^2+1]. $$ This is a perfect square iff $m=(p+1)^2+1$ is a square. But $m-1=(p+1)^2$ is already a perfect square, and the only two consecutive perfect squares are $0$ and $1$; that would force $(p+1)^2=0$, i.e. $p=-1$, which was excluded.
Bounds on how far away most leaves are from the average height of a binary tree
The tree shown below is a counterexample to the current version of the conjecture.

        o
       / \
      o   o
     / \
    o   o

Here $h=2$, $t=\frac15(1\cdot0+2\cdot1+2\cdot2)=\frac65$, and $\ell=3$. Take $k=\frac85$; then $\frac{\ell}k=\frac{15}8<2$, but there are $2$ leaves of height $2>\frac{48}{25}=\frac65\cdot\frac85=kt$.
Applications of the Hurwitz Theorem on Number of Automorphisms?
You might know the geometric reformulation of the Hurwitz theorem, which says that there exists a unique compact, connected hyperbolic 2-orbifold $P_{237}$ of minimum area, namely the $(2,3,7)$ triangle reflection orbifold. The connection between the version you state and the geometric version is that the quotient space $S / \text{Aut}(S)$ is a compact, connected, hyperbolic 2-orbifold, and the quotient map $S \to S / \text{Aut}(S)$ is an orbifold covering map of degree equal to $|\text{Aut}(S)|$, so $$\text{Area}(P_{237}) \le \text{Area}(S) \, / \, |\text{Aut}(S)| $$ $$|\text{Aut}(S)| \le \frac{\text{Area}(S)}{\text{Area}(P_{237})} $$ Those areas can be computed using the Gauss-Bonnet formula, which yields the Hurwitz theorem. This geometric version has applications to the classification of Fuchsian groups up to isomorphism. For instance: there are only finitely many isomorphism classes of Fuchsian groups $K$ such that $\pi_1(S)$ is isomorphic to a finite index subgroup of $K$, because the index $[K:\pi_1(S)]$ is bounded by the same constant $84(g-1)$.
If T is an infinite subset of $\mathbb{N}$ show that there is a 1-1 mapping of T onto $\mathbb{N}$
It’s a little easier, I think, to define a bijection $f$ from $\Bbb N$ onto $T$ and use its inverse. Define it recursively: $f(0)=\min T$, and if $f(k)$ has been defined for all $k<n$, let $$f(n)=\min\Big(T\setminus\{f(k):k<n\}\Big)\;.$$ If you think a bit about what this construction is doing, you should be able to see that it must yield a bijection, though you may still struggle a bit to write down a proof. Since the construction is recursive, try a proof by induction that the resulting function is onto.
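If it helps to see the construction run, here is a sketch in code for a decidable $T$ given as a membership predicate (the helper name `f_values` is mine): since each step takes the least unused element of $T$, the function simply lists $T$ in increasing order.

```python
def f_values(in_T, how_many):
    # f(n) = min(T \ {f(k) : k < n}); for T a subset of N this just
    # enumerates T in increasing order
    values, n = [], 0
    while len(values) < how_many:
        if in_T(n):
            values.append(n)
        n += 1
    return values

# T = the even numbers
print(f_values(lambda n: n % 2 == 0, 8))  # [0, 2, 4, 6, 8, 10, 12, 14]
```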
Generalisation of an Interesting Problem - Probability
Fix one of the $n$ points and consider the semicircle that begins at it (going, say, clockwise). Each newly added point lands in that semicircle with probability $1/2$, independently, so all of the other $n-1$ points lie in it with probability $1/2^{n-1}$. There are $n$ possible starting points, and these $n$ events are mutually exclusive (at most one point can be the counterclockwise-most point of a semicircle containing them all), so the total probability is $n\cdot\frac{1}{2^{n-1}}=\frac{2n}{2^n}$.
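A quick Monte Carlo sanity check of $n/2^{n-1}$ (a sketch; all $n$ points lie in some semicircle iff the largest gap between circularly consecutive angles is at least $\pi$):

```python
import random, math

def p_semicircle(n, trials=200_000):
    hits = 0
    for _ in range(trials):
        a = sorted(random.uniform(0, 2*math.pi) for _ in range(n))
        gaps = [a[(i + 1) % n] - a[i] for i in range(n)]
        gaps[-1] += 2*math.pi            # wrap-around gap
        hits += max(gaps) >= math.pi     # some semicircle contains all points
    return hits / trials

for n in (2, 3, 4, 5):
    print(n, p_semicircle(n), n / 2**(n - 1))
```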
The general way to show completeness of a theory
There is no "general way" to show the completeness of an arbitrary theory. (I think this is most likely a provable statement: you should be able to reduce the halting problem to deciding the completeness of a recursively axiomatized theory, although I don't feel confident enough in recursion theory to say that with certainty.) The usual way of proving completeness is via quantifier elimination in a language we understand (with the caveat that we can only expand the language by symbols definable in the theory). For instance, DLO has quantifier elimination in the usual language $<$. DLO with the left endpoint has q.e. once you add a constant symbol for the endpoint, etc. Once you have quantifier elimination, completeness is typically immediate (it is enough to show that quantifier-free sentences are decided).
Get unbiased estimator for $2 \theta$
"Is there a simpler, more straightforward way to get the expected value $\mathbb{E}[X^r]$?" Let's consider the likelihood score and let $T=\hat{\theta}$: $$l^*(\theta)=-\frac{n}{\theta}+\frac{\sum_x x^r}{\theta^2}$$ It is well known (first Bartlett identity) that the expectation of the score is zero, so $$\mathbb{E}\left[-\frac{n}{\theta}+\frac{nT}{\theta^2}\right]=0$$ Thus immediately you get that $$\mathbb{E}[T]=\theta$$ ... the rest is self-evident
Proving the cardinality of finite sets
Method 1: Construct a bijection. Define $\phi\colon S^T\times S^U\to S^{T\cup U}$ by $\phi((f,g))=h$ where $h\colon T\cup U\to S$ is defined piecewise by, $$h(x)=\begin{cases}f(x)~\forall~x\in T\\ g(x)~\forall~x\in U\end{cases}$$ It should be easy to verify that the map $\phi$ is a bijection when $S,T,U$ are finite and $T\cap U=\emptyset$ Method 2: Notation: If $A$ and $B$ be two finite sets, we denote by $B^A$ the set of all functions from $A$ to $B$. Lemma 1: For two finite sets $A$ and $B$, we have, $|B^A|=|B|^{|A|}$ Proof. Let $f\colon A\to B$ be a function from $A$ to $B$. Then, we can construct $f$ by assigning to each element of $A$ an element of $B$. For each $x\in A$, there are $|B|$ choices for the value of $f(x)$. By the rule of product, we have $\prod\limits_{i=1}^{|A|} |B| = |B|^{|A|}$ ways to construct $f$ and since $B^A$ is the set of all such functions $f$, we conclude that $|B^A|=|B|^{|A|}$ Lemma 2: For two finite sets $A$ and $B$, we have $|A\times B|=|A||B|$ Proof. The set $A\times B$ is the set of all ordered pairs $(x,y)$ such that $x\in A$ and $y\in B$. To construct an element $(x,y)$ of $A\times B$, we have $|A|$ choices for the value of $x$ and $|B|$ choices for the value of $y$. By the rule of product, we have $|A||B|$ ways to construct an element $(x,y)$ of $A\times B$ and since $A\times B$ is the set of all such elements $(x,y)$, we have $|A\times B|=|A||B|$ Lemma 3: For two finite sets $A$ and $B$, we have $|A\cup B|=|A|+|B|-|A\cap B|$ Proof. Trivial using Venn diagrams. Now, using lemmas 1 and 2, we have, $$|S^T\times S^U|=|S|^{|T|}|S|^{|U|}=|S|^{|T|+|U|}=|S|^{|T\cup U|}$$ where the last equality follows by using lemma 3 with $|T\cap U|=|\emptyset|=0$
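A brute-force check of Method 1 for small sets (the helper `functions` is mine):

```python
from itertools import product

def functions(domain, codomain):
    # all maps domain -> codomain, represented as dicts
    dom = sorted(domain)
    return [dict(zip(dom, vals)) for vals in product(codomain, repeat=len(dom))]

S, T, U = {0, 1, 2}, {'a', 'b'}, {'c'}             # T and U disjoint
lhs = len(functions(T, S)) * len(functions(U, S))  # |S^T x S^U| = 3^2 * 3^1
rhs = len(functions(T | U, S))                     # |S^(T∪U)|  = 3^3
print(lhs, rhs)                                    # 27 27
```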
Can we perform row and column operations while calculating eigenvectors?
Row and column operations are described algebraically by multiplying by invertible matrices. You are looking for solutions to the equation $(A - I) v = 0$. The specific form of the matrix doesn't matter, so let's just consider the general problem of solving $Bv = 0$. Row operations are nice for this, because they multiply on the left: if $E$ encodes the row operation, then applying it to the matrix $B$ gives the matrix $EB$. We can make $EB$ appear in the equation simply by multiplying $Bv = 0$ on the left by $E$ to get $EBv = 0$. So, that's why you are taught to use row operations to solve systems of linear equations! Column operations multiply on the right; unfortunately it's harder to make $BE$ appear in the equation: the simplest way to do so is that $Bv = 0$ is equivalent to the equation $(BE) (E^{-1} v) = 0$. So, you can make using column operations work. The trick, though, is that once you've found the solutions to $(BE) x = 0$, the solutions you were actually looking for are those with $E^{-1} v = x$, or equivalently, $v = Ex$. In other words, once you get the solution, you have to take all of the column operations you applied and correctly reinterpret them as operations to be applied (in reverse order!) to the solution vectors. In your example, this means that after obtaining the solutions of the column-modified system, you need to swap the first and third entries of the solution vectors to get the solutions to the original system (both operations are represented by the same elementary matrix, which is why this is the correct operation to do). This is tricky, and (IMO) doesn't really offer any benefit; the usual method taught for solving systems of equations has no problem just doing row operations here, to compute $$ \begin{pmatrix} 0 &0&1\\0 &0&1\\0 &0&1\\\end{pmatrix} \to\begin{pmatrix} 0 &0&1\\0 &0&0\\0 &0&0\\\end{pmatrix} $$ and then to extract the solution space from this row reduced echelon form.
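Here is the bookkeeping for the example, done in sympy (a sketch): solve the column-swapped system, then apply $E$ to the solutions to recover those of the original system.

```python
import sympy as sp

B = sp.Matrix([[0, 0, 1],
               [0, 0, 1],
               [0, 0, 1]])
E = sp.Matrix([[0, 0, 1],
               [0, 1, 0],
               [1, 0, 0]])   # swap first and third columns

print(B.nullspace())          # solutions of B v = 0: span{e1, e2}
xs = (B*E).nullspace()        # solutions of (BE) x = 0: span{e2, e3}
print([E*x for x in xs])      # v = E x: back to span{e2, e1}
```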
Help with solving integrals of the form $\int\left(1+f(x)^2\right)f(x)\ \text{d}x$
For illustration, consider $A=1,C=-1$. Then you're looking at $\int (2-x^2)\sqrt{1-x^2} dx$. That can be done by trig substitution: if you have a right triangle with legs $x$ and $\sqrt{1-x^2}$ then the hypotenuse is $1$ and so you can choose $\theta$ so $\sqrt{1-x^2}=\cos(\theta)$. In this case $x=\sin(\theta)$ so $dx=\cos(\theta) d \theta$, so you have $$\int (1+\cos(\theta)^2)\cos(\theta)^2 d \theta$$ which is straightforward, albeit a bit tedious, to evaluate. Then back-substitute by setting $\theta=\sin^{-1}(x)$. Integrands of your general form should be tractable but you'll need to take cases depending on the sign of $b^2-4ac$ (taking the more common form $ax^2+bx+c$ instead of your form). In my example this was positive, which gives trigonometric substitutions. When it is negative, you have hyperbolic substitutions. When it is zero, the problem is trivial.
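If you want to double-check the back-substitution, sympy can compare the direct antiderivative with the trig-substituted one (they should agree up to a constant):

```python
import sympy as sp

x, theta = sp.symbols('x theta')
F = sp.integrate((2 - x**2)*sp.sqrt(1 - x**2), x)           # direct
G = sp.integrate((1 + sp.cos(theta)**2)*sp.cos(theta)**2,   # after x = sin(theta)
                 theta).subs(theta, sp.asin(x))
print(sp.simplify(sp.diff(F - G, x)))                       # 0 after simplification
```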
Terminology for a category whose objects are all free
The notion of a "free object" doesn't make sense in an arbitrary category. Note that the universal property of e.g. free groups refers to maps from a set to the underlying set of a group. So you should have something like a forgetful functor to the category of sets in order to talk about "free objects". If you're looking at a category of algebraic structures, in the sense of the category of algebras for a monad $T$, then the subcategory of free algebras is (equivalent to) the Kleisli category of $T$.
Why does $a_n=\sqrt{n} + \sin(n)$ diverge?
The sequence $$ a_n = \sqrt{n} + \sin(n) $$ diverges because it grows without bound: since $\sin(n)\ge -1$, we have $a_n \ge \sqrt n - 1 > M$ for every $n > (M+1)^2$. That's it. Now, it is bounded below, but not above. If you have a sequence that is bounded below and above, and if it is monotonic (i.e. increasing or decreasing from some point on), then it will be convergent. But your example is not bounded above.
Transitive closure proof (Pierce, ex. 2.2.7)
First of all, if this is how you define the transitive closure, then the proof is over. But you may still want to see that it is a transitive relation, and that it is contained in any other transitive relation extending $R$. To the second question the answer is simple: no, the last union is not superfluous, because it is an infinite union. Every step contains a bit more, but not necessarily all the needed information. So let us see that $R^+$ is really transitive, contains $R$, and is contained in any other transitive relation extending $R$. Clearly $R\subseteq R^+$ because $R=R_0$. If $x,y,z$ are such that $x\mathrel{R^+} y$ and $y\mathrel{R^+}z$ then, since the $R_n$ increase with $n$, there is some $n$ such that $x\mathrel{R_n}y$ and $y\mathrel{R_n}z$; therefore in $R_{n+1}$ we add the pair $(x,z)$, so $x\mathrel{R_{n+1}}z$ and therefore $x\mathrel{R^+}z$ as wanted. If $T$ is a transitive relation containing $R$, then one can show it contains $R_n$ for all $n$, and therefore their union $R^+$. To see that $R_n\subseteq T$, note that $R_0$ is such; and if $R_n\subseteq T$ and $(x,z)\in R_{n+1}$, then there is some $y$ such that $(x,y)\in R_n$ and $(y,z)\in R_n$. Since $R_n\subseteq T$ these pairs are in $T$, and since $T$ is transitive, $(x,z)\in T$ as well.
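For a finite relation the increasing chain $R_n$ stabilizes after finitely many steps, so the construction can be run directly (a sketch; for infinite relations this is exactly where the full union over all $n$ is needed):

```python
def transitive_closure(R):
    # iterate R_{n+1} = R_n ∪ {(x,z) : (x,y), (y,z) in R_n} until stable
    closure = set(R)
    while True:
        new = {(x, z) for (x, y) in closure for (y2, z) in closure if y == y2}
        if new <= closure:
            return closure
        closure |= new

print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```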
Particular number is divisible by 11
Note that $N = 1000d+100c+10b+a$ $= (1001d+99c+11b)+(-d+c-b+a)$ $= 11(91d+9c+b)-[(d-c)+(b-a)]$. Hence, $N$ is a multiple of $11$ iff $(d-c)+(b-a)$ is also a multiple of $11$.
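A brute-force check of the criterion over all digit choices:

```python
print(all(((1000*d + 100*c + 10*b + a) % 11 == 0)
          == (((d - c) + (b - a)) % 11 == 0)
          for d in range(10) for c in range(10)
          for b in range(10) for a in range(10)))   # True
```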
Difficult double integration
HINT $$ \int (3x+4y)^4\,dx = \frac{(3x+4y)^5}{15} + C. $$ Hence you can integrate the double integral directly: $$\int_0^1 \int_0^4 (3x+4y)^4 \,dx\,dy = \frac{1024}{5}\int_0^1 \left[ 5y^4 + 30y^3 + 90y^2 + 135y + 81 \right] dy.$$
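A quick sympy check of both the double integral and the reduced single integral (both give $191488/5$):

```python
import sympy as sp

x, y = sp.symbols('x y')
print(sp.integrate((3*x + 4*y)**4, (x, 0, 4), (y, 0, 1)))          # 191488/5
print(sp.Rational(1024, 5)*sp.integrate(5*y**4 + 30*y**3 + 90*y**2
                                        + 135*y + 81, (y, 0, 1)))  # 191488/5
```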
Number of $3$s in the units place
You are doing it correctly. The units digits of $7^k$ cycle with period $4$: $7,9,3,1$. Dividing $2014$ by $4$ gives $503$ complete cycles, with $7^{2013}$ and $7^{2014}$ left over at the end. Since the $3$ comes third in each cycle, it does not appear among those last two terms, so the answer is just $503$.
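A one-line check (assuming the count runs over the units digits of $7^1,\dots,7^{2014}$):

```python
print(sum(pow(7, k, 10) == 3 for k in range(1, 2015)))  # 503
```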
Show that cardinality of intersection is infinite
Write $[N] = \{1, \ldots, N\}$. Then note that $$\#(S(x) \cap [N]) \geq N/x + \mathcal O(1)$$ for fixed $x > 1$. Write $\alpha = 1/x_1 + 1/x_2 + 1/x_3 > 1$. It follows that $$ \#(S(x_1) \cap [N]) + \#(S(x_2) \cap [N]) + \#(S(x_3) \cap [N]) \geq \alpha N + \mathcal O(1). $$ As $N$ tends to infinity, the gap between the sum on the left-hand side and $N$ grows without bound. But the union of the three sets lies inside $[N]$, so this excess must be absorbed by points counted more than once, i.e. by the pairwise intersections. Thus at least one pairwise intersection between the $S(x_i)$ has to be infinite.
Compute the derived subgroup of $S_3$
It matters because $C_3\cong A_3$ implies that $A_3$ is very short on subgroups and $S_3'$ must be one of them. Uhm... No? To me, making sense of that statement is rather challenging. One reason (but not the only one) for this is the fact that you seem to confuse the verb "to commute" (such as in "$g\in G$ and $h\in G$ commute with each other" = "$gh=hg$") with "being commutative" (such as in "the group $G$ is commutative" = "all the elements of $G$ commute with each other"). After edit: Indeed, in general a group $G$ is commutative if and only if $G'=\{e\}$. In fact, if $G'\ne \{e\}$, there must be a commutator $ghg^{-1}h^{-1}\ne e$, that is, $gh=ghg^{-1}h^{-1}hg\ne hg$. Vice versa, if $gh\ne hg$, then $ghg^{-1}h^{-1}\ne e$. $A_3$ is simple. In general, $A_n$ is simple if and only if $n\ge 3$ and $n\ne 4$. As far as I know, simplicity is an intrinsic property (a group is simple if it has no proper nontrivial normal subgroups), not a relative one: thus I wouldn't know how to interpret being "simple in $S_3$".
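If you want to see this computationally, sympy's permutation groups confirm $S_3'=A_3$:

```python
from sympy.combinatorics.named_groups import SymmetricGroup

D = SymmetricGroup(3).derived_subgroup()
print(D.order())                    # 3: the derived subgroup is A3 ≅ C3
print(sorted(D.elements, key=str))  # the identity and the two 3-cycles
```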
Calculating Column space, row space and solution space of a vector
$Av=\begin{pmatrix} 0 \\ 9a-90\\ 30-3a\\ \end{pmatrix}$ So you require $a=10$.
Either a or b and either c or d is nonzero implies either ac+2bd or ad+bc is nonzero
Yes, cases might help. Assume $ac+2bd=ad+bc=0$. If $a=0$, we conclude $bc=0$ and $2bd=0$; but as $a=0$ means $b\ne 0$, we get $c=0$ and $d=0$, contradiction. Hence we may assume $a\ne 0$. Then $c=-\frac{2bd}{a}$ and $d=-\frac{bc}{a}$, so $c=\frac{2b^2c}{a^2}$. If $c\ne 0$, this tells us that $a^2=2b^2$, i.e. $\left(\frac ba\right)^2=\frac12$, which is absurd with $a,b$ rational. We conclude $c=0$. But then $2bd=0$ and $ad=0$; and as $c=0$ implies $d\ne 0$, we get $a=0$ and $b=0$, contradiction.
Dimension of subspace orthogonal to a vector
Hint: Build a basis of $\mathbb{R}^n$ that contains $x$. Use the Gram-Schmidt procedure.
Best way to simplify a polynomial fraction divided by a polynomial fraction as completely as possible
Simplifying, we obtain $$\frac{(5x^2-9x-2)(2x^8+6x^7+4x^6)}{(3x^3+6x^2)(x^4-3x^2-4)}.$$ Multiplying numerator and denominator out we obtain: $$\frac{10\,{x}^{10}+12\,{x}^{9}-38\,{x}^{8}-48\,{x}^{7}-8\,{x}^{6}}{3\,{x}^{7}+6\,{x}^{6}-9\,{x}^{5}-18\,{x}^{4}-12\,{x}^{3}-24\,{x}^{2}}$$
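For what it's worth, a quick sympy check also shows the fraction cancels much further: the factors $x^2$, $x-2$, and $x+2$ are common to numerator and denominator.

```python
import sympy as sp

x = sp.symbols('x')
expr = ((5*x**2 - 9*x - 2)*(2*x**8 + 6*x**7 + 4*x**6)
        / ((3*x**3 + 6*x**2)*(x**4 - 3*x**2 - 4)))
print(sp.cancel(expr))
# (10*x**6 + 12*x**5 + 2*x**4)/(3*x**3 + 6*x**2 + 3*x + 6)
```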
Proof of $ \zeta(s)=\frac{1}{s-1}+\gamma+O(s-1)$
$$f(s)=\left(\zeta(s)-\frac{1}{s-1}\right) = \int_{0}^{+\infty}\frac{x^{s-1}}{\Gamma(s)}\left(\frac{1}{e^x-1}-\frac{1}{x e^x}\right)\,dx $$ is a holomorphic function in a neighbourhood of $s=1$. Once $\lim_{s\to 1}f(s)=\gamma$ has been proved through the dominated convergence theorem, $$ f(s)-\gamma = O(s-1) $$ as $s\to 1$ is automatic.
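Numerically (mpmath), $\zeta(s)-\frac1{s-1}-\gamma$ indeed shrinks like $O(s-1)$:

```python
from mpmath import mp, zeta, euler

mp.dps = 30
for eps in (mp.mpf('1e-2'), mp.mpf('1e-4'), mp.mpf('1e-6')):
    s = 1 + eps
    print(eps, zeta(s) - 1/(s - 1) - euler)  # shrinks proportionally to s - 1
```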
Avoiding multiplication in inequality
Your geometry idea is nice; in fact, it works :). Consider a triangle $ABC$ containing a point $H$ such that $AH=x$, $BH=y$, $CH=z$ and $\angle AHB = \angle BHC = \angle CHA = 120^{\circ}$. Then the cosine rule gives $a^2= y^2+z^2+yz$, $b^2=x^2+z^2+xz$, $c^2 = x^2+y^2+xy$. Also, since the three triangles $BHC$, $CHA$, $AHB$ (each with area $\frac{\sqrt3}{4}$ times the product of its two sides at $H$) together make up $ABC$, we get $xy+yz+xz= \frac{4A}{\sqrt{3}}$, where $A$ is the area of $ABC$. Now $(x+y+z)^2= \frac{a^2+b^2+c^2+4A\sqrt{3}}{2}$. So the given inequality is equivalent to $$ \frac{a^2+b^2+c^2+4A\sqrt{3}}{2} \times \frac{16A^2}{3} \le 3a^2b^2c^2. $$ Writing $abc = 4AR$, where $R$ is the circumradius, this inequality is equivalent to $$a^2+b^2+c^2+4\sqrt{3}A \le 18R^2. $$ This is true because the distance between the circumcentre and the orthocentre of a triangle is precisely $\sqrt{9R^2 -a^2-b^2-c^2}$, so $9R^2 \ge a^2+b^2+c^2$; and it is very well known (Weitzenböck's inequality) that $a^2+b^2+c^2\ge 4\sqrt{3}A$. $\Box$
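A quick numerical spot-check of the key lemma $a^2+b^2+c^2+4\sqrt3\,A\le 18R^2$ on random triangles (a sketch; Heron's formula for $A$, and $R=abc/4A$):

```python
import math, random

for _ in range(5):
    while True:  # rejection-sample valid side lengths
        a, b, c = (random.uniform(0.1, 1.0) for _ in range(3))
        if a < b + c and b < a + c and c < a + b:
            break
    s = (a + b + c) / 2
    A = math.sqrt(s*(s - a)*(s - b)*(s - c))            # Heron's formula
    R = a*b*c / (4*A)                                   # circumradius
    print(a*a + b*b + c*c + 4*math.sqrt(3)*A <= 18*R*R) # True
```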
Tangent to curve $x^3+y^3=a^3$ meets it again.
$$\frac{y_2-y_1}{x_2-x_1} = \frac{y_2^3-y_1^3}{x_2^3-x_1^3} \times \frac{x_2^2+x_1x_2+x_1^2}{y_2^2+y_1y_2+y_1^2} = -\frac{x_2^2+x_1x_2+x_1^2}{y_2^2+y_1y_2+y_1^2},$$ since both points lie on the curve, so $y_2^3-y_1^3=-(x_2^3-x_1^3)$. And you have written $$\frac{y_2-y_1}{x_2-x_1} = \left(\frac{x_2^2+x_1x_2+x_1^2}{y_2^2+y_1y_2+y_1^2}\right), $$ which is wrong. Equating the chord's slope to the tangent slope $-x_1^2/y_1^2$ at $(x_1,y_1)$ (from implicit differentiation, $y'=-x^2/y^2$), we have $$\frac{x_1^2}{y_1^2} = \frac{x_2^2+x_1x_2+x_1^2}{y_2^2+y_1y_2+y_1^2}. $$ This gives $$ x_1^2y_2^2+x_1^2y_1y_2+x_1^2y_1^2=y_1^2x_1x_2+x_1^2y_1^2+x_2^2y_1^2, $$ $$ x_1^2y_2^2+x_1^2y_1y_2=y_1^2x_1x_2+x_2^2y_1^2, $$ $$(x_1y_2+x_2y_1)(x_1y_2-x_2y_1)= x_1y_1(x_2y_1-y_2x_1).$$ The factor $x_1y_2-x_2y_1$ is nonzero (otherwise $(x_2,y_2)$ would be proportional to $(x_1,y_1)$, and two real points of the curve on a line through the origin coincide), so cancelling it and dividing by $x_1y_1$ gives $$ \frac{x_2}{x_1}+\frac{y_2}{y_1}+1 = 0. \quad\Box $$
PDE Laplace equation. Integral representation form and Green function
It follows from the divergence theorem applied to the vector field $h_y\nabla u - u\nabla h_y$. This produces the boundary terms $\partial_v u\, h_y$ and $\partial_v h_y\, u$; the latter cancels against the term coming from $\partial_v G\, u$, so we recover the original integral representation for $u$.
Prove $a<b$ and $c<d$ then $|a-c|+|b-d|<|a-d|+|b-c|$.
We can assume $a=0$ (otherwise subtract $a$ from all four numbers); then $b>0$ and $c<d$. Squaring both sides of $$|c|+|b-d|<|d|+|b-c|$$ (both sides are nonnegative) gives $$c^2+b^2+d^2-2bd+2|c||b-d| < d^2+b^2+c^2-2bc +2|d||b-c|,$$ $$-bd+|c||b-d| < -bc +|d||b-c|,$$ $$|c||b-d|-|d||b-c| < b(d-c).$$ For the last one we have, by the triangle inequality ($|x|-|y|\leq|x-y|$, applied with $x=c(b-d)$ and $y=d(b-c)$): $$|c||b-d|-|d||b-c| \leq |cb-cd-db+dc| = |b||c-d| = b(d-c).$$ (Note that this last step only yields $\le$, and indeed the strict inequality can fail: $a=0$, $b=1$, $c=2$, $d=3$ makes both sides of the original inequality equal to $4$. So in general one only gets $|a-c|+|b-d|\le|a-d|+|b-c|$.)
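For the record, a numeric check of the equality case mentioned above:

```python
a, b, c, d = 0, 1, 2, 3          # a < b and c < d
print(abs(a - c) + abs(b - d),   # 4
      abs(a - d) + abs(b - c))   # 4: equality, so strict < can fail
```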