Find Z Rotation based on X and Y Vector
I am assuming that the arrow 'pointing to 0°' points in the $x$-direction of the coordinate system. Then $\frac{y}{x} = \tan(\alpha-180°)$, where $\alpha$ is the angle you are looking for, because the vector $(x,y)$ is pointing from $v_2$ to $v_1$. That means $y/x$ is the slope of the line that goes through $v_1$ and $v_2$. But you want the other direction, which is why you have to subtract (or add) 180°.
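A minimal Python sketch of this computation (the function name is made up), assuming as above that $(x,y)$ are the components of the vector pointing from $v_2$ to $v_1$:

import math

def z_rotation_deg(x, y):
    # angle of the vector (x, y) that points from v2 to v1
    angle_to_v1 = math.degrees(math.atan2(y, x))
    # we want the opposite direction, so add (or subtract) 180 degrees
    return (angle_to_v1 + 180.0) % 360.0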
Why is a function with a hole not considered to be continuous by the $\varepsilon-\delta$ definition?
In order for continuity at $c$ to hold, we require that $$\lim_{x \rightarrow c} f(x) = f(c)$$ While it may be the case that $\lim_{x\rightarrow c} f(x)$ exists, it is clearly not equal to $f(c)$, since $f(c)$ is itself undefined. This type of discontinuity is called a removable discontinuity, and in some ways is the "best" type of discontinuity to have, since it allows you to simply redefine a single point and end up with a continuous function. EDIT using $\epsilon-\delta$ formulation: The definition of continuity at a point can also be written as A function $f$ is continuous at a point $c$ if for all $\epsilon>0$, there exists a $\delta>0$ such that $$|x-c| < \delta \implies |f(x) - f(c)| < \epsilon$$ But clearly we cannot apply this definition to the point $c$; if $f(c)$ is undefined, then so is $|f(x) - f(c)|$. You can choose any $\delta$ you like but the statement $|f(x) - f(c)|<\epsilon$ will always be false.
Let $F$ be a set of subsets of the set $\{1,2,3,...,n\}$ such that:
First, start by noticing that a set does not contain repeated elements. So for $n = 4$: as soon as you pick one $A \in F$, you cannot pick any other, because only one element was not used, and so a second $B$ would have to reuse at least two of the numbers in $A$ and would break the condition of the problem statement regarding the intersections. Thus $f(4) = 1$, which fits in the inequalities: $$\frac{(n-1)(n-2)}{6} \leq f(n) \leq \frac{n (n-1)}{6}\iff\\ 1 \leq 1 \leq 2, \quad n = 4$$
Show that $| \{ 0,1 \}^{A} | = | \mathcal P(A) |$
An explicit bijection $$\varphi:\{f:A\to\{0,1\}\}\to\mathcal{P}(A)$$ is given by $$\varphi(f)=\{a\in A:f(a)=0\}.$$ Clearly, $\varphi$ is surjective: given $B\in\mathcal{P}(A)$, take $f(a)=0$ if $a\in B$ and $f(a)=1$ if $a\in A\backslash B$ and then $\varphi(f)=B$. The map $\varphi$ is also injective: for if $\varphi(f)=\varphi(g)$, we have $f(a)=0$ if $a\in \varphi(f)$ and $f(a)=1$ if $a\in A\backslash\varphi(f)$ and also $g(a)=0$ if $a\in\varphi(g)$ and $g(a)=1$ if $a\in A\backslash \varphi(g)$ but $\varphi(f)=\varphi(g)$ so $f=g$.
About proving an argument is valid
The move from line 2 to line 3 is called conjunction elimination. It says 'if I know that (A and B) is true, then I know that A is true', also 'if I know that (A and B) is true, then I know that B is true' - where A and B are well formed formulas.
Prove that a cyclic group with only one generator can have at most 2 elements
Perhaps easier: if $g$ generates $G$, then so does $g^{-1}$. The hypothesis then implies that $g=g^{-1}$, so $g^2=1$. Done (either $g=1$ or not, in which case respectively the order of $G$ is $1$ or $2$).
Two Definitions for $E(X)$
$\mu_X$ is defined by $\mu_X(A)=P(X^{-1}(A))$. This can be written as $\int I_A \, d\mu_X=\int I_A(X) \, dP$. [Because $I_A(X(\omega)) =I_{X^{-1}(A)}(\omega)$.] For any simple function $f$ we get $\int f\,d\mu_X=\int f(X) \, dP$. From here you can use a standard measure-theoretic argument to say that the equation holds for any non-negative measurable function $f$, as well as for any $f$ which is integrable w.r.t. $\mu_X$. Taking $f(x)=x$ you get the equation you want, provided the integral exists (which is true iff $X$ has finite mean).
locally connected space X
I assume that you meant $X=\mathbb{N}$, i.e. the naturals. If you put the discrete topology on $X$ then every function $f:X\to Y$ is continuous, no matter what $Y$ is. That's because every subset of $X$ is open, including $f^{-1}(U)$. So now if you take $Y$ to be any non-locally connected, countable space then you will find a continuous surjection $f:\mathbb{N}\to Y$. However it won't work if $Y$ is uncountable. So here's another idea, from $Y$'s point of view. Take any non-locally connected space $Y$. Then define $X=Y$ as a set but put the discrete topology on $X$. Define $f:X\to Y$, $f(x)=x$. Now $f$ is continuous because the topology on $X$ is discrete. Also $f$ is obviously onto. $X$ is locally connected; every discrete space is, because singletons are open. And $Y$ is not locally connected by choice. For a concrete example take $Y=\mathbb{Q}$ with the usual Euclidean topology. Since $Y$ is countable, you end up with $X\simeq\mathbb{N}$, the same idea you started with.
Q: (Easy?) test for intersection of two integer sequence generators
Let $a$ and $b$ be the period of sequence $A$ and sequence $B$ respectively (so $a=3$ and $b=4$ in your example). Your question boils down to whether there exist integers $i_1, i_2, k_1, k_2$, such that $$ai_1+k_1 = bi_2+k_2$$ where $k_1$ and $k_2$ are constrained by $k_1 \in [q,r]$, and $k_2 \in [s,t]$. We can rearrange the equation like so: $$k_1-k_2 = -ai_1+bi_2$$ Now the set of possible values of the right-hand side is precisely the integer multiples of $g=\text{gcd}(a,b)$ (e.g., see http://en.wikipedia.org/wiki/B%C3%A9zout%27s_identity), while the set of possible values of the left-hand side is the interval $[q-t,r-s]$. So all you need to do is see if a multiple of $g$ is in the interval $[q-t,r-s]$. This will be the case if and only if (1) $q-t$ is divisible by $g$, or (2) the integer part of $\frac{r-s}g$ is greater than the integer part of $\frac{q-t}g$.
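A small Python sketch of this final check (the function name and argument order are made up; everything is assumed to be an integer):

import math

def sequences_can_meet(a, b, lo, hi):
    # lo = q - t and hi = r - s bound the possible values of k1 - k2;
    # the sequences meet iff some multiple of gcd(a, b) lies in [lo, hi]
    g = math.gcd(a, b)
    return hi // g >= -((-lo) // g)   # floor(hi/g) >= ceil(lo/g)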
Finding left/right adjoints of a forgetful functor.
Sometimes it helps to unpack the definition. In this case, for the forgetful functor $A/\mathcal{C} \to \mathcal{C}$ to have a left adjoint, you'd need to assign to each object $X$ an object $FX \in \mathcal{C}$ and a morphism $f_X : A \to FX$ such that $$\mathrm{Hom}_{A/\mathcal{C}}(A \xrightarrow{f_X} FX, A \xrightarrow{g} Y) \cong \mathrm{Hom}_{\mathcal{C}}(X, Y)$$ naturally in $X \in \mathcal{C}$ and all $A \xrightarrow{g} Y \in A/\mathcal{C}$. Can you see how you might define such a natural isomorphism? If not, here is an additional hint: define $FX = A+X$ and $f_X = \iota_A : A \to A+X$, and note that given $g : A \to Y$, morphisms $h : A+X \to Y$ such that $h \circ \iota_A = g$ correspond naturally with morphisms $X \to Y$.
Proving an infinite series is differentiable
$$f'(x)=\sum_n \frac{d}{dx}\frac{e^{-nx}}n=-\sum_n e^{-nx}=-\sum_n (e^{-x})^n$$ Does that give you any ideas? (Note: in order to be rigorous, the first step of distributing the derivative over the sum should be done for a partial sum, since we don't know yet if the sum of derivatives converges)
Conditional expectation of Poisson r.v. $X$ given $X$ is even?
Treat $X\mid X \text{ is even}$ as a random variable. What is its pmf? $$P(X=k\mid X \text{ is even})=0$$ if $k$ is odd (obviously) and for $k$ even $$P(X=k \mid X\text{ is even})=\frac{P(X=k, X \text{ is even})}{P(X \text{ is even})}=\frac{P(X=k)}{P(X \text{ is even})}=\frac{2e^{-λ}λ^k}{k!(1+e^{-2λ})}$$ So \begin{align}E[X \mid X \text{ is even}]&=\sum_{k=0}^{+\infty}kP(X=k \mid X \text{ is even})=\sum_{k\text{ is even}}^{+\infty}k\frac{2e^{-λ}λ^k}{k!(1+e^{-2λ})}\\[0.2cm]&=\frac{2}{1+e^{-2λ}}\sum_{k\text{ is even}}^{+\infty}\frac{e^{-λ}λ^k}{(k-1)!}\\[0.2cm]&=\frac{2λ}{1+e^{-2λ}}\sum_{k\text{ is even}}^{+\infty}\frac{e^{-λ}λ^{k-1}}{(k-1)!}\\[0.2cm]&=\frac{2λ}{1+e^{-2λ}}\sum_{k\text{ is odd}}^{+\infty}\frac{e^{-λ}λ^{k}}{k!}=\frac{2λ}{1+e^{-2λ}}\cdot P(X \text{ is odd})\\[0.2cm]&=\frac{2λ}{1+e^{-2λ}}\cdot \frac{1-e^{-2λ}}{2}=λ\frac{1-e^{-2λ}}{1+e^{-2λ}}\end{align}
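A quick numerical sanity check of the closed form, using a truncated Poisson sum (the value of $\lambda$ is arbitrary):

import math

lam = 1.7
num = sum(k * math.exp(-lam) * lam**k / math.factorial(k) for k in range(0, 100, 2))
den = sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(0, 100, 2))
print(num / den, lam * (1 - math.exp(-2 * lam)) / (1 + math.exp(-2 * lam)))   # the two agree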
Maximum of a sequence $\left({n\choose k} \lambda^k\right)_k$
In the discrete case, it may be useful to look at the ratio of successive terms. Here, let $a_k = \binom{n}k \lambda^k$. Then: $$\frac{a_{k+1}}{a_k} = \lambda\frac{n-k}{k+1}$$ As $k$ increases from $1$ to $n$, it is easily seen that the numerator decreases and the denominator increases, so the fraction decreases steadily from $\frac{\lambda(n-1)}2$ to $0$. At some point it becomes less than $1$; the maximum is the term $a_k$ at the first $k$ where this happens (the last term before the sequence starts to decrease). So we solve $$\lambda(n-k) \le k+1 \implies k \ge \frac{n\lambda-1}{\lambda+1}$$
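A quick check with made-up values of $n$ and $\lambda$; the maximum lands at the first integer $k$ satisfying the inequality above:

from math import comb, ceil

n, lam = 20, 0.7
a = [comb(n, k) * lam**k for k in range(n + 1)]
k_star = max(range(n + 1), key=lambda k: a[k])          # index of the largest term
print(k_star, ceil((n * lam - 1) / (lam + 1)))          # 8 8 for these values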
Using eigenvectors to find the general solution from a system of equations
Solving the System of Linear Equations So it seems you want to solve the system of equations $$ Ax = \begin{pmatrix}-13&40&-48\\-8&23&-24\\0&0&3\end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} x'_1 \\ x'_2 \\ x'_3 \end{pmatrix} = x' $$ where $x'$ is given and the $x_j$ are looked for. You would normally solve this by calculating the inverse of $A$, since then $$ x = A^{-1} x' $$ But let's say you really want to solve this by using the eigenvalues and eigenvectors. You have already determined both of these, providing the change-of-basis matrix $S$ which contains the eigenvectors $$ S = \begin{pmatrix} 2 & 5 & -3 \\ 1 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix} $$ which accomplishes $$ S^{-1} A S = D = \begin{pmatrix} 7 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 3 \end{pmatrix} $$ We can now solve for $x$. Using the above, \begin{align*} x &= A^{-1} x' \\&= (SDS^{-1})^{-1} x' \\&= SD^{-1}S^{-1} x' \\&= \begin{pmatrix} 2 & 5 & -3 \\ 1 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \frac{1}{7} & 0 & 0 \\ 0 & \frac{1}{3} & 0 \\ 0 & 0 & \frac{1}{3} \end{pmatrix} S^{-1} x' \end{align*} As you can see, we do not yet have the solution, since we don't know what $S^{-1}$ looks like. So what's left to do is: calculate the inverse of $S$, multiply everything and be done. (So whatever method you use, you have to calculate the inverse of some matrix.) Getting the Eigenvectors To determine the last two eigenvectors, you got the equation $$ x_1 - 2.5 x_2 + 3 x_3 = 0 $$ whose solutions $(x_1,x_2,x_3)$ you want to find. What the equation tells us is that if we are given two of the variables, the third one (for example $x_1$) will be determined. $$ x_1 = 2.5 x_2 - 3 x_3 $$ Thus the solutions can be written as $$ ( 2.5 x_2 - 3 x_3 , x_2 , x_3 ) \equiv \begin{pmatrix} 2.5 x_2 - 3 x_3 \\ x_2 \\ x_3 \end{pmatrix} = x_2 \begin{pmatrix} 2.5 \\ 1 \\ 0 \end{pmatrix} + x_3 \begin{pmatrix} - 3 \\ 0 \\ 1 \end{pmatrix} ~~~~~~~ \text{with} ~~ x_2 , x_3 \in \mathbb{R} $$ Any nonzero solution of this form is an eigenvector. Since you need two eigenvectors, you take two linearly independent solutions, for example the solutions with $$ (x_2,x_3) = (2,0) ~~~\text{or}~~~ (0,1) $$ which yield the eigenvectors that the computer gave you.
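A short numpy sketch confirming the diagonalisation and using it to solve the system (the right-hand side $x'$ is an arbitrary example):

import numpy as np

A = np.array([[-13., 40., -48.],
              [ -8., 23., -24.],
              [  0.,  0.,   3.]])
S = np.array([[2., 5., -3.],
              [1., 2.,  0.],
              [0., 0.,  1.]])
D = np.diag([7., 3., 3.])
print(np.allclose(np.linalg.inv(S) @ A @ S, D))                # True: S^{-1} A S = D
x_prime = np.array([1., 2., 3.])                               # an arbitrary x'
x = S @ np.diag(1. / np.diag(D)) @ np.linalg.inv(S) @ x_prime  # x = S D^{-1} S^{-1} x'
print(np.allclose(A @ x, x_prime))                             # True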
existence of countably additive measure for Borel subsets of $[0,1]$
Okay, I guess I kind of figured it out. Please let me know if there is any mistake. Basically, you first do all the measure theory over $\mathbb{R}$, just like the first part of your post. Then you get a unique Borel measure $\lambda$ on the whole real line $\mathbb{R}$, defined by $$\lambda((a,b])=F(b)-F(a).$$ Now, restrict $\lambda$ to $[0,1]$. That is, you define $\mu:=\lambda|_{[0,1]}$. But this is not the end: you still need to argue why $$\mu([a,b])=F(b)-F(a),$$ since currently the definition is $\mu((a,b])=F(b)-F(a).$ This involves continuity. As pointed out by you, if the function has a jump, you can have a singular measure at a point. This involves the following lemma. Lemma: Let $F$ be an increasing and right-continuous function, and let $\mu$ be the measure associated to it. Then $\mu(\{a\})=F(a)-F(a-)$, $\mu([a,b))=F(b-)-F(a-)$ and $\mu([a,b])=F(b)-F(a-).$ You can find the proof here: http://www.math.ttu.edu/~drager/Classes/01Fall/reals/ans2.pdf. However, if you read the proof, you will see that by the definition of $F(a-)$, if your function is continuous, then $F(a-)=F(a)$, and therefore, for a continuous function, you can arrive at the conclusion $$\mu([a,b])=F(b)-F(a-)=F(b)-F(a).$$ However, for a function with a jump, if it is right-continuous, then you can only get the lemma, which is a little bit weaker. For a left-continuous function, your construction of the Borel measure over $\mathbb{R}$ will use $[a,b)$, so the direction will change. You could easily modify the statement and the proof to discuss the left-continuous case, but the idea is the same so I will not bother to write it here.
Prove that $2^p$ + $3^p$ cannot be a perfect square if p is prime
All perfect squares are either $0$ or $1$ mod $4$. We have that $2^{p}+3^{p}\equiv 2^{p}+(-1)^{p} \pmod 4$. We also have that $p$ is prime, and therefore odd (the case $p=2$ is handled separately below). Therefore we have $2^{p}+3^{p}\equiv 2^{p}-1 \pmod 4$. We also know that $2^p$ is always $0$ mod $4$ if $p>1$, which it is, because $2$ is the smallest prime. So that means $2^{p}+3^{p}\equiv 2^{p}-1 \equiv -1 \equiv 3 \pmod 4$. Referring back to the first line, we see that this is not a perfect square. In the case $p=2$, we have $2^2+3^2=13$, which is not a perfect square. This case is an exception because $2$ is even.
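A one-line check of the residue mod $4$ for a few odd primes:

for p in [3, 5, 7, 11, 13, 17]:
    print(p, (2**p + 3**p) % 4)   # always 3, so never a perfect square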
Is there a relation that is irreflexive, anti-symmetric and not transitive?
Consider ordered pairs $(x,y)$ with $x,y\in\{a,b,c,d\}$. The following relation satisfies those conditions: $$\{(a,b) ,(b,c), (c,d), (d,a)\}$$ Clearly, this relation is irreflexive, since it contains no ordered pair with equal members, i.e. no pair of the form $(x,x)$. This relation is anti-symmetric, since none of the reversed pairs $(b,a)$, $(c,b)$, $(d,c)$, $(a,d)$ belongs to it. This relation is also not transitive (which is left for you to work out).
Is it still possible for mathematicians to contribute to the theory of music?
I know that a member of MathOverflow, Tobias Schlemmer, works on this topic; you could consult him.
Time derivative of a pullback of a time-dependent 2-form
This is a frequently-used trick in differential geometry. The key idea is the following fact, taken from basic multi-variable calculus: Let $F:\mathbb{R}^2\to V$ be a smooth function, where $V$ is some vector space, and let $dF_p$ denote the differential of $F$ at $p$. Then at every $p\in\mathbb{R}^2$ the differential $dF_p$ is linear, and in particular, we have $$dF_p(1,1)=dF_p(1,0)+dF_p(0,1).$$ How is this related to the question at hand? We wish to calculate the time derivative $\frac{d}{dt}\psi_t^*\omega_t$. The difficulty lies in the fact that $t$ appears twice in the expression we are trying to differentiate. So, the trick is to consider the two-parameter family $$F(t,s):=\psi_t^*\omega_s,$$ where $t$ and $s$ may vary independently. Then, by the above basic fact, we have $$\frac{d}{dt}\psi_t^*\omega_t=dF(1,1)=\frac{\partial F}{\partial t}+\frac{\partial F}{\partial s},$$ which gives the desired expression.
Find asymptotics of $x(n)$, if $n = x^{x!}$
We'll follow the approach suggested by alex.jordan. By taking logs again we get the equation $$ \log x! + \log\log x = \log\log n. $$ The $\log x!$ term is the dominant term on the left-hand side so it will be the main source of information about $x$. It's clear that $x \to \infty$ as $n \to \infty$, and by rearranging we note that $$ \log x! = \log\log n - \log\log x < \log\log n $$ for $x$ large enough. Now $$ \log x! = x\log x + O(x) > \frac{1}{2} x\log x $$ for $x$ large enough, so $$ x \log x < 2 \log\log n $$ for $x$ large enough. Taking the Lambert $W$ function of both sides yields $$ \log x < W(2\log\log n) $$ since $W(x\log x) = \log x$, whence $$ x < e^{W(2\log\log n)} = \frac{2\log\log n}{W(2\log\log n)} = O\left(\frac{\log\log n}{\log\log\log n}\right). $$ To obtain the last bound we used the fact that $$ W(z) = \log z + O(\log\log z) $$ as derived in this answer. We'll now bootstrap this crude estimate into the previous estimates to obtain a sharper one. The approximation $\log x! = x\log x + O(x)$ becomes $$ \log x! = x\log x + O\left(\frac{\log\log n}{\log\log\log n}\right), $$ which allows us to rewrite the equation $$ \log x! = \log\log n - \log\log x $$ as $$ x\log x + O\left(\frac{\log\log n}{\log\log\log n}\right) = \log\log n + O(\log\log\log\log n). $$ or just $$ x\log x = \log\log n + O\left(\frac{\log\log n}{\log\log\log n}\right). $$ We then have $$ \begin{align} x &= \frac{\log\log n + O\left(\frac{\log\log n}{\log\log\log n}\right)}{W\left[\log\log n + O\left(\frac{\log\log n}{\log\log\log n}\right)\right]} \\ &= \frac{\log\log n + O\left(\frac{\log\log n}{\log\log\log n}\right)}{\log\left[\log\log n + O\left(\frac{\log\log n}{\log\log\log n}\right)\right] + O(\log\log\log\log n)} \\ &= \frac{\log\log n + O\left(\frac{\log\log n}{\log\log\log n}\right)}{\log\log\log n + \log\left[1+O\left(\frac{1}{\log\log\log n}\right)\right] + O(\log\log\log\log n)} \\ &= \frac{\log\log n + O\left(\frac{\log\log n}{\log\log\log n}\right)}{\log\log\log n + O(\log\log\log\log n)} \\ &= \frac{\frac{\log\log n}{\log\log\log n} + O\left(\frac{\log\log n}{(\log\log\log n)^2}\right)}{1 + O\left(\frac{\log\log\log\log n}{\log\log\log n}\right)} \\ &= \left[\frac{\log\log n}{\log\log\log n} + O\left(\frac{\log\log n}{(\log\log\log n)^2}\right)\right]\left[1+O\left(\frac{\log\log\log\log n}{\log\log\log n}\right)\right] \\ &= \frac{\log\log n}{\log\log\log n} + O\left(\frac{(\log\log n)(\log\log\log\log n)}{(\log\log\log n)^2}\right). \end{align} $$ This last part was pretty brutal but at least we wound up with a rigorous error bound. In summary, $$ x = \frac{\log\log n}{\log\log\log n} + O\left(\frac{(\log\log n)(\log\log\log\log n)}{(\log\log\log n)^2}\right) $$ as $n \to \infty$. If we desired, we could bootstrap again with this estimate. Before doing so, let us introduce the notation $$ \begin{align} &\log\log n = L_2(n), \\ &\log\log\log n = L_3(n), \\ &\log\log\log\log n = L_4(n), \end{align} $$ so that the last estimate can be written $$ x = \frac{L_2(n)}{L_3(n)} + O\left(\frac{L_2(n)L_4(n)}{L_3(n)^2}\right). $$ We then find that $$ \begin{align} \log x! &= x \log x - x + O(\log x) \\ &= x \log x - \frac{L_2(n)}{L_3(n)} + O\left(\frac{L_2(n)L_4(n)}{L_3(n)^2}\right), \end{align} $$ so that $\log x! 
= \log\log n - \log\log x$ becomes $$ x\log x = L_2(n) - \frac{L_2(n)}{L_3(n)} + O\left(\frac{L_2(n)L_4(n)}{L_3(n)^2}\right), $$ yielding $$ x = \frac{L_2(n) - \frac{L_2(n)}{L_3(n)} + O\left(\frac{L_2(n)L_4(n)}{L_3(n)^2}\right)}{W\left[L_2(n) - \frac{L_2(n)}{L_3(n)} + O\left(\frac{L_2(n)L_4(n)}{L_3(n)^2}\right)\right]}. $$ We can then use $W(z) = \log z - L_2(z) + O\left(\frac{L_2(z)}{\log z}\right)$, which also follows from this other answer, to obtain the final result of $$ x = \frac{L_2(n)}{L_3(n)} + \frac{L_2(n)(1-L_4(n))}{L_3(n)^2} + O\left(\frac{L_2(n)L_4(n)}{L_3(n)^3}\right). $$ as $n \to \infty$.
Why is a map to a smaller dimensional space not injective?
You're assuming $\dim V>\dim W$, so $\dim V-\dim W>0$.
Lebesgue Continuity for nonmeasurable sets
If the sets $E_i$ are not Lebesgue measurable then the question doesn't make any sense... the sequence $m^*(E_i)$ is not well-defined.
Finding change of basis matrix. What am I doing wrong?
Are you sure that is not the right answer? It seems alright to me. By definition, the "base change matrix" expresses a vector in the old basis in terms of the vectors in the new basis. For example, with the $A$ you have computed $$Ab_1 = \begin{bmatrix} 1/3 & 2/3 & 1 \\ 1/2 & 1/2 & 0 \\ 1/6 & -1/6 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 1/3 \\ 1/2 \\ 1/6 \end{bmatrix}$$ What this means is that the "coordinates" of $b_1$ in the new basis is $(1/3,1/2,1/6)$, which should satisfy $$b_1 = \frac 13 c_1 + \frac 12 c_2 + \frac 16 c_3$$ and it does.
Drawing a second ball from a randomly selected box
Try $$ P(W_2\cap W_1)=\frac{1}{9}\left(\frac{w_1}{w_1+b_1}\frac{w_1-1}{w_1+b_1-1} + \frac{w_2}{w_2+b_2}\frac{w_2-1}{w_2+b_2-1} + \frac{w_3-1}{w_3+b_3-1}\frac{w_3}{w_3+b_3}\right)$$
Brachistochrone problem with friction
Hint: Let $u=y-\mu x$. Then $u'=y'-\mu$ and $u''=y''$, and therefore $$(1+(u'+\mu)^2)(1+\mu^2+\mu u')+2uu''=0.$$
Isomorphism of Real Quaternions and Complex Matrix
You have to prove that the quaternions and the matrices of this form are isomorphic as skew fields. This means that the addition of two quaternions corresponds to the addition of the corresponding matrices, and the same for the product and vice versa, and that all required properties are satisfied (this is easy). Also you have to prove that the inverse of a quaternion corresponds to the inverse of a matrix; this is a consequence of the fact that, if the quaternion $z$ corresponds to the matrix $Z$, we have $\det Z=|z|^2$. To complete the proof, we also have to consider that the quaternions are a ring with an involution (the conjugation), so we have to prove as well that we can define an involution in the ring of matrices and that the conjugate of a quaternion corresponds to the conjugate of the corresponding matrix.
Evaluating the following limit without L'Hopital's help
Taking the log of the expression, you're looking at the limit of $-\frac{\ln(\cos x)}{x^2}$ as $x\to0$. Since $\cos x-1\sim -\frac{x^2}{2}$ and $\ln(1+x)\sim x$, you get by composition $$-\frac{\ln(\cos x)}{x^2}\sim\frac12$$ Taking the exponential finally yields $\sqrt e$.
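A numerical sanity check, assuming the expression in question is $(\cos x)^{-1/x^2}$ (which is what the logarithm above corresponds to):

import math

for x in [0.1, 0.01, 0.001]:
    print(math.cos(x) ** (-1 / x**2))   # tends to sqrt(e) ~ 1.64872
print(math.sqrt(math.e))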
Proof that these two definitions are equivalent
Definition 2 is incorrect: In general, the Lie algebra of $G\subset GL_n$ is not a subset of $G$ (or even of $GL_n$). For example, the zero matrix is an element of the Lie algebra, but it's not an element of $GL_n$. In place of Definition 2, you could write Definition 2$'$: The Lie algebra of a Lie group $G \subset GL_n$ is the set of all $n\times n$ matrices $g$ with the property that for all $t \in \mathbb R$ the element $e^{t g}$ is also in $G$. The equivalence of this and Definition 1 should be proved in most elementary books about Lie groups. You can also find it in my Introduction to Smooth Manifolds (chapters 7 and 20).
For the differentiation of $x^{\frac23} + y^{\frac23} = a^{\frac23}$, why is the substitution $x = a \cos^3\theta$ legal?
It is valid because, let's say you want $x$ to be $8a$; then $(8a)^{\frac 2 3} + y^{\frac 2 3}=a^{\frac 2 3}$, so $y^{\frac 2 3}=-3a^{\frac 2 3}$, which has no real solution for $y$ because the left-hand side is a square and the right-hand side is a negative number. The idea is the same as for the parametrization of the unit circle $x^2+y^2=1$: both $x$ and $y$ can only range between $-1$ and $1$, so we can let $x=\cos\theta$ and $y=\sin\theta$.
How to integrate $\int{\frac{6x}{x^3+8}dx}$
Write the numerator of the last integral as $(x+1)-3$, and break the integral up into two parts. The first integral will be a logarithm, the second an arctangent, but do the calculations yourself.
What is $\lim_{x \to \infty}\lim_{n \to \infty} {{e^x} \over {x^n}} = ??$
Let's limit ourselves to $x>1$. Then $\lim_{n\to\infty}\frac{e^x}{x^n}=0$. As this is true for each $x>1$, we have that $\lim_{x\to\infty}\lim_{n\to\infty}\frac{e^x}{x^n}=0$. Now this is a good example that you have to be very careful about what you can and cannot do with limits. If you try to swap the limits ($\lim_{n\to\infty}\lim_{x\to\infty}\frac{e^x}{x^n}$), it doesn't work (the inner limit is $\infty$ for every $n$). There is even a notion of double (simultaneous) limit $\lim_{x\to\infty, n\to\infty}\frac{e^x}{x^n}$, which does not exist (because, if it did, both the above limits of limits would exist and would coincide).
Sum of two Cosine functions is periodic.
All you need is for the ratio $\ \frac{a_1}{a_2}\ $ to be rational. If $\ \frac{a_1}{a_2}=\frac{k_1}{k_2}\ $, where $\ k_1,k_2\ $ are relatively prime integers, then $$ T=\frac{2\pi k_1}{a_1}=\frac{2\pi k_2}{a_2}\\ $$ is a period of both $\ \cos\big(a_1x+b_1\big)\ $ and $\ \cos\big(a_2x+b_2\big)\ $, and hence a period of $\ \cos\big(a_1x+b_1\big)+\cos\big(a_2x+b_2\big)\ $: \begin{align} \cos\big(a_1(x+nT)&+b_1\big)+\cos\big(a_2(x+nT)+b_2\big)\\ &=\cos\left(a_1\left(x+\frac{2\pi nk_1}{a_1}\right)+b_1\right)+\\ &\hspace{6em}\cos\left(a_2\left(x+\frac{2\pi nk_2}{a_2}\right)+b_2\right)\\ &=\cos\big(a_1x+b_1+2\pi nk_1\big)+\cos\big(a_2x+b_2+2\pi nk_2\big)\\ &=\cos\big(a_1x+b_1\big)+\cos\big(a_2x+b_2\big) \end{align}
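A quick numerical check with a rational ratio $a_1/a_2$ (the specific values are made up):

import math

a1, b1, a2, b2 = 3.0, 0.4, 4.0, 1.1      # a1/a2 = 3/4, so k1 = 3, k2 = 4
T = 2 * math.pi * 3 / a1                  # the common period from above
f = lambda x: math.cos(a1 * x + b1) + math.cos(a2 * x + b2)
print(max(abs(f(x + T) - f(x)) for x in [0.0, 0.7, 1.9, 5.3]))   # ~ 1e-15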
How do you get the area under a curve that is not represented by a function?
You have to know the function to get the area under it. If you don't have the function, then you could approximate it in MATLAB by collecting some sample data from the graph. Besides, a line integral is primarily used to determine arc length, not the area under a curve. And while doing a line integral you also need the function, or you have to parametrize the curve.
If a determinant of a matrix is 0, is the graph formed from it acyclic?
No, if the determinant of the adjacency matrix is $0$, it is not always the case that the graph is a DAG. A simple example is the following: $$A=\begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$ On the other hand, if the graph is a DAG then the determinant of its adjacency matrix is always zero. To see this, first note that a directed graph is acyclic if and only if the vertices can be sorted in such a way that the adjacency matrix has upper triangular form with only zeros on the diagonal. So if $A$ is the adjacency matrix of a DAG and $P$ is the permutation matrix for that order of vertices, then $PAP^{T}$ is strictly upper triangular, so $0=\det(PAP^{T})=\det(P)\det(A)\det(P^{T})=\det(A)$.
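A quick numpy check of the counterexample (the graph with edges $1\to2$ and $2\to1$, plus an isolated vertex, clearly contains a cycle):

import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 0]])
print(np.linalg.det(A))   # 0.0, even though the graph is not acyclic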
Is there a Borel-measurable projection to a closed subgroup
The answer is yes! (Thanks to GEdgar, who pointed out the relation with "selection theorems".) In the book "A Course on Borel Sets", page 186, they prove a claim that implies the following. Theorem: Let $H,G$ be as in the question. Then there exists a "cross-section" $s:G/H\rightarrow G$ such that $s\circ p = \mathrm{id}$, where $p:G\rightarrow G/H$ is the quotient map $p(g)=gH$. Proof: Let $\Pi = \{a_i H : a_iH \text{ is a coset of } H \}$; then by the theorem in the book there exists a Borel set $S$ such that $S\cap a_i H = \{s_i\}$. This defines a (Borel) map $s:G/H\rightarrow G$ such that $s(u) = s_i$ if $u\in a_i H$. (The map is Borel because $s^{-1}(U)=s^{-1}(U\cap S) = p(U\cap S)$ and the quotient map is open, so it sends Borel sets to Borel sets.) This proves the theorem. The theorem leads to our result, as one can define a map $\varphi:G\rightarrow H$ by $\varphi(g) = g\cdot (s(p(g)))^{-1}$. Since for every $h\in H$ one has $p(gh)=p(g)$, we have $\varphi(gh)=\varphi(g)\cdot h$. This also answers positively my (now obviously related) other question: $G\cong G/H\times H$ measurably.
Example of a function $f:\mathbb R\to \mathbb R$ which is differentiable and bijective but whose inverse is not differentiable.
$f'(x)$ can be zero, e.g. $f(x)=x^3$: its inverse $y\mapsto y^{1/3}$ fails to be differentiable at $0$ precisely because $f'(0)=0$.
Sum of sum of factors of numbers that are < 1
Write $x = \frac ab$. We have $$ \begin{align*} \sum_{i=1}^N{\sum_{j=0}^i}a^j\cdot b^{i-j} &= \sum_{i=1}^Nb^i{\sum_{j=0}^i}x^j\\ &= \sum_{i=1}^Nb^i \frac{x^{i+1} - 1}{x - 1}\\ &= \frac1{x-1}\left(x\sum_{i=1}^N b^ix^i - \sum_{i=1}^Nb^i\right), \end{align*} $$ and now it's a matter of evaluating geometric series, one with ratio $bx = a$ and one with ratio $b$.
Numbers $2^{2017}$ and $5^{2017}$ are written back to back. How many digits are written?
As indicated in the comments, the number of digits of $x$ is $\lfloor \log_{10}(x) \rfloor+1$. So the number of digits of the number you want is $$\lfloor \log_{10}(2^{2017}) \rfloor + \lfloor \log_{10}(5^{2017})\rfloor+2=\lfloor 2017 \cdot \log_{10}(2) \rfloor + \lfloor 2017\cdot \log_{10}(5)\rfloor+2.$$ Note that the logarithms to base $10$ of both $2$ and $5$ are irrational, and that if $x+y$ is an integer but neither $x$ nor $y$ is then $$\lfloor x \rfloor+ \lfloor y \rfloor=x+y-1.$$ Thus the number of digits you want is $2018$. Happy New Year! (...and +1).
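An exact check in Python, whose integers have arbitrary precision:

print(len(str(2**2017)) + len(str(5**2017)))   # 2018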
Finite injective dimension
The global dimension of a ring is both the sup of projective dimensions and the sup of injective dimensions (both are measured by vanishing of $\text{Ext}$), so if $A$ is regular (and local), then every module has finite injective dimension. In fact, if $A$ is local with residue field $k$, then $\text{gldim } A = \text{injdim } k$, so if $A$ is (Gorenstein but) not regular, then $k$ has infinite injective dimension.
Why are homogeneous and non-homogeneous first order differential equations called homogeneous and vice versa?
A function is homogeneous of degree $n$ if $$f(kx,ky) = k^nf(x,y).$$ This is a two-variable example, but you could have more. A homogeneous equation is one that might look like $$f(x,y,z) =0$$ where $f$ is a homogeneous function. For instance $$x^n+y^n-z^n =0.$$ What's special about a homogeneous equation is that you can multiply all the variables by a constant and it doesn't really change the equation. If you multiply $x$, $y$ and $z$ by a constant $k$ in the last equation, you can factor out a $k^n$ and divide it out. In a homogeneous equation, if $(x,y,z)$ is a solution, then so is $(kx,ky,kz).$ So now, a homogeneous differential equation is one where you can multiply a solution by a constant and it's still a solution. When you solve a homogeneous linear equation and find a solution like $y=e^{2t}$, then you know that $y=ke^{2t}$ is also a solution. You get infinitely many solutions for the price of one. Non-homogeneous equations don't have that nice property.
Please verify my proof for a continuous function attaining a minimum value on an interval.
This is not valid. How do you know that the continuous image of a closed interval is a closed interval? This takes proof - indeed, it's essentially equivalent to what you're trying to prove in the first place. You've now added an argument for why the continuous image of a closed set is closed. This has the right idea, but doesn't fully work. For example, "$\lim_{x\rightarrow c}f(x)=f(c)$" is incorrect: it should be $x\rightarrow d$ for some $d$ with $f(d)=c$, but you don't know such a $d$ exists! It's better to argue as follows. Suppose $c$ is a point in the closure of $f([a, b])$; we want to show $c\in f([a, b])$. Since $c$ is in the closure of $f([a, b])$, we can find a sequence $d_i$ of points in $f([a, b])$ such that $d_i\rightarrow c$. Now, since each $d_i$ is in $f([a, b])$, we can find a sequence $e_i\in [a, b]$ such that $f(e_i)=d_i$. Now what can you say about the sequence $e_i$?
Sequence with partial limits (0,1]
Consider the sequence $$0,1,0,1/2,1,0,1/3,2/3,1,0,1/4,2/4,3/4,1,0,1/5,2/5,3/5,4/5,1,0,1/6,2/6,3/6,\dots.$$ Every real number in the interval $[0,1]$ is the limit of a subsequence of the above sequence. Remark: We cannot get the half-open interval $(0,1]$ as the set of subsequential limits. For take $n$ very large: if $\frac{1}{n}$ is a limit point of an infinite subsequence, there are elements of the subsequence that are within $\frac{1}{2n}$ of $\frac{1}{n}$, so there are infinitely many elements of the sequence that are in the interval $(\frac{1}{2n},\frac{3}{2n})$. Thus $0$ is a limit point of the sequence.
Show IVP solution exactness in given interval
I believe you mean existence - not exactness. In any case, you have a Riccati equation, which is linearized by the change of variable $$y=-\frac{u'}{u}, \tag{1} $$ into $$u''=-\mathrm{e}^{-t^2} u.$$ The initial conditions are transformed into $$u(0)=1,\quad u'(0)=0.$$ (In fact, the scaling symmetry of $(1)$ allows you to take any nonzero value for $u(0)$.) Singularities will arise in $y$ precisely when $u=0$. Suppose we restrict ourselves to a time interval $t \in [0,T]$ in which $u$ is positive. We then have the following differential inequality over that interval: $$u'' \geq -u ,$$ and the theory of linear differential inequalities implies that, upon integration, $$u(t) \geq \cos(t). $$ Hence $u$ is positive at least in the interval $[0,\pi/2)$, and correspondingly, $y$ is smooth there.
Simple problem on congruence.
A more compact way is to observe that the remainder of $2101$ on division by $26$ is $21,$ so she paid $21$ to get to the office today. This is often written $2101 \equiv 21 \pmod{26}.$
Is the homotopy type of an aspherical space determined by its fundamental group?
As proposed by studiosus in the comments, the standard unit circle and the pseudocircle (http://en.wikipedia.org/wiki/Pseudocircle) serve as a counterexample, since their universal covering spaces are the real line and the Khalimsky line, both of them contractible. (The contractibility of the latter follows from http://arxiv.org/pdf/0901.2621.pdf .)
When taking a square root of a negative number, why is the result i instead of positive and negative i?
You are correct that both $4i$ and $-4i$ are square roots of $-16,$ just as $4$ and $-4$ are both square roots of $16.$ Nonetheless we like sometimes to pretend that "the square root" is a function. One convention for defining a function on the complex numbers that results in a square root is to write a complex number $z$ in polar coordinates as $$ z = x+iy = r\cos\theta + i r\sin\theta$$ for $r\ge 0$ and $0 \le \theta < 2\pi$ and then define $$\sqrt{z} = \sqrt{r}\cos(\theta/2) + i\sqrt{r} \sin(\theta/2),$$ which you can check has the property that $(\sqrt{z})^2 = z.$ This results in every square root being in the upper half-plane. By this definition, $$\sqrt{-16} = 4i. $$ However, this function has some odd properties. For one, it is discontinuous at the positive real axis since a point just below the positive real axis will get mapped to a point just above the negative real axis whereas points on the positive real axis are mapped to points on the positive real axis. Also, it doesn't have many of the arithmetical properties we are used to, and this makes calculation tricky. For instance, we have $$ \sqrt{-4} \sqrt{-9}=(2i)(3i) = -6 \ne \sqrt{(-4)(-9)} = 6.$$ Generally speaking, it is better to think of the square root as a multi-valued function, but it can be nice to use the single-valued version for the sake of writing down arithmetical expressions with unambiguous meaning. Either way, thought and care are required.
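For comparison, Python's cmath module makes a similar (though not identical) branch choice; it agrees on negative real numbers and shows the same failure of the product rule:

import cmath

print(cmath.sqrt(-16))                   # 4j
print(cmath.sqrt(-4) * cmath.sqrt(-9))   # (-6+0j)
print(cmath.sqrt((-4) * (-9)))           # (6+0j)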
Need help understanding question about how many languages can fit set?
They mean that the total length of the string must be exactly $n$. Given an alphabet $\Sigma$, i.e., a set of symbols, a language over $\Sigma$ is simply any set of finite strings of symbols in $\Sigma$. In this problem $\Sigma=\{0,1\}$, and a language is any set of finite strings of zeroes and ones. Restricting yourself to languages recognized by DFAs would limit you to regular languages; the question, however, is about languages in general. Imagine building a string of $n$ symbols, each of which is $0$ or $1$, one symbol at a time. When you choose the first symbol, there are $2$ possibilities. No matter which choice you make, there are $2$ possibilities for the second symbol, so there are altogether $2\cdot 2$ choices for the first two symbols: $00,01,10$, and $11$. Each time you add another symbol, it can be $0$ or $1$, so you double the number of possibilities. After choosing $3$ symbols, for instance, you can have $000$ or $001$, $010$ or $011$, $100$ or $101$, $110$ or $111$ – a total of $2\cdot 4=8$ strings. Thus, there are $$\underbrace{2\cdot 2\cdot 2\cdot\ldots\cdot 2}_{n\text{ twos}}=2^n$$ possible strings of length $n$. A language consisting entirely of binary strings of length $n$ must be a subset of this collection of $2^n$ possible strings. A set of $m$ objects has $2^m$ subsets, so this set of $2^n$ strings of length $n$ has $2^{2^n}$ subsets. However, one of these subsets is the empty set, and the question asks for the number of non-empty languages, so we need to subtract one to get the desired number, $2^{2^n}-1$.
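A brute-force check for a small $n$, counting the non-empty sets of length-$n$ binary strings directly:

from itertools import chain, combinations, product

n = 3
strings = [''.join(s) for s in product('01', repeat=n)]      # the 2^n strings of length n
languages = list(chain.from_iterable(
    combinations(strings, r) for r in range(1, len(strings) + 1)))
print(len(languages), 2 ** (2 ** n) - 1)                     # 255 255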
Every open ball of a normed vector space $E$ is homeomorphic to the entire space $E$.
I think that when he says "$f$ is continuous from $E$ to $B$", he means that the important part, the one being justified, is "(...) to $B$", and he takes the continuity itself to be a trivial matter (which it is, since the map is multiplication by $\frac{1}{1+\Vert x \Vert }$: the norm is continuous, multiplication by a scalar is continuous, inversion is continuous in $\mathbb{R}$ and addition is continuous in $\mathbb{R}$, so the function is continuous). Likewise, when he says that $g$ is continuous, I think he just intends to mean that it is well-defined. Now, as to why the fact that any two open balls are homeomorphic helps in this case: it is just a matter of working in the particular case of the open ball centered at $0$ with radius $1$, and then composing with a homeomorphism to obtain a homeomorphism from the whole space to any ball. Just for the sake of completeness, here is why $\frac{1}{1+\Vert x \Vert}x$ is continuous: consider the functions $\Vert \cdot \Vert: E \rightarrow \mathbb{R}$, $x \mapsto \Vert x \Vert$; $m:\mathbb{R} \times E \rightarrow E,$ $(\lambda, x) \mapsto \lambda x$; $i: \mathbb{R}\backslash\{0\} \rightarrow \mathbb{R},$ $r \mapsto \frac{1}{r}$; and $t_1: \mathbb{R} \to \mathbb{R}$, $r \mapsto 1+r$. We have that $f=m \circ \bigg( \big(i \circ t_1 \circ \Vert \cdot \Vert \big) \times Id\bigg)$, which is a composition of continuous functions.
Number of orbits of the action of $GL_3(\mathbb{R})$ on $\mathbb{R}^3$
Consider any two vectors in $\mathbb R^3$. Is there always a matrix in $\operatorname{GL}_3(\mathbb R)$ that transforms one into the other? Or are there exceptions where this is not possible?
Is the Hausdorff distance a metric on the set of closed bounded subsets?
When working with the Hausdorff distance, it's better to work with a metric space where $d\le 1$. In principle, this is not a problem, since every metric space is homeomorphic to one where the metric is bounded. Given this modification, we can define, for arbitrary subsets of $X$, $$ h(A,B):=\begin{cases} 0 & \text{if } A=B=\emptyset\\ 1 & \text{if exactly one of $A$ and $B$ is $\emptyset$}\\ \max\left\{\sup_{a\in A}r(a,B), \sup_{b\in B}r(A,b)\right\}& \text{if }A\neq\emptyset \wedge B\neq\emptyset \end{cases} $$ Then you can show that $h$ is a pseudometric on $\mathcal{P}(X)$, the key being the inequality $$ d(x,Y)≤d(x,Z)+h(Y,Z) $$ where we define $d(x,\emptyset)=1$. You can also prove that $h(A,B)=0\iff \bar A=\bar B$, which shows that $h$ is a metric on the family of closed sets. When dealing with compact sets, you get easier proofs of the facts above, together with many extra properties. See I.4.F in Kechris' Classical Descriptive Set Theory.
Is an algebraic equation a form of algebraic expression, or are they different?
Agreeably, one can define an algebraic expression as follows: Definition: An algebraic expression is a mathematical expression that consists of variables, numbers and operations, and the value of this expression is allowed to change. Also, one can define an algebraic equation as follows: Definition: An algebraic equation is a statement that the values of two mathematical expressions are equal, which has one or more variables. I.e., we can write the set of algebraic equations as $$\mbox{Set of algebraic equations}=\{\mbox{Algebraic Equations}\}=\{\mbox{Algebraic expression}_1=\mbox{Algebraic expression}_2\}$$ Thus, the answer to your question is NO - by definition, an algebraic equation is NOT a form of an algebraic expression, rather, it is a statement of equality between two algebraic expressions.
Show that every nearly paracompact space is almost paracompact but the converse is not true
You really should think a bit harder about the examples that you’ve already received. Just in the case of this question, one of them answers the present question: the space described in this answer is almost compact, hence almost paracompact, and the same open cover that user87690 used to show that it is not nearly compact works to show that it is not nearly paracompact, either.
Matrix Commutativity - Integration
It is sufficient to find any two non-commuting (skew-symmetric) matrices $P,Q$. Let $p_{i},q_{i}$ denote the entries of $P,Q$ respectively. If we then define $$ B(t) = \pmatrix{0&-p_{3}\;t^{q_3/p_3} & p_2\;t^{q_2/p_2}\\ p_{3}\;t^{q_3/p_3} & 0 & -p_{1}\;t^{q_1/p_1}\\ -p_2\;t^{q_2/p_2}&p_{1}\;t^{q_1/p_1}&0} $$ then $B'(1)B(1) \neq B(1)B'(1)$. A similar trick using $\sin(t)$ or $e^t$ gives you such a $B(t)$ where the function is necessarily smooth at $0$.
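A small numerical sketch of this construction with made-up values $p=(1,2,3)$ and $q=(2,1,1)$ (any non-parallel choices work), checking that $B'(1)B(1)\neq B(1)B'(1)$:

import numpy as np

def skew(v):
    # 3x3 skew-symmetric matrix built from v = (v1, v2, v3), matching B(t) above
    v1, v2, v3 = v
    return np.array([[0., -v3, v2],
                     [v3, 0., -v1],
                     [-v2, v1, 0.]])

p, q = (1., 2., 3.), (2., 1., 1.)
B = lambda t: skew(tuple(p[i] * t ** (q[i] / p[i]) for i in range(3)))
h = 1e-6
B1 = B(1.0)
dB1 = (B(1.0 + h) - B(1.0 - h)) / (2 * h)   # numerical B'(1); analytically it equals skew(q)
print(np.allclose(dB1 @ B1, B1 @ dB1))      # False: the two products differ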
Prove that $n\cdot [a]=[a+ \sqrt{2}n]$ is an action of $Z$ on the set $S$.
Try this: $(n+m)\cdot [a]=[a+\sqrt{2}\times (n+m)]=[a+\sqrt{2}\times m+\sqrt{2}\times n]=n\cdot [a+\sqrt{2}\times m]=n\cdot (m\cdot [a])$
Showing a set of functions $F$ is bounded
Since $|f(x) - f(y)| \le |x - y|$, $f$ is continuous (and Lipschitz continuous with constant at most $1$) as you've noted. By the mean value theorem, such an $x_0$ must exist: the MVT for integrals states that for some $x_0$, $$f(x_0) = \frac{1}{1 - 0} \int_0^1 f(t)\, dt = \frac{0}{1} = 0.$$ Then since $x_0 \in [0, 1]$ and $x \in [0, 1]$, it is immediate that $|x_0 - x| \le 1$; they are elements of the same interval of length $1$. Hence $|f(x)| = |f(x) - f(x_0)| \le |x - x_0| \le 1$ for every $x \in [0,1]$, so the functions in $F$ are uniformly bounded by $1$.
Standard deviation of $Z = 9 - 3Y^3$
Hint: You can transform the random variable.

Z  | Pr(Y = y)
12 | 0.4
 9 | 0.5
 6 | 0.1

Now you can calculate $Var(Z)$ directly.
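A direct computation of the variance and standard deviation from that table:

vals  = [12, 9, 6]
probs = [0.4, 0.5, 0.1]
mean = sum(v * p for v, p in zip(vals, probs))
var  = sum((v - mean) ** 2 * p for v, p in zip(vals, probs))
print(var, var ** 0.5)   # Var(Z) and the standard deviation of Z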
Solving a second order non-linear ODE (rocket height function) $h''(t)=\frac{F}{m+k-r\cdot t}-\frac{G\cdot M}{(R+h)^{2}}$
First we solve by a numerical method (the equation cannot be solved analytically; Maple and Mathematica cannot solve it). Mathematica code:

G = 6.67408*10^-11; M = 5.9722*10^24; R = 6378.14*1000; F = 68000; m = 2340; k = 3000; r = 68; t0 = k/r // N

For $t=k/r$ this gives: 44.11764706 seconds. $$h''(t)=\frac{F}{k+m-r t}-\frac{G M}{(h(t)+R)^2}$$

sol = NDSolve[{h''[t] == F/(m + k - r *t) - (G*M)/(R + h[t])^2, h[0] == 0, h'[0] == 0}, h, {t, 0, t0}];
((h[t] /. sol) /. t -> t0)

Putting the numeric values into the solution: the rocket will reach a height of 6192.405405 meters (6.2 km). Second, we make an approximation at the point $h=0$ by expanding in a series: $$\frac{G M}{(h(t)+R)^2}=\frac{G M}{R^2}-\frac{2 (G M) h(t)}{R^3}+O\left(h(t)^2\right)$$ Substituting into the equation: $$h''(t)=\frac{F}{k+m-r t}+\frac{2 G M h(t)}{R^3}-\frac{G M}{R^2}$$ With the help of a CAS we can solve this: $h(t)=-\frac{R e^{-\frac{2 \sqrt{2} \sqrt{G} \sqrt{M} (k+m+r t)}{r R^{3/2}}} \left(\sqrt{2} F \sqrt{R} \text{Ei}\left(-\frac{\sqrt{2} \sqrt{G} (k+m) \sqrt{M}}{r R^{3/2}}\right) e^{\frac{\sqrt{2} \sqrt{G} \sqrt{M} (3 k+3 m+r t)}{r R^{3/2}}}-\sqrt{2} F \sqrt{R} \text{Ei}\left(\frac{\sqrt{2} \sqrt{G} (k+m) \sqrt{M}}{r R^{3/2}}\right) e^{\frac{\sqrt{2} \sqrt{G} \sqrt{M} (k+m+3 r t)}{r R^{3/2}}}-\sqrt{2} F \sqrt{R} e^{\frac{\sqrt{2} \sqrt{G} \sqrt{M} (3 k+3 m+r t)}{r R^{3/2}}} \text{Ei}\left(-\frac{\sqrt{2} \sqrt{G} \sqrt{M} (k+m-r t)}{r R^{3/2}}\right)+\sqrt{2} F \sqrt{R} e^{\frac{\sqrt{2} \sqrt{G} \sqrt{M} (k+m+3 r t)}{r R^{3/2}}} \text{Ei}\left(\frac{\sqrt{2} \sqrt{G} \sqrt{M} (k+m-r t)}{r R^{3/2}}\right)-2 \sqrt{G} \sqrt{M} r e^{\frac{2 \sqrt{2} \sqrt{G} \sqrt{M} (k+m+r t)}{r R^{3/2}}}+\sqrt{G} \sqrt{M} r e^{\frac{\sqrt{2} \sqrt{G} \sqrt{M} (2 k+2 m+r t)}{r R^{3/2}}}+\sqrt{G} \sqrt{M} r e^{\frac{\sqrt{2} \sqrt{G} \sqrt{M} (2 k+2 m+3 r t)}{r R^{3/2}}}\right)}{4 \sqrt{G} \sqrt{M} r}$

sol2 = DSolve[{h''[t] == F/(m + k - r *t) - (G M)/R^2 + (2 (G M) h[t])/R^3, h[0] == 0, h'[0] == 0}, h[t], t] // FullSimplify
h[t] /. sol2 /. t -> t0
(* {{h[t] -> -(1/(4 Sqrt[G] Sqrt[M] r)) E^(-((2 Sqrt[2] Sqrt[G] Sqrt[M] (k + m + r t))/(r R^(3/2)))) R (E^((Sqrt[2] Sqrt[G] Sqrt[M] (2 (k + m) + r t))/( r R^(3/2))) (-1 + E^((Sqrt[2] Sqrt[G] Sqrt[M] t)/R^( 3/2)))^2 Sqrt[G] Sqrt[M] r + Sqrt[2] F Sqrt[ R] (E^((Sqrt[2] Sqrt[G] Sqrt[M] (3 (k + m) + r t))/( r R^(3/2))) (ExpIntegralEi[-(( Sqrt[2] Sqrt[G] (k + m) Sqrt[M])/(r R^(3/2)))] - ExpIntegralEi[-((Sqrt[2] Sqrt[G] Sqrt[M] (k + m - r t))/( r R^(3/2)))]) + E^((Sqrt[2] Sqrt[G] Sqrt[M] (k + m + 3 r t))/( r R^(3/2))) (-ExpIntegralEi[( Sqrt[2] Sqrt[G] (k + m) Sqrt[M])/(r R^(3/2))] + ExpIntegralEi[(Sqrt[2] Sqrt[G] Sqrt[M] (k + m - r t))/( r R^(3/2))])))}}*)

Putting the numeric values into the solution: the rocket will reach a height of 6192.406503 meters. The difference between the numerical and analytical results is very small, about 1 millimeter! Another approximation (see: Altitude): $$h''(t)=\frac{F}{k+m-r t}-g$$ where $g$ is the standard acceleration of free fall.

g = (G*M)/R^2 (* 9.798004977 *)

and the solution: $$h(t)=\frac{r t (2 F-g r t)+2 F (k+m-r t) (\log (k+m-r t)-\log (k+m))}{2 r^2}$$

sol3 = DSolve[{h''[t] == F/(m + k - r *t) - g, h[0] == 0, h'[0] == 0}, h[t], t] // FullSimplify
(h[t] /. sol3 /. t -> t0)
(* {{h[t] -> (r t (2 F - g r t) + 2 F (k + m - r t) (-Log[k + m] + Log[k + m - r t]))/(2 r^2)}}*)

The rocket will reach a height of 6190.114097 meters. The difference between the numerical and analytical results is about 2 meters!
Definition of meromorphic differentials
The notation $\Omega$ is used for the cotangent bundle of a complex manifold. Its sections are known as holomorphic differential forms. In the case of $V \subseteq \mathbb{C}$, the tangent bundle is the trivial line bundle. Therefore, $\Omega = \Omega^1$, the differential $1$-forms, and given a local coordinate $q$, the differential $1$-form $dq$, which is the dual of the vector field (section for the tangent bundle) $\frac{\partial}{\partial q}$, is a basis for this $1$-dimensional space. As with any bundle, one may consider its meromorphic sections, i.e. elements $\omega \in \Omega(U)$ where $U$ is an open subset such that $V \setminus U$ consists of isolated points, and $\omega$ has poles in $V \setminus U$. Note that in this case this is simply $\Omega^{(1)}(V) = \{ f(q) dq \}$ where $f(q)$ is a meromorphic function. Finally, we can tensor bundles. One can think of $\Omega^{\otimes n}$ as the $n$-power tensor bundle of $\Omega^{(1)}$, or as multilinear maps on $n$-tuples of vector fields. However, in this degenerate case, this is again a trivial line bundle, spanned by the element $(dq)^{\otimes n}$, which is denoted in the book as $(dq)^n$. Its meromorphic sections are once more simply of the form $f(q) (dq)^{\otimes n}$, where $f$ is a meromorphic function.
If $P(x)=ax^2+bx+c$ and $Q(x)=-ax^2+dx+c$, $ac$ is not $0$, then prove that $P(x)\cdot Q(x)=0$ has at least 2 real roots.
A different approach. The product polynomial has the form $$R(x):=P(x)\cdot Q(x)=-a^2x^4+\dots+c^2.$$ Hence if $a\cdot c\not=0$ then $$\lim_{x\to\pm\infty} R(x)=-\infty\quad\mbox{and}\quad R(0)=c^2>0.$$ Therefore, by continuity, there are at least two distinct zeros: one in $(-\infty, 0)$ and another in $(0,+\infty)$. P.S. Note that the same approach works for two polynomials $$P(x)=ax^m+\dots +c,\quad Q(x)=-ax^n+\dots +c$$ with $m,n\geq 1$ and $m+n$ an even integer.
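A quick numerical illustration with made-up coefficients (any $a, c \neq 0$ will do):

import numpy as np

a, b, c, d = 2., -1., 3., 5.
R = np.poly1d([a, b, c]) * np.poly1d([-a, d, c])    # R(x) = P(x) Q(x)
real_roots = R.roots[np.isclose(R.roots.imag, 0)].real
print(real_roots.min() < 0 < real_roots.max())      # True: a real root on each side of 0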
Importance of Quaternion groups
The quaternion group $Q_8$ is one of the two non-abelian groups of order $8$, see here, and generalised quaternion groups are non-abelian groups of order $2^n$ for $n\ge 3$. These groups are of fundamental importance in algebra, geometry, number theory and many other topics. Just to give an easy example, consider the following result: Theorem: Let $F$ be a finite field not of characteristic $2$. The $2$-Sylow subgroups of $SL_2(F)$ are generalized quaternion groups. If you are interested in physics, see the article The quaternion group and modern physics.
A fake proof for a function taking on all reals in any interval
Your argument just proves that for each interval, there is a function for that interval. This is not the same as proving there is one function which works for all intervals. For example, if $f : (-1, 1)\to \mathbb R$ is $\tan\left(\frac{\pi}2x\right)$ then $f$ works for $(-1,1),$ but it doesn’t work for $(0, 1)$ – you’d need a different function for $(0,1).$ This is an example which shows that you can’t, in general, swap existential quantifiers without changing the meaning of a statement. $$\exists f:\forall (a,b): f((a,b))=\mathbb R$$ is not the same thing as: $$\forall (a,b):\exists f: f((a,b))=\mathbb R.$$
Why should I avoid the Frobenius Norm?
The Frobenius norm is actually quite nice, and also natural. It is defined simply by $$\|A\|_F^2 = \mbox{trace}(A'A)$$ and since it is naturally an inner-product norm, it makes optimization, etc. with it much easier (think quadratic programs instead of semidefinite programs). Numerical analysts probably like the operator norm because they often exploit $\|Ax\| \le \|A\| \|x\|$, and if you use the operator-2 norm, you get a tighter inequality (in general). Otherwise, the norm you use should be governed by the application where you are trying to use it.
Homomorphism between $S_n$ to $\mathbb Z$
An element of finite order is by a homomorphism always mapped to an element of finite order. This is proven quite directly from the defining properties of homomorphisms. So given an $s\in S_n$ and a homomorhism $h:S_n\to \Bbb Z$, what can $h(s)$ possibly be? Does this argument change for $D_n$ instead of $S_n$?
Question about separable extension
One way is obvious. Now use induction on $n$. For the other direction, use the fact that $K/F$ is separable iff the number of $F$-isomorphisms of $K$ into $L$ equals $[K:F]$, where $L$ is some normal extension of $F$ containing $K$. Since $u_1$ is separable over $F$, the number of $F$-isomorphisms of $F(u_1)$ into $L$ is $[F(u_1):F]$. By induction $K/F(u_1)$ is separable, and hence the number of $F(u_1)$-isomorphisms of $K$ into $L$ is $[K:F(u_1)]$. So $$\#(F\text{-isomorphisms of } K\to L)= \#(F\text{-isomorphisms of } F(u_1)\to L)\cdot\#(F(u_1)\text{-isomorphisms of } K\to L)=[F(u_1):F]\cdot[K:F(u_1)]=[K:F].$$ Hence $K/F$ is separable.
How does one begin to even write a proof?
Here are some steps that I find useful when writing proofs, not necessarily in order.

1. If the statement of the problem uses terms you aren't entirely comfortable with, translate it into simpler language, or even paraphrase the problem in regular English. For example, a statement about the linearity of the derivative might be shortened to "the sum of the derivatives is the derivative of the sum".

2. Determine the hypotheses you have to work with. One of the worst things you can try to do is to attack the problem from first principles alone. If you are given specific hypotheses, start with them! It's quite possible that your conclusion is false without those hypotheses.

3. Figure out what, exactly, you are being asked to show. You might find yourself running in circles if you don't have a clear direction in mind.

4. Look at the relevant chapter(s) of your textbook, and see what theorems relate to the hypotheses you are given. I cannot stress this enough. If your hypothesis is that "$f$ is continuous on a compact set", look at every lemma and theorem in the chapter that begins "Let $f$ be continuous on a compact set...", even if further assumptions are necessary.

5. Often, if you are given many hypotheses, try to figure out how they work together. A continuous function on a compact set is better than a continuous function and a compact set (as separate objects).

6. Ask yourself: "Is there any statement or result that would help me get to my conclusion? Is there anything that would help if it were true?" This is how you make your own lemmas and propositions.

7. Most importantly: after you've translated the problem into a context you understand, ask yourself "why should this be true?" It might help to draw a picture, or consider some explicit examples. In my experience, when it comes to writing proofs, especially in analysis, if you don't know why something should be true, you'll have a hell of a time trying to prove it. When you can convince yourself why something should be true, all that's left is to translate that intuition into rigorous mathematics. While that requires some technical skill, that is something you will pick up over time -- in the long run, the intuitive answer is often the most important part.
How to find the interval of validity for differential equations that are difficult to simplify by hand?
Hint: Considering $$(y^2-1)y' = t-1,$$ start by switching variables (regard $t$ as a function of $y$) and consider $$y^2-1=t'(t-1).$$ Now $u=t-1$ gives $$y^2-1=u u'=\frac 12 (u^2)'\implies \frac 12 u^2+C=\frac 13 y^3-y$$
If $f,g \in \mathscr{R[a,b]}$ , then $\sqrt{f^2+g^2}\in \mathscr{R[a,b]}$
$f,g$ are Riemann integrable $\implies f^2,g^2$ are Riemann integrable.

$f,g$ are Riemann integrable $\implies f+g$ is Riemann integrable.

$h$ is Riemann integrable and $h\ge 0 \implies \sqrt h$ is Riemann integrable.

Now combine all three.
Swapping signs in analysis proofs
A lot of insight comes from having counterexamples in your head. For example, take the 1st, which is essentially the same as the 4th: I can never remember which way Fatou's lemma goes until I do some simple examples like: (Counter)example: Let $f_n(x) = 1/n$ for all $x$. Clearly $\int_{-\infty}^{\infty} f_n(x)dx = \infty$ for all $n$, so its limit is also $\infty$. But $f_n(x)$ converges point wise to $0$, so the integral of the limit is $0$. So we now remember that Fatou's lemma for non-negative functions $f_n(x)$ is: \begin{eqnarray*} \liminf_{n\rightarrow\infty} \int f_n \geq \int \liminf_{n\rightarrow\infty} f_n \end{eqnarray*} The Lebesgue dominated convergence theorem is the usual thing to apply if you want to get equality results. That is actually proven directly from Fatou: Suppose $f_n(x)$ are non-negative, converge pointwise to $f(x)$, and there is an L1-integrable $y(x)$ such that $f_n(x) \leq y(x)$ for all $x$ and $n$. Then $y(x)-f_n(x)$ is non-negative, so by Fatou: \begin{eqnarray*} \liminf_{n\rightarrow\infty} \int (y-f_n) \geq \int (y-f) \end{eqnarray*} Since $\int y$ is finite we can subtract it from both sides to get: \begin{eqnarray*} \liminf_{n\rightarrow\infty} \int -f_n \geq \int -f \end{eqnarray*} Multiplying the above by $-1$ gives: \begin{eqnarray*} \limsup_{n\rightarrow\infty} \int f_n \leq \int f \end{eqnarray*} On the other hand, by regular Fatou we know for the non-negative function $f_n(x)$: \begin{eqnarray*} \liminf_{n\rightarrow\infty} \int f_n \geq \int f \end{eqnarray*} The previous two inequalities together imply that the $\limsup$ and $\liminf$ are the same, so: \begin{eqnarray*} \lim_{n\rightarrow\infty} \int f_n = \int f \end{eqnarray*} Then the general Lebesgue result for functions $f_n(x)$ that are possibly negative uses a bounding function $y(x)$ that satisfies $|f_n(x)|\leq y(x)$ for all $x$, and is proven by breaking $f_n(x)$ into positive and negative parts. The last one is also directly related to this issue of pushing limits thru integrals. Assume $f(x,t)$ is L1-integrable (over $t$) for all $x$. So we can talk about $\int f(x+h,t)dt - \int f(x,t)dt$ without worrying about pesky cases of $\infty - \infty$. Then, from the definition of derivative we wish to look at: \begin{eqnarray*} \lim_{h\rightarrow 0}\int \frac{f(x+h,t)-f(x,t)}{h}dt \end{eqnarray*} So we can define $g_h(t) = \frac{f(x+h,t)-f(x,t)}{h}$. If we assume $\lim_{h\rightarrow0}g_h(t) = \frac{\partial}{\partial x}f(x,t)$, then we can use the Lebesgue Dominated Convergence theorem to show $\lim_{h\rightarrow 0} \int g_h(t)dt = \int \frac{\partial}{\partial x}f(x,t)dt$. This is justified if we can find an L1-integrable bounding function $y(t)$ that satisfies $|g_h(t)| \leq y(t)$ for all $t$ and $h$. Such bounding functions can often be found by exploiting certain properties of $f(x,t)$, such as Lipschitz properties and/or finite support over $t$. The second property is interesting and had me fooled for a bit! I found a link to a good counterexample here: Under what conditions can I interchange the order of limits for a function of two variable? The simplest (not the strongest) sufficient condition is that $f(x,y)$ be continuous at all points in an open neighborhood of $(a,b)$ (including $(a,b)$ itself). You can prove that and it leads to good "geometric" intuition. 
A more general sufficient condition for $\lim_{x\rightarrow a} [\lim_{y\rightarrow b} f(x,y)]$ and $\lim_{y\rightarrow b} [\lim_{x\rightarrow a} f(x,y)]$ to exist and be equal is if all of the following three conditions hold: (i) $f(x,y)$ is continuous at $(a,b)$. (ii) There is a $c>0$ such that $\lim_{x\rightarrow a} f(x,y)$ exists as a real number whenever $0<|y-b|<c$. (iii) There is a $d>0$ such that $\lim_{y\rightarrow b} f(x,y)$ exists as a real number whenever $0<|x-a|<d$. Peter Franek's comment perhaps leads to the most "geometrical" intuition (for the 3rd problem). You can consider functions that have their actual value uniformly pushed down to 0, but they still oscillate wildly.
Is there an easy/fast method for calculating $\text{P}(X_i = \text{max}(X_1, \dots, X_n)), \forall i$ where $X_i \sim \text{N}(\mu_i, \sigma_i)$?
We have \begin{align} P\left(X_i = \max(X_1, \dots, X_n)\right) &= E\left(\mathbb{I}_{\{X_i = \max(X_1, \dots, X_n)\}} \right)\\ &= E\left(E(\mathbb{I}_{\{X_i = \max(X_1, \dots, X_n)\}}|X_i) \right)\\ &= E\left(P(X_i = \max(X_1, \dots, X_n)|X_i) \right)\\ &= E\left(P(\bigcap_{k \ne i} \{X_k \le X_i\} |X_i) \right)\\ &= E\left( \prod_{k \ne i} P(X_k \le X_i |X_i) \right) \tag{1}\\ &= E\left( \prod_{k \ne i} \Phi\left(\frac{X_i-\mu_k}{\sigma_k}\right) \right) \\ &= \frac{1}{\sigma_i\sqrt{2\pi}}\int_\mathbb{R}\left(\exp\left(\frac{-(x-\mu_i)^2}{2\sigma_i^2}\right) \prod_{k \neq i} \Phi\left(\frac{x-\mu_k}{\sigma_k}\right)\right)dx \end{align} At $(1)$, we need the assumption that $(X_i)_{i=1,...,n}$ are independent (here each $\sigma_k$ denotes a standard deviation).
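A numerical sketch comparing the integral above with a Monte Carlo estimate, assuming the $\sigma_k$ are standard deviations; the parameter values are made up:

import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

mu    = np.array([0.0, 1.0, -0.5])
sigma = np.array([1.0, 2.0, 0.7])
i = 0
integrand = lambda x: norm.pdf(x, mu[i], sigma[i]) * np.prod(
    [norm.cdf((x - mu[k]) / sigma[k]) for k in range(len(mu)) if k != i])
p_formula, _ = quad(integrand, -20, 20)          # the integral from the answer
rng = np.random.default_rng(0)
X = rng.normal(mu, sigma, size=(200_000, len(mu)))
p_mc = (X.argmax(axis=1) == i).mean()            # Monte Carlo estimate of P(X_i = max)
print(p_formula, p_mc)                           # the two agree to a couple of decimals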
Help understanding modular congruence issue
Use Fermat's Little Theorem to see: $$(28)^{12} \equiv 1 \ \bigl(\text{mod} \ 13 \bigr)$$ $$ \Longrightarrow (28)^{144} \equiv 1 \ \bigl(\text{mod} \ 13 \bigr)$$ Take a look here as well: http://en.wikipedia.org/wiki/Modular_arithmetic#Congruence_relation $\textbf{Added.}$ The finishing step would be: since $28 \equiv 2 \ \bigl(\text{mod} \ 13\bigr)$ and $(28)^{144} \equiv 1 \bigl(\text{mod}\ 13\bigr)$, multiply these two congruences to get your answer.
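As a quick sanity check (assuming the exercise is asking for $28^{145} \bmod 13$, which is what the finishing step suggests), Python's three-argument `pow` does fast modular exponentiation:

```python
print(pow(28, 12, 13))    # 1, as Fermat's little theorem predicts
print(pow(28, 144, 13))   # 1, since 144 = 12 * 12
print(28 % 13)            # 2
print(pow(28, 145, 13))   # 2, the product of the two congruences above
```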
Surface integral and divergence theorem do not match, cylindrical coordinates
First of all, it is a good approach to try to find the solution with different methods. Your surface integrals look correct. I suspect a problem with the divergence theorem. $\vec{J}$ should be a continuously differentiable vector field, and I don't believe it is. $\nabla \cdot \vec{J}=\frac{z}{r}-3rz+\frac{1}{z+1}$, which is not well defined on $\mathbb{R}^3$.
Let $f$ be defined on a measurable set $E \subset \mathbb R^n$. If $\{a<f<+\infty\}$ and $\{f=-\infty\}$ are measurable, then $f$ is measurable
It is enough to show that the set $\{x\in E: \ f = +\infty\}$ is measurable, since then for each finite $a$ we get that the set $$ \{x \in E: \ f(x) > a \} = \{x\in E: \ f = +\infty\} \cup \{x \in E: \ +\infty>f(x) > a \} $$ is a union of two measurable sets, hence measurable itself. To see that $\{x\in E: \ f = +\infty\}$ is measurable, observe that in view of the measurability of $\{x\in E: \ f = -\infty\}$ its complement is also measurable, hence $$ \tag{1} \{x\in E: \ f = -\infty\}^c = \{x\in E: \ f > - \infty\} = \\\{x\in E: \ f = +\infty\} \cup \left( \bigcup\limits_{n=-\infty}^{+\infty} \{x \in E: \ n<f(x) < +\infty \} \right). $$ Now the union in big brackets is measurable as a countable union of measurable sets. Since the left-hand side of $(1)$ is also measurable, the claim follows.
Order of a set $X$ acted upon transitively by the Symmetric Group
For $n < 4$ the result is clear. For $n = 4$ the result is false: we have a surjection $S_4 \to S_3$ by killing the unique normal subgroup of order 4, consisting of the identity and the three products of two disjoint transpositions (thanks to cocopuffs for fixing an error in this before). For $n > 4$, the map $S_n \to \text{Aut}_\text{Set}(X) = S_{\# X}$ induced by your action must either be injective or have kernel $A_n$ or $S_n$. In the first case we must have $\#X \ge n$ and in the others we have $\#X = 2$ or $1$ respectively.
Find the splitting field of $x^m-1\in\Bbb F_p$.
It is a finite field and an extension of $\Bbb F_p$ so it is $\Bbb F_{p^k}$ for some $k$. The $m$-th roots of $1$ are the $m_0$-th roots of $1$ and there are $m_0$ of them. The multiplicative group of $\Bbb F_{p^k}$ contains all $m_0$ of them iff $m_0\mid(p^k-1)$, as the multiplicative group of a finite field is cyclic. So the smallest $k$ that works is the least $k>0$ with $p^k\equiv1\pmod{m_0}$.
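If it helps, here is a small Python sketch of that criterion. It assumes (as the notation suggests) that $m_0$ is the part of $m$ prime to $p$, i.e. $m = p^a m_0$ with $\gcd(m_0,p)=1$; the function name is just illustrative.

```python
def splitting_field_degree(p, m):
    """Least k >= 1 with p^k = 1 (mod m0), where m0 is m with all factors of p removed."""
    m0 = m
    while m0 % p == 0:
        m0 //= p
    if m0 == 1:              # then x^m - 1 = (x - 1)^m already splits over F_p
        return 1
    k, power = 1, p % m0
    while power != 1:
        power = (power * p) % m0
        k += 1
    return k

# Example: x^8 - 1 over F_3 splits in F_{3^2}, since 3^2 = 9 is congruent to 1 mod 8.
print(splitting_field_degree(3, 8))   # 2
```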
Why is the borel sum analytic
Let $R$ be the radius of convergence and let $|z| <R$. Choose $R_1, R_2$ with $|z| <R_1<R_2<R$. Since $\sum a_n R_2^{n}$ is convergent, the sequence $|a_n|R_2^{n}$ tends to $0$. In particular there exists a constant $C$ such that $|a_n|R_2^{n}\leq C$. Now $|a_nz^{n}| \leq C (\frac {R_1} {R_2})^{n}$. Hence $\int_0^{\infty}e^{-u} |\sum \frac {a_nz^{n}u^{n}} {n!}|du$ is dominated by $C\int_0^{\infty}e^{-u}e^{\frac {R_1} {R_2}u}du<\infty$.
Number of even and odd subsets -- wrong question?
$\{\}$ is also a subset of the nonempty set.
Limit of sum of periodic function
(This assumes continuity of the $f_i$; I am still thinking of how to remove it.) Suppose $F(x):=\sum_i f_i(x)\to C$, where $f_i(x+p_i)=f_i(x)$ for each $i$. Now, find a sequence of positive reals $h_N$ increasing to $\infty$ which is simultaneously close to $p_i\mathbb{Z}$ for all $i$, i.e. $$h_N=a_i^{(N)}p_i+\varepsilon_N^{(i)}, \quad\text{where } a_i^{(N)}\in\mathbb{Z},\ \varepsilon_N^{(i)}\to 0.$$ Indeed, by Dirichlet's simultaneous approximation theorem applied to $\{\frac{1}{p_1},\frac{1}{p_2},...,\frac{1}{p_n}\}$, we can find integers $a_i^{(N)}$ and an integer $h_N\le N$ such that for each $i$: $$\left\vert \frac{1}{p_i}-\frac{a_i^{(N)}}{h_N}\right\vert \le \frac{1}{h_N N^{1/n}}.$$ Upon rearrangement, this becomes $\vert h_N - a_i^{(N)}p_i \vert \le \frac{p_i}{N^{1/n}}\to0$. Then, for any $x$: $$\lim_{n\to\infty}F(x+h_n)=\lim_{x\to\infty}F(x)=C$$ $$\lim_{n\to\infty}F(x+h_n)=\lim_{n\to\infty}\sum_i f_i(x+h_n)=\lim_{n\to\infty}\sum_i f_i(x+\varepsilon_n^{(i)})=\sum_i f_i(x)=F(x)$$ So, $F(x)=C$ for all $x$.

Alternative Proof. Let $P(n)$ be the statement "If a sum of $n$ periodic functions has a limit $C$, then this sum is equal to $C$ for all $x$". If $f$ is $p$-periodic and tends to $C$, then for any $\varepsilon > 0$, there exists $N$ such that $x>N\implies \vert f(x) - C \vert < \varepsilon$. But periodicity gives that this is actually true for all $x$. As this is true for any $\varepsilon > 0$, we recover that $f=C$ for all $x$. So, $P(1)$ is true. Suppose $P(1), P(n-1)$ are true, and consider a sum of $n$ periodic functions, $F(x)=\sum_1^n f_i(x)$ with limit $C$, where in particular, $f_n$ has period $p_n$. Then $F(x+p_n)-F(x)=\sum_1^{n-1} [f_i(x+p_n)-f_i(x)]$ is a sum of $(n-1)$ periodic functions, and converges to $0$, hence is equal to $0$ by $P(n-1)$. So, $F$ is $p_n$-periodic, and converges to $C$, and $P(1)$ tells us that it is identical to $C$ as a result, i.e. $P(n)$ is true. Thus, by induction, $P(n)$ is true for all $n$, and any finite sum of periodic functions with a limit at $\infty$ is constant.
Can the minimal polynomial of a matrix have a root with multiplicity?
Any monic polynomial occurs as the minimal polynomial of some square matrix, in particular of its own companion matrix. Therefore, yes, multiple roots are definitely possible for minimal polynomials.
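A small SymPy sketch of the companion-matrix idea (the matrix below is the companion matrix of $(x-1)^2 = x^2-2x+1$; since it is $2\times 2$ and not a scalar multiple of the identity, its minimal polynomial equals its characteristic polynomial, which has the double root $1$):

```python
from sympy import Matrix, symbols

x = symbols('x')
C = Matrix([[0, -1],
            [1,  2]])              # companion matrix of x^2 - 2x + 1 = (x - 1)^2
print(C.charpoly(x).as_expr())     # x**2 - 2*x + 1
print(C.is_diagonalizable())       # False: the repeated root forces a nontrivial Jordan block
```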
Expectation of $\log^2(X)$
No, it is not possible to correctly make such a claim. Indeed, here is an explicit counterexample. Let $X$ be the random variable in $(0,1)$ satisfying $$ \mathbb P(X<t)=\frac{1}{(1-\log t)^2},\qquad 0< t\leq 1.\tag{1} $$ I claim that $\mathbb EX<\infty$ and $-\infty<\mathbb E[\log X]<\infty$, and yet $\mathbb E[(\log X)^2]=\infty$. To show this, I will use the auxiliary random variable $Y=-\log X$. Substituting $t=e^{-s}$ in $(1)$ yields $$ \mathbb P(Y>s)=\frac{1}{(s+1)^2},\qquad 0\leq s<\infty. $$ Using the tail sum formula for expectation, $$ \mathbb EY=\int_0^{\infty}\mathbb P(Y>s)\ ds=1, $$ whereas $$ \mathbb E[Y^2]=\int_0^{\infty}2s\cdot \mathbb P(Y>s)\ ds=\infty. $$ Finally, since $X\leq 1$ holds with probability $1$, it follows that $\mathbb EX\leq 1$ and in particular $\mathbb EX<\infty$. Thus we have found an $X$ satisfying all of the hypotheses, but with $\mathbb E[(\log X)^2]=\mathbb E[Y^2]=\infty$, contradicting the claim.
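For those who like to see such counterexamples numerically, here is a hedged Monte Carlo sketch. Inverting the stated CDF gives $X=\exp(1-1/\sqrt U)$ for $U$ uniform on $(0,1)$, since then $\mathbb P(X<t)=1/(1-\log t)^2$; the sample mean of $(\log X)^2$ keeps creeping upward as the sample grows, while the other sample means stabilize.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    """Inverse-transform sampling of the counterexample distribution."""
    u = rng.uniform(size=n)
    return np.exp(1.0 - 1.0 / np.sqrt(u))

for n in [10**4, 10**5, 10**6]:
    x = sample(n)
    y = -np.log(x)
    # E[X] <= 1 and E[-log X] = 1 are finite; E[(log X)^2] is infinite,
    # so the last column drifts upward instead of settling down.
    print(n, x.mean(), y.mean(), (y**2).mean())
```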
Troubles with proving the following expression $((p \implies q) \implies p) \implies p$ using Fitch System
HINT whenever you want to prove something of the form $\phi \rightarrow \psi$, do a conditional proof where you assume $\phi$ and try to get to $\psi$. So in your case, assume $(p \rightarrow q) \rightarrow p$, and try to get to $p$ You did do a conditional proof, but notice that the conditional you obtained was not the one you want. HINT 2 To get to $p$ inside the conditional proof ... Do a proof by contradiction HINT 3 If you are still stuck ... See the 9th post under 'Related' on the right.
track and field word problem
Hints: If the usual rate is $r$ (kph), then the faster rate is ... At the usual rate it takes ... (time) to cover the distance. (In terms of $r$.) At the faster rate it takes ... (time) to cover the distance. Set up and solve an equation. Final thought: Wow, that is a long distance to run!
For subsets $M\subset N$ of a vector space, $\langle{M\rangle}\subset \langle{N\rangle}$.
Since $M \subset N$, every vector of $M$ is also a vector of $N$. So a linear combination of elements of $M$ can be viewed as a linear combination of elements in $N$. Thus $\langle M\rangle \subset \langle N\rangle.$
Converting inequalities into equalities by adding more variables
We will show that the two problems are equivalent in the sense that they define the same set of solutions. Define $\tilde{\mathbb{A}} = \begin{bmatrix} \mathbb{A} & \mathbf{I} \end{bmatrix}$ and $\tilde{\mathbf{x}}= \begin{bmatrix} \mathbf{x} \\ \mathbf{t} \end{bmatrix}$. Then, if $\mathbf{x}$ is a solution of the inequality-constrained problem $$\mathbb{A}\mathbf{x} \le 0,$$ it suffices to take $\mathbf{t} = -\mathbb{A}\mathbf{x}$, which is a nonnegative vector by the inequality above. So we have: $\tilde{\mathbb{A}} \tilde{\mathbf{x}} = \mathbb{A}\mathbf{x}+\mathbf{t}=\mathbb{A}\mathbf{x}-\mathbb{A}\mathbf{x}=0$. Conversely, if $$\tilde{\mathbb{A}}\tilde{\mathbf{x}} = 0,$$ then we have $\mathbb{A}\mathbf{x}+\mathbf{t} = \tilde{\mathbb{A}}\tilde{\mathbf{x}} = 0$, or equivalently $\mathbb{A}\mathbf{x}=-\mathbf{t}$. Since $\mathbf{t}$ is a nonnegative vector, we can conclude that $\mathbb{A}\mathbf{x} \le 0$.
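A tiny NumPy illustration of the slack-variable construction, with made-up data (the matrix and point below are arbitrary):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [0.5,  1.0]])
x = np.array([-1.0, 0.5])          # satisfies A x <= 0 componentwise

t = -A @ x                          # slack variables, nonnegative exactly when A x <= 0
A_tilde = np.hstack([A, np.eye(2)])
x_tilde = np.concatenate([x, t])

print(A @ x)                        # componentwise <= 0
print(t)                            # componentwise >= 0
print(A_tilde @ x_tilde)            # [0. 0.]: the equality-constrained form holds
```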
Closure of a subspace of a metric space
Let $B=\{x\in X\mid \inf\limits_{a\in A}d(x,a)=0\}$. We want to prove that $B=\bar{A}$. Let $b\in B$ and let $\varepsilon>0$. By definition, $\inf\limits_{a\in A}d(b,a)=0$, so there is $a\in A$ with $d(b,a)<\varepsilon$. Thus $a\in B(b,\varepsilon)\cap A$. Therefore $B\subseteq\bar{A}$. Suppose $c\in X\setminus B$, so that $\varepsilon=\inf\limits_{a\in A}d(c,a)>0$ (it can't be $<0$, of course). If $a\in A$, then $d(c,a)\ge\varepsilon$, so $a\notin B(c,\varepsilon/2)$. Therefore $B(c,\varepsilon/2)\cap A=\emptyset$ and so $c\notin\bar{A}$. Hence $\bar{A}\subseteq B$.
Let x and y be real numbers. Prove that if x≤y+ϵ for every positive real number ϵ, then x≤y. Why do we set ϵ = 1/2(x−y)?
Well, you assume that $x>y$ and then need to find an $\epsilon$ such that $x\leq y+\epsilon$ does not hold. The given $\epsilon$ is such that $y+\epsilon$ is right in the middle between $y$ and $x$. (Any other $\epsilon = c(x-y)$ with $0<c<1$ would work, too.)
Positivity property of hermitian inner product
Your inner product will satisfy $\langle z, z \rangle \geq 0$ if and only if $|c| \leq 3$. Hint: Note that $$ |z_1 + cz_2|^2 = (z_1 + cz_2)\overline{(z_1 + cz_2)} = z_1 \bar z_1 + c z_1 \bar z_2 + \bar c z_2 \bar z_1 + |c|^2 z_2\bar z_2 $$
Geodesics of Sasaki metric
(1) Consider the projection $\pi : (T_1M,G) \rightarrow (M,g)$. We want to define a metric $G$. If $\widetilde{c_1} (t),\ \widetilde{c_2}(t)$ are curves starting at $(p,v)$, then we write $\widetilde{c_i}(t)=(c_i(t),v_i(t)),\ c_i(0)=p,\ v_i(0)=v$, and we set $$ G(\widetilde{c_1}'(0),\widetilde{c_2}'(0)) =g(c_1'(0),c_2'(0)) + g(\nabla_{c_1'} v_1,\nabla_{c_2'} v_2)\ \ast $$ And if $c_1(t)=p$, then we use $\frac{d}{dt}\bigg|_{t=0}\ v_1(t)$ instead of $\nabla_{c_1'} v_1$. (2) Now we will interpret this definition. Fix a curve $c$ with $c(0)=p\in M$. If $v(t)$ is a parallel vector field along $c(t)$ with $v(0)=v$, then define $\widetilde{c}(t)=(c(t),v(t))$. So we have ${\rm length}\ c ={\rm length}\ \widetilde{c}$. That is, any curve in $M$ can be lifted to a curve of the same length through any point. (3) If $c$ is a unit speed geodesic in $M$, then $\widetilde{c}(t)=(c(t),c'(t))$, the lift of $c$, is a geodesic in $T_1M$: Let $\widetilde{p}=\widetilde{c}(0),\ \widetilde{q}=\widetilde{c}(\epsilon )$. Assume that $\widetilde{\gamma} =(\gamma (t),v(t))$ is a geodesic in $T_1M$ between these two points such that $\gamma$ has unit speed. Then since $${\rm length}\ \gamma\geq {\rm length}\ c,$$ it follows that $\widetilde{\gamma}$ is $\widetilde{c}$.
$x$ intercept problem
By the rational root theorem, if the function $f(x) = x^5 - x^3 + 2 = 0$ has a rational zero, then it must be among the rational numbers: $1, -1, 2, -2$ However, we see that if we plug in any of these values of $x$ into our function, we will not get $0$. That is: $f(1) = 2, \quad f(-1) = 2, \quad f(2) = 26, \quad f(-2) = -22$ Therefore, we may conclude that any value of $x$ which satisfies $x^5 - x^3 + 2 =0$ will not be a rational number (so, at best, we can only approximate the value of $x$ which does so). One of our options is to simply use a computer to approximate the solution for us. Otherwise, we would probably have to use Newton's Method to approximate this solution by hand (which is very tedious).
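For completeness, a short Newton's method sketch in Python for the real root (which lies in $(-2,-1)$, since $f(-2)=-22<0$ and $f(-1)=2>0$):

```python
def f(x):
    return x**5 - x**3 + 2

def fprime(x):
    return 5 * x**4 - 3 * x**2

# Newton iteration: x_{k+1} = x_k - f(x_k) / f'(x_k), starting inside (-2, -1).
x = -1.5
for _ in range(20):
    x = x - f(x) / fprime(x)

print(x, f(x))   # x is roughly -1.348, and f(x) is essentially 0
```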
What constraints can be on the Fourier coefficients of $f(t)$ if $0 \leq f(t) \leq 1 $
By Parseval's identity, and since $0 \le f(x) \le 1$ gives $\lvert f(x)\rvert^2 \le 1$, we get $$ \frac{a_0^2}{2}+\sum_{n=1}^{\infty}(a_n^2+b_n^2)=\frac{1}{\pi}\int_{-\pi}^{\pi}\lvert f(x)\rvert^2\, dx \leq \frac{1}{\pi}\int_{-\pi}^{\pi}dx =2 $$
Does $\frac{\partial}{\partial\epsilon}(\hat A+\hat{O}\cdot\epsilon) = \hat{O}$?
@TurlocTheRed's point is that$$\frac{\partial}{\partial\epsilon_j}\hat{O}\cdot\epsilon=\frac{\partial}{\partial\epsilon_j}\sum_i\hat{O}_i\epsilon_i=\sum_i\hat{O}_i\delta_{ij}=\hat{O}_j.$$We define $\frac{\partial}{\partial\epsilon}\hat{O}\cdot\epsilon$ as the vector whose $j$th entry is $\frac{\partial}{\partial\epsilon_j}\hat{O}\cdot\epsilon=\hat{O}_j$, so$$\frac{\partial}{\partial\epsilon}\hat{O}\cdot\epsilon=\hat{O}\implies\frac{\partial}{\partial\epsilon}(\hat{A}+\hat{O}\cdot\epsilon)=\frac{\partial}{\partial\epsilon}\hat{A}+\hat{O}.$$
system reliability, probabilities of rocket launch suspended
Your answer looks correct. My approach (which I think is the same as yours) would be: (ignoring the sensor) what is the probability a particular computer fails? $0.01$ (ignoring the sensor) what is the probability all three computers fail? $0.01^3 = 0.000001$ by independence (ignoring the sensor) what is the probability at least one of the three computers does not fail? $1- 0.000001=0.999999$ what is the probability the sensor does not fail? $1- 0.000001=0.999999$ what is the probability the sensor does not fail and (ignoring the sensor) at least one of the three computers does not fail? $0.999999 \times 0.999999 = 0.999998000001$ by independence what is the probability either the sensor fails or all three computers fails? $1- 0.999998000001 = 0.000001999999$ which is essentially your $0.000002$ joriki's questions in the comments are about checking whether ignoring the sensor and independence are correct interpretations of the question
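The same computation in a few lines of Python (the $0.000001$ sensor-failure probability is the one used in the steps above):

```python
p_computer_fails = 0.01
p_all_computers_fail = p_computer_fails ** 3        # 1e-06, by independence
p_sensor_fails = 0.000001

p_launch_proceeds = (1 - p_sensor_fails) * (1 - p_all_computers_fail)
p_launch_suspended = 1 - p_launch_proceeds
print(p_launch_proceeds)    # 0.999998000001
print(p_launch_suspended)   # about 0.000002
```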
Proving $A-B⊆ A$ with set builder notation
I don't understand this notation; I've never seen it before. It makes no sense: an inclusion statement is not a set. But $A - B \subseteq A$ is trivial to show the standard way: let $x \in A- B$. Then $x \in A$ and $x \notin B$, so in particular $x \in A$ (we can use one half of an "and"-clause). So $A -B \subseteq A$ has been shown.
Proving the rules of a complicated game are well defined
The question is straightforward or impossible depending on what you mean by Consistent. Games that consist of a sequence of "moves" that update a finite amount of discrete state information have the same generality as computer programs and their analysis is subject to the same limitations. Questions about consistency in a higher sense such as "given this rule set, is there a position where no player has a valid move" are not mechanically resolvable by any automatic procedure. They are analogous to generally unsolvable questions about programs, like correctness or halting. And when they are resolvable in principle, due to a finite search space (like playing perfect 8x8 chess) they are usually not feasible in practice due to the number and complexity of possibilities to be considered. Questions about consistency in a simpler sense, such as "does this ruleset define a game" are analogous to judgements of syntactic correctness, like "does this program compile in that Java compiler". That can be done, by having a relatively general specification language for games that is implemented on a computer. Then is this game well-defined becomes the question of whether the game can be written as a compilable program in the specification language. If presentation as a specification is one's definition of even having a precise game to talk about, then the question is answered simply, by running the compiler and seeing if it accepts the spec.
Theorem about embeddings of a field
An embedding of $K$ in $\mathbf C$ maps $\alpha$ onto a root of the minimal polynomial of $\alpha$, and conversely, mapping $\alpha$ onto a root of the minimal polynomial of $\alpha$ defines a $\mathbf Q$-homomorphism of $\mathbf Q(\alpha)$ into $\mathbf C$ (map $\mathbf Q[x]$ into $\mathbf C$ first and check the kernel is generated by the minimal polynomial of $\alpha$). Hence if $\sigma(\alpha)=\alpha$ for all $\sigma\in \operatorname{Hom}(K,\mathbf C)$, the only root of the minimal polynomial is $\alpha$, and it is a simple root, since we're in characteristic $0$, which proves the minimal polynomial has degree $1$; in other words, $\alpha \in\mathbf Q$. This also proves the second assertion.
Evaluation of $\sum_{n=1}^\infty \frac{1}{\Gamma (n+s)}$
We find an integral expression for the sum (the $u$ integral below) without appealing to the properties of special functions. We have $$\begin{eqnarray*} \sum_{n=1}^\infty \frac{1}{\Gamma(n+s)} &=& \frac{1}{\Gamma(s+1)} \underbrace{\left(1+\frac{1}{s+1}+\frac{1}{(s+1)(s+2)} + \ldots\right)}_{f(s)}. \end{eqnarray*}$$ The series $f(s)$ is a simple example of an inverse factorial series. Such series were studied even in the 18th century by Nicole and Stirling and are dealt with, for example, in Whittaker and Watson's A Course of Modern Analysis. One way to develop such a series is by successively integrating by parts the right hand side of $$f(s) = \int_0^1 d\xi\, s(1-\xi)^{s-1} F(\xi),$$ where $F(\xi)$ is some analytic function of $\xi$ and $\int_0^1$ is shorthand for $\lim_{\epsilon\to 0^+}\int_0^{1-\epsilon}$. One finds $$\begin{eqnarray*} f(s) &=& F(0) + \frac{F'(0)}{s+1} + \frac{F''(0)}{(s+1)(s+2)} +\ldots. \end{eqnarray*}$$ For details on the restrictions on $F(\xi)$, see Whittaker and Watson's 4th edition, $\S 7.82$. For this problem we have $F^{(n)}(0) = 1$, so $F(\xi) = e^\xi$. Then $$\begin{eqnarray*} \sum_{n=1}^\infty \frac{1}{\Gamma(n+s)} &=& \frac{f(s)}{\Gamma(s+1)} \\ &=& \frac{1}{\Gamma(s+1)} \int_0^1 d\xi\, s(1-\xi)^{s-1} e^\xi \\ &=& \frac{e}{\Gamma(s)} \int_0^1 du\, u^{s-1} e^{-u} \hspace{10ex}(\textrm{let }u=1-\xi) \\ &=& \frac{e}{\Gamma(s)} \gamma(s,1), \end{eqnarray*}$$ where $\gamma(s,x)$ is the lower incomplete gamma function. Note that $\gamma(s,x) = \Gamma(s) - \Gamma(s,x)$, where $\Gamma(s,x)$ is the upper incomplete gamma function. Therefore, $$\begin{eqnarray*} \sum_{n=1}^\infty \frac{1}{\Gamma(n+s)} &=& e\left(1-\frac{\Gamma(s,1)}{\Gamma(s)}\right), \end{eqnarray*}$$ as claimed. Thanks for the interesting question!
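A quick numerical sanity check of the final identity, sketched with the mpmath library (the particular value of $s$ is arbitrary):

```python
from mpmath import mp, e, gamma, gammainc, nsum, inf

mp.dps = 30
s = mp.mpf("0.7")

lhs = nsum(lambda n: 1 / gamma(n + s), [1, inf])
rhs = e * (1 - gammainc(s, 1) / gamma(s))   # gammainc(s, 1) is the upper incomplete Gamma(s, 1)
print(lhs)
print(rhs)   # agrees with lhs to the working precision
```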
Equation in complex numbers
Let $n=|z_1|=|z_2|=|z_3|$. Then $n^3=|z_1z_2z_3|=1$, so $n=1$ and the $z_k$ have unit modulus. Since $z_1=z_2=z_3$ implies $1=|z_1+z_2+z_3|=|3z_1|=3$, and the equations are unchanged under permutations of the $z$'s, we can assume WLOG that $z_3\ne1$. Then let $z_k'=\frac{z_k}{1-z_3}$ and $z_4'=\frac1{1-z_3}$, so that: $$z_1+z_2+z_3=1\implies \frac{z_1+z_2}{1-z_3}=z_1'+z_2'=1$$ Note that $|z_1'|=|z_2'|=|z_3'|=|z_4'|$. Solving for $z_2'$ and multiplying by the conjugate, we get: $$|z_2'|^2=|1-z_1'|^2=1-z_1'-\bar z_1'+|z_1'|^2\implies z_1'+\bar z_1'=1$$ So $z_2'=\bar z_1'$. Since it is also true that $-z_3'+z_4'=1$ and $|z_3'|=|z_4'|$, in an exactly equivalent manner we get $-z_3'=\bar z_4'$. Finally, to relate these pairs to each other: $$(z_1'-\bar z_1')^2=(z_1'+\bar z_1')^2-4|z_1'|^2=(z_4'+\bar z_4')^2-4|z_4'|^2=(z_4'-\bar z_4')^2$$ where the central equality is due to $z_1'+\bar z_1'=1=z_4'+\bar z_4'$ and $|z_1'|=|z_4'|$. Thus either $$z_1'-\bar z_1'=z_4'-\bar z_4'\implies 2z_1'-1=2z_4'-1\implies z_1'=z_4'$$ or $$-(z_1'-\bar z_1')=z_4'-\bar z_4'\implies 2z_2'-1=2z_4'-1\implies z_2'=z_4'.$$ Using our remaining permutation freedom of $z_1,z_2$, we can assume WLOG that $z_1'=z_4'$. Then applying the definitions, we get $\frac1{1-z_3}=\frac{z_1}{1-z_3}\implies z_1=1$, and so $z_2=-z_3$. We still have yet to use the third equation, $z_1z_2z_3=1$, and we do so now. Since $z_1=1$, this reduces to $z_2^2=-1$, so $z_2=i$ and $z_3=-i$ or vice versa. Thus up to permutations, $\{z_1,z_2,z_3\}=\{1,i,-i\}$ is the unique solution.
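A quick check that the claimed solution really satisfies all three conditions (equal unit moduli, sum $1$, product $1$), using Python's built-in complex numbers:

```python
z = [1 + 0j, 1j, -1j]
print([abs(w) for w in z])     # [1.0, 1.0, 1.0]
print(sum(z))                   # (1+0j)
print(z[0] * z[1] * z[2])       # (1+0j)
```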
Application of Markov chains (processes) for optimizing a random process?
As I now understand it (and as @fesman commented), a Markov chain is enough to model the process, but some kind of optimization is still necessary. My understanding of what kinds of optimization methods can be used: linear/non-linear optimization techniques, graph coloring techniques, game-theoretic approaches, and maybe some others!
Limits and $Hom(-,Y)$-functor in abelian categories
Yes, it is true in any abelian category. In fact, moreover, it's true in every category full stop that $$\hom(\text{colim}_i x_i,y)\cong \lim_i\hom(x_i,y).$$ (in category theory, a projective limit is often just called a limit, and an injective limit is often just called a colimit). It's not only true when $x_i$ is a direct system (the category theoretic analogue of a directed poset is a filtered category). Indeed it's true for any shape diagram in $C$. So the contravariant hom-functor takes colimits to limits. A dual result says that the covariant hom-functor takes limits to limits (one says it is a continuous functor), $$\hom(y,\lim_i x_i)\cong \lim_i\hom(y,x_i).$$ The proof is not hard. By definition of colimit, an arrow out of $\text{colim}_i x_i$ is a cocone over the system $\{x_i\}_i,$ i.e. a commuting set of arrows out of the $x_i$. Note also that while you don't need your system to be directed in order to conclude the colimit commutes with $\hom(-,Y)$, directedness/filteredness is still relevant; filtered colimits commute with finite limits, at least for functors in some nice categories. This useful property is relevant to some exactness statements in homological algebra.
Area of Shaded Region Inside Quadrilateral
There is a simpler approach. Consider an extension of the diagram you drew: Note that the desired area "A" is the sum of the areas of the circular sector and the obtuse triangle. As shown in the diagram, the angle subtended by an arc from the vertex on the circumference is half of the angle measured from the center of the circle. Since $\theta = \tan^{-1} \frac{1}{2}$, it follows that $2\theta = 2\tan^{-1} \frac{1}{2}$ and the area of the circular sector is simply $$A_1 = \tan^{-1} \frac{1}{2}.$$ Then the area of the obtuse triangle is $$A_2 = \frac{1}{2} \cdot \frac{1}{2} \cdot 1 \cdot \sin \left(\frac{\pi}{2} - 2\theta\right) = \frac{1}{4} \cos \left(2 \tan^{-1} \frac{1}{2} \right) = \frac{3}{20}.$$ Thus the total area is $$A = \tan^{-1} \frac{1}{2} + \frac{3}{20} \approx 0.613648.$$
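A one-line numerical check of the final arithmetic:

```python
import math

theta = math.atan(0.5)
A1 = theta                          # area of the circular sector
A2 = 0.25 * math.cos(2 * theta)     # area of the obtuse triangle; equals 3/20
print(A2, A1 + A2)                  # approximately 0.15 and 0.613648
```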
Diagonalizable matrix
Note that $A$ is annihilated by the polynomial $p(X) = X^4 - 1$. Therefore, $A$'s minimal polynomial $\mu_A$ is a divisor of $p$, that is $$ \mu_A(X) \in \{X - 1, X + 1, X^2 - 1, X^2 + 1, X^4- 1 \} $$ Hence, in each case, the minimal polynomial of $A$ has distinct linear factors over $\mathbf C$ and therefore $A$ is diagonalizable over $\mathbf C$. As matrices with minimal polynomial $X^2 + 1$ are not diagonalizable over $\mathbf R$, $A$ need not be diagonalizable over $\mathbf R$.
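A concrete instance of the real/complex distinction, sketched with SymPy: the rotation-by-$90^\circ$ matrix satisfies $A^4 = I$ and has minimal polynomial $X^2+1$, so it is diagonalizable over $\mathbf C$ (distinct eigenvalues $\pm i$) but not over $\mathbf R$.

```python
from sympy import Matrix, eye

A = Matrix([[0, -1],
            [1,  0]])          # rotation by 90 degrees
print(A**4 == eye(2))          # True, so the minimal polynomial divides X^4 - 1
print(A.eigenvals())           # {I: 1, -I: 1}: distinct but non-real eigenvalues
print(A.is_diagonalizable())   # True -- SymPy diagonalizes over the complex numbers here
```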