Statistical significance and level of significance in hypothesis testing
A significance level is essentially the complement of a confidence level: testing at the $0.05$ level corresponds to a $95\%$ confidence interval. If your $p$-value is less than $0.05$, which it is, this means that there is sufficient evidence to reject the null hypothesis. The significance level is your probability of committing a Type I error, that is, rejecting the null hypothesis (equivalently, accepting the alternative) when the null hypothesis is in fact true.
How do I show $\int_0^\infty \frac{x}{(x+1)^2}dx$ diverges
$\begin{eqnarray}\int_0^\infty\frac{x}{(x+1)^2}\mathrm{d}x&=&\lim_{t\to\infty}\int_0^t\frac{x}{(x+1)^2}\mathrm{d}x\\&=&\lim_{t\to\infty}\left(\frac{1}{t+1}+\log(t+1)-1\right)\\&=&\infty\end{eqnarray}$
Upper bound for $n^{3-n}\sum_{k=1}^{n/2} \binom{n-2}{k-1}k^{k-2}(n-k)^{n-k-2}$
Let's do some rearranging. First split up the binomial notation: $$\frac{\binom{n-2}{k-1}k^{k-2}(n-k)^{n-k-2}}{n^{n-3}}=\frac{(n-2)!k^{k-2}(n-k)^{n-k-2}}{(k-1)!(n-k-1)!n^{n-3}}$$ Multiply the numerators and denominators to remove the $-1$'s and $-2$'s: $$=\frac{n!}{(n-1)n}\cdot\frac{k}{k!}\cdot\frac{n-k}{(n-k)!}\cdot\frac{k^{k}}{k^{2}}\cdot\frac{(n-k)^{n-k}}{(n-k)^{2}}\frac{n^{3}}{n^{n}}.$$ Now rearrange again so that Stirling's formula jumps out at us: $$=\frac{n}{(n-1)}\frac{n}{k\cdot(n-k)}\cdot\frac{n!}{n^{n}}\cdot\frac{k^{k}}{k!}\cdot\frac{(n-k)^{(n-k)}}{(n-k)!}.$$ Applying Stirling's formula roughly, this becomes $$\approx\frac{n}{(n-1)}\frac{n}{k\cdot(n-k)}\cdot\left(\sqrt{2\pi n}e^{-n}\right)\cdot\left(\frac{1}{\sqrt{2\pi k}}e^{k}\right)\cdot\left(\frac{1}{\sqrt{2\pi(n-k)}}e^{n-k}\right)$$ $$=\frac{n}{(n-1)\sqrt{2\pi}}\cdot\left(\frac{n}{k(n-k)}\right)^{3/2}$$ Now compare the last piece to the integral $$\int_{1}^{n-1}\left(\frac{n}{x(n-x)}\right)^{3/2}dx.$$ This integral is bounded by a constant for every $n$ so the proof is finished. (The bound can be placed on the integral by a partition trick that yields an infinite geometric series.) Hope that helps,
What is fastest (multithreaded?) way to identify which columns in a matrix are "connected"?
Create a union-find structure, with columns being elements. Now run the following algorithm: for each row with nonzero elements, take the first nonzero element of this row, find a representative of the column containing this element in your union-find structure, and merge the remaining nonzero columns in this row with it. This algorithm runs in $O(n \cdot \alpha(n))$ time, where $n$ is the number of non-zero entries in your matrix. This algorithm can easily be parallelized (e.g. by having each thread consider each row separately), assuming you have a parallel union-find implementation, which can be found e.g. in Anderson, Richard J.; Woll, Heather (1994). Wait-free Parallel Algorithms for the Union-Find Problem. 23rd ACM Symposium on Theory of Computing. pp. 370–380.
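For illustration, here is a minimal single-threaded Python sketch of this union-find idea (the parallel version would follow the cited paper). The input format — each row given as the list of its nonzero column indices — is a hypothetical convenience, not something fixed by the question:

```python
# Minimal single-threaded sketch of the union-find approach described above.
# Assumes the matrix is given as an iterable of rows, each row being the
# list of column indices that hold nonzero entries (a hypothetical format).

def find(parent, c):
    # Path halving: walk up to the representative, flattening as we go.
    while parent[c] != c:
        parent[c] = parent[parent[c]]
        c = parent[c]
    return c

def connected_columns(rows, num_cols):
    parent = list(range(num_cols))
    for row in rows:
        if not row:
            continue
        rep = find(parent, row[0])
        for c in row[1:]:
            parent[find(parent, c)] = rep  # merge remaining nonzero columns
    # Group columns by their final representative.
    groups = {}
    for c in range(num_cols):
        groups.setdefault(find(parent, c), []).append(c)
    return list(groups.values())

# Example: rows with nonzeros in columns {0,2}, {2,3}, {1,4}
print(connected_columns([[0, 2], [2, 3], [1, 4]], 5))
# -> [[0, 2, 3], [1, 4]]
```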
Rank of matrix verification using row echelon form
I do not understand why you keep referring to the sum. The matrix you have does have rank $3$.
In the group $A/B$ find the order of the coset $(x_1+2x_3)+B$
I don't think this is a "deep explanation," but if you set up the equation $$ m(x_1 + 2x_3) = a(x_1 + x_2 + 4x_3) + b(2x_1 - x_2 + 2x_3) $$ you obtain the relations $m = a+2b$, $0 = a - b$, and $2m = 4a+2b$. This system reduces to $m = 3a$. There's probably a slick way to do it using matrices. Edit: A slick method using matrices, a little under 3 years too late! The key ingredient is the Smith normal form of a matrix: see here for more on the general theory. Computing the Smith normal form of the relations matrix $M$ (whose columns are the coefficients of the generators of $B$), we find $$ \underbrace{ \begin{pmatrix} 0 & 1 & 0 \\ 1 & -1 & 0 \\ 2 & 2 & -1 \end{pmatrix} }_P \underbrace{ \begin{pmatrix} 1 & 2\\ 1 & -1\\ 4 & 2 \end{pmatrix} }_M \underbrace{ \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} }_Q = \underbrace{ \begin{pmatrix} 1 & 0 \\ 0 & 3 \\ 0 & 0 \end{pmatrix} }_D \, . $$ This gives us the isomorphism $$ \frac{A}{B} \cong \frac{\mathbb{Z} w_1 \oplus \mathbb{Z} w_2 \oplus \mathbb{Z} w_3}{\mathbb{Z} w_1 \oplus \mathbb{Z} 3 w_2} \cong \frac{\mathbb{Z}}{3\mathbb{Z}}w_2 \oplus \mathbb{Z} w_3 \cong \frac{\mathbb{Z}}{3\mathbb{Z}} \oplus \mathbb{Z} \, . $$ Moreover, the matrix $P$ tells us the basis $\{w_1, w_2, w_3\}$ with respect to which we have this isomorphism. Since $$ P^{-1}= \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 0 \\ 4 & 2 & -1 \end{pmatrix} $$ then $$ w_1 = x_1 + x_2 + 4 x_3,\quad w_2 = x_1 + 2 x_3,\quad w_3 = -x_3 $$ where $\{x_1,x_2,x_3\}$ is the standard basis for $\mathbb{Z}^3$. The element $w_2 = x_1 + 2 x_3$ should look familiar: it is the element whose order in the quotient we are trying to compute! Since the image of $w_2$ generates the factor $\mathbb{Z}/3\mathbb{Z}$, then it has order $3$, as we previously found.
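As a quick sanity check (not part of the original argument), a brute-force Python sketch over a small window of integer coefficients confirms that $x_1+2x_3$ has order $3$ modulo $B$:

```python
# Brute-force check that the smallest m with m*(x1 + 2*x3) in
# B = Z(x1+x2+4x3) + Z(2x1-x2+2x3) is m = 3. The search bound is a
# hypothetical convenience; small coefficients suffice here.
w = (1, 0, 2)          # x1 + 2*x3
g1 = (1, 1, 4)         # x1 + x2 + 4*x3
g2 = (2, -1, 2)        # 2*x1 - x2 + 2*x3

def in_B(v, bound=30):
    return any(all(v[i] == a * g1[i] + b * g2[i] for i in range(3))
               for a in range(-bound, bound + 1)
               for b in range(-bound, bound + 1))

for m in range(1, 7):
    if in_B(tuple(m * c for c in w)):
        print("order is", m)   # prints: order is 3
        break
```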
comparing MSE of estimations of binomial random variables
I will write $\tilde p = \frac1{10}X$ and $\hat p=\frac1{12}X$ to avoid confusion. Recall that the mean-squared error of an estimator $\hat\theta$ of a parameter $\theta$ is $$\operatorname{MSE}\left(\hat\theta,\theta\right) = \mathbb E\left[(\hat\theta - \theta)^2\right] = \operatorname{Var}\left(\hat\theta\right) + \operatorname{Bias}\left(\hat\theta,\theta\right)^2, $$ where $$ \operatorname{Bias}(\hat\theta,\theta) = \mathbb E\left[\hat\theta\right] - \theta.$$ We can further simplify this as $$\begin{align*} \operatorname{MSE}(\hat\theta) &= \mathbb E\left[\hat\theta^2\right] - \mathbb E\left[\hat\theta\right]^2 + \left(\mathbb E\left[\hat\theta\right]-\theta\right)^2\\ &= \mathbb E\left[\hat\theta^2\right] - \mathbb E\left[\hat\theta\right]^2 + \mathbb E\left[\hat\theta\right]^2-2\theta\mathbb E\left[\hat\theta\right]+\theta^2\\ &= \mathbb E\left[\hat\theta^2\right] -2\theta\mathbb E\left[\hat\theta\right]+\theta^2. \end{align*}$$ So we compute $$ \begin{align*} \operatorname{MSE}(\tilde p) &= \mathbb E\left[\tilde p^2\right] -2 p\mathbb E\left[\tilde p\right]+p^2\\ &= \left(\frac1{10}\right)^2\mathbb E[X^2] - 2p\cdot\frac1{10}\mathbb E[X] + p^2\\ &= \frac1{25}p(3-2p), \end{align*} $$ and $$\begin{align*} \operatorname{MSE}(\hat p) &= \mathbb E\left[\hat p^2\right] -2 p\mathbb E\left[\hat p\right]+p^2\\ &= \left(\frac1{12}\right)^2\mathbb E[X^2] - 2p\cdot\frac1{12}\mathbb E[X] + p^2\\ &=\frac1{12}p(1-p). \end{align*} $$ The values of $p$ for which $\operatorname{MSE}\left(\tilde p\right)>\operatorname{MSE}\left(\hat p\right)$ satisfy $$ \frac1{25}p(3-2p) > \frac1{12}p(1-p),$$ which are $$(-\infty, -11)\cup(0,\infty).$$ Hence $\operatorname{MSE}\left(\tilde p\right)>\operatorname{MSE}\left(\hat p\right)$ for all $p\in(0,1)$.
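If it helps, the two closed forms can be checked numerically; the sketch below assumes $X \sim \operatorname{Binomial}(12, p)$, which is the distribution implied by the algebra above (the original problem statement is not reproduced here):

```python
# Numerical check of the two MSE formulas, assuming X ~ Binomial(12, p)
# (the distribution implied by the algebra above).
from math import comb

def mse(scale, n, p):
    # Exact MSE of the estimator X/scale when X ~ Binomial(n, p).
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) * (k / scale - p)**2
               for k in range(n + 1))

p = 0.3
print(mse(10, 12, p), p * (3 - 2 * p) / 25)   # both ~ 0.0288
print(mse(12, 12, p), p * (1 - p) / 12)       # both ~ 0.0175
```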
Showing that for $X_n$ iid, $\frac{S_n}{n} \to 0$ almost surely implies $\sum_{n=1}^{\infty} P(|X_n| \geq \varepsilon n) < \infty$
You are almost done. Just use the fact that the $X_i$s are iid so replace the event $E_n = \{ |X_n| > \epsilon n\}$ with $E_n = \{ |X_1| > \epsilon n \} $. Now your events are indeed monotone (decreasing). You don't need the fact that they are mean $0$ here, that would be useful for the converse statement. Edit: Actually, the second Borel-Cantelli you are using requires independence of its events, so, you cannot do the modification above. The result you wanted actually follows directly from the fact that $X_n/n \rightarrow 0$ almost surely (no independence needed), since by Borel-Cantelli, that is equivalent to the sum you are trying to compute.
How to calculate a changing probability situation based on possible improvement? Can this problem not be solved precisely, just estimated?
First, the problem as stated needs additional assumptions to have a clear-cut solution. Make sure you understand why, as it's quite key to understanding how to properly model a real-life problem with probability theory. My personal take would be to see this as a latent variable / Hidden Markov Model (HMM) situation. General intro on HMM here : http://en.wikipedia.org/wiki/Hidden_Markov_model. You could introduce a variable $Y_n$ corresponding to the $n$-th shot's outcome, and a hidden variable (or latent state) $X_n$ corresponding to the player's skill at time $n$. Assuming the player's success at time $n$ only depends on $X_n$ and not on $Y_{&lt; n}$ (e.g. our guy does not stress out if he missed...), and $X_n$ only depends on $X_{n-1}$, our situation is perfectly described by the HMM below (illustration source here) : Now you still need to make a handful of modeling assumptions, namely : a value range for $X_n$ (for instance $\{u^1, u^2, u^3\}$ = $\{$bad, average, good$\}$) a transition probability function : $P(X_n | X_{n-1})$ (for instance $P(u^i | u^i) = \alpha = 2/3$ and $P(u^{i+1} | u^i) = 1-\alpha$ if we assume the player can only get better with time) an output probability function : $P(Y_n | X_n)$ (i.e. in above's notations $P(\text{success} | u^i)$ for all $i$'s). Once you've done all this, your problem is one of inference, namely calculating $P(Y_n | y_1, ..., y_{n-1})$ (where we use the common convention: $Y$ is the random variable and $y$ is a realisation of $Y$). The method for resolving this is way beyond this site's scope of course ... PS : That's just a very simple approach of HMM described in layman's terms. The roughest assumption made here is guessing/fixing the model's parameters once and for all. The bayesian way would be to leave all parameters (i.e. the $a_{m,k} = P(X_n = u^m | X_{n-1} = u^k)$ and $b_k = P(Y_n = \text{success} | X_n = u^k)$) open and then to calculate $P(Y_n | y_1, ..., y_{n-1})$ by averaging over the model's parameter space (given $y_1, ..., y_{n-1})$). This paper http://mlg.eng.cam.ac.uk/zoubin/papers/ijprai.pdf mentions this technique, look for the words "predictive distribution". PPS : Another approach could be to see the situation as a mixture model (MM) problem. Wikipedia link : http://en.wikipedia.org/wiki/Mixture_model. It's far less appropriate than HMMs but you may find it easier to understand. In a nutshell, the typical MM example is one of a bag of $K$ indistinguishable coins which are of two types, each type in an unknown proportion $a_1 K$ and $a_2 K$ and having a given unknown flip bias, say $b_1$ and $b_2$. You start to pick one coin at random, and you flip it $m$ times. Then you put the coin back in the bag and you start again, $n$ times. The goal is to determine best estimates of $\theta = (a_1, a_2 = 1-a_1, b_1, b_2)$ from the outcome of the experiment. Then, with your parameters estimated, it's easy to evaluate the distribution of your next marginal draw. The common technique to estimate $\theta$ is called Expectation maximization (EM) and is described quickly in the wikipedia link above. Intuitively, if you see in your data that in 20% (resp. 80%) of the draws, the proportion of faces is between 10% and 30% with an average of 22% (resp. between 55% and 70% with an average of 63%), you're tempted to conclude that $a_1 = 20\%$, $a_2 = 80\%$, $b_1 = 22\%$ and $b_2 = 63\%$. Also, if a draw has a majority of faces, it's highly probable that the coin in that draw was of type 2. EM formalizes that and does a bit more... 
Now, you could say that in your problem the player is, for each game $n$, either Type 1 (good) or Type 2 (bad) (MM works with more than 2 categories but let's keep it at two), with the probability $p$ of him being good unknown to you, and that in each state his probability of success is $b_1$ and $b_2$. Run EM (or any other method), it will give you best estimates for $p$ and $b_i$. It will also give you, for each game $n$, the probability that the player was good in this particular game, say $q_n$. Here comes the fiddling : then plot $n \mapsto q_n$, hopefully it should be roughly increasing and come up with an extrapolation/approximation that will give you a guess (let's not call it an estimate) for $q_{n+1}$. A best guess for your probability of success in the future game $n+1$ is now $P(\text{success}_{n+1}) = q_{n+1} \hat b_1 + (1-q_{n+1}) \hat b_2$. Heck, it's really dirty, as some assumptions here are self-contradictory, but it's a start and my old physics teacher would have been satisfied with it.
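To make the HMM route concrete, here is a small Python sketch of the forward (filtering) recursion that produces $P(Y_{n+1} \mid y_1,\dots,y_n)$ once the model has been pinned down; every numerical parameter below is an illustrative guess, not something implied by the question:

```python
# Forward-algorithm sketch for the HMM described above: 3 skill states
# (bad / average / good), upward-drifting transitions, and a per-state
# probability of a successful shot. All parameter values are illustrative.
import numpy as np

# Transition matrix T[i, j] = P(X_n = state j | X_{n-1} = state i):
# the player can only stay the same or improve one level per shot.
T = np.array([[2/3, 1/3, 0.0],
              [0.0, 2/3, 1/3],
              [0.0, 0.0, 1.0]])
success = np.array([0.2, 0.5, 0.8])  # P(Y_n = success | X_n = bad/avg/good)

def predict_next_success(observations, prior=np.array([1.0, 0.0, 0.0])):
    """Return P(Y_{n+1} = success | y_1..y_n) via the forward recursion."""
    belief = prior                         # assumed initial state: "bad"
    for y in observations:                 # y is 1 (success) or 0 (miss)
        like = success if y else 1 - success
        belief = belief * like             # condition on the observation
        belief = belief / belief.sum()     # normalize
        belief = belief @ T                # propagate one step forward
    return float(belief @ success)

print(predict_next_success([0, 0, 1, 1, 1]))
```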
What are the elements in this set $(-\infty, 4] \cap \mathbb{N}$
The intersection, denoted $\cap$, refers to the elements in common between both sets. In this problem, you are asked: what is the intersection of $(-\infty, 4]$ and $\mathbb{N}$ (the natural numbers $0,1,2,3,4,\ldots$)? The numbers in common between the two sets are $\{0,1,2,3,4 \}$.
Writing $x\in U\otimes V$ as $x=\sum u_i \otimes v_i$
Suppose that $U$ is a $k$-vector space, and for $(u,a) \in U \times k$ consider $u \otimes a := au$. Note that if $b : U \times k \to Z$ is bilinear, then $b(\_,1) : U \to Z$ satisfies $b = b(\_,1) \circ \otimes$, which means that $U \otimes k \cong U$. Now, it is not necessarily true that for every $x \in U$ there are unique scalars $a_1,\dots,a_n \in k$ such that $x = a_1u_1 + \cdots + a_nu_n$.
Validity and Satisfiability problem.
See the XOR truth table. To say that $F \text { XOR } G$ is not satisfiable means that it is always FALSE. But, due to the corresponding truth table, the formula is always FALSE only when either $F$ and $G$ are both TRUE or they are both FALSE. And two formulas $F$ and $G$ are equivalent when either they are both TRUE or both FALSE (alternatively, when $F \Leftrightarrow G$ is valid or tautological).
Help Calculating Computer Time used by Algorithms
When we say that an algorithm runs in $O(N \log(N))$ that means that there exists a constant $C$ such that the time of execution $T(n)$ (as a function of the size $n$ of the input) is such that $T(n) \leq C n \log(n)$. And this is not very precise. Often we implicitly mean that the algorithm runs in $\Theta(N \log(N))$, which means that there exists two constants $C_1$ and $C_2$ such that $C_1 n \log(n) \leq T(n) \leq C_2 n \log(n)$. That being said, to answer your question we will consider that the running time of your algorithm is $T(n)=C n \log(n)$. Then $T(500)= 500C \log(500)$ and $T(100)=100C\log(100)$. Then $100C=\dfrac{T(100)}{\log(100)}$. You get that $$T(500)=5 (100C) \log(500) = 5 \dfrac{\log(500)}{\log(100)}T(100)\approx 5 \cdot 1.35 \cdot T(100) = 6.75 \cdot T(100) = 3.4 \textrm{ms}$$ For the other complexities the computations are easier: quadratic: $T(n) = C n^2$ gives $$\dfrac{T(500)}{T(100)}=\dfrac{(500^2)}{100^2}=5^2=25$$ Then $T(500)=25 \cdot 0.5 = 12.5 \textrm{ms}$ cubic: $T(n) = C n^3$ gives $$\dfrac{T(500)}{T(100)}=\dfrac{(500^3)}{100^3}=5^3=125$$ Then $T(500)=125 \cdot 0.5 = 62.5 \textrm{ms}$ Of course these values are only approximations and only give a rough idea of the running time you can expect.
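The same scaling argument takes only a few lines of Python; the sketch below assumes, as in the worked numbers above, that the measured time at $n=100$ is $0.5$ ms:

```python
# Extrapolating running times from T(100) = 0.5 ms (the figure used above)
# to n = 500 under the three growth rates discussed.
import math

t100 = 0.5  # ms, measured at n = 100

growth = {
    "n log n": lambda n: n * math.log(n),
    "n^2":     lambda n: n**2,
    "n^3":     lambda n: n**3,
}

for name, f in growth.items():
    t500 = t100 * f(500) / f(100)
    print(f"{name:8s}: T(500) = {t500:.2f} ms")
# n log n : ~3.37 ms,  n^2: 12.50 ms,  n^3: 62.50 ms
```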
Proving at least one number $\le 1/4$
WLOG let $\alpha \ge \gamma$. Then we have: $$\gamma(1-\alpha) \le \alpha(1-\alpha) = \alpha - \alpha^2 = \frac 14 - \left(\frac 12 - \alpha\right)^2 \le \frac 14$$
How do we know complex number modification results in a rotation? How do we derive the $e$ piece?
Suppose we have: $w = \cos \theta + i\sin\theta\\ z = \cos \phi + i\sin\phi$ And we multiply them together: $wz = $$(\cos \theta + i\sin\theta)(\cos \phi + i\sin\phi)\\ (\cos\theta\cos\phi - \sin\theta\sin\phi) + i(\sin\theta\cos\phi + \cos\theta\sin\phi)\\ \cos(\theta+\phi) + i\sin(\theta + \phi)$ We are working with complex numbers of unit magnitude, i.e. $|z| = 1$, but the concept still holds as any complex number can be written $z = |z|(\cos\theta + i\sin \theta).$ $\theta$ is called "the argument" of the complex number. Multiplying complex numbers adds the arguments. If an operation of multiplication behaves like an operation of addition, that is a property of exponentials. We define $e^{i\theta} = \cos \theta + i\sin \theta$ $z = |z|e^{i\theta}\\w =|w| e^{i\phi}\\zw = |zw| e^{i(\theta+\phi)} = |zw|(\cos(\theta+\phi) + i\sin(\theta+\phi))$ Taylor series... Background $e^x = \sum\limits_{n=0}^\infty \frac {x^n}{n!}\\ \cos x = \sum\limits_{n=0}^\infty \frac {(-1)^n x^{2n}}{(2n)!}\\ \sin x = \sum\limits_{n=0}^\infty \frac {(-1)^n x^{2n+1}}{(2n+1)!}$ or: $e^x = 1 + x + \frac {x^2}{2} + \frac {x^3}{6} +\cdots\\ \cos x = 1 - \frac {x^2}{2} + \cdots\\ \sin x = x - \frac {x^3}{6} + \cdots$ $e^{ix} = 1 + ix + \frac {(ix)^2}{2} + \frac {(ix)^3}{6} +\cdots\\ e^{ix} = 1 + ix - \frac {x^2}{2} - i\frac {x^3}{6} + \frac {x^4}{4!} + \cdots$ Collect the real terms and the imaginary terms... $e^{ix} = $$(1 - \frac {x^2}{2} + \frac {x^4}{4!}-\cdots ) + (ix - i\frac {x^3}{6} + i\frac {x^5}{5!} - \cdots)\\ \cos x + i\sin x$
Computing $\int_A(\beta z-\gamma y)dx+(\gamma x-\alpha z)dy+(\alpha y-\beta x)dz$
As you said - this calls for Stokes' theorem. $$ \int_A(\beta z-\gamma y)dx+(\gamma x-\alpha z)dy+(\alpha y-\beta x)dz=2\int\int_D \alpha \space dydz + \beta \space dzdx + \gamma \space dxdy $$ We know that the normal vector is $(\alpha,\beta,\gamma)$ so from that $$ = 2\int\int_D \vec{F} \cdot \vec{n} \space dS $$ where $\vec{n}$ is a unit normal vector on our surface (in this case our plane) and $\vec{F}$ is $(\alpha,\beta,\gamma)$ from the integral. With that $$ = 2\int\int_D(\alpha,\beta,\gamma) \cdot \frac{(\alpha,\beta,\gamma)}{\sqrt{\alpha^2+\beta^2+\gamma^2}} dS = 2r^2\pi \sqrt{\alpha^2+\beta^2+\gamma^2} $$ where $r^2\pi = \int\int_D dS$ is the area of the surface that $C$ encloses.
Chromatic polynomial of a simple disconnected graph
The chromatic polynomial, $\chi(k)$, counts the number of ways you can color a graph with $k$ colors. If the graph $G$ is disconnected, then the coloring of one component does not affect the coloring of the other components. If you have two components $G_1$ and $G_2$, and let's say for example that you have 4 ways to color $G_1$ with 2 colors, and 2 ways to color $G_2$ with 2 colors, then, because $G_1$ and $G_2$ do not interact, you have exactly $4\times 2 = 8$ ways to color $G$ with 2 colors. For every $k$, the number of ways you can $k$-color $G$ is the product of the number of ways to $k$-color $G_1$ and of the number of ways to $k$-color $G_2$. Then, the least non-vanishing term is the lowest term in the polynomial: let your polynomial be $\chi(k)=\sum_i a_i k^i$. You can show by induction (see here) that if your graph has $c$ connected components, then all the terms $a_0,\ldots,a_{c-1}$ are zero. Hence the least non-vanishing term is the one in $k^c$.
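A quick brute-force check in Python (an illustrative sketch, not part of the original answer) confirms the product rule on a small disconnected graph:

```python
# Count proper k-colorings of a disconnected graph by brute force and check
# that the count factors over the connected components.
from itertools import product

def colorings(vertices, edges, k):
    # Number of assignments of k colors to the vertices with no
    # monochromatic edge.
    return sum(all(c[u] != c[v] for u, v in edges)
               for c in (dict(zip(vertices, cs))
                         for cs in product(range(k), repeat=len(vertices))))

# G1: a single edge on {0,1}; G2: a triangle on {2,3,4}; G = G1 u G2 (disjoint).
k = 3
g1 = colorings([0, 1], [(0, 1)], k)                     # k(k-1)       = 6
g2 = colorings([2, 3, 4], [(2, 3), (3, 4), (2, 4)], k)  # k(k-1)(k-2)  = 6
g  = colorings(range(5), [(0, 1), (2, 3), (3, 4), (2, 4)], k)
print(g1, g2, g, g == g1 * g2)   # 6 6 36 True
```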
Verify Stokes's Theorem for the given surface and vector field
How comfortable are you with surface integrals? $X(s,t)$, as defined in your post, is a parametrization of $S$. Computing $\int_S \nabla \times \mathbf{F} d\mathbf{S}$ seems like a standard pre-Stokes' homework problem about surface integrals. To compute the line integral side of Stokes' theorem, you'll need to parametrize the boundary of $S$. Notice that the domain of $X$ in the $st$-plane is a rectangle (with sides 1 and $\pi/2$). $X$ sends each point of the rectangle to a point on $S$, and it sends the boundary of the rectangle to the boundary of $S$. Can you see how to parametrize the boundary now? (Make sure it's oriented correctly!) Now all you have to do is compute an ordinary line integral.
Proof of Zariski's Lemma to Nullstellensatz (Fulton)
As per my professor: For the first question, we have that for every $i=1,\dots,n$, $av_i$ is integral over $K[v_1]$. Further, $K$ is obviously integral over $K[v_1]$. By the corollary, since (finite) multiplication and addition of integral elements is integral, we must have that $K[av_1,\dots,av_n]$ is integral over $K[v_1]$. Therefore any element $z\in K[v_1,\dots,v_n]$ has some large enough $N$ such that $a^Nz$ is integral over $K[v_1]$. For the second question, since $K[v_1,\dots,v_n]$ is a field, we have that $K(v_1)$ is one of its subfields.
Sum of the series:$\frac{1}{1\cdot 2}+\frac{1\cdot3}{1\cdot2\cdot3\cdot4}+\cdots$
$$t_n=\frac{1\cdot3\cdot5\cdot7\dots(2n-1)}{1\cdot2\cdot3\cdot4\cdot5\cdot6\dots2n}=\frac{(2n-1)!!}{(2n-1)!!(2n)!!}=\frac{1}{(2n)!!}=\frac{1}{2^n n!}$$ Therefore this sum does not go to infinity. Invoking $e^x = \sum^{\infty}_{n=0} \frac{x^n}{n!}$, we can compute the sum: $$\sum^{\infty}_{n=1} \frac{(\frac{1}{2})^n}{n!} = \sum^{\infty}_{n=0} \frac{(\frac{1}{2})^n}{n!}-1 = e^{\frac{1}{2}}-1$$
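A couple of lines of Python confirm the closed form numerically (just a sketch):

```python
# Numerically check that sum_{n>=1} 1/(2^n n!) equals e^(1/2) - 1.
from math import factorial, exp

partial = sum(1 / (2**n * factorial(n)) for n in range(1, 30))
print(partial, exp(0.5) - 1)   # both ~ 0.6487212707
```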
Dependent increments?
You can think of the independent exponentials as wait times to the first click of $n$ independent Poisson processes. The maximum is the total time till the last one clicks. Say the first $n-1$ have clicked. Then by memorylessness, the remaining time till the final click is exponential and independent of everything else. If the first $n-2$ have clicked, then the remaining time till the $n-1$-st one clicks is the minimum of two exponentials, independent of everything else. So continuing backward inductively, the time till the final click is a sum of $n$ independent variables, where the first is distributed like the minimum of $n$ exponentials, the second is distributed like the minimum of $n-1$ exponentials, and so on. EDIT I realize I forgot to answer your whole question. Hopefully it's clear from above that you can use independence in the way you wanted to and that your expressions are correct. There is no contradiction that the expectation diverges as $n\to\infty$ while the variance goes to a constant. Imagine you had $X_n,$ a sequence of normal random variables where $X_n$ has mean $n$ and variance one. The mean diverges while the variance converges. There is absolutely no problem with this. I think you're confusing it with a case where a single random variable has infinite expectation but finite variance which would indeed be impossible. But here each of the random variables $M_n = \max(X_1,\ldots, X_n)$ has a finite expectation and a finite variance. It's just that as the sequence goes on the means get bigger and bigger while the variances don't.
Proving that $\sum_{i=1}^n\frac{1}{i^2}<2-\frac{1}{n}$ for $n>1$ by induction
The key in this problem is really an algebraic "trick" or manipulation to coax out the right-hand side from the left. You've presumably verified the base case (for $n=2$). You then assumed that $\color{blue}{\sum_{i=1}^k\frac{1}{i^2}<2-\frac{1}{k}}$ for some fixed $k\geq 2$. Now, your goal is to use this assumption (called the inductive hypothesis) to prove that $$\color{green}{\sum_{i=1}^{k+1}\frac{1}{i^2}<2-\frac{1}{k+1}}.$$ Starting with the left-hand side, \begin{align} \color{green}{\sum_{i=1}^{k+1}\frac{1}{i^2}} &= \color{blue}{\sum_{i=1}^k\frac{1}{i^2}}+\frac{1}{(k+1)^2}\tag{by defn. of $\Sigma$}\\[1em] &< \color{blue}{\left(2-\frac{1}{k}\right)}+\frac{1}{(k+1)^2}\tag{by inductive hypothesis}\\[1em] &= 2-\frac{1}{k+1}\left(\frac{k+1}{k}-\frac{1}{k+1}\right)\tag{manipulate}\\[1em] &= 2-\frac{1}{k+1}\left(\frac{k^2+k+1}{k(k+1)}\right)\tag{simplify}\\[1em] &< \color{green}{2-\frac{1}{k+1}}.\tag{$\dagger$} \end{align} we end up at the right-hand side, completing the inductive proof. $(\dagger)$: How did I get from the "simplify" step to the $(\dagger)$ step? Well, the numerator is $k^2+k+1$ and the denominator is $k^2+k$. We note that $k^2+k+1>k^2+k$ (this boils down to accepting that $1>0$). Since $\frac{1}{k+1}$ is being multiplied by something greater than $1$, this means that what is being subtracted from $2$ in the "simplify" step is larger than what is being subtracted from $2$ in the $(\dagger)$ step. Does that make sense? The main "trick" in the proof above is in the "manipulate" step, where you make a subtle connection with the right-hand side of what you are trying to prove. Hope that helps.
finding equation of a line parallel to another line and passing a point
What you did with “using the coefficients in front of the $\mu$” was picking $\lambda=0,\mu=1$ which leads to $P(3,-2,-1,1)$ which is not at infinity! The point at infinity is characterized by $t=3\lambda+\mu=0$ so for example $\lambda=1,\mu=-3$ (but any other multiple of this would work, too). With that you get $P(-7,7,2,0)$. Taking this correction into account, the rest of your approach looks good.
Find intervals of decrease and increase of a function $y=\frac{x^2}{2^x}$.
Your derivative is slightly off: $$y=\frac{x^2}{2^x} \implies \frac{\mathrm dy}{\mathrm dx}=\frac{2x\cdot2^x-\ln(2)x^2\cdot2^x}{(2^x)^2}=\frac{2x-\ln(2)x^2}{2^x}$$
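If you want to double-check the derivative given above, a short sympy sketch does it symbolically:

```python
# Symbolic check that d/dx (x^2 / 2^x) = (2x - ln(2) x^2) / 2^x.
import sympy as sp

x = sp.symbols('x')
y = x**2 / 2**x
claimed = (2*x - sp.log(2)*x**2) / 2**x
print(sp.simplify(sp.diff(y, x) - claimed))   # 0
```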
Show that $\lbrace 1, \alpha^k, \alpha^{2k}, \cdots\rbrace$ span $\ell^2$ for $0<|\alpha|<1$ and $k \geq 1$
To clarify, not every element of $\ell^2$ is the sum of finitely many $f_k$; take an element of $\ell^2$ that does not decay as $e^{-tn}$ for some $t$, for example. It is true, however, that the closed span of the $f_k$ (i.e., the closure of the vector space they generate) is $\ell^2$ itself. Fix $x = (x_0, x_1, \dots)\in \ell^2$, and assume without loss of generality that each $x_i$ lies in $X = [-1, 1]$. Since $x_i \to 0$, there exists (e.g., by the Tietze extension theorem) some continuous $f:X \to X$ with $f(\alpha^n) = x_n$ for each $n$. Fix $\epsilon > 0$, and choose $N$ such that $$\sum_{n > N} |x_n|^2 < \epsilon.$$ By the Stone-Weierstrass theorem, there exists some polynomial $g(z) = \sum a_n z^n$ with $|g - f| < \epsilon$ on $X$. Then $\xi = \sum a_k f_k\in \ell^2$ has $$|\xi_n - x_n| = \left|\sum_k a_k \alpha^{nk} - x_n\right| = |g(\alpha^n) - x_n| < \epsilon$$ for all $n$. Now bound the $\ell^2$-norm of $\xi - x$, using the fact that the $(f_k)_n$ decay exponentially in $n$.
Find Taylor polynomial of an integral function
If you have a function of $x$ that's defined as the integral from a constant to $x$, then the derivative of that function is just the integrand evaluated at $x$. So if you can do a substitution inside the integral so that one boundary value is a constant and the other one is just $x$, you can easily find the $n$th derivative of your function $f$.
Rectangular parallelepiped of greatest volume for a given surface area S
Don't try to perform a second derivative test in connection with Lagrange's method. The point you have found is clearly the maximum. Using the AM-GM inequality you have $$\root 3 \of{V^2}=\root 3\of{ab\cdot bc\cdot ca}\leq{ab+bc+ca\over3}={1\over6}S\ ,$$ with equality sign iff $ab=bc=ca$, i.e., iff $a=b=c$. It follows that $$V\leq\left({S\over6}\right)^{3/2}$$ with equality iff the parallelepiped is a cube with the given surface area.
Second-order derivative wrt. vector
You need to be a little bit careful with your notation: $\mathbf{c}^2$ is an abuse, since it doesn't really make sense to square a vector! The second derivative is actually a matrix and it is being hit on both sides by $\mathbf{c}$. More specifically, the second-order expansion of $f$ is $$f(x+c) \approx f(x) + c^T \nabla f(x) + \frac{1}{2}c^T\left(\nabla^2f\right) c$$ where $\nabla^2f$ is the Hessian of $f$, i.e. the matrix of second partial derivatives. Intuitively, $\left(\nabla^2 f\right)c$ measures how the gradient of $f$ is changing in the $c$ direction, so the full term $c^T\left(\nabla^2 f\right)c$ measures how the change of $f$ is changing in the $c$ direction.
find a third vector to form a set that spans $\mathbb R^{3}$
Since $u_1$ and $u_2$ as given are unit vectors that are perpendicular to each other (their dot product is $0$), their vector cross product will be a unit vector perpendicular to them, which is what is sought.
Flip a coin 100 times, what is the number of sample points for the event of $6$ heads in a row and hence it's probability? (93!/100!?)
Your hypothesis is that you obtain a specific sequence in the last seven tosses, and you place no condition on the first 93 tosses. Remember that the individual Bernoulli trials are independent (the coin has no memory), so the probability of obtaining THHHHHH in the last seven tosses is: $$ P = \frac{1}{2^7} = \frac {1}{128} $$
How do we view natural transformations as functions
The statement that a natural transformation is a map between functors is wrong, simply because functors are not sets (and even if they were realised as sets, it is certainly not those sets that the natural transformation maps between). But natural transformations can be thought of as morphisms between functors; thus one can define a category whose objects are the functors $C\to D$, and whose morphisms are natural transformations between such functors. Since they are part of the definition of this category, the statement that they are morphisms is kind of bland (we just define morphisms to be natural transformations); however it does come with an obligation: one must be able to define composition of natural transformations, and have the properties one requires for composition of morphisms. Defining composition is easy: if $F_1,F_2,F_3: C\to D$ are functors, and $\eta$ is a natural transformation from $F_1$ to $F_2$ and $\theta$ is a natural transformation from $F_2$ to $F_3$, then the composite transformation $\theta\circ\eta$ should define for every object $c$ of $C$ a morphism $F_1(c)\to F_3(c)$, which can obviously be taken to be the composite (in the category $D$) of the morphisms $\eta(c):F_1(c)\to F_2(c)$ and $\theta(c):F_2(c)\to F_3(c)$. This composition clearly is a natural transformation $F_1\to F_3$, and the required (partial) associativity of this composition is ensured by the same property in the category $D$. Composition also needs an identity at each object (i.e., at each functor $F:C\to D$), and this is where the identity natural transformation comes in. Clearly it should be such that it associates to every $c$ the identity morphism of $F(c)$, or else it would not be the identity for composition of natural transformations.
matrix inversion within a function optimization problem
For $X$ whose diagonal entries are non-zero, write $XX^H+Y = X(I+X^{-1}YX^{-H})X^H$. Then $f(X) = u (I+X^{-1}YX^{-H})^{-1} u^H$, where $u$ is a vector all of whose entries are $1$. The eigenvalues of $X^{-1}YX^{-H}$ are non-negative, so $X^{-1}YX^{-H}=Q\Lambda Q^H$ where $Q$ is unitary and $\Lambda$ diagonal with non-negative entries. So $f(X)= v (I+\Lambda)^{-1}v^H$, where $v=Q^H u$ has length $\sqrt N.$ It follows that $f(X)\le N$, and the value $N$ is approached as all the entries of $X$ tend to $\infty$, for which $\Lambda\to 0.$ I don't think the possibility of $0$ entries in $X$ will help beat this.
Question about Vector sapces
Hint for (a): Note that $K_n \subseteq K_{n+1}$ and $I_{n+1} \subseteq I_n$. What can you say about the dimensions if they are not equal?
The product rule with square roots
If you are going to use the Product Rule, you have $$\begin{align*} H'(u) &= \left(u-\sqrt{u}\right)'\left(u-\sqrt{u}\right) + \left(u-\sqrt{u}\right)\left(u-\sqrt{u}\right)'\\ &= \left(u - u^{1/2}\right)'\left(u-u^{1/2}\right) + \left(u-u^{1/2}\right)\left(u-u^{1/2}\right)'\\ &= \left( 1 - \frac{1}{2}u^{-1/2}\right)\left(u - u^{1/2}\right) + \left(u-u^{1/2}\right)\left(1 - \frac{1}{2}u^{-1/2}\right)\\ &= 2\left(u - u^{1/2}\right)\left(1 - \frac{1}{2}u^{-1/2}\right). \end{align*}$$ The first step is the Product Rule. The second step is just the fact that $\sqrt{u}=u^{1/2}$. The third step uses the Sum Rule and the Power Rule. The fourth and final step is just the fact that the two summands are equal. If you want to further multiply out the product, we have: $$\begin{align*} H'(u) &= 2\left(u - u^{1/2}\right)\left(1 - \frac{1}{2}u^{-1/2}\right)\\ &= \left(u - u^{1/2}\right)\left(2 - u^{-1/2}\right)\\ &= 2u - uu^{-1/2} - 2u^{1/2} + u^{1/2}u^{-1/2}\\ &= 2u - u^{1/2} -2u^{1/2}+1\\ &= 2u-3u^{1/2} + 1. \end{align*}$$ For the function you identify in the comments as the "correct one": $$H(u) = (u-\sqrt{u})(u+\sqrt{u})$$ assuming you want to exercise the Product Rule, we have: $$\begin{align*} H'(u) &= \left( u - \sqrt{u}\right)'\left(u+\sqrt{u}\right) + \left(u-\sqrt{u}\right)\left(u+\sqrt{u}\right)'\\ &= \left( u - u^{1/2}\right)'\left(u+u^{1/2}\right) + \left(u-u^{1/2}\right)\left(u+u^{1/2}\right)'\\ &=\left(1 - \frac{1}{2}u^{-1/2}\right)\left(u+u^{1/2}\right) + \left(u - u^{1/2}\right)\left(1 + \frac{1}{2}u^{-1/2}\right)\\ &= u+u^{1/2}-\frac{1}{2}u^{-1/2}u - \frac{1}{2}u^{-1/2}u^{1/2} + u + \frac{1}{2}uu^{-1/2} - u^{1/2} - \frac{1}{2}u^{1/2}u^{-1/2}\\ &= u + u^{1/2} - \frac{1}{2}u^{1/2} - \frac{1}{2} + u + \frac{1}{2}u^{1/2} - u^{1/2}-\frac{1}{2}\\ &= 2u-1. \end{align*}$$ You can verify this is correct, since $$(u-\sqrt{u})(u+\sqrt{u}) = u^2 - u,$$ and $$(u^2-u)' = 2u-1.$$
What is this notation?
He writes phrases like "... about the $x_3$-axis ..." quite a bit, I'm seeing. So, in context, I'm gathering he means "axis" by the dash symbol. That is, it's not a minus sign.
A tricky optimization problem: writing the dual (using Lagrangian)
$\newcommand{\R}{\mathbb{R}}\newcommand{\Minimize}{\operatorname*{Minimize}}$Let us define the operator $[{}\cdot{}]_+:\R\to\R$ as $$ [z]_+ = \begin{cases} z, &\text{if } z \geq 0 \\ 0, &\text{otherwise} \end{cases} $$ Then, the optimization problem $$ \Minimize_{x} f(x) + [g(x)]_+, $$ is equivalent to \begin{align} &\Minimize_{x} f(x) + y \\ &\text{subject to }y \geq 0, g(x) \leq y. \end{align} One can be seen as the dual of the other. That said, your optimization problem can be written as (I multiplied the cost by $n$ for convenience, but you can undo that) \begin{align} &\Minimize_{\theta_\nu, b_\nu} \sum_{i,\nu} \left[1 - y_i (\langle \theta_\nu, x_i^{(\nu)}\rangle + b_\nu)\right]_+ \\ &+ \sum_{i,j,k} \left[ |\langle \theta_k, x_i^{(k)}\rangle+b_k - \langle \theta_j, x_i^{(j)}\rangle+b_j| - \epsilon \right]_+ + \tfrac{n}{2}\sum_\nu \|\theta_\nu\|^2. \end{align} You can in fact think of your problem as the dual of this one (in fact, one is the dual of the other as the cost function is continuous).
Reasoning with congruences: does a positive integer $x$ exist with the following properties?
You don't mention any specific ordering requirements among the primes, so consider $p_1 = 7$, $p_2 = 5$ and $p_3 = 3$. You then get $$x \equiv p_1 \pmod{p_2} \implies x \equiv 7 \pmod{5} \tag{1}\label{eq1A}$$ $$x \equiv p_2 \pmod{p_3} \implies x \equiv 5 \pmod{3} \tag{2}\label{eq2A}$$ $$x \equiv p_3 \pmod{p_1} \implies x \equiv 3 \pmod{7} \tag{3}\label{eq3A}$$ You can easily confirm that $x = 17$ satisfies the $3$ equations above, and it's the smallest positive integer that does. Also, $$p_1 + (p_3 - 1)p_2 = 7 + (3 - 1)5 = 17 = x \tag{4}\label{eq4A}$$ so $a = p_3 - 1 = 2$ in this case satisfies your requirements. I haven't checked, but I'm fairly certain counter-examples will exist even if you impose an ordering requirement on the primes, e.g., $x_1 \lt x_2 \lt x_3$.
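A brute-force search in Python (a sketch) confirms that $x=17$ is indeed the smallest positive solution of the three congruences:

```python
# Smallest positive x with x = 7 (mod 5), x = 5 (mod 3), x = 3 (mod 7).
x = next(x for x in range(1, 10**6)
         if x % 5 == 7 % 5 and x % 3 == 5 % 3 and x % 7 == 3 % 7)
print(x)   # 17
```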
Let $\sum_{n=1}^\infty \frac{a_n}{3^n}.$ Determine (numerically or not) the limit of the infinite series by choosing $a_n=0$ or $2$ randomly.
A better way to generate the desired array is r=2*randi([0 1],10,1) The problem with your approach is that you try to perform a comparison on the array r, instead of each of its elements. A way to do this would be r(find(r==1)) = 0
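For completeness, here is a rough Python analogue of the experiment (a sketch, not a translation of the asker's original MATLAB): draw each $a_n$ from $\{0,2\}$ and evaluate the partial sum $\sum a_n/3^n$:

```python
# Draw a_n uniformly from {0, 2} and evaluate the partial sum of a_n / 3^n.
# (Python analogue of the MATLAB snippet above; 10 terms as in the example.)
import random

a = [2 * random.randint(0, 1) for _ in range(10)]
value = sum(an / 3**n for n, an in enumerate(a, start=1))
print(a, value)
```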
What is the factorization of $(da+dbi)^2$, $(da+dbi)^4$, and $(da+dbi)^8$,
It looks like you simply need to factor out the variable $d$ from each expression. Note that $$(da+dbi)^n = (d(a+bi))^n = d^n(a+bi)^n.$$
Show that this definition of derivative implies the other one.
Let us generalize the second definition. For $x \in \mathbb{R}$ we denote by $\mathfrak{N}(x)$ the set of all neighborhoods of $x$ in $\mathbb{R}$ (a neighborhood of $x$ is a set $N \subset \mathbb{R}$ such that $(x -\varepsilon, x +\varepsilon ) \subset N$ for some $\varepsilon > 0$). Let $x \in E \subset \mathbb{R}$. Then obviously the following are equivalent: (1) $x$ is a limit point of $E$. (2) There exists $N \in \mathfrak{N}(x)$ such that $x$ is a limit point of $N \cap E$. (3) For all $N \in \mathfrak{N}(x)$, $x$ is a limit point of $N \cap E$. Now let $f: E \to \mathbb{R}$ be a function and $x \in E$ be a limit point of $E$. For each $N \in \mathfrak{N}(x)$ define $$\phi_N: N \cap E \setminus \{x\} \to \mathbb{R}: t \mapsto \frac{f(t)- f(x)}{t-x} .$$ The map $\phi$ from the second definition is given as $\phi = \phi_{\mathbb{R}}$. If $\lim_{t \to x} \phi_N (t)$ exists, we denote it by $f'_N(x)$. In case $N = \mathbb{R}$ we simply write $f'(x)$. The following are obvious: (1) If $\lim_{t \to x} \phi (t)$ exists, then $\lim_{t \to x} \phi_N (t)$ exists for all $N \in \mathfrak{N}(x)$ and $f'_N(x) = f'(x)$. (2) If $\lim_{t \to x} \phi_N (t)$ exists for some $N \in \mathfrak{N}(x)$, then $\lim_{t \to x} \phi (t)$ exists and $f'(x) = f'_N(x)$. Now let $E = [a,b]$. To avoid confusion the function $\phi$ from the first definition will be denoted by $\Phi$. For $x \in (a,b)$ we have $\Phi = \phi_{(a,b)}$, for $x = a$ we have $\Phi = \phi_{(a-1,b)}$ and for $x = b$ we have $\Phi = \phi_{(a,b+1)}$. This shows that the first and the second definition are equivalent.
Necessary and sufficient condition for a normal group to be kernel of a homomorphism from the group to itself
Let $G$ be a group, and let $K \subset G$ be a subgroup. I claim that $K$ is the kernel of some homomorphism $\varphi \colon G \to G$ if and only if $K$ is normal, and $G/K$ is isomorphic to a subgroup $K'$ of $G$. Indeed, if $K = \ker(\varphi)$ for some homomorphism $\varphi \colon G \to G$, then $K$ is normal and by the first isomorphism theorem, $G/K \cong \mathrm{im}(\varphi)$, which is a subgroup of $G$. On the other hand, if $G/K$ is isomorphic to the subgroup $K'$ of $G$, then the composition of the projection $G \to G/K$ with the isomorphism from $G/K$ to $K'$ and the inclusion $K' \hookrightarrow G$ gives a homomorphism from $G$ to $G$ with kernel $K$.
prove $\sum_{cyc}\frac{1}{a(a+b)}\ge\frac{4}{ac+bd}$
By AM-GM $$\frac{2}{\sqrt{abcd}}\geq\frac{4}{ac+bd}.$$ Thus, it remains to prove that $$\sum_{cyc}\frac{1}{a^2+ab}\geq\frac{2}{\sqrt{abcd}}.$$ Since the last inequality is homogeneous, we can assume that $abcd=1$. Now, let $a=\frac{y}{x}$, $b=\frac{z}{y}$ and $c=\frac{t}{z}$, where $x$, $y$, $z$ and $t$ are positives. Hence, $d=\frac{x}{t}$ and we need to prove that $$\sum_{cyc}\frac{1}{\frac{y}{x}\left(\frac{y}{x}+\frac{z}{y}\right)}\geq2$$ or $$\sum_{cyc}\frac{x^2}{y^2+xz}\geq2.$$ But by C-S we have $$\sum_{cyc}\frac{x^2}{y^2+xz}=\sum_{cyc}\frac{x^4}{x^2y^2+x^3z}\geq\frac{(x^2+y^2+z^2+t^2)^2}{\sum\limits_{cyc}(x^2y^2+x^3z)}.$$ Thus, it remains to prove that $$(x^2+y^2+z^2+t^2)^2\geq2\sum\limits_{cyc}(x^2y^2+x^3z)$$ or $$(x^2+z^2)(x-z)^2+(y^2+t^2)(y-t)^2\geq0.$$ Done!
Exponential of operators satisfying Heisenberg Commutation Relation
Sketched proof: "$\Leftarrow$": Differentiate both sides of eq. (2) wrt. both $\sigma$ and $\tau$. Then put $\sigma=0=\tau$ to deduce eq. (1). $\Box$ "$\Rightarrow$": Use the truncated BCH formula $$ e^Ae^B~=~e^{A+B+\frac{1}{2}C} \tag{i}$$ where the commutator $$ C~:=~[A,B]\tag{ii}$$ is assumed to commute with both $A$ and $B$, $$[A,C]~=~0\quad \text{and}\quad [B,C]~=~0, \tag{iii} $$ to deduce that $$e^Ae^B~=~e^Ce^Be^A. \tag{iv} $$ $\Box$
Prove a finite cover contains {$⁡B_r (x_i)|x_i \in X$} for every $r>0$
Pick any $x_1 \in A$. If $A \not\subset B(x_1;r)$, then pick any $x_2 \in A \setminus B(x_1;r)$. If $A \not\subset B(x_1;r) \cup B(x_2;r) $, then pick any $x_3 \in A \setminus (B(x_1;r) \cup B(x_2;r))$. Continue this process. Then either $A \subset \bigcup_{i=1}^n B(x_i;r)$ for some $n$ and the process stops, or you can continue picking $x_{n+1} \in A \setminus \bigcup_{i=1}^n B(x_i;r)$ ad infinitum and get an infinite sequence $(x_i)$ in $A$. We shall show that such a sequence is impossible. Since $A$ is compact, there exists a convergent subsequence $(x_{i_k})$ with limit $x \in A$. Thus $d(x_{i_k},x) < r/2$ for $k \ge K$. We conclude $d(x_{i_K},x_{i_{K+1}}) \le d(x_{i_K},x) + d(x,x_{i_{K+1}}) < r/2 + r/2 = r$. This means $x_{i_{K+1}} \in B(x_{i_K};r) \subset \bigcup_{i=1}^{i_{K+1}-1} B(x_i;r)$ which contradicts the construction of $x_{i_{K+1}}$.
What is $l^2(\mathbb{Z}^2)$?
By definition $l^2(\Bbb Z^2)$ is the set of square-summable families of complex numbers $\{u_{m,n}\}_{(m,n)\in\Bbb Z^2}$. By square-summable we mean $$\sum |u_{m,n}|^2<\infty$$ It is endowed with an inner product the same way as $l^2(\Bbb Z)$ which makes it a separable Hilbert space (the proof is identical to the $\Bbb Z$ case). Fact: there is only one separable infinite-dimensional Hilbert space, up to isomorphism, so $l^2(\Bbb Z^2)$ has to be isomorphic to $l^2(\Bbb Z)$. One way to think about this is the following: Theorem: Let $\{u_i\}_{i\in I}$ be a sequence of non-negative real numbers indexed by a countable set $I$. Then the non-decreasing sequence $$S_N=\sum_{n=1}^N u_{\varphi(n)}$$ has the same limit (hence denoted by $\sum_{i\in I}u_i $) for all bijections $\varphi:\Bbb N\to I$. In other words, the order in which you add the elements does not affect the status of convergence of the series, or its sum. Therefore, since $\Bbb Z$ and $\Bbb Z^2$ are both countable, any bijection between them will induce an isomorphism between $l^2(\Bbb Z)$ and $l^2(\Bbb Z^2)$. In some cases, however, it may seem more natural to consider one notation rather than the other. But both are really simply spaces of countable sequences whose square is an absolutely convergent series.
reducing invertible matrix in an equation of matrix
If $A$ is square and $A^tA$ is invertible then of course $A$ is invertible, and your expression simplifies to $A^{-1}B$.
Finding set of values using inequalities
If you quoted the question correctly, the book is wrong. This quadratic is always positive if $p > 7/2$, and always negative if $p < -1$.
What kind of distribution is that?
As Didier says, it looks like a triangular distribution -- specifically, if we ignore the long tail, it could be the sum of two independent variables distributed uniformly between 0 and about 256, plus perhaps some lower-order noise. It could also be the sum of a random number of uniformly distributed $(0,256)$ variables, which would explain both the tail and the slight sudden drop in frequency that appears to happen right at the apex. It's not easy to estimate the distribution of the number-of-bytes from the graph, except that $2$ clearly dominates, and $1$ is more likely than $>2$. Whether or not this is a sensible hypothesis in your case depends entirely on the process that created your data.
Function that is closed but not open and not continuous
Your function is not defined on the real numbers: it is defined only on the set of non-negative real numbers. Thus, it does not meet the requirements of the problem. Moreover, it is continuous on its domain. There is absolutely nothing wrong with a function that is defined piecewise. Perhaps the simplest example that meets the requirements of the problem is the function $\chi_{\{0\}}$, the indicator (or characteristic) function of the set $\{0\}$: it is defined on all of $\Bbb R$; it is not continuous at $0$; it takes every subset of $\Bbb R$ to one of the four subsets $\varnothing,\{0\},\{1\}$, and $\{0,1\}$, all of which are closed; and it takes the open set $\Bbb R$ to the set $\{0,1\}$, which is not open in $\Bbb R$. For that matter, $\chi_A$ works for any non-empty proper subset $A$ of $\Bbb R$. So do the floor and ceiling functions, $f(x)=\lfloor x\rfloor$ and $f(x)=\lceil x\rceil$. All of these are well-known, useful functions.
Prove something is a partial order
As you mentioned in a comment, you need to show that the relation is reflexive, antisymmetric, and transitive. And you know that the reflexive step uses the fact that $3^0=1$. From there, you ask how you carefully convey the fact that the relation is reflexive. You want: For all positive integers $x$, $x\mathrm{R}x$. This means that for all positive integers $x$, there exists a nonnegative integer $k$ such that $x=3^k\cdot x$. To show this, let $x$ be given, and take $k=0$, which is a nonnegative integer. Then $x=3^0\cdot x$ shows that $x\mathrm{R}x$. Hint for antisymmetry: $3^k\cdot 3^j=1\Rightarrow k=j=0$. Hint for transitivity: A product of powers of $3$ is a power of $3$.
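As an informal cross-check (not a substitute for the proof), a short Python sketch can test all three axioms for the relation, reading it as $x\mathrel{R}y$ iff $x=3^k y$ for some nonnegative integer $k$ (an assumption consistent with the reflexivity argument above), on a small range of positive integers:

```python
# Brute-force check of the partial-order axioms for x R y  iff  x = 3^k * y
# for some nonnegative integer k (assumed reading of the relation).
def R(x, y):
    # True iff x is y times a nonnegative power of 3.
    while x > y:
        if x % 3:
            return False
        x //= 3
    return x == y

N = range(1, 100)
reflexive     = all(R(x, x) for x in N)
antisymmetric = all(not (R(x, y) and R(y, x)) or x == y for x in N for y in N)
transitive    = all(not (R(x, y) and R(y, z)) or R(x, z)
                    for x in N for y in N for z in N)
print(reflexive, antisymmetric, transitive)   # True True True
```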
Find the derivative of the function $(y^2-1)^2/(y^2+1)$
HINT: You can simplify the chain rule using $$(y^2-1)^2=\{(y^2+1)-2\}^2=?$$
Using the Joint MGF to calculate a probability
We know that the mgf corresponds to only one probability function. Therefore we want to find the form of the mgf in question. Looking up in tables or by memory, we identify a form similar to the one of an exponential mgf: $$ M_x(t)=\frac{1}{1-\theta t} $$ For an exponential of the form $$ \frac{1}{\theta}e^{-(1/\theta )x} $$ Also we can assume that the behavior of the components is independent from each other, so the product of the moment generating functions is the joint moment generating function. Solving a simple equation: $$ \frac{1}{(1-\theta_1 t)}\frac{1}{(1-\theta_2 s)}=\frac{1}{1-3s-2t+6ts} $$ Giving that $\theta_1=2$ and $\theta_2 = 3$. Therefore we want to calculate the probability of either component $X$ or $Y$ lasting less than some time $k$. Therefore we want to consider the minimum of either of the distributions failing before time $k$. We also know that for $\min\{X_i\}$, when those $X_i$ are exponentials, the survival function of the minimum is the product of the individual survival functions, which should be easy to follow from here. Just in case, the distribution of the minimum of the exponentials is an exponential with the parameter $\lambda_{min}=\sum \lambda_i$, where $\lambda_i=\frac{1}{\theta_i}$ because we used the alternative parametrization of the exponential in the problem.
related rate problem
You need to use the product rule. From $a=\frac 12bh$ you get $a'=\frac 12 b'h+\frac 12 bh'$ Added: See if the below figure helps. I drew a rectangle because it seemed clearer. At any time, the rectangle area is $a=bh$. If $b$ and $h$ both increase by $db$ and $dh$ we add the two long rectangles $db \times h$ and $dh \times b$ The small $dh \times db$ is negligible.
How to check an implementation of call-by-need is correct?
In a pure setting, call-by-need should produce the same results as call-by-name. I'm assuming in your notation e.g. $(2\, 2)$ describes two Church numerals applied to each other. In that case yes, your term is equivalent to $2\, 2\, I\, I$ (where $I$ is the identity combinator), since lambda calculus application is left-associative, and should reduce with call-by-need order to $λx.x$. You can verify this e.g. with the following Haskell (which is call-by-need) code e.g. under GHCI or some other Haskell REPL: > two = \f -> \x -> f (f x) > (((two two) id) id) "echo" "echo" > (two two id id) 1 1
Proving that a propositional theory of any cardinality has an independent set of axioms
In his paper “Tout ensemble de formules de la logique classique est équivalent à un ensemble indépendant”, Iegor Reznikoff showed the conclusion of your question, but unfortunately the original paper is in French. However you can find a translation in English here.
Getting hung up on notation for $\frac{d}{dx}e^u$ vs. $\int{e^u}du$
If you use the notation $u = f(x)$, so that $e^u= e^{f(x)},$ it makes the situation clearer. Then $$\color{blue} {(e^{f(x)} + C)' = e^{f(x)}\cdot f'(x)}$$ Now, $$\int e^{f(x)} \,dx \neq e^{f(x)} + C,\; \text{ unless } u = f(x) = x$$ but rather, $$\color{blue} {\int e^{f(x)} f'(x)\,dx = e^{f(x)}+C}$$
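A tiny sympy sketch illustrates both boxed identities; the specific choice $f(x)=x^2$ is just for illustration:

```python
# Illustrate the two identities above with the sample choice f(x) = x^2.
import sympy as sp

x = sp.symbols('x')
f = x**2
print(sp.diff(sp.exp(f), x))                        # 2*x*exp(x**2)  (chain rule)
print(sp.integrate(sp.exp(f) * sp.diff(f, x), x))   # exp(x**2)
```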
Basic finite dimensional distribution question
A finite-dimensional distribution is just the joint distribution of a random vector. You pick various time points $t_1,t_2,\ldots,t_n$, observe the stochastic process at those time points, and ask yourself "what's the distribution of this random vector?" To define a FDD it is sufficient to specify the joint distribution on rectangles $A_1\times A_2\times \cdots\times A_n$ (as you've suggested), or on products of half-infinte intervals $(-\infty,a_1]\times(-\infty,a_2]\times\cdots\times(-\infty,a_n]$ (to generalize the one-dimensional CDF of a scalar random variable). In general, these joint distributions won't correspond to product measure unless the selected time points yield independent variables; typically you get a joint distribution of correlated random variables. For example, if the stochastic process is Brownian motion, then the finite-dimensional distributions will be correlated multivariate gaussian. The simplest finite-dimensional distribution occurs when $n=1$: you get just the distribution of $X_t$, the process observed at a single time point.
Probability with locks
a) Find the probability that the burglar gets the first or the last digit right (or both). Using the equation: # of sample points in Event / Total # of sample points # of sample points in Event is = 3 because we want the First digit, last digit or both totalling 3. Total # of sample points = 10 x 10 = 100 because we are dealing with only 2 parts of the lock. So for a) 3/100 ? is this correct? Not at all. You want the probability for: guessing the first digit or the last digit or both. Now, these events are not disjoint; they have an intersection of common outcomes, so make sure you don't overcount by using the Principle of Inclusion and Exclusion. Also make sure you count in the same manner as you count the total ways. And so, of the $10\times 10$ ways to guess these two parts of the lock, there are $1\times 10$ ways to guess the first digit, $10\times 1$ ways to guess the last digit, and among these there is $1\times 1$ way to guess both. $$\dfrac {10+10-1}{100}=\dfrac {19}{100}$$ Now, retry the other parts with what you have learned. (d) Given that the burglar gets exactly two of the six digits right, find the probability that the first two digits are wrong. not sure how to answer this. The digits selected for each placement were equally and independently likely to be correct, so the combinations of 2 correct and 4 incorrect placements would also be equally likely. So, same principle: How many combinations of 2 correct and 4 incorrect placements are there? How many of these combinations have the first two placements wrong?
Construct triangle $ABC$ given point $C$ and the lines that contain the angle bisectors of angles $A$ and $B$.
Here is a way to construct $\triangle ABC$. Find the intersection point $O$ (the incenter) of the angle bisectors of $\angle A$ and $\angle B$. The constructed $\triangle ABC$ should have circle $O$ as its incircle. Draw a circle with radius $r$ centered at the incenter $O$. Draw two tangent lines $CD$ and $CE$ to the circle that intersect the two angle bisector lines at $D$ and $E$. Connect $DE$ and adjust the radius $r$ so that $DE$ is tangent to the circle; then $\triangle DEC$ is $\triangle ABC$.
If a map is smooth in a point, is it extendible to a smooth function on a nbhd.?
So, your equivalence is: $f\sim g$ if $f=g$ on a neighborhood of $x$. If $f$ is smooth at $x$ only (but not on any neighborhood of $x$) and $f\sim g$, then $g$ can't be smooth on any neighborhood of $x$, either. Two things can't be equal if one is smooth and the other isn't. The following is true: if $f$ is smooth at $x$, then there exists a map $g$, smooth in a neighborhood of $x$, such that $f(x)=g(x)$ and the derivatives of $f$ and $g$ at $x$ are equal (of all orders). Proof: Since the issue is local, move to Euclidean space. Smoothness is coordinate-wise with respect to the target, so we can focus on real-valued $f$. By Borel's theorem, there is a smooth function whose partial derivatives are the same as the partial derivatives of $f$, to all orders.
Time and distance: Buses traveling in opposite directions
When the buses meet, Bus A has travelled $8$ fewer minutes than usual, and therefore $8$ fewer km. These $8$ km were travelled by B in the $12$ extra minutes. So the speed of B is $\dfrac{8}{12}$ km per minute.
Simple combinatorics - inclusion and exclusion problem
$|A_1 \cup A_3|=2n-q \implies A_1$ and $A_3$ have $q$ items in common. $|A_2 \cup A_3|=2n-p \implies A_2$ and $A_3$ have $p$ items in common. $A_1 \cap A_2 \subseteq A_3 \implies A_1$ and $A_2$ have no common items that are not also common to $A_3$. Therefore, $|A_1 \cup A_2 \cup A_3|=3n-q-p$.
Show that $B={\{x \in [0,1] : λ(E \cap (x−ε,x+ε)) > 0 \ \text{for all} \ ε>0 }\}$ is perfect.
Let $E$ be a Lebesgue measurable subset of $[0, 1]$, and define $B={\{x \in [0,1] : \lambda(E \cap (x-\varepsilon,x+\varepsilon)) > 0 \ \text{for all} \ \varepsilon>0 }\}$. Show that $B$ is perfect. A perfect set is a closed set with no isolated points (see here). According to this definition, the empty set is a perfect set. Let us prove the result. Proof: $B$ is closed: Let $a \in [0,1]$ and let $\{x_n\}_n$ be a sequence of points in $B$ converging to $a$. So, given $\delta > 0$, there is $k$ such that $x_k \in (a-\delta, a+\delta)$. So, there is $\varepsilon >0$ such that $(x_k- \varepsilon, x_k + \varepsilon) \subseteq (a-\delta, a+\delta)$. Since $x_k \in B$, we have $$\lambda(E \cap (a- \delta, a + \delta)) \geq \lambda(E \cap (x_k- \varepsilon, x_k + \varepsilon)) > 0$$ So $a \in B$. So, $B$ is closed. $B$ has no isolated points: Let $x\in B$. Suppose $x$ is an isolated point. Then, there is $\delta>0$, such that $(x- \delta, x+ \delta) \cap B=\{x\}$. Note that, by the definition of $B$, $\lambda(E \cap (x- \delta, x+ \delta))>0$. So, for all $y \in (x- \delta, x+ \delta)$, such that $y \ne x$, we have that $y \notin B$. So, there is $\varepsilon >0$, such that $$\lambda(E \cap (y- \varepsilon, y + \varepsilon)) =0$$ Let $r_y >0$ such that $r_y< \varepsilon$ and $(y-r_y, y+r_y) \subseteq (x- \delta, x+ \delta)$. Clearly, we have $$0 \leq \lambda(E \cap (y- r_y, y + r_y)) \leq \lambda(E \cap (y- \varepsilon, y + \varepsilon)) =0$$ So, for all $y \in (x- \delta, x) \cup (x, x+ \delta)$, we have $\lambda(E \cap (y- r_y, y + r_y)) =0$. Let $K$ be any compact set such that $K \subset (x- \delta, x) \cup (x, x+ \delta)$. It is easy to see that there is $n$ and $y_0, y_1, \cdots y_n$ such that $K \subset \bigcup_{i=0}^n(y_i-r_{y_i}, y_i+r_{y_i})$. So $\lambda(E \cap K)=0$. So, for all compact sets $K$ such that $K \subset (x- \delta, x) \cup (x, x+ \delta)$, we have $\lambda(E \cap K)=0$. So $$\lambda(E \cap ((x- \delta, x) \cup (x, x+ \delta)))=0$$ Since $\lambda(E \cap \{x\})=0$, we have $$ 0 = \lambda(E \cap \{x\}) + \lambda(E \cap ((x- \delta, x) \cup (x, x+ \delta))) = \lambda(E \cap (x- \delta, x+ \delta))>0$$ Contradiction. So $B$ has no isolated points.
Number of permutations of the sequence
This was fairly tough and my solution is complicated. The problem is equivalent to finding the number of ways to jump from square $0$ to square $n$ on a row of squares, without jumping further than three spaces at a time, and without missing any squares in between. $$\begin{array}{c|cccc} \blacksquare&\square&\square&\square&\ldots&\square\\ 0&1&2&3&&n\end{array}$$ If you experiment a little with the problem, you will find that if you are not allowed to move more than two spaces at a time, then your progress from left to right is highly predictable. However, when you can move three spaces at a time, it is possible to make a sequence of long jumps to the right, double back once, and then return, having filled in all the squares you missed. The fact that the ways to do so are limited makes the problem tractable. First, we start with an empty row, and shade the first square. We can then draw a vertical line after the first square, to remind us that all the squares to the left of the line have been filled. Let $f(n)$ be the number of ways to get to square $n$, obeying the rules. Now, there are three possibilities: jump one square, jump two squares, or jump three squares. In the first case: $$\begin{array}{|ccccc}\blacksquare&\square&\square&\square&\ldots\end{array}$$ we could draw a new vertical line after the square just shaded, and we'd be in the exact same situation again, but with one square fewer. So from this point on, there are $f(n-1)$ ways to get to square $n$. In the second case, we have the following situation: $$\begin{array}{|ccccc}\square&\blacksquare&\square&\square&\ldots\end{array}$$ Let $g(n)$ be the number of ways to get to the last square from this configuration. In the third case, we have: $$\begin{array}{|cccccc}\square&\square&\blacksquare&\square&\square&\ldots\end{array}$$ Let $h(n)$ be the number of ways to get to the last square from this configuration. The easy part is to see that $f(n)=f(n-1)+g(n-1)+h(n-1)$. After that it gets tricky. In the second case, call the shaded square "1"; then the next moves can be 1-0-2 or 1-0-3 or 1-2-0-3 or 1-3-0-2-4 or 1-3-0-2-5, or 1-4... (long leaps to the right) In the third case, call the shaded square "2"; then the next moves can be 2-0-1-3 or 2-0-1-4 or 2-1-0-3 (but only if all squares to the left of the line are filled) or 2-0-3-1-4 or 2-3-0-1-4 or 2-4-1-0-3-5 or 2-4-1-0-3-6, or 2-5... (long leaps to the right) How can we deal with multiple leaps to the right? Actually, it's quite simple: let $\hat{h}(n)$ be the number of ways to get to square $n$ in the third case, if squares remain to be filled on the left of the vertical line. It turns out that the order in which those earlier squares get filled is entirely dependent on what happens to the right of the line. Putting it all together: $\begin{array}{rcl}f(n)&=&f(n-1)+g(n-1)+h(n-1)\\ g(n)&=&f(n-2)+f(n-3)+f(n-4)+g(n-2)+g(n-4)+\hat{h}(n-2)\\ h(n)&=&2f(n-3)+2f(n-4)+f(n-5)+g(n-3)+g(n-5)+\hat{h}(n-3)\\ \hat{h}(n)&=&h(n)-f(n-4) \end{array}$ $$\begin{array}{c|ccccccccc} n&0&1&2&3&4&5&6&7&\ldots\\ \hline f&1&1&1&2&6&14&28&56&\ldots\\ g&0&0&1&2&4&8&17&37&\ldots\\ h&0&0&0&2&4&6&11&25&\ldots\\ \hat{h}&0&0&0&2&3&5&10&23&\ldots \end{array}$$ It is easy to continue this table using a computer.
For example I found that, for one hundred squares, the answer is $f(99)=46103635399805799327514181735963$. I will add this sequence to the OEIS if you like. Edit: I've found a simpler way to express the recurrence: $$f(n)=1\cdot f(n-1)+0\cdot f(n-2)+1\cdot f(n-3)+3\cdot f(n-4)+...$$ The coefficients start $1, 0, 1, 3, 4, 5, 7, 10,$ and from that point each new coefficient is the sum of the last and the third-last, so $5+10=15,$ then $7+15=22,$ etc.
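If you want to continue the table (or check the value above) by computer, here is a short Python sketch of the recurrences, treating $f$, $g$, $h$, $\hat h$ as zero at negative arguments, which is consistent with the table; the function names are mine:

```python
from functools import lru_cache

# Recurrences from the answer above; values at negative indices are taken to be 0.
@lru_cache(maxsize=None)
def f(n):
    if n < 0:
        return 0
    if n == 0:
        return 1  # one way to "reach" square 0: start there
    return f(n - 1) + g(n - 1) + h(n - 1)

@lru_cache(maxsize=None)
def g(n):
    if n < 0:
        return 0
    return f(n - 2) + f(n - 3) + f(n - 4) + g(n - 2) + g(n - 4) + h_hat(n - 2)

@lru_cache(maxsize=None)
def h(n):
    if n < 0:
        return 0
    return 2 * f(n - 3) + 2 * f(n - 4) + f(n - 5) + g(n - 3) + g(n - 5) + h_hat(n - 3)

@lru_cache(maxsize=None)
def h_hat(n):
    if n < 0:
        return 0
    return h(n) - f(n - 4)

print([f(n) for n in range(8)])  # expected: [1, 1, 1, 2, 6, 14, 28, 56]
print(f(99))                     # should reproduce the 32-digit value quoted above
```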
Determining whether a relation is transitive or not.
Remember that the implication: $$p \rightarrow q$$ Is always true when $p$ is false, regardless of the truth value of $q$. Thus, consider your first relation $R = \{(1,6),(2,7),(3,8)\}$. Note that: $$(x,y)\in R \wedge (y,z)\in R$$ is always false, hence the implication: $$(x,y)\in R \wedge (y,z)\in R \rightarrow (x,z)\in R$$ is always true, and hence $R$ is transitive. By the same principle, your second relation is also transitive.
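If you want to check transitivity mechanically for small relations like these, a brute-force sketch in Python (the helper name is mine):

```python
def is_transitive(relation):
    """Return True if, whenever (x, y) and (y, z) are in the relation, (x, z) is too."""
    return all((x, z) in relation
               for (x, y) in relation
               for (y2, z) in relation
               if y == y2)

print(is_transitive({(1, 6), (2, 7), (3, 8)}))  # True: the premise is never satisfied
```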
Let $S=\{0,2,4,6,8\}$, $T=\{1,3,5,7\}$. Determine whether each of the following sets of ordered pairs is a function with domain $S$ and co-domain $T$.
The domain values of a function must be mapped to values in the codomain, so, as indicated in the comments, your work is correct.
Rank and Range space of Powers of a Matrix
It is not quite true that the rank is the number of non-zero eigenvalues. If the matrix is diagonalizable then the statement holds. In general, the multiplicity of zero as an eigenvalue will always be greater than or equal to the nullity of the matrix (as a consequence of algebraic multiplicity always being at least the geometric multiplicity). As for when the matrix loses rank as you take powers of it, it is more informative to examine the Jordan blocks of the matrix. If your matrix has a Jordan block of size $k$ corresponding to zero, then it is easy to see that the matrix will lose one rank for each power up to the $k$th power. How many ranks it loses "per power" will be dependent on the number and the sizes of the null Jordan blocks. Notice also, that the example that wj32 gave is precisely a Jordan block of size $2$.
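As a quick numerical illustration of the Jordan-block behaviour (a sketch with NumPy; the particular matrix is just an example):

```python
import numpy as np

# A single 3x3 Jordan block with eigenvalue 0
J = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])

for p in range(1, 5):
    print(p, np.linalg.matrix_rank(np.linalg.matrix_power(J, p)))
# ranks: 2, 1, 0, 0 -- one rank is lost per power up to the block size 3
```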
General form of Nullstellensatz
Note that the classical Nullstellensatz in $k[x_1, \dotsc, x_n]$ states $$\sqrt{\mathfrak a} = \mathcal I(\mathcal V(\mathfrak a)) = \bigcap\limits_{\mathfrak a \subset \mathfrak m} \mathfrak m,$$ with the intersection being taken over all maximal ideals containing $\mathfrak a$. In any ring, we have a formal Nullstellensatz: $$\sqrt{\mathfrak a} = \bigcap\limits_{\mathfrak a \subset \mathfrak p} \mathfrak p,$$ with the intersection being taken over all prime ideals containing $\mathfrak a$. Note that most proofs of this fact are actually modern versions of the Rabinowitsch trick (the Rabinowitsch trick is just localizing, not so tricky from the modern point of view ;) ). In a Jacobson ring, we have $\bigcap\limits_{\mathfrak a \subset \mathfrak p} \mathfrak p = \bigcap\limits_{\mathfrak a \subset \mathfrak m} \mathfrak m$, i.e. the formal Nullstellensatz boils down to the formulation of the classical Nullstellensatz. This is why a result about many rings being Jacobson is called a generalization of the Nullstellensatz.
Any convergent sequence is bounded. Don’t we need to use the absolute value in this proof?
Yes, you are correct. Everything in their proof is correct until they choose $C$. $|a_n|<|a|+1$ for all $n>N$ and $|a_n|\leq\sup\{|a_n|:n\leq N\}$, so $|a_n|\leq \sup(\{|a_n|:n\leq N\}\cup\{|a|+1\})$ for all $n$. Take any sequence $(a_n)$ such that $a_n<0$ and $|a_n|>|a|+1$ for all $n\leq N$, and it will not be bounded in the way that their proof asserts.
Why is a number field always of the form $\mathbb Q(\alpha)$ for $\alpha$ algebraic?
Here's a proof that picks up with your observations (that it suffices to verify that there are finitely many intermediate subfields): Let $L^{gc}$ denote the Galois closure of $L$. Then every subfield of $L$ is a subfield of $L^{gc}$, and subfields of $L^{gc}$ are, by Galois theory, in 1-1 correspondence with the subgroups of the Galois group $\operatorname{Gal}(L^{gc}/\mathbb{Q})$. Since the Galois group is finite, there are finitely many subgroups, hence finitely many subfields.
Suppose G is a finite abelian group with a nontrivial subgroup H contained in every subgroup of G. Show that G is cyclic.
(1) $H$ should be of prime order (otherwise it will have a proper subgroup $K$ and $H\nsubseteq K$). (2) Let $|H|=p$. If $|G|$ is divisible by another prime $q$, then there will be a subgroup of order $q$ and $H$ can not be contained in it. (3) Hence $|G|=p^k$ for some $k$, with the condition that there is a subgroup $H$ of order $p$ contained in every subgroup. (4) Let $H=\langle h\rangle$. If $k=1$ we are done. Let $k>1$. Then every proper subgroup of $G$ satisfies the hypothesis and so is cyclic by induction. (5) Let $M$ be a maximal subgroup of $G$; it is cyclic by (4), say $M=\langle x\rangle$. Take $y\in G\setminus M$. (6) Then consider the following situation in $G$: $$ 1 \leq \langle h\rangle \leq \cdots \leq \langle x^p\rangle \leq \langle x\rangle \leq G.$$ Since $[G: \langle x\rangle]=p$, so $y^p\in \langle x\rangle$. (7) Suppose if possible $y^p$ is in $\langle x^p\rangle$. Then $y^p=x^{ip}$ for some $i$. Then consider $(yx^{-i})$. Its order is $1$ or $p$ or $p^2$ or ... (8) Since $(yx^{-i})^p=1$ and $yx^{-i}\neq 1$ (otherwise $y\in \langle x\rangle$), the order of $\langle yx^{-i}\rangle$ is $p$, and it should be equal to the subgroup $H=\langle h\rangle$ by hypothesis. (9) Since $yx^{-i}\in H\leq \langle x\rangle$, hence $y\in \langle x\rangle=M$, a contradiction. Thus the assumption (that $y^p\in \langle x^p\rangle$) is wrong, hence $y^p$ is in $\langle x\rangle$ but not in $\langle x^p\rangle$. (10) Then $|y^p|=|x|$, so $|y|=p\cdot|x|=|G|$.
How to calculate conditional probability for balls in a two boxes?
The first box has $a$ white and $b$ black, the second box has $b$ white and $a$ black. One ball is either moved from the first box to the second (with probability $1/2$) or from the second box to the first (also probability $1/2$). You then select a ball from the box to which a ball was added. You are looking for the probability that the box you select from is the first box, given that it is white. Let $A$ be the event that the box you select from is the first box. Since both boxes are equally likely to be the box that is added to, $P(A) = 1/2$. Let $B$ be the event that the ball you select is white. By symmetry, $P(B) = 1/2$. You're looking for $P(A \mid B)$, so you apply Bayes' theorem: $$P(A \mid B) = \frac{P(B \mid A) \cdot P(A)}{P(B)}$$ Since $P(A) = P(B) = 1/2$, this means $$P(A \mid B) = P(B \mid A)$$ $P(B \mid A)$, the probability that the ball is white, given that you choose from the first box, is considerably easier to calculate. The fact that you choose from the first box means that a ball was taken from the second box and put into the first. The probability $P(W)$ that a white ball was taken from the second box and added to the first is $$P(W) = \frac{b}{a+b}$$ The probability that a black ball was taken from the second box and added to the first is $$1 - P(W) = \frac{a}{a+b}$$ Now, we write $P(B \mid A)$ in terms of $P(W)$, $a$, and $b$: $$P(B \mid A) = P(W) \cdot \frac{a+1}{a+b+1} + \bigl(1 - P(W)\bigr) \cdot \frac{a}{a+b+1}$$ The terms multiplied by $P(W)$ and $1-P(W)$ are the probabilities of getting a white ball, given a white-ball transfer and a black-ball transfer, respectively. In each case, this is just the number of white balls in the first box, divided by the total number of balls in the first box. $$P(B \mid A) = \frac{b}{a+b} \cdot \frac{a+1}{a+b+1} + \frac{a}{a+b} \cdot \frac{a}{a+b+1}$$ $$P(B \mid A) = \frac{a^2 + ab + b}{a^2 + 2ab + b^2 + a + b}$$ Finally, $$P(A \mid B) = \frac{a^2 + ab + b}{a^2 + 2ab + b^2 + a + b}$$
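As a sanity check on the algebra, here is a small sketch with exact fractions comparing the closed form against the direct two-case computation (the helper names are mine):

```python
from fractions import Fraction

def formula(a, b):
    # The simplified expression derived above.
    return Fraction(a * a + a * b + b, a * a + 2 * a * b + b * b + a + b)

def direct(a, b):
    # Given we draw from the first box, the transferred ball came from box 2 (b white, a black).
    p_white_transfer = Fraction(b, a + b)
    p_black_transfer = Fraction(a, a + b)
    # Probability of drawing white from the enlarged first box in each case.
    return (p_white_transfer * Fraction(a + 1, a + b + 1)
            + p_black_transfer * Fraction(a, a + b + 1))

for a, b in [(1, 1), (2, 3), (5, 2)]:
    assert formula(a, b) == direct(a, b)
    print(a, b, formula(a, b))
```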
Isomorphism of direct sums $R/a \oplus R/b \cong R/{\rm lcm}(a,b)\oplus R/\gcd(a,b)$
Hint $ $ The Bezout identity $\rm\: ad+bc\:\! =\:\! g\:$ yields the following Smith normal form reduction $\left[\begin{array}{cc}\rm a & \!\!\!0\\0 &\rm \!\!\!b\end{array}\right] \!\sim\! \left[\begin{array}{cc} \rm a &\rm \!\!\!b\\0 &\rm\!\!\! b\end{array}\right] \!\sim\! \left[\begin{array}{cc} \rm a &\rm\!\!\! b\\0 &\rm\!\!\! b\end{array}\right] \left[\begin{array}{cc} \rm d &\rm \!\!\!\smash{-\!b/g} \\ \rm c &\rm a/g\end{array}\right] $ $= \left[\begin{array}{cc} \rm g &\rm 0\\ \rm bc &\rm \!\!\!ab/g\end{array}\right]^{\phantom{|^|}} \!\sim\! \left[\begin{array}{cc} \rm g &\rm 0\\ 0 &\rm \!\!ab/g\end{array}\right] = \left[\begin{array}{cc} \rm \!gcd(a,b) &\rm 0\\0 &\rm \!\!\!\!\!\!\!\smash{lcm(a,b)}\!\end{array}\right] $ See this answer for further detail.
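If you want to check this on concrete numbers, recent versions of SymPy provide a Smith normal form routine (I'm assuming `smith_normal_form` from `sympy.matrices.normalforms` is available in your version; the values $a=4$, $b=6$ are just an example):

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

a, b = 4, 6
D = smith_normal_form(Matrix([[a, 0], [0, b]]), domain=ZZ)
print(D)  # expected: diag(gcd(a, b), lcm(a, b)) = diag(2, 12), up to unit factors
```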
Adjust Saturation in CIE L*a*b* space.
Here is what I get: suppose your new total value of $sat$ is shown by $x$. Then you want to multiply the original $(a,b)$ by $t={{xL}\over{\sqrt{(a^2+b^2)(1-x^2)}}}$. I dropped all the stars!
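In code, the rescaling might look like the following sketch, a direct transcription of the formula above (I'm assuming the target value $x$ lies strictly between $0$ and $1$, so that the square root is real):

```python
import math

def scale_ab(L, a, b, x):
    """Rescale (a, b) by t = x*L / sqrt((a^2 + b^2) * (1 - x^2)), per the formula above."""
    chroma_sq = a * a + b * b
    if chroma_sq == 0:
        return a, b  # already achromatic; nothing to scale
    t = x * L / math.sqrt(chroma_sq * (1 - x * x))
    return t * a, t * b
```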
Different Differences between an Arrangement of Numbers
[This solution uses the fact, mentioned in a comment, that the sum of the last three numbers is one of $12, 13, 14, 15$ or $16$.] The possible differences are $1,2,3,4,5$ and each must occur. The only way to produce a difference of $5$ is for $1$ and $6$ to be adjacent. The difference of $4$ requires either $1$ adjacent to $5$ or $2$ adjacent to $6$, neither of which is possible if $1$ and $6$ are placed before the $3$. Therefore they are after, and the maximum possible sum of the last three numbers is $1+6+5=12$. Since the other choices are even larger, $12$ must be correct.
Graph of a convex function and normal vectors
First let us take the case $n=1$, i.e., $f\colon\mathbb{R}\to\mathbb{R}$. Up to some stretching and translating, let us say that $P=(0,0)$, $Q=(1,f(1))$, $N_P=(-f'(0),1)$, $N_Q=(-f'(1),1)$. Then $$N_Q-N_P=(f'(0)-f'(1),0)$$ and $$Q-P=Q=(1,f(1))$$ so $$\langle N_Q-N_P,Q-P\rangle=f'(0)-f'(1)$$ Now you should know that the derivative of a convex function is non-decreasing (see details below, as I could not find the proof of this here at MSE), so $f'(0)-f'(1)\leq 0$. For general $f\colon\mathbb{R}\to\mathbb{R}$, a similar argument will yield the result. Or you can "stretch and translate", as in the beginning of the solution: If $P=(x_0,f(x_0))$ and $Q=(x_1,f(x_1))$, apply the case above to $g(x)=f((x_1-x_0)x+x_0)-f(x_0)$ and rewrite the normals of $f$ in terms of the normals of $g$. As for the fact I mentioned about derivatives of convex functions, we need the following general fact (see Prob. 23, Chap. 4 in Baby Rudin: Every convex function is continuous and every increasing convex function of a convex function is convex): if $f$ is convex on an interval containing $x_1<x_2<x_3$, then $$\frac{f(x_2)-f(x_1)}{x_2-x_1}\leq\frac{f(x_3)-f(x_1)}{x_3-x_1}\leq\frac{f(x_3)-f(x_2)}{x_3-x_2}$$ (i.e., the slopes of the secants increase as you move them to the right). Then given $x_0<x_1$, we have, for all $x_0<z<w<x_1$, \begin{align*} \frac{f(z)-f(x_0)}{z-x_0} &\leq\frac{f(x_1)-f(x_0)}{x_1-x_0}\\ &\leq\frac{f(x_1)-f(w)}{x_1-w}\\ &=\frac{f(w)-f(x_1)}{w-x_1} \end{align*} Letting $z\to x_0^+$ and $w\to x_1^-$, we conclude $$f'(x_0)\leq f'(x_1)$$ For general $n$, let $f\colon\mathbb{R}^n\to\mathbb{R}$ be convex. The tangent space at a point $(x_0,f(x_0))$ of the graph is given by vectors of the form $(x,\nabla f(x_0)\cdot x)$ (where $\nabla f$ is the gradient of $f$ and "$\cdot$" denotes the usual inner product). It follows that the inner normal vectors are given by $(-\nabla f(x_0),1)$. Let $P=(x_0,f(x_0))$ and $Q=(x_1,f(x_1))$ be given, with $x_0,x_1\in\mathbb{R}^n$. Then $$\langle N_Q-N_P,Q-P\rangle=(\nabla f(x_1)-\nabla f(x_0))\cdot (x_0-x_1)\tag{$\bigstar$}$$ The following argument is a standard way of transforming a multivariable problem into a single-variable problem: Restrict your function to a line! Consider the function $g\colon\mathbb{R}\to\mathbb{R}$ given by $g(\lambda)=f((1-\lambda)x_0+\lambda x_1)$. Then $g$ is convex. The single variable case yields $g'(1)\geq g'(0)$. Let us use the chain rule to calculate $g'$ in terms of $\nabla f$ (I can never remember the formulas, so skip this if you want). I'll denote the derivative of a function $F\colon\mathbb{R}^N\to\mathbb{R}^M$ at a point $x_0$ by $dF_{x_0}$. Let $h\colon\mathbb{R}\to\mathbb{R}^n$, $h(\lambda)=(1-\lambda)x_0+\lambda x_1=x_0+\lambda(x_1-x_0)$. Then $dh_\lambda(t)=t(x_1-x_0)$ for any $\lambda$; $df_{x_0}(x)=\nabla f(x_0)\cdot x$; $dg_\lambda(t)=d(f\circ h)_\lambda(t)=df_{h(\lambda)}(dh_{\lambda}(t))=\nabla f(h(\lambda))\cdot(x_1-x_0)$; and $g'(\lambda)=dg_\lambda(1)$. So in particular $$g'(1)=\nabla f(h(1))\cdot(x_1-x_0)=\nabla f(x_1)\cdot(x_1-x_0)$$ and similarly $$g'(0)=\nabla f(x_0)\cdot (x_1-x_0)$$ Using these equalities along with $g'(0)\leq g'(1)$ and equation ($\bigstar$), we conclude that $\langle N_Q-N_P,Q-P\rangle\leq 0$, as we wanted.
Does the function $f$ admit an inverse function?
The function is differentiable and its derivative is $$f'(x)=\frac{1-x^2}{(1+x^2)^2}>0,\quad \forall x\in(-1,1),$$ so $f$ is continuous and strictly increasing on its domain of definition, and therefore it admits an inverse function. To determine $f^{-1}$, express $x$ in terms of $y$ in the equation $$y=f(x)$$ by solving a quadratic equation.
Definition of Category
The notion of a class is defined rigorously in Von Neumann–Bernays–Gödel set theory, which is a conservative extension of ZFC. Basically, you can form classes of sets using unrestricted comprehension, and you can freely take subclasses, images of classes under functions, and use the Axiom of Choice on classes of sets. However, no proper class is allowed to be an element of anything -- if a class $C$ is an element of a class $D$, then $C$ must be a set. In particular, there is no class of all classes, although there is a class of all sets.
Very basic model theory question
Hint You have to play with the two usages of $\vDash$... Example : assume $T \vDash \varphi$. This means that, for every $\mathscr M$ we have : $\text {if } \mathscr M \vDash T, \text { then } \mathscr M \vDash \varphi$. But the premise of the theorem is : for every model $\mathscr M : \mathscr M \vDash T \text { iff } \mathscr M \vDash T'$. Thus : if $\mathscr M \vDash T'$, then $\mathscr M \vDash \varphi$, which amounts to : $T' \vDash \varphi$.
Why isn't the 3 body problem solvable?
For the classical 3-body problem, the obstacle to a solution is, as you said, integrability. This is also sometimes called separability, and when it fails, it means that there does not exist a manifold in phase space such that on that manifold, the equations for the independent degrees of freedom of the equation are separated into independent equations. This is in turn related to being able to interchange mixed partial derivatives as you mention for the Poisson brackets, because if the equations separate, derivatives (and therefore integrals) can be performed in any order. The relationship between this and chaos is that non-integrable systems are generically chaotic -- meaning "usually" or "observably" chaotic, the obstacle to separating the degrees of freedom being that there are intersecting stable and unstable manifolds of hyperbolic periodic points which cause the solutions to fold endlessly in phase space. "Generic" has a definition here, it means true on a countable interesection of open dense sets -- in other words, for every solution, there is an open subset of solutions arbitrarily close which have this property. Hope this helps. There is a completely worked out solution for what is called the "restricted 3-body problem" (3 body problem in which one of the bodies has no mass) in Jurgen Moser's Stable and Random Motions in Dynamical Systems, which shows that even in this case, the motion of the massless body is chaotic for most initial conditions.
Question Regarding Span and Linear Combination
It seems like your question is about the geometry of the relation between invertibility of a matrix and the linear independence of its columns. Invertibility means that one can always solve $Ax=b$ for $x$, no matter the $b$. Indeed, $A^{-1}$ exists iff the columns (or rows) are linearly independent. (Note that $Ax=b$ can be solvable (depending on $b$) even when $A$ does not have linearly independent columns; i.e. not spanning the space). One can consider that $A$ is a map: it takes in a vector $x$ and produces a vector $Ax$. The question of solving $Ax=b$ is the same as asking whether $b$ is in the column space (i.e. span of the matrix's column vectors). This is by definition of matrix multiplication, which produces $Ax$ by taking a linear combination of the columns of $A$. This is the connection between span and solving linear systems. We can only guarantee that the system is solvable if the span hits all the space of $\mathbb{R}^n$; if there are parts of the space that cannot be reached, then choosing a $b$ in those areas means we cannot solve the system. In terms of linear independence, if two vectors out of $n$ are linearly dependent, then they give no new information about the space (and thus won't span it). Another way to see this is by the Rank-Nullity Theorem (albeit in a less layman way), which says $\text{rank}(A)+\text{nullity}(A)=n$. Only if the dimension of the span of the columns is the whole space can we always solve the system.
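A quick numerical illustration of the span point, as a sketch with NumPy (the matrices are just examples): with linearly dependent columns, whether $Ax=b$ is solvable depends on which $b$ you pick.

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 4.]])          # second column = 2 * first column, so rank 1

print(np.linalg.matrix_rank(A))   # 1: the columns span only a line in R^2

b_in_span  = np.array([3., 6.])   # lies on that line -> solvable
b_off_span = np.array([1., 0.])   # off the line      -> no solution

# Least squares finds the closest reachable point; compare it with b itself.
for b in (b_in_span, b_off_span):
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(b, np.allclose(A @ x, b))   # True only when b is in the column space
```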
What do the orbits look like on a torus given as $C\times C$ acted upon by $\mathbb{R}$?
It will be a whole lot easier to study this action by representing the torus as $\mathbf R^2/\mathbf Z^2$. Then your action comes from the action of $\mathbf R$ on $\mathbf R^2$ defined by $$ t\cdot(x,y)=(x+\tfrac1{2\pi} t,y). $$ It is clear that this is an action, it is just the action by translation on the right of the additive subgroup $\mathbf R=\mathbf R\times\{0\}$ of $\mathbf R^2$. The orbits are the affine lines parallel to the $x$-axis. In the torus this corresponds to the circles $C\times\{*\}$, where $*$ is any point of the second factor $C$.
A Calculus Question from Math 1A Fall 2018 Practice 2
The velocity of the first rocket is $v_1(t) = 6 − t$ metres per second and the velocity of the second is $v_2(t) = 10 − t$ metres per second,where $t$ is the time in seconds after the first launch. The time $t$ in the second velocity function is also counted as seconds after the first launch so in your integral, you should not start from $t=0$ (and end at $t=x-4$) but start at $t=4$ (and end at $t=x$), to get: $$\int_0^x v_1(t) \,\mbox{d}t = \int_\color{red}{4}^\color{red}{x} v_2(t) \,\mbox{d}t \iff \int_0^x \left(6-t\right) \,\mbox{d}t = \int_\color{red}{4}^\color{red}{x} \left(10-t\right) \,\mbox{d}t \iff x = 8$$
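Carrying out the two integrations makes the last step explicit: $$\int_0^x (6-t)\,\mathrm dt = 6x-\frac{x^2}{2}, \qquad \int_4^x (10-t)\,\mathrm dt = 10x-\frac{x^2}{2}-32,$$ and equating the two gives $6x = 10x - 32$, hence $x = 8$.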
how to prove that f is the identity map?
Let $T$be an automorphism of $\mathbb{D}$. Can you find a condition on $T$, so that $T^{-1} \circ f \circ T$ has the required property of having a fixed point at $0$? Now suppose there is another fixed point. Can Schwarz lemma be applied? (I'll leave you to find the appropriate $T$).
What do you call generating a function out of a graph?
The branch of mathematics called modeling gives you an idea of the type of curve you may want to use. Statistics helps you in knowing whether your sample is correct (is the number of experiments large enough, what are the experimental errors, ...). Curve fitting, together with optimization, is used to adjust the parameters and minimize the difference between your data and your model. Simulation is when you use your model to mimic reality (how many replications, what is due to randomness and what to model inaccuracy, ...). Much of this is not strictly a part of mathematics, and you will find a lot of information out there, but be ready to enter a new universe.
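For the curve-fitting step in particular, a minimal sketch with SciPy might look like this (the model and the data are made up purely for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    """Hypothetical model: exponential growth with two parameters."""
    return a * np.exp(b * x)

# Made-up measurements standing in for the points read off your graph.
xdata = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ydata = np.array([1.1, 2.9, 8.1, 20.5, 54.0])

params, covariance = curve_fit(model, xdata, ydata, p0=(1.0, 1.0))
print(params)  # fitted (a, b)
```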
finding solutions by factoring
We wish to find integer solutions for $a$ and $b$ such that $a^2-b^2=16$. Factoring the expression we obtain $(a+b)(a-b)=16$. This means that $(a+b)\mid 16$ and $(a-b)\mid 16$. From this we can find a set $(a+b)\in \{-16,-8,-4,-2,-1,1,2,4,8,16\}$. So we obtain the equations $a+b=-16 \implies a-b=-1$ $a+b=-8 \implies a-b=-2$ $a+b=-4 \implies a-b =-4$ $a+b =-2 \implies a-b =-8$ $a+b = -1 \implies a-b=-16$ $a+b= 1 \implies a-b=16$ $a+b=2 \implies a-b = 8$ $a+b=4 \implies a-b = 4$ $a+b=8 \implies a-b = 2$ $a+b=16 \implies a-b=1$ Now for each of these there is a corresponding matrix (e.g. the first set of equations has been done below): \begin{align*} \left(\begin{array}{cc|c} 1 & 1 &-16\\ 1 & -1 &-1\\ \end{array}\right)\end{align*} which we may reduce to row echelon form \begin{align*} \left(\begin{array}{cc|c} 1 & 1 &-16\\ 0 & -2 & 15\\ \end{array}\right)\end{align*} to get the solution $b=-7.5$ and $a=-8.5$. This is not an integer solution so we may discard it. Now continue this process for the remaining 9 pairs of equations.
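Equivalently, a few lines of Python can run through the divisor pairs and keep only the integer solutions (a sketch; the parity test here stands in for the row reduction):

```python
solutions = []
for d in (-16, -8, -4, -2, -1, 1, 2, 4, 8, 16):
    s, t = d, 16 // d            # s = a + b, t = a - b, with s * t = 16
    if (s + t) % 2 == 0:         # a and b are integers only when s and t have the same parity
        a, b = (s + t) // 2, (s - t) // 2
        solutions.append((a, b))
print(solutions)  # [(-5, -3), (-4, 0), (-5, 3), (5, -3), (4, 0), (5, 3)]
```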
Finding a special kind of continuous map on finite dimensional Hilbert Space
Since the Hilbert space is finite dimensional, $B$ is compact. Since $f$ is continuous, $f(B)$ will always be bounded.
Continuous Random Variables with Joint PDF problem
You have $f_{\small X,Y}(x,y)= 3 x^2\mathrm e^{-y}\mathbf 1_{x\in(0;1)}\mathbf 1_{y\in(0;\infty)}$. So indeed: (a) $f_{\small X}(x)=3x^2\mathbf 1_{x\in(0;1)}\require{cancel}\cancelto{1}{~\int_0^\infty\mathrm e^{-y}\mathrm d y~}$ (b) $f_{\small Y\mid X}(y\mid x)=e^{-y}\mathbf 1_{y\in(0;\infty)}\mathbf 1_{x\in(0;1)}$ and hence:$$\begin{align}\mathsf P(Y\gt 1\mid X=0.5) ~&=~\int_1^\infty \mathrm e^{-y}\,\mathrm d y\\[1ex]&=~1-\int_0^1\mathrm e ^{-y}\,\mathrm d y\end{align}$$ (c) $\mathsf E(XY)=\int_0^1\int_0^\infty xy\cdot 3x^2\mathrm e^{-y}\,\mathrm d y\,\mathrm d x$ and hence $$\begin{align}\mathsf E(XY)&=\int_0^1 x\cdot 3x^2\,\mathrm d x\cdot\int_0^\infty y\cdot \mathrm e^{-y}\,\mathrm d y\\[2ex]&=\text{hmm...}\end{align}$$ Well done. (d) So you are on track to evaluate $\mathsf E(2X(3X+Y))$.
Central algebra vs simple algebra
You can check that the ring $T_n(k)$ of all upper triangular matrices over a field $k$ with $n>1$ is central but not simple. To make this a complete solution for the posted question, I'd like to echo Tobias Kildetoft's excellent example already given in the comments: $\Bbb C$ is a simple noncentral $\Bbb R$-algebra. Comment on the post (and one of its comments): I'm not certain what you want a $k$-algebra over a noncommutative ring to be, but if you just want to talk about a ring $A$ that's a finite dimensional left vector space over $k$, we can do that. If $k$ isn't commutative, then no simple $k$-algebra (in this sense) will ever be central. So you could, for example, take the $2\times 2$ matrix ring over $\Bbb H$ as a simple non-central $\Bbb H$-algebra (in this sense). Since the center $Cen(R\oplus S)$ is $Cen(R)\oplus Cen(S)$, you won't ever be able to find an algebra whose center is a field by looking at the product of two $k$-algebras. For this reason, $\Bbb H\oplus\Bbb H$ is not central, as claimed in the comments.
Solving linear equations involving many variables
If it's a system of linear equations, then you can use Gauss-Jordan elimination to get the solutions. You can also take the determinant of the coefficient matrix first to check whether or not you'll have a single solution, infinitely many solutions, or zero solutions.
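Here is a sketch of both steps with SymPy, which does exact Gauss-Jordan elimination via `rref` (the system shown is a made-up example):

```python
from sympy import Matrix, symbols, linsolve

x, y, z = symbols('x y z')

# Example system: coefficient matrix and right-hand side.
A = Matrix([[ 2,  1, -1],
            [-3, -1,  2],
            [-2,  1,  2]])
b = Matrix([8, -11, -3])

print(A.det())                      # nonzero -> exactly one solution
print(A.row_join(b).rref())         # reduced row echelon form of the augmented matrix
print(linsolve((A, b), x, y, z))    # the solution set itself: {(2, 3, -1)}
```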
Is there anything special about the below finite metric space? See below for details.
Yes, this is a metric space. It is (isomorphic to) the subspace of $\mathbb R^2$ consisting of the four points $(0, 0), (0, 1), (1, 0), (1, 1)$, with the metric inherited from the metric on $\mathbb R^2$. As for your second and third questions: nothing particular that I can think of. Most texts for a first class in real analysis should give an overview of metric space theory. I personally like Charles Pugh's Real Mathematical Analysis quite a bit, but any such text will do.
$\mu(E)\ge \nu(E)\ \forall E\in A\ \Rightarrow\ \mu(\cup E)\ge\nu(\cup E)$? Here $\mu,\nu$ are probability measures on a $\sigma$-algebra.
On $\{1,2,3,4\}$ let $\mu$ be uniform on $\{2,4\}$ and $\nu$ be uniform on $\{1,3\}$. Let $A$ consist of just the two sets $E_1:=\{1,2\}$ and $E_2:=\{2,3\}$ (note $A$ is not a $\sigma$-algebra). Then $\mu(E_1)=\nu(E_1)=\frac12$ and $\mu(E_2)=\nu(E_2)= \frac12$ but $\mu(E_1\cup E_2)=\frac12 <1=\nu(E_1\cup E_2)$.
$\epsilon$-regular pair of vertex sets definition
When $A'$ and $B'$ are close in size to $A$ and $B$, such as the condition $|A'| \ge (1-\epsilon)|A|$ and $|B'| \ge (1-\epsilon)|B|$, we expect $d(A',B')$ to be close to $d(A,B)$, no matter what the graph structure is. So your version of the definition is far too weak. For example, when $|A'| \ge (1-\epsilon)|A|$ and $|B'| \ge (1-\epsilon)|B|$, we can write $$|A'||B'|d(A',B') \le |A||B|d(A,B) \le |A'||B'| d(A',B') + 2\epsilon |A| |B|$$ just by counting edges. The first inequality says "there are at least as many edges between $A$ and $B$ as between $A'$ and $B'$." The second inequality says "we get an upper bound on the edges between $A$ and $B$ by assuming all the edges not between $A'$ and $B'$ are there." Anyway, dividing by $|A||B|$, we get $$ (1-2\epsilon + \epsilon^2) d(A',B') \le d(A,B) \le d(A',B') + 2\epsilon $$ which is a form of saying "$d(A',B')$ is close to $d(A,B)$". It follows that $|d(A',B') - d(A,B)| \le 2\epsilon$. This doesn't quite imply your definition, but it's halfway there. We don't want the $\epsilon$-regularity condition to almost hold for any pair $(A,B)$! We want the statement "$(A,B)$ is an $\epsilon$-regular pair" to actually mean something. So we ask for as much as possible - because the regularity lemma says we can. You can imagine a more general $(\epsilon,\delta)$-regularity that says $$|d(A,B) - d(A',B')| \le \epsilon$$ whenever $|A'| \ge \delta |A|$ and $|B'| \ge \delta |B|$. But for both $\epsilon$ and $\delta$, the smaller they are, the better, and the regularity lemma says it's possible to make both of them as small as we want. So we just set them equal to simplify the statement. Finally, we want the regularity condition to be useful. When you start using the regularity lemma, you'll see that the actual form is what we need to find large substructures in our graph.
Integration work: $\int\sqrt{\frac{2-x}{x-3}} \ \mathrm dx$
Hint: try to compute $I$ via the substitution $$ \frac{2-x}{x-3}=t $$
What is the importance of $1 \neq 0$ in Spivak's Calculus?
The axioms can be used to show that if $z$ is an element with the property that $z+x=x+z=x$ for every $x$, then $z=0$. Indeed, $$ z+0=0 $$ from the stated property of $z$ (with $x=0$) and also $$ z+0=z $$ from the property of $0$ stated as an axiom. Hence $z=0$. The proof that if $ux=xu=x$ for all $x$ then $u=1$ is just the same (substitute $u$ for $z$, $1$ for $0$ and multiplication for addition). On the other hand, the set $\{?\}$ ($?$ means any object you like) with the operations $$ {?}+{?}={?},\qquad {?}{?}={?} $$ satisfies all axioms regarding addition and multiplication (and also order, actually) stated for the real numbers. Here $0={?}=1$, because the unique element satisfies the requirements for $0$ and $1$ stated by the axioms. This means that we cannot prove $0\ne1$ only with the stated axioms and so we need to add $0\ne1$ as a further axiom, if we want the axioms to reflect what we expect from the real numbers.
How to 'get rid of' limit so I can finish proof?
The 'if' part follows from the definition of the derivative as a limit. If some expression is always less than or equal to $M,$ and the limit exists, then the limit also satisfies that inequality. That, in its turn, follows from the epsilon-delta definition of a limit. The 'only if' part is the really interesting part. As commenters have pointed out it is (a direct consequence of) the Mean Value Theorem.
Density of $Q$ using Archimedian Property from this note
Since we are in the case $x>0$, $\underline{nx>0}$ and there exists $m\in \Bbb{N}$ such that $$m-1 \leq \underline{nx} <m$$ (The proof that such an $m$ exists uses the well-ordering property of $\Bbb{N}$.) Then, $$ny> \underline{1+nx \geq m}$$ Thus $$nx<m<ny$$ It then follows that the rational number $r=\frac{m}{n}$ satisfies $x<r<y$.
For a linear system Ax = b where the entries of A are real numbers and A is 17 × 17, it’s possible for the system to have exactly seventeen solutions.
There are no other possibilities. $Ax=b$ has either a unique solution, or no solution, or infinitely many solutions.
Why function $f(x)=a^x$ is not defined for negative 'a'
Because some non-integer values of $x$ would yield non-real values of $f(x)$. For example, if $a=-1$, then $x=\frac12$ would yield $f(x)=(-1)^{\frac12}=\sqrt[2]{-1}$.
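You can see this directly in a language with real arithmetic; for instance, in Python (just as an illustration):

```python
import math

print((-8) ** (1 / 3))    # Python falls back to a complex result here, not -2

try:
    math.pow(-1, 0.5)     # real-valued power of a negative base
except ValueError as err:
    print(err)            # "math domain error"
```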
Notation for set of functions between two sets
No. Strictly speaking the two notations talk about two different functions simply because they have different domains and codomains: $f : A \times B \to C$ is a function that produces a value $f(a, b)$ in $C$ when applied to a pair $(a, b)$ in $A \times B$ whereas $f : A \to C^B$ is a function that itself produces a function $f(a) : B \to C$ when applied to a value $a$ in $A$. However, they are equivalent in the following sense: Given any function $f : A \times B \to C$, you can get a function $f^\ast : A \to C^B$ by the recipe: $$\big[f^\ast (a)\big](b) = f(a, b)$$ Conversely, given any $g : A \to C^B$, you can get a function $\overline{g} : A \times B \to C$ by the recipe: $$\overline{g}(a, b) = \big[g(a)\big](b)$$ The correspondences $f \mapsto f^\ast$ and $g \mapsto \overline{g}$ between the sets $A \times B \to C$ and $A \to C^B$ are inverses of one another and they establish a natural bijection between those two sets. In this sense you can see the two functions $f : A \times B \to C$ and $f^\ast : A \to C^B$ (or $g : A \to C^B$ and $\overline{g} : A \times B \to C$) as being essentially the same. Some go as far as to drop the asterisk and say that $f : A \times B \to C$ and $f : A \to C^B$ are the same function when what they really mean is that $f : A \times B \to C$ and $f^\ast : A \to C^B$, as defined as above, represent the same function. This abuse of notation is probably what got you confused.
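The two recipes above are exactly what functional programmers call currying and uncurrying; a small sketch in Python (the helper names are mine):

```python
def curry(f):
    """Turn f : A x B -> C into f* : A -> (B -> C)."""
    return lambda a: (lambda b: f(a, b))

def uncurry(g):
    """Turn g : A -> (B -> C) back into a function on pairs."""
    return lambda a, b: g(a)(b)

add = lambda a, b: a + b
add_curried = curry(add)

print(add_curried(2)(3))            # 5
print(uncurry(add_curried)(2, 3))   # 5, recovering the original function
```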