Fuzzy C Means mathematics tutorial
I believe it is a version of an EM algorithm like a usual $k$-means algorithm; only that in this case each $x_i$ is allowed to belong to many clusters at the same time. If my intuition is right, you essentially try to maximize the log-likelihood function $\ell(c) = \log P(x|c)$. The EM proof should carry over to show that you can instead optimize $\ell(u, c) = \log P(x|u, c)$ in $u$, then in $c$, and repeat.
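To make the alternating optimization concrete, here is a minimal sketch of the standard fuzzy $c$-means updates (alternate the membership step and the centroid step), assuming Euclidean distances and the usual fuzzifier $m=2$; the function and its parameters are illustrative, not from any particular library:

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, seed=0):
    """Alternate the membership (u) and centroid (c) updates, EM-style.

    X: (n_samples, n_features) data; m > 1 is the fuzzifier.
    """
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), n_clusters))
    u /= u.sum(axis=1, keepdims=True)      # memberships of each point sum to 1
    for _ in range(n_iter):
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]           # c-step
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)                               # avoid 0-division
        u = d ** (-2.0 / (m - 1.0))                            # u-step
        u /= u.sum(axis=1, keepdims=True)
    return u, centers

# toy data: two separated blobs
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
u, centers = fuzzy_c_means(X, 2)
```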
Semi continuity question
For your first question, semicontinuous functions need not be simple. Using the wikipedia article on simple function you see that simple functions are finite sums of characteristic functions. Therefore, they can only take on finitely many values, but semicontinuous functions can take on infinitely many values. Also, some simple functions are indeed semicontinuous ($1_{\mathbb{R}}$ is certainly semicontinuous). The answers to this question give examples/pointers to semicontinuous step functions. For your second question, this does not characterize semi-continuity. For example, the function $$f(x)=\begin{cases}0&x\not=0\\1&x=0\end{cases}$$ is upper semicontinuous, but is neither left-, nor right-continuous at $0$. A similar example works for lower semicontinuous functions. See the third paragraph of examples section of the wikipedia article on semi-continuity.
Hat Matrix Leverages and Subspaces
Note that $H$ is defined as the projection matrix over the subspace $\mathcal X$. As the vector $e_1=(1,0,...,0)$ belongs to $\mathcal X$, then we will have that $$He_1=e_1$$ but $He_1$ is the first column of $H$, so the element $H[1,1]$ must be $1$.
Does natural deduction require a non-empty set of premises?
Natural deduction certainly can prove theorems $\vdash \phi$ with no assumptions to the left of $\vdash$, because rules like $\Rightarrow$-introduction let you discharge (i.e., get rid of) assumptions. For example, let's prove $\vdash A \Rightarrow A$:
1. Assume $A$.
2. Apply $\Rightarrow$-introduction, discharging the assumption made in step 1, to conclude $A \Rightarrow A$.
In sequent notation, I have gone from the axiom $A \vdash A$ in step 1 to the theorem $\vdash A \Rightarrow A$ in step 2. So the sequent proof would look like:
1. $A \vdash A$ (Axiom)
2. $\vdash A \Rightarrow A$ (from 1 using $\Rightarrow$-introduction)
Bounded sequence in dual with any weak*convergent subsequence
Consider the sequence $\{f_n\}\subset(\ell^\infty)^*$ where $f_n$ is the $n$-th coordinate functional, i.e., $f_n(e_j)=1$ if $n=j$ and $0$ otherwise. Clearly $\|f_n\|=1$ for all $n\in \mathbb N$, so it is a bounded sequence. Now, for $\{f_n\}$ to have a weak* convergent subsequence we would require some subsequence $\{f_{n_k}\}$ such that $f_{n_k}(x)\to f(x)$ for some $f\in (\ell^\infty)^*$ and all $x\in \ell^\infty$. However, for any subsequence $\{f_{n_k}\}$ we can construct an element $x\in \ell^\infty$ by setting its $n$-th coordinate to $(-1)^k$ if $n=n_k$ for some $k\in \mathbb N$, and to $0$ otherwise; then $f_{n_k}(x)=(-1)^k$. Clearly $(f_{n_k}(x))\subset \mathbb R$ is not convergent, so $\{f_{n_k}\}$ cannot be weak* convergent. As this is true for any subsequence, we conclude that $\{f_n\}$ has no weak* convergent subsequence.
Loops when drawing constantly changing angles and lines.
We work in the complex plane. Let $z=e^{i\alpha}$. Then the path is: $$1+uz+z^2+uz^3+z^4+uz^5+\cdots$$ If $1+uz=0$, meaning $u=1$ and $\alpha=\pi$, then we return to $0$ after $2$ steps. Otherwise, assume that $1+uz\neq0$. Suppose the path returns to $0$ after $2N$ steps for $N>1$. Then we can solve for this: $$0=(1+uz)\sum_{k=0}^{N-1}z^{2k}$$ Divide out $(1+uz)$: $$0=\sum_{k=0}^{N-1}z^{2k}$$ This is true if and only if $z^{2N}=1$ and $z^2\neq1$. In other words, $\alpha$ is a multiple of $\pi/N$, but not $0$ or $\pi$. Note that $u$ is irrelevant! Tackling an odd number is harder. Suppose the path returns to $0$ after $2N+1$ steps. Then: $$0=z^{2N+1}+(1+uz)\sum_{k=0}^{N-1}z^{2k}=z^{2N+1}+(1+uz)\frac{1-z^{2N}}{1-z^2}$$ $$0=z^{2N+1}-z^{2N+3}+1+uz-z^{2N}-uz^{2N+1}$$ That's a really messy equation, and I'm not sure if there's a simple class of solutions.
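If you want to experiment, here is a small illustrative script that draws the path (taking the step lengths to alternate between $1$ and $u$, matching the series above) and checks numerically whether it closes:

```python
import numpy as np
import matplotlib.pyplot as plt

def path_points(alpha, u, n_steps):
    """Partial sums of 1 + u*z + z^2 + u*z^3 + ... with z = exp(i*alpha)."""
    z = np.exp(1j * alpha)
    steps = [(u if k % 2 else 1) * z**k for k in range(n_steps)]
    return np.concatenate([[0], np.cumsum(steps)])

# alpha a multiple of pi/N (here N = 5) should close up after 2N steps,
# regardless of u:
pts = path_points(np.pi / 5, u=0.6, n_steps=10)
print(abs(pts[-1]))                      # ~0: the path returns to the origin
plt.plot(pts.real, pts.imag, marker="o")
plt.axis("equal")
plt.show()
```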
Show that $\ker\sigma \subset \varphi_1(\ker\rho)$ and $\operatorname{im}\tau \subset \psi_2(\operatorname{im}\sigma)$.
As stated and illustrated in the comments by Mindlack, the first part is clearly wrong. Consider $\rho=\sigma=\tau=0$, the trivial homomorphisms. Then $\ker\rho=F_1$ and $\ker\sigma= E_1$, and you are asked to show that $E_1\subset\varphi_1(F_1)$, which is only the case for $\varphi_1$ surjective. Pick your favorite non-surjective $\varphi_1$ and see that this cannot work. However, the reverse inclusion holds. To see this, consider $x\in\ker\rho$. Then by commutativity $$(\varphi_2\circ\rho)(x)=(\sigma\circ\varphi_1)(x)=0$$ and thus $\varphi_1(\ker\rho)\subset\ker\sigma$. The second part is correct. Pick $\tau(x)\in{\rm im}\,\tau$ for some $x\in G_1$. By exactness, $\psi_1$ is surjective and therefore there is some $y\in E_1$ such that $\psi_1(y)=x$. Then $$(\psi_2\circ\sigma)(y)=(\tau\circ\psi_1)(y)=\tau(x)$$ and thus ${\rm im}\,\tau\subset\psi_2({\rm im}\,\sigma)$. Arguments like this are commonly known as proofs by diagram chase. As $y\in\ker\rho$ it is correct to say $\psi_1(y)\in\psi_1(\ker\rho)$, as this is nothing else than applying $\psi_1$. EDIT: Concerning the question below: consider the diagram (omitted here) in which the arrows are the respective inclusions and projections from the direct sums (and the identity for the rightmost column). $\tau$, in your notation, is surjective but $\sigma$ clearly is not.
Good problem books on martingales
Have a look at Probability with Martingales by David Williams.
How to solve $ \sum_{i=1}^{n-1} i^2 \equiv \;? \pmod n$
It never hurts to gather some numerical data. For your first problem: $$\begin{array}{cc|cc|cc} n&\left(\sum_{k=1}^{n-1}k^2\right)\bmod n&n&\left(\sum_{k=1}^{n-1}k^2\right)\bmod n&n&\left(\sum_{k=1}^{n-1}k^2\right)\bmod n\\ \hline 2&1&8&4&14&7\\ 3&2&9&6&15&10\\ 4&2&10&5&16&8\\ 5&0&11&0&17&0\\ 6&1&12&2&18&3\\ 7&0&13&0&19&0 \end{array}$$ Examination of that table should suggest that $$\left(\sum_{k=1}^{n-1}k^2\right)\bmod n=\begin{cases} \frac{n}2,&\text{if }n\equiv 2\!\!\!\pmod 6\\\\ \frac{2n}3,&\text{if }n\equiv 3\!\!\!\pmod 6\\\\ \frac{n}2,&\text{if }n\equiv 4\!\!\!\pmod 6\\\\ 0,&\text{if }n\equiv 5\!\!\!\pmod 6\\\\ \frac{n}6,&\text{if }n\equiv 0\!\!\!\pmod 6\\\\ 0,&\text{if }n\equiv 1\!\!\!\pmod 6\;; \end{cases}\tag{1}$$ since you know that $$\sum_{k=1}^{n-1}k^2=n\left(\frac{n^2}3-\frac{n}2+\frac16\right)=\frac16n(n-1)(2n-1)\;,$$ it shouldn’t be too hard to prove $(1)$. Added: For the second problem, did you try calculating $$\left(\sum_{k=1}^{n-1}k^3\right)\bmod n=\frac{n^2(n-1)^2}4\bmod n$$ for $n=2,3,4,\dots$ as I did for the first problem? Doing so should let you guess right away that the value is $0$ for odd $n$, and discovering the mathematical reason for this isn’t hard. If $n$ is odd, then $\frac{(n-1)^2}4$ is an integer, and clearly $n^2\frac{(n-1)^2}4\bmod n=0$. If $n$ is even, then $$\frac{n^2(n-1)^2}4\equiv\left(\frac{n}2\right)^2(n-1)^2\equiv\left(\frac{n}2\right)(-1)^2\equiv\left(\frac{n}2\right)^2\pmod n\;.$$ Now make a table for $n=2,4,6,\dots$: $$\begin{array}{rccc} n:&2&4&6&8&10&12&14&16\\ \hline \left(\frac{n}2\right)^2\bmod n:&1&0&3&0&5&0&7&0 \end{array}$$ The pattern is pretty obvious: it definitely appears that $$\left(\frac{n}2\right)^2\bmod n=\begin{cases} 0,&\text{if }n\equiv 0\!\!\!\pmod 4\\\\ n/2,&\text{if }n\equiv 2\!\!\!\pmod 4\;, \end{cases}$$ and you shouldn’t have too much trouble proving that this is actually the case.
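As a sanity check (illustrative, not part of the original answer), a short script verifying the guessed pattern $(1)$ for a large range of $n$:

```python
# Verify pattern (1): (sum of k^2 for k < n) mod n, by residue of n mod 6.
def predicted(n):
    return {0: n // 6, 1: 0, 2: n // 2, 3: 2 * n // 3, 4: n // 2, 5: 0}[n % 6]

for n in range(2, 1000):
    assert sum(k * k for k in range(1, n)) % n == predicted(n)
print("pattern (1) holds for n = 2..999")
```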
Zeros of polynomials with real exponents
A simple counterexample is $$f(z) = 1 + z^{1/2}.$$ The definition of $z^r$ used here implies that $\operatorname{Re} z^{1/2} \geqslant 0$ for all $z\in \mathbb{C}$, and therefore $\operatorname{Re} f(z) \geqslant 1$, which shows $f$ has no zero. Any exponent $0 < r < 1$ would give a zero-free function in the same manner.
Radius of convergence of $\sum_{n\geq2}\frac{z^{n}}{\ln(n)}$?
Using the Cauchy-Hadamard theorem: $$\frac{1}R=\limsup_{n\to\infty} \left|\frac{1}{\ln(n)}\right|^\frac{1}{n} = 1$$ Then $R=1$. EDIT: Alternatively, the same result can be obtained using the (less general) ratio test, since: $$\frac{1}R=\lim_{n\to\infty} \left|\frac{\ln(n)}{\ln(n+1)}\right| = 1$$
Finding a tight lower bound for $\left(\frac{1+x}{(1+x/2)^2}\right)^n$
Write $y = x/2$. Then $\left(\frac{1+x}{(1+x/2)^2}\right)^n =\left(\frac{1+2y}{(1+y)^2}\right)^n =\frac{(1+2y)^n}{(1+y)^{2n}} $. Since $(1+2y)^n =\sum_{j=0}^n \binom{n}{j}2^jy^j $ and $\frac1{(1+y)^{2n}} =\sum_{k=0}^{\infty} \binom{2n+k-1}{k}(-1)^ky^k $, $\begin{array}\\ \frac{(1+2y)^n}{(1+y)^{2n}} &=\sum_{j=0}^n \binom{n}{j}2^jy^j\sum_{k=0}^{\infty} \binom{2n+k-1}{k}(-1)^ky^k\\ &=\sum_{j=0}^n\sum_{k=0}^{\infty}y^{j+k} \binom{n}{j}2^j \binom{2n+k-1}{k}(-1)^k\\ &=\sum_{m=0}^{\infty}y^m\sum_{j=0}^n\binom{n}{j}2^j \binom{2n+m-j-1}{m-j}(-1)^{m-j}\qquad j+k = m, k = m-j\\ &=\sum_{m=0}^{\infty}y^m(-1)^m\sum_{j=0}^n\dfrac{n!(2n+m-j-1)!}{j!(n-j)!(m-j)!(2n-1)!}2^j (-1)^{j}\\ &=\dfrac{n!}{(2n-1)!}\sum_{m=0}^{\infty}y^m(-1)^m\sum_{j=0}^n\dfrac{(2n+m-j-1)!}{j!(n-j)!(m-j)!}2^j (-1)^{j}\\ \text{so}\\ \frac{(1+x)^n}{(1+x/2)^{2n}} &=\dfrac{n!}{(2n-1)!}\sum_{m=0}^{\infty}(-1)^m2^{-m}x^m\sum_{j=0}^{\min(m, n)}\dfrac{(2n+m-j-1)!}{j!(n-j)!(m-j)!}2^j (-1)^{j}\\ \end{array} $ With this, you can get the power series. Note: Wolfy says this starts like $1-\dfrac{nx^2}{4}+\dfrac{nx^3}{4}+\dfrac{n(n-7)x^4}{32} -\dfrac{n(n - 3) x^5}{16} - \dfrac{n (n^2 - 33 n + 62) x^6}{384}+O(x^7) $.
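If you want to double-check the quoted expansion, a computer algebra system can expand the series with symbolic $n$; for instance with sympy (assuming your version handles the symbolic exponent, which current ones do):

```python
import sympy as sp

x, n = sp.symbols('x n', positive=True)
f = (1 + x)**n / (1 + x/2)**(2*n)
print(f.series(x, 0, 5))
# starts 1 - n*x**2/4 + n*x**3/4 + n*(n - 7)*x**4/32 + ..., as quoted above
```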
Describe the predicate
Hints:
$A=\{x\in \Bbb N\colon (\exists n\in \Bbb N)(x=2n)\}$
$A=\{x\in \Bbb N\colon (\exists n\in \Bbb N)(x=2n+1)\}$
$A=\{x\in \Bbb N\colon (\exists k\in \Bbb Z)(x=k^2)\}$
Properties of a continuous map $f : \mathbb{R}^2\rightarrow \mathbb{R}$ with only finitely many zeroes
Either $f(x)\leq 0$ for all $x$ or $f(x)\geq 0$ for all $x$. This one is the most interesting. It can happen that $f$ does not change sign under these conditions: take $u^2+v^2$ for example (if you write $x=(u,v)$). It is positive and has finitely many zeros (a single one). You can have any given number of zeros with $\prod_{i=1}^n \left((u-u_i)^2+(v-v_i)^2\right)$. Notice none of these polynomials ever changes sign. Now, interestingly, if a continuous function $f:\Bbb R^2 \to \Bbb R$ has finitely many zeros, then it can never change sign. Assume the contrary; then there are two points $x$ and $y$ such that $f(x)>0$ and $f(y)<0$. Since it has only finitely many zeros, it must be positive for infinitely many points, or negative for infinitely many points, or both. Assume the former, without loss of generality, because otherwise you could take $-f$. So, $f$ is positive for infinitely many points (and maybe also negative for infinitely many others, but we don't care). Draw a ray from $y$, in any direction $d$. If it "hits" a positive value of $f$, that is, $f(y+\lambda_d d)>0$ for some $\lambda_d>0$, then there is a zero of $f$ lying between $y_d=y+\lambda_d d$ and $y$, by continuity of $f$, and especially by continuity of the restriction $\lambda \to f(y+\lambda d)$. Now, two cases may happen:
- For infinitely many directions, you can find such a $y_d$; thus there are infinitely many zeros of $f$ (they are all different, since they are on different rays), contradiction.
- You can find only finitely many such directions. Thus, for infinitely many directions, you find only negative or null values along the ray. Now, draw a circle centered at $y$, with radius larger than $|x-y|$, so that your point $x$ is inside the circle. For infinitely many points $t_i$ on this circle, $f(t_i) \leq 0$. But $f$ has finitely many zeros, thus indeed, for infinitely many points $x_i$ on this circle, $f(x_i) < 0$ (with strict inequality). But then, trace rays between $x_i$ and $x$, and you will find again infinitely many zeros of $f$, hence again a contradiction.
And you are done. The proof looks rather convoluted to me; maybe there is some obvious argument I didn't see. By the way, it's trivial to generalize to $\Bbb R^n$ for $n>1$: just pick the restriction of $f$ to a plane where it would take on both positive and negative values. I'll try to add a bit of intuition. Sadly, I don't have a scanner to show pictures... It is easy to find a polynomial that has any given number $n$ of zeros, as shown above. But these examples are either always negative or always positive, so you can't get any conclusion from it. Remember, you are in $\Bbb R^2 \to \Bbb R$, thus polynomials don't necessarily have finitely many zeros! And actually, you often get a curve of zeros, and it's a usual way to define curves, for example the unit circle, defined as the zeros of $(u,v) \to u^2+v^2-1$. Now, assume $f$ takes on both positive and negative values. Then $f^{-1}(]-\infty,0[)$ and $f^{-1}(]0, +\infty[)$ are both open sets. My idea was that if both are nonempty, the boundary can't be finite. Now, the easiest way to find a zero of a continuous function is if it is a function of one variable and you know it has a positive and a negative value: you just have to use the good old bisection algorithm. Thus, drawing rays (or half-lines) enables you to look only at functions of one variable. The only thing that remains to do is finding enough zeros, that is, enough directions, to lead to a contradiction.
Even if the two other points in the question have easy counterexamples, it's interesting to see if they can happen at all.
"The map $f$ is onto. That is, surjective." Not necessarily. Take $(u,v) \to \frac{1}{1+u^2+v^2}$, which has finitely many (none) zeros, and is bounded. But, as seen in the answer to the first case, $f$ can't change sign, thus it is always $\leq 0$ or always $\geq 0$, thus certainly never onto $\Bbb R$.
"The map $f$ is one-one." Not if there is more than one zero, of course, and we saw examples above. But could it happen at all? No, there is a proof here: Is there a continuous bijection from $\mathbb{R}$ to $\mathbb{R}^2$. You can also conclude using the fact that if $f$ is continuous and bijective, it has one zero, hence it does not change sign, a contradiction because then it can't be bijective.
Give two matrices whose column spaces contain the column space of the given matrix.
For instance, $\begin{pmatrix}1&1&0&0\\1&1&0&0\\0&0&1&1\\0&0&0&1\end{pmatrix}$
Correct term in english to call the symbols: "=" / "≥" / "≤"
In the program context you provide, getConstraintSign is fine. getConstraintSymbol would work too, and is a little more precise, since it's really a glyph (symbol) you're returning.
What kind of algebraic equations do transcendental numbers not solve?
Whatever you accept as a generalization of polynomials, I assume the candidate expressions can be expressed with finitely many symbols taken from a finite alphabet (e.g., it is possible to write them down in an intelligible manner on paper or using $\LaTeX$). Moreover, I assume you only accept functions that have at most countably many zeroes in $\mathbb R$ (so this allows us to define $\pi$ as the smallest positive zero of $\sin x$, for example). Then you can still catch at most countably many numbers, hence certainly not all transcendentals.
when modulus of sum equals sum of the moduli
In the triangle inequality $|z_1+z_2|\leq |z_1|+|z_2|$ we have equality iff $z_1=0$ or there is a $\lambda\geq 0$ with $z_2=\lambda z_1$. So $|z_1+\dots+ z_n|=|z_1|+\dots+ |z_n|$ iff there are $i\in \{1,\dotsc,n\}$ and $\lambda_j\geq 0$, $j \in \{1,\dotsc,n\}\setminus \{i\}$, with $z_j=\lambda_jz_i$ for all $j\neq i.$
Providing a rationale for reducing differentiability on a manifold into differentiability on $R^n $?
The coordinate charts are needed not only to define a calculus on manifolds, but to define the manifolds themselves. If you have a topological space and you want to show that it is a smooth $n$-dimensional manifold, you have to show that for every point in the space there is an open neighborhood that is diffeomorphic to an open set in $\mathbb{R^n}$. The diffeomorphic maps from the manifold to $\mathbb{R^n}$ are called charts. In ordinary (multivariable) calculus one needs to define things like $f(x+h)-f(x)$, which is only possible if the domain of the function is a vector space. The charts allow us to transfer the notion of differentiability by pulling back the function along the chart to a function $\mathbb{R^n}\rightarrow\mathbb{R}$, for which differentiability is defined. If we have two charts, the fact that going from one chart to the other is diffeomorphic means that the differentiability on the manifold is well-defined (does not depend on the chart). When one says that one can 'avoid coordinates', it usually (see the caveat mentioned in the comment below) means the properties of objects that live on the manifold do not depend on specific coordinate charts. Thus any meaningful results should be expressed in a formalism that does not make reference to specific coordinate charts. Needless to say, concrete examples or calculations often require the choice of a specific coordinate chart. The important thing is that in principle any coordinate chart works. For example, if we define the metric using one set of coordinates, we can then express it in any other set of coordinates without changing the metric itself. Thus, geometrical statements about this metric, such as the arc lengths of curves, do not depend on the choice of coordinates.
Finding the limit of $ \lim_{k \rightarrow \infty} \left(\frac{2^k + 1}{2^{k-1} + 3}\right) $
HINT: Multiply the fraction by $1$ in the carefully chosen disguise $$\frac{1/2^{k-1}}{1/2^{k-1}}\;.$$
Give a sequence such that: $\forall \epsilon >0 $ $\exists N \in \Bbb N$ such that if $N \le n \le N+2$ then $|X_n - L| \ge \epsilon $
Cases 1 and 2: You want $|a_n - L| < \epsilon$. You showed $|a_n - 1| < \epsilon$ in case 1 and $|a_n - 0| < \epsilon$ in case 2, which wasn't asked for. In both cases let $a_n = L$ and you're good. Case 3: You need to define $N$ in terms of $\epsilon$. As you want things to get bigger than $\epsilon$, you want the sequence to diverge. Any divergent sequence will work. As you will want to subtract $L$, I suggest $a_n = n + L$. Then for any $\epsilon > 0$, if $N \ge \epsilon$ then for $N \le n \le N+2$, $|a_n - L| = |n + L -L| = n \ge N \ge \epsilon$. Weird, but it works. Case 4: It would seem it has to be divergent, and the exercise never said divergence wasn't allowed. But this one doesn't actually need to be divergent, as $\epsilon$ is not arbitrary in this case. Let $\epsilon$ be ... whatever you like. $\epsilon = 27$, say. Then you need to find a sequence, and an $N$ such that if $n \ge N$ then $|a_n - L| \ge 27$. Such an $N$ could be anything arbitrary, so $N = 3$. Then $\{a_n\} = \{97, -2, 27+ L, 28 + L, -L - 29, 500 + L, 27 + L, 31 + L, \dots\}$ will do. Or something less absurd. Okay, this exercise is weird! But I sort of like it. It forces you to think about exactly what is being asked and what it literally means. But I don't like that it plays on expectations and completely thwarts them. Usually we use these $N$, $\epsilon$ definitions to show convergence, with the expectation that $\epsilon$ is an arbitrarily small value within which we try to squeeze values of the sequence, and $N$ is a corresponding large index beyond which the terms will be squeezed. In this exercise, these values are nothing of the kind. Case 5: Okay, some (not all) $\epsilon$, a value of $N$, and for all $n \ne N$, $|a_n -L| = \epsilon$. So $\epsilon$ is a constant, and $a_n - L = \pm \epsilon$ for all $n \ne N$, another constant. Okay... let $\epsilon$ be $1$, $N = 35$, $a_n = L + 1$ for all $n \ne 35$, and $a_{35} = \frac{-5{,}726}{\pi}$. ... and so on....
Picking color randomly
Let's choose black as our favourite colour. In the first round, you have a $\frac{1}{7}$ chance of picking black; in the second round, you have a $\frac{1}{6}$ chance of picking black, but only if you have not picked black already, which is a $\frac{6}{7}$ chance, so the chance of getting black in the second round is $\frac{6}{7} \times \frac{1}{6} = \frac{1}{7}$. In the last round, you have a $\frac{6}{7} \times \frac{5}{6} \times \frac{1}{5} =\frac{1}{7}$ chance of picking black, since you must pick a different colour in the first two rounds, and black in the last round. So the chance of picking black is $\frac{1}{7} + \frac{1}{7} + \frac{1}{7} = \frac{3}{7}$, as is the chance of any other colour.
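A quick simulation (illustrative) confirms the $\frac{3}{7}$ answer:

```python
import random

def black_picked(colors=7, picks=3, black=0):
    """One trial: pick 3 of 7 colours without replacement."""
    return black in random.sample(range(colors), picks)

trials = 100_000
print(sum(black_picked() for _ in range(trials)) / trials)  # ~3/7 = 0.4286
```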
Finding possible forms of an analytic function
You may as well consider $h(z) = z - g(z)$. Then what you need is just that $h$ is analytic on $P_a$ and $\text{Re}(h(z))$ is bounded above on $P_a$. There are too many such $h$ to characterize them. For example, you could take $h(z) = H((z-a+1)/(z-a-1))$ where $H$ is analytic on the closed unit disk.
What does it mean to prove a problem cannot be solved by a Turing machine?
You're right - in the background, there's always a coding assumption. E.g., if the inputs to our problem are finite strings of naturals, then we have to fix at the outset some way of coding finite strings of naturals as finite strings of 1s (if that's how we're inputting things to our Turing machine - there are many models of course). Why is this suppressed? Well, actually, what we prove in these cases is very coding-independent: as long as you come up with a method of coding which isn't "silly", you'll get the same result. This can be formalized by proving that any coding satisfying a couple basic properties yields the same result, and these proofs usually aren't more than one or two lines longer than the initial undecidability proof they stem from. In spirit, this suppression of coding method is very similar to appeals to Church's thesis - at that stage in the game, you're expected to be able to fill in the details, and prove the appropriate generalizations, yourself. This may well be bad pedagogy in many cases, but it's what's going on.
Differentiable in normed vector space?
Hint: By induction on $k$, WLOG you may assume $k=1$. Let $a \in X$ and let $g_i \in L(X, Y_{i} )$ be the derivative of $\pi_i \circ f : X \to Y_i$ at the point $x=a$. Now show that the linear map $T(x) = (g_1 (x), g_2 (x), \dots, g_n(x)) \in L(X, Y) $ is the derivative of $f$ at the point $x=a$.
If $d=\gcd\,(f(0),f(1),f(2),\cdots,f(n))$ then $d|f(x)$ for all $x \in \mathbb{Z}$
Note that $d$ divides $\gcd(f_0,f_1,f_2)$ iff $d$ divides $f_0,f_1,f_2$. (I'm using $f_k=f(k)$ for simplicity.) Using repeated differences we get $$ \begin{array}{lll} f_0 & f_1 & f_2 & \\ f_1-f_0 & f_2-f_1 \\ f_2-2f_1+f_0 \\ 0 \\ \end{array} $$ Newton's interpolation formula then gives us $$ f(n) = f_0 \binom{n}{0} + (f_1-f_0) \binom{n}{1} + (f_2-2f_1+f_0) \binom{n}{2} $$ Therefore, if $d$ divides $f_0, f_1, f_2$, then $d$ divides $f(n)$ for all $n$. (And conversely, of course.) In the general case, $$ f(n) = d_0 \binom{n}{0} + d_1 \binom{n}{1} + d_2 \binom{n}{2} + d_3 \binom{n}{3} +\cdots $$ where $d_i$ are the numbers in the first column of the repeated differences array. It is clear that the $d_i$ are integer linear combinations of the $f_i$ and so if $d$ divides all $f_i$ then $d$ divides all $d_i$ and so all $f(n)$. BTW, Newton's interpolation formula also proves that a polynomial takes integral values at integers iff it is an integer linear combination of the binomial polynomials. See Integer-valued polynomial.
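Here is a small illustrative script that builds the first column of the repeated-differences array and checks Newton's interpolation formula on an example where $d=3$:

```python
from math import comb

def newton_coeffs(values):
    """First column of the repeated-differences array of f(0), f(1), ..."""
    coeffs, row = [], list(values)
    while row:
        coeffs.append(row[0])
        row = [b - a for a, b in zip(row, row[1:])]
    return coeffs

f = lambda x: 3*x*x + 6*x + 9        # gcd(f(0), f(1), f(2)) = gcd(9, 18, 33) = 3
d = newton_coeffs([f(k) for k in range(3)])
print(d)                              # [9, 9, 6]: all divisible by 3
print(all(sum(di * comb(m, i) for i, di in enumerate(d)) == f(m)
          for m in range(20)))        # Newton's formula reproduces f
```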
$f(x) \in F[x]$ is irreducible over $F$ if $f(x) = g(x)h(x)$ implies that $g(x) \in F$ or $h(x) \in F$ where $F$ is a field.
No, because being reducible (if $f(x)\ne0$) means that you can factor it as a product of non-invertible elements of $F[x]$.
Criteria for smoothness of the pointwise limit of a sequence of functions
Actually a stronger result holds: Thm: Suppose $f_n\in C^{n-1}([0,1]), n =1,2,\dots,$ and $f_n \to f$ pointwise on $[0,1].$ Suppose further that for each $m\in \{0,1,\dots \},$ $$\sup_{n>m+1} \|D^m f_n\|_1 < \infty.$$ Then $f\in C^{\infty}([0,1]).$ Lemma: Suppose $g_n\in C^{2}([0,1]), n =1,2,\dots$ and $\|g_n\|_1,\|g_n'\|_1,\|g_n''\|_1$ are uniformly bounded. Then there exists a subsequence $g_{n_k}$ that converges uniformly on $[0,1].$ To see how the lemma implies the theorem, we start by showing $f\in C^1.$ Consider the sequence $g_n = f_{n+3}', n=1,2,\dots$ From the hypotheses in the theorem, $g_n$ satisfies the hypotheses in the lemma. Thus there exists a subsequence $g_{n_k}$ that converges uniformly to some continuous $g.$ We thus have the following: $f_{n_k+3} \to f$ pointwise, and $f_{n_k+3}'\to g$ uniformly. By the standard result on uniform convergence and differentiation, $f$ is differentiable on $[0,1]$ and $f'= g.$ Since $g$ is continuous, $f\in C^1.$ We now look at $f_{n_k+3}''.$ The tail end of this sequence satisfies the hypotheses of the lemma. Thus some subsequence $f_{n_{k_j}+3}''$ converges uniformly to some continuous $h.$ In exactly the same way as above, we have $f'' = g' =h$ and $f\in C^2.$ Clearly this process can be continued, which proves $f\in C^\infty.$ Proof of the lemma: Suppose $\|g_n\|_1,\|g_n'\|_1,\|g_n''\|_1$ are all bounded above by $C.$ By Fatou's lemma, $\int_0^1 \liminf |g_n'| \le C.$ Thus $\liminf |g_n'| < \infty$ a.e. Thus there exists $b\in [0,1]$ such that for some subsequence $n_k,$ $$\sup_k |g_{n_k}'(b)|=M_b < \infty$$ for all $k.$ Going to a further subsequence $g_{n_{k_j}},$ we will also have $a\in [0,1]$ such that $\sup_k |g_{n_{k_j}}(a)|=M_a < \infty.$ From the FTC it follows that $$|g_{n_{k_j}}'(x)| \le |g_{n_{k_j}}'(x)-g_{n_{k_j}}'(b)| + |g_{n_{k_j}}'(b)| \le \int_0^1 |g_{n_{k_j}}''| + M_b \le C + M_b.$$ Thus $g_{n_{k_j}}'$ is uniformly bounded on $[0,1].$ By the MVT, $g_{n_{k_j}}$ is uniformly Lipschitz on $[0,1].$ Thus the sequence $g_{n_{k_j}}$ is equicontinuous on $[0,1].$ Since $g_{n_{k_j}}(a)$ is bounded, Arzela-Ascoli implies there is a subsequence of $g_{n_{k_j}}$ that converges uniformly, which gives the lemma.
Integral over complex conjugate domain
Applying the conjugate there is essentially equivalent to applying it to the function; if $I(\gamma) = \int_{\gamma}f(z)\,d\overline{z}$, then $\overline{I(\gamma)} = \int_{\gamma}\overline{f(z)}\,dz$. And the problem with that? It's strongly path-dependent. Unlike holomorphic functions, conjugate-holomorphic functions aren't exact forms. Let $f(x+iy)=g(x,y)+ih(x,y)$ and parametrize the closed loop $\gamma$ enclosing a region $R$ by $Z(t)=X(t)+iY(t)$ for $t$ from $a$ to $b$. We get \begin{align*}I(\gamma) &= \int_a^b \left(g(X(t),Y(t))+ih(X(t),Y(t)\right)\cdot (X'(t)-iY'(t))\,dt\\ &= \int_a^b g(X,Y)X'(t) + h(X,Y)Y'(t) + ih(X,Y)X'(t)-ig(X,Y)Y'(t)\,dt\\ &= \int_{\gamma}(g+ih)(X,Y)\,dx + (h-ig)(X,Y)\,dy\\ I(\gamma) &= \iint_R \frac{\partial (h-ig)}{\partial x} - \frac{\partial (g+ih)}{\partial y}\,dx\,dy\\ &= \iint_R -if'(x+iy) - if'(x+iy)\,dx\,dy\end{align*} (We abbreviate $g(X(t),Y(t))$ by $g(X,Y)$. These are all real double integrals and line integrals, although we allow complex-valued functions to simplify things a bit. The step from a line integral to an area integral was an application of Green's theorem.) So then, the integral $\int_{\gamma} f(z)\,d\overline{z}$ is the area integral of $-2if'$ over the region enclosed. It's not something that we can focus on a few points to handle - we have to deal with the whole region. And that's for a function that doesn't have any singularities in there.
Need help with countability proof
As asked: because $B$ is countable, you have an injection $B \to \Bbb N$. As $A$ is a subset of $B$, the injection from $B$ restricted to $A$ is an injection of $A$ into $\Bbb N$.
How do you stay sharp after graduating?
When Grothendieck died, it was discovered that he didn't have even one mathematical book in his home. He used to say that "Mathematics is to write and not to read". Practice writing Mathematics and you will keep up.
Let $A(x) = \sum_{n\leq x} a(n)$, is it possible that $A(x) = π(x) + O(\sqrt x)$
The function $A(x)$ is often denoted by $\Pi(x)$, and called the Riemann Pi function. In particular, $$A(x)=\Pi(x)=\sum_{n\leq x} \frac{\Lambda(n)}{\log n}$$ where $\Lambda(n)$ is the Von Mangoldt Lambda function. By splitting up the sum based on the value of the exponent, we have that $$\Pi(x)=\pi(x)+\frac{\pi(\sqrt{x})}{2}+\frac{\pi(\sqrt[3]{x})}{3}+\cdots=\sum_{j=1}^{\infty}\frac{\pi(\sqrt[j]{x})}{j}.$$ Using this identity, you can prove your desired result. (Or even something slightly stronger, with an additional $\log x$ in the denominator).
Let $f:\mathbb{R}\rightarrow[0,\infty]$ be measurable that $\int f(x)dx=0$, then $f(x)=0$ for a.e.
Let $E_n=\{x~:~f(x)\geq \frac{1}{n}\}$. Since $f$ is a non-negative measurable function and $\int f\,dx=0$, $$0=\int f\,dx\geq \int \frac{1}{n}~ \chi_{E_n}\,dx= \frac{1}{n}~m(E_n)$$ $$\implies m(E_n)=0.$$ $$\{x~|~f(x)>0\}=\cup_{n=1}^\infty E_n.$$ We know that if $\{E_n\}$ is a sequence of measurable sets such that $E_1\subseteq E_2\subseteq E_3\subseteq \cdots$, then $m(\lim E_n)=\lim m(E_n)$. Hence $m(\{x~|~f(x)>0\})=m(\cup_{n=1}^\infty E_n)=m(\lim E_n)=\lim~ m(E_n)=0$, so $f=0$ a.e.
can GCD(0,8)≠1 be proven purely by lattice laws?
Note $\rm\ x = 0\ $ in $\rm\ x \wedge (x \vee y)\ =\ x\ \ \Rightarrow\ \ 0\wedge (0\vee y)\ =\ 0\ \ $ contra $\rm\ \ 0\wedge x\ =\ 1\ \ $ (presuming $\rm\ 0 \ne 1\:$). Alternatively, recall that the idempotent laws follow from the absorption laws, viz. $$\rm x\wedge x\ =\ x\wedge (x\vee (x\wedge x))\ =\ x $$ Hence $\rm\ \ 0\wedge 0\ =\ 0\ \ $ contra $\rm\ \ 0\wedge x\ =\ 1\:.$
Proving that if $x,y \in \Bbb R$ and $|x|=|y|$ then $x^2=y^2$
Just use that $$\vert x\vert^2=x^2$$ In fact the last equality comes from $$(\vert x\vert -x)(\vert x\vert+x)=0$$ because one of the two factors is $0$ using the definition of the absolute value.
3 judges problem. Binomial-like distribution.
Here’s the work for guilty; try to do the innocent case yourself. Hint: try substituting something for $p$ :) Probability of being sentenced to jail by one judge: $p$. Probability of being sentenced to jail by the three: Case one: Homer says you’re innocent $$ \frac{1}{2} p^2 $$ Case two: Homer says you’re guilty $$ \frac{1}{2} (1-(1-p)^2) = \frac{1}{2} (2p-p^2) $$ Summing together: $$ \frac{1}{2} (p^2-p^2+2p) = p $$
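A quick symbolic check (illustrative) that the guilty case indeed simplifies to $p$:

```python
import sympy as sp

p = sp.symbols('p')
guilty = sp.Rational(1, 2) * p**2 + sp.Rational(1, 2) * (1 - (1 - p)**2)
print(sp.simplify(guilty))   # p
```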
matrix norm equality
Note that if we have $w' = \mathbf{A}w$, then we have $\mathbf{A}^{-1}w'=w$, so: $$\max_{w'}\frac{\|\mathbf{A}^{-1}w'\|}{\|w'\|}=\max_{w}\frac{\|w\|}{\|\mathbf{A}w\|}$$ But $w'$ is just a dummy variable as we range over $\mathbb{C}^{n}$, so: $$\max_{w}\frac{\|\mathbf{A}^{-1}w\|}{\|w\|}=\max_{w}\frac{\|w\|}{\|\mathbf{A}w\|}$$ As required.
How to draw the Greek letter $\xi$
One way, which I don't personally use but which works all the same, is to draw a lower-case $e$ (or even $\varepsilon$ I guess) and, without taking your pen off the paper, draw a lower-case $s$ directly underneath it. It certainly doesn't say anything about the difficulty level of the topic involved. Alternatively...
The average trip is two weeks. What is the probability that it ends on any given day? What if most trips end on Saturday?
Not sure if this properly captures the dynamics of mummy-chasing, but a reasonable assumption is that curses get tripped, with concomitant mummy-chasing, following a homogeneous Poisson point process, which is characterized by two assumptions: (a) Over any time interval $[a,b]$ the number $N(a,b)$ of times the mummy arrives is a Poisson random variable with mean $\lambda(b-a)$, where $\lambda>0$ is some constant that represents the rate or intensity of arrivals. ($\lambda$ is also interpretable as the expected number of chases per unit time.) Note that the mummy can arrive more than once in a given time period, but this has low probability. (b) The number of mummy arrivals in disjoint time intervals are independent random variables. If we measure time in days, then in a given day the number of mummy arrivals would be Poisson($\lambda$) by assumption (a). So the probability of no chase today would be $e^{-\lambda}$, hence the probability of being chased today is $\fbox{$1-e^{-\lambda}$}$. Your mission is to find this probability (i.e., find $\lambda$) under various assumptions about the distribution of trip duration, knowing that chases occur on 10% of trips. In your simplest case, where every trip lasts 14 days, use the independence assumption (b) to conclude the probability of no chase during the trip is $(e^{-\lambda})^{14}$. Equate this to $0.9$, solve for $\lambda$ and plug this into the daily probability of chase. This is exactly your initial argument, with $x:=e^{-\lambda}$. In the general case the trip duration $T$ is a random variable with some density $f$. When the trip duration $T$ has value $t$, the probability of no chase during the entire trip is $\fbox{$e^{-\lambda t}$}$, so averaging over all trip durations we obtain $$P(\text{no chase})=\int P(\text{no chase}\mid T=t)f(t)\,dt=\int e^{-\lambda t}f(t)\,dt.$$ I think you are to assume that $T$ has a normal distribution with mean $\mu$ and standard deviation $\sigma$. If so the RHS of the above evaluates to $\exp(-\mu\lambda + \frac12\sigma^2\lambda^2)$. Since chases occur on just 10% of trips, this gives the equation: $$\bbox[5px,border:1px solid black]{0.9=P(\text{no chase})=\exp(-\mu\lambda + \frac12\sigma^2\lambda^2).}$$ This is the main equation you need to solve for $\lambda$. To do so you need to identify $\mu$ and $\sigma$. Your assumptions (1) and (2) give you enough information to identify $\mu$ and $\sigma$: (1) is saying $\mu=14$, and (2) says $$P(|T-\mu|<2)=0.9.$$ Since the LHS of the above equation is the same as $P(|Z|<\frac2\sigma)$ for $Z$ standard normal, and since $P(|Z|<1.645)=0.9$, you deduce $\frac2\sigma=1.645$ and solve for $\sigma$. You can handle the other scenarios the same way. Your scenario (3a) can be interpreted as $P(|T-\mu|<0.5)=0.4$ (so that 40% of trips end within a specific 24-hour period -- note the fact that this day is Saturday is immaterial). Similarly your scenario (3b) is saying $\sigma=30/24$. Given either of these assumptions you can solve for $\lambda$ under the assumption $\mu=14$. (If you're not comfortable with the Poisson process, you can repeat the above analysis writing $x:=e^{-\lambda}$. It's enough to assume that the probability you are not chased during a trip of duration $T=t$ is $x^t$. The above argument will carry through.)
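For concreteness, here is a small numerical sketch of the main computation: after taking logs, the boxed equation is a quadratic in $\lambda$, and we want its smaller positive root. The numbers below assume scenario (1)-(2), i.e. $\mu=14$ and $\sigma=2/1.645$; the function name is illustrative:

```python
import math

def chase_rate(mu, sigma, p_no_chase=0.9):
    """Solve p_no_chase = exp(-mu*lam + sigma^2*lam^2/2) for the small root."""
    c = math.log(p_no_chase)
    # (sigma^2/2)*lam^2 - mu*lam - c = 0  =>  quadratic formula, smaller root:
    lam = (mu - math.sqrt(mu**2 + 2 * sigma**2 * c)) / sigma**2
    return lam, 1 - math.exp(-lam)   # Poisson rate, daily chase probability

sigma = 2 / 1.645                     # from P(|T - mu| < 2) = 0.9
lam, p_daily = chase_rate(14, sigma)
print(lam, p_daily)                   # both ~0.0075
```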
Existential Logical equivalence
Let the domain be $\{a,b\}$. Suppose $P(a)$ is true, $P(b)$ is false, and $Q(a)$, $Q(b)$ are both false. Then $\exists x(P(x)\rightarrow Q(x))$ is true. Pick $x=b$. But it is clear that $\exists xP(x)\rightarrow \exists x Q(x)$ is false.
Any tree degree sequence is a caterpillar degree sequence
Theorem. Given any tree, there is a caterpillar with the same degree sequence. Proof. Suppose the degree sequence consists of $d_1,d_2,\ldots,d_s$, all greater than or equal to $2$, and $e$ elements equal to $1$. This is the degree sequence of a tree so we have $$\langle\hbox{total degree}\rangle =2\langle\hbox{number of edges}\rangle =2\langle\hbox{number of vertices}\rangle-2\ ,$$ which gives $$d_1+\cdots+d_s+e=2s+2e-2$$ and hence $$e=d_1+\cdots+d_s-2s+2\ .$$ Now it is clear that the vertices of degree $d_1,\ldots,d_s$ can be formed into a chain with $s-1$ edges, and so the number of "available" edges is $$d_1+\cdots+d_s-2(s-1)\ ,$$ which is exactly $e$. So all the vertices of degree $1$ can be fitted around this chain, forming a caterpillar.
For what prime numbers $p$ is $x^2+x+1$ irreducible in $\mathbb{F}_p[X]$
$ax^2+bx+c$ is reducible precisely if $b^2-4ac$ has a square root (PS in response to comments: EXCEPT when the characteristic of the field is $2$. The usual formula for solving quadratic equations cannot apply in that case since dividing by $2a$ would be dividing by $0$.). In the case of $x^2+x+1$, we have $b^2-4ac=-3$. So the question is, for which primes $p$ does $-3$ have a square root modulo $p$? PS: What happens if you try to complete the square in $x^2+\dfrac b a x$ in a field of characteristic $2$? Usually we're told to find half of $b/a$. In a field of characteristic $2$, you can't take half of anything except $0$.
Determine whether the vector $y$ is in the subspace of $R^ 4$ spanned by the columns of $A$.
Almost correct. If the system has one or more solutions then $y$ can be formed as a linear combination of the column vectors of $A$ so it is in the subspace. It doesn't matter whether the solution is unique or not.
Optimization: Finding Volume of a cubic rectangle.
The volume of such a container is: $$V= s^2h$$ where $V$ is volume, $s$ is one side of the square base, and $h$ is the height. Obviously the maximum volume will use the maximum side lengths, so we have: $$4s+h = 118$$ $$h = 118-4s$$ We want the volume equation to be in terms of just one variable, so we plug in the new value of $h$: $$V= s^2(118-4s) = 118s^2 -4s^3 $$ Take the derivative: $$V'= 236s-12s^2 $$ And optimize: $$0= 236s -12s^2=s(236-12s)$$ so, discarding $s=0$, $$s=236/12 = 59/3 $$ Solving for $h$ gives the ideal dimensions: $59/3'' \times 59/3'' \times 118/3''$.
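A quick numerical check (illustrative) that the maximum is at $s = 59/3$:

```python
import numpy as np

s = np.linspace(0.1, 29, 100_000)     # h = 118 - 4s must stay positive
V = s**2 * (118 - 4 * s)
print(s[V.argmax()], 59 / 3)          # both ~19.667
```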
Find approximation of $y= {x^2}$
So your quadratic is $$q(x) = 3 + \frac{(x-9)}{6} - \frac{{(x-9)}^2}{216}$$
$\lim _{n \rightarrow \infty}\left[(n+1) \int_{0}^{1} x^{n} \ln (1+x) d x\right]$
Clearly $$ I_n := (n+1) \int_0^1 x^n\, \log(1+x)\, dx \leq \log(2) \int_0^1(n+1)x^n\,dx = \log(2). $$ On the other hand, for every $\epsilon\in (0,1)$ we have that $$ \begin{align} I_n & \geq (n+1) \int_{1-\epsilon}^1 x^n\, \log(1+x)\, dx \\ & \geq \log(2-\epsilon) \int_{1-\epsilon}^1 (n+1) x^n\, dx = \log(2-\epsilon) \left[1 - (1-\epsilon)^{n+1} \right]. \end{align} $$ Since the r.h.s. tends to $\log(2-\epsilon)$ as $n\to +\infty$, we conclude that $\lim_n I_n = \log(2)$.
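A numerical check (illustrative, using scipy) shows $I_n$ approaching $\log 2$:

```python
import numpy as np
from scipy.integrate import quad

for n in (10, 100, 1000):
    val, _ = quad(lambda x: (n + 1) * x**n * np.log1p(x), 0, 1)
    print(n, val)
print(np.log(2))   # the limit, ~0.693147
```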
Interpretation of fixed point's paths as functions defined on $\mathbb{S}^{1}$
I think your facts (1) - (3) are mathematical folklore which means that they are well-known and easy to prove. Sometimes it is difficult to find references in textbooks (although they certainly exist somewhere). (1) is obvious because the quotient map $p : I \to S^1, p(t) = e^{2\pi it}$, induces a bijection $p^* : \Omega(S^1,a) \to \Omega(a,a)$ given by $p^*(\alpha) = \alpha \circ p$. For a more detailed treatment see for example Spanier, Edwin H. Algebraic topology. Springer Science & Business Media, 1989. Have a look at Chapter 1, Sections 6 and 8. (2) is covered by Existence of a simple homeomorphism as your edit shows. (3) is covered by my answer to Can we always view loops as maps from $S^1\to X$? Also see Spanier's book Theorem 7 in Chapter I, Section 6. See also Hatcher's "Algebraic Topology", Section "Fundamental group" and especially the exercises.
Cardinality with Cartesian Cross Product problem
Suppose $A$ has $m$ elements and $B$ has $n$. For ordered pairs $(a,b)$ there are $m$ possibilities for $a$, and for each of these there are $n$ possibilities for $b$. Summing over all $a$ we get the result is $mn$.
The sequence $a_{n}a_{n+1}=a_{n+2}$
If the $a_n$ are positive, we can use a new letter, maybe $$ d_n = \log a_n, $$ so that $$ a_n = e^{d_n}. $$ Then $$ d_{n+2} = d_{n+1} + d_n. $$ It follows that there are real constants, call them $A,B,$ not necessarily positive, so $$ d_n = A F_n + B L_n, $$ where $L_n$ are the Lucas numbers. Then $$ a_n = e^{A F_n} e^{B L_n}. $$ So, if we take $e^A = G > 0,$ and $e^B = H > 0, $ we do get $$ a_n = G^{F_n} H^{L_n} $$ If you prefer to stick with Fibonacci numbers, with a new positive real $J$ you can write $$ a_n = G^{F_n} J^{F_{n-1}} $$
In how many ways can we pick a group of 3 different numbers from the group $1, 2, 3, ..., 500$ such that one number is the average of the other two?
There are $\binom{250}2$ pairs of even numbers and $\binom{250}2$ pairs of odd numbers. Each of those pairs gives you one solution, and each solution is of one of those two forms. Thus, the total number of solutions is $$2\binom{250}2=2\cdot\frac{250\cdot249}2=250\cdot249=62,250\;.$$ Your mistake was in adding $250$ in the numerator instead of subtracting it to get rid of the $x=y$ pairs.
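A brute-force count (illustrative) confirms the answer:

```python
from itertools import combinations

# {x, (x+y)/2, y} is a valid triple exactly when x and y have the same parity.
count = sum(1 for x, y in combinations(range(1, 501), 2) if (x + y) % 2 == 0)
print(count)   # 62250
```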
Lifting submanifolds
Let $\hat x\in\pi^{-1}(\Sigma)$. There exists an open subset $\hat U$ with $\hat x\in \hat U$ such that the restriction $\pi_{\mid \hat U}:\hat U\rightarrow\pi(\hat U)=U$ is a diffeomorphism. Since $\Sigma$ is a submanifold, there exists a neighborhood $V$ of $x=\pi(\hat x)$ and a submersion $f:V\rightarrow \mathbb{R}^p$ such that $V\cap \Sigma= f^{-1}(0)$. Let $\hat V=(\pi_{\mid \hat U})^{-1}(V)$ and let $g:\hat V\rightarrow \mathbb{R}^p$ be defined by $g=f\circ\pi_{\mid\hat U}$; then $g$ is a submersion and $g^{-1}(0)=\hat V\cap \pi^{-1}(\Sigma)$. This implies that $\pi^{-1}(\Sigma)$ is a submanifold.
Is the following sequence a null sequence?
I would simplify the given term as follows: $$\frac{(-1)^n}{10}=-\frac{1}{10}$$ if $n$ is odd and in the other case if $n$ is even we get $$\frac{(-1)^n}{10}=\frac{1}{10}$$
I need help finding a rigorous precalculus textbook
The first few chapters of GH Hardy's "A Course of Pure Mathematics" may be worth a read.
What are $\tilde{H_*}(K;\mathbb{Z})$ and $\tilde{H^*}(K;\mathbb{Z})$?
I will help you understand $H_*(K)$ and $H_*(P)$ but the rest of the problem is just an exercise in understanding the statements of UTC and Künneth so I leave that to you. Recall that you can construct $P = \mathbb{RP}^2$ by attaching a disk $D^2$ to the circle via a map $S^1 \to S^1$ of degree $2$. From the cellular chain complex you can see that $H_1(P) \cong \mathbb{Z}/2$ and $H_2(P) = 0$. Recall also that we typically construct $K$ as the quotient of a square where the sides are identified using the scheme $abab^{-1}$. This is equivalent to attaching a disk to a wedge of two circles via the same scheme, and the cellular chain complex looks like $$ \dots \to 0 \to \mathbb{Z} \stackrel{\partial}{\to} \mathbb{Z} \oplus \mathbb{Z} \stackrel{0}{\to} \mathbb{Z} $$ with $\partial(d) = a + b + a + (-b) = 2a$, where $d$ represents the $2$-cell and $a,b$ represent the $1$-cells as above. Then $H_1(K) \cong \mathbb{Z}/2\oplus \mathbb{Z}$ and $H_2(K) =0$. Now given these computations you should be able to use UCT and Künneth to finish the problem.
Compactness of Gelfand topology
The function $p:J \times \mathcal{L} \to \mathbb{C}$ does "contain" all complex homomorphisms: for each nonzero homomorphism $h:\mathcal{L} \to \mathbb{C}$, $\mathcal{M}:=\ker(h)$ has codimension 1 and is a proper ideal ($I\not\in \mathcal{M}$), thus $\mathcal{M} \in J$. Therefore $h = p(\mathcal{M},\cdot).$ So every $t$ in the closure of (17) is the image of some $\mathcal{M} \in J$ under (17). Note: I think he means that the Gelfand representation is the function $N \in \mathcal{L} \mapsto p(\cdot, N)$, and that the function $p(\cdot, N): J \to \mathbb{C}$ is the representation of $N$. So it is not correct to say "the Gelfand representation contains all homomorphisms", because the homomorphisms are the functions of the form $p(\mathcal{M}, \cdot)$.
If $0\stackrel{f}\rightarrow V \stackrel{g}\rightarrow 0$ is an exact sequence, then $V = 0$.
Well, $\ker g=V$, since $g(v)=0$ for all $v\in V$. On the other hand, $\text{im}f=0 $, since the image of the zero vector space is always zero. The result follows.
Uniformly convergence in compact sets
This is known as Dini's theorem. Since $f_n$ and $f$ are continuous, so is $g_n=f_n-f$; $g_n\geq 0$ and $g_n\to 0$ monotonically. Thus, it suffices to prove the claim when $f_n\to 0$. Fix $\epsilon >0$. Define $$\mathcal O_n=\{x:f_n(x)<\epsilon\}=f_{n}^{-1}(-\infty,\epsilon)$$ By continuity, these are open. Moreover, because $f_{n+1}\leq f_n$, if $x\in \mathcal O_{n}$ then $x\in \mathcal O_{n+1}$, so $\mathcal O_{n}\subseteq \mathcal O_{n+1}$. We claim $\{\mathcal O_n:n\in\Bbb N\}$ covers $K$: since the sequence goes to $0$, for each $x\in K$ there is $N$ such that $f_N(x)<\epsilon$, so $x\in\mathcal O_N$. Thus, there must be a finite subcover $\{\mathcal O_{n_1},\ldots,\mathcal O_{n_k}\}$. But if $N'=\max\{n_1,\ldots,n_k\}$ then $$\bigcup_{i=1}^k \mathcal O_{n_i}=\mathcal O_{N'}\supset K$$ What can you say about $\mathcal O_{N'}$ that relates to $$\sup_{x\in K}|f_{n}(x)|\;\; ;\;\; n\geq N'\text{ ? }$$
Mean And Variance Of Beta Distributions
By definition, the Beta function is $B(\alpha,\beta) = \int_0^1 x^{\alpha - 1} (1-x)^{\beta - 1}\ dx$ where $\alpha, \beta$ have real parts $ > 0$ (but in this case we're talking about real $\alpha, \beta > 0$). This is related to the Gamma function by $$ B(\alpha,\beta) = \dfrac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}$$ Now if $X$ has the Beta distribution with parameters $\alpha, \beta$, $$\mu = E[X] = \dfrac{\int_0^1 x^{\alpha} (1-x)^{\beta-1}\ dx}{B(\alpha,\beta)} = \dfrac{B(\alpha+1,\beta)}{B(\alpha,\beta)} = \dfrac{\Gamma(\alpha+1) \Gamma(\beta)}{\Gamma(\alpha+\beta+1)} \dfrac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} = \dfrac{\alpha}{\alpha+\beta}$$ using the identity $\Gamma(t+1) = t \Gamma(t)$. Similarly $$ \sigma^2 + \mu^2 = E[X^2] = \dfrac{B(\alpha+2,\beta)}{B(\alpha,\beta)} = \dfrac{\alpha(\alpha+1)}{(\alpha+\beta)(\alpha+\beta+1)}$$ from which $$\sigma^2 = \dfrac{\alpha\beta}{(\alpha+\beta)^2 (\alpha+\beta+1)}$$
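As a numerical cross-check (illustrative, using scipy) with sample parameters:

```python
from scipy.stats import beta

a, b = 2.5, 4.0
mean, var = beta.stats(a, b, moments='mv')
print(mean, a / (a + b))                           # both ~0.384615
print(var, a * b / ((a + b)**2 * (a + b + 1)))     # both ~0.031558
```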
For independent RVs, does $E|X+Y|<\infty$ imply $E|X|<\infty$.
Since $|X| \le |X+Y| + |Y|$, $E[|X|] = E[ |X| \mid Y] \le E[|X+Y| \mid Y] + E[|Y| \mid Y]$ and $E[|X|] \le E[|X+Y|] + E[|Y|]$. Suppose $E[|X+Y|] = R < \infty$. Now $|X+Y| \ge |X| - |Y|$, so $$R = E[|X+Y|] \ge E[|X|-|Y|] = E[E[|X|-|Y| \mid Y]] = E[E[|X|] - |Y|].$$ But that implies $\text{Prob}(E[|X|]-|Y| \le R) > 0$ and thus $E[|X|] \le R + s < \infty$ for some $s$ where $\text{Prob}(|Y|<s) > 0$.
Is conditional expectation with respect to two sigma algebra exchangeable?
A fair die is rolled with outcome $X$. The sample space is $\Omega=\{\omega_1,\omega_2,\omega_3,\omega_4,\omega_5,\omega_6\}$, where all outcomes are equally likely and $X(\omega_i)=i$ for $1\leq i\leq 6$. Let $\cal P$ be the $\sigma$-algebra generated by $\{\omega_1,\omega_2,\omega_3\}$ and $\{\omega_4,\omega_5,\omega_6\}$. Let $\cal H$ be the $\sigma$-algebra generated by $\{\omega_1,\omega_2\}$, $\{\omega_3,\omega_4\}$, and $\{\omega_5,\omega_6\}$. Then \begin{eqnarray*}\mathbb{E}(\mathbb{E}(X\,|\,{\cal P})\,|\,{\cal H})(\omega_i)&=&\cases{2& for $i=1,2$\cr 7/2& for $i=3,4$\cr 5& for $i=5,6$.}\\[10pt] \mathbb{E}(\mathbb{E}(X\,|\,{\cal H})\,|\,{\cal P})(\omega_i)&=&\cases{13/6 & for $i=1,2,3$\cr 29/6& for $i=4,5,6$.}\end{eqnarray*}
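For concreteness, here is a small illustrative script that computes both iterated conditional expectations by averaging over the blocks of the generating partitions:

```python
from fractions import Fraction

X = {w: w for w in range(1, 7)}          # fair die: X(w_i) = i, each prob 1/6
P = [{1, 2, 3}, {4, 5, 6}]               # partition generating the algebra P
H = [{1, 2}, {3, 4}, {5, 6}]             # partition generating the algebra H

def cond_exp(f, partition):
    """E(f | partition) under the uniform measure: average over each block."""
    out = {}
    for block in partition:
        avg = sum(Fraction(f[w]) for w in block) / len(block)
        out.update({w: avg for w in block})
    return out

print([cond_exp(cond_exp(X, P), H)[w] for w in range(1, 7)])
# values 2, 2, 7/2, 7/2, 5, 5 (printed as Fractions)
print([cond_exp(cond_exp(X, H), P)[w] for w in range(1, 7)])
# values 13/6, 13/6, 13/6, 29/6, 29/6, 29/6
```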
Epsilon Delta Limit Proofs at and going to infinity.
Here, you need to prove that $e^x$ is unbounded; see http://en.wikipedia.org/wiki/Limit_of_a_function#Limits_involving_infinity. The definition: given any $\epsilon\gt 0$, $\exists x_0\gt 0$ such that $|f(x)|\gt \epsilon$ $\forall x\gt x_0$. Choose $\epsilon\gt 0$. Now we need to find a corresponding $x_0$ such that $|e^x|\gt \epsilon $ $\forall x\gt x_0$. If we take $x_0=\ln\epsilon$, then $|e^x|\gt|e^{x_0}|(=\epsilon)\implies |e^x|\gt \epsilon$ $\forall x\gt \ln\epsilon$. Hence $e^x$ is unbounded and positive, and thus $\lim_{x\to\infty}e^x=\infty$.
Cesaro average semigroup
We have $y\in D(L)$ if the limit $$\lim_{h\to 0^+} \frac{P_hy-y}{h}$$ exists. In this case, $Ly$ equals the limit. Therefore, you have to show that the limit $$\lim_{h\to 0^+} \frac{P_hA_tf-A_tf}{h}$$ exists and is equal to $$\frac1t\int_0^tLP_sf\,ds.$$ For this, note that \begin{align*} \frac{P_hA_tf-A_tf}{h}&=\frac{P_h\frac{1}{t}\int_0^tP_sf\,ds-\frac{1}{t}\int_0^tP_sf\,ds}{h}\\ &=\frac{1}{th}\left(\int_0^tP_{s+h}f\,ds-\int_0^tP_s f\,ds\right)\\ &=\frac{1}{th}\left(\int_h^{t+h}P_{\tau}f\,d\tau-\int_0^tP_s f\,ds\right)\\ &=\frac{1}{th}\left(\int_0^{t+h}P_{\tau}f\,d\tau-\int_0^{h}P_{\tau}f\,d\tau-\int_0^tP_s f\,ds\right)\\ &=\frac{1}{th}\left(\int_t^{t+h}P_{\tau}f\,d\tau-\int_0^hP_\tau f\,d\tau\right)\\ &=\frac{1}{t}\left(\frac{1}{h}\int_t^{t+h}P_{\tau}f\,d\tau-\frac{1}{h}\int_0^hP_\tau f\,d\tau\right) \end{align*} Then, taking the limit as $h\to0^+$, we conclude that $$LA_t f=\frac{1}{t}\left(P_tf-P_0f\right)=\frac{1}{t}\int_0^t\frac{d}{ds}(P_s f)\,ds=\frac{1}{t}\int_0^tLP_s f\,ds,$$ because $$\frac{1}{h}\int_t^{t+h} P_sf\,ds\overset{h\to 0^+}{\longrightarrow}P_t f\quad\text{and}\quad \frac{d}{ds}(P_sf)=LP_sf.$$
What fraction of the job can be done by one man in one day given 10 men can do a job in 10 days?
The 10 men do 1 job in 10 days, so 1 man does 1/10 of the job in the 10 days. So 1 man does 1/100 of the job in 1 day.
Metrisability is hereditary property
If $B$ is an open subset of $Y$, with respect to the subspace topology, then $B=O\cap Y$, for some open subset $O$ of $X$. For each $x\in B$, let $r_x>0$ be such that $B_{r_x}(x)\subset O$. Then $B_{r_x}(x)\cap Y\subset O\cap Y=B$. Therefore,$$B=\bigcup_{x\in B}B_{r_x}(x)\cap Y,$$and, on the other hand, $B_{r_x}(x)\cap Y$ is the open ball in $Y$ centered at $x$ and whose radius is $r_x$.
Prove an equation has a solution in within specific interval (IVT)
Define $\displaystyle f(x) := \sum_{k = 1}^n b_k^{x^k}$. Note that $\displaystyle f(0) = \sum_{k = 1}^n b_k^0 = n$ and $\displaystyle f(1) = \sum_{k = 1}^n b_k = n$. Thus, using Rolle's Theorem, we can conclude that for some $c \in (0,1)$, $f'(c) = 0$. In other words, $\displaystyle f'(x) = \sum_{k =1}^n k \cdot \ln(b_k) \cdot x^{k-1} \cdot b_k^{x^k} = 0$ has a solution in $(0,1)$.
Is there a reason Halmos defined continuation on well-ordered sets, but not total ordered sets?
We would like, in general, for properties of well-orders to be preserved under order-preserving maps - that is, we want it to be the order that matters, not the elements. The two sets you name, $\{z \in \mathbb{Z} : z \leq 100\}$ and $\{z \in \mathbb{Z} : z \leq 1\}$, are order-isomorphic; they differ only in elements, not in structure. The property "$A$ is a continuation of $B$" is a property of the order - if $A$ is a continuation of $B$ and both are well-orders, then $A$ and $B$ are both isomorphic to ordinals and the ordinal of $A$ is greater than the ordinal of $B$. In your proposed change, "continuation" would be a property of the particular set; possibly an interesting notion anyway, but not what we usually care about when talking about well-orders.
square matrix $\mathbf{A}$ with $\mathbf{A}^\intercal = -\mathbf{A}$, proof $\mathbf{A}$ is not invertible.
Hint: This is only true if $A$ has an odd size. In this case, note that $$ \det(A)= \det(A^T) =\det(-A) $$ An even example: $$ A=\pmatrix{0&-1\\ 1&0} $$ I would say that the best conceptual understanding comes from an understanding of eigenvalues. Note in particular that $A$ and $A^T$ share eigenvalues, and $-A$ has eigenvalue $-\lambda$ for each eigenvalue $\lambda$ of $A$.
The volume of a cone is $18π m^3$ Find the minimum length of the slant edge
You had the result in your hands. $2r-\dfrac{4\cdot 54^2}{r^5}=0$ means $r^6=2\cdot 54^2=2^3\cdot(3^2)^3$, so $r^2=2\cdot 3^2=18$. Hence $h^2=9$ and $l^2=18+9$, $l=3\sqrt 3$.
Krull dimension of $\mathbb{C}[x,y] / (xy)$
The prime ideals of $\mathbb{C}[x,y]/(xy)$ correspond to the prime ideals of $\mathbb{C}[x,y]$ containing $(xy)$. Thus, as in the comment of Youngsu, $(x, y-1)$ is a prime ideal, and your conjecture is wrong. A prime ideal $I$ of $\mathbb{C}[x,y]$ that contains $(xy)$ must contain $x$ or $y$. We assume that $x\in I$, and that $(x)\subsetneq I$. Since $(x)$ is prime, the height of $I$ in $\mathbb{C}[x,y]$ is at least $2$. But $\mathrm{dim}\,\mathbb{C}[x,y]=2$. (This needs some results about the dimension of finitely generated algebras over a field; you can find these, for example, in Ch. 11 of Atiyah-Macdonald's book.) Therefore, $I$ must be a maximal ideal of $\mathbb{C}[x,y]$. By Hilbert's Nullstellensatz, $I$ must have the form $(x, y-b)$. In any case, the Krull dimension of $\mathbb{C}[x,y]/(xy)$ is equal to $1$.
Push-forward vector field for constant vector field
The total differential $Df$ of a linear map $f$ is just the map itself. This follows directly from the definition of the derivative as a linear map given e.g. in this article.
How is $r(t) = \overrightarrow{r_0} + t\overrightarrow{v}$ defined?
OK, let's make it an answer. Yes, you can use any positive multiple of $\langle−1,2,−2\rangle$ but in order to get the segment from $A$ to $B$, you also need to specify the domain of $t$, and with $\langle-1,2,-2\rangle$ the domain is $[0,1]$ whereas with the unit vector you've given it is $[0,3]$ (nothing wrong with the latter answer but you get $[0,1]$ automatically by using the vector from $A$ to $B$).
How can the closure of $(1,7]$ in $(1,7]$ with subspace topology include $7$?
An open ball in $(1,7]$ is of the form $(1,7]\cap (a,b)$ by definition of subspace topology. So for example the open ball $(1,7]\cap (2,8)=(2,7]$ in $(1,7]$ (a subspace of $\mathbb{R}$) includes $7$.
Maximizing and minimizing two functions simultaneously
Generally, when we have two functions and we want to maximize one and simultaneously minimize the other, doing it independently would generally give different $f$'s. Therefore we should assign a weight to each of them, according to their priorities. One way is to minimize (maximize) their difference; another is minimizing one plus the reciprocal of the other, etc. So generally there is no unique answer to this question. Even saying "considering the trade-off" is a qualitative statement, and it could have lots of quantitative interpretations. When we do a trade-off, the $f^*$ would not be a global extremum of both of them anymore. Consider that we are maximizing $y_1$ and minimizing $y_2$ simultaneously, where: $$ y_1 = e^{-(x-1)^2}, \qquad y_2 = 2 - e^{-(x-2)^2} $$ Now consider these two functions: $$ y_3 = y_1 - y_2, \qquad y_4 = y_1 + \frac1{y_2} $$ Plotting all four (the original plots are omitted here; a sketch to reproduce them follows below), it is easy to see that the maxima of each of them occur in different places. Also note that generally it is not possible to find an $f^*$ satisfying $y_1(f^*)<y_1(f)$ and $y_2(f^*)>y_2(f)$ simultaneously.
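Since the original plots are omitted, here is a minimal matplotlib sketch (illustrative) to reproduce them:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2, 5, 500)
y1 = np.exp(-(x - 1)**2)                 # to be maximized
y2 = 2 - np.exp(-(x - 2)**2)             # to be minimized

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].plot(x, y1, label='$y_1$')
axes[0].plot(x, y2, label='$y_2$')
axes[1].plot(x, y1 - y2, label='$y_3 = y_1 - y_2$')
axes[1].plot(x, y1 + 1 / y2, label='$y_4 = y_1 + 1/y_2$')
for ax in axes:
    ax.legend()
plt.show()
```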
Which properties and which not does a topological manifold inherit from $\mathbb R^d$?
In addition to the "locally euclidean" property that you mentioned one usually requires a topological manifold to be Hausdorff and 2nd countable. Under these two assumptions, every manifold is homeomorphic to a subset of $R^N$ (one can take $N=2n+1$ where $n$ is the dimension of the manifold). Yes, one considers $R^n$ equipped with the standard topology. Neither "betweenness" (whatever it means for $n\ge 2$: I do not think it has any meaning), nor "directionality" (whatever it means) have any meaning for a topological manifold. Take a simple example, like the $n$-dimensional sphere and try to make sense of these two words (I would not even call them notions). Even on the circle, you cannot make sense of the statement that one point is between two other points. (You need orientation for that.) To check your intuition, just think of the graph of a continuous function $R^2\to R^2$: It is a topological submanifold of $R^4$. But do not assume any degree of smoothness, think of a fractal surface. Nevertheless, topological manifolds do inherit some properties of $R^n$, such as local connectivity/contractibility, the fact that a bijective continuous map between manifolds is a homeomorphism, etc. My suggestion is to take Munkres' book "Topology" and read it through the chapter on dimension theory. If you make it there, you will get a much better idea what topological structures/notions are meaningful and which are not. If you really are a programmer, my suggestion is to think not of topological manifolds but of piecewise-linear manifolds and smooth manifolds.
Show how $\lim\limits_{n\to\infty}\left(\frac{n-1}{n+1}\right)^n = \frac{1}{e^2}$
Hint: $$\left(\frac{n-1}{n+1}\right)^n=\frac{\left(1-\frac{1}{n}\right)^n}{\left(1+\frac{1}{n}\right)^n}\longrightarrow\frac{e^{-1}}{e}$$
Action of special orthogonal group on Upper half complex plane.
OK. Let's look at the map $$ f(z) = \frac{az + b}{cz + d}, $$ where we'll plug in specific values for $a,b,c,d$ soon. The derivative of this is \begin{align} f'(z) &= \frac{ad - bc}{(cz+d)^2}; \end{align} evaluating this at $z = i$ (the point fixed by your stabilizer subgroup) gives \begin{align} f'(i) &= \frac{ad - bc}{(ci+d)^2}; \end{align} Now let's substitute in the values $\sin x$ and $\cos x$ where appropriate to get \begin{align} f'(i) &= \frac{(\cos x)(\cos x) - (\sin x)(-\sin x)}{((-\sin x)i+ \cos x)^2}\\ &= \frac{1}{(\cos x - i \sin x)^2} \\ &= \frac{1}{\cos 2x - i \sin 2x}\\ &= \cos 2x + i \sin 2x. \end{align} This derivative is the best local linear approximation to the transformation at $z = i$. In other words, if $v$ is a vector based at $i$, then $f'(i)\cdot v$ is where $v$ would point "after applying $f$". And because multiplying by $\cos t + i \sin t$ amounts to rotation by $t$ in the complex plane, we see that in our case, the transformation described ends up rotating (locally) by $2x$.
Faster way to find the rank of the following matrix?
Try subtracting the first row from each of the second, third, fourth and fifth rows. The resulting four bottom rows will all be multiples of one and the same vector (which is not a multiple of the first row); thus the matrix has rank $2$.
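The original matrix isn't reproduced in the question here, but a made-up $5\times 3$ matrix of the same type shows how the computation goes: $$ \begin{pmatrix} 1&2&3\\ 2&3&5\\ 3&4&7\\ 4&5&9\\ 5&6&11 \end{pmatrix} \longrightarrow \begin{pmatrix} 1&2&3\\ 1&1&2\\ 2&2&4\\ 3&3&6\\ 4&4&8 \end{pmatrix}. $$ After the subtractions, the last four rows are all multiples of $(1,1,2)$, and $(1,2,3)$, $(1,1,2)$ are linearly independent, so the rank is $2$.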
How does one find an exterior angle bisector relative to the x-axis?
The angle of inclination of a line through $(0,0)$ with slope $m$ is $\arctan m$. Assume in your diagram that $B$ is the lower point (in quadrant 4) and $C$ the upper point (in quadrant 1). Let $m$ be the slope of the line through the origin and $C$, and $n$ the slope of the line through the origin and $B$. We can get the slope $x$ of the bisector of angle $CAB$ by solving $$\frac{m-x}{1+mx}=\frac{x-n}{1+xn}.$$ This has two solutions $$x=\frac{mn-1 \pm \sqrt{(m^2+1)(n^2+1)}}{m+n}.\tag{1}$$ I did a few cases, and it seems that if we use the $+$ sign here we get the slope of the bisector of angle $CAB$, provided the slopes are as in the diagram and so satisfy $m>n.$ Once you have that slope for the bisector, its angle of inclination with the $x$ axis is $\arctan x.$ Of course that's not what you're after, since you want the external bisector. But the external bisector makes a 90 degree angle with the internal bisector, so that's not much more work. Added: I think one may just choose the $-$ sign in $(1)$, calculate the arctangent, and add 180 degrees to the result, to arrive at the inclination of the external bisector in one step. (It may be best to calculate everything, to make sure where the angles in question actually wind up relative to a given diagram.)
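As a quick check of $(1)$ with the $+$ sign: take $m=\tan 60^\circ=\sqrt3$ and $n=\tan(-30^\circ)=-1/\sqrt3$, so the internal bisector should have inclination $15^\circ$. Then $$ x=\frac{mn-1+\sqrt{(m^2+1)(n^2+1)}}{m+n}=\frac{-2+\sqrt{4\cdot\tfrac43}}{\tfrac{2}{\sqrt3}}=\frac{-2+\tfrac{4}{\sqrt3}}{\tfrac{2}{\sqrt3}}=2-\sqrt3=\tan 15^\circ, $$ and the external bisector then has slope $-1/x=-(2+\sqrt3)=\tan 105^\circ$, as expected.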
Integral of $(x^2 \sin(x))/(x^4 + 2x^2 - 1)$
Let $f(x)=\frac{x^{2}\sin(x)}{x^{4}+2x^{2}-1}$. Note that $f(-x)=\frac{(-x)^{2}\sin(-x)}{x^{4}+2x^{2}-1}=-f(x)$, hence the function is odd. Since the integral between $-\pi$ and $\pi$ is the signed area bounded by the curve and the $x$ axis, and the interval is symmetric about $0$, the two halves cancel and the integral is $0$.
First-order logic from computational linguistics - implication
Implication is often used in formal logic to translate "if..., then...". Thus, "That John is rich implies that he is happy" is the same as: "if John is rich, then he is happy". "If..., then..." is symbolized with the conditional connective: $\to$, $\supset$, or $\Rightarrow$. The symbol $\subset$ is used for the relation of inclusion between sets (or: classes). In such cases we have, e.g., $\text {Humans} \subset \text {Animals}$. But $\text {Rich}(\text{John})$ is not the name of a class; it is a sentence: "John is rich".
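So a natural first-order rendering of the example sentence is the conditional $$\text{Rich}(\text{John}) \to \text{Happy}(\text{John}),$$ while the inclusion $\text{Humans}\subset\text{Animals}$ would instead be expressed as a quantified conditional between predicates: $$\forall x\,\bigl(\text{Human}(x) \to \text{Animal}(x)\bigr).$$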
$f$ is bounded on $\sigma$ and holomorphic on $\sigma\setminus \{a\}$, where $a$ lies in $\sigma$. Show that $f$ is holomorphic.
Hint: Since $f$ is given to be bounded on $\sigma$ and holomorphic on $\sigma\setminus\{a\}$, the desired result follows from Riemann's removable singularity theorem.
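For reference, the standard argument behind that theorem: define $$ h(z) = \begin{cases} (z-a)^2 f(z), & z \neq a,\\ 0, & z = a. \end{cases} $$ Boundedness of $f$ near $a$ gives $h'(a) = \lim_{z\to a}(z-a)f(z) = 0$, so $h$ is holomorphic near $a$ with $h(a) = h'(a) = 0$. Its power series at $a$ therefore starts at degree $2$, and $f(z) = h(z)/(z-a)^2$ extends holomorphically across $a$.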
Bound of a process in terms of total variation
Not necessarily: consider a process which is constant in time but whose initial condition fails to be integrable. However, it follows from the definition of total variation that $|X_t-X_0| \leq K$, so by the triangle inequality you only need to be able to control the initial condition $X_0$.
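Explicitly, with $K$ denoting the bound on the total variation as above, $$ \sup_{0\le u\le t}|X_u| \;\le\; |X_0| + \sup_{0\le u\le t}|X_u - X_0| \;\le\; |X_0| + K, $$ so $E\bigl[\sup_{0\le u\le t}|X_u|\bigr] \le E|X_0| + K$, which is finite precisely when $X_0$ is integrable.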
Can we solve an algebraic system where the number of equations is less than the number of unknowns?
Technically, no. Consider $x^2 + y^2 = 1$. The fact that we have but one equation doesn't mean we can't solve the problem. It only means there is more than one solution (in this case $x = \cos(t)$, $y = \sin(t)$ describes infinitely many solutions).
Approximating the value of an $n$th degree taylor polynomial
Hints: 1) The Taylor series at a point $x=a$ is given by $$ f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x-a)^n. $$ 2) Try computing a few derivatives to see a general formula for the $n$th derivative; this leads to the binomial series $$\sqrt{1+x}= \sum_{k=0}^{\infty} {1/2\choose k} x^k .$$ 3) Use the identity $$\cos(x) = \frac{e^{ix}+e^{-ix}}{2}.$$
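For instance, expanding the first few binomial coefficients in hint 2): $$ {1/2\choose 0}=1,\quad {1/2\choose 1}=\frac12,\quad {1/2\choose 2}=\frac{\frac12\cdot\left(-\frac12\right)}{2!}=-\frac18,\quad {1/2\choose 3}=\frac{\frac12\cdot\left(-\frac12\right)\cdot\left(-\frac32\right)}{3!}=\frac1{16}, $$ so $\sqrt{1+x}=1+\frac{x}{2}-\frac{x^2}{8}+\frac{x^3}{16}-\cdots$ for $|x|<1$.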
Spectral norm of product
This is true, at least for normal matrices ($CC^* = C^*C$). In this case, you have that the singular values are $\sigma_i = |\lambda_i|$, so if the eigenvalues are the same, then the singular values are the same. A counterexample for non-normal matrices: $$ A = \begin{pmatrix}0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad B = \begin{pmatrix}1 & 1 \\ 1 & 0 \end{pmatrix}. $$ Then $$ AB = \begin{pmatrix}1 & 0 \\ 0 & 0 \end{pmatrix}, \quad (AB)^*AB = \begin{pmatrix}1 & 0 \\ 0 & 0 \end{pmatrix}; \qquad BA = \begin{pmatrix}0 & 1 \\ 0 & 1 \end{pmatrix}, \quad (BA)^*BA = \begin{pmatrix}0 & 0 \\ 0 & 2 \end{pmatrix}, $$ and the relevant singular values are, respectively, $1$ and $\sqrt 2$.
Duality between $L^\infty$ and $L^1$
Let $S=\{f\ne 0\}$ and $N_{\epsilon}=\{x\in S:|f(x)|>\|f\|_{L^{\infty}}-\epsilon\}$; then $\mu(N_{\epsilon})>0$ and $N_{\epsilon}=\displaystyle\bigcup_{n}(N_{\epsilon}\cap E_{n})$. Put $g=\dfrac{\chi_{M}}{\mu(M)}$ for $M=N_{\epsilon}\cap E_{n_{0}}$, where $n_0$ is chosen so that $\mu(N_{\epsilon}\cap E_{n_{0}})>0$. Now $\displaystyle\int_{\Omega}|fg|\,d\mu=\int_{S}|fg|\,d\mu=\int_{M}\dfrac{|f|}{\mu(M)}\,d\mu\geq\|f\|_{L^{\infty}}-\epsilon$.
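Since $\|g\|_{L^{1}} = 1$ by construction, this shows that the norm of $f$ acting as a functional on $L^{1}$ is at least $\|f\|_{L^{\infty}} - \epsilon$; letting $\epsilon \to 0$ and combining with the trivial estimate $\bigl|\int fh\,d\mu\bigr| \le \|f\|_{L^{\infty}}\|h\|_{L^{1}}$ gives $$ \sup_{\|h\|_{L^1}\le 1}\left|\int_\Omega fh\,d\mu\right| = \|f\|_{L^{\infty}}, $$ which is presumably the point of the computation.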
Determine all complex numbers $z$ for an inequality
The inequality $|2z-1| \le 2|z-i|$ is equivalent to $|2z-1|^2 \le 4|z-i|^2$. Now use that for a complex number $w$ we have $|w|^2=w \overline{w}$. Can you take it from here?
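(For reference, carrying the computation through with $z=x+iy$: $$ (2x-1)^2+4y^2\;\le\;4x^2+4(y-1)^2 \iff -4x+1\le -8y+4 \iff y\le \frac{x}{2}+\frac38, $$ so the solution set is the closed half-plane on and below the line $y=\frac x2+\frac38$.)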
Polynomials and Partitions with restrictions
From the definition, it should be clear that \begin{eqnarray} P(x) &=& \prod_{k \geq 1}(1 + x^k + x^{2k} + \dots) = \prod_{k \geq 1} \frac 1{1 - x^k}\\ Q(x) &=& \prod_{k \geq 2}(1 + x^k + x^{2k} + \dots) = \prod_{k \geq 2} \frac 1{1 - x^k}\\ R(x) &=& \prod_{k \geq 3}(1 + x^k + x^{2k} + \dots) = \prod_{k \geq 3} \frac 1{1 - x^k} \end{eqnarray} and it follows easily that $\frac{Q(x)}{P(x)} = 1 - x$ and $\frac{R(x)}{P(x)} = (1 - x)(1 - x^2) = 1 - x - x^2 + x^3$.
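Writing $p(n)$, $q(n)$, $r(n)$ for the coefficients of $P$, $Q$, $R$, the identity $Q(x) = (1-x)P(x)$ reads $$ q(n) = p(n) - p(n-1), $$ which matches the combinatorial picture: partitions of $n$ containing at least one part equal to $1$ correspond bijectively to partitions of $n-1$ (strip off a single $1$). Similarly $R(x) = (1 - x - x^2 + x^3)P(x)$ gives $r(n) = p(n) - p(n-1) - p(n-2) + p(n-3)$.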
Roots of an equation over the finite field $\operatorname{GF}(p^q)$
I would just use the factorization $$f(x)=x^r-1=\prod_{i=0}^{r-1}(x-\gamma^i).$$ It implies that $$ x^r-y^r=y^rf(x/y)=y^r\prod_{i=0}^{r-1}(x/y-\gamma^i)=\prod_{i=0}^{r-1}(x-y\gamma^i). $$ You can then cancel the factor $x-y$ corresponding to $i=0$. The trick is known as homogenization. It adds one more variable to a polynomial, and gives, as an end product, a homogeneous polynomial, i.e. a polynomial such that all the terms share the same total degree.
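A tiny concrete instance: take $r=3$ over $\operatorname{GF}(7)$, where $\gamma = 2$ is a primitive cube root of unity (since $2^3 = 8 \equiv 1$). Then $$ x^3 - y^3 = (x-y)(x-2y)(x-4y) $$ over $\operatorname{GF}(7)$; expanding the right-hand side gives $x^3 - 7x^2y + 14xy^2 - 8y^3 \equiv x^3 - y^3 \pmod 7$, as it should, and cancelling the factor $x-y$ leaves $(x-2y)(x-4y)$.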
Why is $E[\ \sup_{ 0\leq u \leq t } |X_u|\ ]<\infty$ where $X$ is a right continuous submartingale?
No, in general the assertion does not hold true, not even for martingales; there are well-known counterexamples. What is, however, true is the following statement: Let $(M_t)_{t \geq 0}$ be a martingale (or a positive submartingale) with càdlàg sample paths. Then $$\mathbb{E} \left( \sup_{s \leq t} |M_s| \right) \leq \frac{e}{e-1} \left( 1 + \mathbb{E}\bigl(|M_t| \log^+ |M_t|\bigr) \right).$$ You can find a (sketch of the) proof for discrete-time martingales, for instance, in Revuz & Yor, Exercise 2.1.16; using standard approximation it can easily be extended to the time-continuous setting.
Recurrence Question involving logarithm
For $n=9$ the recurrence $T(n)=T(3\sqrt n)+\lg n$ gives $T(9)=T(9)+\lg 9$, hence $\lg 9=0$?? Something went wrong...
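Indeed, $n = 9$ is the fixed point of the map $n \mapsto 3\sqrt n$: $$ 3\sqrt n = n \iff n^2 = 9n \iff n = 9 \quad (n > 0). $$ Moreover, for $n > 9$ the iterates $n,\ 3\sqrt n,\ 3\sqrt{3\sqrt n},\ \dots$ decrease toward $9$ but never reach it, so the recurrence as stated never bottoms out at a base case; it has to be cut off at some threshold $n_0 > 9$, or the intended recurrence was probably something else, such as $T(n) = T(\sqrt n) + \lg n$.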
Limit comparison test proof
Hint: Suppose that $L>0$ (a similar argument will deal with $L<0$). Because $\lim_{n\to\infty}\frac{a_n}{b_n}=L$, there is an $N$ such that if $n>N$ then $$\frac{L}{2}< \frac{a_n}{b_n}< \frac{3L}{2}.$$ The above inequality follows from the definition of limit by taking $\epsilon=\frac{L}{2}$. Now, as you expected, the Comparison Test finishes the job.
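Explicitly, multiplying through by $b_n>0$ gives, for $n>N$, $$ \frac{L}{2}\,b_n < a_n < \frac{3L}{2}\,b_n, $$ so if $\sum b_n$ converges then $\sum a_n$ converges by the right-hand bound, and if $\sum b_n$ diverges then $\sum a_n$ diverges by the left-hand bound (assuming, as usual for this test, that the terms are positive).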
Finding horizontal asymptotes, algebraic help
Hint $$y=\frac{1}{\sqrt{x^2-2x}-x}=\frac{1}{\sqrt{x^2-2x}-x}\times \frac{\sqrt{x^2-2x}+x}{\sqrt{x^2-2x}+x}=\frac{\sqrt{x^2-2x}+x}{x^2-2x-x^2}$$ $$y=-\frac 12 \frac{\sqrt{x^2-2x}+x}x$$
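From here the asymptotes follow by letting $x \to \pm\infty$. For $x \to +\infty$ we have $\sqrt{x^2-2x} = x\sqrt{1-2/x}$, so $$ y = -\frac12\left(\sqrt{1-\frac2x}+1\right) \longrightarrow -\frac12(1+1) = -1, $$ while for $x \to -\infty$ we have $\sqrt{x^2-2x} = -x\sqrt{1-2/x}$ (since $x<0$), so $y = -\frac12\left(-\sqrt{1-\frac2x}+1\right) \to 0$. Hence the horizontal asymptotes are $y=-1$ and $y=0$.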
Suppose $A$ has eigenvalues $1, 2, 4$.
Yes, this is correct as long as you assume that $A$ is a $3 \times 3$ matrix, except that $\operatorname{tr}(A^2) = 1^2 + 2^2 + 4^2 = 21$, since $Av = \lambda v \implies A^2v = \lambda^2v$.
Intuition about cubic splines vs quadratic splines (degree 3 vs degree 2)
Let's count the conditions: $N+1$ points give $N$ intervals, resulting in $4N$ coefficients for the $N$ cubic polynomials. On the side of the equations we get

- $2N$ prescribed function values,
- $N-1$ continuity conditions for the first derivative,
- $N-1$ continuity conditions for the second derivative;

the third derivative is piecewise constant, so jumps are allowed and no equations arise. Since $2N+(N-1)+(N-1)=4N-2$, this still leaves a difference of $2$ degrees of freedom. One usually imposes one condition at each of the first and last points, for instance a zero second derivative (the "natural" spline).
Surface integral over portion of cylinder
You can make use of Gauss's Theorem, which relates the surface integral over some closed surface $S$ to a volume integral over the bounded volume $V$, namely $\iint_{(S)} {\bf F} \cdot d{\bf S} = \iiint_{(V)} \nabla \cdot {\bf F} \,dV$. Your function, ${\bf F} = x^2 \boldsymbol i + 2z \boldsymbol j -y \boldsymbol k$, gives $\nabla \cdot {\bf F} = \frac{\partial (x^2)}{\partial x} + \frac{\partial (2z)}{\partial y} + \frac{\partial (-y)}{\partial z} = 2x$. So you have to calculate $\iiint_{(V)} \nabla \cdot \boldsymbol F \,dV = \iiint_{(V)} 2x\,dV$. The integral over $dz$ is trivial, and gives a factor of $6$. The integral over $dx, dy$ is $\int_{y=3}^{y=5} dy \int_{x=-\sqrt{25-y^2}}^{x=+\sqrt{25-y^2}} \, 2 x\, dx$.
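Note that with these limits the inner integral vanishes, since $2x$ is odd in $x$ and the range of $x$ is symmetric about $0$: $$ \int_{-\sqrt{25-y^2}}^{+\sqrt{25-y^2}} 2x\,dx = \Bigl[x^2\Bigr]_{-\sqrt{25-y^2}}^{+\sqrt{25-y^2}} = 0, $$ so, assuming the region really is this full symmetric slab, the whole integral is $0$.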
Sample Space for a Pair of Loaded Dice
Just as for rolling two ordinary dice, the sample space consists of a $6 \times 6$ array of 36 pairs of faces. (These particular dice, with faces $1,2,2,3,3,4$ and $1,3,4,5,6,8$, are the well-known Sicherman dice.) Enumeration: For the sum $S$ on the two dice, each of the 36 cells can also be labeled with the total of the two corresponding faces. Then count the cells for each total. (The first two of the six rows are shown below.)

     1   3   4   5   6   8
  --------------------------
1    2   4   5   6   7   9
2    3   5   6   7   8  10
...

Analytic methods: It is easy to show that $$E(S) = E(D_a) + E(D_b) = 15/6 + 27/6 = 42/6 = 7,$$ which is the same as for regular dice. A bit more tediously, one can show that $Var(S)$ is the same as for regular dice. 'Probability generating functions' could be used to show that the distribution of $S$ agrees with the (triangular) distribution of the sum of two ordinary dice. Simulation: The distribution of $S$ can be very closely approximated by simulating the sums on a million rolls of these two special dice and tallying the results. (Simulation in R statistical software gives probabilities accurate to about three places.)

m = 10^6; da = c(1,2,2,3,3,4); db = c(1, 3:6, 8)
# one roll of each die per replication; sum the two faces
s = replicate(m, sample(da,1) + sample(db,1))
round(table(s)/m, 3)
s
    2     3     4     5     6     7     8     9    10    11    12 
0.028 0.056 0.083 0.111 0.139 0.167 0.139 0.111 0.083 0.056 0.028 

The plot below shows a histogram of the million simulated totals obtained when rolling a pair of these special dice. The dots show the exact distribution.

hist(s, br=1.5:12.5, prob=T, col="skyblue2", main="Simulated Sums")
x = 2:12; pdf = c(1:6, 5:1)/36
points(x, pdf, col="red", pch=19)
How would you solve this?
By the mean value theorem for integrals, there exists some $\xi\in(0,1)$ such that $$f(\xi)=\int_0^1f(x)\,\mathrm d x.$$ Hence, choose $c_0=0$, $c_1=1$, and $x_1=\xi$ to obtain exact precision. Of course, this doesn't really help much if you're supposed to find the value of $\xi$ because you don't know it in the first place.
Order of Subgroup in $S_5$ generated by $(123), (345)$
It is $A_5$, of order $60$. Note that $(123)(345)=(12345)$, and since $(12345)$ and $(123)$ generate $A_5$, the subgroup must be $A_5$. (Both generators are even permutations, so no odd permutations are generated; that is, the subgroup is contained in $A_5$.)
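One way to see that these two elements generate all of $A_5$ rather than a proper subgroup: the subgroup $H$ they generate contains elements of orders $3$ and $5$, so $15 \mid |H|$, leaving $|H| \in \{15, 30, 60\}$. A group of order $15$ is cyclic, but $A_5$ has no element of order $15$; and a subgroup of order $30$ would have index $2$, hence be normal, contradicting the simplicity of $A_5$. So $|H| = 60$.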
Help transforming a basic equation
There's a basic fact that I'm assuming that you know: $$ x^a \cdot x^b = x^ {a+b}. $$ In your case, $a$ and $b$ are the same, and both are $2^{n+1}$. So all you need to know is what is $$ a + b = 2^{n+1} + 2^{n+1}? $$ Well, it's two copies of $2^{n+1}$, so it's $$ 2^1\cdot 2^{n+1} = 2^{n+2} $$ by the same $a$-and-$b$ fact above. And that makes the final answer be $$ 2^{2^{n+2}}. $$
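As a quick numerical check with $n=1$: $$ 2^{2^{2}}\cdot 2^{2^{2}} = 16\cdot 16 = 256 = 2^{8} = 2^{2^{3}}, $$ matching $2^{2^{n+2}}$ for $n=1$.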