Find the exact value of the limit of the sequence defined by $a_1 = \sqrt 5, \;a_{n+1}= \sqrt {5 + a_n}$
HINT: Suppose that the sequence actually does have a limit, and let that limit be $L$. Then $$L=\lim_{n\to\infty}a_n=\lim_{n\to\infty}a_{n+1}=\lim_{n\to\infty}\sqrt{5+a_n}=\sqrt{5+L}\;,$$ since the function $f(x)=\sqrt{5+x}$ is continuous. You can solve the equation $L=\sqrt{5+L}$ for $L$. It remains, however, to show that the sequence actually has a limit. Note that $a_{n+1}\ge a_n$ if and only if $\sqrt{5+a_n}\ge a_n$, which is true if and only if $5+a_n\ge a_n^2$, i.e., if and only if $a_n^2-a_n-5\le 0$. This is the case if and only if $a_n$ lies between the two roots of $x^2-x-5$. Try to show by induction on $n$ that this is always the case, so that the sequence, being bounded and increasing, must have a limit.
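For a quick numerical sanity check (illustrative only, not part of the proof), one can iterate the recursion in Python and compare with the positive root of $L^2-L-5=0$, namely $L=\frac{1+\sqrt{21}}{2}$:

```python
import math

# Iterate a_{n+1} = sqrt(5 + a_n) and compare with the positive root
# of L^2 - L - 5 = 0, i.e. L = (1 + sqrt(21)) / 2.
a = math.sqrt(5)
for _ in range(50):
    a = math.sqrt(5 + a)

L = (1 + math.sqrt(21)) / 2
print(a, L)  # both approximately 2.7912878...
```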
the relationship between convergence and topology
In a metric space you can define the closure of a set $A$ as the set of limits of convergent sequences of points of $A$. In general topological spaces this does not work unless you replace "sequence" by "net". Using $\nu$ as an index suggests your textbook is talking about nets and not sequences, so in this case it would indeed define a topology by net convergence. You can find more details here: http://math.uga.edu/~pete/convergence.pdf. Proposition 3.1 is what you need.
Show that if $I$ is contained in a maximal ideal $M$, then $\mathrm{rad}(I)$ has to be in the same maximal ideal.
Suppose that $x\in \mathrm{rad}(I)$. Then $x^n\in I$ for some $n$, which since $I\subset M$ means that $(x+M)^n=0$ in $A/M$. But $A/M$ is a field, so $x+M=0$, i.e. $x\in M$. Therefore $\mathrm{rad}(I)\subset M$.
Does there exist a refinement of a given partition that satisfies given properties?
Hint: There are such sets $Q$. Think about what each of the conditions means intuitively. i) No two of the $t_i$ are in the same subinterval of $Q$. ii) Each $t_i$ is in some subinterval of $Q$ (this seems a slightly unusual condition; surely any partition of $[a,b]$ will satisfy this property for any subset of $[a,b]$?). iii) Each subinterval of $Q$ has some $t_i$ in it.
An Intuitive Understanding of Covariance
Here is my intuition on covariance: The covariance of $X$ and $Y$ is positive if the two have a monotonically increasing relationship. This means that high values of $X$ imply high values of $Y$ and vice versa. It is negative if they have a monotonically decreasing relationship, meaning that high values of $X$ imply low values of $Y$ and vice versa. If the covariance is zero, then there is no monotonic relationship between $X$ and $Y$; other kinds of dependence are still possible, though. The covariance provides the direction of a relationship between two random variables, but it does not provide its strength. To make it comparable it has to be normalized (compare, for example, the Pearson correlation coefficient).
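If it helps to see this numerically, here is a small Python sketch (the sample setup is my own, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)

# Increasing relationship -> positive covariance
print(np.cov(x, 2 * x + rng.normal(size=x.size))[0, 1])  # > 0

# Decreasing relationship -> negative covariance
print(np.cov(x, -x + rng.normal(size=x.size))[0, 1])     # < 0

# Symmetric, non-monotonic dependence -> covariance near 0
print(np.cov(x, x**2)[0, 1])                              # ~ 0
```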
Let $A_n =\left[\frac{n}{n+1},\frac{n+1}{n+2}\right]$ be closed subsets; find $\bigcup_{n=1}^\infty A_n$
$$\bigcup_{n=1}^{m}\left[\frac{n}{n+1}\,,\,\,\frac{n+1}{n+2}\right]=\left[\frac{1}{2}\,,\,\,\frac{m+1}{m+2}\right]$$ Taking the limit $m\rightarrow \infty$, you get $$ \bigcup_{n=1}^{\infty}A_n=\left[\frac{1}{2}\,,\,\,1\right) $$
Linear span of given vectors expresses a plane in $\mathbb R^3$?
Your intuition is correct. It can be stated more rigorously by noting that the equation of a plane through the origin in $\mathbb R^3$ is: $$Ax + By + Cz = 0$$ Since your given vectors are linearly dependent, you will arrive at such an equation, thus satisfying the known algebraic form of a plane.
Solve linear programming given access to an oracle
This is not a complete answer - more of a pointer, if even that. Since you have an oracle that only tells you if a formulation has a solution, to use this to pinpoint an optimum I think you will need to use something like the Dual-Simplex method, where you stop once you get a Primal feasible solution. However, instead of having to actually run the simplex method, you can simply check feasibility at each step and then use the Dual formulation to adjust bounds on the constraints until you achieve Primal feasibility.
Choosing non-adjacent chairs
Yes, you are correct in thinking that the questions are connected. They all depend upon the method used in part (a). (a) The chosen chairs occupy $k$ of the $n-k+1$ positions between and at the ends of the row of non-chosen chairs. The number of choices is therefore $$\binom{n-k+1}{k}.$$ (b) If chair 1 is not chosen, then the choice of $3$ chairs from the remaining $9$ is calculated as in (a). If chair 1 is chosen then, ignoring chair $1$ and the chairs adjacent to $1$, the choice of $2$ chairs from $7$ is also calculated as in part (a). The number of choices is therefore $$\binom{7}{3}+\binom{6}{2}=50.$$ (c) As in part (b) we have $$\binom{n-k}{k}+\binom{n-k-1}{k-1}=\frac{n(n-k-1)!}{k!(n-2k)!}.$$
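If you want to double-check part (b) by brute force (assuming, as there, $10$ chairs arranged in a circle), a short Python enumeration confirms the count of $50$:

```python
from itertools import combinations
from math import comb

# 10 chairs in a circle, choose 3, no two adjacent
# (chair 9 is adjacent to chair 0).
n, k = 10, 3
count = sum(
    1
    for chosen in combinations(range(n), k)
    if all((b - a) % n > 1 and (a - b) % n > 1
           for a, b in combinations(chosen, 2))
)
print(count, comb(7, 3) + comb(6, 2))  # 50 50
```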
Finding irreducible polynomial in finite field
Deciding whether a degree $3$ polynomial is irreducible over $\mathbb F_4$ is actually quite easy. If a degree $3$ polynomial factors then at least one of those factors must be degree $1$, i.e., your polynomial must have a root. So a degree $3$ polynomial factors if and only if it has a root. As your field has only $4$ elements this is straightforward to check.
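As a concrete illustration, here is a small Python sketch of this root test (the encoding of $\mathbb F_4$ and the helper names are my own; this is just one way to set it up):

```python
# F_4 = {0, 1, w, w+1} encoded as the integers 0..3, where bit i is the
# coefficient of w^i and we reduce by w^2 = w + 1.

def f4_mul(a, b):
    p = (a if b & 1 else 0) ^ ((a << 1) if b & 2 else 0)
    if p & 0b100:      # reduce w^2 -> w + 1
        p ^= 0b111
    return p

def f4_pow(a, e):
    r = 1
    for _ in range(e):
        r = f4_mul(r, a)
    return r

def has_root_in_f4(coeffs):
    """coeffs = [c0, c1, c2, c3] for c0 + c1 x + c2 x^2 + c3 x^3."""
    for x in range(4):
        v = 0
        for e, c in enumerate(coeffs):
            v ^= f4_mul(c, f4_pow(x, e))  # addition in F_4 is XOR
        if v == 0:
            return True
    return False

# x^3 + x + 1 has no root in F_4, so it is irreducible over F_4:
print(has_root_in_f4([1, 1, 0, 1]))  # False
```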
$5$ dimensional space over $\mathbb{R}$
I feel compelled to write this answer, based on the fact that the answer which was accepted makes no sense. What the OP mentions seems to be an embedding of $\mathbb{R}^5$ inside the space $M$ of hermitian $2\times 2$ matrices over the quaternions, which is $Sp(2)$-equivariant, where $Sp(2)$ acts on $\mathbb{R}^5$ via the canonical double cover $Sp(2) \to SO(5)$ and acts on $M$ by conjugation. This is analogous to the following well-known construction. You can exhibit the double cover $SL(2,\mathbb{C}) \to SO(3,1)_0$ as follows. You identify $\mathbb{R}^4$ with the real vector space $H$ of hermitian $2\times 2$ matrices. (Minus) the determinant of a hermitian $2 \times 2$ matrix defines a quadratic form on $H$ coming from an inner product of signature $(3,1)$. Define an action of $SL(2,\mathbb{C})$ on $H$ by $h \mapsto s h s^\dagger$, for $h \in H$ and $s \in SL(2,\mathbb{C})$. This is a linear transformation of $H$ which preserves the determinant (since $s$ has unit determinant) and hence defines an element of $O(3,1)$. Since $SL(2,\mathbb{C})$ is connected, it actually defines a surjective map $SL(2,\mathbb{C}) \to SO(3,1)_0$ to the identity component. The kernel of this map is the group of order $2$ generated by $-I$. The analogy breaks down in that there is no quaternionic determinant in general, but perhaps for matrices in $M$ it can be defined. I have not checked. There are similar double covers which can be explicitly described in this way: e.g., $SL(2,\mathbb{R}) \to SO(2,1)_0$ and $SL(2,\mathbb{H}) \to SO(1,5)_0$.
Showing Convergence of Positive Series
I'll work on $\mathbb R.$ The extension to $\mathbb R^n$ will be clear I hope. We fix a real sequence $a_k$ and a positive sequence $r_k$ and assume $\sum_{k=1}^{\infty}r_k|\cos (kx+a_k)| < \infty$ for $x$ in some set of positive measure. Lemma: Suppose $E$ is a set of positive finite measure. Then there exists $k_0$ such that $$\tag 1 \int_E |\cos (kx+a_k)|\, dx > m(E)/4 \text { for } k > k_0.$$ Proof: We have $|\cos (kx+a_k)|\ge \cos^2 (kx+a_k) = [1+\cos (2kx + 2 a_k)]/2.$ Therefore $$\int_E |\cos (kx+a_k)|\, dx \ge \int_E [1+\cos (2kx + 2 a_k)]/2\, dx =m(E)/2 + \int_E [\cos (2kx + 2 a_k)]/2\, dx.$$ An easy argument using Riemann-Lebesgue lemma shows the last integral $\to 0$ as $k\to \infty.$ This proves $(1).$ Define $f(x) = \sum_{k=1}^{\infty}r_k|\cos (kx+a_k)|.$ Then $f$ is a measurable function on all of $\mathbb R,$ with values in $[0,\infty].$ We are given that $f<\infty$ in a set of positive measure. It follows that $f$ is bounded by some $C$ on a set $E$ of positive finite measure. Referring to the lemma we then have $$\infty> Cm(E) \ge \int_E f \ge \int_E \sum_{k>k_0}^{\infty}r_k|\cos (kx+a_k)|\, dx$$ $$ = \sum_{k>k_0}^{\infty}r_k\int_E |\cos (kx+a_k)|\, dx \ge \sum_{k>k_0}^{\infty}r_k[m(E)/4].$$ This shows $\sum_{k>k_0}^{\infty}r_k <\infty$ and we're done.
Can someone give me some concrete examples explaining Picard's Great Theorem?
For the first part, the function $f(z)=e^z$ is a typical example. It is a non-constant entire function attaining every value with one exception - it is never zero. For the second part, typically $e^{1/z}$ is considered. One shows that arbitrarily close to the essential singularity $z=0$, all non-zero values are attained. You can "see" this in the plot here: http://en.wikipedia.org/wiki/Picard_theorem.
Hatcher's formula in homotopy equivalence proof
First of all one computes $\partial \sigma$, where $\sigma$ is understood to be $\sigma : [v_0,v_1,\dots,v_n] \rightarrow X$: $\partial \sigma = \sum_i (-1)^i\sigma|[v_0,\dots,\hat{v}_i,\dots,v_n]$. Now use the formula for the prism operator, being careful with the terms: here we are just applying the prism operator to each of the $(n-1)$-simplices in the boundary. Calculating, we have $P(\sigma|[v_0,\dots,\hat{v}_i,\dots,v_n]) = \sum_{i>j} (-1)^jF\circ (\sigma \times Id)| [v_0,\dots,v_j,w_j,\dots, \hat{w}_i,\dots,w_n] + \sum_{i<j} (-1)^{j-1}F\circ (\sigma \times Id)|[v_0,\dots,\hat{v}_i,\dots,v_j,w_j,\dots,w_n]$. Now when we sum this over $i$ with the sign $(-1)^i$, on the LHS we get $P(\partial \sigma)$ and on the RHS the formula mentioned.
Find a linear operator $T:X\to X$ , $X$ normed space and $T$ maps closed sets onto closed sets, but $T$ is not bounded.
You can look at $X=c_{00}$ and take $T:c_{00}\to c_{00}$ given by $$T[(x_n)_{n\in\Bbb N}]=(2^n x_n)_{n\in\Bbb N}.$$ This is obviously linear and unbounded. Further, it is a bijection, and it is open because the map is bounded from below; hence it is a closed map.
Looking for a recurrence relation or combinatorial way to calculate initial number
Just start with the last lake: Let $n_i$ be the number of birds left before coming to the $i$-th lake (so $n_0$ is the total number of birds). In other words, $n_i$ is the number of birds that fly between lake $i-1$ and lake $i$. Obviously $n_8 = 0$ since there were no birds left. But $n_8 = n_7 - (n_7/2 +1/2)$ (half of $n_7$ plus half a bird are left at the $7$th lake, so $n_7$ minus that number flies on). This recursion also holds for the rest: $n_{i+1} = n_i - (n_i/2+1/2) = (n_i-1)/2$. So $n_8 = 0 \implies n_7 = 1 \implies n_6 = 3 \implies \ldots$ (And I recommend trying to find an explicit formula for this sequence, as in the check below. Perhaps number the indices in reverse; that might make it easier.)
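A quick Python check of the backward recursion (illustrative; it also suggests the closed form $n_i = 2^{8-i}-1$):

```python
# Run n_i = 2 * n_{i+1} + 1 backwards from n_8 = 0
# (equivalently n_{i+1} = (n_i - 1) / 2 forwards).
n = 0
for i in range(7, -1, -1):
    n = 2 * n + 1
    print(f"n_{i} = {n}")
# n_0 = 255, i.e. 2^8 - 1 birds at the start
```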
Trapezium rule vs integration
Given a function $x\mapsto f(x)\in{\mathbb R}$ $\>(a\leq x\leq b)$ the integral $\int_a^b f(x)\>dx$ is a certain real number. This number has various intuitive descriptions, and is mathematically defined by an involved limiting process. The trapezoidal rule is a formula for obtaining approximations to this number. It contains a parameter $n\in{\mathbb N}$ or $h>0$ defining the fineness of the intended approximation. If $f$ is integrable over the interval $[a,b]$ then it is usually easy to show that under $\lim_{n\to\infty}$, or $\lim_{h\to0}$, the trapezoidal sums converge to the intended integral. But each individual sum is just a finite sum, and is not equal to the integral. The interesting point, however, is to estimate the error when we use a trapezoidal sum $T_n$ as an approximate value for the intended integral.
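To see the convergence and the error behaviour concretely, here is a small Python sketch (my own example, applying the rule to $\int_0^\pi\sin x\,dx=2$; for smooth integrands the error shrinks like $O(h^2)$):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

exact = 2.0  # integral of sin over [0, pi]
for n in (4, 16, 64, 256):
    T = trapezoid(math.sin, 0.0, math.pi, n)
    print(n, T, abs(T - exact))  # error decreases roughly by 16x per 4x in n
```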
A question about similar triangles.
Suppose that your similar triangle has sides of length $x, y, z$. We'll then have, by similarity, $x=2t, y=3t, z=4t$, for some (positive) $t$. The perimeter, then, will be $2t+3t+4t = (2+3+4)t = 9t = 36$ so we have $t=4$ and so the sides will be $2\cdot4, 3\cdot4, 4\cdot4$ or $8, 12, 16.$
Do there exist three non-null vectors $a,b,c$ with $a\cdot b=a\cdot c$ such that $b\ne c$?
Consider $$a=\begin{bmatrix}1\\0\end{bmatrix}, b=\begin{bmatrix}0\\1\end{bmatrix},c=\begin{bmatrix}0\\2\end{bmatrix}.$$ As long as $a$ is perpendicular to $b-c$, you will get the equality $a\cdot b=a\cdot c$.
Let $G$ be a group with $|G| = 18$. Suppose that $G$ has a cyclic subgroup of order $9$. Classify $G$
Let $H$ be the cyclic subgroup of order 9. Then $H$ is normal in $G$. Let $h$ be a generator of $H$. Let $i \in G$ be an element of order 2 (which must exist). Then $ihi^{-1} = h^n$ for some $n$. Then $h = i^2hi^{-2} = i(ihi^{-1})i^{-1} = ih^ni^{-1} = h^{n^2}$, so $n^2 \equiv 1 \pmod 9$ and hence $n \equiv \pm 1 \pmod 9$. If $n = 1$ then the group is isomorphic to $\mathbb Z_{18}$, and if $n = -1$ then the group is isomorphic to $D_{18}$, the dihedral group of order $18$.
Find the degree and a basis for $\mathbb{Q}(\sqrt{3} + \sqrt{5})$ over $\mathbb{Q}(\sqrt{15})$
There's a very obvious way to do this when you know a primitive element: clearly, we have that $ \mathbf Q(\sqrt{3} + \sqrt{5}) = \mathbf Q(\sqrt{15})(\sqrt{3} + \sqrt{5}) $, that is, the extension is generated over $ \mathbf Q(\sqrt{15}) $ by the same primitive element. Since we know that the extension degree is $ 2 $, we can easily surmise that a basis for the extension is simply $ \{ 1, \sqrt{3} + \sqrt{5} \} $, which is readily verified. Of course, the extension has many bases...
When does this sequence converge?
Ok. Let's start by studying the evolution of $a_n$: $2(a_{n+1}-a_n)=a_n^2-2a_n+c$. Studying the function $x^2-2x+c$ tells us that this equation has roots for $c\leq 1$. Case 1: $c>1$. Can the sequence have a limit? Well, you know that this is not the case (since it would mean $l=\frac{l^2+c}{2}$, which is not possible). Case 2: $c=1$. Then, three subcases... If $a_n=1$, then $a_{n+1}=1$... BUT $a_1=\frac 12$, so this is not possible. If $a_n > 1$, then $a_{n+1}>a_n>1$: the sequence cannot converge to $1$. If $a_n<1$, then $a_{n+1}>a_n$ AND $a_{n+1}<\frac{1+1^2}{2}=1$; $a_n$ is strictly increasing and bounded, thus converges to $1$ (the limit if $c=1$). Since $a_1= \frac 12$, you are in the third subcase from the very beginning... Case 3: $c<1$. You go on with the same kind of reasoning as in case 2, being careful about the monotonicity of the sequence, whether it stays (or not) in the intervals you chose for your cases, etc. You will find your two potential limits $r_1$ and $r_2$, with $r_1=1-\sqrt{1-c}$ and $r_2=1+\sqrt{1-c}$.
If $x$, $y$ and $z$ are distinct positive integers and $x+y+z=11$ then what is the maximum value of $(xyz+xy+yz+zx)$?
Note that $(x+1)(y+1)(z+1)=1+(x+y+z)+(xy+yx+zx)+xyz$ so the sum you want is $$(x+1)(y+1)(z+1)-12$$ which justifies your comment about the product. If $a\gt a-1\ge b+1\gt b$ we have $$(a-1)(b+1)=ab+(a-b)-1\gt ab$$ since $a-b\ge 2$. The best selection of distinct integers is (as others have noted) $5+4+2=11$. And the product formula gives $90-12=78$.
$\lim_{n\to \infty}\frac{(3^4)^{n+1}((n+1)!)^4}{(4(n+1))!}\cdot\frac{(4n)!}{(3^4)^n(n!)^4}$
Actually $$\lim_{n\to \infty}\frac{(3^4)\big((n+1)n!\big)^4}{(4n+4)!}\cdot\frac{(4n)!}{(n!)^4}=$$ $$=\lim_{n\to \infty}\frac{(3^4)(n+1)^4}{(4n+4)(4n+3)(4n+2)(4n+1)(4n)!}\cdot(4n)!=$$ $$=\lim_{n\to \infty}\frac{3^4(n+1)^4}{4^4(n+1)\left(n+\frac34\right)\left(n+\frac12\right)\left(n+\frac14\right)}=\left(\frac34\right)^4<1\;.$$ So the series converges.
Erlang C for Large Numbers
The question author shg has come up with a practical solution and posted a VBA function in the Adobe thread linked above. This answer describes an alternate solution in terms of built-in functions not requiring iteration. Unfortunately it doesn't seem to work accurately in Excel, but it should work in R or Matlab. The sum can be expressed in terms of a cumulative distribution function: \begin{align*} \frac{m!}{u^m}\sum\limits_{k = 0}^{m - 1} {{{{u^k}} \over {k!}}} &=\frac{m!}{u^m}e^uF_{\mathrm{Poisson}}(m-1;u)\\ &=\exp(\ln(\Gamma(m+1))-m\ln(u)+u)F_{\mathrm{Poisson}}(m-1;u) \end{align*} where $F_{\mathrm{Poisson}}(m-1;u)$ is the probability that a $\mathrm{Poisson}(u)$ variable is at most $m-1$, and $\Gamma(m+1)$ denotes the Gamma function. The $F_{\mathrm{Poisson}}(m-1;u)$ factor is numerically nice because it is not too small, in fact it looks like $$\tfrac 1 2 < F_{\mathrm{Poisson}}(m-1;u)<1\text{ for $u<m$.}$$ (I don't know a rigorous proof for the lower bound, but by the normal approximation for Poisson variables, for large $u$ the median will be approximately the mean, giving $F_{\mathrm{Poisson}}(m-1;u)\geq F_{\mathrm{Poisson}}(u;u)\to \tfrac 1 2$.) Excel's POISSON function can be used to evaluate $F_{\mathrm{Poisson}}$, so the whole expression could be expressed as: EXP(GAMMALN(m+1)-m*LN(u)+u)*POISSON(m-1,u,TRUE) Unfortunately Excel's POISSON can't be trusted here; my copy of Excel gives POISSON(1e6,1e6,TRUE)=0.863245255, where R gives the more plausible ppois(1e6,lambda=1e6)=0.500266.
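For reference, here is a Python analogue of the same closed form (using SciPy, whose Poisson CDF handles large arguments well; the helper name is mine):

```python
import math
import numpy as np
from scipy.special import gammaln
from scipy.stats import poisson

def stable_sum(m, u):
    """m!/u^m * sum_{k=0}^{m-1} u^k/k!, computed via the Poisson CDF."""
    return np.exp(gammaln(m + 1) - m * np.log(u) + u) * poisson.cdf(m - 1, u)

# Check against the direct sum for a small case:
m, u = 20, 15.0
direct = math.factorial(m) / u**m * sum(u**k / math.factorial(k) for k in range(m))
print(stable_sum(m, u), direct)  # should agree

# And it stays usable where Excel's POISSON breaks down:
print(poisson.cdf(1_000_000, 1_000_000))  # ~ 0.500266, matching R's ppois
```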
Moment of inertia of a uniform disk -- what is the integrand in an area element integral?
As an answer to your second question, double integrating a function (not equal to one) is the continuum analogue of the double sum $ \sum_{i=1}^n\sum_{j=1}^n f(i,j) .$ In fact, the integral is defined as the limit of sums of this kind, so you are just adding up the total amount of stuff (in this case, mass) in the region you are integrating over. Were you looking for anything more than this? Reading the last part of your question, it seems like you have more or less arrived at this answer yourself too.
Expected Number of Drinks in 'Ride the Bus' drinking game
This is not an answer. Think about states and transitions between them. A state is $(n,j,q,k,a)$, where $n$ is the number of face-down cards you have, $j$ is the number of jacks flipped, $q$ is the number of queens flipped, etc. Your game starts in state $(10,0,0,0,0)$. With probability $\frac{4}{52}$ it transitions into state $(10,1,0,0,0)$, which involves a drink. With probability $\frac{4}{52}$ it transitions into state $(11,0,1,0,0)$, which involves two drinks. The same probability applies to states $(12,0,0,1,0)$ and $(13,0,0,0,1)$, with three and four drinks respectively. With the remaining probability you transition to state $(9,0,0,0,0)$ with no drink. Hence the value of $(10,0,0,0,0)$ is $$v_{(10,0,0,0,0)}=\tfrac{4}{52}(1+v_{(10,1,0,0,0)})+\tfrac{4}{52}(2+v_{(11,0,1,0,0)})+\tfrac{4}{52}(3+v_{(12,0,0,1,0)})+\tfrac{4}{52}(4+v_{(13,0,0,0,1)})+\tfrac{52-16}{52}v_{(9,0,0,0,0)}$$ If you know the values of the states in the expression, you know the expected number of drinks. You can proceed working backwards from the states that are final. For example, $v_{(n,4,4,4,4)}=0$ for any $n$. A practical problem with calculating the values is that, say, the probability of drawing a jack changes depending on how many you have drawn already. If you are willing to change the game so that 'cards are drawn with replacement', your calculation would be easier. That is, assume that you have a deck of 52 cards and you start knowing that you have to flip $10$. You draw a card. You apply your rules, adding to the number of cards you have to flip if you draw a jack/queen/etc., and put the card back among the $52$. With this, the state is described simply by the number of cards you have to draw.
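Here is a quick Monte Carlo sketch of the 'with replacement' simplification (Python; the rules are my reading of the transitions above, i.e. a J/Q/K/A costs $1/2/3/4$ drinks and adds that many cards to the pile still to be flipped, each with probability $4/52$):

```python
import random

def one_game(start=10):
    remaining, drinks = start, 0
    while remaining > 0:
        remaining -= 1
        u = random.random()
        add = 0
        if u < 4 / 52:
            add = 1        # jack
        elif u < 8 / 52:
            add = 2        # queen
        elif u < 12 / 52:
            add = 3        # king
        elif u < 16 / 52:
            add = 4        # ace
        drinks += add
        remaining += add
    return drinks

n_trials = 100_000
print(sum(one_game() for _ in range(n_trials)) / n_trials)  # about 33.3
```

Under these assumed rules the sample mean settles near $100/3\approx 33.3$, which is what a Wald-identity argument for the simplified variant suggests (expected drinks per flip $10/13$, expected net decrease per flip $3/13$).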
If a sequence of random matrices converge in probability, do their elements also converge?
Yes: if $Y=\max\limits_{1\leqslant k\leqslant N}|Y_k|$ converges to $0$ in probability, then each $Y_k$ converges to $0$ in probability. Proof: For every $\varepsilon\gt0$, $[|Y_k|\geqslant\varepsilon]\subseteq[Y\geqslant\varepsilon]$, hence $P(|Y_k|\geqslant\varepsilon)\leqslant P(Y\geqslant\varepsilon)\to0$. QED
Expanding matrix to a larger matrix - name of the operation
You can describe the operation you are performing with a map $T: M_{n\times n}(\mathbb R) \to M_{n^2\times n^2}(\mathbb R)$, where $M_{n \times n}(\mathbb R)$ is the vector space of $n \times n$ matrices with real-valued entries. Define $T$ by $T: A \mapsto \begin{pmatrix} A_{1,1} & \ldots & A_{1,n} \\ \vdots & \ddots & \vdots \\A_{n,1} & \ldots & A_{n,n} \end{pmatrix}$, where $A_{i, j}$ is the matrix whose entries are everywhere zero except in row $i$, column $j$, and that entry is the $i,j$-th entry of $A$ itself. As for your other question about creating a diagonal matrix with $n$ real values $a_1, \ldots, a_n$, saying $D = \text{diag}(a_1, \ldots, a_n)$ is indeed valid, and $D$ is the $n \times n$ matrix whose entries are zero except along the diagonal, where they are the values $a_1, \ldots, a_n$.
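In case a concrete implementation helps, here is a small numpy sketch of $T$ (illustrative; the function name is mine):

```python
import numpy as np

def T(A):
    """Map an n x n matrix A to the n^2 x n^2 block matrix described above:
    block (i, j) is zero except for its (i, j) entry, which equals A[i, j]."""
    n = A.shape[0]
    out = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            out[i * n + i, j * n + j] = A[i, j]
    return out

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(T(A))
```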
Finite cyclic groups
They could: $G$ is also equal to $\{1,g^{-1},g^{-2},\dots,g^{1-n}\}$ and to $\{g^n,g^{n+1},\dots,g^{2n-1}\}$, among many other sets of powers of $g$. It’s pretty easy to see that $G=\{g^n,g^{n+1},\dots,g^{2n-1}\}$: after all, $g^{k+n}=g^k\cdot g^n=g^k\cdot 1=g^k$, so $g^n=1,g^{n+1}=g,\dots,g^{2n-1}=g^{n-1}$. You have to work a little harder to match up $\{1,g,g^2,\dots,g^{n-1}\}$ with $\{1,g^{-1},g^{-2},\dots,g^{1-n}\}$, but not much: $g\cdot g^{n-1}=1$, so $g^{-1}=g^{n-1}$, and in general $g^{n-k}=g^n\cdot g^{-k}=1\cdot g^{-k}=g^{-k}$. Exercise: If $k,k+1,\dots,k+n-1$ are any $n$ consecutive integers, then $$G=\{g^k,g^{k+1},\dots,g^{k+n-1}\}\;.$$ Exercise: Find a set of $n$ exponents, $\{k_1,k_2,\dots,k_n\}$, that are not consecutive integers but still have the property that $$G=\{g^{k_1},g^{k_2},\dots,g^{k_n}\}\;.$$
Fourier (sine) series of a piecewise function
The Fourier series coefficients of $$f(x)=\pi\ \theta\left(x-\frac{\pi}{2}\right),\quad 0<x<\pi\tag{1}$$ are given by $$b_n=\frac{1}{\pi/2}\int\limits_0^{\pi} f(x)\ \sin\left(\frac{\pi\ n\ x}{\pi/2}\right)\,dx\tag{2}=\frac{\cos(\pi\ n)-\cos(2\ \pi\ n)}{n}$$ and the values of these coefficients for $1\le n\le 10$ are as follows: $$\begin{array}{cc} n & b_n \\ 1 & -2 \\ 2 & 0 \\ 3 & -\frac{2}{3} \\ 4 & 0 \\ 5 & -\frac{2}{5} \\ 6 & 0 \\ 7 & -\frac{2}{7} \\ 8 & 0 \\ 9 & -\frac{2}{9} \\ 10 & 0 \\ \end{array}$$ Therefore the Fourier series representation of $f(x)$ is as follows: $$f(x)=\frac{\pi}{2}-\underset{K\to\infty}{\text{lim}}\left(\sum\limits_{k=1}^K\frac{2}{2\ k-1}\ \sin\left(\frac{\pi\ (2\ k-1)\ x}{\pi/2}\right)\right),\quad 0<x<\pi\tag{3}$$ The figure below illustrates the Fourier series defined in formula (3) above in orange overlaid on the reference function $f(x)$ defined in formula (1) above in blue, where formula (3) is evaluated at $K=10$.
Prefix and infix forms of functions
Your prefix form is essentially a tree. If you traverse it from the bottom up, you can build an infix form. Conversely, given an expression in infix form, you can build a parse tree for it, by parsing it according to the precedence rules, aka working an expression from inside out.
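A minimal Python sketch of this idea, for binary operators only (the parser and names are my own illustration):

```python
# Parse a whitespace-separated prefix expression into a tree,
# then emit the infix form bottom-up.
OPS = {"+", "-", "*", "/"}

def parse(tokens):
    tok = tokens.pop(0)
    if tok in OPS:
        return (tok, parse(tokens), parse(tokens))
    return tok  # a leaf: number or variable

def to_infix(node):
    if isinstance(node, tuple):
        op, left, right = node
        return f"({to_infix(left)} {op} {to_infix(right)})"
    return node

tree = parse("+ a * b c".split())
print(to_infix(tree))  # (a + (b * c))
```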
6 x 6 matrix for calculating Jordan Canonical form
$$ \left( \begin{array}{cccccc} 0&2&0&0&0&0 \\ -2&0&0&0&0&0 \\ 0&0&0&3&0&0 \\ 0&0&-3&0&0&0 \\ 0&0&0&0&0&4 \\ 0&0&0&0&-4&0 \\ \end{array} \right) $$
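A quick numerical check (illustrative) of why this is a useful test case: each $2\times 2$ block $\begin{pmatrix}0&k\\-k&0\end{pmatrix}$ contributes the conjugate pair $\pm ki$, so the matrix has six distinct purely imaginary eigenvalues.

```python
import numpy as np

M = np.array([
    [0, 2, 0, 0, 0, 0],
    [-2, 0, 0, 0, 0, 0],
    [0, 0, 0, 3, 0, 0],
    [0, 0, -3, 0, 0, 0],
    [0, 0, 0, 0, 0, 4],
    [0, 0, 0, 0, -4, 0],
], dtype=float)
print(np.linalg.eigvals(M))  # ±2i, ±3i, ±4i
```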
Group homomorphism in category theory
Well, it is quite trivial: both groups consist of a single object, so the functor can only map the first object to the second. For two arrows $g,h$ in the first group, and a functor $f$ to the second group, functoriality means that $f(gh) = f(g)f(h)$. But this is precisely the definition of a group homomorphism.
Analogs of Wigner-Eckart theorem for other groups
For compact groups see Theorem 2 on page 247 of A. Barut, and R. Razka. Theory of Group Representations and Applications. Singapore, World Scientific Publishing, 1986. For locally compact groups, see A. U. Klimyk, Wigner–Eckart theorem for locally compact groups. Theor. Math. Phys. 8 (1971), p. 668–672.
Good undergraduate texts in analysis for self studying
I would purchase a copy of Understanding Analysis by Abbott. There are a lot of pictures and the exercises are aimed at undergraduate students.
Determine if a vector is orthogonal to a subspace?
To say that $v$ is orthogonal to the space means that $v$ is orthogonal to each element of the space. Now an element of the space spanned by $w_1,w_2,$ and $w_3$ looks like \begin{equation*} w = aw_1 + bw_2 + cw_3 \end{equation*} for some real numbers $a,b,c\in\mathbb{R}$. Then, since $v$ is orthogonal to each of $w_1$, $w_2$, and $w_3$, we have \begin{equation*} v\cdot w = a(v\cdot w_1) + b(v\cdot w_2) + c(v\cdot w_3) = 0. \end{equation*} So $v$ is orthogonal to each element of the space spanned by $w_1$, $w_2$, and $w_3$, and we're done. (Short answer: Yes, you've got the right idea.)
Values of θ for which r is minimized in a circle?
You know that the trigonometric functions satisfy $-1\leq \sin(\theta)\leq 1$ and $-1\leq \cos(\theta)\leq 1$. In particular, $\sin(\theta)=-1$ if $\theta= \frac{3}{2}\pi$. Then the minimum value of $\sin(\theta)$ is $-1$, which means that the minimum value of $6\sin(\theta)$ is $-6$, attained when $\theta=\frac{3}{2}\pi$.
Distance between incenters and excenters
Let $I_1,I_2,I_3$ be the excenters opposite to $A,B,C.$ Since $BI_2$ and $CI_3$ intersect at $I$, we have $\angle I_2II_3=\angle BIC=\pi-\frac{B}{2}-\frac{C}{2}$, because they are vertically opposite angles. Using the law of cosines in $\triangle II_2I_3$: $(I_2I_3)^2=(II_3)^2+(II_2)^2-2(II_2)(II_3)\cos(\angle I_2II_3)$ $(I_2I_3)^2=(II_3)^2+(II_2)^2-2(II_2)(II_3)\cos(\pi-\frac{B}{2}-\frac{C}{2})$ $(I_2I_3)^2=(II_3)^2+(II_2)^2+2(II_2)(II_3)\cos(\frac{B}{2}+\frac{C}{2})$ Since $II_3=4R\sin(\frac{C}{2})$ and $II_2=4R\sin(\frac{B}{2})$, using this in the above equation: $(I_2I_3)^2=(4R\sin(\frac{C}{2}))^2+(4R\sin(\frac{B}{2}))^2+2(4R\sin(\frac{C}{2}))(4R\sin(\frac{B}{2}))\cos(\frac{B}{2}+\frac{C}{2})$ $(I_2I_3)^2=16R^2[\sin^2\frac{C}{2}+\sin^2\frac{B}{2}+2\sin\frac{C}{2}\sin\frac{B}{2}(\cos\frac{B}{2}\cos\frac{C}{2}-\sin\frac{B}{2}\sin\frac{C}{2})]$ $(I_2I_3)^2=16R^2[\sin^2\frac{C}{2}+\sin^2\frac{B}{2}-2\sin^2\frac{B}{2}\sin^2\frac{C}{2}+2\sin\frac{C}{2}\sin\frac{B}{2}\cos\frac{B}{2}\cos\frac{C}{2}]$ $(I_2I_3)^2=16R^2[\sin^2\frac{C}{2}-\sin^2\frac{B}{2}\sin^2\frac{C}{2}+\sin^2\frac{B}{2}-\sin^2\frac{B}{2}\sin^2\frac{C}{2}+2\sin\frac{C}{2}\sin\frac{B}{2}\cos\frac{B}{2}\cos\frac{C}{2}]$ $(I_2I_3)^2=16R^2[\sin^2\frac{C}{2}\cos^2\frac{B}{2}+\sin^2\frac{B}{2}\cos^2\frac{C}{2}+2\sin\frac{C}{2}\sin\frac{B}{2}\cos\frac{B}{2}\cos\frac{C}{2}]$ $(I_2I_3)^2=16R^2[\sin\frac{B}{2}\cos\frac{C}{2}+\cos\frac{B}{2}\sin\frac{C}{2}]^2$ $(I_2I_3)^2=16R^2\sin^2(\frac{B}{2}+\frac{C}{2})$ $(I_2I_3)^2=16R^2\cos^2(\frac{A}{2})$ So $(II_1)^2+(I_2I_3)^2=16R^2\sin^2(\frac{A}{2})+16R^2\cos^2(\frac{A}{2})=16R^2$
Finding elements from Venn Diagram
Work out from the middle: you know that there are $3$ people who can sing, dance, and act. There are $10$ who can sing and act, but $3$ of them are already accounted for, so the shaded region at the top where you have $10$ should really have $7$: $7$ of the people who can sing and act cannot dance, and the other $3$ can. Similarly, there are $7$ who can act and dance but cannot sing, and there are $5$ who can dance and sing but cannot act. Now consider the singers. There are $21$ of them altogether. $7$ of these can also act but cannot dance; $3$ can also act and dance; and $5$ can also dance but cannot act. These singers who can do something else as well account for $15$ of the $21$ singers, so there must be $6$ singers who can neither act nor dance. Similar reasoning, which I’ll leave you to try, shows that there are $6$ dancers who can neither sing nor act and $5$ actors who can neither dance nor sing. If you now add up the figures in all seven regions within the three circles, you should get a total of $39$. Since there are $42$ people altogether, this means that there must be $42-39=3$ managers. Note that up to this point the directors and poets are red herrings: you can ignore them completely. The last question is ambiguous. I can’t tell whether it wants the total number of people who can do exactly two things, the total number who can do at least two things, or for every pair of things the number who can do at least (or exactly) those two things. If it’s asking for the number of people who can do at least two things, we can start with the $3+7+7+5=22$ people in the various intersections of the circles. Now consider the poets: all of them are actors, so all of them can do at least two things. Two of them, however, are actor-dancers, so we’ve already counted them in the $22$. The other $3$, however, are actors who neither dance nor sing, so they weren’t counted and need to be added; this brings the total to $25$. Similarly, one of the $3$ directors has already been counted (as a dancer-singer), but the other two are dancers who neither sing nor act, so they have to be added to the total of those with multiple talents, bringing it to $27$.
Numerical Mathematics - Constraint or Unconstraint Optimization?
Before you reinvent the wheel, do note that optimization over semidefinite matrices (semidefinite programming) is a very well established field with lots of theory, applications, and generally available solvers. Essentially all methods to solve these problems work in what you could say is your category (2), by using (typically) interior-point primal-dual solvers. Practical examples include SeDuMi, SDPA, SDPT3, CSDP, Mosek, DSDP, etc. There is one singular exception though, and that is a method proposed by Burer and Monteiro (implemented in the solver SDPLR). Here they parameterize the matrix using a factorized representation as you describe in (1). This leads to a non-convex problem (in contrast to the underlying problem, which is convex), but surprisingly it performs well, and it has recently been analysed further, and it has been proven that it does not suffer from local minima, despite non-convexity.
Probabilities of calling with Cell phones
For part b), recall that two events are independent iff $P(B \cap C) = P(B)P(C)$. In this case $P(B \cap C) = 4/15$ and $P(B)P(C) = \frac{9}{15}\cdot\frac{4}{15}$, which is not equal to $P(B \cap C)$. Therefore the events are NOT independent. So you are correct. Edit: We get the relationship $P(A \cap B) = P(A)P(B)$ for independence between two events from $P(A|B) = P(A)$ (given $B$, the probability of $A$ happening is the same regardless: the definition of independence). Since $P(A|B) = P(A \cap B)/P(B) = P(A)$, we get $P(A \cap B) = P(A)P(B).$ I hope this helps.
Diagonalizable Matrices - Eigenvector test
Here's one way to determine if an $n\times n$ matrix $A$ is diagonalizable. Compute the distinct eigenvalues $\lambda_1,\dotsc,\lambda_r$ of $A$. For each eigenvalue $\lambda$ define the eigenspace associated to $\lambda$ as $E_\lambda=\operatorname{Null}(\lambda\cdot I-A)$. For each eigenvalue $\lambda$ compute $\dim E_\lambda$. If $\sum_{j=1}^r\dim E_{\lambda_j}=n$, then $A$ is diagonalizable. If $\sum_{j=1}^r\dim E_{\lambda_j}\neq n$, then $A$ is not diagonalizable. Note: If $r=n$, then $A$ is automatically diagonalizable.
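Here is a short sympy sketch implementing exactly this criterion (the function name is mine, for illustration):

```python
from sympy import Matrix, eye

def is_diagonalizable(A: Matrix) -> bool:
    """Sum the dimensions of the eigenspaces Null(lambda*I - A)
    over the distinct eigenvalues and compare with n."""
    n = A.rows
    total = 0
    for lam in A.eigenvals():  # keys are the distinct eigenvalues
        total += len((lam * eye(n) - A).nullspace())
    return total == n

print(is_diagonalizable(Matrix([[1, 1], [0, 2]])))  # True: two distinct eigenvalues
print(is_diagonalizable(Matrix([[1, 1], [0, 1]])))  # False: defective
```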
Find the minimum of $p>0$ such that $\frac 1 {1^p}+\frac 1 {2^p}+\frac 1 {3^p}+\cdots$ is convergent.
Hint: Use the integral test for convergence of infinite series: $$ \sum_{n=1}^{\infty}f(n) < \infty \iff \int_1^{\infty}\!f(x)\,dx <\infty $$ (provided $f$ is defined and decreasing on $[1,\infty)$)
Combinatorial interpretation of $\prod_{i\geq 1} (1-q^i)$ in terms of partitions
It is the difference between the number of partitions into distinct parts with an even number of parts and those with an odd number of parts. Try it out up to $q^3$: $(1-q)(1-q^2)(1-q^3) = (1 -q -q^2 +q^3) - (q^3 -q^4 -q^5 +q^6)= 1-q -q^2 +q^4 +q^5-q^6$. The terms up to order $3$ are thus as in the full product. You see that there is no $q^3$: as you have $1+2$ and $3$ as partitions into distinct parts, the difference of even and odd is $0$.
Have any definitions in mathematics been redefined?
Throughout the history of math, terms have been redefined because the concepts are useful in more general settings than the ones where they were originally conceived:
1) prime (simple definition in the integers, but the idea of a prime has to be recast to work well in other rings, where a distinction occurs between what is now called "prime" and "irreducible"),
2) algebraic number (originally defined as a certain type of complex number, but when the importance of $p$-adic fields on the same footing as $\mathbf R$ and $\mathbf C$ became clearer, the term was applied more broadly),
3) group (originally it was a group of permutations, so in the finite case the existence of inverses was not even an axiom),
4) algebraic variety (originally defined over the complex numbers as a certain subset of affine or projective space over $\mathbf C$ before being generalized by Weil, Zariski, and finally Grothendieck). The term "elliptic curve" has undergone a similar change in its definition.
5) tensor product (first for finite-dimensional real or complex vector spaces, using bases, then more generally for modules that don't necessarily have a basis)
Some controversies over definitions continue to this day, e.g., whether or not a commutative ring should contain a multiplicative identity (see http://en.wikipedia.org/wiki/Ring_%28mathematics%29#History).
When does $f(x^n) \mid (f(x))^n$? And related questions.
Since $f(x^n)$ and $f(x)^n$ have the same degree, $f(x^n)\mid f(x)^n$ means $a_0^{n-1} f(x^n)= f(x)^n$ (comparing leading coefficients, where $a_0$ is the leading coefficient of $f$), so $f(x)$ can only be of the form $ax^k$.
Proving a possible corollary of the monotone convergence theorem
If you are allowed to use Fatou's lemma$^{(1)}$, then $$\int f = \int \liminf f_n \leq \liminf \int f_n \leq \int f,$$ where the middle inequality is Fatou's lemma. This implies that $\liminf \int f_n=\int f$. It is obvious that $\limsup \int f_n \leq \int f$. Since the $\limsup$ is bigger than the $\liminf$, it is also equal to $\int f$, and it follows that $\lim \int f_n=\int f.$ $^{(1)}$ Using Fatou's lemma is not a big deal, since it is just the monotone convergence theorem applied to $g_i:=\inf\{f_i,f_{i+1},\cdots\}$.
Question regarding projective coordinate transformation
$V$ is certainly changing in the sense that after applying a coordinate transformation $A$ to $\mathbb{P}^n$ you will find that $A(V)$ is not (in general) the zero locus of the same homogeneous polynomials $F_1,\ldots,F_m$ that $V$ was, but the author is saying that there still exist other homogeneous polynomials $G_1,\ldots,G_m$ that cut out $A(V)$ as their zero locus. So $A(V)$ is still a projective variety - i.e. the concept of being a projective variety is invariant under coordinate transformations. Indeed, you can check that letting $G_i:=F_i\circ A^{-1}$ suffices.
Does every continuous time minimal Markov chain have the Feller property?
Not every continuous time minimal Markov chain has the Feller property. See a counterexample.
What is $\int_{2}^{2}\frac{dx}{x-2}$?
The integral $\int_a^bf(x)\,dx$ is defined when $[a,b]$ is included in the domain of definition of $f$, and if so it is $=0$ when $a=b$. When $(a,b)$ is included in the domain of definition, but $[a,b]$ is not, the integral $\int_a^bf(x)\,dx$ is defined as a limit. In your case, I'd say that the symbol is not defined since there's no open interval over which you can take the limit. More or less like trying to define $\int_{-2}^{-1}\log(x)\,dx$.
Sobolev norm in the definition of Sobolev spaces
Shouldn't also the Sobolev norm exist (be finite)? Yes, but this condition is automatically satisfied because of the way that the space was defined. In fact, by definition $$L_2(\Omega)=\left \{f:\Omega\to\mathbb{R};\;f\text{ is measurable and }\int_\Omega |f(x)|^2 \ dx<\infty\right\}.$$ So, "$u \in L_2(\Omega)$" implies $$\int_\Omega |u(x)|^2 \ dx<\infty$$ and "$\partial^\alpha u \in L_2(\Omega)$ for all $|\alpha|\leq k$" implies $$\int_\Omega |\partial^\alpha u(x)|^2 \ dx<\infty,\quad \forall\ |\alpha|\leq k.$$ Thus, your definition implies that $\|u\|_{H^k(\Omega)}$ is finite (that is, the "existence" (finiteness) of the Sobolev norm is implicitly stated). Remark: When you said "and so its weak derivative", you were using the following property: If $u$ possesses a classical derivative $u'$, then $u$ possesses a weak derivative and thus $u\in H^1(0,1)$. Moreover, $u'$ coincides with the weak derivative of $u$. However, by definition of $H^1(0,1)$, the weak derivative of a function in $H^1(0,1)$ has to be a function in $L_2(0,1)$, and thus the above property is false because sometimes $u'$ exists but doesn't belong to $L_2(0,1)$ (that is, doesn't satisfy $\int_0^1 |u'(x)|^2 \ dx<\infty$; an example is given by your function $u(x)=x^{-1/4}$). The valid property is: If $u \in C^1(0,1)\cap L_2(0,1)$ and if $u'\in L_2(0,1)$, then $u\in H^1(0,1)$. Moreover, $u'$ coincides with the weak derivative of $u$.
Schwarz inequality for unital completely positive maps
Note that Kadison-Schwarz is stated for positive (not necessarily cp) maps; that's why the restriction to normals is required (the C$^*$-algebra generated by a normal is abelian, and then any positive map is completely positive). Also, for Schwarz inequality all that is required is 2-positivity (which of course is implied by complete positivity). This is the proof of the inequality as in Paulsen's book: We have $$ \begin{bmatrix}1&a \\ a^*& a^*a\end{bmatrix}=\begin{bmatrix}1&a\\ 0&0\end{bmatrix}^*\begin{bmatrix}1&a\\ 0&0\end{bmatrix}\geq0. $$ Then $$ 0\leq\delta^{(2)}\left(\begin{bmatrix}1&a \\ a^*& a^*a\end{bmatrix}\right) =\begin{bmatrix}1&\delta(a) \\ \delta(a)^*& \delta(a^*a)\end{bmatrix} $$ Applying this in particular to the vector $\begin{bmatrix}-\delta(a)\eta\\ \eta\end{bmatrix}$, we get $$ 0\leq\left\langle\begin{bmatrix}1&\delta(a) \\ \delta(a)^*& \delta(a^*a)\end{bmatrix}\begin{bmatrix}-\delta(a)\eta\\ \eta\end{bmatrix},\begin{bmatrix}-\delta(a)\eta\\ \eta\end{bmatrix}\right\rangle=\langle(\delta(a^*a)-\delta(a)^*\delta(a))\eta,\eta\rangle. $$ As the vector $\eta$ can be chosen arbitrarily, we conclude that $\delta(a^*a)-\delta(a)^*\delta(a)\geq0$.
primes congruent to $\pm1 \pmod 6$
The only possibilities are $$x\equiv 0,1,2,3,4,5\bmod 6.$$ If $x \equiv 0 \bmod 6$ then $x$ is divisible by $6$ and so not prime. If $x \equiv 2 \bmod 6$ or $x \equiv 4 \bmod 6$ then $x$ is divisible by $2$ and so not prime (unless $x=2$). If $x \equiv 3 \bmod 6$ then $x$ is divisible by $3$ and so not prime (unless $x=3$). So we are only left with $x \equiv 1 \bmod 6$ or $x \equiv 5 \equiv -1 \bmod 6$ if $x>3$ is prime.
basis for a subspace topology
So we know that $B$ is a basis for the topology on $X$. To show that $\{B' \cap Y : B' \in B\}$ is a basis for $Y$, we need to show that every open set $U \subset Y$ can be written as a union of sets of the form $B' \cap Y$. Let $U$ be an open set in $Y$. Then, by the definition of the subspace topology on $Y$, there is an open set $V$ in $X$ with $U = V \cap Y$. Because $B$ is a basis for $X$, $V$ can be written as a union of sets from $B$, i.e. $$V = \bigcup_{\alpha \in A} B_\alpha$$ where $A$ is some indexing set and $B_\alpha \in B$ for every $\alpha$. Now, just take the intersection with $Y$: $$ U = V \cap Y = \bigg(\bigcup_{\alpha \in A} B_\alpha\bigg) \cap Y = \bigcup_{\alpha \in A} (B_\alpha \cap Y) $$ That is, $U$ has been written as a union of elements of the form $B_\alpha \cap Y$, where $B_\alpha \in B$. Hence, because $U$ was an arbitrary open set, the sets $B' \cap Y$ form a basis for the subspace topology on $Y$.
uniqueness of split idempotent
There is only one distinguished morphism $b \to c$, namely $qi$, and the same for $c \to b$, namely $pj$. One checks without effort that they are inverse to each other. For example, we have $qi \, pj = qfj = qjqj=1$.
Topology: one-point compactification and metrizability
HINT: Every metrizable space is first countable.
Splitting Integral of Brownian motion over stopping times
You have a little mistake in your second equality. By bringing in the expectation with respect to the conditional probability $\Bbb P (\cdot \; | \tau_b < \tau_c)$ you have forgotten to multiply by the inverse of the reweighting factor $1/ \Bbb P ( \tau_b < \tau_c)$, namely by $ \Bbb P ( \tau_b < \tau_c)$. My calculation goes as follows. Note that $\tau_{[a,c]} = \tau_a \wedge \tau_c := \min (\tau_a ,\tau_c)$. We have $$\Bbb E^x[ \int_0^{\tau_{[a,c]}} f(B_s) \text d s ] = \Bbb E^x[ 1_{\{\tau_b < \tau_c\}}\int_0^{\tau_{[a,c]}} f(B_s) \text d s ] + \Bbb E^x[ 1_{\{\tau_b > \tau_c\}}\int_0^{\tau_{[a,c]}} f(B_s) \text d s ]$$ The case $\tau_b < \tau_c$ implies that $\tau_b \leq \tau_c \wedge \tau_a $. Thus by splitting the Riemann integral the above equals $$\Bbb E^x[ 1_{\{\tau_b < \tau_c\}}\int_0^{\tau_b} f(B_s) \text d s ]+ \Bbb E^x[ 1_{\{\tau_b < \tau_c\}}\int_{\tau_b}^{\tau_c \wedge \tau_a} f(B_s) \text d s ] +\Bbb E^x[ 1_{\{\tau_b > \tau_c\}}\int_0^{\tau_{[a,c]}} f(B_s) \text d s ]$$ Treating the middle term by using the (strong) Markov property and using that $B_{\tau_b} = b$ we get $$\Bbb E^x[ 1_{\{\tau_b < \tau_c\}}\int_{\tau_b}^{\tau_c \wedge \tau_a} f(B_s) \text d s ] = \Bbb E^x[ 1_{\{\tau_b < \tau_c\}} \Bbb E^x[\int_{\tau_b}^{\tau_c \wedge \tau_a} f(B_s) \text d s | \mathcal F_{\tau_b}] ] \\ =\Bbb E^x[ 1_{\{\tau_b < \tau_c\}} \Bbb E^{B_{\tau_b}}[\int_{0}^{\tau_c \wedge \tau_a} f(B_s) \text d s ] ] = \Bbb E^x[ 1_{\{\tau_b < \tau_c\}} \Bbb E^{b}[\int_{0}^{\tau_c \wedge \tau_a} f(B_s) \text d s ] ]\\ = \Bbb P^x (\tau_b < \tau_c) \Bbb E^{b}[\int_{0}^{\tau_c \wedge \tau_a} f(B_s) \text d s ] $$ Now we observe that the case $\tau_b > \tau_c$ implies $\tau_a\wedge \tau_c = \tau_c = \tau_b \wedge\tau_c$ and the case $\tau_b < \tau_c$ implies $\tau_b = \tau_b \wedge \tau_c$. Putting this together yields $$\Bbb E^x[ \int_0^{\tau_{[a,c]}} f(B_s) \text d s ] \\ = \Bbb P^x (\tau_b < \tau_c) \Bbb E^{b}[\int_{0}^{\tau_c \wedge \tau_a} f(B_s) \text d s ] + \Bbb E^x[ 1_{\{\tau_b < \tau_c\}}\int_0^{\tau_b\wedge \tau_c} f(B_s) \text d s ] + \Bbb E^x[ 1_{\{\tau_b > \tau_c\}}\int_0^{\tau_b\wedge \tau_c} f(B_s) \text d s ] \\ = \Bbb P^x (\tau_b < \tau_c) \Bbb E^{b}[\int_{0}^{\tau_{[a,c]}} f(B_s) \text d s ] + \Bbb E^x[ \int_0^{\tau_{[b,c]}} f(B_s) \text d s ]$$
$\mathbb{Z}^2 \ast \mathbb{Z}^2$ is isomorphic to no finite index proper subgroup of itself
Here's how you can proceed once you've applied the Kurosh subgroup theorem. Starting from the expression $H = (*_i H_i) * F$ for your subgroup $H \le G$, break the first term into $$(*_{i \in I_0} H_i) * (*_{i \in I_1} H_i) * (*_{i \in I_2} H_i) * F $$ where if $i \in I_k$ then $H_i$ is a free abelian group of rank $k$ ($k = 0,1,2$). We can then collect terms of the free product and rewrite this as $$(*_{i \in I_2} H_i) * \underbrace{\left(F * (*_{i \in I_1} F_i) \right)}_{\text{free of rank $R_1 = \text{rank}(F) + \left| I_1 \right|$}} $$ So what you have is a free product of $R_2 = \left| I_2 \right|$ rank 2 free abelian groups and a free group of rank $R_1$. Using Grushko's Theorem, you can prove that such a group is determined, up to isomorphism, by the ordered pair $(R_1,R_2)$. So, your subgroup $H$ is isomorphic to the whole group $G$ if and only if $\left|I_2\right|=2$ and $\text{rank}(F) = \left|I_1\right|=0$. Since your subgroup $H$ has finite index in $G$, the quotient graph of groups $T/H$ is a finite tree. Since a vertex of $T/H$ labelled with the trivial group would have to have countably infinite valence, there can be no such vertices. And there are no vertices labelled with a rank $1$ abelian group. All remaining vertices of $T/H$ are therefore labelled with rank $2$ abelian groups. Since there are only two such vertices, the graph of groups $T/H$ is forced to be of exactly the same type as $T/G$: two $\mathbb Z^2$ vertices and one edge labelled by the trivial group. It follows that any single edge $E$ of the original Bass-Serre tree is simultaneously a fundamental domain for the whole group $G$ and for its subgroup $H$. Since $G$ acts freely on edges, it follows that $H=G$.
Arithmetic progression topology and continuous functions
Hint: Suppose $k>0$. If $a$ is odd, $f^{-1}(B(a,k))$ is empty. If $a$ is even, $f^{-1}(B(a,k))=B({a\over 2},k-1)$. If $k=0$, $B(a,k)=\mathbb{Z}$ and $f^{-1}(B(a,k))=\mathbb{Z}$.
Prove triangles with same perimeter and point of tangency of excircle and nine-point circle.
Well, for part A) the only thing you need to prove is that $AB+BX+AX=AC+CX+AX.$ If $B_a$ and $C_a$ are the points where the excircle touches $AC$ and $AB,$ then $AB_a=AC_a$ and $CB_a=CX,$ $BC_a=BX.$ Now $AB_a=AC+CB_a=AC+CX=AC_a=AB+BC_a=AB+BX$ and the result follows. Now the point $X$ does not belong to the nine-point (Euler) circle. Indeed, the Euler circle passes through the midpoint of the side $BC$ as well as through the projection of $A$ on the side $BC.$ Clearly, neither of these two points coincides with $X$ in general, and a circle cannot have more than two points of intersection with a straight line.
Query regarding phase of analytic signal.
As stated in comments, applying atan after angle makes no sense, since the angle command already outputs the phase in radians. It would be appropriate to have either ph = angle(sig_a); or ph = unwrap(angle(sig_a)); Unwrap is a useful command that eliminates unnecessary breaks in the phase plot by picking a continuous branch of the argument.
Link of a power series by the Bernoullis for a Riccati equation to zonotopes?
$$z= 1-x^4/(3 \cdot 4)+x^8/(3 \cdot 4 \cdot 7 \cdot 8) - x^{12}/(3 \cdot 4 \cdot 7 \cdot 8 \cdot 11 \cdot 12) + .... $$ The calculation below leads to the relation to the Bessel function: $$z=\Gamma\left(\frac{3}{4}\right) \sqrt{\frac{x}{2}}\;J_{-1/4}\left(\frac{x^2}{2}\right)$$ NOTE: It is easy to solve $$\frac{d^2 z}{dx^2}=-x^2 z$$ which is an ODE of the Bessel kind. $$z=c_1\sqrt{x}\;J_{1/4}\left(\frac{x^2}{2}\right)+c_2\sqrt{x}\;J_{-1/4}\left(\frac{x^2}{2}\right)$$ This confirms that the infinite series considered above is a particular solution of the ODE. OTHER EXAMPLES (with the same method): $$1+\frac{x^4}{3 \cdot 4}+\frac{x^8}{3 \cdot 4 \cdot 7 \cdot 8} + \frac{x^{12}}{3 \cdot 4 \cdot 7 \cdot 8 \cdot 11 \cdot 12} + .... = \Gamma\left(\frac{3}{4}\right)\sqrt{\frac{x}{2}}\;I_{-1/4}\left(\frac{x^2}{2}\right)$$ $I_\nu(X)$ is the modified Bessel function of the first kind. $$1-\frac{x^4}{4 \cdot 5}+\frac{x^8}{4 \cdot 5 \cdot 8 \cdot 9} - \frac{x^{12}}{4 \cdot 5 \cdot 8 \cdot 9 \cdot 12 \cdot 13} + .... = \Gamma\left(\frac{5}{4}\right)\sqrt{\frac{2}{x}}\;J_{1/4}\left(\frac{x^2}{2}\right)$$ $$1+\frac{x^4}{4 \cdot 5}+\frac{x^8}{4 \cdot 5 \cdot 8 \cdot 9} + \frac{x^{12}}{4 \cdot 5 \cdot 8 \cdot 9 \cdot 12 \cdot 13} + .... = \Gamma\left(\frac{5}{4}\right)\sqrt{\frac{2}{x}}\;I_{1/4}\left(\frac{x^2}{2}\right)$$
How to find the intersection coordinate with circle and line equation?
Well, why not bash here? Solve the equations in terms of $x,$ as follows: Given: $$(x - a)^{2} + (y - b)^{2} = r^{2}$$ $$y = m\left(x - x_{1}\right) + y_{1}$$ With substitution: $$(x - a)^{2} + \left(m\left(x - x_{1}\right) + \left(y_{1} - b\right)\right)^{2} = r^{2}$$ $$\left(x^{2} - 2ax + a^{2}\right) + m^{2}\left(x^{2} - 2x_{1}x + x_{1}^{2}\right) + 2m\left(x - x_{1}\right)\left(y_{1} - b\right) + \left(y_{1}^{2} - 2by_{1} + b^{2}\right) = r^{2}.$$ I leave you to use the quadratic equation on $x$ and finish the problem. Note that sometimes, there will be no solution.
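If you want to automate the bash, here is a Python sketch of the same substitution (names are mine; it applies the quadratic formula and returns the intersection points, if any):

```python
import numpy as np

def circle_line_intersection(a, b, r, m, x1, y1):
    """Solve (x - a)^2 + (m(x - x1) + y1 - b)^2 = r^2 for x, recover y."""
    d = y1 - b - m * x1          # the second factor is m*x + d
    A = 1 + m**2
    B = -2 * a + 2 * m * d
    C = a**2 + d**2 - r**2
    disc = B**2 - 4 * A * C
    if disc < 0:
        return []                # no intersection
    xs = [(-B + s * np.sqrt(disc)) / (2 * A) for s in (1, -1)]
    return [(x, m * (x - x1) + y1) for x in xs]

# Unit circle at the origin, line y = x: intersections at ±(1/√2, 1/√2)
print(circle_line_intersection(0, 0, 1, 1, 0, 0))
```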
Are there conditions for a sum or difference of linear maps between the same spaces to be an isomorphism?
This is not a complete solution, just some interesting notes. I will work over $\mathbb{R}$ or $\mathbb{C}$. If we have $||L^{-1}M||<1$, then we can create an inverse, denoting $L^{-1}M=X$, by $(\sum_i X^i)L^{-1}$. Moreover, if $||M^{-1}L||<1$, the same argument applies. In particular, this works if we have $||M||<||L^{-1}||^{-1}$ or $||L||<||M^{-1}||^{-1}$. Sadly we can have $||M^{-1}||>||M||^{-1}$, so we cannot go much further with this route. Note that it is clear that an inverse exists if and only if $\operatorname{char}(L^{-1}M)(1)\neq 0$, so that failure is actually pretty rare; namely, there are only finitely many values of $a$ such that $L-aM$ is not invertible. Going a different direction, we note that $I-L$ is invertible if and only if $A(I-L)A^{-1}=I-ALA^{-1}$ is, so taking $L=\operatorname{diag}(1, \gamma_1, \dots, \gamma_n)$, $\gamma_1\neq 0$, we get a pretty large set of matrices without the condition.
Can fractions be written as over 1?
Of course! Every real number, when divided by one, equals itself. This is true regardless of how the real number is expressed, whether it be a fraction, a mixed number, or any other representation.
Fourier transform of $f (x)=(1-|x|)\,\mathbf 1_{|x| < 1}$
There was an error in the OP regarding the calculation of $1-|x|$. Instead note that we have for $-1\le x\le 1$ $$1-|x|=\begin{cases}1+x&,-1\le x<0\\1-x&,0\le x\le 1\end{cases}$$ And inasmuch as $1-|x|$ is an even function about $x=0$, we can write $$\hat f(\omega)=2\int_0^1 (1-x)\cos(\omega x)\,dx=\left(\frac{2\sin(\omega/2)}{\omega}\right)^2$$ where we used $1-\cos(\omega)=2\sin^2(\omega/2)$.
Let $G=(\Bbb Z/12\Bbb Z)^\times$ and $S=(\Bbb Z/13\Bbb Z)^\times$. Show that $\alpha$ is a group action of $G$ on $S$.
Part 1 $1$ is an element of $G$ because it is coprime to $12$. Indeed, its inclusion is essential since it is the identity element. Part 2 The group action on $S$ (if I've interpreted your symbols correctly) is that the element $a$ of $G$ maps $s\in S$ to $s^a$. You should be able to check the conditions that you have been given for a group action (identity and compatibility) but ask if you get stuck. However, it might be worth knowing that the reason this works is because, by Fermat's little theorem, $s^{12}\equiv 1$ modulo $13$.
Calculating a limit as $n$ approaches infinity with infinitely many terms
Note that $$\displaystyle\lim_{n \to \infty} {\frac{1}{n+1}+\frac{1}{n+2}+...+\frac{1}{n+n}}=\displaystyle\lim_{n \to \infty} \frac{1}{n}{\sum_{k=1}^n\frac{1}{1+\frac{k}{n}}}$$ This is a Riemann sum for $f(x)=\frac{1}{1+x}$ over the interval $[0,1]$, with the standard partition $x_k=\frac{k}{n}$ for $1 \leq k \leq n$ and the right-hand endpoints of the subintervals as intermediate points. Therefore $$\displaystyle\lim_{n \to \infty} {\frac{1}{n+1}+\frac{1}{n+2}+...+\frac{1}{n+n}}=\int_0^1 \frac{1}{1+x}dx =\ln(1+x)|_0^1=\ln(2)$$ P.S. One can reach the same conclusion by using the well known identity $$\frac{1}{n+1}+\frac{1}{n+2}+...+\frac{1}{n+n}=\frac{1}{1}-\frac{1}{2}+\frac{1}{3}+...-\frac{1}{2n}$$ and the standard definition of the Euler–Mascheroni constant, but this approach is typically beyond calculus. Just for fun, here is how you get the limit with the E–M constant $$\frac{1}{n+1}+\frac{1}{n+2}+...+\frac{1}{n+n}=\frac{1}{1}-\frac{1}{2}+\frac{1}{3}+...-\frac{1}{2n} \\ =\frac{1}{1}+\frac{1}{2}+\frac{1}{3}+...+\frac{1}{2n}-2 (\frac{1}{2}+\frac{1}{4}+...+\frac{1}{2n})\\ =\frac{1}{1}+\frac{1}{2}+\frac{1}{3}+...+\frac{1}{2n}- (\frac{1}{1}+\frac{1}{2}+...+\frac{1}{n})\\ =\frac{1}{1}+\frac{1}{2}+\frac{1}{3}+...+\frac{1}{2n}- \ln(2n)- (\frac{1}{1}+\frac{1}{2}+...+\frac{1}{n} -\ln(n))+\ln(2) $$ This converges to $\gamma-\gamma+\ln(2)$.
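A quick numerical check in Python (illustrative) that the partial sums indeed approach $\ln 2\approx 0.6931$:

```python
import math

for n in (10, 100, 10_000):
    s = sum(1 / (n + k) for k in range(1, n + 1))
    print(n, s, math.log(2))
```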
$\left\{x-\frac{1}{2}\right\}$ uniformly continuous in $(-\frac {1}{2},+\frac {1}{2})$ and in $[-\frac {1}{2},+\frac {1}{2}]$
In both cases, you are extending a continuous $f$ defined on some open interval $(a, b)$ to a continuous function $\tilde{f}$ defined on $[a, b]$. This is possible if the limits $\lim_{x\to b^-}f(x)$ and $\lim_{x\to a^+}f(x)$ exist. Once you have a continuous function on a compact set, in this case $[a, b]$, you can show that it is uniformly continuous. The restriction of a uniformly continuous function to subsets is also going to be uniformly continuous. However, the function $\{x-1/2\}$ is not continuous on $[-1/2, 1/2]$, therefore not uniformly continuous.
Why is $\sqrt{3}=[1;1,2,1,2,\dots]$?
But then $\sqrt{3} = 1+1/x_1$. That's because $$\frac{1}{x_1}=\frac{2}{1+\sqrt 3} = \frac{2(\sqrt{3}-1)}{2} = \sqrt{3}-1$$ The other approach is to define: $$y = 1+ \dfrac{1}{1+\dfrac1{1+y}}$$ Then you get $y=1+\frac{y+1}{y+2}=\frac{2y+3}{y+2}$ which reduces to $y^2=3$.
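For a numerical illustration, the convergents of $[1;1,2,1,2,\dots]$ can be computed in Python and compared with $\sqrt 3$ (script mine):

```python
from fractions import Fraction
import math

def convergent(partial_quotients):
    """Evaluate a finite continued fraction from the bottom up."""
    value = Fraction(partial_quotients[-1])
    for a in reversed(partial_quotients[:-1]):
        value = a + 1 / value
    return value

pq = [1] + [1, 2] * 6   # [1; 1, 2, 1, 2, ...]
print(float(convergent(pq)), math.sqrt(3))  # both approximately 1.7320508
```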
Find $a$ and $b$ so that $2x+y-3z = 0$ and $\frac{x}{a}=\frac{y}{b}=\frac{z}{3}$ are orthogonal
A line in $\mathbb R^3$ is of the form $r=a+tb$ where $r,a,b\in \mathbb R^3,t\in \mathbb R$. If $r=(x,y,z),a=(a_1,a_2,a_3),b=(b_1,b_2,b_3)$ then the equation is written in the form $\frac {x-a_1}{b_1}=\frac {y-a_2}{b_2}=\frac {z-a_3}{b_3}$. Now the vector $(2,1,-3)$ is perpendicular to your plane, and so you want $b$ to be parallel to $(2,1,-3)$ in order to be perpendicular to your plane. Here you have $b_3=3$ and thus a good choice is $-(2,1,-3)=(-2,-1,3)$. This means that $b_1=-2,b_2=-1,b_3=3$.
Simplify formula $e^{-i0.5t}+e^{i0.5t}$
Use $e^{ia} = \cos a + i\sin a$, and find that the answer is slightly different from what you wrote.
Expressing a point in two coordinate systems
You have equations expressing the $e_i'$ in terms of the $e_i$; you need the inverse equations expressing the $e_i$ in terms of the $e_i'$. So express your equations as a matrix $$\left( \begin{array}{c} e_1'\\ e_2'\\ e_3' \end{array}\right)=\left( \begin{array}{ccc} 1 &-1&3\\ 1&1&1\\ 1&-1&-1 \end{array} \right)\left( \begin{array}{c} e_1\\ e_2\\ e_3 \end{array} \right)$$ Taking the inverse matrix you get $$\left( \begin{array}{c} e_1\\ e_2\\ e_3 \end{array}\right)=\frac{1}{4}\left( \begin{array}{ccc} 0 &2&2\\ -1&2&-1\\ 1&0&-1 \end{array} \right)\left( \begin{array}{c} e_1'\\ e_2'\\ e_3' \end{array} \right)$$ which gives you expressions of the $e_i$ in terms of the $e_i'$. Now, part a): you have that the coordinates of A in $(O,e_1,e_2,e_3)$ are $(2,3,4)$, hence $$A=O+2e_1+3e_2+4e_3$$ $$=O'+\overline{O'O}+2e_1+3e_2+4e_3$$ $$=O'-\overline{OO'}+2e_1+3e_2+4e_3$$ $$=O'-(2e_1-e_2+e_3)+2e_1+3e_2+4e_3$$ $$=O'+4e_2+3e_3$$ Now replace $e_2$ and $e_3$ with their expressions in terms of the $e_i'$, giving $$A=O'-\frac{1}{4}e_1'+2e_2'-\frac{7}{4}e_3'$$ so $A$ has coordinates $(-\frac{1}{4},2,-\frac{7}{4})$ in the coordinate system $(O',e_1',e_2',e_3')$. Similarly for part b).
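To double-check the inversion and the final coordinates numerically, here is a small numpy sketch (illustrative):

```python
import numpy as np

# rows: e1', e2', e3' expressed in the basis e1, e2, e3
M = np.array([[1, -1, 3],
              [1, 1, 1],
              [1, -1, -1]], dtype=float)
N = np.linalg.inv(M)
print(4 * N)  # matches the inverse matrix above (times 4)

# A - O' = 0*e1 + 4*e2 + 3*e3; its coordinates in the primed basis:
c = np.array([0.0, 4.0, 3.0])
print(c @ N)  # [-0.25  2.   -1.75], i.e. (-1/4, 2, -7/4)
```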
Understanding the setup for the probability that $Ax^2+Bx+C$ has real roots if A, B, and C are random variables uniformly distributed over (0,1).
To resolve $\min\{1,\sqrt{4ac}\}$, we need to figure out which of the two arguments is smaller. If it's $1$, the integral is $0$ and thus doesn't contribute. For it to be $\sqrt{4ac}$, we need to have $\sqrt{4ac}\le1$, and thus $c\le1/(4a)$. This is the answer to your question where $1/(4a)$ comes from. I'm not sure how to answer your question why the $\min$ doesn't go to the front integral. My counter-question would be how you'd propose to move it to the front integral such that the result is equivalent. Unless you can come up with such a proposal, I'd suggest to concentrate on why the form written here is equivalent to the one in the previous step. Generally speaking, it makes sense that resolving a minimum in a limit of the innermost integral affects the limits of the immediately enclosing integral and not of a more remote enclosing integral. The breaking up into two integrals is again the direct result of resolving the minimum in the upper limit of the second integral. That minimum is $1$ if $1\le1/(4a)$, i.e. if $a\le\frac14$, and is $1/(4a)$ otherwise; thus we have to split the outer integral over $a$ into two parts according as $a\lessgtr\frac14$.
Solve recurrence relation!
We look at the generating function $\frac{1+x}{1-x-xy-x^2y}$ and derive the coefficients $A(n,k)$. It is convenient to use the coefficient-of operator $[x^n]$ to denote the coefficient of $x^n$ of a series. We obtain \begin{align*} \color{blue}{A(n,k)}&=[x^ny^k]\frac{1+x}{1-x-xy-x^2y}\\ &=[x^ny^k]\frac{1+x}{1-x-x(1+x)y}\\ &=[x^ny^k]\frac{1+x}{1-x}\cdot\frac{1}{1-\frac{x(1+x)}{1-x}y}\\ &=[x^n]\frac{1+x}{1-x}[y^k]\sum_{j=0}^\infty\left(\frac{x(1+x)}{1-x}\right)^jy^j\tag{1}\\ &=[x^n]\frac{1+x}{1-x}\left(\frac{x(1+x)}{1-x}\right)^k\tag{2}\\ &=[x^{n-k}]\frac{(1+x)^{k+1}}{(1-x)^{k+1}}\tag{3}\\ &=[x^{n-k}](1+x)^{k+1}\sum_{j=0}^\infty\binom{-k-1}{j}(-x)^j\tag{4}\\ &=[x^{n-k}](1+x)^{k+1}\sum_{j=0}^\infty\binom{k+j}{j}x^j\tag{5}\\ &=\sum_{j=0}^{n-k}\binom{k+j}{j}[x^{n-k-j}](1+x)^{k+1}\tag{6}\\ &=\sum_{j=0}^{n-k}\binom{k+j}{j}\binom{k+1}{n-k-j}\tag{7}\\ &\,\,\color{blue}{=\sum_{j=0}^{n-k}\binom{n-j}{k}\binom{k+1}{j}}\tag{8} \end{align*} Comment: In (1) we do a geometric series expansion with respect to $y$. In (2) we select the coefficient of $y^k$. In (3) we do some simplifications and apply the rule $[x^p]x^qA(x)=[x^{p-q}]A(x)$. In (4) we do a binomial series expansion. In (5) we apply the binomial identity $\binom{-p}{q}=\binom{p+q-1}{q}(-1)^q$. In (6) we select the coefficient of $x^{n-k}$ and restrict the upper bound of the sum by $n-k$ since other terms do not contribute to $[x^{n-k}]$. In (7) we select the coefficient of $x^{n-k-j}$. In (8) we do a final rearrangement by changing the order of summation $j\to n-k-j$. We use the formula (8) to check the boundary conditions. We obtain \begin{align*} A(n,0)&=\sum_{j=0}^n\binom{n-j}{0}\binom{1}{j}=\sum_{j=0}^n\binom{1}{j}=\begin{cases} 1&\qquad\qquad\qquad\qquad n=0\\ 2&\qquad\qquad\qquad\qquad n>0 \end{cases}\\ A(n,1)&=\sum_{j=0}^{n-1}\binom{n-j}{1}\binom{2}{j}=\sum_{j=0}^{n-1}(n-j)\binom{2}{j}=\begin{cases} 1&\qquad n=1\\ 4(n-1)&\qquad n>1 \end{cases} \end{align*} We observe that the generating function $\frac{1+x}{1-x-xy-x^2y}$ does not follow the OP's stated boundary conditions $A(n,0)=1\ (n\geq 0)$ and $A(n,1)=2n\ (n\geq 1)$, and this is a reason for the difficulties the OP has to cope with. Note: We find with some help of Wolfram Alpha a series expansion \begin{align*} \frac{1+x}{1-x-xy-x^2y}&=1+(2+y)x+(2+4y+y^2)x^2+(2+8y+6y^2+y^3)x^3\\ &\qquad+(2+12y+18y^2+8y^3+y^4)x^4+\cdots \end{align*} The corresponding sequence of the coefficients $A(n,k)$ starting with \begin{align*} &1;\\ &\color{blue}{2},\color{blue}{1};\\ &\color{blue}{2},\color{blue}{4},1;\\ &\color{blue}{2},\color{blue}{8},6,1;\\ &\color{blue}{2},\color{blue}{12},18,8,1;\ldots \end{align*} can be found as A113413 in OEIS. We can find there the OP's stated generating function $\frac{1+x}{1-x-xy-x^2y}$ as well as the recurrence relation $A(n,k) = A(n-1, k-1)+A(n-2, k-1)+A(n-1, k)$, but we have different boundary conditions (marked above in $\color{blue}{\mathrm{blue}}$).
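As a sanity check, one can compare the closed form (8) against the recurrence with the generating function's boundary values in a few lines of Python (script mine, illustrative):

```python
from math import comb

def A_closed(n, k):
    # formula (8)
    return sum(comb(n - j, k) * comb(k + 1, j) for j in range(n - k + 1))

# Table from A(n,k) = A(n-1,k-1) + A(n-2,k-1) + A(n-1,k), seeded with the
# generating function's values A(0,0)=1, A(n,0)=2 for n>0, and A(n,n)=1:
N = 8
A = {(0, 0): 1}
for n in range(1, N + 1):
    for k in range(n + 1):
        if k == 0:
            A[n, k] = 2
        elif k == n:
            A[n, k] = 1
        else:
            A[n, k] = A[n - 1, k - 1] + A.get((n - 2, k - 1), 0) + A[n - 1, k]

print(all(A[n, k] == A_closed(n, k)
          for n in range(N + 1) for k in range(n + 1)))  # True
```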
Is the space of continuous functions on a compact set a complete space?
No, that is not correct: that sequence converges in $\mathcal{C}^{(0)}([a,b])$ to the constant function $1$, so it is no counterexample. Instead you can take$$f_n(x)=\begin{cases}0&\text{ if }x<\frac{a+b}2-\frac1{2n}\\n\left(x-\frac{a+b}2\right)+\frac12&\text{ if }\frac{a+b}2-\frac1{2n}\leqslant x\leqslant\frac{a+b}2+\frac1{2n}\\1&\text{ if }x>\frac{a+b}2+\frac1{2n}.\end{cases}$$With an integral norm such as $\|f\|_1=\int_a^b|f(x)|\,\mathrm dx$ (with the sup norm, in contrast, the space is complete), this is a Cauchy sequence in $\mathcal{C}^{(0)}([a,b])$, but it doesn't converge there: its pointwise limit is a discontinuous step function, so no continuous function can be its limit.
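For illustration, a hypothetical numerical check (taking $[a,b]=[0,1]$; none of this is from the original answer) shows the contrast between the two norms:

```python
# Hypothetical illustration: the ramps f_n are Cauchy in the integral norm
# but not in the sup norm, on [a,b] = [0,1].
import numpy as np

a, b = 0.0, 1.0
mid = (a + b) / 2

def f(n, x):
    # the piecewise-linear ramp defined above, written via clipping
    return np.clip(n * (x - mid) + 0.5, 0.0, 1.0)

xs = np.linspace(a, b, 200001)
for n in (10, 100, 1000):
    diff = np.abs(f(n, xs) - f(2 * n, xs))
    l1 = diff.mean() * (b - a)   # Riemann approximation of the L^1 distance
    print(f"n={n:4d}  ||f_n - f_2n||_1 ~ {l1:.1e}   sup ~ {diff.max():.2f}")
# the L^1 distance shrinks like 1/n, while the sup distance stays at 1/4
```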
What is the future of Set Theory if it is NOT the foundation of Mathematics?
Others have addressed the concrete question of "what do set theorists do?", so let me take a stab at the speculative "If HoTT is the foundation of all mathematics, will that pronounce death of Set Theory research?" I can imagine many possible futures in the foundations of mathematics, such as:

1. Set theory remains ascendant. In this case, there would probably not be much change in set theory research.
2. Some other foundation, such as HoTT, becomes dominant in the same way that set theory is now. This would take a long time to happen, but it's at least conceivable. In this case, set theory would be somewhat reduced in foundational importance, but set theory research as an independent subject would, I think, be largely unaffected. From a HoTT point of view, the set theory that "set theorists" do could be called "the study of classical well-foundedness", and it is an interesting subject regardless of its foundational importance or unimportance. Moreover, formulating this theory within another theory like HoTT might open up new ways to look at it and new directions of research.
3. Mathematics is freed from dependence on a single foundation. To some people this is the most attractive possibility, and it also seems more likely than the second possibility, at least in the short to medium term (it's hard to imagine set theorists, at least, giving up the view of set theory as foundational!). In this case, set theorists would probably continue to consider set theory as foundational, HoTT theorists would consider HoTT as foundational, and likewise for people working on any other foundational systems, while mathematicians not working in a subject closely related to foundations would probably remain mostly as indifferent to foundations as they are today. Set theory research would probably be almost entirely unaffected, except for the same possibility of new connections and directions mentioned above.

Of course, this is entirely speculation, and as we all know it is difficult to make predictions, especially about the future. Others may speculate differently than I.
Would this infinite product converge?
So I just looked at convergence of the reciprocal $g(x)=\prod_{n=1}^\infty\left(1+\frac{(-1)^n}{nx}\right)$. For $x\neq0$, write $a_n=\frac{(-1)^n}{nx}$. Since $\sum_{n=1}^\infty a_n^2=\frac{1}{x^2}\sum_{n=1}^\infty\frac{1}{n^2}<\infty$, the product $\prod(1+a_n)$ converges if and only if the sum $\sum a_n$ converges. Here $$\sum_{n=1}^\infty\frac{(-1)^n}{nx} = -\frac{1}{x}\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n} = -\frac{\ln 2}{x},$$ the alternating harmonic series, so the product converges for every $x\neq0$. (Note that $-\frac{\ln 2}{x}$ is the value of the sum, not of the product.) The factor $1+\frac{(-1)^n}{nx}$ vanishes exactly when $x=\frac{(-1)^{n+1}}{n}$, i.e. at $x=\frac1n$ for odd $n$ and $x=-\frac1n$ for even $n$; at those points $g=0$ and the original product (the reciprocal) is undefined.
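A quick numerical experiment (hypothetical, with $x=2$ chosen arbitrarily) is consistent with this:

```python
# Hypothetical check: partial products of g(x) = prod(1 + (-1)^n/(n x))
# appear to settle for x away from the zeros +-1/n.
def partial_product(x, N):
    p = 1.0
    for n in range(1, N + 1):
        p *= 1.0 + (-1) ** n / (n * x)
    return p

for N in (10**3, 10**4, 10**5):
    print(N, partial_product(2.0, N))
# successive values agree to more and more digits, consistent with convergence
```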
a simple measure theory question (from homework)
If $\mathcal F_t=\sigma(B_s;0\leqslant s\leqslant t)$, this is obviously false: take $X=2$ with full probability, then $M_t=B_{2t}$ is not measurable with respect to $\mathcal F_t$ (except when $t=0$).
Conditional expectation problem with minimum of two uniformly distributed random variables
You can directly compute the expectation of $Z$ as follows: \begin{align*} \mathbb E[\min(X,Y)]&=\int_{400}^{800}\int_{500}^{600}\min(x,y)\dfrac{\mathrm{d} x}{100}\dfrac{\mathrm{d} y}{400}\\ &=\dfrac{1}{40000}\left(\int_{400}^{500}\int_{500}^{600}\min(x,y) \ \mathrm{d}x\,\mathrm{d}y + \int_{500}^{600}\int_{500}^{600}\min(x,y) \ \mathrm{d}x\,\mathrm{d}y + \int_{600}^{800}\int_{500}^{600}\min(x,y) \ \mathrm{d}x\,\mathrm{d}y\right) \\ &=\dfrac{1}{40000}\left(\int_{400}^{500} 100y\ \mathrm{d}y +\int_{500}^{600}\int_{500}^{600}\min(x,y) \ \mathrm{d}x\,\mathrm{d}y+ \int_{600}^{800} \dfrac{600^2-500^2}{2}\ \mathrm{d}y\right) \\ &=\dfrac{1}{40000}\left(\dfrac{1}{2}\left(100(500^2-400^2)+200(600^2-500^2)\right)+\int_{500}^{600}\int_{500}^{600}\min(x,y) \ \mathrm{d}x\,\mathrm{d}y\right) \\ &=387.5+\dfrac{1}{40000}\int_{500}^{600}\int_{500}^{600}\min(x,y) \ \mathrm{d}x\,\mathrm{d}y\\ &=387.5+\dfrac{1}{40000}\int_{500}^{600}\left(\int_{500}^{y}\min(x,y) \ \mathrm{d}x + \int_y^{600} \min(x,y)\ \mathrm{d}x\right)\mathrm{d}y\\ &=387.5+\dfrac{1}{40000}\int_{500}^{600}\left(\int_{500}^{y}x \ \mathrm{d}x + \int_y^{600} y\ \mathrm{d}x\right)\mathrm{d}y\\ &=387.5+\dfrac{1}{40000}\int_{500}^{600}\left(y^2/2-125000 + (600-y) y\right)\mathrm{d}y\\ &=387.5+\dfrac{1}{40000}\cdot\dfrac{16000000}{3}\\ &=387.5+\dfrac{400}{3}\\ &=\dfrac{3125}{6}\approx 520.83. \end{align*} As far as $\mathbb E[\min(X,Y)\mid Y]$ is concerned, you can obtain it as the inner integral, evaluated at $Y=y$: \begin{align*} \mathbb E[\min(X,Y)\mid Y=y] &=\int_{500}^{600}\min(x,y)\dfrac{\mathrm{d} x}{100}\\ &=\dfrac{1}{100}\left(100y \, 1_{y \leq 500} + \left(\int_{500}^{600} \min(x,y) \ \mathrm{d}x\right) 1_{500\leq y\leq 600}+55000 \, 1_{y\geq 600} \right)\\ &=y \, 1_{y \leq 500} + 550 \, 1_{y\geq 600} + \dfrac{1}{100}\left(\int_{500}^{y}x \ \mathrm{d}x + \int_y^{600} y\ \mathrm{d}x\right)1_{500\leq y\leq 600}\\ &=y \, 1_{y \leq 500} + 550 \, 1_{y\geq 600} + \dfrac{1}{100}\left(y^2/2-125000 + (600-y) y\right)1_{500\leq y\leq 600}\\ &=y \, 1_{y \leq 500} + 550 \, 1_{y\geq 600} + \dfrac{1}{100}\left(-y^2/2+600y-125000\right)1_{500\leq y\leq 600}. \end{align*} Integrating this quantity against the density of $Y$ (that is, $\mathrm{d}y/400$ over $[400,800]$) gives back the previous result.
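A hypothetical Monte Carlo cross-check (the code is mine, assuming $X\sim\mathrm{Unif}(500,600)$ and $Y\sim\mathrm{Unif}(400,800)$ independent, as in the computation above):

```python
# Hypothetical Monte Carlo check of E[min(X, Y)].
import random

random.seed(0)
N = 10**6
total = 0.0
for _ in range(N):
    x = random.uniform(500, 600)
    y = random.uniform(400, 800)
    total += min(x, y)
print(total / N)   # ~520.8
print(3125 / 6)    # 520.833..., the exact value derived above
```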
Show that if $G$ is cyclic then so is $H$
If $f\colon G\to H$ is the given surjective homomorphism and $G=\langle g\rangle$, then $H=\langle f(g)\rangle$. Indeed, let $h\in H$; by surjectivity there exists $x=g^k\in G$ such that $h=f(x)=f(g^k)=f(g)^k$.
Unitary Operator on Hilbert space to show that the Fourier basis is a maximal Orthogonal Set
To show that $\{ e^{int} \}$ is a complete orthogonal subset of $L^2[0,2\pi]$, suppose that $(f,e^{int})=0$ for all $n\in\mathbb{Z}$. It must be shown that $f=0$ a.e. To prove this, define $$ E(\lambda)=\frac{1}{e^{-2\pi i\lambda}-1}\int_{0}^{2\pi}ie^{-i\lambda t}f(t)dt,\;\;\;\lambda\in\mathbb{C}. $$ Then $E$ extends to an entire function of $\lambda$ because it has only removable singularities at the integers. It is not too hard to show that $E(\lambda)$ is uniformly bounded on square contours $C_n$ formed by connecting these vertices in order $$ \frac{2n+1}{2}(-1-i),\;\frac{2n+1}{2}(1-i),\;\frac{2n+1}{2}(1+i),\;\frac{2n+1}{2}(-1+i). $$ Using such boundedness, a simple modification of the proof of Liouville's Theorem shows that $E$ must be a constant function. Hence, there is a constant $C$ such that $$ \int_{0}^{2\pi}f(t)e^{-i\lambda t}dt = C(e^{-2\pi i\lambda}-1). $$ Choosing $\lambda=-ir$ for $r > 0$, and letting $r\rightarrow\infty$ gives $0$ on the left and $-C$ on the right. Hence, $C=0$. Therefore the left side is identically $0$ in $\lambda$. All of the derivatives with respect to $\lambda$ must also be $0$. Therefore, $$ \int_{0}^{2\pi}f(t)t^{n}dt=0,\;\;\; n=0,1,2,3,\cdots. $$ Because the polynomials are dense in $L^2$, it follows that $f=0$ a.e. Hence, $\{ e^{int} \}_{n=-\infty}^{\infty}$ is a complete orthogonal subset of $L^2[0,2\pi]$.
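To spell out the removable-singularity claim (a small addition to the proof as written): the denominator $e^{-2\pi i\lambda}-1$ has a simple zero at each integer $\lambda=n$, and the numerator vanishes there as well, because by hypothesis
$$\int_{0}^{2\pi}ie^{-int}f(t)\,dt = i\,(f,e^{int}) = 0,$$
so each singularity of $E$ is removable.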
Applying the Cartesian Coordinate
This is an analytic proof: $$\int_0^4\int_0^{8-2x} \sqrt{(-1)^2+(-2)^2+1} \ dydx=\sqrt{6}\int_0^4 (8-2x)\, dx=16\sqrt6$$ This is a geometric proof: the surface is a triangle whose three vertices lie on the three axes. The three vertices have the following coordinates: $(4,0,0);(0,8,0);(0,0,8)$, which yields the following lengths for the sides: $$a=4\sqrt{5}\quad b=4\sqrt{5}\quad c=8\sqrt{2}$$ Applying Heron's formula yields:$$s=\frac{a+b+c}2=4\sqrt5+4\sqrt2$$ $$A^2=s(s-a)(s-b)(s-c)=(4\sqrt5+4\sqrt2)\times(4\sqrt2)^2\times(4\sqrt5-4\sqrt2)$$ $$A^2=16\times16\times(5-2)\times2$$ $$A=16\sqrt{6}$$
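As a third cross-check (not in the original answer), the area can also be computed as half the norm of a cross product of two edge vectors:

```python
# Hypothetical check: area = |AB x AC| / 2 for the triangle with
# vertices (4,0,0), (0,8,0), (0,0,8).
import numpy as np

A = np.array([4.0, 0.0, 0.0])
B = np.array([0.0, 8.0, 0.0])
C = np.array([0.0, 0.0, 8.0])
area = 0.5 * np.linalg.norm(np.cross(B - A, C - A))
print(area, 16 * np.sqrt(6))   # both ~39.1918
```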
Tangential polygons: Conditions on edge lengths?
This problem is discussed in Djukić, Janković, Matić, Petrović, The IMO Compendium, p. 561. If the edge lengths satisfy a certain necessary and sufficient condition (basically a sanity check), then (much as for the cyclic polygons described in the question) the corresponding polygon can be found by wrapping the sequence of edges tangentially along an arc of a large circle and then shrinking the circle until the first and last edges meet up.
Evaluating $\lim_{(x,y) \to(0,0)} x \cdot \ln{(x^2+2y^2)}$
Hint: If $x=0$ then our product is $0$. If $x\ne 0$ and $(x,y)$ is close enough to $(0,0)$, then $x^2+2y^2\lt 1$, and therefore $|\ln(x^2+2y^2)|\le |\ln(x^2)|$.
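A sketch of how the hint finishes (added here for completeness): the bound reduces the two-variable limit to a familiar one-variable limit,
$$|x\ln(x^2+2y^2)|\le |x|\,|\ln(x^2)| = 2\,\big|x\ln|x|\big| \xrightarrow[x\to0]{} 0,$$
so the limit is $0$ by the squeeze theorem.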
Percent less sum (sum - $\%$)
If I understand the question correctly, $$83{,}59 \times 1{,}19 = 99{,}4721.$$ In case you meant to say decrease, $$83{,}59 \times 0{,}81 = 67{,}7079.$$
What courses/branches of math directly build upon calc III? (Measure Theory, Lebesgue-Integral, Stokes, ...)
If you are interested in the “vector calculus” that you studied in $\mathbb{R}^n$, then the natural generalization is to ambient spaces that are not (fully) Euclidean, like manifolds. Then, together with the notions you’ll be studying in general topology, you will find all the calc 3 tools in differential topology. As a small reference to feel the continuity of these topics, check Vector Analysis by Klaus Jänich. From there one can take a more geometric approach, studying differential forms from a geometric viewpoint, including complex geometry with theories like Dolbeault cohomology. Alternatively, you can approach manifolds and differential forms from the analytic point of view, adding a particular “scalar product” (a Riemannian metric) on the manifold you are studying, defining on it all the differential operators of calc 3 (gradient, divergence, curl, Laplacian, ...) and studying PDEs on manifolds (like the heat equation on a torus with two holes, and so on). You can have REAL FUN in either direction!
Keeping an exponentially decaying system steady.
The limit is: $$\lim_{t\to0} \left( \frac{2^{\frac{t}{t_{1/2}}} X-X}{2^{\frac{t}{t_{1/2}}}} \cdot \frac{T}{t} \right) = \lim_{t\to0} \left( \left( X-X\left(\frac{1}{2}\right)^\frac{t}{t_{1/2}}\right) \cdot \frac{T}{t}\right) = XT \lim_{t\to0} \frac{1 - \left(\frac{1}{2}\right)^\frac{t}{t_{1/2}}}{t} = XT\lim_{t\to0} \frac{1 - e^\frac{t\ln(1/2)}{t_{1/2}}}{t}$$ Letting $k = \frac{\ln\frac{1}{2}}{t_{1/2}}$ and using the fact that $\lim_{n\to0} (1 + n)^{\frac{1}{n}} = e$, our expression becomes $$XT\lim_{t\to0} \frac{1 - e^{kt}}{t} = XT\lim_{t\to0} \frac{1 - (1+t)^{\frac{1}{t} \cdot kt}}{t} = XT\lim_{t\to0} \frac{1 - (1+t)^k}{t}$$ By L'Hôpital's rule (the limit of a $0/0$ quotient of differentiable functions equals the limit of the quotient of their derivatives): $$XT\lim_{t\to0} \frac{ - k(1+t)^{k-1}}{1} = -XTk = \frac{-XT \ln\frac{1}{2}}{t_{1/2}} = \frac{XT\ln 2}{t_{1/2}}$$
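As a side note (not part of the original answer), the detour through $(1+t)^{1/t}$ can be avoided by applying L'Hôpital's rule directly to the $0/0$ form:
$$\lim_{t\to0}\frac{1-e^{kt}}{t}=\lim_{t\to0}\frac{-ke^{kt}}{1}=-k.$$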
Complex logarithm and derivatives
(c) By definition, $$Log\,z:=\log|z|+i\arg z$$ where $\,\arg z\,$ is defined only up to an integer multiple of $\,2\pi\,$. Thus, taking into account what happens for the corresponding real functions, we have: $$e^{Log\,z}=e^{\log|z|+i\arg z}=e^{\log|z|}e^{i\arg z}=|z|e^{i\arg z}=z$$ since the next-to-last expression on the right is just the polar representation of the complex number $\,z\,$. From here, and knowing that $\,(e^z)'=e^z\,$, we get by the chain rule: $$e^{Log\,z}=z\Longrightarrow \left(e^{Log\,z}\right)'=(z)'\Longrightarrow (Log\,z)'e^{Log\,z}=1\Longrightarrow (Log\,z)'=\frac{1}{e^{Log\,z}}=\frac{1}{z}$$ (d) Take $\,z=0\,$: then $\,e^0=1\,$ and $\,\arg 1=2k\pi\,,\,\,k\in\Bbb Z\,$, so $$Log\,(e^0)=Log\,1=\log|1|+i\arg 1=2k\pi i\,$$ so the above value depends on the chosen branch for the logarithm, and thus the equality $\,Log\,(e^z)=z\,$ is not necessarily true.
Lebesgue distribution in probability
The probability that $$X_n(\omega)=\omega^n\leq x$$ is the Lebesgue measure of the set $$\{\omega \in [0,1] \mid \omega^n\leq x \}=\{\omega\in [0,1] \mid\omega\leq x^{\frac 1 n}\}$$ which is $$x^{\frac 1 n},\quad x\in [0,1],$$ and zero below $0$, and $1$ above $1$. The derivative of this function is the density of $X_n$. Your mistakes are shown in red below: $$\mathbb{P}(X_n \leq t)= \int_{[0,1]} \color{red}{\omega^n}\cdot\mathbb{1}_{\omega^{\color{red}n} \leq t}\, d\omega$$
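Explicitly (a small completion of the last sentence), differentiating the distribution function gives the density
$$f_{X_n}(x)=\frac{\mathrm d}{\mathrm dx}\,x^{\frac1n}=\frac1n\,x^{\frac1n-1},\qquad x\in(0,1].$$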
Homogeneous ideals are contained in homogeneous prime ideals
A different proof: $I$ is contained in a prime (maximal) ideal of $S$, say $P$. Since $I$ is homogeneous, $I\subseteq P^*$, where $P^*$ is the ideal generated by the homogeneous elements of $P$. Note that $P^*$ is also a prime ideal (see Bruns and Herzog, Lemma 1.5.6(a)) and homogeneous.
Bolzano-Weierstrass Theorem is false when S ⊂ Q
Use the fact that the rational numbers are dense in the reals: pick an irrational number and approximate it by rational numbers. These rational numbers can be taken to be mutually distinct, so they constitute an infinite set; since they converge, they are bounded. But the set fails to have an accumulation point in $\Bbb Q$, because by construction its only accumulation point in $\Bbb R$ is the irrational number itself. For example, take the decimal truncations of $\sqrt2$: the set $\{1,\,1.4,\,1.41,\,1.414,\,\dots\}$ is an infinite bounded subset of $\Bbb Q$ with no accumulation point in $\Bbb Q$.
Prove that $|ab+1|>|a+b|$ with $|a|<1$, $|b|<1$
Hint: $$(ab+1)^2-(a+b)^2=(ab-a-b+1)(ab+a+b+1)=(a-1)(b-1)(a+1)(b+1)=(a^2-1)(b^2-1)>0$$ since $|a|<1$ and $|b|<1$ make both factors $a^2-1$ and $b^2-1$ negative; the claimed inequality follows by taking square roots.
What is the limit for this series
A geometric series converges when $|r| < 1$, in which case $$\sum_{k=0}^{\infty}ar^k - r\sum_{k=0}^{\infty}ar^k = a$$ so $$\sum_{k=0}^{\infty}ar^k = \frac{a}{1-r}$$ The restriction on $r$ is required since $$\sum_{k=0}^{\infty}ar^k = \lim_{n \to \infty} \sum_{k=0}^{n}ar^k = \lim_{n \to \infty}\frac{a(1 - r^{n+1})}{1-r} = \frac{a}{1-r}$$ only when $|r| < 1$, because only then does $r^{n+1}\to0$. This formula tells you that your geometric series converges to $\frac{1}{1 - 1/2} = 2$. You cannot conclude that a series converges by using the $n$th term test; that test can only show divergence.
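For illustration (a hypothetical snippet, not part of the original answer), the partial sums of $\sum_{k\ge0}(1/2)^k$ visibly approach $2$:

```python
# Hypothetical illustration: partial sums of sum_{k>=0} (1/2)^k approach 2.
s = 0.0
for k in range(30):
    s += 0.5 ** k
    if k % 5 == 0:
        print(k, s)
# 0 1.0 / 5 1.96875 / 10 1.9990234375 / ... -> 2
```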
Convex combination with binomial probabilities
No, this is not the case, not even for $n=2$. You excluded $0$ and $1$, but by continuity we can still use them for a simple counterexample: For $p=0$, $q=1$, the left-hand side is zero whereas the right-hand side is positive for any $\lambda\in(0,1)$.
What is wrong in my $f'(x)$?
You are missing the red parts in your "method". $$(2x-1)(x^2+x+1)-(x^2-x+1)(2x+1)$$ $$=(2x-1)(x^2+1)+(2x-1)x-(x^2+1)(2x+1)+x(2x+1)$$ $$=(x^2+1)(2x-1-2x-1)\color{red}{+2x^2-x+2x^2+x}$$ $$=(x^2+1)(-2)\color{red}{+4x^2}$$
The Determinant of a Special Vandermonde Matrix
The question is to give the first minor of the Vandermonde matrix. Note that for every square matrix $\mathbf {A}$ we have ${\mathbf {A} \operatorname {adj} (\mathbf {A} )=\det(\mathbf {A} )\,\mathbf {I}}$, and $\operatorname {adj} (\mathbf {A} )=\mathbf {C}^{\mathsf {T}}=[(-1)^{i+j}\mathbf {M} _{ji}]$, where $\mathbf{C}$ is the cofactor matrix and $\mathbf {M} _{ij}$ is the $(i,j)$ minor of $\mathbf {A}$. Since the determinant and the inverse of a Vandermonde matrix are both known in closed form, this identity yields the minors.
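As a hypothetical illustration of that identity (the $3\times3$ example and all names are mine, not from the original answer), sympy can recover a first minor from the adjugate:

```python
# Hypothetical sketch: recover the (0,0) minor of a 3x3 Vandermonde matrix
# from adj(A) = det(A) * A^{-1}, i.e. M_ij = (-1)^(i+j) * adj(A)[j, i].
import sympy as sp

a, b, c = sp.symbols('a b c')
A = sp.Matrix([[1, a, a**2],
               [1, b, b**2],
               [1, c, c**2]])

adjA = A.adjugate()
i, j = 0, 0
M00 = sp.expand((-1) ** (i + j) * adjA[j, i])
print(sp.factor(M00))             # b*c*(c - b), via the adjugate
print(sp.factor(A.minor(0, 0)))   # same, via direct row/column deletion
```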
Expected number of distinct items picked from a set, putting back an item in the box every time it is picked
Hints:

- Find the probability a particular item is not picked the first time
- Find the probability a particular item is not picked any time
- Find the probability a particular item is picked at least once
- Find the expected number of items picked at least once

(A sketch combining these is given below.)
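Putting the hints together (a sketch added here; the symbols $N$ for the number of items in the box and $k$ for the number of picks are mine, since the question's notation is not reproduced): assuming each of the $k$ draws is uniform and independent, a fixed item $i$ is missed by one draw with probability $1-\frac1N$, so
$$\Pr(\text{item } i \text{ never picked})=\left(1-\frac1N\right)^{k}\quad\Longrightarrow\quad \mathbb E[\#\text{distinct items}]=N\left(1-\left(1-\frac1N\right)^{k}\right),$$
the last step by linearity of expectation over the indicators of each item being picked at least once.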
Question about Random Mapping Representation
You need independence because $X_n$ depends on $Z_n$, and unless $Z_n$ is independent of, say, $Z_{n-2}$ then there's no guarantee that the distribution of $X_n$ conditional on $X_{n-1}$ is independent of $X_{n-2}$. As for the proof, it should literally just consist of unpacking the definition. Do you know the precise definition of a Markov chain? UPDATE \begin{align*} \mathbb{P}&(X_n = c_n \; | \; X_{n-1} = c_{n-1}, \ldots, X_0 = c_0) \\[3mm] &= \mathbb{P}(f(c_{n-1}, Z_n) = c_n \; | \; f(c_{n-2}, Z_{n-1}) = c_{n-1}, \ldots, f(c_0, Z_{1}) = c_1, X_0 = c_0) \\[3mm] &= \mathbb{P}(f(c_{n-1}, Z_n) = c_n) = P(c_{n-1},c_n) \end{align*} where the next-to-last step comes from the independence of $Z_n$ from the other $Z_i$. (In fact we also need to require that each $Z_i$ is independent of $X_0$, which the question didn't explicitly mention.)
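A hypothetical simulation sketch (the two-state kernel `P` and all names are mine, not from the original answer) showing that the random mapping representation reproduces the intended transition matrix:

```python
# Hypothetical sketch: X_n = f(X_{n-1}, Z_n) with iid uniform Z_n
# generates a Markov chain with transition matrix P.
import random

random.seed(1)
P = [[0.7, 0.3],
     [0.4, 0.6]]           # example 2-state transition matrix

def f(x, z):
    # random mapping representation: step from state x using uniform z
    return 0 if z < P[x][0] else 1

counts = [[0, 0], [0, 0]]
x = 0
for _ in range(10**6):
    z = random.random()    # Z_n iid Uniform(0,1), independent of the past
    nxt = f(x, z)
    counts[x][nxt] += 1
    x = nxt

for i in (0, 1):
    row = sum(counts[i])
    print([c / row for c in counts[i]])  # ~[0.7, 0.3] and ~[0.4, 0.6]
```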