If $S_1=0$ and $S_2=0$ represent two non-intersecting circles, what does $S_1+\lambda S_2=0$ represent?
Your two questions are in fact one, because the pencil of circles $C_1+\lambda C_2$ contains the radical axis of $C_1,C_2$ (the one that you rejected), and conversely, the pencil $C+\lambda L$ defined by a circle and a line is such that $L$ is the radical axis of $C$ and any other circle in the pencil. WLOG, consider $$\begin{cases}S_1\equiv x^2+y^2=1,\\S_2\equiv(x-d)^2+y^2=r^2.\end{cases}$$ By subtraction, the line in the pencil is $$L\equiv d(2x-d)=1-r^2,$$ or $$x=\frac d2+\frac{1-r^2}{2d}.$$ The distances from the intersection of $L$ with the $x$-axis to the two centers are $$\pm\frac d2+\frac{1-r^2}{2d},$$ and by Pythagoras we do have $$\left(\frac d2+\frac{1-r^2}{2d}\right)^2-1^2=\left(-\frac{d}2+\frac{1-r^2}{2d}\right)^2-r^2,$$ which proves that the tangent segments from this point to the two circles have equal length, so the line is the radical axis. https://en.wikipedia.org/wiki/Radical_axis#/media/File:Radical_axis_orthogonal_circles.svg These families of circles are well illustrated by the Apollonius coordinate system.
Calculate vector flux through surface defined by paraboloid and plane
$\iint_{D}\int^{4}_{x^{2}+y^{2}}z\, dz\,dx\,dy$ where $D$ is the disk $x^{2}+y^{2}\le 4$. Let $x=r \cos(\theta)$ and $y=r \sin(\theta)$; then $0\le r\le 2$ and $0\le \theta\le 2\pi$. Since $\int^{4}_{x^{2}+y^{2}}z\,dz=\frac{16-r^{4}}{2}$, we get $$\iint_{D}\int^{4}_{x^{2}+y^{2}}z\, dz\,dx\,dy=\int^{2\pi}_{0}\int^{2}_{0} \frac{16-r^{4}}{2}\,r\, dr\,d\theta=\frac{64\pi}{3}.$$
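A quick numerical sanity check of the polar-coordinates computation (a sketch; a midpoint Riemann sum stands in for exact integration, and the integrand is independent of $\theta$ so the angular integral is just a factor of $2\pi$):

```python
import math

# Midpoint Riemann sum for ∫_0^{2π} ∫_0^2 (16 - r^4)/2 * r dr dθ = 64π/3.
def triple_integral(nr=2000):
    dr = 2.0 / nr
    total = 0.0
    for i in range(nr):
        r = (i + 0.5) * dr
        total += (16 - r**4) / 2 * r * dr
    return total * 2 * math.pi  # integrand does not depend on θ

print(triple_integral())  # ≈ 64π/3 ≈ 67.02
```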
Is $8^x$ positive or negative?
What is special about the number $8$? Do you know that $a^x=e^{x\log a}>0$ for all $a>0$? Simply put, the exponential function is positive.
Proving existence of a square-free sequence
Let $p_1, p_2, \cdots , p_k$ be all the prime numbers less than $n^2$ (for some $k$). For any $K > k$, let $p_{k+1}, p_{k+2}, \cdots , p_K$ be the next prime numbers after that (larger than $n^2$). For $1 \le i \le k$, there is at least one value of $r$ $\pmod {p_i^2}$ that satisfies $r + 1^2, r+2^2, \cdots , r+n^2 \not \equiv 0 \pmod {p_i^2}$ (i.e. there is at least one viable remainder for $r$). To see this, note that the squares modulo $p_i^2$ do not include $p_i$ itself (if $p_i \mid j^2$ then $p_i^2 \mid j^2$, so $j^2\equiv p_i$ is impossible), thus setting $r \equiv -p_i$ is sufficient. For $k+1 \le i \le K$, the numbers $1^2, 2^2, \cdots , n^2$ are all distinct $\pmod {p_i^2}$, because $p_i \ge p_{k+1} > n^2$. Therefore, there are exactly $(p_i^2 - n)$ viable possibilities for $r \pmod {p_i^2}$. Now let $N = p_1^2p_2^2p_3^2\cdots p_K^2$. By the Chinese remainder theorem, there are at least $$(1)(1) \cdots (1)(p_{k+1}^2 - n)(p_{k+2}^2 - n)\cdots (p_{K}^2 - n)$$ viable options for $r$ modulo $N$ (by which we mean values of $r$ modulo $N$ that cause $r + 1^2, r + 2^2, \cdots r+n^2$ to be free from squares of all the primes $p_1, \cdots ,p_K$). For any $K$, we define $$ x_K := \frac{\text{Number of viable options for } r \text{ mod } N}{N}, $$ which is at least $$ \frac{(p_{k+1}^2 - n)(p_{k+2}^2 - n)\cdots (p_{K}^2 - n)}{p_1^2p_2^2p_3^2\cdots p_K^2} = \frac{1}{p_1^2}\frac{1}{p_2^2} \cdots \frac{1}{p_k^2} \left(1 - \frac{n}{p_{k+1}^2}\right)\cdots \left(1 - \frac{n}{p_K^2}\right). $$ The infinite product on the right converges to a positive number as $K \to \infty$, so $x_K$ is bounded below by a positive constant, and the viable options for $r$ for the first $K+1$ primes are a subset of the viable options for the first $K$ primes, for any $K$. Therefore, there are values of $r$ which work for every $K$. In particular, there is at least one positive integer $r$ such that $r + 1^2, r+2^2, \cdots r+n^2$ are all squarefree.
Trigonalise a matrix
From @Ian's comment below, the author appears to be saying to "make triangular by similarity transformations", which gives something of the form (as you added in the comment) $$T = \left( \begin{array}{ccc} \lambda_{1} & a & b \\ 0 & \lambda_{2} & c \\ 0 & 0 & \lambda_{3} \\ \end{array} \right)$$ We try two things: $1.)~$ diagonalize when possible, and if that fails, $2.)~$ find the Jordan Normal Form. For this specific problem, we have: $$\mathbf{A}=\begin{bmatrix} 0 & 1 & 1 \\ -1 & 1 & 1 \\ -1 & 1 & 2 \end{bmatrix}$$ We find the characteristic polynomial and eigenvalues using $|A - \lambda I| = 0$, resulting in: $$-\lambda ^3+3 \lambda ^2-3 \lambda +1 = 0 \implies -(\lambda - 1)^3 = 0 \implies \lambda_{1, 2, 3} = 1$$ Next, we want to find three linearly independent eigenvectors using $[A- \lambda I]v_i = 0$, but this is not always possible when the geometric multiplicity is smaller than the algebraic multiplicity (defective matrices), so we have to resort to generalized eigenvectors. So, we have $[A - \lambda_1 I]v_1 = [A -I]v_1 = 0$ as: $$\left( \begin{array}{ccc} -1 & 1 & 1 \\ -1 & 0 & 1 \\ -1 & 1 & 1 \\ \end{array} \right)v_1 = 0$$ If we put that in row-reduced echelon form (RREF), we arrive at: $$\left( \begin{array}{ccc} 1 & 0 & -1 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \\ \end{array} \right) v_1 = 0$$ This only provides one linearly independent eigenvector: $$v_1 = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}$$ Now, we need to find two more generalized eigenvectors, and there are many approaches to that. 
One approach is to try $[A - \lambda I]v_2 = v_1$, yielding an augmented RREF of: $$\begin{pmatrix} 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$ Writing $v_2 = (a,b,c)^T$, this yields $a = c$, $b = 1$, so choose $c = 0$, thus: $$v_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}$$ Repeating this process, we set up and solve $[A - I]v_3 = v_2$, yielding an augmented RREF of: $$\begin{pmatrix} 1 & 0 & -1 & -1 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$ This yields $a = -1 + c$, $b = -1$, so choose $c = 0$, thus: $$v_3 = \begin{pmatrix} -1 \\ -1 \\ 0 \end{pmatrix}$$ We can now form $P = (v_1~|~v_2~|~v_3)$ as: $$P = \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & -1 \\ 1 & 0 & 0 \end{pmatrix}$$ This yields the Jordan Normal Form: $$T = P^{-1}AP = \left( \begin{array}{ccc} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \\ \end{array} \right)$$ Note: there are various ways to solve these problems; you want to look for references on the Jordan Normal Form, and you can sometimes infer $T$ directly (see references below). Some sources for generalized eigenvectors and the Jordan Normal Form: Wiki, Notes 1, Notes 2, Jordan Normal Form and Chaining, the book Abstract Algebra by Dummit and Foote, and many other books on linear algebra.
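As a sanity check, one can verify $P^{-1}AP = T$ for the worked example with exact integer arithmetic (a sketch; the inverse of $P$ is precomputed by hand via the adjugate, using $\det P = 1$):

```python
# Verify P^{-1} A P = T for the worked example above (exact integer arithmetic).
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[0, 1, 1], [-1, 1, 1], [-1, 1, 2]]
P = [[1, 0, -1], [0, 1, -1], [1, 0, 0]]      # columns v1, v2, v3
Pinv = [[0, 0, 1], [-1, 1, 1], [-1, 0, 1]]   # hand-computed adjugate; det(P) = 1

# Check the claimed inverse really is the inverse.
assert matmul(P, Pinv) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

T = matmul(Pinv, matmul(A, P))
print(T)  # [[1, 1, 0], [0, 1, 1], [0, 0, 1]] — the Jordan block
```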
Finding approximate largest common divisor for sets of real numbers.
A partial answer could go something like this. Consider the set of numbers $\{d_1,\cdots,d_n\}$. We first estimate a smooth histogram $$h_l = \sum_{\forall k}\exp\left[-\frac{|d_k-v_l|} \sigma\right]$$ for histogram center points $v_l$. Now we can take an FFT of the histogram. The Fourier spectrum has very distinct local peaks at all multiples of a base frequency, which corresponds to the common spacing (the approximate common divisor). To recover the gap, we divide the histogram support by this base frequency.
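A minimal pure-Python sketch of the idea (all parameters — the grid size, $\sigma$, and the sample data with true gap $2.0$ — are illustrative choices, and a direct DFT stands in for the FFT):

```python
import math

# Toy data: multiples of an unknown gap g = 2.0.
d = [2.0, 4.0, 6.0, 8.0, 10.0]

M, L, sigma = 500, 12.0, 0.1          # grid points, histogram support, kernel width
v = [l * L / M for l in range(M)]     # histogram center points v_l
h = [sum(math.exp(-abs(dk - vl) / sigma) for dk in d) for vl in v]

# Direct DFT magnitude at integer frequency f (an FFT would do this faster).
def dft_mag(h, f):
    re = sum(hl * math.cos(2 * math.pi * f * l / len(h)) for l, hl in enumerate(h))
    im = sum(hl * math.sin(2 * math.pi * f * l / len(h)) for l, hl in enumerate(h))
    return math.hypot(re, im)

f_base = max(range(1, 51), key=lambda f: dft_mag(h, f))
gap = L / f_base                      # divide the support by the base frequency
print(f_base, gap)                    # expect f_base = 6, so gap = 12/6 = 2.0
```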
Radioactive Substance Decay Problem
Can you solve the differential equation? You should have a solution with two constants: the initial amount and $a$. The two pieces of data give you two equations in two unknowns to find these constants. Added: your solution needs a constant of integration. The solution should be $x=c\cdot e^{-at}$. Now plug in the data you are given: $$1000=c\cdot e^{-2a}\\300=c \cdot e^{-7a}$$ Now solve those for $a,c$, and $x(0)$ is just $c$.
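To carry the hint through numerically (a sketch, using the two data points displayed above): dividing the two equations gives $e^{5a}=10/3$, so $a=\frac15\ln\frac{10}{3}$ and $c=1000\,e^{2a}$.

```python
import math

# Solve 1000 = c e^{-2a}, 300 = c e^{-7a} for a and c.
a = math.log(10 / 3) / 5      # dividing the equations gives e^{5a} = 10/3
c = 1000 * math.exp(2 * a)    # back-substitute; note x(0) = c

x = lambda t: c * math.exp(-a * t)
print(a, c)                   # a ≈ 0.2408, c ≈ 1618.9
assert abs(x(2) - 1000) < 1e-6 and abs(x(7) - 300) < 1e-6
```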
Question on the definition of a Dynkin diagram
Let $\Phi$ be an irreducible root system of rank $r$. The Dynkin diagram will have $r$ vertices, and although Dynkin diagrams of type $A_r$ or $D_r$ can be deduced from the Coxeter graph, types $B_r$ ($r\geq 2$) and $C_r$ ($r\geq 3$) cannot. Consider $B_3$ and $C_3$. These both have Coxeter diagram $$\circ -\circ\stackrel{4}-\circ,$$ despite having different Dynkin diagrams and Cartan matrices, $$\begin{pmatrix}2&-1&0\\-1&2&-2\\0&-1&2 \end{pmatrix},\quad \begin{pmatrix}2&-1&0\\-1&2&-1\\0&-2&2\end{pmatrix},$$ for $B_3$ and $C_3$ respectively. We can distinguish between these two via the Dynkin diagrams by adding an arrow pointing towards the shorter of a pair of roots of different lengths. The entries of the Cartan matrices are called Cartan integers, given by $(M)_{ij}=\langle \alpha_i,\alpha_j\rangle$, where there is a fixed order on the simple roots $\{\alpha_1,\cdots,\alpha_r\}$. In this language, the number of edges joining the $i^{\text{th}}$ vertex to the $j^{\text{th}}$ vertex is given by $\langle \alpha_i,\alpha_j\rangle \langle \alpha_j,\alpha_i\rangle$, which is nonnegative since for $i\neq j$ both Cartan integers are nonpositive. When all roots have the same length, $\langle \alpha_i,\alpha_j\rangle = \langle \alpha_j,\alpha_i\rangle$, and the Coxeter graph determines the entire Dynkin diagram; but as we can see from the Cartan matrices, this fails for $B_3$ and $C_3$ above, so the arrow pointing to the shorter of the roots is needed to completely describe the two, recovering all Cartan integers. Note: when $\Phi$ is an irreducible root system, there are only two root lengths in $\Phi$, so we are safe to talk about short and long roots. Attempted short answer: a Dynkin diagram can be built from the Cartan matrix (or from the root system itself), where each simple root is a vertex in the graph. Two vertices are joined by a single edge when the two roots have the same length; an arrow can be used to indicate that two roots have different lengths (by the note above, one short and one long), pointing from the longer root to the shorter.
Spivak - Chapter 7 Question 5 - Could f(x) be a function other than a constant function?
If $f$ is not constant, then it has values $f(x_1)\ne f(x_2)$. The intermediate value theorem then says it attains all values between $f(x_1)$ and $f(x_2)$. So the problem is only to show that there must be some irrational number between $f(x_1)$ and $f(x_2)$. Has Spivak posed an earlier exercise asking you to prove that between any two real numbers there is some irrational number?
Derivative of the inclusion of a submanifold
This is certainly a valid proof, and whilst the method behind it is very intuitive, actually writing out what's being done is very messy, especially considering the problem feels like it should be "obvious". A much "nicer" proof comes from using the following equivalent definition of the derivative: if $f:X\rightarrow Y$ is a smooth map of manifolds embedded in $\mathbb{R}^N,\mathbb{R}^M$ respectively, then we define the derivative of $f$ at $x$, $df_x:T_xX\rightarrow T_{f(x)}Y$, by taking a smooth extension $F:U\rightarrow\mathbb{R}^M$ of $f$ (where $U$ is an open subset of $\mathbb{R}^N$) and taking $df_x$ to be the restriction of $dF_x$ to $T_xX$. You can show that this is well defined (i.e. a map to $T_{f(x)}Y$, independent of the choice of extension) by showing that for any given extension and choice of parameterisations $\phi$ on $X$ and $\psi$ on $Y$, $dF=(d\psi)(d\bar f)(d\phi)^{-1}$ (on suitable domains). Note that the left hand side is independent of chart, and the right hand side is independent of extension, so both expressions are independent of both! I count at least three birds with one short proof (which is one reason that I much prefer this definition of the derivative): simply showing this definition is well defined also shows that your definition is well defined, and that the two are the same, giving you multiple methods of computation. Anyway, using this definition, with the notation as in your question, the inclusion map $\iota :X \rightarrow Y$ is just the restriction of the identity map $I_N$ on $\mathbb{R}^N$. Thus $d\iota_x$ is the restriction of the derivative of $I_N$ to $T_xX$, i.e. the inclusion map. Done in barely three lines, with no computation!
How to work faster on the GRE.
It is hard to give anything more than general advice, which is to go through practice papers and calculus-type questions. Doing calculus-type questions refreshes you on the techniques, and doing enough of them gets you faster, while doing practice papers also gets you used to using in-exam methods, taking advantage of the nature of these multiple choice questions. For example, rather than finding the actual equation of the plane and plugging them in, I would have computed the gradient vector, which is $$ ( -2z^2+6xy, 3x^2+2zy, y^2-4xz)\mid_{(1,1,1)}= (4,5,-3).$$ Any point $P$ that lies on the plane in question must satisfy $(P- (1,1,1))\cdot (4,5,-3)=0$, so subtract 1 from each coordinate and just compute dot products until you find one that is $0.$ You wouldn't even have to compute (a) or (d) properly: you'll notice by signs that the dot product will be positive. It is quite similar to finding the equation of the plane first and plugging them in, but seems to save at least a little bit of time. General tips: Don't spend more than 2 minutes on any question; after your two minutes, if you haven't got the answer, you should at least be able to rule out 2 answers. Mark off those 2 answers for now. Dimensional analysis is a good trick. There are some calculus-type rates questions (a car uses this much fuel per mile when travelling at some speed...). The options might be some integrals with various integrands, and only one of them even has the correct units. If you can rule out 2 options, good, and if you can rule out 3, even better: guessing is beneficial. Don't guess if you can't rule anything out, because there is only a 1/5 chance of getting the right answer, and an incorrect answer subtracts 0.25. 
Some things they seem to like testing, and so are worth revising: Green's theorem (and using it to find areas), volumes of surfaces of revolution, finding volumes by setting up double integrals, decompositions of finite abelian groups, basic statistics and probability (something I had to revise), the residue theorem, and the Cauchy–Riemann equations. Also, many of the incorrect answer choices for questions on Lebesgue theory seem to come from incorrectly thinking that uncountable implies measure zero. Keep the Cantor set in mind for these questions. Good luck!
a follow up question about modeling with exponential distributions
Note that Naomi always goes to the clerk that finishes first. Consequently, the behavior of Naomi's random service time is dependent on whether she goes to the first or second clerk (which we will assume to have service rates of $\lambda_1$ and $\lambda_2$, respectively). To this end, consider the joint distribution of the random service times of John and Paul: $$f_{X,Y}(x,y) = f_X(x) f_Y(y) = \lambda_1 \lambda_2 e^{-(\lambda_1 x + \lambda_2 y)}, \quad x, y > 0,$$ where we have assumed without loss of generality that John went to clerk 1, and Paul went to clerk 2. Then the probability that Naomi is still waiting when both have left is $$\Pr[Z > |X-Y|] = \Pr[(Z > X-Y) \cap (X > Y)] + \Pr[(Z > Y-X) \cap (X \le Y)].$$ Considering the first term, we have $$\Pr[(Z > X-Y) \cap (X > Y)] = \int_{y=0}^\infty \int_{x=y}^\infty (1 - F_Y(x-y)) f_{X,Y}(x,y) \, dx \, dy.$$ This is because the service time $Z$ follows the distribution of whichever clerk finished first, and in this case, it was $Y$. I leave it to you to show that this integral is equal to $$\frac{\lambda_1 \lambda_2}{(\lambda_1 + \lambda_2)^2},$$ and consequently by symmetry, $$\Pr[Z > |X-Y|] = \frac{2\lambda_1 \lambda_2}{(\lambda_1 + \lambda_2)^2}.$$
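A quick Monte Carlo check of the final formula (a sketch; the rates $\lambda_1=1$, $\lambda_2=2$ are arbitrary choices, for which $\frac{2\lambda_1\lambda_2}{(\lambda_1+\lambda_2)^2}=\frac49$):

```python
import random

random.seed(0)
lam1, lam2 = 1.0, 2.0
N = 200_000
count = 0
for _ in range(N):
    x = random.expovariate(lam1)   # John's service time at clerk 1
    y = random.expovariate(lam2)   # Paul's service time at clerk 2
    # Naomi is served by whichever clerk frees up first.
    z = random.expovariate(lam1 if x < y else lam2)
    if z > abs(x - y):             # still being served when the other leaves
        count += 1

estimate = count / N
exact = 2 * lam1 * lam2 / (lam1 + lam2) ** 2   # = 4/9 here
print(estimate, exact)
```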
Proof that integers (except $1$ and $-1$) don't have an integer multiplicative inverse
Well, let's assume $ab=1$ with $a,b\in\mathbb{Z}$. Of course this means that both $a$ and $b$ are nonzero. Because they are integers, we can conclude that their absolute values are at least $1$. And then $1=|ab|=|a||b|\geq |a|$, and in the same way $1\geq |b|$. So both $a$ and $b$ must belong to the set $\{-1,1\}$.
Prove that $\sum_{i=1}^n\Theta(i)=\Theta(n^2)$.
So we have $f(n)\in\Theta(n)$, i.e. there are $k_1,k_2>0$ such that $$k_1n\leq f(n)\leq k_2n$$ eventually. Now let's add those inequalities to each other for consecutive $n$; we get $$\sum_{i=1}^n k'_1i \leq\sum_{i=1}^nf(i)\leq \sum_{i=1}^n k'_2i.$$ Note that since the initial inequality holds only "eventually" (i.e. starting from some $n_0$), it may happen that we need to reduce $k_1$ to $k'_1$ and increase $k_2$ to $k'_2$ (i.e. to account for what's going on for $n<n_0$). Anyway, I leave it as an exercise that such $k'_1,k'_2>0$ can be chosen so that they work eventually as well (i.e. the further we go, the less the initial $n_0$ terms matter). And so we get $$k'_1\frac{n(n+1)}{2}\leq\sum_{i=1}^nf(i)\leq k'_2\frac{n(n+1)}{2},$$ which shows that $\sum_{i=1}^n f(i)$ is $\Theta\big(\frac{n(n+1)}{2}\big)=\Theta(n^2)$.
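A concrete instance of this bounding argument (a sketch; $f(n)=2n+5$ and the constants are illustrative choices — $k'_1=2$ and $k'_2=7$ happen to work for all $n\ge 1$, so no "eventually" is needed here):

```python
# f(n) = 2n + 5 is Θ(n); check k1'·n(n+1)/2 ≤ Σ f(i) ≤ k2'·n(n+1)/2.
f = lambda n: 2 * n + 5
k1, k2 = 2, 7                      # 2n ≤ f(n) ≤ 7n for all n ≥ 1

for n in range(1, 200):
    s = sum(f(i) for i in range(1, n + 1))      # = n(n+1) + 5n exactly
    tri = n * (n + 1) // 2
    assert k1 * tri <= s <= k2 * tri
print("bounds hold; partial sum grows like n^2")
```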
Solving a System of Equations without a Square Matrix
There is a solution because the rank of $A$ is $M$ (so the transformation is onto $\mathbb F^M$). You could row reduce $A$ and then read off the solutions.
Prove $\inf_{x \in X} \sup_{y \in Y} f(x,y) \ge \sup_{y \in Y}\inf_{x \in X} f(x,y)$
That is the meaning. For all $x$ and $y$, $f(x,y)\le f^*(x)$. Thus $$\forall y\in Y,\qquad f_*(y)=\inf_{x\in X} f(x,y)\le \inf_{x\in X}f^*(x)$$ Notice that the RHS is just a fixed extended real number. Thus, $$\sup_{y\in Y} f_*(y)\le\inf_{x\in X} f^*(x)$$ The hypothesis of boundedness is unnecessary. Meaning: if you are working with functions which take values in the appropriate version of the extended real numbers (namely, $[-\infty,\infty]$), the inequality still holds in the natural way.
Solution for Finding Total Number of Computer-Generated Passwords
Thank you for posing this question. If you have a four-letter password then you have four spaces, each of which can take on $26$ different characters: _ _ _ _. That leads to $26\cdot 26\cdot 26\cdot 26 = 26^4$ possibilities. If you include capital letters then those are distinct characters as well, so there are still four spaces, but each one with $52$ characters: $52\cdot 52\cdot 52\cdot 52$. Therefore, the total number of possible passwords is $52^{4}$.
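The product rule behind this count is easy to check directly (a sketch; brute-force enumeration over a tiny 2-character alphabet stands in for the full 52-letter one, since $52^4$ strings are too many to list):

```python
from itertools import product
from string import ascii_letters

# Product rule: k positions with m symbols each gives m^k passwords.
assert len(ascii_letters) == 52
print(26**4, 52**4)   # 456976 7311616

# Sanity check of the rule itself on a small alphabet: 2 symbols, 4 positions.
small = list(product("ab", repeat=4))
assert len(small) == 2**4
```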
Prove $\int _a^b |f(t)| \, dt $ is a norm
Suppose $\left\Vert f\right\Vert =0$ but $f\neq0$. Recall that compositions of continuous functions (e.g. $|\cdot|\circ f$) are continuous. Use the continuity of $t\mapsto|f(t)|$ to establish the existence of an interval $I\subset[a,b]$ on which $|f(t)|\geq M$ for some $M>0$. This yields the desired contradiction since $$ \left\Vert f\right\Vert =\int_{a}^{b}|f(t)|dt\geq\int_{I}|f(t)|dt\geq\int_{I}Mdt>0. $$
Given $A\in \mathcal M_{k\times l}(\Bbb F)$, prove that matrix $M=\begin{pmatrix} \ I_k & A \\ A^T & -I_l \end{pmatrix}$ is invertible
For $A,B,C,D$ matrices of appropriate sizes (with $D$ invertible), you have the formula $$\det(M)= \det(A-BD^{-1}C)\det(D),$$ where $$M = \begin{pmatrix} A & B\\ C & D \end{pmatrix}.$$ This gives in your particular case that $$\det(M) = \det(I_k+AA^T)\det(-I_l).$$ Since $\det(I_k+AA^T)=\det(I_l+A^TA)$, we're left to prove that $\det(I_l+A^TA) \neq 0$, i.e. that $-1$ is not an eigenvalue of $A^TA$. And indeed if that were the case, we would be able to find $X \neq 0$ such that $$\Vert AX \Vert^2 = X^T A^T A X = -X^TX = -\Vert X \Vert^2,$$ and that can't be, as the left side of the equality is nonnegative while the right side is strictly negative. Note: following the answer of "levap", I made here the assumption that $\mathbb F$ is $\mathbb R$ or a formally real field.
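One can sanity-check both the block-determinant formula and the identity $\det(I_k+AA^T)=\det(I_l+A^TA)$ on a small example (a sketch; the $2\times 3$ integer matrix $A$ is an arbitrary choice, and a Leibniz-formula determinant keeps the code dependency-free):

```python
from itertools import permutations

def det(M):
    # Leibniz formula; fine for the tiny matrices used here.
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

k, l = 2, 3
A = [[1, 2, 0], [-1, 3, 2]]
At = [list(r) for r in zip(*A)]

# M = [[I_k, A], [A^T, -I_l]] assembled as a (k+l) x (k+l) matrix.
M = [[(1 if i == j else 0) for j in range(k)] + A[i] for i in range(k)] + \
    [At[i] + [(-1 if i == j else 0) for j in range(l)] for i in range(l)]

AAt, AtA = matmul(A, At), matmul(At, A)
I_plus_AAt = [[AAt[i][j] + (i == j) for j in range(k)] for i in range(k)]
I_plus_AtA = [[AtA[i][j] + (i == j) for j in range(l)] for i in range(l)]

assert det(I_plus_AAt) == det(I_plus_AtA)   # Sylvester's determinant identity
assert det(M) == -det(I_plus_AAt)           # det(-I_3) = -1
print(det(M))
```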
Count subsets with zero sum of xors
Any subset of $S$ must have an XOR-sum in $S$. Also, if you consider the subset $T=\{1=2^0,2^1,\ldots,2^{N-1}\}$ of $S$, given any element $s$ of $S$, there is a unique subset of $T$ with XOR-sum $s$, which is given by choosing the elements of $T$ corresponding to the bits in the binary expansion of $s$ which are $1$. Therefore, to find a subset $S'$ of $S$ with XOR-sum zero, you can choose the intersection of $S'$ with $S\setminus T$ freely, and then there is a unique way to choose $S'\cap T$ which will make the overall XOR-sum zero. The number of ways to choose a subset of $S$ with XOR-sum zero is then the number of subsets of $S\setminus T$, which is $$2^{\#(S\setminus T)}=2^{2^N-N}.$$
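A brute-force check of the count (a sketch, assuming $S=\{0,1,\dots,2^N-1\}$, which is what makes $\lvert S\rvert=2^N$ and every XOR-sum land back in $S$; the empty set counts, with XOR-sum $0$):

```python
from itertools import combinations

def count_zero_xor_subsets(N):
    S = range(2 ** N)
    count = 0
    for r in range(len(S) + 1):
        for sub in combinations(S, r):
            acc = 0
            for v in sub:
                acc ^= v
            count += acc == 0
    return count

for N in (2, 3):
    assert count_zero_xor_subsets(N) == 2 ** (2 ** N - N)
print(count_zero_xor_subsets(3))  # 32 = 2^(8-3)
```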
Show that the periodic function is even in a specific interval
By "even in the given period", I assume you mean symmetric around the midpoint of the interval, i.e. that $f(2 \pi - x) = f(x)$. Just plug in $2 \pi - x$ in place of $x$, and simplify. Or complete the square.
Why do people use scientific notation to write very large/very small numbers?
A simple example should be enough to illustrate why. Consider Avogadro's constant, $L$. Would you rather see $$ L = 6.022141\times 10^{23} $$ or $$ L = 602214100000000000000000 $$ when reading an article, book, etc.?
Matrix multiplied by its transpose, when X is in M2(R), and X has columns u and v, what is the basis of X in terms of u*u, v*v and u*v?
There's no need for bases. The quickest way to see what's going on is to use block-matrix multiplication: $$ X^TX = \pmatrix{u&v}^T \pmatrix{u&v} = \pmatrix{u^T\\v^T}\pmatrix{u&v} = \pmatrix{u^Tu&u^Tv\\v^Tu & v^Tv}. $$
The Whitney sphere immersion
It is easier to split into two cases. When $y\neq 0$, the set $\mathbb S^n_\pm = \{(x, y)\in \mathbb S^n: \pm y>0\}$ admits charts $\phi_\pm:D(1) \to \mathbb S^n$, $$\phi_{\pm} (x)= (x, \pm \sqrt{1-|x|^2}).$$ Under this chart, the immersion $W(x,y)= (1+iy)x$ is $$ W\circ \phi_\pm (x) = (1\pm i\sqrt{1-|x|^2}) x = x \pm i x \sqrt{1-|x|^2},$$ which is clearly an immersion since it is a graph $x\mapsto (x, f(x))$ (treating $\mathbb C^n = \mathbb R^n \oplus \mathbb R^n$ here). Note that from here we also see that $W$ is Lagrangian, as $$\pm x\sqrt{1-|x|^2} = \mp\frac 13\nabla (1-|x|^2)^{3/2}.$$ When $y = 0$, $W(x, 0) = x$. It is very clear that it is an immersion when $W$ is restricted to the equator $\{(x, y): y=0\}$. So it suffices to check $(W_*)_{(x,0)} (0,\cdots, 0, 1)$. By definition of the differential, $$\begin{split} (W_*)_{(x,0)} (0,\cdots, 0, 1)&= \frac{d}{dt}\bigg|_{t=0} W((\cos t)x, \sin t ) \\ &= \frac{d}{dt}\bigg|_{t=0} (1+ i\sin t) (\cos t) x \\ &= ix. \end{split}$$ Note that this implies $(W_*)_{(x,0)}$ is also injective, as $i x$ lies outside the real $n$-plane $\mathbb R^n \oplus \{0\}$ (where all the other $(W_*)_{(x,0)} (v)$ live, for $v$ in the tangent bundle of the equator).
Does the heat equation have a unique solution with these mixed boundary conditions
When you define $\omega$, you know $\omega(x,0)=0$, $\omega(0,t)=0$, and $\omega_x(1,t)=0$; a priori you cannot state $\omega(1,t)=0$, so you cannot conclude $\omega\equiv 0$ from here. The usual trick for uniqueness with Neumann (or mixed) boundary condition is to use an energy method. We define an auxiliary functional $$ E = \frac{1}{2}\int_{0}^{1}u^2 \mathrm{d}x \geq 0, $$ and observe that if $u$ is a solution to your problem, this energy is dissipated in time: \begin{align} \frac{\mathrm{d}E}{\mathrm{d}t} = \frac{1}{2}\int_{0}^{1}\frac{\partial}{\partial t}u^2 \mathrm{d}x = \int_{0}^{1} uu_t \mathrm{d}x = \int_{0}^{1} uu_{xx} \mathrm{d}x = \left[uu_{x}\right]_{x=0}^{x=1} - \int_{0}^{1} (u_{x})^2 \mathrm{d}x \leq 0. \end{align} Now, you define $\omega=u-v$, and study what happens to its energy. From this you will be able to conclude $\omega\equiv 0$, like you wanted, and show uniqueness.
Argument Principle: roots of $z^6+9z^4+z^3+2z+4$
"$z^6$ dominates" means that when $|z|$ is "large" you have $$ p(z)\approx z^6 $$ (All other terms $9z^4+z^3+2z+4$ are relatively small compared with $z^6$.) In particular, when $R$ is large, on $\Gamma_R$, you have (1). To be precise, you have $$ \lim_{|z|\to\infty}\frac{|p(z)|}{|z^6|}=1 $$
Question about interpretation of the divergence of a vector field
I don't think your examples contradict your view on the divergence. Take your first example, the divergence is $$\frac{\partial}{\partial x}\left(\frac{x}{\sqrt{x^2}}\right) = \frac{\sqrt{x^2} - x\frac{2x}{2\sqrt{x^2}}}{x^2} = \frac{\sqrt{x^2} - \frac{x^2}{\sqrt{x^2}}}{x^2} = 0.$$ And similarly we find for your second example: $$\frac{\partial}{\partial x}\left(\frac{x}{x^2}\right) = \frac{x^2 - x\cdot2x}{x^4} = -\frac{x^2}{x^4} = -\frac{1}{x^2},$$ which is clearly not zero.
Can this difference operator be factorised?
$$-\frac{\epsilon}{h^2} (Y_{i+1}-2Y_i + Y_{i-1}) + \alpha(Y_i-Y_{i-1}) = 0 \;\;\; (1)$$ At $i=N$: $$-\frac{\epsilon}{h^2} Y_{N+1} + \left ( \frac{2\epsilon}{h^2} + \alpha \right )Y_N - \left ( \frac{\epsilon}{h^2} + \alpha \right ) Y_{N-1} = 0,$$ so $$Y_{N} = \frac{\epsilon Y_{N+1}}{2\epsilon + \alpha h^2}+\frac{\epsilon + \alpha h^2}{2\epsilon + \alpha h^2} Y_{N-1} \equiv aY_{N+1} + (1-a)Y_{N-1}.$$ Unrolling the recursion, $$Y_N = aY_{N+1}(1+(1-a) + (1-a)^2 + \dots+ (1-a)^N)+(1-a)^{N+1}Y_0 = a\frac{1-(1-a)^{N+1}}{a}Y_{N+1} +(1-a)^{N+1}Y_0 = (1-(1-a)^{N+1})Y_{N+1}+(1-a)^{N+1}Y_0,$$ and similarly $$Y_{N-1} = (1-(1-a)^{N})Y_{N+1}+(1-a)^{N}Y_0.$$ Hence, (1) can be rewritten as a function of $Y_0$ and $Y_{N+1}$. Collecting the terms in $Y_0$ and $Y_{N+1}$ and defining your $\phi$ appropriately, you should get the desired result. Not worth the bounty but should point you in the right direction.
Finding the sum of a series using De Moivre's Theorem
\begin{align} S(x) &=1+ 2\sin x+ \dfrac{2^{2}\sin 2x}{2!}+ \dfrac{2^{3}\sin 3x}{3!}+ \dots \\ &=1+{\bf Im}\sum_{k=1}^\infty\dfrac{2^ke^{ikx}}{k!} \\ &=1+{\bf Im}\left(\sum_{k=0}^\infty\dfrac{(2e^{ix})^k}{k!}-1\right) \\ &=1+{\bf Im}\,e^{2e^{ix}} \\ &=1+e^{2\cos x}\sin(2\sin x) \end{align}
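A quick numerical check of the closed form (note the leading $1$: since ${\bf Im}\,1=0$, the constant term has to be carried outside the imaginary part):

```python
import math

def S_partial(x, terms=40):
    # Partial sum of 1 + Σ_{k≥1} 2^k sin(kx)/k!
    return 1 + sum(2**k * math.sin(k * x) / math.factorial(k)
                   for k in range(1, terms))

closed = lambda x: 1 + math.exp(2 * math.cos(x)) * math.sin(2 * math.sin(x))

for x in (0.3, 1.0, math.pi / 2, 2.5):
    assert abs(S_partial(x) - closed(x)) < 1e-12
print(closed(math.pi / 2))  # 1 + e^0 sin(2) ≈ 1.9093
```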
$c_{00}$ and the $\ell^2$-norm
The norm of a continuous linear functional can always be computed on a dense subspace. Since $c_{00}$ is a dense subspace of $\ell^2$, it follows that $\sup\limits_{\|\beta\|_2=1,\ \beta\in c_{00}}|\langle \alpha, \beta \rangle|=\|\alpha\|_2$.
Proof of Polyates Lemma
I detailed a complete proof here: https://sites.google.com/site/largenumbers/home/appendix/c -- Sbiis Saibian
Proving some relations regarding the continuous function
Maybe not the best solution, but it works. The idea is to use the intermediate value theorem and binary search: W.l.o.g. you can assume there is a $c \in [0,n]$ s.t. $f(c) = \max \limits_{x \in [0,n]} f(x)$ and $f(c) \neq f(0)$. Now we look at $g:= \frac{f(0)+f(c)}{2}$. By the intermediate value theorem we know there is a $d_0$ in $(0,c)$ s.t. $f(d_0) = g$. Analogously there is a $\bar{d_0}$ in $(c,n)$ s.t. $f(\bar{d_0}) = g$. If $\bar{d_0} - d_0 = 1$ we are done; if $\bar{d_0} - d_0 > 1$ we look in $(0,d_0)$ and $(\bar{d_0},n)$, and if $\bar{d_0} - d_0 < 1$ we look in $(d_0,c)$ and $(c,\bar{d_0})$. Thus by recursively using binary search we approach limit values $z$ and $\bar{z}$ with $\bar{z} = z+1$. Because $f$ is continuous it follows that $f(z) = f(\bar{z}) = f(z+1)$, and we are done.
Roots of unity and cyclotomic polynomials
Let $e(x)=\exp(2\pi i x)$ and $U=e({\mathbb Q})$. Putting $a=w^x,b=w^y,c=w^z$, your question can be rephrased as: find all $a,b,c\in U$ such that $$a+b+c=ab+ac+bc. \tag{1}$$ Now, put $u_1=a,u_2=b,u_3=c,u_4=-ab,u_5=-ac,u_6=-bc$. We have: Theorem. Let $u_1,u_2,\ldots,u_6 \in U$. Then $u_1+u_2+\ldots +u_6=0$ iff for some permutation $\sigma$ of $[|1..6|]$, and $v=(u_{\sigma(1)},u_{\sigma(2)},\ldots,u_{\sigma(6)})$, one of the following three holds: (i) $v=(\rho_1,-\rho_1,\rho_2,-\rho_2,\rho_3,-\rho_3)$, with $\rho_1,\rho_2,\rho_3 \in U$ (ii) $v=(\rho_1,\rho_1 e(\frac{1}{3}),\rho_1 e(\frac{2}{3}), \rho_2,\rho_2 e(\frac{1}{3}),\rho_2 e(\frac{2}{3}))$, with $\rho_1,\rho_2 \in U$. (iii) $v=(\rho e(\frac{1}{5}),\rho e(\frac{2}{5}), \rho e(\frac{3}{5}), \rho e(\frac{4}{5}),-\rho e(\frac{1}{3}),-\rho e(\frac{2}{3}))$, with $\rho \in U$. Proof: see Henry B. Mann, "On linear relations between roots of unity", Mathematika 12(1965), pp.107-117. For shorter formulas, let us put $j=e(\frac{1}{3}),\eta=e(\frac{1}{5})$. (i) yields the following solutions to (1) : $$ \lbrace a,b,c \rbrace = \lbrace x,-x,-x^{2} \rbrace, \lbrace 1,x,x^{-1} \rbrace, \ \textrm{or} \ \lbrace y_1,y_2,y_1y_2 \rbrace, \ \textrm{with} \ x\in U,y_1=\pm i, y_2=\pm i. \tag{2} $$ (ii) yields the following solution to (1) : $$ \lbrace a,b,c \rbrace = \lbrace x,jx,j^2x\rbrace \ \textrm{or} \ \lbrace x,-j,-j^2\rbrace, \ \textrm{with} \ x\in U. \tag{3} $$ (iii) yields the following solutions to (1) : $$ \lbrace a,b,c \rbrace = \lbrace -\eta^x, j^{y}\eta^{2x},j^{y}\eta^{4x}\rbrace, \ \textrm{with} \ 1 \leq x \leq 4, 1\leq y \leq 2. \tag{4} $$
Discretization before or after closing the feedback loop?
By using a discrete controller you have to implement a controller in the z-domain, and you indirectly also discretise your plant. The plant is effectively discretised because the controller will only consider outputs from the plant at a fixed sample rate, and the controller will usually hold its output constant during one such sample time. So I would say to discretise the controller and plant before closing the loop. When discretising I would use ZOH for the plant and preferably Tustin with pre-warping for the controller (or design the controller directly in the z-domain). Often the biggest difference between $G(s)$ and its ZOH discretization is that the latter adds delay, and thus a phase drop proportional to frequency. The DAC, ADC, computing the controller's output and maybe some other factors can add some additional delay to the plant. Delay will affect the phase margin of your closed loop, so too much delay might make it unstable. If you want to design a digital controller for $G(s)$ in real life then I would recommend measuring its frequency response function, or using some other system identification method, to be sure about the amount of delay your system has.
Contour Integration to Evaluate Improper Integral
Note that for $0<\text{Re}(p)<1$, we have $$\begin{align} \int_{-\infty}^\infty\frac{e^{px}}{1+e^x}\,dx &=2\pi i \left(\frac{-e^{i\pi p}}{1-e^{i2\pi p}}\right)\\\\ &=\frac{2\pi i}{e^{i\pi p}-e^{-i\pi p}}\\\\ &=\frac{\pi}{\sin(\pi p)} \end{align}$$ which is real valued when $p$ is a real number with $0<p<1$.
Suppose $a\in\mathbb{R}^n$ is a point and $Y\subset\mathbb{R}^n$ is a closed set. Prove that $\exists\:y\in Y$ s.t. $d(a,Y)=|a-y|$
Last part: yes, greatest lower bound is the same as inf. Proof of existence of $y$: there exists $\{y_n\} \subset Y$ such that $|a-y_n| \to d(a,Y)$. It follows that $|y_n| \leq |y_n-a|+|a|<1+|a|$ for $n$ large enough, so $\{y_n\}$ is a bounded sequence. By the Bolzano–Weierstrass theorem there is a subsequence $\{y_{n_k}\}$ which converges to some point $y$. Since $Y$ is closed, $y \in Y$. Now $d(a,y)=\lim d(a, y_{n_k})=d(a,Y)$.
Clarification of Proof on Elements of Set Theory Chapter $2$ Q.$8$
As it stands right now, you need to improve your argument. The theorem you are using refers to the set of all sets, not just the set of all singletons. However, take a look at Why does the set of all singleton sets not exist? and that might help improve your argument
Have no idea how to evaluate this integral with $e$
Let $e^{2x}=u$. Then $e^{4x}=u^2$ and $du=2e^{2x}dx$. Then we have $\frac{1}{2}\int\dfrac{du}{2^2+u^2}=\frac{1}{4}\tan^{-1}(u/2)+C$. Now substitute back $e^{2x}$ for $u$.
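A numerical spot-check (a sketch, assuming the original integrand was $\frac{e^{2x}}{4+e^{4x}}$, which is what the substitution implies): differentiating $F(x)=\frac14\tan^{-1}(e^{2x}/2)$ should recover the integrand.

```python
import math

integrand = lambda x: math.exp(2 * x) / (4 + math.exp(4 * x))
F = lambda x: 0.25 * math.atan(math.exp(2 * x) / 2)

h = 1e-6
for x in (-1.0, 0.0, 0.7, 1.5):
    dF = (F(x + h) - F(x - h)) / (2 * h)   # central difference ≈ F'(x)
    assert abs(dF - integrand(x)) < 1e-8
print("antiderivative checks out")
```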
Justifying exchange of integration order
This appears to be less about mathematics and more about how to negotiate with a professor regarding which facts can be used. One possibility is to clearly state what you are using: Special case of Fubini's theorem. When $a,b,c,d$ are finite and $f$ is jointly continuous in both variables, we have $$\int_a^b \int_c^d f(x,y)\,dy\,dx=\int_c^d \int_a^b f(x,y)\,dx\,dy$$ Then apply this to each side of rectangle $\Gamma$ separately. Integral over a line segment in the complex plane is readily written as an integral over a closed interval in $\mathbb R$, Alternative approach, without Morera's theorem: write $$\frac{\sin tz}{t}=\sum_{n=1}^\infty (-1)^{n-1} t^{2n-2} \frac{z^{2n-1}}{(2n-1)!}$$ and observe that as long as $z$ is bounded ($|z|\le M$) the series converges uniformly on $[0,1]$. Therefore, it can be integrated term by term, which yields a power series representation for the integral.
$f(x)\le g(x) \implies \lim_{x\to a}f(x)\le\lim_{x\to a}g(x)$
It suffices to prove the following statement: If $f(x) \leq 0$ and $\displaystyle \lim_{x \to a} f(x) = L$, then $L \leq 0$. We can prove $L \leq 0$ by contradiction. So suppose that $L > 0$, then choose $\epsilon = L > 0$, then there is a $\delta > 0$ such that: if $ 0 < |x - a| < \delta$ then $|f(x) - L| < L$, and this means: $-L < f(x) - L < L$ or $f(x) > 0$. Contradiction.
Permutation matrix notation
Horn and Johnson in their book on Matrix Analysis use the same $P$. Look at this: http://books.google.de/books?id=LeuNXB2bl5EC&pg=PA165&lpg=PA165&dq=Horn+and+Johnson+matrix+analysis++permutation+matrix&source=bl&ots=SmGty8svBa&sig=BruvJ-h7lRg3BQ5t98UIqfetiu8&hl=fr&sa=X&ei=xFMoUq28FIqztAbj1oFY&ved=0CIEBEOgBMAg#v=onepage&q=Horn%20and%20Johnson%20matrix%20analysis%20%20permutation%20matrix&f=false
Bell numbers (number of partitions of set of cardinality n) recurrence relation proof
For concreteness, let's suppose we are partitioning the set $\{1, 2, \dots, n+1\}$. Focus first on the block containing the element $1$. Let $k$ denote the number of elements other than $1$ that belong to this block. We can choose these elements in $\binom{n}{k}$ ways. Having formed this block, we partition the remaining $n + 1 - (k + 1) = n -k$ elements in $B_{n-k}$ ways. Summing over $k$ gives $$ \sum_{k = 0}^n \binom{n}{k} B_{n-k}. $$ By the symmetry of the binomial coefficients, this expression is equivalent to $$ \sum_{k = 0}^n \binom{n}{k} B_{k}. $$
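The recurrence $B_{n+1}=\sum_{k=0}^n\binom nk B_k$ derived above is easy to check numerically; a short Python sketch:

```python
from math import comb

# Bell numbers via B_{n+1} = sum_{k=0}^{n} C(n, k) * B_k, starting from B_0 = 1.
def bell_numbers(count):
    B = [1]  # B_0 = 1: the empty set has exactly one partition
    for n in range(count - 1):
        B.append(sum(comb(n, k) * B[k] for k in range(n + 1)))
    return B

print(bell_numbers(6))  # [1, 1, 2, 5, 15, 52]
```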
Is there an easy way to calculate $\Big(1-\frac{n}{4}\Big)\Big(1-\frac{n}{5}\Big)\dots \Big(1-\frac{n}{30}\Big)$ for any integer $n$?
Rewrite $1 - \frac{n}{4}$ as $\frac{4 - n}{4}$; do this for each term, and see whether you notice a pattern. You can simplify to a quotient of factorials. Of course, to compute those may take some work, but for an approximate value, Stirling's approximation might suffice. Oh...also pretend that your whole product is multiplied by $(1 - n/1) (1 - n/2) (1 - n/3)$, and then divide by that at the end.
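Carrying the hint out (the case split below is ours): writing every factor as $\frac{k-n}{k}$, the denominator is always $30!/3!$ and the numerator telescopes into factorials. A Python check with exact rationals:

```python
from fractions import Fraction
from math import factorial

# Exact value of (1 - n/4)(1 - n/5)...(1 - n/30).
def direct(n):
    p = Fraction(1)
    for k in range(4, 31):
        p *= Fraction(k - n, k)
    return p

# Quotient-of-factorials form; the case split is ours.  The denominator is
# always 30!/3!; the numerator is prod_{k=4}^{30} (k - n).
def closed(n):
    denom = factorial(30) // factorial(3)
    if n <= 3:
        num = factorial(30 - n) // factorial(3 - n)
    elif n <= 30:
        num = 0  # the factor with k = n vanishes
    else:  # 27 negative factors, hence the minus sign
        num = -(factorial(n - 4) // factorial(n - 31))
    return Fraction(num, denom)

assert all(direct(n) == closed(n) for n in range(-5, 40))
print("closed form matches for -5 <= n <= 39")
```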
Spivak Calculus on Manifolds - Problem 4-1
The second summand in your last equation (the one with $\star$ on the left hand side) should be $(-1)\cdot(0 \cdot 0)$, since you are evaluating $\phi_4$ on $e_3$ and $\phi_3$ on $e_4$.
Finding least positive integer $n$
For $n=41$ we have $n^4+1=41^4+1=2\cdot 137\cdot 10313$. It is not difficult to see that there is no smaller $n$ with $274=2\cdot 137\mid n^4+1$. First, $n$ has to be odd. Then it is enough to solve the congruence $n^4+1\equiv 0 \bmod 137$.
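Assuming the question asks for the least positive $n$ with $274\mid n^4+1$ (as the factorization above suggests), a brute-force check:

```python
# Smallest positive n with modulus | n^4 + 1, by direct search.
def least_n(modulus):
    n = 1
    while (n ** 4 + 1) % modulus != 0:
        n += 1
    return n

print(least_n(274))  # 41
```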
Question about application of double integrals to find out volume of solid cone!
We can see that the cross-sectional figure is obtained from the base $S$ by a homothety (scaling) centered at the vertex, where the ratio of the transformation is $k=t/h$. So it is worth asking: how does the area of a plane region change when we apply a homothety? The answer can be deduced from basic properties of the integral: the area of the new figure is $k^2 A$, where $A$ is the area of the initial figure and $k$ is the ratio of the homothety. With this in mind we can easily deduce that the cross-sectional area in question is $(t/h)^2 A$.
Relation between the determinant and its inverse for solving a linear equation
There's a matrix called the adjugate of $A$, with components given by: $$\mathrm{adj}(A)_{ji}=\frac 12\epsilon_{ipq}\epsilon_{jkl}A_{pk}A_{ql}.$$ The matrix and its adjugate obey the property that $$\mathrm{adj}(A)A=\mathrm{det}(A)I=A\mathrm{adj}(A)$$ and so when we have $\mathrm{det}(A)\neq 0$ we can divide through by $\mathrm{det}(A)$ and get that $$A^{-1}=\frac{\mathrm{adj}(A)}{\mathrm{det}(A)}.$$ So in components we have $$A^{-1}_{\;\;\,ji}=\frac{\epsilon_{ipq}\epsilon_{jkl}A_{pk}A_{ql}}{2\epsilon_{abc}A_{a1}A_{b2}A_{c3}}$$ and since $x=A^{-1}w$ we have $$x_j=\frac{\epsilon_{ipq}\epsilon_{jkl}A_{pk}A_{ql}w_i}{2\epsilon_{abc}A_{a1}A_{b2}A_{c3}}$$
confusing between rational and integer numbers
Integers are a subset of rational numbers, meaning all integers are rational numbers, but not all rational numbers are integers. This Venn diagram shows it best, I think (ignoring the fact that there is space outside irrational and rational numbers, because the union of these two sets is the set of reals).
Application of Residues
It's not "that we assume off the bat that our limits of integration are sufficiently large or small"; it's that we extend the integral so the residue theorem applies. Once we have that, we then play with the components of our closed curve to simplify our expression and solve more complicated integrals. Consider the integral $ \int_\mathbb{R} f(x) dx $ where $f(x)$ is well defined on $\mathbb{R}$ but isn't nice for integrating. We can extend this integral into the complex plane as a line integral over some closed curve, i.e. we look at $\oint_\gamma f(z) dz$. For the sake of having an example, let's take our path $\gamma$ as a semicircle about $z=0$ with radius $R$. By linearity of the integral, we can break up the path into two integrals, the bit on the real axis, and the arc in the complex plane. $\oint_\gamma f(z) dz = \int_{-R}^R f(x) dx + \int_{arc} f(z) dz$ What's nice about this? We have the residue theorem, so we know $\int_{-R}^R f(x) dx + \int_{arc} f(z) dz = 2 \pi i \sum \text{Residues}(f)$ This means that we need to take a limit to play around with this, i.e. $\int_\mathbb{R} f(x) dx = \lim_{R \to \infty} \left( 2 \pi i \sum \text{Residues}(f) - \int_{arc} f(z) dz \right)$ Residues are something we can calculate quite easily, and arcs generally go to zero in calculations. This is why we care about this method.
How do you integrate $\int e^x \cot(x)dx$
Your calculation is wrong. The integral of $e^x\cdot\cot(x)dx$ is not elementary. But, the integral you started does have an antiderivative. $$\frac d{dx}(\arctan(\sqrt x))=\frac12\cdot\frac1{x^{\frac12}+ x^{\frac32}}.$$ So when $t=\arctan(\sqrt x)$, the integral becomes $2e^t dt$, so the final antiderivative is $$2e^t+C=2e^{\arctan(\sqrt x)} + C.$$
Does the set of zeros of an absolutely continuous function contain an open interval?
Let $A\subset[0,1]$ be a closed set of positive measure that doesn't contain any interval (this, for instance). Then the function $\phi(x)=d(x,A)$ is Lipschitz, hence absolutely continuous, and $\{\phi=0\}=A$ doesn't contain any interval.
How do I prove the set of the equivalence classes of $R$ has the same cardinality as the set of all finite sets of primes?
Let $m$ and $n$ be two positive square-free integers (i.e., $p^2$ doesn't divide $n$ or $m$, for any prime $p$), and suppose that $mRn$. If $mRn$, then there exists a rational $q$ such that $\frac{m}{n}=q^2$. Now, we may assume $q>0$, and this rational number can be written as $q = \frac{r}{s}$, where $r,s$ are positive integers and $\gcd(r,s)=1$. So we have $$ r = p_1^{\alpha_1}\cdots p_k^{\alpha_k} \quad\text{and}\quad s = p_1^{\beta_1}\cdots p_k^{\beta_k}, $$ where $\alpha_i\beta_i=0$, that is, for each $i$, we can't have both $\alpha_i\neq0$ and $\beta_i\neq0$. It follows that $$ms^2=nr^2,$$ and so $r^2|m$ and $s^2|n$ (because $\gcd(r,s)=1$). But then $p_i^{2\alpha_i}|m$ and $p_i^{2\beta_i}|n$. Since $m$ and $n$ are square-free, it must be that $\alpha_i = \beta_i =0$, for all $i$. But that means $q=r=s=1$ and $m=n$. So, if $m \neq n$ then they represent different classes. Given that they are square-free, each number is fully determined by the (finite) set of primes that divide it. Edit. The reasoning above shows that each square-free positive integer is in a different class. To finish the desired correspondence, one still has to prove that there are no other classes, i.e., that every $n \in \mathbb Z^+$ is related to some square-free one. This follows from the fact that $nRnp^2$, for every prime $p$ and integer $n$. Indeed, there exists $q \in \mathbb Q$ such that $\frac{np^2}{n}=q^2$: just take $q=p$.
Complex linear functionals and real linear functionals
Let $z \in \def\C{\mathbf C}\C$ be any complex number, then $\def\Re{\operatorname{Re}}\def\Im{\operatorname{Im}}$ $$ z = \Re z + i\Im z $$ multiplying by $i$, we have $$ iz = -\Im z + i\Re z$$ that is the real part of $iz$ is $\Re(iz) = -\Im z$, so we get $$ \Im z = -\Re (iz) $$ Folland applies this to $z = f(x)$, giving $$ \Im f(x) = -\Re [if(x)] $$ As $f$ is complex linear, $if(x) = f(ix)$, giving $$ \Im f(x) = -\Re f(ix) $$ As $u$ was defined to be $\Re f$, we have $$ \Im f(x) = -u(ix) $$ Hence \begin{align*} f(x) &= \Re f(x) + i\Im f(x)\\ &= u(x) + i \cdot \bigl(-u(ix)\bigr)\\ &= u(x) -iu(ix) \end{align*}
What is the expected value of geometric brownian motion?
The (big) problem in your calculation is that you seem to think that $$ E(e^X) = e^{E(X)}$$ which is wrong. If you search for characteristic function of a normal and remember that $B_t \sim N(0,t)$ you should be able to compute the correct result
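A quick numerical illustration of why that identity fails: for $X\sim N(0,t)$ one has $E(e^X)=e^{t/2}$, not $e^{E(X)}=e^0=1$. A pure-Python midpoint-rule sketch:

```python
import math

# For X ~ N(0, t): E[exp(X)] = exp(t/2), while exp(E[X]) = exp(0) = 1.
def expected_exp(t, a=-40.0, b=40.0, steps=200_000):
    h = (b - a) / steps
    total = 0.0
    for i in range(steps):
        x = a + (i + 0.5) * h
        # e^x times the N(0, t) density
        total += math.exp(x - x * x / (2 * t)) / math.sqrt(2 * math.pi * t)
    return total * h

t = 1.0
print(expected_exp(t), math.exp(t / 2))  # both close to 1.6487
```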
Showing that an integral function is continuous
If $u_1\leq u_2$, we have $$|G(u_1)-G(u_2)|=\left|\int_{(u_1,u_2)}xg(x)d\lambda(x)\right|\leq\int_{(u_1,u_2)} \left|xg(x)\right|d\lambda(x) \\ \leq \int_{(u_1,u_2)} \left|g(x)\right|d\lambda(x)\leq\sqrt{|u_2-u_1|}\cdot\lVert g\rVert_{L^2}.$$
Assume integer division
By integer division, they might mean $f(X)=\left\lfloor X/2\right\rfloor$, though it's hard to say for sure. See this.
Probability first sample is the smallest
The first sample must be one of the $100$ possible values of $X$, and as you already know the remaining samples will be larger than $k$ with probability $(P(X>k))^N$, you can find the answer by summing over all possible values for the first sample $$ \sum_{k=1}^{100} P(X=k)(P(X>k))^N $$ If you know the distribution, you can do some things to simplify or approximate this.
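For a concrete instance (assuming, for illustration, that $X$ is uniform on $\{1,\dots,100\}$ and "smallest" is meant strictly):

```python
# P(X = k) = 1/100 and P(X > k) = (100 - k)/100 under the uniform assumption.
def prob_first_is_strict_min(N, values=100):
    p = 1.0 / values
    return sum(p * ((values - k) / values) ** N for k in range(1, values + 1))

print(prob_first_is_strict_min(0))  # 1.0: with no further samples the event is certain
print(prob_first_is_strict_min(5))
```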
$A_1,A_2$ fulfill property, but their sum $A_1+A_2$ does not
Here is a counterexample. Let $V = \mathbb{R}$ and $$ A_1(x)= \begin{cases} \frac1x & \text{if } x \ne 0 \\ 42 & \text{if } x = 0 \end{cases} $$ and $$ A_2(x)= \begin{cases} -\frac1x & \text{if } x \ne 0 \\ 23 & \text{if } x = 0 \end{cases} $$ Then, it is easy to check that $A_1$ and $A_2$ satisfy your property, but $A_1 + A_2$ does not satisfy it.
What is a better attempt to create a sigma-algebra?
The most standard way to construct a $\sigma$-algebra is to consider some sets to be "measurable" and then to take the minimal $\sigma$-algebra which contains those. This way is general, as any $\sigma$-algebra can be constructed this way. Observe first that $P(\mathbb{R}^n)$ is a $\sigma$-algebra which contains every subset of $\mathbb{R}^n$. Now suppose that $(\mathcal{F}_i)_{i\in I}$ is a family of $\sigma$-algebras. Then it is not hard to show that the intersection $\bigcap_{i\in I} \mathcal{F}_i$ is a $\sigma$-algebra. This leads to the following natural definition: Let $A\subseteq P(\mathbb{R}^n)$ be any family of subsets of $\mathbb{R}^n$. It could be the open sets, or the convex hulls, or whatever you can think about. Then define a $\sigma$-algebra $\sigma(A)$ to be the intersection over all $\sigma$-algebras which contain $A$. In other words, $\sigma(A)$ is the minimal $\sigma$-algebra which contains the elements in $A$. So here are a few instances that might interest you: If $A=\{B\}$ is a single set then $\sigma(A) = \{\emptyset,\mathbb{R}^n,B,B^c\}$. (see @Kavi Rama Murthy 's comment) If $A$ is a $\sigma$-algebra, then $\sigma(A)=A$. In other words, you can construct any $\sigma$-algebra that way. If $A$ are the open sets, then $\sigma(A)$ is the Borel $\sigma$-algebra (but I see you said you want to avoid it). If $n=1$ and $A$ contains all intervals $[a,b],(a,b),(a,b],[a,b)$ then $\sigma(A)$ is again the Borel $\sigma$-algebra; completing it with respect to Lebesgue measure gives the Lebesgue $\sigma$-algebra. One can extend this to higher dimensions by taking products of the above. In general, if one considers a $\sigma$-algebra $\mathcal{F}_X$ on $X$ and $\mathcal{F}_Y$ on $Y$, the natural way to construct a $\sigma$-algebra on $X\times Y$ is by taking $\sigma(\mathcal{F}_X\times \mathcal{F}_Y)$. One last point: it is also possible to replace the family of sets $A$ with a family of functions. That is, instead of pointing out which sets you choose to be measurable, you can point out which functions are to be measurable.
In other words, if $A$ is a family of functions $f:\mathbb{R}^n\rightarrow Y$ for some measure space $Y$, one can define $\sigma(A)$ to be the smallest $\sigma$-algebra for which all $f\in A$ are measurable (by taking an intersection of all the relevant $\sigma$-algebras).
What is the maximum cardinality of $C$?
If $A = \{1, 2, 3\}$ then we might have $C = \{\varnothing, \{1\}, \{1, 2\}, \{1, 2, 3\}\}$ which would be $3+1$ elements. We absolutely cannot have $$C = \{\varnothing, \{1\}, \{2\}, \{3\}, \{1, 2\}, \{1, 3\},\{2, 3\}, \{1, 2, 3\}\}$$because in that case, for instance, $\{1\} \in C$ and $\{2\} \in C$, but we have neither $\{1\}\subseteq \{2\}$ nor $\{2\}\subseteq \{1\}$. For this reason $C$ has to be a strict subset of the powerset of $A$ (unless $A$ has only $0$ or $1$ element), which means it cannot have $2^n$ elements. As for why it's exactly $n+1$, note that if $A = \{x_1,\ldots,x_n\}$, then $C = \{S_0, S_1, \ldots,S_n\}$ with $S_0 = \varnothing$ and otherwise $S_i = \{x_1, \ldots,x_i\}$ does have the property we want, so the answer is at least $n+1$. There also cannot be more than $n+1$ elements in $C$, because if there were, then there would be $S_i, S_j \in C$ with $S_i \neq S_j$ such that $|S_i| = |S_j|$, by the pigeonhole principle (there are only $n+1$ different cardinalities available). In that case we would have neither $S_i \subseteq S_j$ nor $S_j \subseteq S_i$.
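For small $n$ the bound can be confirmed by brute force over all inclusion chains in the power set:

```python
from itertools import combinations

# Longest chain (under proper inclusion) among subsets of {1, ..., n};
# brute force, so only feasible for tiny n.
def longest_chain(n):
    elems = range(1, n + 1)
    subsets = [frozenset(c) for r in range(n + 1) for c in combinations(elems, r)]
    best = {}  # longest chain ending at each subset
    for s in sorted(subsets, key=len):
        best[s] = 1 + max((best[t] for t in best if t < s), default=0)
    return max(best.values())

print([longest_chain(n) for n in range(4)])  # [1, 2, 3, 4]
```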
Inclusion exclusion problem, why is my answer nonsense?
You would be correct if 18% watched at least 2 shows. But it is 18% watched exactly 2 shows. So $p(a\cap b)+p(a\cap c)+p(c\cap b) - 3 p(a\cap b \cap c)=.18$
On the fundamental domain of an action
Based on the following facts, found in the above cited paper, I prove that $-p \not \in \overline{\Delta}_p$. Fact 1: $\partial \Delta_p = \bigcup_{g \neq e} \partial \Delta_p \cap \partial \Delta_{g(p)}$ Fact 2: $\overline{\Delta}_p \cap \overline{\Delta}_{g(p)} \subset A_{p, g(p)}$, where $A_{p,g(p)} = \{ x \in \mathbb{S}^n : d(p, x) = d(g(p), x)\}$. Suppose $-p \in \overline{\Delta}_p$. We divide into two cases. Case 1: $-p \in \Delta_p$. In this case, we would have, for each $g\neq e$, $$\pi = d(p, -p) < d(g(p), -p),$$ which is absurd. Case 2: $-p \in \partial \Delta_p$. In this case, from the facts above, we conclude that there exists $g_0 \in \Gamma, g_0 \neq e$, such that $-p \in \partial \Delta_p \cap \partial \Delta_{g_0(p)} \subset A_{p,g_0(p)}$. Thus, from $$\pi = d(p, -p) = d(g_0(p), -p)$$ we conclude that $g_0(p) = p$, which cannot occur, since no element of $\Gamma$ has a fixed point.
Prove that $\text{dom }\bigcup\mathfrak{A}=\bigcup\{\text{dom }R|R\in\mathfrak{A}\}$
I think that thinking about ordered pairs in the Kuratowski definition obfuscates the meaning, and it makes it somewhat harder to understand this question. Instead of writing $\{\{\{a\},\{a,b\}\ldots$, write $\{R,S\}$ and $R=\{\langle a,b\rangle\}$ and $S=\{\langle a,c\rangle,\langle b,d\rangle\}$. Then $\bigcup\{\operatorname{dom} R\mid R\in\frak A\}$ is the set $\{a,b\}$ and $\operatorname{dom}\bigcup\frak A$ is the domain of the relation $\{\langle a,b\rangle,\langle a,c\rangle,\langle b,d\rangle\}$, which happens to be exactly $\{a,b\}$. Work with the definition of $\operatorname{dom}$, and use the $\langle a,b\rangle$ notation -- the fewer braces, the easier it is for humans to parse the statement.
How to obtain a formula for $f(z)$ given this recurrence
In the last step you plug $1+x$ in the place of $x$, hence you should have $$ f\left( \frac{1}{x}\right) = 1 + \frac{1}{x}f\left( \frac{1}{1+x}\right) = $$ $$ 1 + \frac{1}{x}\left[ 1 + \frac{1}{1+x}f\left( \frac{1}{2+x} \right)\right]$$ So I think the telescopic series you achieved isn't correct.
Prove that the degree of 2 vertices, which are the start and end of the longest path in a graph, are less than or equal to the length of the path
How many neighbors of $x$ can be on the path $P$? Show that if $\text{deg}(x)>m$, then you can lengthen the path $P$ to include one more edge, which is a contradiction.
How to choose contour for computing this particular real integral?
Since $f(z) = e^{-z} z^{s-1}$ is holomorphic on $Re(z) > 0$ with exponential decay as $Re(z) \to +\infty$, with the Cauchy integral theorem you get that for every $Re(b) > 0$ : $$Re(s) > 0, \qquad\qquad\Gamma(s) = \int_0^\infty e^{-z}z^{s-1}dz=\int_0^{b \infty} e^{-z}z^{s-1}dz $$ $$ = \int_0^\infty e^{-bx}(bx)^{s-1}d(bx) = b^{s} \int_0^\infty e^{-bx}x^{s-1}dx$$ Now for $Re(s) \in (0,1)$, you can extend it by continuity to $b = \pm i$ and get $$\Gamma(s) =e^{i \pi s/2} \int_0^\infty e^{-ix}x^{s-1}dx=e^{-i \pi s/2} \int_0^\infty e^{ix}x^{s-1}dx$$ i.e. for $Re(a) \in (-1,0)$ : $$\boxed{\int_0^\infty \cos(x) x^{-a}dx = \frac{e^{ i\pi (1- a)/ 2}+e^{- i \pi(1-a) / 2}}{2}\Gamma(1-a) = \sin( \pi a/ 2)\Gamma(1-a)}$$
Finding the sum of digits of $9 \cdot 99 \cdot 9999 \cdot ... \cdot (10^{2^n}-1)$
Let us assume that $N$ is a number with $2^k-1$ decimal digits, whose last digit is $\geq 1$. Let $S(N)$ be the sum of digits of $N$. Let us study the sum of digits of $$ N\cdot(10^{2^k}-1) = N\cdot 10^{2^k}- N = N\cdot 10^{2^k} - 10^{2^k} + (10^{2^k}-1-N)+1. $$ We have: $$ S(N\cdot(10^{2^k}-1)) = S(N)-1+\left(9\cdot(2^k-1)-S(N)+9\right)+1 $$ and it is very interesting to notice that such sum does not depend on $S(N)$, but simply is $9\cdot 2^k$. The number $$ N = 9 \cdot 99 \cdot 9999 \cdots (10^{2^{k-1}}-1) $$ has $2^k-1$ decimal digits, the last of them being $1$ or $9$. By induction it follows that $$ S\left(9\cdot 99\cdots (10^{2^k}-1)\right) = \color{red}{9\cdot 2^{k}}.$$
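The closed form $S = 9\cdot 2^k$ is easy to confirm with exact integer arithmetic:

```python
# S(9 * 99 * 9999 * ... * (10^(2^k) - 1)) should equal 9 * 2^k.
def digit_sum(n):
    return sum(int(d) for d in str(n))

for k in range(1, 6):
    N = 9  # the leading factor 9 = 10^(2^0) - 1
    for j in range(1, k + 1):
        N *= 10 ** (2 ** j) - 1
    assert digit_sum(N) == 9 * 2 ** k
print("digit sums verified for k = 1..5")
```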
Are there more branches to Calculus other than differentiation and integration?
Yes, you could say that Calculus has some "minor branches." One "minor branch" in most first-year Calculus textbooks is infinite series. This topic is related to both differentiation and integration but belongs to neither. This is often at the end of the textbook, not always reached during the class lessons.
rates of motion of projected points along a circle
Instead of moving $p$ around, let us consider both $p$ and $q$ as functions of the angle $\theta$ of the line through the origin. (Note: this is a different $\theta$ than in my first comment.) In differential terms, the distance that $p$ moves by a change in $\theta$ is $\frac{\lvert p \rvert\, \mathrm d\theta}{\sin\phi}$, where $\lvert p \rvert$ is the distance of $p$ from the origin, and $\phi$ is the angle between the line and the tangent to the circle: Now because it is a circle, $\phi$ is the same for both $p$ and $q$. Also, the ratio of $\lvert p \rvert$ to $\lvert q \rvert$ is the same as the ratio of their $x$-coordinates. Thus you get the desired result. So to answer your question, this result isn't too hard to derive from elementary Euclidean geometry. Perhaps it falls under your second bullet point, in the same sense that no one really "recalls" $19\times23=437$ from memory but can work it out if the need arises.
Determinant of the matrix $\binom{m_i}{j-1}$
Result $$\det \binom{m_i}{j-1} = \frac{1}{\prod_{k=1}^{n-1} k^{n-k}}\prod_{0<i_1<i_2\leq n}\left(m_{i_2}-m_{i_1}\right)$$ Proof Let us simplify the proof by breaking it into several elementary facts. Fact 1 $$\det \binom{m_i}{j-1} = Const_n\prod_{0<i_1<i_2\leq n}\left(m_{i_2}-m_{i_1}\right)$$ Proof 1 If we compute $\det \binom{m_i}{j-1}$ per definition, as a sum of the products of the elements taken with the appropriate sign, we will get a sum of products of type $m_1^{q_1}m_2^{q_2}\dots m_n^{q_n}$, where $\sum_{i=1}^n q_i \leq \frac{n(n-1)}{2}$ (each column will contribute its column number minus one to this sum). In other words, we will get a polynomial in $n$ variables, and the degree of each summand is at most $\frac{n(n-1)}{2}$. On the other hand, if $\exists i_1,i_2 : m_{i_1}= m_{i_2}$, then 2 rows will be the same and $\det \binom{m_i}{j-1}$ is $0$. Thus, our determinant can be presented as $$\det\binom{m_i}{j-1} =SomePolynomial\left(m_{1},\dots, m_n\right)\prod_{i_1< i_2}\left(m_{i_2}- m_{i_1}\right)$$ (this can be easily proved by contradiction). So we have two representations of the same polynomial; the first one has "overall" degree of at most $\frac{n(n-1)}{2}$, the second one will have "overall" degree of at least $\frac{n(n-1)}{2}$. Thus, the "overall" degree of this polynomial is exactly $\frac{n(n-1)}{2}$, and, equivalently, $$SomePolynomial\left(m_{1},\dots, m_n\right)\equiv SomeConstant$$ This constant is the same for any given $n$, but can vary with $n$, so we will add a subscript to recognise it: $Const_n$. Fact 2 Recursive formula for $Const_n$ is $$Const_n = \frac{Const_{n-1}}{(n-1)!}$$ Proof 2 Let us compute the determinant of matrix $\binom{m_i}{j-1}$ by expanding it over the last column.
We have $$Const_n\prod_{0<i_1<i_2\leq n}\left(m_{i_2}-m_{i_1}\right)= \sum_{I=1}^n (-1)^{n+I}\binom{m_I}{n-1} Const_{n-1}\prod_{0<i_1<i_2\leq n,\; i_1\neq I, i_2\neq I}\left(m_{i_2}-m_{i_1}\right)$$ We consider this as a polynomial with respect to $m_n$ and check the coefficient of $m_n^{n-1}$. This coefficient from the left hand side expression is $Const_n \prod_{0<i_1<i_2< n} \left(m_{i_2}-m_{i_1}\right)$; the one taken from the right hand side (only the $I=n$ term contributes, and the leading coefficient of $\binom{m_n}{n-1}$ as a polynomial in $m_n$ is $\frac{1}{(n-1)!}$) is $(-1)^{n+n}\frac{1}{(n-1)!}Const_{n-1} \prod_{0<i_1<i_2< n} \left(m_{i_2}-m_{i_1}\right) = \frac{Const_{n-1}}{(n-1)!} \prod_{0<i_1<i_2< n} \left(m_{i_2}-m_{i_1}\right)$. Since these values are equal, we have $$Const_{n} = \frac{Const_{n-1}}{(n-1)!}$$ Fact 3 $$Const_{2}=1$$ Fact 4 $$Const_{n} = \prod_{k=1}^{n-1} \frac{1}{k!} = \frac{1}{\prod_{k=1}^{n-1} k^{n-k}}$$ Proof 4 $\frac{Const_{n-1}}{Const_{n}} = \frac{\prod_{k=1}^{n-1} k^{n-k}}{\prod_{k=1}^{n-2} k^{n-1-k}}=(n-1)!$
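The stated result can be spot-checked with exact rational arithmetic (a small sketch; the elimination routine is generic):

```python
from fractions import Fraction
from math import comb, factorial

def det(mat):
    # Exact Gaussian elimination over the rationals, with row pivoting.
    mat = [row[:] for row in mat]
    n = len(mat)
    sign, d = 1, Fraction(1)
    for c in range(n):
        pivot = next((r for r in range(c, n) if mat[r][c] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != c:
            mat[c], mat[pivot] = mat[pivot], mat[c]
            sign = -sign
        d *= mat[c][c]  # determinant is the product of the pivots
        for r in range(c + 1, n):
            f = mat[r][c] / Fraction(mat[c][c])
            mat[r] = [a - f * b for a, b in zip(mat[r], mat[c])]
    return sign * d

# prod (m_{i2} - m_{i1}) over i1 < i2, divided by 1! 2! ... (n-1)!
def formula(ms):
    n = len(ms)
    num = 1
    for i2 in range(n):
        for i1 in range(i2):
            num *= ms[i2] - ms[i1]
    denom = 1
    for k in range(1, n):
        denom *= factorial(k)
    return Fraction(num, denom)

ms = [0, 2, 3, 7, 11]
M = [[comb(m, j) for j in range(len(ms))] for m in ms]  # entries binom(m_i, j-1)
assert det(M) == formula(ms)
print("determinant matches the closed form")
```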
Find $C\in\mathbb R$ such that $(2,3,5)$ be in $\text{Im}(F)$
$$(x+Cz, x+y+2z, x+Cy)=(2,3,5)$$ $$\implies x+Cz=2;~~~ x+y+2z=3;~~~~ x+Cy=5$$ $$\begin{bmatrix}1&0&C\\1&1&2\\1&C&0\end{bmatrix}\begin{bmatrix}x\\y\\z\end{bmatrix}=\begin{bmatrix}2\\3\\5\end{bmatrix}$$ applying row transformations $R_2\leftarrow R_2-R_1$, $R_3\leftarrow R_3-R_1$ $$\begin{bmatrix}1&0&C\\0&1&2-C\\0&C&-C\end{bmatrix}\begin{bmatrix}x\\y\\z\end{bmatrix}=\begin{bmatrix}2\\1\\3\end{bmatrix}$$ $R_3\leftarrow R_3-C R_2$ $$\begin{bmatrix}1&0&C\\0&1&2-C\\0&0&C^2-3C\end{bmatrix}\begin{bmatrix}x\\y\\z\end{bmatrix}=\begin{bmatrix}2\\1\\3-C\end{bmatrix}$$ The last row reads $(C^2-3C)z=3-C$, so there is a unique solution $(x,y,z)$ whenever $C\neq 0$ and $C\neq 3$. NOTE:- for $C=0$ the rank of the coefficient matrix is not equal to the rank of the augmented matrix, hence no solution exists; for $C=3$ the last row becomes $0=0$ and there are infinitely many solutions. Take $C=1$ $$\begin{bmatrix}1&0&1\\0&1&1\\0&0&-2\end{bmatrix}\begin{bmatrix}x\\y\\z\end{bmatrix}=\begin{bmatrix}2\\1\\2\end{bmatrix}$$ $R_3\leftarrow \frac{-1}{2}R_3$ followed by $R_2\leftarrow R_2-R_3$, $R_1\leftarrow R_1-R_3$ $$\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}\begin{bmatrix}x\\y\\z\end{bmatrix}=\begin{bmatrix}3\\2\\-1\end{bmatrix}$$ $$x=3;~~y=2;~~z=-1.$$ $$(x+Cz, x+y+2z, x+Cy)=(3+1(-1),3+2+2(-1),3+1(2))=(2,3,5)$$
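The $C=1$ computation can be re-checked by solving the system exactly (a small Gauss-Jordan sketch):

```python
from fractions import Fraction

# Exact Gauss-Jordan elimination for a small square system A x = b.
def solve3(A, b):
    A = [[Fraction(v) for v in row] for row in A]
    b = [Fraction(v) for v in b]
    n = len(A)
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(n):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
                b[r] -= f * b[c]
    return [b[i] / A[i][i] for i in range(n)]

C = 1
x, y, z = solve3([[1, 0, C], [1, 1, 2], [1, C, 0]], [2, 3, 5])
print(x, y, z)  # 3 2 -1
```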
Why is the partial derivative function an isomorphism?
$E$ cannot be empty, it might at most be trivial (i.e., contain only the zero vector). However, we have $$ \dim \ker f'(x_0)=\dim \Bbb R^n-\dim \operatorname{im}f'(x_0)\ge n-m>0.$$ After that, $\dim F=\dim\operatorname{im}f'(x_0)=m$ by surjectivity and $f'(x_0)|_F$ is injective, hence $f'(x_0)|_F\colon F\to\Bbb R^m$ is an isomorphism
Condition to be able to decompose a finite-dimensional real vector space V into kernel and image of a linear map T from V to V
The statement is indeed correct and essentially follows right from the definition. The rank-nullity theorem tells us that $\dim \ker A + \dim \operatorname{im} A = \dim V$, so we only need to prove that $\ker A \cap \operatorname{im} A = 0$. But this is precisely the condition for the geometric multiplicity of the eigenvalue $0$ being equal to its algebraic multiplicity.
Proof verification: Any countable subset of $\Bbb R$ is disconnected
The idea is good, but there are a few problems. For instance, the set $S$ may not have a supremum or infimum. That's not a really serious problem, since you can use $\infty$ instead of $\sup S$ and $-\infty$ instead of $\inf S$ then. However, you can simply take $s_1,s_2\in S$ with $s_1<s_2$ and consider the interval $(s_1,s_2)$. Since its cardinality is greater than the cardinality of $S\cap(s_1,s_2)$, there is some $y\in(s_1,s_2)$ such that $y\notin S$. So, $S$ is not an interval and therefore it is not connected.
Resolvent of a Markov process
Yes, $R_1 \varphi>0$ holds true. Suppose that $R_1 \varphi(x)=0$ for some $x \in E$. Then $$\mathbb{E}^x \left( \int_0^{\infty} e^{- t} \varphi(X_t) \, dt \right)=0$$ implies $$e^{- t} \varphi (X_t)=0$$ almost everywhere (since the mapping $(\omega,t) \mapsto e^{- t} \varphi(X_t(\omega))$ is non-negative). Hence, $$\varphi(X_t)=0$$ almost everywhere. This is a contradiction to our assumption that $\varphi>0$.
Relationship of Two Harmonic Functions Applications?
The analytic function $f(z) = u(x,y) + i v(x,y)$ corresponds to a vector field ${\bf F} = u(x,y) {\bf i} - v(x,y) {\bf j}$ that is irrotational and incompressible. This connection between analytic functions and two-dimensional fluid flows is quite fundamental, and responsible for many applications of complex variables.
A family of lines normal to one and tangent to another curve
Re-write the given line $$y=\sin \theta (x-2)-9 \sin \theta +\sin 3\theta ~~~(0)$$ as $$y=s x -8s-4s^3~~~(1)$$ where $s=\sin \theta$ is a real parameter. Compare it with the normal of the parabola $y^2=4ax$, which is $$y=mx-2am-am^3.$$ This gives $a=4$, so the given family of lines (0) or (1) are normals of the parabola $f(x,y)=y^2-16x=0.$ Usually, a normal at one point of the curve cuts it at some other point as well. Next we set up the first order ODE for (1): eliminating $s$ by differentiating (1) w.r.t. $x$, we get $$s=y'~~~(2).$$ So the required ODE for the family of lines (1) is $$y=xy'-8y'-4y'^3~~~(3)$$ This is the well known Clairaut's equation https://en.wikipedia.org/wiki/Clairaut%27s_equation, an ODE that has two solutions. One of the solutions is (1), where $s$ is a real parameter. The other one is a singular solution without a parameter. To get this singular (fixed) solution of (3), differentiate (3) w.r.t. $x$ to write $$y''(x-8-12y'^2)=0~~~(4)$$ $$ \Rightarrow y''=0 ~ \mbox {or}~ y'^2=\frac{x-8}{12} ~~~(5).$$ Here $y''=0$ gives back (2) and hence the general solution of (3), where the parameter $s$ is the single constant of integration, as the ODE (3) is first order. The second part of (5) gives the fixed singular solution as $$ 27 y^2=(x-8)^3,~ x\ge 8~~~(6).$$ Finally, Eq. (6) gives us the required fixed curve $g(x,y)=27y^2-(x-8)^3=0$, which touches the given family of lines (0) or (1). Note that (6) is a non-quadratic and non-conic curve whose tangent at one point may cut the curve at some other point.
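One can verify the tangency exactly: the line with parameter $s$ meets the singular curve $27y^2=(x-8)^3$ at $x=8+12s^2$ with matching slope (the check below is ours):

```python
from fractions import Fraction

# The line y = s*x - 8*s - 4*s^3 should touch 27*y^2 = (x - 8)^3
# at x = 8 + 12*s^2, with the slope of the curve there equal to s.
def check(s):
    s = Fraction(s)
    x = 8 + 12 * s ** 2
    y = s * x - 8 * s - 4 * s ** 3
    assert 27 * y ** 2 == (x - 8) ** 3        # the point lies on the curve
    # implicit differentiation: 54*y*y' = 3*(x-8)^2, so y' = (x-8)^2/(18*y)
    if y != 0:
        assert (x - 8) ** 2 == 18 * y * s     # slope of the curve equals s
    return True

assert all(check(s) for s in [-3, -1, Fraction(1, 2), 2, 5])
print("tangency verified")
```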
Using Markov Chains on gambling problems
From £$200$, he can go to £$400$ or £$0$.
From £$400$, he can go to £$800$ or £$0$.
From £$800$, he can go to £$1000$ or £$600$.
From £$600$, he can go to £$1000$ or £$200$.
This gives all possible states of the Markov chain.
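The answer above only lists the transitions; to compute anything one also needs the win probability of each bet. Assuming a fair game ($p=\tfrac12$, our assumption) with £0 and £1000 absorbing, the probability $h(200)$ of reaching £1000 follows by substitution:

```python
from fractions import Fraction

# Assume each bet is won with probability p (p = 1/2 below: a fair game).
# h(s) = P(reach 1000 starting from s); 0 and 1000 are absorbing.
# From the transitions listed above:
#   h200 = p*h400,  h400 = p*h800,
#   h600 = p + (1-p)*h200,  h800 = p + (1-p)*h600.
# Substituting everything into the first equation and solving for h200:
p = Fraction(1, 2)
q = 1 - p
h200 = (p ** 3 + p ** 3 * q) / (1 - p ** 2 * q ** 2)
print(h200)  # 1/5 for the fair game
```

The value $1/5 = 200/1000$ is exactly what the optional-stopping (martingale) argument predicts for a fair game.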
Multiplicative group of finite field with non irreducible polynomial
Hint: $x^3+x+2 \equiv (x+1)(x^2+2x+2) \bmod3$ implies that the group is a product of two groups. Find the order of these groups. Solution: Since $x^3+x+2 = (x+1)(x^2+2x+2) = (x+1)((x+1)^2+1)$, we have $$ R= \frac{F_3[x]}{\langle x^3+x+2 \rangle} \cong \frac{F_3[x]}{\langle x+1 \rangle} \times \frac{F_3[x]}{\langle (x+1)^2+1 \rangle} \cong F_3 \times F_3[u] \cong F_3 \times F_9 $$ with $u^2=-1$. Therefore, $ R^{\times} \cong F_3^{\times} \times F_9^{\times} \cong C_2 \times C_8 $ is not cyclic. In particular, $\bar x$ is not a generator. Indeed, if you compute the powers of $\bar x$, using $\bar x^3+\bar x+2=0$, you'll find that $\bar x^8=1$.
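The claim that $\bar x$ has order $8$ (so it cannot generate the $16$ units) can be verified with a few lines of polynomial arithmetic over $F_3$:

```python
# Elements of F_3[x]/(x^3 + x + 2) as coefficient triples (c0, c1, c2),
# reduced with x^3 = -x - 2 = 2x + 1 (mod 3).
def mul(a, b):
    c = [0] * 5
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    c[2] += 2 * c[4]; c[1] += c[4]      # x^4 = 2x^2 + x
    c[1] += 2 * c[3]; c[0] += c[3]      # x^3 = 2x + 1
    return tuple(v % 3 for v in c[:3])

one, xbar = (1, 0, 0), (0, 1, 0)
power, order = xbar, 1
while power != one:
    power = mul(power, xbar)
    order += 1
print(order)  # 8, so x-bar cannot generate the group of 16 units
```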
Solving exponential equations using logarithms (where two things are being multipled together)
Notice $4^{2x+1} = 2^{2(2x+1)}$ $2^{x-1} \cdot 4^{2x+1} = 2^{x-1} \cdot 2^{2(2x+1)} = 2^{x-1 + 2(2x+1)} = 2^{5x+1}=32=2^5$ which implies $5x+1=5$ so $x = \frac{4}{5}$
How to solve a + 2^a = 6?
The function $f(x)=x+2^x$ is increasing as $f'(x)=1+\ln{(2)} 2^x \gt 0$ for all $x\in \mathbb{R}$. So, the only solution to the equation given in the set $\mathbb{R}$ is $a=2$.
Independence in a geometric random variable - Am I doing this correctly?
Everything is correct, except that somewhere you missed a square on $E[X]$: \begin{align}Var(X)=E[X^2]-E[X]^2 \implies E[X^2]=Var(X)+E[X]^2\end{align} Hence\begin{align}E[X^2]&=\frac{1-p}{p^2}+\frac{1}{p^2}=\frac{0.1}{0.9^2}+\frac{1}{0.9^2}=\frac{1.1}{0.9^2}\end{align}
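A quick numerical confirmation (here $X$ counts trials up to and including the first success, with $p=0.9$):

```python
# E[X^2] for a geometric r.v. on {1, 2, ...} with success probability p = 0.9.
p = 0.9
approx = sum(k ** 2 * p * (1 - p) ** (k - 1) for k in range(1, 200))
exact = 1.1 / 0.9 ** 2
print(approx, exact)  # both approx 1.358025
```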
Finite abelian group and GCD.
Let $h , g$ belong to $S$. Then $(o(h),m)=1$ and $(o(g),m)=1$ but how is it true that $(o(gh),m)=1$? Let $a = o(h)$ and $b = o(g)$. Since $(a,m) = (b,m) = 1$, then $(ab,m) = 1$. Now $(gh)^{ab} = g^{ab}h^{ab} = (g^a)^b (h^b)^a = 1$, so $o(gh)$ divides $ab$. Therefore, $(o(gh),m) = 1$.
Given $l$ points in unit $n$-sphere, expectation of smallest radial distance
The cdf of the random variable describing the distance $r$ from a random point in the unit $n$-sphere to the centre is $r^n$, so the pdf is its derivative or $nr^{n-1}$. Now we are looking for the first order statistic of a sample of $l$ points from this distribution, whose pdf computes to (using the formula given on MathWorld) $$l(1-r^n)^{l-1}nr^{n-1}$$ Thus the desired expectation is $$E(n,l)=\int_0^1rl(1-r^n)^{l-1}nr^{n-1}\,dr$$ Substitute $s=r^n$: $$=l\int_0^1s^{1/n}(1-s)^{l-1}\,ds=l\mathrm B\left(1+\frac1n,l\right)=\frac{l\Gamma(1+1/n)\Gamma(l)}{\Gamma(1+1/n+l)}=\frac{l!}{\prod_{i=0}^{l-1}(1+1/n+i)}$$ $$E(n,l)=\prod_{i=1}^l\frac i{1/n+i}$$
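The reduction of the integral to the product formula can be sanity-checked numerically (midpoint rule; the test values are our choice):

```python
# Midpoint-rule value of E(n, l) = int_0^1 r * l*(1 - r^n)^(l-1) * n*r^(n-1) dr.
def expectation_integral(n, l, steps=100_000):
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        r = (i + 0.5) * h
        total += r * l * (1 - r ** n) ** (l - 1) * n * r ** (n - 1)
    return total * h

# Closed-form product derived above: prod_{i=1}^{l} i / (1/n + i).
def expectation_product(n, l):
    out = 1.0
    for i in range(1, l + 1):
        out *= i / (1 / n + i)
    return out

for n, l in [(1, 1), (2, 3), (3, 5)]:
    assert abs(expectation_integral(n, l) - expectation_product(n, l)) < 1e-4
print("integral matches the product formula")
```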
Software to render formulas to ASCII art
You can use this Web Application: Diagon

- No need to download anything
- Supports ASCII and Unicode.
- Supports other kinds of ASCII art diagrams.

Examples:

sum(i^2, i=0, n) = n^3/2+n^2/2+n/6

output (Unicode)

      n
     ___      3    2
     ╲    2  n    n    n
     ╱   i = ── + ── + ─
     ‾‾‾      2    2   6
    i = 0

output (ASCII):

      n
     ===      3    2
     \    2  n    n    n
     /   i = -- + -- + -
     ===      2    2   6
    i = 0

mult(i^2, i=1, n) = (mult(i, i=1, n))^2

                   2
      n    ⎛  n   ⎞
     ┏┳┳┓ 2⎜ ┏┳┳┓ ⎟
     ┃┃ i =⎜ ┃┃ i ⎟
    i = 1  ⎝i = 1 ⎠

sqrt(1 + sqrt(1 + x/2))

       _____________
      ╱       _____
     ╱       ╱     x
    ╱   1 + ╱  1 + ─
    ╲╱      ╲╱     2

[1,2; 3,4] * [x; y] = [1*x+2*y; 3*x+4*y]

    ⎛1 2⎞   ⎛x⎞   ⎛1 ⋅ x + 2 ⋅ y⎞
    ⎜   ⎟ ⋅ ⎜ ⎟ = ⎜             ⎟
    ⎝3 4⎠   ⎝y⎠   ⎝3 ⋅ x + 4 ⋅ y⎠

int(x^2/2 * dx ,0 ,1) = n^3/6

    1
    ⌠  2       3
    ⎮ x       n
    ⎮ ── ⋅ dx = ──
    ⌡  2       6
    0

phi = 1 + 1/(1+1/(1+1/(1+1/(1+...))))

                  1
    φ = 1 + ───────────────────
                      1
            1 + ───────────────
                        1
                1 + ───────────
                          1
                    1 + ───────
                        1 + ...

Disclaimer: I am the author. It is an open source project under the MIT license.
How can I prove that a graph with a required amount of edges per node is invalid?
In this case it's simple: the sum of the degrees must be even, by the Handshaking Lemma. In general, there's a rule for deciding if a sequence of numbers is a graphic sequence, that is, whether there exists a graph with that sequence as the degree sequence of its nodes: general rule
Fatou, Dominated Convergence, etc. for nets (in relation to stochastic processes)
One trick to apply DCT or similar statements is the following: Let's say you want to show $\int X_t \rightarrow \int X$ as $t \rightarrow a$. If you show this for every sequence $t_n \rightarrow a$, then you are done. In this case you are reduced to the more familiar situation. You can also see this by proof by contradiction. If such a statement didn't hold, then for every $\epsilon_n$ you would get a $t_n$ such that something goes wrong. Then consider those $X_{t_n}$'s. But by DCT (or whatever you are using) things can't go wrong for a sequence, hence a contradiction. I have to be a bit vague, as I don't have any particular example of what you are trying to show; but these are some ways that you can prove these sorts of statements.
Automorphisms of an affine line over finite fields
As Mariano comments, $\Bbb A^1_k=\operatorname{Spec}(k[x])$ for any field $k$ (or indeed, any ring $k$). Thus, an automorphism $\varphi$ of $\Bbb A^1$ is dual to an automorphism of $k$-algebras $\varphi^{\sharp}:k[x]\to k[x].$ The map $\varphi^\sharp$ is determined by $\varphi^\sharp(x)=a_0+a_1x+\cdots+a_dx^d.$ And, the inverse $(\varphi^\sharp)^{-1}$ is also determined by $(\varphi^\sharp)^{-1}(x)=b_0+b_1x+\cdots+b_ex^e.$ We must have $$x=(\varphi^\sharp)^{-1}(\varphi^\sharp(x))=(\varphi^\sharp)^{-1}(a_0+a_1x+\cdots+a_dx^d)=a_0+a_1(\varphi^\sharp)^{-1}(x)+\cdots+a_d(\varphi^\sharp)^{-1}(x)^d=a_0+a_1(b_0+b_1x+\cdots+b_ex^e)+\cdots+a_d(b_0+b_1x+\cdots+b_ex^e)^d$$ which is a polynomial of degree $de.$ Thus, $d=e=1,a_1,b_1\neq 0$ and we find that $\varphi^\sharp(x)=a_0+a_1x.$ Edit: Here is a remark that may be more germane to the OP's worry. Consider the case $k=\Bbb F_2.$ The variety $\Bbb A^1$ contains closed points $(x),(x+1)$ corresponding to $0,1\in\Bbb F_2.$ One might think that the product $f(x)=x(x+1)$ lies in "the ideal of" $\Bbb A^1$ since we clearly have $f(0)=f(1)=0,$ and these are the only values in $\Bbb F_2.$ However, classical algebraic geometry is done over algebraically closed fields, which are infinite, and so we cannot run into this problem, by contrivance. To work with nonalgebraically closed fields, or other interesting rings, we pass to the scheme theory of Grothendieck (and many others), taking by definition $\Bbb A^1_k$ to mean the prime spectrum of the (coordinate) ring $k[x].$ One of the beautiful things about this choice of definition is that we are bestowed with an equivalence between the categories of affine schemes/morphisms and (commutative) rings (with unit)/homomorphisms, which means that results such as that which you mention continue to hold, even for spaces that might otherwise exhibit some confusing behaviour.
Of course, it is interesting also to note that $\Bbb A^1_k$ now contains many more points than just $0$ and $1,$ for example containing the generic point, along with those points defined by all the higher degree irreducible polynomials over $k=\Bbb F_2.$
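The remark about $k=\Bbb F_2$ can be checked with a quick script: $f(x)=x(x+1)=x^2+x$ vanishes at every element of the field, yet its coefficient list is nonzero, so it is not the zero polynomial in $\Bbb F_2[x]$ (a minimal sketch using plain integer arithmetic mod 2).

```python
# Over F_2, f(x) = x(x + 1) = x^2 + x vanishes at both field elements,
# yet it is not the zero polynomial (its coefficient list is nonzero).
p = 2
f = lambda x: (x * (x + 1)) % p
assert all(f(x) == 0 for x in range(p))  # f(0) = f(1) = 0 in F_2

coeffs = [0, 1, 1]  # coefficients of 1, x, x^2 for x^2 + x
assert any(c % p != 0 for c in coeffs)   # not the zero polynomial in F_2[x]
```

This is exactly the failure of "function = polynomial" over a finite field that the scheme-theoretic definition sidesteps.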
If $\det\left(\begin{smallmatrix}a & 1 & 1\\1 & b & 1\\1 & 1 & c\end{smallmatrix}\right) > 0 $ then prove that $abc> -8$
For this matrix, $\det=abc-(a+b+c)+2$, so the hypothesis $\det>0$ says exactly that $abc-(a+b+c)>-2$. When $a=b=-3$ and $c=-0.9$, we get $abc-(a+b+c)=-8.1+6.9=-1.2>-2$, but $abc=-8.1<-8$. So the claim is false as stated; some extra hypothesis (e.g. positivity of $a,b,c$) is needed.
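A quick numerical check of this counterexample (numpy is used only to confirm that the determinant of the given matrix expands to $abc-(a+b+c)+2$):

```python
import numpy as np

a, b, c = -3.0, -3.0, -0.9
M = np.array([[a, 1, 1],
              [1, b, 1],
              [1, 1, c]])

det = np.linalg.det(M)
# Cofactor expansion of this matrix gives abc - (a + b + c) + 2:
assert np.isclose(det, a * b * c - (a + b + c) + 2)

assert det > 0          # the hypothesis det > 0 holds (det = 0.8)
assert a * b * c < -8   # yet abc = -8.1 < -8
```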
Geometry/Programming- Draw An Equilateral Triangle Given One Point And A Desired Rotation
Ok, this answer is a bit lame but anyway. To achieve what you are searching for you need just two functions:

A function that draws a triangle with a given corner point, a rotation and a radius; let's call this one drawTriangle.

A function that draws subsequent triangles, drawMoreTriangles.

The problem I ran into when I wanted to create the second part (after getting pygame to work again) was that your triangle drawing function does not work with an arbitrary rotation but only with one very specific rotation ($\frac{\pi}{6}$). I noticed this after implementing part 2! The problem here is that your Point2D class offers too little functionality to work with; what you need is a class like this:

    import math

    class Point2D:
        def __init__(self, x, y):
            self.x = x
            self.y = y

        def __add__(self, obj):
            """Translate: return the componentwise sum of two points."""
            return Point2D(self.x + obj.x, self.y + obj.y)

        def rotate(self, arc):
            """Return this point rotated by arc radians around (0, 0)."""
            sa = math.sin(arc)
            ca = math.cos(arc)
            return Point2D(ca * self.x - sa * self.y,
                           sa * self.x + ca * self.y)

Now you can implement your triangle (part 1) really easily:

    def drawTriangle(window, color, center, radius, rotation, width=0):
        # Start at the vertex determined by the radius, turned by the rotation.
        corner = Point2D(radius, 0).rotate(rotation)
        pointlist = []
        for _ in range(3):
            pointlist.append([int(corner.x + center.x), int(corner.y + center.y)])
            # Rotate the corner by 120 degrees to get the next vertex
            corner = corner.rotate(2 * math.pi / 3)
        pygame.draw.polygon(window, color, pointlist, width)

Now you can also easily achieve part 2, by using the above function for drawing triangles and using rotation and translation! Gonna post a bit more if I've got more time!
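A pygame-free sanity check of the vertex-rotation idea (a standalone sketch; the `rotate` helper here is a free-function version of the point rotation): three successive rotations by $2\pi/3$ must return a vertex to its starting position.

```python
import math

def rotate(x, y, arc):
    """Rotate the point (x, y) by arc radians around the origin."""
    sa, ca = math.sin(arc), math.cos(arc)
    return ca * x - sa * y, sa * x + ca * y

# Three successive 120-degree rotations of a triangle vertex
# must bring it back to where it started (full 360-degree turn).
x, y = 5.0, 0.0
for _ in range(3):
    x, y = rotate(x, y, 2 * math.pi / 3)

assert math.isclose(x, 5.0, abs_tol=1e-9)
assert math.isclose(y, 0.0, abs_tol=1e-9)
```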
Pullback of a function
The following definition is from the book you mention, but I couldn't find your example there. If $\omega$ is a $1$-form on $M,$ we define its pullback $F^{*} \omega$ to be the $1$-form on $N$ given by $$ \left(F^{*} \omega\right)_{p}\left(X_{p}\right)=\omega_{F(p)}\left(F_{*} X_{p}\right) $$ for any $p \in N$ and $X_{p} \in T_{p} N.$ You need to be clear about your definitions and the objects you are working with (in this case, your $x$ and $y$, and the pullback). An example calculation: Consider the one-forms $\alpha_{(x_1,x_2)}=x_1dx_1+0dx_2$ and $\beta_{(x_1,x_2)}=0dx_1+x_2dx_2$. Let $h: \mathbb{R} \rightarrow \mathbb{R}^2$ be given by $h(t)=(\cos(t),\sin(t))$. Consider a general vector field $X$ on $\mathbb{R}$, $X=X^1 \frac{\partial}{\partial t}$. The Jacobian matrix of $h_*$ is $(h_*)_{(t)}= \big(\begin{smallmatrix} -\sin(t) \\ \cos(t) \end{smallmatrix}\big).$ Hence $(h_*)_{(t)}X = -\sin(t)X^1\frac{\partial}{\partial x_1}+\cos(t)X^1\frac{\partial}{\partial x_2}.$ So $$\alpha_{h(t)}(h_*X)=\alpha_{(\cos(t),\sin(t))}(h_*X)=-\cos(t)\sin(t)X^1$$ $$\beta_{h(t)}(h_*X)=\beta_{(\cos(t),\sin(t))}(h_*X)=\sin(t)\cos(t)X^1$$ and by comparing coefficients $$h^*\alpha = -\cos(t)\sin(t)dt \text{ and } h^*\beta = \sin(t)\cos(t)dt.$$
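The two pullbacks can be verified symbolically (a sketch using sympy: for a one-form $f\,dx_i$ pulled back along $h$, the coefficient of $dt$ is $f(h(t))\cdot\frac{d}{dt}x_i(h(t))$):

```python
import sympy as sp

t = sp.symbols('t')
# h(t) = (cos t, sin t); the two components of h:
x1, x2 = sp.cos(t), sp.sin(t)

# h*(alpha) for alpha = x1 dx1: coefficient of dt is x1(h(t)) * d/dt[x1(h(t))]
pull_alpha = x1 * sp.diff(x1, t)
# h*(beta) for beta = x2 dx2:
pull_beta = x2 * sp.diff(x2, t)

# Matches the hand computation: h*alpha = -cos(t)sin(t) dt,
#                               h*beta  =  sin(t)cos(t) dt
assert sp.simplify(pull_alpha + sp.cos(t) * sp.sin(t)) == 0
assert sp.simplify(pull_beta - sp.sin(t) * sp.cos(t)) == 0
```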
Is there a rule for the $n$th root of a radical?
The rule would be $$\sqrt[n]{x^m} = \sqrt[n/m]x = x^{m/n}$$ It is a coincidence that $4-2=4/2$ so you got the right result.
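A quick numeric spot-check of the rule (floating point, hence the tolerance in `isclose`):

```python
import math

x = 7.0
# The "coincidence": the fourth root of x^2 equals the square root of x.
assert math.isclose((x ** 2) ** (1 / 4), x ** (1 / 2))
# The general rule sqrt[n]{x^m} = x^(m/n), e.g. n = 6, m = 4:
assert math.isclose((x ** 4) ** (1 / 6), x ** (4 / 6))
```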
Show that the operator $\phi$ has a unique fixed point
No, just because $\phi^3$ is a contraction it doesn't follow that $\phi$ is a contraction. For example, on $\mathbb R^2$, the linear transformation $A$ corresponding to the matrix $$ \pmatrix{1 & -7\cr 1/4 & -3/2\cr}$$ is not a contraction, but $A^3 = I/8$ is. Hint: if $p$ is a fixed point of $\phi^3$, then so is $\phi(p)$.
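A quick numerical confirmation of this example ("not a contraction" is witnessed by a unit vector whose image is longer than itself):

```python
import numpy as np

A = np.array([[1.0, -7.0],
              [0.25, -1.5]])

# A is not a contraction: it stretches the unit vector e1 = (1, 0).
e1 = np.array([1.0, 0.0])
assert np.linalg.norm(A @ e1) > 1.0   # |A e1| = sqrt(1 + 1/16) > 1

# But A^3 = I/8, which contracts every vector by a factor of 8.
A3 = A @ A @ A
assert np.allclose(A3, np.eye(2) / 8)
```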
Is stopping time directed convergence in probability equivalent to a.s. convergence?
For fixed $\delta,\epsilon>0$ let $\sigma$ be a bounded stopping time such that $$\forall \tau \in S, \tau \geq \sigma: \quad \mathbb{P}(|X_{\tau}-X|>\delta) < \epsilon. \tag{1}$$ Without loss of generality, we may assume $\sigma \geq 1$; otherwise we replace $\sigma$ by $\sigma+1$. Note that $(1)$ implies, in particular, $$\mathbb{P}(|X_{\sigma}-X|>\delta) < \epsilon. \tag{2}$$ For fixed $k \in \mathbb{N}$ we define a bounded stopping time $\tau_k$ by $$\tau_k(\omega) := \inf\{n \geq \sigma(\omega); |X_n(\omega)-X_{\sigma}(\omega)| \geq 2 \delta\} \wedge (k \sigma(\omega)).$$ Clearly, $$\begin{align*} \mathbb{P}(|X_k-X|>3 \delta \, \, \text{i.o.}) &\leq \mathbb{P}(|X_{\sigma}-X|>\delta) + \mathbb{P}(|X_{\sigma}-X| \leq \delta, |X_k-X|>3 \delta \, \, \text{i.o.}) \\ &\stackrel{\text{(2)}}{\leq} \epsilon + \mathbb{P}(|X_{\sigma}-X| \leq \delta, |X_k-X_{\sigma}|>2 \delta \, \, \text{i.o.}). \end{align*}$$ By the definition of $\tau_k$ this implies $$\mathbb{P}(|X_k-X|>3 \delta \, \, \text{i.o.}) \leq \epsilon + \mathbb{P}(|X_{\sigma}-X| \leq \delta, |X_{\tau_k}-X_{\sigma}|> 2\delta \, \, \text{i.o.}).$$ As $k \sigma \to \infty$ as $k \to \infty$ it follows (from the definition of $\tau_k$) that we can choose $K \gg 1$ sufficiently large such that $$\mathbb{P}(|X_k-X|>3 \delta \, \, \text{i.o.}) \leq 2\epsilon + \mathbb{P}(|X_{\sigma}-X| \leq \delta, |X_{\tau_K}-X_{\sigma}|>2\delta).$$ Hence, $$\mathbb{P}(|X_k-X|>3 \delta \, \, \text{i.o.}) \leq 2\epsilon+ \mathbb{P}(|X_{\tau_K}-X|>\delta) \stackrel{\text{(1)}}{\leq} 3 \epsilon.$$ Since $\epsilon>0$ and $\delta>0$ are arbitrary, this proves $X_k \to X$ almost surely.
Proving an inequality involving norms of real functions.
We have$$\left|f\left(x\right)\right|=\left|f\left(a\right)+\int_{a}^{x}f'\left(t\right)dt\right|\leq\left|f\left(a\right)\right|+\int_{a}^{x}\left|f'\left(t\right)\right|dt\leq\left|f\left(a\right)\right|+\left(b-a\right)\max_{t\in\left(a,b\right)}\left|f'\left(t\right)\right|$$ and, since $1<1+b-a$ and $b-a<1+b-a$ (because $b-a>0$),$$\left|f\left(a\right)\right|+\left(b-a\right)\max_{t\in\left(a,b\right)}\left|f'\left(t\right)\right|\leq\left(1+b-a\right)\left(\left|f\left(a\right)\right|+\max_{t\in\left(a,b\right)}\left|f'\left(t\right)\right|\right)=\left(1+b-a\right)\left\Vert f\right\Vert.$$ Taking the maximum over $x$,$$\max_{x\in\left[a,b\right]}\left|f\left(x\right)\right|\leq\left(1+b-a\right)\left\Vert f\right\Vert.$$
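A numerical illustration with $f=\sin$ on $[0,1]$ (assuming, as the answer does, the norm $\|f\|=|f(a)|+\max_t|f'(t)|$; the maxima are approximated on a grid):

```python
import math

a, b = 0.0, 1.0
n = 10001
xs = [a + (b - a) * i / (n - 1) for i in range(n)]

f, fprime = math.sin, math.cos          # f = sin, so f' = cos on [0, 1]
max_f = max(abs(f(x)) for x in xs)      # approx. max |f| on [a, b]
max_fp = max(abs(fprime(x)) for x in xs)

norm_f = abs(f(a)) + max_fp             # assumed definition of ||f||
assert max_f <= (1 + b - a) * norm_f    # max|f| <= (1 + b - a) ||f||
```

Here $\max|f|=\sin(1)\approx 0.84$ while the bound is $2\cdot(0+1)=2$, so the inequality holds comfortably.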
Double integration in a region extending below the x-axis
I found the answer!! My method of $$ \int_0^{1/2} \int_{y_1 -1}^{1/2} 30y_1^2 y_2 \,dy_2\, dy_1 $$ was indeed correct; I must have made an algebraic error before. I would still like to hear an explanation of why $y_2$ cannot be negative, though, since I don't really understand why I got the right answer or what it means conceptually.
Prove that $\sqrt{2}$, $\sqrt{3}$, and $\sqrt{5}$ can not all be terms of a single arithmetic progression
Following your hint, set $$\sqrt{2}=a_m=a+(m-1)r$$ $$\sqrt{3}=a_n=a+(n-1)r$$ $$\sqrt{5}=a_p=a+(p-1)r$$ Then solving for $m,n,$ and $p$, we get $$m=\frac{\sqrt{2}+r-a}{r}$$ $$n=\frac{\sqrt{3}+r-a}{r}$$ $$p=\frac{\sqrt{5}+r-a}{r}$$ Using the hint provided gives $$\frac{n-m}{p-m}=\frac{\sqrt{3}-\sqrt{2}}{\sqrt{5}-\sqrt{2}}$$ Now, since $n,m,p\in\mathbb{N}$, writing $b=n-m$ and $c=p-m$ (both nonzero integers, since the three square roots are distinct), we know $$\frac{b}{c}=\frac{n-m}{p-m}=\frac{\sqrt{3}-\sqrt{2}}{\sqrt{5}-\sqrt{2}}\in\mathbb{Q}$$ Manipulating this expression gives us $$b(\sqrt{5}-\sqrt{2})=c(\sqrt{3}-\sqrt{2})$$ $$b\sqrt{5}-c\sqrt{3}=\sqrt{2}(b-c)$$ $$5b^2-2bc\sqrt{15}+3c^2=2(b-c)^2$$ $$\frac{5b^2+3c^2-2(b-c)^2}{2bc}=\sqrt{15}$$ Since the left-hand side is rational but the right-hand side is irrational, we have arrived at a contradiction. We conclude that $\sqrt{2}$, $\sqrt{3}$, and $\sqrt{5}$ cannot be terms of a single arithmetic progression.
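The squaring and rearranging steps can be double-checked symbolically (a sympy sketch; `s` is an ordinary symbol standing in for $\sqrt{15}$ so the rearrangement can be solved for it):

```python
import sympy as sp

b, c = sp.symbols('b c', positive=True)
s = sp.symbols('s')  # stands for sqrt(15)

# Squaring step: (b*sqrt(5) - c*sqrt(3))^2 = 5b^2 - 2*sqrt(15)*bc + 3c^2
lhs = sp.expand((b * sp.sqrt(5) - c * sp.sqrt(3)) ** 2)
assert sp.simplify(lhs - (5 * b**2 - 2 * sp.sqrt(15) * b * c + 3 * c**2)) == 0

# Rearranging 5b^2 - 2*s*bc + 3c^2 = 2(b-c)^2 for s gives the final display:
sol = sp.solve(sp.Eq(5 * b**2 - 2 * s * b * c + 3 * c**2, 2 * (b - c) ** 2), s)
target = (5 * b**2 + 3 * c**2 - 2 * (b - c) ** 2) / (2 * b * c)
assert sp.simplify(sol[0] - target) == 0
```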
Algebraic transformation — where is my mistake?
Your calculations seem fine; you just need to keep rearranging the terms, using the fact that, by the definition of the mean, $$ \sum x_i = n\bar x = \sum \bar x .$$
Why does a homogeneous ODE need initial condition for a particular solution while a non-homogeneous ODE doesn't?
No, you don't need initial conditions to find a particular solution of the homogeneous case. For example, $e^{r_1 x}$ is a particular solution.
Not solutions to a equation
First of all, $F(X_1,\dots,X_n)$ is a polynomial, not a polynomial equation. I assume that you mean $F(X_1,\dots,X_n)=0$. Then this fact is a trivial observation, since $F(X_1,\dots,X_{n-1},1)=0$ is the special case of $F(X_1,\dots, X_n)=0$ with $X_n=1$. Does this help you? If not, please specify a little more which part you don't understand, and I will try to elaborate on this answer.