Proving that removing any two vertices from a 6-regular graph of order 10 still results in a Hamiltonian graph
You are right to use Ore's theorem: if $G$ is a graph on $n\geq 3$ vertices and $\deg(x)+\deg(y)\geq n$ for all $xy\notin E$, then $G$ is hamiltonian. You've correctly applied this to $G$ since $\deg(x)+\deg(y)=12\geq 10$ for all $x,y\in V$. You've also applied this to $G-u$ since $\deg(x)+\deg(y)\geq 5+5=10\geq 9$ for all $x,y\in V(G-u)$ ($G-u$ has $9$ vertices!). For $G-u-v$, each vertex has degree at least $4$ and $G-u-v$ has $8$ vertices, so certainly $\deg(x)+\deg(y)\geq 8$. Really we're not using the full power of Ore's theorem, but instead just using Dirac's theorem, which says that if $G$ is a graph on $n\geq 3$ vertices and $\delta(G)\geq n/2$, then $G$ is hamiltonian. Note that this is the case for $G$, $G-u$ and $G-u-v$.
Root Guess For a Function (Yield Calculation of a Bond)
As a guess for $y$ to feed to the algorithm I would do this: $$\bar{t}=\frac{1}{N}\sum_{i=1}^N t_i,\qquad \bar{C}=\frac{1}{N}\sum_{i=1}^N C_i,$$ and replace all quantities in the formula by their mean values: $$ P_{simp}(y) = Re^{-y \bar t}+\sum_{i=1}^{N}\bar C e^{-y \bar t}= (R+N\bar C) e^{-y \bar t}. $$ Now solve $P_{simp}(y)=Pm$ for $y$: $$ y_{guess}=-\frac{1}{\bar t}\log\frac{Pm}{R+N\bar C}=\frac{1}{\bar t}\log\frac{R+N\bar C}{Pm}. $$ This could work really well as a starting point, but you have to try it to know if it indeed does. Adaptive Newton method:

1) $\lambda \to 1$

2) $x_{n+1}=x_n-\lambda\frac{P(x_n)-Pm}{P'(x_n)}$

3) $\lambda\to\lambda\left\{1+\frac{Pm-P(x_{n+1})}{P(x_{n+1})-P(x_{n})}\right\}_{0.2}^{1.8}$

4) until convergence, go to 2)

The numbers attached to the braces in step 3) are a lower limit and an upper limit between which the braced factor that multiplies $\lambda$ is clamped. I devised this algorithm for a simulation that required really fast convergence, since executing the simulation was really costly, and it worked nicely.
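For concreteness, here is a minimal Python sketch of both the initial guess and the adaptive iteration, run on a made-up toy bond; the values of $R$, $Pm$, $C_i$, $t_i$ below are illustrative assumptions, not data from the question.

```python
import math

# Hypothetical toy bond: redemption R at the last date, coupons C at times t.
R, Pm = 100.0, 95.0
t = [1.0, 2.0, 3.0, 4.0, 5.0]
C = [4.0] * len(t)

def P(y):   # discounted price as a function of the yield y
    return R * math.exp(-y * t[-1]) + sum(c * math.exp(-y * ti) for c, ti in zip(C, t))

def dP(y):  # derivative of P with respect to y
    return -t[-1] * R * math.exp(-y * t[-1]) - sum(ti * c * math.exp(-y * ti) for c, ti in zip(C, t))

# Initial guess: replace every C_i and t_i by its mean value.
t_bar, C_bar = sum(t) / len(t), sum(C) / len(C)
y = math.log((R + len(C) * C_bar) / Pm) / t_bar

# Adaptive Newton iteration; the lambda-update factor is clamped to [0.2, 1.8].
lam = 1.0
for _ in range(100):
    y_new = y - lam * (P(y) - Pm) / dP(y)
    if abs(P(y_new) - Pm) < 1e-12:
        y = y_new
        break
    factor = 1 + (Pm - P(y_new)) / (P(y_new) - P(y))
    lam *= min(max(factor, 0.2), 1.8)
    y = y_new

print(y, P(y))  # P(y) should now be (essentially) equal to Pm
```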
Can a Mersenne number be a power (with exponent > 1) of a prime?
It is not possible. Catalan's conjecture (proved by Mihăilescu in 2002) states that the only solution of $x^a-y^b=1$ ($x$ and $y$ positive integers, $a$ and $b$ integers greater than $1$) is $3^2-2^3=1$. If a Mersenne number $2^n-1$ were the power of a prime, we'd have $$2^n-1=p^k,$$ or $$2^n-p^k=1,$$ contradicting Catalan's conjecture. (Note $n\geq 2$ here: for $n=1$ we get $2^1-1=1$, which is not a power of a prime at all.)
Decomposing $\mathfrak{sl}_3(\mathbb{C})$
The line that's missing is not necessarily spanned by $D$, but by some element in $D + V_1 + V_1^{\prime} + V_2$, so you have to look for some $\mathfrak{sl}_2$-invariant element in there: $D + \frac{1}{2}h$ works.
Convergence of a positive series
We want to get some idea of the size of $2^{1/n}-1$ for large $n$. One way is to look at the ratio $$\frac{2^{1/n}-1}{1/n}.$$ Replace $1/n$ by $t$. We will investigate the behaviour of $$\frac{2^t-1}{t}$$ as $t$ approaches $0$ through positive values. Note that $2^t=e^{t\ln 2}$. Using L'Hospital's Rule, or in another way, we find that $$\lim_{t\to 0^+}\frac{e^{t\ln 2}-1}{t}=\ln 2.$$ Thus $\lim_{n\to\infty} \frac{2^{1/n}-1}{1/n}=\ln 2$. By Limit Comparison with the series $\sum_1^\infty \frac{1}{n}$, it follows that our series diverges.
If $\{u, v\}$ is an orthonormal set, how is $\|u - v\| = \sqrt{2}$?
You've not applied the definition of orthonormal correctly: you cannot conclude that $\|u\| = 0$, but rather that $\|u\| = 1$. The only term that is zero is the inner product $u \cdot v$. Hence, you get $\sqrt{1 - 2 \cdot 0 + 1}=\sqrt 2$. Another red flag that you should look for: any time you conclude a vector has norm zero, you are really concluding that the vector is the zero vector, since the zero vector is the only vector with norm zero.
Calculate $I = \int_0^1 \frac{\ln(\frac{1 - x} {x})} {x^2 + 1} dx$
Note, applying the substitution $x\to\frac{1-x}{1+x}$ first to the $\ln x$ integral and then once more to the result,
\begin{align}
I &= \int_0^1 \frac{\ln\frac{1 - x}{x}}{x^2 + 1}\, dx
  = \int_0^1 \frac{\ln(1 - x)}{x^2 + 1}\, dx - \int_0^1 \frac{\ln x}{x^2 + 1}\, dx
  \overset{x\to\frac{1-x}{1+x}}{=} \int_0^1 \frac{\ln(1 + x)}{x^2 + 1}\, dx \\
  &\overset{x\to\frac{1-x}{1+x}}{=} \int_0^1 \frac{\ln 2}{x^2 + 1}\, dx - \int_0^1 \frac{\ln(1 + x)}{x^2 + 1}\, dx
  = \frac\pi4 \ln 2 - I,
\end{align}
so $I=\frac\pi8\ln 2$.
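A quick numerical sanity check of the closed form, as a sketch using scipy's `quad` (the logarithmic endpoint singularities are integrable):

```python
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda x: np.log((1 - x) / x) / (x**2 + 1), 0, 1)
print(val, np.pi / 8 * np.log(2))  # both ≈ 0.272198
```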
Prove that $ (X, d) $ is a metric space
HINT: If $x_k\ne y_k$, then either $x_k=z_k\ne y_k$, $x_k\ne z_k=y_k$, or $x_k\ne z_k\ne y_k\ne x_k$. Each $k$ of the first type also contributes to $d(z,y)$ but not to $d(x,z)$. Each $k$ of the second type also contributes to $d(x,z)$ but not to $d(z,y)$. And each $k$ of the third type contributes to both $d(x,z)$ and $d(z,y)$. If there are $a$ indices $k$ of the first type, $b$ of the second, and $c$ of the third, then $d(x,y)=a+b+c$. Show that $d(x,z)+d(z,y)$ must be at least this big. One does not correctly say that $\langle\Bbb C,|\cdot|\rangle$ is a metric space: $|\cdot|$ is the absolute value function on $\Bbb C$, so that $|a+bi|=\sqrt{a^2+b^2}$, and this is not a metric. (The dot stands for the missing argument of the function.) What is meant is presumably that the metric is given by $d(x,y)=|x-y|$.
What are some effective ways of teaching fractions to 5th graders who are behind (special needs)
I would recommend an intuitive approach that they can grasp. Buy a pizza and a cake. Ask them how we would know how to cut them up to divide them equally. This is fractions. Now, show a picture of halves and that each half is represented by the fraction $\frac{1}{2}$. What allows us to add the fractions together is the common denominator. Repeat for $3, 4, 10$ and the actual number of students in the class. Then take a break and have the pizza and cake; now they'll be interested. Next, change it up with different denominators, like a quarter and a half, demonstrating with a live training aid. You get the idea: use an intuitive approach with a reward. Make sure to use pictures with colors to represent the fractions, to show them a real-world application. Regards
Why are the OLS residuals and $\hat{\beta}$ uncorrelated?
The vector $\hat e$ of residuals has expectation zero. Here's the proof: By definition, $$\hat e:=y-\hat y=y-X\hat \beta_T.$$ Recall Theorem 3.4(a) stated that $$ E(\hat\beta_T)=\beta_0 $$ where $\beta_0$ is such that $E(y)=X\beta_0$, as specified in Classical Condition [A2] (section 3.2.1). Since $X$ is non-stochastic (condition [A1]), it follows that $$ E(X\hat\beta_T)=XE(\hat\beta_T)=X\beta_0 = E(y). $$ Or you can prove this directly using the definition $\hat\beta_T:=(X'X)^{-1}X'y$: $$E(X\hat\beta_T)=E(X(X'X)^{-1}X'y)\stackrel{[A1]}=X(X'X)^{-1}X'E(y)\stackrel{[A2]}=X(X'X)^{-1}X'X\beta_0=X\beta_0=E(y).$$
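A small numpy sketch with a made-up design matrix, illustrating the exact algebraic identity underlying this: the residual vector is orthogonal to the columns of $X$, i.e. $X'\hat e = 0$ (so with an intercept column the residuals also sum to zero).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept + 2 regressors
beta0 = np.array([1.0, 2.0, -0.5])
y = X @ beta0 + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)  # (X'X)^{-1} X'y
e_hat = y - X @ beta_hat
print(X.T @ e_hat)  # numerically zero: residuals are orthogonal to the columns of X
```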
Optimization triangular prism
This boils down to an optimization problem for a function of 3 variables subject to a constraint: you can vary 3 parameters (height, base width, base length), for which you have a "price function", the sum of the area of each piece of material times its respective cost (this is a simple problem in geometry). Thus you wish to minimize the price function $c(h,w,l)$. You are constrained by the fact that the volume (also a function of $h$, $w$, $l$) is fixed. This is the textbook application of Lagrange multipliers. If you are unfamiliar with them, I would check out Paul's Online Math Notes or Khan Academy, as they both provide great explanations. I hope this helps!
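As a rough illustration (not the actual problem data), here is how one might set up such a constrained minimization numerically with scipy; the cost coefficients and target volume below are made-up assumptions:

```python
import numpy as np
from scipy.optimize import minimize

V0 = 10.0  # hypothetical required volume

def cost(p):
    # made-up price function: unit cost 2 for the base, 1 for the other faces
    h, w, l = p
    return 2 * w * l + 2 * h * l + w * h

def volume(p):
    h, w, l = p
    return 0.5 * w * h * l  # triangular cross-section of base w and height h

res = minimize(cost, x0=[1.0, 1.0, 1.0],
               constraints=[{"type": "eq", "fun": lambda p: volume(p) - V0}],
               bounds=[(1e-6, None)] * 3)
print(res.x, cost(res.x))  # dimensions minimizing cost at the fixed volume
```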
Expected value of a sum starting at a value given through a random variable
The first part is basically correct, which leads to $$ G(n)=E\left[ \sum_{k=K}^n a_n(k) \right],\qquad a_n(k)={n\choose k} p^k (1-p)^{n-k}. $$ Now, rewrite the random variable in the RHS as $$ \sum_{k=0}^n a_n(k)\mathbf 1_{K\leqslant k}, $$ and deduce that $$ G(n)=\sum_{k=0}^n a_n(k)P(K\leqslant k). $$ There is no general expression of the RHS as a function of $\mu$ only. An equivalent formulation is $G(n)=E[b_n(K)]$ where $$ b_n(i)=\sum_{k=i}^na_n(k). $$
Linearly Independent Real Numbers over $\mathbb{Q}$
Hint: if there were a linear dependence relation between these elements, then $\sqrt[n]{2}$ would satisfy a (monic) polynomial of degree $\leqslant n-1$ (after dividing by the highest nonzero coefficient). On the other hand, $\sqrt[n]{2}$ is a root of $X^{n} - 2$, which is irreducible over $\mathbb{Q}$ by Eisenstein at $2$. Why is this first bit in contradiction with the second fact?
How to find the vector field associated with an ODE?
The ODE essentially is the vector field; more precisely, the associated first-order system is. In that form, the vector field is the same as the (vector-valued) right-hand side of the ODE. So in your example, one would associate the first-order system $$ \dot x=P(x,y),\\ \dot y=Q(x,y). $$
Equivalent (?) definitions of $TP_2$
Here's an example showing that $\text{tp}_2$ does not imply $\text{TP}_2$. Let $L = \{\leq_m\mid m\in \omega\}$, and let $T$ be the theory of independent linear orders. This is the model companion of the theory which says that each $\leq_m$ is a linear order. In the reduct to each finite sublanguage of size $n$, $T$ is the theory of the Fraïssé limit of the class of finite structures equipped with $n$ linear orders.

$T$ is $\text{NIP}$, hence $\text{NTP}_2$, i.e. no formula has $\text{TP}_2$. Proof: $T$ has quantifier elimination, since any reduct of $T$ to a finite sublanguage has quantifier elimination (being the theory of a Fraïssé limit). So every $L$-formula is equivalent modulo $T$ to a boolean combination of atomic formulas. Every atomic formula is $\text{NIP}$ (it's an instance of equality or one of the orders $\leq_m$), and $\text{NIP}$ formulas are closed under Boolean combinations. Thus every $L$-formula is $\text{NIP}$ modulo $T$.

$T$ has $\text{tp}_2$. Proof: Pick $a$, $b$, and $c$ such that $b<_m a <_m c$ for all $m\in \omega$. Now we pick an array $(b^m_n,c^m_n)_{m,n\in \omega}$ carefully. For each $m\in \omega$, we define the row $(b^m_n,c^m_n)_{n\in \omega}$ so that $b^m_n <_k b$ and $c<_k c^m_n$ for all $n$ and all $k\neq m$, but $b<_m b^m_n <_m c^m_n <_m c$ for all $n$, and the $<_m$-intervals $(b^m_n,c^m_n)$ are pairwise disjoint. Letting $p(x,b,c)$ be $\text{tp}(a/bc)$, we have that $p(x,b^m_n,c^m_n)\cup p(x,b^m_{n'},c^m_{n'})$ is inconsistent for all $m\in \omega$ and $n\neq n'$, since this type says $b^m_n<_m x <_m c^m_n$ and $b^m_{n'}<_m x <_m c^m_{n'}$, but these intervals are disjoint. But for all $\sigma\in \omega^\omega$, the type $\bigcup_{m\in \omega}p(x,b^m_{\sigma(m)},c^m_{\sigma(m)})$ is consistent. It suffices to see that what it says about each order is consistent. Fixing an order $<_k$, the type says $b^m_{\sigma(m)} <_k x <_k c^m_{\sigma(m)}$ for all $m\in \omega$. But for all $m\neq k$, we have $b^m_{\sigma(m)}<_k b <_k b^k_{\sigma(k)} <_k c^k_{\sigma(k)}<_k c <_k c^m_{\sigma(m)}$. So picking some $a_*$ in the $<_k$-interval $(b^k_{\sigma(k)}, c^k_{\sigma(k)})$ for all $k$ works.

On the other hand, $\text{TP}_2$ implies $\text{tp}_2$. Proof: Suppose $\varphi(x,y)$ has $\text{TP}_2$. By standard tricks, we may assume that the array $(b^i_j)_{i,j\in \omega}$ witnessing this has mutually indiscernible rows, and that the sequence of rows is indiscernible (see, for example, Lemma 7.4 here, where Casanovas calls this kind of array very indiscernible). Now we can stretch the indiscernible sequence of rows to a sequence $(b^\alpha_i)_{\alpha<\kappa,i\in \omega}$ of length $\kappa$, where $\kappa>|S_{xy}(T)|$, the number of complete types in variables $xy$ over the empty set. By compactness, there is some $a_*$ such that $\varphi(a_*,b^\alpha_0)$ is true for all $\alpha < \kappa$. For each such $\alpha$, let $p_\alpha = \text{tp}(a_*b^\alpha_0)$. By pigeonhole, some type $p$ must appear infinitely many times. So we can refine our array back down to $(b^m_n)_{m,n\in \omega}$ and assume that $a_*b^m_0$ realizes $p$ for all $m\in \omega$. But now by mutual indiscernibility of the rows of the array, the existence of $a_*$ realizing $\bigcup_{m\in \omega} p(x,b^m_0)$ implies that for any $\sigma\in \omega^\omega$, the type $\bigcup_{m\in \omega} p(x,b^m_{\sigma(m)})$ is consistent.
And on the other hand, for any fixed $m$ and any $n\neq n'$, the type $p(x,b^m_n)\cup p(x,b^m_{n'})$ is inconsistent, since this type contains both $\varphi(x,b^m_n)$ and $\varphi(x,b^m_{n'})$. So we have a witness to $\text{tp}_2$ (taking $a = a_*$, $b = b^0_0$, and $k = 2$).
Why is the × operator not defined for vector × vector?
The concept of a vector space isn't so much defined as recognized. All over Mathematics there are sets of things that have a natural addition, and a natural multiplication by real or complex numbers, but no natural way to multiply two things in the set to get another thing in the set. Once you've seen enough of these things, you accept that that's what Mathematics is giving you, and you make your definitions accordingly. The definitions commonly in use lead to a huge number of useful concepts and results. Linear combination, span, linear dependence/independence, basis, dimension for starters - does your proposed multiplication lead to any of those concepts, or to any other concepts even half as useful?
For what values of $a$ can the equation $\cos 2x + 7 = a(2-\sin x)$ have a real solution?
The quadratic equation $2t^2-at+2a-8=0$, obtained by substituting $t=\sin x$ and $\cos 2x = 1-2\sin^2 x$, has the two real roots $t_1=2$ and $t_2=(a-4)/2$. Since $\sin x = 2$ is impossible, the trig equation will have real roots exactly when $-1 \le (a-4)/2 \le 1$, that is, $-2 \le a-4 \le 2$, i.e. $2 \le a \le 6$.
Let n be an arbitrary natural number and let the property P(n) be the equation 2 · 6 · 10 · 14 · ... · (4n - 2) = (2n)! / n!
I think you have probably confused yourself by your proof approach. You have basically assumed the result and then reduced the equality to one you know to be true. This is not a correct proof. Instead, you should start with one side and show that it is equal to the other side. So (using the induction hypothesis in the first step): \begin{align*} 2\cdot 6\cdot 10\cdots(4k-2)\cdot (4(k+1)-2) &= \frac{(2k)!}{k!}\cdot (4(k+1)-2) \\ &= \frac{(2k)!}{k!}\cdot(4k+2) \\ &= \frac{2(2k+1)(2k)!}{k!} \\ &= \frac{2(k+1)(2k+1)(2k)!}{(k+1)k!} \\ &= \frac{(2k+2)(2k+1)(2k)!}{(k+1)!} \\ &= \frac{(2k+2)!}{(k+1)!}. \end{align*}
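A quick Python check of the identity $2\cdot 6\cdot 10\cdots(4n-2)=(2n)!/n!$ for small $n$:

```python
from math import factorial, prod

for n in range(1, 12):
    lhs = prod(4 * i - 2 for i in range(1, n + 1))  # 2·6·10···(4n−2)
    assert lhs == factorial(2 * n) // factorial(n)
print("2·6·10···(4n−2) = (2n)!/n! holds for n = 1..11")
```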
An introduction to algebraic topology from the categorical point of view
Rotman's An Introduction To Algebraic Topology is a great book that treats the subject from a categorical point of view. Even just browsing the table of contents makes this clear: Chapter 0 begins with a brief review of categories and functors. Natural transformations appear in Chapter 9, followed by group and cogroup objects in Chapter 11. The aspect I like most about this book is that Rotman makes a clear distinction between the results that are algebraic and those that are topological. E.g., he proves several statements about group actions before then applying them to the particular topological setting of covering spaces and the action of the fundamental group on a fiber.
Prove that an $n×n$ matrix with entries $+1$ or $-1$ has determinant divisible by $2^{n-1}$
Take the first row, and either add it to or subtract it from each of rows $2$ through $n$ in order to cancel the first entry in each of those rows to $0$. Doing this does not change the determinant of the matrix. When you have finished, you will have a matrix in which all entries outside the first row are even. Now if you do cofactor expansion along the first column, you will see that the determinant of the whole matrix is equal, up to sign, to the determinant of the lower right $(n-1)\times (n-1)$ submatrix. Now divide the rows of this submatrix by $2$, one at a time, to see that each time the determinant is divided by $2$ as well. At the end you will be left with an integer matrix, which must therefore have integer determinant, and you will have done a total of $n-1$ divisions, proving that the original determinant was in fact a multiple of $2^{n-1}$.
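A small numpy sketch checking the divisibility claim on random $\pm 1$ matrices (for $n=6$ the determinant is small enough that rounding the floating-point value is safe):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
for _ in range(1000):
    M = rng.choice([-1, 1], size=(n, n))
    d = round(np.linalg.det(M))
    assert d % 2 ** (n - 1) == 0
print(f"dets of 1000 random ±1 matrices all divisible by 2^(n-1) = {2**(n-1)}")
```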
Dini continuity vs Hölder continuity
Here is an example: $N=1$, $E=[0,e^{-3})$ and $$f(x)=\left\{\begin{array}{ccc}(\log x)^{-2}&,& x\in(0,e^{-3})\\ 0&,&x=0\end{array}\right..$$ For every $\alpha>0$, $$\lim_{x\to 0^+}\frac{|f(x)-f(0)|}{x^\alpha}=\infty,$$ so $f$ is not $\alpha$-Hölder continuous. Now let us show that $f$ is Dini continuous. To begin with, note that $$f'(x)=-\frac{2}{x(\log x)^3}>0,\quad x\in(0,e^{-3}),$$ and $$f''(x)=\frac{2}{x^2(\log x)^3}+\frac{6}{x^2(\log x)^4}<0,\quad x\in (0,e^{-3}).$$ Therefore, $f$ is increasing on $[0,e^{-3})$ and $f'$ is decreasing on $(0,e^{-3})$. As a result, for every $t\in (0,e^{-3})$, when $0\le x<y\le x+t<e^{-3}$, $$0\le f(y)-f(x)\le f(x+t)-f(x)=\int_0^tf'(x+s)ds \le \int_0^tf'(s)ds=f(t).$$ It follows that $$\omega_f(t)\le f(t),\ \forall t\in (0,e^{-3})\Longrightarrow \int_0^{e^{-3}}\frac{\omega_f(t)}{t}dt\le \int_0^{e^{-3}}\frac{f(t)}{t}dt<\infty.$$
Graph with fixed amount of spanning trees
$27 = 3\cdot 3\cdot 3$, so maybe we have to work with triangles. A triangle has three spanning trees. Let's take three triangles and try to connect them in such a way that we get $27$ spanning trees and $8$ vertices. Call them triangles A, B, C. Connect triangles A and B together so that they share a vertex. Now we have 8 vertices in total. Connect a vertex from triangle C to the shared vertex of triangles A and B through an edge. Every spanning tree must use this bridge edge and omit exactly one edge of each triangle, so we have $3\cdot 3\cdot 3$ spanning trees.
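A short Python verification of the construction via Kirchhoff's matrix-tree theorem; the vertex labelling below is one concrete choice:

```python
import numpy as np

# Triangle A = {0,1,2}, triangle B = {2,3,4} (sharing vertex 2),
# triangle C = {5,6,7}, and a bridge edge from vertex 2 to vertex 5.
edges = [(0, 1), (1, 2), (0, 2),
         (2, 3), (3, 4), (2, 4),
         (5, 6), (6, 7), (5, 7),
         (2, 5)]
n = 8
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1
    L[u, v] -= 1; L[v, u] -= 1

# Matrix-tree theorem: the number of spanning trees is any cofactor
# of the graph Laplacian L.
print(round(np.linalg.det(L[1:, 1:])))  # 27
```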
Scalar to the power vector
Almost all of the time, if you don't know what $e^X$ means in some context, it means the usual power series: $$e^X = 1 + X + \frac{X^2}{2} + \ldots$$ Of course, this relies on a notion of a "product" of two vectors, and it's not automatically clear what that means. The obvious choice is the geometric product, but you'd know if it was that. Your use of the phrase "identity vector" is suggestive. There is no such thing, so you may have meant something other than "vector". I think you might be referring to operators, i.e. matrices. Edit It turns out that all is well, and these are all scalars. The actual equation is of the form: $$\mathbf{y} = C \log ( 1 + e^{C^{-1} \mathbf{x}} )$$ ...where $C$ is a discrete Fourier transform. (It's actually in cepstral space rather than the usual frequency space, but anyone playing along at home can ignore this detail.) The inverse DFT transforms the vector of frequency components back into a continuous function, and it is this function which goes through $\exp$ and $\log$. At the final step, the manipulated continuous function is re-transformed back into a frequency vector. Did that make sense?
Isomorphisms: subspaces in topology
The product has a base of open sets of the form $U(a,b,c)=(a,b)\times(c,\to)$, where $a,b,c\in\Bbb R$ and $a<b$; $$U(a,b,c)=\{\langle x,y\rangle\in\Bbb R^2:a<x<b\text{ and }c<y\}\;.$$ You need to investigate how these basic open sets intersect $A$ and $B$. $A$ is just the graph of the line $y=-x$. Show that each $U(a,b,c)$ intersects $A$ in an open interval of the line $A$, and that each open interval on $A$ can be obtained in this way. This means that $A$ in its subspace topology is homeomorphic to what familiar space? $B$ is the graph of a rectangular hyperbola together with its centre point. Here again you should look at the intersections $B\cap U(a,b,c)$ of $B$ with basic open sets in the product. You’ll find that a lot of them are open intervals of $B$. However, you’ll find that $\langle 0,0\rangle$ is not an isolated point of $B$ in this topology, though it is in the usual topology; every $U(a,b,c)$ that contains $\langle 0,0\rangle$ also contains other points of $B$.
The number of ways in which 10 identical apples can be distributed to six children so that each child receives at least one apple
There is a better approach. Let $x_k$ be the number of apples received by the $k$th child. Then $$x_1 + x_2 + x_3 + x_4 + x_5 + x_6 = 10$$ is an equation in the positive integers. A particular solution corresponds to the placement of five addition signs in the nine spaces between successive ones in a row of ten ones. $$1 \square 1 \square 1 \square 1 \square 1 \square 1 \square 1 \square 1 \square 1 \square 1$$ For instance, $$1 1 + 1 + 1 1 1 + 1 + 1 + 1 1$$ corresponds to the solution $x_1 = 2$, $x_2 = 1$, $x_3 = 3$, $x_4 = x_5 = 1$, $x_6 = 2$. The number of such solutions is the number of ways we can select five of the nine spaces in which to place an addition sign, which is $$\binom{9}{5}$$ Addendum: Using Barry Cipra's observation, we can confirm this result by using your method. One child receives five apples and the other five children each receive one apple: there are $6$ ways to select the child who receives five apples. One child receives four apples, another child receives two apples, and each of the other four children receives one apple: there are six ways to choose the child who receives four apples and five ways to choose the child who receives two apples. Hence, there are $6 \cdot 5 = 30$ such distributions. Two children each receive three apples and the other four children each receive one apple: there are $$\binom{6}{2} = 15$$ ways to select the two children who each receive three apples. One child receives three apples, two children each receive two apples, and the other three children each receive one apple: there are six ways to choose the child who receives three apples and $\binom{5}{2}$ ways to choose which two of the other five children each receive two apples. Hence, there are $$\binom{6}{1}\binom{5}{2} = 6 \cdot 10 = 60$$ such distributions. Four children each receive two apples and the other two children each receive one apple: there are $$\binom{6}{4} = 15$$ ways to select which four children each receive two apples. Observe that $$6 + 30 + 15 + 60 + 15 = 126 = \binom{9}{5}$$
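A brute-force Python check that the count is indeed $\binom{9}{5}=126$:

```python
from itertools import product
from math import comb

# Count solutions of x1 + ... + x6 = 10 in positive integers directly.
count = sum(1 for xs in product(range(1, 11), repeat=6) if sum(xs) == 10)
print(count, comb(9, 5))  # both 126
```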
Prove that there exist integers $k, l \in \mathbb{N}$ such that $\mathrm{ord}(x^ky^l) = \mathrm{lcm}(\mathrm{ord}(x), \mathrm{ord}(y))$
Let $ord(x)=m$, $ord(y)=n$, $d=\gcd(m,n)$; then $$m=\overline{m}d,\quad n=\overline{n}d,$$ with $\gcd(\overline{m},\overline{n})=1$. The idea is to find a factorization $d=d_1d_2$ such that $\gcd(d_1\overline{m},d_2\overline{n})=1$. From fact $2)$, there exist $\overline{x} \in G$ with $ord(\overline{x})=d_1\overline{m}$, and $\overline{y} \in G$ with $ord(\overline{y})=d_2\overline{n}$. Fact $1)$ then gives us: $$ord(\overline{x}\overline{y})=ord(\overline{x})ord(\overline{y})=d\overline{m}\overline{n}=lcm(m,n).$$ In fact, $\overline{x}=x^{d/d_1}$ and $\overline{y}=y^{d/d_2}$, so we can take $k:=d/d_1$ and $l:=d/d_2$, and we are done. Now, returning to the multiplicative partition of $d$: write $d=p_1^{\alpha_1}\cdots p_h^{\alpha_h}$ for its prime decomposition. For each $p_i^{\alpha_i}$, $1\leq i \leq h$, we have three cases: $i)$ $p_i\mid \overline{m}$; then $d_1$ will contain $p_i^{\alpha_i}$, and note that $p_i\nmid \overline{n}$; $ii)$ $p_i\mid \overline{n}$; then $d_2$ will contain $p_i^{\alpha_i}$, and note that $p_i\nmid \overline{m}$; $iii)$ $p_i\nmid \overline{m}$ and $p_i\nmid \overline{n}$; then either $d_1$ or $d_2$ (either choice works) will contain $p_i^{\alpha_i}$.
Find the number of ways to obtain a total of $15$ points by throwing $4$ different dice?
We have $$\sum_{r=1}^6 x^r=\frac{x-x^7}{1-x}$$ Thus, using the binomial theorem, we have $$\begin{aligned}&\left[\sum_{r=1}^6 x^r\right]^4=x^4(1-x^6)^4(1-x)^{-4}\\ =&x^4\left(\sum_{r=0}^4(-1)^r\binom{4}{r}x^{6r}\right)\left(\sum_{k=0}^\infty(-1)^k\binom{-4}{k}x^k\right)\\ =&\left(\sum_{r=0}^4(-1)^r\binom{4}{r}x^{6r+4}\right)\left(\sum_{k=0}^\infty(-1)^k\binom{-4}{k}x^k\right)\\ =&\sum_{r=0}^4\left[(-1)^r\binom{4}{r}x^{6r+4}\left(\sum_{k=0}^\infty(-1)^k\binom{-4}{k}x^k\right)\right]\\ =&\sum_{r=0}^4\left(\sum_{k=0}^\infty\left((-1)^r\binom{4}{r}x^{6r+4}\right)\left((-1)^k\binom{-4}{k}x^k\right)\right)\\ =&\sum_{r=0}^4\sum_{k=0}^\infty\left((-1)^{r+k}\binom{4}{r}\binom{-4}{k}x^{k+6r+4}\right)\quad(*) \end{aligned}$$ Here we recall the definition of the binomial coefficient: for any $\alpha\in\mathbb{R}$ and $k\in\mathbb{N}$, the binomial coefficient "$\alpha$ choose $k$" is $$\binom{\alpha}{k}=\frac{\alpha(\alpha-1)\dots(\alpha-(k-1))}{k!}$$ and the binomial theorem says $$(a+b)^{\alpha}=\sum_{k=0}^\infty\binom{\alpha}{k}a^{\alpha-k}b^k$$ Now we look at the $x^{15}$ term in $(*)$; this is equivalent to solving $k+6r+4=15$ where $r=0,1,2,3,4$ and $k=0,1,2,\dots$. The only possibilities are $r=0,k=11$ and $r=1, k=5$, and the corresponding coefficients are $(-1)^{11}\binom{4}{0}\binom{-4}{11}$ and $(-1)^{6}\binom{4}{1}\binom{-4}{5}$; by adding them we get the coefficient of the $x^{15}$ term: $$\begin{aligned}&(-1)^{11}\binom{4}{0}\binom{-4}{11}+(-1)^6\binom{4}{1}\binom{-4}{5}\\ =&(-1)^{11}\frac{(-4)(-5)\dots(-4-(11-1))}{11!}+4\cdot(-1)^6\frac{(-4)(-5)\dots(-4-(5-1))}{5!}\\ =&\frac{4\cdot 5\cdot\dots\cdot 14}{11!}-4\cdot\frac{4\cdot\dots\cdot 8}{5!}\\ =&\frac{14!}{3!11!}-4\cdot\frac{8!}{3!5!}\\ =&140 \end{aligned}$$
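A brute-force Python check that the coefficient is indeed $140$:

```python
from itertools import product

# Count outcomes of 4 distinguishable dice summing to 15.
count = sum(1 for dice in product(range(1, 7), repeat=4) if sum(dice) == 15)
print(count)  # 140
```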
How do you define sample space?
It depends what the sample space is for. If you are tossing 2 coins and just counting the number of heads/tails that occurred, then your first sample space would be correct, i.e. you don't care about the order in which the events occurred. If you are tossing 2 coins and recording the 1st outcome and 2nd outcome separately, then the second sample space would be correct, i.e. you do care about the order in which the events occurred. Alternatively, you could only be recording whether a head was flipped at all with either coin, in which case the sample space becomes (with your notation): {0}, {H}. So to summarize, a sample space is defined as the set of all possible measured outcomes, so its contents depend upon what you are measuring.
Distance from a point to a plane
Suppose you have a plane given by the basis $$\{\begin{pmatrix}1\\0\\-1\end{pmatrix},\begin{pmatrix}1\\1\\1\end{pmatrix}\}$$ To find the distance from a point $P$ to this plane, first find a vector orthogonal to this plane, so that this vector together with the plane's basis forms a basis of $\mathbb{R}^3$: $$\{\begin{pmatrix}1\\0\\-1\end{pmatrix},\begin{pmatrix}1\\1\\1\end{pmatrix},\begin{pmatrix}1\\-2\\1\end{pmatrix}\}$$ Find the representation of $P$ in this basis: $$P = \frac{P_x-P_z}{2}\begin{pmatrix}1\\0\\-1\end{pmatrix}+\frac{P_x+P_y+P_z}{3}\begin{pmatrix}1\\1\\1\end{pmatrix}+\left(\frac{P_x+P_z}{2}-\frac{P_x+P_y+P_z}{3}\right)\begin{pmatrix}1\\-2\\1\end{pmatrix}$$ Note that the sum of the first two terms of this representation is the orthogonal projection of $P$ onto the plane, and the third term is the rejection (the component of $P$ orthogonal to the plane). This means that the distance from the point to the plane is the magnitude of this third term: $$\begin{Vmatrix}\left(\frac{P_x+P_z}{2}-\frac{P_x+P_y+P_z}{3}\right)\begin{pmatrix}1\\-2\\1\end{pmatrix}\end{Vmatrix}$$
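A numpy sketch of the same computation; the plane is the one above, while the example point $P$ is made up:

```python
import numpy as np

v1, v2 = np.array([1, 0, -1]), np.array([1, 1, 1])
n = np.cross(v1, v2)               # (1, -2, 1): orthogonal to the plane
P = np.array([3.0, 1.0, 2.0])      # an arbitrary example point

rejection = (P @ n) / (n @ n) * n  # component of P orthogonal to the plane
print(np.linalg.norm(rejection))   # distance from P to the plane ≈ 1.2247
```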
Expressing half-open interval [a,b) as infinite intersection of open intervals. (Answer Verification)
To prove that $L=[a,b) =\displaystyle \bigcap_{n=1}^{\infty}\left(a-\frac{1}{n},b\right)=R$, you need to show that each is a subset of the other. To show $L \subseteq R$, we start with some $x \in [a,b)$. Then $a-\frac{1}{n}<a\leq x$ for all $n \in \mathbb{N}$, and $x<b$, so $x \in R$. Now we start with $y \in R$; this means for all $n \geq 1$, we have $a-\frac{1}{n}<y<b$. If $y \not\in L$, that would mean $y<a$. So let $y=a-\epsilon$, with $\epsilon >0$; by the Archimedean property, there exists $k \in \mathbb{N}$ such that $k\epsilon >1$. This would mean $$y=a-\epsilon <a-\frac{1}{k}.$$ But then $y \not\in \left(a-\frac{1}{k},b\right)$. This contradicts the assumption we made at the very beginning. So $y \in L$. Similarly you can check the other part.
Countable index in a direct sum
If $I$ is infinite, we can take a countable subset $J\subset I$ and let $M_n=S_{j_n}\oplus S_{j_{n+1}}\oplus \cdots$. Then $M\supset M_1\supset M_2\supset \cdots$ is a descending chain which does not terminate.
find local minimum and maximum of implicit function
Here you already have $y = -4x+1$; just plug $y = y(x)$ back into the original $y^2 + 2xy = 2x - 4x^2$ and solve for $x$. There might be zero or two real solutions for $x$. If there are two (including a single solution with multiplicity $2$), those are the points on the curve. These are the critical points defined by having an asymptote OR tangent line in some direction, including horizontal. (What you did was effectively letting $\frac{dy}{dx} = 0$ via setting $F_x = 0$.) If there is no real solution, see the next section after the divider. Next you should do the same by setting $F_y = 0$, which might give you another set of critical points defined by an asymptote or tangent line (perhaps vertical). It is a standard procedure and you should check both $F_x =0$ and $F_y = 0$. You can get some accurate information without making a precise plot. When the points satisfy both $F_x = 0$ and $F_y = 0$, then $\frac{dy}{dx} = \frac00$ is undetermined (by that formula you use), which means the asymptote (or tangent line) defining the critical point is neither horizontal nor vertical. The given equation is a quadratic curve (a.k.a. conic section) that is either an ellipse (including a circle), a parabola, or a hyperbola (a pair of arcs). There's a standard way to categorize a quadratic equation via its discriminant with precalculus methods. See wiki or Wolfram MathWorld. Note that a general parabola has one tangent line with a single critical point whose $x$ is a single root with multiplicity $2$. If it's an ellipse (which happens to be the case here), there are always 4 such critical points on the curve to let you "sort" as per the requirement. A hyperbola always has $4$ such critical points "at infinity" defined by the asymptotes. This is the case where there is no real solution when you plug $y = y(x)$ back in.
Prime numbers, primitive roots and $\Phi$?
We claim that the sum of the primitive roots $\bmod p$ is equivalent to $\mu(p-1)\bmod p$, where $\mu$ is the Möbius function. To prove this, we are going to do something similar to Möbius inversion. Letting $g$ be any primitive root $\bmod p$, we can represent the primitive roots $\bmod p$ as the set of values of $g^k$, where $\gcd(k,p-1)=1$. So, the sum of all of these is $$\sum_{k=0}^{p-2} g^k[\gcd(k,p-1)=1],$$ where the Iverson bracket $[\gcd(k,p-1)=1]$ is $1$ if the condition is true, and $0$ otherwise. This can be expanded, using the property that $$\sum_{d|n} \mu(d) = [n=1],$$ as $$\sum_{k=0}^{p-2} g^k\sum_{d|k,\ d|p-1} \mu(d).$$ Switching the indices of summation and setting $k=dm,$ $$\sum_{d|p-1} \mu(d)\sum_{m=0}^{\left(\frac{p-1}{d}\right)-1} g^{md}.$$ The inside sum is a geometric series, and we can simply use the formula for the sum of one to get that it equals $$\frac{g^{p-1}-1}{g^d-1}.$$ If the denominator is not $0\bmod p$, then this is $0\bmod p$ (as $g^{p-1}\equiv 1\bmod p$); so the only term that survives is the one where $d=p-1$, at which point the inner sum is $1$ and the whole expression reduces to $\mu(p-1)$, finishing the proof. Edit: What follows is not terribly accurate numerical analysis - that of Peter in his answer is more accurate. Now, it's probably pretty difficult to get exact distribution statistics on $\mu(p-1)$, as the connection between the factorizations of consecutive integers is not very well-hashed-out. However, we can get some heuristics based on the generic properties of the Möbius function: First off, the distribution of $1$s should be the same as the distribution of $-1$s, so it suffices to find the distribution of $0$s. If $p\equiv 1\bmod 4$, then $p-1$ is not squarefree, so $\mu(p-1)=0$. We know that the probability that $\mu(n)\neq0$ is $6/\pi^2$. However, none of these squarefree numbers are $0\bmod 4$, so the probability that a number that is $2\bmod 4$ (specifically, $p-1$) has $\mu(n)\neq0$ should be $$\frac{4}{3}\left(\frac{6}{\pi^2}\right) = \frac{8}{\pi^2}.$$ Thus, the probability that a prime $p$ has $\mu(p-1)=0$ should be $$\frac{1}{2}\left(1+1-\frac{8}{\pi^2}\right) = 1-\frac{4}{\pi^2},$$ and each of $\pm 1$ has probability $\frac{2}{\pi^2}$.
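A quick verification of the congruence for small primes, assuming sympy's `primerange`, `primitive_root` and `mobius`:

```python
from math import gcd
from sympy import primerange, primitive_root, mobius

for p in primerange(3, 80):
    g = primitive_root(p)
    # sum of all primitive roots g^k with gcd(k, p-1) = 1, reduced mod p
    s = sum(pow(g, k, p) for k in range(1, p - 1) if gcd(k, p - 1) == 1) % p
    assert s == mobius(p - 1) % p
print("sum of primitive roots mod p equals mu(p-1) mod p for all primes p < 80")
```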
Tricky detail in extreme value theorem proof
My guess: the idea is to prove that $f(c)=M$. He proves it by contradiction. Assume that $f(c)<M$; then by continuity we can go a bit further, to $c+\delta$, and still have all function values below $M$. This contradicts the choice of $c$ as the supremum of $X$. P.S. It is easy, but you also have to mention that $c\in[a,b]$.
Prove that the least upper bound of $\mathcal F$ is $\bigcup\mathcal F$ and the greatest lower bound of $\mathcal F$ is $\bigcap\mathcal F$.
Your proofs look good. You could gain in clarity by explaining upfront what you're doing. For example, for the least upper bound: Let's first prove that $\bigcup \mathcal F$ is an upper bound. Proof... And now let's prove that $\bigcup \mathcal F$ is less than or equal to any upper bound $U$. Proof... This allows us to conclude that $\bigcup \mathcal F$ is the least upper bound.
Please, help me with this definition of a function
If $A$ is a subset of the domain of $f$, then one helpful way to think about $f(A)$ might perhaps be $$ f(A) = \{f(x) : x \in A\}. $$ You can check that this agrees with the definition you gave above. As for your example, assuming $-2$ is in the domain of $f$, it is true that $f(-2) \in f(A) = \{f(2),f(3)\} = \{4,9\}$ since $f(-2) = 4$, but this causes no contradiction! Indeed, by your definition $4 \in f(A)$ if and only if there is some $x \in A$ such that $f(x) = 4$; since $2 \in A$ and $f(2) = 4$, then we're a-ok. The important thing to note is that your definition does NOT imply that if $f(x) \in f(A)$ then $x \in A$, rather it implies that if $f(x) \in f(A)$ then there is some $\tilde{x} \in A$ such that $f(\tilde{x}) = f(x)$.
What is the limit $\lim_{x\to\infty}\left(\frac{a^x+b^x}{2}\right)^{1/x}$?
HINT: We have $$\dfrac{\max(a^x,b^x)}2 \leq \dfrac{a^x+b^x}2 \leq \max(a^x,b^x)$$ Hence, $$\dfrac{\max(a,b)}{2^{1/x}} \leq \left(\dfrac{a^x+b^x}2\right)^{1/x} \leq \max(a,b)$$
Fourier transform as an integral on $L^2$
The inversion integral for $f\in L^2$ converges in $L^2(\mathbb{R})$. That is, $$ \lim_{R\rightarrow\infty} \left\|\frac{1}{\sqrt{2\pi}}\int_{-R}^{R}\hat{f}(s)e^{-isx}ds-f\right\|_{L^2(\mathbb{R})}=0. $$ You can choose a subsequence that converges pointwise a.e. to $f$.
Show that there is a number $x\in[\pi/2, \pi]$ such that $\tan(x) = -x$
Can I solve this by the intermediate value theorem? Yes, you can. Consider $f(x) = \tan x + x$. Notice that $f(\pi/2 + \epsilon) \approx -\infty$ for small $\epsilon$ (more formally, the right limit of $f(x)$ as $x \to \pi / 2$ is $-\infty$) while $f(\pi) = \pi$. Now the intermediate value theorem gives you what you want.
How to show $a^2+2b^2=p$ has integer solutions for all primes $p$ with $(\frac{-2}{p})=1$
You proved that $p$ is not prime in $\mathbb Z[\sqrt{-2}]$ (as $p\mid(a-\sqrt{-2})(a+\sqrt{-2})$ and $p$ divides neither factor). Since $\mathbb Z[\sqrt{-2}]$ is a UFD, this implies that $p$ is also not irreducible. Thus there are integers $x,y,u,v$ such that $p=(x+y\sqrt{-2})(u+v\sqrt{-2})$ with $x^2+2y^2$ and $u^2+2v^2$ not equal to $1$. Taking the norm we get $p^2=(x^2+2y^2)(u^2+2v^2)$. Clearly, $p^2$ has only two different factorizations in $\mathbb Z$ into two positive integers, $p^2=1\cdot p^2$ and $p^2=p\cdot p$. The former is excluded, therefore $x^2+2y^2=u^2+2v^2=p$.
If $0⩽x⩽y⩽z⩽w⩽u$ and $x+y+z+w+u=1$, prove $xw+wz+zy+yu+ux⩽\frac15$
It is quite easy to solve this problem with Lagrange multipliers. Calling $$ L(x,y,z,w,u,\lambda) = \frac 15 -(xw+wz+zy+yu+ux)+\lambda(x+y+z+w+u-1), $$ the stationary points are the solutions of $$ \nabla L = (L_x,L_y,L_z,L_w,L_u,L_{\lambda})=0, $$ or $$ \lambda -u-w = 0\\ \lambda -u-z = 0\\ \lambda -w-y = 0\\ \lambda -x-z = 0\\ \lambda -x-y = 0\\ u+w+x+y+z-1 = 0 $$ Solving this linear system we obtain $$ x = y = z = w = u = \frac 15,\;\lambda = \frac25 $$ This point is the only tangency point between the surface $g(x,y,z,w,u)=\frac 15 -(xw+wz+zy+yu+ux)$ and the hyperplane $\Pi: x+y+z+w+u-1 = 0$; the region $g(x,y,z,w,u) \ge 0$ is located in one of the half-spaces delimited by $\Pi$, and at the tangency point we have $g = 0$ with the values found before. NOTE This formulation is shorthand for $$ L(x,y,z,w,u,\lambda,\epsilon) = \frac 15 -(xw+wz+zy+yu+ux)+\lambda_1(x+y+z+w+u-1)+\lambda_2(x-\epsilon_1^2)+\lambda_3(y-x-\epsilon_2^2)+\lambda_4(z-y-\epsilon_3^2)+\lambda_5(w-z-\epsilon_4^2)+\lambda_6(u-w-\epsilon_5^2) $$ and the result should be the same, as you can verify with a little patience.
Harmonic series "fulfills" Cauchy criterion
You know that for all $p$ and $\varepsilon$ there is some $N_{p,\varepsilon}$ such that, for all $n\ge N_{p,\varepsilon}$, $\lvert a_{n+p}-a_n\rvert\le \varepsilon$. What you do not know is whether or not for all $\varepsilon$ there is some $N_\varepsilon$ such that, for all $n\ge N_\varepsilon$ and for all $p$, $\lvert a_{n+p}-a_n\rvert\le\varepsilon$. As for why these need not be equivalent, the instance at hand shows it: the partial sums of the harmonic series satisfy the first condition but not the second.
Let $\mathcal{D}=\{(0,\frac{1}{n})\mid n\in \mathbb{N}\}.$ Determine $\bigcup\mathcal{D}$ and $\bigcap\mathcal{D}$.
$\newcommand{\FF}{\mathcal{F}} \newcommand{\setb}[2]{\left\{ #1 \; \middle| \; #2 \right\}} \newcommand{\set}[1]{\left\{ #1 \right\}}$Generally for a family of sets $\FF$, we define those symbols as so: $$\bigcup \FF = \bigcup_{F \in \FF} F \qquad \bigcap \FF = \bigcap_{F \in \FF} F$$ or, more precisely, $$\bigcup \FF = \setb{x}{x \in F \text{ for some } F \in \FF}$$ $$\bigcap \FF = \setb{x}{x \in F \text{ for every } F \in \FF}$$ An example: let $\FF$ be the collection of sets $$\Big\{ \set{1}, \set{1,2}, \set{1,2,3}, \set{1,2,3,4}, \cdots \Big\}$$ Then $\bigcup \FF = \Bbb Z^+$ and $\bigcap \FF = \set{1}$. That's because every positive integer will be in some $F \in \FF$ at some point, and $1$ is the only element in every $F \in \FF$.
Distribution of no. of siblings of a random child if the no. of children of a family is Poisson distributed
As you say, the problem is in the first step. The likelihood that a randomly chosen family has $m=k+1$ children is proportional to $e^{-\lambda} \frac{\lambda^m}{m!}$, but the likelihood that a randomly chosen child is in a family with $m$ children is proportional to $m e^{-\lambda} \frac{\lambda^m}{m!}$, since there are more children in larger families than in smaller families, and in particular you cannot choose a child from families with $0$ children. So the likelihood that a randomly chosen child has $k=m-1$ siblings is proportional to $(k+1) e^{-\lambda} \frac{\lambda^{k+1}}{(k+1)!} = e^{-\lambda} \frac{\lambda^{k+1}}{k!}$, which is not what you have, as you have $(k+1)!$ in the denominator. This is not the exact probability unless $\lambda=1$: taking the sum, $\sum\limits_{k=0}^\infty e^{-\lambda} \frac{\lambda^{k+1}}{k!} = \lambda \not=1$, so we need to divide the expression by $\lambda$, giving the probability that a randomly chosen child has $k$ siblings as $e^{-\lambda} \frac{\lambda^{k}}{k!}$, as expected.
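A small numpy simulation of this size-biasing effect: it draws family sizes, lists every child (so larger families are automatically picked more often), and compares the sibling distribution with the Poisson pmf $e^{-\lambda}\lambda^k/k!$. The value of $\lambda$ is made up.

```python
from math import factorial
import numpy as np

rng = np.random.default_rng(0)
lam = 2.5
fam = rng.poisson(lam, size=200_000)    # children per sampled family
sizes = fam[fam > 0]                    # childless families contribute no child
siblings = np.repeat(sizes - 1, sizes)  # one entry per child: its sibling count

for k in range(6):
    emp = (siblings == k).mean()
    theo = np.exp(-lam) * lam**k / factorial(k)
    print(k, round(emp, 3), round(float(theo), 3))  # empirical vs e^{-λ}λ^k/k!
```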
Average Number Of Factors
You seem to be looking for this limit: $$\lim_{k \to \infty} \dfrac{\displaystyle \sum_{n=1}^\infty \left\lfloor \dfrac{k}{p^n} \right\rfloor }{k}$$ To calculate this, it suffices to look at values of $k=p^m$ for some arbitrarily large $m$. This gives $$\lim_{m \to \infty} \dfrac{\displaystyle \sum_{n=0}^{m-1}p^n}{p^m} = \lim_{m \to \infty} \dfrac{\left( \dfrac{p^m-1}{p-1} \right)}{p^m} = \dfrac{1}{p-1}$$
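A quick numerical check of the limit $\frac{1}{p-1}$, summing $\lfloor k/p^n\rfloor$ directly for a large $k$:

```python
p, k = 3, 10**8
total, q = 0, p
while q <= k:
    total += k // q  # multiples of p, p^2, p^3, ... up to k
    q *= p
print(total / k, 1 / (p - 1))  # ≈ 0.5 for p = 3
```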
Is the following producing a list of all prime numbers without skipping in a consecutive order?
To add to Henry's answer: your claim that the first appearances of primes (other than 3) come in increasing order is also correct. More specifically, for all primes $p > 3$, the first occurrence of $p$ in the last column of the table is next to either $A = 2p-1$ or $A = 2p+1$. Proof: since $B = (A-1)(A+1)/6$ (and $p$ doesn't divide 6), $p$ divides $B$ if and only if it divides either $A-1$ or $A+1$. The first such odd numbers $A > 1$ with this property are $A = 2p \pm 1$. So $p$ will first appear next to either $A = 2p-1$ (if this isn't a multiple of 3) or $A = 2p+1$ (otherwise).
Finding Cumulative Distribution Function given two independent pdfs
Sketch the $x$-$y$ plane and indicate on it the region where the joint density $f_{X,Y}(x,y)$ of $X$ and $Y$ is nonzero. What is the region of the plane corresponding to the event $\left\{\frac{X}{Y} \leq w\right\}$ where $w$ is some fixed number in $(0,1)$? Find $P\left\{\frac{X}{Y} \leq w\right\}$ by integrating the joint density over the region you found. If you stop and think a bit and look at your sketches a tad more, you might even be able to avoid integrations. Repeat for the case $w > 1$. Verify that your answer asymptotically approaches $1$ as $w \to \infty$. Congratulations. You have found the distribution function $F_W(w)$ of the random variable $W = \displaystyle \frac{X}{Y}$ for $w \geq 0$. Differentiate to get the density function.
Jech's proof of Silver's Theorem on SCH
You left out an important detail in your outline, namely that before we define the sets $F_f$, we have identified (for $\alpha$ in the relevant stationary set $S$) each $A_\alpha$ with a subset of $\omega_{\alpha+1}$. The outcome of this is that, since $f(\alpha)\in A_\alpha$, then it is an ordinal of size at most $\omega_\alpha$, and therefore $A'_\alpha=f(\alpha)$ is a set as in Lemma 8.15. If this happens for a stationary set of indices, we are done since $\prod_{\alpha}A_\alpha$ can be replaced with $\prod_{\alpha}A'_\alpha$ when discussing $F_f$. The other detail left out in the outline is that, before defining the sets $F_f$, the further simplifying assumption was made that all $A_\alpha$ have size at most $\omega_{\alpha+1}$. Without this assumption, the construction of the ultrafilter needs to be done relative to the stationary set $S$ of indices where this happens and, in particular, the set $T$ should have stationary intersection with $S$.
How to prove floor identities?
This theorem is not true if $b$ is not an integer. Take $x=b=1.5$ and take $a=1$. If $b$ is an integer, this follows from the rule $$\left\lfloor \frac{y}{b}\right\rfloor = \left\lfloor \frac{\lfloor y \rfloor}b\right\rfloor$$ setting $y=\frac x a$. Showing this rule, then, suffices. Let $y = \lfloor y \rfloor + \{y\}$, where $0\leq \{y\} < 1$. Use the division algorithm to write $\lfloor y \rfloor = qb + r$ with $0\leq r <b$. Then $\frac{\lfloor y \rfloor} b = q + \frac{r}{b}$, and $0\leq \frac{r}{b} <1$, so $$\left\lfloor \frac{\lfloor y \rfloor}b\right\rfloor = q$$ On the other hand, $\lfloor y \rfloor = qb+r \leq (q+1)b - 1$, and since $0\leq\{y\}<1$, we get $qb \leq y = \lfloor y \rfloor+\{y\}<(q+1)b$. So $q \leq \frac y b < q+1$ and again $$\left\lfloor \frac{y}{b}\right\rfloor = q $$
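A randomized Python check of the rule, using exact rational arithmetic to avoid floating-point edge cases; note $a$ may be any positive real (here a random rational), while $b$ must be an integer:

```python
from fractions import Fraction
from math import floor
import random

random.seed(0)
for _ in range(10_000):
    x = Fraction(random.randint(-10**6, 10**6), random.randint(1, 999))
    a = Fraction(random.randint(1, 10**4), random.randint(1, 99))  # positive, non-integer OK
    b = random.randint(1, 20)                                      # must be an integer
    assert floor(x / (a * b)) == floor(Fraction(floor(x / a), b))
print("floor(x/(ab)) == floor(floor(x/a)/b) on 10,000 random rational inputs")
```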
Cauchy's Integral Formula: conditions vs singularities
The way the Cauchy Integral Formula is used in this case is as follows: $$\frac{1}{2\pi i}\int_{C} f(z)\, dz = \frac{1}{2\pi i}\int_C \frac{1}{z}\, dz = 1.$$ We view the second integral as an integral of the analytic function $1$ divided by $z$; by the Cauchy formula, the integral is equal to the function $1$ evaluated at zero.
A (combinatorics?) problem about shoes
Let the shoes be numbered from 1 to 30 and let $f(n)$ be the number of left shoes in the set of the $n$-th to the $(n+9)$-th shoe for $1\leq n \leq 21$. Since the windows starting at $1$, $11$ and $21$ partition all $30$ shoes, $f(1)+f(11)+f(21)$ equals the total number of left shoes, namely $15$, so the average of these three values is $5$: hence $f$ has a minimum that is at most $5$ and a maximum that is at least $5$. Considering the fact that $|f(n+1)-f(n)| \leq 1$ for all $n$ will lead to a proof. EDIT: Added the argument with the minimum and maximum of $f$.
Conditional Probability and Bayes' Theorem
I don't understand the phrase "average probability of responding." Presumably you're being asked to find the (net) probability of responding to treatment. So, by the law of total probability, this is \begin{align*} P(R) &= P(R\big|\text{treatment } 1)P(\text{treatment } 1) + P(R\big|\text{treatment } 2)P(\text{treatment } 2) + P(R\big|\text{treatment } 3)P(\text{treatment } 3) \\ &= (.03)(.8) + (.95)(.15) + (.02)(.05) = .1675 \end{align*} And, yes, for the second question, by Bayes's Formula: \begin{align*} P(\text{treatment }3\big|R) &= \frac{P(\text{treatment }3 \text{ and }R)}{P(R)} \\ &= \frac{P(R\big|\text{treatment }3)P(\text{treatment }3)}{P(R)} \\ &= \frac{(.02)(.05)}{.1675} \end{align*}
limits proof by induction
There are several issues with your question. First, you should require $b \gt 0$ and $c \ge 0$ for it to work properly (e.g., if $c \lt 0$, then $\sqrt{c}$ is not even a real number). Second, your $\lt$ should be $\le$, e.g., if $b = c = 1$. Third, what you want to prove for "all integers $n$" should be for positive integers $n$ since your relation doesn't necessarily hold for $n = 0$ (e.g., $b = 1$ and $c = 2$). This means you want to prove that for all integers $n \ge 1$ that $$a_n \le \sqrt{c} \le b_n \tag{1}\label{eq1A}$$ Note you can prove this directly. However, to do it using induction, start with your base case of $n = 1$. As you stated, using the inequality of arithmetic and geometric means gives $$b_1 = \frac{a_{0} + b_{0}}{2} \ge \sqrt{a_{0}b_{0}} = \sqrt{c} \tag{2}\label{eq2A}$$ Using $a_1 = \frac{c}{b_1} \implies a_1 b_1 = c$ and multiplying both sides above by $a_1$ (note $a_1 \ge 0$ so the inequality doesn't change) gives, for $c \gt 0$, that $$\begin{equation}\begin{aligned} a_1 b_1 & \ge a_1\sqrt{c} \\ c & \ge a_1\sqrt{c} \\ \sqrt{c} & \ge a_1 \end{aligned}\end{equation}\tag{3}\label{eq3A}$$ Note if $c = 0$, then $a_n = 0$ for all $n \ge 0$ so \eqref{eq3A} (and \eqref{eq5A} below) still hold. This confirms the base case. For the inductive step, assume \eqref{eq1A} is true for $n = k$ for some integer $k \ge 1$. Since $a_k b_k = c$, using the AM-GM inequality again gives, similar to \eqref{eq2A}, $$b_{k+1} = \frac{a_{k} + b_{k}}{2} \ge \sqrt{a_{k}b_{k}} = \sqrt{c} \tag{4}\label{eq4A}$$ Basically repeating the procedure used with \eqref{eq3A} gives $$\begin{equation}\begin{aligned} a_{k+1}b_{k+1} & \ge a_{k+1}\sqrt{c} \\ c & \ge a_{k+1}\sqrt{c} \\ \sqrt{c} & \ge a_{k+1} \end{aligned}\end{equation}\tag{5}\label{eq5A}$$ This shows \eqref{eq1A} is also true for $n = k + 1$ so, by induction, it's true for all $n \ge 1$.
How to find these stationary points in multivariable calculus?
Yeah, as Alberto said, it should be $f_y=3x-2y=0$, so that $y={3x\over 2}$; and then, substituting into the remaining partial derivative, $3y-3x^2={9x\over 2} -3x^2= x({9\over 2}-3x)=0$, so we have $x=0$ or $x={3\over 2}$. Then substitute into $y$ and you get the required stationary points.
Error in approximating the sum
Using the suggestions given so far note that the role of $\Delta x_i$ is played by $\frac{1}{N+1}$. So when you write $$A = \sum_{j=0}^{N}\frac{j^k}{N^k} \cdot \frac{1}{N+1}$$ keep the term involving $N+1$ inside the summation instead of pulling it outside as you did, and let $ \frac{j^k}{N^k} $ play the role of $f(w_i)$. Then perhaps you'll see how to get to the approximation given by your professor. And since we're only going for an approximation here, it will be close enough if $N$ is large enough.
Patterns in pi in "Contact"
"Given that transcendental numbers (or, for that matter, irrational numbers) are infinite, non-repeating sequences of digits, they contain any possible sequence of numbers, including the one Arroway found." This is not necessarily true. A number whose digits contain, with equal frequency, all possible sequences of the same length, is a normal number. There are transcendental numbers that are known to be normal, and others that are known to be not normal, generally because they have been specifically constructed as such. However, pi has long been suspected to be normal, and this appears to be the case when you look at the digits already computed, but it has never been proven one way or the other. So, in that sense, it is indeed possible for there to be some kind of message encoded in pi that we would recognise as being something other than pure chance. It's also one of the best places to put such a message, since pi is (believed to be) a universal constant that does not depend on local physics or units of measurement. However, there is a risk that other sentient races would use a different circle constant (such as some movements on Earth that claim that tau, $\tau=2\pi$, is a more natural choice), and any race that goes down that route would never see such a message (and if the message is actually hidden in tau, we won't see it either).
Order of zero of a function
If we denote the $n$-th derivative of $f$ by $f^{(n)}$, then the following holds: if $f(z)=0$ and $n$ is the smallest positive integer with $f^{(n)}(z)\ne 0$, then the zero $z$ of $f$ has order $n$.
Exponentiation of Real Numbers?
You should proceed with the definition $a^{x} = \lim_{n \to \infty}a^{x_{n}}$ where $x_{n}$ is a sequence of rationals tending to $x$ and $a > 0$. This route is bit difficult compared to the standard route of using logarithms via integral. But providing a rigorous justification of all the usual properties of exponents using the above definition turns out to be a good exercise in real analysis. The first thing one needs to do is to show that the limit $\lim_{n \to \infty}a^{x_{n}}$ exists and the definition is unambiguous i.e. if $x_{n}, y_{n}$ are sequences of rationals both tending to $x$ then $\lim_{n \to \infty}a^{x_{n}} = \lim_{n \to \infty}a^{y_{n}}$. The algebraic properties like $a^{x + y} = a^{x}a^{y}$ don't pose a major challenge. About your inequalities let us say we have $a > b > 0$ and $x_{n}$ is a sequence of rationals tending to $x$. We need to show $a^{x} > b^{x}$. Clearly by the rules of rational exponents we have $a^{x_{n}} > b^{x_{n}}$ but taking limits as $n \to \infty$ weakens the inequality to $\geq $. So what we need are the following inequalities $$ra^{r - 1}(a - b) > a^{r} - b^{r} > rb^{r - 1}(a - b)$$ and $$sa^{s - 1}(a - b) < a^{s} - b^{s} < sb^{s - 1}(a - b)$$ where $r, s$ are rationals with $0 < s < 1 < r$. These are established in this answer. Let $x > 1$ then we can suppose after a certain value of $n$, $x_{n} > 1$. Clearly then we have $$a^{x_{n}} - b^{x_{n}} > x_{n}b^{x_{n} - 1}(a - b)$$ Taking limits as $n \to \infty$ we get $$a^{x} - b^{x} \geq xb^{x - 1}(a - b) > 0$$ so that $a^{x} > b^{x}$. Similarly we can use other inequality (dealing with $s$) to handle the case when $0 < x < 1$. For negative values of $x$ the inequality is reversed i.e. $a^{x} < b^{x}$. This can be easily done via the rule that $a^{-x} = 1/a^{x}$.
Proving a sequence is a martingale
Yes, you have shown what you need for it to be a martingale, assuming you are taking the martingale to be with respect to some appropriate increasing sequence of $\sigma$-fields. You just need that $S_n$ is integrable for each fixed $n$. So your bound is enough, assuming it is correct; I didn't check it myself. The adaptedness condition for martingales would be automatic if the filtration you are working with is $\sigma(Y_1, \ldots, Y_n)$ or some other filtration to which the sequence $\{Y_n\}$ is adapted.
same cycle type but not conjugate
For example in $A_4$, the alternating group on 4 symbols, consisting of the even permutations in $S_4$. The elements $(1\ 2\ 3)$ and $(1\ 3\ 2)$ of $A_4$ have the same cycle structure, but they are not conjugate in $A_4$. That is, there are elements $g$ in $S_4$ such that $g^{-1}(1\ 2\ 3)g=(1\ 3\ 2)$, but there is no such element in $A_4$. Reference: Why are two permutations conjugate iff they have the same cycle structure?
Proof verification for $x^{10}+y^{10}+z^{10}\ge x^9+y^9+z^9$ (where $xyz=1$ and $x,y,z\in \mathbb{R}^+$)
My instinct is to use $xyz=1$ to rewrite the RHS as $$x^9+y^9+z^9=x^{28/3}y^{1/3}z^{1/3}+x^{1/3}y^{28/3}z^{1/3}+x^{1/3}y^{1/3}z^{28/3} $$ and then use AM/GM on each of these terms: $$x^{28/3}y^{1/3}z^{1/3}\le\frac{28x^{10}+y^{10}+z^{10}}{30}$$ via the $n=30$ case of AM/GM. Do this for all three terms and add. This method should prove your generalisation, and also work for other values of $9$ and $10$.
Proof that $E[SS_E] = (n-2)\sigma^2$
I'll show you a general way to prove it... Note that $SS_E=\sum_i(Y_i-\hat{\beta}_0-\hat{\beta}_1x_i)^2$. There are at least two ways to show the result. Both ways are easy, but it is convenient to do it with vectors and matrices. Define the model as $Y_{(n\times 1)}=X_{(n\times k)}\beta_{(k\times 1)}+\epsilon_{(n\times 1)}$ (in your case $k=2$) with $E[\epsilon]=0_{(n\times 1)}$ and $Cov(\epsilon)=\sigma^2I_{(n\times n)}$. With this framework, $$SS_E=(Y-X\hat{\beta})^{\top}(Y-X\hat{\beta})=Y^{\top}(I-P)Y,$$ where $P$ is the projection matrix on the column space of $X$. It is a fact that $\hat{\beta}$ is such that $PY=X\hat{\beta}$, and if $X$ is full rank $\hat{\beta}=(X^{\top}X)^{-1}X^{\top}Y$. If $\epsilon\sim N_n(0,\sigma^2I)$, the result is immediate, because $Y\sim N_n(X\beta,\sigma^2I_n)$ and $$\dfrac{SS_E}{\sigma^2}=\dfrac{Y^{\top}(I-P)Y}{\sigma^2}\sim\chi^2_{(n-k)},$$ because $I-P$ is a projection matrix of rank $n-k$. The second way doesn't need that $\epsilon\sim N_n(0,\sigma^2I)$, just that $E[\epsilon]=0_{(n\times 1)}$ and $Cov(\epsilon)=\sigma^2I_{(n\times n)}$. But you need to show that for any random vector $Z_{(n\times 1)}$ with $E[Z]=\mu$ and $Cov(Z)=\Sigma$, and any symmetric matrix $A_{(n\times n)}$, $$E[Z^{\top}AZ]=tr(A\Sigma)+\mu^{\top}A\mu.$$ So in this case \begin{align*} E[SS_E]&=tr(\sigma^2(I-P))+(X\beta)^{\top}(I-P)X\beta\\ &=\sigma^2(n-k)+0, \end{align*} where we use that $PX=X$ (by definition of $P$) and $tr(I-P)=n-k$, because $P$ has only $0$ and $1$ as eigenvalues (and the trace can be obtained by their sum).
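A Monte Carlo sketch of the second argument, with made-up $X$, $\beta$ and $\sigma$: the average of $SS_E$ over many simulated samples should be close to $(n-k)\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, sigma = 50, 2, 1.3               # k = 2: intercept and one slope
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([2.0, -1.0])

reps = 20_000
sse = np.empty(reps)
for i in range(reps):
    y = X @ beta + sigma * rng.normal(size=n)
    e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]  # OLS residuals
    sse[i] = e @ e
print(sse.mean(), (n - k) * sigma**2)  # both ≈ 81.1
```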
proving existence of supremum in $\mathbb Q$
If $A$ is any nonempty set of real numbers that has an upper bound, then $\sup A$ exists, by the completeness property of the real numbers. So it suffices to show that your set $A$ is nonempty and has an upper bound. I trust that you can show $A$ is nonempty (you just have to give an example of an element of $A$). To say that $A$ has an upper bound means that there is an element $r\in \mathbb{R}$ such that $x\leq r$ for all $x\in A$. That is, you want a number $r$ such that $x\leq r$ for all $x\in\mathbb{Q}$ such that $x^3<2$. Can you think of any such number? (It turns out that $\sup A$ is indeed $2^{1/3}$, but you don't have to prove that to solve the problem--it just asks you to "guess" what you think $\sup A$ is!)
Uniformly integrability and convergence
Q1: You can prove it directly, without the Vitali convergence theorem. A hint to get you started: Let's let $M_n = \max_{1 \le k \le n} X_k$. Now fix $\epsilon > 0$ and write $$E\left[\frac{M_n}{n}\right] \le E\left[\frac{M_n}{n}; M_n \le \epsilon n\right] + \sum_{k=1}^n E\left[\frac{M_n}{n}; M_n > \epsilon n, M_n = X_k\right].$$ (The inequality is to handle the possibility of a "tie", when $M_n = X_k$ for more than one $k$, so that the events $\{M_n = X_k\}, k=1,\dots,n$ are not disjoint.) Q2: As in my comment, if you choose the $X_k$ to have disjoint support, then $M_n = X_1 + \dots + X_n$. Now choose them such that $E[X_k] = 1$ for every $k$. (You can construct such a sequence on the probability space $[0,1]$ equipped with Lebesgue measure...)
By Yoneda lemma, kernel of morphism in preadditive category is unique
An important corollary of the Yoneda lemma is that the Yoneda embedding is full and faithful. Hence any two objects representing the same functor must be isomorphic.
Why is Minkowski's Theorem so powerful?
If we have an algebraic number field $K$ we can consider the real places and the complex ones, i.e. embeddings $\sigma: K\to \mathbb{C}$; if $\sigma(K)\subseteq \mathbb{R}$, we say $\sigma$ is a real place. Each of these places defines an Archimedean absolute value on $K$. Now form the product $\prod_{\sigma} K_{|\cdot |_{\sigma}}$, where $|\cdot |_{\sigma}$ is the induced absolute value and $K_{|\cdot |_{\sigma}}$ is the completion (isometric to $\mathbb{R}$ for a real place and to $\mathbb{C}$ for a complex one). The map $K\to \prod_{\sigma} K_{|\cdot |_{\sigma}}$ sends the ring of integers $\mathcal{O}_K$ to a lattice. Perhaps we can then say that if we know something about the lattice, we know something about the Archimedean absolute values, which in turn gives us information about $K$?
Stability of a system that has (Jacobian-like) matrix with eigenvalue of less than 1 that has $x$ as non-eigenvector
Let's take the simplest case, where $A$ is $n\times n$ and ${\bf R}^n$ has a basis consisting of eigenvectors of $A$ (in other words, the case where $A$ is diagonalizable). Let a basis of eigenvectors be $v_1,\dots,v_n$ with corresponding eigenvalues $b_1,\dots,b_n$, respectively. You can write $x(0)$ as a linear combination of the basis vectors, $$x(0)=c_1v_1+\cdots+c_nv_n$$ Then $$x(m)=c_1b_1^mv_1+\cdots+c_nb_n^mv_n$$ Now if, say, $|b_1|\gt1$, then $x(m)$ blows up as $m$ increases. $x(0)$ doesn't have to be the eigenvector $v_1$, it just has to have a nonzero component in the $v_1$ direction in order for there to be instability.
Check whether a system {$v_1,...,v_m$} of vectors in $\mathbb R^n$ (in $\mathbb R[x]$) is linearly independent.
An easy way to do so is to build a matrix whose rows or columns are $v_1, v_2, \dots$ and calculate the determinant of this matrix: if the determinant is different from zero, the vectors are independent. Let $M \in \mathbb{R}^{n \times m}$ be given by $M = \begin{bmatrix} \vdots & \vdots & & \vdots \\ v_1 & v_2 & \cdots & v_m \\ \vdots & \vdots & & \vdots \end{bmatrix}$. Then the following holds: $\det(M) = 0 \Leftrightarrow \{v_1, v_2, \cdots, v_m\}$ is linearly dependent. The intuition behind this comes from the meaning of the determinant, which you can read about at the Wikipedia Article - The Determinant. This works for $m = n$. For the case $m > n$ the vectors must be dependent. For the case $m < n$ you can use the SVD: if the number of nonzero singular values is lower than $m$, the vectors are dependent. By the way, this is equivalent to the question whether the equation $Mx = 0$ has a nontrivial solution or not.
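A small numpy sketch of the SVD criterion: the rank, i.e. the number of singular values above a tolerance, must equal the number of vectors.

```python
import numpy as np

def independent(vectors, tol=1e-10):
    """Vectors in R^n are independent iff all min(m, n) singular values
    of the matrix with the vectors as columns exceed the tolerance."""
    M = np.column_stack(vectors)
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s > tol)) == M.shape[1]

print(independent([np.array([1, 0, -1]), np.array([1, 1, 1])]))  # True
print(independent([np.array([1, 2, 3]), np.array([2, 4, 6])]))   # False
```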
Geometric-Variational idea of Sine
What you've identified is an important reason mathematicians don't use the classical geometric definition of sine (opposite-over-hypotenuse, SOH-CAH-TOA) as the formal definition: it's not clear what happens after 90 degrees, since you can't have an obtuse right triangle, much less a triangle with more than 180 degrees! You said you didn't want to bring circles in, but let me show you a different way of looking at the unit circle that might be more enlightening. Consider a point on a unit circle centered at the origin. Draw the radius from the origin to that point; this will be your hypotenuse. Draw a segment from the point straight down to the x-axis; the length of this leg represents the sine of the angle $\theta$ between the radius and the x-axis. Draw another segment along the x-axis from the origin to the foot of that perpendicular. Now imagine moving from $\theta=0^\circ$ anticlockwise at a constant rate, say 1 degree per second, around the circle. You should see why the graph is the shape it is. Still confused? Take a look here for what I mean.
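If it helps to see the numbers, here is a tiny sketch of that construction (the step size is an arbitrary choice): tabulating the height of the moving point above the x-axis against the angle traces out the sine graph.

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 9)   # 0, 45, 90, ..., 360 degrees
heights = np.sin(theta)                # y-coordinate of the point on the unit circle
for t, h in zip(np.degrees(theta), heights):
    print(f"{t:5.0f} deg -> height {h:+.3f}")
```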
A result on semisimple left Artinian Rings
Consider $$\mathcal{F}=\{I\subseteq R\mid I\mbox{ is a left ideal in }R\mbox{ and }I\mbox{ is not a direct sum of minimal left ideals of }R\}$$ Assume that $\mathcal{F}$ is nonempty (if it is empty, then we are done). Since $R$ is left artinian, there exists a minimal (with respect to inclusion) ideal $I\in \mathcal{F}$. Then $I$ is a left ideal and not a direct sum of minimal left ideals of $R$. Note that for every left ideal $J\subsetneq I$, by minimality of $I$ we deduce that $J$ is a direct sum of minimal left ideals. This justifies the claim from your note. Next consider $I$. There exists a minimal left ideal $I_0\subseteq I$. We cannot have $I_0^2 = 0$ because $N(R) =0$. Thus $I_0 = Re$ is generated by an idempotent $e$. Next, $R = Re\oplus R(1-e)$ as left $R$-modules. We have $$I = I \cap (Re+R(1-e)) = I\cap (I_0 + R(1-e)) = I_0 + I\cap R(1-e)$$ and also $I_0 \cap (I\cap R(1-e)) = Re\cap (I\cap R(1-e)) = 0$. This implies that $$I = I_0\oplus (I\cap R(1-e))$$ as left $R$-modules. Then $J = I\cap R(1-e)$ is a left ideal properly contained in $I$, so $J$ is a direct sum of minimal left ideals by the minimality of $I$. We derive that $I$ is a direct sum of minimal left ideals. This is a contradiction. Thus $\mathcal{F}$ is empty.
Are there any Turing-undecidable problems whose undecidability is independent of the Halting problem?
Yes! We can define an equivalence relation on decision problems: $A\equiv_T B$ if $A$ computes $B$ and $B$ computes $A$. This defines the partial ordering of Turing degrees: decision problems ordered by relative reducibility. The properties of this partial order have been extensively studied. Some results include:

- Given any nonzero (= not the degree of the computable sets) Turing degree $d$, there is a $\hat{d}$ which is incomparable with $d$ (this answers your question). In fact, any Turing degree is contained in an antichain of degrees of size continuum.
- Every Turing degree is above only countably many degrees, so the "height" of the Turing degrees is $\omega_1$; in case CH fails, this means the poset of Turing degrees is "wider" than it is "tall."
- The Turing degrees form an upper semilattice: given decision problems $A, B\subseteq\omega$, the join of their degrees is the degree of $\{2n: n\in A\}\cup\{2n+1: n\in B\}$. Moreover, this semilattice has no top element (because of the Turing jump). However, there exist Turing degrees $d, \hat{d}$ with no greatest lower bound. Moreover, nontrivial infinite joins never exist (Exact Pair Theorem): given an infinite set of degrees $D$, if $D$ is nontrivial (= does not have a finite subset $D_0$ such that $\forall d\in D\exists d_0\in D_0(d\le_T d_0)$) and $a$ is an upper bound of $D$, then there is a degree $b$ which also is an upper bound of $D$ and which is incomparable with $a$; moreover, $\{d: d\le a, d\le b\}=D$.
- If $A'$ and $B'$ are the halting problems of $A$ and $B$, and $A\le_T B$, then $A'\le_T B'$. This means the jump can be thought of as an operation on degrees, not just sets. It turns out this operation is definable just in terms of the partial ordering! This was an extremely surprising result; see https://math.berkeley.edu/~slaman/talks/sw.pdf.
- The converse of the above bullet point fails extremely badly: the jump is not injective, and indeed given any degree $d$ above the halting problem, there is a minimal degree $\hat{d}$ whose jump is $d$. (Minimal here means "not $\ge_T$ any degree except itself and the degree of the computable sets.")

This is all about the global theory of the Turing degrees; people have also studied extensively the local theory of special subclasses of Turing degrees. The best known such class is the class of degrees of domains of partial computable functions, the c.e. degrees. Of course, the degree of the halting problem is the maximal c.e. degree, so in this context the answer to your question is "no"; however, given any c.e. degree which is not the degree of the halting problem or of the computable sets, there is an incomparable c.e. degree. This is the Friedberg-Muchnik Theorem, and its proof was a precursor to the method of forcing in set theory.

Let me end by stating my favorite two open questions about the Turing degrees:

- Does the poset of all Turing degrees have any nontrivial automorphisms?
- Does the poset of c.e. degrees have any nontrivial automorphisms?

Back in the day both these degree structures were believed to be very homogeneous, with lots of automorphisms; the theme of pure computability theory post-1955ish, however, was quite the opposite: the Turing/c.e. degrees are structurally rich, with lots of definable subclasses, and in fact it is now believed that both partial orders are rigid. All that is currently known, however, is that the automorphism groups are at most countable.
Finally, having said that the answer to your question is "yes," let me explain a sense in which the answer is "no." The only "natural" increasing functions on the Turing degrees which have been discovered so far are (essentially) iterates of the Turing jump. Martin conjectured that (1) assuming ZF+AD, every increasing function is (perhaps only when restricted to a set $\{d: d>c\}$ for some $c$; that is, "on a cone") an iterate of the jump, and (2) assuming ZFC, every Borel function which is degree-invariant and increasing is an iterate of the jump, on a cone. Currently weak versions of Martin's conjecture have been proved, although the full conjecture is still very much open.
Why do these two volume integrals express different values?
The function is not symmetrical about the $y$-axis, indeed the curve "lives" in the first and fourth quadrants. There is symmetry about the $x$-axis, which is why one integral is twice the other. Remark: Since you are much more familiar with integration with respect to $x$, you might interchange the roles of $x$ and $y$ (geometrically: reflect in the line $y=x$). Then you will see more clearly what is going on. The first integral gives the volume of a half-sphere. The second gives the volume of a full sphere.
How to calc arc sine without a calculator?
To compute $\arcsin(x)$ we might need to use a bit of calculus. Note that $$ \arcsin(x)=\int_0^x\frac{\mathrm{d}t}{\sqrt{1-t^2}} $$ Using the binomial theorem, we get that $$ \frac1{\sqrt{1-x^2}}=\sum_{k=0}^\infty\binom{2k}{k}\left(\frac{x}{2}\right)^{2k} $$ Integrating term by term, we get $$ \arcsin(x)=\sum_{k=0}^\infty\frac2{2k+1}\binom{2k}{k}\left(\frac{x}{2}\right)^{2k+1} $$

Iterative method requiring square roots: we can use the identity $$ \begin{align} \sin^2(x/2) &=\frac{1-\sqrt{1-\sin^2(x)}}{2}\\[6pt] &=\frac{\sin^2(x)}{2+2\sqrt{1-\sin^2(x)}} \end{align} $$ and the limit $$ \lim_{n\to\infty}2^n\sin(x/2^n)=x $$ to compute $x$ from $\sin(x)$.
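The power series is easy to evaluate numerically; here is a minimal sketch (the term count is an arbitrary choice, and the series converges for $|x|<1$, slowly as $|x|\to1$):

```python
from math import comb, asin

def arcsin_series(x: float, terms: int = 60) -> float:
    """Partial sum of the series arcsin(x) = sum 2/(2k+1) * C(2k,k) * (x/2)^(2k+1)."""
    return sum(2 / (2 * k + 1) * comb(2 * k, k) * (x / 2) ** (2 * k + 1)
               for k in range(terms))

print(arcsin_series(0.5), asin(0.5))  # both ~ 0.5235987755982988 (= pi/6)
```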
Linearly independent over $\mathbb{Z}$, also linearly independent over $\mathbb{R}$?
YES. First observe that if $V$ is linearly independent over $\mathbb Z$ then it is linearly independent over $\mathbb Q$ (clear denominators). Next assume that $$ c_1v_1+\cdots+c_kv_k=0, \quad c_1,\ldots, c_k\in\mathbb R. $$ The $c_j$'s span a vector space $X$ over $\mathbb Q$. That is, $$ X=\{q_1c_1+\cdots+q_kc_k : q_j\in\mathbb Q\}\subset\mathbb R. $$ Say that $\dim_{\mathbb Q}X=\ell\le k$, and let $b_1,\ldots, b_\ell$ be a basis of $X$. Then $c_i=\sum_{j=1}^\ell q_{ij}b_j$ for some $q_{ij}\in\mathbb Q$. (The $c_i$'s are uniquely so expressed.) Then $$ 0=\sum_{i=1}^k c_iv_i=\sum_{i=1}^k\sum_{j=1}^\ell q_{ij}b_j v_i=\sum_{j=1}^\ell\left(\sum_{i=1}^kq_{ij}v_i\right)b_j $$ If we take the $m$-th component of the vectors above ($m=1,\ldots,n$) we obtain $$ 0=\sum_{j=1}^\ell\left(\sum_{i=1}^kq_{ij}v_{im}\right) b_j $$ This implies that $\sum_{i=1}^kq_{ij}v_{im}=0$ for all $m=1,\ldots,n$ and all $j=1,\ldots,\ell$, since the $b_j$ are linearly independent over $\mathbb Q$. Hence $$ \sum_{i=1}^kq_{ij}v_{i}=0\quad\text{for each }j=1,\ldots,\ell. $$ And as the $v_i$'s are linearly independent over $\mathbb Q$, the $q_{ij}$ are all zero. Therefore $$ c_i=\sum_{j=1}^\ell q_{ij}b_j=0,\quad\text{for all $i=1,\ldots,k$} $$
Let $f$ be defined recursively on the positive integers by $f(1)=1$ and $f(n+1)=\sqrt{2+f(n)}$ for all positive integers $n$. Prove that $f(n) = 2^n-1$.
Hint: $\sqrt{2+2}=2$.
Constructing symplectic structure on $T^*M$
$x$ is an element of $M$ and $TM$ is the tangent bundle of $M$; $T_xM$ is the fiber over $x$ of the projection $p:TM\rightarrow M$, and $T^*M$ is the cotangent bundle. If $\pi:T^*M\rightarrow M$ is the projection, the fiber $T^*M_x$ is the dual of $T_xM$. Thus an element $\epsilon_x\in T^*M_x$ is a linear form $\epsilon_x:T_xM\rightarrow \mathbb{R}$. The differential of $\pi$ is a morphism $d\pi:T(T^*M)\rightarrow TM$. Thus if $\epsilon_x\in T^*M_x$ and $v_{\epsilon_x}\in T_{\epsilon_x}(T^*M)$, then $d\pi_{\epsilon_x}(v_{\epsilon_x})\in T_xM$, so since $\epsilon_x:T_xM\rightarrow \mathbb{R}$, we can define the Liouville form by $\lambda_{\epsilon_x}(v_{\epsilon_x})=\epsilon_x(d\pi_{\epsilon_x}(v_{\epsilon_x}))$. To check that $\omega$ is nondegenerate, use local coordinates. Let $(U_i)_{i\in I}$ be a trivialization of $TM$, so $TU_i=U_i\times \mathbb{R}^n$ and $T^*U_i=U_i\times {\mathbb{R}^n}^*$. An element $\epsilon_x\in {T^*U_i}_x$ is of the form $(x,e_x)$ with $e_x:\mathbb{R}^n\rightarrow \mathbb{R}$. Since $T(T^*U_i)=TU_i\times {\mathbb{R}^n}^*$, an element of $T_{\epsilon_x}(T^*U_i)$ is of the form $v_x=(x,u_x,w_x)$, and $d\pi(v_x)=(x,u_x)$. This implies that $\epsilon_x(d\pi(v_x))=e_x(u_x)$. If you take a basis $(e_1,...,e_n)$ of $\mathbb{R}^n$ and its dual basis $(e_1^*,..,e_n^*)$, which define trivializations of $TU_i$ and $T^*U_i$ with $(x,e_i)\in T_xU_i$ and $(x,e_i^*)\in T^*_xU_i$, you have $(x,e_j^*)(d\pi(x,e_i,w_x))=e_j^*(e_i)$.
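For reference, in the standard local coordinates $(q_1,\dots,q_n,p_1,\dots,p_n)$ on $T^*U_i$ this computation reduces to the familiar formulas (a standard fact, stated here without proof and up to a sign convention): $$\lambda=\sum_{i=1}^n p_i\,dq_i,\qquad \omega=d\lambda=\sum_{i=1}^n dp_i\wedge dq_i,$$ and $\omega$ is nondegenerate because its matrix in the basis $(\partial_{q_1},\dots,\partial_{q_n},\partial_{p_1},\dots,\partial_{p_n})$ is the invertible block matrix $\begin{pmatrix}0&-I\\I&0\end{pmatrix}$.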
Oddly Formed Diophantine Equation Help
Hint: The equation is equivalent to $$ (x+2y+22)(22-x)=17\cdot31 $$
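To push the hint to a full list of integer solutions, one can enumerate the divisor pairs of $527=17\cdot31$; here is a brute-force sketch (assuming integer solutions are what's wanted):

```python
# Since (x + 2y + 22)(22 - x) = 527 and 527 = 17 * 31, the factor 22 - x
# must be one of the eight (positive or negative) divisors of 527.
solutions = []
for d in (1, 17, 31, 527, -1, -17, -31, -527):
    x = 22 - d                   # from 22 - x = d
    two_y = 527 // d - x - 22    # from x + 2y + 22 = 527 / d
    if two_y % 2 == 0:           # always even here: both divisors are odd
        solutions.append((x, two_y // 2))
print(solutions)
# [(21, 242), (5, 2), (-9, 2), (-505, 242),
#  (23, -286), (39, -46), (53, -46), (549, -286)]
```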
Change of Basis Given 2 Vectors and Transition Matrix
Note that the inverse should be $$S^{-1} = \left( \begin{array}{ccc} 5/9 &7/9\\ 2/9 & 1/9 \end{array} \right)$$
Example of closed and discontinuous transformation
Let $f$ be the identity map from $\mathbb R$ with the usual metric to $\mathbb R$ with the discrete metric. Then $f$ is not continuous, but it is a closed map. Another example where both spaces are compact: define $f:[0,1] \to \{0,1\}$ by $f(x)=0$ if $x <\frac 1 2$ and $f(x)=1$ otherwise. Note that the image of any set is closed!
A set $M$ in $X$ is dense iff its intersection with every open ball in $X$ is non-empty.
You want to show that $M\cap B_{\epsilon}(x)\neq \emptyset$, for every $x\in X$ and $\epsilon>0$. Since you go by contradiction, this means that you assume that the statement above is not true. So there exist $x\in X$ and $\epsilon>0$ such that $M\cap B_{\epsilon}(x)= \emptyset$. ($M$ doesn't appear in your proof and it is a typo). From this you deduce that $M\subseteq B_{\epsilon}(x)^c$.
Question about gradient notation
Yes, you can write it in terms of the tensor product: $[ (\nabla \otimes h)\cdot \vec{g}]_j = \sum_{i=1}^3 (\partial_j h_i)g_i$.
Problem about $n$ couples sitting at a round table
If we have $n$ couples, we have $2n$ people, and we can arrange $2n$ people around a circular table in $\frac{(2n)!}{2n} = (2n-1)!$ ways. For $n-1$ couples, say we have $a_{n-1}$ arrangements. When we go to the $n$ case, we can put the first spouse in $2n-2$ places, and then we can put the second in $2n-2$ places as well, so we get $a_n = a_{n-1}(2n-2)^2$. Rolling back we get $a_n = a_1(2n-2)^2(2n-4)^2\cdots(2)^2 = 4^{n-1}(n-1)^2(n-2)^2\cdots(1)^2 = 4^{n-1}\left((n-1)!\right)^2$.
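A quick sanity check of the closed form against the recurrence (a throwaway sketch):

```python
from math import factorial

# Verify a_n = 4**(n-1) * ((n-1)!)**2 against a_n = a_{n-1} * (2n-2)**2, a_1 = 1.
a = 1
for n in range(2, 10):
    a *= (2 * n - 2) ** 2
    assert a == 4 ** (n - 1) * factorial(n - 1) ** 2
print("closed form matches the recurrence for n = 2, ..., 9")
```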
An explicit solution to Poisson's equation with Gaussian "charge density"
I've got the solution; it's actually really simple. You just treat it both as a Poisson equation and as a solution to the homogeneous heat equation with initial data $\log|x|$. (This is legal following some general theory of allowable initial data whose specifics I don't remember.) Then you just need to change variables in time: $z = \frac{|x|^2}{4t}$.
Something strange regarding Euler's Identity
Your problem resides in taking the $\ln$ of a complex exponential. You assert that $e^{i \theta} = \phi$, but notice that: $$e^{i \tau} = e^{2 i \tau} = e^{3 i \tau} = ... = 1$$ (I use $\tau = 2 \pi$). This shows that the complex exponential is not a one-to-one function, and so you can't just take its inverse with $\ln$. Thus your assertion that $$ \theta = \frac{\ln \phi}{i}$$ may not be true. [EDIT:] In regards to why you obtain a division by zero inside your $\arcsin$: the issue lies in the derivation of your formula for inverting $a \cos x + b \sin x$. In the question that you referenced, you multiply and divide your equation by $\sqrt{a^2 + b^2}$. In this case, since $a^2 + b^2 = 0$, you're dividing by $0$, which isn't allowed. You may have to find a different way to get an inverse while avoiding dividing by zero.
Constructing a morphism by defining it on generalized elements
That's a great question ! The answer is yes, that's essentially the statement of the Yoneda lemma. It says more specifically that given a natural transformation $\eta : \hom(-,A)\to \hom(-,B)$ (that is, a way to assign a generalized element of $B$ to a generalized element of $A$ in a coherent fashion), there is a unique morphism $f: A\to B$ such that $\eta (g) = f\circ g$ for all $g: X\to A$ generalized element. The proof is quite easy actually : if you know how to assign generalized elements of $B$ to generalized elements of $A$, just look at "the universal generalized element of $A$" : $id_A: A\to A$. Then you get $\eta_A(id_A) : A\to B$ and that's exactly your $f$ (and the fact that this $f$ works for everyone comes from the naturality of $\eta$)
What is the point of giving a tensor identity in normal coordinates?
An object is said to be coordinate independent when it can be defined without direct reference to coordinate charts. For instance, the Riemannian curvature $R$ can be viewed as a multilinear map between tangent spaces $$ R|_{p}:T_pM\times T_pM\times T_pM\to T_pM \\ (X_p,Y_p,Z_p)\mapsto R(X_p,Y_p)Z_p $$ and similarly for other tensors. This does not mean that the components $R^{a}{}_{bcd}$ will be independent of the choice of coordinates (though under a change of coordinates they will be related by a simple transformation rule). A component such as $R^4{}_{123}$ has no meaning unless we fix a coordinate chart. Index expressions remain useful because they hold in any coordinate chart, even though the values of the individual components may change. Generally speaking, coordinates are most often used as an intermediate step in computing coordinate-independent relationships between coordinate-independent objects. In this context, one can choose an arbitrary coordinate chart, or one which is convenient for the task at hand. For instance, if we want to prove the first Bianchi identity $$ R(X,Y)Z+R(Y,Z)X+R(Z,X)Y=0 $$ we can work one point at a time by choosing an arbitrary point $p$ and showing the identity holds at $p$. By choosing normal coordinates centered at $p$, we can do the computation in a setting where the components of $R$ are particularly simple, since half of the terms vanish.
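Concretely, the simplification normal coordinates buy you is this standard fact (stated here without proof, with the usual coordinate conventions for $R$): in normal coordinates centered at $p$, $$g_{ab}(p)=\delta_{ab},\qquad \partial_c g_{ab}(p)=0,\qquad \Gamma^a_{bc}(p)=0,$$ so at the single point $p$ the curvature reduces to $$R^a{}_{bcd}(p)=\partial_c\Gamma^a_{db}(p)-\partial_d\Gamma^a_{cb}(p),$$ and the cyclic sum over $(b,c,d)$ cancels in pairs because $\Gamma^a_{bc}$ is symmetric in its lower indices.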
Let R be a Boolean ring with unity. Show that $0$ is the only nilpotent element and $1$ is the only unit.
For the first part, $x^2=x$ implies that $x^3=x^2x=x^2=x$ and similarly $x^n=x$ for all $n\geq 1$. Therefore if $x^n=0$ for some $n$ then $x=0$. For the second part, since $x^2=x$ it follows that $x(1-x)=0$. Therefore if $xy=1$ then multiplying by $1-x$ we get $1-x=0$, so $x=1$.
Problems with the trapezoidal rule
I find the presentation in Wikipedia to be disorganized and unclear. It would be better to read a textbook or even some clearly written course notes such as these. There are two well-known versions of the error formula. One is $$\text{Error} = -\frac{f''(\zeta) h^3 N}{12} \text{ for some $\zeta \in [a,b]$.}$$ Now if you just choose a random $\zeta$ in the interval $[a,b],$ the right hand side of that formula might be greater than the error or even less than the error. But the formula doesn't say to take any random $\zeta$. It says that if you choose a particularly "good" value of $\zeta$ in that interval you will be able to make $-\frac{f''(\zeta) h^3 N}{12}$ be exactly equal to (not greater than) the actual error of your particular application of the trapezoid rule. The formula doesn't tell you how to find a "good" value of $\zeta$ that will be in the interval $[a,b]$ and will make the formula true. It just says that such a value of $\zeta$ exists somewhere in the interval $[a,b]$. So for practical purposes, we might choose a value $\zeta_\max \in [a,b]$ such that $f''(\zeta_\max)$ has the largest absolute value that $f''$ has anywhere on $[a,b]$. That means $\lvert f''(\zeta_\max)\rvert \geq \lvert f''(\zeta)\rvert$ where $\zeta$ is the "good" value (the value we know exists, but don't know how to find). Then we might say that $$\left\lvert\text{Error}\right\rvert \leq \frac{\lvert f''(\zeta_\max)\rvert h^3 N}{12}.$$ In this formula, if we want "Error" to mean the exact error and not just some kind of error bound, we cannot guarantee equality and must use the $\leq$ symbol instead. That's because we picked a "worst case" choice of $\zeta$ instead of the "actual" choice of $\zeta.$ I think, however, rather than $\zeta_\max$ it's more common that we write something like $$ M = \max_{a\leq x\leq b} \lvert f''(x)\rvert,$$ and then the second formula can be written $$\left\lvert\text{Error}\right\rvert \leq \frac{M h^3 N}{12}.$$ I see both formulas represented in some form or another in the Wikipedia article. But there is one place where the text says "the error is bounded by" and then there is a formula of the form $$ \text{error} = \ldots .$$ Here the $\leq$ relation is implied by the words "bounded by", and the word "error" in the equation is not really the error but merely the error bound. This is one of the ways the article is disorganized and unclear; it would have been better if the word "error" had simply not been written in that equation at all. The word "error" in that equation certainly does not mean what you mean when you write "|Error|". For the second part of your question, you and Wikipedia are both right, because $a+k\frac{b-a}{N} = a_k$ according to the way you numbered the $a_k,$ and $$\frac{f(a)+f(b)}{2} + \sum_{k=1}^{N-1}f(a_k) = \sum_{k=1}^{N}\frac{f(a_k)+f(a_{k-1})}{2}.$$ Consider this example where $N=3$: \begin{multline} \frac{f(a_1)+f(a_0)}{2} +\frac{f(a_2)+f(a_1)}{2} +\frac{f(a_3)+f(a_2)}{2} \\ = \frac{f(a_0)+f(a_1)}{2} +\frac{f(a_1)+f(a_2)}{2} +\frac{f(a_2)+f(a_3)}{2} \\ = \frac{f(a_0)}{2} +\frac{f(a_1)+f(a_1)}{2} +\frac{f(a_2)+f(a_2)}{2} +\frac{f(a_3)}{2} \\ = \frac{f(a_0)}{2} +f(a_1) +f(a_2) +\frac{f(a_3)}{2} \\ = \frac{f(a_0)+f(a_3)}{2} +\left(f(a_1) + f(a_2)\right). \end{multline} Your formula is more intuitively obvious, but the Wikipedia formula is preferred for calculation since it does not require so many additions and divisions.
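As a concrete check of the bound (a small sketch; the integrand and $N$ are arbitrary choices): for $f(x)=\sin x$ on $[0,\pi]$ the exact integral is $2$ and $M=\max\lvert f''\rvert=1$.

```python
import numpy as np

a, b, N = 0.0, np.pi, 100
h = (b - a) / N
fx = np.sin(np.linspace(a, b, N + 1))

# The "efficient" form of the composite trapezoid rule.
trap = h * (fx[0] / 2 + fx[1:-1].sum() + fx[-1] / 2)

error = abs(trap - 2.0)          # exact integral of sin on [0, pi] is 2
bound = 1.0 * h**3 * N / 12      # M * h^3 * N / 12 with M = 1
print(error <= bound, error, bound)   # True, ~1.6e-4, ~2.6e-4
```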
Maximal height of subgroups in $S_n$?
This value is known. The length of the longest subgroup chain in $S_{n}$ is given by $$\left\lceil\frac{3n}{2}\right\rceil - b(n) - 1,$$ where $b(n)$ is the number of $1$s in the base $2$ representation of $n$. Reference: P.J. Cameron, R. Solomon and A. Turull, Chains of subgroups in symmetric groups, J. Algebra 127 (1989), 340-352. There is also an earlier paper: Reference: L. Babai, On the length of subgroup chains in the symmetric group, Comm. Algebra 14 (1986), 1729-1736, in which the upper bound $2n-1$ is established. One reason why such a longest chain is interesting is that its length provides a bound on the minimal number of generators for any subgroup. In particular, for the case of $S_{n}$, the bound above provides an upper bound for the minimal number of generators of any permutation group of degree $n$.
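The formula is easy to tabulate; a quick sketch:

```python
from math import ceil

def chain_length(n: int) -> int:
    """Length of the longest subgroup chain in S_n (Cameron-Solomon-Turull)."""
    return ceil(3 * n / 2) - bin(n).count("1") - 1

print([chain_length(n) for n in range(1, 11)])
# [0, 1, 2, 4, 5, 6, 7, 10, 11, 12]
```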
Proving every discrete subgroup of $\mathbb{R}^n$ is isomorphic to $\mathbb{Z}^m$ with $m \leq n$
Suppose you have $m>n$. You may throw away some generators and assume that $m=n+1$. In this case you have $n+1$ generators of your group, $v_1,\ldots,v_{n+1}$. They cannot be linearly independent over $\Bbb R$, but they are linearly independent over $\Bbb Q$. We can assume for some $r\le n$ that $v_1,\ldots,v_r$ are linearly independent over $\Bbb R$ but that $$v_{r+1}=a_1 v_1+\cdots +a_r v_r$$ with the $a_i\in\Bbb R$. Without loss of generality we may assume $a_1\notin\Bbb Q$. For each $k\in \Bbb N$, $$kv_{r+1}-\lfloor ka_1\rfloor v_1-\cdots-\lfloor ka_r\rfloor v_r=\{ka_1\}v_1+\cdots+\{k a_r\}v_r\in G$$ where $\{x\}$ denotes the fractional part of $x$. As all the $\{ka_1\}$ are distinct ($a_1$ is irrational), these are infinitely many elements of $G$ in a bounded region, contradicting discreteness.
Not closed somewhere $\Rightarrow$ not compact anywhere?
Although it might seem that way from the definitions, compactness is not a relative property. If $E\subseteq X$, $X$ a metric space, then the compactness of $E$, although phrased in terms of open coverings of $E$ by sets open in $X$, only depends on the topology of $E$ (the subspace topology, which is also the topology you get by restricting the metric $d$ to $E\times E$). Compactness is an intrinsic property. I'm not sure if you know about subspace topologies, but any subset of a metric space can be regarded as a metric space by restricting the metric. You can show then that $E$ is compact in your sense if and only if every cover of $E$ by sets open in $E$ (for the metric topology on $E$) has a finite subcover. So, if $E\subseteq X$ is compact, and $E\rightarrow X^\prime$ is an embedding into another metric space, meaning a homeomorphism onto its image, then the image of $E$ will be closed in $X^\prime$, because the image of $E$ will be compact, and compact subsets of metric spaces are closed. In fact, this will work for an arbitrary continuous $E\rightarrow X^\prime$. If you can find an embedding (or any continuous map whatsoever) $E\rightarrow X^\prime$ whose image is not closed, then $E$ cannot be compact. I should add that I was confused by this when I was first learning this stuff. The definition used in the context of metric spaces makes it seem like compactness is related to the ambient metric space $X$. What cleared everything up for me was a course in general point-set topology.
Determine the minimum of $\frac{\int_0^1{x^2\left( f'\left( x \right) \right) ^2 dx}}{\int_0^1{x^2\left( f\left( x \right) \right) ^2dx}}$
Let $g = xf$, and notice $$\begin{align}|g'|^2 &= (xf' + f)^2 = (xf')^2 + 2xff' + f^2 = (xf')^2 + x(f^2)' + x'f^2\\ &= (xf')^2 + (xf^2)'\end{align}$$ Integrate both sides over $[0,1]$; we obtain $$\require{cancel} \int_0^1 |g'|^2 dx = \int_0^1 (xf')^2 dx + \color{red}{\cancelto{0}{\color{gray}{\left[xf^2\right]_0^1}}} = \int_0^1 (xf')^2dx$$ because $f(1) = 0 \implies \left[xf^2\right]_0^1 = 0$. As a result, $$\mathcal{I}(f) \stackrel{def}{=} \frac{\int_0^1 (xf')^2 dx}{\int_0^1 (xf)^2 dx} = \frac{\int_0^1 |g'|^2 dx}{\int_0^1 g^2 dx}$$ Notice $f \in C^1 \implies g = xf \in C^1$ and $g(0) = g(1) = 0$. By Wirtinger's inequality, we have $$\int_0^1 |g'|^2 dx \ge \pi^2\int_0^1 |g|^2 dx$$ This implies $\mathcal{I}(f)$ is bounded from below by $\pi^2$. It is easy to see this lower bound is attained by $$g(x) = \sin(\pi x)\quad\iff\quad f(x) = \frac{\sin\pi x}{x}$$ The minimum we seek therefore equals $\pi^2$.
Problems with exponential equation
It's one equation in two unknowns, so all you can do is solve for one variable in terms of the other, or an equivalent. If $e^{-xy}=2y$, you can solve for $x$ in terms of $y$ via $e^{-x}=(2y)^{1/y}$, i.e. $x =-\ln(2y)/y$. To solve for $y$ in terms of $x$ looks like it would involve the Lambert W function. I don't know what the distance function has to do with it.
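For completeness, here is how the Lambert $W$ step would go (a sketch; $W$ denotes the principal branch, and $x\ne0$ is assumed): substituting $u=xy$ in $e^{-xy}=2y$ gives $$e^{-u}=\frac{2u}{x}\quad\Longrightarrow\quad ue^{u}=\frac{x}{2}\quad\Longrightarrow\quad u=W\!\left(\frac{x}{2}\right),$$ so $y=W(x/2)/x$. As a quick check, $x=2e$ gives $y=W(e)/(2e)=1/(2e)$, and indeed $e^{-xy}=e^{-1}=2y$.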
Galois theory: Gal(K/F) divides [K:F]?
The numbers $\alpha^a\beta^b\gamma^c$ are not necessarily linearly independent for $a,b,c\in \{0,1,2\}$. For $F(\alpha,\beta,\gamma)$, a basis should instead be the $\alpha^a\beta^b\gamma^c$ where $0 \le c \le [F(\gamma):F]-1$, $0 \le b \le [F(\gamma,\beta):F(\gamma)]-1$ and $0 \le a \le [F(\alpha,\beta,\gamma):F(\beta,\gamma)]-1$. So your counting of $27$ is not quite correct. In fact, over $F(\gamma)$, we have $x^3+7x+101=(x-\gamma)(x^2+mx+n)$, so $\beta$ is a root of the polynomial $x^2+mx+n$ and hence $[F(\beta,\gamma):F(\gamma)]\le 2$. Altogether $[F(\alpha,\beta,\gamma):F]\le 3\cdot2\cdot1=6$, since $\alpha=-\beta-\gamma$ (the cubic has no $x^2$ term) already lies in $F(\beta,\gamma)$.
Trig identities - stuck solving $\tan^2\theta = -\frac 32 \sec\theta$
Set $\cos\theta=t$; then $$ \tan^2\theta=\frac{1-t^2}{t^2} \qquad \sec\theta=\frac{1}{t} $$ and the equation becomes $$ \frac{1-t^2}{t^2}=-\frac{3}{2t} $$ that is, $$ 2-2t^2=-3t $$ and finally $$ 2t^2-3t-2=0 $$ The roots are $2$ and $-1/2$; since $|\cos\theta|\le1$, the root $2$ is rejected. So the equation reduces to $$ \cos\theta=-\frac{1}{2} $$
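For reference, assuming $\theta$ is measured in radians, this gives the solution set $$\theta=\pm\frac{2\pi}{3}+2k\pi,\qquad k\in\mathbb{Z}.$$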
n-dimensional Euclidean space is separable
To answer your second question first, note that the interval $I_k=[k,k+1]$ is compact for each $k\in\Bbb Z$. Let $\langle a_1,\ldots,a_n\rangle\in\Bbb Z^n$; then $\prod_{k=1}^nI_{a_k}$ is a compact metric space, so by (1) it is separable. Finally, $\Bbb Z^n$ is countable, and $$\Bbb R^n=\bigcup\left\{\prod_{k=1}^nI_{a_k}:\langle a_1,\ldots,a_n\rangle\in\Bbb Z^n\right\}\,,$$ so (2) implies that $\Bbb R^n$ is separable. To prove (2), let $X=\bigcup_{n\in\Bbb N}X_n$, where each $X_n$ is separable. This means that each $X_n$ has a countable dense subset $D_n$. Let $D=\bigcup_{n\in\Bbb N}D_n$; $D$ is the union of countably many countable sets, so $D$ is countable. Finally, let $U$ be any non-empty open subset of $X$; there is at least one $n\in\Bbb N$ such that $U\cap X_n\ne\varnothing$. And $U\cap X_n$ is a non-empty open subset of $X_n$, so $U\cap D_n\ne\varnothing$: $D_n$ is dense in $X_n$, so it intersects every non-empty open subset of $X_n$. But then $U\cap D\supseteq U\cap D_n\ne\varnothing$, so we’ve shown that every non-empty open subset of $X$ intersects $D$, i.e., that $D$ is dense in $X$. Thus, $X$ has a countable dense subset and is therefore separable.
Convergence of random variable 5
$n/m^2\to 0$: otherwise, if $\liminf_{n\to\infty} n/m^2 = c>0$, then $\liminf_{n\to\infty} \sqrt{n}/m = c'>0$, so for all large $n$, $m<\sqrt{n}/(c'-\epsilon)$. So $m=O(\sqrt{n})$ and hence $m=\Theta(\sqrt{n})$, which contradicts $m=\omega(\sqrt{n})$ according to your link. $X_n\to 1$ a.s. means that outside a set of probability $0$, $(n/m^2)X_n\to (0)(1)=0$, i.e., $(n/m^2)X_n\to 0$ a.s. See also Slutsky's theorem.
Inclusion of subgroups implies the group is cyclic
It might be worth noting that you do need $G$ to be finite; otherwise, the claim is not true. For example, the Prüfer $p$-group has the property that given any two elements $a$ and $b$, either $\langle a\rangle \subseteq \langle b\rangle $ or $\langle b\rangle \subseteq \langle a\rangle$, but the group itself is not cyclic. So, we want to prove: Suppose $G$ is a finite group such that for all $a,b\in G$, either $\langle a\rangle\subseteq \langle b\rangle$ or else $\langle b\rangle \subseteq \langle a\rangle$. Then $G$ is cyclic. Your "I think this works" is very badly phrased. You cannot begin by assuming that $G=\langle g\rangle$, because this is tantamount to assuming that $G$ is cyclic; but this is what you want to prove! You don't get to assume it to begin with. At best, you should say: "If $G=\langle g\rangle$, then we're done; otherwise, let..." The argument works reasonably well: suppose $G=\{g_1,\ldots,g_n\}$ (remember, $G$ has to be finite). Let $a_1=g_1$. Now, consider $g_2$; either $g_2\in\langle a_1\rangle$ or $\langle a_1\rangle\subseteq \langle g_2\rangle$. In the former case, let $a_2=g_1$; in the latter, let $a_2=g_2$. Assume you have already defined $a_1,\ldots,a_{\ell}$, with $1\leq \ell\lt n$. Consider $g_{\ell+1}$. If $g_{\ell+1}\in\langle a_{\ell}\rangle$, let $a_{\ell+1}=a_{\ell}$. Otherwise, we have $\langle a_{\ell}\rangle \subseteq \langle g_{\ell+1}\rangle$; then set $a_{\ell+1}=g_{\ell+1}$. Now, I claim that $G=\langle a_{n}\rangle$. Indeed, note that by construction, $g_k\in\langle a_{k}\rangle\subseteq \langle a_{k+1}\rangle$. Therefore, $g_1,\ldots,g_k\in\langle a_{k}\rangle$. In particular, $G=\{g_1,\ldots,g_n\}\subseteq \langle a_n\rangle \subseteq G$, giving equality. Thus, $G$ is cyclic. (There are of course easier ways of doing this; but this follows the intuitive idea of your argument, only written up more formally).
Find eigenvalues and eigenvectors of the operator $A$
Notice that $A$ is a symmetric matrix, because $A_{ij}=(Ae_{j}\cdot e_{i})=|a|^{2}(e_{j}\cdot e_{i})-(a\cdot e_{j})(a\cdot e_{i})=(Ae_{i}\cdot e_{j})=A_{ji}$. So it is diagonalizable and has an orthonormal eigenbasis. Intuition behind the eigenvectors: notice that $A'x:=(a\cdot x)a$ maps every vector onto the direction of $a$. Modifying a little, define $A''x:=\frac{1}{|a|^{2}}(a\cdot x)a$. Then $A''$ is an orthogonal projection, and $x-\frac{1}{|a|^{2}}(a\cdot x)a$ is the component of $x$ orthogonal to $a$. Geometrically, it is clear that $a$ is an eigenvector of $A''$ corresponding to the eigenvalue $1$. The other eigenvalue of $A''$ is $0$, and its eigenvectors are the vectors orthogonal to $a$; let's call them $a_{1}^{\bot}$ and $a_{2}^{\bot}$. Now if we look at $I-A''$, it is the projection onto the space spanned by $a_{1}^{\bot},a_{2}^{\bot}$. In fact, $(I-A'')a_{i}^{\bot}=a_{i}^{\bot}$ for $i=1,2$ and $(I-A'')a=0$; i.e., the $a_{i}^{\bot}$ are eigenvectors of $I-A''$ corresponding to the eigenvalue $1$, and $a$ is an eigenvector of $I-A''$ corresponding to the eigenvalue $0$. In our case $A=|a|^{2}(I-A'')$. Using the above intuition we find the eigenvalues of $A$ are $|a|^{2}$ and $0$, with corresponding eigenvectors $a_{1}^{\bot}, a_{2}^{\bot}$ and $a$ respectively.
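A quick numerical confirmation (a sketch with a made-up vector $a$):

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])                  # |a|^2 = 9
A = np.dot(a, a) * np.eye(3) - np.outer(a, a)  # A x = |a|^2 x - (a . x) a

vals = np.linalg.eigvalsh(A)                   # A is symmetric
print(np.round(vals, 10))                      # [0. 9. 9.]
print(np.round(A @ a, 10))                     # a is an eigenvector for 0: [0. 0. 0.]
```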
Existence of a fixed-point free map in a manifold.
Given $x \in M$, use the rectification theorem to prove there exists a neighbourhood $U$ of $x$ and $\varepsilon > 0$ such that $\theta(t, y) \neq y$ for all $y \in U$ and for all $0 < |t| < \varepsilon$. Use compactness to extend the result to $M$.