Why is it that I cannot imagine a tesseract?
Big surprise: our brains evolved in a three-dimensional environment, and so that is what they are best suited to thinking about. Three-dimensional space is easy to visualize because we literally see it all the time. Thinking in higher dimensions is harder because we have little or no direct experience with them, so there is no clear prototype for most people to use as a springboard for visualizing them.
Is this function analytic: $F(x+iy)=\frac1{\pi}\int_{\mathbb R}\frac y{(x-t)^2+y^2}\,f(t)\,dt$.?
No. This function is real-valued, so it cannot be analytic since this would violate the Cauchy-Riemann equations. However, it is a harmonic function. Indeed, $F$ is a convolution by the Poisson kernel, which I suggest you look into a little.
How to know if a point in a circle has crossed a plane passing through the center point?
Solved! Thanks to @SrinivasK for the suggestion of using the Math.Atan2 function. I realized that I could use it to get the angle of North and then compare it against the angle of the mouse point to see whether it lies within the first 180 degrees. Here is the C# code I used, which should be simple enough to read as general pseudo-code:

    //1) Get the point offset from center to north
    PointF offsetN = new PointF(north.X - center.X, north.Y - center.Y);
    //2) Get the Atan2 of the north point in degrees
    double northTan = Math.Atan2(offsetN.Y, offsetN.X).ToDegrees();
    //3) Get the offset of the location point
    PointF offsetL = new PointF(location.X - center.X, location.Y - center.Y);
    //4) Get the Atan2 of the location point in degrees
    double locationTan = Math.Atan2(offsetL.Y, offsetL.X).ToDegrees();
    //5) If in the 'left hemisphere', subtract from 360
    double northTanBound = northTan + 180;
    if (locationTan < northTan || locationTan > northTanBound)
        C = 360.0 - C;
My Proof for the Inverse Function Theorem
It might be easier to do the following: Let $g = f^{-1}$, so that $g(f(x)) = x$. Suppose $t_n \to t=f(x_0)$; then (for sufficiently large $n$) we can find $x_n$ such that $t_n = f(x_n)$. Furthermore, we have $x_n \to x_0$. Then ${g(t_n)-g(t) \over t_n-t} = {x_n -x_0 \over f(x_n)-f(x_0)} \to {1 \over f'(x_0)}$.
Calculation of fundamental group using Van Kampen
Van Kampen doesn't seem like the way to go here, to be honest. First of all, their intersection is not just $0$ under the equivalence relation: the two sets are the same set, so the decomposition doesn't really work. I think a better way to approach this is to find a space homeomorphic to the quotient space. Try $\Bbb R_{\geq 0}$.
Is the following equality obtained using algebra?
Let $c_0$ be the initial debt, $r$ the rate, $n$ the payment number, and $x$ the unknown payment.
\begin{align}
c_1 &= c_0 + c_0 r - x = c_0(1 + r) - x \\
c_2 &= c_1(1 + r) - x = c_0(1 + r)^2 - x(1 + r) - x \\
c_3 &= c_2(1 + r) - x = c_0(1 + r)^3 - x(1 + r)^2 - x(1 + r) - x \\
&\ \,\vdots \\
c_n &= c_0(1 + r)^n - x\sum_{k = 0}^{n - 1}(1 + r)^k = c_0(1 + r)^n - x\,\frac{(1 + r)^n - 1}{(1 + r) - 1}
\end{align}
Since $c_n = 0$:
$$\color{#f00}{x} = \frac{c_0(1 + r)^n\,r}{(1 + r)^n - 1} = \color{#f00}{\frac{r}{1 - (1 + r)^{-n}}\,c_0}$$
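For a quick numeric sanity check of the closed form, here is a minimal Python sketch (the values of $c_0$, $r$ and $n$ are illustrative, not from the question): it computes $x$ from the formula and then simulates the recursion $c_{k+1}=c_k(1+r)-x$ to confirm the debt reaches $0$ after $n$ payments.

    # A minimal sanity check; c0, r and n are illustrative values.
    c0, r, n = 10_000.0, 0.01, 120

    # Closed form for the payment x.
    x = r * c0 / (1 - (1 + r) ** (-n))

    # Simulate c_{k+1} = c_k (1 + r) - x; after n payments c should be ~0.
    c = c0
    for _ in range(n):
        c = c * (1 + r) - x
    print(x, c)  # c is 0 up to floating-point error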
Proving that $\sum_ {n=1}^{\infty}{\frac{(-1)^n}{\sqrt{n}}} $ converges
To use the alternating series test for a series $\sum(-1)^n a_n$ you need:

- $a_n\geq 0$
- $a_n\geq a_{n+1}$
- $\lim a_n = 0$

In your approach you wrote that $a_{n+1} = \frac{1}{\sqrt{n+1}}>\frac{1}{\sqrt n} = a_{n}$, which is wrong (the inequality goes the other way) and is not what you need. Fix it and your solution will be correct.
Sum of probability differential is zero
Presumably because we always have $ \sum_i p_i = 1$: if we differentiate this condition, $$ \sum_i dp_i = 0 $$ so that the total probability remains $1$. It's like $\dot{x} \cdot x = 0 $ when $x$ is forced to lie on a sphere $x \cdot x = R^2$.
Why do we put the f on the left of x?
This is consistent with English syntax and with the concept of a "function" being a special kind of relation on sets. Look at the relationship "husband of": in English this is typically expressed, for example, as "the husband of Hillary Clinton is Bill Clinton". In mathematical notation this gets naturally translated, maintaining the syntactic order: $H(\text{Hillary}) = \text{Bill}$. Likewise, "the successor of Obama is Trump" (in the US presidency), and $S(n) = n+1$ (the successor function on the natural numbers).
Rudin chapter 3 Functional Analysis problem 3
For part (b), consider the two subsets of $\mathbb R^2$, $$ A = \{ y > 0 \} \cup \{ x < 0,\, y \geq 0 \}$$ and $$ B = \{ (0,0) \}\,. $$ Then any line in $\mathbb R^2$ that passes through the origin will intersect both $A$ and $B$, meaning that it is not possible to separate them with a linear functional. To be precise, if $f : \mathbb R^2 \to \mathbb R$ is any linear functional, then $\ker f$ is a line through the origin. Since $\ker f$ intersects both $A$ and $B$, we have $0 \in f(A) \cap f(B)$. Thus $f(A)$ and $f(B)$ will never be disjoint.
Let $A,B$ be nonempty subsets of a topological space $X$. Prove that $A\cup B$ is disconnected if $(\bar{A}\cap B)\cup(A\cap\bar{B})=\emptyset$.
You’re right to be concerned: it’s not enough to show that $A\cap B=\varnothing$. Let $C=A\cup B$. Use the fact that $(\operatorname{cl}A)\cap B=\varnothing$ to show that $B$ is a relatively open subset of $C$, and the fact that $A\cap\operatorname{cl}B=\varnothing$ to show ... ?
Prove $(\frac{n+1}{n})^{n+1}$ is decreasing
Note first that $\left(1+\frac{1}{n^2}\right)^n\geqslant 1+\frac{n}{n^2}=1+\frac{1}{n}$ by Bernoulli's inequality, which justifies the underbrace below:
$$\frac{\left(\frac{n+1}{n}\right)^{n+1}}{\left(\frac{n}{n-1}\right)^n}=\frac{(n+1)^{n+1}(n-1)^n}{n^{2n+1}}=\underbrace{\left(1+\frac{1}{n}\right)}_{\leqslant\left(1+\frac{1}{n^2}\right)^n}\left(1-\frac{1}{n^2}\right)^n\leqslant\left(1-\frac{1}{n^4}\right)^n<1.$$
Conjecture about polynomials $f_n\in\mathbb Q[X_1,\dots,X_n]$ defining bijections $\mathbb N^n\to\mathbb N$
The diagonalization argument can indeed be generalized. Roughly, one can see all elements of $\mathbb{N}$ appear in order by going through all hyperplanes $X_1+\ldots+X_n = s$. Now for the proof. Let $s=X_1+...+X_n$. Then your function $f_n$ can be written as $$ f_n(X_1, ..., X_n) = \binom{s+n-1}{n} + f_{n-1}(X_1, \ldots, X_{n-1}), $$ with the conventions that $f_n(0,\ldots, 0) = 0$ and $f_0 = 0$. Claim: Fix $s\in \mathbb{N}$. Then $f_n$ induces a bijection $$ \Big\{ (X_1, \ldots, X_n)\in \mathbb{N}^n \ | \ s=X_1+...+X_n \Big\} \xrightarrow{f_n} \Big[ \binom{s+n-1}{n}, \binom{s+n}{n} -1\Big], $$ where $[x,y]$ is the set of integers from $x$ to $y$, and if $s=0$, then the set on the right is $[0,0]$ by convention. Proof of the claim. It is obviously true for $n=1$. Assume it is true for $n-1$. Let us show it is true for $n$. For a fixed $s=X_1+...+X_n$, the value $t=X_1 + \ldots + X_{n-1}$ can be anything from $0$ to $s$, depending on the value of $X_n$. Thus, by the hypothesis on $f_{n-1}$, it induces a bijection $$ \Big\{ (X_1, \ldots, X_n)\in \mathbb{N}^n \ | \ s=X_1+...+X_n \Big\} \xrightarrow{f_{n-1}} \Big[ 0, \binom{s+n-1}{n-1} -1\Big] $$ defined by $(X_1, \ldots, X_n) \mapsto f_{n-1}(X_1, \ldots, X_{n-1})$. Thus $f_n$ induces a bijection from the set on the left to the interval $\Big[\binom{s+n-1}{n}, \binom{s+n-1}{n} + \binom{s+n-1}{n-1}-1\Big]$. Since $\binom{s+n-1}{n} + \binom{s+n-1}{n-1} = \binom{s+n}{n}$, the claim is proved. The fact that $f_n$ is a bijection from $\mathbb{N}^n$ to $\mathbb{N}$ then follows immediately from the claim.
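To make the claim concrete, here is a small Python sketch (the values of $n$ and $S$ are illustrative) that evaluates $f_n$ via the recursion above and checks that the simplex $\{X\in\mathbb N^n : X_1+\dots+X_n\le S\}$, which has $\binom{S+n}{n}$ points, is mapped bijectively onto $\{0,1,\dots,\binom{S+n}{n}-1\}$:

    from itertools import product
    from math import comb

    def f(X):
        # f_n(X_1,...,X_n) = C(s+n-1, n) + f_{n-1}(X_1,...,X_{n-1}), with f_0 = 0.
        if not X:
            return 0
        n, s = len(X), sum(X)
        return comb(s + n - 1, n) + f(X[:-1])

    n, S = 3, 6  # illustrative sizes
    values = sorted(f(X) for X in product(range(S + 1), repeat=n) if sum(X) <= S)
    # The simplex {sum <= S} should map onto the initial segment 0..C(S+n,n)-1.
    assert values == list(range(comb(S + n, n)))
    print("bijective on", len(values), "points")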
What went wrong in these solutions of $\log \big(x^{\log x}\big)=4$
EDIT: I assume that you mean $\log(x^{\log(x)})$. Whenever you take the logarithm of something of the form $\lambda^a$, $$\log(\lambda^a) \equiv a \log(\lambda).\,\,\,(♦)$$ What I mean to say is that the exponent "comes down". Taking $\log(x^{\log(x)})$ and comparing it with $(♦)$, we get $\lambda = x$ and $a = \log(x)$. So the next steps should be clear: $$\log(x^{\log(x)}) = (\log(x))^2 =4$$ $$\implies \log(x) = \pm 2$$ $$x = 10^2 \,\,\,\,\,\,\,OR \,\,\,\,\,x=10^{-2} = \dfrac{1}{100}$$ So your "second" solution is correct except for the first step, where you wrote $LHS=2$ instead of $4$. ($LHS\,\,\equiv$ "Left Hand Side" (of the equation).) Now coming to your first solution: note the following property: $$b^{\log_b(\lambda)} \equiv \lambda\,\,\,\,.(♣)$$ That is, if the "number being raised to some power" and the "base of the logarithm in the exponent" are the same, then what we get is the quantity whose logarithm is being taken in the exponent. Coming back to the original equation $\log_{10}(x^{\log(x)}) =4$, exponentiate both sides with base $10$: $$10^{\log_{10}(x^{\log(x)})} = 10^4$$ Compare this with $(♣)$ to get $b=10$ and $\lambda = x^{\log(x)}$. So what we should get is $$x^{\log(x)} = 10000$$ That seems to bring us nowhere. Though it is possible to continue from here, the main point I want to highlight is the mistake you have committed in your "first" solution. I have tried to make it as detailed as possible. Hope this helps! :-)
Integral $ \int dxe^{-x^{2}}\sqrt{2x+a} $
I doubt there's any kind of closed form primitive in terms of common special functions, but we can try to find a series solution for $t < \infty$: $$\int_0^t e^{-x^2}\sqrt{x+b}~dx=\sum_{k=0}^\infty \frac{(-1)^k}{k!} \int_0^t x^{2k} \sqrt{x+b}~dx$$ The integral can be transformed to yield a hypergeometric function: $$\int_0^t x^{2k} \sqrt{x+b}~dx=\sqrt{b} ~t^{2k+1} \int_0^1 y^{2k} \left(1+\frac{t}{b} y\right)^{1/2}~dy= \\ =\frac{\sqrt{b} ~t^{2k+1}}{2k+1} {_2F_1} \left(-\frac{1}{2};2k+1;2k+2;-\frac{t}{b} \right)$$ This gives us, for the original integral: $$\int_0^t e^{-x^2}\sqrt{x+b}~dx=\sqrt{b} ~t~\sum_{k=0}^\infty \frac{(-1)^k~t^{2k}}{k!(2k+1)}{_2F_1} \left(-\frac{1}{2};2k+1;2k+2;-\frac{t}{b} \right) \tag{1} $$ The hypergeometric function is elementary for integer $k$, but I haven't been able to find a complete closed form. The first few are: $$\frac{1}{2}~{_2F_1} \left(-\frac{1}{2};1;2;-p \right)=\frac{(1+p)^{3/2}-1}{3p}$$ $$\frac{1}{2}~{_2F_1} \left(-\frac{1}{2};3;4;-p \right)=\frac{(1+p)^{3/2}(8-12p+15p^2)-8}{35p^3}$$ $$\frac{1}{2}~{_2F_1} \left(-\frac{1}{2};5;6;-p \right)=\frac{(1+p)^{3/2}(128-192p+240p^2-280p^3-315p^4)-128}{693p^5}$$ Some of the coefficients can be found in http://oeis.org/A061549 and http://oeis.org/A001803. Using a simple substitution $y=1-z$ we obtain another form of the integral, resulting in another hypergeometric function: $$\int_0^t e^{-x^2}\sqrt{x+b}~dx=\sqrt{t+b}~ ~t~\sum_{k=0}^\infty \frac{(-1)^k~t^{2k}}{k!(2k+1)}{_2F_1} \left(-\frac{1}{2};1;2k+2;\frac{t}{t+b} \right) \tag{2}$$ Numerically, this series converges more slowly than the first one. However, by looking at the denominators, it's clear that for the exotic case $t=-b$ we should use $(1)$, while for the case $b=0$ we should use $(2)$.
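As a numeric sanity check of $(1)$, here is a short mpmath sketch (the values $t=1$, $b=2$ and the truncation order $K=30$ are illustrative) comparing the truncated series against direct quadrature:

    from mpmath import mp, quad, exp, sqrt, hyp2f1, factorial

    mp.dps = 30
    t, b, K = 1, 2, 30  # illustrative parameters and truncation order

    # Direct numerical integration of exp(-x^2) sqrt(x + b) over [0, t].
    direct = quad(lambda x: exp(-x**2) * sqrt(x + b), [0, t])

    # Truncated series (1): sqrt(b) t sum_k (-1)^k t^(2k)/(k! (2k+1)) 2F1(...).
    series = sqrt(b) * t * sum(
        (-1)**k * t**(2*k) / (factorial(k) * (2*k + 1))
        * hyp2f1(-0.5, 2*k + 1, 2*k + 2, -t / b)
        for k in range(K)
    )
    print(direct)
    print(series)  # the two values should agree to many digits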
Solve $\int\ e^x \sin(9x)\,dx$ using integration by parts
More simply, and without integration by parts, we have $$\int e^x\sin(9x)dx=\operatorname{Im}\int e^{(1+9i)x}dx=\operatorname{Im}\left(\frac1{1+9i}e^{(1+9i)x}\right)=\operatorname{Im}\left(\frac1{82}(1-9i)e^{(1+9i)x}\right)$$ Now expand and take the imaginary part.
Question regarding onto and 1-1 functions
This problem is essentially asking you to show that bijections (1-1 functions that are onto) have inverses (and that this inverse is also a bijection). Let $f : A \to B$ be a bijection. For every $y \in B$, there exists a unique point $x \in A$ such that $f(x) = y$. This point $x$ exists because $f$ is onto, and is unique because $f$ is 1-1. Define $g(y) := x$, where $x$ is the point with $y = f(x)$. This gives us a function $$ g : B \to A. $$ You can verify directly that $g(f(x)) = x$ for all $x \in A$ and $f(g(y)) = y$ for all $y \in B$. Now, you have to show that $g$ is also 1-1 and onto. More precisely, you have to show the following:

- $g(y_1) = g(y_2)$ implies $y_1 = y_2$ (i.e. $g$ is 1-1);
- for every $x \in A$, there exists $y \in B$ such that $g(y) = x$ (i.e. $g$ is onto).

To prove these two points, we will need the identities $$ g(f(x)) = x \quad \text{and} \quad f(g(y)) = y $$ for all $x \in A$ and $y \in B$. We now prove that $g$ is 1-1. Suppose that $g(y_1) = g(y_2)$. Applying the function $f$ to both sides of this equality gives: $$ y_1 = f(g(y_1)) = f(g(y_2)) = y_2. $$ Hence, $g$ is 1-1. I will leave it to you to verify that $g$ is onto (use that $g(f(x)) = x$ for all $x \in A$).
Prove that $(\mathbb{Z}\times\mathbb{Z})/J$ is a domain.
Hint: consider the map $(x,y)\mapsto y$ from $\mathbb{Z}\times\mathbb{Z}$ to $\mathbb{Z}$. This should solve your problem almost instantly.
Help with integral / Finding the characteristic function
Continuing that way and substituting $a$ from the above equation, you will have: $\phi(t)={1\over {1-it\phi}}$
Associativity of Matrix Multiplication
What makes you think that $(AB)C$ is a well-defined product of matrices? In general, if $M$ is an $m\times n$ matrix and $N$ is a $p\times q$ matrix, then the product $MN$ is defined if and only if $n=p$; and when the product $MN$ is defined, it is an $m\times q$ matrix. In the case of $M=AB$ and $N=C$, you have $$m=1,\quad n=1,\quad p=5,\quad q=1$$ so the product $(AB)C$ is not defined, and the fact that $A(BC)$ is also not defined is therefore perfectly okay.
real analysis heine-borel property
If you know the Heine-Borel theorem, then it's fairly obvious which is compact and which isn't: $(-2,4)$ is not closed, so it cannot be compact, and $[-2,4]$ is both closed and bounded, so it is compact. If you want to do it more 'hands on' you can try the following. For $1)$: try to construct a sequence which doesn't have any subsequence converging in the space $(-2,4)$. For $2)$: how do you know $[a,b]$ is compact? Hint: let $\{I_n\}$ be an open cover of $[a,b]$, then take $$C = \{x \in [a,b] : [a,x] \text{ has a finite subcover} \}$$ Prove that $\sup C$ exists and that $\sup C = \max C = b$. This will yield a general result; for $[-2,4]$ you can just set $a = -2$, $b = 4$ and use the same hint.
Compute the value of the modulo without using a calculator
Hint: $29$ is one of the numbers in $1, \ldots, 91$.
Primitive roots of unity and $I$-adically separated rings.
First, note that the minimal polynomial (over $\Bbb Z$) of a primitive $p^k$th root of unity $\zeta$ is $$f(x) = \prod_{i\in\left(\Bbb Z/p^k\Bbb Z\right)^\times} (x - \zeta^i) = x^{p^{k-1}(p-1)} + x^{p^{k-1}(p-2)}+\cdots + x^{p^{k-1}} + 1.$$ Evaluating $f$ at $x = 1$ gives $\prod_i (1 - \zeta^i) = p.$ Now, for any fixed $i,$ $1 - \zeta^i = \epsilon_i(1 - \zeta),$ where $$\epsilon_i = \frac{1 - \zeta^i}{1 - \zeta} = 1 + \zeta + \dots + \zeta^{i - 1}\in R.$$ In fact, $\epsilon_i$ is a unit of $R$! Indeed, let $j$ be such that $ij\equiv 1\pmod{p^k}.$ Then we compute $$\epsilon_i^{-1} = \frac{1 - \zeta}{1 - \zeta^i} = \frac{1 - \zeta^{ij}}{1 - \zeta^i} = 1 + \zeta^{i} + \dots + \zeta^{i(j - 1)}\in R.$$ Thus, at the level of ideals we have $$ (p) = \prod_i (1 - \zeta^i) = \prod_i (1 - \zeta) = (1 - \zeta)^{\varphi(p^k)}. $$ More refined arguments may be found in most books on algebraic number theory; for example, in chapter 1 section 10 of Neukirch's Algebraic Number Theory. For the implication (2)$\implies$(3), we note that we simply must show that $\xi = \zeta^{-1}\zeta'$ is a primitive $p^k$th root of unity. Indeed, once we have this, the previous argument showing $(1 - \zeta_{p^k})^{\varphi(p^k)} = (p)$ proves that $R$ is $p$-adically separated. We will prove the contrapositive: we will assume that $\xi$ is a primitive $n$th root of unity, where $n$ is not a prime power, and prove that $R$ is not $(1 - \xi) = (\zeta - \zeta')$-adically separated. In fact, we shall prove that $(1 - \xi) = R.$ Let $p$ be any prime dividing $n.$ Then $\xi^r = \zeta_p$ is a primitive $p$th root of unity for some $r$, and $$ \frac{1 - \zeta_p}{1 - \xi} = \frac{1 - \xi^r}{1 - \xi} = 1 + \xi + \dots + \xi^{r - 1}, $$ so that $1 - \zeta_p\in(1 - \xi).$ Now, because $(1 - \zeta_p)^{p - 1} = (p),$ it follows that $p\in (1 - \xi).$ If $\ell$ is another prime dividing $n,$ the same argument shows that $\ell\in (1 - \xi)$ as well. Elementary number theory tells us that there exist integers $a$ and $b$ such that $$ 1 = ap + b\ell. $$ But writing $p = u(1 - \xi)$ and $\ell = v(1 - \xi)$ for some $u,v\in R,$ it then follows that \begin{align*} 1 &= ap + b\ell\\ &= au(1 - \xi) + bv(1 - \xi)\\ &= (au + bv)(1 - \xi)\\ \implies 1&\in (1 - \xi). \end{align*} So, if multiple primes divide $\operatorname{ord}(\xi),$ we have $(1 - \xi) = R,$ and $$ \bigcap_i (1 - \xi)^i = \bigcap_i R = R, $$ which implies $R$ is not $(1 - \xi) = (\zeta - \zeta')$-adically separated (as $R$ is not the zero ring). While I'm at it, I'll try to clear up the confusion about the argument provided that shows (1)$\implies$(2). ...since $(q−\zeta)^m\in(q−\zeta')+I[q]$ for some $m\geq 0$ and $I\subseteq R$ such that $R$ is $I$-adically separated, then $(\zeta - \zeta')^m\in I$. This is shown by simply evaluating at $q = \zeta'.$ Indeed, $(q−\zeta)^m\in(q−\zeta')+I[q]$ means precisely that $$ (q - \zeta)^m = P(q)(q - \zeta') + Q(q), $$ where $Q(q)\in I[q]$ and $P(q)\in R[q].$ Notice that $(q - \zeta)^m = \pm (\zeta - q)^m,$ depending on whether $m$ is even or odd, so that $$ (\zeta - q)^m = \tilde{P}(q)(q - \zeta') + \tilde{Q}(q), $$ where $\tilde{P}\in R[q]$ and $\tilde{Q}\in I[q].$ Setting $q = \zeta',$ we find $$ (\zeta - \zeta')^m = \tilde{P}(\zeta')(\zeta' - \zeta') + \tilde{Q}(\zeta') = \tilde{Q}(\zeta')\in I, $$ because each coefficient of $\tilde{Q}$ is in $I.$ The rest of your reasoning showing that $R$ is then $(\zeta - \zeta')$-adically separated is correct.
Clear difference between $(x≠y)∧P(x)$ and $(x≠y)→P(x)$ in logic
Your statement is always true, regardless of what has been mailed or telephoned, because for all $x$ there is a $y$ such that $x = y$, which makes the whole implication true. The bracketed statement in the given answer requires that $x\neq y$ in order to be true. Yours does not, but only states what we require in the case where $x$ and $y$ are unequal.
Wilson's theorem states that if n is a prime number, it will divide (n-1)! + 1, using this find the smallest divisor of 12!+6! +12!×6! + 1?
If $\,f(x)\,$ is a polynomial with integer coefficients and $\,\color{#c00}{f(0) = 1},\ \color{#0a0}{f(-1) = 0}\,$ then the prime $\,p\,$ is the least factor $> 1$ of $\,f((p\!-\!1)!)\, $ since, $\,{\rm mod}\ p\!:\ f((p\!-\!1)!)\equiv \color{#0a0}{f(-1)\equiv 0}\,$ by Wilson, and if prime $\,q< p\,$ then $\,q\mid (p\!-\!1)!\,$ therefore $\,{\rm mod}\ q\!:\ f((p\!-\!1)!) \equiv \color{#c00}{f(0)\equiv 1},\,$ so $\,q\nmid f((p\!-\!1)!).$ OP is special case $\,p=7,\,\ f(x) = (12!/6!\cdot x +1)(x+1),\, $ so $\,f(6!) = (12!+1)(6!+1)$ Above we used standard Congruence Rules, most notably the Polynomial Congruence Rule $\, a\equiv b\,\Rightarrow\, f(a)\equiv f(b)\ \pmod{\! m},\ $ for any polynomial $\,f(x)\,$ with integer coefficients.
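A quick brute-force confirmation of the special case in Python (the bound $100$ on trial divisors is arbitrary):

    from math import factorial

    # N = 12! + 6! + 12!*6! + 1 = (12! + 1)(6! + 1)
    N = factorial(12) + factorial(6) + factorial(12) * factorial(6) + 1
    least = next(d for d in range(2, 100) if N % d == 0)
    print(least)  # 7, as predicted by the Wilson argument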
Question about sets of well-orderings
Every well-ordered set is order-isomorphic to an ordinal, and every set of ordinals is well-ordered. So if your set $W$ has at most one representative of each (well-)order-type, it will itself be well-ordered. However, if $W$ contains two sets of the same order-type then comparing order-types won't even partially order $W$, though it will endow it with a preorder.
Existence of a surjective map $h:\alpha\rightarrow [\alpha]^{\lt\omega}$ such that $h \in \tilde{\Sigma}_1^{J_\alpha[E]}$
You might consider finding a surjection onto all of $M$ itself, then just restricting to the preimage of $[\alpha]^{<\omega}$ (this is why Schindler notes that the function is partial). The result is Lemma 2.10 of Jensen's original paper, which can be found in full here, where the result is proven. I unfortunately don't have time to summarize the proof right now, but that should help point you in the right direction.
Proof of Theorem 18.8 in Rockafellar's Convex Analysis
If it's OK, I'll postpone the discussion of vertical exposed rays until the end. First, about the non-vertical ones. Rockafellar states '...the linear functions $\langle x, \cdot \rangle$ majorized by $\delta^*(\cdot|C)$, which correspond of course to points in $C$, are the same as the linear functions whose epigraphs contain every non-vertical exposed ray of $G$.' Let $NV\subseteq S$ be the set of 'non-vertical' exposed rays of $G$. Each ray $r$ in $NV$ is non-vertical, so we may write it as $r := r(x^*, \alpha):= \{\lambda (x^*, \alpha): \lambda\geq 0\}$ for some non-zero $x^*\in \mathbb{R}^n$ and $\alpha\in \mathbb{R}$; here $x^*\neq 0$ precisely because the ray is non-vertical. With this notation, \begin{align*} C = &\{x\in \mathbb{R}^n: \langle x, x^*\rangle\leq \alpha,~ \forall ~r(x^*, \alpha)\in NV\}\\ =&\bigcap_{r(x^*, \alpha)\in NV}\{x\in \mathbb{R}^n: \langle x, x^*\rangle\leq \alpha\}. \end{align*} Hence, $C$ is the intersection of half-spaces of the form $\{x\in \mathbb{R}^n: \langle x, x^*\rangle\leq \alpha\}$ where $r(x^*, \alpha)$ is a non-vertical exposed ray. In other words, $C$ is the intersection of half-spaces of the form $\{x\in \mathbb{R}^n: \langle x, x^*\rangle\leq \alpha\}$ where all non-negative scalar multiples of $(x^*, \alpha)$ form a non-vertical exposed ray of $G$. ------- Now for vertical exposed rays, I'll try to give an intuition (I hope this helps). A vertical exposed ray of $G$ is defined by a vertical hyperplane $H$ in $\mathbb{R}^{n+1}$ that defines a valid inequality for $G$ --- by a vertical hyperplane, I mean one of the form $H=\{(x, y)\in \mathbb{R}^n\times \mathbb{R}: \langle (x,y), (x^*, 0)\rangle = \beta\}$ for some $x^*\in \mathbb{R}^n$ and $\beta\in \mathbb{R}$ (note that $\beta$ will actually be $0$ here because of the form of a vertical ray). Note that $G$ is contained on one side of $H$. On the side of $H$ that does not contain $G$, the support function $\delta^*(\cdot|C)$ equals infinity. This means that if $x\in \mathbb{R}^n$ is separated from $G$ by $H$, then $\delta^*(x|C) = \sup\{\langle x, x^*\rangle:~x^*\in C\} = \infty$ and so $x$ is a recession direction of $C$. So vertical exposed rays do not need to be considered, as they define unbounded directions of $C$.
Periods of Sine, Cosine and Tangent
"I know that for a signal to be periodic there has to be such $T$ that satisfies $f(n+T)=f(n)$." By that definition there may be multiple periods. $\sin(n + 4,567,282\pi) = \sin n$, so $\sin n$ is periodic, and $4,567,282\pi$ is a period of $\sin$, but it might not be the smallest. Likewise if $f(n+T) = f(n)$ then $f(n+97T)=f(n)$, so $f$ is periodic, but $97T$ isn't the smallest period and, for all we know, $T$ might not be either. Usually when saying a function is "periodic" we want to know what the smallest period is (which we usually say is "the" period). If $T = 2k\pi$ then $\sin (n+T) = \sin n$, but the smallest such $T$ (greater than $0$) so that $\sin (n+T)=\sin n$ is $T= 2\pi$. It is true for any $n$ that $\sin (n + 2\pi) = \sin n$. I won't prove this, but for any $T< 2\pi$ there will always be some $n$ where $\sin (n+T) \ne \sin n$. For example if $T = \pi$ then $\sin(n+\pi) =-\sin n$, and those aren't usually equal. So why is $\tan$ different? Well, $\tan$ is a different function than $\sin$, so there is no reason it should be the same. But notice: $\tan (n+\pi) = \frac {\sin (n + \pi)}{\cos (n+\pi)} = \frac {-\sin n}{-\cos n} = \frac {\sin n}{\cos n} = \tan n$, so $\pi$ is a period, even though it isn't a period for $\sin$ or $\cos$.
Three questions about ucp convergence
First of all, we choose $M \in \mathbb{N}$ sufficiently large such that $\sum_{m \geq M} 2^{-m} < \delta/3$. Moreover, we note that $$\begin{align*} \mathbb{E}\left( 1 \wedge \sup_{s \leq m} |X_s^n-X_s| \right) &\leq \mathbb{E}\left( 1 \wedge \sup_{s \leq m} |X_s^n-X_s| \cdot 1_A \right)+ \mathbb{E}\left( 1 \wedge \sup_{s \leq m} |X_s^n-X_s| \cdot 1_{A^c} \right) \\ &\leq \frac{\delta}{3} + \mathbb{P} \left( \sup_{s \leq m} |X_s^n-X_s| > \frac{\delta}{3} \right) \\ &\leq \frac{\delta}{3} + \mathbb{P} \left( \sup_{s \leq M} |X_s^n-X_s| > \frac{\delta}{3} \right)\end{align*}$$ for any $m \leq M$ and $$A := \left\{\sup_{s \leq m} |X_s^n-X_s| \leq \frac{\delta}{3} \right\}.$$ By the ucp convergence, we may choose $N \in \mathbb{N}$ sufficiently large such that $$ \mathbb{P} \left( \sup_{s \leq M} |X_s^n-X_s| > \frac{\delta}{3} \right) < \frac{\delta}{3}$$ for all $n \geq N$. Plugging these estimates into the definition of the metric yields $d(X^n,X) \to 0$ as $n \to \infty$. For the converse, we apply Markov's inequality: $$\begin{align*} \mathbb{P} \left( \sup_{s \leq t} |X_s^n-X_s| > \varepsilon \right) &= \mathbb{P} \left( 1 \wedge \sup_{s \leq t} |X_s^n-X_s| > \varepsilon \right) \\ &\leq \frac{1}{\varepsilon} \mathbb{E} \left( 1 \wedge \sup_{s \leq t} |X_s^n-X_s| \right)\\ &\leq \frac{1}{\varepsilon} \mathbb{E} \left( 1 \wedge \sup_{s \leq M} |X_s^n-X_s| \right)\end{align*}$$ for any $t \leq M$, $0<\varepsilon<1$. Hence, $$\mathbb{P} \left( \sup_{s \leq t} |X_s^n-X_s| > \varepsilon \right) \leq \frac{2^M}{\varepsilon} d(X^n,X) \stackrel{n \to \infty}{\to} 0 \tag{1}.$$ If $d(X^n,X) < 2^{-n}$ and $t \leq m$, then $(1)$ shows $$\mathbb{P} \left( \sup_{s \leq t} |X_s^n-X_s|>\varepsilon \right) \leq \frac{2^m}{\varepsilon} d(X^n,X) = \frac{2^{m-n}}{\varepsilon}.$$ Thus, $$\sum_{n =1}^{\infty} \mathbb{P} \left( \sup_{s \leq t} |X_s^n-X_s|>\varepsilon \right) \leq \frac{2^m}{\varepsilon} \sum_{n=1}^{\infty} 2^{-n} < \infty.$$ Finally, we have $$\begin{align*} \mathbb{P} \left( \sup_{s \leq t} |Y_s^n-Y_s| > \varepsilon \right) &= \underbrace{\mathbb{P} \left( \sup_{s \leq t} |Y_s^n-Y_s| > \varepsilon, t < T_n \right)}_{0} + \mathbb{P} \left( \sup_{s \leq t} |Y_s^n-Y_s| > \varepsilon,t \geq T_n \right) \\ &\leq \mathbb{P} \left(T_n \leq t \right) \end{align*}$$ Now by the definition of $T_n$, $$\mathbb{P}(T_n \leq t) \leq \mathbb{P} \left( \sup_{s \leq t} |Y_s| \geq n \right).$$ Since $Y$ has càdlàg paths, its paths are bounded on compact sets, and therefore the latter probability converges to $0$ as $n \to \infty$.
Intersection line of planes
Hint: the cross product of the two normal vectors gives a vector which is perpendicular to the normals of both planes given in your question and is therefore parallel to the line of intersection of the two planes. You can either use the cross product to find the intersection of the two planes or find the parametric representation. $$Q \times R = \begin{vmatrix} \mathbf i & \mathbf j & \mathbf k \\ 2 & -1 & 0 \\ 1 & -1 & -1 \end{vmatrix} = \mathbf i\begin{vmatrix} -1 & 0 \\ -1 & -1 \end{vmatrix} - \mathbf j\begin{vmatrix} 2 & 0 \\ 1 & -1 \end{vmatrix} + \mathbf k\begin{vmatrix} 2 & -1 \\ 1 & -1 \end{vmatrix} = \mathbf i + 2\mathbf j - \mathbf k$$ Then we use $P$ to find the intersection.
FIFA Probability Problem
We can simplify the problem by eliminating pot 4 and changing the rule to: no group may contain more than 1 European. We will not take pot $4$ into account, since it only contains Europeans and won't affect the game once we modify the rule. Notice also that pot $3$ can be picked freely, since it is the only one containing teams from North America or Asia. So only the selections from pots 1 and 2 could break any rules under the modified European rule, and the probability that a group division breaks no rules is the probability that the part of the groups corresponding to pots 1 and 2 doesn't have more than 1 European or more than 1 South American per group. Pot 1 has 4 South Americans and 4 Europeans, and pot 2 has 1 European, 2 South Americans and 5 Africans. The number of divisions not breaking any of these rules is the same as the number of bijections from the set $\{A_1,A_2,A_3,A_4,A_5,S_1,S_2,E_1\}$ to $\{S'_1,S'_2,S'_3,S'_4,E'_1,E'_2,E'_3,E'_4\}$ where no $E$ is mapped to an $E'$ and no $S$ is mapped to an $S'$. How many of these are there? There are $4$ options for the image of $E_1$, and there are $4\cdot3$ ways to select the images of $S_1$ and $S_2$. Once we have selected those, there are $5!$ ways to select where the $A$'s are mapped. So in total there are $4\cdot4\cdot3\cdot5!=5760$ ways to do it. On the other hand, there are $8!$ ways to select the groups taking into account the first 2 pots. So the probability is $\frac{4\cdot4\cdot3\cdot5!}{8!}=\frac{1}{7}$. This means the probability that there is a broken rule is $\frac{6}{7}$.
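Here is a small Python brute-force check of the count $5760$ and the probability $\frac17$, assuming the reduction above (only the types of the eight pot-1/2 teams and slots matter):

    from itertools import permutations
    from math import factorial
    from fractions import Fraction

    domain   = "SSEAAAAA"        # types of the pot-1/2 teams: 2 S's, 1 E, 5 A's
    codomain = list("SSSSEEEE")  # types of the 8 distinct slots: 4 S', 4 E'

    # Count assignments where no S goes to an S' slot and no E to an E' slot.
    good = sum(
        all(not (d == c == "S") and not (d == c == "E")
            for d, c in zip(domain, perm))
        for perm in permutations(codomain)
    )
    print(good, Fraction(good, factorial(8)))  # 5760, 1/7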
Can a Lipschitz continuous function be linear almost everywhere but not linear everywhere?
Let $S\subset [0,1]$ such that $S$ is closed and nowhere dense in $\Bbb R$ and such that the Lebesgue measure $m( S)$ is positive. (E.g. $S$ can be a "fat Cantor set".) Let $f(x)=x$ for $x<0$ and let $f(x)=\int_0^x(1-\chi_S(t))dt$ for $x\geq 0.$ So for $x\geq 0$ we have $f(x)=m([0,x] \setminus S).$
Distributing Apples and oranges. confused about solution
You should look at the technique of Stars and Bars, also known as using identical dividers. In this case, we use 4 dividers to simulate distributing the oranges amongst 5 boxes.
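As an illustration (the counts here are assumed, since the question's numbers aren't shown): distributing $k$ identical oranges among the $5$ boxes corresponds to arranging $k$ stars and $4$ bars in a row, so the number of distributions is
$$\binom{k+4}{4}.$$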
Expression for the k-th term of the sequence of the 'decimals' of natural numbers.
The number of digits in the usual decimal representation of $n\in\Bbb Z^+$ is $\lfloor\log_{10}n\rfloor+1$. The integer $n$ has $d$ digits if and only if $10^{d-1}\le n<10^d$, i.e., $d-1\le\log_{10}n<d$, which is precisely what it means for $\lfloor\log_{10}n\rfloor$ to be $d-1$. Thus, you want $$a_n=\frac{n}{10^{\lfloor\log_{10}n\rfloor+1}}\;.$$
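A minimal Python check of the formula (the sample values are illustrative):

    from math import floor, log10

    def a(n: int) -> float:
        # a_n = n / 10^(floor(log10 n) + 1) places n just after the decimal point.
        return n / 10 ** (floor(log10(n)) + 1)

    print(a(7), a(23), a(405))  # 0.7 0.23 0.405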
Existence and uniqueness of the central value of a probability measure by Doob.
Hints:

- Using the dominated convergence theorem, show that the mapping $$\gamma \mapsto F(\gamma) := \int_{\mathbb{R}} \arctan(x-\gamma) \, \mu(dx)$$ is continuous.
- Conclude from the fact that $y \mapsto \arctan(y)$ is strictly increasing that $\gamma \mapsto F(\gamma)$ is strictly decreasing.
- Use the dominated convergence theorem to show that $$\lim_{\gamma \to \infty} F(\gamma) = - \frac{\pi}{2} \quad \text{and} \quad \lim_{\gamma \to -\infty} F(\gamma) = \frac{\pi}{2}.$$
- Conclude.
What does $\sum_{j=1}^{n} {n \choose j}(-1)^jx_{t-j}$ mean?
A big sum operator is expanded as a linear sum in which you replace the dummy index by each value in the allowed range. Hence, $$\sum_{j=1}^{2} {2 \choose j} (-1)^jx_{t-j}= {2 \choose 1}(-1)^1x_{t-1}+ {2 \choose 2}(-1)^2x_{t-2},$$ $$\sum_{j=1}^{3} {3 \choose j} (-1)^jx_{t-j}= {3 \choose 1}(-1)^1x_{t-1}+ {3 \choose 2}(-1)^2x_{t-2}+ {3 \choose 3}(-1)^3x_{t-3},\\\cdots$$ You can look up the coefficients in Pascal's triangle.
Polynomials - finding the remainder
The remainder factor theorem states the following: If we have a polynomial $P(x)$, then the remainder we get on dividing $P(x)$ by $(ax-b)~,~a\neq 0$ is given by $P\left(\frac{b}{a}\right)$ That is, $$P(x)\equiv P\left(\frac{b}{a}\right)\pmod{ax-b}~,~a\neq 0$$ Proof: Take $Q(x)$ as the quotient polynomial on division of $P(x)$ by $ax-b$. Then, $$P(x)=(ax-b)Q(x)+\textrm{Rem}$$ Plug in $x=\dfrac{b}{a}$ here to get, $$P\left(\frac{b}{a}\right)=\textrm{Rem}$$ Using this theorem, you will get that the remainder is $a^{51}+51$ which is dependent on the value of $a$.
Equivalence of a scalar to a vector:
It is definitely not an equivalence: addition of a scalar and a matrix is not defined. Although it is not commonly used, you can surely say that if $A$ is an $m\times n$ matrix and $k$ is a scalar, then $A+k$ is shorthand for $A+K$, where $K$ is the $m\times n$ matrix with all entries equal to $k$. But $k$ and $K$ are not the same thing; it is just notation. A notation which is much more widely used is, if $A$ is a square matrix and $k$ a scalar, to write $A +k$ for $A+k I$, where $I$ is the identity matrix of the appropriate dimension. Though more widely used, this is again just notation: $k$ and $kI$ are not the same thing.
How to show that 0.5Y is an unbiased estimator of alpha?
"How do I show that 0.5Y in part (a) down below is an unbiased estimator of alpha?" The rv $nY$ is evidently binomial: $$nY\sim Bin(n;2\alpha)$$ thus $$\mathbb{E}[nY]=2n\alpha,$$ so $$\mathbb{E}[Y]=2\alpha$$ and therefore $$\mathbb{E}\left[\frac{Y}{2}\right]=\alpha.$$
A faster way to evaluate $\int_1^\infty\frac{\sqrt{4+t^2}}{t^3}\,\mathrm dt$?
I am very fond of hyperbolic functions for integrals. If we begin with your $t = 2 \sinh x$, we expect to get something consistent with the Wikipedia way of writing the Weierstrass substitution for hyperbolic functions. $$ \int \frac{\cosh^2 x}{ 2 \sinh^3 x} dx. $$ Then let us use a letter different from your $t$: $$ \sinh x = \frac{2u}{1 - u^2}, \; \; \; \frac{1}{\sinh x} = \frac{1 - u^2}{2u} $$ $$ \cosh x = \frac{1 + u^2}{1 - u^2}, $$ $$ d x = \frac{2du}{1 - u^2} \; . $$ $$ \int \frac{(1 + u^2)^2 (1 - u^2)^3 2 du}{2 (1 - u^2)^2 (2u)^3 (1-u^2)} $$ $$ \int \frac{(1 + u^2)^2 du}{ (2u)^3} $$ $$ \int \frac{1 + 2u^2 + u^4 }{ 8u^3} \; du $$ $$ \int \frac{1}{8u^3} + \frac{1}{4u} + \frac{u}{8} \; \; du $$ They give more detail there, and we do need an expression for our $u$. That comes out as $$ u = \tanh \frac{1}{2} x = \frac{\sinh x}{\cosh x + 1} = \frac{\cosh x - 1}{\sinh x} $$
expected value of brownian motion
$W_t$ is a normal random variable with mean $0$ and variance $t$. If $f(x)$ is the density of a standard normal distribution, you're looking at $t \int_{-\infty}^\infty |x^2 - 1| f(x)\ dx$, which according to Maple is $ 2 t e^{-1/2} \sqrt{2/\pi}$
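A quick Monte Carlo sanity check in Python, assuming the quantity in question is $\mathbb E\,|W_t^2-t|$ with the illustrative value $t=2$:

    import numpy as np

    rng = np.random.default_rng(0)
    t = 2.0  # illustrative
    w = rng.normal(0.0, np.sqrt(t), size=1_000_000)  # samples of W_t ~ N(0, t)
    mc = np.abs(w**2 - t).mean()
    exact = 2 * t * np.exp(-0.5) * np.sqrt(2 / np.pi)
    print(mc, exact)  # should agree to a few decimal places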
What is the expected time to absorption in a Markov Chain given that the system is absorbed to a specific recurrent state?
Here, I am generalizing NCH's answer to this question. Consider a Markov chain with state space $\Omega$. I use $A$ to denote the set of absorbing states and $A^c$ to denote the set of transient states ($A\cup A^c = \Omega$). I am interested in calculating $E(V_{ij}|B_{ib})$, where the random variable $V_{ij}$ is the number of visits to State $j \in A^c$ given that the system starts from State $i \in A^c$, and $B_{ib}$ denotes the event of absorption at State $b \in A$ given that the system starts from State $i \in A^c$. We know: $$ \Pr(V_{ij}=k|B_{ib}) = \frac{\Pr(V_{ij}=k,B_{ib}) }{\Pr(B_{ib})}. $$ The probability $\Pr(B_{ib})$ can be calculated as shown in this Wikipedia article (Subsection Absorbing Probabilities). Let's use $T_{ij}$ to denote the event of visiting State $j$, starting from State $i$, before any absorption (not just absorption at $b$). Then the event $\{V_{ij}=k\} \cap B_{ib}$ consists of: moving once from $i$ to $j$, then $k-1$ times from $j$ back to $j$, and finally moving from $j$ to $b$ without visiting $j$ again. That is: $$ \Pr(V_{ij}=k,B_{ib}) = \Pr(T_{ij}) \Pr(T_{jj})^{k-1} [\Pr(B_{jb})(1-\Pr(T_{jj}))] . $$ To calculate $\Pr(T_{ij})$, I will use the result in the Transient Probabilities subsection of this Wikipedia article. So: $$ \begin{align} E(V_{ij}|B_{ib}) &= \sum_{k=0}^\infty k \Pr(V_{ij}=k|B_{ib}) \\ &= \sum_{k=0}^\infty k \frac{\Pr(T_{ij}) \Pr(T_{jj})^{k-1} [\Pr(B_{jb})(1-\Pr(T_{jj}))]}{\Pr(B_{ib})} \\ &= \frac{\Pr(T_{ij}) [\Pr(B_{jb})(1-\Pr(T_{jj}))]}{\Pr(B_{ib})} \sum_{k=0}^\infty k \Pr(T_{jj})^{k-1} \\ &= \frac{\Pr(T_{ij}) [\Pr(B_{jb})(1-\Pr(T_{jj}))]}{\Pr(B_{ib}) (1-\Pr(T_{jj}))^2} \\ & = \frac{\Pr(T_{ij}) \Pr(B_{jb})}{\Pr(B_{ib}) (1-\Pr(T_{jj}))}, \quad \forall\, i \ne j \in A^c,\ b\in A. \end{align} $$ If $i = j$: $$ E(V_{ii}|B_{ib}) = \frac{1}{1-\Pr(T_{ii})}, \quad \forall\, i \in A^c,\ b\in A. $$
Diamond and silver rings.
How much do two diamond rings and two silver rings cost? Can you subtract that from the first equation, leaving only two silver rings?
Phase of the Fourier Transform of a function
As it is a real number, the phase is $0$ or $\pi$ (modulo $2k\pi$, $k\in\mathbb{Z}$), depending on its sign. Since $e^{(0+2k\pi)\imath}=1$ and $e^{(\pi+2k\pi)\imath}=-1$, every positive real number can be written as $r = |r|e^{(0+2k\pi)\imath}$, and every negative real number as $r = |r|e^{(\pi+2k\pi)\imath}$. And there is an indeterminacy at $0$, since any phase would suit. Finally, it is conventional to choose, for each number, a "simple" phase, for instance one that is constant, or "relatively continuous". Still, to be precise: for each non-zero number the phase is well defined (modulo $2k\pi$).
What maps descend to homeomorphisms
Let $\pi:\mathbf R\times [0,1]\to A$ be the covering map. If $x\in A$, you can pick a $y\in \pi^{-1}(x)$. Thanks to the equivariance property, $f(x) = \pi(My)$ does not depend on the choice of $y$. Local triviality gives you the continuity of $f$. In our case we can do the same thing with $M^{-1}$, so this is a homeomorphism (I don't know what is needed in general).
Find the series expansion of $f(z)=\frac{4}{(z-1)(z+3)}$ around $z_0=-1$.
No, you definitely can't just replace $z$ with $z+1$. However, there is something else that you can do here. Start from what you already did: $$ \frac{4}{(z-1)(z+3)}=\frac{1}{z-1}-\frac{1}{z+3}. $$ Now, try rewriting these as $$ \frac{1}{z-1}-\frac{1}{z+3}=\frac{1}{(z+1)-2}-\frac{1}{(z+1)+2}=-\frac{1}{2-(z+1)}-\frac{1}{2+(z+1)}. $$ Why is this helpful? Because now, you can note that $$ -\frac{1}{2-(z+1)}-\frac{1}{2+(z+1)}=-\frac{1}{2}\cdot\frac{1}{1-\frac{z+1}{2}}-\frac{1}{2}\cdot\frac{1}{1-(-\frac{z+1}{2})}, $$ and use $\sum_{n=0}^{\infty}w^n=\frac{1}{1-w}$, with $w=\frac{z+1}{2}$ for the first sum and $w=-\frac{z+1}{2}$ for the second. Radius of convergence also follows from this immediately: remember that $\sum_{n=0}^{\infty}w^n$ converges if $\lvert w\rvert<1$ and diverges if $\lvert w\rvert>1$, and plug in your definitions for $w$ for each series.
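If you want to double-check the resulting expansion, here is a short sympy sketch (the expansion order $6$ is arbitrary):

    from sympy import symbols, series

    z = symbols('z')
    f = 4 / ((z - 1) * (z + 3))
    # Series expansion around z0 = -1.
    print(series(f, z, -1, 6))
    # -1 - (z + 1)**2/4 - (z + 1)**4/16 + O((z + 1)**6, (z, -1))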
For what values of a will the system have a unique solution, and for which pair of values (a,b) will the system have more than one solution
The original system is equivalent to the following under-determined system: $$x+2y+2z+w=0\tag{1}$$ $$x+ay+3z+3w=0\tag{2}$$ $$x+11y+az+bw=0\tag{3}$$ Solving for $z,w$ from (1) and (2) and substituting into (3) leads to: $$(3-2a+b)x+(33+a^2+6b-2a(3+b))y=f(a,b)x+g(a,b)y=0\tag{4}$$ (1) If $a=-1,b=-5$ or $a=5,b=7$, then $f(a,b)=g(a,b)=0$. Thus $x,y$ can take any values. (2) If $a\not=-1,5$ and $b=2a-3$, then $f(a,b)=0$ but $g(a,b)\not=0$, so $y=0$ and $x$ can take any value. (3) If $a\not=-1,5$ and $b=\frac{33-6a+a^2}{2(a-3)}$, then $g(a,b)=0$ but $f(a,b)\not=0$, so $x=0$ and $y$ can take any value. (4) If $a\not=-1,5$ and $b\not=2a-3,\frac{33-6a+a^2}{2(a-3)}$, then $f(a,b)\not =0$ and $g(a,b)\not=0$, so $x=-\frac{g(a,b)}{f(a,b)}y$ and $y$ can take any value.
Proving that the products GCDs of the coefficients of two polynomials is equal to the GCD of their product's coefficients?
Write $p(x)=y P(x)$ and $q(x)=zQ(x)$, where $P(x)$ and $Q(x)$ are primitive polynomials (i.e. the g.c.d. of their coefficients is $1$). Thus $$p(x)q(x)=yz P(x)Q(x)$$ and it is enough to show $P(x)Q(x)$ is primitive. If not, there exists a prime number $a$ that divides all its coefficients. Reduce the coefficients modulo $a$; in the ring $\mathbf Z/a\mathbf Z$, we have $\overline{\!PQ}=\overline{\mkern-1muP\vphantom Q}\,\overline{\mkern-1muQ}$. As $\mathbf Z/a\mathbf Z$ is an integral domain, this implies $\overline{\mkern-1muP}=0 $ or $\overline{\mkern-1 muQ}=0$, which means $a$ divides all coefficients of $P$ or all coefficients of $Q$; this is impossible since $P$ and $Q$ are primitive. Note. This proof is valid for polynomials over any UFD, since irreducible elements in such rings are prime.
Computational complexity of $n$-fold convolution of a vector versus a function
The output of the $n$-fold convolution has $nN-n+1$ samples, so you need to do Fourier transforms of this size. These Fourier transforms have complexity $\mathcal{O}(nN\log nN)$. In the first case, you take the FFT of $v$, raise each sample to the $n$-th power, and take the IFFT. As raising $nN-n+1$ samples to the $n$-th power has $\mathcal{O}(nN)$ complexity, the total complexity is $$\mathcal{O}(nN\log nN)$$ In the second case, you need to take the FFT of all $n$ vectors ($\mathcal{O}(n^2N\log nN)$), multiply them together ($\mathcal{O}\left(n^2N\right)$), and take one IFFT ($\mathcal{O}(nN\log nN)$). This gives $$\mathcal{O}(n^2N\log nN).$$ So, even in the big-$\mathcal{O}$ sense, the second case requires many more operations than the first one.
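A minimal numpy sketch of the first approach (sizes are illustrative): one zero-padded FFT, a pointwise $n$-th power, and one inverse FFT.

    import numpy as np

    def nfold_conv(v: np.ndarray, n: int) -> np.ndarray:
        N = len(v)
        L = n * N - n + 1             # output length of the n-fold convolution
        V = np.fft.rfft(v, L)         # one FFT, zero-padded to length L
        return np.fft.irfft(V**n, L)  # pointwise n-th power, one inverse FFT

    v = np.array([1.0, 2.0, 3.0])
    print(nfold_conv(v, 2))           # matches the direct linear convolution
    print(np.convolve(v, v))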
Find an Equation Given the Parameter: $x=t\cos(t),\ y=t\sin(t),\ t=π$
$$\frac{\text{d}x}{\text{d}t}=\cos t-t\sin t, \qquad \frac{\text{d}y}{\text{d}t}=\sin t+t\cos t$$ Therefore, at $t=\pi$, $$(x,y)=(-\pi,0), \qquad (x',y')=(-1,-\pi)$$ The equation of the tangent line is $$y=\pi(x+\pi)$$
Gamma function integral inequality
K. defaoite's comment is correct. In more detail, let $x=z^\beta/2$. Then \begin{align} z&=(2x)^{1/\beta}\\ dx&=\frac{\beta z^{\beta-1}}{2}dz\\ dz&=\frac{2}{\beta z^{\beta-1}}dx=\frac{1}{\beta}2^{1/\beta}x^{1/\beta-1}dx \end{align} Doing integration by substitution one has that \begin{align} \int_0^{\infty}\exp(-z^\beta/2)dz&=\int_0^\infty \exp(-x)\frac{1}{\beta}2^{1/\beta}x^{1/\beta-1}dx\\ &=\frac{1}{\beta}2^{1/\beta}\Gamma\left(\frac{1}{\beta}\right) \end{align} which is the right hand side of the inequality. It then follows immediately.
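A quick mpmath check of the resulting identity, with the illustrative value $\beta = 1.7$:

    from mpmath import mp, quad, exp, gamma, inf

    mp.dps = 25
    beta = mp.mpf("1.7")  # illustrative
    # Left side: integral of exp(-z^beta / 2) over [0, infinity).
    lhs = quad(lambda z: exp(-z**beta / 2), [0, inf])
    # Right side: (1/beta) * 2^(1/beta) * Gamma(1/beta).
    rhs = (1 / beta) * 2**(1 / beta) * gamma(1 / beta)
    print(lhs)
    print(rhs)  # the two should agree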
Why do we only compute the limit of a function at a point in a CLOSED set?
If $a\in X$, then $a\in\overline X$. We define this only when $a$ is in the closure of $X$ to make sure that there are points of $X$ as close to $a$ as we want. Otherwise, every element of $F$ would be a limit of $f$ at the point $a$. But with the restriction that $a\in\overline X$, the limit (if it exists) is unique.
Computing Stirling cycle and subset numbers with minimal storage
The Stirling number of the second kind can be calculated efficiently using the identity $$\left\{ n\atop{k} \right\} = \frac1{k!} \sum_{j=0}^k (-1)^j \binom{k}{j} (k-j)^n.$$
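A direct Python transcription of this identity (the test values are illustrative); it needs only a running sum rather than the usual Pascal-style table of Stirling numbers:

    from math import comb, factorial

    def stirling2(n: int, k: int) -> int:
        # Inclusion-exclusion sum; only a running total is stored.
        total = sum((-1)**j * comb(k, j) * (k - j)**n for j in range(k + 1))
        return total // factorial(k)

    print(stirling2(5, 2))  # 15
    print(stirling2(7, 3))  # 301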
How to calculate a confidence interval for a binomial, given a specific prior
One possible thing to do would be to calculate your confidence interval using a Beta distribution. For example, the following R code

    ci <- function(x, n, prior, conf) {
      c(qbeta((1 - conf) / 2, prior[1] + x, prior[2] + n - x),
        qbeta((1 + conf) / 2, prior[1] + x, prior[2] + n - x))
    }

    prior <- c(1, 99)
    ci(   0,      0, prior, 0.95)
    ci(  20,   2000, prior, 0.95)
    ci(2000, 200000, prior, 0.95)

produces these results

    > ci( 0, 0, prior, 0.95)
    [1] 0.0002557027 0.0365757450
    > ci( 20, 2000, prior, 0.95)
    [1] 0.006203473 0.014677571
    > ci(2000, 200000, prior, 0.95)
    [1] 0.009568703 0.010440574
Prove that linear transformation is isomorphic, given the characteristic polynomial
$T^2+2T-3I=0$ implies that $T(T+2I)=3I$, hence ${1\over 3}T(T+2I)=(T+2I)\,{1\over 3}T=I$, so $T$ is invertible with $T^{-1}={1\over 3}(T+2I)$.
structure of extension by quadratic elements
As abstract groups, both Galois groups are isomorphic to $C_2\times C_2$: explicitly, we have automorphisms $$\sigma:\sqrt a\mapsto-\sqrt a, \sqrt b \mapsto \sqrt b\\\tau:\sqrt a\mapsto\sqrt a, \sqrt b \mapsto- \sqrt b$$ and one can check, using the fact that $\sqrt b\notin \mathbb Q(\sqrt a)$, that $\sigma\tau = \tau\sigma,\ \sigma^2=\tau^2 = 1$. However, bear in mind that this is an isomorphism of abstract groups, and there is no canonical choice of isomorphism - i.e. any isomorphism requires us to manually and arbitrarily pick where each automorphism maps to. In particular, these groups do not map to the same subgroup of $\mathrm{Gal}(\mathbb Q(\sqrt a ,\sqrt b,\sqrt2,\sqrt 3)/\mathbb Q)$.
Intuition for "identification" and "generalization" (computability theory)
For the first, here are three simple cases. Suppose we define $$P(x) \Leftrightarrow Q(x, x)$$ or $$P'(x,y,z) \Leftrightarrow Q'(x,y,z,y)$$ or $$P''(x,y,z,w) \Leftrightarrow Q''(x,y,z,x,w)$$ So in each case we get an $n-1$-place predicate from an $n$-place predicate by filling two slots in the $n$ place predicate in the same way. Hermes is just giving a general description of this operation. For the second, that isn't a conjunction in Hermes but a universal quantification. In more familiar notation, examples would be where we define $$P(x) \Leftrightarrow \forall yQ(x, y)$$ or $$P(x, y) \Leftrightarrow \forall zQ(z, x, y)$$ getting an $n-1$-place predicate by quantificationally binding one place in an $n$-predicate. [It is nice to see a mention of Hermes's old book: it taught me a lot when I read it as a student when the English translation was newly published -- so I could reach for it still on my shelves!]
Solution to 2nd order ode
Your ODE is of the type in $y(x)$ $$xy''+2y'-axy=-cx \implies xy''+2y'-x(ay-c)=0~~~~(1)$$ Let $ay-c=z$; then we get a homogeneous 2nd order ODE: $$\frac{x}{a}z''+\frac{2}{a}z'-xz=0 \implies z''+\frac{2}{x}z'-az=0~~~~(2)$$ Next let $z(x)= u(x)/x$; we get $$u''(x)-au(x)=0 \implies u(x)=C_1 e^{x\sqrt{a}}+C_2 e^{-x\sqrt{a}}$$ $$\implies z(x)=\frac{1}{x}[C_1 e^{x\sqrt{a}}+C_2 e^{-x\sqrt{a}}]$$ Finally $$y(x)=\frac{1}{ax}[C_1 e^{x\sqrt{a}}+C_2 e^{-x\sqrt{a}}]+\frac{c}{a}$$
Vector (transpose) Matrix addition - Why it could work despite different dimensions (Matlab)
You are correct: mathematically speaking, adding a $1\times 3$ vector to a $3\times 1$ vector does not make much sense in terms of vector spaces, but it can be useful when doing calculations. Matlab is doing something called "automatic broadcasting", which it adopted a few versions ago from Octave. Here you can read more about it: https://blogs.mathworks.com/loren/2016/10/24/matlab-arithmetic-expands-in-r2016b/ What happens with broadcasting is that the missing dimensions are replicated: $$[a,b,c] + \begin{bmatrix}x\\y\\z\end{bmatrix} \to \begin{bmatrix}a&b&c\\a&b&c\\a&b&c\end{bmatrix} + \begin{bmatrix}x&x&x\\y&y&y\\z&z&z\end{bmatrix}$$ So the row of the first vector gets copied as many times as needed to fit the 3 rows of the second vector, and vice versa. Edit: Mathematically speaking, what actually happens is that instead of calculating $${\bf v}^T+\bf w$$ the software finds $N,M\in \mathbb Z$ so that the following expression makes sense: $$({\bf v}^T \otimes {\bf 1}_N) + ({{\bf 1}_M}^T\otimes \bf w)$$ where $\otimes$ is a Kronecker product, and then calculates it.
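The same behaviour can be reproduced in numpy, whose broadcasting rules are analogous (this is a Python illustration, not the Matlab code itself):

    import numpy as np

    row = np.array([[1, 2, 3]])         # shape (1, 3), playing the role of [a, b, c]
    col = np.array([[10], [20], [30]])  # shape (3, 1), playing the role of [x; y; z]
    print(row + col)                    # both operands expand to shape (3, 3)
    # [[11 12 13]
    #  [21 22 23]
    #  [31 32 33]]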
How to prove that $a_nb_n < a_n+b_n$
Your claim is true if and only if $x\le 4$. We will use induction to show some facts about both sequences. Namely, $$a_{n+1}b_{n+1}=\frac{2a_nb_n}{a_n+b_n}\cdot\frac{a_n+b_n}2=a_nb_n=x$$ $$b_{n+1}-a_{n+1}=\frac{a_n+b_n}2-\frac{2a_nb_n}{a_n+b_n}=\frac{(b_n-a_n)^2}{2(a_n+b_n)}>0$$ Also, $a_{n+1}>a_n$: $$a_{n+1}-a_n=\frac{2x}{a_n+b_n}-a_n>\frac{2x}{2b_n}-a_n=a_n-a_n=0$$ Since $a_nb_n$ is constant and $a_n$ is strictly increasing, $b_n$ is strictly decreasing. All this implies that $a=\lim a_n$ and $b=\lim b_n$ exist. Also, $a_n<a\le b<b_n$ for every $n\in\Bbb N$. Taking limits in the equations: $$a=\frac{2ab}{a+b}$$ $$b=\frac{a+b}2$$ And then $$2(a+b)^2=4ab+(a+b)^2$$ That is $$(a-b)^2=0$$ Then $a=b$. So $a=b=\sqrt x$. Avoiding formalism for the moment: for $n$ large enough, $a_n+b_n$ will be near $2\sqrt x$, which, for $x> 4$, is less than $x$. Assuming $x\le 4$, by the AM-GM inequality, $$\frac{a_n+b_n}{a_nb_n}>\frac{2\sqrt x}x=\frac2{\sqrt x}\ge 1$$ But if $x>4$, for any $\epsilon>0$ there exists some $n\in\Bbb N$ such that $$\sqrt x-\epsilon<a_n<\sqrt x<b_n<\sqrt x+\epsilon$$ and then $$a_n+b_n<2\sqrt x+\epsilon$$ Taking $\epsilon=x-2\sqrt x>0$, we see that the claim is false.
Using mean value theorem .
Assuming $S$ is an interval. Hint: suppose that $f(x)$ is not constant on $S$. Then take two points $a<b$ (both in $S$) where $f(a)\neq f(b)$. Apply the MVT and get a point in $(a,b)$ where the derivative is not zero.
Bounding the outputs of a polynomial
Too long for a comment. This is a collection of thoughts that might lead to the right track. I also agree that it should hold for $k \geq 5$. Fix $k$. Suppose $x_1, \dots, x_{k-1}$ are all the same and $x_k$ is distinct. Then we can write $p(x) = a(x - x_1)^{e_1}(x - x_2)^{e_2}\dots(x - x_{k-1})^{e_{k-1}} + b$ with $0 < b < k$. But the product $|p(x_k)| = |a(x_k -x_1)^{e_1}(x_k - x_2)^{e_2} \dots (x_k - x_{k-1})^{e_{k-1}}| + b$ is too big. Indeed, when $k = 6$ and $n = 5$ it is minimized when the roots are $x_{k} - 1, x_k - 2, x_k + 1, x_k + 2, x_k + 3$, the multiplicities are one, and $a = 1$. The product minimum is $12$ in that case, so all of the $x_i$ must be equal. This clearly holds when $k$ is larger as well. This also works for $k=5$, but not for $k=4$ (take $p(x) = (x-2)(x-1)(x+1) + 1$).
I wonder what is really a reducible matrix?
A matrix $M$ is reducible if there is a permutation matrix $P$ so that $$PMP^T=\begin{bmatrix}B&C\\0&D\end{bmatrix}$$ where $B$ and $D$ are square. On the other hand, $M$ is partly decomposable if there are permutation matrices $P$ and $Q$ so that $$PMQ=\begin{bmatrix}B&C\\0&D\end{bmatrix}$$ where $B$ and $D$ are square. The matrix $$A = \begin{bmatrix} 0& 1& 0& 0\\ 0& 0& 1& 0\\ 0& 0& 0& 1\\ 1& 0& 0& 0 \end{bmatrix}$$ happens to be partly decomposable, but not reducible. You give a permutation $P^T$ of the columns so that $AP^T$ is of the form $\begin{bmatrix}B&C\\0&D\end{bmatrix}$, but for reducibility we need a simultaneous permutation of the rows and columns; $PAP^T$ must be of this form.
How to handle indices with fractional degree?
Hint: Notice that $\sqrt{x^2+\sqrt[3]{x^4y^2}}+\sqrt{y^2+\sqrt[3]{x^2y^3}}$ $= \sqrt{x^2+x^{4/3}y^{2/3}}+\sqrt{y^2+x^{2/3}y^{4/3}}$ $= \sqrt{x^{4/3}(x^{2/3}+y^{2/3})}+\sqrt{y^{4/3}(y^{2/3}+x^{2/3})}$ $= x^{2/3}\sqrt{x^{2/3}+y^{2/3}}+y^{2/3}\sqrt{y^{2/3}+x^{2/3}}$ $= (x^{2/3}+y^{2/3})\sqrt{x^{2/3}+y^{2/3}}$ $= (x^{2/3}+y^{2/3})^{3/2}$. Can you finish the problem from here?
How many integers $n$ for $3<n<100$ are such that $1+2+3+\cdots+(n-1)=k^2$, with $k \in \mathbb{N^*}$?
The general question of determining which triangle numbers (i.e. of the form $1+2+\dotsb+i$) are also squares is deeply related to the Pell-Fermat equation $x^2-2y^2=1$, thus it is non-trivial. More precisely, the condition $n=\frac{t(t+1)}{2}=s^2$ is equivalent to $x^2-2y^2=1$ when setting $x=2t+1$ and $y=2s$, the values of $(x,y)$ are thus the $(x_k,y_k)$ given by $x_k+\sqrt{2}y_k=(3+2\sqrt{2})^k$, giving the values for $n$: $n_k=(y_k/2)^2$. You easily get $s_{k+2}=6s_{k+1}-s_k$ with $(s_0,s_1)=(0,1)$, the numbers whose squares are also triangular numbers. The first ones are $0$, $1$, $6$, $35$, $204$ and $1\,189$.
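A short Python sketch generating the $s_k$ from the stated recurrence and verifying that each $s_k^2$ is triangular (using that $s^2=t(t+1)/2$ for some $t$ iff $8s^2+1$ is a perfect square, namely $(2t+1)^2$):

    from math import isqrt

    s_prev, s_cur = 0, 1
    seq = [s_prev, s_cur]
    for _ in range(5):
        s_prev, s_cur = s_cur, 6 * s_cur - s_prev  # s_{k+2} = 6 s_{k+1} - s_k
        seq.append(s_cur)
    print(seq)  # [0, 1, 6, 35, 204, 1189, 6930]
    for s in seq:
        m = 8 * s * s + 1
        assert isqrt(m)**2 == m  # s^2 is a triangular number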
Find an example of degree-100 extension of $\Bbb Q(\zeta_5)$ and $\Bbb Q(\sqrt[3]{2})$.
Based on my comments I give the following answer. It uses only the irreducibility of cyclotomic polynomials in $\mathbb{Q}[x]$ and avoids the theorem of Dedekind mentioned in comments. Let $\zeta_{n} = e^{2\pi i/n}$. Then $\zeta_{n}$ is a primitive $n$th root of unity, and there are $\phi(n)$ such primitive $n$th roots of unity, given by $\zeta_{n}^{r}$ where $1 \leq r \leq n$ and $r$ is coprime to $n$. The polynomial $$\Phi_{n}(x) = \prod_{1 \leq r \leq n, (r, n) = 1}(x - \zeta_{n}^{r})$$ has integer coefficients and is irreducible in $\mathbb{Q}[x]$, so that $[\mathbb{Q}(\zeta_{n}):\mathbb{Q}] = \phi(n)$. Next let $m, n$ be positive integers coprime to each other. Then we have integers $a, b$ such that $am + bn = 1$ and therefore $$\zeta_{mn} = \zeta_{n}^{a}\zeta_{m}^{b}$$ and clearly $$\zeta_{m} = \zeta_{mn}^{n}, \zeta_{n} = \zeta_{mn}^{m}$$ so that $$\mathbb{Q}(\zeta_{mn}) = \mathbb{Q}(\zeta_{m}, \zeta_{n})$$ And then $$[\mathbb{Q}(\zeta_{mn}): \mathbb{Q}] = \phi(mn) = \phi(m)\phi(n)$$ Let's put $m = 5, n = 101$ so that $\phi(m) = 4, \phi(n) = 100, \phi(mn) = 400$. We now have $$\mathbb{Q} \subset \mathbb{Q}(\zeta_{m})\subset\mathbb{Q}(\zeta_{mn})$$ and $$[\mathbb{Q}(\zeta_{mn}):\mathbb{Q}(\zeta_{m})] = \frac{[\mathbb{Q}(\zeta_{mn}):\mathbb{Q}]}{[\mathbb{Q}(\zeta_{m}):\mathbb{Q}]} = \frac{\phi(mn)}{\phi(m)} = \phi(n)$$ so that $\mathbb{Q}(\zeta_{mn}) = \mathbb{Q}(\zeta_{505})$ is our desired field extension. From the above argument we see that if $m, n$ are coprime to each other then $$\mathbb{Q}(\zeta_{mn}) = \mathbb{Q}(\zeta_{m},\zeta_{n})=\mathbb{Q}(\zeta_{n})(\zeta_{m})$$ is a field extension of $\mathbb{Q}(\zeta_{n})$ of degree $\phi(m)$. Moreover $\zeta_{m}$ satisfies a polynomial $\Phi_{m}(x) \in \mathbb{Q}[x] \subset\mathbb{Q}(\zeta_{n})[x]$ of degree $\phi(m)$. It follows that the polynomial $\Phi_{m}(x)$ is irreducible in $\mathbb{Q}(\zeta_{n})[x]$. Thus starting from the irreducibility of $\Phi_{n}(x)$ in $\mathbb{Q}[x]$ and using the theorem about degrees of a tower of field extensions we have proved the theorem of Dedekind referred to at the beginning of the post: Theorem: If $m, n$ are positive integers coprime to each other then the cyclotomic polynomial $\Phi_{m}(x)$ is irreducible in $\mathbb{Q}(\zeta_{n})[x]$.
Converting English events to Set Theory Notation using Operators
From my understanding these are the answers that I got; could someone please verify them and/or correct them:

- $ A \cap ( B \cap C)^c $
- $ ABD \cup ACD $
- $ D \cap (A \cup B \cup C ) $
Prove that there is no primitive Pythagorean triple $(a,b,c)$ where one side differs from another by three
Use the parametric form: $$a=r^2-s^2,$$ $$b=2rs,$$ $$c=r^2+s^2.$$ If say $a-b=3$ then $3=r^2-2rs-s^2=(r-s)^2-2s^2$. I think you can use congruences modulo some small number $n$ to dispose of this case. Then you need to tackle $b-a=3$, $c-a=3$ and $c-b=3$.
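For reassurance, here is a small Python search over the parametric form (the bound on $r$ is arbitrary) confirming that no primitive triple it generates has two sides differing by $3$:

    from math import gcd

    for r in range(2, 60):  # the bound is arbitrary
        for s in range(1, r):
            if gcd(r, s) == 1 and (r - s) % 2 == 1:  # primitive-triple parameters
                sides = sorted((r*r - s*s, 2*r*s, r*r + s*s))
                diffs = {sides[1] - sides[0], sides[2] - sides[1], sides[2] - sides[0]}
                assert 3 not in diffs
    print("no primitive triple with a side difference of 3 found")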
Sensitivity equations with discontinuities
In general this problem seems to be complicated, but in this simple case you can be rather explicit. Consider the problem: $$\frac{\partial x}{\partial t}=kx+f(t),\quad x(k,t=0)=x_0.$$ The solution to this problem is $$x(k,t)=e^{kt} \left ( x_0 + \int_0^t e^{-ks} f(s) ds \right ).$$ Thus the derivative with respect to $k$ is $$\frac{\partial x}{\partial k}(k,t)=t e^{kt} \left ( x_0 + \int_0^t e^{-ks} f(s) ds \right ) - e^{kt} \left ( \int_0^t s e^{-ks} f(s) ds \right ).$$ Differentiating that with respect to $t$ gives $$\frac{\partial^2 x}{\partial t \partial k} = e^{kt} \left ( kt + 1 \right ) \left ( x_0 + \int_0^t e^{-ks} f(s) ds \right ) - k e^{kt} \int_0^t s e^{-ks} f(s) ds$$ (the two $t f(t)$ terms produced by the Leibniz rule cancel). If $f$ is just zero then this reduces to your equation, but otherwise the expression is quite a bit more complicated even in this relatively simple setting, as you can see. That said, you can implement your case of interest by considering $f(t)=\sum_{i=1}^n a_i \delta(t-t_i)$; these correspond to dosages of size $a_i$ at times $t_i$. By the way, if you are careful about the order of multiplication, everything I did above translates to higher dimensions. It also translates to general inhomogeneous linear equations, provided that you can construct the necessary Green's function for them.
Consider the six dot product of four vectors $v_1,v_2,v_3,v_4$ on $\mathbb{R}^2$. Can all of them be negative?
Assume this is possible. Since you are only concerned with angles between the vectors (as you correctly note, the pairwise angles must be between $\pi/2$ and $3\pi/2$), you can rotate any such system of vectors so that $v_1$ is along the $x$-axis. Then $v_2$ must be in the 2nd or 3rd quadrant. If $v_2$ is in the third, $v_3$ cannot be in the 4th and can only be in the second. If $v_2$ is in the second, $v_3$ cannot be in the 4th so must be in the 3rd. In summary, there is no place to put $v_4$ so that it would be more than $\pi/2$ away from other placed vectors.
Prerequisites for Stein and Shakarchi Fourier Analysis
I think a firm understanding of introductory analysis is a prerequisite for Stein & Shakarchi's tough series. If you have not read any analysis textbook, I recommend you find one and at least get used to the properties of series of complex numbers, series of functions, and the rigorous treatment of differentiation and integration. My personal choice was Rudin, and I suppose the material is already overkill if your primary aim is to read Stein & Shakarchi's first book. For a beginner Stein is quite difficult to read, I think, but logically the only prerequisites are analysis, calculus, and linear algebra.
Why is the operator $T(t)$ positive and self-adjoint, where $(T(t)f)(z)=\sum_{n=0}^{\infty}(n+1)^{-t}c_{n}z^n$?
As written the question makes little sense. The inner product should be a number, but as written it seems to be a function (and, in that case, the formula as written cannot be right). I will assume that the inner product is $$ \langle \sum_n c_nz^n,\sum_nd_nz^n\rangle=\sum_n c_n\overline{d_n} $$ (this makes $\mathcal H$ a Hilbert space as it is actually $\ell^2(\mathbb N)$ in disguise). In that case, you have $$ \langle T(t)f,f\rangle=\sum_n\frac{|c_n|^2}{(n+1)^t}\geq0. $$ Since the Hilbert space is complex, this is enough to imply that $T(t)$ is self-adjoint (because $\langle T(t)f,f\rangle$ is real), and that $T(t)$ is positive.
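A finite-dimensional sketch of the positivity computation, truncating $f$ to a few made-up coefficients:

    import numpy as np

    # T(t) scales the n-th coefficient by (n+1)**(-t)
    t = 1.5
    c = np.array([1 + 2j, -0.5j, 3.0, 0.25])      # first few coefficients of f
    Tc = c / (np.arange(len(c)) + 1) ** t
    print(np.vdot(c, Tc))                         # sum |c_n|^2/(n+1)^t: real and >= 0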
Using the converse of the Cayley–Hamilton theorem
What the author is doing is using that if a matrix $M$ satisfies a polynomial $p(t)$, the minimal polynomial of $M$ divides $p(t)$. As all the eigenvalues of $M$ appear as roots of the minimal polynomial, you get that the eigenvalues of $M$ are contained in the set $\{2,3\}$.
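As a concrete illustration in sympy (the matrix below is my own hypothetical example satisfying $p(t)=(t-2)(t-3)$, not taken from the question):

    import sympy as sp

    # M satisfies p(t) = (t - 2)(t - 3), so its minimal polynomial divides p
    M = sp.Matrix([[2, 1], [0, 3]])
    print((M - 2*sp.eye(2)) * (M - 3*sp.eye(2)))   # zero matrix
    print(M.eigenvals())                           # {2: 1, 3: 1} -- contained in {2, 3}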
How is Maximum Independent Set maximum-degree-approximative?
Welcome to MSE! Your idea is a good one. Write $\Delta$ for the maximum degree (note $\Delta \geq 1$ as soon as the graph has an edge). If we can show $$\mathsf{OPT} \leq \Delta |I|$$ then we're done. First, every vertex is either in $I$ or adjacent to some element of $I$: otherwise it should have been added to $I$, and we contradict the definition of the algorithm. Now let $O$ be a maximum independent set, and assign each $u \in O$ to itself if $u \in I$, and to one of its neighbours in $I$ otherwise. Each $v \in I$ receives at most $\Delta$ elements of $O$ (do you see why? If $v \in O$ then no neighbour of $v$ lies in $O$, so $v$ receives only itself; if $v \notin O$ then $v$ receives at most its $\leq \Delta$ neighbours). Hence $\mathsf{OPT} = |O| \leq \Delta |I|$. I hope this helps ^_^
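In case it helps to see the algorithm being analyzed, here is a minimal sketch in Python; the vertex order and the example graph are my own choices:

    def greedy_mis(adj):
        """Greedy maximal independent set; 'adj' maps each vertex
        to the set of its neighbours."""
        I = set()
        for v in adj:                    # any fixed vertex order
            if adj[v].isdisjoint(I):     # no chosen neighbour yet
                I.add(v)                 # v is safe to add
        return I

    # 7-cycle: maximum degree Delta = 2, optimum independent set has size 3
    adj = {v: {(v - 1) % 7, (v + 1) % 7} for v in range(7)}
    print(len(greedy_mis(adj)))   # 3 here, and in general |I| >= OPT / Delta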
relationship between matrix and adjoint of a matrix and orthonormal system
Assume $A^*A=I$. On both sides of this equation is an $n\times n$ matrix. We can understand what this equation means by writing down what it means for the $(i,j)$th component on both sides: $$\sum_{k=1}^m \overline{A}_{ki}A_{kj}=\delta^i_j$$ where $\delta^i_j$ is the Kronecker delta, which is $1$ if $i=j$ and $0$ otherwise, and $A_{ij}$ is the $(i,j)$th component of the matrix $A$, i.e. the number at row $i$, column $j$. The columns of $A$ are the vectors $g_j=(A_{ij})_{i=1,\dots,m}\in\mathbb{C}^m$ for $j=1,\dots,n$. The above equation can be rewritten using the scalar product in $\mathbb{C}^m$: $$\langle g_i, g_j\rangle = \sum_{k=1}^m \overline{A}_{ki}A_{kj} = \delta^i_j$$ where $\langle v,w\rangle = \sum_{k=1}^m \overline{v}_k w_k$ for $v,w\in\mathbb{C}^m$; $\langle\cdot,\cdot\rangle$ is called the scalar product. This means by definition that $(g_j)_{j=1,\dots,n}$ is an orthonormal system of $\mathbb{C}^m$: $$\|g_i\|^2=\langle g_i, g_i\rangle=1\,\,\text{and}\,\,\langle g_i,g_j\rangle=0\,\,\text{for}\,\,i\not=j$$ That is, all the column vectors have norm $1$ and are orthogonal to one another.
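Here is a quick numerical sketch: a (complex) QR factorization produces a matrix with orthonormal columns, and the identity $A^*A=I$ then holds up to rounding.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 5, 3
    M = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
    A, _ = np.linalg.qr(M)          # reduced QR: A is m x n with orthonormal columns

    print(np.allclose(A.conj().T @ A, np.eye(n)))     # A^* A = I
    print(np.isclose(np.vdot(A[:, 0], A[:, 0]), 1))   # ||g_1|| = 1
    print(np.isclose(np.vdot(A[:, 0], A[:, 1]), 0))   # <g_1, g_2> = 0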
Is the set of rational numbers under multiplication a group?
HINT: What is the multiplicative inverse of $0$? If we replace $\mathbb Q$ with $\mathbb Q\setminus \{0\}$ to exclude $0$ from our set, under the operation of multiplication, we do then have a group: $(\mathbb Q\setminus \{0\}, \cdot)$
Show that any abelian subgroup of $D_n$ is cyclic when n is odd
Hint: Do a proof by contradiction. Suppose there is an abelian subgroup $H$ of $D_n$ which is not cyclic. Note that this means $H$ must contain some element of the form $sr^k$ for some $0&lt;k&lt;n$ and as it is not cyclic, it must contain another element of the form $sr^j$ or $s$ or $r^i$. Show that each of these does not commute with $sr^k$, hence the subgroup cannot have been abelian, a contradiction.
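If you want to experiment, here is a sketch using sympy's permutation groups; I am assuming its convention that DihedralGroup(n).generators returns the rotation and the reflection, in that order:

    from sympy.combinatorics import DihedralGroup

    n = 5
    G = DihedralGroup(n)
    r, s = G.generators                  # rotation r and reflection s
    for i in range(1, n):
        print(i, s * r**i == r**i * s)   # False for every 0 < i < n when n is odd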
Help With An Expression Involving Limits
Okay, here is what I came up with today. My confidence in its correctness is somewhere around lukewarm. $$ \lim_{R\rightarrow\infty}{S\left(Re^{j\theta}\right)}=\lim_{R\rightarrow\infty}{\frac{1}{1+L\left(Re^{j\theta}\right)}}=\frac{1}{1+\lim_{s\rightarrow\infty}{L\left(s\right)}R^{-n_r}e^{-n_rj\theta}} $$ To understand what is going on when going from the second term to the third term in the equation above, consider the following general form of the open-loop transfer function. $$ L\left(Re^{j\theta}\right)=\frac{H\left(R^{n_H}e^{j\theta n_H}\right)}{G\left(R^{n_G}e^{j\theta n_G}\right)} $$ The open-loop transfer function, $L(s)$, is ultimately a complex function of $s$. The above equation shows that the numerator is of order $n_H$ and the denominator is of order $n_G$. Let us take a closer look at this specific term in the denominator of the original equation in question. $$ \lim_{R\rightarrow\infty}{L\left(Re^{j\theta}\right)}=\lim_{R\rightarrow\infty}{\frac{H\left(R^{n_H}e^{j\theta n_H}\right)}{G\left(R^{n_G}e^{j\theta n_G}\right)}} $$ For the general transfer function, the numerator and denominator are polynomials in $s$. Next, consider that in the limit of $R$ going to infinity, it is ultimately the highest-order power of $R$ in the numerator and denominator, along with the coefficients associated with those terms, which determines the value of the limit. We can express that as the following. $$ \lim_{R\rightarrow\infty}{\frac{H\left(R^{n_H}e^{j\theta n_H}\right)}{G\left(R^{n_G}e^{j\theta n_G}\right)}}=\frac{A\, R^{n_H}e^{j\theta n_H}}{B\, R^{n_G}e^{j\theta n_G}}=\frac{A}{B}\,R^{\left(n_H-n_G\right)}e^{j\theta\left(n_H-n_G\right)} $$ There are two things to note about that last term. We are taking the limit of $L$ in the complex plane and arriving at the above result. Well, what if we also took the limit of $L(s)$ along the real axis as $s$ goes to infinity? If you work through the same reasoning we used in the complex plane, you will eventually come to an answer of $A/B$! The other thing to note is that $(n_H - n_G)$ is just $-n_r$, where $n_r = n_G - n_H$ is the relative degree of $L$. That lets us write all this up as: $$ \lim_{R\rightarrow\infty}{S\left(Re^{j\theta}\right)}=\frac{1}{1+L\left(\infty\right) R^{-n_r}e^{-n_rj\theta}} $$ In the above, $$ L\left(\infty\right)=\lim_{s\rightarrow\infty}{L\left(s\right)}. $$
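A numeric sketch of the decay rate, using a hypothetical loop $L(s)=(2s+1)/(s^2+s+1)$ of my own choosing (so $n_H=1$, $n_G=2$, hence $n_r=1$ and $A/B=2$):

    import numpy as np

    def L(s):
        return (2*s + 1) / (s**2 + s + 1)

    theta = 0.7
    for R in (1e2, 1e4, 1e6):
        s = R * np.exp(1j * theta)
        print(R, 1 / (1 + L(s)))   # tends to 1 + 0j as R grows, like 1/(1 + (A/B) R^{-1} e^{-j theta})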
Poisson Distribution Approximation
The Poisson distribution takes an average rate, which you've found, and tells you the probability that a given number of occurrences will happen. For a Poisson distribution, the density function is given by $$\mathbb P(S=s)=\frac{\lambda^se^{-\lambda}}{s!}$$ where $s$ is a non-negative integer. In this case we are concerned with the number of faults per $50$ units, so $\lambda=\frac{1}{500}\,\frac{\text{faults}}{\text{unit}}\times 50\text{ units} = 0.1$. You can then calculate the probability of $s$ faults happening with the formula above! Note that the maximum number of faults that can occur is $50$, so strictly speaking $s$ cannot range over all non-negative integers, and to be precise you should be using the binomial distribution with $n=50$ and fault probability $p=1/500$; however, the Poisson distribution is derived by taking $n \to \infty$, so for large samples the Poisson distribution is a good approximation.
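A quick numerical comparison of the two distributions, as a sketch with scipy:

    from scipy import stats

    n, p = 50, 1 / 500          # 50 units, fault probability 1/500 per unit
    lam = n * p                 # Poisson rate: 0.1 faults per 50 units

    for s in range(4):
        print(s, stats.binom.pmf(s, n, p), stats.poisson.pmf(s, lam))
    # the two columns agree to several decimal places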
For an $m\times n$ matrix $A$, does the union of the basis of the row space and null space of $A$ span $\mathbb{R}^n$?
The answer is yes. In particular, for a real matrix the row space and null space are orthogonal complements of each other in $\mathbb{R}^n$. This is sometimes called (part of) the fundamental theorem of linear algebra.
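A numerical sketch: take orthonormal bases of the two subspaces (via the SVD and scipy.linalg.null_space), stack them, and check that together they have full rank. The matrix is a made-up example.

    import numpy as np
    from scipy.linalg import null_space

    A = np.array([[1., 2., 3.],
                  [2., 4., 6.]])          # rank 1, so the null space is 2-dimensional

    U, s, Vt = np.linalg.svd(A)
    r = int(np.sum(s > 1e-10))
    row_basis = Vt[:r]                    # rows spanning the row space
    null_basis = null_space(A).T          # rows spanning the null space

    combined = np.vstack([row_basis, null_basis])
    print(np.linalg.matrix_rank(combined))   # 3 = n: together they span R^n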
Understanding where my naive attempt to prove Countable Choice out of Finite Choice fails
The problem with the usual inductive arguments is that if $A_n$ has more than one element, and you are unable to specify exactly a unique element of $A_n$ being chosen at each step, then you have a "splitting" in your recursion which requires you to make some arbitrary choice. As long as you're going along a finite path, making a choice is fine, because you've only had to make finitely many of these choices. But if you want to amalgamate them, well, then you have to already have had a set of coherent choices, that is to say, an infinite path that you could follow in your recursive definition. The problem is that when you have splittings and you need to make arbitrary choices every so often, you cannot guarantee having had this path, unless you assume countable choice. And that causes a circularity. This is exactly the same failure as in "every $n\in\Bbb N$ is finite, therefore $\Bbb N$ is finite". The only way to have proved that $\Bbb N$ is finite is to have assumed it was finite to begin with. Compare this with the actual proof of the recursion theorem: there you do have a well-defined singular choice. You are given a function $f\colon A\to A$, and you are using it to define your sequence $F(0)=f(a)$, for some fixed $a\in A$; and as a function spits a single element back at you, there are no arbitrary choices when you want to define $F(n+1)$, it is simply $f(F(n))$.
When to use derivatives and integrals to find a power series
"why do we have to use the antiderivative for one and not the other, and what's the intuition behind finding the power series through these derivatives?" It's different for every case, but essentially your use of (anti)derivatives comes down to manipulation of the terms of the series. For example if you knew what $$\sum a_nx^n$$ was and you wanted to know what $$\sum \frac{a_nx^{n+1}}{n+1}$$ was then of course you'd integrate both sides. Similarly, if you knew what $$\sum \frac{a_nx^{n+1}}{n+1}$$ was and you want $$\sum a_nx^n$$ then you differentiate. Another example: We know that $$\frac1{1+x^2}=\sum_{n\geq0}x^{2n}$$ so if we integrate both sides we get $$\arctan x=\sum_{n\geq0}\frac{x^{2n+1}}{2n+1}$$ In short it's just a matter of what you want vs. what you have and doing what you can to turn what you have into what you want.
Calculating a p-value when H0 is an inequality
You want to find the size of a test. For non-simple hypotheses this is defined as $\sup_{\theta \in H_0} W(\theta)$, where $W(\theta)$ is the power function: $W(\theta)=\mathbb{P}(\text{reject } H_0 \mid \theta)$.
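As a concrete (hypothetical) illustration: testing $H_0\colon \mu \leq 0$ against $H_1\colon \mu > 0$ for $X\sim N(\mu,1)$ with $n=25$, rejecting when $\sqrt{n}\,\bar x > 1.645$, the supremum of the power function over $H_0$ sits at the boundary $\mu=0$:

    import numpy as np
    from scipy import stats

    n, crit = 25, 1.645
    mus = np.linspace(-1, 0, 101)                     # parameter values inside H0
    W = 1 - stats.norm.cdf(crit - np.sqrt(n) * mus)   # W(mu) = P(reject | mu)
    print(W.max())   # ~0.05, attained at the boundary mu = 0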
Self Study in Dynamical Systems
Good books with enough introductory material in this regard are:

1. Introduction to Dynamical Systems, Michael Brin. This book defines all the basic concepts of dynamics. Its first chapter, "Low-dimensional dynamics", gives the topological classification of homeomorphisms of the circle (the theorem of Poincaré). It also has the interesting theorem of Sharkovsky (a generalization of "period three implies chaos").
2. A First Course in Dynamics with a Panorama of Recent Developments. This book covers similar content, but with a different style of writing.

For books that go deeper into the core of the theory, there are two by the same author (W. de Melo): Lectures on One-Dimensional Dynamics, Publ. do 17º Colóquio Brasileiro de Matemática, 1988; and One-Dimensional Dynamics, Springer-Verlag, 1993, with S. van Strien. You can visit the author's page: http://w3.impa.br/~demelo/listapub.html Regarding prerequisites, I believe that in either case you need to know something of differential calculus. If you want to discuss one-dimensional dynamics, write to me.
Valid Usage of Chi Squared Test
The Chi-Squared goodness of fit test is generally used for counts of categorical variables. Counts are generally more than $0$, but here white is encoded as $0$. This means that in the expected case we are dividing by $0$, which generates a statistic of infinity: in this test, observing something for which we expect no counts is considered impossible. But in my own circumstance I want whitespace to be considered a valid value, therefore a Chi-Squared goodness of fit test is inappropriate for this problem.
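A sketch of the failure mode, with made-up counts:

    import numpy as np

    observed = np.array([10., 5., 3.])
    expected = np.array([ 9., 9., 0.])   # a category with expected count 0

    # the statistic sum((obs - exp)^2 / exp) blows up on the zero cell
    with np.errstate(divide='ignore'):
        print(np.sum((observed - expected)**2 / expected))   # inf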
Moment of inertia of a semicircle by simple integration.
You can also calculate the moment of inertia by using double integrals. Taking the semicircle to run from $\theta=-\pi/2$ to $\theta=\pi/2$, the moment of inertia you're looking for, with respect to the $x$-axis, is $\iint y^2\,dA$. In polar coordinates $(\rho,\theta)$ the inner integral is bounded by $0$ and $r$, the radius of the circle (with respect to $\rho$), and the outer one by $-\pi/2$ and $\pi/2$ (with respect to $\theta$). Don't forget the Jacobian (it is $\rho$), and of course $y^2=\rho^2\sin^2\theta$ (you transfer $(x,y)$ to $(\rho,\theta)$): $$I_x=\int_{-\pi/2}^{\pi/2}\int_0^{r}\rho^2\sin^2\theta\,\rho\,d\rho\,d\theta.$$
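Evaluating the integral symbolically, as a quick sketch in sympy (it returns the familiar $\pi r^4/8$):

    import sympy as sp

    rho, theta, r = sp.symbols('rho theta r', positive=True)
    I_x = sp.integrate(rho**3 * sp.sin(theta)**2,
                       (rho, 0, r), (theta, -sp.pi/2, sp.pi/2))
    print(I_x)   # pi*r**4/8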
Topological counterexample: compact, Hausdorff, separable space which is not first-countable
The space $I^I$ (i.e., the product of $\mathfrak c$-many copies of the unit interval $I=[0,1]$) is a compact Hausdorff space. It is not first-countable, see here: Uncountable Cartesian product of closed interval It is separable by the Hewitt-Marczewski-Pondiczery theorem, see here: On the product of $\mathfrak c$-many separable spaces As pointed out in a comment, we could also prove separability by directly showing that polynomials with rational coefficients form a countable dense subset. See also this answer for a similar approach in a slightly different space.
Probability Type I error
Your work seems fine, except for the final step, as Augustin pointed out. In terms of probability, recall that we are seeking the probability that $\bar x < 124$ given that $\mu = 125$. This means $$P(\bar x <124\mid\mu = 125)$$ By standardizing, we get \begin{align*} P(\bar x <124\mid\mu = 125) &= P\left(Z < \frac{124-125}{5/\sqrt{100}}\bigg|\mu = 125\right) \\ &= P(Z<-2\mid\mu = 125) \\ &= \Phi(-2) \\ &= 0.02275013 \end{align*} In simpler terms (if you don't know probability), we are looking for the "area to the left" of $-2$ under the normal curve.
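A quick numerical check with scipy:

    from scipy import stats

    z = (124 - 125) / (5 / 100**0.5)    # standardize the sample mean
    print(z, stats.norm.cdf(z))         # -2.0 0.0227501...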
Prove that for any square matrix, an invertible matrix B exists, so that BA is triangular
Gaussian elimination (also known as row reduction) transforms a square matrix into an upper triangular matrix. Every elementary row operation, namely:

- multiplying a row by a number different from $0$,
- summing a row with another row multiplied by any number,
- swapping two rows,

can be realized as the multiplication by an invertible matrix. Namely, the matrix realizing each operation is the one obtained from the identity matrix subject to the same elementary row operation. Therefore the successive elementary row operations give $$ U = E_k E_{k-1} \dots E_2 E_1 A $$ where $U$ is in (reduced) row echelon form (hence triangular). Then $B = E_k E_{k-1} \dots E_2 E_1$ is invertible with $BA = U$ triangular; equivalently, $$ A=(E_k E_{k-1} \dots E_2 E_1)^{-1}U $$ is the required decomposition.
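As a computational sketch, LU factorization with partial pivoting produces exactly such a $B$ (the matrix $A$ below is a made-up example):

    import numpy as np
    from scipy.linalg import lu

    A = np.array([[0., 2., 1.],
                  [1., 1., 0.],
                  [2., 0., 1.]])

    P, L, U = lu(A)                 # A = P L U, L unit lower triangular, U upper
    B = np.linalg.inv(L) @ P.T      # invertible, being a product of invertibles
    print(np.allclose(B @ A, U))    # True: B A is upper triangular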
Applying the implicit function theorem
Consider the function $G$ defined on $\mathbb{R}^{3}$ by : $$ \forall (x,y,z) \in \mathbb{R}^{3}, \; G(x,y,z) = F(xy,z-2x) $$ Then, there exists $(x_{0},y_{0},z_{0}) \in \mathbb{R}^{3}$ such that $G(x_{0},y_{0},z_{0})=0$, and : $$ \frac{\partial G}{\partial z}(x,y,z) = \frac{\partial F}{\partial v}(xy,z-2x) $$ So that $\displaystyle \frac{\partial G}{\partial z}(x_{0},y_{0},z_{0}) = \frac{\partial F}{\partial v}(x_{0}y_{0},z_{0}-2x_{0}) \neq 0$. So, applying the implicit function theorem to $G$ gives : there exist an open neighborhood $V$ of $(x_{0},y_{0})$ in $\mathbb{R}^{2}$, an open neighborhood $W$ of $z_{0}$ in $\mathbb{R}$ and a function $f \, : \, V \, \longrightarrow \, W$, $\mathcal{C}^{1}$ on $V$, such that : $$ \forall (x,y,z) \in V \times W, \quad \Big( \, G(x,y,z)=0 \; \Leftrightarrow \; z=f(x,y) \, \Big) $$
Number of paths from A to B with no direction constraints
I believe this is the problem of counting self-avoiding rook walks, and for the square case you can look up the first few values at the OEIS here. Unfortunately it seems like there is no known closed-form expression for arbitrary $n$. There's some more information and references at MathWorld.
Limit of a bounded differentiable function
$f(x)=\sin (\ln (1+x))$ is a counter-example. Note that $f(e^{\frac{(2n+1)\pi}{2}}-1)=\sin\frac{(2n+1)\pi}{2}=(-1)^{n}$, so $\lim_{n\to \infty}f(e^{\frac {(2n+1)\pi} 2}-1)$ does not exist. The hypothesis holds with $C=1$, since $|f'(x)|=\frac{|\cos(\ln(1+x))|}{1+x}\leq 1$ for $x\geq 0$.
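A quick numerical illustration of the alternation:

    import numpy as np

    # x_n = e^{(2n+1)pi/2} - 1 gives f(x_n) = sin((2n+1)pi/2) = (-1)^n
    for n in range(4):
        x = np.exp((2*n + 1) * np.pi / 2) - 1
        print(np.sin(np.log(1 + x)))   # alternates +1, -1, +1, -1 (up to rounding)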
How can one find this limit?
$$\lim_{x\to 2} (f^2(x)-6f(x)+9)=0\iff\lim_{x\to 2} (f(x)-3)^2=0\iff\lim_{x\to 2} f(x)=3$$
How to find the Lie algebra of a specific subgroup of a product Lie group
It is crucial for your case that $C$ is a normal subgroup, hence that $Lie(C)$ is an ideal in the Lie algebra. Then observe that if $X-Y \in Lie(C)$ we must also have $Lie(C) \ni [X-Y, Y] = [X, Y]$ since $Lie(C)$ is an ideal, hence specifically iterated Lie brackets of $X$ and $Y$ all lie in $Lie(C)$. Then you can use the Baker-Campbell-Hausdorff formula to write: $$ \exp(tX)\exp(-tY) = \exp\left(t(X-Y) - \frac{t^2}{2}[X,Y] + \dots\right) $$ for sufficiently small $t$. And all later terms in the dots are iterated Lie brackets of $X$ and $Y$, hence the expression in the exponential lies in $Lie(C)$.
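A quick numerical check of the truncated expansion, with made-up $3\times 3$ matrices; the discrepancy is third order in $t$:

    import numpy as np
    from scipy.linalg import expm, logm

    rng = np.random.default_rng(1)
    X, Y = rng.standard_normal((2, 3, 3))
    t = 1e-2
    lhs = logm(expm(t*X) @ expm(-t*Y))
    rhs = t*(X - Y) - t**2/2 * (X @ Y - Y @ X)
    print(np.max(np.abs(lhs - rhs)))   # O(t^3): tiny compared to the t^2 term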
Urn Probability Combination Problem
Let's look at the following hypotheses: $H_1$: $U_1$ was picked, $H_2$: $U_2$ was picked. By the problem's definition, $\mathrm{P}(H_1) = \mathrm{P}(H_2) = \frac12$. Let $R$ denote the event that two red balls were chosen. Now, observe that $\displaystyle \mathrm{P}(R \mid H_1) = \frac{\binom{10}{2}}{\binom{18}{2}}$ and $\displaystyle \mathrm{P}(R \mid H_2) = \frac{\binom{16}{2}}{\binom{20}{2}}$. Now, by the formula for total probability, we have that $\mathrm{P}(R) = \mathrm{P}(R \mid H_1) \mathrm{P}(H_1) + \mathrm{P}(R \mid H_2) \mathrm{P}(H_2) = \ldots$; As previously, let $R$ denote the event that two red balls were chosen, and $B$ the event that two black balls were chosen. Now, the probability that the two balls are of different color is $1 - \mathrm{P}(R) - \mathrm{P}(B)$; I think you can handle this one yourself.
Describe the orbits of poles for the group of rotations of an octahedron.
There are indeed $24$ elements of the group. You should always ask yourself which one is the identity, however.
Find what level of the Calkin–Wilf tree a number is on
The tree is known as the Calkin-Wilf tree (https://en.wikipedia.org/wiki/Calkin%E2%80%93Wilf_tree). If $a$ and $b$ are relatively prime, there is exactly one way to get from $(1,1)$ to $(a,b)$; otherwise, the pair $(a,b)$ is not reachable. Suppose $a < b$ (the other case is similar). If the steps in the Euclidean algorithm to find $\gcd(a,b)$ are $$\begin{align*} b & =q_1 a + r_1 \\ a & = q_2r_1 + r_2 \\ \vdots & = \vdots \\ r_m & = q_{m+2} r_{m+1} + r_{m+2} = q_{m+2}r_{m+1}+1 \\ r_{m+1} & = q_{m+3} \cdot 1 + 0 \end{align*}$$ then the number of steps to get from $(1,1)$ to $(a,b)$ is $q_1+q_2 + \dotsb + q_{m+3} -1$. Added: the python program below computes the level by keeping track of the subtractions involved in the Euclidean algorithm. There is no need to compute $\gcd(a,b)$ separately.

    def steps(a, b):
        level = -1
        while a > 0:
            while a <= b:
                a, b = a, b - a
                level += 1
            a, b = b, a
        if b == 1:
            return level
        else:
            return -1
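For instance (continuing the program above):

    print(steps(1, 1))   # 0: the root 1/1
    print(steps(2, 3))   # 2: 1/1 -> 2/1 -> 2/3
    print(steps(2, 4))   # -1: gcd(2, 4) != 1, so 2/4 never appears in the tree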
Partial fraction decomposition of $\frac{5z^4 + 3z^2 + 1}{2z^2 + 3z + 1}$
$$\frac{5z^4+3z^2+1}{2z^2+3z+1}=\frac{5}{2}z^2-\frac{15}{4}z+\frac{47}{8}-\frac{9}{z+1}+\frac{\frac{33}{16}}{z+\frac{1}{2}}.$$
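A quick check of the decomposition in sympy; apart may present the last term with an integer-coefficient denominator, $33/(8(2z+1))$, which is the same thing:

    import sympy as sp

    z = sp.symbols('z')
    expr = (5*z**4 + 3*z**2 + 1) / (2*z**2 + 3*z + 1)
    print(sp.apart(expr, z))
    # 5*z**2/2 - 15*z/4 + 47/8 - 9/(z + 1) + 33/(8*(2*z + 1))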
The product of locally compact spaces is locally compact
With that definition of local compactness (there are several, so I'm glad you included it) your proof is indeed correct. $\overline{U\times V} = \overline{U} \times \overline{V}$ in the product topology and the product of compact sets is compact, in short. It's very common for people to consider local compactness in the context of Hausdorff spaces only, as then all the different usual definitions are equivalent to each other, and we can prove many more theorems (and we have a nicely behaved one-point compactification as well). The exercise poser could just have been lazy, not realising that for this one result he could do away with the Hausdorffness altogether.