Sigma algebra related
I'm guessing that the somewhat cryptic notation $\sigma(\mathcal C)$ denotes the $\sigma$-algebra generated by $\mathcal C.$ If I guessed right, then the assertion you want to prove is that $\sigma(\mathcal C)=\mathcal S$ where $\mathcal S=\bigcup\{\sigma(\mathcal D):\mathcal D\subseteq\mathcal C,\ \mathcal D\text{ countable}\}.$ For that you just have to show that $\mathcal C\subseteq\mathcal S\subseteq\sigma(\mathcal C)$ (which is pretty obvious) and that $\mathcal S$ is a $\sigma$-algebra. The trickiest part is showing that $\mathcal S$ is closed under countable union. I guess you know that a countable union of countable sets is countable?
Minimal polynomial of $\cos(72°)$ over $\Bbb Q$
The rational root test quickly shows that the polynomial $$q=16x^4+16x^3-4x^2-4x+1=y^4+2y^3-y^2-2y+1,$$ where $y=2x$, does not have any linear factors over $\Bbb{Q}$. So it is either irreducible, or it is of the form $$q=(ay^2+by+c)(dy^2+ey+f),$$ for some $a,b,c,d,e,f\in\Bbb{Q}$, and in fact $a,b,c,d,e,f\in\Bbb{Z}$ by Gauss' lemma. Expanding the product leads to the system of equations \begin{eqnarray*} ad&=&1\\ ae+bd&=&2\\ af+be+cd&=&-1\\ bf+ce&=&-2\\ cf&=&1. \end{eqnarray*} The first and last show that without loss of generality $a=d=1$ and $c=f=\pm1$, yielding \begin{eqnarray*} e+b&=&2\\ be\pm2&=&-1\\ \pm(b+e)&=&-2, \end{eqnarray*} where the two $\pm$-signs agree. Comparing the first and last shows that we must have the $-$-sign, yielding $b+e=2$ and $be=1$, so $b=e=1$ and hence $$q=(y^2+y-1)^2=(4x^2+2x-1)^2.$$ From a slightly more abstract perspective: let $z=e^{2\pi i/5}$. We see that the minimal polynomial of $z$ equals $z^4+z^3+z^2+z+1$ because $$(z-1)(z^4+z^3+z^2+z+1)=z^5-1=0,$$ and $z\neq1$. Note that this polynomial is irreducible because $$(z+1)^4+(z+1)^3+(z+1)^2+(z+1)+1=z^4+5z^3+10z^2+10z+5,$$ is Eisenstein w.r.t. the prime $5$. Now $2\cos(72^{\circ})=z+\bar{z}$, and the tower of fields $$\Bbb{Q}\subset\Bbb{Q}(z+\bar{z})\subset\Bbb{Q}(z),$$ shows that $[\Bbb{Q}(z+\bar{z}):\Bbb{Q}]\leq2$ because $z+\bar{z}$ is fixed by complex conjugation. Hence the minimal polynomial of $z+\bar{z}$ has degree at most $2$.
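A quick numeric sanity check of this factorization, using only the standard library (an illustration, not part of the original argument):

```python
import math

c = math.cos(math.radians(72))
print(4*c**2 + 2*c - 1)                      # ~0: cos(72 deg) is a root of 4x^2 + 2x - 1
print(16*c**4 + 16*c**3 - 4*c**2 - 4*c + 1)  # ~0: consistent with q = (4x^2 + 2x - 1)^2
```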
In no wff are the symbols $\neg$ and $)$ next to each other
From the substitution rules, we see that in any wff, $\neg$ can only be preceded by $($ and must be followed by the start of a wff; the start of a wff is either an atom or $($. This immediately shows that $\neg$ can never appear beside $)$ in a wff. $\neg$ is not an atomic formula because $\neg$ is a logical connective that just happens to connect to only one argument. It does not constitute a predicate.
How to compute $z'=\frac{\mathrm d}{\mathrm dx}\int_0^x \sin(x-s)f(s) \, \mathrm ds$
Looking at the wikipedia page of differentiation under the integral sign, you set $a(x) = 0$, $b(x) = x$ and $f(x, s) = \sin(x-s) f(s)$. Alternatively, consider $z(x) = \int_0^x \sin(x-s) f(s) \mathrm{d} s$, and think of $z(x+ \delta x)-z(x)$: $$ \begin{eqnarray} z(x+ \delta x)-z(x) &=& \int_0^{x+\delta x} \sin(x+\delta x-s) f(s) \mathrm{d} s - \int_0^x \sin(x-s) f(s) \mathrm{d} s \\ &=& \int_0^{x+\delta x} \sin(x+\delta x-s) f(s) \mathrm{d} s - \int_0^x \sin(x+\delta x-s) f(s) \mathrm{d} s + \\&& \int_0^x \sin(x+\delta x-s) f(s) \mathrm{d} s - \int_0^x \sin(x-s) f(s) \mathrm{d} s \\ &=& \int_{x}^{x+\delta x} \sin(x+\delta x-s) f(s) \mathrm{d} s + \int_0^x \left( \sin(x+\delta x -s) - \sin(x-s) \right) f(s) \mathrm{d} s \\ &\sim& \delta x \left( \sin(\delta x) f(x) + \int_0^x \cos(x-s) f(s) \,\mathrm{d} s \right) \end{eqnarray} $$ In the limit $\lim_{\delta x \to 0} \frac{z(x+\delta x)-z(x)}{\delta x}$ the first term becomes zero, and you get the answer: $$ z^\prime(x) = \int_0^x \cos(x-s) f(s) \mathrm{d} s $$
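If you want to convince yourself numerically, here is a minimal check of the final formula with SciPy; the choice $f=\sin$ is an arbitrary test integrand, not something from the question:

```python
import numpy as np
from scipy.integrate import quad

f = np.sin  # arbitrary test integrand (assumption for illustration)

def z(x):
    val, _ = quad(lambda s: np.sin(x - s) * f(s), 0, x)
    return val

x0, h = 1.3, 1e-5
finite_diff = (z(x0 + h) - z(x0 - h)) / (2 * h)            # numerical z'(x0)
formula, _ = quad(lambda s: np.cos(x0 - s) * f(s), 0, x0)  # claimed closed form
print(finite_diff, formula)                                # agree to many decimal places
```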
Proving that the number of compositions of n into positive odd summands is Fibonacci sequence
We can also map the compositions of $n$ into odd parts to the compositions of $n-1$ into 1s and 2s (which is the same as the number of ways of dividing a $2\times(n-1)$ rectangle into $2\times 1$ rectangles) as follows:

Given a composition of $n$ into odd parts, split each part into 0 or more 2s followed by a 1. Remove the final 1. You now have a composition of $n-1$ into 1s and 2s.

To go in reverse, given a composition of $n-1$ into 1s and 2s, add an extra 1 at the end and then group parts in sequences of 0 or more 2s that end in a 1. Sum each group of 2s followed by a 1 to get a composition of $n$ into odd parts.

For example, the compositions of 5 into odd parts and 4 into 1s and 2s are related as follows:

1+1+1+1+1 <-> 1+1+1+1
1+1+3 <-> 1+1+2
1+3+1 <-> 1+2+1
3+1+1 <-> 2+1+1
5 <-> 2+2
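A short brute-force check that the count of compositions into odd parts really follows the Fibonacci numbers (a sketch, using a naive recursion that is fine for small $n$):

```python
def odd_compositions(n):
    # number of compositions of n into positive odd parts
    if n == 0:
        return 1
    return sum(odd_compositions(n - k) for k in range(1, n + 1, 2))

fib = [1, 1]
while len(fib) < 10:
    fib.append(fib[-1] + fib[-2])

print([odd_compositions(n) for n in range(1, 11)])  # 1, 1, 2, 3, 5, 8, 13, 21, 34, 55
print(fib)                                          # same sequence
```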
Poor man's Navier-Stokes and the tent map
Hint: As given in List of trigonometric identities, $$\cos(2\theta) = 1 - 2\sin^2(\theta) \tag{1}\label{eq1A}$$ $$\cos(\theta) = \sin(\theta + \frac{\pi}{2}) \tag{2}\label{eq2A}$$ To get the second part (I'm not sure why the author has $2$ parts, but my best guess is this is to have $0 \le x_i \le 1$ for all $i$), you can use $$\sin(\theta) = -\sin(\theta - \pi) \tag{3}\label{eq3A}$$
Lebesgue measurablity of Hardy Littlewood maximal function
Recall that $f:\Bbb{R}^d\rightarrow \Bbb{R}$ is Lebesgue measurable if $\{f>\alpha\}$ is measurable for every real number $\alpha$ (this follows from the standard definition that $f$ is measurable if $f^{-1}([-\infty,\alpha))$ is measurable); in particular, it suffices to show that $\{f>\alpha\}$ is open for every $\alpha$. Then, let the maximal function be defined as usual $$ Mf(x)=\sup_{B\ni x}\frac{1}{\vert B\vert}\int_B\vert f(y)\vert dy $$ Now, $\{Mf>\alpha\}$ is open since if $y\in \{Mf>\alpha\}$, there is a ball $B$ such that $y\in B$ and $$ \frac{1}{\vert B\vert}\int_B\vert f\vert >\alpha $$ And, for any other $x\in B$, we have $$ Mf(x)\geq\frac{1}{\vert B\vert}\int_B\vert f\vert>\alpha $$and hence $x\in \{Mf>\alpha\}$ as well.
Non-negative fractions summing to $1$
Hint: Multiply both sides of the equation $\sum_{i=1}^n \frac{c_i}{d_i}=1$ by the product $d_2\cdots d_n$.
Integral from $0$ to $t$ $\int \frac{e^x}{(t-x)^{9/10}} dx$
From what you did we have that $$I = e^t \int_{0}^{t} \frac{e^{-u}}{u^{\frac{9}{10}}}\mathrm du = e^t\int_0^tu^{-9/10}e^{-u}\mathrm du = e^t\int_0^tu^{1/10-1}e^{-u}\mathrm du$$ Using the definition of the lower incomplete gamma function $$\gamma(s,t):= \int_0^tu^{s - 1}e^{-u}\mathrm du$$ you have that your integral $I$ is $$I = e^t\gamma(1/10,t)$$ We can use the relation $$\gamma(s,x) = \Gamma(s) - \Gamma(s,x)$$ Then we will have that $$I = I(t) = e^{t}\left(\Gamma(1/10)-\Gamma(1/10,t)\right)$$ where we use the definition $$\Gamma(s,x) := \int_x^{+\infty}u^{s-1}e^{-u}\mathrm du$$ $$\Gamma(1/10) = 9.513507698668731836\dots $$ There is another approach. We say as before that $I(t) = e^t\gamma(1/10,t)$ and we can use that, for $a,x \in \mathbb{R}$ with $a>0,x>0$, we have $$\gamma(a,x) = x^a\sum_{n=0}^{+\infty}\frac{(-1)^nx^n}{n!(a+n)}$$ which gives you a series representation for your function of $t$. The proof that this series can be used is given here; what the link says is that this series can be created by termwise integration of $$t^{a-1}e^{-t} = \sum_{n=0}^{+\infty}\frac{(-1)^n}{n!}t^{a+n-1}$$ and termwise integration is justified by uniform convergence of the power series for $e^{-t}$ on bounded intervals.
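A numeric cross-check of the closed form against the series representation, using SciPy's regularized incomplete gamma (the value $t=2$ is an arbitrary test point):

```python
import math
from scipy.special import gamma, gammainc  # gammainc(a, x) is the regularized lower incomplete gamma

t, a = 2.0, 0.1
closed = math.exp(t) * gamma(a) * gammainc(a, t)  # e^t * lower_gamma(1/10, t)
series = t**a * sum((-1)**n * t**n / (math.factorial(n) * (a + n)) for n in range(60))
print(closed, math.exp(t) * series)               # the two values agree
```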
Calculating elements of the $\mathbb{Z}^2/\!\ker\varphi$ group
To apply the homomorphism theorem the map should be an epimorphism. On the other hand, to count how many elements the image has, you just have to find the orders of $\varphi(1,0)=(2,7,3,11,5)(12,13)$ and $\varphi(0,1)$, which happen to be ten and six respectively. Then $|{\rm im}\varphi|$ is $60$. Update: The next calculations might help you to understand \begin{eqnarray*} \varphi(2,0)&=&((2,7,3,11,5)(12,13))^2,\\ &=&(2,3,5,7,11).\\ &&\\ \varphi(3,0)&=&((2,7,3,11,5)(12,13))^3,\\ &=&(2,11,7,5,3)(12,13).\\ &&\\ \varphi(4,0)&=&(2,5,11,3,7).\\ &&\\ \varphi(5,0)&=&(12,13).\\ &&\\ &...&\\ &&\\ \varphi(10,0)&=&e. \end{eqnarray*}
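If you want to check these orders by machine, here is a small sympy sketch (sympy's `Permutation` takes cycles as lists):

```python
from sympy.combinatorics import Permutation

g = Permutation([[2, 7, 3, 11, 5], [12, 13]])  # the quoted image of (1,0)
print(g.order())                               # 10 = lcm(5, 2)
print((g**2).cyclic_form)                      # [[2, 3, 5, 7, 11]], matching phi(2,0)
```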
Topological spaces and metric spaces - notion of open sets
The definition you've highlighted in your box is the more general definition. You can show that the open sets in a metric space satisfy the criteria for the general definition of "open set" (e.g. it isn't too hard to see that the union of open sets in a metric space is again an open set since unions will preserve the "wiggle room" about every point). However, not every topology is metrizable. A topology on a set $X$ is called metrizable whenever there exists a metric $d:X\times X \rightarrow \mathbb{R}$ whose open sets according to the "metric space" definition correspond with the open sets of the topology. Easy examples of non-metrizable spaces are certainly to be had; for instance, consider the trivial topology on any set containing more than one element, whose open sets are simply $\emptyset$ and the whole space. More interesting examples of non-metrizable spaces can be found by recognizing that metrizable spaces must be at least Hausdorff, though again one can find examples of non-metrizable Hausdorff spaces, such as the lexicographic order topology on the unit square. Determining whether a space is metrizable is, historically, a major problem in point-set topology, and much work has been dedicated to finding relevant necessary and/or sufficient conditions with results such as Urysohn's metrization theorem, the Nagata–Smirnov metrization theorem, et al. In summary, your two definitions are equivalent when the space is metrizable. If it is not, then the second definition cannot even apply.
If the gcd(s, t)=1, prove $F_s F_t$ divides $F_{st}$ for all s,t > 1
Using that $\gcd(F_m,F_n) = F_{\gcd(m,n)}$ for all $m,n \in \mathbb{Z}_{\gt 2}$, as you've basically stated and as the link of GCD of Fibonacci Numbers which lab bhattacharjee's question comment indicates, you get for $s,t \gt 2$ that $$\gcd(F_{st},F_{s}) = F_{\gcd(st,s)} = F_{s} \tag{1}\label{eq1A}$$ $$\gcd(F_{st},F_{t}) = F_{\gcd(st,t)} = F_{t} \tag{2}\label{eq2A}$$ $$\gcd(F_{s},F_{t}) = F_{\gcd(s,t)} = F_{1} = 1 \tag{3}\label{eq3A}$$ Note \eqref{eq1A} gives that $F_{s} \mid F_{st}$ and \eqref{eq2A} gives that $F_{t} \mid F_{st}$. Since \eqref{eq3A} shows that $F_{s}$ and $F_{t}$ are relatively prime, this means that $F_{s}F_{t} \mid F_{st}$ (e.g., as shown in if $b$ divides $ck$ and $b$ and $c$ are relatively prime, then $b$ must divides $k$). This just leaves the case where one of $s$ or $t$ is $2$. WLOG, let $s = 2$ so $t$ is odd. Since $F_2 = 1$, the question is asking to prove that $F_{t} \mid F_{2t}$ for all odd $t$. For this, you can use Proving that $n|m\implies f_n|f_m$ that mathlove's question comment states, since $t \mid 2t$ so $F_t \mid F_{2t}$.
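A quick brute-force check of the conclusion $F_sF_t \mid F_{st}$ for small coprime $s,t$ (just a sanity check, not a proof):

```python
from math import gcd

fib = [0, 1]
for _ in range(200):
    fib.append(fib[-1] + fib[-2])

for s in range(2, 15):
    for t in range(2, 15):
        if gcd(s, t) == 1:
            assert fib[s * t] % (fib[s] * fib[t]) == 0
print("all small cases check out")
```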
$K$-invariant $F$-homomorphism over finite normal field extension $K/F$
I was thinking too hard. We should be able to do it like this. $\newcommand{\ph}{\varphi}$ Since $K/F$ is normal, it is the splitting field of some polynomial $f(x)\in F[x]$. Denote by $\alpha_1,\ldots,\alpha_n$ the roots of $f(x)$. Since $K/F$ is finite (do we need this here?) we may write $K=F(\alpha_1,\ldots,\alpha_n)$. Now let $$ v=\sum_{i_1,\ldots,i_n}\lambda_{i_1\ldots i_n}\alpha_1^{i_1}\cdots\alpha_n^{i_n}\in K $$ where each $\lambda_{i_1\ldots i_n}\in F$. We have $$ \begin{align*} \ph(v)&=\sum_{i_1,\ldots,i_n}\ph(\lambda_{i_1\ldots i_n})\ph(\alpha_1^{i_1}\cdots\alpha_n^{i_n})\\ &=\sum_{i_1,\ldots,i_n}\lambda_{i_1\ldots i_n}\ph(\alpha_1)^{i_1}\cdots\ph(\alpha_n)^{i_n}\in K. \end{align*} $$ This is because $\ph$ is an $F$-homomorphism, so it fixes $F$ pointwise and permutes the roots of $f(x)$. Hence $\ph(K)\subseteq K$.
Using $\displaystyle \mathbb{C}[G]\cong \bigoplus_{irreducible \ \rho}\rho^{\dim \rho}$ for $S_3$
$\rho^{\dim\rho}$ just means $\bigoplus_{i=1}^{\dim\rho}\rho$, so the character of the regular representation is the sum of the two linear characters plus twice the character of the 2-dimensional representation.
I read that ordinal numbers relate to length, while cardinal numbers relate to size. How do 'length' and 'size' differ?
If the line to the bathroom is finite, then the length and the number of people in that line is the same number. However suppose that the line is infinite, but every person has to wait a finite time before entering the bathroom. Now comes another guy, and he has to wait until all those before him get in before he can use the facilities. In terms of cardinality we didn't increase the line, but its length is longer by one person now. Formally, the infinite line where every person has to wait a finite amount of time is $\Bbb N$ with the order $\leq$. Indeed every natural number has only finitely many predecessors. The next line is $\Bbb N\cup\{\bullet\}$, where $\bullet$ is some new element which is not a natural number and we decree that it is larger than all the natural numbers themselves. So the order type of the new set is longer because it has an initial segment of order type $\Bbb N$, but another point above that. However the cardinality is the same. To see that note that the following map is a bijection between the two sets: $$f(n)=\begin{cases}\bullet & n=0\\ n-1 & n\neq 0\end{cases}$$ So we have a strictly longer line (and this poor guy who has to wait an infinite time before he can pee), but that line has the same number of people.
$\Psi_g(A)=\Phi(g)A^t\Phi(g)$; express $\chi_\Psi$ through $\chi_\Phi$
Edit: incidentally, the way you seem to have interpreted the definition, it doesn't even define a left action, since you don't have $\Psi_{gh}=\Psi_g\Psi_h$. I am sure the intention was for the transpose sign to belong to the second $\Phi(g)$, rather than to $A$. The character of a representation is the trace of the matrix with respect to any basis on your vector space. The vector space attached to $\Psi$ is $n^2$-dimensional. So any group element is represented by an $n^2$-by-$n^2$ matrix, and the character is a sum of $n^2$ values. I don't understand your calculation, but I am pretty sure that that's not what you were doing. So question number 1 is: can you give yourself a nice basis on the space of $n\times n$ matrices? After that you have to act by $g$ on each such basis element, and then express the result again in terms of your basis and take the coefficient of the basis element that you have acted on. That's what it means to read off a diagonal value of $\Psi_g$. Your calculation will be simplified if you assume that $\Phi(g)$ is diagonal. You can always do that for any given $g$ by choosing a nice basis on your original vector space (of course you may not be able to diagonalise all $g\in G$ simultaneously, but that's ok).
Fourier transform of indicator function
The basic case is that the Fourier transform of the indicator function of an interval centered at zero is a sinc function. So by this calculation and linearity, your question essentially reduces to determining how the Fourier transform is affected by translation. It is easy to calculate this by a change of variable: you find that $\widehat{f(x-x_0)}(\xi) = e^{-i x_0 \xi} \hat{f}(\xi)$. (As usual, everything is slightly different if you use a different normalization.) So the Fourier transform of a sum of indicator functions of intervals is a sum of sincs, each multiplied by $e^{-i x_k \xi}$ where $x_k$ is the center of the $k$th interval. On characteristic functions of random variables: the characteristic function of a random variable is the Fourier transform of its density (in the sense of distribution theory, if the variable isn't continuous).
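A tiny numeric illustration of the base case (the convention $\hat f(\xi)=\int f(x)e^{-ix\xi}\,dx$ and the values $a=1$, $\xi=2.3$ are assumptions for the example):

```python
import numpy as np
from scipy.integrate import quad

a, xi = 1.0, 2.3
# Fourier transform of the indicator of [-a, a]; the imaginary part vanishes by symmetry
re, _ = quad(lambda x: np.cos(xi * x), -a, a)
print(re, 2 * np.sin(a * xi) / xi)   # both equal the sinc-type closed form
```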
Solve 2nd order ordinary differential equation by Laplace transforms and convolution of their inverse functions. (5.6-39)
You transformed $\sin(2t)$ incorrectly. The rule says $y(t) = \sin(at) \implies Y(s) = \frac{a}{s^{2} + a^{2}}$, but you did $y(t) = \sin(at) \implies Y(s) = \frac{1}{a}\frac{a}{s^{2} + a^{2}}$. The rest of your working is fine though.
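A one-line sympy check of the correct transform (a sketch; sympy's `laplace_transform` is used only to confirm the table entry):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
print(sp.laplace_transform(sp.sin(2*t), t, s, noconds=True))  # 2/(s**2 + 4)
```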
Separable extensions of complete valued fields are automatically totally ramified?
$\nu(\pi) = \nu'(N_{L|K}(\pi))$ ?
Definition of fixed points?
They are related as follows. Let $x(n)=f^{(n)}(a)$. Then if $f(a)=a$, $x(n)$ is constant, which is like saying $x'(t)=0$.
The set $\{f\in X:\int_{0}^{1}fdx=0\}$
A. $S$ is a non-zero linear subspace of $C[0,1]$, hence it's unbounded and not compact.
D. $S$ is connected and closed, non-empty and not equal to $C[0,1]$, hence it's not open.
E. $S$ is closed and not equal to $C[0,1]$, hence it's not dense.
"Inverting" a Markov chain transition matrix into a weighted, undirected graph
You can find an explanation of what's going on here in Chapter 9 of Markov Chains and Mixing Times by Levin, Wilmer, Peres, which is available online. Briefly, we can say the following: such a weighted graph exists if and only if the Markov chain is reversible. If the Markov chain has a unique stationary distribution $\pi$, then reversibility is equivalent to the detailed balance equation $$ \pi(x)P(x,y) = \pi(y) P(y,x) $$ holding for all states $x,y$. With that said, one valid weighting is to take $\pi(x) \cdot P(x,y)$ to be the weight for the edge $xy$.
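A minimal numpy sketch of the correspondence; the triangle weights below are an arbitrary illustration, not data from the question:

```python
import numpy as np

w = np.array([[0, 2, 1],
              [2, 0, 3],
              [1, 3, 0]], dtype=float)                    # symmetric edge weights
P = w / w.sum(axis=1, keepdims=True)                      # transition matrix of the walk on the weighted graph
pi = w.sum(axis=1) / w.sum()                              # stationary distribution ~ vertex weight
print(np.allclose(pi @ P, pi))                            # stationarity
print(np.allclose(pi[:, None] * P, (pi[:, None] * P).T))  # detailed balance pi(x)P(x,y) = pi(y)P(y,x)
print(pi[:, None] * P * w.sum())                          # recovers the weights (up to scale)
```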
Lifting representation Heisenberg algebra
You are representing the three algebra elements $q, p, 1\!\!1$ by hermitian operators acting on functions of q. For simplicity, non-dimensionalize $\hbar=1$ (only the clueless keep it unscaled). Multiplying linear combinations of such operators by i and exponentiating yields the generic unitary group element representation for you, $$ e^{ic + ibq + a\partial_q} ~ f(q) = e^{ic + iab/2} e^{ibq} e^{a\partial_q} ~f(q)= e^{ic + iab/2 +ibq}~ f(q+a), $$ for real coefficients a, b, c, the standard form used routinely in applications. The first equality is a bland application of the Zassenhaus formula ("inverse Campbell-Baker-Hausdorff"), and the second equality is using up the Lagrange shift operator involved.
open covering and open intervals covering
An open interval cover is just a special case of an open cover where the open sets are intervals. There is no direct connection between this theorem and the definition of a compact set.
Quasidiagonality and homotopy invariance: norm inequality occurring in the proof
The observation that is not mentioned in the book is this: since $q\leq F_j\leq 1$, $$ \langle q\xi,q\xi\rangle=\langle q\,q\xi,q\xi\rangle\leq\langle F_j\,q\xi,q\xi\rangle\leq\langle q\xi,q\xi\rangle. $$ So $q(F_j-q)q=0$. Then $q(1-F_j)q=0$. As $1-F_j\geq0$, it follows that $(1-F_j)q=0$, and so $qF_j=q$. This implies that $qG_0=q$, $qG_j=0$ for all $j\geq1$. Then \begin{align} \|P\pi(b)P\| &=\|V^*\pi(b)V\|\\[0.3cm]&\geq\|qV^*\pi(b)Vq\| =\Big\|\sum_jqG_j^{1/2}\rho_j(b)G_j^{1/2}q\Big\|\\[0.3cm] &=\|qG_0^{1/2}bG_0^{1/2}q\|=\|G_0^{1/2}qbqG_0^{1/2}\| =\|G_0^{1/2}qb^*qG_0qbqG_0^{1/2}\|^{1/2}\\[0.3cm] &=\|G_0^{1/2}qb^*qbqG_0^{1/2}\|^{1/2} =\|qbqG_0qb^*q\|^{1/2}\\[0.3cm] &=\|qbqb^*q\|^{1/2}=\|qbq\|. \end{align}
How to check the following two connected open sets coincide?
Yes, this applies to any topological space, not only Hilbert spaces (of course $0$ is then replaced with an arbitrary point). For $i=1,2$ we have that $A_i,B_i$ are disjoint and open and $A_i$ is connected. It follows that $A_i$ is a connected component of $A_i\cup B_i$. Since $0\in A_i$ then $A_i$ is the connected component of $0$. But connected components of a given point are unique, meaning $A_1=A_2$.
Can someone suggest me a suitable curve fit?
Assuming that the model is $$\sigma=A\,\epsilon\,e^{B\epsilon}$$ the given conditions give as equations $$f_c=A\,\epsilon_p\,e^{B\,\epsilon_p}$$ $$\frac{f_c}3=A\,\epsilon_f\,e^{B\,\epsilon_f}$$ $$G=A\frac{ e^{B \,\epsilon_f} (B\, \epsilon_f-1)-e^{B\, \epsilon_p} (B \,\epsilon_p-1)}{B^2}$$ I suppose that $f_c,\epsilon_p,G$ are given and that you search for the unknowns $A,B,\epsilon_f$. This is not the most pleasant system to solve (I suppose that otherwise you would not have posted the problem) but a few things can be done. Using the first and second equations, you can express $A$ and $\epsilon_f$ as functions of $B$. This gives $$A=\frac{f_c }{\epsilon_p }e^{-B\, \epsilon_p }$$ $$\epsilon_f=\frac{1}{B}\,W\left(\frac{B \,\epsilon_p}{3} e^{B\, \epsilon_p }\right)$$ where $W(z)$ is the Lambert $W$ function. So, you are left with one ugly equation in $B$ (the third equation, after replacing $A$ and $\epsilon_f$ by their expressions), but numerical methods (such as Newton's) can do the job. If you use methods requiring derivatives, I suggest you use numerical derivatives instead of the analytical ones.
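A quick check of the Lambert-$W$ elimination with made-up numbers (the values of $B$ and $\epsilon_p$ are arbitrary, purely to test the algebra):

```python
import numpy as np
from scipy.special import lambertw

B, eps_p = 4.0, 0.3
eps_f = np.real(lambertw(B * eps_p / 3 * np.exp(B * eps_p))) / B
# the elimination came from  eps_f * exp(B*eps_f) = (eps_p / 3) * exp(B*eps_p)
print(eps_f * np.exp(B * eps_f), eps_p / 3 * np.exp(B * eps_p))  # equal
```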
Subrings of fraction fields
There are lots of examples arising from integral closures. For example, $k[x^2,x^3] \subseteq k[x]$ with field of fractions $k(x)$ is no localization since $k[x^2,x^3]^* = k^* = k[x]^*$.
Solve for integer $m,n$: $2^m = 3^n + 5$
Rename $m\to x$ and $n\to y$. We see $x\geq 3$, $y\geq 1$. Modulo 3 implies $x$ is odd. For $x\leq 5$ we get only $(3,1)$, $(5,3)$. Say $x\geq 6$, then $$3^y\equiv -5\;({\rm mod}\; 64)$$ It is not difficult to see $$3^{11}\equiv -5\;({\rm mod}\; 64)$$ so $3^{y-11}\equiv 1\;({\rm mod}\; 64)$. Let $r=ord_{64}(3)$, then since $\phi(64)=32$, we have (Euler) $$3^{32}\equiv 1\;({\rm mod}\; 64)$$ We know $r\;|\;32$. Since $$3^{32} -1 = (3^{16}+1)\underbrace{(3^8+1)(3^4+1)(3^2+1)(3+1)(3-1)}_{(3^{16}-1)}$$ we get $r=16$ so $16\;|\;y-11$ and thus $y=16k+11$ for some integer $k$. Look at modulo 17 now. By Fermat's little theorem: $$2^x\equiv 3^{16k+11}+5\equiv (3^{16})^k \cdot 3^{11}+5\equiv 1^k\cdot 7+5\equiv 12\;({\rm mod}\; 17)$$ Since $x$ is odd we have \begin{eqnarray*} 2^1&\equiv & 2\;({\rm mod}\; 17)\\ 2^3&\equiv & 8\;({\rm mod}\; 17)\\ 2^5&\equiv & -2\;({\rm mod}\; 17)\\ 2^7&\equiv & -8\;({\rm mod}\; 17)\\ 2^9&\equiv & 2\;({\rm mod}\; 17) \end{eqnarray*} so the above congruence is never fulfilled, so there is no solution for $x\geq 6$.
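A brute-force confirmation that no further small solutions exist (of course this does not replace the argument above, which rules out all $x\geq6$):

```python
sols = [(x, y) for x in range(1, 200) for y in range(1, 200) if 2**x == 3**y + 5]
print(sols)   # [(3, 1), (5, 3)]
```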
$k_w$-space is normal
The answer is yes as I found in Franklin, S.P. and B.V. Smith Thomas, A survey of k$_\omega$-spaces, Topology Proceedings 2 (1978), 111–124. See https://pdfs.semanticscholar.org/8b2a/1b2db52abdf330df52fc019468002ffe093e.pdf p.113.
General Distributive Law and Axiom of Choice
I don’t know a reference, but I can give you a fairly short proof. (I want it available as a reference for myself, even if you really want only a reference.) The harder direction is that the general distributive law in question implies $\mathsf{AC}$. I’ll prove $\mathsf{AC}$ in the following form: for any sets $X$ and $Y$, each surjection $f:X\to Y$ has a right inverse. Proof: Let $X$ and $Y$ be sets, and let $f:X\to Y$ be surjective. For $\langle y,x\rangle\in Y\times X$ let $$A(y,x)=\begin{cases}\{0\},&\text{if } y=f(x)\\0,&\text{otherwise}\;.\end{cases}$$ For each $y\in Y$ there is an $x\in X$ such that $y=f(x)$, so $\bigcup_{x\in X}A(y,x)=\{0\}$, and therefore $$\bigcup_{g\in{^YX}}\bigcap_{y\in Y}A\big(y,g(y)\big)=\bigcap_{y\in Y}\bigcup_{x\in X}A(y,x)=\{0\}\;.$$ Thus, there is a $g\in{^YX}$ such that $$\bigcap_{y\in Y}A\big(y,g(y)\big)=\{0\}\;,$$ i.e., such that $y=f\big(g(y)\big)$ for each $y\in Y$. Clearly $f\circ g=\operatorname{id}_Y$, as desired. $\dashv$. The other direction is routine. Assume $\mathsf{AC}$. Given sets $A(i,j)$ for $\langle i,j\rangle\in I\times J$, suppose that $x\in\bigcap_{i\in I}\bigcup_{j\in J}A(i,j)$; then $J_x(i)=\{j\in J:x\in A(i,j)\}\ne\varnothing$ for each $i\in I$. Let $\xi:I\to J$ be a choice function for $\{J_x(i):i\in I\}$; then $$x\in\bigcap_{i\in I}A\big(i,\xi(i)\big)\subseteq\bigcup_{\varphi\in{^IJ}}\bigcap_{i\in I}A\big(i,\varphi(i)\big)\;.$$ On the other hand, if $x\in\bigcup_{\varphi\in{^IJ}}\bigcap_{i\in I}A\big(i,\varphi(i)\big)$, then there is a $\varphi\in{^IJ}$ such that $$x\in\bigcap_{i\in I}A\big(i,\varphi(i)\big)\subseteq\bigcap_{i\in I}\bigcup_{j\in J}A(i,j)\;.$$
the length of the circumference of a circle always bears a constant ratio to its diameter
It would suffice to say that all circles are "similar" to each other. That's why the circumference of a circle always bears a constant ratio to its diameter. In my opinion: even if you don't understand it, it doesn't make much of a difference.
$(0,2)$ is a locally compact Hausdorff space. Then what will be that one point which will make $(0,2)$ a compact Hausdorff space?
Let $T_X$ be a locally compact, non-compact Hausdorff topology on $X.$ Let $p$ be some (any) point not in $X$ and let $Y=\{p\}\cup X.$ Let $\mathcal C$ be the set of all compact subsets of $X,$ with respect to the topology $T_X.$ Define a topology on $Y$ as $T_Y=T_X\cup \{Y\setminus C:C\in \mathcal C\}.$ It is straightforward to show that $T_Y$ is a compact Hausdorff topology on $Y,$ and that the topology on $X,$ as a subspace of $Y,$ is $T_X.$ (And also that $X$ is dense in $Y$ and open in $Y.$) It may be convenient to take a space $(X', T_{X'})$ that is homeomorphic to $(X,T_X),$ where $(X', T_{X'})$ is a dense open subspace of a compact Hausdorff space $(Y', T_{Y'}),$ such that $Y'\setminus X'$ contains a single point $p$. For example with $X=(0,2),$ the function $f(x)=e^{\pi i (x-1)}$ maps $X$ homeomorphically to $X'=Y'\setminus \{-1\},$ where $Y'= \{z\in \Bbb C:|z|=1\},$ so let $p=-1$ and let $Y=X\cup \{-1\}.$ And extend the function $f:X\to X'$ to $f:Y\to Y'$ by letting $f(p)=p.$ Now let $U\subset Y$ be open in $Y$ iff $\{f(u):u\in U\}$ is open in $Y'.$ As to what this topology on $Y=(0,2)\cup \{-1\}$ "looks like": If $U \subset Y$ then $U$ is open in $Y$ iff (i). $U\cap (0,2)$ is open in $X=(0,2),$ and (ii). if $(-1)\in U$ then $U\supset (0,r_0)$ and $U\supset (r_2,2)$ for some $r_0,r_2\in (0,2).$
Find all of the elements of a subgroup generated by a $3$-cycle notation
Yes, your answer is correct. $H = \langle (123) \rangle = \{e, (123), (132) \}$.
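If you want to double-check with software, a tiny sympy sketch (sympy permutations act on $\{0,1,2\}$, so the 3-cycle is written with 0-based labels):

```python
from sympy.combinatorics import Permutation, PermutationGroup

g = Permutation([[0, 1, 2]])       # the 3-cycle (123) in 0-based labels
H = PermutationGroup([g])
print(H.order())                   # 3
print(list(H.elements))            # the three elements of H
```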
Find the surface area of part of cone $x^{2} = y^{2} + z^{2}$ which lies inside the sphere $x^{2} + y^{2} + z^{2} = 2z$.
The condition $ x^2 = y^2 + z^2 $ implies the existence of two cones $x = \sqrt {y^2 + z^2}$ and $x = -\sqrt {y^2 + z^2}$ so the result should be doubled, as you only used the first cone. However, the expected answer still remains different.
Write down explicitly the expression for the $ n$-th derivative of the function $f(x)=x^2e^{3x}$.
\begin{align}(x^2e^{3x})^{(n)}&=\sum_{k=0}^n\binom nk(x^2)^{(k)}(e^{3x})^{(n-k)} \\&=\binom n0x^2\,3^ne^{3x}+\binom n12x\,3^{n-1}e^{3x}+\binom n22\,3^{n-2}e^{3x} \\&=(9x^2+6nx+n(n-1))3^{n-2}e^{3x}.\end{align}
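A sympy spot-check of the closed form for one particular $n$ (here $n=5$, an arbitrary choice):

```python
import sympy as sp

x = sp.symbols('x')
n = 5
lhs = sp.diff(x**2 * sp.exp(3*x), x, n)
rhs = (9*x**2 + 6*n*x + n*(n - 1)) * 3**(n - 2) * sp.exp(3*x)
print(sp.simplify(lhs - rhs))   # 0
```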
Recommend papers for an undergraduate.
Well, it basically depends on your field of interest. For different fields there are different journals in which papers concerning that topic are published. Then again, many different journals are published by the same publisher, see for example Elsevier. Wikipedia has a list of journals. Usually, unless they are open access or your university has a contract with the publisher, you need to pay money to read a paper or journal. But your local math library probably has a lot of old (and new) journal articles to read from. arxiv.org should also be mentioned, as perhaps the most well known source for open access preprints, as well as Google Scholar, which can help you to find a specific paper given the title and/or author. But also note that the quality of journals differs, going down to journals which will publish anything for money (see also). Reading new papers nowadays is challenging anyway, because they are basically the frontier of science and therefore often really specialized. For example, I can hardly make sense of papers about vector bundles because I never really had anything to do with them and so hardly know what they are anyway. If you want to start with a topic, it is usually more advisable to read a structured book about it, which usually contains a lot of references to papers, if you want to delve into it. On another note, it can be very educational to read the original papers of great mathematicians. For example, there is the Euler Archive, but there exist paper collections of almost all great mathematicians.
Positive definiteness of "sub-matrices"
The matrix which you have constructed is a $2 \times 2$ principal submatrix. "Submatrix" is indeed the correct term, and this term is in common use. "Principal" here refers to the fact that the diagonals of your submatrix were taken from the diagonal of the original matrix (equivalently, the set of columns extracted and set of rows extracted are the same). To your first question: yes. If $M$ is symmetric positive definite, then all of its $2 \times 2$ principal submatrices will be symmetric positive definite (in fact, the same can be said for all principal submatrices). To your second question: no. One example of a matrix with positive definite $2 \times 2$ principal minors that fails to be positive definite is $$ M = \pmatrix{2&-1&3\\-1&5&0\\3&0&5} $$ In particular, we see that this $M$ fails to be invertible, but all of its $2 \times 2$ principal submatrices are invertible. For a more dramatic example, the matrix $$ M = \pmatrix{10 & 3 & 2 & 1\\ 3 & 10 & 0 & 9\\ 2 & 0 & 10 & 4\\ 1 & 9 & 4 & 10} $$ is invertible, but has a negative determinant.
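A numpy check of the last example (a sketch; it only confirms the stated properties of this particular matrix):

```python
import numpy as np

M = np.array([[10, 3, 2, 1],
              [3, 10, 0, 9],
              [2, 0, 10, 4],
              [1, 9, 4, 10]], dtype=float)
print(np.linalg.det(M))     # -364 (up to rounding): invertible, negative determinant, so not positive definite
for i in range(4):
    for j in range(i + 1, 4):
        sub = M[np.ix_([i, j], [i, j])]
        print((i, j), np.all(np.linalg.eigvalsh(sub) > 0))   # every 2x2 principal submatrix is PD
```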
Does localisation commute with Hom for finitely-generated modules?
Let me try a counterexample. I will take $N=R$. Consider $\phi : R^2\to R$ the linear map $(x,y)\mapsto x+y$, $\rho : R\to R_P$ the localization map at some prime ideal $P$, $K=\phi^{-1}(\ker(\rho))$ and $M=R^2/K$. Then, as in the first part of your question, one can see that $$\mathrm{Hom}(M, R)_P\to \mathrm{Hom}(M_P, R_P)$$ is surjective if and only if $$ \mathrm{Hom}(K, R)_P\to \mathrm{Hom}(K_P, R_P)$$ is injective when restricted to the image of $\mathrm{Hom}(R^2, R)_P\to \mathrm{Hom}(K, R)_P$. Let $\psi=\phi|_K$. Then the image of $\psi_P$ in $\mathrm{Hom}(K_P, R_P)$ is zero by the construction of $K$. Now let us see under which condition $\psi_P=0$. This is equivalent to $\exists s\in R\setminus P$ such that $s\psi=0$. Or $s\mathrm{Im}(\psi)=0$. As $\phi$ is surjective, $\mathrm{Im}(\psi)=\ker\rho$. So $$ \psi_P=0 \Longleftrightarrow \exists s\in R\setminus P, \ s\ker\rho=0.$$ So to have a counterexample, it is enough to find $R, P$ such that the kernel of $R\to R_P$ is not killed by any $s\notin P$. Consider $R$ the product of infinitely many copies of $\mathbb F_2$ (say indexed by $\mathbb N$). All prime ideals are maximal and correspond to ultrafilters (see http://en.wikipedia.org/wiki/Ultrafilter) of $\mathbb N$. Some of them are obvious: those with $0$ in a fixed component. They correspond to principal ultrafilters. Let $P$ be a maximal ideal corresponding to a non-principal ultrafilter (it is known that such ultrafilters exist). The kernel of $R\to R_P=\mathbb F_2$ is $P$. If $P$ were killed by a single $s\notin P$, $s=(a_0, a_1,....)$, then $P$ would be killed by some $t=(0,...,a_r,0,..)$ with $a_r\ne 0$. So $P\subseteq \mathrm{Ann}(t)$. But the latter is a maximal ideal corresponding to a principal ultrafilter. Contradiction.
Essential range of a continous function.
If $f:[0,1]\to \mathbb C$ is continuous, then the essential range of $f$ is equal to the range of $f.$ Proof: Let $z\in f([0,1]).$ Let $\epsilon>0.$ Then $f^{-1}(D(z,\epsilon))$ is open in $[0,1]$ by continuity, and it is nonempty because $z\in f([0,1]).$ Every nonempty open subset of $[0,1]$ has positive Lebesgue measure. This implies $z$ is in the essential range. (This is essentially what Rob Arthan was doing in a comment.) Now suppose $z\notin f([0,1]).$ Note that $f([0,1])$ is compact by the continuity of $f.$ Thus the distance from $z$ to $f([0,1])$ is some positive number $r.$ It follows that $D(z,r/2)\cap f([0,1])=\emptyset.$ Therefore $z$ is not in the essential range. Thus the essential range is precisely the range, and we're done.
Prove that $A \cup C \sim A$ when $C$ is countable and $A$ is infinite.
HINT: Write $A$ as a disjoint union of $A'$ and $ A\setminus A'$, where $A'$ is countable.
Solving the system $\sum \sin = \sum \cos = 0$.
Try something similar to what has already been posted. Take one variable to the other side, then square and add the equations. What you get is $\alpha-\beta=120°$ and the same for cyclic permutations (or negating all angles). The solution is the three angles $\delta$, $\delta+120°$, $\delta-120°$ (arbitrary $\delta$) in any order. EDIT: Or simply realize that the equations are equivalent to $\exp(i\alpha)+\exp(i\beta)+\exp(i\gamma)=0$, which makes the answer immediately obvious.
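A numeric illustration of the solution family (the offset $\delta$ is arbitrary):

```python
import numpy as np

delta = 0.7
angles = delta + np.array([0, 2*np.pi/3, -2*np.pi/3])   # delta, delta+120deg, delta-120deg
print(np.sin(angles).sum(), np.cos(angles).sum())       # both ~0
```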
Area of Square - Comparing squares
The area of a parallelogram (or see on Wikipedia) is the base times the height. The base here is $JM$ and the height is $KN$, so the area is $$KN \cdot JM = n$$ So you have $$ \left(n + \frac{1}{n}\right)\cdot JM = n $$ Then you solve for $JM$
How many ways can 15 people be divided into 3 classes of 5, if there are 3 blond people....
Yes. The logic is correct. For the first, each blonde person identifies their group. For the second, only the group with them all is uniquely identified.
how do I find the inverse of this matrix?
Hint: You can use the inverse matrix rule for a $2\times 2$ matrix $${\begin{pmatrix}a& b \\ c&d \end{pmatrix}}^{-1}=\frac{1}{ad-bc}\begin{pmatrix}d& -b \\ -c&a \end{pmatrix}$$ Can you take it from there? The $ad-bc$ term should simplify rather nicely.
Injectivity, surjectivity, and cardinality
From the definition of cardinality, we can easily prove that if $A\subseteq B$ then $\mathrm{Card}(A)\le \mathrm{Card}(B)$. (It follows from the fact that the inclusion function of $A$ is one-one.) Your first question can be answered as follows: Since $f$ is onto, for each $x\in A$ the set $f^{-1}(x)$ is a non-empty subset of $\mathbb{N}$. Since $\mathbb{N}$ is well-ordered, $f^{-1}(x)$ has a unique least element, say $n_{x}$. Thus the mapping $\theta : x \mapsto n_{x}$ is a bijection from $A$ to $\theta(A) \subseteq \mathbb{N}$. Hence $\mathrm{Card}(A)= \mathrm{Card}(\theta(A)) \le \mathrm{Card}(\mathbb{N})$, and therefore $\mathrm{Card}(A)\le \mathrm{Card}(\mathbb{N})$. To answer the second question, note that since $f$ is one-one, the mapping $\eta : a \mapsto f(a)$ is a bijection from $A$ to $\eta(A) \subseteq \mathbb{N}$. Therefore $\mathrm{Card}(A)= \mathrm{Card}(\eta(A)) \le \mathrm{Card}(\mathbb{N})$, and hence $\mathrm{Card}(A)\le \mathrm{Card}(\mathbb{N})$.
By using the Big Picard theorem show that $a$ is unique
I will flesh out the hint given by Jonas Meyer in the comments. Note that the assumption $a\neq 0$ is not needed. We first recall the Big Picard theorem. Big Picard Theorem: If an analytic function $f$ has an essential singularity at a point $w$, then on any punctured neighborhood of $w$, $f(z)$ takes on all possible complex values, with at most a single exception, infinitely often. By assumption, our given $f(z)$ has an infinite power series when expanded at $0$. This means that $f(1/z)$ has an essential singularity at $0$. By the theorem (taking the neighborhood to be $\mathbb C\backslash \{0\}$), there is at most one value $w$ such that $f(1/z)$ takes on $w$ only a finite number of times. Hence there is at most a single $a$ such that $f(z)$ takes on $a$ a finite number of times. (The value at $0$ doesn't matter, because it won't make the difference between some value being taken a finite or an infinite number of times.)
Bessel Function of the first kind
The Bessel Function of the first kind and order $p$ has series representation given by $$J_p (x) = \sum_{n=0}^\infty \frac{(-1)^n}{n! \,\Gamma(n+p+1) } \left( \frac x 2 \right)^{2n+p} $$ For $p=1/2$, we find $$\begin{align} J_{1/2} (x) &= \sum_{n=0}^\infty \frac{(-1)^n}{n! \,\Gamma(n+3/2) } \left( \frac x 2 \right)^{2n+1/2} \\\\ &=\sqrt{\frac {1}{2x}}\,\sum_{n=0}^\infty \frac{(-1)^n\,x^{2n+1}}{n! \,4^n\,\Gamma(n+3/2) } \tag 1 \end{align}$$ Applying recursively the functional relationship, $\Gamma(1+x)=x\Gamma(x)$, for the Gamma Function $\Gamma(n+3/2)$ reveals $$\begin{align} \Gamma(n+3/2)&=(n+1/2)(n-1/2)(n-3/2)\cdots (3/2)(1/2)\Gamma(1/2)\\\\ &=\frac{(2n+1)!!}{2^{n+1}}\sqrt \pi\\\\ &=\frac{(2n+1)!}{2^{2n+1}\,n!}\sqrt \pi \tag2 \end{align}$$ Substituting $(2)$ into $(1)$ yields $$\begin{align} J_{1/2}(x)&=\sqrt{\frac{1}{2x}}\sum_{n=0}^\infty \frac{(-1)^n\,x^{2n+1}}{n!\,4^n\,\frac{(2n+1)!}{2^{2n+1}\,n!}\sqrt \pi }\\\\ &=\sqrt{\frac{2}{\pi x}}\sum_{n=0}^\infty \frac{(-1)^n\,x^{2n+1}}{(2n+1)!}\\\\ &=\sqrt{\frac{2}{\pi x}}\,\sin(x) \end{align}$$ as was to be shown!
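A numerical spot-check of the identity with SciPy's Bessel function:

```python
import numpy as np
from scipy.special import jv   # Bessel function of the first kind J_v(x)

x = np.linspace(0.1, 10, 5)
print(jv(0.5, x))
print(np.sqrt(2 / (np.pi * x)) * np.sin(x))   # identical values
```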
Proof of Taylor expansion in multiple variables
The result you wrote is true: it is called Hadamard's Lemma, but it is not truly a Taylor expansion of $f$ in a neighbourhood of $0$. Actually, the proof of your result relies heavily on the Taylor formula with integral remainder. See the Wikipedia page on Hadamard's lemma for more details on the proof. If you are looking for "true" Taylor formulas in the case of functions of multiple variables, you should check this Wikipedia page about them, which contains the formulas. As for how you can prove them, I can't think of an English reference as of now, but the usual way of proving them relies on the following principle: Take a function $f : \mathbb{R}^n \to \mathbb{R}$, and also $a,x$ $\in \mathbb{R}^n$ (with $a$ the point where you're looking for a Taylor expansion). Then define $g: \mathbb{R} \to \mathbb{R}$ with $g(t) = f(a + t(x-a))$, and write down Taylor's formula for $g$ (where any version would work; just pay attention to the Taylor-Lagrange one, you can find more information about why by searching on Google about the mean value theorem in multiple variables). I guess I went a bit overboard with this answer, and it's not that precise. I hope it helped though, and feel free to ask any question.
Solving this equation to be 3 literals
It is often quite useful to consider the negation of the given expression and then use $aa = a$, $aa'=0$, and $a+ab = a(1+b) = a$. So, consider $E' = \left( (x'y' + z)' + z + xy + wz \right)'$: $$\begin{eqnarray*} E' & = & \left( (x'y' + z)' + z + xy + wz \right)' \\ & = & (x'y' + z)z'(xy)'(wz)' \\ & \stackrel{zz' = 0}{=}& x'y'z'(x'+y')(w'+z') \\ & \stackrel{x'x' = x' etc.}{=}& x'y'z' + x'y'z'w' \\ & = & x'y'z'(1+ w' ) \\ & = & x'y'z' \\ \end{eqnarray*} $$ $$\Rightarrow E = (x'y'z')' = x+y+z$$
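The simplification is easy to confirm by brute force over all truth assignments (a small sketch, writing the Boolean expression with Python operators):

```python
from itertools import product

def E(w, x, y, z):
    # E = (x'y' + z)' + z + xy + wz
    return (not (((not x) and (not y)) or z)) or z or (x and y) or (w and z)

print(all(E(w, x, y, z) == (x or y or z)
          for w, x, y, z in product([False, True], repeat=4)))   # True
```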
A mathematically mature introduction to Turing Machines and Computability [reference-request]
How about Introduction to the Theory of Computation by Michael Sipser?
Find $\sum_{n=1}^{\infty} \frac{n^{\sigma -1} (n+\sigma )-(n+1)^{\sigma }}{\sigma(1-\sigma)}$ for $ 0<\sigma<1$
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ With $\ds{N \in \mathbb{N}_{>\ 1}}$: \begin{align} &\bbox[5px,#ffd]{\left.\sum_{n = 1}^{N} {n^{\sigma -1}\pars{n + \sigma} - \pars{n + 1}^{\sigma} \over \sigma\pars{1 - \sigma}} \,\right\vert_{\ 0\ <\ \sigma\ <\ 1}} \\[5mm] = &\ {1 \over \sigma\pars{1 - \sigma}}\bracks{% \sum_{n = 1}^{N}n^{\sigma} + \sigma\sum_{n = 1}^{N}n^{\sigma - 1} - \sum_{n = 1}^{N}\pars{n + 1}^{\sigma}} \\[5mm] = &\ {1 \over \sigma\pars{1 - \sigma}}\ \times \\[2mm] &\ \braces{% \pars{1 + \sum_{n = 2}^{N}n^{\sigma}} + \sigma\sum_{n = 1}^{N}n^{\sigma - 1} - \bracks{\sum_{n = 2}^{N}n^{\sigma} + \pars{N + 1}^{\sigma}}} \\[5mm] = & {1 \over \sigma\pars{1 - \sigma}} + {1 \over 1 - \sigma}\sum_{n = 1}^{N}{1 \over n^{1 - \sigma}} - {\pars{N + 1}^{\sigma} \over \sigma\pars{1 - \sigma}} \\[5mm] = &\ {1 \over \sigma\pars{1 - \sigma}} \\[2mm] &\ + {1 \over 1 - \sigma}\ \bracks{\zeta\pars{1 - \sigma} + {N^{\sigma} \over \sigma} + \pars{1 - \sigma}\int_{N}^{\infty}{\braces{x} \over x^{2 - \sigma}}\,\dd x} \\[2mm] &\ - {\pars{N + 1}^{\sigma} \over \sigma\pars{1 - \sigma}} \\[5mm] = & {1 + \sigma\,\zeta\pars{1 - \sigma} \over \sigma\pars{1 - \sigma}} + \int_{N}^{\infty}{\braces{x} \over x^{2 - \sigma}}\,\dd x - {\pars{N + 1}^{\sigma} - N^{\sigma} \over \sigma\pars{1 - \sigma}} \end{align} See this identity. Note that \begin{align} 0 & < \verts{\pars{1 - \sigma}\int_{N}^{\infty}{\braces{x} \over x^{2 - \sigma}}\,\dd x} < \pars{1 - \sigma}\int_{N}^{\infty}{\dd x \over x^{2 - \sigma}} \\[5mm] & = {1 \over N^{1 - \sigma}} \,\,\,\stackrel{\mrm{as}\ N\ \to \infty}{\Large\to}\,\,\, \color{red}{\large 0} \end{align} and $\ds{{\pars{N + 1}^{\sigma} - N^{\sigma} \over \sigma\pars{1 - \sigma}} \sim {1 \over 1 - \sigma} \,{1 \over N^{1 - \sigma}} \to \color{red}{0}\,\,\,}$ as $\ds{\,\,\, N \to \infty}$. Then, $$ \bbox[5px,#ffd]{\left.\sum_{n = 1}^{\infty} {n^{\sigma -1}\pars{n + \sigma} - \pars{n + 1}^{\sigma} \over \sigma\pars{1 - \sigma}} \,\right\vert_{\ 0\ <\ \sigma\ <\ 1}} = \bbx{1 + \sigma\,\zeta\pars{1 - \sigma} \over \sigma\pars{1 - \sigma}} \\ $$ It is amusing that the solution's limiting case $\ds{\sigma \to 0^{+}}$ is equal to $\ds{\gamma}$ (the Euler-Mascheroni constant).
Elementary Combination Question
Yes, your approach is correct. Let me explain: Since ice cream and topping are independent, $$\#\text{ of sundaes}=\#\text{ of ice creams} \times \#\text{ of toppings}$$ If the types of ice cream didn't have to be different, your result would be $12^3$. Now you have to choose one flavor (12 ways), then a different flavor (11 ways) and then a third one (10 ways). So your result would be $12\cdot11\cdot10$. But keep in mind that you are not interested in the order of the flavors. There are $3!$ ways of ordering the 3 flavors. So, your true number of different ice-cream choices is $\frac{12\cdot11\cdot10}{3!}$, which is $_{12}C_3$. The same goes for toppings. If you define as a different sundae a sundae where the order of the ice creams or the order of the toppings is different, then your result would be $(12\cdot11\cdot10)\cdot(8\cdot7)$
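For reference, the resulting count can be computed directly (a one-liner, assuming the usual reading of the problem: 12 flavors choose 3, 8 toppings choose 2):

```python
from math import comb

print(comb(12, 3) * comb(8, 2))   # 220 * 28 = 6160 sundaes
```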
Compactness, connecteness and Hausdorffness on $ (S^2 /\mathscr{R}, \tau_{e_{|S^2}}/\mathscr{R}) $
So first of all, I assume that $S^2=\{v\in\mathbb{R}^3\ |\ \lVert v\rVert=1\}$ is the standard 2-dimensional sphere. Well, it doesn't really matter what $S^2$ is, as long as it is a compact and connected subspace of $\mathbb{R}^3$. Recall that we have the standard projection $$\pi:S^2\to S^2/\mathscr{R}$$ $$\pi(x)=[x]_\mathscr{R}$$ which is a continuous surjection.

"For compactness I would say that since the sphere is closed and bounded, the property passes to the quotient, so the quotient $ (S^2 /\mathscr{R}, \tau_{e_{|S^2}}/\mathscr{R})=(X,\tau) $ is then compact."

The "closed and bounded" property does not pass to quotients. Quotients need not be a subspace of $\mathbb{R}^n$, or even metrizable, for "bounded" to make sense. Also quotient maps need not preserve closed sets. Nevertheless, the quotient will be compact. We can use our projection $\pi$ and note that the image of a compact space is compact. Indeed, if $f:X\to Y$ is a continuous surjection with $X$ compact, then take $\mathscr{U}$ an open covering of $Y$. Then $f^{-1}(\mathscr{U})=\{f^{-1}(U)\ |\ U\in\mathscr{U}\}$ is an open covering of $X$. By compactness this covering has a finite subcovering $\{f^{-1}(U_1),\ldots,f^{-1}(U_n)\}$. You can easily check that $\{U_1,\ldots,U_n\}$ is a finite subcovering of $\mathscr{U}$.

"For connectedness since the sphere is path-connected, then the property passes to the quotient and it is connected."

Yes. A continuous image of any (path) connected space is (path) connected, i.e. $\pi(S^2)=S^2/\mathscr{R}$ is (path) connected. The concrete formula for $\mathscr{R}$ is irrelevant.

"For Hausdorffness I am not sure how to proceed, since I think in this case the equivalence relation does come into play."

It sure does. Unlike connectedness and compactness, not every quotient of a Hausdorff space is Hausdorff. But we have this neat property that a quotient of a compact Hausdorff space $X$ via a relation $R$ is Hausdorff if and only if $R$ is a closed subspace of $X\times X$. See this: Question about quotient of a compact Hausdorff space So let's have a look at $\mathscr{R}=\{(x,y,z,x',y',z')\in\mathbb{R}^6\ |\ x+y=x'+y'\}$. Is that a closed subspace of $S^2\times S^2$? Sure it is, because we have a continuous function $$f:S^2\times S^2\to\mathbb{R}$$ $$f(x,y,z,x',y',z')=x+y-x'-y'$$ and with that we have $\mathscr{R}=f^{-1}(\{0\})$ and so it is closed.
How to prove these propositions?
Assume that $3\mid n$. Then $n=3q$ for some integer $q$. Hence $n^2=(3q)^2=9q^2=3(3q^2)$. Since $3q^2$ is an integer, $3\mid n^2$.
$f$ is integrable on $[a,b]$ then $\int_{x}^{x+h}\frac{|f(t)-f(x)|}{h}dt\to 0$ if $h\to 0$ for almost all $x$
The problem with what you wrote is: if $f$ is not continuous, $\phi$ is not guaranteed to be differentiable with derivative $f$ (sad). What you can do: $$ \int_x^{x+h} \frac{|f(t) - f(x)|}{h} dt = \int_0^1 {|f(x + \theta h) - f(x)|} d\theta $$ and now use the dominated convergence theorem.
biology models which uses a system of differential equations
Ironically, it can be hard to find mathematical modelling in biology that is not differential equations. But here are some examples; a small simulation sketch of the classic predator-prey system follows the list.
- The Hodgkin-Huxley model (or other biological neuron models) of the cellular dynamics of neurons. The Hindmarsh-Rose model is another simple model that exhibits bursting.
- Mathematical models of oncological tumor growth (e.g. [1] or [2]).
- Among predator-prey models, there are quite a few DE models beyond Lotka-Volterra. You can also extend to spatial distributions using PDEs (e.g. [1] or [2]). Fisher's equation is a related model of gene propagation.
- Turing's model of developmental morphogenesis (e.g. [1] or [2]).
- In pharmacology, you can model ADME kinetics via DEs called PBPK models (e.g. here).
- The rate equations play a large role in biochemistry. Of course, this is related to the Michaelis-Menten equation.
- The spreading of the electrophysiological cardiac contraction signal is modelled either by a reaction-diffusion or an eikonal equation (e.g. here).
- The replicator equation is a common DE in evolutionary game theory.
- Non-linear differential equations are good models for cellular dynamics, incorporating gene expression, translation, the cell cycle, etc... (e.g. [1], [2]).
- There are large-scale models of neural activity (i.e. modelling neural populations rather than single neurons) that utilize differential equations (e.g. [1], [2]). See also neural mass model.
- In the field of animal behaviour, often stochastic differential equations are used (though that is true for many of the other ones mentioned above too). You can, however, look at collective flocking behaviours (e.g. the Cucker-Smale model) and compartmental models of behaviour (described via DEs).
- Also pretty much everything in biophysics is described by differential equations (e.g. models of protein folding, molecular dynamics, continuum mechanical modelling of tissue, etc...).
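As promised above, a small simulation sketch of the classic Lotka-Volterra predator-prey system (the parameter values and initial populations are illustrative, not tied to any of the cited models):

```python
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, y, a=1.0, b=0.1, c=1.5, d=0.075):
    prey, pred = y
    return [a * prey - b * prey * pred,      # prey grow, are eaten
            -c * pred + d * prey * pred]     # predators die off, grow by predation

sol = solve_ivp(lotka_volterra, (0, 30), [10.0, 5.0], max_step=0.01)
print(sol.y[:, -1])   # populations at t = 30; the trajectories trace the familiar closed cycles
```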
Commutative property in one object set
An set $A=\{x\}$ with one element has exactly one binary operation, the "identity" binary operation given by $x+x=x$. So yes, with this operation $A$ is commutative. Given any $a,b\in A$ we have $a=x,b=x$ so $a+b=x+x=b+a$.
When will $f(x)$ be a factor of $f(x^2)$?
Suppose $f(x)$ is a monic polynomial of degree $n$, so that $$f(x)=(x-r_1)\cdots(x-r_n),$$ where the $r_i$ are the roots of $f$. Then $$f(x^2)=(x^2-r_1)\cdots(x^2-r_n)=(x+\sqrt{r_1})(x-\sqrt{r_1})\cdots(x+\sqrt{r_n})(x-\sqrt{r_n}).$$ For $f(x)$ to be a factor of $f(x^2)$, every root of $f(x)$ must also be a root of $f(x^2)$. This means that for all $i$, there is a $j$ such that $$r_i=\pm\sqrt{r_j}\iff r_i^2=r_j.$$ So if $f(x)$ has the root $r$, it must also have the roots $r^2, r^4, r^8,\cdots.$ Since it is a polynomial, this list must be finite. This can only happen if the elements of the list eventually repeat, which requires $r^{2^n}=r^{2^m}$ for some $m<n$. This condition is equivalent to $$r^{2^n}-r^{2^m}=0\implies r^{2^m}\left(r^{2^{n-m}}-1\right)=0.$$ Thus we either have $r=0$, or since $j=n-m$ can be any integer, $r$ is a $2^{j}$th root of unity. The first case gives the family of polynomials $x^n$ for any positive integer $n$; the second case gives the family $$(x-z)(x-z^2)\cdots\left(x-z^{2^j}\right),$$ where $z$ is a $2^j$th root of unity. We can check that both of these families are solutions.
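A quick sympy check on a few members of these families (they are easy to extend):

```python
import sympy as sp

x = sp.symbols('x')
for f in [x**3, x**2 - 1, x**4 - 1, (x**2 - 1) * x**2]:
    _, r = sp.div(sp.expand(f.subs(x, x**2)), f, x)
    print(f, r == 0)   # True in every case: f(x) divides f(x^2)
```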
Convergence of a function with $e$ in the denominator
Set $t=1/x$. We then obtain $dt=-dx/x^2$. Hence, the integral becomes $$I = \int_1^0 \dfrac{t}{e^t-1} (-dt) = \int_0^1 \dfrac{t}{e^t-1} dt$$ The convergence is now obvious, since for $t\in(0,1)$, we have $e^t-1 > t$, which implies $$\dfrac{t}{e^t-1} < 1 \implies I < \int_0^1 1 dt = 1$$ We can in fact evaluate this integral $$I = \int_0^1\dfrac{te^{-t}}{1-e^{-t}}dt = \int_0^1 \sum_{k=0}^{\infty}te^{-(k+1)t}dt = \sum_{k=1}^{\infty} \int_0^1 te^{-kt}dt$$ We have $$\int_0^1te^{-kt}dt = \dfrac{1-e^{-k}(k+1)}{k^2}$$ Hence, the integral is $$\sum_{k=1}^{\infty} \dfrac{1-e^{-k}(k+1)}{k^2} = \sum_{k=1}^{\infty} \dfrac1{k^2} - \sum_{k=1}^{\infty} \dfrac{e^{-k}}{k} - \sum_{k=1}^{\infty} \dfrac{e^{-k}}{k^2}=\zeta(2) + \log(1-1/e) - \text{Li}_2(1/e)$$
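A numeric confirmation of the final evaluation (SciPy's `spence(z)` equals $\operatorname{Li}_2(1-z)$, so $\operatorname{Li}_2(1/e)=\texttt{spence}(1-1/e)$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import spence   # spence(z) = Li_2(1 - z)

numeric, _ = quad(lambda t: t / (np.exp(t) - 1), 0, 1)
closed = np.pi**2 / 6 + np.log(1 - 1/np.e) - spence(1 - 1/np.e)
print(numeric, closed)   # both ~0.7775
```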
Expression for the Lee form
Okay I think I got it actually, so I'll post an answer for future reference or in case it might be useful for other people at some point: The co-differential $d^*$ on a Hermitian manifold satisfies $d^*=-*d*$ and $*$ is an isometry, so in particular injective. It can also be checked that $\omega^{n}=n!\,\text{Vol}$ where $\text{Vol}$ is the Riemannian volume form. Hence identity 1. is equivalent to $\frac{1}{n!}d^*\omega=-*(\theta \wedge \omega^{n-1})$. Also, we can pick a basis $e^1,...,e^{2n}$ for which $\theta=ae^1$ and $\omega=e^1 \wedge e^2+\cdots+e^{2n-1}\wedge e^{2n}$ (so we take $e^2=Je^1,e^4=Je^3,...$). Using the fact that 2-forms commute, one can be convinced that $*(\theta \wedge \omega^{n-1})=\frac{1}{a(n-1)!}e^2=\frac{1}{a^2(n-1)!}J\theta$ and so with an appropriate choice of $a$ we see that 1. and 2. are in fact equivalent.
Higher order iterative methods
The one with the smaller fraction. Or the one where the whole expression is closer to $x_n$. Or in the end, the root with the same sign as $f'(x_n)$ so that the denominator of the fraction almost cancels. Better yet, apply binomial formulas to get $$ x_{n+1}=x_n-\frac{2f(x_n)}{f'(x_n)+\operatorname{sign}(f'(x_n))\sqrt{f'(x_n)^2-2f(x_n)f''(x_n)}} $$ to avoid floating-point cancellation errors. Look up Halley's and Laguerre's methods for alternative formulas with cubic convergence. One can also apply the Taylor series of the square root wherever it appears. In the first solution form of the quadratic equation one would need to use the quadratic Taylor polynomial $\sqrt{1-a}=1-\frac a2-\frac{a^2}8+O(a^3)$ to retain a third order method, \begin{align} x_{n+1}&=x_n-\frac{f'(x_n)\left(1-\sqrt{1-2\frac{f(x_n)f''(x_n)}{f'(x_n)^2}}\right)}{f''(x_n)}\\ &=x_n-\frac{f(x_n)}{f'(x_n)}\left(1+\frac{f''(x_n)f(x_n)}{2f'(x_n)^2}\right)+O(f(x_n)^3) \end{align} which is also a named method, especially if (properly) applied to the vector case.
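A minimal implementation sketch of the cancellation-free form above (the test equation $x^2-2=0$ and the starting point are arbitrary):

```python
import math

def cubic_step_method(f, df, ddf, x, iters=6):
    # x_{n+1} = x_n - 2 f / (f' + sign(f') * sqrt(f'^2 - 2 f f''));  assumes the radicand stays >= 0
    for _ in range(iters):
        fx, dfx, ddfx = f(x), df(x), ddf(x)
        disc = math.sqrt(dfx**2 - 2 * fx * ddfx)
        x -= 2 * fx / (dfx + math.copysign(disc, dfx))
    return x

print(cubic_step_method(lambda x: x*x - 2, lambda x: 2*x, lambda x: 2.0, 1.0))  # ~1.4142135623730951
```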
A subspace $Y$ of a Banach space $X$ is complete iff the set $Y$ is closed in $X$.
This is basic metric space theory. Suppose $x_n\to x$. Given $\varepsilon>0$ there is some $N\in\mathbb N$ such that $\|x_n-x\|<\varepsilon/2$ for $n\geq N$. Then for $n,m\geq N$ we have $$\|x_n-x_m\|\leq\|x_n-x\|+\|x-x_m\|<\varepsilon.$$ Therefore the sequence $(x_n)$ is Cauchy.
Big oh proof for a(n) using big oh hierarcy
1) Which term dominates as $n$ grows large, $n$ or $3(\log_2{n})$?
2) Think about the identity: $(ab)^c = a^cb^c$
3) Use Stirling's approximation - $n! \approx (\frac{n}{e})^n \sqrt{2\pi n}$
As for your comment, you cannot convert between $\log{n}$ and $\sqrt{n}$. $\log$ is the inverse function of exponentiation, so $e^{\log{g(n)}} = g(n)$. Roots on the other hand are inverse functions of powers, so $\sqrt[k]{g(n)^k} = g(n)$. Your statement is true for large $n$ because $\sqrt{n}$ grows faster than $\log_2{n}$. (Adding in a coefficient of 2 doesn't hurt either). Try plotting $f(n)=\frac{a\log{n}}{\sqrt{n}}$. No matter what constant $a$ you choose, $f(n)$ will go to $0$ for large $n$. This indicates that $\sqrt{n}$ grows asymptotically faster than $\log{n}$.
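As a quick numeric illustration of that last point (the ratio really does tend to zero):

```python
import numpy as np

n = np.array([1e2, 1e4, 1e6, 1e8])
print(2 * np.log2(n) / np.sqrt(n))   # decreasing toward 0: sqrt(n) eventually dominates 2*log2(n)
```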
Dual of $\mathbb{Q}$[x] is not isomorphic to $\mathbb{Q}$[x]
The vector space $\Bbb Q[x]$ may be viewed as consisting of those sequences $(a_i)_0^\infty, \tag 1$ where each $a_i \in \Bbb Q$ and $a_i = 0$ for all but a finite number of $i$; now consider an arbitrary sequence of the form $(\lambda_i)_0^\infty \in \Bbb Q^\infty, \tag{2}$ where we allow $\lambda_i \ne 0$ for an infinite number of index values $i$. Any such sequence $\lambda = (\lambda_i)_0^\infty$ determines a well-defined linear functional on $\Bbb Q[x]$ via the formula $\lambda(p(x)) = \displaystyle \sum_0^\infty \lambda_i p_i, \tag 3$ where $p(x) = \displaystyle \sum_0^{\deg p} p_i x^i \in \Bbb Q[x]. \tag 4$ Since only a finite number of the $p_i \ne 0$, the sum in (3) is well-defined and determines a unique element of $(\Bbb Q[x])^\ast$; linearity is easily verified. Now the cardinality $\vert \Bbb Q[x] \vert$ of $\Bbb Q[x]$ is well known to be $\vert \Bbb Q[x] \vert = \aleph_0, \tag 5$ that is, $\Bbb Q[x]$ is countable; but it is also reasonably well-known that the cardinality of the set of sequences of rationals is $\vert \Bbb R \vert$, the cardinality of $\Bbb R$: $\vert \{ (\lambda_i)_0^\infty \} \vert = \vert \Bbb R \vert; \tag 6$ since $\vert \Bbb Q[x] \vert = \aleph_0 \ne \vert \Bbb R \vert = \vert \{ (\lambda_i)_0^\infty \} \vert, \tag 7$ we see that $\Bbb Q[x] \not \cong (\Bbb Q[x])^\ast. \tag 8$
Size of minimal families of $k$-element subsets that cannot be met by a $k$-element subset simultaneously
If $k>\left\lfloor \dfrac{n}{2}\right\rfloor$ then $\mathfrak S$ cannot exist.
If $k=1$, then $|\mathfrak S| \le 2$. Example: $\mathfrak S = \{\{1\},\{2\}\}$
If $k=2$, then $|\mathfrak S| \le 6$. Example: $\mathfrak S = \{\{1,2\},\{3,4\},\{1,3\},\{2,4\},\{2,3\},\{1,4\}\}$
In general, for $k\le \left\lfloor \dfrac{n}{2}\right\rfloor$, I believe you need $|\mathfrak S| \le \dbinom{2k}{k}$.
Edit: As Calvin Lin suggested, you also have: $$k+1 \le |\mathfrak S| \le \dbinom{2k}{k}$$
Prove that the set of $2\times 2$ matrices with integer entries whose determinant is equal to $1$ is a group under multiplication
Matrix multiplication is associative. The product of two matrices with determinant $1$ again has determinant $1$. Also, the inverse matrix of a matrix with determinant $1$ has determinant $1$, and there is a neutral element, namely $I$ with determinant $1$. It remains to show that the inverse of an integer matrix with determinant $1$ is again an integer matrix. This can be easily done using the adjugate (cofactor) formula for the inverse of a matrix: the cofactors are, of course, integers and the denominator (the determinant of the matrix) is $1$. Putting it all together, this proves the claim.
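For a concrete instance of the last point, here is a small numeric check with an arbitrarily chosen integer matrix of determinant $1$:

```python
import numpy as np

A = np.array([[2, 3],
              [1, 2]])                 # integer matrix with det = 1
adj = np.array([[A[1, 1], -A[0, 1]],
                [-A[1, 0], A[0, 0]]])  # adjugate: integer entries
print(round(np.linalg.det(A)))         # 1
print(A @ adj)                         # identity, so adj is the (integer) inverse of A
```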
Why is homeomorphism understood as stretching and bending?
I’d say just leave it as an informal, but powerful way to intuitively reason about topology. In an analogous situation, computability and decidability of problems can be defined mathematically, but we never can prove mathematically that these definitions really grasp our intuitive notion of what can be computed. The Church–Turing thesis is a meta-mathematical thesis which says that those definitions really do grasp our intuitive notion of computability. If viewed as a philosophical statement about the nature of our universe, there really is strong evidence in favor of the Church–Turing thesis: For one, all attempts at formalizing computability have been shown to be equivalent. And on the other hand, as one develops a sense for what is Turing-computable, one finds that anything that looks intuitively computable also looks Turing-computable and vice versa: The intuition for the mathematical concept of computability starts to overlap with the pre-mathematical intuition of computability. And this is the point of my answer: I think with topology, it’s much the same – only that the thesis in question doesn’t have a name (and is known to be slightly false). If you just start doing topology, you will most probably find that your intuition for bending and stretching and your intuitions for homeomorphisms start to overlap a lot. I think this is the best justification for the cited non-mathematical description of topology as a field. Now, to address your concerns about this: as you said, the thesis doesn’t work out completely, and I share your concerns about using it to prove stuff. Luckily, I found that most of the stuff I have seen proved using visual bending–stretching arguments can really be proven rigorously using real mathematics. This really helped me a lot to accept the attitude. Maybe this will help you as well. So my advice to you is to not lose too much sleep over it: Don’t think of it as a definition of the field, but rather as a description, and don’t expect stretching and bending to be defined formally. You may either regard topology as mathematics which formalizes reasoning about things like bending and stretching, or you may regard things like bending and stretching as intuitive notions which help you reason about topology.
Applications of Steenrod Squares
Let $x$ be the non-zero element of $H^n(M, \mathbb{Z}/2)$. Consider the linear map $L : H^n(M, \mathbb{Z}/2) \to \mathbb{Z}/2$ given by $y \mapsto \langle x\cup y, [M]\rangle$ where $[M]$ denotes the fundamental homology class (if $M$ is connected, this is the unique non-zero element of $H_{2n}(M, \mathbb{Z}/2)$). Then $L = 0$ if and only if $x \cup x = 0$, but by Poincaré duality $L \neq 0$ so $x\cup x \neq 0$. If $n$ is not a power of $2$, then $\operatorname{Sq}^n$ can be written as a sum of compositions of Steenrod operations of smaller degree than $n$. But note that for $i < n$, $$\operatorname{Sq}^i : H^n(M, \mathbb{Z}/2) \to H^{n+i}(M, \mathbb{Z}/2) \cong H_{n-i}(M, \mathbb{Z}/2) = 0$$ so $\operatorname{Sq}^n : H^n(M, \mathbb{Z}/2) \to H^{2n}(M, \mathbb{Z}/2)$ must be the zero map. As $\operatorname{Sq}^n(x) = x\cup x \neq 0$, we see that $n$ must be a power of $2$.
$U$ linear and bounded, is an isomorphism $\iff$ $U$ is invertible and $U^{-1}=U^*$
Since $U$ is an isomorphism it preserves the inner product, so $\langle U^*Ux,y\rangle=\langle Ux,Uy\rangle=\langle x,y\rangle$ for all $x,y$. So $U^*U=I$. Now, $U$ is also surjective, and so $U$ is invertible, and $U^{-1}=U^{*}$. Conversely, let $U$ be invertible with $U^{-1}=U^*$. Then $U$ is a surjection, and $\langle Ux,Uy\rangle=\langle U^*Ux,y\rangle=\langle Ix,y\rangle=\langle x,y\rangle$.
conversion of Cartesian to spherical
All the answers you got are incorrect because they ignore the signs of $x$ and $y$. If both $x$ and $y$ are negative, the angle terminates in the third quadrant. With these relative magnitudes of $x$ and $y$, measured counter-clockwise the angle should be a bit less than 270 degrees; measured clockwise, a bit more than 90 degrees. There are two types of arctan function: one pays no attention to the quadrant, and one fixes the quadrant correctly. In Mathematica the latter angle in degrees is: ArcTan[-.0000088, -.180976] * $180/ \pi$ = -90.0028 BTW it is the same for spherical or cylindrical coordinates because the $z$ coordinate is not involved.
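For reference, here is the same point in Python (the numbers are just the ones quoted above): the single-argument arctangent lands in the wrong quadrant, while the two-argument `atan2` does not.

```python
import math

x, y = -0.0000088, -0.180976
print(math.degrees(math.atan(y / x)))   # ~  89.997  (quadrant ignored)
print(math.degrees(math.atan2(y, x)))   # ~ -90.0028 (quadrant respected)
```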
Existential quantifier in a topos
It is not just the singleton map, but the image of $\pi_2\circ\epsilon:\in\;\hookrightarrow X\times PX\to PX$. After all, given a formula $\varphi(x)$, you still want the existential quantifier map to return $\top$ if $\{x:\varphi(x)\}$ has more than one element--you want it to return $\top$ on any non-empty "subset" of $X$.
Order of Hom$(D_n,\mathbb{C}^*)$
If $f\colon G\to H$ is a homomorphism and $H$ is abelian, then $$ f(xyx^{-1}y^{-1})=f(x)f(y)f(x)^{-1}f(y)^{-1}=1 $$ and so $xyx^{-1}y^{-1}\in\ker f$. Thus $[G,G]\subseteq\ker f$ and $f$ induces a homomorphism $\bar{f}\colon G/[G,G]\to H$. Thus the number of homomorphisms $D_n\to\mathbb{C}^*$ is the same as the number of homomorphisms $D_n/[D_n,D_n]\to\mathbb{C}^*$. If $n$ is odd, then this group is $C_2$, the cyclic group of order $2$. The elements in $\mathbb{C}^*$ with order a divisor of $2$ are $1$ and $-1$, which means there are two homomorphisms $D_n\to\mathbb{C}^*$. If $n$ is even, this group is isomorphic to $C_2\times C_2$, so there are four homomorphisms. Why?
Finding ALL rational solutions to a diophantine equation
I get that all solutions are the intersection of the curve (over the reals) with lines through $(1,1)$ with slope the negative of a square, that is $$ p^2 x + q^2 y = p^2 + q^2 $$ with integers $p,q > 0$ and, might as well, $\gcd(p,q) = 1.$ ADDED: I went through it more carefully, for each positive pair we get two intersections. A little cleaner to allow integers $\gcd(p,q) = 1$ and $p + q \neq 0,$ not necessarily both positive. Then $$ x = \frac{ p^3 + pq^2\; }{p^3 + q^3}= \frac{ \left(p^2 + q^2\right)p \; }{p^3 + q^3} \; \; , $$ $$ y = \frac{p^2 q + q^3 \;}{p^3 + q^3} \; \; = \frac{ \left(p^2 + q^2\right)q \; }{p^3 + q^3} \; \; . $$ For that matter, it turns out that we can take any direction $(p,q)$ with integers $p,q$ and $p+q > 0$ (so that we will also have $p^3 + q^3 > 0$) and simply adjust the length by multiplying by the correct scalar quantity, giving $$ (x,y) = \; \; \left( \frac{ \; p^2 + q^2 \; }{\; p^3 + q^3 \;} \right) \; \; \; (p,q) $$ Let me edit in the nine examples at LINK. For example, with $(p,q) = (3,-1),$ we get $\frac{ \; p^2 + q^2 \; }{\; p^3 + q^3 \;} = \frac{9+1}{27 - 1} = \frac{10}{26} = \frac{5}{13},$ so we get $(x,y) = \left(\frac{5}{13}\right) (3,-1) = \left(\frac{15}{13}, \; \; -\frac{5}{13} \right)$ $$ (3,-1) \mapsto \left( \frac{15}{13}, \; - \frac{5}{13} \right) $$ $$ (2,-1) \mapsto \left( \frac{10}{7}, \; - \frac{5}{7} \right) $$ $$ (5,-3) \mapsto \left( \frac{85}{49}, \; - \frac{51}{49} \right) $$ $$ (3,-2) \mapsto \left( \frac{39}{19}, \; - \frac{26}{19} \right) $$ $$ (7,-5) \mapsto \left( \frac{259}{109}, \; - \frac{185}{109} \right) $$ $$ (4,-3) \mapsto \left( \frac{100}{37}, \; - \frac{75}{37} \right) $$ $$ (9,-7) \mapsto \left( \frac{585}{193}, \; - \frac{455}{193} \right) $$ $$ (5,-4) \mapsto \left( \frac{205}{61}, \; - \frac{164}{61} \right) $$ $$ (11,-9) \mapsto \left( \frac{1111}{301}, \; - \frac{909}{301} \right) $$ =========================================================================== Indeed, all we need to do is take consecutive exponents, say $$ \color{red}{x^7 + y^7 = x^6 + y^6.} $$ We can take any direction $(u,v)$ with integers $u,v$ and $u+v > 0$ and multiply by the correct scalar quantity, giving $$ (x,y) = \; \; \left( \frac{ \; u^6 + v^6 \; }{\; u^7 + v^7 \;} \right) \; \; \; (u,v) $$ ====================================================================== The precise form of the numerator and denominator is not that important, as long as they are homogeneous. I would get an analogous outcome for $$\color{red}{ x^4 + x^2 y^2 + y^4 = x^3 + x^2 y + x y^2 + y^3}, $$ all rational points are $$ (x,y) = \; \; \left( \frac{ \; u^3 + u^2 v + u v^2 + v^3 \; }{\; u^4 + u^2 v^2 + v^4 \;} \right) \; \; \; (u,v) $$
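Assuming the original curve is $x^3+y^3=x^2+y^2$ (the equation this parametrization solves), a short exact-arithmetic script can spot-check the formula; the helper `point` below is just an illustration, not something from the question.

```python
from fractions import Fraction
from math import gcd

def point(p, q):
    # rational point in direction (p, q), assuming gcd(p, q) = 1 and p + q > 0
    s = Fraction(p * p + q * q, p ** 3 + q ** 3)
    return s * p, s * q

for p, q in [(3, -1), (2, -1), (5, -3), (7, 4), (1, 1)]:
    assert gcd(abs(p), abs(q)) == 1 and p + q > 0
    x, y = point(p, q)
    assert x ** 3 + y ** 3 == x ** 2 + y ** 2   # lies on the curve, exactly
    print((p, q), "->", (x, y))
```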
Prove $\lim_{x \to 0} x^2\sin(1/x^2)/\sqrt x (x+1)=0$
Your limit doesn't exist as $f(x)$ is undefined for $x&lt;0$. Although, right side limit does exist and is equal to $$\lim_{x\to0^+} \frac{x^2\sin(1/x^2)}{\sqrt x(x+1)}=\lim_{x\to0^+} \frac{x\sqrt x\sin(1/x^2)}{x+1}$$ The limit $\lim\limits_{x\to0^+}x\sqrt x\sin(1/x^2)$ is equal to $0$ because $$-1\leqslant\sin(1/x^2)\leqslant1$$ $$-x\sqrt x\leqslant x\sqrt x\sin(1/x^2)\leqslant x\sqrt x$$ And $\lim\limits_{x\to0^+} x\sqrt x = 0$, so using the squeeze theorem we get $$\lim\limits_{x\to0^+}x\sqrt x\sin(1/x^2)=0$$ And finally, $$\lim_{x\to0^+} \frac{x\sqrt x\sin(1/x^2)}{x+1}=0$$
Showing properties of a set of functions
For a), showing that if $f$ is constant then $f$ is left-absorbing is trivial. To show that if $f$ is left-absorbing then $f$ is constant, we will show that if $f$ is not constant, then it is not left-absorbing. So assume that $f$ is not constant, i.e. there are two elements $a,b$ in $A$ such that $$f(a)\neq f(b)$$ Now let's take $g\in M$ such that $g(a)=b$. We have then: $$f(g(a))=f(b)\neq f(a)$$ Thus $f$ is not left-absorbing. To show b), let's assume that $M$ contains a function $f$ that is right-absorbing, i.e. $\forall g\in M\ \forall x\in A:\ g(f(x))=f(x)$. Let's say that $f(a)=b$. Let's take a function $g\in M$ such that $g(b)\neq b$. We have then: $$g(f(a))=g(b)\neq b = f(a)$$ Thus $f$ is not right-absorbing.
The $i^{th}$ prime in a given ring R
If by ''consistently'' you mean "in a canonical way which behaves well with respect to the ordering of $\mathbb Z$", then the obvious answer is "no". Suppose you have a finite field extension of $\mathbb Q$, call it $K$. Then after doing a bit of algebraic number theory you learn that there is a ring $A$ containing $\mathbb Z$ which has essentially all the nice properties you want concerning prime ideals (called the ring of integers of $\mathbb Q(i)$). In particular, if the class group of $K/\mathbb Q$ is $1$, the ring $A$ is a principal ideal domain, so that the non-zero prime ideals and the prime elements of $A$ are essentially the same thing. Then the ordering you desire is equivalent to look above each prime $p$ in $\mathbb Z$ and fix an ordering in the primes $q$ lying above $p$, and of course there is no canonical way of doing that. To illustrate what I mean, consider $K = \mathbb Q(i)$, so that the ring $A$ described above is $\mathbb Z[i]$. It is well-known that if $p \in \mathbb Z$ is prime, then $$ (p)_{\mathbb Z[i]} = (a-bi)(a+bi) $$ when $p \equiv 1 \pmod 4$, and when $p \equiv 3 \pmod 4$ then $p$ is also a prime element in $\mathbb Z[i]$ ; furthermore, $(2)_{\mathbb Z[i]} = (1+i)^2$. So an ordering on $\mathrm{Spec}(\mathbb Z[i])$ is essentially equivalent to fixing $a+bi \le a-bi$ or the other way around. The reason why you should believe that there is no canonical way of doing that is because $\mathbb Z[i] \simeq \mathbb Z[-i]$ ; if there was a canonical way of doing it, morally, it should be preserved under isomorphism (or relabelling of the roots if you prefer). But of course, set-theoretically speaking there is nothing wrong in labelling each of the primes lying above each prime $p$ in $\mathbb Z$ and making it consistent, but then it becomes quite ad hoc and practically not so useful in my opinion. Hope that helps,
finding the factors of a function via gcd
Well, if $f(x)$ and [something else] have a GCD with positive degree, by definition, the GCD is a divisor of $f(x)$. Did you mean something about multiple roots?
Increasing sequence by derivative?
Hint (needs to be filled in): Starting from $x$ you can, by taking $p=d/2$, get to anything of the form $x+k\cdot d/2$, and since $f(x)<f(x+p)$ for all $p<d$ the sequence of intervals $$[x,x+d/2], [x+d/2,x+2\cdot(d/2)], \cdots$$ may be joined up to conclude that if $x<y$ then $f(x)<f(y).$ That is, if $y$ is not initially within $d$ of $x$ the relation follows by transitivity. For example $f(x)<f(x+d/2)$ and $f(x+d/2)<f(x+2\cdot(d/2))$ imply that also $f(x)<f(x+2\cdot(d/2))$, and similarly, by more iterations, $f(x)<f(x+k\cdot(d/2))$. Then after enough iterations to arrive just before $y$ you can do one more step from there to $y$ itself to finish. ANOTHER WAY: Given $x<y$ choose some positive integer $n$ for which $\alpha=(y-x)/n<d$. Then apply $f(x)<f(x+\alpha)$ a total of $n$ times, increasing the input by $\alpha$ each time, and use transitivity of $<$ to conclude $f(x)<f(y).$
Does this alternating series converge?
The logical answer: Because the series is alternating, you can apply the test for alternating series. The other series is not alternating and thus the same argument does not apply. The intuitive answer: The problem with convergence in this case is whether the infinite sum reaches infinity. The non-alternating series gets too big, but the minuses in the alternating series make sure that the series does not grow like crazy. Remember that a convergent series is not necessarily absolutely convergent. This only holds the other way around.
Decomposition formulas for rotational symmetries of a cube
The main point: all your subgroups are rather small and generated by a single rotation. There are at least 24 elements in $G$: the identity ${\sf id}$; rotations around axes through the middles of one of the 3 pairs of opposing faces, with angle $\frac{k\pi}{2}$ $(1\leq k \leq 3)$: there are $3\times 3=9$ such rotations, and we denote them by $OF(\langle\text{name of face}\rangle,\langle\text{angle}\rangle)$; half-turns around axes through the middles of one of the 6 pairs of opposing edges: there are $6$ such rotations, and we denote them by $HT(\langle\text{name of opposite edges}\rangle)$; rotations with axis $18,26,37$ or $45$, and angle $\frac{2k\pi}{3}$ $(1\leq k \leq 2)$: there are $2\times 4=8$ such rotations, and we denote them by $R(\langle\text{name of axis}\rangle,\langle\text{angle}\rangle)$. Since there are at most $24$ orientation-preserving symmetries of the cube (see here), we see that $G$ consists exactly of the $24$ elements enumerated above. Let $G_v$ denote the subgroup of $G$ fixing the vertex $1$, $G_e$ the subgroup fixing the edge $12$, and $G_f$ the subgroup fixing the face $1234$. By the enumeration above, we have: $$ \begin{array}{lcl} G_v &=& \lbrace {\sf id};R(18,\frac{2k\pi}{3}) (1 \leq k \leq 2) \rbrace \\ G_e &=& \lbrace {\sf id};HT(12,78) \rbrace \\ G_f &=& \lbrace {\sf id};OF(1234,\frac{k\pi}{2}) (1 \leq k \leq 3) \rbrace \end{array} $$ It is easy then to deduce the decompositions into orbits: $$ \begin{array}{|l|l|} \hline \text{Subgroup} & G_v \\ \hline \text{Orbits in } V & [1] [2,3,5] [4,6,7] [8] \\ \hline \text{Orbits in } E & [12,13,15] [24,37,56] [26,34,57] [48,68,78] \\ \hline \text{Orbits in } F & [1234,1256,1357] [2468,3478,5678] \\ \hline \end{array} $$ $$ \begin{array}{|l|l|} \hline \text{Subgroup} & G_e \\ \hline \text{Orbits in } V & [1,2] [3,6] [4,5] [7,8] \\ \hline \text{Orbits in } E & [12] [13,26] [15,24] [34,56] [37,68] [48,57] [78] \\ \hline \text{Orbits in } F & [1234,1256] [1357,2468] [3478,5678] \\ \hline \end{array} $$ $$ \begin{array}{|l|l|} \hline \text{Subgroup} & G_f \\ \hline \text{Orbits in } V & [1,2,3,4] [5,6,7,8] \\ \hline \text{Orbits in } E & [12, 13, 24, 34] [15, 26, 37, 48] [56, 57, 68, 78] \\ \hline \text{Orbits in } F & [1234] [1256,1357,2468,3478] [5678] \\ \hline \end{array} $$
Prove that lim $\sqrt{n^2+1}-n = 0$
Your proof is correct! Another way would be to use the squeeze theorem: $$ 0 < \sqrt{n^2 + 1} - n < \sqrt{n^2 + 2 + \frac{1}{n^2}} - n = \frac{1}{n}. $$
How to choose degree for polynomial regression?
An alternative to polynomial regression is a fit with Chebyshev polynomials, which essentially is a least-squares fit. Usually the coefficients will decrease from the low-order terms, and you can stop when the coefficients get small enough. You can then convert from Chebyshev form to polynomial form. Hope this is hand-wavey enough.
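A minimal sketch of this workflow with NumPy's Chebyshev class (the data, noise level, and coefficient cutoff are assumptions for illustration, not part of the answer):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200)
y = np.sin(2 * x) + 0.05 * rng.standard_normal(x.size)   # noisy sample data

cheb = np.polynomial.Chebyshev.fit(x, y, deg=10)
print(np.round(cheb.coef, 4))        # coefficients typically decay quickly

# keep only the degrees whose coefficients are not negligible, refit, convert
deg = max(i for i, c in enumerate(cheb.coef) if abs(c) > 1e-3)
trimmed = np.polynomial.Chebyshev.fit(x, y, deg=deg)
print(trimmed.convert(kind=np.polynomial.Polynomial))   # ordinary polynomial form
```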
Proving that linear combination of exponentials is positive
The problem in showing that $$f(t)=3-5e^{-2t}+6e^{-3t}+2e^{-5t}-3e^{-(3-\sqrt5)t}-3e^{-(3+\sqrt5)t}\gt0$$ for all $t > 0$ is that the terms with the slowest decay (except for the constant term) occur with a negative coefficient, which makes it a bit hard to see whether the derivatives are positive everywhere. Simple cure: multiply with a suitable growing exponential, e.g. $e^{5t}$. For $$g(t) = 3e^{5t} - 5e^{3t} + 6e^{2t} + 2 - 3e^{(2+\sqrt{5})t} - 3e^{(2-\sqrt{5})t},$$ things are easier. We still have $g(0) = 0$ of course, and $$g'(t) = 15e^{5t} - 15e^{3t} + 12e^{2t} - 3(2+\sqrt{5})e^{(2+\sqrt{5})t} - 3(2-\sqrt{5})e^{(2-\sqrt{5})t},$$ where things still aren't obvious, since the second and third fastest-growing terms have negative coefficients whose sum is larger in absolute value than the coefficient of the fastest-growing term, but further differentiations let the coefficient of the fastest-growing term grow faster than the absolute value of the coefficients of the next fastest-growing terms, so while $g^{(n)}(0) \geqslant 0$, we can simply continue differentiating and get easier to estimate stuff with each differentiation. So, check $g'(0) = 0$, and differentiate once more, $$g''(t) = 75e^{5t} - 45e^{3t} + 24e^{2t} - 3(9+4\sqrt{5})e^{(2+\sqrt{5})t} - 3(9-4\sqrt{5})e^{(2-\sqrt{5})t}.$$ The coefficient of $e^{5t}$ is still not large enough for a trivial estimate, but we still have $g''(0) = 0$, so let's do it again: $$g'''(t) = 375e^{5t} - 135e^{3t} + 48e^{2t} - 3(38+17\sqrt{5})e^{(2+\sqrt{5})t} - 3(38-17\sqrt{5})e^{(2-\sqrt{5})t}.$$ Now we're there, $375 > 135 + 3(38+17\sqrt{5})$, so $$g'''(t) = 3(38+17\sqrt{5})\left(e^{5t} - e^{(2+\sqrt{5})t}\right) + 135(e^{5t} - e^{3t}) + 3(42-17\sqrt{5})e^{5t} + 48e^{2t} + 3(17\sqrt{5}-38)e^{(2-\sqrt{5})t}$$ where each term is easily recognised as non-negative, and even strictly positive for $t > 0$. So $g''(t)$ is strictly increasing, whence positive, therefore $g'(t)$ is strictly increasing, hence positive, thus $g(t) > 0$ for $t > 0$, and therefore $f(t) > 0$ for $t > 0$.
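For a purely numerical (and of course non-rigorous) sanity check of the original inequality before wading through the derivatives, a crude grid evaluation will do:

```python
from math import exp, sqrt

def f(t):
    return (3 - 5 * exp(-2 * t) + 6 * exp(-3 * t) + 2 * exp(-5 * t)
            - 3 * exp(-(3 - sqrt(5)) * t) - 3 * exp(-(3 + sqrt(5)) * t))

# f(0) = 0; on a grid of positive t the minimum stays strictly positive
print(min(f(k / 1000) for k in range(1, 10001)))
```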
How to find the value of $f(n)=P_{n-1}^{1}-P_{n-1}^{2}+P_{n-1}^3+\cdots+(-1)^{n}P_{n-1}^{n-1}$
What this problem reduces to (if $n$ is even) is: $$f(n)=(n-1)!{\left({1\over0!}-{1\over1!}+{1\over2!}-{1\over3!}+\cdots\right)}$$ which becomes $$f(n)\approx(n-1)!\,e^{-1}.$$ This is not the exact answer but should give an approximate idea of the answer.
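A quick numerical check of this approximation, assuming $P_{n-1}^{k}$ denotes the falling factorial $(n-1)!/(n-1-k)!$:

```python
from math import e, factorial

def f(n):
    # sum_{k=1}^{n-1} (-1)^(k+1) * (n-1)! / (n-1-k)!
    return sum((-1) ** (k + 1) * factorial(n - 1) // factorial(n - 1 - k)
               for k in range(1, n))

for n in [6, 8, 10]:
    print(n, f(n), factorial(n - 1) / e)
# the exact integer value is always within 1 of (n-1)!/e
```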
Show that [x] is denumerable
Let me use $[x]$ for the equivalence class of $x.$ A real number $y$ belongs to $[x]$ if and only if $x-y$ is a rational number. This happens if and only if $y=x+q$ for some rational number $q.$ Hence, the number of elements in the equivalence class of $x$ is the same as the number of rationals, that is, $[x]$ is denumerable (by which I'm assuming you mean countably infinite). For your third question, consider that all rationals belong to the same equivalence class. Next, let $x$ and $y$ be distinct irrationals for which $x-y$ is not rational. What can you say about $[x]$ and $[y]?$ Does that suggest a strategy?
Angle between non parallel unit vectors
Hint: Note that $\cos(\theta)\lVert a\rVert\lVert b\rVert=\langle a,b\rangle=\left\langle u+\sqrt3v,u-\sqrt3v\right\rangle=-2$. Now, what can you say about $\lVert a\rVert$ and $\lVert b\rVert$?
A basic doubt on rationals approximating reals
Let $(x_n)$ and $(y_n)$ be two such sequences. Consider the ratio $\dfrac{2^{x_n}}{2^{y_n}}=2^{x_n-y_n}$. Let $a_n=x_n-y_n$. Then the sequence $(a_n)$ has limit $0$. Show that $\lim_{n\to\infty}2^{a_n}=1$.
Leibniz integral rule application
It seems that the function $x\mapsto A(x)$ is continuous. Let an $x$ and a $v$ be given. We have to prove that your first inequality implies the second. Let an $\epsilon>0$ be given. There is a $c>0$ such that $$\|A(x+tv)-A(x)\|\leq\epsilon\qquad(0\leq t\leq c)\ .$$ It follows that $$|A(x)v|\leq|A(x+tv)v|+\epsilon|v|\qquad(0\leq t\leq c)\ .$$ Your first inequality then allows us to conclude that $$|A(x)v|\leq\rho|v|+\epsilon|v|\ ,$$ and as $\epsilon>0$ was arbitrary it then follows that in fact $$|A(x)v|\leq\rho|v|\ .$$
$n$ users placed in cells randomly and independently and figure out the expected value
I interpret the question to mean that different users want different files, where user $i$ wants file $f_i$, and that there is a fixed probability $p$ such that for each pair of user $i$ and file $f_j$ with $i\ne j$ there is an independent probability $p$ of that user having that file, with zero probability of user $i$ having their own file $f_i$. In this case, a cell with $k$ users has probability $1-(1-p)^{k(k-1)}$ of having a matching user and file, so the expected number of such cells is $$ m\sum_{k=0}^n\binom nk\left(\frac1m\right)^k\left(1-\frac1m\right)^{n-k}\left(1-(1-p)^{k(k-1)}\right)\\ =m\left(1-\frac1m\right)^n\sum_{k=0}^n\binom nk(m-1)^{-k}\left(1-(1-p)^{k(k-1)}\right)\;. $$
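Under this reading, the closed form can be checked against a direct simulation (the parameter values below are hypothetical, chosen only for illustration):

```python
import random
from math import comb

def closed_form(n, m, p):
    return m * sum(
        comb(n, k) * (1 / m) ** k * (1 - 1 / m) ** (n - k)
        * (1 - (1 - p) ** (k * (k - 1)))
        for k in range(n + 1)
    )

def simulate(n, m, p, trials=20000):
    hits = 0
    for _ in range(trials):
        cells = [[] for _ in range(m)]
        for user in range(n):
            cells[random.randrange(m)].append(user)
        for users in cells:
            # each ordered pair (i, j), i != j, in the cell matches with prob p
            if any(i != j and random.random() < p
                   for i in users for j in users):
                hits += 1
    return hits / trials

n, m, p = 20, 5, 0.1
print(closed_form(n, m, p), simulate(n, m, p))   # the two should be close
```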
What is the remainder when $6\times7^{32} + 7\times9^{45}$ is divided by $4$?
The relation "go modulo 4" is very nice. It respects addition and multiplication. So $$ 6 \cdot 7^{32} + 7 \cdot 9^{45} $$ is the same as $$ 6 \cdot 1 + 7 \cdot 1 $$ by what you have already computed!
$I_1=\int_0^1 \frac 1 {1+\frac 1 {\sqrt x}} dx$
We define $$j=j(a,b)=\int_0^1\frac{a+\sqrt x}{b+\sqrt x}dx$$ So we see that $$j^*(a_1,a_2,a_3,a_4)=\int_0^1\frac{a_1+a_2\sqrt x}{a_3+a_4\sqrt x}dx=\frac{a_2}{a_4}j\left(\frac{a_1}{a_2},\frac{a_3}{a_4}\right)$$ So then we see that $$\begin{align} j&=\int_0^1\frac{a-b+b+\sqrt x}{b+\sqrt x}dx\\&=(a-b)\int_0^1\frac{dx}{b+\sqrt x}+\int_0^1\frac{b+\sqrt x}{b+\sqrt x}dx\\&=(a-b)\int_0^1\frac{dx}{b+\sqrt x}+1 \end{align}$$ Then we set $x=u^2\Rightarrow dx=2u\,du$: $$\begin{align} \int_0^1\frac{dx}{b+\sqrt x}&=2\int_0^1\frac{u}{b+u}du\\ &=2\int_0^1\frac{-b+b+u}{b+u}du\\ &=2-2b\int_0^1\frac{du}{b+u}\\ &=2-2b\Big[\ln|u+b|\Big]_0^1\\ &=2+2b\ln\left|\frac{b}{b+1}\right| \end{align}$$ So $$j(a,b)=1+2(a-b)\left(1+b\ln\left|\frac{b}{b+1}\right|\right)$$ And your integral is given by $$I_n=j^*(F_n,F_{n-1},F_{n+1},F_n)=\frac{F_{n-1}}{F_n}j\left(\frac{F_n}{F_{n-1}},\frac{F_{n+1}}{F_n}\right)$$
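A quick numerical cross-check of the closed form for $j(a,b)$ (the crude midpoint-rule integrator and the values $a=2$, $b=3$ are just for illustration):

```python
from math import log, sqrt

def j_closed(a, b):
    return 1 + 2 * (a - b) * (1 + b * log(abs(b / (b + 1))))

def j_numeric(a, b, n=200000):
    # midpoint rule for the integral of (a + sqrt(x)) / (b + sqrt(x)) on [0, 1]
    h = 1.0 / n
    return sum((a + sqrt((i + 0.5) * h)) / (b + sqrt((i + 0.5) * h)) * h
               for i in range(n))

print(j_closed(2.0, 3.0), j_numeric(2.0, 3.0))   # ~0.726 both ways
```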
Finding $a$ in modular arithmetic while $a<0$
All he is saying is that if $a < 0$ it is possible to say $a = 26k + b$ where $b \in \{1,....., 26\}$. He does: $a < 0$ so $-a > 0$. Then if you divide $-a$ by $26$ we get a quotient $q$ and a remainder $r$ so that $\frac {-a}{26} = q + \frac r{26}$ and $-a = 26q + r$ and $ 0 \le r < 26$. Then $-a = 26(q+1) -26 + r = 26(q+1) - (26-r)$. So since $0 \le r < 26$ we have $0 < 26-r \le 26$, so if we let $b = 26-r \in \{1,..., 26\}$ and let $k = -q-1$ we get $a = 26(-q-1) +(26-r) = 26k + b$. It works. If $a > 0$ then there is a $b \in \{1....26\}$ where $a \equiv b \pmod {26}$. And if $a < 0$ then there is a $b \in \{1... 26\}$ where $a \equiv b\pmod {26}$. And if $a = 0$ then $a \equiv 26\pmod{26}$. .... It's not deep. I'd have done it simply by saying: "Just add or subtract $26$ until you get something between $1$ and $26$ inclusive". ====== old answer=== If $a > 0$ there is a $k$ and a $b > 0$ so that $a = 26k + b$. [The book seems to have established that.] But what if $a < 0$? Can we find a $k$ and $b > 0$ so that $a = 26k +b$? Yes. If $a < 0$ then $-a > 0$ and we can find $q$ and $r: 0 \le r < 26$ so that $-a = 26q + r$ $= 26(q+1) - 26 + r$ $= 26(q+1) - (26-r)$ and note that $r < 26$ so $ 26 -r > 0$. But then $a = 26(-q-1) + (26-r)$. Let $k = -q-1$ and $b = 26-r$ and this works fine: $a = 26k + b$ where $b > 0$ even though $a < 0$. [ I suppose for $a = 0$ the book uses $0 = 26(-1) + 26$? ] [It's not entirely clear what the book is trying to show.]
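In code the whole discussion collapses to one line: a remainder operation that returns a value in $\{0,\dots,25\}$ (like Python's `%`) just needs a shift to land in $\{1,\dots,26\}$. The helper below is only an illustration of this point.

```python
def rep_1_to_26(a):
    # representative b in {1, ..., 26} with a ≡ b (mod 26), for any integer a
    return ((a - 1) % 26) + 1

print(rep_1_to_26(-3), rep_1_to_26(0), rep_1_to_26(26), rep_1_to_26(27))
# -> 23 26 26 1
```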
Meaning of the inverse of the Laplacian matrix
One answer is in terms of the commute time distance. Let $C_{ij}$ denote the expected length of a random walk on the graph starting at node $i$, passing through node $j$, then returning to node $i$. Let $P$ denote the orthogonal projection against the constant vector. Let $L$ denote the graph Laplacian. Then $L^\dagger \propto PCP$. Because $L^\dagger$ is positive semi-definite and $L^\dagger \mathbf 1 = \mathbf 0$, this makes $C$ a Euclidean distance matrix, and we may compute the entries using $$ C_{ij} \propto (L^\dagger)_{ii} + (L^\dagger)_{jj} - 2 (L^\dagger)_{ij} ~~.$$ (The proportionality constant is the sum over all node degrees) The Euclidean coordinates obtained from $C$ provide one "spectral" embedding of the graph, sometimes similar but distinct from the typical one using eigenvectors of the Laplacian. Von Luxburg talks about this in her tutorial on spectral clustering.
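Here is a small numerical illustration (the graph is made up; the proportionality constant is taken to be the sum of the degrees, as stated above): commute times obtained from the hitting-time equations agree with $\operatorname{vol}(G)\left((L^\dagger)_{ii}+(L^\dagger)_{jj}-2(L^\dagger)_{ij}\right)$.

```python
import numpy as np

# adjacency of a small undirected graph: triangle 0-1-2 plus pendant vertex 3
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
deg = A.sum(axis=1)
L = np.diag(deg) - A
Lp = np.linalg.pinv(L)
vol = deg.sum()

def hitting_times(j):
    # expected number of steps of the simple random walk until it reaches j
    n = A.shape[0]
    P = A / deg[:, None]
    keep = [i for i in range(n) if i != j]
    h = np.linalg.solve(np.eye(n - 1) - P[np.ix_(keep, keep)], np.ones(n - 1))
    out = np.zeros(n)
    out[keep] = h
    return out

i, j = 0, 3
commute = hitting_times(j)[i] + hitting_times(i)[j]
formula = vol * (Lp[i, i] + Lp[j, j] - 2 * Lp[i, j])
print(commute, formula)   # both ~ 13.33 for this graph
```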
Definition of tensor product
This sort of tensor product cannot be extended to more than two modules because in general the tensor product is no longer an $R$-module, only an abelian group. If $N$ is an $(R,R)$-bimodule, then the tensor product is a right $R$-module and we may take the tensor product with a left $R$-module. Again this will only be an abelian group in general. I'm sure you can see how to generalize this to any finite number of factors, or a countably infinite number when the index set is order isomorphic to $\mathbb{N}$ or $\mathbb{Z}$. Without an ordering, though, this kind of tensor product doesn't make sense because of the interplay between the left and right module structures.
Is there a set of all topological spaces?
Your idea is right, but technically it needs some small adjustment. As you say, it's not $S$ that's in $\mathfrak T$, but $\{\emptyset, S\}$, so "the set of all sets" isn't really going to be a subset, but there's an easy injective map from "the set of all sets" to $\mathfrak T$, so you can easily recreate Russell's paradox.
Maximum likelihood estimator for family of uniform distributions
The problem with your answer is that the likelihood actually is $$ L(\theta)= \theta^{-n} \prod_{i=1}^n\mathbf 1_{\{0 \leq Y_i \leq \theta \}}$$ i.e. you forgot the support of your family of models. In particular, if you set $\theta = \min\{Y_i\}$ and there exists $Y_j > \min\{Y_i\}$ (which happens with probability $1$, and is also the case in your example), then the likelihood will be $0$! And thus your choice of $\theta$ definitely does not maximize it.
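A tiny numerical illustration of this point (the sample values are made up): with the indicator included, the likelihood vanishes for any $\theta$ below $\max\{Y_i\}$ and decreases above it, so the maximizer is $\max\{Y_i\}$, not $\min\{Y_i\}$.

```python
def likelihood(theta, ys):
    n = len(ys)
    return theta ** (-n) if all(0 <= y <= theta for y in ys) else 0.0

ys = [0.8, 2.1, 1.4]
for theta in [min(ys), max(ys), 3.0]:
    print(theta, likelihood(theta, ys))
# min(ys) gives likelihood 0; max(ys) gives the largest value
```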
Can you multiply imaginary unit i and a basis vector together?
Multiplying by $i$ is a 90-degree counterclockwise rotation. Multiplying by $J$ is a 90-degree clockwise rotation. (So, you can substitute $J$ as $-i$, not $i$.) Both $i^2$ and $J^2$ give $-I$, because rotating 180 degrees in either direction give the same result, namely, flip the vector in the opposite direction. Perhaps you got the correct answer to your problem because you only had to apply the rotation twice; in this case, substituting $J$ with $i$ or $-i$ gives the same thing.
Questions in Abstract Algebra
For 1b) Let $H_1$ be the 5-Sylow subgroup. Since $H_1$ is normal in $G$, consider $G'= G/H_1$. This is a group of order 8, and so has subgroups of orders 2 and 4. Now use the correspondence theorem to get subgroups in $G$ of the required sizes. For 2) It suffices to assume that $A=\mathbb{Z}$ or $A = \mathbb{Z}/p^r\mathbb{Z}$. For the former, you could tensor by $\mathbb{Q}$ and compare dimension of the resultant $\mathbb{Q}$-vector space. For the latter, you could just compare the invariant factors on either side. Does that not work?
Resolve integral representation of positive part
If you take the second derivative with respect to $x$: $$f''(x)=\frac{1}{2\pi}\int_{-\infty}^\infty e^{(m+iz)x}{\rm d}z=e^{mx}\delta(x)=\delta(x)$$ where I recognized the Fourier representation of the delta function. Now integrate twice. The first integral is the Heaviside step function: $$f'(x)=\int_{-\infty}^x \delta(t){\rm d}t=H(x)$$ Integrate again, and you get $$f(x)=\int_{-\infty}^x H(t){\rm d}t=x^+$$ The integrals are only defined in terms of distributions, but you can regularize the procedure, for instance by including a dissipative term $e^{-\epsilon|z|}$ and sending $\epsilon\to 0$.