Distinct roots of $z^n-z$
Hint: $$ z^n-z = z \, (z^{n-1}-1) $$ and the $n-1$ roots of unity are different from each other (and different from $0$).
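For a quick numerical illustration (a small sketch of my own; $n=7$ is an arbitrary choice), one can confirm with numpy that all $n$ roots are distinct:

```python
import numpy as np

n = 7
# Coefficients of z^n - z, from the leading term down to the constant term.
coeffs = [1] + [0] * (n - 2) + [-1, 0]
roots = np.roots(coeffs)

# Expect n distinct roots: 0 together with the (n-1)-th roots of unity.
min_gap = min(abs(a - b) for i, a in enumerate(roots) for b in roots[i + 1:])
print(len(roots), "roots; smallest pairwise gap:", round(min_gap, 4))
```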
$|H|$ is relatively prime to $[G:H]$
Let $p$ be a prime dividing $|H|$ and $P$ be a Sylow $p$ subgroup of $H$. Then $P$ is contained in some Sylow $p$ subgroup $Q$ of $G$. Note that if $q\in Z(Q)$ then $q\in C_G(x)$ for all $x\in P$ so $q\in H$. But $Z(Q)$ is non-trivial, so let $1\ne q\in Z(Q)$. $Q\le C_G(q)$ so $Q\le H$ and therefore $P=Q$. Now we have for any $p$ dividing $|H|$ that the Sylow $p$ subgroups of $H$ are Sylow $p$ subgroups of $G$ so $p$ does not divide $[G:H]$. Hence $|H|$ and $[G:H]$ are relatively prime.
In the group $\left( \mathbb{C} \setminus\{0\}, \times \right)$ find all elements of order $12$.
Every finite subgroup of $\Bbb C^{\times}$ is cyclic. The cyclic group $C_{12}$ has $\phi(12)=4$ different generators.
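Spelled out (a short completion of the hint, using the standard description of roots of unity): the elements of order $12$ are exactly the primitive $12$th roots of unity,
$$ z = e^{2\pi i k/12}, \qquad \gcd(k,12)=1, \quad \text{i.e. } k \in \{1, 5, 7, 11\}. $$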
Find an atlas for $M=\{(x,y,z)\in\mathbb{R}^3:x^2+y^2=1+z^2\}$
This is a one-sheeted hyperboloid. You can find one parametrization by rotating the hyperbola $(\cosh u, 0, \sinh u)$ to obtain: $${\bf x}(u,v) = (\cosh u \cos v, \cosh u \sin v, \sinh u), \quad u \in \Bbb R, \quad 0 < v < 2\pi.$$ This leaves out a meridian ($v = 0$). You can cover it by taking any $\epsilon > 0$ and making $v \mapsto v+ \epsilon$ above. These two parametrizations will cover the surface.
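As a quick check that ${\bf x}(u,v)$ really lands on $M$ (this verification is mine, using $\cosh^2 u - \sinh^2 u = 1$):
$$ x^2 + y^2 - z^2 = \cosh^2 u\,(\cos^2 v + \sin^2 v) - \sinh^2 u = \cosh^2 u - \sinh^2 u = 1. $$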
Why can't you cancel both xs with 2x/3x?
In such expressions $$\frac{x^m}{x^n}$$ we can always cancel terms without changing the value, but only under the condition that $x\neq 0$: indeed $\frac{x}{x}=1$ for all $x\neq 0$, but $\frac{0}{0}$ is not defined.
Show that $\cot \frac{\pi}{2m}\cot \frac{2\pi}{2m}\cot \frac{3\pi}{2m}...\cot \frac{(m-1)\pi}{2m}=1$
We use the fact that $$\left(\cot x\right)\left(\cot\left(\frac{\pi}{2}-x\right)\right)=1.$$ The product of the entry that is $k$ from the beginning and the entry that is $k$ from the end is $1$. (If $m$ is even, there is a "middle" term, but it is $1$.)
The order of an element in a quotient group
I got the idea now. The hypothesis $(|H|,|G/N|)=1$ is the key to this problem. Notice that $|hN| \mid |h|$ (check) and also $|h| \mid |H|$, so $|hN| \mid |H|$. But $|hN| \mid |G/N|$ as well, and since $|H|$ and $|G/N|$ are relatively prime, $|hN|=1$. Thus $hN=N$, and then $H\subset N$. This finishes the proof.
What is the distribution of $\frac{s^2}{10x̅^2}$
Cochran's theorem (see also this answer) implies that $$\sum_{i=1}^{10} (X_i - \bar{x})^2/ \sigma^2 \sim \chi^2_9$$ and $$10 \bar{x}^2 / \sigma^2 \sim \chi^2_1$$ and that the two are independent. Thus $$\frac{\left(\sum_{i=1}^{10} (X_i - \bar{x})^2/ \sigma^2\right) / 9}{(10\bar{x}^2 / \sigma^2) / 1} = \frac{s^2}{10 \bar{x}^2}$$ follows the $F_{9,1}$ distribution.
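A Monte Carlo sanity check of the $F_{9,1}$ claim (a sketch of my own; it assumes the $X_i$ are i.i.d. $N(0,\sigma^2)$, which is what makes $10\bar{x}^2/\sigma^2 \sim \chi^2_1$):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sigma = 2.0

# Many samples of size 10 from N(0, sigma^2).
xs = rng.normal(0.0, sigma, size=(200_000, 10))
xbar = xs.mean(axis=1)
s2 = xs.var(axis=1, ddof=1)        # s^2 with the 1/(n-1) convention
stat = s2 / (10 * xbar**2)

# F(9,1) has infinite mean, so compare medians instead.
print("empirical median:", np.median(stat))
print("F(9,1) median:  ", stats.f.ppf(0.5, 9, 1))
```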
Evaluation of $\int_{0}^{10 \pi} ([\sec ^{-1}x]+[\cot^{-1} x])~\mathrm dx$
$\left\lfloor \sec^{-1}x\right\rfloor=\begin{cases} 0 & \text{ for } x<\sec(1)\approx1.851\\ 1 & \text{ for } x\ge\sec(1) \end{cases}$ $\left\lfloor\cot^{-1}x\right\rfloor=\begin{cases} 1 & \text{ for } 0\le x<\cot(1)\approx0.642\\ 0 & \text{ for } x\ge\cot(1) \end{cases}$ So \begin{align} \int_0^{10\pi}\left\lfloor \sec^{-1}x\right\rfloor+\left\lfloor\cot^{-1}x\right\rfloor\,dx &=\int_0^{\cot(1)}1+0\,dx+\int_{\sec(1)}^{10\pi}0+1\,dx\\ &=\cot(1)+10\pi-\sec(1) \end{align}
Parametric uncertainty in conditional term of piecewise nonlinear dynamical system
I assume that $i$ is fixed. If it is not fixed, then the problem is not well-posed (as the control law will be a relation, not a function). In other words, is the dead-zone nonlinearity applied only to the $i$-th component, or is it applied to all components? Note that $\delta$ cannot have just "some distribution". It must take nonnegative values only, otherwise $|x_i| < \delta_i$ makes little sense. For simplicity, let us suppose that $\delta$ is a constant vector. We then have a continuous-time piecewise-affine (PWA) dynamical system, which is already problematic. If you allow $\delta$ to be a stochastic process, then you have a time-varying CT-PWA system in which the dynamics change stochastically. Even if you're comfortable with stochastic differential equations (SDEs), it appears to be a ridiculously difficult problem. Jorge Gonçalves did some work on deterministic CT-PWA systems a decade or so ago. You may want to take a look at his PhD thesis and papers.
STEP 2 2002 Statistics Question
Your calculation technique returns the probability, if you catch 200 voles with replacement, that 11 will be marked. But in this problem, the sample is taken without replacement. This has the consequence stated in @A.Goodier's comment.
Solve for function in a composition
Yes, there is such a theory – called functional equations. Specifically, see the section on solving functional equations.
Latin Squares - Proving the Unique number of Sudoku that can be generated
Sudokus are a proper subset of the Latin squares of order $9$, as they have the added restriction of the $3 \times 3$ boxes. Thus, all sudokus are Latin squares, but not all Latin squares are sudokus. As with Latin squares, there is no computer-free proof that these numbers are correct. Typically, these are checked by performing independent computations (possibly by slightly different methods). That being said, most of the searching can be eliminated by identifying symmetries. Both Latin squares, in general, and sudokus have "symmetries", which can be exploited to give a significant reduction in the search space. In a Latin square, for instance, we can permute the rows arbitrarily to give another Latin square (so it'd be a waste of time to count all of these separately). I'm not particularly familiar with the methods used in enumerating sudokus, but a website by Jarvis (and the linked papers) offers much detail on his enumeration method. There are $R_9=377597570964258816$ reduced Latin squares of order 9. I don't think any computer ever has counted from 1 to $R_9$, let alone played around with Latin squares at each step. Thus, it's safe to say a brute-force enumeration is completely out of the question. To date, the easiest way to find $R_9$ is to use Sade's method, which I mention in my answer to the linked question. Sade's method is the only feasible way for order $10$ or greater. Unfortunately, Sade's method was published only in a very obscure paper and is hard to obtain. But I describe it in great detail in my survey paper (here). Roughly speaking, Sade's method saves an enormous amount of time by identifying Latin rectangles that have the same number of completions, and clumping them together. We then count the number of ways of extending each equivalence class by one row, then identify which of these extended Latin rectangles admit the same number of completions, and so on recursively. (Note: In an earlier version of this post, I claimed it was the only way for $9 \times 9$ squares, but there is actually another way in this case.) For $9 \times 9$ Latin squares, it's possible to iterate through representatives of the 19270853541 main classes of Latin squares on a computer, and at each step calculate the size of the autoparatopism group $\mathrm{Par}(L)$. The total number of $9 \times 9$ Latin squares is thus \[\sum_L \frac{6n!^3}{|\mathrm{Par}(L)|},\] where the sum is over the 19270853541 representatives. Generating these representatives can be done via the "canonical construction path" described in: B. D. McKay, Isomorph-free exhaustive generation, J. Algorithms, 26 (1998), 306-324. See this paper for more details (including the relevant definitions: "main class" and "autoparatopism"): B. D. McKay, A. Meynert, W. Myrvold, Small Latin squares, quasigroups, and loops, J. Combin. Des. 15 (2007), 98-119. Sade's method for $12 \times 12$ Latin squares would work fine, if we had a sufficiently powerful computer with enough memory (and the budget to use it, and the know-how to program it efficiently). I think it's safe to say we (as a species) could find this number in the next 100 years or so (I'm hoping to see $R_{12}$ before I die). Note that the number of Latin squares grows quite fast. Most people think $n!$ grows quickly. Well, Smetaniuk showed that $L_{n+1} \geq (n+1)!\ L_n$ (where $L_n$ counts non-reduced Latin squares). The sheer number of Latin squares is the obstacle here. B. Smetaniuk, A new construction of Latin squares. II.
The number of Latin squares is strictly increasing, Ars Combin., 14 (1982), pp. 131-145.
Is this $\epsilon$-conditon for $\limsup$ false?
"If (ii) is true, $\exists N$ such that if $n\ge N$, then $a-\varepsilon<s_n\le a+\varepsilon$." This is incorrect. There may be infinitely many $n$ such that $a-\epsilon\geq s_n$. For instance, maybe $a-\epsilon<s_n$ whenever $n$ is odd but $a-\epsilon\geq s_n$ whenever $n$ is even. You don't know that $a-\epsilon<s_n$ for all but finitely many $n$, only for infinitely many $n$.
Integral of delta function and the constant for fund. solution to laplace's eq
This may be justified by the fact that the space of distributions contains measures. For example, if $\Omega \subset \mathbb{R}^n$ is an open set, the Dirac measure is defined by $\delta_{x_0}(\Omega)=1$ if $x_0 \in \Omega$ and $\delta_{x_0}(\Omega)=0$ if $x_0 \notin \Omega$. Then $\displaystyle \delta_{x_0} (\varphi)=\int_{\Omega} \varphi(x)\, d \delta_{x_0}(x)=\varphi(x_0)$ for all $\varphi \in \mathcal{D}(\Omega)$, and viewed as such a functional the Dirac measure is a distribution.
Is a map a homotopy equivalence if its suspension is so?
I believe the answer to your first question is no. Let $X$ be any connected acyclic CW-complex with non-trivial fundamental group, for example the space constructed as example 2.38 in Hatcher. Such a space has the property that $H_i(X) = 0$ for $i>0$ and $H_0(X) = \mathbb{Z}$, but $\pi_1(X) \neq 0$. (In particular $\pi_1(X)$ must be perfect.) Consider the projection map $f: X \to pt$. By looking at $\pi_1$, $f$ cannot be a homotopy equivalence. However, $\Sigma f: \Sigma X \to \Sigma pt$ is a homotopy equivalence. To see this, note that suspension increases connectivity, which implies that both spaces are simply connected. Hence the homology Whitehead theorem applies, which says that a map between simply connected CW-complexes is a homotopy equivalence if and only if it induces isomorphisms on all homology groups. Using the suspension axiom in homology, we see that $H_i(\Sigma X)$ and $H_i(\Sigma pt)$ are zero for all $i>0$ and are $\mathbb{Z}$ for $i=0$. It is then easy to check that $\Sigma f$ induces an isomorphism in all degrees. Edit: for your addition, I think the answer is yes, if we replace homotopy equivalence by weak homotopy equivalence. The Whitehead theorem says that $f: X \to Y$ is a homotopy equivalence if and only if it induces an isomorphism on all $\pi_i$. Because $\Omega X$ and $\Omega Y$ have the homotopy type of CW-complexes, we can replace them by CW-complexes at the price of replacing homotopy equivalence with weak homotopy equivalence. Now note that $Map_+(S^n,\Omega X) \cong Map_+(S^{n+1},X)$ and similarly $Map_+(S^n,\Omega Y) \cong Map_+(S^{n+1},Y)$. Under this isomorphism $(\Omega f)_*$ corresponds to $f_*$.
First Order Logic - Axiom vs Formula
For propositional logic we have: Language: connectives and sentential variables: $p_1,p_2, \ldots$. Formulas are expressions formed with variables and connectives according to the formation rules; e.g. $\lnot p_1, p_1 \to p_2$ are examples of formulas of propositional calculus. A schema is an expression of the meta-language where the (meta-)variables $\varphi, \psi,\ldots$ stand for formulas. An axiom schema is an expression of the meta-language, like $\varphi \lor \lnot \varphi$, and it must be read as a "recipe" to generate infinitely many axioms (called instances of the schema). How? By replacing uniformly the meta-variables with formulas of the language. Thus, from the axiom schema $\varphi \lor \lnot \varphi$ we can generate the axioms: $p_1 \lor \lnot p_1, p_2 \lor \lnot p_2, (p_1 \to p_2) \lor \lnot (p_1 \to p_2), \ldots$ The same holds for predicate calculus (with the obvious changes regarding the basic elements of the language and the formation rules). An example of an axiom schema is $\forall x \alpha \to \alpha[t/x]$, and a corresponding instance is $\forall x (x \ge 0) \to (1 \ge 0)$. In conclusion: axioms are expressions of the (formal) language; axiom schemata are expressions of the meta-language.
Coefficients in products and powers of large polynomials
The coefficient of $\prod_{i=1}^nx_i^{k_i}$ in $f$ is the same thing as $(\partial_{(\bar{k})} f)|_{\bar{x}=\bar{0}}$, where $\partial_{(\bar{k})}=\prod_{i=1}^n\frac{\partial_{x_i}^{k_i}}{k_i!}$. You can take advantage of this to simplify calculations. As a small example, consider $$\begin{align}f(x,y,z) &= 1+x +y -2xy + 3xz^2 & m&=x^2y^2z^2\end{align}$$ with $r = 5$. $$\begin{align} (\partial_{(2,2,2)} f^5)|_{(x,y,z)=\bar{0}} & =\frac{1}{8}(\partial_{x}^2\partial_{y}^2\partial_{z}^2 f^5)|_{(x,y,z)=\bar{0}}\\ & =\frac{1}{8}(\partial_{x}^2\partial_{y}^2\partial_{z} 5f^4f_z)|_{(x,y,z)=\bar{0}}\\ & =\frac{1}{8}\partial_{x}^2\partial_{y}^2 (20f^3f_z^2+5f^4f_{zz})|_{(x,y,z)=\bar{0}}\\ \end{align} $$ Now that we will never again differentiate by $z$, it is acceptable to evaluate at $z=0$. Let $g(x,y)=f(x,y,0)=1+x+y-2xy$. Note that $f_z(x,y,0)=0$ and $f_{zz}(x,y,0)=6x$, simplifying what we have so far. We are left with: $$\begin{align} (\partial_{(2,2,2)} f^5)|_{(x,y,z)=\bar{0}} & =\frac{1}{8}\partial_{x}^2\partial_{y}^2 (30xg^4)|_{(x,y)=\bar{0}}\\ & =\frac{1}{8}\partial_{x}^2\partial_{y} (120xg^3g_y)|_{(x,y)=\bar{0}}\\ & =\frac{1}{8}\partial_{x}^2(360xg^2g_y^2+120xg^3g_{yy})|_{(x,y)=\bar{0}}\\ & =\frac{1}{8}\partial_{x}^2(360xg^2g_y^2)|_{(x,y)=\bar{0}}\\ \end{align} $$ where we used the fact that $g_{yy}=0$. As we will never again differentiate with respect to $y$... let $h(x)=g(x,0)=1+x$. $$\begin{align} (\partial_{(2,2,2)} f^5)|_{(x,y,z)=\bar{0}} & =\frac{1}{8}\partial_{x}^2(360xh^2(1-2x)^2)|_{(x)=\bar{0}}\\ & =\frac{1}{8}\partial_{x}^2(360x(1+x)^2(1-2x)^2)|_{(x)=\bar{0}}\\ & =\frac{1}{8}\partial_{x}^2(360x(1+2x+x^2)(1-4x+4x^2))|_{(x)=\bar{0}}\\ & =\frac{1}{8}\partial_{x}^2(360x(1+2x)(1-4x))|_{(x)=\bar{0}}\\ & =\frac{1}{8}\partial_{x}^2(360x(1-2x-8x^2))|_{(x)=\bar{0}}\\ & =\frac{1}{8}\partial_{x}^2(-720x^2)|_{(x)=\bar{0}}\\ & = \frac{1}{8}(-720)(2)\\ & = -180 \end{align} $$ where in this final block we have stayed conscious of the degree to which we are differentiating and that we will evaluate at $0$. I think this strategy (using derivatives and evaluating at $0$) will generally be more efficient than directly computing $f^r$. This should also be applicable to products of polynomials.
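The result is easy to confirm by brute force with a computer algebra system; a short sympy sketch (my own check):

```python
from sympy import symbols, expand

x, y, z = symbols('x y z')
f = 1 + x + y - 2*x*y + 3*x*z**2

# Expand f^5 fully and read off the coefficient of x^2 * y^2 * z^2.
coeff = expand(f**5).coeff(x, 2).coeff(y, 2).coeff(z, 2)
print(coeff)  # -180
```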
Inversion of linear combination of discrete shift operators
If $f$ has nice enough properties, a $z$ transform should work. Namely, consider $$G(z)=\sum_{n=-\infty}^{\infty} z^n f(n)$$
Is there a possibility that ZFC is inconsistent and, if it is, do we have to throw out our old proofs?
First of all, proofs don't exist in a vacuum. We cannot prove the consistency of $\sf ZFC$ from theories like $\sf ZFC$ itself, or even $\sf PA$. But we can prove the consistency of $\sf ZFC$ from other, stronger theories. For example, from $\sf ZFC+I$, where $\sf I$ is the axiom stating that there exists an inaccessible cardinal, we can prove that $\sf ZFC$ is consistent. This is an answer to your last question. To prove $\sf ZFC$ is consistent we need to work in theories which are not "just $\sf ZFC$ itself". Now. Is it possible that $\sf ZFC$ is inconsistent? Yes. It is possible. What happens if it is inconsistent? No bridges will collapse, that's for sure. We'll investigate the inconsistency to see what caused it. After we've understood that, we'll try to rescue whatever we can from the mathematics of the last 200 years, and we'll proceed as before, pushing mathematics to the limit. And as for the arguments that $\sf ZFC$ is consistent: well, self-evidence, for one. I think that a lot of the axioms are quite natural. Perhaps the power set axiom is a bit too much, but the rest of the axioms are really quite natural and "un-intruding", in the sense that you don't feel them when you do your work. Which is a good thing for a foundational theory. Another argument is that we haven't found any contradictions so far, and we've been pushing for nearly a century since $\sf ZFC$ was established. Some very smart people have looked into that, and if they haven't found any, there's a good chance we won't find that contradiction either. Both these arguments are a bit silly, and a bit circular or fallacious. There are no "good" arguments. This is a matter of belief; if you want to believe that $\sf ZFC$ is inconsistent, then by all means find a better alternative. The rest of us will continue to do mathematics as we did until now.
Integral of exponential
One way to do this is to integrate $e^{iaz - b z^2}$ around a rectangular contour with corners at $-R$, $R$, $R + i a/(2b)$ and $-R + i a/(2b)$, then taking $R \to +\infty$.
What is this symbol in the definition of the homotopy extension property?
It's the subset inclusion map.
On differential polynomials
I have an algebra $[E,F]=I, [J,E]=E, [J,F]=-F$, $I$ is central. I would like to have a differential polynomial realization of it, but such that it is not degenerate when $I$ acts as $0$. For example the dif. polynomial $E=i\sqrt{a} x$, $F=i\sqrt{a} \partial_x$, $I=a$, $J=x\partial_x$ where $a$ is a number, is a realization, but $E=F=0$ when $a=0$. Is there another one such that neither $E$ nor $F$ is $0$ when $a=0$ ?
Finding the smooth curve of minimum length between two points with some constraints
A priori there's no reason to think the solution should be of the form you suggest. Here's one possible approach (although this is basically the idea behind the calculus of variations). Suppose we have a solution $g$. Consider a small perturbation $g+\epsilon h$, where $\epsilon$ is small and $h$ is a smooth function with $h(10)=h(30)=h'(10)=0$ and $\int_{10}^{30} h = 0$ (so the perturbed curve still has the correct end-points and satisfies conditions (1) and (2)). Since $g$ minimises the arc length, the length of the perturbed curve must be minimised at $\epsilon=0$. Hence $$ \frac{\mathrm{d}}{\mathrm{d}\epsilon}\bigg|_{\epsilon=0} \int_{10}^{30} \sqrt{1+(g+\epsilon h)'^2} \ \mathrm{d}x = 0.$$ Taking the derivative inside the integral we get $$\int_{10}^{30} \frac{g'h'}{\sqrt{1+g'^2}} \ \mathrm{d}x = 0.$$ Integrating by parts we then see that $$\int_{10}^{30} \left(\frac{g'}{\sqrt{1+g'^2}}\right)' h \ \mathrm{d}x = 0.$$ This must hold for all smooth perturbations $h$ satisfying the above conditions. By taking $h$ to be appropriate bump functions, this is only possible if $$\left(\frac{g'}{\sqrt{1+g'^2}}\right)'$$ is constant. One can now solve this by an appropriate trig substitution to see that the solution curve is an arc of a circle.
Prove by induction: power/chain rule combination
Using only the product rule, here is the induction step (for $n\ge2$). Set $g(x)=f(x)^{n-1}$; by the induction hypothesis, $g'(x)=(n-1)f'(x)f(x)^{n-2}$ and \begin{align} D(f(x)^n) &=D(f(x)g(x))\\[4px] &=f'(x)g(x)+f(x)g'(x)\\[4px] &=f'(x)f(x)^{n-1}+f(x)\cdot (n-1)f'(x)f(x)^{n-2}\\[4px] &=f'(x)f(x)^{n-1}+(n-1)f'(x)f(x)^{n-1}\\[4px] &=nf'(x)f(x)^{n-1} \end{align} You can supply the base case for $n=2$.
If $Y_n=\min\{M_n,7\}$ and $\{M_n\}$ is a martingale wrt ${X_n}$, show that ${Y_n}$ is a supermartingale wrt ${X_n}$
The function $f(x)=\min\{x,7\}$ is concave, hence by Jensen's inequality for conditional expectation $$ \mathbb{E}[Y_{n+1}|\mathcal{F}_n]=\mathbb{E}[f(M_{n+1})|\mathcal{F}_n]\leq f(\mathbb{E}[M_{n+1}|\mathcal{F}_n])=f(M_n)=Y_n$$
Integral $\int^{ \pi /2}_{0} \ln (\sin x)\ dx$
Hint: Try \begin{align}\int^{ \pi /2}_{0} \ln \sin x \, dx &= \int^{ \pi /2}_{0} \ln \left(2 \sin \frac{x}{2} \cos \frac{x}{2} \right) \, dx \\ &=\int^{ \pi /2}_{0} \ln 2 \, dx + \int^{ \pi /2}_{0}\ln \left( \sin \frac{x}{2} \right) \, dx + \int^{ \pi /2}_{0} \ln \left( \cos \frac{x}{2} \right) \, dx \\ &= \frac{\pi}{2} \ln 2 + \underbrace{2 \int^{ \pi /4}_{0} \ln (\sin u) \, du}_{\text{Let }u=x/2} + \underbrace{2 \int^{ \pi /4}_{0} \ln (\cos u) \, du}_{\text{Let }u=x/2} \end{align}
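Carried to its end, the hint yields the classical value $-\frac{\pi}{2}\ln 2$; here is a quick numerical check (my own, using scipy, whose quadrature copes with the integrable singularity at $0$):

```python
import numpy as np
from scipy.integrate import quad

value, _ = quad(lambda t: np.log(np.sin(t)), 0, np.pi / 2)
print(value, -np.pi / 2 * np.log(2))  # both approximately -1.0888
```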
Prove that $q(x) := \prod\limits_{i=0}^{n-1}(x-A[i])-\prod\limits_{i=0}^{n-1}(x-B[i])$ is only the null polynomial if $A$ is a permutation of $B$
For $i=0,...,n-1$ you have $q(A[i])=0$, as both products are zero. The coefficient of the leading term of both products is 1, thus the leading terms $x^n-x^n$ cancel leaving you with $\deg q\le n-1$. Now you have a polynomial of degree at most $n-1$ that has at least $n$ roots, …
If $a\geq 0$ and $a\leq r$ for every positive rational $r$, can we show $a=0$ without the law of excluded middle?
Many systems of constructive analysis reject the classical axiom: $$ (\forall x \in \mathbb{R})[x < 0 \lor x = 0 \lor x > 0].\qquad\qquad\qquad (*) $$ Intuitively, this rejection comes from the fact that, in general, there is no way to decide, given an $x \in \mathbb{R}$, which of the three alternatives holds. However, the analogous formula to $(*)$ for $\mathbb{Q}$ is constructively acceptable, because given a fraction we can tell which option holds by inspection. There are some systems of constructive analysis which are compatible with the existence of infinitesimals. In such systems, the axiom above will really fail to hold, and the result from the question will not be provable. However, there are some constructive systems where you can prove the result from the question, which is $$ (\forall x \in \mathbb{R})[x \geq 0 \land (\forall r \in \mathbb{Q}^+)[x \leq r] \to x = 0].\qquad\qquad\qquad (**) $$ Unfortunately, work in constructive mathematics requires careful attention to the definitions of "real number" and the order relations on the reals. Different constructive systems define these in different ways, which affects the theorems that can be proved. In the systems I have in mind, a real number is defined to be a quickly converging Cauchy sequence of rationals, that is, a sequence $(x_n)$ of rationals such that $|x_n - x_m| \leq 2^{-n}$ when $n < m$. The relation $x \leq y$ is defined as $$ (\forall k)[x_k \leq y_k + 2^{-k+1}], $$ and $x = y$ is defined as $x \leq y \land y \leq x$. The real number $0$ is defined as the constant Cauchy sequence made from $0_\mathbb{Q}$, and the same method is used to embed $\mathbb{Q}$ into $\mathbb{R}$. These definitions, for example, are compatible with Bishop's system for constructive analysis. In such systems, we can prove (**) without the law of the excluded middle (but using the fact that (*) holds for $\mathbb{Q}$). To do so: we already have $0 \leq x$ by assumption, so we need to prove $x \leq 0$ and we are done. This means we need to prove that $(\forall k)[x_k \leq 2^{-k+1}]$. For each positive rational $r$, we already know that $x \leq r$. In particular we know $(\forall r > 0)(\forall k)[x_k \leq r + 2^{-k+1}]$. Now we fix $k$ and apply (*) in the form of the proposition $$ x_k < 2^{-k+1} \lor x_k = 2^{-k+1} \lor x_k > 2^{-k+1}. $$ The disjunction here is decidable, because $x_k$ is an explicit rational. And we can prove that $x_k > 2^{-k+1}$ is impossible because, if it happens, then we can write (constructively) $x_k = s + 2^{-k+1}$ with $s > 0$, and then let $r = s/2$ and we will have $x_k > r + 2^{-k+1}$, which is impossible because we know that $x \leq r$ in the sense of $\mathbb{R}$. Now, although excluded middle is not provable constructively, the following tautology is provable constructively: $$ (A \lor B \lor C) \land (\lnot C) \to (A \lor B). $$ In our case, that means that we can prove $(x_k < 2^{-k+1} \lor x_k = 2^{-k+1})$ for all $k$. Which is exactly what we need to prove to show that $x \leq 0$ in the sense of $\mathbb{R}$, completing the proof of $(**)$.
Calculate number of primes
Suppose $1<k\leq2006$. Then $k$ is a factor of $2016!$, and also of $2016!+k$, which equals $n+(k-1)$. Thus, each number in your list has a factor greater than $1$, and is therefore not prime.
Non-proportional, inverse algorithm
There are many possible answers: $$y = 1 - \left(\frac{x}{30}\right)^k$$ or $$y = \left(\frac{30- x}{30}\right)^k$$ for some positive value of $k$ would each do what you ask: you might experiment. If $k=1$ then the relationship is linear.
Schroeder-Bernstein Theorem
One can proceed like this: The map $n \mapsto (n,0)$ is injective from $\mathbb{N}$ to $\mathbb{N}^2$. The map $(m, n) \mapsto 2^m 3^n$ is injective from $\mathbb{N}^2$ to $\mathbb{N}$ (here you need to use uniqueness of prime factorization). Thus by Schröder–Bernstein there is a bijection.
Evaluating an improper integral yields an indeterminate answer?
Hint: You can also rewrite the antiderivative $$f(t)=\ln(1-t)-\ln(1+t)-\frac{\ln(1-t)}{t} -\frac{\ln(1+t)}{t} $$ as $$f(t)=-\frac{(1+t) \log (1+t)}{t}-\frac{(1-t) \log (1-t)}{t}$$ and remember that, when $x$ goes to $0$, $x \log(x)$ goes to $0$ too. So $f(1)=-2 \log(2)$ and $f\left(\frac {1}{2}\right)=-3 \log(3)+4 \log(2)$, and the value of the integral is then $3 \log(3)-4 \log(2)$.
Show that if $F \subset D \subset E$ then D is a field
It's already an integral domain; all you're missing is inverses. Let $p_\beta(x) = a_0+a_1x+\ldots + a_nx^n$ be the minimal polynomial for $\beta$ over $F$ (for $\beta \neq 0$ the constant term $a_0$ is nonzero, by minimality). Then note $$-a_0^{-1}(a_1+a_2\beta+a_3\beta^2+\ldots + a_n\beta^{n-1})\cdot\beta=1$$ So $\beta^{-1}\in D$, showing inverses. Here we use that every such $\beta\in E$ is algebraic over $F$, since $E$ is finite over $F$, so that there is a minimal polynomial for it over $F$.
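A worked instance of the formula (my own illustration): take $F=\mathbb Q$, $\beta=\sqrt 2$, with minimal polynomial $p_\beta(x) = -2 + 0\cdot x + x^2$, so $a_0=-2$, $a_1=0$, $a_2=1$. The recipe gives $$\beta^{-1} = -a_0^{-1}(a_1 + a_2\beta) = \tfrac12\sqrt 2,$$ and indeed $\tfrac12\sqrt 2 \cdot \sqrt 2 = 1$.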
Center of a group of order 77
Hint: There is only one such group of order $77$: $$|G| = 77 = 7\cdot 11,\; \text{and}\;\;7, \; 11\;\text{prime};\;\;7\not\mid (11 - 1)\;\implies G \;\text{is abelian}$$ You need only know that $G$ must be abelian for this problem. The fact that it's cyclic then follows since clearly, $$\;\gcd(7, 11) = 1 \iff \mathbb Z_{7}\times \mathbb Z_{11} \cong \mathbb Z_{77}$$ Theorem: Let $G$ be a group of order $pq$, where $p,\,q$ are prime, $p\lt q$, and $p$ does not divide $q−1$. Then $G$ is cyclic.
How to recover the Logarithm of rotations in the plane
You could use the Taylor series of $\exp$ (hint: the powers of $\left[ \matrix{0 & -1\cr 1 & 0\cr} \right]$ have a simple pattern). Or diagonalize...
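Carrying out the Taylor-series route (a standard computation, added here for completeness): writing $J = \left[ \matrix{0 & -1\cr 1 & 0\cr} \right]$, the pattern is $J^2 = -I$, so the even and odd parts of the series collapse to $$ \exp(\theta J) = I\cos\theta + J\sin\theta = \left[ \matrix{\cos\theta & -\sin\theta\cr \sin\theta & \cos\theta\cr} \right], $$ so $\theta J$ is a logarithm of the rotation by $\theta$.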
Difference between the definitions regarding distribution of prime numbers
They are slightly different. The first gives an asymptotic statement with limiting constant equal to $1$, but no bracketing. The second doesn't give a single asymptotic constant, but does give a bracketing (for $x$ large enough). Obviously, $A'\le1\le A$.
Intuitive reason why Fermat's Last Theorem holds
You will find a lot of intuitive reasons why Fermat's Last Theorem holds in the following book: "Modular Forms and Fermat's Last Theorem", by Cornell, Silverman and Stevens. I quite liked it in my young years.
Does $A\times A\cong B\times B$ imply $A\cong B$?
The answer is no. It is known that ZF + '$\aleph_1$ and $2^{\aleph_0}$ are incomparable' is consistent if ZF is. Assume that $\aleph_1$ and $2^{\aleph_0}$ are incomparable (over ZF); then:

1. $\aleph_1,\ 2^{\aleph_0}<\aleph_1+2^{\aleph_0}$, and
2. $\aleph_1+2^{\aleph_0}\neq\aleph_1\cdot 2^{\aleph_0}$.

Claim 1 is trivial. To prove claim 2, we can use the following theorem: Theorem (ZF) If $\mathfrak{p}$ is a (possibly non-well-orderable) cardinal and $\alpha$ is an aleph and they satisfy $\mathfrak{p}+\alpha=\mathfrak{p}\cdot\alpha$, then they are comparable. You can find this theorem and its proof in Jech's 'Axiom of choice', Lemma 11.16. We will check that $(\aleph_1+2^{\aleph_0})^2=(\aleph_1\cdot 2^{\aleph_0})^2=\aleph_1\cdot 2^{\aleph_0}$. It is just a calculation: $$ \begin{align}(\aleph_1+2^{\aleph_0})^2 &=\aleph_1+2^{\aleph_0}+2\cdot \aleph_1\cdot2^{\aleph_0}\\ &=\aleph_1+2^{\aleph_0}+\aleph_1\cdot2^{\aleph_0}\\ &=\aleph_1+(\aleph_1+1)\cdot 2^{\aleph_0}\\ &=\aleph_1+\aleph_1\cdot 2^{\aleph_0}\\ &=\aleph_1\cdot(1+2^{\aleph_0})\\ &=\aleph_1\cdot2^{\aleph_0}.\\ \end{align}$$ Also, the square of $\aleph_1\cdot 2^{\aleph_0}$ is $\aleph_1\cdot 2^{\aleph_0}$ itself, so $(\aleph_1+2^{\aleph_0})^2=(\aleph_1\cdot 2^{\aleph_0})^2$. However, we already know that $\aleph_1+2^{\aleph_0}$ and $\aleph_1\cdot 2^{\aleph_0}$ are not equal. Taking sets $A$ and $B$ with $|A| = \aleph_1+2^{\aleph_0}$ and $|B| = \aleph_1\cdot 2^{\aleph_0}$ therefore gives $A\times A\cong B\times B$ while $A\not\cong B$.
Finding the number of fleets formed form a certain group of ships
Your answer is correct provided that each of the cargo ships, cruisers, battleships, destroyers and aircraft carriers is unique in nature. The answer may seem high at first but then realize that just swapping out one particular unit with another leads to a different combination. You often end up with huge numbers in combinatorics.
Rotation Inequality Conjecture
As noted in the comments, $f^*$ and $g^*$ are not necessarily functions after rotation. For example, consider the upper semicircle $y = \sqrt{1-x^2}$; any amount of rotation will cause it to no longer be a function.$\newcommand{\degs}{^\circ} \newcommand{\cut}{\, \backslash \,} \newcommand{\AND}{\ {\rm{\small{AND}}}\ } \newcommand{\OR}{{\ \rm{\small{OR}}}\ } \newcommand{\NOT}{\ {\rm{\small{NOT}}}\ } \newcommand{\Implies}{\Rightarrow} \newcommand{\If}{\Leftarrow} \newcommand{\Iff}{\Leftrightarrow} \newcommand{\x}{\times} \newcommand{\R}{\mathbb{R}} \newcommand{\C}{\mathbb{C}} \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\E}{\operatorname{\rm{\small{E}}}} \renewcommand{\Re}{\operatorname{Re}} \renewcommand{\Im}{\operatorname{Im}} \newcommand{\dash}{\textrm{-}} \newcommand{\der}{\partial} \newcommand{\del}{\nabla} \newcommand{\inv}{{\sim}} \newcommand{\eps}{\varepsilon} \newcommand{\dedent}{\!\!\!\!\!\!\!\!\!}$ However, we really do need the rotated curves to be functions if we want a notion of "above" and "below" to make sense. To see what I mean, let $f$ denote the upper unit semicircle I defined earlier, and let $g$ be identically zero on $[-1,\ 1]$. If we rotate them both clockwise we get the following: In the red zone, the rotated $f^*$ has no $g^*$ underneath, leaving the comparison undefined. So we need to ensure that $f^*$ and $g^*$ are well-defined functions. To do so, we need to impose some special conditions onto both $f$ and $g$. I don't know what the bare minimum necessary conditions are, but I do have a reasonably broad set of sufficient conditions that work. I will state them as part of the following proposition which, I think, is the essential gist of what you're wanting to prove: Proposition: Let $\Delta\theta$ be an angle in the open interval $(0,\ \pi/2)$, and let $f$ and $g$ be differentiable functions on the closed interval $[a,b]$ such that the following hold: $f(a) = g(a) \AND f(b) = g(b)$, $f(x) \geq g(x)$ for all $x \in [a,b]$, $f'(x),\ g'(x) < \cot \Delta\theta$ for all $x \in [a,b]$. Then the counterclockwise rotations $f^*$ and $g^*$ by $\Delta\theta$ exist on the shifted interval $[a^*, b^*]$ (to be computed later) and satisfy $f^*(x) \geq g^*(x)$ on it. For ease of presentation, this proposition is restricted to counterclockwise rotations, but the more general version for clockwise rotations is stated similarly to this one. The key condition I imposed is that the derivatives of both $f$ and $g$ must be bounded by the cotangent of the rotation angle. To see the intuitive reason I did this, I pictured the rotation being a rotation of the axes instead of the curves. In that case, the question of whether the rotated curves will satisfy the vertical line test becomes equivalent to asking whether any line of the form $$ y = (\cot \Delta\theta) x + b $$ intersects any one unrotated curve twice. With that I'll dive into the proof of the Proposition. (Note: every step may not be perfectly rigorous, but I think it works pretty well, and it's probably briefer than a thorough proof.) The first thing we need to show is that $f^*$ and $g^*$ are well-defined functions. To do that, let's suppose that they're not. In that case the rotations fail the vertical line test, or equivalently, there exists a line $$ y = (\cot \Delta\theta)x + b $$ that intersects either $f$ or $g$ twice. Assume without loss of generality that $f$ is guilty of this (the logic is the same for $g$). 
Let's call the two $x$-values of the two intersection points $\alpha$ and $\beta$. In that case the two points $\big(\alpha,\ f(\alpha) \big)$ and $\big(\beta,\ f(\beta) \big)$ must lie on a line with slope $\cot \Delta\theta$. Therefore $$ \frac{f(\beta) - f(\alpha)}{\beta - \alpha} = \cot \Delta\theta $$ Then by the Mean Value Theorem, there must exist some $c$ between $\alpha$ and $\beta$ such that $f'(c) = \cot \Delta\theta$. This contradicts our assumption that $f'(x) < \cot \Delta\theta$ for all $x \in [a,b]$. $\checkmark$ Now we prove that the rotations still preserve the order. Consider any $x^* \in [a^*, b^*]$. We want to show $f^*(x^*) \geq g^*(x^*)$. Now the points $\big(x^*, f^*(x^*) \big)$ and $\big(x^*, g^*(x^*) \big)$ correspond to two points $\big(x_f, f(x_f) \big)$ and $\big(x_g, g(x_g) \big)$ in unrotated space. The statement $f^*(x) \geq g^*(x)$ is equivalent to saying that $x_f \geq x_g$ i.e. $f^*(x^*)$ is above $g^*(x^*)$ if and only if the corresponding point $\big(x_f, f(x_f) \big)$ is upslope from $\big(x_g, g(x_g) \big)$. To prove it, suppose not. Suppose $x_f < x_g$. Then the two points $\big(x_f, f(x_f) \big)$ and $\big(x_g, g(x_g) \big)$ lie on a line of slope $\cot \Delta\theta$ and so $$ \frac{g(x_g) - f(x_f)}{x_g - x_f} = \cot \Delta\theta $$ Now we assumed that $f(x) \geq g(x)$ for any $x$, so that means $f(x_g) \geq g(x_g)$. Hence $$ \frac{f(x_g) - f(x_f)}{x_g - x_f} \geq \frac{g(x_g) - f(x_f)}{x_g - x_f} = \cot \Delta\theta $$ So by the Mean Value Theorem we again have a $c$ between $x_f$ and $x_g$ such that $f'(c) \geq \cot \Delta\theta$. A contradiction. $\checkmark$ And lastly a note on how to compute $a^*$ and $b^*$. We can treat a point $(x,y)$ on the plane as the complex number $x+yi$. If we want to rotate that point about the origin by $\Delta\theta$ radians (either positive or negative), we simply multiply: $$ x^* + y^*i = (x+yi)e^{\theta i} $$ which yields $$ x^* + y^*i = x \cos \Delta\theta - y \sin \Delta\theta + (y \cos \Delta\theta + x \sin \Delta\theta)i $$ Substituting \begin{align} x &= a,\ b \\ y &= f(a),\ f(b) \end{align} respectively and taking the real part will give $a^*$ and $b^*$. For your particular $f$ and $g$: \begin{align} f(x) &= \sqrt{1-x^2} \\ g(x) &= 1-x \end{align} on the interval $[0,1]$ I get $a^*$ and $b^*$: \begin{align} a^* &= \Re \left(i \cdot e^{\pi/4 i} \right) = \Re \left(e^{3\pi/4 i} \right) = \cos(3\pi/4) = -\sqrt{2}/2 \\ b^* &= \Re \left(1 \cdot e^{\pi/4 i} \right) = \cos(\pi/4) = \sqrt{2}/2 \end{align} yielding an interval length of $\sqrt{2}$ which is expected since that is the length of the diagonal of $y = 1-x$ on $[0,1]$.
Inequality Question-Maximum
Use Cauchy-Schwarz on the vectors $$x = \begin{pmatrix}a+b \\ c+d \\ e+f\end{pmatrix}, \quad y = \begin{pmatrix}1 \\ 1 \\ 1\\ \end{pmatrix}$$ Then $$x\cdot x = (a+b)^2 + (c+d)^2 + (e+f)^2 \\= a^2 + b^2 + c^2 + d^2 + e^2 + f^2+ 2ab+2cd+2ef \\=6+2\times3 \\=12$$ Also, $y\cdot y = 3$, and $x \cdot y = a+b+c+d+e+f$, the quantity you're looking to maximise. Cauchy-Schwarz says $(x \cdot y)^2 \leq (x\cdot x)(y \cdot y)$, so $$ (a+b+c+d+e+f)^2 \leq 12 \times 3 \\a+b+c+d+e+f \leq 6$$ Also, observe that $a=b=c=d=e=f=1$ achieves this value.
Closed form of the series
Another approach, using binomial coefficients: $$\begin{align} \sum_{i=1}^{n}(x+i)^4&=\sum_{i=1}^{n}\left[\binom{x+i+3}4+11\binom{x+i+2}4+11\binom{x+i+1}4+\binom{x+i}4\right]\\ &=\ \ \quad \binom{x+n+4}5+11\binom{x+n+3}5+11\binom{x+n+2}5+\binom{x+n+1}5\\ &\ \ \quad -\binom{x+4}5-11\binom{x+3}5-11\binom{x+2}5-\binom{x+1}5\end{align}$$ by the hockey-stick identity (the subtracted terms come from the lower end of the sum), which is nice and symmetrical and can be easily evaluated. If a factorised form is required, then substitute $y=x+n$: the first four terms collapse to the rather untidy $$\frac 1{30}y(y+1)(2y+1)(3y^2+3y-1)$$ and the subtracted terms give the same expression with $y$ replaced by $x$.
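A quick symbolic check of this closed form (my own sketch with sympy):

```python
from sympy import symbols, summation, simplify

x, i, n = symbols('x i n', integer=True)

lhs = summation((x + i)**4, (i, 1, n))
S = lambda t: t*(t + 1)*(2*t + 1)*(3*t**2 + 3*t - 1) / 30
print(simplify(lhs - (S(x + n) - S(x))))  # 0
```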
Integration of $\int\limits_{ -\infty }^{\infty} x^2 e^{-x^2/2} \; dx$.
This looks like a correct use of integration by parts.
Construct a discontinuous solution of a given autonomous differential equation from a continuous one
By definition, a solution of an ODE satisfies the given ODE at all points in its domain. Therefore it is differentiable, hence continuous, in its domain. There is no such thing as a "discontinuous solution" of an ODE. Assume that $f$ satisfies the assumptions of the existence and uniqueness theorem for ODE's. If $f(y_0)=0$ then $y(t):\equiv y_0$ is a solution. Between two zeros $y_0$ and $y_1$ of $f$ the right side $y\mapsto f(y)$ does not change sign. It follows that for any initial point $(t_*,y_*)$ with $y_0<y_*<y_1$ the solution of the IVP $$y'=f(y),\qquad y(t_*)=y_*$$ is monotonic and satisfies $y_0<y(t)<y_1$ for all $t$ in the domain of $y(\cdot)$. The cutting procedure you are describing can therefore not lead to a continuous function. If, e.g., the function $f$ is given by $f(y):=3|y|^{2/3}$ then your cutting procedure, applied at $y=0$, produces new solutions.
Fundamental weights of $A_n$
If I understand the notation well, the $i$th fundamental weight should be $e_1^*+\dots+e_i^*$.
Finding X and Y Intersections of "Ray D1" and "RayD2"
One way to find the intersection point of the two lines is to first find the equations of the lines (but see the remark at the end of this answer). Recall that the equation of a line with slope $m$ and $y$-intercept $b$ is $y=mx+b$. The line $d_1$ (assuming the two points you gave define it) has slope ${2-0\over 0-(-1)}=2$ and $y$-intercept $2$ (looking at the point $(0,2)$), so its equation is $$\tag{1}y=2x+2.$$ The line $d_2$ has slope ${0-(-1)\over 2-0}={1\over2}$ and $y$-intercept $-1$ (looking at the point $(0,-1)$), so its equation is $$\tag{2}y={1\over2}x-1.$$ To find the intersection point of the lines, set the right hand sides of $(1)$ and $(2)$ equal to each other, $$ {1\over2}x-1 = 2x+2, $$ and solve for $x$. The solution to the above equation is $x=-2$. This gives the $x$-coordinate of the point of intersection. The $y$-coordinate can be found by substituting this $x$-value into either equation $(1)$ or $(2)$. Using equation $(1)$, we get $y=2\cdot(-2)+2=-2$. So the point of intersection is $(-2,-2)$. Remark: Note that if you draw the lines, you can take advantage of the symmetry displayed to find the point of intersection. (It lies on the line $y=x$; note the slopes of the lines are reciprocals of each other.)
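For completeness, the same computation done symbolically (a small sympy sketch of mine):

```python
from sympy import symbols, solve, Rational

x, y = symbols('x y')
# Solve y = 2x + 2 and y = x/2 - 1 simultaneously.
print(solve([y - (2*x + 2), y - (Rational(1, 2)*x - 1)], [x, y]))
# {x: -2, y: -2}
```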
Prove that if $p$ is a prime number that does not divide $a$, then $a^{p^2}\equiv a^p \pmod{p^2}$
You have $$a^{p^2} - a^p = a^p\left(a^{p^2-p} -1\right) = a^p\left(a^{\phi(p^2)}-1\right)\equiv a^p(1-1) \equiv 0 \pmod{p^2},$$ where Euler's theorem is used in the last step.
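A small numerical instance (my own check): take $p=3$, $a=2$. Then $a^{p^2}=2^9=512$ and $a^p=2^3=8$, and indeed $512-8=504=9\cdot 56$, so $2^9\equiv 2^3 \pmod 9$.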
Standard free resolution of a Hopf algebra: $\exists$ an explicit chain homotopy?
You just need to put the 1 on the other side and to include a sign: the correct map is $l_n(h_1\otimes\cdots\otimes h_n) = (-)^n h_1\otimes\cdots\otimes h_n\otimes 1$. For example, $l(d(h_1\otimes h_2)) = -\epsilon(h_1)h_2 \otimes 1 + h_1h_2 \otimes 1$ and $d(l(h_1\otimes h_2)) = \epsilon(h_1)h_2\otimes 1 - h_1h_2 \otimes 1 +h_1\otimes h_2$.
Help with solving PDE $A\frac{\partial\omega}{\partial t} = B\frac{\partial^2\omega}{\partial \eta^2} + C$
Using the Fourier transform $$ \hat{f}(t,\xi) = \int_{-\infty}^{\infty} f(t,\eta) \, e^{-i\xi\eta} \, d\eta. $$ we get $$ A\partial_t\hat{f}(t,\xi) = -B\xi^2\,\hat{f}(t,\xi) + C\,2\pi\,\delta(\xi). $$ The homogeneous equation, $$ A\partial_t\hat{f}(t,\xi) = -B\xi^2\,\hat{f}(t,\xi) $$ has solutions $$ \hat{f}_h(t,\xi) = \hat{R}(\xi)\,e^{-B\xi^2t/A}, $$ where $\hat{R}(\xi)$ is some differentiable function. One particular solution to the inhomogeneous equation is $$ \hat{f}_p(t, \xi) = \frac{2\pi\,C}{2B}\delta''(\xi) $$ since $\xi^2\delta''(\xi) = 2\delta(\xi).$ Thus the complete family of solutions is given by $$ \hat{f}(t, \xi) = \hat{R}(\xi)\,e^{-B\xi^2t/A} + \frac{2\pi\,C}{2B}\delta''(\xi). $$ An inverse Fourier transform now gives $$ f(t,\eta) = R*\rho(t,\eta) - \frac{C}{2B}\eta^2, $$ where $R$ is the inverse transform of $\hat{R}$ and $\rho(t,\eta)$ is the inverse transform of $e^{-B\xi^2t/A}.$
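As a direct check of the particular solution in the original variables (my own verification): for $f_p(t,\eta) = -\frac{C}{2B}\eta^2$ we have $$ A\,\partial_t f_p = 0 \quad\text{and}\quad B\,\partial_\eta^2 f_p + C = B\left(-\frac{C}{B}\right) + C = 0, $$ so $f_p$ indeed satisfies the PDE.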
Scheme theoretic dual of $\mathbb P^n_k$
If you have a $k$-vector space $V$ you can form the symmetric algebra on $V$, $$\operatorname{Sym} V = k \oplus V \oplus V^{\otimes 2} / S_2 \oplus V^{\otimes 3} / S_3 \oplus \cdots $$ and it is clearly a graded $k$-algebra. We define $\mathbb{P} (V) = \operatorname{Proj} \operatorname{Sym} V$. The dual of $\mathbb{P} (V)$ is just $\mathbb{P} (V^\vee)$.
Dinner group rotation. Sixteen couples. Four couples per house. Each couple to meet all the others, no repetition.
The first number in each group is the host couple.

Spring: 1,14,15,16; 2,9,11,12; 3,6,7,10; 4,5,8,13.
Summer: 5,1,10,12; 6,11,13,14; 7,2,4,15; 8,3,9,16.
Fall: 9,1,4,6; 10,2,8,14; 11,3,5,15; 12,7,13,16.
Winter: 13,1,2,3; 14,5,7,9; 15,6,8,12; 16,4,10,11.
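A short script (my own) to verify the no-repetition property of this schedule — no pair of couples shares more than one dinner:

```python
from itertools import combinations

schedule = {
    "spring": [(1,14,15,16), (2,9,11,12), (3,6,7,10), (4,5,8,13)],
    "summer": [(5,1,10,12), (6,11,13,14), (7,2,4,15), (8,3,9,16)],
    "fall":   [(9,1,4,6), (10,2,8,14), (11,3,5,15), (12,7,13,16)],
    "winter": [(13,1,2,3), (14,5,7,9), (15,6,8,12), (16,4,10,11)],
}

seen = set()
for season, dinners in schedule.items():
    for group in dinners:
        for pair in combinations(sorted(group), 2):
            assert pair not in seen, f"repeated pair {pair} in {season}"
            seen.add(pair)
print("no repeats;", len(seen), "of the 120 possible pairs meet")  # 96 of 120
```

Note that over these four seasons each couple meets $4\times 3=12$ distinct couples, so the pairings are repetition-free but do not yet exhaust all $15$ possible partners.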
Surjectivity of multiplication by $n$ on the separable points of an elliptic curve
$\newcommand{\F}{\Bbb F} $ After thinking again about this question, I finally came up with a counter-example myself. Let me first notice that whenever $n$ is an integer coprime to the characteristic of a field $K$, then the multiplication-by-$n$ map $$ [n] : E(K^s) \to E(K^s)$$ is surjective on the points of an elliptic curve $E$ with coordinates in the separable closure $K^s$ of $K$. This is because under the coprimality assumption, $[n] : E \to E$ is an étale morphism (then apply Surjective étale morphisms on points). However, this fails if $n$ is no longer coprime to $\mathrm{char}(K)$. Namely, let $E$ be the elliptic curve given by $y^2 + xy = x^3 + t^4$ over $K = \F_2(t)$. We prove that the multiplication-by-2 map $$ [2] : E(K^s) \to E(K^s)$$ is not surjective on the separable closure $K^s$ of $K$. Let $Q = (t, t^2) \in E(K) \subset E(K^s)$. Let $P = (x, y) \in E(\overline K)$ be such that $Q = 2P$. Then we get (see Silverman's book AEC) $$t = x(2P) = \dfrac{x^4 - t^4}{x^2},$$ which implies $x^4 - t x^2 - t^4 = 0$. If we prove that the polynomial $$f(z) := z^4 + t z^2 + t^4 \in \F_2(t)[z] = K[z]$$ is irreducible, then from $f(x) = 0$ and $f' = 0$ we deduce that $f$ is the minimal polynomial of $x$ over $K$ and is not separable. Hence $x \not \in K^s$ and $P \not \in E(K^s)$, which shows that $ [2] : E(K^s) \to E(K^s)$ is not surjective. It remains to prove that $f(z) := z^4 + t z^2 + t^4 \in \F_2(t)[z] = K[z]$ is irreducible. First, by Gauss' lemma, $f$ is irreducible over $K$ iff it is irreducible over $\F_2[t]$. Then, observe that $f$ has no roots: if $z(t) \in \F_2[t]$ is a root of $f$, then plugging in $t=1$ yields $z(1)^4 + z(1)^2 + 1 = 0$, but this is impossible since $z(1) \in \F_2$. Then assume $z^4 + t z^2 + t^4 = (az^2 + bz + c)(\alpha z^2 + \beta z + \gamma)$. Then $a, \alpha \in \F_2[t]^{\times} = \{1\}$ and $$z^4 + z^3(b + \beta) + z^2(\gamma + c + b \beta) + z(b \gamma + c \beta) + c \gamma \;=\; z^4 + t z^2 + t^4.$$ This gives $\beta = -b$ and so $b(\gamma - c) = 0$. The case $b=0$ would imply $(t+c)c = t^4$, but again evaluating at $t=1$ makes it impossible to happen. So $\gamma = c$ and we are left with $b \beta = t = -b^2$, and for degree reasons, this cannot happen either. All in all, we proved that the above factorization is impossible. Therefore $z^4 + t z^2 + t^4$ is indeed irreducible over $\F_2[t]$, and this concludes the proof. $\hspace{4cm} \square$
Euler-Lagrange equation for improper action integral
The Euler-Lagrange equation still applies in this case. Even when the interval is $(a,b) = (-\infty, \infty)$, recall that the Euler-Lagrange equation is derived using $$\frac{d}{dt} S (q+ t \phi) \bigg|_{t=0} = 0$$ where $\phi$ is a function with compact support in $(a,b)$. The end points of this interval do not come into play.
Find and prove a recurrence relation that $t_n$ satisfies.
Hint: let $u_n$, $v_n$ and $w_n$ be the number of such sequences that end in $a, b, c$ respectively. Find a system of recurrences for these.
Affine space notation
I'm not understanding how this gives anything redundant. What is it repeating? "It gives the impression of a linear space operation": First of all, an affine space is a linear space. Furthermore, I would guess that you are talking about linear spaces in the sense of a vector space over a (possibly skew) field $\mathbb{F}$, which are exactly the examples of affine spaces. The translates of subspaces in the vector space are the subspaces of the affine space (so the vectors, translates of the trivial vector subspace $\{0\}$, are the points of the affine space). In particular, the points on the line through points $\mathbf{a}$ and $\mathbf{b}$ are given by $\{ \lambda \mathbf{a} + (1-\lambda) \mathbf{b} \ : \ \lambda \in \mathbb{F}\}$. Your notation at the bottom is.... very confusing. And it actually is redundant, since we already have a way of writing this that is not significantly longer than what you suggest.
Integral Inequality 3 terms- Cauchy Schwarz
Unless I'm missing something, splitting $f$ into two square roots should work: Let $J\subseteq \mathbb R$ be measurable. Assuming $f,g,h\colon J\to \mathbb R$ are measurable with $0\leq f(x)$ and with $fg^2$ and $fh^2$ (Lebesgue-)integrable, the functions $\sqrt f g$ and $\sqrt f h$ are square-integrable, hence we can apply Cauchy-Schwarz as follows: \begin{align*} {\left( \int_J f(x)g(x)h(x)\mathrm dx \right)}^2 &= {\left( \int_J \sqrt{f(x)}g(x)\sqrt{f(x)}h(x)\mathrm dx \right)}^2 \\ &\leq \int_J {\left( \sqrt{f(x)}g(x)\right)}^2 \mathrm dx \int_J {\left( \sqrt{f(x)}h(x)\right)}^2 \mathrm dx\\ &= \int_J f(x)g(x)^2 \mathrm dx\int_J f(x)h(x)^2 \mathrm dx\\ \end{align*} Note that in general the square root is not integrable, but we only need the square integrability, since $(a,b)\mapsto \int_J a(x)b(x)\mathrm dx$ is a scalar product on $L^2(J,\mathbb R)$.
Let $C\ne \emptyset$ and $A, B\subset C$ sets so that $A\cap B=\emptyset$.
First assume that $f$ is injective. Then $f(A\cup B) = (A, B)$ and $f(C) = (C\cap A, C\cap B) = (A, B)$, so $A\cup B = C$ since $f$ is injective. Now assume that $A\cup B = C$. If $X$ and $Y$ are subsets of $C$ such that $f(X) = f(Y)$, then $(X\cap A, X\cap B) = (Y\cap A, Y\cap B)$ and so $X\cap A = Y\cap A$ and $X\cap B = Y\cap B$. So \begin{equation} (X\cap B) \cup (X\cap A) = (Y\cap B) \cup (Y\cap A), \end{equation} and since $(X\cap B) \cup (X\cap A) = X \cap (A\cup B) = X\cap C = X$ and $(Y\cap B) \cup (Y\cap A) = Y \cap (A\cup B) = Y\cap C = Y$ it follows that $X=Y$ and hence that $f$ is injective.
The perfect Number system
In any number system that uses a finite (or even countably infinite) set of characters, the set of numbers that are representable in a finite string is at most $\aleph_0$. Most reals will therefore not be representable by any finite string, so a perfect (as I read your definition) number system is not possible.
Explicit solution to nonlinear ODE
Change variable to $y=\sqrt{\alpha t}-u$. This gives $$ \frac{dy}{dt}=-\frac{du}{dt}+\frac{\sqrt{\alpha}}{2\sqrt{t}}. $$ The equation becomes $$ \frac{dy}{dt}=\frac{\beta}{y}+\frac{\sqrt{\alpha}}{2\sqrt{t}}. $$ This equation can be solved by taking $y=C\sqrt{t}$, which gives $$ \frac{C}{2}-\frac{\beta}{C}=\frac{\sqrt{\alpha}}{2}, $$ and this determines $C$.
Can anyone provide me a hint for finding the limit involving a factorial function?
Hint: $$\frac{\frac{(n+1)!^2}{(2n+2)!}}{\frac{n!^2}{(2n)!}}=\frac{(n+1)^2}{(2n+2)(2n+1)}=\frac{n+1}{4n+2}\xrightarrow[n\to +\infty]{}\frac{1}{4}. $$
Existence of a homeomorphism on ambient space mapping submanifolds
Assuming I'm interpreting your question correctly, the answer is basically never. Consider \begin{align*} M &= (-\infty, \infty) \subseteq \mathbb{R} \\ N &= (-1,1) \subseteq \mathbb{R} \end{align*} Then $M$ and $N$ are both diffeomorphic to $\mathbb{R}$, but there is no ambient homeomorphism $\phi$ of $\mathbb{R}$ carrying $M$ to $N$, since $M$ is closed in $\mathbb{R}$ and $N$ is not closed in $\mathbb{R}$. For other dimensions, take \begin{align*} \tilde{M} = M \times \mathbb{R}^{d-1} \subseteq \mathbb{R} \times \mathbb{R}^{n-1} \cong \mathbb{R}^n \\ \tilde{N} = N \times \mathbb{R}^{d-1} \subseteq \mathbb{R} \times \mathbb{R}^{n-1} \cong \mathbb{R}^n \end{align*} where $\mathbb{R}^{d-1} \subseteq \mathbb{R}^{n-1}$ is a standard linear embedding. Then $\tilde{M}$ and $\tilde{N}$ are both diffeomorphic to $\mathbb{R}^d$, but $\tilde{M}$ is closed in $\mathbb{R}^n$ and $\tilde{N}$ is not, so no ambient homeomorphism taking $\tilde{M}$ to $\tilde{N}$ exists. So it seems the only time such a homeomorphism exists is if $d = 0$.
Let $f(x) = \ln x - 5x$, for $x > 0$.
$a)$ Yes. $b)$ No: $$f'(x)=1/x-5$$ $$\implies f''(x)=-1/x^2$$ because $$\frac{d}{dx}(1/x)=-1/x^2$$ $c)$ $$-1/x^2=1/x-5$$ $$5x^2-x-1=0$$ $$\implies x=\frac{1\pm\sqrt{21}}{10},$$ and since $x>0$, the solution is $x=\frac{1+\sqrt{21}}{10}$.
In which direction to round the answer, if it represents maximal population that could be infected?
If you have proven that it is impossible for more than 240.07... people to get infected, then you have proven that it is impossible for 241 people to get infected. Thus, 240 is the correct maximum, and you should round down. Note that in this case, you should round down even if your calculated maximum is 240.99.
find the minimum value for the following expression $\max \{ {x_{i_1}, x_{i_2}-x_{i_1},x_{i_3}-x_{i_2} ... x_{i_n}-x_{i_{n-1}}, 1-x_{i_n} }\}$
I have an algorithm for this, but I haven't been able to prove that it's optimal (in my experience, proving strong lower bounds for computational problems is typically very hard), so maybe this isn't the answer you're looking for. Let's rephrase the problem: given numbers $n < m$ and $x_0 < x_1 < \cdots < x_{m-1} < x_m$, we want to find $$\min_{0 < i_1 < i_2 < \cdots < i_n < m} \max \{x_{i_1} - x_0, x_{i_2} - x_{i_1}, \dots, x_{i_n} - x_{i_{n-1}}, x_m - x_{i_n}\}.$$ We can think of this as the problem of "cutting" the interval $[x_0, x_m]$ into $n+1$ blocks, where we are only allowed to cut in the prescribed places $x_i$, and we want to minimize the largest resulting block size. Define $A(k, r)$ as the minimum largest block size resulting from cutting $[x_0, x_k]$ into $r+1$ blocks, i.e. $$A(k, r) = \min_{0 < i_1 < i_2 < \cdots < i_r < k} \max \{x_{i_1} - x_0, x_{i_2} - x_{i_1}, \dots, x_{i_r} - x_{i_{r-1}}, x_k - x_{i_r}\}$$ so we want to find $A(m, n)$. For $r > 1$, it is not hard to see that $$A(k, r) = \min_{r \leq j < k} \max\{A(j, r-1), x_k - x_j\}$$ since \begin{align*} A(k, r) &= \min_{r \leq i_r < k} \min_{0 < i_1 < \cdots < i_r} \max\{x_{i_1} - x_0, x_{i_2} - x_{i_1}, \dots, x_{i_r} - x_{i_{r-1}}, x_k - x_{i_r}\} \\ &= \min_{r \leq i_r < k} \max \left\{\min_{0 < i_1 < \cdots < i_r} \max\{x_{i_1} - x_0, x_{i_2} - x_{i_1}, \dots, x_{i_r} - x_{i_{r-1}}\}, x_k - x_{i_r} \right\} \\ &= \min_{r \leq i_r < k} \max \{A(i_r, r-1), x_k - x_{i_r}\} \\ \end{align*} (the same recurrence also works for $r = 1$ if we set $A(j, 0) = x_j - x_0$, the "no cuts" case). This gives a pretty direct dynamic programming approach:

```
set all A(k, 0) = x_k - x_0
for r in {1, ..., n}:
    for k in {r+1, ..., m}:
        iterate over all j in {r, ..., k-1} to find the smallest max(A(j, r-1), x_k - x_j)
        set A(k, r) to this value
```

which clearly runs in $O(nm^2)$ steps. However, we can actually get this down to $O(nm \log m)$ steps by using binary search, as described below. The important fact is that in the computation of $$A(k, r) = \min_{r \leq j < k} \max\{A(j, r-1), x_k - x_j\},$$ $x_k - x_j$ is a strictly decreasing function of $j$, while maybe less obviously, $A(j, r-1)$ is in fact an increasing function of $j$. To show this last part, note that $A(k, r)$ can also be defined as the minimum largest block size when we cut $[x_0, x_k]$ into at most $r+1$ blocks (that is, with at most $r$ cuts): making fewer cuts cannot decrease the largest block size. Then for $j' < j$, if we have a choice of at most $r-1$ cuts of $[x_0, x_j]$, this gives a choice of at most $r-1$ cuts of $[x_0, x_{j'}]$ (namely those cuts lying in the smaller interval), and the largest block size cannot increase, so it follows that $A(j', r-1) \leq A(j, r-1)$. Thus, if we consider $f(j) = \max\{A(j, r-1), x_k - x_j\}$ as a function of $j \in \{r, \dots, k-1\}$, we can break $\{r, \dots, k-1\}$ into the two regions $\{r, \dots, t-1\}$ where $x_k - x_j$ is larger, and $\{t, \dots, k-1\}$ where $A(j, r-1)$ is larger (though one of these may be empty), so that $f$ is decreasing on the first region, and increasing on the second region. This means the minimum must occur at either $t-1$ or $t$, and one of these must be the point at which $g(j) = A(j, r-1) - (x_k - x_j)$ (a strictly increasing function) is closest to 0. Now, since $g(j)$ is strictly increasing, we can find the index $j_0$ where $g(j)$ is closest to $0$ by binary searching for $0$ in the list $\{g(r), g(r+1), \dots, g(k-1)\}$. Importantly, we never explicitly construct this list, we only construct the values $g(j)$ when we query them, to save time, so this takes $O(\log m)$ steps.
Then the index $j^*$ which minimizes $f(j)$ is among $j_0-1, j_0, j_0+1$, so we set $A(k, r)$ to be the smallest of $f(j_0-1), f(j_0), f(j_0+1)$. Since we take $O(\log m)$ steps to compute each $A(k, r)$, it takes $O(nm\log m)$ steps to compute all $A(k, r)$, and in particular, $A(m, n)$. Thus in the relevant case, where $m = n^2 + 1$, and $x_0 = 0$, $x_m = 1$, the algorithm takes $O(n(n^2 + 1)\log(n^2 + 1)) = O(n^3 \log n)$ steps.
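A direct implementation of the $O(nm^2)$ dynamic program above, as a sketch (the function name and example data are mine):

```python
def min_largest_block(xs, n):
    """Minimum possible largest block size when cutting [xs[0], xs[-1]]
    into n+1 blocks, cutting only at the interior points xs[1..m-1]."""
    m = len(xs) - 1
    INF = float("inf")
    # A[k][r] = min largest block size for cutting [x_0, x_k] with r cuts.
    A = [[INF] * (n + 1) for _ in range(m + 1)]
    for k in range(1, m + 1):
        A[k][0] = xs[k] - xs[0]          # no cuts
    for r in range(1, n + 1):
        for k in range(r + 1, m + 1):
            A[k][r] = min(max(A[j][r - 1], xs[k] - xs[j])
                          for j in range(r, k))
    return A[m][n]

# Cut [0, 1] at two of the allowed interior points.
print(min_largest_block([0, 0.1, 0.5, 0.6, 1.0], 2))  # 0.5
```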
Position of a particle, Newtons Second Law
Now you know the velocity, so you can use $v(t)=\dfrac{dx}{dt}$ to compute the particle's position at time $t$: $$x(t)=\dfrac{2}{15}t^6-\dfrac{2}{3}t^4-\dfrac{9}{2}t^2+10t+C,$$ where $C$ is a constant. Boundary condition: $\begin{align} x(0) &= C\\ &=14 \end{align}$ Thus: $$x(t)=\dfrac{2}{15}t^6-\dfrac{2}{3}t^4-\dfrac{9}{2}t^2+10t+14$$ Note also that $a(t)=\dfrac{d^2x}{dt^2}$ is the acceleration of the particle at time $t$.
Is a Poisson r.v.'s parameter a rate $\mu$ or a count $\mu t$?
The interpretation of any given parameter depends on context. But you can resolve your specific question by viewing $\mu$ as "people per hour" in both cases, and viewing $N$ as the special case $t=1$; that is, the "$\mu$" in $\text{Pois}(\mu)$ is actually $\text{Pois}(\mu \cdot 1)$. In general, the parameter of a Poisson process is a rate (arrivals per unit of time), and the parameter of a Poisson random variable is an expected count (expected number of people, arrivals, etc.). In your example $\mu$ is a rate, and $\mu \cdot 1$ is an expected count.
How to compute symmetrical determinant
Use the following rules:

1. Adding a multiple of one row to another row does not change the determinant.
2. If $B$ is obtained by multiplying a row of $A$ by a constant $c$, then $\det B=c \det A$.

Start by subtracting the first row from each of the other rows; we get $$A=\begin{pmatrix} 2 & 1 & 1 & 1 & 1\\ -1 & 2 & 0 & 0 & 0\\ -1& 0 & 3 & 0 & 0\\ -1 & 0 & 0 & 4 & 0\\ -1 & 0 & 0 & 0 & 5 \end{pmatrix}$$ which has the same determinant as your matrix. Even just doing this makes the determinant much easier to calculate, but we can go further. Divide the second, third, fourth and fifth rows by their corresponding diagonal term; we get $$B=\begin{pmatrix} 2 & 1 & 1 & 1 & 1\\ -1/2 & 1 & 0 & 0 & 0\\ -1/3& 0 & 1 & 0 & 0\\ -1/4 & 0 & 0 & 1 & 0\\ -1/5 & 0 & 0 & 0 & 1 \end{pmatrix}$$ which by rule #$2$ has determinant $\det B =\frac 1{120}\det A$. Finally, subtract each of the other rows from the first row; we get $$B'=\begin{pmatrix} 197/60 & 0 & 0 & 0 & 0\\ -1/2 & 1 & 0 & 0 & 0\\ -1/3& 0 & 1 & 0 & 0\\ -1/4 & 0 & 0 & 1 & 0\\ -1/5 & 0 & 0 & 0 & 1 \end{pmatrix}$$ where $\det B'=\det B$ by rule #$1$. This is a lower triangular matrix, so the determinant is simply the product of the diagonal elements, i.e. $\det B' = 197/60$. Therefore $$\det A = 120 \det B=120 \det B' =120 \cdot \frac{197}{60}=394.$$ This is generally the way to go if you're calculating determinants of large matrices by hand; just be sure to keep track of each time you multiply a row by something so you can get back to the original determinant. Also, the symmetric property makes row-reduction easier, but you can do this procedure for any matrix.
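A one-line numerical confirmation (my own, with numpy) on the row-reduced matrix $A$ above, which has the same determinant as the original:

```python
import numpy as np

A = np.array([
    [ 2, 1, 1, 1, 1],
    [-1, 2, 0, 0, 0],
    [-1, 0, 3, 0, 0],
    [-1, 0, 0, 4, 0],
    [-1, 0, 0, 0, 5],
])
print(round(np.linalg.det(A)))  # 394
```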
Limit of summation of trigonometric series
If $e^{i\theta} \neq 1$ ($\theta \neq 0 \, (\textrm{mod } 2\pi)$), we have: $$S_n=\sum_{k=1}^n \sin(k\theta)=\Im{\sum_{k=1}^n e^{ik\theta}}=\sin(\frac{n+1}{2}\theta)\frac{\sin(\dfrac{n}{2}\theta)}{\sin(\theta/2)}$$ Then $$\sum_{n=1}^N S_n=\dfrac{1}{\sin(\theta/2)}\sum_{n=1}^N \sin(\dfrac{n}{2}\theta)\sin(\frac{n+1}{2}\theta) = -\dfrac{1}{\sin(\theta/2)}\sum_{n=1}^N \dfrac{1}{2}\left(\cos(\dfrac{2n+1}{2}\theta)-\cos(\theta/2)\right) $$ $$\sum_{n=1}^N S_n=\dfrac{N}{2\tan(\theta/2)}-\dfrac{1}{2\sin(\theta/2)}\sum_{n=1}^N\cos(\dfrac{2n+1}{2}\theta)$$ And as previously you can compute $$\sum_{n=1}^N\cos(\dfrac{2n+1}{2}\theta)=\Re \sum_{n=1}^{N} e^{in\theta}e^{i\theta/2}=\cos((N/2+1)\theta)\dfrac{\sin(N\theta/2)}{\sin(\theta/2)}$$ which is bounded by $1/\sin(\theta/2)$. Finally we find that $$\dfrac{1}{N}\sum_{n=1}^N S_n \to \dfrac{1}{2}\cot(\theta/2) $$ And if $e^{i\theta}=1$, then $S_n=0$ and the sum converges to $0$, which is not $\cot(0)/2$...
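A quick numerical check of the Cesàro limit (my own sketch; $\theta = 1.3$ is an arbitrary choice):

```python
import numpy as np

theta = 1.3
N = 1_000_000
k = np.arange(1, N + 1)
S = np.cumsum(np.sin(k * theta))     # S_n for n = 1..N
print(S.mean())                      # (1/N) * sum of S_n
print(0.5 / np.tan(theta / 2))       # claimed limit cot(theta/2)/2
```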
$\lim_{(x,y)\rightarrow(0,0)} \frac{x\ln(1+x^3)}{y(x^2+y^2)}$ doesn't exist (?)
Yes, your derivation is correct: since, as $x\to 0$, $$\frac{\ln(1+x^3)}{x^3}\to1,$$ we can conclude that along the trajectories $y=Cx^2\to 0$ $$\lim_{(x,y)\rightarrow(0,0)} \frac{x\ln(1+x^3)}{y(x^2+y^2)}=\frac1C,$$ which depends on $C$, so the limit doesn't exist.
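A numeric sketch of the trajectory argument (fixing a small illustrative $x$ and varying $C$):

    import numpy as np

    def f(x, y):
        return x * np.log1p(x**3) / (y * (x**2 + y**2))

    x = 1e-4
    for C in (1.0, 2.0, 5.0):
        print(C, f(x, C * x**2))   # approaches 1/C: 1.0, 0.5, 0.2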
Cardinality of the set of multiples of "n"
Cardinality is not continuous. Just because all the sets in a sequence have the same cardinality doesn't mean that the limit of these sets will have that cardinality as well. It's easy to observe, indeed, that $|S_n|=\aleph_0$ for all $n$, so the sequence of cardinals is constant. The limit of the sets, however, is $\bigcap S_n=\{x\in\Bbb N\mid \forall n(x\in S_n)\}$, namely all the numbers which are divisible by all the integers. How many are there? Well, if $0\in\Bbb N$ then $0$ is such a number. If you don't consider $0$ to be a natural number, then the intersection is empty: no positive integer is divisible by its successor. Therefore the cardinality of the limit of the sequence of sets is not the limit of the cardinals of the sets in the sequence.
Principal ideal domain that is not Euclidean domain.
Since $z$ does not appear in the second equation, we could forget it in the first one and simply write it as $ax + by \equiv 1 \pmod{c}$. This is the hint we need to see that the second equation is simply $ay - 19 bx \equiv r \pmod{c}$. We only have to choose $r$ as the “balanced lift” of $ay - 19 bx \pmod{c}$: we will then always have $r \in [-c/2, c/2]$ and $ay - 19 bx = qc+r$ for some integer $q$.
Problem about strictly normed spaces.
$$ \frac{x+y}{|x|+|y|}=\frac{|x|}{|x|+|y|}\Big(\frac{x}{|x|}\Big)+\frac{|y|}{|x|+|y|}\Big(\frac{y}{|y|}\Big) $$ This shows that $v:=(x+y)/(|x|+|y|)$ lies on the segment connecting $x/|x|$ to $y/|y|$. If these two unit vectors are distinct, then, since the unit sphere contains no segments, $|v|$ cannot be $1$; by the triangle inequality, $|v|\le 1$, hence $|v|<1$.
How does associativity work for this fuzzy norm?
I don't know what's fuzzy about this. What would we need for the following statement? $$\min(1, \min(1, x+y) + z) < \min(1, x + \min(1, y+z))\tag{1}$$ Note that $\min(a,b) < c$ is equivalent to $(a < c) \ \text{or}\ (b < c)$, while $a < \min(b,c)$ is equivalent to $(a < b) \ \text{and}\ (a < c)$. Using these a few times, and noting that $1<1$ and $0 < 0$ are false, we find that (1) is equivalent to $$ ((0 < -z) \; \text{and}\; (-x + z < 0)\; \text{and}\; (1-x < y))\ \text{or}\ ((-x + z < 0) \; \text{and}\; (x + y < 1 - z) \; \text{and} \; (1 - x < y))$$ Both clauses here are incompatible with $z \ge 0$. So we conclude that (1) is impossible if $z \ge 0$. Since $$\min(1, \min(1, x+y) + z) > \min(1, x + \min(1, y+z))\tag{2}$$ is equivalent to what you get from (1) by interchanging $x$ with $z$, that is incompatible with $x \ge 0$. We conclude that your equation is always satisfied if $x \ge 0$ and $z \ge 0$.
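A brute-force sketch confirming associativity of $S(x,y)=\min(1,x+y)$ on $[0,1]^3$, the domain covered by the argument above:

    import random

    def s(x, y):                    # bounded-sum t-conorm
        return min(1.0, x + y)

    for _ in range(100_000):
        x, y, z = (random.uniform(0, 1) for _ in range(3))
        assert abs(s(s(x, y), z) - s(x, s(y, z))) < 1e-12

    print("associative on all sampled triples")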
Notation for modules.
In group theory it is standard to view $G$-modules $A$ as embedded in the semi-direct product $G \ltimes A$. Inside the semidirect product, the commutator subgroup $[A,U]$ makes sense for any subgroup $U \leq G$, and since $A$ is normal in $G \ltimes A$, we get $[A,U] \leq A$; in the end we need make no reference to the semi-direct product. If we let $A$ be a right $G$-module written multiplicatively, so that the $G$-action is written as exponentiation, then $$[A,U] = \langle [a,u] : a \in A, u \in U \rangle = \langle a^{-1} a^u : a \in A, u \in U \rangle$$ If instead $A$ is a left $G$-module written additively, with the $G$-action written as multiplication, then we get $$[A,U] = \langle a - u\cdot a : a \in A, u \in U \rangle = \sum_{u \in U} \operatorname{im}(1-u)$$ which is just the sum of the images of $A$ under the operators $1-u$, which is probably a fairly interesting thing to consider. In some sense this is the dual of the centralizer: $A/[A,U]$ is the largest quotient of $A$ centralized by $U$.
Finding the region of convergence of a complex series
Hint: when $n$ is large and $|a|<|b|$, $a^n/b^n$ is close to $0$.
Probability of having $K$ consecutive numbers in a permutation of $N$ numbers
First, $i$ can take only values in $\{1,\ldots,N-K+1\}$, so the subsequence of $K$ consecutive numbers can start in $N-K+1$ positions. Let's consider the first subsequence, $\{1,\ldots,K\}$, and assume that it starts in position $i$. Then you have $(N-K)!$ permutations for each $i$. So, if we restrict to permutations containing the subsequence $\{1,\ldots,K\}$, we have a total of $(N-K+1) \times (N-K)!$ cases. Now consider the second subsequence, $\{2,\ldots,K+1\}$. The observation is that permutations containing the second subsequence have already been counted before unless the second subsequence starts in a position smaller than $2$. So the second subsequence is allowed to start in only one position (the first one), and for this position there are again $(N-K)!$ permutations. This gives a total of $(N-K)!$ further possibilities for the second subsequence. For the $i$-th subsequence, with $1<i\le K$, things do not change: permutations containing the $i$-th subsequence have already been counted before unless it starts in the first position. For the $i$-th subsequence, with $i> K$, all permutations have already been counted. Summing up, we have a total of \begin{align} & (N-K)! (N-K+1) + \underbrace{ (N-K)! + \cdots + (N-K)! }_{ K - 1 \mbox{ times}} \\ &= (N-K)! (N-K+1) + (K-1) (N-K)!\\ & = (N-K)! \, N \end{align} permutations.
A single die is rolled 7 times. What is the probability that a six is rolled exactly once, if it is known that at least one six is rolled?
To calculate $P(\text{exactly one } 6) $, the denominator should be the total number of outcomes, which is $6^7$. Looking at the outcomes with one $6$, there are seven slots in which to put the $6$; for each choice, the remaining six slots give $5^6$ possible outcomes. Hence, $$ P(\text{exactly one } 6)=\frac{7\times5^6}{6^7}. $$ For $P(\text{at least one } 6)$, use the complement law. The answer is then \begin{align} P(\text{exactly one }6|\text{at least one }6)&=\frac{P(\text{exactly one }6\text{ and at least one }6)}{P(\text{at least one }6)} \\ &=\frac{P(\text{exactly one }6)}{P(\text{at least one }6)}. \end{align}
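For reference, the final division in exact arithmetic (a Python sketch):

    from fractions import Fraction

    p_one = Fraction(7 * 5**6, 6**7)       # exactly one 6
    p_some = 1 - Fraction(5, 6)**7         # at least one 6

    print(p_one / p_some)                  # 109375/201811
    print(float(p_one / p_some))           # ~0.542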
Every operator $T\in \mathcal L(V)$ where $V$ is a $\mathbb C-$vector space has an eigenvalue.
A product of a finite number of injective linear maps is injective. Since the zero operator is not injective (as $V\ne\{0\}$), one of the factors on the right-hand side of a factorization $0=p(T)=c(T-\lambda_1 I)\cdots(T-\lambda_m I)$ is not injective; that is, some $T-\lambda_j I$ has a nontrivial kernel, and $\lambda_j$ is an eigenvalue of $T$.
If $f\in L^1[0,1]\cap L^2[0,1]$, then $\|f\|_1 \le \|f\|_2$.
Let's review what you've shown: if $f\in L^2$, then $\|f\|_1\leq \|f\|_2$. This shows $L^2\subseteq L^1$, and therefore $L^1\cap L^2=L^2$. Hence for all $f\in L^1\cap L^2 = L^2$, $\|f\|_1\leq \|f\|_2$. I.e., you had basically proved (a) as well as (b) in the same step.
Combinatorial Techniques: Putting two and two together
Let $S_1$ be the set of sequences with exactly $3$ quarters and $S_2$ the set of sequences with exactly $3$ nickels. Then $S_1\cup S_2$ is the set of sequences with exactly $3$ quarters or exactly $3$ nickels, and $S_1\cap S_2$ is the set of sequences with exactly $3$ quarters and exactly $3$ nickels. You want $|S_1\cup S_2|$. You already know that $$|S_1|=|S_2|=\binom83\cdot3^5\;,$$ so you could start with $|S_1|+|S_2|=2\binom83\cdot3^5$ as a first approximation to $|S_1\cup S_2|$. However, every sequence in $S_1\cap S_2$ is counted twice in that figure, so it’s an overestimate. What must you subtract in order to correct that overcounting? If you get completely stuck, this article tells you what to do right at the beginning, but you should try to work it out yourself. After you’ve done that, though, the article is well worth reading.
obtaining inequality for the DCT
For any $x\geq 0$, it is clear that $\sqrt{1+x}\geq 1$; you can see this by squaring both sides. Also, $x^2\geq 0$ for any $x$, and on the interval $[1,\infty)$ we have $x^2>0$. Combining all of this, you get $$ \frac{1}{x^2}\leq \frac{\sqrt{1+x}}{x^2} $$
Mapping a distorted ellipse onto a circle
If we assume for simplicity that the picture is taken straight on (i.e. the axis of the cylinder is vertical) and the distance camera–object is much larger than the bending (no perspective distortion), you can, for each $x$, measure the height $h$ of the vertical line through it and shift the line to its correct place, i.e. to $x'$ where $x'^2+(h/2)^2=r^2$, where $r$ is half the maximal height.
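A minimal numpy sketch of that shift (the helper name is mine; it assumes the orthographic setup above and $h\le 2r$, with the sign of $x'$ chosen by which side of the axis the strip lies on):

    import numpy as np

    def corrected_x(h, r):
        # Solve x'^2 + (h/2)^2 = r^2 for the magnitude of x'.
        return np.sqrt(r**2 - (h / 2.0)**2)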
How is $\det(I+aXY)=\det(I+aX^{\frac{1}{2}}YX^{\frac{1}{2}})$?
Hint: Let $B=I+aX^{1/2}YX^{1/2}$; then $$ (I+aXY)X^{1/2}=X^{1/2}B,\qquad X^{1/2}(I+aYX)=BX^{1/2}, $$ and $$ \det(X^{1/2}B)=\det(BX^{1/2}). $$ Since $\det\big((I+aXY)X^{1/2}\big)=\det(I+aXY)\det(X^{1/2})$ and $\det(X^{1/2}B)=\det(X^{1/2})\det(B)$, cancelling $\det(X^{1/2})\neq 0$ gives the claim; the singular case follows by continuity.
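A numeric sketch of the identity (assuming, as the notation $X^{1/2}$ suggests, that $X$ is symmetric positive definite; `sqrtm` is scipy's matrix square root):

    import numpy as np
    from scipy.linalg import sqrtm

    rng = np.random.default_rng(0)
    n, a = 4, 0.7
    G = rng.standard_normal((n, n))
    X = G @ G.T + n * np.eye(n)       # symmetric positive definite
    Y = rng.standard_normal((n, n))
    Xh = sqrtm(X)

    lhs = np.linalg.det(np.eye(n) + a * X @ Y)
    rhs = np.linalg.det(np.eye(n) + a * Xh @ Y @ Xh)
    print(lhs, rhs)                   # agree up to rounding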
Prove this relation for the legendre polynomials
You have already proved the relation. Your last line implies: $$P_{2m}(0)=\binom{-1/2}{m}=\frac{(-1/2)(-3/2)\cdot \ldots\cdot(-(2m-1)/2)}{m!}=(-1)^m\frac{(2m-1)!!}{2^m\cdot m!},$$ as wanted. As an alternative to the generating function technique, you can use the binomial theorem and the Rodrigues formula. Since: $$P_n(x)=\frac{1}{2^n\cdot n!}\frac{d^n}{dx^n}(x^2-1)^n$$ we have: $$ P_{2n}(0)=\frac{1}{4^n}[x^{2n}](x^2-1)^{2n}=\frac{(-1)^n}{4^n}\binom{2n}{n}.$$
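A sympy check of the value $P_{2n}(0)=\frac{(-1)^n}{4^n}\binom{2n}{n}$ for the first few $n$:

    import sympy as sp

    x = sp.symbols('x')
    for n in range(6):
        exact = sp.legendre(2 * n, x).subs(x, 0)
        claim = (-1)**n * sp.binomial(2 * n, n) / sp.Integer(4)**n
        print(n, exact, claim, sp.simplify(exact - claim) == 0)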
Show that $f(x) = x^{p}$ is not uniformly continuous on $\mathbb{R}$ if $p > 1$ - Proof Verification
To prove that $f(x)=x^p$ is not uniformly continuous when $p=2$, what we do is the following. Let $\epsilon=1$ and let $\delta >0$ be given. Now let $x=\frac{1}{\delta}$ and $y=\frac{1}{\delta}+\frac{\delta}{2}$ in the common definition of uniform continuity (see here). Then $$|x-y|=\frac{\delta}{2}<\delta.$$ But $$|f(x)-f(y)|=|x-y|\,|x+y|=\frac{\delta}{2}\left(\frac{2}{\delta}+\frac{\delta}{2}\right)=1+\frac{\delta^2}{4}> 1=\epsilon.$$ Thus, $f(x)=x^2$ is not uniformly continuous. Now, can you try with a general $p$?
Complementary and bipartite graphs
Any two vertices in $X_1$ have no edge in $G$, hence have an edge in $\overline G$. If $\overline G$ is bipartite, we conclude that $X_1$ (and similarly $X_2$) has at most two elements. This leaves only very finitely many possibilities for $G$.
union(new: intersection) of any number of open sets is also open
Your proof doesn't seem to be quite correct. Note that your definition states that $U \subseteq \mathbb{R}^n$ is open iff for each $x \in U$ there is an open rectangle $A = (a_1,b_1) \times \cdots \times (a_n,b_n)$ containing $x$ such that $A \subseteq U$. This means that for every point $x$ of $U$ you have to find such an open rectangle, and the choice of rectangle may depend on the choice of $x$. Thus, "picking $A$ to work for $U$ and $B$ to work for $V$" doesn't quite make sense. What you need to do is first pick the $x$ from the set you wish to show is open, and then show that there is an open rectangle that works for this particular $x$. The "trick" is to note that if $\{ U_i : i \in I \}$ is any family of sets, then $x \in \bigcup_{i \in I} U_i$ iff there is an $i \in I$ such that $x \in U_i$, and also that if $A \subseteq U_i$ for some $i$, then $A \subseteq \bigcup_{i \in I} U_i$. I think this should lead you in the right direction.
AP Statistics practice question about Linear Combinations
The key point is that you have 11 independent random variables. Let us focus on the 4 apples and denote the corresponding random variables for the weights as $X_1,X_2,X_3$ and $X_4$. The independence assumption is formally $P(X_i|X_j)=P(X_i) \ \ \forall \ i,j\in\{1,2,3,4 \}, i\neq j$. If you have $n$ independent random variables, then the variance of their sum is equal to the sum of the variances: $$Var\left(\sum_{i=1}^n X_i\right)=\sum_{i=1}^n Var\left( X_i \right)$$ In the case of the 4 apples we have $Var\left( X_1+X_2+X_3+X_4\right)=Var(X_1)+Var(X_2)+Var(X_3)+Var(X_4)$ $=0.2^2+0.2^2+0.2^2+0.2^2=4\cdot 0.2^2=4\cdot 0.04=0.16$. It would be a different matter if you calculated the variance of four times the weight of a single apple: $Var\left(4X_1\right)=16\cdot Var\left(X_1\right)=0.64$.
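A quick Monte Carlo sketch of both computations (centred weights with sd $0.2$; the means do not affect the variances):

    import numpy as np

    rng = np.random.default_rng(1)
    apples = rng.normal(0.0, 0.2, size=(1_000_000, 4))

    print(apples.sum(axis=1).var())   # ~0.16 = 4 * 0.2**2
    print((4 * apples[:, 0]).var())   # ~0.64 = 16 * 0.2**2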
Show that if each $X_k$ is open in $X,$ then $X$ is simply connected.
If $\gamma\colon S^1\to X$ is continuous, then its image is compact, hence covered by only finitely many of the open $X_k$, and hence (the $X_k$ being nested) contained in a single $X_k$, which is simply connected, so $\gamma$ can be contracted there.
Calculate the expected time for the spider to catch the fly.
I managed to solve it myself after a while:
$$E(T|A_{D})=\frac{1}{1-2p}+\frac{D-1}{1-p}+\frac{p}{p-1}E(T|A_{D-1})$$
Multiplying by $\left(\frac{p-1}{p}\right)^{D}$:
$$\left(\frac{p-1}{p}\right)^{D}E(T|A_{D})=\frac{1}{1-2p}\left(\frac{p-1}{p}\right)^{D}+\frac{D-1}{1-p}\left(\frac{p-1}{p}\right)^{D}+\left(\frac{p-1}{p}\right)^{D-1}E(T|A_{D-1})$$
Putting $\left(\frac{p-1}{p}\right)^{D}E(T|A_{D})=F_D$,
$$F_D-F_{D-1}=\frac{1}{1-2p}\left(\frac{p-1}{p}\right)^{D}+\frac{D-1}{1-p}\left(\frac{p-1}{p}\right)^{D}$$
Telescoping:
$$\sum_{D=2}^{N}(F_D-F_{D-1})=\sum_{D=2}^{N} \left(\frac{1}{1-2p}\left(\frac{p-1}{p}\right)^{D}+\frac{D-1}{1-p}\left(\frac{p-1}{p}\right)^{D}\right)$$
$$F_N-F_1=\sum_{D=2}^{N} \left(\frac{1}{1-2p}\left(\frac{p-1}{p}\right)^{D}+\frac{D-1}{1-p}\left(\frac{p-1}{p}\right)^{D}\right)$$
with
$$F_1=\frac{p-1}{p}E(T|A_1)=\frac{p-1}{p(1-2p)}$$
On evaluating the right side, we get the required result:
$$E(T|A_N)=\frac{2p(p-1)\left(\left(\frac{p}{p-1}\right)^N-1\right)}{1-2p}+N$$
Edit: For $p=\frac{1}{2}$ and even $N$, calculate the limiting value of the expression of $E(T|A_N)$ as $p \to \frac12$.
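A quick exact-arithmetic check (a sketch taking the recurrence and closed form above as given) that the closed form satisfies both the recurrence and the base case $E(T|A_1)=\frac{1}{1-2p}$:

    from fractions import Fraction

    def E(N, p):
        # closed form derived above
        return 2*p*(p - 1)*((p / (p - 1))**N - 1) / (1 - 2*p) + N

    p = Fraction(1, 3)                # any p != 1/2; exact arithmetic
    for D in range(2, 10):
        rhs = 1/(1 - 2*p) + (D - 1)/(1 - p) + p/(p - 1)*E(D - 1, p)
        assert E(D, p) == rhs

    print(E(1, p))                    # 3 = 1/(1 - 2p) for p = 1/3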
Let $\gamma: (\alpha, \beta) \rightarrow \mathbb R^3$ be a space curve with speed $1$ s.t $|| \gamma(s) || = R > 0$. Show $\kappa(s) \ge \frac 1 R$.
Let $\gamma$ be given in the form $s\mapsto x(s)$. From $\bigl|x(s)\bigr|\equiv R$ we obtain $x\cdot\dot x=0$, and differentiating again produces $\dot x\cdot\dot x+x\cdot\ddot x=0$, or $x\cdot \ddot x=-1$. As $|x|=R$ we obtain $\kappa:=|\ddot x|\geq{1\over R}$, by Schwarz' inequality. If this was too fast, we can argue as follows: You may assume that the curve $\gamma$ is lying on the unit sphere $S^2$. Let $P={\bf x}(0)$ be an arbitrary point on $\gamma$. You can then move $\gamma$ rigidly around the sphere so that $P$ becomes the point $(1,0,0)$ on the equator, and $\dot{\bf x}(0)$ points in the direction of the north pole. In this way $\gamma$ has a parametric representation of the form $$\gamma:\quad s\mapsto{\bf x}(s):=\bigl(\cos\phi(s)\cos\theta(s),\sin\phi(s)\cos\theta(s),\sin\theta(s)\bigr)$$ with $\phi(0)=\dot\phi(0)=\theta(0)=0$. One computes $$|\dot{\bf x}(s)|^2=\cos^2\theta(s)\dot \phi^2(s)+\dot\theta^2(s)\ ,$$ and this implies $\dot\theta(0)=1$. A lengthy computation (which I left to Mathematica) gave $$\ddot{\bf x}(0)=\bigl(-\dot\phi^2(0)-\dot\theta^2(0),\ddot\phi(0),\ddot\theta(0)\bigr)= \bigl(-1,\ddot\phi(0),\ddot\theta(0)\bigr)\ ,$$ so that we obtain $$\kappa(0)=\bigl|\ddot{\bf x}(0)\bigr|=\sqrt{1+\ddot\phi^2(0)+\ddot\theta^2(0)}\geq1\ .$$
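The "lengthy computation" is easy to reproduce with a CAS; a sympy sketch, with symbols $p_2,t_2$ standing for $\ddot\phi(0),\ddot\theta(0)$:

    import sympy as sp

    s = sp.symbols('s')
    phi, theta = sp.Function('phi')(s), sp.Function('theta')(s)
    x = sp.Matrix([sp.cos(phi) * sp.cos(theta),
                   sp.sin(phi) * sp.cos(theta),
                   sp.sin(theta)])

    p2, t2 = sp.symbols('p2 t2')          # phi''(0), theta''(0)
    at0 = [(phi.diff(s, 2), p2), (theta.diff(s, 2), t2),
           (phi.diff(s), 0), (theta.diff(s), 1),
           (phi, 0), (theta, 0)]
    print(sp.simplify(x.diff(s, 2).subs(at0).T))   # [-1, p2, t2]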
Conditional Probability, card question
The distribution of cards may have been realized as follows. First we give $13+13$ cards to N+S. If we do not have exactly eight $\spadesuit$ cards among them, we ignore this case; it is not contributing to the conditional probability. Else we go on. There are $26=21+5$ cards remaining. Let us count then the number of ways to split the five $\spadesuit$ cards for the EW axis. We have the possibilities, and the corresponding number of ways to realize them, determined by the E hand: $5=5+0$, totally $\binom 55\cdot \binom {21}8$ distributions, $5=4+1$, totally $\binom 54\cdot \binom {21}9$ distributions, $5=3+2$, totally $\binom 53\cdot \binom {21}{10}$ distributions, $5=2+3$, totally $\binom 52\cdot \binom {21}{11}$ distributions, $5=1+4$, totally $\binom 51\cdot \binom {21}{12}$ distributions, $5=0+5$, totally $\binom 50\cdot \binom {21}{13}$ distributions. Each "split" has to be weighted with the corresponding number of distributions. So the probabilities are:

    sage: for k in [0..5]:
    ....:     p = binomial(5,k)*binomial(21,13-k)/binomial(26,13)
    ....:     print "5=%s+%s :: probability %s ~ %s" % (k, 5-k, p, p.n())
    ....:
    5=0+5 :: probability 9/460 ~ 0.0195652173913043
    5=1+4 :: probability 13/92 ~ 0.141304347826087
    5=2+3 :: probability 39/115 ~ 0.339130434782609
    5=3+2 :: probability 39/115 ~ 0.339130434782609
    5=4+1 :: probability 13/92 ~ 0.141304347826087
    5=5+0 :: probability 9/460 ~ 0.0195652173913043

(The $5=5+0$ cases will probably feel not so rare, because they remain a long time in the memory.)
No set in $\mathbb{R}^2$ satisfies the property that every $S \subseteq \mathbb{R}$ is a section
HINT: Each section of $A$ is determined by an ordered pair $\langle a,b\rangle$ of points of $\Bbb R^2$. What is the cardinality of $\Bbb R^2\times\Bbb R^2$? There are at most that many different sections of any given $A\subseteq\Bbb R^2$. What is the cardinality of $\wp(\Bbb R)$?
Find the transition probability matrix. Check my answer.
I think in this case you need to have 4 states: two tails, one tail, one head, and two heads, which I will call states 1, 2, 3 and 4 respectively. $$P= \begin{bmatrix} 1&0&0&0\\ \dfrac{1}{2}&0&\dfrac{1}{2}&0\\ 0&\dfrac{1}{2}&0&\dfrac{1}{2}\\ 0&0&0&1 \end{bmatrix}. $$ If I understand you correctly, you keep playing until you get either two heads or two tails in a row, so states 1 and 4 are absorbing. When you are in state 2 (the last flip was a tail), the next coin flip takes you either to state 1 (two tails) or to state 3 (a head). If you start with a tail, then your initial state probability vector $\pi$ would be $$\pi= \begin{bmatrix} 0\\ 1\\ 0\\ 0 \end{bmatrix}. $$
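A quick numpy check: iterating the chain from $\pi$ exposes the absorption probabilities (starting with a tail, the game ends with two tails with probability $2/3$ and two heads with probability $1/3$):

    import numpy as np

    P = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.5, 0.0, 0.5, 0.0],
                  [0.0, 0.5, 0.0, 0.5],
                  [0.0, 0.0, 0.0, 1.0]])
    pi = np.array([0.0, 1.0, 0.0, 0.0])        # start after an initial tail

    print(pi @ np.linalg.matrix_power(P, 50))  # ~[2/3, 0, 0, 1/3]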
Why is this way of deriving a cone volume formula by integration wrong?
If by "slant height" you mean the length along the slant, the circular cross-section of thickness $dx$ has area $\pi r^2L(x)^2/l^2$, where $l$ is the distance along the slant to that cross-section. Thus $L(x)\ne x$; in fact $L(x)=lx/h$, with $h$ the cone height. So the integral becomes $\int_0^h\pi r^2\frac{x^2}{h^2}dx=\frac13\pi r^2h$.
Prove that there is a unique topology given interior operator
Let $\mathcal T$ be any topology on $X$ with the property that for all $A\subseteq X$, $\operatorname{int}(A)=f(A)$. Then for $U\in\mathcal T$, we have $U=\operatorname{int}(U)=f(U)$, and for any $B\notin \mathcal T$, we have $B\ne\operatorname{int}(B)=f(B)$. We conclude that $\mathcal T=\{\,A\subseteq X\mid f(A)=A\,\}=:\mathcal T_0$, i.e., there is at most one topology with said property. Next we have to show that $\mathcal T_0$ is indeed a topology.
We have $X\in\mathcal T_0$ because of $I_1$, and $\emptyset\in \mathcal T_0$ because of $I_2$.
$\mathcal T_0$ is closed under finite intersection because of $I_3$.
Let $\{U_i\}_{i\in I}$ be a family of sets $U_i\in\mathcal T_0$ and $U=\bigcup U_i$. We need to show $U\in\mathcal T_0$. From $I_2$, $f(U)\subseteq U$. On the other hand, for each $i\in I$, $$U_i=f(U_i)=f(U_i\cap U)=f(U_i)\cap f(U)\subseteq f(U),$$ by $I_3$; hence $U=\bigcup U_i\subseteq f(U)$ and ultimately $f(U)=U$, so $U\in \mathcal T_0$.
Finally, it remains to be shown that $\mathcal T_0$ does have the claimed property. This is where $I_4$ comes into play: it guarantees that for each $A$, the set $f(A)$ is in $\mathcal T_0$. To finish, the other properties guarantee that there cannot be a larger open set contained in $A$: if $U\in\mathcal T_0$ with $U\subseteq A$, then $U=f(U)\subseteq f(A)$.
a misunderstanding about the definition of f(x)
Consider the function $f$ whose output is double its input. We can write $f: x\to 2x$, but we can also write this a different way as $f(x)=2x$. We are defining $f$ via explaining what it does to every input. This is enough, because that specifies the function uniquely.
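The same idea shows up in code; a small Python sketch defining the doubling function in two equivalent ways:

    f = lambda x: 2 * x    # "f : x -> 2x"

    def g(x):              # "g(x) = 2x"
        return 2 * x

    print(f(3), g(3))      # 6 6

Both specify the same function, because both say what happens to every input.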
Solve $A B A^T= C$ subject to $Ax=y$.
Since you write $A^T$, I suppose your matrices are real (and that $B$ and $C$ are symmetric positive definite, so that the square roots below make sense). The equation $ABA^T=C$ means that $C^{-1/2}AB^{1/2}$ is real orthogonal. Hence the general solution is given by $A=C^{1/2}QB^{-1/2}$, where $Q$ is an arbitrary real orthogonal matrix. If you want $Ax=y$, you need to solve $Q(B^{-1/2}x)=C^{-1/2}y$ for a real orthogonal $Q$. Clearly this is solvable if and only if $\|B^{-1/2}x\|_2=\|C^{-1/2}y\|_2$. When it is indeed solvable, you may pick any real orthogonal matrix that maps $B^{-1/2}x$ to $C^{-1/2}y$. For instance, when $B^{-1/2}x=C^{-1/2}y$, simply pick $Q=I$; otherwise, you may pick the Householder reflection matrix $Q=I-2ww^T$ where $$ w=\frac{B^{-1/2}x-C^{-1/2}y}{\|B^{-1/2}x-C^{-1/2}y\|}. $$
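A numpy/scipy sketch of the whole recipe (random SPD $B$, $C$; the target $y$ is built to satisfy the norm condition, so the construction is guaranteed to succeed):

    import numpy as np
    from scipy.linalg import sqrtm

    rng = np.random.default_rng(2)
    n = 4
    def spd():
        G = rng.standard_normal((n, n))
        return G @ G.T + n * np.eye(n)

    B, C = spd(), spd()
    Bih, Ch = np.linalg.inv(sqrtm(B)), sqrtm(C)

    x = rng.standard_normal(n)
    u = Bih @ x                              # B^(-1/2) x
    d = rng.standard_normal(n)
    v = np.linalg.norm(u) * d / np.linalg.norm(d)   # any v with ||v|| = ||u||
    y = Ch @ v                               # then C^(-1/2) y = v

    w = (u - v) / np.linalg.norm(u - v)      # Householder vector (u != v a.s.)
    Q = np.eye(n) - 2 * np.outer(w, w)       # orthogonal, maps u to v
    A = Ch @ Q @ Bih                         # A = C^(1/2) Q B^(-1/2)

    print(np.allclose(A @ B @ A.T, C), np.allclose(A @ x, y))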
I don't understand how to prove $X^n -1 = \prod_{d|n}{\Phi_d}$ where $\Phi$ is the cyclotomic polynomial
1) We can write the set $\{1,\cdots,n\}$ as the disjoint union of the sets $\{k;\,1\le k\le n\,\mathrm{and}\,gcd(k,n)=d\}$ for all $d$ belonging to the set of (positive) divisors of $n$. 2) Yes, the bijection you mention is the main key (another reason lies in the fact that multiplication of integers is a commutative law).
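A sympy check of the factorization for a few values of $n$:

    import sympy as sp
    from sympy import cyclotomic_poly, divisors

    x = sp.symbols('x')
    for n in (6, 12, 30):
        lhs = sp.prod(cyclotomic_poly(d, x) for d in divisors(n))
        print(n, sp.expand(lhs) == sp.expand(x**n - 1))   # True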