Show $\frac{\sin(x)}{x}>\cos(x)$ for $0<x<\pi$ using the Mean Value Theorem
This is not true in general. For an interval of clear counterexamples, consider that for $x\in(\frac32\pi,2\pi)$ we have $$ \frac{\sin x}{x} < 0 < \cos x$$ Update after the question was amended to specify $0<x<\pi$: The mean value theorem says that $\frac{\sin x}{x} = \frac{\sin x - \sin 0}{x - 0} = \sin'(\alpha)$ for some $\alpha\in(0,x)$. We have $\sin'(\alpha)=\cos\alpha$, so what you need to show is merely that $\cos \alpha > \cos x$. Hopefully you already know that the cosine decreases monotonically between $0$ and $\pi$...
Step function and integration
Since $f$ is continuous on the compact interval $[0,1]$, it is uniformly continuous. Hence, for any $\epsilon > 0$ there is a $\delta > 0$ such that $|f(x)-f(y)| < \epsilon$ if $|x-y| < \delta$. Choose a partition $(x_0,x_1, \ldots, x_n)$ of $[0,1]$ such that $\max(x_i - x_{i-1}) < \delta$. Let $$\phi_i = \sup\{f(x): x_{i-1} \leq x \leq x_i\}$$ and $$\phi(x)=\sum_{i=1}^n \phi_i\chi_{[x_{i-1},x_i)}(x).$$ If $x \in [0,1]$, then for some $j$, $x \in [x_{j-1},x_j]$ and $\phi(x) = \phi_j \geq f(x).$ For any $\eta > 0$, there exists $z \in [x_{i-1},x_i]$ such that $\phi_i - \eta < f(z) \leq \phi_i$, and for any $x \in [x_{i-1},x_i]$ we have $\phi_i - f(x) \leq |f(x)-f(z)| + \eta < \epsilon + \eta.$ Since this is true for any $\eta > 0$, we have $\phi_i - f(x) \leq \epsilon$. Furthermore, $$\int_0^1|f(x) - \phi(x)|\, dx = \sum_{i=1}^n\int_{x_{i-1}}^{x_i}|f(x)-\phi_i|\,dx \leq \sum_{i=1}^n\epsilon(x_i-x_{i-1})=\epsilon$$
Levi-Civita connection on a vector space
Yes. This is the easiest possible case of a Riemannian manifold. The Christoffel symbols $\Gamma^i_{jk}$ are $0$ because all the derivatives of the metric are $0$. So, in coordinates/components, $\nabla_j w^i = \dfrac{\partial w^i}{\partial x^j} + \Gamma^i_{jk}w^k = \dfrac{\partial w^i}{\partial x^j}$, which vanishes for every $i,j$ when the components $w^i$ are constant (and hence also for the vector $\dot\gamma$).
Forcing an absolute value of x after a square root operation
There's nothing wrong with your equations other than the fact that they aren't equations. An equation is an expression which says that the left side of it is equal to the right, while the expression $$x=\pm\sqrt{2|x|}$$ really means $x$ is equal to $\sqrt{2|x|}$ or $x$ is equal to $-\sqrt{2|x|}$. Your entire reasoning therefore starts by assuming that $f(x) = x$ and $g^2(x) = 2x$ and concludes by showing that if $g(x) = f(x)$, then $x$ is equal to $2$ or $x$ is equal to $-2$. This is a completely true statement, just like the statement "the moon is round or the moon is made of cheese" is a true statement.
Need help with a fundamental theorem of finite arithmetic
Note that the set $f=\mathbb{N}^3$ meets your axioms. This is because your axioms say what must be in $f$, but say nothing about what must not be. Perhaps what you are trying to prove could be a good third axiom :)
How can I prove that A\A\B = A&B?
"Because I have not fully understood the negation of $\land$." The equivalences $\neg(a\land b)\equiv (\neg a\lor\neg b)$ and $\neg(a\lor b)\equiv (\neg a\land\neg b)$ are known as de Morgan's laws. The justification is as follows: $\neg(a\land b)$ is true when it is not the case that both $a$ and $b$ are true. This happens exactly when at least one of $a$ or $b$ is false, that is, $(\neg a\lor\neg b)$ is true. $\neg(a\lor b)$ is true when it is not the case that at least one of $a$ or $b$ is true. This happens exactly when both of $a$ and $b$ are false, that is, $(\neg a\land\neg b)$ is true. In a similar vein, $a\land(\lnot a\lor b)$ is true exactly when $a$ is true and at least one of $\lnot a$ or $b$ is true. So it means $a$ is true, and since $\lnot a$ cannot also be true, $b$ must be. Conversely, $a\land b$ is true when $a$ and $b$ are both true. Now $b$ being true implies that at least one of $b$ or $\neg a$ is true. Therefore $a\land(\lnot a\lor b)\equiv (a\land b)$
Weil Restriction and Distinguished Opens
$\newcommand{\Res}{\mathsf{Res}}$ $\newcommand{\Spec}{\mathrm{Spec}}$$\newcommand{\Hom}{\mathrm{Hom}}$ I'd like to tell you a way to think about this that I think is instructive. I hope you feel similarly even though, most likely, it's outside the purview of what Springer is actually discussing. For the sake of simplicity, let us assume that $E/F$ is Galois; let me denote by $\Gamma$ the group $\mathrm{Gal}(E/F)$, and we denote a general element of $\Gamma$ by $\sigma$ or $\tau$. Let us begin by defining for all $\sigma$ in $\Gamma$ the $E$-algebra $$A^\sigma:= A\otimes_{E,\sigma}E$$ where this notation means that we're taking the tensor product of $E$-algebras where $E\to A$ is the structure map and $E\to E$ is the map given by $\sigma$. We consider this an $E$-algebra by defining $e(a\otimes b):=a\otimes (eb)$. Let us note that we have a map of $F$-algebras $$\sigma:A\to A^\sigma:a\mapsto a\otimes 1$$ Note though that this map is not $E$-linear. In fact, $$\sigma(ea)=(ea)\otimes 1 = a\otimes \sigma(e)=\sigma(e)(a\otimes 1)$$ so $\sigma:A\to A^\sigma$ is $\sigma$-linear. We now consider the $E$-algebra $$A^{\otimes \Gamma}:=\bigotimes_{\sigma\in\Gamma}A^\sigma$$ where the tensor product on the right hand side is a tensor product of $E$-algebras. We shall denote a general simple tensor in $A^{\otimes\Gamma}$ by $\displaystyle \bigotimes a_\sigma$ (i.e. the $\sigma^\text{th}$ coordinate in the simple tensor is $a_\sigma$). Note that $A^{\otimes\Gamma}$ carries a natural $\Gamma$-action by permuting the coordinates or, more explicitly, $$\tau\left(\bigotimes a_\sigma\right)=\bigotimes b_\sigma,\qquad b_{\tau\sigma}=a_\sigma$$ Note that the action of $\Gamma$ is not $E$-linear, but is $F$-linear. Let us now consider the $F$-algebra $(A^{\otimes \Gamma})^{\Gamma}$, the $\Gamma$-fixed points of $A^{\otimes \Gamma}$. We have an obvious inclusion of $F$-algebras $$\iota:(A^{\otimes\Gamma})^{\Gamma}\hookrightarrow A^{\otimes \Gamma}$$ Less obvious is the fact that the induced map of $E$-algebras $$(A^{\otimes\Gamma})^{\Gamma}\otimes_F E\to A^{\otimes\Gamma}:x\otimes e\mapsto ex$$ is an isomorphism of $E$-algebras. In fact, it's actually an isomorphism of $E$-algebras with an action of $\Gamma$, where $\Gamma$ acts on the source by its action on $E$! Why is $A^{\otimes\Gamma}$, or rather $(A^{\otimes{\Gamma}})^{\Gamma}$, important? Well, note that for any $F$-algebra $R$ the obvious map $$\Hom_F\left((A^{\otimes \Gamma})^\Gamma,R\right)\to \Hom_E\left ((A^{\otimes \Gamma})^\Gamma\otimes_F E,R\otimes_F E\right)^{\Gamma}$$ is a bijection, where the $\Gamma$-action on $$\Hom_E\left ((A^{\otimes \Gamma})^\Gamma\otimes_F E,R\otimes_F E\right)$$ takes a homomorphism $\alpha$ to $\sigma\circ \alpha\circ \sigma^{-1}$, with $\sigma^{-1}$ acting on $(A^{\otimes \Gamma})^\Gamma\otimes_F E$ by its action on $E$ and $\sigma$ acting on $R\otimes_F E$ through $E$ as well. But, we've already noted that we have an isomorphism $$(A^{\otimes\Gamma})^{\Gamma}\otimes_F E\to A^{\otimes\Gamma}:x\otimes e\mapsto ex$$ of $E$-algebras with $\Gamma$-action. Thus, we see that $$\Hom_E\left ((A^{\otimes \Gamma})^\Gamma\otimes_F E,R\otimes_F E\right)^{\Gamma}=\Hom_E\left (A^{\otimes\Gamma},R\otimes_F E\right)^{\Gamma}$$ but what is a $\Gamma$-equivariant map of $E$-algebras $A^{\otimes \Gamma}\to R\otimes_F E$?
Well, by the definition of the tensor product over $E$, it's a collection of maps of $E$-algebras $$f_\sigma:A^\sigma\to R\otimes_F E$$ where we abbreviate $f_{\mathrm{id}}$ to $f$, such that for any $a\in A$ you have $$f_\sigma(\sigma(a))=\sigma(f(a))$$ In other words, you see that such data is entirely determined by $f$. Summing everything up, there is a natural series of bijections $$\begin{align}\Hom_F\left((A^{\otimes\Gamma})^\Gamma,R\right) &= \Hom_E\left((A^{\otimes\Gamma})^\Gamma\otimes_F E,R\otimes_F E\right)^\Gamma\\ &= \Hom_E(A^{\otimes\Gamma},R\otimes_F E)^\Gamma\\ &= \Hom_E(A,R\otimes_F E)\end{align}$$ or, in other words, we have shown that $$\Res_{E/F}\Spec(A)=\Spec\left((A^{\otimes\Gamma})^\Gamma\right)$$ More explicitly we have a bijection $$J:\Hom_F\left((A^{\otimes\Gamma})^\Gamma,R\right)\xrightarrow{\approx}\Hom_E(A,R\otimes_F E)$$ given by taking $f$ to $(f\otimes 1)\mid_A$. Now, what does this have to do with the norm map? Note we have a multiplicative map $$N:A\to A^{\otimes \Gamma}:a\mapsto \bigotimes \sigma(a)$$ which we call the norm map. Note that this map is not additive, but does have image in $(A^{\otimes\Gamma})^\Gamma$. Thus, if $a\in A$ then $N(a)\in (A^{\otimes\Gamma})^\Gamma$. Given our above discussion it's now easy to verify that $$\Res_{E/F}(D(a))=D(N(a))$$ Indeed, what is a map of $F$-schemes $$\Spec(R)\to D(N(a))$$ but a map of $F$-algebras $$(A^{\otimes\Gamma})^\Gamma\to R$$ such that $N(a)$ maps to a unit? And what is a map of $E$-schemes $$\Spec(R\otimes_F E)\to\Spec(A)$$ but a map of $E$-algebras $$A\to R\otimes_F E$$? Note then that under our above bijection $$\Hom_F\left((A^{\otimes\Gamma})^\Gamma,R\right)\xrightarrow{\approx}\Hom_E(A,R\otimes_F E)$$ one sees that $$f(N(a))=J(f)(a)$$ and so $f(N(a))$ is a unit iff $J(f)(a)$ is. This is precisely what we want. Let me say one last word on the relationship between the norm map $$N:A\to (A^{\otimes\Gamma})^\Gamma$$ and more familiar versions of the norm map. Namely, let us suppose that there is some $F$-algebra $B$ such that $A=B\otimes_F E$. Then, we note that we have an isomorphism of $E$-algebras $$A^\sigma=B\otimes_F E\otimes_{E,\sigma}E\to A=B\otimes_F E:(b\otimes e_1)\otimes e_2\mapsto b\sigma(e_1)e_2$$ With this, one can show that there is an isomorphism of $E$-algebras $$A^{\otimes \Gamma}\cong B\otimes_E E^{\otimes\Gamma}\cong A\otimes_E (E^{|\Gamma|})\cong B^{|\Gamma|}$$ Moreover, it's not hard to check then that the natural map $$B\to (A^{\otimes\Gamma})^\Gamma$$ is an isomorphism. Thus, we see that the norm map is a map $$N:B\otimes_F E=A\to (A^{\otimes\Gamma})^\Gamma=B$$ Now, let $x\in B\otimes_F E$ act on the left of $B\otimes_F E$ by left multiplication; denote this by $\ell_x$. Since $B\otimes_F E$ is a free $B$-module of finite rank, one can consider $\det(\ell_x)\in B$. Then one has, under the above identifications, that $N(x)=\det(\ell_x)$.
Show that $ 2(a^2 b^2+a^2c^2+b^2c^2)-(a^4+b^4+c^4) = (a+b+c)(-a+b+c)(a-b+c)(a+b-c) $
$ 2(a^2 b^2+a^2c^2+b^2c^2)-(a^4+b^4+c^4)$ $=4a^2b^2-(a^4+b^4+c^4-2c^2a^2-2b^2c^2+2a^2b^2)$ $=(2ab)^2-(a^2+b^2-c^2)^2$ $=(2ab-a^2-b^2+c^2)(2ab+a^2+b^2-c^2)$ $=\{c^2-(a-b)^2\}\{(a+b)^2-c^2\}$ $=(c-a+b)(c+a-b)(a+b-c)(a+b+c)$ $=(-a+b+c)(a-b+c)(a+b-c)(a+b+c)$
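If you want a machine check of the factorization, here is a quick SymPy verification of the identity:

```python
from sympy import symbols, expand

a, b, c = symbols('a b c')
lhs = 2*(a**2*b**2 + a**2*c**2 + b**2*c**2) - (a**4 + b**4 + c**4)
rhs = (a + b + c)*(-a + b + c)*(a - b + c)*(a + b - c)

# expand(lhs - rhs) simplifies to 0, confirming the identity
print(expand(lhs - rhs))  # 0
```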
Discrete Independent random variables X and Y
As others have suggested, you could use that $P(Y<X) = P(X<Y)$ since they are IID. But your initial step is wrong, as there is a whole lot of double counting happening. Instead, try writing (more along the lines of your approach) $$P(Y>X) = \sum_{x=1}^{\infty}P(X = x, Y>x) = \sum_{x=1}^{\infty}P(X=x)(1 - P(Y \leq x)),$$ using that the variables are independent.
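As a sanity check of this formula, here is a small numeric experiment; taking $X, Y$ iid Geometric($p$) on $\{1,2,\dots\}$ is just an illustrative assumption, since the original distribution isn't quoted here.

```python
# X, Y iid Geometric(p) on {1, 2, ...}: P(X = x) = p * (1-p)**(x-1)
p = 0.3
q = 1 - p

# the answer's formula: P(Y > X) = sum_x P(X = x) * (1 - P(Y <= x));
# for a geometric variable, 1 - P(Y <= x) = q**x
lhs = sum(p * q**(x - 1) * q**x for x in range(1, 500))

# cross-check via symmetry: P(Y > X) = (1 - P(X = Y)) / 2, P(X = Y) = p/(2-p)
rhs = (1 - p / (2 - p)) / 2
print(lhs, rhs)  # both ~0.4117... = (1-p)/(2-p)
```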
showing $a_n = \frac{\tan(1)}{2^1} + \frac{\tan(2)}{2^2} + \dots + \frac{\tan(n)}{2^n}$ is not Cauchy
In 2008, Salikhov improved the upper bound on the irrationality measure of $\pi$ to about $7.6063$. This means that for any $\mu > 7.6063$, there are at most finitely many pairs of relatively prime integers $(p,q)$ such that $$|\pi - \frac{p}{q}| < \frac{1}{q^{\mu}}$$ A consequence of this is that if one chooses a small enough $C_\mu > 0$, then for any pair of positive integers $(p,q)$, we have $$|\pi - \frac{p}{q}| > \frac{C_\mu}{q^{\mu}}$$ In order for $\tan n$ to blow up, $n$ needs to be very close to some half-integer multiple $(\ell + \frac12)\pi$ of $\pi$. Using the bound on the irrationality measure above with $\mu = 8$, we find $$\left|n - (\ell + \frac12)\pi\right| = \frac{2\ell+1}{2}\left| \frac{2n}{2\ell+1} - \pi \right| > \frac{C_8}{2(2\ell+1)^7} \sim \frac{\pi^7C_8}{2^8n^7} $$ Notice $$|\tan x| \sim \frac{1}{|x - (\ell+\frac12)\pi|}\quad\text{ for } x \sim (\ell+\frac12)\pi $$ This gives us an approximate bound for $\tan n$: $$|\tan n| \,\lesssim\,\frac{2^8 n^7}{\pi^7C_8} \quad\implies\quad \sum_{n=1}^{\infty} \frac{\tan n}{2^n}\quad\text{ converges absolutely.}$$
Equivalent form of derivative as limit?
Let $\Delta y=-\Delta x$. Then $$\lim_{\Delta x \to 0} \dfrac{f(x) - f(x - \Delta x)}{\Delta x}= \lim_{\Delta y \to 0} \dfrac{f(x) - f(x + \Delta y)}{-\Delta y}$$ Now move the $-$ to the numerator.
Bounds for $D = \{x^2+y^2\leq4\}$ vs $C = \{x^2+y^2=4\}$
In the first case, you are considering only a circumference, so you have a linear density, i.e. mass per unit of length. Thus, if we set $x=2\cos\theta$ and $y=2\sin\theta$, we have: $$ m = \int_{0}^{2\pi}(2\cos\theta+7)\,2\,d\theta$$ In the second case, you are considering a full disk, so you have a surface density, i.e. mass per unit of area. Thus, using the same change of variables: $$ m = \int_{0}^{2\pi}\int_{0}^{2}(r\cos\theta+7)\,r\,dr\, d\theta$$
How to solve the functional equation $ f(f(x))=ax^2+bx+c $
SHORT ANSWER: A general solution to this problem is not known in closed form, but some special cases can be solved. Sorry. LONG ANSWER: Notice that if a function $f$ satisfies $$f(x)=(g^{-1} \circ h \circ g)(x)$$ for some $g,h$, then $$f^n(x)=(g^{-1} \circ h^n \circ g)(x)$$ where the superscript denotes functional iteration rather than exponentiation. We can solve this equation for a whole class of quadratics $$f^2(x)=q(x)=ax^2+bx+c$$ for which $$c=\frac{b^2-2b}{4a}$$ This is because we can rewrite $$q(x)=ax^2+bx+\frac{b^2-2b}{4a}$$ as $$q(x)=a\bigg(x+\frac{b}{2a}\bigg)^2-\frac{b}{2a}$$ and so, by letting $g(x)=x+\frac{b}{2a}$ and $h(x)=ax^2$, $$q(x)=(g^{-1} \circ h \circ g)(x)$$ and thus $$q^n(x)=(g^{-1} \circ h^n \circ g)(x)$$ and since the formula for $h^n$ is $$h^n(x)=a^{2^n-1}x^{2^n}$$ we have $$q^n(x)=a^{2^n-1}\bigg(x+\frac{b}{2a}\bigg)^{2^n}-\frac{b}{2a}$$ and, finally, $$f(x)=q^{1/2}(x)=a^{\sqrt 2-1}\bigg(x+\frac{b}{2a}\bigg)^{\sqrt 2}-\frac{b}{2a}$$ So there's the solution for that special case. There is another special case when $$c=\frac{b^2-2b-8}{4a}$$ but the solution to that case is much longer, and so I will omit it and leave it to you for independent research. There is another special case involving trigonometric functions. Note that if we let $$g(x)=\arccos x$$ $$h(x)=2x$$ we have $$(g^{-1}\circ h\circ g)(x)=\cos(2\arccos x)$$ and, using the double-angle formula, $$(g^{-1}\circ h\circ g)(x)=\cos^2(\arccos x)-\sin^2(\arccos x)$$ $$(g^{-1}\circ h\circ g)(x)=x^2-(1-x^2)$$ $$(g^{-1}\circ h\circ g)(x)=2x^2-1$$ and so $g^{-1}\circ h\circ g$ is a quadratic. Now let $$q(x)=(g^{-1}\circ h\circ g)(x)$$ so that $$q^n(x)=(g^{-1}\circ h^n\circ g)(x)$$ Now, since $$h^n(x)=2^n x$$ we have $$q^n(x)=\cos(2^n\arccos x)$$ and $$f(x)=q^{1/2}(x)=\cos(\sqrt{2}\arccos x)$$ which solves yet another special case. However, notice that this is only solved on the domain $[-1,1]$, because that is where $\arccos x$ is defined. I am sure there are other special cases involving trigonometric functions, but I will leave those for you to find.
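A quick numeric check of the first special case, with illustrative values $a=2$, $b=3$ (so $c=(b^2-2b)/(4a)=3/8$): composing the proposed $f$ with itself should reproduce $q$.

```python
# check f(f(x)) == a*x**2 + b*x + c for the special case c = (b*b - 2*b)/(4*a)
a, b = 2.0, 3.0
c = (b**2 - 2*b) / (4*a)

def f(x):
    # the functional square root q^(1/2) derived above
    return a**(2**0.5 - 1) * (x + b/(2*a))**(2**0.5) - b/(2*a)

def q(x):
    return a*x**2 + b*x + c

for x in (0.5, 1.0, 2.0):   # x + b/(2a) > 0, so the real power is defined
    print(f(f(x)), q(x))    # the two columns agree up to rounding error
```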
How should I solve these inequalities?
The first problem. Your proof is beautiful! My solution: We need to prove that: $$\sum_{cyc}\frac{a^2+ac-2ab}{ab+1}\geq0$$ or $$\sum_{cyc}\frac{a^2+ac+2}{ab+1}\geq6.$$ Now, by C-S $$\sum_{cyc}\frac{a^2+ac+2}{ab+1}\geq\frac{\left(\sum\limits_{cyc}(a^2+ac+2)\right)^2}{\sum\limits_{cyc}(a^2+ac+2)(ab+1)}.$$ Thus, it's enough to prove that: $$\frac{\left(\sum\limits_{cyc}(a^2+ac+2)\right)^2}{\sum\limits_{cyc}(a^2+ac+2)(ab+1)}\geq6$$ or $$\sum_{cyc}(a^4-4a^3b+2a^3c+3a^2b^2-2a^2bc)+6\sum_{cyc}(a^2-ab)\geq0$$ and since $$6\sum_{cyc}(a^2-ab)=3\sum_{cyc}(a-b)^2\geq0,$$ it's enough to prove that $$\sum_{cyc}(a^4-4a^3b+2a^3c+3a^2b^2-2a^2bc)\geq0,$$ which is smooth. The second problem. By your work it's enough to prove that: $$\frac{3}{8}\sum_{cyc}(x+3)+\sum_{cyc}\frac{6}{x+3}\geq9,$$ which is true by C-S and AM-GM: $$\frac{3}{8}\sum_{cyc}(x+3)+\sum_{cyc}\frac{6}{x+3}=\frac{3}{8}\left(\sum_{cyc}(x+3)+16\sum_{cyc}\frac{1}{x+3}\right)\geq$$ $$\geq \frac{3}{8}\left(\sum_{cyc}(x+3)+\frac{144}{\sum\limits_{cyc}(x+3)}\right)\geq\frac{3}{8}\cdot2\cdot\sqrt{144}=9.$$
Proving the transformation of a generating set will generate the image of the transformation
You basically got it. You noted that for $w \in \mathrm{Im}(T)$, you have a $v \in V$ such that $T(v) = w$ by definition of $\mathrm{Im}(T)$, and you noted that by linearity of $T$, you have $w = \alpha_1 \cdot T(v_1) + \dots + \alpha_n \cdot T(v_n)$. Since $w$ is an arbitrary vector of $\mathrm{Im}(T)$, that means that $T(v_1), \dots, T(v_n)$ generate $\mathrm{Im}(T)$. Just make sure to mention the terminology in italics. And don't forget the scaling factors $\alpha_i$ in a linear combination.
Binary variables in time series: integer linear programming
One simple way to enforce a run length of at least three is to forbid the patterns 010 and 0110. This can be modeled as: $$ -x_t + x_{t+1} - x_{t+2} \le 0 $$ and $$ -x_t + x_{t+1} + x_{t+2} - x_{t+3} \le 1 $$ A little bit of thought is needed to decide what to do at the borders, especially the first time period. A different approach is detailed here.
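Here is a brute-force sketch, under the assumption that the sequence is padded with zeros at both borders (one way of handling the border issue mentioned above): the two cuts then hold for a 0-1 string exactly when every run of ones has length at least three.

```python
from itertools import product

def cuts_hold(x):
    """Apply the two inequalities to the zero-padded sequence."""
    z = (0,) + x + (0,)
    no_010 = all(-z[t] + z[t+1] - z[t+2] <= 0 for t in range(len(z) - 2))
    no_0110 = all(-z[t] + z[t+1] + z[t+2] - z[t+3] <= 1
                  for t in range(len(z) - 3))
    return no_010 and no_0110

def min_run_at_least_3(x):
    runs = [len(r) for r in ''.join(map(str, x)).split('0') if r]
    return all(n >= 3 for n in runs)

assert all(cuts_hold(x) == min_run_at_least_3(x)
           for x in product((0, 1), repeat=12))
print("cuts match the run-length condition on all 0-1 strings of length 12")
```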
Naturality of the Lie bracket.
Yeah, these ideas can be expressed in the language of category theory. For instance, the fact that the star is a subscript (instead of a superscript) on $F_*$ indicates that there is a covariant functor (instead of a contravariant functor) behind the scenes. To be precise, the naturality you seek involves the category Diff with smooth manifolds as objects and diffeomorphisms as morphisms, as well as the category Vect$_\mathbf{R}$ with real vector spaces as objects and linear maps as morphisms. In this language, $\mathfrak{X}$ is a functor from Diff to Vect$_\mathbf{R}$ sending diffeomorphisms $F\colon M\to N$ of smooth manifolds to linear maps $F_*\colon\mathfrak{X}(M)\to\mathfrak{X}(N)$, where $\mathfrak{X}(M)$ denotes the space of smooth vector fields on $M$. That is, $\mathfrak{X}\colon F\mapsto F_*$. Similarly, we may define a functor $\mathfrak{X}\times\mathfrak{X}$ from Diff to Vect$_\mathbf{R}$ by sending $F$ to $F_*\times F_*$, where $(F_*\times F_*)\colon\mathfrak{X}(M)\times\mathfrak{X}(M)\to\mathfrak{X}(N)\times\mathfrak{X}(N)$ is defined as one may expect — namely by sending $(X,Y)$ to $(F_*X,F_*Y)$. The Lie bracket $[\cdot,\cdot]\colon\mathfrak{X}\times\mathfrak{X}\to\mathfrak{X}$ is then a natural transformation from the functor $\mathfrak{X}\times\mathfrak{X}$ to the functor $\mathfrak{X}$, as the identity $F_*[X,Y]=[F_*X,F_*Y]$ is precisely the statement that the diagram $$\require{AMScd}\begin{CD}\mathfrak{X}(M)\times \mathfrak{X}(M) @>F_*\times F_*>> \mathfrak{X}(N)\times \mathfrak{X}(N)\\@VV[\cdot,\cdot]V @VV[\cdot,\cdot]V\\\mathfrak{X}(M) @>F_*>> \mathfrak{X}(N)\end{CD}$$ commutes.
Find ABC given that the other five possible permutations of its digits add up to 3194
Suppose we try all six permutations of the three (unknown) digits; adding them all up contributes $222 \times (A+B+C)$, since each digit falls twice into each of the hundreds, tens, and units columns. Now this total exceeds $3194$ by the value of the three-digit number ABC itself, so we can begin by guessing the multiplier $\lceil 3194/222 \rceil = 15$. That one doesn't work out properly ($222 \times 15 - 3194 = 136$, whose digits sum to $10$, not $15$), but the next multiple does: ABC $= 222 \times 16 - 3194 = 358$, and indeed $3194 + 358 = 222 \times (3+5+8)$.
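The guess-the-multiple argument can also be confirmed by exhaustive search over all three-digit candidates:

```python
# sum of all six digit-permutations is 222*(A+B+C); the other five sum to 3194
for abc in range(100, 1000):
    a, b, c = abc // 100, (abc // 10) % 10, abc % 10
    if 222 * (a + b + c) - abc == 3194:
        print(abc)  # 358 is the only solution
```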
Characterizing the irreducible complex representations of a group - trying to find a simpler method
There is a standard theorem that all irreducible complex representations of finite abelian groups are one-dimensional, and the number of distinct such representations is equal to the order of the group. This follows, among other ways, from the standard theorems that the number of irreducible representations is equal to the number of conjugacy classes and that the sum of squares of the dimensions of the irreducible representations equals the order of the group. Note that a more elementary argument simply uses Schur's Lemma. Over any algebraically closed field, the set of linear transformations commuting with every element of an irreducible representation consists of just the scalar matrices. Since $G$ is abelian, all its representing matrices commute with each other and thus must all be scalar matrices. Clearly this means that the representation fails to be irreducible unless the dimension is $1$. You can find an elementary and brief proof of Schur's Lemma on Wikipedia.
Find all holomorphic functions $f:\mathbb{C}\setminus\{0\}\rightarrow \mathbb{C}$
From the first condition, $\dfrac{f(z)}{\sin z}$ has a removable singularity at $0$. Hence $f(z) = g(z)\sin z$ for some holomorphic function $g$, at least locally. In particular, $f(0) = 0$. On the other hand, from the second assumption, $f$ has a pole of order at most $1$ at infinity. Hence $f$ has to be a polynomial of degree at most $1$, and since $f(0)=0$, $f(z) = cz$ for some $c$. Added more details, since apparently someone thought it wasn't clear enough.
Question about principal bundle on Wikipedia
The vector bundle is trivialized by/on some cover of your manifold, and of course there are transition maps when you represent the "same" vector sitting over two different overlapping open sets, which of course we know. This is what they mean by gluing (of the original vector bundle). Then, the same transition maps "naturally" induce a gluing of the associated principal bundle (instead of thinking of the data vector by vector, we get transition maps by taking one "matrix of bases" to another matrix of bases).
Prove by induction $T_n$ is odd
$(a)$ Suppose $T_n$ is odd for $n=m+1$. Then $\displaystyle T_{m+2}=T_{m+1}+2T_m$ is also odd, since $\displaystyle T_{m+2}-T_{m+1}=2T_m$ is even for integer $T_m$; i.e., $\displaystyle T_{m+2}$ and $T_{m+1}$ have the same parity. $(b)$ $$(T_{n+2},T_{n+1})=(T_{n+1}+2T_n,T_{n+1})=(2T_n,T_{n+1})=(T_n,T_{n+1})$$ as $T_{n+1}$ is odd $$\implies(T_{n+2},T_{n+1})=(T_{n+1},T_n)$$ which reminds me of: Prove that any two consecutive terms of the Fibonacci sequence are relatively prime
Find Original Value using result of discounted discount
discounted OP = OP - 10% of OP = 0.9 * OP

TP = 0.8 * discounted OP = 0.72 * OP

Therefore OP = TP/0.72. For example, a final price TP of 36 gives OP = 36/0.72 = 50.
Find the mistake (AM-GM inequality)
Your use of the AM-GM is equivalent to saying something like $$\frac{2 + (-1) + (-1)}{3} \ge \sqrt[3]{2(-1)(-1)}.$$ It doesn't work because we require each term to be a nonnegative real number.
Find the volume between $y = 4 − \frac{3x}{2}$ and $y=0$ and $x\in [0, 1]$
If I have not misunderstood your given curve, $$\begin{align} V =&\int_0^1 \pi\left(4-\frac{3x}{2}\right)^2dx\\ =&\pi\int_0^1 \left(16-12x+\frac{9x^2}{4}\right)dx\\ =&\pi\left[16x-6x^2+\frac{3x^3}{4}\right]_0^1\\ =&\pi\left(16-6+\frac{3}{4}\right)\\ =&\frac{43\pi}{4}\\ \end{align}$$
How to calculate this integral $\int_{0}^{2\pi}\sqrt{1+\pi^2-2\pi\cos t}dt$
$$ I = \int_{-\pi}^{+\pi}\sqrt{\pi-e^{i\theta}}\sqrt{\pi-e^{-i\theta}}\,d\theta $$ is a complete elliptic integral of the second kind. By expanding $\sqrt{\pi-z}$ as a Taylor series and exploiting $\int_{-\pi}^{\pi}e^{ki\theta}\,d\theta = 2\pi \delta(k)$ we have that $I$ can be represented by the following fast-converging series: $$ I = 2\pi^2\sum_{n\geq 0}\binom{\frac{1}{2}}{n}^2\frac{1}{\pi^{2n}}=\color{red}{2\pi^2\sum_{n\geq 0}\binom{2n}{n}^2\frac{1}{(2n-1)^2(16\pi^2)^n}}. $$ In a similar way, by setting $\kappa=\frac{2\pi}{1+\pi^2}$ and computing $\int_{-\pi}^{\pi}\cos^{2n}(\theta)\,d\theta$, we get the equivalent representation: $$ I = \color{red}{2\pi\sqrt{\pi^2+1}\sum_{n\geq 0}\binom{4n}{n,n,2n}\frac{\kappa^{2n}}{(1-4n)64^n}}.$$
Ternary representation of Cantor set
Let $C_1= [0,\frac{1}{3}] \cup [\frac{2}{3}, 1]$, $C_2 = [0, \frac{1}{9}]\cup[\frac{2}{9},\frac{1}{3}]\cup[\frac{2}{3}, \frac{7}{9}]\cup[\frac{8}{9},1]$, $\ldots$ and continue, in this manner, defining the $C_k$ by "eliminating the middle third" of each interval in the previous $C_{k-1}$; then we have the Cantor ternary set $$C=\bigcap_{k=1}^\infty C_k$$ and our task is to show that $C$ is precisely the set of all real numbers in $[0,1]$ that can be represented with only zeros and twos in their base-three expansion. NOTE: base-three expansions, as always, are not unique; for instance, $\frac{1}{3} = 0.1 = 0.022222 \ldots$ (base three), so that $\frac{1}{3} \in C$ because it can be expressed with only zeros and twos in its base-three expansion. Note, also, that $\frac{1}{3}$ is in every $C_k$ as defined above, so that it is in the intersection of all of them. This is going to be our general idea. If some $x \in C$, we show that $x$ has a base-three expansion consisting of only zeros and twos. But that doesn't let us off the hook yet - there perhaps could be numbers with base-three expansions consisting of only zeros and twos that are NOT in $C$, so we will, as our second component of the proof, show that given a number with a base-three expansion consisting of only zeros and twos, that number must be in $C$. We have that $x \in C$ if and only if $x \in C_k$ for all $k \in \mathbb{N}$. For $C_1$ note that $x$ can have a $0$ in the "one-thirds" place if $x$ is in $[0, \frac{1}{3}]$, or that $x$ can have a $2$ in the "one-thirds" place if $x$ is in $[\frac{2}{3}, 1]$. Likewise, for the second digit, consider which of the two intervals in $C_2$ where $x$ belongs (why do we only examine two, and not all four intervals used to construct $C_2$?) and note a zero or a two can be assigned accordingly for the second digit. Continue likewise, and formalize what I have written, to conclude that $x \in C$ if and only if $x$ has a base-three expansion consisting of only zeros and twos.
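To make the digit criterion concrete, here is a small sketch that tests membership by generating ternary digits with exact rational arithmetic; a digit $1$ is tolerated only when it terminates the expansion (since e.g. $0.1$ base three equals $0.0222\ldots$ base three):

```python
from fractions import Fraction

def in_cantor(x, digits=40):
    """Approximate test: do the first `digits` ternary digits of x in [0,1]
    admit an expansion using only 0's and 2's?"""
    x = Fraction(x)
    for _ in range(digits):
        x *= 3
        d = int(x)   # next ternary digit
        x -= d
        if d == 1:
            # a 1 is fine only if the expansion ends here: 0.1 = 0.0222...
            return x == 0
    return True

print(in_cantor(Fraction(1, 3)))  # True:  0.1 = 0.0222... base 3
print(in_cantor(Fraction(1, 4)))  # True:  0.020202... base 3
print(in_cantor(Fraction(1, 2)))  # False: 0.111... base 3
```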
Enumeration of dense countable subset and the axiom of choice
Suppose that $C(x)$ is the following formula (expressible in the language of the first order theory of sets): $$x\,\mbox{ is an enumeration of a countable dense subset of a fixed normed space }X$$ I think (maybe other users can correct me) that there is the following theorem of first order logic: $$\exists_xC(x)\rightarrow \left(\left(C(s)\rightarrow T\right)\rightarrow T\right)$$ where $T$ is any sentence of the first order set theory. So if we pick as $T$ a statement of the Hahn–Banach theorem for $X$ (expressed in the language of first order set theory) and provided that we have proved before that $C(s)\rightarrow T$, then by modus ponens we have proved $T$. All steps without invoking the axiom of choice. This boils down to saying that we have the valid deduction $$C(s)\rightarrow T, \exists_xC(x), \exists_xC(x)\rightarrow \left(\left(C(s)\rightarrow T\right)\rightarrow T\right)\vdash T$$ and all three premises are proved without AC. Is that correct (Asaf Karagila, can you please help me)? I am not an expert in logic. Edit. Moreover, $C(s)\rightarrow T$ is just the same as proving the Hahn–Banach theorem for $X$ with the assumption that $s$ is a fixed enumeration of a countable dense subset of $X$. So $s$ is of the form $\{x_n\}_{n\in \mathbb{N}}$ and this sequence is dense in $X$. You want to prove from this HB for $X$.
Significant figures when using formula
The $2(1+\sqrt2)$ component should be treated as a constant with an infinite number of significant digits (because there is no doubt about its value whatsoever). Since this constant is being multiplied by $a^2$, the answer should have the minimum number of significant digits among $2(1+\sqrt2)$, $a$ and $a$. Hence the answer will have four significant digits.
Induction proof - divisibility by 3
For $n=0$ this should be clear: $2^1+1=3$. So suppose we know that $2^{2\cdot n + 1}+1$ is divisible by $3$. Then \begin{align} 2^{2\cdot (n +1) + 1}+1 &= 2^2 \cdot 2^{2\cdot n + 1}+1\\ &= 4\left(2^{2\cdot n + 1}+1\right)-3 \end{align} is also divisible by $3$, since both terms are.
Eigenvalues and eigenvectors general $n \times n$
You need to start by computing the characteristic polynomial of your matrix A, i.e. you need to compute the polynomial $$ p(x) = \det(x\mathbb{1} - A) $$ where $\mathbb{1}$ is the identity matrix, and then solve $p(x) = 0$. The roots will be your eigenvalues, and once you have them you can easily find the eigenvectors. If I'm not wrong the characteristic polynomial in the case $n\times n$ turns out to be given by $$ (x-2)[(x-2)^{n-1}-n+1] $$
How many nonnegative integer solutions are there to the pair of equations $x_1+x_2+…+x_6=20$ and $x_1+x_2+x_3=7$?
You are correct. You can also think of it in terms of permutations. The number of non-negative integer solutions of $x_1+x_2+x_3=7$ is the number of permutations of a multiset with seven $1$'s and two $+$'s. This is $$\frac{9!}{7!\ 2!}.$$ Similarly, the number of non-negative integer solutions of $x_4+x_5+x_6=13$ is the number of permutations of thirteen $1$'s and two $+$'s. This is $$\frac{15!}{13!\ 2!}.$$ This is why the first number in your combination is what the variables equal, and the second is "one less than" the number of variables, since you're permuting the $+$'s.
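A quick cross-check of both counts and of the product, by brute force over the two independent equations:

```python
from math import comb
from itertools import product

# stars and bars: C(9,2) = 36 solutions of x1+x2+x3 = 7,
#                 C(15,2) = 105 solutions of x4+x5+x6 = 13
print(comb(9, 2), comb(15, 2), comb(9, 2) * comb(15, 2))  # 36 105 3780

# brute force: x3 and x6 are determined once x1,x2 and x4,x5 are chosen
a = sum(1 for x1, x2 in product(range(8), repeat=2) if x1 + x2 <= 7)
b = sum(1 for x4, x5 in product(range(14), repeat=2) if x4 + x5 <= 13)
print(a * b)  # 3780
```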
How to find out if it is possible to construct a binary matrix with given row and column sums.
It seems from your description that you don't need to actually compute the matrix, but just ensure that the necessary and sufficient conditions exist for one to be computed. For that, I think that all you'll need is H. J. Ryser's paper, "Combinatorial properties of matrices of zeros and ones." You may also find Richard A. Brualdi's paper "Matrices of zeros and ones with fixed row and column sum vectors" very readable. Applying those, we first establish some terminology. As you've already said, the matrix ${\bf A}$ is an $m \times n$ (where $m$ and $n$ are positive integers) matrix of ones and zeros. Further, we have row and column sum vectors ${\bf R} = (r_1,\dots,r_m)$ and ${\bf S}=(s_1,\dots,s_n)$ which are nonnegative integral vectors. First, it's clear that if there are $\tau$ ones in the matrix ${\bf A}$, then $\displaystyle \tau = \sum_{i=1}^{m}r_i = \sum_{j=1}^{n}s_j$. If that condition fails, then obviously the answer is "no." If it holds, then following Brualdi, we can define a function to help with the evaluation. First, let $I \subseteq \left\{ 1,\dots,m \right\}$ and $J \subseteq \left\{1,\dots, n \right\}$. Define $\displaystyle t(I,J) =|I||J|+\sum_{i \notin I}r_i - \sum_{j \notin J}s_j$. All that remains is to demonstrate that $t(I,J) \ge 0$ for all $I\subseteq\left\{1,\dots,m\right\}$ and $J\subseteq\left\{1,\dots,n\right\}$.
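Checking $t(I,J)\ge 0$ over all pairs of subsets is exponential; in practice one uses the equivalent Gale–Ryser condition, which only needs the column sums sorted. A minimal sketch (the example margins are made up for illustration):

```python
def binary_matrix_exists(r, s):
    """Gale-Ryser test: does a 0-1 matrix with row sums r and column
    sums s exist?  Equivalent to t(I,J) >= 0 for all I, J."""
    if sum(r) != sum(s):
        return False
    if min(r + s, default=0) < 0 or (r and max(r) > len(s)):
        return False
    s_desc = sorted(s, reverse=True)
    for k in range(1, len(s_desc) + 1):
        if sum(s_desc[:k]) > sum(min(ri, k) for ri in r):
            return False
    return True

print(binary_matrix_exists([3, 3], [2, 2, 1, 1]))  # True
print(binary_matrix_exists([2, 2], [2, 1, 1]))     # True
print(binary_matrix_exists([2], [2]))              # False: row sum exceeds n
```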
prove the root of a polynomial (with prime power constant) is also a power of prime
If $f(k)=0$, we can write $f(x)=g(x)(x-k)$, where $g(x) = b_{n-1}x^{n-1}+b_{n-2}x^{n-2}+\dots+b_1x+b_0$ is some polynomial of degree $n-1$. Comparing constant terms gives $a_0 = -k\,b_0$, i.e. $b_0 = -\frac{a_0}{k}$, and because $a_0$ is a prime power and $b_0$ is an integer, $k$ divides $a_0$ and so $k$ must be (up to sign) a power of the same prime.
Seat friends at a dinner table
A slight improvement on user8734617's method: Exactly $\frac{1}{7}$ of the possible arrangements have a fixed pair $A_1,B_1$ seated together. To see this, given any arrangement with the pair seated together, fix the position of $A_1$ and rotate the other $7$ around the table, giving $6$ other arrangements with $A_1,B_1$ not seated together. This method produces all possible arrangements around the table, so each arrangement with $A_1,B_1$ seated together corresponds to $6$ other arrangements, meaning $A_1,B_1$ sit together with probability $\frac{1}{7}$. By the same method, of those arrangements with a fixed pair $A_1,B_1$ seated together, $\frac{1}{5}$ have another fixed pair $A_2,B_2$ also seated together, so two fixed pairs sit together with probability $\frac{1}{7 \cdot 5}$; similarly, three fixed pairs sit together with probability $\frac{1}{7 \cdot 5 \cdot 3}$ and four with probability $\frac{1}{7\cdot5\cdot3\cdot1}$. Then we apply inclusion-exclusion: our answer is $$\begin{align} &amp;1 - \binom{4}{1}\cdot\frac{1}{7} + \binom{4}{2}\cdot\frac{1}{7 \cdot 5} - \binom{4}{3}\cdot\frac{1}{7 \cdot 5 \cdot 3} + \binom{4}{4}\cdot\frac{1}{7\cdot5\cdot3\cdot1} \\ &amp;= 1 - \frac{4}{7} + \frac{6}{7\cdot5} - \frac{4}{7 \cdot 5 \cdot 3} + \frac{1}{7 \cdot 5 \cdot 3} \\ &amp;= \frac{3}{7} + \frac{6}{7\cdot5} - \frac{3}{7 \cdot 5 \cdot 3} \\ &amp;= \frac{3}{7} + \frac{5}{7\cdot5} \\ &amp;= \frac{4}{7} \end{align}$$
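The original seating rule isn't quoted here, but the $\frac17,\frac15,\frac13$ pattern corresponds to pairing the $8$ people into $4$ pairs uniformly at random (equivalently, seating them so that a pair "sits together" exactly when it occupies a pair of facing seats). Under that reading, exhaustive enumeration confirms the answer:

```python
from itertools import permutations
from fractions import Fraction

# people 0..7, pairs (0,1), (2,3), (4,5), (6,7); seat i faces seat i+4 mod 8
count = total = 0
for rest in permutations(range(1, 8)):
    seats = (0,) + rest                 # person 0 fixed to remove rotations
    pos = {p: s for s, p in enumerate(seats)}
    total += 1
    if all((pos[2*i] - pos[2*i + 1]) % 8 != 4 for i in range(4)):
        count += 1

print(Fraction(count, total))           # 4/7
```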
Variation over univariate Schwartz–Zippel lemma
It seems to me that we are at liberty to choose the polynomials $s$ and $s'$. So let's select $s'=0$ and $$s(x)=(x-1)(x-2)\cdots(x-[\sqrt{n}]).$$ Then $s(r)=s'(r)$ for all $r$ in the given range, and both these polynomials meet your bound on their degree. Thus the upper bound $Pr()\le 1$ can be attained, and is therefore the best bound. Or did you want to ask something else?
extending a continuous function from a closed subset
If we don't have any restriction on the range of the continuous function, then the answer is no. Namely, let $A=\{0,1\}\subseteq\mathbb{R}$ and consider the continuous function $\operatorname{id}_A$. You can't extend $\operatorname{id}_A$ to a continuous function on the whole of $\mathbb{R}$ with values in $A$, because $\mathbb{R}$ is connected while $A$ is not. However, if the range of the continuous function is for example $\mathbb{R}$, then the extension can always be done; see the Tietze extension theorem.
How many primes in the first 6000 primes have a particular property
I count 77 solutions: 2, 3, 5, 7, 11, 19, 23, 29, 31, 41, 43, 47, 53, 59, 61, 67, 79, 89, 103, 113, 127, 131, 167, 173, 193, 211, 227, 239, 271, 281, 283, 409, 419, 431, 439, 443, 463, 547, 571, 601, 617, 619, 677, 743, 761, 1013, 1051, 1223, 1231, 1289, 1381, 1409, 1559, 1597, 1613, 1933, 2003, 2111, 2311, 2351, 2411, 2549, 2551, 2791, 2857, 2927, 2969, 4831, 5059, 5801, 5903, 6373, 8191, 9901, 10973, 17291, 23561
Intuition/example on the smallest $\sigma$-algebra
I'll try to explain this from a probability theory perspective. Maybe someone will correct me here. In probability on discrete sample spaces we denote by $\Omega$ the sample space. For example, when speaking of a single die roll, the sample space can be the set of possible results: $\Omega = \{1,2,3,4,5,6\}$, and then we can simply take $2^\Omega = \{ \emptyset,\{1\},\{2\},..,\{1,2\},\{1,3\},...,\{1,2,3,4,5,6\} \}$ to be every possible event. But, when there is a need to speak of uncountable sample spaces (which is needed, for example, when you investigate the probability of events that occur on the interval $(0,1)$), we can't take $2^\Omega$ (there are going to be sets, like Vitali sets, that cause problems). Hence, we need a subset of $2^\Omega$ that maintains the important properties: the empty event, the complement of an event, and unions of events. Now, the concept of a minimal $\sigma$-algebra is important because an intersection of $\sigma$-algebras is also a $\sigma$-algebra, and we can prevent ourselves from including problematic sets such as Vitali sets by taking, for example, the minimal sigma algebra that includes all the open subsets of $(0,1)$, which is good enough for many applications. (Of course we still have to prove that each set of this sigma algebra can be assigned a probability, but this can be done with Lebesgue measure.)
Generalization of an inequality $0\lt e^6-{\pi}^4-{\pi}^5\lt 0.00002$
I'm afraid the only way to solve this and other similar problems is with the help of a computer. I've just searched for solutions to all inequalities of the form $|\pm e^a\pm\pi^b\pm\pi^c|\leqslant10^{-2}$, $a,b,c\leqslant100$, and came up empty, save for the solution you've already presented. If you are interested in this kind of topic, I would recommend reading these two Wikipedia articles, as well as using this site.
A sequence of random variables, such that CDF converges
No, it is not true as stated. For instance, if $P(X_n=n)=1$ for all $n$, your supposed $X$ would have cdf $F_X(t)=0$ for all $t$. But it is close to something true and important: Prokhorov's theorem, which covers the case where the $X_n$ are "tight", that is, for every $\epsilon$ there is a $K$ such that $P(|X_n|>K)<\epsilon$ for all $n$. (Which of course is not satisfied by the example in the above paragraph.)
Missing a trick: 3D Brownian Motion from a covariance matrix
All of the eigenvalues and eigenvectors of this matrix can be found by inspection. In general, for a symmetric matrix you only need to find two eigenvalue/eigenvectors pairs since you get the third pair “for free:” the trace of the matrix is equal to the sum of its eigenvalues, and you can just take the cross product of the two eigenvectors you’ve found to get a third. It’s also sometimes easier to look for eigenvectors and deduce the corresponding eigenvalues from them instead of going at it the other way around. For this matrix in particular, observe that the second column only has a non-zero entry in its second row. This means that $M(0,1,0)^T$ will be some multiple of $(0,1,0)^T$, so there’s one eigenvector/eigenvalue pair. You know that the other eigenvectors are orthogonal to this, which means that they must be of the form $(x,0,z)^T$. So, to find other eigenvectors play around with linear combinations of the first and third columns. An easy thing to try first is their sum and difference. I should add that finding the eigenvalues via the characteristic polynomial isn’t all that much work for this matrix. Expanding the determinant along the second row eliminates most of the terms and gives you one of the eigenvalues immediately: $$(\lambda-c^2)((\lambda-a^2-b^2)^2-4a^2b^2)=0.$$ Simplifying and factoring the remaining term doesn’t look too terrible.
What are irrational real numbers?
Here is the definition (in terms of sets) of an Irrational number: The set of Real numbers $\mathbb{R}$ minus the set of Rational numbers $\mathbb{Q}$ is the set of Irrational numbers, which is written $\mathbb{R} \setminus\mathbb{Q}$. Less formally, a definition of an Irrational number is a number that cannot be written in the form $\cfrac{p}{q}$ where $p\in \mathbb{Z}$ and $q\in \mathbb{N}^{+}$. In simple English, a Rational number is any fraction such as $\cfrac{3}{4}$, $\cfrac{6}{1}=6$, etc. The other way to tell if a number is rational is to see if its decimal digits recur (repeat), such as $\cfrac{1}{9}=0.111111\ldots$ and $\cfrac{2}{15}=0.13333333\ldots$; also $\cfrac{1}{7}=0.\color{blue}{142857}\,142857\,\color{blue}{142857}\,142857\ldots$ ($\color{red}{142857}$ recurs in this case). An Irrational number is $\sqrt{3}$, for example, whose decimal digits do not recur (this is a proved fact, not something checked digit by digit, since of course no-one can inspect the whole infinite decimal): $\sqrt{3} = 1.73205080756887729352744634150587236 \ldots$ A word of caution: If you are simply told that a number is not rational and nothing else, that does not mean that it is irrational, and vice versa. However, since $\left(\mathbb{R} \setminus\mathbb{Q}\right) \subset \mathbb{R}$, it is okay to say that if a real number is not irrational then it must be rational, and vice versa. This is because we are referring to a subset of the real numbers.
How to prove that $\sum_{n \, \text{odd}} \frac{n^2}{(4-n^2)^2} = \pi^2/16$?
First, the partial fraction of the summand can be written $$\begin{align} \frac{n^2}{(4-n^2)^2}&amp;=\frac14\left(\frac{1}{n-2}+\frac{1}{n+2}\right)^2\\\\ &amp;=\frac14 \left(\frac{1}{(n-2)^2}+\frac{1}{(n+2)^2}+\frac{1/2}{n-2}-\frac{1/2}{n+2}\right) \end{align}$$ Second, we note that $$\begin{align} \sum_{n\,\,\text{odd}}\frac{1}{(n\pm 2)^2}&amp;=\sum_{n=-\infty}^\infty \frac{1}{(2n-1)^2}\\\\ &amp;=2\sum_{n=1}^\infty \frac{1}{(2n-1)^2}\\\\ &amp;=2\left(\sum_{n=1}^\infty \frac{1}{n^2}-\sum_{n=1}^\infty \frac{1}{(2n)^2}\right)\\\\ &amp;=\frac32 \sum_{n=1}^\infty \frac{1}{n^2}\\\\ &amp;=\frac{\pi^2}{4} \end{align}$$ Third, it is easy to show that $$\sum_{n=-\infty}^\infty \left(\frac{1}{2n-3}-\frac{1}{2n+1}\right)=0$$ Putting it all together we have $$\sum_{n,\,\,\text{odd}}\frac{n^2}{(4-n^2)^2}=\frac{\pi^2}{8}$$ If we sum over the positive odd only, then the answer is $(1/2)\pi^2/8=\pi^2/16$
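A quick partial-sum check of the final value (the tail is $O(1/N)$, so a million odd terms give about six correct digits):

```python
from math import pi

# sum over positive odd n of n^2 / (4 - n^2)^2
s = sum(n*n / (4 - n*n)**2 for n in range(1, 2_000_001, 2))
print(s, pi**2 / 16)   # 0.61685... for both
```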
Finding mixed (and pure) Nash equilibria of a 2-players $n \times n$ game
The "well-known formulation" you describe is simply wrong. It is not sufficient for a player to be indifferent between some pure strategies to be able to mix between them in equilibrium; they must also be best response, i.e., give the best possible payoff among all pure strategies. Finding all equilibria is harder for "degenerate games", where there be infinitely many equilibria, but if you are content with just finding the extreme equilibria (i.e., essentially equilibria which are not convex combinations of other equilibria) then the simplest approach is to do support enumeration. Definition: the support of a mixed strategy is the set of pure strategies that it plays with positive probability. Now given a pair of supports, you first check can each player make the other player indifferent between his support strategies using only her support strategies and, if this is possible, are the resulting mixed strategies that do this are best responses (i.e. give higher payoff than all things outside the support). Pure equilibria have support size 1 for both players. When checking for the indifference part is trivial, since there is a single strategy, and you just need to check if the pure strategies are best responses against each other. For larger supports, you need to do both parts: see if indifference is even possible given the candidate support of the other player, and if it is possible for both players, see if the payoffs of the pure strategies in the supports (which will now all be the same for a given player since we achieved indifference) are the best possible. In some sense, we are taking what you know about finding pure equilibria, and finding 2x2 mixed equilibria in 2x2 games, and combining them into a general algorithm.) This is described as Algorithm 1 in the paper you refer to: David Avis, Gabriel D. Rosenberg, Rahul Savani, and Bernhard von Stengel. Enumeration of Nash equilibria for two-player games. Economic Theory, 42(1):9–37. It is not the algorithm used on banach.lse.ac.uk, which is indeed more complicated. P.S. I wrote banach.lse.ac.uk and am a co-author on the Avis et al. paper.
Show that $\frac{x}{\sqrt{1-a^2}}=\frac{y}{\sqrt{1-b^2}}=\frac{z}{\sqrt{1-c^2}}$
Hint: Relating this to the geometry of triangles, it looks much like the sine rule to me, where $x,y$ and $z$ are the sides of a triangle and $a, b, c$ are the cosines of the corresponding opposite angles. Here we restrict $-1\le a, b, c\le 1$, because this is the range of $a, b, c$ for which the denominators are meaningful in the "real number" sense. Edit: Consider a triangle $XYZ$ with sides $x, y, z$ and corresponding opposite angles $X, Y, Z$. It is quite well known from a little trigonometry of triangles that $$x=y\cos Z +z\cos Y$$ $$y=x\cos Z +z\cos X$$ $$z=y\cos X +x\cos Y$$ And from the sine rule we have $$\frac {x}{\sin X}=\frac {y}{\sin Y}=\frac {z}{\sin Z}$$ Now in your given equations just substitute $a=\cos X$, $b=\cos Y$, $c=\cos Z$. On doing this, the given three equations transform into the three equations I have given above, while the equation you need to prove may be simply written as the sine rule (note $\sin X=\sqrt{1-a^2}$, etc.). I think this clarification might help you visualise what I really mean in my answer.
Sample space for die throwing experiment
Answer: Number of favorable cases $ = (-1)^0 {8\choose0}{29\choose7}+(-1)^1 {8\choose1}{23\choose7}+(-1)^2 {8\choose2}{17\choose7}+(-1)^3 {8\choose3}{11\choose7} = 1560780 - 8\cdot245157 + 28\cdot19448 - 56\cdot330 = 125588$. In general, the formula for the number of ways of obtaining the sum $s$ when throwing $n$ dice with $x$ sides is $$\sum_{k = 0}^{\lfloor (s-n)/x\rfloor} (-1)^k {n\choose k}{s-1-xk\choose n-1}$$
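A sketch implementing the formula, cross-checked against a direct dynamic-programming count (the worked example above is $n=8$ six-sided dice summing to $s=30$):

```python
from math import comb

def ways(n, x, s):
    """Number of ways n x-sided dice sum to s (inclusion-exclusion)."""
    return sum((-1)**k * comb(n, k) * comb(s - 1 - x*k, n - 1)
               for k in range((s - n)//x + 1))

def ways_dp(n, x, s):
    """Brute-force check by convolving the face distribution n times."""
    counts = {0: 1}
    for _ in range(n):
        nxt = {}
        for total, c in counts.items():
            for face in range(1, x + 1):
                nxt[total + face] = nxt.get(total + face, 0) + c
        counts = nxt
    return counts.get(s, 0)

print(ways(8, 6, 30), ways_dp(8, 6, 30))  # 125588 125588
```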
Determining planarity??
Being non-planar doesn't mean being homeomorphic to $K_5$ or $K_{3,3}$. Rather, a graph is non-planar if and only if it contains either $K_5$ or $K_{3,3}$ as a minor. Here, a graph $H$ is a minor of a graph $G$ if $H$ can be obtained from $G$ by deleting vertices, deleting edges, and/or contracting edges. (For $K_5$ and $K_{3,3}$, this is equivalent to $G$ having a subgraph homeomorphic to one of them, which is Kuratowski's formulation.) Here's a hint for this problem: show that your graph contains $K_{3,3}$ as a minor.
Stirling's formula: proof?
A proof I found a while ago entirely relies on creative telescoping. Since $\frac{1}{n^2}-\frac{1}{n(n+1)}=\frac{1}{n^2(n+1)}$, $$\begin{eqnarray*} \sum_{n\geq m}\frac{1}{n^2}&=&\sum_{n\geq m}\left(\frac{1}{n}-\frac{1}{(n+1)}\right)+\frac{1}{2}\sum_{n\geq m}\left(\frac{1}{n^2}-\frac{1}{(n+1)^2}\right)\\&+&\frac{1}{6}\sum_{n\geq m}\left(\frac{1}{n^3}-\frac{1}{(n+1)^3}\right)-\frac{1}{6}\sum_{n\geq m}\frac{1}{n^3(n+1)^3}\tag{1}\end{eqnarray*} $$ hence, by the series representation for $\psi(z)=\frac{d}{dz}\log\Gamma(z)$ (where $\Gamma(z)$ is the analytic continuation of $\int_{0}^{+\infty}t^{z-1}e^{-t}\,dt$, defined for $\text{Re}(z)>0$): $$ \psi'(m)=\sum_{n\geq m}\frac{1}{n^2}\leq \frac{1}{m}+\frac{1}{2m^2}+\frac{1}{6m^3}\tag{2}$$ and in a similar fashion: $$ \psi'(m) \geq \frac{1}{m}+\frac{1}{2m^2}+\frac{1}{6m^3}-\frac{1}{30m^5}.\tag{3}$$ Integrating twice, we have that $\log\Gamma(m)$ behaves like: $$ \log\Gamma(m)\approx\left(m-\frac{1}{2}\right)\log(m)-\color{red}{\alpha} m+\color{blue}{\beta}+\frac{1}{12m}\tag{4}$$ where $\color{red}{\alpha=1}$ follows from $\log\Gamma(m+1)-\log\Gamma(m)=\log m$. That gives Stirling's inequality up to a multiplicative constant. $\color{blue}{\beta=\log\sqrt{2\pi}}$ then follows from Legendre's duplication formula and the well-known identity: $$ \Gamma\left(\frac{1}{2}\right)=2 \int_{0}^{+\infty}e^{-x^2}\,dx = \sqrt{\pi}.\tag{5}$$ Addendum: if we apply creative telescoping like in the second part of this answer, i.e. by noticing that $k(x)=\frac{60x^2-60x+31}{60x^3-90x^2+66x-18}$ gives $k(x)-k(x+1)=\frac{1}{x^2}+O\left(\frac{1}{x^8}\right)$, we arrive at $$\begin{eqnarray*} m!&\approx& 2^{\frac{37-32m}{42}}e^{\frac{1}{84} \left(42-\sqrt{35} \pi -84 m+2 \sqrt{35} \arctan\left[\sqrt{\frac{5}{7}} (2m-1)\right]\right)} \\ &\cdot&\sqrt{\pi}\, m\,(2m-1)^{\frac{8}{21}(2m-1)}\left(m^2-m+\frac{3}{5}\right)^{\frac{5}{84}(2m-1)}\tag{6}\end{eqnarray*} $$ that is much more accurate than the "usual" Stirling's inequality, but also way less "practical". However, it might be fun to plug in different values of $m$ in $(6)$ to derive bizarre approximate identities involving $e,\pi,\sqrt{\pi}$ and values of the arctangent function, like $$ \sqrt{\pi}\,\exp\left[-\frac{1}{42}\left(147+\sqrt{35} \arctan\frac{1}{\sqrt{35}}\right)\right]\approx 2^{\frac{19}{6}}3^{\frac{1}{6}}5^{\frac{5}{12}} 7^{-\frac{37}{12}}.\tag{7}$$
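To see how good $(4)$ already is with $\alpha=1$ and $\beta=\log\sqrt{2\pi}$, one can compare it against `math.lgamma`:

```python
import math

beta = math.log(math.sqrt(2 * math.pi))
for m in (5, 10, 50, 100):
    approx = (m - 0.5) * math.log(m) - m + beta + 1 / (12 * m)
    print(m, math.lgamma(m), approx)   # agreement improves like O(1/m^3)
```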
Counterexample for set inequality
Just as an easy example, consider a constant function (i.e. $f(x) = y_0$ for some particular $y_0$ and all $x$) and a collection of disjoint $A_i$'s.
Question about inverting this linear map
Yes, it is sufficient to check the invertibility of the matrix to check if the operator is invertible. Your operator is not an isometry, as you can easily see by comparing the lengths of $v_{2}$ and its image. Now, for the matrix in the new basis, observe that the first column consists of the coordinates of the image of the first basis vector, and so on.
What is a non-elementary event in Probability?
An event is a subset of the sample space. A non-elementary event is an event that is not elementary, i.e. a subset that contains more than (or less than) one element. For example, from the same $S$, you could have $\{HH,TH\}$.
strongly convergent operators applied to a weakly convergent sequence
No. Let $e_n$ be the standard orthonormal basis of $\ell^2$, $T_n:\ell^2\to\ell^2$ defined by $T_n(x)=(n^{th} $ component of $x)$ $e_1$ and $x_n=e_n$. Then $T_n\to 0$ in the strong operator topology and $x_n\to 0$ weakly but $T_n(e_n)=e_1$ for all $n$.
Another probability distribution question:
$F(x)$ is the probability of $X\le x$. Thus $F(x)=0$ for $x<0$, $F(x)=\frac34$ for $0\le x<1$, $F(x)=\frac{15}{16}$ for $1\le x<2$. You should find that $$F(x)=1-\frac1{4^{\lfloor x\rfloor+1}}$$ if $x\ge0$.
Shell method on $f(x) = \frac{1}{\sqrt{x^2 + 1}}$
The shell method about $x=0$ would be $$V = 2\pi \int_0^2 x f(x)\, dx$$ In your case this becomes $$V = 2\pi \int_0^2 \frac{x}{\sqrt{x^2+1}}\, dx,$$ which is a straightforward $u$-substitution ($u = x^2+1$).
Finding the Area inside a limacon and outside a circle.
Just ignore the small green loop (it's $r<0$)
Soft question: What if BODMAS/PEDMAS wasn't used. Would maths still "work"?
Many other rules are possible. The most important aspect is that the writer and the reader agree. For example: on which side of the road should we drive? The left or the right will work, provided that we agree. As Arthur says, as long as brackets / parentheses are regarded as the highest priority, the other rules could be pretty much anything. One possibility is that there are no other rules and the parentheses are mandatory. In other words, $2 + 3 \times 4$ would not be a valid expression. You would need to write either: $(2 + 3) \times 4$ or $2 + (3 \times 4)$. Or, you could devise a very different system which does not even need parentheses. This has been done; see Polish Notation or Reverse Polish Notation, which was once popular on calculators. Yet more schemes could easily be devised. However, most of us have agreed on one system, so if you want to switch then you might be rather lonely, and you will need to carefully warn others of your system. Is the accepted system the best? That is hard to answer precisely, and I think that most would agree that it is not perfect. It does have some advantages; for example, polynomials can be represented quite compactly, which would not be the case in some other schemes.
Compute $\lim_{x \to 0} x \lfloor x - \frac{1}{x} \rfloor$
By definition of the floor function $$x-\frac{1}{x}-1<\left\lfloor x-\frac{1}{x}\right\rfloor \le x-\frac{1}{x}$$ So for $x>0$ $$x^2-1-x < x \left\lfloor x-\frac{1}{x}\right\rfloor \le x^2-1$$ so by the Squeeze Theorem, $$\lim_{x\to 0^+}{x \left\lfloor x-\frac{1}{x}\right\rfloor}=-1$$ Now for $x<0$ the direction of the inequality reverses so $$x^2-1-x > x \left\lfloor x-\frac{1}{x}\right\rfloor \ge x^2-1$$ and again by the Squeeze Theorem, $$\lim_{x\to 0^-}{x \left\lfloor x-\frac{1}{x}\right\rfloor}=-1$$ Since the two one-sided limits are the same, the limit as $x\to 0$ is $-1$.
Finding the monotonicity of simple sequence - how to?
Rearrange things a bit $$ a_n = 3\left[\left(\frac{2}{3}\right)^n + 1 \right]^{1/n} $$ As $n$ increases the term $(2/3)^n$ goes to zero and $a_n$ monotonically decreases to $3$
What is the result of multiplying $\sin x . \cos y$?
$$ A=(3\sin x-1)^2+(5\cos y-2)^2-5. $$ Hence the minimum of $A$ is obtained for $\sin x=1/3$ and $\cos y=2/5$, and that minimum is $-5$.
Simplify the fractions
Because $$\frac{\log3}{\log2}+\frac{\require{cancel}\cancel2\log3}{\cancel2\log2}=\frac{\log3}{\log2}+\frac{\log3}{\log2}=2\frac{\log3}{\log2}$$ and similarly $$\frac{2\log2}{\log3}+\frac{\log2}{2\log3}=2\frac{\log2}{\log3}+\frac12\frac{\log2}{\log3}=\left(2+\frac12\right)\frac{\log2}{\log3}=\frac52\frac{\log2}{\log3}$$
What is the explicit formula for the sequence representing the number of any triangles in a triangular grid?
The values fit $a_n=2n^2-2n+1$. To find that: two levels of differences give a constant $4$, so the leading term is $(4/2!)n^2$. Subtract that off and you can find the rest.
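In code, with assumed values $1, 5, 13, 25, 41$ for $n=1,\dots,5$ (consistent with $a_n = 2n^2-2n+1$; the original table isn't quoted here):

```python
# recover the quadratic by finite differences
a = [1, 5, 13, 25, 41]                       # assumed sequence values
d1 = [y - x for x, y in zip(a, a[1:])]       # first differences: [4, 8, 12, 16]
d2 = [y - x for x, y in zip(d1, d1[1:])]     # second differences: [4, 4, 4]
print(d1, d2)
print([2*n*n - 2*n + 1 for n in range(1, 6)])  # matches a
```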
Confusing question: try and prove that $x -\tan(x) = (2k+1)\frac{\pi}{2}$ has no solution in $[\frac{3\pi}{4},\frac{5\pi}{4}]$
Let $f(x) = x -\tan(x)$. Taking the first derivative, we see that this function is decreasing: $f'(x) = 1-\sec^2(x) = -\tan^2(x)$ is negative everywhere besides $x = \pi$. Hence the values live in the interval $[f(\frac{5\pi}{4}),f(\frac{3\pi}{4})]$. Computing those values, you see that for every choice of $k$, the number $(2k+1) \frac{\pi}{2}$ is outside the interval. Basically you need to see that it is true for $k = 0, 1, -1$.
How is uncountability characterized in second order logic?
The definition of uncountability is essentially the same as it is in first order logic: a set $X$ is uncountable if there is no surjective function $F$ from the natural numbers $\mathbb{N}$ onto $X$ (there are other ways of formulating uncountability, but they are equivalent to this definition). Consequently, in order to provide a formal definition in second order logic we must first define the following: the set of natural numbers $\mathbb{N}$; functions and their properties in second order logic; and finally the definition of uncountability. Let's start with functions. Let $F$ be a unary function symbol, and let the domain of $F$ be given as $$ \mathrm{dom}(F) = \left\{ x : \exists{y} (F(x) = y) \right\} $$ while the range of $F$ is $$ \mathrm{ran}(F) = \left\{ y : \exists{x} (F(x) = y) \right\}. $$ These sets exist via the second order comprehension scheme. Now we characterise the natural numbers $\mathbb{N}$ in second order logic. Given a function symbol $S$ for the successor function and a constant symbol $0$, the natural numbers are the set satisfying the following axioms. By a theorem of Dedekind, this suffices to characterise the natural numbers up to isomorphism. $$ \forall{x}(0 \neq S(x)) \\ \forall{n}\forall{m} (S(n) = S(m) \rightarrow n = m) \\ \forall{P} ( P(0) \wedge \forall{n} (P(n) \rightarrow P(S(n))) \rightarrow \forall{n}\, P(n)). $$ Finally we are in a position to present a definition of uncountability in second order logic. A set $X$ is countable just in case it satisfies the following definition. $$ \mathrm{countable}(X) =_{df} \exists{F} \left(\mathrm{dom}(F) = \mathbb{N} \wedge \mathrm{ran}(F) = X \wedge \forall{x \in X}\,\exists{y \in \mathbb{N}} (F(y) = x)\right). $$ The definition of uncountability is then just the negation of the countability condition. The answer to the question of why this characterisation is 'absolute' in a way that the first order characterisation is not can be found in the Dedekind categoricity theorem cited above. Given the standard semantics for second order logic, all structures satisfying the second order Peano axioms are isomorphic—the theory is categorical. Because of this, the Löwenheim–Skolem theorem is false for second order logic (with the standard semantics), so the kind of model-theoretic relativity you get in first order logic does not appear (at least to the same extent). We don't get models where the whole domain is countable (from the 'external' perspective), but the model itself thinks that there are uncountable sets just because certain functions don't exist.
Given points on graph, fit a line such that largest error is minimal
What you're doing is called "minimax approximation", because you are trying to minimize the maximum error. Alternatively, it's called "uniform approximation" because the error is measured using the uniform ($L_\infty$) norm, rather than the $L_2$ norm used in least-squares fitting. You can find some useful material by googling these two terms. For minimax approximation of a function by a polynomial, the standard technique is the Remez algorithm, which is fairly complicated. Your case is much simpler than this. You basically have only two variables (the slope and intercept of the line), so any decent optimization function should be able to find the best solution.
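Since the problem has only the two line parameters plus the error bound, it can be written directly as a linear program: minimize $t$ subject to $|a x_i + b - y_i| \le t$. A sketch with `scipy.optimize.linprog` (the sample points are made up):

```python
import numpy as np
from scipy.optimize import linprog

def minimax_line(x, y):
    """Fit y ~ a*x + b minimizing the maximum absolute error:
    minimize t subject to |a*x_i + b - y_i| <= t."""
    n = len(x)
    c = [0.0, 0.0, 1.0]          # decision variables: (a, b, t)
    ones = np.ones(n)
    A_ub = np.vstack([np.column_stack([x, ones, -ones]),     #  a*x_i + b - t <= y_i
                      np.column_stack([-x, -ones, -ones])])  # -a*x_i - b - t <= -y_i
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None), (None, None), (0, None)])
    return res.x                 # (slope, intercept, worst-case error)

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 1.0, 3.0])
print(minimax_line(x, y))
```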
functions of two variables with one variable defined on a compact set uniformly converge to zero
Try $f(x,y) = xy^2 e^{-xy}$. What is $f(x,1/x)$ for $x > 0$?
Unresolved "To the power of half"?
Based on your question, I assume you are asking not for an algebraic demonstration of why "to the 1/2 power" means "square root", but rather an intuitive explanation of why it should mean that. Here is my best attempt at explaining why that makes sense. Suppose I ask you the question: What number is halfway between $6$ and $96$? You might reason as follows: To get from $6$ to $96$, I add $90$. If I want to break that up into two equal steps, I could think of it as adding $45$, then adding $45$ again. The value in the middle would be what I get if I start with $6$ and add $45$ -- in other words, $51$. That would be a perfectly reasonable explanation and answer if you are only able to think of getting from one number to another by way of addition. But there is another perspective on the question, one that thinks of getting from one number to another by way of multiplication. That second way of reasoning goes as follows: To get from $6$ to $96$, you multiply by $16$. Now ask yourself: How do I break the operation "multiply by $16$" into two equal "half-steps"? A naïve first answer might be to think: "Half of $16$ is $8$, so I multiply by $8$ twice." But it's easy to see that doesn't work: If you start with $6$, multiply by $8$, then multiply by $8$ again, you definitely don't end up at $96$. What does work is to think: "If I multiply by $4$, then multiply by $4$ again, the result is the same as just multiplying by $16$ in a single step." And you can check this: Start with $6$, multiply by $4$ to get $24$, then multiply by $4$ again to get $96$. The number "in the middle" is $24$. So there are two different types of reasoning involved: Additive reasoning and Multiplicative reasoning: According to additive reasoning, "half of $+90$" means $+45$, and the number halfway between $6$ and $96$ is $51$. This is called the arithmetic mean (or just the average) of $6$ and $96$. According to multiplicative reasoning, "half of $\times 16$" means $\times 4$, and the number halfway between $6$ and $96$ is $24$. This is called the geometric mean of $6$ and $96$. To understand why $16^{1/2}$ means "the square root of $16$", you have to reason multiplicatively. As you know, $16^5$ means "multiply $5$ factors of $16$ together". Likewise for any positive whole number $k$, $16^k$ means "multiply $k$ factors of $16$ together." In both cases, we are thinking multiplicatively, not additively. So to make sense of $16^{1/2}$ we should understand it as meaning "Multiplying by $16$ 'half of one time'." As the discussion above explains, the only sensible interpretation of that is that it means "Multiply by $4$".
Radially Symmetric Solutions of a Nonlocal Nonlinear Transport Equation
The standard approach is to show that if $\rho$ is a solution and $T:\mathbb R^n\to\mathbb R^n$ is an orthogonal linear transformation, then $\rho\circ T$ is also a solution. The symmetry then follows from the fact that the solution for a given initial condition $\rho_0$ is unique: since $\rho_0\circ T=\rho_0$, we have $\rho\circ T=\rho$. So, all we do is plug $\tilde \rho = \rho\circ T$ into the equation and check that everything commutes with the composition operator. First, $\tilde \rho_t=\rho_t\circ T$. Then $$ \tilde v(x) = \int k(y)\tilde \rho(x-y)\,dy=\int k(y) \rho(Tx-Ty)\,dy = \int k(\tilde y) \rho(Tx-\tilde y)\,d\tilde y = v(Tx) $$ where I changed the variable $\tilde y=Ty$ and took advantage of orthogonality ($d\tilde y=dy$) and radial symmetry ($k(Ty)=k(y)$). Finally, $\nabla \cdot((\rho v)\circ T) = (\nabla \cdot(\rho v))\circ T$ because the divergence operator commutes with composition by an orthogonal map. Everything checks out.
Prove that each integer n ≥ 12 is a sum of 4's and 5's using strong induction
The trick to these sorts of problems is to realize that once we find four consecutive integers ($4$ being the smallest of the numbers we are combining) that can each be represented as a non-negative sum of fours and fives, every larger integer follows by repeatedly adding $4$. For example $$12 = 4(3) + 5(0)$$ $$13 = 4(2) + 5(1)$$ $$14 = 4(1)+ 5(2)$$ $$15 = 4(0) + 5(3).$$ With this information, is it clear how you could represent $16$? Just take the solution from the $12$ case and add $4$ (so increase $x$ by $1$). Here's the formal argument. Let $P(n)$ be the open sentence "$n$ can be written as a non-negative combination of $4$ and $5$". By what we've shown above, we know that $P(k)$ is true for $k = 12,13,14,15.$ Our strong inductive hypothesis is to suppose that for some $k \in \mathbb{Z}$ with $k\geq 12$, $P(i)$ is true for every $i$ with $12\leq i\leq k$; we need to prove that $P(k+1)$ is true. If $k = 12,13$ or $14$, we've already seen that $P(k+1)$ is true, so suppose that $k \geq 15$. Then $12 \leq k-3\leq k$, so by our strong inductive hypothesis there exist non-negative integers $x,y$ such that $k-3 = 4(x) + 5(y)$. Adding $4$ to both sides gives $k+1 = 4(x+1)+5(y)$, so $P(k+1)$ is true. Thus by the principle of strong induction, $P(n)$ is true for all $n \geq 12$.
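If it helps to see the induction as an algorithm, here is a small sketch in Python (the function name is mine, not from the question); it mirrors the proof exactly: recurse down to one of the four base cases, then add a $4$ on the way back up.

```python
def four_five(n):
    """Return (x, y) with n == 4*x + 5*y and x, y >= 0, for n >= 12."""
    base = {12: (3, 0), 13: (2, 1), 14: (1, 2), 15: (0, 3)}
    if n < 12:
        raise ValueError("only defined for n >= 12")
    if n in base:
        return base[n]
    x, y = four_five(n - 4)  # strong inductive hypothesis, applied to n - 4
    return x + 1, y          # ...and one more 4 on the way back up

for n in range(12, 40):
    x, y = four_five(n)
    assert 4 * x + 5 * y == n and x >= 0 and y >= 0
```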
Is it possible to evaluate $f(5)$ from given function?
Note that $$f(x^2+x) = 4(x^2+x)-3,$$ i.e. $f(t)=4t-3$ for every $t$ of the form $x^2+x$. Since $x^2+x=5$ has a real solution, we may set $x^2+x=5$ to get $$f(5) = 4(5)-3 = 20-3 =17$$
Limit with number $e$ and complex number
Expand using the binomial formula: $\displaystyle \left(1+\frac{z}{n}\right)^n = \sum_{k=0}^n {n\choose k}\left( \frac{z}{n}\right)^k = \sum_{k=0}^\infty E_k^n$ where we define $\displaystyle E_k^n = {n\choose k}\left( \frac{z}{n}\right)^k$ for $k \le n$ and $E_k^n = 0$ otherwise. We want $\displaystyle \sum_{k=0}^\infty E_k^n$ to converge to $\displaystyle \sum _{k=0}^\infty \frac{z^k}{k!}$ as $n \to \infty$. To do that we will show $\displaystyle E^n_k \to \frac{z^k}{k!}$ as $n \to \infty$: $\displaystyle E^n _k = \frac{n!}{k!(n-k)!}\left( \frac{z}{n}\right)^k = \frac{n!}{k!(n-k)!} \frac{z^k}{n^k} = \frac{n!}{n^k (n-k)!}\frac{z^k}{k!} =\frac{n}{n} \frac{(n-1)}{n}\cdot \ldots \cdot \frac{(n-k+1)}{n}\frac{z^k}{k!} $ Therefore we just have to prove $\displaystyle \frac{n}{n} \frac{(n-1)}{n}\cdot \ldots \cdot \frac{(n-k+1)}{n} \to 1$. The number of factors is constant and equal to $k$, so there is no problem with invoking how each of them goes to $1$ separately, and that limits commute with finite products. (To upgrade the termwise convergence $E_k^n \to z^k/k!$ to convergence of the whole sum, note the uniform bound $|E_k^n|\le |z|^k/k!$, whose sum over $k$ converges, and apply Tannery's theorem, i.e. dominated convergence for series.)
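As a numerical sanity check (a sketch only; the test value of $z$ is arbitrary), one can watch $(1+z/n)^n$ approach $e^z$ for complex $z$:

```python
import cmath

z = 1.0 + 2.0j  # arbitrary test value
for n in (10, 100, 1000, 10000):
    approx = (1 + z / n) ** n
    print(n, abs(approx - cmath.exp(z)))  # the error shrinks roughly like 1/n
```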
Prove that $f(x)\leq x\cdot\log_2 x$ for all integer $x\geq1$
This is fun. Let $k = \log_2 x$, i.e. $2^k = x$. (With the given definition we can only compute $f(x)$ when $x$ is a power of $2$.) This means that when we apply our halvings to $x$ in $f$, we are allowed to do it $k$ times. Of course $f(1)=0$; this is what will make the inequality work. I will do the first few terms for you; see if you can spot a pattern. $f(x) = 2f(x/2)+x$ $f(x) = 2(2f(x/4)+x/2)+x = 2^2 f(x/4) + 2x$ … So think of what the term involving $f(x/(2^k)) = f(1) = 0$ would be.
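To see the pattern numerically, here is a small sketch; I am assuming the recurrence $f(x)=2f(x/2)+x$ with $f(1)=0$ from the question. On powers of $2$ the two quantities agree, i.e. $f(x) = x\log_2 x$ exactly there.

```python
import math

def f(x):
    # Assumed from the question: f(1) = 0 and f(x) = 2*f(x/2) + x.
    if x == 1:
        return 0
    return 2 * f(x // 2) + x

for k in range(1, 11):
    x = 2 ** k
    print(x, f(x), x * math.log2(x))  # the last two columns agree exactly
```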
How do I find the limits of the outermost integral in a triple integral representing a 3D solid?
Remember the mass is given by \begin{align*} M = \iiint_{D}m(x,y,z)\mathrm{d}x\mathrm{d}y\mathrm{d}z \end{align*} In the present case, $D = \{(x,y,z)\in\textbf{R}^{3} \mid \sqrt{x^{2}+y^{2}}\leq z \leq 3\}$ and $m(x,y,z) = z$. Thus we have to solve the following integral \begin{align*} \int_{-3}^{3}\int_{-\sqrt{9-x^{2}}}^{+\sqrt{9-x^{2}}}\int_{\sqrt{x^{2}+y^{2}}}^{3}z\mathrm{d}z\mathrm{d}y\mathrm{d}x \end{align*} Then apply the cylindrical change of variables. Can you take it from here?
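If you want to check your final answer: in cylindrical coordinates ($x=r\cos\theta$, $y=r\sin\theta$, with Jacobian $r$) the computation runs \begin{align*} \int_0^{2\pi}\int_0^{3}\int_r^{3} z\,r\,\mathrm{d}z\,\mathrm{d}r\,\mathrm{d}\theta = 2\pi\int_0^3 \frac{9-r^{2}}{2}\,r\,\mathrm{d}r = 2\pi\left[\frac{9r^{2}}{4}-\frac{r^{4}}{8}\right]_0^3 = \frac{81\pi}{4}. \end{align*}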
Verifying that Kummer hypergeometric function is a solution to $xy''+(b-x)y'-ay=0$
$\def\d{\mathrm{d}}$If $y(x) = \sum\limits_{n = 0}^∞ \dfrac{a^{(n)} x^n}{b^{(n)} n!} = 1 + \sum\limits_{n = 1}^∞ \dfrac{a^{(n)} x^n}{b^{(n)} n!}$, then$$ ay(x) = a + \sum_{n = 1}^∞ \frac{a · a^{(n)} x^n}{b^{(n)} n!}, $$\begin{gather*} (b - x) y'(x) = (b - x) \sum_{n = 1}^∞ \frac{a^{(n)} nx^{n - 1}}{b^{(n)} n!} = \sum_{n = 1}^∞ b · \frac{a^{(n)} nx^{n - 1}}{b^{(n)} n!} - \sum_{n = 1}^∞ x · \frac{a^{(n)} nx^{n - 1}}{b^{(n)} n!}\\ = \sum_{n = 1}^∞ \frac{a^{(n)} x^{n - 1}}{(b + 1)^{(n - 1)} (n - 1)!} - \sum_{n = 1}^∞ \frac{a^{(n)} x^n}{b^{(n)} (n - 1)!} = a + \sum_{n = 1}^∞ \frac{a^{(n + 1)} x^n}{(b + 1)^{(n)} n!} - \sum_{n = 1}^∞ \frac{a^{(n)} x^n}{b^{(n)} (n - 1)!}, \end{gather*}\begin{gather*} xy''(x) = \sum_{n = 2}^∞ x · \frac{a^{(n)} n(n - 1)x^{n - 2}}{b^{(n)} n!} = \sum_{n = 2}^∞ \frac{a^{(n)} x^{n - 1}}{b^{(n)} (n - 2)!} = \sum_{n = 1}^∞ \frac{a^{(n + 1)} x^n}{b^{(n + 1)} (n - 1)!}, \end{gather*} and\begin{align*} &\mathrel{\phantom{=}}{} xy''(x) + (b - x)y'(x) - ay(x)\\ &= {\small \sum_{n = 1}^∞ \frac{a^{(n + 1)} x^n}{b^{(n + 1)} (n - 1)!} + \left( a + \sum_{n = 1}^∞ \frac{a^{(n + 1)} x^n}{(b + 1)^{(n)} n!} - \sum_{n = 1}^∞ \frac{a^{(n)} x^n}{b^{(n)} (n - 1)!} \right) - \left( a + \sum_{n = 1}^∞ \frac{a · a^{(n)} x^n}{b^{(n)} n!} \right)}\\ &= \sum_{n = 1}^∞ \left( \frac{a^{(n + 1)}}{b^{(n + 1)} (n - 1)!} + \frac{a^{(n + 1)}}{(b + 1)^{(n)} n!} - \frac{a^{(n)}}{b^{(n)} (n - 1)!} - \frac{a · a^{(n)}}{b^{(n)} n!} \right) x^n\\ &= \sum_{n = 1}^∞ \bigl( (a + n)n + (a + n)b - (b + n)n - a(b + n) \bigr) \frac{a^{(n)}}{b^{(n + 1)} n!} x^n = 0. \end{align*}
Homeomorphism between $\prod X_i$ and the Cantor set
There is a theorem that every compact, totally disconnected, perfect, metrizable space is homeomorphic to the Cantor set. The second step of the proof, which is all you need, is constructive, so you can write down a formula by following it. A good reference is the book by Moise, "Geometric Topology in Dimensions 2 and 3". The first step of that proof is to find a "clopen filtration" of the space. This means a sequence $B_1, B_2, B_3, \ldots$ such that each $B_k$ is a "clopen decomposition", i.e. a pairwise disjoint cover of the space by closed-and-open subsets; each element of $B_{k+1}$ is a subset of some element of $B_k$; and the union $B_1 \cup B_2 \cup B_3 \cup \cdots$ is a basis for the topology. For your situation, you do not have to work hard to find a clopen filtration: simply define $B_k$ to be the set of preimages of points under the projection to $X_1 \times \cdots \times X_k$. If each $B_k$ has $2^k$ elements, i.e. if each $X_k$ has $2$ elements, you are done of course; let me call this a "binary" clopen filtration. In general, though, the number of elements of $B_k$ is equal to $|X_1| \times |X_2| \times \cdots \times |X_k|$. The second, constructive step of the proof is to use your given nonbinary clopen filtration to write down a different clopen filtration $B'_1,B'_2,B'_3,\ldots$ that is binary. This can be done constructively, by induction. Here is a rough outline. Using $B_1$ we define $B'_1$: clump the elements of $B_1$ into two disjoint subsets, which is possible because $|X_1| \ge 2$. Let $B'_1$ be the two-element clopen cover you get: the two elements of $B'_1$ are the union of clump number 1 of $B_1$ and the union of clump number 2 of $B_1$. Proceeding by induction, suppose that $B'_k$ is a clopen cover by $2^k$ sets, each of which is a union of elements of $B_1 \cup \cdots \cup B_k$. We want to define $B'_{k+1}$ by carefully decomposing each element $U \in B'_k$ into two clopens. To do this, first decompose $U$ as coarsely as possible as a disjoint union of elements of the open cover $B_1 \cup \cdots \cup B_k$: start with each element $V \in B_k$, ask whether $U$ contains $V$, and if so put $V$ into the disjoint union; if $U$ does not contain $V$, break $V$ into a disjoint union of elements of $B_{k-1}$, and repeat the question for each of those; continue by downward induction… Next, rewrite this decomposition of $U$ by taking the finest elements, i.e. those which are elements of $B_i$ for the minimal value of $i$, and breaking each into its two or more pieces in $B_{i+1}$; this is possible because $|X_{i+1}| \ge 2$. Now, using this decomposition of $U$, break it into two clumps, and make sure that each clump contains at least one piece of each of the rewritten decompositions of the finest elements in $B_i$. This last step, the messiest part, is needed in order to guarantee that you get a basis for the topology. That's basically all there is to it. If you want to verify that this is indeed a clopen filtration, you can read the details in Moise's book cited above.
Obtain the set of points from Voronoi diagram
Hint: It's false. Counter-example: $\boxed{\strut}\boxed{\strut\quad}\!\boxed{\strut}$. (Adjacent points must be equidistant from the edge.)
How to show that the variational problem does not attain its infimum
Let $f_n(x)=nx(1-x)$. Then $J(f_n)=\int_0^{1} e^{-n^{2}(1-2x)^{2}} \, dx$, which tends to $0$ as $n \to \infty$ by the Dominated Convergence Theorem (the integrand is bounded by $1$ and tends to $0$ pointwise except at $x=1/2$). Hence the infimum of $J$ is $0$. Obviously, $0$ is not attained.
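For an explicit rate (assuming, as in the computation above, that $J(f_n)=\int_0^1 e^{-n^2(1-2x)^2}\,dx$), substituting $u=n(1-2x)$ gives $$J(f_n)=\frac{1}{2n}\int_{-n}^{n}e^{-u^{2}}\,du\le\frac{\sqrt{\pi}}{2n},$$ so $J(f_n)\to 0$ at rate $1/n$.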
Three Exponential Equations
HINT: write your first equation as $$5\cdot (5^x)^2+25=125\cdot 5^x+5^x$$ and your second equation as $$(3^x)^3+14\cdot 2^x=7\cdot (2^x)^2+8$$ and the last one as $$(3^x)^3-12\cdot (3^x)^2+63\cdot 3^x=4\cdot 3^3$$
How to show exponential integral is finite
If $t \le 0$, convergence of the integral follows from $$0 \le \int_1^\infty \frac{e^{tx}}{x^2}\, dx \le \int_1^\infty \frac{1}{x^2}\, dx = 1.$$ If $t > 0$, divergence of the integral follows from the fact that $$\int_1^\infty \frac{e^{tx}}{x^2}\, dx > \int_1^\infty \frac{tx}{x^2}\, dx = t\int_1^\infty \frac{1}{x}\, dx = \infty.$$
Factors: operations, theory and other names?
The use of the term 'factor' in the context of PGM is computer science lingo and is practically never used in mathematics circles. The factors you are talking about are just functions. In more detail, a factor $F$ of scope $X_1,\cdots, X_n$ (where the $X_i$ are sets) is a function $F:X_1\times \cdots \times X_n \to \mathbb R$. So, a different name for factors is: real-valued functions on a Cartesian product of sets. Since this is such a general concept you can define a million different and crazy operations on factors. In the context of PGM there are some natural operations to consider, since most factors encountered are either probability distributions, or unnormalized ones, or somehow related to such. In particular, the common operations performed directly on factors are, as you mention, factor products and marginalizations (and also normalizations, etc.). The most straightforward algorithms for these operations are also the most efficient ones and are very fast (often $O(n^2)$). If the factors are known to have certain properties (e.g., sparsity) then faster algorithms can be developed.
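For concreteness, here is a minimal sketch of the two operations on discrete factors (the representation, a dict from assignment tuples to reals, and all the names are my own invention, not any particular PGM library's API):

```python
from itertools import product

def factor_product(f, g, scope_f, scope_g):
    """Pointwise product of two factors given as dicts {assignment_tuple: value}.

    scope_f and scope_g are tuples of variable names; variables range over
    {0, 1} here just to keep the sketch small.
    """
    scope = tuple(dict.fromkeys(scope_f + scope_g))  # union, order-preserving
    result = {}
    for assign in product([0, 1], repeat=len(scope)):
        env = dict(zip(scope, assign))
        fa = tuple(env[v] for v in scope_f)
        ga = tuple(env[v] for v in scope_g)
        result[assign] = f[fa] * g[ga]
    return result, scope

def marginalize(f, scope, var):
    """Sum a factor over one variable."""
    i = scope.index(var)
    new_scope = scope[:i] + scope[i + 1:]
    result = {}
    for assign, val in f.items():
        key = assign[:i] + assign[i + 1:]
        result[key] = result.get(key, 0.0) + val
    return result, new_scope

# Tiny example: P(A) and P(B|A) as factors, then marginalize out A.
pA = {(0,): 0.6, (1,): 0.4}
pBgA = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8}
joint, scope = factor_product(pA, pBgA, ("A",), ("A", "B"))
pB, _ = marginalize(joint, scope, "A")
print(pB)  # approximately {(0,): 0.62, (1,): 0.38}
```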
Homotopy type of the loop space of a compact Lie group
You can easily drop the connectivity condition as long as each (equivalently, one) component is simply-connected. But the assumption that $\pi_1(X)=1$ is a necessary condition. If $X$ is a complex without odd-dimensional cells then $\pi_1(X)=1$: indeed, by the cellular approximation theorem, every loop $c$ in $X$ is homotopic to a loop $c'$ in $X^1$. If $X^1$ contains no $1$-cells, then $X^1=X^0$, implying that $c'$ is constant. Thus, $\pi_1(X)=1$.
Intersection of circles toward the left of the blue line
Both circles are symmetric about line $AD$, so if they intersect at $F$ they will also intersect at the reflection of $F$ about $AD$.
Irreducible representations of an abelian group $G$ are $1$-dimensional.
The proof is fine for finite groups, assuming you know that the $\rho(g)$ are diagonalizable in the first place. For infinite $G$, each $\rho(g)$ induces a decomposition of $V$ into eigenspaces, and any two of these decompositions have a mutual refinement. If $V$ is finite-dimensional, among these there must be a maximal refinement (you can't keep refining forever) with respect to which all the operators are diagonalizable. Hope this works.
N women and N men. Groups of pairs.
Your reasoning is fine. To make $k$ pairs, you can choose the $k$ women, choose the $k$ men, and then choose which man is paired with which woman ($k!$ ways), giving ${n\choose k}^2k!$.
Show that $f'(x) \simeq \frac{1}{12h} [-f(x+2h) + 8f(x+h) -8f(x-h) + f(x-2h)]$?
If you know $$\frac{f(x+h)-f(x-h)}{2h} \to f'(x)$$ $$\frac{f(x+2h)-f(x-2h)}{4h} \to f'(x)$$ then \begin{align} &amp;\frac{1}{12h}[-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)] \\ &amp;= -\frac{1}{3} \frac{f(x+2h)-f(x-2h)}{4h} + \frac{4}{3} \frac{f(x+h)-f(x-h)}{2h} \\ &amp;\to f'(x) \end{align}
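A quick numerical sanity check of the stencil (a sketch; the test function $\sin$ is arbitrary but has a known derivative):

```python
import math

def d1_stencil(f, x, h):
    # Five-point central difference for f'(x); the combination above.
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)

x = 1.0
for h in (0.1, 0.01, 0.001):
    err = abs(d1_stencil(math.sin, x, h) - math.cos(x))
    print(h, err)  # the error shrinks like h**4
```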
What is the $\Theta$ of $(\lg n)^n$
Since $\lg n$ grows without bound, $(\lg n)^n$ grows faster than any exponential $c^n$, and certainly much faster than $n \lg n$. You can write $(\lg n)^n$ as $2^{n \lg \lg n}$ if it helps you get intuition for how big it is, but either form is "simplified": there's no simpler expression with the same $\Theta$.
Is $\{ \emptyset \}$ a subset of the set $\{ \emptyset, 1, 2, 3 \}$?
The set $\{x\}$ is a subset of $\{a,b,c\}$ if and only if $x$ is equal to $a$, $b$ or $c$. So, unless you have a weird definition of $1$, $2$ or $3$, $\{\emptyset\}\nsubseteq\{1,2,3\}$. But $\{\emptyset\}\subseteq\{\emptyset, 1,2,3\}$ Remark: The most usual construction of $\Bbb N$ from ZFC defines $0=\emptyset$ and $1=\{\emptyset\}$.
If $x=[x_1,...,x_n]$ is Multivariate normal, what is the $x_1,...,x_k$ that will maximise $P(x_1,...,x_k , x_{k+1},...x_n| \mu, \Sigma)$?
Since we are dealing with a continuous distribution, it is meaningful to study the pdf $f(x_1,\dots,x_k|\mu,\Sigma,x_{k+1},\dots,x_n).$ Let us write the vectors $y_1=(x_1,\dots,x_k)^T$ and $y_2=(x_{k+1},\dots,x_n)^T.$ For the multivariate normal, $$\begin{pmatrix}y_1 \\y_2\end{pmatrix}\sim\mathcal{N}\left(\begin{pmatrix}\mu_1 \\\mu_2\end{pmatrix},\begin{bmatrix}\Sigma_{11} &\Sigma_{12} \\\Sigma_{21} &\Sigma_{22}\end{bmatrix}\right)$$ where $\mu_1,$ $\mu_2$ are $k\times 1$ and $(n-k)\times 1$ respectively, with $\mu=(\mu_1^T,\mu_2^T)^T.$ Similarly, $\Sigma_{11}$ is $k\times k,$ $\Sigma_{12}=\Sigma_{21}^T$ is $k\times (n-k)$ and $\Sigma_{22}$ is $(n-k)\times (n-k).$ The required conditional distribution can then be written as $$f(y_1|\mu,\Sigma,y_2=\mathbf{a})=\mathcal{N}\left(\mu_1+\Sigma_{12}\Sigma_{22}^{-1}(\mathbf{a}-\mu_2),\Sigma_{11}-\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}\right)$$ By this property of the multivariate normal distribution, it is now clear that given $(x_{k+1},\dots,x_n)=\mathbf{a}^T,$ the pdf is maximized at the mean $\mu_1+\Sigma_{12}\Sigma_{22}^{-1}(\mathbf{a}-\mu_2).$
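A small numerical sketch of the block formulas (all arrays invented for illustration):

```python
import numpy as np

# Invented example: n = 3, condition on the last n - k = 1 coordinate.
mu = np.array([0.0, 1.0, 2.0])
Sigma = np.array([[2.0, 0.5, 0.3],
                  [0.5, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])
k = 2
a = np.array([3.0])  # observed value of x_3

mu1, mu2 = mu[:k], mu[k:]
S11, S12 = Sigma[:k, :k], Sigma[:k, k:]
S21, S22 = Sigma[k:, :k], Sigma[k:, k:]

# The conditional mean is the maximizer of the conditional density.
cond_mean = mu1 + S12 @ np.linalg.solve(S22, a - mu2)
cond_cov = S11 - S12 @ np.linalg.solve(S22, S21)
print(cond_mean, cond_cov, sep="\n")
```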
Bounded Convergence Theorem and vanishing functions
It's because $f_n(x)\to f(x)$ a.e. So let's consider a more specific example. Let $E=[-1,1]\subset \mathbb{R}$ and let $\lambda$ be the Lebesgue measure on the line. Suppose the $f_n$ are supported on $[-1,1]$, that $f$ is the pointwise limit, and that $f$ vanishes outside of $E$. Then if $g(x):=f(x)$ for $x\in \mathbb{R}\setminus (\mathbb{Q}\cap E)$ and $g(x) :=f(x)+1$ for $x\in \mathbb{Q}\cap E$, then $f_n\to g$ a.e. as well.
Definite integral with non-injective u-substitution
The substitution rule/change of variables theorem says the following: Suppose $f:[a,b]\to\Bbb{R}$ is continuous, and $u:[\alpha,\beta]\to [a,b]$ is differentiable with Riemann-integrable derivative (or at this point if you don't like remembering various hypotheses, just assume everything is smooth). Then, \begin{align} \int_{\alpha}^{\beta}f(u(x))\cdot u'(x)\,dx &= \int_{u(\alpha)}^{u(\beta)}f(t)\,dt \end{align} "substitute $t=u(x)$". If you state the theorem like this, there is no need at all for any injectivity assumptions on $u$; this equality follows immediately from the fundamental theorem of calculus and the chain rule (if $F$ is a primitive of $f$, then the LHS and RHS are both equal to $F(u(\beta))-F(u(\alpha))$). The problem is that people often don't carefully specify the two functions $f$ and $u$; what ends up happening is they misapply the theorem and then impose extra conditions like injectivity (which of course doesn't hurt, but it doesn't really address the issue). In your example, let $f(t)=t^2$ and $u(x)=\sin x$ (we can define these functions on all of $\Bbb{R}$, so there are no domain issues here at all, and all the compositions make sense etc). Then, \begin{align} \int_0^{2\pi}\sin^2x\cdot \cos x\,dx &=\int_0^{2\pi}f(u(x))\cdot u'(x)\,dx\\ &=\int_{u(0)}^{u(2\pi)}f(t)\,dt\\ &=\int_0^0t^2\,dt\\ &= 0. \end{align} This really is by a direct application of the theorem; I'm not sure why you say it is erroneous. Of course, a corollary of the theorem I wrote above is the following: Suppose $g:[\alpha,\beta]\to\Bbb{R}$ is continuous and $v:[\alpha,\beta]\to [a,b]$ is $C^1$ with $C^1$ inverse. Then, \begin{align} \int_{\alpha}^{\beta}g(x)\,dx &= \int_{v(\alpha)}^{v(\beta)}g(v^{-1}(t))\cdot (v^{-1})'(t)\,dt\\ &=\int_{v(\alpha)}^{v(\beta)}g(v^{-1}(t))\cdot \frac{1}{v'(v^{-1}(t))}\,dt \end{align} "substitute $x=v^{-1}(t)$". The "advantage" of this formula is that on the LHS there is only $g$, i.e. without any variable changes, and we move all the stuff involving changes of variables to the other side of the equation (so all instances of $v$ appear only on the RHS). Compare this to my first formula, where we didn't make any assumptions of injectivity, and thus as a result we have $u$ appearing on both the LHS and the RHS of the equation. The added hypothesis of injectivity is the price we pay if we want to isolate everything to one side. Sometimes in computations, this second form of the theorem (which is really a special case of the one above) is more useful, which is why people may sometimes insist that injectivity is a must.
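For what it's worth, a direct numerical check of the example (a sketch using scipy):

```python
import math
from scipy.integrate import quad

val, err = quad(lambda x: math.sin(x)**2 * math.cos(x), 0, 2*math.pi)
print(val)  # ~0, matching the substitution t = sin(x) with equal endpoints
```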
What does $X \times X \rightarrow \mathbb R $ mean?
The notation $X \times X$ represents the Cartesian product of the set $X$ with itself. Thus, $X \times X = \{(x_0,x_1): x_0,x_1 \in X\}$. Whenever you see something like "$f: A \to B$," this is to be read as "$f$ is a function which takes elements of the set $A$ to elements of the set $B$." Thus, $X \times X \to \mathbb{R}$ describes the domain/codomain pair of some function, which takes pairs $(x_1, x_2) \in X \times X$ and turns them into some real number. As an example, suppose $X$ is the set of integers, and let $f: X \times X \to \mathbb{R}$ be defined as $f(x_1, x_2) = |x_1 - x_2|$. Then $f((1,5)) = |1-5| = |-4| = 4$.
Ball and urn problem
The appropriate probability distribution for this question is the multinomial distribution, which is a generalization of the binomial distribution. That is to say, let $\boldsymbol X = (A,B,C)$ be a vector-valued random variable that counts the number of balls drawn of each type. Then the parameters are $n = 6$, $p_1 = 0.1586$, $p_2 = 0.81859$, and $p_3 = 1 - p_1 - p_2 = 0.02275$. Then $$\Pr[\boldsymbol X = (3,2,1)] = \frac{6!}{3!2!1!} p_1^3 p_2^2 p_3^1.$$
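If you want to evaluate this numerically, here is a sketch in Python that plugs in the probabilities quoted above (note they carry rounding error and do not sum to exactly $1$):

```python
from math import factorial

p1, p2, p3 = 0.1586, 0.81859, 0.02275  # probabilities quoted above
coeff = factorial(6) // (factorial(3) * factorial(2) * factorial(1))
print(coeff * p1**3 * p2**2 * p3)  # Pr[X = (3, 2, 1)]
```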
If $A$ is a set, then $\mathrm{card}(P(A)) = 2^{\mathrm{card} A}$.
In order to fully answer this question, first you need to understand what cardinal exponentiation is. We define $|A|^{|B|}$ to be the cardinality of the set $\{f\colon B\to A\mid f\text{ is a function}\}$, which is often denoted as $A^B$. So we have that $|A|^{|B|}=\left|A^B\right|$. It is a good exercise to see that this is well defined: if $|A|=|C|$ and $|B|=|D|$ then $\left|A^B\right|=\left|C^D\right|$. So we have that $2$ is really just the cardinality of a set with two elements, say $\{0,1\}$. And so we wish to prove that $|\mathcal P(A)|$ is exactly $2^{|A|}$, or $\left|\{0,1\}^A\right|$. The elements of $2^A$ are functions from $A$ into $\{0,1\}$, and we can think of each function as a huge bank of switches: when we feed the function an $a\in A$ it tells us whether the switch is on or off. This is really saying whether $a$ is in a particular set or not, and this set is defined by the function itself. Therefore we define the function $F\colon2^A\to\mathcal P(A)$ by $F(\pi)=\{a\in A\mid \pi(a)=1\}$. The function $F$ takes a function $\pi$ and returns the set of those elements of $A$ on which $\pi$ returns $1$, that is, "the switch is on". Now it is up to you to show that this $F$ is a bijection. To see that, let me give you a hint: all the functions in $2^A$ have the same domain, and distinct functions with the same domain have some element on which they disagree (that is, $\pi(a)\neq\sigma(a)$ if $\pi,\sigma$ are such functions and $a$ is such an element). Also relevant: How to Understand the Definition of Cardinal Exponentiation
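The bijection is easy to see concretely on a finite set. Here is a sketch in Python, where a function $\pi\colon A\to\{0,1\}$ is represented by its tuple of values:

```python
from itertools import product

A = ["a", "b", "c"]

# Each element of 2^A is a function A -> {0, 1}; represent it by its
# tuple of values (pi("a"), pi("b"), pi("c")).
functions = list(product([0, 1], repeat=len(A)))

# F(pi) = {x in A : pi(x) = 1}
subsets = [{x for x, bit in zip(A, pi) if bit == 1} for pi in functions]

print(len(functions))                        # 8 = 2**3 functions...
print(len({frozenset(s) for s in subsets}))  # ...8 distinct subsets: a bijection
```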
Two different dense open subsets
In $\mathbb{R}$ with the usual topology, $\mathbb{R}\setminus \mathbb{Z}$ is open and dense, and so are all sets of the form $\mathbb{R}\setminus F$ where $F$ is finite (e.g. a singleton). $\mathbb{R}\setminus C$, where $C$ is the middle-thirds Cantor set, is another still, as is $\mathbb{R}\setminus \{x: \exists n \in \mathbb{N}^+: x=\frac{1}{n} \lor x=0\}$, etc. There are plenty of dense open sets in most metric spaces. Only in the discrete metric/topology is there exactly one dense open subset, namely the whole space.
proving a matrix is symmetric to its squared version
The solution, as given in the comments: We note that $$ A^T = (A^T A)^T = A^T (A^T)^T = A^T A = A $$ So that $A = A^T$. Thus, $A$ is symmetric, and $A = A^TA = (A)A = A^2$.
Triangle related coordinate geometry question
Hint: Draw a diagram of the construction described above. There is no absolute scale given in the problem, so let's set the inradius of $\triangle ABC$ to be $1$. Since the radius of the excircle opposite $A$ is twice that of the incircle, it is $2$. By similar triangles, we have $|AE|=2|AD|$. Furthermore, $|DE|=3$, so $|AD|=3$ and $|AE|=6$. Everything else should be calculable using similar triangles.
Show that A is an algebra
The complement of $A$ is the difference of $\Omega$ and $A$, so $A^c$ belongs to $X$ if $A$ does. Given $A,B\in X$ we therefore also have that $$ A^c-B=(A^c\cap B^c)=(A\cup B)^c $$ and hence $A\cup B$ belongs to $X$.
What does $\sum x$ mean when $x$ is a vector?
Since you write that each column defines a distribution, this is more or less saying that each column is a probability vector. Then, just as the standard axiom for probabilities requires $$\sum_i p_i=1,$$ one requires here $$\sum_{i} x_{ij}=1 \quad\text{for each column } j,$$ where $j$ refers to the column and $i$ refers to the row. So $\sum x$ sums each column over the rows, and each such sum must evaluate to $1$, giving the vector $(1_1,1_2,\dots,1_{\text{no. of cols}})$.
Transform $(\phi \vee \psi) \wedge (\neg \phi \vee \neg \psi) $ to $ (\phi \wedge \neg \psi) \vee (\neg \phi \wedge \psi)$ using equivalences?
I wrote down my question and instantly knew the answer. I just leave this here since I already typed it. $(\phi \vee \psi) \wedge (\neg \phi \vee \neg \psi) $ $\equiv (\phi \wedge (\neg \phi \vee \neg \psi)) \vee (\psi \wedge (\neg \phi \vee \neg \psi)) $ (distributivity) $\equiv ((\phi \wedge \neg\phi) \vee (\phi \wedge \neg \psi)) \vee ((\psi \wedge \neg\phi) \vee (\psi \wedge \neg\psi))$ (distributivity again) $\equiv (\phi \wedge \neg \psi) \vee (\neg \phi \wedge \psi)$ (the contradictions $\phi\wedge\neg\phi$ and $\psi\wedge\neg\psi$ drop out)
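A mechanical double-check via truth tables (a small sketch):

```python
from itertools import product

lhs = lambda p, q: (p or q) and (not p or not q)
rhs = lambda p, q: (p and not q) or (not p and q)

assert all(lhs(p, q) == rhs(p, q) for p, q in product([False, True], repeat=2))
print("equivalent on all four valuations")
```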
$I_n=\int_0^1{x^ne^x\,dx}$. Find $\lim_{n\to\infty}nI_n$
Integration by parts shows that $$ n\int_0^1 {x^n e^x dx} = \frac{n}{{n + 1}}e - \frac{n}{{n + 1}}\int_0^1 {x^{n + 1} e^x dx} . $$ Hence $$ \mathop {\lim }\limits_{n \to + \infty } n\int_0^1 {x^n e^x dx} = e - \int_0^1 {\mathop {\lim }\limits_{n \to + \infty } x^{n + 1} e^x dx} = e, $$ since the integrand tends to $0$ pointwise for $0<x<1$. Interchanging the limit and the integral is permitted since the integrand is at most $e^x$, which is integrable on $(0,1)$.
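Numerically (a sketch), $nI_n$ can be watched creeping up toward $e\approx 2.71828$:

```python
from math import exp
from scipy.integrate import quad

for n in (10, 100, 1000):
    I_n, _ = quad(lambda x: x**n * exp(x), 0, 1)
    print(n, n * I_n)  # approaches e = 2.71828...
```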