Texts on the theory of partial algebras.
I've been interested in similar questions. I haven't come across any such book. However, there is the notion of an inverse semigroup, which models the algebra of partial maps on a space under partial composition, and there are books devoted to that. For example: Petrich, Inverse Semigroups, and Lawson, Inverse Semigroups: The Theory of Partial Symmetries. Unfortunately, I haven't looked at either (they are on my to-read list), so please don't take this as a recommendation. They can be categorified into partial categories. There is also the paper Ruy Exel, Partial actions of groups and actions of semigroups, which might be worth taking a look at. It's available on the arXiv as article 9511003.
Area of a surface of revolution
Surface area in this case is given by $$2 \pi \int_1^2 dx \, x \, \sqrt{1+y'^2}$$ $$y=\frac14 x^4 + \frac18 x^{-2} \implies y'=x^3-\frac14 x^{-3}$$ so that $$y'^2 = x^6 - \frac12 + \frac{1}{16} x^{-6} \implies 1+y'^2 = x^6 + \frac12 + \frac{1}{16} x^{-6} = \left ( x^3+\frac14 x^{-3} \right)^2$$ Thus, the surface area is $$2 \pi \int_1^2 dx \, x \left( x^3+\frac14 x^{-3} \right)$$ Can you take it from here?
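As a quick sanity check of the algebra (a small Python sketch, not part of the original solution), one can confirm numerically that $1+y'^2$ really is the perfect square $\left(x^3+\frac14 x^{-3}\right)^2$ on the interval of integration:

```python
def yprime(x):
    # y = x^4/4 + x^-2/8  =>  y' = x^3 - x^-3/4
    return x**3 - 0.25 / x**3

# verify 1 + y'^2 == (x^3 + x^-3/4)^2 at sample points in [1, 2]
for x in [1.0, 1.25, 1.5, 1.75, 2.0]:
    lhs = 1 + yprime(x)**2
    rhs = (x**3 + 0.25 / x**3)**2
    assert abs(lhs - rhs) < 1e-9
```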
Maximum operation order for a set of integers
I think it's fairly trivial that if there are no $1$s, then the best strategy is to multiply the numbers, as you said. The question is what to do when there are $1$s. Let's say that you have a $1$ in your array and you also have an $n$. The default strategy is to multiply all the numbers greater than $1$, so we're trying to decide where to add the $1$. If you add $1$ to $n$, then it's like you're dividing out the factor $n$ and multiplying in an $n+1$. The goal, then, is to maximize the ratio $\frac{n+1}{n}$. I'll use $[1,2,3,4]$ as an example. The product is $2 \times 3 \times 4 = 24$. If you add the $1$ to the $4$, for instance, your product becomes $2 \times 3 \times 4 \times \frac{5}{4} = 2 \times 3 \times 5 = 30$. But clearly we get the best results when the ratio $\frac{n+1}{n}$ is large, so we add our $1$ to the smallest factor there is. So the result is $2 \times 3 \times 4 \times \frac{3}{2} = 3 \times 3 \times 4 = 36$. Why isn't it a good strategy to add $1$ to itself? Doing so creates a $2$, which lets you double your product, but it uses up two of your $1$s. Adding a $1$ to a $2$ increases the product by $50\%$, and doing that twice gives you $150\% \times 150\% = 225\%$, which is better than doubling.
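These claims can be checked by brute force on small inputs (my own sketch, not from the question). The function below maximizes over every way of combining the numbers with $+$ and $\times$: an optimal expression tree splits the multiset at its root, and since both operations are monotone in each argument for positive values, each half can be maximized independently.

```python
from functools import lru_cache
from itertools import combinations

def best(nums):
    """Maximum value obtainable by combining nums (all positive)
    with + and * in any order and parenthesization."""
    @lru_cache(maxsize=None)
    def go(ms):
        if len(ms) == 1:
            return ms[0]
        result = 0
        idx = range(len(ms))
        # split the multiset into two nonempty parts at the root
        for r in range(1, len(ms)):
            for left in combinations(idx, r):
                ls = tuple(sorted(ms[i] for i in left))
                rs = tuple(sorted(ms[i] for i in idx if i not in left))
                a, b = go(ls), go(rs)
                result = max(result, a + b, a * b)
        return result
    return go(tuple(sorted(nums)))

print(best((1, 2, 3, 4)))   # 36: (1+2)*3*4, as in the answer
print(best((1, 1, 2, 2)))   # 9:  (1+2)*(1+2) beats (1+1)*2*2 = 8
```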
3D geometry, what are the coordinates of the 4th vertex and the point of intersection of this trapezoid?
Hint: From parallelism, $\vec{AB} = k\vec{CD}$ for some negative $k$. Find $k$; then $C = D - \frac{1}{k}(B-A)$. To get the intersection of the diagonals, look at the lines $A + t(C - A)$ and $B + r(D-B)$ and find their intersection.
$\# \{\text{primes}\ 4n+3 \le x\}$ in terms of $\text{Li}(x)$ and roots of Dirichlet $L$-functions
(1) is a correct computation. In general, to treat primes of the form $kn+m$, you would have a linear combination of $\phi(k)$ sums, each of which runs over the zeros of a different Dirichlet $L$-function (of which the Riemann $\zeta$ function is a special case). And yes, assuming the generalized Riemann hypothesis, all of the terms including those sums over zeros can be estimated into the $O(x^{1/2+\epsilon})$ term. To find out more, you want to look for "the prime number theorem for arithmetic progressions", and in particular the "explicit formula". I know it appears in Montgomery and Vaughan's book, for example.
How to prove the equivalence of these two equations in modular arithmetics?
It's fairly obvious, to me, that the question is not intended for you to prove it for all $x \not\equiv 0 \pmod p$: taking $x = 1$ gives $x^4 \equiv 1$, so $x^4 \equiv -1 \pmod{p}$ could hold for that $x$ only for the prime $p = 2$. Instead, I believe a more appropriate wording of what the question intended is that, for an odd prime $p$, there exists an integer $x$ where $x^4 \equiv -1 \pmod{p} \iff p \equiv 1 \pmod{8}$. For the $\implies$ direction, you have the right idea, as also indicated in the question comments. Squaring gives $$x^8 \equiv 1 \pmod{p} \tag{1}\label{eq1A}$$ This means the multiplicative order of $x$ modulo $p$ must divide $8$. The only smaller divisors of $8$ are $1$, $2$ and $4$, and $x$ to any of those powers can't be congruent to $1$ (if the order divided $4$, then $x^4 \equiv 1$, contradicting $x^4 \equiv -1$ for odd $p$), so $8$ must be the multiplicative order of $x$ modulo $p$. Since the multiplicative order divides the value of Euler's totient function, which is $p - 1$ for primes, you have $$8 \mid p - 1 \implies p \equiv 1 \pmod{8} \tag{2}\label{eq2A}$$ For the $\impliedby$ direction, since $p$ is a prime, there is an element $g$ which is a primitive root and, thus, a generator of the multiplicative group of non-zero residues. This means there exists an integer $1 \lt j \lt p - 1$ where $$g^{j} \equiv -1 \pmod{p} \implies g^{2j} \equiv 1 \pmod{p} \tag{3}\label{eq3A}$$ As $g$ is a primitive root, its multiplicative order is $p - 1$, so $p - 1 \mid 2j$. However, $0 \lt j \lt p - 1 \implies 0 \lt 2j \lt 2(p - 1)$, so $$2j = p - 1 \tag{4}\label{eq4A}$$ Since $p \equiv 1 \pmod{8}$, there exists a positive integer $k$ such that $p = 8k + 1$. Using this in \eqref{eq4A} gives $$2j = 8k \implies j = 4k \tag{5}\label{eq5A}$$ Thus, \eqref{eq3A} gives $$g^{4k} \equiv -1 \pmod{p} \implies (g^k)^4 \equiv -1 \pmod{p} \tag{6}\label{eq6A}$$ You therefore have $x \equiv g^k \pmod{p}$ satisfying $x^4 \equiv -1 \pmod{p}$.
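The criterion is easy to confirm by brute force for small primes (a sketch, not part of the proof itself):

```python
def has_fourth_root_of_minus_one(p):
    """Brute-force test: does x^4 ≡ -1 (mod p) have a solution?"""
    return any(pow(x, 4, p) == p - 1 for x in range(1, p))

# solvable for an odd prime p exactly when p ≡ 1 (mod 8)
for p in [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 73, 89, 97]:
    assert has_fourth_root_of_minus_one(p) == (p % 8 == 1)
```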
Trouble understanding noncrossing partitions
Draw the elements of your set in clockwise order around a circle. Connect elements that belong to the same block by a straight line. A partition is non-crossing if and only if the lines don't cross.
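The circle picture is equivalent to a simple order condition: two blocks cross exactly when there are $a<b<c<d$ with $a,c$ in one block and $b,d$ in the other. A small Python sketch of this test (my own illustration, not from the original answer):

```python
from itertools import combinations

def crossing_pair(B1, B2):
    """Do blocks B1, B2 cross, i.e. is there a < b < c < d with
    a, c in one block and b, d in the other?"""
    for a, c in combinations(sorted(B1), 2):
        for b, d in combinations(sorted(B2), 2):
            if a < b < c < d or b < a < d < c:
                return True
    return False

def is_noncrossing(blocks):
    return not any(crossing_pair(B1, B2)
                   for B1, B2 in combinations(blocks, 2))

print(is_noncrossing([{1, 4}, {2, 3}]))  # True  (nested chords)
print(is_noncrossing([{1, 3}, {2, 4}]))  # False (chords cross)
```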
Calculating the limit of the "$\dfrac{volume}{area}$" ratio for a 2D function
It is indeed possible. For "nice enough" functions, we can rewrite this limit as $$ \lim_{x \to x_0} \frac{1}{x-x_0}\left( \lim_{y \to y_0} \frac{1}{y-y_0}\int_{y_0}^y \left(\int_{x_0}^x f(s,t)\,ds \right)dt \right), $$ which we may evaluate using the fact (via the fundamental theorem of calculus) that $$ \lim_{x \to x_0} \frac{1}{x-x_0} \int_{x_0}^x g(t)\,dt = g(x_0) $$
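For a concrete illustration (my own sketch, with an arbitrarily chosen smooth $f$): the average of $f$ over a shrinking square $[x_0,x_0+h]\times[y_0,y_0+h]$, i.e. the double integral divided by the area, does approach $f(x_0,y_0)$.

```python
import math

def avg_over_square(f, x0, y0, h, steps=200):
    """Midpoint-rule average of f over [x0, x0+h] x [y0, y0+h],
    i.e. the double integral divided by the area h^2."""
    total = 0.0
    for i in range(steps):
        for j in range(steps):
            s = x0 + (i + 0.5) * h / steps
            t = y0 + (j + 0.5) * h / steps
            total += f(s, t)
    return total / steps**2

f = lambda s, t: math.sin(s) * math.exp(t)   # arbitrary smooth example
x0, y0 = 0.3, 0.7
print(abs(avg_over_square(f, x0, y0, 1e-4) - f(x0, y0)))  # small
```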
Ratio of two sums with inverse radicals
Let the numerator and denominator be $N$ and $D$, respectively. Writing the even and odd terms together in $N$, $$N = \sum_{n=0}^{\infty} \left ( \frac{1}{\sqrt{6n+1}} -\frac{1}{\sqrt{6n+2}} +\frac{1}{\sqrt{6n+4}} -\frac{1}{\sqrt{6n+5}} \right) \\ = D -\sum_{n=0}^{\infty} \left( \frac{1}{\sqrt{6n+2}} -\frac{1}{\sqrt{6n+4}} \right) \\ =D-\frac{1}{\sqrt 2} N \\ \implies \frac ND = \frac{\sqrt 2}{\sqrt 2+1} =2-\sqrt 2 $$
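A numerical check of the ratio (a sketch; here I take $D = \sum_{n\ge0}\left(\frac{1}{\sqrt{6n+1}}-\frac{1}{\sqrt{6n+5}}\right)$, which is what the cancellation in the derivation produces):

```python
import math

M = 200000   # number of terms; the truncation tails are O(M**-0.5)
N = sum(1/math.sqrt(6*n+1) - 1/math.sqrt(6*n+2)
        + 1/math.sqrt(6*n+4) - 1/math.sqrt(6*n+5) for n in range(M))
D = sum(1/math.sqrt(6*n+1) - 1/math.sqrt(6*n+5) for n in range(M))
print(N / D, 2 - math.sqrt(2))  # both ≈ 0.5858
```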
Convergence of Sequences of Functions
Mimic the proof of the Ascoli–Arzelà theorem. Let $E=\{e_n\}_{n=1}^\infty$. The sequence $\{f_n(e_1)\}$ is bounded, so there is a convergent subsequence that I denote $\{f_{n^1_k}(e_1)\}$. The sequence $\{f_{n^1_k}(e_2)\}$ is bounded. It has a convergent subsequence that I will denote $\{f_{n^2_k}(e_2)\}$. Observe that $\{f_{n^2_k}(x)\}$ converges if $x\in\{e_1,e_2\}$. Iterate this argument: in the $m$-th step we will have constructed a sequence $\{f_{n^m_k}\}$ such that $\{f_{n^m_k}(x)\}$ converges if $x\in\{e_1,\dots,e_m\}$. The diagonal sequence $\{f_{n_k^k}\}$ converges for all $x\in E$.
About the Killing-Hopf theorem
Here is one example. You may have heard about the 3-dimensional Poincaré Conjecture (if $M$ is a compact, simply-connected 3-dimensional manifold without boundary, then $M$ is homeomorphic to $S^3$). Here is how it was proven by Grigori Perelman, broadly speaking: $M$ is known to admit a smooth structure. Equip $M$, therefore, with an arbitrarily chosen Riemannian metric $g$. Since we know nothing about the properties of $g$, we cannot deduce any conclusions about $M$ from the existence of $g$. Deform $g$ via the Ricci flow (with surgeries) to an Einstein metric $g_0$ on $M$. (OK, not on $M$ itself but on pieces of a connected sum decomposition of $M$, but let's ignore this.) In dimension 3 each Einstein metric has constant curvature. It is easy to see that this curvature has to be positive (since $M$ is compact and simply connected); hence, after rescaling, we can assume it to be equal to $1$. Hence, by the Killing–Hopf theorem (I always thought of it as Élie Cartan's theorem...) $(M, g_0)$ is isometric to the unit 3-dimensional sphere with its standard metric. In particular, $M$ itself is diffeomorphic to $S^3$. Now you see how this theorem is useful, and what an advantage one has in having a metric of constant curvature on a manifold.
Elementary "bugs" in computer algebra systems?
Pick your favorite expression that is zero but not easily simplified to zero by the CAS. Then divide by this zero and deduce all kinds of absurdities. Another place a CAS can go wrong is in properly dealing with branch cuts, and in keeping track of domains of validity for various expressions. These and other problems have been discussed in the literature. A good place to find such information is to browse the web pages of leading researchers, and conference proceedings (ISSAC, SYMSAC, SIGSAM, EUROSAM, etc.). For example, see Richard Fateman's papers, e.g. his 33-page critique of Mathematica, and "Why Computer Algebra Systems Can't Solve Simple Equations" and "Branch Cuts in Computer Algebra", etc.
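Two toy illustrations of these failure modes in plain Python (my own sketches, not taken from the cited papers): an expression that is identically zero but is not recognized as zero, and the unsound rewrite $\sqrt{x^2}\to x$ that ignores the branch $x<0$:

```python
import math

# (sqrt(2) + sqrt(3))^2 - (5 + 2*sqrt(6)) is identically zero, but a
# system that fails to simplify it (here: floating point) sees only
# "something tiny"; dividing by it then produces garbage
z = (math.sqrt(2) + math.sqrt(3))**2 - (5 + 2 * math.sqrt(6))
print(z)  # tiny, but not recognizably zero

# the rewrite sqrt(x^2) -> x is wrong on the branch x < 0
x = -3.0
print(math.sqrt(x**2))  # 3.0, not x
```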
If $A$ is a set of size $2^{\aleph_0}$ and $S$ is an uncountable partition of $A$, is there an injection from $A$ into $S$?
No, there is no way to define such injection for two major reasons: It is consistent with $\sf ZF$ with the axiom of choice, the usual axioms of set theory, that there is some $S\subseteq\Bbb R$ such that $\aleph_0<|S|<2^{\aleph_0}$. In that case we can easily define the partition of singletons of the elements of $S$, and the rest is just one part. Now we have a partition which has a strictly smaller cardinality than $\Bbb R$, and therefore there is no injection as wanted. The word "define" often refers, implicitly of course, to an explicit definition, without an appeal to the axiom of choice. However it is consistent with $\sf ZF$ that the real numbers can be partitioned into an uncountable number of parts $S$ and neither $2^{\aleph_0}<|S|$ nor $|S|<2^{\aleph_0}$. In that case, as it shows, one cannot define an injection in any direction.
An exercise about distribution in Rudin
If you can prove the case that $\Lambda$ is of compact support, it is good. Then for the general case, suppose that $\psi\in C_c^\infty(\Omega)$ such that $\text{supp}(\phi)\subset\text{supp}(\psi)$ and $\psi(x)=1$ on $\text{supp}(\phi)$. You can prove that $\text{supp}(\psi\Lambda)\subset \text{supp}(\psi)$, so $\psi\Lambda\in \mathcal{D}'(\Omega)$ is of compact support. Also, you can show that $\langle \psi\Lambda,D^\alpha\phi\rangle=\langle \Lambda,D^\alpha\phi\rangle$ for all $\alpha$. Then by the first part, $\langle \Lambda,\phi\rangle=\langle \psi\Lambda,\phi\rangle=0$.
Is there a nice way to represent $\sum_{n=1}^\infty \frac{(-1)^{n+1}H_n}{n+m+1}$?
Mathematica choked when I gave it the general expression (at least the way I tried). Here's how the sum works out for some small values of $m$. \begin{gather} m = 0 : \tfrac{1}{2}\log^2(2) \\ m = 1 : -1 + 2 \log(2) - \tfrac{1}{2}\log^2(2) \\ m = 2 : \tfrac{5}{4} - 2 \log(2) + \tfrac{1}{2}\log^2(2) \\ m = 3 : - \tfrac{55}{36} + \tfrac{8}{3} \log(2) - \tfrac{1}{2}\log^2(2) \\ m = 4 : \tfrac{241}{144} -\tfrac{8}{3} \log(2) + \tfrac{1}{2}\log^2(2) \\ m = 5 : - \tfrac{6589}{3600} + \tfrac{46}{15} \log(2) - \tfrac{1}{2}\log^2(2) \end{gather}
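These values are easy to check numerically (a sketch of mine; the alternating series converges slowly, so two consecutive partial sums are averaged to accelerate it):

```python
import math

def S(m, N=100000):
    """Estimate sum_{n>=1} (-1)^(n+1) H_n / (n+m+1) by averaging
    two consecutive partial sums of the alternating series."""
    H, s, prev = 0.0, 0.0, 0.0
    for n in range(1, N + 2):
        H += 1.0 / n                 # harmonic number H_n
        prev = s
        term = H / (n + m + 1)
        s += term if n % 2 == 1 else -term
    return 0.5 * (prev + s)

L = math.log(2)
print(S(0), 0.5 * L**2)              # both ≈ 0.240227
print(S(1), -1 + 2*L - 0.5 * L**2)   # both ≈ 0.146068
```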
Calculating relationship between two coordinate frames with known relationships between each coordinate frame and a third one
The error was the assumption $T_c^d = R_d^c T_d^c$, which only takes into account the rotation between the coordinate frames, and not the translation. I used the fact that a point $p$ in C coordinate space can be related to D as follows: $p^c = T_d^c + R_d^c p^d$. Rearranging, taking the inverse of the rotation matrix and using orthogonality gives: $p^d = (R_d^c)^T (p^c - T_d^c)$. Because we want the translation from the origin of D to C, I use the fact that the point in the C coordinate frame is the origin; $p^d$ then becomes $T_c^d$: $$T_c^d = (R_d^c)^T \left(\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} - T_d^c\right)$$ So my final equation becomes: $$T_c^w = T_d^w + R_d^w \left((R_d^c)^T \left(\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} - T_d^c\right)\right)$$ When I evaluate this, I get my result $\begin{bmatrix} 0.1 & 0.1 & 0.3 \end{bmatrix}$, as expected.
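Here is a self-contained numerical check of that inverse-transform formula (the rotation and translation values below are hypothetical, chosen only to exercise the algebra; they are not the numbers from the question):

```python
import math

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def transpose(M):
    return [[M[j][i] for j in range(3)] for i in range(3)]

# hypothetical frame relation: p^c = T_dc + R_dc p^d (30° about z)
a = math.radians(30)
R_dc = [[math.cos(a), -math.sin(a), 0.0],
        [math.sin(a),  math.cos(a), 0.0],
        [0.0,          0.0,         1.0]]
T_dc = [1.0, 2.0, 3.0]

# inverse transform: R_cd = (R_dc)^T and T_cd = (R_dc)^T (0 - T_dc)
R_cd = transpose(R_dc)
T_cd = matvec(R_cd, [-t for t in T_dc])

# round trip: map a point D -> C -> D and recover it
p_d = [0.4, -1.2, 2.5]
p_c = [T_dc[i] + matvec(R_dc, p_d)[i] for i in range(3)]
p_back = [T_cd[i] + matvec(R_cd, p_c)[i] for i in range(3)]
print(max(abs(p_d[i] - p_back[i]) for i in range(3)))  # ~0
```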
How to compute chord distance which passes through two points in an ellipse/ellipsoid?
Let $\mathbf v={B-A\over \lVert B-A\rVert}$. Then the line through points $A$ and $B$ can be parameterized as $A+t\mathbf v$. Since $\lVert\mathbf v\rVert=1$, the parameter $t$ measures distance along this line. Now, substitute this into the equation of the ellipsoid, obtaining a quadratic equation in $t$. Writing this equation as $at^2+bt+c=0$, its solutions are the standard ${-b\pm\sqrt{b^2-4ac}\over2a}$, per the quadratic formula. The length of the chord is of course the distance between the points represented by these solutions, but since we’ve arranged for $t$ to measure distances along the line, the chord length is also equal to the absolute value of the difference between the roots of the quadratic, namely $${\sqrt{b^2-4ac}\over\lvert a\rvert}.\tag1$$ You will need to determine the coefficients of this quadratic equation yourself from whatever equation it is that you have for the ellipsoid. For instance, the equation of any central quadric in $\mathbb R^n$ can be written in matrix/vector form as $$(\mathbf x-\mathbf p)^TQ(\mathbf x-\mathbf p)=1,$$ where $Q$ is a symmetric $n\times n$ matrix and $\mathbf p$ corresponds to a translation from the origin (in the case of an ellipsoid, its center). Substituting the parameterization of the line into this, expanding by linearity and rearranging, we obtain the quadratic equation $$(\mathbf v^TQ\mathbf v)t^2+2\mathbf v^TQ(A-\mathbf p)t + (A-\mathbf p)^TQ(A-\mathbf p)-1=0.\tag2$$ Since you know the center $\mathbf p$, rotation $R$ and semiaxis lengths $\lambda$, $\mu$, $\tau$† of the ellipsoid, you can simplify this quite a bit. 
If you apply the inverse of the transformation that rotates and translates the ellipsoid from standard position, i.e., set $A'=R^{-1}(A-\mathbf p)$, $B'=R^{-1}(B-\mathbf p)$ and $\mathbf v' = {B'-A'\over\lVert B'-A'\rVert} = R^{-1}\mathbf v$, then in this new coordinate system equation (2) becomes $$(\mathbf v'^TQ'\mathbf v')t^2+2(\mathbf v'^TQ'A')t+A'^TQ'A'-1=0$$ with $Q'=\operatorname{diag}(1/\lambda^2,1/\mu^2,1/\tau^2)$, making the chord length $${2\sqrt{(\mathbf v'^TQ'\mathbf A')^2-(\mathbf v'^TQ'\mathbf v')(\mathbf A'^TQ'\mathbf A'-1)}\over\lvert\mathbf v'^TQ'\mathbf v'\rvert}.\tag3$$ Expanding by coordinates, we have $$\mathbf v'^TQ'\mathbf v' = {v_x'^2\over\lambda^2}+{v_y'^2\over\mu^2}+{v_z'^2\over\tau^2} \\ \mathbf v'^TQ'\mathbf A' = {v_x'a_x'\over\lambda^2}+{v_y'a_y'\over\mu^2}+{v_z'a_z'\over\tau^2} \\ \mathbf A'^TQ'\mathbf A' = {a_x'^2\over\lambda^2}+{a_y'^2\over\mu^2}+{a_z'^2\over\tau^2}.$$ I’ll leave working the rest out to you. If I were coding this, though, I’d compute it in stages as was done above instead of trying to expand (1) into a closed-form expression in terms of the coordinates of $A$ and $B$. Recall, too, that for a rotation matrix $R^{-1}=R^T$. † These are usually named $a$, $b$ and $c$, but I’ve already used those names for the coefficients of the quadratic equation.
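To make the staged computation concrete, here is a sketch for the axis-aligned case (the primed coordinate system above, with the ellipsoid centered at the origin); the function and variable names are my own:

```python
import math

def chord_length(A, B, semiaxes):
    """Length of the chord cut from x^2/a^2 + y^2/b^2 + z^2/c^2 = 1
    by the line through points A and B, computed in stages."""
    d = [B[i] - A[i] for i in range(3)]
    norm = math.sqrt(sum(x * x for x in d))
    v = [x / norm for x in d]                    # unit direction
    q = [1.0 / s**2 for s in semiaxes]           # diagonal of Q'
    a = sum(q[i] * v[i] * v[i] for i in range(3))
    b = 2.0 * sum(q[i] * v[i] * A[i] for i in range(3))
    c = sum(q[i] * A[i] * A[i] for i in range(3)) - 1.0
    disc = b * b - 4.0 * a * c
    if disc <= 0.0:
        return 0.0                               # line misses or grazes
    return math.sqrt(disc) / abs(a)

# x-axis through an ellipsoid with semiaxes (2,3,4): chord = 2*2 = 4
print(chord_length([-5, 0, 0], [5, 0, 0], (2, 3, 4)))      # 4.0
# unit sphere, chord at height y = 0.6: 2*sqrt(1 - 0.36) = 1.6
print(chord_length([-2, 0.6, 0], [2, 0.6, 0], (1, 1, 1)))  # 1.6
```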
Limit of $a_{k+1}=\dfrac{a_k+b_k}{2}$, $b_{k+1}=\sqrt{a_kb_k}$?
The limit $L$ is known as arithmetic-geometric mean, see MathWorld entry, and can be expressed in terms of elliptic integrals: $$ L = \frac{(a+b)\pi}{4K((a-b)/(a+b))}$$ where $K$ is a complete elliptic integral. This goes back to Legendre and Gauss.
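The iteration itself converges quadratically and is trivial to implement (a sketch of mine; the comparison value in the test is the known AGM of $1$ and $\sqrt2$, the one Gauss studied):

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean of a, b > 0."""
    while abs(a - b) > tol * max(a, b):
        a, b = 0.5 * (a + b), math.sqrt(a * b)   # one AGM step
    return 0.5 * (a + b)

print(agm(1.0, math.sqrt(2.0)))  # ≈ 1.19814023473559...
```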
Weak convergence of Dirac measures
The functions $F_n$ have the special property that $F_n(x)$ can only be zero or one, for every $x$. For every $x$ in the dense set where $F_n(x)\to F(x)$, the same is true for $F(x)$, i.e., either $F(x)=0$ or $F(x)=1$. Since $F$ is right continuous, we have $F(x)=0$ or $F(x)=1$ for every $x\in \mathbb{R}$. The function $F$ is non-decreasing and non-constant, and has $F(x)\in\{0,1\}$ for every $x$. The only such functions are $\chi_{[\alpha,\infty)}$ for $\alpha\in\mathbb{R}$. Added: (1) You do not need to show a specific relationship. The proof does not assume that $\lim_n \alpha_n$ exists. As in my solution, you first show that $\alpha$ exists, and then prove that $\alpha_n\to\alpha$. You yourself said that this is easy. (2) Any dense set will do.
What (if anything) am I doing wrong in this eigenvalue boundary value problem?
Your $X'\left(0\right)$ equation correctly gives $B=0$. This leaves the $X\left(2 \pi\right)$ equation, or $X\left(2 \pi \right) = A \cos 2 \pi \alpha = 0$. Now you are supposed to determine the values of $\alpha$ that satisfy this equation. There are infinitely many, and thus infinitely many solutions that you will then sum in a linear combination.
When $\operatorname{dim}X = \operatorname{dim} Y$, immersions are the same as local diffeomorphism.
This is a direct application of the inverse function theorem: Let $x \in X$. Since $f$ is an immersion, $df_x : T_xX \to T_{f(x)}Y$ is injective. Since the two spaces have the same dimension $\mathrm{dim}(X) = \mathrm{dim}(Y)$, $df_x$ is actually an isomorphism. By the inverse function theorem, this implies that there is a neighborhood $U$ of $x$ such that $f_{|U} : U \to f(U)$ is a diffeomorphism. This is the definition of a local diffeomorphism: every point has a neighborhood like that.
Justin is setting a password on his computer. He is told that his password must contain at least 4 but no more than 6 characters.
If your password must have at least $4$ characters and at most $6$, it can have $4$, $5$ or $6$. I am assuming you are using the alphabet abcdefghijklmnopqrstuvwxyz, which has $26$ letters. Adding the digits $0, \ldots, 9$, you have $36$ possible characters. (I will also assume, as the counting below requires, that characters cannot repeat.) Let's calculate separately how many passwords contain $4$, $5$ and $6$ characters. Let us build a password with $4$ characters. For the first character there are $36$ possible options. You pick some character and you are left with $35$. For the second character, you have $35$ options, so there are $36\times35$ ways of building a password with $2$ characters. You still have $34$ characters left, so there are $36\times35\times34$ distinct passwords with $3$ characters and no repeated characters. Since $33$ characters are left, we pick the last one and get a password with $4$ characters. Hence there are $36\times35\times34\times33$ passwords with $4$ characters. Repeating for $5$ and $6$ characters, we get a total of $36\times35\times34\times33\times32$ $5$-character passwords and $36\times35\times34\times33\times32\times31$ $6$-character passwords. We add the totals up to $$36\times35\times34\times33\times32\times31 + 36\times35\times34\times33\times32 + 36\times35\times34\times33 = \frac{36!}{30!} + \frac{36!}{31!} + \frac{36!}{32!}$$ Now on to the second question: how many passwords contain only the letters of his name? Since all letters in his name are distinct and his name has $6$ letters, such a password can only be a permutation of his name. There are $n!$ permutations of $n$ distinct objects. Since you are talking about probability, it should be $$P(\text{password with letters in justin}) = \frac{\text{passwords that satisfy your criteria}}{\text{total number of passwords}}$$ Hence the probability is $$\frac{6!}{\frac{36!}{30!} + \frac{36!}{31!} + \frac{36!}{32!}}$$
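The arithmetic is easy to check directly (a quick sketch):

```python
from math import factorial

f = factorial
# 36!/32! = 36*35*34*33, etc. -- counts of 4-, 5- and 6-character
# passwords with no repeated characters
total = f(36) // f(32) + f(36) // f(31) + f(36) // f(30)
prob = f(6) / total
print(total, prob)
```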
Question on invariant mean.
Let $G$ be a locally compact Abelian group. A function $f \in L^{\infty} (G)$ is called almost periodic whenever the collection of translates $\{f (x y^{-1}) \}_{y \in G}$ is relatively compact in $L^{\infty} (G)$. The collection of all almost periodic functions on $G$ is denoted by $AP(G)$. There is the following fundamental result on the existence and uniqueness of a mean on $AP(G) \subset L^{\infty} (G)$: Let $G$ be a locally compact Abelian group. Then there exists a translation-invariant mean $M$ on $AP(G)$. Moreover, if $N : AP(G) \to \mathbb{C}$ is a linear functional satisfying (i) $N(1_G) = 1$; (ii) $N(f) \leq 0$ for $f \leq 0$; and (iii) $N(f(\cdot\, y^{-1})) = N(f(\cdot))$ for $y \in G$; then $M = N$. A more concrete expression of the mean can be given in the case that $G$ is assumed to be $\sigma$-compact, which your specific case $G = \mathbb{Z}$ clearly is. Under this additional assumption, there is the following result, which can be found as Theorem 18.10 in Abstract Harmonic Analysis by Hewitt & Ross. Let $G$ be a locally compact, $\sigma$-compact, Abelian group. Then there exists an increasing sequence $\{H_n \}_{n \in \mathbb{N}}$ in $G$ of relatively compact, open sets such that $$ M(f) = \lim_{n \to \infty} \mu_G (H_n)^{-1} \int_{H_n} f(x) \; d\mu_G (x) $$ for all $f \in AP(G)$. In the special case $G = \mathbb{R}$, a mean on $AP(G)$ is given by $$ M(f) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^T f(x) dx, $$ and for $G = \mathbb{Z}$, it is given by $$ M(f) = \lim_{n \to \infty} \frac{1}{2n+1} \sum_{k = - n}^n f(k). $$ Both examples are discussed in Example 18.15 of the aforementioned book by Hewitt & Ross.
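For $G=\mathbb Z$ the last formula is directly computable; a small sketch:

```python
import math

def zmean(f, n=100000):
    """Approximate M(f) = lim 1/(2n+1) * sum_{k=-n}^{n} f(k)."""
    return sum(f(k) for k in range(-n, n + 1)) / (2 * n + 1)

print(zmean(lambda k: 1.0))   # 1.0 (mean of the constant function)
print(zmean(math.cos))        # ≈ 0 (cos(k) is almost periodic with mean 0)
```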
Permutations with conditions - small exclusions from a finite pool
Unfortunately your guess, while tempting at first glance, is not quite correct (it is, as you say, a little too simplistic). Before we look at where it goes wrong, it is instructive to first outline the logic that led to the formula you give for the unrestricted case (the one corresponding to any bead being allowed in any square), for those readers who (like me at first) can't intuit where it comes from: There are 20 total beads and 18 total squares, meaning that when the board is full, two beads are left off. All arrangements are of two types: ones where the two missing beads are of the same color, and ones where they are different. In the first case, there are 4 ways to choose the color of the missing beads, and the beads remaining on the board belong to groups with multiplicities 5, 5, 5, and 3. In the second case, there are $4 \times 3 / 2 = 6$ ways to choose the colors of the missing beads, and the remaining beads form color groups of multiplicities 5, 5, 4, and 4. Using the standard formulas, then, for permutations of objects, groups of which are indistinguishable, we get the final result $N_{boards}=4 \times \frac{18!}{5!5!5!3!} + 6 \times \frac{18!}{5!5!4!4!}$ This is all well and good; now let's tackle the case where certain squares cannot accept a bead of a certain color. Let's assume that we've chosen our two beads to leave off the board, and we're interested in finding the number of ways of arranging the remaining beads subject to these choices. In the original, unrestricted case, this number was just 18! divided by a product-of-factorials that accounts for the indistinguishability of beads of the same color. 
Let's ignore these latter factors for now; we can assume that, in addition to colors, numbers 1-18 are painted on our beads, making them distinguishable, count the number of ways of arranging these distinguishable beads subject to the color constraints, and then at the end imagine erasing the numbers and putting back in the product-of-factorials factor (hopefully it won't be too hard to convince yourself that this procedure is valid). So, we are looking to find out what replaces 18! when we arrange 18 distinguishable, colored beads onto 18 squares, BUT requiring that there are two squares for every color where that color is excluded. Let's imagine drawing a random first bead from a jar and placing it on the empty board. Clearly, since exactly two squares are excluded for whatever color the bead is, this number of ways is 16. We are well on our way to getting 16!, which was your guess! Now we pick the second bead from the jar and try to place it. Trouble already!! It's true that, for most placements of our first bead, the second bead will have 15 remaining allowed placements, but what if our first bead went on one of the squares where the second bead would have been excluded anyway due to its color? In that case, there are STILL 16 places left to put the second bead (we didn't "use up" any with the first bead). Here's the rub: with the color constraints in place, we cannot imagine sequentially placing beads where each of these placements is independent. So what's the correct way to do the calculation? The answer is that it's a really hard problem. While the fundamental combinatorics is trivial at each stage, there are just a whole lot of interdependent cases to analyze. I won't do that here, but at least you now know that your original guess needs some refinement, and hopefully you have some clues as to how to proceed if you choose to (though I would warn that this seems like a ~100 man-hour or more type of project to me... maybe someone else will find a clever solution :)). Please report back if you end up finding your answer!
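To see the interdependence concretely, here is a tiny analogue I made up (2 colors with 2 distinguishable beads each, 4 squares, each color barred from one square), brute-forced:

```python
from itertools import permutations

colors = ['R', 'R', 'B', 'B']          # beads 0..3, distinguishable
excluded = {'R': {0}, 'B': {1}}        # squares barred per color

valid = 0
for placement in permutations(range(4)):   # placement[i] = square of bead i
    if all(placement[i] not in excluded[colors[i]] for i in range(4)):
        valid += 1
# inclusion-exclusion agrees: 4! - 2*3! - 2*3! + 2*2*2! = 8,
# while a naive "independent placements" product gets this wrong
print(valid)  # 8
```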
mathematical patterns and their connection to grouping "things".
Let me explain to you the mathematical concept of "abstraction". By definition it is the quality of dealing with ideas rather than events. What we mean is that the underlying property you are studying is what forms the basis of all interpretation and identification. So in your context, if you associate a binary string with certain objects, then the binary string can be thought of as an abstraction of those objects.
Convergence of $\frac{\sinh^2(x)}{\sinh^2(x) + C}$ as $x \to \infty$
Observe that $$\frac{\sinh^2(x)}{\sinh^2(x)+C}=\frac{\sinh^2(x)+C-C}{\sinh^2(x)+C}\\=1-\frac C{\sinh^2(x)+C}$$ And then the limit is hopefully more clear.
Proving that CDFs of maximum and $\frac{1}{k}$ sum are equal for $X_i \sim \text{Exp}(1)$.
1) If $X\sim \text{Exp}(1)$, it is very easy to verify that $\frac{X}{k}\sim \text{Exp}(k)$ (rate $k$). 2) The density of $\max_i(X_i)$ is the following: $$ \bbox[5px,border:2px solid black] { f_{max}(z)=n e^{-z}(1-e^{-z})^{n-1} \qquad (1) } $$ Now it is very easy to check the density of the successive sums by convolution. For $n=2$, $$f_Z(z)=\int_0^z e^{-x}2e^{-2(z-x)}dx=2e^{-z}(1-e^{-z})$$ For $n=3$, $$f_Z(z)=\int_0^z 2e^{-x}(1-e^{-x})3e^{-3(z-x)}dx=3e^{-z}(1-e^{-z})^2$$ and so on. This proves what was requested: as you can see, the densities match the general formula (1).
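The $n=2$ convolution is easy to confirm numerically as well (a sketch using the midpoint rule):

```python
import math

def f_max(z, n):
    # claimed density of max(X_1, ..., X_n), formula (1)
    return n * math.exp(-z) * (1 - math.exp(-z))**(n - 1)

def conv_density(z, steps=20000):
    # density of X_1 + X_2/2 at z: convolve e^{-x} with 2 e^{-2x}
    h = z / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += math.exp(-x) * 2.0 * math.exp(-2.0 * (z - x))
    return total * h

print(conv_density(1.3), f_max(1.3, 2))  # agree
```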
Proving a complex version of the Hahn-Banach theorem
$|\mathrm{Re}(f(x))|\le\sqrt{|\mathrm{Re}(f(x))|^2+|\mathrm{Im}(f(x))|^2}=|f(x)|\le p(x)$
Applicable Group Problem Involving Modular Arithmetic
Your identity element is $25$ because, working mod $40$, you have $$ 25\times 5=125 \equiv 5$$ $$ 25\times 15=375\equiv 15$$ $$25\times 25=625\equiv 25$$ $$25\times 35=875\equiv 35$$ It is interesting to see that for this group every element is its own inverse.
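All of this is easy to verify by machine (a quick sketch):

```python
G = [5, 15, 25, 35]
table = {(a, b): (a * b) % 40 for a in G for b in G}

print(all(table[(25, g)] == g for g in G))  # True: 25 is the identity
print(all(table[(g, g)] == 25 for g in G))  # True: each element is its own inverse
print(all(v in G for v in table.values())) # True: closure under * mod 40
```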
Improper integral $ \int\limits_0^{\infty} \frac{ x^2 \arctan x }{x^4 + x^2 + 1 } dx $
I will show that $$I = \int_0^\infty \frac{x^2 \tan^{-1} x}{1 + x^2 + x^4} \, dx = \frac{\pi^2}{8 \sqrt{3}} + \frac{\pi}{24} \ln \left (\frac{2 - \sqrt{3}}{2 + \sqrt{3}} \right ) + \frac{2}{3} \mathbf{G},$$ where $\mathbf{G}$ is Catalan's constant. Observing that $$\frac{\tan^{-1} x}{x} = \int_0^1 \frac{dy}{1 + x^2 y^2},$$ on converting the integral to a double integral we have $$I = \int_0^1 \int_0^\infty \frac{x^3}{(1 + x^2 + x^4)(1 + x^2 y^2)} \, dx \, dy,$$ after the order of integration has been changed. Finding a partial fraction decomposition for the integrand with respect to the variable $x$ gives $$I = \int_0^1 \frac{1}{1 - y^2 + y^4} \int_0^\infty \left [- \frac{xy^2}{1 + x^2 y^2} + \frac{x - y^2 + 1}{2(x^2 + x + 1)} + \frac{x + y^2 - 1}{2(x^2 - x + 1)} \right ] \, dx \, dy.$$ Each of the $x$-integrals is elementary, being equal to either a log or the inverse tangent. The result is $$I = \frac{\pi}{6 \sqrt{3}} \int_0^1 \frac{2y^2 - 1}{1 - y^2 + y^4} \, dy - \int_0^1 \frac{\ln y}{1 - y^2 + y^4} \, dy = \frac{\pi}{6\sqrt{3}} I_1 - I_2.$$ The first of the integrals is elementary. Here \begin{align} I_1 &= \int_0^1 \frac{2y^2 - 1}{1 - y^2 + y^4} \, dy\\ &= \int_0^1 \left [\frac{y - \sqrt{3}}{2 \sqrt{3} (-y^2 + y \sqrt{3} - 1)} + \frac{y + \sqrt{3}}{2 \sqrt{3} (y^2 + y \sqrt{3} + 1)} \right ] \, dy\\ &= -\frac{3}{4 \sqrt{3}} \int_0^1 \frac{2y + \sqrt{3}}{y^2 + y \sqrt{3} + 1} \, dy + \frac{1}{4} \int_0^1 \frac{dy}{(y + \sqrt{3}/2)^2 + 1/4}\\ & \qquad + \frac{3}{4 \sqrt{3}} \int_0^1 \frac{-2y + \sqrt{3}}{-y^2 + y \sqrt{3} - 1} \, dy + \frac{1}{4} \int_0^1 \frac{dy}{(y - \sqrt{3}/2)^2 + 1/4}\\ &=\frac{3}{4 \sqrt{3}} \ln \left (\frac{2 - \sqrt{3}}{2 + \sqrt{3}} \right ) + \frac{1}{2} \tan^{-1} (2 + \sqrt{3}) + \frac{1}{2} \tan^{-1} (2 - \sqrt{3})\\ &= \frac{\pi}{4} + \frac{3}{4 \sqrt{3}} \ln \left (\frac{2 - \sqrt{3}}{2 + \sqrt{3}} \right ). \end{align} The second of the integrals is a completely different beast. 
It will be handled through the dilogarithm machinery. In particular, we will make use of the result $$\int_0^1 \frac{\ln x}{x - r} \, dx = \operatorname{Li}_2 \left (\frac{1}{r} \right ), \quad r \neq 0, \tag1$$ where $\operatorname{Li}_2 (x)$ is the dilogarithm. This result can be readily proved using integration by parts and noting the integral definition for $\operatorname{Li}_2 (x)$. Denoting the roots of the equation $y^4 - y^2 + 1 = 0$ as $r_1 = (\sqrt{3} + i)/2$, $r_2 = -(\sqrt{3} + i)/2$, $r_3 = (\sqrt{3} - i)/2$, and $r_4 = -(\sqrt{3} - i)/2$, on factoring the denominator appearing in the integrand of $I_2$ into linear factors over the complex domain before finding a partial fraction decomposition, one finds \begin{align} I_2 &= -\frac{\sqrt{3} + 3i}{12} \int_0^1 \frac{dx}{x - r_1} + \frac{\sqrt{3} + 3i}{12} \int_0^1 \frac{dx}{x - r_2}\\ & \qquad -\frac{\sqrt{3} - 3i}{12} \int_0^1 \frac{dx}{x - r_3} + \frac{\sqrt{3} - 3i}{12} \int_0^1 \frac{dx}{x - r_4}, \end{align} or in terms of the dilogarithm, from (1) \begin{align} I_2 &= \frac{\sqrt{3} + 3i}{12} \left [\operatorname{Li}_2 \left (\frac{-\sqrt{3} + i}{2} \right ) - \operatorname{Li}_2 \left (\frac{\sqrt{3} - i}{2} \right ) \right ]\\ & \qquad + \frac{\sqrt{3} - 3i}{12} \left [\operatorname{Li}_2 \left (\frac{-\sqrt{3} - i}{2} \right ) - \operatorname{Li}_2 \left (\frac{\sqrt{3} + i}{2} \right ) \right ]\\ &= \frac{\sqrt{3} + 3i}{12} \left [\operatorname{Li}_2 \left (e^{5 \pi i/6} \right ) - \operatorname{Li}_2 \left (e^{-\pi i/6} \right ) \right ] + \frac{\sqrt{3} - 3i}{12} \left [\operatorname{Li}_2 \left (e^{-5 \pi i/6} \right ) - \operatorname{Li}_2 \left (e^{\pi i/6} \right ) \right ]\\ &= \frac{\sqrt{3} + 3i}{12} S_1 + \frac{\sqrt{3} - 3i}{12} S_2. 
\end{align} To find values for $S_1$ and $S_2$ the definition for the dilogarithm will be used, namely $$\operatorname{Li}_2 (z) = \sum_{n = 1}^\infty \frac{z^n}{n^2}, \qquad |z| < 1.$$ For $S_1$, setting $z = e^{5\pi i/6}$ and $z = e^{-i\pi/6}$ in the above sum leads to \begin{align} S_1 &= \sum_{n = 1}^\infty \frac{1}{n^2} \left [\cos \left (\frac{5 n\pi}{6} \right ) - \cos \left (\frac{n\pi}{6} \right ) \right ] + i \sum_{n = 1}^\infty \frac{1}{n^2} \left [\sin \left (\frac{5 n\pi}{6} \right ) + \sin \left (\frac{n\pi}{6} \right ) \right ]\\ &= S_{1,1} + i S_{1,2}. \end{align} To find the first of these sums, as the series converges absolutely we can rearrange terms as follows: \begin{align} S_{1,1} &= - \sqrt{3} \sum_{\substack{n = 1\\n \in 1,13,\ldots}}^\infty \frac{1}{n^2} + \sqrt{3} \sum_{\substack{n = 1\\n \in 5,17,\ldots}}^\infty \frac{1}{n^2}\\ & \qquad + \sqrt{3} \sum_{\substack{n = 1\\n \in 7,19,\ldots}}^\infty \frac{1}{n^2} - \sqrt{3} \sum_{\substack{n = 1\\n \in 11,23,\ldots}}^\infty \frac{1}{n^2}\\ &= -\frac{\sqrt{3}}{144} \sum_{n = 0}^\infty \frac{1}{(n + 1/12)^2} + \frac{\sqrt{3}}{144} \sum_{n = 0}^\infty \frac{1}{(n + 5/12)^2}\\ & \qquad + \frac{\sqrt{3}}{144} \sum_{n = 0}^\infty \frac{1}{(n + 7/12)^2} - \frac{\sqrt{3}}{144} \sum_{n = 0}^\infty \frac{1}{(n + 11/12)^2}\\ &= -\frac{\sqrt{3}}{144} \psi^{(1)} \left (\frac{1}{12} \right ) + \frac{\sqrt{3}}{144} \psi^{(1)} \left (\frac{5}{12} \right ) + \frac{\sqrt{3}}{144} \psi^{(1)} \left (\frac{7}{12} \right ) - \frac{\sqrt{3}}{144} \psi^{(1)} \left (\frac{11}{12} \right )\\ &= -\frac{\sqrt{3}}{144} \left [\psi^{(1)} \left (1 - \frac{1}{12} \right ) + \psi^{(1)} \left (\frac{1}{12} \right ) \right ]\\ & \qquad + \frac{\sqrt{3}}{144} \left [\psi^{(1)} \left (1 - \frac{5}{12} \right ) + \psi^{(1)} \left (\frac{5}{12} \right ) \right ]\\ &= -\frac{\sqrt{3}}{144} \frac{\pi^2}{\sin^2 (\pi/12)} + \frac{\sqrt{3}}{144} \frac{\pi^2}{\sin^2 (5\pi/12)}\\ &= - \frac{\pi^2}{6}. 
\end{align} Here $\psi^{(1)} (z)$ is the trigamma function with its reflexion formula having been used. In a similar fashion the second sum $S_{1,2}$ can be found. Here \begin{align} S_{1,2} &= \sum_{\substack{n = 1\\n \in 1,13,\ldots}}^\infty \frac{1}{n^2} + 2 \sum_{\substack{n = 1\\n \in 3,15,\ldots}}^\infty \frac{1}{n^2}\\ & \qquad + \sum_{\substack{n = 1\\n \in 5,17,\ldots}}^\infty \frac{1}{n^2} - \sum_{\substack{n = 1\\n \in 7,19,\ldots}}^\infty \frac{1}{n^2}\\ & \qquad -2 \sum_{\substack{n = 1\\n \in 9,21,\ldots}}^\infty \frac{1}{n^2} - \sum_{\substack{n = 1\\n \in 11,23,\ldots}}^\infty \frac{1}{n^2}\\ &= \frac{1}{144} \sum_{n = 0}^\infty \frac{1}{(n + 1/12)^2} + \frac{1}{72} \sum_{n = 0}^\infty \frac{1}{(n + 3/12)^2}\\ & \qquad + \frac{1}{144} \sum_{n = 0}^\infty \frac{1}{(n + 5/12)^2} - \frac{1}{144} \sum_{n = 0}^\infty \frac{1}{(n + 7/12)^2}\\ & \qquad - \frac{1}{72} \sum_{n = 0}^\infty \frac{1}{(n + 9/12)^2} - \frac{1}{144} \sum_{n = 0}^\infty \frac{1}{(n + 11/12)^2}\\ &= \frac{1}{144} \left [\psi^{(1)} \left (\frac{1}{12} \right ) - \psi^{(1)} \left (\frac{7}{12} \right ) \right ] + \frac{1}{72} \left [\psi^{(1)} \left (\frac{1}{4} \right ) - \psi^{(1)} \left (\frac{3}{4} \right ) \right ]\\ & \qquad + \frac{1}{144} \left [\psi^{(1)} \left (\frac{5}{12} \right ) - \psi^{(1)} \left (\frac{11}{12} \right ) \right ]. 
\tag2 \end{align} Two special values for the trigamma function are: $$\psi^{(1)} \left (\frac{1}{4} \right ) = \pi^2 + 8 \mathbf{G} \quad \text{and} \quad \psi^{(1)} \left (\frac{3}{4} \right ) = \pi^2 - 8 \mathbf{G}.$$ The two other trigamma terms appearing in (2) can be dealt with by making use of the multiplication theorem for the trigamma function of $$9 \psi^{(1)} (3z) = \psi^{(1)} (z) + \psi^{(1)} \left (z + \frac{1}{3} \right ) + \psi^{(1)} \left (z + \frac{2}{3} \right ).$$ Setting $z = 1/4$ leads to \begin{align} 9 \psi^{(1)} \left (\frac{3}{4} \right ) &= \psi^{(1)} \left (\frac{1}{4} \right ) + \psi^{(1)} \left (\frac{7}{12} \right ) + \psi^{(1)} \left (\frac{11}{12} \right )\\ &= \psi^{(1)} \left (\frac{1}{4} \right ) + \psi^{(1)} \left (\frac{7}{12} \right ) + \psi^{(1)} \left (1 - \frac{1}{12} \right ), \end{align} or $$\psi^{(1)} \left (\frac{1}{12} \right ) - \psi^{(1)} \left (\frac{7}{12} \right ) = 4 \sqrt{3} \pi^2 + 80 \mathbf{G},$$ where we have made use of the two special values for the trigamma function together with the reflexion formula. Also, again starting with $z = 1/4$ in the multiplication formula we have \begin{align} 9 \psi^{(1)} \left (\frac{3}{4} \right ) &= \psi^{(1)} \left (\frac{1}{4} \right ) + \psi^{(1)} \left (\frac{7}{12} \right ) + \psi^{(1)} \left (\frac{11}{12} \right )\\ &= \psi^{(1)} \left (\frac{1}{4} \right ) + \psi^{(1)} \left (1 - \frac{5}{12} \right ) + \psi^{(1)} \left (\frac{11}{12} \right ), \end{align} or $$\psi^{(1)} \left (\frac{5}{12} \right ) - \psi^{(1)} \left (\frac{11}{12} \right ) = -4 \sqrt{3} \pi^2 + 80 \mathbf{G},$$ where again we have made use of the two special values for the trigamma function together with the reflexion formula. 
Thus $$S_{1,2} = \frac{1}{144} \left (4 \sqrt{3} \pi^2 + 80 \mathbf{G} \right ) + \frac{1}{72} \left (\pi^2 + 8 \mathbf{G} -\pi^2 + 8 \mathbf{G}\right ) + \frac{1}{144} \left (-4 \sqrt{3} \pi^2 + 80 \mathbf{G} \right ) = \frac{4}{3} \mathbf{G}.$$ Thus $$S_1 = -\frac{\pi^2}{6} + \frac{4 i \mathbf{G}}{3}.$$ Needless to say, a similar method can be used to find $S_2$. The result is: $$S_2 = -\frac{\pi^2}{6} - \frac{4 i \mathbf{G}}{3},$$ giving $$I_2 = \frac{\sqrt{3} + 3i}{12} \left (-\frac{\pi^2}{6} + \frac{4i \mathbf{G}}{3} \right ) + \frac{\sqrt{3} - 3i}{12} \left (-\frac{\pi^2}{6} - \frac{4i \mathbf{G}}{3} \right ) = -\frac{\pi^2}{12 \sqrt{3}} - \frac{2}{3} \mathbf{G}.$$ So on returning to our initial integral one has: $$I = \frac{\pi}{6 \sqrt{3}} \left [\frac{\pi}{4} + \frac{3}{4 \sqrt{3}} \ln \left (\frac{2 - \sqrt{3}}{2 + \sqrt{3}} \right ) \right ] + \frac{\pi^2}{12 \sqrt{3}} + \frac{2}{3} \mathbf{G},$$ or $$\int_0^\infty \frac{x^2 \tan^{-1} x}{1 + x^2 + x^4} \, dx = \frac{\pi^2}{8 \sqrt{3}} + \frac{\pi}{24} \ln \left (\frac{2 - \sqrt{3}}{2 + \sqrt{3}} \right ) + \frac{2}{3} \mathbf{G}.$$ This result agrees numerically with the previous result given by @Sangchul Lee in the comment section.
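As an independent numerical sanity check of the final closed form (a sketch assuming the `mpmath` library is available; `catalan` is Catalan's constant $\mathbf{G}$):

```python
from mpmath import mp, quad, atan, pi, sqrt, log, catalan, inf

mp.dps = 30  # work with 30 significant digits

# direct quadrature of the original integral
I = quad(lambda x: x**2 * atan(x) / (1 + x**2 + x**4), [0, inf])

# the closed form derived above
closed = pi**2 / (8 * sqrt(3)) \
    + (pi / 24) * log((2 - sqrt(3)) / (2 + sqrt(3))) \
    + 2 * catalan / 3

print(I - closed)  # negligibly small
```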
Knowing the distributions of $X$ and $Y$, find the distribution of $Z=XY^2$
Let $W=Y^2$. For $t\in(0,1)$ we have $$ \mathbb P(W\leqslant t) = \mathbb P(Y\leqslant \sqrt t) = \int_0^{\sqrt t}2y\ \mathsf dy = t, $$ so $W$ has density $f_W(t) = \mathsf 1_{(0,1)}(t)$. Now for the product $Z=XW$, we have \begin{align} \mathbb P(Z\leqslant z) &= \mathbb P(XW\leqslant z)\\ &= \mathbb P(W\leqslant z/X)\\ &= \int_0^\infty f_X(x)\int_0^{z/x}f_W(w)\ \mathsf dw\ \mathsf dx. \end{align} Differentiating with respect to $z$, the density of $Z$ is given by \begin{align} f_Z(z) &= \int_0^\infty f_X(x)f_W(z/x)\frac1x\ \mathsf dx\\ &= \int_z^1 6x(1-x)\frac1x\ \mathsf dx\\ &= 3(1-z)^2\cdot\mathsf 1_{(0,1)}(z). \end{align}
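A quick Monte Carlo check of this result (a sketch; it assumes, consistent with the computation above, that $X$ has density $6x(1-x)$ and $Y$ has density $2y$ on $(0,1)$, so that $F_Z(z)=1-(1-z)^3$):

```python
import random

random.seed(0)
N = 200_000
z0 = 0.5
count = 0
for _ in range(N):
    x = random.betavariate(2, 2)  # density 6x(1-x) on (0,1)
    y = random.random() ** 0.5    # density 2y on (0,1)
    if x * y * y <= z0:
        count += 1

empirical = count / N
exact = 1 - (1 - z0) ** 3  # integral of 3(1-z)^2 from 0 to z0
print(empirical, exact)    # both close to 0.875
```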
Stochastic processes for beginners (good links and books)
Standard texts on Markov chains are: Norris, Markov Chains; D. Stroock, An Introduction to Markov Processes. Markov processes in continuous time are a more advanced subject. Literature suggestions: the last chapter in Cinlar, Probability and Stochastics; Blumenthal/Getoor, Markov Processes and Potential Theory.
If $a,b,c$ are in AP, $b,c,d$ are in GP and $c,d,e$ are in HP, then $a,c,e$ are in ($a\neq b\neq c\neq d\neq e>0\in\mathbb R$)
Just write everything in terms of $a$ and $b$: $$c = 2b -a$$ $$d = \frac{(2b-a)^2}{b}$$ $$\frac{1}{e} = \frac{2}{d} - \frac{1}{c}$$ Therefore, substituting $c$ and $d$ from above: $$e = \frac{(2b-a)^2}{a}$$ Now it is easy to see that the sequence $a, c, e$ is a G.P: $$(a,c,e) = \Big(a, 2b-a, \frac{(2b-a)^2}{a}\Big)$$ therefore $$\sqrt{ae} = c.$$
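The algebra above can be verified symbolically (a sketch using SymPy):

```python
from sympy import symbols, simplify

a, b = symbols('a b', positive=True)

c = 2*b - a          # a, b, c in AP
d = c**2 / b         # b, c, d in GP
e = 1 / (2/d - 1/c)  # c, d, e in HP

# e simplifies to (2b - a)^2 / a, and a, c, e form a GP: c^2 = a*e
assert simplify(e - (2*b - a)**2 / a) == 0
assert simplify(c**2 - a*e) == 0
```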
Proof Explanation: If a function is increasing and bounded above, then the left sided limit exists at every point.
The proof proceeds in two steps: (1) $g-f(x_n)<\epsilon$ for arbitrarily small $\epsilon>0$; (2) $0\le g-f(x_n)$. Combining these two steps yields $0 \le g-f(x_n)<\epsilon$, and letting $\epsilon\to 0$ yields the result. The $r_n$ is needed to establish the second step. In particular, $x_n<a$ implies that there exists an integer, denoted by $r_n$, such that still $x_n+\frac1{r_n}<a$. By strict monotonicity of $f$, this in turn implies that $f(x_n)<f(a-1/r_n)$. Now, by the very first part of the proof, $f(a-1/r_n)\le g$, which concludes the second step.
The limits of a quadratic function
From the graph, you can see that as $x$ goes toward negative infinity, $f(x)$ heads toward positive infinity. The limit is therefore unbounded: it does not exist as a finite number.
Measure Theory & Integration
This works for any nonnegative function $f$, since on a countable space any such $f$ is a countable sum of indicators: $$ f=\sum_{i=1}^\infty f(\omega_i)\,1_{\{\omega_i\}}. $$ As $f$ is nonnegative, $f_n\nearrow f$, where each $f_n=\sum_{i=1}^nf(\omega_i)\,1_{\{\omega_i\}}$ is a simple function. Then, by monotone convergence, $$ \int f\,d\mu=\int\lim f_n\,d\mu=\lim_n\int f_n\,d\mu=\lim_n\sum_{i=1}^nf(\omega_i)\,\mu(\{\omega_i\})=\sum_{i=1}^\infty f(\omega_i)\,\mu(\{\omega_i\}) $$
Showing a certain pre-Hilbert space is complete
The only nontrivial point is proving completeness. Note that the absolute continuity assumption is there to ensure that the derivatives exist a.e. Suppose $\{f_n\}$ is a Cauchy sequence in $\mathcal H$. Then $\{f'_n\}$ is a Cauchy sequence in $L^2[0,1]$. This implies it has a limit $g$, because $L^2$ is complete. We will show that the natural guess, $h(x)=\int_0^x g(t)\ dt$, is the limit of $\{f_n\}$ in $\mathcal H$. First, note this function is in $\mathcal H$: its derivative is in $L^2[0,1]$, and $h(0)=0$. It is the limit of $\{f_n\}$ in $\mathcal H$ essentially by definition: $$\| f_n - h\|_{\mathcal H} = \|f_n' - g\|_{L^2[0,1]},$$ and the latter quantity tends to $0$ by the construction of $g$.
Find the volume of the solid interior to both $x^2 + y^2 + z^2 = 4$ and $(x-1)^2 + y^2 = 1$?
$(x-1)^2+ y^2 = 1$ translates to $r^2 - 2r\cos \theta + 1 = 1\Rightarrow r=2\cos\theta$. To integrate over this area we must cover the whole circle. Write down the values of $r$ for $\theta = 0, \frac{\pi}{2}, \pi, \frac{3\pi}{2}$ and $2\pi$. You'll see that over $[0,2\pi]$ the circle is traversed twice. Therefore, we only need to integrate $\theta$ from $0$ to $\pi$.
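A crude midpoint-rule evaluation of the resulting volume integral (a sketch; the closed form $16\pi/3 - 64/9$ is the known value for this Viviani-type solid, not stated in the answer above; the equivalent range $\theta\in[-\pi/2,\pi/2]$, where $r=2\cos\theta\ge 0$, is used):

```python
import math

# V = double integral of 2*sqrt(4 - r^2) * r over 0 <= r <= 2cos(th), |th| <= pi/2
n_th, n_r = 800, 400
V = 0.0
for i in range(n_th):
    th = -math.pi / 2 + (i + 0.5) * math.pi / n_th
    R = 2 * math.cos(th)
    dr = R / n_r
    for j in range(n_r):
        r = (j + 0.5) * dr
        V += 2 * math.sqrt(max(4 - r * r, 0.0)) * r * dr * (math.pi / n_th)

exact = 16 * math.pi / 3 - 64 / 9
print(V, exact)  # both about 9.644
```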
Show that $\frac{(m+n-1)!}{m!n!}$ is an integer where m and n are positive integers and gcd(m,n)=1
Hint: This is equivalent to showing that $m+n$ divides $\binom{m+n}{m}$, or equivalently that $a$ divides $\binom{a}{b}$ for any $a>b$ with $\gcd(a,b)=1$. See if you can partition the $b$-element subsets of a set of $a$ objects into blocks of size $a$ to prove this combinatorially. If you know it, you might want to recall the "necklace proof" of Fermat's Little Theorem - this has a similar flavor.
$(x-1)(y-2)=5$ and $(x-1)^2+(y+2)^2=r^2$ intersect at four points $A,B,C,D$. Centroid of $\Delta ABC$ lies on $y=3x-4$, then the locus of $D$
Point $D$ is the intersection of the line $y=3x$ (parallel to $y=3x-4$ and through the origin) with the curves $(x-1)(y-2)=5$ and $(x-1)^2+(y+2)^2=r^2$: $\begin{cases} y=3x \\(x-1)(y-2)=5 \\ (x-1)^2+(y+2)^2=r^2 \end{cases}$ Substituting $y=3x$ gives $(x-1)(3x-2)=5$, i.e. $3x^{2}-5x-3=0$, so $\begin{array}{} x_{D}=\frac{5+\sqrt{61}}{6} & y_{D}=\frac{5+\sqrt{61}}{2} \end{array}$ Solving $(x_{D}-1)^2+(y_{D}+2)^2=r^2$ numerically, $r^2=\frac{335+\sqrt{97600}}{9}\approx 71.9344$. $\left\{ \begin{array}{} (x-1)(y-2)=5 & ⇒ y=\frac{2x+3}{x-1} \\ (x-1)^2+(y+2)^2=r^2 \end{array} \right\}$ $sol=\left( \begin{array}{left } x_{A}=-6.788336312 & y_{A}=1.358014369 \\ x_{B}=.599099974 & y_{B}=-10.471937333 \\ x_{C}=8.054194725 & y_{C}=2.708798126 \\ x_{D}=2.135041613 & y_{D}=6.405124838 \end{array} \right)$ Checking that the centroid of triangle $ABC$ lies on the line $y = 3x-4$: $\begin{array}{} \text{Centroid (G)} & x_{G}=\frac{x_{A}+x_{B}+x_{C}}{3} & y_{G}=\frac{y_{A}+y_{B}+y_{C}}{3} \end{array}$ $G=(0.621652796,-2.135041613)$, and indeed $3·x_{G}-4=y_{G}$.
Techniques for showing that the common intersection of a family of sets is non-empty
If $\mathscr F$ has $m$ members and $m(n-r) \ge n$, then it is possible for the intersection to be empty, since the universe can be written as the union of $m$ sets of size $n-r$, and you can take $\mathscr F$ to be the complements of these.
In an integral, why does logarithmic function of an exponential completely drop out?
$\frac 1 {\sqrt {2\pi}}\int_{-\infty}^{\infty} t^{2}e^{-t^{2} /2 } dt=\frac 1 {\sqrt {2\pi}}\int_{-\infty}^{\infty} e^{-t^{2} /2 } dt$, since the RHS is $1$ and the LHS is the variance of the standard normal distribution, which is also $1$. Just make the substitution $t=\frac {x-\mu} {\sigma}$, i.e. $x=\mu +\sigma t$, and you will get your identity.
Prove $[0,1]$ isn't a differential variety or manifold.
The differentiable structure is not necessary here. The point is that the unit interval $[0,1]$ with its usual topology is not locally homeomorphic to an open interval. To prove this, you must prove that no chart exists at the boundary points, but you had the right idea. A connected open neighborhood in $[0,1]$ of either boundary point is homeomorphic to a half-open interval. Since you can always find a connected subneighborhood, we can restrict to this case. Hence if we prove a half-open interval is not homeomorphic to an open interval, we will be done. This can be done with a connectedness argument: removing any point from an open interval disconnects it, but removing the closed endpoint from a half-open interval leaves it connected. Hence these spaces cannot be homeomorphic. It's worth remarking that if you really want to prove the unit interval with its usual topology is not a topological manifold, you must check this against arbitrary open subsets of Euclidean space and not just open intervals. It's a significant theorem that Euclidean spaces of different dimension are not homeomorphic. You can still use connectedness arguments to distinguish half-open intervals from open subsets of $\mathbb R^n$ for $n\geq 2$, however: removing two points disconnects the former but not the latter.
Number of faces of dimension p of simplex
An $n$-dimensional simplex has $n+1$ vertices, and any $p+1$ of them span one $p$-dimensional face (it takes two vertices to make an edge, three to make a triangle, four for a tetrahedron, etc.). In fact, there is precisely one $p$-face for every choice of $p+1$ vertices, because simplices, viewed as graphs, are complete. Hence the number of $p$-dimensional faces is $\binom{n+1}{p+1}$.
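Since each $p$-face corresponds to a choice of $p+1$ of the $n+1$ vertices, the count is $\binom{n+1}{p+1}$; a brute-force check by enumeration (a sketch):

```python
from itertools import combinations
from math import comb

n = 4  # a 4-simplex has 5 vertices
vertices = range(n + 1)

# number of p-faces = number of (p+1)-element vertex subsets
counts = [len(list(combinations(vertices, p + 1))) for p in range(n + 1)]
assert counts == [comb(n + 1, p + 1) for p in range(n + 1)]
print(counts)  # [5, 10, 10, 5, 1]
```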
Solving Lotka-Volterra model using Euler's method
We are given: Let $p$ be the prey density and $q$ is the predator density, thus: $$\frac{dp}{dt} = ap\left(1-\frac{p}{K}\right)-\frac{bpq}{1+bp}$$ $$\frac{dq}{dt}=mq\left(1-\frac{q}{kp}\right).$$ We are also given $a=0.2, \ m=0.1, \ K=500,\ k = 0.2, b = 0.1, p(0) = 10, q(0) = 5.$ We will choose $\Delta t = h = 0.1$. You provided the important piece of information: $$p_i = p_{i-1}+\text{p-slope}_{i-1}\Delta t$$ $$q_i = q_{i-1}+\text{q-slope}_{i-1}\Delta t.$$ This leads to the recurrence: $$\begin{align} p_i& = p_{i-1} + \left(0.2~ p_{i-1}\left(1 - \dfrac{p_{i-1}}{500}\right) -\dfrac{0.1~ p_{i-1}~ q_{i-1}}{1 + 0.1~ p_{i-1}} \right)(0.1) \\ q_i& = q_{i-1} + \left(0.1~q_{i-1}\left(1 - \dfrac{q_{i-1}}{0.2~ p_{i-1}}\right) \right)(0.1) \end{align}$$ This gives the iteration: $p_0 = 10, q_0 = 5$ $p_1 = 9.946, q_1 = 4.925$ $p_2 = 9.89538, q_2 = 4.85231$ $p_3 = 9.84803, q_3 = 4.78187$ $\ldots$
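The recurrence above can be run directly; a minimal sketch reproducing the listed iterates:

```python
# forward Euler for the modified Lotka-Volterra system
a, m, K, k, b, h = 0.2, 0.1, 500, 0.2, 0.1, 0.1

p, q = 10.0, 5.0
history = [(p, q)]
for _ in range(3):
    dp = a * p * (1 - p / K) - b * p * q / (1 + b * p)
    dq = m * q * (1 - q / (k * p))
    p, q = p + dp * h, q + dq * h
    history.append((p, q))

for i, (pi, qi) in enumerate(history):
    print(f"p_{i} = {pi:.5f}, q_{i} = {qi:.5f}")
# p_1 = 9.94600, q_1 = 4.92500; p_2 = 9.89538, q_2 = 4.85231; ...
```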
how to find the roots of: $x^{3}+6x^{2}-24x+160$ if one root is $2-2(3)^{1/2}i$
Hint Notice that if a polynomial $P$ with real coefficients has a complex root $\alpha$ with multiplicity $m$ then its conjugate $\overline\alpha$ is also a root of $P$ with the same multiplicity and then $\left(x^2-2\operatorname{Re}(\alpha)x+|\alpha|^2\right)^m$ divides $P$.
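A symbolic check of the hint for this particular cubic (a sketch using SymPy): the conjugate-pair quadratic is $x^2-4x+16$, and dividing it out reveals the remaining real root.

```python
from sympy import symbols, I, sqrt, expand, div

x = symbols('x')
P = x**3 + 6*x**2 - 24*x + 160

alpha = 2 - 2*sqrt(3)*I  # the given root
# quadratic with roots alpha, conj(alpha): x^2 - 2 Re(alpha) x + |alpha|^2
Q = x**2 - 4*x + 16

quotient, remainder = div(P, Q, x)
assert remainder == 0
assert expand(P.subs(x, alpha)) == 0
print(quotient)  # x + 10, so the third root is x = -10
```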
How to calculate the square area in coordinate system given only y coordinates?
Order the y-axis coordinates in increasing order so that $y_1 \leq y_2 \leq y_3 \leq y_4$. Let $a = y_2 - y_1$ and $b = y_3 - y_1$, and notice that in the following figure the inscribed square is the square whose area you are trying to find. By the Pythagorean theorem, the area of that square is $a^2 + b^2$.
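A concrete check with a hypothetical tilted square, vertices $(0,0),(2,1),(1,3),(-1,2)$, side length $\sqrt5$:

```python
ys = sorted([0, 1, 3, 2])  # the four y-coordinates, in increasing order
a = ys[1] - ys[0]          # a = y2 - y1 = 1
b = ys[2] - ys[0]          # b = y3 - y1 = 2
area = a**2 + b**2
print(area)  # 5, matching side^2 = 2^2 + 1^2
```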
Proving well formed propositional statements have well defined truth values
'Well-defined' has nothing to do with injectivity; that is where you are confused. 'Well-defined' in this context simply means that every proposition has exactly one truth-value. To show this, you show: Base: every atomic proposition has exactly one truth-value (refer to the definition of atomic statements and their truth-values here). Step: Any complex statement that is the result of applying some logical operator to a bunch of other statements has exactly one truth-value, assuming (induction hypothesis) those other statements have exactly one truth-value (refer to the truth-functional nature of each defined operator here). That's all!
Limit of a function involving cdf and pdf of a normal variable
You can use an easy estimate, with $$\phi(t) = \frac{1}{\sqrt{2\pi}}e^{-\frac{t^2}{2}} \qquad \Phi(z) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^z e^{-\frac{t^2}{2}} d t$$ When $t \ge 1$ one has $e^{-\frac{t^2}{2}}\le t e^{-\frac{t^2}{2}}$, hence $$ z\ge 1 \Longrightarrow \frac{1}{\sqrt{2\pi}}\int_z^{+\infty} e^{-\frac{t^2}{2}} d t \le \frac{1}{\sqrt{2\pi}}\int_z^{+\infty} t e^{-\frac{t^2}{2}} d t = \frac{1}{\sqrt{2\pi}} e^{-\frac{z^2}{2}} = \phi(z) $$ It follows that $$z\ge 1 \Longrightarrow 0\le (1-\Phi(z))\le \phi(z)$$ Now since $\lim_\limits{z\to+\infty}z^k\phi(z) = 0$ for any $k\in {\mathbb R}$, we have $\lim_\limits{z\to+\infty}z^k (1-\Phi(z)) = 0$ When $z\to - \infty$, we know that $\Phi(z)= (1 - \Phi(-z))$, hence $\lim_\limits{z\to-\infty}|z|^k \Phi(z) = 0$. Now observe that $2\Phi^3 - 3 \Phi^2+ \Phi = \Phi(1 -\Phi)(1 - 2\Phi)$. It follows easily that $|z|^k (2\Phi^3 - 3 \Phi^2+ \Phi)\to 0$ when $z\to \pm\infty$.
mean value of multiple card draw
Let's say you draw $n$ cards and $X_i$ denotes the value of card $i$. The random variables $X_i$ have equal distributions, hence equal means: say $\mu:=\mathbb EX_1$. Then $X:=X_1+\cdots+X_n$ is the total value of the hand, with $$\mathbb EX=\mathbb EX_1+\cdots+\mathbb EX_n=n\mu$$ by linearity of expectation.
Does a set of all real valued functions form group under componentwise multiplication?
Exactly. So this set is not a group. You gave an example of a function which has a problem at one point. I can give you an example where things are much worse: how about the function $f\equiv 0$? So it is not a group. Now, you can look at the set of real valued functions $f$ such that $f(x)\ne 0$ for each $x\in\mathbb{R}$. Now this is a group with respect to multiplication of functions as you can easily check.
Construction of a basis
You don't need Gram-Schmidt. Start with a generating set or keep adding vectors to a set and use Gaussian elimination to remove linear dependences.
How much does Proof writing improve over the years?
If it's alright, I'll just offer my own personal experience. Like you, I also studied real analysis out of Rudin as a high school junior. I have since learned that Rudin's text -- while very efficient -- is notoriously concise, and that there are far more student-friendly texts out there. Like you, I was often very frustrated with my inability to solve problems, create proofs, and understand the proofs given in the text. Though I ended up receiving A's in nearly all of my math classes, I sweated and toiled through each one. Very little came easily. And like you, I clearly remember going to a second- or third-year graduate student's office hours, and sitting there dumbfounded as he easily breezed through problems that had taken me hours. I simplistically assumed that he was brilliant, that I was not, that things were simply going to be this way, and that I would never attain that level of fluency. It's now 8 years later, and I've attained that fluency. For me, there were two points at which my ability to prove things and "see through a problem" significantly improved. The first was when I completed the standard undergraduate sequence of courses (real/complex analysis, algebra, topology, etc.), about four years later. The second was roughly three years after that, when I had finished the graduate sequence of courses (and started reading papers). I don't know why I had those mental growth spurts. Some part of me thinks that it has something to do with having a broader perspective, and seeing the bigger picture. But just as much of me thinks that it's about the sheer number of hours I'd committed to math, in various forms. On a somewhat more personal note, I remember it used to bother me when professors and older students would insist that "seeing the big picture" would somehow, magically make problems clearer -- especially when my own on-the-ground experience felt counter to that. 
At the time, I felt a sharp division between technical mastery and conceptual understanding. And indeed, I do think that there's a difference: technical ability and abstract conceptual ability can be different things. But lately (and only lately), I've found the two to be merging for me. On the one hand, I'm seeing just how big the big picture really is, and have been using it to solve problems quicker. At the same time, a friend's exam-time recommendation that I "focus on the methods" has led me to understand certain concepts better. In short, many things which seemed magic and completely, utterly out of reach eight years, seven years, ..., and even as recently as three years ago, no longer seem to be so now. Hope this helps.
Generation processes and P-density
Denote $\mathcal X:=\left\{x_1,\dots,x_n\right\}$. For each $i\in \{1,\dots,n\}$, the set $\{X=x_i\}$ belongs to $\mathcal F$ hence there exists $A_i\in \sigma\{Y \circ \tau^n, n\in\mathbb N\}$ such that $\mathbb P\left(\{X=x_i\}\Delta A_i\right)=0$. Now define $Z=\sum_{i=1}^nx_i\mathbf 1_{A_i}$. Then $Z=X$ almost surely and $Z$ is a function of $Y_0^\infty$.
Unsure how to prove the amount of real roots of the equation $(1+x^2)e^x = k$
It has at most one real root. Let $f(x)=e^x(1+x^2)$. Notice that $$\frac{\mathrm{d}}{\mathrm{d}x}f(x)=e^x(1+2x+x^2)=e^x(1+x)^2\geq 0 ~ \forall x\in \Bbb{R},$$ with equality only at $x=-1$. So $f$ is strictly increasing and can attain the value $k$ at most once. Moreover, since $f$ is continuous with $f(x)\to 0$ as $x\to-\infty$ and $f(x)\to\infty$ as $x\to\infty$, the equation has exactly one real root when $k>0$ and none when $k\le 0$.
What are the properties around Pi products
The product you have is $\prod_{i=0}^{n-1} (n-i) = \prod_{i=1}^{n} i = n!$ ($n$ factorial). Factorials are well known, and you can compute them once (or even look them up).
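For completeness, a two-line check that the product equals $n!$:

```python
import math

n = 7
prod = 1
for i in range(n):  # factors n, n-1, ..., 1
    prod *= n - i
assert prod == math.factorial(n)
print(prod)  # 5040
```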
Chi square independence test
The test statistic would be $$ \sum \frac{(\text{observed}-\text{expected})^2}{\text{expected}}. $$ It should have a chi-square distribution with $(8-1)(3-1)=14$ degrees of freedom if the hypothesis of independence is true. If it has an improbably large value, you reject the null hypothesis.
Span of Density operators (Positive Semi-definite matrices of Trace one)
Given any $T\in L(\mathbb C^n)$, you can write $$ T=\frac{T+T^*}2+i\,\frac {T-T^*}{2i} $$ so $T$ is a linear combination of selfadjoints. And for each selfadjoint you have the Spectral Theorem saying that they are a linear combination of rank-one projections (which are positive semidefinite of trace one). The result is even true when $\mathcal X$ is an infinite-dimensional Hilbert space, although it is not trivial in that case.
Probability Question Regarding Selection of Committees...
Your method for part 2 undercounts. A way to count correctly is to add the counts of committees with $2M,1F$ and $1M,2F$, viz. $\binom42\binom51 + \binom41\binom52$.
Evaluate $ \lim\limits_{x \to 0}\frac{\int_0^{x^2}f(t){\rm d}t}{x^2\int_0^x f(t){\rm d}t}.$
Continued from where you got stuck: $$ \cdots = 4\lim_{x\to 0}\frac {f'(x^2)} {3\dfrac {f(x)}x +f'(x) }= \frac {4f'(0)}{4f'(0)} = 1, $$ where $$ \lim_{x\to 0} \frac {f(x)}x = \lim_{x \to 0} \frac {f(x) -f(0)}{x - 0}, $$ and $f'$ is continuous at $0$.
Find $r$ where $\dfrac{(7n)!}{7^n n!} \equiv r \pmod{7}$, $r \in [[0, 6]]$
Hint: $$\frac{(7n)!}{7^n\cdot n!}\equiv \frac{(6!)^n\cdot (7\times 14\times\cdots \times 7n)}{7^n\cdot n!}\equiv\frac{7^n\cdot n!\cdot (6!)^n}{7^n\cdot n!}\equiv (6!)^n\pmod 7$$ Now, use Wilson's Theorem. This can further be generalized for all primes $p$. We have, $$\frac{(pn)!}{p^n\cdot n!}\equiv (-1)^n\pmod p$$
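A quick check of the congruence for small $n$ (by Wilson's theorem, $6!\equiv -1\pmod 7$, so the answer alternates between $6$ and $1$):

```python
from math import factorial

for n in range(1, 8):
    r = (factorial(7 * n) // (7**n * factorial(n))) % 7
    assert r == (-1)**n % 7  # 6 when n is odd, 1 when n is even
print("r alternates: 6, 1, 6, 1, ...")
```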
Metalanguage of mathematics
Let's clear up a misconception: mathematics as a whole doesn't have one metalanguage. One theory $T$ in a language $L$ has a metatheory $T^\prime$ in metalanguage $L^\prime$. Tarski showed that languages are free of this kind of paradox iff they cannot express a truth predicate for their own statements, so we take $L^\prime\ne L$. But presumably, you want both theories to be part of "mathematics", as well as an infinite hierarchy of metatheories above $T^\prime$ too. Meanwhile, what we can write in practice can be formal, semiformal or informal, which allows us to think of English with extra symbols as the language of every theory we study. This makes sense as long as you accept each such extension gives a "different" language. For example, English with $L^\prime$ add-ons isn't the same as English with $L$ add-ons. In Objective Knowledge, Popper discussed the use of one natural language to talk about statements in another, which is another way you can get your head around the principles of metalanguages. Popper used this to explain Tarski's technical insights into metalanguage behaviour. Here's an example: "The German sentence Der Mond ist aus grünem Käse gemacht is true iff the moon is made of green cheese."
Computing left derived functors from acyclic complexes (not resolutions!)
Compare with a projective resolution $P_\bullet\to M\to 0$. By projectivity, we obtain (from the identity $M\to M$) a complex morphism $P_\bullet\to A_\bullet$, which induces $F(P_\bullet)\to F(A_\bullet)$. With a bit of diagram chasing you should find that $H_\bullet(F(P_\bullet))$ is the same as $H_\bullet(F(A_\bullet))$. A bit more explicit: We can build a resolution of complexes $$\begin{matrix} &\downarrow && \downarrow&&\downarrow\\ 0\leftarrow &A_2&\leftarrow&P_{2,1}&\leftarrow&P_{2,2}&\leftarrow\\ &\downarrow && \downarrow&&\downarrow\\ 0\leftarrow &A_1&\leftarrow&P_{1,1}&\leftarrow&P_{1,2}&\leftarrow\\ &\downarrow && \downarrow&&\downarrow\\ 0\leftarrow &M&\leftarrow &P_{0,1}&\leftarrow&P_{0,2}&\leftarrow\\ &\downarrow && \downarrow&&\downarrow\\ &0&&0&&0 \end{matrix} $$ i.e. the $P_{i,j}$ are projective and all rows are exact. The downarrows are found recursively using projectivity so that all squares commute: If all down maps are called $f$ and all left maps $g$, then $f\circ g\colon P_{i,j}\to P_{i-1,j-1}$ maps to the image of $g\colon P_{i-1,j}\to P_{i-1,j-1}$ because $g\circ(f\circ g)=f\circ g\circ g=0$, hence $f\circ g$ factors through $P_{i-1,j}$, thus giving the next $f\colon P_{i,j}\to P_{i-1,j}$. We can apply $F$ and take direct sums across diagonals, i.e. let $B_k=\bigoplus_{i+j=k} FP_{i,j}$. Then $d:=(-1)^if+g$ makes this a complex. What interests us here is that we can walk from the lower row to the left column by diagram chasing, thus finding that $H_\bullet(F(P_{0,\bullet}))=H_\bullet(F(A_\bullet))$. Indeed: Start with $x_0\in FP_{0,k}$ with $Fg(x_0)=0$. Then we find $y_1\in FP_{1,k}$ with $Ff(y_1)=x_0$. Since $Ff(Fg(y_1))=Fg(Ff(y_1))=0$, we find $y_2\in FP_{2,k-1}$ with $Ff(y_2)=y_1$, and so on until we end up with a cycle in $A_k$. Make yourself clear that the choices involved don't make a difference in the end (i.e. up to boundaries). Also, the chase can be performed just as well from the left column to the bottom row ...
Finding Domain of a Function with a natural logarithm at the denominator of the fraction
"a logarithm being negative" - watch out: you want the argument of the logarithm to be positive, not the logarithm itself ($\ln x$ will be negative if $0 < x < 1$; that's not a problem). So the denominator can't be $0$, which means $\ln x$ can't be zero, which means $x$ can't be $1$: $$\color{blue}{x \ne 1}$$ Secondly, for $\ln x$ to exist, $x$ has to be strictly positive: $$\color{blue}{x > 0}$$ Combining both conditions yields: $0 < x < 1$ or $x > 1$, or written as a set: $$x \in (0,1) \cup (1,+\infty)$$ As for your system: $$ \begin{cases} & \color{red}{x \ne 0}\\ & x > 0 \end{cases} $$ your error is in red; see above: it is $x=1$ that makes the denominator $0$, not $x=0$.
Using the precise definition of limits to prove a limit
Your $\delta$ doesn't work and it would be strange if it did, because in almost every case $\delta$ must be very close to $0$ when $\varepsilon$ is very close to $0$. But your $\delta$ is always greater than $2a$. Note that$$\frac{3x^2+2ax+a^2}{2x+a}-2a=\frac{(3x+a)(x-a)}{2x+a}$$and that therefore$$\left|\frac{3x^2+2ax+a^2}{2x+a}-2a\right|=\left|\frac{3x+a}{2x+a}\right||x-a|.\tag1$$If $|x-a|<\frac a4$, then $|x|<\frac54a$ and so $|3x+a|<\frac{19}4a$. On the other hand,\begin{align}|2x+a|&=|2(x-a)+3a|\\&\geqslant3a-2|x-a|\\&>\frac52a.\end{align}So,$$\left|\frac{3x+a}{2x+a}\right|<\frac{19}{10}.$$Therefore, it follows from $(1)$ that $\delta=\min\left\{\frac a4,\frac{10}{19}\varepsilon\right\}$ will work.
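A numerical spot-check of this choice of $\delta$ (a sketch, sampling points in the punctured $\delta$-neighborhood for $a=1$):

```python
def f(x, a):
    return (3 * x**2 + 2 * a * x + a**2) / (2 * x + a)

a = 1.0
for eps in (0.1, 0.01, 0.001):
    delta = min(a / 4, 10 * eps / 19)
    for t in range(1, 1000):
        x = a + delta * (t / 1000 - 0.5) * 1.999  # 0 <= |x - a| < delta
        if x != a:
            assert abs(f(x, a) - 2 * a) < eps
print("delta = min(a/4, 10*eps/19) works at all sampled points")
```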
Symmetric power of a manifold
If $M$ is a Riemann surface (a complex curve) then $M^{(n)}$ is a smooth manifold. It's a fairly standard argument. But as Willie Wong mentions, generally $M^{(n)}$ isn't a manifold unless you assume more of $M$. Interestingly enough, $(S^1)^{(3)}$ is a manifold and it's a fun exercise to figure out which one it is. For the Riemann surface case, first consider $\mathbb C^{(n)}$. This is the space of n-tuples of points in $\mathbb C$ but with the ordering forgotten. As a space, it's homeomorphic to the space of monic complex polynomials of degree $n$ -- since monic complex polynomials have $n$ roots up to multiplicity -- the bijection is given in terms of the roots of the polynomials. So you can use a fundamental domain for the Riemann surface (or some other similar argument) to show $M^{(n)}$ is a manifold when $M$ is a Riemann surface. FYI: I didn't just invent the above argument. It's a standard argument used in setting up Heegaard-Floer theory for 3-manifolds.
Suppose 99 passengers are assigned to one of two flights. Show one of the flights has at least 50 passengers assigned to it.
Suppose, to the contrary, that both flights had strictly fewer than $50$ passengers (i.e. $F_1 \leq 49$ and $F_2 \leq 49$). What can you say about $F_1 + F_2$?
Finding generating function in a closed form : $\sum_{n=1}^N(3n^2-10n)\cdot2^n$
Hint. Here is a general route. We assume $x \ne 1$. One may recall the standard geometric evaluation: $$ 1+x+x^2+...+x^n=\frac{1-x^{n+1}}{1-x},\tag1 $$ then by differentiating $(1)$ we have $$ 1+2x+3x^2+...+nx^{n-1}=\frac{1-x^{n+1}}{(1-x)^2}-\frac{(n+1)x^{n}}{1-x}, \tag2 $$ multiplying $(2)$ by $x$ and differentiating once more gives $$ 1+2^2x+3^2x^2+...+n^2x^{n-1}=\frac{d}{dx}\left[x\left(\frac{1-x^{n+1}}{(1-x)^2}-\frac{(n+1)x^{n}}{1-x}\right)\right]. \tag3 $$
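The route via $(2)$ and $(3)$, applied to the sum in question at $x=2$, can be checked symbolically (a sketch using SymPy with $n=5$):

```python
from sympy import symbols, diff, simplify

x = symbols('x')
n = 5

rhs2 = (1 - x**(n + 1)) / (1 - x)**2 - (n + 1) * x**n / (1 - x)  # RHS of (2)
rhs3 = diff(x * rhs2, x)  # multiply (2) by x, differentiate: RHS of (3)
lhs3 = sum(k**2 * x**(k - 1) for k in range(1, n + 1))
assert simplify(rhs3 - lhs3) == 0

# sum (3k^2 - 10k) x^k = 3x * (3) - 10x * (2); evaluate at x = 2
direct = sum((3 * k**2 - 10 * k) * 2**k for k in range(1, n + 1))
closed = (3 * x * rhs3 - 10 * x * rhs2).subs(x, 2)
assert direct == closed == 858
```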
Arranging numbers so that $i$ is not immediately followed by $i+1$
This is sequence A000255 in the On-Line Encyclopedia of Integer Sequences: https://oeis.org/A000255
Topological groups question from Munkres
By continuity of multiplication, there are neighbourhoods $V_1$ and $V_2$ of $e$ such that $V_1 \cdot V_2 \subseteq U$. Try $V = V_1 \cap V_1^{-1} \cap V_2 \cap V_2^{-1}$.
Please help us diagram this trig problem.
Here's a possibility: The "ends" of the lake are points $C$ and $D$; $\angle CBD=130^\circ$, $\angle DBA=120^\circ$, $\angle CAD=65^\circ$, $\angle BAD=20^\circ$.
Expectation of supremum
Elaboration on the comment by Zhen, just consider $x(t) = 1$ a.s. for all $t$ and $T = 0.5$
How do you check if a basis of matrices are orthogonal, in a given inner product space?
There is no better way than computing all $\frac{(n-1)n}2$ inner products, as they are independent of each other. So in principle $O(n^3)$ operations. If $n$ is large enough ($n>100$), a fast matrix product algorithm such as Strassen's can be considered. This lowers the cost to $O(n^{\log_27})$ operations. Gram-Schmidt is a viable alternative, as the complexity is also $O(n^3)$ [$n^3$ additions and multiplications; also $n^2$ divisions]; be sure to use the modified Gram-Schmidt version, for better numerical stability.
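In practice the all-pairs check is just a Gram matrix of the vectorized matrices (a sketch using NumPy, with the elementary matrices $E_{ij}$ as an example basis, which is orthonormal under the Frobenius inner product):

```python
import numpy as np

# example basis of 2x2 matrices: elementary matrices E_ij
basis = [np.eye(2)[:, [i]] @ np.eye(2)[[j], :]
         for i in range(2) for j in range(2)]

# rows of V are the vectorized matrices; G holds all pairwise inner products
V = np.stack([B.ravel() for B in basis])
G = V @ V.T

# orthogonal basis <=> G is diagonal (here even orthonormal: G = I)
print(np.allclose(G, np.eye(4)))  # True
```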
Is the l'Hospital's Rule applicable in this case?
Yes. Because the exponential function is continuous, $\lim_{x\to0}\frac{f(x)}{g(x)}=c$ implies $\lim_{x\to 0}e^{\frac{f(x)}{g(x)}}=e^c$. Whether or not you use l'Hopital to arrive at the intermediate result $\lim_{x\to0}\frac{f(x)}{g(x)}=c$ is of no concern.
$S=\{(x,y) \in \Bbb{R}^2\ |\ x^2-y=.3\}$ is connected/compact?
You are right in seeing that it's not compact, because $S$ is not bounded in $\mathbb{R}^2$. However, it is connected. To see this, define a function $f: \mathbb{R}\rightarrow \mathbb{R}^2$ by $f(x) = (x,x^2-.3)$. Then $f$ is continuous and $f[\Bbb{R}]=S$. Thus, $S$ is the continuous image of a connected set, hence connected.
Difference between a measure and a premeasure
Let $X$ be a non-empty set and $\mathscr A$ an algebra on it. A premeasure on a $\mathscr A$ is a function $\lambda:\mathscr A\to[0,\infty]$ such that $\lambda(\varnothing)=0$; and if $A_1,A_2,\ldots$ is a countable collection of disjoint sets in $\mathscr A$ and if their union is contained in $\mathscr A$, then $$\lambda\left(\bigcup_{n=1}^{\infty} A_n\right)=\sum_{n=1}^{\infty}\lambda(A_n).$$ If $\mathscr B$ is a $\sigma$-algebra on $X$, then a measure on $\mathscr B$ is a function $\mu:\mathscr B\to[0,\infty]$ such that $\mu(\varnothing)=0$; and if $B_1,B_2,\ldots$ is a countable collection of disjoint sets in $\mathscr B$ (and, since $\mathscr B$ is a $\sigma$-algebra, their union is already contained in $\mathscr B$, so this condition need not be prescribed explicitly in this case), then $$\mu\left(\bigcup_{n=1}^{\infty} B_n\right)=\sum_{n=1}^{\infty}\mu(B_n).$$ Basically, yes, the main difference is that of the domains, viz., a premeasure is defined on an algebra and a measure is defined on a $\sigma$-algebra. There is another subtle difference, though: while both concepts are required to satisfy $\sigma$-additivity, in the case of a premeasure this makes sense only for countable collections of disjoint sets of an algebra whose unions, too, are actually in the algebra.
Groupoids more fundamental than categories, really?
The definition of category in the HoTT book is exactly what you are looking for: just read "groupoid" for "1-type". But it can also be said in more traditional language: categories can be identified with internal categories in groupoids that satisfy a saturation condition.
Singular values and positive semi-definiteness
This is a typical application of the nonstrict Schur complement formula. Basically, the following are equivalent (two of the three conditions on the right are trivially true): $$ \begin{pmatrix} I &A\\ A^* &I \end{pmatrix} \succeq 0 \iff \begin{align} I &\succeq 0\\ I - A^* I A &\succeq 0\\ A(I-I) &= 0 \end{align} $$ Hence, from the second condition we infer $I\succeq A^*A$, which is another version of the desired property.
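As a quick sanity check (a sketch I'm adding, not part of the original answer), the scalar case $A=(a)$ can be verified directly: the block matrix $\begin{pmatrix}1&a\\a&1\end{pmatrix}$ has eigenvalues $1\pm a$, so positive semi-definiteness is equivalent to $1-a^2\ge 0$, exactly as the Schur complement condition predicts.

```python
# Scalar sanity check of the Schur complement criterion: for a real a,
# the matrix [[1, a], [a, 1]] has eigenvalues 1 + a and 1 - a, so it is
# positive semi-definite exactly when 1 - a^2 >= 0, i.e. |a| <= 1.
def block_is_psd(a):
    # smallest eigenvalue of [[1, a], [a, 1]] is min(1 + a, 1 - a)
    return min(1 + a, 1 - a) >= 0

def schur_condition(a):
    # the scalar version of I - A* A >= 0
    return 1 - a * a >= 0

# the two criteria agree on a grid of test values
agree = all(block_is_psd(a) == schur_condition(a)
            for a in [x / 10 for x in range(-20, 21)])
```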
Why is this integral calculation incorrect?
Because $\int_{-\infty}^{+\infty}\frac{dx}{x^2+1}$ exists, both of your first two computations give correct answers. But if $\int_{-\infty}^{\infty} f(x)\, dx$ does not exist, then the following may or may not be true: $\int_{-\infty}^{\infty} f(x)\, dx = \lim_{a \to \infty} \int_{-a}^{a} f(x)\, dx.$ To see why, consider for example $$f(x)=x.$$ First use the interval $[-t,2t]$ with $t \to \infty$ as bounds, then use the interval $[-t,t]$ with $t \to \infty$. Both intervals approach $(-\infty,\infty)$, but the first limit is $\infty$ while the second is $0$.
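For concreteness, here is a small check of the $[-t,2t]$ versus $[-t,t]$ example, using the antiderivative $F(t)=t^2/2$ (the function name is mine):

```python
# For f(x) = x the antiderivative is F(x) = x**2 / 2, so the integral over
# [a, b] equals F(b) - F(a).  Symmetric bounds [-t, t] give exactly 0 for
# every t, while [-t, 2t] gives 3*t**2/2, which grows without bound.
def integral_of_x(a, b):
    return (b * b - a * a) / 2

symmetric = [integral_of_x(-t, t) for t in (1, 10, 100)]
asymmetric = [integral_of_x(-t, 2 * t) for t in (1, 10, 100)]
```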
Advanced integration, how to integrate 1/polynomial ? Thanks
The details depend on $a$, $b$, and $c$. Assume $a\ne 0$. If there are two distinct real roots, use partial fractions. If there are two identical real roots, we are basically integrating $\dfrac{1}{u^2}$. If there are no real roots, complete the square. With the right substitution, you basically end up integrating $\dfrac{1}{1+u^2}$, and get an $\arctan$. For polynomials of higher degree, factor as a product of linear terms and/or quadratics with no real roots. (In principle this can be done. In practice, it may be very unpleasant.) Then using partial fractions and substitutions you end up with integrals of $\dfrac{1}{u^{n}}$, and/or $\dfrac{u}{(1+u^2)^{n}}$ and/or $\dfrac{1}{(1+u^2)^{n}}$. All of these are doable. Added: It turns out the OP was interested in the irreducible case. I will write a bit on that, because I want to advocate a procedure slightly different from the standard one. Assume that $a$ is positive. Rewrite $\dfrac{1}{ax^2+bx+c}$ as $\dfrac{4a}{4a^2x^2+4abx+4ac}$, and then, completing the square, as $\dfrac{4a}{(2ax+b)^2+4ac-b^2}$. Note that $4ac-b^2$ is positive. Call it $k^2$, with $k$ positive. Make the change of variable $2ax+b=ku$. Substitute. There is some cancellation, and we end up integrating $\dfrac{2}{k}\dfrac{1}{1+u^2}.$ I would suggest going through this procedure in any individual case. As an example, with $\dfrac{1}{x^2+x+1}$ we write $\dfrac{4}{4x^2+4x+4}$, then $\dfrac{4}{(2x+1)^2+3}$, and make the change of variable $2x+1=\sqrt{3} u$. Another addition: The OP has expressed a wish to see the particular problem $\int\frac{dx}{x^2+10x+61}$. The numbers here are particularly simple, designed for the "standard" style, so we will do it that way. Also, we will use more steps than necessary. First we complete the square. We get $x^2+10x+61=(x+5)^2-25+61=(x+5)^2+36$. Now let $u=x+5$. 
Then $du=dx$ and $$\int\frac{dx}{x^2+10x+61}=\int \frac{dx}{(x+5)^2+36}=\int\frac{du}{u^2+36}.$$ Now maybe think: it would be nice if we had $u^2=36w^2$, because the $36$ could then "come out." So let $u=6w$. Then $du=6\,dw$, and we get $$\int\frac{du}{u^2+36}=\int \frac{6\,dw}{36w^2+36}=\int\frac{1}{6}\frac{dw}{w^2+1}=\frac{1}{6}\arctan(w) +C.$$ Finally, we undo our substitution. We have $w=\frac{u}{6}=\frac{x+5}{6}$.
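If you want to check the result numerically, a midpoint Riemann sum over $[0,1]$ should agree with $\frac16\arctan\frac{x+5}{6}$ evaluated at the endpoints (this snippet is only an illustrative check, not part of the derivation):

```python
import math

# Numerical check: the midpoint Riemann sum of 1/(x^2 + 10x + 61) over
# [0, 1] should match (1/6) * arctan((x + 5)/6) evaluated at the endpoints.
def f(x):
    return 1 / (x * x + 10 * x + 61)

def antiderivative(x):
    return math.atan((x + 5) / 6) / 6

n = 100_000
h = 1 / n
midpoint_sum = sum(f((i + 0.5) * h) * h for i in range(n))
exact = antiderivative(1) - antiderivative(0)
```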
complex automorphisms acting on a projective variety
As Mariano points out, the dimension is preserved. As for (2), the answer is certainly almost never, unless $X$ comes from base change of a variety defined over the real numbers. If you replace "birational" by "isomorphic", then it is precisely when $X$ comes from base change of a variety defined over the real numbers (under the assumption that $X$ is projective).
Weak convergence of orthonormal vectors
By Bessel's inequality, $\sum_{n\in\Bbb N}|\langle e_n,y\rangle|^2\le\|y\|^2<\infty$ for every $y$, so the sequence $\{\langle e_n,y\rangle\}_{n\in\Bbb N}$ is square-summable and thus tends to $0$; that is, $e_n$ converges weakly to $0$.
The partial derivative of a characteristic function (exercise).
I will write the answer for $X$ one-dimensional. Your approach of bringing the derivative inside the integral is right. You need to estimate $ \frac {e^{iux}-1}{u}.$ Writing down this function you find, by the mean value theorem, $$\frac {\cos (ux) -1}{u} + i \frac {\sin (ux)}{u} = -x \sin (\xi_1 x) + ix \cos (\xi_2 x).$$ So in norm this can be estimated by $2|x|$, and you find the uniform bound you needed to apply dominated convergence. Note that the estimate is global (w.r.t. $u$). In general it is sufficient to have a local estimate (e.g. for $u$ small enough).
Notation for equivalence relations
The '$\,\sim\,$' symbol is often used to denote "is related to" by whatever relation you are discussing at the time. The '$\,\equiv\,$' symbol usually denotes "is equivalent to" by whatever equivalence relation you are discussing, and that is most often a standard, well-known equivalence. You could use '$\,\equiv\,$' in this case, but more typically we reserve it for relations that are already known to be equivalences, rather than those we wish to prove are such.
Topology, Proving Interior and Exterior in $Q^n$
Fact of life: every open ball contains rational and irrational points. This implies all of the following, straight from the definitions: $\mathbb{Q}^\circ = \emptyset$. $\mathbb{P}^\circ =\emptyset$. $\overline{\mathbb{Q}} = \mathbb{R}$. $\overline{\mathbb{P}} = \mathbb{R}$. So $\operatorname{Ext}(\mathbb{Q}) = \mathbb{R} \setminus \overline{\mathbb{Q}} = \emptyset$ and similarly for the irrationals $\mathbb{P}$.
Can someone describe SO(n)/SO(n-1) for me?
The group $SO(n+1)$ acts on $S^n$ transitively, so we have a surjection $f: SO(n+1) \to S^n$ given by $f(g)= g.x_0$ for some fixed $x_0\in S^n$. For any $p\in S^n$, the preimage $f^{-1}(p)$ is homeomorphic to the stabilizer $\{g\in SO(n+1):\, g.x_0 = x_0\} = SO(n)$. (If the latter result isn't clear, assume without loss of generality that $x_0$ is simply $(1, 0, \dots, 0)$; then the set of matrices in question is $(1)\oplus SO(n)\subset SO(n+1)$.) Putting this all together, we get a homeomorphism $SO(n+1)/SO(n) \to S^n$ defined by $g \to g.x_0$. It's not quite canonical; we need to specify which copy of $SO(n)$ inside $SO(n+1)$ we're killing, and that corresponds to the choice of point $x_0$. Showing that this map is continuous is mostly just a matter of unwinding the definitions; showing that its inverse is continuous is just a matter of noting that the spaces here are compact.
Solving $y^3=x^3+8x^2-6x+8$
Since we have$$y^3-x^3=2(4x^2-3x+4)$$ there exists an integer $k$ such that $$y-x=2k\iff y=x+2k.$$ So, we have $$(x+2k)^3-x^3=2(4x^2-3x+4)\iff (3 k-4) x^2+(6 k^2+3) x+4k^3-4=0\tag1$$ Now we have $$D=(6k^2+3)^2-4(3k-4)(4k^3-4)\ge 0\iff -12 k^4+64 k^3+36 k^2+48 k-55\ge 0$$$$\iff 12 k^4-64 k^3-36 k^2-48 k+55\le 0$$ Here, let $f(k)=12 k^4-64 k^3-36 k^2-48 k+55=4k[k\{k(3k-16)-9\}-12]$. Now, for $k\ge 6$, we have $$3k-16\ge 2\Rightarrow k(3k-16)\ge 2k\ge 12$$$$\Rightarrow k\{k(3k-16)-9\}-12\ge 3k-12\gt 0\Rightarrow f(k)\gt 0.$$ Also, for $k\le -1$, we have $$f(k)=12 k^4-36k^2(k+1) -48 k+55-28k^3\gt 0.$$ Hence, we have $k=0,1,2,3,4,5.$ Then, from $(1)$, you can find integer roots $x$ for each $k$.
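A brute-force search (my own sketch, independent of the argument above) confirms that within a large range the only integer points are $(0,2)$ and $(9,11)$, both coming from the case $k=1$ of equation $(1)$:

```python
def exact_cube_root(v):
    # return the integer y with y**3 == v, or None if v is not a perfect cube
    s = -1 if v < 0 else 1
    a = abs(v)
    r = round(a ** (1 / 3)) if a else 0
    for y in (r - 1, r, r + 1):  # guard against floating-point rounding
        if y >= 0 and y ** 3 == a:
            return s * y
    return None

# search a generous range of x for integer points on y^3 = x^3 + 8x^2 - 6x + 8
solutions = []
for x in range(-1000, 1001):
    v = x ** 3 + 8 * x ** 2 - 6 * x + 8
    y = exact_cube_root(v)
    if y is not None:
        solutions.append((x, y))
```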
Showing $e^{-x^2}$ small when $x$ is large just from series definition
Here is a method that might work even if you don't have the function in closed form. First, since $$F(x):=\sum_{k=0}^{\infty}{\frac{(-1)^kx^{2k}}{k!}}$$ is a power series with infinite radius of convergence, it has derivatives of all orders that can be obtained by differentiating the terms of the series. In particular, $$F'(x)=\sum_{k=1}^{\infty}{\frac{(-1)^kx^{2k-1}2k}{k!}}=2\sum_{k=1}^{\infty}{\frac{(-1)^kx^{2k-1}}{(k-1)!}}=-2xF(x)$$ for all $x\in \mathbb{R}$, as can be seen with a shift of index. Next we want to show that $1\geq F(x)>0$ on $[0,\infty)$. $F(0)=1>0$. Suppose that $F(x_0)<0$ for some $x_0>0$. Since $F$ is continuous, there is some $c\in(0,x_0)$ such that $F(c)=0$. We can also assume that there is a largest such $c$, because otherwise $c$ could get arbitrarily close to $x_0$, which would imply that $F(x_0)=0$ via the continuity of $F$. Since $F<0$ on $(c,x_0)$, the relation between $F$ and its derivative gives $F'>0$ on $(c,x_0)$. Thus $F$ is increasing on $(c,x_0)$, which implies that $F(x)>F(c)=0$ on $(c,x_0)$, a contradiction. It can also be shown that $F(x)\neq 0$ on $[0,\infty)$ (I'll leave this to you). Thus, $0<F(x)\leq 1$ on $[0,\infty)$. Lastly, $$F''(x)=-2xF'(x)-2F(x)=F(x)\left(4x^2-2\right),$$ so $x_1=\frac{1}{\sqrt{2}}$ is the only point on $[0,\infty)$ where $F''=0$. If $0\leq x \leq \frac{1}{\sqrt{2}}$, then $F''\leq 0$, so $F'(\frac{1}{\sqrt{2}})\leq F'(x)\leq 0$. If $x\geq \frac{1}{\sqrt{2}}$, then $F''\geq 0$, so $F'(\frac{1}{\sqrt{2}})\leq F'(x)\leq 0$. Thus $|F'|$ is bounded on $[0,\infty)$. Since $|F(x)|=\left|\frac{F'(x)}{2x}\right|$, we can now see that $F$ is small if $x$ is large.
Getting sum greater than $10$
For the first question: you get sum $\geq 10$ with the permutations $(6,6)$, $(6,5)$, $(5,6)$, $(5,5)$, so the probability that the sum is $< 10$ in one throw is $\frac{32}{36}=\frac{8}{9}$. Every throw is independent, so the probability that the number of throws is less than $10$ is equal to $$ \sum_{k=0}^8{\left(\frac{8}{9}\right)^k \frac{1}{9} } = 1-\left(\frac{8}{9}\right)^9. $$ For the second question you need to have, in the sequence of throws, the two dice sum to $11$ before they sum to $12$; can you finish from here?
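The geometric sum in the first part can be checked against its closed form:

```python
# Each term (8/9)**k * (1/9) is the probability that the first throw with
# sum >= 10 occurs on throw k + 1; summing k = 0..8 covers "fewer than 10
# throws", and the sum telescopes to 1 - (8/9)**9.
p_fail = 8 / 9
series = sum(p_fail ** k * (1 / 9) for k in range(9))
closed_form = 1 - p_fail ** 9
```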
Radius of convergence of the series
When $0<x<1$, $x^{n!} \le x^n$, hence $\sum x^{n!} <\infty$ and $R\ge 1$. When $x >1$, $x^ {n!}\nrightarrow 0$, hence $R\le 1$. So $R=1$. For the second series, note that $x^{\log m}=m^{\log x}$, and consider the ratio test: $$ x^{\log (m+1)}/x^ {\log m} = x^ {\log (m+1)-\log m} =x^ {\log(1+1/m)} \\ = \exp(\log(1+1/m)\log x) = \exp(\log x/m + O(m^{-2})) \\ = 1 + \log x/m + O(m^{-2}) $$ The interval of convergence is, according to Gauss's test, such that $$ \log x < -1 \implies x < e^{-1} $$ hence the answer is $[0,1/e)$. Detail: I used $$\log(1+u) = u+O(u^2) \\ \exp(u) = 1+u+O(u^2) $$ when $u\to 0$.
How many examples exist of Lie groups that are 2-dimesional surfaces?
There's also a cylinder. That's it. You can prove this by fully classifying 2-dimensional Lie groups. It's much easier to classify 2-dimensional Lie algebras, of which there are two up to isomorphism, and hence two simply connected 2-dimensional Lie groups up to isomorphism: $\Bbb R^2$ and $\text{Aff}(1)$, the affine transformations of the line. Now one classifies their discrete closed normal subgroups. For $\Bbb R^2$, there are lattices, isomorphic to either $\Bbb Z$ or $\Bbb Z^2$, and the quotient is a cylinder or a torus, respectively. The only normal subgroups of $\text{Aff}(1)$ are translation groups, giving the same result.
One homomorphism to another in group under addition
Hint: Given a homomorphism $f:G\to G$ write $\bar{a}=f(1)$. Now use the properties of a homomorphism.
Taking a relative limit
Let $ z \equiv \frac{|p|}{m_0 c}$. Then the limit $|p|\ll m_0 c $ means $z\ll1$, so we can use the approximation $(1+z)^n \approx 1+nz$ and keep only the lowest powers of $z$ in any polynomial expansion. Note that $$ \frac{d\vec{x}}{dt} = \frac{\vec{p}c^2}{\sqrt{m_0^2c^4 + |\vec{p}|^2c^2}} = \frac{z}{\sqrt{1+z^2}} \hat p c \approx z\left(1-\frac{z^2}{2}\right) \hat p c \approx z \hat p c = \frac{\vec p}{m_0}, $$ so we recover the non-relativistic result $ \vec p = m_0 \vec v. $
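In units with $m_0=c=1$ (my choice, just for this sketch), the exact velocity is $v=p/\sqrt{1+p^2}$, and the relative error of the Newtonian approximation $v\approx p$ shrinks like $p^2/2$, matching the expansion above:

```python
import math

# In units with m0 = c = 1 the exact velocity is v = p / sqrt(1 + p**2),
# while the Newtonian approximation is simply v = p.  The relative error
# should shrink like p**2 / 2.
def v_exact(p):
    return p / math.sqrt(1 + p * p)

errors = {p: abs(v_exact(p) - p) / p for p in (0.1, 0.01, 0.001)}
```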
Find the quadratic variation process of $\int f(s) \, dB_s$
The assertion follows if we can show that $$\lim_{\delta \to 0} \sup_{\|\Delta\| \leq \delta} \sum_{i=1}^n \left( \int_{t_{i-1}}^{t_i} f(s)^2 \, ds \right)^2 = 0. \tag{1}$$ Recall the following result (see e.g. here or here): Let $u \in L^1([a,b])$ be an integrable function. Then $u$ is uniformly integrable, i.e. for any $k \in \mathbb{N}$ there exists a constant $r>0$ such that $$\int_A |u(s)| \, ds \leq \frac{1}{k}$$ for all measurable sets $A \subseteq [a,b]$ with Lebesgue measure $\leq r$. Fix $k \in \mathbb{N}$. Since $u := f^2$ is integrable, we can choose $r>0$ such that $\int_A |f(s)|^2 \, ds \leq 1/k$ for any measurable set $A$ with Lebesgue measure $\leq r$. If $\Delta_n$ is a partition of $[a,t]$ with $\|\Delta_n\| \leq r$ we get \begin{align*} \sum_{i=1}^n \left( \int_{t_{i-1}}^{t_i} f(s)^2 \, ds \right)^2&\leq \frac{1}{k} \sum_{i=1}^n \left( \int_{t_{i-1}}^{t_i} f(s)^2 \, ds \right) \\ &= \frac{1}{k} \int_a^t f(s)^2 \, ds. \end{align*} Hence, $$\limsup_{\delta \to 0} \sup_{\|\Delta\| \leq \delta} \sum_{i=1}^n \left( \int_{t_{i-1}}^{t_i} f(s)^2 \, ds \right)^2 \leq \frac{1}{k},$$ and since $k \in \mathbb{N}$ is arbitrary this proves the assertion. A final remark regarding your reasoning: To get the last equality in your computations I would rather use that $\int_u^v f(s) \, dB_s$ is Gaussian with mean zero and variance $\int_u^v f(s)^2 \, ds$ (note that this allows you to compute all moments of $\int_u^v f(s) \, dB_s$). There is no need to know the distribution of the squared integral.
Identity matrix and matrix to the $n$th power
Hint: Try making $A$ a rotation matrix.
How to convert coeffecient multiplying $t$ in a trig function (e.g. $\sin(bt)$) into Hertz?
Hertz is $s^{-1}$, in other words "per second". Sensibly, the argument to $\sin$ will be unitless, but assuming that $t$ is in seconds then $b$ should be in $s^{-1}$. For frequency, you want whole waves per second, so considering that the period of $\sin$ is $2 \pi$, the frequency will be $f = \frac{b}{2 \pi}$. This is also $s^{-1}$ but Hz is traditionally used. The angular speed $\omega$, which is just $b$ for you, is larger by a factor of $2 \pi$, because it is measured in radians per second rather than whole revolutions. This Wikipedia article uses angular frequency, angular speed, or a few other terms and reserves angular velocity for the vector. This is consistent with linear motion. The main distinction that I am trying to make is between measuring whole revolutions or radians. In day to day life, rotational rate is typically measured in revolutions per time unit. E.g. records have typical speeds of $33 \frac{1}{3}$, $45$, or $78$ rpm and tachometers in cars also use rpm (or a multiple of it). However, if you see $\sin(\omega t)$ then $\omega$ will be in radians per unit time; probably seconds, but it could be other units as long as they match $t$. $\omega t$ needs to be dimensionless and in radians. I guess that rotational rate could be measured in degrees per unit time but I have never seen that. Radians are often treated as if they were a unit but they are not really. They are not arbitrary in the way that the metre or the yard is. A measurement in radians is just a number. So, radians per second is actually just $s^{-1}$ and we could use Hz. However, because other measures of angles are common, degrees or revolutions, we generally feel the need to emphasize that radians are being used. Hz is usually reserved for whole waves per second. Becquerel is also $s^{-1}$ but it is used for radioactive decays per second. More units are used than are really necessary.
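For reference, the conversion both ways is a one-liner (function names are my own):

```python
import math

# Convert between the angular frequency b in sin(b*t) (radians per second)
# and the ordinary frequency f in hertz (whole cycles per second).
def angular_to_hertz(b):
    return b / (2 * math.pi)

def hertz_to_angular(f):
    return 2 * math.pi * f

one_hz = angular_to_hertz(2 * math.pi)  # b = 2*pi rad/s is exactly 1 Hz
```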
What are some uses for Monte Carlo simulations in mathematics?
Monte Carlo methods are very useful in numerically evaluating high-dimensional integrals. With traditional integration methods, the number of integrand evaluations required to maintain accuracy grows quickly as dimension increases. With Monte Carlo integration, the number of integrand evaluations needed is independent of dimension. For many high-dimensional integrals, Monte Carlo methods are the only practical choice.
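A minimal sketch of such an integral, assuming plain uniform sampling: estimate $\int_{[0,1]^{10}}\sum_i x_i^2\,dx = 10/3$ with $10^5$ samples. Note that the cost per sample does not grow with the dimension, unlike grid-based quadrature.

```python
import random

# Monte Carlo estimate of the 10-dimensional integral of
# f(x) = x_1^2 + ... + x_10^2 over the unit cube [0, 1]^10;
# the exact value is 10 * (1/3) = 10/3.
random.seed(42)
dim, n_samples = 10, 100_000

def f(point):
    return sum(x * x for x in point)

total = 0.0
for _ in range(n_samples):
    total += f([random.random() for _ in range(dim)])
estimate = total / n_samples
```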
Probabilty Independent 2
Well, for a: the logic is okay, but the calculations went awry. $$0.41/(0.41+0.40-0.41\cdot0.40)~{=0.41/0.646\\ \approx 0.63}$$ And for b: you used $0.42$ instead of $0.41$.
Primality test bound: square root of n
Once you find the smaller divisor $d$, you automatically get the larger divisor $n/d$ too; since $d\cdot(n/d)=n$, at least one of the two is $\le\sqrt n$, so testing candidates up to $\sqrt n$ suffices.
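Concretely, this is why trial division only needs to test candidate divisors up to $\sqrt n$ (a standard sketch, not tied to any particular answer in the thread):

```python
def is_prime(n):
    # Trial division: if n = a * b with a <= b, then a*a <= a*b = n,
    # so it suffices to test divisors d with d*d <= n.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True
```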
How to find the two lines that have the same distance from a parallel line?
You are nearly done. The parallel lines at a distance of $2$ units from the given line together form the locus of all those points $(x_0,y_0)$ that are at a distance of $2$ units from the given line. So$$|3x_0-4y_0+5|=2\times5=10\iff 3x_0-4y_0+5=\pm10$$giving the equations $3x-4y-5=0,\ 3x-4y+15=0$.
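A quick numeric check (the helper function is my own): picking one point on each of the two answer lines and measuring its distance to $3x-4y+5=0$ gives $2$ in both cases.

```python
import math

# distance from the point (x0, y0) to the line a*x + b*y + c = 0
def point_line_distance(a, b, c, x0, y0):
    return abs(a * x0 + b * y0 + c) / math.hypot(a, b)

# (-5, 0) lies on 3x - 4y + 15 = 0 and (0, -1.25) lies on 3x - 4y - 5 = 0;
# both should be at distance 2 from the given line 3x - 4y + 5 = 0
d_upper = point_line_distance(3, -4, 5, -5, 0)
d_lower = point_line_distance(3, -4, 5, 0, -1.25)
```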