Find the expected value of $3-X$ for a random variable $X$ with the following moment generating function
The property of moment-generating functions that you should recall for this problem is $$\left[\frac{\partial^k M_X(t)}{\partial t^k}\right]_{t=0} = M^{(k)}_X(0) = \operatorname{E}[X^k];$$ that is to say, the $k^{\rm th}$ derivative of the MGF at $t = 0$ is the $k^{\rm th}$ raw moment of $X$, whenever such a moment is defined. So, for $k = 1$, we observe that the first derivative of $M_X$ at $t = 0$ gives the expectation; for $k = 2$, we get the expectation of $X^2$. Then to obtain the variance, you would calculate $$\operatorname{Var}[X] = \operatorname{E}[X^2] - \operatorname{E}[X]^2.$$ To understand where this relationship comes from, recall that the MGF is defined as $$M_X(t) = \operatorname{E}[e^{tX}] = \operatorname{E}\left[\sum_{k=0}^\infty \frac{(tX)^k}{k!}\right] = \sum_{k=0}^\infty \frac{\operatorname{E}[X^k]}{k!} t^k.$$ But by Taylor's theorem, $$M_X(t) = \sum_{k=0}^\infty \frac{M_X^{(k)}(0)}{k!} t^k,$$ thus $\operatorname{E}[X^k] = M_X^{(k)}(0)$ as claimed (again, whenever the moments are defined).
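Not part of the original answer, but here is a minimal SymPy sketch of that recipe, using an assumed MGF $M_X(t)=\frac{1}{1-t}$ (an Exponential(1) variable) rather than the one from the question; $\operatorname{E}[3-X]=3-\operatorname{E}[X]$ then follows by linearity of expectation.

```python
import sympy as sp

t = sp.symbols('t')
M = 1 / (1 - t)                    # assumed MGF: X ~ Exponential(1), valid for t < 1

EX = sp.diff(M, t, 1).subs(t, 0)   # E[X]   = M'(0)
EX2 = sp.diff(M, t, 2).subs(t, 0)  # E[X^2] = M''(0)

print(EX, EX2 - EX**2)             # 1 1  (mean and variance)
print(3 - EX)                      # 2    (E[3 - X] by linearity)
```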
How to deal with negative exponents in modular arithmetic?
Remember that one of the rules of exponents is that $$(x^a)^b = x^{ab}.$$ So we can rewrite $$208 \cdot 2^{-21} \pmod{421}$$ as $$208 \cdot (2^{-1})^{21} \pmod{421}.$$ You can then solve for the modular multiplicative inverse by one of a few techniques, including, as you note, the Extended Euclidean Algorithm. With this specific example, we get $$x \equiv 208 \cdot (2^{-1})^{21} \pmod{421}$$ $$\equiv 208 \cdot (211)^{21} \pmod{421}$$ $$\equiv 208 \cdot 329 \pmod{421}$$ $$\equiv 230 \pmod{421}$$
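As a quick sanity check (not in the original answer), Python 3.8+ accepts a negative exponent in `pow()` whenever the base is invertible modulo the modulus:

```python
print(pow(2, -1, 421))                # 211, the inverse of 2 mod 421
print(pow(211, 21, 421))              # 329
print(208 * pow(2, -21, 421) % 421)   # 230
```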
Outlier-resistant average of a set?
The median, as suggested by @Henry seems the simplest solution to your problem. You might also consider a 'trimmed mean'. Below I show the mean, a 5% trimmed mean (average the middle 90% of your data after discarding the top and bottom 5%), and median. (In a sense, the median is a 50% trimmed mean.) You could try out various degrees of trimming to see what works best in your situation. I pasted your data into R statistical software with the following results. mean(x); mean(x, tr=.05); median(x) ## 107.3667 # ordinary mean ## 66.06368 # 5% trimmed mean ## 65 # median Both the trimmed mean and the median would also provide protection if there were users who purposely gave absurdly low values. Another approach would be to use a 'boxplot' to detect 'outliers', ignore the outliers, and average the rest. The default outlier-detection method may include as outliers some values you would want to keep. (You could adjust the outlier rule to be less aggressive.) The procedure below ignores the possibility of low outliers, and only omits the high ones. It has some potential of giving a result biased on the low side. boxplot.stats(x)$out ## 75 75 75 75 78 80 85 317 318 630 630 640 6511 mean(x[x <= boxplot.stats(x)$stats[5]]) # effect is to avg values 74 or less ## 65.77665 My guess is that you want something simple and automatic. I would probably use the ordinary mean, 5% trimmed mean, and median in tandem for a while and then pick one of the latter two, depending on track record.
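If R is not available, here is a rough Python equivalent of the three summaries; the data vector below is a made-up stand-in for the user's values, and `scipy.stats.trim_mean` is assumed to be available.

```python
import numpy as np
from scipy import stats

# Stand-in data: mostly moderate values plus a few huge outliers.
x = np.r_[np.linspace(50, 80, 36), [317, 630, 640, 6511]]

print(np.mean(x))                # ordinary mean, dragged upward by the outliers
print(stats.trim_mean(x, 0.05))  # 5% trimmed mean (drops the top and bottom 5%)
print(np.median(x))              # median, i.e. a 50% trimmed mean
```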
Trigonometric function, with calculus integration.
If the integral is $$\int dx \, \sin{x} \sec^3{x}$$ then this is equivalent to $$\int dx \tan{x} \, \sec^2{x} = \int d(\tan{x}) \tan{x} = \frac12 \tan^2{x} + C$$
Complex solutions of a third degree equation
Since $\sqrt[3] 2$ is a root of $f(x)=x^3-2$, the polynomial $x-\sqrt[3] 2$ divides $f(x)$. So $f(x)=(x-\sqrt[3] 2)(x^2+ax+b)$ for some real numbers $a,b$. If you perform the long division you can obtain $a$ and $b$, and then solve the quadratic equation $x^2+ax+b=0$ using the quadratic formula, which gives the two complex roots. Here is a more general solution: If $f(x)=x^n-a$ then the roots of $f(x)$ are $\sqrt[n] a,~ \alpha\sqrt[n] a, \ldots, ~\alpha^{n-1}\sqrt[n] a$ where $\alpha$ is a primitive $n$-th root of unity (e.g. $e^{2\pi i/n}$).
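For completeness, carrying out the long division gives $a=\sqrt[3]2$ and $b=\sqrt[3]4$, i.e. $x^3-2=(x-\sqrt[3]2)(x^2+\sqrt[3]2\,x+\sqrt[3]4)$, and the quadratic formula then yields the two complex roots $x=\sqrt[3]2\cdot\frac{-1\pm i\sqrt3}{2}$, in agreement with $\sqrt[3]2\,e^{\pm 2\pi i/3}$.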
Finding generalized eigenvectors from a Jordan form
First I will answer your questions and then explain how to write J. Each n$\times$n Jordan block represents one eigenvector and n − 1 generalized eigenvectors. All the eigenvectors and generalized eigenvectors are linearly independent. You cannot determine the eigenvectors from J unless J = A; you also need the matrix M from the similarity transformation J = M$^{-1}$AM. Hence, yes, you can use the definition of an eigenvector applied to matrix A. One online source on Jordan normal form properties and their relation to eigenvectors is the Wikipedia article "Generalized eigenvector," which also justifies my answers. Here is how to write J. The degree of the characteristic polynomial gives the size of J, 8$\times$8 in this case. With the characteristic polynomial in the form you have it, the exponent of each monomial gives the number of times the corresponding eigenvalue appears along the diagonal of J; in this case, λ$_1$ appears five times and λ$_2,$ three times. The number of linearly independent eigenvectors gives the number of Jordan blocks, four in this case. With the minimal polynomial in the form you have it, the exponent of each monomial gives the size of the largest Jordan block for the corresponding eigenvalue. In this case, λ$_2$ has a 3$\times$3 Jordan block, which takes care of all three λ$_2$'s along the diagonal of J. The largest Jordan block for λ$_1$ is 2$\times$2, and we need three Jordan blocks for λ$_1.$ The only way to arrange that is to have two 2$\times$2 and one 1$\times$1 Jordan blocks. Thus, we can write J as shown in your answer document.
If 7 dice are thrown simultaneously, what is the probability that all six faces appear on the upper faces?
Here's a very simple, structured way of doing it. Consider a multinomial distribution with 6 outcomes. In $n=7$ trials, you want exactly two of one face and exactly one of all other faces. There are 6 equally likely situations, since it is equally likely that a face will be the one showing up twice. For a single case, consider the probability that there are two 1's and one each of 2, 3, 4, 5, 6. The probability of this happening is $$ {7\choose 2,1,1,1,1,1} (1/6)^7. $$ Then accounting for all six equally likely situations, the final answer is $$ 6\cdot {7\choose 2,1,1,1,1,1} (1/6)^7=\frac{7!}{2!} (1/6)^6, $$ which simplifies to the correct answer.
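A quick Monte Carlo check of $\frac{7!}{2!}(1/6)^6\approx 0.054$ (my own simulation sketch, not part of the original answer):

```python
import random

trials = 200_000
hits = sum(
    len({random.randint(1, 6) for _ in range(7)}) == 6   # all six faces appear
    for _ in range(trials)
)
print(hits / trials)   # empirical probability, close to...
print(2520 / 6**6)     # ...the exact value 7!/2! * (1/6)^6 ≈ 0.0540
```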
Summing Odd Fractions to One, and Odd Perfect Numbers
The claim is true because $1$ is the sum of finitely many fractions with odd denominator and unit numerator. More generally, for any statement $P$ the implication $P\ \implies\ Q$ is true if $Q$ is true. This says nothing about the truth value of $P$, however. In this particular case, this doesn't make the existence of odd perfect numbers any more or less likely. In this sense the quoted comment is a bit misleading.
Applications of the Markov Property
I don't see how you come to the conclusion in 2 from $\Psi$. Note that the event on the left hand side $X_2=x_2$ is not properly defined (where does $x_2$ come from?). If you take another step and apply the Markov property you get: \begin{align*} P(X_3=x_3|X_0=x_0,X_1=x_1) &= \sum_{x_2}P(X_3=x_3|X_0=x_0,X_1=x_1,X_2=x_2)P(X_2=x_2|X_0=x_0,X_1=x_1) \\ &= \sum_{x_2}P(X_3=x_3|X_1=x_1,X_2=x_2)P(X_2=x_2|X_1=x_1) \\ &= \sum_{x_2}P(X_3=x_3,X_2=x_2|X_1=x_1) \\ &= P(X_3=x_3|X_1=x_1)\ . \end{align*}
Why do lagrange multipliers have the form $\nabla G$
Heuristic answer (the sort you would see in, e.g., a thermodynamics class): Let $G : \mathbb{R}^2 \to \mathbb{R}$. Consider the surface $G(x,y)=0$. Then to move from a point $(x,y)$ to an infinitely close point $(x+dx,y+dy)$ on the surface, we must have $$dG=\frac{\partial G}{\partial x} dx + \frac{\partial G}{\partial y} dy = 0.$$ So we get an implicit function $y : \mathbb{R} \to \mathbb{R}$ such that $\frac{dy}{dx} = -\frac{\frac{\partial G}{\partial x}}{\frac{\partial G}{\partial y}}$. Essentially the same thing happens if $G : \mathbb{R}^{n+m} \to \mathbb{R}^m$ instead. If you don't like explicit differentials, this idea can avoid them by introducing an auxiliary variable $t$ which parametrizes the surface, and then writing $$\frac{dG}{dt} = \frac{\partial G}{\partial x} \frac{dx}{dt} + \frac{\partial G}{\partial y} \frac{dy}{dt} = 0$$ and then using the chain rule to identify $\frac{dy}{dx}$. Formal (but incomplete) answer: The formal result being used here is called the implicit function theorem. In the general situation, you have a differentiable function $G : \mathbb{R}^{n + m} \to \mathbb{R}^m$, and are considering the implicit function $g : \mathbb{R}^n \to \mathbb{R}^m$ defined by $G(x,g(x))=0$. (Here I am abusing notation somewhat; $g$ is really only defined on a subset of $\mathbb{R}^n$, and $G$ itself may only be defined on a subset of $\mathbb{R}^{n+m}$.) We define $$Dg : \mathbb{R}^n \to \mathbb{R}^{m \times n}$$ to be the Jacobian of $g$, $$D_y G : \mathbb{R}^{n + m} \to \mathbb{R}^{m \times m}$$ to be the Jacobian of $G$ with respect to the "$y$" variables, and $$D_x G : \mathbb{R}^{n+m} \to \mathbb{R}^{m \times n}$$ to be the Jacobian of $G$ with respect to the "$x$" variables. Then the implicit function theorem tells us that if $G(x_0,y_0)=0$ and $D_y G(x_0,y_0)$ is invertible, then $g$ is defined and differentiable on a neighborhood of $x_0$ and $$Dg(x) = -D_yG(x,g(x))^{-1} D_x G(x,g(x)).$$ I don't think the rigorous proof of this statement is accessible to a multivariable calculus student. The rigorous proof I know of can be found in Strichartz' The Way of Analysis. This constructs $g$ using a contraction principle, but the procedure is complicated by the fact that the domain has to be open in order to make sense of derivatives, while the contraction principle needs the domain to be closed.
If $A⊆P_1\cup P_2$, then $A⊆P_1$ or $A⊆P_2$.
HINT: Let $x=x_1+x_2$; if $x\in P_1$, then $x_1=x-x_2\in P_1$, so $x\notin P_1$; can you see a contradiction coming? Added: Note that the suggested argument does not actually use the primality of $P_1$ and $P_2$. The generalization of this result to the union of more than two ideals does require that the ideals be prime, however.
Create non-principal filter contained in a principal ultrafilter
I’m assuming for this answer that you translated as substantive and chief some word whose correct English translation in this context is principal. Let $E=\{2n:n\in\Bbb N\}$, let $\mathscr{U}$ be a non-principal ultrafilter on $\Bbb N\setminus E$, and let $$\mathscr{F}=\{E\cup U:U\in\mathscr{U}\}\;;$$ then $\bigcap\mathscr{F}=E$, but $E\notin\mathscr{F}$, so $\mathscr{F}$ is non-principal. However, it’s easy to extend $\mathscr{F}$ to a non-principal filter $\mathscr{G}$ on $\Bbb N$: if $\mathscr{V}$ is any non-principal filter on $E$, $\mathscr{V}$ is a base for such a $\mathscr{G}$.
Modular quadratic equation help
Partial answer: Note that $X^2 + a \equiv Y^2 + a \equiv P \mod N$ $\implies X^2 \equiv Y^2 \mod N$ $\implies (X - Y)(X + Y) \equiv 0 \mod N$ A family of the solutions can be found using the following: Take $X-Y, X+Y$ to be divisors of $Nk$ for some integer $k$. If $X - Y = d$ is a divisor of $Nk$, then you get $$X - Y + X + Y = 2X = d + {Nk \over d}$$ $$\implies X = {d + {Nk \over d} \over 2}, k \in \mathbb Z$$ $$\implies Y = {d + {Nk \over d} \over 2} - d$$ If you equate $X + Y = d$, you get the same family of solutions as above since $X-Y$ and $X+Y$ are the two divisors whose product equals $Nk$. These are not the only solutions because there could be divisors of $Nk$ that are not divisors of $N$ (divisors of $k$).
Product of the cycles of permutation group
Are you asking whether or not if $(12345)$ can be written as $(54)(52)(21)(25)$? You can compute these and see whether or not this is the case. Note usually with cycle notation we have that unique elements should only appear in the cycle decomposition once.
If $a_{ij}=\max(i,j)$, calculate the determinant of $A$
Let $d_n$ be the determinant of the $n\times n$ matrix with entries $a_{ij}=\max(i,j)$. We can compute it with a recurrence: expanding along the last row (or column), all but the minors coming from the last two columns have linearly dependent columns, so we get $$d_n=-\frac{n^2}{n-1}d_{n-1}+nd_{n-1}=-\frac{n}{n-1}d_{n-1}.$$ Coupled with $d_1=1$ we get $d_n=(-1)^{n-1}n$.
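A quick numerical check of $d_n=(-1)^{n-1}n$ (my own verification sketch, not part of the original answer):

```python
import numpy as np

for n in range(1, 8):
    # Build the matrix with a_ij = max(i, j) using 1-based indices.
    A = np.fromfunction(lambda i, j: np.maximum(i, j) + 1, (n, n))
    print(n, round(np.linalg.det(A)), (-1) ** (n - 1) * n)
```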
Orthogonality of dot product is applicable only in $\mathbb{R}^3$ and $\mathbb{R}^7$
Often the definition of the angle between two vectors in $\mathbb{R}^n$ is based on the inverse cosine of their normalized dot product, $\cos\theta = \frac{u\cdot v}{\|u\|\,\|v\|}$. In particular, vectors are defined as being orthogonal if their dot product is zero.
Weakening the Fundamental Lemma of Calculus of Variations
It wouldn't be a weakening, since polynomials are not compactly supported. Notice also that you really do want the boundary values to be zero, since otherwise you cannot derive the differential form of the Euler-Lagrange equation (you cannot get rid of the boundary terms during integration by parts).
A question about inverse modulo a number.
As long as $\gcd(a,y)=1$, yes, you could. If not, then you can't. Find some integer solution to $my+na=1$, say by the extended Euclidean algorithm. Modulo $a$ this yields $my\equiv 1$, which means that $m$ corresponds to $\frac 1y$. As noted in the comments above, though, it is more common in modular arithmetic to write $y^{-1}$.
Proof that $\overline{B}_1(0)$ is not compact in $(C[0,1],||\cdot||_{\infty})$
For each $n\in\mathbb{N}$, define a function $f_n\in C\bigl([0,1]\bigr)$ such that: $x\notin\left[\frac1{n+1},\frac1n\right]\implies f_n(x)=0$; $(\forall x\in[0,1]):-1\leqslant f_n(x)\leqslant 1$; $f_n(x_n)=1$ for some $x_n\in\left[\frac1{n+1},\frac1n\right]$. It follows from this that $m\neq n\implies\|f_m-f_n\|_\infty=1$. Therefore, $(f_n)_{n\in\mathbb N}$ has no Cauchy subsequence, and therefore no convergent subsequence.
Approximation of square root
$$\begin{align} \\ & \sqrt{10^2-(6.9\times 10^{-2})^2} \\ & = \sqrt{10^2\left[1 -\frac{1}{10^{2}}\left\{(6.9)^2\times 10^{-4}\right\}\right]} \\ & = \left[10^2\{1 -(6.9)^2\times 10^{-6}\}\right]^{\frac{1}{2}} \\ & \approx 10\left[1-\frac 1 2(6.9)^2\times10^{-6}\right] \,\,\,\,\,\,\,\,\,\,\,\,\, \text{using binomial approximation} \end{align}$$ The last step uses the binomial approximation $(1+x)^{\alpha}\approx 1+\alpha x$ for small $|x|$, here with $\alpha=\frac12$.
$\int e^ {x^2}dx$ : Integration of e to the power of x^2
If you're looking for an exact solution, you're probably out of luck. If you're looking for a numerical solution, you're going to want to use the series $e^x = \sum_{n=0}^\infty\frac{x^n}{n!}$. So, for instance, for the example you gave, $\int_0^2e^{x^2}dx$, you would do $$\int_0^2 e^{x^2}dx = \int_0^2\sum_{n=0}^\infty\frac{x^{2n}}{n!}dx = \sum_{n=0}^\infty\frac{2^{2n+1}}{(2n+1)n!} = 2 + \frac{8}{3} + \frac{16}{5} + \frac{64}{21} + \frac{64}{27} + \cdots \approx 16.5$$ I got the final answer from Wolfram, but if you really wanted to work it out by hand, computing eight or more terms and then adding the final term once more would give you an upper bound; in general, for an arbitrary positive limit of integration $x$, you would have to write out enough terms that the last index $n$ satisfies $\frac{(n+1)(2n+3)}{2n+1} > 2x^2$, so that the remaining terms are dominated by a geometric series with ratio $\frac12$. (Since the integrand is an even function, you don't really need to consider negative bounds.)
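A small numerical sketch of the series approach (my own illustration, not from the original answer):

```python
from math import factorial, fsum

def int_exp_x2(x, terms=40):
    """Partial sum of  integral_0^x e^(t^2) dt = sum_n x^(2n+1) / ((2n+1) n!)."""
    return fsum(x**(2*n + 1) / ((2*n + 1) * factorial(n)) for n in range(terms))

print(int_exp_x2(2.0))   # ≈ 16.4526..., consistent with the quoted ≈ 16.5
```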
What is the distribution of residual in simple linear regression?
The $i$th residual has normal distribution, with mean zero and variance $$ \operatorname{Var}(e_i) = \sigma^2\left(1-\frac1n-\frac{(x_i-\bar x)^2}{\operatorname{SSX}}\right) $$ where SSX is shorthand for $\sum_k(x_k-\bar x)^2$. It has normal distribution because of the formulas $$ e_i=y_i-\hat y_i=(\epsilon_i-\bar\epsilon)-(B_1-\beta_1)(x_i-\bar x) $$ and $$B_1-\beta_1=\frac{\sum_k(x_k-\bar x)(\epsilon_k-\bar \epsilon)}{\operatorname{SSX}} $$ which express $e_i$ as a linear combination of the independent variables $\epsilon_1,\ldots,\epsilon_n$. The mean is computed as zero from the same formulas. An elementary derivation of the variance is found in this answer. A slicker but more advanced derivation is found in this answer. More can be said: It turns out that the correlation between the residual $e_i$ and the error $\epsilon_i$ is $$\sqrt{1-\frac1n-\frac{(x_i-\bar x)^2}{\operatorname{SSX}}}. $$ This tells us that as $n$ gets large, the correlation tends to $1$, so that the residual is virtually identical to the error. This makes sense, because as the sample gets bigger, the estimators $B_0$ and $B_1$ converge to the true $\beta_0$ and $\beta_1$, hence the regression line $y=B_0+B_1 x$ converges to the theoretical line $y=\beta_0+\beta_1 x$.
Vector Calculus Question
Hint: You have the vector field $F(x,y,z)=(y,z,x)$, and you want to evaluate $$\int_C F\cdot ds.$$ What is the curl of your vector field? Notice that it must be a constant vector since $F$ is linear. Let $K=\nabla \times F$ denote this constant. Now, apply Stokes' Theorem. What is the area of the region enclosed by the curve? (Hint: it is a circle of radius $2$.) If we call that area $A$ and the unit normal to the plane of the circle $\hat n$, then the line integral is equal to $$(K\cdot \hat n)\, A.$$ Hope that helps,
UMVUE of $ \frac{1}{\theta}$ coming from $f(x) = \theta x^{\theta - 1}$.
$\newcommand{\E}{\operatorname{E}}$ \begin{align} \E(-\log X) & = \int_0^1 (-\log x) \Big(\theta x^{\theta-1}\,dx\Big) = \int u\,dv = uv - \int v \, du \\[8pt] & = \left.(-\log x)x^\theta\vphantom{\frac11}\,\right|_0^1 - \int_0^1 x^\theta\Big( \frac{-dx} x \Big) \\[8pt] & = \int_0^1 x^{\theta-1} \,dx \quad(\text{L'Hopital's rule showed that the first term is 0.}) \\[8pt] & = \left.\frac {x^\theta}\theta\right|_0^1 = \frac 1 \theta. \end{align} That takes care of unbiasedness. The joint density is $$ f(x_1,\ldots,x_n) = \theta^n (x_1\cdots x_n)^{\theta-1} \cdot 1 $$ where the "$1$" is a factor that does not depend on $\theta$ and in this case does not depend on $x_1,\ldots,x_n$, but dependence on those would not upset the following conclusion: the product, and therefore the sum of the logarithms, is sufficient. You need to show that this sufficient statistic admits no nontrivial unbiased estimators of zero, i.e. there is no nonzero function $g(x_1,\ldots,x_n)$, not depending on $\theta$, for which $$ \int_0^1\cdots\int_0^1 g(x_1,\ldots,x_n)\theta^n(x_1\cdots x_n)^{\theta-1}\,dx_1\cdots dx_n = 0\text{ for all values of $\theta>0$}. $$ (You can divide both sides of that by $\theta^n$ and it's a little bit simpler.) Maybe I'll be back later to deal with this integral${}\,\ldots\ldots$
Help with understanding point from Nassim Taleb's book "Dynamic Hedging"
With a 248-day year, use $\sigma \sqrt{\frac{1}{248}} =.01$ to normalize for a daily $\sigma$ of $1\%$; that is, $\sigma = 0.01\sqrt{248}$, which gives $\sigma \approx .157$.
Showing that the $n$-th positive integer that is not a perfect square is $n+\{\sqrt{n}\}$, where $\{\}$ is the "closest integer" function
The first step is certainly valid. The intervals $[m^2-m+1,m^2+m]$ partition the positive integers. Note that $$(m+1)^2-(m+1)+1=m^2+m+1.$$
If $P$ is projective and $A,B$ are direct summands of $P$ then $A\cap B$ is a direct summand of $P$
We have an exact sequence $0\to P/(A\cap B)\stackrel{f}\to P/A\oplus P/B$. (Define $f(x)=(x,-x)$.) Then $P/(A\cap B)$ is a submodule of a projective module, which over $\mathbb Z$ is free, so $P/(A\cap B)$ is also free.
Translation of a complex sentence
Using: $P(x)$: $x$ is a person $A(x)$: $x$ is an act $P(x,y)$: $x$ performs $y$ $D(x,y)$: $x$ does damage to $y$ $B(x)$: $x$ is blameworthy $J(x)$: $x$ is justifiable $O(x,y)$: $x$ is obligated to pay damages to $y$ you get: $$\forall x \forall y ((P(x) \land P(y) \land \exists z (A(z) \land P(x,z) \land D(z,y) \land B(z) \land \neg J(z))) \to O(x,y))$$ One would of course like to break down $O(x,y)$ a little more ... but in order to do that, we'd need to go into modal logic, which is outside the scope of this exercise.
Possible Jordan Canonical Forms Given Minimal Polynomial
Yes, you did. Based on the minimal polynomial, you must have a two-by-two Jordan block for eigenvalue 2 and a one-by-one block for eigenvalue 1. You can fill in the five-by-five matrix with more of those blocks or with one-by-one blocks for eigenvalue 2. Using those rules yields precisely your first four matrices. Your fifth matrix is not correct. Furthermore, you can permute the blocks. Thus, your first matrix yields 3!/2! = 3 Jordan forms, your second and third matrices yield 4!/2! = 12 forms each, and your fourth matrix yields 4!/3! = 4 forms for a total of 3 + 2 · 12 + 4 = 31 forms.
Elementary set theory - are these sets empty?
There is exactly one map $\mathbb N\to\{\emptyset\}$, given by $f(n)=\emptyset$. There is no map $\mathbb N\to\emptyset$ as we cannot have e.g. $f(1)\in\emptyset$. There is exactly one map $\emptyset\to\mathbb N$. If you give me an element of $\emptyset$, I am willing to name you a natural number. For 4 just pick e.g. a surjection $\mathbb N\to\{1,2,3\}$.
How do I show that this function is a contraction?
By the definition of a contraction, you need to show that there exists some $0\leq k < 1$ such that $$d(f(x), f(y))\leq k d(x, y)$$ for all $x, y\in\mathbb{R}^N$. Now, here the distance $d$ is the usual Euclidean distance on $\mathbb{R}^N$, so this boils down to proving that $$\sqrt{\sum_{i=1}^n (f_i(x) - f_i(y))^2}\leq k\sqrt{\sum_{i=1}^n (x_i - y_i)^2}.$$ Try substituting the expressions for $f_i(x)$ and $f_i(y)$ into this inequality and see what you get!
Summation involving 2 variables
$$ \sum_{i = j + 1}^4 (25-5i)=\sum_{i=1}^4(25-5i)-\sum_{i=1}^j(25-5i)$$ Can you take it from there?
Expectation and Variance of Sales
A Poisson model is reasonable. The number of customers in a day has Poisson distribution with parameter $80$. The number of hot dogs sold in a day therefore has Poisson distribution with parameter $\lambda=\frac{3}{4}\cdot 80$. Now recall that a Poisson random variable with parameter $\lambda$ has mean and variance $\lambda$.
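Concretely, the daily sales count here is Poisson with $\lambda=\frac{3}{4}\cdot 80=60$, so the expected value and the variance of daily sales are both $60$.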
Function of arc every few steps along cone
The radius of each arc is $r_k=\frac kn r$ where $k$ ranges from $1$ to $n$. The points can be calculated by taking intervals of $\theta$ from $0$ to $\alpha$ as $x=r_k \cos (\frac {\pi-\alpha}2+\theta),y=r_k \sin (\frac {\pi-\alpha}2+\theta)$, the standard parameterization of a circle. Added: the vertex of your angle is assumed to be the origin of the coordinates. For 3D you can restrict $\theta$ to $[0,\alpha /2]$, which will give you the point in the $xy$ plane with $x \ge 0$. Now you can revolve it around the $y$ axis. The $y$ values are all the same. Given the $x$ we calculated for a given $\theta$ the point is $(x',y,z)$ with $x'=x \cos \phi, z=x \sin \phi$ as $\phi$ ranges from $0$ to $2\pi$ The shape is no longer an arc, which is what led me to the 2D solution. It is now a spherical cap.
Prove that quotient group K/H is normal subgroup of quotient group G/H
The elements of $K/H$ are cosets which we can write as $kH$, and the elements of $G/H$ are cosets which we can write as $gH$. We want to show that $(gH)^{-1}kHgH \in K/H$. By the group law in $G/H$ we have $(gH)^{-1}kHgH = g^{-1}kgH$. Now $g^{-1}kg \in K$ because $K$ is normal in $G$, so $g^{-1}kgH \in K/H$.
What is the cardinality of $A=\{(a,b)\in \mathbb{R}\times \mathbb{R}\mid 2a+b\in \mathbb{N}\text{ and }a-2b\in \mathbb{N}\}$
Since $$ \det \begin{pmatrix} 2 & 1\\ 1 & -2 \end{pmatrix} =-5\neq 0 $$ for each pair $(x, y) \in \mathbb{N} \times \mathbb{N}$ there is a unique solution of $$ 2a+b = x\\ a-2b = y $$ So you have at most ${\aleph }_{0}$ solutions and you can find directly ${\aleph }_{0}$ solutions.
Proving a statement related to circles in a coordinate plane.
Suppose line $PQ$ and $RS$ intersect at $T$. The power of $T$ with respect to $ω_1$ is $TR \cdot TS$, and the power of $T$ with respect to $ω_2$ is $TP \cdot TQ$. Because $P, Q, R, S$ are concyclic, then$$ TR \cdot TS = TP \cdot TQ. $$ Thus $T$ is on the radical axis of $ω_1$ and $ω_2$. Now, because line $PQ$ is the radical axis of $ω_2$ and circle $PQRS$, then $OO_1 ⊥ PQ$, i.e. $OO_1 ⊥ TO_2$. Analogously, $OO_2 ⊥ TO_1$. Thus $O$ is the orthocenter of $△TO_1O_2$, which implies $TO ⊥ O_1O_2$. Since $T$ is on the radical axis of $ω_1$ and $ω_2$, then $O$ is also on the radical axis of $ω_1$ and $ω_2$.
Limit of multivariate function at $(0,0)$.
For the first approach with ellipses, take $x=r\cos (\theta)$ and $y=\frac{1}{\sqrt{2}}r\sin (\theta)$ with $r \to 0^+$, regardless of $\theta$. $$=\lim_{r \to 0^+} \frac{r^3 \cos^3 (\theta)-\frac{1}{\sqrt{2}}r^3 \sin^3 (\theta)}{r^2}$$ For the second limit, looking at the denominator, set it equal to $r^2$. So $x^2+y^6=r^2$, which gives some weird but closed shape. Now use $x=r \cos (\theta)$ and $y=r^{1/3} \sin^{1/3} (\theta)$, and let $r \to 0^+$ regardless of $\theta$. $$=\lim_{r \to 0^+} \frac{r^{7/3} \cos (\theta) \sin^{4/3} (\theta)}{r^2}$$
Name of a vector of 1s?
It's not quite common enough to have a standard notation, but a reasonably well-accepted notation would be something like $\mathbf{1}_n = (1, 1, \ldots, 1) \in \mathbb{R}^n$, and if you needed a column vector then you'd write $\mathbf{1}^\intercal_n$. It may sometimes be called the 1-vector of size $n$ or a size $n$ vector of 1s. As such, it's the kind of thing that when you use it you would probably be best off defining it explicitly so that it's clear what you're doing with it.
Show that the greatest common divisor $gcd(x,y)$ is primitive recursive.
Hint: The basic relation you need is divisibility: $$D = \{(x,y)\in{\Bbb N}_0^2\mid x\;\text{divides}\; y\}.$$ $x$ divides $y$ if and only if $x·i = y$ for some $1 ≤ i ≤ y$. So the characteristic function of $D$ can be written as $$\chi_D(x, y) = \text{sgn}[\chi_=(x · 1, y) + \chi_=(x · 2, y) + . . . + \chi_=(x · y, y)],$$ where $\chi_=$ is the (primitive recursive) characteristic function of the equality relation $\{(x,x)\mid x\in{\Bbb N}_0\}$. Since the characteristic function $\chi_D$ is primitive recursive, the relation $D$ is primitive recursive.
How to solve for $ x $: $ ae^{bx}+ce^{dx}+x = f $
Waiting for more information about the parameters, I shall assume that $a$ and $c$ are positive. You are looking for the zero of the function $$f(x)=ae^{bx}+ce^{dx}+x - f$$ which will not admit any analytical solution and for which a numerical method will be required. This implies a "reasonable" starting guess or at least a range for the solution. What we can consider is that $f(x)$ is somewhere between $$g(x)=ae^{bx}+ce^{bx}+x - f \quad \text{and} \quad h(x)=ae^{dx}+ce^{dx}+x - f$$ $$g(x)=0 \implies x_g=f-\frac 1b W\left(b (a+c) e^{b f}\right)$$ $$h(x)=0 \implies x_h=f-\frac 1d W\left(d (a+c) e^{d f}\right) $$ Let us try for $a=2$, $b=\frac 12$, $c=3$, $d=\frac 15$ and $f=23.456$. This would give $$x_g \sim 2.83386 \quad \text{and} \quad x_h\sim 6.19505$$ Using the average as $x_0$, Newton iterates would be $$\left( \begin{array}{cc} 0 & 4.514450 \\ 1 & 3.885394 \\ 2 & 3.787382 \\ 3 & 3.785395 \\ 4 & 3.785394 \end{array} \right)$$ Using @Semiclassical's example $$f(x)=2e^{x}+e^{2x}+x-1$$ we should have $$x_g \sim -0.617642 \quad \text{and} \quad x_h\sim -0.38607$$ Newton iterates would be $$\left( \begin{array}{cc} n & x_n \\ 0 & -0.5018562 \\ 1 & -0.5274934 \\ 2 & -0.5277952 \end{array} \right)$$
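A minimal sketch of the Newton iteration with the first set of sample parameters above (my own illustration; the starting guess is the average of $x_g$ and $x_h$):

```python
import math

a, b, c, d, f = 2, 0.5, 3, 0.2, 23.456

def g(x):
    return a*math.exp(b*x) + c*math.exp(d*x) + x - f

def g_prime(x):
    return a*b*math.exp(b*x) + c*d*math.exp(d*x) + 1

x = 4.51445                       # starting guess: average of x_g and x_h
for _ in range(6):
    x -= g(x) / g_prime(x)
print(x)                          # ≈ 3.785394, matching the table of iterates above
```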
Proving that the $\sigma$-algebra of Lebesgue measurable sets on the real line is not separable (i.e. countably generated).
One way is to prove that any countably generated $\sigma$-algebra has cardinality at most that of the continuum. That takes some time to show, but it is done in the top answer to this question Is the intersection of two countably generated $\sigma$-algebras countably generated?. Then recall that any subset of the Cantor set is Lebesgue measurable, so there are $2^{\mathfrak c}>\mathfrak c$ of them, more than a countably generated $\sigma$-algebra can contain. Btw, you shouldn't equate countable generation and separability: the Lebesgue measurable sets are separable in the metric induced by the measure.
Solve $\sin(x) = \cos(x)$ where $0°\leq x\leq 450°$
You are correct, there are only $3$ solutions to the equation $\sin(x)=\cos(x)$ in the interval $[0,\frac{5}{2}\pi]$. Perhaps you might've missed something else in the question? A graphical approach shows that there are $3$ solutions, at $\frac{\pi}{4},\frac{5\pi}{4}$ and $\frac{9\pi}{4}$.
7 boys and 5 girls sitting in a row, if no 2 girls are sitting together, find the permutations.
Two girls that are sitting together can be seen as one girl, and they can sit in two ways. Two girls out of 5 can be chosen in $\binom{5}{2}$ ways, and all the different positions are $$2\cdot\binom{5}{2}\frac{(7+4)!}{7!4!}=6600$$
Deriving least squares coefficients for curve of form $y=a/(x^2+b)$
Starting with a rather silly sentence: since the model is nonlinear with respect to $b$, by the end, you will need a nonlinear regression and, as usual, you will need reasonable starting guesses for the parameters. You can get these estimates in a preliminary step $$y=\frac{a}{x^2+b} \implies \frac 1y=\frac {x^2} a+\frac ba$$ So, defining for each $(x_i,y_i)$ data point the new variables $t_i=x_i^2$ and $z_i=\frac 1{y_i}$, you have $$z=\alpha\, t+\beta$$ and, from the corresponding linear regression, you will get, as estimates, $a=\frac 1 \alpha$ and $b=\frac \beta \alpha$. If you do not want to use nonlinear regression, you can reduce the problem to a nasty equation in $b$. Using Rohan's expressions, let us define $$S_1=\sum \frac{y_i}{x_i^2+b} \qquad S_2=\sum \frac{1}{(x_i^2+b)^2}$$ $$S_3=\sum \frac{y_i}{(x_i^2+b)^2} \qquad S_4= \sum \frac{1}{(x_i^2+b)^3}$$ Eliminating $a$, you then need to find the zero of the function $$f(b)=S_2\, S_3-S_1 S_4$$ Since you have the estimate, you could use it as the starting point of Newton's method with analytical derivatives, since $$f'(b)=S_2S'_3+S'_2S_3-S_1S'_4-S'_1S_4$$ in which other summations will appear (I leave you the task of writing them - they are not difficult). The iterative process will converge very fast. So $b$ and then $a$. Edit For illustration purposes, consider the following data set $$\left( \begin{array}{cc} x & y \\ 0 & 46 \\ 1 & 46 \\ 2 & 45 \\ 3 & 43 \\ 4 & 41 \\ 5 & 38 \\ 6 & 36 \\ 7 & 33 \\ 8 & 30 \\ 9 & 28 \end{array} \right)$$ The preliminary step leads to $z=0.000176905 t+0.0216373$ from which the estimates $a=5652.76$ and $b=122.310$. Starting with this guess, the iterates of Newton's method will be $$\left( \begin{array}{cc} n & b_n \\ 0 & 122.31000 \\ 1 & 121.84307 \\ 2 & 121.85174 \end{array} \right)$$ from which $a=5635.22$. A nonlinear regression would provide exactly the same values for the parameters.
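A minimal sketch of that preliminary linearized fit on the illustrative data (my own illustration; numpy's `polyfit` stands in for the linear regression):

```python
import numpy as np

x = np.arange(10.0)
y = np.array([46, 46, 45, 43, 41, 38, 36, 33, 30, 28], dtype=float)

t, z = x**2, 1.0 / y                   # linearizing change of variables
alpha, beta = np.polyfit(t, z, 1)      # ordinary least squares: z ≈ alpha*t + beta
a_est, b_est = 1.0 / alpha, beta / alpha
print(a_est, b_est)                    # ≈ 5652.8 and ≈ 122.3, the starting estimates quoted above
```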
Finding the residues of poles
Your last approach is correct. Here is a slight variation of the argument that avoids Taylor expanding. If $\mathcal{F}$ is analytic at $\lambda$ and $\mathcal{F}(\lambda) = 0$ with $\mathcal{F}'(\lambda) \not= 0$ then we can write $\mathcal{F}(z) = (z-\lambda)g(z)$ where $g(z) = \frac{\mathcal{F}(z)}{z-\lambda}$. Now since $\lim\limits_{z\to\lambda}g(z) = \mathcal{F}'(\lambda)$ we have that $g$ is analytic at $z=\lambda$ (it has a removable singularity) and $$\frac{\mathcal{F}'(z)}{\mathcal{F}(z)} = \frac{1}{z-\lambda} + \frac{g'(z)}{g(z)}$$ Since $\mathcal{F}'(\lambda) \not= 0$ we have $g(\lambda)\not=0$ so the last term is analytic at $z=\lambda$ and the residue can be read off from the first term. As a sidenote to your first attempt: when $\mathcal{F}$ is entire and has zeros for all $z\in\{\lambda_n\}_{n=1}^\infty$ you cannot simply write $\mathcal{F}(z) = A(z-\lambda_1)(z-\lambda_2)\cdots$ as we can do when $\mathcal{F}$ is a polynomial. However there is a very nice theorem called Weierstrass factorization theorem that gives us the functional form of $\mathcal{F}$. It is slightly more complicated: $$\mathcal{F}(z)=z^m e^{h(z)} \prod_{n=1}^\infty \left(1 - \frac{z}{\lambda_n}\right)E_{p_n}\!\!\left(\frac{z}{\lambda_n}\right)$$ where $h$ is some analytic function, $m$ is the order of the zero at $z=0$, $p_n$ is some set of integers and $$E_{n} = \left\{\matrix{1 & n=0\\\exp(z + \frac{z^2}{2} + \ldots + \frac{z^n}{n}) & n> 0 }\right.$$ If you want you can also use this formula to solve your problem. By taking the logarithmic-derivative we get $$\frac{\mathcal{F}'(z)}{\mathcal{F}(z)} = \frac{m}{z} + \sum_{n=1}^\infty \frac{1}{z-\lambda_n} + \text{(analytic function)}$$ so the residue $\text{Res}\left[\frac{\mathcal{F}'(z)}{\mathcal{F}(z)};\lambda\right]$ is equal to the order of the zero of $\mathcal{F}$ at $z=\lambda$ which in your case is just $1$ since all zeros are simple.
Feedback on my proof that $(A\setminus B)\cup(B\setminus A)=(A\cup B)\setminus (A\cap B)$
I think you did quite well, and it's the most appropriate approach for you at this point in time: Just expanding on your work (filling in some gaps which were easy enough to assume): Assume $x \in (A\setminus B)\cup(B\setminus A)$. Then, by the definition of union, (a) $x \in A\setminus B$, or $x \in B\setminus A$. (a.i) $x \in A\setminus B$, then $x \in A$ and $x \notin B$, so $x \notin A \cap B$. or (a. ii) $x \in B\setminus A$, then $x \in B$ and $x \notin A$, so $x \notin B \cap A$. From (a), and (a.i) and (a.ii), we have ($x \in A$ or $x \in B$) and $x\notin (A\cap B)$. Hence, $x \in (A\cup B)$ and $x \notin (A\cap B)$. That is, $x \in (A\cup B)\setminus(A\cap B)$. Therefore, $(A\setminus B)\cup(B\setminus A)\subseteq(A\cup B)\setminus (A\cap B).\tag{1}$ Now, Assume $x \in [(A \cup B) \setminus (A \cap B)]$. Then $x \in (A\cup B)$ and $x\notin (A \cap B)$, by the definition of setminus. Then $x \in A$ or $x \in B$ by the definition of union. However we also know that $x \notin (A \cap B)$. Therefore it must be the case that either $(x \in A$ and $x \notin B)$ or ($x \in B$ and $x \notin A$). That is, either $x \in (A \setminus B)$ or $x \in (B \setminus A)$. So $x \in (A\setminus B) \cup (B \setminus A)$. Therefore, $(A\cup B)\setminus (A\cap B)\subseteq(A\setminus B)\cup(B\setminus A)\tag{2}$ Very nicely done, by the way.
What can I say about a map multiplication ?(2)
From what I can tell, the assertion is that the group homomorphism $f : \mathbb Z \to \mathbb Z$ defined by $f(r) = kr$ induces a surjective group homomorphism $g(r + m \mathbb Z) = kr + n \mathbb Z$ if and only if $n \,|\, m$ and $\gcd(k, n) = 1.$ For the case that $m = 2^4,$ $n = 2^3,$ and $r = 2,$ we have that $$k = \frac{rn}{\gcd(m, n)} = \frac{2^4}{2^3} = 2$$ so that $\gcd(k, n) = 2.$ Ultimately, there is no contradiction because $\gcd(k, n) \neq 1.$
Finite Element method: Matrix element
To solve your first doubt, just compute the derivative of $\Phi_i$. About your second doubt, just notice $(1-i)=-(i-1)h/h$.
Joint distribution of Brownian motion and its running maximum when time is different
If you consider the filtration $\mathcal{F}_s$ the increments of the Brownian motion for $t>s$ are independent from $\mathcal{F}_s$ Moreover, since $\mathbb{E}(B_t|\,\mathcal{F}_s)=B_s$: $B^*_s\geq B_s\Rightarrow \mathbb{E}(B^*_t|\,\mathcal{F}_s)=B^*_s$ $B^*_s=B_s\Rightarrow \mathbb{E}(B^*_t|\,\mathcal{F}_s)=B_s.$ This means that $$ \mathbb{P}(\mathbb{E}(B^*_t|\,\mathcal{F}_s)\geq y, B_s\leq x )=\mathbb{P}(B^*_s\geq y, B_s\leq x) $$ and $$ f_{B^*_t,\,B_s|\,\mathcal{F}_s}(x,y|\mathcal{F}_s)=f_{B^*_s,\,B_s}(x,y). $$ If you consider instead the filtration $\mathcal{F}_t$, then $B_s$ is not random anymore, but a realized value, hence you would not have a joint distribution, but merely a conditional one. I will try and expand on such distribution later on when I will have more time. Hope this helps.
Clique cover problem with general clique weights
It is NP-hard. Let's reduce independent set with bounded degree to this problem. We are given a graph $G$ with $n$ vertices and want to know if it has an independent set of size $k$. For each edge $u-v$ add a new vertex $x_{uv}$ and replace this edge with two: $u-x_{uv}$ and $v - x_{uv}$. For each vertex $u$ and its neighbors $v$ and $v'$, add the edge $x_{uv} - x_{uv'}$. This will be our new graph. Inclusion-maximal cliques in our new graph are of the form $\{u\} \cup \{x_{uv} | v \in N(u)\}$. As $G$ has bounded degree, the number of cliques will be polynomial (really linear) in $n$. Let's say that the weight of such an inclusion-maximal clique is $1$, and the weight of any other clique is $0$. Note that for any two vertices $u$, $v$ s.t. there was an edge $u-v$ in $G$, in any clique-cover at most one of $u$, $v$ is covered with an inclusion-maximal clique. On the other hand, if we have some independent set, we can cover the vertices from it with inclusion-maximal cliques (and cover the rest however we want, with singletons, for example). Thus independent sets of size $m$ correspond to clique-covers of weight $m$, so $G$ has an independent set of size $k$ iff our new graph has a clique-cover of weight at least $k$.
Simple trig integration
Did you use degrees or radians while calculating it? The result is correct in radians.
Prove the series converges
Can't you just use comparison? $$0 \leq \frac{\ln(n+1)}{n^2} =o\left(\frac{1}{n^{3/2}}\right)$$ As for your proof, you forgot to apply the $n$-th root to the logarithm: $$ \ln^{\frac{1}{n}}(n+1) = \exp\left( \frac{\ln\ln(n+1)}{n}\right) \xrightarrow[n\to\infty]{} e^0=1 $$
How to find the extrema of the function $f(x,y)=4x^2+2y^2+10$ subject to the restriction $4x^2+y^2=4$
Hint (without Lagrange): Under the restriction your function is $$ f(x,y) = y^2 + 14 $$ Since $0\leq y^2 \leq 4$, you can find the extreme values pretty fast.
General solution of $x^{ 2 }\left( y-x\frac { dy }{ dx } \right) =y{ \left( \frac { dy }{ dx } \right) }^{ 2 }$
$${ x }^{ 2 }\left( y-x\dfrac { dy }{ dx } \right) =y{ \left( \dfrac { dy }{ dx } \right) }^{ 2 }.$$ Rewrite the equation in terms of the variable $u=x^2$ (so that $\dfrac{dy}{dx}=2x\dfrac{dy}{du}$): $$y-2u\dfrac { dy }{ du } =4y{ \left( \dfrac { dy }{ du} \right) }^{ 2 }.$$ Multiply both sides by $y$ ($y \ne 0$; note that $y=0$ is a solution of the DE): $$y^2-2uy'y =4y^2{ \left( y' \right) }^2$$ $$w=uw' +{w'}^2$$ where $u=x^2$ and $w=y^2$. This is Clairaut's differential equation, $$y=xy'+f(y'),$$ whose general solution is $$y=Cx+f(C).$$ So that: $$w=Cu+f(C)=Cu+C^2$$ $$\boxed {y^2(x)=C(x^2+C)}$$ is the general solution of the differential equation. You still need to find the singular solution.
Calculate Probability that at least 1/4 stocks will be a total loss, given 40% of all stocks are at a total loss
For at least one stock to be a total loss is for it not to be the case that all stocks avoid a total loss. If the probability of $X$ happening is $P(X)$, then the probability of $X$ not happening is $1-P(X)$. So the probability of at least one stock being a total loss is $1 -$ the probability that all stocks are not total losses. If the probability of $X$ is $P(X)$, then the probability of $X$ occurring $n$ times out of $n$ is $(P(X))^n$. So the probability that all stocks are not total losses is $($the probability one stock is not a total loss$)^4$. And repeating myself: if the probability of $X$ happening is $P(X)$, then the probability of $X$ not happening is $1-P(X)$. So the probability that one stock is not a total loss is $1 -$ the probability that a stock is a total loss. And the probability that a stock is a total loss is $0.4$.
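Putting the pieces together (and assuming the four stocks' outcomes are independent): $P(\text{at least one total loss}) = 1-(1-0.4)^4 = 1-0.6^4 = 0.8704$.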
Commuting analogue of the cross product
I claim that the only such product is the trivial product. Observe that the datum of a bilinear form $m : V \times V \to V$ is equivalent to the datum of a tensor $\omega \in (V^\ast)^{\otimes 3}$ via $$ \forall x, y, z \in V, \quad \omega(x,y,z) = \langle x,m(y,z) \rangle. $$ Then $m$ is commutative if and only if $\omega \in V^\ast \otimes (S^2 V^\ast)$ and satisfies the orthogonality condition if and only if $\omega \in (\wedge^2 V^\ast) \otimes V^\ast$, so that $m$ is an operation of the desired form if and only if $$ \omega \in V^\ast \otimes (S^2 V^\ast) \cap (\wedge^2 V^\ast) \otimes V^\ast. $$ But now, suppose we have such a tensor $\omega$. Then for any $x$, $y$, $z \in V$, $$ \omega(x,y,z) = -\omega(y,x,z) = -\omega(y,z,x) = \omega(z,y,x) = \omega(z,x,y) = -\omega(x,z,y) = -\omega(x,y,z), $$ so that $\omega(x,y,z) = 0$. If you like, the underlying reason why any product of the desired form must be zero is that the subspaces $V^\ast \otimes (S^2 V^\ast)$ (corresponding to symmetry) and $(\wedge^2 V^\ast) \otimes V^\ast$ (corresponding to the orthogonality condition) of $(V^\ast)^{\otimes 3}$ have trivial intersection, as a result of the behaviour of the canonical representation of the permutation group $\mathfrak{S}_3$ on $(V^\ast)^{\otimes 3}$ by permutation of indices. Explicitly, since $V^\ast \otimes (S^2 V^\ast)$ is the $+1$ eigenspace of the permutation $(2\,3) \in \mathfrak{S}_3$ and $(\wedge^2 V^\ast) \otimes V^\ast$ is the $-1$ eigenspace of the permutation $(1\,2) \in \mathfrak{S}_3$, it follows that $V^\ast \otimes (S^2 V^\ast) \cap (\wedge^2 V^\ast) \otimes V^\ast$ is in the $-1$ eigenspace of $$ (2\,3)(1\,2)(2\,3)(1\,2)(2\,3)(1\,2) = e \in \mathfrak{S}_3, $$ which, of course, is trivial.
Question about subsequential limits and limit superiors
The set of all subsequential limits is $S = \{0,1\}$, with $\limsup s_n = 1$ and $\liminf s_n = 0$.
Coordinates of the meeting point of two tangents (fun)
I think that explicit computations is the way in this question. Rewrite $C_1$ as $$C_1: (x+2)^2 + (y-8)^2 = 45$$ The line from the center of $C_1$, which is $(-2,8)$, to the point $(-5,14)$ is given by $$L_1: y=-2x+4$$ So the tangent to $C_1$ at $(-5,14)$ is perpendicular to $L_1$ (so it has gradient $1/2$), and since it must pass through $(-5,14)$, it is given by $$L_2: y = \frac 12 x+\frac {33}{2}$$ Rewrite circle $C_2$ as $$C_2: (x+28)^2+(y+5)^2 = 65$$ Solve the two equations $L_2$ and $C_2$ simultaneously to obtain the intersection points $$A = (-27,3) \qquad B = (-35, -1)$$ The center of $C_2$ is $O_2 = (-28,-5)$, and the lines from $O_2$ to each of $A$ and $B$ are respectively given by $$O_2A : y=8x+219 \qquad O_2B: y = -\frac 47 x-21$$ So the tangents to $O_2$ at $A$ and $B$ have respective gradients $- 1/8$ and $7/4$, and are hence given, respectively, by $$L_3: y = -\frac 18 x -\frac 38 \qquad L_4: y = \frac 74 x+ \frac{241}{4}$$ Finally, solve $L_3$ and $L_4$ simultaneously to obtain the intersection point being $$\biggl(-\frac {97}{3}, \frac {11}{3}\biggl)$$
Finding $9202 \pmod {15}$
The simplest approach is just to divide $9202$ by $15$ and find the remainder. In this case $9202=15\cdot 613+7$ so $9202 \equiv 7 \pmod {15}$ Your approach does not strictly work because if $c=5$ you cannot have $ac=9202$ because it is not a multiple of $5$. You could reduce $9202$ modulo $3$ and $5$ and use the Chinese Remainder Theorem to combine the results.
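A one-line check (not in the original answer):

```python
print(divmod(9202, 15))   # (613, 7), so 9202 = 15*613 + 7 and 9202 ≡ 7 (mod 15)
```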
How do you find the sum of this infinite series?
Hint: Note that $$(2+1)^n = 2^n + n 2^{n-1} + \binom{n}{2} 2^{n-2} + \binom{n}{3} 2^{n-3} + \cdots + n 2^1 + 1.$$
Are two topologies that contain the subbase of each other equal?
$\mathcal{S}_1$ is a subbase for $\mathcal{O}_1$, which means that $\mathcal{O}_1$ is the smallest topology on $X$ that contains $\mathcal{S}_1$ as a subset. If now $\mathcal{S}_1 \subseteq \mathcal{O}_2$, that minimality of $\mathcal{O}_1$ (as $\mathcal{O}_2$ is a topology that contains $\mathcal{S}_1$) gives us that $$\mathcal{O}_1 \subseteq \mathcal{O}_2$$ and the reverse inclusion follows in the same way, mutatis mutandis. So we indeed have equality of topologies.
What is the expected performance of a team in a contest?
Using the Hockey-stick formula, you get that $$\sum _{k=1}^{n-m}mC_{n-k}^{m-1}=\sum _{k=m}^{n-1}mC_{k}^{m-1}=m(C_{n}^m-1).$$ Just a comment for the confused reader: $C_n^m=\binom{n}{m}.$
I forgot how to solve two systems of equations, I am trying to solve these two equations?
$$1=r^2-1+2a-a^2$$ $$4=r^2-16+8a-a^2$$ so $r^2=2-2a+a^2=20-8a+a^2 \iff 6a=18 \iff a=3 $ $$r^2=5 \iff r=\sqrt{5} \lor r=-\sqrt{5}$$
Second Borel–Cantelli lemma intuition
They are not independent: since $A_n \supseteq A_{n+1}$ and $P(A_n) \ne 1$ for $n > 1$ we have $$P(A_n \cap A_{n+1}) = P(A_{n+1}) \ne P(A_n) \cdot P(A_{n+1}), \qquad n > 1.$$
Randomly generated pairwise matrices
Unless you have some kind of bound on the maximum size of an entry or some probability distribution on that size you can't really do this, since there's no way to make sense of "a random positive number". But if you do have a distribution, or a bound so that you can use the uniform distribution, just choose the $n(n-1)/2$ elements above the diagonal independently, then fill in below the diagonal with reciprocals. Possible word of warning. If you need several of these and you want them uniformly distributed among the possibilities this algorithm may fail. For example, if you choose the entries above the diagonal uniformly distributed in $[0,M]$ for some large $M$ then those entries will mostly be greater than $1$ while the entries below the diagonal will be less than $1$. That may bias any statistical conclusions you want to draw.
integral $I=\int_{-\infty}^\infty e^{-\alpha x^{2k}}dx$
Any ideas what's going wrong here? Yes. $\displaystyle\int_{-\infty}^\infty e^{-x^{2k}}dx$ cannot be computed by expanding $e^t$ into its Taylor series, and then switching the order of summation and integration. It just doesn't work. All you get is an alternating sum of $\pm\infty$, which is undetermined.
Range of random vector
So, I think I finally found the way to solve my problem. Suppose $C$ is singular and $\mathbb{E}[\textbf{X}]=\bf{0}$. Then there exists $\textbf{v} \in \mathbb{R}^n$ such that $\bf{v}\neq\bf{0}$ and $C\bf{v}=\bf{0}$. Then $ \textbf{v}^T C \textbf{v}=0 \implies \sum_i v_i \sum_j v_j C_{ij} =0 \implies Cov(\sum_i v_i X_i,\sum_j v_j X_j)=0 \implies Var(\sum_i v_i X_i)=0 \implies \mathbb{E}[(\textbf{X}^T\textbf{v})^2]=0$ The last implication holds because $Var(\textbf{X}^T\textbf{v})=\mathbb{E}[(\textbf{X}^T\textbf{v})^2]-\mathbb{E}[\textbf{X}^T\textbf{v}]^2$ and because $\mathbb{E}[\textbf{X}^T\textbf{v}]=\mathbb{E}[\textbf{v}^T\textbf{X}]=\textbf{v}^T\mathbb{E}[\textbf{X}]=0$ by hypothesis. It then follows that $(\textbf{X}^T\textbf{v})^2 = 0$ almost surely, so $\textbf{X}^T\textbf{v}=0$ almost surely. This shows that $\forall \textbf{v} \in Ker(C)$ $ \textbf{X}^T\textbf{v}=\textbf{0}$ almost surely $\implies \textbf{X} \in Ker(C)^{\perp}=Col(C)$ almost surely (since $Ker(C)$ is finite dimensional). If otherwise $\mathbb{E}[\textbf{X}]\neq\bf{0}$ one can take $\textbf{Y}=\textbf{X}-\mathbb{E}[\textbf{X}]$, which has the same covariance matrix as $\bf{X}$ and expected value $\bf{0}$.
Why is the residue of $f(z)=\cot(z)-\frac{1}{z}$ at $z=n\pi$ is different via diffrent approaches?
There is indeed a simple pole at $n\pi$, for $n\ne0$, because $$ \lim_{z\to n\pi}(z-n\pi)f(z)= \lim_{w\to0}wf(w+n\pi)= \lim_{w\to0}\left(w\cot w-\frac{w}{w+n\pi}\right)=1 $$ On the other hand, $$ \lim_{z\to0}f(z)=\lim_{z\to0}\frac{z\cos z-\sin z}{z\sin z}= \lim_{z\to0}\frac{z-z^3/2-z+z^3/6+o(z^3)}{z^2+o(z^2)}=0 $$ so at $0$ the function has a removable singularity (which becomes a zero of order $1$).
Applying permutations to transform expressions
What is happening for your first question is that $j$ is being extended by linearity and products. We know that $j(x_1)=x_2$, $j(x_2)=x_3$, and $j(x_3)=x_1$. Now, what would be $j(x_1-x_2)$? We actually don't know what it would be (it could be anything). However, in algebra, we like functions that preserve operations (homomorphisms). So, the most natural thing to do is $j(x_1-x_2)=j(x_1)-j(x_2)$. Similarly, when we want to deal with $j((x_1-x_2)(x_1-x_3))$, we haven't defined what this should be, but the definition that preserves the multiplication is $j(x_1-x_2)j(x_1-x_3)$. So, we can compute $$ j(\Delta)=j(x_1-x_2)j(x_1-x_3)j(x_2-x_3)=(j(x_1)-j(x_2))(j(x_1)-j(x_3))(j(x_2)-j(x_3)). $$ For your second question, factor $(-1)$ from $(x_2-x_1)$ and from $(x_3-x_1)$. Then, rearrange the factors.
Finding the angle between two line equations
Find the slope of each line. Find the angle of inclination of each line, using $\theta=\tan^{-1}m$. (Here, $\theta$ is the angle of inclination, $m$ is the slope.) Subtract the two angles. Handle the case where this difference is not an acute angle. (If you get a negative angle, take its absolute value. Also, when two lines intersect, they form two pairs of equal angles. Unless the lines are perpendicular, one pair will be acute and the other obtuse. You want to find the acute pair, so if you calculated the obtuse pair just subtract the value from pi radians or $180°$ to get the acute values.) There are other ways, such as finding vectors on the lines and using their dot products. But the way I showed uses simple, basic trig methods. The final answer is $$\pi-\left|\tan^{-1}(4)-\tan^{-1}(-1)\right|$$ converted to degrees, if the calculator is in radians mode, or $$180°-\left|\tan^{-1}(4)-\tan^{-1}(-1)\right|$$ if the calculator is in degrees mode.
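A quick numerical check of that final expression (my own sketch, assuming the two slopes are $4$ and $-1$ as in the formula):

```python
import math

theta = math.pi - abs(math.atan(4) - math.atan(-1))   # acute angle between the lines
print(math.degrees(theta))                            # ≈ 59.04 degrees
```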
Determine all subspaces of a vector space over a finite field
Your example is a vector space of dimension 2, so the only proper subspaces are those of dimensions 0 and 1. You have already accounted for dimension 0. A vector space of dimension 1 is spanned by a single nonzero vector: it consists of that vector together with all of its scalar multiples. So: pick a nonzero vector, gather all of its scalar multiples, and there's a proper subspace. Then pick a vector not in that subspace, and repeat the exercise. Repeat until no nonzero vectors are left out, and you will have all the proper subspaces.
Find the number of ways to put the balls into the boxes with no restrictions?
Yes you can: Stirling numbers of the second kind give you the correct answer for this one. The Stirling numbers of the second kind, written $S(n,k)$ or $\lbrace \textstyle {n \atop k}\rbrace$, count the number of ways to partition a set of $n$ labelled objects into $k$ nonempty unlabelled subsets. You have no restrictions, so I assume you can have empty boxes. Therefore the total number of ways will be the number of ways to put all balls in 1 box + the number of ways to put all balls in 2 boxes + the number of ways to put all balls in 3 boxes. There is only 1 way of having everything in the same box. Using the Stirling numbers, there are $S(7, 2)$ ways of having the balls in 2 different boxes and $S(7, 3)$ ways of having them split between 3 boxes. Now just use the recurrence formula to calculate it, as worked out below.
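Carrying out the calculation (my own completion, using the recurrence $S(n,k)=k\,S(n-1,k)+S(n-1,k-1)$ or the inclusion-exclusion formula): $S(7,1)=1$, $S(7,2)=63$, $S(7,3)=301$, so the total is $1+63+301=365$.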
Prove that the probability that $x+y\leq 1,$ given that $x^2+y^2\geq \frac{1}{4}$ is $\frac{8-\pi}{16-\pi}$.
For the conditional probability you need to divide by the "given" area; in this case $\operatorname{Area}(x^2+y^2 \ge \frac 14)=1-\frac{\pi}{16}$.
Probability of $a>cb$, $a$ and $b$ are uniform in some ranges.
$$\mathbb{P}(a > cb) = \int_{b'=0}^{\min(n,1/c)} \mathbb{P}(a>cb') \mathbb{P}(b \in (b',b'+db']) = \int_{b'=0}^{\min(n,1/c)} \mathbb{P}(a > cb') \dfrac{db'}{n}\\ = \int_{b'=0}^{\min(n,1/c)} \dfrac{1-cb'}{1} \dfrac{db'}{n} = \left . \dfrac{b'-cb'^2/2}{n} \right \rvert_{0}^{\min(n,1/c)} = \min(1,1/cn) - \dfrac{\min(cn,1/cn)}{2}.$$ EDIT As @Dilip Sarwate rightly points out, I am assuming that $a$ and $b$ are independent random variables. EDIT Another method that doesn't require integration is as follows. Look at the variable $d = \dfrac{a}{c}$. $d$ has uniform distribution on the interval $\left[0, \dfrac1c \right]$. We now want $$\mathbb{P}(d>b).$$ If $\dfrac1c < n$, then for $d > b$, $b$ first needs to fall in the interval $\left[0,\dfrac1c \right]$, and whenever it falls within this interval, half of the time it will be less than $d$ and the rest of the time greater than $d$. Hence the desired probability is $$\underbrace{\dfrac{1/c}{n}}_{\text{fall in the interval $\left[0,\frac1c \right]$}} \times \underbrace{\dfrac12}_{\text{half of the times it will be less than $d$}} = \dfrac1{2cn}$$ If $\dfrac1c > n$, then for $d > b$: if $d$ falls in the interval $\left[n,1/c \right]$ then it is always greater than $b$; otherwise $d$ falls in the interval $\left[0,n \right]$, and whenever it falls within this interval, half of the time it will be greater than $b$ and the rest of the time less than $b$. Hence the desired probability is $$\underbrace{\dfrac{1/c-n}{1/c}}_{\text{falls in the interval $\left[n,1/c \right]$}} + \underbrace{\dfrac{n}{1/c}}_{\text{falls in the interval $\left[0,n \right]$}} \times \underbrace{\dfrac12}_{\text{half of the times it will be greater than $b$}} = 1 - \dfrac{cn}{2}$$
Exponents inequality
Looks good. Now ask where the maximum of $\frac{\log y}{y}$ is attained.
Integral $\int _1^{\infty }\sin^2 \left(\frac{3}{x} \right)dx$
As pointed out by David Peterson in the comments, the error you make is splitting $$\frac{1}{2}\int_1^{\infty}1-\cos\left(\frac{6}{x}\right)\ dx= \int _1^{\infty }\frac{1}{2}dx - \frac{1}{2}\int _1^{\infty }\cos\left(\frac{6}{x}\right)\ dx$$ which you can only do when both integrals on the RHS converge. In fact, the original improper integral is convergent. Since $$0\leq \sin^2\left(\frac{3}{x}\right)\leq \frac{9}{x^2}$$ and $$\int_1^{\infty}\frac{9}{x^2}\ dx$$ converges so does $$\int_1^{\infty} \sin^2\left(\frac{3}{x}\right) \ dx$$ by the comparison test. In fact $$\int_1^{\infty} \sin^{\alpha}\left(\frac{1}{x}\right)\ dx$$ converges for every $\alpha>1.$
Mean value theorem for complex analysis?
Hint: The purpose of $G$ containing the line segment from $a$ to $b$ is that this allows you to integrate $f'$ along the straight line linking $a$ and $b$.
How can I prove that the set $\{1, x, x^2, ..., x^n, ...\}$ is a linearly independent set in $\mathbb{Q} [x]$
Any finite subset of $\{1, x, x^2,\ldots\}$ is contained in $\{1, x, \ldots, x^n\}$ for some $n$. So it suffices to consider $\{1, \ldots, x^n\}$. Next, consider the Wronskian \begin{align} W[1,x, \ldots, x^n]= \begin{vmatrix} 1 & x & x^2 & \ldots & x^n\\ 0 & 1 & 2x & \ldots & nx^{n-1}\\ \vdots & 0 & 2 & \ldots & \vdots\\ \vdots & \vdots & 0 & \ldots &\vdots\\ 0 & 0 & \ldots & 0 & n! \end{vmatrix}=\prod^n_{k=0}k! \neq 0 \end{align} which means $\{1, \ldots, x^n\}$ are linearly independent.
any simple way to integrate products involving sign functions?
It does generalize. Let $g(u) = f'(u) \prod_{i = 1}^n \operatorname{sgn} (u - u_i)$. Suppose $u_i$ are ordered, then $g$ equals $f'$ times a constant on $(u_i, u_{i + 1})$. Then $f$ times the same constant is an antiderivative for $g$ on $(u_i, u_{i + 1})$. The jumps in the antiderivative between the intervals will be equal to $2 (-1)^{n - i} f(u_i)$, therefore an antiderivative which has only removable discontinuities at $u_i$ is $$G(u) = f(u) \prod_{i = 1}^n \operatorname{sgn}(u - u_i) - 2 \sum_{i = 1}^n (-1)^{n - i} f(u_i) H(u - u_i),$$ where $H$ is the unit step function. When $G$ is defined by continuity at $u_i$, the integral of $g$ over any $[a, b]$ is $G(b) - G(a)$.
Expressing Differential Form in Different Coordinates
I don't know how Lee explains this, but as soon as we pick local coordinates $x^i$ we have 3 related (but distinct) things: Coordinate functions $x^i$. Coordinate vector fields $\partial_i = \partial/\partial x^i$. Coordinate covector felds $dx^i$. These satisfy a bunch of relations, e.g. $\partial_j x^i = \delta_j^i$ (Kronecker delta) and $dx^i(\partial_j) = \delta_j^i$. If we have a map $x = \phi(y)$ given in coordinates by $x^i = \phi^i(y^1, \cdots, y^m)$, there is a notion of pushforward of a tangent vector. If we have some tangent vector (in the $y$ coordinates which is given by $$ v = \sum_i v^i \frac{\partial}{\partial y^i} $$ then we obtain a vector $\phi_\ast v$ in the $x$ coordinates by $$ \phi_\ast v = \sum_{ij} v^i \frac{\partial \phi^j}{\partial y^i} \frac{\partial}{\partial x^j}. $$ If you think of tangent vectors as being derivations of smooth functions, this is completely obvious: it's just the chain rule. Ok, so if you understand the chain rule, then you understand pushforwards, and if you understand pushforwards you understand pullbacks, since they are just dual to pushforwards. Remember that a $p$-form is just a (skew) multilinear function. So to define a $p$-form in the $y$ coordinates you just have to say how to act on tangent vectors. But we already know how to push tangent vectors forward to the $x$ coordinates. So if $\omega$ is a $p$-form on in the $x$-coordinates, we obtain a $p$-form in the $y$-coordinates by the formula $$ (\phi^\ast \omega)_y(v_1, \cdots, v_p) = \omega_{\phi(y)}(\phi_\ast v_1, \cdots, \phi_\ast v_p) $$ as you wrote above. To emphasize: this is all just the chain rule and linear algebra. Now let's look at your particular example. We start with coordinates $x,y$ and want to pass to coordinates $r,t$, via $$ x = r \cos t $$ $$ y = r \sin t $$ Call this map $(r,t) \mapsto (x,y)$ $\phi$. So, how do we push vectors forward? By the chain rule, we have \begin{align} \phi_\ast(\frac{\partial}{\partial r}) &= \frac{\partial x}{\partial r} \frac{\partial}{\partial x} + \frac{\partial y}{\partial r} \frac{\partial}{\partial y} \\ &= \cos t \frac{\partial}{\partial x} + \sin t \frac{\partial}{\partial y} \end{align} Similarly, \begin{align} \phi_\ast(\frac{\partial}{\partial t}) &= \frac{\partial x}{\partial t} \frac{\partial}{\partial x} + \frac{\partial y}{\partial t} \frac{\partial}{\partial y} \\ &= -r\sin t \frac{\partial}{\partial x} + r \cos t \frac{\partial}{\partial y} \end{align} Now, to get the pullback of $dx \wedge dy$, we just need to evaluate it on these pushforwards (now edited to include more detail): \begin{align} (dx \wedge dy)(\phi_\ast\partial_r, \phi_\ast\partial_t) &= (dx \wedge dy)(\cos t \partial_x + \sin t \partial_y, -r\sin t \partial_x + r\cos t \partial_y) \\ &= r\cos^2 t (dx\wedge dy)(\partial_x, \partial_y) -r\sin^2 t (dx\wedge dy)(\partial_y, \partial_x) \\ &= r\cos^2 t + r \sin^2 t \\ &= r \end{align} Here, in going from the first line to the second I used the bilinearity and skew-symmetry properties of forms (e.g. $(dx\wedge dx)(\partial_x, \partial_x) = 0)$) to expand it. To go from the second to the third I used the fact that $(dx\wedge dy)(\partial_x, \partial_y) = 1$, which is true because $dx$ and $dy$ are dual to $\partial_x$ and $\partial_y$. That is, $$ \phi^\ast (dx \wedge dy)(\partial_r, \partial_t) = r $$ So we have $$ \phi^\ast (dx \wedge dy) = r dr \wedge dt. $$ Note that this is the same result you'd obtain if you just differentiated $x = r\cos t$ and $y = r\sin t$ and then wedged the results together. 
The reason this works is because of two facts: Pullbacks commute with the exterior derivative $d$. The exterior derivative of a coordinate function $x^i$ is exactly the coordinate covector field $dx^i$. (Hence the notation!)
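If you ever want to double-check such a coordinate computation mechanically, here is a small sketch using sympy (the symbols are just the polar coordinates above); the pullback of $dx \wedge dy$ is the determinant of the Jacobian of $(x,y)$ with respect to $(r,t)$ times $dr \wedge dt$:
import sympy as sp
r, t = sp.symbols('r t', positive=True)
x = r * sp.cos(t)
y = r * sp.sin(t)
J = sp.Matrix([[sp.diff(x, r), sp.diff(x, t)],
               [sp.diff(y, r), sp.diff(y, t)]])
print(sp.simplify(J.det()))   ## r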
The probability density function $f(x)$ of a random variable $X$ is symmetric about $0$. Then we have:
Since $f(x)$ is an even function, $F(0)=\frac12$ and $F(x)-\frac12$ is an odd function. Then $$\int_{-2}^2F(x)\,dx=\int_{-2}^2(F(x)-1/2)\,dx+\int_{-2}^2\frac12\,dx$$ Since the integral of an odd function from $-a$ to $a$ is zero: $$=0+\int_{-2}^2\frac12\,dx=\frac12\cdot4=2$$ and the answer is 2.
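As a quick numerical sanity check (a sketch, taking the standard normal as one particular density symmetric about $0$; any other such density should give the same answer):
from scipy.stats import norm
from scipy.integrate import quad
value, _ = quad(norm.cdf, -2, 2)   # integrate the CDF F(x) from -2 to 2
print(value)   ## 2.0 (up to quadrature error)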
Sobolev inequality with a constant independent of the support of the function
This is only true for the critical exponent $p^*=np/(n-p)$ and $p<n$. The one-dimensional counterexample you sketched doesn't really apply. (Although, for $n=1$ it's okay to use $p=1$ and $p^*=\infty$; then the statement is true with $C=1/2$.) At the critical exponent, both sides of the inequality scale the same way: replacing $u$ with $u(\lambda x)$ yields the same power of $\lambda$ on both sides. Hence, whatever constant works for the unit ball also works for all bounded domains, hence for all compactly supported functions. And then it works for $W^{1,p}_0(\mathbb{R}^n)$ by the density of compactly supported functions.
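To spell out the scaling computation at the critical exponent: if $u_\lambda(x)=u(\lambda x)$, then $$ \|\nabla u_\lambda\|_{L^p}^p=\lambda^{p-n}\|\nabla u\|_{L^p}^p \qquad\text{and}\qquad \|u_\lambda\|_{L^{p^*}}^{p^*}=\lambda^{-n}\|u\|_{L^{p^*}}^{p^*}, $$ so $\|\nabla u_\lambda\|_{L^p}=\lambda^{1-n/p}\|\nabla u\|_{L^p}$ and $\|u_\lambda\|_{L^{p^*}}=\lambda^{-n/p^*}\|u\|_{L^{p^*}}$. Since $1-n/p=-n/p^*$ exactly when $p^*=np/(n-p)$, both sides of $\|u\|_{L^{p^*}}\le C\|\nabla u\|_{L^p}$ pick up the same power of $\lambda$, which is the scaling invariance used above.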
Sum of all the elements of the coset of a subgroup
Let $a_1,\dots,a_k$ be the elements of a coset of a subgroup $H$ of the multiplicative group of $\mathbb Z_p$, where $H$ has size greater than $1$; say the coset is $a_1H$. If we take all of the elements in the coset and multiply them by some $h\in H$ with $h\neq 1$, we get the elements $a_1,\dots,a_k$ back in a different order. It follows that $(a_1+ \dots + a_k) = h(a_1+ \dots + a_k)$, i.e. $(h-1)(a_1+\dots+a_k)\equiv 0 \pmod p$; since $h\neq 1$, the factor $h-1$ is invertible modulo $p$, and so the sum is $0\bmod p$.
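A brute-force check for a small prime (a sketch; $p=13$ and the subgroup generated by $3$ are arbitrary choices):
p = 13
H = {pow(3, k, p) for k in range(p)}   # the subgroup generated by 3 mod 13, namely {1, 3, 9}
for a in range(1, p):
    coset = {a * h % p for h in H}
    print(a, sorted(coset), sum(coset) % p)   ## the last entry is always 0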
Parity of the sum of three dice's faces.
In route 2 you forgot the case of one odd and two evens. But there is a simpler way to see that the result is $1/2$. Imagine that you throw two dice first, check the sum, and only then throw the third. It's clear that half the possible throws of the third die will result in the total sum being even, and half of them will result in the sum being odd.
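Enumerating all $6^3$ outcomes confirms the count (a small sketch of the argument above):
from itertools import product
outcomes = list(product(range(1, 7), repeat=3))
even = sum(1 for roll in outcomes if sum(roll) % 2 == 0)
print(even, len(outcomes), even / len(outcomes))   ## 108 216 0.5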
Show the following map is jointly convex.
The proof has three steps: Apply Lieb's concavity theorem to conclude that $$ (A,B)\mapsto \text{Tr}\,(B^{1-p}A^p) $$ is jointly concave. Construct a new function $$ (A,B)\mapsto \underbrace{\frac{1}{p-1}}_{\text{negative}}\Big(\underbrace{\text{Tr}\,(B^{1-p}A^p)-\text{Tr}\,A}_{\text{concave}}\Big). $$ It is convex (=minus concave). A limit of convex functions is convex. Note that the limit of the function in Step 2 as $p\to 1$ is the derivative of the function at $p=1$. Take the derivative w.r.t. $p$ $$ (A,B)\mapsto \text{Tr}\,(-B^{1-p}\log(B)\, A^p+B^{1-p}A^p\log(A)) $$ and set $p=1$. Change the order of $A$ and $\log(B)$ under the trace (possible by cyclicity of the trace). Done.
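As a scalar sanity check of Steps 2 and 3 (a sketch; for commuting or $1\times 1$ matrices $A=a$, $B=b$ the limit should be the classical relative entropy $a(\log a-\log b)$):
import sympy as sp
a, b, p = sp.symbols('a b p', positive=True)
expr = (b**(1 - p) * a**p - a) / (p - 1)
print(sp.simplify(sp.limit(expr, p, 1)))   ## a*log(a) - a*log(b), possibly written as a*log(a/b)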
Special case of Pillai's conjecture
Yes, this follows from the following more general result by taking $P = \{2,3\}$: Let $P$ be a finite set of primes, and let $S$ be the set of natural numbers whose prime factors are entirely contained in $P$. Then the gaps between elements of $S$ grows to infinity, that is for each $k$ there are only finitely many distinct pairs of $x,y \in S$ such that $|x-y| < k$. In other words, the set of $r$-smooth numbers for any fixed $r$ is very sparse. This result can be deduced easily from standard theorems on finiteness for high-genus polynomial Diophantine equations. It was previously discussed on this site here: Gap between smooth integers tends to infinity (Stoermer-type result)? And some of the details were described in my answer to a related question: https://math.stackexchange.com/a/725149/30402
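As a small empirical illustration of the $P=\{2,3\}$ case (a sketch; the cutoff $10^6$ is arbitrary):
limit = 10**6
smooth = sorted(2**a * 3**b for a in range(20) for b in range(13) if 2**a * 3**b <= limit)
gaps = [y - x for x, y in zip(smooth, smooth[1:])]
print(list(zip(smooth, gaps))[:8])   ## (1, 1), (2, 1), (3, 1), (4, 2), (6, 2), (8, 1), (9, 3), (12, 4)
print(min(gaps[20:]))                ## 13, the gap between 243 and 256; nothing smaller occurs later in this range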
Residue theorem for evaluating integral
First, we notice that $\gamma$ goes around the circle of radius $2$ centered at $i$ twice (since the parameter runs over $4\pi$ instead of $2\pi$). Second, the only singularities inside the circle of radius $2$ with center $i$ are $0$ and $i$, so we only have to consider the residues at these two singularities. Third and last, we add the residues with multiplicities (where the multiplicity is the number of times the closed path goes around the singularity, counting counterclockwise turns as positive and clockwise turns as negative); in this particular case, $\gamma$ goes around $0$ twice counterclockwise and around $i$ twice counterclockwise, so we get $$\frac{1}{2\pi i}\int_{\gamma }\, \left(e^{\frac{1}{z} }+\frac{\sin(z^2)}{z+2} -\frac{5}{z-i} \right) \, dz=2\mathrm{Res}_{z=0}\left(e^{\frac{1}{z} }+\frac{\sin(z^2)}{z+2} -\frac{5}{z-i} \right)+2\mathrm{Res}_{z=i}\left(e^{\frac{1}{z} }+\frac{\sin(z^2)}{z+2} -\frac{5}{z-i} \right)=2\cdot 1+2\cdot (-5)=-8\text{.}$$ Hence $$\int_{\gamma }\, \left(e^{\frac{1}{z} }+\frac{\sin(z^2)}{z+2} -\frac{5}{z-i} \right) \, dz=-16\pi i\text{.}$$ Also note, although not important for the integral as its singularity is not surrounded by $\gamma$, that $$\mathrm{Res}_{z=-2}\left(e^{\frac{1}{z} }+\frac{\sin(z^2)}{z+2} -\frac{5}{z-i} \right)=\mathrm{Res}_{z=-2}\frac{\sin(z^2)}{z+2}=\sin((-2)^2)=\sin(4)$$ and not $\sin(-4)$.
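A quick numerical check (a sketch; since the integrand is smooth and periodic along the contour, a plain Riemann sum over the two turns converges very fast):
import numpy as np
N = 40000
theta = np.linspace(0, 4 * np.pi, N, endpoint=False)   # parameter for the two turns
z = 1j + 2 * np.exp(1j * theta)                         # circle of radius 2 around i
dz = 2j * np.exp(1j * theta)                            # dz/dtheta
f = np.exp(1 / z) + np.sin(z**2) / (z + 2) - 5 / (z - 1j)
print(np.sum(f * dz) * (4 * np.pi / N))                 ## approximately -50.265j = -16*pi*i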
problem to show that a map is continuous and open
Let's show that for an open neighbourhood $V$ of $e$, the corresponding neighbourhood $[V]$ is also open. Let $\mathscr{F}\in [V]$ be arbitrary, so $V \in \mathscr{F}$. The filter basis $$\mathscr{F}_0 = \{ F \cdot W : F \in \mathscr{F}, W \in \mathcal{N}_e \}$$ generates a Cauchy filter that is coarser than $\mathscr{F}$, and since $\mathscr{F}$ is a minimal Cauchy filter, it generates $\mathscr{F}$. Hence there is an $F \in \mathscr{F}$ and a $W\in\mathcal{N}_e$ with $F\cdot W \subset V$. But that means $\mathscr{F}[W] \subset [V]$, so $[V]$ is a neighbourhood of $\mathscr{F}$. Since $\mathscr{F}$ was arbitrary, $[V]$ is open. Then we see that $\alpha(V) = [V] \cap \alpha(G)$ is open in $\alpha(G)$, hence $\alpha$ is open at $e$; also $\alpha^{-1}([V]) = V$, so $\alpha$ is continuous at $e$. Since $\alpha$ is a homomorphism, $\alpha$ is continuous and open (the latter as a mapping to $\alpha(G)$, not in general to $\widehat{G}$, of course).
Is there a way to visualize an incompressible torus in a 3-manifold?
Take a solid torus $\hat{T}$ in $S^3$, let $M=S^3- \hat{T}$. Then, unless $\hat{T}$ is unknotted in $S^3$, the boundary of $M$ will be an incompressible torus in $M$. If you want to get an incompressible torus in a closed manifold, glue two such manifolds $M_1, M_2$ along their boundary tori.
Coloring the Mandelbrot set using iterated points
K.I. Martin's paper has 2 techniques. The first is perturbation, where iterations of a pixel can be performed in low precision relative to a high precision reference. The second is series approximation, which allows all the pixels in a neighbourhood of the high precision reference to be initialized after skipping some iterations. You can combine both techniques. Typically you would use series approximation to initialize your pixels, then iterate them with the perturbed iteration formula until they or the reference escape, or you reach a maximum iteration limit (if the reference escapes early, you need to find a better one). An issue not mentioned in the paper is the occurrence of "glitches", where the dynamics of the pixel differ too much from the dynamics of the reference. These can be detected and corrected by using a more appropriate reference. More information on this stuff can be found (historical archive) at http://www.fractalforums.com and (current research) at https://fractalforums.org .
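To make the perturbation idea concrete, here is a minimal sketch (my own illustration, not code from Martin's paper; series approximation and glitch handling are omitted, and the use of mpmath for the high precision reference is just one possible choice). The reference orbit $Z_{n+1}=Z_n^2+C$ is computed once in high precision, and each pixel only iterates its low precision offset $\delta_n$ from it, via $\delta_{n+1}=2Z_n\delta_n+\delta_n^2+\Delta c$:
from mpmath import mp, mpc

mp.prec = 200   # high precision for the reference orbit only

def reference_orbit(C, max_iter):
    Z = mpc(0)
    orbit = [complex(Z)]
    for _ in range(max_iter):
        Z = Z * Z + C
        orbit.append(complex(Z))   # store Z_0 ... Z_max_iter rounded to double precision
    return orbit

def perturbed_escape_time(orbit, dc, max_iter):
    delta = 0j   # low precision offset of the pixel from the reference orbit
    for n in range(max_iter):
        delta = 2 * orbit[n] * delta + delta * delta + dc
        if abs(orbit[n + 1] + delta) > 2:   # full pixel value Z_{n+1} + delta_{n+1} escaped
            return n + 1
    return max_iter

# Example use (hypothetical reference point and pixel offset):
# orbit = reference_orbit(mpc('-0.75', '0.1'), 1000)
# print(perturbed_escape_time(orbit, 1e-9 + 0j, 1000))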
Prove that for each $n \in Z$ the $3n^2-1$ is not the square of an integer
Hint: $3$ never divides $p^2+1$ for any integer $p$. To prove this, you can consider the cases $p=3k$, $p=3k+1$, $p=3k+2$ for $k \in \mathbb{Z}$.
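A one-line check of the hint (a sketch; residues mod $3$ only depend on $p \bmod 3$, so the three cases suffice), keeping in mind that $3n^2-1=m^2$ would force $3$ to divide $m^2+1$:
print({(p * p + 1) % 3 for p in range(3)})   ## {1, 2}, so never 0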
Weird qualification about Cauchy derivative-integral formula
Note that in the statement of the Cauchy Integral Formula (which share the same necessary conditions as the resulting differentiation formulae), the closure of the disk $D$ must lie completely within the domain of holomorphicity $U.$ Since we don't know that $f$ is holomorphic on an open set containing the entire unit disk, the limit argument has to be used.
Relation of Brownian Motion to Helmholtz Equation
The general form for the infinitesimal generator of a continuous diffusion in $\mathbb{R}^n$ is $$ Af(x) = \frac12\sum_{ij}a_{ij}\frac{\partial^2 f(x)}{\partial x_i\partial x_j}+\sum_ib_i\frac{\partial f(x)}{\partial x_i}-cf(x).\qquad{\rm(1)} $$ Here, $a_{ij}$ is a positive-definite and symmetric $n\times n$ matrix, $b_i$ is a vector and $c$ is a non-negative scalar, with the coefficients $a,b,c$ functions of position $x$. Such operators are said to be semi-elliptic second order differential operators. The case with $c=0$ is the most common - being the generator of a Markov process (or semigroup). However, the $c > 0$ case does occur, and is then a generator of a submarkovian semigroup. The coefficients $a,b,c$ can be understood as follows: $a$ gives the covariance matrix of the motion over small time intervals (i.e., the level of noise). $b$ gives the mean over small time intervals (the drift) and $c$ is the rate at which the process is "killed". To be precise, a process $X_t$ can be modeled formally by adding an additional state $\it\Delta$ called the cemetery. So, we represent the state space for the killed diffusion as $\mathbb{R}^n\cup\{{\it\Delta}\}$. In a small time interval $\delta t$, the process has probability $c\delta t$ of being killed, in which case it jumps to the cemetery, and stays there. So, $X_t={\it\Delta}$ for all $t\ge\tau$ with $\tau$ being the (random) time when the process is killed. The terminology I am using here is taken from Revuz & Yor (Continuous Martingales and Brownian Motion), and will vary between different authors. Anyway, getting back to PDEs. Suppose we want to solve the PDE $A\psi(x)=0$ on an open domain $U\subseteq\mathbb{R}^n$ with boundary condition $\psi(x)=\psi_0(x)$ for $x$ on the boundary of $U$ ($\partial U$, say). You can do the following. Simulate the process $X_t$ with initial condition $X_0=x$. Wait until the first time $T$ at which it hits the boundary and, when this occurs (if the process doesn't get killed first, i.e., $T < \tau$), take the expected value. $$ \psi(x)=\mathbb{E}_x\left[\psi_0(X_T)1_{\{T < \tau\}}\right].\qquad\qquad{\rm(2)} $$ Then $\psi$ satisfies the PDE $A\psi=0$. This is all very general. Getting back to the Helmholtz equation, we can let $a_{ij}$ be the identity matrix and $b_i=0$, and $c$ is a constant. In that case our generator becomes $A\psi=\frac12\Delta\psi-c\psi$. [Edit: This is not quite the same as the Helmholtz equation, which has $c=-k^2$, because here we have $c > 0$. There is a sign difference which changes the behaviour of the solutions. See below] The process then is the following: run a Brownian motion starting from $x$ until the first time $T$ it hits the boundary. Decide if it has not been killed, which has probability $e^{-cT}$ conditional on $T$. If it hasn't, take the value $\psi_0(X_T)$. Finally, take the average of this process (e.g., using Monte Carlo). There is one practical issue here though. Throwing away all the paths on which the process gets killed is a bit wasteful, so you would simply multiply by the probability of not being killed on each path, rather than actually discarding them.
i.e., you simulate a regular Brownian motion, and then calculate $$ \psi(x)=\mathbb{E}_x\left[\psi_0(X_T)e^{-cT}\right].\qquad\qquad{\rm(3)} $$ We can even go the whole hog and solve $A\psi(x)=\varphi(x)$ for general $x$-dependent coefficients and a source term $\varphi$, $$ \begin{align} \psi(x)&=\mathbb{E}_x\left[1_{\{T<\tau\}}\psi_0(X_T)-\int_0^{T\wedge\tau}\varphi(X_t)\,dt\right]\\ &=\mathbb{E}_x\left[e^{-\int_0^Tc(\hat X_s)\,ds}\psi_0(\hat X_T)-\int_0^Te^{-\int_0^tc(\hat X_s)\,ds}\varphi(\hat X_t)\,dt\right]. \end{align}\qquad{\rm(4)} $$ Here, $X$ is the process killed at (state-dependent) rate $c$, and I'm using $\hat X$ for the process without killing, which requires multiplying by the survival probabilities $e^{-\int c(\hat X_s)\,ds}$ instead. One other area in which you have a '$-cf$' term in the PDE governing diffusions is in finance, and it occurs in two different, but closely related ways. Prices of financial assets are frequently modeled as diffusions (or even jump-diffusions), and the value of a financial derivative would be expressed as the expected value of its future value - under a so-called "risk-neutral measure" or "martingale measure" (which are just special probability measures). However, you need to take interest rates into account. If the rate is $r$, then you would multiply the future (time $t$) value by $e^{-rt}$ before taking the expected value, which is effectively the same as adding a $-rf(x)$ term to the generator. And, as in the general case above, $r$ can be a function of the market state. The second main way (which occurs to me) in which such terms appear in finance is due to credit risk. If a counterparty has probability $r\,dt$ of defaulting in any small time interval $dt$, then you would have a $-rf(x)$ term occurring in the diffusion. This is more in line with the "killing" idea discussed above, but behaves in much the same way as interest rates. Finally, I'll mention that the PDE in the more general time-dependent situation is of the form $\partial f/\partial t + Af =0$, where $f$ is the expected value of some function of the process at a future time (and not necessarily the first time it hits the boundary). As mentioned in the other answer this is sometimes known as the Feynman-Kac formula, generally by physicists, and also as the Kolmogorov backward equation by mathematicians. Actually, the backward equation in the Wikipedia link doesn't have the $-cf$ term, but it would in the more general case of diffusions with killing. The adjoint PDE applying to probability densities is known as the Fokker-Planck equation by physicists and as the Kolmogorov forward equation by mathematicians. Edit: As mentioned above, what we have here does not quite correspond to the Helmholtz equation, because of the sign of $c$, and the behaviour of the solutions does change depending on whether $c$ is positive or negative. In the probabilistic interpretation, $c > 0$ is the rate at which the process is killed. Looking at (3) and (4), we can see that solutions to the PDE will decay exponentially as we move further from the boundary. Furthermore, if the values on the boundary are non-negative, then $\psi$ has to be non-negative everywhere. The probabilistic method naturally leads to $\psi(x)$ being a positive linear combination of its boundary values (i.e., an integral with respect to a measure on the boundary). On the other hand, the Helmholtz equation has oscillating wavelike solutions.
The values of $\psi(x)$ can exceed its values on the boundary and, even if $\psi\vert_{\partial U}$ is positive, it is possible for $\psi(x)$ to go negative inside the domain. So, it is not a positive linear combination of its boundary values. We could just try using a negative $c$ in (3) and (4) but, for the reasons just mentioned, this cannot work in general. What happens is that $e^{\vert c\vert T}$ is not integrable. To get around this, it is possible to transform the Helmholtz equation so that the zeroth order coefficient $-c$ is positive. We can make a substitution such as $\psi(x)=\tilde\psi(x)e^{ikS(x)}$, where $S$ is any solution to $\Vert\nabla S\Vert=1$. Then, the Helmholtz equation becomes, $$ \frac12\Delta\tilde\psi + ik\nabla S\cdot\nabla\tilde\psi + \frac{ik}{2}(\Delta S)\tilde\psi=0. $$ So we have a zeroth order term of the form $-\tilde c\tilde\psi$ for $\tilde c=-ik\Delta S/2$. This is imaginary, so does not make sense as a "killing" rate any more. However, as its real component is nonnegative (zero here), equations such as (3) and (4) above give bounded and well-defined expressions for $\tilde\psi$. A google search gives the following paper which uses such a substitution: Novel solutions of the Helmholtz equation and their application to diffraction. I expect that the papers linked to in the other answer use similar techniques to transform the equation into a form which can be handled by the probabilistic method - although I have not had a chance to look at them yet, and they are not freely accessible.
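To make formula (3) concrete, here is a minimal Monte Carlo sketch (my own illustration, not taken from any of the papers mentioned): Brownian paths are Euler-stepped from $x$ until they leave the unit disk, and the boundary value is weighted by the survival probability $e^{-cT}$. The step size, path count and test data are arbitrary choices, and the $c=0$ sanity check just uses a harmonic function; with $c>0$ the same routine exhibits the exponential damping of far-away boundary values described above.
import numpy as np

rng = np.random.default_rng(0)

def solve_at(x, psi0, c, n_paths=20000, dt=1e-3):
    pos = np.tile(np.asarray(x, dtype=float), (n_paths, 1))   # all paths start at x
    exit_time = np.zeros(n_paths)
    exit_pos = np.zeros_like(pos)
    alive = np.ones(n_paths, dtype=bool)
    t = 0.0
    while alive.any():
        pos[alive] += np.sqrt(dt) * rng.standard_normal((alive.sum(), 2))   # Brownian increments
        t += dt
        left = alive & ((pos**2).sum(axis=1) >= 1.0)   # paths that just reached the boundary
        exit_pos[left], exit_time[left] = pos[left], t
        alive &= ~left
    return np.mean(np.exp(-c * exit_time) * psi0(exit_pos))   # formula (3)

# Sanity check with c = 0: psi0(x, y) = x^2 - y^2 is harmonic, so the estimate at
# (0.5, 0.2) should be close to 0.5^2 - 0.2^2 = 0.21.
print(solve_at((0.5, 0.2), lambda p: p[:, 0]**2 - p[:, 1]**2, c=0.0))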
Regression to the mean - a simple question
Just citing this wikipedia article on regression toward the mean, they define regression toward the mean as: the phenomenon that arises if a random variable is extreme on its first measurement but closer to the mean or average on its second measurement and if it is extreme on its second measurement but closer to the average on its first In this case, the sons are the second measurement, and they are an extreme (they are tall), so the first measurement (the fathers) must be closer to the average; and because the sons are tall, the fathers must be shorter than them on average. I would assume that in this case "tall" does not mean merely "taller than the average" but rather "significantly taller than the average", which is why your reasoning fails here. If this assumption is incorrect, then this problem requires more thought.
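For what it's worth, a small simulation makes the effect visible (a sketch; the bivariate normal model and all the numbers are illustrative assumptions, not data from the question):
import numpy as np
rng = np.random.default_rng(1)
n, mean, sd, rho = 200000, 175.0, 7.0, 0.5
fathers = rng.normal(mean, sd, n)
sons = mean + rho * (fathers - mean) + np.sqrt(1 - rho**2) * sd * rng.normal(0, 1, n)
tall = sons > mean + 10   # "tall" = well above average
print(sons[tall].mean(), fathers[tall].mean())   ## the tall sons' fathers are taller than average, but shorter than the sons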
Power series of ln(x) within IOC requiring massive n?
One obvious improvement is to use that $\log{2}=-\log{0.5}$, which converges rather more rapidly. One can further improve convergence by various other tricks, but you do have another problem: $\log{4.5} \neq \log{2}+\log{2}+\log{.5} $: the logarithm turns multiplication into addition, not addition into addition. You have instead $\log{4.5} = \log{3}+\log{1.5} = \log{2}+2\log{1.5} = -\log{0.5}+2\log{1.5}$.
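A small numerical illustration (a sketch): computing $\log 4.5$ from the series for $\log(1+x)$ via $\log 4.5=-\log 0.5+2\log 1.5$, so that $|x|\le 0.5$ in every series used:
import math

def log1p_series(x, terms=40):
    # log(1 + x) = x - x^2/2 + x^3/3 - ...
    return sum((-1)**(n + 1) * x**n / n for n in range(1, terms + 1))

print(-log1p_series(-0.5) + 2 * log1p_series(0.5), math.log(4.5))   ## the two values agree to many digits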
limit of an integral with a Lorentzian function
I'll assume that $f$ has compact support (though it's enough to suppose that $f$ decreases very fast). As $f(0)=0$ we have $f(x)=xg(x)$ for some smooth $g$. Let $g=h+k$, where $h$ is even and $k$ is odd. As $k(0)=0$, again $k(x)=xm(x)$ for some smooth $m$. We have $$\int_{-\infty}^{\infty} \frac{f(x)}{x^2 + \epsilon^2} dx =\int_{-\infty}^{\infty} \frac{xg(x)}{x^2 + \epsilon^2} dx =\int_{-\infty}^{\infty} \frac{x(h(x)+xm(x))}{x^2 + \epsilon^2} dx = \int_{-\infty}^{\infty} \frac{x^2m(x)}{x^2 + \epsilon^2} dx $$ (the integral involving $h$ is $0$ for parity reasons) and $$\int_{-\infty}^{\infty} \frac{x^2m(x)}{x^2 + \epsilon^2} dx=\int_{-\infty}^{\infty} m(x)dx-\int_{-\infty}^{\infty} \frac{m(x)}{(x/\epsilon)^2 + 1} dx. $$ The last integral converges to $0$, so the limit is $\int_{-\infty}^{\infty} m(x)dx$ where (I recall) $$m(x)=\frac{f(x)+f(-x)}{2x^2}.$$
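A numerical check (a sketch): take $f(x)=x^2e^{-x^2}$, which vanishes at $0$ and decays fast; for it $m(x)=e^{-x^2}$, so the limit should be $\sqrt{\pi}$:
import numpy as np
from scipy.integrate import quad
f = lambda x: x**2 * np.exp(-x**2)
for eps in (1.0, 0.1, 0.01, 0.001):
    val, _ = quad(lambda x: f(x) / (x**2 + eps**2), -10, 10, points=[0], limit=200)
    print(eps, val)
print("limit:", np.sqrt(np.pi))   ## 1.7724538509...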
$|x_n| \to \infty \implies |f(x_n)| \to \infty$
The first condition means $$\forall M \quad \exists \bar x : \forall x\, |x|>\bar x \quad |f(x)|>M$$ and from here the second follows, indeed we have $$\forall \bar x \quad \exists \bar n : \forall n>\bar n \quad |x_n|> \bar x$$ that is $$\forall M \quad \exists \bar n : \forall n>\bar n \quad |f(x_n)|>M$$
Multivariable change of variables
Consider the function $\varphi(w)=x+tw$, where $x,t$ are fixed. Then, integrating over the image of $\varphi$ is the same as integrating over the ball of radius $t$ centered at $x$. Further, the Jacobian determinant is $t^2.$ Now, you just directly apply the change of variables formula. To compute the Jacobian, we note that if we write $\varphi=(\varphi_1,\varphi_2), x=(x_1, x_2),$ and $w=(w_1, w_2),$ then $(\varphi_1(w),\varphi_2(w))=(x_1+tw_1, x_2+tw_2),$ and \begin{align*}\frac{\partial \varphi_1}{\partial w_1}&=t,\\ \frac{\partial \varphi_1}{\partial w_2}&=0,\\ \frac{\partial \varphi_2}{\partial w_1}&=0,\\ \frac{\partial \varphi_2}{\partial w_2}&=t, \end{align*} and we get the Jacobian $$D\varphi(w)=\begin{pmatrix} \frac{\partial \varphi_1}{\partial w_1}& \frac{\partial \varphi_1}{\partial w_2}\\ \frac{\partial \varphi_2}{\partial w_1} & \frac{\partial \varphi_2}{\partial w_2} \end{pmatrix}=\begin{pmatrix} t & 0 \\ 0 & t \end{pmatrix},$$ which has determinant $t^2.$
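The same computation can be checked symbolically (a sketch using sympy):
import sympy as sp
t, x1, x2, w1, w2 = sp.symbols('t x1 x2 w1 w2')
phi = sp.Matrix([x1 + t * w1, x2 + t * w2])
J = phi.jacobian([w1, w2])
print(J, J.det())   ## Matrix([[t, 0], [0, t]]) t**2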
31,331,3331, 33331,333331,3333331,33333331 are prime
333333331 is not prime; it is divisible by 17. This does not require a computer. Euler did calculations like this all the time. What's more, in your sequence 31, 331, 3331, 33331, …, every 15th number is divisible by 31. Proof: As noted in lab bhattacharjee's answer, the sequence has the form $$ a_n = \frac{10^{n+1}-7}{3} $$ Now, 15 is the multiplicative order of $10 \pmod{31}$, so $$ a_{15k+1} = \frac{10^{15k+2}-7}{3} \equiv \frac{10^2-7}{3} \equiv 0 \pmod{31}. $$ It has been proven that for every sequence of the form $ab$, $abb$, $abbb$, $abbbb$, … or $ab$, $aab$, $aaab$, $aaaab$, …, where $a$ and $b$ are digits, the numbers in the sequence are periodically divisible by the first number $ab$. As an easy exercise, show that in the sequence 11, 111, 1111, 11111, …, every second term is divisible by 11.
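A quick check of both divisibility claims (a sketch):
a = lambda n: (10**(n + 1) - 7) // 3   # 31, 331, 3331, ...
print(a(8), a(8) % 17)   ## 333333331 0, so divisible by 17
print(all(a(15 * k + 1) % 31 == 0 for k in range(6)))   ## True: a_1, a_16, a_31, ... are divisible by 31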