For $\mathfrak{m}$ maximal and principal, there's no ideal between $\mathfrak{m}^2$ and $\mathfrak{m}$
Solution $1$. Suppose $\mathfrak m=(t)$, and $t\notin I$. Let's show that $I\subseteq\mathfrak m^2$. Pick $a\in I$. Then $a=tb$, $b\in R$. If $b\in\mathfrak m$, then $b=tc$ and thus $a=t^2c\in\mathfrak m^2$. Otherwise, $\mathfrak m+(b)=R$, so $1=tx+by$. Then $t=t^2x+tby$, so $t=t^2x+ay\in I$, a contradiction. A little more generally: let $(A,\mathfrak n)$ be an artinian local ring and $\mathfrak n=(t)$. If $\mathfrak n^r=0$ and $\mathfrak n^{r-1}\ne0$, then the ideals of $A$ are: $(0)$, $\mathfrak n^{r-1}$, $\dots$, $\mathfrak n$, $A$. In your case $A=R/\mathfrak m^2$. Solution $2$. $\mathfrak m/\mathfrak m^2$ is an $R/\mathfrak m$-vector space, and the subspaces of $\mathfrak m/\mathfrak m^2$ correspond to the ideals $I\subseteq R$ such that $\mathfrak m^2\subseteq I\subseteq\mathfrak m$. Since $\mathfrak m$ is principal we have $\dim_{R/\mathfrak m}\mathfrak m/\mathfrak m^2=1$, so $\mathfrak m/\mathfrak m^2$ has only the trivial subspaces.
What is the derivative of absolute value of a complex number?
Hint: $f = g\circ h$ with $$g:\Bbb C\times\Bbb C\longrightarrow\Bbb C,\qquad(w,z)\longmapsto w\bar z$$ real bilinear and $$h:\Bbb C\longrightarrow\Bbb C\times\Bbb C,\qquad z\longmapsto (z,z)$$ real (and complex) linear.
A question concerning isometries and determinants
An isometry fixing the origin is a linear orthogonal map. So $A$ is an orthogonal matrix satisfying $AA^T=I$. This implies $\det(A)^2=1$, so $\det(A)=\pm 1$.
How to write $(\wedge _j \vee _i x_{i,j})$ $\vee$ $(\wedge _k \vee _l y_{k,l})$ in terms of $\wedge \vee$?
I'm using the fact that the underlying lattice in a Riesz space is distributive, i.e., it satisfies $$x \wedge (y \vee z) = (x \wedge y) \vee (x \wedge z),$$ and its (equivalent) dual $$x \vee (y \wedge z) = (x \vee y) \wedge (x \vee z).$$ Also, addition distributes over both lattice operations, i.e., the space satisfies $$(x \wedge y) + z = (x+z) \wedge (y+z),$$ and $$(x \vee y) + z = (x+z) \vee (y+z).$$ I'll use the fact that the lattice is distributive to answer your questions (1) and (3). A completely distributive lattice satisfies $$\bigwedge_{i\in I}\bigvee_{j\in J}x_{i,j} = \bigvee_{f:I\to J}\bigwedge_{i\in I}x_{i,f(i)}.$$ Whether the underlying distributive lattice of a Riesz space is completely distributive is not relevant here, since the joins and meets you ask about are finitary. So I'll derive an expression for the general, finite case, based on the one above. For notational convenience, for each natural number $n \geq 1$, identify $n$ with the set $\{1,\ldots,n\}$. I'll prove, by induction on $n$, that $$\bigwedge_{i=1}^n\bigvee_{j=1}^m a_{i,j} = \bigvee_{f:n\to m}\bigwedge_{i=1}^n a_{i,f(i)}.$$ The result is trivial if $n=1$; so suppose it's true for a certain $n$, and let's see that it still holds for $n+1$. \begin{align} \bigvee_{f:n+1\to m}\bigwedge_{i=1}^{n+1}a_{i,f(i)} &=\bigvee_{f:n+1\to m}\left(\bigwedge_{i=1}^n a_{i,f(i)} \wedge a_{n+1,f(n+1)} \right)\\ &=\bigvee_{f:n+1\to m}\bigwedge_{i=1}^n a_{i,f(i)} \wedge \bigvee_{f:n+1\to m}a_{n+1,f(n+1)}, \end{align} where the last equality uses distributivity, as in the second equality of $\bigvee_i\bigwedge_j(u_{ij}\wedge v_i) = \bigvee_i(\bigwedge_ju_{ij}\wedge v_i) = \bigvee_i\bigwedge_ju_{ij}\wedge\bigvee_iv_i$. Now, $\bigvee_{f:n+1\to m} a_{n+1,f(n+1)} = \bigvee_{j=1}^m a_{n+1,j}$, because $f(n+1)$ takes all the values in that range. Also, $\bigvee_{f:n+1\to m}\bigwedge_{i=1}^n a_{i,f(i)}=\bigvee_{f:n\to m}\bigwedge_{i=1}^n a_{i,f(i)}$, because for $i \leq n$ the value $f$ takes at $n+1$ doesn't matter, and so we can restrict $f$ to the domain $\{1,\ldots,n\}$. So \begin{align} \bigvee_{f:n+1\to m}\bigwedge_{i=1}^n a_{i,f(i)} \wedge \bigvee_{f:n+1\to m} a_{n+1,f(n+1)} &=\bigvee_{f:n\to m}\bigwedge_{i=1}^n a_{i,f(i)} \wedge \bigvee_{j=1}^m a_{n+1,j}\\ &=\bigwedge_{i=1}^n\bigvee_{j=1}^m a_{i,j} \wedge \bigvee_{j=1}^m a_{n+1,j}\\ &=\bigwedge_{i=1}^{n+1}\bigvee_{j=1}^m a_{i,j}, \end{align} where the second equality is the induction hypothesis. This answers question (3). Likewise we have the expression $$\bigvee_{i=1}^n\bigwedge_{j=1}^m a_{i,j} = \bigwedge_{f:n\to m}\bigvee_{i=1}^n a_{i,f(i)},$$ and since addition distributes over the lattice operations, $$\sum_{i=1}^n\bigwedge_{j=1}^m a_{i,j} = \bigwedge_{f:n\to m}\sum_{i=1}^n a_{i,f(i)}$$ and $$\sum_{i=1}^n\bigvee_{j=1}^m a_{i,j} = \bigvee_{f:n\to m}\sum_{i=1}^n a_{i,f(i)}.$$ It follows that $$\bigwedge_{f:n\to m}\bigvee_{i=1}^n\bigvee_{k=1}^p a_{i,f(i),k} = \bigvee_{i=1}^n\bigwedge_{j=1}^m\bigvee_{k=1}^p a_{i,j,k} = \bigvee_{i=1}^n\bigvee_{g:m\to p}\bigwedge_{j=1}^m a_{i,j,g(j)},$$ that is, given a join of meets of joins, you can transform it into a join of meets or a meet of joins, whichever suits you best. With $n=2$, this answers question (1). The answer to question (2) should follow from \begin{align} \sum_{i=1}^n\bigwedge_{j=1}^m\bigvee_{k=1}^p a_{i,j,k} &=\bigwedge_{f:n\to m}\sum_{i=1}^n\bigvee_{k=1}^p a_{i,f(i),k}\\ &=\bigwedge_{f:n\to m}\bigvee_{g:n\to p}\sum_{i=1}^n a_{i,f(i),g(i)}, \end{align} but that depends on what shape you're interested in...
I mean, if $A$ is closed under $+$, then it follows from the above expression that so is $A^{\vee\wedge}=A^{\wedge\vee}$. Update The proof that $$\bigwedge_{i=1}^n\bigvee_{j=1}^m a_{i,j} = \bigvee_{f:n\to m}\bigwedge_{i=1}^n a_{i,f(i)}$$ above contains what now seems to me to be circular reasoning; specifically, when I say that I'm using $$\bigvee_i(\bigwedge_ju_{ij}\wedge v_i) = \bigvee_i\bigwedge_ju_{ij}\wedge\bigvee_iv_i$$ this expression seems no easier to prove than the original one. So I'm adding a new proof by induction, and this time I was more careful, so I trust it is correct. \begin{align} \bigvee_{f:n+1\to m}\bigwedge_{i=1}^{n+1}a_{i,f(i)} &=\bigvee_{f:n+1\to m}\left(\bigwedge_{i=1}^{n}a_{i,f(i)} \wedge a_{n+1,f(n+1)}\right)\\ &=\bigvee_{j=1}^{m}\bigvee_{\substack{f:n+1\to m\\f(n+1)=j}}\left(\bigwedge_{i=1}^{n}a_{i,f(i)} \wedge a_{n+1,f(n+1)}\right)\\ &=\bigvee_{j=1}^{m}\bigvee_{\substack{f:n+1\to m\\f(n+1)=j}}\left(\bigwedge_{i=1}^{n}a_{i,f(i)} \wedge a_{n+1,j}\right)\\ &=\bigvee_{j=1}^{m}\left(\left( \bigvee_{\substack{f:n+1\to m\\f(n+1)=j}}\bigwedge_{i=1}^{n}a_{i,f(i)}\right) \wedge a_{n+1,j} \right) \tag{1}\\ &=\bigvee_{j=1}^{m}\left(\left( \bigvee_{f:n\to m}\bigwedge_{i=1}^{n}a_{i,f(i)}\right) \wedge a_{n+1,j} \right) \tag{2}\\ &=\left(\bigvee_{f:n\to m}\bigwedge_{i=1}^{n}a_{i,f(i)}\right)\wedge\bigvee_{j=1}^{m}a_{n+1,j} \tag{3}\\ &=\bigwedge_{i=1}^{n}\bigvee_{j=1}^{m}a_{i,j} \wedge \bigvee_{j=1}^{m}a_{n+1,j}\tag{4}\\ &=\bigwedge_{i=1}^{n+1}\bigvee_{j=1}^{m}a_{i,j}, \end{align} where (1) is because, once $j$ is fixed, $a_{n+1,j}$ does not depend on $f$; (2) is because $1 \leq i \leq n$, so we only take the restriction of $f:n+1\to m$ to the set $\{1,\ldots,n\}$, which is the same as considering functions $f:n\to m$; (3) is because $\bigvee_{f:n\to m}\bigwedge_{i=1}^{n}a_{i,f(i)}$ doesn't depend on $j$; (4) is by the induction hypothesis. In both (1) and (3) we're using the straightforward generalized version of distributivity which gives $\bigvee_{i=1}^m(a_i\wedge b)=(\bigvee_{i=1}^m a_i)\wedge b$.
If $(x+1)^4+(x+3)^4=4$ then how to find the sum of non-real solutions of the given equation?
Setting $t:=x+2$ is indeed a way to solve the problem. You then obtain an equation of the form $$ at^4+bt^2+c=0\tag{1} $$ and setting $w:=t^2$ you can solve with the quadratic formula for $w$. You should find a positive solution $w_1$ and a negative solution $w_2$. Then the only non-real solutions $t$ of $(1)$ are $\pm \sqrt{-w_2}i$, and the only non-real solutions of the initial problem are $x_1:=-2+\sqrt{-w_2}i$ and $x_2:=-2-\sqrt{-w_2}i$, whose sum is equal to $-4$. Note that we knew in advance that the result would be real, since the non-real roots of a polynomial with real coefficients come in conjugate pairs.
If $P(B)>0 , B\subset A$ and $A$ and $B$ are independent, then $P(A)=1$
Observe that $A\cap B=B$, so that in fact: $$P(B)=P(A\cap B)=P(A)P(B)$$ Now draw conclusions on the basis of $P(B)>0$.
Prove continuity of $a(u,v)$ when applying Lax-Milgram theorem
We have $$a(u,v)= 2\int_\Omega u_xv_x+2\int_\Omega u_yv_y+ 8\int_\Omega uv+ \int_\Omega u_x v_y+ \int_\Omega v_xu_y$$ So, it is enough to prove that the absolute value of each integral is bounded by $C\|u\|_{H^1}\|v\|_{H^1}$, for some constant $C$ (not necessarily the same constant for all integrals). This estimate follows from the Cauchy-Schwarz inequality. For example, for the first integral we have: $$\left|2\int_\Omega u_xv_x\right|=2|(u_x,v_x)_{L^2}|\leq 2\|u_x\|_{L^2}\|v_x\|_{L^2}\leq 2\|u\|_{H^1}\|v\|_{H^1}.$$ For the other integrals, the calculations are similar.
Proof that the ratios of side lengths of right triangles with the same angle are identical
Let $\Delta ABC$ have $\measuredangle ACB=90^{\circ}$ and $\measuredangle A=\theta.$ Let $D$ and $E$ be placed on the rays $AC$ and $AB$ respectively such that $\measuredangle ADE=90^{\circ}.$ Thus, $CB\parallel DE$ and we obtain $\Delta ACB\sim\Delta ADE,$ which gives $$\frac{x}{y}=\frac{CB}{AC}=\frac{DE}{AD},$$ which says that $\frac{x}{y}$ depends on $\measuredangle A$ only.
Show that arbitrary $A$ and $A^T$ have same eigenvalue, algebraic and geometric multiplicity
Geometric multiplicity of $\lambda$ in $A = n-\operatorname{rank}(A-\lambda I) = n-\operatorname{rank}(A-\lambda I)^T = n-\operatorname{rank}(A^T-\lambda I) = {}$geometric multiplicity of $\lambda$ in $A^T$, since a matrix and its transpose have the same rank.
Define $x_{2n}=\frac{x_{2n-1}+2x_{2n-2}}{3}\,\,\,\,\,,x_{2n+1}=\frac{2x_{2n}+x_{2n-1}}{3}$ for $n \in \mathbb{N^+}$.
Let $a_n=x_{2n},b_n=x_{2n+1}$. Now it follows that: $$a_n=\frac{1}{3}b_{n-1}+\frac{2}{3}a_{n-1}\tag{1}$$ $$b_n=\frac{2}{3}a_{n}+\frac{1}{3}b_{n-1}\tag{2}$$ From (1) we get $b_{n-1}=3a_n-2a_{n-1}$. Substitute into recurrence (2) to get: $$(3a_{n+1}-2a_n)=\frac{2}{3}a_n+\frac{1}{3}(3 a_{n}-2a_{n-1})$$ Solve this recurrence relation to get $a_n$ (which will let you find $b_n$). Using a closed form of $a_n,b_n$ it's easy to check whether they converge. If they converge, let $\lim_{n\rightarrow\infty}a_n=A$ and $\lim_{n\rightarrow\infty}b_n=B$; one can show that they converge to the same limit because the recurrence relations lead to: $$A=\frac{1}{3}B+\frac{2}{3}A$$ This leads to $A=B$, hence $x_n$ converges.
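For a quick sanity check, here is a small numerical simulation of the interleaved recurrence. The starting values $x_0=1$, $x_1=0$ are my own choice for illustration, since the question's initial conditions are not shown here.

```python
# Iterate the recurrence with assumed starting values x0 = 1, x1 = 0
# (hypothetical; the original initial conditions are not given here).
a, b = 1.0, 0.0   # a holds the latest even term, b the latest odd term
for n in range(1, 16):
    a = (b + 2 * a) / 3    # x_{2n}   = (x_{2n-1} + 2 x_{2n-2}) / 3
    b = (2 * a + b) / 3    # x_{2n+1} = (2 x_{2n} + x_{2n-1}) / 3
    print(n, a, b)         # the two columns approach a common limit
```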
How Convergence in two different norms relates to Equivalence of Norms
Edit: As Kyle pointed out, the answer below is not correct. Take $\alpha = \dfrac12\min\left(\|x^1 - x^2\|_1, \|x^1 - x^2\|_2\right)$. Now, notice that the ball of radius $\alpha$ around $x^1$ in the first norm does not intersect the ball of radius $\alpha$ around $x^2$ in the second norm. This means that the sequence cannot be in both balls at the same time.
Inner product space, points cannot be placed inside a ball of a given radius
I think there is a mistake in the example. For $n=4$ the shape is not a rhombus but a tetrahedron. Imagine $4$ points at equal distance from each other. The resulting geometric shape must be a tetrahedron. Similarly the $n=k$ case should give us a $(k-1)$-dimensional simplex. The question now is to determine the ball which can contain a $(k-1)$-dimensional simplex with side length $2$. The calculation seems to have already been done here, so I am not going to replicate it. This should be enough to solve the problem.
Does $\sum_{n=1}^\infty \frac{n^{n+\frac{1}{n}}}{(n+\frac{1}{n})^n}$ converge?
Nope, $$\lim\limits_{n\rightarrow \infty} \frac{n^{n+1/n}}{(n+1/n)^n} = 1 \neq 0.$$ If $\sum_{n=1}^{\infty} \frac{n^{n+1/n}}{(n+1/n)^n}$ converged, we would have $\lim\limits_{n\rightarrow \infty} \frac{n^{n+1/n}}{(n+1/n)^n} = 0.$
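As a quick numerical illustration (my addition, not part of the original answer), the general term can be evaluated through its logarithm to avoid overflow:

```python
import math

# The general term n^(n+1/n) / (n+1/n)^n tends to 1, not 0.
for n in [10, 100, 1000, 10000]:
    log_term = (n + 1/n) * math.log(n) - n * math.log(n + 1/n)
    print(n, math.exp(log_term))   # approaches 1 as n grows
```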
Unique extrema of sum of monotonically increasing and decreasing functions on an interval
It's certainly not true. Let $f$ alternate between increasing slowly and increasing quickly, let $g$ alternate between decreasing quickly and decreasing slowly, and you can have as many local extrema as you want.
discretization error in numerical
Assuming $f$ is sufficiently smooth around $x_0$, one can use the Taylor series to write, for all $x$ close enough to $x_0$: $$f(x_0+h)= f(x_0) + f'(x_0)h + \frac{1}{2}f''(x_0)h^2 + \mathcal{O}(h^3)$$ Using this one can expand the terms in your formula as follows: (A) $f(x_0+2h)=f(x_0) + f'(x_0)2h + 2f''(x_0)h^2 + \mathcal{O}(h^3)$ (B) $f(x_0+h/2)=f(x_0) + f'(x_0)h/2 + \frac{1}{8}f''(x_0)h^2 + \mathcal{O}(h^3)$ Plugging these into the formula you gave one can write $$\frac{f(x_0+2h) - 4f(x_0+h/2) + 3f(x_0) + \mathcal{O}(h^3)}{\frac{3}{2}h^2} = \frac{ f(x_0) + f'(x_0)2h + 2f''(x_0)h^2 - 4(f(x_0) + f'(x_0)h/2 + \frac{1}{8}f''(x_0)h^2) + 3f(x_0) + \mathcal{O}(h^3)}{\frac{3}{2}h^2} = \frac{ f(x_0) + 2f'(x_0)h + 2f''(x_0)h^2 - 4f(x_0) - 2f'(x_0)h - \frac{1}{2}f''(x_0)h^2 + 3f(x_0) + \mathcal{O}(h^3)}{\frac{3}{2}h^2} = f''(x_0) + \mathcal{O}(h). $$ This should be enough of a lead to give you the result. If not, we can discuss further in the comments.
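Here is a minimal numeric check of that conclusion, using $f=\sin$ and $x_0=1$ as an arbitrary test case of my own:

```python
import math

# The quotient approximates f''(x0) with an O(h) error.
# Test function f = sin (an assumption for illustration), so f''(1) = -sin(1).
f, x0 = math.sin, 1.0
exact = -math.sin(x0)
for h in [0.1, 0.05, 0.025, 0.0125]:
    approx = (f(x0 + 2*h) - 4*f(x0 + h/2) + 3*f(x0)) / (1.5 * h**2)
    print(h, approx - exact)   # the error shrinks roughly linearly in h
```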
Substitution theorem for integrals, a "trick"
Another trick without substitution: $$\displaystyle I=\int \dfrac{1+e^x}{1-e^x} dx=\int \dfrac{1-e^x+2e^x}{1-e^x} dx$$ $$I=\displaystyle \int \dfrac{1-e^x}{1-e^x} dx+2\int \dfrac{e^x}{1-e^x} dx$$ $$I=\int 1 dx-2\int \dfrac{-e^x}{1-e^x}dx=x-2\ln |1-e^x|+c$$
Deriving a differential equation from a given expectation
$s$ occurs in two places. You've correctly taken into account the occurrence in the integration limit. You also need to differentiate the factor $\mathrm e^{-s}$ in the integrand; that yields a term $-g(s)$. Perhaps this is easier to see if you first pull the factor outside the integral.
Find real $x$ satisfying $2^x + 2^{|x|} ≥ 2\sqrt{2}$
Hint: If $x\ge 0$, the inequality reduces to $2^x + 2^x \ge 2\sqrt{2}$, while if $x<0$, it becomes $2^x + 2^{-x} \ge 2\sqrt{2}$.
Simultaneous Equations Finding the Intersection of a Cubic and a Quadratic
You know that you have three real roots. Written with exact fractions, your cubic equation becomes $$\frac{2 }{25}x^3-\frac{199 }{100}x^2+\frac{29451}{2000}x-\frac{22136643}{800000}=0$$ Use the trigonometric method as described in the Wikipedia page and get the nice $$x_k=\frac{199}{24}+\frac{19}{12} \sqrt{\frac{59}{5}} \cos \left(\frac{2 k\pi }{3}-\frac{1}{3} \cos ^{-1}\left(-\frac{69497939}{4046810 \sqrt{295}}\right)\right)$$ with $k=0,1,2$.
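A quick floating-point check (my addition) that these closed-form roots satisfy the cubic:

```python
import math

# Evaluate the three trigonometric roots and plug them back into the cubic.
C = math.acos(-69497939 / (4046810 * math.sqrt(295)))
for k in range(3):
    x = 199/24 + (19/12) * math.sqrt(59/5) * math.cos(2*k*math.pi/3 - C/3)
    residual = (2/25)*x**3 - (199/100)*x**2 + (29451/2000)*x - 22136643/800000
    print(k, x, residual)   # residuals are ~0 up to rounding error
```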
Arithmetical progression and quadratic equation
HINT: Define $u=x^2$ and solve the quadratic in terms of $u$. Then recover $x=\pm\sqrt u$ and you should get $4$ roots. Now use the AP rule.
Ruled surfaces in $\mathbb{P}^3$ of high degree
Here's one for you to play with: Consider the surface $$z^2w^2+4x^3z-6xyzw+4y^3w - 3x^2y^2 = 0.$$ It's called a tangent developable, the surface of lines tangent to a smooth curve in $\Bbb P^3$. Can you find the curve?
Cauchy's theorem proof clarification (group theory)
Take an element $\;(x_1,...,x_p)\in X\;$ . If its orbit has only one element, it means that if $\;C_p=\langle z\rangle\;$ then $$(x_1,...,x_p)=z(x_1,...,x_p)=(x_2,...,x_p,x_1)=z^2(x_1,...,x_p)=(x_3,...,x_p,x_1,x_2)=\ldots$$ and we get that $\;x_1=x_2=\ldots=x_p\;$, so the element is in fact of the form $\;(x,x,...,x)\;$ .
Show that $\frac{(x+2)^p-2^p}{x}$ is irreductible in $\mathbb{Z}[x]$ for $p$ an odd prime number
Hint: Expand $(x+2)^p-2^p$ and remember that, if $p$ is prime, $p$ is a divisor of all $\dbinom pk$ for $0<k<p$.
probability of sequence of random variables
a) You are on the right track. $$ F_X(x) = P(X \leq x) = P(A \leq xt), $$ leading to $$ P(A \leq xt) = \int_{-\infty}^{xt} \frac{3}{8} a^2 da = \int_{0}^{xt} \frac{3}{8} a^2 da = \left.\frac{1}{8} a^3 \right|_0^{xt} = \frac{1}{8} t^3 x^3 $$ And then yes, the probability density function is the derivative of the cumulative distribution function (with respect to $x$!). b) Your probability density function is in terms of $x$, so it should be straightforward to find $E(X)$ now. $$ E(X) = \int_{-\infty}^{\infty}x f_X(x)dx = \int_0^{2/t}x f_X(x)dx $$
Compactness of topology on $[0,\infty)$
The first variant is the correct one. The only possible cover of $[0,\infty)$ in this topology is a cover which has $[0,\infty)$ in it. This follows since $\cup_{a}(a,\infty)=(0,\infty)$, so it is not possible to construct a cover of $[0,\infty)$ using only sets $(a,\infty)$ with $a>0$.
Prove that a group of odd order doesn't have two conjugate inverses
First, we must exclude $a = e$, since of course $g^{-1} e g = e = e^{-1}$. Then, suppose you had an $a \in G\setminus \{e\}$ and a $g \in G$ with $g^{-1} a g = a^{-1}$. Let $\psi_g(x) = g^{-1}xg$. Then $\psi_g^{\operatorname{ord} g} = \operatorname{id}$. But look what $\psi_g^{\operatorname{ord} g}$ would do to $a$.
How to show that $f$ is a zero function?
WLOG assume $a \ge 0$. Notice that $f'$ is bounded because $f$ is bounded and that $f(x) = \int_0^x f'(t)\,dt$ so $$|f'(x)| \le k|f(x)| = k\left|\int_0^x f'(t)\,dt\right| \le k\int_0^x |f'(t)|\,dt \le k\|f'\|_\infty \int_0^x 1\,dt = kx \|f'\|_\infty$$ $$|f'(x)| \le k\int_0^x |f'(t)|\,dt \le k^2\|f'\|_\infty\int_0^x t\,dt = k^2 \frac{x^2}2 \|f'\|_\infty$$ $$|f'(x)| \le k\int_0^x |f'(t)|\,dt \le k^3\|f'\|_\infty\int_0^x \frac{t^2}2\,dt = k^3 \frac{x^3}{6} \|f'\|_\infty$$ Continuing inductively we see $$|f'(x)| \le k^{n} \frac{x^n}{n!}\|f'\|_\infty \xrightarrow{n\to\infty} 0$$ so $f' \equiv 0$. Therefore $f \equiv f(a) = 0$.
If a finite set $G$ is closed under an associative product and both cancellation laws hold, then it is a group
Hints: Cancellation is true if and only if, for all $a\in G$, $L_a:G\to G$ and $R_a:G\to G$, defined as $L_a(g)=ag$ and $R_a(g)=ga$, are $1-1$. And: A function from a finite set to itself is $1-1$ if and only if it is onto...
Statistics - Approximating Poisson Distribution
You can use the normal approximation (indeed this is due to the Central Limit Theorem) to the Poisson distribution (see here). You have that $$X \sim Poisson(\lambda=36)$$ (where $\lambda>20$ as required in the link), therefore $$X \sim N(\mu=36, \sigma^2=36)$$ approximately, which can be equivalently formulated as $$Z=\frac{X-36}{\sqrt{36}}=\frac{X-36}{6} \sim N(0, 1)$$ (i.e. $Z$ has the standard normal distribution). Then you want to calculate the probability $$P(X\ge45)\approx P(Z\ge\frac{45-36}{\sqrt{36}})=P(Z\ge \frac{3}{2})=1-\Phi(1.5)=1-0.9332=0.0668$$ or $6.68\%$.
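If you want to see how good the approximation is, here is a small comparison, assuming SciPy is available (the continuity-corrected value is usually closer to the exact tail):

```python
from scipy.stats import norm, poisson

lam = 36
exact = poisson.sf(44, lam)             # exact P(X >= 45) = P(X > 44)
approx = norm.sf((45 - lam) / 6)        # the approximation above, ~0.0668
corrected = norm.sf((44.5 - lam) / 6)   # with a continuity correction
print(exact, approx, corrected)
```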
How to derive these calculus identities?
The gradient vector to the function $x_3 - f(x_1, x_2)$ is $\left[-\dfrac{\partial f}{\partial x_1}, -\dfrac{\partial f}{\partial x_2}, 1\right]$. This is normal to the surface. So you divide by the length $\sqrt{1 + \left(\frac{\partial f}{\partial x_1}\right)^2 + \left(\frac{\partial f}{\partial x_2}\right)^2}$ to get a unit normal. Its dot product with a unit vector (in particular the unit vector in the coordinate direction $[1,0,0]$, $[0,1,0]$ or $[0,0,1]$) is the cosine of the angle between those vectors.
Three consecutive sums of two squares
The numbers $4n^4+4n^2,4n^4+4n^2+1,4n^4+4n^2+2$ are all sums of two squares, so there's another infinite family. I don't think you're going to find a useful formula that will give all solutions, but I don't see how to prove this.
Do the radii of a family of nested balls (in a Banach space) converge?
Except in the trivial case of the zero space, being Banach prevents pathological examples such as the one you link to. More precisely, in a Banach space with at least one nonzero vector, we cannot have $B(x,R)\subseteq B(y,r)$ with $R>r$. You can see this by considering the two balls restricted to a line that contains $x$ and $y$, with the induced metric. (In the case $x=y$ you need to assume that the space is not the zero space in order to choose such a line.) The line is isometric to $\mathbb R$, and the balls intersect it in balls of the same radii on the line. And certainly on the real line, a larger ball cannot be contained in a smaller one.
Proving that the medians of a triangle are concurrent
This is my short proof from 1963: In the triangle ABC draw medians BE and CF, meeting at point G. Construct a line from A through G, such that it intersects BC at point D. We are required to prove that D bisects BC, therefore AD is a median, hence the medians are concurrent at G (the centroid). Proof: Produce AD to a point P below triangle ABC, such that AG = GP. Construct lines BP and PC. Since AF = FB, and AG = GP, FG is parallel to BP. (Euclid) Similarly, since AE = EC, and AG = GP, GE is parallel to PC. Thus BPCG is a parallelogram. Since the diagonals of a parallelogram bisect one another (Euclid), therefore BD = DC. Thus AD is a median. QED Corollary: GD = AD/3. Proof: Since AG = GP and GD = GP/2 (D is the midpoint of the diagonal GP of parallelogram BPCG), AG = 2GD. AD = (AG + GD) = (2GD + GD) = 3GD. Hence GD = AD/3. QED
Definition of a cluster point
No, you cannot. In $\mathbb R$, with the usual distance, take $x_n=(-1)^n$. Then $1$ is a cluster point of this sequence according to the usual definition, but not according to the alternative version that you suggested. If you take $\varepsilon=1$, then there is no $N\in\mathbb N$ such that$$n\geqslant N\implies|1-x_n|<1=\varepsilon.$$
Tautology Problem confusion
That method is viable for a small number of variables. The problem is when there are a large number of variables. Take an instance of SAT with $n$ variables and $m$ clauses. Call it $\phi_{n,m}$. Make the statement $\phi_{n,m} \implies F$. If the statement $\phi_{n,m}$ is not satisfiable then this statement will give $T$ eventually. Otherwise it is not a tautology. Determining whether or not $\phi_{n,m}$ is satisfiable is too hard. The problem with just applying the rules is that you will have to keep going for a long time in order to be guaranteed that you are done.
Image of smooth manifold is a submanifold
Let $g\colon \mathbb{R} \to \mathbb{T}^2= \mathbb{R}^2/\mathbb{Z}^2$ be given by $t\mapsto [(t,\alpha t)]$, where $\alpha$ is any irrational number. The image of $g$ is a dense subset of $\mathbb{T}^2$ but $g$ is not surjective. Hence, $\mathrm{Im}(g)$ cannot be a submanifold of $\mathbb{T}^2$: an embedded submanifold is locally closed, and a proper dense subset is not. Compose this map with the embedding of the torus into $\mathbb{R}^3$.
Creating simple function to rate potential quality of a basketball game
Let $N$ represent the number of teams in the league and $R$ represent the sum of the ranks; then $R_{min}=3$ and $R_{max}=N(N-1)$. Let $S$ represent the point spread, so $S_{min}=0$, and you can decide what to put in for $S_{max}$ (since it's basketball you might start with $S_{max}=100$). Construct a rating function containing 3 arbitrary parameters as follows: $$ T(R,S) = aR+bS+c $$ Get 2 equations for the 3 unknowns by requiring that $$ T(R_{max}, S_{max})=0 \text{ and } T(R_{min}, S_{min})=1 $$ Use these equations to eliminate 2 of the unknowns. You will be left with one adjustable parameter which you can use to satisfy your requirement that rank should carry more weight than spread. Alternatively you could create a third equation $a=kb$; then you could solve the 3 equations for $a, b, c$ and adjust $k$ to satisfy your requirement that rank should carry more weight than spread. Expect to get negative values for $a$ and $b$ and a positive value for $c$. A sketch of the $a=kb$ variant in code is given below.
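Here is a minimal sketch of that construction (the values $N=30$, $S_{max}=100$, $k=3$ are my own example choices):

```python
# Build T(R, S) = a*R + b*S + c with a = k*b, T(R_max, S_max) = 0 and
# T(R_min, S_min) = 1, following the recipe above.
def make_rating(N, S_max, k):
    R_min, R_max, S_min = 3, N * (N - 1), 0
    # Subtracting the two constraints gives
    # k*b*(R_max - R_min) + b*(S_max - S_min) = -1
    b = -1.0 / (k * (R_max - R_min) + (S_max - S_min))
    a = k * b
    c = -(a * R_max + b * S_max)
    return lambda R, S: a * R + b * S + c

T = make_rating(N=30, S_max=100, k=3)
print(T(3, 0), T(30 * 29, 100))   # 1.0 at the best matchup, 0.0 at the worst
```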
$x$-coordinate distribution on the $n$-sphere
$\def\d{\mathrm{d}}$This is a direct application of the formula for the surface area of the hyperspherical cap. Denoting by $I(x; a, b)$ the regularized beta function, the surface area of an $(n + 1)$-dimensional hyperspherical cap with height $x \leqslant 1$ and radius $1$ is$$ A_n(x) = \frac{1}{2} A_n(2) I\left( x(2 - x); \frac{n}{2}, \frac{1}{2} \right), $$ thus for $-1 \leqslant x \leqslant 0$,$$ P(X_1 \leqslant x) = \frac{A_n(x + 1)}{A_n(2)} = \frac{1}{2} I\left( 1 - x^2; \frac{n}{2}, \frac{1}{2} \right). $$ Since $\dfrac{\partial I}{\partial x}(x; a, b) = \dfrac{1}{B(a, b)} x^{a - 1} (1 - x)^{b - 1}$, then for $-1 < x < 0$,$$ f_{X_1}(x) = \frac{\d}{\d x} P(X_1 \leqslant x) = \frac{(1 - x^2)^{\frac{n}{2} - 1}}{B\left( \dfrac{n}{2}, \dfrac{1}{2} \right)}. $$ By symmetry,$$ f_{X_1}(x) = \frac{(1 - x^2)^{\frac{n}{2} - 1}}{B\left( \dfrac{n}{2}, \dfrac{1}{2} \right)} \quad \text{for all } -1 < x < 1. $$ Indeed, for $n = 2$ this is a uniform distribution.
How does this transformation work?
$$-\frac{N}{2}\ln|\Sigma|-\frac12 \operatorname{Tr}\left[\Sigma^{-1}\sum_{n=1}^N (t_n-y_n) (t_n-y_n)^T\right]=\frac{N}{2}\ln|\Sigma^{-1}|-\frac12 \operatorname{Tr}\left[\Sigma^{-1}\sum_{n=1}^N (t_n-y_n) (t_n-y_n)^T\right]$$ Differentiating with respect to $\Sigma^{-1}$ and equating to $0$, $$\frac{N}{2}(\Sigma^{-1})^{-T}-\frac12\sum_{n=1}^N(t_n-y_n)(t_n-y_n)^T=0$$ $$\Sigma^T=\frac1{N}\sum_{n=1}^N(t_n-y_n)(t_n-y_n)^T$$ Taking the transpose, and since the right-hand side is symmetric, $$\Sigma=\frac1{N}\sum_{n=1}^N(t_n-y_n)(t_n-y_n)^T$$
Inequalities - proof by induction that $n^2 \leq n!$ for $n\geq 4$
If $m!\ge m^2,$ $$(m+1)!=(m+1)\cdot m!\ge(m+1)\cdot m^2$$ which needs to be $\ge(m+1)^2$ $$\iff m^2\ge m+1\iff m(m-1)\ge1$$ which holds true if $m\ge2$
Average area of the shadow of a convex shape
The average size of the shadow cast by any 3D convex shape is $\frac{1}{4}$ times its surface area. Because the shape is convex, you can compute the size of its shadow along a particular direction $\vec{u}$ by summing the shadows cast by its surface pieces: Divide the surface of the shape into little area patches. Compute the size of each patch's shadow by taking the absolute value of the dot product of its area vector with $\vec{u}$. Add up the sizes of each of the shadow-patches. Note that this is actually twice the size of the shape's shadow, because we've double-counted: every patch has a counterpart on the "other side" of the shape, and both patches cast the same, overlapping shadow. So we must divide our total by two. Because the shadow of a convex shape can be computed by summing the shadows of its individual surface patches, the average shadow can be computed in the same way: Divide the surface of the shape into little area patches. Compute the average shadow cast by the patch over all directions. Add up the average shadows of each of the patches. Divide the total by two. There is a constant of proportionality $\lambda$ which connects the surface area of any convex object to the average size of its shadow. The size of the shadow cast by a patch, averaged over all directions, should be proportional to the area of the patch. (After all, both the patch and the shadow have units of area, and magnifying the patch by some amount should magnify the shadow by the same amount.) It follows that there is a constant of proportionality $\lambda$ such that if the area of the patch is $A$, the average area of its shadow is $\lambda \cdot A$. When we compute the average shadow of the shape by adding up the individual contributions from each patch, this constant $\lambda$ will factor out of the sum. The remaining sum is just the surface area of the shape. Hence the average shadow cast by a convex shape should be $\lambda$ times its surface area (divided by two). Because the constant of proportionality $\lambda$ is the same for all convex shapes, we can use a known shape to solve for its value. A sphere of radius 1 has surface area $4\pi$, and casts a shadow of area $\pi$ in each direction. Hence its average shadow area over all directions is $\pi$, and the constant of proportionality must be $$\lambda \equiv \frac{\pi}{4\pi} = \frac{1}{4}.$$ So the average size of the shadow cast by any 3D convex shape is $\frac{1}{4}$ times its surface area. Bonus: By extension to $n$ dimensions, there will be a constant of proportionality $\lambda_n$ relating the outer surface of a convex volume to the average size of its shadow. We can use $n$-dimensional spheres, whose geometries are known, to compute those constants. The constant $\lambda_n$ is the volume of an $n-1$ ball (a disc, in our example), divided by the surface area of an $n-1$ sphere (a sphere, in our example). So, by standard formulas, $$V_n = \frac{\pi^{n/2}}{\Gamma(\frac{n}{2}+1)}$$ $$A_n = \frac{2\cdot \pi^{\frac{n+1}{2}}}{\Gamma(\frac{n+1}{2})}$$ $$\lambda_{n+1} = \frac{V_{n}}{A_n} = \frac{\pi^{n/2}}{\Gamma(\frac{n}{2}+1)}\cdot \frac{\Gamma(\frac{n+1}{2})}{2\cdot \pi^{\frac{n+1}{2}}} = \frac{1}{2\sqrt{\pi}}\frac{\Gamma(\frac{n+1}{2})}{\Gamma(\frac{n}{2}+1)} $$ Note that the $\Gamma()$ function simplifies if we consider odd and even cases separately. 
After manipulation, we find: $$\lambda_{n+1} = \begin{cases}\frac{1}{2^{n+1}} {n \choose n/2} & n\text{ even}\\ \frac{1}{\pi}\frac{2^n}{n+1} {n \choose {\lfloor n/2\rfloor} }^{-1} & n\text{ odd} \end{cases}$$ And so $$\lambda_n = \frac{1}{2},\; \frac{1}{\pi},\; \frac{1}{4},\; \frac{2}{3\pi},\; \frac{3}{16},\; \frac{8}{15\pi},\; \ldots .$$
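As a numerical sanity check of the $\frac14$ rule (my addition, not part of the original answer): for the unit cube, whose surface area is $6$, the average shadow should come out to $1.5$.

```python
import numpy as np

# For a unit cube the shadow area in direction u is |u_x| + |u_y| + |u_z|:
# half the sum of |u . n| over the six unit-area faces, as argued above.
rng = np.random.default_rng(0)
u = rng.normal(size=(1_000_000, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)   # uniform random directions
print(np.abs(u).sum(axis=1).mean())             # ~1.5 = surface area / 4
```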
Does the function have derivatives at $x=0$
You were asked whether or not $f(x)$ has a derivative at $x=0$, and if so, to compute it. As stated by the problem, $f(0)=0$, and $f$ does in fact have a derivative at $x=0$: its value there is $0$.
This equation is giving me issues: $x^2 - 6x + 15 = 0$
Another approach $$x^2-6x+15=0$$ $$x^2-6x=-15$$ $$x^2-6x+9=-6$$ $$(x-3)^2=-6$$ Take the square root of both sides $$x-3=i\sqrt6,\;x-3=-i\sqrt6$$ $$\boxed{\color{red}{x=3\pm i\sqrt6}}$$
Is every open, bounded and simply connected subset of $\mathbb{R}^n$ essentially a ball?
$B_1(0)\setminus\{0\}$ is a counterexample when $n>2$.
Is it necessary to use the Hahn-Banach theorem to show that $(X/M)^*\simeq M^\perp$?
Your reasoning is correct. In fact, Hahn-Banach is not needed for the isometric part either. On the other hand, it is needed to show that the dual of a subspace $M\subseteq X$ is isometric to the quotient space $X^*/M^\perp$.
What is the most important test for a uniform random number generator?
Any single test is inadequate. A single test will find failures of randomness of one kind, but the generated numbers might still be very orderly in other respects. So it's best to use a bunch of different tests that look for different patterns. I'm not sure I understand your use case (and by now you may no longer have the same need), but if you really want to test the quality of a uniform random number generator, you should use a high-quality, up-to-date test suite such as Pierre L'Ecuyer's TestU01. A random number generator that passes all of the tests might still exhibit systematic nonrandom patterns (and if it's a pseudorandom number generating algorithm, it's guaranteed to do so), but the chance that they'd make a difference to your application would be small. The papers linked at the TestU01 page are very informative, too, as is Melissa O'Neill's PCG paper. Good books include: Kneusel's Random Numbers and Computers; Johnston's Random Number Generators: Principles and Practices; and Knuth, Chapter 3 in volume 2 of the 3rd edition of The Art of Computer Programming (which still deserves to be called the "bible" of pseudorandom number generation, even though there have been crucial innovations since it was published, so there are better algorithms now). (Some partially-similar questions I've answered on another site: https://stats.stackexchange.com/questions/335203/understanding-different-kinds-of-rngs-tests/407910#407910 https://stats.stackexchange.com/questions/438398/is-there-a-way-to-visualize-the-limited-randomness-of-an-algorithm/439691#439691 https://stats.stackexchange.com/questions/436733/is-there-such-a-thing-as-a-good-bad-seed-in-pseudo-random-number-generation/438057#438057 )
Stuck on a Ax=B matrix question, need confirmation if my work is correct
You found that if $b_1=4b_2$ then there is only one solution which is $x_1=\frac{b_1}{2},x_2=\frac{b_1}{4}$. So you actually showed that if a solution exists it must be unique.
Differential Equations Solve the initial Value problem $y' = -0.05 y + 25 ,y(0) = 80$
hint: $y' + 0.05y = 25 \to (e^{0.05t}y)' = 25e^{0.05t}$. Can you continue?
Projection onto subspace spanned by a single vector
The formula for the projection of a vector $v$ onto another vector $u$ uses the dot product: $$\mathrm{proj}_u(v)= \frac{ v\cdot u}{u\cdot u}u$$ In the case you have given the projection is $\frac{0+6+0}{9}(0,3,0)= (0,2,0)=\frac{6}{9}u$. Of course you can reformulate it using a matrix product.
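Here is the same computation in code; the vectors are my reconstruction from the arithmetic above ($u=(0,3,0)$, and a $v$ whose middle entry is $2$; the other entries of $v$ are hypothetical):

```python
import numpy as np

u = np.array([0.0, 3.0, 0.0])
v = np.array([1.0, 2.0, 3.0])   # hypothetical v; only v[1] matters here
proj = (v @ u) / (u @ u) * u    # (v.u / u.u) u
print(proj)                     # [0. 2. 0.]
```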
Is there a math symbol for $f(x) > g(x)$ except when $x=0$, in which case $f(x)=g(x)$?
No, there isn't, and there doesn't need to be. When you write "$f(x) > g(x)$, except when $x = 0$, in which case $f(x) = g(x)$," that's plenty clear. Getting carried away with symbols might obscure rather than clarify your meaning in this instance. If you're dealing with a whole bunch of functions related to each other in this manner, what you might need is some kind of terminology. I don't know what that terminology is, so I'll just use "marklarable" for the sake of providing examples. Thus, in your paper, you might write "$f(x)$ is marklarable to $g(x)$, as is $\alpha(x)$ to $\beta(x)$."
How to evaluate last nine digits of $(2^{120} - 1) / 3$ without any intermediate value exceeding $2^{32}$
Suppose you are dividing $2^{120}-1$ by $3$ by hand with the usual algorithm: when you get to the last 9 digits, you'll have on the left the remainder carried over from the preceding division, and that remainder can be $0$, $1$ or $2$. But since we know the division is exact, the final block $r\cdot 10^9 + 280344575$ must be divisible by $3$, and of $r=0,1,2$ only $r=1$ works; the answer therefore is $$1280344575/3=426781525.$$
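The same computation in code (my addition), where every intermediate value stays below $2^{31}<2^{32}$:

```python
M = 10**9
# Last nine digits of 2^120 by repeated doubling mod 10^9; each doubling
# stays below 2 * 10^9 < 2^31.
x = 1
for _ in range(120):
    x = (2 * x) % M
last9 = (x - 1) % M               # last nine digits of 2^120 - 1
carry = 1                         # remainder carried into the final block
print((carry * M + last9) // 3)   # 426781525
```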
Calculate the value of a floating-point coordinate in a 2d-matrix
You have a two dimensional function $f : \mathbb{R}^2 \to \mathbb{R}$ and the values $f(0,0)$, $f(0,1)$, $f(1,0)$ and $f(1,1)$ in your initial matrix. If you want more values (here, it seems, $f(x,y)$ for $(x,y) \in [0,1]^2$), you must make some assumption about $f$. A simple assumption is that $f$ is linear along each axis on $[0,1]^2$. Under this assumption you can use linear interpolation. Interpolation along the axes gives: $$ f(x,y) = (1-x) f(0,y) + x f(1,y) \\ f(x,y) = (1-y) f(x, 0) + y f(x, 1) $$ We combine these into $$ f(x,y) = (1-x)[(1-y)f(0,0) + y f(0,1)] + x [(1-y) f(1,0) + y f(1,1)] $$ Note that if the four corner values do not lie on a common plane, this function will be some quadratic function through those four points. You can fiddle with the example here. (Move the four sliders with the corner values in the middle view.)
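In code, the combined formula reads as follows (a direct transcription, with example corner values of my own):

```python
# Bilinear interpolation on the unit cell; f01 means f(0, 1), etc.
def bilinear(f00, f01, f10, f11, x, y):
    return ((1 - x) * ((1 - y) * f00 + y * f01)
            + x * ((1 - y) * f10 + y * f11))

# At the cell center the result is the average of the four corners.
print(bilinear(1.0, 2.0, 3.0, 4.0, 0.5, 0.5))   # 2.5
```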
Find a 4 x 4 matrix with rational entries such that its fourth power is the identity matrix multiplied by -1
As in my comment, you can take the matrix $\begin{pmatrix}0&0&0&-1\\1&0&0&0\\0&1&0&0\\0&0&1&0\end{pmatrix}$. You may check that this has the desired characteristic and minimal polynomial, or just note that with $\{e_i\}$ the standard basis this maps $e_1\to e_2\to e_3\to e_4\to -e_1$, so that it will have the desired property.
How to prove convergence in this methods of a differential equation?
Note that $A^2=-I$, so that $A$ represents the complex unit $i$. Then compare 1.) to the evolution of $(1+ih)^n$ and 2.) to the evolution of $\left(\frac{2+ih}{2-ih}\right)^n$, and the exact solution to $e^{it}$. The qualities asked for are best seen in a polar representation.
Show that $\log^{i} n \in O(n^{j})$ for $i,j > 0 $
Hint: The power $j$ on $n^j$ in the denominator stays the same, but your power on the logarithm drops by $1$. What happens if you do this over and over so that the power on $\log$ becomes less than or equal to zero? Then you can put it in the denominator by switching signs on the power and hopefully be able to determine the limit from there, assuming $j > 0$.
Find the limit $\sqrt[3]{n^3+n^2} - \sqrt[3]{n^3+1}$
Hint: $$ a^3-b^3=(a-b)\left(a^2+ab+b^2\right)\ \Longrightarrow\ a-b=\dfrac{a^3-b^3}{a^2+ab+b^2}. $$ Now make a suitable choice for $a$ and $b$ that will simplify your limit.
Floor property: In general, $\lfloor nx \rfloor = n\lfloor x \rfloor$.
Let $n=3$ and $x=\frac{1}{3}$. Then $$\lfloor nx \rfloor = \lfloor 1 \rfloor = 1 \neq 0 = 3\cdot 0 = 3\left\lfloor \tfrac{1}{3}\right\rfloor = n\lfloor x \rfloor.$$
Proving that a limit exists
It might be easier to work from the definition. If $g$ is differentiable at $x$, then for all $\epsilon>0$, there exists some $\delta>0$ such that if $|y-x| < \delta$, then $|g(y)-g(x)-g'(x)(y-x)| < \epsilon |y-x|$. Now try to bound $|\frac{g(v_n)-g(u_n)}{v_n-u_n} - g'(x)|$, writing $g(v_n)-g(u_n) = g(v_n)-g(x)+g(x)-g(u_n)$, and $g'(x)(v_n-u_n) = g'(x)(v_n-x) + g'(x)(x-u_n) $. Then use the fact that $u_n\le x \le v_n$ to obtain a suitable bound.
Evaluate $\lim_{x\to 0} {\sqrt[3]{1+{x\over 3}} - \sqrt[4]{1+{x\over 4}} \over 1-\sqrt{1-{x\over 2}}}$
You may prove first that for any $p,q\in\mathbb{N}^+$ $$ \lim_{x\to 0}\frac{\sqrt[p]{1+\frac{x}{q}}-1}{x}=\frac{1}{pq} \tag{1}$$ holds by rationalization, then use such result to show that your limit equals $$ \frac{\frac{1}{9}-\frac{1}{16}}{\frac{1}{4}}=\color{red}{\frac{7}{36}}.\tag{2}$$
How prove this inequality with $x+y+z=1$
We can show LHS $\ge 1 \iff $ $$ (1+xy+yz+xz)[1+3(x^2+y^2+z^2)]\ge 9(x+y)(y+z)(x+z) $$ Let $p = x+y+z = 1, q = xy+yz+zx \le \frac13, r = xyz$. Then we have the inequality as $$(1+q)[1+3(p^2-2q)] \ge 9(pq-r) \iff 4+9r \ge 11q + 6q^2$$ By Schur $p^3+9r \ge 4pq \implies 1+9r \ge 4q$, so it is enough to show that $3 \ge 7q + 6q^2$ which follows from $q \le \frac13$. For the second part, i.e. RHS $\le \frac23$, it is sufficient to note that $$ \frac{x\sqrt{1+x}}{\sqrt[4]{3+9x^2}} \le \sqrt{\frac23}x \iff (3x-1)^2 \ge 0$$
Question about positve and negative parts of a function
This is a bit of a subtle point, in my opinion the exact kind that Rudin is likely to omit. $f$ is well-defined and fixed before $g$ and $h$. In other words, we start with $f$ and we choose extended real functions $g$ and $h$ so that $f = g-h$ makes sense pointwise everywhere. Therefore it is implicit that $g$ and $h$ are not $+\infty$ at the same time, as otherwise we could not satisfy the constraint $f=g-h$ in a meaningful way. We could run into the problem you are foreseeing if we were to start with $g$ and $h$ and define $f$ to be $g-h$. Fortunately this is not the case here.
$C^*$ algebra of compact operators as a direct limit of matrix algebras?
The direct limit of the sequence $M_2(\mathbb{C}) \subset M_4(\mathbb{C}) \subset \cdots$ with diagonal maps $a \mapsto \left(\begin{array}{cc} a & 0 \\ 0 & a\end{array} \right)$ will give you the $2^{\infty}$ UHF algebra (CAR algebra), not the compact operators. Perhaps you mean $a \mapsto \left(\begin{array}{cc} a & 0 \\ 0 & 0\end{array} \right)$? In that case everything in the limit is the limit of finite rank operators so must be contained in $\mathcal{K}$. On the other hand, observe that you can get any finite rank operator in the limit since, for any $n$, you can find $k$ such that $M_n(\mathbb{C}) \subset M_{2^k}(\mathbb{C})$, so in fact the limit must be $\mathcal{K}$.
Finding the number of ways we can arrange numbers under conditional repetition.
Let the range have $n$ numbers denoted by $x_1,x_2,\cdots x_n$ and consider $k$-digit numbers (your problem is $k=4$). What you want is to keep track of how many times you place each digit; call this number $y_i$ for $x_i$. Your expression is then $$\sum _{\substack{y_1+y_2+\cdots +y_n=k\\0\leq y_i\leq 2}}\frac{k!}{y_1!\cdots y_n!}.$$ For the special case $k=4$ we can do casework as: $2+2=4$ then $\binom{n}{2}\frac{4!}{2!2!}=3\cdot n(n-1).$ $1+1+2=4$ then $\binom{n}{2}(n-2)\frac{4!}{2!}=6\cdot n(n-1)(n-2).$ $1+1+1+1=4$ then $4!$ So we get $$[n>1]3\cdot n(n-1)+[n>2]6\cdot n(n-1)(n-2)+[n>3]24,$$ where $[n>i]$ is the Iverson bracket, meaning it is $1$ if the proposition is true and $0$ otherwise. For example, for $n=2$ we have $6$ and for $n=4$ we get $36+144+24=204.$
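A brute-force check of the $k=4$ formula (my addition):

```python
from itertools import product

# Count length-4 strings over n symbols in which no symbol appears
# more than twice, and compare with the casework formula above.
def brute(n, k=4):
    return sum(all(s.count(d) <= 2 for d in range(n))
               for s in product(range(n), repeat=k))

def formula(n):
    total = 24 if n > 3 else 0
    if n > 1: total += 3 * n * (n - 1)
    if n > 2: total += 6 * n * (n - 1) * (n - 2)
    return total

for n in range(1, 7):
    print(n, brute(n), formula(n))   # the columns agree, e.g. 204 for n = 4
```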
Canonical Vitali set
No, there cannot be a formula that always defines a Vitali set whenever one exists. In fact we can say something stronger: There is a model of $\mathsf{ZFC}$ with no definable Vitali set. Assume that $\mathsf{ZF} + \mathsf{DC}$ holds but there is no Vitali set (this follows, for example, from "$\mathsf{ZF} + \mathsf{DC} + {}$every set of reals has the Baire property," which is consistent relative to $\mathsf{ZFC}$ by a theorem of Shelah.) Force with $\text{Col}(\omega_1,\mathbb{R})$ to add a well-ordering of the reals. Because the forcing is countably closed and $\mathsf{DC}$ holds in the ground model, no reals are added. The generic extension has a well-ordering of its reals, so it has a Vitali set. But it cannot have a definable Vitali set because the forcing is homogeneous, so every subset of the ground model that is definable in the forcing extension is an element of the ground model. Remark: As Andreas Blass points out in the comments, there is a more direct construction assuming the existence of an inaccessible cardinal. Let $\kappa$ be inaccessible and let $g \subset \text{Col}(\omega,\mathord{<}\kappa)$ be a $V$-generic filter. Then in the generic extension $V[g]$ every definable set of reals is Lebesgue measurable by a theorem of Solovay (in fact, every set of reals that is ordinal-definable from a real parameter is Lebesgue measurable, has the Baire property, etc.) So in this generic extension there can be no definable Vitali set.
$\iint \limits_{x^2+y^2\leq a^2}x^my^ndxdy=0$
Suppose first that $m$ is odd. The substitution $(x,y)=(-z,y)$ gives $$dxdy=|-1|dzdy=dzdy$$ and $$ I=\iint \limits_{x^2+y^2\leq a^2}x^my^ndxdy=\iint \limits_{z^2+y^2\leq a^2}(-z)^my^ndzdy=-\iint \limits_{z^2+y^2\leq a^2}z^my^ndzdy=-I. $$ Thus $I=0$. The case where $n$ is odd can be addressed in the same manner by making the substitution $(x,y)=(x,-z)$.
How to do Partial derivatives with respect to a function
I feel that some relationship between the three functions has to be implied in order for things like $\frac{\partial f}{\partial g}$ to make sense. (Perhaps sharing the same independent variables is sufficient -- hopefully someone more familiar with the topic than me can address that.) If that is true, then consider $$\frac{\partial f}{\partial x} = \frac{\partial f}{\partial g} \frac{\partial g}{\partial x} + \frac{\partial f}{\partial h} \frac{\partial h}{\partial x}$$ and likewise, $\frac{\partial g}{\partial x} = \frac{\partial g}{\partial f} \frac{\partial f}{\partial x} + \frac{\partial g}{\partial h} \frac{\partial h}{\partial x}$. Then by holding $h$ constant (i.e. $\frac{\partial f}{\partial h}=0$), the result for the first equation follows. For the second part, consider $$\frac{\partial f}{\partial g} = \frac{\partial f}{\partial x} \frac{\partial x}{\partial g} + \frac{\partial f}{\partial y} \frac{\partial y}{\partial g}$$ Holding $x$ constant eliminates the first term on the right-hand side. Regarding "reciprocals" of derivatives, perhaps the inverse function theorem or this question can shed some light.
First order logic - is this a valid interpretation of a sentence?
You are right. Normally, in a structure with domain $\mathcal{A}$ and interpretation function $\mathcal{I}$, for an $n$-ary predicate $P$, $\mathcal{I}(P^n) \subseteq \mathcal{A}^n$; i.e., the interpretation of a $1$-ary predicate is a subset of the domain, the interpretation of a $2$-ary predicate is a subset of $\mathcal{A} \times \mathcal{A}$, ... Your book might presuppose some weird definition of models in which non-logical symbols can be interpreted as anything, but normally you'd want models to be closed systems, in the sense that interpreting some predicate shouldn't shoot you out of the domain of the model. In order to prove that $\exists x L(x) \not \vDash \exists x T(x)$, they should rather have presented a model in which there are liars but no thieves at all, e.g. $$\text{domain} = \{1\};\ \mathcal{I}(L) = \{1\};\ \mathcal{I}(T) = \emptyset$$ Or, with the $\leftrightarrow$ notation, $$L(x) \leftrightarrow x = 1;\ T(x) \leftrightarrow \bot$$ - but this notation presupposes that 1 (and 2 etc.) are constants, which, as Mauro Allegranza points out in their comment, need to be assigned an interpretation as well in order to be meaningful. May I ask which book you are using, and how they define models? If they nowhere give a precise definition of what an interpretation of a predicate is, then it's probably not a good book anyway.
Every first contable locally convex space has a countable neighborhood basis of balanced and convex sets
Let $(U_n)_n$ be any countable local base at $0$ for $X$. As you stated under (b), for each $n$ we can find a convex balanced neighbourhood $V_n$ of $0$ such that $V_n \subseteq U_n$, as $X$ is locally convex. It’s immediate that $(V_n)_n$ is your required local countable base at $0$.
uniformization theorem - squares and circles
Compilation of comments, expanded. (1) In practical terms, it is slightly easier to work with the upper half-plane instead of the open unit disk. The composition with $(z-i)/(z+i)$ then gives a map onto the disk. The Schwarz–Christoffel method gives a practical way to find a conformal map of the upper half-plane to a polygon. (2) The freely downloadable program zipper by Donald Marshall computes and plots conformal maps using a sophisticated numerical algorithm. It can handle an L-shape, or far more complicated shapes: Zipper-generated images are very nice, though not as flashy as this one, linked to by brainjam. (3) A closed square is not allowed in the uniformization theorem. Conformal (or general holomorphic) maps are normally defined on an open set. While one may talk about boundary correspondence under conformal maps, it's understood in the sense of limits at the boundary. A conformal map of the open square onto a disk has a continuous extension to the closed square, by Carathéodory's theorem, but I would not call the extended map conformal: the angles at the corners are not preserved.
$f=g$ almost everywhere $\Rightarrow |f|=|g|$ almost everywhere?
Since $f,g$ are measurable, so are $f-g$ and $f+g$. Let $$ E=\{x\in X:\ f(x)=g(x)\}, \ \ E'=\{x\in X:\ f(x)=-g(x)\}. $$ If $F$ is as in the question, the set where $|f|\neq|g|$, then we have $F^{c}=E\cup E'$. And then $F$ is measurable, because both $E$ and $E'$ are: $$ E=(f-g)^{-1}(\{0\}),\ \ E'=(f+g)^{-1}(\{0\}). $$
Show that there exists a dual space $E^*$ such that the natural injection $E^* \rightarrow L (E)$ is not surjective.
Your $L(E)$ is the “full dual”. Take a basis $B$ of $E$ and, for $v\in B$, define $v^*(v)=1$, $v^*(w)=0$, for $w\in B$, $w\ne v$ and extend by linearity. Take as $E^*$ the subspace of $L(E)$ spanned by $\{v^*:v\in B\}$.
Does the average reciprocal of the smallest or largest prime factors of integers converge?
First let me prove that $\dfrac{n}{\sum_{k \le n}\frac{1}{b_k}}$ diverges. It suffices to show the reciprocal $\dfrac{\sum_{k \le n}\frac{1}{b_k}}{n}$ converges to $0$. Fix $N\in\mathbb{N}$ and let $p_1,\dots,p_m$ be all the primes less than $N$. Note that then there is a constant $C$ such that for all $n>1$, at most $C\log(n)^m$ integers between $1$ and $n$ have only the primes $p_1,\dots,p_m$ in their prime factorization (for each $p_i$, there are at most $1+\log_{p_i} n$ choices for the power of $p_i$). Since $\frac{\log(n)^m}{n}\to 0$ as $n\to\infty$, the fraction of $k\leq n$ such that $b_k<N$ goes to $0$ as $n\to\infty$. So the contribution of such $k$ to the average $\dfrac{\sum_{k \le n}\frac{1}{b_k}}{n}$ goes to $0$, and thus $\limsup \dfrac{\sum_{k \le n}\frac{1}{b_k}}{n}\leq 1/N$. Since $N$ is arbitrary, this implies $\dfrac{\sum_{k \le n}\frac{1}{b_k}}{n}$ converges to $0$. To prove that $\dfrac{n}{\sum_{k \le n}\frac{1}{a_k}}$ converges, we similarly analyze the reciprocal $\dfrac{\sum_{k \le n}\frac{1}{a_k}}{n}$. Let $p_i$ be the $i$th prime in order ($p_1=2,p_2=3,\dots$). For fixed $i$, note that as $n\to\infty$, the proportion of $k$ from $1$ to $n$ such that $a_k=p_i$ approaches the number $$r_i=\frac{1}{p_i}\prod_{j<i}(1-1/p_j)$$ (by the Chinese remainder theorem, this is the proportion of residues mod $\prod_{j\leq i} p_i$ that are divisible by $p_i$ but not by $p_j$ for any $j<i$). So for fixed $m$, when $n$ is sufficiently large, the proportion of $k$ such that $a_k=p_i$ will be arbitrarily close to $r_i$ for all $i\leq m$. This means the contribution of these terms to the average $\dfrac{\sum_{k \le n}\frac{1}{a_k}}{n}$ is arbitrarily close to $\sum_{i=1}^m\frac{r_i}{p_i}$. Note moreover that $1-\sum_{i=1}^mr_i=\prod_{i\leq m}(1-1/p_i)$ goes to $0$ as $m\to\infty$ (since $\sum_{i=1}^\infty 1/p_i$ diverges), so the contribution of the remaining terms (those with $a_k=p_i$ for $i>m$) becomes arbitrarily small as $m$ gets large. So fixing a sufficiently large $m$, for all sufficiently large $n$, $\dfrac{\sum_{k \le n}\frac{1}{a_k}}{n}$ will be arbitrarily close to $\sum_{i=1}^m\frac{r_i}{p_i}$. It follows that $\dfrac{\sum_{k \le n}\frac{1}{a_k}}{n}$ converges to the value $$\sum_{i=1}^\infty\frac{r_i}{p_i}$$ and so $\dfrac{n}{\sum_{k \le n}\frac{1}{a_k}}$ converges to its reciprocal.
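To see the two limits numerically, here is a small script of mine (the cutoffs $N=10^6$ and primes below $100$ are arbitrary choices): it estimates the average of $1/a_k$ with a smallest-prime-factor sieve and compares it against a partial sum of $\sum_i r_i/p_i$.

```python
# Sieve smallest prime factors a_k for 2 <= k <= N and compare the
# average of 1/a_k with the partial series sum_i r_i / p_i.
N = 10**6
spf = list(range(N + 1))                  # spf[k] = smallest prime factor of k
for p in range(2, int(N**0.5) + 1):
    if spf[p] == p:                       # p is prime
        for m in range(p * p, N + 1, p):
            if spf[m] == m:
                spf[m] = p
avg = sum(1 / spf[k] for k in range(2, N + 1)) / (N - 1)

s, prod = 0.0, 1.0                        # partial sum over primes below 100
for p in (q for q in range(2, 100) if spf[q] == q):
    s += prod / p**2                      # r_i / p_i = (1/p_i^2) prod_{j<i}(1 - 1/p_j)
    prod *= 1 - 1 / p
print(avg, s)                             # both are close to the same limit (~0.33)
```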
Diffeomorphism between a triangle and a square?
If by "triangle" you mean the open set bounded by three line segments (the boundaries themselves are not included), then yes, every convex open subset of the Euclidean plane is diffeomorphic to $\mathbb{R}^2$.
How many limit points can a countable space have?
Take $\mathbb{Q}$, the space of rational numbers. Every point is a limit point. So there's a countable set of limit points.
Max value of $a-b$ given $\frac {a!}{b!}$ multiple of $ 4$ but not of $8$.
$\frac{a!}{b!} = (b+1)(b+2)\cdots(a-1)a$. If you have two consecutive numbers multiplied, then one will be even, the other odd, and the product of the two will be even. If you have 3 consecutive numbers multiplied, then you will have either ODD x EVEN x ODD or EVEN x ODD x EVEN, and if you have 4 consecutive numbers multiplied, then you will definitely have two evens and two odds. If you have two consecutive even numbers, then one will be a multiple of 4, and the second even number will make the product into a multiple of eight.
Explicitly representing a random variable such as $ X(\omega):=\frac{1}{\lambda} \ln \frac{1}{1-\omega}$, which is exponential
This is called the Skorokhod representation, according to David Williams' Probability with Martingales.(*) For a given cdf $F$, the random variable can be explicitly represented by computing $$X(\omega) = \sup\{y \in \mathbb{R}: F(y) < \omega\}$$ For example, for the exponential distribution: $$F(y) < \omega$$ $$\iff 1-e^{-\lambda y} < \omega$$ $$\iff y < \frac{1}{\lambda} \ln(\frac{1}{1-\omega})$$ Thus, $$X(\omega) = \sup\{y \in \mathbb{R}: F(y) < \omega\} = \sup\left(-\infty,\frac{1}{\lambda} \ln(\frac{1}{1-\omega})\right) = \frac{1}{\lambda} \ln(\frac{1}{1-\omega})$$ (*) This is also called the canonical representation (MAT 235A / 235B: Probability, Prof. Roman Vershynin, typeset by Edward D. Kim) or the Skorokhod representation of random variables using quantile transforms (Optimal Transport Methods in Economics by Alfred Galichon). Skorokhod representations relate to quantile functions, similarly defined: $$Q(p) = \inf\{x \in \mathbb R \mid F(x) \ge p\}$$ In the wiki page for random variables, under distribution functions, it says: The probability distribution "forgets" about the particular probability space used to define X and only records the probabilities of various values of X. [...] In practice, one often disposes of the space $\Omega$ altogether and just puts a measure on $\mathbb {R}$ that assigns measure 1 to the whole real line, i.e., one works with probability distributions instead of random variables.
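This representation is exactly inverse-transform sampling, so it is easy to test empirically (a small sketch of mine):

```python
import math, random

# Push uniform omega in (0,1) through X(omega) = (1/lam) ln(1/(1 - omega))
# and check the sample mean against the exponential mean 1/lam.
lam = 2.0
xs = [math.log(1 / (1 - random.random())) / lam for _ in range(100_000)]
print(sum(xs) / len(xs))   # close to 1/lam = 0.5
```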
Isomorphism of orthogonal groups
The text provides the explicit map of the isomorphism between the orthogonal groups, namely $$\tau \mapsto \sigma \tau \sigma^{-1}$$ There are 3 things you'd need to show to flesh out the details: If $\tau$ is in the orthogonal group of $V$ then $\sigma \tau \sigma^{-1}$ is in the orthogonal group of $V'$ $\tau \mapsto \sigma \tau \sigma^{-1}$ is a homomorphism $\tau \mapsto \sigma \tau \sigma^{-1}$ is bijective You seemed most concerned about 3 in the comments, so let's start with that. To SHOW 3) $\tau \mapsto \sigma \tau \sigma^{-1}$ is injective: Suppose that $\sigma \tau \sigma^{-1}=I$ where $I$ is the identity transformation of $V'$; then $\tau=\sigma^{-1}I\sigma$. Pick an arbitrary element $v \in V$; then $$\begin{split} \tau(v)&=(\sigma^{-1}I\sigma)(v)\\ &=\sigma^{-1}I(\sigma(v))\\ &=\sigma^{-1}(\sigma(v))\\ &=v\\ \end{split}$$ So $\tau$ is the identity transformation on $V$. Therefore the map is injective. $\tau \mapsto \sigma \tau \sigma^{-1}$ is surjective: Let $\alpha:V'\to V'$ be an invertible orthogonal transformation. Then consider $\tau:V \to V$ defined by $\tau=\sigma^{-1}\alpha\sigma$. It's clear that $\tau$ is invertible, but we need to show it preserves the bilinear form $b$. Note that $\sigma:V \to V'$ being a linear isometry means that for all $u,v \in V$ $$ b(u,v)=b'(\sigma(u),\sigma(v)) $$ and for all $x,y \in V'$ $$ b'(x,y)=b(\sigma^{-1}(x),\sigma^{-1}(y)) $$ Also note, since $\alpha$ is in the orthogonal group of $V'$, $$ b'(\alpha (x),\alpha(y))=b'(x,y) $$ So let $v,w \in V$ and consider $$\begin{split} b(\tau(v),\tau(w))&=b(\sigma^{-1}\alpha\sigma(v),\sigma^{-1}\alpha\sigma(w)) \\ &=b(\sigma^{-1}(\alpha\sigma(v)),\sigma^{-1}(\alpha\sigma(w))) \\ &=b'(\alpha\sigma(v),\alpha\sigma(w))\\ &=b'(\sigma(v),\sigma(w))\\ &=b(v,w)\\ \end{split} $$ So $\tau$ preserves the bilinear form $b$, and since $\sigma\tau\sigma^{-1}=\alpha$, the map is surjective. Note that usually $\sigma$ is assumed to be linear; often it is called a linear isometry. I don't think the result, or even 1), remains true if $\sigma$ is not linear. To SHOW 2) Let $\alpha,\beta$ be in the orthogonal group of $V$. Then $$ \sigma \alpha \beta \sigma^{-1}=(\sigma \alpha\sigma^{-1}) (\sigma\beta \sigma^{-1} ) $$ To SHOW 1) Let $v,w \in V'$ and $\tau$ be in the orthogonal group of $V$. Then $$\begin{split} b'(\sigma\tau\sigma^{-1}(v),\sigma\tau\sigma^{-1}(w)) &=b'(\sigma(\tau\sigma^{-1}(v)),\sigma(\tau\sigma^{-1}(w))) \\ &=b(\tau\sigma^{-1}(v),\tau\sigma^{-1}(w)) \\ &=b(\sigma^{-1}(v),\sigma^{-1}(w)) \\ &=b'(v,w)\\ \end{split} $$ The linearity and invertibility of $\sigma\tau\sigma^{-1}$ follow from $\sigma,\tau,\sigma^{-1}$ each being linear and invertible.
Properties of Harmonic sum sequence
There are many such results. The one that comes immediately to mind (perhaps because I've assigned it in a number theory class in the past) is Wolstenholme's Theorem, which says that the numerator of $H_{p-1}$ is congruent to $0$ mod $p^2$ when $p > 3$ is prime. Or, as none of the denominators are divisible by $p$, it says that $$ 1 + \frac{1}{2} + \cdots + \frac{1}{p-1} \equiv 0 \pmod {p^2}. $$ Like many elementary-seeming number theoretic questions, the methods used to try to understand these sums are many and varied, and also still incomplete.
Solve $x^2\equiv a$ mod $\prod P_i^{e_i}$
Yes, you can. You have to start from a Bézout relation between $3$ and $5$: $$2\cdot 3-1\cdot 5=1$$ Now if you have solutions $a$ mod $3$ and $b$ mod $5$, you deduce solutions mod $15$: $$x\equiv b\cdot 2\cdot 3-a\cdot 1\cdot 5=6b-5a\mod15.$$
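In code, the combination step looks like this (the worked example $x^2\equiv 4 \pmod{15}$ is my own illustration):

```python
# Combine a residue a mod 3 with a residue b mod 5 via 2*3 - 1*5 = 1.
def combine(a, b):
    return (6 * b - 5 * a) % 15   # = a (mod 3) and = b (mod 5)

# Example: x^2 = 4 (mod 15).  The square roots are +-2 mod 3 and +-2 mod 5.
for a in (2, 1):                  # 2 and -2 = 1 (mod 3)
    for b in (2, 3):              # 2 and -2 = 3 (mod 5)
        x = combine(a, b)
        print(x, x * x % 15)      # every x satisfies x^2 = 4 (mod 15)
```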
Find the matrix of linear transformation (in Standard basis) that rotates clockwise every vector
Matrix multiplication is not commutative. If you want the matrix that represents first the rotation and then the reflection, the correct order is $R'\cdot R$.
Are there uncountable groups which satisfy the minimal condition on subgroups?
There are uncountable artinian groups. The original reference is: A.Yu. Ol’shanskii, Geometry of defining relations in groups. Kluwer, 1991 Theorem 35.2 states that there are artinian groups of cardinality $\aleph_1$. PS: In arXiv:1206.3639 (Example 2.6) it is claimed that every artinian group is countable (on the other hand, the abstract and the introduction speak of countable artinian groups). A reference is given to Kurosh's classic text The Theory of groups, page 192. But I couldn't find it there ...
Proving that this sequence is convergent
The sequence is decreasing and positive, so it's convergent. Where does it converge to? Suppose $l$ is its limit. Then $l=\frac{l+5}{4}$ gives $l=\frac{5}{3}$.
How many five-digit numbers can be formed using digits $1,2,3,4,5,6$ which are divisible by $3$ and $5$?
Hint: The rightmost digit must be $5$. Now, we have to choose $4$ digits from the remaining $5$. There are five possibilities: $$\{1,2,3,4\}\;;\;\{1,3,4,6\}\;;\;\{1,2,4,6\}\;;\;\{2,3,4,6\}\;;\;\{1,2,3,6\}$$ Take those for which the sum plus $5$ is divisible by $3$. We find $$\{1,2,4,6,5\}\;;\;\{1,2,3,4,5\}$$ The final answer is $2\times 4!=48$.
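A brute-force confirmation (my addition, assuming, as the hint does, that digits are not repeated):

```python
from itertools import permutations

# Count five-digit numbers with distinct digits from {1,...,6} that are
# divisible by both 3 and 5, i.e. by 15.
count = sum(1 for p in permutations(range(1, 7), 5)
            if int(''.join(map(str, p))) % 15 == 0)
print(count)   # 48
```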
Recurrent points and rotation number
This is a corollary of a more general result, that every continuous self-map $f:X \to X$ of a (non-empty) compact metric space $X$ has a recurrent point. Here is a proof, stolen from Katok / Hasselblatt, Modern Theory of Dynamical Systems, Cambridge University Press 1995, Proposition 3.3.6 and Corollary 3.3.7. Let $\mathcal{C}$ be the collection of all non-empty compact subsets $A \subseteq X$ with the property that $f(A) \subseteq A$, partially ordered by inclusion. If $(A_i)_{i \in \mathcal{I}}$ is a non-empty chain in $\mathcal{C}$, i.e., a totally ordered subset, then the intersection $A = \bigcap_{i \in \mathcal{I}} A_i$ is again non-empty and compact and satisfies $f(A) \subseteq A$, so $A \in \mathcal{C}$. By Zorn's lemma $\mathcal{C}$ has a minimal element $A^*$. Pick $x \in A^*$ and let $A_x$ be the closure of the forward orbit of $x$. Then $A_x \subseteq A^*$ and $A_x \in \mathcal{C}$, so $A_x = A^*$, and in particular $x$ is recurrent.
Edges of a permutohedron
Recall that a face of $P\subset\Bbb R^n$ is defined by a set of vertices $v_1,...,v_k\in \mathrm{vert}(P)$ that maximize a linear functional $\langle c,\cdot\rangle$ for some $c\in\Bbb R^n$. Now, given a $c\in\Bbb R^n$, it is not hard to characterize the vertices of the permutahedron $P$ that maximize the functional $\langle c,\cdot\rangle$. Let me write $v(i)$ for the index of the component at which $i\in\{1,...,n\}$ occurs in $v\in\mathrm{vert}(P)$ (this is well-defined for the vertices of the permutahedron). Then a vertex $v\in\mathrm{vert}(P)$ maximizes $\langle c,\cdot\rangle$ if and only if $$ c_{v(1)} \le \cdots \le c_{v(n)},$$ where $c_i$ denotes the $i$-th component of $c$. If this is not clear to you, write down $\langle c,v\rangle$ in components $c_1v_1+\cdots + c_n v_n$ and observe that the above characterization ensures that the largest component of $c$ is paired with the largest component of $v$, the second-largest component of $c$ is paired with the second-largest component of $v$, and so on; this is exactly how to obtain the largest possible inner product. For example, if $c$ has all distinct components, then the sorted sequence above is unique (given by sorting $c$), and there is only a single vertex that maximizes $\langle c,\cdot\rangle$. To define an edge, we need exactly two ways to sort $c$ in non-decreasing order. This can be done by making exactly two components of $c$ equal (convince yourself that this is the only way). Let's say the identical components are $c_k$ and $c_\ell$. Then we can sort them in the two ways $$...\le c_k\le c_\ell\le...\qquad\text{or}\qquad ...\le c_\ell\le c_k\le ...$$ Hence these identical components occupy consecutive positions in the sorted sequence, say positions $i$ and $i+1$. The only two vertices that maximize $\langle c,\cdot\rangle$ then have determined positions for all entries other than $i$ and $i+1$, and $$v(i)=k,\; v(i+1)=\ell\qquad\text{and}\qquad \bar v(i)=\ell,\; \bar v(i+1)=k.$$ This means that $v$ has consecutive numbers at indices $k$ and $\ell$, and these numbers appear swapped in the other vertex $\bar v$.
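If you want to test this description empirically, here is a small sketch for $n=4$: it picks random functionals $c$ with exactly two equal components and checks that exactly two vertices attain the maximum, and that they differ by swapping two consecutive values.

```python
import itertools, random
import numpy as np

n = 4
verts = [np.array(p, dtype=float) for p in itertools.permutations(range(1, n + 1))]

for _ in range(1000):
    c = np.array(random.sample(range(1, 1000), n), dtype=float)  # distinct entries
    i, j = random.sample(range(n), 2)
    c[j] = c[i]                                  # exactly two equal components
    vals = [c @ v for v in verts]
    m = max(vals)
    argmax = [v for v, d in zip(verts, vals) if abs(d - m) < 1e-9]
    assert len(argmax) == 2                      # the maximizing face is an edge
    diff = np.flatnonzero(argmax[0] != argmax[1])
    assert len(diff) == 2                        # vertices differ in two entries
    assert abs(argmax[0][diff[0]] - argmax[0][diff[1]]) == 1  # consecutive values
print("all trials match the edge description")
```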
Probability of finding mutation
The point is, $1$ in every $50000$ cells has a mutation. Hence, to find $1$ E cell, you have to look through $50000$ O cells on average. Now extend this argument. Suppose you are looking for $2$ E cells: to find the first one you need to look through $50000$ cells, and then to find the second one another $50000$ cells, so in total you had to look through $50000$ cells twice, and the answer is $50000 \times 2$ in that case. Similarly, suppose you are looking for $3$ E cells: you look through $50000$ cells for the first, another $50000$ for the second, and another $50000$ for the third, so the answer is $50000 \times 3$ in that case. Now, if you have to find $162$ E cells, then you have to look through $50000$ cells $162$ times. Hence, the number of cells you have to look through is $162 \times 50000 = 8100000$ cells. Reply back if you don't understand this logic.
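If the waiting-time logic feels shaky, here is a small Monte Carlo sketch; the sample sizes are arbitrary choices of mine.

```python
import numpy as np

p = 1 / 50000                       # chance that a given cell is an E cell
rng = np.random.default_rng(0)
# cells examined until the 162nd E cell = sum of 162 geometric waiting times
samples = rng.geometric(p, size=(10_000, 162)).sum(axis=1)
print(samples.mean())               # close to 162 * 50000 = 8_100_000
```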
Verify $\lim\limits_{n\rightarrow\infty}\int^\infty_0 \! e^{-nx} \sin(e^x) \, \mathrm{d}x = 0.$
You may just observe that $$ \left|\int^\infty_0 \! e^{-nx} \sin(e^x) \, \mathrm{d}x\right|\leq \int^\infty_0 \! \left|e^{-nx} \right|\, \mathrm{d}x=\frac1n, \qquad (n>0), $$ and then let $n \to +\infty$.
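For a numeric cross-check, one can substitute $u=e^x$, which turns the integral into $\int_1^\infty \sin(u)/u^{n+1}\,\mathrm{d}u$, and evaluate it with mpmath's oscillatory quadrature. The substitution is my own addition, just to make the integrand tractable numerically.

```python
from mpmath import quadosc, sin, inf, pi

for n in range(1, 6):
    I = quadosc(lambda u: sin(u) / u**(n + 1), [1, inf], period=2 * pi)
    print(n, float(I), "bound 1/n =", 1 / n)   # |I| stays below 1/n
```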
Maclaurin series stuck at finding $L_n$
When calculating successive derivatives, sometimes it is more illuminating to not simplify out the entire calculation: $$\begin{align*} f(x) &= (1-x)^{-2}, \\ f'(x) &= 2(1-x)^{-3}, \\ f''(x) &= 2(3)(1-x)^{-4}, \\ f'''(x) &= 2(3)(4)(1-x)^{-5}, \\ f^{(4)}(x) &= 2(3)(4)(5)(1-x)^{-6}, \\ &\vdots \\ f^{(n)}(x) &= \ldots. \end{align*}$$ Does the pattern seem more evident now?
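If you like, you can let a CAS generate the derivatives and just look at the output; this sketch with sympy prints them so you can spot the pattern yourself.

```python
import sympy as sp

x = sp.symbols('x')
f = (1 - x)**-2
for n in range(1, 6):
    print(n, sp.diff(f, x, n))   # compare the printed coefficients and exponents
```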
Find the coefficient of $x^{10}$
Note that we may at any point in our calculation ignore any terms of degree $11$ or higher. Therefore we may also ignore any terms whose inclusion will only lead to degree $11$ or higher terms. We have: $$ f(f(x)) = (x + x^2 + x^4 + x^8 + \cdots) + (x + x^2 + x^4 + x^8 +\cdots)^2 \\+ (x + x^2 + x^4 + \cdots)^4 + (x + x^2 + \cdots)^8 + \cdots $$ From here we may look at each of the brackets and simply extract the ones which lead to degree $10$, using the multinomial theorem (basically the binomial theorem) for what it's worth: $x + x^2 + x^4 + x^8 + \cdots$: no terms $(x + x^2 + x^4 + x^8 + \cdots)^2$: we get $2x^2\cdot x^8$ $(x + x^2 + x^4 + \cdots)^4$: we get $6 (x)^2\cdot (x^4)^2 + 4(x^2)^3\cdot x^4$ $(x + x^2 + \cdots)^8$: we get $28(x)^6\cdot(x^2)^2$ where I've used brackets to clarify which terms I've picked in each case.
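As a cross-check, one can truncate $f$ at $x^8$ (higher powers of two cannot reach degree $10$) and let sympy expand the composition. A minimal sketch:

```python
import sympy as sp

x = sp.symbols('x')
f = x + x**2 + x**4 + x**8          # truncation of f; x^16 etc. cannot contribute
comp = sp.expand(f.subs(x, f))
print(comp.coeff(x, 10))            # 2 + 6 + 4 + 28 = 40
```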
Why does a flat finite type morphism of irreducible noetherian schemes map generic pt to generic pt
Any flat morphism $f:X\to Y$ with $X$ and $Y$ irreducible sends the generic point of $X$ to the generic point of $Y$. This is a consequence of the fact that generalizations lift along flat morphisms (of arbitrary schemes), which is itself a global version of the fact that a flat ring map $A\to B$ satisfies going down. Here are the relevant Stacks Project references: https://stacks.math.columbia.edu/tag/03HV (generalizations lift along flat morphisms) https://stacks.math.columbia.edu/tag/00HS (flat ring maps satisfy going down) https://stacks.math.columbia.edu/tag/00HW ($A\to B$ satisfies going down if and only if generalizations lift along $\mathrm{Spec}(B)\to\mathrm{Spec}(A)$) https://stacks.math.columbia.edu/tag/0063 (definition of generalizations lifting along a map) https://stacks.math.columbia.edu/tag/0061 (definition of generalizations in a topological space) Now assume $X$ and $Y$ are irreducible with generic points $\eta_X$ and $\eta_Y$, respectively, and let $f:X\to Y$ be flat. Consider the generalization $\eta_Y\rightsquigarrow f(\eta_X)$ (the generic point of an irreducible scheme generalizes every point in the scheme). Since generalizations lift along $f$ (due to $f$ being flat), there is a point $x\in X$ with $f(x)=\eta_Y$ and $x\rightsquigarrow \eta_X$, i.e., $x$ is a generalization of $\eta_X$. Because $\eta_X$ is the generic point of $X$ and the underlying topological space of a scheme is sober (I guess just Kolmogorov is adequate–all that we need is that distinct points have distinct closures), we must have $x=\eta_X$. Thus $f(\eta_X)=f(x)=\eta_Y$.
Number of integers $n$ such that $n$ is divisible by every prime belonging to the interval $(1; \sqrt{n})$ -- is it finite?
There are only finitely many such integers $n$. Write $g(K) = \prod_{p < K} p$ for the product of the primes below $K$. By Chebyshev's bounds there are $\Theta(K/\log K)$ primes below $K$, and each is at least $2$, so $g(K) \in 2^{\Omega(K/\log K)}$; in particular $g(K)$ grows faster than any fixed power of $K$, e.g. $g(K) \in \omega(K^4)$. Now, if $n$ is divisible by every prime in $(1,\sqrt n)$, then $n \ge g(\sqrt n)$; that is, with $K = \sqrt n$ we need $g(K) \le K^2$. But since $g(K) \in \omega(K^4)$, the inequality $g(K) \le K^2$ holds for only finitely many $K$, and hence there are only finitely many such $n$.
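A brute-force search makes the finiteness vivid. This is a sketch, assuming the interval is read as primes $p$ with $p^2 < n$; the cutoff $10^6$ is an arbitrary choice.

```python
from sympy import primerange

primes = list(primerange(2, 1001))           # enough primes for n < 10**6
hits = [n for n in range(2, 10**6)
        if all(n % p == 0 for p in primes if p * p < n)]
print(hits)                                  # only finitely many survive
```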
Question on proof of $1+2+\dots+n=\frac{n(n+1)}{2}$ by induction.
The second way is correct, but I would be more careful and use it like this: $$1+2+\cdots+k+(k+1)=\frac{k(k+1)}{2}+k+1=\frac{k^2+3k+2}{2}=\frac{(k+1)(k+2)}{2}$$ P.S.: Do not start from the equality you want to prove. The problem with starting from the equality is that you need to guarantee equivalence at every step, which is not easy in many problems.
Proof using Identity theorem?
Just look at $f(x)=\sin(\frac{\pi}{x})$ and $g(x)=0$: they agree on all points $x_n=\frac{1}{n}$ but are not the same function. The identity theorem does not apply here because the accumulation point $0$ of the sequence $(x_n)$ does not lie in the domain on which $f$ is analytic.
Badly Conditioned Matrix Error - Should I concern myself with it?
For some reason, doing Inverse[A].b instead of LinearSolve[A,b] didn't yield this error, so I believe it solves the problem in my case.
Find a 1-1 correspondence between N and 5N (the set of all positive multiples of 5)
Such a one-to-one correspondence looks like \begin{align} 0 &\leftrightarrow 0 \\ 1 &\leftrightarrow 5 \\ 2 &\leftrightarrow 10 \\ 3 &\leftrightarrow 15\\ &\,\,\vdots \end{align} Let's try to come up with a function $f$ that is such a bijection. We want $f(0)=0$, $f(1)=5$, $f(2)=10$, etc. This looks linear, with slope equal to $5$ and $y$-intercept $0$. So maybe the function $$ f(x) = 5x$$ will work. We will prove that $f$ works (i.e., is a 1-1 correspondence). To prove $f$ is one-to-one, take $x,y\in\mathbb{N}$. If $f(x) = f(y)$, we have $5x=5y$; dividing through by $5$, we see that $x=y$. To prove it is onto, take any $y \in 5\mathbb{N}$ and set $x=y/5 \in \mathbb{N}$. This gives $$f(x) = f(y/5) = 5(y/5) = y.$$
Size issues in the reduction of Colimits to a Coequalizer of Coproducts.
I believe the point is to use the last paragraph of 3.3.9 which talks about extending an $\text{ob}\,\mathcal A$-indexed family of limits to a functor on $\mathcal A$. Assuming you already have this family of limits, then there aren't really any size issues with regards to this construction. For a given $f:A\to B$ in $\mathcal A$, we turn the limit cone $\lim_{J\in\mathsf J}D_A(J)$ over the diagram $D_A$ to a cone over $D_B$ which, by the universal property of $\lim_{J\in\mathsf J}D_B(J)$, induces an arrow $\lim_{J\in\mathsf J}D_A(J)\to\lim_{J\in\mathsf J}D_B(J)$ as desired. The only potentially sketchy part with respect to sizes in this is the mapping of cones induced by $f$. But while $\mathsf J\to\mathcal C^{\mathcal A}$ is sketchy given $\mathsf J$ is small and $\mathcal C$ and $\mathcal A$ are locally small, $\mathsf J\times\mathcal A\to\mathcal C$ and even $\mathcal A\to\mathcal C^{\mathsf J}$ are completely fine, and the latter is the functor we're using to transport (the bases of the) cones. Other parts of 3.3.9 become questionable if $\mathcal A$ is not small, such as $\mathcal C^{\text{ob}\,\mathcal A}$ or $\prod_{\text{ob}\,\mathcal A}\mathcal C$ existing, at least as objects of $\mathbf{CAT}$. For the theorem you're discussing, we're only concerned about a particular functor/family, not the category of them. It's definitely a bit loose to talk about a functor $\mathbf{Set}^{\mathcal A}\to\mathbf{Set}^{\text{ob}\,\mathcal A}$, but the actual construction alluded to works functor-by-functor and that's all that's needed in this case.
Hodge star/ Technical question
After some discussion, it seems like your question is actually the following: Let $\omega$ be a differential form and $c \in \mathbb{R}$. Is $\star(c\omega) = c(\star\omega)$? The answer to this is yes. The Hodge dual is real-linear, in fact it is linear over real-valued functions, i.e. $\star(f\omega) = f(\star\omega)$. If you want to bring out a complex number or complex-valued function, then it depends on your definition of $\star$; in particular, whether it is complex linear or conjugate linear.
What happens to the Stone-Cech compactification if you change "compact Hausdorff" to "compact"?
No, if you drop the Hausdorff condition when talking about the Stone-Cech compactification, then it never exists for any non-compact space.

Indeed, suppose $X$ is not compact and suppose there existed an initial continuous map $f:X\to Y$ to a compact space $Y$. Consider the space $K$ obtained by adjoining two points $a,b$ to $X$ and declaring that a set is open in $K$ iff it is either an open subset of $X$ or is equal to all of $K$. Then $K$ is compact and the inclusion map $i:X\to K$ would be continuous, so there would have to be a unique continuous $g:Y\to K$ such that $gf=i$. In particular, this means that the image of $g$ contains all of $X$, and therefore must also contain at least one of $a$ and $b$, since the image of $g$ must be compact and $X$ is not compact.

But now define $g':Y\to K$ by $g'(y)=g(y)$ if $g(y)\in X$, $g'(y)=b$ if $g(y)=a$, and $g'(y)=a$ if $g(y)=b$. This $g'$ is still continuous, since the open sets containing $a$ are the same as the open sets containing $b$. Also, for any $x\in X$, $g(f(x))=x\in X$, so $g'(f(x))=x$ as well. That is, $g'f=i$. This contradicts the uniqueness of $g$.

From a categorical perspective, what's going on here is that compact spaces (unlike compact Hausdorff spaces) are not closed under limits in the category of topological spaces, and thus are not a reflective subcategory. The issue is with equalizers: the equalizer of two maps between Hausdorff spaces is closed in the domain, and thus compact if the domain is compact. However, the equalizer of two maps between compact spaces need not be compact.
2 fake coins out of m coins in each of n boxes.
Yes, your answer is correct. The probability of picking a genuine coin out of any box will be $$P_i={{m-2}\over m}$$ This probability is for each of the boxes. Since we want this to happen in all the boxes simultaneously, we multiply the probabilities. $$P=P_1{\times}P_2{\times}P_3{\times}{\cdots}{\times}P_n$$ $$P=\left({m-2}\over m\right)^n$$
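A quick Monte Carlo sketch confirming the formula; the values of $m$, $n$, and the trial count below are arbitrary choices of mine.

```python
import random

m, n, trials = 10, 4, 200_000
hits = sum(all(random.randrange(m) >= 2 for _ in range(n))   # 0,1 = fake coins
           for _ in range(trials))
print(hits / trials, ((m - 2) / m) ** n)     # the two numbers should be close
```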
Dual of the Minkowski Sum
Disappointingly, there is no simpler expression than the definition itself. In Combinatorial Convexity and Algebraic Geometry by Günter Ewald, page 105, the author states: The polar body $(K+L)^*$ of a sum of convex bodies has, in general, no plausible interpretation in terms of $K^*, L^*$. Only in the case of direct sums do we present such an interpretation. Namely: if $K$ and $L$ both contain $0$ and are contained in linear subspaces $A,B$ with $A\cap B=\{0\}$, then $(K+L)^*$ is the convex hull of $K^{*_A}\cup L^{*_B}$ where the subscripts on asterisks mean that the polar is taken within the indicated subspace. An example: the polar of the Minkowski sum of line segments is the convex hull of (different) line segments; this matches the description of the unit ball of $\ell^1$. The above does not apply in your case since you assume nonempty interior.
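Here is a small numeric sketch of the $\ell^1/\ell^\infty$ example in the plane, using scipy. The setup, with $K=\mathrm{conv}\{\pm e_1\}$ and $L=\mathrm{conv}\{\pm e_2\}$, is my own illustrative choice.

```python
import numpy as np
from scipy.spatial import HalfspaceIntersection

# K + L is the square with vertices (+-1, +-1); its polar is
# {x : <v, x> <= 1 for every vertex v}, which should be the convex hull
# of K and L, i.e. the ell^1 unit ball with vertices (+-1, 0), (0, +-1).
square = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
halfspaces = np.hstack([square, -np.ones((4, 1))])   # rows [v, -1]: v.x - 1 <= 0
polar = HalfspaceIntersection(halfspaces, np.zeros(2))
print(np.round(polar.intersections, 6))
```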