Does the inclusion of the kernels of two linear forms imply that one of the linear forms is a multiple of the other?
You don't need the fact that $V$ is finite-dimensional. If $V$ is a vector space over $K$ and $f, g \in V^{*}$, we can prove that $\ker f \subseteq \ker g$ if and only if there exists some scalar $\lambda$ such that $g = \lambda f$. $(\Leftarrow)$ This direction is easy. $(\Rightarrow)$ This requires a little more work. Notice that $K$ is a one-dimensional vector space, so $\text{im }f$, which is a subspace of $K$, must be finite-dimensional as well. If $\text{im }f = \{ 0 \}$, then $f$ is the zero linear map from $V$ to $K$, so $g$ must be the zero map as well and any $\lambda \in K$ will do. Suppose $\text{im } f \neq \{ 0 \}$. Then $\text{im } f$ must be all of $K$, so we can pick a vector $u \in V$ such that $f(u) = 1$. Then $f(u)$ is a basis of $K$, so there must be some scalar $\lambda$ such that $g(u) = \lambda f(u)$. Now let $v \in V$ be arbitrary. Then $f(v) = \alpha f(u)$ for some scalar $\alpha$. We see that $f(v - \alpha u) = 0$, so $v - \alpha u \in \ker f$. By our hypothesis, $v - \alpha u$ is in $\ker g$ as well, so $$ g(v - \alpha u) = 0 .$$ This implies that $g(v) = \alpha g(u) = \alpha (\lambda f(u)) = \lambda (\alpha f(u) ) = \lambda f(v).$ Since $v$ was arbitrary, we can conclude that $g = \lambda f$. Addendum: This result depends on the fact that $K$ is a one-dimensional vector space. In a more general setting, the following two results hold: Suppose $T_{1}, T_{2}$ are linear maps from $V$ to $W$ and $W$ is finite-dimensional. Then $\ker T_{1} \subseteq \ker T_{2}$ if and only if there exists a linear map $S$ from $W$ to $W$ such that $T_{2} = ST_{1}$. Every linear map from a one-dimensional vector space to itself is multiplication by some scalar.
Deriving the formula for the $n^{th}$ tetrahedral number
If one is in a combinatorial mood, one can do it basically without computation, as follows. Note that $\frac{(n+1)(n)}{2}=\binom{n+1}{2}$. Note also that $\frac{(x^2+x)(x+2)}{6}=\binom{x+2}{3}$. So we want to show that $$\sum_1^x \binom{n+1}{2}=\binom{x+2}{3}.\tag{1}$$ But Formula 1 has a simple combinatorial interpretation. Imagine that we are choosing $3$ numbers from the numbers $1,2,\dots,x+2$. There are clearly $\binom{x+2}{3}$ ways to do it. That gives us the right-hand side of Formula 1. Now let us count the number of choices another way. There are $\binom{x+1}{2}$ choices where $1$ is the smallest number chosen. For the other two numbers can be chosen from the remaining $x+1$ in $\binom{x+1}{2}$ ways. There are $\binom{x}{2}$ choices where $2$ is the smallest number chosen. For the other two numbers can be chosen from the remaining $x$ in $\binom{x}{2}$ ways. And so on. Finally, there are $\binom{2}{2}$ ways to choose so that $x$ is the smallest number chosen. Add up. We get the left-hand side of Formula 1.
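If you want to sanity-check identity (1) numerically, a few lines of Python (using the standard-library `math.comb`) confirm it for small $x$:

```python
from math import comb

def lhs(x):
    # sum of the first x triangular numbers: sum_{n=1}^{x} C(n+1, 2)
    return sum(comb(n + 1, 2) for n in range(1, x + 1))

def rhs(x):
    # the closed form: C(x+2, 3)
    return comb(x + 2, 3)

# the two counts agree for every x tested
assert all(lhs(x) == rhs(x) for x in range(1, 50))
```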
Continuity of a two-variable function at $(0,0)$
In this case the answer is yes. The continuity of $g$ ensures the continuity of $f$. Basically, we are considering the functions $h(x,y)=x + 2y$ and $g(t) = \frac{\sin^{-1}(t)}{\tan^{-1}(2t)}$ and just saying that $f = g \circ h$. The results on limits and continuity of composed functions guarantee your claim.
If a metric space $X$ is compact (sequentially compact), then is it complete?
$\Bbb R$ is complete but not compact
Finding a tight upper bound
The set of seeds, $S$, contains $300$ vertices. The absolute upper bound on the number of phone numbers on which we can perform a warrantless query is $300 + (300 \cdot 100) + (300 \cdot 100 \cdot 99) + (300 \cdot 100 \cdot 99 \cdot 99) = 297030300$. Reasoning: We can take our seed set $S$. Assume none of the seeds are connected. Then we have $300$ vertices we can query. Now assume that each $s \in S$ is connected to $100$ distinct vertices. This gives another $300 \cdot 100$ vertices we can query: call this set $P$. Now take each one of these vertices in $P$. They are also all connected to another 100 vertices. In each case, however, they will connect to one vertex among our original $300$ seeds. So from each of our current vertices in $P$, we can only query another $99$ new vertices. So we add $300 \cdot 100 \cdot 99$ vertices. The step for the last hop is identical to the one described in the paragraph above.
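The arithmetic can be reproduced in a couple of lines, mirroring the count above:

```python
# seeds, then 1-hop, 2-hop, 3-hop neighbours, discounting the single
# edge from each later-hop vertex that leads back toward the seed set
seeds = 300
hop1 = seeds * 100   # each seed has 100 distinct neighbours
hop2 = hop1 * 99     # each 1-hop vertex contributes 99 *new* neighbours
hop3 = hop2 * 99     # likewise for the last hop
total = seeds + hop1 + hop2 + hop3
print(total)  # 297030300
```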
How would one find $\int e^{e^{2016x}+6048x} \,dx$?
For the integral $$ J = \int e^{e^{kx}+3kx} \,dx $$ make the substitution $t = e^{kx}$ (so $dt = kt\,dx$): $$ J = \frac{1}{k} \int e^t t^2 \,dt $$ Now IBP where $u = t^2$ and $dv = e^t \,dt$ $$ J = \frac{1}{k} e^t t^2 - \frac{2}{k} \int e^t t \,dt $$ again IBP with $f = t$ and $dg = e^t \, dt$ $$ J = -\frac{2}{k} e^t t + \frac{1}{k} e^t t^2 + \frac{2}{k} e^t + c$$ factor out $\frac{e^t}{k}$ and substitute back $t = e^{kx}$ $$ J = \frac{1}{k}e^{e^{kx}} \left(-2e^{kx} + e^{2kx} + 2 \right) + c $$
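A quick numeric check of the final antiderivative, with a small generic value $k=2$ standing in for $2016$ (to avoid overflow; the check is an illustration, not part of the derivation):

```python
import math

k = 2.0  # any nonzero k; the original problem has k = 2016

def integrand(x):
    return math.exp(math.exp(k * x) + 3 * k * x)

def antiderivative(x):
    t = math.exp(k * x)
    return math.exp(t) * (t * t - 2 * t + 2) / k

# the central difference of the antiderivative should match the integrand
x, h = 0.3, 1e-6
numeric = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
assert abs(numeric - integrand(x)) / integrand(x) < 1e-6
```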
Unbiasedness and Consistency of the following Estimator
If $X_i \sim B(1,\theta)$, then $E[X_i] = \dfrac{1}{1+\theta}$. So $\bar{X}$ is consistent for $\dfrac{1}{1+\theta}$, not $\theta$. EDIT: Assuming $B(1,\theta)$ means Bernoulli($\theta$), then I don't think there's anything wrong with your proof. Are you sure the manual is correct? EDIT 2: This is a duplicate question.
Global differential form arising from an hermitian line bundle
$\newcommand{\dd}{\partial}$By removing the zeros and poles as you describe, you can (and should) think of $s$ as a non-vanishing local holomorphic section. Since $L$ is locally trivial on $X$, there exists a non-vanishing local holomorphic section in a neighborhood of an arbitrary point of $X$. If $t$ is another, the ratio $s/t := f$ is a non-vanishing local holomorphic function, and \begin{align*} \omega_{s} &= -i\dd \bar{\dd} \log h(s, s) \\ &= -i\dd \bar{\dd} \log h(ft, ft) \\ &= -i\dd \bar{\dd} \log \bigl[f\bar{f}\, h(t, t)\bigr] \\ &= -i\dd \bar{\dd} \bigl[\log f + \log \bar{f} + \log h(t, t)\bigr] \\ &= -i\dd \bar{\dd} \log h(t, t) \end{align*} since $\log f$ is holomorphic (hence annihilated by $\bar{\dd}$) and $\log \bar{f}$ is antiholomorphic (annihilated by $\dd$).
Integrability of a function on the circle
Clearly $g$ is measurable. Use the following inequalities: For all $|x|\leq\pi/2$ we have $|\sin(x)|\geq2|x|/\pi$. For all $\pi/2\leq|x|\leq\pi$ we have $|\sin(x)|\geq-2|x|/\pi+2$. The first one is seen geometrically by considering the line through the points $(0,0)$ and $(\pi/2,\sin(\pi/2))$. The second one by considering the line through the points $(\pi/2,\sin(\pi/2))$ and $(\pi,\sin\pi)$. We have \begin{align} \int_{-\pi}^{\pi} \left|\frac{f(e^{2ix})}{\sin x}\right|\,dx &=\int_{-\pi}^{-\pi/2} \left|\frac{f(e^{2ix})}{\sin x}\right|\,dx + \int_{-\pi/2}^{\pi/2} \left|\frac{f(e^{2ix})}{\sin x}\right|\,dx + \int_{\pi/2}^{\pi} \left|\frac{f(e^{2ix})}{\sin x}\right|\,dx \\ &=: I_1+I_2+I_3 \end{align} Let's show that $I_1,I_2,I_3<\infty$. First, $I_1$ and $I_3$ are treated similarly so let's show $I_1<\infty$ : \begin{align} I_1 &\leq \int_{-\pi}^{-\pi/2} \frac{|f(e^{2ix})|}{-|2x|/\pi+2}\,dx\\ &=\frac{\pi}{2}\int_{-2\pi}^{-\pi}\frac{|f(e^{ix})|}{x+2\pi}\,dx\\ &=\frac{\pi}{2}\int_0^{\pi}\left|\frac{f(e^{ix})}{x}\right|\,dx\\ &\leq\frac{\pi}{2}\int_{-\pi}^{\pi}\left|\frac{f(e^{ix})}{x}\right|\,dx\\ &<\infty \end{align} Finally, \begin{align} I_2 &\leq \pi \int_{-\pi/2}^{\pi/2}\left|\frac{f(e^{2ix})}{2x}\right|\,dx\\ &=\frac{\pi}{2}\int_{-\pi}^{\pi}\left|\frac{f(e^{ix})}{x}\right|\,dx\\ &<\infty \end{align}
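Both linear lower bounds on $|\sin x|$ are easy to confirm numerically on a grid:

```python
import math

# check |sin x| >= 2|x|/pi        on |x| <= pi/2
# and   |sin x| >= -2|x|/pi + 2   on pi/2 <= |x| <= pi
for i in range(-1000, 1001):
    x = math.pi * i / 1000
    if abs(x) <= math.pi / 2:
        assert abs(math.sin(x)) >= 2 * abs(x) / math.pi - 1e-12
    else:
        assert abs(math.sin(x)) >= -2 * abs(x) / math.pi + 2 - 1e-12
```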
Prove that $(U^{t})^{-1}=(U^{-1})^{t}$.
$$ U^{-1} U = I \implies U^{-1} \alpha_j' = \alpha_j \implies (U^{-1})^t f_i = f_i' $$ $$ (U^t)^{-1} U^t = I \implies (U^t)^{-1} f_i = f_i' $$ Equating expressions for $f_i'$ gives $(U^{-1})^t = (U^t)^{-1}$ on the basis $f_i$, so those operators are equal by linearity.
Evaluate $\ \int_0^3 x^3\,d[\frac{x}{2}]$
I think if $f(x)=\left\lfloor\dfrac{x}{2}\right\rfloor\!,$ then it's easier to work, on your domain, with $f(x)=U(x-2),$ the unit step function. Then $f'(x)=U'(x-2)=\delta(x-2)$ on your domain. You could use the identity $\int_a^b f(x)\,\delta(x-c)\,dx=f(c)$ if $a<c<b$.
Are 2 rare events that happen simultaneously likely dependent?
Suppose that $A$ and $B$ are disjoint events, each with positive probability. Can they be independent? No. This follows since $P(A)P(B) \gt 0$ yet $P(AB)=P(\emptyset)=0$. So, if $P(AB)=0$, then $P(A|B) = P(B|A) = 0$. Thus, you might ask yourself whether, once you have experienced one of the rare events $A$ or $B$, the other rare event can still occur after a period of time. That is: are $P(A|B)$ and $P(B|A)$ equal to $0$? If the answer is yes, then the two events are dependent. If not, however, it cannot simply be assumed that they are independent.
Proving a function of a bounded variation
It is enough to show that $f'$ is integrable; continuity at $0$ is not required. Can you verify that each of the two terms in $f'$ is integrable? Once you do that, use the fact that $f(y)-f(x)=\int_x^{y} f'(t) \, dt$ to show that $f$ is of bounded variation. [The total variation of $f$ is less than or equal to $\int_0^{1}|f'(t)|\, dt$.]
Is there an Asymptotic Formula for the Largest Prime Factor of a Number?
No, there is no asymptotic formula. In short, $P(n)$ is simply too rough. Most importantly, $P(n) = 2$ for infinitely many $n$, and $P(n) = n$ for infinitely many $n$. So the only bounds for an individual $P(n)$ that you can get are $$ 2 \leq P(n) \leq n,$$ which is not useful.
What does the notation min max mean?
The meaning will depend on context. Here it means that for each triple $\langle x,y,z\rangle$ such that $xyz=1$ we find the maximum of $x+y,x+z$, and $y+z$, and then we find the smallest of those maxima: it’s $$\min\Big\{\max\{x+y,x+z,y+z\}:xyz=1\Big\}\;.$$ In general it will be something similar: you’ll be finding the minimum of some set of maxima.
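As a concrete illustration (restricting to positive $x,y,z$, an assumption made here just for the demo), one can scan a grid over the constraint $xyz=1$ and take the minimum of the maxima:

```python
# numerically explore min over {xyz = 1, x,y,z > 0} of max(x+y, x+z, y+z);
# parametrize z = 1/(x*y) and scan a grid of candidate triples
best = float("inf")
steps = [0.25 * k for k in range(1, 33)]  # x, y in (0, 8]
for x in steps:
    for y in steps:
        z = 1.0 / (x * y)
        best = min(best, max(x + y, x + z, y + z))
print(best)  # minimum on this grid is 2.0, attained at x = y = z = 1
```

(By AM-GM, $\max\{x+y,x+z,y+z\} \ge \tfrac{2}{3}(x+y+z) \ge 2$ for positive reals with $xyz=1$, so the grid minimum of $2$ is in fact the true minimum on that domain.)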
Measure and integration(In specific about $L^1$)
Hint: suppose that $|f(x)| > \frac{1}{\mu(A)}\int_A |f|\,d\mu$ for a.e. $x$. What happens if you integrate both sides over $A$? Do you see why it is important that $0 < \mu(A) < \infty$ for this argument?
Scaling of a Proximal Operator - $\mathrm{Prox}_{f}(x)$ and $\mathrm{Prox}_{af}(x)$
No, $y_2-x'+ a \partial f(y_2)=0$ and $ay'-x'+a\partial f(y')=0$ do not imply that $y_2=ay'$. It is true when $f$ is a norm, because then $\partial f(y)=\partial f(ay)$ for all $a>0$. Here's a simple counterexample. Suppose $f$ is the indicator function of some closed convex set. Then the proximal operator is just the projection onto that set. That is, for some closed convex set $C$ $$ f(x) = \begin{cases} 0 & x \in C \\ +\infty & x \notin C \end{cases} $$ Then $\mathrm{Prox}_{af}(x)=\mathrm{Prox}_{f}(x) \in C$. Assume $C$ is not a cone. Then for some $x$ we must have $a \mathrm{Prox}_{f}(x/a) \notin C$, which means $\mathrm{Prox}_{af}(x) \ne a \mathrm{Prox}_{f}(x/a)$.
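Here is a one-dimensional instance of that counterexample, with $C=[1,2]$ (an interval, hence closed and convex but not a cone):

```python
# prox of the indicator of C = [1, 2] is the projection onto C
def proj_C(x):
    return min(max(x, 1.0), 2.0)

a = 2.0
x = 0.5
lhs = proj_C(x)           # Prox_{af}(x) = Prox_f(x) = projection, for any a > 0
rhs = a * proj_C(x / a)   # the scaling rule that fails here
print(lhs, rhs)  # 1.0 2.0 -- so Prox_{af}(x) != a * Prox_f(x/a)
```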
Proof for $Z^2$ not being cyclic
Hint: If $k(p,q)=(a,b)$ then $k$ must divide both $a,b$. What can you say about $(p,q)$ if you try to generate, say, $(0,1)$ and $(1,0)$?
Is there a constructive discontinuous exponential function?
Let $g(x)=\ln f(x)$, so that $g(x+y)=g(x)+g(y)$. The solutions to this equation are precisely the additive maps $\mathbb R\to\mathbb R$, i.e. the $\mathbb Q$-linear maps, and they are in bijection with solutions to your original equation. Since $\mathbb R$ is an infinite-dimensional $\mathbb Q$-vector space, there are many such maps besides $x\mapsto cx$. They are determined by their values on a Hamel basis for $\mathbb R$ over $\mathbb Q$. Does this answer your question?
If the Killing form of a Lie algebra is negative definite, then the Lie algebra is the Lie algebra of a compact semisimple Lie group?
The fact that every finite-dimensional Lie algebra is the Lie algebra of some Lie group is Lie's third theorem. Reference [1] below quotes Cartan's theorem, which establishes a correspondence between Lie algebras and 1-connected Lie groups. This implies that if the Lie algebras of $G$ and $H$ are isomorphic, then their universal covers $\tilde{G}$ and $\tilde{H}$ are isomorphic. Suppose that $G$ is compact and semi-simple; then its fundamental group is finite [2]. This implies that $\tilde{G}$ is a finite cover of $G$ and is also compact. [1] https://en.wikipedia.org/wiki/Lie%27s_third_theorem [2] https://mathoverflow.net/questions/95637/connected-compact-semisimple-lie-group-finite-fundamental-group If $G$ is a compact Lie group and ${\cal G}$ its Lie algebra, then $\exp:{\cal G}\rightarrow G$ is surjective: https://en.wikipedia.org/wiki/Exponential_map_(Lie_theory)#Surjectivity_of_the_exponential
Find the interval in which $x^n(1-x^2)$ converges pointwise and find its limit function.
Pointwise convergence concerns the ordinary limit at each fixed $x$. If $x = \pm 1$, then $f_{n}(x) = 0$ for all $n$. If $x \in (-1,1)$, then $x^{n} \to 0$ as $n \to \infty$. Thus, on $[-1,1]$, $f_{n}(x)=x^{n}(1-x^{2})$ converges pointwise to $f(x) = 0$.
A question about p-adic numbers
No, they are not zero. An element that has no inverse in $\mathbb{Z}_7$ need not be zero: $\mathbb{Z}_7$ is a ring like $\mathbb{Z}$, and it has many non-zero, non-invertible elements. We can describe the invertible elements of $\mathbb{Z}_7$: a sequence is invertible in $\mathbb{Z}_7$ if and only if its first component is not divisible by seven. The reason is that every element of $\mathbb{Z}/7^n$ which is coprime with $7$ has a unique inverse, so you can take the inverses componentwise and they are compatible with each other. From this fact you can see that every element of $\mathbb{Z}_7$ can be written in the form $7^n u$ where $u$ is invertible. Your second question is this fact in another language.
Determining whether the series: $\sum_{n=1}^{\infty} \tan\left(\frac{1}{n}\right) $ converges
Or use the limit comparison test with $\sum\frac1n$: by the standard limit $\frac{\tan x}{x}\to 1$ as $x\to 0$, we have $$\frac{\tan\left(\frac{1}{n}\right)}{\frac1n}\to1,$$ so the series diverges along with the harmonic series.
Transpose of a projection matrix
Remember that transposition and inversion commute, i.e. the transpose of the inverse is equal to the inverse of the transpose: $$\left(B^\mathrm{T}\right)^{-1} = \left(B^{-1}\right)^\mathrm{T}.$$ Using this fact, we have $$\left(\left(A^\mathrm{T}A\right)^{-1}\right)^\mathrm{T} = \left(\left(A^\mathrm{T}A\right)^{\mathrm{T}}\right)^{-1} = \left(A^\mathrm{T}A\right)^{-1},$$ where the last equality follows since $A^\mathrm{T}A$ is symmetric.
Two parts I cannot understanding on simple proof about Hilbert Space
If no such $z\neq 0$ in $M^{\perp}$ exists, then $M^{\perp}=\{0\}$, and since $M$ is a closed subspace this forces $M=H$ (every closed subspace is complemented: $H = M \oplus M^{\perp}$). As for the first question, I think you are misunderstanding the proof. The existence part does not assume $L(x)=\langle x, y\rangle$ for some $y$; you are using the value of $L$ at $z$ to create this $\alpha$ and hence $y$. We are claiming this $y$ will be the one such that $L(x)=\langle x, y\rangle$. The proof then goes on to show that this is indeed the case. Now the uniqueness says that if we have two such $y$'s that do this, they must actually be the same.
Finding wages of each worker
The relative wages of A, B and C are $8\cdot 3$, $9\cdot 5$ and $12\cdot 4$. The ratio of the relative wage of C to the sum of all relative wages is $r=\frac{12\cdot 4}{8\cdot 3+9\cdot 5+12\cdot 4}=\frac{16}{39}$. This is the proportion of $Rs. 1950$ that $C$ gets.
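In code, with exact rational arithmetic (reading the relative wages as daily wage times days worked):

```python
from fractions import Fraction

# relative wages: (daily wage) x (days worked)
rel = {"A": 8 * 3, "B": 9 * 5, "C": 12 * 4}
total_rel = sum(rel.values())        # 24 + 45 + 48 = 117
r = Fraction(rel["C"], total_rel)    # 48/117 = 16/39
c_share = r * 1950
print(r, c_share)  # 16/39 800
```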
Prove $\frac{\sin\theta}{1+\cos\theta} + \frac{1+\cos\theta}{\sin\theta} = \frac{2}{\sin\theta}$
$$\dfrac{\sin(\theta)}{1+\cos(\theta)}+\dfrac{1+\cos(\theta)}{\sin(\theta)}=\dfrac{\sin^2(\theta)+(1+\cos(\theta))^2}{\sin(\theta)(1+\cos(\theta))}$$ $$=\dfrac{\sin^2(\theta)+\cos^2(\theta)+1+2\cos(\theta)}{\sin(\theta)(1+\cos(\theta))}$$ $$=\dfrac{2+2\cos(\theta)}{\sin(\theta)(1+\cos(\theta))}$$ $$=\dfrac{2(1+\cos(\theta))}{\sin(\theta)(1+\cos(\theta))}$$ $$=\dfrac{2}{\sin(\theta)}$$
Show that a set of functions is dense in $L^2(0,2)$
Consider $$B=\{f\in C^0((0,2)):\ f(1)=0 \text{ and } \lim\limits_{x \to 0^+} f(x) = \lim\limits_{x \to 2^-} f(x) =0\}.$$ You have to prove that $B$ is dense in $L^2((0,2))$. For that, it is sufficient to prove that any continuous function $g$ with compact support included in $(0,2)$ is the limit in $L^2((0,2))$ of $g \cdot f_n$, where $f_n \in B$ is defined by $$f_n(x)= \begin{cases} nx & 0 < x \le 1/n\\ 1 & 1/n \le x \le 1-1/n\\ n(1-x) & 1-1/n \le x \le 1\\ n(x-1) & 1\le x \le 1+1/n\\ 1 & 1+1/n \le x \le 2-1/n\\ n(2-x)& 2-1/n \le x <2 \end{cases}$$ for $n \ge 3$. Drawing the graph of $f_n$ will help you to understand the background idea! This is also using the fact that for $\mathcal F \subseteq \mathcal G \subseteq \mathcal H$, if $\mathcal F$ is dense in $\mathcal G$ and $\mathcal G$ is dense in $\mathcal H$, then $\mathcal F$ is dense in $\mathcal H$.
Removing object from category
Yes. The only thing that could conceivably go wrong in the definition of a category would be composition of morphisms, but if you only remove the morphisms that start or end with the object that you’re removing, you’re never going to have problems with composition.
Model reduction of estimated state space models - System identification
From a statistical point of view, information criteria like the Akaike Information Criterion (AIC) or the Bayesian Information Criterion (BIC) provide hints for good model orders. Wikipedia provides a list with several more. (I guess this is what you were looking for.) From an engineering point of view, the classical way to test the suitability of identified models is to validate them with simulation data (take care to use data that is NOT used for the identification). Validation is heavily used in Machine Learning today. From a more philosophical point of view, the idea of a "true model" is somewhat deceptive. As Lennart Ljung points out in his well-known book, any model is a simplification of reality, so there is no such thing as a true model (and hence no true model order). The actual question is whether the model is suitable for the intended use (whatever that is in your case), and this question can often be answered by validation.
uniformly bounded sequence of non constant holomorphic functions
Suppose $f(z_0)=0$. Pick $r>0$ so that $f\ne 0$ when $|z-z_0|=r$. Apply Rouche's theorem in the disk $D=\{z: |z-z_0|\le r\}$ to the function $$nf(z) - (f_n(z)+\ln n)\tag{1}$$ The theorem applies when $n$ is large enough, because $|nf(z)|>|f_n(z)+\ln n|$ on the boundary of $D$. Why? Let $m=\min_{\partial D} |f|$ and $M=\sup_{n,z} |f_n(z)|$, and observe that $mn>M+\ln n$ when $n$ is large enough. This is a contradiction, since (1) is assumed not to have zeros.
Coordinate transformation by using the differential of a function
Unless your $x$ and $y$ mean something special, that final statement is not valid. From the discussion in the comments, all I can think of is causality, but without boundary conditions on which it could be based, it can't be justified and is in no way true in general. Go back to your lecturer. The onus is on them to explain things clearly.
How to compute $\sum\limits_{n=1}^{\infty}\frac{1}{x^n-1}$?
Let us apply the ratio test, so that the series converges if the following limit is less than 1 and diverges if it is greater than 1:$$\lim_{n\to\infty}\left|\frac{\frac{1}{x^{n+1}-1}}{\frac{1}{x^{n}-1}}\right|=\lim_{n\to\infty}\left|\frac{x^{n}-1}{x^{n+1}-1}\right|$$ For $\left|x\right|>1$ this is:$$\lim_{n\to\infty}\left|\frac{1-\frac{1}{x^{n}}}{x-\frac{1}{x^{n}}}\right|=\left|\frac{1}{x}\right|<1$$ For $\left|x\right|<1$ we can see the series diverges as the summand goes to $-1$. So the series converges only when $\left|x\right|>1$ . So when $\left|\frac{1}{x}\right|<1$ , and we can expand $\frac{1}{1-\left(\frac{1}{x}\right)^{n}}$ as a geometric series in that interval and it will converge.$$\sum_{n=1}^{\infty}\frac{1}{x^{n}-1}=\sum_{n=1}^{\infty}\frac{1}{x^{n}}\frac{1}{1-\left(\frac{1}{x}\right)^{n}}=\sum_{n=1}^{\infty}\frac{1}{x^{n}}\left(1+x^{-n}+x^{-2n}+\dots\right)$$ $$=\sum_{n=1}^{\infty}\left(x^{-n}+x^{-2n}+x^{-3n}+\dots\right)=\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}x^{-n\cdot m}=\sum_{\nu=1}^{\infty}d\left(\nu\right)\cdot x^{-\nu}$$ Here $d$ gives the number of divisors of $n$, including $1$ and $n$. The last equality is because for a given power $\nu$ of $x^{-1}$ , we will have a term for every pair $\left(n,m\right)$ such that the product $n\cdot m$ is $\nu$ , and there is a pair like this for each $n$ which divides $\nu$ .
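A numeric spot check of the final identity at $x=2$, truncating both series:

```python
# check sum_{n>=1} 1/(x^n - 1) = sum_{v>=1} d(v) x^{-v} at x = 2
def d(v):
    # number of divisors of v, including 1 and v
    return sum(1 for k in range(1, v + 1) if v % k == 0)

x = 2.0
left = sum(1.0 / (x ** n - 1) for n in range(1, 60))
right = sum(d(v) * x ** (-v) for v in range(1, 60))
assert abs(left - right) < 1e-12  # truncation tails are ~2^-59
```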
Standard result for $\log(x)$
The Harmonic series is what you are looking for.
A solution $y$ of an Nth order homogeneous linear ODE has an infinite number of zeros on a closed interval. Prove that $y$ is identically $0$.
As you point out, for $n$-th order ODEs this is insufficient, since you need the value of $n-1$ derivatives at $x_0$. However, you can remedy this: you can use the same approach to calculate the $m$-th derivative of $y$, by $y^{(m)}(x_0) = \lim_{k\to \infty} \frac{y^{(m-1)}(x_{n_k})}{x_{n_k}-x_0}$ and then use induction.
Dirichlet's Divisor Problem
You can use inclusion/exclusion: $$\sum_{n\leq x} d(n) = \sum_{mn\le x} 1 = \sum_{m\le \sqrt{x}} \ \sum_{n\le x/m} 1 + \sum_{n\le \sqrt{x}} \ \sum_{m\le x/n} 1 - \sum_{m\le\sqrt{x}} 1 \sum_{n\le\sqrt{x}} 1.$$ Now the first two double sums are the same (with the roles of $m$ and $n$ interchanged). Hence $$ \sum_{n\leq x} d(n) = 2 \sum_{n\le \sqrt{x}} \Big\lfloor\frac{x}{n}\Big\rfloor - \Big( \big\lfloor\sqrt{x}\big\rfloor \Big)^2$$ where $\lfloor x \rfloor$ denotes the largest integer less than or equal to $x$. Now use the fact that $\lfloor x \rfloor=x+O(1)$ to finish the proof. You will have to use the identity $$ \sum_{n\leq x} \frac{1}{n} = \log x +\gamma + O\Big(\frac{1}{x}\Big)$$ where $\gamma$ is Euler's constant.
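The identity before introducing the error terms can be verified directly for small $x$, using the fact that $\sum_{n\le x} d(n)=\sum_{n\le x}\lfloor x/n\rfloor$ (both count lattice points under the hyperbola $mn\le x$):

```python
import math

def divisor_sum_direct(x):
    # sum_{n <= x} d(n), counting pairs (m, n) with m*n <= x
    return sum(x // n for n in range(1, x + 1))

def divisor_sum_hyperbola(x):
    # 2 * sum_{n <= sqrt(x)} floor(x/n) - floor(sqrt(x))^2
    s = math.isqrt(x)
    return 2 * sum(x // n for n in range(1, s + 1)) - s * s

for x in range(1, 200):
    assert divisor_sum_direct(x) == divisor_sum_hyperbola(x)
```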
Transportation problem in supply chain
The question is tagged homework, so I'll help you but I won't give a complete solution. The first thing you need to consider is the objective: what are you minimising/maximising, and subject to which constraints? Getting the answers to these questions down on paper, expressed with algebra, can make the route to a solution easier to see than a few tables and a crazy diagram!
$\sin(x) \leq x$ on the interval $[0,1]$
Use differentiation. You know that at $x=0$, $\sin(0) = 0$. But since $\displaystyle \frac{d}{dx} \sin(x) = \cos(x)$, and $\displaystyle \frac{dx}{dx}=1$, you can use the fact that $|\cos(x)| \le 1$ for all $x$. Can you draw the conclusion, then?
How to solve these simultaneous equations using any better way?
Hint: $2x^2+5xy+2y^2=(2x+y)(x+2y)$
If $f$ is a pdf can we construct $g$ such that $x\sim U[0,1)$ implies $g(x)\sim f$
Let $F(x)=\int_{-\infty}^x f(u)\, du$ be the corresponding cdf of $f$. Let $F^{-1}(y)=\inf\{x\in\mathbb{R}: F(x)\geq y\}$ be the corresponding quantile function of $f$ or $F$. Then $F^{-1}$ is the function $g$ that you want. This is because $F^{-1}$ has the property that $F^{-1}(U)\sim f$ if $U$ is a uniform random variable on $[0,1]$. This is also how the inverse transform sampling method works.
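Here is the recipe applied to a concrete $f$, the exponential density (an example chosen here for illustration), whose quantile function has the closed form $F^{-1}(u)=-\ln(1-u)/\lambda$:

```python
import math
import random

def g(u, lam=1.0):
    # quantile function of the exponential(lam) distribution
    return -math.log(1.0 - u) / lam

# feed uniform samples through g and check the resulting mean
random.seed(0)
samples = [g(random.random()) for _ in range(200_000)]
mean = sum(samples) / len(samples)
# the exponential(1) mean is 1; the sample mean should be close
assert abs(mean - 1.0) < 0.02
```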
Solving $\log\vert z \vert = -2\arg(z)$
There's a small slip in your working: $z=e^{\theta(i-2)}$. That's the answer!
How many roots does $f=x^4+x^2+2$ have in $\mathbb Q[x]/(f)$?
The more concrete way of thinking about it is, if we take one of the roots such as $r = \sqrt{-\frac{1}{2}+i\frac{\sqrt{7}}{2}}$, which of the other roots can we express in terms of $r$? In this case, we have $-r = -\sqrt{-\frac{1}{2}+i\frac{\sqrt{7}}{2}}$, another root of $f$. The other two roots are $\pm \sqrt{-1 - r^2}$, so we want to know if there is any element of $\mathbb Q[x]/(f)$ which squares to $-1-x^2$. The brute force approach is to take an arbitrary element $p = a_0 + a_1 x + a_2 x^2 + a_3 x^3$, square it, and check if it's possible to get $p^2 = -1-x^2$. This is a painful but not really difficult computation; you'll quickly see that either $a_0 = a_2 = 0$ or $a_1 = a_3 = 0$, and then get a quadratic equation to solve with no rational roots. (So we conclude that $\mathbb Q[x]/(f)$ has only two elements $t$ such that $t^4+t^2+2=0$: $t= \pm x$.) It might also help to observe that if $p^2 = -1-x^2$, then $(px)^2 = -x^2-x^4 = 2$. I'm pretty sure that this is a contradiction, but I don't actually know what I'm talking about when it comes to field extensions - I just felt really bad when I came across this question and saw everybody interpreting it as "how do I find the roots of $x^4+x^2+2=0$?"
A question on notation related to covariant derivative
This use of notation is not limited to physics - it's very popular amongst differential geometers too. The reason it's useful to interpret things this way is because it makes it easier to write down coordinate-invariant expressions (which are those that are geometrically or physically meaningful) while retaining the power of index notation to clearly express complicated tensor products and contractions. We're much more likely to care about $(\nabla_\nu \eta)_\mu$ than we are $\partial_\nu \eta_\mu$, so why not make the former easier to write down? If you don't adopt this convention, then you either have to clutter your calculations with extra parentheses as in your $(\nabla_\nu \eta)_\mu$ (becomes extremely unwieldy once you start taking iterated derivatives), or forget about covariance entirely and work with partial derivatives and Christoffel symbols (ugly!).
Visualizing the line bundle associated to the sheaf $\mathcal{O}_{\mathbb{P}^1}(2)$
At Nicolas' recommendation, I will open up Huybrechts and expand a bit (this is on p. 91). I will use $\mathcal O$ to denote the total space of the trivial line bundle, and the usual variations to denote the associated tensor bundles. We have the inclusion $$ \mathcal O(-1) \subset \mathcal O^{\oplus n+1}, $$ where every fiber of the trivial bundle is the $\mathbb C^{n+1}$ of which we get $\mathbb P^n$ as a quotient, and the fiber of $\mathcal O(-1)$ over $[\ell] \in \mathbb P^n$ is $\ell \subset \mathbb C^{n+1}$. Since any inclusion of bundles induces an inclusion of tensor powers, we have $$ \mathcal O(-k) \subset \mathcal O^{\oplus (n+1)^k}. $$ Now any $F \in \mathbb C[z_1,...,z_{n+1}]_k$ induces a holomorphic map $\tilde{F}: \mathcal O^{\oplus (n+1)^k} \to \mathbb C$ which is linear on all fibers of $\mathcal O^{\oplus (n+1)^k} \to \mathbb P^n$ (hence the same thing as an $\mathcal O$-linear map to $\mathcal O$). Restricting $\tilde{F}$ to $\mathcal O(-k)$ thus produces a holomorphic section of $\mathcal O(k)$ by definition: recall that $\mathcal O(k) = \mathcal{H}om(\mathcal O(-k),\mathcal O)$. There are a bunch of details I'm leaving out (in particular you have to use power series to prove that the homogeneous holomorphic functions you obtain are in fact algebraic), but this hopefully conveys some of the geometry you were looking for.
When does $\cos\frac{\pi}{m}=2\cos\frac{\pi}{r}\cos\frac{\pi}{n}$ with $m,n,r \in \mathbb{Z}$ hold?
After my discussion with Jim, I have just confirmed that those are the only solutions. The simple proof follows from the fact that if $x\geq 4$, then $2\cos\dfrac{\pi}{x}\geq \sqrt{2}$. Multiplying both sides of the equation in the problem by $2$ gives $2\cos\dfrac{\pi}{m} = (2\cos\dfrac{\pi}{n})(2\cos\dfrac{\pi}{r})$; since $2\cos\dfrac{\pi}{m}< 2$ while $(2\cos\dfrac{\pi}{n})(2\cos\dfrac{\pi}{r})\geq 2$ for $r,n\geq 4$, the result follows. I should also add that I am only interested in non-units $m,n,r\in \mathbb{Z}$. If units are allowed, the only other solutions are: $m=1, n=-1$ (or $n=3$) and $r=3$ (or $r=-1$), AND $m=-1, n=1$ (or $n=-3$) and $r=-3$ (or $r=1$).
Generators of a finitely generated free module over a commutative ring
If $x_1,\ldots,x_m$ generate $L$, then you get a surjective $A$-module map $A^m\rightarrow L$. Tensoring with $k(\mathfrak{m})=A/\mathfrak{m}$, $\mathfrak{m}$ a maximal ideal, gives you a surjection from an $m$-dimensional $k(\mathfrak{m})$-vector space to an $n$-dimensional $k(\mathfrak{m})$-vector space, so $m\geq n$. If $n=m$, then you get a surjective endomorphism $L\rightarrow L$, and any surjective endomorphism of a finite $A$-module is injective. So in this case the elements form a basis.
If $u \in H^1(\Omega) \cap L^\infty(\Omega)$, is $u|_{\partial\Omega} \in L^\infty(\partial\Omega)$?
Consider the function $u + A$. It belongs to $H^1(\Omega)$ and is non-negative. A standard procedure yields a sequence $\{v_n\} \in C(\bar\Omega) \cap H^1(\Omega)$ with $v_n \ge 0$ and $v_n \to u + A$ in $H^1(\Omega)$. Now, $T v_n \ge 0$, since it corresponds with the usual trace of $v_n$. Since $T$ is continuous, you have $T v_n \to T(u + A)$ and $T(u+A) \ge 0$. Now, you can easily show $T(u + A) = T u + A$ and this yields $T u \ge -A$. Similarly, $T u \le A$ follows.
If $R\equiv RR$, then $R\equiv \emptyset\text { or } R\equiv \epsilon\text{ or } L(R)\text{ is infinite}$
Often, if you want to show that one of multiple conclusions is true, you can assume that all but one are false and show that the last one has to be true. Here, assume that $R \not\equiv \varnothing$ and $R \not\equiv \{\varepsilon\}$. We show that $L(R)$ must be infinite. We know that $L(R)$ must contain a nonempty string, call it $w$. Then we also know that $w^k$ satisfies $R^k$. But we have that $R \equiv R^k$ for any $k \in \mathbb{Z}^+$ (by induction, $R^{k+1} \equiv RR^k \equiv RR \equiv R$). Thus, $w^k \in L(R)$ for any $k \in \mathbb{Z}^+$. This shows that our language contains an infinite number of strings, and we are done. Note that it's not true that $L(R)$ being infinite implies that it is closed under Kleene-$*$. For example, consider $R = ab^*$. This contains an infinite number of strings, but is in fact not closed under Kleene-$*$, since every string in the language must contain exactly one $a$. In this case, we are close, since $R \equiv R^k$ for any $k \in \mathbb{Z}^+$, but we don't necessarily know if this is true for $k = 0$, which is required for closure under Kleene-$*$.
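One concrete regex with $R \equiv RR$ is $(ab)^*$ (an example chosen here for illustration); Python's `re` module lets you check both the "$w^k$ stays in the language" claim and the non-closure example:

```python
import re

# R = (ab)* satisfies R == RR (same language), and indeed w^k stays in L(R)
R = r"(ab)*"
w = "ab"
assert all(re.fullmatch(R, w * k) for k in range(1, 10))

# but an infinite language need not be closed under Kleene star:
# L(ab*) is infinite, yet ("ab")^2 = "abab" is not in it
assert re.fullmatch(r"ab*", "ab")
assert re.fullmatch(r"ab*", "abbb")
assert not re.fullmatch(r"ab*", "abab")
```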
How much RAM is needed for a $2^{14}$ square grid of double numbers?
A double precision number requires $8$ bytes of storage. The total number of bytes needed is then the number of double precision numbers multiplied by the number of bytes needed to store each. We have \begin{equation} \text{# bytes } = 8\cdot2^{14}\cdot2^{24} = 2^3\cdot2^{14}\cdot2^{24} = 2^{41}, \end{equation} which is approximately $2.2$ Terabytes.
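The same arithmetic in code, using the grid dimensions from the computation above:

```python
# 8 bytes per double, 2**14 * 2**24 grid entries (as in the computation above)
num_doubles = 2**14 * 2**24
num_bytes = 8 * num_doubles
assert num_bytes == 2**41
print(num_bytes / 1e12)  # about 2.199 terabytes (decimal TB)
```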
Problem : What are our ages now?
Let your age be $x$. If the age difference is $\Delta x$, in $\Delta x$ years our ages will sum to 63, so we have $2x+\Delta x = 63$, which is to say $\Delta x = 63-2x$. We also know that $\Delta x$ years ago you were twice my age, so $x-\Delta x = 2(x-2\Delta x)$ or $3\Delta x = x$. $$3\cdot 63 -6x = x$$ $$\implies x = 27$$ $$\implies \Delta x = 63-54 = 9$$ $$\implies\boxed{\text{You are 27, I am 18}}$$ So there seems to be a miscalculation by your teacher. (Obviously the reasoning is instructive but just to confirm, $9$ years ago you were $18$ and I was $9$, hence satisfying the first statement, and in $9$ years you will be $36$ and I will be $27$ meaning our ages will indeed sum to $63$)
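A brute-force search over integer ages confirms the algebra:

```python
# check both conditions for every plausible (your age, age difference) pair
solutions = []
for you in range(1, 64):
    for diff in range(1, you):
        me = you - diff
        sums_to_63 = (you + diff) + (me + diff) == 63  # in `diff` years
        was_twice = (you - diff) == 2 * (me - diff)    # `diff` years ago
        if sums_to_63 and was_twice:
            solutions.append((you, me))
print(solutions)  # [(27, 18)]
```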
Find the mean of the Geometric distribution from the MGF
By the quotient rule, $$M'_X(t) = \frac{(1-qe^t) \frac{dp}{dt} - p \frac{d}{dt} (1-qe^t)}{(1-qe^t)^2} = \frac{0 + pqe^t}{(1-qe^t)^2},$$ so $$E[X] = M'_X(0) = \frac{pq}{(1-q)^2} = \frac{pq}{p^2} = \frac{q}{p}.$$ Can you do the variance now?
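A numeric check of the mean, assuming the MGF $M_X(t) = \frac{p}{1-qe^t}$ implicit in the quotient rule above (differentiating numerically at $t=0$; $p=0.3$ is an arbitrary test value):

```python
import math

p = 0.3
q = 1 - p

def M(t):
    # MGF in the form whose derivative is taken above
    return p / (1 - q * math.exp(t))

# the central-difference derivative at 0 should equal the mean q/p
h = 1e-6
mean_numeric = (M(h) - M(-h)) / (2 * h)
assert abs(mean_numeric - q / p) < 1e-4
```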
Need explanation of the spinor norm
I'm not sure about an online source, but you can check section 9.3 of Quadratic and Hermitian Forms by Scharlau, which gives a reasonable amount of detail. It is called the spinor norm because it is actually naturally defined on the spinor group. Indeed, you have a natural involution $x\mapsto \sigma(x)$ on the Clifford algebra $C(V,q)$ of a quadratic space $(V,q)$ (which is characterized by the fact that it is the identity on $V$), and thus you have a "norm" $N: C(V,q)\to C(V,q)$ given by $x\mapsto x\sigma(x)$. (This may be reminiscent of the quaternion norm.) Then if you restrict $N$ to the spinor group $\Gamma(V,q)\subset C(V,q)$, you actually get a group morphism $\Gamma(V,q)\to K^*$, which induces the spinor norm $O(V,q)\to K^*/K^{*2}$. Note that it is not true that the spinor norm is trivial over $\mathbb{R}$; it is trivial for the usual scalar product, but there are other quadratic forms over $\mathbb{R}$ for which the spinor norm need not be trivial.
Minimum of a tail of computable function
Your hunch is right: $F$ does not have to be recursive. For simplicity, I'll think of $F$ and $G$ as infinite sequences of natural numbers. Here's the idea: consider the sequence $G$ whose $n$th term is $2$ if $\varphi_0(0)[n]\uparrow$ and $1$ otherwise. So e.g. if $\varphi_0(0)$ halts at stage $3$, then $G$ looks like $$2,2,2,1,1,1,1,1,...$$ The point is that we now have $F(0)=2$ iff $\varphi_0(0)\uparrow$, so we've encoded a bit of the halting problem into $F$. Of course, coding one bit isn't enough - we need to do better. So what we'll do is start with a "default" sequence $G_*$ (I'd call it "$G_0$" but that would be way too confusing) and fold in a lot of "drops" which code halting facts. To accommodate dropping, $G_*$ needs to grow without bound, so let's take it to be the identity function $0,1,2,3,...$ I'll also adopt the convention that $(i)$ for each $s$ there is at most one $e$ such that $\varphi_e(e)$ halts in exactly $s$ stages and $(ii)$ we never have $\varphi_e(e)$ halt in $\le e$ stages. (I believe in Soare's book this is called the "hat trick," but I don't have my copy on hand. It's a good exercise to show that this isn't bogus, either by modifying the construction below to work without it or - much more flexibly - passing from the usual universal Turing machine to one with the property above.) Now my sequence $G$ is simple: the $s$th term of $G$ is the unique $e$ such that $\varphi_e(e)$ halts in exactly $s$ stages, if such an $e$ exists, and is $s$ itself otherwise. Note that by the convention above, there are only finitely many possible candidates for this $e$ (namely, if $G(s)=e$ then $e\le s$ since no program with index at least $s$ halts in $s$ steps), so $G$ is in fact computable. 
For example, if $\varphi_1(1)$ halts at stage $2$, $\varphi_0(0)$ halts at stage $4$, and no programs halt at stages $0, 1$, or $3$, the sequence $G$ will begin $$0, 1, \color{red}{1}, 3, \color{red}{0}, ...$$ It's a good exercise to show that from the corresponding $F$ we can compute the halting problem. HINT: note that $F(e+1)\le e$ iff some $\varphi_e(e)$ halts at some stage $>e$. So to tell if $\varphi_e(e)$ halts, compute $F(e+1)$; if $F(e+1)=e$ it does and if $F(e+1)>e$ it doesn't. What if $F(e+1)<e$? Then we search for some $s\ge e+1$ such that $G(s)=F(e+1)$. Run $\varphi_e(e)$ for $s$ steps; if it halts in that time then it halts, and if it doesn't go ahead and compute $F(s)$. If $F(s)=e$ then ...
What is the relation of this line $y = ax +b$ with the points $(x_1, y_1), (x_2, y_2)$
Line $y=ax+b$ through points $(x_1,y_1)$ and $(x_2,y_2)$ satisfies $y_1=ax_1+b$ and $y_2=ax_2+b$. Then, from $a = (y_1-b)/x_1 = (y_2-b)/x_2$, the value for $b$ given in the question follows.
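A small helper makes the computation concrete; `line_through` is a hypothetical name, and the formulas are just the solution of the two linear equations above:

```python
def line_through(p1, p2):
    # solve y1 = a*x1 + b and y2 = a*x2 + b (assumes x1 != x2)
    (x1, y1), (x2, y2) = p1, p2
    a = (y2 - y1) / (x2 - x1)   # slope
    b = y1 - a * x1             # equivalently (x2*y1 - x1*y2) / (x2 - x1)
    return a, b

print(line_through((1, 3), (2, 5)))  # (2.0, 1.0), i.e. y = 2x + 1
```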
How do i determine maximum or minimum at (1,1) of function $ f(x,y)=(x-y)^{4} + (y-1)^{4}$
Hint: Notice that you are adding two non-negative terms! Thus, $f(x,y) \ge 0$ for all $x,y \in \mathbb R$. For which $x,y$ does $f(x,y) = 0$ hold?
The equation of a plane passing through three noncolinear points $p_1 = (x_1 , y_1 , z_1)$, $p_2 = (x_2 , y_2 , z_2)$, $p_3 = (x_3 , y_3 , z_3)$
As noticed, we should have a $p_3$ instead of a $p$ for the cross product used to determine the normal, then we can use the triple product determinant $$\left[(p_3 − p_1) \times (p_3 − p_2)\right] \cdot (p − p_3)=\det \begin{vmatrix}x_3-x_1&y_3-y_1&z_3-z_1\\x_3-x_2&y_3-y_2&z_3-z_2\\x-x_3&y-y_3&z-z_3 \end{vmatrix}=0$$ to get the plane equation.
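The determinant formula can be evaluated numerically; this is a sketch with numpy, where `plane_through` is a hypothetical helper returning $\mathbf n$ and $d$ with plane equation $\mathbf n\cdot(x,y,z)+d=0$:

```python
import numpy as np

def plane_through(p1, p2, p3):
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    n = np.cross(p3 - p1, p3 - p2)   # normal vector (p3 - p1) x (p3 - p2)
    d = -n.dot(p3)
    return n, d

n, d = plane_through((0, 0, 0), (1, 0, 0), (0, 1, 0))
print(n)   # [0. 0. 1.], so the plane is z = 0
```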
$f$ is irreducible $\iff$ $G$ act transitively on the roots
Proposition. Let $P\in K[X]\setminus K$ be separable over $K$ and let $L$ be a splitting field of $P$ over $K$. Let $\mathcal{R}_L(P)$ be the set of roots of $P$ in $L$. (i) $\textrm{Gal}(L/K)$ acts on $\mathcal{R}_L(P)$. (ii) $P$ is irreducible over $K$ if and only if $\textrm{Gal}(L/K)$ acts transitively on $\mathcal{R}_L(P)$. Proof. $L/K$ is a Galois extension, since $L$ is a splitting field of a separable polynomial over $K$. Let: $$G_P:=\textrm{Gal}(L/K).$$ (i) There exists $\lambda\in K^\times$ such that: $$P=\lambda\prod_{\eta\in\mathcal{R}_L(P)}(X-\eta).$$ Let $g\in G_P$ and $\eta\in\mathcal{R}_L(P)$; since $P\in K[X]$ and $L^{G_P}=K$, one has: $$P(g(\eta))=g(P(\eta))=g(0)=0.$$ Therefore, $g(\eta)\in\mathcal{R}_L(P)$ and the following map is well-defined: $$\left\{\begin{array}{ccc}G_P & \rightarrow & \mathfrak{S}(\mathcal{R}_L(P))\\g & \mapsto & g_{\vert\mathcal{R}_L(P)}\end{array}\right..$$ Hence $G_P$ acts on $\mathcal{R}_L(P)$. (ii) Let $\{\mathcal{O}_i\}_{i\in\{1,\ldots,k\}}$ be the pairwise distinct orbits of the action of $G_P$ on $\mathcal{R}_L(P)$. One has: $$\mathcal{R}_L(P)=\coprod_{i=1}^k\mathcal{O}_i.$$ Therefore, one derives: $$P=\lambda\prod_{\eta\in\mathcal{R}_L(P)}(X-\eta)=\lambda\prod_{i=1}^k\prod_{\eta\in\mathcal{O}_i}(X-\eta).$$ Notice that for all $i\in\{1,\ldots,k\}$, $\displaystyle\prod_{\eta\in\mathcal{O}_i}(X-\eta)$ is irreducible over $K$, since it is the minimal polynomial of an element of $\mathcal{O}_i$. Hence, $P$ is irreducible if and only if $k=1$, if and only if there is only one orbit of the action of $G_P$ on $\mathcal{R}_L(P)$, if and only if $G_P$ acts transitively on $\mathcal{R}_L(P)$. $\Box$ N.B. If you need it, I can explain why $\displaystyle\prod_{\eta\in\mathcal{O}_i}(X-\eta)$ is the minimal polynomial of an element of $\mathcal{O}_i$. To answer your questions precisely: Q1) You're right, this means that there is only one orbit of the action of $G_P$ on $\mathcal{R}_L(P)$. 
Q2) Your teacher seems to be wrong: if $P$ is irreducible and $P(\alpha)=0$, then $P$ is the minimal polynomial of $\alpha$ (up to normalization). However, if $P$ isn't irreducible this isn't true! Indeed, let $K:=\mathbb{Q}$ and $P:=X^4-1$; then $L:=\mathbb{Q}(i)$ is a splitting field of $X^4-1$ over $\mathbb{Q}$. However, $X^4-1$ is not the minimal polynomial of $i$ over $\mathbb{Q}$, which is $X^2+1$. For Q3) and Q4), I am not familiar with your construction.
Find x-intercept and vertical asymptotes at rational function
You are right. I think it's better to write the following. $$\lim_{x\rightarrow2}\frac{(x+1)(x-1)}{(x-1)(x-2)(x+3)}=\infty,$$ which says $x=2$ is a vertical asymptote. $$\lim_{x\rightarrow-3}\frac{(x+1)(x-1)}{(x-1)(x-2)(x+3)}=\infty,$$ which says $x=-3$ is a vertical asymptote. $$\lim_{x\rightarrow1}\frac{(x+1)(x-1)}{(x-1)(x-2)(x+3)}=-\frac{1}{2},$$ which does not give an asymptote. Actually, we don't need the one-sided limits here, but for completeness: $$\lim_{x\rightarrow2^+}\frac{(x+1)(x-1)}{(x-1)(x-2)(x+3)}=+\infty,$$ $$\lim_{x\rightarrow2^-}\frac{(x+1)(x-1)}{(x-1)(x-2)(x+3)}=-\infty,$$ $$\lim_{x\rightarrow-3^+}\frac{(x+1)(x-1)}{(x-1)(x-2)(x+3)}=+\infty$$ and $$\lim_{x\rightarrow-3^-}\frac{(x+1)(x-1)}{(x-1)(x-2)(x+3)}=-\infty.$$
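A numerical spot-check of these limits:

```python
f = lambda x: (x + 1)*(x - 1)/((x - 1)*(x - 2)*(x + 3))

print(f(1 + 1e-9))    # close to -0.5: removable singularity, no asymptote at x = 1
print(f(2 + 1e-9))    # very large positive: vertical asymptote at x = 2
print(f(-3 - 1e-9))   # very large negative: vertical asymptote at x = -3
```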
Proof of Divergence Theorem for radial invariant function
Write $\lVert x \rVert = r$ for brevity. Then $\partial_i r^2 = 2x_i$, so $\partial_i r = \partial_i \sqrt{r^2} = \frac{1}{2\sqrt{r^2}}2x_i = x_i/r $. Now, I think you actually want the field to be $g(r) \hat{x} = xg(r)/r $ (so that the radial direction given to it is as a unit vector) and so $$ \frac{\partial}{\partial x_i} \left(g(r)\frac{x_i}{r}\right) = \frac{g(r)}{r}\operatorname{div}{x}+\left( \frac{g'(r)}{r} -\frac{g(r)}{r^2} \right)\frac{x_ix_i}{r} = g'(r)+\frac{n-1}{r}g(r). $$ Now, it is useful to note that this is $$ \frac{1}{r^{n-1}}(r^{n-1}g(r))'. $$ The radial integration measure in $n$ dimensions is found by taking the volume of a spherical annulus as the width goes to zero, which is $S(r) \, dr= \frac{2\pi^{n/2}}{\Gamma(n/2)} r^{n-1}\, dr$. Hence the divergence theorem gives $$ S(1) R^{n-1}g(R) = S(1)\int_0^R \frac{1}{r^{n-1}}(r^{n-1}g(r))' r^{n-1} \, dr, $$ which is clearly the case using the Fundamental Theorem of Calculus.
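The divergence formula can be verified symbolically for a concrete case; a sketch with sympy in $n=3$ dimensions, assuming the sample profile $g(r)=r^2$ (so $g'+\frac{n-1}{r}g = 2r+2r = 4r$):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
g = r**2                                    # sample radial profile g(r) = r^2
F = [g*c/r for c in (x, y, z)]              # the field g(r) * x / ||x||
div = sum(sp.diff(Fc, v) for Fc, v in zip(F, (x, y, z)))
print(sp.simplify(div - 4*r))               # 0, matching g'(r) + (n-1) g(r)/r
```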
How to derive this curious approximation to the cube root of $a + bi$?
An iterative approximation of the cube root

The intention of the construction of the fraction $F(z)$ is that if $k=k_0$ is chosen such that $z=c/k^3$ has $|z|\approx 1$ and $|\arg(z)|<\frac\pi3$, then $k_+=k·F(z)$ is a good approximation of $\sqrt[3]c$. One obtains successively more accurate root approximations by iterating $$ k_{n+1}=k_n·F(c/k_n^3) $$ In view of that iteration it is sensible to express the error in terms of powers of $x=z-1$.

On the order of approximation

Let's reverse engineer this. Use, for instance, the Magma CAS online calculator with the commands

    PS<x> := PowerSeriesRing(Rationals());
    P<z> := PolynomialRing(Rationals());
    num := Evaluate(29*z^3+261*z^2+255*z+22, 1+x);
    num; // 567 + 864*x + 348*x^2 + 29*x^3
    den := Evaluate(7*z^3+165*z^2+324*z+71, 1+x);
    den; // 567 + 675*x + 186*x^2 + 7*x^3
    f := num/den + O(x^8);
    f; // 1 + 1/3*x - 1/9*x^2 + 5/81*x^3 - 10/243*x^4 + 461/15309*x^5 - 7430/321489*x^6 + 367466/20253807*x^7 + O(x^8)
    f^3; // 1 + x - 1/5103*x^5 + 34/35721*x^6 - 12361/6751269*x^7 + O(x^8)

which shows that the expression is correct to order $O((z-1)^5)$. With degree $3$ in both numerator and denominator and thus $2·4-1$ free coefficients, one could find an expression that is $O((z-1)^7)$, i.e., the Taylor series expressions of both sides match in the first $7$ terms. However, the loss in error order around the origin might have been used to reduce the maximal error in the disk around $1$, or at least around the segment of the unit circle that is relevant according to the initial considerations.

Some related balanced Padé approximants

The 2/2 order 5 Padé approximant is $$ \sqrt[3]z=\frac{14z^2 + 35z + 5}{5z^2 + 35z + 14} + O((z-1)^5) $$ and the 3/3 order 7 Padé approximant is $$ \sqrt[3]z=\frac{7z^3 + 42z^2 + 30z + 2}{2z^3 + 30z^2 + 42z + 7} + O((z-1)^7) $$

Root iterations comparing the 3 fractions

From WP take the moderately interesting test case $c=11+197i$ with initial guess $k_0=6$. 
In Python define

    def FracLud(z):  return (((29*z+261)*z+255)*z+22)/(((7*z+165)*z+324)*z+71)
    def FracPad1(z): return (2*z+1)/(z+2)
    def FracPad2(z): return ((14*z+35)*z+5)/((5*z+35)*z+14)
    def FracPad3(z): return (((7*z+42)*z+30)*z+2)/(((2*z+30)*z+42)*z+7)

and iterate

    c = 11+197j
    k = 6
    for _ in range(4):
        z = c/k**3; k *= FracLud(z); print(k, abs(c - k**3))

    (5.07502720254+2.80106967751j) 2.5577949438
    (5.09495916653+2.81659441015j) 1.39979625158e-11
    (5.09495916653+2.81659441015j) 1.71121351099e-13
    (5.09495916653+2.81659441015j) 8.54314994314e-14

    k = 6
    for _ in range(4):
        z = c/k**3; k *= FracPad1(z); print(k, abs(c - k**3))

    (4.67251486867+3.25849790265j) 60.612721727
    (5.09919052684+2.81457943777j) 0.476738798014
    (5.09495916512+2.8165944087j) 2.05734637949e-07
    (5.09495916653+2.81659441015j) 8.54314994314e-14

    k = 6
    for _ in range(4):
        z = c/k**3; k *= FracPad2(z); print(k, abs(c - k**3))

    (5.16659218119+2.75059104398j) 9.95653080095
    (5.09495916898+2.81659441176j) 2.97864153108e-07
    (5.09495916653+2.81659441015j) 2.89169943033e-14
    (5.09495916653+2.81659441015j) 8.54314994314e-14

    k = 6
    for _ in range(4):
        z = c/k**3; k *= FracPad3(z); print(k, abs(c - k**3))

    (5.08204659353+2.82547419372j) 1.59145673695
    (5.09495916653+2.81659441015j) 8.54314994314e-14
    (5.09495916653+2.81659441015j) 2.89169943033e-14
    (5.09495916653+2.81659441015j) 8.54314994314e-14

so the order 5 Ludenir/Luderian method converges slightly faster than the computationally cheaper order 5 Padé2 method, but noticeably slower than the order 7 Padé3 method that has the same computational complexity. The computationally simplest Halley/Padé1 method needs 4 steps where Padé3 needs 2. In terms of operations, there are 4*(2 div + 2 mul) (discounting addition and multiplication by small constants) versus 2*(2 div + (2+6) mul). Counting 1 complex division as about equal to 2 complex multiplications, this is an equal effort.
Can multiple events have more than a 100% chance to fire?
Your friend is correct. Multiplying $2\times50\%=100\%$ implicitly assumes that the system has "memory": it would mean, for example, that if you toss a coin and get a head, then the next toss is guaranteed to be a tail, since heads and tails are equally likely. But that kind of reasoning is clearly absurd. What the previous toss gave has nothing to do with the current toss, provided the coin is fair and not rigged in some fashion. Returning to your final question: the probability of hitting at least $2$ balls is $1$ minus the probability of missing at least $5$ of the $6$ balls, i.e. missing exactly $5$ or all $6$. $$1 -\binom{6}{5}\left(\frac{1}{2}\right)^{6}-\binom{6}{6}\left(\frac{1}{2}\right)^{6} = 1-\frac{7}{64}=\frac{57}{64}\approx 89.1\%$$ Here $\binom{n}{r}$ is the binomial coefficient.
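Note that the complement must include both the five-miss and the all-miss outcomes; a quick numeric check in Python:

```python
from math import comb

# P(at least 2 hits in 6 independent 50% shots)
# = 1 - P(exactly 5 misses) - P(exactly 6 misses)
p_at_least_2 = 1 - (comb(6, 5) + comb(6, 6)) / 2**6
print(p_at_least_2)   # 0.890625 = 57/64
```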
Is it true that $\mathbb{C}\otimes_\mathbb{Z}\mathbb{C}=\mathbb{C}$?
$\def\tensor{\otimes}\def\C{{\mathbb C}}\def\Z{{\mathbb Z}}\def\Q{{\mathbb Q}}$Let $G$ be an abelian group and $b\colon \C \times \C \to G$ $\Z$-bilinear. Then $b$ is $\Q$-bilinear, as for $q=\frac mn \in \Q$ and $z,w \in \C$: \[ b(qz,w) = b\left(\frac mn z, w\right) = b\left(\frac mn z, n\cdot\frac 1n w\right) = b\left(n\cdot\frac mn z, \frac 1n w\right) = b\left(mz, \frac 1n w\right) = b\left(z, \frac mn w\right) = b(z,qw) \] So $b$ induces a unique homomorphism $\beta\colon\C \otimes_\Q \C \to G$. That is, as abelian groups $\C \otimes_\Z\C \cong \C \otimes_\Q \C$. But the latter is isomorphic to $\C$ as a $\Q$-vector space (both have a $\Q$-basis of cardinality $2^{\aleph_0} \cdot 2^{\aleph_0} = 2^{\aleph_0}$), hence as an abelian group. So $\C \tensor_\Z \C \cong \C$ as abelian groups.
How to find cosh(arcsinh(f(x)))?
Hyperbolic functions satisfy the fundamental identity $$\cosh(x)^2 - \sinh(x)^2 = 1.$$ This identity comes from the interpretation of hyperbolic functions in terms of hyperbolic triangles, cf. Wikipedia for example. So now $$\cosh(\operatorname{argsinh}(y))^2 - \sinh(\operatorname{argsinh}(y))^2 = 1 \implies \cosh(\operatorname{argsinh}(y)) = \sqrt{1 + y^2}$$ because $\cosh$ is always nonnegative. Replace $y$ by $f(x)$ to get the answer: $\sqrt{1+(3x)^2} = \sqrt{1+9x^2}$.
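A quick numeric check at an arbitrary point:

```python
import math

y = 3 * 1.7   # f(x) = 3x evaluated at the arbitrary point x = 1.7
assert math.isclose(math.cosh(math.asinh(y)), math.sqrt(1 + y**2))
print(math.sqrt(1 + 9*1.7**2))
```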
Show that the sequence defined recursively by $a_{n+1} = \sqrt{3a_n + 1}$ is increasing
$a_{n+1} - a_n = \sqrt{3a_n+1} - \sqrt{3a_{n-1}+1} = \dfrac{3(a_n-a_{n-1})}{\sqrt{3a_n+1}+\sqrt{3a_{n-1}+1}} \geq 0$ since by the inductive hypothesis $a_n \geq a_{n-1}$, and the proof is complete.
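The monotonicity is easy to observe numerically; a sketch assuming the start value $a_1 = 1$ (any start below the fixed point $L=(3+\sqrt{13})/2$ works):

```python
import math

a = 1.0
for _ in range(20):
    nxt = math.sqrt(3*a + 1)
    assert nxt >= a          # the sequence never decreases
    a = nxt
print(a)                     # approaches (3 + sqrt(13))/2 ≈ 3.3028, the fixed point of x = sqrt(3x + 1)
```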
Is it possible for two different regions to have the same closure?
What about the punctured disk $\{z \in \mathbb{C} \mid 0 < |z - a| < r\}$?
Comparing Hartogs number to set of all sets of relations on a set
HINT: Suppose that $\mathscr{H}(A)=|\wp(\wp(A\times A))|$; then there is a bijection $$h:\mathscr{H}(A)\to\wp(\wp(A\times A))\;.$$ You already know that there is a surjection $$f:\wp(A\times A)\to\mathscr{H}(A)\;.$$ You would then have a surjection $$h\circ f:\wp(A\times A)\to\wp(\wp(A\times A))\;.$$
Why does this log simplify to this?
Assuming that $a$ is a positive real number you have that $a^{b-c} = a^b\times a^{-c}=a^b\times \dfrac{1}{a^c} = \dfrac{a^b}{a^c}$ This is simply an application of that using $e, 1, \ln 4$ in the place of $a,b,c$ respectively. Be extremely cautious if the base of the exponent is not a positive real number. Things very frequently break.
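A one-line numeric check of the special case in question:

```python
import math

# e^(1 - ln 4) = e / e^(ln 4) = e / 4
assert math.isclose(math.exp(1 - math.log(4)), math.e / 4)
print(math.e / 4)
```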
A priori error estimate for Dirichlet problem under geometric uncertainty
You can use the maximum principle to get error estimates. Since $w:=u_1-u_2$ is harmonic in $D_1$ we have $$\max_{D_1} |u_1-u_2| \leq \max_{\partial D_1}|u_1-u_2| = \max_{\partial D_1} |u_2|.$$ Now, to estimate the boundary term, you need some notion of closeness of $D_1$ and $D_2$. For example, let us set $$\varepsilon = \max\{\text{dist}(x,\partial D_2) \, : \, x \in \partial D_1 \}.$$ Given $D_1,D_2$ are open bounded with sufficiently smooth boundaries, the solution $u_2$ is Lipschitz continuous, and so $|u_2(x)| \leq C\text{dist}(x,\partial D_2)$. Therefore $\max_{\partial D_1} |u_2| \leq C\varepsilon$ and so $$\max_{D_1} |u_1-u_2| \leq C\varepsilon.$$ There are other conditions you can place on the closeness of $D_1$ and $D_2$. If you want to measure the difference in terms of the measure of $D_2\setminus D_1$, I would try energy methods, though this may be harder. In general the solutions can be much different if the domains are not similar.
How can I prove that $\forall x \in D (P(x)\implies Q(X))$ is not equivalent to $(\forall x \in D, P(x))\implies (\forall x \in D, Q(X))$?
It is true that, if all people are left-handed, then all people are Chinese (because the hypothesis, "all people are left-handed," is false and therefore implies every statement). But it is not true that all left-handed people are Chinese.
Showing that (-1, 0 0) is the maximum of f(x, y, z)
Take the partial derivatives of f with respect to x, y, and z. Set those to zero. That will give you all the critical points. Now, you have to use the second-derivative test for multi-variable functions. Because your function is a three-variable function, this requires the 3 by 3 Hessian matrix $H = \matrix{f_{xx} & f_{xy} &f_{xz} \\ f_{yx} & f_{yy} &f_{yz} \\ f_{zx} & f_{zy} &f_{zz}}$ evaluated at the critical point. Look at its leading principal minors $D_1 = f_{xx}$, $D_2 = \det\left(\matrix{f_{xx} & f_{xy} \\ f_{yx} & f_{yy}}\right)$ and $D_3 = \det(H)$. If $D_1 > 0$, $D_2 > 0$ and $D_3 > 0$, the Hessian is positive definite and the point is a local minimum. If the signs alternate as $D_1 < 0$, $D_2 > 0$, $D_3 < 0$, the Hessian is negative definite and the point is a local maximum. If $D_3 \neq 0$ but neither sign pattern holds, the point is neither a maximum nor a minimum (it is a saddle point). If $D_3 = 0$, further analysis is required.
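Since the original $f$ isn't reproduced here, this is a sketch with sympy using a hypothetical function that does have a maximum at $(-1,0,0)$, just to illustrate the leading-principal-minor test:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = -(x + 1)**2 - y**2 - z**2               # hypothetical f with a maximum at (-1, 0, 0)

crit = sp.solve([sp.diff(f, v) for v in (x, y, z)], (x, y, z))
print(crit)                                  # {x: -1, y: 0, z: 0}

H = sp.hessian(f, (x, y, z))
minors = [H[:k, :k].det() for k in (1, 2, 3)]
print(minors)                                # [-2, 4, -8]: alternating signs starting negative,
                                             # so H is negative definite -> local maximum
```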
Can we permute the coefficients of a polynomial so that it has NO real roots?
Yes: put the $n+1$ largest coefficients on the even powers of $x$, and the $n$ smallest coefficients on the odd powers of $x$. Clearly the polynomial will have no nonnegative roots regardless of the permutation. Changing $x$ to $-x$, it suffices to show: if $\min\{a_{2k}\} \ge \max\{a_{2k+1}\}$, then when $x>0$,$$a_{2n}x^{2n} - a_{2n-1}x^{2n-1} + \cdots + a_2x^2 -a_1x+a_0$$is always positive. If $x\ge1$, this follows from $$ (a_{2n}x^{2n} - a_{2n-1}x^{2n-1}) + \cdots + (a_2x^2 -a_1x) +a_0 \ge 0 + \cdots + 0 + a_0 > 0. $$ If $0<x\le1$, this follows from \begin{multline*} (a_0 - a_1x) + (a_2x^2-a_3x^3) + \cdots + (a_{2n-2}x^{2n-2}-a_{2n-1}x^{2n-1}) + a_{2n}x^{2n} \\ \ge 0 + \cdots + 0 + a_{2n}x^{2n} > 0. \end{multline*}
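The construction is easy to test numerically; a sketch with numpy and an arbitrary set of $2n+1=7$ positive coefficients (the grid evaluation is a spot-check, not a proof):

```python
import numpy as np

coeffs = [5, 1, 4, 1, 3, 2, 6]         # arbitrary positive coefficients, n = 3
s = sorted(coeffs)
odds, evens = s[:3], s[3:]             # n smallest -> odd powers, n+1 largest -> even powers

# coefficient list from degree 2n down to 0
p = [evens.pop() if k % 2 == 0 else odds.pop() for k in range(6, -1, -1)]
print(p)                               # [6, 2, 5, 1, 4, 1, 3]

xs = np.linspace(-10, 10, 2001)
print(np.all(np.polyval(p, xs) > 0))   # True: positive on the grid, no real roots found
```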
Evaluating $\int_\gamma \frac{1}{(z-4)^2}$ where $\gamma$ is the circle of radius 2 centered at $4+i$
$$\oint_\gamma\frac1{(z-4)^2}dz=0$$ which is trivial, since the integrand is of the form $$f(z)=\dots+\frac{c_{-3}}{(z-4)^3}+\frac{c_{-2}}{(z-4)^2}+\boxed{\frac{c_{-1}}{z-4}}+c_0+c_1(z-4)+\dots$$ And any contour integral enclosing $z=4$ has $$\oint_\gamma f(z)dz=2\pi ic_{-1}$$
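A numerical check of the contour integral (a Riemann sum over the parametrization $z = 4+i+2e^{it}$):

```python
import numpy as np

m = 20000
t = np.linspace(0, 2*np.pi, m, endpoint=False)
z = 4 + 1j + 2*np.exp(1j*t)            # circle of radius 2 centred at 4 + i
dz = 2j*np.exp(1j*t) * (2*np.pi/m)     # dz = 2i e^{it} dt
integral = np.sum(dz / (z - 4)**2)
print(abs(integral))                    # ≈ 0: the pole z = 4 is inside, but c_{-1} = 0
```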
Formula of Squaring Sums / Integrals
For any $a \in \mathbb{R}$, consider the following: \begin{align} \left(\int_a^{\infty} f(x)dx \right)^2 &= \left( \int_a^{\infty} f(x) dx \right) \left( \int_a^{\infty} f(y) dy \right)\\ &= \int_a^{\infty} \left( \int_a^{\infty} f(x)dx \right) f(y) dy \\ &= \int_a^{\infty} \int_a^{\infty} f(x)f(y)dxdy. \end{align} The same principle holds in the discrete case: \begin{align} \left( \sum_{n=a}^{\infty} a_n \right)^2 &= \left( \sum_{m=a}^{\infty} a_m \right) \left( \sum_{n=a}^{\infty} a_n \right)\\ &= \sum_{m=a}^{\infty} \left( \sum_{n=a}^{\infty} a_n \right) a_m \\ & = \sum_{m=a}^{\infty} \sum_{n=a}^{\infty} a_m a_n. \end{align}
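A finite numerical stand-in makes the identity concrete (a sketch with a truncated geometric sequence):

```python
a = [1/2**n for n in range(30)]               # truncated geometric sequence
lhs = sum(a)**2
rhs = sum(am*an for am in a for an in a)
print(abs(lhs - rhs) < 1e-9)                  # True
```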
On the self-adjoint form for the general elliptic PDE
In fact, there exists a necessary and sufficient condition in order for an elliptic PDE of the form \eqref{a} to be put in the self-adjoint form \eqref{b}. In order to see this, let's first drop Einstein summation convention and show the summations explicitly. Then, formally (i.e. without considering, for the moment, the differentiability requirements of the functions involved) we have: $$ \DeclareMathOperator{\divg}{\nabla\cdot}\begin{split} \divg \left(\sum_{j=1}^n{a}_{ij}(x) \frac{\partial u(x)}{\partial x_j} \right) & = \sum_{i=1}^n\sum_{j=1}^n\frac{\partial}{\partial x_i}\left({a}_{ij}(x) \frac{\partial u(x)}{\partial x_j} \right) \\ & = \sum_{i=1}^n\sum_{j=1}^n\left[ \frac{\partial a_{ij}(x)}{\partial x_i } \frac{\partial u(x)}{\partial x_j} + a_{ij}(x) \frac{\partial^2 u(x)}{\partial x_i \partial x_j} \right] \end{split} $$ This implies that, for the two forms \eqref{a} and \eqref{b} of a PDE to be equivalent it should be $$ \sum_{i=1}^n \left[\sum_{j=1}^n \frac{\partial a_{ij}(x)}{\partial x_i } \frac{\partial u(x)}{\partial x_j} - b_i(x)\frac{\partial u(x)}{\partial x_i } \right]=0\label{1}\tag{1} $$ However, working a bit by using Kronecker's delta $\delta_{ij}$, we have $$ \begin{split} \sum_{i=1}^n \left[\sum_{j=1}^n \frac{\partial a_{ij}(x)}{\partial x_i } \frac{\partial u(x)}{\partial x_j} - b_i(x)\frac{\partial u(x)}{\partial x_i } \right] & = \sum_{i=1}^n \left[\sum_{j=1}^n \frac{\partial a_{ij}(x)}{\partial x_i } \frac{\partial u(x)}{\partial x_j} - \delta_{ij} b_i(x)\frac{\partial u(x)}{\partial x_j } \right] \\ & = \sum_{j=1}^n \left[\sum_{i=1}^n \frac{\partial a_{ij}(x)}{\partial x_i } \frac{\partial u(x)}{\partial x_j} - \delta_{ij} b_i(x)\frac{\partial u(x)}{\partial x_j } \right] \\ & = \sum_{j=1}^n \left[\sum_{i=1}^n \frac{\partial a_{ij}(x)}{\partial x_i } - \delta_{ij} b_j(x) \right]\frac{\partial u(x)}{\partial x_j } \end{split} $$ and, due to the arbitrariness of $\frac{\partial u(x)}{\partial x_i }$ for $i=1,\ldots, n$, we finally get 
the condition $$ \left[\sum_{i=1}^n \frac{\partial a_{ij}(x)}{\partial x_i } - \delta_{ij} b_j(x) \right]=0\qquad\forall j=1,\ldots, n \label{2}\tag{2} $$ which can be given an elegant form by putting $\big(a_{ij}(x)\big)_{i,j=1,\ldots,n}\triangleq \mathbf A(x)$ and $\mathbf b(x)\triangleq {(b_1,\ldots, b_n)}$: $$ \divg\mathbf{A}(x)= \mathbf b(x)\label{2'}\tag{2'} $$ Addendum: is it possible to find an equivalent form \eqref{b} for a given non self-adjoint PDE \eqref{a}? (Follow-up to the comments of Fizikus) Several answers to the problem of finding an equivalent symmetric form for a given PDE have been given by researchers who investigated the "inverse problem of the calculus of variations": this problem asks, for a differential equation (ordinary or partial, linear or nonlinear), to find a Lagrangian functional such that the given DE is its Euler-Lagrange equation. Since for a self-adjoint DE the solution of this problem is straightforward, the researchers sought ways of transforming non self-adjoint DEs into self-adjoint ones. In particular Copson [A1], for equations of type \eqref{a} whose coefficient matrix $A(x)$ is symmetric, constructed a function $\Phi(x)$ and a linear partial differential operator $\mathscr{L}_{\Phi}(x,\partial_i)$ such that the equation $$ e^{\Phi(x)} \big( \mathscr{L}_{\Phi}(x,\partial_i)u(x) - f(x)\big)=0 $$ has the self-adjoint form \eqref{b}. The Copson construction is explicit: apart from the original paper, it is also described by Filippov ([A2], §11.2 pp. 94-97), who also gives some "symmetrization methods" which work for particular classes or single equations, both linear and non-linear. Notes Formula \eqref{2}-\eqref{2'} appears in classical monographs on the theory of elliptic PDEs ([2], chapter 1, §6 p. 9 and [3] chapter I, §6 p. 12), where it is clearly stated that it is a necessary and sufficient condition for self-adjointness, without proof, possibly due to the basic character of the result. 
From the calculation above, it can be seen that condition \eqref{2}-\eqref{2'} is meaningful for weakly differentiable matrix functions $\mathbf{A}(x)$ and $\mathbf b(x)$. This implies also that all the above calculations are rigorously valid for an elliptic PDE with weakly differentiable coefficients as, for example, bounded differentiable functions. Equation \eqref{2}-\eqref{2'} in the homogeneous case ($\mathbf{b}(x)\equiv\mathbf{0}$) was explicitly solved by Bruno Finzi in 1932 (see [1]) for second, third and fourth order symmetric tensors, and later by Maria Pastori in 1942 (see [4]) for general, not necessarily symmetric, $n$-th order tensors. Perhaps their work could be of some interest in calculating "symmetrizing" factors for transforming the form \eqref{a} into the form \eqref{b}. References [1] Bruno Finzi, "Integrazione delle equazioni indefinite della meccanica dei sistemi continui", (Italian), Atti della Accademia Nazionale dei Lincei, Rendiconti, VI Serie 19, pp. 620-623 (1934), JFM 60.0708.02. [2] Carlo Miranda, Equazioni alle derivate parziali di tipo ellittico, (Italian), Ergebnisse der Mathematik und ihrer Grenzgebiete, 2. Heft, Berlin-Göttingen-Heidelberg: Springer-Verlag pp. VIII+222 (1955), MR0087853, Zbl 0065.08503. [3] Carlo Miranda (1970) [1955], Partial Differential Equations of Elliptic Type, Ergebnisse der Mathematik und ihrer Grenzgebiete – 2 Folge, Band 2, translated by Motteler, Zane C. (2nd Revised ed.), Berlin – Heidelberg – New York: Springer Verlag, pp. XII+370, doi:10.1007/978-3-642-87773-5, ISBN 978-3-540-04804-6, MR 0284700, Zbl 0198.14101. [4] Maria Pastori, "Integrale generale dell'equazione $\operatorname{div}\mathsf T=0$ negli spazi euclidei", (Italian) Rendiconti di Matematica e delle Sue Applicazioni, V Serie, 3, pp. 106-112 (1942), MR0018968 Zbl 0027.13302. 
Addendum references [A1] Edward Thomas Copson, "Partial differential equations and the calculus of variations", Proceedings of the Royal Society of Edinburgh 46, 126-135 (1926), JFM 52.0509.01. [A2] Vladimir Mikhailovich Filippov, Variational principles for nonpotential operators. With an appendix by the author and V. M. Savchin, Transl. from the Russian by J. R. Schulenberger. Transl. ed. by Ben Silver, Translations of Mathematical Monographs, 77. Providence, RI: American Mathematical Society (AMS). pp. xiii+239 (1989), ISBN: 0-8218-4529-2. MR1013998, ZBL0682.35006.
Existence of valuation rings in an algebraic function field of one variable
I borrowed the idea from Bourbaki's proof of the Krull-Akizuki theorem. Definition Let $A$ be a not-necessarily commutative ring. Let $M$ be a left $A$-module. Suppose $M$ has a composition series; the lengths of any two such series are the same by the Jordan-Hölder theorem. We denote it by $leng_A M$. If $M$ does not have a composition series, we define $leng_A M = \infty$. Lemma 1 Let $A = k[X]$ be a polynomial ring of one variable over a field $k$. Let $f$ be a non-zero element of $A$. Then $A/fA$ is a finite $k$-module. Proof: Clear. Lemma 2 Let $A = k[X]$ be a polynomial ring of one variable over a field $k$. Let $M$ be a torsion $A$-module of finite type. Then $M$ is a finite $k$-module. Proof: Let $x_1, ..., x_n$ be generating elements of $M$. There exists a non-zero element $f$ of $A$ such that $fx_i = 0$, $i = 1, ..., n$. Let $\psi:A^n \rightarrow M$ be the morphism defined by $\psi(e_i) = x_i$, $i = 1, ..., n$, where $e_1, ..., e_n$ is the canonical basis of $A^n$. By Lemma 1, $A^n/fA^n$ is a finite $k$-module. Since $\psi$ induces a surjective morphism $A^n/fA^n \rightarrow M$, $M$ is a finite $k$-module. QED Lemma 3 Let $A = k[X]$ be a polynomial ring of one variable over a field $k$. Let $M$ be an $A$-module. Then $leng_A M < \infty$ if and only if $M$ is a finite $k$-module. Proof: Suppose $leng_A M < \infty$. Let $M = M_0 \supset M_1 \supset ... \supset M_n = 0$ be a composition series. Each $M_i/M_{i+1}$ is isomorphic to $A/f_iA$, where $f_i$ is an irreducible polynomial in $A$. Since $dim_k A/f_iA$ is finite by Lemma 1, $dim_k M$ is finite. The converse is clear. QED Lemma 4 Let $A$ be a not necessarily commutative ring. Let $M$ be a left $A$-module. Let $(M_i)_I$ be a family of $A$-submodules of $M$ indexed by a set $I$. Suppose $(M_i)_I$ satisfies the following condition: $M = \cup_i M_i$, and for any $i, j \in I$, there exists $k \in I$ such that $M_i \subset M_k$ and $M_j \subset M_k$. Then $leng_A M = sup_i leng_A M_i$. 
Proof: Suppose $sup_i leng_A M_i = \infty$. Since $sup_i leng_A M_i \leq leng_A M$, $leng_A M = \infty$. Hence we can assume that $sup_i leng_A M_i = n < \infty$. Let $n = leng_A M_{i_0}$. For each $i \in I$, there exists $k \in I$ such that $M_{i_0} \subset M_k$ and $M_i \subset M_k$. Since $leng_A M_k = n$, $M_{i_0} = M_k$, hence $M_i \subset M_{i_0}$. Since $M = \cup_i M_i$, $M = M_{i_0}$. Hence $leng_A M = n$. QED Lemma 5 Let $A = k[X]$ be a polynomial ring of one variable over a field $k$. Let $K$ be the field of fractions of $A$. Let $M$ be a torsion-free $A$-module of finite type. Let $r = dim_K M \otimes_A K$. Let $f$ be a non-zero element of $A$. Then $leng_A M/fM \leq r(leng_A A/fA)$. Proof: There exists an $A$-submodule $L$ of $M$ such that $L$ is isomorphic to $A^r$ and $Q = M/L$ is a torsion module of finite type over $A$. Hence, by Lemma 2, $Q$ is a finite $k$-module. The kernel of $M/f^nM \rightarrow Q/f^nQ$ is $(L + f^nM)/f^nM$, which is isomorphic to $L/(f^nM \cap L)$. Since $f^nL \subset f^nM \cap L$, $leng_A M/f^nM \leq leng_A L/f^nL + leng_A Q/f^nQ \leq leng_A L/f^nL + leng_A Q$. Since $M$ is torsion-free, $f$ induces an isomorphism $M/fM \rightarrow fM/f^2M$. Hence $leng_A M/f^nM = n(leng_A M/fM)$. Similarly $leng_A L/f^nL = n(leng_A L/fL)$. Hence $leng_A M/fM \leq leng_A L/fL + (1/n) leng_A Q$. Since this holds for every $n$, $leng_A M/fM \leq leng_A L/fL$. Since $L$ is isomorphic to $A^r$, $leng_A L/fL = r(leng_A A/fA)$. Hence $leng_A M/fM \leq r(leng_A A/fA)$. QED Lemma 6 Let $A = k[X]$ be the polynomial ring of one variable over a field $k$. Let $K$ be the field of fractions of $A$. Let $M$ be a torsion-free $A$-module. Suppose $r = dim_K M \otimes_A K$ is finite. Let $f$ be a non-zero element of $A$. Then $leng_A M/fM \leq r(leng_A A/fA)$. Proof: Let $(M_i)_I$ be the family of finitely generated $A$-submodules of $M$. $M/fM = \cup_i (M_i + fM)/fM =\cup_i M_i/(M_i \cap fM)$. Since $fM_i \subset M_i \cap fM$, $M_i/(M_i \cap fM)$ is isomorphic to a quotient of $M_i/fM_i$. 
Hence, by Lemma 5, $leng_A M_i/(M_i \cap fM) \leq r(leng_A A/fA)$. Hence, by Lemma 4, $leng_A M/fM \leq r(leng_A A/fA)$ QED Lemma 7 Let $A = k[X]$ be a polynomial ring of one variable over a field $k$. Let $K$ be the field of fractions of $A$. Let $L$ be a finite extension field of $K$. Let $B$ be a subring of $L$ containing $A$. Then $B/fB$ is a finite $k$-module for every non-zero element $f \in B$. Proof: Since $L$ is a finite extension of $K$, $a_rf^r + ... + a_1f + a_0 = 0$, where $a_i \in A, a_0 \neq 0$. Then $a_0 \in fB$. Since $B \otimes_A K \subset L$, $dim_K B \otimes_A K \leq [L : K]$. Hence, by Lemma 6, $leng_A B/a_0B$ is finite. Hence $leng_A B/fB$ is finite. Hence, by Lemma 3, the assertion follows. QED Lemma 8 Let $A$ be an integrally closed domain containing a field $k$ as a subring. Suppose $A/fA$ is a finite $k$-module for every non-zero element $f \in A$. Let $S$ be a multiplicative subset of $A$. Let $A_S$ be the localization with respect to $S$. Then $A_S$ is an integrally closed domain containing a field $k$ as a subring and $A_S/fA_S$ is a finite $k$-module for every non-zero element $f \in A_S$. Proof: Let $K$ be the field of fractions of $A$. Suppose that $x \in K$ is integral over $A_S$. $x^n + a_{n-1}x^{n-1} + ... + a_1x + a_0 = 0$, where $a_i \in A_S$. Hence there exists $s \in S$ such that $sx$ is integral over $A$. Since $A$ is integrally closed, $sx \in A$. Hence $x \in A_S$. Hence $A_S$ is integrally closed. Let $f$ be a non-zero element of $A_S$. $f = a/s$, where $a \in A, s \in S$. Then $fA_S = aA_S$. By this, $aA$ is a product of prime ideals of $A$. Let $P$ be a non-zero prime ideal $P$ of $A$. Since $P$ is maximal, $A_S/P^nA_S$ is isomorphic to $A/P^n$ or $0$. Hence $A_S/aA_S$ is a finite $k$-module. QED Lemma 9 Let $A$ be an integrally closed domain containing a field $k$ as a subring. Suppose $A/fA$ is a finite $k$-module for every non-zero element $f \in A$. Let $P$ be a non-zero prime ideal of $A$. 
Then $A_P$ is a discrete valuation ring. Proof: By Lemma 8 and this, every non-zero ideal of $A_P$ has a unique factorization as a product of prime ideals. Hence $PA_P \neq P^2A_P$. Let $x \in PA_P - P^2A_P$. Since $PA_P$ is the only non-zero prime ideal of $A_P$, $xA_P = PA_P$. Since every non-zero ideal of $A_P$ can be written as $P^nA_P$, $A_P$ is a principal ideal domain. Hence $A_P$ is a discrete valuation ring. QED Theorem Let $k$ be a field. Let $K$ be a finitely generated extension field of $k$ of transcendence degree one. Let $A$ be a subring of $K$ containing $k$. Let $P$ be a prime ideal of $A$. Then there exists a valuation ring $R$ of $K$ dominating $A_P$. Proof: We can assume that $A$ contains a transcendental element $x$ over $k$ (otherwise the theorem would be trivial). We can also assume that $P \neq 0$. Let $B$ be the integral closure of $A$ in $K$. By Lemma 7, $B/fB$ is a finite $k$-module for every non-zero element $f \in B$. Let $S = A - P$. Let $B_P$ and $A_P$ be the localizations of $B$ and $A$ with respect to $S$ respectively. Let $y \in P$ be a non-zero element. By Lemma 8, $B_P/yB_P$ is a finite $k$-module. Since $yB_P \subset PB_P$ and $PB_P \neq B_P$, $yB_P \neq B_P$. Hence there exists a maximal ideal $Q$ of $B_P$ containing $y$. Since $B_P$ is integral over $A_P$ and $PA_P$ is the unique maximal ideal of $A_P$, $PA_P = Q \cap A_P$. Let $Q' = Q \cap B$. Then $Q'$ is a prime ideal of $B$ lying over $P$. By Lemma 9, $B_{Q'}$ is a discrete valuation ring and it dominates $A_P$. QED
Counting integers with a least prime factor greater than $x$ in a sequence of $x$ consecutive integers.
Everything looks correct. You did an excellent job, but I have just a few, relatively minor, points. For your (1), you could have just provided a link to an existing explanation, such as A question about the Mobius Function. Nonetheless, I appreciate what you wrote since it is a simpler and more basic explanation than anything I've seen elsewhere. My only comment is regarding your fourth bullet point of The number of integers $t \le k$ not divisible by any prime $p \le x$ is: $\sum\limits_{i|x\#}\left(\left\lfloor\dfrac{k}{i}\right\rfloor\right)\mu(i)$ You may wish to prepend it with something like "Extending the principle of inclusion-exclusion, " to make it clear this is what you're using, although it should already be relatively clear from the context that this is the basic principle you're using. In your step (2), here is what I believe is a somewhat simpler way of explaining it. After your first bullet point, I would take the numerator of what you're trying to prove on the RHS and expand it instead to get: \begin{align} x - r(x, i) + d(k, x, i) &= x - r(x, i) + r(k, i) + r(x, i) - r(x + r, i) \\ & = x + r(k, i) - r(x + r, i) \end{align} I don't believe you even really need a third bullet point, but if you use one, you could then just indicate the RHS of your first bullet point is equal to the RHS of point (2). In your point (3), at the end of the second bullet point, you may wish to add something like "since $1$ is the only integer $t \le x$ which is not divisible by any prime $p \le x$". At least for me, this wasn't immediately clear, and it took me a short while to figure it out.
trying to read quadratic programming problem in cplex, get error
CPLEX solves only convex optimization problems. No mathematical programming instance with a nonlinear equality constraint can be convex, so CPLEX will refuse to attempt to solve any problem with such a constraint.
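To see why a nonlinear equality constraint rules out convexity, consider the toy constraint $x^2 = 1$ (my example, not from the question): its feasible set is the two-point set $\{-1, 1\}$, so the midpoint of two feasible points is infeasible. A minimal sketch in plain Python, no CPLEX involved:

```python
# Feasibility test for the nonlinear equality constraint x^2 == 1.
def feasible(x, tol=1e-9):
    return abs(x * x - 1.0) <= tol

a, b = -1.0, 1.0          # both satisfy x^2 == 1
mid = 0.5 * (a + b)       # a convex feasible set would contain the midpoint

print(feasible(a), feasible(b))  # True True
print(feasible(mid))             # False -> the feasible set is not convex
```

The same argument applies to any nonconvex constraint; an *inequality* $x^2 \le 1$, by contrast, has the convex feasible set $[-1, 1]$.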
Nontrivial example of Finite intersection property
Fix a point on the real line, say 0. Then take any family of compact intervals all containing 0. You can easily get a family of intervals which are not nested. But I am afraid this example is trivial in the sense that it is obvious by definition that any subfamily (not even necessarily finite) has a non-empty intersection. A nicer example is the construction of the Cantor set. The family of the compact sets is nested but the members are not intervals - they are finite unions of intervals.
How many algebras of subsets of $X$ contain exactly four elements?
Other than the $\binom{5}{2}$ algebras you have, there are also ones where the remaining $2$ sets have $1$ and $4$ members respectively. There are $\binom{5}{1}$ ways to choose those. So the total number is: $\binom{5}{2} + \binom{5}{1} = 15$.
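The count is small enough to verify by brute force: enumerate every $4$-element family of subsets of a $5$-element set and keep those containing $\varnothing$ and $X$ and closed under complement and union (closure under intersection then follows). A sketch, taking $X = \{0,\dots,4\}$:

```python
from itertools import combinations

X = frozenset(range(5))
# all 2^5 = 32 subsets of X
subsets = [frozenset(s) for r in range(6) for s in combinations(X, r)]

def is_algebra(fam):
    fam = set(fam)
    if frozenset() not in fam or X not in fam:
        return False
    # closed under complement and union
    return all(X - A in fam for A in fam) and \
           all((A | B) in fam for A in fam for B in fam)

count = sum(1 for fam in combinations(subsets, 4) if is_algebra(fam))
print(count)  # 15
```

Every hit has the shape $\{\varnothing, A, A^c, X\}$, and the $30$ proper non-empty subsets pair off into $15$ complementary pairs, matching the count above.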
Number theory, prove that a prime number $p \mid 1$
In the 6th line of the text, $m$ is defined in such a way that $p_r|m$. Hence $\frac{m}{p_r} \in \mathbb{Z}$.
Proof Fragment and Questions
You could try and formalize part (b) a bit: Since $n^2=(3k+1)^2=9k^2+6k+1=3(3k^2+2k)+1$, we know that $3$ cannot divide $n^2$, since it divides $n^2-1$. Since $p \implies q$ and $\neg p$, what can you say about the truth value of this type of statement, more generally? The "if" part of your statement is always false. For a quick proof of my statement: Suppose $p$ is prime and that $p \mid k$. Then $k=pr$ for some $r \in \mathbb{Z}$. Suppose $p \mid k+1$; then $k+1=pl$ for some $l \in \mathbb{Z}$. But then $1=k+1-k=pl-pr=p(l-r)$, so $p \mid 1$, forcing $p=1$. Since $3 \neq 1$, my first statement stands. For part (a), your logic is not sound. If $n=3k+1$, we could easily let $k=1$, meaning that $n=3+1=4$, which is even. You should think a little bit more about why (a) could hold (as in, do the algebra). For part (c), yes, it holds more or less for the reason you mention. If $n=3k+1$, it will always leave a remainder of one upon division by $3$, and the algebra you did shows that $n^2$ will as well. Rule of thumb: more or less, the only time a "proof by example" works is when it is a counter-example. If you wish to prove a statement false, one example will suffice. For example, the statement (c) is saying that $\forall n \in \mathbb{Z}\, s.t \, n=3k+1$ we know that $n^2=3m+1$ for some $m \in \mathbb{Z}$. Of course, to show something is true for all anything, examples will not suffice.
Orthogonal Complement and dimension
It is just because if $Q\oplus W=E$ and the space is finite-dimensional, the union of a basis of $Q$ and a basis of $W$ is a basis of $E$, whether the direct sum is orthogonal or not. In other words: $$\dim Q+\dim W=\dim E.$$
Finding extremal of the functional
Your general solution is correct: $$y(x)=c_1\dfrac{1}{x^2}+c_2x.$$ From $y(0)=0$ we see that we must have $c_1=0$, or the solution blows up at $x=0$. One way to see this is to multiply the equation by $x^2$ to obtain $$x^2y(x)=c_1+c_2x^3 \implies 0 = c_1 + c_2 \cdot 0^3 \implies c_1=0.$$ Using $y(1)=c_2\cdot 1=2$ then gives $c_2=2$. The final solution is $y(x)=2x$.
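The general solution $y=c_1x^{-2}+c_2x$ comes from an Euler-type equation; the underlying ODE is not quoted above, so the sketch below *assumes* it is $x^2y''+2xy'-2y=0$ (whose trial solutions $y=x^m$ give $(m+2)(m-1)=0$, i.e. exactly the exponents $-2$ and $1$ seen in the general solution) and checks everything numerically:

```python
# Sketch: verify y = c1*x^(-2) + c2*x solves the ASSUMED Euler equation
# x^2 y'' + 2 x y' - 2 y = 0, using the analytic derivatives of y.
def residual(c1, c2, x):
    y   = c1 * x**-2 + c2 * x
    yp  = -2 * c1 * x**-3 + c2
    ypp = 6 * c1 * x**-4
    return x**2 * ypp + 2 * x * yp - 2 * y

for c1, c2 in [(0.0, 2.0), (1.5, -0.3)]:
    for x in (0.5, 1.0, 3.0):
        assert abs(residual(c1, c2, x)) < 1e-9

# Boundary data: y(0) = 0 forces c1 = 0, then y(1) = 2 forces c2 = 2.
y = lambda x: 2 * x
assert y(0) == 0 and y(1) == 2
print("y(x) = 2x satisfies the ODE and both boundary conditions")
```

If the actual Euler-Lagrange equation of the functional differs, only the `residual` line needs to change; the boundary-condition bookkeeping stays the same.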
Determine the numbers of solutions of equations $\sin v= \frac{v}{1964}$ and $\sin v= \log_{100} v$ .
Hint on # 2: $\log_{100} v =\dfrac{\log(v)}{\log(100)}$, and since $\sin v \le 1$ we need $\log(v) \le \log(100)$, so $v \le 100 $. You then have to divide this into intervals.
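The bound can be checked numerically: scanning $f(v)=\sin v-\log_{100}v$ for sign changes well past $100$ finds no root above $100$ (and, similarly, none below $1/100$, where $\log_{100}v<-1$). A quick grid sketch:

```python
import math

def f(v):
    # sin v - log_100(v), via the change-of-base formula
    return math.sin(v) - math.log(v) / math.log(100)

# uniform grid from 0.005 up to 200, i.e. far beyond the claimed bound
vs = [0.005 * k for k in range(1, 40001)]
fs = [f(v) for v in vs]

# bracket roots by sign changes between consecutive grid points
roots = [0.5 * (a + b) for a, b, fa, fb in zip(vs, vs[1:], fs, fs[1:])
         if fa * fb < 0]

print(len(roots))
assert roots and all(r <= 100 for r in roots)
```

For $v>100$ we have $\log_{100}v>1\ge\sin v$, so $f<0$ there and no sign change can occur, which is exactly what the scan confirms.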
Is a category one which contains only two objects and **one** morphism between them?
Note that composition is part of the data of a category. So in this case we are given a graph consisting of two vertices $A,B$ and three conveniently labeled directed edges $1_A: A \rightarrow A$, $f:A\rightarrow B$, $1_B:B\rightarrow B$. We define composition to be that $$\begin{align*} 1_A \circ 1_A &= 1_A\\ f \circ 1_A &= f\\ 1_B \circ f &= f\\ 1_B \circ 1_B &= 1_B \end{align*}$$ From here on it is straightforward to check that this data indeed satisfies all the axioms of a category (associativity and identity). This category is important, because it is the free category with an arrow in the following sense. Given any category $\mathcal{C}$, an arrow $\widetilde{f}: \widetilde{A} \rightarrow \widetilde{B}$ in $\mathcal{C}$ induces a unique functor $$\begin{array}{rcl} \{A \xrightarrow{f} B\} & \longrightarrow & \mathcal C\\ A & \longmapsto & \widetilde A\\ B & \longmapsto & \widetilde B\\ f & \longmapsto & \widetilde f \end{array}$$ Conversely, any functor $\{A \xrightarrow{f} B\} \longrightarrow \mathcal{C}$ defines a unique morphism $\widetilde f: \widetilde A \rightarrow \widetilde B$ in $\mathcal{C}$ as the image of $f$. These assignments are mutually inverse, so we can deduce that there is a canonical bijection $$\operatorname{Fun}(\{A\xrightarrow{f}B\},\mathcal{C}) \cong \operatorname{Mor}(\mathcal{C})$$ This makes $\operatorname{Mor}: \mathsf{Cat} \rightarrow \mathsf{Set}$ a representable functor, a concept which you will either already know or learn soon, and which is of utmost importance for category theory.
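The straightforward check of the axioms can also be done mechanically. A small sketch encoding the two objects, the three morphisms, and the composition table above, then exhaustively verifying typing, identity, and associativity:

```python
# Morphisms of the free category {A --f--> B}: name -> (domain, codomain).
mor = {"1_A": ("A", "A"), "f": ("A", "B"), "1_B": ("B", "B")}

# Composition table from the answer: compose[(g, h)] = g . h,
# defined exactly when dom(g) == cod(h).
compose = {
    ("1_A", "1_A"): "1_A",
    ("f", "1_A"): "f",
    ("1_B", "f"): "f",
    ("1_B", "1_B"): "1_B",
}

# Totality and typing: every composable pair has a composite with correct ends.
for g, (gd, gc) in mor.items():
    for h, (hd, hc) in mor.items():
        if gd == hc:
            assert mor[compose[(g, h)]] == (hd, gc)

# Identity laws: 1_cod(g) . g == g == g . 1_dom(g).
for g, (gd, gc) in mor.items():
    assert compose[("1_" + gc, g)] == g
    assert compose[(g, "1_" + gd)] == g

# Associativity over all composable triples.
for g, (gd, _) in mor.items():
    for h, (hd, hc) in mor.items():
        for m, (md, mc) in mor.items():
            if gd == hc and hd == mc:
                assert compose[(compose[(g, h)], m)] == compose[(g, compose[(h, m)])]
print("all category axioms verified")
```

Since there are only three morphisms, the exhaustive loops cover every case the axioms quantify over.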
Polynomial division first!
Let $f(x)$ be your polynomial. It has a number of symmetries. Dietrich Burde's answer exhibits one. We see another by calculating $$g(u):=2^{-9}f(\sqrt{2u})=32 u^{14}+144 u^{13}+312 u^{12}+380 u^{11}+152 u^{10}-384 u^9-964 u^8-1217 u^7-964 u^6-384 u^5+152 u^4+380 u^3+312 u^2+144 u+32.$$ A most obvious feature of this is that it is palindromic. Meaning that $u=\xi$ is a zero of $g(u)$ if and only if $u=1/\xi$ is. A common trick taking advantage of this is to write everything in terms of the new variable $v=u+1/u$. It is simple to verify that $$ u^{-7}g(u)=P(v), $$ with $$ P(v)=32 v^7+144 v^6+88 v^5-484 v^4-960 v^3-608 v^2-84 v+23. $$ Mathematica thinks that $P(v)$ is irreducible over $\Bbb{Q}$. Judging from the plot it has five zeros in the interval $(-2,0)$ and two positive zeros approximately $0.1$ and $2.1$. If $u$ is real, then $v=u+1/u$ has absolute value $\ge2$. This would yield four real zeros of $f(x)$ subject to symmetries: $x\mapsto -x$ and $x\mapsto 2/x$.
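Both symmetries of $g$ are easy to sanity-check numerically from the coefficient lists above: the palindromic property $u^{14}g(1/u)=g(u)$, and the substitution identity $u^{-7}g(u)=P(u+1/u)$. A quick sketch:

```python
# Coefficients of g (degree 14) and P (degree 7), highest degree first.
g_coeffs = [32, 144, 312, 380, 152, -384, -964, -1217,
            -964, -384, 152, 380, 312, 144, 32]
P_coeffs = [32, 144, 88, -484, -960, -608, -84, 23]

def horner(coeffs, x):
    # Horner evaluation of a polynomial given highest-first coefficients.
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

for u in (0.7, 1.3, -2.4, 3.1):
    g_u = horner(g_coeffs, u)
    # palindromic symmetry: u^14 * g(1/u) == g(u)
    assert abs(u**14 * horner(g_coeffs, 1.0 / u) - g_u) < 1e-6 * max(1.0, abs(g_u))
    # substitution v = u + 1/u: u^-7 * g(u) == P(v)
    v = u + 1.0 / u
    lhs = g_u / u**7
    assert abs(lhs - horner(P_coeffs, v)) < 1e-6 * max(1.0, abs(lhs))
print("both symmetries verified")
```

At $u=1$ both sides evaluate to $g(1)=P(2)=-1873$, a handy exact spot check.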
Exercises combining Taylor polynomials and the Chain Rule
HINT: First, you know the Taylor polynomials at $0$ of $\sin(y)$ and $e^u$. Note that $f(1,0)=5$, so $f(x,y)-5 = 0$ at $(1,0)$. I suggest you substitute $u=f(x,y)-5$. Then the Taylor polynomial of the product is obtained by multiplying the Taylor polynomials and dropping the terms of too high a degree. You can think of this in terms of a composition of functions, of course: Let $G(u,v) = e^u\sin v$, and consider $G(f(x,y)-5,y)$.
$f$ measurable with $f=g$ a.e. then $g$ measurable
To fix notation: Suppose that $(X,\mathcal{A},\mu)$ is a complete measure space, $(Y,\mathcal{B})$ is a measurable space and that $f:X \to Y$ is measurable. We have to show that $$g^{-1}(B) \in \mathcal{A}$$ for all $B \in \mathcal{B}$. Now $$\begin{align*} g^{-1}(B) &= \{x; g(x) \in B \} \\ &= \underbrace{\{x; g(x) \in B, f(x)=g(x)\}}_{=:A} \cup \underbrace{\{x; g(x) \in B, g(x) \neq f(x)\}}_{=:N}. \end{align*}$$ By definition, we have $$N \subseteq \{x; f(x) \neq g(x)\}.$$ Since $f=g$ almost everywhere, this shows that $N$ is a subset of a null set, hence measurable, as $\mu$ is complete. Moreover, $$\begin{align*} A &= \{x; g(x) \in B, f(x)=g(x)\} \\ &= \{x; f(x) \in B, f(x)=g(x)\} \\ &= \{x; f(x) \in B\} \cap \{x; f(x)=g(x)\} \end{align*}$$ is measurable as the intersection of two measurable sets (the first set on the right-hand side is measurable because $f$ is measurable, and for the second one we note that $$\{x; f(x)=g(x)\} = X \setminus \{x; f(x) \neq g(x)\}$$ is the complement of a measurable set and therefore measurable).
Why does $AX=0$ have only the trivial solution when $A=\left(\int_a^b g_i(x)g_j(x)dx\right)$?
Hint. (Presumably $g_1,g_2,\ldots,g_m$ are linearly independent on $[a,b]$ rather than on $\mathbb{R}$, otherwise $A$ can be singular.) The integrand is the matrix $\mathbf{g}(x)\mathbf{g}(x)^T$, where $\mathbf{g}(x)=(g_1(x),\ldots,g_m(x))^T$. If $A\mathbf{u}=0$ for some vector $\mathbf{u}$, then $\mathbf{u}^TA\mathbf{u}=0$ and in turn $$ \int_a^b \mathbf{u}^T\mathbf{g}(x)\mathbf{g}(x)^T\mathbf{u}\,dx=\int_a^b \left(\mathbf{g}(x)^T\mathbf{u}\right)^2dx=0.\tag{1} $$ Note that the integrand in $(1)$ is a square term, hence nonnegative. Using the continuity and linear independence of $g_1,g_2,\ldots,g_m$, argue that $\mathbf{u}$ must be zero. Remark. $A$ is a Gramian matrix. In general, a Gramian matrix is always positive semidefinite.
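As a concrete instance of the hint (my choice of functions, not from the question): take $g_i(x)=x^{i-1}$ on $[0,1]$, so $a_{ij}=\int_0^1 x^{i+j-2}\,dx = 1/(i+j-1)$, the Hilbert matrix. Sylvester's criterion with exact rational arithmetic then confirms $A$ is positive definite, so $AX=0$ has only the trivial solution:

```python
from fractions import Fraction

# Gram matrix of g_i(x) = x^(i-1) on [0,1]: a_ij = 1/(i+j-1) (Hilbert matrix).
n = 4
A = [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

def det(M):
    # Laplace expansion along the first row (fine for small matrices).
    if len(M) == 1:
        return M[0][0]
    total = Fraction(0)
    for j, a in enumerate(M[0]):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * a * det(minor)
    return total

# Sylvester's criterion: all leading principal minors positive
# => positive definite => nonsingular => AX = 0 only for X = 0.
for k in range(1, n + 1):
    assert det([row[:k] for row in A[:k]]) > 0
print("Gram matrix is positive definite")
```

The exact `Fraction` arithmetic matters here: the Hilbert matrix is notoriously ill-conditioned, so its tiny positive determinants are easy to lose in floating point.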
Let $A$ be a square matrix such that $A^{10}=0$ while $A^9\neq 0$. Let $V$ be the space of all polynomials of $A$. Find the dimension of $V$.
If $A^{10} =0$ then any polynomial in $A$ is a linear combination of $I,A,A^{2},... , A^{9}$ (because $A^{10},A^{11},A^{12},..$ are all $0$). Now let us show that $I,A,A^{2},...,A^{9}$ are linearly independent. Suppose $\sum_0 ^{9} a_i A^{i} =0$. Multiply both sides by $A^{9}$ to see that $a_0 =0$. Then multiply by $A^{8}$ to get $a_1 =0$, etc. In 10 steps you get $a_i =0$ for $0 \leq i \leq 9$. Hence $I,A,A^{2},...,A^{9}$ form a basis for $V$ and the dimension is 10.
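A concrete witness (my example): the $10\times10$ shift matrix with ones on the superdiagonal satisfies $A^{10}=0$, $A^9\neq0$, and $A^i$ has its ones on the $i$-th superdiagonal, so the entry $(0,i)$ of $\sum_i a_iA^i$ is exactly $a_i$, making the independence visible entry by entry. A sketch:

```python
n = 10
# Nilpotent shift matrix: ones on the superdiagonal, zeros elsewhere.
A = [[1 if j == i + 1 else 0 for j in range(n)] for i in range(n)]
I = [[1 if j == i else 0 for j in range(n)] for i in range(n)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# powers[i] = A^i for i = 0..10
powers = [I]
for _ in range(10):
    powers.append(matmul(powers[-1], A))

zero = [[0] * n for _ in range(n)]
assert powers[10] == zero and powers[9] != zero

# A^i has its ones on the i-th superdiagonal, so the (0, i) entry of
# sum_i a_i A^i equals a_i: hence I, A, ..., A^9 are linearly independent.
for i in range(10):
    assert powers[i][0][i] == 1
    assert all(powers[k][0][i] == 0 for k in range(10) if k != i)
print("dim V = 10")
```

Reading off the $(0,i)$ entries is just a coordinate version of the multiply-by-$A^9$, then $A^8$, ... argument in the answer.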
$X$ is completely regular iff it carries the initial topology w.r.t. $C(X,\mathbb{R})$
We do need a topology on $X$ and indeed, $X$ is by the assumption a topological space. What the statement means is that $X$ is completely regular if and only if the original topology on $X$ coincides with the initial topology with respect to $C_b(X,{\bf R})$ (which is, in general, coarser). For examples of Hausdorff spaces which are not completely regular, you may want to consult the $\pi$-base. There is also a related question here on math.se.
Dividing to exclude order in counting
Suppose you have a collection of objects, let us say five of them: a bowl of fruit containing an apple, banana, orange, pear, and lemon. If you now want to choose two of them: for the first choice you have $5$ possibilities, and for the second choice you have $4$ possibilities (one less, as you have already chosen one fruit). Now, you have $5$ choices and for each of the $5$ choices, you have $4$ choices. So, in total you have $5 \cdot 4 =20$ ways to first choose a certain fruit and then to choose a second fruit. However, perhaps or even likely, you do not care so much if you take A first and then B, or the other way round, first B and then A. You might only care that in the end you have an apple and a banana (the order is irrelevant for you). So, if the order is irrelevant for you, then first A and then B is essentially the same as first B and then A. Therefore you in fact only have $20/2 = 10$ choices: you need to divide by the number of ways to arrange $2$ distinct objects, which is $2! =2$. Now if you choose $3$ fruits, you get $5\cdot 4 \cdot 3 = 60$ ways to choose a first fruit, a second fruit, and a third fruit, but if you do not care about the order you need to divide by the number of ways to arrange $3$ distinct objects, that is $3!=6$, getting $60/6= 10$ possibilities.
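The same bookkeeping can be checked directly with the five fruits above:

```python
from itertools import permutations, combinations
from math import comb, factorial

fruits = ["apple", "banana", "orange", "pear", "lemon"]

# Ordered choices of 2: 5 * 4 = 20.
assert len(list(permutations(fruits, 2))) == 5 * 4 == 20

# Unordered choices: divide by the 2! = 2 orderings of each pair.
assert len(list(combinations(fruits, 2))) == 20 // factorial(2) == comb(5, 2) == 10

# Choosing 3 fruits: 5 * 4 * 3 = 60 ordered choices, divided by 3! = 6.
assert len(list(permutations(fruits, 3))) == 60
assert len(list(combinations(fruits, 3))) == 60 // factorial(3) == 10
print("counts match")
```

`permutations` enumerates the order-sensitive choices and `combinations` the order-insensitive ones, so the ratio of their counts is exactly the $k!$ being divided out.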
Finding the closure of $\mathbb{Z}$ and $\mathbb{Q}$ in $\mathbb{R}$
HINTS: If $x\in\Bbb R\setminus\Bbb Z$, then there is a unique integer $n$ such that $n<x<n+1$; can you find an $r>0$ such that $T(x,r)\cap\Bbb Z=\varnothing$? For any $x\in\Bbb R$ you know that $T(x,r)=(x-r,x+r)$. Does that open interval contain a rational number?
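Both hints can be made computational (a sketch with my sample values, writing $T(x,r)=(x-r,\,x+r)$): for the first, any $r$ slightly smaller than $\min(x-n,\,n+1-x)$ works; for the second, rounding $xq$ for a denominator $q>1/r$ produces a rational within $1/(2q)<r$ of $x$:

```python
import math
from fractions import Fraction

# Hint 1: for non-integer x, a ball of radius just under min(x-n, n+1-x)
# misses every integer (the 0.999 factor avoids floating-point edge cases).
x = 2.718281828
n = math.floor(x)
r = 0.999 * min(x - n, n + 1 - x)
assert all(not (x - r < m < x + r) for m in range(n - 2, n + 4))

# Hint 2: every interval (x-r, x+r) contains a rational p/q: pick q > 1/r,
# then p = round(x*q) gives |x - p/q| <= 1/(2q) < r.
x, r = math.sqrt(2), 1e-6
q = int(1 / r) + 1
p = round(x * q)
rat = Fraction(p, q)
assert x - r < rat < x + r
print(rat)
```

These are exactly the radii/witnesses the two hints ask you to find, so $\Bbb Z$ is closed (its complement is open) while $\Bbb Q$ is dense.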
Congruence equation with polynomials
Fill in details: doing arithmetic modulo $\;41\;$ all along, we get: $$x^{12}=37=-4\implies x^{12}+4=0\implies (x^6-2i)(x^6+2i)=0$$ with $\;i^2=-1\;$, which we know exists (why, and what number is a square root of $\;-1\;$ here?) Now, for example: $$0=x^6+18=(x^3-3\sqrt2\,i)(x^3+3\sqrt2\,i)$$ and we also should (or could) know that $\;2\;$ is a quadratic residue since $\;41\equiv\pm1\pmod8\;$ ... Try now to take it from here... and I've no idea what the indices technique is and couldn't find it on Google.
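All the ingredients can be verified by brute force modulo $41$: the reduction $37\equiv-4$, the existence of a square root of $-1$ (since $41\equiv1\pmod4$) and of $\sqrt2$ (since $41\equiv1\pmod8$), and the full solution set of $x^{12}\equiv37$. A sketch:

```python
p = 41
assert 37 % p == (-4) % p              # 37 = -4 (mod 41)

# a square root of -1 exists since 41 = 1 (mod 4)
i = next(y for y in range(1, p) if y * y % p == p - 1)

# 2 is a quadratic residue since 41 = 1 (mod 8)
sqrt2 = next(z for z in range(1, p) if z * z % p == 2)

# all solutions of x^12 = 37 (mod 41), by exhaustive search
sols = [x for x in range(1, p) if pow(x, 12, p) == 37]
print(i, sqrt2, sols)

assert all(pow(x, 12, p) == 37 for x in sols)
# the group (Z/41)* is cyclic of order 40, so there are gcd(12, 40) = 4 roots
assert len(sols) == 4
```

Solvability follows from $37^{40/\gcd(12,40)}=(-4)^{10}=2^{20}\equiv1\pmod{41}$ (one can check $2^{10}\equiv-1$), which the brute-force search confirms.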
An $L^2$ bound for a function of the form $w(x)=|x|^{-1} - (g^2*|x|^{-1}*g^2)(x)$
Without loss of generality, let $\xi = 1$. Here $x$ and $y$ will denote elements in $\mathbb{R}^3$. Now we go all the way back to Isaac Newton! We have the following integral over the surface of the sphere (the "shell theorem" from classical mechanics), $$\int_{\mathbb{S}^2} |r \omega - x | ^{-1} d\omega = 4\pi\min(r^{-1}, |x|^{-1}).$$ This gives a striking identity for the convolution of $|x|^{-1}$ with a radially symmetric function $\rho(x) = \rho(|x|)$. $$\left(|x|^{-1} \ast \rho(x) \right)(x) = 4\pi \int_0^\infty \min(r^{-1},|x|^{-1}) \rho(r) r^2 \,dr.$$ Note that this particular identity works only in $\mathbb{R}^3$, but there is a more general version that you can find in Lieb and Loss "Analysis", page 249. Now let $$\rho(x) = (g^2 \ast g^2)(x) = \frac{1}{\sqrt{8}} \exp\left(-\frac{\pi |x|^2}{2}\right).$$The operation of convolution is commutative and associative, so we have $$w(x) = |x|^{-1} - (|x|^{-1} \ast \rho)(x) = \int_\mathbb{R^3} |x|^{-1} \rho(y) \,d^3y - \int_\mathbb{R^3} |x-y|^{-1} \rho(y) \,d^3y \\= 4\pi \int_0^\infty |x|^{-1} \rho(r)r^2\,dr - 4\pi \int_0^\infty \min(r^{-1}, |x|^{-1}) \rho(r)r^2 \,dr = 4\pi \int_{|x|}^\infty (|x|^{-1}-r^{-1}) \rho(r)r^2 \,dr.$$ Now we can rewrite $w(x)$ in terms of complementary error function "erfc", $$w(x) = \frac{\operatorname{erfc}(|x|\sqrt{\pi/2})}{|x|}.$$ Thus we have $$\|w\|_2^2 = \int_{\mathbb{R}^3} |w(x)|^2 \,d^3x = \int_0^\infty 4\pi \operatorname{erfc}(r\sqrt{\pi/2})^2 \,dr = 8(\sqrt{2}-1).$$ So we conclude with an exact value for the $L^2$-norm of $w$, $$\|w\|_2 = \sqrt{8(\sqrt{2}-1)}.$$
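The closed form is easy to confirm numerically with only the standard library: `math.erfc` plus a fine trapezoidal rule reproduces $\|w\|_2^2 = 8(\sqrt2-1)$. A sketch:

```python
import math

c = math.sqrt(math.pi / 2)

def integrand(r):
    # 4*pi * erfc(r*sqrt(pi/2))^2: the radial form of |w(x)|^2 in R^3
    return 4 * math.pi * math.erfc(c * r) ** 2

# trapezoidal rule on [0, 12]; erfc(c*r)^2 decays like exp(-pi r^2),
# so the truncated tail is utterly negligible
h, N = 1e-3, 12000
total = 0.5 * (integrand(0) + integrand(N * h))
total += sum(integrand(k * h) for k in range(1, N))
total *= h

exact = 8 * (math.sqrt(2) - 1)
print(total, exact)
assert abs(total - exact) < 1e-4
```

This matches the known one-dimensional integral $\int_0^\infty \operatorname{erfc}(t)^2\,dt=(2-\sqrt2)/\sqrt\pi$ after rescaling by $c=\sqrt{\pi/2}$.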
What does 1+1≠0 mean?
If you are working over the field $\mathbb F_2$, then you'll have $1+1=0$. More generally, the fields for which this equality holds are called fields with characteristic $2$. So, that book assumes that we are not working over such a field.
Relation between homomorphisms and monomorphims of finite groups
I think that is a answer for this question. If we have a monomorphism from $L/N$ to $G$ for any $N$, just make a composition with the $\pi: L \rightarrow L/N$, so we obtain a morphisms $L \rightarrow G$. From the otherside, if we have $f:L \rightarrow G$ a morphism, we know that image of $f$ is a monomorphism $m: Im(f) \rightarrow G$, with property to decomposite $f$. For the first isomorphism theorem, $L/Ker(f) \simeq Im(f)$. For each $N$ normal, we make $\pi: L \rightarrow L/N$, and $\psi: L/N \rightarrow L/Ker(f)$. Thats compositions make the relation above presented. If anyone have other ideia, show us please.
$f\circ g(x)=(x-1)$ and $g\circ f(x)=(x+1).$ Can we prove $f$ and $g$ are linear functions?
Below is an example of non-linear functions. Let $A, B$ be any rational numbers, $A \ne B$. Let \begin{equation} g(x) = \begin{cases} A-x &\text{if $x\in \mathbb{Q}$}\\ B-x &\text{if $x\in \mathbb{I}$} \end{cases} \end{equation} \begin{equation} f(x) = \begin{cases} A-x-1 &\text{if $x\in \mathbb{Q}$}\\ B-x-1 &\text{if $x\in \mathbb{I}$} \end{cases} \end{equation} It is easy to see that $f(g(x))=x-1$, $g(f(x))=x+1$.
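The compositions are easy to verify mechanically if one models numbers as $q + c\sqrt2$ with $q,c\in\mathbb{Q}$ (such a number is rational exactly when $c=0$). A sketch with the choice $A=0$, $B=1$ (any distinct rationals work):

```python
from fractions import Fraction as F

A, B = F(0), F(1)       # any distinct rationals

def g(x):
    q, c = x            # x represents q + c*sqrt(2); rational iff c == 0
    return (A - q, -c) if c == 0 else (B - q, -c)

def f(x):
    q, c = x
    return (A - q - 1, -c) if c == 0 else (B - q - 1, -c)

samples = [(F(3), F(0)), (F(-7, 2), F(0)),    # rationals (c == 0)
           (F(0), F(1)), (F(5, 3), F(-2))]    # irrationals (c != 0)

for q, c in samples:
    assert f(g((q, c))) == (q - 1, c)   # f(g(x)) == x - 1
    assert g(f((q, c))) == (q + 1, c)   # g(f(x)) == x + 1
print("compositions check out")
```

The key point the code mirrors is that $A-x$ and $B-x$ preserve rationality and irrationality respectively, so each composition lands in the same branch it started from.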
Can ZFC decide number theory?
Gödel's proof gives an explicit construction for a statement in a given (sufficiently strong, and recursively axiomatizable) theory that cannot be proved or disproved in the theory. It so happens that when the theory is ZFC the independent statement turns out to be one that is the set-theoretic representation of a purely number-theoretic statement. At least this is the case when you use the most natural way to establish that ZFC meets the condition of being "sufficiently strong". Basically, "sufficiently strong" boils down to being able to represent certain basic number-theoretic constructions, and the Gödel sentence is then constructed as the representation of a particular number-theoretic property.
Graph or its Complement contains a triangle.
It seems to be slightly easier to think about if you formulate it as Color each edge in $K_6$ either red or green. Then there is at least one triangle that is all red or all green. (This is really the same as "arbitrary graph on 6 vertices", where "red" means an edge that's there and "green" means one that isn't -- but it makes it a bit easier to think about "edge is there" and "edge isn't there" as symmetric cases). And a proof could go somewhat like this: Choose three vertices $1$, $2$ and $3$. If the edges $12$, $23$ and $31$ all have the same color, then we're done. Otherwise let's say without loss of generality that $12$ and $23$ are red and $31$ is green. Now consider a fourth vertex $4$. If $14$ and $34$ are both green, then $134$ is a green triangle. So assume one of them is red. If $24$ is now red then $124$ or $234$ is a red triangle. So the only way not to have a valid triangle yet is if $24$ is green. ... and so forth.
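The claim is small enough to verify exhaustively: there are only $2^{15}$ red/green colorings of the $15$ edges of $K_6$, and every one contains a monochromatic triangle, while $K_5$ admits a coloring with none (a $5$-cycle and its complementary $5$-cycle), so $6$ vertices are really needed. A sketch:

```python
from itertools import combinations

def has_mono_triangle(n, color):
    # color maps each edge (i, j) with i < j to 0 (red) or 1 (green)
    return any(color[(a, b)] == color[(b, c)] == color[(a, c)]
               for a, b, c in combinations(range(n), 3))

def every_coloring_has_triangle(n):
    edges = list(combinations(range(n), 2))
    for mask in range(2 ** len(edges)):
        color = {e: (mask >> k) & 1 for k, e in enumerate(edges)}
        if not has_mono_triangle(n, color):
            return False
    return True

assert every_coloring_has_triangle(6)       # K6: always a monochromatic triangle
assert not every_coloring_has_triangle(5)   # K5: a triangle-free coloring exists
print("R(3,3) = 6 confirmed by brute force")
```

This is of course the statement that the Ramsey number $R(3,3)$ equals $6$; the brute force just replaces the case analysis in the proof sketch above.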