title | upvoted_answer
---|---
Prove: Use the triangle inequality to prove that for all $x, y, z, | x − z | ≤ | x − y | + | y − z |$ | Simpler:
$$|x-z|=|(x-y)+(y-z)|\le |x-y|+|y-z|$$ |
A matrix multiplied by a vector to get the identity matrix | Let $$v= \begin{pmatrix} v_1\\v_2\\\vdots\\v_n\end{pmatrix}$$ and $$A= \begin{pmatrix} a_1\\a_2\\\vdots\\a_n\end{pmatrix}$$
For $$ Av^{*}=I_{n}$$ we need to have $a_1v_1=1$ and $a_1v_i=0$ for $i=2,3,\dots,n$.
That is, $v_2=v_3=\dots=v_n=0$.
On the other hand you also need $a_2v_2=1$, which is not possible because $v_2=0$.
Thus the answer is no, we cannot find such an $A$. |
What is the fairness and usefulness of writing proofs with help? | The simple fact that you spend hours working on a proof is useful. It's not a waste of time at all. Even if you don't finally find the proof, this work will make you progress, because it makes you explore ideas and try many different things. And this will pay off later, because whether you are aware of it or not, these ideas will remain in some place of your brain, and you'll be able to use them later, for another (similar) problem. Or at least, you'll learn to judge a little better whether an idea has a chance to work or not. That's how you develop intuition in math: spend hours trying to find proofs, and read others' proofs.
By the way, not being able to find all proofs and needing help is perfectly normal, this is part of the learning process in maths. |
How exactly does the class of generalized functions include that of ordinary functions? | Consider for example a set of functions that has been used in physics: The set $F$ of infinitely differentiable $f:\Bbb R \to \Bbb R$ such that $f(x)$ and each of its derivatives converges uniformly to $0$ as $|x|\to \infty,$ and such that $\int_{\Bbb R} f^{(j)}(x)dx$ exists for all $j$.... where $f^{(0)}=f,$ and $f^{(j)}$ is the $j$-th derivative of $f$ when $j>0.$
Now let $B[\Bbb R]$ be the set of bounded integrable $g:\Bbb R\to \Bbb R.$ For $g\in B[\Bbb R]$ and $f\in F$ define $g^*(f)=\int_{\Bbb R}f(x)g(x)dx.$ Then $g^*:F\to \Bbb R$ is linear , that is $g^*(f_1+kf_2)=g^*(f_1)+kg^*(f_2)$ for $f_1,f_2 \in F$ and $k\in \Bbb R.$
But there are linear maps from $F$ to $\Bbb R$ that are not in the set $B^*=\{g^*: g\in B[\Bbb R]\}.$ For example, $\delta(f)=f(0)$ for all $f\in F.$
However $\delta$ is the point-wise limit of a sequence of members of $B^*.$ That is, there is a sequence $(g_n)_{n\in \Bbb N}$ in $B[\Bbb R]$ such that $\lim_{n\to \infty} g^*_n(f)=\delta (f)=f(0)$ for each $f\in F.$
So we may consider a generalized function to be a linear map $h:F\to \Bbb R.$ A feature of the Heaviside operational calculus is that we can often treat such $h$ as if they were members of $B^*,$ even as if they were of the form $g^* $ with continuously differentiable $g\in B[\Bbb R],$ when in fact they are point-wise limits (in the sense above) of sequences of such $g^* .$
I hope this clarifies some of the issues. It can be confusing to read of a function $\delta$ such that $\delta (x)=0$ when $x\ne 0,$ but $\int_{\Bbb R}\delta(x)dx=1.$ |
Image of $\gamma(t)=(\sin(t+k)\cos(t), \sin(t+k)\sin(t))$ is a circle. | The problem is in setting $T = 2\pi$, because your function is actually $\pi$ periodic. Hence the area enclosed is
$$A = \frac 1 2 \int_0^{\pi} x\dot y - y \dot x = \frac 1 2 \int_0^{\pi} \sin^2(t + k) \, dt = \frac{\pi}{4}$$
while the corresponding length is
$$\ell(\gamma(t)) = \int_0^{\pi} \, dt = \pi.$$
Now the equality
$$A = \frac{\ell^2}{4\pi}$$
holds.
For a purely algebraic calculation that also reveals the true periodicity of the curve, note that
$$x(t) = \frac 1 2 \big(\sin(2t + k) + \sin(k)\big)$$
and
$$y(t) = -\frac 1 2 \big(\cos(2t + k) - \cos(k)\big).$$ |
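As a sanity check of the area computation above, here is a small symbolic verification (a sketch using SymPy; the setup and names are mine):

```python
import sympy as sp

t, k = sp.symbols('t k', real=True)
x = sp.sin(t + k) * sp.cos(t)
y = sp.sin(t + k) * sp.sin(t)

# A = (1/2) * integral over one period [0, pi] of (x y' - y x')
A = sp.Rational(1, 2) * sp.integrate(x * sp.diff(y, t) - y * sp.diff(x, t), (t, 0, sp.pi))
print(sp.simplify(A))  # pi/4, independent of k
```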
Proving the non-derivability of a formula using a Kripke model | You need a world $w_1$ where $(p\to q)\to(\neg p \lor q)$ is not satisfied. In other words $w_1$ needs to have at least one successor $w_2$ that satisfies $p\to q$ but neither $\neg p$ nor $q$.
The way for $w_2$ not to satisfy $\neg p\equiv p\to \bot$ is for it to have a successor $w_3$ that satisfies $p$ but not $\bot$. This is true if only $p$ is true in $w_3$. On the other hand, since $w_2$ must satisfy $p\to q$, we must then also have that $q$ is true in $w_3$.
This suggests a Kripke frame with three worlds $w_1<w_2<w_3$ where $w_3$ asserts both $p$ and $q$, but $w_1$ and $w_2$ assert neither.
(Actually $w_1$ is superfluous here; we can take $w_1=w_2$ instead). |
Probabilistic statement about finitenes of a random variable | Because, for every $x$ in $[0,+\infty]$, $\mathrm e^{-cx}\to\mathbf 1_{x\lt\infty}$ when $c\to0^+$. |
what loops and points of numbers are possible when you take the alternating sum of the squares of the digits? | Partial answer: the only points (fixed points) are $0,1$ and $48$.
Let $f()$ be the function that performs one step of your transformation, e.g. $f(125) = |1^2 - 2^2 + 5^2| = 22$ (not $23$ like you said).
If $x$ is a $k$-digit number, we have $f(x) \le \lceil k/2 \rceil \times 9^2$ because half the digits count positively and half count negatively, and each digit is at most $9$. This bound immediately shows that $f(x) < x$ for these cases ($d$ denotes leading digit):
All $k \ge 4$.
$k = 3, d \ge 2: f(x) \le 2 \times 81 $ but $x \ge 200$.
$k = 3, d = 1:$ let the number be $x = (1bc)_{10}$ and $f(x) = |1 - b^2 + c^2| \le 1 + 81$ but $x \ge 100$.
Now consider $k=2, x = (ab)_{10} = 10a + b$.
If $a \ge b$, we have $f(x) \le a^2 < 10a \le x$.
If $a < b$, for $x$ to be a point we need $f(x) = b^2 - a^2 = 10a + b$ or $b(b-1) = (10+a)a.$ Now it is just a matter of trying each $a = 1, 2, 3, \dots$ and we find that the only solution is $x = 48, a = 4, b = 8:$
$$f(48) = |4^2 - 8^2| = 64 - 16 = 48$$
For $k=1$ we have $f(x) = x^2$ so $f(x) = x$ iff $x = 0,1$.
Incidentally, since $f(x) < x$ for sufficiently large $x$, this also means the process "collapses" onto the small integers, and there are only a finite number of loops. |
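For concreteness, here is a brute-force confirmation of the fixed points (my own sketch; since $f(x) < x$ for large $x$, a bounded search range suffices):

```python
def f(x):
    """One step: absolute value of the alternating sum of squared digits."""
    return abs(sum((-1) ** i * int(d) ** 2 for i, d in enumerate(str(x))))

print(f(125))                                    # 22
print([x for x in range(10 ** 4) if f(x) == x])  # [0, 1, 48]
```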
Is there a unique projection map in this case? | Usually not. Just consider $\mathbb{C} \oplus \mathbb{C}$; we have lots of projections onto the first component: anything of the form $(x,y) \mapsto (x + cy, 0)$, for $c \in \mathbb{C}$. |
Different ways of evaluating $n!$? | Your first equation is wrong. The correct exponent of prime $p$ in the prime factorization of $n!$ is given by Legendre's formula:
$$\nu_p(n!)=\sum_{r=1}^{\infty} \left\lfloor \frac{n}{p^r} \right\rfloor$$
Thus, taking the logarithm, and the sum on $p$ running on prime $p\leq n$,
$$\log(n!)=\sum_{p\leq n}\left(\sum_{r=1}^{\infty} \left\lfloor \frac{n}{p^r}\right\rfloor\right)\log(p)$$
Your second equation is also wrong. For instance,
$$\log(5!)=\log120 \neq \sum_{k=1}^{p_k < n} \left\lfloor \frac{n}{p_k} \right\rfloor \ln p_k + \ln{\left\lfloor \frac{n}{p_k} \right\rfloor}!=[2\log2+\log(2!)]+[\log3+\log(1!)]=\log24$$ |
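As a quick numerical check of Legendre's formula and the corrected expression for $\log(n!)$ (a sketch; SymPy is used only to enumerate primes):

```python
from math import log, factorial
from sympy import primerange

def nu(n, p):
    """Exponent of the prime p in n!, by Legendre's formula."""
    e, q = 0, p
    while q <= n:
        e += n // q
        q *= p
    return e

n = 20
lhs = log(factorial(n))
rhs = sum(nu(n, p) * log(p) for p in primerange(2, n + 1))
print(lhs, rhs)  # both approximately 42.3356
```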
How to prove that ($x_n$) converges and lim $x_n$ is also an upper bound of A | Let ${x_n}$ be a decreasing sequence of upper bounds of $A$. That means, that $\{ x_n\}$ is lower bounded, by $\sup A$. Hence $x_1 \geq \{ x_n\}\geq \sup A$ for all $n$.
Using the Bolzano Weierstrass theorem, $\{ x_n\}$ has a convergent subsequence, say $\{ x_{n_k}\}$.
However, let $x_{n_k} \to L$. I claim two things:
$\lim x_n = L$.
Proof: Let $\epsilon > 0$. Note that there exists $K$ such that $k \geq K \implies |x_{n_k} - L| < \epsilon$. Now, note that $L \leq x_n \leq x_{n_K}$ for all $n > n_K$ (the sequence is decreasing and the subsequence decreases to $L$), so that $|x_{n} - L| < \epsilon$ whenever $n > n_K$. Hence $x_n \to L$.
$L \geq \sup A$.
Note that $x_n \geq \sup A$ for all $n$. Hence, the sequence $\{ x_n - \sup A\}$ is non-negative, and so must have a non-negative limit. That is, $\lim x_n = L \geq \sup A$. As $L \geq \sup A$, $L$ is at least every element of $A$, so it is an upper bound of $A$. |
If $U$ is simply connected and $u: U\to\Bbb R$ is harmonic then it have a conjugate in $U$ | Your proof looks fine.
Now, consider the function $f=u_x-iu_y$. Since $u$ is harmonic, $f$ is holomorphic and therefore analytic. Since $U$ is simply connected, $f$ has a primitive $F$. But then, there's some constant $k$ such that $\operatorname{Re}F=u+k$. Now, you can use $\operatorname{Im}F$ to get a conjugate of $u$. |
How do we indicate the integrating variable? | Yes, you can very well annotate the integration range with the variable:
$$\int_{t=0}^5 d\left(\frac{t^2}2e^x\right)$$
or
$$\int_{x=0}^5 d(te^x). $$ |
What's an example of a number that is neither rational nor irrational? | I've thought about this question for a while without an answer. The key is to consider the structure of the constructible real numbers. I was actually a bit cavalier with my original definition, "$x$ is irrational if $|x-p/q|<q^{-2}$ has infinitely many coprime solutions". The problem lies in what is meant by "infinitely many" here. If it means that there is an injection from $\Bbb N$, then we get the same set as described by the other definition using infinite continued fractions. In particular, if $x$ is irrational, then there is a function $f:\Bbb N\to\Bbb Q$ that converges to $x$, and under the "Russian constructivist" camp, we can assume that $f$ is a computable function, so $x$ is computable. And obviously the rational numbers are computable.
Thus Chaitin's constant $\Omega$ is neither rational nor irrational.
To be clear, in constructivist logic we don't necessarily know that $\Omega$ even exists; in fact, that's the whole point. But it does mean that we can take the Markovian model of constructible reals $\Bbb R_M$ and add $\Omega$ to get a model $\Bbb R_M[\Omega]=:\Bbb R_{\Omega}$ which can be viewed as a subset of $\Bbb R$ (in the ambient universe where LEM is true). Then $\Bbb Q=\Bbb Q_M=\Bbb Q_{\Omega}$, and $\Bbb I_M=\Bbb I_{\Omega}\subsetneq\Bbb I$, with $\Omega\in\Bbb I\setminus \Bbb I_{\Omega}$, yet at the same time $\Omega\in \Bbb R_{\Omega}$ by definition, so we have $\Bbb Q_{\Omega}\cup\Bbb I_{\Omega}\subsetneq\Bbb R_{\Omega}$. (We can even say $\Bbb Q_{\Omega}\cup\Bbb I_{\Omega}=\Bbb R_M$ but this is only valid as a proof in the full universe, using LEM.)
I mentioned that "infinitely many" had two interpretations above. The other one is that it is not finite, and this yields the poorer definition "$x$ is irrational if $x$ is not rational". In this case it is clear that no number can be neither rational nor irrational, because if it is not rational then it must be irrational by definition. |
Asymptotic expansion of $\sum_{k=0}^{\infty} k^{1 - \lambda}(1 - \epsilon)^{k-1}$ | $$
\sum_{k=0}^{\infty} k P(k) (1 - \epsilon)^{k-1} = \sum_{k=0}^\infty k P(k) \left(\sum_{j=0}^{k-1} {k-1 \choose j}(-\epsilon)^j \right)\\
= \sum_{k=0}^\infty k P(k) { k-1 \choose 0} - \sum_{k=0}^\infty k P(k) { k-1 \choose 1} \epsilon + \ldots\\
= \langle k \rangle - \langle k(k-1) \rangle\epsilon + \ldots
$$ |
weak mixing dynamical system but not mixing | Actually, "most" transformations are weak mixing but not mixing.
More precisely, if you consider a probability measure $\mu$ and the set of all invertible $\mu$-invariant measurable transformations with the "weak" topology, the subset of weak mixing transformations is of the second category, while the set of mixing transformations is of the first category.
To give an explicit example is much more delicate:
A possibility is to consider interval exchange transformations associated to irreducible permutations $\sigma$, in the sense that $\sigma\{1,\ldots,k\}\ne\{1,\ldots,k\}$ for any $k$. Katok showed that no such transformation is mixing. The most general result on weak mixing seems to be a result by Ávila and Forni showing that if $\sigma$ is not a rotation, then almost every interval exchange transformation is weak mixing ("almost every" with respect to the lengths of the intervals using Lebesgue measure).
The first example of a transformation that is weak mixing but not mixing seems to be due to Kakutani using some combinatorial arguments, followed by Maruyama using Gaussian processes. |
Variance of model with Bernoulli-distributed random variables | I'm not sure how $X_2$ is defined unless we index the $Z_i$ with negative indices, so I started the sum from $i=4$. Also, the variance is clearly unbounded for
$$ \lim_{n\to\infty}\sum_{i=4}^n X_i, $$
so I'm assuming instead you mean the standard normalization
$$ \lim_{n\to\infty}\frac{1}{\sqrt{n}}\sum_{i=4}^nX_i. $$
I'm going to compute what you called $\text{Var}(S^*_n)$, but with the $1/\sqrt{n}$ normalization above.
Writing out the first few terms of the sequence, it's easy to see that, for even $n\geqslant6$,
$$ \tag{1}\label{eqn:1}\sum_{i=4,6,\dots}^{n}X_i=Y_5(3+4Z_1)+Y_{n+3}Z_{n-1}+\sum_{i=6,8,\dots,}^{n}Y_{i+1}(3+5Z_{i-3}) $$
Since each summand in the sum above is independent, in light of (\ref{eqn:1}), we have that
\begin{align}
\text{Var}\bigg(\sum_{i=4,6,\dots}^nX_i\bigg)
&=\text{Var}\bigg(Y_5(3+4Z_1)+Y_{n+3}Z_{n-1}+\sum_{i=6,8,\dots,}^{n}Y_{i+1}(3+5Z_{i-3})\bigg)\\
&=\underbrace{\text{Var}(Y_5(3+4Z_1))}_{\displaystyle\equiv\alpha}+\underbrace{\text{Var}(Y_{n+3}Z_{n-1})}_{\displaystyle\equiv\beta}+\sum_{i=6,8,\dots,}^{n}\underbrace{\text{Var}(Y_{i+1}(3+5Z_{i-3}))}_{\displaystyle\equiv\gamma}\\
&=\alpha+\beta+\big(\frac{n-4}{2}\big)\gamma
\end{align}
Note that the constants $\alpha,\beta$ and $\gamma$ are independent of $n$. Our result is then
\begin{align}
\text{Var}\bigg(\frac{1}{\sqrt{n}}\sum_{i=4,6,\dots}^nX_i\bigg)
&=\frac{1}{n}\bigg(\alpha+\beta+\big(\frac{n-4}{2}\big)\gamma\bigg)\\
&=\frac{\alpha+\beta}{n}+\bigg(\frac{1}{2}-\frac{2}{n}\bigg)\gamma
\end{align}
which is clearly finite as $n\to\infty$. My calculations give that
$$ \gamma=25q(1-q)p+p(1-p)(3+5q)^2. $$
If I made any algebraic mistakes let me know! |
Growth of $t(n)$ satisfying $t(n)=2^nt(n/2)+n^n$ | One approach:
$$
\begin{split}
t_n
&= 2^n t_{n/2} + n^n \\
&= 2^n \left[2^{n/2} t_{n/4} + (n/2)^{n/2}\right] + n^n \\
&= 2^{n + n/2} t_{n/4} + 2^n (n/2)^{n/2} + n^n \\
&= 2^{2n - n/2} t_{n/4} + 2^{n/2} n^{n/2} + n^n \\
&= 2^{2n - n/2} \left[2^{n/4} t_{n/8} + (n/4)^{n/4}\right] + 2^{n/2} n^{n/2} + n^n \\
&= 2^{2n - n/4} t_{n/8} + 2^{2n - n/2} \left[(n/4)^{n/4}\right] + 2^{n/2} n^{n/2} + n^n \\
&= 2^{2n - n/4} t_{n/8} + 2^n n^{n/4} + 2^{n/2} n^{n/2} + n^n \\
\end{split}
$$
and you can iterate a couple more terms until you see the pattern... |
Is there a unique function $f:\Bbb R\to\Bbb R$ satisfying $f(x)^3+3f(x)^2-x^3+2x+3f(x)=0$? | It's $$(f(x)+1)^3=x^3-2x+1$$ or
$$f(x)=\sqrt[3]{x^3-2x+1}-1.$$ |
Minimize $\sum_{i=1}^n|x-a_i|$, $\max \{|x-a_i|, i=1,\cdots,n\}$, $\sum_{i=1}^n|x-a_i|^2$ and maximize $\Pi_{i=1}^n|x-a_i|$ | a) The top answer to the linked question does provide the outline of a formal proof, it's just disguised a little. A more formal outline would go something like this: the function $\sum_i|x-a_i|$ is continuous as a function of $x$. It is decreasing on the domain $(-\infty,a_{n/2})$ and increasing on the domain $(a_{n/2},\infty)$. Hence, the minimum must occur at $a_{n/2}$ (of course I'm being quite sloppy when I say $a_{n/2}$. Read the conversation on the linked post for more precise definition of the "median"). This should lead to a perfectly rigorous proof (no derivatives/subgradients required).
b) The answer is the midpoint between $a_1$ and $a_n$ (i.e. $(a_1+a_n)/2$). See if you can work backwards from this answer.
c) Note that
\begin{equation}
\sum_{i=1}^n|x-a_i|^2=\sum_{i=1}^n(x-a_i)^2.
\end{equation}
Taking the derivative w.r.t $x$ we obtain
\begin{equation}
2\sum_{i=1}^n(x-a_i)=2nx-2\sum_{i=1}^na_i.
\end{equation}
Setting this equal to zero and solving for $x$, we obtain
\begin{equation}
x^*=\frac{1}{n}\sum_{i=1}^na_i,
\end{equation}
i.e. the arithmetic mean of the $a_i$. Since the function $\sum_i(x-a_i)^2$ is convex and continuously differentiable as a function of $x$, this is sufficient to conclude that $x^*$ is a global minimizer.
d) Your intuition is correct--as stated, the function is unbounded. A more interesting problem might be to consider maximizing the product over the domain $[a_1,a_n]$. This could be a good exercise. |
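A quick numerical illustration of (a)-(c) on a grid (my own sketch; the sample points are arbitrary):

```python
import numpy as np

a = np.array([1.0, 2.0, 4.0, 9.0, 10.0])
xs = np.linspace(0.0, 12.0, 120001)
sum_abs = np.abs(xs[:, None] - a).sum(axis=1)  # (a) minimized at the median
max_abs = np.abs(xs[:, None] - a).max(axis=1)  # (b) minimized at (a_1 + a_n)/2
sum_sq = ((xs[:, None] - a) ** 2).sum(axis=1)  # (c) minimized at the mean
print(xs[sum_abs.argmin()], xs[max_abs.argmin()], xs[sum_sq.argmin()])
# approximately 4.0 (median), 5.5 (midpoint), 5.2 (mean)
```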
How to show that $\exp(x+y) = \exp(x)\times\exp(y)$ | Theorem: Given two convergent series $X=\sum x_n$ and $Y=\sum y_n$, define their Cauchy product $\sum z_n$ with terms $z_n=\sum_{r=0}^n x_ry_{n-r}$ = coefficient of $t^n$ in $(\sum_{r=0} x_rt^r)(\sum_{s=0} y_s t^s)$; then it can be proven that if $Z=\sum z_n$ exists, then $Z=XY$. (Abel's Theorem)
In your case, $e^x=\sum\frac{x^m}{m!}, e^y=\sum\frac{y^n}{n!}$, both are absolutely convergent series that is their Cauchy product will converge. (Mertens theorem)
Now define the terms of the Cauchy product: $Z_n=\sum_{r=0}^n \frac{x^ry^{n-r}}{r!(n-r)!}$
This series will converge by Mertens' theorem, and by Abel's theorem, $\sum_{n=0}^\infty Z_n= \sum_{n=0}^\infty\sum_{r=0}^n \frac{x^ry^{n-r}}{r!(n-r)!}=
(e^x)(e^y)$
$\begin{align}
\implies (e^x)(e^y)&=\sum_{n=0}^\infty\sum_{r=0}^n \frac{n! x^ry^{n-r}}{r!(n-r)! n!}\\ &= \sum_{n=0}^\infty\sum_{r=0}^n \frac{^nC_r x^ry^{n-r}}{n!}\\
&=\sum_{n=0}^\infty \frac{(x+y)^n}{n!}=e^{x+y}\end{align}$.
Note that:
$(x+y)^n=\sum_{r=0} ^n$ $^nC_r x^ry^{n-r}$ by Binomial Theorem.
Wherever no superscript is added to $\sum$ in my answer above, please treat the superscript as $\infty$ and in case of missing subscript to $\sum$, please treat the same as index variable $=0$. |
If a number is of type $n^n$, how to identify that? Example: 256 is $4^4$, 3125 is $5^5$ | How big can your input be (let's call it $r$)? If it is a 64-bit integer, then you only have to try $n=1,\ldots,15$, because $16^{16} = 2^{64}$ is already too big. The easiest way is to pre-calculate these 15 values, so you can just look them up in a table.
If $r$ is an arbitrary-length integer, then you still only have to try $n$ up to $\log_2r$, i.e. the bit-length of $r$. In fact you can check if all prime factors of $r$ are $\le \log_2r$ (you don't need to find all the prime factors to do this, just those that are $\le \log_2r$); if not, then $r$ is not of the form $n^n$. If all prime factors are this small, you can easily factorise $r$, and then you can check each possible $n$ very efficiently. |
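Even the naive loop over candidate $n$ is cheap, since $n$ never exceeds the bit length of $r$; a minimal sketch:

```python
def as_n_to_the_n(r):
    """Return n if r == n**n for some integer n >= 1, else None."""
    if r < 1:
        return None
    n = 1
    while n ** n < r:  # n is bounded by the bit length of r
        n += 1
    return n if n ** n == r else None

print(as_n_to_the_n(256), as_n_to_the_n(3125), as_n_to_the_n(100))  # 4 5 None
```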
Tychonoff Theorem and the axiom of choice | Hint: J. Terilla, Tychonoff’s Theorem, P7.
Theorem 9. The Tychonoff theorem $\Leftrightarrow$ the axiom of choice . |
The name: `Algebra' over a field/ring. | There was once a Persian mathematician who wrote an important book on rules of computation. His name, "al-Khwarizmi", was made eternal in "algorithm"; the book title, containing "ḥisāb al-jabr wa-l-muqābala", gave the word "algebra".
In the renaissance, algebra was everything that surpassed simple accountancy. Most prominently, solving polynomial equations and computations with roots.
Until the middle to end of the 19th century, mathematics was largely without structures. The ideas of vector spaces and even of the set of real numbers emerged only in that period. The systematic use of structures is a side effect of axiomatic mathematics. Thus it is natural that the structure of polynomials, which were called "(entire (rational)) algebraic functions" at that time, is called an "algebra". Because you need polynomials and their arithmetic to do "algebra".
That this was then generalized to the definition that you gave is due to the economy of words, why invent a new word when you can just re-purpose the name of the most prominent example.
To pinpoint this more precisely, one would have to find out when "Matrizenkalkül" or "matrix calculus" became "matrix algebra" and when the structure of infinitesimal symmetries was first named "Lie algebra", and if it had a different name before. |
Why is divergence related to "volume"? | Flux is defined in "absolute terms", which is what creates the problem here.
To illustrate this by example, think of $V$ as a screen and $\mathbf F$ as a constant vector field representing water flowing through the screen $V$ (at a constant rate everywhere). This example shows two problems if we don't divide by $|V|$ in our divergence equation, where our intuition about divergence is that it measures to what extent $\mathbf F$ is a source/sink.
In our example, taking $V$ to be large makes $\oint_V \mathbf F \cdot d\mathbf a$ large, and taking $V$ small makes $\oint_V \mathbf F \cdot d\mathbf a$ small. So this is not by itself a good measure of the extent to which $\mathbf F$ is a source/sink, since we know that $\mathbf F$ is constant. Hence $\oint_V \mathbf F \cdot d\mathbf a$ is only really a measure of whether $\mathbf F$ is a source/sink.
Moreover, if $\mathbf F$ is not constant, it could be letting water flow into or out of the screen at different rates in different places, and hence $\oint_V \mathbf F \cdot d\mathbf a$ only measures whether $\mathbf F$ is a net source/sink over the region $V$. To test whether $\mathbf F$ is a source/sink at a point $\mathbf p$, we would approximate by taking a region $V$ about $\mathbf p$ and computing $\oint_V \mathbf F \cdot d\mathbf a$. Again, as $V$ shrinks down to $\mathbf p$, it has $0$ volume, so the integral goes to $0$, and we don't get a very useful measure. So we should instead consider the value of the integral relative to the size of $V$ as we shrink $|V|$ to $0$, thus getting a limit as is familiar with derivatives. |
"Summing" the series $\sin(x)-\dfrac{1}{2}\sin(2x)+\dfrac{1}{3}\sin(3x)-...$ | You are almost there! We have:
$$\ln(1 + \cos(x) + i\sin(x))= \ln(A + iB) = \frac{1}{2}\ln(A^{2}+B^{2})+i\arctan\bigg(\frac{B}{A}\bigg)$$
$$=\frac{1}{2}\ln\big((1 + \cos(x))^{2} + \sin^{2}(x)\big)+i\arctan\bigg(\frac{\sin(x)}{1 + \cos(x)}\bigg)$$
Then, the imaginary part is:
$$\arctan\bigg(\frac{\sin(x)}{1 + \cos(x)}\bigg)$$
Using the identity $\displaystyle\tan\bigg(\frac{x}{2}\bigg) = \frac{\sin(x)}{1 + \cos(x)}$:
$$ = \arctan\bigg(\tan\bigg(\frac{x}{2}\bigg)\bigg) = \boxed{\frac{x}{2}}$$
Of course, with this approach you will run into the usual domain problems with inverse trig functions and complex logarithms. |
$Hom(A^n, A^m)$ is a free $mn$ module | Let $e_1, \dots, e_n$ be a basis for $A^n$ and $d_1, \dots, d_m$ a basis for $A^m$. An $A$-linear map $A^n \to A^m$ is determined exactly by where it sends each $e_i$ (do you see why? given where the map sends each $e_i$, where does it send the general element $\sum_i c_i e_i$?), and each $e_i$ is sent to some element of the form
$$
\sum_{j=1}^m a_{ij}d_j \in A^m,
$$
where the $a_{ij}$ are in $A$. Conversely, clearly any set of $a_{ij}$ also gives rise to a map $A^n \to A^m$ in the same way. In other words, each map can be expressed uniquely as a $m \times n$ matrix of elements of $A$, and certainly this is just a free $A$-module of rank $mn$. |
Prove that $\sum_{n=0}^{\infty}\frac{\Gamma^2(n+1)}{\Gamma(2n+2)}=\frac{2\pi}{3^{3/2}}$ | A not-really-alternative approach:
$$ \sum_{n\geq 0}\frac{\Gamma(n+1)^2}{\Gamma(2n+2)}=\sum_{n\geq 0}\frac{1}{(2n+1)\binom{2n}{n}}=\sum_{n\geq 1}\frac{2}{n\binom{2n}{n}} $$
and since $\sum_{n\geq 1}\frac{z^{2n}}{n^2\binom{2n}{n}}$ equals $2\arcsin^2\tfrac{z}{2}$ (by the Lagrange inversion theorem or equivalent approaches), we simply have
$$ \sum_{n\geq 0}\frac{\Gamma(n+1)^2}{\Gamma(2n+2)}=\frac{d}{dz}\left.2\arcsin^2\tfrac{z}{2}\right|_{z=1}=\left.\frac{4\arcsin(z/2)}{\sqrt{4-z^2}}\right|_{z=1}=\frac{2\pi}{3\sqrt{3}}.$$ |
Is the solution of functional equation $x^x=y^y$ when $0\lt x\lt y$ uncountable? | Your proof is perfectly fine. It seems to me that this approach is what the asker had in mind. |
How to calculate AP Cost for foods & liquids | Start with your formula.
Unit Cost = As Purchased Cost / Number of Units
Apply the values that were given.
Unit Cost = $2.89/2 L = 2.89/2 $/L = 1.445 $/L
That answer is accurate, but the answer in the book is per 50 mL, not per L.
You can convert the answer by multiplying it by 1, written in a convenient way, to get rid of the L unit on the bottom and replace it with 50 mL units on the bottom. Note that (1 L / (20 × 50 mL)) = 1.
You also use the units as part of the math now.
Unit Cost = $2.89/2 L × (1 L/(20 × 50 mL)) = (2.89 × 1)/(2 × 20) $/50 mL
Unit Cost = 0.07225 $/50 mL |
What area(s) of mathematics to uv parameterise a 3D triangulation? | I think you want mesh parameterization. This is a large area. See these references for instance:
http://www.inf.usi.ch/hormann/parameterization/index.html
http://geom.mi.fu-berlin.de/publications/db/KNP07-QuadCover.pdf
http://www.cs.ubc.ca/~sheffa/papers/CGV011-journal.pdf
Or do you simply want to convert a triangulation into a quadrangulation? |
Determine all functions $f:\mathbb{Z}\to\mathbb{Z}$ such that $f\big(f(n)\big)=-(q-p)\,f(n)+pq\,n$ for all $n\in\mathbb{Z}$. | Here is another solution: $p=2,q=3,f(n)=\begin{cases} 2n \text{ if } n \text{ is even}\\
-3n\text{ if } n \text{ is odd}\end{cases}$
Using the technique from your first link, it can be shown that all solutions to the functional equation are of the form $f(n)=\begin{cases} pn \text{ if } n\in T\\
-qn\text{ if } n\in \mathbb{Z}\setminus T \end{cases}$
Write the $k$-fold composition $f(f(\dots f(n)\dots))$ as $f^k(n)$. Then $f^k(n)=-(q-p)f^{k-1}(n)+pq\,f^{k-2}(n)$. This is a linear recurrence equation and standard techniques yield a solution of the form $f^k(n)=A(n)p^kn+B(n)(-q)^kn$. Substituting back into the original functional equation tells us that $A+B=1+2AB$. The only integer solutions are $A=1,B=0$ and $A=0,B=1$. |
Integer Solutions of the Equation $u^3 = r^2-s^2$ | $u^3=(r+s)(r-s)$ and $\gcd(r+s,r-s)=1$, so $r+s$ and $r-s$ are odd, coprime perfect cubes.
So let $r+s=a^3$, $r-s=b^3$. Then
$$r=\frac{a^3+b^3}2$$
$$s=\frac{a^3-b^3}2$$
where $a$ and $b$ are odd and coprime.
Conversely, if $a$ and $b$ are odd and coprime, let $r=(a^3+b^3)/2$ and $s=(a^3-b^3)/2$, which are coprime and have different parity. Indeed,
$$r+s=a^3$$
which is odd and coprime with
$$r-s=b^3$$ |
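A quick machine check of the parametrization over small odd coprime pairs (a sketch):

```python
from math import gcd

for a in range(1, 20, 2):
    for b in range(1, a, 2):
        if gcd(a, b) == 1:
            r, s = (a ** 3 + b ** 3) // 2, (a ** 3 - b ** 3) // 2
            assert r * r - s * s == (a * b) ** 3  # u = a*b
print("all checks passed")
```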
Is $y$ to the power of $\frac{1}{\infty}$ equals 1? | For $y\in(0,\infty)$, $\lim_{x\to\infty} y^{1/x}=1$. For $y=0$, $\lim_{x\to\infty}y^{1/x}=0$. |
Example of Galois, non-monogenic number field | Here is one local obstruction to being monogenic: If $2$ splits completely in $F$ and $d = [F:\mathbf{Q}] > 2$, then $F$ is not monogenic. Otherwise $\mathcal{O}_F/2 = (\mathbf{F}_2)^d$ is a quotient of $\mathbf{Z}[x]$, but there are no such maps if $d > 2$ (note that the image of $x$ in any copy of $\mathbf{F}_2$ must satisfy $x^2 = x$, so the largest such quotient is $\mathbf{Z}[x](x^2-x) = \mathbf{F}^2_2$.
But now it is easy to produce examples.
The simplest one: Let $p$ and $q$ be two primes which are $1$ modulo $8$. Then take $F = \mathbf{Q}(\sqrt{p},\sqrt{q})$, and take
$$f(x) = (x^2 - p - q)^2 - 4pq,$$
with roots $\pm \sqrt{p} \pm \sqrt{q}$.
The more complicated field one writes down, the harder it is to write down a minimal polynomial explicitly.
Let $H = \mathbf{Q}(\zeta_{pq})$ with $p,q$ primes that are $1$ modulo $N$. Then $H$ contains a $(\mathbf{Z}/N \mathbf{Z})^2$ extension in which $2$ is unramified and thus a $(\mathbf{Z}/N \mathbf{Z})$ extension in which $2$ is totally split.
It's also easy to write down explicit polynomials whose splitting field $F$ has the property that the Galois group is $S_n$ and $2$ splits completely. However, it's not so easy to write down the minimal polynomial of the splitting field by hand. The degree would be at least $6$, for a start.
If you want to do something completely by hand, start by looking for polynomials of degree $3$ with small height which split completely over $\mathbf{Q}_2$ (which you can check using Hensel's Lemma by hand).
I found $x^3 + 4 x^2 - x + 4$ has this property. The discriminant of the polynomial is $-2^2 \cdot 431$, and the discriminant of the field is $-431$, because it is unramified at $2$. (The argument above shows that this cubic field is non-monogenic, of course!)
Now you want the minimal polynomial of the Galois closure. (If $2$ splits completely in a field $K$, it does so also in the Galois closure $F$.) If $\alpha$ is a root of the cubic above, you could take $\beta = \alpha + \sqrt{-431}$. It's easy to see this will be a primitive element of the Galois closure. The minimal polynomial of $\beta$ over $\mathbf{Q}(\sqrt{-431})$ is obviously
$$(z - \sqrt{-431})^3 + 4 (z -\sqrt{-431})^2 - (z - \sqrt{-431}) + 4.$$
So the minimal polynomial is the product of this with $\sqrt{-431}$ replaced by $-\sqrt{-431}$. Not super fun to do by hand, but not impossible. The result is
$$z^6 + 8z^5 + 1307z^4 + 6896z^3 + 571108z^2 + 1472288z + 83393344.$$ |
Find $A$ for linearized system $\dot{\overrightarrow{x}} = A \overrightarrow{x}$ | The matrix $A$ is found by differentiating the two RHS with respect to $x$ and $y$. Let $g_1(x,y) = y$ and $g_2(x,y) = -cy-f(x)$. Then
$$
A = \left[
\begin{array}{cc}
\frac{\partial g_1(x,y)}{\partial x} & \frac{\partial g_1(x,y)}{\partial y}\\
\frac{\partial g_2(x,y)}{\partial x} & \frac{\partial g_2(x,y)}{\partial y}
\end{array}
\right] = \left[
\begin{array}{cc}
0 & 1\\
-f'(x) & -c
\end{array}
\right]
$$
Then you can plug the values for $x,y$ in the expression for $A$ to find out the particular linearization at the particular point. |
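If you want to automate this step, the Jacobian can be computed symbolically, e.g. with SymPy (a sketch; $f$ is left as an undetermined function):

```python
import sympy as sp

x, y, c = sp.symbols('x y c')
f = sp.Function('f')(x)

g = sp.Matrix([y, -c * y - f])       # the two right-hand sides g1, g2
A = g.jacobian(sp.Matrix([x, y]))
print(A)  # Matrix([[0, 1], [-Derivative(f(x), x), -c]])
```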
Ways of computing $\int_{-\beta}^{\beta}dx\cot(x/2)\sin(n\pi x/\beta)$ | I do not know how much this could help you.
Start with $x=iy$ and $\frac{\pi n} \beta=k $
$$\int\cot \left(\frac{x}{2}\right) \sin (k x)\,dx=i \int\coth \left(\frac{y}{2}\right) \sinh (k y)\,dy$$
$$2\int\coth \left(\frac{y}{2}\right) \sinh (k y)\,dy=B_{e^y}(1-k,0)+B_{e^y}(-k,0)-B_{e^y}(k,0)-B_{e^y}(k+1,0)$$ which makes the definite integral
$$2\int_{-a}^a\coth \left(\frac{y}{2}\right) \sinh (k y)\,dy$$ to be
$$\Big[B_{e^{-a}}(k,0)+B_{e^{-a}}(k+1,0)+B_{e^a}(1-k,0)+B_{e^a}(-k,0)\Big]-$$
$$\Big[B_{e^a}(k,0)+B_{e^a}(k+1,0)+B_{e^{-a}}(1-k,0)+B_{e^{-a}}(-k,0)\Big]$$ |
If $8$ does not divide $x^2-1$, then $x$ is even; prove by contrapositive | Rewrite $$(x^2-1)=(x-1)(x+1)$$ If $x$ is odd, then $x-1$ and $x+1$ are consecutive even numbers, so one of them is divisible by $4$ and their product is divisible by $8$. By contraposition, if $8$ does not divide $x^2-1=(x-1)(x+1)$, then $x$ must be even. |
Does $f'(x)\equiv\dfrac{1}{x^2+f^2(x)}$ for $x\geqslant 1$ and $f(1)=1$ imply $f(+\infty)$ exists? | Note that from the derivative we can conclude that $f(x)\geq 1$ for $x\geq 1$ whence
$$
0\leq f'(x)=\frac{1}{x^2+f^2(x)}\leq \frac{1}{1+x^2}
$$
and
$$
f(x)=f(1)+\int_{1}^xf'(t)\, dt
$$
from which the result follows: $f$ is increasing and bounded above by $1+\int_1^\infty\frac{dt}{1+t^2}=1+\frac{\pi}{4}$, so $\lim_{x\to+\infty}f(x)$ exists. |
Quadratic form reduction on the $n$-dimensional complex space $\mathbb{C}^n$ | This might be helpful
If we define
$$
\Phi_1:=\sum_{1\leq k<l\leq n}(k+il)x_kx_l
$$
and
$$
\epsilon_{k,l}=\left\{\begin{array}{cc}
1\textrm{ , if }k<l\\
i\textrm{ , if }k>l\\
0\textrm{ , else }
\end{array}\right\}\textrm{, }i=\sqrt{-1}
$$
then
$$
\Phi_1=\sum^{n}_{k,l=1}\epsilon_{kl}kx_kx_l=\frac{1+i}{1-i}\sum_{k<l}(k+l)x_kx_l-\frac{2i}{1-i}\sum^{n}_{k<l}kx_kx_l
$$ |
How many ways can the books be lined up? | Answer to part 1
1) If the 10 books are identical, obviously there is only one way to line them up.
2) If the 10 books are not identical, each book is different from the rest, so it's a full arrangement. The answer is $A^{10}_{10}$ = 10! = 3628800
Answer to part 2
Assume there are 10 slots and each slot holds one book. You have to choose 5 slots out of 10 to put the Math books in, which gives $C^{5}_{10}$ combinations. Then you have to choose 3 slots out of the remaining 5 to put the English books in, which gives $C^{3}_{5}$ combinations. The remaining slots are for the Science books, and there is only 1 way to fill them. So the answer is $C^{5}_{10} * C^{3}_{5} * 1 = 2520$ |
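The count in part 2 is a multinomial coefficient, which is easy to confirm (a sketch):

```python
from math import comb, factorial

print(comb(10, 5) * comb(5, 3))  # 2520
print(factorial(10) // (factorial(5) * factorial(3) * factorial(2)))  # 2520, same count
```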
Extending a full row rank matrix to a full rank square matrix | Yes you just need to continually add row vectors that aren't in the span of the previous ones until we have $n$ rows. This will always be possible as long as the previous row vectors don't span $\mathbb R^n$, which in our case won't happen since we have less than $n$ row vectors.
Each time we are adding $1$ to the dimension of the space generated by the rows. Since we start out with $m$ and do it $n-m$ times, at the end we get that the rows span a space of dimension $n$, as desired. |
Definite Integral evaluation and limit change of $\int_0^{\pi/2}\frac{\cos x}{\sqrt{1-\sin x}}\,dx $ | I put it into symbolab and I got the correct answer! |
Equation in polar coordinates to rectangular coordinates | Good start! You want to gather the $y^2$ terms on one side, though, so instead, from $$x^2+y^2=\frac{2y^2}x,$$ subtract $y^2$ from both sides and factor, yielding $$x^2=\frac{2y^2}x-y^2=\left(\frac2x-1\right)y^2=\frac{2-x}xy^2.$$ Can you finish it from here? |
The norm of linear functional $x\mapsto \sum_{n=1}^{\infty} \frac{x_n}{2^n}$ on $c_0$ | Your argument is right. To express it more rigorously, you can do something like this: for every $x$ with $\|x\|\le1$,
$$ \left|\sum_{n=1}^{\infty} \frac{x_n}{2^n}\right| \le \sum_{n=1}^{\infty} \frac{|x_n|}{2^n} < \sum_{n=1}^{\infty} \frac{1}{2^n}=1$$
where the second inequality is strict because $|x_n|\le 1$ for all $n$ and, since $x\in c_0$, $|x_n|<1$ for all large $n$.
On the other hand, the vector $x = (1,\dots,1,0,\dots)$ with $N$ ones has $ \sum_{n=1}^{\infty} \frac{x_n}{2^n} = 1-2^{-N}$, which can be arbitrarily close to one.
Together, the above paragraphs show that the norm is $1$ and it is not attained. |
Inequality $\sum\limits_{1\le k\le n}\frac{\sin kx}{k}\ge 0$ (Fejer-Jackson) | In short: let $f_n(x)$ denote the function on the lhs of the inequality. Of course, $f_1(x)=\sin x\geq 0$ on $[0,\pi]$. We will prove that $f_n(x)\geq 0$ on $[0,\pi]$ by induction on $n$. It is not too hard to determine the local minima of $f_n$ on $[0,\pi]$ by investigating its derivative. Then Ma Ming observed that $f_n$ coincides with $f_{n-1}$ on these local minima. And the induction step follows easily. Of course, $f_n(0)=f_n(\pi)=0$. We will actually prove that
$$
f_n(x)=\sum_{k=1}^n\frac{\sin kx}{k}>0\qquad\forall x\in(0,\pi).
$$
Remark: it is worth noting that the $f_n$'s are the partial sums of the Fourier series of the same sawtooth function. Just look at the case $n=6$, for instance, to see how they tend to approximate it nicely. See here to get an idea how to estimate the error in such approximations. As pointed out by math110, there are many proofs of this so-called Fejer-Jackson inequality. It can even be shown that the $f_n$'s are bounded below by a certain nonnegative polynomial on $[0,\pi]$. The proof below is at the calculus I level. I'm not sure it can be made more elementary.
Proof: first, $f_1(x)=\sin x$ is positive on $(0,\pi)$. Assume this holds for $f_{n-1}$ for some $n\geq 2$. Then observe that $f_n$ is differentiable on $\mathbb{R}$ with
$$
f_n'(x)=\sum_{k=1}^n\cos kx=\mbox{Re} \sum_{k=1}^n (e^{ix})^k.
$$
For $x\in 2\pi \mathbb{Z}$, we have $f_n'(x)=n$. So the zeros of $f_n'$ are the zeros of
$$
\mbox{Re}\;e^{ix}\frac{e^{inx}-1}{e^{ix}-1}=\mbox{Re}\;e^{i(n+1)x/2}\frac{\sin (nx/2)}{\sin(x/2)}=\frac{\cos((n+1)x/2)\sin (nx/2)}{\sin(x/2)}.
$$
This yields
$$
\frac{nx}{2}\in \pi\mathbb{Z}\quad\mbox{or}\quad \frac{(n+1)x}{2}\in \frac{\pi}{2}+\pi\mathbb{Z}
$$
i.e.
$$
x\in \frac{2\pi}{n}\mathbb{Z}\quad\mbox{or}\quad x\in \frac{\pi}{n+1}+\frac{2\pi}{n+1}\mathbb{Z}.
$$
Between $0$ and $\pi$, these are ordered as follows:
$$
0<\frac{\pi}{n+1}<\frac{2\pi}{n}<\frac{3\pi}{n+1}<\frac{4\pi}{n}<\ldots < \frac{2\lfloor n/2\rfloor \pi}{n}\leq \pi.
$$
The sign of $f_n'$ changes at each of these zeros, starting from a positive sign on $(0,\pi/(n+1))$. It follows that $f_n$ is positive on the latter, positive on the last interval (if nontrivial, i.e. in the odd case), with local minima at
$$\frac{2j\pi}{n}\qquad\mbox{for}\qquad j=1,\ldots,\lfloor n/2\rfloor.$$
But now here is Ma Ming's key observation: for these values, we have
$$
f_n\left(\frac{2j\pi}{n}\right)=f_{n-1}\left(\frac{2j\pi}{n}\right)+\sin\left(n\cdot\frac{2j\pi}{n}\right)=f_{n-1}\left(\frac{2j\pi}{n}\right)>0
$$
by induction step. It follows that $f_n(x)>0$ on $(0,\pi)$. QED. |
Is this a presentation of a known group? | Let $z = xy$. Note that $$\langle x, y : xyxy = 1\rangle = \langle x, z : z^2 = 1 \rangle$$ so your group is isomorphic to the free product $$ \mathbf Z \ast \mathbf Z/ 2\mathbf Z.$$ |
Proof of an analogue of the Fundamental Theorem of Calculus for infinitesimal generators. | Why do we have $F(t)=0$ for all $t$ by the continuity of $F$ and $F(0)=0$?
Because in the previous step you proved that $D^+F=0$. This result with the continuity of $F$ implies that $F$ is constant (see here the proof).
How does the lemma follow by the Hahn-Banach Theorem?
As a consequence of the Hahn-Banach Theorem, the dual of a Banach space $B$ separates points of $B$. Thus, if the equality were not true, there would be $\phi\in B^*$ such that
$$\left\langle T_tf-f-\int_0^t T_x Zfdx, \phi\right\rangle\neq 0.$$
But, as proved, this is not the case. |
The Whitehead product and $\pi_{\leq 3} S^2$ | I have looked at the draft version of Simpson's book, and one can phrase the argument as saying that strict 3-groupoids cannot model Whitehead products. Another argument for this is that strict 3-groupoids are equivalent to crossed complexes (over groupoids) of length 3. This is the case $n=3$ of a result proved in this paper. This structure has no quadratic information. Thus strict globular groupoids model only a restricted range of homotopy types.
However cat$^3$-groups, which can be seen as strict $3$-fold groupoids in which one structure is that of a group, do model pointed homotopy 3-types, as shown by Loday in 1982, and an account of how these and related structures model Whitehead products $\pi_2 \times \pi_2 \to \pi_3$ is given in this essay, Theorem 2.4. Such structures are in some cases calculable because of a related van Kampen type theorem, published in 1987, referred to in the previous op. cit. This theorem also led, through considering some particular pushouts, to a nonabelian tensor product of groups (which act on each other and on themselves by conjugation) which has proved of interest especially to group theorists, so that a current bibliography has now 169 items dating from 1952. In particular there can be deduced many calculations of the first non trivial Whitehead product for $SK(G,1)$ for groups $G$ (see [119] by G. Ellis in that bibliography).
It seems to me important that such strict higher van Kampen theorems have been proved only for certain structured spaces, namely filtered spaces, and $n$-cubes of pointed spaces. Of course, homotopy groups themselves are defined only for spaces with a base point, a structure which does not contain much information on the space.
Grothendieck was of course unaware of such uses of strict $n$-fold groupoids when writing Pursuing Stacks. |
drawing markings on a ruler | Take your example of $n=3$. If we were really marking at a resolution of $1/2^3=1/8$ inches, we’d mark $1/8,1/4,3/8,1/2,5/8,3/4$, and $7/8$ (since we’re not marking the endpoints at $0$ and $1$ inches). To make matters simpler, we’re going to rescale this by multiplying everything by $8$, so that our interval will be from $0$ to $8$ instead of from $0$ to $1$, and the marks will come every inch, instead of every $1/8$ inch.
If $n$ were $4$, the original problem would be to mark the interval from $0$ to $1$ inch in increments of $1/16$ inch: $1/16,1/8,3/16,1/4$, and so on up to $15/16$. To avoid the fractions, we would multiply everything by $2^4=16$ and cover the interval from $0$ to $16$ instead, marking $1$-inch intervals instead of $1/16$-inch intervals.
The marking itself is fairly simple. The mark in the middle is to be of height $n$; if $n=3$, that will be a mark $3$ units high at $x=4$. Then we look at the left interval, from $x=0$ to $x=4$, and the right interval from $x=4$ to $x=8$, and find the midpoint of each; these midpoints are $x=2$ and $x=6$, and they get marks of height $n-1=2$. Keep going in this manner; in the case of $n=3$, there’s only one more stage, and it produces marks of height $1$ at $x=1,3,5$, and $7$. Here’s a rough sketch:
x
x x x
x x x x x x x
|-----------------------------------------------|
0 1 2 3 4 5 6 7 8
The idea is probably for you to design a recursive algorithm that takes as input the ends of an interval and a height, finds the midpoint of the interval and draws a mark of the desired height, and then calls itself for the left and right subintervals surrounding that mark. If you’ve learned anything about traversing trees, you may want to think about that, too. (Of course, it can also be done without recursion, and my guess about that may be wrong.) |
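Here is one way that recursive algorithm might look (my own sketch; `ruler_marks` and its interface are hypothetical):

```python
def ruler_marks(lo, hi, height, marks=None):
    """Mark the midpoint of [lo, hi] at the given height, then recurse on both halves."""
    if marks is None:
        marks = []
    if height == 0 or hi - lo < 2:
        return marks
    mid = (lo + hi) // 2
    marks.append((mid, height))
    ruler_marks(lo, mid, height - 1, marks)
    ruler_marks(mid, hi, height - 1, marks)
    return marks

print(sorted(ruler_marks(0, 8, 3)))
# [(1, 1), (2, 2), (3, 1), (4, 3), (5, 1), (6, 2), (7, 1)]
```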
Study convergence of the series of the $\sum_{n=1}^{\infty}(-1)^{n-1}\frac{\sin{(\ln{n})}}{n^a}$ | @nanchangjian:
You are not understanding the cases $a>1$ and $a\leq 0$ correctly.
The reason why it converges for $a>1$: is as @DonAntonio pointed out,
$$\left|(-1)^{n-1}\frac{\sin\log n}{n^a}\right|\le\frac1{n^a}$$
Notice that $(-1)^{n-1}$ is inside the absolute value sign.
In case $a\leq 0$: it diverges because
$$\limsup \left((-1)^{n-1}\frac{\sin\log n}{n^a}\right)\geq 1$$
The case when $0<a\leq 1$:
Notice that the series is the imaginary part of the following:
$$\sum_{n=1}^{\infty} (-1)^{n-1} \frac{1}{n^{a-i}}$$
where $i=\sqrt{-1}$.
So, we consider the Dirichlet series
$$\sum_{n=1}^{\infty} (-1)^{n-1} \frac{1}{n^s}$$
This series has $1$ as abscissa of absolute convergence and $0$ as abscissa of convergence.
That means the series converges whenever $\textrm{Re} (s) >0$.
Since $\textrm{Re}(a-i) = a >0$, the series converges.
It will be helpful if you understand this lemma about Dirichlet series:
(Lemma)
Suppose that $F(s)=\sum_{n=1}^{\infty} \frac{f(n)}{n^s}$ converges for some $s_0\in\mathbb{R}$. Then the series converges for all $s$ in the region $\textrm{Re}(s)>s_0$.
The proof of this uses partial summation. |
Show compactness with Arzelà–Ascoli | We have to show that for every sequence $f_n$ in the unit ball of $C([0,1])$, there exists a subsequence of $T(f_n)$ which converges.
Let $\|f\|$ denote the uniform norm on $C([0,1])$.
So let $f_n$ be such a sequence.
1) Let $M$ be an upper bound for the continuous function $|k|$ on the compact set $[0,1]^2$. Now observe that
$$
|T(f_n(x))|\leq\int_0^1|k(x,y)||f_n(y)|dy\leq M\|f_n\| \leq M
$$
for all $n$ and all $x\in[0,1]$. So
$$
\|T(f_n)\|\leq M
$$
for all $n$. This proves that the $T(f_n)$ are uniformly bounded.
2) Now
$$
|T(f_n(x))-T(f_n(x_0))|\leq \int_0^1|k(x,y)-k(x_0,y)||f_n(y)|dy\leq \int_0^1|k(x,y)-k(x_0,y)|dy
$$
for all $n$ and all $x,x_0$.
Since $k$ is continuous on the compact domain of integration, it is uniformly continuous there. Thus the right-hand side tends to $0$ when $x$ tends to $x_0$.
This proves that the $T(f_n)$ are equicontinuous.
By Arzelà-Ascoli, there exists a subsequence of $T(f_n)$ which converges. |
How to prove $\Delta : X \to X \times_Y X$ is quasi-compact for $X$ noetherian? | To check that $\Delta : X \to X\times_Y X$ is quasicompact, it is enough to show that the intersection of any two open affine subsets $U$ and $V$ of $X$ is quasicompact. Can you show this? (Can you show that any open subset of $\text{Spec }A$ is quasicompact if $A$ is a Noetherian ring?) |
Why does the power series expansion of $f(z) = e^{\sum_{j=1}^{k}\frac{z^{j}}{j}}$ have all coefficients positive? | In a combinatorial flavor, $n!$ times the coefficient of $z^n$ in
$$ \exp\left(z+\ldots+\frac{z^k}{k}\right) $$
is the number of permutations in $S_n$ that decompose in disjoint cycles with length $\leq k$ (see here).
Then it is quite trivial that the previous coefficient is rational and non-negative. |
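For small cases one can verify this interpretation directly, comparing the series coefficient against a brute-force count over $S_n$ (a sketch; SymPy does the series expansion):

```python
from itertools import permutations
from sympy import symbols, exp, series, factorial

z = symbols('z')
n, k = 5, 2  # permutations of S_5 with all cycle lengths <= 2, i.e. involutions

egf = exp(sum(z ** j / j for j in range(1, k + 1)))
count = series(egf, z, 0, n + 1).removeO().coeff(z, n) * factorial(n)

def max_cycle_length(p):
    seen, best = set(), 0
    for i in range(len(p)):
        length = 0
        while i not in seen:
            seen.add(i)
            i = p[i]
            length += 1
        best = max(best, length)
    return best

brute = sum(1 for p in permutations(range(n)) if max_cycle_length(p) <= k)
print(count, brute)  # 26 26
```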
Non-uniqueness in the $L^1$ martingale representation | Suppose $\xi$ is only integrable. Let $X=(X_t)_{t\in[0,T]}$ denote a continuous version of the martingale $\Bbb E[\xi|\mathfrak F_t]$, $0\le t\le T$. By localizing the usual $L^2$ martingale representation, there is a predictable process $K$ with $\int_0^T K_s^2\,ds<\infty$ a.s., such that
$$
X_{t}=\Bbb E[\xi]+\int_0^t K_s\,dW_s,\qquad 0\le t\le T, \hbox{ a.s.}
$$
The process $K$ is uniquely determined by the martingale $X$.
On the other hand, using the representation you cite, we can define a local martingale $Y$ by
$$
Y_t=\Bbb E[\xi]+\int_0^t H_s\,dW_s,\qquad 0\le t\le T.
$$
The difference $X_t-Y_t$ is a local martingale on $[0,T]$ with terminal value $0$. If we knew that this implied $X_t=Y_t$ for all $t$, then we would have $H=K$ and the uniqueness you seek. But examples show that a local martingale need not be determined by its terminal value.
For example (modifying an example found in R. Williams' text on Math. Finance), take $T=1$ and define $Z_t=\int_0^t (1-s)^{-1/2}\,dW_s$ for $0\le t<1$. Then $Z$ is a local martingale on $[0,1)$, and one checks $\langle Z\rangle_t\uparrow+\infty$ as $t\uparrow 1$. It follows that the stopping time $\tau:=\inf\{t>1/2: Z_t=0\}$ satisfies $0<\tau<1$ a.s. The stopped process $Z_{t\wedge\tau}$ is then a local martingale on $[0,1]$ with terminal value $Z_1=0$, but also $Z_1=\int_0^1 J_s\,dW_s$, where $J_s=1_{\{s\le\tau\}}(1-s)^{-1/2}$. Notice that $J$ is non-zero!
More intriguing, R.M. Dudley showed in 1977 that any $\mathfrak F_T$ measurable r.v. $\xi$ admits a representation $\xi=c+\int_0^T H_s\,dW_s$ with $H$ predictable such that $\int_0^T H_s^2\,ds<\infty$ a.s., and the constant $c$ chosen arbitrarily by you ahead of time. |
Inverse of a matrix in $\mathbb{F}_5^{4\times4}$ | Since $g/h$ is coprime with $f/h$, there are $u,v$ such that $ug/h+vf/h=1$, so $v(A)$ is your inverse. Moreover you can reduce your computation modulo $g/h=2X^2+aX+b$, searching for the inverse of $f/h$ in $\Bbb{F}_5[X]/(2X^2+aX+b)$ |
What is a good way to measure the distance between finite subsets of the reals? | Let $A$ be a finite subset of $\mathbb{R}$. For $x\in\mathbb{R}$ define $d(x,A)$, the distance from $x$ to $A$, to be $$d(x,A) = \min\{\vert x-a\vert:a\in A\},$$ the distance from $x$ to the closest point of $A$. If $B$ is another finite subset of $\mathbb{R}$, define $d^*(A,B)$, the asymmetric distance from $A$ to $B$, to be $$d^*(A,B) = \max\{d(a,B):a\in A\}.$$ Finally, define $d(A,B)$, the distance between $A$ and $B$, to be $$d(A,B) = \max\{d^*(A,B),d^*(B,A)\}.$$ This is the Hausdorff distance mentioned by deinst.
A few examples:
$$\begin{align*}d(\{0.9,1.1\},\{1\}) &= \max\{0.1,0.1\} = 0.1\\
d(\{0.9,1.1\},\{2\}) &= \max\{1.1,0.9\} = 1.1\\
d(\{1,1.1\},\{1\}) &= \max\{0.1,0\} = 0.1\\
d(\{1,1.1\},\{2\}) &= \max\{1,0.9\} = 1\\
d(\{0.9,1.1\},\{1.9,2.1\}) &= \max\{1,1\} = 1\\
d(\{0.9,1.1\},\{1.9,2.0,2.1\}) &= \max\{1,1\} = 1\\
d(\{1.9,2.0,2.1\},\{1.9,2.1\}) &= \max\{0.1,0\} = 0.1
\end{align*}$$ |
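A direct transcription into code (a sketch; the printed values are subject to the usual floating-point rounding):

```python
def hausdorff(A, B):
    d_star = lambda X, Y: max(min(abs(x - y) for y in Y) for x in X)
    return max(d_star(A, B), d_star(B, A))

print(hausdorff({0.9, 1.1}, {1}))              # ~0.1
print(hausdorff({0.9, 1.1}, {2}))              # ~1.1
print(hausdorff({1.9, 2.0, 2.1}, {1.9, 2.1}))  # ~0.1
```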
What is the benefit in constructing the integers from natural numbers? | The practice comes from a strong historical desire to find a list of axioms from which all mathematical truths could be proved in a first-order system. With the advent of calculus, people began to prove all types of completely incorrect statements by their cavalier manipulation of the infinite. This fact, coupled with the discovery of Russell's Paradox, led people to believe that there should be a more systematic way of theorem-proving. It was later proved impossible by Gödel to find such a comprehensive system, even if infinitely many axioms were allowed. Still, most results of interest can be proved from the set-theoretic construction of mathematics known as ZFC set theory. One of the axioms is the existence of the natural numbers. **It is generally accepted that one should take as few axioms as possible. Also, it actually turns out proof-wise to be easier to construct the integers than to assume their existence, as it would be more difficult to define the addition and multiplication operations on them.** These are the two largest reasons why. We actually construct the natural numbers in a sense as well. They are defined as follows:
$$
\begin{align}
0 &:= \emptyset \\
1 &:= \{0\} \\
2 &:= \{0,1\} \\
3 &:= \{0,1,2\} \\
\vdots
\end{align}
$$
One then proves that recursively defining a function is an acceptable way to define a function, and defines the operations of addition and multiplication recursively as follows:
$$
\begin{align}
S(x)&:= x \cup \{x\} \text{ (the successor of x i.e. x+1)} \\
\\
x + 0 &:= x \\
x + S(y) &:= S(x+y) \\
\\
x * 0&:= 0 \\
x * S(y)&:= (x*y) + x
\end{align}
$$
One can also take out zero and call it the natural numbers if one then wanted.
After that one then constructs the integers as equivalence classes of ordered pairs of natural numbers, where the ordered pair $(x,y):= \{\{x\},\{x,y\}\}$ represents the number $x-y$, and so on. (Functions are also defined as sets of ordered pairs, btw.) The case of the reals is more complicated, and also the biggest leap in terms of taking its existence as an axiom, as it's tantamount to assuming there are no holes in the number line. I know this is more than you asked for (the answer to your question is in bold above), I just thought I'd give you an idea of how this progression actually goes in terms of defining everything as a set.
EDIT-
In response to your comment. You start out constructing a system for expressing and proving logical truths. You introduce a set of logical symbols like and, or, not, etc., and a set of nonlogical symbols consisting of function symbols, relation symbols, and constants, and a syntax for them. You then find a way to actually have these statements acquire meaning by introducing a signature, consisting of a language (a subset of the nonlogical symbols), a domain/universe this language means to describe, and an interpretation function that maps purely syntactical statements to what the real-world meaning of this statement would be. You then introduce Tarski semantics (see T-schema on Wikipedia), which is a way of defining what it means for these statements to be true, in such a way that a formula will be true if and only if the idea that it expresses is true. Then you introduce rules of inference which allow you to prove things with this system. You now have a system for expressing an unbelievable quantity of statements and proving things from them. Next you want to know that you can be assured that you won't prove false statements from true assumptions. This is called soundness, and it turns out to be the case that FOL is sound. Then you want to know whether any logically valid relationship capable of being expressed in your system can actually be proved in your system. This is called completeness (distinguished from the term completeness when referring to a theory), and although it's a pain in the a** you can prove first-order logic is complete. (See the completeness theorem.)
Here is the point of all of this. You ask yourself, can I prove results from real analysis in this system? That way I can be confident that I'm not proving things that are false. You introduce the language of set theory, generally either only $\{\in\}$ or $\{\in, \emptyset \}$, and the axioms of ZFC become your basis for proving things. You then crucially prove that it's possible to expand your language, by adding new symbols to it, in such a way that it's possible to remove these new symbols from any statement and replace them with your original symbols, and that by adding these new symbols you are not accidentally adding new axioms. That is, you can't prove anything with these new symbols that ultimately can't be proved from your original symbols and axioms. You then proceed to define all the number systems in your system, properties of sets, limits, functions, etc., and at each step, you are proving these theorems in first-order logic, in a way that all these new symbols could be eliminated and the results proved from just the axioms of ZFC. As I said, nobody actually does this anymore; people still use the same axioms, but proofs are done in paragraph form. Still, it's satisfying to know that you could do it if you wanted. |
Why aren't all polynomial functions of odd degree, odd functions? | This is because to make $f(x)$ odd, it must satisfy $f(x) = -f(-x)$. Now for polynomials, if $f(x)$ is odd then all powers of the variable must be odd.
We can see this by checking that a polynomial with at least one even-power term (including a constant term) fails the condition.
Eg:
$f(x) = x + 1$
$f(-x) = -x + 1$
$f(x) ≠ -f(-x)$ |
Spanning Set And Solution Of Linear Equations | A homogeneous system of linear equations always has a solution: the null solution.
In your case, you have three equations in three unknowns. Then your vectors span $\mathbb{R}^3$ if and only if the system has one and only one solution. |
Why is this way used to solve for base 10 to base $16$ | It's very fast to compute the base $2$ expansion by successive divisions. Then you only have to group the digits in the binary expansion by groups of $4$ digits, starting from the right, since $16=2^4$.
For instance, the last hexadecimal digit here is, in binary form, $\;2^3+2^2+2$. As we know that $2^3+2^2+2+1=2^4-1 =F$, we deduce at once that
$$2^3+2^2+2=(2^{4}-1)-1=F-1=E.$$
The second hexadecimal digit is $\;2^7=2^3\cdot2^4$, which is, in hexadecimal form, $2^3=8$. |
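Here is the grouping trick in code (my own sketch; `to_hex` is a hypothetical name):

```python
def to_hex(r):
    bits = bin(r)[2:]                          # binary expansion
    bits = bits.zfill(-(-len(bits) // 4) * 4)  # left-pad to a multiple of 4 digits
    return ''.join('0123456789ABCDEF'[int(bits[i:i + 4], 2)]
                   for i in range(0, len(bits), 4))

print(to_hex(142))  # '8E': 10001110 -> 1000 | 1110
```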
Prove all roots of $p_n(x)-x$ are real and distinct | Lemma: Let $n \geq 2$. If $p_{n}(x)$ has a local minimum, it is $-2$; if $p_{n}(x)$ has a local maximum, it is $2$. Local minima with $x>0$ occur $2^{n-2}$ times, local minima with $x<0$ occur $2^{n-2}$ times, local maxima with $x<0$ occur $2^{n-2} -1$ times, and local maxima with $x>0$ occur $2^{n-2}-1$ times. A local maximum also occurs at $x=0$.
Proof; Differentiating $p_{n}$ we find that
$p_{n}'(x) = 2p_{n-1}(x)(p_{n-1}'(x))$
Hence
(1) $p_{n}'(x) = 2p_{n-1}(x)(2p_{n-2}(x))...(2p_{1}(x))(2x)$
To be a local minimum or maximum one needs $p_{n}'(x) = 0$. By looking at (1) we see that some $p_{n-j}(x)$ is $0$ or $x=0$. If $p_{n-1}(x)$ is $0$ then $p_{n}(x)$ is $-2$. If some $p_{n-j}(x)$ for $j\geq 2$ is $0$ (just define $p_{0}(x) = x$) then $p_{n}(x) = 2$. So if $(x,p_n(x))$ is a turning point, $p_n(x)$ is $\pm 2$. Also note that there are no double roots: if $p_h(x) = 0$ then for any $a >0$, $p_{h+a}(x) = 2$ or $-2$.
Now note that each $p_n(x)$ has $2^{n}$ distinct roots. So we can label the roots in an increasing order $c_{1} < c_{2} ...< c_{2^n}$. Observe that between any two successive roots $c_{u}$ and $c_{u+1}$ one has a turning point.
We claim that between any two successive roots there is exactly one turning point. To see this, suppose that there is an interval $[c_u,c_{u+1}]$ that contains at least two turning points; then there are at least $2^n$ turning points in $[c_{1},c_{2^n}]$, but there can only be at most $2^n-1$ turning points (as the degree of $p_n(x)$ is $2^{n}$). Combining this information with the information in the previous paragraph, we have that in an interval $[c_{u}, c_{u+1}]$, $p_n(x)$ attains exactly one of the following:
(I) A local minima of -2
(II) A local maxima of 2
Going all the way back and looking at (1) (the equation above) (I) occurs exactly when $p_{n-1}(x) = 0$ which has $2^{n-1}$ distinct roots; half positive and half negative (even function). (II) occurs exactly when some $p_{n-j}(x) =0$ for some $j \geq 2$ which happens at $2^{n-1} - 1$ distinct places with one of them being $0$.
We will use the observation that the roots of $p_n(x)$ satisfy $|x| < 2$ to finish the proof.
Now consider each of the $2^{n-2} -1$ type (II) intervals $[c_{u},c_{u+1}]$ with $c_{u} > 0$. We can split this interval into two intervals $[c_{u},d]$ and $[d,c_{u+1}]$ where $p_n(d) = 2$; applying the intermediate value theorem in each of the intervals to the function $g_n(x) = p_n(x) - x$, we find that $g_n(x)$ has two distinct roots in $[c_{u},c_{u+1}]$. So far this gives us a total of $2(2^{n-2} -1)= 2^{n-1} -2$ distinct real roots.
It is now time to look at the $2^{n-2}$ type (I) intervals $[c_p, c_{p+1}]$ with $c_{p} < 0$; we once again split this interval into $[c_p, e]$ and $[e, c_{p+1}]$ where $e$ is such that $p_n(e) = -2$ (as we are dealing with a type (I) case); we once again apply the intermediate value theorem in each of the intervals to the function $g_n(x) = p_n(x) - x$ to gather that $g_n(x)$ has two distinct roots in $[c_p, c_{p+1}]$. The total number of roots of $g_n(x)$ found in these intervals is hence $2(2^{n-2}) = 2^{n-1}$ distinct roots.
The total number of distinct roots we now have is $2^{n-1} -2 + 2^{n-1} = 2^{n}-2$. Where the other two roots are is a question you may ask. You will find those other two roots in the interval $[-a,a]$ where $a$ is the smallest positive root of $p_n(x)$, and you can prove this yourself by doing a similar analysis to that done above. |
Tangent bundle of P^n and Euler exact sequence | No, any construction using the Hermitian metric is going to take you outside the holomorphic category! By the way, you can understand the Euler sequence very elegantly by mapping $\Bbb P^n\times \Bbb C^{n+1}\to T\Bbb P^n\otimes\mathscr L$ as follows: If $\pi\colon\Bbb C^{n+1}-\{0\}\to\Bbb P^n$ and $\pi(\tilde p) = p$, for $\xi\in T_{\tilde p}\Bbb C^{n+1}$, map $\xi$ to $\pi_{*\tilde p}\xi\otimes\tilde p$, and check this is well-defined.
You might also try to generalize the Euler sequence to a complex submanifold $M\subset\Bbb P^n$, letting $\tilde M = \pi^{-1}M$. Then you get the exact sequence
$$0\to \mathscr L \to E \to TM\otimes\mathscr L\to 0\,,$$
where $E_p = T_{\tilde p}\tilde M$ for any $\tilde p\in \pi^{-1}(p)$. ($\tilde M$ is the affine cone corresponding to $M\subset\Bbb P^n$.) |
How to solve integration elegantly using contour integration | This is a standard integral:
Let $\cos \theta = a$ (with $-1 < a < 1$):
$$\int\limits_{\psi = 0}^{\pi/2} \frac{1}{1 - a \cos \psi}\ d\psi =\frac{2 \tan ^{-1}\left(\frac{a+1}{\sqrt{1-a^2}}\right)}{\sqrt{1-a^2}}$$ |
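A quick numerical check of this closed form (a sketch; SciPy assumed available):

    import numpy as np
    from scipy.integrate import quad

    for a in (-0.9, -0.3, 0.2, 0.7):
        lhs, _ = quad(lambda psi: 1.0 / (1.0 - a * np.cos(psi)), 0.0, np.pi / 2)
        rhs = 2.0 * np.arctan((a + 1.0) / np.sqrt(1.0 - a**2)) / np.sqrt(1.0 - a**2)
        print(a, lhs, rhs)   # the two columns should agree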
Composing equivalence relations | There isn't really a general principle, because the general statement is false, although for some equivalence relations the composition is an equivalence relation (e.g. if both are the identity). What you need is a counterexample to show that the statement isn't always true.
Consider two equivalence relations on $\{1,2,3\}$ given by their partitions as follows:
$$
A/R = \{\{1,2\}, \{3\}\} \\
A/S = \{\{1\}, \{2,3\}\}
$$
Then $S\circ R$ is not symmetric: $1(S\circ R)3$ because $1R2$ and $2S3$, but not $3(S\circ R)1$. (If $3(S\circ R)1$, there would be some $x$ such that $3Rx$ and $xS1$. But $xS1$ implies $x=1$; however, not $3R1$. So there's no such $x$.)
For an example where the composition fails to be transitive:
Consider $A = \{1,2,3,4\}$, and let the two equivalence relations $R,S$ be given by their partitions:
$$
A/R = \{\{1,2\}, \{3,4\}\} \\
A/S = \{\{1,2,3\}, \{4\}\} \\
$$
Then $(S\circ R)$ isn't transitive:
$1(S\circ R)3$ because $1R1$ and $1S3$, and $3 (S\circ R) 4$ because $3R4$ and $4S4$; but $(1,4) \notin (S\circ R)$. (If it were, there would be some $x$ such that $1Rx$ and $xS4$, but by inspection there's no such $x$.) |
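Both counterexamples are small enough to check mechanically (a sketch in Python, with relations represented as sets of ordered pairs):

    def compose(S, R):
        # S o R = {(a, c) : there is b with (a, b) in R and (b, c) in S}
        return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

    def is_symmetric(rel):
        return all((b, a) in rel for (a, b) in rel)

    def is_transitive(rel):
        return all((a, d) in rel for (a, b) in rel for (c, d) in rel if b == c)

    # first example: partitions {{1,2},{3}} and {{1},{2,3}} of {1,2,3}
    R = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 1)}
    S = {(1, 1), (2, 2), (3, 3), (2, 3), (3, 2)}
    print(is_symmetric(compose(S, R)))     # False

    # second example: partitions {{1,2},{3,4}} and {{1,2,3},{4}} of {1,2,3,4}
    R2 = {(a, b) for a in (1, 2) for b in (1, 2)} | {(a, b) for a in (3, 4) for b in (3, 4)}
    S2 = {(a, b) for a in (1, 2, 3) for b in (1, 2, 3)} | {(4, 4)}
    print(is_transitive(compose(S2, R2)))  # False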
vector basis with inner product less than zero | Ad 1. I think the vector $\sum e_i$ will do, and this can be proven with the triangle inequality.
You may also need that the inner product of $e_1$ with $e_2$ or with $e_3$ is smaller than the inner product of $e_1$ with $e_2+e_3$.
About the sign of $\cos(x)-\sin(x)$ on $[0, \pi]$ | In the plane $xy$, the locus of the points $x>y$ is the lower half plane delimited by the line through the origin with slope $1$.
This half plane cuts the upper unit semicircle into two arcs, for the angle ranges $(0,\frac\pi4)$, which lies inside it (so $\cos x-\sin x>0$ there), and $(\frac\pi4,\pi)$, which lies outside it (so $\cos x-\sin x<0$ there).
Distance involving 3D lines and vectors. | $
\renewcommand{\v}{\mathbf{v}}
\newcommand{\vo}{\mathbf{v_0}}
\renewcommand{\p}{\mathbf{p}}
\renewcommand{\d}{\mathbf{d}}
\renewcommand{\a}{\mathbf{a}}
\renewcommand{\b}{\mathbf{b}}
$
First, WLOG assume $\p = \a$, so that $\v = \p + \d t = \a + \d t$.
Then the distance between $\v $ and $\a$ is
$$
\|\v - \a\| = \|\p + \d t - \a\| = \|\a + \d t - \a\| = \|\d \| t
$$
Since we want to make sure that $\|\v - \a\| = t$, we need to choose $\d$ such that 1) $\d$ is collinear with $\b-\a$, and 2) $\|\d\| = 1$.
The most obvious choice is $\d = \dfrac{\b - \a}{\|\b - \a\|}$.
Then we have
$$
\|\v - \a\| = \|\a + \d t - \a\| = \| \d \| t =
\left\| \dfrac{\b - \a}{\|\b - \a\|}\right\| t =
\dfrac{\left\| \b - \a\right\|}{\|\b - \a\|} t = t
$$
Thus, for a vector
$$\v = \p + \d t = \a + \dfrac{\b - \a}{\|\b - \a\|} t$$
we have
$$
\|\v - \a \| = t
$$ |
Stuck on finding dimension and basis of a solution set | So assuming your solution is correct, the solution vector is
$$
\begin{bmatrix} x \\ y\\ z\\ w \end{bmatrix}
= \begin{bmatrix} z + 2w \\ z - 2w\\ z\\ w \end{bmatrix}
= \begin{bmatrix} 1 \\ 1\\ 1\\ 0 \end{bmatrix} z
+ \begin{bmatrix} 2 \\ -2\\ 0\\ 1 \end{bmatrix} w
= \begin{bmatrix} 1 & 2 \\ 1 & -2 \\ 1 & 0\\ 0 & 1 \end{bmatrix}
\begin{bmatrix} z \\ w \end{bmatrix}
$$
The solution is then the column space of the matrix above. Now can you find the dimension and basis of the solution set?
To have the solution set as a kernel of a matrix, can you write down the definition of a kernel and see how you can convert this definition into one? |
Ring of invariants for the action $SL_2(\mathbb{C})$ on binary quadratic forms | Certainly there is a more scholarly argument but let me try by hand.
Let $q_d(x,y)=x^2+dy^2$ and if $f$ is an invariant polynomial then let $\phi(d)=f(q_d)$. Then I say that $f=\phi\circ disc$.
Indeed a generic quadratic form is given by $q(x,y)=ax^2+2bxy+cy^2=q_d(\frac{ax+by}{\sqrt{a}},\frac{y}{\sqrt a})$ where $d=disc(q)$. This is clearly the composition with an element of $SL_2(\mathbb C)$ so $f(q)=f(q_d)=\phi\circ disc(q)$. Then continuity of $f$ implies that this is true for any $q$. |
Confusion about event in a sample space. | The event $\{H,T\}$ is the event that the coin turns up heads or tails. This event always happens and thus has probability $1$.
The empty event, $\varnothing$, should not be thought of as the event that the coin lands on its edge. It is assumed that the coin always lands heads or tails. Rather the event $\varnothing$ is the event that there is no outcome, which never happens and has probability $0$.
It might be illustrative to consider the sample space of a die roll: $\{1,2,3,4,5,6\}$. Now the event $\{2,4,6\}$ is the event that the die shows $2$, $4$, or $6$, not the event that it shows $2$, $4$, and $6$. Equivalently, $\{2,4,6\}$ is the event that the die roll is an even number, which should make sense as something we would like to consider as an event. |
"mini-shuffle moves": reversing order in a line | Notice that you want to realize the permutation $(1,10)(2,9)(3,8)(4,7)(5,6)$ this is an odd permutation, and because of this cannot be written as a product of even permutations. Notice that mini-shuffle moves are $3$-cycles, in other words even permutations. So it cannot be done, no matter how many permutations mini-shuffle moves you make. |
Help with a quintic polynomial | One way to obtain a "symbolic version," as requested in the comments, is to compute some relatively simple approximation and polish it with a Newton-Raphson step. Because this function is smooth and monotonic for $0 \le y \le 35$ this is going to work very well.
In fact, a least-squares fit of the functional form $a \log(b + c(y+1)^{1/5}) + d(y-e)^2$ to the solutions for $y=0, 1, \ldots, 35$ already gets close: most of the errors are less than $0.0003$. One Newton-Raphson step is a rational function of this expression of degree 5 (numerator) and 4 (denominator), thereby expressible in terms of 11 parameters derived from the original polynomial. The residuals of this 16-parameter expression range from $-6 \times 10^{-6}$ to $2.7 \times 10^{-7}$, which is close to the precision of the original polynomial coefficients. For $y \ge 4$ the errors are all less than $10^{-7}$, which is as good as one can hope for.
To find this solution in Mathematica, begin by generating the array of solutions for $y=0, 1, \ldots, 35$:
Clear[x, y];
roots = x /. Table[FindRoot[-y + 0.10 + 4.060264 x - 6.226862 x^2 +
48.145864 x^3 - 60.928632 x^4 + 49.848766 x^5, {x, .5}], {y, 0, 35}]
Fit the initial simple model (using some eyeball guesses for the parameters):
Clear[a, b, c, d, e];
model = a Log[b + c y^(1/5)] + d (y - e)^2;
fit = FindFit[roots, model, {{a, .5}, {b, 1}, {c, .1}, {e, 18}, {d, .0001}}, y]
Create a Newton-Raphson step for a function f at the argument a:
Clear[nr];
nr[f_, a_] := (x - f[x]/D[f[x], x]) /. x -> a
Use it to improve the model:
Clear[x, f];
f[u_] := 0.10 + 4.060264 u - 6.226862 u^2 + 48.145864 u^3 - 60.928632 u^4 + 49.848766 u^5 (* the quintic from the FindRoot step above *)
x[z_] := ( nr[f[#] - y + 1 &, model /. fit ]) /. y -> (z + 1)
(The shift to y-1 from y is needed because Mathematica starts indexing at 1, not 0.) The model works well for $1 \le y \le 35$ and exceptionally well for $y \ge 4$.
g = Table[x[y], {y, 1, 36}];
ListPlot[roots - g, PlotRange -> {Full, Full},
PlotStyle -> PointSize[0.015], DataRange -> {0, 35},
AxesLabel -> {"y", "Error"}]
If you need better solutions for $y \lt 4$, you could similarly fit a simple model plus a Newton-Raphson polish to this range of values alone. |
Evaluate $ \int \int_{S} F \cdot n \ dA \ $ by Gauss-divergence theorem | By direct computation we have
$$\oint_S \vec F\cdot\hat n\,dS=\int_0^{2\pi}\int_0^\pi \left.\left(\vec r\cdot \hat r r^2 \right)\right|_{r=3}\sin(\theta)\,d\theta\,d\phi=4\pi (3)^3=108\pi$$
And using the Divergence Theorem, we have
$$\oint_S \vec F\cdot\hat n\,dS=\int_V \nabla \cdot \vec F\,dV=\int_0^{2\pi}\int_0^\pi\int_0^3 (3)\,r^2\,\sin(\theta)\,dr\,d\theta\,d\phi=108\pi$$
as expected! |
Propostion 1.6 on Atiyah's commutative algebra text | The ideal generated by $x$ and $M$ strictly contains $M$ (as $x \notin M$ by assumption). Then, since $M$ is maximal, by definition, any ideal containing $M$ must be the whole ring; thus, $(x,M)$ is the entire ring (as dezdichado said in the comments). |
Question to an exact ODE, integrating factor does not work out | Use the substitution $tx = v$:
$$\frac{v'}{t}-\frac{v}{t^2} = \frac{v^2+1}{2t^2} \implies 2tv' = (v+1)^2$$
from here it is separable with solution
$$x = \frac{2}{Ct-t\log t} - \frac{1}{t}$$ |
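A quick symbolic check of this solution (a sketch; the original ODE is reconstructed from the substitution above as $x'(t) = \frac{t^2x^2+1}{2t^2}$, which is an assumption on my part):

    import sympy as sp

    t, C = sp.symbols('t C', positive=True)
    x = 2 / (C * t - t * sp.log(t)) - 1 / t
    residual = sp.simplify(sp.diff(x, t) - (t**2 * x**2 + 1) / (2 * t**2))
    print(residual)   # expected: 0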
Why is it not sufficient to only check the third condition when verifying equality of functions? | A function $f:X\to Y$ consists of three pieces of information: the domain $X$, the codomain $Y$, and the graph $G_f\subseteq X\times Y$. So formally, it makes sense to define a function not just as its graph, but as the tuple $(X,Y,G_f)$, and two functions $f=(X,Y,G_f)$ and $g=(V,W,G_g)$ are equal iff $X=V$, $Y=W$, and $G_f=G_g$. In words, two functions are equal iff their domains, codomains and graphs are equal. But we get the domain for free by checking the graphs, since the graphs contain a pair $(x,f(x))$ for every $x\in X$, so we can extract the domain from the graph. So we only need to check the graph and the codomain, but not the domain. |
$\lim_{p\rightarrow\infty}||x||_p = ||x||_\infty$ given $||x||_\infty = max(|x_1|,|x_2|)$ | Hint: WLOG, assume $|x_1| \ge |x_2|$. Then, $\left(|x_1|^p+|x_2|^p\right)^{1/p} = |x_1|\left(1+\left(\dfrac{|x_2|}{|x_1|}\right)^p\right)^{1/p}$.
Can you show that this approaches $|x_1|$ as $p \to \infty$? |
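Numerically, with $x = (3, 2)$ (a small sketch):

    import numpy as np

    x = np.array([3.0, 2.0])
    for p in (1, 2, 4, 8, 16, 64, 256):
        print(p, (np.abs(x) ** p).sum() ** (1.0 / p))   # tends to max(|x_1|, |x_2|) = 3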
Comparing Equations to find optimum point | If there existed an $x_0$ with
$$
f'(x_0) = 0 \land g'(x_0) = 0,
$$
then $f(x)$ and $g(x)$ wouldn't be arbitrary; in general you can't optimize both at once.
Solving Ordinary Differential Equation involving trig functions | Hint:
$y(x) = u(x)v(x)$ then
$y'(x) = u'v + v'u $ |
Are simplicial sets equal if their non-degenerate simplices are the same? | It's not quite true that $X^\text{inj}$ (as you defined it) is the semisimplicial set of non-degenerate simplices of $X$. In fact, it has exactly the same elements as $X$. On the other hand, the collection of non-degenerate simplices does not always form a semisimplicial set: this is because there may be some non-degenerate simplex of $X$ that has a face that is degenerate.
There are some statements which are true. For example:
Let $X$ and $Y$ be simplicial sets, and let $f, g : X \to Y$ be morphisms of simplicial sets. If $f (x) = g (x)$ for all non-degenerate simplices $x$ of $X$, then $f = g$.
Another example:
Let $X$ and $Y$ be simplicial sets and let $f : X \to Y$ be a morphism of simplicial sets. If $f$ restricts to a bijection between non-degenerate simplices of $X$ and $Y$, then $f$ is an isomorphism of simplicial sets. |
The linear map that's equipped with a $k[x]$-module, is it fixed or can it vary? Or something else? What's an example? | The $\hat{x}$ represents one particular linear map $V \to V$. Since $A = k[x]$, you can express every element of $A$ as a $k$-linear combination of powers of $x$. Now, the vector space $V$ is an $A$-module precisely when you have a linear map $V \to V$ associated to every element of $A$. Since you know how to do this for the element $x$ (namely, the associated map is $\hat{x}$), you can associate to a polynomial $p(x) \in A$ the map $p(\hat{x})$. |
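For instance (a sketch with $k=\mathbb{R}$, $V=\mathbb{R}^2$, and one hypothetical choice of $\hat{x}$):

    import numpy as np

    x_hat = np.array([[0.0, 1.0],
                      [0.0, 0.0]])   # one particular linear map V -> V

    def act(coeffs, v):
        # p(x) . v, where p(x) = coeffs[0] + coeffs[1] x + coeffs[2] x^2 + ...
        result = np.zeros_like(v)
        power = np.eye(len(v))
        for c in coeffs:
            result = result + c * (power @ v)
            power = power @ x_hat
        return result

    v = np.array([1.0, 2.0])
    print(act([3.0, 5.0], v))   # (3 I + 5 x_hat) v = [13, 6]

The same map $\hat{x}$ is used for every polynomial; only the coefficients vary.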
How do I adjust values for consistency? | You want the standard deviation.
First, define the variance, which is the average of the square of the offset. In your example, the first machine is always at 10, so the offset is zero, and the variance is zero. In the second machine, the offset is either 20-10=10 or 0-10=-10. The square in both cases is $10^2=100$ or $(-10)^2=100$, so the variance is 100.
The standard deviation is the square-root of the variance. In this case it is 0 for the first machine and 10 for the second machine.
Usually, you keep both numbers - the mean and the standard deviation - rather than combine them. So you know the (10,0) machine is more reliable than the (10,10) machine, but both give the same output in the long run. |
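In code (a minimal sketch, using the population variance as described above):

    machine_a = [10, 10, 10, 10]
    machine_b = [20, 0, 20, 0]

    def mean_and_sd(xs):
        mean = sum(xs) / len(xs)
        variance = sum((x - mean) ** 2 for x in xs) / len(xs)
        return mean, variance ** 0.5

    print(mean_and_sd(machine_a))   # (10.0, 0.0)
    print(mean_and_sd(machine_b))   # (10.0, 10.0)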
Question regarding an example of disjoint convex sets that cannot be separated | Disjointness of $\Gamma$ and $\Sigma$ is clear (also for the second part I suppose $\Sigma$ does not allow vectors with negative $x_1$ component). Convexity of $\Sigma$ is clear. Convexity of $\Gamma$: if $x, y \in \Gamma$, $\alpha, \beta \in [0,1]$ with $\alpha+\beta=1$
$$\alpha x_1 + \beta y_1 ≥ n\left(\alpha |x_n-1/n^{2/3}| + \beta |y_n -1/n^{2/3}|\right)≥ n\cdot\left| \alpha x_n + \beta y_n -(\alpha + \beta)/n^{2/3} \right|$$
and $\alpha x + \beta y \in \Gamma$ follows from $\alpha + \beta = 1$.
Now for the second part:
Suppose there exists such a $g$. Note that any $\sigma \in \Sigma$ can be written as $\sigma_1 e_1$ with $\sigma_1≥0$. So $g(\sigma)=\sigma_1 g(e_1)$. It follows that $g(e_1)$ cannot be positive, as otherwise we could choose $\sigma_1$ to make $g(\sigma)$ as big as we like, certainly bigger than some value of $g(\gamma)$.
Let $\gamma := \sum_{n\neq1} 1/n^{2/3} e_n \in \Gamma$ and denote $g(\gamma)=c$. This must be positive, as otherwise $g(0)=0>c$. From this positivity it follows that $g(e_n)\neq 0$ for at least one $n$ (linear functionals are continuous).
If $x \in \Gamma$ then $n |x_n -1/n^{2/3}|$ must be a bounded function, so write $x=a e_1 + \gamma + \sum_{n\neq 1} \frac{f(n)}n e_n $, in which $|f(n)|$ is bounded and $a≥\sup_n|f(n)|$. Then
$$g(x)≤c+\sum_{n\neq1}\frac{f(n)}ng(e_n),$$
since $a\,g(e_1)≤0$. But remember that $g(e_{\tilde n})\neq0$ must hold for some $\tilde n$. Take then $$f(n)=\begin{cases}-\frac{\tilde n\ c}{g(e_n)} & n=\tilde n\\0 & n\neq \tilde n \end{cases}$$
You then get $g(x)≤0=g(0)$. |
Infinite product formula for a complex function | Your answer looks right.
It follows from the following theorem (Functions of One Complex Variable, John B Conway, Indian Edition, page number 169).
Theorem 5.12: Let $\{a_n\}$ be a sequence in $\mathbb{C}$ such that $\lim |a_n| = \infty$ and $a_n \neq 0$ for all $n \ge 1$. If
$\{p_n\}$ is any sequence of integers such that
$$ \sum_{n=1}^{\infty} \left(\frac{r}{|a_n|}\right)^{p_n+1} \lt \infty$$
for all $r \gt 0$, then
$$ f(z) = \prod_{n=1}^{\infty} E_{p_n}\left(\frac{z}{a_n}\right)$$
converges and $f$ is an entire function with zeroes only at the points
$a_n$. If $z_0$ occurs in $\{a_n\}$ exactly $m$ times, then $f$ has a
zero $z_0$ of multiplicity $m$.
You have chosen $p_n = n$ which works.
This theorem is used to prove the Weierstrass Factorization theorem.
difference between linear, semilinear and quasiliner PDE's | I think this will help you to understand the PDE $:$
Linear PDE: $a(x,y)u_x+b(x,y)u_y+c(x,y)u=f(x,y)$
Semi-linear PDE: $a(x,y)u_x+b(x,y)u_y=f(x,y,u)$
Quasi-linear PDE: $a(x,y,u)u_x+b(x,y,u)u_y=f(x,y,u)$ |
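For instance, $u_x+u_y+u=0$ is linear, $u_x+u_y=u^2$ is semi-linear (the nonlinearity involves only the undifferentiated $u$), and Burgers' equation $u\,u_x+u_y=0$ is quasi-linear (a coefficient of a highest-order derivative depends on $u$).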
Hard Combinatorical Geometric Problem on Intersecting Circles | Given a circle $C$, denote by $N(C)$ the number of unique intersection points on $C$. Also, given an intersection point $p$, denote by $m(p)$ the number of circles that go through $p$.
Let $p$ be an intersection point and $C$ a circle going through $p$. Then
$$m(p)\leq N(C).$$
Indeed, there is an injection from the set of circles going through $p$ other than $C$ to the set of intersection points lying on $C$ other than $p$, since every other circle going through $p$ will intersect $C$ in another point (no tangencies, and through any two given points pass at most two of the circles).
Now consider the sum $$S=\sum_C \sum_{p\in C} \dfrac{1}{N(C)}$$
First, since $\sum_{p\in C} 1 = N(C)$ for every circle $C$, one obviously has $$S=\sum_C 1=n.$$
On the other hand, $$S=\sum_p \sum_{C\ \ni p} \dfrac{1}{N(C)}\leq \sum_p \sum_{C\ \ni p} \dfrac{1}{m(p)}=\sum_p 1$$
and the last right-hand side is the number of intersection points, which concludes the proof. |
Prove that $\left(A+B\right)^{2}\nmid A^{2n+1}+B^{2n+1}$ | The statement is false. We show how to find counterexamples.
Since we know that $$A^{2n+1}+B^{2n+1}=(A+B)(A^{2n}-A^{2n-1}B+\dots-AB^{2n-1}+B^{2n}),$$ it is equivalent to finding $A,B$ such that $$A+B\mid A^{2n}-A^{2n-1}B+\dots-AB^{2n-1}+B^{2n},$$ which is the same as $$A^{2n}-A^{2n-1}B+\dots-AB^{2n-1}+B^{2n}\equiv 0\pmod{A+B}.$$
However, we have from $A\equiv -B\pmod{A+B}$ that
\begin{align*}
A^{2n}-A^{2n-1}B+\dots-AB^{2n-1}+B^{2n}&\equiv (-B)^{2n}-(-B)^{2n-1}B+\dots-(-B)B^{2n-1}+B^{2n}\pmod{A+B}\\
&\equiv (2n+1)\cdot B^{2n}\pmod{A+B}.
\end{align*}
As such, we see that counterexamples can be constructed by selecting the value of $n$ such that $A+B\mid 2n+1$. For example, $(A,B,n)=(3,4,3)$ is a counterexample because $$(3+4)^2=49\mid 18571=3^7+4^7.$$ |
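A one-line check of this counterexample (sketch):

    A, B, n = 3, 4, 3
    print((A ** (2 * n + 1) + B ** (2 * n + 1)) % (A + B) ** 2)   # 0, so 49 divides 3^7 + 4^7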
Showing that $\frac{x^{2x}}{(x+1)^{x+1}}\rightarrow +\infty$ as $x\rightarrow +\infty$ | Hint:$$\frac{x^{2x}}{(x+1)^{x+1}}=x^{2x-(x+1)}\frac{1}{\left(1+ \frac{1}{x} \right)^{x+1}} \sim\frac{x^{x-1}}{e} $$ |
Solving mathematical economics problem | I don't know much about economics, so take this with a grain of salt!
I assume only the interest from the loan gets taxed. I also assume that backpayment of the loan and the interest are done at the same time after one year.
(a) After one year, you have your $\$350,000$ loan paid back plus $\frac{20}{100}\times\$350,000 = \$70,000$ as interest. The interest gets taxed, however, so you lose $\frac13\times\$70,000=\$23,333$ to taxes. So after taxes you have $\$350,000+\$70,000-\$23,333=\$396,667$ in cash. That makes what I would consider your 'real earning' to be $\$396,667-\$350,000=\$46,667$. However, due to inflation, the buying power of each of those Dollars is less than it was a year before. The $12\%$ inflation means that now you need to spend $\$1.12$ to buy what you could buy for $\$1.00$ a year ago. So your cash now is worth as much as $\frac{\$396,667}{1.12} = \$354,167$ was a year ago. So your buying power, compared to one year ago, has increased by a factor of
$$\frac{\$354,167}{\$350,000}=1.0119\ldots,$$
so just a wee bit more than one percent. That's not surprising: your $20\%$ interest was only $13.3\%$ after taxes, and that got almost eaten up by the inflation.
(b) If you get no interest on the loan, you haven't earned anything, so you don't need to pay taxes on it. So it is the same as if it had been in your home all the time. As before, it is worth less after that one year, due to the assumed inflation rate it is now worth $\frac{\$350,000}{1.05}=\$333,333$. Your buying power has thus increased by a factor of
$$\frac{\$333,333}{\$350,000}=0.952\ldots,$$
or rather decreased by almost $5\%$. |
sum of digit of a large power | $333^{333} < 1000^{333} = 10^{999}$ so our number has less than $1000$ digits in it.
$P(333^{333})$ is then somewhere strictly between $0$ and $1000\cdot 9 = 9000$, since the largest sum would have occurred if we really had $1000$ digits and they were all $9$'s, and the sum clearly cannot be zero as the only number with digit sum equal to zero is zero itself.
We then also have $P(P(333^{333}))\leq P(8999) = 35$, since $8999$ is the number with the largest digit sum among numbers less than $9000$.
Similarly, $P(P(P(333^{333})))\leq P(29) = 11$, since the number with the largest possible sum of digits among numbers at most $35$ is $29$.
So... this all tells us that $P(P(P(333^{333})))\leq 11$.
Now... recall that the sum of the digits of a number has the same remainder as the original number when divided by $9$. Clearly, $333^{333}$ is a multiple of $9$ so $P(P(P(333^{333})))$ must be a multiple of $9$ as well.
As $9$ is the only multiple of nine strictly between $0$ and $11$ (and the iterated digit sum of a nonzero number is nonzero), it follows that the digit sum is exactly $9$.
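Python's arbitrary-precision integers make this easy to confirm directly (a sketch):

    def P(n):
        return sum(int(d) for d in str(n))

    print(P(P(P(333 ** 333))))   # 9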
Simple, Speed Rate Multiple Choice Question. | To check your answer we can try another approach.
Suppose that the speed of the slower person is $x$ per hr; then the speed of the faster person is $3x$ per hr.
The amount of job is $3(x+3x)=12x$
The time it takes for the slower person to complete the job is therefore $\frac{12x}{x}=12$ hrs
I think you got the right answer, probably something wrong with the question. |
For positive invertible operators $C\leq T$ on a Hilbert space, does it follow that $T^{-1}\leq C^{-1}$? | Note that $T \ge C$ iff $C^{-1/2} T C^{-1/2} \ge I$ iff $C^{1/2} T^{-1} C^{1/2} = (C^{-1/2} T C^{-1/2})^{-1} \le I$ iff $T^{-1} \le C^{-1}$ |
Show that if either (a) G is not 2-connected, or , (b) G is bipartite with bipartition (X, Y) where IXI different to lYI, then" G is non hamiltonian. | To show that not 2-connected $\implies G$ not Hamiltonian, you can equivalently show that $G$ Hamiltonian $\implies$ 2-connected. To do this, take a graph with a Hamiltonian cycle, and think about why removing any one edge still leaves a path between any two vertices.
For (b) $\implies$ $G$ not Hamiltonian, assume there were a Hamiltonian cycle $v_1,v_2,\dots,v_n,v_1$, where without loss of generality $v_1$ is in the $X$ part. The nature of bipartite graphs is that the vertices in the path would alternate between $X$ and $Y$, so that $v_1,v_3,v_5,\dots$ are in $X$ and $v_2,v_4,v_6,\dots$ are in $Y$. In particular, this would imply $n$ was even (since $v_1$ and $v_n$ are next to each other, so $v_n$ must be in $Y$), which would mean that there are the same number of vertices in the lists $v_1,v_3,\dots$ and $v_2,v_4,\dots$, contradicting (b).
Confusing double angle identity | \begin{align*}
\cos^4(x) &= \left(\frac{e^{ix}+e^{-ix}}{2}\right)^4\\
&= \frac{e^{4ix} + 4e^{2ix} + 6 + 4e^{-2ix} + e^{-4ix}}{16}\\
&= \frac{3}{8} + \frac{1}{2} \frac{e^{2ix} + e^{-2ix}}{2} + \frac{1}{8} \frac{e^{4ix}+e^{-4ix}}{2}\\
&= \frac{3}{8} + \frac{1}{2} \cos(2x) + \frac{1}{8} \cos(4x)
\end{align*} |
Geodesic distance on $U\setminus(\text{closed enumerable set})$ coincides with the one on $U$ | (1) Write $D=\{x_i\}$ and $D_\alpha=\{x_1,\cdots, x_\alpha\}$.
For $D_\alpha$ we do the following: given $x,\ y\in U$ and $\epsilon>0$, there is a piecewise-linear path with vertices $z_1=x,\ z_2,\ \cdots,\ z_{k+1}=y$ in $U$ such that $d_U(x,y) +\epsilon > \sum_{i=1}^k\ |z_i-z_{i+1}|$.
Hence there are $y_i\in U-D_\alpha$ with $|z_i-y_i|<\frac{\epsilon}{2^i}$ (keeping an endpoint fixed whenever it already avoids $D_\alpha$). Hence $$ \sum_i\ |y_i-y_{i+1}| <\sum_i\ |z_i-z_{i+1}| + 2\epsilon <d_U(x,y) + 3\epsilon $$
Hence we have a piecewise-linear path $c_\alpha := \bigcup_i\ [ y_iy_{i+1}]$ avoiding $D_\alpha$.
(2) We take a limit over the $c_\alpha$.
How many rectangles larger than $2 \times 2$ can be made in a $5 \times 5$ grid with a hole in the center? | I assume a rectangle must have height and base at least $2$.
There are $10\times 10$ rectangles total.
Of these, the ones that are not larger than $2\times 2$ are only the ones that are exactly $2\times 2$, and there are $16$ of them.
This means there are $84$ rectangles larger than $2\times 2$.
We just have to subtract the ones that have the center as a vertex. Assume first that the opposite corner is in the top left; then there are $3$ options for the opposite corner.
So if we don't assume it is in the top left, there are $4\times 3$ options.
Hence there are $84-12 = 72$ rectangles larger than $2\times 2$ that do not have a vertex in the center. |
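A brute-force count over the $5\times 5$ grid of points confirms this (a sketch; here "larger than $2\times 2$" is read as excluding exactly the $2\times 2$ ones, and the hole is taken to be the centre point $(3,3)$):

    from itertools import combinations

    count = 0
    for x1, x2 in combinations(range(1, 6), 2):
        for y1, y2 in combinations(range(1, 6), 2):
            if (x2 - x1, y2 - y1) == (1, 1):
                continue                      # exactly 2x2: not "larger"
            if (3, 3) in {(x1, y1), (x1, y2), (x2, y1), (x2, y2)}:
                continue                      # has the hole as a vertex
            count += 1
    print(count)   # 72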
Calculate the eigenvalues of the symmetric part of $A$ form the eigenvalues of $A$ and $A^T$ without calculating the symmetric part of $A$? | $A+A^T=A+A^{-1}$ has the eigenvalues $\lambda+\frac{1}{\lambda}$. Since $|\lambda|=1$ it is the same as $\lambda+\bar\lambda=2 \mathrm{Re}\,\lambda$. |
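For example, with a rotation matrix (a sketch; this $A$ is orthogonal, so $A^T=A^{-1}$ and its eigenvalues $e^{\pm i\theta}$ satisfy $|\lambda|=1$):

    import numpy as np

    theta = 0.7
    A = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    print(np.linalg.eigvalsh(A + A.T))   # both eigenvalues equal 2 cos(theta)
    print(2 * np.cos(theta))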
Prove that the sequence ${f_n(z)}_{n=1}^\infty$ converges for all $z ∈ \mathbb{D}.$ | The family $\mathscr{F} = \{ f_n : n \in \mathbb{Z}^+\}$ is a normal family by Montel's theorem, since the functions take values in the bounded set $\mathbb{D}$. Thus every sequence in $\mathscr{F}$ has a compactly convergent subsequence.
Now, let $\sigma, \tau \colon \mathbb{Z}^+ \to \mathbb{Z}^+$ be two strictly increasing maps such that the subsequences $(f_{\sigma(n)})$ and $(f_{\tau(n)})$ of the original sequence converge compactly. Denote their respective limit functions by $f^{\sigma}$ and $f^{\tau}$.
Since for all $k \in \mathbb{N}\setminus\{0,1\}$ the sequence $f_n(1/k)$ is assumed to converge, it follows that
$$f^\sigma\left(\tfrac{1}{k}\right) = f^\tau\left(\tfrac{1}{k}\right)$$
for all $k \in \mathbb{N}\setminus\{0,1\}$, and since $(1/k)$ has a limit point in $\mathbb{D}$ (namely $0$), the identity theorem asserts that $f^\sigma \equiv f^\tau$.
Thus all convergent subsequences converge to the same limit.
This is a general criterion for the convergence of a sequence:
A sequence $(x_n)$ in a topological space $X$ converges to a point $x\in X$ if and only if every subsequence $(x_{n_k})$ of $(x_n)$ has a further subsequence $\left(x_{n_{k_m}}\right)$ that converges to $x$.
So we not only have the pointwise convergence of $f_n$, we even have the locally uniform convergence. |
If $H$ is a subgroup of $\mathbb{R}$ and there is $h\in H$, $0<h<\epsilon$ then $H$ is dense | I'd do it slightly differently, I hope you like it.
We have to prove for every open interval $(a,b)$ there is $h\in H$ with $h\in (a,b)$.
Take $h\in H$ with $0<h<b-a$.
Notice that if $n<\frac{b}{h}$ then $nh<b$.
Let $m$ be the largest integer with $m<\frac{b}{h}$ (it exists because the set of such integers is nonempty and bounded above in $\mathbb Z$).
Suppose that $mh\leq a$, then $(m+1)h=mh+h\leq a+h<a+(b-a)=b$.
Contradicting the maximality of $m$.
We conclude $mh>a$ and $mh<b$. So $mh\in (a,b)$ and $mh\in H$. $\blacksquare$