title | upvoted_answer
---|---|
An $n$th root inequality: $\sqrt[n]{n} < 1 + \sqrt{2/n}$ | $$\left(1+\sqrt{\frac{2}{n}}\right)^n>1+\binom{n}{2}\cdot\frac{2}{n}=n$$ |
How does $\left|\sqrt{2+3x_1} - \sqrt{2+3x_2}\right|$ become $\left| \frac{3(x_1-x_2)}{\sqrt{2+3x_1} + \sqrt{2+3x_2}} \right|$? | Let $A_i = \sqrt{2+3x_i}$ for $i=1,2$ as suggested by Don Thousand in the comments. Then your original expression is $|A_1 - A_2|$.
Notice, then:
$$A_1 - A_2 = (A_1-A_2) \cdot \underbrace{\frac{A_1 + A_2}{A_1 + A_2}}_{=1} = \frac{A_1^2 - A_2^2}{A_1+A_2}$$
$A_i^2 = 2+3x_i$ for both $i$ indices and thus the difference in the numerator is $2+3x_1 - 2 - 3x_2$, or, more simply, $3(x_1 - x_2)$.
Then
$$A_1 - A_2 = \cdots = \frac{A_1^2 - A_2^2}{A_1+A_2} = \frac{3(x_1 - x_2)}{\sqrt{2+3x_1} + \sqrt{2+3x_2}}$$
Throw in the absolute values and you're done! |
Why we always put log() before the joint pdf when we use MLE(Maximum likelihood Estimation)? | Some reasons why taking the logarithm of the likelihood can be helpful include:
It turns the likelihood into a sum instead of a product.
In many cases, the individual terms of the sum are logs of exponentials which simplify.
In many cases, the log likelihood is a concave function while the likelihood itself is not concave. When this is the case, convex optimization techniques can be used to find the MLE.
In numerical work, the likelihood is often nonzero but so tiny that it cannot be represented in double precision floating point. For example, suppose that the likelihood is $1.0 \times 10^{-500}$. This is too small to represent in double precision but could easily be the likelihood in a situation where we had 500 data points each with likelihood 0.1. The logarithmic transformation reduces the range of values that have to be dealt with. |
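The underflow point is easy to demonstrate in a few lines of Python (a sketch; the 500 likelihood values of $0.1$ are the made-up example from the answer):

```python
import math

# 500 i.i.d. observations, each with likelihood 0.1 under some model
likelihoods = [0.1] * 500

# The raw product underflows to exactly 0.0 in double precision
product = math.prod(likelihoods)
print(product)            # 0.0

# The log-likelihood is perfectly representable
log_likelihood = sum(math.log(p) for p in likelihoods)
print(log_likelihood)     # about -1151.3
```

This is why numerical MLE routines maximize the log-likelihood rather than the likelihood itself.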
(Finitely) decimal expressible real numbers between [0,1] countable? | Your intuition is correct: the set of "decimal expressible" real numbers in [0, 1] such that the decimal expression has only finitely many nonzero digits is countable, and the "injective function" you gave works.
However, if you allow decimal expressions with an infinite number of decimals (e.g. 0.333...), then this no longer works. First of all, which natural number are you going to associate with 0.333...? In fact, if you allow decimal expressions with an infinite number of decimals, then the set you described becomes uncountable; for the justification of that, I'll point you to Cantor's Diagonal Argument.
To answer your three questions:
Yes.
Yes, all you needed to do is associate every element of your set with an element of the natural numbers (and vice versa), which you've done. If I give you a decimal expression with finitely many terms, you can give me the unique natural number that you've associated with that decimal expression, and if I give you a natural number, you can give me the unique (finite) decimal expression it corresponds to. This is exactly what it means for a set to be countable.
No; again, see Cantor's Diagonal Argument. |
Complex symmetric polynomials with alternate conjugates | I don't know that these polynomials have been studied, and that may be because they don't have the "nice" properties which make the symmetric polynomials appealing and useful:
these are not symmetrical;
there is no direct relation between $x_i$ and the roots of an associated function.
In the $n=2$ case for example $f_{21}=x_1+x_2$ and $f_{22}=x_1 \overline{x_2}$ give after simple manipulations:
$$|x_1|^2 - \overline{f_{21}} \; x_1 + f_{22} = 0$$
$$|x_2|^2 - \overline{f_{21}} \; x_2 + \overline{f_{22}} = 0$$
where the equations are not polynomial, and are not the same for the two variables. |
Any references for Wall's Lemma of exactness? | See
Wall, C. T. C. "On the exactness of interlocking sequences." Enseignement math 12 (1966): 95-100
https://www.e-periodica.ch/cntmng?pid=ens-001:1966:12::36
Hardie, K. A., and K. H. Kamps. "Exact sequence interlocking and free homotopy theory." Cahiers de topologie et géométrie différentielle catégoriques 26.1 (1985): 3-31
http://www.numdam.org/article/CTGDC_1985__26_1_3_0.pdf |
Checking that a torsion-free abelian group has finite rank | (Edited to reflect that $\mathbb Z_{l}$ denotes the $l$-adic integers instead of $\mathbb Z/l\mathbb Z$.)
No. Let $G$ be the additive group of the ring $\mathbb Z[1/p]$, where $p \neq l$ is a prime. Then you can check that $\mathbb Z[1/p] \otimes_{\mathbb Z} \mathbb Z_l = \mathbb Z_l$, essentially because $1/p \in \mathbb Z_l$. This is free of rank one over $\mathbb Z_l$, even though $\mathbb Z[1/p]$ is not finitely-generated as an abelian group. |
conversion of discrete to continuous | Actually this dynamics is solvable, giving $N_j=(1+k)^jN_0$ for every integer $j$, hence $N_j=n(j)$ where $n(t)=(1+k)^tN_0$ for every real number $t$. This is $n(t)=\mathrm e^{at}N_0$ for $a=\ln(1+k)$ hence $n'(t)=a\mathrm e^{at}N_0$, that is, $n'(t)=\ln(1+k)\cdot n(t)$. |
Fitness based ranking | [assuming I understood what you meant] use Exponential ranking: $p_j = \frac{e^{- \alpha x_j}}{\sum_j e^{- \alpha x_j}}$
where $\alpha$ is a coefficient ($>1$ if you need to increase the 'weight' of the individuals with the lower values) |
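A minimal sketch of the suggested scheme (the function name and fitness values are made up; note that with the minus sign in the exponent, lower $x_j$ gets higher probability):

```python
import math

def exponential_ranking(xs, alpha=2.0):
    # p_j = exp(-alpha * x_j) / sum_j exp(-alpha * x_j)
    weights = [math.exp(-alpha * x) for x in xs]
    total = sum(weights)
    return [w / total for w in weights]

fitness = [0.9, 0.5, 0.1]          # lower value = preferred here
probs = exponential_ranking(fitness)
print(probs)                       # probabilities sum to 1, last one largest
```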
Indefinite Integral -- exponential and arctan | I'm pretty sure that this integral doesn't have a closed-form expression in terms of elementary functions and operations. You could let $$f(x)=\int_0^x \exp\left({\tan^{-1}\left(1+ \frac{t}{t^2+1}\right)}\right)\mathrm{d}t+C$$ Then $$\frac{d}{dx} f(x)=\exp\left({\tan^{-1}\left(1+ \frac{x}{x^2+1}\right)}\right)$$I know that this has come way too late to be of any help to you in your exam. |
Does $\int x\sqrt{x-1}\arctan(x)\, \text{d}x$ have a closed form? | Hint. In the last rational function, the denominator is quadratic in $t^2.$ So you can factor into two quadratics and express the fraction in parts. This would require manipulating complex entities, but the procedures are the same.
Another way may be to try a substitution of the form $$x=at+\frac bt,$$ where $a$ and $b$ are suitable constants. |
Boundedness of a modular form in $\mathbb{H}$ | Hint: Since $h$ is continuous, you simply have to show that it stays bounded as $y$ tends to infinity (uniformly in $x$ -- here $z=x+iy$). Consider the expansion $f(z) = \sum_{n\geq 1} a_n {\rm e}^{2\pi i n z}$, which starts at $n=1$ since $f$ is cuspidal. As $y\to\infty$, what happens for each terms in the series ? How fast do they decay ? |
Prove that $\frac{\binom{2n}{n}}{n+1}$ is an integer. | As stated in the comments, these are the Catalan numbers. On the Wikipedia article, some combinatorial interpretations are given. My favourite proof is to solve Bertrand's ballot problem and to use it to find a formula for the Catalan recurrence. This method is presented on p.13 of Four Proofs of the Ballot Theorem by Renault.
If you look up Richard Stanley, you'll find that he has been collecting combinatorial problems whose solutions are the Catalan numbers for years. He is famous, among other reasons, for the dozens of exercises in his book Enumerative Combinatorics that boil down to the Catalan numbers. Finally, in 2015, he published a book on the subject, Catalan Numbers, which has over 200 objects that can be counted using the Catalan numbers. On this web page, he wrote that
The material I have gathered on Catalan numbers was collected into a monograph. This will be the final version of my material on Catalan numbers. The current web page will not be updated. |
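Both the integrality of $\binom{2n}{n}/(n+1)$ and the Catalan recurrence $C_{n+1}=\sum_{i=0}^{n}C_iC_{n-i}$ that the ballot-problem approach produces are easy to check numerically (a sketch):

```python
from math import comb

def catalan(n):
    q, r = divmod(comb(2 * n, n), n + 1)
    assert r == 0               # binom(2n, n) is always divisible by n+1
    return q

C = [catalan(n) for n in range(12)]
print(C[:6])                    # [1, 1, 2, 5, 14, 42]

# the Catalan recurrence
for n in range(11):
    assert C[n + 1] == sum(C[i] * C[n - i] for i in range(n + 1))
```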
Infinite dimensional Vector Spaces and Bases of Quotients | No. If $V$ is any vector space with underlying set $S$, then it is a quotient of $K^{\oplus S}$, which has $S$ as a basis. But the existence of bases in arbitrary vector spaces is equivalent to AC (by a result by Andreas Blass). |
Finding $\lim_{x\to1}f(x)$ given $\lim_{x\to1}\frac{f(x)-8}{x-1}=10$ | Here's another solution, since I'm not too big a fan of symbolic limits.
Since
\begin{align}
\lim_{x\rightarrow 1}\frac{f(x)-8}{x-1} = 10
\end{align}
then it follows
\begin{align}
\lim_{x\rightarrow 1} \frac{f(x)-8-10(x-1)}{x-1} = 0.
\end{align}
which means when $x$ is close to $1$ we have that
\begin{align}
\left|\frac{f(x)-8 -10(x-1)}{x-1} \right| \leq 1\ \ \Rightarrow\ \ |f(x)-8-10(x-1)| \leq |x-1|.
\end{align}
Using triangle inequality, we have that
\begin{align}
|f(x)-8| \leq 11|x-1|.
\end{align}
Thus, it follows $\lim_{x\rightarrow 1} f(x) = 8$. |
Given $L_1 =\{w: |w| \bmod 3 >0\}$ and $L_2 =\{w: |w| \bmod 5 =0\}$, what is $L=L_1 \cap L_2$ and a grammar that produces it? | The first part of your question boils down to a standard exercise in arithmetic.
You can use the Chinese remainder theorem to show that the conjunction of the conditions $|w| \bmod 3 > 0$ and $|w| \bmod 5 = 0$ is equivalent to
$${|w| \bmod {15} = 5} \quad \text{or}\quad |w| \bmod {15} = 10 $$
If $A$ is the alphabet, the corresponding language is $(A^{15})^*(A^5 + A^{10})$.
Writing a grammar for this language should now be an easy, if somewhat tedious, exercise. |
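The arithmetic reduction can be sanity-checked by brute force over word lengths (a sketch):

```python
# |w| mod 3 > 0 and |w| mod 5 == 0  <=>  |w| mod 15 in {5, 10}
for length in range(200):
    original = (length % 3 > 0) and (length % 5 == 0)
    reduced = length % 15 in (5, 10)
    assert original == reduced
print("conditions agree for all lengths checked")
```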
How can I determine this function? | $$
f(x) = \left\{
\begin{array}{ll}
110004 & \quad x = -2 \\
68 & \quad x = -\frac{1}{2} \\
4 & \quad x = \frac{1}{2} \\
110011 & \quad x = \frac{3}{2} \\
7 & \quad x = 2 \\
75 & \quad x = 3 \\
0 & \quad x\neq-2,-\frac{1}{2},\frac{1}{2},\frac{3}{2},2,3
\end{array}
\right.
$$ |
Bounded function with bounded second derivative imply bounded first derivative | First, it's enough to consider the case $n=1$: Say $v\in\Bbb R^n$ with $||v||=1$ and let $g(t)=f(t)\cdot v$. Then $|g'|\le A$ and $|g''(t)|\le B$; if we show that $|g'(t)|\le C$ then $|f'(t)\cdot v|\le C$ for all $v$ with $||v||=1$, hence $|f'(t)|\le C$.
Taylor's Theorem shows that for $t\in (t_0-\alpha,t_0+\alpha)$ we have $$f(t)=P(t)+R(t)$$where $$P(t)=f(t_0)+(t-t_0)f'(t_0)$$and $$|R(t)|\le\frac12 B(t-t_0)^2<\frac12 B\alpha^2.$$
So $$2A\ge|f(t)-f(t_0)|=|(t-t_0)f'(t_0)+R(t)|
\ge|t-t_0||f'(t_0)|-\frac12 B\alpha^2$$Choosing $t$ close to $t_0\pm\alpha$ now gives $$2A\ge\alpha|f'(t_0)|-\frac12B\alpha^2,$$or$$|f'(t_0)|\le2A/\alpha+\frac12 B\alpha.$$ |
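A remark not in the original answer: if $f$ is defined on all of $\mathbb R$, so that $\alpha$ can be chosen freely, minimizing the bound $2A/\alpha+\tfrac12 B\alpha$ over $\alpha$ gives the classical Landau inequality:

```latex
\frac{d}{d\alpha}\left(\frac{2A}{\alpha}+\frac{B\alpha}{2}\right)=0
\;\Longrightarrow\; \alpha = 2\sqrt{A/B},
\qquad
|f'(t_0)| \le \frac{2A}{2\sqrt{A/B}} + \frac{B}{2}\cdot 2\sqrt{A/B}
           = \sqrt{AB}+\sqrt{AB} = 2\sqrt{AB}.
```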
Some very particular strictly ordered sequence of numbers | The Catalan number $Cn$ is the number of monotonic lattice paths along the edges of a grid with $n × n$ square cells, which do not pass above the diagonal. A monotonic path is one which starts in the lower left corner, finishes in the upper right corner, and consists entirely of edges pointing rightwards or upwards. Counting such paths is equivalent to counting our sequence as I show in the following picture:
Indeed we have $C_5=42$. |
Counting minimum elements needed such that their sum covers the whole finite space. | You can solve the problem via integer linear programming as follows. For $i\in \{0,\dots,n-1\}$, let binary decision variable $x_i$ indicate whether $i$ is selected. For $0 \le i < j \le n-1$, let binary decision variable $y_{i,j}$ represent the product $x_i x_j$. The problem is to minimize $\sum_i x_i$ subject to
\begin{align}
\sum_{\substack{0 \le i < j \le n-1:\\ i + j = k}} y_{i,j} &\ge 1 &&\text{for $k \in \{1,\dots,n-1\}$} \tag1 \\
y_{i,j} &\le x_i &&\text{for $0 \le i < j \le n-1$} \tag2 \\
y_{i,j} &\le x_j &&\text{for $0 \le i < j \le n-1$} \tag3
\end{align}
Constraint $(1)$ forces each sum $k$ to be covered.
Constraints $(2)$ and $(3)$ enforce the logical implication $y_{i,j} \implies (x_i \land x_j)$.
The values for $n \in \{1,\dots,50\}$ are
$$
\begin{matrix}
0 &2 &3 &3 &4 &4 &4 &5 &5 &5\\
6 &6 &6 &6 &7 &7 &7 &7 &8 &8 \\
8 &8 &8 &9 &9 &9 &9 &9 &10 &10 \\
10 &10 &10 &10 &11 &11 &11 &11 &11 &11\\
11 &12 &12 &12 &12 &12 &12 &12 &13 &13
\end{matrix}
$$
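For small $n$ the table above can be reproduced without an ILP solver by brute force over subsets (a sketch; this search is exponential, so it is only practical for small $n$):

```python
from itertools import combinations

def min_cover(n):
    # smallest S ⊆ {0,...,n-1} such that every k in {1,...,n-1}
    # equals i + j for some i < j in S
    targets = set(range(1, n))
    for size in range(n + 1):
        for S in combinations(range(n), size):
            sums = {i + j for i, j in combinations(S, 2)}
            if targets <= sums:
                return size
    return None

print([min_cover(n) for n in range(1, 11)])  # [0, 2, 3, 3, 4, 4, 4, 5, 5, 5]
```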
If you change the condition to $(i+j) \mod n = k$ in constraint $(1)$, the values for $n \in\{1,\dots,50\}$ are instead
\begin{matrix}
0 &2 &3 &3 &4 &4 &4 &5 &5 &5 \\
5 &6 &6 &6 &7 &7 &7 &7 &7 &7 \\
8 &8 &8 &8 &8 &9 &9 &9 &9 &9 \\
9 &10 &10 &10 &10 &10 &10 &11 &11 &11 \\
11 &11 &11 &11 &12 &12 &12 &12 &12 &12
\end{matrix} |
Why are $p$-adic numbers formal series? | That is because of how the inverse limit is defined: an element in $\mathbf Z_p$ is a sequence of congruence classes
$$x=(x_n\bmod p^n)_{n\ge 1}\quad\text{s. t. } \;\forall n,\;x_{n+1}\equiv x_n\mod p^n. $$
Initially, $x_1\in\mathbf Z/p\mathbf Z$ is represented by an integer in $[0,p)$, and from these relations, it is easy to deduce by induction that for all $n$, $x_n$ can be written as $\;a_0+a_1p+\dots+a_{n-1}p^{n-1}$, where $\;0\le a_i<p$ for each $i$. |
Response of stationary distribution to perturbation of a stochastic matrix | Assume that $P(x)$ has only one eigenvalue equal to $1$. Let $q(x)$ be the unique unit-norm leading eigenvector of $P(x)$ for all $x$. For convenience, I'll drop the "$(x)$" in the computations.
We require $Pq = q$, i.e. $(I-P)q = 0$ and $q^Tq = 1$ for all $x$.
Differentiation yields: $(I-P)\dfrac{dq}{dx} - \dfrac{dP}{dx}q = 0$ and $q^T\dfrac{dq}{dx} + \left(\dfrac{dq}{dx}\right)^Tq = 0$.
These can be rewritten as (1) $(I-P)\dfrac{dq}{dx} = \dfrac{dP}{dx}q$ and (2) $q^T\dfrac{dq}{dx} = 0$.
By assumption, $P$ has only one eigenvalue equal to $1$, so $I-P$ has only one $0$ eigenvalue. Thus, $I-P$ has a one dimensional nullspace, namely $\text{span}(q)$. So, if we can find one solution $\dfrac{dq}{dx} = v$ to (1), then all solutions to (1) will be of the form $\dfrac{dq}{dx} = v+tq$ for some $t \in \mathbb{R}$. Using the pseudoinverse, one solution is $\dfrac{dq}{dx} = (I-P)^+\dfrac{dP}{dx}q$. Therefore, the solutions to (1) are all in the form $\dfrac{dq}{dx} = (I-P)^+\dfrac{dP}{dx}q + tq$ for some $t \in \mathbb{R}$.
Condition (2) requires that $\dfrac{dq}{dx}$ is orthogonal to $q$. Since the solution $(I-P)^+\dfrac{dP}{dx}q$ is already orthogonal to the nullspace of $I-P$, which includes $q$, this is the only solution of (1) which also satisfies (2).
Therefore, the solution to (1) and (2) is: $\dfrac{dq}{dx} = (I-P)^+\dfrac{dP}{dx}q$.
If we instead wish to normalize $q$ such that $\sum_{i}q_i = 1$ instead of $q^Tq = 1$, then condition (2) becomes $\displaystyle\sum_{i}\dfrac{dq_i}{dx} = 0$. The solution will still be in the form $\dfrac{dq}{dx} = (I-P)^+\dfrac{dP}{dx}q + tq$ for some $t \in \mathbb{R}$. However, finding the value of $t$ to satisfy (2) might be harder. |
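The formula can be sanity-checked on a concrete $2\times 2$ chain (a sketch; the chain $P(x)$ below is made up, and since $I-P$ is rank one here, $I-P = uv^T$, its pseudoinverse is computed directly via $(uv^T)^+ = vu^T/(\|u\|^2\|v\|^2)$ rather than with a matrix library):

```python
import math

# Hypothetical 2-state column-stochastic chain P(x) = [[1-a(x), b], [a(x), 1-b]]
# with a(x) = 0.3 + 0.1*x and fixed b = 0.4.  Its unit-norm stationary
# eigenvector (P q = q, q^T q = 1) is q = (b, a) / sqrt(a^2 + b^2).

B = 0.4

def q_of(x):
    a = 0.3 + 0.1 * x
    r = math.hypot(a, B)
    return (B / r, a / r)

def dq_formula(x):
    # dq/dx = (I - P)^+ (dP/dx) q.  Here I - P = u v^T with u = (1, -1),
    # v = (a, -b), so (I - P)^+ = v u^T / (|u|^2 |v|^2) with |u|^2 = 2.
    a, da = 0.3 + 0.1 * x, 0.1
    r2 = a * a + B * B
    q0, _ = q_of(x)
    w = (-da * q0, da * q0)          # (dP/dx) q, since dP/dx = [[-da,0],[da,0]]
    s = (w[0] - w[1]) / (2 * r2)     # (u . w) / (|u|^2 |v|^2)
    return (a * s, -B * s)           # v * s

x0, h = 0.5, 1e-6
fd = tuple((p - m) / (2 * h) for p, m in zip(q_of(x0 + h), q_of(x0 - h)))
print(dq_formula(x0), fd)            # the two derivatives agree
```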
Does "either" make an exclusive or? | No, you cannot depend on that. If it were that simple, we wouldn't need clunky phrases like "exclusive or" to make clear when an "or" is exclusive.
Linguistically, "either" is simply a marker that warns you in advance that an "or" is going to follow. Nothing more. |
Visual Complex Analysis - Transformation and Rotations of complex numbers | "rotaiton about the origin and then translation can be expressed as a single rotation": no.
$$e^{i\theta}z+a=ze^{i\theta'}$$ cannot hold.
Take $z=0$: this implies $a=0$.
But if you allow rotation about another point $c$,
$$e^{i\theta}z+a=(z-c)e^{i\theta'}+c$$ is obviously solved by
$$\theta'=\theta,\\c=\frac a{1-e^{i\theta}}.$$ |
Ordinary Generating function V.S. Exponential generating function | For a very thorough discussion, see e.g. Wilf's "generatingfunctionology". A more "combinatorial" view is given by Sedgewick and Flajolet "Analytic Combinatorics". Both are freely available, the second one is significantly heavier going (but you'd just want the more descriptive view there).
Short answer: If the objects are interchangeable (not distinguished say by positions), use ordinary generating functions, otherwise exponential generating functions are called for. In this case, for passwords the order of the letters is relevant.
Without a more detailed explanation how the specific problem is solved, we can't really tell if the solution given is right. |
Solve linear system of recurrence relation transformed to differential equations | I am not sure if you need to convert the recurrence system into a differential equation to solve. Define $c_i = \begin{pmatrix}b_i\\a_i\end{pmatrix}$.
Let $\delta = 1-{2\over N}$, and $A = \begin{pmatrix}\delta & \delta+1 \\ \delta-1 & \delta \end{pmatrix}$.
Then,
$$c_{i+1}=Ac_i, $$ or $$c_k=A^kc_0.$$
To compute $A^k$, you can try decomposing the matrix using eigenvectors, $v$, such that $(A - \lambda I) v = 0$
It is a bit of algebra, but the eigenvectors and eigenvalues are:
$$v_1 = \begin{pmatrix}i\sqrt{\frac{1+\delta}{1-\delta}}\\1\end{pmatrix}, \lambda_1=\delta-i\sqrt{1-\delta^2}$$
$$v_2 = \begin{pmatrix}-i\sqrt{\frac{1+\delta}{1-\delta}}\\1\end{pmatrix}, \lambda_2=\delta+i\sqrt{1-\delta^2}$$
So, $A^k = V\Lambda^k V^{-1}$, where $\Lambda=\begin{pmatrix}\lambda_1 & 0\\ 0 & \lambda_2\end{pmatrix}$ and $V=[v_1, v_2]$.
Addendum: This simplifies further (again, with more algebra).
Let $\phi = \tan ^{-1}\frac{\sqrt{1-\delta^2}}{\delta}$ and $\beta = \sqrt{\frac{1+\delta}{1-\delta}}=\sqrt{N-1}.$
Then,
$$c_k=\begin{pmatrix}\cos k\phi & \beta\sin k\phi \\ -\beta^{-1}\sin k\phi & \cos k\phi\end{pmatrix}c_0$$
For $N\gg 1$, $\beta\sin k\phi \approx 2k\frac{N-1}{N-2}$. As a sanity check, in the limit as $N$ approaches $\infty$, notice the recurrence becomes $a_n = a_0$, and $b_{n+1}=b_n+2a_0$. Thus, $b_n=2na_0+b_0$ |
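The closed form for $A^k$ is easy to verify numerically (a sketch, using plain lists rather than a matrix library; $N$ and $k$ are arbitrary):

```python
import math

N = 5
d = 1 - 2 / N                          # delta
A = [[d, d + 1], [d - 1, d]]

def matmul(X, Y):
    return [[sum(X[i][m] * Y[m][j] for m in range(2)) for j in range(2)]
            for i in range(2)]

def matpow(X, k):
    R = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(k):
        R = matmul(R, X)
    return R

phi = math.atan2(math.sqrt(1 - d * d), d)   # cos(phi) = delta
beta = math.sqrt((1 + d) / (1 - d))         # = sqrt(N - 1)

k = 7
closed = [[math.cos(k * phi), beta * math.sin(k * phi)],
          [-math.sin(k * phi) / beta, math.cos(k * phi)]]
direct = matpow(A, k)
print(direct, closed)                       # entrywise agreement
```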
Resources for learning integral calculations | Boros and Moll, Irresistible Integrals, Cambridge University Press 2004 has some very nice content. |
How to see that $\mathbb Q (z_1,\ldots,z_n, \bar z_1,\ldots, \bar z_n)$ is closed under conjugation? | Notice that $\mathbb{Q}$ and $\{z_1,\ldots,z_n,\overline{z_1},\ldots,\overline{z}_n\}$ are closed under conjugation and recall that $\mathbb{Q}(z_1,\ldots,z_n,\overline{z_1},\ldots,\overline{z}_n)$ is the smallest field containing $\mathbb{Q}$ and $\{z_1,\ldots,z_n,\overline{z_1},\ldots,\overline{z}_n\}$. |
If $T:\mathbb R^2 \to \mathbb R^2$ is a linear transformation show that $T(u_1)$ and $ T(u_2)$ are linearly independent | Hint
If $T(u_1)$ and $T(u_2)$ are linearly dependent then there exist numbers $\alpha_1$ and $\alpha_2$ (at least, one of them nonzero) such that $$\alpha_1 T(u_1)+\alpha_2 T(u_2)=\vec{0}.$$ Is this possible? |
Showing the irreducibility of a polynomial in $\mathbb{Q}$[x]. | From equating coefficients of powers of $x$ in the equation
$(x^2+ax+b)(x^2+cx+d)=x^4+0x^3-2x^2+8x+1$
$=x^4+(a+c)x^3+(ac+b+d)x^2+(ad+bc)x+bd$
we know that $a+c=0, ac+b+d=-2, ad+bc=8,$ and $bd=1.$ |
Given a probability density, how can I sample from the induced distribution? | Your probability density on $u,v$ is not a valid probability density.
A probability density should have $\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f(u,v)\mathrm{d}u\mathrm{d}v=1$
If you convert to polar coordinates you will see that
$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}(u^2+v^2)^{-3/2}\mathrm{d}u\mathrm{d}v=\int_0^{\infty} 2\pi r^{-2} \mathrm{d}r=\infty$
Removing the point $\{0\}$ from $\mathbb{R}^2$ won't help either - you have to remove a neighborhood of zero in order to make your density normalisable. |
Sum of uniformly distributed random variables in a given range | If we replace uniforms on $(0,1)$ by uniforms on $(0,w)$, the resulting random variable $Y$ has the same distribution as $wX$, where $X$ has Irwin-Hall distribution. In particular, $\Pr(Y\le y)=\Pr(wX\le y)=\Pr(X\le \frac{y}{w})$. It follows that if $f_X$ is the density function of the Irwin-Hall, then $Y$ has density $f_Y(y)=\frac{1}{w}f_X(y/w)$.
In a similar way, one can read off almost any desired facts about the distribution of $Y$ from related facts about the Irwin-Hall. |
For what $x,y$ does $\sum_{k,l\ge 0} \frac{(k+l)!}{k!l!} \left| x^ky^l\right|$ converge? | You will need $|x|+|y|<1$. That should be necessary and sufficient.
This is because, re-arranging, this is $\sum_{n=0}^{\infty} (|x|+|y|)^n$, where $n=k+l$. |
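The rearrangement uses the binomial theorem on each diagonal $k+l=n$: with $l=n-k$, the multinomial factor $(k+l)!/(k!\,l!)$ is $\binom{n}{k}$, so each diagonal sums to $(|x|+|y|)^n$. A quick numerical check (a sketch):

```python
from math import comb

x, y = 0.3, -0.45
for n in range(10):
    # sum over the diagonal k + l = n of (k+l)!/(k! l!) |x|^k |y|^l
    diagonal = sum(comb(n, k) * abs(x) ** k * abs(y) ** (n - k)
                   for k in range(n + 1))
    assert abs(diagonal - (abs(x) + abs(y)) ** n) < 1e-12
print("each diagonal k+l = n sums to (|x|+|y|)^n")
```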
Anagrams with Generating Functions | Hint
A five letter word is given by partitioning $\{1,2,3,4,5\}$ into four parts, an $a$, $b$, $c$ and $d$ part, and filling the corresponding letter in the spaces in the part. This corresponds exactly to the action of multiplying exponential generating functions.
The EGF for the $a$ part is just $\sum_{n\ge 0}\frac{x^n}{n!}$, since there is exactly one way to fill $n$ given spots with $a$, for all $n\ge 0$. The same goes for the $c$ part. However, the EGF for the $b$ part is $\sum_{n\ge 0}\frac{x^{2n}}{(2n)!}=(e^x+e^{-x})/2$, since there is $1$ way to fill an even number of spots with $b$'s, and zero ways to fill an odd number of spots.
I leave you to find the EGF for the $d$ part. Once you have all four EGF's, you need to multiply them all together, extract the coefficient of $x^5$, then multiply by $5!$.
Addendum: You should get that the number of sequences is $4^4$, and in general the number of $n$ letter sequences is $4^{n-1}$, for all $n\ge 1$. This is begging for a simple bijective proof; here is one. To choose a sequence of length $n$ with an even number of $b$'s and an odd number of $d$'s, choose the first $n-1$ symbols arbitrarily, then...
If there are an even number of $d$'s and an even number of $b$'s, make the last symbol $d$.
If there are an odd number of $d$'s and an odd number of $b$'s, make the last symbol $b$.
If there are an odd number of $d$'s and an even number of $b$'s, make the last symbol $a$.
If there are an even number of $d$'s and an odd number of $b$'s, make the last symbol $c$, and then replace all $d$'s with $b$'s and all $b$'s with $d'$s. |
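The claimed count $4^{n-1}$ can be confirmed by brute force for small $n$ (a sketch; the parity constraints used here are even $b$'s and odd $d$'s, matching the bijection above — by symmetry of the letters, the count is the same for any choice of the two constrained letters):

```python
from itertools import product

def count(n):
    # sequences over {a,b,c,d} with an even number of b's and an odd number of d's
    return sum(1 for w in product("abcd", repeat=n)
               if w.count("b") % 2 == 0 and w.count("d") % 2 == 1)

for n in range(1, 8):
    assert count(n) == 4 ** (n - 1)
print([count(n) for n in range(1, 6)])   # [1, 4, 16, 64, 256]
```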
showing the fundamental group of a graph $G$ with spanning tree $T\subset{G}$ s.t $G - T = \{e\}$ | Your proof looks good.
You might want to explicitly mention the fact that if $A$ is a deformation retract of $B$, then $\pi_1(B) \cong \pi_1(A)$, but other than that, you're good.
As a fun aside, this result can be generalized: for any spanning tree $T$ of a graph $G$, $\pi_1(G)$ is the free group generated by $G\setminus T$. |
Check my solution to $x^2 + x + 1 > 0$ | Hint $\rm\ \ 4\:f(x)\: =\: 4\:(x^2+x+1)\ =\ (2x+1)^2 + 3\: \ge\: 3\ $ thus $\rm\:f(x)\ge 3/4$
More generally one can decide polynomial (in)equations by partitioning the real line into intervals based on the finite number of roots, and testing on each interval. Here there are no real roots, so it has constant sign, so has the sign of $\rm\:f(0) = 1.\:$
This is a special case of Collins's cylindrical algebraic decomposition algorithm, an effective form of Tarski-Seidenberg quantifier elimination for the reals. For deep generalizations, search on "semialgebraic geometry" and "o-minimal". |
What is the formula for this probability problem? | Suppose you have $n$ characters, and $m$ spaces. You are then counting solutions to $$a_0+a_1+\cdots +a_n=m$$
in nonnegative integers. $a_0$ represents the number of spaces following $0$ letters, $a_1$ represents the number of spaces following $1$ letter, etc.
This is counted with multisets, specifically as $$\left(\!\!{n+1\choose m}\!\!\right)={m+n\choose m}=\frac{(m+n)!}{m!n!}$$
For the test case, $n=3, m=2$, giving $\frac{5!}{2!3!}=\frac{120}{2\cdot 6}=10$. |
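The stars-and-bars count can be cross-checked by brute force (a sketch):

```python
from itertools import product
from math import comb

def count_solutions(n, m):
    # number of nonnegative tuples (a_0, ..., a_n) with a_0 + ... + a_n = m
    return sum(1 for a in product(range(m + 1), repeat=n + 1) if sum(a) == m)

n, m = 3, 2
print(count_solutions(n, m), comb(m + n, m))   # 10 10
```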
Probability of flopping a royal flush | In the poker game you are describing, you are dealt two cards and there are a further three cards dealt on the "flop". There is no opportunity to exchange cards. So really, this is the same as asking the probability of a Royal Flush when you are dealt five cards from a deck of fifty-two.
There are ${52 \choose 5} = 2,598,960$ different equiprobable hands you can be dealt when you are dealt five cards from a standard deck of fifty-two. There are four different hands that constitute a Royal Flush (i.e., one for each suit). This means that the probability of a Royal Flush is:
$$\mathbb{P}(\text{Royal Flush on the flop}) = \frac{4}{{52 \choose 5}} = \frac{4}{2,598,960} = \frac{1}{649,740}.$$ |
How sine and cosine can be used to describe 2d-vector coordinates | Use trigonometric function can be useful to parametrize a line through the origin and with slope $\theta$ with respect to $x$ axis that is
$$\vec{a}=
l·\begin{pmatrix}
\cos \theta \\
\sin \theta \\
\end{pmatrix}$$
with $l\in \mathbb{R}$ parameter.
Note that in your example it should be
$$\vec{a}=
l·\begin{pmatrix}
\cos(270^\circ) \\
\sin(270^\circ) \\
\end{pmatrix}=
l·\begin{pmatrix}
0 \\
\color{red}{-1} \\
\end{pmatrix}$$ |
Octal Multiplication | Octal multiplication is not that different from regular multiplication. I'm going to work through a different example ($25·47$) in base 10 and then in base 8 so that you can see where the similarities and the differences are.
First, what does $25·47$ really mean? In base 10, 25 means $2·10 + 5$ and 47 means $4·10+7$. So we're really doing: $$(2·10+5)(4·10+7).$$
Ordinary algebra tells us that $(a+b)(c+d) = ac + ad + bc + bd$, so we can rewrite this as
$$\begin{array}{lll}
\text{hundreds} & \text{tens} & \text{units} \\ \hline
\color{darkblue}{2·4}·10^2 & + \color{purple}{2}·10·\color{purple}{7} + \color{purple}{5·4}·10 & + \color{maroon}{5·7} = \\ \color{darkblue}{2·4}·10^2 & + \color{purple}{(2·7+5·4)}·10 & + \color{maroon}{5·7}
\end{array}
$$
The three terms here represent the hundreds (blue), tens (purple), and units (red) columns. Now we need to do the carrying.
$\color{maroon}{5·7} = 3·10+5$, so we move the three tens from the units column into the next column to the left: $$\begin{array}{lll}
\text{hundreds} & \text{tens} & \text{units} \\ \hline \color{darkblue}{2·4}·10^2 & + \color{purple}{(2·7+5·4\color{maroon}{+3})}·10 & + \color{maroon}{5}\end{array}$$
and then $\color{purple}{2·7+5·4 + \color{maroon}{3}} = 3·10+7$, so we move the three tens left from the tens column:
$$\begin{array}{lll}
\text{hundreds} & \text{tens} & \text{units} \\ \hline \color{darkblue}{(2·4\color{purple}{+3})}·10^2 & + \color{purple}{7}·10 & + \color{maroon}{5}\end{array}$$
Finally, $ \color{darkblue}{2·4\color{purple}{+3}} = 1·10+1$, so we move the ten left from the hundreds column into a new column:
$$\begin{array}{llll}
\text{thousands} & \text{hundreds} & \text{tens} & \text{units} \\ \hline 1 ·10^3 & + \color{darkblue}{1}·10^2 & + \color{purple}{7}·10 & + \color{maroon}{5}\end{array}$$
And in base 10, we abbreviate this to just $$1 \color{darkblue}{1} \color{purple}{7} \color{maroon}{5}$$
which is the answer: $25·47 = 1175$.
Now make sure you followed that carefully, because we're going to do it over in base 8.
What does $25·47$ really mean in base 8? In base 8, 25 means $2·8 + 5$ and 47 means $4·8+7$. So we're really doing: $$(2·8+5)(4·8+7).$$
We can rewrite this as
$$\begin{array}{lll}
\text{64s} & \text{8s} & \text{units} \\ \hline \color{darkblue}{2·4}·8^2 & + \color{purple}{2}·8·\color{purple}{7} + \color{purple}{5·4}·8 & + \color{maroon}{5·7} = \\ \color{darkblue}{2·4}·8^2 & + \color{purple}{(2·7+5·4)}·8 &+ \color{maroon}{5·7}\end{array}$$
The three terms here represent the sixty-fours, eights, and units columns. Now we need to do the carrying.
$\color{maroon}{5·7} = 4·8+3$, so we move the four eights from the units column into the next column to the left:
$$\begin{array}{lll}
\text{64s} & \text{8s} & \text{units} \\ \hline \color{darkblue}{2·4}·8^2 &+ \color{purple}{(2·7+5·4\color{maroon}{+4})}·8 &+ \color{maroon}{3}\end{array}$$
and then $\color{purple}{2·7+5·4 + \color{maroon}{4}} = 4·8+6$, so we move the four eights left from the eights column:
$$\begin{array}{lll}
\text{64s} & \text{8s} & \text{units} \\ \hline \color{darkblue}{(2·4\color{purple}{+4})}·8^2 & + \color{purple}{6}·8 & + \color{maroon}{3}\end{array}$$
Finally, $ \color{darkblue}{2·4\color{purple}{+4}} = 1·8+4$, so we move the one eight left from the sixty-fours column into a new column:
$$\begin{array}{llll}
\text{512s} & \text{64s} & \text{8s} & \text{units} \\ \hline 1 ·8^3 &+ \color{darkblue}{4}·8^2 &+ \color{purple}{6}·8 &+ \color{maroon}{3}\end{array}$$
And in base 8, we abbreviate this to just $$1 \color{darkblue}{4} \color{purple}{6} \color{maroon}{3}$$
which is the answer: $25_8·47_8 = 1463_8$.
Now let's see if we can do the octal multiplication shorthand, without so much toil. We want $$\begin{array}{lll}&2&5\\×&4&7\\\hline \end{array}$$ but with everything in base 8.
We start as usual: $5×7 = 35$… in base 10 we would put down the 5 and carry the 3, but in base 8 we understand it as 4 eights and 3 units, so we put down the 3 and carry the 4:
$$\begin{array}{lll}&2&5^4\\×&4&7\\\hline &&3\end{array}$$
Now $5×4 $ plus the $4$ we carried is $24$, but that's 3 eights and no units, so we put down $30$, not $24$:
$$\begin{array}{lll}&2&5^4\\×&4&7\\\hline 3&0&3\end{array}$$
That $303$ is because $5×47$ is written as $303$ in base 8. Now we continue as usual: $2×7=14$, which is one eight and six units, so we put down the $6$ and carry the $1$:
$$\begin{array}{llll}&&2^1&5^4\\×&&4&7\\\hline &3&0&3\\&&6\end{array}$$
Then $2×4$ plus the $1$ we carried is one eight and one unit, which we put down:
$$\begin{array}{llll}&&2^1&5^4\\×&&4&7\\\hline &3&0&3\\1&1&6\\\hline\end{array}$$
The $116$ is how $2×47$ is written in base 8.
Now we add up the partial products, which is easy; there isn't even any carrying:
$$\begin{array}{llll}&&2^1&5^4\\×&&4&7\\\hline &3&0&3\\1&1&6\\\hline
1&4&6&3\end{array}$$
And 1463 is the answer. |
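The digit-by-digit procedure above can be packaged as a short function (a sketch; the helper name `mul_octal` is made up) and cross-checked against Python's built-in base-8 conversions:

```python
def mul_octal(a: str, b: str) -> str:
    # multiply two base-8 numerals using digit products plus base-8 carrying,
    # as in the worked example, returning a base-8 numeral
    A = [int(d, 8) for d in reversed(a)]
    B = [int(d, 8) for d in reversed(b)]
    out = [0] * (len(A) + len(B))
    for i, x in enumerate(A):
        for j, y in enumerate(B):
            out[i + j] += x * y
    carry = 0
    for k in range(len(out)):            # propagate carries in base 8
        carry, out[k] = divmod(out[k] + carry, 8)
    while len(out) > 1 and out[-1] == 0:
        out.pop()
    return ''.join(str(d) for d in reversed(out))

print(mul_octal('25', '47'))  # 1463
```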
the logarithm of quaternion | I can't see the page in Google Books, but what you apparently have there is the logarithm of a unit quaternion $\mathbf q$, which has scalar part $\cos(\theta)$ and vector part $\sin(\theta)\vec{n}$ where $\vec{n}$ is a unit vector.
Since the logarithm of an arbitrary quaternion $\mathbf q=(s,\;\;v)$ is defined as
$\ln \mathbf q=\left(\ln|\mathbf q|,\;\;\left(\frac1{\|v\|}\arccos\frac{s}{|q|}\right)v\right)$
where $|\mathbf q|$ is the norm of the quaternion and $\|v\|$ is the norm of the vector part (and note that the vector part of $\ln\mathbf q$ has a scalar multiplier); applying that formula to a unit quaternion yields a scalar part of $0$ (the logarithm of the norm of a unit quaternion is zero), and you should now be able to derive the formula for the vector part. |
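The derivation the answer points to can be checked in code: for a unit quaternion $(\cos\theta,\ \sin\theta\,\vec n)$ the general formula collapses to scalar part $0$ and vector part $\theta\vec n$ (a sketch; quaternions are represented as plain 4-tuples $(s, v_1, v_2, v_3)$ here):

```python
import math

def quat_log(q):
    # ln q = (ln|q|, (arccos(s/|q|) / ||v||) * v) for q = (s, v)
    s, v = q[0], q[1:]
    norm_q = math.sqrt(sum(c * c for c in q))
    norm_v = math.sqrt(sum(c * c for c in v))
    scale = math.acos(s / norm_q) / norm_v
    return (math.log(norm_q),) + tuple(scale * c for c in v)

theta = 0.8
n = (0.0, 0.6, 0.8)                    # unit vector
q = (math.cos(theta),) + tuple(math.sin(theta) * c for c in n)

print(quat_log(q))                     # ~ (0, 0, 0.48, 0.64), i.e. (0, theta * n)
```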
Optimal Control example from Wikipedia | 1) By definition of Hamiltonian, $H(x,u,\lambda,t)$ is equal to cost function plus Lagrange multiplier times into constraint. In discrete time system, this becomes equal to $\lambda_{t+1}$ because of aesthetic reason and numerous simplification to solution. One of the reason is because your discrete time system model is $x(t+1) = f(x_t,u_t)$, where $t$ can only take discrete values. In the above example, firstly its not the Hamiltonian of the entire system because it should also include the summation but is the Hamiltonian at time $t$ only. Therefore Hamiltonian is mentioned the way you have written.
2) Simply take the partial derivative of the Hamiltonian with respect to $x_t$ and multiply by $-1$; this yields the result. The reason it equals the left-hand side comes from the calculus of variations in optimal control theory: it is part of the necessary conditions for optimality.
3) The equations that you solved are in effect difference equations (the discrete-time analogue of differential equations), and such equations need boundary values to be solved. The initial and final times supply the boundary conditions you need to find the exact solution; otherwise you are left with general constants of integration. |
Evaluate integral in terms of Gamma function | You just rewrite the cosine in terms of the complex exponential:
$$ \cos(\lambda t \sin\theta) = \frac 12\left[\exp(i\lambda t \sin\theta)+\exp(-i\lambda t \sin \theta)\right] $$
Your integral is then
$$ \frac 12\int_0^\infty dt\,t^{x-1}
\left(
\exp(-t(\lambda\cos\theta-i\lambda \sin\theta))+
\exp(-t(\lambda\cos\theta+i\lambda \sin\theta))
\right)$$
That's a sum of two similar terms. In each of them, one only rescales $t$ by the factor of
$$\lambda\cos\theta\mp i\lambda \sin\theta = \lambda\exp(\mp i \theta) $$
so that the exponentials become $\exp(-T)$ while a factor of the coefficient above to the $(-x)$-th power is picked from $dt\,t^{x-1}$. So the result is
$$ \dots = \frac {\Gamma(x)}{2\lambda^x} (\exp(ix\theta)+\exp(-ix\theta )) = \frac{\Gamma(x)}{\lambda^x}\cos(x\theta) $$
I am going to fix the minor mistakes now. Note that the limits of the definite integral aren't really changed even though $t$ was rescaled by a complex factor – as long as the phase is within some limits. |
Distance between points of the geodesic flow tends to $0$. | Let's consider $v=i$ first. Suppose $\operatorname{Im}z\ne 1$. Let $\zeta$ be the number with $\operatorname{Re}\zeta = \operatorname{Re}w $ and $\operatorname{Im}\zeta=1$. Then
$$ d(g_t((i,i)),g_t((\zeta, i))) \to 0$$
as you have already shown. On the other hand, $d(g_t((z,i)),g_t((\zeta, i)))$ stays constant (and nonzero) because we are just moving both numbers along the same geodesic. By the triangle inequality, $ d(g_t((i,i)), g_t((z, i)))$ does not tend to zero.
Next, suppose $v\ne i$. Then $ d(g_t((z,i)), g_t((z, v)))\to \infty$, which is easiest to see in the disk model by placing $z$ in the center: two points move to the boundary along different radii. On the other hand, $ d(g_t((i,i)), g_t((z,i)))$ stays bounded, as one can check with a direct estimation of the distance between $e^t i$ and $\operatorname{Re}z + e^t\operatorname{Im}(z)\,i$ (e.g., integrate $1/\operatorname{Im}\zeta$ along the line segment between them). Hence, $ d(g_t((i,i)), g_t((z, v)))\to \infty$.
Clarify mathematical induction: Expressing $f(n + 1)$ in terms of $f(n)$, for all nonnegative integers $n$. | Your $f$ is a function, not a property. You are confusing inductive proofs with inductive definitions. To define $f(n)$ inductively, don't start with $f(n) = n +2$. That's already a definition, so you don't have to re-define it inductively. Instead, define $f(n)$ by
$f(0) = 2$, $f(n+1) = f(n) + 1$ for $n \ge 0$.
Then you can prove that $f(n) = n+2$ using an inductive proof. |
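The inductive definition can be transcribed directly as a recursive function; a quick sketch in Python:

```python
# The inductive definition of f, transcribed directly:
# f(0) = 2, f(n+1) = f(n) + 1.
def f(n):
    return 2 if n == 0 else f(n - 1) + 1

# The closed form one proves by induction:
assert all(f(n) == n + 2 for n in range(100))
```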
double series with positive term | Absolute convergence is enough. Let $c_k$ be any enumeration of $a_{ij}$, then let $\phi : \mathbb{N}\times \mathbb{N} \to \mathbb{N}$ be the $1-1$ map such that $a_{ij} = c_{\phi(i,j)}$ (we don't know what $\phi$ is, but we don't care as well). Now,
\begin{align}
& \displaystyle\sum_{i=1}^\infty \displaystyle\sum_{j=1}^\infty a_{ij} = B \iff \displaystyle\sum_{i,j \in \mathbb{N}\times \mathbb{N}} a_{ij} = B \ (\text{absolute convergence allows combination of sum}) \\ & \iff \displaystyle\sum_{(i,j) \in \mathbb{N}\times \mathbb{N}} a_{\phi(i,j)} = B \ (\text{absolute convergence allows rearrangement}) \\& \iff \displaystyle\sum_{k=\phi(i,j) \in \mathbb{N}} c_k = B \ (\text{change of index})
\end{align}
We are done. |
Are approximate minimisers the minimisers of a perturbed function? | Yes. Given $f$ and $y^*$ define $g(x) = \max \{ f(x), f(y^*)\}$. This is convex as the max of two convex functions. It collapses the convex region $C = \{x \in X: f(x) \le f(y^*)\}$ into a flat section. The min and max of the original function over that region are $f(x^*)$ and $f(y^*)$. From this we see $\max |f(x)-g(x)| \le f(y^*)$ for all $x \in C$. For $x \in X-C$ we have $f(x)=g(x)$ and so $\max |f(x)-g(x)| \le f(y^*)$ for all $x \in X$. |
Can we define Urysohn function here? | A space $X$ satisfying the first condition is is called a Uryso(h)n space. A space satisfying the second condition is called a completely Hausdorff or functionally Hausdorff space. Every completely Hausdorff space is Uryson, but not every Uryson space is completely Hausdorff.
One Uryson space that is not completely Hausdorff is the space called the Arens square in Steen & Seebach, Counterexamples in Topology. It’s defined as follows.
Let $Q=\Bbb Q\cap(0,1)$, and let $S=Q\times Q$. Let $p_0=\langle 0,0\rangle$ and $p_1=\langle 1,0\rangle$. For each $r\in\Bbb Q\cap\left(0,\frac12\sqrt2\right)$ let $p_r=\left\langle\frac12,r\sqrt2\right\rangle$, and let $L=\left\{p_r:r\in\Bbb Q\cap\left(0,\frac12\sqrt2\right)\right\}$. Finally, let $X=S\cup L\cup\{p_0,p_1\}$. Points of $S$ have the nbhds that they inherit from the usual topology of $\Bbb R^2$. For $n\in\Bbb Z^+$ let $$\begin{align*}B(0,n)&=\{p_0\}\cup\left\{\langle x,y\rangle\in S:x<\frac14\text{ and }y<\frac1n\right\}\text{ and}\\B(1,n)&=\{p_1\}\cup\left\{\langle x,y\rangle\in S:x>\frac34\text{ and }y<\frac1n\right\}\;,\end{align*}$$ and for $n\in\Bbb Z^+$ and $r\in\Bbb Q\cap\left(0,\frac12\sqrt2\right)$ let $$B(r,n)=\left\{\langle x,y\rangle\in S\cup L:\frac14<x<\frac34\text{ and }\left|y-r\sqrt2\right|<\frac1n\right\}\;.$$ Then $\{B(0,n):n\in\Bbb Z^+\}$ is a local base at $p_0$, $\{B(1,n):n\in\Bbb Z^+\}$ is a local base at $p_1$, and $\{B(r,n):n\in\Bbb Z^+\}$ is a local base at $p_r$ for each $r\in\Bbb Q\cap\left(0,\frac12\sqrt2\right)$.
It’s not hard to verify that $X$ is Uryson; the key observation is that no point of $X\setminus L$ has the same $y$-coordinate as any point of $L$.
To show that $X$ is not completely Hausdorff, we show that there is no continuous $f:X\to[0,1]$ such that $f(p_0)=0$ and $f(p_1)=1$. Suppose that $f$ is such a function. The sets $\left[0,\frac14\right)$ and $\left(\frac34,1\right]$ are open in $[0,1]$, so their inverse images under $f$ are open in $X$. This means that there are $m,n\in\Bbb Z^+$ such that $f\big[B(0,m)\big]\subseteq\left[0,\frac14\right)$ and $f\big[B(1,n)\big]\subseteq\left(\frac34,1\right]$. Now choose any $r\in\Bbb Q\cap\left(0,\frac12\sqrt2\right)$ such that $r\sqrt2<\min\left\{\frac1m,\frac1n\right\}$.
Now $f(p_r)$ cannot be in both $\left[0,\frac14\right)$ and $\left(\frac34,1\right]$, so without loss of generality suppose that $f(p_r)\notin\left[0,\frac14\right)$. Then there are $a,b\in(0,1)$ such that $\frac14<a<f(p_r)<b$. Clearly $\left[0,\frac14\right]$ and $[a,b]$ are disjoint closed subsets of $[0,1]$, so their inverse images under $f$ are disjoint closed sets in $X$ that contain open nbhds of $p_0$ and $p_r$, respectively. In fact, $f^{-1}\left[\left[0,\frac14\right]\right]\supseteq B(0,m)$, and I’ll leave it to you to show that the choice of $r$ ensures that for each $k\in\Bbb Z^+$,
$$\big(\operatorname{cl}B(0,m)\big)\cap\operatorname{cl}B(r,k)\ne\varnothing\;,$$
contradicting the disjointness of the inverse images of $\left[0,\frac14\right)$ and $\left(\frac34,1\right]$. |
Problem with Lipschitz function | Note that $c(s)=\min\{y(s), z(s)\}$ and $d(s)=\max\{ y(s), z(s)\}$ are continuous functions of $s$. Fix $I_s=[c(s),d(s)]$ and $I=\bigcup_{s\in [a,b]}I_s$. If $c=\inf_{s\in [a,b]}c(s)$ and $d=\sup_{s\in [a,b]}d(s)$ then $I\subset [c,d]$.
It suffices to prove that the function $\mathbb{R}^2\ni (s,x)\mapsto f(s,x)\in \mathbb{R}$ is uniformly Lipschitz in the second variable $x$ on the set $[a,b]\times I$.
Here the constant $K$ will depend only on the functions $y(s)$ and $z(s)$, not on $s\in[a,b]$.
That is, there is a constant $K>0$ that does not depend on $s\in[a,b]$ such that
$$
|f(s,u) - f(s,v)| \le K|u - v|.
$$
Recall that a function $(s,x)\mapsto f(s,x)$ is uniformly Lipschitz in the second variable if, and only if,
$$
K=\sup_{s}\sup_{u,v} \frac{|f(s,u)-f(s,v)|}{|u-v|}<\infty.
$$
By the mean value theorem there is $\tau_s\in [0,1]$, with $w_s= \tau_s c_s+(1-\tau_s)d_s\in [c_s,d_s]$ and $s=\tau_s s+(1-\tau_s)s$, such that
\begin{align}
\sup_{u,v\in [c_s,d_s]} \frac{|f(s,u)-f(s,v)|}{|y(s)-z(s)|}
=&
\sup_{u,v\in [c_s,d_s]} \frac{\big| D_1 f(s,w_s)\cdot [s-s]+ D_2 f(s,w_s)\cdot [d_s-c_s]\,\big|}{|y(s)-z(s)|}
\\
=&
\sup_{u,v\in [c_s,d_s]} \frac{\big|D_2 f(s,w_s)\cdot [d_s-c_s]\,\big|}{|y(s)-z(s)|}
\\
=&
\sup_{u,v\in [c_s,d_s]} \big|D_2 f(s,w_s)\big|
\\
\leq &
\sup_{u,v\in [c_s,d_s]} \sup_{w\in [c_s,d_s]}\big|D_2 f(s,w)\big|
\\
= &
\sup_{w\in [c_s,d_s]}\big|D_2 f(s,w)\big|
\\
\leq &
\sup_{w\in [c,d]}\big|D_2 f(s,w)\big|
\end{align}
for $[c,d]\supseteq\bigcup_{s\in [a,b]}[c_s,d_s]$, with $c$ and $d$ as defined above.
Since $[a,b]\ni s \mapsto \sup_{w\in [c,d]}\big|D_2 f(s,w)\big| $ is a continuous function then
$$
\sup_{s\in [a,b]}\sup_{w\in [c,d]}\big|D_2 f(s,w)\big|<\infty.
$$ |
Example of a module such that every proper submodule is finitely generated but the module is not. | The direct limit of $\Bbb Z$-modules:
$$\Bbb Z/p\to \Bbb Z/p^2\to \Bbb Z/p^3\to\cdots$$
is not finitely generated as a $\Bbb Z$-module but every proper submodule is isomorphic to $\Bbb Z/p^k$ for some $k$.
Edit: For more information, please see the Wikipedia page on the Prüfer group $\Bbb Z(p^\infty)$.
Proving the differentiability of a function | The following proof may be repeated with $x$ substituted for $y+h$ if it makes the reader feel more comfortable. Either is valid.
If a function $f:\mathbb R \to \mathbb R$ satisfies the condition $|f(x)-f(y)|\leq |x-y|^{\sqrt 2}$ for all $x$ and $y$
then for $x\neq y$ it satisfies
$$\dfrac{|f(x)-f(y)|}{|x-y|} \leq |x-y|^{\sqrt{2}-1}$$
Now, as $y\to x$, the right-hand side tends to $0$ (since $\sqrt{2}-1>0$), so the left-hand side does too, and hence the difference quotient tends to $0$. Thus $f'$ exists and is $0$ everywhere, so $f$ must be constant.
True or false: There is no square $6$ mod $7$. | Suppose $x^2\equiv 6$ then $x^4\equiv 36 \equiv 1$, and clearly $x$ is not a multiple of $7$, so little Fermat tells us that $x^6\equiv 1$ but then $x^6=x^2\cdot x^4\equiv 6\times 1\equiv 6$ is a contradiction.
This also shows, by an easy generalisation, that the congruence $x^2\equiv -1 \pmod p$ cannot be solved for any prime $p\equiv 3 \pmod 4$.
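A quick brute-force check of the claim: enumerating all residues shows the quadratic residues mod $7$ are $\{0,1,2,4\}$, so $6$ is not among them.

```python
# Enumerate the squares mod 7.
squares_mod_7 = {(x * x) % 7 for x in range(7)}
assert squares_mod_7 == {0, 1, 2, 4}
assert 6 not in squares_mod_7  # no square is congruent to 6 mod 7
```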
dividend = divisor*quotient + remainder for rational number | "while dividing 1/11 we get quotient = 0.090909 and reminder = 1"
No. You do not.
If we divide $11$ into $1$, we find that $11 > 1$, so $11$ goes into $1$ zero times; the quotient is $0$ and we have remainder $1$.
And we apply that to $dividend = divisor*quotient + remainder$ we get:
$1 = 11*0 + 1$
which is ... perfect.
You probably went past the decimal point while dividing.
The thing is, if you go past the decimal point you are multiplying the dividend by a power of $10$ for every decimal place. That means you will have to divide the remainder by the same power of $10$.
So if you divided and got $quotient = 0.09090909$ and stopped you went past the decimal place $8$ places.
What that really means is you divided $10^8$ by eleven and got a quotient of $9090909$ and a remainder of $1$. This works. $10^8 = 11*9090909 + 1 = 99999999 + 1 = 100000000 = 10^8$.
But we divide everything by $10^8$ to have a $quotient = 0.09090909$ and a $remainder = 0.00000001$. And our equation is $1 = 11*0.09090909 + 0.00000001 = 0.99999999 + 0.00000001 = 1.00000000$.
And that's just fine.
Everything is fine. |
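The arithmetic above is easy to verify exactly (using rationals to avoid floating-point issues in the rescaled equation):

```python
from fractions import Fraction as F

# Dividing 1 by 11: quotient 0, remainder 1.
assert divmod(1, 11) == (0, 1)

# Going 8 places past the decimal point really means dividing 10**8 by 11:
q, r = divmod(10**8, 11)
assert (q, r) == (9090909, 1)

# Rescale everything by 10**8: dividend = divisor*quotient + remainder still holds.
assert 1 == 11 * F(q, 10**8) + F(r, 10**8)
```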
Determine if a matchsticks game is possible | I don't think that there's a way to check the trickier cases without brute force: just trying all the possibilities of removing sticks. For example, it's probably impossible to confirm without lots of casework that there's no way to remove $7$ matchsticks to leave exactly $7$ squares (given that both $6$ and $8$ are possible).
On the other hand, brute force isn't out of the question for a $3 \times 3$ problem: there's only $2^{24} = 16\,777\,216$ cases, even if we don't try to take symmetry into account. It's when we increase the size of the grid that we need to search for different approaches.
If you're creating the matchsticks game, you have another option open to you: choose a random set of sticks to remove (weighted however you like toward more or fewer sticks), count the number of squares left, and challenge the player to reproduce your solution. The disadvantage is that harder versions of the puzzle, which have fewer solutions, are also less frequently generated. |
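Brute force is cheap enough to sketch. Below is a minimal Python sketch for a $2\times 2$ grid ($12$ sticks, $2^{12}$ subsets), counting only surviving unit squares; the stick-naming convention here is my own, not from any particular puzzle:

```python
from itertools import combinations

# Sticks of a 2x2 grid: ('H', r, c) is the horizontal stick above square
# row r, column c (r = 0..2, c = 0..1); ('V', r, c) is the vertical stick
# left of square row r, column c (r = 0..1, c = 0..2).
sticks = ([('H', r, c) for r in range(3) for c in range(2)]
          + [('V', r, c) for r in range(2) for c in range(3)])

def unit_squares(present):
    # A unit square survives iff all four of its edges are present.
    return sum(1 for r in range(2) for c in range(2)
               if {('H', r, c), ('H', r + 1, c),
                   ('V', r, c), ('V', r, c + 1)} <= present)

def achievable(k):
    """Unit-square counts reachable by removing exactly k sticks."""
    all_sticks = set(sticks)
    return {unit_squares(all_sticks - set(gone))
            for gone in combinations(sticks, k)}

assert achievable(0) == {4}
assert achievable(1) == {2, 3}  # an interior stick kills two squares at once
```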
Estimating the series: $\sum_{k=0}^{\infty} \frac{k^a b^k}{k!}$ | If my calculation is correct, we have
$$\sum_{k=0}^{\infty} \frac{k^a}{k!} x^{k} = e^{x} \left\{ x^{a} + \binom{a}{2}x^{a-1} + \mathcal{O}(x^{a-2}) \right\}$$
as $x \to \infty$. I have only an iPad currently in my hand, which is apparently inadequate for $\TeX$ing. So I will elaborate my answer later.
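As a sanity check: for $a=3$ the sum has the exact closed form $e^x(x^3+3x^2+x)$ (a Touchard/Bell polynomial identity), whose top two terms match the expansion above with $\binom{3}{2}=3$. A quick numeric check:

```python
import math

def S(a, x, terms=400):
    """Partial sum of sum_k k**a * x**k / k!, computed term by term."""
    s, t = 0.0, 1.0          # t holds x**k / k!
    for k in range(terms):
        s += k**a * t
        t *= x / (k + 1)
    return s

x = 10.0
exact = math.exp(x) * (x**3 + 3 * x**2 + x)   # closed form for a = 3
assert math.isclose(S(3, x), exact, rel_tol=1e-12)

# the two leading terms of the expansion already capture most of the value:
approx = math.exp(x) * (x**3 + math.comb(3, 2) * x**2)
assert abs(S(3, x) - approx) / S(3, x) < 0.01
```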
Analytic Geometry | Two Planes and a Angle | Two Solutions | The normal vector to the sought plane is orthogonal to the
line $P_1 P_2$ and at angle $\pi/3$ to the normal to your given plane. |
Name of this almost diagonal matrix | This would likely class as a Frobenius matrix, see https://en.wikipedia.org/wiki/Frobenius_matrix, where we have multiplied the matrix with $d$ and thus the $a_i = b_i / d$ for $d \neq 0$. |
Proof that $a^{n}+b^{n}$ is irreducible over $\mathbb Q$ | Counterexample to the original question:
$$(a^4+b^4)(a^8-a^4b^4+b^8)=a^{4\times3}+b^{4\times3}$$
Hint for a possible proof for the revised question:
$$b^{2^n}\Phi_{2^{n+1}}\left(\tfrac{a}{b}\right)=a^{2^n}+b^{2^n},$$
where $\Phi_k$ is the $k$-th cyclotomic polynomial, which is irreducible. |
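The counterexample identity is just $u^3+v^3=(u+v)(u^2-uv+v^2)$ with $u=a^4$, $v=b^4$; a quick check over small integers:

```python
# Verify (a^4 + b^4)(a^8 - a^4 b^4 + b^8) = a^12 + b^12 for small a, b.
for a in range(1, 8):
    for b in range(1, 8):
        assert (a**4 + b**4) * (a**8 - a**4 * b**4 + b**8) == a**12 + b**12
```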
Greatest distance between vertices from two squares | As you noticed (correctly), $$a^2 + b^2 = 25$$ and $$a+b=7$$
It's possible to substitute and solve the resulting quadratic by factoring, but it's easier to notice that $(a, b) = (3,4)$ or $(a,b) = (4,3)$ are solutions. Assume, for convenience, that $b$ is the larger of the two, so that $a=3$ and $b=4$. Then the largest distance from a vertex of the inner square to a vertex of the outer square should be the length of the red line in this diagram:
The red line is the hypotenuse of a right triangle with bases $4$ and $7$. Its length, by the Pythagorean theorem, is $$\sqrt{4^2 + 7^2} = \boxed{\sqrt{65}}$$ |
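Both the system and the final distance are easy to verify numerically:

```python
import math

# Integer solutions of a^2 + b^2 = 25 with a + b = 7:
sols = [(a, 7 - a) for a in range(1, 7) if a * a + (7 - a) ** 2 == 25]
assert sols == [(3, 4), (4, 3)]

# Red line: hypotenuse of a right triangle with legs 4 and 7.
assert math.isclose(math.hypot(4, 7), math.sqrt(65))
```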
When does -1 have a squareroot in a finite field? (-1 as a quadratic residue) | Since the unit group of a finite field $\mathbb{F}$ is always cylic of order $|\mathbb{F}|-1=:n$ and $-1$ is the unique element of order $2$ in $\mathbb{F}^ \times$ (if $1 \neq -1$ in $\mathbb{F}$) there is an element $x$ with $x^2=-1$ if and only if $4 \mid n$ so if and only if $|\mathbb{F}| \equiv 1\pmod 4$.
Edit: Just to clarify: If $\mathrm{char}(\mathbb{F})=2$ we have $1=-1$ so $-1$ is obviously a square. The above argument is hence only concerned with odd characteristic. |
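For prime fields the criterion is easy to check by brute force; a quick sketch over small odd primes:

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# -1 is a square mod p exactly when p = 1 (mod 4), for odd primes p.
for p in filter(is_prime, range(3, 200)):
    minus_one_is_square = any((x * x) % p == p - 1 for x in range(1, p))
    assert minus_one_is_square == (p % 4 == 1)
```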
Is this homomorphism surjective? | Let $H$ be a group such that there exists a surjective homomorphism $f: D_{2p}\to H$. By the first isomorphism theorem, $$H\cong D_{2p}/\ker f.$$
Thus any such group will have the form $D_{2p}/N$ where $N$ is a normal subgroup of $D_{2p}$.
Conversely, if you have a group of the form $D_{2p}/N$ ($N$ a normal subgroup of $D_{2p}$), you clearly have a surjective homomorphism $$D_{2p}\to D_{2p}/N$$ (the projection).
Resolve equation in sens of D' | In fact, you need to solve three different equations.
First we need to learn how to solve the equation $$xu=0.$$This is a standard result, the solution writes $u=b\delta_0$ with $b$ an arbitrary constant.
Second, the equation $$xu=1$$ has a particular solution $PV(1/x)$ - the principal value.
Finally, the most interesting one is the part $$xu=H.$$ The naive approach would be to say that $$s=\begin{cases}1/x,&x>0\\0,&x\le0.\end{cases}$$
This, however, is not a distribution on $\Bbb R$, hence we need to invent something else.
We can notice that the antiderivative of $s$, namely, $H(x)\ln x$, is a distribution (it is locally integrable) on $\Bbb R$, so we would like to say that $u=(H(x)\ln x)'$. A quick integration by parts shows that indeed in the sense of distributions one has
$$x(H(x)\ln x)'=H(x).$$
We can now combine all these results into the final answer
$$u = (H(x)\ln x)'+c\,PV(1/x)+b\,\delta_0,\quad b,c\in \Bbb C.$$
How to claim the subspace $AB-BA$ has dimension $n^2-1$? | We claim that $\{AB - BA : A,B \in M_n(\mathbb{C})\} = \{A \in M_n(\mathbb{C}), \operatorname{Tr }A = 0\} = \ker \operatorname{Tr}$.
We already know that $\{AB - BA : A,B \in M_n(\mathbb{C})\} \subseteq \ker \operatorname{Tr}$.
Let $E_{ij}$ denote the matrix with $1$ at the position $(i,j)$ and $0$ elsewhere.
Check that $B = \{E_{ij} : 1 \le i, j \le n, i\ne j\} \cup \{E_{ii} - E_{nn} : 1 \le i \le n-1 \}$ is a basis for $\ker \operatorname{Tr}$.
For $1 \le i, j \le n, i\ne j$ we have
$$E_{ij} = E_{ik}E_{kj} - E_{kj}E_{ik}$$
where $k$ is some index $\ne i,j$. To see this, let $\{e_1, \ldots, e_n\}$ be the standard basis for $\mathbb{C}^n$ and note that $E_{ij}e_j = e_i$ and $E_{ij}e_r = 0$ for $r \ne j$. Now verify that
$$(E_{ik}E_{kj} - E_{kj}E_{ik})e_r =
\begin{cases}
0, &\text{if } r \ne j,k\\
E_{ik}E_{kj}e_j = E_{ik}e_k = e_i, &\text{if }r = j\\
-E_{kj}E_{ik}e_k = -E_{kj}e_i = 0, &\text{if }r = k\\
\end{cases}$$
For $1 \le i \le n-1$ we have
$$E_{ii} - E_{nn} = E_{in}E_{ni} - E_{ni}E_{in}$$
Indeed
$$(E_{in}E_{ni} - E_{ni}E_{in})e_r =
\begin{cases}
0, &\text{if } r \ne i,n\\
E_{in}E_{ni}e_i = E_{in}e_n = e_i, &\text{if }r = i\\
- E_{ni}E_{in}e_n = -E_{ni}e_i = -e_n, &\text{if }r = n\\
\end{cases}$$
Therefore $B \subseteq \{AB - BA : A,B \in M_n(\mathbb{C})\}$ so we conclude $\ker \operatorname{Tr} \subseteq \{AB - BA : A,B \in M_n(\mathbb{C})\}$. |
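The two commutator identities can be checked concretely for, say, $3\times 3$ matrices; a sketch using plain nested lists:

```python
def E(i, j, n=3):
    """Matrix unit E_ij: 1 at position (i, j), 0 elsewhere."""
    return [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

def sub(A, B):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def comm(A, B):
    return sub(matmul(A, B), matmul(B, A))

# E_01 = E_02 E_21 - E_21 E_02   (i = 0, j = 1, k = 2)
assert comm(E(0, 2), E(2, 1)) == E(0, 1)
# E_00 - E_22 = E_02 E_20 - E_20 E_02   (i = 0, with n = 3)
assert comm(E(0, 2), E(2, 0)) == sub(E(0, 0), E(2, 2))
```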
How to prove $\int_0^{\infty}\frac{x^2+3x+3}{(x+1)^3} e^{-x}\sin x\, dx = \frac{1}{2}.$ | We have:
$$ I = \int_{0}^{+\infty}\frac{\sin x}{x}e^{-x}\,dx -\int_{0}^{+\infty}\frac{\sin x}{x(x+1)^3}e^{-x}\,dx=\color{blue}{I_1}-\color{red}{I_2}.$$
Since
$$\frac{\sin x}{x}=\sum_{k=0}^{+\infty}(-1)^k\frac{x^{2k}}{(2k+1)!}$$
and
$$\int_{0}^{+\infty}x^{2k}e^{-x}\,dx = (2k)! $$
we have:
$$\color{blue}{I_1}=\int_{0}^{+\infty}\frac{\sin x}{x}e^{-x}\,dx = \sum_{k=0}^{+\infty}\frac{(-1)^k}{2k+1}=\arctan(1)=\color{blue}{\frac{\pi}{4}}.$$
Using a standard trick:
$$\int_{0}^{+\infty}\frac{\sin x}{x(x+1)^3}e^{-x}\,dx=\frac{1}{2}\int_{0}^{+\infty}\int_{0}^{+\infty}\frac{\sin x}{x}e^{-x}t^2 e^{-t(x+1)}\,dt\,dx$$
and by the previous lemma we get:
$$\color{red}{I_2}=\int_{0}^{+\infty}\frac{\sin x}{x(x+1)^3}e^{-x}\,dx=\frac{1}{2}\int_{0}^{+\infty}t^2 e^{-t}\arctan\frac{1}{1+t}\,dt.$$
Now we use integration by parts. We have:
$$\color{red}{I_2} = \left.\frac{1}{2}e^{-t}(-t^2-2t-2)\arctan\frac{1}{t+1}\right|_{0}^{+\infty}-\frac{1}{2}\int_{0}^{+\infty}e^{-t}\,dt = \color{red}{\frac{\pi}{4}-\frac{1}{2}}$$
and we are done. |
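The value $1/2$ can also be confirmed numerically; a sketch using composite Simpson's rule (the integrand decays like $e^{-x}$, so truncating at $x=40$ is harmless):

```python
import math

def integrand(x):
    return (x * x + 3 * x + 3) / (x + 1) ** 3 * math.exp(-x) * math.sin(x)

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

val = simpson(integrand, 0.0, 40.0, 200_000)
assert abs(val - 0.5) < 1e-8
```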
Asymptotic Winding of the Geodesic Flow on Modular Surfaces and Continuous Fractions | Of course these are not normal subgroups, but these are closed subgroups; the quotients are understood as smooth manifolds. |
If $f$ is a positive integrable function on $(X,\mu)$, why is $\int_0^\infty\mathbb 1_{\{x\in X: f(x)>t\}}(y)dt=\int_0^{f(y)}1dt$? | $$
\begin{align}
\int_0^\infty\mu\left(\{x:f(x)\gt t\}\right)\,\mathrm{d}t
&=\int_0^\infty\int_X\mathbf{1}_{\{x:f(x)\gt t\}}(y)\,\mathrm{d}\mu(y)\,\mathrm{d}t\\
&=\int_0^\infty\int_X\,[f(y)\gt t]\,\mathrm{d}\mu(y)\,\mathrm{d}t\\
&=\int_X\int_0^{f(y)}\,\mathrm{d}t\,\mathrm{d}\mu(y)\\
&=\int_Xf(y)\,\mathrm{d}\mu(y)
\end{align}
$$
Using Iverson Brackets. |
Show that finite simple group $G$ with $n$ involutions satisfies $n < |G| / 3$ | Corollary $(2I)$ in the paper "On groups of even order" by Brauer and Fowler says that if $G$ is a simple group which contains $n$ involutions and $t= \frac{|G|}{n}$ then $|G| < \lceil t(t+1)/2 \rceil !$.
This is a simple consequence of Theorem $(2F)$ or Theorem $(2H)$ in that paper so check if something along these lines is in your book. (Which book is it by the way?)
Arguing by contradiction, suppose that $n \geq \frac{|G|}{3}$. Then $t \leq 3$ so $|G|<720$. There are just $5$ non-abelian simple groups of order less than $720$, which are $A_5$, $A_6$, $\operatorname{PSL}_2(7)$, $\operatorname{PSL}_2(8)$ and $\operatorname{PSL}_2(11)$, and these have $15$, $45$, $21$, $63$, and $55$ involutions respectively. In no case does $n \geq \frac{|G|}{3}$ hold, which is a contradiction.
Added. You can use directly Theorem $6.7$ in Rose's "A Course on Group Theory" which says:
Let $G$ be a group of even order with precisely $n$ involutions, and suppose that $|Z(G)|$ is odd. Let $a = |G|/n$. Then $G$ has a proper subgroup $H$ such that either $|G:H|=2$ or $|G:H|<\frac{1}{2}a(a+1)$.
Now suppose that $G$ is a finite simple group with precisely $n$ involutions. Since $|Z(G)|=1$ the preceding theorem applies. Note that $|G:H| \neq 2$ since otherwise $H$ is normal in $G$. In fact, the stronger claim is true (which Derek mentioned in his answer), that $|G:H| \geq 5$.
Assume for a contradiction that $n \geq |G|/3$. Then $a := |G|/n \leq 3$, so $G$ has a proper subgroup $H$ such that $|G:H|<6$ by Thm. $6.7$, thus $|G:H|=5$ by the preceding observation. But $|G:H|=5$ is only possible if $G \cong A_5$ (do you see why?) and you are given that $A_5$ has less than $60/3=20$ involutions. That is a contradiction, however, and the proof is complete. |
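The involution counts quoted above can be verified directly for the smallest case; a brute-force sketch representing elements of $A_5$ as even permutations of $\{0,\dots,4\}$:

```python
from itertools import permutations

def sign(p):
    """Sign of a permutation given as a tuple: product of (-1)^(len-1) over cycles."""
    s, seen = 1, [False] * len(p)
    for i in range(len(p)):
        if not seen[i]:
            j, cycle_len = i, 0
            while not seen[j]:
                seen[j] = True
                j = p[j]
                cycle_len += 1
            if cycle_len % 2 == 0:
                s = -s
    return s

A5 = [p for p in permutations(range(5)) if sign(p) == 1]
identity = tuple(range(5))
involutions = [p for p in A5 if p != identity
               and all(p[p[i]] == i for i in range(5))]
assert len(A5) == 60
assert len(involutions) == 15          # the double transpositions
assert len(involutions) < len(A5) / 3  # n < |G|/3, as claimed
```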
Analytic in the domain | Since you've already shown that $Log(-z) +i\pi$ is analytic in the desired region, the only thing left to show is that it is a branch of $\log$, i.e., to show that $\exp(Log(-z) + i\pi) = z$. |
Continuity of the operation: Infimum of two projections. | No. The example is in $H=\mathbb C^2$, and the nets are sequences indexed by the positive integers. Let $p$ be the projection onto the span of $(1,0)$, and let $p_i=p$ for all $i$. Let $q_i$ be the projection onto the span of $(1+1/i,1/i)$. Then $q_i\to q=p=p\land q\neq 0$, while $p_i\land q_i =0$ for all $i$. |
Hadamard transforms seen as rotations in higher dimensions | Well, it depends on what you mean by "rotation". Yes, it's an element of SO(N), though this is almost "by accident". Fourier Transforms are generally complex-valued, and it is usually a good idea to choose conventions so that they are unitary. These aren't so far apart: If you use the standard embedding of a complex $N \times N$ matrix into a real $2N \times 2N$ matrix (each $a + ib$ in the matrix expands to a block of the form $\begin{pmatrix}a & b \\ -b & a\end{pmatrix}$), a unitary matrix suddenly looks orthogonal.
There is no conflict, because once we label these spaces with a particular basis (which we have as soon as we write down the elements of the transform matrix) they are "self-dual": there is a direct isomorphism between the dual space and the original space given by mapping the first basis vector to the first basis dual-vector, and so forth.
As a very high dimensional rotation, there isn't necessarily a reason to expect any easy way to visualize or explain the rotation. For the standard form, $H = H^\dagger = H^{-1}$, so $H^2 = I$, which means that the only eigenvalues are $\pm 1$ (and in fact these have equal multiplicity of N/2). As 180 degree rotations and non-rotations, viewing them as rotations really doesn't add much. In the simple
case of $H_1$, the unit vectors (1, 0) and (1,1)/$\sqrt{2}$ are exchanged, so their sum $x_+ = (1 + \sqrt{2}/2, \sqrt{2})$ is preserved, and belongs to the positive eigenvalue, and similarly their difference $x_-$ is negated, and belongs to the negative eigenvalue. As $H_n$ is the $n$-fold Kronecker product of $H_1$, taking
all $N$ combinations of Kronecker products of these eigenvectors will give a set of vectors. The $N/2$ with an odd number of $x_-$ factors will serve as a basis for the negative eigenspace, and the other $N/2$, with an even number, will serve as a basis for the positive eigenspace.
I would've liked to believe that your rearrangement wouldn't make much of a difference, but it does; $H_1$ is no longer $H_1^\dagger=H_1^{-1}$, and this version has a more complicated structure. Instead $H_1^4 = -1$, and the eigenvalues are $\exp(\pm i \pi/ 4)$, which really can be viewed as a 45 degree rotation. I don't know a great way of extending this to higher $n$ in a useful way, as the eigenvectors are inherently complex, and not in the space you're thinking of acting on. I suppose the right thing is to think of the collection of planes that are rotated, but I don't currently see an easy way to characterize them.
There do exist good conventions for discrete Fourier transforms in which
they're just a change of basis. However in general these transforms take place with complex coëfficients. You absolutely can view them as a "complex rotation", as they are unitary, and do preserve norm. For e.g. the cyclic group of order $k$, you can just take the transform to be: $F_{ij} = \exp(2 \pi \cdot i \cdot j / k)/\sqrt{k}$. In general, this has order 4. For $k=2$ this reduces to the first order Hadamard $H_1$. (It does not for higher $k$, as the higher order Hadamard $H_n$ are Fourier transforms on $(\mathbb{Z}/2\mathbb{Z})^n$, rather than $\mathbb{Z}/2^{n}\mathbb{Z}$). |
Importance of cover (topology) | It would take a whole book really to explain the importance of covers, and especially of open covers, in topology. For starters, compactness is a topological property of central importance, and although this fact wasn’t immediately apparent in the early years of topology, it is best defined in terms of open covers of spaces. Metric spaces are clearly important, and not just in topology proper; one of the less obvious reasons for their importance is the fundamental fact that they are paracompact: this property, which is also defined in terms of open covers, is the reason for some of their nicer properties. Study of metrization theory, i.e., of when a space is metrizable, has led naturally to the study of a large variety of so-called covering properties, properties defined in terms of covers of various sorts.
In purely set-theoretic terms a cover of a set $X$ is simply a family $\mathscr{C}\subseteq\wp(X)$ such that $X=\bigcup\mathscr{C}$. Covers aren’t generally interesting or useful unless they have additional properties, however. One might, for instance, require that the members of a cover be pairwise disjoint and non-empty, in which case one has a very familiar object: a partition of a set $X$.
If $X$ is partially ordered, one might be interested in covering it with chains (linearly ordered subsets); the Dilworth decomposition theorem says that a partially ordered set of finite width $w$ can be covered by $w$ chains, which can actually be chosen to be pairwise disjoint. There is some interest in theoretical computer science in algorithms for carrying out partitions into as few chains as possible when the partially ordered set is presented one element at a time (together with its relationships, if any to previously presented elements), and the elements are irrevocably assigned to chains as they are presented. In general the Dilworth limit cannot be achieved, and the on-line partitioning problem for partially ordered sets is to establish the best achievable value.
I would expect other applications of covers to be similarly specific to the types of structures in which they’re used. |
Building Euclidean space | In VI Arnold's classical mechanics page 5 he defines a Euclidean space as an affine space with a norm derived from the inner product on a real vector space. http://users.uoa.gr/~pjioannou/mech1/READING/Arnold_Clas_Mech_ch_1_2.pdf |
Show that if $ar + bs = 1$ for some $r$ and $s$ then $a$ and $b$ are relatively prime | Hint: Suppose to the contrary that $d\gt 1$ divides both $a$ and $b$. Then $d$ divides $ar$ and $\dots$. |
Left and Right Ideal Generated by Two Matrices. | Sorry, there are some issues here caused by your choice of notation. By multiplying the same $a,b,c,d$ matrix with both generators, you've overlooked that there's nothing wrong with multiplying the generators with two different matrices and adding, which will produce many more possiblities.
In the left ideal generated by those two things, a general element will look like this:
$\begin{bmatrix}a&b\\c&d\end{bmatrix}\begin{bmatrix}1&0\\0&0\end{bmatrix}+\begin{bmatrix}e&f\\g&h\end{bmatrix}\begin{bmatrix}0&1\\0&0\end{bmatrix}=\begin{bmatrix}a&e\\c&g\end{bmatrix}$.
Notice how we didn't recycle the $a,b,c,d$ matrix for both generators. Since you can pick $a,e,c,g$ to be whatever you want, you can see that the left ideal generated by these two things is the entire matrix ring.
Try to apply similar reasoning to the right ideal generated by these two things. You will get a different answer than the left ideal, and the answer you gave is not incorrect. It's just that you can express what the right ideal looks like more simply. |
Simplifying exponentials of the form $\,a^x \cdot b^y$ | Here, we have the form $a^x\cdot b^y$, with the added knowledge that $y = \frac 12 x$.
So we can manipulate the expression to obtain the product of two bases raised to the same power, as you did, or as shown below. It all falls out from the laws of exponents, as they relate to real numbers.
$$\left(\dfrac 12\right)^x \cdot 4^{(x/2)} = \left(\dfrac 12\right)^x \cdot 2^{\left(2 (x/2)\right)} = \left(\dfrac 12\right)^x \cdot 2^{x}= \left(\dfrac 12\cdot 2\right)^x = 1^x = 1$$ |
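A quick floating-point check of the identity for a few values of $x$:

```python
import math

# (1/2)^x * 4^(x/2) = 1 for every real x
for x in (-3.5, -1.0, 0.0, 2.0, 7.25):
    assert math.isclose(0.5**x * 4**(x / 2), 1.0)
```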
Question about $l_\infty$ as a complete metric space | What Brian writes is this:
he is working with a sequence $\langle x^n:n\in\mathbb N\rangle$ of elements of $\ell^\infty$; so, each $x^n$ is an element of $\ell^\infty$;
if you fix a $n\in\mathbb N$, then, since $x^n\in\ell^\infty$, $x^n$ is a sequence$$\left\langle x_k^n:k\in\mathbb N\right\rangle$$of real numbers.
The fact “that a real number is an equivalence class of Cauchy sequence of rational numbers” is not used at all in Brian's answer. |
metric space, complete | Your method is true but the elements to choose and analyze the cauchy property and convergence are a little inappropriate. I give my proof only for the $\Rightarrow$. The other side is the similar. Also the following needs some details that you can refine. Furthermore, there is a similar proof in the Real Analysis of "Aliprantis".
Suppose $C_{b}(Y,X)$ is a complete metric space, and let $\{f_{m}(y)\}_{m\in \mathbb{N}}$ be a Cauchy sequence in $X$: given $\epsilon>0$, there is an $N$ such that $d(f_{m}(y),f_{n}(y))<\epsilon$ for all $m,n\ge N$.
Since $C_{b}(Y,X)$ is a complete metric space and
$$D(f_{m},f_{n})=\sup_{y\in Y}d(f_{m}(y),f_{n}(y))<\epsilon,$$
there exists $f\in C_{b}(Y,X)$ such that $f_{m}\to f$. This gives us
$$D(f_{m},f)<\epsilon \;\Rightarrow\; d(f_{m}(y),f(y))<\epsilon,$$
so $f(y)\in X$.
Find the number of days in which the job would be finished | Your answer is correct. It may be that you are expected to explain that the combined rate is $1+\frac{1}{2^2}+\frac{1}{3^2}+\cdots$, and to give some sort of reference for Euler's closed form for $\zeta(2)$. |
Why is $x+e^{-x}>0$ for all $x \in \mathbb{R}$? | Take
$x < y < 0; \tag 1$
then
$f(y) - f(x) = \displaystyle \int_x^y f'(s)\; ds = \int_x^y (1 - e^{-s})\; ds; \tag 2$
it is clear that, for any $M < 0$, there exists $y_0 < 0$ such that
$s < y_0 \Longrightarrow 1 - e^{-s} < M; \tag 3$
if we now choose
$y < y_0, \tag 4$
then
$f(y) - f(x) = \displaystyle \int_x^y (1 - e^{-s})\; ds < \int_x^y M \; ds = M(y - x); \tag 5$
we re-arrange this inequality:
$f(x) - f(y) > -M(y - x) = M(x - y), \tag 6$
$f(x) > f(y) + M(x - y); \tag 7$
we now fix $y$ and let $x \to -\infty$; then since $M < 0$ and $x - y < 0$ for $x < y$,
$\displaystyle \lim_{x \to -\infty} f(y) + M(x - y) = \infty, \tag 8$
and hence
$\displaystyle \lim_{x \to -\infty}f(x) = \infty \tag 9$
as well.
We note that (9) binds despite the fact that $f'(x) < 0$ for $x < 0$; though this derivative is negative, when $x$ decreases we are "walking back up the hill," as it were; as $x$ decreases, $f(x)$ increases.
Note Added in Edit, Wednesday 20 March 2019 10:39 PM PST: We may also dispense with the title question and show
$x + e^{-x} > 0, \forall x \in \Bbb R; \tag{10}$
for
$x \ge 0, \tag{11}$
$e^{-x} > 0 \tag{12}$
as well, hence we also have
$x + e^{-x} > 0; \tag{13}$
for
$x < 0, \tag{14}$
we may use the power series for $e^{-x}$:
$e^{-x} = 1 - x + \dfrac{x^2}{2!} - \dfrac{x^3}{3!} + \ldots; \tag{15}$
then
$x + e^{-x} = 1 + \dfrac{x^2}{2!} - \dfrac{x^3}{3!} + \ldots > 0, \tag{16}$
since every term on the right is positive when $x < 0$. End of Note. |
Lagrange multipliers problem with two constraints | You can get rid of one constraint for free: All three variables $x$, $y$, $z$ have to be positive. It is therefore allowed to put
$$x:=u^2,\quad y:=uv,\quad z:=v^2\ ,$$
so that $xz=y^2$ is automatically satisfied.
We now have to investigate the function
$$g(u,v):=(2a+b)\log u+(2c+b)\log v$$
under the sole constraint $u^2+uv+v^2=1$. With the Lagrangian $G(u,v):=g(u,v)-\lambda(u^2+uv+v^2-1)$ we obtain the conditions
$$G_u={2a+b\over u}-\lambda(2u+v)=0,\quad G_v={2c+b\over v}-\lambda(u+2v)=0\ .$$
Multiplying the first equation by $u$, the second by $v$, and adding gives
$$2(a+b+c)-2\lambda(u^2+uv+v^2)=0\ ,$$
whence $\lambda=a+b+c$. Etcetera. |
If a,b,c are positive and (a,b)=(b,c)=1 and 1/a + 1/b + 1/c = integer then b=1 and a=c=1 or 2 | So you have proven that the given cases does indeed work, and you need help to show that these are the only cases. So let's try to find another case. Let's see what the fraction sum turns out to be:
$$
\frac{1}{a} + \frac{1}{b} + \frac{1}{c} = \frac{bc + ac + ab}{abc}
$$
Since $b$ has no factors in common with either $a$ or $c$, the last fraction cannot be reduced by any factor of $b$, so the denominator will always be a multiple of $b$, and thus the fraction cannot be an integer unless $b=1$.
If $b = 1$ (which we now know it must be for the conditions to hold), then $\frac{1}{a} + \frac{1}{c}$ must also be an integer, so neither $a$ nor $c$ can be greater than $2$, and they have to be the same number. So either they are both $1$ or they are both $2$.
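A brute-force search (the bound $30$ is an arbitrary cutoff) confirms that these are the only solutions in that range:

```python
from fractions import Fraction
from math import gcd

# Positive a, b, c with gcd(a,b) = gcd(b,c) = 1 and 1/a + 1/b + 1/c an integer.
solutions = [
    (a, b, c)
    for a in range(1, 31)
    for b in range(1, 31)
    for c in range(1, 31)
    if gcd(a, b) == 1 and gcd(b, c) == 1
    and (Fraction(1, a) + Fraction(1, b) + Fraction(1, c)).denominator == 1
]

assert solutions == [(1, 1, 1), (2, 1, 2)]
```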
Homeomorphism on the Hilbert space | It is enough to show that $(l^2(\omega), \tau)$ where $\tau$ is the product topology on $\mathbb{R}^{\omega}$ restricted to $l^2(\omega)$ is not completely metrizable. Suppose not. Then $l^2(\omega)$ must be a dense $G_{\delta}$ subset of $\mathbb{R}^{\omega}$. But then $l^2(\omega)$ is comeager in $\mathbb{R}^{\omega}$ which is easily refuted by observing that the sets $A_n = \{x \in l^2(\omega) : \sum_k (x(k))^2 < n\}$ are nowhere dense in $\mathbb{R}^{\omega}$. |
Does tr($XY$) > 0 for positive-definite X imply that Y is positive-definite? | The best you can get is that $Y$ is positive-semidefinite and not zero. That is, if $Y$ is positive-semidefinite, then $\operatorname{Tr}(XY)>0$ for all positive-definite $X$.
Indeed, since $Y$ is symmetric, it is orthogonal-diagonalizable. That is, $Y=UDU^T$ with $U$ orthogonal. If $Y\ne0$, then there exists $k$ with $D_{kk}>0$. Then
\begin{align*}
\operatorname{Tr}(XY)
&=\operatorname{Tr}(XUDU^T)=\operatorname{Tr}(U^TXU\,D)
=\sum_j D_{jj}\,(U^TXU)_{jj}\geq D_{kk}\,(U^TXU)_{kk}>0,
\end{align*}
where we use that $X$ is positive-definite, so is $U^TXU$, and $(U^TXU)_{jj}=e_j^T(U^TXU)e_j>0$ for all $j$.
To prove that if $\operatorname{Tr}(XY)>0$ for all positive-definite $X$ then $Y$ is positive-semidefinite and nonzero, note first that $Y=0$ would give $\operatorname{Tr}(XY)=0$. Next suppose that $Y$ is not positive-semidefinite, i.e. there exists $j$ with $D_{jj}<0$. Now let $X$ be the diagonal matrix
with diagonal
$$
(1,\ldots,1,\,t,\,1,\ldots,1),\qquad t:=\max\Big(1,\;1-\frac{\sum_{k\ne j}D_{kk}}{D_{jj}}\Big),
$$
where $t$ sits in the $j^{\rm th}$ position. Then $t>0$, so $X$ is positive-definite, and since $tD_{jj}\leq D_{jj}-\sum_{k\ne j}D_{kk}$ (multiply $t\geq 1-\sum_{k\ne j}D_{kk}/D_{jj}$ by $D_{jj}<0$),
$$
\operatorname{Tr}(UXU^TY)=\operatorname{Tr}(XD)=\sum_{k\ne j}D_{kk}+tD_{jj}\leq D_{jj}<0.
$$
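A numerical sanity check of both directions (a pure-Python $2\times2$ sketch, not part of the proof):

```python
import random

# 2x2 helpers (illustration only).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

random.seed(0)
for _ in range(1000):
    # X = M^T M + I is positive-definite; Y = v v^T is PSD and nonzero.
    M = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    Mt = [[M[j][i] for j in range(2)] for i in range(2)]
    MtM = matmul(Mt, M)
    X = [[MtM[i][j] + (i == j) for j in range(2)] for i in range(2)]
    v = [random.uniform(-1, 1) + 2, random.uniform(-1, 1)]  # v != 0
    Y = [[v[i] * v[j] for j in range(2)] for i in range(2)]
    assert trace(matmul(X, Y)) > 0

# A non-PSD Y admits a positive-definite X with Tr(XY) < 0:
Y = [[1, 0], [0, -3]]           # indefinite
X = [[1, 0], [0, 1]]            # the identity is positive-definite
assert trace(matmul(X, Y)) < 0  # 1 - 3 = -2
```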
Finitely generated modules over a principal ideal domain. | Any finitely-generated torsion abelian group will do (and all such groups are actually finite). All elements are annihilated by some nonzero integer, and hence you have no linearly independent sets. |
probability in normal density function | You are on the right track, but I think you wanted $P(-1\le Z\le2)$ instead of
$P(1\le Z\le2)$; so this would give $P(-1\le Z\le0)+P(0\le Z\le2)=P(0\le Z\le1)+P(0\le Z\le2)$.
For the second part, you want to find $P(Z\ge1.5)=1/2-P(0\le Z\le1.5)$. |
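Both probabilities can be computed with the standard-normal CDF $\Phi(z)=\tfrac12\bigl(1+\operatorname{erf}(z/\sqrt2)\bigr)$; a short check:

```python
from math import erf, sqrt

# Standard normal CDF via the error function.
def Phi(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

p1 = Phi(2.0) - Phi(-1.0)  # P(-1 <= Z <= 2)
p2 = 1.0 - Phi(1.5)        # P(Z >= 1.5) = 1/2 - P(0 <= Z <= 1.5)

assert abs(p1 - 0.8186) < 1e-3
assert abs(p2 - 0.0668) < 1e-3
```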
Name of probability distribution | As Byron Schmuland says, this is a beta distribution with the second parameter $\beta=1$.
It is sometimes called a "standard power-function distribution". |
Prove that 1 = 0 at the trivial ring | You might be asking the wrong question. It seems like you're thinking of $1$ as the number or symbol. Don't try to show that $1\in R$. Instead, show that the element of $R$ has the required property:
We say that $a\in R$ is an identity if for all $b\in R$, $ab=ba=b$. Now, let's explore this quantified statement for $R=\{0\}$. Since $R$ has only one element, $a=0$. Now, let $b\in R$, since $b$ is in $R$, $b=0$. Since $ab=0=b$ and $ba=0=b$, it follows that $a$ has the properties of the identity, i.e., is the identity. |
N-polygons in hyperbolic geometry | no, take $N=4,$ one figure is a "square" with four equal edges and four equal angles at the vertices. The other figure is a rhombus, same four edges but hte vertex angles in two pairs. For example, we could just glue two equilateral triangles together along one edge.
Isometries preserve angles too. |
Describing qualitatively the level sets of the function $f(x, y) = x^3 - x$ | The function $f(x, y)$ depends only on $x$, so if $(x_0, y_0)$ is on the level curve, so is $(x_0, y)$ for every $y \in \mathbb{R}$. Thus, any level set $\{f(x, y) = c\}$ is a union of the vertical lines $\{x = x_0\}$ in the plane, where $x_0$ varies over the roots of $$f(x, y) = x^3 - x - c$$ regarded as a function of $x$ alone. Since this is a cubic polynomial in $x$, depending on the value of $c$ it can have three real single roots, one real single root and one real double root, or one real single root and two nonreal roots. |
Euler bricks and the $4^{th}$ dimension | The existence of $4$-dimensional Euler bricks is still an open problem. For the same question see here. |
Example of Converge in measure, but not converge point-wise a.e.? | For the first part, consider the typewriter sequence (Example 4). |
How to find Big O notation here. | In fact, since
$$
(\log n + 2)(n-1) = n\log n - \log n + 2n -2
$$
for all $n \geq 1$,
neither dividing by $\log n$ nor by $n$ suffices to ensure boundedness; dividing by $n\log n$ suffices.
So
$(\log n+ 2)(n-1) = O(n\log n)$ as $n \to \infty$. |
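A quick numerical illustration of these growth rates (the sampling step and the bounds are arbitrary):

```python
import math

# (log n + 2)(n - 1) divided by n log n stays bounded;
# divided by n alone it grows like log n, so it is unbounded.
def g(n):
    return (math.log(n) + 2) * (n - 1)

ratios = [g(n) / (n * math.log(n)) for n in range(2, 10**6, 997)]
assert max(ratios) < 4            # bounded (in fact decreasing toward 1)
assert abs(ratios[-1] - 1) < 0.2  # close to 1 for large n

# By contrast, g(n)/n exceeds any fixed bound eventually:
assert g(10**6) / 10**6 > 10
```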
Preservation of Lipschitz Constant by Convolutions | Recall Bochner's theorem: A measurable function $h: \Omega \to F$ is integrable if and only if the scalar function $\|h\|$ is integrable. This is a straightforward consequence of the usual dominated convergence theorem (approximate $h$ by simple functions).
This readily implies that $\|\int h \| \leq \int \|h\|$, see e.g. Section 2.3 in Ryan, Introduction to tensor products of Banach spaces, Springer Monographs in Mathematics, 2002.
Using this, you have
\[
\Vert g(z) - g(z')\Vert \leq \int \Vert f(z+x) - f(z'+x)\Vert \phi(x)\,dx \leq \|f\|_{\text{Lip}} \cdot \|z - z'\| \int \phi \, dx,
\]
so (using $\int \phi \, dx = 1$) $\Vert g\Vert_{\text{Lip}} \leq \Vert f \Vert_{\text{Lip}}$ as desired.
Prove that all squares are congruent to $0,1,4,9 \pmod{16}$ | Let $k\in \mathbb{Z},m\in\{0,1,2,3\}$ such that $n = 4k+m$. Consider:
\begin{align}
n^2 &= (4k+m)^2 \\ &=16k^2 + 8km + m^2 \\ &\equiv m^2+8km \pmod{16}
\end{align}
If $2\mid km$, then $n^2\equiv m^2\pmod{16}$, that is $0,1,4$ or $9$.
If $2\nmid km$, then $m\in\{1,3\}$, which implies $m^2 = 1$ or $9$. In both cases you'll have $n^2\equiv m^2+8\pmod{16}$, giving $9$ or $17\equiv 1$.
So any square will be $\equiv 0,1,4$ or $9 \pmod{16}$. |
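The case analysis can be confirmed by an exhaustive check: since $n^2 \bmod 16$ depends only on $n \bmod 16$, it suffices to test $n = 0, \ldots, 15$.

```python
# Squares mod 16 depend only on n mod 16, so n = 0..15 covers every integer.
residues = {(n * n) % 16 for n in range(16)}
assert residues == {0, 1, 4, 9}
```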
Saddle point and linear programming | For
$$
f(x,y) = -c \cdot x + y \cdot (Ax - b) = f(u)
$$
with $u = (x_1,\dotsc,x_n, y_1, \dotsc, y_m)^\top$.
The first partial derivatives are
$$
\partial_k f = \partial_{x_k} f = y_i a_{ik} - c_k \quad (k \in \{1,\dotsc,n \}) \\
\partial_k f = \partial_{y_{k-n}} f = a_{(k-n)j} x_j - b_{k-n} \quad (k \in \{n+1,\dotsc,n+m \})
$$
or
$$
\DeclareMathOperator{grad}{grad}
\grad f =
\begin{pmatrix}
A^\top y -c \\
Ax-b
\end{pmatrix}
$$
The Hessian is
$$
H_{ij} = \partial_i \partial_j f
$$
with
$$
H_{ij} = 0 \quad (i,j \in \{1, \dotsc, n\}) \\
H_{ij} = 0 \quad (i,j \in \{n+1, \dotsc, n+m\}) \\
H_{ij} = a_{(j-n)i} \quad (i \in \{1, \dotsc, n\}, j \in \{n+1, \dotsc, n+m\}) \\
H_{ij} = a_{(i-n)j} \quad (i \in \{n+1, \dotsc, n+m\}, j \in \{1, \dotsc, n\})
$$
or
$$
H =
\begin{pmatrix}
0 & A^\top \\
A & 0
\end{pmatrix}
$$
It seems we need to show that $H$ has both positive and negative eigenvalues, i.e. is indefinite, indicating a saddle point. I am not sure whether this is sufficient for Hessians in more than two variables.
$$
u^\top H u
= x^\top A^\top y + y^\top A x
= y^\top A x + y^\top A x
= 2 y^\top A x
$$ |
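The final identity $u^\top H u = 2\,y^\top A x$ can be verified numerically; a pure-Python sketch (the sizes $n=3$, $m=2$ and the random seed are arbitrary choices):

```python
import random

# Check u^T H u = 2 y^T A x for H = [[0, A^T], [A, 0]] with a random A.
random.seed(1)
n, m = 3, 2
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(m)]

# Assemble the (n + m) x (n + m) Hessian with zero diagonal blocks.
H = [[0.0] * (n + m) for _ in range(n + m)]
for i in range(m):
    for j in range(n):
        H[j][n + i] = A[i][j]  # upper-right block A^T
        H[n + i][j] = A[i][j]  # lower-left block A

x = [random.uniform(-1, 1) for _ in range(n)]
y = [random.uniform(-1, 1) for _ in range(m)]
u = x + y

uHu = sum(u[i] * H[i][j] * u[j] for i in range(n + m) for j in range(n + m))
yAx = sum(y[i] * A[i][j] * x[j] for i in range(m) for j in range(n))
assert abs(uHu - 2 * yAx) < 1e-12

# Replacing y by -y negates the quadratic form, so H takes both signs
# (hence is indefinite) whenever y^T A x != 0.
u2 = x + [-t for t in y]
uHu2 = sum(u2[i] * H[i][j] * u2[j] for i in range(n + m) for j in range(n + m))
assert abs(uHu2 + uHu) < 1e-12
```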
Confusion about the Galois group $\mbox{Gal}(\mathbb{Q}(\zeta_{p^\infty})/\mathbb{Q})\cong \mathbb{Z}_p$ | You are correct. The correct statement should be $$\operatorname{Gal}(\mathbf{Q}(\zeta_{p^\infty})/\mathbf{Q})\cong \mathbf{Z}_p^\times.$$
for $p$ prime. The (projective) limit commutes with taking unit groups. This is a special case of the fact that the functor $$\mathbf{Ring}\to \mathbf{Grp},R\mapsto R^\times$$
is a right adjoint and that right adjoints commute with taking limits. |
A question about invertible matrices, $A,B$ are invertible matrices, $AB+BA=0$, show that n is even | If you write your relation
$$
ABA^{-1}=-B
$$
you get that
$$\det B=\det(-B).$$
Now, what can you say about $\det(aB)$, where $a$ is a scalar? |
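As a concrete sanity check (one hand-picked $2\times2$ example; recall $\det(aB)=a^n\det B$):

```python
# A concrete pair with AB + BA = 0, for which n = 2 is indeed even.
A = [[1, 0], [0, -1]]
B = [[0, 1], [1, 0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

AB = matmul(A, B)
BA = matmul(B, A)
assert all(AB[i][j] + BA[i][j] == 0 for i in range(2) for j in range(2))

negB = [[-B[i][j] for j in range(2)] for i in range(2)]
# det(-B) = (-1)^n det(B); here n = 2 is even, so the determinants agree.
assert det2(negB) == det2(B) == -1
```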
In a group of $n$ people everybody with a mutual friend know different number of people. Do there exist somebody knowing only one person? | Let $v$ be a vertex with the highest degree $k\geq 1$ in $G$. Let $v_1,\ldots, v_k$ be the vertices connected to $v$. Since $v_i$ and $v_j$ have a common neighbor, we have $\deg v_i \neq \deg v_j$ for all $i\neq j$. But $1 \leq \deg v_i \leq k$ for all $i$ and if they are all distinct, one must have $\deg v_i =1$ for some $i$. |
limits of integration in non spherical coordinates | By a sketch
we see that by cylindrical coordinates the set up should be
$$\int_0^{6\frac{\sqrt2} 2}\, dz\int_0^{2\pi}\, d\theta\int_0^{z} \, dr+\int_{6\frac{\sqrt2} 2}^{6+6\frac{\sqrt2} 2}\, dz\int_0^{2\pi}\, d\theta\int_0^{\sqrt{6^2-\left(z-6\sqrt 2\right)^2}} \, dr$$ |
Isomorphism with affine scheme | Here is one example where $X$ is not affine. A similar idea would work if $A$ is reduced with infinitely many points in $\operatorname{Spec}A$, and you took the same definition for $X$ below using residue fields of points in $\operatorname{Spec}A$. A more interesting question might be if your question is true assuming $X$ is connected.
Let $k$ be a field, and consider the morphism
$$X := \coprod_{x \in \mathbf{A}^1_k} \operatorname{Spec} \kappa(x) \longrightarrow \mathbf{A}^1_k$$
where $\kappa(x)$ denotes the residue field at a point $x \in \mathbf{A}^1_k$, and the map is defined by mapping the unique point in $\operatorname{Spec} \kappa(x)$ to $x$. This is a bijection on topological spaces by construction, each scheme theoretic fiber is a spectrum of a field, hence is a single reduced point, and $X$ is reduced. On the other hand, $X$ is not quasicompact, hence $X$ cannot be affine. |