title | upvoted_answer
---|---|
Proving that the following bilinear map is well defined. | Yes, you want to use Stokes's Theorem, remembering that $\partial M = \emptyset$. Note that $w$ is a closed form so $d(\sigma\wedge w) = d\sigma\wedge w$. |
For a polyhedron $P$ associated with an LP, find a polyhedral description of the integer polyhedron $P_{IP}$ using Gomory-Chvátal cuts. | Because this is two dimensions, a good place to start is to plot the feasible region, enumerate the integer feasible solutions, and determine which of these define the integer hull. Then you can see which cuts are needed. |
Solving congruence system by deducting equations with one another | $$\begin{align}
3x+4y &\equiv 2 \bmod 13 \tag{1}\\
2x+6y &\equiv 1 \bmod 13 \tag{2}\\
10x+30y &\equiv 5 \bmod 13 \tag{3 [5$\times$(2)]}\\
10x+4y &\equiv 5 \bmod 13 \tag{4}\\
13x+8y &\equiv 7 \bmod 13 \tag{5 [(1)+(4)]}\\
8y &\equiv 7 \bmod 13 \\
y &\equiv 9 \bmod 13 \tag{6}\\ \hline
2x+54 &\equiv 1 \bmod 13 \tag{7 [(6)$\to$(2)]}\\
2x+2 &\equiv 1 \bmod 13 \\
2x &\equiv 12 \bmod 13 \\
x &\equiv 6 \bmod 13 \\ \hline
&\bigstar
\end{align}$$ |
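As an added sanity check (not part of the original answer), a brute-force search confirms $(x,y)\equiv(6,9)$ is the unique solution mod $13$:

```python
# Search all residue pairs mod 13 for solutions of the original system.
solutions = [
    (x, y)
    for x in range(13)
    for y in range(13)
    if (3 * x + 4 * y) % 13 == 2 and (2 * x + 6 * y) % 13 == 1
]
print(solutions)  # [(6, 9)]
```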
Functions strictly monotone increasing and monotone increasing | If a function $f$ is strictly monotone increasing then for $x, y$ such that $x < y$, $f(x) < f(y)$, so $f(x) \le f(y)$ so $f$ is monotone increasing. |
What can we say about the polar decomposition of a skew symmetric matrix? | We can define the polar decomposition of a rectangular matrix of maximal rank. For example, let $A\in M_{mn}$ with $m\geq n$ and $rank(A)=n$. Since $A^TA$ is symmetric $>0$, let $S=\sqrt{A^TA}$, $Q=AS^{-1}\in M_{mn}$. Then $Q^TQ=I_n$ and $Q$ is pseudo-orthogonal (the $n$ columns of $Q$ form an orthonormal system in $\mathbb{R}^m$).
If $K\in GL_{2p}$ is skew symmetric ($n=2p$ is even), then $K=Pdiag(a_1U,\cdots,a_pU)P^{-1}$ where $P$ is orthogonal, $U=\begin{pmatrix}0&-1\\1&0\end{pmatrix}$ and $a_i>0$. Thus the polar decomposition is $K=QS$ where $Q=Pdiag(U,\cdots,U)P^{-1}$ and $S=Pdiag(a_1I_2,\cdots,a_pI_2)P^{-1}$.
EDIT. When $A\in M_n$ is singular, one can define "A" polar decomposition (it is not unique) as follows: let $(A_k)_k$ be a sequence of invertible matrices that converges to $A$ and $A_k=Q_kS_k$ be their (unique) polar decomposition. Since $O(n)$ is compact, there exists a subsequence of $(Q_k)$ that converges to $Q\in O(n)$. Then $A=Q\sqrt{A^TA}$ is such a decomposition.
If $A$ is skew symmetric, then $A$ is orthogonally similar to $D=diag(a_1U,\cdots,a_pU,0_{n-2p})$ and some decompositions of $D$ are $D=diag(U,\cdots,U,R)diag(a_1I_2,\cdots,a_pI_2,0_{n-2p})$, where $R\in O(n-2p)$. |
Question regarding Dedekind domain and PID | The answer depends crucially on which of the many equivalent definitions of Dedekind domain you employ. Here's an answer that covers all bases. Find your Prüfer analog $(n)$ in the following list, then trace the path from $(n)$ to $(4)$ in the proofs in [1].
THEOREM $\ \ $ Let $\rm\:D\:$ be a domain. The following are equivalent:
(1) $\rm\:D\:$ is a Prüfer domain, i.e. every nonzero f.g. (finitely generated) ideal is invertible.
(2) Every nonzero two-generated ideal of $\rm\:D\:$ is invertible.
(3) $\rm\:D_P\:$ is a Prüfer domain for every prime ideal $\rm\:P\:$ of $\rm\:D.\:$
(4) $\rm\:D_P\:$ is a valuation domain for every prime ideal $\rm\:P\:$ of $\rm\:D.\:$
(5) $\rm\:D_P\:$ is a valuation domain for every maximal ideal $\rm\:P\:$ of $\rm\:D.\:$
(6) Every nonzero f.g. ideal $\rm\:I\:$ of $\rm\:D\:$ is cancellable, i.e. $\rm\:I\:J = I\:K\ \Rightarrow\ J = K\:$
(7) $\: $ (6) restricted to f.g. $\rm\:J,K.$
(8) $\rm\:D\:$ is integrally closed and there is an $\rm\:n > 1\:$ such that for all $\rm\: a,b \in D,\ (a,b)^n = (a^n,b^n).$
(9) $\rm\:D\:$ is integrally closed and there is an $\rm\: n > 1\:$ such that for all $\rm\:a,b \in D,\ a^{n-1} b \ \in\ (a^n, b^n).$
(10) Each ideal $\rm\:I\:$ of $\rm\:D\:$ is complete, i.e. $\rm\:I = \cap\ I\: V_j\:$ as $\rm\:V_j\:$ run over all the valuation overrings of $\rm\:D.\:$
(11) Each f.g. ideal of $\rm\:D\:$ is an intersection of valuation ideals.
(12) If $\rm\:I,J,K\:$ are nonzero ideals of $\rm\:D,\:$ then $\rm\:I \cap (J + K) = I\cap J + I\cap K.$
(13) If $\rm\:I,J,K\:$ are nonzero ideals of $\rm\:D,\:$ then $\rm\:I\ (J \cap K) = I\:J\cap I\:K.$
(14) If $\rm\:I,J\:$ are nonzero ideals of $\rm\:D,\:$ then $\rm\:(I + J)\ (I \cap J) = I\:J.\ $ ($\rm LCM\times GCD$ law)
(15) If $\rm\:I,J,K\:$ are nonzero ideals of $\rm\:D,\:$ with $\rm\:K\:$ f.g. then $\rm\:(I + J):K = I:K + J:K.$
(16) For any two elements $\rm\:a,b \in D,\ (a:b) + (b:a) = D.$
(17) If $\rm\:I,J,K\:$ are nonzero ideals of $\rm\:D\:$ with $\rm\:I,J\:$ f.g. then $\rm\:K:(I \cap J) = K:I + K:J.$
(18) $\rm\:D\:$ is integrally closed and each overring of $\rm\:D\:$ is the intersection of localizations of $\rm\:D.\:$
(19) $\rm\:D\:$ is integrally closed and each overring of $\rm\:D\:$ is the intersection of quotient rings of $\rm\:D.\:$
(20) Each overring of $\rm\:D\:$ is integrally closed.
(21) Each overring of $\rm\:D\:$ is flat over $\rm\:D.\:$
(22) $\rm\:D\:$ is integrally closed and prime ideals of overrings of $\rm\:D\:$ are extensions of prime ideals of $\rm\:D.$
(23) $\rm\:D\:$ is integrally closed and for each prime ideal $\rm\:P\:$ of $\rm\:D,\:$ and each overring $\rm\:S\:$ of $\rm\:D,\:$ there is at most one prime ideal of $\rm\:S\:$ lying over $\rm\:P.\:$
(24) For polynomials $\rm\:f,g \in D[x],\ c(fg) = c(f)\: c(g)\:$ where for a polynomial $\rm\:h \in D[x],\ c(h)\:$ denotes the "content" ideal of $\rm\:D\:$ generated by the coefficients of $\rm\:h.\:$ (Gauss' Lemma)
(25) Ideals in $\rm\:D\:$ are integrally closed.
(26) If $\rm\:I,J\:$ are ideals with $\rm\:I\:$ f.g. then $\rm\: I\supset J\ \Rightarrow\ I|J.$ (contains $\:\Rightarrow\:$ divides)
(27) the Chinese Remainder Theorem $\rm(CRT)$ holds true in $\rm\:D\:,\:$ i.e. a system of congruences $\rm\:x\equiv x_j\ (mod\ I_j)\:$ is solvable iff $\rm\:x_j\equiv x_k\ (mod\ I_j + I_k).$
[1] S. Bazzoni and S. Glaz, Prüfer rings, Multiplicative Ideal Theory in Commutative Algebra, Springer-Verlag (2006), pp. 55–72. |
Find a function with general shape as a parabola and $f(0) = m$ and $f(1/2) = t$ where $m, t \in \Bbb R$. | Well, you know that:$$f(x)=t-a(x-1/2)^2\tag{vertex formula}$$and$$f(x)=-ax^2+bx+m\tag{standard form}$$Expanding the vertex formula,$$t-a(x-1/2)^2=-ax^2+ax+t-\tfrac14a=-ax^2+bx+m.$$Comparing constant terms gives $t-\frac14a=m\implies a=4(t-m)$, and comparing linear terms gives $b=a=4(t-m)$. Hence
$$f(x)=-4(t-m)x^2+4(t-m)x+m$$ |
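An added numerical spot-check of this formula: it should return $m$ at $x=0$ and $t$ at $x=1/2$ for any choice of $m$ and $t$.

```python
# f(x) = -4(t-m)x^2 + 4(t-m)x + m should satisfy f(0) = m and f(1/2) = t.
def f(x, m, t):
    a = 4 * (t - m)
    return -a * x**2 + a * x + m

pairs = [(0.0, 1.0), (2.5, -3.0), (-1.0, -1.0)]
checks = [(f(0.0, m, t), f(0.5, m, t)) for m, t in pairs]
print(checks)  # each tuple equals the corresponding (m, t)
```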
A $\| \cdot \|_2$-closed subspace of $C[0,1]$ is always Banach. | This is not true. Let $Y=C([0,1])$ itself. Define
$$f_n(x):=\begin{cases}
0&\text{if}\ 0\le x\le\tfrac{n-2}{2n},\\
nx-\tfrac{n-2}2&\text{if}\ \tfrac{n-2}{2n}<x\le\frac12,\\
1&\text{if}\ \tfrac12<x\le1.
\end{cases}$$
Define $f=\mathbf1_{[\frac12,1]}$. Clearly $f_n\to f$ pointwise, and $|f_n-f|^2\le2\in L^1(0,1)$, so by the dominated convergence theorem $f_n\to f$ in $L^2$. Since $f_n\in Y$ for each $n$ this implies $(f_n)$ is an $L^2$-Cauchy sequence in $Y$. If $f_n\stackrel{L^2}\to g\in Y$ then $g=f$ a.e. which can easily be seen to be impossible. Hence $Y$ is not Banach.
However, it is true that $(Y,\|\cdot\|_*)$ is Banach. If $(f_n)$ is $\|\cdot\|_*$-Cauchy in $Y$ then it is also $\|\cdot\|_\infty$-Cauchy, so there exists $f\in Y$ such that $f_n\stackrel{\|\cdot\|_\infty}\to f$. This implies
$$\int_0^1|f_n-f|^2\ \mathrm{d}x\leq\|f_n-f\|^2_\infty\int_0^11\ \mathrm{d}x\to0$$
as $n\to\infty$. Hence $\|f_n-f\|_*=\|f_n-f\|_\infty+\|f_n-f\|_2\to0$, so $(Y,\|\cdot\|_*)$ is complete. |
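As an added numerical check, one can watch $\int_0^1|f_n-f|^2\,\mathrm dx$ tend to $0$ for the $f_n$ defined above (a direct integration of the ramp gives the exact value $1/(3n)$):

```python
# Midpoint-rule approximation of ||f_n - f||_2^2 for the piecewise f_n above.
def f_n(x, n):
    if x <= (n - 2) / (2 * n):
        return 0.0
    if x <= 0.5:
        return n * x - (n - 2) / 2
    return 1.0

def f(x):
    return 1.0 if x >= 0.5 else 0.0

def l2_sq(n, steps=100000):
    h = 1.0 / steps
    return h * sum((f_n((i + 0.5) * h, n) - f((i + 0.5) * h)) ** 2
                   for i in range(steps))

vals = [l2_sq(n) for n in (10, 100, 1000)]
print(vals)  # close to 1/30, 1/300, 1/3000
```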
If $\kappa<\aleph_\alpha$, then $\kappa\leq\alpha$? | Your notion of cardinality is non-standard. Typically, $\aleph_\alpha$ is the $\alpha$th infinite cardinal, and it is the size of $Z(\alpha)$, the set of ordinals $\gamma$ such that $\gamma<\omega_\alpha$, where $\omega_\alpha$ is the $\alpha$th initial ordinal.
If $\alpha$ is infinite, the set of $\gamma$ such that $|\gamma|\le\alpha$ has size $|\alpha|^+$, and this cardinal is not $\aleph_\alpha$ (under the standard usage).
Is $m$ supposed to be $\alpha$? Under your definition, $Z(\alpha)$ is an ordinal itself. I suppose that saying that "$\gamma$ is the ordinal of $B$" means that $\gamma$ is the order type of $B$. Since $|B|=\kappa$, $\gamma$ is an ordinal of size $\kappa$. Since $B$ is a subset of $Z(\alpha)$, $\gamma$ is at most the order type of $Z(\alpha)$ itself, i.e., $Z(\alpha)$. This means, precisely, that $\gamma\le Z(\alpha)$. But it may be a good idea to check the definitions you are using, due to their non-standard nature. |
Change of bases (arithmetic) | A fraction has an infinite repeat when the denominator (in lowest terms) has a prime factor that does not divide into the base. In base $10$ the fractions that terminate are the ones with denominators of the form $2^a5^b$. If there is any other prime in the factorization of the denominator, the fraction will not terminate. If it does terminate, it will have $\max(a,b)$ places because that is the smallest power of $10$ that $2^a5^b$ divides into.
Your approach to converting a whole number to another base works fine. More commonly taught is to divide the number by $5$ and keep the remainder as the ones digit of the converted number. Divide the quotient by $5$ and keep the remainder as the fives digit. Keep going until you are done. It is described here. |
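The repeated-division method just described can be sketched in a few lines (the base-5 example and the names here are illustrative, not from the answer):

```python
# Convert a nonnegative integer to the given base by repeated division,
# collecting remainders as digits (least significant first).
def to_base(n, base):
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, r = divmod(n, base)           # r is the next digit: ones, fives, ...
        digits.append(str(r))
    return "".join(reversed(digits))

print(to_base(38, 5))  # 38 = 1*25 + 2*5 + 3, so "123"
```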
Can $n+1$ distinct positive vectors in $\mathbb{R}^n_{>0}$ agree on $n$ distinct weighted $p$-norms? | I haven't found an answer to your question, but here is a different spin on your question that might help in finding an answer: if we define the weighted $p$-norm
$$ \lVert x \rVert_{w,p} = \left( \sum_{i=1}^n w_i x_i^p \right)^{1/p},$$
and $$\mathbb{R}^n_{+,o} = \{ x \in (0, \infty)^n: x_1 \leq x_2 \leq \cdots \leq x_n \},$$
then if I understand correctly, you would like to show that for any choice of weights $\{w^{(1)},...,w^{(n)}\}\subset\Delta^{n-1}$, norms $\{p^{(1)},...,p^{(n)}\}\subset [1,\infty)$, and scalars $\{q^{(1)},...,q^{(n)}\}\subset(0,\infty)$, the set
$$ X = \bigcap_{k=1}^n \left\{ x \in \mathbb{R}^n_{+,o} : \lVert x \rVert_{w^{(k)}, p^{(k)}} = q^{(k)} \right\} $$
has cardinality at most $n$. Of course there are some choices of weights, norms, and scalars that trivially yield $|X| > n$, but we rule these out. This invites a new way of viewing the problem: you are interested in the cardinality of the intersection of weighted $l_p$ spheres, and looking into this literature might lead to a solution (can any analyst help?). Another way at looking at the problem is that you are essentially looking at solutions to a system of polynomial equations, and this could also lead to a solution.
(I apologize that this is perhaps more of a comment than an answer, but since I am new to stackexchange I do not have the reputation to post comments yet.) |
When deriving the multivariate normal distribution, please explain this limit | There is certainly a typo and it should be $\Delta x\rightarrow 0$. The result being quoted is the integral mean value theorem. What's going on is as follows. In (6), they derive that the probability equals $f(x_0,y_0)\Delta x\Delta y$ for $x_0\in (x,x+\Delta x)$ and $y_0\in (y,y+\Delta y)$. This is precisely what the integral mean value theorem says: the integral of a continuous function is equal to the area over which you are integrating multiplied by the value of the function somewhere in that area (exactly where is somewhat immaterial, as we'll see below). It's a sort of average value result.
The point is that in $(7)$, you are no longer looking at $f(x_0,y_0)$ but at $f(x,y)$. But if we interpret the mean value theorem as in $(6)$, then we realize that as $\Delta x$ and $\Delta y$ go to $0$, $x_0\rightarrow x$ and $y_0\rightarrow y$, so by continuity of $f$ we get $f(x_0,y_0)\rightarrow f(x,y)$. Again, here $x_0$ and $y_0$ depend both on $x,y$ and on $\Delta x$, $\Delta y$, but we are guaranteed they lie somewhere in $(x,x+\Delta x)$ and $(y,y+\Delta y)$ respectively, so as the intervals shrink to zero, $x_0$ and $y_0$ are squeezed arbitrarily close to $x,y$. |
Is $f(x)=\begin{cases}x \ \text{if } x\in [0,1),\\3-x \ \text {if} \ x \in [1,2]\end{cases}$ continuous from $[0,2]$ to $[0,2]$? Yes/No | Notice that:
$$\lim_{x \to 1^-} f(x) = \lim_{x \to 1^-} x = 1,$$
and
$$\lim_{x \to 1^+} f(x) = \lim_{x \to 1^+} 3-x = 2.$$
Since
$$\lim_{x \to 1^-} f(x) \neq \lim_{x \to 1^+} f(x),$$
we can conclude that this function is not continuous on $[0, 2]$. |
Studying the convergence of two sequences | Notice that you have
$$a_n = \log_n \sqrt{n^2+n-1} = \frac{\log(\sqrt{n^2+n-1})}{\log(n)} = \frac{\log(n^2+n-1)}{2\log(n)}$$
by logarithm laws. Now you can use L'Hopital's Rule to deduce
$$ \lim_{n \to \infty} \frac{\log(n^2+n-1)}{2\log(n)} = \lim_{n \to \infty} \frac{\frac{2n + 1}{n^2+n-1}}{2 \frac 1 n } = \lim_{n \to \infty} \frac{2n^2 + n}{2n^2 + 2n - 2} = \lim_{n \to \infty} \frac{2 + \frac 1 n}{2 + \frac 2 n - \frac{2}{n^2}} = \frac 2 2 = 1.$$
The second sequence can be treated accordingly. You should try it yourself :) |
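An added numerical check of this limit:

```python
# a_n = log_n sqrt(n^2 + n - 1) should approach 1 from above.
import math

def a(n):
    return math.log(math.sqrt(n**2 + n - 1)) / math.log(n)

vals = [a(n) for n in (10, 1000, 10**6)]
print(vals)  # decreasing toward 1
```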
Transform $\int^a_b\int^c_df(x,y)dxdy$ into $\int^k_l\int^m_nf(r,\theta)drd\theta $. | No, if you intend to have constant $a, c, b, d$ and $k, l, m, n$, this is not possible, because in the $xy$ domain you have a rectangle, and its edges have equations, say,
$$
x = A\\
y = B
$$
where $A,B$ are constant, and these become in the $r\theta$ domain
$$
r=\frac{A}{\cos\theta}\\
r=\frac{B}{\sin\theta}
$$
which cannot be constant, if we exclude the simple case of $A$ or $B$ zero. |
Numerical precision of arctan function | If anything, I would suspect the acos function since you need to compute its argument very precisely near the z-axis to ensure that you remain within the domain $[-1,1]$. Due to the sensitivity of acos near the endpoints of its domain to small numerical errors (the slope approaches vertical), I would suggest computing $\theta$ using the asin function when you detect that the $z$ coordinate dominates the $x$ and $y$ coordinates.
Currently, you are essentially taking a dot product of your normalized vector $(x,y,z)/|(x,y,z)|$ with the unit $z$ vector $(0,0,1)$, which gets you $\cos\theta$. Instead, compute the magnitude of the cross product of your normalized vector with the unit $z$ vector, which gets you $\sin\theta$. The slope of asin is around $1$ near $0$, leading to much improved numerical sensitivity. |
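To illustrate the point numerically (an added sketch, with a made-up test angle): for a point almost on the $z$-axis, $\theta$ recovered via acos loses essentially all precision, while the asin-of-cross-product route keeps it.

```python
# Compare theta from acos(z/r) vs asin(|cross((x,y,z)/r, (0,0,1))|) near the z-axis.
import math

theta_true = 1e-8                       # tiny polar angle (illustrative value)
x, y, z = math.sin(theta_true), 0.0, math.cos(theta_true)
r = math.sqrt(x * x + y * y + z * z)

theta_acos = math.acos(z / r)
theta_asin = math.asin(math.sqrt(x * x + y * y) / r)  # |cross product| with z-hat

print(theta_acos, theta_asin)  # the asin value is far closer to 1e-8
```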
Power Series Solution to Differential Equation with Initial Conditions | hint
After writing that the coefficient of $x^n$ is zero, you will get
$$a_{n+2}=\frac {2n-10}{(n+1)(n+2)} a_n$$
and treat the two cases : $n $ even to use $a_0$ and $n $ odd to use $a_1$.
Since $a_0=y (0)=0$, we conclude that
$$a_{2p}=0$$
and since $a_1=y'(0)=3$, we find
$$a_3=\frac {-8}{2\times 3}\cdot 3=-4 $$
$$a_5=\frac {-4}{4\times 5}\cdot (-4)=\frac{4}{5}$$
$$a_7=0=a_9=a_{11}=\cdots $$
finally
$$\boxed {y=3x-4x^3+\frac {4}{5}x^5}$$ |
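The hint can be verified mechanically (an added sketch): iterating the recurrence with $a_0=0$ and $a_1=3$ reproduces exactly the coefficients of the boxed polynomial.

```python
# Generate coefficients from a_{n+2} = (2n-10)/((n+1)(n+2)) * a_n, a_0=0, a_1=3.
from fractions import Fraction

a = [Fraction(0), Fraction(3)]
for n in range(20):
    a.append(Fraction(2 * n - 10, (n + 1) * (n + 2)) * a[n])

coeffs = [float(c) for c in a[:8]]
print(coeffs)  # [0.0, 3.0, 0.0, -4.0, 0.0, 0.8, 0.0, 0.0]
```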
Immersion in both ways yet not an isomorphism | Let $A=[0,\infty)$ and let $B=(0,\infty)$.$\;$Then
$A$ can be order-embedded in $B$ via the map $a\mapsto a+1$.$\\[4pt]$
$B$ can be order-embedded in $A$ via the map $b\mapsto b$.
but $A,B$ are not order-isomorphic since $A$ has a least element but $B$ does not have a least element.
As regards your proposed example, it doesn't work; there is no injection of $A=\mathbb{R}$ into $B=\mathbb{Q^{\geqslant0}}\;$since $A$ is uncountable whereas $B$ is countable. |
The probability of drawing 7 red balls from an urn containing a total of 10 balls | Firstly, the probability of drawing one red ball is $7/10$. The probability of drawing a red ball from an urn with $9$ balls, $3$ of which are blue and $6$ of which are red is $6/9$. Thus, $$P(R)=\frac{7}{10}\cdot \frac{6}{9}$$ $$=\frac{7}{15}.$$ |
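The same value can be computed two ways as an added check (assuming, as the answer does, $7$ red and $3$ blue balls with two draws without replacement):

```python
# Sequential product of conditional probabilities vs. direct counting.
from fractions import Fraction
from math import comb

sequential = Fraction(7, 10) * Fraction(6, 9)   # first red, then red again
counting = Fraction(comb(7, 2), comb(10, 2))    # ways to pick 2 reds / any 2 balls

print(sequential, counting)  # 7/15 7/15
```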
Show if the series $f(x)=\sum\limits_{k=1}^\infty \frac{1}{k} \sin(\frac{x}{k})$ converges uniformly or not. | Weierstrass M-test states: if there is $M_k$ such that $|f_k(x)|\le M_k$ for all $k,x$ and $\sum\limits_{k=1}^\infty M_k$ converges, then $\sum\limits_{k=1}^\infty f_k(x)$ converges uniformly.
In particular, what you have just shown tells nothing about uniform convergence of the series, since you only found a sequence that dominates the terms but whose series does not converge.
If you think about it, the term $\sin(\frac{x}{k})$ provides some additional speed of convergence to $0$ when multiplied by $\frac{1}{k}$, so overall the term converges to $0$ faster than $\frac{1}{k}$ alone, and so the series does converge, and does so uniformly.
So the obvious thing to try is to bound the term $\sin(\frac{x}{k})$ in some way (in your case you bounded it by $1$).
Notice that on $[0,\infty)$ we have the following:
$g(x)=x-\sin(x) \implies g'(x)=1-\cos(x) \implies g'(x)\ge0$
and $g(0)=0$, hence $g(x)\ge0 \implies x\ge\sin(x)$
The above inequality can be obtained without using differentiation, if you'd like, using power series (which has infinite radius of convergence) for $\sin(x)$ as well.
Then for any $x\in[0,1]$
$$ \left|\frac{1}{k}\sin\left(\frac{x}{k}\right)\right|=\frac{1}{k}\sin\left(\frac{x}{k}\right)\le\frac{x}{k^2}\le\frac{1}{k^2}=M_k$$
and $\sum\limits_{k=1}^\infty M_k$ converges. |
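A quick numerical confirmation of the bound used above (added, not in the original answer):

```python
# Check |(1/k) sin(x/k)| <= 1/k^2 on a grid of x in [0, 1] for many k.
import math

ok = all(
    abs(math.sin((i / 100) / k) / k) <= 1 / k**2 + 1e-15
    for k in range(1, 51)
    for i in range(101)
)
print(ok)  # True
```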
Having trouble forming mathematical definition | I'm not competent in mathematical logic so take my answer with a grain of salt. If anything you should also come visit "logic room" at math.stackexchange. The following is basically my way of dealing with definitions.
In short, you come into the world of mathematics with 4 things already built-in: language (for example first-order logic with $=$, $\in$), rules of logic (for example Fitch-style), axioms (for example ZFC), and definitorial expansion.
The purpose of logic rules and axioms is obvious; we need some formal language like FOL to be able to formalize what we write (our proofs) in a computer. Definitorial expansion is exactly the thing we include to be able to make new definitions and add new symbols on the fly to our language. Theoretically it would be possible to prove theorems using just the two built-in symbols ($=$ and $\in$), but the theorems and definitions would be so long that you wouldn't have enough time to physically write them down.
So, to answer your last question, the thing that allows us to create definitions is called definitorial expansion, but unlike the axioms we didn't take it on faith: definitorial expansion is backed up by theorems in mathematical logic that guarantee that its use (i.e. the use of new symbols and definitions that we created by definitorial expansion) won't lead to contradiction and also won't allow us to prove something new, something that we could not have proved without using definitorial expansion. So definitorial expansion is perfectly fine to use; those theorems in mathematical logic just confirm that it acts like it's supposed to act, as a shortcut to make long sentences fit into a couple of symbols.
Now, this expansion actually consists of 3 parts:
Firstly, you can add new constants to your language, constants are the objects from your universe, for example some specific sets such as $\varnothing$, here $\varnothing$ is a constant symbol.
The rule is simple: suppose you have proved $$\exists ! x\, P(x)$$ that is, you have proved $\exists x\, P(x)$ and you have proved that for all $x_1,x_2$, if $P(x_1)$ and $P(x_2)$ then $x_1 = x_2$, where $P$ is a formula of first-order logic with $x$ being its only free variable.
Then you can introduce a new symbol, let's call it $c$, and two axioms: $P(c)$, and for all $y$, if $P(y)$ then $y = c$ (where $y$ is any variable that does not appear in the formula $P$). (In my system there is one more axiom, $c$ : obj, but more on it later.)
For example, you can prove using the axiom of the empty set and the axiom of extensionality that $\exists ! x\, (\forall y\, y \notin x)$ (you can see that this sentence is indeed of the form $\exists ! x\, P(x)$ where $P(x) := (\forall y\, y \notin x)$).
Thus you can introduce a new constant symbol $\varnothing$ and two axioms:
$\forall y\, y \notin \varnothing$ and $\forall z \, (\forall y\, y \not \in z) \implies z = \varnothing$. (See, here I had to use the new variable $z$ instead of $y$; this is why I added the clause "(where $y$ is any variable that does not appear in the formula $P$)", to deal with the annoying variable substitutions. I won't mention stuff like that in the following rules.)
The second part is about predicates (that is, properties like $n$ is even, or $f$ is continuous, or $x < y$).
Here, if you have proved that "forall $x_1,x_2,\ldots x_n$ if $C(x_1,x_2,\ldots x_n)$ then $\phi(x_1,x_2,\ldots x_n)$ : bool"
or "forall $x_1,x_2,\ldots x_n \in S_1,S_2,\ldots S_n$ if $C(x_1,x_2,\ldots x_n)$ then $\phi(x_1,x_2,\ldots x_n)$ : bool"
or "forall $x_1,x_2,\ldots x_n\, \phi(x_1,x_2,\ldots x_n)$ : bool"
or "forall $x_1,x_2,\ldots x_n \in S_1,S_2,\ldots S_n\, \phi(x_1,x_2,\ldots x_n)$ : bool"
then you can introduce a new predicate-symbol $P$ and a new logic rule
$x_1,x_2,...,x_n (\in S_1,S_2,..S_n) (C(x_1,...x_n)) ⊢ P(x_1,..x_n) \iff \phi(x_1,..x_n)$.
This rule should be understood as follows:
you first check that $x_1,\ldots,x_n$ (which are some terms of your language FOL) are in those sets $S_1,\ldots,S_n$ (if you have proved the version with the sets) and that $x_1,\ldots,x_n$ satisfy the condition $C$ (if you have proved the version with the condition); if they do, then you can use this axiom: $P(x_1,\ldots,x_n) \iff \phi(x_1,\ldots,x_n)$.
Also you have a rule that if $x_1,x_2,\ldots,x_n$ are in your sets $S_1, S_2,\ldots, S_n$ and $C(x_1,x_2,\ldots,x_n)$ : bool, then you can deduce that $$P(x_1,x_2,\ldots,x_n) : bool$$ Once again, you don't have to check the conditions that the $x_i$ are in the sets $S_i$ or that they make the condition $C$ : bool if you didn't prove the corresponding versions.
How to check whether a given formula in FOL has type :bool :
Suppose that $t_0,t_1, S_1,S_2$ are terms of your current language FOL; $A,B$ are some formulas of FOL then
$t_0, t_1$ : obj $ ⊢ t_0 = t_1$ : bool
$t_0, t_1$ : obj $ ⊢ t_0 \in t_1$ : bool
$t_0 \in S_1, t_1 \in S_2$ $⊢ t_0 = t_1$ : bool
$t_0 \in S_1, t_1 \in S_2$ $⊢ t_0 \in t_1$ : bool
$A, B$ : bool $⊢ A $ and $B$ : bool, $A$ or $B$ : bool, $A \implies B$ : bool, not $A$ : bool
$A$ : bool, if you suppose that $A$ is true and are able to deduce that $B$ : bool $⊢ A$ and $B$ : bool, $A \implies B$ : bool.
$(x$ : obj $⊢ \phi(x)$ : bool) $⊢ \forall x \phi(x)$ : bool, $\exists x \phi(x)$ : bool
Also there are some logic rules that work as follows:
$A$ : bool $ ⊢$ you may say "Suppose that $A$"
For example $3 = 4$ : bool, you may say "Suppose that $(3 = 4)$" and reason further saying "but $(3 \neq 4)$ thus not $(3 = 4)$". You are not allowed to say, "suppose $1/0 = 4$" because you won't be able to deduce that $1/0$ : obj.
$A$ : bool $⊢ A$ or not $A$; you may say "$3 = 4$ or $3 \neq 4$" but you can't say "$(1/0 = 4)$ or $(1/0 \neq 4)$".
Let us make some examples: you can deduce that $\forall x (x \in \mathbb{Z}$ and $\exists k \in \mathbb{Z}\, x = 2k)$ : bool.
This is how: I won't write it too formally, but by the given rules you assume that you have some term $x \in \mathbb{Z}$ and some term $k \in \mathbb{Z}$ and then try to deduce that $x = 2k$ : bool. Suppose you have defined $2$ to be a constant (constant-symbol) such that $2 \in \mathbb{Z}$ and you have defined multiplication $\cdot$ to be a function (functional symbol) such that $\forall x,y \in \mathbb{Z}\, xy \in \mathbb{Z}$; then you may confirm that $2k \in \mathbb{Z}$, and since $\mathbb{Z}$ was another constant-symbol it is certainly a term of your language, and therefore by the rule "$t_0 \in S_1, t_1 \in S_2 ⊢ t_0 = t_1$ : bool" you may confirm that $x = 2k$ : bool.
Thus you can define $\forall x (x$ is even $\iff x \in \mathbb{Z}$ and $\exists k \in \mathbb{Z} x = 2k)$
This will give even(1/2) = false.
Now addressing your second question: you might have wanted to have even(1/2) = undefined, that is, you do not even want to speak about the parity of rational numbers because it makes sense only to speak about the parity of integers. This makes sense, and generally the only distinction between predicates of the form "$B$ iff $C$" and "if $A$ then ($B$ iff $C$)" is whether you want them to be false when they don't make sense or whether you don't even want to speak about them when they don't make sense (i.e. when the condition $A$ is not satisfied). I prefer the second variant, so we would like to have the following definition:
$x \in \mathbb{Z}$ ⊢ $x$ is even $\iff \exists k \in \mathbb{Z}\, x = 2k$
and as a special case:
$\forall x \in \mathbb{Z}\, (x$ is even $\iff \exists k \in \mathbb{Z}\, x = 2k)$.
The proof of its validity is the same as the previous case. Here, you won't be able to deduce that even(1/2) = false, you won't even be able to talk about even(1/2) because to talk about it you would have to be able to conclude that $\frac{1}{2} \in \mathbb{Z}$
Also you can check that you can have both your definitions from the post (form 1 and form 2) now, and to use the second one you'd first have to deduce that $f$ and $g$ are functions.
Now for the third part, functional symbols:
Suppose you have proved that $\forall x_1,x_2,\ldots,x_n$ if $P(x_1,x_2,\ldots,x_n)$ then $\exists ! y\, \phi(x_1,x_2,\ldots,x_n,y)$
From this you may add a new functional symbol $f$ and new rules
$P(x_1,x_2,\ldots,x_n)$ ⊢ $\phi(x_1,x_2,\ldots,x_n,f(x_1,x_2,\ldots,x_n))$ and
$P(x_1,x_2,\ldots,x_n)$ ⊢ $\forall y\, \phi(x_1,x_2,\ldots,x_n,y) \implies y = f(x_1,x_2,\ldots,x_n)$,
$P(x_1,x_2,\ldots,x_n)$ ⊢ $f(x_1,\ldots,x_n)$ : obj
the condition part $P(x_1,\ldots,x_n)$ is optional; that is, you can prove that $\forall x_1,x_2,\ldots,x_n\, \exists ! y\, \phi(x_1,x_2,\ldots,x_n,y)$ and use the same rules without first checking any condition $P$.
The second part allows you to also create functions via definitorial expansion (that is, set-theoretic functions that are subsets of $S \times T$) from functional symbols! This is very convenient in practice. Suppose you have proved
$$\forall x_1,x_2,..x_n \in S_1,S_2,..S_n\, \exists ! y \in T\, \phi(x_1,..x_n,y)$$
then you can add a new functional symbol $f$ and have axioms
$\forall x_1,x_2,\ldots,x_n \in S_1,\ldots,S_n\, f(x_1,x_2,\ldots,x_n) \in T$,
$\forall x_1,x_2,\ldots,x_n \in S_1,\ldots,S_n\, \phi(x_1,x_2,\ldots,x_n,f(x_1,x_2,\ldots,x_n))$,
$\forall x_1,x_2,\ldots,x_n \in S_1,\ldots,S_n\, \forall y \in T \,\phi(x_1,x_2,\ldots,x_n,y) \implies y = f(x_1,x_2,\ldots,x_n)$
but not only that! You may also conclude that you have a term $f_0$ such that $f_0\colon S_1 \times ... \times S_n \to T$ and $\forall x_1...x_n \in S_1,..S_n\, Val(f_0,(x_1,...x_n)) = f(x_1,..x_n)$, where $Val$ is a functional symbol such that if $f$ is a function and $x$ in dom($f$) ⊢ $(x,Val(f,x)) \in f$, $Val(f,x)$ means the very same element $y$ that satisfies $(x,y) \in f$. So you can denote $Val(f,x)$ as $f(x)$.
So for example you can prove that $\forall n \in \mathbb{N_{>0}}\, \exists ! y \in \mathbb{Q}\, y = \frac{1}{n}$; with this you can create a functional symbol (let's call it $f$) such that $\forall n \in \mathbb{N_{>0}}\, f(n) = 1/n$, and there is also a term $f_0$ such that $f_0\colon \mathbb{N_{>0}} \to \mathbb{Q}$ and $\forall n \in \mathbb{N_{>0}}\, f_0(n) = 1/n$. Basically what this means is that there is also a function that acts like this functional symbol $f$ we created.
The last thing concerning $A \stackrel{def}{\equiv} B$ or $A \stackrel{def}{=} B$ notation, which is the same as saying $(A :\equiv B)$ or $A := B$ this is just a fancy way of writing definitorial expansion
For example when we say let $\forall A\, \forall B\, A \subset B \stackrel{def}{\equiv} \forall x \in A\, x \in B$, what we really did here was first (implicitly) deduce that $A,B ⊢ \forall x \in A\, x \in B$ : bool, and after that we used def expansion to get $A,B$ : obj $⊢ (A \subset B \iff \forall x \in A\, x \in B)$ or, equivalently, $\forall A,B\, (A \subset B \iff \forall x \in A\, x \in B)$.
Another example: when in a proof we say "let $M := \varepsilon/2$", what we are really doing is implicitly deducing $\exists ! x\, x = \varepsilon/2$ and then using def expansion to get a constant-symbol $M$ such that it has the property $P(x) := (x = \varepsilon/2)$, that is, $M = \varepsilon/2$.
Also see this: How could we formalize the introduction of new notation?
and this: Why do we not have to prove definitions? |
Commutators Calculus | $[A_0,_p,x] = [[A_0,_{p-1},x],x]=1$, so $x$ centralizes $[A_0,_{p-1},x]$, and $N$ does also, so $C_G([A_0,_{p-1},x])$ contains $\langle N,x \rangle = G$, i.e. $[A_0,_{p-1},x] \le \zeta_1(G)$.
Then we get that $x$ centralizes $[A_0,_{p-2},x]$ modulo $\zeta_1(G)$, so $[A_0,_{p-2},x] \le \zeta_2(G)$, etc. |
Jacobian matrix for $f(x,y,z) := (4y, 3x^2-2\sin(yz), 2yz)$ | Minor typo when you differentiate $f_2$ with respect to $z$:$$\begin{pmatrix}
0 & 4 & 0\\
6x & -2z\cos(yz) & -2\color{blue}y\cos(yz)\\
0 & 2z & 2y
\end{pmatrix}$$
It is not invertible when the determinant is zero.
That is, when $$-4(6x)(2y)=-48xy=0,$$ i.e. exactly when $x=0$ or $y=0$. |
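A finite-difference spot-check of the matrix above at an arbitrary sample point (an added sketch, not part of the original answer):

```python
# Compare the analytic Jacobian of f(x,y,z) = (4y, 3x^2 - 2sin(yz), 2yz)
# with central finite differences at a sample point.
import math

def F(x, y, z):
    return (4 * y, 3 * x**2 - 2 * math.sin(y * z), 2 * y * z)

def J(x, y, z):
    return [
        [0.0, 4.0, 0.0],
        [6 * x, -2 * z * math.cos(y * z), -2 * y * math.cos(y * z)],
        [0.0, 2 * z, 2 * y],
    ]

p, h = [0.3, -0.7, 1.1], 1e-6
num = [[0.0] * 3 for _ in range(3)]
for j in range(3):
    plus, minus = p[:], p[:]
    plus[j] += h
    minus[j] -= h
    fp, fm = F(*plus), F(*minus)
    for i in range(3):
        num[i][j] = (fp[i] - fm[i]) / (2 * h)   # central difference dF_i/dx_j

err = max(abs(num[i][j] - J(*p)[i][j]) for i in range(3) for j in range(3))
print(err)  # small, on the order of 1e-8 or less
```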
Explain why there is no subring of $\mathbb{Z}_{19}$ which is isomorphic to $\mathbb{Z}_{6}$. | Every subring of $R=\mathbb{Z}/n$ is an ideal, see
Show every subring of ring $\Bbb Z_n$ is ideal.
Since for $n=19$, $R$ is a field, every ideal is $0$ or $R$. Both cannot be isomorphic to $\mathbb{Z}/6$ for cardinality reasons. |
Is it possible to generate irrationals in a set full of irrational numbers? | Any actual, physical computer has finite memory, that can only be in some finite number of possible states, and thus can only represent a finite number of numbers. So any random distribution used by a computer to generate (pseudo-)random numbers must be a discrete distribution. Different (non-standard) ways to represent numbers will definitely make it possible to get irrational numbers. But no matter how you choose to represent your numbers, you will not be able to overcome the fundamental finiteness restriction at the heart of this. |
Do algebraically open sets define a vector space topology? | The answer is negative. We know that every finite dimensional topological vector space is topologically isomorphic to $\mathbb{K}^n$ with the Euclidean topology, for some $n\in\mathbb{N}$. However, there exist algebraically open subsets of $\mathbb{R}^2$ that are not open by the Euclidean topology. |
If $\sum c_n$ is convergent where $c_n = a_1b_n + \cdots +a_nb_1$, then $\sum c_n = \sum a_n\sum b_n$ | This can be solved by Cesàro's theorem on Cauchy products; see here. |
Find a non injective function between a set of integers and itself | If $A$ is finite, you cannot do this. The reason is that if $A$ is a finite set, then $f\colon A\to A$ is injective if and only if it is surjective if and only if it is bijective.
If you just look for a non-injective function without fixed points (namely, one that never satisfies $f(x)=x$), you can do it granted $A$ has at least three elements.
Pick $a,b\in A$ and define $f(x)=\begin{cases} a & x\neq a\\ b & x=a\end{cases}$. |
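An added sanity check of this construction on a concrete three-element set (the particular elements chosen here are arbitrary):

```python
# f maps everything to a, except a itself, which goes to b.
A = {0, 1, 2}
a, b = 0, 1

def f(x):
    return b if x == a else a

values = {x: f(x) for x in sorted(A)}
print(values)  # {0: 1, 1: 0, 2: 0}

no_fixed_points = all(f(x) != x for x in A)
non_injective = len(set(values.values())) < len(A)
print(no_fixed_points, non_injective)  # True True
```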
A step function is right continuous with left limits | The graph of $f(x)=\mathbf{1}\{x\geq b\}$ is $0$ for $x<b$ and $1$ for $x\geq b$: a horizontal line at height $0$ ending in a hollow dot at $(b,0)$, and a horizontal line at height $1$ starting with a filled dot at $(b,1)$.
Clearly, approaching any number from the right yields the same value of $f$ meaning that $f$ is right-continuous. That $f$ has left limits just means that the limit exists and is finite when approaching any number from the left. This is also obvious from the graph.
Note also what happens if the filled dot and the hollow dot swap places. Then we're looking at the graph of $f(x)=\mathbf{1}\{x>b\}$ instead, and this is left-continuous with right limits. |
Positive definite matrix and unitary matrix | One understanding of positive definite is to have all eigenvalues positive, or equivalently (by Sylvester's criterion) that the determinant and all leading principal minors are greater than $0$. For example, the $2\times 2$ leading minor gives
$(1-a^2)>0$
which means that $-1<a<1$.
For the determinant we have
syms a;
>> A=[1 a a;a 1 a;a a 1]
A =
[ 1, a, a]
[ a, 1, a]
[ a, a, 1]
>> det(A)
ans =
2*a^3 - 3*a^2 + 1
The determinant must be positive. Alternatively, compute the eigenvalues:
```matlab
>> [V D]=eig(A)
V =
[ -1, -1, 1]
[ 1, 0, 1]
[ 0, 1, 1]
D =
[ 1 - a, 0, 0]
[ 0, 1 - a, 0]
[ 0, 0, 2*a + 1]
```
so we need
$(1-a)>0$, i.e. $a<1$, and
$2a+1>0$, i.e. $a>-0.5$.
Together: the matrix is positive definite exactly when $-0.5<a<1$.
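As a quick numerical sanity check (a sketch, not part of the original answer; the sample values of $a$ are arbitrary), Sylvester's criterion with the minors computed above reproduces the condition $-0.5<a<1$:

```python
# Sylvester's criterion for [[1,a,a],[a,1,a],[a,a,1]]: positive definite
# iff all leading principal minors are positive.
def is_positive_definite(a):
    m1 = 1.0                       # 1x1 leading minor
    m2 = 1.0 - a * a               # 2x2 leading minor, det [[1,a],[a,1]]
    m3 = 2 * a**3 - 3 * a**2 + 1   # full determinant, as computed above
    return m1 > 0 and m2 > 0 and m3 > 0

samples = [(-0.6, False), (-0.4, True), (0.0, True), (0.9, True), (1.1, False)]
assert all(is_positive_definite(a) == expected for a, expected in samples)
```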
Regarding the unitary matrix, see:
Finding a unitary matrix that diagonalizes a given matrix |
Why do exact binomial confidence intervals have wider than nominal coverage? | Because the distribution is discrete. One might get
$$
\sum_{k=y}^n \cdots \ge \frac\alpha2 > \sum_{k=y+1}^n \cdots
$$
but there's nothing between $y$ and $y+1$ that you can put there to make it exactly $\alpha/2$, so you err on the side of caution and use the one that makes it more than $\alpha/2$. |
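A small numerical illustration (a sketch; the values $n=10$, $p=0.6$ are arbitrary): the upper-tail probabilities jump over $\alpha/2=0.025$, so no cutoff attains it exactly:

```python
from math import comb

n, p, half_alpha = 10, 0.6, 0.025

def upper_tail(y):
    """P(X >= y) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(y, n + 1))

# no cutoff hits alpha/2 exactly ...
assert all(abs(upper_tail(y) - half_alpha) > 1e-12 for y in range(n + 2))
# ... so we take the conservative side of the straddling pair:
y = max(y for y in range(n + 1) if upper_tail(y) >= half_alpha)
# here y = 9: P(X >= 9) ~ 0.046 >= 0.025 > P(X >= 10) ~ 0.006
assert upper_tail(y) >= half_alpha > upper_tail(y + 1)
```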
Interesting math-facts that are visually attractive | I like $e^{i\pi}=-1$ for making people stop and go "What? Really?"
Besides the simple explanation "It's just $\cos(\theta) + i \sin(\theta)$" you can watch whichever definition of the exponential function you start with converge to the unit circle.
Definition 1: $\exp(z)=\sum_{n=0}^\infty \frac{z^n}{n!}$
Definition 2: $\exp(z)=\lim_{n\rightarrow \infty} (1 + \frac{z}{n})^n$ |
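Both definitions can be checked numerically at $z=i\pi$ (a sketch; the truncation points are arbitrary):

```python
import cmath

z = 1j * cmath.pi

# Definition 1: partial sum of the power series sum z^n / n!
s, term = 0, 1
for n in range(1, 60):
    s += term
    term *= z / n

# Definition 2: (1 + z/n)^n for one large n
p = (1 + z / 10**6) ** 10**6

assert abs(s - (-1)) < 1e-12   # series: essentially exact
assert abs(p - (-1)) < 1e-4    # limit definition: error ~ |z|^2 / (2n)
```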
multidimensional local homeomorphism and inverse map | As freakish said in the comments, there are counterexamples already when $N=1$. Let $f:\Bbb R\to S^1:x\mapsto e^{ix}$; this is easily seen to be a local homeomorphism that just wraps $\Bbb R$ repeatedly around the unit circle. Let $h:S^1\to\Bbb R$ be any continuous map. Then $h[S^1]$ is a compact, connected subset of $\Bbb R$, so there are $a,b\in\Bbb R$ such that $a\le b$, and $h[S^1]=[a,b]$. Clearly $f\circ h\ne\mathrm{id}_{S^1}$ if $a=b$, so assume that $a<b$, and let $p,q\in S^1$ be such that $h(p)=a$ and $h(q)=b$. Let $C_0$ and $C_1$ be the two closed arcs of $S^1$ with endpoints $p$ and $q$; then $h[C_0]=[a,b]=h[C_1]$. Let $x\in(a,b)$; there are $u\in C_0$ and $v\in C_1$ such that $h(u)=h(v)=x$. Then $(f\circ h)(u)=f(x)=(f\circ h)(v)$, and $u\ne v$, so either $(f\circ h)(u)\ne u$, or $(f\circ h)(v)\ne v$, and again $f\circ h\ne\mathrm{id}_{S^1}$. |
Sum of squares of degrees in a planar graph | The following are some hints but there is still much to work out, especially for (2). Hopefully they will not be too confusing:
The first part of the exercise is probably more about algebraic inequalities rather than graph theory. We want to maximize $\sum(deg(v)^2)$ knowing that $4\leq deg(v)\leq n-1$ for all $v$ and, as you stated, $\sum deg(v)\leq6n-12$. Should we make the degrees as equal as possible or maybe something else?
If the minimum degree is $\geq4$ then we are done by point (1), so suppose not. Now it seems natural to try induction by removing a vertex of minimum degree, say $v_0$. Then $deg(v_0)\in\{0,1,2,3\}$ by assumption. When we add $v_0$ back the sum $\sum(deg(v)^2)$ clearly increases: we have an extra term given by $deg(v_0)^2$ and most importantly the degree of each neighbor of $v_0$ increases by one. The tricky part is bounding the latter increment. |
Solving $\begin{cases} z_1z_2=10\operatorname{cis}(\frac{4\pi}5)\\\frac{z_1}{\overline{z_2}^2}=\frac2{25}\operatorname{cis}(\frac{\pi}5)\end{cases}$ | The line where you have $(*)$ should better be
$$
\left|\frac{z_1}{\overline{z_2}^{\,2}}\right|=
\frac{|z_1|}{\bigl|\overline{z_2}^{\,2}\bigr|}=\frac{r_1}{r_2^2}
$$
There is no reason for the equality before the one you mark with $(*)$.
The equality you mark with $({*}{*})$ is indeed wrong and it should be
$$
\frac{r_1(\cos\theta_1+i\sin\theta_1)}{r_2^2(\cos2\theta_2-i\sin2\theta_2)}=\frac{r_1}{r_2^2}(\cos(\theta_1+2\theta_2)+i\sin(\theta_1+2\theta_2))=\frac{2}{25}(\cos\frac{\pi}{5}+i\sin\frac{\pi}{5})
$$
because if $z=r(\cos\theta+i\sin\theta)$, then $\bar{z}=r(\cos\theta-i\sin\theta)$. Next apply the standard rules
\begin{gather}
\cos\theta-i\sin\theta=(\cos\theta+i\sin\theta)^{-1}\\[4px]
(\cos\alpha+i\sin\alpha)(\cos\beta+i\sin\beta)=
\cos(\alpha+\beta)+i\sin(\alpha+\beta)
\end{gather}
Further notes.
The equality
$$
\frac{z_1z_2^2}{\overline{z_2}^{\,2}z_2^2}=\frac{r_1r_2^2}{|z_2|^4}
$$
is generally false, because there's no reason for $z_1z_2^2$ to be real.
When you expand $\dfrac{z_1}{\overline{z_2}^{\,2}}$, you should write
$$
\frac{r_1(\cos\theta_1+i\sin\theta_1)}
{\bigl(\overline{r_2(\cos\theta_2+i\sin\theta_2)}\bigr)^2}
=
\frac{r_1(\cos\theta_1+i\sin\theta_1)}
{\bigl(r_2(\cos\theta_2-i\sin\theta_2)\bigr)^2}
=
\frac{r_1(\cos\theta_1+i\sin\theta_1)}
{r_2^2(\cos2\theta_2-i\sin2\theta_2)}
$$ |
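Carrying the corrected computation to the end (a sketch, not part of the original answer): the moduli give $r_1r_2=10$ and $r_1/r_2^2=2/25$, so $r_2=5$, $r_1=2$; the arguments give $\theta_1+\theta_2=\frac{4\pi}5$ and $\theta_1+2\theta_2=\frac{\pi}5$ (mod $2\pi$), so one solution is $\theta_2=-\frac{3\pi}5$, $\theta_1=\frac{7\pi}5$. This can be verified numerically:

```python
import cmath

pi = cmath.pi
z1 = 2 * cmath.exp(1j * 7 * pi / 5)
z2 = 5 * cmath.exp(1j * (-3 * pi / 5))

# z1 * z2 = 10 cis(4*pi/5)
assert abs(z1 * z2 - 10 * cmath.exp(1j * 4 * pi / 5)) < 1e-9
# z1 / conj(z2)^2 = (2/25) cis(pi/5)
assert abs(z1 / z2.conjugate() ** 2 - (2 / 25) * cmath.exp(1j * pi / 5)) < 1e-9
```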
The splitting fields of two irreducible polynomials over $Z / p Z$ both of degree 2 are isomorphic | Some ideas (hopefully you've already studied this stuff's details):
For a prime $\;p\;$ and for any $\;n\in\Bbb N\;$, prove that the set of all the roots of the polynomial $\;f(x):=x^{p^n}-x\in\Bbb F_p[x]\;$ in (the, some) algebraic closure $\;\overline{\Bbb F_p}\;$ of $\;\Bbb F_p:=\Bbb Z/p\Bbb Z\;$ , with the usual operations modulo $\;p\;$ , is a field with $\;p^n\;$ elements, which we denote by $\;\Bbb F_{p^n}\;$
From the above it is immediate that $\;\Bbb F_{p^n}\;$ is the minimal field which contains all the roots of $\;f(x)\;$ and is thus this polynomial's splitting field over $\;\Bbb F_p\;$ .
Now, apply the above to the particular case $\;n=2\;$ and deduce at once your claim. |
Integral in solution of geodesic problem | Hint:
$$\int\frac{c\ d\theta}{\sin\theta\sqrt{\sin^2\theta-c^2}}=\int\dfrac{c}{\sin^2\theta \sqrt{(1-c^2)\left(1-\frac{c^2}{1-c^2}\cot^2\theta\right)}}\ d\theta$$
now use the substitution
$$\cos\phi=\frac{c}{\sqrt{1-c^2}}\cot\theta$$ |
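For completeness, here is a sketch of how the substitution finishes the job: since $1-\frac{c^2}{1-c^2}\cot^2\theta=\sin^2\phi$ and differentiating $\cos\phi=\frac{c}{\sqrt{1-c^2}}\cot\theta$ gives $\sin\phi\,d\phi=\frac{c}{\sqrt{1-c^2}}\csc^2\theta\,d\theta$, the integrand collapses to $d\phi$:

```latex
\int\frac{c\,d\theta}{\sin\theta\sqrt{\sin^2\theta-c^2}}
=\int\frac{c\,d\theta}{\sin^2\theta\sqrt{1-c^2}\,\sin\phi}
=\int d\phi
=\arccos\left(\frac{c}{\sqrt{1-c^2}}\cot\theta\right)+C
```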
Problem in functional analysis: application of open mapping theorem | Since $M+N(T)$ is closed, its complement $U$ is open. Since $T$ is surjective, the open mapping theorem implies that $T$ is an open map, so $T(U)$ is open; but $T(U)$ is the complement of $T(M)$ (because $T^{-1}(T(M))=M+N(T)$), thus $T(M)$ is closed. |
Is it possible for an event $A$ to be independent from event $B$, but not the other way around? | $P(B\mid A)$ is undefined when $P(A)=0$, so you can’t draw any conclusions about independence of the two events from it. That's one reason why (despite what the Wikipedia page on conditional probability might imply) the fundamental definition of independence of two events uses their joint probability: $A$ and $B$ are independent iff $P(A\cap B)=P(A)P(B)$. This definition is symmetric. |
A question about functionals and dual space | Some steps to reach the result:
Prove that $f_1,\dots,f_n$ is a basis for $V^*$, the space of all linear functions from $V$ to $\mathbf F$.
For each $v \in V$ define $\operatorname{ev}(v) : V^* \to \mathbf F$ by $\operatorname{ev}(v)(\phi) = \phi(v)$, and prove that $\operatorname{ev}(v) \in V^{**}$, where $V^{**}$ is the space of all linear functions from $V^*$ to $\mathbf F$.
Prove that if $v \in V \setminus \{0_V\}$ then there exists $\phi \in V^*$ such that $\phi(v) \neq 0$. Conclude that $\operatorname{ev} : V \to V^{**}$ is injective, and then, conclude that any $\varphi \in V^{**}$ is $\operatorname{ev}(v_\varphi)$ for some $v_\varphi \in V$.
If $\varphi_1,\dots,\varphi_n \in V^{**}$ is the dual basis for $f_1,\dots,f_n$, then for each $i$ between $1$ and $n$ let $x_i \in V$ such that $\varphi_i = \operatorname{ev}(x_i)$, and prove that $x_1,\dots,x_n$ is the desired basis for $V$. |
Calculating observational error for complex expressions | Let $w = F(x_1, x_2, \ldots, x_n)$ represent the relationship between actual measured quantities $x_k$ and the quantity of interest $w$. Assuming $F$ is an analytic function, we get
$$
\Delta(F) \approx \sum_{k=1}^n \frac{\partial F}{\partial x_k} \Delta x_k
$$
If we further assume that the sources of error $\Delta x_k$ for each measured quantity $x_k$ are normally distributed with zero mean and independent, we get
$$
\mathbb{E}\left( (\Delta F)^2 \right) \approx \sum_{k=1}^n \sum_{\ell=1}^n \frac{\partial F}{\partial x_k} \frac{\partial F}{\partial x_\ell} \mathbb{E}\left( \Delta x_k \Delta x_\ell\right)
$$
Due to independence, for $k \not= \ell$, $\mathbb{E}(\Delta x_k \Delta x_\ell) = \mathbb{E}(\Delta x_k) \mathbb{E}(\Delta x_\ell) = 0$, thus
$$
\mathbb{E}((\Delta F)^2) = \sum_{k=1}^n \left( \frac{\partial F}{\partial x_k} \right)^2 \mathbb{E}((\Delta x_k)^2)
$$
In the case of $F$ that is a simple product of powers $F=x_1^{p_1} \cdots x_n^{p_n}$, $$
\frac{\partial F}{\partial x_k} = p_k \frac{F}{x_k}
$$ |
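A numerical sketch (with made-up values) checking that, for a product of powers, the general formula and the shortcut $\left(\frac{\sigma_F}{F}\right)^2=\sum_k p_k^2\left(\frac{\sigma_{x_k}}{x_k}\right)^2$ agree, here for $F=x^2y$:

```python
x, y = 2.0, 3.0          # measured values (illustrative)
sx, sy = 0.01, 0.02      # their standard deviations (illustrative)

F = x**2 * y
dFdx, dFdy = 2 * x * y, x**2   # partial derivatives of F = x^2 * y

var_general = dFdx**2 * sx**2 + dFdy**2 * sy**2
var_powers = F**2 * ((2 * sx / x) ** 2 + (1 * sy / y) ** 2)
assert abs(var_general - var_powers) < 1e-12
```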
Deriving unit vector $\hat{\theta}$ | HINT:
The unit vector $\hat{\theta}$ is given by
$$ \hat{\theta} = \frac{\frac{d \mathbf{r}}{d\theta}}{|\frac{d \mathbf{r}}{d\theta}|},$$
where $\mathbf{r}$ is the radius vector. So, in Cartesian coordinates it is given by
$$ \mathbf{r} = \begin{bmatrix} x \\ y \end{bmatrix}, $$
in polar coordinates it is given by
$$ \mathbf{r} = \begin{bmatrix} r \cos \theta \\ r\sin \theta \end{bmatrix}. $$
Can you take it from here? |
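A numerical sketch of the recipe (arbitrary sample point $r=2.5$, $\theta=0.7$): differentiating $\mathbf r(\theta)$ by central differences and normalizing recovers $\hat\theta=(-\sin\theta,\cos\theta)$:

```python
import math

r, t, h = 2.5, 0.7, 1e-6
# central differences of r(theta) = (r cos(theta), r sin(theta))
dx = (r * math.cos(t + h) - r * math.cos(t - h)) / (2 * h)
dy = (r * math.sin(t + h) - r * math.sin(t - h)) / (2 * h)
norm = math.hypot(dx, dy)          # |dr/dtheta| = r
theta_hat = (dx / norm, dy / norm)

assert abs(norm - r) < 1e-6
assert abs(theta_hat[0] - (-math.sin(t))) < 1e-6
assert abs(theta_hat[1] - math.cos(t)) < 1e-6
```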
integration over a ball | Basically, you are integrating a radially simmetric function over a ball. In general, in $B \subset \mathbb{R}^n$ is the unit ball and $f=f(r)=f(|x|)$, then $$\int_B f \, d \mathcal{L}=|S^{n-1}| \int_0^1 f(r) r^{n-1}\, dr$$ where $|S^{n-1}|$ is the surface of the sphere $S^{n-1}$. In dimension $n=3$, the surface of $S^2$ is... |
Why does the proper class construction of Stone–Čech compactification fail? | I'd like to give you a slightly more general perspective, assuming that you know a bit of category theory.
Let $\bf Top$ be the category of topological spaces and $\bf CHaus$ the full subcategory of $\bf Top$ given by compact Hausdorff spaces. Let $j\colon\bf CHaus \to Top$ be the inclusion functor.
The problem of finding the Stone–Čech compactification is equivalent to finding a left adjoint $\beta\colon\bf Top \to CHaus$. Indeed, you are looking for a universal compactification in the following sense: for any $f\colon X \to j(C) $ where $X \in\mathbf{Top}, C \in\mathbf{CHaus}$, there is a unique extension $ \beta X \to C$.
Does a left adjoint to the inclusion exist? We will invoke here the
General Adjoint Functor Theorem. If $C$ is complete and locally small and $R: C\to D$ preserves small limits, then $R$ has a left adjoint if and only if it satisfies the solution set condition.
The solution set condition can be stated as follows: for every object $Y \in D$, there exists a small set $I$ of maps $ Y \to R(X_i) $ that is "initial"; this means that for any $Z \in C$ and any map $Y \to R(Z) $, there exist some $i$ and a map $X_i \to Z$ so that
$$ Y \to R(Z) = Y \to R(X_i) \to R(Z) $$
The only if part is simple. If there exists a left adjoint $L$, then $\{LY\}$ will be the solution set: for any $Y \to R(Z) $, by adjunction you have $LY \to Z$ , and it is a classical result that
$$ Y \to R(Z) = Y \to RLY \to R(Z) $$
Conclusions. Note that $\bf CHaus$ is stable under products and quotients, which gives that $\bf CHaus$ is complete. It is also locally small, because $\operatorname{Hom}(X, Y) $ is a set for any spaces $X, Y$. The inclusion preserves limits, because both products and quotients in $\bf CHaus$ are computed as in $\bf Top$.
Since the hypotheses of GAFT are verified, the existence of the left adjoint is equivalent to finding a small solution set. Since any $f\colon X \to C$ with $C \in\bf CHaus$ factors through $\overline{\operatorname{Im} f} \in\bf CHaus$, you can take as solution set the compact Hausdorff spaces in which (the image of) $X$ is dense; these are bounded in cardinality, so they form a small set.
What I want you to focus on is that size can be a real issue, and there exist examples in which GAFT can't be applied; not because there is some argument involving classes that would do the job, but because the left adjoint does not exist at all. See example 3.1 at the nlab page:
https://ncatlab.org/nlab/show/adjoint+functor+theorem |
How to compute the partial derivatives of this function? | If $x,y,t$ are independent variables, you can consider $y$ as a constant when differentiating with respect to $x$ and use the Leibniz integral rule. So we have:
$$
\frac{\partial}{\partial x}\int_{-x}^y\sinh(xyt^2)dt =\sinh(x^3y)+\int_{-x}^y\left[\frac{\partial}{\partial x}\sinh(xyt^2)\right]dt
$$
and do the same for the derivative with respect to $y$. |
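A numerical check of the rule (a sketch; the sample point $x=0.5$, $y=0.8$ is arbitrary), comparing a central-difference derivative of $F(x,y)=\int_{-x}^y\sinh(xyt^2)\,dt$ with the Leibniz formula:

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

x, y, h = 0.5, 0.8, 1e-5
F = lambda x_: simpson(lambda t: math.sinh(x_ * y * t * t), -x_, y)

numeric = (F(x + h) - F(x - h)) / (2 * h)
formula = math.sinh(x**3 * y) + simpson(
    lambda t: y * t * t * math.cosh(x * y * t * t), -x, y
)
assert abs(numeric - formula) < 1e-6
```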
Is it acceptable to say that a divergent series that tends to infinity is 'equal to' infinity? | Yes - it is both very common and entirely correct to do so. There is a bit of formal trickery here because $\infty$ is not a number, but you can do analysis with it anyways - meaning limits and that sort of thing. In particular, there is a set called the affinely extended reals which is basically the real numbers $\mathbb R$ along with two new objects $\infty$ and $-\infty$, one at each 'end'. This is a topological space, meaning that you can take limits in it, but be careful that some things like $∞-∞$, $0·∞$, $0/0$ and $∞/∞$ are undefined.
Consider that, for real numbers $x$, the definition of a sequence $s_n$ converging to $x$ is as follows:
For any $\varepsilon >0$, there exists some $N$ such that if $n>N$ then $|s_n-x|<\varepsilon$.
This can be rewritten as saying:
For any open interval $I$ containing $x$, there exists some $N$ such that for all $n > N$ we have $s_n\in I$.
The idea behind either of these definition is that if we choose some "neighborhood" of $x$ - consisting of $x$ and at least some positive radius around $x$ - the sequence eventually is constrained in that neighborhood. More formally, a neighborhood of a real number is any set $S$ containing an open interval around $x$. Then, you can define convergence to $x$ as follows:
For any neighborhood $I$ of $x$, there exists some $N$ such that for all $n >N $ we have $s_n\in I$.
To define limits to $\infty$ and $-\infty$, one just needs to define their neighborhoods. In particular, $\infty$ is meant to be the "upper end" of the real line - and being close to $\infty$ means that a number is very large. So one defines a neighborhood of $\infty$ to be any set $I$ containing an interval of the form $(C,\infty]$ for some $C\in\mathbb R$. Then, we say
$\lim_{n\rightarrow\infty} s_n = \infty$ if for every neighborhood $I$ of $\infty$, there exists some $N$ such that if $n>N$ then $s_n\in I$.
This is equivalent to saying that $s_n$ converges to $\infty$ if, for every $C$, there exists an $N$ such that if $n>N$ then $s_n > C$ - which is the usual definition you find in textbooks (but note that it is actually a theorem - a consequence of the definition of $\infty$!) - and that in any context that you might allow a statement like $\lim_{n\rightarrow\infty}s_n = \infty$, you might as well be working in the extended reals.
Then, since infinite sums are just limits of partial sums, it is perfectly rigorous to write
$$\sum_{n=1}^{\infty}\frac{1}n = \infty$$
and to know that this truly means that the left hand side evaluates to $\infty$, not to think that this is some special statement where equality is not equality. This is actually very common in real analysis (the branch of mathematics dealing with limits, continuity, differentiability, and all that stuff) - especially in subfields like measure theory and sometimes in the theory of metric spaces as well.
However, it is also important to know that many people do not share the view that $\infty$ is always a perfectly valid object, defined by its neighborhoods. So, even though you would technically be right to write such an equality, it might not go over well with your audience nonetheless - and you should keep your audience in mind whenever you write anything because "formal correctness" is no substitute for "understood by your audience" - and you will often encounter examples of things which are technically correct, but might confuse or annoy your audience nonetheless.
(Sidenote: The limit $n\rightarrow\infty$ in the subscript $\lim_{n\rightarrow\infty}s_n$, as you might notice, is also defined by the neighborhoods: we get to restrict $s_n$ by forcing $n$ to lie in some neighborhood of $\infty$ that we get to choose - which is what's going on when we say that there's some $N$ so that if $n>N$, blah blah blah) |
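The harmonic-series example in code (a sketch): divergence to $\infty$ in the neighborhood sense means exactly that the partial sums eventually escape any bound $C$:

```python
def first_n_exceeding(C):
    """Smallest n with 1 + 1/2 + ... + 1/n > C."""
    s, n = 0.0, 0
    while s <= C:
        n += 1
        s += 1 / n
    return n

assert first_n_exceeding(5) == 83     # partial sums do pass 5 ...
assert first_n_exceeding(10) > 12000  # ... and 10, just very slowly
```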
Conditions for a split exact sequence in an Abelian category? | It's not so difficult to come up with a proof without elements, here's one. Let $q\colon B\twoheadrightarrow C$ be an epimorphism with a section $j\colon C\to B$ such that
$$\tag{1} q\circ j = id_C.$$
As we have a short exact sequence $0\to A\xrightarrow{i} B\xrightarrow{q} C \to 0$, the morphism $i\colon A\rightarrowtail B$ satisfies the universal property of $\ker q$. In particular, we have
$$\tag{2} q\circ i = 0.$$
We have $q\circ (id_B - j\circ q) = q - q\circ j\circ q = q - id_C \circ q = 0$, so by the universal property of kernels, there exists a canonical arrow $p\colon B \to A$ such that $i\circ p = id_B - j\circ q$; that is,
$$\tag{3} i\circ p + j\circ q = id_B.$$
Note that $i \circ p \circ i = (id_B - j \circ q) \circ i = i - j \circ \underbrace{q \circ i}_{= 0} = i$, and $i$ is a monomorphism, therefore
$$\tag{4} p\circ i = id_{A}.$$
We also have $i\circ p\circ j = (id_B - j\circ q) \circ j = j - j\circ \underbrace{q\circ j}_{= id} = j - j = 0$, and $i$ is a monomorphism, therefore
$$\tag{5} p\circ j = 0.$$
Now the identities (1)-(5) say that $B \cong A\oplus C$. |
fnding the probability of a joint density | Draw axes in the usual way, and draw the region $A$ on which our joint density function "lives." In this case, it is the triangle with corners $(1,0)$, $(1,1)$, and $(0,1)$.
For $X\lt 2Y$, draw the line $x=2y$, that is, $y=\frac{x}{2}$. The event $X\lt 2Y$ occurs if the pair $(X,Y)$ lands in the part of our triangle $A$ which is above the line $y=\frac{x}{2}$. Let $B$ be this region. Our probability is
$$\iint_B 6xy\,dx\,dy.$$
This integral is not completely pleasant to evaluate. It is substantially easier to find the probability that $(X,Y)$ lies in the part $C$ of $A$ which is below the line $ y=\frac{x}{2}$. Then our required probability is $1$ minus this. So our probability is
$$1-\iint_C 6xy\,dx \,dy.$$
We now evaluate the integral, by integrating first with respect to $y$, and then with respect to $x$. The line $y=\frac{x}{2}$ meets the hypotenuse $x+y=1$ of triangle $A$ at $x=\frac{2}{3}$. Thus
$$\iint_C 6xy \,dx\,dy=\int_{x=2/3}^1 \left(\int_{y=1-x}^{x/2} 6xy\,dy\right)\,dx.$$
For $XY\le \frac{1}{2}$, the procedure is similar. Draw the curve $xy=\frac{1}{2}$. Let $D$ be the part of $A$ which is below this hyperbola. We want $\iint_D 6xy\,dx\,dy$. The hyperbola is everywhere above the line $x+y=1$. As in the previous problem, it is easier to find first the probability that $XY\gt \frac{1}{2}$.
We leave the rest to you, but the probability that $XY\gt \frac{1}{2}$ is
$$\int_{x=1/2}^1 \left(\int_{y=1/(2x)}^{1} 6xy\,dy\right)\,dx.$$ |
Conditional expectation: when does $X_t=E[X_t\mid \mathcal{F}_s]$ for $s<t$ | Let's work it out (also let's finish the calculation):
$$
\begin{aligned}
E[B_s(B_t^2-t)]&=E[E[B_s (B_t^2-t) \mid \mathcal{F}_s]] \\
&=E[B_s E[(B_t^2-t) \mid \mathcal{F}_s]] \\
&=E[B_s (B_s^2-s)] \\
&=E[B_s^3]-E[sB_s] \\
&=0.
\end{aligned}
$$
The first equality is the tower property. The second equality is "factoring out what is known". The third equality is the martingale property for $B_t^2-t$. The rest is linearity and properties of Gaussian random variables. |
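A Monte Carlo sanity check (a sketch; the seed, sample size, and tolerance are arbitrary) that $E[B_s(B_t^2-t)]=0$ for $s=1<t=2$:

```python
import random

random.seed(0)
s, t, N = 1.0, 2.0, 200_000
total = 0.0
for _ in range(N):
    Bs = random.gauss(0.0, s ** 0.5)             # B_s ~ N(0, s)
    Bt = Bs + random.gauss(0.0, (t - s) ** 0.5)  # independent increment
    total += Bs * (Bt * Bt - t)
assert abs(total / N) < 0.06   # true mean is 0; std error here is ~0.011
```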
Number of $9-$digit numbers, all digits are different and nonzero and having no consecutive digits in consecutive positions | The first step of any problem in enumerative combinatorics is to compute the first few cases, then look the sequence up on OEIS. If we do that here, we find A002464:
Hertzsprung's problem: ways to arrange n non-attacking kings on an n X n board, with 1 in each row and column. Also number of permutations of length n without rising or falling successions.
There is no particularly nice formula for $A_n$, which dashes our hopes of finding a particularly nice solution.
Your approach of finding $A_n$ in terms of $A_{n-1}$ and $B_{n-1}$ seems like it might possibly lead to the recurrence $$A_n = (n+1)A_{n-1} - (n-2)A_{n-2} - (n-5)A_{n-3} + (n-3) A_{n-4}$$ after a lot more work. But the original source for this recurrence finds it algebraically, so it's not clear if a combinatorial interpretation is even known.
Taking an inclusion-exclusion type approach to this problem is slightly more straightforward. For this, we want to compute the number of $n$-digit numbers that at least contain some $r$ pairs of adjacent consecutive digits, in $c$ "clumps". For example, in $843256917$, there are $r=3$ adjacent pairs total: $(4,3)$, $(3,2)$, and $(5,6)$, in two clumps: $432$ and $56$.
We can:
permute the $c$ clumps and $n-r-c$ elements outside the clumps ($n-r$ objects total) in $(n-r)!$ ways,
orient the clumps in $2^c$ ways,
choose the sizes of the $c$ clumps (each with size at least $2$, total size $r+c$), in $\binom{r-1}{c-1}$ ways, and
choose the sizes of the $c+1$ gaps between the largest element of one clump and smallest element of the next, plus the start and end (each gap at least $0$, total size of gaps $n-r-c$) in $\binom{n-r}{c}$ ways.
This gives us the formula $$A_n = n! + \sum_{r=1}^{n-1} (-1)^r (n-r)! \sum_{c=1}^r 2^c \binom{r-1}{c-1} \binom{n-r}{c}.$$
(See the Art of Problem Solving thread for more discussion of this formula.)
Finally, we can approximate $A_n$ fairly closely by an asymptotic formula: there are $n-1$ pairs of consecutive elements, each of which is adjacent in a random permutation with probability $\frac{2}{n}$. Assuming independence (falsely), there is a $(1 - \frac{2}{n})^{n-1} \approx e^{-2}$ chance that no pair of consecutive elements is adjacent, which implies that $A_n \approx e^{-2} n!$.
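The inclusion-exclusion formula is easy to check against brute force for small $n$ (a sketch):

```python
from math import comb, factorial
from itertools import permutations

def A(n):
    """Inclusion-exclusion formula from above."""
    total = factorial(n)
    for r in range(1, n):
        inner = sum(
            2**c * comb(r - 1, c - 1) * comb(n - r, c) for c in range(1, r + 1)
        )
        total += (-1) ** r * factorial(n - r) * inner
    return total

def brute(n):
    """Count permutations of 1..n with no two adjacent consecutive values."""
    return sum(
        all(abs(p[i + 1] - p[i]) != 1 for i in range(n - 1))
        for p in permutations(range(1, n + 1))
    )

assert all(A(n) == brute(n) for n in range(2, 8))
assert [A(n) for n in range(1, 7)] == [1, 0, 0, 2, 14, 90]  # OEIS A002464
```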
Exactly K Successes in N Distinct Bernoulli Trials Each With Different Parameter $p_i$ | This is called Poisson Trials or Poisson Binomial Distribution. See the following paper on some techniques to solve this: www3.stat.sinica.edu.tw/statistica/oldpdf/A7n44.pdf |
Uniform convergence of restriction | Writing $f(t) = (2x)^{t}$, $\phi_n(x) = 2\frac{f(\frac{1}{n})-f(0)}{\frac{1}{n}}$,
then by MVT
$\phi_n(x) = 2(2x)^{t} \ln(2x)$ for some $t \in (0, \frac{1}{n})$.
Now we want to bound $\phi(x)-\phi_n(x)$ uniformly in $x$, where $\phi(x) = 2 \ln(2x)$.
We can say that $$|\phi_n(x) - \phi(x)| = \phi_n(x) - \phi(x)$$ $$ = 2 \ln(2x)((2x)^{t}-1) \leq 2 \ln(8)(8^{\frac{1}{n}}-1).$$ This is a uniform bound converging to zero in $n$.
What is the probability that smallest number is $6$ and largest is $15$? | The numerator is actually $\left(\begin{array}{c} 2 \\ 2 \end{array}\right)\left(\begin{array}{c} 8 \\ 3 \end{array}\right)$. You have to pick 6 and 15.
Also, you have to pick 3 numbers out of the set $\left\{7, \ldots, 14 \right\}$; this set has 8 elements. |
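A brute-force check (a sketch, assuming, as the binomial coefficients suggest, that $5$ numbers are drawn from $\{1,\dots,15\}$):

```python
from math import comb
from itertools import combinations

favorable = sum(
    1 for s in combinations(range(1, 16), 5) if min(s) == 6 and max(s) == 15
)
# pick 6 and 15, then 3 of the 8 numbers 7..14
assert favorable == comb(2, 2) * comb(8, 3) == 56
```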
Disproving the proposition that $a_n = (-1)^nn$ has a limit. | The proof is missing $\varepsilon$ if you intend to use the $\varepsilon-N$ definition of the limit for sequences. If $(a_n)$ had a limit $L<+\infty$ then so would $(b_n)$ where $b_n:=|a_n|$. This follows from triangle inequality
$$|b_n-|L||=||a_n|-|L||\leqslant |a_n-L|<\varepsilon$$
whenever $n\geqslant N$ for some large enough $N$. But $\lim_nb_n=\lim_n |a_n|=\lim_nn=+\infty$. |
Prove the following sequence converges to 2: | (I am assuming you meant $5a_{n+1}$ on the left side of your equation.) After having figured out that it converges you can usually find the limit by rewriting it like this:
$5a_\infty=a_\infty ^2+6$ and solve for $a_\infty$. One of the solutions is $2$. |
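A quick iteration sketch (the starting value is arbitrary) of $a_{n+1}=\frac{a_n^2+6}{5}$, converging to the fixed point $2$:

```python
a = 1.0
for _ in range(200):
    a = (a * a + 6) / 5   # a_{n+1} = (a_n^2 + 6) / 5
assert abs(a - 2) < 1e-9  # 2 solves 5a = a^2 + 6 and attracts nearby starts
```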
Examples for abstract class field theory? | The main goal of class field theory is to characterize Galois groups of abelian extensions of number fields (or of function fields in characteristic $p$), so your question about general profinite groups acting on discrete modules is not relevant, unless it concerns e.g. the cohomological approach to CFT, which starts from this kind of action. Anyway, if you ask for "nice actions" of $\Bbb Z_p$ or $\Bbb F_p^*$ of an arithmetic nature, here is a rather classical exercise: fix an odd prime $p$ and denote by $W_{p^n}$ the group of $p^n$-th roots of $1$, and by $W$ the union of all the $W_{p^n}$'s. Let $k = \Bbb Q(W_p)$ and $K = \Bbb Q(W)$, $g = \operatorname{Gal}(k/\Bbb Q)$ and $G = \operatorname{Gal}(K/k)$. Show that $g \cong \Bbb F_p^*$, $G \cong \Bbb Z_p$, and $\operatorname{Gal}(K/\Bbb Q) = g \times G$. Describe the action of $\operatorname{Gal}(K/\Bbb Q)$ on $W$ (hint: this action is given by a homomorphism of $\operatorname{Gal}(K/\Bbb Q)$ into $\Bbb Z_p^*$ called the "cyclotomic character"). |
Let $A$ be any non-empty set and $\alpha$ be an infinite cardinal . When can we say $|A^\alpha|=\alpha$ ? | Never.
Either $|A|=1$ in which case $|A^\alpha|=1$, or $|A|>1$ in which case $|A^\alpha|\geq 2^\alpha>\alpha$. |
Does $^nC_{n+1}$ exist? | The formula
$$C(m,n)=\frac{m!}{n!(m-n)!}$$
is valid only if $m\ge n\ge 0$.
$C(m,m+1)$ is the number of subsets of $m+1$ elements in a set of $m$. Since there are no such subsets, $C(m,m+1)=0$. |
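Python's `math.comb` happens to follow the same convention (shown as a quick aside):

```python
from math import comb

# no 4-element subsets of a 3-element set
assert comb(3, 4) == 0
assert comb(3, 3) == 1
```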
Almost perfect numbers | Let $s(n)$ denote the sum of positive divisors of $n$, itself excluded. Observe that $n^k-1=(n-1)(n^{k-1}+n^{k-2}+\cdots+1)$. This means if $n$ is almost perfect and so is $n^k$, then $$s(n^k)=n^k-1=s(n)(n^{k-1}+n^{k-2}+\cdots+1).$$
We'll prove that in general, $s(n)(n^{k-1}+n^{k-2}+\cdots+1)\leqslant s(n^k)$, and determine when the inequality is strict. Let $d_1,\ldots,d_{m}$ be the positive divisors of $n$, less than $n$. Every $d_in^j$ with $0\leqslant j\leqslant k-1$ is a divisor of $n^k$. Moreover, all $d_in^j$ are different for distinct choices of $i$ and/or $j$. Consequently,
$$s(n)(n^{k-1}+n^{k-2}+\cdots+1)\leqslant s(n^k).$$
When does equality occur? Not too often, it turns out:
Suppose $n$ has at least two prime divisors, say $p$ and $q$, with corresponding multiplicities $a$ and $b$. $d=p^{2a}$ is a divisor of $n^k$, but is it also of the form $d_in^j$? No, it isn't: clearly $n\mid d$ is impossible, and so is $d=d_i$. Hence $d$ would be a divisor of $n^k$ not occurring in $s(n)(n^{k-1}+n^{k-2}+\cdots+1)$, and the inequality is strict.
The only case left is where $n$ is a prime power. Note that $s(p^a)=\frac{p^a-1}{p-1}\leqslant p^a-1$, with equality only if $p=2$. So the only case left is where $n$ is a power of $2$, and you may note that for every $k\geqslant0$, $s(2^k)=2^k-1$, which means all powers of $2$ indeed satisfy the desired property. |
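A small numerical illustration of the argument (a sketch; $k=2$ and the sample values are arbitrary):

```python
def s(n):
    """Sum of positive divisors of n, n itself excluded."""
    return sum(d for d in range(1, n) if n % d == 0)

# powers of 2 are almost perfect: s(2^k) = 2^k - 1
assert all(s(2**k) == 2**k - 1 for k in range(1, 11))

# s(n)(n + 1) <= s(n^2): strict for n = 12 (two prime factors),
# equality for n = 8 (a power of 2)
assert s(12) * (12 + 1) < s(12**2)
assert s(8) * (8 + 1) == s(8**2)
```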
Are all 2-adic integers even or odd? | Yes. Every 2-adic integer $a$ has unique 2-adic expansion
$$a_0+a_1\cdot 2+a_2\cdot2^2+a_3\cdot2^3+\cdots\quad (\text{each $a_i$ is $\,0\,$ or $\,1\,$}).$$
If $a_0=0$, then $a=2(a_1+a_2\cdot2+a_3\cdot2^2+\cdots)$, and if $a_0=1$ then $a=1+2(a_1+a_2\cdot2+a_3\cdot2^2+\cdots).$ |
Prove or disprove: $\frac12-\frac{\psi _2^{(1)}(n+2)}{\log 4}$ have infinite zeros | Write the $q$-polygamma function as a sum:
$$\Gamma_2(z) = 2^{(z - 1) (z - 2)/2} \hspace{1px}\Gamma_{1/2}(z), \\
\psi_2(z) = \frac 1 {\Gamma_2(z)} \frac d {dz} \Gamma_2(z) =
\psi_{1/2}(z) + z \ln 2 - \frac {3 \ln 2} 2 =
-\ln 2 \left(
\frac 1 2 - z + \sum_{k \geq 0} \frac 1 {2^{k + z} - 1} \right), \\
\frac 1 2 - \frac 1 {2 \ln 2} \frac d {dz} \psi_2(z) =
-\frac {\ln 2} 2 \sum_{k \geq 0} \frac {2^{k + z}} {(2^{k + z} - 1)^2}.$$
Setting $z = n + 2$ gives your function. It's singular at $n = -2, -3, \dots$ and negative everywhere else. |
$\mathbb E[\frac{\partial}{\partial\theta}\log f(X;\theta)]^2$ and $\mathbb E[\frac{\partial^2}{\partial\theta^2}\log f(X;\theta)]$ | We wish to find the expectations of these functions of the Cauchy random variable.
Let $f_X(x;\theta) = \frac{1}{\pi(1+(x-\theta)^2)}$, with $x\in(-\infty,+\infty)$ and $\theta\in(-\infty,+\infty)$.
$$
\begin{aligned}
\log f_X(x;\theta) & = (-1)\log[\pi(1+(x-\theta)^2)] \\
& = (-1)[\log \pi + \log(1+(x-\theta)^2)] \\
& = -\log \pi - \log(1+(x-\theta)^2)
\end{aligned}
$$
$$
\begin{aligned}
\frac{\partial \log f_X(x;\theta)}{\partial \theta} & = 0 + (-1)\frac{1}{(1+(x-\theta)^2)}\frac{\partial}{\partial \theta}(1+(x-\theta)^2) \\
& = (-1)\frac{1}{(1+(x-\theta)^2)}\cdot 2 (x-\theta)^1 \cdot (-1) \\
& = (+1)\cdot \frac{2(x-\theta)}{1+(x-\theta)^2} = \frac{2x-2\theta}{1+(x-\theta)^2}
\end{aligned}
$$
where we have used the facts:
$$
\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d}x} \log x & = \frac1x \\
\frac{\mathrm{d}}{\mathrm{d}x} \log u(x) & = \frac{1}{u(x)} \frac{\mathrm{d}}{\mathrm{d}x} u(x)
\end{aligned}
$$
$$
\begin{aligned}
\frac{\partial^2 \log f_X(x;\theta)}{\partial \theta^2} & = \frac{(-2)(1+(x-\theta)^2) - 2(x-\theta)^1 (-1)(2x-2\theta)}{(1+(x-\theta)^2)^2} \\
& = \frac{-2\cdot(1+(x-\theta)^2)+2(x-\theta)\cdot2(x-\theta)}{(1+(x-\theta)^2)^2} \\
& = \frac{-2-2(x-\theta)^2 + 4(x-\theta)^2}{(1+(x-\theta)^2)^2} \\
& = \frac{-2 + 2(x-\theta)^2}{(1+(x-\theta)^2)^2} \\
& = \frac{2\cdot((x-\theta)^2 - 1)}{(1+(x-\theta)^2)^2}
\end{aligned}
$$
Let: $$
\begin{aligned}
A & = \left(\frac{\partial \log f_X(x;\theta)}{\partial \theta}\right)^2 = \frac{4(x-\theta)^2}{(1+(x-\theta)^2)^2} \\
B & = \frac{\partial^2 \log f_X(x;\theta)}{\partial \theta^2} = \frac{2((x-\theta)^2 - 1)}{(1+(x-\theta)^2)^2}
\end{aligned}
$$
As a hint, we will only be solving $E_X[A]$. The same mathematical principles can be used to solve $E_X[B]$.
$$
\begin{aligned}
E_X[A] & = \int_{-\infty}^{+\infty} A\cdot f_X(x;\theta)\,\mathrm{d}x \\
& = \int_{-\infty}^{+\infty} \frac{4\cdot(x-\theta)^2}{(1+(x-\theta)^2)^2} \cdot \frac{1}{\pi(1+(x-\theta)^2)}\,\mathrm{d}x \\
& = \int_{-\infty}^{+\infty} \frac{4}{\pi}\cdot \frac{(x-\theta)^2}{(1+(x-\theta)^2)^3} \, \mathrm{d}x
\end{aligned}
$$
Using u-substitution, let $u = x-\theta$, $\mathrm{d}u = \mathrm{d}x$, $u_\mathrm{lower} = -\infty - \theta = -\infty$, $u_\mathrm{upper} = +\infty - \theta = +\infty$.
Then,
$$
E_X[A] = \int_{-\infty}^{+\infty} \frac{4}{\pi} \frac{u^2}{(1+u^2)^3} \, \mathrm{d}u
$$
Now, recall the Trigonometric Identities: $\sin^2 x + \cos^2 x = 1$, $1 + \tan^2 x = \sec^2 x$, $\sec x = \frac{1}{\cos x}$. Furthermore, recall the following Derivatives of Trigonometric Functions: $\frac{\mathrm{d}}{\mathrm{d}x}\sin x = \cos x$, $\frac{\mathrm{d}}{\mathrm{d}x}\cos x = -\sin x$, and $\frac{\mathrm{d}}{\mathrm{d}x}\tan x = \sec^2 x$.
Applying the Trigonometric Substitution Technique, let $u = \tan \theta$, $\mathrm{d}u = \sec^2 \theta \mathrm{d}\theta$. This implies $\theta = \tan^{-1}u$. Then, $\theta_\textrm{lower} = \tan^{-1}(-\infty) = -\frac\pi 2$ and $\theta_\textrm{upper} = \tan^{-1}(+\infty) = +\frac\pi 2$.
$$
\begin{aligned}
E_X[A] & = \int_{-\frac \pi 2}^{+\frac \pi 2} \frac{4}{\pi} \cdot \frac{\tan^2 \theta}{(1+\tan^2 \theta)^3} \sec^2 \theta \, \mathrm{d}\theta \\
& = \int_{-\frac \pi 2}^{+\frac \pi 2} \frac{4}{\pi} \cdot \frac{\tan^2 \theta (\sec^2 \theta)^1}{(\sec^2 \theta)^3} \, \mathrm{d} \theta \\
& = \int_{-\frac \pi 2}^{+\frac \pi 2} \frac{4}{\pi}\cdot \frac{\tan^2 \theta}{(\sec^2 \theta)^2}\, \mathrm{d}\theta \\
& = \int_{-\frac \pi 2}^{+\frac \pi 2} \frac{4}{\pi} \sin^2 \theta \cos^2 \theta \, \mathrm{d}\theta \\
& = \int_{-\frac \pi 2}^{+\frac \pi 2} \frac{4}{\pi} (1-\cos^2 \theta)\cos^2 \theta \, \mathrm{d}\theta \\
& = \int_{-\frac \pi 2}^{+\frac \pi 2} \frac{4}{\pi}(\cos^2 \theta - \cos^4 \theta)\,\mathrm{d}\theta
\end{aligned}
$$
Recall the following Properties of Even Functions: If $f(x)$ is even, then $\int_{-A}^{+A}f(x)\,\mathrm{d}x = 2\int_{0}^{+A}f(x)\,\mathrm{d}x$. If $f(x)$ is even, then $[f(x)]^n$ is even.
Then,
$$
E_X[A] = \frac{8}{\pi}\int_{0}^{+\frac \pi 2}(\cos^2 \theta - \cos^4 \theta)\,\mathrm{d}\theta.
$$
Now, we take a slight diversion and calculate a Useful Integral.
$$
C(n) = \int_{0}^{+\frac \pi 2}\cos^n x \, \mathrm{d}x = \int_{0}^{+\frac \pi 2} (\cos^1 x)(\cos^{n-1}x)\, \mathrm{d}x.
$$
Recall Integration by Parts: $\int_{a}^{b}u\, \mathrm{d}v = uv|_{a}^{b} - \int_{a}^{b}v\,\mathrm{du}$. Let $u = \cos^{n-1}x = (\cos x)^{n-1}$, $\mathrm{d}v = \cos x\, \mathrm{d}x$. Then, $\frac{\mathrm{d}}{\mathrm{d}x}u = \frac{\mathrm{d}}{\mathrm{d}x}(\cos x)^{n-1} = (n-1)(\cos x)^{n-2}(-1)(\sin x)$, and $v = \sin x$. The calculations below hold for $n \neq 1$. (If $n = 1$, we have the expression $0^0$, which is undefined, so we should calculate the $n = 1$ case separately.)
$$
\begin{aligned}
C(n) & = \int_{0}^{+\frac \pi 2} \underbrace{(\cos^{n-1} x)}_{u}\underbrace{(\cos^1 x)\, \mathrm{d}x}_{\mathrm{d}v} \\
& = \cos^{n-1}x \sin x |_{0}^{+\frac \pi 2} - \int_{0}^{+\frac \pi 2}\sin x (n-1) (\cos x)^{n-2} (-1)\sin x \, \mathrm{d}x \\
& = \underbrace{\left(\cos \frac \pi 2\right)^{n-1}}_{0} \sin \frac \pi 2 - (\cos 0)^{n-1}\underbrace{\sin 0}_0 + \int_{0}^{+\frac \pi 2}(\sin^2 x) (n-1)(\cos x)^{n-2}\, \mathrm{d}x \\
& = \int_{0}^{+\frac \pi 2} (\sin^2 x)(n-1)(\cos^{n-2}x)\, \mathrm{d}x \\
& = \int_{0}^{+\frac \pi 2} (1-\cos^2 x)(n-1)(\cos^{n-2}x)\, \mathrm{d}x \\
& = (n-1)\int_{0}^{+\frac \pi 2} (1-\cos^2 x)\cos^{n-2}x\,\mathrm{d}x \\
& = (n-1) \int_{0}^{+\frac \pi 2}(\cos^{n-2}x - \cos^n x)\, \mathrm{d}x \\
& = (n-1)\left[\int_{0}^{+\frac \pi 2}\cos^{n-2} x\, \mathrm{d}x - \int_{0}^{+\frac \pi 2}\cos^n x \, \mathrm{d}x\right] \\
& = (n-1)[C(n-2) - C(n)]
\end{aligned}
$$
Then,
$$
\begin{aligned}
C(n)+(n-1)C(n) = (n-1)C(n-2) & \Rightarrow C(n)(1 + (n-1)) = (n-1)\cdot C(n-2) \\
& \Rightarrow C(n) = \frac{(n-1)C(n-2)}{n}
\end{aligned}
$$
Plugging in the relevant values of $n$,
$$
\begin{aligned}
C(0) & = \int_{0}^{+\frac \pi 2} (1)\,\mathrm{d}x = + \frac \pi 2 \\
C(2) & = \frac12 \cdot C(0) = \frac12 \cdot \frac \pi 2 = +\frac \pi 4 \\
C(4) & = \frac34 \cdot C(2) = \frac34 \cdot \frac \pi 4 = \frac{3}{16}\pi.
\end{aligned}
$$
Returning to where we left off,
$$
\begin{aligned}
E_X[A] & = \frac{8}{\pi}\int_{0}^{+\frac \pi 2} (\cos^2 \theta - \cos^4 \theta)\, \mathrm{d}\theta \\
& = \frac{8}{\pi}\left[\int_{0}^{+\frac \pi 2} \cos^2 \theta \, \mathrm{d}\theta - \int_{0}^{+\frac \pi 2}\cos^4 \theta \, \mathrm{d}\theta\right] \\
& = \frac{8}{\pi}[C(2)-C(4)] \\
& = \frac{8}{\pi}\left[\frac{\pi}{4} - \frac{3\pi}{16}\right] \\
& = \frac{8}{\pi}\left[\frac{4\pi}{16} - \frac{3\pi}{16}\right] = \frac{8}{\pi}\cdot \frac{\pi}{16} = \frac12.
\end{aligned}
$$ |
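As a quick numerical sanity check (my addition, not part of the derivation above), a simple midpoint rule applied to the final integral reproduces $\frac12$:

```python
import math

def ex_a(n=100_000):
    # midpoint-rule approximation of (8/pi) * ∫_0^{π/2} (cos²θ − cos⁴θ) dθ
    h = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        c2 = math.cos(t) ** 2
        total += (c2 - c2 * c2) * h
    return (8 / math.pi) * total

print(ex_a())  # ≈ 0.5
```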
Derivative $\Lambda(\phi,\psi)=\frac{\langle\phi|A|\psi\rangle}{\langle\phi|\psi\rangle}$ | First let's justify the relation, which using the model of functions to represent the states, says that
$$
\frac{\int \left(\phi^\dagger(x) + \epsilon \alpha^\dagger(x)\right)A(\psi(x)) \,dx}{\int \left(\phi^\dagger(x) + \epsilon \alpha^\dagger(x)\right)(\psi(x)) \,dx} - \frac{\int \phi^\dagger(x) A(\psi(x)) \,dx}{\int \phi^\dagger(x) (\psi(x)) \,dx}
= \epsilon \frac{\int \alpha^\dagger(x)A(\psi(x)) \,dx}{\int \phi^\dagger(x) (\psi(x)) \,dx} - \epsilon \frac{\int \phi^\dagger(x) A(\psi(x)) \,dx} {\left(\int \phi^\dagger(x) (\psi(x)) \,dx\right)^2} \int \alpha^\dagger(x) \psi(x)\, dx + O(\epsilon^2)
$$
Using perturbation methods, expand (for small $\epsilon$) the first term on the left using the usual trick of multiplying the denominator by
$$
\frac {\int \left(\phi^\dagger(x) - \epsilon \alpha^\dagger(x)\right)(\psi(x)) \,dx}{\int \left(\phi^\dagger(x) - \epsilon \alpha^\dagger(x)\right)(\psi(x)) \,dx}
$$
This rearranges into the $\epsilon=0$ piece being subtracted off, plus the expression on the right of your relation. However, to get this rearrangement, two properties of the functions $\phi, \alpha, \psi$ are required:
All three must fall off sufficiently rapidly that the needed manipulations of the integrals are not rendered meaningless.
Unless I have done something wrong, I need to assume $\alpha$ commutes with $\psi$ (or, it turns out, with $\phi$) in order to manipulate to get that nice expression.
So when you have an operator which commutes with the test states, your relation gives an expression for the derivative of the matrix element for that operator. But I think you have not seen this relation much because it is wrong for operators in general, and it would be "trappy" to blithely present it with the caveat that you must be careful to use it only in the commutative case.
If $f(z)$ is real periodic and $g(z)$ is complex periodic , Can $g(z+f(z))$ be periodic? | Yes. There are functions $\theta(z)$, with a 1-cyclic real period, and $g(z), \;h(z)$, are complex periodic functions where $h(z)=g(z+\theta(z))$ and $g(z)$ has a different complex period than $h(z)$.
I generate an example solution below, using complex dynamics. Consider iterating the function $f(z)=z^2+z-0.01$. Also, we could use other small negative values besides $-0.01$ to generate two fixed points.
$f(z)$ has two fixed points, -0.1 and 0.1, where $f(-0.1)=-0.1\;\;\;\;f(0.1)=0.1\;\;\;\;$ You can develop the Schroeder function at either fixed point. 0.1 is a repelling fixed point, -0.1 is an attracting fixed point.
Now consider the first half of the solution to the Op's function as $g(z)=f^{o z}(0)\;\;\;\;g(z+1)=f(g(z))$, where $f^{o z}(0)$ is generated from the +0.1 fixed point using the inverse Abel function $\alpha^{-1}(z)$, and the Abel function is generated from standard Schroeder function solution around the +0.1 fixed point. $g(z)$ is a complex periodic function with a period of $\frac{2\pi i}{\ln(\lambda)}$ where $\lambda$ is the multiplier at the fixed point. For the fixed point of 0.1, $\lambda=\frac{6}{5}$ since $f(0.1+x)=0.1+\frac{6x}{5}+x^2$, and the period is $\approx 34.462i$, so the $g(z)$ function is imaginary periodic, and is generated from the Schroeder equation for $y \mapsto \frac{6}{5}y+y^2;\;\; y=z-0.1\;\;\;\;$ Also, $g(z)$ turns out to be entire. Here is the periodic Fourier series form for $g(z)$.
$$g(z) = 0.1 + \sum_{n=1}^{\infty} a_n \left( \frac{6}{5} \right) ^ {nz} $$
Now, consider another function $h(z)$, which is generated from the other -0.1 fixed point in much the same way. It turns out that $h(z)=g(z+\theta(z))$, where $\theta(z)$ is a 1-cyclic periodic function, $\theta(z+1)=\theta(z)$. But first, back to $h(z)$.
$h(z)=f^{o z}(0)\;\;\;\;h(z+1)=f(h(z))$, but this time we are using the other fixed point of -0.1, which is an attracting fixed point. $h(z)$ is generated from the -0.1 fixed point using the inverse Abel function $\alpha^{-1}(z)$, and the Abel function is generated from the standard Schroeder function solution around the -0.1 fixed point. $h(z)$ is a complex periodic function with a period of $\frac{2\pi i}{\ln(\lambda)}$ where $\lambda$ is the multiplier at the fixed point. For the fixed point of -0.1, $\lambda=\frac{4}{5}$ since $f(-0.1+x)=-0.1+\frac{4x}{5}+x^2$, and the period is $\approx 28.157i$. The $h(z)$ function is imaginary periodic, and is generated from the Schroeder equation for $y\mapsto \frac{4}{5}y+y^2;\;\; y=z+0.1\;\;\;\;$ $h(z)$ has square root branches where $h(z)=-0.5$. These singularities occur at half the imaginary period of $h(z)$, plus any multiple of the period; one of them is near $z\approx 4.0831 \pm 14.0788i$. $h(z)$ is analytic to the right of this singularity, and also in the strip $|\Im(z)| \lessapprox 14.0788$.
$$h(z) = -0.1 + \sum_{n=1}^{\infty} b_n \left( \frac{4}{5} \right) ^ {nz} $$
Both $h(z)$ and $g(z)$ are real valued at the real axis; both have the same defining iteration equations, and for integer values of z, $h(z)=g(z)$. They disagree very slightly on fractional iterations, which is what leads to $\theta(z)$. Both h(z) and g(z) have a limiting value of -0.1 as real z gets arbitrarily large positive, and a limiting value of 0.1 as real z gets arbitrarily large negative. g(z) is well defined for complex values of z if $\Re(z)$ is a large negative number, where $g(z)\approx 0.1+1.2^{z+k}$, and is imaginary periodic, with values cycling around 0.1 with an imaginary period of $\approx 34.462i$. At half the imaginary period, $g(z)$ grows from the 0.1 repelling fixed point and gets arbitrarily large. $h(z)$ is well defined in the complex plane if $\Re(z)$ is a large positive number, where $h(z)\approx -0.1$, and $h(z)$ is also imaginary periodic, and $h(z) \approx -0.1 + 0.8^{z+k}$ is cycling around -0.1 with a different imaginary period of $\approx 28.157i$.
So, it turns out that there is a small 1-cyclic periodic function $\theta(z)$. $\theta(z)$ is analytic at the real axis, and has a 1-cyclic singularity at $z \approx n + 0.0831 \pm 14.0788i$, where the singularities of $h(z)$ are. And, an example solution to the Op's question becomes:
$$h(z)= g(z+\theta(z))$$
Here is a picture of the iterated function $f^{o z}(0)$ at the real axis, from -25 to +25, where it gradually goes from +0.1 to -0.1, where $f(z)=z^2+z-0.01$ Integer values of $f^{o z}$ can be trivially generated by iterating $z \mapsto z^2+z-0.01$ or iterating the inverse function, $f^{-1}(z)$ which is $z \mapsto \sqrt{z+0.26}-0.5$. For fractional iterations of $f^{o z}$, there are two very slightly different solutions, depending on whether we develop the iterated function from the attracting fixed point of -0.1 at $+\infty$ or the repelling fixed point of +0.1 at $-\infty$. These two solutions for $f^{o z}(0)$ can be generated from the two formal Schroeder equations for the two fixed points, although the formal solution needs to be scaled so that g(0)=0 and h(0)=0. I can add later.
Here is a graph of the tiny real valued 1-cyclic $\theta(z)$ function at the real axis, from -1 to +1. As you can see, its magnitude is $|\theta(z)| \lt 8 \cdot 10^{-40}$
Here is a graph of $\left(g(z)-0.1\right)$ at $\;\;\Re(z)=-25$. I used the iteration function $y \mapsto \frac{6y}{5}+y^2$ where $z=y+0.1$ and $z \mapsto z+z^2-0.01$. Here, $g(z)$ has an imaginary period of $\frac{2\pi i}{\ln(1.2)}\approx 34.462i$. The graph goes from $z=-25-18i$ to $z=-25+18i\;\;$ Red is imaginary and magenta is real. A similar graph could be generated for $h(z)+0.1$ at $\Re(z)=+25$, but the imaginary period would be $\approx 28.158i$. |
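As a quick numerical check of the fixed-point behavior described above (my own addition): forward iteration of $f$ converges to the attracting fixed point $-0.1$, while iterating the inverse branch converges to the repelling fixed point $+0.1$.

```python
def f(z):
    return z * z + z - 0.01

def f_inv(z):
    # inverse branch z ↦ sqrt(z + 0.26) − 0.5, as given above
    return (z + 0.26) ** 0.5 - 0.5

z = 0.0
for _ in range(2000):   # forward iteration → attracting fixed point −0.1
    z = f(z)

w = 0.0
for _ in range(2000):   # inverse iteration → repelling fixed point +0.1
    w = f_inv(w)

print(z, w)  # ≈ -0.1 and ≈ +0.1
```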
Natural transformation of groups | Let's spell everything out. We view the group $B$ as a category with one object $*_B$, the only Hom-set is $\text{End}(*_B)=B$ and composition is given by the group law. Similar for $C$. If $S\colon B\to C$ is a functor, then $S$ consists of the following data: $S(*_B)=*_C$ and for each $b\in \text{End}(*_B)$, $S(b)\in \text{End}(*_C)$ such that $S(bb')=S(b)S(b')$.
A natural transformation $\eta\colon S\Rightarrow T$ consists of the following data:
$\eta_{*_B}\in \text{End}(*_C)=C$;
For all $b\in \text{End}(*_B)=B$, we have $T(b)\eta_{*_B}=\eta_{*_B}S(b)$.
The second condition is simply $$T(b)=\eta_{*_B}S(b)\eta_{*_B}^{-1}.$$ This is precisely saying that $S$ and $T$ are conjugate.
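As a concrete illustration (my own example, taking $B=C=S_3$ and $S$ the identity homomorphism), one can check by brute force that conjugating by a fixed element $\eta$ produces another homomorphism and satisfies the naturality square:

```python
from itertools import permutations

def comp(p, q):
    # composition of permutations as tuples: (p ∘ q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def inv(p):
    r = [0] * len(p)
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

S3 = list(permutations(range(3)))
eta = (1, 2, 0)                       # the single component η ∈ C
S = {b: b for b in S3}                # S = identity homomorphism
T = {b: comp(comp(eta, S[b]), inv(eta)) for b in S3}   # T(b) = η S(b) η⁻¹

for b in S3:
    assert comp(T[b], eta) == comp(eta, S[b])          # naturality square
    for c in S3:
        assert T[comp(b, c)] == comp(T[b], T[c])       # T is a homomorphism
print("ok")
```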
Expected value for a binomial distribution | Generally if you want to calculate moments of the binomial distribution, using the Stirling's number of the second kind would be the best approach, due the identity:
$$
x^n=\sum_{k=0}^n{n \brace k}\prod_{j=0}^{k-1}(x-j)
$$
And:
$$
j(j-1)\ldots(j-t+1)\binom{n}{j}=
j(j-1)\ldots(j-t+1)\frac{n!}{j!(n-j)!}=
$$
$$
=n(n-1)\ldots(n-t+1)\frac{(n-t)!}{(j-t)!(n-j)!}=n(n-1)\ldots(n-t+1)\binom{n-t}{j-t}
$$
$$
\sum_{k=0}^n k^l\binom{n}{k}p^{k}(1-p)^{n-k}=\sum_{k=0}^n \sum_{m=0}^l{l \brace m}\prod_{j=0}^{m-1}(k-j)\binom{n}{k}p^{k}(1-p)^{n-k}=
$$
$$
=\sum_{k=0}^n \sum_{m=0}^l{l \brace m}\prod_{j=0}^{m-1}(n-j)\binom{n-m}{k-m}p^{k}(1-p)^{n-k}=...
$$
(The rest is the same as in the comment) |
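A sketch of this approach in code (my own addition), using the resulting closed form $E[X^l]=\sum_m {l\brace m}\,n(n-1)\cdots(n-m+1)\,p^m$ and checking it against the direct sum:

```python
from math import comb
from fractions import Fraction

def stirling2(l, m):
    # Stirling numbers of the second kind via the standard recurrence
    if m == 0:
        return 1 if l == 0 else 0
    if m > l:
        return 0
    return m * stirling2(l - 1, m) + stirling2(l - 1, m - 1)

def moment_stirling(n, p, l):
    # E[X^l] = Σ_m S(l,m) · n(n−1)···(n−m+1) · p^m for X ~ Bin(n, p)
    total = Fraction(0)
    for m in range(l + 1):
        falling = 1
        for j in range(m):
            falling *= n - j
        total += stirling2(l, m) * falling * p ** m
    return total

def moment_direct(n, p, l):
    q = 1 - p
    return sum(Fraction(k) ** l * comb(n, k) * p ** k * q ** (n - k)
               for k in range(n + 1))

p = Fraction(1, 3)
print(moment_stirling(10, p, 3), moment_direct(10, p, 3))  # → 60 60
```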
Applications on derivative | the derivative of the curve, $\frac{dy}{dx} = 6x^2$.
For $x=2$, the slope of the curve is equal to $24$.
For $x=-2$ the slope of the curve is also equal to $24$.
Hence, the tangents on the curve will have the same slope on $x=2$ and $x=-2$. Conclusion: The tangents are parallel. |
Number of real or purely imaginary solution | A complex value is only equal to zero if both it's real and imaginary part are zero.
If $z$ is real then $z^3 - 1 + iz$ has imaginary part $z$. So the only possible real solution would be $z = 0$, but then the real part is $z^3-1=-1\neq 0$, so this is not valid.
If $z$ is purely imaginary then it can be written as $ir$ for real $r$. Substituting $z = ir$ into the formula gives us:
$$(ir)^3 +i^2r - 1 = 0$$
$$- r - 1 -ir^3 = 0$$
$$-r -1 = 0 \wedge -r^3 = 0$$
$$r = -1 \wedge r = 0$$
This also forms a contradiction, thus there are no purely real or imaginary solutions. |
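A numerical cross-check of this conclusion (my own addition): computing the three roots of $z^3+iz-1$ with a simple Durand–Kerner iteration shows that none of them lies on the real or imaginary axis.

```python
def p(z):
    return z ** 3 + 1j * z - 1

def roots_durand_kerner(iters=500):
    # simultaneous root iteration, standard starting points (0.4+0.9i)^k
    rs = [(0.4 + 0.9j) ** k for k in range(3)]
    for _ in range(iters):
        rs = [z - p(z) / ((z - rs[(i + 1) % 3]) * (z - rs[(i + 2) % 3]))
              for i, z in enumerate(rs)]
    return rs

for r in roots_durand_kerner():
    # every root has both a nonzero real part and a nonzero imaginary part
    print(r, abs(r.real) > 0.01, abs(r.imag) > 0.01)
```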
References for Algebraic number theory | Serge Lang's. Algebraic Number Theory is a good text also. |
What is the motivation of the graph filter in the Graph Convolutional Networks? | I think the graph filter make a role as localization the node with its neighborhoods. To make them share the properties with it. And nice paper recently explain this trick |
Is there a definite integral that evaluates to the constant $e$? | $e$ is not a period i.e. not a number that can be represented by the integral of a rational or irrational function over a domain defined by rational functions. The periods form a subring of $\mathbb{C}$. There is a very good article by D. Zagier and M.Kontsevitch on periods. Don't have the reference but Google should help |
How to solve simple Diophantine equation coming from solutions in modulos. | Apply the extended Euclidean Algorithm; see Wikipedia
First solve ax + by = d where d is the GCD; then, provided d divides z, multiply the solution by z/d to get a solution to ax + by = z.
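A minimal sketch of this recipe in code (my addition; the function names are my own):

```python
def egcd(a, b):
    # extended Euclid: returns (g, x, y) with a·x + b·y = g = gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def solve_linear(a, b, z):
    # one integer solution of a·x + b·y = z, or None when gcd(a, b) does not divide z
    g, x, y = egcd(a, b)
    if z % g:
        return None
    return x * (z // g), y * (z // g)

print(solve_linear(240, 46, 2))  # → (-9, 47), since gcd(240, 46) = 2
```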
In how many ways can you put two people in $5$ seats in a row if they cannot sit together? | Hint:
$$\text{Find the number of ways they can sit (without restrictions).}\tag1$$
$$\text{Then, find the number of ways they cannot sit.}\tag2$$
$$\text{Answer}=(1)-(2)$$
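For concreteness (my addition): with $5$ seats and $2$ people the hint gives $5\cdot4-2\cdot4=12$, which a brute force confirms:

```python
from itertools import permutations

total = adjacent = 0
for a, b in permutations(range(5), 2):   # seats for the two (distinct) people
    total += 1
    if abs(a - b) == 1:                  # they ended up next to each other
        adjacent += 1

print(total, adjacent, total - adjacent)  # → 20 8 12
```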
Finding a solution to $xu_x+(y+1)u_y=u-1$ given an initial condition | Follow the method in http://en.wikipedia.org/wiki/Method_of_characteristics#Example:
$\dfrac{dx}{dt}=x$ , letting $x(0)=1$ , we have $x=e^t$
$\dfrac{dy}{dt}=y+1$ , letting $y(0)+1=y_0$ , we have $y+1=y_0e^t=y_0x$
$\dfrac{du}{dt}=u-1$ , we have $u(x,y)=F(y_0)e^t+1=xF\left(\dfrac{y+1}{x}\right)+1$
$u(x,2x-1)=e^x$ :
$xF(2)+1=e^x$ , which is impossible.
$\therefore$ There is no solution. |
Show that this function is uniformly continuous on $\mathbb{R}^d$ | It is much simpler:
If we set $\hat f:=u\cdot f$ then $\hat f\in C_b({\mathbb R})$, and $\hat f$ vanishes identically on ${\mathbb R}\setminus I_{n+1}$.
Claim: The function $\hat f$ is uniformly continuous on ${\mathbb R}$.
Proof. Let an $\epsilon>0$ be given. As $\hat f$ is continuous on the compact set $I_{n+2}$ there is a $\delta\in\>]0,1[\>$, such that $|\hat f(x)-\hat f(y)|<\epsilon$ whenever $x$, $y\in I_{n+2}$ and $|x-y|<\delta$. In reality this $\delta$ works for all of ${\mathbb R}$: When $x, \>y\in{\mathbb R}$ have distance $<\delta<1$, and they are not both in $I_{n+2}$, then both are $\notin I_{n+1}$, hence $\hat f(x)=\hat f(y)=0$.
For the $d$-dimensional case replace $I_n$ by the $d$-dimensional ball $B_n:=\bigl\{x\in{\mathbb R}^d\bigm| \|x\|\leq n\bigr\}$. |
perumations and combinations question with strings [want to check answers] | The very first step to this series of questions is to try to decide on a convenient sample space to use.
Our sample space in this problem is the set of four letter words which use the letters $\{a,b,c,d,e\}$, and I will denote the set by the name $S$. This is a particularly useful sample space because the problem implies that we are selecting one of these words uniformly at random, so each outcome in the sample space is equally likely, i.e. it is an equiprobable sample space.
When we have an equiprobable sample space like this, the probability of an event occurring is the ratio of the number of outcomes in the event over the number of outcomes in the sample space. That is to say $$Pr(A)=\frac{|A|}{|S|}$$
(note: this is ONLY guaranteed to be true in an equiprobable sample space. if the outcomes are not equally likely the formula above might not be true)
In our question here, $|S|$ can be calculated using the multiplication principle, and we learn that $|S|=5^4$.
For part (a), you correctly calculated the number of words with all different letters to be $5\cdot 4\cdot 3\cdot 2$. Letting $A$ be the event that all letters are different, we have then $|A|=5\cdot 4\cdot 3\cdot 2$. This implies then that $$Pr(A)=\frac{|A|}{|S|}=\frac{5\cdot 4\cdot 3\cdot 2}{5^4}$$ as you correctly eventually got in the comments above.
For part (b), again you correctly calculated the number of words with all letters being consonants (i.e. no vowels) to be $3\cdot 3\cdot 3\cdot 3=3^4$. Letting $B$ be the event that there are no vowels, that is then to say that $|B|=3^4$
We see then that $Pr(B)=\frac{|B|}{|S|}=\dots$
For part (c) you could approach directly with counting to find that there are $2\cdot 5\cdot 5\cdot 5$ words which begin with a vowel to get $Pr(C)=\frac{|C|}{|S|}=\frac{2\cdot 5^3}{5^4}=\frac{2}{5}$, but it is probably easier to think about it directly using a probabilistic argument by noting that only the first letter matters in our calculation.
For part (d), as already mentioned in the comments above linearity of expectation implies that $$E[X+Y]=E[X]+E[Y]$$
Letting $X_i$ be the random variable $X_i=\begin{cases}1&\text{if the}~i\text{'th letter is a consonant}\\0&\text{otherwise}\end{cases}$ we see that the total number of consonants in our word is $X=X_1+X_2+X_3+X_4$
We have then that $E[X]=E[X_1]+E[X_2]+E[X_3]+E[X_4]$
Now, remember the definition of expected value. $E[X]=\sum\limits_{k\in \Delta} k\cdot Pr(X=k)$ where $\Delta$ is the set of possible values that $X$ can take.
In our specific case, $E[X_1]=0\cdot Pr(X_1=0)+1\cdot Pr(X_1=1)$ which simplifies as $E[X_1]=Pr(\text{the first letter is a consonant})$. Similarly each of the other terms can be simplified and using this knowledge one can calculate $E[X]$.
As a final aside, you said "I'm pretty sure my answers to b and c are wrong because I didn't use a permutation for them."
There are many questions in probability and combinatorics where permutations are not used, just like there are many questions where catalan numbers and stirling numbers aren't used. Permutations and combinations should be some of the tools in your toolbox that get used on occasion for problems that need it. As you continue your studies you will continue to grow your toolbox, increasing the variety of problems that you can solve, but don't rely too heavily on your toolbox as there will eventually be problems that are truly new to you that requires a technique you've never seen before. |
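Since the sample space here is tiny, every part of this problem can also be checked by exhaustively enumerating all $5^4$ words (my own addition):

```python
from fractions import Fraction
from itertools import product

letters, vowels = "abcde", set("ae")
words = list(product(letters, repeat=4))
n = len(words)                                                # |S| = 5^4 = 625

pr_a = Fraction(sum(len(set(w)) == 4 for w in words), n)      # all letters distinct
pr_b = Fraction(sum(vowels.isdisjoint(w) for w in words), n)  # no vowels
pr_c = Fraction(sum(w[0] in vowels for w in words), n)        # starts with a vowel
e_x = Fraction(sum(sum(c not in vowels for c in w) for w in words), n)  # E[#consonants]

print(pr_a, pr_b, pr_c, e_x)  # → 24/125 81/625 2/5 12/5
```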
Is it possible for the intersection of nonempty subset to be empty? | For compact closed sets it is famously not possible.
For closed sets, it can be empty: consider $C_n=[n,\infty)$ in $\mathbb R$. |
About a function and its Fourier transform being zero at the same time. | One way to go about this is the Paley-Wiener theorem:
The Fourier transform of a distribution of compact support on $\mathbb{R}^n$ is an entire function on $\mathbb{C}^n$
So, assume that both $f$ and $\widehat f$ are compactly supported. Then they are both compactly-supported entire functions (as, more or less, the Fourier transforms of each other). How many functions are both entire and compactly supported?
Take some point $x_0$ outside the support of $f$, and expand in a Taylor series there (possible because $f$ is entire). You know that $f$ and all of its derivatives are zero there, so what is the resulting Taylor series?
The zero function.
Is there an orthogonal matrix that diagonalizes both $A$ and $B$? | If there were a matrix $S$ such that $D=S^{-1}AS$ and $E=S^{-1}BS$ were both diagonal, then since diagonal matrices commute we would have
$$ AB=(SDS^{-1})(SES^{-1})=SDES^{-1}=SEDS^{-1}=(SES^{-1})(SDS^{-1})=BA $$
But in this case $AB\neq BA$, so $A$ and $B$ aren't simultaneously diagonalizable, even without the restriction that $S$ be orthogonal. |
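The question's specific $A$ and $B$ are not reproduced here, but the commutator obstruction is easy to see on any non-commuting pair (my own illustrative example):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1], [1, 0]]   # symmetric, so diagonalizable on its own
B = [[1, 0], [0, 2]]   # already diagonal
print(matmul(A, B), matmul(B, A))  # AB ≠ BA, so no common diagonalizing S
```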
Which of the following are possible measures of the exterior angles of a polygon and how many sides does the polygon have: 90, 80, 75, 30, 46, 36, 2 | The sum of all exterior angles in any polygon is $\mathbf{360^{\circ}}$. Thus, we have all the divisors of $360$ as a possible exterior angle as a choice and the number we have to divide $360$ by to get the divisor will be the number of sides in the particular polygon.
For example, we have $90$ as a divisor of $360$ so $90^{\circ}$ is a possible exterior angle. Since $90 \times 4 = 360$, the particular polygon with $90^{\circ}$ exterior angle will have $4$ sides.
Another fact: the sum of angles in a polygon with $n$ sides is $180(n - 2)$ degrees. So all the sums will be a multiple of $180$. Use that formula with $n = 8$ to get the sum of all interior angles in an octagon.
Remark: Technically, all the choices you have been provided with can be the measures of an exterior angle. If in the future, any question asks about regular polygons, you can predict that the answers will only be the divisors of $360$. |
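For the regular-polygon reading in the remark, here is a one-line check (my addition) of which candidates divide $360$, and the corresponding number of sides:

```python
candidates = [90, 80, 75, 30, 46, 36, 2]

# a REGULAR polygon with exterior angle a exists iff a divides 360;
# the number of sides is then 360 / a
for a in candidates:
    if 360 % a == 0:
        print(a, "-> regular polygon with", 360 // a, "sides")
    else:
        print(a, "-> not an exterior angle of any regular polygon")
```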
Proving that $x + y + z < 3abcd$ | From the $y$ and $z$ equations we get $$ab + ac + abc + abd + acd + bcd \gt y$$ $$ab + ac \gt y+z$$ as $a, b, c, d$ are greater than 1. We can then subtract the $x$ equation to get $$ab + ac - a - b - c - d \gt x + y + z $$ so
$$ab + ac \gt x + y + z $$ as $a, b, c, d$ are positive so
$$3abcd \gt 2abcd \gt ab + ac \gt x + y + z$$
I can't see any errors in my method, although I am not certain of it as it doesn't use the last equation |
How can I find all solutions to differential equation? | We are going to obtain in two steps all $C^1$ solutions of
$$\tag{0}(f(x))^2+(f'(x))^2=1.$$
Step 1: Let us follow a method similar to that given either by @David Quinn for example or @Ian Eerland or @Battani, with some supplementary precision on the intervals of validity.
Let $f$ be a solution to $(0)$.
Let us consider a point $x_0$. Either
case 1: $|f(x_0)|=1$ or
case 2: $|f(x_0)|<1$, which implies $|f'(x_0)|>0$.
Let us detail case 2.
By continuity of $f$, there exists $\epsilon_1>0$ such that on $I_1:=(x_0-\epsilon_1,x_0+\epsilon_1)$ we have $|f(x)|<1$.
By continuity of $f'$, there exists $\epsilon_2>0$ such that on $I_2:=(x_0-\epsilon_2,x_0+\epsilon_2)$ we have $|f'(x)|>0$.
Let us transform $(0)$ into
$$\tag{1}\dfrac{y'}{\sqrt{1-y^2}}=s$$
Differential equation (1) needs to be considered for values of variable $x$ in $I:=I_1 \cap I_2$ in order that the ratio is well defined and that the sign $s=\pm1$ doesn't change throughout this interval $I$.
Taking a primitive function on $I$ of both sides of (1):
$$\arcsin(y)=s x+\varphi \ \ \ \text{for some constant} \ \varphi.$$
Thus $y=f(x)=\sin(s x+ \varphi)$ on interval $I$ with either $s=1$ or $s=-1$.
This solution can be presented under the simpler form: $f(x)=\sin(x+ \varphi)$ (one finds back case $s=-1$ through replacement of $\varphi$ by $\varphi+\pi$).
Conclusion of step 1: a solution $f$ and a particular point $x_0$ being given, there are two cases:
\begin{cases} (a) \ \text{Either} \ f(x_0) = \pm1 \ \text{or}\\ (b) \ \text{There exist an interval} \ J \ \text{containing} \ x_0 \ \text{
such that,} \forall x \in J, \ f(x)=\sin(x+\varphi) \ \end{cases}
for a certain $\varphi$ ($J$ being any interval of validity of solution $f$, including interval $I$ as defined before).
Step 2: We are going to obtain these intervals $J$ by using a topological argument (A recall about topology can be found there). From (0), we know that the values taken by $f(x)$ are in the closed interval $[-1,1]$. Now consider the set of values of $x$ such that their image is in the open interval $(-1,1)$:
$$S:=\{x \ | -1<f(x)<1 \}=f^{-1}((-1,1)). $$
$S$ is an open set, being the preimage of an open set by a continuous function; as such, it can be written as a countable union of open intervals $S=\bigcup_k (a_k,b_{k})$ (see this). In this way ;
(b) will be valid on open intervals $(a_k,b_k)$
(a) will be valid on closed intervals $[b_k,a_{k+1}]$.
At junction point $b_k$, the only possible way to preserve $C^1$ continuity is by taking, according to the value of $\lim_{x\rightarrow b_k} f(x)$ value $1$ or $-1$ for the whole closed interval $[b_k,a_{k+1}]$. Same procedure at the initial junction point $a_k$.
Let the curve of $y=sin(x)$ restricted
to $[0,\pi/2]$ (increasing) be called an $s_+$ arc.
to $[\pi/2,\pi]$ (decreasing) be called an $s_-$ arc.
Let a line segment of any positive length with equation $y=1$ (resp. $y=-1$) be called a $1_+$ (resp a $1_-$).
Thus one can describe the possible $C^1$ integral curves of (0) by a $C^1$ "glueing" of curves as a (finite or infinite) repetition of the following "motives":
$$\cases{1_+^*[s_-1_-^*s_+1_+^*]^*\\1_-^*[s_+1_+^*s_-1_-^*]^*}$$
(mathematical "grammars" notations, where exponant * means 0 or more times). See figure.
Intervals where the solution is constant may have any length.
"Phase shifts" $\varphi_k$ of the sine curves are specific to each interval.
It is to be noted that closed intervals $[b_k,a_{k+1}]$ can be reduced to a point.
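A small numerical illustration of the $C^1$ glueing (my own addition, using the simplest motive $1_-^*\,s_+\,1_+^*$): the glued function satisfies $(f)^2+(f')^2=1$ everywhere, including at the junction points, where the derivative vanishes.

```python
import math

def g(x):
    # one glued C¹ solution: a 1₋ segment, an s₊ arc, then a 1₊ segment
    if x < -math.pi / 2:
        return -1.0
    if x > math.pi / 2:
        return 1.0
    return math.sin(x)

h = 1e-6
for x in [-3.0, -1.0, 0.0, 1.0, math.pi / 2, 2.0, 4.0]:
    d = (g(x + h) - g(x - h)) / (2 * h)      # central-difference derivative
    print(round(g(x) ** 2 + d ** 2, 6))      # prints 1.0 at every sample point
```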
Appendix : I had, at first, obtained a certain family of solutions by
differentiating both sides of $(0)$:
$$2f'(x)(f(x)+f''(x))=0.$$
After a certain number of hours spent on this method, which assumes $f \in C^2$, I realized that it is impossible to go back to the initial equation dealing with $C^1$ solutions.
Added on August 30th: this differential geometry question gives an application of (0). |
Show that $\lim_{n\rightarrow\infty}\int_{0}^{n}\frac{\sqrt x\ln x}{(1+x)^2}\,dx=\pi$ | The integrand is positive for $x>1$, so normally we'd just state the problem as $$\int_0^\infty\frac{\sqrt{x}\ln x}{(1+x)^2}dx=\pi.$$Let's first note that the substitution $x=\tan^2 t$ allows us to solve a seemingly unrelated problem, $$\int_0^\infty\frac{x^{k-1}}{(1+x)^2}dx=\int_0^{\pi/2}2\sin^{2k-1}t\cos^{3-2k} tdt=\operatorname{B}(k,\,2-k)\\=\Gamma(k)\Gamma(1-k)=\pi(1-k)\csc\pi k.$$(Look up Beta and Gamma functions if you don't know them well.) But this problem is not unrelated! Let's differentiate with respect to $k$: $$\int_0^\infty\frac{x^{k-1}\ln x}{(1+x)^2}dx=-\pi\csc\pi k[1+(1-k)\cot\pi k].$$Finally, substituting $k=\frac{3}{2}$ gives $$\int_0^\infty\frac{\sqrt{x}\ln x}{(1+x)^2}dx=\pi.$$ |
How to find root of $ x^n+ax+b=0$? | If by a "formula" you mean an expression in radicals, then you remember incorrectly. The polynomial $x^5-x+1$ has no solutions in radicals. See also http://en.wikipedia.org/wiki/Bring%E2%80%93Jerrard_normal_form#Bring.E2.80.93Jerrard_normal_form |
Proving compactness of a subset | To prove that the family is equicontinuous you can just note that all $f\in M_c$ are lipschitz of (common) constant $\frac{c}{b-a}$ (having the first derivative absolutely bounded by construction) |
Finding connected components in a graph using BFS | Use an integer to keep track of the "colors" that identify each component, as @Joffan mentioned. Start BFS at a vertex $v$. When it finishes, all vertices that are reachable from $v$ are colored (i.e., labeled with a number). Loop through all vertices which are still unlabeled and call BFS on those unlabeled vertices to find other components.
Below is some pseudo-code which initializes all vertices with an unexplored label (an integer 0). It keeps a counter, $componentID$, which vertices are labeled with as they are explored. When a connected component is finished being explored (meaning that the standard BFS has finished), the counter increments. BFS is only called on vertices which belong to a component that has not been explored yet.
// input: graph G
// output: labeling of edges and partition of the vertices of G
LabelAllConnectedComponents(G):
// initialize all vertices and edges as unexplored (label is 0)
for all u ∈ G.vertices()
setLabel(u, UNEXPLORED)
for all e ∈ G.edges()
setLabel(e, UNEXPLORED)
// call BFS on every unlabeled vertex, which results in
// calling BFS once for each connected component
componentID = 1
for all v ∈ G.vertices()
if getLabel(v) == UNEXPLORED:
BFS(G, v, componentID++)
// standard breadth-first-search algorithm that works on one component
BFS(G, s, componentID):
L[0] = new empty sequence
insert vertex s at the end of L[0]
setLabel(s, componentID)
i = 0
while L[i] is not empty:
L[i+1] = new empty sequence
for all v ∈ L[i].elements()
for all e ∈ G.incidentEdges(v)
if getLabel(e) == UNEXPLORED
w ← opposite(v,e)
if getLabel(w) == UNEXPLORED
setLabel(e, DISCOVERY)
setLabel(w, componentID)
L[i+1].insertLast(w)
else
setLabel(e, CROSS)
i = i+1
The total running time is $O(|V| + |E|)$ since each edge and vertex is labeled exactly twice - once to initialize and again when it's visited. |
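The same idea as a runnable Python sketch (my addition; it assumes an adjacency-list input and omits the edge labels, since only the vertex partition is asked for):

```python
from collections import deque

def label_components(adj):
    # BFS component labeling; adj maps vertex -> list of neighbors,
    # returned labels start at 1 (0 marks "unexplored")
    label = {v: 0 for v in adj}
    comp = 0
    for s in adj:
        if label[s]:
            continue
        comp += 1
        label[s] = comp
        q = deque([s])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if not label[w]:
                    label[w] = comp
                    q.append(w)
    return label

adj = {1: [2], 2: [1, 3], 3: [2], 4: [5], 5: [4], 6: []}
print(label_components(adj))  # → {1: 1, 2: 1, 3: 1, 4: 2, 5: 2, 6: 3}
```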
Convergence with the $5$-adic metric. | Promoting some of the comments to an answer with a view of A) removing this from the list of unanswered questions, B) covering some of the features of the $p$-adic metric that surprise learners at first, and C) also aiming to draw an analogy with the ring of formal power series.
1. The $p$-adic "size" of rational numbers does not match the intuition based on the archimedean valuation better known as the absolute value. $2$-adically a high power of $2$ is tiny in comparison to a meager $-1$. Therefore, for example, the sequences $2^n\to0$ and $2^n-1\to-1$. In other words, in a series like
$$1+2+4+8+16+32+\cdots$$ the first term $1$ is the dominant one. This is not atypical of converging series in all domains :-)
2. The formula for the sum of a geometric series is your friend. Whenever it converges the sum of the series is surely (by the usual argument)
$$
a+aq+aq^2+aq^3+\cdots=\frac a{1-q}.\qquad(*)
$$
Here it converges if and only if the ratio $q$ has $p$-adic size $<1$ (hardly a surprise!). In other words, the formula $(*)$ holds iff $|q|_p<1$. Therefore, as Fly by Night observed
$$
...33333=3+30+300+3000+30000+\cdots=\frac3{1-10}
$$
whenever $|10|_p<1$. This inequality holds when $p=5$ or $p=2$, and in both cases the sum of this series is $-1/3$.
3. The mildly surprising fact about this limit is that a sequence of positive rationals has a negative limit. This will cease to amaze a learner whenever they recall that $2$-adically $1024$ is very close to $-1024$, but relatively far away from $1025$. A more formal way of phrasing this is that the $p$-adic fields do not have a total ordering - a relation that would allow us to, among other things, partition the $p$-adic numbers into negative and positive numbers. One rigorous argument for that parallels the reasoning why we don't have a total ordering in $\Bbb{C}$ either. Remember that if $i$ were either positive or negative, then its square should be positive, which it ain't. Similarly in all $p$-adic fields some negative integers have square roots. There is a $5$-adic $\sqrt{-1}$ (see here for a crude description of the process of finding a sequence of integers converging $5$-adically to a number with square $=-1$. Exactly which integers have $p$-adic square roots is number-theoretic in nature. For example when $p=2$ it turns out that $\sqrt{m}$ of an odd integer $m$ exists inside $\Bbb{Q}_2$, iff $m\equiv1\pmod8$, so $-7$ has a $2$-adic square root.
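The $5$-adic $\sqrt{-1}$ mentioned above can be computed to any precision by Hensel/Newton lifting, starting from $2^2\equiv-1\pmod 5$ (a sketch, my own addition):

```python
# Newton/Hensel lifting of a 5-adic sqrt(-1): start from 2² ≡ −1 (mod 5),
# then x ← x − (x² + 1)/(2x) computed mod 5^k doubles the precision each step
# (pow(., -1, mod) needs Python 3.8+)
k = 40
mod = 5 ** k
x = 2
for _ in range(k):
    x = (x - (x * x + 1) * pow(2 * x, -1, mod)) % mod

print((x * x + 1) % mod)  # → 0, i.e. x² ≡ −1 (mod 5^40)
```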
4. The same themes recur, if we want to define a $p$-adic exponential function with the usual power series
$$
\exp(x):=\sum_{n=0}^\infty\frac{x^n}{n!}.
$$
The problem is with convergence. Contrary to expectations from real analysis this series will not converge for all $x\in\Bbb{Q}_p$. The reason is the denominators. For large $n$, the factorial will be divisible by higher and higher powers of $p$. Therefore we are dividing by a sequence of small numbers tending towards zero, and the numerator $x^n$ needs to compensate for that. A more careful analysis of the situation reveals, that this series converges, iff
$|x|_p<p^{-1/(p-1)}$.
5. Apropos series. An analogue of the $p$-adic metric you may be familiar from courses on analysis is the $x$-adic topology (can turn it into a metric if so desired) on (formal) power series (with coefficients in, say, $\Bbb{R}$!). Two power series are close to each other $x$-adically, iff their difference is divisible by a high power of $x$. Therefore we can say that $e^x$ and $1+x+x^2/2$ are already quite close to each other, but $\sin x$ and $x-x^3/6+x^5/120$ are closer still. A common feature of all non-archimedean metrics ($p$-adic, $x$-adic,...) is that adding "small" numbers together will never make a large number, no matter how many of them you add together. So w.r.t. the $2$-adic metric no matter how many numbers divisible by four you add together you will never get a large number, say an odd number. Similarly, if you add together several formal power series divisible by $x$ you never create a non-zero constant term. This has an impact on some things. For example, when defining integrals, we want to approximate something by a sum of small things. In the $p$-adic world we need to... |
Calculating $\sqrt[3]{\sqrt 5 +2}-\sqrt[3]{\sqrt 5 -2}$ | By observation,
$$(\sqrt5+1)^3=16+8\sqrt5=8(\sqrt5+2)$$
$$\implies\left(\dfrac{\sqrt5+1}2\right)^3=\sqrt5+2$$
Similarly,$$\left(\dfrac{\sqrt5-1}2\right)^3=\sqrt5-2$$
Motivation:
$$(\sqrt5+a)^3=a^3+15a+\sqrt5(3a^2+5)$$
Let us find $a$ such that $$\dfrac{a^3+15a}{3a^2+5}=\dfrac21\iff0=a^3-6a^2+15a-10=(a-1)(a^2-5a+10)$$
Observe that the only real root is $a=1$ |
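A quick numerical sanity check (my own addition, not part of the answer): with the two cube identities above, the difference of cube roots collapses to $\frac{\sqrt5+1}2-\frac{\sqrt5-1}2=1$.

```python
import math

s5 = math.sqrt(5)
# the two cube identities derived in the answer
assert math.isclose(((s5 + 1) / 2) ** 3, s5 + 2)
assert math.isclose(((s5 - 1) / 2) ** 3, s5 - 2)

value = (s5 + 2) ** (1 / 3) - (s5 - 2) ** (1 / 3)
print(value)  # 1.0 up to floating-point error
```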
Is there a definition of the existential quantifier which does not imply the axiom of choice? | The symbols for quantifiers used in Nicolas Bourbaki, Elements of Mathematics: Theory of Sets (1968; French ed. 1958; see pages 20 and 36) are derived from The Epsilon Calculus developed by David Hilbert during the 1920s [with $\tau_x$ in place of $εx$].
The syntax of the $ε$ symbol is:
if $A$ is a formula and $x$ is a variable, $εx \ A$ is a term
with the axiom (Hilbert's “transfinite axiom”) :
$A(x) → A(εx A)$.
The intended interpretation is that $εx \ A$ denotes some $x$ satisfying $A$, if there is one.
Quantifiers can defined as follows:
$∃x A(x) ≡ A(εx A)$
$∀x A(x) ≡ A(εx (¬A)).$
Of course, in classical logic, the usual quantifiers are interdefinable.
Proof regarding homogeneous functions of degree $n$ | $\frac {\partial f(u,v)}{\partial u}$ means partial derivative with respect to the first variable, which is also denoted by $f_x(u,v)$. Here the subscript $x$ denotes that the partial derivative is with respect to the first variable $u=tx$ and not $x$ itself. So in our case $\frac {\partial f(u,v)}{\partial u}=f_x(u,v)=f_x(tx,ty)=t^{n-1}f_x(x,y)$ and you are already done. |
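To see this identity in action, here is a small numerical check (my own example, not from the answer) with $f(x,y)=x^3+xy^2$, homogeneous of degree $n=3$: differentiating $f(tx,ty)=t^nf(x,y)$ in the first slot gives $f_x(tx,ty)=t^{n-1}f_x(x,y)$.

```python
def f(x, y):
    # homogeneous of degree 3: f(t*x, t*y) = t**3 * f(x, y)
    return x ** 3 + x * y ** 2

def fx(x, y, h=1e-6):
    # central-difference approximation of the partial derivative in the first slot
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

x, y, t, n = 1.3, 0.7, 2.5, 3
print(fx(t * x, t * y), t ** (n - 1) * fx(x, y))  # the two numbers agree
```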
Show F is the Fourier transform of an integrable function | You are looking for the Bessel potential. Stein's "Singular integrals and differentiability properties of functions" Chapter 5 Section 3 covers all the details. |
Does the double integral $\int_1^\infty \int_0^x \frac{1}{x^3+y^3} \,dy \,dx$ converge or diverge? | Another solution using no fancy change of variables. Just note that $\frac{1}{x^3+y^3}\leq \frac{1}{x^3}$ for $y\geq 0$ and thus,
$$
\int_1^{\infty} \int_0^x \frac{1}{x^3+y^3}\textrm{d}y\textrm{d}x\leq \int_1^{\infty} \frac{1}{x^2}\textrm{d}x<\infty
$$ |
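For a numeric cross-check (my addition, not in the answer): substituting $y=xu$ factors the integral as $\big(\int_1^\infty x^{-2}\,dx\big)\big(\int_0^1\frac{du}{1+u^3}\big)=\int_0^1\frac{du}{1+u^3}$, which a hand-rolled Simpson rule evaluates comfortably below the bound of $1$.

```python
import math

def simpson(f, a, b, n=1000):
    # composite Simpson's rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

I = simpson(lambda u: 1 / (1 + u ** 3), 0, 1)
exact = math.log(2) / 3 + math.pi / (3 * math.sqrt(3))  # closed form of the u-integral
print(I, exact)  # both ≈ 0.83565, and indeed < 1
```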
polynomial norm and Banach spaces | Approximate the exponential function by its Taylor polynomials. You should also prove that the exponential function is not a polynomial, not even in an interval. |
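Concretely (my own numerical sketch): on $[0,1]$ the sup-norm distance between $e^x$ and its degree-$k$ Taylor polynomial is at most $e/(k+1)!$, so it tends to $0$; since $e^x$ is not a polynomial, the polynomials are not closed in the sup norm and hence not complete.

```python
import math

def sup_dist(k, samples=1000):
    # approximate sup-norm distance on [0, 1] between exp and its
    # degree-k Taylor polynomial, sampled on a uniform grid
    def taylor(x):
        return sum(x ** j / math.factorial(j) for j in range(k + 1))
    return max(abs(math.exp(i / samples) - taylor(i / samples))
               for i in range(samples + 1))

print([round(sup_dist(k), 6) for k in range(2, 7)])  # decreasing toward 0
```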
expressing some polynomial in terms of symmetric polynomials | Well, start by writing down the products of elementary symmetric polynomials on $3$ variables that have the correct degree, take a linear combination of them with coefficients $t_1, t_2, \ldots$, expand everything out, and see what the coefficients have to be in order to make this match your polynomial. |
Can there be non trivial self-dual 5-forms on a 10-dimensional compact orientable manifold without boundary? | You're mixing up the real and complex as well as Riemannian and pseudo-Riemannian.
Let $X$ be a $2n$-dimensional Riemannian manifold. Then a self-dual $n$-form is a $+1$-eigenvector of the Hodge star map
$$\star: \Omega^n(X; \Bbb R) \longrightarrow \Omega^n(X; \Bbb R).$$
We have the well-known formula
$$\star^2 = (-1)^{n(2n - n)} \mathrm{Id} = (-1)^{n^2} \mathrm{Id}.$$
Now if $n$ is odd, as it is in your case, then $\star^2 = -\mathrm{Id}$, so it cannot have $\pm 1$ as eigenvalues, and hence there are no self-dual or anti-self-dual $n$-forms. $\star$ does, however, have $\pm i$ as eigenvalues in this case, so you can define (anti-)self-dual $n$-forms as $\pm i$-eigenvectors of $\star$, as you did in your original post. Notice, however, that when you do this you are now working with complex $n$-forms and you should not be using the real inner product any longer.
To complicate matters further, you're really interested in Lorentzian manifolds. One way to define the Hodge star is by
$$\alpha \wedge \star \beta = \langle\alpha, \beta\rangle\, \mathrm{vol},$$
where $\langle \cdot, \cdot \rangle$ is the inner product induced by the metric $g$. This is the same inner product you integrate to get the global inner product, and when $X$ is a Lorentzian manifold, $g$ is indefinite and hence the inner product on forms is indefinite as well. As you know, there is nothing wrong with having vectors of length zero in an indefinite inner product space (e.g. lightlike vectors in Minkowski space).
As a final remark, on a $2n$-dimensional Lorentzian manifold, we actually have
$$\star^2 = (-1)^{n(2n - n) + (2n - 1)}\mathrm{Id} = (-1)^{n^2 + 2n - 1}\mathrm{Id}$$
(the signature of the metric affects things now). When $n$ is odd (as in your $10$-dimensional case), we then see that $\star^2 = \mathrm{Id}$ on a Lorentzian manifold (of dimension $4k + 2$), so we actually use the usual definition of self-duality (without any $i$'s). When $n$ is even (e.g. in Minkowski space), then $\star^2 = -\mathrm{Id}$ and we have to use the alternate definition of self-duality and work with complex inner product spaces. |
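The sign bookkeeping can be tabulated directly from the two displayed formulas (a trivial check I added; only the exponents from the answer are used):

```python
# Sign of star^2 on middle-degree n-forms in dimension 2n:
# Riemannian: (-1)^(n^2); Lorentzian: (-1)^(n^2 + 2n - 1).
for n in range(1, 6):
    riem = (-1) ** (n * n)
    lor = (-1) ** (n * n + 2 * n - 1)
    print(f"n={n}: Riemannian star^2 = {riem:+d}*Id, Lorentzian star^2 = {lor:+d}*Id")
```

For odd $n$ (e.g. the $10$-dimensional case, $n=5$) the Riemannian sign is $-1$ while the Lorentzian sign is $+1$, matching the discussion above.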
Minkowski sum of a positive Lebesgue measure set and $\mathbb{Q}$. | Suppose to the contrary that the set $ \mathbb R\setminus (A+\mathbb Q)$ has positive measure. By the Lebesgue density theorem, it has a point of density, call it $x$. Also pick a point of density of $A$, call it $a$. For sufficiently small $r>0$, we have
$$
\mu((x-2r,x+2r)\cap (A+\mathbb Q))<r
$$
and
$$
\mu((a-r,a+r)\cap A)> r
$$
Let $q$ be a rational number such that $|a+q-x|<r$. Then
$$((a-r,a+r)\cap A)+q \,\subseteq \,(x-2r,x+2r)\cap (A+q) $$
but the set on the left has greater measure, a contradiction. |
Parallel Transport and Affine Connection | Unless $X$ is very special, the answer will be no. To know the covariant derivative of $X$ with respect to your vector $v$, you don't just need to know the values of $X$ along any curve, but along a curve to which $v$ is tangent. In particular, what you're asking will work if $v$ happens to be a multiple of $\gamma'(t)$. If this is not the case, though, there may not be a curve $\alpha$ passing through $\gamma(t)$ to which $v$ is tangent such that $X$ is parallel along $\alpha$.
Drawing a picture in $\Bbb{R}^2$ might help see the idea. Start with a curve and a parallel vector field along that curve, and see if you can extend the vector field to something whose covariant derivative doesn't vanish anywhere but the curve. |
Calculating error percentage for rolling dice | TheOdds is the number of times you expect to get the associated Sum in 36 trials. E.g. For Sum = 2 (i.e. two $1s$) we expect it to happen once in $36$ times.
The formula you want is:
$$\text{Error} = \dfrac{\vert \text{Rolled} - \text{TheOdds}\vert }{\text{TheOdds}}\times 100\%.$$
E.g. For Sum=$3$:
$$\text{Error} = \dfrac{\vert 1 - 2\vert }{2}\times 100\% = 50\%.$$ |
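The computation can be scripted (helper names are my own): count, for each sum, how many of the $36$ ordered dice pairs produce it, then apply the error formula.

```python
from collections import Counter

# TheOdds: number of ordered (d1, d2) pairs out of 36 giving each sum
the_odds = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))

def error_pct(rolled, odds):
    # percentage error of an observed count against the expected count
    return abs(rolled - odds) / odds * 100

print(the_odds[2], the_odds[7])       # 1 way to roll a 2, 6 ways to roll a 7
print(error_pct(1, the_odds[3]))      # the Sum=3 example above: 50.0
```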
Explanation for recurrence relation of a counting problem | HINT: Let $P_n$ be the set of valid permutations of $[n]=\{1,\ldots,n\}$, and let $Q_n$ be the set of valid permutations of $[n]$ that begin with $12$. For $n\in\Bbb Z^+$ let $a_n=|P_n|$ and $b_n=|Q_n|$. (Thus, $a_n$ is your $dp[n]$.)
Let $(12p_3\ldots p_n)$ be a permutation of $[n]$ in $Q_n$. If we remove the initial $1$ and subtract $1$ from each of the remaining $n-1$ entries, we get a valid permutation of $[n-1]$. Conversely, if $(1q_2\ldots q_{n-1})$ is a valid permutation of $[n-1]$, and we set $p_{k+1}=q_k+1$ for $k=2,\ldots,n-1$, the resulting permutation $(12p_3\ldots p_n)$ is in $Q_n$. It follows that $b_n=a_{n-1}$.
Every permutation in $P_n\setminus Q_n$ begins with $13$. (Why?) Let $c_n$ be the number of valid permutations of $[n]$ that begin with $132$. An argument similar to that of the previous paragraph shows that there is a bijection between these permutations and $P_{n-3}$, the set of valid permutations of $[n-3]$, so $c_n=a_{n-3}$.
If $d_n$ is the number of valid permutations of $[n]$ that begin with $13$ but not with $132$, then clearly
$$a_n=b_n+c_n+d_n=a_{n-1}+a_{n-3}+d_n\;,$$
and we’re done if we can show that $d_n=1$.
Let $\pi=(13p_3\ldots p_n)$ be a valid permutation of $[n]$ that begins with $13$ but not with $132$.
- Show that $p_n=2$ and $p_{n-1}=4$.
- Show that if we remove the first and last elements of $\pi$ and subtract $2$ from each of the remaining $n-2$ elements, we get a valid permutation of $[n-2]$ that begins with $13$ but not with $132$.
- Prove by induction that for each $n\ge 4$ there is exactly one valid permutation that begins with $13$ but not with $132$.
$R$ is a ring with identity. Why from $f(1)=0$ it's concluded that $\forall r\in R; f(r)=0$? | $$f(r) = f(r.1) = f(r).f(1).$$
$f(1) = 0 \Rightarrow f(r) = 0 \ \forall\, r \in R$.