What is the proof for $\sqrt{-a}\times\sqrt{-b}\neq\sqrt{ab},\text{ where }a,b\in \mathbb{R}$ | In line three you use the fact that for positive reals $a,b$, from $a^n=b^n$ it follows that $a=b$. This is no longer true over the complex numbers (it's not even true over the real numbers). For example $i^4=1^4$, but certainly we don't have $i=1$.
Also, showing that the above proof doesn't work for complex numbers does not prove that the theorem is wrong. To show that the theorem is wrong you just have to give a counterexample. As you already noted, $a=b=-1$ would do the job. |
The exterior Lebesgue measure for the image of Lipschitz function | Let $E\subset \bigcup_{n=1}^\infty [a_n,b_n]$. Then
$f(E)\subset \bigcup_{n=1}^\infty f([a_n,b_n])$.
By continuity $f([a_n,b_n])=[f(c_n),f(d_n)]$ for some $c_n,d_n \in [a_n,b_n]$. By the Lipschitz continuity $|f(d_n)-f(c_n)| \leq M |d_n-c_n| \leq M (b_n-a_n)$.
Now
$$
|f(E)|_e \leq \sum_{n=1}^\infty |f(d_n)-f(c_n)|\leq M \sum_{n=1}^\infty (b_n-a_n)
$$
Since the sequence $([a_n,b_n])$ covering $E$ was arbitrary, we have
$$
|f(E)|_e\leq M |E|_e.
$$ |
How come associative law of matrix multiplication won't work when permutation matrices come in. Which is the case for some | You're right; if you copied the problem correctly, this is a mistake. $(Px)^\top y=(x^\top P^\top)y=x^\top(P^\top y)$. |
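As a quick numerical sanity check of the associativity claim above (a sketch assuming numpy):

```
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=4), rng.normal(size=4)
P = np.eye(4)[rng.permutation(4)]   # a random 4x4 permutation matrix

lhs = (P @ x) @ y        # (Px)^T y ; for 1-D arrays, @ is the dot product
rhs = x @ (P.T @ y)      # x^T (P^T y)
print(np.isclose(lhs, rhs))  # True
```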
Additive functional equation | It is straightforward to show that Cauchy's functional equation implies $f(qx)=q f(x)$ for all $q\in\mathbb{Q}, x\in\mathbb{R}$. Thus we can see $f$ as a $\mathbb{Q}$-linear map of the $\mathbb{Q}$-vector space $\mathbb{R}$. Like every linear map, it is determined by its values on a basis.
Let us choose a $\mathbb{Q}$-basis $B\subset\mathbb{R}$ of $\mathbb{R}$. Note that this requires the axiom of choice. That is, for every $x\in\mathbb{R}$ we can choose a coefficient function $x^*:B\rightarrow \mathbb{Q}$ such that $x^*(b)\not=0$ only for finitely many $b\in B$ and
$$x=\sum_{b\in B} x^*(b) b$$
Since $f$ is a linear map, it can be represented by an (infinite) $B\times B$ matrix of rational coefficients $(F_{b,b^\prime})_{b,b^\prime\in B}$ (with only finitely many non-zero terms in every column) such that
$$f(x)= F\cdot x$$
where $\cdot$ denotes multiplication of the matrix $F$ with the $\mathbb{Q}$-vector $x$, i.e.
$$f(x)^*(b) = \sum_{b^\prime\in B} F_{b,b^\prime} x^*(b^\prime)$$
$F_{b,b^\prime}$ is simply the coefficient of $b^\prime$ in the expansion of $f(b)$.
These are all solutions to Cauchy's functional equation by itself.
The condition $f(f(x))=x$ now reads
$$F^2=I$$
with $I$ being the identity matrix. That is,
$$\sum_{b^{\prime\prime}\in B} F_{b,b^{\prime\prime}} F_{b^{\prime\prime},b^\prime}=\left\{\begin{array}{ll}1 & \text{if}\;b=b^\prime,\\
0 & \text{if}\;b\not=b^\prime.\end{array}\right.$$
This characterizes all the solutions to the simultaneous functional equations. The two solutions corresponding to the continuous solutions are just the cases $F=\pm I$. None of the other solutions satisfy any of your conditions $1.$ through $4.$ (since they all imply $f(x)=\pm x$). |
How to determine if a surface exists | Such a surface $S$ is isometric to the plane, so by the Theorema Egregium, the Gauss curvature $K$ of $S$ is just $0$. However, you also have $K= \frac{LN-M^2}{EG-F^2}=-1$. Therefore, $S$ cannot exist. |
Rings: Ascending Chain Condition iff Finitely Generated Ideals | Whether $R$ has a unit is totally irrelevant here. By definition, a set $S$ generates an ideal $I$ if $I$ is the smallest ideal containing every element of $S$. In particular, $I$ does contain every element of $S$. So in your situation, if $x_1,\dots,x_\ell$ are generators of $I$, then they are all elements of $I$, and so some $I_n$ contains them all. |
Understanding convergence of $a_n := \frac{2n^3 +n^2 +3}{n^3-4}$ | We have $$-\frac{2n^{3}-8}{n^3-4}=-\frac{2(n^3-4)}{n^3-4}=-2.$$ |
Prove Inverses not need be Unique if Associativity fails | Associativity fails if you consider $ y \cdot z \cdot z$. We have $ (y \cdot z)\cdot z = x \cdot z = z$, but on the other hand $ y \cdot (z \cdot z) = y \cdot x = y$. So this is not a group, and inverses need not be unique. |
Changing order of integration:$\int_0^\infty\int_{-\infty}^{-y}f(x)\mathrm dx\mathrm dy\Rightarrow\int_{-\infty}^0\int_0^{-x}f(x)\mathrm dy\mathrm dx$ | The easiest way to perform a change of the order of integration in the multivariable setting is via Iverson's bracket. This is the indicator function such that $$[P] =\begin{cases} 1 & P \text{ is true}\\
0 & \text{else}.\end{cases}$$ The change of variable arises from reinterpreting the system of inequalities (see below).
With the Iverson notation, one can remove the boundaries from the integral and incorporate them into the integrand, i.e.,
$$\int_{0}^{\infty} \int_{-\infty}^{-y} f(x,y)dx \,dy =\iint_{\mathbb{R}^2} f(x,y)\Bigl[(y\geq 0) \text{ and } (x\leq-y) \Bigr]dx\, dy \,.$$
Now in order to perform the change of the order of integration, you have to reinterpret Iverson's bracket. You have to figure out what condition $$P=(y\geq0) \text{ and } (x\leq-y)$$ poses on $x$ first.
The maximal value that $x$ can achieve is $0$ (when $y=0$). The second condition in $P$ is equivalent to
$$ x \leq -y \Leftrightarrow y \leq -x \,.$$
The first condition demands that $y\geq 0$. Together, we find that $P$ is equivalent, up to a measure-zero boundary, to
$$ P \Leftrightarrow (x<0) \text{ and } (0 < y < -x )\,.$$
So we find
$$\int_{0}^{\infty} \int_{-\infty}^{-y} f(x,y)dx\, dy =\iint_{\mathbb{R}^2} f(x,y)\Bigl[(x<0) \text{ and } (0 < y < -x )\Bigr]dx \,dy = \int_{-\infty}^0 \int_{0}^{-x} f(x,y) \,dy\,dx \,.$$ |
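As a numerical sanity check of the swapped bounds, here is a sketch assuming scipy, with a hypothetical integrand $f(x,y)=e^{x-y}$ that decays on the region (both orders give $1/2$):

```
from scipy import integrate
import numpy as np

f = lambda x, y: np.exp(x - y)

# Original order: outer y in [0, inf), inner x in (-inf, -y].
# dblquad integrates func(inner, outer), so the inner variable comes first.
I1, _ = integrate.dblquad(lambda x, y: f(x, y), 0, np.inf,
                          lambda y: -np.inf, lambda y: -y)
# Swapped order: outer x in (-inf, 0], inner y in [0, -x].
I2, _ = integrate.dblquad(lambda y, x: f(x, y), -np.inf, 0,
                          lambda x: 0, lambda x: -x)
print(I1, I2)  # both approximately 0.5
```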
Finding generating function of binary tree using quadratic formula? | From your quadratic formula you can deduce $T(x) = \frac{1 \pm \sqrt{1 - 4 x}}{2x}$, solving (i). This leaves the question of which sign to use (ii).
Luckily, we know the first terms of the generating function $T(x)$: there is one tree without nodes and one tree with one node. This yields
$$T(x)= 1 + 1x + \ldots.$$
So the constant term should be $1$. But the constant term is $T(0)$. However, because of the fraction we cannot simply evaluate at $0$ but we may multiply the equation by the denominator. So we get
$$2x T(x) = 1 \pm \sqrt{1 - 4x} \; \Rightarrow \; 0 = 1 \pm \sqrt{1 - 0}.$$
The latter equation is only true for the $-$ sign. Finally, we get
$$T(x) = \frac{1 - \sqrt{ 1 - 4x}}{2x}.$$
If you want to find the Taylor expansion of this expression, please refer to the first proof on the Wikipedia-Page of the Catalan numbers. |
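A quick check of the final formula with sympy (the coefficients are the Catalan numbers $1,1,2,5,14,42,\dots$):

```
import sympy as sp

x = sp.symbols('x')
T = (1 - sp.sqrt(1 - 4*x)) / (2*x)
print(sp.series(T, x, 0, 6))
# 1 + x + 2*x**2 + 5*x**3 + 14*x**4 + 42*x**5 + O(x**6)
```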
Branched cover in algebraic geometry | Let's tackle ramified/unramified first, since that's something that's pretty uniform across the literature.
Definition (ref): A morphism of schemes $f:X\to S$ is unramified at $x\in X$ if there exists an affine open neighborhood $\operatorname{Spec} A=U\subset X$ of $x$ and an affine open $\operatorname{Spec} R=V\subset S$ so that $f(U)\subset V$ and the induced ring map $R\to A$ is of finite type and the module of Kahler differentials $\Omega_{A/R}$ is zero. A morphism of schemes is unramified if it is unramified at every point.
Equivalently, $f:X\to S$ is unramified if it is locally of finite type and $\Omega_{X/S}=0$. One may find an overview of relevant results and alternate characterizations at the Stacks Project section on unramified morphisms.
Perhaps somewhat expectedly, if something isn't unramified, then it's ramified. The best intuition one can have for this sort of morphism is provided by the diagram at the Wikipedia page on ramification (not reproduced here):
Ramification is where there are "fewer points than you expect" because some branches "came together", like the marked points on $X$ on top in that image (this characterization assumes that your map behaves with respect to the included finiteness condition, because otherwise you're truly out of luck). To be precise, the ramification locus is the locus of points $x\in X$ where $(\Omega_{X/Y})_x\neq 0$, and the branch locus is its image in $Y$ under the map $f$.
A branched covering is maybe a little more "geometrically" defined in the literature, so let us talk about that. The goal is to get maps which are covering maps "most places" - that is, we allow some amount of defect outside of a dense open subset. The moral of the story (and what I would take as my definition if I were in charge) is that a branched covering is a finite surjective morphism which is generically unramified and thus generically etale. (Generically unramified is automatic in characteristic zero, so frequently one may omit this condition if that's the only scenario one is concerned with - this is somewhat common, though decidedly not universal.)
To construct $\pi$ explicitly, the idea is that one wants to emulate the construction of the square-root function as a double cover of the complex plane ramified at the origin. The intuitive way to do this is to construct a square-root of the equation of the curve $C$, and this actually works: if our sextic is cut out by a homogeneous degree-six equation $f(x,y,z)$, then the equation $w^2=f(x,y,z)$ inside the weighted projective space $\Bbb P(1,1,1,3)$ with coordinates $x,y,z,w$ will cut out our ramified double cover.
As for part 3, no, there's no conspiracy that I'm aware of. It does happen sometimes in mathematics that there are things that "everybody knows" which can be pretty frustrating when you're not among the "everybody". This problem is not unique to algebraic geometry - in fact, in some ways, algebraic geometry has done a lot of work to get rid of this sort of thing via resources like Vakil's online notes and Stacks Project, though neither are full and complete references. I've found that the best way to resolve things like this is to start googling, going to the library (though, uh, with the way the world is right now, this might need some adjustment), and asking people who know more than I do what's up. |
Limits question help me please? | If you square the expression (call it $a(x))$ you get
$$
a(x)^2=\frac{(x+1)^2(x^2+2)^2\cdots(x^n+n)^2}{(n^nx^n+1)^{n+1}}=\frac{x^{n(n+1)}+\ldots +n!^2}{n^{n(n+1)}x^{n(n+1)}+\ldots + 1}
$$
$a(x)^2$ is a fraction of two polynomials in $x$, so we only need to look at the leading coefficients to conclude the limit of $a(x)^2$ is $n^{-n(n+1)}$. So the limit of $a(x)$ is $n^{-n(n+1)/2}$. |
Solving the differential equation $\frac{dy}{dx}=\frac{3y^2+x}{4y^2+5}$ | Notice that if $x \geq \frac{15}{4}$,
$$y' = \frac{3y^2+x}{4y^2+5} \geq \frac{3y^2+\frac{15}{4}}{4y^2+5} = \frac{3}{4},$$
which means, at the very least,
$$f(x) \geq \frac{3}{4}x-\frac{45}{16} \implies \int_{\frac{15}{4}}^{\frac{27}{4}} f(x)\:dx \geq \int_{\frac{15}{4}}^{\frac{27}{4}}\frac{3}{4}x-\frac{45}{16}\:dx = \frac{27}{8}$$
so options $1$ and $2$ are in but option $3$ is out. Can you continue with option $4$? |
Series question with unknown limit | Hints:
$$\sum_{n=1}^N3n=3\sum_{n=1}^Nn=3\frac{N(N+1)}2$$
Since you actually have, say
$$\sum_{r=1}^N (3r+5)=3\sum_{r=1}^Nr+\sum_{r=1}^N5=3\frac{N(N+1)}2+5N$$
Proof of the first sum above: we do the same sum in two ways
$$\begin{align}&S=1&+&2&+\ldots+&(N-1)&+&N\\
&S=N&+&(N-1)&+\ldots+&2&+&1\end{align}$$
Sum now both lines above:
$$2S=\overbrace{(N+1)+(N+1)+\ldots+(N+1)}^{N\;\text{ times}}=N(N+1)$$
and now you can deduce the sum of the first $\;N\;$ consecutive natural numbers. |
Give an example of four different subsets A, B, C and D of {1, 2, 3, 4} such that all intersections of two subsets are different. | If they asked the dual problem, find four subsets of $\{1,2,3,4\}$ whose unions are all different, the obvious answer is $\{1\},\{2\},\{3\},\{4\}$. So take the complements: $\{2,3,4\},\{1,3,4\},\{1,2,4\},\{1,2,3\}$. The intersection of the complements is the complement of the union. |
How may we make $(\forall y ?)(\forall x ?)P$ equivalent to $(\forall x>0)(\forall y>x)P$ | If the qualified $y$ makes reference to the $x$, as in $y >x$, you cannot swap the quantifiers as indeed you would get a free variable $x$. So ... something you can do is to not qualify the $y$ right inside the quantifier, but wait until the $x$ has been introduced. So, you could do:
$$(\forall y)(\forall x >0)(y >x \rightarrow P)$$
You could also un-qualify the $x$ ... for consistency's sake:
$$(\forall y)(\forall x)((x>0 \land y>x) \rightarrow P)$$
The nice thing about unqualified quantifiers is that they can be freely swapped, as long as they are of the same type and next to each other.
On the other hand, if you don't like the extra parentheses, you could do:
$$(\forall y)(\forall 0 < x<y) P$$ |
Can an upperbound constraint on the squared Frobenius norm of a matrix be expressed as a linear matrix inequality? | There might be a more compact and elegant way, but one way you can represent it as LMI's is by using intermediate values for each diagonal term of $X^\top X$. These can be calculated using
$$
Y_i = X\,e_i
$$
with $e_i$ a vector with the $i$th element equal to one and the rest zeros (so $Y_i$ is the $i$th column of $X$), such that $Y_i^\top Y_i$ is the $i$th diagonal term of $X^\top X$. Then using the Schur complement you can write for every diagonal term an LMI for $Y_i^\top Y_i \leq \alpha_i$, namely
$$
\begin{bmatrix}
I & Y_i \\ Y_i^\top & \alpha_i
\end{bmatrix} \succeq 0,
$$
with $\alpha_i \in \mathbb{R}$. Now a bound for $\|X\|_F^2$ can be found by summing all $\alpha_i$, which should be less than or equal to $t$:
$$
\sum \alpha_i \leq t,
$$
which is also a linear inequality.
By using an intermediate LMI for $X^\top X$ you might also be able to write $X^\top X \preceq M$, with $M = M^\top$, as
$$
\begin{bmatrix}
I & X \\ X^\top & M
\end{bmatrix} \succeq 0.
$$
An upper bound for $\|X\|_F^2$ would then be $\text{Tr}(M)$, so adding the linear inequality $\text{Tr}(M) \leq t$ makes this system of LMI's equivalent to your problem. To see that $X^\top X \preceq M$ implies $\text{Tr}(X^\top X) \leq \text{Tr}(M)$, recall that the trace of a matrix equals the sum of its eigenvalues. Namely, $X^\top X \preceq M$ is equivalent to $M - X^\top X \succeq 0$, so $M - X^\top X$ has only non-negative eigenvalues; therefore $\text{Tr}(M - X^\top X) \geq 0$, which is exactly $\text{Tr}(X^\top X) \leq \text{Tr}(M)$.
It can be noted that $M$ might add more degrees of freedom than all $\alpha_i$, so it may or may not be an attractive alternative way of writing the problem as LMI's. |
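For concreteness, here is a minimal sketch of the second (matrix-valued) formulation, assuming the cvxpy modeling library and hypothetical sizes $n$ and bound $t$:

```
import cvxpy as cp
import numpy as np

n, t = 3, 2.0
X = cp.Variable((n, n))
M = cp.Variable((n, n), symmetric=True)

constraints = [
    cp.bmat([[np.eye(n), X], [X.T, M]]) >> 0,  # Schur complement: X^T X <= M
    cp.trace(M) <= t,                          # then ||X||_F^2 <= Tr(M) <= t
]
# e.g. maximize the entry sum of X subject to the Frobenius-norm bound
prob = cp.Problem(cp.Maximize(cp.sum(X)), constraints)
prob.solve()
print(np.linalg.norm(X.value, 'fro')**2)  # <= t up to solver tolerance
```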
How can $G$ a simple group always be isomorphic to $Z_p$ for some prime $p$? | Is it not a necessary condition that the order of two groups must be
equal for them to be isomorphic?
It is. Any two isomorphic groups have the same order.
Does "$G$ is a simple group of odd order" somehow imply "$\lvert G \rvert$ is prime"?"
Yes it does. This is equivalent to the Feit-Thompson theorem that every finite group of odd order is solvable, as discussed in the question Every simple group of odd order is isomorphic to $\mathbb{Z}_{p} $ iff every group of odd order is solvable.
That theorem was proved in the 255-page 1963 paper Solvability of groups of odd order. |
Lottery on four-digit number | Let's attack the problem from scratch. Without loss of generality, assume $0000$ is the winning number. There are $90$ ways to match the last two digits without matching further: nine choices for the third-last digit on your ticket and ten for the first (it's unconstrained). Then there are $9$ ways to match the last three without matching it all and $1$ jackpot. So the expected payoff is
$$\frac{90\cdot50+9\cdot500+1\cdot5000}{10000}-2=-0.6$$
where, of course, the $-2$ represents the ticket price. |
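A one-line check of the arithmetic (plain Python):

```
prizes = {50: 90, 500: 9, 5000: 1}   # prize value: number of qualifying tickets
print(sum(v*c for v, c in prizes.items()) / 10000 - 2)  # -0.6
```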
Cantor Set and Fractals | You ask: "Is Wikipedia's definition of fractal the standard?" and right near the top of Wikipedia's page of fractals, we see the following definition:
A fractal is a mathematical set that has a fractal dimension that
usually exceeds its topological dimension and may fall between the
integers.
The statement that the fractal dimension may "fall between the integers" really adds nothing but, other than that, I would say that this is fairly standard; it is unquestionably the definition that was put forward by Mandelbrot around 1975 when he coined the term "fractal". He did not refer to "fractal dimension" at that time but, rather, the "Hausdorff-Besicovitch dimension" as he put it. In fairness, the usefulness of this definition has been debated with even Mandelbrot himself feeling that it might not be inclusive enough. Nonetheless, this comparison of dimension is central in fractal geometry. Gerald Edgar calls his great book, Measure, Topology, and Fractal Geometry, a meditation on the definition.
Taking this to be the definition, we can definitely say that the Cantor set satisfies it. If by "fractal dimension" you mean similarity dimension, then the Cantor set has fractal dimension $\log(2)/\log(3)$, since it's composed of two copies of itself scaled by the factor $1/3$. Also, the set is regular enough that any reasonable definition of fractal dimension agrees with that computation. (Well, any real-valued definition.)
Topological dimension is a trickier thing, actually. It's inductive in nature. Totally disconnected sets (like single points, finite sets, or notably the Cantor set) have dimension zero. Higher dimensions are defined in terms of lower dimensions. The space we live in is three-dimensional because balls in this space have a surface that is two-dimensional. Because of this inductive nature, topological dimension always yields an integer.
When you write that you "do not see the irregular aspects or the complexity that is usually inherent with fractals", I think you might have a bit of a misunderstanding about fractal geometry. The Cantor set is indeed regular but, then, so are all the strictly self-similar sets studied in classical fractal geometry - the Koch curve, the Sierpinski triangle, the Menger sponge, and countless others all display this regularity. Indeed, it's exactly this regularity that allows us to understand them.
To emphasize this regularity, and how it appears in not just the Cantor set, compare zooms of the Cantor set and of the Koch curve (images omitted).
Now, of course, there are "irregular" fractals - or, at least, less regular fractals. Examples include random versions of self-similar sets, examples that arise from number theory, and examples arising from complex dynamics (like Julia sets). It's not their irregularity that makes these objects fractal, however. On the contrary, it's the regularity that we can find that allows us to analyse these objects to the point where we can characterize them as fractal. Of course, this analysis is a bit harder with these less regular examples. |
Partitioned matrix of partitioned matrices | Simply take determinants on both sides of the identity
$$
\left( \begin{array}{cc} A & 0 \\ C & I
\end{array} \right)
\left( \begin{array}{cc} I & A^{-1}B \\ 0 & -CA^{-1}B
\end{array} \right)=
\left( \begin{array}{cc} A & B \\ C & 0
\end{array} \right)
$$ |
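A quick sympy check of the resulting formula $\det\begin{pmatrix}A&B\\C&0\end{pmatrix}=\det(A)\det(-CA^{-1}B)$ on hypothetical $2\times2$ blocks:

```
import sympy as sp

A = sp.Matrix([[2, 1], [1, 3]])
B = sp.Matrix([[1, 4], [0, 2]])
C = sp.Matrix([[3, 0], [5, 1]])
Z = sp.zeros(2, 2)

lhs = sp.Matrix(sp.BlockMatrix([[A, B], [C, Z]])).det()
rhs = A.det() * (-C * A.inv() * B).det()
print(lhs, rhs)  # 6 6
```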
Poker combinations & probability | The number of hands is $\frac{47\cdot 46}2=1081$ as you see five cards (your two plus the flop) so there are $47$ left to choose from. If you are looking for the highest, there is no probability involved, just look at all of them and find the highest. Ignoring any information from the betting, the chance that an opponent has three of a kind is just (number of holdings that make three of a kind)/1081. For this example, if none of the five cards you see match in value, there are nine hands that make your opponent three of a kind (three pairs of each value that matches one of the flop cards).
None of this speaks to which of the hands are reasonable for your opponent to behave as you have seen so far. There are many books on the subject (most of which give similar advice) that your opponent may well have read, so you can take some guidance there. |
Inverse Laplace transform shifting error | I prefer to use the definition rather than tables to compute the inverse Laplace transform.
\begin{align}
\mathcal{L}^{-1}\{e^{-s}/(s - 1)\} &= \frac{1}{2\pi i}\int_{\gamma - i\infty}^{\gamma + i\infty}\frac{e^{-s}e^{st}}{s- 1}ds\\
&= \sum\text{Res}\\
&= \lim_{s\to 1}(s-1)\frac{e^{s(t-1)}}{s-1}\\
&= e^{t - 1}\mathcal{U}(t - 1)
\end{align}
Then from the definition, we see where the $t - 1$ comes from.
Just to note, I don't know why the unit step comes into play when we have a shift. I asked a question about it here 10 days ago but no answer yet. |
Determine the kernel of a homomorphism $θ: S_3 \rightarrow G$? | Hint: The kernel is in fact a normal subgroup, so try computing whether $\langle(1,2)\rangle$ is normal and, if not, what the smallest normal subgroup containing it has to be (which shouldn't be too hard, since $S_3$ is a pretty small group). |
Convergence at $\alpha$ | In fact this is correct, by the theorem of convergence. |
misunderstood in Fubini's theorem | The second and third are iterated integrals. The first is a single integral, which, in simple cases, can still be calculated "directly" (consider the integral of $1$ over $[0,1] \times [0,1]$). The power of Fubini's theorem is that, in the many cases where calculating an integral in more than one variable is too difficult, you can simply resort to an iterated one to get the right value.
EDIT: to address your edit, the first equality is something you get by transitioning into an iterated integral from the noniterated(?) integral. Then the second integral can be written
$$\int_{-n}^{n}dy\int_{-n}^{n}e^{-(x^2+y^2)}dx=\int_{-n}^{n}e^{-y^2}dy \int_{-n}^{n} e^{-x^2} dx$$
since with respect to integration over $x$, the $e^{-y^2}$ is a constant, so we can pull that factor out. Then we simply observe that the two integrals on the right are the same expression, just with different "dummy" variables. So we relabel both $x$ and $y$ with $t$.
Check out https://en.wikipedia.org/wiki/Gaussian_integral#Computation for a simple but important computation that has many consequences for many fields of math. |
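A numerical illustration of that computation, assuming scipy (truncating the domain, since the integrand is negligible outside $[-10,10]$):

```
from scipy import integrate
import numpy as np

inner = lambda y: integrate.quad(lambda x: np.exp(-(x**2 + y**2)), -10, 10)[0]
iterated, _ = integrate.quad(inner, -10, 10)
one_d, _ = integrate.quad(lambda t: np.exp(-t**2), -10, 10)
print(iterated, one_d**2, np.pi)  # all approximately pi
```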
Obtaining the area of a loop of the curve | There are only two loops, not four.
First, note that the LHS is always nonnegative, therefore, any solution $(x,y)$ satisfying the given relation must have $x^2 \ge y^2$. This precludes any points for which $y > |x|$ or $y < -|x|$.
Since the relation is also symmetric about the coordinate axes, we conclude that the behavior of the relation in the first quadrant, and specifically, in the wedge-shaped region $0 \le y \le x$, is sufficient to characterize the behavior in the entire plane.
From this, we simply express $y$ in terms of $x$: $$y = \sqrt{x^2 - \frac{x^4}{16}} = \frac{x}{4} \sqrt{16 - x^2}, \quad 0 \le y \le x.$$ Then integration of this function on $[0,4]$ gives, by symmetry, one-fourth of the total area enclosed.
Of course, you have an $a$ somewhere that is not reflected in the first equation you posted, so I have chosen to ignore it. |
Proving $x = y$ or $x = -y$ when $x^n = y^n$ and $n$ is even | Let $n=2p$. For convenience let us denote $y=a$. From the algebraic identities
\begin{eqnarray}
x^{2p}-a^{2p} &=&(x-a)\sum_{k=0}^{2p-1}a^{k}x^{2p-1-k}, \tag{1} \\
\sum_{k=0}^{2p-1}a^{k}x^{2p-1-k} &=&(x+a)\sum_{k=0}^{p-1}a^{2k}x^{2p-2-2k},\tag{2}
\end{eqnarray}
we conclude that
\begin{equation}
x^{2p}-a^{2p}=(x-a)(x+a)\sum_{k=0}^{p-1}a^{2k}x^{2p-2-2k}. \tag{3}
\end{equation}
Since for $a\neq 0$ the polynomial $\sum_{k=0}^{p-1}a^{2k}x^{2p-2-2k}$ on the right-hand side of (3) has no real
roots, it follows that the equation $x^{2p}-y^{2p}=0$ is equivalent to $(x-y)(x+y)=0$, thus proving that if $x^{n}=y^{n}$ and $ n $ is even, then $x=y$ or
$x=-y$.
The identities $(1)$ and $(2)$ can be justified by applying Ruffini's Rule twice: for identity $(1)$
$$
\begin{array}{c|cccccccc}
& 1 & 0 & 0 & \ldots & 0 & 0 & & -a^{2p} \\
a & & a & a^2 & \ldots & a^{2p-2} & a^{2p-1} & & a^{2p} \\
\hline
& 1 & a & a^{2} & \ldots & a^{2p-2} & a^{2p-1} & | & 0
\end{array}
$$
\begin{equation*}
x^{2p}-a^{2p}=(x-a)(x^{2p-1}+ax^{2p-2}+a^{2}x^{2p-3}+\cdots
+a^{2p-2}x+a^{2p-1}),
\end{equation*}
and for identity $(2)$
$$
\begin{array}{c|cccccccc}
& 1 & a & a^{2} & a^{3} & \ldots & a^{2p-2} & & a^{2p-1} \\
-a & & -a & 0 & -a^{3} & \ldots & 0 & & -a^{2p-1} \\
\hline
& 1 & 0 & a^{2} & 0 & \ldots & a^{2p-2} & | & 0
\end{array}
$$
$$x^{2p-1}+ax^{2p-2}+\cdots +a^{2p-2}x+a^{2p-1}=(x+a)(x^{2p-2}+a^{2}x^{2p-4}+a^{4}x^{2p-6}+\cdots +a^{2p-4}x^{2}+a^{2p-2})$$ |
Why can a derivative be non-linear? | Edited
OK, so I'll try to make this post more self-contained.
To find an equation of the tangent line to any function $y = f(x)$ at the point $A(x_0, y_0)$, assuming $f(x)$ is differentiable at that point,
one needs to calculate its algebraic form as
$$
y = y_0 + f'(x_0)(x - x_0)
$$
so, as you can see, even though you calculate $f'(x)$ to find the tangent line's slope, it's evaluated at $x_0$, which makes it a number, but that number will change when you move from point to point.
Now, let's consider that example
$$
y = x^3 \implies y' = 3x^2 \implies y = x_0^3 + 3x_0^2(x - x_0) = 3x_0^2 x - 2x_0^3
$$
A simple animation of the tangent line, as the point at which it is calculated moves, was provided below (animation omitted). |
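In place of the animation, a minimal sketch (plain Python) of how the slope and intercept change as the point of tangency moves:

```
def tangent_line(f, fprime, x0):
    # slope m and intercept q of the tangent y = m*x + q at x0
    m = fprime(x0)
    return m, f(x0) - m * x0

f = lambda x: x**3
fprime = lambda x: 3 * x**2
for x0 in (-1.0, 0.0, 0.5, 2.0):
    m, q = tangent_line(f, fprime, x0)
    print(f"x0 = {x0}: y = {m}x + {q}")   # matches y = 3*x0^2*x - 2*x0^3
```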
Calculating the determinant as a product without making any calculations | Use multilinearity of determinant:
$$\begin{vmatrix}na_1+b_1&na_2+b_2&na_3+b_3\\
nb_1+c_1&nb_2+c_2&nb_3+c_3\\
nc_1+a_1&nc_2+a_2&nc_3+a_3\end{vmatrix}=\begin{vmatrix}na_1&na_2&na_3\\
nb_1&nb_2&nb_3\\
nc_1&nc_2&nc_3\end{vmatrix}+\begin{vmatrix}na_1&na_2&b_3\\
nb_1&nb_2&c_3\\
nc_1&nc_2&a_3\end{vmatrix}+$$
$$+\begin{vmatrix}na_1&b_2&na_3\\
nb_1&c_2&nb_3\\
nc_1&a_2&nc_3\end{vmatrix}+\begin{vmatrix}na_1&b_2&b_3\\
nb_1&c_2&c_3\\
nc_1&a_2&a_3\end{vmatrix}+\begin{vmatrix}b_1&na_2&na_3\\
c_1&nb_2&nb_3\\
a_1&nc_2&nc_3\end{vmatrix}+\ldots$$
Observe that if we put
$$\Delta=\begin{vmatrix}a_1&a_2&a_3\\
b_1&b_2&b_3\\
c_1&c_2&c_3\end{vmatrix}$$
then we have that the four first determinants above equal (factor out constants from rows/columns):
$$n^3\Delta+n^2\begin{vmatrix}a_1&a_2&b_3\\b_1&b_2&c_3\\c_1&c_2&a_3\end{vmatrix}+n^2\begin{vmatrix}a_1&b_2&a_3\\b_1&c_2&b_3\\c_1&a_2&c_3\end{vmatrix}+n\begin{vmatrix}a_1&b_2&b_3\\b_1&c_2&c_3\\c_1&a_2&a_3\end{vmatrix}+\ldots$$
Well, develop the other three determinants left and sum up all. |
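Carrying out that expansion is mechanical; a sympy sketch checking the end result (the eight terms collapse to $(n^3+1)\Delta$):

```
import sympy as sp

n = sp.symbols('n')
a1, a2, a3, b1, b2, b3, c1, c2, c3 = sp.symbols('a1:4 b1:4 c1:4')

M = sp.Matrix([[n*a1 + b1, n*a2 + b2, n*a3 + b3],
               [n*b1 + c1, n*b2 + c2, n*b3 + c3],
               [n*c1 + a1, n*c2 + a2, n*c3 + a3]])
Delta = sp.Matrix([[a1, a2, a3], [b1, b2, b3], [c1, c2, c3]]).det()

print(sp.simplify(M.det() - (n**3 + 1) * Delta))  # 0
```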
Asymptotic distribution of MLE | You can use your cdf $F_{\hat{\theta}}$ to get the pdf for $\hat{\theta}$ (which you should notice is supported on $[0,\theta]$) by taking a derivative: $$f_{\hat{\theta}}(x)=n\Bigg(\frac{x+e^{-x}-1}{\theta+e^{-\theta}-1}\Bigg)^{n-1}\Bigg(\frac{1-e^{-x}}{\theta+e^{-\theta}-1}\Bigg)$$
Since $\hat{\theta}$ is supported on $[0,\theta]$, we have that $n(\hat{\theta}-\theta)$ is supported on $[-n\theta,0]$. Using the fact that $f_{n(\hat{\theta}-\theta)}(x)=\frac{1}{n}f_{\hat{\theta}}\big(\frac{x}{n}+\theta\big)$ we get $$f_{n(\hat{\theta}-\theta)}(x)=\Bigg(\frac{\frac{x}{n}+\theta+e^{-\big(\frac{x}{n}+\theta\big)}-1}{\theta+e^{-\theta}-1}\Bigg)^n\Bigg(\frac{1-e^{-\big(\frac{x}{n}+\theta\big)}}{\frac{x}{n}+\theta+e^{-\big(\frac{x}{n}+\theta\big)}-1}\Bigg)$$
The support of $f_{n(\hat{\theta}-\theta)}$ encompasses the entire negative real axis as $n \longrightarrow \infty$. The right-hand factor in the above expression approaches $\frac{1-e^{-\theta}}{\theta+e^{-\theta}-1}$ as $n$ gets large. Using the substitution $m=x/n$ along with L'Hopital's rule, you can show that $$\Bigg(\frac{\frac{x}{n}+\theta+e^{-\big(\frac{x}{n}+\theta\big)}-1}{\theta+e^{-\theta}-1}\Bigg)^n \longrightarrow \exp \Bigg\{\frac{x\big(1-e^{-\theta}\big)}{\theta+e^{-\theta}-1}\Bigg\}$$
Finally, the limiting distribution of $n(\hat{\theta}-\theta)$ is the pdf $f$ defined by $$f(x)=\frac{1-e^{-\theta}}{\theta+e^{-\theta}-1}\exp \Bigg\{\frac{x\big(1-e^{-\theta}\big)}{\theta+e^{-\theta}-1}\Bigg\} $$ which is supported on $(-\infty,0]$. In other words, the limiting distribution of $n(\hat{\theta}-\theta)$ is $-\text{Exponential}\Bigg(\frac{1-e^{-\theta}}{\theta+e^{-\theta}-1}\Bigg)$. |
Factorial moment of negative binomial | Just to give an actual answer for posterity, let us use the hint by @A.S.
The probability generating function (PGF) is:
$$ f(z) = \left(\frac{1-p}{1-pz}\right)^r, $$
and thus
$$ f^{(m)}(z) = \frac{p^m \left(\frac{1-p}{1-pz}\right)^r \prod_{i=0}^{m-1} (r + i)}{(1-pz)^m}, $$
so finally, evaluating at $z=1$,
$$ E[(X)_m] = \frac{p^m \prod_{i=0}^{m-1} (r + i)}{(1-p)^m}. $$ |
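A sympy check of the moment formula for a small $m$:

```
import sympy as sp

z, p, r = sp.symbols('z p r', positive=True)
m = 3   # check the third factorial moment, say

f = ((1 - p) / (1 - p*z))**r                  # the PGF above
deriv_at_1 = sp.diff(f, z, m).subs(z, 1)      # E[(X)_m] = f^(m)(1)
formula = p**m * sp.prod([r + i for i in range(m)]) / (1 - p)**m
print(sp.simplify(deriv_at_1 - formula))      # 0
```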
Knowing $X \sim N(0, \sigma)$, how come $E\left(\frac{\sum X_i^2}{n}\right)= \sigma?$ | "What I don't like is $X_i^2$"
Hint:
$$E[X_i^2] = \operatorname{Var}(X_i)+(E[X_i])^2.$$ |
Algebraic structure of Riemann Sphere | No, the Riemann sphere in its entirety is not a ring, algebra or field in any interpretation I know.
I think the most natural algebraic interpretation the Riemann sphere has is its identification with the complex projective line, so that it is a set upon which the Möbius group acts. So, not a field, ring or algebra, but the underlying space of a group action.
I suspect (but I don't know, personally) that one can say something more about it being a topological group action on the complex projective line. |
For $V$ vector space of dimension $n$ under $\mathbb C$ and $T: V \to V $ linear transformation , show $V= \ker T^n \oplus $ Im $T^n$ | The main point is that ${\rm Ker}\ T^n \cap {\rm Im}\ T^n = \{0\}$. Show that if $T^k x \ne 0$ and $T^{k+1} x = 0$, then $x, Tx, \ldots T^k x$ are linearly independent, and therefore $k \le n-1$. No need here for Jordan Canonical Form - in particular this works over any field. |
How to show the inclusion $M\subset\Bbb R^{k+n}$ is locally flat, where $M\subset\Bbb R^k$ is an $n$- dimensional topological manifold? | Let $p \in M$ be a point, and $(U, \varphi)$ be a chart neighborhood in $M$, $\varphi : U \to \Bbb R^n$ being the chart homeomorphism. Considering the embedding $M \subset \Bbb R^k$, find a neighborhood $V$ of $p$ in $\Bbb R^k$ such that $V \cap M = U$ and define a continuous extension $\widetilde{\varphi} : V \to \Bbb R^n$, so that $\widetilde{\varphi}|_U = \varphi$.
Now consider the embedding $M \subset \Bbb R^k \times \{0\} \subset \Bbb R^{k+n}$ into the first factor of $\Bbb R^{k+n} = \Bbb R^k \times \Bbb R^n$ and let $B$ be a ball around $p$ in $\Bbb R^{k+n}$ such that $B \cap (\Bbb R^k \times \{0\}) = V$. Define $\Phi : B \to \Bbb R^{k+n}$ by
$$\Phi(x, y) = (\varphi^{-1}(y + \widetilde{\varphi}(x)) - x, y + \widetilde{\varphi}(x))$$
Checking that $\Phi$ is a homeomorphism onto image is a triviality. Note that for any $(x, 0) \in U \times \{0\} \subset M$, $\Phi(x, 0) = (\varphi^{-1}(\varphi(x)) - x, \varphi(x)) = (0, \varphi(x)) \in \{0\} \times \Bbb R^n$, so $\Phi(U \times \{0\}) \subset \{0\} \times \Bbb R^n$. Thus, $M \subset \Bbb R^{k+n}$ is a locally flat submanifold at $p$.
I've decided to make an edit to explain the idea behind the map $\Phi$. For illustration it's easier to imagine a smaller dimensional example of a non locally flat embedding than the Alexander horned sphere. I shall use a properly embedded wild arc $A \subset \Bbb R^3$. We embed $A$ horizontally in $\Bbb R^3 \times \Bbb R^1$.
Our plan is to shift $A \subset \Bbb R^3 \times \{0\}$ vertically up along the $\Bbb R^1$-direction in $\Bbb R^3 \times \Bbb R^1$ so that it "snakes upward" diagonally as we go along the arc. The natural way to do it is to observe the arc admits a homeomorphism $\varphi : A \to \Bbb R$, which we extend by Tietze extension theorem to a map $\widetilde{\varphi} : \Bbb R^3 \to \Bbb R$, and use this to construct a shear homeomorphism
$$\Phi_1 : \Bbb R^3 \times \Bbb R^1 \to \Bbb R^3 \times \Bbb R^1, \; \Phi_1(x, y) = (x, y + \widetilde{\varphi}(x))$$
The advantage of using $\varphi$ along the arc while shearing is that it "unwinds" the many twists in $A$ because $\varphi$ by definition topologically simplifies the arc to the straight real line.
What was the point of this construction? The point is now $\Phi_1(A) = \{(x, \varphi(x)) : x \in A\}$ is the graph of $\varphi^{-1} : \Bbb R \to A \subset \Bbb R^3$ over the $\{0\} \times \Bbb R^1$ factor, and graphs of functions are always locally flat submanifolds. Indeed, consider a function $f : \Bbb R^n \to \Bbb R^m$ and $\Gamma_f = \{(x, f(x)) : x \in \Bbb R^n\}$ be the graph in $\Bbb R^n \times \Bbb R^m$. Then another shear homeomorphism
$$\Phi_2 : \Bbb R^n \times \Bbb R^m \to \Bbb R^n \times \Bbb R^m, \; \Phi_2(x, y) = (x, y - f(x))$$
satisfies $\Phi_2(\Gamma_f) \subset \Bbb R^n \times \{0\}$, making $\Gamma_f \subset \Bbb R^n \times \Bbb R^m$ locally flat.
The final homeomorphism $\Phi : \Bbb R^4 \to \Bbb R^4$ such that $\Phi(A) \subset \{0\} \times \Bbb R$, making $A$ locally flat, can be obtained from composing the homeomorphisms $\Phi_1$ and $\Phi_2$, where $\Phi_2$ is done with the Euclidean factors reversed. None of this is special to the example I took, and one can check the formula for $\Phi$ obtained in this manner is exactly what I have above. |
The inverse Fourier transform of $1$ is Dirac's Delta | Let me give a calculus-based "flavor" of the proof that is OK on the physics level. Denote your function by $f_L(x)$ (since it depends on $L$), so
$$f_L(x) = \frac{ \sin L x}{\pi x}.$$
This function is even, peaked around $x=0$ and decays as $O(1/x)$ to both sides.
Around the origin,
$$f_L(x) = \frac{L}{\pi} \left[1 - \frac{L^2 x^2}{3!} + \frac{L^4 x^4}{5!} + \ldots\right].$$ This looks like a Gaussian (just plot it). In fact, if you write
$$h_L(x) = \frac{L}{\pi} \exp\left[- \frac{x^2}{2 \sigma^2} \right]$$
then you can show that
$$f_L(x) - h_L(x) = O(x^4)$$
if you pick $\sigma = \sqrt{3}/L$. So up to $O(x^4)$ terms, $f_L(x)$ is bell-shaped and has a width $\sim 1/L.$ But this is precisely what we want from a delta function: it sniffs out the value of the function you integrate it with at the origin.
We still need to check the prefactor. It suffices to do this for any function we like, say
$$Z(x) = \exp(-x^2).$$
Now
$$\int Z(x) f_L(x) dx = \text{Erf} \frac{L}{2}$$
where $\text{Erf}$ is the error function. But
$$\lim_{L \rightarrow \infty} \int Z(x) f_L(x) dx = \lim_{L \rightarrow \infty} \text{Erf} \frac{L}{2} = 1 = Z(0)$$
so we're done.
[Edit: you should also check that $\int f_L(x) dx = 1$ for all $L$.] |
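A numerical check of the Erf computation, assuming scipy (using np.sinc to avoid the removable singularity at $x=0$, and truncating to $[-10,10]$ since $e^{-x^2}$ decays fast):

```
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def f_L(x, L):
    # sin(L*x)/(pi*x), written via np.sinc(t) = sin(pi*t)/(pi*t)
    return (L / np.pi) * np.sinc(L * x / np.pi)

for L in (1.0, 5.0, 25.0):
    val, _ = quad(lambda x: np.exp(-x**2) * f_L(x, L), -10, 10, limit=500)
    print(L, val, erf(L / 2))   # the two values agree, and tend to 1
```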
Inclusion Exclusion Principle (language confusion) | Using De Morgan's law, we have that the complement of "$B$ and $C$" is "not $B$ or not $C$". Therefore you need to find the cases where only $A$ and $B$ occur, as well as only $A$ and $C$, and then subtract these probabilities from $1$ to get the wanted probability. |
What is the solution of this recursion, that's defined in terms of a sum, but with this $1$ odd twist? | We can simplify the recurrence to $F(n)=F(n-1)+F(\lfloor n/k\rfloor)$ for $n>0$ and normalize it by putting $F(0)=1$. We can draw a graph of $F(n)$ but it is not easy to find even asymptotic growth of $F(n)$ from it. Nevertheless, such sequences are known and for particular values of $k$ belong to the on-line encyclopedia of integer sequences: $k=2$, $k=3$, $k=4$, and $k=5$. There is a lot of references, especially for the first sequence. |
Littlewood's 1914 proof relating to Skewes' number | It does look like "$\psi(x) -1$" should be "$\psi(x) -x$". Since $\psi(x) \approx x$, $\psi(x) - 1$ is still $\approx x$.
Do you have a link to Littlewood's paper? |
Does $g$ behave like $t^k$ near the origin? | I assume you are asking "whether $G(t)$ always behaves like $t^k$" (as in your question) instead of "whether $g(t)$ always behaves like $t^k$" (as in the title) for small $t$.
The answer is No.
The question is equivalent to: given a monotonic increasing $C^2$ function $G : [0,\infty) \to [0,\infty)$ satisfying
$$G(0) = 0 \quad\quad\text{and}\quad\quad 1 < b_1 \le \frac{d \log G(t)}{d \log t} \le b_2 < \infty\text{ for }t \in (0,\infty),$$
Is it always true that there exists some $k_G > 0$ such that:
$$k_G = \lim_{t\to 0+} \frac{\log G(t)}{\log t} \quad\text{ and }\quad \limsup_{t\to 0+} | \log G(t) - k_G \log t| < \infty$$
Define $G(t)$ by:
$$G(t) = \begin{cases}t^3 e^{f(\log t)}, & t > 0\\0, & t = 0\end{cases}$$
where $f(s) = \sqrt[4]{s^2+1}$. It is not hard to show that for any $s \in (-\infty,\infty)$,
$$|f'(s)| = \left|\frac{s}{2(s^2+1)^{\frac34}}\right| \le \frac{1}{\sqrt[4]{108}} \approx 0.3102 $$
This implies for $t \in (0,\infty)$,
$$\frac{d \log G(t)}{d \log t} = 3 + f'(\log t) \implies 2.6 < \frac{d \log G(t)}{d \log t} < 3.4 $$
Furthermore, the $t^3$ factor in $G(t)$ makes it fall to zero fast enough as $t$ approaches $0$ and turns $G(t)$ into a $C^2$ function over $[0,\infty)$. More precisely, this means:
$$\begin{align}
\lim_{t\to0+} G'(t) &= 0 = G'(0) \stackrel{def}{=} \lim_{h\to 0+}\frac{G(h)}{h}\\
\lim_{t\to0+} G''(t) &= 0 = G''(0) \stackrel{def}{=} \lim_{h\to 0+}\frac{G'(h)}{h}
\end{align}$$
Notice
$$\lim_{t\to0+}\frac{\log G(t)}{\log t} = \lim_{s\to-\infty}\frac{3s+f(s)}{s} = 3$$
$k_G$ for this particular $G$ is $3$. However,
$$\limsup_{t\to0+}|\log G(t) - k_G \log t| = \limsup_{s\to-\infty}|f(s)| \ge \limsup_{s\to-\infty}\sqrt{|s|} = \infty$$
This means in general, $G(t)$ need not behave like any $t^k$ for small $t$. |
showing a space is complete | So you've reduced things to proving the following:
A uniform limit of real bounded functions is bounded.
Let $f$ be the pointwise limit of the sequence $(f_n)$, i.e. $f(x) = \lim_{n\to\infty} f_n(x)$ for all $x\in\mathbb{R}$. The limit is, of course, well-defined.
What's a good way to bound $|f(x)|$ uniformly? Well, the only information we've got about the size of anything is on $|f(x) - f_n(x)|$ (uniform convergence). So we'll try the triangle inequality:
$$
|f(x)| \leq |f(x) - f_n(x)| + |f_n(x)|.
$$
Is there a way to uniformly bound the right-hand side? (Hint: Where does each $f_n$ lie? Can you pick a good $n$ to make things uniformly bounded?) |
What's wrong with this formula for the dot product of a vector and a matrix acting on that vector? | I think what you're saying is something like this:
$$\sum_{j=1}^n (R_k)_jv_j=\left(\sum_{j=1}^n (R_k)_j\right) \left( \sum_{j=1}^n v_j \right)$$
This is your mistake: a sum of products is not the product of the sums. This is like saying the following:
$$\sum_{j=1}^n (1)(1)=\left(\sum_{j=1}^n 1\right) \left( \sum_{j=1}^n 1\right)$$
Clearly, the left side is $n$ while the right side is $n^2$, so this kind of manipulation is invalid by this counterexample. |
Does ZFC decide every question about finitely generated groups? | You're not very concrete about what you consider a "statement about f.g. groups", but presumably we can speak about the property of being a finitely generated free abelian group.
If $G_1$ is a free abelian group on $n$ generators and $G_2$ is a free abelian group on $m$ generators, then the direct product $G_1\times G_2$ is a free abelian group on $n+m$ generators, and the tensor product $G_1\otimes G_2$ is a free abelian group on $nm$ generators.
Therefore, if you allow speaking about direct products and tensor products of abelian groups, then you can speak about addition and multiplication of natural numbers, and then you can express every arithmetical sentence. Among these is "ZFC is consistent", which is undecidable by ZFC itself (unless ZFC is, in fact, not consistent). |
Outer product - what about it's (co/contra)variance? | I am not sure if any of this answers your question(s), but the comment I was trying to write got longer and longer. I hope something of this is helpful:
First of all, there are different conventions in different areas of mathematics, not all of them use the idea of "vector variance" ("bottom index means row vector" etc.) which to my knowledge is mostly used in areas related to physics. In other areas, the position of an index or the distinction between row and column vectors is not relevant.
For me, it seems the main problem you are having with the article is due to these differences in conventions.
In the article you linked to, first, we are given
$$ u = (u_1,\dots,u_m) $$
and
$$ v = (v_1,\dots,v_n)$$
(row vectors by the looks of it) but when relating the outer product to matrix multiplication, it is said that $u$ and $v$ should be considered as $m \times 1$- and $n \times 1$-column vectors, respectively, in order for
$$\begin{equation} u \otimes v = u v^T \end{equation}$$
to make sense.
The outer product is essentially defined as a map
$$\otimes : \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}^{m \times n} $$
with position $(i,j)$ of $u \otimes v$ being the product of the $i$th entry of $u$ with the $j$th entry of $v$. The position of the indexes (top/bottom) is not relevant to define this map and can be chosen arbitrarily (at least from the viewpoint of someone not adhering to the vector variance convention).
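A concrete numpy illustration of this definition and of the $u \otimes v = u v^T$ identification discussed above:

```
import numpy as np

u = np.array([1.0, 2.0, 3.0])   # m = 3
v = np.array([4.0, 5.0])        # n = 2

A = np.outer(u, v)                          # (i, j) entry is u[i]*v[j]
B = u.reshape(-1, 1) @ v.reshape(1, -1)     # column vector times row vector
print(np.allclose(A, B))                    # True
```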
Concerning the question of orthogonal coordinate systems,
the article explains the distinction between the outer and the inner product and thereby mentions the identity $$ \langle u,v \rangle = u^T v $$
(for $m = n$ and column vectors $u$ and $v$)
so the matrix and vectors representations are with respect to an orthonormal coordinate system.
In fact, I would say the article is written so as to consider the vector space $\mathbb{R}^n$ with its usual inner product and does not consider an abstract inner product space where the choice of basis is not that straightforward.
That being said, the distinction between bottom and top indexes has advantages when we consider the usage of matrices. In this context,
we consider the column space $\mathbb{R}^{n \times 1}$ and identify its dual space (the space of linear maps $\mathbb{R}^{n \times 1} \to \mathbb{R}$) with the row space $\mathbb{R}^{1 \times n}$.
The space of $m \times n$-matrices can now be used in various ways which leads to the different ways of writing indexes on top and on the bottom.
An $m \times n$-matrix $[A^i_j]$ is used to represent a linear map $\mathbb{R}^{n \times 1} \to \mathbb{R}^{m \times 1}, [v^j] \mapsto [\sum_j A^i_j v^j]$, whereas an $m \times n$-matrix $[A_{i,j}]$ is used to represent a bilinear map $\mathbb{R}^{m \times 1} \times \mathbb{R}^{n \times 1} \to \mathbb{R}, ([u^i],[v^j]) \mapsto \sum_{i,j} u^i A_{i,j} v^j$. We can thus interpret the space of $m \times n$-matrices in different ways.
For now, let us consider $\mathbb{R}^{m \times n}$ as space of linear maps, so that composition of linear maps corresponds to matrix multiplication
$$\mathbb{R}^{l \times m} \times \mathbb{R}^{m \times n} \to \mathbb{R}^{l \times n}, ([A^i_{j}],[B^{j}_{k}]) \mapsto [\sum_{j} A^i_j B^j_k].$$
The problem is now that the article does not have this viewpoint. Of course we can define any sorts of maps
$$ ([u^i],[v^j]) \mapsto [u^i v^j]$$
$$ ([u^i],[v^j]) \mapsto [u^i v_j]$$
$$ ([u^i],[v^j]) \mapsto [u_i v^j]$$
$$ ([u^i],[v^j]) \mapsto [u_i v_j]$$
but which of these make sense?
If we want to have $u \otimes v = u v^T$ for column vectors $u$ and $v$, then we have
$$ [u^i] \otimes [v^j] = [u^i][v^j]^T = [u^i][v_j] = [u^i v_j]$$
as you already calculated which gives a matrix representing a linear map. In fact, this is the map $w \mapsto \langle v,w \rangle u$.
For other combinations of column and row vectors we clearly do not have $u \otimes v = u v^T$ because the right hand side just does not make any sense in general or would yield a scalar instead of a matrix.
Still, you could argue that the definition of outer product (position $(i,j)$ is $i$th component times $j$th component) makes sense regardless of whether the input vectors are rows or columns and this is true. However, if you try to express this in terms of matrix multiplication and transposition, if needed, and use these to calculate the tensor variance of the corresponding matrix, you will find that all combinations of column and row vector yield a matrix $[T^i_j]$. |
What is the cardinality of the set of mathematical structures? | The correct term is "a proper class", which means that this collection is not a set.
And while we can certainly talk about proper classes and to some extent treat them as sets for some rudimentary things (e.g. intersections, unions, products), these are not sets. And one thing that proper classes do not have is cardinality.[1]
Your idea is correct, but your argument is a bit lacking. Indeed, fixing a language, we can find arbitrarily large structures for that language, which implies that this is a proper class. But there is no "direct connection" between the cardinality of the collection of structures and the cardinality of a new structure. If $X$ is a countable set, then $X\cup\{X\}$ is also a countable set.
The correct approach, however, would be to argue that if $X$ is a set of $S$-structures, then there is some cardinal $\kappa$ such that no member of $X$ has size $\kappa$. Therefore taking any $S$-structure of size $\kappa$, it will not be in $X$. So no set can exhaust all the $S$-structures.
Footnotes.
[1] It is actually possible to define cardinality for proper classes, by talking about the existence of bijections between the classes. But this requires a better grasp of axiomatic set theory, and understanding what it means for objects to live in the meta-theory and how set theory interacts with its meta-theory. So let's just agree that for now, proper classes do not have cardinality. |
Why isn't this set of vectors a basis of planar subspace in ${\bf R}^3$ | A basis of $S$ is a set of maximally linearly independent vectors in $S$ which minimally generate $S$. You are right, the canonical basis of $\mathbb{R^3}$ is "overqualified". |
proof that $\frac{a_{4n}-a_2}{a_{2n+1}}$ : integer | It is easy to prove by induction that $a_n=aF_{n-2}+bF_{n-1}$, where the $F_k$ are the Fibonacci numbers. I want to prove that $\frac{a_{4n}-a_2}{a_{2n+1}}=F_{2n-2}+F_{2n}$.
Therefore, we must prove that $$aF_{4n-2}+bF_{4n-1}-b=(aF_{2n-1}+bF_{2n})(F_{2n-2}+F_{2n})$$ Therefore, we need prove that $F_{4n-2}=F_{2n-1}(F_{2n-2}+F_{2n})$ and $F_{4n-1}-1=F_{2n}(F_{2n-2}+F_{2n})$.These two equalities are easily proved by well-known formulas from Wikipedia. |
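A quick numerical check of those two Fibonacci identities (plain Python, with $F_0=0$, $F_1=1$):

```
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(1, 12):
    q = fib(2*n - 2) + fib(2*n)                 # the claimed quotient
    assert fib(4*n - 2) == fib(2*n - 1) * q
    assert fib(4*n - 1) - 1 == fib(2*n) * q
print("both identities hold for n = 1..11")
```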
Radius of a Sphere inscribed in a Convex Polyhedron | Hint to start: $P T T'$ is an equilateral triangle, and is the cross-section of your polyhedron through $P$ orthogonal to $PQ$. Same goes for $Q S S'$. |
how to find the geometric centroid of a trapezoid? | If you choose a coordinate system where $b$ is on the positive $x$ axis, between origin $(0, 0)$ and $(b, 0)$, the four vertices of the trapezoid are
$$(0, 0), \quad (b, 0), \quad (f+a, h), \quad (f, h)$$
Since trapezoids are simple polygons, we can use the shoelace formula for the area of a 2D polygon,
$$A = \displaystyle \frac{1}{2} \sum_{i=0}^{n-1} \left( x_i y_{i+1} - x_{i+1} y_i \right) \tag{1}\label{None1}$$
and the centroid of a simple polygon:
$$\left\lbrace ~ \begin{aligned}
\overline{x} &= \displaystyle \frac{1}{6 A} \sum_{i=0}^{n-1} (x_i + x_{i+1})(x_i y_{i+1} - x_{i+1} y_i) \\
\overline{y} &= \displaystyle \frac{1}{6 A} \sum_{i=0}^{n-1} (y_i + y_{i+1})(x_i y_{i+1} - x_{i+1} y_i) \\
\end{aligned} \right . \tag{2}\label{None2}$$
where $x_0 = 0$, $x_1 = b$, $x_2 = f + a$, $x_3 = f$, and $x_4 = x_0 = 0$; and $y_0 = 0$, $y_1 = 0$, $y_2 = h$, $y_3 = 0$, and $y_4 = y_0 = 0$.
These yield
$$\begin{aligned}
\overline{x} &= \displaystyle \frac{f (b + 2 a) + (a + b)^2 - a b}{3 (a + b)} \\
\overline{y} &= \displaystyle h \frac{b + 2 a}{3 (a + b)} \\
\end{aligned} \tag{3}\label{None3}$$
You can solve $f$ and $h$ by splitting the trapezoid into two right triangles and a rectangle in between, and solving the system of three equations and three unknowns (the third being $g$, such that $f + a + g = b$). It yields
$$\begin{aligned}
f &= \displaystyle \frac{b - a}{2} + \frac{c^2 - d^2}{2 (b - a)} \\
g &= \displaystyle \frac{b - a}{2} - \frac{c^2 - d^2}{2 (b - a)} \\
h &= \displaystyle \frac{\sqrt{ 2 (d^2 + c^2)(b - a)^2 - (d^2 - c^2)^2 - (b - a)^4}}{2 ( b - a ) } \\
\end{aligned} \tag{4}\label{None4}$$
Substituting $f$ in the centroid $x$ coordinate indeed yields, after simplification, $\overline{x} = (b/2) + (2 a + b)(c^2 - d^2) / (6 (b^2 - a^2))$.
As to references, the above links to Wikipedia have references you can use.
Or, you can start with the integral definition.
Given the characteristic function $g(x, y)$ of the shape, $g(x,y)$ being $1$ inside the shape and $0$ outside, the centroid is
$$\begin{aligned}
\overline{x} &= \displaystyle \frac{ \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x g(x, y) ~ d x ~ d y }{ \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(x, y) ~ d x ~ d y } \\
\overline{y} &= \displaystyle \frac{ \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} y g(x, y) ~ d x ~ d y }{ \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(x, y) ~ d x ~ d y } \\
\end{aligned} \tag{5}\label{None5}$$
The characteristic function is the weight function, so one might consider this the definition of the 2D centroid. (For references, look for "centroid integral".)
For the $y$ axis, the integral simplifies to
$$\begin{aligned}
\overline{y} &= \frac{ \int_{0}^{h} y \left(a + \frac{b - a}{h} y\right) ~ dy }{ \int_{0}^{h} a + \frac{b - a}{h} y ~dy } \\
~ &= \frac{ ~ \frac{ 2 b + a }{ 6 } h^2 ~ }{ ~ \frac{ b + a }{2} h ~ } \\
~ &= \frac{ 2 b + a }{ 3 ( b + a ) } h \\
\end{aligned} \tag{6a} \label{None6a}$$
For the $x$ axis, we need to split the integral into three parts. From above, we already know the divisor integral evaluates to $h (b + a) / 2$:
$$\begin{aligned}
\overline{x} &= \frac{2}{h (b + a)} \biggr( \int_{0}^{f} x \left( \frac{h}{f} x \right) ~ d x ~ + \int_{f}^{f+a} x h ~ d x ~ + \int_{f+a}^{b} x \left(\frac{b - x}{b - f - a} h \right) ~ d x \biggr) \\
~ &= \frac{2}{h (b + a)} \biggr( \frac{f^2 h}{3} ~ + ~ \frac{2 a f h + a^2 h}{2} ~ + ~ \frac{ h }{6} \left( f ( b - 4 a - 2 f ) + b^2 + a b - 2 a^2 \right) \biggr) \\
~&= \frac{(2 a + b) f + (a + b)^2 - a b}{3 (a + b)} \\
\end{aligned}$$
which is exactly the same as we got in the polygon form. |
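A numerical cross-check of formulas $(2)$ and $(3)$ on hypothetical values $b=6$, $a=3$, $f=1$, $h=2$, assuming numpy:

```
import numpy as np

def polygon_centroid(pts):
    # shoelace-based area and centroid; pts are vertices in CCW order
    x, y = pts[:, 0], pts[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    cross = x*yn - xn*y
    A = cross.sum() / 2.0
    cx = ((x + xn) * cross).sum() / (6*A)
    cy = ((y + yn) * cross).sum() / (6*A)
    return A, cx, cy

b, a, f, h = 6.0, 3.0, 1.0, 2.0
A, cx, cy = polygon_centroid(np.array([[0, 0], [b, 0], [f+a, h], [f, h]]))
print(A, cx, cy)                                        # 9.0 2.777... 0.888...
print((f*(b + 2*a) + (a + b)**2 - a*b) / (3*(a + b)),   # formula (3), x
      h*(b + 2*a) / (3*(a + b)))                        # formula (3), y
```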
Evaluate $\int _{D}\frac{\sin z}{z^2-z}dz$ | Partial fraction expansion gives $$\frac{1}{z^2-z}=\frac{1}{z-1}-\frac{1}{z}$$
Then, we can write
$$\oint_{|z-1|=2}\frac{\sin(z)}{z^2-z}\,dz=\oint_{|z-1|=2}\frac{\sin(z)}{z-1}\,dz-\oint_{|z-1|=2}\frac{\sin(z)}{z}\,dz \tag1$$
Note that the integrand of the second integral on the right-hand side of $(1)$ is analytic. Therefore, Cauchy's Integral Theorem guarantees that $\oint_{|z-1|=2}\frac{\sin(z)}{z}\,dz=0$.
Using Cauchy's Integral Formula to evaluate the first integral on the right-hand side of $(1)$ reveals $\oint_{|z-1|=2}\frac{\sin(z)}{z-1}\,dz=2\pi i \sin(1)$.
Putting it together yields
$$\oint_{|z-1|=2}\frac{\sin(z)}{z^2-z}\,dz=2\pi i \sin(1)$$ |
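The same conclusion via residues in sympy (both $z=0$ and $z=1$ lie inside $|z-1|=2$, but the singularity at $0$ is removable):

```
import sympy as sp

z = sp.symbols('z')
f = sp.sin(z) / (z**2 - z)
print(sp.residue(f, z, 0))   # 0  (removable singularity)
print(sp.residue(f, z, 1))   # sin(1)
print(2 * sp.pi * sp.I * (sp.residue(f, z, 0) + sp.residue(f, z, 1)))
```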
Turing machine notation, need translation | The first Turing Machine is a Turing Machine represented in the form of states.
The second one is a "complex" Turing Machine represented in the form of basic Turing Machines; complex in the sense that it is made up of smaller, more basic Turing Machines combined in some form. Ra is a Turing Machine whose head shifts right along the input tape until it reads a. "#" is a Turing Machine which replaces the input symbol pointed to by the head on the input tape with "#". Finally, a Turing Machine RaL is equivalent to: Ra → L. This means the equivalent Turing Machine as a whole will move to the right until it finds an a on the input tape; the arrow without any symbols represents that the Turing Machine will move to the left (since the 'L' Turing Machine is present at the end of the arrow) on encountering any symbol in the alphabet.
I recommend you read 'Elements of The Theory of Computation' written by Harry R. Lewis if you still don't understand. |
Entire functions for which the absolute value is the sum of functions of $x$ and $y$ | A function that is the sum of a function of $x$ and a function of $y$ has zero mixed partial derivative $\frac{\partial^2}{\partial x\partial y}$ (and conversely). This partial can be related to Wirtinger derivatives:
$$
\frac{\partial^2 }{\partial z^2}-\frac{\partial^2 }{\partial \bar z^2}
=\frac14 \left(\frac{\partial }{\partial x}-i\frac{\partial }{\partial y} \right)^2 - \frac14 \left(\frac{\partial }{\partial x}+i\frac{\partial }{\partial y} \right)^2 = -i\frac{\partial^2}{\partial x\partial y}
$$
So,
$$\frac{\partial^2 |f|^2}{\partial z^2} = \frac{\partial^2 |f|^2}{\partial \bar z^2}$$
Write $|f|^2=f\bar f$ and differentiate, using the fact that $f$ is holomorphic:
$$f'' \bar f = f \overline{f''} $$
It follows that the (meromorphic) function $f''/f$ is real and therefore identically constant.
The solutions of the equation $f''=Cf$ are well-known: $Ae^{\alpha z}+Be^{-\alpha z}$ where $\alpha ^2=C$, as well as $f(z)=Az+B$ if $C=0$. Note that $\alpha $ may be complex.
The preceding describes all entire functions such that $|f(z)|^2=G(x)+H(y)$ for some $G,H$. Since $|f|^2\ge 0$, the summands $G,H$ can also be taken nonnegative.
It remains to take the square root. In general, a smooth nonnegative function need not be the square of a smooth function (reference). But a nonnegative real-analytic functions $G$ is the square of some real-analytic function $g$. Indeed, the zeros of $G$ form a discrete set and near each zero, $G$ is represented as $G(x)=(x-a)^{2k} r(x)$ with $r(a)> 0$. Then $G(x) = \pm (x-a)^k \sqrt{r(x)}$ works locally, and one can choose $\pm$ signs consistently. |
Is bifunctor $\hom_C(\cdot,\cdot)$ non-degenerated | If the isomorphism is natural in $Z$, then yes, this is true since the Yoneda embedding is fully faithful. This is an immediate consequence of the Yoneda lemma. Note that in this case we only require the natural isomorphism to be one of sets, i.e. we regard the $\mathrm{hom}$-functor as $\mathrm{Set}$-valued. But since isomorphic groups are in particular isomorphic sets, this does not make any difference here as far as I know.
In fact this is true in any locally small category (that is, in one where $\mathrm{hom}_C(X,Y)$ is a set). |
$\int_0^{2\pi} \left| \sum_{n\in \mathbb N} e^{-nt} e^{in\theta} \right| d\theta \leq C(t) C'$? | Hint: Triangle inequality shows
$$\left|\sum_{n\in \mathbb N} e^{-nt} e^{in\theta}\right|\leq \sum_{n\in \mathbb N}\left| e^{-nt} e^{in\theta}\right| = \sum_{n\in \mathbb N} e^{-nt} = \dfrac{1}{e^t-1}$$
for $t>0$. |
ZFC,unprovability of existence of a countable model,Skolem construction and paradox | We can have a model $V$ of ZFC, which is uncountable, but which does not think there exist any countable models of ZFC: Externally, Lowenheim-Skolem gives us a countable elementary submodel $M\subseteq V$, but there's no reason to believe $M\in V$, so $V$ might not "see" that such an $M$ exists at all.
Note also that $V$ may "think" that $V\not\models ZFC$! This is because if $V$ has nonstandard natural numbers, this will yield nonstandard axioms of what $V$ thinks is $ZFC$, and it's possible that one of these might be false in $V$ (actually, this phrase doesn't make sense - the sentence in question will be nonstandard, so infinite in length, so it's not even clear what "false in $V$" means - but this can be made precise and true with some effort). So even ignoring the fact that $V$ can't construct its own theory, $V$ might think "$ZFC$ is false," even though $V$ satisfies every actual axiom of $ZFC$. (Lest you think this is a dodge, by the way, note that nonstandard natural numbers are crucial to Godel's theorem, so this is really very much in the right spirit.) |
Combinations' Problem | He had one meal alone and one meal with each friend alone. One meal had all 6 friends present. All other meals involved 5 attendees and one absent friend. The easiest way to solve this is to make a table for all 14 dinners and all 6 friends.
Below is a solution where 1 means present (though it would work identically in practice if 0 meant present).
$$\begin{array}{lcccccc}
\mbox{Friend} & Adam & Bob & Charlie & Dave & Evgeni & Fred \\
\mbox{Dinner 1} & 1 & 1 & 1 & 1 & 1 & 1 \\
\mbox{Dinner 2} & 0 & 1 & 1 & 1 & 1 & 1 \\
\mbox{Dinner 3} & 1 & 0 & 1 & 1 & 1 & 1 \\
\mbox{Dinner 4} & 1 & 1 & 0 & 1 & 1 & 1 \\
\mbox{Dinner 5} & 1 & 1 & 1 & 0 & 1 & 1 \\
\mbox{Dinner 6} & 1 & 1 & 1 & 1 & 0 & 1 \\
\mbox{Dinner 7} & 1 & 1 & 1 & 1 & 1 & 0 \\
\mbox{Dinner 8} & 0 & 0 & 0 & 0 & 0 & 0 \\
\mbox{Dinner 9} & 1 & 0 & 0 & 0 & 0 & 0 \\
\mbox{Dinner 10} & 0 & 1 & 0 & 0 & 0 & 0 \\
\mbox{Dinner 11} & 0 & 0 & 1 & 0 & 0 & 0 \\
\mbox{Dinner 12} & 0 & 0 & 0 & 1 & 0 & 0 \\
\mbox{Dinner 13} & 0 & 0 & 0 & 0 & 1 & 0 \\
\mbox{Dinner 14} & 0 & 0 & 0 & 0 & 0 & 1 \\
\end{array}$$ |
how to detect the points on which Newton Raphon method will give a oscillating sequence. | Somewhat more generally, suppose $f(x) = x^3 + a x^2 + b x + c$. The corresponding Newton iteration is $x_{n+1} = g(x_n)$ where
$$g(x) = x - \frac{f(x)}{f'(x)} = \frac {2 x^3 + a{x}^{2}-c}{3 x^2 +2\,ax+b}$$
A $2$-cycle for this iteration is $(x, g(x))$ where $g(g(x)) = x$ but $f(x) \ne 0$. It can be verified that such a $2$-cycle occurs when
$$20\,{x}^{6}+40\,a{x}^{5}+ \left( 27\,{a}^{2}+19\,b \right) {x}^{4}+
\left( 6\,{a}^{3}+27\,ab-5\,c \right) {x}^{3}+ \left( 9\,{a}^{2}b-5\,
ac+8\,{b}^{2} \right) {x}^{2}+ \left( -2\,{a}^{2}c+5\,a{b}^{2}+bc
\right) x-abc+{b}^{3}+2\,{c}^{2}
= 0$$
In particular this occurs for $x = 0$ and $x = g(0) = -c/b$ when $-abc + b^3 + 2 c^2 = 0$ but $c \ne 0$, which is the case for your example ($a=0, b=-1/2, c=1/4$).
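As a quick numerical confirmation of this example, here is a short sympy sketch (illustrative code, not part of the original answer):

```python
import sympy as sp

x = sp.symbols('x')
a, b, c = 0, sp.Rational(-1, 2), sp.Rational(1, 4)
f = x**3 + a*x**2 + b*x + c
g = sp.cancel(x - f / sp.diff(f, x))  # one Newton step

x1 = g.subs(x, 0)    # expect -c/b = 1/2
x2 = g.subs(x, x1)   # expect 0: the iteration oscillates 0 -> 1/2 -> 0
print(x1, x2)        # 1/2 0
print(-a*b*c + b**3 + 2*c**2)  # 0, so the 2-cycle condition above holds
```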
Exercise: find $a,b,c$ such that $0, g(0), g(g(0))$ forms a $3$-cycle. |
How to prove that this function all over the positive integers gives us this sequence? | Idea:
In the sequence, $a_n$ becomes $m$ when $n=\sum\limits_{i=0}^m i=\dfrac{m(m+1)}2$; i.e., $m^2+m-2n=0$.
Solving this quadratic for $m$, we get $m=\dfrac{-1+\sqrt{1+8n}}2$. |
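For instance (a quick check of the formula, not in the original answer): $n=6=\frac{3\cdot 4}{2}$ gives $m=\frac{-1+\sqrt{1+48}}{2}=3$, as expected.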
Finding Laurent Series in annulus $\{2<|z|<3\}$ | Hint: In your case the annulus is $2<|z|<3$. So you can consider $2<|z|$ as $|2/z|<1$ and similarly $|z|<3$ as $|z/3|<1$ and then proceed. Hope this helps.
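For a concrete illustration (an assumed example, since the question's actual function isn't shown here), take $f(z)=\frac{1}{z-2}-\frac{1}{z-3}$ on $2<|z|<3$:
$$\frac{1}{z-2}=\frac{1}{z}\cdot\frac{1}{1-2/z}=\sum_{n=0}^{\infty}\frac{2^n}{z^{n+1}}\quad(|z|>2),\qquad \frac{1}{z-3}=-\frac{1}{3}\cdot\frac{1}{1-z/3}=-\sum_{n=0}^{\infty}\frac{z^n}{3^{n+1}}\quad(|z|<3).$$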
Refinement of $a^{4b^2}+b^{4a^2}\leq1$ | Update
It remains to prove the case when $b \in [\frac{3}{10}, \frac{1}{2}]$.
From Proposition 5.2 in [1], we have $a^{2b} + b^{2a} \le 1$. Since $0 < a^{2b} + b^{2a} \le 1$ and $ab \in (0,1]$, we have $a^{2b} + b^{2a} \le (a^{2b} + b^{2a})^{ab}$, so
it remains to prove the following results (see The.old.crap's work):
Claim 1: Let $a = 1-b$ and $b\in [\frac{3}{10}, \frac{1}{2}]$. Then
$a^{4b^2} + b^{4a^2} \le a^{2b} + b^{2a}$.
[1] Vasile Cirtoaje, "Proofs of three open inequalities with power-exponential functions",
The Journal of Nonlinear Sciences and its Applications (2011), Volume: 4, Issue: 2, page 130-137.
https://eudml.org/doc/223938
Partial answer
Problem: Let $a, b > 0$ with $a+b=1$. Prove that
$$a^{4b^2} + b^{4a^2} \le (a^{2b} + b^{2a})^{ab}.$$
WLOG, assume that $a\ge b$. Then we have
$a = 1- b$ and $b\in (0, \frac{1}{2}]$.
First, let us prove the case when $b\in (0, \frac{3}{10}]$.
We have the following auxiliary results (Facts 1 through 6).
The proof of Fact 5 is given later.
For the proof of Fact 1, see How to prove this $\sum_{i=1}^{n}(x_{i})^{S-x_{i}}>1?$
Fact 1: $u^v \ge \frac{u}{u+v-uv}$ for $u>0, \ v\in [0, 1]$.
Fact 2: By using Fact 1, $(1-b)^{2b} \ge \frac{1-b}{2b^2 - b + 1}$ for $b\in (0, \frac{1}{2}]$.
Fact 3: By using Fact 1, $b^{2(1-b)} = b \cdot b^{1-2b} \ge \frac{b^2}{2b^2 - 2b + 1}$ for $b\in (0, \frac{1}{2}]$.
Fact 4: By using Bernoulli's inequality, we have $(1-b)^{4b^2} \le 1 - 4b^3$ for $b\in (0, \frac{1}{2}]$.
Fact 5: $b^{-8b + 4b^2} \le 12 - \frac{2}{3}b$ for $b\in (0, \frac{1}{2}]$.
Fact 6: By using Fact 5, $b^{4(1-b)^2} = b^4 \cdot b^{-8b + 4b^2} \le b^4(12 - \frac{2}{3}b)$ for $b\in (0, \frac{1}{2}]$.
From Facts 1, 2 and 3, we have
\begin{align}
(a^{2b} + b^{2a})^{ab} &= ((1-b)^{2b} + b^{2(1-b)})^{b(1-b)}\\
&\ge \left(\frac{1-b}{2b^2 - b + 1} + \frac{b^2}{2b^2 - 2b + 1}\right)^{b(1-b)}\\
&= w^{b(1-b)}\\
&\ge \frac{w}{w + b(1-b) - wb(1-b)}\\
&= \frac{2b^4-3b^3+5b^2-3b+1}{-2b^6+5b^5-2b^4-2b^3+5b^2-3b+1}
\end{align}
where
$w = \frac{1-b}{2b^2 - b + 1} + \frac{b^2}{2b^2 - 2b + 1}$ (Clearly $w>0$ and $b(1-b)\in [0,1)$).
With this in mind, from Facts 4, 5 and 6, it suffices to prove that for $b\in (0, \frac{3}{10}]$,
$$1-4b^3 + b^4(12 - \tfrac{2}{3}b) \le \frac{2b^4-3b^3+5b^2-3b+1}{-2b^6+5b^5-2b^4-2b^3+5b^2-3b+1}$$
or
$$\frac{b^3 (-4 b^8+82 b^7-208 b^6+128 b^5+58 b^4-204 b^3+155 b^2-60 b+9)}{-6 b^6+15 b^5-6 b^4-6 b^3+15 b^2-9 b+3} \ge 0.$$
It is not hard.
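For what it's worth, here is a quick numerical check (an illustrative snippet, not part of the proof) that the last expression is indeed nonnegative on $(0, \frac{3}{10}]$:

```python
import numpy as np

b = np.linspace(1e-6, 0.3, 5000)
num = b**3 * (-4*b**8 + 82*b**7 - 208*b**6 + 128*b**5 + 58*b**4
              - 204*b**3 + 155*b**2 - 60*b + 9)
den = -6*b**6 + 15*b**5 - 6*b**4 - 6*b**3 + 15*b**2 - 9*b + 3
assert np.all(num / den >= 0)  # nonnegative on the sampled grid
```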
$\phantom{2}$
Proof of Fact 5: It suffices to prove that for $b\in (0, \frac{1}{2}]$,
$$\ln (12 - \tfrac{2}{3}b) \ge (-8b+4b^2)\ln b.$$
It is easy to prove that for $b\in(0, \frac{1}{2}]$,
$$\ln (12 - \tfrac{2}{3}b) \ge \frac{25539}{10325} - \frac{10}{177}b.$$
Thus, it suffices to prove that for $b\in(0, \frac{1}{2}]$,
$$f(b) = \frac{\frac{25539}{10325} - \frac{10}{177}b}{8b-4b^2} + \ln b \ge 0.$$
We have
$$f'(b) = \frac{(10b-3)(6195b^2-23009b+25539)}{61950b^2(2-b)^2}.$$
Thus, $f(b)$ is strictly decreasing on $(0, \frac{3}{10})$,
and strictly increasing on $(\frac{3}{10}, \frac{1}{2}]$.
Also, we have $f(\frac{3}{10}) > 0$. The desired result follows. |
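Since the margin at $b=\frac{3}{10}$ is tiny, a numerical confirmation may be reassuring (illustrative only):

```python
import math

b = 0.3
f = (25539/10325 - (10/177)*b) / (8*b - 4*b**2) + math.log(b)
print(f)  # about 3.2e-4 > 0, so f(3/10) > 0 as claimed
```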
Find the real and imaginary parts in the given expression: | Hint: write $z=x+iy$; expanding the given expression yields $x^2+2ixy-y^2+2x+2iy+1$, that is, $(x+1)^2-y^2+i(2xy+2y)$, whose first and second terms are the real and imaginary parts. Now use $\tan^{-1}(v/u)$ to find the argument and $\sqrt{u^2+v^2}$ to find the modulus $r$, and then express $f(z)$ in the polar form $(r,\theta)$.
Find the necessary and sufficient condition on $x$ so given matrix becomes orthogonal. | The answer in the link you posted does not say $xx^t=x^tx$, it just says that $xx^txx^t = x(x^tx)x^t = (x^tx)xx^t$ because $x^tx$ is a scalar. |
Find this ODE solution $xy''+2y'-xy=e^x$ | It looks like you were halfway there.
$$(xy)''=(xy'+y)'=xy''+2y'$$
Does this help? |
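Carrying the hint through (a sketch of the remaining steps, not spelled out in the original answer): with $u=xy$ the equation becomes $u''-u=e^x$, a constant-coefficient equation with a resonant right-hand side, so
$$u = c_1 e^x + c_2 e^{-x} + \frac{x}{2}e^x, \qquad y = \frac{1}{x}\left(c_1 e^x + c_2 e^{-x} + \frac{x}{2}e^x\right).$$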
Prob. 20 (a) & (b), Exercises 8.9, in Apostol's CALCULUS vol. 2: If $f^\prime(x;y)=0$ for every $x$ and for every $y$, . . . | Your proof of part (a) is fine, but part (b) as stated is not correct in general. You should be able, with a slight modification, to show that $f$ does not change along any line segment in the ball with direction vector $\mathbf y$.
If an ideal contained in union of finite number of ideal then ideal is contained in some of them. | Let $k$ be the infinite field contained in $R$. Then $R$ is a $k$-vector space, and the ideals $I, J_1,J_2,\dots, J_n$ are subspaces of $R$. It is a consequence of this lemma for vector spaces:
Let $k$ be a field and $V$ a vector space which is the union of $n$ proper subspaces $W_1,W_2,\dots, W_n$. Then $\;|k|\le n-1$.
(for a proof, you can look at my answer to this question).
So, over an infinite field, a vector space cannot be the union of a finite number of proper subspaces.
Now, if $I$ is not contained in any $J_k$, this means that $\;I\cap J_k\varsubsetneq I$ for each $k$, so the vector space $I$ is the union of the $n$ proper subspaces $I\cap J_k$ $(k=1, \dots ,n)$, which contradicts the infiniteness of $k$.
Question on the definition of trace operator | I believe what your lecturer meant is that functions in $W^1_p$ are only defined up to sets of measure zero. That's because functions in $W^1_p$ are technically equivalence classes of functions. Thus, you can't simply define the trace operator to be the value of the function on the boundary, because it has measure zero.
This definition however works for functions that are also continuous. The trick then is to show that the trace operator is continuous, and to extend the definition of the operator to all of $W^1_p$ using the fact that $C^1$ is dense in $W^1_p$.
Here's how the extension of the operator to $W^1_p$ works. Suppose we have already proved the following inequality for functions $u\in C^1(\Omega)$:
$$
\|\left. u\right|_{\partial\Omega}\|_{L^p(\partial\Omega)} \leq C\,\|u\|_{W^1_p(\Omega)}.
$$
Now suppose we want to define the trace $\left.v\right|_{\partial\Omega}$ for some $v\in W^1_p$ which is not necessarily in $C^1(\Omega)$. What we can do is find a sequence of functions $u_n\in C^1(\Omega)$ which converges to $v$ in $W^1_p$ (since such functions are dense in $W^1_p$). Then we have
$$
\|\left. (u_n-u_m)\right|_{\partial\Omega}\|_{L^p(\partial\Omega)}
\leq C\,\|u_n-u_m\|_{W^1_p(\Omega)}.
$$
Thus the sequence of the traces on $\partial\Omega$ is a Cauchy sequence in $L^p(\partial\Omega)$. We can take its limit to be the definition of the trace for $v$; one checks that the limit does not depend on the choice of approximating sequence. This definition now allows us to consider the trace operator as a continuous linear operator defined on all of $W^1_p$.
Set representation of bounded functions | Let $f$ be in $S$. That means there is an $\epsilon>0$ such that for any $\delta$, whenever $|x-y|<\delta$, $|f(x)-f(y)|<\epsilon$. So, choose $y=0$. Then for any $|x|<\delta$, $f(x)\in(f(0)-\epsilon,f(0)+\epsilon)$. But this is true for all $\delta$, so there is no restriction on $x$. Thus, for any $x$, $f(x)$ is between $f(0)-\epsilon$ and $f(0)+\epsilon$. Thus $f$ is bounded.
Applying mean value theorem to function of two variables | In general for a differentiable function from $\mathbb{R}^2$ to $\mathbb{R}$, there exists a point $(c,d)$ on the line segment from $(x_1,y_1)$ to $(x_2,y_2)$ such that
$$f(x_2,y_2)-f(x_1,y_1)=\frac{\partial f}{\partial x}(c,d)(x_2-x_1)+ \frac{\partial f}{\partial y}(c,d)(y_2-y_1)$$
Taking $y_2 = y_1 = y_0$ the above expression simplifies to
$$f(x_2,y_0)-f(x_1,y_0)=\frac{\partial f}{\partial x}(c,y_0)(x_2-x_1)$$ |
Find the least value of $\frac{1}{x}+\frac{3}{y}+\frac{5}{z}$ | One can show that $\sum \limits_{i=1}^{n}\dfrac{a_i^2}{b_i}\geq \dfrac{(a_1+\dots+a_n)^2}{b_1+\dots+b_n}$ (the Cauchy-Schwarz inequality in Engel form). Applying it with $a=(1,\sqrt3,\sqrt5)$, $b=(x,y,z)$ and the constraint $x+y+z=1$ (presumably given in the question), it follows that $$\frac{1}{x}+\frac{3}{y}+\frac{5}{z}=\frac{1^2}{x}+\frac{(\sqrt3)^2}{y}+\frac{(\sqrt5)^2}{z}\geq (1+\sqrt{3}+\sqrt{5})^2$$
Poker blind interest equation | There shall be no mocking here.
Say you start with $a$ blinds and after $n$ (exponential) increases end up with $b$. Then there is some constant $C$ so that
$$b = a \cdot C^n$$
Solving for $C$ gives
$$C = \left(\frac{b}{a}\right)^{\frac{1}{n}}$$
Meaning: At each step, the new big blind is the earlier big blind multiplied by $C$.
Your example: Here, initial big blind is $a=50$ and ends up with $b=4300$ after $n=10$ increases.
$$C = \left(\frac{4300}{50}\right)^{\frac{1}{10}} \approx 1.56$$
($1.56$ means an increase of $56 \%$) |
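In code (a small illustrative script, not from the original answer), the whole blind schedule follows from $C$:

```python
a, b, n = 50, 4300, 10
C = (b / a) ** (1 / n)                     # growth factor per increase
blinds = [round(a * C**k) for k in range(n + 1)]
print(round(C, 3), blinds)                 # 1.561 and the schedule from 50 up to 4300
```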
Analyze the convergence or divergence of $\{1/n^2\}$ | UPDATE:
To answer straight from the definition, take $N$ such that $1/N < \sqrt{\epsilon/2}$
Then using the triangle inequality for $m,n\geq N$
$$|\frac{1}{n^{2}}-\frac{1}{m^{2}}| \leq |1/m^{2}|+|1/n^{2}|< \epsilon/2 + \epsilon/2 = \epsilon$$
Your approach is flawed because your first inequality is incorrect. The easiest way to show this sequence is Cauchy is to show it converges to $0$ (a convergent sequence is Cauchy).
Fix $\epsilon>0$.
There exists an $N$ (by the Archimedean property) such that for all $n\geq N$,
$1/n \leq 1/N < \sqrt{\epsilon}$
Then
$$|1/n^{2} - 0| = 1/n^{2} <\epsilon$$ |
How the determinant of $A^2$ is $0$? | Since $$\operatorname{rank}(A)<5 \implies \det A=0$$ and, by the Binet theorem (the determinant is multiplicative),
$$\det(A^2)=\det A\cdot \det A=0$$ |
Finding the remainder of $49!$ when divided by $53$ | So, $\displaystyle49!\equiv 6^{-1}\pmod{53}$
Now, as $\displaystyle6\cdot9=54\equiv1\pmod{53}$, we get $6^{-1}\equiv9\pmod{53}$, and hence $49!\equiv9\pmod{53}$.
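A one-line sanity check (illustrative, not part of the argument):

```python
import math
print(math.factorial(49) % 53)  # 9
```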
Solving $x + \sqrt{x(x-a)} = b + a/2$ | $x+\sqrt{x(x-a)}=b+\frac a2\iff 2\sqrt{x(x-a)}=2b+a-2x\quad$ we square this
$\require{cancel}\implies 4x(x-a)=\cancel{4x^2}-\cancel{4ax}=4b^2+a^2+\cancel{4x^2}-8bx+4ab-\cancel{4ax}$
$\implies 8bx=4b^2+a^2+4ab=(2b+a)^2$
First, if $b=0$ then $a=0$ is forced and the equation reduces to $x+|x|=0$, whose solution set is all of $x\le 0$.
Then for $b\neq 0$ we have $\displaystyle{x=\frac{(2b+a)^2}{8b}}$.
But as I stated in the comment, finding this does not end the resolution of the problem; we have to check two conditions:
$\begin{cases}x(x-a)\ge 0 \\[2ex] b+\frac a2-x\ge 0\end{cases}$
So let's calculate them:
$\displaystyle{x(x-a)=\frac{(a+2b)^2(a-2b)^2}{64b^2}\ge 0}\quad$ this is ok.
$\displaystyle{b+\frac a2-x=\frac{4b^2-a^2}{8b}}\quad$ we need $|a|\le |2b|$ for $b>0$ and the opposite when $b<0$.
So to conclude:
If $a=b=0$ then any $x\le 0$ is a solution.
If $b>0$ and $|a|\le |2b|$ then $x=\frac{(2b+a)^2}{8b}$
If $b<0$ and $|a|\ge |2b|$ then $x=\frac{(2b+a)^2}{8b}$
In all other cases there are no solutions in $\mathbb R$
As I said, it is really important to substitute back into the original equation (in this case, that means checking the signs of the various quantities). You cannot just state that $x=f(a,b)$ might be a solution; you have to verify for which values of $a,b$ this is really true.
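A spot-check of the closed form with sympy (illustrative; the values $a=1$, $b=1$ satisfy $b>0$ and $|a|\le|2b|$):

```python
import sympy as sp

a, b = 1, 1                                  # b > 0 and |a| <= 2b
x = sp.Rational((2*b + a)**2, 8*b)           # x = (2b+a)^2/(8b) = 9/8
lhs = x + sp.sqrt(x * (x - a))               # 9/8 + 3/8 = 3/2
print(lhs == b + sp.Rational(a, 2))          # True: both sides equal 3/2
```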
Finding the distance between a point and a line, can there be a negative distance? | Let $\mathcal{H}\colon\mathbf{w}\cdot\mathbf{x}+b=0$ be a hyperplane in $\Bbb{R}^n$, then
$$
d = \frac{\mathbf{w}\cdot\mathbf{x}_0+b}{\lVert\mathbf{w}\rVert}
$$
gives the signed distance (with respect to the normal vector) between a point $\mathbf{x}_0$ and the hyperplane. $\lvert d \rvert$ gives the "traditional" distance.
The signed distance takes into consideration in which halfspace the point lies.
For instance, if you have your line in the form $Ax+By+c=0$, then $\mathbf{w}=(A, B)^\top$, $b=c$, and the signed distance is
$$
d = \frac{Ax+By+c}{\sqrt{A^2+B^2}}.
$$ |
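A minimal sketch in code (the helper function here is hypothetical, not from the original answer):

```python
import numpy as np

def signed_distance(w: np.ndarray, b: float, x0: np.ndarray) -> float:
    """Signed distance from x0 to the hyperplane w.x + b = 0."""
    return (w @ x0 + b) / np.linalg.norm(w)

# line 3x + 4y - 10 = 0: the point (2, 3) lies on the side the normal points to
print(signed_distance(np.array([3.0, 4.0]), -10.0, np.array([2.0, 3.0])))  # 1.6
```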
Measure of variation(?) of multidimensional polynomial function | Have you considered something like
$$
\frac1{b-a}\int_a^b\frac{\left(\mathbf f(x)\cdot\mathbf f'(x)\right)^2}{\left(\vphantom{\mathbf f'}\mathbf f(x)\cdot\mathbf f(x)\right)\left(\mathbf f'(x)\cdot\mathbf f'(x)\right)}\mathrm dx\;?
$$
You probably won't like the denominator, but as no-one else has answered, it's at least another direction to think into. |
Intuition and Tricks - Crafty Short Proof - Generators, Order of a Cyclic Group - Fraleigh p. 64 Theorem 6.14 | $1$. You are trying to find the order of the element $a^d$. The only thing you know about $a$ is that it has order $n$; in particular, $a^n = 1$. So, if you can get some power of $a^d$ to simplify to $a^n$, this will be $1$. As $d$ is a divisor of $n$, the fraction $\frac{n}{d}$ is an integer so we can take $a^d$ to the $\left(\frac{n}{d}\right)^{\text{th}}$ power which gives $$(a^d)^{\frac{n}{d}} = a^{d\times\frac{n}{d}} = a^n = 1.$$ We know that $\frac{n}{d}$ is a positive ($d$ is positive, so $\frac{n}{d}$ is positive) integer such that $(a^d)^{\frac{n}{d}} = 1$. So the order of $a^d$, which is the smallest positive integer $k$ such that $(a^d)^k = 1$, is less than $\frac{n}{d}$ (i.e. $|a^d| \leq \frac{n}{d}$). We still have to show that $\frac{n}{d}$ is the smallest such positive integer.
$2$. Here's one way to think about the result:
$$1\underset{\times a^d}{\underrightarrow{\xrightarrow{\times a} a^1 \xrightarrow{\times a}\dots \xrightarrow{\times a}}}a^d\underset{\times a^d}{\underrightarrow{\xrightarrow{\times a}\dots\xrightarrow{\times a}}}a^{2d}\underset{\phantom{\dots}\\\dots}{\xrightarrow{\times a}\dots\xrightarrow{\times a}}a^{n-d}=a^{d\left(\frac{n}{d}-1\right)}\underset{\times a^d}{\underrightarrow{\xrightarrow{\times a}\dots\xrightarrow{\times a}a^{n-1}\xrightarrow{\times a}}}1.$$
[If I were better with latex I would try to make this prettier. If anyone has suggestions of how to use latex to make this diagram clearer, please let me know.]
That is, if we follow the arrows along the top, it takes $n$ of them to get from $1$ back to $1$. As each arrow corresponds to multiplication by $a$, we see that $a^n = 1$. If we follow the arrows along the bottom, how many arrows does it take to get back to $1$? As each arrow corresponds to multiplication by $a^d$, we see that $(a^d)^? = 1$. |
What does Hinich mean by "homotopy" and "contractible"? | Concerning your first question: d(h) = d∘h + h∘d (by definition), so Hinich's definition is the standard one.
For the second question, having d(h) = id defines a contractible chain complex,
since d(h) = d∘h + h∘d = id. |
How to compute the volume of the intersection of two cylinders | HINT: What are the cross-sections if you slice perpendicular to the $x$-axis? |
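For reference, assuming the standard setup of two perpendicular cylinders of common radius $r$ (the question's exact setup isn't shown here), those slices are squares of side $2\sqrt{r^2-x^2}$, so
$$V=\int_{-r}^{r}\left(2\sqrt{r^2-x^2}\right)^2 dx=\int_{-r}^{r}4(r^2-x^2)\,dx=\frac{16r^3}{3}.$$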
Help proving that $ A \times B \subset \mathcal P(\mathcal P(A \cup B))$ | Hint:
$$(a,b)=\{\{a\},\{a,b\}\}$$
by definition. |
What is the precise definition of a multi-connected manifold? | A manifold that is connected (so path-connected too) but not simply connected, i.e. $\pi_1(X,x_0)$ is not trivial (for some choice of base point, it matters not which, by path-connectedness). |
Injectivity of simple modules | Let $I$ be an ideal of $R$, and $f\in\operatorname{Hom}(I,R/M)$. Then $f_M\in\operatorname{Hom}(I_M,R_M/MR_M)$. If we take into account that $R/M\simeq R_M/MR_M$ we obtain a homomorphism $g\in\operatorname{Hom}(I_M,R/M)$. Since $R_M$ is a field we have $I_M=(0)$ or $R_M$. Clearly, in both cases, $g$ can be extended to $R_M$, and thus $f$ can be extended to $R$. |
Prove that the expression is a perfect square | Supposing $m$ is not a perfect square, then $m=n^2+k$, where $n^2$ is the largest perfect square less than $m$. Without loss of generality, if $k>n$ we can take $m_0=m-n$ and $k_0=k-n$, otherwise $m_0=m, k_0=k$.
Then we can see that $f^2(m_0) = n^2+k_0+2n = (n+1)^2+(k_0-1)$.
Taking $m_1=f^2(m_0)$ and $k_1=(k_0-1)$ we can see the same process applies relative to $(n+1)^2$ and so in a total of $2k_0$ applications of $f$ we will have a perfect square, $f^{2k_0}(m_0) = (n+k_0)^2$.
Additional observation: Note that once a square is found, $s_0^2 = f^d(m)$, the same process can be applied to $f^{d+1}(m) = s_0^2+s_0$, which will then give another perfect square at $f^{d+1+2s_0}(m) = (2s_0)^2$.
Thus there are an infinite number of perfect squares in the given sequence, of the form $(2^as_0)^2$, where $a$ is a non-negative integer. This also means there is at most one odd square in the sequence, which only occurs if $m_0$ is odd (or if $m$ itself is an odd square). |
Stuck in determining truth value using proof | I think what you need most is a translation (from math into English).
1) For each given $x$ there exists a $y$ such that $x|y$ (try to find one).
2) For each given $y$ there exists an $x$ such that $x|y$ (try to find one).
3) There exists a value of $x$ that satisfies $x|y$ for each given $y$.
4) There exists a value of $y$ that satisfies $x|y$ for each given $x$. |
Could someone tell me how this is derived? | Recall that the moment-generating function (MGF) of a random variable $Y$ is $\mathbb{E}[e^{tY}]$.
In this case, observe that the MGF of $Y \mid X = x$ is thus $\mathbb{E}[e^{tY} \mid X = x]$.
Recall also that if $Z \sim N(\mu, \sigma^2)$ that its MGF is given by
$$M_{Z}(s) = \exp\left(\mu s+ \dfrac{\sigma^2}{2}s^2 \right)\text{.}$$
Thus, since $Y \mid (X = x) \sim N(0, x^2)$,
$$\mathbb{E}[e^{sY} \mid X = x] = \exp\left(0s + \dfrac{x^2}{2}s^2 \right) = e^{s^2x^2/2}\text{.}$$ |
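Presumably the derivation then continues by the tower property (my guess at the next step, since the full question isn't shown):
$$\mathbb{E}[e^{sY}]=\mathbb{E}\big[\mathbb{E}[e^{sY}\mid X]\big]=\mathbb{E}\big[e^{s^2X^2/2}\big].$$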
Chess Problem on Rook Placement | Hint: If you place one rook how many squares are now not available to the second rook, because placing it there would attack the first rook? |
find $f(1+i)$ if $\Im[f'(z)]=6x(2x-y), f(0) = 3-2i, f(1) = 6-5i$ | Hints: $f$ differentiable implies $f'$ differentiable. Then $f'$ satisfies Cauchy-Riemann equations - this should tell you a lot about the real part of $f'$. And once you know $f'$, the initial conditions will tell you $f$. |
If $g\neq g^{-1}$ for all $g\in G\setminus \{e\}$, then the order of $G$ is odd | Hint (assuming the order is finite):
Pair up each non-identity element with its inverse, and remember that the identity element is its own inverse...
Complex analysis: showing that contour integration of function cos and sin are equal | Enforcing the substitution $x\to \sqrt{x}$ reveals
$$\int_0^\infty xe^{ix^4}\,dx=\frac12 \int_0^\infty e^{ix^2}\,dx \tag 1$$
Since $e^{iz^2}$ is entire, Cauchy's Integral Theorem guarantees that $\oint_C e^{iz^2}\,dz=0$ for any closed rectifiable curve $C$. Letting $C$ be the "pie wedge" contour with edges from $0$ to $R$, the circular arc from $R$ to $Re^{i\pi/4}$, and the segment from $Re^{i\pi/4}$ back to $0$, we find that
$$\begin{align}
\int_0^\infty e^{ix^2}\,dx&=\lim_{R\to \infty}\int_0^R e^{ix^2}\,dx\\\\
&=\lim_{R\to \infty}\int_0^R e^{i(e^{i\pi/4}y)^2}\, e^{i\pi/4}\,dy-\lim_{R\to \infty}\int_0^{\pi/4}e^{i(Re^{i\phi})^2}\,iRe^{i\phi}\,d\phi\\\\
&=e^{i\pi/4}\int_0^\infty e^{-y^2}\,dy \qquad \text{(the arc integral vanishes as $R\to\infty$)}\\\\
&=e^{i\pi/4}\frac{\sqrt{\pi}}{2}\tag 2
\end{align}$$
Substituting $(2)$ into $(1)$ and equating real and imaginary parts yields the coveted results
$$\int_0^\infty x\cos(x^4)\,dx=\frac14\sqrt{\frac{\pi}{2}}$$
and
$$\int_0^\infty x\sin(x^4)\,dx=\frac14\sqrt{\frac{\pi}{2}}$$ |
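As a numerical cross-check (illustrative; this assumes mpmath's oscillatory quadrature converges here, which it appears to):

```python
import mpmath as mp

# the zeros of cos(x^4) are at ((n + 1/2) * pi)**(1/4); quadosc splits there
val = mp.quadosc(lambda x: x * mp.cos(x**4), [0, mp.inf],
                 zeros=lambda n: (mp.pi * (n + 0.5))**0.25)
print(val, mp.sqrt(mp.pi / 2) / 4)  # both approximately 0.31333
```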
Planar graphs with $n \geq 2$ vertices have at least two vertices whose degree is at most 5 | Yes your ideas are correct. For the sake of not leaving this question unanswered, I'll refine your proof a bit.
Proofs in graph theory have a tendency to naturally be broken into cases like your proof above. A lot of the time though, this isn't necessary, and you can make a proof more concise by trying to combine cases.
For example, in your proof, separating out the case where $n=2$ is not necessary: the inequalities in your $n > 2$ case cover it. In fact, explicitly stating a base case is often only necessary when you are arguing by induction. Also, separating the cases where there is one vertex of degree at most five and where there are no vertices of degree at most five is not necessary. Since $6n \geq 6(n-1) + \operatorname{deg}(v_k)$, your work for the first case actually covers both.
Theorem: For $n \geq 2$, a planar graph with $n$ vertices must have at least two vertices with degree at most five.
Proof $\;$ First note that if this theorem holds for connected graphs, then it must hold for non-connected graphs as well because we can consider a single component and find the required two vertices with degree at most five.
Let $G$ be a connected planar graph with $n \geq 2$ vertices and $e$ edges. Suppose for the sake of contradiction that $G$ has at most one vertex, call it $v_k$, of degree less than or equal to five. Then, since $\operatorname{deg}(v_k) \ge 0$, the handshaking lemma gives
$$
2e = \!\!\!\sum_{v_i \in \operatorname{V}(G)}\!\!\!\operatorname{deg}(v_i)
\geq 6(n-1) + \operatorname{deg}(v_k)
\geq 6n-6\;.
$$
But since $G$ is planar and connected we know that $e \leq 3n-6$, getting us the contradiction
$$
6n-6 \;\leq\; 2e \;\leq\; 2(3n-6) = 6n-12\;.
$$
Therefore there must be at least two vertices of degree at most five. |
Calculate Rate of Change | Assuming the decay is exponential (since it is radioactive decay), let $N_t = N_0 e^{-\lambda t}$, where $N_t$ is the amount of the radioisotope remaining after $t$ seconds, $N_0$ is the initial amount of the radioisotope, and $\lambda$ is a constant.
It is given that the half-life of the radioisotope is $3$ seconds. Hence $N_3/N_0 = 0.5 = e^{-\lambda 3}$. After taking the natural logarithm of each side and rearranging terms you get:
$$ \lambda = - \ln (0.5)/3 \approx 0.231$$
The rate of the decomposition can be found by taking the derivative with respect to $t$:
$$\dfrac{d}{dt} N_t = -0.231\,N_0\,e^{-0.231t}$$
And the amount (mass) left after $t = 15$ is $N_{15} = 1000e^{-0.231 \cdot 15} \approx 31.3\ g$, or $0.0313\ kg$.
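In code (a small illustrative script using the numbers above):

```python
import math

N0, t_half = 1000.0, 3.0               # grams, seconds
lam = math.log(2) / t_half             # decay constant, approximately 0.231 per second
N = lambda t: N0 * math.exp(-lam * t)  # amount remaining after t seconds
print(round(lam, 3), round(N(15), 2))  # 0.231 and 31.25 g (five half-lives)
```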
Let $X=\left\{f\in\mathbb{N}^\mathbb{N} : \;|f(n + 1)-f(n)|=1\right\}$. Prove that cardinality of $X$ is continuum. | One idea could be to give an injection $\def\P{\mathfrak P}\def\N{\mathbb N}\P(\N) \to X$. To encode an $A \subseteq \N$ by a function in $X$, the idea could be to go "one up" at $n$ if $n \in A$ and "one down" if $n \not\in A$. As we must make sure to go up often enough to ensure $f\colon \N \to \N$, we can add a step up before every encoding step.
That is, for $A \subseteq \N$ define by induction $f_A \colon \N \to \N$ by
\begin{align*}
f_A(0) &:= \begin{cases} 0 & 0 \not\in A\\
1 & 0 \in A\\
\end{cases}\\
f_A(n) &:= \begin{cases} f_A(n-1) + 1 & n \text{ odd}\\
f_A(n-1) + 1 & n \text{ even, } \frac n2 \in A\\
f_A(n-1) - 1 & n \text{ even, } \frac n2 \not\in A
\end{cases}, \quad n \ge 1
\end{align*}
Then $\P(\N) \ni A \mapsto f_A \in X$ is one-to-one, and hence $\left|X\right| \ge \mathfrak c$, on the other hand $\left|X\right|\le \left|\N^\N\right| = \mathfrak c$. |
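A small script (illustrative; the helper `f_A` is hypothetical) showing the encoding in action:

```python
def f_A(A, N):
    """First N+1 values of the walk encoding of the set A."""
    vals = [1 if 0 in A else 0]
    for n in range(1, N + 1):
        if n % 2 == 1:                                 # padding step: always go up
            vals.append(vals[-1] + 1)
        else:                                          # encoding step: is n/2 in A?
            vals.append(vals[-1] + (1 if n // 2 in A else -1))
    return vals

print(f_A({0, 2, 3}, 9))  # [1, 2, 1, 2, 3, 4, 5, 6, 5, 6]: steps of +-1, never negative
```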
How to show that this matrix is positive semidefinite? | Here is a way to show that it is not positive definite.
Let $x_1=x_2=1$ and $x_3=0$.
As for showing that it is positive semidefinite: you have already shown that the quadratic form is nonnegative, which is exactly what is required.
Basis for recurrence relation solutions | The sequences that result from such a recurrence relation are determined by the initial conditions, which would ordinarily be prescribed as $U(0)$ and $U(1)$. It should be clear that any values can be assigned for these first two values, and that once that is done the rest of the sequence is fully determined by repeated application of the recurrence relation.
It follows that the dimension of the vector space you define (all sequences satisfying the rule) is exactly two, and a basis may be given (for example) by the respective two sequences that correspond to:
$$ U_1(0) = 0, U_1(1) = 1 $$
$$ U_2(0) = 1, U_2(1) = 0 $$
In other words, it is obvious that sequences $U_1,U_2$ so developed are linearly independent, and further that any sequence satisfying the recurrence relation may be expressed as a linear combination of these two (to fit the required initial conditions). |
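To make the basis concrete in code (an illustrative sketch; the Fibonacci-type rule is an assumed example, since the question's recurrence isn't shown):

```python
def generate(u0, u1, n, step=lambda prev2, prev1: prev1 + prev2):
    seq = [u0, u1]
    while len(seq) <= n:
        seq.append(step(seq[-2], seq[-1]))
    return seq

U1 = generate(0, 1, 8)   # basis sequence with U(0)=0, U(1)=1
U2 = generate(1, 0, 8)   # basis sequence with U(0)=1, U(1)=0
# the solution with U(0)=a, U(1)=b is a*U2 + b*U1, term by term
a, b = 2, 5
print([a*x + b*y for x, y in zip(U2, U1)] == generate(a, b, 8))  # True
```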
What does an integral with a horizontal bar through it mean? | According to the digital copy of the text which was linked by the user Chappers, this notation is the Cauchy principal value. This use is listed in the rather extensive index of notation at the end of the text, specifically in the section labeled Miscellaneous Notation.
The Cauchy Principal Value is a way of assigning a value to certain "improper" integrals which would otherwise be undefined. If $f : \mathbb{R} \to \mathbb{R}$ has a singularity at $c \in [a,b]$, then the Cauchy Principal value is given by
$$ -\kern-9pt\int_{a}^{b} f(x)\,\mathrm{d}x
:= \lim_{\varepsilon\searrow 0} \left[ \int_{a}^{c-\varepsilon} f(x)\,\mathrm{d}x + \int_{c+\varepsilon}^b f(x)\,\mathrm{d}x\right].$$
A similar definition applies if $f$ has a singularity at infinity:
$$ -\kern-9pt\int_{-\infty}^{\infty} f(x)\,\mathrm{d}x = \lim_{R\to\infty} \int_{-R}^{R} f(x)\,\mathrm{d}x. $$
In this second case, it is easier to see how Cauchy principal value differs from the "usual" method of assigning a value to an improper integral. In the usual setting, we define
$$ \int_{-\infty}^{\infty} f(x)\,\mathrm{d}x
:= \lim_{a\to-\infty} \int_{a}^{c} f(x)\,\mathrm{d}x + \lim_{b\to\infty} \int_{c}^{b} f(x)\,\mathrm{d}x, $$
where $c$ is any real number. Using this standard definition, the sine function cannot be integrated over the entire real line. However, the Cauchy principal value does exist:
$$ -\kern-9pt\int_{-\infty}^{\infty} \sin(x)\,\mathrm{d}x = \lim_{R\to\infty} \int_{-R}^{R} \sin(x) \,\mathrm{d}x = 0, $$
since sine is an odd function.
It is also well worth noting that $-\kern-7.5pt\int$ is not standardized notation for the Cauchy principal value. Most authors will, instead, use the notation $PV\kern-4pt\int$, or something similar. Also, the notation $-\kern-7.5pt\int$ is used by other authors to mean something different. For example, in his text on PDEs, Evans uses $-\kern-7.5pt\int$ to denote the average integral over a ball, i.e.
$$ -\kern-9pt\int_{B(x,r)} f(y)\,\mathrm{d}y
= \frac{1}{\mu(B(x,r))} \int_{B(x,r)} f(y)\,\mathrm{d}y,$$
where $B(x,r)$ denotes a ball in $n$-dimensional Euclidean space with center $x$ and radius $r$, and $\mu(B(x,r))$ denotes the $n$-dimensional volume (Lebesgue measure) of that ball. |
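To make the principal-value definition above concrete numerically (illustrative; $\int_{-1}^{2}\frac{\mathrm{d}x}{x}$ diverges in the usual sense but has principal value $\ln 2$):

```python
from scipy.integrate import quad

eps = 1e-6
left, _ = quad(lambda x: 1/x, -1, -eps)   # equals ln(eps)
right, _ = quad(lambda x: 1/x, eps, 2)    # equals ln(2) - ln(eps)
print(left + right)                        # approximately 0.6931 = ln 2
```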
non negative solution of the matrix equation $A^T U A = U−C$ if C is non-negative | For every $x$, $x^*Ux=\sum\limits_{k\geqslant0}x_k^*Cx_k$, where $x_k=A^kx $ for every $k\geqslant0$. If $C$ is nonnegative, then $x_k^*Cx_k\geqslant0$ for each $k\geqslant0$ hence $x^*Ux\geqslant0$. Thus, $U$ is nonnegative. |
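A numerical illustration (a hypothetical sketch; it assumes the spectral radius of $A$ is below one so the series converges):

```python
import numpy as np

rng = np.random.default_rng(0)
A = 0.3 * rng.standard_normal((4, 4))         # scaled so the spectral radius is (very likely) < 1
C = rng.standard_normal((4, 4)); C = C @ C.T  # a nonnegative (PSD) matrix

U = sum(np.linalg.matrix_power(A.T, k) @ C @ np.linalg.matrix_power(A, k)
        for k in range(200))                  # U = sum over k of (A^T)^k C A^k
print(np.allclose(A.T @ U @ A, U - C))        # True: U solves A^T U A = U - C
print(np.linalg.eigvalsh(U).min() >= -1e-9)   # True: U is nonnegative
```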
Finding length of side | By Pythagorean theorem, $$CD^2+BD^2=BC^2=15^2,$$ and $$BE^2+BD^2=DE^2=AD^2=AC^2-CD^2.$$ From the first equlity, we have that $BD^2=225-CD^2$, then putting it into the second one, we have that $BE^2-CD^2+225=AC^2-CD^2$. So $AC^2=BE^2+225=36+225=261$, which implies that $AC=3\sqrt{29}$. |
Prove that $\sin(a)$ + $\cos(a)\leq\sqrt{2}$ | Alternative path:
$$
\sin a+\cos a=
\sqrt{2}\left(\sin a\cos\frac{\pi}{4}+\cos a\sin\frac{\pi}{4}\right)=
\sqrt{2}\sin\left(a+\frac{\pi}{4}\right)\le\sqrt{2}
$$
(and also $\ge-\sqrt{2}$, of course).
However your reasoning is basically correct; only you need to do it backwards:
$$
\sin^2a+2\sin a\cos a+\cos^2a\le2
$$
because $\sin^2a+\cos^2a=1$ and $2\sin a\cos a=\sin 2a\le 1$; therefore
$$
(\sin a+\cos a)^2\le 2
$$
and so
$$
\sin a+\cos a\le \sqrt{2}
$$ |
Equivalent conditions for $M$-genericity | There are multiple questions here; let me address the question of applying $(2)$ (which seems the most unclear point).
The key to applying $(2)$ is to note that every dense set contains a maximal antichain. Specifically, fix a dense set $D$ and let $A$ be a set with the following properties:
$A\subseteq D$,
$A$ is an antichain, and
there is no antichain $B\subseteq D$ with $A\subsetneq B$.
The existence of such an $A$ is guaranteed, as usual, by Zorn's Lemma (consider the partial order of antichains which are subsets of $D$). Now I claim that this $A$ is in fact a maximal antichain.
For suppose otherwise. Let $a\not\in A$ be such that $A\cup\{a\}$ is an antichain. Then no extension of $a$ can meet $D$: if $b\le a$ with $b\in D$, then $A\cup\{b\}$ would be an antichain strictly containing $A$ which is a subset of $D$, contradicting the maximality of $A$. But $D$ is dense, so some extension of $a$ must lie in $D$, a contradiction.
The idea, then, is the following: "If $G$ meets every maximal antichain, then $G$ meets every dense set since every dense set contains a maximal antichain." Do you see how to appropriately formulate this to get $(2)\rightarrow(1)$?
Note that unlike the "maximal-antichain-to-dense-open" translation $$A\leadsto\{p: \exists a\in A(p\le a)\},$$ there is in general no canonical way to find a maximal antichain inside a given dense set. Indeed, it is consistent with $\mathsf{ZF}$ that there is a partial order $P$ with top element $\mathbb{1}_P$ which is separative (= nowhere-trivial, from the forcing perspective) but which has no maximal antichains other than $\{\mathbb{1}_P\}$; in such a poset, the dense set $P\setminus\{\mathbb{1}_P\}$ does not contain a maximal antichain. |