Evaluate $\sum_{n=0}^\infty \frac1{n^3+1}$ if it can be expressed in terms of elementary functions. | Use $n^3+1 = (n+1)(n^2-n+1) = (n+1)(n-\omega)(n - \bar{\omega})$, where $\omega = \mathrm{e}^{i \pi/3}$. Then
$$
\frac{1}{n^3+1} = \frac{1}{3} \frac{1}{n+1} - \frac{1}{3} \frac{\omega}{n -\omega} - \frac{1}{3} \frac{\bar{\omega}}{n -\bar{\omega}} = \frac{1}{3} \frac{\omega +\bar{\omega}}{n+1} - \frac{1}{3} \frac{\omega}{n -\omega} - \frac{1}{3} \frac{\bar{\omega}}{n -\bar{\omega}}
$$
Therefore
$$
\sum_{n=0}^m \frac{1}{n^3+1} = \frac{\omega}{3} \sum_{n=0}^m \left(\frac{1}{n+1} - \frac{1}{n-\omega} \right) + \frac{\bar{\omega}}{3} \sum_{n=0}^m \left(\frac{1}{n+1} - \frac{1}{n-\bar{\omega}} \right)
$$
Thus
$$
\begin{eqnarray}
\sum_{n=0}^\infty \frac{1}{n^3+1} &=& \frac{\omega}{3} \sum_{n=0}^\infty \left(\frac{1}{n+1} - \frac{1}{n-\omega} \right) + \frac{\bar{\omega}}{3} \sum_{n=0}^\infty \left(\frac{1}{n+1} - \frac{1}{n-\bar{\omega}} \right) \\
&=& \frac{\omega}{3} \left( \gamma + \psi(-\omega) \right) + \frac{\bar{\omega}}{3} \left( \gamma + \psi(-\bar{\omega}) \right)
\end{eqnarray}
$$
where $\psi(x)$ is the digamma function, and $\gamma$ is the Euler-Mascheroni constant.
Thus the sum is not elementary.
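A quick numerical sanity check of this closed form in Python (a sketch, assuming the mpmath library is available):

    from mpmath import mp, exp, pi, psi, euler, nsum, inf
    mp.dps = 30
    w = exp(1j*pi/3)                         # omega = e^{i pi/3}
    closed = (w/3)*(euler + psi(0, -w)) + (w.conjugate()/3)*(euler + psi(0, -w.conjugate()))
    direct = nsum(lambda n: 1/(n**3 + 1), [0, inf])
    print(closed.real, direct)               # the two values should agree

Both computations should print the same value to the working precision, confirming the digamma expression. |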
Creating a function from inputs and outputs | The slope equation for the first should be $\frac {12-0}{4-0}=3$ and the function $f(x)=3x$ generates the points desired. For the second, yes, you could find the equation of the line through any two of the points, then note that the third point is not on the line. |
Is there possibly a largest prime number? | Euclid's famous proof is as follows: Suppose there is a finite number of primes. Let $x$ be the product of all of these primes. Then look at $x+1$. It is clear that $x$ is coprime to $x+1$. Therefore, no nontrivial factor of $x$ is a factor of $x+1$, but every prime is a factor of $x$. By the fundamental theorem of arithmetic, $x+1$ admits a prime factorization, and by the above remark, none of these prime factors can be a factor of $x$, but $x$ is the product of all primes. This is a contradiction.
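A small illustrative computation in Python (a sketch, assuming sympy is installed; the list of primes is just an example):

    from sympy import primefactors
    primes = [2, 3, 5, 7, 11, 13]
    x = 1
    for p in primes:
        x *= p                               # x = 30030, the product of the list
    print(x + 1, primefactors(x + 1))        # 30031 [59, 509]
    assert not any(q in primes for q in primefactors(x + 1))

As the proof predicts, every prime factor of $x+1$ lies outside the original list. |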
Checking if a mapping exists in a composition of partial functions | The symbol "$\downarrow$" is often used to indicate defined-ness of a partial function on a given input. So e.g. you could abbreviate the condition by "$f_{n-1}\circ...\circ f_1(a)\downarrow$." I don't know of any shorter expression for it.
(Incidentally, if this is a condition you're interested in it may be worth doing a bit more work and letting (say) $g_{i,j}$ for $i\le j$ denote the function $f_{j-1}\circ...\circ f_i$; then your condition is just $g_{1, n}(a)\downarrow$. Introducing these new functions may or may not be worthwhile depending on what you're interested in.) |
Linear Algebra - Determine if a linear transformation is one-to-one | Hint:
$T$ is injective (1-1) $\iff (AX=\vec0 \iff X= \vec0)$ $\iff T$ is surjective (onto) $ \iff \det{A} \neq 0 \iff A$ is invertible.
Not necessarily in that order, and all of this due to linearity.
If you prove this chain, you will see the things hidden in between, and many things will become easier and easier to prove, I think. |
Seemingly contradictional facts on whether Chern classes determine a line bundle or not. | To answer question 1: yes, this is confusing! The explanation is that for (say) complex algebraic varieties, there are different "flavours" of Chern class, taking values in different groups: topological Chern classes, which take values in cohomology $H^*(X,\mathbf{Z})$, and algebraic Chern classes, which live in the Chow groups $A^*(X)$. There is a so-called cycle map $A^*(X) \rightarrow H^*(X,\mathbf{Z})$ linking the two.
In my comment to the linked question, what I was saying was that the topological first Chern class $c_1^{top}(L)$, which is an element of $H^2(X,\mathbf{Z})$, does not determine the line bundle $L$.
However, the statements by Gathmann and Teitler both refer to the (more refined) algebraic first Chern class, $c_1^{alg}(L)$. This takes values in $A^1(X)$, which is exactly the same thing as $Pic(X)$.
In much fewer words, your map $\alpha$ is always an isomorphism. The map that can have a nontrivial kernel is the cycle map $A^1(X) \rightarrow H^2(X,\mathbf{Z})$.
A good example to keep in mind is an elliptic curve $E$. On such a curve, every point $p$ has an associated line bundle $O(p)$, and these are not isomorphic for different points. On the other hand, any two points clearly give the same class in $H^2(E,\mathbf{Z})$. |
Finite sum which sums to $x e^x$ | Let's write the sum in question using summation notation:
\begin{align*}
S&=\sum_{n=0}^\infty \left(e^x-\sum_{k=0}^n\frac{x^k}{k!}\right)\\
&=\sum_{n=0}^\infty \left(\sum_{k=0}^\infty \frac{x^k}{k!}-\sum_{k=0}^n \frac{x^k}{k!}\right)\\
&=\sum_{n=0}^\infty \sum_{k=n+1}^\infty \frac{x^k}{k!}
\end{align*}
Let's switch the order of summation:
$$S=\sum_{k=1}^\infty\sum_{n=0}^{k-1}\frac{x^k}{k!}$$
But note: the value of the summand doesn't depend on $n$. Therefore, we can treat that inner sum like we have a constant summand, in which case we're just multiplying by $k=(k-1)-0+1$. So,
$$S=\sum_{k=1}^\infty k\cdot\frac{x^k}{k!}=\sum_{k=1}^\infty\frac{x^k}{(k-1)!}$$
Let $k=m+1$:
$$S=\sum_{m=0}^\infty \frac{x^{m+1}}{m!}=x\sum_{m=0}^\infty\frac{x^m}{m!}=xe^x$$
as required.
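A quick numerical check of this identity in Python (a sketch; the value $x=1.3$ is arbitrary):

    import math
    x = 1.3
    S, partial = 0.0, 0.0
    for n in range(200):
        partial += x**n / math.factorial(n)  # partial = sum_{k=0}^{n} x^k/k!
        S += math.exp(x) - partial           # accumulate the tail e^x - partial
    print(S, x * math.exp(x))                # the two values should agree

Both printed numbers should coincide up to floating-point error. |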
Consider P(n). Show that it is true for n > 2 given the two statements below are true... | Yes, that works.
You can note that $P(n)$ implies $P(n-1)$ means $P(n)$ implies $P(m)$ for all $m \leq n$.
Then for any particular natural number $p$, choose $2^q \geq p$. Then we have $P(2^q)$ is true so $P(p)$ is true. |
Norms on $W^{1,p}(\Omega )$ | They are all equivalent. The equivalence of the first two norms, and of the last two, is a standard result. To get the equivalence of the first and the third, use the equivalence of norms on $\mathbb R^n$ as follows:
$$\|\nabla u\|_{L^p}^p=\int_\Omega |\nabla u|^p,$$
where $|\nabla u|=:|\nabla u|_2=\sqrt{(\partial _1u)^2+...+(\partial _n u)^2}$. Since $|\cdot |_2$ is equivalent to $|\cdot |_p$, there are $C,D>0$ s.t. $$C|\nabla u|_p\leq |\nabla u|_2\leq D|\nabla u|_p$$
and thus $$C^p\sum_{i=1}^n\|\partial _i u\|_{L^p}^p\leq \|\nabla u\|_{L^p}^p\leq D^p\sum_{i=1}^n\|\partial _i u\|_{L^p}^p,$$
and conclude.
The rest follows from the previous results. |
How to prove that the smallest asymmetric tree has at least 7 vertices? | If a tree has exactly two leaves, then it is a path, and so is not asymmetric. So any such tree has at least three leaves.
If any two of these leaves have a common neighbor, then we can switch those two leaves, and this gives a symmetry. Thus, the three leaves, call them $v_1,v_2,v_3$ have pairwise distinct neighbors $w_1,w_2,w_3$. In particular, the graph has at least six vertices. Suppose it had exactly six. Then $w_1,w_2,w_3$ cannot themselves be leaves, so they must connect to each other. There is up to isomorphism only one way to do this: $w_2$ connects to $w_1$ and $w_3$. But this graph has an automorphism: we can switch $w_3$ and $v_3$ with $w_1$ and $v_1$, respectively.
Thus, any tree with no nontrivial automorphism must have at least seven vertices. |
Interesting calculus problems of medium difficulty? | If $C_0 + C_1/2 + \ldots + C_n/(n+1) = 0$, where each $C_i \in \mathbb{R}$, then prove that the equation $C_0 + C_1x + \ldots + C_nx^n = 0$ has at least one real root between 0 and 1. (Rudin, Ch 5 Exercise 4)
If $|f(x)| \leq |x|^2$ for all $x \in \mathbb{R}$, then prove that $f$ is differentiable at $x = 0$. (Spivak?) (OK, so this one is easier than "medium" perhaps, but I like it anyway. It illustrates nicely that growth conditions can have an impact on smoothness.) |
Two holes with an area $0.2cm^2$, each are drilled one above the other in the wall of a vertical vessel filled with water..... | Note that the speeds with which the water leaves the holes are proportional to the water pressures at the holes, which are proportional to the vertical distances between the holes and the water surface. Let $h$ be the distance between the upper hole and the water surface. Then
$$\frac{v_1}{v_2} = \frac h{h+50}\tag 1$$
Also, the kinetic energy with which the water leaves the holes is converted from the difference of gravitational potential energy between the holes and the water surface, that is,
$$\frac12 \Delta m_1v_1^2 + \frac12 \Delta m_2v_2^2 = \Delta m_1gh + \Delta m_2g(h+50)$$
where $\Delta m_1$ and $\Delta m_2$ are the amount of water leaving the two holes, respectively, which are proportional to their speeds leaving the holes. So, rewrite the energy equation above as,
$$\frac12 v_1^3 + \frac12 v_2^3 = v_1gh + v_2g(h+50)\tag 2$$
Then, along with the equation you already had,
$$v_1+v_2=700\tag3$$
the equations (1), (2) and (3) have three unknowns $v_1$, $v_2$ and $h$, and these can be solved from the system of the three equations derived.
As you indicated, once you obtain $v_1$, $v_2$ and $h$, finding the crossing point $(x, y)$ should be straightforward by examining the intersection of the two projectile trajectories. |
A "sort-of strict" separation between two disjoint closed, convex sets | Here is a counter-example to the statement as given. Define:
$$ X = \{(x,y) \in \mathbb{R}^2 : y \geq e^x\}$$
$$ W = \{(x,0) \in \mathbb{R}^2 : x \in \mathbb{R}\}$$
Suppose there is a nonzero vector $(u,v)$ and real number $\alpha$ such that
\begin{align}
&xu + yv \leq \alpha \quad \forall (x,y) \in X \quad \text{(Eq 1)}\\
&xu + 0v > \alpha \quad \forall (x,0) \in W \quad \text{(Eq 2)}
\end{align}
Then (Eq 2) means
$$ xu > \alpha \quad \forall x \in \mathbb{R} $$
Now if $u\neq 0$ we can make $x = M(-u)$ for large $M$ to make $xu$ as small as we like, violating the fact that $xu > \alpha$ for all $x \in \mathbb{R}$. So we know that $u=0$. This means that (Eq 2) reduces to
$$ 0 > \alpha $$
On the other hand (Eq 1) and $u=0$ gives
$$ yv \leq \alpha \mbox{ whenever $y \geq e^x$ for some $x \in \mathbb{R}$} $$
In particular:
$$ e^x v \leq \alpha \quad \forall x \in \mathbb{R}$$
Taking a limit as $x\rightarrow -\infty$ gives
$$ 0 \leq \alpha$$
a contradiction. |
Proving that $f=F,f(t)=\int_{-\infty}^\infty \frac{{e}^{2i\pi tx}}{1+{x}^{2}}dx, F(t)=\iint_{\Bbb R^2}{e}^{\frac{2i\pi ty}{x}}{e}^{-(x^2+y^2)}dxdy $ | Surprisingly this problem doesn't have a whole lot of work to it. Consider the 2D integral $F(t)$ and convert to polar coordinates:
$$ = \int_{-\frac{\pi}{2}}^{\frac{3\pi}{2}} \int_0^\infty e^{i2\pi t \tan\theta} re^{-r^2}drd\theta = \frac{1}{2}\int_{-\frac{\pi}{2}}^{\frac{3\pi}{2}} e^{i2\pi t \tan\theta} d\theta$$
Because of the tangent singularities at odd multiples of $\frac{\pi}{2}$, we will split the integral up into two pieces and then use the substitution $z=\tan\theta$:
$$=\frac{1}{2}\Biggr( \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} e^{i2\pi t \tan \theta} d\theta + \int_{\frac{\pi}{2}}^{\frac{3\pi}{2}} e^{i2\pi t \tan \theta} d\theta \Biggr) = \frac{1}{2}\Biggr(\int_{-\infty}^\infty \frac{e^{i2\pi t z}}{1+z^2}dz + \int_{-\infty}^\infty \frac{e^{i2\pi t z}}{1+z^2}dz \Biggr)$$
$$= \int_{-\infty}^\infty \frac{e^{i2\pi t z}}{1+z^2}dz $$
which is exactly the integral that defines $f(t)$. |
Triple integral (determining boundaries): | In Cartesian coordinates, you are finding the amount of space between the two surfaces, expressed as a function of $z$. So you have to integrate
$$\int_{x^2+y^2}^{\sqrt{x^2+y^2}} dz\:$$
over the region in $(x,y)$ where $\sqrt{x^2+y^2} >x^2+y^2$. This region happens to be the interior of the circle $x^2+y^2=1$. Therefore, you get to integrate the above integral over the disk $x^2+y^2 \le 1$, and you should get as the volume
$$\int_{-1}^1 dy \: \int_{-\sqrt{1-y^2}}^{\sqrt{1-y^2}} dx \: \int_{x^2+y^2}^{\sqrt{x^2+y^2}} dz\:$$
(I do not know where you got the $x y z$ from, as this is a volume calculation.) You then use polar coordinates after doing the trivial integral over $z$ (or, equivalently, cylindrical coordinates), with $x = r \cos{\theta}$, $y=r \sin{\theta}$, and $dx\, dy = r \,dr \, d\theta$. We then get for the volume
$$\int_0^{2 \pi} d\theta \: \int_0^1 dr \: r (r-r^2) = 2 \pi \left ( \frac{1}{3} - \frac{1}{4} \right ) = \frac{\pi}{6}$$
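A numerical confirmation in Python (a sketch, assuming scipy is available):

    import math
    from scipy.integrate import dblquad
    # Volume between z = x^2 + y^2 and z = sqrt(x^2 + y^2) over the unit disk,
    # in polar coordinates: integrand (r - r^2) times the Jacobian r.
    val, err = dblquad(lambda r, t: (r - r**2) * r, 0, 2*math.pi, 0, 1)
    print(val, math.pi/6)                    # both ~ 0.5236

The quadrature should reproduce $\pi/6$. |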
Einstein Tensor Notation: Addition inside a function | With tensor notation I assume you just mean Einstein's summation convention, i.e. the convention where repeated indices are summed over all the coordinates instead of having an explicit sum: $a_ib_i \equiv \sum_{i=1}^n a_i b_i$.
The notation $f({\bf x})$ is just a shorthand for $f(x_1,x_2,\ldots,x_n)$, i.e. to tell the reader that $f$ takes points in $\mathbb{R}^n$ as its argument. The point $(x_1,\ldots,x_n)$ can be written as the sum $x_\mu e^\mu$ where $e^\mu$ are basis-vectors in $\mathbb{R}^n$, for example $e^\mu=(0,\ldots,0,1,0,\ldots,0)$; however, writing $f(x_\mu e^\mu)$ is not standard and can be confusing. You are better off using one of the two standard ways mentioned above, with $f({\bf x})$ being the most compact one.
The summation convention is often very useful when doing calculations, the final result of such calculations often has a more clear and compact formulation using vector-calculus expressions like $\nabla$, dot-product, $\times$, etc. The notation should be used wisely. Personally I think the formula you have looks better as written than expressed in the summation convention (i.e. ${\bf v}\cdot \nabla f$ is more clear and compact than $v_i\frac{\partial f}{\partial x_i}$) and also note that the sum $\sum_{n=2}^N$ cannot be removed even when using the summation convention (without additional definitions). |
Not able to find the horizontal asymptote | $$\eqalign{y
&=(x^2+x)^{1/2}-(x^2-1)^{1/2}\cr
&=\frac{((x^2+x)^{1/2}-(x^2-1)^{1/2})((x^2+x)^{1/2}+(x^2-1)^{1/2})}{(x^2+x)^{1/2}+(x^2-1)^{1/2}}\cr
&=\frac{x+1}{(x^2+x)^{1/2}+(x^2-1)^{1/2}}\cr
&=\frac{1+\frac1x}{(1+\frac1x)^{1/2}+(1-\frac1{x^2})^{1/2}}\cr
&\to\frac12\quad\hbox{as}\ x\to\infty\ .\cr}$$ |
Proof of theorem 4.10 in Walter Rudin Analysis | Fix $\epsilon >0$ and $x$. For each $f_i$ there exists $\delta(i, x, \epsilon)$ (i.e., depending on $i$) such that
\begin{align}
|f_i(x)-f_i(y)|<\frac{\epsilon}{\sqrt{n}}
\end{align}
provided
\begin{align}
d(x, y)<\delta(i, x, \epsilon).
\end{align}
Since $i = 1, \ldots, n$ is finite, then we can define
\begin{align}
\delta(x, \epsilon) = \min_{1 \leq i \leq n}\delta(i, x, \epsilon)
\end{align}
such that
\begin{align}
\|f(x)-f(y)\|=\sqrt{ \sum^n_{i=1}|f_i(x)-f_i(y)|^2} < \sqrt{n\times\frac{\epsilon^2}{n}}= \epsilon
\end{align}
whenever
\begin{align}
d(x, y) <\delta(x, \epsilon).
\end{align} |
running time of a multiplication algorithm | Suppose that $x$ and $y$ are $n$ bits long. Then $x+x=2x$ is at most $n+1$ bits long, $2x+x=3x$ is at most $n+2$ bits long, and in general $kx$ is at most $n+k-1$ bits long. In particular, $xy$ is at most $n+n-1=2n-1$ bits long. The values of $z$ generated by this algorithm are the numbers $kx$ for $k=0,1,\ldots,y$, and we’ve just seen that they all have fewer than $2n$ bits. Addition and subtraction of two numbers of at most $2n$ bits takes at most $2kn$ time for some positive constant $k$, so each iteration of the loop takes at most $2kn$ time.
We go through the loop $y$ times. The largest $n$-bit number is $2^n-1$, so we go through the loop fewer than $2^n$ times, and the total time is therefore bounded above by $2kn\cdot2^n$. That is, if $t(x,y)$ is the time required for inputs of $x$ and $y$, we have $t(x,y)<2kn\cdot2^n$. It follows that if $T(n)$ is the maximum running time over all inputs $\langle x,y\rangle$ such that $x$ and $y$ are both at most $n$ bits long, then $0\le T(n)<2kn\cdot2^n$ for all $n\in\Bbb Z^+$. Since $2k$ is a fixed positive constant, this means precisely that $T(n)$ is $O(n2^n)$, which is exponential, not polynomial.
Note: I’ve neglected overhead: some time is spent in initialization and output. That overhead is essentially constant, however, so $T(n)$ is bounded by $c+2kn\cdot2^n$ for some constant $c$, and since $n2^n$ is unbounded, it’s not hard to verify that $c+2kn\cdot2^n$ is also $O(n2^n)$. This is routine, which is why it isn’t even mentioned in the solution that you have.
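For concreteness, a sketch in Python of the kind of repeated-addition multiplier analyzed above (this reconstructs the algorithm from the description; the names are mine):

    def slow_multiply(x, y):
        z, iterations = 0, 0
        while iterations < y:                # the loop body runs y times
            z += x
            iterations += 1
        return z, iterations

    for n in (4, 8, 12):
        x = y = 2**n - 1                     # largest n-bit inputs
        _, its = slow_multiply(x, y)
        print(n, its)                        # its = 2^n - 1: exponential in n

The iteration count grows like $2^n$, matching the bound $O(n2^n)$ once the cost of each addition of numbers of at most $2n$ bits is included. |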
In a complete perfect metric space, transitive distance-preserving maps are minimal | Here's the deal: the problem is solved, if we assume that $f$ has a dense orbit. This is sometimes taken as the definition of transitivity, but it's not the definition I was thinking of (which is: given two open non-empty sets $U$ and $V$, there exists $n\geq 0$ such that $f^nU\cap V\neq \emptyset$).
The thing is, this definition implies the existence of a dense orbit, if we assume that $X$ is a complete separable metric space. (it may not be perfect, but it should be separable!)
So... once we know that we have a point $x$ such that $o^+(x)$ is dense in $X$: take any point $y$. Let us see that $o^+(y)$ is dense in $X$. Let $z\in X$.
Let $\epsilon>0$. There exists $n\geq 0$ such that $d(f^n(x),y)<\epsilon/2$. As $o^+(f^n(x))$ is also dense in $X$, there exists $k\geq 0$ such that $d(f^{n+k}(x),z)<\epsilon/2$. Thus since $f$ is an isometry,
$$ d(f^k(y),z)\leq d(f^k(y),f^{n+k}(x)) + d(f^{n+k}(x),z) \leq d(y,f^n(x))+\epsilon/2 < \epsilon$$
Since $\epsilon$ and $z$ are arbitrary, this proves that $o^+(y)$ is dense in $X$. |
extending continuity in proof of l'Hopital rule | Suppose we define $F$ on $[a,b)$ by setting $F(x) = f(x), x\in (a,b),$ $F(a)=0.$ Do the same with $G,g.$ Then $F,G$ are continuous on $[a,b).$ These functions are extensions of the original functions. You're right, it doesn't really make strict sense to talk of $f(a),g(a),$ but this process is so simple and natural that we usually drop the $F,G$ business and use $f,g$ for the extensions as well. |
What creates a unique quaternion? | I think a lot of the confusion around your question comes from the fact that you don't really have a 'chain' of the sort you're talking about, because different senses of the word 'transform' are being used in each case.
Very broadly, there's little distinction between 'points' and 'vectors' in the sense that you're specifying; each point can be identified with the vector from the origin to that point. Viewed from this perspective, the transformation process you're talking about takes the form of a map (or, if you prefer, a function) from $\mathbb{R}^n\rightarrow\mathbb{R}^n$ (here $\mathbb{R}^n$ just means the space of $n$-dimensional vectors, and $\mathbb{R}$ means we're talking about vectors over the real numbers). Each vector $\vec{v}$ gives rise to the transformation $T_{\vec{v}}: \mathbb{R}^n\rightarrow\mathbb{R}^n$ given by $T_{\vec{v}}(\vec{w}) = \vec{v}+\vec{w}$. Note that 'applying' two different vectors $\vec{u}$ and $\vec{v}$ to the same point $\vec{w}$ is the same as applying their sum: $T_\vec{u}(T_\vec{v}(\vec{w})) = T_\vec{u}(\vec{v}+\vec{w}) = \vec{u}+(\vec{v}+\vec{w}) = (\vec{u}+\vec{v})+\vec{w} = T_{\vec{u}+\vec{v}}(\vec{w})$.
Now, it's worth noting here that I used '$n$' above — vectors aren't limited to just 3-dimensional space. The example you gave, for instance, is a two-dimensional vector, and vectors make sense in five dimensions, ten dimensions, and even infinitely many dimensions (some caveats apply!). On the other hand, quaternions only${}^* $ work in 3 dimensions (${}^* $: and 4, sort of, but that's a more complicated story) and they don't produce all transformations of vectors in 3 dimensions; they only represent rotations of 3-dimensional space. Each quaternion $\mathbf{q}$ gives rise to a transformation $T_{\mathbf{q}}: \mathbb{R}^3\rightarrow\mathbb{R}^3$ given by $T_{\mathbf{q}}(\vec{v}) = \mathbf{q}\vec{v}\mathbf{q}^{-1}$ , where all of the multiplications are quaternion multiplications and we're representing the vector as a 'purely imaginary' quaternion (one with zero scalar part); even though you're multiplying quaternions here, the result will still be a vector (that is, the scalar part will be zero). And much like with the vector 'transformation' above, composing transformations — that is, applying multiple quaternions to a vector, one after the other — corresponds to applying their product: $T_{\mathbf{p}}(T_{\mathbf{q}}(\vec{v})) = T_{\mathbf{p}}(\mathbf{q}\vec{v}\mathbf{q}^{-1}) = \mathbf{p}(\mathbf{q}\vec{v}\mathbf{q}^{-1})\mathbf{p}^{-1} = (\mathbf{p}\mathbf{q})\vec{v}(\mathbf{q}^{-1}\mathbf{p}^{-1}) = (\mathbf{p}\mathbf{q})\vec{v}(\mathbf{p}\mathbf{q})^{-1} = T_{\mathbf{p}\mathbf{q}}(\vec{v})$
Now, you may have noticed that even though these two examples are different, they do share a bit of flavor: in both cases, we can say that an object of the given type (a vector or a quaternion) gives rise to a transformation (or a map) from a space to itself, such that composing the transforms corresponds to adding (in the first case) or multiplying (in the second) the objects. This concept comes up often enough that mathematicians have come up with a special name for it, the group action; you can find more about it under that name. A group basically just means 'a bunch of abstract things with some special operator for turning two things in the group into a third, satisfying certain conditions'; here, our groups are the group of all vectors (that is, the things in the group are vectors and the operator is vector addition) and the group of all quaternions (where the things in the group are quaternions and the operator is quaternion multiplication), and the spaces they're acting on are either the space $\mathbb{R}^n$ of $n$-dimensional vectors in the first case or the specific space $\mathbb{R}^3$ of $3$-dimensional vectors in the second. What you're asking is basically whether there's some other group that acts on the space of all quaternions $\mathbb{H}$ in this same way. (Note that we call the quaternions '$\mathbb{H}$' in honor of their discoverer Hamilton).
Unfortunately, it turns out that there's an almost-trivial example, and it's closely related to your first example: any group always acts on itself, with the 'transformation' given by the group operation itself. In the case of vectors, this is exactly your first example; every vector $\vec{v}$ gives rise to a transformation from $\mathbb{R}^n\rightarrow\mathbb{R}^n$ given by the 'group operation' vector addition. In the case of the quaternions, it means that the quaternions transform themselves; for each quaternion $\mathbf{p}$ there's a transformation $T_{\mathbf{p}}:\mathbb{H}\rightarrow\mathbb{H}$ given by $T_{\mathbf{p}}(\mathbf{q}) = \mathbf{p}\mathbf{q}$. If you're wondering about other groups with an action on the quaternions - that, unfortunately, I'm not sure of. Rotations of the quaternions (seen as a general 4-dimensional space) can actually be specified by a pair of unit quaternions $\mathbf{p}$, $\mathbf{q}$ where their action on a given point in that space (that is, a quaternion) $\mathbf{h}$ is given by $T_{\mathbf{p},\mathbf{q}}(\mathbf{h}) = \mathbf{p}\mathbf{h}\mathbf{q}$; Wikipedia's page on 4-dimensional rotations has some more details on this, but I'll warn you in advance that they start to get very boggy very quickly.
Hopefully this gives you, if not quite an answer to your question, some more information on the sort of thing you're looking for! These concepts (representations and actions) are very core in a lot of areas in mathematics and I highly encourage you to dig more into them. |
What is the remainder when ${2222}^{5555}+{5555}^{2222}$ is divided by $7$? | First of all, since it's a multiple-choice question, the fastest approach depends on the choices.
For example, if all choices except for one are not between $0$ and $6$, then the answer is obvious.
That being said, there is one step in your solution which you could probably complete faster:
You had to do some math in order to conclude $[3^3\equiv-1\pmod7]\wedge[4^3\equiv+1\pmod7]$.
Without any computation, we know that $[3^6\equiv1\pmod7]\wedge[4^6\equiv1\pmod7]$.
This is due to the fact that $\forall{p}\in\mathbb{P}:[{0<n<p}\implies{n^{p-1}}\equiv1\pmod{p}]$.
Hence ${(3^6)}^{925}\cdot3^5+{(4^6)}^{370}\cdot4^2\equiv1\cdot243+1\cdot16\equiv259\equiv0\pmod7$.
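This is immediate to verify in Python with modular exponentiation:

    print(pow(2222, 5555, 7), pow(5555, 2222, 7))            # 5 2
    print((pow(2222, 5555, 7) + pow(5555, 2222, 7)) % 7)     # 0

So the remainder is indeed $0$. |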
How can one distinguish the interior and exterior of a contour on a Riemann sphere? | First, you should know that on the sphere, the residue theorem as stated is not quite true. What is true is that the sum of the residues at all singularities is $0$. However, you have forgotten to check what happens at $\infty$. The residue at infinity is defined by
$$\operatorname{Res}(f,\infty)= \operatorname{Res}\!\Bigg(\!\!-\frac{1}{z^2}f\bigg(\frac{1}{z}\bigg),0 \Bigg) $$
When you do this, you see that the example you give has a residue at $\infty$ as well. Now taking into consideration the orientations of the boundary curves, you'll see that one integral evaluates to $2\pi i$ and the other to $-2\pi i$, consistent with the theorem. In other words, once you have stated the residue theorem correctly, the orientation of the curve still works fine to determine the inside and outside. |
If $R$ is a Boolean ring which is in fact a field, then it must be isomorphic to $\mathbb Z / 2 \mathbb Z$ | The only idempotents in a field are $0$ and $1$: if $x^2=x$ then $x(x-1)=0$, so $x=0$ or $x=1$. Hence, the only field that is also a Boolean ring is the one with two elements, which is $\mathbb{Z}/2\mathbb{Z}$. |
Average number of dice rolls before having 3 of kind | Let $X$ be the random variable that represents the number of rolls until having a three of a kind. To compute its expectation, we condition on the first roll and denote by $Y$ the number of different results we got.
If we got three of a kind ($Y=1$ w.p. $\tfrac{1}{6^2}$), $X=1$ and the expected number of additional rolls is $0$.
If we got two of a kind ($Y=2$ w.p. ${3 \choose 2} \tfrac{5}{6^2}$), we continue to re-roll the third die until it matches. The number of required additional rolls is $\mathrm{Geom}(\tfrac{1}{6})$ (its expectation is $6$).
If we got three different results ($Y=3$ w.p. $\tfrac{20}{36}$), then you re-roll all three or just two (no matter which ones), so the process is repeated from this point onward.
By the law of total expectation (the $1$ represents the first roll, and the other term is the expected additional number of rolls until three of a kind):
$$E(X)=E(E(X\vert Y))=1+\tfrac{15\cdot 6 + 20 \cdot E(X)}{36}$$
Solving this leads to $E(X)=\tfrac{63}{8}=7.875$.
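A Monte Carlo check of this expectation in Python (a sketch; the re-roll strategy follows the cases above):

    import random

    def rolls_until_triple():
        rolls = 1
        dice = [random.randint(1, 6) for _ in range(3)]
        while len(set(dice)) == 3:           # all different: re-roll everything
            dice = [random.randint(1, 6) for _ in range(3)]
            rolls += 1
        if len(set(dice)) == 1:              # three of a kind
            return rolls
        pair = max(set(dice), key=dice.count)
        while random.randint(1, 6) != pair:  # re-roll the odd die until it matches
            rolls += 1
        return rolls + 1

    N = 10**5
    print(sum(rolls_until_triple() for _ in range(N)) / N)   # ~ 7.875

The simulated mean should be close to $63/8$. |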
$E\in \mathcal{B}(H)$ is a nontrivial projection, then $\mathcal{A}=\{\alpha E:\alpha\in \mathbb C\}$ is weakly closed. | A comment first: what you are using is the weak operator topology. The weak topology is something else. While it is true that it is common to say that something converges "weakly" to mean "in the weak star topology", no one would use the name "weak topology".
From a more abstract point of view, you have a finite-dimensional subspace, and in a finite dimensional subspace there is a single locally convex topology, and the subspace is closed in it.
If you want a direct proof, the easiest way is using convergence. Since $B(H)$ is complete in the weak operator topology, all you need to show is that if $\{\alpha_jE\}_j\to T$ then $T=\alpha E$ for some $\alpha$. For any $x\in H$, you have
$$
\langle Tx,x\rangle=\lim_j\langle \alpha_jEx,x\rangle=\langle Ex,x\rangle\,\lim_j\alpha_j.
$$
Using $x$ such that $Ex\ne0$, we get that the net of numbers $\{\alpha_j\}$ converges to some $\alpha\in\mathbb C$, showing that $\langle Tx,x\rangle=\langle \alpha Ex,x\rangle$ for all $x\in H$. This shows that $T=\alpha E$ and so your set is closed. |
Evaluate $\int{ \ln(\sqrt{x})dx}$ using integration by parts | You have not done anything wrong. Simply note that $\log(\sqrt x)=\frac12 \log(x)$.
Therefore, your analysis is correct; the answer is $(B)\,\, \frac12x\log(x)-\frac12 x+C$. |
Geometric Brownian Motion w/ Normal Distribution | $$I = \int_{-\infty}^\infty e^{\mu t+\sigma x} \frac{1}{\sqrt{2\pi t}}e^{-\frac{x^2}{2t}}dx = \frac{1}{\sqrt{2\pi t}} e^{\mu t} \int_{-\infty}^\infty e^{\sigma x -\frac{x^2}{2t}}dx$$
We now need to complete the square of the exponent, $$\sigma x -\frac{x^2}{2t} = -\frac{1}{2t}(x^2-2t\sigma x) = -\frac{1}{2t}(x^2-2t\sigma x + (t\sigma)^2) + \frac{(t\sigma)^2}{2t} = -\frac{1}{2t}(x - t\sigma)^2 + \frac{1}{2}t\sigma^2$$
Now, let $u= x-t\sigma$ and $du = dx$.
$$I = \frac{1}{\sqrt{2\pi t}} e^{\mu t + \frac{1}{2}t\sigma^2} \int_{-\infty}^\infty e^{-\frac{1}{2t}u^2}du = \frac{1}{\sqrt{2\pi t}} e^{\mu t + \frac{1}{2}t\sigma^2} \sqrt{2\pi t} = e^{\mu t + \frac{1}{2}t\sigma^2}$$
Note that I used the fact that $\int_{-\infty}^\infty e^{-ax^2}\,dx = \sqrt{\frac{\pi}{a}}$.
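A quick numerical check in Python (a sketch, assuming scipy; the parameter values are arbitrary):

    import math
    from scipy.integrate import quad

    mu, sigma, t = 0.05, 0.3, 2.0
    pdf = lambda x: math.exp(-x**2 / (2*t)) / math.sqrt(2*math.pi*t)
    numeric, _ = quad(lambda x: math.exp(mu*t + sigma*x) * pdf(x), -math.inf, math.inf)
    print(numeric, math.exp(mu*t + 0.5*t*sigma**2))          # should agree

Both values should match, confirming $I=e^{\mu t+\frac12 t\sigma^2}$. |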
Proving second isomorphism theorem | First Issue: You are mapping into $(I+J)/J$. That means that your elements are of the form $x+J$, with $x\in I+J = \{a+b\mid a\in I,b\in J\}$.
Since you start with $a\in I\subseteq I+J$, the element $a$ is in fact in $I+J$: it can be written as $a+0$, with $a\in I$, $0\in J$. Thus, $a+J$ makes sense as an element of $(I+J)/J$. That tells you the map is at least well-defined set-theoretically.
I'm guessing what you really have is an issue with surjectivity. Well, suppose that $x+J\in (I+J)/J$. That means $x=a+b$, with $a\in I$ and $b\in J$. There is one obvious thing to try, given that you take things in $I$; could it be that $f(a)=x+J$? Remember that equality means "equality of cosets", not "equality of how you write the element".
Second Issue: If $a\in\mathrm{ker}(f)$, then $a+J = 0+J$; when are two cosets of $J$ equal? What does that mean for $a$? Conversely, if $a\in I\cap J$, then you need to verify that $a+J = 0+J$ in $(I+J)/J$. |
Decomposing partial fraction with a denominator negative quadratic expression $-x^2$ | Yes, they are correct : $$\frac{5x+4}{-x^2-x+2}=\frac{3}{1-x}\color{red}{+}\frac{2}{-x-2}.$$ |
Taylor series convergence for sin x | For a), by the Maclaurin formula for the sine function there is $\theta\in(0,1)$ such that
$$\sin x=x-\frac{x^3}{3!}+\frac{x^4}{4!}\sin^{(4)}(\theta x)=x-\frac{x^3}{3!}+\frac{x^4}{4!}\sin(\theta x)>x-\frac{x^3}{3!},\; \forall x\in(0,\frac\pi2)$$
and
$$\sin x=x-\frac{x^3}{3!}+\frac{x^5}{5!}+\frac{x^6}{6!}\sin^{(6)}(\theta x)=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^6}{6!}\sin(\theta x)\\<x-\frac{x^3}{3!}+\frac{x^5}{5!},\; \forall x\in(0,\frac\pi2)$$
so we deduce the result. Can you generalize this to find b)? |
Catching the bus through NT | Since the bus passes every $7$ hours, there will be an integer number of $7$-h intervals between the two catches. So the number of hours between the two catches will be a multiple of $7$.
On the other hand, since Tom took the bus at $9:00$ am and he is going to catch it again at $9$ a.m., there will be an integer number of days (i.e. $24$-h periods) between the two catches. So the number of hours between the two catches will be a multiple of $24$.
And since we need the first time we'll catch the bus again at $9:00$ am, we need the least common multiple of $7$ and $24$, i.e.:
$$
\operatorname{lcm}(7,24)=7\cdot24=168 \text{ hours}
$$
which is $7$ days after Monday's first catch, or $24$ buses after that. |
For this bilinear form: $q(v)=q(x_1,x_2,x_3)=x_1^2+x_2^2+9x_3^{2}+2x_1x_2-6x_1x_3-5x_2x_3$ find a basis $B$ so that $[q]^B_B=D$ is a diagonal matrix | It is messy because you have misunderstood the problem. While $q(\underline{v})$ is induced by the bilinear form $f(\underline{u}, \underline{v})=\underline{v}^TA\underline{u}$, where $A$ is your $3\times 3$ coefficient matrix, $q$ is quadratic, not bilinear, and not a linear transformation either. So, what you are asked to do is to find a decomposition of the form $A = P^TDP$ (where $P$ is invertible and the diagonal of $D$ does not necessarily contain any eigenvalue of $A$), but you have confused this with an eigenvalue decomposition $A = P^{-1}DP$. Surely, as your matrix $A$ is real symmetric, you can do both by performing an orthogonal decomposition $A=Q^TDQ$ where $QQ^T=I$ and $D$ contains the eigenvalues of $A$, but this is simply not required.
In general, you can find a decomposition $A = P^TDP$ by using elementary row/column operations. This is somewhat akin to finding a row-reduced echelon form of a matrix, but here we need to perform both an elementary row operation and a corresponding elementary column operation at each step. In other words, if, in a certain step, you multiply $A$ by an elementary matrix $E$ on the left, you should also multiply $A$ by $E^T$ on the right.
For the problem you describe, however, simple inspection plus some completing-square trick is enough. Note that
$$
\begin{eqnarray}
&&x_1^2 + x_2^2 + 9x_3^2 + 2x_1x_2 - 6x_1x_3 - 5x_2x_3\\
&=&(x_1 + x_2 - 3x_3)^2 + x_2x_3\\
&=&(x_1 + x_2 - 3x_3)^2 + \frac14[(x_2 + x_3)^2 - (x_2 - x_3)^2].
\end{eqnarray}
$$
So you may take $B=\{(x_1 + x_2 - 3x_3),\ (x_2 + x_3),\ (x_2 - x_3)\}$. You may verify that $A = P^TDP$ where
$$
P=\begin{pmatrix}
1&1&-3\\0&1&1\\0&1&-1
\end{pmatrix},
\ D=\begin{pmatrix}
1\\&\frac14\\&&-\frac14
\end{pmatrix}.
$$
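You can confirm $A=P^TDP$ numerically, e.g. in Python:

    import numpy as np
    A = np.array([[ 1. ,  1. , -3. ],
                  [ 1. ,  1. , -2.5],
                  [-3. , -2.5,  9. ]])       # symmetric coefficient matrix of q
    P = np.array([[1., 1., -3.],
                  [0., 1.,  1.],
                  [0., 1., -1.]])
    D = np.diag([1., 0.25, -0.25])
    print(np.allclose(P.T @ D @ P, A))       # True

The check should print True. |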
Skew-symmetric non-degenerate bilinear form and $J^2=-I$ operator | Currently, I can only show this for $V = \mathbb R^n$. In this case, $f(x,y) = \langle Ax,y\rangle$ with a skew-symmetric matrix $A\in\mathbb R^{n\times n}$, that is, $A^T = -A$ (this also has to be shown). The eigenvalues of $A$ (considered as a complex matrix) lie on the imaginary axis. As $f$ is non-degenerate, $A$ is invertible. If $ia$, $a > 0$, is an eigenvalue of $A$, also $-ia$ is an eigenvalue with the same multiplicity. Therefore, $n$ is actually even, $n = 2m$. Let $ia_j$, $j=1,\ldots,m$, be the eigenvalues of $A$ in the upper halfplane (i.e., $a_j> 0$ for each $j$), counting multiplicities. Let $x_j\in\mathbb C^n$ be an eigenvector corresponding to $ia_j$ such that $\langle x_j,x_k\rangle = 0$ for $j\neq k$. Then $\overline{x_j}$ is an eigenvector of $A$ corresponding to $-ia_j$. Put $u_j := \Re x_j$ and $v_j := \Im x_j$ (real and imaginary parts), $j=1,\ldots,m$. We have $Au_j = -a_j v_j$ and $Av_j = a_ju_j$. The $u_j$'s and $v_j$'s span $\mathbb R^n$. They are even an orthogonal basis: $au_j^Tv_j = (Av_j)^Tv_j = v_j^TA^Tv_j = 0$ since $x^TA^Tx = -x^TAx = -(x^TAx)^T = -x^TA^Tx$. Similarly, one shows that $\|u_j\|^2 = \|v_j\|^2$. Hence, we can scale the $x_j$'s so that the basis $B := \{u_j : j=1,\ldots,m\}\cup\{v_j : j=1,\ldots,m\}$ is orthonormal.
Now, put $Ju_j := -v_j$ and $Jv_j := u_j$ and extend $J$ linearly. With respect to the orthonormal basis $B$ the matrices $J$ and $A$ admit the representations
$$
J = \left(\begin{matrix}0 & I_m\\-I_m & 0\end{matrix}\right) \qquad\text{and}\qquad A = \left(\begin{matrix}0 & D\\-D & 0\end{matrix}\right),
$$
where $D = \operatorname{diag}(a_1,\ldots,a_m)$. Therefore, $JA$ is a diagonal matrix with only negative entries on the diagonal. In particular, $-JA$ is symmetric and positive definite. Finally, define $\varphi(x,y) := -\langle JAx,y\rangle$ for $x,y\in\mathbb R^n$. This is obviously a positive definite bilinear form and $\varphi(x,y) = \langle Ax,Jy\rangle = f(x,Jy)$.
It is more or less clear that everything above can be "generalized" to arbitrary finite-dimensional real vector spaces. Unfortunately, I don't know how to prove the result for infinite-dimensional spaces. |
Hilbert Space: infinite or finite? - All real inner product spaces are Hilbert spaces? | Everything you said is correct: A Hilbert space $H$ is a real or complex vector space, equipped with a scalar product. But you forgot an essential point: This $H$ must be complete as a metric space. This extra condition is automatically fulfilled when $H$ is finite dimensional, because in this case $H$ is (in the real case) isometric with our standard euclidean ${\mathbb R}^n$.
If, however, the given $H$ is infinite dimensional then the completeness has to be proved. Consider, e.g., the space $X$ of continuous functions $f:\>[0,1]\to{\mathbb R}$ with scalar product $\langle f,g\rangle:=\int_0^1 f(t)\,g(t)\>dt$. This particular $X$ is not complete. |
Understanding the rate of infection in SIR models | This is the difference between models using population density and those using population counts. What you have in mind is the density equation
$$
\dot s = -acs\imath
$$
where $s=\frac SN$ and $\imath=\frac IN$ are densities. If you insert that you get for the equation of the counts
$$
\dot S = -ac\frac{SI}N.
$$
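To see the two formulations agree numerically, here is a minimal sketch in Python; the recovery rate $g$ and all parameter values are assumptions added only to close the system:

    import numpy as np
    from scipy.integrate import solve_ivp

    ac, g, N = 0.3, 0.1, 1000.0              # ac = a*c lumped; g: assumed recovery rate
    counts = lambda t, u: [-ac*u[0]*u[1]/N, ac*u[0]*u[1]/N - g*u[1]]
    dens   = lambda t, u: [-ac*u[0]*u[1],   ac*u[0]*u[1]   - g*u[1]]
    sol1 = solve_ivp(counts, (0, 50), [990.0, 10.0], dense_output=True)
    sol2 = solve_ivp(dens,   (0, 50), [0.99, 0.01],  dense_output=True)
    ts = np.linspace(0, 50, 5)
    print(sol1.sol(ts)[0] / N)               # S/N ...
    print(sol2.sol(ts)[0])                   # ... matches s up to solver tolerance

Dividing the count solution by $N$ reproduces the density solution. |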
Prove that for all sets $A$ and $B$ $A\subseteq B$ implies $A\cap B=A$. | This can be proved using 'modus tollens', which is:
$P\to Q \implies \lnot Q \to \lnot P$
So, we need to show that:
$$A\cap B\ne A \to A \not\subseteq B$$
If $A\cap B\ne A$, then $\exists x\in A$, such that $x\not\in B$ which means $A \not\subseteq B$, as required.
Modus tollens then states that the contrapositive, i.e. your original statement, is also true. |
Set-Theoretic Properties and Combinatorics | The collection $G$ is partially ordered by inclusion. Let $M$ be the set of minimal elements.
Prove $M$ is a partition of $G$ and every $c\in G$ is a union of elements of $M$.
Determine $|G|$ in terms of $|M|$. |
Eigen vector proof: $cI=A$ if $A$ has all non 0 eigenvectors | If $\alpha_1,\alpha_2$ are linearly independent vectors,
assume $A(\alpha_1)=k_1\alpha_1$ and $A(\alpha_2)=k_2\alpha_2$,
but $\alpha_1+\alpha_2$ is also an eigenvector of $A$,
so $A(\alpha_1+\alpha_2)=k(\alpha_1+\alpha_2)=k_1\alpha_1+k_2\alpha_2$.
We get $(k-k_1)\alpha_1+(k-k_2)\alpha_2=0$, $\rightarrow k_1=k=k_2$.
So the eigenvalues of $A$ are all equal, and since every nonzero vector is an eigenvector, $A$ is diagonalizable; hence $A=cI$. |
$7^n$ contains a block of consecutive zeroes | Since $7$ is coprime to $10$, for each $k\geq 1$ we have $7^{\phi(10^k)}\equiv 1$ (mod $10^k$), where $\phi$ is Euler's totient function.
In other words, $7^{\phi(10^k)}-1$ is divisible by $10^k$. What does this tell you about the decimal expansion of $7^{\phi(10^k)}$? |
Where did the Orbit-Stabilizer came from (not historically)? | I'm going to answer what I think is the literal question; if I've misunderstood the question, perhaps you can correct me.
I believe what you are asking is: How can a mathematician discover this theorem, knowing nothing about it beforehand?
Well, the answer is that they can't. That's not what happens. Just learning the abstract definition of a group, and maybe the definition of a group action, and maybe the definition of orbits, and then expecting that anyone can just say "Hey, here's a theorem!"... well... that's not how any mathematical theorems ever get discovered.
Instead, someone learns about actual groups, and actual group actions. They learn examples. They observe patterns. They stumble upon this particular pattern, noticing that it holds in a few different examples: the order of the group is the size of an orbit times the size of a stabilizer of a point in that orbit. They think "Huh... is this just a coincidence?" They might look for more examples to bolster the point, they might look (unsuccessfully) for counterexamples, which bolsters the point even more.
They become more and more convinced that the pattern is true.
And when a mathematician becomes convinced that something is true, then they are very motivated to prove that it is true.
And fortunately, the proof is easy.
I really don't think there's much more than that to say. |
Prove that given any $2$ vertices $v_0,v_1$ of Graph $G$ that is a club, there is a path of length at most $2$ starting in $v_0$ and ending in $v_1$ | The statement is not true. Consider a path of length 4, where $v_0,v_1$ are the endpoints of the path. There is no path of length at most two from $v_0$ to $v_1$, and the graph is a club by your definition.
Edit: after the definition of club was changed.
With the new definition, the proof can be made as follows:
Let $G$ be a (simple) graph which is a club.
Assume there is no path of length 1 or 2 from $v_0$ to $v_1$. Let $S_0$ denote the set of vertices joined to $v_0$ by an edge, and let $S_1$ be the set of vertices joined to $v_1$ by an edge. If there is a vertex $x$ which is in both $S_0$ and $S_1$ (that is $x \in S_0 \cap S_1$), then there is a path $v_0,x,v_1$ of length 2 from $v_0$ to $v_1$, so assume there is no vertex in $S_0 \cap S_1$.
We know by the definition of $S_0$ and $S_1$ that $|S_0| = \text{deg}(v_0)$ and $|S_1| = \text{deg}(v_1)$, and since $G$ is a club, $|S_0| + |S_1| = \text{deg}(v_0) + \text{deg}(v_1) \geq n$. Since $S_0$ and $S_1$ have no vertices in common, this means that every one of the $n$ vertices is in either $S_0$ or $S_1$. But this is a contradiction, since $v_0,v_1$ are in neither of them (if they were, there would be a direct edge from $v_0$ to $v_1$). |
Find the conditional joint PMF of the random vector $(X_1,...,X_n)$ given $S_n=X_1+...+X_n=k$ where $X_i$~Bernoulli(p) | You have the right conditional pmf provided $a=(a_1,\ldots,a_n)$ is in the set
$$ S:=\Big\{a:a_i\in\{0,1\},|\{i:a_i=1\}|=k\Big\},$$
and the pmf is zero otherwise. To show that the pmf sums to $1$ you just have to count the number of elements in $S$, which you should be able to do.
To test for independence, try checking events of the form $A=\{X_1=X_2=\ldots=X_k=1\}$ and $B=\{X_{k+1}=\ldots=X_n=0\}$. If $(X_1,\ldots,X_n)$ were independent conditioned on $\{S_n=k\}$ then you would have $\mathbb P(A\cap B\,|\,S_n=k)=\mathbb P(A\,|\,S_n=k)\mathbb P(B\,|\,S_n=k)$. Is this the case here? |
Convergence of sequence of functions? | Consider $g(x) = x - x^5$; this is non-negative for $0 \le x \le 1$, and its derivative is
$$g'(x) = 1 - 5x^4$$
so the function is maximized at $x = 1/\sqrt[4]{5}$ and the maximum is strictly less than $1$. Call the maximum $r$. Then for all $x \in [0, 1]$,
$$|f_n(x) - 0| = |g(x)|^n \le r^n$$
This bound tends to $0$ as $n$ grows, and the bound is uniform in $x$. Hence the convergence is uniform. |
geometric progression calculation | It appears that the terms are numbered starting with $a_1$, so that $a_2=a_1q$, $a_3=a_1q^2$, and in general $a_n=a_1q^{n-1}$. Thus, the subseries of even-numbered terms is
$$11.25=a_1q+a_1q^3+a_1q^5+\ldots=a_1q\sum_{n\ge 0}q^{2n}=a_1q\sum_{n\ge 0}(q^2)^n=\frac{10q}{1-q^2}\;,$$
and $11.25(1-q^2)=10q$. This is just a quadratic in $q$, so you can solve it. (It even factors nicely once you simplify it to a form with integer coefficients, though of course you can also just use the quadratic formula.) You’ll find that one solution is the answer that you were given, and the other is unusable, because it’s too big. |
How to compute the monstrous $ \int_0^{\frac{e-1}{e}}{\frac{x(2-x)}{(1-x)}\frac{\log\left(\log\left(1+\frac{x^2}{2-2x}\right)\right)}{2-2x+x^2}dx} $ | Notice, we have $$\int_{0}^{\frac{e-1}{e}}\frac{x(2-x)}{1-x}\frac{\log\left(\log\left(1+\frac{x^2}{2-2x}\right)\right)}{2-2x+x^2}dx$$
$$=\int_{0}^{\frac{e-1}{e}}\frac{x(2-x)}{1-x}\frac{\log\left(\log\left(\frac{2-2x+x^2}{2-2x}\right)\right)}{2-2x+x^2}dx$$
Let, $$\log\left(\frac{2-2x+x^2}{2-2x}\right)=u$$
$$\implies \frac{d}{dx}\left(\log\left(\frac{2-2x+x^2}{2-2x}\right)\right)=\frac{d}{dx}(u)$$ $$\frac{1}{\left(\frac{2-2x+x^2}{2-2x}\right)}\cdot \left(\frac{(2-2x)(-2+2x)-(2-2x+x^2)(-2)}{(2-2x)^2} \right)=\frac{du}{dx}$$
$$\left(\frac{2-2x}{2-2x+x^2}\right)\cdot \left(\frac{2x(2-x)}{(2-2x)^2} \right)=\frac{du}{dx}$$ $$\frac{x(2-x)}{(1-x)}\frac{1}{(2-2x+x^2)}dx=du$$ Now, we have $$\int_{0}^{\log\left(\frac{e^2+1}{2e}\right)}\log(u)du$$
$$=\left[u\log(u)-u\right]_{0}^{\log\left(\frac{e^2+1}{2e}\right)}$$
$$=\left[u\log\left(\frac{u}{e}\right)\right]_{0}^{\log\left(\frac{e^2+1}{2e}\right)}$$
$$=\log\left(\frac{e^2+1}{2e}\right)\cdot\log\left(\frac{1}{e}\log\left(\frac{e^2+1}{2e}\right)\right)-\lim_{u\to 0}u\log\left(\frac{u}{e}\right)$$
$$=\log\left(\frac{e^2+1}{2e}\right)\cdot\log\left(\frac{1}{e}\log\left(\frac{e^2+1}{2e}\right)\right)-0$$
Hence, we get
$$\bbox[5px, border:2px solid #C0A000]{\color{red}{\int_{0}^{\frac{e-1}{e}}\frac{x(2-x)}{1-x}\frac{\log\left(\log\left(1+\frac{x^2}{2-2x}\right)\right)}{2-2x+x^2}dx}=\color{blue}{\log\left(\frac{e^2+1}{2e}\right)\cdot \log\left(\frac{1}{e}\log\left(\frac{e^2+1}{2e}\right)\right)}}$$
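A numerical verification in Python (a sketch, assuming scipy; the mild logarithmic singularity at $0$ is integrable):

    import math
    from scipy.integrate import quad

    f = lambda x: (x*(2 - x)/(1 - x)) * math.log(math.log(1 + x**2/(2 - 2*x))) / (2 - 2*x + x**2)
    numeric, _ = quad(f, 0, (math.e - 1)/math.e)
    L = math.log((math.e**2 + 1)/(2*math.e))
    print(numeric, L*math.log(L/math.e))     # both ~ -0.796

The quadrature should match the closed form. |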
How to prove this $(x,y)H\binom{x}{y}\ge p(x^2+y^2)$ | Let $v=[x,y]^T$ and let $H$ have the eigendecomposition $H=P^TDP$, where $D$ is the $2\times 2$ diagonal matrix with eigenvalues $\lambda_1,\lambda_2$ as its diagonal entries. Let $\lambda_1\geq \lambda_2$. Define $w=Pv=[w_1,w_2]^T$. Then
\begin{align}
v^THv=v^TP^TDPv=w^TDw=\lambda_1w_1^2+\lambda_2w_2^2\geq \lambda_2(w_1^2+w_2^2)=\lambda_2(x^2+y^2)
\end{align}
Convince yourself that $w_1^2+w_2^2=x^2+y^2$. Also check whether you can derive something along the same lines for any $N\times N$ positive definite matrix. |
Generate Unique EMV Payment Token from PAN | There are different methods of generating tokens from a PAN: cryptographic methods and non-cryptographic methods. The non-cryptographic methods are straightforward: you just generate random strings of numbers and keep a table/database that will have the PAN-Token mapping. For cryptographic tokens, you need an algorithm that encrypts the original PAN to create a new token. One thing to have in mind is the BIN range as you have mentioned. There exists a family of encryption algorithms referred to as Format Preserving Encryption (FPE) algorithms. As the name implies, they preserve the format of any data you are encrypting: encrypt a 16-digit plaintext (PAN) and get a 16-digit ciphertext (token). NIST has released a specification explaining how to use FPE algorithms, see: http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-38G.pdf
It is important to note that the FPE algorithms in the NIST document do not deal with the Luhn check digit. So a solution to this is to remove the Luhn check digit from the PAN and then encrypt the rest of the digits; after getting the ciphertext (token), you calculate a new Luhn check digit and append it to the end of the token you generated.
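For the last step, a sketch of the Luhn check-digit computation in Python (the token body below is just a made-up example):

    def luhn_check_digit(body: str) -> str:
        # body: digit string WITHOUT its check digit; the rightmost digit of
        # body is doubled, since appending the check digit shifts positions.
        total = 0
        for i, ch in enumerate(reversed(body)):
            d = int(ch)
            if i % 2 == 0:
                d = d*2 - 9 if d > 4 else d*2
            total += d
        return str((10 - total % 10) % 10)

    token_body = "799273987112345"           # hypothetical 15-digit token body
    print(token_body + luhn_check_digit(token_body))

Appending the returned digit yields a token that passes the standard Luhn check. |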
Convergence in distribution of conditional expectations | A single sequence $ a_n$ that works for all $ X $ and $\mathcal{F}_n $ does not exist. Just take any such inputs and create a new sequence of filtrations by repeating the old filtrations more and more often. More specifically let
$$
\tilde{\mathcal{F}}_n:=\mathcal{F}_{\sqrt{n}}
$$
(Round to nearest integer)
If you had rate $1/2$ convergence before, you will have rate $1/4$ convergence with respect to this new filtration.
Since i.i.d. random variables can be considered as a special case of martingales, we obviously do have convergence in some situations. To study when this is the case, one way is to have a look at $ Z_n :=E[X|\mathcal F_n]-E[X|\mathcal F_{n-1}] $ and apply various general forms of the central limit theorem (Lindeberg, ...). |
Solutions to Introduction to applied linear algebra book | I am pretty sure the solutions aren't published. However, you may ask specific questions about one particular problem and SE Mathematics might answer. |
Elementary question in differential geometry | You've endowed $C_h$ with the structure of an abstract manifold but $C_h$ is not a submanifold of $\mathbb R^3$. The fact that your set isn't a submanifold boils down to two observations:
(1) The fact your map $i$ is not differentiable at the origin
and
(2) An application of the implicit function theorem gives the proof by contradiction. The implicit function theorem says that if your set was a submanifold, $i$ would have to be smooth -- technically you have to consider the two other coordinate projections $(x,y,z) \to (y,z)$, $(x,y,z) \to (x,z)$ but $C_h$ does not satisfy the "vertical line rule" so it can't be a graph of a function of $(y,z)$ or $(x,z)$. |
Is my reasoning about Borel Lebesgue theorem correct? | $U_k$'s do not cover $[a,b]$ since $b$ is not in any of them.
$[a,b]$ is the union of all singleton sets $\{\{x\}: a \leq x \leq b\}$ and this gives a closed cover with no finite subcover. |
Is this set a manifold?.... | I think $x=ca$ is not well-defined, so you could instead write
$A =(\mathbb{R}\setminus\{0\})\times M$, where you interpret the ordered pair $(c,m)$ as the object you originally wrote as $cm$.
Now observe that $\mathbb{R}\setminus\{0\}$ and $M$ are manifolds, and thus their Cartesian product is too.
Hope that helps you! |
How to disprove the following using negation? | In general
$$\neg\left(\forall A\in U\,,\,A\implies B\right)=\exists A\in U\,,\,\neg(A\implies B)\;,\;\text{or equivalently}\;\;\exists A\in U\;,\;A\,\wedge\neg B $$
In your case:
$$\exists f,\,g\in \mathcal F\;,\;\;\log f(n)=\mathcal O(g(n))\;\;\wedge\;\;f(n)\notin\mathcal O(3^{g(n)})$$ |
An analytic function satisfies $f(1/z)=f(z) $, if $f$ is real on $\{|z|=1\}$, then the coefficients of expansion are real. | The expression you compute for $\overline{a_n}$ looks much more like the expression you computed for $a_{-n}$ than like the one you computed for $a_{-n-1}$. |
Reference request: Controllable and Observable form for transform function | Consider the generic transfer function
$$
H(s) = \frac{Y(s)}{X(s)} = \frac{b_0s^n + \cdots + b_ns^0}{s^n + a_1s^{n-1} + \cdots + a_n}
$$
Then the state space model is
\begin{align}
\dot{\mathbf{q}} &= \mathbf{Aq} + \mathbf{B}u\\
y &= \mathbf{Cq} + Du
\end{align}
where
\begin{align}
\mathbf{A} &=
\begin{bmatrix}
-a_1 & 1 & 0 & \cdots & 0\\
-a_2 & 0 & 1 & 0 & \vdots\\
\vdots & \vdots & & \ddots & 0\\
-a_{n-1} & 0 &\cdots & & 1\\
-a_n & 0 & 0&\cdots & 0
\end{bmatrix}\\
\mathbf{B} &=
\begin{bmatrix}
b_1 - a_1b_0\\
b_2 - a_2b_0\\
\vdots\\
b_{n-1} - a_{n-1}b_0\\
b_n - a_nb_0
\end{bmatrix}\\
\mathbf{C} &= \begin{bmatrix}1 & 0 & \cdots & 0\end{bmatrix}\\
D &= b_0
\end{align}
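A small Python sketch that assembles these matrices from the coefficients (the function name and example coefficients are mine; scipy is used only for a round-trip check):

    import numpy as np
    from scipy.signal import ss2tf

    def tf_to_ss(b, a):
        # b = [b0, ..., bn] (length n+1), a = [a1, ..., an]
        n = len(a)
        A = np.zeros((n, n))
        A[:, 0] = -np.asarray(a)             # first column: -a1, ..., -an
        A[:-1, 1:] = np.eye(n - 1)           # shifted identity block
        B = (np.asarray(b[1:]) - np.asarray(a)*b[0]).reshape(-1, 1)
        C = np.eye(1, n)                     # [1, 0, ..., 0]
        return A, B, C, b[0]

    A, B, C, D = tf_to_ss([2., 3., 1.], [4., 5.])   # (2s^2+3s+1)/(s^2+4s+5)
    print(ss2tf(A, B, C, D))                 # recovers ([2, 3, 1], [1, 4, 5])

The round trip through ss2tf should return the original numerator and denominator. |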
solving a strange Diophantine equation ${\sqrt{n}}^\sqrt{n} -11 =m!^2$ | Let
$$k^k-11 = (m!)^2.$$
If $m\ge 11$, then $121\mid (m!)^2$, while $121\nmid k^k-11$: indeed, $121\mid k^k-11$ would force $11\mid k$, hence $121\mid k^k$ and $11\equiv 0\pmod{121}$, which is absurd. So $m\le 10$ and $(m!)^2\le (10!)^2$.
On the other hand, $k$ is odd (since $k^k=(m!)^2+11$ is odd for $m\ge 2$, and $k^k=12$ is impossible for $m\le 1$), $11<k^k\le (10!)^2+11$, and $k\ne 11$, because $11^{11}-11$ is divisible by $11$ but not by $11^2$, whereas a perfect square contains each prime to an even power. Hence
$$k\in\{3,5,7,9\}.$$
Note than:
$$\sqrt{3^3-11} = 4 \not= m!,$$
$$\sqrt{5^5-11} = \sqrt{3114}\in(55,56),\quad \sqrt{5^5-11}\not\in\mathbb N,$$
$$\sqrt{7^7-11} = \sqrt{823532}\in(907,908),\quad \sqrt{7^7-11}\not\in\mathbb N,$$
$$\sqrt{9^9-11} = \sqrt{387420478}= \sqrt{19683^2-11}\in(19682,19683),\quad \sqrt{9^9-11}\not\in\mathbb N.$$
So the given Diophantine equation has no solutions.
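A brute-force confirmation over a small range in Python (a sketch; it only rules out small $k$ and $m$, matching the bounds above):

    import math
    hits = [(k, m) for k in range(2, 20) for m in range(1, 15)
            if k**k - 11 == math.factorial(m)**2]
    print(hits)                              # [] : no solutions in this range

The search comes back empty, consistent with the argument. |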
Complex number equivalency | Note that $(i+1)^{2} = i^{2} + 2i + 1 = -1 + 1 + 2i = 2i$, so in fact, the two roots you've listed are equal.
Just to make it perfectly explicit:
$$r = \pm(i+1)\sqrt{\frac{\alpha}{2\beta}} = \pm\sqrt{\frac{(i+1)^{2}\alpha}{2\beta}} = \pm\sqrt{\frac{2i\alpha}{2\beta}} = \pm\sqrt{\frac{i\alpha}{\beta}}$$ |
sequence with no accumulation point | You can take the sequence $\left(\sqrt n\right)_{n\in\mathbb N}$, for instance. Can you prove that it works? |
When does it suffice to show statements about rings only for the local ring after localizing at a prime? | First of all, I love this question.
Secondly, as Max says in the comments, what needs to be true in order to use this trick is that the statement "Property A holds for ring $R$ (or module $M$)" needs to follow from the statement "Property A holds for all localizations $R_p$ (or $M_p$), as $p$ ranges over all primes of $R$." Properties of which this is true are said to be local properties. Some properties are local by definition, i.e. the definition is first given for local rings and then defined to hold for general rings whenever it holds for every localization (e.g. the notion of a regular ring), but other properties' localness requires a theorem.
For example, injectivity is a local property of a map: If $\phi: M\rightarrow N$ is a module homomorphism over a ring $R$, and for every prime $p\triangleleft R$, we have $\phi_p:M_p \rightarrow N_p$ is injective, then $\phi$ is injective. This statement requires proof (it is Proposition 3.9 in Atiyah-MacDonald's classic Introduction to Commutative Algebra and I'm sure it's also proven in Eisenbud), but once we have proved it, it means we can afterward assess injectivity just by considering localizations. |
Interested in a closed form for this recursive sequence. | Based on OEIS, this seems to be https://oeis.org/A158466. There you can see some closed-form formulas (if you consider a finite sum closed form, which is quite usual), for example
$$
a_n = \sum_{k=1}^{n}(-1)^{k+1}\binom{n}{k}\frac{2^k}{2^k-1}.
$$
You should be able to verify it by plugging it into the recursive formula. |
if $G_{4n+1}=2G_{2n+1}-G_{n},G_{4n+3}=3G_{2n+1}-2G_{n}$,Find $G_{n}=n?$ | This is sequence A030101 in the OEIS. That is, $G(n)$ is the number obtained by reversing the digits of $n$ when written base $2$, e.g. $G(25)=G(11001_2)=10011_2=19$.
This is easy to check: If $n=d_0+2d_1+\cdots+2^rd_r$, then
$$
\begin{align}
G(n)&=d_r+\cdots+2^rd_0\\
G(2n)&=d_r+\cdots+2^rd_0+2^{r+1}\cdot0\\
G(2n+1)&=d_r+\cdots+2^rd_0+2^{r+1}\cdot1\\
G(4n+1)&=d_r+\cdots+2^rd_0+2^{r+1}\cdot0+2^{r+2}\cdot1\\
G(4n+3)&=d_r+\cdots+2^rd_0+2^{r+1}\cdot1+2^{r+2}\cdot1
\end{align}
$$
from which the identities $G(2n)=G(n)$, $G(4n+1)=2G(2n+1)-G(n)$, and $G(4n+3)=3G(2n+1)-2G(n)$ are easily verified.
The upshot is that the OP's "good" numbers are those that are palindromes when written in binary.
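These identities are easy to test in Python:

    def G(n):
        return int(bin(n)[2:][::-1], 2)      # reverse the binary digits of n

    for n in range(1, 500):
        assert G(2*n) == G(n)
        assert G(4*n + 1) == 2*G(2*n + 1) - G(n)
        assert G(4*n + 3) == 3*G(2*n + 1) - 2*G(n)
    print(G(25))                             # 25 = 11001_2 -> 10011_2 = 19

All assertions pass, and $G(25)=19$ as claimed. |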
Is $\operatorname{rk}(\bar AA)=\operatorname{rk}(A^*A)$ ?? | It isn't true.
Consider $A =\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$. Then $\overline{A}A = A^2 = 0$ but
$$A^*A = A^TA = \begin{bmatrix} 0 & 0 \\ 1 & 0\end{bmatrix}\begin{bmatrix} 0 & 1 \\ 0 & 0\end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 1\end{bmatrix}$$ |
$\ \sqrt{x+39}-\sqrt{x+7}=4 $ | One can square twice. I prefer to invert and obtain
$$\frac{1}{\sqrt{x+39}-\sqrt{x+7}}=\frac{1}{4},$$
and then by rationalizing the denominator get
$$\frac{\sqrt{x+39}+\sqrt{x+7}}{32}=\frac{1}{4},$$
or equivalently
$$\sqrt{x+39}+\sqrt{x+7}=8.$$
Now "add" to the original equation, divide by $2$. We get $\sqrt{x+39}=6$, and now it's over.
Remark: In the original post, the first step used was to square. This to some degree complicates things, the middle term should have been $-2\sqrt{(x+39)(x+7)}$. Rearrangement and squaring now get us to where we want. |
Permutations/combinations, number of elements and ways | Here, as an example, $$P(5, 3) = \frac{5!}{2!}$$ means that out of 5 possible objects, how many ways there are to choose 3 objects where order matters. On the other hand, $$C(5, 3) = \frac{5!}{3!\cdot 2!}$$ means the same thing except order doesn't matter.
In this case, note that the textbook says "where order matters." Hence, $P(N, R)$ is the number of ways to choose $R$ objects where order matters, and $C(N, R)$ is the number of ways to choose $R$ objects where order doesn't matter.
Does this help? |
How to uniform 1 out of 7 chance, using 2 coins | I think it cannot, and the problem is the following. Let there be $N-n$ throws of the fair coin and $n$ throws of the unfair coin (hence $N$ total throws). Then the probability of any configuration happening is
$$
\frac{1}{2^{N-n}}\frac{6^k}{7^{n}}
$$
where $0\leq k\leq n$.
Now, in order to simulate a fair die, you want to associate to each face some subset of configurations, and you want each such subset to have probability $1/7$. In other words, you want to divide the configurations, whose probabilities are given by the expression at the beginning, into groups with probability $1/7$. Then, for instance, for the first face you will have to find integers $k_1,k_2,\dots,k_i$ such that
$$
\frac{1}{7}=\sum_{j=1}^i\frac{1}{2^{N-n}}\frac{6^{k_j}}{7^n}=\frac{1}{2^{N-n}}\frac{6^{\sum_j k_j}}{7^n}.
$$
Then
$$
7\times 6^{\sum_j k_j}=2^{N-n}7^n
$$
This is definitely impossible, since the left hand side is divisible by 3 while the right hand side is not.
Hence it is impossible to simulate the fair die with a fixed finite number of tosses. |
integral operator with degenerate kernel | In your case,
$$
K(s,t) = \overline{K(t,s)}
$$
so the corresponding integral operator $T_K$ is self-adjoint. Thus, the norm of $T_K$ coincides with its spectral radius. Now since the spectrum is compact, $\exists 0\neq \lambda \in \sigma(T_K)$ such that $|\lambda| = \|T_K\|$. Since $T_K$ is a compact operator and $\lambda\neq 0$, $\lambda$ must be an eigenvalue of $T_K$. |
Simplify the expression $\sin 4x+\sin 8x+\cdots+\sin 4nx$ | If we multiply the whole sum by $2\sin(2x)$, by exploiting:
$$2\sin(4mx)\sin(2x) = \cos((4m-2)x)-\cos((4m+2)x) $$
we have, through telescopic sums:
$$ 2\sin(2x)\cdot\sum_{m=1}^{n}\sin(4mx) = \cos(2x)-\cos((4n+2)x).$$ |
Shortest Path and Minimum Curvature Path - implementation | Equation 6
By (5) you have formulas for the displacement in $x$ and in $y$ direction. These are simply numbers. The squared length is the sum of the squares of these displacements, by Pythagoras. That squared length is then written as a function of a two-element parameter vector $\alpha_i$. It is quadratic in $\alpha$, so it can be split into a matrix which gets multiplied by $\alpha_i$ from both sides, a vector which gets multiplied by $\alpha_i$ in what is essentially a dot product, and a constant part which does not depend on $\alpha_i$ at all. This separation is what (6) does.
Let's have a closer look. Remember that the form of your $x$ displacement was this:
$$
\Delta P_{x,i} =
\begin{pmatrix}\Delta x_{i+1}\\-\Delta x_i\end{pmatrix}^{\mathrm T}\alpha_i
+ \Delta x_{i,0} =
\alpha_i^{\mathrm T}\begin{pmatrix}\Delta x_{i+1}\\-\Delta x_i\end{pmatrix}
+ \Delta x_{i,0}
$$
Let's introduce some abbreviations for the recurring vectors here:
$$
\beta_i := \begin{pmatrix}\Delta x_{i+1}\\-\Delta x_i\end{pmatrix} \\
\gamma_i := \begin{pmatrix}\Delta y_{i+1}\\-\Delta y_i\end{pmatrix}
$$
Now you can write
$$
\Delta P_{x,i} = \beta_i^{\mathrm T}\alpha_i + \Delta x_{i,0} \\
\Delta P_{y,i} = \gamma_i^{\mathrm T}\alpha_i + \Delta y_{i,0}
$$
and with these you can compute the squared length as the sum of the squared displacements in the two coordinate directions like this:
\begin{align*}
\Delta P_{x,i}^2 + \Delta P_{y,i}^2 &=
\left(\beta_i^{\mathrm T}\alpha_i + \Delta x_{i,0}\right)^2 +
\left(\gamma_i^{\mathrm T}\alpha_i + \Delta y_{i,0}\right)^2 \\&=
\left(\beta_i^{\mathrm T}\alpha_i\right)^2 +
2\left(\beta_i^{\mathrm T}\alpha_i\right)\Delta x_{i,0} +
\Delta x_{i,0}^2 +
\left(\gamma_i^{\mathrm T}\alpha_i\right)^2 +
2\left(\gamma_i^{\mathrm T}\alpha_i\right)\Delta y_{i,0} +
\Delta y_{i,0}^2 \\&=
\alpha_i^{\mathrm T}\beta_i\beta_i^{\mathrm T}\alpha_i +
2\Delta x_{i,0}\beta_i^{\mathrm T}\alpha_i +
\Delta x_{i,0}^2 +
\alpha_i^{\mathrm T}\gamma_i\gamma_i^{\mathrm T}\alpha_i +
2\Delta y_{i,0}\gamma_i^{\mathrm T}\alpha_i +
\Delta y_{i,0}^2 \\&=
\alpha_i^{\mathrm T}\left(
\beta_i\beta_i^{\mathrm T} + \gamma_i\gamma_i^{\mathrm T}
\right)\alpha_i +
\left(
2\Delta x_{i,0}\beta_i^{\mathrm T} +
2\Delta y_{i,0}\gamma_i^{\mathrm T}
\right)\alpha_i +
\left(
\Delta x_{i,0}^2 + \Delta y_{i,0}^2
\right)
\end{align*}
So here you see that $H_{S,i}$ is the matrix
$$ H_{S,i}:=\beta_i\beta_i^{\mathrm T} + \gamma_i\gamma_i^{\mathrm T} $$
and $B_{S,i}$ is the row vector
$$ B_{S,i}:=
2\Delta x_{i,0}\beta_i^{\mathrm T} + 2\Delta y_{i,0}\gamma_i^{\mathrm T} $$
while the constant term is the last parenthesis and irrelevant for the subsequent optimization.
Equation 7
Based on this, (7) expresses $\alpha_i$ as the matrix-times-vector product
$$\alpha_i = E_i\alpha = \begin{pmatrix}\alpha_{i+1}\\\alpha_i\end{pmatrix}$$
The matrix $E_i$ here simply has two rows and $n$ columns. The first row has a $1$ in column $i+1$ and the second a $1$ in column $i$. This matrix $E_i$ can be combined with the matrices of the previous step in order to obtain an expression in $\alpha$, without the need to select elements based on $i$. With this formulation, the sum of products with matrices can be turned into a product with sums of matrices, using
\begin{align*}
H_S &:= \sum_{i=1}^n E_i^{\mathrm T}H_{S,i}E_i \\
B_S &:= \sum_{i=1}^n B_{S,i}E_i
\end{align*}
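For concreteness, here is a minimal NumPy sketch of this assembly. The input arrays `dx`, `dy`, `dx0`, `dy0` (holding $\Delta x_i$, $\Delta y_i$, $\Delta x_{i,0}$, $\Delta y_{i,0}$) are hypothetical placeholders, and a closed track is assumed so that index $i+1$ wraps around; in the paper these quantities come from the track geometry:

```python
import numpy as np

def assemble_HS_BS(dx, dy, dx0, dy0):
    """Assemble H_S and B_S from per-segment data (hypothetical array names).

    dx, dy   : length-n arrays holding Delta x_i, Delta y_i
    dx0, dy0 : length-n arrays holding Delta x_{i,0}, Delta y_{i,0}
    A closed track is assumed, so index i+1 wraps around to 0.
    """
    n = len(dx)
    H_S = np.zeros((n, n))
    B_S = np.zeros(n)
    for i in range(n):
        j = (i + 1) % n                          # wrap-around index i+1
        beta = np.array([dx[j], -dx[i]])         # beta_i
        gamma = np.array([dy[j], -dy[i]])        # gamma_i
        H_i = np.outer(beta, beta) + np.outer(gamma, gamma)   # H_{S,i}, 2x2
        B_i = 2 * dx0[i] * beta + 2 * dy0[i] * gamma          # B_{S,i}, row vector
        E = np.zeros((2, n))                     # selection matrix E_i
        E[0, j] = 1.0                            # first row picks alpha_{i+1}
        E[1, i] = 1.0                            # second row picks alpha_i
        H_S += E.T @ H_i @ E                     # E_i^T H_{S,i} E_i
        B_S += B_i @ E                           # B_{S,i} E_i
    return H_S, B_S
```

In real code one would not materialize the sparse $E_i$ at all, but add the $2\times 2$ block $H_{S,i}$ directly into rows and columns $i$, $i+1$ of $H_S$ (and the two entries of $B_{S,i}$ into $B_S$); the explicit form above just mirrors the equations.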
Minimum curvature
Explaining pretty much every equation in the MCP case is beyond the scope of this post. The above should give you an idea about the kind of transformations required. If you need more help, please ask more specific questions separately. |
Equation of the straight line equidistant from $(2,-2)$ & $3x - 4y + 1 = 0$? | What you have done will result in a parabola, which is defined as the locus of points equidistant from a fixed line $ax+by+c=0$ and a fixed point $(x_1,y_1)$; its general equation is:
$$\sqrt{(x-x_1)^2+(y-y_1)^2}=\frac{|ax+by+c|}{\sqrt{a^2+b^2}}$$
Let's fix some names: $L_g$ is the given line, $L_f$ the line to be found, and $P$ the given point.
You were required to use the distance between $L_f$ and $P$ (which is really the distance from $P$ to the foot of the perpendicular dropped from $P$ onto $L_f$), not the distance between an arbitrary point on $L_f$ and $P$.
The resulting line will also be parallel to the given line. See the figure: the blue line is what we have, the red point is what we have, the green curve is what you predicted, and the purple line is what we want. This line might look parallel, but suppose for a moment it is not: then these lines must intersect somewhere, and thus the distance between them is $???$ (think), which makes it necessary for both lines to be parallel. |
Calculating expectation, infinite fair coins tosses | Use the law of total expectation to solve this question.
$$\mathbb E[\text{number of tosses}] = P(\text{first toss is }H)\cdot\big(\mathbb E[\text{tosses until }T]+1\big) + P(\text{first toss is }T)\cdot\big(\mathbb E[\text{number of tosses}]+1\big).$$
Note that the "$+1$" accounts for the toss already used. The probabilities of getting $H$ or $T$ on the first toss are $0.5$ each, and the number of tosses until $T$ appears is geometric with $p=0.5$, hence has expectation $2$. If we get $T$ on the first toss we need to "start all over again" (but remember we have already "wasted" one toss).
Therefore:
(Write $X := \mathbb E[\text{number of tosses}]$.)
$$X = 0.5 * (2+1) + 0.5(X + 1)$$
$$ X = 1.5 + 0.5X + 0.5$$
$$0.5X = 2$$
$$ X=4$$ |
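If you want to sanity-check this numerically, here is a quick Monte Carlo sketch; I am reading the quantity as the number of tosses up to the first $T$ that appears after at least one $H$, which is what the recursion above describes:

```python
import random

def tosses_until_t_after_h():
    """Count fair-coin tosses until the first T that follows some earlier H."""
    n, seen_h = 0, False
    while True:
        n += 1
        h = random.random() < 0.5
        if seen_h and not h:
            return n
        seen_h = seen_h or h

trials = 10**5
print(sum(tosses_until_t_after_h() for _ in range(trials)) / trials)  # ~4.0
```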
When do roots of three quadratic polynomials multiply to 1? | Your question asks if you have three quadratic polynomials
$$ p_1(x)\!:=\!a_1 x^2+b_1 x+c_1, \;\;
p_2(x)\!:=\!a_2 x^2+b_2 x+c_2, \;\;
p_3(x)\!:=\!a_3 x^2+b_3 x+c_3 $$
with three pairs of roots
$$ p_1(r_1^+) = p_1(r_1^-) = 0, \quad
p_2(r_2^+) = p_2(r_2^-) = 0, \quad
p_3(r_3^+) = p_3(r_3^-) = 0 $$
where for $\,n=1,2,3,\,$
$$ r_n^{\,\pm} := \frac{-b_n\pm\sqrt{b_n^2-4a_n c_n}}{2a_n}, $$
then what is the condition that $\, r_1 r_2 r_3 = 1\,$ for some
choice of the roots as given in terms of the coefficients of the
three polynomials? The answer is given by a homogeneous degree
$12$ polynomial expanded out with $34$ monomial terms
$$ P := (a_1a_2a_3)^4 \prod_{i,j,k=\pm}
(1 - r_1^{\,i}\,r_2^{\,j}\,r_3^{\,k}) =
(a_1a_2a_3)^4 + \dots + (c_1c_2c_3)^4 $$ where the $\,\dots\,$
represents the other $32$ degree-$12$ monomial terms. I used a
computer algebra system to get the expansion. As stated in the
question
this turns out to be quite messy
and I don't think it can be simplified except for special cases,
but I have been wrong before, so maybe there is hope. |
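Not an expansion, but as a quick numerical sanity check: for random real coefficients the symmetrized product comes out real (up to rounding), as it must if it equals a polynomial in the coefficients. A sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)

# the two (possibly complex) roots of each quadratic a_n x^2 + b_n x + c_n
roots = [np.roots([a[n], b[n], c[n]]) for n in range(3)]

P = (a[0] * a[1] * a[2]) ** 4
for r1 in roots[0]:
    for r2 in roots[1]:
        for r3 in roots[2]:
            P = P * (1 - r1 * r2 * r3)

print(abs(np.imag(P)))  # ~1e-12: the symmetrized product is real
```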
Summing Over Uncountable Index Sets | A sum of non-negative terms over an uncountable index set is infinite whenever uncountably many terms are non-zero.
To see this, we prove the contrapositive: if a sum of non-negative terms over an uncountable index set is finite, then at most countably many terms are non-zero.
Proof. Let $\sum_{\alpha \in A} x_\alpha = L < \infty$ and let $S_n = \{\alpha \in A \mid x_\alpha > 1/n\}$. Then
$$L = \sum_{\alpha \in A} x_\alpha \ge \sum_{\alpha \in S_n} 1/n = \frac{|S_n|}{n},$$
so $|S_n| \le nL$; in particular each $S_n$ is finite.
Let $S = \{\alpha \in A \mid x_\alpha > 0\}$. Then $S = \bigcup_n S_n$ is a countable union of finite sets, hence at most countable. $\square$ |
$8$ people including $A,B,C$, and $D$ will be rearranged. In how many ways can they be rearranged such that $B$ and $C$ will be between $A$ and $D$? | First choose $4$ places, which you can do in ${8\choose 4}$ ways; then put $A$ and $D$ at the two outer places ($2$ ways) and $B$ and $C$ in the middle two ($2$ ways). Then arrange all the others in the remaining places, in $4!$ ways. Now multiply all of these. So the answer is $${8\choose 4}\cdot 2\cdot 2\cdot 4! = 8!/6$$
Convert form English to logical symbols. | In the language of first-order logic, this could be formalized as follows (there are in fact many ways, I'll just show one of them):
Let $h$ denote the unary predicate "... is human", let $m$ denote the unary predicate "... is mortal", and let $z$ denote "Zeus".
Then your assumptions are
$$\forall x.\;h\ x\to m\ x$$
(read: "for all things, their humanness implies their mortality"),
and
$$\neg\ m\ z$$
(read: "Zeus' mortality is not given").
By modus tollens, these two statements imply the conclusion
$$\neg\ h\ z$$
(read: "Zeus' humanness is not given").
In short:
$$\frac{\displaystyle\forall x.\;h\ x\to m\ x\quad\quad\neg\ m\ z}{\displaystyle\neg\ h\ z}$$ |
Functional analysis problems collection with solutions | Volumes 3 and 4 of Kadison-Ringrose consist of full solutions to the exercises in volumes 1 and 2. I don't know other examples first hand.
That said, looking at full solutions is likely a bad way to go about functional analysis. You will certainly struggle with some exercises--we all do--but in my view that struggle is part of getting an understanding of the subject. |
Further studies on Fourier Series and Integrals. | @Peter, I strongly recommend T.W. Körner's Fourier Analysis (see http://www.amazon.com/Fourier-Analysis-T-246-rner/dp/0521389917/ref=sr_1_1?s=books&ie=UTF8&qid=1376232360&sr=1-1&keywords=korner+fourier+analysis) and the accompanying book of exercises. It's exceedingly well-written, full of interesting stuff, and accessible to good undergraduates. |
Nontrivial examples of pro-$p$ groups | To answer some of the questions you can consider $\mathbb{Z}_p \wr \mathbb{Z}_p$. It is a finitely generated pro-$p$ group of infinite rank. Also it is solvable but not nilpotent.
Finally, in the fourth example, a pro-$p$ group of finite rank cannot have a (topologically) infinitely generated (closed) subgroup by definition. Did you mean something different? |
One-point perspective formula | If you have a 3D coordinate system with origin $(0, 0, 0)$ at your (dominant) eye, the picture plane at $z = d$, and the vanishing point at origin in your picture, then each 3D point $(x, y, z)$ can be trivially projected to your picture plane at $(x', y')$:
$$\begin{array}{l} x' = x \frac{d}{z} \\ y' = y \frac{d}{z} \end{array}$$
This is not a trick. It is not an approximation either. If the coordinate systems and the location of the observer are defined this way, you do get the 2D-projected coordinates simply by multiplying them by $d / z$, where $d$ is the distance from the observer to the projection plane (picture), and $z$ is the 3D $z$-coordinate for that point. I've explained why at the beginning of this answer.
If you want the 3D origin to be on the picture plane, with your dominant eye at $(0, 0, -d)$, then
$$\begin{array}{l} x' = x \frac{d}{z + d} \\ y' = y \frac{d}{z + d} \end{array}$$
If you want the 3D origin, and the vanishing point, to be at $( x_0 , y_0 )$ on the picture plane, and your eye is at $( x_0 , y_0 , -d )$, then
$$\begin{array}{l} x' = x_0 + ( x - x_0 )\frac{d}{z + d} \\ y' = y_0 + ( y - y_0 )\frac{d}{z + d} \end{array}$$
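As a minimal code sketch of this last pair of formulas (the helper name `project` is hypothetical; the point is assumed to lie in front of the eye, i.e. $z > -d$):

```python
def project(x, y, z, d, x0=0.0, y0=0.0):
    """Project the 3D point (x, y, z) onto the picture plane.

    Eye at (x0, y0, -d), vanishing point at (x0, y0) on the plane.
    Assumes z > -d, i.e. the point is in front of the eye.
    """
    s = d / (z + d)                            # perspective scale factor
    return x0 + (x - x0) * s, y0 + (y - y0) * s

print(project(1.0, 1.0, 0.0, d=2.0))   # (1.0, 1.0): on the picture plane
print(project(1.0, 1.0, 2.0, d=2.0))   # (0.5, 0.5): pulled toward the vanishing point
```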
The case where the eye is at $(0, 0, -d)$ but vanishing point at $( x_0 , y_0 )$ on the image plane, is a bit funny and somewhat more complicated, because the picture plane is skewed: it is not perpendicular to the line between the eye and the vanishing point $(0, 0, \infty)$. I haven't worked out the math for that case, because I've never needed to find out how to project an image that would be mostly viewed from the side.
Other projection models can be constructed from the same principles I outlined here, but the corresponding formulae are more complicated.
For example, two- and three-point perspectives can be modeled using a 3D coordinate system where origin is at the origin of the picture plane, but the 3D $x$ and $z$ axes (for two-point perspective), or $x$, $y$, and $z$ axes (for three-point perspective) are towards their respective vanishing points. The formulas for $x'$ and $y'$ are more complicated than above, because the 3D coordinate system is rotated compared to the 2D one.
(After all, vanishing points are nothing special or magical: any point infinitely far away is a vanishing point. When used as a geometric tool for perspective, we simply utilize the fact that all parallel lines meet at the same vanishing point. If you wanted to draw a picture of a quarter henge with say five monoliths, you could use five vanishing points, one for each monolith.)
Non-planar projections, for example projecting on a cylinder, are derived the same way using optics and linear algebra: simply put, by finding where the line between the eye and the detail we are interested in, intersects the projection surface.
Matrices are only used in this kind of math, because they let us write the operations needed in more concise form: in shorthand, in a way. In fact, if you can add, subtract, multiply, divide, and use up to three variables in a formula, you know enough math to start delving into descriptive geometry. If you then learn about vectors, matrices, and versors, you can quickly master the principles used in 3D graphics, ray tracing, and descriptive geometry applications in general. |
Sequences of functions which are cauchy w.r.t one norm but not another | As @hardmath pointed out, compact support on $\mathbb{N}$ just means that each function can only take a nonzero value on a finite number of points in $\mathbb{N}$.
Hint 1: For the $\|\cdot\|_{\infty}$ norm on this class of functions, since the support is compact, $\|f\|_{\infty}$ is precisely the largest value attained by $f$. Even if the other values of $f$ are not summable or square-summable, the infinity norm does not care.
Hint 2: What if we consider sequences of functions who have different compact supports? What if we consider sequences of functions whose supports vary with $n$? |
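For a concrete instance of Hint 2: take $f_n(k)=1/k$ for $k\le n$ and $f_n(k)=0$ otherwise. For $n>m$ we get $\|f_n-f_m\|_\infty=\frac{1}{m+1}\to 0$, so the sequence is Cauchy in the sup norm, while $\|f_n-f_m\|_1=\sum_{k=m+1}^{n}\frac{1}{k}$ can be made arbitrarily large, so it is not Cauchy in the $\ell^1$ norm.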
Question regarding the existence of a Bijection $f:\Bbb R^2\to \Bbb R$ | Digits and convergence do not behave well together. If $x_1,x_2,\ldots$ converges to $x$, there may never be an $n$ so that even the first digit of $x_n$ is the same as that of $x$. Consider, for example, the sequence $0.9$, $0.99$, $0.999$, $\ldots$, and its partner $1.1$, $1.01$, $1.001$, $\ldots$. These two sequences converge to the same point, but the construction outlined in what you linked sends them to wildly different places. In fact, a situation like this can be arranged around every rational point - which means that this function has densely many discontinuities. |
"Uniform Convergence" of the integral of a function | First, convince yourself that if $(a,b) \subseteq \mathbb{R}$ is any interval of length smaller than $2\pi$, i.e. $b-a \leq 2 \pi$, then we have
$$\int_{a}^{b} |f| ~\mathrm{d}x \leq \int_{0}^{2\pi} |f| ~ \mathrm{d}x =: I$$
due to the periodicity of $f$. Because of the uniform convergence, for any $\epsilon > 0$ you can find an $M \in \mathbb{N}$ such that $$\left| \sum_{k=m+1}^{\infty} \frac{\sin(kx)}{k}\right| < \frac{\epsilon}{I}, \quad x \in [\delta, 2\pi - \delta], \quad m \geq M.$$
We find that
$$|G_m (\zeta)| \leq \frac{\epsilon}{I}\int_{\delta}^{2\pi - \delta}|f(x+\zeta)| ~ \mathrm{d}x = \frac{\epsilon}{I}\int_{\zeta + \delta}^{\zeta + 2\pi - \delta} |f(y)|~\mathrm{d}y \leq \frac{\epsilon}{I}I = \epsilon $$ The last inequality holds because the length of the integration interval is smaller than $2\pi$. |
Recurrent random walk | I guess you meant to write $\mathbf{E}[S_n]=0$ because if $\mathbf{E}[N]=0$ this would mean that the probability of returning to zero is zero and the walk would be transient.
For recurrence you have to show that $\mathbf{E}[N]$ (as you defined) is infinite. Here is a link to a solution
http://www.statslab.cam.ac.uk/~james/Markov/s16.pdf |
How can one define the trace of a linear operator on any finite dimensional vector space, using the fact that $tr(A) = tr(P^{-1}AP)$? | Your argument is correct! Now consider an operator $\phi:V\to V$, choose any basis $(e_i)$, and suppose $\phi$ is given by the matrix $A$ in this basis; define $tr(\phi):=tr(A)$. What remains is to show that this does not depend on the basis: if $A'$ is the matrix of $\phi$ in another basis, then $A' = P^{-1}AP$ for the change-of-basis matrix $P$, so $tr(A')=tr(A)$ by the given fact. |
whether gluing the all faces of finite tetrahedrons in pairs would yield a manifold? | No, gluing faces of tetrahedra in pairs need not yield a 3-manifold. Stick two tetrahedra together to get a square pyramid, i.e. with a square face and four triangular faces in cyclic order $F_1,F_2,F_3,F_4$. Glue $F_1$ to $F_3$ and $F_2$ to $F_4$ so that a slice of the pyramid just below the top vertex is glued into a torus. Then the image of the vertices near this torus has all sufficiently small neighborhoods homeomorphic to a cone on a torus, which is not a 3-disk, and so the resulting identification space is not a manifold around the image of the vertex.
EDIT: This is incomprehensible in text, so I've sketched up a picture:
We have here a square pyramid resulting from gluing two tetrahedra along faces that now share vertices $v,w,s$ as indicated by the dotted line. To construct a non-three-manifold from this pyramid, glue the two faces labelled $A$ and the two faces labelled $B$ so as to glue the red quadrilateral into a torus. (This isn't hard to do: identify $u$ to $s$ and $t$ to $w$ for the $A$-gluing and $u$ to $w$, $t$ to $s$ for the $B$-gluing.) Then a neighborhood of $v$ extending to the red quadrilateral is glued into the cone on a torus, which is not homeomorphic to a 3-ball. |
An odd positive integer is the product of $n$ distinct primes. In how many ways can it be represented as the difference of two squares? | So, the way I'd think of it intuitively is by using the following fact:
Given any two odd integers, $a,b$, you can find an $x,y$ such that $a=x+y$
and $b=x-y$. To get these $x,y$ you let $x=\frac{a+b}{2}$ and
$y=\frac{a-b}{2}$.
This means that you can just think about how to partition the primes. By partitioning your list of primes into two groups, you have picked a pair $(a,b)$ from which you can use the above to obtain $x$ and $y$.
There is a small caveat here: You need $a\neq b$ for the above fact to work the way you want! However, in this problem, that cannot happen because of the Unique Prime Factorization property. If your list of primes were to have repeats, you would have to be more careful.
So, now the question is how can we partition the primes. There are $2^n$ ways to split $n$ objects into $2$ collections. However, there will be a small number of repeats because the partitions $(\{2,3\},\{5,7\})$ and $(\{5,7\},\{2,3\})$ produce the same pair of numbers $x,y$ just switched. This is the only way repeats can occur, and so dividing by $2$ to account for this gives the answer of $2^{n-1}$ |
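For example, $15=3\cdot 5$ has $n=2$ prime factors, and indeed there are $2^{2-1}=2$ representations: the partition $(a,b)=(15,1)$ gives $x=8$, $y=7$, i.e. $15=8^2-7^2$, and $(a,b)=(5,3)$ gives $x=4$, $y=1$, i.e. $15=4^2-1^2$.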
Limit of the function | First, notice that: $$\forall (x,y)\in\mathbb{R}^2,|\sin(x)-\sin(y)|\leqslant|x-y|.$$
Then, notice that: $$\forall x>0,\sqrt{x+1}-\sqrt{x}=\frac{1}{\sqrt{x+1}+\sqrt{x}}.$$ |
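Combining the two (assuming the function in question is $\sin\sqrt{x+1}-\sin\sqrt{x}$):
$$\left|\sin\sqrt{x+1}-\sin\sqrt{x}\right|\leqslant\sqrt{x+1}-\sqrt{x}=\frac{1}{\sqrt{x+1}+\sqrt{x}}\xrightarrow[x\to+\infty]{}0,$$
so the limit is $0$.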
Finding the remainder polynomial for a given polynomial. | There exists a polynomial $q(x)$ such that $$p(x)=(3x^2-8x+5)q(x)+ax+b.$$ Since $3x^2-8x+5=(3x-5)(x-1)$ vanishes at $x=1$, plugging in $x=1$ gives $p(1)=0\cdot q(1)+a\cdot 1+b=a+b$, so from $p(1)=19$ you get $$19=a+b,$$ and from $p\left(\frac{5}{3}\right)=25$ (using the same argument at the other root) you have
$$25=\frac{5}{3}a+b.$$
Solve the linear system and you have the solution. |
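Explicitly, subtracting the first equation from the second gives $\frac{2a}{3}=6$, so $a=9$, $b=10$, and the remainder is $9x+10$.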
Natural deduction proof: C Ʌ D, C ↔ E, D ↔ F |- E Ʌ F | 1 C Ʌ D Premiss
2 C ↔ E Premiss
3 D ↔ F Premiss
4 D 1 (ɅE)
5 D → F 3 (↔E)
6 F 4, 5 (->E)
7 C 1 (ɅE)
8 C → E 2 (↔E)
9 E 7, 8 (->E)
10 E Ʌ F 6, 9 (ɅI)
Everything is good. |
How can we express the statement "$f$ is a bijection from $A$ to $B$" in predicate logic? | Establish Domain and Codomain:
$$\forall y~ \forall x~ \big(F(x) = y \implies (x \in A \land y \in B)\big)$$
Surjection:
$$\forall y \in B ~ \exists x \in A ~\big(F(x)=y\big)$$
Injection:
$$\forall y ~ \forall x_1 ~ \forall x_2 ~ (F(x_1) = y \land F(x_2) = y) \implies (x_1 = x_2)$$ |
Evaluating the Limit of 2 Variables | In general, no. $\displaystyle\lim_{(x,y)\to (0,0)} {y\over x}$ does not exist:
$$\eqalign{
\lim_{(x,x)\to (0,0)} {x\over x} &= \lim_{x\to0}{x \over x} = 1\cr
\lim_{(x,0)\to (0,0)} {0\over x} &= \lim_{x\to0}{0 \over x} = 0\cr
}$$
Even if $\displaystyle\lim_{(x,rx)\to(0,0)} f(x,rx)$ exists, $\displaystyle\lim_{(x,y)\to(0,0)} f(x,y)$ might not exist, because you might have
$\displaystyle\lim_{(x,rx)\to(0,0)} f(x,rx) \not= \lim_{(x,x^2)\to(0,0)} f(x,x^2)$. (I'm sure someone will chime in with an example ...) |
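For instance, take $f(x,y)=\dfrac{x^2y}{x^4+y^2}$: along every line $y=rx$ one has $f(x,rx)=\dfrac{rx}{x^2+r^2}\to 0$, yet $f(x,x^2)=\dfrac{1}{2}$ for all $x\neq 0$, so the two-variable limit does not exist.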
Seemingly contradictory results show $f(n) = n e^{\frac{-\pi n}{2} i }$ is divergent | In your second method you saw the pattern of the sequence $f(n)$: from step $n-1$ you reach step $n$ by increasing the distance from the origin by one unit and rotating the new point clockwise by $90$ degrees. The point $f(n)$ thus moves further from the origin while it rotates. The definition of an oscillating sequence is given for real sequences $g_n:\mathbb{N}\to \mathbb{R}$.
In case of complex sequences we define the convergence of $f(n)$ to a point $z\in \mathbb{C}$ if the distance $\lvert f(n)-z \rvert\to 0$ as $n\to \infty$. By definition the sequence $f(n)$ diverges if such $z$ does not exist, that is our case as you proved. The crucial point is that we need an order in the range (that is what $\mathbb{C}$ does not possess) to talk about oscillation. |
Help with Gradient-related concepts | Let $\vec{x}(t)$ be a parametrisation of the level set $f(\vec{x})=k$, i.e. $f(\vec{x}(t)) = k$ for all $t$, then $\vec{v} = \frac{\mathrm{d}\vec{x}}{\mathrm{d}t}$.
It follows that $0=\frac{\mathrm{d}}{\mathrm{d}t}f(\vec{x}(t)) = \frac{\partial f}{\partial x}\frac{\mathrm{d}x}{\mathrm{d}t}+\frac{\partial f}{\partial y}\frac{\mathrm{d}y}{\mathrm{d}t} = (\frac{\partial f}{\partial x},\frac{\partial f}{\partial y})\cdot \vec{v}$.
On the other hand $\frac{\mathrm{d}}{\mathrm{d}t}f(\vec{x}+t\vec{v}) = \frac{\partial f}{\partial x}\vec{v}_x+\frac{\partial f}{\partial y}\vec{v}_y = (\frac{\partial f}{\partial x},\frac{\partial f}{\partial y})\cdot \vec{v}$.
Thus, indeed $D_{\vec{v}}f = \frac{\mathrm{d}}{\mathrm{d}t}f(\vec{x}+t\vec{v}) = 0$.
The result indeed generalises in that for any vector $\vec{v}$ tangent to a level set of $f$, $D_{\vec{v}}f =0$. |
Prove or disprove: If the sequence $(x_{n})_{n\in\mathbb{N}}\subset \mathbb{R}$ is convergent then $(nx_{n})_{n\in\mathbb{N}}$ is divergent | The statement is not true: if $x_n = 0$ for all $n$, the sequence converges, and $n x_n = 0$ converges as well.
For a non-zero limit it is true: if $\lim_{n\to +\infty}x_n = c \neq 0$, then $n x_n$ behaves like $nc$ for large $n$, so $n x_n \to +\infty$ or $-\infty$ according to the sign of $c$; in particular $(n x_n)$ diverges. |
Is there a way to prove that 2y(y-1) is divisible by four other than by means of induction? | If $y$ is even, $y=2p$ for some $p\in\mathbb{Z}$, so:
$$
2y(y-1)=4p(y-1).
$$
If $y$ is odd, $y-1=2p$ for some $p\in\mathbb{Z}$, so:
$$
2y(y-1)=4py.
$$
According to this, $4|2y(y-1)$ always. |
Proving that $V(R^*)=V(R)-1$ | I think this is essentially right. I think you meant to say that $x \in J - J^2$ [set difference], and I would want to see a little more justification for why $r \in J$ at the end, but it's good.
I do think it looks a little more complicated than it could be. I would try to present the argument as: to get the cotangent space of $R/(x)$ I take that of $R$, which is $J/J^2$, and then I quotient by the span of the nonzero vector $\bar{x}$ [the image of $x$]. Just to emphasize that it's linear algebra and nothing more. |
Vanishing of local cohomology $\operatorname{H}^1_J(\Gamma_I(M))=0$ | Sketch of proof:
Let $J=(x_1,\dots,x_n)$. If $J\subseteq I$ it's done. Let $J\nsubseteq I$ therefore $\exists x\in J-I $. By 4.1.22 from the same book we have the following exact sequence
$$0\rightarrow \Gamma_{I+x}(M)\rightarrow\Gamma_I(M)\rightarrow \Gamma_I(M_x)\rightarrow 0=H^1_{I+x}(M).$$
Applying $\Gamma_J(-)$ we get
$$ \cdots\rightarrow \Gamma_J(\Gamma_I(M_x))\rightarrow H^1_J(\Gamma_{I+x}(M))\rightarrow H^1_J(\Gamma_I(M))\rightarrow H^1_J(\Gamma_I(M_x))\rightarrow\cdots $$
Now use the independence Theorem
$$\Gamma_J(\Gamma_I(M_x))\cong \Gamma_J(\Gamma_{I_x}(M_x))\cong \Gamma_J(\Gamma_I(M))_x\cong \Gamma_{J_x}(\Gamma_I(M))_x=0 \quad(\text{since } x\in J).$$
$$H^1_J(\Gamma_I(M_x))\cong H^1_J(\Gamma_{I_x}(M_x))\cong H^1_J(\Gamma_I(M))_x\cong H^1_{J_x}(\Gamma_I(M))_x=0 \quad(\text{since } x\in J).$$
Therefore $H^1_J(\Gamma_{I+x}(M))\cong H^1_J(\Gamma_I(M))$.
Continuing as before we have
$$H^1_J(\Gamma_I(M))\cong H^1_J(\Gamma_{I+J}(M))=0 \quad(\text{since } J\subseteq I+J).$$ |
Proving there is no non-abelian finite simple group of order a Fibonacci number | The paper of Florian Luca that proves this result, and was mentioned in the comments, is as follows: Fibonacci numbers, Lucas numbers and orders of finite simple groups, Journal of Algebra, Number Theory and Applications, vol. 4, no. 1, 23--54 (2004).
Unfortunately, the paper is in an obscure journal, no version exists on the arXiv, and the journal website does not appear to be, how shall we say, very good.
I see some ideas with how to prove this myself, but would need to know more about Fibonacci numbers to do so. I satisfied myself with a proof for the alternating groups. (The sporadics can be done by inspection.)
By Carmichael's theorem, every $F_n$ is divisible by some prime not dividing any smaller $F_m$. Since $p$ divides $F_{p\pm 1}$, this primitive divisor must be at least $n-1$. But $|A_n|$ is divisible by exactly those primes at most $n$, and grows faster than $F_n$. In particular, if $|A_n|=F_m$, then $m>n$. Thus you only need to worry about the case $m=n+1$. And there is one: $n=3$, which of course does not yield a simple group. |
Branching/layered optimisation - how? | What you are talking about is essentially decomposition, but in reverse. Decomposition works by taking a large, complex problem and breaking it down into smaller, more easily solved subproblems. You're talking about building a large model out of many small ones, but the idea is identical.
If your problem is nicely separable, then the main problem is just the sum of its parts. If the problem is inherently non-parallelizable but intractable without decomposition, then you have some headaches to sort out, the specifics of which depend on the problem.
A good example is decoding of LDPC codes (which are basically the standard codes used for almost all digital communication nowadays), which uses an algorithm based on message-passing between the variable 'nodes' to break up a large, complicated problem into easy subproblems.
This turns an NP-complete problem into something solvable in linear time. But there are plenty of caveats, because the problem is inherently not parallelizable, which causes convergence issues that have been a huge area of research for the last 20 or so years.
If your problem is a linear program, then you can use for example Dantzig-Wolfe decomposition which breaks the problem up into a bunch of subproblems, the solutions of which are combined by a master problem in each iteration. |