Partial derivative involving Kronecker product | $\def\R#1{{\mathbb R}^{#1}}\def\v{{\rm vec}}\def\d{{\rm diag}}\def\D{{\rm Diag}}\def\o{{\tt1}}\def\p{{\partial}}\def\grad#1#2{\frac{\p #1}{\p #2}}\def\z{{\rm size}}$For
typing convenience, define the following matrix variables
$$\eqalign{
R &= XY^T-A(I_p\otimes X)Z^T \\
B &= A^TRZ \\
}$$
and use $e_k$ to denote the $k^{th}$ column of the identity matrix $I_p\,$ in order to define the block matrices
$$\eqalign{
&M_k = e_k\otimes I_m \\
&N_k = e_k\otimes I_n \\
}$$
and the matrix sum
$$\eqalign{
&S = \sum_{k=1}^p\;M_k^TBN_k\\
}$$
Let's also assign concrete dimensions to all of the matrices involved
$$\eqalign{
m,n &= \z(X) = \z(S) \\
q,n &= \z(Y) \\
m,mp &= \z(A) \\
p,p &= \z(I_p),\quad\z(e_k) = p,1 \\
q,np &= \z(Z) \\
m,q &= \z(R) \\
mp,np &= \z(B) \\
mp,m &= \z(M_k) \\
np,n &= \z(N_k) \\
}$$
One final bit of notation is to use a colon to denote the trace/Frobenius product
$$\eqalign{
S:X &= \sum_{i=1}^m \sum_{j=1}^n\;S_{ij} X_{ij} \;=\; {\rm Tr}(S^TX) \\
X:X &= \big\|X\big\|^2_F \\
}$$
Now write the objective using the above notation. Then calculate its differential and gradient.
$$\eqalign{
f &= R:R \\
df &= 2R:dR \\
&= 2R:\big(dX\,Y^T - A(I_p\otimes dX)Z^T\big) \\
&= 2RY:dX - 2A^TRZ:(I_p\otimes dX) \\
&= 2RY:dX - 2B:(I_p\otimes dX) \\
&= 2(RY-S):dX \\
\grad{f}{X} &= 2(RY-S) \\\\
}$$
The following identity was utilized above
$$\eqalign{
S:dX &= \sum_{k=1}^p\;
\left(e_k^T\otimes I_m\right)B\left(e_k\otimes I_n\right):dX \\
&= \sum_{k=1}^p\;
B:\left(e_k\otimes I_m\right)(I_\o\otimes dX)\left(e_k^T\otimes I_n\right) \\
&= \sum_{k=1}^p\;B:\big(e_ke_k^T\big)\otimes\big(dX\big) \\
&= B:I_p\otimes dX \\
}$$ |
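As a sanity check, here is a small NumPy experiment (dimensions chosen arbitrarily) comparing the gradient $2(RY-S)$ against a central finite-difference approximation; note that $S=\sum_k M_k^TBN_k$ is just the sum of the $p$ diagonal $m\times n$ blocks of $B$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p, q = 3, 2, 2, 4
X = rng.standard_normal((m, n))
Y = rng.standard_normal((q, n))
A = rng.standard_normal((m, m * p))
Z = rng.standard_normal((q, n * p))

def f(W):
    R = W @ Y.T - A @ np.kron(np.eye(p), W) @ Z.T
    return float(np.sum(R * R))

R = X @ Y.T - A @ np.kron(np.eye(p), X) @ Z.T
B = A.T @ R @ Z                                            # mp x np
S = sum(B[k*m:(k+1)*m, k*n:(k+1)*n] for k in range(p))     # = sum_k M_k^T B N_k
G = 2 * (R @ Y - S)                                        # claimed gradient

# central finite differences
eps = 1e-6
Gnum = np.zeros_like(X)
for i in range(m):
    for j in range(n):
        E = np.zeros_like(X)
        E[i, j] = eps
        Gnum[i, j] = (f(X + E) - f(X - E)) / (2 * eps)

assert np.allclose(G, Gnum, rtol=1e-5, atol=1e-6)
```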
an integral from nonrelativistic quantum mechanics | One has
$$\int_{\mathbb{R}^{n}}\exp\Big[-\frac{1}{2}x^{T}Ax+J^{T}x\Big]d^{(n)}x=\sqrt{\frac{(2\pi)^{n}}{\det[A]}}\exp\Big[\frac{1}{2}J^{T}A^{-1}J\Big]$$
In your case $n=3$, $A=\frac{it}{m}I$, where $I$ is the identity, and $J=ir$. The determinant of $A$ is $\det[A]=(\frac{it}{m})^{3}=-i\frac{t^{3}}{m^{3}}$. The inverse of $A$ is $A^{-1}=-i\frac{m}{t}I$. |
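For the real, positive-definite case the formula is easy to check numerically; here is a one-dimensional sanity check (the parameter values below are arbitrary):

```python
import math

# n = 1, real a > 0:  ∫ exp(-a x^2/2 + J x) dx = sqrt(2π/a) exp(J^2/(2a))
a, J = 1.7, 0.9
dx = 1e-3
numeric = sum(math.exp(-0.5 * a * x * x + J * x) * dx
              for x in (i * dx for i in range(-40_000, 40_001)))
exact = math.sqrt(2 * math.pi / a) * math.exp(J * J / (2 * a))
assert abs(numeric - exact) / exact < 1e-6
```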
Context free grammar question | HINT: I’m assuming that the problem is to create context-free grammars that generate these languages. You need basically the same idea for both, so I’ll just deal with the first one. The idea is to use a non-terminal symbol to pump out $a$’s on the left and $c$’s on the right, one at a time; a production like $S\to aSc$ does this. At some point you then let $S\to b$, and you’re left with $a^nbc^n$ for some $n\ge 0$. Of course you want to rule out $n=0$, so instead of collapsing $S$ to $b$ at the end, you collapse it to ... what? |
zero's for odd and $\pm 1$ for even. Or $\pm 1$ for odd and zero's for even | Adjust the frequency and amplitude of a sine or cosine function.
$$\cos(\pi n/2)$$ |
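A quick check of both patterns ($\cos(\pi n/2)$ for zeros at odd $n$, and $\sin(\pi n/2)$ for the other case):

```python
import math

cos_seq = [round(math.cos(math.pi * n / 2)) for n in range(8)]
sin_seq = [round(math.sin(math.pi * n / 2)) for n in range(8)]
assert cos_seq == [1, 0, -1, 0, 1, 0, -1, 0]   # zeros for odd n, ±1 for even n
assert sin_seq == [0, 1, 0, -1, 0, 1, 0, -1]   # ±1 for odd n, zeros for even n
```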
Show that $4^{3x+1} + 2^{3x+1}+ 1$ is divisible by 7 | Hint:
$$2^{3x+1} = 8^x\times 2$$
$$8 \equiv 1\pmod{7} \implies 8^x \equiv 1 \pmod{7}\implies 8^x \times 2\equiv 2\pmod{7}.$$ |
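The hint is easy to verify exhaustively for small $x$ (together with $4^{3x+1}=64^x\times 4$ and $64\equiv 1\pmod 7$, it gives $4+2+1\equiv 0\pmod 7$):

```python
for x in range(20):
    assert (8**x * 2) % 7 == 2                         # the hint
    assert (64**x * 4) % 7 == 4                        # same idea for the first term
    assert (4**(3*x + 1) + 2**(3*x + 1) + 1) % 7 == 0  # the claim
```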
What is the fundamental group of 4 lines with endpoints identified? | collapse one of the lines -> get a bouquet of three circles. |
Finding a machine number such that $fl(x)=x(1+\delta)$. | Your floating point numbers are consistent with IEEE single precision. Here the unit roundoff is $u=2^{-24}$. Now consider real numbers $x \in [1,2)$. The spacing between the floating numbers in this interval is $2u$, i.e., the numbers are $$1, 1+2u, 1+4u, \dotsc, 2-2u.$$ It follows that the error is bounded by $u$. This bound is achieved at the midpoints between successive floating point numbers, i.e., at $$1+u, 1+3u, 1+5u, \dotsc, 2-u.$$ The largest relative error occurs at the left endpoint, i.e., at the real number $x=1+u$. The floating point representation is either $\text{fl}(x)=1$ or $\text{fl}(x)=1+2u$ depending on the rounding mode. In either case, the relative error $\delta = (x -\text{fl}(x))/x$ satisfies $$|\delta| = u/x = u/(1+u) < u.$$ We conclude that $|\delta|$ cannot achieve the value $u$ for any $x \in [1,2)$.
The extension to general $x$ in the representational range is straightforward. |
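This can be demonstrated in code; the sketch below rounds a double to IEEE single precision via `struct` (assuming round-to-nearest-even, the default rounding mode):

```python
import struct

def fl(x):
    """Round a Python float (double) to IEEE single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

u = 2.0 ** -24               # unit roundoff in single precision
x = 1.0 + u                  # midpoint between the neighbours 1 and 1 + 2u
assert fl(x) == 1.0          # ties-to-even rounds down here
delta = (x - fl(x)) / x
assert delta == u / (1.0 + u)
assert delta < u             # |delta| never reaches u on [1, 2)
```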
What is a good way to calculate $E|\frac{1}{n}\sum X_{i}|$? | The random variable $\frac{1}{n}\sum X_i$ has normal distribution, mean $0$, variance $\frac{1}{n}$. Call it $W$.
Now a calculation much like the one you did for the second problem will give $E(|W|)$.
The density function of $W$ is not hard to write down. One can then exploit symmetry as you did in the second problem. The integration is of the same kind. |
How to prove that ${1,(x-1),(x-1)^2,\dots}$ is basis of the polynomial vector space? | You can easily prove it without induction, using the properties of $K[X]$ as a ring ($K$ is the base field).
The map $K[X]\longrightarrow K[X],\; X\longmapsto X-1$ defines a $K$-algebra endomorphism, which is by definition a $K$-linear map. This endomorphism is actually an automorphism since it has an inverse endomorphism: the map $X\longmapsto X+1$.
Thus, the map $X\longmapsto X-1$ is a $K$-vector space isomorphism, and as such, it maps a basis onto a basis. The image of the standard basis $\{1, X, X^2,\dots\}$ is precisely the set $\{1, X-1, (X-1)^2,\dots\}$.
Edit:
To answer your question in the below comment, no, it wouldn't be enough to show it has the same cardinality as the standard basis: such an argument is valid only for finite cardinalities.
Counterexample: the set $\{1,X^2,X^4,\dots,X^{2n},\dots\}$ has the same cardinality as the standard basis, yet it spans the set of polynomials with terms of even degree $K[X^2]$, so you can't obtain $X$, for instance. |
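In each fixed degree the change of basis can also be exhibited explicitly: expanding $(x-1)^k$ by the binomial theorem gives a unitriangular (hence invertible) matrix, whose inverse comes from expanding $x^k=((x-1)+1)^k$. A small sketch for degree $<6$:

```python
from math import comb

N = 6   # truncate to degree < N
# M[j][k] = coefficient of x^j in (x-1)^k  (binomial theorem)
M = [[comb(k, j) * (-1) ** (k - j) if j <= k else 0 for k in range(N)]
     for j in range(N)]
# Minv[j][k] = coefficient of (x-1)^j in x^k = ((x-1)+1)^k
Minv = [[comb(k, j) if j <= k else 0 for k in range(N)] for j in range(N)]

prod = [[sum(M[i][l] * Minv[l][j] for l in range(N)) for j in range(N)]
        for i in range(N)]
assert prod == [[int(i == j) for j in range(N)] for i in range(N)]
# unitriangular: every diagonal entry is 1, so the matrix is invertible
assert all(M[i][i] == 1 for i in range(N))
```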
Universal binary operation and finite fields (ring) | I'm not sure I get the question right, I understand you are asking if it is true that you can express any boolean operation using only one gate. If this is your question, the answer is yes.
Take the NAND, for example (represented in boolean algebra by the Sheffer stroke |). It can replace any unary or binary gate.
We already know that anything can be expressed with AND and NOT.
If we can express AND and NOT with NAND, then we can express anything with NAND.
Reminder, NAND can be understood in English as "At most one", which means it's true except if both p and q are true:
p q p|q
-------------
0 0 1
0 1 1
1 0 1
1 1 0
Let's prove that NOT (¬) can be expressed with NAND (|):
p ¬p p|p
---------------------
0 1 1 (0|0=1)
1 0 0 (1|1=0)
NOT can be expressed with NAND: ¬p = p|p
Let's now prove that AND (^) can be expressed with NAND(|). p^q = ¬(p|q) and we already know how to express NOT with NAND:
p q p^q p|q (p|q)|(p|q)
----------------------------------
0 0 0 1 0 (1|1=0)
0 1 0 1 0 (1|1=0)
1 0 0 1 0 (1|1=0)
1 1 1 0 1 (0|0=1)
AND can be expressed with NAND: p^q = (p|q)|(p|q)
For your information, the OR gate can be expressed (p|p)|(q|q), I'm sure you can prove it for yourself. |
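All three reductions can be checked by brute force over the four truth assignments:

```python
nand = lambda p, q: not (p and q)

for p in (False, True):
    assert nand(p, p) == (not p)                          # NOT:  ¬p = p|p
    for q in (False, True):
        assert nand(nand(p, q), nand(p, q)) == (p and q)  # AND:  p^q = (p|q)|(p|q)
        assert nand(nand(p, p), nand(q, q)) == (p or q)   # OR:   p∨q = (p|p)|(q|q)
```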
sum of power of prime factor given find number | The smallest $x$ with $f(x)=y$ will be $2^y$. This is clear since $f(pn)=f(2n)$ for any prime $p$: replacing a prime factor $p$ by $2$ leaves $f$ unchanged while never increasing the number. |
Differentiability of CDF at 0 | One could say it's "differentiable at $0$ within the domain $[0,\infty)$". Within that space, $0$ can be approached only from the right, so "$\lim\limits_{x\to0}$" could only mean it's approaching $0$ from the right. Without something like that interpretation, the information you have been given is, as you have noted, contradictory. That is probably what the authors intended, and probably they should have said so. |
The motivating properites of beta distribution and how its density function is developed? | One way in which the Beta distribution arises is as the distribution of the order statistics of a random sample from the uniform distribution. Suppose $X_1,\ldots, X_n$ are i.i.d. and uniformly distributed in $[0,1]$. Let $X_{(1)} < \cdots < X_{(n)}$ be the corresponding order statistics, i.e. the observations sorted into increasing order. Then $X_{(k)} \sim\operatorname{Beta}(k,n-k+1).$
Another way in which the Beta distribution arises is as the arrival times in a Poisson process, conditioned on the number of arrivals in a specified time interval. For example, suppose it is given that there were exactly $n$ arrivals between times $0$ and $T$. Let $S$ be the time of the $k$th arrival. Then $S/T \sim\operatorname{Beta}(k,n-k+1).$
Suppose $R\sim\operatorname{Uniform}(0,1)$ and $X_1,\ldots,X_n\mid R \sim \operatorname{i.i.d.} \operatorname{Bernoulli}(R),$ i.e. $$\begin{cases} \Pr(X_1=1\mid R) = R, \\ \Pr(X_1=0\mid R) = 1-R. \end{cases}$$ Then $$ R \mid (X_1+\cdots +X_n = k) \sim \operatorname{Beta}(k+1,n-k+1).$$
Suppose $X$ has the Gamma distribution $\displaystyle \frac 1 {\Gamma(\alpha)} (x/\mu)^{\alpha-1} e^{-(x/\mu)} (dx/\mu)$ and $Y$ is independent of $X$ and has the Gamma distribution $\displaystyle \frac 1 {\Gamma(\beta)} (x/\mu)^{\beta-1} e^{-(x/\mu)} (dx/\mu).$ Then $X/(X+Y)$ is independent of $X+Y$ and $X/(X+Y) \sim\operatorname{Beta}(\alpha,\beta).$ |
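The first characterization is easy to test by simulation; a Monte Carlo sketch (using the fact that $\operatorname{Beta}(k,n-k+1)$ has mean $k/(n+1)$):

```python
import random

random.seed(0)
n, k, N = 5, 2, 100_000
# k-th order statistic of n i.i.d. Uniform(0,1) samples, N times
vals = [sorted(random.random() for _ in range(n))[k - 1] for _ in range(N)]
mean = sum(vals) / N
assert abs(mean - k / (n + 1)) < 0.005   # Beta(2, 4) has mean 2/6
```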
Trigonometry: finding the length of an arc. | Between two spokes there is only one interval.
So the angle is
$\dfrac{2\pi}{16}$
radians.
Between $n$ spokes there are $n-1$ intervals.
So the angle for this case is
$\dfrac{2(n-1)\pi}{16}=\dfrac{(n-1)\pi}{8}$
radians. |
Find two unknowns that have a certain ratio. | What you have is a system of linear equations. Several methods for solving these are described in this section of that article.
In the present case, the most direct solution would be to substitute $L$ from the first equation into the second one:
$$(2S + 20) + S + 60 = 1160$$
$$3S + 80 = 1160$$
$$3S=1080$$
$$S=360\;.$$
Then plugging that into the first equation yields
$$L=2S + 20 = 2\cdot360+20= 720 + 20=740\;.$$ |
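The solution can be verified directly against both equations (assuming, as above, $L=2S+20$ and $L+S+60=1160$):

```python
S = 360
L = 2 * S + 20
assert 3 * S + 80 == 1160    # the substituted equation
assert L + S + 60 == 1160
assert L == 740
```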
understanding the reducibility axiom | The Axiom of reducibility was an axiom specific of W&R's system developed into Pincipia Mathematica : the so-called Ramified Type Theory.
You can see : Bernard Linsky, Was the Axiom of Reducibility a Principle of Logic?.
See : Alfred North Whitehead & Bertrand Russell, Principia Mathematica to *56 (2nd ed - 1927), page 56 :
The axiom of reducibility is the assumption that, given any function $\phi \hat x$, there is a formally equivalent predicative function, i.e. there is a predicative function [$\psi !$] which is true when $\phi x$ is true and false when $\phi x$ is false. In symbols, the axiom is:
$$(\exists \psi)(\forall x)(\phi x \equiv \psi ! x)$$
where [page 53] :
We will define a function of one variable as predicative when it is of the
next order above that of its argument, i.e. of the lowest order compatible with
its having that argument. |
Are line integral function of a function differentiable? | If the first partial derivatives exist and are equal to $f_k$, and each $f_k$ is continuous (by assumption), then $\phi$ is of class $C^1$, hence (by a well-known basic theorem) differentiable. |
Finding the tension in two ropes. | Note that $30^\circ$ is the complement of $60^\circ$
$$u = \frac{1830kg\cdot\cos(30^\circ)}{\cos(45^\circ)} = 2241.283kg$$ |
Limits Properties of exponential function | HINT:
For real $x\ge0$, we have
$$\sum_{n=0}^{\infty}\frac{x^n}{n!}\ge 1+x$$
In addition, we have
$$\frac{E(h)-1}{h}=1+\frac12h+O(h^2)$$
Alternatively, we have for $h<1$
$$1+h \le E(h)\le \frac{1}{1-h}$$
since $\frac{1}{1-h}=\sum_{n=0}^{\infty}h^n$. Therefore,
$$1\le \frac{E(h)-1}{h}\le \frac{1}{1-h}$$
Then, use the Squeeze Theorem. |
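The squeeze is visible numerically; since $E(h)\le \frac{1}{1-h}$ gives $\frac{E(h)-1}{h}\le\frac{1}{1-h}$, both bounds tend to $1$ (a sketch taking $E(h)=e^h$):

```python
import math

for h in [0.5, 0.1, 0.01, 0.001]:
    r = (math.exp(h) - 1) / h
    assert 1 <= r <= 1 / (1 - h)      # squeezed between the two bounds
# both bounds -> 1 as h -> 0, so the ratio -> 1
assert abs((math.exp(1e-6) - 1) / 1e-6 - 1) < 1e-5
```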
Prove the collection of isolated points of a set in $\mathbb{R}^n$ is countable. | Your argument is fine, and you don’t need the axiom of choice.
If $\beta$ is countably infinite, by definition there is a bijection from $\Bbb N$ to $\beta$, so we can enumerate $\beta=\{B_k:k\in\Bbb N\}$, and for each isolated point $x$ of $S$ we can let
$$k(x)=\min\big\{k\in\Bbb N:B_k\cap S=\{x\}\big\}\,.$$
Your $U_x$ is then $B_{k(x)}$, and $k(x)$ is a well-defined natural number, not an arbitrary choice. |
Solve $ \log_3x\log_4x(\log_5x-1)$=$\log_5x(\log_4x+\log_3x)$ | The trick, as N.S.JOHN already commented, is to use a common base.
Let us make the problem even more complex, looking for the solutions of $$\log_a(x)\log_b(x)(\log_c(x)-d)-\log_e(x)(\log_f(x)+\log_g(x))=0$$ and apply to each term $$\log_a(x)=\frac{\log (x)}{\log (a)}$$ After simplification, this becomes $$\log ^2(x) \left(\frac{\log (x)-d \log (c)}{\log (a) \log (b) \log (c)}-\frac{\frac{1}{\log (f)}+\frac{1}{\log (g)}}{\log (e)}\right)=0$$ So, the first solution is $$\log(x)=0\implies x=1$$ and the second one (from the term in parentheses) is given by $$\log(x)=\log (c) \left(\frac{\log (a) \log (b) (\log (f)+\log (g))}{\log (e) \log (f) \log (g)}+d\right)$$ so that $x$ is the exponential of this quantity. |
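For the original problem, $a=3$, $b=4$, $c=5$, $d=1$, $e=5$, $f=4$, $g=3$, and the formula collapses to $\log(x)=\log 3+\log 4+\log 5=\log 60$; a numeric check of both solutions:

```python
import math

L = lambda b, x: math.log(x) / math.log(b)   # log base b
lg = math.log

# the general formula with a,b,c,d,e,f,g = 3,4,5,1,5,4,3
logx = lg(5) * (lg(3) * lg(4) * (lg(4) + lg(3)) / (lg(5) * lg(4) * lg(3)) + 1)
assert abs(math.exp(logx) - 60) < 1e-9

# both x = 1 and x = 60 satisfy the original equation
for x in (1.0, 60.0):
    lhs = L(3, x) * L(4, x) * (L(5, x) - 1)
    rhs = L(5, x) * (L(4, x) + L(3, x))
    assert abs(lhs - rhs) < 1e-9
```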
Solve $z^5 = 32$ (include complex solutions) | The complex solutions to your problem are $2e^{\dfrac{2ik\pi}{5}}$ where $k=0,1,2,3,4$ |
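A quick check with `cmath`:

```python
import cmath

roots = [2 * cmath.exp(2j * cmath.pi * k / 5) for k in range(5)]
assert all(abs(z**5 - 32) < 1e-9 for z in roots)          # each is a fifth root of 32
assert len({round(z.real, 9) + 1j * round(z.imag, 9) for z in roots}) == 5  # distinct
```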
Group objects in $\mathbf{Ab}$ and $\mathbf{Grp}$ | The inverse map $\iota$ is now not just a function, but a morphism in the category $\mathbf{Grp}$.
We already know that for elements $g, h$ of $G$ we have $(g\cdot h)^{-1} = h^{-1}\cdot g^{-1}$. Because of your previous exercise (Eckmann-Hilton), this carries over to the group object: $\iota(g\circ h) = \iota(h)\circ \iota(g)$.
Now because $\iota$ is a group morphism, we also have $\iota(g\cdot h) = \iota(g)\cdot \iota(h)$. This additional property establishes that $\iota(g)\circ \iota(h) = \iota(gh) = \iota(h)\circ \iota(g)$, which proves commutativity.
Your proof was entirely correct; you just didn't use the fact that $\iota$ is a group morphism, which means that (because the multiplication structure is the same) the group object, and hence the original group, must be commutative. (Noncommutative groups don't have corresponding group objects because their inversion operator fails to be a homomorphism of groups.) |
Square of an increasing function over an interval. | I am converting my comments into an answer and comments will be deleted shortly.
For the given question the correct answer is (d) (you have already rejected (a), (b) via a correct argument and one of the comments to the question rejects (c)). So either there is a typo in question or in official key. Such printing mistakes are reasonable and in any case not a punishable offence :) :)
If on the other hand the question is about a function $f:I\to\mathbb{R} ^{+} $ which is increasing then it should be obvious that the correct answer is (a). This is because if $f$ is increasing then for any $x, y\in I$ with $x<y$ we have $f(x) \leq f(y) $ and since $f(x), f(y) $ are positive it follows that $f^{2}(x)\leq f^{2}(y)$ and thus $f^2$ is increasing. One can easily prove that the option (c) also holds.
Moreover one should notice that the idea of increasing/decreasing functions does not necessarily require the functions concerned to be differentiable. Therefore it is not necessary to invoke any arguments based on derivatives to solve this problem.
Equality between 2 summations unclear | As you are integrating $x^2$ with respect to $x$ from $x=0$ to $x=a$, you divide the area into $n$ strips of width $\Delta x=\frac an$ unit each. Your $x_{i-1}$ is consequently $x_{i-1}=(i-1)\Delta x =\frac{a}{n}(i-1)$, so you have $(x_{i-1})^2=\left(\frac{a}{n}(i-1)\right)^2$. Simplifying, you have
$$(x_{i - 1})^2\Delta x = \frac{a^3}{n^3}(i - 1)^2 $$ |
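Summing these terms and letting $n\to\infty$ recovers $\int_0^a x^2\,dx=a^3/3$; a quick numeric sketch:

```python
a, n = 2.0, 100_000
dx = a / n
riemann = sum(((i - 1) * dx) ** 2 * dx for i in range(1, n + 1))  # sum of (x_{i-1})^2 Δx
assert abs(riemann - a**3 / 3) < 1e-3
```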
The sum of consecutive integers is $50$. How many integers are there? | If your $n$ is odd, then the middle number has to be $50/n$. The odd divisors of $50$ are $1$ and $5$, which gives us two solutions $50=50$ and $8+9+10+11+12=50$
If $n$ is even, then $50/n$ is the half-integer between the middle two numbers. So $n$ has to be an even divisor of $100$, but not a divisor of $50$, so $n=4$ or $20$.
If $n=4$, then $50/4 = 12.5$ and we get $11+12+13+14=50.$
If $n=20$. then $50/20 = 2.5$ and we get $-7+-6+-5+\cdots +11+12 = 50.$
So there are 4 answers: $n=1, 4, 5, $ and $20$.
Edit: As Bill points out, I missed the divisors $25$ and $100$, which give two more answers: $50 = -10+(-9)+\cdots+14$ and $50 = -49+(-48)+\cdots+50$.
Note that each solution with negative integers is related to an all-positive solution. From the solution $11+12+13+14=50$, we just prepend the terms $-10, -9, \ldots, 10$, which add to $0$, and we have another solution. |
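Enumerating runs of consecutive integers $a, a+1, \dots, a+n-1$ with sum $50$ confirms exactly these six values of $n$ (from $na + n(n-1)/2 = 50$, i.e. $a=(100-n(n-1))/(2n)$):

```python
solutions = []
for n in range(1, 200):
    t = 100 - n * (n - 1)
    if t % (2 * n) == 0:          # a must be an integer
        a = t // (2 * n)
        assert sum(range(a, a + n)) == 50
        solutions.append(n)
assert solutions == [1, 4, 5, 20, 25, 100]
```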
Division with dividend less than divisor | The division you give seems to be integer (Euclidean) division.
If $b > a$ it turns into
$$
a = 0 \cdot b + a
$$
Thus having the result $q=a/b = 0$ and rest $r=a \bmod b = a$. |
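This matches Python's integer division, for instance:

```python
a, b = 5, 9            # dividend smaller than divisor
q, r = divmod(a, b)
assert (q, r) == (0, 5)
assert a == q * b + r and 0 <= r < b
```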
Derivative of double integral with respect to symmetric limits | By the Fundamental Theorem of Calculus, the derivative is
$$\int_{-x}^x f(y,x)\,dy-(-1)\int_{-(-x)}^{-x} f(y,-x)\,dy.$$
If we make the change of variable $t=-x$ in the second integral, we get
$$\int_{-t}^{t} (-1)f(y,t)\,dy.$$
This cancels the first integral. |
Find a parameterization of the paraboloid $900z = 25x^2 + 36y^2$, $z \le 16$. | Let's divide your equation by $900=25\cdot 36$ You get
$$z=\frac{x^2}{6^2}+\frac{y^2}{5^2}$$
If we do the transformation $x/6=X$, and $y/5=Y$, this looks very much like the equation of a circle $X^2+Y^2=R^2$. In this case $R^2=z$. The parametric equation of the circle is then $X=R\cos\theta$ and $Y=R\sin\theta$. So you have $x=6R\cos\theta$, $y=5R\sin\theta$, $z=R^2$, where $\theta$ varies between $0$ and $2\pi$. Since $z$ is between $0$ and $16$, it means $R$ is between $0$ and $4$. If you rename your variables from $R$ to $v$, and from $\theta$ to $u$, you get your result |
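A quick check that the parameterization lands on the surface (with $u\in[0,2\pi]$, $v\in[0,4]$):

```python
import math, random

random.seed(0)
for _ in range(1000):
    u = random.uniform(0, 2 * math.pi)
    v = random.uniform(0, 4)
    x, y, z = 6 * v * math.cos(u), 5 * v * math.sin(u), v * v
    assert abs(900 * z - (25 * x**2 + 36 * y**2)) < 1e-8   # on the paraboloid
    assert z <= 16                                          # within the cap
```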
Can the formula for finding the slope of a line be reversed and still be right? | More generally,
the line through
$(x_1, y_1)$
and
$(x_2, y_2)$
has slope
$\frac{y_2-y_1}{x_2-x_1}
$.
Swapping the points,
the line through
$(x_2, y_2)$
and
$(x_1, y_1)$
has slope
$\frac{y_1-y_2}{x_1-x_2}
$.
But
$\frac{y_2-y_1}{x_2-x_1}
=\frac{y_1-y_2}{x_1-x_2}
$,
so the two lines
have the same slope. |
Necessary and sufficient condition for two polynomials to generate coprime ideals | If the principal ideals generated by $p,q$ are coprime, then their highest degree coefficients are coprime, since if $g$ is the greatest common divisor of the highest degree coefficients of $p,q$ then $g$ also divides the highest degree coefficient of every element of $(p)+(q)$. Suppose $\deg p \le \deg q=n$.
Lemma.
The highest degree coefficients of $p,q$ are coprime if and only if there is a monic polynomial of the form $ap+bq$ with degree at most $n$.
Proof.
Multiplying $p$ by $x^k$ for some $k$, we can make the degree of $x^kp$ equal to the degree of $q$. Then the degree $n$ coefficients of $x^kp,q$ are coprime iff there is an integer linear combination making them 1, iff there is an integer linear combination of $x^kp,q$ that is monic.
$\blacksquare$
Here is an algorithm:
Check that the highest degree coefficients of $p,q$ are coprime. If not, $(p)+(q)\ne (1)$, so stop.
Otherwise, there is a monic polynomial of degree $n$ of the form $ap+bq$, divide $p$ by $ap+bq$ and replace $p$ by the remainder.
Repeat.
This produces a sequence of generators of the ideal $(p)+(q)$ of decreasing degree. Either it terminates in some round of step 1 showing that the polynomials do not generate coprime ideals, or we get a monic polynomial of degree 0 in $(p)+(q)$, i.e. $(p)+(q)=(1)$.
In your example, $2x+3,4$ do not have highest degree coefficients coprime, so $(2x+3)+(4)\ne (1)$ as you said. |
Finite groups acting on torus | There is indeed a bounded number of generators, and it follows from an explicit description of all possible such groups $G$. The idea is that you can make $G$ act by isometries with respect to a Euclidean metric on $T^2$, and then you can lift $G$ via a universal covering map $f : \mathbb R^2 \to T^2$ to obtain a Euclidean crystallographic group $\widetilde G$. These groups are known up to isomorphism: they are the 17 wallpaper groups. Since this is just a finite set of finitely generated groups, one obtains a bound for the number of generators of $\widetilde G$, and since $G$ is just a quotient group of $\widetilde G$ one obtains the same bound for the number of generators of $G$. If you walk through the list of 17, you'll see that each is generated by at most 4 elements.
Notice one thing that this proof does not say: there are not finitely many isomorphism types of the groups $G$, because the kernel of the quotient homomorphism $\widetilde G \to G$ can have arbitrarily large index. What makes this proof work is that the infinite set of possible $G$'s is just a set of quotients of the finite set of groups $\widetilde G$.
Latin Squares in combinatorics | The $(2,3)$ entry of a Latin square with $1,2,3,4$ on the diagonal cannot be $2$ or $3$, so it can only be $1$ or $4$. This shows there are at most two mutually orthogonal Latin squares with $1,2,3,4$ on the diagonal. You can just fill in the two possibilities and complete the squares.
$$1342\quad\quad 1423\\4213\quad\quad 3241\\2431\quad\quad 4132\\3124\quad\quad 2314$$
Now check that they are in fact orthogonal. I glanced and didn't see a problem. This shows that there are exactly two MOLS with $1,2,3,4$ on the diagonal. |
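Rather than glancing, the check can be automated:

```python
A = ["1342", "4213", "2431", "3124"]
B = ["1423", "3241", "4132", "2314"]

def is_latin(M):
    rows = all(set(r) == set("1234") for r in M)
    cols = all({M[i][j] for i in range(4)} == set("1234") for j in range(4))
    return rows and cols

assert is_latin(A) and is_latin(B)
assert all(A[i][i] == B[i][i] == "1234"[i] for i in range(4))   # common diagonal
pairs = {(A[i][j], B[i][j]) for i in range(4) for j in range(4)}
assert len(pairs) == 16                                          # orthogonality
```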
Does a "cubic" matrix exist? | If we're working with three-dimensional vectors, a matrix is a $3\times 3$ array of 9 numbers. If I'm understanding your question right, you're asking whether there is something like a $3\times 3\times 3$ array of 27 numbers with interesting properties.
Yes, there is such a thing; it is called a tensor. Tensors are a generalization of both vectors and matrices:
A number is a "rank-0 tensor".
A vector is a "rank-1 tensor"; it contains $D$ numbers when we're working in $D$ dimensions.
A matrix is a "rank-2 tensor", containing $D\times D$ numbers.
Your "cubic" thing is a "rank-3 tensor", containing $D\times D\times D$ numbers.
... and so forth.
One use for a rank-3 tensor is if you want to express a function that takes two vectors and produces a third vector, with the property that if you keep any one of the arguments constant, the output is a linear function of the other input. (That is, a bilinear mapping from two vectors to one). One familiar example of such a function is the cross product. In order to completely specify such a thing you need 27 numbers, namely the 3 coordinates of each of $f(e_1,e_1)$, $f(e_1,e_2)$, $f(e_1,e_3)$, $f(e_2,e_1)$, etc. Using linearity to the left and right, this is enough to determine the output for any two input vectors.
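For instance, the cross product is the contraction of the rank-3 Levi-Civita tensor $\varepsilon_{ijk}$ with two vectors, $(u\times v)_i=\varepsilon_{ijk}u_jv_k$; a NumPy sketch:

```python
import numpy as np

# rank-3 Levi-Civita tensor: +1 on even permutations, -1 on odd ones
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

def bilinear(u, v):
    # contract the tensor with two vectors to produce a third vector
    return np.einsum('ijk,j,k->i', eps, u, v)

rng = np.random.default_rng(0)
u, v = rng.standard_normal(3), rng.standard_normal(3)
assert np.allclose(bilinear(u, v), np.cross(u, v))
```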
I haven't heard of any generalization of determinants to higher-rank tensors, but I cannot offhand think of a principled reason why one couldn't exist.
The study of tensors belongs in the field of multilinear algebra. It's quite possible to get at least an undergraduate degree in mathematics without ever hearing about them. If you take physics, you'll see lots and lots of them, though. |
How exactly is the squeeze theorem used in this example? | We have $$-1 \leq \cos(x) \leq 1, \quad \text{ and } \quad -1 \leq \sin\left(\frac{1}{x}\right) \leq 1$$ for every $x \in \mathbb{R},x \neq 0$. Now divide the first equation by $x>0$ and multiply the second by $|x|$. It is maybe not 100% systematic but it is very common to use such bounds on $\sin, \cos$ and then use the squeezing theorem (and it's worth it to try). |
Determinant of $xP+yQ$ is resultant of $P$ and $Q$ | I'll start with an example using the GAP computer algebra system:
gap> P := (x+1)*y^2+(2*x-1)*y+(x-3);;
gap> Q := (2*x+1)*y+(x-4);;
gap> M := sylvester(P,Q);
[ [ x+1, 2*x-1, x-3 ], [ 2*x+1, x-4, 0 ], [ 0, 2*x+1, x-4 ] ]
gap> p := determinant(M);
x^3+x^2-2*x+9
On the other hand :
gap> GroebnerBasis([P,Q], MonomialLexOrdering([y,x]));
[ x*y^2+2*x*y+y^2+x-y-3, 2*x*y+x+y-4, -y^2-1/2*x-1/2*y, -1/2*x^2-1/4*x-9/4*y, 1/4*x^3+1/4*x^2-1/2*x+9/4 ]
This seems to point out that Buchberger's algorithm in this special case with a basis of only two polynomials modifies the Sylvester matrix in such a way that at each step a one appears on the diagonal preceded by zeroes, except in the last step where the univariate determinant pops up. Since at each step only sums of terms of the form $Py^i$ and $Qy^j$ appear, the rows represent polynomials of the ideal, in this case the last row.
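The determinant itself is easy to recompute without a CAS, representing each entry of the Sylvester matrix as an ascending coefficient list in $x$ and expanding along the first row:

```python
def pmul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def padd(*ps):
    n = max(len(p) for p in ps)
    return [sum(p[i] for p in ps if i < len(p)) for i in range(n)]

def pneg(p):
    return [-a for a in p]

# Sylvester matrix of P, Q with respect to y; entries are polynomials in x
M = [[[1, 1], [-1, 2], [-3, 1]],    # x+1, 2x-1, x-3
     [[1, 2], [-4, 1], [0]],        # 2x+1, x-4, 0
     [[0],    [1, 2],  [-4, 1]]]    # 0,    2x+1, x-4

def minor(a, b, c, d):              # 2x2 determinant ad - bc
    return padd(pmul(a, d), pneg(pmul(b, c)))

det = padd(pmul(M[0][0], minor(M[1][1], M[1][2], M[2][1], M[2][2])),
           pneg(pmul(M[0][1], minor(M[1][0], M[1][2], M[2][0], M[2][2]))),
           pmul(M[0][2], minor(M[1][0], M[1][1], M[2][0], M[2][1])))

assert det == [9, -2, 1, 1]         # x^3 + x^2 - 2x + 9, matching GAP
```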
Combinatorial approach to a binomial sum identity? | Suppose you have $n + 1$ positions and $a + b + 1$ markers. Put one marker in the $(k+1)^{\text{st}}$ position and place $a$ markers to the left and $b$ markers to the right (with no two markers occupying the same position). How many ways are there of doing this? |
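The counting in the hint suggests the identity $\sum_{k}\binom{k}{a}\binom{n-k}{b}=\binom{n+1}{a+b+1}$ (assuming that is the sum in question); it checks out numerically:

```python
from math import comb

for n in range(1, 15):
    for a in range(n):
        for b in range(n - a):
            lhs = sum(comb(k, a) * comb(n - k, b) for k in range(n + 1))
            assert lhs == comb(n + 1, a + b + 1)
```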
Examples for completions of number fields | For each prime ideal $P\subset O_K$, lying over the rational prime $p$, there is a $P$-adic completion $$K_v= \operatorname{Frac}(\varprojlim O_K/P^n)$$ where $v$ is the discrete valuation $v(a)= n$ if $a\in P^n,\ a\not\in P^{n+1}$.
$K_v$ is the field of limits of sequences of elements of $K$ that converge for the absolute value $|a|_v=p^{-v(a)}$ and $\varprojlim O_K/P^n$ mostly means the same.
By the primitive element theorem $K=\Bbb{Q}[x]/(f(x))$, and then $K_v \cong \Bbb{Q}_p[x]/(f_j(x))$ where $f_j$ is one of the $\Bbb{Q}_p$-irreducible factors of $f$.
For a Galois extension $\{ \sigma \in \operatorname{Gal}(K/\Bbb{Q}), \sigma(P)=P\}=\operatorname{Gal}(K_v/\Bbb{Q}_p)$.
Try with $K=\Bbb{Q}(i)$ and $p=2,3,5$ to see how it works. |
Integration by substitution - question about finding an expression for du or dx | You just need to remember the rule for figuring out what $d(f(t))$ means. The rule is $d(f(t)) = f'(t)dt$.
So starting with $u^2=x^2+1$, we have $$u^2=x^2+1 \\ \implies d(u^2)=d(x^2+1) \\ \frac{d(u^2)}{du}du=\frac{d(x^2+1)}{dx}dx \\ 2udu=2xdx$$ |
Bounding the exponential of a stochastic integral | Since $$\mathbb{E}(Y) = \int_{(0,\infty)} \mathbb{P}(|Y| \geq r) \, dr$$ for any non-negative random variable $Y$, we have
\begin{align} \mathbb{E}\exp(|X|) = \int_{(0,\infty)} \mathbb{P}( \exp(|X|) \geq r) \, dr &= \int_{(0,\infty)} \mathbb{P}(|X| \geq \ln(r)) \, dr \\ &= \int_{(-\infty,\infty)} \exp(u) \mathbb{P}(|X| \geq u) \, du \tag{1} \end{align} for any random variable $X$. Using this estimate for $X:=M_t := \sup_{s \leq t} \int_0^t f(s) \, dW_s$ it follows from the probability estimate which you gave at the end of your question that \begin{align*} \mathbb{E}\exp(|M_t|) &\stackrel{(1)}{=} \int_{\mathbb{R}} \exp(u) \mathbb{P}(|M_t| \geq u) \, du \\ &\leq 2 \int_{\mathbb{R}} \exp(u) \mathbb{P}(|W_{C^2 t}| \geq u) \, du \\ &\stackrel{(1)}{=} 2 \mathbb{E}\exp(|W_{C^2 t}|).\end{align*} As $W_{C^2 t}$ is Gaussian with mean 0 and variance $C^2 t$, we obtain that \begin{align*} \mathbb{E}\exp(|M_t|) \leq 2 \mathbb{E}\exp(W_{C^2 t}) + 2 \mathbb{E}\exp(-W_{C^2 t}) &= 4 \mathbb{E}\exp(W_{C^2 t}) \\ &= 4 \exp \left( \frac{C^2 t}{2} \right). \end{align*} |
Show that $f \in L^{1}(X)$ if and only if $\sum_{n=1}^{\infty} n \mu(E_{n}) < \infty$. | Well, the proof is exactly the same as for the other statement (in fact, the sums in question are equal). I'm assuming, by the way, that you meant $E_n=[n-1,n)$.
Basically, $f=\sum_{n=1}^{\infty} 1_{E_n} f,$ and clearly $1_{E_n} f$ lies between $n-1$ and $n$ when it's not equal to $0$. Hence, $1_{E_n}f\leq 1_{E_n}n$ and $1_{E_n}f \geq 1_{E_n} (n-1),$ and we get, by Monotone Convergence, that
$$
\int f\textrm{d}\mu=\sum_{n=1}^{\infty} \int f 1_{E_n}\textrm{d}\mu\leq \sum_{n=1}^{\infty} n \mu(E_n)
$$
and similarly,
$$
\int f\textrm{d}\mu\geq \sum_{n=1}^{\infty} (n-1)\mu(E_n)\geq \left(\sum_{n=1}^{\infty}n \mu(E_n)\right) -\mu(X),
$$
where we've used that $\mu(X)<\infty$ and that the $E_n$ are disjoint.
The statement follows. The general $L^p$ argument is similar. |
Find a Hermitian Matrix | Given two vectors $x\ne0$ and $y$, can we find a Hermitian matrix $H$ such that $y=Hx$?
We may assume that $x$ is a unit vector. Then $y=Hx$ if and only if $U^\ast y=(U^\ast HU)U^\ast x$ for any unitary matrix $U$. Let $U$ be a unitary matrix such that its first column is $x$, so that $U^\ast x=(1,0,\ldots,0)^T$. Now the rest is trivial. |
Prove $Z_{20}/\langle 5\rangle \cong Z_{5}$ | Hint
Take $\varphi :\mathbb Z_{20}\to \mathbb Z_5$ defined by $$\varphi (x+20\mathbb Z)=x+5\mathbb Z.$$
Surjectivity is clear. Compute the kernel, and conclude using the first isomorphism theorem. |
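The kernel and the coset count can be checked by brute force:

```python
kernel = {x for x in range(20) if x % 5 == 0}             # ker(φ) = <5> in Z_20
assert kernel == {0, 5, 10, 15}
cosets = {frozenset((x + k) % 20 for k in kernel) for x in range(20)}
assert len(cosets) == 5                                    # so Z_20/<5> has 5 elements
```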
Projection onto space in $L^2([0,1])$ gives shorter length | The answer is no.
That may be overkill, but we can use the fact that $||f||_p \to ||f||_\infty$ as $p\to \infty$ and $||f||_p \leq ||f||_q$ if $p \leq q$. If we can find $g\in L^2(0,1)$ such that $||Pg||_\infty > ||g||_\infty$, then for $p$ big enough, we have $||Pg||_p > ||g||_p$.
I tried $g = \chi_{[0,1/2]}$, $h= g+ 2\chi_{[23/24, 1]}$ and project $g$ onto $h$:
$$Pg= \frac{\langle g, h\rangle}{||h||^2_2}h = \frac{1/2}{2/3}\,h = \frac34 h$$
so that $||Pg||_\infty = 3/2> ||g||_\infty = 1.$ |
Proof $x+a \in J$ if $J := \mathbb{R}\setminus\mathbb{Q}$ and $x \in J$, $a \in \mathbb{Q}$ | Suppose, for contradiction, that $a+x\in\mathbb{Q}$, so that $x+\frac{m}{n}=\frac{p}{q}$, where $\frac{m}{n}$ is the representation of $a$ as a rational number and $\frac{p}{q}$ is the representation of the result. Then we get $x=\frac{p}{q}-\frac{m}{n}\in\mathbb{Q}$, which is a contradiction.
The exact same argument can be made for multiplication. |
How to compute the divergence of a measured vector field? | You can use the divergence theorem to approximate the divergence and prevent noise from ruining your approximation.
We have $$\int_C \mathbf{F} \cdot d\mathbf{S} = \int_A (\text{div} \mathbf{F}) dA$$ where the left hand side integral is over the boundary $C$ of any sufficiently nice set $A$ and the right hand side integral is over the set $A$.
Now consider a small area $A$ surrounding the point $p$ and assume that you know $\mathbf{F}$ at some points $q_i$ of $C$. You can then approximate the left integral using a weighted sum of the $\mathbf{F}(q_i)$. The integral on the right is approximately $ A\, \text{div} \mathbf{F}(p)$.
Your graph suggests that you know $\mathbf{F}$ on a uniform grid with square cells. For each cell with corners $a_i$ you can find a new cell such that $a_i$ marks the middle of the $i$th edge of the new cell and the outward normal is well defined at $a_i$. You need to rotate 45 degrees to get the new cell. Use the new cell to compute an approximation for the divergence at the center of the new cell. |
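On a uniform grid the simplest variant of this idea reduces to central differences, e.g. with NumPy's `gradient` (a sketch on a synthetic noise-free field; your measured data would replace `Fx`, `Fy`):

```python
import numpy as np

xs = np.linspace(-1.0, 1.0, 41)
ys = np.linspace(-1.0, 1.0, 41)
X, Y = np.meshgrid(xs, ys, indexing='ij')
Fx, Fy = X + Y, X * Y          # sample field with div F = 1 + x
div = np.gradient(Fx, xs, axis=0) + np.gradient(Fy, ys, axis=1)
assert np.allclose(div, 1 + X)  # exact here, since the field is (bi)linear
```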
(locally Holder) + (locally Lipschitz) $\Longrightarrow$ Continuity? | Yes, the inequality you wrote trivially implies continuity.
A tiny effort is required to obtain the inequality you wrote from the assumptions "locally Hölder continuous in $x$ and locally Lipschitz continuous in $y$", since each of these assumptions refers to the behavior of $f$ when one of its variables is being held constant. This tiny effort amounts to using the triangle inequality. |
convergence of expectation sum of infinite random variables | It holds if $E[|\sum X_i|] < \infty$ or $E[\sum |X_i|] < \infty$.
Consider $X_1, X_2, ...$ in $(\Omega, \mathscr F, \mathbb P) = ([0,1], \mathscr B([0,1]), \lambda)$ where
$$X_n = 2^n 1_{A_n} - 2^{n} 1_{B_n} + 0\cdot 1_{A_n^C \cap B_n^C}$$ where $\lambda(A_n) = \frac{1}{2^{n+1}} = \lambda(B_n)$ and the sets $A_1, B_1, A_2, B_2, \dots$ are pairwise disjoint.
We have:
$$\sum_{n=1}^{\infty} X_n < \infty \ \lambda-a.s.$$
$$\sum_{n=1}^{\infty} E[X_n] = \sum_{n=1}^{\infty} 0 = 0$$
But we cannot compute
$$E [\sum_{n=1}^{\infty} X_n ]$$
because we have
$$ E [| \sum_{n=1}^{\infty} X_n |]$$
$$= E [|\sum_{n=1}^{\infty} (2^n 1_{A_n} - 2^{n} 1_{B_n})|] $$
$$= E [\sum_{n=1}^{\infty} |2^n| 1_{A_n} + |-2^{n}| 1_{B_n}] $$
$$= E [\sum_{n=1}^{\infty} (2^n 1_{A_n} + 2^{n} 1_{B_n})] $$
Note that $(2^n 1_{A_n} + 2^{n} 1_{B_n}) \ge 0$. Hence:
$$= \sum_{n=1}^{\infty} E [(2^n 1_{A_n} + 2^{n} 1_{B_n})] $$
$$= \infty$$
I think the above is independent of whether or not the random variables are independent. |
Equation from a given sequence | This question (with different numbers) just keeps popping up every now and then. Long story short, it is possible to produce a formula, and you just did that. The general problem (to infer an equation from a sequence), however, can't be solved, and not because there is no way, but because there are too many ways. There are many equations for any given set of numbers, and all those equations would exactly reproduce your numbers, and then would diverge and follow utterly different paths.
Also, there is nothing wrong with a complex-looking equation producing smooth results.
$A_n$ normal subgroup of $S_n$ | This is correct. However, you could use a more general result. $\mathfrak A_n$ is the kernel of the signature and the kernel of a group homomorphism is always a normal subgroup. |
Uniform convergence problem of sequences of functions | You are absolutely correct that the problem is at $x=0.$ However, you're asked to establish uniform convergence on $[a,\infty)$ for $a>0$, so the problem at zero isn't a problem on this interval. Note that for any $a>0,$ $f_n(a)\to 1$ so it's not the case that points close to $x=0$ have $f(x)\to 0$; this only happens at $x=0$ itself. The points closer to zero will go more slowly, though.
As a reflection of this, notice from your plots that convergence to $1$ improves as $x$ gets larger. This actually helps you see that there's uniform convergence. The $N$ that works for $x=a$ will also work for $x>a.$ Now, you just need to convert this into a rigorous proof.
And then to show that it doesn't converge uniformly on $[0,\infty),$ just use the problem you noticed at $x=0.$ The function that you converge to pointwise isn't continuous, and this always means that the convergence isn't uniform (if each $f_n$ is continuous). |
Fourth Order Nonlinear ODE | Generally speaking, if your ODE is amenable to finding a solution using series, then the radius of convergence (ROC) of the series tells you there's a singularity at a distance of the ROC from the initial time. A singularity could be a pole -- a "blow up" -- or it could be an essential singularity, such as increasingly wild oscillation on shorter and shorter time scales. If the ROC is infinite, there are no singularities, and we expect a solution for all time.
For a series solution, we guess at the form of the answer, substitute it into the ODE, and equate coefficients of like order.
For this ODE, we might guess $$w(t) = t^{-2} \sum_0^\infty a_n t^n$$ because the indicial-like equation gives $r=-2$. (I say "indicial-like" because this equation is not linear.) The integer root (non-fractional) means using a Laurent series. OTOH, the finite initial conditions mean two negative-order Laurent coefficients are zero, and we have a plain old Taylor series. Plug and play from there: substitute this form in the ODE and ICs, and equate like terms.
For the ICs you specified, you'll wind up with most of the terms being $0$, and a form like $$w(t) = t^3 \sum_0^\infty b_n t^{10n}$$. From there the algebra gets hairy because of the cube term. The Cauchy product means the recurrence relation among coefficients involves terms of all order. I'm not sure it's possible to find a closed-form expression for the coefficients in terms of $n$. But if you could find such an expression, you could use the ratio test or root test to find the ROC of the series. The first few terms are $$w(t) = \frac{1}{6}t^3 - \frac{1}{3706560}t^{13} + \frac{1}{9452617574400}t^{23} - \frac{1}{21722750401872199680}t^{33} + \frac{661}{37638496964414476201623552000}t^{43} - \frac{405469}{66922504728527550129991809682636800000}t^{53} + \ldots$$
I wasn't able to find such an expression myself, but I believe $b_n$ is asymptotically $Cn\lambda^n$. I did some experimenting on the first few hundred or so coefficients, and they fit that pattern very closely. A fit gives $\lambda \approx \exp(-15.068)$. Since the series is in powers of $t^{10}$, that would mean the radius of convergence of the series is $\lambda^{-1/10}=\exp(1.5068)\approx 4.512$.
I wouldn't be surprised if there's a trick to this problem, an elegant way of transforming it to show the nature of the growth. I also suspect the equation is integrable, but I could only find an exact form for the third derivative.
BTW, just curious: where did this problem come from? |
Decide whether the map $f:\mathbb Z\to\mathbb Z_{10}$ given by $f(n)=[n]$ is injective or surjective. Prove both. | $\mathbb Z_{10}$ is the integers modulo $10$. To construct $\mathbb Z_{10}$, start with the integers $\mathbb Z$, and define a relation on them by $a\,R\,b$ if $a-b$ is a multiple of $10$. This relation turns out to be an equivalence relation, and it has exactly ten equivalence classes: $[0],\,[1],\,[2],\,...,[9].$ We can do arithmetic in $\mathbb Z_{10}$ like this: $[a]+[b]=[a+b]$, and $[a]\times[b]=[a\times b]$. For example, $[6]+[8]=[14]=[4]$ and $[6]\times[8]=[48]=[8]$. Note that multiplication has some different properties in $\mathbb Z_{10}$ than in $\mathbb Z$. For example, if $a$ and $b$ are integers, then if $ab=0$, then either $a$ or $b$ are zero. But if $a=2$ and $b=5$, then $[a]\times [b]=[2]\times [5]=[10]=[0]$.So we can multiply nonzero things and get zero.
Now to answer your question. The map $f$ takes the integer $n$ to its equivalence class. It is not injective. Can you think of two integers that have the same equivalence class modulo $10$? The function $f$ is surjective. This is not hard to see. For each $[a]$, what integer maps to $[a]$ under $f$? |
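If it helps to experiment, here is a tiny sketch representing the class $[n]$ by `n % 10`:

```python
def f(n):
    # The map Z -> Z_10, with the class [n] represented by n mod 10.
    return n % 10

# Not injective: 3 and 13 lie in the same class.
assert f(3) == f(13) and 3 != 13

# Surjective: for each class [a], the integer a itself maps to it.
assert {f(n) for n in range(-20, 20)} == set(range(10))

# Zero divisors: [2] * [5] = [0] although neither factor is [0].
assert f(2 * 5) == 0 and f(2) != 0 and f(5) != 0
```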
Prove that $L=\big\{\langle G\rangle \mid G\text{ is a CFG over }\Sigma=\{0,1\}\text{ and }1^* \cap L(G)\ne\varnothing\big\}$ is decidable. | Hint. The intersection of a context-free language with a regular language is context-free. |
Need some help understanding the condition of the implicit function theorem | If all the partial derivatives are zero at a point and the function is smooth then the partial derivatives must be zero in some open set near the point. It follow the function is locally constant at such a degenerate point. In this case, we cannot solve for one variable in terms of another.
In the case the function is continuous but not differentiable we face all the usual problems we did in the single-variate case. For example, if $f(x) = |x|$ then $f'(0)$ does not exist and I also am not able to solve for $x$ as a function of $y$ in an open set near $0$. On the other hand, there are other problems where I could. For example, $f(x) = x$ for $x<0$ and $f(x)=2x$ for $x>0$ we have left and right slopes at $x=0$ which do not match hence $f'(0)$ does not exist. That said, I can solve for $x$ as a function of $y$ in this case: $x = y$ for $y<0$ and $x = y/2$ for $y \geq 0$. So, failure of differentiability is not necessarily tied to failure to find an inverse function (which, goes hand in hand with the implicit function theorem question, these theorems can be used to imply the other, usually a textbook proves one and uses it to get the other as almost a corollary)
In short, the question of what it means when a function is continuous, but not differentiable, is a far more subtle question. There are many possible behaviors and I have only shown the most trivial of cases. On the other hand, all the derivatives, or too many of the derivatives, being zero just means the function is constant (or too constant) to solve for one variable in terms another.
I know I'm being a bit nebulous here, but, when I proved the implicit function theorem there is this step where you have to multiply by the inverse of the part of the Jacobian which has to be full-rank. That step is spoiled when too many of those derivatives are zero. |
Proof: The sum of the n-th complex roots of a Unity is $0$ | Given that$$\omega^n=1\implies \omega^n-1=0\implies(\omega-1)(\omega^{n-1}+\omega^{n-2}+\cdots+\omega+1)=0,$$ you have $$\omega^{n-1}+\omega^{n-2}+\cdots+\omega+1=0$$ for $\omega\ne1.$ |
Suppose that $q=\frac{2^n+1}{3}$ is prime then $q$ is the largest factor of $\binom{2^n}{2}-1$ | Since you already have other answers, I will instead offer a small generalization.
Suppose $p=\frac{k^n +1}{k+1}$ is a prime number. Then, $p$ is the largest prime factor of $\binom{k^n}{2}-1$ if $k=2j^2$ for some integer $j$.
Proof:
We have:
$$\binom{k^n}{2}-1 = \frac{1}{2}k^n(k^n-1)-1 = \frac{1}{2}(k^{2n}-1)-\frac{1}{2}(k^n+1) = \frac{1}{2}(k^n+1)(k^n -2)$$
This can of course be rewritten as $\frac{k+1}{2}p (k^n-2)$
Now suppose $k=2j^2$ for some $j \in \mathbb{N}$. Then, note that since $k \equiv -1 \mod (k+1)$, $n$ must be an odd integer in order for $\frac{k^n +1}{k+1}$ to have integer values. Thus, $\frac{n-1}{2}$ is also an integer. Then:
$$\frac{k+1}{2}p (k^n-2) = (k+1)p (2^{n-1}j^{2n}-1) = (k+1)p(2^{\frac{n-1}{2}}j^{n}-1)(2^{\frac{n-1}{2}}j^{n}+1)$$
And certainly we have that all the other factors are less than $p$, so this is the largest prime factor. (Note: this can be seen because the other 3 factors $\neq p$ in $(k+1)p(2^{\frac{n-1}{2}}j^{n}-1)(2^{\frac{n-1}{2}}j^{n}+1)$ must all be less than $p$, regardless of whether or not they are prime, implying $p$ has to be the largest prime factor).
EDIT: At the request of the person who asked the original question, I will provide a proof of the following (a quick counterexample to the contrary statement is $n=81$).
Suppose $k$ arbitrary. Then it is always possible to find $n$ such that $3^k$ divides $\binom{2^n}{2}-1$. In particular, $n = 3^{k-2}$ always works.
Proof:
This proof will use the Lifting the Exponent (LTE) Lemma. I suggest looking up the statement if you want to fully understand the proof.
Let $k$ be given. Then we can rewrite $\binom{2^n}{2}-1 = (2^n+1)(2^{n-1}-1)$. In order for either one of these factors to be divisible by $3$, we must have that $n$ is an odd number (since $(-1)^n \equiv -1 \mod 3$). Then $n-1 = 2j$, an even integer. We can then rewrite the above:
$$\binom{2^n}{2}-1 = (2^n+1)(2^{n-1}-1) = (2^n+1)(4^{j}-1)$$
We can now apply the LTE Lemma. Define a valuation $v$ as follows: $v_p (x) = y$ means that $y$ is the largest exponent such that $p^y$ divides $x$, where $p$ is prime. Then the LTE Lemma says:
$$v_3((2^n+1)(4^{j}-1)) = v_3(2^n+1) + v_3(4^{j}-1) = v_3(3) + v_3(n)+v_3(3)+v_3(j)$$
Note that $v_3(3) = 1$ of course. Then, note $j = (n-1)/2$, we have:
$$v_3((2^n+1)(4^{j}-1)) = 2 + v_3 \Big( \frac{n(n-1)}{2} \Big)$$
Now, in the above, note that at most one of $n$ and $n-1$ is divisible by $3$. Also, since a valuation is logarithmic, the $2$ in the denominator goes away since $v_3(2) = 0$. Thus we have the following:
$$v_3((2^n+1)(4^{j}-1)) = 2+v_3(n)+v_3(n-1)$$
In particular, when $n$ is a power of $3$ we have $v_3(n-1)=0$, so the valuation is exactly $2+v_3(n)$.
Then we are done. Suppose $k$ is given. Then firstly, we have that $3^2 = 9$ will always divide $\binom{2^n}{2}-1$. Now suppose we want $3^k$ to divide it. By the above, we only need to choose $n = 3^{k-2}$ (or any multiple of this).
Actually ... you can generalize this to the $\binom{k^n}{2}-1$ case, but don't worry I will spare you the details. |
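The divisibility count can be checked directly in code; for odd $n$ the exact valuation is $2+v_3(n)+v_3(n-1)$, and the $v_3(n-1)$ term vanishes for the choice $n = 3^{k-2}$. A quick check with Python's exact integers:

```python
from math import comb

def v3(x):
    # 3-adic valuation of a positive integer
    v = 0
    while x % 3 == 0:
        x //= 3
        v += 1
    return v

# For odd n, v_3(C(2^n, 2) - 1) = 2 + v_3(n) + v_3(n-1).
for n in [3, 5, 7, 9, 11, 27]:
    assert v3(comb(2 ** n, 2) - 1) == 2 + v3(n) + v3(n - 1)

# In particular, n = 3^(k-2) makes 3^k a divisor.
for k in [3, 4, 5]:
    n = 3 ** (k - 2)
    assert (comb(2 ** n, 2) - 1) % 3 ** k == 0
```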
Suppose a in an integer, and p is prime. Prove that if $gcd(a,p)>1$, then $p$ divides $a$ | HINT: The positive divisors of $p$ are $1$ and $p$. As $1$ is ruled out we have that $(a,p) = p$.
And yes your proof seems to be alright. |
Extension of Luzin N-property of absolutely continuous functions and measure preserving mappings | So far, I found the answer to 4., The definition of absolute continuity without disjointness of intervals is satisfied only by Lipschitz functions
whereby it follows that 4. is satisfied provided $g$ is Lipschitz function;
And I also found the answer to 3.,
Absolutely Continuous (is the collection of subintervals supposed to be finite or infinite)
whereby it follows that 3. is equivalent to choosing finitely many non-overlapping intervals, so that absolutely continuous functions satisfy 3. |
Can different tetrations have the same value? | No, that is not possible, not even if we allow one or more of $a,b,c,d$ to be $2$.
Suppose it was possible, assume without loss of generality that $a>c$, and consider the prime factorizations
$$ a = p_1^{\alpha_1} p_2^{\alpha_2} \cdots p_k^{\alpha_k} \\
c = p_1^{\gamma_1} p_2^{\gamma_2} \cdots p_k^{\gamma_k} $$
Because $a^{a\uparrow\uparrow (b-1)}=c^{c \uparrow\uparrow(d-1)}$ we must have that $\frac{\alpha_i}{\gamma_i}=\frac{c\uparrow\uparrow(d-1)}{a\uparrow\uparrow (b-1)} =: Q$ for every $i$.
The net exponent of $p_i$ in $\frac{c\uparrow\uparrow(d-1)}{a\uparrow\uparrow (b-1)}$ is $$\gamma_i(c\uparrow\uparrow(d-2))-\alpha_i(a\uparrow\uparrow(b-2))=\gamma_i\Bigl((c\uparrow\uparrow(d-2))-Q(a\uparrow\uparrow(b-2))\Bigr)$$
which has the same sign for every $i$, and since $a>c$, $Q$ must be an integer and $a=c^Q$.
But then $Q=\frac{c\uparrow\uparrow(d-1)}{a\uparrow\uparrow(b-1)}$ becomes
$$ Q = c^{c\uparrow\uparrow(d-2)-Q(a\uparrow\uparrow(b-2))}$$
Setting $R=\log_c Q$, this is
$$ R = c\uparrow\uparrow(d-2)-c^R(a\uparrow\uparrow(b-2)) $$
Now, $c\uparrow\uparrow(d-2)$ and $a\uparrow\uparrow(b-2)$ are both non-negative powers of $c$, so we have
$$ R = c^{\text{something}} - c^{R+\text{something}} $$
which must be positive (otherwise $Q=1$ and $a=c$) and is therefore a multiple of $c^{R+\text{something}}$. But for $R$ to be a multiple of $c^R$ when $c\ge 2$ is a contradiction! |
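The result can also be spot-checked by brute force for small towers (a sketch; heights are capped at 3 so the integers stay of manageable size):

```python
def tetr(a, b):
    # a ↑↑ b: a tower of b copies of a
    r = a
    for _ in range(b - 1):
        r = a ** r
    return r

seen = {}
for a in range(2, 7):
    for b in range(2, 4):
        v = tetr(a, b)
        assert v not in seen, f"collision: {(a, b)} and {seen[v]}"
        seen[v] = (a, b)
```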
Question about transcendental equation | You xan work this problem making $f(x)$ explicit.
$$f(x) = 1-e^{-2xf(x)}\implies f(x)=1+\frac 1{2x}W\left(-2x\, e^{-2 x}\right)$$ where appears Lambert function.
For more legibility, I shall use $t=-2x\, e^{-2 x}$
From this
$$f(x)=1+\frac{W(t)}{2x}$$
$$f'(x)=W(t)\frac{ x t'-t (1+W(t))}{2 x^2 t(1+W(t))}$$
$$f''(x)=W(t)\frac {-x^2 W(t) (W(t)+2) t'^2+x t (W(t)+1)^2 \left(x t''-2 t'\right)+2 t^2 (W(t)+1)^3 }{2 x^3 t^2 (1+W(t))^3 }$$
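Independently of the Lambert-function machinery, the defining relation $f(x)=1-e^{-2xf(x)}$ can be sanity-checked numerically with a plain fixed-point iteration (a sketch; for $x>\tfrac12$ it converges to the nontrivial solution):

```python
import math

def f(x, tol=1e-14):
    # Solve y = 1 - exp(-2 x y) for the nontrivial fixed point (x > 1/2).
    y = 1.0
    for _ in range(500):
        y_new = 1.0 - math.exp(-2.0 * x * y)
        if abs(y_new - y) < tol:
            break
        y = y_new
    return y

for x in [0.8, 1.0, 2.0, 5.0]:
    y = f(x)
    assert abs(y - (1.0 - math.exp(-2.0 * x * y))) < 1e-12
```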
Structural Inductions | The base for the induction is the tree that's just a single vertex. That tree has one leaf and no internal vertices, so the statement is true for that tree.
Now let $T$ be a tree that's not just a single vertex, and make the induction hypothesis that the statement is true for all trees smaller than $T$. We get $T$ from trees $T_1$ and $T_2$, and we are assuming the statement is true for them. How many leaves does $T$ have? The leaves of $T$ consist of the leaves of $T_1$, together with the leaves of $T_2$, so $$\ell(T)=\ell(T_1)+\ell(T_2)\tag1$$ How many internal vertices does $T$ have? The internal vertices of $T$ are the internal vertices of $T_1$, the internal vertices of $T_2$, and the root of $T$, so $$i(T)=i(T_1)+i(T_2)+1\tag2$$ By the induction hypothesis, $$\ell(T_1)=i(T_1)+1{\rm\ and\ }\ell(T_2)=i(T_2)+1\tag3$$ Put (3) into (1) to get $$\ell(T)=i(T_1)+i(T_2)+2\tag4$$ Now comparing (2) and (4), $\ell(T)=i(T)+1$, as we were to prove.
Let me know if there's anything that needs explaining (after you've tried to work through it on your own). |
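If you want to convince yourself empirically first, here is a small sketch that generates random full binary trees (a single leaf, or a root with two subtrees, matching the recursive construction above) and checks $\ell(T)=i(T)+1$:

```python
import random

random.seed(1)

def random_tree(n):
    # A full binary tree with exactly n internal vertices.
    if n == 0:
        return "leaf"
    k = random.randrange(n)                  # split remaining internal vertices
    return (random_tree(k), random_tree(n - 1 - k))

def leaves(t):
    return 1 if t == "leaf" else leaves(t[0]) + leaves(t[1])

def internal(t):
    return 0 if t == "leaf" else 1 + internal(t[0]) + internal(t[1])

for n in range(20):
    t = random_tree(n)
    assert leaves(t) == internal(t) + 1      # l(T) = i(T) + 1
```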
Question about proofs using the converse | This is a pretty confusing question to ask early in one’s logical education! What’s going on is that, if we want to think about the statement “$p$ implies $q$” and we can prove that the converse “$q$ implies $p$” is false, then in fact we know that $q$ is true and $p$ is false, which means $p$ does imply $q$. That is, symbolically, $$\neg(q\to p)\to (p\to q).$$
But this last implication is not an equivalence! That is, knowing $p\to q$ is true doesn’t mean $q\to p$ is false, or equivalently, knowing $q\to p$ is true doesn’t mean $p\to q$ is false.
Thus a statement is to some extent independent of its converse, but not perfectly so, as it’s impossible for both a statement and its converse to be false. This comes somehow from the asymmetry of the truth table for implication, which has three T’s out of four. |
Using conservation of mass to derive inviscid Burgers' equation | with
$$v=\frac{b}{a}h\to v_t=\frac{b}{a}h_t\to h_t=\frac{a}{b}v_t$$
$$(hv)_x=\left(\frac{a}{b}v^2\right)_x=\frac{a}{b}2v v_x$$
thus
$$h_t + (hv)_x =\frac{a}{b}v_t+\frac{a}{b}2v v_x=r$$
$$v_t+2v v_x=r\frac{b}{a}$$
with $u=2\dfrac{b}{a}h=2v$ write
$$\frac{u_t}{2}+2\frac{u}{2}\frac{u_x}{2}=r\frac{b}{a}$$
so
$$u_t +uu_x =f$$
where $f=2\dfrac{b}{a}r$. |
Every tree has two leaves. Is my proof ok? | As F.Webber indicated, it suffices to select the endpoints of a maximal path, since such a path can neither be extended nor loop back on itself.
Alternatively, you can prove this by induction. Base Case: The $2$-vertex tree has two leaves. Induction: Every tree with at least $3$ vertices can be constructed by attaching a pendent edge to a smaller graph; the additional edge adds a new leaf and covers at most one leaf. |
Uniform continuity on bounded open intervals | Extension of uniformly continuous functions is discussed in detail in $\S$ 10.11 of these notes. |
If $T\subset\mathbb{R}$ is bounded and $S \subset T$, then $\sup S \leq \sup T$ and $\inf T \leq\inf S$ | We have $\inf T \le t \le \sup T$ for all $t \in T$.
Since $S \subset T$, if $s \in S$, then we have $\inf T \le s \le \sup T$. It follows that $\inf T \le \inf S$, and $\sup S \le \sup T$. |
What is the weak formulation of this problem? | I think that your first approach is correct.
You have to keep the weighted Neumann jumps
$[a \frac{\partial u}{\partial n}]$
on the internal edges.
Usually, these jumps are explicitly specified
in the strong formulation.
I don't understand the formulation that you
provide in your second edit. If $a$ is a function,
why is it outside the integral? And where is
the test function $v$ in the boundary part? |
Regularity of metric under smooth map. | The pullback metric is $$g_{ij} = \sum_k\frac{\partial f^k}{\partial x^i}\frac{\partial f^k}{\partial x^j}.$$ Since $f$ is $C^{k;\alpha}$ we know its partial derivatives are $C^{k-1;\alpha}$. Since Hoelder spaces are closed under addition and multiplication, we can conclude that $g$ is $C^{k-1;\alpha}$. |
Combinatorics in ML: Counting Points with co-ordinates from among a set. | An $n$-tuple can be interpreted as a function $f : \{1,\dots,n\} \to X$. Namely, $(f(1),f(2),\dots,f(n))$. What you are asking is to count all such functions whose image has $i$ elements. The image is a subset of $X$ and there are $\binom{M}{i}$ subsets with $i$ elements. Then you want a surjective map onto this set.
Surjective functions $g : A \to B$ where $|A| = n$ and $|B| = k$ are counted by Stirling numbers. (See Twelvefold way for details.) Namely there are
$$ k!\left\{ \begin{array}{c} n \\ k \end{array} \right\} = \sum_{j=0}^k (-1)^{k - j} \binom{k}{j}j^n $$
surjective functions $A \to B$. You can also obtain this from inclusion-exclusion.
Therefore
$$ |S_i| = i!\binom{M}{i} \left\{ \begin{array}{c} n \\ i \end{array} \right\} = \binom{M}{i} \sum_{j = 0}^i (-1)^{i - j} \binom{i}{j}j^n. $$ |
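For small parameters both the surjection count and $|S_i|$ are easy to confirm against brute-force enumeration; here is a sketch with $M=3$, $n=4$:

```python
from itertools import product
from math import comb

def surjections(n, k):
    # Inclusion-exclusion count of surjective maps {1..n} -> {1..k}.
    return sum((-1) ** (k - j) * comb(k, j) * j ** n for j in range(k + 1))

def S_i(M, n, i):
    return comb(M, i) * surjections(n, i)

M, n = 3, 4
for i in range(1, M + 1):
    # n-tuples over an M-element set whose set of coordinates has i elements
    brute = sum(1 for t in product(range(M), repeat=n) if len(set(t)) == i)
    assert brute == S_i(M, n, i)

assert sum(S_i(M, n, i) for i in range(1, M + 1)) == M ** n   # partition check
```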
What do I do with these equations to create a Jacobian matrix? | The Jacobian is the just the matrix of partial derivatives. You can compute it row-by-row. For example the first equation is:
$$f_1(T,D,C) = \lambda -\mu T - \beta T C,$$
which has partial derivatives,
\begin{align}
\frac{\partial f_1}{\partial T} &= -\mu -\beta C \\
\frac{\partial f_1}{\partial D} &= 0 \\
\frac{\partial f_1}{\partial C} &= -\beta T.
\end{align}
This gives us the first row of the Jacobian matrix:
$$J(T,D,C) = \begin{bmatrix}
-\mu -\beta C & 0 & -\beta T \\
&\text{second row of Jacobian} \\
&\text{third row...}
\end{bmatrix}$$.
It is a matrix that depends on the parameters $T,D,C$. So you can plug in the specific values $(T,D,C) = (T_0, 0, 0)$ to get:
$$J(T_0,0,0) = \begin{bmatrix}
-\mu -\beta \cdot 0 & 0 & -\beta T_0 \\
&\dots \\
&\dots
\end{bmatrix}$$.
Hopefully you can do the rest of the problem, seeing how it is done for the first row.
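A useful habit when forming Jacobians by hand is to check each row against finite differences. A minimal sketch for the first row (the numeric parameter values here are arbitrary placeholders, not taken from the model):

```python
lam, mu, beta = 1.0, 0.1, 0.5        # placeholder parameter values

def f1(T, D, C):
    return lam - mu * T - beta * T * C

def fd_partial(g, args, i, h=1e-6):
    # One-sided finite-difference approximation of dg/d(args[i]).
    a = list(args)
    a[i] += h
    return (g(*a) - g(*args)) / h

T, D, C = 2.0, 1.0, 3.0
analytic_row = [-mu - beta * C, 0.0, -beta * T]
numeric_row = [fd_partial(f1, (T, D, C), i) for i in range(3)]
assert all(abs(a - b) < 1e-4 for a, b in zip(analytic_row, numeric_row))
```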
Jacobians are a very interesting concept - if you have time you should consider looking more into them to understand the meaning, which can be obscured if you focus too much on computation. Here's a picture I made showing what the Jacobian matrix means geometrically for a system with two equations and two variables:
Characteristic $n$ and local rings | (b) Since $a\mathbb{Z}$ and $b\mathbb{Z}$ are coprime, the same is true for $aA$ and $bA$. The Chinese Remainder Theorem implies that $A \cong A/aA \times A/bA$. Now prove that $\mathrm{char}(A/aA)=a$ (resp. for $b$).
(a) follows from (b) since a local ring has only trivial idempotents. |
Regular Expression even length at least aaaa-substring | It doesn’t quite work: you don’t get $baaaab$, for instance. Your expression covers all of the words $xa^4y$ such that $|x|$ and $|y|$ are even, but it misses the words $xa^4y$ with $|x|$ and $|y|$ both odd.
Suppose that $xaaaay$ is in the language, and $|x|$ and $|y|$ are odd. If $x=x'a$, then $xaaaay=x'\color{red}{aaaa}ay$ has a block of $aaaa$ between strings of even length, so it’s generated by your expression. The same is true if $y=ay'$. Thus, the only problematic case is when $x=x'b$ and $y=by'$, in which case $xaaaay=x'baaaaby'$ has the string $baaaab$ between two strings of even length.
I don’t know whether you consider it a simplification, but you can rewrite
$$(aa+ab+ba+bb)^*$$
as $\big((a+b)(a+b)\big)^*$. Call this expression $\sigma$. Then we’ve just shown that
$$\sigma(aaaa+baaaab)\sigma$$
covers all of the desired language. |
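For short words this claim can be verified exhaustively with a small script, comparing the expression against the definition "even length and contains $aaaa$":

```python
import re
from itertools import product

# sigma (aaaa + baaaab) sigma, with sigma = ((a+b)(a+b))*
pattern = re.compile(r'((a|b)(a|b))*(aaaa|baaaab)((a|b)(a|b))*')

def in_language(w):
    return len(w) % 2 == 0 and 'aaaa' in w

for length in range(11):
    for letters in product('ab', repeat=length):
        w = ''.join(letters)
        assert bool(pattern.fullmatch(w)) == in_language(w)
```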
Level curve of the function $f(x,y)=\min\{x^2+y^2,xy\}$ | Hint: If $xy \geq 0$ then $x^2+y^2 \geq 2xy \geq xy$. If $xy <0$ then $x^2+y^2>xy$. Therefore, $f(x,y)=xy.$ |
X and Y are independent Poisson random variables, each with rate lambda. | Hint
You should know that if $X\sim \mathcal P(\lambda_1)$ and $Y\sim \mathcal P(\lambda _2)$ are independent, then $X+Y\sim\mathcal P(\lambda _1+\lambda _2)$. If not, prove it! Given this information, the result is straightforward.
natural log of an integral | You can't interchange the $\log$ function with integration.
Observe
\begin{align}
\log\left(\int^\infty_0 e^{-x}\ dx \right) = \log\left(1\right) = 0
\end{align}
but
\begin{align}
\int^\infty_0 \log e^{-x}\ dx=\int^\infty_0 -x\ dx = -\infty.
\end{align} |
Definite Integration explanation | I think that instead of viewing it as "the area under the curve", you should view it as "an infinite sum of infinitesimal values". Using it to find the area under the curve is just one usage. Thinking of it as an infinite sum of infinitesimal values makes the process make more sense.
Let's start with a function:
$$ y = F(x)$$
Now, let's differentiate it without dividing by $dx$:
$$ dy = f(x)\,dx $$
$dy$ is the infinitesimal change in $y$. Now, if our graph is smooth and continuous, then the total change in $y$ between two $x$ values is just going to be the sum of all of the infinitesimal changes between those two places.
So, we can write:
$$ \text{total change in }y = \int_{x = a}^{x = b} dy = \int_{x = a}^{x = b} f(x)\,dx $$
Now, interestingly, the "total change in $y$" is actually the same as $y$, but possibly offset by a constant. That is, if we graphed the "change" in $y$ from a given starting point, it would give us the same graph as $y$ itself, just with a possible vertical offset.
Therefore:
$$ \text{total change in }y = y + C = \int dy = \int f(x)\,dx $$
So, if we just pick two pieces of this puzzle that we are interested in:
$$ y + C = \int f(x)\,dx $$
The big question is, why does the value on the right give the area under the curve of $f(x)$? The answer is that if we break the graph up into rectangles, then each one will be $f(x)$ high, and $dx$ wide. So $f(x)\cdot dx$ is the area of each rectangle, and so $ \int f(x)\,dx $ is simply the sum of all of them. The ones we are interested in are those from $x = a$ to $x = b$.
So why the difference between $F(a)$ and $F(b)$? Well, since $y + c$ is equal to the "total change in $y$", then the difference between two points in $y$ is going to give us that total change. Since we have a formula for $y$ based on points in $a$ and $b$, ($F(x)$), we can use that formula. I.e., since $y = F(x)$ (our original given),
$$ \text{total change in }y\,\Big|_{x = a}^{x = b} = (y + C) \Big|_{x = a}^{x = b} = \int_{x = a}^{x = b} dy = \int_{x = a}^{x = b} f(x)\,dx \\
F(b) - F(a) = \int_{x = a}^{x = b} f(x)\,dx
$$ |
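To see the "sum of rectangles" picture numerically, here is a minimal sketch with $f=\cos$ and antiderivative $F=\sin$: the sum of the rectangle areas $f(x)\,dx$ approaches $F(b)-F(a)$.

```python
import math

def riemann_sum(f, a, b, n=10000):
    # Sum of n rectangle areas f(x) * dx (midpoint rule).
    dx = (b - a) / n
    return sum(f(a + (k + 0.5) * dx) for k in range(n)) * dx

a, b = 0.0, 1.0
approx = riemann_sum(math.cos, a, b)
exact = math.sin(b) - math.sin(a)          # F(b) - F(a)
assert abs(approx - exact) < 1e-6
```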
how is the rank of a skew map is even ? | A skew-symmetric linear transformation on an odd-dimensional space has determinant zero and is therefore not regular. So since $\psi_1 \colon \operatorname{Im}(\psi) \to \operatorname{Im}(\psi)$ is skew-symmetric and regular, the space $\operatorname{Im}(\psi)$ must be even-dimensional. |
convergence of $\sum\frac{a_{n}}{n}$ if $\sum_{k=1}^{n}a_{k}\le M*n^{r}$ where $r<1$ | Hint: Abel's transform, namely, write
$$\sum_{n=1}^N\frac{a_n}n=\sum_{n=1}^N\frac{s_n}n-\sum_{n=1}^N\frac{s_{n-1}}n.$$ |
A random invertible matrix | EDIT.
We consider matrices in $M_n(K)$, where $K$ is a finite field with $q$ elements. We use a uniform probability distribution over the elements of $K$.
We randomly choose an invertible upper triangular matrix $U$ and an invertible lower triangular matrix $L$ and put $A=LU$.
The complexity is
$n(n-1)$ (independent) random choices in the underlying field $K$ and $2n$ (independent) random choices in $K\setminus \{0\}$.
A matricial product of complexity $n^3/3$.
$\textbf{Remarks.}$ i) most invertible matrices in $M_n(K)$ admit LU decomposition (cf. https://arxiv.org/pdf/math/0506382v1.pdf). Of course, we cannot obtain $A=\begin{pmatrix}0&1\\1&0\end{pmatrix}$. To make this kind of trouble unlikely, we have better choose $q$ not too small.
ii) A finite sum of product of random elements of $K$ is uniformly distributed over $K$. Then the $(a_{i,j})$ are uniformly distributed. However, since the diagonals have non-zero elements, there is a little gap concerning the North-West entries of $A$; for example $a_{1,1}\not=0$. So you have to choose $q$ not too small.
iii) Assume that $K$ is infinite, for example $K=\mathbb{Q}$. Then we randomly choose the $(l_{i,j}),(u_{i,j})$ according to ad hoc laws of probability. I don't think that $a_{2,2}$ (a sum of $2$ products) and $a_{n,n}$ (a sum of $n$ products) follow the same distribution law... |
For a certain insurance company, 10% of its policies are Type A; 50% are Type B; and 40% are Type C: | Okay. You have noticed that $X\mid\Lambda\sim\mathcal{Pois}(\Lambda)$ where $\Lambda$ is distributed as given. So you have decided to attempt to apply the Law of Total Expectation . That was a good plan; just not executed correctly.
So indeed, $\mathsf E(X\mid \Lambda)=\Lambda$ and $\mathsf {Var}(X\mid \Lambda)=\Lambda$.
However, $\mathsf E(X^2\mid \Lambda)=\mathsf {Var}(X\mid \Lambda)+\mathsf E(X\mid\Lambda)^2$, that is $\Lambda+\Lambda^2$.
$\begin{split}\mathsf E(X)&=\mathsf E(\mathsf E(X\mid \Lambda))\\ & =\mathsf E(\Lambda)\\ & = 0.1(1)+0.5(2)+0.4(10) & \checkmark\\[2ex]\mathsf E(X^2) & =\mathsf E(\mathsf E(X^2\mid \Lambda))\\ &= \mathsf E(\mathsf {Var}(X\mid \Lambda)+\mathsf E(X\mid \Lambda)^2)\\ &=\mathsf E(\Lambda+\Lambda^2)\\&=\\[2ex]\mathsf {Var}(X) & =\mathsf E(\Lambda+\Lambda^2)-\mathsf E(\Lambda)^2\\ &=\end{split}$ |
Predicate Logic to English | You're right about the first two. Let's look at the last one. Yes, it says something about all women $x$. "All women... something"; let's figure out what something is. First, the part inside the negation:
$$
\forall y (L(x,y) \to (M(y) \wedge H(y)))
$$
In English, there are a few ways to say it: "whoever $x$ loves is a handsome man", or "$x$ only loves handsome men". Negating it, however, gives something harder to express in English without transforming the formula. Let's move the negation past the quantifier and find a more comprehensible way to write it:
$$
\begin{align}
\neg (\forall y (L(x,y) \to (M(y) \wedge H(y)))) &\iff \exists y (L(x,y) \wedge \neg(M(y) \wedge H(y))) \\
&\iff \exists y (L(x,y) \wedge (\neg M(y) \vee \neg H(y)))
\end{align}
$$
This says, "$x$ loves someone ($y$) who either isn't a man or isn't handsome".
So the entire statement says,
Every woman loves someone who either isn't a man or isn't handsome.
You could even drop "either". |
Counting eleven digit integers with the sum of the digits 2 | Per the OP's edit, to remove this from the unanswered question list, the 11th entry is 20,000,000,000. |
derivation of the cubic formula | We start with the substitution $z^3=u$ and obtain a quadratic equation in $u$
\begin{align*}
z^6+fz^3-\frac{e^3}{27}&=0\\
u^2+fu-\frac{e^3}{27}&=0\tag{1}
\end{align*}
We solve the quadratic equation (1) and get
\begin{align*}
u_{0,1}=-\frac{f}{2}\pm\sqrt{\frac{f^2}{4}+\frac{e^3}{27}}\tag{2}
\end{align*}
Since $z^3=u$ we obtain from (2)
\begin{align*}
z_{0,1}=\sqrt[3]{-\frac{f}{2}\pm\sqrt{\frac{f^2}{4}+\frac{e^3}{27}}}
\end{align*}
We are free to select either $z_0$ or $z_1$ and we choose $z_0$. Since $z^3=u$ we know there are three solutions
\begin{align*}
z_0,z_0 e^{\frac{2\pi i}{3}},z_0 e^{\frac{4\pi i}{3}} \tag{3}
\end{align*}
From (3) we can calculate the three solutions for $x$ via
\begin{align*}
y=z-\frac{e}{3z}\qquad\text{ and }\qquad x=y-\frac{b}{3a}
\end{align*}
We finally obtain with $\omega_1=e^{\frac{2\pi i}{3}}$
\begin{align*}
x_0&=z_0-\frac{e}{3z_0}-\frac{b}{3a}\\
x_1&=z_0\omega_1-\frac{e}{3z_0\omega_1}-\frac{b}{3a}\\
x_2&=z_0\omega_1^2-\frac{e}{3z_0\omega_1^2}-\frac{b}{3a}
\end{align*}
[2017-11-05] Add-on: Let's make an example to see the formulas in action.
Find the solutions of
\begin{align*}
\color{blue}{x^3-7x^2+14x-8=0}
\end{align*}
Step 1: We consider $ax^3+bx^2+cx+d=0$ and calculate $x=y-\frac{b}{3a}$.
We set $x=y+\frac{7}{3}$ and obtain
\begin{align*}
y^3-\frac{7}{3}y-\frac{20}{27}=0
\end{align*}
Step 2: We consider $y^3+ey+f=0$ and calculate $y=z-\frac{e}{3z}$.
We set $y=z+\frac{7}{9z}$ which gives
\begin{align*}
729z^6-540 z^3+343=0
\end{align*}
Step 3: We substitute $z^3=u$ and solve the quadratic equation
\begin{align*}
729u^2-540 u+343=0
\end{align*}
We obtain
\begin{align*}
u_{1,2}=\frac{10}{27}\pm\frac{i}{\sqrt{3}}
\end{align*}
Step 4: The cubic roots $z_0,z_0\omega_1,z_0\omega_2$
We select $u_1=\frac{10}{27}+\frac{i}{\sqrt{3}}$ and calculate the three cubic roots:
\begin{align*}
z_0&=\sqrt[3]{\frac{10}{27}+\frac{i}{\sqrt{3}}}=\frac{1}{6}\left(5+i\sqrt{3}\right)\\
z_0\omega_1&=z_0e^{\frac{2\pi i}{3}}=\frac{1}{3}\left(-2+i\sqrt{3}\right)\\
z_0\omega_2&=z_0e^{\frac{4\pi i}{3}}=-\frac{1}{6}\left(1+3i\sqrt{3}\right)\\
\end{align*}
Step 5: The solutions $x_0,x_1,x_2$
Finally we can calculate the solutions from $z_0,z_0\omega_1,z_0\omega_2$ via
\begin{align*}
x=y-\frac{b}{3a}=z-\frac{e}{3z}-\frac{b}{3a}
\end{align*}
We obtain
\begin{align*}
\color{blue}{x_0}&=z_0+\frac{7}{9z_0}+\frac{7}{3}\color{blue}{=4}\\
\color{blue}{x_1}&=z_0\omega_1+\frac{7}{9z_0\omega_1}+\frac{7}{3}\color{blue}{=1}\\
\color{blue}{x_2}&=z_0\omega_2+\frac{7}{9z_0\omega_2}+\frac{7}{3}\color{blue}{=2}\\
\end{align*}
and conclude
\begin{align*}
\color{blue}{x^3-7x^2+14x-8=(x-1)(x-2)(x-4)=0}
\end{align*}
Note: Working with the complex conjugate $u_2=\overline{u_1}=\frac{10}{27}-\frac{i}{\sqrt{3}}$ instead of $u_1$ gives the complex conjugates $\overline{z_0},\overline{z_0}e^{-\frac{2\pi i}{3}}$ and $\overline{z_0}e^{-\frac{4\pi i}{3}}$ and leads (of course) to the same results $x_0=4,x_1=1$ and $x_2=2$.
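As a sanity check, the whole recipe can be replayed in complex floating-point arithmetic (a sketch; `cmath`'s principal branches happen to pick the same $u_1$ and $z_0$ as above):

```python
import cmath

# Depressed cubic for x^3 - 7x^2 + 14x - 8 = 0: y^3 + e*y + f = 0
e, f = -7.0 / 3.0, -20.0 / 27.0

u1 = -f / 2 + cmath.sqrt(f * f / 4 + e ** 3 / 27)   # = 10/27 + i/sqrt(3)
z0 = u1 ** (1.0 / 3.0)                              # principal cube root
w = cmath.exp(2j * cmath.pi / 3)

# x_k = z0*w^k - e/(3 z0 w^k) + 7/3; imaginary parts cancel up to rounding.
roots = sorted((z0 * w ** k - e / (3 * z0 * w ** k) + 7.0 / 3.0).real
               for k in range(3))
assert all(abs(r - s) < 1e-9 for r, s in zip(roots, [1.0, 2.0, 4.0]))
```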
Pell's equation with d congruent to 1 mod 4 | If $p \equiv 1 \pmod 4$ is a (positive) prime, there is always an integer solution to $x^2 - p y^2 = -1.$ The short proof is in Mordell, Diophantine Equations. The same proof gives the same conclusion for primes $p\equiv q \equiv 1 \pmod 4,$ Legendre symbol $(p|q) = -1,$ and $x^2 - pq y^2 = -1.$ I have probably posted both proofs on this site. There is no guarantee if $(p|q) = 1.$ The two smallest such failures are $x^2 - 205 y^2 \neq -1$ and $x^2 - 221 y^2 \neq -1.$
Actually finding such answers is simply a matter of finding the continued fraction for the square root of $p$ or $pq$.
===================================================================
Method described by Prof. Lubin at Continued fraction of $\sqrt{67} - 4$
$$ \sqrt { 13} = 3 + \frac{ \sqrt {13} - 3 }{ 1 } $$
$$ \frac{ 1 }{ \sqrt {13} - 3 } = \frac{ \sqrt {13} + 3 }{4 } = 1 + \frac{ \sqrt {13} - 1 }{4 } $$
$$ \frac{ 4 }{ \sqrt {13} - 1 } = \frac{ \sqrt {13} + 1 }{3 } = 1 + \frac{ \sqrt {13} - 2 }{3 } $$
$$ \frac{ 3 }{ \sqrt {13} - 2 } = \frac{ \sqrt {13} + 2 }{3 } = 1 + \frac{ \sqrt {13} - 1 }{3 } $$
$$ \frac{ 3 }{ \sqrt {13} - 1 } = \frac{ \sqrt {13} + 1 }{4 } = 1 + \frac{ \sqrt {13} - 3 }{4 } $$
$$ \frac{ 4 }{ \sqrt {13} - 3 } = \frac{ \sqrt {13} + 3 }{1 } = 6 + \frac{ \sqrt {13} - 3 }{1 } $$
Simple continued fraction tableau:
$$
\begin{array}{cccccccccccccccccccccccc}
& & 3 & & 1 & & 1 & & 1 & & 1 & & 6 & & 1 & & 1 & & 1 & & 1 & & 6 & \\
\\
\frac{ 0 }{ 1 } & \frac{ 1 }{ 0 } & & \frac{ 3 }{ 1 } & & \frac{ 4 }{ 1 } & & \frac{ 7 }{ 2 } & & \frac{ 11 }{ 3 } & & \frac{ 18 }{ 5 } & & \frac{ 119 }{ 33 } & & \frac{ 137 }{ 38 } & & \frac{ 256 }{ 71 } & & \frac{ 393 }{ 109 } & & \frac{ 649 }{ 180 } \\
\\
& 1 & & -4 & & 3 & & -3 & & 4 & & -1 & & 4 & & -3 & & 3 & & -4 & & 1
\end{array}
$$
$$
\begin{array}{cccc}
\frac{ 1 }{ 0 } & 1^2 - 13 \cdot 0^2 = 1 & \mbox{digit} & 3 \\
\frac{ 3 }{ 1 } & 3^2 - 13 \cdot 1^2 = -4 & \mbox{digit} & 1 \\
\frac{ 4 }{ 1 } & 4^2 - 13 \cdot 1^2 = 3 & \mbox{digit} & 1 \\
\frac{ 7 }{ 2 } & 7^2 - 13 \cdot 2^2 = -3 & \mbox{digit} & 1 \\
\frac{ 11 }{ 3 } & 11^2 - 13 \cdot 3^2 = 4 & \mbox{digit} & 1 \\
\frac{ 18 }{ 5 } & 18^2 - 13 \cdot 5^2 = -1 & \mbox{digit} & 6 \\
\frac{ 119 }{ 33 } & 119^2 - 13 \cdot 33^2 = 4 & \mbox{digit} & 1 \\
\frac{ 137 }{ 38 } & 137^2 - 13 \cdot 38^2 = -3 & \mbox{digit} & 1 \\
\frac{ 256 }{ 71 } & 256^2 - 13 \cdot 71^2 = 3 & \mbox{digit} & 1 \\
\frac{ 393 }{ 109 } & 393^2 - 13 \cdot 109^2 = -4 & \mbox{digit} & 1 \\
\frac{ 649 }{ 180 } & 649^2 - 13 \cdot 180^2 = 1 & \mbox{digit} & 6 \\
\end{array}
$$
=======================================
$$ \sqrt { 85} = 9 + \frac{ \sqrt {85} - 9 }{ 1 } $$
$$ \frac{ 1 }{ \sqrt {85} - 9 } = \frac{ \sqrt {85} + 9 }{4 } = 4 + \frac{ \sqrt {85} - 7 }{4 } $$
$$ \frac{ 4 }{ \sqrt {85} - 7 } = \frac{ \sqrt {85} + 7 }{9 } = 1 + \frac{ \sqrt {85} - 2 }{9 } $$
$$ \frac{ 9 }{ \sqrt {85} - 2 } = \frac{ \sqrt {85} + 2 }{9 } = 1 + \frac{ \sqrt {85} - 7 }{9 } $$
$$ \frac{ 9 }{ \sqrt {85} - 7 } = \frac{ \sqrt {85} + 7 }{4 } = 4 + \frac{ \sqrt {85} - 9 }{4 } $$
$$ \frac{ 4 }{ \sqrt {85} - 9 } = \frac{ \sqrt {85} + 9 }{1 } = 18 + \frac{ \sqrt {85} - 9 }{1 } $$
Simple continued fraction tableau:
$$
\begin{array}{cccccccccccccccccccccccc}
& & 9 & & 4 & & 1 & & 1 & & 4 & & 18 & & 4 & & 1 & & 1 & & 4 & & 18 & \\
\\
\frac{ 0 }{ 1 } & \frac{ 1 }{ 0 } & & \frac{ 9 }{ 1 } & & \frac{ 37 }{ 4 } & & \frac{ 46 }{ 5 } & & \frac{ 83 }{ 9 } & & \frac{ 378 }{ 41 } & & \frac{ 6887 }{ 747 } & & \frac{ 27926 }{ 3029 } & & \frac{ 34813 }{ 3776 } & & \frac{ 62739 }{ 6805 } & & \frac{ 285769 }{ 30996 } \\
\\
& 1 & & -4 & & 9 & & -9 & & 4 & & -1 & & 4 & & -9 & & 9 & & -4 & & 1
\end{array}
$$
$$
\begin{array}{cccc}
\frac{ 1 }{ 0 } & 1^2 - 85 \cdot 0^2 = 1 & \mbox{digit} & 9 \\
\frac{ 9 }{ 1 } & 9^2 - 85 \cdot 1^2 = -4 & \mbox{digit} & 4 \\
\frac{ 37 }{ 4 } & 37^2 - 85 \cdot 4^2 = 9 & \mbox{digit} & 1 \\
\frac{ 46 }{ 5 } & 46^2 - 85 \cdot 5^2 = -9 & \mbox{digit} & 1 \\
\frac{ 83 }{ 9 } & 83^2 - 85 \cdot 9^2 = 4 & \mbox{digit} & 4 \\
\frac{ 378 }{ 41 } & 378^2 - 85 \cdot 41^2 = -1 & \mbox{digit} & 18 \\
\frac{ 6887 }{ 747 } & 6887^2 - 85 \cdot 747^2 = 4 & \mbox{digit} & 4 \\
\frac{ 27926 }{ 3029 } & 27926^2 - 85 \cdot 3029^2 = -9 & \mbox{digit} & 1 \\
\frac{ 34813 }{ 3776 } & 34813^2 - 85 \cdot 3776^2 = 9 & \mbox{digit} & 1 \\
\frac{ 62739 }{ 6805 } & 62739^2 - 85 \cdot 6805^2 = -4 & \mbox{digit} & 4 \\
\frac{ 285769 }{ 30996 } & 285769^2 - 85 \cdot 30996^2 = 1 & \mbox{digit} & 18 \\
\end{array}
$$
=================================================================
$$ \sqrt { 205} = 14 + \frac{ \sqrt {205} - 14 }{ 1 } $$
$$ \frac{ 1 }{ \sqrt {205} - 14 } = \frac{ \sqrt {205} + 14 }{9 } = 3 + \frac{ \sqrt {205} - 13 }{9 } $$
$$ \frac{ 9 }{ \sqrt {205} - 13 } = \frac{ \sqrt {205} + 13 }{4 } = 6 + \frac{ \sqrt {205} - 11 }{4 } $$
$$ \frac{ 4 }{ \sqrt {205} - 11 } = \frac{ \sqrt {205} + 11 }{21 } = 1 + \frac{ \sqrt {205} - 10 }{21 } $$
$$ \frac{ 21 }{ \sqrt {205} - 10 } = \frac{ \sqrt {205} + 10 }{5 } = 4 + \frac{ \sqrt {205} - 10 }{5 } $$
$$ \frac{ 5 }{ \sqrt {205} - 10 } = \frac{ \sqrt {205} + 10 }{21 } = 1 + \frac{ \sqrt {205} - 11 }{21 } $$
$$ \frac{ 21 }{ \sqrt {205} - 11 } = \frac{ \sqrt {205} + 11 }{4 } = 6 + \frac{ \sqrt {205} - 13 }{4 } $$
$$ \frac{ 4 }{ \sqrt {205} - 13 } = \frac{ \sqrt {205} + 13 }{9 } = 3 + \frac{ \sqrt {205} - 14 }{9 } $$
$$ \frac{ 9 }{ \sqrt {205} - 14 } = \frac{ \sqrt {205} + 14 }{1 } = 28 + \frac{ \sqrt {205} - 14 }{1 } $$
Simple continued fraction tableau:
$$
\begin{array}{cccccccccccccccccccccc}
& & 14 & & 3 & & 6 & & 1 & & 4 & & 1 & & 6 & & 3 & & 28 & \\
\\
\frac{ 0 }{ 1 } & \frac{ 1 }{ 0 } & & \frac{ 14 }{ 1 } & & \frac{ 43 }{ 3 } & & \frac{ 272 }{ 19 } & & \frac{ 315 }{ 22 } & & \frac{ 1532 }{ 107 } & & \frac{ 1847 }{ 129 } & & \frac{ 12614 }{ 881 } & & \frac{ 39689 }{ 2772 } \\
\\
& 1 & & -9 & & 4 & & -21 & & 5 & & -21 & & 4 & & -9 & & 1
\end{array}
$$
$$
\begin{array}{cccc}
\frac{ 1 }{ 0 } & 1^2 - 205 \cdot 0^2 = 1 & \mbox{digit} & 14 \\
\frac{ 14 }{ 1 } & 14^2 - 205 \cdot 1^2 = -9 & \mbox{digit} & 3 \\
\frac{ 43 }{ 3 } & 43^2 - 205 \cdot 3^2 = 4 & \mbox{digit} & 6 \\
\frac{ 272 }{ 19 } & 272^2 - 205 \cdot 19^2 = -21 & \mbox{digit} & 1 \\
\frac{ 315 }{ 22 } & 315^2 - 205 \cdot 22^2 = 5 & \mbox{digit} & 4 \\
\frac{ 1532 }{ 107 } & 1532^2 - 205 \cdot 107^2 = -21 & \mbox{digit} & 1 \\
\frac{ 1847 }{ 129 } & 1847^2 - 205 \cdot 129^2 = 4 & \mbox{digit} & 6 \\
\frac{ 12614 }{ 881 } & 12614^2 - 205 \cdot 881^2 = -9 & \mbox{digit} & 3 \\
\frac{ 39689 }{ 2772 } & 39689^2 - 205 \cdot 2772^2 = 1 & \mbox{digit} & 28 \\
\end{array}
$$
=====================================================================
$$ \sqrt { 221} = 14 + \frac{ \sqrt {221} - 14 }{ 1 } $$
$$ \frac{ 1 }{ \sqrt {221} - 14 } = \frac{ \sqrt {221} + 14 }{25 } = 1 + \frac{ \sqrt {221} - 11 }{25 } $$
$$ \frac{ 25 }{ \sqrt {221} - 11 } = \frac{ \sqrt {221} + 11 }{4 } = 6 + \frac{ \sqrt {221} - 13 }{4 } $$
$$ \frac{ 4 }{ \sqrt {221} - 13 } = \frac{ \sqrt {221} + 13 }{13 } = 2 + \frac{ \sqrt {221} - 13 }{13 } $$
$$ \frac{ 13 }{ \sqrt {221} - 13 } = \frac{ \sqrt {221} + 13 }{4 } = 6 + \frac{ \sqrt {221} - 11 }{4 } $$
$$ \frac{ 4 }{ \sqrt {221} - 11 } = \frac{ \sqrt {221} + 11 }{25 } = 1 + \frac{ \sqrt {221} - 14 }{25 } $$
$$ \frac{ 25 }{ \sqrt {221} - 14 } = \frac{ \sqrt {221} + 14 }{1 } = 28 + \frac{ \sqrt {221} - 14 }{1 } $$
Simple continued fraction tableau:
$$
\begin{array}{cccccccccccccccccc}
& & 14 & & 1 & & 6 & & 2 & & 6 & & 1 & & 28 & \\
\\
\frac{ 0 }{ 1 } & \frac{ 1 }{ 0 } & & \frac{ 14 }{ 1 } & & \frac{ 15 }{ 1 } & & \frac{ 104 }{ 7 } & & \frac{ 223 }{ 15 } & & \frac{ 1442 }{ 97 } & & \frac{ 1665 }{ 112 } \\
\\
& 1 & & -25 & & 4 & & -13 & & 4 & & -25 & & 1
\end{array}
$$
$$
\begin{array}{cccc}
\frac{ 1 }{ 0 } & 1^2 - 221 \cdot 0^2 = 1 & \mbox{digit} & 14 \\
\frac{ 14 }{ 1 } & 14^2 - 221 \cdot 1^2 = -25 & \mbox{digit} & 1 \\
\frac{ 15 }{ 1 } & 15^2 - 221 \cdot 1^2 = 4 & \mbox{digit} & 6 \\
\frac{ 104 }{ 7 } & 104^2 - 221 \cdot 7^2 = -13 & \mbox{digit} & 2 \\
\frac{ 223 }{ 15 } & 223^2 - 221 \cdot 15^2 = 4 & \mbox{digit} & 6 \\
\frac{ 1442 }{ 97 } & 1442^2 - 221 \cdot 97^2 = -25 & \mbox{digit} & 1 \\
\frac{ 1665 }{ 112 } & 1665^2 - 221 \cdot 112^2 = 1 & \mbox{digit} & 28 \\
\end{array}
$$
============================================================== |
Probability of 3 people having birthday in March | If we assume that all days are equally likely to be a person's birthday, and that the three people's birthdays are independent events, then the probability is just the product of the probability of the individual events: $\left(\frac{31}{365}\right)^3$.
Empirically, both of these assumptions could well be false. |
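A two-line check of the arithmetic, under the same uniform-days and independence assumptions:

```python
p_march = 31 / 365          # one person born in March (uniform 365-day year)
p_all_three = p_march ** 3  # three independent people; about 6.1e-4
```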
How do I rotate a line segment in a specific point on the line? | Let's just say you have the points $A=(1, 1)$ and $B=(1, 3)$. Suppose you want to
rotate $\overline{AB}$ about the point $C=(1, 2)$ by $\theta = \frac{\pi}{2}$ counter-clockwise, which will give you a segment from $(0, 2)$ to $(2, 2)$.
Step 1. You need to translate $C$ to the origin, i.e. apply the (affine) map $Tx = x-C$.
Step 2. Rotate the segment by using the rotation matrix
\begin{align}
R(\theta) =
\begin{pmatrix}
\cos\theta & -\sin \theta\\
\sin\theta & \cos\theta
\end{pmatrix}.
\end{align}
Step 3. Translate back, i.e. $T^{-1}x= x+C$.
Thus, the entire process becomes $T^{-1}R(\theta)T$, a conjugation. Anyhow, let's see this in our example.
Take the point $(1, 1)$, which gets mapped to $(1,1)-(1, 2) = (0, -1)$. Then the rotation by $R(\pi/2)$ gives
\begin{align}
\begin{pmatrix}
0 & -1\\
1 & 0
\end{pmatrix}
\begin{pmatrix}
0\\
-1
\end{pmatrix}
=
\begin{pmatrix}
1\\
0
\end{pmatrix}.
\end{align}
Lastly, translate back gives $(1, 0) +(1, 2) = (2, 2)$. |
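The three steps translate directly into code; a Python sketch (the helper name `rotate_about` is mine):

```python
import math

def rotate_about(point, center, theta):
    """Rotate `point` about `center` by theta (counter-clockwise):
    translate center to origin, rotate, translate back."""
    x, y = point[0] - center[0], point[1] - center[1]   # step 1
    xr = x * math.cos(theta) - y * math.sin(theta)      # step 2
    yr = x * math.sin(theta) + y * math.cos(theta)
    return xr + center[0], yr + center[1]               # step 3

# The example from the text: rotate both endpoints about C=(1,2) by pi/2
A = rotate_about((1, 1), (1, 2), math.pi / 2)   # expect (2, 2)
B = rotate_about((1, 3), (1, 2), math.pi / 2)   # expect (0, 2)
```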
How can I solve this higher degree first order differential equation? | Since $(y')^2 \ge 0$, $e^y \ge 0$ and $\cos x \ge -1$, we have
$$ (y')^2 + e^y + \cos x + 3 \ge 2 $$
So there is no solution. |
Probability notation | What exactly does "a distribution over the random variable $B$" mean?
Does this distribution mean some mapping of values that r.v. can take to probability values? I usually think about distribution in terms of PDF or CDF. I think $p(r)$ is the density (or PMF in discrete r.v. case).
$p(B)$ represents the probability mass function of the random variable $B$ at arbitrary values. It is a terribly lazy (but unfortunately common) shorthand, used when authors are more interested in showing the dependencies between random variables than in any particular evaluations.
$p(r)$ demonstrates why this is a horribly confusing idea, as the "probability mass function of value $r$" is meaningless unless it is implicitly clear which random variable is being discussed.
More properly we should write something like $p_{\small B}(r)$ to indicate the probability mass function of random variable $B$ evaluated at $r$. In this case that is $\mathsf P(B=r)$; the probability for the event of the box being red.
How can we condition on a random variable $X$? My understanding is that we can only condition on events. Does $p(Y |X)$ mean conditional distribution of $Y$ given $X$ taking some value which is not specified?
Again, this appears to be an abbreviation for $p_{\lower{0.5ex}{\small Y\mid X}}(y\mid x)$, or $\mathsf P(Y=y\mid X=x)$, where $x,y$ are arbitrary arguments.
The occurrence of $X$ realising a particular value is an event. |
How to explain that the the following points are optimal in a region? | In the paper (found via Google Scholar), the objective value is LDoF$_{sum}$ (see either the description or the caption of Fig. 4), defined in formula (6) as LDoF$_{u}$+LDoF$_{d}$. So, the optimal point is not the one that maximizes (3).
Which point is optimal is determined by the slope of the diagonal line. If it is steeper than 45 degrees, the bottom point is optimal. If it is more horizontal, the top corner point is optimal.
For fig. (4d), the two conditions under the figure oppose each other: the first one makes the line more horizontal, while the second one makes the line less horizontal. This case is therefore not as simple as the other ones. Let's check the objective value at the circled point:
$$\begin{align}
LDoF_u + LDoF_d &= \min(K_u,M_r) + \frac{N_d}{N_d+K_d-1} \left(K_d - \frac{K_d}{\min(K_u,N_b)} \min(K_u,M_r)\right)\\
&= \frac{N_dK_d}{N_d+K_d-1} + \min(K_u,M_r) - \frac{N_dK_d}{N_d+K_d-1} \frac{\min(K_u,M_r)}{\min(K_u,N_b)}\\
\end{align}$$
The second term on the first line is obtained by solving (23) for LDoF$_d$. This equals the right hand side of the inequality at the left bottom of page 7202, which according to Theorem 1 is optimal. |
Suppose $(X, Y)$ is a Gaussian random vector with mean $(0, 0)$. Then $\mathbb{E}[X|Y] = \frac{\mathbb{E}[XY]}{\mathbb{E}[Y^2]}\mathbb{E}[Y]$ | Firstly some intuition. While working with conditional expectations, it's always nice to have them in the form $\mathbb E[F(X,Y)|\mathcal G]$, where $X$ is independent of $\mathcal G$, $Y$ is $\mathcal G$-measurable and $F$ is Borel such that $\mathbb E|F(X,Y)| < \infty$, because we have tools to calculate those.
Having said that, let's try to make $\mathbb E[X | Y]$ look like the form above. Note that we're working with the Gaussian vector $(X,Y)$. Recall that if $V=(V_1,...,V_n)$ is Gaussian, then the coordinates $V_1,...,V_n$ are independent if and only if the covariance matrix of $V$ is diagonal (it is crucial that the whole vector $V$ is Gaussian; it does not work with only $V_1,...,V_n$ being Gaussian).
So we can try to change our vector $(X,Y)$ into some Gaussian vector $(Z,Y)$ such that $Z$ is independent of $Y$ and $X=Z+W$, where $W$ is $Y$-measurable (to achieve this, in general, we need a linear, or at least affine, map). Since we want a linear map, $Z$ must be of the form $cX+dY$, but since scaling by a constant doesn't change independence (if the constant isn't zero, of course) we can assume $Z=X-aY$ for some $a \in \mathbb R$. Then $W=aY$ is $Y$-measurable no matter which $a \in \mathbb R$ we choose.
To end, we need to find a good $a \in \mathbb R$, meaning $X-aY$ and $Y$ are independent. Since $(X-aY,Y)$ is Gaussian, due to the characterisation above, it is enough to have $$ 0 = Cov(X-aY,Y) = Cov(X,Y) - aVar(Y) = \mathbb E[XY] - \mathbb E[X]\mathbb E[Y] - a\big(\mathbb E[Y^2] - (\mathbb E[Y])^2\big) $$ and due to zero mean, we obtain $$ 0 = \mathbb E[XY] - a\mathbb E[Y^2]$$ hence $$a = \frac{\mathbb E[XY]}{\mathbb E[Y^2]}$$ (I'm assuming that $\mathbb E[Y^2] > 0$, because otherwise we just have $Y \sim \delta_0$, and such a random variable is independent of $X$ by itself; hence in case $\mathbb E[Y^2] = 0$ we get $\mathbb E[X|Y] = \mathbb E[X] = 0$ (almost surely)).
Now, to end, by linearity of conditional expectation we get $$ \mathbb E[X|Y] = \mathbb E[(X-aY) + (aY) | Y] = \mathbb E[X-aY | Y] + a\mathbb E[Y|Y] $$
And since $X-aY,Y$ are independent, and $Y$ is $\sigma(Y)$ measurable, we end up with $$ \mathbb E[X|Y] = \mathbb E[X-aY] +aY = aY = \frac{\mathbb E[XY]}{\mathbb E[Y^2]}Y \quad \text{almost surely} $$ |
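A numeric illustration of the key step: if $a$ is computed from sample moments, the sample covariance of the residual $X-aY$ with $Y$ vanishes by construction, which is exactly the decorrelation used above.

```python
import random

random.seed(0)
n = 10_000
ys = [random.gauss(0, 1) for _ in range(n)]
xs = [0.7 * y + random.gauss(0, 0.5) for y in ys]   # a correlated Gaussian pair

exy = sum(x * y for x, y in zip(xs, ys)) / n        # sample E[XY]
eyy = sum(y * y for y in ys) / n                    # sample E[Y^2]
a = exy / eyy                                       # should be close to 0.7

# Sample covariance of (X - aY) with Y: zero up to floating point.
residual_cov = sum((x - a * y) * y for x, y in zip(xs, ys)) / n
```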
How find this minimum $\sum_{i=1}^{n}a^2_{i}-2\sum_{i=1}^{n}a_{i}a_{i+1},a_{n+1}=a_{1}$ | UPDATE 11/14/2013 : Part of the first proposed answer was wrong. The method
proposed only works for $n<7$, and I currently have no complete solution for $n\geq 7$. All that is corrected in the updated version below.
Let us put
$$
Q_n(a_1,a_2,\ldots ,a_n)=
a^2_{1}+a^2_{2}+\cdots+a^2_{n}-2a_{1}a_{2}-2a_{2}a_{3}-\cdots-2a_{n-2}a_{n-1}-2a_{n-1}a_{n}-2a_na_1
$$
and
$$
T_n(a_1,a_2,a_3,\ldots,a_n)=
Q_n(a_1,a_2,\ldots,a_n)+\frac{(a_1+a_2+a_3+\ldots +a_n)^2}{n}
$$
For $3\leq n\leq 6$, the minimum (under the constraint $\sum_i a_i = 1$) is $-\frac{1}{n}$ (attained when all coordinates
are equal to $\frac{1}{n}$), because of
$$
\begin{array}{lcl}
T_6(a_1,a_2,\ldots,a_6)&=&
\frac{1}{42}\bigg(-5a_1+a_2+a_3+a_4-5a_5+7a_6\bigg)^2 \\
& & +\frac{1}{28}\bigg(-3a_1+2a_2+2a_3-5a_4+4a_5\bigg)^2
+\frac{1}{4}\bigg(-a_1+2a_2-2a_3+a_4\bigg)^2 \\
T_5(a_1,a_2,\ldots,a_5) &=&
\frac{1}{30}\bigg(-4a_1+a_2+a_3-4a_4+6a_5\bigg)^2
+\frac{1}{6}\bigg(-a_1+a_2-2a_3+2a_4\bigg)^2 \\
& &
+\frac{1}{2}\bigg(-a_2+a_3\bigg)^2+\frac{1}{2}\bigg(-a_1+a_2\bigg)^2 \\
& & \\
T_4(a_1,a_2,a_3,a_4) &=&
\frac{1}{20}\bigg(-3a_1+a_2-3a_3+5a_4\bigg)^2
+\frac{1}{20}\bigg(-a_1-3a_2+4a_3\bigg)^2 \\
& &
+\frac{3}{4}\bigg(-a_1+a_2\bigg)^2 \\
& & \\
T_3(a_1,a_2,a_3) &=&
\frac{1}{3}\bigg(-a_1-a_2+2a_3\bigg)^2
+\bigg(-a_1+a_2\bigg)^2 \\
\end{array}
$$
Unfortunately, this method does not work any more for $n \geq 7$. Indeed, in that
case the minimum is $\leq -\frac{1}{6}$ (because
$Q_n(0,\ldots,0,\frac{1}{6},\frac{1}{3},\frac{1}{3},\frac{1}{6})=
-\frac{1}{6}$) ; on the other hand, the polynomial
$$
R_n(a_1,a_2,\ldots ,a_n)=Q_n(a_1,a_2,\ldots,a_n)+\frac{(a_1+a_2+a_3+\ldots +a_n)^2}{6}
$$
is not nonnegative on ${\mathbb R}^n$ any more. For example, for $n=7$ we have
$$
R_7(4, 1, 0, -1, 0, 3, 5) = -2 < 0
$$ |
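The sum-of-squares identities for $3\le n\le 6$ and the failure at $n=7$ are easy to spot-check numerically; a Python sketch:

```python
import random

def Q(a):
    """The cyclic quadratic form Q_n from the text."""
    n = len(a)
    return sum(x * x for x in a) - 2 * sum(a[i] * a[(i + 1) % n] for i in range(n))

def T(a):
    """T_n = Q_n + (sum a_i)^2 / n."""
    return Q(a) + sum(a) ** 2 / len(a)

random.seed(1)
# The sum-of-squares decompositions say T_n >= 0 for n = 3..6:
ok = all(T([random.uniform(-5, 5) for _ in range(n)]) >= -1e-9
         for n in (3, 4, 5, 6) for _ in range(1000))

# ...but for n = 7, the witness and the counterexample from the text:
w = [0, 0, 0, 1 / 6, 1 / 3, 1 / 3, 1 / 6]
q7 = Q(w)                        # should be -1/6
v = [4, 1, 0, -1, 0, 3, 5]
r7 = Q(v) + sum(v) ** 2 / 6      # R_7 value, should be -2
```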
Proving $\vdash \exists x (x=c)$ for each term $c$. | If your form of Existential Generalization requires you to replace all occurrences of a single constant with your chosen variable, you can instead use proof by contradiction:
$\neg\exists x(x=c)$
$\forall x\neg(x=c)$
$\neg(c=c)$
$\bot$
$\neg\neg\exists x(x=c)$
$\exists x(x=c)$ |
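For what it's worth, both routes go through in a proof assistant; a Lean 4 sketch (the second example assumes Mathlib, for `by_contra` and `not_exists`):

```lean
-- Direct witness: c itself satisfies x = c.
example {α : Type} (c : α) : ∃ x, x = c := ⟨c, rfl⟩

-- The contradiction route from the text.
example {α : Type} (c : α) : ∃ x, x = c := by
  by_contra h                      -- h : ¬ ∃ x, x = c
  exact (not_exists.mp h c) rfl    -- instantiate at c, contradict c = c
```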
Examples and Counterexamples of Relations which Satisfy Certain Properties | 1. Reflexivity and Irreflexivity
A relation on a nonempty set cannot be both reflexive and irreflexive. This follows almost immediately from the definitions: a reflexive relation on a nonempty set $X$ must contain every pair of the form $(x,x) \in X\times X$, while an irreflexive relation cannot contain any such pair. Reflexivity and irreflexivity are mutually exclusive properties.
2. Transitivity and Intransitivity
A relation may be vacuously transitive and intransitive: if there is no $y$ such that $(x,y),(y,z) \in R$ for some $x$ and $z$, then the hypotheses of both transitivity and intransitivity fail. Any conclusion is implied by a false hypothesis, so such a relation is both transitive and intransitive. For example, let $R$ be the relation on the three element set $X = \{1,2,3\}$ given by
$$ R = \{ (1,2), (1,3) \}. $$
This relation is (trivially) both transitive and intransitive, as there is no $y$ which appears in the first slot of one pair, and in the second slot of another.
Aside from such vacuous examples (vacuous in the sense that the hypotheses are false, not in the sense that they are "easy"), a relation cannot be both transitive and intransitive: if $(x,y), (y,z) \in R$, then either $(x,z) \in R$ (and $R$ is not intransitive), or $(x,z) \not\in R$ (and $R$ is not transitive). Aside from vacuous examples, these two properties are mutually exclusive.
3. Intransitivity and Irreflexivity
A nontrivial relation which is intransitive must also be irreflexive. The essential idea here is that reflexive relations "build in" transitive relations. More formally, consider a proof by contraposition: suppose that $R$ is a nontrivial relation which is not irreflexive. Then there is some $x$ such that $(x,x) \in R$. Taking $x=y=z$, this implies that
$$ (x,y), (y,z), (x,z) \in R, $$
which contradicts the definition of intransitivity. Thus $R$ is not intransitive. Therefore a relation which is not irreflexive is not intransitive.
By contraposition, an intransitive relation must be irreflexive.
4. Symmetry and Antisymmetry
Perhaps counterintuitively, a nontrivial relation can be both symmetric and antisymmetric. Suppose that $R$ is a nontrivial relation which is both symmetric and antisymmetric. As $R$ is nontrivial, it contains some pair $(x,y)$. The symmetry of $R$ implies that $(y,x)$ is also in $R$. The antisymmetry of $R$ then implies that $x=y$. Hence a relation on a set $X$ which is both symmetric and antisymmetric must be a subset of the diagonal $\{(x,x) : x \in X\}$. Any such relation is vacuously transitive, and can be reflexive if it is the entire diagonal (this is the equality relation). There is no nontrivial irreflexive relation which is both symmetric and antisymmetric.
5. Examples on a Set with Three Elements
The remainder of this answer is structured as follows: the set $X$ is the three element set $X = \{1,2,3\}$. Each of the items below gives an example of a relation $R$ on $X$ which satisfies various combinations of the properties listed in the question. The examples are labeled with a string such as "[RT-]".
The first character may be R for a reflexive relation, I for an irreflexive relation, or - for a relation which is neither reflexive nor irreflexive.
The second character may be T for a transitive relation, I for an intransitive relation, or - for a relation which is neither transitive nor intransitive.
The third character may be S for a symmetric relation, A for an antisymmetric relation, or - for a relation which is neither symmetric nor antisymmetric.
Commentary is given in cases where it might be illuminating.
[RTS] $R = \{(1,1), (1,2), (2,2), (2,1), (3,3)\}$
[RTA] $R = \{(1,1), (1,2), (1,3), (2,2), (2,3), (3,3)\}$
[RT-] $R = \{ (1,1), (1,2), (2,1), (2,2), (3,1), (3,2), (3,3) \}$
[RIS] No example exists, see 3.
[RIA] No example exists, see 3.
[RI-] No example exists, see 3.
[R-S] $R = \{(1,1), (1,2), (2,1), (2,2), (2,3), (3,2), (3,3)\}$
[R-A] $R = \{(1,1), (1,2), (2,2), (2,3), (3,3)\}$
[R--] $R = \{(1,1), (2,2), (3,3), (1,2), (2,1), (1,3) \}$
[ITS] No nontrivial example exists.
Suppose that $R$ is some nontrivial, irreflexive, transitive relation. If $R$ is not antisymmetric, then there exist pairs $(x,y)$ and $(y,x)$ which are both elements of $R$. But $R$ is transitive, so $(x,x)$ and $(y,y)$ must also be elements of $R$. In other words, a nontrivial, irreflexive, transitive relation must be antisymmetric.
[ITA] $R = \{(1,2), (1,3), (2,3)\}$
The usual order relations ($\le$, $<$, $\ge$, $>$) on $\mathbb{R}$ are more interesting examples of relations which are transitive and antisymmetric. Weak inequalities are reflexive, while strict inequalities are irreflexive.
[IT-] No nontrivial example exists, see [ITS].
[IIS] $\{(1,2), (2,1)\}$.
[IIA] $R = \{(1,2)\}$
[II-] $\{(1,2), (1,3), (2,1)\}$.
[I-S] $R = \{(1,2), (2,1), (1,3), (3,1), (2,3), (3,2) \}$.
Transitivity and intransitivity can be a little hard to see by inspection. This relation is not intransitive, as $(1,2), (2,3), (1,3) \in R$; and it is not transitive, as $(1,2), (2,1) \in R$ but $(1,1)\not\in R$.
There is no example of an irreflexive and antisymmetric relation on $X$ which is neither transitive nor intransitive. However, if $R$ is a relation on as set $Y = \{a,b,c,d\}$, then an example exists:
[I-A] $R = \{ (a,b), (a,c), (b,c), (c,d) \}$
This relation is not transitive, because $(a,c), (c,d) \in R$, but $(a,d)\not\in R$; and is not intransitive, because $(a,b), (b,c), (a,c) \in R$.
[I--] $R = \{(1,2), (1,3), (2,1), (2,3)\}$
[-TS] $R = \{(1,1), (1,2), (2,1), (2,2)\}$
Note that the above relation is not reflexive on the three element set $X = \{1,2,3\}$ because it does not contain the pair $(3,3)$. However, thought of as a relation on the two element set $\{1,2\}$, this relation is reflexive.
[-TA] $R = \{(1,1), (1,2), (1,3), (2,3)\}$
[-T-] $R = \{(1,1), (1,2), (1,3), (2,1), (2,2), (2,3) \}$
[-IS] No nontrivial example exists, see 3.
[-IA] No nontrivial example exists, see 3.
[-I-] No nontrivial example exists, see 3.
[--S] $R = \{(1,2), (2,1), (2,2) \}$
[--A] $R = \{(1,1), (1,2), (2,3) \}$
[---] $R = \{ (1,1), (1,2), (2,1), (2,3) \}$
Some Additional Examples
Abstractly, it is good to have simple examples and counterexamples to different permutations of relational properties. However, it is also useful to have in mind more interesting models—every one of these properties comes from something in the world. The arbitrary permutations of properties may not have any useful meaning, but the properties themselves are interesting.
An equivalence relation is any relation which is reflexive, transitive, and symmetric. The most basic such relation is equality ($=$): $x=y$ if and only if $x$ and $y$ are, in fact, the same object. Abusing notation a bit, this means that $=$, thought of as an equivalence relation on some arbitrary set $X$, is the diagonal of $X\times X$. That is,
$$ = \quad\text{is the set}\quad \{ (x,x) : x \in X\}. $$
There are other important equivalence relations, and many important properties in mathematics hold only "up to equivalence" with respect to some equivalence relation.
For example, $1/2$ and $2/4$ are not really the same object—ask any second grader. If I have a package of two cookies, then I can have one cookie, and give another to a friend. We each get one of the two cookies, or $1/2$ of the package. If I have a package of four cookies, then I can have two and give two to a friend. We each get two cookies, or $2/4$ of the package. Two is not one! These things are different. However, from the point of view of addition and multiplication, $1/2$ and $2/4$ behave in essentially the same way—they are equivalent with respect to a relation which ultimately gives us the rational numbers. Hence we can treat them as though they are the same object (and typically do!).
Order relations are examples of transitive, antisymmetric relations. For example, $\le$, $\ge$, $<$, and $>$ are examples of order relations on $\mathbb{R}$—the first two are reflexive, while the latter two are irreflexive. Set containment relations ($\subseteq$, $\supseteq$, $\subset$, $\supset$) have similar properties.
In general, I think that it is reasonable to think of transitive, antisymmetric relations as those relations which "rank" or "order" things in some rough way. Inequalities order numbers, set containment relations order sets, taxonomies classify and order living organisms, etc.
Intransitive relations are kind of an odd duck, and it is not immediately obvious how they might come up in the real world. However, they do! My favorite example is the two-player game "Rock-Paper-Scissors". Rock beats scissors, scissors beats paper, paper beats rock. The relation "beats" is intransitive. Parenthood is also (generally speaking—one can always find exceptions once human behaviour is involved) an intransitive relation: I am the parent of my daughter, and my mother is my parent, but my mother is not my daughter's parent. |
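These case checks are mechanical enough to script; a Python sketch of the property tests, using the pairwise reading of transitivity and intransitivity from the definitions above:

```python
def is_reflexive(R, X):
    return all((x, x) in R for x in X)

def is_irreflexive(R, X):
    return all((x, x) not in R for x in X)

def is_symmetric(R):
    return all((y, x) in R for (x, y) in R)

def is_antisymmetric(R):
    return all(x == y for (x, y) in R if (y, x) in R)

def is_transitive(R):
    # whenever (x,y) and (y,z) are in R, so is (x,z)
    return all((x, z) in R for (x, y) in R for (w, z) in R if y == w)

def is_intransitive(R):
    # whenever (x,y) and (y,z) are in R, (x,z) is not
    return all((x, z) not in R for (x, y) in R for (w, z) in R if y == w)

# e.g. the [ITA] example from the list:
R_ita = {(1, 2), (1, 3), (2, 3)}
```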
Find the area of the top half of the polar curve $r=3-2 \cos \theta$ | Draw the graph of the Cartesian equation $y=3-2\cos x$. With this, it is easy to draw (without the need of a computer) the graph in polar coordinates of $r=3-2\cos \theta$, a limaçon symmetric about the horizontal axis.
Now notice that the top of the figure is obtained when $0\leq \theta \leq \pi$. So, the area is given by
$$\frac{1}{2} \int_0^\pi r^2 d\theta = \frac{1}{2} \int_0^\pi (3-2\cos \theta)^2 d\theta$$ |
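A quick midpoint-rule check: expanding the square gives $\frac{1}{2}\int_0^\pi (9-12\cos\theta+4\cos^2\theta)\,d\theta = \frac{1}{2}(9\pi + 2\pi) = \frac{11\pi}{2}$, which the numerics confirm.

```python
from math import cos, pi

# Midpoint rule for (1/2) * integral of (3 - 2cos t)^2 over [0, pi]
n = 100_000
h = pi / n
area = 0.5 * sum((3 - 2 * cos((k + 0.5) * h)) ** 2 for k in range(n)) * h
```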
Demand Function and Dead Weight Loss | The demand is $q=20-p$.
Your revenue is $r=qp=20p-p^2$
Your net profit is $n=r-c=r-q^2=60p-2p^2-400$
which is maximised when $\frac{\partial n}{\partial p}=0$
Accordingly, the solution of $0=60-4p_*$ gives the optimum price of $p_*=15$
I found this definition of "Deadweight Loss" from Wikipedia:
"In economics, a deadweight loss (also known as excess burden or allocative inefficiency) is a loss of economic efficiency that can occur when equilibrium for a good or service is not achieved or is not achievable. "
This requires knowledge of a supply function though, which I did not notice in the problem specs. I may be missing something. |
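A grid search confirming the first-order condition for the profit function above:

```python
def profit(p):
    return 60 * p - 2 * p * p - 400   # n(p) from above

# search the price grid 0.00, 0.01, ..., 20.00
best = max((p / 100 for p in range(0, 2001)), key=profit)
```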
Summation to Integral | It is straightforward to show that
$$\begin{aligned}\frac {\color{red}{4\binom {1/2}m}+\color{green}{\binom {1/2}{m-1}}}{\color{blue}{\left(m-\tfrac 52\right)!}} &= \left[ {\color{red}{4\frac{{{{( - 1)}^{m - 1}}(2m - 2)!}}{{{2^{2m - 1}}m!(m - 1)!}}} + \color{green}{\frac{{{{( - 1)}^m}(2m - 4)!}}{{{2^{2m - 3}}(m - 1)!(m - 2)!}}}} \right]\color{blue}{\frac{{{2^{2m - 4}}(m - 2)!}}{{\sqrt{\pi}(2m - 4)!}}} \\ &= \frac{{3{{( - 1)}^{m - 1}}(m - 2)}}{{2\sqrt{\pi}m!}}\end{aligned}
$$
Hence your sum $S$ equals
$$S = \frac{3}{{2\sqrt \pi }}\int_0^{ + \infty } {\frac{{\sum\limits_{m = 3}^\infty {\frac{{{{( - 1)}^{m - 1}}(m - 2)}}{{m!}}{x^m}} }}{{{x^{3/2}}({e^x} - 1)}}dx} $$
Using the series (directly obtainable from the series of $e^{-x}$) $$\sum\limits_{m = 3}^\infty {\frac{{{{( - 1)}^{m - 1}}(m - 2)}}{{m!}}{x^m}} = {e^{ - x}}(2 - 2{e^x} + x + x{e^x})$$ concludes the proof. |
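A numeric spot-check of the closed form used in the last step:

```python
from math import exp, factorial

def partial(x, M=60):
    """Partial sum of sum_{m>=3} (-1)^(m-1) (m-2) x^m / m!."""
    return sum((-1) ** (m - 1) * (m - 2) * x ** m / factorial(m)
               for m in range(3, M))

def closed(x):
    """The claimed closed form e^{-x}(2 - 2e^x + x + x e^x)."""
    return exp(-x) * (2 - 2 * exp(x) + x + x * exp(x))

diffs = [abs(partial(x) - closed(x)) for x in (0.3, 1.0, 2.5, 5.0)]
```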
Why is the sum of any ten consecutive Fibonacci numbers always divisible by $11$? | Just write every term in the sum in terms of $a_1$ and $a_2$ (keeping in mind that $a_{n+2}=a_n+a_{n+1}$):
$$a_1+a_2+(a_1+a_2)+(a_1 + 2a_2)+(2a_1+3a_2)+(3a_1+5a_2)+(5a_1+8a_2)+(8a_1+13a_2)+(13a_1+21a_2)+(21a_1+34a_2). $$
Then the sum is clearly equal to $55a_1+88a_2 = 11(5a_1+8a_2)$, which is $11$ times the seventh term of the sum. |
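Since the coefficients $55$ and $88$ do not depend on the starting values, the identity is easy to confirm for arbitrary (even negative) starting pairs:

```python
def sum_is_11_times_seventh(a1, a2):
    terms = [a1, a2]
    for _ in range(8):
        terms.append(terms[-2] + terms[-1])   # Fibonacci recurrence
    return sum(terms) == 11 * terms[6]        # terms[6] is the seventh term

all_ok = all(sum_is_11_times_seventh(a, b)
             for a in range(-10, 11) for b in range(-10, 11))
```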
Find a positive constant $C_1 \geq 0$ such that .... | $\|Ax\|_1 = |a_{11}x_1+a_{12}x_2|+|a_{21}x_1+a_{22}x_2| \le |a_{11}||x_1|+|a_{12}||x_2|+|a_{21}||x_1|+|a_{22}||x_2|$
Since $|x_1|,|x_2| \le \|x\|_1$, we get, with $C_1:= \max\{|a_{11}|,|a_{12}|,|a_{21}|,|a_{22}|\}$:
$\|Ax\|_1 \le 4C_1\,\|x\|_1$
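A quick random check of the resulting bound $\|Ax\|_1 \le 4C_1\|x\|_1$ with $C_1$ the largest $|a_{ij}|$ (four terms $|a_{ij}||x_j|$, each at most $C_1\|x\|_1$):

```python
import random

random.seed(2)
ok = True
for _ in range(1000):
    a11, a12, a21, a22, x1, x2 = (random.uniform(-9, 9) for _ in range(6))
    C1 = max(abs(a11), abs(a12), abs(a21), abs(a22))
    lhs = abs(a11 * x1 + a12 * x2) + abs(a21 * x1 + a22 * x2)  # ||Ax||_1
    ok = ok and lhs <= 4 * C1 * (abs(x1) + abs(x2)) + 1e-9
```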
Let $G$ be a group with $p^3$ elements, $p$ prime. Find the cardinal of the set $\{\operatorname{C}(x)\mid x\in G\}.$ | For $x\in G-Z(G)$ (assuming $G$ is non-abelian, so that $|Z(G)|=p$ and $|C(x)|=p^2$) you know that $C(x)$ contains $p^2-p$ elements outside $Z(G)$. Any two such centralisers are either identical or intersect in $Z(G)$, and every element of $G$ is in its own centraliser.
Let there be $N$ such centralisers then, counting elements in $G$,
$$N(p^2-p)+p=p^3$$
and so $N=p+1$. Since we also have $G$ itself, as the centraliser of any element in $Z(G)$, there are $p+2$ centralisers in total. |
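For $p=2$ the count can be verified directly on the quaternion group $Q_8$ (a non-abelian group of order $p^3=8$); a Python sketch:

```python
# Elements of Q8 encoded as (sign, unit); MUL is the usual unit table
# (i*j = k, j*k = i, k*i = j, and each unit squares to -1).
UNITS = ("1", "i", "j", "k")
MUL = {
    ("1", "1"): (1, "1"), ("1", "i"): (1, "i"), ("1", "j"): (1, "j"), ("1", "k"): (1, "k"),
    ("i", "1"): (1, "i"), ("i", "i"): (-1, "1"), ("i", "j"): (1, "k"), ("i", "k"): (-1, "j"),
    ("j", "1"): (1, "j"), ("j", "i"): (-1, "k"), ("j", "j"): (-1, "1"), ("j", "k"): (1, "i"),
    ("k", "1"): (1, "k"), ("k", "i"): (1, "j"), ("k", "j"): (-1, "i"), ("k", "k"): (-1, "1"),
}

def mul(a, b):
    s, u = MUL[(a[1], b[1])]
    return (a[0] * b[0] * s, u)

G = [(s, u) for s in (1, -1) for u in UNITS]

def centraliser(x):
    return frozenset(g for g in G if mul(g, x) == mul(x, g))

centralisers = {centraliser(x) for x in G}   # expect p + 2 = 4 of them
```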