title | upvoted_answer
---|---|
What is the sum of these numbers? | Let $$S =1×\frac{1}{2^0}+2×\frac{1}{2^1}+...+k×\frac{1}{2^{k-1}} $$
If we can find $S$ , we only need to multiply by $n$ to get $A$.
Let $f(x) = 1 + x + x^2 + x^3 + \cdots + x^k = \large \frac{x^{k+1}- 1} {x - 1} $, by the formula for the sum of a geometric series. (The extra constant term $1$ vanishes upon differentiation, so it changes nothing in what follows.)
Now differentiate both sides, the left side term by term, and the right side using the chain rule. Substitute $x=\frac12$ and we're done. |
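The differentiation trick above is easy to sanity-check numerically; a minimal sketch (the choice $k=10$ is arbitrary), using $f(x)=1+x+\cdots+x^k$ so that $f'(1/2)=S$:

```python
# Check the derivative trick: with f(x) = 1 + x + ... + x^k = (x^(k+1) - 1)/(x - 1),
# f'(x) = 1 + 2x + ... + k x^(k-1), so S = f'(1/2).
k = 10          # arbitrary choice
x = 0.5
S = sum(j * x ** (j - 1) for j in range(1, k + 1))
# derivative of the closed form, by the quotient rule
fprime = ((k + 1) * x ** k * (x - 1) - (x ** (k + 1) - 1)) / (x - 1) ** 2
assert abs(S - fprime) < 1e-12
print(S)  # 3.9765625
```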
Linear Transformations with given basis | It all depends on your notation. For me, $\;[T]_{B_1}^{B_2}\;$ means the matrix of basis change from $\;B_1\;$ to $\;B_2\;$ , which means:
$$\begin{cases}T(1,1,1)=3(1,0,1)+0(0,1,1)+1(0,1,0)=(3,1,3)\\{}\\
T(1,1,0)=0(1,0,1)+1(0,1,1)+0(0,1,0)=(0,1,1)\\{}\\
T(1,0,0)=2(1,0,1)+1(0,1,1)+2(0,1,0)=(2,3,3)\end{cases}$$
Since $\;(2,2,0)=0(1,1,1)+2(1,1,0)+0(1,0,0)\;$, we get that by linearity of $\;T\;$ :
$$T(2,2,0)=0\cdot T(1,1,1)+2T(1,1,0)+0\cdot T(1,0,0)=2(0,1,1)=$$
$$=(0,2,2)$$
The last vector is expressed in the coordinates of $\;B_2\;$ . Try now to end the argument. |
Proof on maps and basic set theory | $S(1)=-1$ or $-2$. Because $\operatorname{Im}\psi=T$, $S(2)$ is determined now. Now it should be easy to give answers. |
How do you get from this to this formula? | Hint: Use the fact that $a^nb^n = (ab)^n$. Put $a = 4,\;b = \frac 13$. |
Clarification on the definition of the Lebesgue number of a metric space | In the general case, you just can say that there exists at least one member of the covering $\mathcal{O}$ containing that subset.
There is no problem if the covering $\mathcal{O}$ contains two open sets with one of them contained in the other. |
Why does $P(X^n>x^n)$ equal $P(X>x)$? | If $a$ and $b$ are non-negative real numbers and $u$ is a positive real number, then $$a^u>b^u\iff a>b$$
Therefore, $\{X^n>a^n\}=\{\omega\in\Omega\,:\,(X(\omega))^n>a^n\}=\{\omega\in\Omega\,:\,X(\omega)>a\}=\{X>a\}$ as sets. |
Calculation of prime numbers - why so difficult? | Depends on what you call "cheap and easy". Adleman-Pomerance-Rumely-Cohen-Lenstra and Elliptic Curve Primality testing have been used to prove primality of numbers (not of special form) with thousands of digits. |
Find the quotient group $GL(n, \mathbb{C})/H$, where $H$ is the subgroup of invertible matrices in $GL(n, \mathbb{C})$ with $\det \in \mathbb{R}$. | Try the following: Given a surjective homomorphism $f:G\to G'$ between two groups, and a normal subgroup $H'<G'$, consider the map
$$
G \to G'/H' \text{ given by } a \mapsto f(a)H'
$$
Show that this is a surjective homomorphism whose kernel is $f^{-1}(H')$.
Edit: To complete the proof, let $G = GL_n(\mathbb{C}), G' = \mathbb{C}^{\times}$ and $f$ be the determinant map. Let $H' = \mathbb{R}^{\times} \triangleleft G'$, then by definition
$$
H = f^{-1}(H')
$$
Hence by the above statement and the first isomorphism theorem,
$$
G/H \cong G'/H' = \mathbb{C}^{\times}/\mathbb{R}^{\times}
$$
Now you can use polar coordinates to recover the right hand side as something familiar. |
determine outer normal unit vector of $\{(x,y,z)|y^2+z^2\leq1\}$ | If your surface is the level set of a function $f:\mathbb R^3\to\mathbb R$ (for example $S=\{x\in\mathbb R^3;f(x)=0\}$), then a normal to the surface $S$ at a point $x$ is given by the gradient $\nabla f(x)$, provided it is nonzero.
For example, the second part of your surface is a part of the sphere $S=\{x\in\mathbb R^3;x_1^2+x_2^2+x_3^2=1\}=\{x\in\mathbb R^3;f(x)=1\}$ where $f(x)=x_1^2+x_2^2+x_3^2$.
The gradient of $f$ is $\nabla f(x)=(2x_1,2x_2,2x_3)=2x$.
If we want to normalize it to unit length, we can divide out the two (or in general take $\nabla f(x)/\|\nabla f(x)\|$).
The first part of your surface is $\{(x,y,z)|x=0,y^2+z^2\le1\}$; it is convenient to write the set so that all coordinates are free and then restrictions are given.
(Here this means writing out the condition $x=0$ explicitly.)
Now you can choose $f(x,y,z)=x$ and your surface is part of the set where $f=0$.
The normal is given by the gradient of $f$: $\nabla f(x,y,z)=(1,0,0)$. |
computer recursion same question but omitting defdef | I would go with mutual recursion:
Base cases ($\varepsilon$ as empty string):
$a\in X$
$\varepsilon,b\in Y$
closure conditions:
$w\in X\to wb\in Y$
$w\in Y\to wa\in X\land wb\in Y$
The desired language is then $X\cup Y$. |
How to solve matrix equations involving non invertible matrices? | If $A$ is singular, you want to solve one of these two problems:
\begin{align*}
& \label{pb1}\tag{1} \min_x\|Ax-b\|^2 \\
& \label{pb2}\tag{2} \min_x\|x\|^2:Ax-b=0
\end{align*}
For the problem \eqref{pb1}, you want to use Newton's method for optimization.
For the problem \eqref{pb2}, you want to use Lagrange multipliers. |
Itinerary Combinatorics questions | There are $3! = 6$ possible orders in which Boston, Chicago, and New York could be visited. Of these, only one involves visiting Boston before New York and New York before Chicago. Thus, by symmetry, in $1/6$th of the $10!$ possible itineraries is Boston visited before New York and New York visited before Chicago. Hence, the number of such itineraries is
$$\frac{1}{6} \cdot 10!$$
Edit: In response to the question posed in the comments, you counted the number of excluded itineraries incorrectly.
If you wish to exclude those itineraries in which New York is visited before Boston or Chicago is visited before New York, we start with the $10!$ possible itineraries. By symmetry, in half of the possible itineraries New York is visited before Boston. Similarly, in half of the possible itineraries Chicago is visited before New York. In subtracting these itineraries from the total possible number of itineraries, we have subtracted those in which New York is visited before Boston and Chicago is visited before New York twice. Thus, we must add those itineraries to the total. By the argument given above, in $1/6$th of the possible itineraries, New York is visited before Boston and Chicago is visited before New York. Hence, by the Inclusion-Exclusion Principle the number of possible itineraries is
$$10! - \frac{10!}{2} - \frac{10!}{2} + \frac{10!}{6} = \frac{10!}{6}$$
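Counts like these are easy to confirm by brute force on a smaller instance; a sketch with $n=6$ cities (so the enumeration stays small), where the claimed count is $6!/6$:

```python
from itertools import permutations
from math import factorial

# Brute-force check of the n!/6 count on a smaller instance (n = 6 cities,
# three of them the constrained B, N, C).
n = 6
cities = ['B', 'N', 'C'] + ['X%d' % i for i in range(n - 3)]
count = sum(1 for p in permutations(cities)
            if p.index('B') < p.index('N') < p.index('C'))
assert count == factorial(n) // 6
print(count)  # 120
```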
Alternatively, note that if we wish to count the number of itineraries in which New York is visited before Boston, we can choose two of the ten positions for New York and Boston, place New York in the leftmost of the selected positions and Boston in the other, then arrange the remaining eight cities in $8!$ orders. Thus, the number of itineraries in which Boston is visited before New York is
$$\binom{10}{2} \cdot 8! = \frac{10!}{2!8!} \cdot 8! = \frac{10!}{2!} = \frac{10!}{2}$$
and similarly for the number of itineraries in which Chicago is visited before New York. However, as above, we have to take into account the possibility that Chicago is visited before New York and New York is visited before Boston. |
Show me how this equation is true $\dfrac {a}{b}=\dfrac {b}{~\frac{a}{2}~}=\dfrac {2b}{a}$? | It's not something that holds generally, it's from the property of A4 paper that its aspect ratio is the same when folded in half (that is, the ratio of the long side to the short side is the same when folded in half):
$$\frac{\text{Long side}}{\text{Short side}}=\frac{\color{green}{\bullet}}{\color{orange}{\bullet}}=\frac{a}{b}=\frac{b}{a/2}$$ |
Holomorphic Functions and Cauchy Theorem | Hint: consider a contour that looks like this: |
Deriving Area of Circle | Because angles are measured counterclockwise from $0$, if you put the greater value of $\theta$ in the lower bound, you are integrating the given area backwards. If you swap the bounds of integration, you will get the same area. The bounds of integration should be $0$ and $\pi$, assuming you're integrating the area in the upper half plane. |
Why Fibonacci sequence start at $0$? Generalized Fibonacci sequences | The essential definition of a $k$-nacci sequence is that the earliest positive term is $1$ and all subsequent terms are the sum of the previous $k$ terms.
So to find the second earliest positive term (which is also $1$) you need at least $k-1$ zeros at the beginning of the sequence to do the sum. In your tables there are at least $k-1$ columns of implicit zeroes to the left of the columns you show.
Since the Fibonacci sequence conventionally starts with $F(1)=1$ and $F(0)=0$, it is a plausible but not necessary convention to start the $k$-nacci sequence with the initial leading zero also at a zero index, and this is what the OEIS has done.
You could do something different, but then you would need to translate between your indices and those used by others. That is the effect of conventions. |
What's wrong with this proof that irreps over $\mathbb{C}$ are dimension $1$? | I'm not sure why you'd say "it follows $\mathbb C[G]$ acts on $V$ by scalars."
Taking your example of $\mathbb C[S_3]$, you know it is a noncommutative semisimple ring, and you are aware that $M_2(\mathbb C)$ appears in its factorization. The way $\mathbb C[S_3]$ acts on the simple submodule for that piece is not via a scalar, but via $M_2(\mathbb C)$. |
Geometric interpretation of lines in a plane. | Sorry, this is not very elementary, but I think it is very nice. Since the isomorphism you asked about is not canonical, we will have to make some choices: we pick two distinct lines $A,B$ in $H$.
We begin with a lemma:
Lemma : Let $X \cong \Bbb P^1 \times \Bbb P^1$ a quadric surface. $X$ blown-up in one point is isomorphic to $\Bbb P^2$ blown-up in two points. In particular, blowing-up $X$ in one point and contracting two lines gives $\Bbb P^2$.
For definition of a blow-up and the proof of the lemma you can see these notes.
Construction of the isomorphism $\text{Line}(H) \cong \Bbb P^2$ :
Let $p = A \cap B$. Any line $L$ with $p \notin L$ is determined by $L \cap A$ and $L \cap B$. This gives a rational map $f : \text{Line}(H) \dashrightarrow \Bbb P^1 \times \Bbb P^1$. We can make the map an isomorphism in the following way: first we blow up $(p,p)$ to obtain a well-defined map $g : \text{Line}(H) \to Bl_{(p,p)}(\Bbb P^1 \times \Bbb P^1)$. Now, it fails to be surjective precisely at the points of the form $p \times \Bbb P^1$ or $\Bbb P^1 \times p$, but we can contract these lines by $k : Bl_{(p,p)}(\Bbb P^1 \times \Bbb P^1) \to Y$. By the lemma, $Y \cong H$, so $k \circ g : \text{Line}(H) \to H$ is an isomorphism. |
Maximum value of the Frobenius norm of a product of three matrices. | I suspect that in general, no manageable closed form will exist.
That being said, here is a thorough approach to the case of $R = 2$. Let $a_1,a_2$ denote the columns of $A$ and $c_1,c_2$ the rows of $\overline C$. Without loss of generality, take $\theta_1 = 0$ and $\theta_2 = \theta$. We then have
$$
ABC = \pmatrix{a_1 & a_2} \pmatrix{1&0\\0&e^{i\theta}}\pmatrix{c_1^*\\c_2^*} = a_1c_1^* + e^{i\theta}a_2c_2^*.
$$
Thus, we have
$$
\|ABC\|_F^2 = \operatorname{tr}[(a_1c_1^* + e^{i\theta}a_2c_2^*)^*(a_1c_1^* + e^{i\theta}a_2c_2^*)]
\\ = |a_1|^2 \cdot |c_1|^2 + |a_2|^2 |c_2|^2 + 2\operatorname{Re}[e^{i\theta} (a_1^*a_2)(c_2^*c_1)].
$$
This is maximized when $\theta = -\operatorname{Arg}[(a_1^*a_2)(c_2^*c_1)] = -\operatorname{Arg}[(A^*A)_{1,2} \cdot (CC^*)_{2,1}]$. |
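A quick numerical sanity check of the optimal phase, using $1\times 1$ blocks (the complex scalars below are arbitrary choices) so that $ABC$ reduces to $a_1\bar c_1 + e^{i\theta}a_2\bar c_2$:

```python
import cmath

# Sanity check of the optimal phase with 1-by-1 blocks: a1, a2, c1, c2 are
# arbitrary complex scalars, so ABC = a1*conj(c1) + e^{i theta} * a2*conj(c2).
a1, a2 = 1 + 2j, -0.5 + 1j
c1, c2 = 0.3 - 1j, 2 + 0.4j

def norm_sq(theta):
    return abs(a1 * c1.conjugate() + cmath.exp(1j * theta) * a2 * c2.conjugate()) ** 2

# claimed maximizer: theta* = -Arg[(conj(a1) a2)(conj(c2) c1)]
theta_star = -cmath.phase(a1.conjugate() * a2 * c2.conjugate() * c1)
best = norm_sq(theta_star)
for k in range(360):  # compare against a sweep over [0, 2*pi)
    assert best >= norm_sq(2 * cmath.pi * k / 360) - 1e-9
print(round(best, 6))
```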
Is $\emptyset$ in $R^n$ an open set? | Note that open and closed are not mutually exclusive descriptors (though you might be used to thinking that way due to learning about the closed $[a,b]$ and the open $(a,b)$ first). A set can be both open and closed (such as $\mathbb{R}^n$ and $\emptyset$). |
ZF and The Cardinality of The Set of Finite Subsets | Suppose that $\varphi:A\to\wp'(S)$ is a bijection. Let $A_0=\{a\in A:|\varphi(a)|\text{ is even}\}$, and let $A_1=A\setminus A_0$. Then $\{A_0,A_1\}$ is a partition of $A$ into infinite sets, and $A$ is therefore not amorphous. However, there are models of $\mathsf{ZF+\neg AC}$ in which there are amorphous sets. |
Is the below mathematically rigorous and the simplest way to find $x$ for the imaginary unit $i$ such that $i^n=x$? | Hint: by the division algorithm we have $$n=4k+r,\;\;\;\;\;\;\;\;r\in\{0,1,2,3\}$$
So $$i^n = (i^4)^k\cdot i^r = 1\cdot i^r =...$$ |
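A sketch of the same idea in code, reducing the exponent mod $4$ and checking against direct complex exponentiation:

```python
# Sketch: i^n depends only on n mod 4; check against Python's complex power.
def i_power(n):
    return [1, 1j, -1, -1j][n % 4]

for n in range(60):
    assert abs(i_power(n) - 1j ** n) < 1e-9
print(i_power(2019))  # 2019 = 4*504 + 3, so i^2019 = i^3 = -i
```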
Fair gambler's ruin problem intuition | The probability of reaching \$$n$ starting with \$$k$ can be split up by what possible first steps you can take - you either lose the first toss or win, each with probability $1/2$. If you win, you have \$$(k+1)$, so the probability of reaching \$$n$ from here is $p_{k+1}$. If instead you lose the first toss, you have \$$(k-1)$, and the corresponding probability is $p_{k-1}$. Then use the Law of Total Probability $P(X)=\sum_n P(X|Y_n)P(Y_n)$ where $Y_n$ is a partition of the sample space. In this case, $Y_1=\{\text{lose toss}\}$, and $Y_2=\{\text{win toss}\}$. Then you get
$$p_k=\frac12(p_{k-1}+p_{k+1})$$ Rearranging this gives $$2p_k=p_{k-1}+p_{k+1}\\p_k-p_{k-1}=p_{k+1}-p_k$$ as required, and iterating it multiple times gets to $p_1-p_0$, and of course, $p_0=0$. |
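The standard solution of this recurrence with boundary conditions $p_0=0$, $p_n=1$ is $p_k=k/n$; a quick exact check (the choice $n=10$ is arbitrary):

```python
from fractions import Fraction

# The recurrence p_k = (p_{k-1} + p_{k+1})/2 with p_0 = 0 and p_n = 1 is
# solved by p_k = k/n; exact check for the arbitrary choice n = 10.
n = 10
p = [Fraction(k, n) for k in range(n + 1)]
assert p[0] == 0 and p[n] == 1
for k in range(1, n):
    assert p[k] == (p[k - 1] + p[k + 1]) / 2
print(p[3])  # 3/10
```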
Let $a^n, a^m \in (a^k)$ for some positive integer $k$. Then $k \mid n, m.$ Hence $k \mid \operatorname{gcd(n, m)}?$ | Since $k$ divides both $m$ and $n$, hence it divides all linear combinations $mx+ny$. GCD is the least positive linear combination of $m$ and $n$, hence divisible by $k$ as well OR you can use the definition of GCD which stipulates that every common divisor (which happens to be $k$ here) divides the GCD. |
Representation of dirac delta function | As distributions, $d_1=d_2$ iff for all test functions $f\in C_c^{\infty}( \Bbb{R})$, $(d_1,f)=(d_2,f)$.
So you are correct. |
Definition and use of Empirical Cumulative Distribution Function (ECDF) | Sometimes one says that a histogram based on a large sample size
gives a good idea about the shape of the population density function.
(But information is lost in binning, and a modern 'density estimator'
usually works better.)
In somewhat the same way an empirical cumulative distribution function (ECDF) of a large sample is a good estimator of the population CDF.
The following R program samples 3000 observations from $Gamma(5, 1)$ to illustrate @Clement C's comment. The figure below shows the histogram (at left) along with the known population density (dotted) and a density estimator. At right, the
CDF (thin light green) is superimposed on the ECDF (heavy black) of the sample.
A larger sample would show better fit, but perhaps too good to
see distinctions between population and sample curves.
x = rgamma(3000, 5, 1) # generate random sample
par(mfrow=c(1,2)) # two panels in one graph
hist(x, prob=T, col="wheat")
lines(density(x), lwd=2, col="blue") # density estimator
curve(dgamma(x, 5, 1), lty="dotted", lwd=2, col="red", add=T)
plot.ecdf(x) # empirical CDF
curve(pgamma(x, 5, 1), col="green", add=T) # pop CDF
par(mfrow=c(1,1)) # returns to default single panel
If you have access to R, you can try other population distributions
and sample sizes. The same program as above, except with a sample of
size $n = 100$ was used to produce the figure below. Roughly speaking, the ECDF gives a better estimate of the CDF than a
histogram gives of the PDF. A 'nonparametric bootstrap' procedure uses the sample ECDF in place of the unknown population CDF. |
"Independent and Dependent Events" Algebra 2 Homework | Using the fact that the events are independent we have that $P(A,B) = P(A)P(B)$. Then we can use the definition of conditional probability
$$P(A\mid B) = P(A,B)/P(B) = \frac{P(A)P(B)}{P(B)} = P(A).$$
This is a simple consequence of independence. Whether or not event $B$ occurs does not affect the probability of event $A$ happening, therefore the conditional probability $P(A\mid B) = P(A)$. |
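A small illustration with two independent fair dice (the particular events are arbitrary choices):

```python
from fractions import Fraction
from itertools import product

# Two independent fair dice. A = "first die shows 6", B = "second die is even";
# then P(A and B) = P(A)P(B) and P(A|B) = P(A).
omega = list(product(range(1, 7), repeat=2))
A = {w for w in omega if w[0] == 6}
B = {w for w in omega if w[1] % 2 == 0}

def P(event):
    return Fraction(len(event), len(omega))

assert P(A & B) == P(A) * P(B)   # independence
assert P(A & B) / P(B) == P(A)   # P(A|B) = P(A)
print(P(A))  # 1/6
```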
Number of Points on an Elliptic Curve | The number of points is $1+\sum_{x\in \mathbb F_p}\left(\left(\frac{x^3+ax+b}{p}\right)+1\right)$, where $\left(\frac{x^3+ax+b}{p}\right)$ is the Legendre symbol modulo $p$. If $x$ is not a root of $x^3+ax+b$, the term inside the sum is either $0$ or $2$. Otherwise, it is $1$. If $x^3+ax+b$ has a root, then it has either $1$ or $3$ roots and the sum is odd, which makes the number of points even.
More conceptually, if that polynomial has a root then $E$ has a rational $2$-torsion point, and therefore $2$ divides the order of $E(\mathbb F_p)$. |
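The point-count formula is easy to check by brute force for a small prime; a sketch with the arbitrary curve $y^2=x^3+2x+3$ over $\mathbb F_{13}$:

```python
# Check N = 1 + sum_x (legendre(x^3+ax+b) + 1) against a direct point count
# of y^2 = x^3 + ax + b over F_p, for the arbitrary choice a=2, b=3, p=13.
def legendre(a, p):
    # Legendre symbol via Euler's criterion
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

p, a, b = 13, 2, 3
direct = 1 + sum(1 for x in range(p) for y in range(p)
                 if (y * y - (x ** 3 + a * x + b)) % p == 0)  # +1: point at infinity
formula = 1 + sum(legendre(x ** 3 + a * x + b, p) + 1 for x in range(p))
assert direct == formula
print(direct)
```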
What is the most efficient way to calculate $R^2$? | Recall that in simple linear regression the square of the Pearson correlation coefficient equals $R^2$, thus
$$
r^2 = \left( \frac{\sum x_i y_i -n\bar{x}\bar{y}}{(\sum x_i^2 - n\bar{x}^2)^{1/2}(\sum y_i^2 - n\bar{y}^2)^{1/2}} \right)^2=\hat{\beta}_1^2\frac{S_{XX}}{S_{YY}}=R^2.
$$
This, given the table, shouldn't be hard to compute. |
Permutations and Sample Spaces | If there are $k$ "cars" and each car has $m$ choices, the total number of choices is $m^k$.
So in the case of $3$ cars and $3$ choices, the sample space has $3^3$ elements.
As far as listing goes, whether a way to list is good depends on the kind of information you need quick access to. The issues are complex and important. However, what the three cars do can be thought of as a word of length $3$ over the alphabet L, R, S. Listing alphabetically seems reasonable. So the first nine items are LLL, LLR, LLS, LRL, LRR, LRS, LSL, LSR, LSS.
If we think of L as the digit $0$, R as $1$, and S as $2$, then we can think of our string as a "three-digit" number in base $3$. |
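A short sketch generating the alphabetical listing and checking the base-$3$ correspondence:

```python
from itertools import product

# Generate the 3^3 = 27 words alphabetically and check the base-3 reading
# with L=0, R=1, S=2.
words = [''.join(w) for w in product('LRS', repeat=3)]
assert len(words) == 27
assert words[:9] == ['LLL', 'LLR', 'LLS', 'LRL', 'LRR', 'LRS', 'LSL', 'LSR', 'LSS']
digit = {'L': 0, 'R': 1, 'S': 2}
for k, w in enumerate(words):
    assert k == sum(digit[c] * 3 ** (2 - i) for i, c in enumerate(w))
print(words[13])  # 13 = 1*9 + 1*3 + 1 in base 3, i.e. RRR
```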
Lebesgue-Stieltjes measure and substitution counterexample. | This idea comes from Nate Eldredge: Suppose $\mu$ is a point mass of 1 at $\alpha/2$ for $\alpha>0$ and $0$ elsewhere. This implies that $\int_{(0,\infty)} f(x) d\mu = f(\alpha/2)$, while $\int_{(\alpha,\infty)} f(x-\alpha) d\mu = 0$.
There is something consistent about the usual Lebesgue measure in that $m((a,b)) = b-a$. In other words the measure of intervals is not based on their location on the number line, but their width. |
Is $x$ rational? If so, express $x$ as a ratio of two integers. If not, give a counterexample. | Let us put
$$X=\frac{1}{x}$$
$$Y=\frac{1}{y}$$
$$Z=\frac{1}{z}$$
we have
$$X+Y=\frac{1}{a}$$
$$X+Z=\frac{1}{b}$$
$$Y+Z=\frac{1}{c}$$
thus
$$X+Y+Z=\frac{1}{2a}+\frac{1}{2b}+\frac{1}{2c}$$
and
$$X=\frac{1}{2a}+\frac{1}{2b}-\frac{1}{2c}$$
$\implies$
$$x=\frac{2abc }{bc+ac-ab }\in \mathbb Q$$ |
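A quick exact check of the final formula with the arbitrary values $a=2$, $b=3$, $c=5$:

```python
from fractions import Fraction

# Exact check of x = 2abc/(bc + ac - ab) for a=2, b=3, c=5 (arbitrary),
# assuming the system 1/x + 1/y = 1/a, 1/x + 1/z = 1/b, 1/y + 1/z = 1/c.
a, b, c = Fraction(2), Fraction(3), Fraction(5)
x = 2 * a * b * c / (b * c + a * c - a * b)
y = 1 / (1 / a - 1 / x)   # from the first equation
z = 1 / (1 / b - 1 / x)   # from the second equation
assert 1 / y + 1 / z == 1 / c   # the third equation holds
print(x)  # 60/19
```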
Identity for frequency of integers with smallest prime(n) divisor | The definition is $$\psi(x)=\sum_{p^k\le x}\log p$$ where $p$ runs through primes. That is, it's the sum of $\log p$ over all prime powers $p^k$ not exceeding $x$. So $$e^{\psi(p_n)}=\prod_{p^k\le p_n}p=p_n\prod_{p^k\le p_n-1}p=p_ne^{\psi(p_n-1)}$$ where $p_n$ is the $n$th prime. So, the left side of the identity is $${1\over p_n}{\phi(u)\over u}$$ where $u=e^{\psi(p_n-1)}$. Since $\phi(m)=m\prod_{p\mid m}(1-p^{-1})$, the identity is immediate. |
Prove that $x^p\equiv 1$ (mod $p$) has only one solution. | $(x-1)^p \equiv (x^p - 1) \mod p$ by the binomial theorem (the intermediate binomial coefficients $\binom{p}{j}$, $0<j<p$, are all divisible by $p$). So, if $p$ divides $(x^p - 1)$, it certainly divides $(x-1)^p$. Therefore, since $p$ is prime, $p$ divides $x-1$. Hence, you get the desired result.
Prove that for all $n \ge 2$ there exist finite field extensions of $\mathbb{Q}$ of dimension $n$. | You are definitely on the right track. You should justify why you are guaranteed an irreducible polynomial, which is simple: let $f(x) = x^n - p$ for some prime $p$. Then $f$ is certainly irreducible by Eisenstein's criterion.
From there, if $\alpha$ is a root of $f$, then $\mathbb{Q}[\alpha]$ is a field extension of dimension $\deg(f) = n$.
I should add that $f(x) = x^n - a$ for any arbitrary $a \in \mathbb{Q}$ is not necessarily irreducible. For example, $f(x) = x^n - 1$ will always have a factor of $(x-1)$. |
Retract of noncompact surface to its boundary? | "Simple" depends on what you've studied. Under one valuation ...
Let $M$ be a connected, noncompact, 2-manifold with boundary having circle boundary, $\partial M \cong S^1$. Show there is a retraction $r: M \rightarrow \partial M$.
Let $E$ be the set of ideal points in the Freudenthal end point compactification of $M$ and let $M''$ be the Freudenthal compactification of $M$. Let $e \in E$ and $M' = M'' \smallsetminus \{e\}$, the Freudenthal end point compactification of $M$, ignoring the end corresponding to $e$. A quick reminder of these ideas is here. For more on this, see Raymond, Frank, The End Point Compactification of Manifolds, including details of the construction when $E$ is not countable. $M'$ is a connected, noncompact, 2-manifold with circle boundary $\partial M = \partial M'$, and one end. We will abuse notation and call this end $e$.
Under a nested compact exhaustion of $M'$, there is an annular neighborhood of the end $e$. (It is an open disk neighborhood of $e$ in the Freudenthal compactification.) (See O. Ya. Viro et al, Elementary Topology: Textbook in Problems, Ch. XI, 48${}^\circ$2x for more.) Let $A$ be a core $S^1$ of this annulus and $p_1$ be a point of $A$. Let $B = \partial M' \cong S^1$ and $p_2$ be a point of $B$. There is a simple path, $p$, in $M'$ connecting $p_1$ and $p_2$ (which we would use to change the basepoint of the fundamental group from one to the other). $B$ has an open collar homotopic to an annulus closed on the $B$ boundary component and open on the other. $p$ has an open bicollar homeomorphic to $p \times (0,1)$. Let $S$ be the union of these three open collars. ($S$ is a regular neighborhood of a $1$-skeleton of the boundary, the end $e$, and a path between the two in $M'$.)
Observe that $M' \smallsetminus S$ is a connected, noncompact 2-manifold with circle boundary. We construct $r$ in two steps. Let $r_1$ be the continuous map holding $A$, $B$, and $p$ fixed and contracting $M' \smallsetminus S$ to a point. Let $D$ be the open unit disk and $r_1(M')+D$ be $D$ injectively identified to the boundary of $r_1(M')$. The nullhomotopic homotopy classes were parallel to $A$ and $B$, and were rendered nullhomotopic by transport across $D$, so $r_1(M')+D$ is a simply connected, noncompact 2-manifold without boundary with one end. By Viro et al., 53.Ax, a simply connected non-compact manifold of dimension two without boundary is homeomorphic to $\mathbb{R}^2$. Therefore, $r_1(M') +D \cong \mathbb{R}^2$. Constructing the retract $r_2:\mathbb{R}^2 \smallsetminus\{(0,0)\} \rightarrow B$ is a standard exercise. Then $\left. r_2 \right|_{r_1(M')}$ is a domain restriction of a continuous function, so is continuous, and $r = \left. r_2 \right|_{r_1(M)} \circ \left. r_1 \right|_{M}$ is the desired map.
In short: find a very simple neighborhood of one end, the boundary, and a path between the two. Crush everything else to a point, yielding a half-infinite cylinder. Then telescope the cylinder onto the boundary. |
Order of operations (BODMAS) | $$40-10+22.5=52.5$$
You're correct: subtraction and addition have the same level of priority, so when both appear, you operate from left to right. If you were puzzled when you saw the "answer" $\,7.5$, good for you, since $\;\;40-10+22.5\neq 7.5$!
Now, note: $$40-(10+22.5)=7.5$$
because operations in "brackets" (or parentheses) have greatest priority.
So let's be forgiving: perhaps the author of the problem statement/review notes forgot parentheses! |
Finding the domain of an equation and then solving. | In general when we deal with expression in the form
$$\frac{A(x)}{B(x)}$$
we always need that $B(x)\neq 0$ because the division by zero is not defined.
Therefore in that case we need for the existence that $x\neq 0$ then, keeping in mind that condition, we can multiply both sides by $6x$ to obtain
$$\frac{x+2}{6x} +1 = \frac{x-7}{x}\iff x+2 +6x = 6(x-7)$$
from here we can solve for $x\neq 0$. |
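Carrying the solve to the end: $x+2+6x=6(x-7)$ gives $7x+2=6x-42$, so $x=-44$, which respects the domain condition $x\neq 0$. An exact check:

```python
from fractions import Fraction

# Finishing the solve: x + 2 + 6x = 6(x - 7) gives 7x + 2 = 6x - 42, so x = -44.
x = Fraction(-44)
assert x != 0  # the domain condition
assert (x + 2) / (6 * x) + 1 == (x - 7) / x  # the original equation holds
print(x)  # -44
```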
Place b blue balls and r red balls in a box. Prove that the probability that the kth ball taken from the box is blue is the same for 1 ≤ k ≤ b. | We have $b$ blue balls and $r$ red balls, making for $b+r$ total balls in the box. Let $B_k$ be the event the $k^{th}$ ball drawn is blue.
Clearly, $P(B_1) = \frac{b}{b+r}$, because out of the total number of balls in the box, we have $b$ blue balls to choose from.
What is $P(B_2)$? This requires a little more thought. Since we are drawing a second ball, this means we have already drawn a ball from the box, meaning we are now drawing from a box of $b+r - 1$ balls. Also, we must consider the cases where the already drawn ball is either blue or red.
In the first case, there are $b-1$ balls to choose from out of the $b+r-1$ balls. This case happens with $\frac{b}{b+r}$ probability. In the other case, we drew red and there are still $b$ balls to choose from. This case happens with $\frac{r}{b+r}$ probability. So, we have:
$P(B_2) = \frac{b}{b+r}\cdot\frac{b-1}{b+r-1} + \frac{r}{b+r}\frac{b}{b+r-1}$
$=\frac{b(b+r-1)}{(b+r)(b+r-1)} = \frac{b}{b+r}$
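The pattern $P(B_k)=\frac{b}{b+r}$ for every position $k$ can be confirmed by brute force for small $b,r$ (a sketch with the arbitrary choice $b=3$, $r=2$):

```python
from fractions import Fraction
from itertools import permutations

# Brute-force check that P(k-th ball is blue) = b/(b+r) for every position k
# (b = 3 blue, r = 2 red: small enough to enumerate all orderings of labeled balls).
b, r = 3, 2
orders = list(permutations('B' * b + 'R' * r))  # equally likely orderings
for k in range(b + r):
    p_k = Fraction(sum(1 for o in orders if o[k] == 'B'), len(orders))
    assert p_k == Fraction(b, b + r)
print(Fraction(b, b + r))  # 3/5
```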
Using this information, can you figure out a way we can frame the expression for the general case of $P(B_k)$? (Hint: conditional probabilities, like we used for the case of $B_2$). |
Computing endpoint coordinates of point P of a line of known length d that ends perpendicularly at the known mid-point coordinates of another line | Hints
The slope of $d$ is given by $$m_d=\frac{x_1-x_2}{y_2-y_1}$$
The midpoint $M$ of the segment has the coordinates $$M\bigg(\frac{x_1+x_2}{2} \mid \frac{y_1+y_2}{2}\bigg)$$
If $P$ has the coordinates $P(x_p\mid y_p)$, then $$d^2=\big (\frac{x_1+x_2}{2}-x_p\big)^2+\big(\frac{y_1+y_2}{2}-y_p\big)^2$$ |
Change the form of equation of surface | First you have to eliminate the linear terms of the equation using a special translation. After that, in order to eliminate the mixed terms, you have to diagonalize the matrix you wrote above. Then you'll have the diagonal matrix and your equation. |
The numbers $1,\frac{1}{2},\frac{1}{3},\dots,\frac{1}{100}$ are written on a white board | Note that
$$a+b+ab=(a+1)(b+1)-1$$
So, in the end after all operations, we get
$$(a_1+1)(a_2+1) \cdots (a_n+1) -1=\prod_{i=1}^{n}\left(1+\frac1i\right)-1=(n+1)-1=n$$ |
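A simulation confirming the invariant in exact arithmetic (the order of operations is randomized, since the result does not depend on it):

```python
from fractions import Fraction
import random

# Simulate the whiteboard process on 1, 1/2, ..., 1/100: repeatedly replace two
# numbers a, b by a + b + ab. The invariant prod(1 + x) forces the result 100.
nums = [Fraction(1, i) for i in range(1, 101)]
rng = random.Random(0)  # any order of operations gives the same result
while len(nums) > 1:
    a = nums.pop(rng.randrange(len(nums)))
    b = nums.pop(rng.randrange(len(nums)))
    nums.append(a + b + a * b)
print(nums[0])  # 100
```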
Finding the general equation of a cross section of a roof and the position of each joist that makes up its surface | The way to get the information that you want is to start with the equation of the plane, and plug it into the equation for $z$. This way you get the intersection. All you need is now to have $z$ as a linear function of either $x$ or $y$.
Suppose $a\ne 0$. Then we can write $x=\frac{c-by}{a}$. If we put this expression into $f(x,y)$ we get
$$z=\frac{5}{2}+\frac{1}{200}\left(9\frac{c^2-2bcy+b^2y^2}{a^2}-4y^2\right)$$
The coefficient of $y^2$ is $9\frac{b^2}{a^2}-4$. In order to have the intersection as a line, this coefficient must be $0$. You then get $$\frac{b}{a}=\pm\frac{2}{3}$$
You notice that you have two sets of parallel lines. In your figure you show one of these. In order to get the $c$ value, you just need to plug in the value for a point on the line $P_k$. Also note that you can get only ratios of coefficients, since $ax+by=c$ is equivalent to say $2ax+2by=2c$. You can choose $a=1$ and then calculate $b$ and $c$. As a last observation, make sure you cannot get $a=0$. That's easy. Just plug the equation for the plane into the expression for $z$, and show that the coefficient of $y^2$ is not $0$. |
Prove that $\lim\limits_{n \rightarrow \infty} \left(1-\frac{1}{n^2}\right)^n= 1.$ | Hint: if $x\geqslant 0$ and $n$ is an integer, then $(1-x)^n\geqslant 1-nx$.
This can be seen by defining $f(x)=(1-x)^n+nx$ and showing that its derivative is non-negative for $0\le x\le 1$, so that $f(x)\ge f(0)=1$. |
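Numerically, the resulting squeeze $1-\frac1n\le\left(1-\frac1{n^2}\right)^n\le 1$ (the lower bound is the hint with $x=1/n^2$) is easy to observe:

```python
# Numeric illustration of the squeeze 1 - 1/n <= (1 - 1/n^2)^n <= 1,
# which forces the limit to be 1.
for n in [10, 100, 1000, 10 ** 6]:
    v = (1 - 1 / n ** 2) ** n
    assert 1 - 1 / n <= v <= 1
    print(n, v)
```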
How to prove ${n \choose p} \equiv\left\lfloor \frac{n}{p} \right\rfloor\pmod p$ | Lucas' theorem states that if we have two naturals $n,m$ and prime $p$, we can write $n$ and $m$ in base $p$:
$$n=n_0+n_1p+\cdots+n_kp^k ,\qquad m=m_0+m_1p+\cdots+m_kp^k$$
where $0\le n_i, m_i \le p-1$. Then:
$$\binom{n}{m}\equiv\prod_{i=0}^{k}\binom{n_i}{m_i}\bmod p$$
And so if $m=p$, we know that $p = 0 + 1 \cdot p$ in base $p$, so we get:
$$\binom{n}{p}\equiv\binom{n_0}{0} \binom{n_1}{1} \binom{n_2}{0} \binom{n_3}{0} \cdots\bmod p$$
Or:
$$\binom{n}{p}\equiv n_1 \bmod p$$
How do we get the digit $n_1$? It's like asking how to get the tens place digit in base $10$ from a number $1234$. You floor-divide it by the base (turn $1234$ into $123$) and then take that number mod the base ($123$ mod $10$ is $3$, which is the digit we were after).
Similarly, $n_1 = \lfloor\frac{n}{p}\rfloor \bmod p$, which means:
$$\binom{n}{p} \equiv \left\lfloor\frac{n}{p} \right\rfloor \bmod p$$ |
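A brute-force check of the congruence for small primes:

```python
from math import comb

# Check binom(n, p) ≡ floor(n/p) (mod p) for small primes p and a range of n.
for p in [2, 3, 5, 7, 11]:
    for n in range(p, 300):
        assert comb(n, p) % p == (n // p) % p
print(comb(20, 3) % 3, (20 // 3) % 3)  # 0 0
```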
How do I calculate the angle between the tangent to an outer circumference and a line passing through a specific point on the inner circumference? | Apply sine rule to triangle $AOC$:
$$
{\sin\beta\over r}={\sin\alpha\over r+d}.
$$ |
Evaluate the following integral. | HINT:
Set $x=\sin y$ to get
$$\int_0^{\pi/2}\sin^{2n+1}y\ dy=I_{2n+1}$$
From this, $$I_m=\dfrac{m-1}mI_{m-2}$$ |
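The reduction formula can be checked numerically, e.g. with composite Simpson's rule:

```python
from math import sin, pi

# Numeric check of I_m = ((m-1)/m) I_{m-2} for I_m = integral of sin^m(y)
# over [0, pi/2], via composite Simpson's rule (even number of subintervals).
def I(m, steps=2000):
    h = (pi / 2) / steps
    total = sin(0.0) ** m + sin(pi / 2) ** m
    for j in range(1, steps):
        total += (4 if j % 2 else 2) * sin(j * h) ** m
    return total * h / 3

for m in [3, 5, 7, 9]:
    assert abs(I(m) - (m - 1) / m * I(m - 2)) < 1e-9
print(round(I(1), 6))  # the integral of sin over [0, pi/2] is 1
```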
Left adjoint functor and corepresentablility | I understand the statement to be:
Suppose $G:A\to B$. If there is a functor $F$ with $F \dashv G$, then $\hom (X,G-)$ is representable for all $X\in B$, and conversely.
One direction is trivial because by adjointness we have that the pair $(FX,\phi )$ where
$\phi:\hom (FX,-)\to \hom (X,G-)$ is a representation of $\hom (X,G-)$ because $\phi $ is a natural isomorphism.
On the other hand, if $(A,\phi )$ and $(A',\phi' )$ are representations of $\hom (X,G-)$ and $\hom (Y,G-)$, respectively, then $\phi :\hom (A,-)\to \hom (X,G-)$ and $\phi' :\hom (A',-)\to \hom (Y,G-)$ are natural isomorphisms.
We now define $F:B\to A$ by $FX=A$ on objects.
To obtain $F$ on arrows $f:X\to Y$ consider $\phi _{FY}:\hom (FX,FY)\to \hom (X,GFY)$ and $\phi' _{FY}(1_{FY})$ and take $Ff=\phi^{-1} _{FY}\left ( \phi' _{FY}(1_{FY})\circ f \right )$.
You can check that $F$ is a functor and that $F \dashv G$. |
How do I use the appropriate addition formula to find the exact value of this expression | $(1)$ $$\sin\left(\frac{11}{12}\pi\right) = \sin\left(\frac 5{12}\pi + \frac 6{12} \pi \right) = \sin\left(\frac 5{12}\pi + \frac 12 \pi\right)$$
You can use the sum-of-angles formula for $\sin(a + b)$:
$$\sin(a + b) = \sin(a)\cos(b) + \cos(a)\sin(b)$$
$$\sin\left(\frac 5{12}\pi + \frac 12 \pi\right) = \sin\left(\frac 5{12} \pi\right)\cos\left(\frac 12 \pi\right) + \cos\left(\frac 5{12} \pi\right)\sin\left(\frac 12 \pi\right)$$
$(2)$ Notice that the first term in the sum is multiplied by $\cos\left(\frac{\pi}{2}\right) = 0.$ And since $\sin\left(\frac{\pi}{2}\right) = 1$ you need only compute $\;\cos\left(\frac{5 \pi}{12}\right)\;$. Indeed, seeing that $$\cos\left(\frac{5\pi}{12}\right) = \cos\left(\frac{2\pi}{12} + \frac{3\pi}{12}\right) = \cos\left(\frac{\pi}6 + \frac{\pi}4\right)\tag{1}$$
we can use the "angle-sum formula" for cosine: $$\cos(a + b) = \cos a \cos b - \sin a \sin b$$ $$\cos\left(\frac{\pi}6 + \frac{\pi}4\right) = \cos\left(\frac{\pi}6\right)\cos\left(\frac{\pi}4\right) - \sin\left(\frac{\pi}6\right)\sin\left(\frac{\pi}4\right)$$
Now the computation is one with which you should be familiar:
$$\sin\left(\frac{\pi}{6}\right) = \frac12,\;\;\cos\left(\frac{\pi}6\right) = \sqrt 3/2,\;\; \cos\left(\frac{\pi}4 \right) = \sin\left(\frac{\pi}4\right) = 1/\sqrt 2$$
You can scroll over the greyed out line below to check your answer.
$$\;\sin\left(\frac{11 \pi}{12}\right)\;\;=\;\;\frac{\sqrt 3 -1}{2\sqrt 2}$$
You can also use the fact that $$\sin\left(x +\frac {\pi}{2}\right) = \cos(x)$$ |
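A quick numeric check of the final value and of the shift identity used above:

```python
from math import sin, cos, pi, sqrt

# Check sin(11*pi/12) = (sqrt(3) - 1)/(2*sqrt(2)) and sin(x + pi/2) = cos(x).
assert abs(sin(11 * pi / 12) - (sqrt(3) - 1) / (2 * sqrt(2))) < 1e-12
assert abs(sin(5 * pi / 12 + pi / 2) - cos(5 * pi / 12)) < 1e-12
print(round(sin(11 * pi / 12), 6))  # 0.258819
```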
Proving inequality $\frac{a}{\sqrt{a+2b}}+\frac{b}{\sqrt{b+2c}}+\frac{c}{\sqrt{c+2a}}\lt \sqrt{\frac{3}{2}}$ | This is too long for a comment but it's more of a suggestion than an answer. Without loss of generality let $a \leq b \leq c. $ Put $a = x,\ b = y,\ c = z = 1 - x - y$:
Let $$w(x,y) = \frac{x}{\sqrt{x+2y}} + \frac{y}{\sqrt{y+2(1-x-y)}} + \frac{1-x-y}{\sqrt{1-x-y+2x}} $$
The constraint $a + b + c = 1$ means that $x$ can be no larger than $1/3$: otherwise $y$ must be greater than $1/3$, and $z$ is forced to be less than $1/3$, contrary to assumption. So $0 < x \leq 1/3.$ If $x$ is minimally (almost) $0$, then $y$ is at most $1/2$, otherwise $z$ is less than $y$, contrary to assumption.
So $x\leq y \leq 1/2.$
So the problem is now:
Maximize w(x,y)
subject to
$0 < x\leq 1/3$
$x\leq y \leq 1/2$
If the maximum we find is less than $\sqrt{\frac{3}{2}}$ the original inequality is true.
We should verify (formally) the visual evidence of a 3D plot of the feasible region, which is that in $\{0 < x\leq 1/3, x\leq y \leq 1/2\}$ the function $w(x,y)$ has a local maximum on $y = 0^+$ (a last reminder that $0<x,y$) for suitable choice of x. Then to find that maximum we can let y equal $0$ and set the derivative $w_x(x,0) $ equal to zero. This (using a numerical routine) gives $x\approx 0.1547.$ A plot of $w(x, 0)$ shows this to be a maximum at about 1.179.
The value of $\sqrt{3/2}$ is about 1.22, so $w(x,y)\ll \sqrt{3/2}$ for this point.
Again, using the visual shortcut, there also appears to be a local maximum for $w(x,y)$ on $x=0$ for suitable choice of y. If we let $w_y(0,y)= 0$ we (again numerically) obtain a value of $y \approx 0.845,$ at which $w(0,y)$ appears to be a maximum, but this is outside the feasible region, and the maximum is attained in the feasible region at $w(0,1/2) = 1.115,$ which is less than 1.179 and not maximal for the region as a whole.
So this sketch of an argument, which omits some important formalities, suggests the inequality is true.
Edit: Khue noted a problem with this answer and suggested a simple fix. We cannot assume $x \le y \le z$ without loss of generality (the expression is only cyclic, not symmetric), but we can assume $x = \min(x,y,z)$; then the problem is to maximize $w(x,y)$ subject to:
$0\leq x \leq 1/2$, and $x \leq y \leq 1.$
As Khue notes, this changes the feasible region. Again we can plot the feasible region and the max appears to occur along $x = 0, 0 \leq y \leq 1.$ As before finding $w_y(0,y)$ the max occurs at about $y = 0.845299$ and at that value $w = 1.17996.$ |
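To complement the sketch, here is a brute-force grid search (Python, my addition, not part of the original argument) over the whole simplex $a+b+c=1$, with no symmetry assumption at all:

```python
import math

def w(a, b, c):
    # the cyclic sum a/sqrt(a+2b) + b/sqrt(b+2c) + c/sqrt(c+2a)
    s = 0.0
    for p, q in ((a, b), (b, c), (c, a)):
        if p > 0:  # a zero numerator contributes nothing
            s += p / math.sqrt(p + 2 * q)
    return s

best = 0.0
n = 400
for i in range(n + 1):
    for j in range(n + 1 - i):
        a, b = i / n, j / n
        best = max(best, w(a, b, 1 - a - b))

print(best, math.sqrt(1.5))  # best stays strictly below sqrt(3/2) ~ 1.2247
```

The grid maximum lands near the value $\approx 1.18$ found in the answer, comfortably below $\sqrt{3/2}$.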
Integral that arises from the derivation of Kummer's Fourier expansion of $\ln{\Gamma(x)}$ | Random Variable has kindly pointed out that the integral has been evaluated by Cody on this site. Here is a slightly different method of evaluating this integral.
Begin with the infinite product representation of the gamma function.
\begin{align}
\Gamma(x)=\frac{e^{-\gamma x}}{x}\prod^\infty_{k=1}e^\frac{x}{k}\left(1+\frac{x}{k}\right)^{-1}
\end{align}
Take the logarithm and multiply throughout by $\sin(2n\pi x)$.
\begin{align}
\ln{\Gamma(x)}\sin(2n\pi x)=-(\gamma x+\ln{x})\sin(2n\pi x)+\sum^\infty_{k=1}\left\{\frac{x}{k}-\ln\left(1+\frac{x}{k}\right)\right\}\sin(2n\pi x)
\end{align}
Integrating the non-sum terms from $0$ to $1$,
\begin{align}
\int^1_0(-\gamma x-\ln{x})\sin(2n\pi x)\ {\rm d}x
=&\frac{\gamma}{2n\pi}+\frac{\ln{x}\cos(2n\pi x)}{2n\pi}\Bigg{|}^1_0-\int^1_0\frac{\cos(2n\pi x)}{2n\pi x}{\rm d}x\\
=&\frac{\gamma}{2n\pi}+\left[\frac{\ln{x}\cos(2n\pi x)-{\rm Ci}(2n\pi x)}{2n\pi}\right]^1_0\\
=&\frac{\gamma-{\rm Ci}(2n\pi)}{2n\pi}-\lim_{\epsilon\to 0}\frac{\ln{\epsilon}-{\rm Ci}(2n\pi \epsilon)}{2n\pi}\\
=&\frac{\gamma-{\rm Ci}(2n\pi)}{2n\pi}-\lim_{\epsilon\to 0}\frac{\ln{\epsilon}-\ln(2n\pi)-\ln{\epsilon}-\gamma+\mathcal{O}(\epsilon)}{2n\pi}\\
=&\frac{2\gamma+\ln(2n\pi)-{\rm Ci}(2n\pi)}{2n\pi}
\end{align}
Integrate the remaining terms from $0$ to $1$.
\begin{align}
&\int^1_0\sum^\infty_{k=1}\left\{\frac{x}{k}-\ln\left(1+\frac{x}{k}\right)\right\}\sin(2n\pi x)\ {\rm d}x\\
=&-\frac{1}{2n\pi}\sum^\infty_{k=1}\left\{\frac{1}{k}+{\rm Ci}(2n\pi k+2n\pi)-{\rm Ci}(2n\pi k)-\ln\left(1+\frac{1}{k}\right)\right\}\\
=&-\frac{1}{2n\pi}\left[\sum^\infty_{k=1}\left\{\frac{1}{k}-\ln\left(1+\frac{1}{k}\right)\right\}+\lim_{N\to\infty}\left(\sum^{N+1}_{k=2}{\rm Ci}(2n\pi k)-\sum^{N}_{k=1}{\rm Ci}(2n\pi k)\right)\right]\\
=&-\frac{1}{2n\pi}\left[\gamma+\lim_{N\to\infty}\left({\rm Ci}(2n\pi+2n\pi N)-{\rm Ci}(2n\pi)\right)\right]
=\frac{{\rm Ci}(2n\pi)-\gamma}{2n\pi}
\end{align}
Adding them together,
\begin{align}
\color{red}{\int^1_0\ln{\Gamma(x)}\sin(2n\pi x)\ {\rm d}x}=\frac{2\gamma+\ln(2n\pi)-{\rm Ci}(2n\pi)+{\rm Ci}(2n\pi)-\gamma}{2n\pi}\color{red}{=\frac{\gamma+\ln(2n\pi)}{2n\pi}}
\end{align}
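The closed form can be checked numerically (Python with scipy, my addition) for a few values of $n$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

def check(n):
    # integral of ln Gamma(x) * sin(2 n pi x) over [0, 1];
    # the log singularity of gammaln at 0 is integrable
    val, _ = quad(lambda x: gammaln(x) * np.sin(2 * n * np.pi * x),
                  0, 1, limit=200)
    expected = (np.euler_gamma + np.log(2 * n * np.pi)) / (2 * n * np.pi)
    return val, expected

for n in (1, 2, 3):
    print(check(n))
```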
I am looking forward to seeing cleaner and more interesting approaches to this integral.
As an aside, I derived that
$$\int^1_0\psi_0(x+a)\sin(2n\pi x)\ {\rm d}x={\rm Si}(2an\pi)-\frac{\pi}{2}$$
This isn't really related to the problem though. |
Proof of an inequality involving factorials | Here is the sketch of the proof.
$$
(n!)^{n+1}((n+1)!)^{-n}\gt \frac{n^{n^2+n}}{(n+1)^{n^2+n}},\\
\frac{n!}{(n+1)^n}\gt \frac{n^{n(n+1)}}{(n+1)^{n(n+1)}},\\
n!\gt \frac{n^{n(n+1)}}{(n+1)^{n^2}},\\
$$
The last inequality can be proved by induction together with $$
\left(1+\frac{1}{n}\right)^n \lt \left(1+\frac{1}{n+1}\right)^{n+1}.
$$ |
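The last inequality can also be confirmed exactly for small $n$ (Python, my addition), by clearing denominators so that everything stays in integer arithmetic:

```python
from math import factorial

# Exact integer check of  n! > n^(n(n+1)) / (n+1)^(n^2),
# rewritten as  n! * (n+1)^(n^2) > n^(n(n+1)).
for n in range(2, 30):
    assert factorial(n) * (n + 1) ** (n * n) > n ** (n * (n + 1))
print("checked n = 2..29")
```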
Involution and Gelfand Transform Properties | If all you assume about your involution is that it's an involution. that is, $(x+y)^*=x^*+y^*$, $(xy)^*=x^*y^*$, $x^{**}=x$ and $(cx)^*=\overline cx^*$, then most of what you expect doesn't follow. In particular you assume above that $\phi(x^*)=\overline{\phi(x)}$, and that doesn't follow:
Consider $C([-1,1])$. Define $$f^*(t)=\overline{f(-t)}.$$ That's an involution with $||f^*||=||f||$ but $\phi(f^*)\ne\overline{\phi(f)}$.
It's easy to make things worse. Define a non-standard norm on $C([-1,1])$ by $$||f||=\max(\sup_{0\le t\le 1}|f(t)|,\sup_{-1\le t<0}2|f(t)|).$$That norm makes $C([-1,1])$ into a Banach algebra isomorphic to the usual $C([-1,1])$, but the involution above has $||f^*||\ne||f||$.
Bonus Not that you asked, but on the topic of proving the basic facts about $C^*$ algebras under weaker hypotheses, you might note that it's impossible to give purely algebraic proofs of various things that look like they should be nothing but algebra.
Example A gizmo that satisfies all the properties of a $C^*$ algebra except that the norm is not complete, but such that $\phi(x^*)\ne\overline{\phi(x)}$ for every complex homomorphism $\phi$, and $x^*=x$ does not imply $\phi(x)\in\Bbb R$
Let $A$ be the algebra of trigonometric polynomials $$f(t)=\sum_{n=-N}^Na_ne^{int},$$with pointwise operations, $||f||=\sup_{t\in\Bbb R}|f(t)|$, and the usual involution $$f^*(t)=\overline{f(t)}=\sum_{n=-N}^N\overline{a_{-n}}e^{int}.$$That satisfies all the axioms except for completeness of the norm. But if $$\phi(f)=\sum_{n=-N}^Na_n2^n$$then $\phi$ is a complex homomorphism of $A$ with $\phi(f^*)\ne\overline{\phi(f)}$. And in fact $f^*=f$ does not imply $\phi(f)\in\Bbb R$.
(A few years ago I noticed that the proof that $x^*=x$ implies $\phi(x^*)\in\Bbb R$ in a $C^*$ algebra involved completeness, and this seemed "wrong". Spent a long time looking for a purely algebraic proof... When I finally noticed that example I decided that from then on when I was doing $C^*$ algebra things I'd assume I had a $C^*$ algebra, heh-heh.)
Edit It appears that all this started with an exercise.
Part (a) of the exercise is fine, and in fact we don't need to show that $\phi(x^*)=\overline{\phi(x)}$ to prove it. Instead we define an involution on the maximal ideal space by $\phi^*(x)=\overline{\phi(x^*)}$, and (a) follows easily.
But part (b) is simply false (unless the definition of "involution" in the exercise includes something we haven't been told about).
Some authors require $||e||=1$ in the definition of Banach algebra with identity and some do not. If $||e||\ne1$ is allowed then it's clear that (b) is false: Given $A$ as in the exercise, define $$|||x|||=2||x||.$$Then $A$ is also a Banach algebra with the new norm, and (b) cannot be true of both norms (since $||\hat x||$ is independent of the choice of norm on $A$).
If $||e||=1$ is part of the definition, let $A$ be a commutative Banach algebra with an involution but without an identity. Append an identity in the usual way: Let $A'=A\times\Bbb C$, define a multiplication so that $e=(0,1)$ is an identity, and define $$||(x,\lambda)||=||x||+|\lambda|.$$Consider $$|||(x,\lambda)|||=2||x||+|\lambda|.$$
Part (c) is ok, in fact the hypothesis in (c) is what is usually taken as the definition. See Wikipedia for a conjecture regarding what the definition in the exercise might be. The equivalence of the two conditions in that Wikipedia article is a standard thing; however we prove it we'd better not use (b)... |
A problem of J. E. Littlewood | Here is my take: There are $4$ degrees of freedom in selecting the center line of each cylinder, for a total of $4n$ degrees of freedom. Subtract from this the $6$ degrees of freedom given by the Euclidean motions (rotations and translations in space), as applied to the total configuration – for a total of $4n-6$ degrees of freedom.
For two cylinders to touch, the minimal distance between points on their respective center lines must be $2$. This results in $\binom{n}{2}$ equations. To be able to satisfy all these equations, we must probably have $4n-6\ge\binom{n}{2}$, which holds for $n\le7$. |
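The degree-of-freedom count is a one-liner to verify (Python, my addition):

```python
from math import comb

# 4n - 6 >= C(n, 2) holds exactly for n <= 7
ok = [n for n in range(2, 15) if 4 * n - 6 >= comb(n, 2)]
print(ok)  # [2, 3, 4, 5, 6, 7]
```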
Big O Notation logarithms | For the first one, notice that
$$\frac{(n + \log_2 n)^5}{n^5} = \left(1 + \frac{\log_2 n}{n}\right)^5$$
It is necessary to show that this fraction is bounded above and below. The lower bound is easy, since the term in parentheses is at least $1$. The upper bound can be done with the knowledge that $\log_2 n = O(n)$.
For the second one your result is correct but your argument is not. After all, $2n = O(n)$ even though $2n$ is always larger than $n$. The inequality must hold up to a constant, only. To argue correctly, notice that
$$\log_2(n^5) = 5 \log_2(n)$$
Now if you know how to show that $x^5 \ne O(x)$, you'll be off to a pretty good start. |
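For the first part, a quick numerical illustration (Python, my addition) of why the ratio is bounded: $(1+\log_2(n)/n)^5$ tends to $1$ from above, so $(n+\log_2 n)^5 = \Theta(n^5)$.

```python
import math

# (n + log2 n)^5 / n^5 = (1 + log2(n)/n)^5 tends to 1
for n in (10, 10**3, 10**6, 10**9):
    r = (1 + math.log2(n) / n) ** 5
    print(n, r)
```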
Convergence of $\int_{0}^{\infty }{\frac{e^{-x}-e^{-2x}}{x}}dx $ | It is enough to show the function is continuous and bounded in a region close to zero to say that it converges as $x\to0$. To be more precise, one could establish bounds:
$$0<\int_0^1\frac{e^{-x}-e^{-2x}}x~\mathrm dx<\int_0^11~\mathrm dx=1$$
$$0<\int_1^\infty\frac{e^{-x}-e^{-2x}}x~\mathrm dx<\int_1^\infty e^{-x}~\mathrm dx=\frac1e$$
So it converges and is bounded by
$$0<\int_0^\infty\frac{e^{-x}-e^{-2x}}x~\mathrm dx<1+\frac1e$$
Specifically, we may use Frullani's integral to see the given integral is $\ln(2)$.
To solve the integral in an elementary fashion, consider the more general integral:
$$I(t)=\int_0^\infty\frac{e^{-x}-e^{-(t+1)x}}x~\mathrm dx$$
Let $u=e^{-x}$ to get
$$I(t)=\int_0^1\frac{u^t-1}{\ln(u)}~\mathrm du$$
Now differentiate w.r.t. $t$ to get
$$I'(t)=\int_0^1u^t~\mathrm du=\frac1{t+1}$$
Integrate back to get
$$I(t)-I(0)=\int_0^t\frac1{x+1}~\mathrm dx=\ln(t+1)$$
Since it should be trivial that $I(0)=0$, we find that
$$I(1)=\ln(2)$$
As claimed. |
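A numerical check of the final value (Python with scipy, my addition):

```python
import math
from scipy.integrate import quad

# the integrand has a removable singularity at 0 (it tends to 1)
val, _ = quad(lambda x: (math.exp(-x) - math.exp(-2 * x)) / x,
              0, math.inf)
print(val, math.log(2))  # both ~0.6931
```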
Finding convergence of two similarly looking logarithmic series | For the first series, note that
$$ (\ln\ln n)^{\ln n}=\exp(\ln n\ln\ln\ln n)\geq \exp(2\ln n)=n^2$$
for all sufficiently large $n$. So the series in question converges by comparison with $\sum\frac{1}{n^2}$.
For the second,
$$ (\ln n)^{\ln\ln n}=\exp\big[(\ln\ln n)^2\big]\leq \exp(\ln n)=n$$
for all sufficiently large $n$. So the series diverges by comparison with $\sum\frac{1}{n}$. |
Write a vector in terms of 2 other vectors | $\vec{AD}=u+v$ so $\vec{DC}=\frac{1}{2}u + \frac{1}{2}v$
$\vec{BC}=v+\vec{DC}=v+\frac{1}{2}u + \frac{1}{2}v=\frac{1}{2}u+\frac{3}{2}v$ |
existence and uniqueness of Hermite interpolation polynomial | I think you've got your indices mixed up a bit; they're sometimes starting at $0$ and sometimes at $1$. I'll assume that the nodes are labeled from $1$ to $n$ and the first $m_i$ derivatives at $x_i$ are determined, that is, the derivatives from $0$ to $m_i-1$.
A straightforward proof consists in showing how to construct a basis of polynomials $P_{ik}$ that have non-zero $k$-th derivative at $x_i$ and zero for all other derivatives and nodes. For given $i$, start with $k=m_i-1$ and set
$$P_{i,m_i-1}=(x-x_i)^{m_i-1}\prod_{j\ne i}(x-x_j)^{m_j}\;.$$
Then decrement $k$ in each step. Start with
$$Q_{i,k}=(x-x_i)^k\prod_{j\ne i}(x-x_j)^{m_j}\;,$$
which has zero derivatives up to $k$ at $x_i$, and subtract out multiples of the $P_{i,k'}$ with $k'\gt k$, which have already been constructed, to make the $k'$-th derivatives at $x_i$ with $k'\gt k$ zero. Doing this for all $i$ yields a basis whose linear combinations can have any given values for the derivatives.
Uniqueness follows from the fact that the number of these polynomials is equal to the dimension $d=\sum_i m_i$ of the vector space of polynomials of degree up to $d-1$. Since the $P_{ik}$ are linearly independent, there's no more room for one more that also satisfies one of the conditions, since it would have to be linearly independent of all the others. |
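Existence and uniqueness can be illustrated numerically on a tiny Hermite problem (Python with scipy, my addition; `KroghInterpolator` accepts Hermite data by repeating a node, with the repeated `yi` entries giving successive derivatives):

```python
from scipy.interpolate import KroghInterpolator

# Hermite data: value and first derivative at x = 0, value at x = 1.
xi = [0.0, 0.0, 1.0]
yi = [1.0, 2.0, 5.0]   # p(0) = 1, p'(0) = 2, p(1) = 5
p = KroghInterpolator(xi, yi)

# The unique quadratic with these data is 1 + 2x + 2x^2.
print(p(0.5))  # 2.5
```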
Uniqueness of the Legendre-Fenchel Transformation | Legendre transformations preserve information.
Indeed if it is possible to recreate $y$ from $y^{*}$, then no information can
have gotten lost. This is the case under the assumption of convexity of $y$. I refer to this paper.
Observe, though, that this does only hold if the function we started with was convex or concave. Since the Legendre transform still makes sense if we
have local deviations from convexity or concavity (say, an
overall convex function with a local concave “bump”), we
might ask what now happens after two Legendre transforms. The answer is that we recover the convex (concave) envelope of the original function. This finding plays
an important role in the theory of phase transitions.
For sufficient and necessary conditions, as Giuseppe Negro was suggesting, we have that $f=f^{{**}}$ (biconjugate) if and only if $f$ is convex and lower semi-continuous (under some assumptions on the domain), by the Fenchel–Moreau theorem. |
Lebesgue $n$-dimensional measure of a hyperplane | Since Lebesgue measure is invariant under rotations, the problem boils down to showing ($\mathbf x = [\mathit{x}_0,\dots,\mathit{x}_{n-1}]$)
$$ S = \{\mathbf x \in \mathbb{R}^n\mid \mathit{x}_0 = 0\}
$$
has $0$ Lebesgue measure. For every $j \in \mathbb{N}$ and $\epsilon > 0$, define
$$ R_j(\epsilon) = \left[\frac{-\epsilon}{2^{j+n} j^{n-1}}, \frac{\epsilon}{2^{j+n} j^{n-1}} \right] \times \prod_{k=1}^{n-1} [-j,j] \subseteq \mathbb{R}^n
$$
Then $R_j(\epsilon)$ has Lebesgue measure (This directly comes from the definition of product measure, since $R_j(\epsilon)$ is a measurable rectangle)
$$ \lambda(R_j(\epsilon)) = \frac{2\epsilon}{2^{j+n} j^{n-1}}(2j)^{n-1} = \frac{\epsilon}{2^j}
$$
Also, $S \subseteq \bigcup_{j=1}^\infty R_j(\epsilon)$ for any $\epsilon > 0$, so by subadditivity,
$$ \lambda(S) \leq \sum_{j=1}^\infty \lambda(R_j(\epsilon)) = \sum_{j=1}^\infty \frac{\epsilon}{2^j} = \epsilon
$$
Since that holds for all $\epsilon > 0$, $\lambda(S) = 0$.
Note: The value $\epsilon/(2^{j+n} j^{n-1})$ is carefully adjusted so the infinite sum of measures converge. |
Is the set of paths between any two points moving only in units on the plane countable or uncountable? | Yes, this looks convincing. For the missing step, go directly from A towards P in unit steps until the distance left is less than 2. Then use the remaining distance as the base of an isosceles triangle with unit legs, which you make point away from L. |
Help with Exponential/Poisson Distribution time interval problem | First, compute the probability that no claim in the $1$ hours. Note that it follows Poisson distribution.
$$\exp(-2) \frac{2^0}{0!}=\exp(-2)$$
Now, the probability that at least one claim in an hour would be
$$1-\exp(-2)$$
The numbers of arrivals in disjoint hours are independent. Hence the desired probability is
$$(1-\exp(-2))^9$$ |
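Evaluating the answer numerically (Python, my addition):

```python
import math

p_no_claim = math.exp(-2)        # P(0 claims in one hour), mean 2 per hour
p_at_least_one = 1 - p_no_claim
answer = p_at_least_one ** 9     # 9 independent hours
print(answer)  # ~0.27
```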
"Greatest lower bound function" | Hint : $\lim_{h \to 0}m(x) = \lim_{h \to 0}\inf \{ f(x): x \in [c,c+h] \}=\inf \lim_{h \to 0}\{ f(x): x \in [c,c+h] \}$(since $f$ is continuous ) |
Bound for Double Sum. | No. For $n=m=2$ and
$$
A = B =
\begin{pmatrix}
1 & x \\
x & 1
\end{pmatrix}
\quad
$$
the left-hand side is equal to $2 + 2x^2$, and the right-hand side is $(1+x)^2$, so that the estimate does not hold for small positive $x$. |
Bound for analytical function | For $r=0$ any $\lambda_r$ will do. For $0 < r < 1$ we have
$$
\lambda_r = \max\left\{ \left|\frac{f(z) }{z}\right| : |z| \le r\right\} = \left|\frac{f(z_0) }{z_0}\right| \le 1
$$
because the function $f(z)/z$ has a removable singularity at zero, and the Schwarz Lemma tells us that the expression on the right is $\le 1$,
with equality only for functions of the form $f(z) = \alpha z$. Since Möbius transformations were excluded, $\lambda_r$ is strictly less than one. |
Cayley transform of matrices in $SU(n)$ are traceless. | This is not true. The Cayley transform is skew-Hermitian and so its trace always has a zero real part, but the imaginary part of the trace can be nonzero. E.g.
\begin{align}
A&=\operatorname{diag}\left(-i,\ e^{-i\pi/3},\ ie^{i\pi/3}\right)
=\operatorname{diag}\left(-i,\ \frac{1-i\sqrt{3}}2,\ \frac{-\sqrt{3}+i}2\right),\\
K=(I-A)(I+A)^{-1}
&=\operatorname{diag}\left(i,\ \frac{i}{\sqrt{3}},\ -(2+\sqrt{3})i\right),\\
\operatorname{trace}(K)&=-i\left(1+\frac{2}{\sqrt{3}}\right)\ne0.
\end{align} |
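The counterexample is easy to verify numerically (Python with numpy, my addition):

```python
import numpy as np

# the special-unitary diagonal matrix from the answer
A = np.diag([-1j, np.exp(-1j * np.pi / 3), 1j * np.exp(1j * np.pi / 3)])
assert abs(np.linalg.det(A) - 1) < 1e-12      # A is indeed in SU(3)

K = (np.eye(3) - A) @ np.linalg.inv(np.eye(3) + A)
t = np.trace(K)
print(t)  # ~ -2.1547j : purely imaginary, but nonzero
```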
Examples of cardinals such that $a>b$ and $a<a^b$ | Assuming $\sf GCH$, these are the only counterexamples you will find. And since it is consistent with $\sf ZFC$, you can't do much better.
To see this is the case, note that $\sf GCH$ implies that for all regular $\kappa$, $\kappa^{<\kappa}=\kappa$.
If there is some place where the continuum hypothesis fails, it will be the counterexample you want anyway. |
How do I algebraically find the intersection of the plane $z = 4 + x + y$ with the cylinder $x^2 + y^2 = 4$? | The projection of the surface in the $xy$ plane is the disc
$$
x^2+y^2 \le 4
$$
So a straightforward parametrization is
\begin{cases}
x=x\\
y=y\qquad \qquad\text{with}\quad x^2+y^2 \le 4\\
z=4+x+y
\end{cases} |
fixed point set of orthogonal action on$ S^n $is again a sphere $S^r$ | Hint: The fixed point set of a single orthogonal transformation $T$ on a sphere centered at the origin is the intersection of the sphere with the $1$-eigenspace of $T$. |
What 's the upper limit of a binomial expansion with fractional power? | Firstly you have to keep this in mind that the fractional powers can only be expanded if $|x|<1$. Now as $x^k\to 0$ as $k\to \infty$ so the terms become very smaller and smaller as you go high. You can expand it till anywhere but as the terms become small after a few stage there is no noticeable change in the value of the expression if we stop it after a certain stage. |
Can we enumerate finite sequences which have no halting continuation? | The answer to your question has to be no. The finite sequences that do encode a machine that halts form a recursively enumerable set that is not recursive, so its complement (finite sequences encoding a machine that does not halt) cannot be either recursively enumerable or recursive. |
Fundamental forms of a composite function | Here's something to get you started.
We consider the hypersurface element given by the parametrization $f(u_1,\ldots,u_n) = (u_1,\ldots,u_n,F(u_1,\ldots,u_n)).$ We put $u := (u_1,\ldots,u_n).$
Let's fix a parameter value $v = (v_1,\ldots,v_n).$ Then the tangent space at the hypersurface element in the point $f(v)$ is spanned by the vectors $f_{u_1}(v),\ldots,f_{u_n}(v).$ We compute
$$f_{u_j}(v) = (0,\ldots,0,1,0,\ldots,0,F_{u_j}(v)) \qquad \text{for } j \in \{1,\ldots,n\}
$$
where the $1$ is in the $j$-th coordinate.
With this basis of the tangent space at $v,$ the first fundamental form is given by
$$
g_{ij} = \langle f_{u_i},f_{u_j}\rangle \qquad \text{for } i,j\in \{1,\ldots,n\}
$$
where $\langle\cdot,\cdot\rangle$ denotes Euclidean scalar product in $\mathbb R^{n+1}.$
For the given parametrization we calculate $g_{ii} = 1+F_{u_i}(v)^2$ and $g_{ij} = F_{u_i}(v)F_{u_j}(v).$
The normal direction of the hypersurface element at $v$ is the unique direction orthogonal to all the $f_{u_i}.$
Can you continue? |
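As a concrete sanity check (Python with numpy, my addition; the sample graph $F$ and its hand-computed gradient are my choices), the formulas $g_{ii}=1+F_{u_i}^2$ and $g_{ij}=F_{u_i}F_{u_j}$ assemble into $g = I + (\nabla F)(\nabla F)^T$:

```python
import numpy as np

# sample graph: F(u1, u2) = u1**2 + u2**3, so grad F = (2 u1, 3 u2**2)
def first_fundamental_form(v):
    u1, u2 = v
    grad = np.array([2 * u1, 3 * u2 ** 2])
    # g = I + (grad F)(grad F)^T matches g_ii = 1 + F_i^2, g_ij = F_i F_j
    return np.eye(2) + np.outer(grad, grad)

g = first_fundamental_form((1.0, 1.0))
print(g)  # g11 = 5, g12 = g21 = 6, g22 = 10
```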
If $B\times \{0\}$ is a Borel set in the plane, then $B$ is a Borel set in $\mathbb{R}$. | The map $f\colon (x,y)\mapsto \chi_{B\times\{0\}}(x,y)=\chi_B(x)\chi_{\{0\}}(y)$ is Borel measurable. As the map $\phi\colon (x,y)\mapsto (x,0)$ is Borel-measurable, so is $f\circ \phi$, which is $\chi_{B\times\Bbb R}$, hence $\chi_B$ is a Borel measurable function. |
Bounded (from below) harmonic functions from $\mathbb R^2 \setminus \{0\}$ | It depends highly on how much you know about Harmonic functions and its relation with Complex Analysis. For example, are you familiar with Liouville's theorem, that states that every Harmonic function that is defined on all of $R^{n}$ and is bounded from above or below is constant.
Defining the function $v = u - M$, we have a nonnegative function that is harmonic if and only if $u$ is. Identifying $R^{2}$ with $C$, we can consider the function $z \mapsto v(e^{z})$, which is defined on all of $C$. If you want to avoid complex numbers, consider the function $(x_{1},x_{2}) \mapsto v(e^{x_{1}} \cos(x_{2}),e^{x_{1}} \sin(x_{2}))$. This function is nonnegative and harmonic on $R^{2}$ and therefore constant, due to Liouville's theorem. |
Find the number of ways to split a group of 12 into groups of three. Also, Sam and Tom won't sit together. | Your strategy of subtracting the number of distributions in which Sam and Tom sit together from the total number of distributions is correct. However, the stated answer is incorrect.
As you found, the number of ways of splitting twelve people into four labeled groups of three is
$$\binom{12}{3}\binom{9}{3}\binom{6}{3}\binom{3}{3}$$
Suppose Sam and Tom sit together. There are four ways to choose which group they are in. There are ten ways to choose one of the remaining people to be in the same group. The remaining nine people can be split into three groups of three in $\binom{9}{3}\binom{6}{3}\binom{3}{3}$ ways. Hence, there are
$$\binom{4}{1}\binom{10}{1}\binom{9}{3}\binom{6}{3}\binom{3}{3}$$
distributions in which Sam and Tom sit together.
Consequently, the number of admissible ways of splitting the twelve people into four labeled groups of three is
$$\binom{12}{3}\binom{9}{3}\binom{6}{3}\binom{3}{3} - \binom{4}{1}\binom{10}{1}\binom{9}{3}\binom{6}{3}\binom{3}{3} = 302,400$$ |
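The arithmetic checks out (Python, my addition):

```python
from math import comb

total = comb(12, 3) * comb(9, 3) * comb(6, 3) * comb(3, 3)
together = 4 * 10 * comb(9, 3) * comb(6, 3) * comb(3, 3)
print(total - together)  # 302400
```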
Is there a function $f\in L_{1}[0,1]$ but $ \frac{d}{dx}\Big(\int_{0}^{x}\frac{f(s)}{(x-s)^{\alpha}}ds\Big)\not\in L_{1}[0,1]? $ | $$
\int_{0}^{x}\frac{f(s)}{(x-s)^{\alpha}}ds=\Gamma(1-\alpha)\frac{d^{(\alpha-1)}}{dx^{(\alpha-1)}}f(x)$$
$\Gamma$ is the Gamma function.
$\frac{d^\nu}{dx^\nu}$ denotes the fractional derivative of non-integer degree $\nu$, or the fractional antiderivative of non-integer degree $-\nu$.
Thus this question is related to fractional calculus : https://en.wikipedia.org/wiki/Fractional_calculus
There is an extensive literature about fractional calculus. At a lower level, here is an article for the general public: https://fr.scribd.com/doc/14686539/The-Fractional-Derivation-La-derivation-fractionnaire
You can also find a wide range of documentation about the above integral, referenced as the Riemann-Liouville integral and/or the Riemann-Liouville transform. |
find the equation of the line tangent to the curve $y=\sqrt{x}$ at the point $(1,1)$ | You lost a one going from step 3 to step 4. More exactly:
$$
\frac{\sqrt{1+h} - 1}{h} \frac{\sqrt{1+h} + 1}{\sqrt{1+h} {\bf+ 1}}=\frac{1+h - 1}{h(\sqrt{1+h} {\bf+ 1})}$$ |
Is it possible to prove that the metric space is an open set without choice? | You can pick the radius of the open ball centred at $x$ however you like, and the union of all the balls will be $X$. For example, you could just let all the radii be 1. |
Prove that the orthogonal group is a submanifold of $\Bbb{R}^{m^2}$ | I don't know whether you view this as "applying the regular value theorem", but if $Df(A)$ is surjective, then continuaity of $Df$ implies that there is an open neighborhood $U_A$ of $A$ in $M_m(\mathbb R)$ such that $Df(B)$ is surjective for all $B\in U_A$. (Viewing $Df(A)$ as a matrix, there is an ivertible square submatrix of maximal size and you can take $U_A$ to be the set on which the determinant of that submatrix is non-zero.) Then put $U:=\cup_AU_A$ and restrict $f$ to $U$. |
Equivalent markings on Riemann surfaces | Two markings are equivalent if they are related by conjugation (inner automorphism of $\pi_1(R)$). But there is still a big family of markings that are related by automorphisms of $\pi_1(R)$ that are not conjugations, which are non-equivalent. The group of automorphism quotiented by the inner automorphisms is called outer automorphism group of $\pi_1(R)$ and denoted $\operatorname{Out}(\pi_1(R))$.
For example, consider the markings $\Sigma_p = \{ [A_j], [B_j] \}$ and $\Sigma'_p = \{ {T_{B_1}}_*([A_j]), {T_{B_1}}_*([B_j]) \}$ where $T_{B_1} : R \to R$ is the Dehn twist about the curve $B_1$. Note that ${T_{B_1}}_*([A_1])=[A_1 \circ B_1]$, while ${T_{B_1}}_*([B_j])=[B_j]$ for $j=1,\cdots,g$ and ${T_{B_1}}_*([A_j])=[A_j]$ for $j=2,\cdots,g$, so these two markings are not related by a conjugation.
As a note, the group $\operatorname{Out}(\pi_1(R))$ is an extension of the mapping class group of the surface (see The Dehn–Nielsen–Baer theorem, a good reference on this is Primer on Mapping class groups by Farb and Margalit). |
Detecting duplicates of order 2 in $A_5$ | In order to count number of elements of order $2$, first you have to be know what should be look of the elements. They should be of even order It can be easily seen elements of order two in $S_n$ have look like $(12)$ and $(12)(34)$, but elements of $(12)$ are not possible. Therefore all the elements of $S_n$ which are of type $(12)(34)$ are the only elements of order $2$ in $A_5$.
$$\text{elements of same cocyle}=\frac{120}{2^2 2!}=\frac{120}{8}=15$$
Therefore total $15$ elements of order $2$ in $A_5.$ For the used formula please see the link Counting cycle structures in $S_n$. |
Direction of Normal in Vector Calculus | The right-hand-rule convention for this kind of integral is done a little differently than the cross product of vectors $\mathbf x \cdot \mathbf y$ that you're thinking of. For the curl, you use your fingers to be curved in the orientation of the integration (this depends on how you parameterize the curve), "holding" the normal vector in your right hand, like if you reached to grab a long wire and wrapped your fingers around it. Then the normal vector is in the direction of your thumb if you make a thumbs-up. |
Properties of the solution for a binary matrix | I think you're considering the $(n-1) \times n$ matrix $A$ as the augmented matrix of a system, so you're solving $Bx = b$ where $B$ is an $(n - 1) \times (n-1)$ matrix, and all elements of $B$ and $b$ are 1's and 0's. Well, consider the case $B = \pmatrix{1 & 1 & 1\cr 0 & 1 & 0\cr 0 & 0 & 1\cr}$, $b = \pmatrix{0 \cr 1 \cr 1\cr}$, where the unique solution is $\pmatrix{-2\cr 1 \cr 1\cr}$. |
Lower bound of $rank(A+A^T)$ | If $k$ is odd, then we have a lower bound of $\operatorname{rank}(A + A^T) \geq 1$. We can't do any better than that, though. |
How to Recover a Set from its Closure under Logical Consequence | If you had a specific $A$ that led to $S$, then no, you cannot recover $A$ from $S$, even if you knew that $A$ is finite ($S$, of course, will be infinite, because there are infinitely many tautologies that will all be in $S$).
For example, if you had $P \land Q \in S$, you don't know whether you had $P \land Q \in A$, or both $P \in A$ and $Q \in A$, or maybe all three, or maybe both $P$ and $P \land Q$, or maybe both $Q$ and $P \land Q$, or maybe just $Q \land P$, or ...
Something you could do, is to find a kind of 'minimal' set whose closure is $S$. See prime implicants for one way to think about that. However, the techniques discussed there are for propositional logic, and for first-order logic things will get a lot more complicated ... indeed, you may well run into the problem of the undecidability of first-order logic when trying to find such a 'minimal' set.
Finally, as I said, $S$ will be of infinite size, so there are some serious practical considerations to deal with here as well! |
How to place 14 dots on the plane | Apparently the maximum is $33$, as was worked out in C. Schade: Exakte Maximalzahlen gleicher Abstände, Diploma thesis directed by H. Harborth, Techn. Univ. Braunschweig 1993. Sadly, I couldn't find a description of the proof. This maximum can be realized in two ways, up to graph isomorphism:
Image source: Jean-Paul Delahaye, Les graphes-allumettes, (in French), Pour la Science no. 445, November 2014, pages 108-113. http://www.lifl.fr/~jdelahay/pls/2014/252.pdf
More resources are listed at https://oeis.org/A186705. |
Reference request: Poisson approximation to the binomial distribution in total variation | This can be found (in a maybe overkill form, as the paper is concerned with tight bounds, with the "right" constants) in a paper of Roos:
Roos, Bero. "Sharp constants in the Poisson approximation." Statistics & probability letters 52, no. 2 (2001): 155-168.
https://doi.org/10.1016/S0167-7152(00)00208-X
(see specifically Theorem 2). As mentioned, this is a bit overkill, as it deals with more general bounds on the distance between Poisson Binomial and Poisson (not just Binomial and Poisson).
Note that you can combine this with bounds on the TV between Poisson distributions (parameters $\lambda$ and $np$ in your case) to get what you want what the triangle inequality. For that second part, see for instance Section 2.1 (eq (2.2)) of
Adell, José A., and Pedro Jodrá. "Exact Kolmogorov and total variation distances between some familiar discrete distributions." Journal of Inequalities and Applications 2006, no. 1 (2006): 64307. https://doi.org/10.1155/JIA/2006/64307 |
How to calculate the position of a turning object, based on its rotation? | If the turn rate and speed is constant the object will travel on a circle with a circumference $360$ units. Given that you can calculate the position using:
\begin{align}
r &= \frac{360}{2\pi}\\[12pt]
(cp_x, cp_y) &= (r,0)\\[12pt]
p &= (cp_x - r \cos α\, ,\, cp_y + r \sin α)
\end{align} |
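A direct transcription of the formula (Python, my addition; $cp_x, cp_y$ denote the components of the circle center $cp$). The object starts at the origin and walks a circle of radius $r = 360/(2\pi)$:

```python
import math

r = 360 / (2 * math.pi)      # radius for speed 1 unit per degree of turn
cpx, cpy = r, 0.0            # circle center

def position(alpha_deg):
    a = math.radians(alpha_deg)
    return (cpx - r * math.cos(a), cpy + r * math.sin(a))

print(position(0))    # (0.0, 0.0) -- the starting point
print(position(180))  # (~2r, ~0)  -- opposite side of the circle
```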
$a_n=O(\frac{1}{n}),\ \frac{\sum_{k=1}^nS_k}{n}$ converges $\Rightarrow S_n$ converges | The other answers are right, but the OP’s implication is actually correct.
Cesaro summable implies Abel summable
shows that in your situation, the series $\sum(S_n-S_{n-1})$ is Cesàro-summable, and hence Abel summable.
Now, $S_n-S_{n-1}=O(1/n)$, so by Tauber's theorem (see the examples at https://en.m.wikipedia.org/wiki/Abelian_and_Tauberian_theorems ) the series $\sum(S_n-S_{n-1})$ converges. |
How to prove this result on binomial coefficients? | $$\sum_{j \mathop = 1}^n \left({-1}\right)^{n + 1} j \binom n j=\sum_{j \mathop = 1}^n \left({-1}\right)^{n + 1} n \binom {n - 1} {j - 1}= n \sum_{j \mathop = 0}^{n - 1} \left({-1}\right)^{n - 1} \binom {n - 1} j=0$$
NOTE
we have used that
$$j \binom n j= n \binom {n - 1} {j - 1}$$
which has a simple explanation: select $j$ people out of n, then designate one as special. The LHS represents how many ways we can do this by first picking the $j$ people and then making designation. On the RHS, we have the number of ways to select the special one and then picking the remaining $j-1$ from the remaining $n-1$. |
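A quick exhaustive check (Python, my addition) of the identity, with the sign alternating in $j$, which is the version that vanishes (for $n \ge 2$):

```python
from math import comb

# sum_{j=1}^n (-1)^(j+1) * j * C(n, j) = 0 for n >= 2
for n in range(2, 20):
    s = sum((-1) ** (j + 1) * j * comb(n, j) for j in range(1, n + 1))
    assert s == 0
print("ok")
```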
Looking for reference: If a Riemanian manifold is foliated by max symmetric submanifolds, then coordinates can always be chosen such that ... | I found the original article that proves the result I was looking for, albeit in a somewhat different language:
B. Schmidt, “Isometry groups with surface-orthogonal trajectories,” Z. Naturforsch. A 22 (1967)
The proper statement is the following:
Let $G$ be a Lie group that acts by isometries on a Riemannian manifold $M$ of dimension $m$ such that its orbits are connected $n$-dimensional Riemannian manifolds and the stabilizer subgroup of any point $p\in M$ leaves no vector in the tangent space $T_pN$ invariant (where $N$ is the orbit through $p$). If $\dim G = n(n+1)/2$ (i.e. if the orbits are maximally symmetric) then suitable coordinates $(v^1,\dots, v^{m-n}, u^1,\dots,u^n)$ can always be chosen such that the metric on $M$ has the form
$$ g=g_{ab}(v)\text{d}v^a \text{d} v^b+f(v)h_{ij}\text{d}u^i\text{d}u^j$$
with $u^i$ being the coordinates on $N$. |
Clarification about Ideal and zero sets of empty set in Varieties | There are many reasons, but I guess what I would think as the most important one is that it is the only definition for which $S\subseteq T \implies Z(T)\subseteq Z(S)$ (and the obvious similar statement for $I$), which are obviously true for nontrivial ideals, to hold. |
Finding primary decompositions of ideals | $\langle x^2,\underline{xy},yz^2\rangle=\langle x^2,{\color{red}{x}},yz^2\rangle\cap \langle x^2,{\color{red}{y}},yz^2\rangle=\langle x,\underline{yz^2}\rangle\cap \langle x^2,y\rangle=\langle x,{\color{red}y}\rangle\cap\langle x,{\color{red}{z^2}}\rangle\cap \langle x^2,y\rangle=\langle x^2,y\rangle\cap\langle x,z^2\rangle$ |
For what points $c$ in $\mathbb{R}$ is $f$ continuous? | $f$ is continuous at all points in $\mathbb{R} - X$.
The reasoning in your example is incorrect. The left and right limits are the same.
$\lim_{x \rightarrow a^+} f(x) = \lim_{x \rightarrow a^{-}} f(x) = 0$
However $f(a) = 1$. Hence $f$ is not continuous at $a$, since the limit and the actual value of the function do not agree. |
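A minimal numerical illustration of this argument (using a hypothetical indicator-type $f$ and a sample point $a$, since the original $f$ and the set $X$ are defined in the question):

```python
# Hypothetical example: f is 1 at the point a and 0 everywhere else,
# so both one-sided limits at a are 0 while f(a) = 1.
a = 0.5

def f(x: float) -> float:
    return 1.0 if x == a else 0.0

# Approach a from both sides; every sampled value is 0 ...
left = [f(a - 10 ** -k) for k in range(1, 10)]
right = [f(a + 10 ** -k) for k in range(1, 10)]
assert all(v == 0.0 for v in left + right)

# ... but the value at a itself is 1, so f is discontinuous at a.
assert f(a) == 1.0
```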
Inverse cdf of the $\chi$-squared distribution | I have found relevant information in this paper: Eric J. Beh, “Exploring How to Simply Approximate the P-value of a Chi-Squared Statistic,” Austrian Journal of Statistics, Volume 47 (June 2018), 63–75. |
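For a concrete computation in the same spirit of simple approximations, the classical Wilson–Hilferty cube-root transformation (the standard textbook formula, not one taken from that paper) approximates the inverse chi-squared cdf using only the normal quantile, available in the Python standard library:

```python
from statistics import NormalDist

def chi2_quantile_wh(p: float, df: float) -> float:
    """Approximate inverse cdf of the chi-squared distribution via the
    Wilson-Hilferty transformation: (X/df)**(1/3) is roughly normal
    with mean 1 - 2/(9 df) and variance 2/(9 df)."""
    z = NormalDist().inv_cdf(p)  # standard normal quantile
    c = 2.0 / (9.0 * df)
    return df * (1.0 - c + z * c ** 0.5) ** 3

# Example: the exact 95th percentile for df = 10 is about 18.307;
# the approximation lands very close to it.
q = chi2_quantile_wh(0.95, 10)
print(round(q, 3))
```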
$\mathbb Q/\mathbb Z$ is an infinite group | Hint: Prove that if $x,y\in[0,1)\cap\Bbb Q$, then $x\sim y$ if and only if $x=y$. |
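The hint can be made concrete: two rationals in $[0,1)$ represent the same coset of $\mathbb Q/\mathbb Z$ iff their difference is an integer, which for values in $[0,1)$ forces equality. A quick Python sketch exhibiting infinitely many distinct cosets:

```python
from fractions import Fraction

def same_coset(x: Fraction, y: Fraction) -> bool:
    """x ~ y in Q/Z iff x - y is an integer (denominator 1 in lowest terms)."""
    return (x - y).denominator == 1

# The fractions 1/2, 1/3, 1/4, ... all lie in [0, 1) and are pairwise
# inequivalent, so Q/Z contains infinitely many distinct elements.
reps = [Fraction(1, n) for n in range(2, 50)]
for i, x in enumerate(reps):
    for y in reps[i + 1:]:
        assert not same_coset(x, y)
print(f"{len(reps)} pairwise distinct cosets exhibited")
```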
Powers of maximal ideal in a Noetherian local ring | The two parts of your question are somewhat related. If $\operatorname{Spec}(R)=\{m\}$, then $R$ is Artinian, and $m$ is its Jacobson radical, so that $m$ is nilpotent and $m^n=0$ for some positive integer $n.$ Thus, the chain of powers of $m$ eventually stabilizes.
As Shivering Soldier suggested, your initial question is easily approached via the contrapositive. If $m^n=m^{n+1}$ for some positive integer $n,$ then Nakayama allows that $m^n=0.$ If $P$ is any prime ideal of $R$, then $m^n \subseteq P.$ But then $m \subseteq P,$ so, since $m$ is maximal, $m=P,$ and $m$ is the unique prime ideal of $R.$ In particular, this tells you that $R$ is Artinian, since all primes are maximal.
All of this is contained in Chapter 8 of Atiyah and Macdonald, at least in my first edition of it. |
Deceptively simple divisibility problem | This is not true: take $a=6$, $b=15$; then $a+b=21$ and $2a+b=27$, and these numbers fit the criterion. What must happen is that there is a prime $p$ such that the largest power of $p$ dividing $a$ and the largest power dividing $b$ are equally high (call it $n$), while the largest power of $p$ dividing $2a+b$ is greater than $n$; in this case that prime is $3$. |
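The prime-power structure of the counterexample can be checked with a small helper (written here just for illustration) that returns the exponent of a prime in an integer's factorization:

```python
def prime_exponent(p: int, m: int) -> int:
    """Largest e with p**e dividing m."""
    e = 0
    while m % p == 0:
        m //= p
        e += 1
    return e

a, b = 6, 15
assert a + b == 21 and 2 * a + b == 27

# The prime 3 divides a and b to the same (first) power ...
assert prime_exponent(3, a) == prime_exponent(3, b) == 1
# ... while it divides 2a + b = 27 to a strictly higher power.
assert prime_exponent(3, 2 * a + b) == 3
```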
Proving that the derivative always diverges faster than the original function | This is actually wrong: take $f(x) = \frac1x + \sin\Big(\frac1x\Big)$. For this function there are values of $x$ arbitrarily close to $0$ such that $\frac{f'(x)}{f(x)} = 0$. |
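To see why the counterexample works, note that $f'(x) = -\big(1+\cos(1/x)\big)/x^2$, which vanishes at $x_k = 1/\big((2k+1)\pi\big)$ since $\cos\big((2k+1)\pi\big)=-1$, while $f(x_k)=(2k+1)\pi\neq 0$; these points accumulate at $0$. A numerical check:

```python
import math

def f(x: float) -> float:
    return 1.0 / x + math.sin(1.0 / x)

def f_prime(x: float) -> float:
    # d/dx [1/x + sin(1/x)] = -(1 + cos(1/x)) / x**2
    return -(1.0 + math.cos(1.0 / x)) / x ** 2

# At x_k = 1 / ((2k+1) pi) we have cos(1/x_k) = -1, so f'(x_k) = 0,
# while f(x_k) = (2k+1) pi != 0.  The points x_k accumulate at 0.
for k in range(1, 200):
    x = 1.0 / ((2 * k + 1) * math.pi)
    ratio = f_prime(x) / f(x)
    assert abs(ratio) < 1e-9, (k, ratio)
print("f'/f vanishes (numerically) at points arbitrarily close to 0")
```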