Newton-Raphson method and solvability by radicals If a polynomial is not solvable by radicals, then does the Newton-Raphson method work slower or faster? I don't know how to approach this.
|
The speed of Newton-Raphson has [EDIT: almost] nothing to do with solvability by radicals. What it does have to do with is $f''(r)/f'(r)$ where $r$ is the root: i.e. if $r$ is a root of $f$ such that $f'(r) \ne 0$ and
Newton-Raphson starting at $x_0$ converges to $r$, then
$$\lim_{n \to \infty} \dfrac{x_{n+1} - r}{(x_n - r)^2} = - \frac{f''(r)}{2 f'(r)}$$
If, say, $f(x) = x^n + \sum_{j=0}^{n-1} c_j x^j$ is a polynomial of degree $n\ge 5$, $f''(r)/f'(r)$ is a continuous function of the coefficients $(c_0, \ldots, c_{n-1})$ in a region that avoids $f'(r) = 0$. But there is a dense set of $(c_0,\ldots,c_{n-1})$ for which $f$ is solvable by radicals (e.g. where the roots are all of the form $a+bi$ with $a$ and $b$ rational),
and a dense set where it is not (e.g. where $c_0,\ldots,c_{n-1}$ are algebraically independent).
EDIT: On the other hand, if the convergence of Newton-Raphson is slow (linear rather than quadratic) and $f$ is a polynomial of degree $5$, then $f$ is solvable by radicals (over the field generated by the coefficients). For in this case $f'(r) = 0$, and so $r$ is a root of the gcd of $f$ and $f'$, which has degree $<5$.
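As a concrete illustration (a quick Python sketch, not part of the argument above): $x^5-x-1$ has Galois group $S_5$, so it is not solvable by radicals, yet Newton-Raphson still converges quadratically to its real root, exactly as the limit formula predicts.

    # Newton-Raphson on f(x) = x^5 - x - 1, which is not solvable by radicals.
    f  = lambda x: x**5 - x - 1
    df = lambda x: 5*x**4 - 1

    r = 1.5
    for _ in range(50):               # high-accuracy reference value of the root
        r = r - f(r) / df(r)

    x = 1.5
    for n in range(6):                # the error is roughly squared at each step
        print(n, abs(x - r))
        x = x - f(x) / df(x)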
|
If $b$ is a root of $x^n -a$, what's the minimal polynomial of $b^m$? Let $x^n -a \in F[x]$ be an irreducible polynomial over $F$, and let $b \in K$ be its root, where $K$ is an extension field of $F$. If $m$ is a positive integer such that $m|n$, find the degree of the minimal polynomial of $b^m$ over $F$.
My solution:
$[F(b^m):F]=[F(b^m):F(b)][F(b):F] \Rightarrow n\le [F(b^m):F]$
and
$F(b^m)\subset F(b) \Rightarrow [F(b^m):F]\le [F(b):F]=n$
Then
$[F(b^m):F]=n$
Comments
I didn't use the fact that $m|n$; where am I wrong? I need help on how to solve this problem.
Thanks
|
Hint Let $km = n$. Then since $b^{mk} - a = 0$ we have $(b^m)^k - a = 0$, so maybe $x^k-a$ is the minimal polynomial?
Hint' Show that if $x^k-a$ has a factor, then so does $x^{mk} - a$.
Given a field extension $K/L$, $K$ is a vector space over $L$ of dimension $\left[K:L\right]$, which is called the degree of the field extension.
The vector space $F(b^m)$ is spanned by $F$-linear combinations of the basis vectors $\left\{1,b^m,b^{2m},\ldots,b^{(k-1)m}\right\}$ so $\left[F(b^m):F\right] = k$.
Furthermore $\left[F(b):F\right] = n$ and $\left[F(b):F(b^m)\right] = m$ (prove these; for the second one use that $z^m - b^m$ is the minimal polynomial of $b$ [why can we not just use $z-b$?] over $F(b^m)$) so by $mk = n$ we have the identity $\left[F(b):F(b^m)\right]\left[F(b^m):F\right] = \left[F(b):F\right]$.
Why is $F(b^m)$ spanned by $F$-linear combinations of $\{1,b^m,b^{2m},…,b^{(k−1)m}\}$?
$F(b^m)$ is the field generated by all well-defined sums, differences, products and fractions of the elements of $F \cup \{b^m\}$. So that means it includes $b^m, (b^m)^2, (b^m)^3, \ldots$ but since $b^m$ satisfies a polynomial, every power of $b^m$ higher than or equal to $k$ can be reduced to a linear combination of lower powers. Similarly $(b^m)^{-1} = a^{-1} (b^m)^{k-1}$. Of course the sum of linear combinations is again a linear combination, so we have seen that $F$-linear combinations of $\{1,b^m,b^{2m},\ldots,b^{(k-1)m}\}$ span $F(b^m)$. The fact that it's an independent basis (i.e. cannot be made smaller) comes from the polynomial being minimal.
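If it helps to see this in a concrete case, here is a small SymPy check (an illustration only, taking $F=\mathbb Q$, $a=2$, $n=6$, $m=2$, so $k=3$):

    from sympy import Symbol, minimal_polynomial, root

    x = Symbol('x')
    b = root(2, 6)                      # a root of x^6 - 2, irreducible over Q
    print(minimal_polynomial(b**2, x))  # x**3 - 2, so the degree is k = n/m = 3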
|
Describe a group homomorphism from $U_8$ to $S_4$ I'm in an intro course to abstract algebra and we have been focusing completely on rings/the Chinese remainder theorem, and this question came up in the review and totally stumped me (we only have basic definitions of groups and subgroups and homomorphisms).
I think that $U_8$ is the group of units modulo 8, and $S_4$ is the permutation group of 4 letters. I've figured out what $S_4$ looks like by examining certain sets of permutations but don't understand homomorphisms enough to be able to name the one in question. I do know that I'm looking for something of the form $f(ab) = f(a)f(b)$, but that's about it.
I was told a hint: that the units mod 8 are the cosets which are relatively prime to 8, which I think would be $[1],[3],[5],[7]$ in mod 8, though I'm not really sure why this is the case. What I do notice is that each of these elements has order 2, which I think somehow should relate to the order of my permutations in $S_4$, but again, I'm not certain.
Any help is much appreciated, thanks.
|
A unit mod $8$ is a congruence class mod $8$ which is invertible, i.e., a class $[a]$ such that there exists $[b]$ with $[a][b] = [1]$, or equivalently $ab +8k = 1$ for some integer $k$. Now any number dividing both $a$ and $8$ would also divide $ab+8k=1$, so this implies that $[a]$ being a unit implies $(a,8)=1$ (where the parentheses indicate the greatest common divisor.) On the other hand, one corollary of the Euclidean algorithm is that $(a,8)$ can always be written as a linear combination of $a$ and $8$, so in the case of relatively prime $a$ and $8$ there always exist such $b$ and $k$, and so $[a]$ is a unit.
If $f:U_8 \to S_4$ is a homomorphism, then the order of $f([a])$ always divides the order of $[a]$, so the image of $[1]$ has to be $()$ (the identity permutation), and the images of $[3]$, $[5]$, and $[7]$ have to have order $1$ or $2$. Obviously you also need that $f([3]) f([5]) = f([7])$ etc.
Now the question is what exactly you are trying to find, just one homomorphism (which is easy, there is always the trivial one mapping everything to the identity), or all of them (which is not quite as easy but doable with the information here and some trial and error.)
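Since both groups are tiny, "all of them" can even be found by brute force. A possible Python sketch (the names here are made up for illustration; it just checks $f(ab)=f(a)f(b)$ for every choice of images):

    from itertools import permutations, product

    U8 = [1, 3, 5, 7]                              # units mod 8, under multiplication
    S4 = list(permutations(range(4)))              # permutations of 4 letters as tuples
    identity = tuple(range(4))
    compose = lambda p, q: tuple(p[q[i]] for i in range(4))   # (p*q)(i) = p(q(i))

    homs = []
    for images in product(S4, repeat=3):           # candidate images of 3, 5, 7
        f = {1: identity, 3: images[0], 5: images[1], 7: images[2]}
        if all(f[a * b % 8] == compose(f[a], f[b]) for a in U8 for b in U8):
            homs.append(f)
    print(len(homs))                               # how many homomorphisms exist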
|
Expectations with self-adjoint random matrix So, we have a square matrix $A=(a_{ij})_{1 \leq i,j \leq n}$ where the entries are independent random variables with the same distribution. Suppose $A = A^{*}$, where $A^{*}$ is the conjugate transpose. Moreover, suppose that $E(a_{ij}) = 0$, $E(a_{ij}^{2}) < \infty$. How can I evaluate $E(\operatorname{Tr} A^{2})$?
Clearly, we have $E(Tr A) = 0$ and we can use linearity to get something about $E(Tr A^{2})$ in terms of the entries using simply the formula for $A^{2}$, but for instance I don't see where $A=A^{*}$ comes in... I suppose there's a clever way of handling it...
|
Clearly
$$
\operatorname{Tr}(A^2) = \sum_{i,j} a_{i,j} a_{j,i} \stackrel{\rm symmetry}{=}\sum_{i,j} a_{i,j}^2
$$
Thus
$$
\mathbb{E}\left(\operatorname{Tr}(A^2) \right) = \sum_{i,j} \mathbb{E}(a_{i,j}^2) = n^2 \operatorname{Var}(a_{1,1})
$$
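As a numerical sanity check (an illustration only, assuming a real symmetric $A$ with standard normal entries, so $\operatorname{Var}(a_{1,1})=1$):

    import numpy as np

    rng = np.random.default_rng(0)
    n, trials = 5, 20000
    total = 0.0
    for _ in range(trials):
        M = rng.standard_normal((n, n))
        A = np.triu(M) + np.triu(M, 1).T        # symmetric: a_ij = a_ji
        total += np.trace(A @ A)
    print(total / trials, n**2)                 # both close to n^2 * Var(a_11) = 25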
|
Is there a Math symbol that means "associated" I am looking for a Math symbol that means "associated" and I don't mean "associated" as something as complicated as isomorphism or anything super fancy.
I am looking for a symbol that means something like "$\triangle ABC$ [insert symbol] $A_{1}$" (as in triangle $ABC$ "associated" with area $A_{1}$). Or want to say something like "The eigenvector associated with the eigenvalue"
You get the idea.
|
In general, you can use a little chain link symbol, since the meaning behind "associated" is "connection" where you are not specifying the type of connection or how they are connected. That will reduce your horizontal space and make sense to people. The tilde ~ is the NOT symbol in logic, so never use that! Don't use the squiggle "if, and only if" symbol either, because that insinuates that there is some kind of bijection, and that is a specific type of connection. You only care about whether there is "some kind of connection/association" between two different sets/elements/statements/primitive statements/etc. You should treat it as if it were a logical connective, so again, don't use NOT, because that would confuse logicians and pure math people most definitely. The squiggle double arrow would be even more confusing, like saying a "loose bijection", which is quite the fancy abstraction that is not what you're aiming for... just a simple link between two "things" should be sufficient for what you want.
|
Coercion in MAGMA In MAGMA, if you are dealing with an element $x\in H$ for some group $H$, and you know that $H<G$ for some group $G$, is there an easy way to coerce $x$ into $G$ (e.g. if $H=\text{Alt}(n)$ and $G=\text{Alt}(n+k)$ for some $k\geq 1$)? The natural coercion method $G!x$ does not seem to work.
|
G!CycleDecomposition(g);
will work
|
Do an axis-aligned rectangle and a circle overlap? Given a circle of radius $r$ located at $(x_c, y_c)$ and a rectangle defined by the points $(x_l, y_l), (x_l+w, y_l+h)$ is there a way to determine whether the the two overlap? The square's edges are parallel to the $x$ and $y$ axes.
I am thinking that overlap will occur if one of the rectangle's corners is contained in the circle or one of the circle's circumference points at ($n\frac{\pi}{2}, n=\{0,1,2,3\}$ radians) is contained in the rectangle. Is this true?
EDIT: One answer has pointed out a case not covered by the above which is resolved by also checking whether the center of the circle is contained.
Is there a method which doesn't involve checking all points on the circle's circumference?
|
No. Imagine a square and enlarge its incircle a bit. They will overlap, but wouldn't satisfy either of your requirements.
Unfortunately, you have to check all points of the circle. Or, rather, solve the arising inequalities (I assume you are talking about filled regions):
$$\begin{align} (x-x_c)^2+(y-y_c)^2 & \le r^2 \\
x\in [x_l,x_l+w]\ &\quad y\in [y_l,y_l+h]
\end{align}$$
Or.. Perhaps it is enough to add to your requirements, that the center of the circle is contained in the rectangle.
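For what it's worth, there is also a standard constant-time test (a common approach, not taken from the answer above): clamp the circle's centre to the rectangle, which gives the closest point of the filled rectangle, and compare its distance to the centre with $r$. It covers all configurations, including the enlarged-incircle example.

    def circle_rect_overlap(xc, yc, r, xl, yl, w, h):
        nx = min(max(xc, xl), xl + w)       # closest point of the rectangle
        ny = min(max(yc, yl), yl + h)       # to the circle's centre
        return (xc - nx)**2 + (yc - ny)**2 <= r**2

    print(circle_rect_overlap(0, 0, 1.1, -1, -1, 2, 2))   # enlarged incircle: True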
|
How to solve second order PDE with first order terms. I know we can transform a second order PDE into three standard forms. But how to deal with the remaining first order terms?
Particularly, how to solve the following PDE:
$$
u_{xy}+au_x+bu_y+cu+dx+ey+f=0
$$
update:
$a,b,c,d,e,f$ are all constant.
|
Case $1$: $a=b=c=0$
Then $u_{xy}+dx+ey+f=0$
$u_{xy}=-dx-ey-f$
$u_x=\int(-dx-ey-f)~dy$
$u_x=C(x)-dxy-\dfrac{ey^2}{2}-fy$
$u=\int\left(C(x)-dxy-\dfrac{ey^2}{2}-fy\right)dx$
$u=C_1(x)+C_2(y)-\dfrac{dx^2y}{2}-\dfrac{exy^2}{2}-fxy$
Case $2$: $a\neq0$ and $b=c=0$
Then $u_{xy}+au_x+dx+ey+f=0$
Let $u_x=v$ ,
Then $u_{xy}=v_y$
$\therefore v_y+av+dx+ey+f=0$
$v_y+av=-dx-ey-f$
$(\exp(ay)v)_y=-dx\exp(ay)-ey\exp(ay)-f\exp(ay)$
$\exp(ay)v=\int(-dx\exp(ay)-ey\exp(ay)-f\exp(ay))~dy$
$\exp(ay)u_x=C(x)-\dfrac{dx\exp(ay)}{a}-\dfrac{ey\exp(ay)}{a}+\dfrac{e\exp(ay)}{a^2}-\dfrac{f\exp(ay)}{a}$
$u_x=C(x)\exp(-ay)-\dfrac{dx}{a}-\dfrac{ey}{a}+\dfrac{e}{a^2}-\dfrac{f}{a}$
$u=\int\left(C(x)\exp(-ay)-\dfrac{dx}{a}-\dfrac{ey}{a}+\dfrac{e}{a^2}-\dfrac{f}{a}\right)dx$
$u=C_1(x)\exp(-ay)+C_2(y)-\dfrac{dx^2}{2a}-\dfrac{exy}{a}+\dfrac{ex}{a^2}-\dfrac{fx}{a}$
Case $3$: $b\neq0$ and $a=c=0$
Then $u_{xy}+bu_y+dx+ey+f=0$
Let $u_y=v$ ,
Then $u_{xy}=v_x$
$\therefore v_x+bv+dx+ey+f=0$
$v_x+bv=-dx-ey-f$
$(\exp(bx)v)_x=-dx\exp(bx)-ey\exp(bx)-f\exp(bx)$
$\exp(bx)v=\int(-dx\exp(bx)-ey\exp(bx)-f\exp(bx))~dx$
$\exp(bx)u_y=C(y)-\dfrac{dx\exp(bx)}{b}+\dfrac{d\exp(bx)}{b^2}-\dfrac{ey\exp(bx)}{b}-\dfrac{f\exp(bx)}{b}$
$u_y=C(y)\exp(-bx)-\dfrac{dx}{b}+\dfrac{d}{b^2}-\dfrac{ey}{b}-\dfrac{f}{b}$
$u=\int\left(C(y)\exp(-bx)-\dfrac{dx}{b}+\dfrac{d}{b^2}-\dfrac{ey}{b}-\dfrac{f}{b}\right)dy$
$u=C_1(x)+C_2(y)\exp(-bx)-\dfrac{dxy}{b}+\dfrac{dy}{b^2}-\dfrac{ey^2}{2b}-\dfrac{fy}{b}$
Case $4$: $a,b,c\neq0$
Then $u_{xy}+au_x+bu_y+cu+dx+ey+f=0$
Try let $u=p(x)q(y)v$ ,
Then $u_x=p(x)q(y)v_x+p_x(x)q(y)v$
$u_y=p(x)q(y)v_y+p(x)q_y(y)v$
$u_{xy}=p(x)q(y)v_{xy}+p(x)q_y(y)v_x+p_x(x)q(y)v_y+p_x(x)q_y(y)v$
$\therefore p(x)q(y)v_{xy}+p(x)q_y(y)v_x+p_x(x)q(y)v_y+p_x(x)q_y(y)v+a(p(x)q(y)v_x+p_x(x)q(y)v)+b(p(x)q(y)v_y+p(x)q_y(y)v)+cp(x)q(y)v+dx+ey+f=0$
$p(x)q(y)v_{xy}+p(x)(q_y(y)+aq(y))v_x+(p_x(x)+bp(x))q(y)v_y+(p_x(x)q_y(y)+ap_x(x)q(y)+bp(x)q_y(y)+cp(x)q(y))v=-dx-ey-f$
Take $q_y(y)+aq(y)=0\Rightarrow q(y)=\exp(-ay)$ and $p_x(x)+bp(x)=0\Rightarrow p(x)=\exp(-bx)$ , the PDE becomes
$\exp(-bx-ay)v_{xy}+(c-ab)\exp(-bx-ay)v=-dx-ey-f$
$v_{xy}+(c-ab)v=-dx\exp(bx+ay)-ey\exp(bx+ay)-f\exp(bx+ay)$
Case $4a$: $c=ab$
Then $v_{xy}=-dx\exp(bx+ay)-ey\exp(bx+ay)-f\exp(bx+ay)$
$v_x=\int(-dx\exp(bx+ay)-ey\exp(bx+ay)-f\exp(bx+ay))~dy$
$v_x=C(x)-\dfrac{dx\exp(bx+ay)}{a}-\dfrac{ey\exp(bx+ay)}{a}+\dfrac{e\exp(bx+ay)}{a^2}-\dfrac{f\exp(bx+ay)}{a}$
$v=\int\left(C(x)-\dfrac{dx\exp(bx+ay)}{a}-\dfrac{ey\exp(bx+ay)}{a}+\dfrac{e\exp(bx+ay)}{a^2}-\dfrac{f\exp(bx+ay)}{a}\right)dx$
$\exp(bx+ay)u=C_1(x)+C_2(y)-\dfrac{dx\exp(bx+ay)}{ab}+\dfrac{d\exp(bx+ay)}{ab^2}-\dfrac{ey\exp(bx+ay)}{ab}+\dfrac{e\exp(bx+ay)}{a^2b}-\dfrac{f\exp(bx+ay)}{ab}$
$\exp(bx+ay)u=C_1(x)+C_2(y)-\dfrac{(dx+ey+f)\exp(bx+ay)}{ab}+\dfrac{d\exp(bx+ay)}{ab^2}+\dfrac{e\exp(bx+ay)}{a^2b}$
$u=C_1(x)\exp(-bx-ay)+C_2(y)\exp(-bx-ay)-\dfrac{dx+ey+f}{ab}+\dfrac{d}{ab^2}+\dfrac{e}{a^2b}$
$u=C_1(x)\exp(-ay)+C_2(y)\exp(-bx)-\dfrac{dx+ey+f}{ab}+\dfrac{d}{ab^2}+\dfrac{e}{a^2b}$
Hence the really difficult case is when $c\neq ab$ . By letting $u=\exp(-bx-ay)v$ the PDE will reduce to $v_{xy}+(c-ab)v=-dx\exp(bx+ay)-ey\exp(bx+ay)-f\exp(bx+ay)$ , which is as much of a headache as https://math.stackexchange.com/questions/218425 for finding its most general solution.
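As a quick consistency check of the Case $4a$ formula, SymPy confirms that it satisfies the PDE (a sketch; $C_1$ and $C_2$ are taken to be concrete smooth functions and $c=ab$):

    import sympy as sp

    x, y, a, b, d, e, f = sp.symbols('x y a b d e f', nonzero=True)
    C1, C2 = sp.sin(x), sp.cos(y)            # any function of x alone / y alone
    u = (C1*sp.exp(-a*y) + C2*sp.exp(-b*x)
         - (d*x + e*y + f)/(a*b) + d/(a*b**2) + e/(a**2*b))
    pde = sp.diff(u, x, y) + a*sp.diff(u, x) + b*sp.diff(u, y) + a*b*u + d*x + e*y + f
    print(sp.simplify(pde))                  # 0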
|
How many numbers between $1$ and $6042$ (inclusive) are relatively prime to $3780$? How many numbers between $1$ and $6042$ (inclusive) are relatively prime to $3780$?
Hint: $53$ is a factor.
Here the problem is not the solution of the question, because I would simply remove all the multiples of prime factors of $3780$.
But I wonder what is the trick associated with the hint and using factor $53$.
|
$3780=2^2\cdot3^3\cdot5\cdot7$
Any number that is not co-prime with $3780$ must be divisible by at least one of $2,3,5,7$
Let us denote $t(n)=$ number of numbers $\le 6042$ divisible by $n$
$t(2)=\left\lfloor\frac{6042}2\right\rfloor=3021$
$t(3)=\left\lfloor\frac{6042}3\right\rfloor=2014$
$t(5)=\left\lfloor\frac{6042}5\right\rfloor=1208$
$t(7)=\left\lfloor\frac{6042}7\right\rfloor=863$
$t(6)=\left\lfloor\frac{6042}6\right\rfloor=1007$
Similarly, $t(30)=\left\lfloor\frac{6042}{30}\right\rfloor=201$
and $t(2\cdot 3\cdot 5\cdot 7)=\left\lfloor\frac{6042}{210}\right\rfloor=28$
The number of numbers not co-prime with $3780$
=$N=\sum t(i)-\sum t(i\cdot j)+\sum t(i\cdot j \cdot k)-t(i\cdot j\cdot k \cdot l)$ where $i,j,k,l \in (2,3,5,7)$ and no two are equal.
The number of numbers coprime with $3780$ is $6042-N$
Reference: Venn Diagram for 4 Sets
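The same inclusion-exclusion is easy to mechanize; a short Python sketch that also brute-forces the count as a check:

    from itertools import combinations
    from math import gcd, prod

    N, primes = 6042, [2, 3, 5, 7]
    not_coprime = sum((-1)**(len(s) + 1) * (N // prod(s))
                      for k in range(1, 5) for s in combinations(primes, k))
    print(N - not_coprime)                                        # inclusion-exclusion
    print(sum(1 for m in range(1, N + 1) if gcd(m, 3780) == 1))   # brute-force check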
|
$n$ Distinct Eigenvectors for an $ n\times n$ Hermitian matrix? Much like the title says, I wish to know how it is possible that we can know that there are $n$ distinct eigenvectors for an $n\times n$ Hermitian matrix, even though we have multiple eigenvalues. My professor hinted at using the concept of unitary transform and Gram-Schmidt orthogonalization process, but to be honest I'm a bit in the dark. Could anyone help me?
|
You can show that any matrix is unitarily similar to an upper triangular matrix over the complex numbers. This is the Schur decomposition which Ed Gorcenski linked to. Given this transformation, let $A$ be a Hermitian matrix. Then there exists unitary matrix $U$ and upper-triangular matrix $T$ such that
$$A = UTU^{\dagger}$$
We can show that any such decomposition leads to $T$ being diagonal so that $U$ not only triangularizes $A$ but in fact diagonalizes it.
Since $A$ is Hermitian, we have
$$UT^{\dagger}U^{\dagger} = (UTU^{\dagger})^{\dagger} = A^{\dagger} = A = UTU^{\dagger}$$
This immediately implies $T^{\dagger} = T$. Since $T$ is upper-triangular and $T^{\dagger}$ is lower-triangular, both must be diagonal matrices (this further shows that the eigenvalues are real). This shows that any Hermitian matrix is diagonalizable, i.e. any $n\times n$ Hermitian matrix has $n$ linearly independent eigenvectors.
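A small numerical illustration of the conclusion (a NumPy sketch): for a random Hermitian matrix, numpy.linalg.eigh returns real eigenvalues and a unitary matrix of eigenvectors, i.e. $n$ orthonormal (in particular linearly independent) eigenvectors.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 4
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    A = (B + B.conj().T) / 2                       # Hermitian
    w, U = np.linalg.eigh(A)                       # A = U diag(w) U^dagger
    print(np.allclose(U.conj().T @ U, np.eye(n)))          # columns are orthonormal
    print(np.allclose(A, U @ np.diag(w) @ U.conj().T))     # diagonalization holds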
|
Is there a name for this ring-like object? Let $S$ be an abelian group under an operation denoted by $+$. Suppose further that $S$ is closed under a commutative, associative law of multiplication denoted by $\cdot$. Say that $\cdot$ distributes over $+$ in the usual way. Finally, for every $s\in S$, suppose there exists some element $t$, not necessarily unique, such that $s\cdot t=s$.
Essentially, $S$ is one step removed from being a ring; the only problem is that the multiplicative identity is not unique. Here is an example.
Let $S=\{\text{continuous functions } f: \mathbb{R}\rightarrow \mathbb{R} \ \text{with compact support}\}$ with addition and multiplication defined pointwise. It is clear that this is an abelian group with the necessary law of multiplication. Now, let $f\in S$ be supported on $[a,b]$. Let $S'\subset S$ be the set of continuous functions compactly supported on intervals containing $[a,b]$ that are identically $1$ on $[a,b]$. Clearly, if $g\in S'$, then $f\cdot g=f$. Also, there is no unique multiplicative identity in this collection since the constant function $1$ is not compactly supported.
I've observed that this example is an increasing union of rings, but I don't know if this holds for every set with the property I've defined.
|
This is a pseudo-ring, or rng, or ring-without-unit. The linked article in fact mentions the example of functions with compact support. The fact that you have a per-element neutral element is probably not sufficiently useful to give a special name to pseudo-rings with this additional property.
|
Simple Characterizations of Mathematical Structures By no means trivial, a simple characterization of a mathematical structure is a simply-stated one-liner in the following sense:
Some general structure is (surprisingly and substantially) more structured if and only if the former satisfies some (surprisingly and superficially weak) extra assumption.
For example, here are four simple characterizations in algebra:
*
*A quasigroup is a group if and only if it is associative.
*A ring is an integral domain if and only if its spectrum is reduced and irreducible.
*A ring is a field if and only if its only ideals are $(0)$ and itself.
*A domain is a finite field if and only if it is finite.
I'm convinced that there are many beautiful simple characterizations in virtually all areas of mathematics, and I'm quite puzzled why they aren't utilized more frequently. What are some simple characterizations that you've learned in your mathematical studies?
|
A natural number $p$ is prime if and only if it divides $(p-1)! + 1$ (and is greater than 1).
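This is Wilson's theorem; as a (very inefficient) illustration, it can be used directly as a primality test:

    from math import factorial

    def is_prime_wilson(p):
        return p > 1 and (factorial(p - 1) + 1) % p == 0

    print([p for p in range(2, 40) if is_prime_wilson(p)])   # the primes below 40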
|
Are there values $k$ and $\ell$ such that $n$ = $kd$ + $\ell$? Prove. Suppose that $n \in \mathbb Z$ and $d$ is an odd natural number, where $0 \notin\mathbb N$. Prove that there exist $k$ and $\ell$ such that $n = kd +\ell$ and $-\frac {d}2 < \ell < \frac d2$.
I know that this is related to Euclidean's Algorithm and that k and $\ell$ are unique. I do not understand where to start proving this (as I don't most problems like these), but I also have a few other questions.
Why is it that $d$ is divided by $2$ when it is an odd number? I'm not even sure how $\ell$ being greater than and less than these fractions has anything to do with the rest of the proof. Couldn't $\ell$ be any value greater than or less than $0$?
Since d can never equal $0$, then kd could never equal $0$, so doesn't that leave the only n to possibly equal $0$?
I would appreciate anyone pushing me in the correct direction.
|
We give a quite formal, and unpleasantly lengthy, argument. Then in a remark we say what's really going on. Let $n$ be an integer. First note that there are integers $x$ such that $n-xd\ge 0$. This is obvious if $n\ge 0$. And if $n \lt 0$, we can for example use $x=-n+1$.
Let $S$ be the set of all non-negative integers of the shape $n-xd$. Then $S$ is, as we observed, non-empty. So there is a smallest non-negative integer in $S$. Call this number $r$. (The fact that any non-empty set of non-negative integers has a smallest element is a hugely important fact equivalent to the principle of mathematical induction. It is often called the Least Number Principle.)
Since $r\in S$, we have $r\ge 0$. Moreover, by the definition of $S$ there is an integer $y$ such that $r=n-yd$, or equivalently $n=yd+r$.
Note that $r\lt d$. For suppose to the contrary that $r \ge d$. Then $r-d\ge 0$. But $r-d=n-(y+1)d$, and therefore $r-d$ is an element of $S$, contradicting the fact that $r$ is the smallest element of $S$.
To sum up, we have shown that there is an $r$ such that $0\le r\lt d$ and such that there exists a $y$ such that $r=n-yd$, or equivalently $n=yd+r$.
Case (i): Suppose that $r\lt \dfrac{d}{2}$. Then let $k=y$ and $\ell=r$. We have then $n=kd+\ell$ and $0\le \ell\lt \dfrac{d}{2}$.
Case (ii): Suppose that $r \ge \frac{d}{2}$. Since $d$ is odd, we have $r\gt \dfrac{d}{2}$. We have
$$\frac{d}{2}\lt r \lt d.$$
Subtract $d$ from both sides of these inequalities. We obtain
$$-\dfrac{d}{2}\lt r-d\lt 0,$$
which shows that
$$-\frac{d}{2}\lt n-yd-d\lt 0.$$
Finally, in this case let $k=y+1$ and $\ell=n-kd$. Then $n=kd+\ell$ and
$$-\dfrac{d}{2}\lt \ell\lt 0.$$
Remark: There is surprisingly little going on here. We first found the remainder $r$ when $n$ is divided by $d$. But the problem asks for a "remainder" which is not necessarily, like the usual remainder, between $0$ and $d-1$. We want to allow negative "remainders"
that are as small in absolute value as possible. The idea is that if the ordinary remainder is between $0$ and $d/2$, we are happy with it, but if the ordinary remainder is between $d/2$ and $d-1$, we increase the "quotient" by $1$, thereby decreasing the remainder by $d$, and putting it in the right range. So for example if $n=68$ and $d=13$, we use $k=5$, and $\ell=3$. If $n=74$ and $d=13$, we have the usual $74=(5)(13)+9$. Increase the quotient to $6$. We get $74=(6)(13)+(-4)$, and use $k=6$, and $\ell=-4$.
We gave a proof in the traditional style, but the argument can be rewritten as an ordinary induction argument on $|n|$. It is a good idea to work separately with non-negative and negative integers $n$. We sketch the argument for non-negative $n$. The result is obvious for $n=0$, with $k_0=\ell_0=0$. Suppose that for a given non-negative $n$ we have $n=k_nd+\ell_n$, where $\ell_n$ obeys the inequalities of the problem, that is, $-d/2\lt \ell_n\lt d/2$. If $\ell_n\le (d-3)/2$, then $n+1=k_{n+1}d +\ell_{n+1}$, where $k_{n+1}=k_n$ and $\ell_{n+1}=\ell_n+1$. If $\ell_n=(d-1)/2$, let $k_{n+1}=k_n+1$ and $\ell_{n+1}=-(d-1)/2$. It is not hard to verify that these values of $\ell_{n+1}$ are in the right range.
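The procedure described in the Remark is easy to write down as code (a small sketch; the function name is made up):

    def balanced_divmod(n, d):
        # return (k, l) with n == k*d + l and -d/2 < l < d/2, for an odd natural d
        k, l = divmod(n, d)        # ordinary division: 0 <= l < d
        if l > d // 2:             # for odd d this means l > d/2
            k, l = k + 1, l - d    # increase the quotient, decrease the remainder by d
        return k, l

    print(balanced_divmod(68, 13))   # (5, 3)
    print(balanced_divmod(74, 13))   # (6, -4)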
|
Minimal surfaces and gaussian and normal curvaturess If $M$ is the surface $$x(u^1,u^2) = (u^2\cos(u^1),u^2\sin(u^1), p\,u^1)$$ then I am trying to show that $M$ is minimal. $M$ is referred to as a helicoid.
Also I am confused on how $p$ affects the problem
|
There is a good reason that the value of $p$ does not matter, as long as $p \neq 0.$
If you begin with a sphere of radius $R$ and blow it up to a sphere of radius $SR,$ the result is to multiply the mean curvature by $\frac{1}{S}.$ This is a general phenomenon. A map, which is also linear, given by moving every point $(x,y,z)$ to $(\lambda x, \lambda y, \lambda z)$ for a positive constant $\lambda,$ is called a homothety. A homothety takes any surface and multiplies the mean curvature (at matching points, of course) by $\frac{1}{\lambda}.$ This can be done in any $\mathbb R^n,$ I guess we are sticking with $\mathbb R^3.$
So, what I need to do is show you that your helicoid with parameter $p,$ expanded or shrunk by a homothety, is the helicoid with a different parameter, call it $q.$ I'm going to use $u = u^1, v = u^2.$ And that is just
$$ \frac{q}{p} x(u, \frac{pv}{q}) = \frac{q}{p} \left(\frac{pv}{q} \cos u , \frac{pv}{q} \sin u, p u \right) = (v \cos u , v \sin u, q u). $$
Well, the mean curvature of the original helicoid is $0$ everywhere. So the new helicoid is still minimal.
There is a bit of work showing that a homothety changes the mean curvature in the way I described, no easier than your original problem. True, though.
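If it helps, the vanishing of the mean curvature can also be checked symbolically (a SymPy sketch; $u,v$ are the parameters called $u^1,u^2$ in the question, and the result is $0$ for every $p$, in line with the homothety argument):

    import sympy as sp

    u, v, p = sp.symbols('u v p', real=True)
    X = sp.Matrix([v*sp.cos(u), v*sp.sin(u), p*u])
    Xu, Xv = X.diff(u), X.diff(v)
    n = Xu.cross(Xv)                                   # (non-unit) normal vector
    E, F, G = Xu.dot(Xu), Xu.dot(Xv), Xv.dot(Xv)
    L, M, N = X.diff(u, 2).dot(n), X.diff(u).diff(v).dot(n), X.diff(v, 2).dot(n)
    # mean curvature vanishes iff E*N - 2*F*M + G*L does (they differ by a positive factor)
    print(sp.simplify(E*N - 2*F*M + G*L))              # 0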
|
Proj construction and fibered products How to show, that
$Proj \, A[x_0,...,x_n] = Proj \, \mathbb{Z}[x_0,...,x_n] \times_\mathbb{Z} Spec \, A$?
It is used in Hartshorne, Algebraic geometry, section 2.7.
|
Show that you have an isomorphism on suitable open subsets, and that the isomorphisms glue. The standard ones on $\mathbb{P}^n_A$ should suffice. Use that $$\mathbb{Z}[x_0, \ldots, x_n] \otimes_\mathbb{Z} A \cong A[x_0, \ldots, x_n].$$ Maybe you could prove the isomorphism by using the universal property of projective spaces too, but that might be overkill / not clean at all.
|
Show that $\left(\frac{1}{2}\left(x+\frac{2}{x}\right)\right)^2 > 2$ if $x^2 > 2$ Okay, I'm really sick and tired of this problem. Have been at it for an hour now and we all know the drill: if you don't get to the solution of a simple problem, you won't, so ...
I'm working on a proof for the convergence of the Babylonian method for computing square roots. As a warming up I'm first using the sequence $(x_n)$ defined by:
$$
x_1 = 2\\
x_{n+1} = \frac{1}{2} (x_n + \frac{2}{x_n})
$$
Now for the proof, I want to show that: $\forall n \in \mathbb{N}: x^2_n > 2$. I want to prove this using induction, so this eventually comes down to:
$$
x_n^2 > 2 \implies x_{n+1}^2 = \frac{1}{4}x_n^2 + 1 + \frac{1}{x_n^2} > 2
$$
And I can't seem to get to the solution. Note that I don't want to make use of showing that $x=2$ is a minimum for this function using derivatives. I purely want to work with the inequalities provided. I'm probably seeing through something very obvious, so I would like to ask if anyone here sees what's the catch.
Sincerely,
Eric
|
First, swap $x_n^2$ for $2y$, just to make it simpler to write. The hypothesis is then $y > 1$, and what we want to show is
$$
\frac{2}{4}y + \frac{1}{2y} > 1
$$
$$
y + \frac{1}{y} > 2
$$
Multiply by $y$ (since $y$ is positive, no problems arise)
$$
y^2 -2y + 1 > 0
$$
$$
(y-1)^2 > 0
$$
which is obvious, since $y > 1$.
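A quick numerical illustration of what is being proved: every iterate of the Babylonian recursion keeps $x_n^2>2$ while approaching $\sqrt 2$.

    x = 2.0
    for n in range(1, 8):
        print(n, x, x*x > 2)           # the flag stays True
        x = 0.5 * (x + 2.0 / x)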
|
Convergence of Lebesgue integrable functions in an arbitrary measure. I'm a bit stuck on this problem, and I was hoping someone could point me in the right direction.
Suppose $f, f_1, f_2,\ldots \in L^{1}(\Omega,A,\mu)$ , and further suppose that $\lim_{n \to \infty} \int_{\Omega} |f-f_n| \, d\mu = 0$. Show that $f_n \rightarrow f$ in measure $\mu$.
In case you aren't sure, $L^1(\Omega,A,\mu)$ is the complex Lebesgue integrable functions on $\Omega$ with measure $\mu$.
I believe I have to use the Dominated convergence theorem to get this result, and they usually do some trick like taking a new function $g$ that relates to $f$ and $f_n$ in some way, but I'm not seeing it. Any advice?
|
A bit late to answer, but here it is anyways.
We wish to show that for any $\epsilon > 0,$ there is some $N$ such that for all $n \geq N, \mu(\{x : |f_n(x) - f(x)| > \epsilon\}) < \epsilon.$ (This is one of several equivalent formulations of convergence in measure.)
If this were not the case, then there'd be some $\epsilon > 0$ so that for every $N$ there is an $n \geq N$ that doesn't satisfy the above condition. So, pick $N$ large enough so that for all $n \geq N, \int_\Omega |f-f_n| \ d\mu < \epsilon^2.$ Then, for this $N$ we have, by our assumption, some $n_0$ with $\mu(L_{n_0}) \geq \epsilon$ where $L_{n_0} = \{x: |f_{n_0}(x) - f(x)| > \epsilon\}.$
But then, we'd have that $$\epsilon^2 > \int_\Omega |f_{n_0} - f| \ d\mu \geq \int_{L_{n_0}} |f_{n_0} - f| \ d\mu > \epsilon\mu(L_{n_0}) \geq \epsilon^2,$$ which is a contradiction. Hence, we must have convergence in measure.
|
Is this the category of metric spaces and continuous functions?
Suppose the objects of the category are metric spaces and, for $\left(A,d_A\right)$ and $\left(B,d_B\right)$ metric spaces over sets A and B, a morphism of two metric spaces is given by a function between the underlying sets such that $f$ preserves the metric structure: $\forall x,y,z \in A$ we have:
*
*$$ d_B\left(f\left(x\right),f\left(y\right)\right)= 0 \Leftrightarrow
f\left(x\right)=f\left(y \right)$$
*$$d_B\left(f\left(x\right),f\left(y\right)\right)=d_B\left(f\left(y\right),f\left(x\right)\right)$$
*$$d_B\left(f\left(x\right),f\left(y\right)\right) \le d_B\left(f\left(x\right),f\left(z\right)\right) + d_B\left(f\left(z\right),f\left(y\right)\right) $$
and furthermore : $\forall \epsilon > 0$, $\exists \delta >0 $ which satisfy:
*$$d_A\left(x,y\right)<\delta \Rightarrow d_B \left(f\left(x\right),f\left(y\right)\right)< \epsilon$$
Is this the category of metric spaces and continuous functions? What if we drop the last requirement?
|
I don't think there's really any one "the category of metric spaces". The fourth axiom here gives you a category of metric spaces and (uniformly) continuous functions. The other axioms are implied by the assumptions. Allowing $\delta$ to depend on $x$ gives you the category of metric spaces and (all) continuous functions.
One way to preserve metric structure would be to demand that $d_B(f(x),f(y))=d_A(x,y)$. This would restrict the functions to isometric ones, which are all homeomorphisms, so you could relax the restriction to $d_B(f(x),f(y))\le d_A(x,y)$. That way you get the category of metric spaces and contractions.
|
Counting permutations of students standing in line Say I have k students, four of them are Jack, Liz, Jenna and Tracy. I want to count the number of permutations in which Liz is standing to the left of Jack and Jenna is standing to the right of Tracy. I define $A = ${Liz is left of Jack} so $|A| = \frac{k!}{2}$. The same goes for $B$ with Jenna and Tracy.
I know that $$|A \cap B| = |A| + |B| - |A \cup B|$$
But how do I find the union? I'm guessing it involves inclusion-exclusion, but I can't remember how exactly.
Any ideas? Thanks!
|
The order relationship between Liz and Jack is independent of that between Jenna and Tracy. You already know that there are $k!/2$ permutations in which Liz stands to the left of Jack. In each of those Jenna can be on the right of Tracy or on her left without affecting the order of Liz and Jack, so exactly half of these $k!/2$ permutations have Jenna to the right of Tracy. The answer is therefore $k!/4$.
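A brute-force check for a small value of $k$ (illustration only; the filler names are made up):

    from itertools import permutations
    from math import factorial

    k = 6
    people = ['Liz', 'Jack', 'Tracy', 'Jenna'] + [f'S{i}' for i in range(k - 4)]
    count = sum(1 for p in permutations(people)
                if p.index('Liz') < p.index('Jack') and p.index('Jenna') > p.index('Tracy'))
    print(count, factorial(k) // 4)    # both 180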
|
Does a natural transformation on sites induce a natural transformation on presheaves? Suppose $C$ and $D$ are sites and $F$, $G:C\to D$ two functors connected by a natural transformation $\eta_c:F(c)\to G(c)$.
Suppose further that two functors $\hat F$, $\hat G:\hat C\to\hat D$ on the respective categories of presheaves are given by $\hat F(c)=F(c)$ and $\hat G(c)=G(c)$ where I abuse the notation for the Yoneda embedding.
Is there always a natural transformation $\hat\eta_X:\hat F(X)\to \hat G(X)$?
The problem is, that in the diagram
$$
\begin{array}{rcccccl}
\hat F(X)&=&\operatorname{colim} F(X_j)&\to& \operatorname{colim} G(X_j)&=&\hat G(X)\\
&&\downarrow &&\downarrow\\
\hat F(Y)&=&\operatorname{colim} F(Y_k)&\to& \operatorname{colim} G(Y_k)&=&\hat G(Y)
\end{array}
$$
for a presheaf morphism $X\to Y$ the diagrams for the colimits may be different, or am I wrong?
|
Recall: given a functor $F : \mathbb{C} \to \mathbb{D}$ between small categories, there is an induced functor $F^\dagger : [\mathbb{D}^\textrm{op}, \textbf{Set}] \to [\mathbb{C}^\textrm{op}, \textbf{Set}]$, and this functor has both a left adjoint $\textrm{Lan}_F$ and a right adjoint $\textrm{Ran}_F$. Now, given a natural transformation $\alpha : F \Rightarrow G$, there is an induced natural transformation $\alpha^\dagger : G^\dagger \Rightarrow F^\dagger$ (note the direction!), given by $(\alpha^\dagger_Q)_C = Q(\alpha_C) : Q(G C) \to Q(F C)$. Consequently, if $\eta^G_P : P \to (\textrm{Lan}_G P) F$ is the component of the unit of the adjunction $\textrm{Lan}_G \dashv G^\dagger$, we can compose with $\alpha^\dagger_{\textrm{Lan}_G P}$ to get a presheaf morphism $\alpha^\dagger_{\textrm{Lan}_G P} \circ \eta^G_P : P \to (\textrm{Lan}_G P) F$, and by adjunction this corresponds to a presheaf morphism $\textrm{Lan}_F P \to \textrm{Lan}_G P$. This is all natural in $P$, so we have the desired natural transformation $\textrm{Lan}_\alpha : \textrm{Lan}_F \Rightarrow \textrm{Lan}_G$.
|
How to solve/transform/simplify an equation by a simple algorithm? MathePower provides a form. There you can input a formula (1st input field) and a variable to solve for (2nd input field) and it will output a simplified version of that formula.
I want to write a script which needs to do something similar.
So my question is:
*
*Do you know about any simple algorithm which can do something like the script on MathePower? (I just want to simplify formulas based on the four basic arithmetical operations.)
*Are there any example implementations in a programming language?
Thanks for your answer. (And please excuse my bad English.)
|
This is generally known as "computer algebra," and there are entire books and courses on the subject. There's no single magic bullet. Generally it relies on things like specifying canonical forms for certain types of expressions and massaging them. Playing with the form, it seems to know how to simplify a rational expression, but not for instance that $\sin^2 x + \cos^2 x = 1$.
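If writing the algorithm yourself is not essential, an existing computer algebra library already handles the four basic arithmetical operations; a minimal SymPy sketch (one possible tool, not the algorithm behind the linked site):

    import sympy as sp

    x, y = sp.symbols('x y')
    print(sp.simplify((x**2 - y**2) / (x - y)))   # x + y
    print(sp.solve(sp.Eq(2*x + 6, 10), x))        # [2]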
|
Maximum based recursive function definition Does a function other than 0 that satisfies the following definition exist?
$$
f(x) = \max_{0<\xi<x}\left\{ \xi\;f(x-\xi) \right\}
$$
If so can it be expressed using elementary functions?
|
Since we cannot be sure if the $\max$ exists, let us consider $f\colon I\to\mathbb R$ with
$$\tag1f(x)=\sup_{0<\xi<x}\xi f(x-\xi)$$
instead, where $I$ is an interval of the form $I=(0,a)$ or $I=(0,a]$ with $a>0$.
If $x_0>0$ then $f(x)\ge (x-x_0)f(x_0)$ for $x>x_0$ and $f(x)\le\frac{f(x_0)}{x_0-x}$ for $x<x_0$.
We can conclude $f(x)\ge0$ for all $x>0$: Select $x_0\in(0,x)$. Note that $f(x_0)$ may be negative. Let $\epsilon>0$. For $0<h<x-x_0$ we have $f(x_0+h)\ge h f(x_0)$ and $f(x)\ge (x-x_0-h)f(x_0+h)\ge h(x-x_0-h)f(x_0)$. If $h<\frac{\epsilon}{(x-x_0)|f(x_0)|}$, this shows $f(x)\ge-\epsilon$. Since $\epsilon$ was arbitrary, we conclude $f(x)\ge0$.
Assume $f(x_0)>0$. Then for any $0<\epsilon<1$ there is $x_1<x_0$ with $(x_0-x_1)f(x_1)>(1-\epsilon)f(x_0)$ and especially $f(x_1)>0$. In fact, for a sequence $(\epsilon_n)_n$ with $0<\epsilon_n<1$ and
$$\prod_n (1-\epsilon_n)=:c>0$$
(which is readily constructed) we find a sequence $x_0>x_1>x_2>\ldots$ such that $(x_n-x_{n+1})f(x_{n+1})>(1-\epsilon_n)f(x_n)$, hence
$$\prod_{k=1}^{n} (x_{k}-x_{k+1})\cdot f(x_{n+1})>\prod_{k=1}^{n-1}(1-\epsilon_k)\cdot f(x_1)>c f(x_1). $$
By the arithmetic-geometric inequality, $${\prod_{k=1}^n (x_{k}-x_{k+1})}\le \left(\frac {x_1-x_n}n\right)^n<\left(\frac {x_1}n\right)^n$$
and
$$f(x_{n+1})>c f(x_1)\cdot \left(\frac n{x_1}\right)^n$$
The last factor is unbounded.
Therefore,
$f(x_0)\ge (x_0-x_{n+1})f(x_{n+1})\ge (x_0-x_1) f(x_{n+1})$ gives us a contradiction.
Therefore $f$ is identically zero.
|
Alice and Bob Game Alice and Bob just invented a new game.
The rule of the new game is quite simple. At the beginning of the game, they write down N
random positive integers, then they take turns (Alice first) to either:
*
*Decrease a number by one.
*Erase any two numbers and write down their sum.
Whenever a number is decreased to 0, it will be erased automatically. The game ends when all numbers are finally erased, and the one who cannot play in his(her) turn loses the game.
Here's the problem: Who will win the game if both use the best strategy?
|
The complete solution to this game is harder than it looks, due to complications when there are several numbers $1$ present; I claim the following is a complete list of the "Bob" games, those that can be won by the second player to move. To justify, I will indicate for each case a strategy for Bob, countering any move by Alice by another move leading to a simpler "Bob" game.
I will write game position as partitions, weakly decreasing sequences of nonnegative integers (order clearly does not matter for the game). Entries present a number of times are indicated by exponents in parentheses, so $(3,1^{(4)})$ designates $(3,1,1,1,1)$. Moves are of type "decrease" (type 1 in the question) or "merge" (type 2); a decrease from $1$ to $0$ will be called a removal.
Bob-games are:
*
*$(0)$ and $(2)$
*$(a_1,\ldots,a_n,1^{(2k)})$ where $k\geq0$, $n\geq1$, $a_n\geq2$, $(a_1,\ldots,a_n)\neq(2)$, and $a_1+\cdots+a_n+n-1$ is even. Strategy: counter a removal of one of the numbers $1$ by another such removal; a merge of a $1$ and an $a_i$ by another merge of a $1$ into $a_i$; a merge of two entries $1$ by a merge of the resulting $2$ into one of the $a_i$; a decrease of an $a_i$ from $2$ to $1$ by a merge of the resulting $1$ into another $a_j$; any other decrease of an $a_i$ or a merge of an $a_i$ and $a_j$ by the merge of two entries $1$ if possible ($k\geq1$) or else merge an $a_i$ and $a_j$ if possible ($n\geq2$), or else decrease the unique remaining number making it even.
*(to be continued...)
Note that the minimal possibilities for $(a_1,\ldots,a_n)$ here are $(4)$, $(3,2)$, and $(2,2,2)$. Anything that can be moved into a Bob-game is an Alice-game; this applies to any $(a_1,\ldots,a_n,1^{(2k+1)})$ where $k\geq0$, $n\geq1$, $a_n\geq2$, $(a_1,\ldots,a_n)\neq(2)$ (either remove or merge a $1$ so as to leave $a_1+\cdots+a_n+n-1$ even), and to any $(a_1,\ldots,a_n,1^{(2k)})$ where $k\geq0$, $n\geq1$, $a_n\geq2$, and $a_1+\cdots+a_n+n-1$ odd (either merge two of the $a_i$ or two entries $1$, or if there was just an odd singleton, decrease it). All cases $(3,1^{(l)})$ and $(2,2,1^{(l)})$ are covered by this, in a manner depending on the parity of $l$. It remains to classify the configurations $(2,1^{(l)})$ and $(1^{(l)})$. Moving outside this remaining collection always gives some Alice-game $(3,1^{(l)})$ or $(2,2,1^{(l)})$, which are losing moves that can be ignored. Then we complete our list of Bob-games with:
*
*$(1^{(3k)})$ and $(2,1^{(3k)})$ with $k>0$. Bob wins game $(1,1,1)$ by moving to $(2)$ in all cases. Similarly he wins other games $(1^{(3k)})$ by moving to $(2,1^{(3k-3)})$ in all cases. Finally Bob wins $(2,1^{(3k)})$ by moving to $(1^{(3k)})$ (unless Alice merges, but this also loses as we already saw).
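For small starting positions the game can also be solved by brute force, which gives an independent check of the classification above (a Python sketch; a position is a sorted tuple of the numbers on the board, and the player who cannot move loses):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def first_player_wins(pos):
        pos = tuple(sorted(pos))
        if not pos:
            return False                           # no move available: current player loses
        moves = set()
        for i, a in enumerate(pos):                # decrease a number by one (0 is erased)
            rest = pos[:i] + pos[i+1:]
            moves.add(tuple(sorted(rest + ((a - 1,) if a > 1 else ()))))
        for i in range(len(pos)):                  # erase two numbers, write their sum
            for j in range(i + 1, len(pos)):
                rest = [pos[k] for k in range(len(pos)) if k not in (i, j)]
                moves.add(tuple(sorted(rest + [pos[i] + pos[j]])))
        return any(not first_player_wins(m) for m in moves)

    print(first_player_wins((2,)))         # False: (2) is a Bob game, as listed above
    print(first_player_wins((1, 1, 1)))    # False: matches "Bob wins game (1,1,1)"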
|
How to undo linear combinations of a vector If $v$ is a row vector and $A$ a matrix, the product $w = v A$ can be seen as a vector containing a number of linear combinations of the columns of vector $v$. For instance, if
$$
v = \begin{bmatrix}1, 2\end{bmatrix}, \quad
A = \begin{bmatrix}0 & 0 & 0 \\ 1 & 1 & 1\end{bmatrix}, \quad
w = vA = \begin{bmatrix}2, 2, 2\end{bmatrix}
$$
read by columns, the matrix $A$ is saying: make 3 combinations of the columns of vector $v$, each of which consists of taking 0 times the first column and 1 time the second column.
Now, the goal is to reconstruct, to the extent of possible, vector $v$ from $A$ and $w$, in other words to find a vector $v'$ such that $$v'A = w .$$
Two things to consider:
*
*The matrix $A$ can have any number of columns and may or may not be square or invertible.
*There are times when elements of the original vector can't be known, because $w$ contains no information about them. In the previous example, this would be the case of $v_1$. In this case, we would accept any value of $v'_1$ as correct.
How would you approach this problem? Can $v'$ be found doing simple operations with $w$ and $A$ or do I have to invent an algorithm specifically for the purpose?
|
Clearly
$$ \begin{bmatrix}0 & 2\end{bmatrix}
\begin{bmatrix}0 & 0 & 0 \\ 1 & 1 & 1\end{bmatrix}
= \begin{bmatrix}2 & 2 & 2\end{bmatrix}$$
So $v'=[0 \; 2]$ is a solution.
So we can suppose that any other solution can look like
$v'' = v' + [x \; y]$.
\begin{align}
(v' + [x \; y])A &= [2 \; 2 \; 2] \\
v'A + [x \; y]A &= [2 \; 2 \; 2] \\
[2 \; 2 \; 2] + [x \; y]A &= [2 \; 2 \; 2] \\
[x \; y]A &= [0 \; 0 \; 0] \\
[y \; y \; y] &= [0 \; 0 \; 0] \\
y &= 0
\end{align}
So the most general solution is
$v'' = v'+ \begin{bmatrix}x & 0\end{bmatrix} = \begin{bmatrix}x & 2\end{bmatrix}$
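In practice, one convenient way to compute such a $v'$ for a general $A$ and $w$ (a sketch, not the only approach) is a least-squares solve of $A^{\mathsf T}{v'}^{\mathsf T}=w^{\mathsf T}$: the minimum-norm solution returned by numpy.linalg.lstsq sets the components about which $w$ carries no information, like $v'_1$ here, to $0$.

    import numpy as np

    A = np.array([[0, 0, 0], [1, 1, 1]], dtype=float)
    w = np.array([2, 2, 2], dtype=float)
    v_prime, *_ = np.linalg.lstsq(A.T, w, rcond=None)   # minimum-norm least-squares solution
    print(v_prime)                       # [0. 2.]
    print(np.allclose(v_prime @ A, w))   # True, so v' A = w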
|
*Recursive* vs. *inductive* definition I once had an argument with a professor of mine, if the following definition was a recursive or inductive definition:
Suppose you have sequence of real numbers. Define $a_0:=2$ and $a_{i+1}:=\frac{a_i a_{i-1}}{5}$. (Of course this is just an example and as such has only illustrative character - I could have as well taken as an example a definition of some family of sets)
I claimed that this definition was recursive since we have an $a_{i+1}$ and define it going "downwards" and use the word "inductive" only as an adjective for the word "proof", but my professor insisted that we distinguish between these types of definition and that this was an inductive definition, since we start with $a_0$ and work "upwards".
Now, is there even someone who can be right ? Since to me it seems that mathematically every recursive definition is also inductive (whatever these two expressions may finally mean), since the mathematical methods used to define them (namely equations) are the same. (Wikipedia also seems to think they are the same - but I trust a sound mathematical community, i.e. you guys, more than Wikipedia)
And if there is a difference, who is right and what is, if the above is a recursive definition, an inductive definition (and vice-versa) ?
(Please, don't ask me to ask my professor again - or anything similar, since I often get this answer here, after mentioning that this question resulted from a discussion with some faculty member - since our discussion ended with him saying that "it definitely is inductive, but I just can't explain it")
|
Here is my inductive definition of the cardinality of a finite set (since, in my mind, finite sets are built by adding elements starting with the empty set):
$$|\emptyset| = 0, \qquad
|A \cup \{x\}| =
\begin{cases}
|A| & \text{if } x \in A,\\
|A|+1 & \text{if } x \not\in A.
\end{cases}$$
|
Prove that $mn|a$ implies $m|a$ and $n|a$ I am trying to prove this statement about divisibility: $mn|a$ implies $m|a$ and $n|a$.
I cannot start the proof. I need to prove either the right or left side. I don't know how to use divisibility theorems here. Generally, I have problems in proving mathematical statements.
This is my attempt: $m$ divides $a$ implies that $mn$ also divides $a$. How do I show that $n$ also divides $a$?
|
If $mn|a$ then $a=kmn$ for some integer $k$. Then $a=(km)n$ where $km$ is an integer so that $n|a$. Similarly, $a=(kn)m$ where $kn$ is an integer so that $m|a$.
|
Evaluating a double integral: $\iint \exp(\sqrt{x^2+y^2})\:dx\:dy$? How to evaluate the following integral? $$\iint \exp\left(\sqrt{x^2+y^2} \right)\:dx\:dy$$
I'm trying to integrate this using substitution and integration by parts but I keep getting stuck.
|
If you switch to polar coordinates, you end out integrating $re^r \,dr \,d\theta$, which you should be able to integrate over your domain by doing the $r$ integral first (via integration by parts).
|
Non-closed compact subspace of a non-hausdorff space I have a topology question which is:
Give an example of a topological (non-Hausdorff) space X and a non-closed compact subspace.
I've been thinking about it for a while but I'm not really getting anywhere. I've also realised that apart from metric spaces I don't really have a large pool of topological spaces to think about (and a metric space won't do here, because then it would be Hausdorff and any compact subset of a metric space is closed)
Is there certain topological spaces that I should know about (i.e. some standard and non-standard examples?)
Thanks very much for any help
|
Here are some examples that work nicely.
*
*The indiscrete topology on any set with more than one point: every non-empty, proper subset is compact but not closed. (The indiscrete topology isn’t good for much, but as Qiaochu said in the comments, it’s a nice, simple example when it actually works.)
*In the line with two origins, the set $[-1,0)\cup\{a\}\cup(0,1]$ is compact but not closed: $b$ is in its closure.
*The set $\{1\}$ in the Sierpiński space is compact but not closed.
*For each $n\in\Bbb N$ let $V_n=\{k\in\Bbb N:k<n\}$; then $\{V_n:n\in\Bbb N\}\cup\{\Bbb N\}$ is a topology on $\Bbb N$, in which every non-empty finite set is compact but not closed.
*Let $\tau$ be the cofinite topology on an infinite set $X$. Then every subset of $X$ is compact, but the only closed subsets are $X$ and the finite subsets of $X$.
In terms of the common separation axioms: (1) is not even $T_0$; (2) and (5) are $T_1$; and (3) and (4) are $T_0$ but not $T_1$.
|
Maps of maximal ideals Prove that the map $\mu$ from $k^n$ to the set of maximal ideals of $k[x_1,\ldots,x_n]$ given by $$(a_1,\ldots,a_n)\rightarrow (x_1-a_1,\ldots,x_n-a_n)$$ is an injection, and give an example of a field $k$ for which $\mu$ is not a surjection.
The first part is clear, but the second part needs a field $k$ such that not all maximal ideals of the polynomial ring are of the form $(x_1-a_1,\ldots,x_n-a_n)$. I am not sure how to find one as I obviously need to find a non-obvious ring epimorphism $k[x_1,\ldots,x_n]\rightarrow k$ such that the kernel is the maximal ideal. This question is quite elementary and I feel embarrassed to ask.
|
At Julian's request I'm developing my old comment into an answer. Here is the result:
Given any non-algebraically-closed field $k$, the canonical map $$k\to \operatorname {Specmax}(k[x]):a\mapsto (x-a)$$ is not surjective.
Indeed, by hypothesis there exists an irreducible polynomial $p(x)\in k[x]$ of degree $\gt 1$.
This polynomial generates a maximal ideal $\mathfrak m=(p(x))$ which is not of the form $(x-a)$, in other words which is not in the image of our displayed canonical map.
|
Number of squares in a rectangle Given a rectangle of length a and width b (as shown in the figure), how many different squares of edge greater than 1 can be formed using the cells inside.
For example, if a = 2, b = 2, then the number of such squares is just 1.
|
In an $n\times p$ rectangle, the number of rectangles that can be formed is $\frac{n(n+1)p(p+1)}{4}$, and the number of squares of edge $r\ge 2$ that can be formed is $\sum_{r=2}^{\min(n,p)} (n+1-r)(p+1-r)$ (for $n=p=2$ this gives the single $2\times 2$ square).
|
For which values of $\alpha \in \mathbb R$ is the following system of linear equations solvable? The problem I was given:
Calculate the value of the following determinant:
$\left|
\begin{array}{cccc}
\alpha & 1 & \alpha^2 & -\alpha\\
1 & \alpha & 1 & 1\\
1 & \alpha^2 & 2\alpha & 2\alpha\\
1 & 1 & \alpha^2 & -\alpha
\end{array}
\right|$
For which values of $\alpha \in \mathbb R$ is the following system of linear equations solvable?
$\begin{array}{ccccccc}
\alpha x_1 & + & x_2 & + & \alpha^2 x_3 & = & -\alpha\\
x_1 & + & \alpha x_2 & + & x_3 & = & 1\\
x_1 & + & \alpha^2 x_2 & + & 2\alpha x_3 & = & 2\alpha\\
x_1 & + & x_2 & + & \alpha^2 x_3 & = & -\alpha\\
\end{array}$
I got as far as finding the determinant, and then I got stuck.
So I solve the determinant like this:
$\left|
\begin{array}{cccc}
\alpha & 1 & \alpha^2 & -\alpha\\
1 & \alpha & 1 & 1\\
1 & \alpha^2 & 2\alpha & 2\alpha\\
1 & 1 & \alpha^2 & -\alpha
\end{array}
\right|$ =
$\left|
\begin{array}{cccc}
\alpha - 1 & 0 & 0 & 0\\
1 & \alpha & 1 & 1\\
1 & \alpha^2 & 2\alpha & 2\alpha\\
1 & 1 & \alpha^2 & -\alpha
\end{array}
\right|$ =
$(\alpha - 1)\left|
\begin{array}{ccc}
\alpha & 1 & 1\\
\alpha^2 & 2\alpha & 2\alpha \\
1 & \alpha^2 & -\alpha
\end{array}
\right|$ =
$(\alpha - 1)\left|
\begin{array}{ccc}
\alpha & 1 & 0\\
\alpha^2 & 2\alpha & 0 \\
1 & \alpha^2 & -\alpha - \alpha^2
\end{array}
\right|$ = $-\alpha^3(\alpha - 1) (1 + \alpha)$
However, now I haven't got a clue on solving the system of linear equations... It's got to do with the fact that the equations look like the determinant I calculated before, but I don't know how to connect those two.
Thanks in advance for any help. (:
|
Let me first illustrate an alternate approach. You're looking at $$\left[\begin{array}{ccc}
\alpha & 1 & \alpha^2\\
1 & \alpha & 1\\
1 & \alpha^2 & 2\alpha\\
1 & 1 & \alpha^2
\end{array}\right]\left[\begin{array}{c} x_1\\ x_2\\ x_3\end{array}\right]=\left[\begin{array}{c} -\alpha\\ 1\\ 2\alpha\\ -\alpha\end{array}\right].$$ We can use row reduction on the augmented matrix $$\left[\begin{array}{ccc|c}
\alpha & 1 & \alpha^2 & -\alpha\\
1 & \alpha & 1 & 1\\
1 & \alpha^2 & 2\alpha & 2\alpha\\
1 & 1 & \alpha^2 & -\alpha
\end{array}\right].$$ In particular, for the system to be solvable, it is necessary and sufficient that none of the rows in the reduced matrix is all $0$'s except for in the last column. Subtract the bottom row from the other rows, yielding $$\left[\begin{array}{ccc|c}
\alpha-1 & 0 & 0 & 0\\
0 & \alpha-1 & 1-\alpha^2 & 1+\alpha\\
0 & \alpha^2-1 & 2\alpha-\alpha^2 & 3\alpha\\
1 & 1 & \alpha^2 & -\alpha
\end{array}\right].$$
It's clear then that if $\alpha=1$, the second row has all $0$s except in the last column, so $\alpha=1$ doesn't give us a solvable system. Suppose that $\alpha\neq 1$, multiply the top row by $\frac1{\alpha-1}$, and subtract the new top row from the bottom row, giving us $$\left[\begin{array}{ccc|c}
1 & 0 & 0 & 0\\
0 & \alpha-1 & 1-\alpha^2 & 1+\alpha\\
0 & \alpha^2-1 & 2\alpha-\alpha^2 & 3\alpha\\
0 & 1 & \alpha^2 & -\alpha
\end{array}\right].$$
Swap the second and fourth rows and add the new second row to the last two rows, giving us $$\left[\begin{array}{ccc|c}
1 & 0 & 0 & 0\\
0 & 1 & \alpha^2 & -\alpha\\
0 & \alpha^2 & 2\alpha & 2\alpha\\
0 & \alpha & 1 & 1
\end{array}\right],$$ whence subtracting $\alpha$ times the fourth row from the third row gives us $$\left[\begin{array}{ccc|c}
1 & 0 & 0 & 0\\
0 & 1 & \alpha^2 & -\alpha\\
0 & 0 & \alpha & \alpha\\
0 & \alpha & 1 & 1
\end{array}\right].$$
Note that $\alpha=0$ readily gives us the solution $x_1=x_2=0$, $x_3=1$. Assume that $\alpha\neq 0,$ multiply the third row by $\frac1\alpha$, subtract the new third row from the fourth row, and subtract $\alpha^2$ times the new third row from the second row, yielding $$\left[\begin{array}{ccc|c}
1 & 0 & 0 & 0\\
0 & 1 & 0 & -\alpha^2-\alpha\\
0 & 0 & 1 & 1\\
0 & \alpha & 0 & 0
\end{array}\right],$$ whence subtracting $\alpha$ times the second row from the fourth row yields $$\left[\begin{array}{ccc|c}
1 & 0 & 0 & 0\\
0 & 1 & 0 & -\alpha^2-\alpha\\
0 & 0 & 1 & 1\\
0 & 0 & 0 & \alpha^3+\alpha^2
\end{array}\right].$$ The bottom right entry has to be $0$, so since $\alpha\neq 0$ by assumption, we need $\alpha=-1$, giving us $$\left[\begin{array}{ccc|c}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 1\\
0 & 0 & 0 & 0
\end{array}\right].$$
Hence, the two values of $\alpha$ that give the system a solution are $\alpha=0$ and $\alpha=-1$, and in both cases, the system has solution $x_1=x_2=0$, $x_3=1$. (I think all my calculations are correct, but I'd recommend double-checking them.)
The major upside of the determinant approach is that it saves you time and effort, since you've already calculated it. If we assume that $\alpha$ is a constant that gives us a solution, then since we're dealing with $4$ equations in only $3$ variables, we have to have at least one of the rows in the reduced echelon form of the augmented matrix be all $0$s--we simply don't have enough degrees of freedom otherwise. The determinant of the reduced matrix will then be $0$, and since we obtain it by invertible row operations on the original matrix, then the determinant of the original matrix must also be $0$.
By your previous work, then, $-\alpha^3(\alpha-1)(1+\alpha)=0$, so the only possible values of $\alpha$ that can give us a solvable system are $\alpha=0$, $\alpha=-1$, and $\alpha=1$. We simply check the system in each case to see if it actually is solvable. If $\alpha=0$, we readily get $x_1=x_2=0$, $x_3=1$ as the unique solution; similarly for $\alpha=-1$. However, if we put $\alpha=1$, then the second equation becomes $$x_1+x_2+x_3=1,$$ but the fourth equation becomes $$x_1+x_2+x_3=-1,$$ so $\alpha=1$ does not give us a solvable system.
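As an independent cross-check of the conclusion, one can ask SymPy to solve the system for each candidate value of $\alpha$ (a small sketch):

    import sympy as sp

    a, x1, x2, x3 = sp.symbols('alpha x1 x2 x3')
    eqs = [a*x1 + x2 + a**2*x3 + a,
           x1 + a*x2 + x3 - 1,
           x1 + a**2*x2 + 2*a*x3 - 2*a,
           x1 + x2 + a**2*x3 + a]
    for val in (0, 1, -1):
        sols = sp.solve([e.subs(a, val) for e in eqs], [x1, x2, x3], dict=True)
        print(val, sols)   # solvable (with x1=x2=0, x3=1) only for alpha = 0 and alpha = -1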
|
Solve $2a + 5b = 20$ Is this equation solvable? It seems like you should be able to get a right number!
If this is solvable can you tell me step by step on how you solved it.
$$\begin{align}
{2a + 5b} & = {20}
\end{align}$$
My thinking process:
$$\begin{align}
{2a + 5b} & = {20} & {2a + 5b} & = {20} \\
{0a + 5b} & = {20} & {a + 0b} & = {20} \\
{0a + b} & = {4} & {a + 0b} & = {10} \\
{0a + b} & = {4/2} & {a + 0b} & = {10/2} \\
{0a + b} & = {2} & {a + 0b} & = {5} \\
\end{align}$$
The problem comes out to equal:
$$\begin{align}
{2(5) + 5(2)} & = {20} \\
{10 + 10} & = {20} \\
{20} & = {20}
\end{align}$$
Since there are two different variables, could it be that it can't be solved with the right answer, but only "an answer"?
What do you guys think?
|
Generally one can use the Extended Euclidean algorithm, but that's overkill here. First note that since $\rm\,2a+5b = 20\:$ we see $\rm\,b\,$ is even, say $\rm\:b = 2n,\:$ hence dividing by $\,2\,$ yields $\rm\:a = 10-5n.$
Remark $\ $ The solution $\rm\:(a,b) = (10-5n,2n) = (10,0) + (-5,2)\,n\:$ is the (obvious) particular solution $(10,0)\,$ summed with the general solution $\rm\,(-5,2)\,n\,$ of the associated homogeneous equation $\rm\,2a+5b = 0,\:$ i.e. the general form of a solution of a nonhomogeneous linear equation.
|
Combinatorics: When To Use Different Counting Techniques I am studying combinatorics, and at the moment I am having trouble with the logic behind more complicated counting problems. Given the following list of counting techniques, in which cases should they be used (ideally with a simple, related example):
*
*Repeated multiplication (such as $10 \times 9\times 8\times 7$, but not down to $1$)
*Addition
*Exponents
*Combination of the above ($2^6 + 2^5 + 2^4 + 2^3 + 2^2 + 2^1 + 2^0$)
*Factorials
*Permutations
*Combinations
*A case like this: $2^{10} \times \left({6 \choose 2} + {6 \choose 1} + {6 \choose 0}\right)$
*A case like this: $13 \times {4 \choose 3} \times {4 \choose 2} \times 12$
*A case like this: $13 \times {4 \choose 3} \times {4 \choose 2} \times {4 \choose 1}$
Sorry for the crash list of questions, but I am not clear on these issues, especially not good when I have a test in a few days!
Thank you for your time!
|
Let me address some of the more general techniques on your list, since the specific ones just appear to be combinations of the general ones.
Repeated Multiplication: Also called "falling factorial", use this technique when you are choosing items from a list where order matters. For example, if you have ten flowers and you want to plant three of them in a row (where you count different orderings of the flowers), you can do this in $10 \cdot 9 \cdot 8$ ways.
Addition: Use this to combine the results of disjoint cases. For example, if you can have three different cakes or four different kinds of ice cream (but not both), then you have $3 + 4$ choices of dessert.
Exponents: As with multiplication, but the number of choices does not decrease. For example, if you had ample supply of each of your ten kinds of flowers, you could plant $10 \cdot 10 \cdot 10$ different ways (because you can reuse the same kind of flower).
Factorials/Permutations: As with the first example, except you use all ten flowers rather than just three.
Combinations: Use this when you want to select a group of items from a larger group, but order does not matter. For example, if you have five different marbles and want to grab three of them to put in your pocket (so the order in which you choose them does not matter), this can be done in $\binom{5}{3}$ ways.
|
$G$ finite group, $H \trianglelefteq G$, $\vert H \vert = p$ prime, show $G = HC_G(a)$ $a \in H$ Let $G$ be a finite group. $H \trianglelefteq G$ with $\vert H \vert = p$ the smallest prime dividing $\vert G \vert$. Show $G = HC_G(a)$ with $e \neq a \in H$. $C_G(a)$ is the Centralizer of $a$ in $G$.
To start it off, I know $HC_G(a)\leq G$ by normality of $H$ and subgroup property of $C_G(a)$. So I made the observation that
$\begin{align*} \vert HC_G(a) \vert &= \frac{\vert H \vert \vert C_G(a) \vert}{\vert H \cap C_G(a) \vert}\\&=\frac{\vert H \vert \vert C_G(a) \vert}{\vert C_H(a)\vert}\end{align*}$
But, from here on I never reach the result I'm looking for. Any help would be greatly appreciated!
Note: I posted a similar question earlier, except that one had the index of $H$ being prime, this has the order of $H$ being prime: Link
|
Since $N_G(H)/C_G(H)$ injects in $Aut(H)\cong C_{p-1}$ and $p$ is the smallest prime dividing $|G|$, it follows that $N_G(H)=C_G(H)$. But $H$ is normal, so $G=N_G(H)$ and we conclude that $H \subseteq Z(G)$. In particular $G=C_G(a)$ for every $a \in H$.
|
Strict Inequality for Fatou Given $f_n(x)=(n+1)x^n; x\in [0,1]$
I want to show $\int_{[0,1]}f<\liminf\int_{[0,1]}f_n$, where $f_n$ converges pointwise to $f$ almost everywhere on $[0,1]$.
I have found that $\liminf\int f_n = \int f +\liminf\int|f-f_n|$, but I'm not sure how to use this, and I don't even know what $f_n$ converges to here. Can someone hint me in the right direction?
|
HINT
Consider $a \in [0,1)$. The main crux is to compute $$\lim_{n \to \infty} (n+1)a^n$$
To compute the limit note that $a = \dfrac1{1+b}$ for some $b > 0$.
Hence, $$a^n = \dfrac1{(1+b)^n} < \dfrac1{\dfrac{n(n-1)}2 b^2}\,\,\,\,\,\,\,\,\, \text{(Why? Hint: Binomial theorem)}$$ Can you now compute $\lim_{n \to \infty} (n+1)a^n$?
HINT: $$\lim_{n \to \infty} (n+1)a^n < \lim_{n \to \infty} \dfrac{2n+2}{n(n-1) b^2} = ?$$
|
Prove that $A=\left(\begin{array}{ccc}1&2&3\\4&5&6\\7&8&9\end{array}\right)$ is not invertible $$A=\left(\begin{array}{ccc}1&2&3\\4&5&6\\7&8&9\end{array}\right)$$
I don't know how to start. Will be grateful for a clue.
Edit: Matrix ranks and Det have not yet been presented in the material.
|
Note that $L_3-L_2=L_2-L_1$. What does that imply about the rank of $A$?
|
Finding common terms of two arithmetic sequences using Extended Euclidean algorithm I have a problem which could be simplified as: there are two arithmetic sequences, a and b. Those can be written as
a=a1+m*d1
b=b1+n*d2
I need to find the lowest term, appearing in both sequences. It is possible to do by brute force, but that approach is not good enough. I was given a hint - extended Euclidean algorithm can be used to solve this. However, after several frustrating hours I cannot figure it out.
For example:
a1 = 2
d1 = 15
b1 = 67
d2 = 80
That gives these sequences
2 17 32 47 62 77 ... 227 ...
67 147 227
^
Needed term
Could you somehow point me to how to use the algorithm for this problem? It's essentially finding the lowest common multiple, only with an "offset"
Thank you
|
Your equations:
$$a(m) = a_1 + m d_1$$
$$b(n) = b_1 + n d_2 $$
You want $a(m) = b(n)$ or $a(m)-b(n)=0$, so it may be written as
$$(a_1-b_1) + m(d_1) +n(-d_2) = 0$$ or $$ m(d_1) +n(-d_2) = (b_1-a_1) $$
You want $n$ and $m$ minimal, and that solves that. This is of course very similar to the EGCD, except that the desired value is $b_1 - a_1$ instead of the value $1$. EGCD solves it for the value of $1$ (or the gcd of $d_1$ and $d_2$).
This is actually a lattice reduction problem since you are interested in the minimum integer values for $m$ and $n$. That is an involved problem, but this is the lowest dimension and thus is relatively easy.
The method I have written in the past used a matrix form to solve it. I started with
the matrix $$\pmatrix{1 & 0 & d_1 \\ 0 & 1 & -d_2}$$
which represents the equations
\begin{align}
(1)d_1 + (0)(-d_2) = & \phantom{-} d_1 \\
(0)d_1 + (1)(-d_2) = & -d_2 \\
\end{align}
Each row of the matrix gives valid numbers for your equation, the first element in the row is the number for $m$, the second is for $n$ and the third is the value of your equation for those $m$ and $n$. Now if you combine the rows (such as row one equals row one minus row two) then you still get valid numbers. The goal then is to find a combination resulting in the desired value of $b_1 - a_1$ in the final column.
If you use EGCD you can start with this matrix:
$$\pmatrix{d_2 \over g & d_1 \over g & 0 \\ u & -v & g}$$
where $u$, $v$, and $g$ are the outputs of the EGCD (with $g$ the gcd) since EGCD gives you $ud_1 + vd_2 = g$
In your example you would have:
$$\pmatrix{16 & 3 & 0 \\ -5 & -1 & 5}$$
From there you can scale the last row to get $kg = (b_1 - a_1)$ for some integer $k$, then to find the minimal values use the first row to reduce, since the zero in the first row will not affect the result.
Again, for your example, $k=13$ which gives
$$\pmatrix{16 & 3 & 0 \\ -65 & -13 & 65}$$
Adding $5$ times the first row gives
$$\pmatrix{15 & 2 & 65}$$
Which represents the $16$th and $3$rd terms (count starts at $0$) respectively.
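If it helps, here is a minimal Python sketch of the same computation (the function names and the final shifting step are my own, not from the question): it solves $m\,d_1 - n\,d_2 = b_1 - a_1$ with the extended Euclidean algorithm and then shifts to the first term where both indices are non-negative.

    def egcd(a, b):
        # returns (g, x, y) with a*x + b*y = g = gcd(a, b)
        if b == 0:
            return a, 1, 0
        g, x, y = egcd(b, a % b)
        return g, y, x - (a // b) * y

    def first_common_term(a1, d1, b1, d2):
        # smallest v with v = a1 + m*d1 = b1 + n*d2 and m, n >= 0 (None if impossible)
        g, u, _ = egcd(d1, d2)
        if (b1 - a1) % g:
            return None
        lcm = d1 // g * d2
        # v = a1 + d1*t solves v = b1 (mod d2) when t = u*(b1-a1)/g (mod d2/g)
        t = (u * ((b1 - a1) // g)) % (d2 // g)
        v = a1 + d1 * t
        # shift v up by multiples of lcm until both indices are non-negative
        lo = max(a1, b1)
        if v < lo:
            v += ((lo - v + lcm - 1) // lcm) * lcm
        return v

    print(first_common_term(2, 15, 67, 80))  # prints 227 for the example above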
|
If $d^2|p^{11}$ where $p$ is a prime, explain why $p|\frac{p^{11}}{d^2}$. If $d^2|p^{11}$ where $p$ is a prime, explain why $p|\frac{p^{11}}{d^2}$.
I'm not sure how to prove this by way other than examples. I only tried a few examples, and from what I could tell $d=p^2$. Is that always the case?
Say $p=3$ and $d=9$. So, $9^2|3^{11}$ because $\frac{3^{11}}{9^2}=2187$. Therefore, $3|\frac{3^{11}}{9^2}$ because $\frac{2187}{3}=729$. Is proof by example satisfactory?
I know now that "proof by example" is only satisfactory if it "knocks out" every possibility.
The proof I am trying to form (thanks to the answers below):
Any factor of $p^{11}$ must be of the form $p^{k}$ for some $k$.
If the factor has to be a square, then it must then be of the form $p^{2k}$, because it must be an even power.
Now, we can show that $\rm\:p^{11}\! = c\,d^2\Rightarrow\:p\:|\:c\ (= \frac{p^{11}}{d^2})\:$ for some integer $c$.
I obviously see how it was achieved that $c=\frac{p^{11}}{d^2}$, but I don't see how what has been said shows that $p|\frac{p^{11}}{d^2}$.
|
Hint $\ $ It suffices to show $\rm\:p^{11}\! = c\,d^2\Rightarrow\:p\:|\:c\ (= p^{11}\!/d^2).\:$ We do so by comparing the parity of the exponents of $\rm\:p\:$ on both sides of the first equation. Let $\rm\:\nu(n) = $ the exponent of $\rm\,p\,$ in the unique prime factorization of $\rm\,n.\:$ By uniqueness $\rm\:\color{#C00}{\nu(m\,n) = \nu(m)+\nu(n)}\:$ for all integers $\rm\:m,n\ne 0.\:$ Thus
$$\rm \color{#C00}{applying\,\ \nu}\ \ to\ \ p^{11}\! =\, c\,d^2\ \Rightarrow\ 11 = \nu(c) + 2\, \nu(d)$$
Therefore $\rm\:\nu(c)\:$ is odd, hence $\rm\:\nu(c) \ge 1,\:$ i.e. $\rm\:p\mid c.\ \ $ QED
|
Net Present Worth Calculation (Economic Equivalence) I'm currently doing some work involving net present worth analyses, and I'm really struggling with calculations that involve interest and inflation, such as the question below. I feel that if anyone can set me on the right track, and once I've worked through the full method for doing one of these calculations, I should be able to do them all. Is there any chance that anyone may be able to guide me through the process of doing the question below, or give me any pointers?
Thanks very much in advance!
You win the lottery. The prize can either be awarded as USD1,000,000 paid out in full today, or yearly instalments paid out at the end of each of the next 10 years. The yearly instalments are USD100,000 at the end of the first year, increasing each subsequent year by USD5,000; in other words you get USD100,000 at the end of the first year, USD105,000 at the end of the second year, USD110,000 at the end of the third year, and so on.
After some economic research you determine that inflation is expected to be 5% for the next 5 years and 4% for the subsequent 5 years. You also discover that real interest rates are expected to be constant at 2.5% for the next 10 years.
Using net present worth analysis, which prize do you choose?
Further, given the inflation figures above, what will the real value of the prize of USD1,000,000 be at the end of 10 years?
|
Using the discount rate (interest rate), calculate the present value of each payment. Let A = 100,000 (first payment), a = 5,000 (annual increase), and r = 2.5% (interest rate) to simplify the formulae.
$$
PV_1 = A/(1+r) \\
PV_2 = (A+a)/(1+r)^2 \\
... \\
PV_{10} = (A+9a)/(1+r)^{10}
$$
The total present value (PV) is just the sum of individual payment present values. To compute the expected real value (V) apply the expected inflation rates ($i_1 \dots i_{10}$) on this present value, i.e. in year one
$V_1 = PV/(1+i_1), V_2=V_1/(1+i_2),$ etc. You can simplify, since $i_1 = \dots = i_5$, and write one equation. However, my suggestion is to use a spreadsheet and do the computations individually to better comprehend the subject.
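For what it's worth, here is a short Python sketch of that spreadsheet-style computation (it follows the simplification above of discounting the instalments at the constant real rate r; the variable names are mine):

    A, a, r = 100_000, 5_000, 0.025              # first payment, annual increase, real rate
    inflation = [0.05] * 5 + [0.04] * 5          # expected inflation for years 1..10

    # present value of the ten instalments, each discounted back to today at r
    pv = sum((A + k * a) / (1 + r) ** (k + 1) for k in range(10))
    print(round(pv))                             # compare with the 1,000,000 lump sum

    # real value of the 1,000,000 lump sum after ten years of expected inflation
    value = 1_000_000
    for i in inflation:
        value /= 1 + i
    print(round(value))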
|
Noetherian rings and modules A ring is left-Noetherian if every left ideal of R is finitely generated. Suppose R is a left-Noetherian ring and M is a finitely generated R-module. Show that M is a Noetherian R-module.
I'm thinking we want to proceed by contradiction and try to produce an infinitely generated ideal, but I'm having trouble coming up with what such an ideal will look like.
|
If $\{x_i\mid 1\leq i\leq n\}$ is a set of generators for $M$, then the obvious map $\phi$ from $R^n$ to $M$ is a surjection. Since $R^n$ is a Noetherian left module, so is $R^n/\ker(\phi)\cong M$.
|
Does $f$ being continuous and $f = 0$ a.e. imply $f=0$ everywhere? I want to prove that
"if $f: \mathbb{R}^n \to \mathbb{R}$ is continuous and satisfies $f=0$ almost everywhere (in the sense of Lebesgue measure), then, $f=0$ everywhere."
I am confident that the statement is true, but stuck with the proof. Also, is the statement true if the domain $\mathbb{R}^n$ is restricted to $\Omega \subseteq \mathbb{R}^n$ that contains a neighborhood of the origin "$0$"?
|
A set of measure zero has dense complement. So if a continuous function is zero on a set of full measure, it is identically zero.
|
Find a basis of $\ker T$ and $\dim (\mathrm{im}(T))$ of a linear map from polynomials to $\mathbb{R}^2$ $T: P_{2} \rightarrow \mathbb{R}^2: T(a + bx + cx^2) = (a-b,b-c)$
Find basis for $\ker T$ and $\dim(\mathrm{im}(T))$.
This is a problem in my textbook, and it looks strange to me because it goes from a polynomial space to $\mathbb{R}^2$. Before this, I had only seen maps $\mathbb{R}^m \rightarrow \mathbb{R}^n$.
Thanks :)
|
You can treat the polynomial space $P_n$ as a copy of $\mathbb{R}^{n+1}$.
That is, in your case the space is $P_2$ and it can be identified with $\mathbb{R}^3$.
The idea is simple: each coefficient of the polynomial becomes a coordinate of a vector in $\mathbb{R}^3$.
At the end of this identification you get isomorphic spaces/subspaces.
In your case:
The polynomial $a + bx + cx^2$ corresponds to the coordinate vector $(c, b, a)$.
I chose the coefficients from the highest degree to the lowest. That is important only to set some ground rules, so you will know how to convert your coordinate vector back into a polynomial if you want to do so. (You can choose any other order.)
Now, because these are isomorphic subspaces, you get the same kernel and the same image.
I hope it helps.
Guy
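For completeness, here is a sketch of the computation this identification suggests (assuming I am reading the map correctly): $T(a+bx+cx^2)=(0,0)$ iff $a-b=0$ and $b-c=0$, i.e. $a=b=c$. Hence
$$\ker T=\{a(1+x+x^2)\mid a\in\mathbb{R}\},$$
so $\{1+x+x^2\}$ is a basis of $\ker T$, and by the rank–nullity theorem $\dim(\mathrm{im}(T))=\dim P_2-\dim\ker T=3-1=2$, i.e. $T$ maps onto $\mathbb{R}^2$.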
|
"8 Dice arranged as a Cube" Face-Sum Problem I found this here:
Sum Problem
Given eight dice. Build a $2\times 2\times2$ cube, so that the sum of the points on each side is the same.
$\hskip2.7in$
Here is one of 20736 solutions with the sum 14.
You find more at the German magazine "Bild der Wissenschaft 3-1980".
Now I have three (Question 1 moved here) questions:
*
* Is $14$ the only possible face sum? At least, in the example given, it seems to be related to the fact that on every face two dice-pairs show up, having $n$ and $7-n$ pips. Is this necessary? Sufficient it is...
*How do they get $20736$? This is the dimension of the related group and factors to $2^8\times 3^4$, the number of group elements, right?
i. I can get $2^3$, by the following: In the example given, you can split along the $xy$ ($yz,zx$) plane and then interchange the $2$ blocks of $4$ dice. Wlog, mirroring at $xy$ commutes with $yz$ (both just invert the $z$ resp. $x$ coordinate, right), so we get $2^3$ group elements. $$ $$
ii. The factor $3$ looks related to rotations throught the diagonals. But without my role playing set at hand, I can't work that out. $$ $$
iii. Would rolling the overall die around an axis also count, since back and front always shows a "rotated" pattern? This would give six $90^\circ$-rotations and three $180^\circ$-rotations, $9=3^2$ in total.
$$ \\ $$
Where do the missing $2^5\times 3^2$ come from?
*Is the reference given, online available?
EDIT
And to not make
tehshrike sad
again,
here's the special question for $D4$:
What face sum is possible, so that the sum of the points on each side
is the same, when you pile up 4 D4's into a pyramid (plus the octahedron mentioned by Henning), and how many representations would such a pyramid
have?
Thanks
|
Regarding your reference request:
The site of the magazine offers many of their articles online starting from 1997, so you cannot obtain the 1980 edition online (although you can likely buy a used print version).
Most good libraries in German-speaking countries do have this magazine, so, depending on your country, you could go directly to the library, try to get an inter-library loan, or contact friends in German-speaking countries to scan the appropriate pages for you.
Of course, the article will be in German.
|
Show $W^{1,q}_0(-1,1)\subset C([-1,1])$ I need to show that the space $W^{1,q}_0(-1,1)$ is a subset of the space $C([-1,1])$. How can I do this?
|
If $q=+\infty$, and $\varphi_n$ are test functions such that $\lVert \varphi_n-u\rVert_{\infty}\to 0$, then we can find a set of measure $0$ such that $\sup_{x\in [-1,1]\setminus N}|\varphi_n(x)-u(x)|\to 0$, so $u$ is almost everywhere equal to a continuous function, and can be represented by it.
We assume $1\leqslant q<\infty$.
As $W_0^{1,q}(-1,1)$ consists of equivalence classes of functions and $C[-1,1]$ consists of functions, what we have to show is that each element of $W_0^{1,q}(-1,1)$ can be represented by a continuous function. First, for $\varphi$ a test function, we have
$$|\varphi(x)-\varphi(y)|\leqslant \left|\int_x^y|\varphi'(t)|\,dt\right|\leqslant |x-y|^{1-1/q}\lVert\varphi'\rVert_{L^q}.$$
Now, let $u\in W_0^{1,q}(-1,1)$. By definition, we can find a sequence $\{\varphi_k\}\subset D(-1,1)$ such that $\lim_{k\to+\infty}\lVert u-\varphi_k\rVert_q+\lVert u'-\varphi'_k\rVert_q=0.$
Up to a subsequence, we can assume that $\lim_{k\to+\infty}\varphi_k(x)=u(x)$ for almost every $x$. As for $k$ large enough,
$$|\varphi_k(x)-\varphi_k(y)|\leqslant |x-y|^{1-1/q}(\lVert u'\rVert_{L^q}+1),$$
we have for almost every $x, y\in (-1,1)$,
$$|u(x)-u(y)|\leqslant |x-y|^{1-1/q}(\lVert u'\rVert_{L^q}+1),$$
what we wanted (and even more, as $u$ is represented by a $\left(1-\frac 1q\right)$-Hölder continuous function, as noted by robjohn).
|
The degree of a polynomial which also has negative exponents. In theory, we define the degree of a polynomial as the highest exponent it holds.
However, when both negative and positive exponents are present in the function, I want to know on what basis we define the degree. Is the degree of such an expression defined by the highest magnitude of the available exponents?
For example in $x^{-4} + x^{3}$, is the degree $4$ or $3$?
|
For the sake of completeness, I would like to add that this generalization of polynomials is called a Laurent polynomial. This set is denoted $R[x,x^{-1}]$.
|
Leslie matrix stationary distribution Given a particular normalized Perron vector representing a discrete probability distribution, is it possible to derive some constraints on, or particular examples of, Leslie matrices having the given vector as their Perron vector?
There is a related question on math overflow.
|
I have very little knowledge about demography. Yet, if the Leslie matrices you talk about are the ones described in this Wikipedia page, it seems that for any given $v=(v_0,v_1,\ldots,v_{\omega - 1})^T$, a corresponding Leslie matrix exists if $v_0>0$ and $v_0\ge v_1\ge\ldots\ge v_{\omega - 1}\ge0$.
For such a vector $v$, let $v_j$ be the smallest nonzero entry (i.e. $j$ is the largest index such that $v_j>0$). Define
$$
s_i = \begin{cases}\frac{v_{i+1}}{v_i}&\ \textrm{ if } v_i>0,\\
\textrm{any number } \in[0,1]&\ \textrm{ otherwise}.\end{cases}
$$
and let $f=(f_0,f_1,\ldots,f_{\omega-1})^T$ be any entrywise nonnegative vector such that
$$
f_0 + \sum_{i=1}^{\omega-1}s_0s_1\ldots s_{i-1}f_i = 1.
$$
Then the Euler-Lotka equation
$$
f_0 + \sum_{i=1}^{\omega-1}\frac{s_0s_1\ldots s_{i-1}f_i}{\lambda^i} = \lambda$$
is satisfied and hence $v$, up to a normalizing factor, is the stable age distribution or Perron eigenvector of the Leslie matrix
$$
L = \begin{bmatrix}
f_0 & f_1 & f_2 & f_3 & \ldots &f_{\omega - 1} \\
s_0 & 0 & 0 & 0 & \ldots & 0\\
0 & s_1 & 0 & 0 & \ldots & 0\\
0 & 0 & s_2 & 0 & \ldots & 0\\
0 & 0 & 0 & \ddots & \ldots & 0\\
0 & 0 & 0 & \ldots & s_{\omega - 2} & 0
\end{bmatrix}.
$$
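A small numerical sketch of this construction (numpy; the particular $v$ and the fertility "shape" $w$ below are arbitrary choices of mine, rescaled so that the normalisation above holds with $\lambda = 1$):

    import numpy as np

    v = np.array([1.0, 0.8, 0.5, 0.2])        # decreasing, positive target vector
    s = v[1:] / v[:-1]                        # survival rates s_0, ..., s_{omega-2}

    w = np.array([0.3, 0.4, 0.2, 0.1])        # any nonnegative fertility "shape"
    f = w * v[0] / np.dot(w, v)               # rescale so that sum_i f_i v_i = v_0

    L = np.zeros((4, 4))
    L[0, :] = f                               # fertilities in the first row
    L[np.arange(1, 4), np.arange(3)] = s      # survival rates on the subdiagonal

    # L @ v equals v, so v is an eigenvector for eigenvalue 1 (the Perron root here, since v > 0
    # and L is irreducible for this choice of w)
    print(L @ v, v)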
|
Infinitely many primes p that are not congruent to 1 mod 5 Argue that there are infinitely many primes p that are not congruent to 1 modulo 5.
I find this confusing. Is this saying $p_n \not\equiv 1 \pmod{5}$?
To start off I tried some examples.
$3 \not\equiv 1 \pmod{5}$
$5 \not\equiv 1 \pmod{5}$
$7 \not\equiv 1 \pmod{5}$
$11 \equiv 1 \pmod{5}$
$13 \not\equiv 1 \pmod{5}$
$17 \not\equiv 1 \pmod{5}$...
If this is what the question is asking I've come to the conclusion that this is true. Either way, I've got no clue how to write this as a proof.
|
You can follow the Euclid proof that there are an infinite number of primes. Assume there are a finite number of primes not congruent to $1 \pmod 5$. Multiply them all except $2$ together to get $N \equiv 0 \pmod 5$. Consider the factors of $N+2$, which is odd and $\equiv 2 \pmod 5$. It cannot be divisible by any prime on the list, as it has remainder $2$ when divided by them. If it is prime, we have exhibited a prime $\not \equiv 1 \pmod 5$ that is not on the list. If it is not prime, it must have a factor that is $\not \equiv 1 \pmod 5$ because the product of primes $\equiv 1 \pmod 5$ is still $\equiv 1 \pmod 5$
|
Showing $f:\mathbb{R^2} \to \mathbb{R}$, $f(x, y) = x$ is continuous Let $(x_n)$ be a sequence in $\mathbb{R^2}$ and $c \in \mathbb{R^2}$.
To show $f$ is continuous we want to show if $(x_n) \to c$, $f(x) \to f(c)$.
As $(x_n) \to c$ we can take $B_\epsilon(c)$, $\epsilon > 0$ such that when $n \geq$ some $N$, $x_n \in B_\epsilon(c)$.
As $x_n \in B_\epsilon(c)$ this implies that $f(x_n) \in f(B_\epsilon(c))$.
This holds for all $\epsilon$, so as $\epsilon \to 0$ and $B_\epsilon(c)$ becomes infinitely small, we can always find $n \geq$ some $N$ such that $x_n \in B_\epsilon(c)$ and $f(x_n) \in f(B_\epsilon(c))$.
Hence as $\epsilon \to 0$, $(x_n)$ clearly converges to $c$ and $f(x_n)$ clearly converges to $f(c)$.
Does that look ok?
|
There's a bit of repetition when you say $x_n \in B_\epsilon(c) \implies f(x_n) \in f(B_\epsilon(c))$. While this is true as you define it, repeating it doesn't add to the proof. What you need to show is that the image of $B_\epsilon(c)$ is itself an open neighborhood of $f(c)$.
Another look, which uses uniform continuity...
For any open neighborhood $N_\delta(\mathbf{x})$ of $\mathbf{x} = (x_1,y_1) \in \Bbb R^2$ and $\mathbf{y} = (x_2,y_2) \in N_\delta(\mathbf{x})$ with the usual metric we have
$$\delta > d(\mathbf{x},\mathbf{y}) = \sqrt{(x_1-x_2)^2+(y_1-y_2)^2} \ge \sqrt{(x_1-x_2)^2} = |x_1-x_2| = d(f(\mathbf{x}),f(\mathbf{y})).$$
Thus, we can set $\delta = \varepsilon$ and obtain uniform continuity on any open subset of $\Bbb R^2$.
|
What does $ ( \nabla u) \circ \tau \cdot D \tau $ and $ \nabla u \cdot (D \tau_\gamma)^{-1} $ mean? To understand the question here
$\def\abs#1{\left|#1\right|}$
\begin{align*}
F(u_\gamma) &= F(u \circ \tau_\gamma^{-1})\\
&= \int_\Omega \abs{\nabla(u \circ \tau_\gamma^{-1})}^2\\
&= \int_\Omega \abs{(\nabla u) \circ \tau_\gamma^{-1} \cdot D\tau_\gamma^{-1}}^2\\
&= \int_{\tau_\gamma^{-1}\Omega} \abs{(\nabla u) \circ \tau_\gamma^{-1}\circ \tau_\gamma\cdot D\tau_\gamma^{-1}\circ \tau_\gamma}^2\abs{\det(D\tau_\gamma)}\\
&= \int_\Omega \abs{\nabla u\cdot (D\tau_\gamma)^{-1}}^2\abs{\det(D\tau_\gamma)}
\end{align*}
I know that by chain rule $ \cdots $ componentwise we have
$$ \partial_i ( u \circ \tau) = \sum_{j} (\partial_j u) \circ \tau \cdot \partial_i \tau_j. $$
Thus, $ \nabla ( u \circ\tau )= ( \nabla u) \circ \tau \cdot D \tau $. I'd like to understand this equality or this notation. I know that
\begin{equation}
\nabla u = (\partial_1 u, \partial_2 u, \cdots , \partial_n u)
\end{equation}
and I guess that
$$ D \tau = \left[
\begin{array}{cccc}
\partial_1 \tau_1 & \partial_2 \tau_1 & \cdots & \partial_n \tau_1\\
\partial_1 \tau_2 & \partial_2 \tau_2 & \cdots & \partial_n \tau_2\\
\vdots & \vdots & \ddots & \vdots\\
\partial_1 \tau_n & \partial_2 \tau_n & \cdots & \partial_n \tau_n\\
\end{array}
\right] $$
Then, what does $ ( \nabla u) \circ \tau \cdot D \tau$ mean? And what does $ \nabla u \cdot (D \tau_\gamma)^{-1} $ mean?
|
$\nabla ( u \circ\tau )= ( \nabla u) \circ \tau \cdot D \tau$. I'd like to understand this equality or this notation.
Think of the chain rule: the derivative of a composition is the product of derivatives. On the left, $u\circ \tau $ is composition (not a Hadamard product, as suggested in the other answer). On the right, we have a product of $( \nabla u) \circ \tau$ (which is a vector) and $D \tau$ (which is a matrix); this is the usual application of a matrix to a vector, except that the vector, being written as a row, appears to the left of the matrix. It is not necessary to use the dot here: $ (( \nabla u) \circ \tau ) D \tau$ would be better.
In the chain of computations in your question, the chain rule is applied to the composition of $u$ with $\tau_\gamma^{-1}$, which is why $\tau_\gamma^{-1}$ appears instead of $\tau$.
|
continued fraction expression for $\sqrt{2}$ in $\mathbb{Q_7}$ Hensel's lemma implies that $\sqrt{2}\in\mathbb{Q_7}$. Find a continued
fraction expression for $\sqrt{2}$ in $\mathbb{Q_7}$
|
There's a bit of a problem with defining continued fractions in the $p$-adics. The idea for finding continued fractions in $\Bbb Z$ is that we subtract an integer $m$ from $\sqrt{n}$ such that $\left|\sqrt{n} - m\right| < 1$. We can find such an $m$ because $\Bbb R$ has a generalized version of the division algorithm: for any $r\in\Bbb R$, we can find $k\in\Bbb Z$ such that $r = k + s$, where $\left|s\right| < 1$. This means that we can write
$$
\sqrt{n} = m + (\sqrt{n} - m) = m + \frac{1}{1/(\sqrt{n} - m)},
$$
and since $\left|\frac{1}{\sqrt{n}-m}\right| > 1$, we can repeat the process with this number, and so on, until we have written
$$
\sqrt{n} = m_1 + \cfrac{1}{m_2 + \cfrac{1}{m_3 + \dots}},
$$
where each $m_i\in\Bbb Z$. However, in $\Bbb Q_p$ there is no such division algorithm because of the ultrametric property: the norm $\left|\,\cdot\,\right|_p$ in $\Bbb Q_p$ satisfies the property that $\left|a - b\right|_p\leq\max\{\left|a\right|_p,\left|b\right|_p\}$, with equality if $\left|a\right|_p\neq\left|b\right|_p$. If $\sigma\in\Bbb Z_p$, then $\left|\sqrt{\sigma}\right|_p\leq 1$, as $\sqrt{\sigma}$ is a $p$-adic algebraic integer. Then we may find an integer $\rho$ such that $\left|\sqrt{\sigma} - \rho\right|_p < 1$, but there's a problem: if we write $\sqrt{\sigma} = \rho + \frac{1}{1/(\sqrt{\sigma} - \rho)}$, we have $\left|\frac{1}{\sqrt{\sigma} - \rho}\right|_p > 1$. As any element of $\Bbb Z_p$ has norm less than or equal to $1$, the ultrametric property of $\left|\,\cdot\,\right|_p$ tells us that we can never find a $p$-adic integer to subtract from $\frac{1}{\sqrt{\sigma} - \rho}$ that will make the difference have norm less than $1$. So we cannot write
$$
\sqrt{2} = a_1 + \cfrac{1}{a_2 + \cfrac{1}{a_3 + \dots}}
$$
with $a_i\in\Bbb Z_7$, as we can in $\Bbb R$. If you wished, you could remedy this by taking your $a_i\in\Bbb Q_7$, but since $\sqrt{2}\in\Bbb Q_7$ already, this is silly.
|
Proof of $\sum_{k=0}^n k \text{Pr}(X=k) = \sum^{n-1}_{k=0} \text{Pr}(X>k) -n \text{Pr}(X>n)$ $X$ is a random variable defined in $\mathbb N$. How can I prove that for all $n\in \mathbb N$?
*
*$ \text E(X) =\sum_{k=0}^n k \text{Pr}(X=k) = \sum^{n-1}_{k=0} \text{Pr}(X>k) -n \text{Pr}(X>n)$
*$\text E(X) =\sum_{k=0}^n k \text{Pr}(X=k)=\sum_{k\ge 0} \text{Pr}(X>k) $
|
For part $a)$, use Thomas' hint. You get
$$
\sum_{k=0}^{n}k(P(X>k-1)-P(X>k)).
$$
This develops as $P(X>0)-P(X>1)+2P(X>1)-2P(X>2)+3P(X>2)-3P(X>3)+\cdots+nP(X>n-1)-nP(X>n)$, which telescopes to $\sum\limits_{k=0}^{n-1}P(X>k)-nP(X>n)$.
for part $b)$:
In general, you have
$\mathbb{E}(X)=\sum\limits_{i=1}^\infty P(X\geq i).$
You can show this as follow:
$$
\sum\limits_{i=1}^\infty P(X\geq i) = \sum\limits_{i=1}^\infty \sum\limits_{j=i}^\infty P(X = j)
$$
Switch the order of summation gives
\begin{align}
\sum\limits_{i=1}^\infty P(X\geq i)&=\sum\limits_{j=1}^\infty \sum\limits_{i=1}^j P(X = j)\\
&=\sum\limits_{j=1}^\infty j\, P(X = j)\\
&=\mathbb{E}(X)
\end{align}
$$\sum\limits_{i=0}^{\infty}iP(X=i)=\sum\limits_{i=1}^\infty P(X\geq i)=\sum\limits_{i=0}^{\infty} P(X> i)$$
|
Expectation and Distribution Function? Consider $X$ as a random variable with distribution function $F(x)$. Also assume that $|E(X)| < \infty$. The goal is to show that for any constant $c$, we have:
$$\int_{-\infty}^{\infty} x (F(x + c) - F(x)) dx = cE(X) - c^2/2$$
Does anyone have any hint on how to approach this?
Thanks
|
Based on @DilipSarwate's suggestion, we can write the integral as a double integral: assuming $X$ has a density $f$, we have $F(x+c)-F(x)=\int_x^{x+c} f(y)\,dy$, so we can write
$$ \int_{-\infty}^{\infty} x \big(F(x + c) - F(x)\big)\, dx = \int_{-\infty}^{\infty} x \int_{x}^{x + c} f(y)\,dy\, dx = \int_{-\infty}^{\infty} \int_{x}^{x + c} xf(y)\,dy\, dx .$$
By Fubini's theorem, since we know that $\iint |x|\,f(y)\,dy\,dx < \infty$ over this region (why?), we can change the order of the integrals. Assuming we can show that the integral equals $\frac{1}{2}\big(E(X^2)- E((X - c)^2)\big)$, we get
$$\frac{1}{2}\Big(E(X^2) - E(X^2 + c^2 - 2Xc)\Big) = \frac{1}{2}\big(-c^2 + 2cE(X)\big) = cE(X) - \frac{c^2}{2}.$$
The missing part here is how to show that the integral $\int_{-\infty}^{\infty} \int_{x}^{x + c} xf(y)\,dy\, dx$ is equal to $\frac{1}{2}\big(E(X^2) - E(X^2 + c^2 - 2Xc)\big)$.
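For what it's worth, one way to fill in that step is to swap the order of integration directly (assume $c>0$; the case $c<0$ is handled the same way with the limits reversed). For fixed $y$, the condition $x<y<x+c$ is the same as $y-c<x<y$, so
$$\int_{-\infty}^{\infty} \int_{x}^{x + c} xf(y)\,dy\, dx = \int_{-\infty}^{\infty} f(y)\left(\int_{y-c}^{y} x\,dx\right)dy = \int_{-\infty}^{\infty} f(y)\,\frac{y^2-(y-c)^2}{2}\,dy,$$
and since $y^2-(y-c)^2 = 2cy-c^2$, the last integral is exactly $\frac{1}{2}\,E\big(X^2-(X-c)^2\big) = cE(X)-\frac{c^2}{2}$.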
|
Proof of sigma-additivity for measures I understand the proof for the subadditivity property of the outer measure (using the epsilon/2^n method), but I am not quite clear on the proof for the sigma-additivity property of measures. Most sources I have read either leave it an exercise or just state it outright.
From what I gather, they essentially try to show that a measure is also *super*additive (the reverse of subadditive) which means it must be sigma-additive. However, I'm a bit confused as to how they do this.
Would anyone be kind enough to give a simple proof about how this could be done?
|
As far as I'm aware, that's the standard approach. The method I was taught is here (Theorem A.9), and involves showing countable subadditivity, defining a new sigma algebra $\mathcal{M}_{0}$ on which countable additivity holds when the outer measure is restricted to $\mathcal{M}_{0}$ (by showing superadditivity), and then showing that $\mathcal{M}_{0}$ is just $\mathcal{M}$, the sigma algebra of measurable sets (the sigma algebra generated by null sets together with Borel sets).
The notes I linked to are based on the book by Franks, which might cover it in a bit more detail/give a slightly different approach if you aren't happy with the notes.
|
convergence tests for series $p_n=\frac{1\cdot 3\cdot 5\cdots(2n-1)}{2\cdot 4\cdot 6\cdots(2n)}$ Consider the sequence
$$p_n=\frac{1\cdot 3\cdot 5\cdots(2n-1)}{2\cdot 4\cdot 6\cdots(2n)}.$$
Prove that the sequence
$\left((n+1/2)p_n^2\right)_{n=1}^{\infty}$ is decreasing,
and that the series $\left(np_n^2\right)_{n=1}^{\infty}$ is convergent.
Any hints/ answers would be great.
I'm unsure where to begin.
|
Hint 1:
Show that $n+\frac12 \ \ge\ \left(n+\frac32\right)\left(\frac{2n+1}{2n+2}\right)^2$ for all positive integers $n$, then use induction to show that the first sequence is decreasing.
Hint 2:
Show that $\frac{1}{2n}\le p_n$, thus $\frac{1}{4n}\le np_n^2$, and therefore the second series diverges.
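For what it's worth, the inequality in Hint 1 reduces to simple algebra: since $p_{n+1}=p_n\cdot\frac{2n+1}{2n+2}$, the claim that $\left(n+\frac32\right)p_{n+1}^2\le\left(n+\frac12\right)p_n^2$ is equivalent to
$$\left(n+\tfrac32\right)\left(\frac{2n+1}{2n+2}\right)^2\le n+\tfrac12
\iff (2n+3)(2n+1)^2\le(2n+1)(2n+2)^2
\iff 4n^2+8n+3\le 4n^2+8n+4,$$
which clearly holds.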
|
Prove $\frac{1}{a^3} + \frac{1}{b^3} +\frac{1}{c^3} ≥ 3$ Prove inequality $$\frac{1}{a^3} + \frac{1}{b^3} +\frac{1}{c^3} ≥ 3$$ where $a+b+c=3abc$ and $a,b,c>0$
|
If $a, b, c >0$ then $a+b+c=3abc \ \Rightarrow \ \cfrac 1{ab} + \cfrac 1{bc}+ \cfrac 1{ca} = 3$
See that $2\left(\cfrac 1{a^3} +\cfrac 1{b^3}+ \cfrac 1{c^3}\right) +3 =\left(\cfrac 1{a^3} +\cfrac 1{b^3}+ 1\right)+\left(\cfrac 1{b^3} +\cfrac 1{c^3}+ 1\right)+\left(\cfrac 1{c^3} +\cfrac 1{a^3}+ 1\right) $
Use $AM-GM$ inequality on each of them and you've got your proof.
|
Show that the unit sphere is strictly convex I can prove with the triangle inequality that the unit sphere in $R^n$ is convex, but how to show that it is strictly convex?
|
To show that the closed unit ball $B$ is strictly convex we need to show that for any two points $x$ and $y$ in the boundary of $B$, the chord joining $x$ to $y$ meets the boundary only at the points $x$ and $y$.
Let $x,y \in \partial B$, then $||x|| = ||y|| = 1.$ Now consider the chord joining $x$ to $y$. We can parametrise this by $c(t) := (1-t)x + ty$. Notice that $c(0) = x$ and $c(1) = y$. We need to show that $c(t)$ only meets the boundary when $t=0$ or $t=1$. Well:
$$||c(t)||^2 = \langle c(t), c(t) \rangle = (1-t)^2\langle x, x \rangle + 2(1-t)t \, \langle x,y \rangle + t^2 \langle y,y \rangle$$
$$||c(t)||^2 = (1-t)^2||x||^2 + 2t(1-t)\langle x,y \rangle + t^2||y||^2$$
Since $x,y \in \partial B$ it follows that $||x|| = ||y|| = 1$ and so:
$$||c(t)||^2 = (1-t)^2 + 2t(1-t)\langle x,y \rangle + t^2 \, . $$
If $c(t)$ meets the boundary then $||c(t) || = 1$, so let's find the values of $t$ for which $||c(t)|| = 1$:
$$(1-t)^2 + 2t(1-t)\langle x,y \rangle + t^2 = 1 \iff 2t(1-t)(1-\langle x, y \rangle) = 0 \, .$$
Clearly $t=0$ and $t=1$ are solution since $c(0)$ and $c(1)$ lie on the boundary. Recall that $\langle x, y \rangle = \cos\theta$, where $\theta$ is the angle between vectors $\vec{0x}$ and $\vec{0y}$, because $||x|| = ||y|| = 1.$ Thus, provided $x \neq y$ we have $\langle x, y \rangle \neq 1$ and so the chord only meets the boundary at $c(0)$ and $c(1).$
|
Does this change in this monotonic function affect ranking? I need to make sure I can take out the one in $(1-e^{-x})e^{-y}$ without affecting a sort order based on this function. In other words, I need to prove the following:
$$
(1-e^{-x})e^{-y} \ \geq \ -e^{-x}e^{-y}\quad\forall\ \ x,y> 0
$$
If that is true, then I can take the logarithm of the right hand side above: $\log(-e^{-x}e^{-y}) = x + y$ and my life is soooo much easier...
|
Looking at the current version of your post, we have
$$(1-e^{-x})e^{-y}=e^{-y}-e^{-x}e^{-y}>-e^{-x}e^{-y},$$ since $e^t$ is positive for all real $t$. However, we can't take the logarithm of the right-hand side. It's negative.
Update:
The old version was $$(1-e^{-x_1})(1-e^{-x_2})e^{-x_3}=e^{-x_3}-e^{-x_1-x_3}-e^{-x_2-x_3}+e^{-x_1-x_2-x_3},$$ and you wanted to know if that was greater than or equal to $$(-e^{-x_1})(-e^{-x_2})e^{-x_3}=e^{-x_1-x_2-x_3}$$ for all positive $x_1,x_2,x_3$. Note, then, that the following are equivalent (bearing in mind the positivity of $e^t$):
$$(1-e^{-x_1})(1-e^{-x_2})e^{-x_3}\geq e^{-x_1-x_2-x_3}$$
$$e^{-x_3}-e^{-x_1-x_3}-e^{-x_2-x_3}\geq 0$$
$$e^{-x_3}(1-e^{-x_1}-e^{-x_2})\geq 0$$
$$1-e^{-x_1}-e^{-x_2}\geq 0$$
This need not hold. In fact, for any $x_2>0$, there is some $x_1>0$ such that the inequality fails to hold. (Let me know if you're interested in a proof of that fact.)
|
definition of morphism of ringed spaces I've recently started reading about sheaves and ringed spaces (at the moment, primarily on Wikipedia). Assuming I'm correctly understanding the definitions of the direct image functor and of morphisms of ringed spaces, a morphism from a ringed space $(X, O_X)$ to a ringed space $(Y, O_Y)$ is a continuous map $f\colon X\to Y$ along with a natural transformation $\varphi$ from $O_Y$ to $f_*O_X$.
Why does the definition require $\varphi$ to go from $O_Y$ to $f_*O_X$ as opposed to from $f_*O_X$ to $O_Y$?
|
Think about what it means to give a morphism from $\mathcal O_Y$ to $f_* \mathcal O_X$: it means that for every open set $V \subset Y$, there is a map
$$\mathcal O_Y(V) \to \mathcal O_X\bigl( f^{-1}(V) \bigr).$$
If you imagine that $\mathcal O_X$ and $\mathcal O_Y$ are supposed to be some sorts of "sheaves of functions" on $X$ and $Y$, then this accords perfectly with the intuition that a morphism of ringed spaces should allow us to "pull back" functions.
Indeed, in concrete examples (such as smooth manifolds equipped with the structure sheaf of smooth functions), the map $\mathcal O_Y \to f_* \mathcal O_X$ is just the pull-back map on functions.
A morphism in the opposite direction doesn't have any analogous intuitive interpretation, and doesn't accord with what happens in the key motivating examples.
|
Is $G/pG$ is a $p$-group? Jack is trying to prove:
Let $G$ be an abelian group, and $n\in\Bbb Z$. Denote $nG = \{ng \mid g\in G\}$.
(1) Show that $nG$ is a subgroup in $G$.
(2) Show that if $G$ is a finitely generated abelian group, and $p$ is prime,
then $G/pG$ is a $p$-group (a group whose order is a power of $p$).
I think $G/pG$ is a $p$-group because it is a direct sum of cyclic groups of order $p$.
But I cannot give a detailed proof.
|
$G/pG$ is a direct sum of a finite number of cyclic groups by the fundamental theorem of finitely generated abelian groups. Since every non-zero element of $G/pG$ has order $p$,
it is a direct sum of a finite number of cyclic groups of order $p$, and hence its order is a power of $p$.
|
Bernoulli Polynomials I am having a problem with this question. Can someone help me please.
We are defining a sequence of polynomials such that:
$P_0(x)=1; P_n'(x)=nP_{n-1}(x) \mbox{ and} \int_{0}^1P_n(x)dx=0$
I need to prove, by induction, that $P_n(x)$ is a polynomial in $x$ of degree $n$, the term of highest degree being $x^n$.
Thank you in advance
|
Recall that $\displaystyle \int x^n dx = \dfrac{x^{n+1}}{n+1}$. Hence, if $P_n(x)$ is a polynomial of degree $n$, then it is of the form $$P_n(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0$$ Since $P_{n+1}'(x) = (n+1) P_n(x)$, we have that $$P_{n+1}(x) = \int_{0}^x (n+1) P_n(y) dy + c$$
Hence, $$P_{n+1}(x) = \int_{0}^x (n+1) \left(a_n y^n + a_{n-1} y^{n-1} + \cdots + a_1 y + a_0\right) dy + c\\ = a_n x^{n+1} + a_{n-1} \left(\dfrac{n+1}n\right) x^n + \cdots + a_{1} \left(\dfrac{n+1}2\right) x^2 + a_{0} \left(\dfrac{n+1}1\right) x + c$$
Now finish it off with induction.
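If you want to see the first few polynomials this recursion produces, here is a short sympy sketch (the variable names are mine; it builds each $P_n$ from $P_{n-1}$ and then fixes the constant using the condition $\int_0^1 P_n=0$):

    import sympy as sp

    x, t = sp.symbols('x t')
    P = sp.Integer(1)                                  # P_0(x) = 1
    for n in range(1, 5):
        Q = n * sp.integrate(P.subs(x, t), (t, 0, x))  # Q' = n*P_{n-1}, with Q(0) = 0
        P = sp.expand(Q - sp.integrate(Q, (x, 0, 1)))  # shift so the integral over [0,1] is 0
        print(n, P)
    # 1: x - 1/2,  2: x**2 - x + 1/6,  3: x**3 - 3*x**2/2 + x/2, ...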
|
f, g continuous and equal at all rationals: does it follow that they are equal at all reals?
Possible Duplicate:
Can there be two distinct, continuous functions that are equal at all rationals?
Let $f, g:\Bbb{R}\to\Bbb{R}$ to be continuous functions such that $f(x)=g(x)\text{ for all rational numbers}\,x\in\Bbb{Q}$. Does it follow that $f(x)=g(x)$ for all real numbers $x$?
Here is what I think:
f continuous when $\lim\limits_{x\to x_0}f(x)=f(x_0)$ and $\lim\limits_{x\to x_0}g(x)=g(x_0)$
So it does not necessarily mean that $f(x)=g(x)$ when $x$ is irrational. So I can pick a function $f$ so that
$f(x) =
\begin{cases}
g(x) & \text{if $x\in\Bbb{Q}$} \\
x & \text{if $x\in\Bbb{R}\setminus \Bbb{Q}$} \\
\end{cases}
$
|
Hint: prove that if $\,h\,$ is a real continuous function s.t. $\,h(q)=0\,\,,\,\,\forall\,q\in\Bbb Q\,$ , then $\,h(x)=0\,\,,\,\,\forall\,x\in\Bbb R\,$
Further hint: For any $\,x\in\Bbb R\,$ , let $\,\{q_n\}\subset\Bbb Q\,$ be s.t. $\,q_n\xrightarrow [n\to\infty]{} x\,$ . What happens with
$$\lim_{n\to\infty}f(q_n)\,\,\,?$$
|
67 67 67 : use three 67's any way you like to get 11222 I need to get 11222 using three 67s (sixty-sevens).
We can use any operation in any manner.
67 67 67
Use three 67's in any way, but get 11222.
|
I'd guess this is a trick question around "using three, sixty-sevens" to get $11222$.
In particular, $67 + 67 = 134$, which is $11222$ in ternary (base $3$): $1\cdot 81 + 1\cdot 27 + 2\cdot 9 + 2\cdot 3 + 2\cdot 1 = 134$.
|
Which player is most likely to win when drawing cards? Two players each draw a single card, in turn, from a standard deck of 52 cards, without returning it to the deck. The winner is the player with the highest value on their card. If the value on both cards is equal then all cards are returned to the deck, the deck is shuffled and both players draw again with the same rules.
Given that the second player is drawing from a deck that has been modified by the first player removing their card, I'm wondering if either player is more likely to win than the other?
Does this change as the number of players increases?
|
If the second player were drawing from a full deck, he would draw each of the $13$ ranks with equal probability. The only change when he draws from the $51$-card deck that remains after the first player’s draw is that the rank of the first player’s card becomes less probable; the other $12$ ranks remain equally likely. Thus, given that the game is decided in this round, the second player’s probability of winning is the same for both decks, namely, $\frac12$. The only effect of not replacing the first player’s card is to decrease the expected number of tied rounds before the game is won or lost.
|
Find a polynomial only from its roots Given $\alpha,\,\beta,\,\gamma$ three roots of $g(x)\in\mathbb Q[x]$, a monic polynomial of degree $3$. We know that $\alpha+\beta+\gamma=0$, $\alpha^2+\beta^2+\gamma^2=2009$ and $\alpha\,\beta\,\gamma=456$. Is it possible to find the polynomial $g(x)$ only from these?
I've been working with the degree of the extension $\mathbb Q \subseteq \mathbb Q(\alpha,\,\beta,\,\gamma)$. I've found that it must be $3$ because $g(x)$ is the irreducible polynomial of $\gamma$ over $\mathbb Q(\alpha,\,\beta)$. But there is something that doesn't hold; there must be some of these roots that are not algebraic or something. Maybe this approach is totally wrong. Is there anyone who knows how to deal with this problem?
|
The polynomial is $(x-a)(x-b)(x-c)$ with the roots being $a,b,c$. By saying "three roots" you imply all these are different. Note that when multiplied out and coefficients are collected you have three symmetric functions in the roots. For example the constant term is $-abc$, while the degree 2 coefficient is $-(a+b+c)$. The degree 1 coefficient is $ab+ac+bc$, which can be written as
$$\frac{(a+b+c)^2-(a^2+b^2+c^2)}{2}.$$
So it looks like you can get all the coefficients of the monic from the givens you have.
Note: Just saw copper.hat's remark, essentially saying what's in this answer. I'll leave it up for now in case the poser of the question needs it (or can even use it...)
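Concretely, with the numbers given (a sketch using the symmetric functions above): $e_1=\alpha+\beta+\gamma=0$, $e_2=\alpha\beta+\beta\gamma+\gamma\alpha=\frac{0^2-2009}{2}=-\frac{2009}{2}$, and $e_3=\alpha\beta\gamma=456$, so
$$g(x)=x^3-e_1x^2+e_2x-e_3=x^3-\frac{2009}{2}\,x-456.$$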
|
Simple linear recursion $x_n=\frac{x_{n-1}}{a}+\frac{b}{a}$ with $a>1, b>0$ and $x_0>0$
I tried to solve it using a generating function, but it does not work because of the $\frac{b}{a}$ term, so maybe you have an idea.
|
Hint: Let $x_n=y_n+c$, where we will choose $c$ later. Then
$$y_n+c=\frac{y_{n-1}+c}{a}+\frac{b}{a}.$$
Now can you choose $c$ so that the recurrence for the $y$'s has no pesky constant term?
Remark: There is a fancier version of the above trick. Our recurrence (if $b\ne 0$) is not homogeneous. To solve it, we find the general solution of the homogeneous recurrence obtained by removing the $b/a$ term, and add to it some fixed particular solution of the non-homogeneous recurrence. In this case it is easy to find such a particular solution. Look for a constant solution.
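Carrying the hint through (a sketch): with $c=\frac{b}{a-1}$ (legitimate since $a>1$), the recurrence for $y_n=x_n-c$ becomes $y_n=\frac{y_{n-1}}{a}$, so
$$x_n=\frac{b}{a-1}+\left(x_0-\frac{b}{a-1}\right)\frac{1}{a^n}\ \xrightarrow[n\to\infty]{}\ \frac{b}{a-1}.$$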
|
List of interesting integrals for early calculus students I am teaching Calc 1 right now and I want to give my students more interesting examples of integrals. By interesting, I mean ones that are challenging, not as straightforward (though not extremely challenging like Putnam problems or anything). For example, they have to do a $u$-substitution, but what to pick for $u$ isn't as easy to figure out as it is usually. Or, several options for $u$ work so maybe they can pick one that works but they learn that there's not just one way to do everything.
So far we have covered trig functions, logarithmic functions, and exponential functions, but not inverse trig functions (though we will get to this soon so those would be fine too). We have covered $u$-substitution. Things like integration by parts, trig substitution, and partial fractions and all that are covered in Calc 2 where I teach. So, I really don't care much about those right now. I welcome integrals over those topics as answers, as they may be useful to others looking at this question, but I am hoping for integrals that are of interest to my students this semester.
|
I remember having fun with integrating some step functions, for example:
$$\int_{0}^{2} \lfloor x \rfloor - 2 \left\lfloor \frac{x}{2} \right\rfloor \,\mathrm{d}x.$$
My professor for calculus III liked to make us compute piecewise functions, so it would force us to use the Riemann sum definition of the integral.
|
Finding a certain integral basis for a quadratic extension This is a problem in the first chapter of Dino Lorenzini's book on arithmetic geometry. Let $A$ be a PID with field of fractions $K$ and $L/K$ a quadratic extension (no separability assumption). Let $B$ be the integral closure of $A$ in $L$. Now assuming that $B$ is a f.g. $A$-module, then the problem asks to show that $B=A[b]$ for some $b\in B$.
Obviously, we know that $B$ is free of rank $2$ as an $A$-module, so there must be some integral basis $\{ b_1,b_2\}$. However, I don't see how one of these can be assumed to be $1$. Similarly, any $A$-submodule, including ideals of $B$, must be generated by at most two elements. Other things I've tried is using the fact that $B$ must have dimension $1$. However, I've failed to see how any of this could be applied.
|
Consider the quotient $B/A$. What can you say about it?
|
Convergence properties of a moment generating function for a random variable without a finite upper bound. I'm stuck on a homework problem which requires that I prove the following:
Say $X$ is a random variable without a finite upper bound (that is, $F_X(x) < 1$ for all $x \in \mathbb{R}$). Let $M_X(s)$ denote the moment-generating function of $X$, so that:
$$M_X(s) = \mathbb{E}[e^{sX}]$$
then how can I show that
$$\lim_{s\rightarrow\infty} \frac{\log(M_X(s))}{s} = \infty$$
|
Consider the limit when $s\to+\infty$ of the inequality
$$
s^{-1}\log M_X(s)\geqslant x+s^{-1}\log(1-F_X(x)).
$$
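For what it's worth, the inequality itself is a standard Chernoff-type bound: for any fixed $x$ and $s>0$,
$$M_X(s)=\mathbb{E}[e^{sX}]\geqslant\mathbb{E}\big[e^{sX}\mathbf 1_{\{X>x\}}\big]\geqslant e^{sx}\Pr(X>x)=e^{sx}(1-F_X(x)),$$
and taking logarithms and dividing by $s$ gives the displayed inequality. Since $F_X(x)<1$ for every $x$, the second term on the right tends to $0$ as $s\to+\infty$, so the limit inferior of $s^{-1}\log M_X(s)$ is at least $x$ for every $x$.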
|
Inscribed and Escribed Squares Assume a circle of diameter $d$. Inscribe a square $A$ centred in the circle with its diagonal equal to the diameter of the circle. Now escribe a square $B$ with the sides equal to the diameter of the circle. Show how to obtain the ratio of the area of square $A$ to the area of square $B$.
|
This can be done by a computation. The outer square $B$ has area $d^2$. Let the side of the inner square $A$ be $s$. Then by the Pythagorean theorem, $s^2+s^2=d^2$. But $s^2$ is the area of the inner square, and we are finished.
But there is a neater way! Rotate the inner square $A$ about the centre of the circle, until the corners of the inner square are the midpoints of the sides of the outer square. (Sorry that I cannot draw a picture: I hope these words are enough for you to do it.)
Now draw the two diagonals of the inner square. As a check on the correctness of your picture, the diagonals of the inner square are parallel to the sides of the outer square. We have divided the outer square into $8$ congruent isosceles right triangles. And the inner square is made up of $4$ of these triangles. So the outer square $B$ has twice the area of the inner square.
|
Extra $100 after borrowing and shopping I took \$1000 from my friend James and \$500 from Bond.
While walking to the shops I lost \$1000 so now I only have \$500.
I did some shopping, spending \$300 so now I have \$200 left.
I gave \$100 back to James and \$100 back to Bond.
Now my liabilities are \$900 for James and \$400 for Bond, so my total liabilities are \$1300.
Total liabilities + Shopping = \$1300 + \$300 = \$1600, but I only borrowed \$1500.
Where did the extra \$100 come from?
|
That $\$1300$ already includes the $\$300$, along with the $\$1000$ you lost--that was your net loss of money for the day--you don't need to add it again.
The $\$900$ and the $\$400$ you still owe is just another way of reaching the same number.
|
Trace of a matrix to the $n$ Why is it that if $A(t), B(t)$ are two $n\times n$ complex matrices and $${d\over dt}A=AB-BA$$ then the trace of the matrix $A^n$ where $n\in \mathbb Z$ is a constant for all $t$?
|
Note that $\operatorname{Trace}(FE)=\operatorname{Trace}(EF)$ in general.
$n>0$ : $\operatorname{Trace}(A^n)' = n \operatorname{Trace}(A'(t) A^{n-1}) = n \operatorname{Trace}((AB - BA)A^{n-1}) = 0$
$n=0$ : $A^0 = I$, so we are done.
$n<0$ : Check that $(A^{-1})' = A^{-1} B - BA^{-1}$, so this case is reduced to the first case.
|
linear operator on a vector space V such that $T^2 -T +I=0$ Let $T$ be a linear operator on a vector space $V$ such that $T^2 -T +I=0$. Then
*
*T is one-one but not onto.
*T is onto but not one-one.
*T is invertible.
*no such T exists.
Could anyone give me just a hint?
|
$$
T^2-T+I=0 \iff T(I-T)=I=(I-T)T,
$$
i.e. $T$ is invertible and $T^{-1}=I-T$. In particular $T$ is injective and surjective.
|
Prove the transcendence of the number $e$ How to prove that the number $e=2.718281...$ is a transcendental number? The truth is I have no idea how to do it.
If you can recommend a book or reference on this topic, thank you.
Are there many proofs of the transcendence of $e$?
I'd like to read several proofs of the transcendence of $e$.
|
You might be interested in the Lindemann-Weierstrass theorem, which is useful for proving the transcendence of numbers, e.g., $\pi$ and $e$. If you read further, you'll see that the transcendence of both $\pi$ and $e$ are direct "corollaries" of the Lindemann-Weierstrass theorem.
Indeed, $e^x$ is transcendental if $x$ is algebraic and $x \neq 0\,$ (by the Lindemann–Weierstrass theorem).
A sketch of a (much) more elementary proof is given here.
|
Find the kernel of a linear transformation of $P_2$ to $P_1$ For some reason, this particular problem is throwing me off:
Find the kernel of the linear transformation:
$T: P_2 \rightarrow P_1$ $T(a_0+a_1x+a_2x^2)=a_1+2a_2x$
Since the kernel is the set of all vectors in $V$ that satisfy $T(\vec{v})=\vec{0}$, it's obvious that $a_0$ can be any real number. What matters, if I understand correctly, is that $a_1$ and $a_2$ should equal 0 in order to satisfy the zero vector (i.e. $0+2(0)x$).
Granted that what I stated is correct, why would my book say that the $\ker(T)=\{a_0: a_0 \; \text{is real}\}$?
Yes, $a_0$ can be any real number, but what must $a_1$ or $a_2$ equal? I don't see it specified within the set. Perhaps it's implied - I'm not sure.
Let me add some more detail:
Again, if I understand correctly, I could make a system of equations as such:
$a_1 = 0$
$2a_2 = 0$
From that I can translate it into a matrix and find that $a_1$ and $a_2$ really does equal zero.
|
Your argument is totally correct. Your book means that $\ker(T)=\{a_0+0\cdot x+0\cdot x^2|a_0\in \mathbb{R}\}$, i.e. $a_1=0$ and $a_2=0$, which is the same as you proved.
|
An inequality for $W^{k,p}$ norms Let $u \in W_0^{2,p}(\Omega)$, for $\Omega$ a bounded subset of $\mathbb R^n$. I am trying to obtain the bound
$$\|Du\|_p \leq \epsilon \|D^2 u\|_p + C_\epsilon \|u\|_p$$
for any $\epsilon > 0$ (here $C_\epsilon$ is a constant that depends on $\epsilon$, and $\|.\|_p$ is the $L^p$ norm). I tried deducing this from the Poincare inequality, but that does not seem to get me anywhere. I also tried proving the one dimensional case first, but was no more able to do that than the $L^p$ case. Any suggestions for how to proceed with this problem?
|
Such inequalities appear all over the place in PDE theory. They all can be seen as instances of Ehrling's lemma. Here, you have
$$ (W^{2,p}_0(\Omega), ||\;||_3) \hookrightarrow (W^{1,p}_0(\Omega), ||\;||_2) \hookrightarrow (L^p(\Omega), ||\;||_1) $$
where
$$ ||u||_3 = ||D^2u||_p, ||u||_2 = ||Du||_p, ||u||_1 = ||u||_p. $$
The first inclusion is compact, the second continuous and hence from Ehrling's lemma you have for any $\epsilon > 0$ a constant $C(\epsilon) > 0$ such that
$$ ||u||_2 \leq \epsilon ||u||_3 + C(\epsilon)||u||_1. $$
The fact that $||\;||_2$ is an equivalent norm for the Sobolev space $W^{1,p}_0(\Omega)$ is the Poincaré inequality. The fact that $||\;||_3$ is an equivalent norm for the Sobolev space $W^{2,p}_0(\Omega)$ can itself be seen as an application of Ehrling's lemma together with the Poincaré inequality.
|
Prove $3^{2n+1} + 2^{n+2}$ is divisible by $7$ for all $n\ge0$ Expanding the equation out gives
$(3^{2n}\times3)+(2^n\times2^2) \equiv 0\pmod{7}$
Is this correct? I'm a little hazy on my index laws.
Not sure if this is what I need to do? Am I on the right track?
|
Note that
$$3^{2n+1} = 3^{2n} \cdot 3^1 = 3 \cdot 9^n$$ and $$2^{n+2} = 4 \cdot 2^n$$
Note that $9^{3k} \equiv 1 \pmod{7}$ and $2^{3k} \equiv 1 \pmod{7}$.
If $n \equiv 0 \pmod{3}$, then $$3 \cdot 9^n + 4 \cdot 2^n \equiv (3+4) \pmod{7} \equiv 0 \pmod{7}$$
If $n \equiv 1 \pmod{3}$, then $$3 \cdot 9^n + 4 \cdot 2^n \equiv (3 \cdot 9 + 4 \cdot 2) \pmod{7} \equiv 35 \pmod{7} \equiv 0 \pmod{7}$$
If $n \equiv 2 \pmod{3}$, then $$3 \cdot 9^n + 4 \cdot 2^n \equiv (3 \cdot 9^2 + 4 \cdot 2^2) \pmod{7} \equiv 259 \pmod{7} \equiv 0 \pmod{7}$$
EDIT
What you have written can be generalized a bit. In general, $$(x^2 + x + 1) \vert \left((x+1)^{2n+1} + x^{n+2} \right)$$
The case you are interested in is when $x=2$.
The proof follows immediately from the factor theorem. Note that $\omega$ and $\omega^2$ are roots of $(x^2 + x + 1)$.
If we let $f(x) = (x+1)^{2n+1} + x^{n+2}$, then $$f(\omega) = (\omega+1)^{2n+1} + \omega^{n+2} = (-\omega^2)^{2n+1} + \omega^{n+2} = \omega^{4n} (-\omega^2) + \omega^{n+2} = \omega^{n+2} \left( 1 - \omega^{3n}\right) = 0$$
Similarly, $$f(\omega^2) = (\omega^2+1)^{2n+1} + \omega^{2(n+2)} = (-\omega)^{2n+1} + \omega^{2n+4} = -\omega^{2n+1} + \omega^{2n+1} \omega^3 = -\omega^{2n+1} + \omega^{2n+1} = 0$$
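As a quick sanity check (not a proof), the original claim is easy to verify numerically for small $n$ in Python:

    print(all((3**(2*n + 1) + 2**(n + 2)) % 7 == 0 for n in range(100)))  # prints True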
|
infinitely many primes p which are not congruent to $-1$ modulo $19$. While trying to answer a question, I discovered one that I felt to be remarkably similar. The question I found is 'Argue that there are infinitely many primes $p$ that are not congruent to $1$ modulo $5$.' I believe this has been proven. (A brief summary of this proof follows.)
Following the Euclid Proof that there are an infinite number of primes.
First, Assume that there are a finite number of primes not congruent to $1 \pmod 5$.
I then multiply them all except $2$ together to get $N \equiv 0 \pmod 5$.
Considering the factors of $N+2$, which is odd and $\equiv 2 \pmod 5$.
It cannot be divisible by any prime on the list, as it has remainder $2$ when divided by them.
If it is prime, we have exhibited a prime $\not \equiv 1 \pmod 5$ that is not on the list.
If it is not prime, it must have a factor that is $\not \equiv 1 \pmod 5$.
This is because the product of primes $\equiv 1 \pmod 5$ is still $\equiv 1 \pmod 5$.
I can't take credit for much of any of the above proof, because nearly all of it came from Ross Millikan's answer (http://math.stackexchange.com/questions/231534/infinitely-many-primes-p-that-are-not-congruent-to-1-mod-5). Either way, I'm trying to use this proof to answer the following question. I'm having a very difficult time doing so.
My question:
I wish to prove that there are infinitely many primes p which are not congruent to $-1$ modulo $19$.
|
Let $p_1,p_2,\dots,p_n$ be any collection of odd primes, and let $N=19p_1p_2\cdots p_n+2$. A prime divisor of $N$ cannot be one of the $p_i$. And $N$ has at least one prime divisor which is not congruent to $-1$ modulo $19$, else we would have $N\equiv \pm 1\pmod{19}$.
Remark: Not congruent is generally far easier to deal with than congruent.
|
Prime $p$ with $p^2+8$ prime I need to prove that there is only one prime number $p$ such that $p^2+8$ is prime, and to find that prime.
Anyway, I just guessed and the answer is 3 but how do I prove that?
|
Any number can be written as $6c,6c\pm1,6c\pm2=2(3c\pm1),6c+3=3(2c+1)$
Clearly, $6c,6c\pm2,6c+3$ can not be prime for $c\ge 1$
Any prime $>3$ can be written as $6a\pm 1$ where $a\ge 1$
So, $p^2+8=(6a\pm 1)^2+8=3(12a^2\pm4a+3)$.
Then $p^2+8>3$ is divisible by $3$, hence is not prime.
So, the only prime is $3$.
Any number$(p)$ not divisible by $3,$ can be written as $3b\pm1$
Now, $(3b\pm1)^2+(3c-1)=3(3b^2\pm2b+c)$.
Then , $p^2+(3c-1)$ is divisible by 3
and $p^2+(3c-1)>3$ if $p>3$ and $c\ge1$,hence not prime.
The necessary condition for $p^2+(3c-1)$ to be prime is $3\mid p$
$\implies$ if $p^2+(3c-1)$ is prime, $3\mid p$.
If $p$ needs to be prime, $p=3$, here $c=3$
|
Which kind product of non-zero number non-zero cardinal numbers yields zero? Let $I$ be a non-empty set. $\kappa_i$ is non-zero cardinal number for all $i \in I$.
If without AC, then $\prod_{i \in I}\kappa_i=0$ seems can be true(despite I still cannot believe it).
But what property should $I$ and $\kappa_i$ have?
Can $\prod_{i \in I}\kappa_i\ne 0$ be proved without AC when $I$ and each $\kappa_i$ all is well-orderable?
Conversely if $I$ is not well-orderable, or if some $\kappa_i$ is not well-orderable, is $\prod_{i \in I}\kappa_i=0$ definitely holds?
|
The question is based on presuppositions that might not be true in the absence of AC. Let's consider the simplest non-trivial case, the product of countably many copies of 2, that is, $\prod_{n\in\mathbb N}\kappa_n$ where $\kappa_n=2$ for all $n$. A reasonable way to define this product would be: Take a sequence of sets $A_n$ of the prescribed cardinalities $\kappa_n$, let $P$ be the set of all functions $f$ that assign to each $n\in\mathbb N$ an element $f(n)\in A_n$, and then define the product to be the cardinality of $P$. Unfortunately, the cardinality of $P$ can depend on the specific choice of the sets $A_n$.
On the one hand, it is consistent with ZF that there is a sequence of 2-element sets $A_n$ for which there is no choice function; that is, the $P$ defined above is empty. So these $A_n$'s lead to a value of 0 for the product.
On the other hand, we could take $A_n=\{0,1\}$ for all $n$, and then there are lots of elements in $P$, for example the constant function with value 0. Indeed, for any subset $X$ of $\mathbb N$, its characteristic function is a member of $P$. The resulting value for the product of countably many 2's would then be the cardinality of the continuum.
The moral of this story is that, in order for infinite products to be well-defined, one needs AC (or at least some special cases of it), even when the index set and all the factors in the product are well-orderable.
Digging into the problem a bit more deeply, one finds that the natural attempt to prove that "the cardinality of $P$ is independent of the choice of $A_n$'s" involves the following step. If we have a second choice, say the sets $B_n$, and we know that each $A_n$ has the same cardinality as the corresponding $B_n$, so we know that there are bijections $A_n\to B_n$ for all $n$, then we need to fix such bijections --- to choose a specific such bijection for each $n$. Then those chosen bijections can be used to define a bijection between the resulting two versions of $P$. But choosing those bijections is an application of the axiom of choice.
|
$\lim\limits_{x\to\infty}f(x)^{1/x}$ where $f(x)=\sum\limits_{k=0}^{\infty}\cfrac{x^{a_k}}{a_k!}$. Does the following limit exist? What is the value of it if it exists?
$$\lim\limits_{x\to\infty}f(x)^{1/x}$$
where $f(x)=\sum\limits_{k=0}^{\infty}\cfrac{x^{a_k}}{a_k!}$ and $\{a_k\}\subset\mathbb{N}$ satisfies $a_k<a_{k+1},k=0,1,\cdots$
$\bf{EDIT:}$ I'll show that $f(x)^{1/x}$ is not necessarily monotonically increasing for $x>0$.
Since $\lim\limits_{x\to+\infty}\big(x+2\big)^{1/x}=1$, for any $M>0$, we can find some $L > M$ such that $\big(2+L\big)^{1/L}<\sqrt{3}$.
It is easy to see that:
$$\sum_{k=N}^\infty \frac{x^k}{k!} = \frac{e^{\theta x}}{N!}x^N\leq \frac{x^N}{N!}e^x,\quad \theta\in(0,1)$$
Hence we can choose $N$ big enough such that for any $x\in[0,L]$
$$\sum_{k=N}^\infty \frac{x^k}{k!} \leq 1$$
Now, we take the exponent set $\{a_k\}=\{0,1\}\cup\{N,N+1,N+2,\dots\}$, i.e.
$$a_k=\begin{cases}k,& k=0,1\\ N+k-2,& k\geq 2\end{cases}$$
Then $f(x)= 1+x+\sum\limits_{k=N}^\infty\frac{x^k}{k!}$ and
$$f(2)^{1/2} \geq \sqrt{3} > (2+L)^{1/L} \geq f(L)^{1/L}$$
which shows that $f(x)^{1/x}$ is not monotonically increasing on $[2,L]$.
|
This limit does not exist in general. First observe that for any polynomial $P$ with non-negative coefficients we have
$$ \lim_{x\to\infty} P(x)^{1/x} = 1$$
and
$$ \lim_{x\to\infty} (e^x - P(x))^{1/x} = \lim_{x\to\infty} e (1-e^{-x}P(x))^{1/x} = e.$$
For ease of notation let
$$ e_n(x) = \sum_{k=n}^\infty \frac{x^k}{k!} = e^x - \sum_{k=0}^{n-1} \frac{x^k}{k!}.$$
Note that $\lim\limits_{n\to\infty} e_n(x) = 0$ for every fixed $x$.
Now define a power series of the form
$$ f(x) = \sum_{i=1}^\infty \sum_{k=m_i}^{n_i} \frac{x^k}{k!}, $$
along with partial sums
$$ P_j(x) = \sum_{i=1}^j \sum_{k=m_i}^{n_i} \frac{x^k}{k!}, $$
where $1 \le m_1 \le n_1 < m_2 \le n_2 < \ldots$ are chosen inductively below. We want to find increasing sequences $(x_i)$ and $(y_i)$ with $x_i \to \infty$, $y_i \to \infty$, and $f(x_i)^{1/x_i} \le \frac32$ and $f(y_i)^{1/y_i} \ge 2$, which obviously implies non-existence of $\lim\limits_{x\to\infty} f(x)^{1/x}$.
Having already defined $m_i$, $n_i$, $x_i$, $y_i$ for $i < j$, we know that $\lim\limits_{x\to\infty} P_{j-1}(x)^{1/x} = 1$, so there exists $x_{j}>j+x_{j-1}$ such that $P_{j-1}(x_j)^{1/x_j} \le \frac54$. Then there exists $m_{j}>n_{j-1}$ such that
$$(P_{j-1}(x_j)+e_{m_{j}}(x_{j}))^{1/x_j} \le \frac32,$$
which implies that whatever choices we make for $n_j$, $m_{j+1}$, etc., we always get
$$f(x_j)^{1/x_j} \le (P_{j-1}(x_j)+e_{m_{j}}(x_{j}))^{1/x_j} \le \frac32.$$
We also know that
$$\lim\limits_{x\to\infty} (P_{j-1}(x) + e_{m_j}(x))^{1/x} = e>2,$$
so there exists $y_j > x_j$ with
$$ (P_{j-1}(y_j) + e_{m_j}(y_j))^{1/y_j} >2. $$
Furthermore, there exists $n_j > m_j$ with
$$ (P_{j-1}(y_j) + e_{m_j}(y_j)- e_{n_j+1}(y_j))^{1/y_j} >2.$$
Lastly, this implies
$$ f(y_j)^{1/y_j} \ge P_j (y_j)^{1/y_j} = (P_{j-1}(y_j) + e_{m_j}(y_j)- e_{n_j+1}(y_j))^{1/y_j} >2.$$
By pushing this idea a little further, one can achieve $\liminf\limits_{x\to\infty} f(x)^{1/x} = 1$ and $\limsup\limits_{x\to\infty} f(x)^{1/x} = e$.
|
lagrange multiplier with interval constraint Given a function $g(x,y,z)$ we need to maximize it given constraints $a<x<b, a<y<b$.
If the constraints were given as a function $f(x,y,z)$ the following equation could be used.
$\nabla f(x,y,z) = \lambda \nabla g(x,y,z)$
How would I set up the initial equation given an interval constraint. Or how would I turn the interval constraint into a function constraint.
EDIT:: Added $a<y<b$ to the constraints.
|
Maximize $g$ ignoring the constraint. If the solution fulfills the constraint, you're done. If not, there's no maximum, since it would have to lie on the boundary, but the boundary is excluded by the constraint.
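A minimal Ruby sketch of this recipe (the objective $g$, its gradient, and the bounds below are hypothetical, chosen just for illustration):

    # hypothetical concave objective and its gradient
    g  = ->(x, y, z) { -(x - 2.0)**2 - (y - 3.0)**2 - z**2 }
    dg = ->(x, y, z) { [-2 * (x - 2.0), -2 * (y - 3.0), -2.0 * z] }

    # unconstrained maximization by plain gradient ascent
    pt = [0.0, 0.0, 0.0]
    2000.times { pt = pt.zip(dg.call(*pt)).map { |v, d| v + 0.05 * d } }

    # now check whether the maximizer happens to satisfy a < x < b, a < y < b
    a, b = 0.0, 5.0
    inside = (a < pt[0] && pt[0] < b) && (a < pt[1] && pt[1] < b)
    puts "g(pt) = #{g.call(*pt).round(6)}, maximizer = #{pt.map { |v| v.round(3) }}, in open box: #{inside}"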
|
Does taking $\nabla\times$ infinity times from an arbitrary vector exists? Is it possible to get the value of:
\begin{equation}
\underbrace{\left[\nabla\times\left[\nabla\times\left[\ldots\nabla\times\right.\right.\right.}_{\infty\text{-times taking curl operator}}\mathbf{V}\left.\left.\left.\right]\right]\ldots\right] = ?
\end{equation}
For any possible values of vector $\mathbf{V}$.
|
Two applications of $\nabla$ yield $\nabla \times (\nabla \times F) = -\nabla^2 F + \nabla(\nabla \cdot F)$. Why? Well, setting $F = \sum_i F_i e_i$ where $e_i$ is the standard cartesian frame of $\mathbb{R}^3$ allows the formula:
$$ (\nabla \times F)_k = \sum_{ij} \epsilon_{ijk} \partial_i F_j $$
Curling once more,
$$ [\nabla \times (\nabla \times F)]_m = \sum_{kl}\epsilon_{klm}\partial_k\sum_{ij} \epsilon_{ijl} \partial_i F_j $$
But the antisymmetric symbol is constant, so we can write this as
$$ [\nabla \times (\nabla \times F)]_m = \sum_{ijkl}\epsilon_{klm}\epsilon_{ijl} \partial_k \partial_i F_j $$
A beautiful identity states:
$$ \sum_{l}\epsilon_{klm}\epsilon_{ijl} = -\sum_{l}\epsilon_{kml}\epsilon_{ijl} =
-\delta_{ki}\delta_{mj}+\delta_{kj}\delta_{mi}$$
Hence,
$$ [\nabla \times (\nabla \times F)]_m = \sum_{ijk}(-\delta_{ki}\delta_{mj}+\delta_{kj}\delta_{mi}) \partial_k \partial_i F_j = \sum_i [-\partial_i^2F_m+\partial_m(\partial_iF_i)]$$
and the claim follows since $m$ is arbitrary. Now, let's try for 3:
$$ \nabla \times (\nabla \times (\nabla \times F)) = \nabla \times \bigl[-\nabla^2 F + \nabla(\nabla \cdot F)\bigr] =\nabla \times (-\nabla^2 F)$$
I used that the curl of a gradient is zero. The result need not vanish, though: take $F = \langle x^2z,0,0\rangle$ as an example. I suppose I could have gone straight for the four-fold curl by applying the identity twice.
$$ \nabla \times (\nabla \times (\nabla \times (\nabla \times F))) =?$$
Set $G = -\nabla^2 F $ since we know the gradient term vanishes,
$$ \nabla \times (\nabla \times G) = -\nabla^2 G + \nabla(\nabla \cdot G) = \nabla^2(\nabla^2 F)-\nabla [\nabla \cdot (\nabla^2 F)]$$
So, there's the four-folded curl. Well, I see no reason this terminates. I guess you can give it a name. I propose we call (ordered as the edit indicates) $ \nabla \times \nabla \times \cdots \times \nabla = \top$
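As a quick worked check of the identity $\nabla \times (\nabla \times F) = -\nabla^2 F + \nabla(\nabla \cdot F)$ on the example field mentioned above, $F=\langle x^2z,\,0,\,0\rangle$:
$$\nabla\times F=\langle 0,\;x^2,\;0\rangle,\qquad \nabla\times(\nabla\times F)=\langle 0,\;0,\;2x\rangle,$$
while on the other side $\nabla^2F=\langle 2z,0,0\rangle$ and $\nabla(\nabla\cdot F)=\nabla(2xz)=\langle 2z,0,2x\rangle$, so $-\nabla^2F+\nabla(\nabla\cdot F)=\langle 0,0,2x\rangle$ as well.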
|
Exponential operator on a Hilbert space Let $T$ be a linear operator from $H$ to itself. If we define $\exp(T)=\sum_{n=0}^\infty \frac{T^n}{n!}$ then how do we prove the function $f(\lambda)=exp(\lambda T)$ for $\lambda\in\mathbb{C}$ is differentiable on a Hilbert space?
|
$$\frac{f(\lambda)-f(0)}{\lambda}=\frac{\exp(\lambda T)-\operatorname{Id}}{\lambda} = \frac1\lambda\left( \sum_{n=1}^{\infty} \frac{\lambda^nT^n}{n!} \right) =
\sum_{n=1}^{\infty} \frac{\lambda^{n-1}T^n}{n!}.$$
For bounded $T$ this converges to $T$ in operator norm as $\lambda\to0$, since $\left\|\sum_{n=2}^{\infty} \frac{\lambda^{n-1}T^n}{n!}\right\|\le |\lambda|\,\|T\|^2 e^{|\lambda|\|T\|}$. Hence $f'(0)=T$, and differentiability at a general point $\lambda_0$ follows by writing $f(\lambda)=\exp(\lambda_0 T)\exp((\lambda-\lambda_0)T)$, which gives $f'(\lambda_0)=T\exp(\lambda_0 T)$.
|
Describing A Congruence Class The question is, "Give a description of each of the congruence classes modulo 6."
Well, I began saying that we have a relation, $R$, on the set $Z$, or, $R \subset Z \times Z$, where $x,y \in Z$. The relation would then be $R=\{(x,y)|x \equiv y~(mod~6)\}$
Then, $[n]_6 =\{x \in Z|~x \equiv n~(mod~6)\}$
$[n]_6=\{x \in Z|~6|(x-n)\}$
$[n]_6=\{x \in Z|~k(x-n)=6\}$, where $n \in Z$
As I looked over what I did, I started think that this would not describe all of the congruence classes on modulo 6. Also, what would I say k is? After despairing, I looked at the answer key, and they talked about there only being 6 equivalence classes. Why are there only six of them? It also says that you can describe equivalence classes as one set, how would I do that?
|
Let’s start with your correct description
$$[n]_6=\{x\in\Bbb Z:x\equiv n\!\!\!\pmod 6\}=\{x\in\Bbb Z:6\mid x-n\}$$
and actually calculate $[n]_6$ for some values of $n$.
*
*$[0]_6=\{x\in\Bbb Z:6\mid x-0\}=\{x\in\Bbb Z:6\mid x\}=\{x\in\Bbb Z:x=6k\text{ for some }k\in\Bbb Z\}$; this is just the set of all multiples of $6$, so $[0]_6=\{\dots,-12,-6,0,6,12,\dots\}$.
*$[1]_6=\{x\in\Bbb Z:6\mid x-1\}=\{x\in\Bbb Z:x-1=6k\text{ for some }k\in\Bbb Z\}$; this isn’t quite so nice, but we can rewrite it as $\{x\in\Bbb Z:x=6k+1\text{ for some }k\in\Bbb Z\}$, the set of integers that are one more than a multiple of $6$; these can be described as the integers that leave a remainder of $1$ when divided by $6$, and $[1]_6=\{\dots,-11,-5,1,7,13,\dots\}$.
More generally, if $x$ is any integer, we can write it as $x=6k+r$ for integers $k$ and $r$ such that $0\le r<6$: $r$ is the remainder when $x$ is divided by $6$. Then
$$\begin{align*}
[r]_6&=\{x\in\Bbb Z:6\mid x-r\}\\
&=\{x\in\Bbb Z:x-r=6k\text{ for some }k\in\Bbb Z\}\\
&=\{x\in\Bbb Z:x=6k+r\text{ for some }k\in\Bbb Z\}\\
&=\{6k+r:k\in\Bbb Z\}\;
\end{align*}$$
the set of all integers leaving a remainder of $r$ when divided by $6$. You know that the only possible remainders are $0,1,2,3,4,5$, so you know that this relation splits $\Bbb Z$ into exactly six equivalence classes, $[0]_6,[1]_6,[2]_6,[3]_6,[4]_6$, and $[5]_6$.
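If it helps to see them concretely, here is a small Ruby sketch that lists a sample of each of the six classes (the window $-12,\dots,12$ is arbitrary):

    (0..5).each do |r|
      sample = (-12..12).select { |x| (x - r) % 6 == 0 }   # Ruby's % is non-negative for a positive modulus
      puts "[#{r}]_6 contains ... #{sample.inspect} ..."
    end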
|
Domain, codomain, and range This question isn't typically associated with the level of math that I'm about to talk about, but I'm asking it because I'm also doing a separate math class where these terms are relevant. I just want to make sure I understand them because I think I may end up getting answers wrong when I'm over thinking things.
In my first level calculus class, we're now talking about critical values and monotonic functions. In one example, the prof showed us how to find the critical values of a function $$f(x)=\frac{x^2}{x-1}$$ He said we have to find the values where $f' (x)=0$ and where $f'(x)$ is undefined.$$f'(x)=\frac{x^2-2x}{(x-1)^2}$$
Clearly, $f'(x)$ is undefined at $x=1$, but he says that $x=1$ is not in the domain of $f(x)$, so therefore $x=1$ is not a critical value. Here's where my question comes in:
Isn't the "domain" of $f(x)$ $\mathbb{R}$, or $(-\infty,\infty)$? If my understanding of Domain, Codomain, and Range is correct, then wouldn't it be the "range" that excludes $x=1$?
|
$x=1$ is not in the domain because when $x=1$, $f(x)$ is undefined. And by definition, strictly speaking, a function defined on a domain $X$ maps every element in the domain to one and only one element in the codomain.
The domain and codomain of a function depend upon the set on which $f$ is defined and the set to which elements of the domain are being mapped; both are usually made explicit by including the notation $f: X \to Y$, e.g. along with defining $f(x)$ for $x\in X$.
$X$ is then taken to be the domain of $f$, and $Y$ the codomain of $f$, though you'll find that some people interchange the terms "codomain" and "range". So "range" is a bit ambiguous, depending on the text used and how it is defined, because "range" is sometimes defined to be the set of all values $y$ such that there is some $x \in X$ for which it is true that $f(x) = y$, i.e. $f[X]$.
One way to circumvent any ambiguity related to use of "range" to refer to $f[X]$ is to note that many prefer to define $f[X]$ to be the "image" of $X$ under $f$, often denoted by $\text{Im}f(x)$, with the understanding that $f[X] = \text{Im}f(x) \subseteq Y.\;\; f[X]=\text{Im}f(x) = Y$ when $f$ is onto $Y$.
|
A limit $\lim_{n\rightarrow \infty}\sum_{k=1}^{n}\frac{k\sin\frac{k\pi}{n}}{1+(\cos\frac{k\pi}{n})^2}$ maybe relate to riemann sum Find
$$\lim_{n\rightarrow \infty}\sum_{k=1}^{n}\frac{k\sin\frac{k\pi}{n}}{1+(\cos\frac{k\pi}{n})^2}$$
I think this maybe relate to Riemann sum. but I can't deal with $k$ before $\sin$
|
If there is no typo, then the answer is $\infty$. Indeed, let $m$ be any fixed positive integer and consider the final $m$ consecutive terms:
$$ \sum_{k=n-m}^{n-1} \frac{k \sin \frac{k \pi}{n}}{1 + \cos^2 \frac{k \pi}{n}}
= \sum_{k=1}^{m} \frac{(n-k) \sin \frac{k \pi}{n}}{1 + \cos^2 \frac{k \pi}{n}}. $$
As $n \to \infty$, the $k$-th term converges to $\frac{k \pi}{2}$, in view of the substitution $x = \frac{k\pi}{n}$ and the following limit
$$ \lim_{x\to 0}\frac{\sin x}{x(1 + \cos^2 x)} = \frac{1}{2}. $$
Thus
$$ \liminf_{n\to\infty} \sum_{k=1}^{n} \frac{k \sin \frac{k \pi}{n}}{1 + \cos^2 \frac{k \pi}{n}}
\geq \lim_{n\to\infty} \sum_{k=n-m}^{n-1} \frac{k \sin \frac{k \pi}{n}}{1 + \cos^2 \frac{k \pi}{n}}
= \sum_{k=1}^{m} \frac{k \pi}{2} = \frac{m(m+1)}{4}\pi. $$
Now letting $m \to \infty$, we obtain the desired result.
Indeed, we have
$$ \lim_{n\to\infty} \sum_{k=1}^{n} \frac{\frac{k}{n} \sin \frac{k \pi}{n}}{1 + \cos^2 \frac{k \pi}{n}} \frac{1}{n} = \frac{1}{\pi^2} \int_{0}^{\pi} \frac{x \sin x}{1 + \cos^2 x} \, dx = \frac{1}{4}. $$
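Both statements are easy to check numerically; a rough Ruby sketch (the raw sum blows up, while the normalized Riemann-type sum approaches $1/4$):

    def sums(n)
      raw = (1..n).sum { |k| k * Math.sin(k * Math::PI / n) / (1 + Math.cos(k * Math::PI / n)**2) }
      [raw, raw / n**2]   # dividing by n^2 turns k into k/n and supplies the extra 1/n
    end

    [100, 1_000, 10_000].each do |n|
      raw, normalized = sums(n)
      puts "n=#{n}: raw sum = #{raw.round(1)}, normalized = #{normalized.round(5)}"
    end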
|
Unimodular matrix definition? I'm a bit confused. Based on Wikipedia:
In mathematics, a unimodular matrix M is a square integer matrix
having determinant +1, 0 or −1. Equivalently, it is an integer matrix that is invertible over the integers.
So determinant could be +1, 0 or −1. But a matrix is invertible only if determinat is non-zero! In fact, from Wolfram:
A unimodular matrix is a real square matrix $A$ with determinant $\det(A) = \pm 1$.
Which is right answer?
|
Well spotted. In a case like this, it's a good idea to check the article's history (using the "View history" link at the top). In the present case, the error was introduced only two days ago by an anonymous user in this edit (which I just reverted).
|
What exactly does conjugation mean? In group theory, the mathematical definition for "conjugation" is:
$$
(g, h) \mapsto g h g^{-1}
$$
But what exactly does this mean, like in laymans terms?
|
The following is equivalent to the second paragraph of Marc van Leeuwen's answer, but I think it might help emphasize how natural conjugation really is. With notation as in Marc's answer, let me write $h'$ for the conjugate $ghg^{-1}$. Then $h'$ is obtained by shifting $h$ along $g$ in the sense that, whenever $h$ sends an element $x\in X$ to another element $y$, then $h'$ sends $g(x)$ to $g(y)$. If, as people sometimes do, one regards a function $h$ as a set of ordered pairs, then $h'$ is obtained by applying $g$ to both components in all those ordered pairs.
|
Prove that: Every $\sigma$-finite measure is semifinite. I am trying to prove every $\sigma$-finite measure is semifinite. This is what I have tried:
Definition of $\sigma$-finiteness: Let $(X,\mathcal{M},\mu)$ be a measure space. Then $\mu$ is $\sigma$-finite if $X = \bigcup_{i=1}^{\infty}E_i$ where $E_i \in \mathcal{M}$ and $\mu(E_i) < \infty$ for all $i \in \mathbb N$. (Real Analysis: Modern Techniques and Their Applications, 2nd Edition, by Folland.)
Definition of semifiniteness: $\mu$ is semifinite if for each $E \in \mathcal{M}$ with $\mu(E) = \infty$ there exists $F \subset E$ with $F \in \mathcal{M}$ and $0 < \mu(F) < \infty$.
So, take $A$ s.t. $\mu(A) = \infty$. We know $X \cap A = A$. Then, $A = A \cap \bigcup E_j$ hence $A = \bigcup E_j \cap A$. By subadditivity,
$$\infty = \mu(A) = \mu\left(\bigcup E_j \cap A\right) \leq \sum_1^{\infty} \mu(E_j \cap A) $$
OK, I am here. But I do not understand how to continue, or even this is a right approach. Thanks.
|
We can find $N$ such that $\mu\left(A\cap E_N\right)>0$ (otherwise, we would have for each $n$ that $\mu\left(A\cap\bigcup_{j=1}^nE_j\right)=0$ and $\mu\left(A\right)=\lim_{n\to +\infty}\mu\left(A\cap\bigcup_{j=1}^nE_j\right)$), and we have $\mu\left(A\cap E_N\right)\leqslant \mu\left( E_N\right)<+\infty$. Furthermore, $A \cap E_N\subset A$, hence the choice $F:=A\cap E_N$ does the job. This proves that $\mu$ is semi-finite.
The converse is not true: counting measure on the subsets of $[0,1]$ is semi-finite but not $\sigma$-finite.
|
Probability of winning in the lottery In the lottery there are 5 numbers rolled from 35 numbers and for 3 right quessed numbers there is a third price.
What's the propability that we will win the third price if we buy one ticket with 5 numbers.
|
There are $\binom{35}{5}$ equally likely ways to draw $5$ numbers from $35$. To match exactly $3$ of the $5$ numbers on your ticket, choose which $3$ of your numbers are drawn in $\binom{5}{3}=10$ ways, and the other $2$ drawn numbers from the $30$ numbers not on your ticket in $\binom{30}{2}=435$ ways. So there are $10\times435=4350$ favourable draws, and the probability of winning the third prize is $\dfrac{4350}{\binom{35}{5}}=\dfrac{4350}{324632}\approx 0.0134$.
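If you want to double-check the counting, a tiny Ruby sketch:

    def choose(n, k)
      (1..k).reduce(1) { |acc, i| acc * (n - k + i) / i }   # exact integer binomial coefficient
    end

    favourable = choose(5, 3) * choose(30, 2)   # 10 * 435 = 4350
    total      = choose(35, 5)                  # 324632
    puts "P(third prize) = #{favourable}/#{total} ≈ #{(favourable.to_f / total).round(5)}"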
|
Find an equation of the plane that passes through the point $(1,2,3)$, and cuts off the smallest volume in the first octant. *help needed please* Find an equation of the plane that passes through the point $(1,2,3)$, and cuts off the smallest volume in the first octant.
This is what i've done so far....
Let $a,b,c$ be the points where the plane cuts the $x,y,z$ axes. --> $\frac{x}{a} + \frac{y}{b} + \frac{z}{c} = 1$, where $a,b,c >0$.
I saw a solution for this question was to use Lagrange multiplier. The solution goes as follows...
The product $abc$ will be equal to $6$ times the volume of the tetrahedron $OABC$ (could someone explain to my why is this so?)
$f(a,b,c) = abc$ given the condition $\frac1a + \frac2b + \frac3c -1 = 0$
$f(a,b,c) = abc + \lambda (\frac1a + \frac2b + \frac3c -1)$
2nd query to the question...
$f_a = \lambda g_a \Rightarrow bc - \frac\lambda {a^2} ; a = \sqrt \frac \lambda {bc}
\\f_b = \lambda g_b \Rightarrow ac - \frac\lambda {b^2} ; b = \sqrt \frac {2\lambda}{ac}
\\f_c = \lambda g_c \Rightarrow ab - \frac\lambda {c^2} ; c = \sqrt \frac {3\lambda}{ab}$
using values of $a,b,c$ into $\frac1a+\frac2b+\frac3c = 1\Rightarrow \lambda =\frac{abc}{a+2b+3c}$.
May i know how should i proceed to solve the unknowns?
|
The volume of a pyramid (of any shaped base) is $\frac13A_bh$, where $A_b$ is the area of the base and $h$ is the height (perpendicular distance from the base to the opposing vertex). In this particular case, we're considering a triangular pyramid, with the right triangle $OAB$ as a base and opposing vertex $C$. The area of the base is $\frac12ab$, and the height is $c$, so the volume of the tetrahedron is $\frac16abc$--equivalently, $abc$ is $6$ times the volume of the tetrahedron.
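For the second query, here is a sketch of one way to finish the Lagrange computation: multiplying the three stationarity conditions $bc=\frac{\lambda}{a^2}$, $ac=\frac{2\lambda}{b^2}$, $ab=\frac{3\lambda}{c^2}$ by $a$, $b$, $c$ respectively gives
$$abc=\frac{\lambda}{a}=\frac{2\lambda}{b}=\frac{3\lambda}{c},$$
so $\frac1a=\frac2b=\frac3c$. Substituting into the constraint $\frac1a+\frac2b+\frac3c=1$ makes each of the three equal fractions $\frac13$, hence $a=3$, $b=6$, $c=9$. The minimal volume is $\frac{abc}{6}=27$, and the plane is $\frac x3+\frac y6+\frac z9=1$, which indeed passes through $(1,2,3)$.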
|
Numbers to the Power of Zero I have been a witness to many a discussion about numbers to the power of zero, but I have never really been sold on any claims or explanations. This is a three part question, the parts are as follows...
*
*Why does $n^{0}=1$ when $n\neq 0$? How does that get defined?
*What is $0^{0}$? Is it undefined? If so, why does it not equal $1$?
*What is the equation that defines exponents? I can easily write a small program to do it (see below), but what about in equation format?
I just want a little discussion about numbers to the power of zero, for some clarification.
Code for Exponents (Ruby):
    def find_exp(x, n)
      total = 1
      n.times { total *= x }
      total
    end
|
To define $x^0$, we cannot just use the definition of exponentiation as repeated multiplication; we have to look at how the laws of exponents work. For $x\neq 0$ we can write $$x^0 = x^{n - n} = \frac{x^n}{x^n}.$$ Now, if we let $x^n = a$ (so $a\neq 0$), this simplifies to $$\frac{x^n}{x^n} = \frac{a}{a} = 1.$$ So that's why $x^0 = 1$ for any nonzero number $x$: the argument divides by $x^n$, which is only allowed when $x\neq 0$.
Now, you were asking what $0^0$ means. If we try the same manipulation, $$0^0 = 0^{n - n} = \frac{0^n}{0^n} = \frac{0}{0},$$ and here it breaks down: every number $c$ satisfies $0\cdot c = 0$, so there is no single value that $\frac{0}{0}$ must equal. That is why $\frac00$, and with it $0^0$ viewed as a limit form, is called indeterminate; as an arithmetic expression it is left undefined.
NOTE: in many contexts (power series, polynomials, combinatorics) it is nevertheless convenient to adopt the convention $0^0 = 1$, but that is a convention, not a consequence of the rule $x^0 = 1$, which only applies for $x\neq 0$.
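As for the third question: for natural-number exponents the usual defining equations are the recursion $$x^0=1,\qquad x^{n+1}=x\cdot x^n,$$ which is exactly what the loop in the posted code computes; one then extends to negative and rational exponents via $x^{-n}=1/x^n$ and $x^{p/q}=\sqrt[q]{x^p}$ (for $x>0$).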
I hope this clarifies all your doubts.
|
The inverse of the adjacency matrix of an undirected cycle Is there an expression for $A^{-1}$, where $A_{n \times n}$ is the adjacency matrix of an undirected cycle $C_n$, in terms of $A$?
I want this expression because I want to compute $A^{-1}$ without actually inverting $A$. As one answer suggests, $A$ is non-invertible for certain values of $n$ (namely when $n$ is a multiple of $4$).
|
For $n=4$, the matrix in question is $$\pmatrix{0&1&0&1\cr1&0&1&0\cr0&1&0&1\cr1&0&1&0\cr}$$ which is patently noninvertible.
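For other values of $n$ one can check invertibility directly; here is a small Ruby sketch using the standard-library Matrix class, showing that the determinant vanishes exactly when $4\mid n$:

    require 'matrix'

    def cycle_adjacency(n)
      Matrix.build(n, n) { |i, j| ((i - j) % n == 1 || (j - i) % n == 1) ? 1 : 0 }
    end

    (3..10).each { |n| puts "n = #{n}: det = #{cycle_adjacency(n).determinant}" }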
|
$\epsilon$-$\delta$ proof involving differentiation in a defined neighborhood The problem states: Suppose $f'(b) = M$ and $M <0$. Find $\delta>0$ so that if $x\in (b-\delta, b)$, then $f(x) > f(b).$
This intuitively makes sense, but I am not exactly sure how to find $\delta$. I greatly appreciate any help I can receive.
|
Remember that the definition of derivative will imply that
$$
\lim_{x\to b^-}\frac{f(b)-f(x)}{b-x}=M.
$$
But, $M<0$ and $b-x>0$.
|
Finding a conformal map from the exterior of unit disk onto the exterior of an ellipse Find a conformal bijection $f(z):\mathbb{C}\setminus D\rightarrow \mathbb{C}\setminus E(a, b)$ where $E(a, b)$ is the ellipse $\{x + iy : \frac{x^2}{a}+\frac{y^2}{b}\leq1\}$
Here $D$ denotes the closed unit disk.
I hate to ask having not given the question a significant amount of thought, but due to illness I missed several classes, really need to catch up, and the text book we're using (Ahlfors) doesn't seem to have anything on the mapping of circles to ellipses except a discussion of level curves on pages 94-95. and I can't figure out how to get there through composition of the normal elementary maps (powers, exponential and logarithmic), and fractional linear transformations take circles into circles and are therefore useless for figuring this out.
I prefer hints, thanks in advance.
|
The conformal map $z\mapsto z+z^{-1}$ sends $\{|z|>R\}$ (with $R>1$) onto the exterior of the ellipse with semi-axes $A=R+R^{-1}$ and $B=R-R^{-1}$. Note that $A^2-B^2=4$. Thus, writing $a,b$ for the semi-axes of the target ellipse (with $a>b$), you should multiply them by a constant $C$ such that $(Ca)^2-(Cb)^2=4$, then solve $Ca=R+R^{-1}$ for $R$. Now compose: first $z\mapsto Rz$ takes the exterior of the unit disk onto $\{|z|>R\}$, then apply the map above, and the final step is $z\mapsto z/C$.
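If you want to convince yourself of the first claim numerically, here is a small Ruby sketch for the (arbitrary) choice $R=2$:

    r = 2.0
    a_semi = r + 1 / r   # 2.5
    b_semi = r - 1 / r   # 1.5

    max_err = (0...360).map do |deg|
      t = deg * Math::PI / 180
      z = Complex(r * Math.cos(t), r * Math.sin(t))           # point on |z| = R
      w = z + z ** -1                                         # Joukowski image
      ((w.real / a_semi)**2 + (w.imag / b_semi)**2 - 1).abs   # deviation from the ellipse equation
    end.max
    puts "max deviation: #{max_err}"   # ~ 1e-15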
|
Finding the limit of $x_1=0 , x_{n+1}=\frac{1}{1+x_n}$ I have had big problems finding the limit of the sequence $x_1=0 , x_{n+1}=\frac{1}{1+x_n}$. So far I've only succeeded in proving that for $n\geq2$: $x_n>0\Rightarrow x_{n+1}>0$
(Hopefully that much is correct: It is true for $n=2$, and for $x_{n+1}>0$ exactly when $\frac{1}{1+x_n}>0$, which leads to the inequality $x_n>-1$ which is true by the induction assumption that $x_n>0$.)
On everything else I failed to come up with answers that make sense (such as proving that $x_{n+1}>x_n \forall n\geq1$). I'm new to recursive sequences, so it all seems like a world behind the mirror right now. I'd appreciate any help, thanks!
|
It is obvious that $f:x\mapsto\frac1{1+x}$ is a monotonically decreasing continuous function $\mathbf R_{\geq0}\to\mathbf R_{\geq0}$, and it is easily computed that $\alpha=\frac{-1+\sqrt5}2\approx0.618$ is its only fixed point (solution of $f(x)=x$). So $f^2:x\mapsto f(f(x))$ is a monotonically increasing function that maps the interval $[0,\alpha)$ into itself. Since $x_3=f^2(x_1)=\frac12>0=x_1$ one now sees by induction that $(x_1,x_3,x_5,...)$ is an increasing sequence bounded by $\alpha$. It then has a limit, which must be a fixed point of $f^2$ (the function mapping each term of the sequence to the next term). One checks that on $ \mathbf R_{\geq0}$ the function $f^2$ has no other fixed point than the one of $f$, which is $\alpha$, so that must be value of the limit. The sequence $(x_2,x_4,x_6,...)$ is obtained by applying $f$ to $(x_1,x_3,x_5,...)$, so by continuity of $f$ it is also convergent, with limit $f(\alpha)=\alpha$. Then $\lim_{n\to\infty}x_n=\alpha$.
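Numerically the convergence is easy to see; a quick Ruby sketch:

    alpha = (Math.sqrt(5) - 1) / 2
    x = 0.0                           # x_1
    20.times { x = 1.0 / (1.0 + x) }  # x_{n+1} = 1 / (1 + x_n)
    puts "iterate: #{x}, fixed point: #{alpha}"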
|
Chain rule application I want to find $y'$ where $$ y = \frac{\frac{b}{a}}{1+ce^{-bt}}.$$ But I dont want to use quotient rule for differentiation. I want to use chain rule. My solution is:
Write $$y=\frac{b}{a}\cdot \frac{1}{1+ce^{-bt}}.$$ Then in $$\frac{1}{1+ce^{-bt}},$$ the inner function is $1+ce^{-bt}$ and the outer function is $$\frac{1}{1+ce^{-bt}}.$$
Hence using the chain rule we have
$$\left(\frac{1}{1+ce^{-bt}}\right)'= \frac{-1}{(1+ce^{-bt})^2} \cdot -bce^{-bt} = \frac{bce^{-bt}}{(1+ce^{-bt})^2}.$$
Thus $$y'= \frac{\frac{b^2}{a}ce^{-bt}}{(1+ce^{-bt})^2}.$$ Am I correct?
|
Yes.
The outer function is $s\mapsto \displaystyle\frac1s$ (you can call its variable anything). Also, '$\cdot\, -bce^{-bt}$' should be written with parentheses, as $\cdot\,(-bce^{-bt})$; otherwise it is correct.
|
question about normal subgroups If $N$ is a normal subgroup of $G$ and $M$ is a normal subgroup of $G$, and if $MN=\{mn|m\in M,n\in N\}$, prove that $MN$ is a subgroup of $G$ and that $MN$ is a normal subgroup of $G$.
The attempt:
I tried just starting by showing that $MN$ is a subgroup of $G$. I said let $a=m_1 n_1$ for some $m_1 \in M$ and $n_1 \in N $ and let $b=m_2 n_2$ for some $m_2 \in M$ and $n_2 \in N$, and we need to show $a*b^{-1}$ $\in MN$.
So I get $a*b^{-1}$=$m_1n_1n_2^{-1}m_2^{-1}=m_1n_3m_2^{-1}$ but then I don't know how to show that this is in $MN$. Tips on this or the next part of the problem?
|
$$m_1n_3m_2^{-1}=m_1m_2^{-1}(m_2n_3m_2^{-1})\in MN$$
General Lemma: if $\,M,N\,$ are subgroups of $\,G\,$, then $\,MN\,$ is a subgroup iff $\,MN=NM\,$.
In particular, if $\,M\triangleleft G\,$ or $\,N\triangleleft G\,$, then $\,MN=NM\,$.
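As for the second part of the problem: normality of $MN$ follows in one line, since for $g\in G$ and $mn\in MN$ we have $g(mn)g^{-1}=(gmg^{-1})(gng^{-1})\in MN$, using that both $M$ and $N$ are normal.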
|
Disjoint Equivalence Why do equivalence classes, on a particular set, have to be disjoint? What's the intuition behind it? I'd appreciate your help
Thank you!
|
The idea behind an equivalence relation is to generalize the notion of equality.
The idea behind the equality relation is that something is only equal to itself. So two distinct objects are not equal.
With equivalence relation, if so, we allow two things to be "almost equal" (namely equal where it count, and we don't care about their other distinctive properties). So the equivalence class of an object is the class of things which are "almost equal" to it. Clearly if $x$ and $y$ are almost equal they have to have the same class of almost equal objects; and similarly if they are not almost equal then it is impossible to have an object almost equal to both.
|
A question on $\liminf$ and $\limsup$ Let us take a sequence of functions $f_n(x)$. Then, when one writes $\sup_n f_n$, I understand what it means: supremum is equal to upper bound of the functions $f_n(x)$ at every $x$. Infimum is defined similarly. Then when one writes $\lim \sup f_n$, then I understand following: There are convergent subsequences of $f_n$, let us call them as $f_{n_k}$ and their limits as a set $E$. Then, $$\limsup f_n = \sup E$$
First question: Are these definitions right?
Second question: I do not understand the notion of convergent subsequences. What does it mean really? And why they are necessary at the first place, why they are important?
Thanks.
|
1. For any $x$ there are indices $n_{k(x)}$ such that
\begin{equation}
\limsup_{n\to\infty} f_n(x) = \lim_{k\to\infty} f_{n_{k(x)}}(x).
\end{equation}
2. The sequence $f_n(x)$ itself may fail to converge; but if it is bounded (or if we allow the limits $\pm\infty$), there are always subsequences $f_{n_k}(x)$ that do converge, and this is exactly why convergent subsequences matter here.
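For a concrete example of the second point: the sequence $a_n=(-1)^n$ does not converge, but the even-indexed subsequence converges to $1$ and the odd-indexed one to $-1$; accordingly $\limsup_n a_n=1$ and $\liminf_n a_n=-1$, and these are exactly the largest and smallest subsequential limits.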
|