Prove A is a symmetric matrix iff $A$ is square and $x^T Ay = (Ax)^T y$ Prove A is a symmetric matrix iff $A$ is square and $x^T Ay = (Ax)^T y$ (for all $x,y \in \mathbb{R}^n$). Going from the assumption that it is symmetric to the two conditions is fairly straightforward. However, going the other way, I am stuck at proving $A^T = A$ using the second condition, being stuck at $x^T (A-A^T)y=0$. Note T is for transpose!
First, let's remember the rule for transposing a product: $(AB)^T = B^T A^T$. Using this, we start with the given equation $ x^TAy=(Ax)^Ty$ and apply the rule above, which yields $x^TAy = x^TA^Ty$. Note that the equation is true for all $x, y$, so the only way that is possible is if $A = A^T$.
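To make the final step explicit (a small addition, not part of the original answer): take $x$ and $y$ to be standard basis vectors. Then $$e_i^T A e_j = e_i^T A^T e_j \quad\text{for all } i,j \;\Longrightarrow\; A_{ij} = (A^T)_{ij} = A_{ji},$$ so $A = A^T$.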
Congruence of invertible skew symmetric matrices I am asking for some hints to solve this exercise. Given an invertible skew-symmetric matrix $A$, show that there is an invertible matrix $R$ (so that $R^T$ is invertible too) such that $R^T A R = \begin{pmatrix} 0 & Id \\ -Id & 0 \end{pmatrix}$, meaning that this is a block matrix that has the identity matrix in two of the four blocks, the lower one with a negative sign. I am completely stuck!
Hint: a skew-symmetric matrix commutes with its transpose (it is normal), and so it is diagonalizable over $\mathbb{C}$. Your block matrix on the right hand side is also skew-symmetric, and so it is also diagonalizable.
Probability of retirement event This is elementary, but not clear to me. Suppose I know that the mean age of retirement is $\mu$ and the standard deviation $\sigma$. What is the probability that someone of age $x$, who has not yet retired, will retire sometime in the next year, i.e., between $x$ and $x+1$? Clearly for small $x$, far below the mean, the probability is near zero, and for large $x$, it approaches $1$. So it is a type of cumulative distribution... Thanks for your help!
Integrate the Gaussian probability density function from $x$ to $x+1$; since the person has not yet retired at age $x$, divide the result by the probability of retiring after age $x$, i.e. by $1-\Phi\!\left(\frac{x-\mu}{\sigma}\right)$.
Do the premises logically imply the conclusion? $$b\rightarrow a,\lnot c\rightarrow\lnot a\models\lnot(b\land \lnot c)$$ I have generated an 8 row truth table, separating it into $b\rightarrow a$, $\lnot c\rightarrow\lnot a$ and $\lnot (b\land\lnot c)$. I know that if it was $$\lnot c\rightarrow\lnot a\models\lnot(a \land \lnot c)$$ I would only need to check the right side with every value that makes the left side true to make sure the overall statement is true. How do I deal with more than one premise?
The second premise $\neg c\to\neg a$ implies that $a\to c$. The first premise $b\to a$ leads to $b\to a\to c$, which implies $\neg c\to\neg b$. The two last statements clearly prevent $b\land\neg c$ from being true.
Does there exist a matrix $P$ such that $P^n=M$ for a special matrix $M$? Consider the matrix $$ M=\left(\begin{matrix} 0&0&0&1\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ \end{matrix}\right). $$ Is there a matrix $P\in{\Bbb C}^{4\times 4}$ such that $P^n=M$ for some $n>1$? One obvious fact is that if such $P$ exists, then $P$ must be nilpotent. However, I have no idea how to deal with this problem. Furthermore, what if $M$ is an arbitrary nilpotent matrix with index $k$?
Since it has been proposed to treat this Question as a duplicate of the present one, it should be noted that there is a negative Answer to the "new" issue raised in Yeyeye's Answer here. As observed earlier, since $M$ is nilpotent, $P^n = M$ for $n\gt 1$ requires that $P$ is nilpotent. It follows that the minimal polynomial for $P$ must divide $x^d$, where $d$ is the degree of nilpotency (i.e. the least power such that $P^d=0$). Now the characteristic polynomial of $P$ has degree $4$ (because $P$ is $4\times 4$), and thus $d\le 4$. So the minimal polynomial of $P$ has the form $x^k$ for $1\lt k \le d \le 4$; in other words, $P^4=0$. Since $P^n = M \neq 0$, this forces $n\le 3$, so $n=3$ is the largest power one can hope to achieve.
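For the specific $M$ above, $n=3$ is in fact achieved by the nilpotent shift matrix. A quick numerical check (an added illustration, not part of the original answer):

```python
import numpy as np

# P is the 4x4 shift matrix with ones on the first superdiagonal
P = np.eye(4, k=1)
M = np.zeros((4, 4))
M[0, 3] = 1

print(np.array_equal(np.linalg.matrix_power(P, 3), M))                  # True: P^3 = M
print(np.array_equal(np.linalg.matrix_power(P, 4), np.zeros((4, 4))))   # True: P is nilpotent
```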
Let $W$ be a Wiener process and $X(t):=W^{2}(t)$ for $t\geq 0.$ Calculate $\operatorname{Cov}(X(s), X(t))$. Let $W$ be a Wiener process. If $X(t):=W^2(t)$ for $t\geq 0$, calculate $\operatorname{Cov}(X(s),X(t))$
Assume WLOG that $t\geqslant s$. The main task is to compute $E(X_sX_t)$. Write $X_t=(W_t-W_s+W_s)^2=(W_t-W_s)^2+2W_s(W_t-W_s)+W_s^2$, and use the fact that if $U$ and $V$ are independent random variables, so are $U^2$ and $V^2$.
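Carrying the hint through (added for reference, not part of the original answer), using $E W_s^2=s$, $E W_s^3=0$, $E W_s^4=3s^2$, and the independence of $W_t-W_s$ and $W_s$: $$E(X_sX_t)=E\big[W_s^2(W_t-W_s)^2\big]+2E\big[W_s^3(W_t-W_s)\big]+E\big[W_s^4\big]=s(t-s)+0+3s^2,$$ so $$\operatorname{Cov}(X_s,X_t)=s(t-s)+3s^2-st=2s^2=2\min(s,t)^2.$$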
stability and asymptotic stability: unstable but asymptotically convergent solution of nonlinear system Consider nonlinear systems of the form $X(t)'=F(X(t))$, where $F$ is smooth (assume $C^\infty$). Is it possible to construct such a system (preferably planar system) so that $X_0$ is an unstable equilibrium, but all nearby solution curves tend to $X_0$ as $t \to \infty$? If so, how? A conceptual construction is enough. What will the phase portrait look like?
Definition (Verhulst 1996): The equilibrium solution $X_c$ is called asymptotically stable if there exists a $\delta(t_0)$ such that \begin{equation} ||X_0 - X_c|| \le \delta(t_0) \implies \lim_{t\rightarrow\infty} ||X(t;t_0,X_0) - X_c|| =0. \end{equation} So no: you are essentially asking whether a system can be both asymptotically stable and unstable, which is a contradiction.

EDIT: Glendinning's text refers to this definition as quasi-asymptotic stability to differentiate it from 'direct' asymptotic stability (where solutions are both quasi-asymptotically stable and Lyapunov stable). As an example he presents this equation: \begin{equation} \dot{r} = r(1-r^2) \qquad \dot{\theta} = 2\sin^2\left(\frac{1}{2}\theta\right), \end{equation} which has an unstable critical point at $(0,0)$ and a quasi-asymptotically stable critical point at $(1,0)$. Points are attracted to the invariant circle at $r=1$. On this curve, the flow is semi-stable. If $\pi>\theta >0$ then the flow wraps around until it passes $\theta = \pi$. If $0> \theta>-\pi$ the trajectory falls into the critical point at $(1,0)$. This is an example of a homoclinic orbit.

In the example provided, the critical point at $(1,0)$ is a saddle, which, if all we considered was the linearised system, is unstable. Indeed it is Lyapunov unstable, as there is a trajectory that leaves any $\epsilon$-ball about $(1,0)$. However, in nonlinear systems the linearisation is often not sufficient, especially with saddles, which are often the cause of a whole bunch of interesting behaviours that aren't represented in the linearisation, like homoclinic and heteroclinic connections.

I think the confusion comes from how you/they define instability. The texts I refer to define a critical point as unstable if it is not Lyapunov stable or quasi-asymptotically stable. However, if all you go off is the linearisation, then I'd imagine the example is what you're after.
Where is the mistake in this proof? Suppose $a=b$. Multiplying by $a$ on both sides gives $a^2 = ab$. Then we subtract $b^2$ on both sides, and get $a^2-b^2 = ab-b^2$. Obviously, $(a-b)(a+b) = b(a-b)$, so dividing by $a - b$, we find $a+b = b$. Now, suppose $a=b=1$. Then $1=2$ :)
By a simple example: if $A = 0$, then $A \cdot 5 = A \cdot 7$, but dividing by $A$ would give $5 = 7$. So we may divide by $A$ only if $A$ is not equal to zero. In your argument $a-b = 0$, so the step that divides by $a-b$ is invalid.
what is the value of this trigonometric expression I want to find out the value of this expression $$\cos^2 48°-\sin^2 12°$$ Just hint the starting step. Is there any formula regarding $\cos^2 A-\sin^2 B$?
I've got a formula: $$\cos(A+B)\cos(A-B)=\cos^2A-\sin^2B$$ so from this formula the question is now easy: $$\cos^2 48°-\sin^2 12°=\cos 60°\cos 36°=\frac{1}{2}\cdot\frac{\sqrt{5}+1}{4}=\frac{\sqrt{5}+1}{8}$$
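A quick numerical sanity check (added here, not part of the original answer):

```python
import math

lhs = math.cos(math.radians(48))**2 - math.sin(math.radians(12))**2
rhs = (math.sqrt(5) + 1) / 8
print(abs(lhs - rhs) < 1e-12)  # True: both are ≈ 0.4045085
```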
Is the difference of two recursively enumerable sets, reducible to $K$? Is the difference of two recursively enumerable sets, reducible to $K$? $W_x/W_y=\{z|z \in W_x \& z \notin W_y\}$ $K=\{x|\Phi_x(x) \downarrow\}$ $W_x= \text{dom}(\Phi_x)$
No. Let $\omega$ denote the set of natural numbers. $K$ is c.e. but incomputable. If a set $A$ and its complement $\bar{A} = \omega - A$ are both c.e., then $A$ must be computable. Hence $\bar{K} = \omega - K$, the complement of $K$, is not a c.e. set. It is clear that $\omega$, the set of natural numbers, is c.e. (it is computable), and $K$ is c.e. Thus $\omega - K$ is a difference of two c.e. sets. But $\omega - K = \bar{K}$, which as mentioned above is not c.e. Suppose $\omega - K = \bar{K}$ were many-one reducible to $K$; in notation, $\bar{K} \leq_m K$. Since $K$ is c.e., $\bar{K}$ would have to be c.e. (see the little lemma below). This is a contradiction. Lemma: if $A \leq_m B$ ($A$ is many-one reducible to $B$) and $B$ is c.e., then $A$ is c.e. By definition of $A \leq_m B$, there is a (total) computable function $f$ such that $x \in A$ if and only if $f(x) \in B$. Since $B$ is c.e., $B = W_n$ for some $n$. Define a new partial computable function $\Psi(x) = \begin{cases} 0 & \quad \text{if }\Phi_n(f(x)) \downarrow \\ \uparrow & \quad \text{otherwise} \end{cases}$ Since $\Psi$ is partial computable, it has an index $p$; that is, $\Psi = \Phi_p$. Then $A = W_p$, since $x \in A$ if and only if $f(x) \in B$ if and only if $\Phi_n(f(x)) \downarrow$ if and only if $\Phi_p(x) \downarrow$. So $A$ is c.e.
Prove $L=\{ 1^n| n\hspace{2mm}\text{is a prime number} \}$ is not regular. Prove $L=\{ 1^n| n\hspace{2mm}\text{is a prime number} \}$ is not regular. It seems one should use one lemma: the Pumping Lemma.
In addition: Instead of the Pumping Lemma one can use the following fact: $L$ is regular iff it is a union of $\lambda$-classes for some left congruence $\lambda$ on the free monoid $A^*$ such that $|A^*/\lambda|<\infty$. Here $A=\{a\}$ (in your notation $a=1$), so $A^*$ is commutative and hence $\lambda$ is two-sided. The structure of the factor-monoid $A^*/\lambda =\langle {\bar a}\rangle$ is well-known -- it is defined by a relation ${\bar a}^{n+r}={\bar a^n}$. Therefore every $\lambda$-class is either a one-element set $\{a^k\}$ for $k< n$ or has the form $\{a^k,a^{k+r},a^{k+2r},\ldots\}$ for $k\ge n$. Since $L$ would have to contain an infinite $\lambda$-class, we get a contradiction.
Weaker/Stronger Topologies and Compact/Hausdorff Spaces In my topology lecture notes, I have written: "By considering the identity map between different spaces with the same underlying set, it follows that for a compact, Hausdorff space: $\bullet$ any weaker topology is compact, but not Hausdorff $\bullet$ any stronger topology is Hausdorff, but not compact " However, I'm struggling to see why this is. Can anyone shed some light on this?
Hint. $X$ being a set, a topology $\tau$ is weaker than a topology $\sigma$ on $X$ if and only if the map $$ (X, \sigma) \to (X,\tau),\ x \mapsto x $$ is continuous.
Even weighted codewords and puncturing My question is below: Prove that if a binary $(n,M,d)$-code exists for which $d$ is even, then a binary $(n,M,d)$-code exists for which each codeword has even weight. (Hint: Do some puncturing and extending.)
Breaking this down to individual steps. Assume that $d=2t$ is an even integer, and that an $(n,M,d)$ code $C$ exists.

*Show that by puncturing the last bit from the words of $C$, you get a code $C'$ with parameters $(n-1,M,d')$, where $d'\ge d-1$. Actually we could puncture any bit, but let's be specific. Also we fully expect to have $d'=d-1$, but we can't tell for sure, and don't care.

*Now append to each word of $C'$ an extra bit chosen in such a way that the weight of the word is even. Call the resulting code $C^*$. Show that $C^*$ has parameters $(n,M,d^*)$, where $d^*\ge d'\ge 2t-1.$ Observe that all the words of $C^*$ have even weight.

*Show that the minimum distance of $C^*$ must be an even number. Conclude that $d^*\ge d$.

Observe that we did not assume linearity at any step.
$\int_0^1 \frac{{f}(x)}{x^p} $ exists and finite $\implies f(0) = 0 $ Need some help with this question please. Let $f$ be a continuous function and let the improper integral $$\int_0^1 \frac{{f}(x)}{x^p}\,dx $$ exist and be finite for any $ p \geq 1 $. I need to prove that $$f(0) = 0 $$ In this question, I really wanted to use somehow integration by parts and/or the Fundamental Theorem of Calculus, or maybe even Lagrange's mean value theorem, but I couldn't find a way to set it up. I'll really appreciate your help on proving this.
Hint: Let $$ g(t) = \int_t^1 \frac{f(x)}{x^p}dx $$ Now investigate properties of $g(t)$ around $t=0$.
Introduction to Abstract Harmonic Analysis for undergraduate background I'm looking for a good starting book on the subject which only assumes standard undergraduate background. In particular, I need to gain some confidence working with properties of Haar measures, so I can better understand the spaces $L^{p}(G)$ for a locally compact group $G$. For some perspective on what I currently know, I can't yet solve this problem: Verifying Convolution Identities Something with plenty of exercises would be ideal.
I suggest the short book by Robert, "Introduction to the Representation Theory of Compact and Locally Compact Groups", which is leisurely and has plenty of exercises. The only prerequisite of this book is some familiarity with finite-dimensional representations. A second book you should look at is Folland's "A Course in Abstract Harmonic Analysis", which is more advanced and requires more experience with analysis (having seen Banach spaces is not a bad thing), but the advantage of this book is that it has very clearly written proofs that are easy to follow (I do algebra mostly, and I find many analysis tracts a bit opaque in this regard). Unfortunately, this book does not have exercises, and should be approached once you have plenty of examples in mind. Donald Cohn's "Measure Theory" has a large number of exercises on the basics of topological groups and Haar measure, but it doesn't do representation theory or much else on locally compact groups except an introduction.
$d\mid p\!-\!1\Rightarrow x^d-1\pmod{\!p^2}$ has exactly $d$ roots for odd prime $p$ I'm trying to figure out the number of solutions to the congruence equation $x^d \equiv1 \pmod{p^2}$ where $p$ is prime and $d\mid p-1$. For the congruence equation ${x^d}\equiv1 \pmod p$ where $p$ is prime and $d\mid p-1$ I've shown that there are exactly $d$ solutions modulo $p$. I'm trying to use the above result to extend it to the higher power. I'm aware of something called Hensel's Lemma which says if a polynomial has a simple root modulo a prime $p$, then this root corresponds to a unique root of the same equation modulo any higher power of $p$. We can 'lift' the root iteratively up to higher powers. I'm unsure exactly how this process works. All I have to start with is that I assume a solution of the form $m+np$ solves the 'base' congruence and I'm trying to somehow extend it to $p^2$. Any help would be appreciated. Thanks.
Let $p$ be an odd prime, and work modulo $p^k$ (so $k$ does not have to be $2$). Then if $d$ divides $p-1$, the congruence $x^d\equiv 1\pmod{p^k}$ has exactly $d$ solutions. To prove this, we use the fact that there is a primitive root $g$ of $p^k$, that is, a generator of the group of invertibles modulo $p^k$. Then for $1\le n\le (p-1)p^{k-1}$, we have $(g^n)^d\equiv 1\pmod{p^k}$ if and only if $nd$ is a multiple of $(p-1)p^{k-1}$. There are $d$ different such $n$, namely $i\frac{p-1}{d}p^{k-1}$ for $i=1,2,\dots, d$.
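A small brute-force check of this count for a few odd primes and $k=2$ (an added illustration, not part of the original answer):

```python
# Count the solutions of x^d ≡ 1 (mod p^2) for each divisor d of p-1.
for p in [3, 5, 7, 11]:
    m = p * p
    for d in range(1, p):
        if (p - 1) % d == 0:
            count = sum(1 for x in range(m) if pow(x, d, m) == 1)
            assert count == d, (p, d, count)
print("verified")
```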
Arc length of logarithm function I need to find the length of $y = \ln(x)$ (natural logarithm) from $x=\sqrt3$ to $x=\sqrt8$. So, if I am not mistaken, the length should be $$\int^\sqrt8_\sqrt3\sqrt{1+\frac{1}{x^2}}dx$$ I am having trouble calculating the integral. I tried to do a substitution, but I still fail to think of a way to integrate it. This is what I have done so far: $$\sqrt{1+\frac{1}{x^2}}=u-\frac{1}{x}$$ $$x=\frac{2u}{u-1}$$ $$dx=\frac{2}{(u-1)^2}du$$ $$\sqrt{1+\frac{1}{x^2}}=u-\frac{1}{x}=u-\frac{1}{2}+\frac{1}{2u}$$ $$\int\sqrt{1+\frac{1}{x^2}}dx=2\int\frac{u-\frac{1}{2}+\frac{1}{2u}}{(u-1)^2}du$$ And I am stuck.
$$\begin{align}
\int^{\sqrt{8}}_{\sqrt{3}}\sqrt{1+\frac{1}{x^{2}}}\,dx
&=\int^{\sqrt{8}}_{\sqrt{3}}\frac{\sqrt{1+x^{2}}}{x}\,dx
=\int^{\sqrt{8}}_{\sqrt{3}}\frac{1+x^{2}}{x\sqrt{1+x^{2}}}\,dx
=\int^{\sqrt{8}}_{\sqrt{3}}\frac{x}{\sqrt{1+x^{2}}}\,dx
+\int^{\sqrt{8}}_{\sqrt{3}}\frac{1}{x\sqrt{1+x^{2}}}\,dx\\
&=\frac{1}{2}\int^{\sqrt{8}}_{\sqrt{3}}\frac{(1+x^{2})'}{\sqrt{1+x^{2}}}\,dx
-\int^{\sqrt{8}}_{\sqrt{3}}\frac{\left(\frac{1}{x}\right)'}{\sqrt{1+\frac{1}{x^{2}}}}\,dx\\
&=\left.\sqrt{1+x^{2}}\,\right|^{\sqrt{8}}_{\sqrt{3}}
-\left.\ln\!\left(\frac{1}{x}+ \sqrt{1+\frac{1}{x^{2}}}\right)\right|^{\sqrt{8}}_{\sqrt{3}}
=1+\frac{1}{2}\ln\frac{3}{2}
\end{align}$$
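A numerical check of the final value (added; not part of the original answer), assuming SciPy is available:

```python
import math
from scipy.integrate import quad

value, _ = quad(lambda x: math.sqrt(1 + 1/x**2), math.sqrt(3), math.sqrt(8))
print(value, 1 + 0.5 * math.log(1.5))  # both ≈ 1.2027
```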
Order of this group Given $k\in{\mathbb{N}}$, we denote by $\Gamma _2(p^k)$ the multiplicative group of all matrices $\begin{bmatrix}{a}&{b}\\{c}&{d}\end{bmatrix}$ with $a,b,c,d\in\mathbb{Z}$, $ad-bc = 1$, $a$ and $d$ congruent to $1$ modulo $p^k$, and $b$ and $c$ multiples of $p^k$. How can I show that $|\Gamma _2(p)/\Gamma _2(p^k)|\le p^{4k}$?
By exhibiting a set of at most $N=p^{4k}$ matrices $A_1,\ldots,A_N\in\Gamma_2(p)$ such that for each $B\in\Gamma_2(p)$ we have $A_iB\in\Gamma_2(p^k)$ for some $i$. Matrices of the form $A_i=I+pM_i$ suggest themselves. Or much simpler: by counting how many matrices in $\Gamma_2(p)$ and $\Gamma_2(p^k)$ map to the same element of $SL_2(\mathbb Z/p^k\mathbb Z)$ under the canonical projection.
Artinian rings are perfect Definition. A ring is called perfect if every flat module is projective. Is there a simple way to prove that an Artinian ring is perfect (in the commutative case)?
The local case is proved here, Lemma 10.97.2, and then extend the result to the non-local case by using that an artinian ring is isomorphic to a finite product of artinian local rings and a module $M$ over a finite product of rings $R_1\times\cdots\times R_n$ has the form $M_1\times\cdots\times M_n$ with $M_i$ an $R_i$-module.
How do I see that $F$ is a vector field defined on all of $\mathbb{R}^3$? $$\vec{F}(x,y,z)= y^3z^3\mathbf{i} + 2xyz^3\mathbf{j} + 3xy^2z^2\mathbf{k}$$ How do I see that $F$ is a vector field defined on all of $\mathbb{R}^3$? And then is there an easy way to check if it has continuous partial derivatives? I am looking at a theorem which states: if $F$ is a vector field defined on all of $\mathbb{R}^3$ whose component functions have continuous partial derivatives and $\operatorname{curl} F = 0$, then $F$ is a conservative vector field. I don't know how to check those conditions. Could someone show how, using the problem given?
The vector field $F$ is defined on all of $\mathbb{R}^3$ because all of its component functions are (there are no points where the functions are undefined, i.e., they make sense when plugging any point of $\mathbb{R}^3$ into them). If, say, one of the component functions were $\frac{1}{x-y}$, then the vector field wouldn't be defined along the plane $x=y$. Moreover the component functions have continuous partial derivatives: the partial derivatives of a polynomial are again polynomials, and every polynomial function is continuous. To calculate the curl of the vector field just use the definition of curl, which involves just computing partial derivatives.
For any $11$-vertex graph $G$, show that $G$ and $\overline{G}$ cannot both be planar Let $G$ be a graph with 11 vertices. Prove that $G$ or $\overline{G}$ must be nonplanar. This question was given as extra study material but a little stuck. Any intuitive explanation would be great!
It seems to be the following. Euler's formula implies that $E\le 3V-6$ for each planar graph. If both $G$ and $\bar G$ were planar, then $55=|E(K_{11})|\le 6|V(K_{11})|-12=54$, a contradiction.
Total no. of ordered pairs $(x,y)$ in $x^2-y^2=2013$ Total no. of ordered pairs $(x,y)$ which satisfy $x^2-y^2=2013$. My try: $(x-y)\cdot(x+y) = 3 \times 11 \times 61$. If we calculate for positive integers, then $(x-y)\cdot(x+y)=1\cdot 2013 = 3\cdot 671=11\cdot 183=61\cdot 33$. My question: is there any better method for solving the given question? Thanks.
You can solve this pretty quickly, since you essentially need to solve a bunch of linear systems. One of them is e.g. $$x-y = 3\times 11$$ $$x+y = 61$$ Just compute the inverse matrix of $$\left[\begin{array}{cc} 1 & -1\\ 1 & 1 \end{array}\right]$$ and multiply that with the vectors corresponding to the different combinations of factors. (ps. How do I write matrices???)
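A brute-force enumeration for comparison (an added illustration, not part of the original answer), assuming integer solutions are meant:

```python
# x^2 - y^2 = (x - y)(x + y) = 2013; enumerate the factorizations x - y = s, x + y = t.
solutions = set()
for d in range(1, 2014):
    if 2013 % d == 0:
        for s, t in [(d, 2013 // d), (-d, -(2013 // d))]:
            # s and t are both odd (2013 is odd), so x and y below are integers
            x, y = (s + t) // 2, (t - s) // 2
            solutions.add((x, y))
print(len(solutions))                                          # 16 ordered integer pairs
print(sorted(p for p in solutions if p[0] > 0 and p[1] > 0))   # the 4 pairs with x, y > 0
```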
Bernoulli differential equation help? We have the equation $$3xy' -2y=\frac{x^3}{y^2}$$ It is a type of Bernoulli differential equation. So, since the Bernoulli differential equation type is $$y'+P(x)y=Q(x)y^n$$ I modify it a little to: $$y'- \frac{2y}{3x} = \frac{x^2}{3y^2}$$ $$y'-\frac{2y}{3x}=\frac{1}{3}x^2y^{-2}$$ Now I divide both sides by $y^{-2}$. What should I do now?
We have $$3xy^2 y'-2y^3 = x^3 \implies x (y^3)' - 2y^3 = x^3 \implies \dfrac{(y^3)'}{x^2} + y^3 \cdot \left(-\dfrac2{x^3}\right) = 1.$$ Now note that $\left(\dfrac1{x^2}\right)' = -\dfrac2{x^3}$. Hence, we have $$\dfrac{d}{dx}\left(\dfrac{y^3}{x^2}\right) = 1\implies \dfrac{y^3}{x^2} = x + c.$$ Hence, the solution to the differential equation is $$\boxed{\color{blue}{y^3 = x^3 + cx^2}}$$
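A quick symbolic check of the boxed solution (an added sketch, not part of the original answer), using the multiplied-through form $x(y^3)'-2y^3=x^3$:

```python
import sympy as sp

x, c = sp.symbols('x c')
y3 = x**3 + c*x**2                             # candidate solution, expressed as y^3
residual = x * sp.diff(y3, x) - 2*y3 - x**3    # x(y^3)' - 2y^3 - x^3
print(sp.expand(residual))                     # 0
```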
Find quotient space on $\mathbb{N} $ On $\mathbb{N}$ an equivalence relation $R$ is given by $nRm \iff 4\mid n-m$. The topology on $\mathbb{N}$ is defined by $\tau=\{\emptyset\}\cup\{U\subseteq\mathbb{N}\mid n\in U \wedge m|n \implies m\in U\}$. I need to find the quotient space $(\mathbb{N}/R,\tau_{R})$. I have the solution: $\tau_{R}=\{\emptyset,\mathbb{N}/R,\{[1],[2],[3]\}\}$ where $\mathbb{N}/R=\{[1],[2],[3],[4]\}$. But I have no idea how to prove that $p^{-1}[\{[1],[2],[3]\}]=\cup_{k\in \mathbb{N}_0}\{4k+1,4k+2,4k+3\}$, where $p$ is the quotient mapping, contains all divisors of its elements. (For the other sets it's easy to find an element whose divisor is not in the set.)
Note that $p^{-1}(\{[1],[2],[3]\}) = \mathbb{N}\backslash 4\mathbb{N}$. We need to prove that $n\in\mathbb{N}\backslash 4\mathbb{N}$ and $m|n$ implies $m\in\mathbb{N}\backslash 4\mathbb{N}$ and we do this by contraposition. Suppose $m\notin\mathbb{N}\backslash 4\mathbb{N}$, then $m = 4k$ and thus $m|n$ implies $n = pm = 4pk$, hence $n\notin\mathbb{N}\backslash 4\mathbb{N}$ concluding the argument.
Composition of $\mathrm H^p$ function with Möbius transform Let $f:\mathbb D\rightarrow \mathbb C$ be a function in $\mathrm{H}^p$, i.e. $$\exists M>0,\text{ such that }\int_0^{2\pi}|f(re^{it})|^pdt\leq M<\infty,\forall r\in[0,1)$$ Consider a Möbius transform of the disk $\varphi :\mathbb D\rightarrow\mathbb D$, which generally does not fix $0$ (or more generally consider a holomorphic function $g:\mathbb D \rightarrow \mathbb D$). Is the composition $f\circ\varphi $ in $\mathrm H^p$? If $\varphi $ fixes $0$ then we can apply the Littlewood subordination theorem and derive the result. However, what can we say if $\varphi$ does not fix zero? Does it hold that $f\circ\varphi $ is in $\mathrm H^p$, or is there a counterexample?
Yes. If $\varphi$ is a holomorphic map of the unit disk into itself, the composition operator $f\mapsto f\circ \varphi$ is bounded on $H^p$. In fact, $$\|f\circ \varphi\|_{H^p}\le \left(\frac{1+|\varphi(0)|}{1-|\varphi(0)|}\right)^{1/p}\|f \|_{H^p} \tag1$$ Original source: John V. Ryff, Subordinate $H^p$ functions, Duke Math. J. Vol. 33, no. 2 (1966), 347-354. Sketch of proof. The function $|f|^p$ is subharmonic and has uniformly bounded circular means (i.e., mean values on $|z|=r$, $r<1$). Let $u$ be its smallest harmonic majorant in $\mathbb D$. Then $u\circ \varphi$ is a harmonic majorant of $|f\circ \varphi|^p$. This implies that the circular means of $|f\circ \varphi|^p$ do not exceed $u(\varphi(0))$. In particular, $f\circ \varphi\in H^p$. Furthermore, the Harnack inequality for the disk yields $$u(\varphi(0))\le u(0)\frac{1+|\varphi(0)|}{1-|\varphi(0)|}$$ proving (1).
What is the meaning of "independent events" and how can we logically conclude independence of two events in probability? What is the meaning of "independent events" in probability? For example, if two events (say $A$ and $B$) are independent, what I understand is that the occurrence of $A$ is not affected by the occurrence of $B$. But I am not comfortable with this understanding; thinking this way, every pair of events I come across seems independent! Does there exist a more mathematical definition (I don't want just the formula associated with it)? Another thing I want to know: in a real case or a problem, how do we decide (logically) whether two events are independent? That is, without verifying that $P(A\cap B) = P(A)P(B) \Large\color{blue}{\star}$, how do we conclude that two events $A$ and $B$ are independent or dependent? Take for example an example by "Tim": Suppose you throw a die. The probability you throw a six (event $A$) is $\frac16$ and the probability you throw an even number (event $B$) is $\frac12$. An event $C$ such that $A$ and $B$ both happen would mean $A$ happens (as here $A$ is a subset of $B$), hence its probability is also $\frac16$. My thought: the above example suggests something like this: "Suppose I define two events $A$ and $B$; if one of the events is a subset of the other, then the events are not independent." Of course such an argument is not a sufficient condition for dependency. For example, consider an event $A$, throwing a die and getting an odd number, and an event $B$, getting a 6. They aren't independent ($\Large\color{blue}{\star}$ isn't satisfied). So again I improve my suggestion: "Suppose I define two events $A$ and $B$; if one of them is a subset of the other or their intersection is a null set, then the events are not independent $\Large\color{red}{\star}$." So at last: is $\Large\color{red}{\star}$ a sufficient condition? Or does there exist a sufficient condition other than verifying the formula $\Large\color{blue}{\star}$? And what is the proof? I can't prove my statement, as my idea is not that mathematical.
I think you are asking why we need the notion of independent events when there seems to be no relation between them. The notion of independence is what lets you calculate the probability of combined events: for independent events, $P(A\cap B)=P(A)P(B)$, and hence $P(A\cup B)=P(A)+P(B)-P(A)P(B)$, whereas for dependent events you must work with conditional probabilities. Independent events: the probability of one occurring does not depend on whether the other occurs. Dependent events: the calculation of one probability depends on the other.
on the commutator subgroup of a special group Let $G'$ be the commutator subgroup of a group $G$ and $G^*=\langle g^{-1}\alpha(g)\mid g\in G, \alpha\in Aut(G)\rangle$. We know that always $G'\leq G^*$. It is clear that if $Inn(G)=Aut(G)$, then $G'=G^*$. Also if $G$ is a non abelian simple group or perfect group, then $G'=G^*=G$. Now does there exist a group such that $Inn(G)\neq Aut(G)$ and $G'=G^*\neq G$? Thank you
Serkan's answer is part of a more general family and a more general idea. The more general idea is to include non-inner automorphisms that create no new subgroup fusion. The easiest way to do this is with power automorphisms, and the simplest examples of those are automorphisms that raise a single generator to a power. The more general family is parameterized by pairs $(n,d)$ where $n$ is a positive integer and $1 \neq d \neq n-1$, but $d$ divides $n-1$. Let $A=\operatorname{AGL}(1,\newcommand{\znz}{\mathbb{Z}/n\mathbb{Z}}\znz)$ consist of the affine transformations of the line over $\znz$. The derived subgroup $D=A'$ consists of the translations. Choose some element $\zeta$ of multiplicative order $d$ mod $n$, and let $G$ be the subgroup generated by multiplication by $\zeta$ and by the translations. Explicitly: $$\begin{array}{rcl} A &=& \left\{ \begin{bmatrix} \alpha & \beta \\ 0 & 1 \end{bmatrix} : \alpha \in \znz^\times, \beta \in \znz \right\} \\ D &=& \left\{ \begin{bmatrix} 1 & \beta \\ 0 & 1 \end{bmatrix} : \beta \in \znz \right\} \\ G &=& \left\{ \begin{bmatrix} \zeta^i & \beta \\ 0 & 1 \end{bmatrix} : 0 \leq i < d, \beta \in \znz \right\} \\ \end{array}$$ For example, take $n=5$ and $d=2$, to get $D$ is a cyclic group of order 5, $G$ is dihedral of order 10, and $A$ is isomorphic to the normalizer of a Sylow 5-subgroup ($D$) in $S_5$. Notice that $Z(G)=1$ since $d\neq 1$, so $G \cong \operatorname{Inn}(G)$. We'll show $A \cong \operatorname{Aut}(G)$ so that $D=G'=G^*$ as requested. Since $d \neq n-1$, $A \neq G$. Notice that $D=G'$ (not just $D=A'$) so $D$ is characteristic in $G$, so automorphisms of $G$ restrict to automorphisms of the cyclic group $D$. Let $f$ be an automorphism of $G$, and choose $\beta, \bar\alpha, \bar\beta \in \znz$ so that $$\begin{array}{rcl} f\left( \begin{bmatrix} 1 & 1 \\0 & 1 \end{bmatrix} \right) &=& \begin{bmatrix} 1 & \beta \\ 0 & 1 \end{bmatrix} \\ f\left( \begin{bmatrix} \zeta & 0 \\0 & 1 \end{bmatrix} \right) &=& \begin{bmatrix} \bar\alpha & \bar\beta \\ 0 & 1 \end{bmatrix} \end{array}$$ Notice that $f$ is determined by these numbers since $G$ is generated by those two matrices, and that $\beta, \bar\alpha \in \znz^\times$. Now $$ \begin{bmatrix} \zeta & 0 \\0 & 1 \end{bmatrix} \cdot \begin{bmatrix} 1 & 1 \\0 & 1 \end{bmatrix} \cdot {\begin{bmatrix} \zeta & 0 \\0 & 1 \end{bmatrix}}^{-1} = \begin{bmatrix} 1 & \zeta \\0 & 1 \end{bmatrix} $$ so applying $f$ we get $$ \begin{bmatrix} 1 & \bar\alpha\cdot\beta \\0 & 1 \end{bmatrix} = \begin{bmatrix} \bar\alpha & \bar\beta \\0 & 1 \end{bmatrix} \cdot \begin{bmatrix} 1 & \beta \\0 & 1 \end{bmatrix} \cdot {\begin{bmatrix} \bar\alpha & \bar\beta \\0 & 1 \end{bmatrix}}^{-1} = \begin{bmatrix} 1 & \zeta\cdot\beta \\0 & 1 \end{bmatrix} $$ hence $\bar\alpha \beta = \zeta\beta$ and since $\beta \in \znz^\times$ (lest $D \cap \ker(f) \neq 1$), we get $\bar\alpha=\zeta$. Hence every automorphism of $G$ is conjugation by an element of $A$, namely $$\bar f = \begin{bmatrix} 1/\beta & \bar\beta/(\beta(\zeta-1)) \\ 0 & 1 \end{bmatrix} \in A. $$
Newbie vector spaces question So browsing the tasks our prof gave us to test our skills before the June finals, I've encountered something like this: "Prove that the kernel and image are subspaces of the space V: $\ker(f) < V, \operatorname{im}(f) < V$, where $<$ means a subspace." Is it just me or there's something wrong with the problem? I mean: I was rewriting the tasks from the blackboard by hand so I may have made a mistake but is a problem like this solvable or I messed up and should rather look for the task description from someone else? Cause for the time being, I don't see anything to prove here since we don't know what V is, right?
Let $f$ be a linear transformation from $V$ to $V$. So what is the kernel of $f$? Indeed, it is $$\ker(f)=\{v\in V\mid f(v)=0_V\}$$ It is obvious that $\ker(f)\subseteq V$, and it is nonempty since $f(0_V)=0_V$. Now take $a,b\in F$, the field associated to $V$, and let $v,w\in\ker(f)$. We have $$f(av+bw)=f(av)+f(bw)=af(v)+bf(w)= 0+0=0$$ So the subset $\ker(f)$ satisfies what is needed to be a subspace of $V$. Proceed the same way for the image.
Show that $\exp: \mathfrak{sl}(n,\mathbb R)\to \operatorname{SL}(n,\mathbb R)$ is not surjective It is well known that for $n=2$, this holds. The polar decomposition provides the topology of $\operatorname{SL}(n,\mathbb R)$ as the product of symmetric matrices and orthogonal matrices, which can be written as the product of exponentials of skew symmetric and symmetric traceless matrices. However I could not find out the proof that $\exp: \mathfrak{sl}(n,\mathbb R)\to\operatorname{SL}(n,\mathbb R)$ is not surjective for $n\geq 3$.
Over $\mathbb{R}$, for a general $n$, a real matrix has a real logarithm if and only if it is nonsingular and in its (complex) Jordan normal form, every Jordan block corresponding to a negative eigenvalue occurs an even number of times. So, you may verify that $\pmatrix{-1&1\\ 0&-1}$ (as given by rschwieb's answer) and $\operatorname{diag}(-2,-\frac12,1,\ldots,1)$ (as given by Gokler's answer) are not matrix exponentials of a real traceless matrix.
Determine the number of elements of order 2 in AR So I have completed parts a and b. For b I reduced R to Smith normal form and ended up with diagonal entries 1, 2, 6. From this I have said that the structure of the group is $Z_2 \oplus Z_6 \oplus Z$. But I have no idea whatsoever about part c.
As stated in the comments, let us focus on $\,\Bbb Z_2\times\Bbb Z_6\,$ , which for simplicity (for me, at least) we'll better write multiplicatively as $\,C_2\times C_6=\langle a\rangle\times\langle b\rangle\,\;,\;\;a^2=b^6=1$ Suppose the first coordinate is $\,1\,$ , then the second one has to have order $\,2\,$ , and the only option is $\,(1,b^3)\,$ , so we go over elements with non-trivial first coordinate, and thus the second coordinate has to have order dividing two: $$(a,1)\;,\;\;(a,b^3)$$ and that seems to be pretty much all there is: three involutions as any such one either has first coordinate trivial or not...
Closed form for $\sum_{n=1}^\infty\frac{1}{2^n\left(1+\sqrt[2^n]{2}\right)}$ Here is another infinite sum I need you help with: $$\sum_{n=1}^\infty\frac{1}{2^n\left(1+\sqrt[2^n]{2}\right)}.$$ I was told it could be represented in terms of elementary functions and integers.
Notice that $$ \frac1{2^n(\sqrt[2^n]{2}-1)} -\frac1{2^n(\sqrt[2^n]{2}+1)} =\frac1{2^{n-1}(\sqrt[2^{n-1}]{2}-1)} $$ We can rearrange this to $$ \left(\frac1{2^n(\sqrt[2^n]{2}-1)}-1\right) =\frac1{2^n(\sqrt[2^n]{2}+1)} +\left(\frac1{2^{n-1}(\sqrt[2^{n-1}]{2}-1)}-1\right) $$ and for $n=1$, $$ \frac1{2^{n-1}(\sqrt[2^{n-1}]{2}-1)}-1=0 $$ therefore, the partial sum is $$ \sum_{n=1}^m\frac1{2^n(\sqrt[2^n]{2}+1)} =\frac1{2^m(\sqrt[2^m]{2}-1)}-1 $$ Taking the limit as $m\to\infty$, we get $$ \sum_{n=1}^\infty\frac1{2^n(\sqrt[2^n]{2}+1)} =\frac1{\log(2)}-1 $$
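A quick numerical check of the closed form (added here, not part of the original answer):

```python
import math

s = sum(1 / (2**n * (1 + 2**(1 / 2**n))) for n in range(1, 60))
print(s, 1 / math.log(2) - 1)  # both ≈ 0.4426950...
```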
Extending transvections/generating the symplectic group The context is showing that the symplectic group is generated by symplectic transvections. At the very bottom of http://www-math.mit.edu/~dav/sympgen.pdf it is stated that any transvection on the orthogonal space to a hyperbolic plane (a plane generated by $u,v$ such that $(u,v)=1$ with respect to the bilinear form) can be extended to a transvection on the whole space with the plane contained in its fixed set. Is there an easy way to see why this is true? If not, does anyone have a reference/solution? Thanks.
Assume that $V$ is the whole space, of dimension $2n$. Then the orthogonal space you mentioned (call it $W$) is a $(2n-2)$-dimensional symplectic vector space (it is very easy to check the conditions). Then take another plane spanned by $v$ and $w$ such that $v$, $w$ $\in W$ and $\omega(v,w)=1$. The space orthogonal to this plane is a symplectic vector space of dimension $2n-4$. You know the rest.
Calculus II, Curve length question. Find the length of the curve $x= \int_0^y\sqrt{\sec ^4(3 t)-1}dt, \quad 0\le y\le 9$ A bit stumped, without the 'y' in the upper limit it'd make a lot more sense to me. Advice or solutions with explanation would be very appreciated.
$$\frac{dx}{dy} = \sqrt{\sec^4{3 y}-1}$$ Arc length is then $$\begin{align}\int_0^9 dy \sqrt{1+\left ( \frac{dx}{dy} \right )^2} &= \int_0^9 dy\, \sec^2{3 y} \\ &= \frac13 \tan{27} \end{align}$$
What are some relationships between a matrix and its transpose? All I can think of are:

*If symmetric, they're equal.

*If A is orthogonal, then its transpose is equal to its inverse.

*They have the same rank and determinant.

Is there any relationship between their images/kernels or even eigenvalues?
Fix a ring $R$, and let $A \in M_n(R)$. The characteristic polynomial for $A$ is $$\chi_A(x)=\det (xI-A),$$ so that $\chi_{A^T}(x) = \det (x I -A^T)= \det ((xI-A)^T)=\det(xI-A)$. Since the eigenvalues of $A$ and $A^T$ are the roots of their respective characteristic polynomials, $A$ and $A^T$ have the same eigenvalues. (Moreover, they have the same characteristic polynomial.) It follows that $A$ and $A^T$ have the same minimal polynomial as well, because $p(A^T)=p(A)^T$ for any polynomial $p$, so $p(A^T)=0$ if and only if $p(A)=0$.
Infinitely many primes of the form $4n+3$ I've found at least 3 other posts$^*$ regarding this theorem, but the posts don't address the issues that I have. Below is a proof that there are infinitely many primes of the form $4n+3$; there are a few questions I have in the proof which I'll mark accordingly. Proof: Suppose there were only finitely many primes $p_1,\dots, p_k$, which are of the form $4n+3$. Let $N = 4p_1\cdots p_k - 1$. This number is of the form $4n+3$ and is also not prime as it is larger than all the possible primes of the same form. Therefore, it is divisible by a prime $ \color{green}{ \text{(How did they get to this conclusion?)}}$. However, none of the $p_1,\dots, p_k$ divide $N$. So every prime which divides $N$ must be of the form $4n+1$ $ \color{green}{ \text{(Why must it be of this form?)}}$. But notice any two numbers of the form $4n+1$ form a product of the same form, which contradicts the definition of $N$. Contradiction. $\square$ Then as a follow-up question, the text asks "Why does a proof of this flavor fail for primes of the form $4n+1$?" $ \color{green}{ \text{(This is my last question.)}}$ $^*$One involves congruences, which I haven't learned yet. The other is a solution-verification type question. The last one makes use of a lemma that is actually one of my questions, but wasn't a question in that post.
Therefore, it is divisible by a prime (How did they get to this conclusion?). All integers greater than $1$ are divisible by some prime! So every prime which divides N must be of the form 4n+1 (Why must it be of this form?). Because we've assumed that $p_1, \dots, p_k$ are the only primes of the form 4n+3. If none of those divide N, and 2 doesn't divide N, then all its prime factors must be of the form 4n+1. "Why does a proof of this flavor fail for primes of the form 4n+1?" (This is my last question.) Can you do this yourself now? (Do you understand how the contradiction works in the proof you have? What happens if you multiply together two numbers of the form 4n+3?)
generalized eigenvector for 3x3 matrix with 1 eigenvalue, 2 eigenvectors I am trying to find a generalized eigenvector in this problem. (I understand the general theory goes much deeper, but we are only responsible for a limited number of cases.) I have found eigenvectors $\vec {u_1}$ and $\vec {u_2}.$ When I try $u_1$ and $u_2$ as $u_3$ into this equation: $$ (A - I)u_4 = u_3$$ I get systems which are inconsistent. How can I find the $u_3$? I've been told it has something to do with $(A - I)^3 = 0$, but that's about it.
We are given the matrix: $$\begin{bmatrix}2 & 1 & 1\\1 & 2 & 1\\-2 & -2 & -1\\\end{bmatrix}$$ We want to find the characteristic polynomial and eigenvalues by solving $$|A -\lambda I| = 0 \rightarrow -\lambda^3+3 \lambda^2-3 \lambda+1 = -(\lambda-1)^3 = 0$$ This yields a single eigenvalue, $\lambda = 1$, with an algebraic multiplicity of $3$. If we try and find eigenvectors, we setup and solve: $$[A - \lambda I]v_i = 0$$ In this case, after row-reduced-echelon-form, we have: $$\begin{bmatrix}1 & 1 & 1\\0 & 0 & 0\\0 & 0 & 0\\\end{bmatrix}v_i = 0$$ This leads to the two eigenvectors as he shows, but the problem is that we cannot use that to find the third as we get degenerate results, like you showed. Instead, let's use the top-down chaining method to find three linearly independent generalized eigenvectors. Since the RREF of $$[A - 1 I] = \begin{bmatrix}1 & 1 & 1\\0 & 0 & 0\\0 & 0 & 0\\\end{bmatrix}$$ We have $E_3 = kernel(A - 1I)$ with dimension $= 2$, so there will be two chains. Next, since $$[A - 1 I]^2 = \begin{bmatrix}0 & 0 & 0\\0 & 0 & 0\\0 & 0 & 0\\\end{bmatrix}$$ the space Kernel $(A-1I)^2$ has dimension $=3$, which matches the algebraic multiplicity of $\lambda=1$. Thus, one of the chains will have length $2$, so the other must have length $1$. We now form a chain of $2$ generalized eigenvectors by choosing $v_2$ in kernel $(A-1I)^2$ such that $v_2$ is not in the kernel $(A-1I)$. Since every vector is in kernel $(A-1I)^2$, and the third column of $(A-1I)$ is non-zero, we may choose: $$v_2 = (1, 0, 0) \implies v_1 = (A-1I)v_2 = (1,1,-2)$$ To form a basis for $\mathbb R^3$, we need one additional chain of one generalized eigenvector. This vector must be an eigenvector that is independent from $v_1$. Since $$E_3 = ~\text{span}~ \left(\begin{bmatrix}0\\1\\-1\\\end{bmatrix}, \begin{bmatrix}-1\\0\\1\\\end{bmatrix}\right).$$ and neither of these spanning vectors is itself a scalar multiple of $v1$, we may choose either one of them. So let $$w_1 = (0, 1, -1).$$ Now we have two chains: $$v_2 \rightarrow v_1 \rightarrow 0$$ $$w_1 \rightarrow 0$$ So, to write the solution, we have: $\displaystyle 1^{st}$ Chain $$x_1(t) = e^t \begin{bmatrix}1\\1\\-2\\\end{bmatrix}$$ $$x_2(t) = e^t\left(t \begin{bmatrix}1\\1\\-2\\\end{bmatrix} + \begin{bmatrix}1\\0\\0\\\end{bmatrix}\right)$$ $\displaystyle 2^{nd}$ Chain $$x_3(t) = e^t \begin{bmatrix}0\\1\\-1\\\end{bmatrix}$$ Thus: $$x(t) = x_1(t) + x_2(t) + x_3(t)$$ Note, you can use this linear combination of $x(t)$ and verify that indeed it is a solution to $x' = Ax$.
Free online mathematical software What are the best free user-friendly alternatives to Mathematica and Maple available online? I used Magma online calculator a few times for computational algebra issues, and was very much satisfied, even though the calculation time there was limited to $60$ seconds. Very basic computations can be carried out with Wolfram Alpha. What if one is interested in integer relation detection or integration involving special functions, asymptotic analysis etc? Thank you in advance. Added: It would be nice to provide links in the answers so that the page becomes easily usable. I would also very much appreciate short summary on what a particular software is suitable/not suitable for. For example, Magma is in my opinion useless for doing the least numerics.
I use Pari/GP. SAGE includes this as a component too, but I really like GP alone, as it is. In fact, GP comes with integer relation finding functions (as you mentioned) and has enough rational/series symbolic power that I have been able to implement Sister Celine's method for finding recurrence relations among hypergeometric sums in GP with ease.
Integral of $\int^1_0 \frac{dx}{1+e^{2x}}$ I am trying to solve this integral and I need your suggestions. I think about taking $1+e^{2x}$ and setting it as $t$, but I don't know how to continue now. $$\int^1_0 \frac{dx}{1+e^{2x}}$$ Thanks!
$$\int^{1}_{0}\frac{dx}{1+e^{2x}}=\int^{1}_{0}\frac{e^{-2x}\,dx}{1+e^{-2x}}=-\frac{1}{2}\int^{1}_{0}\frac{(1+e^{-2x})'\,dx}{1+e^{-2x}}=-\frac{1}{2}\ln(1+e^{-2x})\Big|^{1}_{0}= \frac{1}{2}\ln\frac{2e^{2}}{1+e^{2}}$$
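A numerical check (added; not part of the original answer), assuming SciPy is available:

```python
import math
from scipy.integrate import quad

value, _ = quad(lambda x: 1 / (1 + math.exp(2*x)), 0, 1)
print(value, 0.5 * math.log(2 * math.e**2 / (1 + math.e**2)))  # both ≈ 0.2831
```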
Zorn's lemma and maximal linearly ordered subsets Let $T$ be a partially ordered set. We say $T$ is a tree if $\forall t\in T$ the set $\{r\in T\mid r < t\}$ is linearly ordered (such orders can be considered on connected graphs without cycles, i.e. on trees). By a branch we mean a maximal linearly ordered subset of $T$. It is easy to prove that each tree has a branch using Zorn's lemma. However, the converse is also true (I read it recently in an article). Can anybody give a sketch of a proof?
Here is a nice way of proving the well-ordering principle from "Every tree has a branch": Let $A$ be an infinite set, and let $\lambda$ be the least ordinal such that there is no injection from $\lambda$ into $A$. Consider the set $A^{<\lambda}$, that is, all the functions from ordinals smaller than $\lambda$ into $A$, and order those by end-extension (which is exactly $\subseteq$ if you think about it for a moment). It is not hard to verify that $(A^{<\lambda},\subseteq)$ is a tree. Therefore it has a branch. Take the union of that branch, and we have a "maximal" function $f$ whose domain is an ordinal $\alpha<\lambda$. Note that it has to be a maximal element, otherwise we could extend it and the branch was no branch. If the range of $f$ is not all of $A$ then there is some $a\in A$ which witnesses that, and we can extend $f$ to the strictly larger function $f\cup\{\langle\alpha,a\rangle\}$. Since $f$ is maximal, it follows it has to be a surjection, and therefore $A$ can be well-ordered. (Note: If we restrict to the sub-tree of the injective functions we can pull the same trick and end up with a bijection, but a surjection from an ordinal is just as good). From this we have that every set can be well-ordered, and therefore the axiom of choice holds.
Number of ways to arrange $n$ people in a line I came across this confusing question in Combinatorics. Given $n \in \mathbb N$, we have $n$ people sitting in a row. We denote by $a_n$ the number of ways to rearrange them such that each person either stays in his seat or moves one seat to the right or one seat to the left. Calculate $a_n$. This is one of the hardest combinatorics questions I have come across (I'm still mid-way through the course), so if anyone can give me a direction I'll be grateful!
HINT: It’s a little more convenient to think of this in terms of permutations: $a_n$ is the number of permutations $p_1 p_2\dots p_n$ of $[n]=\{1,\dots,n\}$ such that $|p_k-k|\le 1$ for each $k\in[n]$. Call such permutations good. Suppose that $p_1 p_2\dots p_n$ is such a permutation. Clearly one of $p_{n-1}$ and $p_n$ must be $n$.

*If $p_n=n$, then $p_1 p_2\dots p_{n-1}$ is a good permutation of $[n-1]$. Moreover, any good permutation of $[n-1]$ can be extended to a good permutation of $[n]$ with $n$ at the end. Thus, there are $a_{n-1}$ good permutations of $[n]$ with $n$ as last element.

*If $p_{n-1}=n$, then $p_n$ must be $n-1$, so $p_1 p_2\dots p_{n-2}p_n$ is a good permutation of $[n-1]$ with $n-1$ as its last element. Conversely, if $q_1 q_2\dots q_{n-1}$ is a good permutation of $[n-1]$ with $q_{n-1}=n-1$, then $q_1 q_2\dots q_{n-2}nq_{n-1}$ is a good permutation of $[n]$ with $n$ as second-last element. How many good permutations of $[n-1]$ are there with $n-1$ as last element?

Alternatively, you could simply work out $a_k$ by hand for $k=0,\dots,4$, say, and see what turns up; you should recognize the numbers that you get.
Why do we use open intervals in most proofs and definitions? In my class we usually use intervals and balls in many proofs and definitions, but we almost never use closed intervals (for example, in Stokes Theorem, etc). On the other hand, many books use closed intervals. Why is this preference? What would happen if we substituted "open" by "closed"?
My guess is that it's because of two related facts.

*The advantage of open intervals is that, since every point in the interval has an open neighbourhood within the interval, there are no special points 'at the edge' like in closed intervals, which require being treated differently.

*Lots of definitions rely on the existence of a neighbourhood in their most formal aspect, like differentiability for instance, so key properties within the result may require special formulation at the boundary. In PDE/functional analysis contexts for example boundaries are very subtle and important objects which are treated separately.
Computing $\int_0^{\pi\over2} \frac{dx}{1+\sin^2(x)}$? How would you compute$$\int_0^{\pi\over2} \frac{dx}{1+\sin^2(x)}\, \, ?$$
HINT: $$\int_0^{\pi\over2} \frac{dx}{1+\sin^2(x)}= \int_0^{\pi\over2} \frac{\csc^2xdx}{\csc^2x+1}=\int_0^{\pi\over2} \frac{\csc^2xdx}{\cot^2x+2}$$ Put $\cot x=u$
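Carrying the hint through (added for completeness, not part of the original answer): with $u=\cot x$ we have $du=-\csc^2 x\,dx$, and $x:0\to\frac\pi2$ corresponds to $u:\infty\to 0$, so $$\int_0^{\pi\over2} \frac{\csc^2x\,dx}{\cot^2x+2}=\int_0^{\infty}\frac{du}{u^2+2}=\frac{1}{\sqrt2}\arctan\frac{u}{\sqrt2}\Bigg|_0^{\infty}=\frac{\pi}{2\sqrt2}.$$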
Building a tower using colorful blocks How many possibilities are there to build a tower of height n, using colorful blocks, where:

*a white block has height 1

*black, red, blue, green, yellow, and pink blocks each have height 2

I need to find the generating function formula for this. So, for n = 1 I get 1 possibility, for n = 2 I get 2 possibilities, for n = 3 I get 3 possibilities, for n = 4 I get 4 possibilities, etc. The generating function at this moment would be $1 + 2x + 3x^{2} + ...$. But I have no idea how I can find the general formula calculating this. Could you give me any suggestions, or solutions (;-)) ?
Let the number of towers of height $n$ be $a_n$. To build a tower of height $n$, you start with a tower of height $n - 1$ and add a white block ($a_{n - 1}$ ways) or with a tower of height $n - 2$ and add one of 6 height-2 blocks. In all, you can write: $$ a_{n + 2} = a_{n + 1} + 6 a_n $$ Clearly $a_0 = a_1 = 1$. Define $A(z) = \sum_{n \ge 0} a_n z^n$, multiply the recurrence by $z^n$ and sum over $n \ge 0$ to get: $$ \frac{A(z) - a_0 - a_1 z}{z^2} = \frac{A(z) - a_0}{z} + 6 A(z) $$ You get: $$ A(z) = \frac{1}{1 - z - 6 z^2} = \frac{3}{5} \cdot \frac{1}{1 - 3 z} + \frac{2}{5} \cdot \frac{1}{1 + 2 z} $$ This is just geometric series: $$ a_n = \frac{3}{5} \cdot 3^n + \frac{2}{5} \cdot (-2)^n = \frac{3^{n + 1} - (-2)^{n + 1}}{5} $$
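A quick check that the closed form matches the recurrence (added; not part of the original answer):

```python
# Compare the recurrence a_{n+2} = a_{n+1} + 6 a_n with the closed form.
a = [1, 1]
for _ in range(10):
    a.append(a[-1] + 6 * a[-2])
closed = [(3**(n + 1) - (-2)**(n + 1)) // 5 for n in range(12)]
print(a == closed)  # True
```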
Matrix Norm Inequality $\lVert A\rVert_\infty \leq \sqrt{n} \lVert A\rVert_2$ So I'm trying to prove that $\lVert A\rVert_\infty \leq \sqrt{n} \lVert A\rVert_2$. I've written the right hand side in terms of rows, but this method doesn't seem to be getting me anywhere. Where else should I go?
Writing $A=(A_1,\dots,A_n)^\mathrm{T}$ with $A_i$ being the $i$-th row of the matrix, let $A_j$ be the row for which $$ \lVert A\rVert_\infty = \max_{1\leq i\leq n }\lVert A_i\rVert_1 = \lVert A_j\rVert_1 = \sum_{k=1}^n \left|A_{j,k}\right| $$ Then $$ n\lVert A\rVert_2^2 = n\sum_{i=1}^n \lVert A_i\rVert_2^2 \geq n\lVert A_j\rVert_2^2 \geq \lVert A_j\rVert_1^2 = \lVert A\rVert_\infty^2 $$ where the last inequality is "standard" (relation between the 1- and 2-norms on $\mathbb{R}^n$, can be proven with Cauchy-Schwarz).
Radius of convergence of the Bernoulli polynomial generating function power series. The generating function of the Bernoulli Polynomials is: $$\frac{te^{xt}}{e^t-1}=\sum_{k=0}^\infty B_k(x)\frac{t^k}{k!}.$$ Would it be right to say that the radius of convergence of this power series is $2\pi$ ? I'm not sure since the power series above is in fact a double series: $$\sum_{k=0}^\infty\left(\sum_{j=0}^k {k\choose j}B_{k-j} x^j\right)\frac{t^k}{k!}.$$ What if I were to choose a fixed value for $x$? Would the radius be $2\pi$ then, even for the double power series?
For every fixed $x=c$, the radius of convergence of the power series is $2\pi$. This is because $$\frac{ze^{cz}}{e^z-1}$$ is analytic everywhere except at $z=i2\pi n, n=\pm 1,\pm 2,\cdots$ (not at $0$ though.) The disk $B(0,2\pi)$ is the smallest one centered at $0$ that contains a singularity on its boundary, so the radius of convergence is $2\pi$.
Does this ODE have an exact or well-established approximate analytical solution? The equation looks like this: $$\frac{\mathrm{d}y}{\mathrm{d}t} = A + B\sin\omega t - C y^n,$$ where $A$, $B$, $C$ are positive constants, and $n\ge1$ is an integer. Actually I am mainly concerned with the $n=4$ case. The $n=1$ case is trivial. Otherwise the only method I can think of is kind of an iterative approximation, in which a "clean" expression seems not easy to obtain. Just in case it is too "easy" for the experts, we may generalize it to the form $$\frac{\mathrm{d}y}{\mathrm{d}t} = f(t) - g(y).$$
If $\frac{dy}{dt} = F(t,y)$ has $F$ and $\partial_2F$ continuous on a rectangle then there exists a unique local solution at any particular point within the rectangle. This is in the basic texts. For $F(t,y) = f(t)-g(y)$, continuity requires continuity of $f$ and $g$, whereas continuity of the partial derivative of $F$ with respect to $y$ requires continuity of $g'$. Thus continuity of $f$ and continuous differentiability of $g$ suffice to guarantee the existence of a local solution. Of course, this is not an analytic expression just yet. That said, the proof of this theorem provides an iteratively generated approximation. Alternatively, you could argue by Pfaff's theorem that there exists an integrating factor which recasts the problem as an exact equation. So, you can rewrite $dy = [f(t)-g(y)]dt$ as $I\,dy-I[f(t)-g(y)]dt=dG$ for some function $G$. Then the solution is simply $G(t,y)=k$, which locally provides functions which solve your given DEqn. Of course, the devil is in the detail of how to calculate $I$. The answer for most of us is magic. Probably a better answer to your problem is to look at the Bernoulli problem. There a substitution is made which handles problems which look an awful lot like the one you state.
relationship of polar unit vectors to rectangular I'm looking at page 16 of Fleisch's Student's Guide to Vectors and Tensors. The author is talking about the relationship between the unit vector in 2D rectangular vs polar coordinate systems. They give these equations: \begin{align}\hat{r} &= \cos(\theta)\hat{i} + \sin(\theta)\hat{j}\\ \hat{\theta} &= -\sin(\theta)\hat{i} + \cos(\theta)\hat{j}\end{align} I'm just not getting it. I understand how, in rectangular coordinates, $x = r \cos(\theta)$, but the unit vectors are just not computing.
The symbols on the left side of those equations don't make any sense. If you wanted to change to a new pair of coordinates $(\hat{u}, \hat{v})$ by rotating through an angle $\theta$, then you would have $$ \left\{\begin{align} \hat{u} &= (\cos \theta) \hat{\imath} + (\sin \theta)\hat{\jmath} \\ \hat{v} &= (-\sin \theta) \hat{\imath} + (\cos \theta)\hat{\jmath}. \end{align}\right. $$
Differentiate $\log_{10}x$ My attempt: $\eqalign{ & \log_{10}x = {{\ln x} \over {\ln 10}} \cr & u = \ln x \cr & v = \ln 10 \cr & {{du} \over {dx}} = {1 \over x} \cr & {{dv} \over {dx}} = 0 \cr & {v^2} = {(\ln10)^2} \cr & {{dy} \over {dx}} = {{\left( {{{\ln 10} \over x}} \right)} \over {2\ln 10}} = {{\ln10} \over x} \times {1 \over {2\ln 10}} = {1 \over {2x}} \cr} $ The right answer is: ${{dy} \over {dx}} = {1 \over {x\ln 10}}$ , where did I go wrong? Thanks!
$${\rm{lo}}{{\rm{g}}_{10}}x = {{\ln x} \over {\ln 10}} = \dfrac{1}{\ln(10)}\ln x$$ No need for the chain rule, in fact, that would lead you to your mistakes, since $\dfrac 1 {\ln(10)}$ is a constant. So we differentiate only the term that's a function of $x$: $$\dfrac{1}{\ln(10)}\frac d{dx}(\ln x)= \dfrac 1{x\ln(10)}$$
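A quick symbolic check (added; not part of the original answer):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
print(sp.diff(sp.log(x, 10), x))  # 1/(x*log(10))
```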
If $\lim_{t\to\infty}\gamma(t)=p$, then $p$ is a singularity of $X$. I'm trying to solve this question: Let $X$ be a vector field of class $C^1$ in an open set $\Delta\subset \mathbb R^n$. Prove that if $\gamma(t)$ is a trajectory of $X$ defined on a maximal interval $(\omega_-,\omega_+)$ with $\lim_{t\to\infty}\gamma(t)=p\in \Delta$, then $\omega_+=\infty$ and $p$ is a singularity of $X$. The first part is easy because $\gamma$ is contained in a compact set for large $t$; my problem is with the second part, I need help. Thanks in advance
For $n \geq 0$, let $t_n \in (n,n+1)$ such that $\gamma'(t_n)=\gamma(n+1)-\gamma(n)$ (use mean value theorem). Then $\gamma'(t_n)=X(\gamma(t_n)) \underset{n \to + \infty}{\longrightarrow} X(p)$ and $\gamma'(t_n) \underset{n \to + \infty}{\longrightarrow} p-p=0$. Thus $X(p)=0$.
Sequence of Functions Converging to 0 I encountered this question in a textbook. While I understand the intuition behind it I am not sure how to formally prove it. Define the sequence of functions $(g_n)$ on $[0,1]$ to be $$g_{k,n}(x) = \begin{cases}1 & x \in \left[\dfrac{k}n, \dfrac{k+1}n\right]\\ 0 & \text{ else}\end{cases}$$ where $k \in \{0,1,2,\ldots,n-1\}$. Prove the following statements: 1) $g_n \to 0$ with respect to the $L^2$ norm. 2) $g_n(x)$ doesn't converge to $0$ at any point in $[0,1]$. 3) There is a subsequence of $(g_n)$ that converges pointwise to $0$. Thank you for your replies.
1) What is $\lVert g_n\rVert_2$? 2) Show that for any $x$, $g_n(x)=0$ infinitely often and $g_n(x)=1$ infinitely often. 3) Note that $\frac1n\to 0 $ and $\frac2n\to 0$
Do there exist some non-constant holomorphic functions such that the sum of their moduli is constant? Do there exist non-constant holomorphic functions $f_1,f_2,\ldots,f_n$ such that $$\sum_{k=1}^{n}\left|\,f_k\right|$$ is constant? Can you give an example? Thanks very much
NO. Suppose $f, g$ are holomorphic functions on the unit disc and $|f|+|g|=M$ is constant. By the mean value property, $$ 2\pi r M=2\pi r( |f(z_0)|+|g(z_0)|)=\left|\int_{|z-z_0|=r} f\,|dz|\right|+\left|\int_{|z-z_0|=r} g\,|dz|\right|\le \int_{|z-z_0|=r} (|f|+|g|)\,|dz|=2\pi r M $$ so all the equality signs hold, and then $f, g$ are constants.
Solve $\sqrt{2x-5} - \sqrt{x-1} = 1$ Although this is a simple question I for the life of me can not figure out why one would get a 2 in front of the second square root when expanding. Can someone please explain that to me? Example: solve $\sqrt{(2x-5)} - \sqrt{(x-1)} = 1$ Isolate one of the square roots: $\sqrt{(2x-5)} = 1 + \sqrt{(x-1)}$ Square both sides: $2x-5 = (1 + \sqrt{(x-1)})^{2}$ We have removed one square root. Expand right hand side: $2x-5 = 1 + 2\sqrt{(x-1)} + (x-1)$-- I don't understand? Simplify: $2x-5 = 2\sqrt{(x-1)} + x$ Simplify more: $x-5 = 2\sqrt{(x-1)}$ Now do the "square root" thing again: Isolate the square root: $\sqrt{(x-1)} = \frac{(x-5)}{2}$ Square both sides: $x-1 = (\frac{(x-5)}{2})^{2}$ Square root removed Thank you in advance for your help
To get rid of the square root, denote: $\sqrt{x-1}=t\Rightarrow x=t^2+1$. Then: $$\sqrt{2x-5} - \sqrt{x-1} = 1 \Rightarrow \\ \sqrt{2t^2-3}=t+1\Rightarrow \\ 2t^2-3=t^2+2t+1\Rightarrow \\ t^2-2t-4=0 \Rightarrow \\ t_1=1-\sqrt{5} \text{ (ignored, because $t>0$)},t_2=1+\sqrt{5}.$$ Now we can return to $x$: $$x=t^2+1=(1+\sqrt5)^2+1=7+2\sqrt5.$$
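A quick numerical check of the solution (added; not part of the original answer):

```python
import math

x = 7 + 2 * math.sqrt(5)
print(math.sqrt(2*x - 5) - math.sqrt(x - 1))  # 1.0 (up to rounding)
```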
Guides/tutorials to learn abstract algebra? I recently read up a bit on symmetry groups and was interested by how they apply to even the Rubik's cube. I'm also intrigued by how group theory helps prove that "polynomials of degree $\gt4$ are not generally solvable". I love set theory and stuff, but I'd like to learn something else of a similar type. Learning about groups, rings, fields and what-have-you seems like an obvious choice. Could anyone recommend any informal guides to abstract algebra that are written in (at least moderately) comprehensible language? (PDFs etc. would also be nice)
I really think that Isaacs' book Algebra: A Graduate Course introduces group theory in detail without omitting any proof. It may sound difficult because of the adjective "Graduate", but I do not think the explanations are hard for undergraduates to follow, as long as they know how to write proofs. The best free resource for algebra, in my opinion, is Milne's website (http://www.jmilne.org/math/). Not every note is complete, but his excellent notes tell you which books to buy to corroborate them.
properties of recursively enumerable sets $A \times B$ is an r.e.(recursively enumerable) set, I want to show that $A$ (or $B$) is r.e. ($A$ and $B$ are nonempty) I need to find a formula. I've got an idea that I should use the symbolic definition of an r.e. set. That is, writing a formula for the function that specifies $A$ or $B$, assuming a formula exists for $A \times B$. I guess I must use Gödel number somewhere. I should've mentioned that I asked this question over there (Computer Science) but since I am more interested in mathematical arguments and looking for formulas, I'd ask it here as well.
The notion of computable or c.e. is usually defined on $\omega$. To make sense of $A \times B$ being computable or c.e., you should identify ordered pairs $(x,y)$ with natural numbers under a bijective pairing function. For any of the usual standard pairing functions, the projection maps $\pi_1$ and $\pi_2$ are computable. First note that $\emptyset \times B = \emptyset$. So the result does not hold if one of the sets is empty. Suppose $A$ and $B$ are not empty. Suppose $A \times B$ is c.e. Then there is a total computable function $f$ with range $A \times B$. Then $\pi_1 \circ f$ and $\pi_2 \circ f$ are total computable functions with range $A$ and $B$, respectively. So $A$ and $B$ are c.e.
Cyclic shifts when multiplied by $2$. I was trying to solve the following problem: Find a number in base $10$, which when multiplied by $2$, results in a number which is a cyclic shift of the original number, such that the last digit (least significant digit) becomes the first digit. I believe one such number is $105263157894736842$ This I was able to get by assuming the last digit and figuring out the others, till there was a repeat which gave a valid number. I noticed that each time (with different guesses for the last digit), I got an $18$ digit number which was a cyclic shift of the one I got above. Is there any mathematical explanation for the different answers being cyclic shifts of each other?
Suppose an $N$-digit number $x$ satisfies your condition, and write it as $x = 10a + b$, where $b$ is the last digit. Then, your condition implies that $$ 2 (10a + b) = b\times 10^{N-1} + a, $$ or that $19 a = b \left(10^{N-1} - 2\right)$. Since $b$ is a single digit, it is not a multiple of $19$, so we must have $10^{N-1} \equiv 2 \pmod{19}$. You should be able to verify (modular arithmetic: the order of $10$ modulo $19$ is $18$, and $10^{17}\equiv 2\pmod{19}$) that this happens if and only if $N\equiv 0 \pmod{18}$. This gives us $N = 18k$, and substituting $a=\frac{b(10^{N-1}-2)}{19}$ back, the numbers have the form $$x=10a+b=b\cdot\frac{ 10^{18k} - 1 } {19},\qquad b\in\{1,\dots,9\}.$$ For $k=1$ the factor $\frac{10^{18}-1}{19}$ is exactly the $18$-digit repeating block of the decimal expansion of $\frac1{19}$, and your number is the case $b=2$. I now refer you to "When is $\frac{1}{p}$ obtained as a cyclic shift", for you to reach your conclusion. There is one exception, where we need the leading digit to be considered as $0$ (not the typical definition, so I thought I'd point it out): this happens for $b=1$. We can get more solutions for larger $k$, obtained by concatenations of your number with itself, and several cyclic shifts.
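For a quick sanity check, here is a small Python sketch (illustrative only) that reconstructs the $18$-digit solutions from the formula above and confirms the doubling/cyclic-shift property, including the leading-zero case $b=1$:

```python
# Check: x = b * (10**18 - 1) // 19 doubles to the cyclic shift of x
# that moves the last digit to the front (leading digit may be 0 for b = 1).
block = (10**18 - 1) // 19          # repetend of 1/19, as an integer

for b in range(1, 10):
    x = b * block
    s = str(x).zfill(18)            # keep a possible leading zero
    shifted = int(s[-1] + s[:-1])   # last digit moved to the front
    assert 2 * x == shifted, (b, x)
    if b == 2:
        print(s)                    # 105263157894736842, the number in the question
```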
Approximation of alternating series $\sum_{n=1}^\infty a_n = 0.55 - (0.55)^3/3! + (0.55)^5/5! - (0.55)^7/7! + ...$ $\sum_{n=1}^\infty a_n = 0.55 - (0.55)^3/3! + (0.55)^5/5! - (0.55)^7/7! + ...$ I am asked to find the no. of terms needed to approximate the partial sum to be within 0.0000001 from the convergent value of that series. Hence, I tried find the remainder, $R_n = S-S_n$ by the improper integral $\int_{n}^{\infty} \frac{ (-1)^n(0.55)^{2n-1}} {(2n-1)!} dn $ However, I don't know how to integrate this improper integral so is there other method to solve this problem?
Hint: $\sin x=x-\dfrac{x^3}{3!}+\dfrac{x^5}{5!}-\dfrac{x^7}{7!}+ \dots$ Your expression is simply $\sin (0.55)$.
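Since the sum is just the Maclaurin series of $\sin x$ evaluated at $x=0.55$, the alternating series estimate (the error is bounded by the first omitted term) answers the original question directly, with no improper integral needed. A small Python sketch (illustrative only) that counts the terms required for an error below $10^{-7}$:

```python
import math

x, tol = 0.55, 1e-7
total, k = 0.0, 0
while True:
    total += (-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
    k += 1
    # for an alternating series the error is at most the next (omitted) term
    if x**(2*k + 1) / math.factorial(2*k + 1) < tol:
        break

print(k, abs(total - math.sin(x)))   # 4 terms suffice; the error is about 1e-8
```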
how to apply hint to question involving the pigeonhole principle The following question is from cut-the-knot.org's page on the pigeonhole principle Question Prove that however one selects 55 integers $1 \le x_1 < x_2 < x_3 < ... < x_{55} \le 100$, there will be some two that differ by 9, some two that differ by 10, a pair that differ by 12, and a pair that differ by 13. Surprisingly, there need not be a pair of numbers that differ by 11. The Hint Given a run of $2n$ consecutive integers: $a + 1, a + 2, ..., a + 2n - 1, a + 2n$, there are n pairs of numbers that differ by n: $(a+1, a+n+1), (a + 2, a + n + 2), \dots, (a + n, a + 2n)$. Therefore, by the Pigeonhole Principle, if one selects more than n numbers from the set, two are liable to belong to the same pair that differ by $n$. I understood the hint but no concrete idea as to how to apply it, but here are my current insights: My Insights break the set of 100 possible choices into m number of 2n(where $n \in \{9,12,13\}$ ) consecutive numbers and since 55 numbers are to be chosen , show that even if one choses randomly there will be n+1 in one of the m partitions and if so, by the hint there will exist a pair of two numbers that differ by n.
Here is $9$ done explicitly: Break the set into subsets with a difference of $9$: $\{1,10,19,28,\ldots,100\},\{2,11,20,\ldots 92\},\ldots \{9,18,27,\ldots 99\}$. Note that there is one subset with $12$ members and eight with $11$ members. If you want to avoid a pair with a difference of $9$ among your $55$ numbers, you can't pick a pair of neighbors from any set. That means you can only pick six from within each set, but that gives you only $54$ numbers, so with $55$ you will have a pair with difference $9$. The reason this fails with $11$ is you get one subset with $10$ members and ten with $9$ members. You can pick $5$ out of each and avoid neighbors.
How to integrate $\int \sqrt{x^2+a^2}dx$ $a$ is a parameter. I have no idea where to start
I will give you a proof of how they can get the formula above. As a heads up, it is quite difficult and long, so most people use the formula usually written in the back of the text, but I was able to prove it so here goes. The idea is, of course, to do a trig substitution. The expression $$\sqrt{a^2+x^2} $$ suggests that $x=a\tan(\theta)$ would be a good one, because the expression simplifies to $$a\sec(\theta).$$ We can also observe that $dx$ will become $a\sec^2(\theta)\,d\theta$. Therefore $$\int \sqrt{a^2+x^2}\,dx = a^2\int \sec^3(\theta)\,d\theta.$$ Now there are two big things that we are going to do. One is to do integration by parts to simplify this expression so that it looks a little better, and later we need to be able to integrate $\int \sec(\theta)\,d\theta$. So the first step is this. It is well known and natural to let $u=\sec(\theta)$ and $dv=\sec^2(\theta)\,d\theta$ because the latter integrates to simply $\tan(\theta)$. Letting $A = \int \sec^3(\theta)\,d\theta$, you will get the following: $$A = \sec(\theta)\tan(\theta) - \int\sec(\theta)\tan^2(\theta)\,d\theta = \sec(\theta)\tan(\theta) - \int\sec^3(\theta)\,d\theta + \int\sec(\theta)\,d\theta,$$ using $\tan^2(\theta)=\sec^2(\theta)-1$. Therefore, $$2A = \sec(\theta)\tan(\theta)+\int \sec(\theta)\,d\theta,$$ and dividing both sides gives you $$A = \frac12\left[\sec(\theta)\tan(\theta)+\int \sec(\theta)\,d\theta\right].$$ I hope you see now why all we need to be able to do is to integrate $\sec(\theta)$. The chance that you know how is rather high because you are solving this particular problem, but let's just go through it for the hell of it. This is a very common trick in integration using trig, but remember the fact that $\sec^2(\theta)$ and $\sec(\theta)\tan(\theta)$ are derivatives of $\tan(\theta)$ and $\sec(\theta)$, respectively. So this is what we do. $$\int \sec(\theta)d\theta = \int {{\sec(\theta)(\sec(\theta)+\tan(\theta))} \over {\sec(\theta)+\tan(\theta)}} d\theta$$ Letting $w = \sec(\theta)+\tan(\theta)$, $$= \int {dw \over w} = \ln|w|$$ So, long story short, $$\int \sqrt{a^2+x^2}dx = \frac{a^2}2\left[\sec(\theta)\tan(\theta) + \ln|\sec(\theta)+\tan(\theta)|\right]$$ $$= {x\sqrt{a^2+x^2}\over 2} + {{a^2\ln|x+\sqrt{a^2+x^2}|}\over 2} + C,$$ where we used $\sec(\theta)=\frac{\sqrt{a^2+x^2}}{a}$ and $\tan(\theta)=\frac xa$, and absorbed the constant $-\frac{a^2}{2}\ln a$ into $C$.
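As a quick consistency check of the final formula (purely illustrative, not part of the derivation), one can differentiate the antiderivative symbolically; assuming $a>0$, SymPy should reduce the difference to zero:

```python
import sympy as sp

x = sp.symbols('x', real=True)
a = sp.symbols('a', positive=True)

# antiderivative from the answer above (constant of integration omitted)
F = x*sp.sqrt(a**2 + x**2)/2 + a**2*sp.log(x + sp.sqrt(a**2 + x**2))/2

print(sp.simplify(sp.diff(F, x) - sp.sqrt(a**2 + x**2)))   # expect 0
```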
Evaluating a limit with variable in the exponent For $$\lim_{x \to \infty} \left(1- \frac{2}{x}\right)^{\dfrac{x}{2}}$$ I have to use the L'Hospital"s rule, right? So I get: $$\lim_{x \to \infty}\frac{x}{2} \log\left(1- \frac{2}{x}\right)$$ And what now? I need to take the derivative of the log, is it: $\dfrac{1}{1-\dfrac{2}{x}}$ but since there is x, I need to use the chain rule multiply by the derivative of $\dfrac{x}{2}$ ?
Recall the limit: $$\lim_{y \to \infty} \left(1+\dfrac{a}y\right)^y = e^a$$ I trust you can finish it from here, by an appropriate choice of $a$ and $y$.
Ball-counting problem (Combinatorics) I would like some help on this problem, I just can't figure it out. In a box there are 5 identical white balls, 7 identical green balls and 10 red balls (the red balls are numbered from 1 to 10). A man picks 12 balls from the box. How many are the possibilities, in which: a) exactly 5 red balls are drawn -- b) a red ball isn't drawn -- c) there is a white ball, a green ball and at least 6 red balls Thanks in advance.
Hints: (a) How many ways can we choose $5$ numbers from $1,2,...,9,10$? (This will tell you how many different collections of $5$ red balls he may draw.) How many distinguishable collections of $7$ balls can he draw so that each of the seven is either green or white? Note that the answers to those two questions do not depend on each other, so we'll multiply them together to get the solution to part (a). (b) Don't overthink it. How many ways can this happen? (c) You can split this into $5$ cases (depending on the number of red balls drawn) and proceed in a similar way to what we did in part (a) for each case (bearing in mind that we've already drawn one green ball and one white ball). Then, add up the numbers of ways each case can happen.
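If you want to sanity-check the numbers these hints lead to, a brute-force count is easy because only the red balls are distinguishable: a drawing is determined by the set of red balls taken, together with the counts of (identical) white and green balls. A sketch of such a check (my own illustration; for part (c) I read "a white ball, a green ball" as "at least one of each"):

```python
from math import comb

def count(condition):
    total = 0
    for r in range(0, 11):        # number of red balls drawn (distinct: weight by comb(10, r))
        for w in range(0, 6):     # number of white balls drawn (identical)
            g = 12 - r - w        # green balls make up the rest
            if 0 <= g <= 7 and condition(r, w, g):
                total += comb(10, r)
    return total

print(count(lambda r, w, g: r == 5))                        # part (a)
print(count(lambda r, w, g: r == 0))                        # part (b)
print(count(lambda r, w, g: r >= 6 and w >= 1 and g >= 1))  # part (c)
```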
Evaluating the integral: $\lim_{R \to \infty} \int_0^R \frac{dx}{x^2+x+2}$ Please help me in this integral: $$\lim_{R \to \infty} \int_0^R \frac{dx}{x^2+x+2}$$ I've tried as usually, but it seems tricky. Do You have an idea? Thanks in advance!
$$\dfrac1{x^2+x+2} = \dfrac1{\left(x+\dfrac12 \right)^2 + \left(\dfrac{\sqrt{7}}2 \right)^2}$$ Recall that $$\int_a^b \dfrac{dx}{(x+c)^2 + d^2} = \dfrac1d \left.\left(\arctan\left(\dfrac{x+c}d\right)\right)\right \vert_{a}^b$$ I trust you can finish it from here. You will also need to use the fact that $$\lim_{y \to \infty} \arctan(y) = \dfrac{\pi}2$$
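For a numerical double-check of the closed form (illustrative only): with $c=\tfrac12$ and $d=\tfrac{\sqrt7}2$ the limit works out to $\frac{2}{\sqrt7}\left(\frac{\pi}{2}-\arctan\frac{1}{\sqrt7}\right)$, which can be compared against numerical quadrature:

```python
import math
from scipy.integrate import quad

closed_form = 2/math.sqrt(7) * (math.pi/2 - math.atan(1/math.sqrt(7)))
numeric, _ = quad(lambda x: 1/(x**2 + x + 2), 0, math.inf)

print(closed_form, numeric)   # both approximately 0.9142
```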
Proof of: If $x_0\in \mathbb R^n$ is a local minimum of $f$, then $\nabla f(x_0) = 0$. Let $f \colon \mathbb R^n\to\mathbb R$ be a differentiable function. If $x_0\in \mathbb R^n$ is a local minimum of $f$, then $\nabla f(x_0) = 0$. Where can I find a proof for this theorem? This is a theorem for max/min in calculus of several variables. Here is my attempt: Let $x_0$ = $[x_1,x_2,\ldots, x_n]$ Let $g_i(x) = f(x_0+(x-x_i)e_i)$ where $e_i$ is the $i$-th standard basis vector of dimension $n$. Since $f$ has local min at $x_0$, then $g_i$ has local minimum at $x_i$. So by Fermat's theorem, $g'(x_i)= 0$ which is equal to $f_{x_i}(x_0)$. Therefore $f_{x_i}(x_0) = 0$ which is what you wanted to show. Is this right?
Do you know the proof for $n=1$? Can you try to mimic it for more variables, say $n=2$? Since $\nabla f(t)$ is a vector, what you want to prove is that $\frac{\partial f}{\partial x_i}(t)=0$ for each $i$. That is why you need to mimic the $n=1$ proof, mostly. Recall that for $n=1$, we prove that $$f'(t)\leq 0$$ and $$f'(t)\geq 0$$ by looking at $x\to t^{+}$ and $x\to t^{-}$. You should do the same in each $$\frac{\partial f}{\partial x_i}(t)=\lim_{h\to 0}\frac{f(t_1,\dots,t_i+h,\dots,t_n)-f(t_1,\dots,t_n)}h$$ ADD Suppose $f:\Bbb R\to \Bbb R$ is differentiable and $f$ has a local minimum at $t$. Then $f'(t)=0$. Proof. Since $f$ has a local minimum at $t$, for suitably small $h$, $$f(t+h)-f(t)\geq 0$$ If $h>0$ then this gives $$\frac{f(t+h)-f(t)}{h}\geq 0$$ While if $h<0$ we get $$\frac{f(t+h)-f(t)}{h}\leq 0$$ Since $f'$ exists, the side limits also exist and equal $f'(t)$. From the above we conclude $f'(t)\geq 0$ and $f'(t)\leq 0$, so that $f'(t)=0 \;\;\blacktriangle$. Now, just apply that coordinatewise, and you're done.
How to find the limit for the quotient of the least number $K_n$ such that the partial sum of the harmonic series $\geq n$ Let $$S_n=1+1/2+\cdots+1/n.$$ Denote by $K_n$ the least subscript $k$ such that $S_k\geq n$. Find the limit $$\lim_{n\to\infty}\frac{K_{n+1}}{K_n}\quad ?$$
We know that $H_n=\ln n + \gamma +\epsilon(n)$, where $\epsilon(n)\approx \frac{1}{2n}$ and in any case $\epsilon(n)\rightarrow 0$ as $n\rightarrow \infty$. If $m=H_n$ we may as a first approximation solve as $n=e^{m-\gamma}$. Hence the desired limit is $$\lim_{m\rightarrow \infty} \frac{e^{m+1-\gamma}}{e^{m-\gamma}}=e$$ For a second approximation, $m=\gamma + \ln n +\frac{1}{2n}=\gamma+\ln n+\ln e^{\frac{1}{2n}}=\gamma+\ln ne^{\frac{1}{2n}}$. This may be rearranged as $ne^{\frac{1}{2n}}=e^{m-\gamma}$. This has solution $$n=-\frac{1}{2W(-e^{\gamma-m}/2)}$$ where $W$ is the Lambert function. Hence the desired limit is now $$\lim_{m\rightarrow \infty}\frac{W(-e^{\gamma-m}/2)}{W(-e^{\gamma-m-1}/2)}=e$$ Although not a proof, this is compelling enough that I'm not going to think about the next error term.
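One can watch the ratio approach $e$ numerically for small $n$ (just a sanity check of the heuristic above, not a proof); since $K_n$ grows roughly like $e^{n-\gamma}$, only small $n$ are feasible:

```python
import math

def K(n):
    # smallest k with H_k >= n
    s, k = 0.0, 0
    while s < n:
        k += 1
        s += 1.0 / k
    return k

prev = None
for n in range(1, 14):
    cur = K(n)
    if prev is not None:
        print(n, cur, cur / prev)   # the ratio tends to e ~ 2.71828
    prev = cur
```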
Proof regarding unitary self-adjoin linear operators I'm suck on how to do the following Linear Algebra proof: Let $T$ be a self-adjoint operator on a finite-dimensional inner product space $V$. Prove that for all $x \in V$, $||T(x)\pm ix||^2=||T(x)||^2+||x||^2.$ Deduce that $T-iI$ is invertible and that $[(T-iI)^{-1}]^{*}=(T+iI)^{-1}.$ Furthermore, show that $(T+iI)(T-iI)^{-1}$ is unitary. My attempt at a solution (to the first part): $||T(x)\pm ix||^2=\left< T(x)\pm ix, T(x)\pm ix\right>$ $=\left< T(x), T(x) \pm ix\right>\pm \left<ix, T(x)\pm ix \right>$ ... $=\left<T(x), T(x) \right>+ \left<x,x \right>$ $=||T(x)||^2+||x||^2$ The ... is the part I'm stuck on (I know, it's the bulk of the first part). I have yet to consider the next parts since I'm still stuck on this one. Any help would be appreciated! Thanks.
we have $$(Tx+ix,Tx+ix)=(Tx,Tx)+(ix, Tx)+(Tx, ix)+(ix,ix)=|Tx|^{2}+i(x,Tx)-i(x,Tx)+|x|^{2}$$ where I assume you define the inner product to be Hermitian (conjugate-linear in the second argument). For $Tx-ix$ it is similar. The rest is left as an exercise for you; it is not that difficult. To solve the last one, notice that the adjoint of $(T-iI)^{-1}$ is $(T+iI)^{-1}$, and the adjoint of $T+iI$ is $T-iI$. The second claim follows from $(Tx+ix,y)=(Tx,y)+(x,-iy)=(x,Ty)+(x,-iy)=(x,(T-iI)y)$; the first then follows because taking adjoints commutes with taking inverses, so $\big((T-iI)^{-1}\big)^{*}=\big((T-iI)^{*}\big)^{-1}=(T+iI)^{-1}$. If we want $(T+iI)(T-iI)^{-1}$ to be unitary, then we want $$((T+iI)(T-iI)^{-1}x,(T+iI)(T-iI)^{-1}x)=(x,x)$$ for all $x$. Moving one factor of $T+iI$ to the other slot (its adjoint is $T-iI$, which commutes with $T+iI$), this is the same as $$((T^{2}+I)(T-iI)^{-1}x,(T-iI)^{-1}x)=(x,x)$$ but this is the same as $$((T+iI)x, (T-iI)^{-1}x)=(x,x)$$ and the result follows because $(T+iI)^{*}=T-iI$.
positive Integer value of $n$ for which $2005$ divides $n^2+n+1$ How Can I calculate positive Integer value of $n$ for which $2005$ divides $n^2+n+1$ My try:: $2005 = 5 \times 401$ means $n^2+n+1$ must be a multiple of $5$ or multiple of $401$ because $2005 = 5 \times 401$ now $n^2+n+1 = n(n+1)+1$ now $n(n+1)+1$ contain last digit $1$ or $3$ or $7$ $\bullet $ if last digit of $n(n+1)+1$ not contain $5$. So it is not divisible by $5$ Now how can I calculate it? please explain it to me.
There is no such $n$. Since $n^3-1=(n-1)(n^2+n+1)$, any prime $p$ dividing $n^2+n+1$ satisfies $n^3\equiv 1\pmod p$, so the order of $n$ modulo $p$ is $1$ or $3$: in the first case $n\equiv1\pmod p$ and then $p\mid 3$, and in the second case $3\mid p-1$. Since $n^2+n+1=n(n+1)+1$ is always odd, every prime divisor of $n^2+n+1$ is therefore either $3$ or of the form $6k+1$. Now $5\mid 2005$, and $5$ is neither $3$ nor of the form $6k+1$, so $2005$ never divides $n^2+n+1$. On the other hand, there are values of $n$ for which $2005$ divides $n^2+n-1$: here $4(n^2+n-1)=(2n+1)^2-5$, so the odd prime divisors of $n^2+n-1$ are $5$ or primes $\equiv\pm1\pmod{10}$, and both $5$ and $401$ qualify. This happens exactly when $n\equiv 512$ or $1492 \pmod{2005}$.
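The key point — that $5$ never divides $n^2+n+1$ — can also be seen by just checking the five residues, which a one-line sketch confirms:

```python
# n^2 + n + 1 modulo 5 only takes the values 1, 2, 3, never 0,
# so n^2 + n + 1 is never divisible by 5, let alone by 2005 = 5 * 401.
print(sorted({(n*n + n + 1) % 5 for n in range(5)}))   # [1, 2, 3]
```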
Do these definitions of congruences on categories have the same result in this context? Let $\mathcal{D}$ be a small category and let $A=A\left(\mathcal{D}\right)$ be its set of arrows. Define $P$ on $A$ by: $fPg\Leftrightarrow\left[f\text{ and }g\text{ are parallel}\right]$ and let $R\subseteq P$. Now have a look at equivalence relations $C$ on $A$. Let's say that: * *$C\in\mathcal{C}_{s}$ iff $R\subseteq C\subseteq P$ and $fCg\Rightarrow\left(h\circ f\circ k\right)C\left(h\circ g\circ k\right)$ whenever these compositions are defined; *$C\in\mathcal{C}_{w}$ iff $R\subseteq C\subseteq P$ but now combined with $fCg\wedge f'Cg'\Rightarrow\left(f\circ f'\right)C\left(g\circ g'\right)$ whenever these compositions are defined. Then $P\in\mathcal{C}_{s}$ and $P\in\mathcal{C}_{w}$ so both are not empty. For $C_{s}:=\cap\mathcal{C}_{s}$ and $C_{w}:=\cap\mathcal{C}_{w}$ it is easy to verify that $C_{s}\in\mathcal{C}_{s}$ and $C_{w}\in\mathcal{C}_{w}$. My question is: Do we have $C_{w}=C_{s}$ here? It is in fact the question whether two different definitions of 'congruences' both result in the same smallest 'congruence' that contains relation $R\subseteq P$. I ask it here for small categories so that I can conveniently speak of 'relations' (small sets), but for large categories I have the same question. Mac Lane works in CWM with $C_{s}$, but is $C_{w}$ also an option?
They are identical. I will suppress the composition symbol for brevity and convenience. Suppose first that $C \in \mathcal C_w$, and that $f C g$. Since $h C h$ and $k C k$, we have $f C g$ implies $hf C hg$, which in turn implies $hfk C hgk$. Thus $C \in \mathcal C_s$. Suppose now that $C \in \mathcal C_s$, and that $f C g, f' C g'$. Then we have $ff' C gf'$ (take $h = \operatorname{id}, k = f'$) and $gf'Cgg'$ (take $h = g, k = \operatorname{id}$). By transitivity, $ff'Cgg'$. Thus $C \in \mathcal C_w$. Therefore, $\mathcal C_s = \mathcal C_w$, and we conclude $C_s = C_w$.
Power series of $\frac{\sqrt{1-\cos x}}{\sin x}$ When I'm trying to find the limit of $\frac{\sqrt{1-\cos x}}{\sin x}$ when x approaches 0, using power series with "epsilon function" notation, it goes : $\dfrac{\sqrt{1-\cos x}}{\sin x} = \dfrac{\sqrt{\frac{x^2}{2}+x^2\epsilon_1(x)}}{x+x\epsilon_2(x)} = \dfrac{\sqrt{x^2(1+2\epsilon_1(x))}}{\sqrt{2}x(1+\epsilon_2(x))} = \dfrac{|x|}{\sqrt{2}x}\dfrac{\sqrt{1+2\epsilon_1(x)}}{1+\epsilon_2(x)} $ But I can't seem to do it properly using Landau notation I wrote : $ \dfrac{\sqrt{\frac{x^2}{2}+o(x^2)}}{x+o(x)} $ and I'm stuck... I don't know how to carry these o(x) to the end Could anyone please show me what the step-by-step solution using Landau notation looks like when written properly ?
It is the same as in the "$\epsilon$" notation. For numerator, we want $\sqrt{x^2\left(\frac{1}{2}+o(1)\right)}$, which is $|x|\sqrt{\frac{1}{2}+o(1)}$. In the denominator, we have $x(1+o(1))$. Remark: Note that the limit as $x\to 0$ does not exist, though the limit as $x$ approaches $0$ from the left does, and the limit as $x$ approaches $0$ from the right does.
How to prove the existence of infinitely many $n$ in $\mathbb{N}$,such that $(n^2+k)|n!$ Show there exist infinitely many $n$ $\in \mathbb{N}$,such that $(n^2+k)|n!$ and $k\in N$ I have a similar problem: Show that there are infinitely many $n \in \mathbb{N}$,such that $$(n^2+1)|n!$$ Solution: We consider this pell equation,$n^2+1=5y^2$,and this pell equation have $(n,y)=(2,1)$,so this equation have infinite solution$(n,y)$,and $2y=2\sqrt{\dfrac{n^2+1}{5}}<n$. so $5,y,2y\in \{1,2,3,\cdots,n\}$, so $5y^2<n!$ then $(n^2+1)|n!$ but for $k$ I have consider pell equation,But I failed,Thank you everyone can help
Similar to your solution of $k=1$. Consider the pell's equation $n^2 + k = (k^2+k) y^2$. This has solution $(n,y) = (k,1)$, hence has infinitely many solutions. Note that $k^2 + k = k(k+1) $ is never a square for $k\geq 2$, hence is a Pell's Equation of the form $n^2 - (k^2+k) y^2 = -k$. Then, $2y = 2\sqrt{ \frac{ n^2+k} { k^2 +k } } \leq n$ (for $k \geq 2$, $n\geq 2$) always.
Prove that $3^n>n^4$ if $n\geq8$ Proving that $3^n>n^4$ if $n\geq8$ I tried mathematical induction start from $n=8$ as the base case, but I'm stuck when I have to use the fact that the statement is true for $n=k$ to prove $n=k+1$. Any ideas? Thanks!
You want to show $3^n>n^4$. This is equivalent to showing $e^{n\ln3}>e^{4\ln n}$, which means you want to show $n\ln 3>4\ln n$. It suffices to show $\frac{n}{\ln n }>\frac{4}{\ln 3}$. Since $\frac{8}{\ln 8}>\frac{4}{\ln 3}$ and since $f(x)=\frac{x}{\ln x}$ has a positive first derivative for $x\geq 8$, the result follows.
Parent and childs of a full d-node tree i have a full d-node tree (by that mean a tree that each node has exactly d nodes as kids). My question is, if i get a random k node of this tree, in which position do i get his kids and his parent? For example, if i have a full binary tree, the positions that i can find the parent,left and right kid of the k node are $\dfrac k2, 2k, 2k+1$ respectively. Thanks in advance.
It looks like you're starting numbering at 1 for the root, and numbering "left to right" on each level/depth. If the root has depth $0$, then there are $d^{t}$ nodes with depth $t$ from the root in a full $d$-ary tree. Also, node $k$ has depth $\ell_{k}=\lceil\log_{d}\big((d-1)k+1\big)\rceil-1$, since the nodes of depth $t$ occupy the indices $\frac{d^{t}-1}{d-1}+1,\dots,\frac{d^{t+1}-1}{d-1}$. The number of nodes at depths smaller than that of node $k$, then, is $$n_{k} = \sum_{i = 0}^{\ell_{k}-1}d^{i} = \frac{d^{\ell_{k}}-1}{d-1}.$$ So indexing on the row containing node $k$ starts at $n_{k}+1$. Children of node $k$: The position of $k$ in its row is just $p_{k}=k-n_{k}$. The children of node $k$ have depth one more than that of $k$, and in their row the first child has position $d(p_{k}-1)+1=d(k-n_{k}-1)+1$, so the $j^{\text{th}}$ child of $k$ ($1\le j\le d$) has index $$j + \sum_{i = 0}^{\ell_{k}}d^{i} + d\left(k-\sum_{i = 0}^{\ell_{k}-1}d^{i}-1\right) = j + \frac{d^{\ell_{k}+1}-1}{d-1} + d\left(k-\frac{d^{\ell_{k}}-1}{d-1}-1\right)=j +dk-d +1.$$ Parent of node $k$: Since this formula applies to the parent $p$ of $k$ for some $1 \leq j \leq d$, $$1+dp-d +1\leq k \leq d+dp - d + 1 = dp+1 \;\Rightarrow\; dp-d+1 \leq k-1 \leq dp \;\Rightarrow\; p-1 < \frac{k-1}{d} \leq p,$$ so $p = \left\lceil\frac{k-1}{d}\right\rceil$. (For $d=2$ this recovers the familiar children $2k,\,2k+1$ and parent $\lfloor k/2\rfloor$.)
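Here is a small sketch (my own, not part of the derivation) implementing the final formulas with the same numbering convention — root at index $1$, children of $k$ at $dk-d+1+j$ for $j=1,\dots,d$, parent at $\lceil (k-1)/d\rceil$ — together with a consistency check:

```python
import math

def children(k, d):
    # indices d(k-1)+2, ..., dk+1
    return [d*k - d + 1 + j for j in range(1, d + 1)]

def parent(k, d):
    return math.ceil((k - 1) / d)

# consistency check for a few branching factors
for d in (2, 3, 5):
    for k in range(1, 200):
        assert all(parent(c, d) == k for c in children(k, d))

print(children(1, 2), children(2, 2), parent(5, 2))   # [2, 3] [4, 5] 2
```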
Construct a linear programming problem for which both the primal and the dual problem has no feasible solution Construct (that is, find its coefficients) a linear programming problem with at most two variables and two restrictions, for which both the primal and the dual problem has no feasible solution. For a linear programming problem to have no feasible solution it needs to be either unbounded or just not have a feasible region at all I think. Therefore, I know how I should construct a problem if it would only have to hold for the primal problem. However, could anyone tell me how I should find one for which both the primal and dual problem have no feasible solution? Thank you in advance.
Consider an LP: $$ \begin{aligned} \min \; & x_{1}+2 x_{2} \\ \text { s.t. } & x_{1}+x_{2}=1 \\ & 2 x_{1}+2 x_{2}=3 \end{aligned} $$ and its dual: $$ \begin{aligned} \max\; & y_{1}+3 y_{2} \\ \text { s.t. } & y_{1}+2 y_{2}=1 \\ & y_{1}+2 y_{2}=2 \end{aligned} $$ They are both infeasible.
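A quick way to see both infeasibilities on a computer (a sketch using SciPy's linprog with free variables, as the equality form of the duality suggests; status code 2 means "infeasible"):

```python
from scipy.optimize import linprog

free = [(None, None), (None, None)]

primal = linprog(c=[1, 2], A_eq=[[1, 1], [2, 2]], b_eq=[1, 3],
                 bounds=free, method="highs")
# the dual is a maximization, so negate its objective; infeasibility is unaffected
dual = linprog(c=[-1, -3], A_eq=[[1, 2], [1, 2]], b_eq=[1, 2],
               bounds=free, method="highs")

print(primal.status, dual.status)   # expect 2 and 2 (both infeasible)
```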
notation question (bilinear form) So I have to proof the following: for a given Isomorphism $\phi : V\rightarrow V^*$ where $V^*$ is the dual space of $V$ show that $s_{\phi}(v,w)=\phi(v)(w)$ defines a non degenerate bilinear form. My question : Does $\phi(v)(w)$ denote the map from $v$ to a linear function $w$? (in this case i had serious trubles in showing linearity in the second argument,really confusing. Or maybe it just means $\phi(v)$ times $w$ where $w$ is the scalar value ( we get $w$ by applying $v$ in the linear function it is mapped to) I just started today with dual spaces and try my best with the notation , but i couldn't figure it out , please if you have any idea please help me with the notation , i will solve the problem on my own.
Note that $\phi$ is a map from $V$ to $V^\ast$. So for each $v \in V$, we get an element $\phi(v) \in V^\ast$. Now $V^\ast$ is the space of linear functionals on $V$, i.e. $$V^\ast = \{\alpha: V \longrightarrow \Bbb R \mid \alpha \text{ is linear}\}.$$ So each element of $V^\ast$ is a function from $V$ to $\Bbb R$. Then for $v, w \in V$, the notation $$\phi(v)(w)$$ means $$(\phi(v))(w),$$ i.e. the function $\phi(v): V \longrightarrow \Bbb R$ takes $w \in V$ as its argument and we get an element of $\Bbb R$. So $s_\phi$ is really a map of the form $$s_\phi: V \times V \longrightarrow \Bbb R,$$ $$(v, w) \mapsto (\phi(v))(w).$$
Moment generating function of a stochastic integral Let $(B_t)_{t\geq 0}$ be a Brownian motion and $f(t)$ a square integrable deterministic function. Then: $$ \mathbb{E}\left[e^{\int_0^tf(s) \, dB_s}\right] = \mathbb{E}\left[e^{\frac{1}{2}\int_0^t f^2(s) \, ds}\right] $$ Now assume $(X_t)_{t\geq 0}$ is such that $\left(\int_0^tX_sdB_s\right)_{t\geq 0}$ is well defined. Does $$ \mathbb{E}\left[e^{\int_0^tX_s \, dB_s}\right] = \mathbb{E}\left[e^{\frac{1}{2}\int_0^tX_s^2 \, ds}\right] $$ still hold?
If $X$ and $B$ are independent, yes (use the first result to compute the expectation conditional on $X$, then take the expectation). Otherwise, no. For a counterexample, consider $X=B$ and use Itô's formula $\mathrm d (B^2)=2B\mathrm dB+\mathrm dt$.
Matrix $BA\neq$$I_{3}$ If $\text{A}$ is a $2\times3$ matrix and $\text{B}$ is a $3\times2$ matrix, prove that $\text{BA}=I_{3}$ is impossible. So I've been thinking about this, and so far I'm thinking that a homogenous system is going to be involved in this proof. Maybe something about one of the later steps being that the last row of the matrix would be $0\neq \text{a}$, where a is any real number. I've also been thinking that for a $2\times3$ matrix, there is a (non-zero) vector $[x,y,z]$ such that $\text{A}[x,y,z]=[0,0]$ because the dot product could possibly yield $0$. I'm not sure if that's helpful at all though. Trouble is I'm not really too sure how to continue, or even begin. Any help would be appreciated.
Consider the possible dimension of the columnspace of the matrix $BA$. In particular, since $A$ has at most a two-dimensional columnspace, $BA$ has at most a two-dimensional columnspace. Stated more formally, if $A$ has rank $r_a$ and $B$ has rank $r_b$, then $BA$ has rank at most $\min\{ r_a, r_b \}$.
Are the graphs of these two functions equal to each other? The functions are: $y=\frac{x^2-4}{x+2}$ and $(x+2)y=x^2-4$. I've seen this problem some time ago, and the official answer was that they are not. My question is: Is that really true? The functions obviously misbehave when $x = -2$, but aren't both of them indeterminate forms at that point? Why are they different?
$(1)$ The first function is undefined at $x = -2$; $(2)$ the second equation is defined at $x = -2$: $$(x + 2) y = x^2 - 4 \iff xy + 2y = x^2 - 4\tag{2}$$ Its graph includes the entire line $x = -2$. At $x = -2$, equation $(2)$ is satisfied for every value of $y$, so every point lying on the line $x = -2$, i.e. each point of the form $(-2, y)$, is included in the graph of $(2)$. Not so with the first equation, whose graph is the line $y = x - 2$ with the single point $(-2,-4)$ removed. ADDED: Wolfram Alpha's plot of equation $(1)$ fails to show the omission at $x = -2$, while its plot of equation $(2)$ does include the extra vertical line $x = -2$. (Note: the two plots use different axis scalings, so the line $y = x - 2$ appears with a different slope in each.)
Working with subsets, as opposed to elements. Especially in algebraic contexts, we can often work with subsets, as opposed to elements. For instance, in a ring we can define $$A+B = \{a+b\mid a \in A, b \in B\},\quad -A = \{-a\mid a \in A\}$$ $$AB = \{ab\mid a \in A, b \in B\}$$ and under these definitions, singletons work exactly like elements. For instance, $\{a\}+\{b\} = \{c\}$ iff $a+b=c$. Now suppose we're working in an ordered ring. What should $A \leq B$ mean? I can think of at least two possible definitions. * *For all $a \in A$ and $b \in B$ it holds that $a \leq b$. *There exists $a \in A$ and $b \in B$ such that $a \leq b$. Also, a third definition was suggested in the comments: *For all $a \in A$ there exists $b \in B$ such that $a \leq b$. Note that according to all three definitions, we have $\{a\} \leq \{b\}$ iff $a \leq b$. That's because "for all $x \in X$" and "there exists $x \in X$" mean the same thing whenever $X$ is a singleton set. What's the natural thing to do here? (1), (2), or something else entirely? Note that our earlier definitions leveraged existence. For example: $$A+B = \{x\mid \exists a \in A, b \in B : a+b=x\}.$$
Since we're talking about ordered rings, maybe the ordering could be applied to each comparison too, i.e. $a_n \le b_n$ for all $n$, where $(a_n)$ and $(b_n)$ are enumerations of $A$ and $B$. If you were to apply this to the sets of even integers (greater than $0$) and odd integers it might look like $1<2$, $3<4$, etc. Of course, cardinality comes into play, since you need an $n$-th element to exist in both sets. Maybe the caveat could be to compare only up to $n=\min(|A|,|B|)$.
Is there a name for this given type of matrix? Given a finite set of symbols, say $\Omega=\{1,\ldots,n\}$, is there a name for an $n\times m$ matrix $A$ such that every column of $A$ contains each elements of $\Omega$? (The motivation for this question comes from looking at $p\times p$ matrices such that every column contains the elements $1,\ldots, p$).
A sensible definition for this matrix would be a column-Latin rectangle, since the transpose is known as a row-Latin rectangle. Example: A. Drisko, Transversals in Row-Latin Rectangles, JCTA 81 (1998), 181-195. The $m=n$ case is referred to as a column-Latin square in the literature (this is in widespread use). I found one example of the use of column-Latin rectangle here (ref.; .ps file).
How to prove these two ways give the same numbers? How to prove these two ways give the same numbers? Way 1: Step 1 : 73 + 1 = 74. Get the odd part of 74, which is 37 Step 2 : 73 + 37 = 110. Get the odd part of 110, which is 55 Step 3 : 73 + 55 = 128. Get the odd part of 128, which is 1 Continuing this operation (with 73 + 1) repeats the same steps as above, in a cycle. Way 2: Step 1: (2^x) * ( 1/73) > 1 (7 is the smallest number for x) (2^7) * ( 1/73) - 1 = 55/73 Step 2: (2^x) * (55/73) > 1 (1 is the smallest number for x) (2^1) * (55/73) - 1 = 37/73 Step 3: (2^x) * (37/73) > 1 (1 is the smallest number for x) (2^1) * (37/73) - 1 = 1/73 Repeating the steps with the fraction 1/73 goes back to step 1, and repeats them in a cycle. The two ways have the same numbers $\{1, 37, 55\}$ in the 3 steps. How can we prove that the two ways are equivalent and give the same number of steps?
Let $M=73$ (or any odd prime for that matter). To formalize your first "way": You start with an odd number $a_1$ with $1\le a_1<M$ (here specifically: $a_1=1$) and then recursively let $a_{n+1}=u$, where $u$ is the unique odd number such that $M+a_n=2^lu$ with $l\in\mathbb N_0$. By induction, one finds that $a_n$ is an odd integer and $1\le a_n<M$. To formalize your second "way": You start with $b_1=\frac c{M}$ where $1\le c<M$ is odd (here specifically: $c=1$) and then recursively let $b_{n+1}=2^kb_n-1$ where $k\in\mathbb N$ is chosen minimally with $2^kb_n>1$. Clearly, this implies by induction that $0< b_n\le 1$ and $Mb_n$ is an odd integer for all $n$. Then we have Proposition. If $a_{m+1}=M b_n$, then $a_m=M b_{n+1}$. Proof: Using $b_{n+1}=2^kb_n-1$, $M+a_m=2^la_{m+1}$, and $a_{m+1}=M b_n$, we find $$Mb_{n+1}=2^kMb_n-M = 2^ka_{m+1}-M=2^{k-l}(a_m+M)-M.$$ If $k>l$, we obtain that $Mb_{n+1}\ge 2a_m+M>M$, contradicting $b_{n+1}\le 1$. And if $k<l$, we obtain $Mb_{n+1}\le \frac12 a_m-\frac 12 M<0$, contradicting $b_{n+1}>0$. Therefore $k=l$ and $$ Mb_{n+1} = a_m$$ as was to be shown. $_\square$ Since there are only finitely many values available for $a_n$ (namely the odd naturals below $M$), the sequence $(a_n)_{n\in \mathbb N}$ must be eventually periodic, that is, there exist $p>0$ and $r\ge1$ such that $a_{n+p}=a_n$ for all $n\ge r$. Let $r$ be the smallest natural making this true. If we assume $r>1$, then by choosing $c=a_{r+p}$ in the definition of the sequence $(b_n)_{n\in\mathbb N}$ we can enforce $Mb_1=a_{r+p}=a_{r}$; applying the proposition at the index $r+p$ gives $Mb_2=a_{r-1+p}$, while applying it at the index $r$ gives $Mb_2=a_{r-1}$, so $a_{r-1+p}=a_{r-1}$, contradicting the minimality of $r$. We conclude that $r=1$, that is, the sequence $(a_n)_{n\in\mathbb N}$ is immediately periodic. Now the proposition implies that the sequence $(b_n)_{n\in\mathbb N}$ is also immediately periodic: Let $a_1=Mb_1$. Then by periodicity of $(a_n)$, we have $Mb_1=a_{1+p}$, by induction $Mb_k=a_{2+p-k}$ for $1\le k\le p+1$. Especially, $b_{p+1}=b_1$ and hence by induction $b_{n+p}=b_n$ for all $n$. Finally, we use the fact that $M$ is prime. Therefore the $Mb_n$ are precisely the numerators of the $b_n$. Our results above then show that these numerators are (if we start with $b_1=\frac{a_1}M$) precisely the same periodic sequence as $(a_n)$, but walking backwards. This is precisely what you observed. EDIT: As remarked by miket, $M$ need only be odd but not necessarily prime. To see that, one must observe that the $a_n$ are always relatively prime to $M$ if one starts with $a_1$ relatively prime to $M$. Consequently, the $Mb_n$ are still the numerators of the $b_n$ (i.e. their denominators are $M$ in shortest terms).
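A small sketch (mine, not part of the proof) that runs both procedures for $M=73$ and shows that they visit the same odd numbers, one cycle forwards and the other backwards:

```python
M = 73

def odd_part(x):
    while x % 2 == 0:
        x //= 2
    return x

# Way 1: a_{n+1} = odd part of (M + a_n), starting from a_1 = 1
a, way1 = 1, []
for _ in range(6):
    way1.append(a)
    a = odd_part(M + a)

# Way 2: numerators of b_{n+1} = 2^k b_n - 1 with minimal k making 2^k b_n > 1,
# starting from b_1 = 1/M (track the numerator c, where b_n = c/M)
c, way2 = 1, []
for _ in range(6):
    way2.append(c)
    while c <= M:          # double until 2^k * c/M > 1
        c *= 2
    c -= M                 # subtract 1, i.e. (2^k c - M)/M

print(way1)   # [1, 37, 55, 1, 37, 55]
print(way2)   # [1, 55, 37, 1, 55, 37]  -- the same cycle, traversed backwards
```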
equality between the index between field with $p^{n}$ elements and $ \mathbb{F}_{p}$ and n? can someone explain this? $ \left[\mathbb{F}_{p^{n}}:\mathbb{F}_{p}\right]=n $
$$|\Bbb F_p|=p\;,\;\;|\Bbb F_{p^n}|=p^n$$ and since $\Bbb F_{p^n}$ is a vector space over $\Bbb F_p$, every element of the larger field is a unique linear combination of basis elements with scalars from the smaller one. A basis with $m$ elements would give exactly $p^m$ such combinations, so $p^m=p^n$ and the basis must consist of exactly $n$ elements; that is, $\left[\mathbb{F}_{p^{n}}:\mathbb{F}_{p}\right]=n$.
General Solution of Diophantine equation Having the equation: $$35x+91y = 21$$ I need to find its general solution. I know $\gcd(35,91) = 7$, so I can solve $35x+91y = 7$ to find $x = -5, y = 2$. Hence a solution to $35x+91y = 21$ is $x = -15, y = 6$. From here, however, how do I move on to finding the set of general solutions? Any help would be very much appreciated! Cheers
Hint: If $35x + 91y = 21$ and $35x^* + 91y^* = 21$ for for some $(x,y)$ and $(x^*, y^*)$, we can subtract the two equalities and get $5(x-x^*) + 13(y-y^*) = 0$. What does this tell us about the relation between any two solutions? Now, $5$ and $13$ share no common factor and we're dealing with integers, $13$ must divide $(x-x^*)$. In other words, $x = x^* + 13k$ for some integer $k$ and substituting it into the equality yields $y = y^* - 5k$. Thus, once you have one solution $(x^*,y^*)$, all of them can be expressed as $(x^*+13k, y^*-5k)$.
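For completeness, here is a sketch of how one might compute a particular solution with the extended Euclidean algorithm and list members of the general family programmatically (illustrative only):

```python
def extended_gcd(a, b):
    # returns (g, x, y) with a*x + b*y = g = gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

a, b, c = 35, 91, 21
g, x0, y0 = extended_gcd(a, b)          # 35*(-5) + 91*2 = 7
assert c % g == 0
x0, y0 = x0 * (c // g), y0 * (c // g)   # particular solution: (-15, 6)

for k in range(-2, 3):                   # a few members of the general family
    x, y = x0 + (b // g) * k, y0 - (a // g) * k
    print(x, y, a * x + b * y)           # always 21
```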
Comparing $\sqrt{1001}+\sqrt{999}\ , \ 2\sqrt{1000}$ Without the use of a calculator, how can we tell which of these are larger (higher in numerical value)? $$\sqrt{1001}+\sqrt{999}\ , \ 2\sqrt{1000}$$ Using the calculator I can see that the first one is 63.2455453 and the second one is 63.2455532, but can we tell without touching our calculators?
You can tell without calculation if you can visualize the graph of the square-root function; specifically, you need to know that the graph is concave (i.e., it opens downward). Imagine the part of the graph of $y=\sqrt x$ where $x$ ranges from $999$ to $1001$. $\sqrt{1000}$ is the $y$-coordinate of the point on the graph directly above the midpoint, $1000$, of that interval. $\frac12(\sqrt{999}+\sqrt{1001})$ is the average of the $y$-coordinates at the ends of this segment of the graph, so it's the $y$-coordinate of the point directly above $x=1000$ on the chord of the graph joining those two ends. The concavity of the graph shows that the chord lies below the graph. So $\frac12(\sqrt{999}+\sqrt{1001})<\sqrt{1000}$. Multiply by $2$ to get the numbers in your question.
Limit of $\lim_{x \to 0}\left (x\cdot \sin\left(\frac{1}{x}\right)\right)$ is $0$ or $1$? WolframAlpha says $\lim_{x \to 0} x\sin\left(\dfrac{1}{x}\right)=0$ but I've found it $1$ as below: $$ \lim_{x \to 0} \left(x\sin\left(\dfrac{1}{x}\right)\right) = \lim_{x \to 0} \left(\dfrac{1}{x}x\dfrac{\sin\left(\dfrac{1}{x}\right)}{\dfrac{1}{x}}\right)\\ = \lim_{x \to 0} \dfrac{x}{x} \lim_{x \to 0} \dfrac{\sin\left(\dfrac{1}{x}\right)}{\dfrac{1}{x}}\\ = \lim_{x \to 0} 1 \\ = 1? $$ I wonder where I'm wrong...
$$\lim_{x \to 0} \left(x\cdot \sin\left(\dfrac{1}{x}\right)\right) = \lim_{\large\color{blue}{\bf x\to 0}} \left(\frac{\sin\left(\dfrac{1}{x}\right)}{\frac 1x}\right) = \lim_{\large\color{blue}{\bf x \to \pm\infty}} \left(\frac{\sin x}{x}\right) = 0 \neq 1$$
Smooth maps on a manifold Lie group $$ \operatorname{GL}_n(\mathbb R) = \{ A \in M_{n\times n} | \det A \ne 0 \} \\ \begin{align} &n = 1, \operatorname{GL}_n(\mathbb R) = \mathbb R - \{0\} \\ &n = 2, \operatorname{GL}_n(\mathbb R) = \left\{\begin{bmatrix}a&b\\c&d\end{bmatrix}\Bigg| ad-bc \ne 0\right\} \end{align} $$ $(\operatorname{GL}_n(\mathbb R),\cdot)$ is a group. * *$AB$ is invertible if $A$ and $B$ are invertible. *$A(BC)=(AB)C$ *$I=\begin{bmatrix}1&0\\0&1\end{bmatrix}$ *$A^{-1}$ is invertible if $A$ is invertible. $$(\operatorname{GL}_n(\mathbb R) := \det{}^{-1}(\mathbb R\setminus\{0\}))$$ $\det{}^{-1}(\mathbb R\setminus\{0\})$ is open in $M_{n\times n}(\mathbb R)$. $\det : M_{n\times n}(\mathbb R) \to \mathbb R$ is continuous, why? $\dim \operatorname{GL}_n(\mathbb R) = n^2$, why? $(\operatorname{GL}_n(\mathbb R),\cdot)$ is a Lie group if: $$ \mu : G\times G \to G \\ \mu(A,B) = A\cdot B \:\text{ is smooth} \\ I(A) = A^{-1} \:\text{ is smooth} $$ How can I show this? I want to show that the general linear group is a Lie group. I could not show my last step. How can I show that $AB$ and $A^{-1}$ are smooth? please help me I want to learn this. Thanks Here are my handwritten notes http://i.stack.imgur.com/tkoMy.jpg
Well, first you have to decide what exactly the "topology" on matrices is. Suppose we consider matrices just as vectors in $\mathbb{R}^{n^2}$, with the usual metric topology. Each entry of the product $AB$ is a polynomial in the entries of $A$ and $B$, and polynomial maps are smooth, so $\mu(A,B)=AB$ is smooth. The other bit is similar if you know Cramer's rule: each entry of $A^{-1}$ is a cofactor of $A$ divided by $\det A$, i.e. a rational function of the entries of $A$ whose denominator never vanishes on $\operatorname{GL}_n(\mathbb R)$, so inversion is smooth as well.
Why is boundary information so significant? -- Stokes's theorem Why is it that there are so many instances in analysis, both real and complex, in which the values of a function on the interior of some domain are completely determined by the values which it takes on the boundary? I know that this has something to do with the general version of Stokes's theorem, but I'm not advanced enough to understand this yet -- does anyone have a (semi) intuitive explanation for this kind of phenomenon?
This is because many phenomena in nature can be described by well-defined fields. When we look at the boundaries of some surface enclosing the fields, it tells us everything we need to know. Take a conservative field, such as an electric or gravitational field. If we want to know how much energy needs to be added or removed to move something from one spot in the field to another, we do not have to look at the path. We just look at the initial and final location, and the potential energy equation for the field will give us the answer. The endpoints of a path are boundaries in a 1D space, but the idea extends to surfaces bounded by curves and so on. This is not so strange. For instance, to subtract two numbers, like 42 - 17, we do not have to look at all the numbers in between, like 37, right? 37 cannot "do" anything that will wreck the subtraction which is determined only by the values at the boundary of the interval.
Counting Problem - N unique balls in K unique buckets w/o duplication $\mid$ at least one bucket remains empty and all balls are used I am trying to figure out how many ways one can distribute $N$ unique balls in $K$ unique buckets without duplication such that all of the balls are used and at least one bucket remains empty in each distribution? Easy, I thought. I'll just hold a bucket in reserve, distribute the balls, and place the empty bucket. I get: $ K\cdot N! / (N-K-1)! $ Even were I sure this handles the no duplicates condition, what if $K \geq N$? Then I get a negative factorial in the denominator. Is the solution correct and/or is there a more general solution? Thanks!
A slightly different approach using the twelvefold way. If $K>N$ then it doesn't matter how you distribute the balls since at least one bucket will always be empty. In this case we are simply counting functions from an $N$-element set to a $K$-element set. Therefore the number of distributions is $K^N$. If $K=N$ then the only bad assignments are the ones in which every bucket contains precisely one ball. This happens in precisely $N!$ ways, so just subtract out these cases for a total of $N^N - N!$ distributions. If $K < N$, we first choose which buckets are to remain empty and then we fill the remaining buckets. If we choose $m$ buckets to remain empty, then the remaining $K-m$ buckets must be filled surjectively. The number of surjections for each $m$ is $$(K-m)!{N\brace K-m}$$ where the braced term is a Stirling number of the second kind. Summing over $m$ gives the required result $$\sum_{m=1}^{K-1}(K-m)!{N\brace K-m}\binom{K}{m}$$ I am not sure if this simplifies or not. In summary, if we let $f(N,K)$ denote the number of distributions, then $$f(N,K) = \begin{cases}K^N & K > N\\ N^N - N! & K=N\\ \sum_{m=1}^{K-1}(K-m)!{N\brace K-m}\binom{K}{m} & K < N\end{cases}$$
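Small cases can be checked against the formula by direct enumeration (a sketch; the Stirling numbers are computed by the usual recurrence):

```python
from math import comb, factorial
from itertools import product
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def f(N, K):
    if K > N:
        return K**N
    if K == N:
        return N**N - factorial(N)
    return sum(comb(K, m) * factorial(K - m) * stirling2(N, K - m)
               for m in range(1, K))

def brute(N, K):
    return sum(1 for assign in product(range(K), repeat=N)
               if len(set(assign)) < K)        # some bucket left empty

for N in range(1, 7):
    for K in range(1, 6):
        assert f(N, K) == brute(N, K), (N, K)
print("ok")
```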
Finding an area of the portion of a plane? I need help with a problem I got in class today any help would be appreciated! Find the area of the portion of the portion of the plane $6x+4y+3z=12$ that passes through the first octant where $x, y$, and $z$ are all positive.. I graphed this plane and got all the vertices but I am not sure how my teacher wants us to approach this problem.. Do I calculate the line integral of each side of the triangle separately and add them together? because we are on the section of line integrals, flux, Green's theorem, etc..
If you have the three vertices, you can calculate the length of the three sides and use Heron's formula
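Concretely, the intercepts of $6x+4y+3z=12$ with the axes are $(2,0,0)$, $(0,3,0)$ and $(0,0,4)$, and the area comes out to $\sqrt{61}$. A sketch (illustrative only) comparing Heron's formula with the cross-product formula for the area of a triangle:

```python
import numpy as np

A = np.array([2.0, 0.0, 0.0])   # x-intercept of 6x + 4y + 3z = 12
B = np.array([0.0, 3.0, 0.0])   # y-intercept
C = np.array([0.0, 0.0, 4.0])   # z-intercept

# Heron's formula from the three side lengths
a, b, c = np.linalg.norm(B - C), np.linalg.norm(A - C), np.linalg.norm(A - B)
s = (a + b + c) / 2
heron = np.sqrt(s * (s - a) * (s - b) * (s - c))

# half the norm of the cross product gives the same area
cross = 0.5 * np.linalg.norm(np.cross(B - A, C - A))

print(heron, cross, np.sqrt(61))   # all approximately 7.8102
```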
Linear Algebra determinant and rank relation True or False? If the determinant of a $4 \times 4$ matrix $A$ is $4$ then its rank must be $4$. Is it false or true? My guess is true, because the matrix $A$ is invertible. But there is any counter-example? Please help me.
You're absolutely correct. The point of mathematical proof is that you don't need to go looking for counterexamples once you've found the proof. Beforehand that's very reasonable, but once you're done you're done. Determinant 4 is nonzero $\implies$ invertible $\implies$ full rank. Each of these is a standard proposition in linear algebra.
Dimensions of vector subspaces in a direct sum are additive $V = U_1\oplus U_2~\oplus~...~ \oplus~ U_n~(\dim V < ∞)$ $\implies \dim V = \dim U_1 + \dim U_2 + ... + \dim U_n.$ [Using the result if $B_i$ is a basis of $U_i$ then $\cup_{i=1}^n B_i$ is a basis of $V$] Then it suffices to show $U_i\cap U_j-\{0\}=\emptyset$ for $i\ne j.$ If not, let $v\in U_i\cap U_j-\{0\}.$ Then \begin{align*} v=&0\,(\in U_1)+0\,(\in U_2)\,+\ldots+0\,(\in U_{i-1})+v\,(\in U_{i})+0\,(\in U_{i+1})+\ldots\\ & +\,0\,(\in U_j)+\ldots+0\,(\in U_{n})\\ =&0\,(\in U_1)+0\,(\in U_2)+\ldots+0\,(\in U_i)+\ldots+0\,(\in U_{j-1})+\,v(\in U_{j})\\ & +\,0\,(\in U_{j+1})+\ldots+0\,(\in U_{n}). \end{align*} Hence $v$ fails to have a unique linear sum of elements of $U_i's.$ Hence etc ... Am I right?
Yes, you're correct. Were you second guessing yourself? If so, no need to: Your argument is "spot on". If you'd like to save yourself a little space, and work, you can write your sum as: $$ \dim V = \sum_{i = 1}^n \dim U_i$$ "...If not, let $v\in U_i\cap U_j-\{0\}.$ Then $$v= v(\in U_i) + \sum_{\large 1\leq k\leq n; \,k\neq i} 0(\in U_k)$$
About the induced vector measure of a Pettis integrable function(part 2) Notations: In what follows, $X$ stands for a Hausdorff LCTVS and $X'$ its topological dual. Let $(T,\mathcal{M},\mu)$ be a finite measure space, i.e., $T$ is a nonempty set, $\mathcal{M}$ a $\sigma$-algebra of subsets of $T$ and $\mu$ is a nonnegative finite measure on $\mathcal{M}$. Definition. A function $f:T\to X$ is said to be Pettis-integrable if * *for each $x'\in X'$, the composition map $$x'\circ f:T\to \mathbb{R}$$ is Lebesgue integrable and *for each $E\in \mathcal{M}$, there exists $x_E\in X$ such that $$x'(x_E)=\int_E(x'\circ f)d\mu$$ for all $x'\in X'$. In this case, $x_E$ is called the Pettis integral of $f$ over $E$ and is denoted by $$x_E=\int_E fd\mu.$$ Remark. Let $f:T\to X$ be Pettis-integrable. Define $$m_f:\mathcal{M}\to X$$ by $$m_f(E)=\int_E fd\mu$$ for any $E\in \mathcal{M}.$ Hahn-Banach Theorem ensures that $x_E$ in the above definition is necessarily unique and so $m_f$ is a well-defined mapping. Moreover, Orlicz-Pettis Theorem imply that the induced vector measure $m_f$ is countably additive, see for instance this. Question. With the above discussions, how do we show that $m_f$ is $\mu$-continuous? I would be thankful to someone who can help me...
Let $\mu(E)=0$ and take an arbitrary $x'\in X'$; then $$ x'(m_f(E))=\int_E (x'\circ f)\,d\mu=0. $$ Since $x'\in X'$ is arbitrary, by a corollary of the Hahn-Banach theorem $m_f(E)=0$. Thus $m_f\ll\mu$.
Finding the Taylor series of $f(z)=\frac{1}{z}$ around $z=z_{0}$ I was asked the following (homework) question: For each (different but constant) $z_{0}\in G:=\{z\in\mathbb{C}:\, z\neq0$} find a power series $\sum_{n=0}^{\infty}a_{n}(z-z_{0})^{n}$ whose sum is equal to $f(z)$ on some subset of $G$. Please specify exactly on which subset this last claim holds. Suggestion: Instead of calculating derivatives of f, try using geometric series in some suitable way. What I did: Denote $f(z)=\frac{1}{z}$ and note $G\subseteq\mathbb{C}$ is open. For any $z_{0}\in G$ the maximal $R>0$ s.t $f\in H(D(z_{0},R))$ is clearly $R=|z_{0}|$. By Taylor theorem we have it that $f$ have a power series in $E:=D(z_{0},R)$ (and this can not be expended beyond this point, as this would imply that $f$ is holomorphic at $z=0$. I am able to find $f^{(n)}(z_{0})$ and to solve the exercise this way: I got that $$f^{(n)}(z)=\frac{(-1)^{n}n!}{z^{n+1}}$$ hence $$f(z)=\sum_{n=0}^{\infty}\frac{(-1)^{n}}{z_{0}^{n+1}}(z-z_{0})^{n};\, z\in E$$ but I am not able to follow the suggestion, I tried to do a small manipulation $$f(z)=\frac{1}{z-z_{0}+z_{0}}$$ and tried to work a bit with that, but I wasn't able to bring it to the form $$\frac{1}{1-\text{expression}}$$ or something similar that uses a known power series . Can someone please help me out with the solution that involves geometric series ?
I think you were on the right track: $$\frac1z=\frac1{z_0+(z-z_0)}=\frac1{z_0}\cdot\frac1{1+\frac{z-z_0}{z_0}}=\frac1{z_0}\left(1-\frac{z-z_0}{z_0}+\frac{(z-z_0)^2}{z_{0}^2}-\ldots\right)=$$ $$=\frac1{z_0}-\frac{z-z_0}{z_0^2}+\frac{(z-z_0)^2}{z_0^3}-\ldots+\frac{(-1)^n(z-z_0)^n}{z_0^{n+1}}+\ldots$$ As you can see, this is just what you got, but using only the expansion of a geometric series, without the derivatives explicitly kicking in... The above is true whenever $$\frac{|z-z_0|}{|z_0|}<1\iff |z-z_0|<|z_0|$$
prove that $\sum_{n=0}^{\infty} {\frac{n^2 2^n}{n^2 + 1}x^n} $ does not converge uniformly in its' convergence radius In calculus: Given $\displaystyle \sum_{n=0}^{\infty} {\frac{n^2 2^n}{n^2 + 1}x^n} $, prove that it converges for $-\frac{1}{2} < x < \frac{1}{2} $, and that it does not converge uniformly in the area of convergence. So I said: $R$ = The radius of convergence $\displaystyle= \lim_{n \to \infty} \left|\frac{C_n}{C_{n+1}} \right| = \frac{1}{2}$ so the series converges $\forall x$ : $-\frac{1}{2} < x < \frac{1}{2}$. But how do I exactly prove that it does not converge uniformly there? I read in a paper of University of Kansas that states: "Theorem: A power series converges uniformly to its limit in the interval of convergence." and they proved it. I'll be happy to get a direction.
According to the theorem cited in Hagen von Eitzen's answer, you know that the series converges uniformly (and even absolutely) on every closed interval contained in $]-1/2,1/2[$. So the problem must lie near the endpoints. The definition of uniform convergence is: $ \forall \varepsilon>0,\exists N_ \varepsilon \in \mathbb N,\forall n \in \mathbb N,\quad [ n \ge N_ \varepsilon \Rightarrow \forall x \in A,\ d(f_n(x),f(x)) \le \varepsilon] $ So you just have to negate it: exhibit an $\varepsilon$ ($1/3$ for instance, or any other value) such that for every $N_ \varepsilon$ there is always an $x$ (take $x$ close enough to $1/2$) with $d(f_n(x),f(x)) \ge \varepsilon $ for some $n \ge N_ \varepsilon$.
Sequence $(a_n)$ s.t $\sum\sqrt{a_na_{n+1}}<\infty$ but $\sum a_n=\infty$ I am looking for a positive sequence $(a_n)_{n=1}^{\infty}$ such that $\sum_{n=1}^{\infty}\sqrt{a_na_{n+1}}<\infty$ but $\sum_{n=1}^{\infty} a_n>\infty$. Thank you very much.
The simplest example I can think of is $\{1,0,1,0,...\}$. If you want your elements to be strictly positive, use some fast-converging sequence such as $n^{-4}$ in place of the zeroes.
A limit on binomial coefficients Let $$x_n=\frac{1}{n^2}\sum_{k=0}^n \ln\left(n\atop k\right).$$ Find the limit of $x_n$. What I can do is just use Stolz formula. But I could not proceed.
$x_n=\frac{1}{n^2}\sum_{k=0}^{n}\ln{n\choose k}=\frac{1}{n^2}\ln\Big(\prod_{k=0}^n {n\choose k}\Big)=\frac{1}{n^2}\ln\left(\frac{(n!)^{n+1}}{\prod_{k=0}^n (k!)^2}\right)$ since ${n\choose k}=\frac{n!}{k!(n-k)!}$. Comparing consecutive products, $$e^{n^2x_n}=\frac{\prod_{k=0}^n\binom nk}{\prod_{k=0}^{n-1}\binom{n-1}k}\,e^{(n-1)^2x_{n-1}}=\frac{n^{n-1}}{(n-1)!}\,e^{(n-1)^2x_{n-1}},$$ so $$n^2x_n-(n-1)^2x_{n-1}=(n-1)\ln n-\ln (n-1)!.$$ By Stirling's approximation, $\ln(n-1)! = (n-1)\ln(n-1)-(n-1)+O(\ln n)$, hence $$n^2x_n-(n-1)^2x_{n-1}=(n-1)\ln\frac{n}{n-1}+(n-1)+O(\ln n)=n+O(\ln n).$$ Now apply the Stolz formula to $x_n=\frac{\sum_{k=0}^n\ln\binom nk}{n^2}$: $$\lim_{n\to\infty}x_n=\lim_{n\to\infty}\frac{n^2x_n-(n-1)^2x_{n-1}}{n^2-(n-1)^2}=\lim_{n\to\infty}\frac{n+O(\ln n)}{2n-1}=\frac12.$$
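A numerical check of this limit (illustrative only; the convergence is slow, with an error of order $\frac{\ln n}{n}$):

```python
from math import lgamma

def x(n):
    # ln C(n, k) via log-gamma, summed over k and divided by n^2
    log_binom = lambda k: lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return sum(log_binom(k) for k in range(n + 1)) / n**2

for n in (10, 100, 1000, 10000):
    print(n, x(n))   # increases toward 1/2 (about 0.38 at n=10, 0.48 at n=100, 0.497 at n=1000)
```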
sum of monotonic increasing and monotonic decreasing functions I have a question regarding sum of monotinic increasing and decreasing functions. Would appreciate very much any help/direction: Consider an interval $x \in [x_0,x_1]$. Assume there are two functions $f(x)$ and $g(x)$ with $f'(x)\geq 0$ and $g'(x)\leq 0$. We know that $f(x_0)\leq 0$, $f(x_1)\geq 0$, but $g(x)\geq 0$ for all $x \in [x_0,x_1]$. I want to show that $q(x) \equiv f(x)+g(x)$ will cross zero only once. We know that $q(x_0)\leq 0$ and $q(x_1)\geq 0$. Is there a ready result that shows it or how to proceed to show that? Many thanks!!!
Alas, the answer is no. $$f(x)=\begin{cases}-4& x\in[0,2]\\ -2& x\in [2,4]\\0& x\in[4,6]\end{cases}$$ $$g(x)=\begin{cases}5 & x\in [0,1]\\3& x\in[1,3]\\1& x\in[3,5]\\ 0 & x\in[5,6]\end{cases}$$ $$q(x)=\begin{cases} 1 & x\in [0,1]\\ -1& x\in[1,2]\\ 1 & x\in[2,3]\\ -1 & x\in[3,4]\\ 1& x\in[4,5]\\ 0 & x\in[5,6]\end{cases}$$ This example could be made continuous and strictly monotone with some tweaking.
What is the Broader Name for the Fibonacci Sequence and the Sequence of Lucas Numbers? Fibonacci and Lucas sequences are very similar in their definition. However, I could just as easily make another series with a similar definition; an example would be: $$x_0 = 53$$ $$x_1 = 62$$ $$x_n = x_{n - 1} + x_{n - 2}$$ What I want to ask is, what is the general name for these types of sequences, where one term is the sum of the previous two terms?
Occasionally (as in the link posted by vadim123) you see "Fibonacci integer sequence". Lucas sequences (of which the Lucas sequence is but one example) are a slight generalization. Sometimes the term Horadam sequence is used instead. The general classification under which all of these fall is the linear recurrence relation. Most of the special properties of the Fibonacci sequence are inherited from either linear recurrence relations or divisibility sequences.
question on summation? Please, I need to know the proof that $$\left(\sum_{k=0}^{\infty }\frac{n^{k+1}}{k+1}\frac{x^k}{k!}\right)\left(\sum_{\ell=0}^{\infty }B_\ell\frac{x^\ell}{\ell!}\right)=\sum_{k=0}^{\infty }\left(\sum_{i=0}^{k}\frac{1}{k+1-i}\binom{k}{i}B_in^{k+1-i}\right)\frac{x^k}{k!}$$ where $B_\ell$, $B_i$ are Bernoulli numbers. Maybe we should replace $k$ with $j$? Anyway, I need to prove how to move from the left to right. Thanks for all help.
$$\left(\sum_{k=0}^{\infty} \dfrac{n^{k+1}}{k+1} \dfrac{x^k}{k!} \right) \left(\sum_{l=0}^{\infty} B_l \dfrac{x^l}{l!}\right) = \sum_{k,l} \dfrac{n^{k+1}}{k+1} \dfrac{B_l}{k! l!} x^{k+l}$$ $$\sum_{k,l} \dfrac{n^{k+1}}{k+1} \dfrac{B_l}{k! l!} x^{k+l} = \sum_{m=0}^{\infty} \sum_{l=0}^{m} \dfrac{n^{m-l+1}}{m-l+1} \dfrac{B_l}{(m-l)! l!} x^{m}$$ This gives us $$\sum_{m=0}^{\infty} \sum_{l=0}^{m} \dfrac{B_l}{(m-l+1)! l!}n^{m-l+1} x^{m} = \sum_{m=0}^{\infty} \left(\sum_{l=0}^m \dfrac1{m-l+1} \dbinom{m}{l} B_l n^{m-l+1}\right)\dfrac{x^m}{m!}$$
Packing circles on a line On today's TopCoder Single-Round Match, the following question was posed (the post-contest write-up hasn't arrived yet, and their explanations often leave much to be desired anyway, so I thought I'd ask here): Given a maximum of 8 marbles and their radii, how would you put them next to each other on a line so that the distance between the lowest point on the leftmost marble and the lowest point on the rightmost marble is as small as possible? 8! is small enough number for brute forcing, so we can certainly try all permutations. However, could someone explain to me, preferably in diagrams, how to calculate that distance value given a configuration? Also, any kind of background information would be appreciated.
If I understand well, the centers of the marbles are on the line. In that case, we can fix a coordinate system such that the $x$-axis is the line, and the center $C_1$ of the first marble is the origin. Then, its lowest point is $P=(0,-r_1)$. Calculate the coordinates of the centers of the next circles: $$C_2=(r_1+r_2,0),\ C_3=(r_1+2r_2+r_3,0),\ \dots,\ \\C_n=(r_1+2r_2+\ldots+2r_{n-1}+r_n,\ 0)$$ The lowest point of the last circle is $Q=(r_1+2r_2+..+2r_{n-1}+r_n,-r_n)$. Now we can use the Pythagorean theorem to calculate the distance $PQ$: $$PQ^2=(r_1+2r_2+..+2r_{n-1}+r_n)^2+(r_1-r_n)^2= \\=\left(2\sum_{k=1}^n r_k-(r_1+r_n)\right)^2+(r_1-r_n)^2\,.$$ It follows that the distance is independent of the order of the intermediate marbles, only the first and the last one counts. Hopefully from here you can calculate the minimum of this expression.
Help me prove this inequality : How would I go about proving this? $$ \displaystyle\sum_{r=1}^{n} \left( 1 + \dfrac{1}{2r} \right)^{2r} \leq n \displaystyle\sum_{r=0}^{n+1} \displaystyle\binom{n+1}{r} \left( \dfrac{1}{n+1} \right)^{r}$$ Thank you! I've tried so many things. I've tried finding a series I could compare one of the series to but nada, I tried to change the LHS to a geometric series but that didn't work out, please could someone give me a little hint? Thank you!
Here is the proof that $(1+1/x)^x$ is concave for $x\ge 1$. The second derivative of $(1+1/x)^x$ is $(1+1/x)^x$ times $$p(x)=\left(\ln\left(1+\frac{1}{x}\right)-\frac{1}{1+x}\right)^2-\frac{1}{x(1+x)^2}$$ Now for $x\ge 1$, we have $$\ln(1+1/x)-\frac{2}{1+x}\le \frac{1}{x}-\frac{2}{1+x}=\frac{1-x}{x(1+x)}\le 0$$ and $$\ln(1+1/x)\ge \frac{1}{x}-\frac{1}{2x^2}\ge 0,$$ so \begin{align*}p(x)&= \ln^2(1+1/x)-\frac{2\ln(1+1/x)}{1+x}+\frac{1}{(1+x)^2}-\frac{1}{x(1+x)^2}\\ &=\ln(1+1/x)(\ln(1+1/x)-2/(1+x))+\frac{x-1}{x(1+x)^2}\\ &\le \left(\frac{1}{x}-\frac{1}{2x^2}\right)\left(\frac{1}{x}-\frac{2}{1+x}\right)+\frac{x-1}{x(1+x)^2}\\ &=-\frac{(x-1)^2}{2x^3(1+x)^2}\le 0 \end{align*} proving that $(1+1/x)^x$ is concave for $x\ge 1$.
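For completeness, here is how the concavity statement finishes the original problem (a short worked step; it uses the binomial theorem on the right-hand side and Jensen's inequality on the left). Since $\sum_{r=0}^{n+1}\binom{n+1}{r}\left(\frac1{n+1}\right)^r=\left(1+\frac1{n+1}\right)^{n+1}$ and $\frac1n\sum_{r=1}^n 2r=n+1$, concavity of $h(x)=\left(1+\frac1x\right)^x$ on $[1,\infty)$ gives
$$\frac1n\sum_{r=1}^n\left(1+\frac1{2r}\right)^{2r}=\frac1n\sum_{r=1}^n h(2r)\le h\!\left(\frac1n\sum_{r=1}^n 2r\right)=\left(1+\frac1{n+1}\right)^{n+1},$$
and multiplying by $n$ yields the claimed inequality.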
Baire's theorem from a point of view of measure theory According to Baire's theorem, for each countable collection of open dense subsets of $[0,1]$, their intersection $A$ is dense. Are we able to say something about the Lebegue's measure of $A$? Must it be positive? Of full measure? Thank you for help.
Let $q_1,q_2,\ldots$ be an enumeration of the rationals in $[0,1]$. Let $I_n^m$ be an open interval centered at $q_n$ with length at most $\frac{1}{m\,2^n}$. Then $\bigcup_n I_n^m$ is an open dense set with Lebesgue measure at most $\frac1m$. The intersection of these open dense sets (over $m$) has Lebesgue measure zero.
Calculating $\sqrt{28\cdot 29 \cdot 30\cdot 31+1}$ Is it possible to calculate $\sqrt{28 \cdot 29 \cdot 30 \cdot 31 +1}$ without any kind of electronic aid? I tried to factor it using equations like $(x+y)^2=x^2+2xy+y^2$ but it didn't work.
If you are willing to rely on the problem setter to make sure it is a natural, it has to be close to $29.5^2=841+29+.25=870.25$ The one's digit of the stuff under the square root sign is $1$, so it is either $869$ or $871$. You can either calculate and check, or note that two of the factors are below $30$ and only one above, which should convince you it is $869$.