Dummit & Foote's Abstract Algebra, 3rd Edition, Exercise 0.1.7
Your proof is fine (maybe a bit formal!) except it says "for some $p \in A$", which makes it seem like you are picking a $p$, when in fact $p$ is defined to be $h(q)$. I would rewrite the sentence "Therefore, $h(q)=p \Rightarrow q=f(p)$, for some $p\in A$." to say "Write $p$ for $h(q)$. Then $q = f(p)$."
Are these graphs all bipartite?
Key step: Prove by contradiction that no odd cycle exists; we can then conclude immediately that the graph is bipartite. Consider the possible values of $D^2$. Since it is the squared distance between two integer points, it is of the form $a^2 + b^2$, and hence $D^2 \equiv 0, 1, 2 \pmod{4}$. If $D^2 \equiv 1 \pmod{4}$, then consider the parity of a point, $(x,y) \mapsto x+y$. Any two vertices that are connected by an edge must have opposite parity, hence no odd cycle exists. If $D^2 \equiv 2 \pmod{4}$ and $(x_1, y_1)$ is connected to $(x_2, y_2)$, then $x_1 - x_2 \equiv y_1 - y_2 \equiv 1 \pmod{2}$. This means that the parity of $x$ itself alternates along any path, hence no odd cycle exists. If $D^2 \equiv 0 \pmod{4}$ and $(x_1, y_1)$ is connected to $(x_2, y_2)$, then $x_1 - x_2 \equiv y_1 - y_2 \equiv 0 \pmod{2}$. Now, consider any odd cycle; WLOG one of its points is $(0,0)$. Then all of the coordinates of this cycle must be even. Dividing throughout by $2$, we get a cycle of points whose squared distance is $(D^*)^2 = \frac{D^2}{4}$. Continue dividing until $(D^*)^2 \not\equiv 0 \pmod{4}$, and thus we are in one of the above cases, hence no odd cycle exists.
path of the integral in the initial definition of gamma function
Yes. Fix $z$ such that $\operatorname{Re}z>0$. By Cauchy's theorem, the difference between the integral over the line segment $[r,R]$ and the integral over the line segment $[re^{ia},Re^{ia}]$ comes from the contribution of two circular arcs: the arc from $R$ to $Re^{ia}$ and the arc from $r$ to $re^{ia}$. Since $t^{z-1} e^{-t}$ decays exponentially at infinity, the integral over the first arc tends to $0$ as $R\to \infty$. For $0\le \theta\le a$ we have $$\left|(re^{i\theta})^{z-1}\right| = r^{\operatorname{Re}z-1} |e^{i\theta (z-1)}|.$$ The factor $|e^{i\theta (z-1)}|$ is harmless: it is bounded by some $M$ that does not depend on $r$. Integration over the second arc therefore contributes at most $$ar\cdot M r^{\operatorname{Re}z-1} = aMr^{\operatorname{Re}z},$$ which tends to $0$ as $r\to 0$.
Classical probability, need my work checked.
We do have $p_a=5p_b$ and $p_a+p_b+\frac{7}{12}=1$, and we can now calculate $p_b$ and $p_a$. So we can, as you did, take the basic probabilities as known. Your solution to (a) is correct; it is now just a matter of filling in the numbers. For (b), you are aware that we need to find $\Pr(A_1)$, the probability of exactly $1$ prize A. This one is tricky. There are $3$ ways this can happen. (i) We have $2$ empty eggs, and the remaining prize is an A; (ii) We have $1$ empty egg, and precisely one of the two remaining eggs has an A; (iii) We have $0$ empty eggs and precisely one of the three eggs has an A. To complete the calculation, you will also need to use the probability of (iii), as your solution indicated. We calculate the probability of (i). There are $\binom{12}{3}$ equally likely ways to choose $3$ eggs. There are $\binom{7}{2}\binom{5}{1}$ ways to choose two empty eggs and a non-empty one. Thus the probability we choose two empty and one non-empty is $\frac{\binom{7}{2}\binom{5}{1}}{\binom{12}{3}}$. Given that we got $2$ empty and $1$ non-empty, the probability our non-empty egg has an A is $\frac{p_a}{p_a+p_b}$. This ratio is $\frac{5}{6}$. So the probability of (i) is $$\frac{\binom{7}{2}\binom{5}{1}}{\binom{12}{3}}\cdot \frac{5}{6}.$$ Now we need the probabilities of (ii) and of (iii). The basic arguments are the same as for (i). We leave these to you. If difficulties arise, please leave a comment. Remark: The tricky thing about the calculations is that both the hypergeometric and the binomial are involved. We could bypass the binomial coefficients by imagining we are choosing eggs one at a time. But we have to take account of the fact that, for example, choosing an empty egg on the first trial affects the probability of choosing an empty egg on the next trial.
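For concreteness, here is a minimal Python sketch (not part of the original answer) that fills in all three cases, assuming $12$ eggs of which $7$ are empty and taking $\frac{p_a}{p_a+p_b}=\frac56$ as computed above; the treatment of (ii) and (iii) is my own completion of the argument that is left to the reader.

    from fractions import Fraction
    from math import comb

    total = comb(12, 3)            # equally likely ways to choose 3 of the 12 eggs
    pA = Fraction(5, 6)            # P(a non-empty egg holds prize A) = p_a/(p_a+p_b)
    pB = 1 - pA

    # (i) two empty eggs and one non-empty egg, which holds an A
    p_i = Fraction(comb(7, 2) * comb(5, 1), total) * pA
    # (ii) one empty egg, two non-empty eggs, exactly one of which holds an A
    p_ii = Fraction(comb(7, 1) * comb(5, 2), total) * comb(2, 1) * pA * pB
    # (iii) no empty eggs, three non-empty eggs, exactly one of which holds an A
    p_iii = Fraction(comb(5, 3), total) * comb(3, 1) * pA * pB ** 2

    print(p_i, p_ii, p_iii, p_i + p_ii + p_iii)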
Is integrating $e^{iz^{2}}$ along the real axis in the complex plane the same as the Riemann integral of $e^{x^2}$?
No they are not the same. The $i$ before $z^2$ (that is not present before $x^2$) changes things. For one, the modulus of $e^{i\theta}$ is always one whenever $\theta \in \mathbb{R}$, and since your integral is along the real line, $z$ (and thus $z^2$) assumes only real values.
The ideal generated by $X^{2}-Y^{3}$ and $Y^{2}-X^{3}$ is not prime
Here is a more geometric solution. To show that $I=(x^2-y^3,y^2-x^3)$ is not prime, one way is to show that the common zero set of the two polynomials is finite with cardinality $>1$. This works because the irreducible components of the common zero set correspond to the radicals of the primary components in the decomposition of $I$, and a finite set with more than one point is not irreducible. So suppose $x^2-y^3=y^2-x^3=0$. From the first equality, you get $$x^3-y^2=x^2\cdot x - y^2=y^3x-y^2 = y^2(xy-1)=0.$$ But then either $y=0$ or $xy=1$. If $xy=1$, then substituting $y=1/x$ into $x^2=y^3$ gives $x^5=1$, so $x=y=1$ is one such point. If $y=0$, we must have $x=0$. Hence we have found two distinct points in the zero set, so $I$ cannot be prime.
Which properties determine the uniqueness of the global Artin map?
What I think works: let $A: \mathbb{I}_K \rightarrow Gal(L/K)$ be a homomorphism satisfying (i), (ii), and (iv). Each $K_v^{\ast}$ inherits its topology as a subgroup of $\mathbb{I}_K$, so we can restrict $A$ to a map $A_v: K_v^{\ast} \rightarrow G(L/K)$. Then $A$ is just the product $\prod\limits_v A_v$. When $v$ is unramified and finite, $A_v: K_v^{\ast} \rightarrow Gal(L/K)$ does what we want by (iv). When $v$ is ramified and finite, restrict $A_v$ to a continuous map $\mathcal O_v^{\ast} \rightarrow Gal(L/K)$. The preimage of $\{1\}$ is an open and closed subgroup of $\mathcal O_v^{\ast}$, necessarily containing $1 + \mathfrak p_v^n$ for some $n \geq 1$. We can enlarge $n$ to a number $n_v$ for which $1 + \mathfrak p_v^{n_v}$ is also contained in the group of local norms. When $v$ is infinite, the preimage of $1$ under the map $K_v^{\ast} \rightarrow G(L/K)$ is an open and closed subgroup of $K_v^{\ast}$. If $v$ is real, this can either be all of $K_v^{\ast}$ or $(0, \infty)$. If $v$ is complex, this has to be all of $K_v^{\ast}$. In any case, we can restrict $A$ to a homomorphism on $$H_{\mathfrak c} = \prod\limits_{v \mid \mathfrak c} W_v(\mathfrak c) \prod\limits_{v \nmid \mathfrak c}' K_v^{\ast}$$ for a suitable admissible cycle $\mathfrak c$, and here $A$ agrees with the global Artin map. Since $H_{\mathfrak c} K^{\ast} = \mathbb{I}_K$, $A$ agrees with the global Artin map everywhere by (i).
Find the smallest number with total number of set bits greater than X
Let $P_X(n)$ be true if the sum of set bits of the numbers from 1 to $n$ is greater than or equal to $X$. Then $P_X(n)$ true implies $P_X(n')$ true for any $n' > n$. Therefore you can apply binary search: suppose you are searching the interval $[a,b]$. Let $m=\lfloor(a+b)/2\rfloor$. If $P_X(m)$ is true, then you can reduce your interval to $[a,m]$; if it is false, you can reduce your interval to $[m+1,b]$.
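A minimal Python sketch of this binary search (my own illustration, with a deliberately naive $O(n)$ bit-count helper; a digit-by-digit count of set bits would make the whole search logarithmic):

    def set_bits_up_to(n: int) -> int:
        """Sum of the set bits of all integers from 1 to n (naive version)."""
        return sum(bin(i).count("1") for i in range(1, n + 1))

    def smallest_n(X: int) -> int:
        """Smallest n such that P_X(n) holds, i.e. set_bits_up_to(n) >= X."""
        a, b = 1, 1
        while set_bits_up_to(b) < X:      # grow the right endpoint until P_X(b) is true
            b *= 2
        while a < b:
            m = (a + b) // 2
            if set_bits_up_to(m) >= X:    # P_X(m) true: the answer lies in [a, m]
                b = m
            else:                         # P_X(m) false: the answer lies in [m+1, b]
                a = m + 1
        return a

    print(smallest_n(10))   # 7, since 1+1+2+1+2+2+3 = 12 >= 10 but the sum up to 6 is only 9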
A function that integrates to zero against a sequence of weights
Consider $g(u) = \frac12 f(u^{-1/2}) u^{-3/2}$. Then for each $n$: $$ 0 = \int_a^1 t^{-2n} f(t) dt = \int_1^{a^{-2}} u^n g(u) du $$ so $g = 0$ (a classical application of the Stone-Weierstrass theorem) and $f = 0$.
Solve a linear differential equation and find the limits
You are correct, $y(t) = (A+3/4)e^{6t}-\frac 34 e^{2t}$. However, whatever the initial condition, the absolute value of the solution goes to infinity. Indeed, if $A\ne-0.75$, then the dominating term in the solution is $e^{6t}$, which goes to infinity. If $A=-0.75$, then $y(t)=-0.75e^{2t}$, which goes to $-\infty$, too. The problem with your reasoning is that you took $y(t) = (A+3/4)e^{6t}-\frac 34 e^{2t}$ and passed to the limit on the left side (replaced $y$ with zero), but didn't do the same on the right side. This allowed you to multiply by an appropriate exponential and yielded erroneous results.
why cos of angle between two vectors is their dot product over product of their length?
$$\frac u{\|u\|}, \frac v{\|v\|}$$ are unit vectors, and WLOG we can take the dot product of the two unit vectors $(1,0)$ and $(\cos\theta,\sin\theta)$, which form an angle of aperture $\theta$: $$(1,0)\cdot(\cos\theta,\sin\theta)=1\cdot\cos\theta+0\cdot\sin\theta=\cos\theta.$$ As the dot product is invariant under rotation, the dot product of two unit vectors is always the cosine of the angle between them.
What is the length of pair of wires after twisting them around each other?
The axis of each wire will take the shape of a helix of radius $D/2$ and height $H$. The length of each helix is (for one turn) $$ L=\sqrt{H^2+\pi^2 D^2} $$ and we can invert this to obtain the desired new length: $$ H=\sqrt{L^2-\pi^2 D^2}. $$ This is the vertical distance between the centers of the end sections of a wire, total length will be $H+D\cos\theta$, where the slope $\theta$ of a wire (called "pitch" in the question) is given by $\tan\theta={H\over\pi D}$ (see figure). As for the minimum slope, my previous answer was not correct. Experimenting with GeoGebra, it seems that $\theta_\min=45°$, which is the slope in the figure. EDIT. The equations of the two helices, if $a=D/2$ and $b=H/(2\pi)$, can be written as: $$ (a\cos t, a\sin t, bt),\quad (-a\cos t, -a\sin t, bt),\quad 0\le t\le 2\pi. $$ Let's take any point on the first helix, e.g. $P=(a,0,0)$ (corresponding to $t=0$). Its distance $s$ from any point on the second helix is: $$ s(t)=\sqrt{2a^2(1+\cos t)+b^2t^2},\quad 0\le t\le 2\pi. $$ The wires don't intersect if $s\ge2a$, but it's easy to check that the minimum of $s(t)$ is $s(0)=2a$ only if $b\ge a$. On the contrary, if $b<a$ the minimum of $s(t)$ occurs when $t$ is the root of ${\sin t\over t}={b^2\over a^2}$, and this minimum is strictly less than $2a$. For the wires not to intersect, we must therefore require $b\ge a$, which translates in a bound on the slope: $$ \tan\theta={b\over a}\ge1,\quad\text{i.e.}\quad \theta\ge45°. $$ EDIT. Here's the GeoGebra code to draw the two surfaces in the figure above: Surface((a cos(t), a sin(t), b t) + r cos(u) (cos(t), sin(t), 0) + r sin(u) (b sin(t), -b cos(t), a) / sqrt(a² + b²), t, 0, 2π, u, 0, 2π) Surface((-a cos(t), -a sin(t), b t) + r cos(u) (-cos(t), -sin(t), 0) + r sin(u) (-b sin(t), b cos(t), a) / sqrt(a² + b²), t, 0, 2π, u, 0, 2π) Where $a$ is the distance of each helix from the axis of symmetry, $b$ is the height of each helix, and $r$ is the radius of the wire (which is the same as $a$ in figure above).
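As a quick numerical illustration of the formulas above (my own sketch; the numbers are made up, not taken from the question):

    from math import pi, sqrt, atan, cos, degrees

    def twisted_height(L: float, D: float) -> float:
        """Vertical extent H of one full turn of a wire of length L on a helix of diameter D."""
        return sqrt(L ** 2 - (pi * D) ** 2)       # inverts L = sqrt(H^2 + pi^2 D^2)

    L, D = 100.0, 2.0                              # hypothetical wire length and diameter
    H = twisted_height(L, D)
    theta = atan(H / (pi * D))                     # slope ("pitch") of the wire
    total = H + D * cos(theta)                     # end-to-end length as described above
    print(H, degrees(theta), total, degrees(theta) >= 45)   # the slope must be at least 45 degrees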
Is two-dimensional hyperbolic geometry unique up to isomorphism?
The appropriate notion here is isometry: if points $a$ and $b$ are mapped to $a'$ and $b'$ respectively, the distance between points $d(a,b) = d'(a',b')$. $d$ and $d'$ here denote the represented distance, not the Euclidean distance between the points in the model. Geometry is about measuring distances, so to say. Assuming four postulates of Euclid plus your variant of Playfair's axiom, you have that the curvature is fixed and negative, but the specific value of the curvature is not known. With curvature $-1$, a triangle with three angles of 45 degrees each will have an area of $\pi/4$. If the curvature is, say, $-2$, this triangle will have an area of $\pi/8$; in general, the sum of angles of a triangle minus $\pi$ equals area times the curvature (or, for surfaces where the curvature is not constant, the integral of the curvature). The curvature is only a matter of scale (a larger sphere will have smaller (positive) curvature, but it is essentially the same shape). When we fix curvature $-1$, all the common models (Poincaré, Klein, hyperboloid, half-plane) are isometric (my page lists the common models and several less common ones; but I have no references for how the postulates plus curvature fix the geometry). A useful analogy: cartographers use many projections of the surface of the sphere (stereographic, Mercator, etc.) but they all describe the same mathematical object on a flat 2D map, and since the sphere is not flat, none of them is perfect, and they have different advantages and disadvantages. The same is true about the models of hyperbolic geometry. Surfaces of constant curvature need not be isometric to the hyperbolic plane, because they can correspond to only a fragment of a plane (a disk is not isometric to the whole plane, even though both have curvature 0) or they can be wrapped (a cylinder is not isometric to the whole plane, even though both have curvature 0 -- this happens with the tractricoid aka pseudosphere, which is sometimes listed as a model of hyperbolic geometry). However, such cases do not satisfy the postulates. The isometric mappings between the common models not only exist, but they are also given by simple formulas. For example, the mapping between the half-plane and the Poincaré disk is an inversion, and the Klein/Poincaré/Gans models are obtained from the hyperboloid model by a simple perspective transformation.
Find the center and the radius of convergence for this complex series.
For power series, it is usually either the root test or the ratio test. Factorials work much more nicely with the ratio test than with the root test.
Problem in equality of two functions.
You can only alter the formula for a function in this way if the domain allows it. For example, in our case, take $f:\mathbb{R}\setminus\{2\}\to\mathbb{R}$ with $f(x)=\dfrac{(x-2)(x+2)}{x-2}=\dfrac{x^2-4}{x-2}$; then you can use algebra to transform the function into $g(x)=x+2$. These functions are unequal in general because $\operatorname{Dom}(f)\neq \operatorname{Dom}(g)$, but they are equal on the subset $\mathbb{R}\setminus\{2\}\subset\mathbb{R}$.
If $A$ is a symmetric and positive definite matrix then $\text{tr}(A)^n\geq n^n\det A$
Since $A$ is symmetric and positive definite, the eigenvalues of $A$ satisfy $\lambda_1, \ldots, \lambda_n > 0$. By AM-GM, we have $\dfrac{\lambda_1+\cdots+\lambda_n}{n} \ge \sqrt[n]{\lambda_1\cdots\lambda_n}$, i.e. $\dfrac{\text{tr}A}{n} \ge \sqrt[n]{\det A}$. Thus, $(\text{tr} A)^n \ge n^n\det A$, as desired.
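A quick numerical sanity check of the inequality (not part of the proof), using a randomly generated symmetric positive definite matrix:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4
    M = rng.standard_normal((n, n))
    A = M @ M.T + np.eye(n)            # symmetric positive definite by construction

    lhs = np.trace(A) ** n
    rhs = n ** n * np.linalg.det(A)
    print(lhs >= rhs, lhs, rhs)        # AM-GM on the positive eigenvalues guarantees lhs >= rhs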
What non-integer number has the smallest factorial?
As Martin R says, there is no known simple closed form for the minimum of the Gamma function. However, for fun, here are the first 1000 digits of the root (in Mathematica): FindRoot[D[Gamma[x], x], {x, 1.5}, WorkingPrecision -> 1000] which produces 1.46163214496836234126265954232572132846819620400644635129598840859878\ 6440353801810243074992733725592750556793365533053341617365778466985829\ 1771683816450246525426187920443843819783335597739619760747194319349371\ 7541405945193010996372416652777217279167325088046396007693297814490147\ 5185803414306536810631010706016949785457933765577116113646852653864407\ 7372589890682262958196750529119944311972207258664056482074952272808066\ 6492780264672546913947612363653574355170333094944302512419288581347767\ 7638037268207228647702315131118425758160327915893181546219377794692453\ 7619116776460200864228324583987363497206740454783034481431732283690209\ 3646770017559017693843888399883958228679946842407928700859042045977138\ 8194093679122118848402487784207298777528715900614407383331513574002791\ 5353912845037515421736664287138645800903908013264994637024811461027624\ 7799875927238664920666176370867887038347260422463147415264091765916362\ 0499923428977096412741183720620861677533192913168549317959136215151559\ 088748352371795155035
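For comparison, here is a rough Python/mpmath version of the same computation (my own sketch, not from the original answer): since $\Gamma(x)>0$ for $x>0$ and $\Gamma'(x)=\Gamma(x)\psi(x)$, the minimiser is the positive root of the digamma function.

    from mpmath import mp, findroot, digamma, gamma

    mp.dps = 60                    # working precision in decimal digits
    xmin = findroot(digamma, 1.5)  # root of psi(x), i.e. the minimiser of Gamma on (0, inf)
    print(xmin)                    # 1.46163214496836234126265954232572...
    print(gamma(xmin))             # the minimal value, roughly 0.8856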
Integral $\int_0^{\infty} \frac{\ln \cos^2 x}{x^2}dx=-\pi$
Let the desired integral be denoted by $I$. Note that $$\eqalign{ 2I&=\int_{-\infty}^\infty\frac{\ln(\cos^2x)}{x^2}dx= \sum_{n=-\infty}^{+\infty}\left(\int_{n\pi}^{(n+1)\pi}\frac{\ln(\cos^2x)}{x^2}dx\right)\cr &=\sum_{n=-\infty}^{+\infty}\left(\int_{0}^{\pi}\frac{\ln(\cos^2x)}{(x+n\pi)^2}dx\right) \cr &=\int_{0}^{\pi}\left(\sum_{n=-\infty}^{+\infty} \frac{1}{(x+n\pi)^2}\right)\ln(\cos^2x)dx \cr &=\int_{0}^{\pi}\frac{\ln(\cos^2x)}{\sin^2x}dx \cr } $$ where the interchange of the signs of integration and summation is justified by the fact that the integrands are all negative, and we used the well-known expansion: $$ \sum_{n=-\infty}^{+\infty} \frac{1}{(x+n\pi)^2}=\frac{1}{\sin^2x}.\tag{1} $$ Now using the symmetry of the integrand around the line $x=\pi/2$, we conclude that $$\eqalign{ I&=\int_{0}^{\pi/2}\frac{\ln(\cos^2x)}{\sin^2x}dx\cr &=\Big[-\cot(x)\ln(\cos^2x)\Big]_{0}^{\pi/2}+\int_0^{\pi/2}\cot(x)\frac{-2\cos x\sin x}{\cos^2x}dx\cr &=0-2\int_0^{\pi/2}dx=-\pi, } $$ and the announced conclusion follows.$\qquad\square$ Remark: Here is a proof of $(1)$ that does not use the residue theorem. Consider $\alpha\in(0,1)$, and let $f_\alpha$ be the $2\pi$-periodic function that coincides with $x\mapsto e^{i\alpha x}$ on the interval $(-\pi,\pi)$. It is easy to check that the exponential Fourier coefficients of $f_\alpha$ are given by $$ C_n(f_\alpha)=\frac{1}{2\pi}\int_{-\pi}^{\pi}f_\alpha(x)e^{-inx}dx=\sin(\alpha\pi)\frac{(-1)^n}{\alpha \pi-n\pi} $$ So, by Parseval's formula we have $$ \sum_{n\in\Bbb{Z}}\vert C_n(f_\alpha)\vert^2=\frac{1}{2\pi}\int_{-\pi}^\pi\vert f_\alpha(x)\vert^2dx $$ That is $$ \sin^2(\pi\alpha) \sum_{n\in\Bbb{Z}}\frac{1}{(\alpha\pi-n\pi)^2}=1 $$ and we get $(1)$ by setting $x=\alpha\pi\in(0,\pi)$.
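A quick numerical check of the expansion $(1)$, which is the only non-elementary ingredient above (my own sketch, independent of the proof):

    from math import pi, sin

    def partial_sum(x: float, N: int = 100_000) -> float:
        """Truncation of sum_{n=-N}^{N} 1/(x + n*pi)^2."""
        return sum(1.0 / (x + n * pi) ** 2 for n in range(-N, N + 1))

    x = 0.7
    print(partial_sum(x), 1.0 / sin(x) ** 2)   # both are approximately 2.4095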
Show that $m(\mathbb{A})=0$ where $\mathbb{A}=\{x\in \mathbb{B} \mid \exists \delta >0 \quad m\left((x-\delta,x+\delta)\cap \mathbb{B} \right) =0 \}$
Let $\{x_n\}$ be a sequence enumerating the rational numbers. The set of intervals $(x_n - 1/m, x_n +1/m)$ is countable, as the cartesian product of two countable sets is countable. Let $\{I_n\}$ be a sequence enumerating those intervals. Now let $$\mathbb A^\prime = \{x \in \mathbb B \mid \exists n \in \mathbb N (x \in I_n \text{ and } \ m(I_n \cap \mathbb B) = 0)\}.$$ I claim that $\mathbb A = \mathbb A^\prime$. The inclusion $\mathbb A^\prime \subseteq \mathbb A$ is clear. Conversely, for $x \in \mathbb A$, there exists $\delta \gt 0$ with $m\left((x-\delta,x+\delta)\cap \mathbb{B} \right) =0$. Let $m \in \mathbb N$ be such that $1/m \lt \delta/2$ and $n \in \mathbb N$ such that $\vert x - x_n \vert \lt 1/m$. For $y \in (x_n-1/m, x_n +1/m)$ we have $$\vert x- y \vert \le \vert x - x_n \vert + \vert x_n - y \vert \le 2/m \lt \delta$$ proving that $(x_n-1/m, x_n +1/m) \subseteq (x-\delta,x+\delta)$ and therefore that $x \in \mathbb A^\prime$. From there, you get that $\mathbb A$ is included in a countable union of null sets and is, therefore, a null set.
Probability Central Limit Application
Since $\sqrt{n(n+1)}/n\to 1$, the statistic $\frac{1}{n}\sum_{j=1}^{n}X_{j}$ has the same limiting distribution as $\frac{1}{\sqrt{n(n+1)}}\sum_{j=1}^{n}X_{j}$, and by the central limit theorem $$ \frac{\sum_{j=1}^{n}X_{j}}{\sqrt{n\left(n+1\right)}}\overset{d}{\longrightarrow}N\left(0,\frac{1}{2}\right)\quad\text{as } n\to\infty. $$
Double integral using polar coordinates
It is probably easiest to do a substitution such that your region of integration is translated over to a circle centered at the origin. Then your theta would be going from $0$ to $2\pi$ while your $r$ would be going from $0$ to the radius of the circle.
How many four-digit integers are there that contain exactly one $8$?
You made a good start. Indeed, if you fix the first spot as $8$, the second, third and fourth spots can be filled with any digit in $\{0,1,2,3,4,5,6,7,9\}$, hence you have $9^3$ possible choices. It remains to count how many numbers you could have when the first spot is not $8$. Notice that, as Peter suggests above, the first digit is usually supposed to be nonzero. Fix the second spot to be $8$. Then you have only $8$ possible choices for the first digit; in fact you can choose any number in $\{1,2,3,4,5,6,7,9\}$. The third and fourth spots can be filled with any digit different from $8$, that is, any digit in $\{0,1,2,3,4,5,6,7,9\}$. Thus you have $8\times 9^2$ choices fixing the second spot to be $8$. The same reasoning applies when you fix the third spot to be $8$ and when you fix the fourth spot to be $8$. In other words, when the first spot is not $8$ you have $8\times 9^2 + 8\times 9^2+8\times 9^2=3\times 8\times 9^2$ possible choices. Hence the total amount is $9^3+3\times 8\times 9^2$ numbers.
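A brute-force check of the count in Python (my own sketch):

    count = sum(1 for n in range(1000, 10000) if str(n).count("8") == 1)
    print(count, 9 ** 3 + 3 * 8 * 9 ** 2)   # both print 2673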
find the least positive residue of $1!+2!+3!+...+100!$ modulo each of the following integers
$$1!+2!+\dots+100!\\ \equiv 1! +2!+3\times 2!+ 4\times 3\times 2!+\dots\\ \equiv 1!+0+3\times 0+ 4\times 3\times 0+\dots\equiv 1!\pmod 2,$$ because $2!$ is congruent to $0\pmod 2$. As André Nicolas pointed out in the comments, $4!\equiv 0\pmod {12}$, which means that $5!$, which is $5\times 4!$, is congruent to $0$ as well. See if you can generalize this strategy to other moduli as well. EDIT: Let's do the $\pmod {12}$ example. $$1!+2!+\dots+100!\\ \equiv 1! +\ldots+4!+5\times 4! + 6\times 5\times 4!+7\times 6\times 5\times 4!+\dots\\ \equiv 1! +\ldots+24+5\times 24 + 6\times 5\times 24+7\times 6\times 5\times 24+\dots \pmod {12}$$ because $4!=24$ and $24=12\times 2$, which is congruent to $0\times 2=0\pmod{12}$. So $$1! +\ldots+24+5\times 24 + 6\times 5\times 24+7\times 6\times 5\times 24+\dots\\ \equiv 1!+2!+3!+0+5\times 0+6\times 5\times 0+\ldots\\ \equiv 1!+2!+3! \pmod{12},$$ and $1!=1,2!=2,3!=6$, so $1!+2!+3!=1+2+6=9$.
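The strategy is easy to confirm numerically; a short Python check (my own):

    from math import factorial

    s = sum(factorial(k) for k in range(1, 101))
    print(s % 2, s % 12)   # 1 and 9, as derived above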
Finding a Diagonal Matrix for a Linear Transformation
Besides the fact that you got the wrong value for $w_1^2+w_2^2$, the problem is that you’re only half done. You’ve found a matrix for $T$, but relative to the standard basis. The problem asks you to find a basis for which this matrix is diagonal. It’s an orthogonal projection, so the eigenvalues are $0$ and $1$ (check this yourself) and you should be able to figure out the corresponding eigenvectors without having to solve any equations. Hint: What does this projection do to the line $k[1,2]^T$ and what is its kernel? Update: In this two-dimensional case, you can think of the projection $T$ as breaking a vector down into two components: one parallel to $[1,2]^T$ and one perpendicular to it. The parallel component is unchanged by $T$, but the perpendicular one gets mapped to $0$, so is in $\ker T$. This is just like the situation in the standard basis: you have a component parallel to the $x$-axis and one perpendicular to it. If you project orthogonally onto the $x$-axis, the $x$-component of the vector remains unchanged, but the $y$-component becomes $0$. The matrix of this projection onto the $x$-axis (relative to the standard basis) is obviously $$M=\pmatrix{1&0\\0&0}.$$ So, for the transformation $T$, if you take $[1,2]^T$ and some vector perpendicular to it, such as $[-2,1]^T$, as a basis, the matrix of $T$ relative to that basis will also be $M$. The change-of-basis matrix that maps from this basis back to the standard one will, of course, have these two vectors as its columns. This leads to one way to motivate the idea of eigenvalues and eigenvectors, by the way: we ask what lines (planes, &c) are mapped to themselves by a linear transformation. That is, can we find a non-zero vector $\mathbf v$ and a scalar $\lambda$ such that $T\mathbf v=\lambda\mathbf v$?
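Here is a small numpy check of that change of basis (my own sketch, assuming the projection is onto the line spanned by $[1,2]^T$, as in the discussion):

    import numpy as np

    w = np.array([1.0, 2.0])
    P = np.outer(w, w) / (w @ w)           # matrix of the orthogonal projection T in the standard basis

    B = np.column_stack([w, [-2.0, 1.0]])  # change-of-basis matrix: eigenvectors as columns
    D = np.linalg.inv(B) @ P @ B           # matrix of T relative to the basis {[1,2]^T, [-2,1]^T}
    print(np.round(D, 12))                 # diag(1, 0), i.e. the matrix M above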
Prove that a given function is Lipschitz
The partial derivative $\frac{\partial f}{\partial x}$ is continuous, hence, by the Mean value theorem, $f$ is locally Lipschitz.
Solve differential equation (Double integral)
$$u^2f''+2uf'=(u^2f')'=0\to u^2f'=c_0\to f'=\frac{c_0}{u^2}\to f=\frac{c_1}u+c_2.$$
Products of irrational numbers
If $a$ is any rational number, and $b$ is any irrational number, then $c=a/b$ is irrational (it's pretty easy to prove that; I can give specifics if necessary) so the product of the two irrational numbers $b$ and $c$ is rational. And every case of a product of two irrational numbers being rational is an instance of exactly that situation, as you'll see if you think it through.
Injection and Surjection
Take $A= \big \{1,2 \big \}, B= \big \{1,2,3 \big \}, C= \big \{1,2 \big \}$. Define $f:A \rightarrow B, g:B \rightarrow C$ by \begin{align} f(1)=1, f(2)=2 \end{align} \begin{align} g(1)=1, g(2)=2, g(3)=2 \end{align} Then $g$ is not injective and $f$ is not surjective. And $g \circ f:A \rightarrow C$ is given by \begin{align} (g \circ f)(1) =g(f(1)) =g(1)=1 \\ (g \circ f)(2) =g(f(2)) =g(2)=2 \end{align} Note that $g \circ f$ is both injective and surjective.
Evaluation Of Logarithmic Limits
Yes, we can replace because $\ln$ is a continuous function, which gives: $$\lim_{x\rightarrow0}\ln(1+x)=\ln\left(\lim_{x\rightarrow0}(1+x)\right)=\ln(1+0)=0.$$
self-adjoint operator and unitary orthogonal matrix
For part 1, use Proposition 16.16 from Golan. Let $V$ and $W$ be finitely-generated inner product spaces, having ONB $B=\{v_1, \ldots, v_n\}$ and $D=\{w_1, \ldots, w_k\}$, respectively. Let $\alpha: V \to W$ be a linear transformation. Then $\Phi_{BD}(\alpha)$ is the matrix $A=[a_{ij}]$, where $a_{ji}=\langle \alpha(v_i),w_j\rangle$ and $\Phi_{DB}(\alpha^*)=A^H$. The proof of this proposition follows: For all $1 \leq i \leq n$, let $\alpha(v_i)= \displaystyle\sum_{h=1}^k a_{hi} w_h$. Then for all $1 \leq j \leq k$, we have $\langle \alpha(v_i), w_j \rangle= \langle \sum_{h=1}^k a_{hi} w_h, w_j \rangle=a_{ji}$ and also $\langle \alpha^*(w_j),v_i \rangle= \overline{\langle v_i, \alpha^*(w_j) \rangle}= \overline{\langle \alpha(v_i), w_j \rangle}=\overline{a_{ji}}$ as needed. The proof of part 2 is similar to the proof of part 1.
Penalizing small values
You have to decide what you want; there is a function to do that. If $r$ is recency, the simplest is to subtract $r$ from the score. If that is too big a penalty, subtract $ar$ for some constant $a$. Is $r=3$ lots worse than $r=1?$ How about $r=25$ compared to $r=5$? As you say, you can just add $\frac 1{a+r}$ to all your scores. As $a$ grows, the variation decreases. You can also subtract $\log r$. These are all monotonically decreasing with $r$. Since you haven't said what you want beyond that, I can't suggest one over the other.
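To make the choices concrete, here is a small Python sketch of the three penalties (the function name and constants are mine, purely illustrative):

    import math

    def penalized(score: float, r: float, a: float = 1.0, kind: str = "linear") -> float:
        """Apply one of the recency penalties discussed above to a raw score."""
        if kind == "linear":        # subtract a*r
            return score - a * r
        if kind == "reciprocal":    # add 1/(a+r); a larger a flattens the variation
            return score + 1.0 / (a + r)
        if kind == "log":           # subtract log r
            return score - math.log(r)
        raise ValueError(kind)

    for r in (1, 3, 5, 25):
        print(r, [round(penalized(10.0, r, kind=k), 3) for k in ("linear", "reciprocal", "log")])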
Continuous constant map and topologies
Yes. $\mathbb R \owns x \mapsto c \in \mathbb R$ for any real number $c$.
The stability of system using Lyapunov function
Perhaps this is relevant to your underlying question: Let $y=(y_1, \ldots, y_N)$ represent a vector in $\mathbb{R}^N$. Define the norm $||y|| = \sqrt{\sum_{i=1}^N y_i^2}$. Let $\mathcal{Y}$ be a subset of $\mathbb{R}^N$ (possibly being $\mathbb{R}^N$ itself). Let $V(y)$ be a function from $\mathcal{Y}$ to $\mathbb{R}$. Suppose the function $V(y)$ has the following property that we shall call property P: Property P: If $\{y[k]\}_{k=1}^{\infty}$ is an infinite sequence of vectors in $\mathcal{Y}$ that satisfies $\lim_{k\rightarrow\infty} ||y[k]||=\infty$, then $\lim_{k\rightarrow\infty} V(y[k])=\infty$. Let $x(t) = (x_1(t), \ldots, x_N(t))$ be a vector-valued function of time such that $x(t) \in \mathcal{Y}$ for all $t\geq 0$. Claim: If $V(y)$ has property P, and if there is a finite constant $C$ such that $V(x(t)) \leq C$ for all $t\geq 0$, then there is a finite constant $M$ such that $||x(t)||\leq M$ for all $t \geq 0$. In particular, if $V(x(t))$ is bounded for all $t\geq 0$, then $x(t)$ is bounded for all $t\geq 0$. Proof: Suppose there is no such $M$ (we will reach a contradiction). Then for each positive integer $k$, there is a time $t_k\geq 0$ such that $||x(t_k)|| > k$. It follows that $\{x(t_1), x(t_2), x(t_3), \ldots\}$ is an infinite sequence of vectors in $\mathcal{Y}$ such that $\lim_{k\rightarrow\infty} ||x(t_k)||=\infty$. Since $V(y)$ has property P, it follows that $\lim_{k\rightarrow\infty} V(x(t_k)) = \infty$, which contradicts the fact that $V(x(t))\leq C$ for all $t \geq 0$. Exercise 1: Let $\mathcal{Y} = \mathbb{R}^N$. Show that the functions $V(y) = ||y||$, $L(y) = ||y||^2$, $H(y) = \sum_{i=1}^Ne^{|y_i|}$ all have property P. Exercise 2: Show that the function $V(y) = \sum_{i=1}^{N} e^{y_i}$ does not have property P when $\mathcal{Y} = \mathbb{R}^N$, but does have property P when $\mathcal{Y} = \mathbb{R}_+^N = \{(y_1, \ldots, y_N) \in \mathbb{R}^N | y_i \geq 0 \: \: \forall i \in \{1, \ldots, N\}\}$. Exercise 3: Let $\mathcal{Y} = \mathbb{R}$. Show that the function $V(y) = |\arctan(y)|$ does not have property P. Exercise 4: Let $\mathcal{Y} = \mathbb{R}$. Show that the function $V(y)=0$ does not have property P.
Entropy of the upper and lower bits of a square number
With help from Peter Taylor's comment, I was able to prove the expression for $H_{\star}(n)$. I believe the approximation for $H^{\star}(n)$ could also be proven by noting that for $x>2^{n/2}$, $\sqrt{x^2-2^{n-1}}\leqslant \sqrt{q}$ or $\sqrt{q}\leqslant\sqrt{x^2+2^{n-1}}$, and then expanding these bounds as Taylor series to show that most of the higher bits of $\sqrt{q}$ are the same as the higher bits of $x$, hence the entropy of $q$ must be nearly the same as the entropy of $x$. Unfortunately the argument seems to get pretty messy so I have yet to formalize it. Let $(x,y)$ denote the greatest common divisor of $x$ and $y$. The proof of the expression for $H_{\star}(n)$ relies on the following proposition. Proposition. For two numbers $x,y$ with $(x,2^n)=2^m$, $x^2\equiv y^2 \mod 2^n$ iff $x\equiv\pm y\mod 2^{n-\min\{m+1,\lfloor n/2\rfloor\}}$. Proof. First suppose $(x,2^n)=2^m$ and $x\equiv\pm y\mod 2^{n-\min\{m+1,\lfloor n/2\rfloor\}}$, so that we can write $x=i2^m$ and $y = x+ k2^{n-\min\{m+1,\lfloor n/2\rfloor\}}$. Then \begin{align} y^2 &= \left(x+ k2^{n-\min\{m+1,\lfloor n/2\rfloor\}}\right)^2\\ &= x^2 + 2kx2^{n-\min\{m+1,\lfloor n/2\rfloor\}} + k^22^{2\left(n-\min\{m+1,\lfloor n/2\rfloor\}\right)}\\ &= x^2 + ki2^{n+m+1-\min\{m+1,\lfloor n/2\rfloor\}} + k^22^{n+\left(n-\min\{2m+2,2\lfloor n/2\rfloor\}\right)}\\ &\equiv x^2 \mod 2^n. \end{align} Now suppose $(x,2^n)=2^m$ and $x^2\equiv y^2 \mod 2^n$. Then $2^n|(x+y)(x-y)$. Certainly it must be the case that $2^{\lceil n/2 \rceil}=2^{n-\lfloor n/2 \rfloor}$ divides one of these terms. Now suppose $2^{m+2}$ divides both of these terms. Then we would also have $2^{m+2}|(x+y)+(x-y)=2x$, so $2^{m+1}|x$ contradicting our assumptions. Thus $2^{n-(m+1)}$ must divide either $(x+y)$ or $(x-y)$. Since $2^{n-\min\{m+1,\lfloor n/2\rfloor\}}$ equals either $2^{n-(m+1)}$ or $2^{n-\lfloor n/2 \rfloor}$, we in particular have $2^{n-\min\{m+1,\lfloor n/2\rfloor\}}$ divides $(x+y)$ or $(x-y)$, i.e. $x\equiv\pm y\mod 2^{n-\min\{m+1,\lfloor n/2\rfloor\}}$.$$\tag*{$\blacksquare$}$$ Now back to the original problem. \begin{align} H_{\star}(n) &= -\frac{1}{2^n}\sum_{x=0}^{2^n-1}\log_2\left(\frac{\#\{y:y^2\equiv x^2\mod 2^n\}}{2^n}\right)\\ &=-\frac{1}{2^n}\sum_{x=0}^{2^n-1}\log_2\left(2^{\min\{\log_2((x,2^n))+1,\lfloor n/2\rfloor\}-n}\cdot\left[x\equiv 0\mod 2^{n-\min\{\log_2((x,2^n))+1,\lfloor n/2\rfloor\}}?\,1:2\right]\right)\\ &=-\frac{1}{2^n}\sum_{x=0}^{2^n-1}\log_2\left(2^{\min\{\log_2((x,2^n))+2,\lfloor n/2\rfloor\}-n}\right)\\ &=\frac{1}{2^n}\sum_{x=0}^{2^n-1}\left(n-\min\{\log_2((x,2^n))+2,\lfloor n/2\rfloor\}\right)\\ &=n-\frac{1}{2^n}\sum_{x=0}^{2^n-1}\min\{\log_2((x,2^n))+2,\lfloor n/2\rfloor\}. \end{align} Since the number of values less than $2^n$ for which $2^m|x$ is exactly $2^{n-m}$, the sum restricted to the $x$'s such that $(x,2^n)=2^m$ with $m+2\leqslant \lfloor n/2\rfloor$ is just $(m+2)(2^{n-m}-2^{n-m-1})=(m+2)2^{n-m-1}$. Thus we get \begin{align} H_{\star}(n) &=n-\sum_{m=0}^{\lfloor n/2\rfloor-2}\frac{m+2}{2^{m+1}}-\lfloor n/2\rfloor2^{-\lfloor n/2\rfloor+1}. \end{align} From there a simple computation suffices to show the formula I gave originally is correct.
How to prove a statistic is sufficient using conditional probability argument?
You have a sample with $n$ components. Then the procedure is as follows. You take one component and test whether this component is satisfactory or not. You have two possible outcomes, let us say with probability $p=P(\mbox{ Component is satisfactory})$ and $1-p$ otherwise. You repeat this, take another component and so on. Each test is independent so $X=\{\mbox{ Number of satisfactory components}\}$ follows a binomial distribution $Bin(n,p)$ where $p$ is the proportion of satisfactory components. One can think of a binomial distribution as a sum of single independent Bernoulli trials. That is $$X=\sum_{i=1}^n X_i$$ where each $X_i$ is a Bernoulli trial with probability of success $p$ (in this case meaning the component is satisfactory). Clearly, an unbiased estimator for $p$ is to take a sample, count the number of satisfactory components and $$\hat{p} = \frac{X}{n}=\frac{1}{n} \sum_{i=1}^n X_i.$$ Already here we can see that the statistic $T(X_1,\dots, X_n)=\sum_{i=1}^n X_i$ is sufficient for $p$. Statistician $A$ is given the information $X_i$ of which components are satisfactory and which are not, whilst statistician $B$ is only given the value of $X$. You can see that this amount of information is enough to estimate $p$. Mathematically speaking, a sufficient statistic $T(X)$ for $p$ is such that $$P(X=x | T(X)=t, p) = P(X=x|T(X)=t)$$ where $X$ can be a vector. If you look at the formula above, this essentially means that the information of knowing $T(X)$ and/or $p$ is the same. In other words, $T(X)$ is sufficient (or provides sufficient information). In our case: $$P(X_1=x_1, \dots, X_n=x_n| \sum_{i=1}^n X_i=t ,p ) = P(X_1=x_1, \dots, X_n=x_n| \frac{1}{n}\sum_{i=1}^n X_i=\frac{t}{n},p ) = P(X_1=x_1, \dots, X_n=x_n| \frac{1}{n}\sum_{i=1}^n X_i=\frac{t}{n} )$$ where in the last step we see that $p$ is redundant, since we already know $\sum_{i=1}^n X_i$ and hence this is enough to know $p$, so conditioning on redundant knowledge is useless. Another rule of thumb for sufficient statistics is the following: You have some model/distribution function depending on some parameter $\theta$ and some statistic $T(X_1,\dots, X_n)$. You collect observations $x_1, \dots, x_n$ and want to do some inference on $\theta$. Then you compute the value of $T(x_1,\dots, x_n)$, which will be used to infer $\theta$. Then $T$ is sufficient if, having collected the data set and computed $T(x_1,\dots, x_n)$, we can "throw away" our sample. It is not needed anymore. If, on the other hand, the value $T(x_1,\dots, x_n)$ happens not to be sufficient, that means that you still can gather more information from the sample, so you should not "throw it away". A very illustrative and good example is here: http://en.wikipedia.org/wiki/Sufficient_statistic#Example
diffrential equation - prove that any solution is not intersecting with the x axis
Since $y\equiv 0$ is already a solution, by uniqueness the whole $x$-axis is excluded from all other solutions. Now to change sign you would have to cross the $x$-axis… Or assume that for some solution $y(t^*)=0$, then $y$ also solves the IVP with $y(t^*)=0$ and the only solution of that IVP is $y\equiv 0$.
Approach the covariance matrix
It is given that $X$ and $Y$ are jointly normal, $EX=EY=0$, $EX^{2}=EY^{2}=1$ and $cov(X,Y)=\rho$, which gives $EXY=\rho$. $X+Y$ is normal with mean $0$ and its variance is $E(X+Y)^{2}=EX^{2}+EY^{2}+2EXY=1+1+2\rho$. $P(W \leq w)=P(-\sqrt w \leq X \leq \sqrt w)=\int_{-\sqrt w} ^{\sqrt w} f(x)dx$ where $f$ is the standard normal density. $cov(X,W)=EX^{3}-EXEX^{2}=EX^{3}=0$ by symmetry of the standard normal distribution. I will leave the calculation of $cov (Z,W)$ to you.
If the $100$-th derivative of $f$ vanishes on $\Bbb R$, then $f$ is a polynomial.
Depending on taste and preference: $0$ is a polynomial. Integrate it, and you get a polynomial of degree $\le{0}$. Integrate that, and you get a polynomial of degree $\le{1}$. Integrate $100$ times and you have $f(x)$ as a polynomial of degree $\le{99}$. More formally, try a proof by induction. Prove that if the proposition is true for $n$ then it is true for $n+1$, then note that it is true if you differentiate $f(x)$ $n=0$ times.
How much water must be added to $10\%$ of $350\text{mL}$ alcohol to dilute it to $5\%$
Hint: Let $x$ be the amount in ml to be added to the drink, then: $0.05(350+x) = 0.1(350)$
Any way to calculate approximate SD without keeping list of numbers? Using only the mean + mean SD?
You can certainly do this; just requires keeping track of a few pieces of data, rather than a single number. I'd start by rewriting $$ \begin{align*} \frac{1}{n-1}\sum_{k=1}^{n}(x_k-\bar{x})^2&=\frac{1}{n-1}\left[\sum_{k=1}^{n}x_k^2-2\bar{x}\sum_{k=1}^{n}x_k+n(\bar{x})^2\right] \end{align*} $$ Note that to compute this, you just need to know three quantities: $n$, $\sum x_k$, and $\sum x_k^2$. So, to do this in a large-data context, you could think of taking each $x_k$, mapping it to the triple $(1,x_k,x_k^2)$, and then adding all of these triples. This can be done in a streaming manner without ever having to store a list of values. Then, if your final aggregate triple is $(n, \sigma_1,\sigma_2)$, you can compute the variance as $$ \frac{1}{n-1}\left[\sigma_2-\frac{(\sigma_1)^2}{n}\right] $$
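A minimal streaming implementation in Python of exactly this idea (my own sketch): each value is folded into the running triple $(n,\sigma_1,\sigma_2)$ and then discarded.

    def streaming_variance(xs):
        """Sample variance from one pass, keeping only n, sum(x) and sum(x^2)."""
        n = s1 = s2 = 0
        for x in xs:
            n += 1
            s1 += x
            s2 += x * x
        return (s2 - s1 * s1 / n) / (n - 1)

    data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
    print(streaming_variance(data))   # 4.571..., the same as statistics.variance(data)

One caveat worth noting: with very large values this formula can suffer from cancellation in floating point, in which case Welford's one-pass algorithm is the numerically safer variant.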
Elements in a convex set, regarding distance
So the answer was actually there right away. Thanks for both suggestions; they were really helpful in my thought process. Define $ z = \frac{r_2}{r_1 + r_2} x + \frac{r_1}{r_1 + r_2} y $.
series involving $\log(\tanh(\pi k/2))$ II
There is clearly a typo in the last equation. The sum $$\sum_{n = 1}^{\infty}\frac{1}{n(e^{2\pi n} + 1)}$$ should be replaced by $$\sum_{n = 1}^{\infty}\frac{1}{n(e^{n\pi^{2}/x} + 1)}$$ As shown in this answer we have $$\sum_{n = 1}^{\infty}\log\tanh (nx) = \log\vartheta_{4}(e^{-2x})\tag{1}$$ where $\vartheta_{4}(q)$ is a Jacobi theta function given by $$\vartheta_{4}(q) = \prod_{n = 1}^{\infty}(1 - q^{2n})(1 - q^{2n - 1})^{2} = \prod_{n = 1}^{\infty}\frac{(1 - q^{n})^{2}}{1 - q^{2n}} = \prod_{n = 1}^{\infty}\frac{1 - q^{n}}{1 + q^{n}} = \sqrt{\frac{2k'K}{\pi}}\tag{2}$$ In what follows we have $q = e^{-2x}$. We next calculate the value of a sum defined by $$F(q) = \sum_{n = 1}^{\infty}\log(1 - q^{n})\tag{3}$$ Note that the Dedekind eta function defined by the equation $$\eta(q) = q^{1/24}\prod_{n = 1}^{\infty}(1 - q^{n})\tag{4}$$ satisfies the relation $$\eta(q) = 2^{-1/6}\sqrt{\frac{2K}{\pi}}k^{1/12}k'^{1/3}\tag{5}$$ where $q = e^{-\pi K'/K}$. Taking logs we get $$\frac{\log q}{24} + F(q) = \frac{\log k}{12} + \frac{\log k'}{3} + \frac{\log 2}{3} + \frac{1}{2}\log\left(\frac{K}{\pi}\right)$$ or $$F(q) = \frac{\log k}{12} + \frac{\log k'}{3} + \frac{\log 2}{3} + \frac{1}{2}\log\left(\frac{K}{\pi}\right) + \frac{\pi K'}{24K}\tag{6}$$ Similarly from the relation $$\eta(q^{2}) = 2^{-1/3}\sqrt{\frac{2K}{\pi}}(kk')^{1/6}$$ we get $$F(q^{2}) = \frac{\log(kk')}{6} + \frac{\log 2}{6} + \frac{1}{2}\log\left(\frac{K}{\pi}\right) + \frac{\pi K'}{12K}\tag{7}$$ Similarly we get $$F(q^{4}) = \frac{\log(k^{4}k')}{12} - \frac{\log 2}{6} + \frac{1}{2}\log\left(\frac{K}{\pi}\right) + \frac{\pi K'}{6K}\tag{8}$$ Next we calculate the value of sum $$S = -\sum_{n = 1}^{\infty}\frac{1}{n(e^{2ny } + 1)}$$ We have \begin{align} S &= -\sum_{n = 1}^{\infty}\frac{1}{n(e^{2ny } + 1)}\notag\\ &= -\sum_{n = 1}^{\infty}\frac{q'^{n}}{n(1 + q'^{n})}\text{ (where }q' = e^{-2y})\notag\\ &= -\sum_{n = 1}^{\infty}\frac{1}{n}\sum_{k = 1}^{\infty}(-1)^{k - 1}q'^{nk}\notag\\ &= \sum_{k = 1}^{\infty}(-1)^{k}\sum_{n = 1}^{\infty}\frac{q'^{kn}}{n}\notag\\ &= \sum_{k = 1}^{\infty}(-1)^{k + 1}\log(1 - q'^{k})\notag\\ &= \sum_{k \text{ odd}}\log(1 - q'^{k}) - \sum_{k\text{ even}}\log(1 - q'^{k})\notag\\ &= \sum_{k = 1}^{\infty}\log(1 - q'^{k}) - 2\sum_{k = 1}^{\infty}\log(1 - q'^{2k})\notag\\ &= F(q') - 2F(q'^{2})\notag\\ &= F(q_{1}^{2}) - 2F(q_{1}^{4})\text{ (where }q_{1} = e^{-y})\notag\\ &= \frac{\log(ll')}{6} + \frac{\log 2}{6} + \frac{1}{2}\log\left(\frac{L}{\pi}\right) + \frac{\pi L'}{12L}\notag\\ &\,\,\,\, - \left(\frac{\log(l^{4}l')}{6} - \frac{\log 2}{3} + \log\left(\frac{L}{\pi}\right) + \frac{\pi L'}{3L}\right)\notag\\ &= -\frac{\log l}{2} + \frac{\log 2}{2} - \frac{1}{2}\log\left(\frac{L}{\pi}\right) - \frac{\pi L'}{4L}\tag{9} \end{align} where $l,l', L$ correspond to $q_{1} = e^{-y} = e^{-\pi L'/L}$. Now we use the fact that $y = \pi^{2}/2x$ and hence so that $q = e^{-2x}$ and $q_{1} = e^{-y}$ are sort of conjugate in the sense that $l = k', l' = k, L = K', L' = K$. 
Then we can see that the sum $$S = -\frac{\log k'}{2} + \frac{\log 2}{2} - \frac{1}{2}\log\left(\frac{K'}{\pi}\right) - \frac{\pi K}{4K'}\tag{10}$$ From equations $(1)$ and $(10)$ we can see that $$\log\vartheta_{4}(q) = S + \frac{1}{2}\log(2k') + \frac{1}{2}\log\left(\frac{K}{\pi}\right) - S$$ or $$\log\vartheta_{4}(q) = S + \frac{1}{2}\log(2k') + \frac{1}{2}\log\left(\frac{K}{\pi}\right) + \frac{\log k'}{2} - \frac{\log 2}{2} + \frac{1}{2}\log\left(\frac{K'}{\pi}\right) + \frac{\pi K}{4K'}$$ or $$\log\vartheta_{4}(q) = S + \log k' + \frac{\pi K}{4K'} + \frac{1}{2}\log\left(\frac{KK'}{\pi^{2}}\right)$$ We now use the fact that $$2x = -\log q = \frac{\pi K'}{K}$$ so that $x = \pi K'/2K$. Then we get $$\log\vartheta_{4}(q) = S + \frac{\pi^{2}}{8x} + \frac{1}{2}\log\left(\frac{k'^{2}KK'}{\pi^{2}}\right)$$ I wonder if the last term $(1/2)\log(x/2\pi)$ in question is correct or not. Update: I checked after some manipulation that if we use $$\log\vartheta_{4}(q) = -S + \frac{1}{2}\log(2k') + \frac{1}{2}\log\left(\frac{K}{\pi}\right) + S$$ then we get $$\log\vartheta_{4}(q) = -S + \log 2 + \frac{1}{2}\log(K/K') - \frac{\pi K}{4K'} = -S - \frac{1}{2}\log\left(\frac{x}{2\pi}\right) - \frac{\pi^{2}}{8x}$$ and therefore the right formula is \begin{align}\sum_{n = 1}^{\infty}\log \tanh (nx) &= \log\vartheta_{4}(e^{-2x})\notag\\ &= \sum_{n = 1}^{\infty}\frac{1}{2n}\left(1 - \tanh \left(\frac{\pi^{2}n}{2x}\right)\right) - \left\{\frac{\pi^{2}}{8x} + \frac{1}{2}\log\left(\frac{x}{2\pi}\right)\right\}\notag\\ &= \sum_{n = 1}^{\infty}\frac{1}{n(e^{n\pi^{2}/x} + 1)} - \left\{\frac{\pi^{2}}{8x} + \frac{1}{2}\log\left(\frac{x}{2\pi}\right)\right\}\notag \end{align} Thus the original formula given in the question is differing only in sign (apart from the typo mentioned in the beginning of my answer). The formula corresponds not to $\sum\log\tanh(kx)$ but rather to $\sum\log\coth(kx) = -\sum\log\tanh(kx)$.
To prove something is a functor isn't it enough to prove that it commutes with composition?
Here's a concrete counter-example. Every monoid (a set with an associative binary operation with left and right unit, i.e. a group without inverses) gives rise to a category: pick something arbitrary for the single object and have the hom-set be the elements of the monoid, composition be the multiplication, and the identity be the unit of the monoid. A functor between such categories is then a monoid homomorphism. Your question is then: is a semigroup homomorphism automatically a monoid homomorphism? A semigroup being like a monoid but without a unit. The answer is "no". The powerset of a singleton set, $\mathcal{P}(1)$, is a monoid with binary operation union and unit $\{\}$. Let $f : \mathcal{P}(1) \to \mathcal{P}(1)$ be defined by $f(s) = 1$. Then $1 = f(s\cup t) = f(s)\cup f(t) = 1 \cup 1 = 1$, so it is a semigroup homomorphism, but $f(\{\}) = 1 \neq \{\}$, so it is not a monoid homomorphism. That said, every surjective semigroup homomorphism between monoids is a monoid homomorphism. This proves that every full functor automatically preserves identities if it preserves composition.
Why does changing the center of a geometric power series change the interval of convergence?
You are not finding different intervals of convergence for the same power series. You are finding two different power series, centered at different points and converging on different intervals, for the same function. That function is of course $f(x) = 1/(1-x).$ Note that $f$ blows up at $1.$ So if you center the power series of $f$ at $0,$ you can't expect it to converge at $x=1$! But if you center the power series of $f$ at $-1,$ the distance to the blow up point is now $2.$ That is why you found a radius of convergence of $2$ for the other power series.
On a proof of a number field theorem
Here is one possible approach: If $\mathbb{Q}(\zeta_m)=\mathbb{Q}(\zeta_n)$, by comparing degrees we get that $\varphi(m)=\varphi(n)$. But it's easy to see that $\mathbb{Q}(\zeta_n)\cap\mathbb{Q}(\zeta_m)=\mathbb{Q}(\zeta_{(n,m)})$, so we can actually assume WLOG that $n\mid m$. Now factor $m=p_1^{e_1}\cdots p_\ell^{e_\ell}$, so that by assumption, after possibly reordering, $n=p_1^{f_1}\cdots p_k^{f_k}$ with $k\leqslant\ell$ and $f_j\leqslant e_j$. So, then, $$\varphi(m)=p_1^{e_1-1}(p_1-1)\cdots p_\ell^{e_\ell-1}(p_\ell-1)$$ and $$\varphi(n)=p_1^{f_1-1}(p_1-1)\cdots p_k^{f_k-1}(p_k-1)$$ By assumption, $\varphi(m)=\varphi(n)$. But, upon division we find that this implies that $$p_1^{e_1-f_1}\cdots p_k^{e_k-f_k}p_{k+1}^{e_{k+1}-1}(p_{k+1}-1)\cdots p_\ell^{e_\ell-1}(p_\ell-1)=1$$ Clearly we must have that $e_i-f_i=0$ for $i=1,\ldots,k$ and $e_i-1=0$ for $i=k+1,\ldots,\ell$ and also that $p_i-1=1$ for $i=k+1,\ldots,\ell$. Thus, if $k\ne\ell$, this last condition forces $p_i=2$ for $i=k+1,\ldots,\ell$, and since the $p_i$ are distinct this actually implies that $\ell=k+1$ and $p_\ell=2$. So then, it's easy to see that $$m=2 p_1^{e_1}\cdots p_k^{e_k}$$ and $$n=p_1^{e_1}\cdots p_k^{e_k}$$ which is exactly what you wanted.
Questions about sequences
This all looks fine and shows good ideas and effort. Let me mention a different trick based on the binomial formulas that often helps with square roots without resorting to derivatives: $$|\sqrt x-\sqrt y|=\frac{|x-y|}{\sqrt x+\sqrt y}\le\frac1{2\sqrt{\min\{x,y\}}} \cdot|x-y|$$ But note that either way $m$ may be too small to make $k=\frac1{2\sqrt m}$ work. Instead, I'd try the more direct estimate $$\begin{align} \alpha_{n+2}&\le|\sqrt{u_{n+1}}-2|+|\sqrt{u_n}-2|\\&=\frac{\alpha_{n+1}}{\sqrt{u_{n+1}}+2}+\frac{\alpha_{n}}{\sqrt{u_{n}}+2}\\ &\le\frac{\alpha_{n+1}+\alpha_n}{2+\sqrt m}.\end{align}$$ Therefore with $\beta_n=\max\{\alpha_{2n},\alpha_{2n+1}\}$ and $\kappa=\frac2{2+\sqrt m}<1$ $$ \alpha_{2n+2}\le\kappa\beta_n,\qquad\alpha_{2n+3}\le \frac{\kappa\beta_n+\beta_n}{2+\sqrt m}\le \kappa\beta_n,$$ hence $$\beta_{n+1}\le \kappa\beta_n$$ so that $\beta_n\to 0$, $\alpha_n\to 0$, $u_n\to 4$.
What is the exact meaning of the following sentence?
The only way a convex function can fail to be differentiable at a point is to have a "corner" like the one for $|x|$ at $0$. The derivative of that function is $-1$ for $x < 0$ and $+1$ for $x > 0$. Those are the values of the left and right derivatives at $0$. Since they are different there is no derivative at $0$. The quotation says that for the purposes it has in mind you can choose any value between $-1$ and $+1$ for the derivative at $0$.
Counter-example for the Slutsky Theorem
Let $Y_n=Y=X_n$ for every $n$, where $Y$ is not degenerate. Further let $X\stackrel{d}{=}Y$ and let $X$ and $Y$ be independent. Then $Y_n\stackrel{p}{\to}Y$ and $X_n\stackrel{d}{\to} X$ and $X_nY_n\stackrel{d}{\to}Y^2$. But because $Y$ is not degenerate, $Y^2$ and $XY$ do not have the same distribution. Consequently we do not have $X_nY_n\stackrel{d}{\to}XY$.
Prove independence of subset of mutually independent sets
This is not always true. Toss a fair $4$-sided die with the sides labelled $1,2,3,4$. Let $A$ be the event the number rolled is even, and let $B$ be the event the number is $\le 2$. Then $A$ and $B$ are independent. Let $C$ be the event the number rolled is $2$, and let $D=C$. Then $C\subseteq A$ and $D\subseteq B$, but $C$ and $D$ are not independent.
Prove that every vertex has degree of 2
Hint: From the equation you gave, we can divide both sides by $n$ to get the average degree: $$\frac1n \sum_{v\in V} d(v)=2$$ Now, if any vertex has a degree greater than the average, there must necessarily be another vertex with a degree lower than the average. What does that imply?
Proof, Mathematical Induction concept
It is necessary. Here is a slightly different example. Prove that $n=n+1$ for all $n \in \mathbb N$. Suppose that the statement is true for all $k \leq n$. Then $(n+1)+1=(n)+1$, as desired. The point is that without the base case, we can prove things vacuously true by using false assumptions. For your case, you show it is true for $n=1$; then the induction step will show that if it is true for $n$, then it is true for $n+1$. This will guarantee the result for all $n \in \mathbb N$. The latter half says $P(n) \implies P(n+1)$. So, if you have the latter half, then you have that $P(1) \implies P(2)$. However, if you have $P(1) \land (P(1) \implies P(2))$, then we can deduce $P(2)$, and so on.
A question from Titchmarsh's zeta function book.
For the functional equation $$ \xi(s) = \xi(1 - s) $$ we have the definition $$ \xi(s) = \frac{1}{2}\pi^{-s/2}s(s-1)\Gamma\left(\frac{s}{2}\right)\zeta(s).\! $$ The functional equation just gives $\xi(0)=\xi(1)$. The function $Z(s)=\frac{1}{2}\pi^{-s/2}\Gamma(\frac{s}{2})\zeta(s)$ has a meromorphic continuation to the whole $s$-plane, with simple poles at $s=0$ and $s=1$. So the question is, what the value of $\xi(0)$ is. Following your argument, we should have $\xi(0)=0$, and not $\xi(0)=1/2$. However, we have to take into account the simple poles, too.
Let $a,b,c$ be positive reals. Prove that $ab(a+b)+bc(b+c)+ca(c+a)\geq \sum_{cyc} ab\sqrt {{a\over b}(c+a)(c+b)}$
First it's useful to expand $$ab(a+b)+bc(b+c)+ca(c+a)=a^2b+b^2a+b^2c+c^2b+c^2a+a^2c$$ By Cauchy-Schwarz $$\left(\sum_{cyc} ab\sqrt {{a\over b}(c+a)(c+b)}\right)^2\le\left( \sum_{cyc}a^2(c+b)\right)\left(\sum_{cyc}ab(c+a)\right)$$ now expand the RHS : $$\left( ab(a+b)+bc(b+c)+ca(c+a) \right) \left(3abc+a^2b+b^2c+c^2a\right)$$ and recall that $3abc\le b^2a+c^2b+a^2c$ by AM-GM. Hence we get $$RHS\le \left( ab(a+b)+bc(b+c)+ca(c+a) \right)^2$$ and this is the desired inequality.
Counting number of solutions for $x = (a-1)(b-2)(c-3)(d-4)(e-5)$
I'll take "natural number" to mean at least 1 (so, in particular, not 0). You've got five distinct natural numbers less than 6, that means you have to use each of the numbers 1, 2, 3, 4, 5 exactly once. So you have a permutation of the set $\lbrace1,2,3,4,5\rbrace$. The non-zero condition on the product says that this permutation can't have any fixed points. So the question is a disguised way of asking you about the number of derangements of 5 objects. Now any good combinatorics text will tell you all about counting derangements; alternatively, just type the word into your favorite search engine and see what comes up.
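A brute-force confirmation that the count is the number of derangements of $5$ objects, namely $44$ (my own sketch):

    from itertools import permutations

    n = 5
    derangements = sum(
        1 for p in permutations(range(1, n + 1))
        if all(p[i] != i + 1 for i in range(n))   # no fixed points
    )
    print(derangements)   # 44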
Discrete Math Modulus Beginner Direct Proof
"$a$ is congruent to $b$ modulo 12" means that $r=\frac1{12}(a-b)$ is an integer. "$a$ is congruent to $b$ modulo 6" means that $s=\frac16(a-b)$ is an integer. So you need to prove that if $r$ is an integer, then $s$ is an integer. What is the relationship between $r$ and $s$?
About mixed strategy Nash Equilibrium
Let A choose the row, either Up or Down, and let B choose the column, Left or Right. Given any mixed strategy of A's, choosing U with probability $p$ and D with probability $1-p$, B's payoff is $q(\text{payoff to L})+(1-q)(\text{payoff to R})$, that is, $q[p+1-p]+(1-q)[p+1-p]=1$. Hence, given any mixed strategy of A's, any mixed strategy of B's will be a best reply. Now, what is the payoff to A, and what is A's best reply to B? It is to choose $p$ to maximize $p(\text{payoff to U})+(1-p)(\text{payoff to D})$, remembering that B is choosing between L and R with probabilities $q$ and $1-q$.
Continuity of the function $f(x,y)=\max\{x,y\}$ in $\mathbb{R}^2$.
You need to prove that $\max(x, y) < \max(a, b) + \epsilon$ and $\max(x, y) > \max(a, b) - \epsilon$, knowing that $x < a + \epsilon$ and $x > a - \epsilon$, and similarly for $y$ and $b$. Making the cases as you suggest is a good idea, and in the case $a < b$ I would encourage you to assume $\epsilon < b-a$ (it suffices to prove the claim for small $\epsilon$). Another way to see it: $\max(x, y) = \frac{x + y}{2} + \frac{|x - y|}{2}$, so it is continuous.
Prove that $\frac{a}{2a+\beta b}+\frac{b}{\alpha b+\beta a}\ge \frac{2}{\alpha +\beta }$
It's wrong. Try $\beta=1$, $\alpha=\frac{1}{2}$ and $a=b=1.$
Holomorphic and Harmonic functions
For a fixed $z_0\in U$, consider an open disc $D$ centered at $f(z_0)$ and contained in $V$. On $D$, the function $\phi$ is the real part of a holomorphic function $\Phi$, and $\Phi\circ f$ is holomorphic (so its real part $\phi\circ f$ is harmonic). In this way you can show that the Laplacian of $\phi\circ f$ is zero at every point in $U$.
Find $c$ in equation system with 2 equations and 4 variables
This simple procedure for finding an integer lattice basis (given some linear homogeneous diophantine equations) does not seem widely taught. One needs to know how to use matrices. There is a ton of material written on basis reduction; it is assumed that we know how to find a basis to begin with... $$ \left( \begin{array}{rrrr} 1&2&5&10 \\ 1&1&1&1 \\ \end{array} \right) \left( \begin{array}{rrrr} -1&2&-3&8 \\ 1&-1&4&-9 \\ 0&0&-1&0 \\ 0&0&0&1 \\ \end{array} \right) = \left( \begin{array}{rrrr} 1&0&0&0 \\ 0&1&0&0 \\ \end{array} \right) $$ $$ \left( \begin{array}{rr} -3&8 \\ 4&-9 \\ -1&0 \\ 0&1 \\ \end{array} \right) \left( \begin{array}{cc} 2&3 \\ 1&1 \\ \end{array} \right) = \left( \begin{array}{rr} 2&-1 \\ -1&3 \\ -2&-3 \\ 1&1 \\ \end{array} \right) $$ The Gram matrix of the initial basis is $$ \left( \begin{array}{cc} 26&-60 \\ -60&146 \\ \end{array} \right) $$ The vectors in the reduced basis are much closer to orthogonal, and the Gram matrix is closer to diagonal. Note that basis reduction for a two-dimensional lattice, as here, is easily done (by hand) using Gauss reduction of the quadratic form associated with the Gram matrix, keeping track of the two by two matrix that accomplishes Gauss reduction as $P^TH_1P = H_2$ $$ \left( \begin{array}{cc} 10&2 \\ 2&20 \\ \end{array} \right) $$ In the lattice given by making the system homogeneous, a reduced basis (as rows) is $$ ( 2,-1,-2,1) , \; \; \; ( -1,3,-3,1) \; \; . \; $$ Solving the original system gives one solution $( -107, 159, 0, 0) $ so that all integer solutions are found by $$ (\; \; \; -107 + 2s-t,\; \; \; 159 -s+3t,\; \; \; -2s - 3t, \; \; \;s+t \; \; \;)$$ Then it is just a matter of inequalities in $s,t$ to get all four elements non-negative. Here it is blown up. We can see that there are many lattice points inside or on the black quadrilateral. If $a,b,c,d$ must be positive, we count those strictly inside. If non-negative, we also include any lattice points on the boundary. As $c = -2s - 3t,$ we see that the line $2s + 3t = -40 $ (parallel to an edge) narrowly misses the quadrilateral, so that we know $c \leq 39 .$ A more careful calculation of $c = -2s-3t$ along the boundary shows that positivity of $a,b,c,d$ demands $c \leq 37,$ achieved when $s=40, t = -39$ and $(a,b,c,d) = (12,2,37,1) \; . \; \;$ The nearby (boundary) point $s= 39, t=-39$ in the diagram gives $(a,b,c,d) = (10,3,39,0) \; , \; \;$ which violates strict positivity. Another nearby (boundary) point $s= 38, t=-38$ in the diagram gives $(a,b,c,d) = (7,7,38,0) \; , \; \;$ which also violates strict positivity.
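For what it's worth, the lattice description is easy to check by brute force. The sketch below is mine, and it assumes the original inhomogeneous system was $a+2b+5c+10d=211$, $a+b+c+d=52$, which is what the particular solution $(-107,159,0,0)$ suggests; if the actual right-hand sides differ, only those two constants change.

    # Enumerate all non-negative integer solutions of the (assumed) system via the
    # parametrization (a,b,c,d) = (-107+2s-t, 159-s+3t, -2s-3t, s+t).
    best = None
    solutions = []
    for s in range(0, 200):          # c >= 0 and d >= 0 force s >= 0 and t <= 0
        for t in range(-200, 1):
            a, b, c, d = -107 + 2 * s - t, 159 - s + 3 * t, -2 * s - 3 * t, s + t
            if min(a, b, c, d) >= 0:
                solutions.append((a, b, c, d))
                if min(a, b, c, d) > 0 and (best is None or c > best[2]):
                    best = (a, b, c, d)

    print(len(solutions), best)      # largest c with all entries strictly positive: (12, 2, 37, 1)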
Extension of operator $T$ from a subspace $Y$ of $X$ to $\Bbb R^n$ without increasing the norm
If we use the max-norm on $\mathbb R^n$, then this is pretty easy. We need to compute the operator norm of $T$ in terms of the $T_i$'s. Let $x\in X$. Then $$ \|Tx\|_\infty = \max_i |T_i(x)| \le \max_i \|T_i\|_{Y^*}\cdot \|x\|_X. $$ Now let $i$ be such that $T_i$ is a functional with maximal norm, $\|T_i\| = \max_j\|T_j\|$. Let $\epsilon>0$ be arbitrary, and let $x$ with $\|x\|\le 1$ be such that $T_i(x) \ge \|T_i\|-\epsilon$. Then $$ \|Tx\|_\infty = \max_j |T_j(x)| \ge T_i(x) \ge \|T_i\|-\epsilon = \max_j \|T_j\|_{Y^*} -\epsilon. $$ Since $\epsilon>0$ was arbitrary, this proves $$ \|T\| = \max_j \|T_j\|_{Y^*}. $$ Now, $\tilde T_j$ is a norm-preserving extension of $T_j$, building the extension $\tilde T$. Using the above identity, we see $\|T\|= \|\tilde T\|$. This construction heavily uses the particular norm on $\mathbb R^n$. I do not see how one can prove norm-equality for an arbitrary norm on $\mathbb R^n$.
A Closed Form Lower Bound Approximating $p_{n,m,s} = n![z^n]\left(\sum_{k=0}^s\frac{z^k}{k!}\right)^m$
Note that $p_{n,m,s}$ can be broken into $$ p_{n,m,s} = p_{n,m,s}^{1} + p_{n,m,s}^{\geq 2}, $$ where $p_{n,m,s}^1$ is the number of ways to distribute $n$ balls into $m$ bins such that each bin has at most 1 ball and $p_{n,m,s}^{\geq 2}$ is the number of ways to distribute these balls so that at least 1 bin has at least 2 balls. Trivially, $p_{n,m,s} \geq p_{n,m,s}^{1}$. Note that $p_{n,m,s}^1 = m (m-1) (m-2) \cdots (m-n+1) = (m)_n$. Hence $$ p_{n,m,s} \geq (m)_n. $$ Depending on the nature of $n \ll m$, the main contribution to $p_{n,m,s}$ will come from $p_{n,m,s}^1$. Note that $$ p_{n,m,s}^{\geq 2} \leq {m \choose 1} {n \choose 2} (m)^{n-2}, $$ where we choose a bin to have at least 2 balls, then choose two balls in this bin; finally, distribute the remaining $n-2$ balls among any bins. Note that if $n^2/m \to 0$ (as $m \to \infty$), then $$ \frac{{m \choose 1} {n \choose 2} (m)^{n-2}}{(m)_n} \leq \frac{n^2}{m} \frac{m^n}{(m)_n} \leq \frac{n^2}{m} \left( \frac{m}{m-n} \right)^n = \frac{n^2}{m} \left( 1 + \frac{n}{m-n}\right)^n \leq \frac{n^2}{m} e^{n^2/(m-n)} \to 0. $$ In this case, $$ \frac{p_{n,m,s}}{p_{n,m,s}^1} \to 1. $$ So this lower bound is tight if $n^2/m \to 0$.
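The bound is easy to test numerically. Here is a short Python sketch (mine) that computes $p_{n,m,s}$ exactly from the truncated exponential generating function and compares it with the lower bound $(m)_n$:

    from fractions import Fraction
    from math import factorial

    def p(n: int, m: int, s: int) -> int:
        """n! [z^n] (sum_{k=0}^{s} z^k / k!)^m via repeated polynomial multiplication."""
        block = [Fraction(1, factorial(k)) for k in range(s + 1)]
        poly = [Fraction(1)]
        for _ in range(m):
            new = [Fraction(0)] * min(len(poly) + s, n + 1)
            for i, a in enumerate(poly):
                for j, b in enumerate(block):
                    if i + j <= n:
                        new[i + j] += a * b
            poly = new
        return int(poly[n] * factorial(n)) if n < len(poly) else 0

    def falling(m: int, n: int) -> int:
        out = 1
        for i in range(n):
            out *= m - i
        return out

    n, m, s = 4, 30, 2
    print(p(n, m, s), falling(m, n))   # 806490 versus the lower bound 657720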
Limsup, liminf, closure and interior of a Borel set
This is false. Consider e.g. $X_n=0$ if $n $ even and $X_n =1$ if $n $ is odd. Now take $B=\{0\} $. Then $\limsup_n P (X_n \in B) =1 >0=\liminf_n P (X_n \in \overline {B}) $. Similar examples apply to other inequalities of the form $\limsup_n A \leq \liminf_n B $ for the $A,B $ that you consider.
Calculating the response of a linear time-invariant system to arbitrary inputs
In general the solution to the initial value problem $\dot{x} = Ax + Bu$ subject to $x(0) = x_0$ is $x(t) = e^{At} x_0 + \int_0^t e^{A(t-\tau)} Bu(\tau) d \tau$. In the particular question, $A$ is diagonalisable, so computing $e^{At}$ is fairly straightforward.
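A small numerical sketch of the variation-of-constants formula, with made-up $A$, $B$, $x_0$ and input $u(t)=\sin t$ (none of these come from the original question), compared against `scipy.signal.lsim`:

```python
import numpy as np
from scipy.linalg import expm
from scipy.signal import lsim
from scipy.integrate import trapezoid

# made-up test system: x' = Ax + Bu, output y = x
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C, D = np.eye(2), np.zeros((2, 1))
x0 = np.array([1.0, 0.0])

T = np.linspace(0.0, 5.0, 501)
u = np.sin(T)

def x_formula(t):
    # e^{At} x0 + int_0^t e^{A(t-tau)} B u(tau) dtau, via trapezoidal quadrature
    taus = np.linspace(0.0, t, 401)
    integrand = np.array([expm(A * (t - tau)) @ B.ravel() * np.sin(tau) for tau in taus])
    return expm(A * t) @ x0 + trapezoid(integrand, taus, axis=0)

_, y, _ = lsim((A, B, C, D), u, T, X0=x0)
print(x_formula(3.0))                    # variation-of-constants formula
print(y[np.argmin(np.abs(T - 3.0))])     # lsim result; should agree closely
```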
$\sum a_n^2 $-convergent $\Rightarrow? \sum\frac{|a_n|}{n} -$ convergent
Your wording is ambiguous, presumably you meant iff? If $\sum_n |a_n|^2$ converges then so does $\sum_n \frac{|a_n|}{n}$, by the Cauchy–Schwarz inequality. The reverse direction is not true: take $a_n=1/\sqrt{n}$.
Can inverse fourier transform be formulated in terms of residue?
You have to take the $j$ out of $a+jw$, i.e. $\frac{1}{a+jw} = \frac{-j}{-ja+w}$. Then you can also take $-j$ out of the integral and this factor cancels with $j$. The j is the imaginary unit. Recall: $\oint \frac{f(w)}{w-a} dw = 2 \pi i f(a)$. Of course you have a residue!
Intuition of divergence and curl
The curl of a vector field is only defined in three dimensions. Imagine a small circle perpendicular to the vector $e_i$. The $e_i$ component of the curl, $(\nabla \times V)\cdot e_i$, can be described as the average over all points $v$ on that circle of the derivative of $V$ in the direction counterclockwise tangent to the circle at $v$. For example, suppose that we want to find the z-component of curl and we have a unit circle in the xy-plane parameterised as $v(\theta)=(\cos\theta, \sin \theta)$. The counterclockwise tangent vectors are $w(\theta)=(-\sin\theta, \cos\theta)$. This z-component is proportional to $\displaystyle \int_0^{2\pi} \partial_iV^j v_iw_j \mathrm{d}\theta=\int_0^{2\pi} (-\partial_1V^1 \cos\theta\sin\theta +\partial_1V^2\cos^2\theta-\partial_2V^1\sin^2\theta+\partial_2V^2\sin\theta\cos\theta)\ \mathrm{d}\theta=c(\partial_1V^2-\partial_2V^1)$
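A quick SymPy check of that last integral; the symbols $a_{ij}$ stand for the constant partials $\partial_i V^j$ at the centre of the circle, and the computation confirms that the constant of proportionality is $c=\pi$.

```python
import sympy as sp

theta = sp.symbols('theta')
a11, a12, a21, a22 = sp.symbols('a11 a12 a21 a22')   # a_ij = partial_i V^j

integrand = (-a11 * sp.cos(theta) * sp.sin(theta) + a12 * sp.cos(theta)**2
             - a21 * sp.sin(theta)**2 + a22 * sp.sin(theta) * sp.cos(theta))
print(sp.integrate(integrand, (theta, 0, 2 * sp.pi)))   # -> pi*(a12 - a21)
```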
Annihilator of Simple Module
I got an idea: if we restrict the map $f: C \to S$ to $R$, then we have an isomorphism between $f(R)=Ry$ and the quotient $R/(\ker(f) \cap R)$, where $\ker(f) \cap R$ is the annihilator of $S$ in $R$. So to show $R/(\ker(f) \cap R)$ is maximal, it is enough to show that $Ry$ is simple as an $R$-module. Let $ry \in Ry$ be a nonzero element. Then clearly $Ry$ contains $\langle ry \rangle$ as $R$-modules. Conversely, since $ry \in Ry \subset S$ is nonzero, $\langle ry \rangle = S \supset Ry$. So $Ry = \langle ry \rangle$ for any nonzero $ry \in Ry$. Hence $Ry$ is simple and $R/(\ker(f) \cap R)$ is maximal. Please let me know if there is ambiguity in my answer.
characteristic function for independent $X$ and $Y$
The function $\gamma(t):=\phi(t)/\phi(-t)$ has the following properties: $$\gamma(t) = \phi(t)/\phi(-t)=\frac{\phi^3(t/2)\phi(-t/2)}{\phi(t/2)\phi^3(-t/2)}=\gamma^2(t/2)$$ therefore $$\gamma(t)=\lim_{n\to\infty}{\gamma^{2^n}(t/2^n)}$$ It is easy to show that $$\gamma(0)=1$$ $$\gamma'(0)=0$$ That means the Maclaurin expansion as $t\to0$ will be $$\gamma(t)=1+o(t^2) $$ thus $$\gamma(t)=\lim_{n\to\infty}{{\left[1+o(t^2/4^n)\right]}^{2^n}}=1$$ Hence, $$\phi(t)=\phi(-t)$$ and further $$\phi(2t)=\phi^4(t)$$ Therefore, $$\phi(t)=\phi^{4^n}(t/2^n)$$ We know that $$\phi(t)=1+E(X)ti-\frac{E(X^2)}2t^2+o(t^2)=1-\frac{t^2}2+o(t^2)$$ and then we get that long expression. By taking the limit in it we get $$\phi(t)=\lim_{n\to\infty}{\left\{1-\frac12\frac{t^2}{4^n}\right\}^{4^n}}=e^{-\frac12t^2}$$
Steps in Proof of Convolution Theorem
You have $$\int_{-\infty}^\infty|g(z-x)|\,dx.$$ Do a substitution: $u = z-x$ and $du=-dx$. You get $$ \int_\infty^{-\infty} |g(u)| \, (-du). $$ Swapping the limits of integration absorbs the minus sign, so this is just $\int_{-\infty}^{\infty}|g(u)|\,du$.
How to explain this $3=2$ proof?
Note: $\displaystyle {\sqrt[n]{a^n} = |a|}$ when $n$ is even; the square root is the case $n=2$. So $$2-\frac52=-\frac{1}{2}$$ is negative, hence not the positive square root, which makes your assumption, and with it the entire proof, flawed.
Complicated Improper Integral convergence/divergence
The function $t\mapsto \frac{\ln^2\left(\lvert t\rvert\right)}{t^{1/3}}$ is continuous over $(-\infty, 0)$ so the only possible issue with your integral is at 0. If we show that, for example $$\int _{-1}^0\:\cfrac{\ln^2 \ (|t |)}{ t^{1/3} }dt$$ converges then your problem is solved. To make things more practical, let's go back to positive numbers. Let $\varepsilon\in(0,1)$ and write the change of variable $u=-t$ $$\int _{-1}^{-\varepsilon}\cfrac{\ln^2 \ (|t |)}{ t^{1/3} }dt = \int _{-1}^{-\varepsilon}\cfrac{\ln^2 \ (-t)}{ t^{1/3} }dt = -\int_{\varepsilon}^1\frac{\ln^2u}{u^{1/3}}du$$ Letting $\varepsilon\rightarrow 0$, your problem is thus equivalent to showing the convergence of $$\int_{0}^1\frac{\ln^2u}{u^{1/3}}du$$ Write one more change of variable $v=\ln(u)$ $$\int_{\varepsilon}^1\frac{\ln^2u}{u^{1/3}}du = \int_{\ln \varepsilon}^0e^{\frac{2v}{3}}v^2dv$$ Letting $\varepsilon\rightarrow 0$ again, we find $$\int_{0}^1\frac{\ln^2u}{u^{1/3}}du = \int_{-\infty}^0e^{\frac{2v}{3}}v^2dv=\frac{27}{4}$$ This method also allows you to compute your integral modulo some work on the integration boundaries.
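As a sanity check, SymPy reproduces the value obtained by the two substitutions:

```python
import sympy as sp

u = sp.symbols('u', positive=True)
val = sp.integrate(sp.log(u)**2 / u**sp.Rational(1, 3), (u, 0, 1))
print(val)   # should evaluate to 27/4
```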
$X\sim N(1,1)$. Find $\operatorname{Var}(X^2)$
$\newcommand{\Var}{\operatorname{Var}} \newcommand{\E}{\mathbb{E}}$ You know $X=Z+1$ where $Z\sim N(0,1)$. \begin{align} \Var(X^2)&= \Var((Z+1)^2)\\ & = \Var(Z^2+2Z+1)\\ &=\Var(Z^2+2Z)\\ &=\E[Z^4+4Z^3+4Z^2]-(\E[Z^2]+2\E[Z])^2\\ &=\E[Z^4]+4\E[Z^3]+4\E[Z^2]-(1+0)^2\\ &\stackrel{(1)}{=}\Var(Z^2)+\E[Z^2]^2+0+4-1\\ &=2+1+4-1\\ &=6 \end{align} Where in (1) we have used $\E[Z^{3}]=0$ (why?) and $E[Z^4]=\Var(Z^2)+E[Z^2]^2$. Moreover I have seen that you already knew that $Z^2\sim \chi_1^2$ and therefore $\Var(Z^2)=2$.
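A quick Monte Carlo check of the value (the sample size is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=1.0, size=2_000_000)   # X ~ N(1, 1)
print(np.var(x**2))                                  # should be close to 6
```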
Every open set in $\mathbb{R}$ is $F_\sigma$ sets
For unbounded intervals such as $( a , + \infty )$ you just have to look at the bounded side: for each $n$ let $F_n = [ a + \frac{1}{n} , + \infty )$, which is closed, and show that $( a , + \infty ) = \bigcup_{n=1}^\infty F_n$. (And analogously for $( - \infty , b )$.) To get that every open set in $\mathbb{R}$ is F$_\sigma$, note the following: Every open set in $\mathbb{R}$ is the (disjoint) union of countably many open intervals. Every open interval in $\mathbb{R}$ is F$_\sigma$. It is not too hard to prove that countable unions of F$_\sigma$ sets are also F$_\sigma$.
Vector spaces - $\min\{p\in\mathbb{N}|\text{ker}f^p=\text{ker}f^{p+1}\}=\min\{q\in\mathbb{N}|\text{im}f^q=\text{im}f^{q+1}\}$
The crucial facts are the implications $\ker g^2 = \ker g \implies \operatorname{im} g \cap \ker g = \{0\}$ and $\operatorname{im} g^2 = \operatorname{im} g \implies E = \operatorname{im} g + \ker g$ for any endomorphism $g$ of $E$. Let $m = \max \{ p_0,q_0\}$. Then we have $\operatorname{im} f^{2m} = \operatorname{im} f^m = \operatorname{im} f^{q_0}$ and $\ker f^{2m} = \ker f^m = \ker f^{p_0}$, so $$E = \operatorname{im} f^m \oplus \ker f^m = \operatorname{im} f^{q_0} \oplus \ker f^{p_0}.\tag{1}$$ We also have $E = \operatorname{im} f^{q_0} + \ker f^{q_0}$ and $\operatorname{im} f^{p_0} \cap \ker f^{p_0} = \{0\}$. The next thing we need is that if $A,B,C$ are subspaces of $E$ with $A \subsetneq B$, then $$B\cap C = \{0\} \implies (A + C) \subsetneq E,\tag{2}$$ and conversely $$E = A + C \implies B\cap C \neq \{0\}.\tag{3}$$ If we had $p_0 < q_0$, then $\operatorname{im} f^{p_0} \supsetneq \operatorname{im} f^{q_0}$. Apply $(3)$ with $A = \operatorname{im} f^{q_0},\, B = \operatorname{im} f^{p_0}$ and $C = \ker f^{p_0}$ to conclude $\operatorname{im} f^{p_0} \cap \ker f^{p_0} \neq \{0\}$ since $E = A+C$ by $(1)$. And if we had $q_0 < p_0$, then we had $\ker f^{q_0} \subsetneq \ker f^{p_0}$. Set $A = \ker f^{q_0},\, B = \ker f^{p_0}$ and $C = \operatorname{im} f^{q_0}$ in $(2)$ to conclude $\operatorname{im} f^{q_0} + \ker f^{q_0} \neq E$ since $B\cap C = \{0\}$ by $(1)$.
Riemann Integration and the Axiom of Choice
The axiom of choice is not needed at all. Note that the integral is defined as a limit over finite partitions. Moreover these partitions are partitions into intervals, so the partitions themselves are simple as well. The partitions are finite, so there is no need to worry about the existence of choice functions, and we don't choose a particular sequence of partitions. We consider all of them. And as the old saying goes, if you don't know what to choose - take everything.
Develop a function into Taylor series
It happens that$$f(x)=-\frac3{x-2}+\frac4{x-3}=\frac3{2-x}-\frac4{3-x}=\frac{3/2}{1-x/2}-\frac{4/3}{1-x/3}.$$ Now, use the geometric series.
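A SymPy check that the partial-fraction form and the resulting geometric-series coefficients $\frac32\left(\frac12\right)^n-\frac43\left(\frac13\right)^n$ agree term by term:

```python
import sympy as sp

x = sp.symbols('x')
f = -3 / (x - 2) + 4 / (x - 3)

series_from_f = sp.series(f, x, 0, 6).removeO()
series_from_geom = sum((sp.Rational(3, 2) * sp.Rational(1, 2)**n
                        - sp.Rational(4, 3) * sp.Rational(1, 3)**n) * x**n
                       for n in range(6))
print(sp.simplify(series_from_f - series_from_geom))   # -> 0
```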
System of trignometric equations.
Hint: From $\sin t=\sin(-s)\Rightarrow -s=t+2k\pi$ or $-s=-t+(2k+1)\pi$. Now consider each case separately. 1) $-s=t+2k\pi\Rightarrow$ plug it into, say, the second equation $K\sin(t-s)=\sin t$: $$K\sin(t+t+2k\pi)=\sin t\Leftrightarrow K\sin(2t+2k\pi)=\sin t\Leftrightarrow K\sin 2t=\sin t\Leftrightarrow 2K\sin t\cos t=\sin t$$ Now consider again two cases: $\sin t=0\,$ or $\,\sin t\neq 0$. If $\sin t=0$, see the second case. If not, divide by $\sin t\neq 0$ and get $2K\cos t=1\Rightarrow \cos t=\frac{1}{2K}$. For this we obviously need $\frac{1}{2K}\in[-1,1]$. Then $t=\arccos\frac{1}{2K}$, where the usual principal value for $t$ is in $[0,\pi]$. 2) $-s=-t+(2k+1)\pi$. Again plug it into, say, the second equation: $$K\sin(t-t+(2k+1)\pi)=\sin t\Leftrightarrow K\sin ((2k+1)\pi)=\sin t\Leftrightarrow 0=\sin t\Rightarrow t=l\pi,\,l\in\mathbb Z$$ and the equation is satisfied no matter what $K$ is. Therefore, for every $K\in\mathbb R$, the system has the family of solutions $t=l\pi,\,l\in\mathbb Z$, $-s=t+2k\pi=l\pi+2k\pi,\,k\in\mathbb Z$.
Rudin's definition of an ordered set
The proof of equivalence is straightforward by simply writing down the desired properties one by one. Let $<$ be an order on $S$ in the first sense and define $x\le y\iff x<y\lor x=y$. Then: For all $a\in S$, we have $a=a$ and hence $a\le a$. Assume $a\le b$ and $b\le a$. If $a<b$, then by the first property of $<$, neither $b<a$ nor $b=a$, contradicting $b\le a$. Hence $a=b$. Assume $a\le b$ and $b\le c$. If $a=b$ or $b=c$, then trivially $a\le c$. In the remaining case $a<b$ and $b<c$, also $a<c$ and so $a\le c$. Let $a,b\in S$. Then one of $a<b$, $a=b$, $b<a$ holds, so $a\le b$ or $b\le a$. Conversely, let $\le$ be an order on $S$ in the second sense. Define $x<y\iff x\le y\land x\ne y$. Let $x,y\in S$. Then $x\le y$ or $y\le x$, so $x<y\lor x=y\lor y<x$. By our definition of $<$, we cannot have $x<y\land x=y$, nor $y<x\land x=y$. There remains the possibility that $x<y\land y<x$. But then $x\le y\land y\le x$ implies $x=y$, a contradiction. Hence exactly one of $x<y,x=y,y<x$ holds. Assume $x<y$ and $y<z$. Then $x\le y$ and $y\le z$, so $x\le z$. If we had $x=z$, then $y<x$, but that contradicts $x<y$. Hence $x\ne z$ and so $x<z$.
Proving every subgroup of a nilpotent group is subnormal
Hint (in case of finite groups): use the “normalizers grow” principle in nilpotent groups.
Finding Multiplicative Inverse
Hint: it is easy to observe that $9\cdot(-4)=-36\equiv1 \pmod {37}$. On your other point, the Euclidean algorithm works fine.
Showing existence of a solution for a system of equations involving expectations of random variables
Here are some cases: Case 1: Suppose $f_0 \in Conv(f_1, ..., f_m)$. Suppose $f_0(x) = \sum_{i=1}^m \beta_i f_i(x)$ for some nonnegative constants $\beta_i$ that sum to 1. Then we can choose $\alpha_i=\beta_i$ for all $i \in \{1, ..., m\}$ and we obtain the desired equalities: $$ \int f_k(x)\log\left(\frac{\sum_{i=1}^m \alpha_i f_i(x)}{f_0(x)}\right)dx = 0 \quad \forall k \in \{1, ..., m\} $$ Case 2: $f_0$ is uniform over $[a,b]$. Suppose the PDFs all have support over an interval $[a,b]$ and that $f_0$ is uniform over this interval. If $f_0 \notin Conv(f_1, ..., f_m)$, a natural choice is to "project" $f_0$ into $Conv(f_1, ..., f_m)$ (in a KL sense): Let $P = \{(p_1, ..., p_m): p_i\geq 0, \sum_{i=1}^mp_i=1\}$ denote the probability simplex. Find probabilities $(\alpha_1, ..., \alpha_m) \in P$ that minimize the following expression: $$ g(\alpha_1, ..., \alpha_m)=\int_a^b \left(\sum_{i=1}^m \alpha_if_i(x) \right)\log\left(\frac{\sum_{i=1}^m\alpha_i f_i(x)}{f_0(x)}\right)dx$$ Suppose we are lucky enough to find a minimizer $\alpha^*=(\alpha_1^*, \ldots, \alpha_m^*)$ that has strictly positive entries, so that $\alpha_i^*>0$ for all $i \in \{1, ..., m\}$ (so it is not a boundary point of $P$). It follows by a property of minimizers that there is a constant $c$ such that: $$ \left.\frac{\partial g}{\partial \alpha_k}\right|_{\alpha^*} = c \quad \forall k \in \{1, ..., m\}$$ (Otherwise, if the partial with respect to $i$ is larger than the partial with respect to $j$, we can improve $g$ by taking a small amount $\delta$ from $\alpha_i$ and giving it to $\alpha_j$, which contradicts the fact that we are already at a minimum.) In particular for all $k \in \{1, ..., m\}$ we have \begin{align} c &= \int_a^b f_k(x)\log\left(\frac{\sum_{i=1}^m\alpha_if_i(x)}{f_0(x)}\right)dx + \int_a^b f_k(x)dx\\ &= \int_a^b f_k(x)\log\left(\frac{\sum_{i=1}^m\alpha_if_i(x)}{f_0(x)}\right)dx + 1 \end{align} and so the desired integrals are the same for all $k \in \{1, ..., m\}$. General case: Let $f_0, f_1, ..., f_m$ be general. The same projection works, since nothing above used the uniformity of $f_0$: minimize $$ g(\alpha_1, ..., \alpha_m) = \int \left(\sum_{i=1}^m \alpha_i f_i(x)\right)\log\left(\frac{\sum_{i=1}^m\alpha_if_i(x)}{f_0(x)}\right)dx $$ over $(\alpha_1, ..., \alpha_m)\in P$. Again suppose the minimizer $(\alpha_1^*, ..., \alpha_m^*)$ has $\alpha_i^*>0$ for all $i\in \{1, ..., m\}$. Then again we must have a constant $c$ such that $$ \left.\frac{\partial g}{\partial \alpha_k}\right|_{\alpha^*} = c \quad \forall k \in \{1, ..., m\}$$ Thus for all $k \in \{1, ..., m\}$ we get: \begin{align} c &= \int f_k(x)\log\left(\frac{\sum_{i=1}^m\alpha_if_i(x)}{f_0(x)}\right)dx + \int f_k(x)dx\\ &= \int f_k(x)\log\left(\frac{\sum_{i=1}^m\alpha_if_i(x)}{f_0(x)}\right)dx + 1 \end{align} and so the desired integrals are the same for all $k \in \{1, ..., m\}$. Fortunately, the function $g(\alpha_1, ..., \alpha_m)$ is always a convex function! So the minimization is always a convex minimization. In fact, convexity implies that there exist strictly positive values $\alpha_i$ that sum to 1 that satisfy the desired equalities if and only if there is a minimizer of the function $g$ over the simplex $P$ that is not a boundary point of $P$.
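Here is a small numerical sketch of the Case 2 projection; the two test densities (Beta(2,5) and Beta(5,2) on $[0,1]$) and the grid are arbitrary choices, and SciPy's SLSQP handles the simplex constraint.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta

x = np.linspace(1e-4, 1 - 1e-4, 5001)
dx = x[1] - x[0]
f0 = np.ones_like(x)                         # uniform density on [0,1]
fs = [beta.pdf(x, 2, 5), beta.pdf(x, 5, 2)]  # made-up f_1, f_2
m = len(fs)

def g(alpha):
    mix = sum(a_i * f_i for a_i, f_i in zip(alpha, fs))
    return np.sum(mix * np.log(mix / f0)) * dx

res = minimize(g, x0=np.full(m, 1.0 / m), method="SLSQP",
               bounds=[(1e-9, 1.0)] * m,
               constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1.0}])

alpha = res.x
mix = sum(a_i * f_i for a_i, f_i in zip(alpha, fs))
integrals = [np.sum(f_i * np.log(mix / f0)) * dx for f_i in fs]
print(alpha)       # interior point of the simplex
print(integrals)   # the integrals should agree up to the optimizer's tolerance
```

If the reported minimizer sits on the boundary of the simplex, the equal-integral conclusion need not hold, exactly as discussed above.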
finding a solution of a PDE
If you didn't have the condition $u(0,t)=0$ and the domain for the $x$ variable were $\mathbb{R}$ instead of $(0,\infty)$, then you would just use d'Alembert's formula. The trick in your situation is to think of $u$, $\varphi$ and $\psi$ as being extended from $(0,\infty)$ to $\mathbb{R}$ by odd reflection, that is, $u(-x,t)=-u(x,t)$, and likewise for $\varphi$ and $\psi$. The extended $u$ satisfies the wave equation with extended initial conditions $\varphi$ and $\psi$ (check this!), and thus you can apply d'Alembert's formula in this situation.
Continuous functions question
HINT: The left hand side is known to be differentiable by the fundamental theorem of calculus, so the right hand side is also differentiable. Differentiate both sides to form a differential equation, and then solve that.
Equation of a circle given two points and tangent line
HINT: If $(a,b)$ is the center of the required circle, we have $$(a+3)^2+(b+1)^2=(a-5)^2+(b-3)^2=r^2$$ where $r$ is the radius. Now, the perpendicular distance of the tangent from the center $(a,b)$ is again equal to the radius. Do you know how to calculate the perpendicular distance?
Longest Contiguous sequence Problem
Edit: second version. Let $T1(k) = [\ldots,(i,t), \ldots, (j,t),\ldots]$ and $T2(k) = [\ldots,(i',t), \ldots, (j',t),\ldots]$ be lists of pairs (index, length-$k$ substring) built from $s1$ and $s2$; here $t$ is a substring present at indices $i$ and $j$ in $s1$ (and at $i'$, $j'$ in $s2$). The lists are ordered by index. Computing $T1(1)$ and $T2(1)$ is trivial. To compute the other values: let $Ti'(2k)$ be the list of pairs $(i, t+t')$ such that $(i,t)$ and $(i+k,t')$ both belong to $Ti(k)$, where $t+t'$ denotes concatenation. $T1(2k)$ and $T2(2k)$ are the sublists of $T1'(2k)$ and $T2'(2k)$ reduced to their common substrings. Let $K$ be the last $k$ such that $T1(2^k)$ is not empty. The sought substring has a length $l$ with $2^K \leq l <2^{K+1}$. From the lists $\{Ti(2^j)\}_{j\leq K}$ you can compute $Ti(2^K+2^{K-1})$; if it is not empty, then $2^K+2^{K-1} \leq l <2^{K+1}$, else $2^K \leq l<2^K+2^{K-1}$, and so on. Complexity: building $T1(k+p)$ and $T2(k+p)$ from $T1(k)$, $T1(p)$, $T2(k)$ and $T2(p)$ can be performed in $O(n\log n)$. The global complexity is $O(n\log^2 n)$.
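For comparison, here is a concrete sketch of the closely related "binary search on the length" idea in Python. It stores the actual length-$k$ substrings in a hashed set rather than the index lists above, so it is simpler but somewhat heavier (roughly $O(n\,l)$ work per probe), and it is not exactly the doubling scheme described above.

```python
def longest_common_substring(s1, s2):
    """Longest common substring via binary search on the length."""
    def common_of_length(k):
        # all length-k substrings of s1, then scan s2 for a match
        seen = {s1[i:i + k] for i in range(len(s1) - k + 1)}
        for j in range(len(s2) - k + 1):
            if s2[j:j + k] in seen:
                return s2[j:j + k]
        return None

    lo, hi, best = 0, min(len(s1), len(s2)), ""
    while lo < hi:
        mid = (lo + hi + 1) // 2
        found = common_of_length(mid)
        if found is not None:
            lo, best = mid, found        # a common substring of length mid exists
        else:
            hi = mid - 1                 # none of length mid, hence none longer
    return best

print(longest_common_substring("xabcdy", "zabcdw"))   # -> abcd
```

The binary search is valid because the property "a common substring of length $l$ exists" is monotone in $l$ (any prefix of a common substring is again a common substring).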
How to define a set without set builder notation
They are not equivalent. If $S'=\{1,2,3,4,5,6,7\}$ then we have : $$\forall x \in \Bbb N \quad 0<x<5 \Rightarrow x \in S'$$ but $S \neq S'$.
Decompose integral of derivative and $e^{st}$ (laplace transform)
First, in the entire process, you assume that $f(t)$ grows more slowly than the exponential $e^{st}$, so that the integral makes sense, i.e., $\lim_{t\to \infty}e^{-st} f(t) = 0$. Assuming this, by integration by parts, we have $$\int_0^{\infty} e^{-st} f'(t)dt = \int_0^{\infty} e^{-st}d\left(f(t)\right) = \left. e^{-st}f(t)\right \vert_{t=0}^{\infty} - \int_0^{\infty}f(t) d(e^{-st}) = -f(0) + s\int_0^{\infty}f(t)e^{-st}dt$$ where we made use of the fact that $\lim_{t\to \infty}e^{-st} f(t) = 0$ while evaluating the upper limit.
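A SymPy spot-check of the resulting rule $\mathcal{L}\{f'\}(s)=sF(s)-f(0)$ for one concrete exponentially bounded $f$ (here $f(t)=e^{-2t}$, an arbitrary choice):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.exp(-2 * t)

lhs = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)
rhs = s * sp.laplace_transform(f, t, s, noconds=True) - f.subs(t, 0)
print(sp.simplify(lhs - rhs))   # -> 0
```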
$E[X^4]+E[X]^4 \geq 2E[X^2]^2$ for $X \geq 0$
No. Consider $X$ following a Bernoulli distribution with parameter $p\in[0,1]$. Then, since $X^4 =X^2 = X$, your inequality is equivalent to $\mathbb{E}[X]+\mathbb{E}[X]^4 \geq 2\mathbb{E}[X]^2$, i.e., $$ p + p^4 \geq 2p^2 $$ which is not always true for $p\in[0,1]$. (E.g., it fails for $p=3/4$).
limit of an expression
You can't use l'Hôpital's rule if $f,g,h$ are not differentiable. You can't say anything as long as you don't know more about $f,g$ and $h$.
Norm of sum of $n$ vectors with norm $\leq 1$ is equal to $n$
First note that by the triangle inequality we have that $$\|\sum^{n}_{i=1}\xi_{i}\|\leq \sum^{n}_{i=1}\|\xi_{i}\|\leq \sum^{n}_{i=1}1=n$$ Hence $\|\sum^{n}_{i=1}\xi_{i}\|=n$ forces $\|\xi_{i}\|=1$ for all $i$. Now suppose $\xi_{i}\neq \xi_{j}$. Note that if there is a $\lambda\in\mathbb{R}$ such that $\lambda\xi_{i}=\xi_{j}$, then $\lambda=\pm 1$ as $$1=\|\xi_{j}\|=|\lambda|\|\xi_{i}\|=|\lambda|$$ and since $\xi_{i}\neq \xi_{j}$ we have $\lambda=-1$. Then $$\|\sum^{n}_{k=1}\xi_{k}\|=\|\sum_{k\in\{1,...,n\}\setminus\{i,j\}}\xi_{k}\|\leq n-2.$$ So $\xi_{i}$ and $\xi_{j}$ are linearly independent. It follows that \begin{align*} \|\sum^{n}_{k=1}\xi_{k}\|&\leq n-2+\|\xi_{i}+\xi_{j}\|=n-2+\|(1+\langle\xi_{i},\xi_{j}\rangle)\xi_{i}+(\xi_{j}-\langle\xi_{i},\xi_{j}\rangle\xi_{i})\|\\ &=n-2+\sqrt{(1+\langle\xi_{i},\xi_{j}\rangle)^{2}+\|\xi_{j}-\langle\xi_{i},\xi_{j}\rangle\xi_{i}\|^{2}}\\ &=n-2+\sqrt{(1+\langle\xi_{i},\xi_{j}\rangle)^{2}+1-\langle\xi_{i},\xi_{j}\rangle^{2}}\\ &=n-2+\sqrt{2+2\langle\xi_{i},\xi_{j}\rangle}, \end{align*} where we used that $\xi_{j}-\langle\xi_{i},\xi_{j}\rangle\xi_{i}$ is orthogonal to $\xi_{i}$ and that $\|\xi_{i}\|=\|\xi_{j}\|=1$. Since by Cauchy–Schwarz we have that $\langle\xi_{i},\xi_{j}\rangle\leq\|\xi_{i}\|\|\xi_{j}\|\leq1$, with equality only if $\xi_{i}$ and $\xi_{j}$ are linearly dependent, we get $$\|\sum^{n}_{k=1}\xi_{k}\|\leq n-2+\sqrt{2+2\langle\xi_{i},\xi_{j}\rangle}<n-2+2=n.$$ As this is a contradiction we find that all $\xi_{i}$ are the same.
Taking a Putnam (General Questions)
The Putnam competition is offered every year on the first Saturday in December. It is limited to undergraduates, but not limited to math majors - any undergraduate in the US is eligible to take it. However, if you intend on taking the exam you need to let your math department know, preferably sometime in October or so; the Putnam committee only sends as many exams as necessary to each school. In theory, every problem can be answered by someone who has had some education in Euclidean geometry, linear algebra, definitions in abstract algebra, and calculus (up to multivariable). In practice, some exposure to number theory, basic real analysis, and combinatorics will probably help. You should never need things like complex analysis, topology, advanced group theory, or other advanced topics to solve a problem on the Putnam. Knowledge of more advanced topics is never necessary, and often not particularly helpful either (though many problems have multiple solutions, and sometimes some of these solutions can pull in more advanced material). The Putnam is all about knowing the tricks to solve a problem. There is a sort of standard toolbox for Putnam problems, and for the most part you shouldn't have to deviate outside of this toolbox. Things like the basic inequalities (AM-GM-HM, Cauchy-Schwarz, Holder, etc.), generating functions, pigeonhole principle. Some institutions offer a Putnam seminar or problem-solving session where such a toolbox is developed. The first problems on each set are usually the easiest. The fifth and sixth problems are usually quite difficult for all takers. There are years where no one in the top 200 takers score on one of these problems. For scoring, it's better to provide a very complete solution to one problem than provide incomplete solutions to two or three problems. The scoring committee is very stingy with partial credit; answers usually score 0,1,2, 9, or 10, and rarely in between. There is, strictly speaking, not really much of a learning gap between the problems, only a major difficulty gap after problem 4. To do well on the Putnam, you need practice with competition math problems. These sorts of problems are typically eventually solvable using something in a standard toolbox, but you need to put the problem in a form where you can recognize what tools you need. This sort of intuition and creativity can only be gained with practice, since by design the problems will try to avoid being intuitively approachable. If you decide to take the Putnam, remember not to take it overly seriously. While doing well can be a plus to your CV should you decide to apply to graduate school or some mathematics REUs, it is at best a rather minor plus. There is no such thing as doing poorly on a test whose median score is a zero. It's supposed to be for fun, so you should have fun!
Doubt on locus of a median point
The question possibly creates confusion by using $a$ and $b$, which could be assumed to mean constants. I am guessing the question is: If segment $AB$ varies such that $A$ is on the $x$-axis, $B$ in on the $y$-axis, and the length of the segment is the constant $2l$, what is the equation of the locus of the midpoint of $AB$. Now if $A$ was $(x,0)$ and $B$ was $(0,y)$, what is the relation between $x$ and $y$ which will ensure that the length of the segment $AB$ is $2l$? Spoiler: $x^2 + y^2 = 4l^2$ Using that, can you find an equation for the locus of the midpoint $(x', y')$? Spoiler: $x'^2 + y'^2 = l^2$
Cyclic sylow subgroup of order 9 with 3-core of order 3
Such groups are easy to construct: let $C_9$ act on something through a $C_3$-quotient. E.g. construct the semi-direct product $C_7\rtimes C_9$ such that $C_9$ acts on $C_7$ through the unique automorphism of order 3. Then, the Sylow 3-subgroups are not normal, so there is more than one of them, but there is a unique $C_3$, which is the centre of $G$.
convergence radius of two specific power series
Answer for b). What you have calculated is the radius of convergence of $\sum k8^{k}x^{k}$. The given series is convergent if $|x^{3}| < \frac 1 8$ and divergent for $|x^{3}| >\frac 1 8$. Can you see from this that the radius of convergence is $\frac 1 2$? Part a). It is not true that $(k!)^{1/k} \to \infty$ as $k \to \infty$. Use Stirling's approximation for this part.
Example of non-amenable group which is the inverse limit of amenable groups
The paper "Amenable semigroups" of Day answers the first question. The free group is an inverse limit of amenable groups. The second question remains open.
Can the product of two different* and non-reciprocal* irrational numbers be rational?
No, there aren't. If $ab=q$ with $q\in\mathbb Q$, then $a=b^{-1}q$. Note that if $a=bq$ and $b=\sqrt p$ for some $p\in\mathbb Q$, then$$a=b^{-1}b^2q=b^{-1}pq.$$Therefore, the first case is a particular type of the second one.
Psedoinverse and Left Inverse
The pseudoinverse of a matrix $A$ exists for any matrix, and is uniquely determined. I will work over the real numbers, though pseudoinverses also exist over complex numbers. To explain what is going on here, let me take you back a bit to just regular functions (the matrices define linear transformations, which are special types of functions between vector spaces, and pseudoinverses and left inverses are related to their nature as functions). Say $f\colon X\to Y$ is a function. A left inverse of $f$ is a function $g\colon Y\to X$ such that $g\circ f = \mathrm{id}_X$. A right inverse of $f$ is a function $h\colon Y\to X$ such that $f\circ h=\mathrm{id}_Y$. An inverse (or a two-sided inverse) is a function $\mathfrak{F}\colon Y\to X$ which is both a left and a right inverse of $f$: that is, $\mathfrak{F}\circ f = \mathrm{id}_X$, and $f\circ\mathfrak{F}=\mathrm{id}_Y$. One can show that if $f$ has both a left inverse and a right inverse, then they are the same function and that function is a two-sided inverse, which is then unique, and so we denote it by $f^{-1}$: indeed, if $g$ is a left inverse to $f$, and $h$ is a right inverse to $f$, then $$g = g\circ \mathrm{id}_Y = g\circ(f\circ h) = (g\circ f)\circ h = \mathrm{id}_X\circ h = h,$$ so $g=h$. Now for functions between sets (when we don't ask any kind of structure preservation, continuity or anything like that) we have: Theorem 1. Let $f\colon X\to Y$ be a function, with $X$ not empty. The following are equivalent: $f$ is one-to-one. $f$ has a left inverse. and (assuming the Axiom of Choice; if you don't know what that means, don't worry about it, it's a technical logical issue): Theorem 2. Let $f\colon X\to Y$ be a function. The following are equivalent: $f$ is onto. $f$ has a right inverse. As a consequence, we get: Theorem 3. Let $f\colon X\to Y$ be a function. The following are equivalent: $f$ is bijective (one-to-one and onto). $f$ has a two-sided inverse. Now, it is instructive to see the proof that (1) implies (2) in Theorem 1, because it is relevant to your query. Suppose $f\colon X\to Y$ is one to one. We construct a left inverse $g\colon Y\to X$ for $f$ as follows: Since $X$ is not empty, let $x_0\in X$ be an arbitrary element. If $y\in\mathrm{image}(f)$, then we know there exists $x_y\in X$ such that $f(x_y)=y$. And since $f$ is one-to-one, $x_y$ is uniquely determine by $y$. So define $g$ as follows: $$g(y) = \left\{\begin{array}{ll} x_y & \text{if }y\in\mathrm{image}(f)\\ x_0 & \text{if }y\notin \mathrm{image}(f). \end{array}\right.$$ Now, if $x\in X$, then $f(x)\in Y$, and $x_{f(x)} = x$. So for all $x\in X$, $g(f(x)) = x_{f(x)} = x$, hence $g\circ f = \mathrm{id}_X$, showing that $g$ is a left inverse of $f$. Now notice that there are a number of choices we can make here: We can select $x_0$ arbitrarily; if $X$ has more than one element and $f$ is not onto, then picking different $x_0$s will give us different left inverses to $f$. If $X$ is has more than one element and $f$ is not onto, then we can even pick different values for different things not in $\mathrm{image}(f)$, arbitrarily, and get left inverses. Now, let's go back to matrices and linear transformations. We can't just make arbitrary choices and hope to get a linear transformation (i.e., a matrix). For example, if we decide that $f(x) = a$ and that $f(y)=b$, then we better have $f(x+y) = a+b$. So it's not as simple to construct even one-sided inverses. 
Now, what we can do is define the left inverse on a basis, arbitrarily, and extend it to a linear transformation. So, say $T\colon V\to W$ is a linear transformation that is one-to-one (has trivial nullspace); you can think of it as an $m\times n$ matrix $A$, mapping $\mathbb{R}^n$ to $\mathbb{R}^m$. Since we are assuming it is one-to-one, we must have $n\leq m$. So, we can look at the basis $\mathbf{e}_1,\ldots\mathbf{e}_n$ of $\mathbb{R}^n$, and since $T$ is one-to-one, the images form a linearly independent subset of $\mathbb{R}^n$; these are the columns $A$. To construct a left inverse, we can: Extend $A\mathbf{e}_1,\ldots, A\mathbf{e}_n$ to a basis for $\mathbb{R}^m$, by adding vectors $\mathbf{w}_{n+1},\ldots,\mathbf{w}_m$ to this list to get a basis $\gamma= [A\mathbf{e}_1,\ldots, A\mathbf{e}_n,\mathbf{w}_{n+1},\ldots,\mathbf{w}_m]$. Define a new linear transformation/matrix $U$ by defining it on $\gamma$ as $$\begin{align*} U(A\mathbf{e}_1) &= \mathbf{e}_1\\ &\vdots\\ U(A\mathbf{e}_n) &= \mathbf{e}_n\\ \end{align*}$$ and then just picking any vectors $\mathbf{v}_{n+1},\ldots,\mathbf{v}_m$ in $\mathbb{R}^n$ and letting $$\begin{align*} U(\mathbf{w}_{n+1}) &= \mathbf{v}_{n+1}\\ &\vdots\\ U(\mathbf{w}_{m}) &= \mathbf{v}_m. \end{align*}$$ Then extend linearly; then figure out what this linear transformation is on the standard basis for $\mathbb{R}^m$, and thus get a matrix $B$. Since $U\circ T = \mathrm{id}_{\mathbb{R}^n}$, it follows that $BA=I_n$. That is, $B$ is a left inverse of $A$. Now again notice how much freedom you have in selecting $B$: you can arbitrarily extend your original list to the basis $\gamma$; and you can arbitrarily decide what happens to the new vectors. So that if I go home and get a left inverse and you go home and get a left inverse, we have a hope of coming back with the same answer, we can reduce the choices by agreeing that some choices make more sense than others, or at least make things easier. One such thing we could do is to just agree that we will let those extra basis vectors go to $\mathbf{0}$. That is, we will always pick $\mathbf{v}_{n+1}=\cdots=\mathbf{v}_m = \mathbf{0}$. That's not an unreasonable choice. However, it is very difficult to agree on how we will deal with the choices of $\mathbf{w}_{n+1},\ldots,\mathbf{w}_{m}$ in the abstract. In fact, there is no good way of having an a priori agreement in general without bringing in more structure. So we bring in more structure: the inner product. Given any subspace $W$ of $\mathbb{R}^k$, we let $$W^{\perp} = \{\mathbf{x}\in\mathbb{R}^k\mid \langle \mathbf{x},\mathbf{w}\rangle = 0\text{ for all }\mathbf{w}\in W\},$$ where $\langle \mathbf{x},\mathbf{w}\rangle$ is the inner product (you may know it as the dot product; that's one type of inner product). Then $W^{\perp}$, the orthogonal complement of $W$, is a subspace of $W$. Turns out, it is always the case that every vector in $\mathbb{R}^k$ can be written uniquely as a vector in $W$ plus a vector in $W^{\perp}$. This gives us a way of agreeing, ahead of time, on how we will define a left inverse: Say $A$ is $n\times m$, and has trivial nullspace. Calculate the range of $A$, $R(A)$. Calculate the orthogonal complement $R(A)^{\perp}$ of the range of $A$. Define $U\colon\mathbb{R}^m\to \mathbb{R}^n$ as follows: i. if $\mathbf{y}\in R(A)$, let $\mathbf{x}_{\mathbf{y}}\in\mathbb{R}^n$ be the unique vector such that $A\mathbf{x}_{\mathbf{y}} = \mathbf{y}$. ii. 
If $\mathbf{w}\in \mathbb{R}^m$, write it uniquely as $\mathbf{w}=\mathbf{y} + \mathbf{z}$, where $\mathbf{y}\in R(A)$ and $\mathbf{z}\in R(A)^{\perp}$. iii. Define $U(\mathbf{w}) = U(\mathbf{y})+U(\mathbf{z}) = \mathbf{x}_{\mathbf{y}} + \mathbf{0} = \mathbf{x}_{\mathbf{y}}$. One can then check that $U$ is a left inverse to the linear transformation defined by $A$, and so if $B$ is the standard matrix representation of $U$, then $B$ is a left inverse of $A$. What if $A$ is not one-to-one? Then we can't find a left inverse. Basically the problem is that if $\mathbf{x}$ is in the nullspace of $A$, then as soon as you apply $A$ you lose all information about $\mathbf{x}$. So instead we try to find something that will undo $A$ in some "essential part" of $A$: take the nullspace of $A$, $N(A)$, and find a basis for it, $\mathbf{z}_1,\ldots,\mathbf{z}_k$. Then extend to a basis for all of $\mathbb{R}^n$, $\mathbf{z}_1,\ldots,\mathbf{z}_k,\mathbf{x}_{k+1},\ldots,\mathbf{x}_{n}$. Then $A\mathbf{x}_{k+1},\ldots, A\mathbf{x}_{n}$ form a basis for $R(A)$, and now we can proceed as we did above to define $B$; this $B$ will have the property that $BA\mathbf{x}_{k+1}=\mathbf{x}_{k+1},\ldots, BA\mathbf{x}_{n}=\mathbf{x}_n$. Of course, $BA\mathbf{z}_i = \mathbf{0}$ for $i=1,\ldots,k$. Again, the problem is picking $\mathbf{x}_{k+1},\ldots, \mathbf{x}_n$; without more structure, we don't have good ways of making these arbitrary choices. But if we use the inner product, once again we have a good way of making those choices: start with $N(A)$, then construct its orthogonal complement $N(A)^{\perp}$, and let $\mathbf{x}_{k+1},\ldots,\mathbf{x}_n$ be a basis for $N(A)^{\perp}$. Now proceed. This process actually works even if $A$ is one-to-one, since then $N(A)$ is trivial, so we can just take $k=0$ and $\mathbf{x}_1,\ldots,\mathbf{x}_n$ the standard basis. This process produces a matrix $A^{\dagger}$. This matrix has the following properties, by design, as you can verify by following through the construction: If $A$ is invertible, then $A^{\dagger}$ is in fact the inverse of $A$. If $A$ is one-to-one but not onto, then $A^{\dagger}$ is one of the left inverses of $A$. If $A$ is onto but not one-to-one, then $A^{\dagger}$ is one of the right inverses of $A$. The following properties also come out of the special choices we make for the construction: $AA^{\dagger}A = A$ and $A^{\dagger}AA^{\dagger}=A^{\dagger}$. (Not quite an inverse, but kind of close...) $AA^{\dagger}$ and $A^{\dagger}A$ are symmetric. These two properties turn out to uniquely determine $A^{\dagger}$: any other matrix that has those conditions is in fact equal to $A^{\dagger}$. It also turns out that, because of how $A^{\dagger}$ is constructed, we can say more about what $A^{\dagger}A$ and $AA^{\dagger} $ do: $A^{\dagger}A$ is the orthogonal projection onto $N(A)^{\perp}$. $AA^{\dagger}$ is the orthogonal projection onto $R(A)$. In particular: if $N(A)$ is trivial, then $A^{\dagger}A$ is the "orthogonal projection" onto all of $\mathbb{R}^n$, that is, the identity; and therefore $A^{\dagger}$ is indeed a (indefinite article) left inverse of $A$. And if $R(A)$ is all of $\mathbb{R}^m$, then $AA^{\dagger}$ is the identity and so $A^{\dagger}$ is a (indefinite article) right inverse of $A$. (Which we already knew, but now we have more information when they aren't just that). 
Longwindedly, then: The pseudoinverse $A^{\dagger}$ will work as a one-sided or two-sided inverse of $A$ when $A$ has that type of inverse (that is, if $A$ has left inverses, then $A^{\dagger}$ will be a very specific left inverse of $A$, carefully selected from among however many left inverses there may be; if $A$ has a right inverse, then $A^{\dagger}$ will be a very specific right inverse of $A$, carefully selected from among however many right inverses there may be; nd if $A$ has a two-sided inverse, then $A^{\dagger}$ will be that unique two-sided inverse). If $A$ does not have a one-sided inverse, then $A^{\dagger}$ gets close, pretty much as close as we can hope for (via the identities $AA^{\dagger}A = A$ and $A^{\dagger}AA^{\dagger}=A^{\dagger}$). It turns out to have a lot of very good properties: you can use the pseudoinverse to solve systems of linear equations, in such a way that (i) if the system has a unique solution the pseudoinverse will find it; (ii) if the system has infinitely many solutions, the pseudoinverse will find the one of smallest size; (iii) if the system does not have solutions, the pseudoinverse will find the vector of smallest size that gives you the least squares solution; and (iv) you don't have to analyze the system to figure out which category you are in in order to get the solution (whereas the "usual" least squares and minimal solution procedures are different and so you need to know which case you are in first, before being able to solve it). In that sense, it is way more useful than some random right inverse you found on the street (assuming one exists)... And to finally answer your question: no, "pseudoinverse" is not synonymous with "left inverse". If left inverses exist for $A$, then the (definite article, there is just one) pseudoinverse is a (indefinite article, there may be many) left inverse; but not every left inverse need be the pseudoinverse. And certainly, if no left inverse exists, then the pseudoinverse still does exist, and so it cannot be synonymous with something that doesn't exist at all.
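A small NumPy illustration of these points (the random $5\times 3$ matrix is an arbitrary full-column-rank example): `pinv(A)` is one particular left inverse, the defining identities hold, and other left inverses exist that differ from it.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))      # 5x3, almost surely full column rank
A_dag = np.linalg.pinv(A)

print(np.allclose(A_dag @ A, np.eye(3)))         # left inverse: A† A = I_3
print(np.allclose(A @ A_dag @ A, A))             # A A† A = A
print(np.allclose(A_dag @ A @ A_dag, A_dag))     # A† A A† = A†
print(np.allclose(A @ A_dag, (A @ A_dag).T))     # A A† symmetric (projection onto R(A))

# A different left inverse, obtained by adding something that vanishes on R(A):
N = np.eye(5) - A @ A_dag                        # projector onto R(A)^perp
B = A_dag + rng.standard_normal((3, 5)) @ N
print(np.allclose(B @ A, np.eye(3)))             # also a left inverse of A
print(np.allclose(B, A_dag))                     # ... but B is not the pseudoinverse
```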
Solving inequations so that $x \rightarrow a^x$
When the sides of an inequality are multiplied (or divided) by a negative number, the direction of the inequality is reversed. It appears you took the $\log$ of the sides to write your inequalities. In the first case, for instance, $$\log\left(\left(\frac{1}{3}\right)^{2x}\right) \le \log\left(\left(\frac{1}{3}\right)^{x+1}\right)$$ $$2x\log\left(\frac{1}{3}\right) \le (x+1)\log\left(\frac{1}{3}\right)$$ Dividing the sides by $\log\left(\frac{1}{3}\right)$ $\color{red}{\text{which is negative}}$ yields: $$2x\ge x+1\Rightarrow x\ge1$$ Since $\frac{1}{3}<1$ and $0.1<1$, taking logs flips the inequalities (the $\log$ of each base is negative). So the results are $x\ge1$ and $x>\frac{1}{2}$.
Adjoint differential equations
Okay, I believe, I have constructed the bridge between the adjoint of a vector equation and the adjoint of a scalar equation. I will explain it step by step here for those who might be interested. Higher-Order Two-Term Scalar Equations Now, consider the following higher-order two-term scalar differential equation \begin{equation} x^{(n)}(t)+p(t)x(t)=0,\label{hottceeq1}\tag{1} \end{equation} where $p$ is a complex-valued continuous function. Then, the matrix representation for \eqref{hottceeq1} is \begin{equation} \left( \begin{array}{c} x\\ x^{\prime}\\ \vdots\\ x^{(n-1)} \end{array} \right)^{\prime} = \left( \begin{array}{cccc} &1&&\\ &&\ddots&\\ &&&1\\ -p(t)&&& \end{array} \right) \cdot \left( \begin{array}{c} x\\ x^{\prime}\\ \vdots\\ x^{(n-1)} \end{array} \right).\label{hottceeq2}\tag{2} \end{equation} Thus, we obtain \begin{equation} (-1) \left( \begin{array}{cccc} &1&&\\ &&\ddots&\\ &&&1\\ -p(t)&&& \end{array} \right)^{\ast} = \left( \begin{array}{cccc} &&&\overline{p}(t)\\ (-1)&&&\\ &\ddots&&\\ &&(-1)& \end{array} \right)\notag \end{equation} and \begin{equation} \left( \begin{array}{c} -y^{(n-1)}\\ y^{(n-2)}\\ \vdots\\ (-1)^{n}y \end{array} \right)^{\prime} = \left( \begin{array}{cccc} &&&\overline{p}(t)\\ (-1)&&&\\ &\ddots&&\\ &&(-1)& \end{array} \right) \cdot \left( \begin{array}{c} -y^{(n-1)}\\ y^{(n-2)}\\ \vdots\\ (-1)^{n}y \end{array} \right),\label{hottceeq3}\tag{3} \end{equation} where we have constructed the unknown matrix from bottom to the top. This system gives us the adjoint equation \begin{equation} (-1)^{n}y^{(n)}(t)+\overline{p}(t)y(t)=0.\notag \end{equation} Note that, if $n=\text{even}$, then both \eqref{hottceeq2} and \eqref{hottceeq3} represent \eqref{hottceeq1}, i.e., \eqref{hottceeq1} is self-adjoint. Further, using the definition of the inner product in the first post, we get \begin{equation} \left\langle\left( \begin{array}{c} x\\ x^{\prime}\\ \vdots\\ x^{(n-1)} \end{array} \right), \left( \begin{array}{c} -y^{(n-1)}\\ y^{(n-2)}\\ \vdots\\ (-1)^{n}y \end{array} \right)\right\rangle =\sum_{j=0}^{n-1}(-1)^{n-j}x^{(n-1-j)}\overline{y}^{(j)}=\text{constant}.\notag \end{equation} Higher-Order Autonomous Scalar Equations Next, consider the \begin{equation} x^{(n)}(t)+p_{1}x^{(n-1)}(t)+\cdots+p_{n}x(t)=0,\label{hoaseeq1}\tag{4} \end{equation} where $p_{1},p_{2},\cdots,p_{n}$ are complex numbers. 
Then, the matrix representation for \eqref{hoaseeq1} is \begin{equation} \left( \begin{array}{c} x\\ x^{\prime}\\ \vdots\\ x^{(n-1)} \end{array} \right)^{\prime} = \left( \begin{array}{cccc} &1&&\\ &&\ddots&\\ &&&1\\ -p_{n}&-p_{n-1}&\cdots&-p_{1} \end{array} \right) \cdot \left( \begin{array}{c} x\\ x^{\prime}\\ \vdots\\ x^{(n-1)} \end{array} \right).\notag \end{equation} On the other hand, by using the adjoint coefficient matrix, we form the differential system \begin{equation} \begin{aligned}[] &\left( \begin{array}{c} (-1)^{n-1}y^{(n-1)}+(-1)^{n-2}\overline{p}_{1}y^{(n-2)}+\cdots+\overline{p}_{n-1}y\\ (-1)^{n-2}y^{(n-2)}+(-1)^{n-3}\overline{p}_{2}y^{(n-3)}+\cdots+\overline{p}_{n-2}y\\ \vdots\\ y \end{array} \right)^{\prime}\\ =& \left( \begin{array}{cccc} &&&\overline{p}_{n}\\ (-1)&&&\overline{p}_{n-1}\\ &\ddots&&\vdots\\ &&(-1)&\overline{p}_{1} \end{array} \right) \cdot \left( \begin{array}{c} (-1)^{n-1}y^{(n-1)}+(-1)^{n-2}\overline{p}_{1}y^{(n-2)}+\cdots+\overline{p}_{n-1}y\\ (-1)^{n-2}y^{(n-2)}+(-1)^{n-3}\overline{p}_{2}y^{(n-3)}+\cdots+\overline{p}_{n-2}y\\ \vdots\\ y \end{array} \right), \end{aligned} \notag \end{equation} which gives the scalar equation \begin{equation} (-1)^{n}y^{(n)}(t)+(-1)^{n-1}\overline{p}_{1}y^{(n-1)}(t)+\cdots+\overline{p}_{n}y(t)=0.\notag \end{equation} Further, putting $p_{0}:=1$ for simplicity, we have \begin{equation} \left\langle\left( \begin{array}{c} x\\ x^{\prime}\\ \vdots\\ x^{(n-1)} \end{array} \right), \left( \begin{array}{c} \begin{array}{c} \sum_{j=0}^{n-1}(-1)^{j}\overline{p}_{n-1-j}y^{(j)}\\ \sum_{j=0}^{n-2}(-1)^{j}\overline{p}_{n-2-j}y^{(j)}\\ \vdots\\ y \end{array} \end{array} \right)\right\rangle =\sum_{k=0}^{n-1}\sum_{j=0}^{k}(-1)^{j}p_{k-j}x^{(n-1-k)}\overline{y}^{(j)}=\text{constant}.\notag \end{equation} Higher-Order Scalar Equations with Variable Coefficients Finally, we consider \begin{equation} x^{(n)}(t)+p_{1}(t)x^{(n-1)}(t)+\cdots+p_{n}(t)x(t)=0,\notag \end{equation} where $p_{i}(t)$ ($i=1,2,\cdots,n$) is complex-valued and is $i$ times continuously differentiable function. Thus, the matrix representation is \begin{equation} \left( \begin{array}{c} x\\ x^{\prime}\\ \vdots\\ x^{(n-1)} \end{array} \right)^{\prime} = \left( \begin{array}{cccc} &1&&\\ &&\ddots&\\ &&&1\\ -p_{n}(t)&-p_{n-1}(t)&\cdots&-p_{1}(t) \end{array} \right) \cdot \left( \begin{array}{c} x\\ x^{\prime}\\ \vdots\\ x^{(n-1)} \end{array} \right).\notag \end{equation} Then, the associated adjoint matrix is \begin{equation} - \left( \begin{array}{cccc} &1&&\\ &&\ddots&\\ &&&1\\ -p_{n}(t)&-p_{n-1}(t)&\cdots&-p_{1}(t) \end{array} \right)^{\ast} = \left( \begin{array}{cccc} &&&\overline{p}_{n}(t)\\ (-1)&&&\overline{p}_{n-1}(t)\\ &\ddots&&\vdots\\ &&(-1)&\overline{p}_{1}(t) \end{array} \right),\notag \end{equation} which yields the system \begin{equation} \begin{aligned}[] &\left( \begin{array}{c} \sum_{j=0}^{n-1}(-1)^{j}[p_{n-1-j}\overline{y}]^{(j)}\\ \sum_{j=0}^{n-2}(-1)^{j}[p_{n-2-j}\overline{y}]^{(j)}\\ \vdots\\ \overline{y} \end{array} \right)^{\prime}\\ &= \left( \begin{array}{cccc} &&&\overline{p}_{n}(t)\\ (-1)&&&\overline{p}_{n-1}(t)\\ &\ddots&&\vdots\\ &&(-1)&\overline{p}_{1}(t) \end{array} \right) \cdot \left( \begin{array}{c} \sum_{j=0}^{n-1}(-1)^{j}[p_{n-1-j}\overline{y}]^{(j)}\\ \sum_{j=0}^{n-2}(-1)^{j}[p_{n-2-j}\overline{y}]^{(j)}\\ \vdots\\ \overline{y} \end{array} \right), \end{aligned}\notag \end{equation} where we put $p_{0}(t):\equiv1$ for simplicity. 
Transforming this into the differential equation, we get \begin{equation} (-1)^{n}y^{(n)}(t)+(-1)^{n-1}[\overline{p}_{1}y]^{(n-1)}(t)+\cdots+[\overline{p}_{n-1}y]^{\prime}(t)+\overline{p}_{n}(t)y(t)=0.\notag \end{equation} Using the inner product in the first post gives us \begin{equation} \left\langle\left( \begin{array}{c} x\\ x^{\prime}\\ \vdots\\ x^{(n-1)} \end{array} \right), \left( \begin{array}{c} \begin{array}{c} \sum_{j=0}^{n-1}(-1)^{j}[p_{n-1-j}\overline{y}]^{(j)}\\ \sum_{j=0}^{n-2}(-1)^{j}[p_{n-2-j}\overline{y}]^{(j)}\\ \vdots\\ \overline{y} \end{array} \end{array} \right)\right\rangle =\sum_{k=0}^{n-1}\sum_{j=0}^{k}(-1)^{j}x^{(n-1-k)}[p_{k-j}\overline{y}]^{(j)}=\text{constant},\label{finaleq}\tag{#} \end{equation} which is the desired identity. I believe that \eqref{finaleq} and (*) in the first post are equivalent.
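A numerical spot-check of the simplest instance, the two-term case with $n=2$ and real $p$: the equation $x''+p(t)x=0$ is self-adjoint and the conserved quantity reduces to $x'y-xy'$. The choice $p(t)=t$ below is arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

p = lambda t: t

def rhs(t, z):
    x, xp = z
    return [xp, -p(t) * x]          # x'' + p(t) x = 0 as a first-order system

ts = np.linspace(0.0, 5.0, 200)
solx = solve_ivp(rhs, (0, 5), [1.0, 0.0], t_eval=ts, rtol=1e-10, atol=1e-12)
soly = solve_ivp(rhs, (0, 5), [0.0, 1.0], t_eval=ts, rtol=1e-10, atol=1e-12)

x, xp = solx.y
y, yp = soly.y
wron = xp * y - x * yp              # the conserved quantity for n = 2
print(wron.min(), wron.max())       # both should stay very close to -1
```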