A simple and obvious looking inequality but difficult to prove.
Let $$F(x)=x^m,\ x>0 \Rightarrow F''(x)=m(m-1)x^{m-2} >0 \text{ if } m<0 \text{ or } m>1.$$ So the curvature is positive, and by Jensen's inequality we get $$\frac{F(a_1)+F(a_2)+F(a_3)+\dots+F(a_n)}{n} \ge F\left(\frac{a_1+a_2+a_3+\dots+a_n}{n}\right),$$ that is, $$\frac{a_1^m+a_2^m+a_3^m+\dots+a_n^m}{n} \ge \left( \frac{a_1+a_2+a_3+\dots+a_n}{n} \right)^m \text{ if } m<0 \text{ or } m>1.$$ But if $0<m<1$, the inequality reverses, and for $m=0,1$ equality holds.
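For a quick numerical sanity check of both directions (my addition; the values $a_i$ are arbitrary):

```python
# Check the power-mean inequality for sample exponents: for m < 0 or m > 1
# the mean of the a_i^m should dominate the m-th power of the mean; for
# 0 < m < 1 the inequality reverses.
a = [0.5, 1.3, 2.0, 4.7]   # arbitrary positive values
n = len(a)
mean = sum(a) / n
for m in (-1.0, 0.5, 3.0):
    lhs = sum(x**m for x in a) / n
    rhs = mean**m
    print(f"m={m}: mean of powers = {lhs:.4f}, power of mean = {rhs:.4f}")
# m=-1.0 and m=3.0 give lhs >= rhs; m=0.5 gives lhs <= rhs, as claimed.
```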
Defining a 'Second' Kernel in a Ring or the Fibre Over One
Sure, you can forget the addition of your ring and get a monoid $(R, \times, 1)$, something like a group but where elements aren't necessarily invertible. A morphism of rings $f : R \to S$ is in particular a morphism of monoids (i.e. $f(ab) = f(a) f(b)$), and what you're considering is the kernel of that morphism of monoids. So it has all the properties that kernels of monoid morphisms have. I don't really think you can say anything more specific than that, it's a notion that plays pretty poorly with the addition of $R$ ($1+1$ isn't $1$ very often, for example), so the ring structure is not very relevant, I believe. Compare that with the usual kernel, which is the kernel of the group morphism $(R,+,0) \to (S,+,0)$; since $0 \times x = 0$ for all $x$, the multiplication does play interestingly with the kernel.
Contour integral around 'D-contour'
The integral over the semicircular arc of radius $R$ is $$i R \int_0^{\pi} d\theta \, e^{i \theta} \frac{R e^{i \theta} e^{i a R e^{i \theta}}}{R^2 e^{i 2 \theta} + 6 R e^{i \theta} + 25} $$ This has a magnitude bounded by $$\frac{R^2}{R^2-6 R+25} \int_0^{\pi} d\theta \, e^{-a R \sin{\theta}} \le \frac{2R^2}{R^2-6 R+25} \int_0^{\pi/2} d\theta \, e^{-2 a R \theta/\pi} \le \frac{\pi}{a} \frac{R}{R^2-6 R+25}$$ (splitting at $\theta=\pi/2$ by the symmetry of $\sin\theta$ and using $\sin\theta \ge 2\theta/\pi$ on $[0,\pi/2]$), which vanishes as $R \to \infty$. Thus the contour integral is equal to the integral over the real line.
Evaluate the sum $\sum_{0\leq j < k\leq n}\binom{n}{j}\binom{n}{k}$
Your sum looks like the cross product terms of $$\left[\binom n0+\binom n1+\cdots+\binom nn\right]^2.$$ Since $$2^n=\binom n0+\binom n1+\cdots+\binom nn,$$ squaring both sides gives us $$2^{2n}=\binom n0^2+\binom n1^2+\cdots+\binom nn^2+2\sum_{j\lt k}\binom nj\binom nk.$$ Since $$\binom n0^2+\binom n1^2+\cdots+\binom nn^2=\binom n0\binom nn+\binom n1\binom n{n-1}+\cdots+\binom nn\binom n0=\binom{2n}n$$ (Vandermonde's identity), the previous equation simplifies to $$2^{2n}=\binom{2n}n+2\sum_{j\lt k}\binom nj\binom nk$$ which we can solve to get your answer: $$\sum_{j\lt k}\binom nj\binom nk=\frac{2^{2n}-\binom{2n}n}2.$$
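As a quick sanity check (my addition), the closed form can be verified for small $n$:

```python
from math import comb

# Compare the direct double sum with (2^(2n) - C(2n,n)) / 2.
for n in range(1, 8):
    direct = sum(comb(n, j) * comb(n, k)
                 for j in range(n + 1) for k in range(j + 1, n + 1))
    closed = (2**(2 * n) - comb(2 * n, n)) // 2
    assert direct == closed, (n, direct, closed)
print("closed form verified for n = 1..7")
```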
Showing $\mathscr{P}_n=\langle\zeta, \tau, \pi, \xi\rangle$.
Each $\alpha\in\mathscr{P}_n$ can be written as $$\alpha=\left(\prod_{i\in X\setminus \operatorname{dom}\alpha}(1i)\xi(1i)\right)\hat{\alpha}$$ and $\hat{\alpha}\in\mathscr{T}_n$. Hence $\mathscr{P}_n=\langle \zeta, \tau, \pi, \xi\rangle$.
Hat-guessing problem for finitely many prisoners and infinite sets of hat colors and the Axiom of Choice
Fascinatingly, the "easiest" case is when H is not merely infinite but uncountably infinite. Namely, if H is uncountably infinite, the number of prisoners can scale up to countably infinite without breaking the problem at all. This is because the set of all sequences of real numbers has cardinality equal to that of the real numbers themselves, meaning that once you've assigned all (uncountably many) colors to real numbers, you can create a bijection from the reals to the set of all sequences of reals. This way, when the person in the back announces a color, everyone in front of them immediately knows their own hat color before anyone else must announce anything at all, because the sequence itself was fully encoded in the color the person announced. This assertion does not rely on the axiom of choice in any way. When H is countably infinite, this indeed becomes more of a "sum and subtract" game as indicated in the original quote, in a fairly easy-to-understand way that also does not require choice (let's ignore my previous note about countably many prisoners and only focus on the finitely many prisoners presented in the actual problem). When H is finite, you can just perform a standard modulus for the sum (that is, modulo |H|). You seem to be imagining the axiom of choice as necessary for the warden to select hats to place in the first place, but that... simply isn't a relevant part of the problem. The interest lies in the solution, not the problem statement. The problem statement doesn't change even if the actual hats were chosen long in advance: the prisoners are merely presented with their potential hat colors in advance.
Transformation of contour integral $\int \frac{z^2}{e^{2\pi i z^3}-1} \operatorname dz$ over the circle $|z|=\sqrt[3]{n+\frac{1}{2}}$
Let $w=z^3$. Then since $\frac1{e^{2\pi iw}-1}$ has a residue of $\frac1{2\pi i}$ at each integer, we get $$ \begin{align} \int_{|z|=\left(n+\frac12\right)^{1/3}}\frac{z^2\,\mathrm{d}z}{e^{2\pi iz^3}-1} &=\frac13\cdot3\int_{|w|=n+\frac12}\frac{\mathrm{d}w}{e^{2\pi iw}-1}\\ &=2\pi i(2n+1)\frac1{2\pi i}\\ &=2n+1 \end{align} $$ since there are $2n+1$ integers inside $|w|=n+\frac12$. The factor of $\frac13$ comes from $z^2\,\mathrm{d}z=\frac13\,\mathrm{d}w$, and the factor of $3$ is because each time $z$ traces the circle once, $w=z^3$ circles it $3$ times.
Could there be a sum of polynomials for the nth prime?
By grouping terms with equal degrees in $i$, we can write $$f(i,n)=f_d(n)i^d+f_{d-1}(n)i^{d-1}+\dots+f_1(n)i+f_0(n)$$ for some polynomials $f_k(n)$ in $n$. Summing this over $i$ gives $$\sum_{i=0}^n f(i,n)=\sum_{k=0}^d f_k(n)\sum_{i=0}^n i^k.$$ The sum of $k$-th powers of the numbers from $0$ to $n$ can be written as a polynomial $P_k(n)$ (Faulhaber's formula), so the expression in your question is a polynomial $\sum_{k=0}^d f_k(n)P_k(n)$. As I mention in a comment, this polynomial grows either linearly or at least quadratically, which means that this cannot be an expression for the $n$-th prime, since that grows like $n\log n$.
How do I evaluate this sum(involving the floor function)?
This is Dirichlet's divisor summatory function $D(x)$. It is known that $$D(x) = x\log x + x(2\gamma -1) + \Delta(x)$$ and the non-leading term $\Delta(x)$ is $O(\sqrt{x})$. Forget about a closed form; even the behaviour of the non-leading term $\Delta(x)$ is a well-known unsolved problem: Dirichlet divisor problem. Find the smallest value of $\theta$ for which $\Delta(x) = O(x^{\theta+\epsilon})$ holds true for all $\epsilon > 0$.
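A quick numerical look (my addition) at how closely $D(x)$ tracks the main term:

```python
from math import log, sqrt

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

# D(x) = sum_{n <= x} floor(x/n); the remainder Delta(x) should be O(sqrt x).
for x in (10**3, 10**4, 10**5):
    D = sum(x // n for n in range(1, x + 1))
    main = x * log(x) + (2 * GAMMA - 1) * x
    print(f"x={x}: Delta(x) = {D - main:+.2f}, sqrt(x) = {sqrt(x):.1f}")
```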
Complex polynomial and the unit circle
Let $g(z):=z^nP\left(\frac 1z\right)$. It's a polynomial whose leading term is $a_0$ and constant coefficient is $1$. We have that $g(0)=1$ and $\max_{|z|=1}|g(z)|=1$, hence by maximum modulus principle, $g$ is constant equal to $1$. This gives the wanted result.
Are transfinite numbers like aleph nought, continuum etc. a set itself?
Yes, in the context of set theory, almost anything is a set. $\aleph_0$ is a set (as is any other cardinal). Any function is also a set. $\Bbb R$ is a set, whether just as a set of points or as the topological space, the metric space, the group, the field, the $\Bbb Q$-vector space, and any other interpretation, common or uncommon, with or without ordering, of $\Bbb R$, regardless of whether you like to define it from Dedekind cuts, Cauchy sequences, or any other way. However, there are things that are not sets, even though they intuitively look like collections of things. They are called proper classes (sets can also be classes, but they are not proper). Which ones they are depends on your specific axioms, but in the standard set theory called $ZF$, some proper classes include the class of all sets, the class of all cardinals, or something like the class of all topological spaces (which, together with all possible continuous functions between them, becomes the category of topological spaces).
Proof by contradiction and division by $0$
Your argument is based on a false assumption: far from having two values, $\frac10$ is undefined and therefore has no value. The statements $\lim\limits_{x\to 0^+}\frac1x=\infty$ and $\lim\limits_{x\to 0^-}\frac1x=-\infty$ are abbreviations for precise descriptions of how the function $f(x)=\frac1x$ behaves near (but not at) $x=0$; they say nothing about the undefined symbol $\frac10$.
Matrix representing inner product?
The expression $Au$ doesn't make sense as $u$ is an abstract vector and $A$ is a matrix. The Gramian matrix $G(\beta)$ is the matrix representing the inner product (as a quadratic form) with respect to the basis $\beta$. Then we have $$ \left <v, w \right>_V = [v]_{\beta}^T G(\beta) [w]_{\beta} = \left< G(\beta) [v]_{\beta}, [w]_{\beta} \right>_{\mathbb{R}^n} = \left< [v]_{\beta}, G(\beta) [w]_{\beta} \right>_{\mathbb{R}^n} $$ where $\left< \cdot, \cdot \right>_{\mathbb{R}^n}$ is the standard inner product on $\mathbb{R}^n$. Since $G(\beta)$ is self-adjoint and positive-definite, it has a unique self-adjoint positive-definite square root and so you can also write $$ \left< v, w \right>_V = \left< \sqrt{G(\beta)}[v]_{\beta}, \sqrt{G(\beta)}[w]_{\beta} \right>_{\mathbb{R}^n} $$ which is closer in spirit to what you wrote (with $A = \sqrt{G(\beta)}$).
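To see these identities in action, here is a small numerical sketch (my addition; the Gram matrix and the coordinate vectors are random):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
G = M.T @ M + 3 * np.eye(3)          # a symmetric positive-definite Gram matrix
v, w = rng.standard_normal(3), rng.standard_normal(3)

# The unique SPD square root of G, via the spectral decomposition.
eigval, eigvec = np.linalg.eigh(G)
sqrtG = eigvec @ np.diag(np.sqrt(eigval)) @ eigvec.T

# <v,w>_V = [v]^T G [w] = <sqrt(G) v, sqrt(G) w> in the standard inner product.
print(np.isclose(v @ G @ w, (sqrtG @ v) @ (sqrtG @ w)))  # True
```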
Group of Order $5$ Generated by $(1,2,3,4,5)$
There's only one group of order $5$: the cyclic one. One can think in terms of $\mathbb Z_5:=\mathbb Z/5\mathbb Z$, the additive group of integers $\bmod 5$. If you're thinking of $S_5$, then the cycle $(12345)$ has order $5$. Thus it generates a copy of the cyclic group of order $5$. That is $\langle (12345)\rangle \cong C_5$, as it is also denoted. In general, an element $a$ of order $n$ in a group $G$ generates a cyclic group of order $n$: $\langle a\rangle\cong C_n\le G$. Also, there's only one group of any given prime order (up to isomorphism), and it's cyclic.
PDE: What are these type of boundary conditions called?
I doubt that it has a well agreed-upon name. From the physics point of view there are many interpretations of this boundary condition, and probably many names, depending on which area of physics you consider the wave equation to represent: electromagnetics, mechanical, acoustics, or something else. For example, from the mechanical side $v=u_t$ is a velocity and $\frac{1}{\mu}u_x$ is a force, force distribution, pressure, or flux. So, this boundary condition relates the velocity of the particle at $L$ to the pressure at the same point (pressure or flux coming from outside the modeling domain $[0,L]$). In a wave equation context, a first order derivative in time represents damping. This means that this boundary condition gives a damping that is proportional to the pressure/flux. This has a stabilizing effect on the equation. Yet another perspective on this equation would be to view it in the frequency domain replacing $u_t$ with $j\omega u$, assuming things are linear. The boundary condition then reads $\frac{1}{\mu}u_x-j\omega u=0$ and is a form of Robin condition, but now in the frequency domain. Again, whenever you see $j\omega u$ in a frequency domain expression you know there is damping in the system.
Cauchy's integral theorem vs line integration of a function $f : \mathbb{R}^2 \rightarrow \mathbb{R} $ over a closed curve
The line integral of a function is usually done with respect to the curve length element, the $ds$, while the Cauchy integral is done with respect to $dz$. To see the difference, the line integral is $$ \int_\gamma c \, ds = c ~ \text{length} (\gamma) . $$ The integral in Cauchy is $$ \int_\gamma c \, dz , $$ where $dz = dx +idy$. So it is really the line integral $$ \int_\gamma c \, (dx +idy) = \int_\gamma c \, dx + i \int_\gamma c \, dy , $$ and each one of these is zero. So these are very different integrals. Perhaps write down the thing in terms of parametrization, say $u(t)+iv(t)$. Then $ds = \sqrt{(u'(t))^2+(v'(t))^2}\, dt$ and $dx = u'(t) dt$ and $dy = v'(t) dt$.
Finite Intersection Property proof attempt.
For a $k \in \bigcap_{1}^{n} K_{\alpha_{i}}$ there is $\gamma$ such that $k \not\in K_{\gamma}$, but there may exist $l \in \left(\bigcap_{1}^{n} K_{\alpha_{i}}\right)\cap K_{\gamma}$. The problem is: you cannot find a $\gamma$ that works for every $k$. Counterexample. Consider the family (in $\mathbb{R}$) $\{[n, \infty)\}_{n \in \mathbb{N}}$. Any finite subcollection has a nonempty intersection. But if $\alpha \in \bigcap_{n \in \mathbb{N}}[n, \infty)$, then $\alpha \in [n,\infty)$ for every $n \in \mathbb{N}$, that is, $\alpha \geq n$ for every $n \in \mathbb{N}$, which is absurd! So the full intersection is empty. Here's a proof using your idea: Proof. Suppose that $\bigcap_{\alpha}K_{\alpha} = \emptyset$ and fix $\alpha_{0}$. Note that if $x \in K_{\alpha_{0}}$, then since $\bigcap_{\alpha}K_{\alpha} = \emptyset$ we have $$x \in \left(\bigcap_{\alpha}K_{\alpha}\right)^{c} = \bigcup_{\alpha}K_{\alpha}^{c}.$$ But $K_{\alpha_{0}}$ is compact and the sets $K_{\alpha}^{c}$ are open (the $K_{\alpha}$ are closed), so there are $\alpha_{1},\dots, \alpha_{m}$ such that $$K_{\alpha_{0}} \subset \bigcup_{i=1}^{m}K_{\alpha_{i}}^{c} = \left( \bigcap_{i=1}^{m}K_{\alpha_{i}}\right)^{c}.$$ Therefore $K_{\alpha_{0}} \cap K_{\alpha_{1}} \cap \dots \cap K_{\alpha_{m}} = \emptyset$, a contradiction.
Uniform convergence of sequence of function $f_n(x) = \frac{nx^4+1}{nx^4+2x+3}e^{-nx^2}$ on the interval $(1,+ \infty)$
Notice that for all $x>1$ $$\sup_{(1,\infty)} \frac{nx^4+1}{nx^4+2x+3}e^{-nx^2}\leq \sup_{(1,\infty)} \frac{nx^4+1}{nx^4}e^{-nx^2}$$ because dividing by a larger number gives a smaller number. So now we want to show $$\lim_{n\rightarrow\infty} \sup_{(1,\infty)} \Big[\Big(1+\frac{1}{nx^4}\Big)e^{-nx^2}\Big]=0.$$ Since $1/(x+\delta)^4 < 1/x^4$ and $e^{-n(x+\delta)^2}<e^{-nx^2}$ for all $\delta>0$, both factors are decreasing in $x$, so the supremum is attained in the limit $x\to 1^+$. Then it is sufficient to show that $$\lim_{n\rightarrow\infty} \Big(1+\frac{1}{n}\Big)e^{-n}=0.$$
Orthogonal and symmetric Matrices
Such a matrix satisfies $A^2=I$, so it is diagonalizable, and its possible eigenvalues are $+1$ and $-1$. Since it is unitary, the eigenspaces corresponding to $1$ and to $-1$ are orthogonal. Conversely, every diagonalizable matrix with eigenvalues contained in $\{+1,-1\}$ and orthogonal eigenspaces is of that form. It follows that the set of your matrices is in bijection with the set of subspaces of $\mathbb C^n$. Explicitly: if $V$ is one such subspace, there is a unique linear transformation $f:\mathbb C^n\to\mathbb C^n$ such that $V$ and $V^\perp$ are eigenspaces for the eigenvalues $1$ and $-1$, respectively. The matrix $A_V$ of $f$ with respect to the standard basis of $\mathbb C^n$ satisfies your conditions. The bijection is $$V\in\{\mathrm{subspaces\;of\;}\mathbb C^n\}\leftrightarrow A_V.$$ As a consequence, considered as a whole, your set is a disjoint union of submanifolds homeomorphic to Grassmannian varieties. Literally books have been written about them.
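A small numerical sketch (my addition) of the correspondence $V\leftrightarrow A_V$: with $P$ the orthogonal projector onto $V$, the matrix $A_V=2P-I$ acts as $+1$ on $V$ and $-1$ on $V^\perp$, so it is self-adjoint and squares to the identity:

```python
import numpy as np

rng = np.random.default_rng(1)
Z = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
Q, _ = np.linalg.qr(Z)              # orthonormal basis of a random 2-dim V in C^4
P = Q @ Q.conj().T                  # orthogonal projector onto V
A = 2 * P - np.eye(4)               # A_V: +1 on V, -1 on V-perp
print(np.allclose(A, A.conj().T), np.allclose(A @ A, np.eye(4)))  # True True
```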
Function with removable singularity
Multiply both sides by $|z-z_{0}|$ to get $|(z-z_{0})f(z)| \leq A |z-z_{0}|^{\epsilon}$. Now just show that letting $z \rightarrow z_0$ demonstrates that $\operatorname{res}_{z_{0}}(f)=0$.
Is the argument of the Riemann zeta function bounded in the root free region on RH?
Found it! It is referenced in Titchmarsh, The theory of the Riemann zeta-function, Oxford University Press, 2nd edition, 1951, section 8.13, p. 209. The reference there is: H.L. Montgomery, Extreme values of the Riemann zeta function, Comment. Math. Helvetici 52 (1977) 511-518. Corollary of the main result: Let $\sigma$ be fixed, $1/2<\sigma<1$. Then as $t\to\infty$, $$\log|\zeta(s)| =\Omega_+ \left( \frac{(\log t)^{1-\sigma}}{(\log\log t)^\sigma} \right), \qquad \arg\zeta(s) =\Omega_\pm \left( \frac{(\log t)^{1-\sigma}}{(\log\log t)^\sigma} \right).$$
Extension degree of a splitting field
If $f$ is separable: The Galois group $G=\operatorname{Gal}(\overline F/F)$ acts faithfully on the roots of $f$ (because they generate $\overline F$). Hence it embeds in $S_n$. Because $f$ is separable, $|G|=[\overline F:F]$. In general: if $\alpha_1, \ldots,\alpha_k$ are the roots of $f$, $k\leq n$, then each $\alpha_i$ has degree $\leq n-i+1$ over $F(\alpha_1,\ldots,\alpha_{i-1})$. So the degree of $\overline F=F(\alpha_1,\ldots,\alpha_k)$ is $\leq n\cdots (n-k+1)\leq n!$.
Optimization Problem, (rectangular box)
Here are some hints: You're trying to minimize the cost of building the box. Your box has a square base, say of side length $s$. Call the height of the box $h$. Using what you know, you can express the cost of the box as a function of $s$ and $h$. How? You should now have a cost function $C(s,h)$ that you wish to minimize. To do this, you need to find a way to express $C(s,h)$ as a function of a single variable. Since they're asking about the height for which the cost is minimized, ideally you'll find a way to rewrite $C(s,h)$ as a function $f(h)$ of the height. How can you do this? What information about the box have you not used yet? Find a way to use this information to express $s$ in terms of $h$. This will allow you to rewrite $C(s,h)$ solely in terms of $h$.
Does invertibility of the Hessian matrix $H_{F(X)}$ imply $v^tH_{F(X)}v\neq 0$?
It seems to me the answer is "no". Let $f = x^2 + y^2 - z^2; \tag 1$ then $H_f = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & -2 \end{bmatrix} = \text{diag}(2, 2, -2); \tag 2$ we have $\det H_f = -8 \tag 3$ everywhere in $U$, but with $v = \begin{pmatrix} 1 \\ 1 \\ \sqrt 2 \end{pmatrix} \tag 4$ we find $v^T H_f v = \begin{pmatrix} 1 & 1 & \sqrt 2 \end{pmatrix} \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & -2 \end{bmatrix}\begin{pmatrix} 1 \\ 1 \\ \sqrt 2 \end{pmatrix} = 0, \tag 5$ as easy computations affirm.
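A direct numerical check (my addition) of $(3)$ and $(5)$:

```python
import numpy as np

# H is invertible (det = -8) yet indefinite, so v^T H v vanishes for this v.
H = np.diag([2.0, 2.0, -2.0])
v = np.array([1.0, 1.0, np.sqrt(2)])
print(np.linalg.det(H), v @ H @ v)   # -8.0  0.0 (up to rounding)
```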
Having difficulty when finding minima and maxima of function with ln.
The derivative of $-4x$ with respect to $x$ is $-4$. The derivative of $\log(10x)$ with respect to $x$ is taken via the chain rule: the derivative of $\log(x)$ with respect to $x$ (which is $1/x$), evaluated at the point $10x$ (which yields $1/(10x)$), is multiplied by the derivative of $10x$ with respect to $x$, which is $10$. The derivative of $4\log(10x)$ with respect to $x$ is then just four times the derivative of $\log(10x)$ with respect to $x$, as constant factors can be carried out of the differential operator. Combining: $$f'(x)=4\frac{1}{10x}\times10-4=\frac{4}{x}-4.$$ Now to solve $f'(x)=0$ for $x$, add $4$ to both sides: $$\frac{4}{x}=4;$$ divide both sides by $4$: $$\frac{1}{x}=1;$$ multiply both sides by $x$: $$x=1.$$ Therefore $1$ is a critical point of $f$. Please notice that $0$ is also a critical point, as it yields an infinite value of $f'$.
Exact counting of odd graphs
Start with $K_n$ over the vertices $\{1,2,\ldots, n\}$. Assign an arbitrary colour $\in\{\text{black},\text{white}\}$ to each edge except those connecting some $k<n$ to $n$ (so that's ${n-1\choose 2}$ arbitrarily coloured edges). Now, for each $k<n$, pick the colour of edge $kn$ deterministically to make vertex $k$ incident with an even number of black edges. In the end, $n$ is automatically incident with an even number of black edges, since the total number of (black edge, endpoint) incidences is even.
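A brute-force check (my addition) that this construction accounts for every valid colouring, i.e. that the number of colourings in which every vertex meets an even number of black edges is exactly $2^{\binom{n-1}{2}}$:

```python
from itertools import combinations, product
from math import comb

for n in range(2, 6):
    edges = list(combinations(range(n), 2))
    count = 0
    for colouring in product((0, 1), repeat=len(edges)):  # 1 = black
        deg = [0] * n
        for (u, v), black in zip(edges, colouring):
            deg[u] += black
            deg[v] += black
        count += all(d % 2 == 0 for d in deg)
    assert count == 2 ** comb(n - 1, 2)
print("count matches 2^C(n-1,2) for n = 2..5")
```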
Continuation of strictly monotone functions on $\mathbb{R}$
No, consider for instance the case where $D=(0,1)$ and $f$ is the function given by $f(x)=-\frac1x$. There cannot exist a strictly increasing extension of $f$ to $\mathbb R$ because $\lim\limits_{x\to 0^+}f(x)=-\infty$. (That is, the value of the extension at any non-positive point would have to be lower than any real number. That is not possible.)
Is there a way to generalize clock algebra?
The elements of $\Bbb Z/n\Bbb Z$ are equivalence classes of integers, like so: $$[a]_n:=\{b\in\Bbb Z: n\mid a-b\}.$$ Addition is defined like so: $$[x]_n+_n[y]_n:=[x+y]_n,$$ where $+$ is standard addition. It follows, then, that subtraction is defined like so: $$[x]_n-_n[y]_n:=[x-y]_n,$$ which is, indeed, $[x]_n+_n[(-y)]_n$.
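For concreteness, a tiny sketch (my addition) of these rules for the usual 12-hour clock, computing with class representatives:

```python
n = 12
add = lambda x, y: (x + y) % n   # [x] + [y] = [x + y]
sub = lambda x, y: (x - y) % n   # [x] - [y] = [x - y]
print(add(9, 5), sub(3, 7))      # 2 8 -- e.g. 9 o'clock plus 5 hours is 2 o'clock
```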
Property of minimal normal subgroups 2
If $K$ is a minimal normal subgroup of $G$, then it is a direct product of isomorphic copies of a simple group (they have to be isomorphic, because they are conjugate in $G$). One possibility is for this simple group to be cyclic of prime order; in that case, $K$ is a direct product of finitely many cyclic groups of order $p$, and this is called an "elementary abelian $p$-group". What if the simple group is not abelian? Then $K = S\times S\times\cdots\times S$, where $S$ is a nonabelian simple group. The center of a direct product is the direct product of the centers, and the center of a nonabelian simple group is trivial (since the center is normal, $Z(S)=\{1\}$ or $Z(S)=S$, and since $S$ is nonabelian, $Z(S)\neq S$). Thus, the center of $K$ is trivial.
It's about finding the values of (A) for which the system has no solution, infinitely many solutions, and a unique solution in linear algebra
You have the matrix approach in the post Amzoti mentions in his comment. So, let me try another, much less elegant, approach. You have a system of three linear equations for three unknowns $x,y,z$. You can solve the first equation for $x$ and substitute its expression into the second and third equations. Then eliminate $y$ from the second equation and substitute its expression into the third equation, which is then just linear in $z$. Solve it for $z$ and go backward to get $y$ and then $x$. The final result is $$x=-\frac{a^3+a^2}{a-2},y=\frac{a^2}{a-2},z= a$$ From here, you can draw the same conclusions as in the previous post.
Finding a general form of the inverse of a function
First you have to make sure that there exists an inverse for $-1\leq q<1$, which you will have to find out (hint: a strictly monotonic polynomial in a given range is invertible in this range). Supposing that an inverse exists and bearing in mind that this function is rather complicated, I would suggest that you use some computer software to find the inverse, like Mathematica. You can do it by solving the equation $\rho=\rho(r)$ and obtaining the solution(s) $r=r(\rho)$.
Holomorphic map containing $\mathbb{T}$ in its image?
There are holomorphic maps of the open unit disk which are continuous on the closed disk and such that the image of the boundary circle is the whole closed disk. See Theorem 1 in http://www.ams.org/journals/proc/1955-006-04/S0002-9939-1955-0072227-5/S0002-9939-1955-0072227-5.pdf for a reference.
Complex number set has least upper bound property?
Hint: consider the subset $\{(0,y)\,:\,y\in\Bbb R\}$.
Determining a multiple of a power of 2.
HINT: Write them as $19\times10^4$, $19\times10^5$, $19\times10^3$. Since $19$ is not divisible by $32$ (indeed $19$ is odd), the power of $10$ must be. Use $32=2^5$ and $10=2\times5$.
How can I find a number $a$ such that this limit is 1
Multiplying by the conjugate gives: $$ \frac{\sqrt{ax+b}-2}{x}=\frac{ax+b-4}{x(\sqrt{ax+b}+2)}. $$ As you already observed, you need $b=4$, for otherwise the limit is infinite. Then you can simplify by $x$. Now the limit is: $$ \frac{a}{\sqrt{4}+2}=\frac{a}{4}. $$ So you want $a=4$.
Proof that a linear code union/intersect with another linear code is a linear code
A) Since $C$ is a binary linear code, for any $c_1,c_2\in C$ we have $c_1+c_2\in C$. Since $1^n\in C$, for any $c\in C$ we also have $\neg c=c+1^n\in C$; this holds for all $c\in C$, so $\neg C=C$. B) This one doesn't hold: e.g. take $C=\{(0,0),(0,1)\}$ as a linear codebook; then $\neg C=\{(1,1),(1,0)\}$, which is not linear since $(1,1)+(1,0)=(0,1)\notin \neg C$ (a quick check of this example appears below). C) This is true only if $\neg C$ is a linear codebook. In that case, for $c_1,c_2\in C\cap \neg C$ we have $c_1,c_2\in C$ and $c_1,c_2\in\neg C$. Since both $C$ and $\neg C$ are linear we have $c_1+c_2\in C$ and $c_1+c_2\in\neg C$, i.e. $c_1+c_2\in C\cap\neg C$. Therefore $C\cap\neg C$ is linear. D) $C\cup \neg C$ is linear only if $\neg C$ is linear (why?)
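Here is the quick check of the counterexample in B) (my addition):

```python
# C = {00, 01} is linear; its bitwise complement negC = {11, 10} is not.
C = {(0, 0), (0, 1)}
negC = {tuple(1 - b for b in c) for c in C}

def closed(code):
    """Closure under coordinatewise addition mod 2."""
    return all(tuple((a + b) % 2 for a, b in zip(c1, c2)) in code
               for c1 in code for c2 in code)

print(closed(C), closed(negC))   # True False
```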
Symmetric power subspace of tensor product
The short and sweet answer: The space $T^d(\Bbb C^2)$ can be identified with the space of degree $d$ homogeneous polynomials with complex coefficients in non-commuting variables $x$ and $y$. That is, we have $xyx - x^2y \neq 0$. As you might expect, this gives us a $2^d$-dimensional space. Symmetrizing tensors, then, is tantamount to allowing the variables $x$ and $y$ to commute.
Relating two properties of $e^x$
$e^0=1$, and this is the slope of $e^x$ at $x=0$. So...
Alternating random variable
By inspection, $X_n(\omega) \to 0$ as $n\to\infty$, because the Lebesgue measure of the set $A_n=\{X_n \neq 0\}$ converges to $0$. At the same time, $\mathbf{E}X_k = (-1)^k \neq 0$.
Partial derivatives of all orders of linear map exist
Yes, this is true: the derivative of a linear map $F$ at any point $a \in \mathbb R^n$ is the linear map $F$ itself. Therefore the derivative $F^\prime$ is the constant function that maps each $x \in \mathbb R^n$ to the linear map $F$. Hence the second derivative is equal to $0$, as are all higher-order derivatives.
Finding a vector within a specified range of angles from other vectors
It would be helpful if you gave a bit more background on the reason for your question. Why do you need this? What do you want to do with this? Are the various numbers $l_i , u_i$ random, fixed, or related and the same for all the vectors? Are you looking for a numerical answer/method or something analytical? Without knowledge of any of these things it is not possible to give a good answer. In the general case, without additional information, one can only try to find a vector ${\bf x}$ in a systematic fashion. This can be done more efficiently by analysing the set of vectors ${\bf v}_i$ in detail; for instance, if there are $1 \leq i<j \leq k$ such that $\arccos({\bf v}_i \cdot {\bf v}_j) >u_i + u_j$, there is no ${\bf x}$ that can satisfy all the constraints. In particular the small $u_i$ are relevant for such an approach. A very simple method would be to generate random points on a unit sphere that represent the direction of ${\bf x}$ and check whether they satisfy all of the constraints (see the sketch below). The disadvantage is of course that if you do not find a good direction, you still don't know whether it exists or not. In a more systematic way, you would consider ${\bf v}_1$ and ${\bf v}_2$ and find the area on the unit sphere that is allowed by their simultaneous constraints. Successively adding more ${\bf v}_i$ will diminish the allowed area on the unit sphere. The process stops whenever the area completely vanishes, or you run out of vectors ${\bf v}_i$. Problematic here is the fact that the area will most likely consist of various disjoint parts of the unit sphere, each of which is bounded by a subset of the set of constraints. Alternatively, one could make an imperfect representation of the allowed area by a discretisation of the unit sphere, for instance a fine triangulation mesh, and keep track of which of the triangles are completely allowed and which ones are intersected by one or more constraints.
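Here is a minimal sketch (my addition) of that random-sampling approach; the vectors ${\bf v}_i$ and the bounds $l_i,u_i$ below are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.standard_normal((3, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)   # unit vectors v_1, v_2, v_3
l, u = np.full(3, 0.0), np.full(3, 2.0)         # angle bounds, in radians

for _ in range(100_000):
    x = rng.standard_normal(3)
    x /= np.linalg.norm(x)                      # uniform random direction on S^2
    ang = np.arccos(np.clip(v @ x, -1.0, 1.0))  # angles between x and each v_i
    if np.all((l <= ang) & (ang <= u)):
        print("found:", x)
        break
else:
    print("no direction found; the constraints may be infeasible")
```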
Is it always possible to recover an affine morphism by restricting a finite-type morphism?
$\newcommand{Spec}{\operatorname{Spec}}$You are mistaken that $f^{-1}(D(h))=D(\phi(h))$. Rather, what is true is that if we write $f|_{\Spec A}:\Spec A\to\Spec B$ for the restriction of $f$ to $\Spec A$, then we have $f|_{\Spec A}^{-1}(D(h))=D(\phi(h))$, which is really just saying that $f^{-1}(D(h))\cap\Spec A=D(\phi(h))$. Another way to see that your assumption is incorrect is to just take $h=1$. Then $D(h)=\Spec B$ so $f^{-1}(D(h))=f^{-1}(\Spec B)$ which, as we know, is not necessarily equal to $\Spec A=D(\phi(h))$.
Prove that the functions is of the form "$cx$".
By changing the variable we have $$tf(x)=\int_x^{x+t}f(u)du-\int_0^tf(y)dy=\int_t^{x+t}f(y)dy-\int_0^xf(y)dy$$ so by differentiating with respect to $x$ we get for all $t$ $$tf'(x)=f(x+t)-f(x)$$ and by differentiating with respect to $t$ we get for all $x$ $$f'(x)=f'(x+t)$$ so finally for $x=0$ we get $f'(0)=c=f'(t)$, and then $f(t)=ct+c'$; since clearly $f(0)=0$, we get $c'=0$ and $f(t)=ct$.
Integer induction without infinity
Suppose that $\psi$ is such a formula, and let $(M,E)$ be a model of $T$ in which $\forall n(n\text{ is an integer}\rightarrow\psi(n))$ is false. Consider $A=\{k\mid M\models\lnot\psi(k)\land k\text{ is an integer}\}$; this class cannot have an $E$-minimal element. Indeed, if $M\models k=\min A$, then of course $k\neq 0$, since $T\vdash\psi(0)$, so $k=n+1$; but $M\models\psi(n)\land(\psi(n)\rightarrow\psi(n+1))$, so $M\models\psi(k)$. Therefore $A$ is a class without a least element. This is a contradiction, since if $M\models k\in A$, then $M\models k\cap A\text{ is a set linearly ordered by }E\text{ and without a least element}$, which contradicts the fact that $M$ satisfies the axiom of foundation. Note that the assumption that $T\vdash\psi(0)\land\forall k(k\text{ is an integer}\land\psi(k)\rightarrow\psi(k+1))$ was necessary here. On the other hand, if you only require that $T$ proves this for meta-integers, then something like "$n$ encodes a proof of a contradiction from $T$" is an example of such a statement.
How can I divide by a matrix?
For a matrix $$A=\begin{bmatrix}d&e&f\\g&h&i\\j&k&l\end{bmatrix}$$ and a vector $y=\begin{bmatrix}a\\b\\c\end{bmatrix}$ the product $y\cdot A$ is not defined. The product $A\cdot y$, on the other hand, is defined by $$Ay = \begin{bmatrix}ad + be + cf\\ag + bh + ci\\aj + bk + cl\end{bmatrix}$$ But your question is how to divide by the matrix, and the answer is: it's complicated. In general, division by a matrix is not well defined and is usually not referred to as dividing by a matrix. First, let's look at division of real numbers. What does $x=\frac{a}b$ really mean? $x=\frac ab$ really means that $x$ is the unique solution to the equation $$bx=a.$$ For example, $\frac12$ is the number that we have to multiply by $2$ to get the result of $1$. Similarly, when you want to ask "What is $y$ divided by $A$", what you are really asking is: Which vector $x$ do I need to multiply by $A$ to get $y$? Or, in other words, you are solving the equation $$Ax=y$$ and you want to solve it for $x$. Now you run into problems. What if all elements of $A$ are $0$, but $y$ is not all zero? Then obviously, any vector $x$, multiplied by $A$, will be equal to $0$, so there is no solution to $Ax=y$. What if all elements of $A$ and $y$ are both zero? Then any vector $x$ will be a solution to $Ax=y$. These two problems are similar to the division-by-$0$ problem in real numbers: the equation $0\cdot x=1$ has no solutions, and $0\cdot x=0$ has infinitely many solutions. However, it gets worse. The matrix $$A=\begin{bmatrix}1&0&0\\1&0&0\\1&0&0\end{bmatrix}$$ also causes problems, since $$A\cdot\begin{bmatrix}a\\b\\c\end{bmatrix}=\begin{bmatrix}a\\a\\a\end{bmatrix}$$ no matter what $b,c$ are. Again, you have zero solutions for the equation $Ax=\begin{bmatrix}1\\0\\0\end{bmatrix}$ and infinitely many solutions for $Ax=\begin{bmatrix}1\\1\\1\end{bmatrix}$.
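In practice, this "division" is carried out by solving the linear system; a short sketch (my addition) using NumPy:

```python
import numpy as np

# "y divided by A" = the solution x of A x = y (unique when A is invertible).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
y = np.array([1.0, 2.0, 3.0])
x = np.linalg.solve(A, y)        # preferred over forming inv(A) explicitly
print(np.allclose(A @ x, y))     # True

# For a singular A (the problem cases above), np.linalg.solve raises
# LinAlgError -- the analogue of division by zero.
```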
Conjugacy classes of SL($4,\mathbb{R}$)
Consider first the problem of determining when two $n \times n$ square matrices over a field $k$ are conjugate by an element of $GL_n(k)$; working with conjugation by a subgroup of $GL_n(k)$ is a refinement of this so it's good to have a grasp on this case first. If $k$ is algebraically closed the conjugacy classes are labeled by Jordan normal forms up to permutation of blocks, and in general they are labeled by a generalization called rational canonical forms; both of these follow from the structure theorem for finitely generated modules over a PID, since the problem is equivalent to classifying $n$-dimensional modules over $k[x]$ up to isomorphism. Specialized to $GL_4(\mathbb{R})$, we conclude the following. We have the following cases depending on what the eigenvalues look like: All eigenvalues are real. In this case the rational canonical forms are real Jordan normal forms. Generically these are diagonal matrices, but the following Jordan block structures are also possible: there may be one $2 \times 2$ Jordan block, two $2 \times 2$ Jordan blocks, one $3 \times 3$ Jordan block, or one $4 \times 4$ Jordan block. Two eigenvalues are real and two are complex. In this case the rational canonical forms are direct sums of either a $2 \times 2$ diagonal block or a $2 \times 2$ Jordan block together with a $2 \times 2$ companion matrix of a quadratic polynomial with complex roots. All eigenvalues are complex. In this case the rational canonical forms are either direct sums of two $2 \times 2$ companion matrices or a $4 \times 4$ "Jordan block" of $2 \times 2$ matrices as described in the Wikipedia article on rational canonical forms. Together with the additional constraint that the product of the eigenvalues is equal to $1$, this answers the question of when two matrices in $SL_4(\mathbb{R})$ are conjugate by an element of $GL_4(\mathbb{R})$. An element of $GL_4(\mathbb{R})$ can be rescaled to have determinant either $1$ or $-1$, so this is almost the same as conjugacy by elements of $SL_4(\mathbb{R})$ except for the additional freedom to conjugate by some, hence any, matrix of determinant $-1$. This means the classification of conjugacy classes in $SL_4(\mathbb{R})$ involves some of the above cases splitting into two cases; I don't know off the top of my head what that splitting looks like, unfortunately.
Probability for Communication Networks
This is a typical Binomial Distribution calculation. The probability that $k$ or fewer bits are transmitted incorrectly is $$\sum_{i=0}^k \binom{n}{i}q^ip^{n-i}.$$ Remark: In the formula, I have used your assertion that the probability that a bit is transmitted incorrectly is called $q$. There may be a typo in the question, since in the concrete example you use $p=0.01$, meaning that the probability $q=1-p$ that a bit is transmitted incorrectly is $0.99$. If there is such a high probability of incorrect transmission of a bit, we would be much better off reversing the bit received! If there really is a typo, and the probability of incorrect transmission is $p$, then in the formula above, just interchange $q$ and $p$.
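For instance, a direct evaluation of this sum (my addition; the values $n=100$, error probability $0.01$ and $k=3$ are just for illustration):

```python
from math import comb

n, k = 100, 3
q = 0.01                 # per-bit error probability (see the remark above)
p = 1 - q
prob = sum(comb(n, i) * q**i * p**(n - i) for i in range(k + 1))
print(f"P(at most {k} errors in {n} bits) = {prob:.4f}")
```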
Proving $(a+b+c)^2 > 3(bc+ca+ab)$
Hint: add $a^2+b^2\ge 2ab$, $b^2+c^2\ge 2bc$ and $c^2+a^2\ge 2ca$ together.
How to expand and simplify this expression?
Recall that since matrix multiplication isn't always commutative, we don't know for certain whether or not $AB=BA$ (indeed, $A$ and $B$ likely don't commute). Hence, the best we can do is to simplify it as: $$ A^2 + AB -BA +B^2 $$
Topological properties of symmetric positive definite matrices
It is not just connected; in fact it is convex, hence path connected: for such matrices $A,B$ we have $x^TAx> 0$ and $x^TBx> 0$ for all $x\neq 0$, so for $\lambda \in [0,1]$ we get $x^T[\lambda A+(1-\lambda)B]x> 0$ for all $x\neq 0$, and $\lambda A+(1-\lambda)B$ is again symmetric.
What is the complexity of recurrence $T(n)=T(n/2)+T(n/4)+T(n/6)+T(n/12)+1$
So if $T(n) = cn - \frac 13$, then $$T(n/2)+T(n/4) + T(n/6) + T(n/12) +1 = cn - \frac 43 + 1 = cn - \frac 13$$ Also, $T(0)$ is really $-\frac13$ from the recurrence.
Prove $\log x≤x-1 \forall x>0$ with the mean value theorem
Why don't you apply your own idea to the original problem, $g(x)=x-1-\log x\ge0$? If $x>1$, $g(x)=g(x)-g(1)=g'(c)(x-1)=(1-1/c)(x-1)\ge0$, because $c\ge1$. If $x<1$, both factors are non-positive (then, $c\le1$), and the argument is still valid.
What is the maximum number of vectors in $\mathbb C^d$ with the "uniform overlap property"?
Such a set $\{v_1,v_2,\dots,v_N\}\subset\mathbb C^d$ of unit vectors having the "uniform overlap property" contains, as indicated, at most $d^2$ elements, because the associated rank-one projectors $p_i=v_i\langle\,\cdot\,|v_i\rangle\in M_d(\mathbb C)$ are necessarily linearly independent, and $\,\dim M_d(\mathbb C)=d^2.$ Before showing this, notice that $$\operatorname{trace}(\,p_ip_j) \:=\:\begin{cases}\langle v_j|v_i\rangle\,\langle v_i|v_j\rangle\:=\:C& \text{if }i\ne j\,,\\ 1 & \text{if }i=j\,.\end{cases}$$ And furthermore that $0<C<1$, since $C=1$ would unmask the $v_i$ to be pairwise linearly dependent, thus multiples of each other. Consider the ansatz $\,0=\sum_{i=1}^N\mu_ip_i$, then multiply by $p_j$, and take the trace. This yields the linear equation system $$\begin {pmatrix} 1 & C & \cdots & C \\ C & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & C\\ C & \cdots & C & 1 \end {pmatrix}\, \begin {pmatrix} \mu_1\\ \mu_2 \\ \vdots\\ \mu_N \end{pmatrix}\:=\:\begin {pmatrix} 0\\ 0 \\ \vdots\\ 0\end {pmatrix}$$ whose determinant equals $(1+(N-1)C)\,(1-C)^{N-1}$, cf. Marc van Leeuwen's summary, hence is positive in our case. Thus the $\mu_i$ have to be zero, proving the linear independence of the $p_i$. $\;\blacksquare$ When looking at the linked paper & hopping a bit further I'd conclude with: The existence of maximal sets with the uniform overlap property is a very much harder question, and in particular not solved for all $d$.
if $AB=AC=EF,AE=BE,\angle BAC=120^{\circ}$ then find $\angle ABE$
Let $\angle EBA = \alpha$. Let $A$ and $X$ be symmetric with respect to $BE$. Then $BX=AB$. Also note that $BE=AE=EX$, so $A,B,X$ lie on the circle with center $E$ and radius $AE$. Let $Y$ be on $AF$ and $AY=AB$. Since $\angle BAY = 60^\circ$, $\triangle BAY$ is equilateral. Thus $BY=AB$. Therefore $A,X,Y$ lie on the circle with center $B$ and radius $AB$. Since $AE=BE$ and $AY=BY$, triangles $AYE$, $BYE$ are congruent (sss). In particular $\angle AYE = \angle EYB = \frac 12 \angle AYB = 30^\circ$. We have $$\angle AXY = \frac 12 (360^\circ - \angle YBA) = 150^\circ = 180^\circ - \angle AYE = \angle EYF.$$ Moreover $$\angle XYA = \frac 12 \angle XBA = \alpha$$ and $$\begin{align*}\angle FEY & = \angle BEY - \angle BEF = \\ & = (180^\circ - \angle YBE - \angle EYB) - 90^\circ = \\ & = 180^\circ - (60^\circ - \alpha) - 30^\circ - 90^\circ = \\ & = \alpha, \end{align*}$$ therefore $$\angle XYA = \angle FEY.$$ It follows that triangles $AXY$ and $FYE$ are similar. Their similarity ratio is $\dfrac{AY}{EF} = 1$ so in fact they are congruent. Thus $EY=XY$. Using the fact that $EXY$ is isosceles we get \begin{align*}180^\circ & = \angle XYE + 2 \angle YEX = \\ & = 30^\circ + \alpha + 2(\angle YEA - \angle XEA) = \\ & = 30^\circ + \alpha + 2\angle BEY - 2\angle XEA = \\ & = 30^\circ + \alpha + 180^\circ + 2 \alpha - 8 \alpha = \\ & = 210^\circ - 5 \alpha, \end{align*} therefore $$\alpha=6^\circ.$$
Making a substitution to reach a Chini/Ricatti ODE
Let $$\eqalign{V(t) &amp;= \int_0^t {\frac {12{s}^{2} \; ds}{3\,{\pi}^{2}\sqrt {2}+4\,{s}^{2}+4\,{s}^{3}}}\cr &amp;= \sum_r \frac{3 r}{3 r + 2} \ln(1-t/r)} $$ where the sum is over the roots of the cubic polynomial $3 \pi^2 \sqrt{2} + 4 x^2 + 4 x^3$. Then the solution is given implicitly by $$ V(h(x)/x^{1/3}) + \ln(x) = 0 $$
Bounded linear transformation theorem
Why is the completion a normed space by definition? The completion of a normed vector space is its completion as a metric space, with the metric generated by the norm. If this completion is a Banach space, i.e. a complete normed vector space, we can somehow prove this statement, can't we?
Good Textbook on Matrices
Like I said in my comment above, it's a little unclear in what capacity you're familiar with "abstract" linear algebra, but to gain a good clear understanding of matrix operations I'd look at the following books. Basic Introductions These are books that I would consider beginner-level texts. I'm assuming you're looking for something harder than these, but I'll include them for completeness. Linear Algebra and its Applications by Gilbert Strang is a very popular text. This takes linear algebra from mostly an engineering and applications perspective. Linear Algebra Done Right by Sheldon Axler is a great abstract/proof-based approach to the subject. This is a great book for learning basic proof methods in linear algebra and it contains within it the seeds of functional analysis. The No Bullshit Guide to Linear Algebra by Ivan Savov is a great compendium of everything you may need to know about lin alg for undergraduate applications. I particularly like the organization of the book, but it may be sparse in certain respects. Matrix Analysis More advanced books that deal with matrix analysis, which could be used for calculus of matrices, more advanced decompositions, and also bridging the way into functional analysis/operator theory. Matrix Analysis by Horn and Johnson contains almost encyclopedic knowledge of matrix decompositions. Matrix Analysis by Rajendra Bhatia is similarly a good book along these lines as well. Linear Algebra and its Applications by Peter Lax is a good advanced book that has a mix of theory and applications.
Finding the billionth number in the series: $2, 3, 4, 6, 9, 13, 19, 28, 42, \ldots $?
The related series $$x_n=\lceil\frac32x_{n-1}\rceil,\ x_0=1$$ is studied at MathWorld. Since for all integers $n$, $\lceil\frac32n\rceil=\lfloor\frac32(n+1)\rfloor-1$, we can rewrite this as $x_n+1=\lfloor\frac32(x_{n-1}+1)\rfloor$, and $x_0+1=2$; thus we have $a_n=x_n+1$ for all $n$. That page (links to a paper that) derives that there exists a real number $K$ such that for all $n$, $$x_n=\lceil K(3/2)^n\rceil,$$ which tells us that the sequence very nearly follows the expected power law, although the accumulated roundoff error $K$ is hard to calculate in advance. So let's take the direct route. Define the sequences $a_{k+1}^+=\frac32a_k^+$, $a_{k+1}^-=\frac32a_k^--1$, where the initial values set $a_m^+=a_m^-=a_m$ for some fixed basepoint $m$. Then it is easy to show that $$a_n^+=\Big(\!\frac32\!\Big)^{n-m}a_m;\qquad a_n^-=\Big(\!\frac32\!\Big)^{n-m}(a_m-2)+2.$$ Now we have the inequality $a_n^-\le a_n\le a_n^+$ because $x-1\le\lfloor x\rfloor\le x$, so this directly gives bounds on $a_n$ for $n=10^9$. Since you want around 10 digits of accuracy, we need to pick an $a_m\ge 2\cdot 10^{10}$: for example, $a_{58}=26510400994$. Then we have: $$\log_{10}(a_m-2)\le\log_{10}a_n-(n-m)\log_{10}(3/2)\le\log_{10}a_m$$ $$\log_{10}26510400992\le\log_{10}a_{10^9}-(10^9-58)\log_{10}(3/2)\le\log_{10}26510400994$$ $$176091259.2658045137\underline{02}\le\log_{10}a_{10^9}\le176091259.2658045137\underline{34}$$ Thus the exponent of $a_{10^9}$ is $176091259$, and the fractional part is $10^{0.2658045137}\approx1.844185120$ (where only the digits that agree for both bounds are shown). Putting it all together, we have $$a_{10^9}=1.844185120\dots\times10^{176091259}.$$ By repeating this process with bigger basepoints, you can confirm as many digits as you want. Here's a few more correct digits: $$a_{10^9}=1.844185120759922192245258053300812265366889206992486395592885\times10^{176091259}.$$
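Here is a sketch (my addition) of that computation in exact integer arithmetic; it reproduces $a_{58}$ and the two bounds on $\log_{10}a_{10^9}$:

```python
from math import log10

x = 1                            # x_0 = 1, and a_n = x_n + 1
for _ in range(58):
    x = (3 * x + 1) // 2         # ceil(3x/2) in integer arithmetic
a_m, m, N = x + 1, 58, 10**9     # a_m should equal 26510400994

lo = log10(a_m - 2) + (N - m) * log10(1.5)
hi = log10(a_m) + (N - m) * log10(1.5)
print(a_m, lo, hi)
# Double precision limits the exponent to ~8 reliable digits; use
# higher-precision logs (e.g. mpmath) to confirm more digits.
```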
Use Linear approximation to estimate a function
The statement "the form for linear approximation is $f'(x)\ast dx$" is too vague to be useful. The definition of the derivative says that when $h$ is small, $$ \frac{f(x+h) - f(x)}{h} \approx f'(x). $$ Multiply that approximation by $h$ and rearrange to get $$ f(x+h) \approx f(x) + h \times f'(x). $$ Now cleverly choose $f$, $x$ and $h$ to get the approximation you want. If you must use differentials (I wouldn't), then $dx$ is $h$, the small change in $x$, and $dy$ is the corresponding small change in $f(x)$.
A problem about cyclic groups.
Here are some hints for a possible way to prove this: 1) Show that for all $n$ dividing $|G|$, there is at most one cyclic subgroup of cardinality $n$ in $G$. Call it $H_n$ (when it exists). 2) Look at the map $\Psi: G\to \{\text{cyclic subgps} \}$, $x\mapsto \langle x \rangle$. Compute $|\Psi^{-1}(H_n)|$. 3) Write an equation giving $|G|$ in terms of $|\Psi^{-1}(H_n)|$, and compare with a famous formula involving Euler's totient function.
Complex function with infinitely many zeros on bounded domain
Let me put this as an answer to bring all the comments together and make things clear. As a concrete example, $f(z)=\sin \frac{\pi}{1-z}$ is the easiest example of a (nontrivial) holomorphic function on the unit disc that has infinitely many zeroes $z_n=1-1/n$. However $\sum (1-(1-1/n))=\infty$, and by a general result that forces $f$ to be unbounded, which is clear since it has an essential singularity at $1$. If the zeroes were, say, $1-1/n^2$, or any $z_n$ with $\sum (1-|z_n|) < \infty$, one can actually construct a bounded function on the unit disc with zeroes precisely at the $z_n$ (Blaschke products). That is one of the simplest theorems of the type: if $f$ satisfies some growth condition (here, bounded on the unit disc), then its zeroes satisfy some growth restriction (here $\sum (1-|z_n|) < \infty$), and those restrictions are also sufficient. Going even more general, we know that given any sequence $z_n$ in the plane with no (finite) accumulation point, one can construct an entire function $f$ with zeroes precisely at the $z_n$, and then there is a factorization theorem that tells us that all such functions are of a specific form (though one needs growth restrictions like finite order for the theorem to be useful, as otherwise there are too many choices). One can ask the same about any domain in the plane, bounded or not, and indeed the analogous result is true, though it requires some subtle topological arguments in full generality, while the factorization theorem is even less useful outside specific cases like the unit disc and bounded functions. So given any domain (open connected) $G$ and any sequence $z_n$ with no accumulation point in $G$, there is $f$ holomorphic on $G$ with zeroes precisely at the $z_n$.
Probability from 2 continuous random variables with joint density
Sometimes figures are worth 1000 words:
choosing a uniform random permutation of cards
I'm not sure who "they" refers to in your question, but you're right that just randomly picking the cards one by one will work. This is essentially the Fisher-Yates shuffle in its historic, paper-and-pencil form. For computer use the improvement given at the Wikipedia article (due to Durstenfeld originally, and popularized by Knuth - I've heard this called the "Fisher-Yates-Knuth shuffle") is better (i.e., faster); a sketch of it appears below. A Markov chain method is not really necessary here - the Markov chain methods for sampling random objects from some set are useful when you don't have an explicit algorithm for generating the objects, and are only approximate (although you can get an approximation as good as you want if you wait long enough).
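For reference, a minimal sketch (my addition) of the Durstenfeld variant:

```python
import random

def shuffle(deck):
    """Fisher-Yates (Durstenfeld): each of the n! orderings is equally likely."""
    for i in range(len(deck) - 1, 0, -1):
        j = random.randint(0, i)          # uniform on 0..i inclusive
        deck[i], deck[j] = deck[j], deck[i]
    return deck

print(shuffle(list(range(10))))
```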
$x^{(p-1)/d}$ takes d distinct values
Each non-zero $x^{(p-1)/d}$ is a root of $x^d\equiv 1\pmod{p}$. So, there are at most $d$ distinct values of $x^{(p-1)/d}$.
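A quick computational illustration (my addition, with $p=13$); here the count comes out exactly $d$, since the multiplicative group mod $p$ is cyclic:

```python
p = 13
for d in (1, 2, 3, 4, 6, 12):           # the divisors of p - 1
    values = {pow(x, (p - 1) // d, p) for x in range(1, p)}
    print(d, len(values))               # prints d, d each time
```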
Area double integral over a semicircle domain
You have the wrong integrand. For calculating area it should be $$\iint_D1\,dx\,dy$$
Convex combinations of max and min
Denote by $\def\Prob{\mathop{\rm Prob}}\Prob(X)$ the set of all probability measures on $X$, which is a weak$^*$ compact convex subset of the dual of the bounded measurable functions on $X$. Given a bounded measurable $g \colon X \to \mathbb R$, the map $\phi_g \colon \Prob(X) \to \mathbb R$, $\phi_g(p) = \int_X g \, dp$ is an affine weak$^*$ continuous functional. Now, as $P$ and $Q$ are weak$^*$ compact, the maxima exist, let $p \in P$, $q \in Q$, and $r \in (1-\alpha)P + \alpha Q$, such that $$ \phi_g(p) = \max_P \phi_g, \quad \phi_g(q) = \max_Q \phi_g, \quad \phi_g(r) = \max_{(1-\alpha)P + \alpha Q} \phi_g $$ Then $(1-\alpha)p + \alpha q \in (1-\alpha)P +\alpha Q$, hence $$ \phi_g\bigl((1-\alpha)p + \alpha q\bigr) \le \phi_g(r) $$ which gives, as $\phi_g$ is affine $$ (1-\alpha)\phi_g(p) + \alpha \phi_g(q) \le \phi_g(r) $$ On the other hand, for some $p' \in P$, $q'\in Q$, we have $(1-\alpha)p' + \alpha q' = r$, hence $$ \phi_g(r) = (1-\alpha)\phi_g(p') + \alpha\phi_g(q') \le (1-\alpha)\phi_g(p) + \alpha \phi_g(q) $$ which gives $$ (1-\alpha)\max_P \phi_g + \alpha \max_Q\phi_g = \max_{(1-\alpha)P+\alpha Q} \phi_g $$ as wished. The same formula for $\min$ can be proved along the same lines (or by replacing $g$ with $-g$ and noting that $\max {\phi_{-g}}= -\min \phi_g$).
Series - series that looks like the product of hypergeometric functions
Please check and make (constructive) comments and corrections. I usually get some strange dyslexia while thinking. We start with a few simple formulas: $\left(\gamma+\psi\right)_{n}=\frac{\Gamma\left(\gamma+\psi+n\right)}{\Gamma\left(\gamma+\psi\right)}$ $\Gamma(z)\cdot\Gamma\left(1-z\right)=\frac{\pi}{sin\left(\pi\cdot z\right)}$ And for future compression/representation: Lemma 1. $e^{\beta\cdot t}\cdot_{p}F_{q}((a);(b);\lambda\cdot t)={\displaystyle \sum_{k=0}^{\infty}\frac{t^{k}}{k!}\cdot\beta^{k}\cdot _{p+1}F_{q}\left(-k,\left(a\right);\left(b\right);-\frac{\lambda}{\beta}\right)}$ Proof. Consider a general convolution term and evaluate the coefficient of $\frac{t^{k}}{k!}$ $e^{\beta\cdot t}\cdot_{p}F_{q}((a);(b);\lambda\cdot t)={\displaystyle \sum_{k=0}^{\infty}}{\displaystyle \sum_{l=0}^{k}\frac{\left(a\right)_{l}}{\left(b\right)_{l}}\frac{\left(\beta\cdot t\right)^{k-l}}{l!}\cdot\frac{\left(\lambda\cdot t\right)^{l}}{\left(k-l\right)!}}$ ${\displaystyle \sum_{k=0}^{\infty}\frac{t^{k}}{k!}}{\displaystyle \sum_{l=0}^{k}}\frac{k!}{l!\cdot\left(k-l\right)!}\cdot\beta^{k}\cdot\frac{\left(a\right)_{l}}{\left(b\right)_{l}}\cdot\left(\frac{\lambda}{\beta}\right)^{l}$ $ {\displaystyle \sum_{k=0}^{\infty}\frac{t^{k}}{k!}}{\displaystyle \sum_{l=0}^{k}}\beta^{k}\cdot\frac{\left(-k,\left(a\right)\right)_{l}}{\left(b\right)_{l}}\cdot\frac{\left(-\frac{\lambda}{\beta}\right)^{l}}{l!}$ QED Note that the splitting of $\beta$ is solely for the purpose of standard generalized Hypergeometric representation; i.e. don't let $\beta^{k}$ wander off unless you realize the risks. Now to the derivation $\Gamma\left(\alpha+2+n\right)=\frac{\pi}{\Gamma\left(1-\left(\alpha+2+n\right)\right)\cdot sin\left(\pi\cdot\left(\alpha+2+n\right)\right)}=\frac{\pi}{\Gamma\left(-\left(\alpha+1+n\right)\right)\cdot sin\left(\pi\cdot\left(\alpha+2+n\right)\right)}$ Thus the front end is: $\frac{sin\left(\pi\cdot\left(\alpha_{1}+2+n\right)\right)}{sin\left(\pi\cdot\left(\alpha_{3}+2+n\right)\right)}\cdot\frac{\Gamma\left(-\left(\alpha_{1}+1+n\right)\right)}{\Gamma\left(-\left(\alpha_{3}+1+n\right)\right)}$ And the Confluent Hypergeometric Function is $$_{1}F_{1}\left(-\left(\alpha_{1}+1+n\right);-\left(\alpha_{3}+1+n\right);\beta_{3}\cdot x\right)={\displaystyle \sum_{k=0}^{\infty}\frac{\frac{\Gamma\left(-\left(\alpha_{1}+1+n\right)+k\right)}{\Gamma\left(-\left(\alpha_{1}+1+n\right)\right)}}{\frac{\Gamma\left(-\left(\alpha_{3}+1+n\right)+k\right)}{\Gamma\left(-\left(\alpha_{3}+1+n\right)\right)}}\cdot\frac{\left(\beta_{3}\cdot x\right)^{k}}{k!}}$$ Thus we have: $${\displaystyle \sum_{n=0}^{\infty}}{\displaystyle \sum_{k=0}^{\infty}}\frac{sin\left(\pi\cdot\left(\alpha_{1}+2+n\right)\right)}{sin\left(\pi\cdot\left(\alpha_{3}+2+n\right)\right)}\cdot\frac{\Gamma\left(k-\alpha_{1}-1-n\right)}{\Gamma\left(k-\alpha_{3}-1-n\right)}\cdot\left(\frac{\beta_{1}}{\beta_{3}}\right)^{n}\cdot\frac{\left(\beta_{3}\cdot x\right)^{k}}{k!}$$ For integer $\alpha$ the sin()'s basically cancel and the ratio can be replaced with something like $(-1)^{\alpha_{1}-\alpha_{3}}$. A standard representation: This part is incomplete/incorrect. Please finish or delete it: or I will later. (Hint: you have to add $\frac{(1)}{(p-k)!}$ and some other shaping (I think).)
Now we start to rewrite it and use the Lemma to get a standard (more or less) convolution representation: Let $p=k-n$; $n=k-p$. ${\displaystyle \sum_{k=0}^{\infty}}{\displaystyle \sum_{p=0}^{\infty}}\frac{sin\left(\pi\cdot\left(\alpha_{1}+2+n\right)\right)}{sin\left(\pi\cdot\left(\alpha_{3}+2+n\right)\right)}\cdot\frac{\Gamma\left(p-\alpha_{1}-1\right)}{\Gamma\left(p-\alpha_{3}-1\right)}\cdot\left(\frac{\beta_{1}}{\beta_{3}}\right)^{p-k}\cdot\frac{\left(\beta_{3}\cdot x\right)^{k}}{k!}$ ${\displaystyle \frac{\Gamma\left(-\alpha_{1}-1\right)}{\Gamma\left(-\alpha_{3}-1\right)}\cdot\left(-1\right)^{\left(\alpha_{1}-\alpha_{3}\right)}\cdot\sum_{k=0}^{\infty}}{\displaystyle \frac{t^{k}}{k!}\sum_{p=0}^{\infty}\beta_{3}^{k}\cdot}\frac{\left(-\alpha_{1}-1\right)_{\left(p\right)}}{\left(-\alpha_{3}-1\right)_{\left(p\right)}}\cdot\left(\frac{\beta_{1}}{\beta_{3}}\right)^{p-k}$
A system with a hidden mistake
The second piece of information is unnecessary (and causes a problem). We know that the length of the diagonal of the square is $l\sqrt2$ and that the diagonals bisect each other. Only using the fact that the perimeter of $\bigtriangleup AOB$ is $24$: $$l+\frac{l\sqrt2}{2}+\frac{l\sqrt2}{2}=24,$$ i.e. $l(1+\sqrt2)=24$, which after rationalising gives: $$l = 24 \;(\sqrt2 - 1).$$
Regularity of PDEs, general question (Example: Kolmogorov equation)
You first need to define a suitable Hilbert space in which the states evolve. The state space should be chosen in such a way that for a particular boundary condition the PDE yields one and only one solution! The PDE here consists of a linear operator and a nonlinear one. Apply the Lumer-Phillips theorem to make sure the linear part is the generator of a (contraction) semigroup. Then look at the nonlinear part; there are many scenarios: a Frechet differentiable nonlinearity; a locally Lipschitz nonlinearity; a locally Lipschitz nonlinearity on the fractional powers of the state space (only if the semigroup is analytic, the so-called parabolic PDEs). Once you are done with well-posedness and uniqueness, you can define the adjoint system to be the system along which the inner product of the adjoint state with the original state remains constant. For a linear system everything comes out straightforwardly, and many properties carry over to the adjoint system. However, for a nonlinear PDE, the existence and uniqueness result cannot be transferred.
Is the set linear span of the family $\{ \sin nt\}_{n=1}^{\infty}$ dense in the space $L^1(-\pi, \pi)$ ? True/False
No. $\int_{-\pi}^{\pi} \sin (nt)\,dt=0$, so any linear combination of these functions also has integral $0$. If $f$ is a limit in $L^{1}$ of such linear combinations then $\int_{-\pi}^{\pi} f(t)\, dt=0$. In particular, the constant function $1$ cannot be approximated by linear combinations of $\{\sin (nt): n\geq 1\}$.
Maximize integral over line
Write $y=(\cos\theta,\sin\theta)$ and differentiate with respect to $\theta$. You get (please check!): $$ {dI\over d\theta}= \cos2\theta\int_{p_0}^{p_1}2(a_1-x_1)(a_2-x_2)\, dl -\sin2\theta\int_{p_0}^{p_1} \big((a_1-x_1)^2-(a_2-x_2)^2\big)\, dl. $$ That vanishes if $$ \tan2\theta= {\int_{p_0}^{p_1} 2(a_1-x_1)(a_2-x_2)\, dl \over \int_{p_0}^{p_1}\big((a_1-x_1)^2-(a_2-x_2)^2\big)\, dl }. $$
How to determine some points are inside or outside in triangle
Remember that the equation of a line, such as $x=0$ not only gives you a line, it also gives you a half-plane. The half-plane to the right of the $y$-axis is given by $x&gt;0$. Similarly, the half-plane above the $x$-axis is given by $y&gt;0$ and the half-plane below the line $4x+3y=60$ is given by $4x+3y&lt;60$. The intersection of these three half-planes is the interior of the triangle. So all you have to do is check whether (2,18) satisfies all three to find out if it lies inside the triangle. It obviously satisfies the first two, so you just have to check the third.
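If you want to automate the test, the three half-plane checks translate directly into code (my addition; the function name is arbitrary):

```python
def inside(x, y):
    # The three half-planes: x > 0, y > 0, and 4x + 3y < 60.
    return x > 0 and y > 0 and 4 * x + 3 * y < 60

print(inside(2, 18))   # False: 4*2 + 3*18 = 62 > 60, so (2,18) lies outside
```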
Under what conditions does $\forall x(\alpha \to \beta) \leftrightarrow (\forall x \alpha \to \forall x \beta)$ hold?
I don't have any proof, but here is my intuition: I doubt there is any general condition besides the obvious $$\forall x(\alpha \to \beta) \leftarrow (\forall x \alpha \to \forall x \beta). \tag{1}$$ The problem is that it might be true for many different reasons, e.g. the universe being of size one, or perhaps $\forall x \beta$ being true, which makes the implication trivial. Observe that $\forall x(\alpha \to \beta)$ means that a single $x$ satisfying $\alpha$ is enough to make $\beta$ true (some clarification: it does not mean that $\beta$ is true in general, it is true only for that particular $x$). On the other hand $\forall x \alpha \to \forall x \beta$ means that you need $\alpha$ to hold for all the $x$es to satisfy $\beta$. For example, let's assume the universe has more than one element; then: $$\forall x \Big(P(x) \to \exists y \big(P(x) \land P(y)\big)\Big)$$ would be of the first kind (because you can set $y = x$), while $$\forall x P(x) \to \forall x \exists y(P(x)\land P(y) \land x \neq y)$$ is of the second kind, because you need to know that there are at least two $x$es that satisfy $P$. So the general condition would be something like "from the $\forall x \alpha$ you need only one $x$ to make $\beta$ true", but that is exactly $(1)$. I hope it explained something ;-)
Let $n$ be a positive integer. Show that if $2^n -1$ is a prime number, then $n$ is a prime number.
It's probably easier to prove the contrapositive: if $n$ is composite then $2^n - 1$ is composite. So suppose $n$ is composite, i.e. $n=ab$ with $a$ and $b$ integers greater than $1$. Using the factorization $$ y^a - 1 = (y - 1)(y^{a-1} + y^{a-2} + \cdots + y + 1) $$ and writing $2^n - 1 = 2^{ab} - 1 = (2^b)^a - 1$, we have $$ 2^n - 1 = (2^b - 1)(2^{b(a-1)} + 2^{b(a-2)} + \cdots + 2^b + 1). $$ Specifically, I let $y=2^b$ in the factorization I gave earlier. Then $2^b-1$ divides $2^n-1$ but $2^b - 1$ is strictly less than $2^n-1$, as $b<n$. So $2^n-1$ is composite.
General Solution for the Gravity Between Two 3D Triangles
This computation is essentially the same as computing the "Form Factor" in rendering (for computer graphics). It turns out that the integrals (once you include the direction of the force as you should) are not nice (i.e., involve dilogarithms). Here's a paper that describes the computation in some detail.
For which $n$ is $x^{n - 1} \sin{\dfrac{1}{x}}$ differentiable for all $x$?
If $x\neq 0$ then clearly the function $f$ is differentiable at $x$. So it is enough to check for which $n$ the function is differentiable at the point $x=0$. Thus we have $$f'(0)=\lim_{v\to 0} \frac{f(v) -f(0)}{v} =\lim_{v\to 0} v^{n-2}\sin (v^{-1} ),$$ but the last limit exists if and only if $n-2>0$. So the answer is $n>2$.
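A numerical illustration of the borderline case (a sketch): for $n=3$ the difference quotient $v^{n-2}\sin(v^{-1})$ shrinks to $0$, while for $n=2$ it keeps oscillating.

import numpy as np

v = np.logspace(-1, -7, 7)   # v -> 0
for n in (3, 2):
    print(n, v ** (n - 2) * np.sin(1.0 / v))
# n = 3: values tend to 0, so f'(0) exists (and equals 0).
# n = 2: sin(1/v) oscillates in [-1, 1]; the limit does not exist.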
If $K$ is a field, prove the polynomial ring $K[x]$ has infinitely many maximal ideals.
HINT: $K[x]$ is a principal ideal domain, and the maximal ideals are exactly those generated by irreducible polynomials. So it suffices to show there are infinitely many pairwise non-associate irreducibles: mimic Euclid's proof and, given irreducibles $p_1,\dots,p_n$, consider an irreducible factor of $p_1\cdots p_n+1$.
Rank of a matrix shifted by the all-ones matrix over any field
Hint. Let $A_1, \dots, A_n$ be column vectors, and let $E$ be a vector consisting of all ones. Prove that the rank of the system $(A_1 + E, \dots, A_n + E)$ is at most one more than the rank of the system $(A_1, \dots, A_n)$.
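A quick random test over the reals (the hint itself is field-independent); a sketch:

import numpy as np

rng = np.random.default_rng(1)
J = np.ones((5, 5), dtype=int)          # the all-ones matrix
for _ in range(1000):
    r = int(rng.integers(1, 4))
    A = rng.integers(-3, 4, (5, r)) @ rng.integers(-3, 4, (r, 5))  # rank <= r
    assert np.linalg.matrix_rank(A + J) <= np.linalg.matrix_rank(A) + 1
print("no counterexample found")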
multiplication on left coset
$g(xH)=(gx)H$ is simply the associativity of the group law. As for $\DeclareMathOperator{\stab}{Stab}\stab(xH)$: first check, again by associativity, that $xHx^{-1}\subset \stab(xH)$. Conversely, if $g$ stabilises $xH$, this means $\;g(xH)=(gx)H\subset xH$, whence $\;x^{-1}gxH\subset x^{-1}xH=H$, i.e. $x^{-1}gx\in H$, whence (left multiplying both sides by $x$ and right multiplying by $x^{-1}$): $$g\in xHx^{-1}$$
Does there exist an analytic function from $D$ to $D$?
Find automorphisms of $D$ that map $\frac 34\mapsto 0$ and $0\mapsto \frac 34$, respectively. From these and $f$ construct a holomorphic $g\colon D\to D$ with $g(0)=0$ and $g'(0)=???$. Similarly for 4.
Equation for function that's differentiable and continuous
Hint: The right hand side looks like the derivative of $$xf(x)$$ evaluated at $c$. How about using the theorem on this new function $xf(x)$?
Limitation of Henkin semantics in second-order logic
In a Henkin model, the second-order quantifiers are not required to range over all possible properties/relations over the domain of objects. In other words, the domain of the second-order quantifiers can be just some of the properties/relations which could in principle be defined over the domain of objects. So take e.g. a Henkin model with (i) infinitely many objects in the domain of the first-order quantifiers, but where (ii) the only (two-place) relations in the domain of the second-order quantifiers are reflexive. Then $\lambda_\infty$ is false (because there doesn't exist a relation $X$ which is irreflexive etc.); so this model indeed satisfies $\Sigma$. Think of it this way. In the Henkin semantics, we can have fewer relations in the domain of the second-order quantifiers than in the full semantics: that's why it is easier for a second-order existential quantification to fail (easier for its negation to be satisfied) when interpreted Henkin-style.
Does a continuous extension exist under specific conditions
Hint: What is the simplest function you can think of that blows up at $1/\sqrt 2?$
Probability on product spaces
That's a good question. The point is that if you have a random variable $$ X:(\Omega,\mathscr F,P)\to (\Bbb R,\mathscr B(\Bbb R)) $$ and you need to consider, say, two copies of this variable $X_1$ and $X_2$, a direct way would be to define the vector $\tilde X = (X_1,X_2)$ on a product space $(\Omega,\mathscr F,P)\otimes (\Omega,\mathscr F,P)$. This is intuitive, it always works, and IMHO it's easier to compute the probabilities you've mentioned over a product space - you have a clear image of the diagonal in your mind when dealing with $\{X_1 = X_2\}$ and of a subdiagonal triangle when dealing with $\{X_2\leq X_1\}$. The latter makes the corresponding double integrals easier to compute. On the other hand, formally speaking, you don't have to construct a product space in most practical cases. That is, most of the probability spaces we're dealing with are standard. For example, since $X$ is a real-valued random variable, you can always take $$ (\Omega,\mathscr F,P) = ([0,1],\mathscr B([0,1]),\lambda) $$ where $\lambda$ is the Lebesgue measure. As a result, the product space is isomorphic to the original space and hence any random vector defined over the product space can be defined over the original space. However, I wouldn't suggest going that way, for the following reasons: It does not always work: if $\Omega$ has just two elements and $\mathscr F$ is its powerset, then you can't define $\tilde X$ over the original space. I disagree with your friend that it is easier to compute probabilities when defining $\tilde X$ over the original state space, rather than over the product space. It is less intuitive, more technically involved and unnecessary. Please tell me whether the answer is clear to you.
Show $ \int_0^1 \int_0^1 \log \left| x-y\right|\,dx\,dy>-\infty.$
By setting $f(x,y)=\log\left|x-y\right|$ we have $f(x,y)=f(y,x)$ and $f\leq 0$ for any $(x,y)\in(0,1)^2$, hence $$ \iint_{(0,1)^2}f(x,y)\,dx\,dy = 2\int_{0}^{1}\int_{0}^{x}f(x,y)\,dy\,dx \stackrel{y\mapsto xz}{=} 2\iint_{(0,1)^2}x\,f(x,xz)\,dz\,dx$$ and the original integral boils down to: $$ 2\int_{0}^{1}\int_{0}^{1}\big(x \log(1-z)+x\log x\big)\,dx\,dz = 2\int_{0}^{1}x(\log x-1)\,dx=\color{red}{-\frac{3}{2}}.$$
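The final value is easy to confirm symbolically (a sketch with sympy), using the inner integral $\int_0^x \log(x-y)\,dy = x\log x - x$:

from sympy import symbols, log, integrate

x = symbols('x', positive=True)
# double integral = 2 * int_0^1 (x*log(x) - x) dx
print(2 * integrate(x * log(x) - x, (x, 0, 1)))   # -3/2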
How to simplify summation $(1\cdot2) + (2\cdot3) + (3\cdot4) + (4\cdot5) + (5\cdot6) + ... + (N\cdot(N+1))$ in terms of N?
By the comment, your sum is: $$ \sum_{1 \le k \le n} k (k + 1) = \sum_{1 \le k \le n} k^2 + \sum_{1 \le k \le n} k = \frac{n (n + 1) (2 n + 1)}{6} + \frac{n (n + 1)}{2} = \frac{n (n + 1) (n + 2)}{3} $$ Or even simpler, with $k^{\overline{m}} = k (k + 1) \ldots (k + m - 1)$, you have $$ \sum_{1 \le k \le n} k^{\overline{m}} = \frac{n^{\overline{m + 1}}}{m + 1} $$ In your case $m = 2$.
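A quick check of the closed form (sketch):

def direct(n):
    return sum(k * (k + 1) for k in range(1, n + 1))

def closed(n):
    return n * (n + 1) * (n + 2) // 3

assert all(direct(n) == closed(n) for n in range(1, 200))
print(closed(10))   # 440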
Prime Number Congruence Conjecture
I've found two numbers that fit your requirements exactly - 6 and 109505970 - but if we loosen them a bit then we get an interesting sequence! The numbers we're trying to find need to satisfy the following conditions. If $n$ fits the pattern, then:

- $n+(n-1)$ is prime
- $n+(n+1)$ is prime
- $n(n-1) - 1$ is prime
- $n(n-1) + 1$ is prime
- $n(n+1) - 1$ is prime
- $n(n+1) + 1$ is prime

Below 10 million, there are 25 numbers that fit this pattern: $3$, $6$, $21$, $1365$, $86604$, $185535$, $411501$, $759759$, $833799$, $1192290$, $1297815$, $2092719$, $2130324$, $2876160$, $3469311$, $3515799$, $5268606$, $5335959$, $7279791$, $7544901$, $7749435$, $7787661$, $7994085$, $8067501$, and $9954141$. This sequence ignores the "multiple of 6" condition and the "$n-1$ and $n+1$ have to be twin primes" condition. If you add the condition that $n-1$ and $n+1$ also have to be prime, then the only numbers that I've found that satisfy this condition are $6$ and $109505970$, and $109505970$ is actually a multiple of 6!
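For reproducibility, here is a sketch of the search (using sympy's isprime); it recovers the loosened sequence above:

from sympy import isprime

def fits(n):
    return (isprime(2 * n - 1) and isprime(2 * n + 1)
            and isprime(n * (n - 1) - 1) and isprime(n * (n - 1) + 1)
            and isprime(n * (n + 1) - 1) and isprime(n * (n + 1) + 1))

print([n for n in range(2, 200000) if fits(n)])
# [3, 6, 21, 1365, 86604, 185535]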
How do you convert a ternary to a novenary?
Using your notation, $$x = \sum_{i=1}^{\infty}t_i3^{-i} = \sum_{i=1}^{\infty}s_i9^{-i}$$ Where $0.s_1s_2s_3...$ is the 9-ary expansion. Simply substituting $s_i = 3t_{2i-1}+t_{2i}$ into the right sum gives us $$\sum_{i=1}^{\infty}(3t_{2i-1}+t_{2i})9^{-i} = \sum_{i=1}^{\infty}(3t_{2i-1}+t_{2i})3^{-2i} = \sum_{i=1}^{\infty}3^{-(2i-1)}t_{2i-1}+3^{-2i}t_{2i} = $$ $$\sum_{i=1}^{\infty}t_i3^{-i}$$ The equality checks out, and so this is a valid way to calculate the 9-ary expansion from the 3-ary expansion. As an example, let's look at how you would convert $0.122101_3$ into 9-ary. $12_3 = 5_9$, $21_3=7_9$, $01_3=1_9$, and so $0.122101_3 = 0.571_9$.
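The digit-pairing rule is straightforward to implement; a sketch:

def ternary_to_novenary(t):
    # s_i = 3*t_(2i-1) + t_(2i); pad with a trailing 0 if needed
    t = list(t) + [0] * (len(t) % 2)
    return [3 * t[i] + t[i + 1] for i in range(0, len(t), 2)]

print(ternary_to_novenary([1, 2, 2, 1, 0, 1]))   # [5, 7, 1], i.e. 0.571_9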
How to find a B-Spline representation of an Akima spline?
There are two ways (at least): a brute force way and a clever way.

The brute force way just uses interpolation techniques. The Akima curve is a cubic spline that is $C_1$ at each knot. So, all of its interior knots are double knots (multiplicity two). So, if you have $n+1$ points, $(x_0, x_1, \ldots, x_n)$, your knot sequence will have the form $(x_0, x_0, x_0, x_0, x_1, x_1, \ldots, x_i, x_i, \ldots , x_n, x_n, x_n, x_n)$. So, you have $2n+6$ knot values, and using these, you can construct $2n+2$ cubic b-spline basis functions, $B_0, \ldots, B_{2n+1}$. The b-spline we want will be $$ f(x) = \sum_{i=0}^{2n+1}\alpha_i B_i(x). $$ We can calculate the coefficients $\alpha_0, \ldots, \alpha_{2n+1}$ by interpolation. Choose $2n+2$ values $z_0, \ldots, z_{2n+1}$, and evaluate your Akima curve at these values to get $2n+2$ ordinate values $y_0, \ldots, y_{2n+1}$. Then solve the linear system $$ y_j = \sum_{i=0}^{2n+1}\alpha_i B_i(z_j) \quad (j = 0, 1, \ldots, 2n+1) $$ to get the b-spline coefficients $\alpha_0, \ldots, \alpha_{2n+1}$. You have to choose the $z_j$ values with a bit of care, or else you'll end up with a linear system that does not have a unique solution. The crucial point here is that the interpolation problem has a unique solution. Since the b-spline curve and the Akima curve are both solutions, they must be the same curve.

The clever way is to just fabricate the $\alpha$ coefficients using the end-points and end tangents of the cubic segments in the Akima curve. Suppose the points are $(x_0, x_1, \ldots, x_n)$, again, and let $(y_0, y_1, \ldots, y_n)$ and $(d_0, d_1, \ldots, d_n)$ be the values and first derivatives of the Akima curve at these points. The b-spline control points are then: \begin{align} y_0 \quad &; \quad y_0 + \tfrac13 d_0 (x_1 - x_0) \\ y_1 - \tfrac13 d_1 (x_1 - x_0) \quad &; \quad y_1 + \tfrac13 d_1 (x_2 - x_1) \\ y_2 - \tfrac13 d_2 (x_2 - x_1) \quad &; \quad y_2 + \tfrac13 d_2 (x_3 - x_2) \\ &\vdots \\ y_i - \tfrac13 d_i (x_i - x_{i-1}) \quad &; \quad y_i + \tfrac13 d_i (x_{i+1} - x_i) \\ &\vdots \\ y_n - \tfrac13 d_n (x_n - x_{n-1}) \quad &; \quad y_n \end{align} This gives you $2n+2$ coefficients, as before. The knot sequence is the same as in the brute force approach above. Here's a picture. The black points represent the coefficients. The red points are the breaks between the cubic segments. We have 5 original data points, so $n=4$. This means we will have $2n+2 = 10$ coefficients.
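Here is a sketch of the clever way in Python/scipy (the data points are hypothetical, and the values and slopes come from scipy's Akima1DInterpolator, but any $C^1$ piecewise-cubic source works):

import numpy as np
from scipy.interpolate import Akima1DInterpolator, BSpline

x = np.array([0.0, 1.0, 2.5, 4.0, 5.0])       # hypothetical data
y = np.array([0.0, 1.0, 0.5, 2.0, 1.5])
akima = Akima1DInterpolator(x, y)
d = akima.derivative()(x)                      # slopes at the breaks
n = len(x) - 1

# Knots: end knots 4 times, interior knots doubled (C^1 cubic).
t = np.concatenate(([x[0]] * 4, np.repeat(x[1:-1], 2), [x[-1]] * 4))

# Coefficients as in the answer: points on the tangent lines.
c = np.empty(2 * n + 2)
c[0], c[-1] = y[0], y[-1]
c[1] = y[0] + d[0] * (x[1] - x[0]) / 3
c[-2] = y[-1] - d[-1] * (x[-1] - x[-2]) / 3
for i in range(1, n):
    c[2 * i] = y[i] - d[i] * (x[i] - x[i - 1]) / 3
    c[2 * i + 1] = y[i] + d[i] * (x[i + 1] - x[i]) / 3

spline = BSpline(t, c, 3)
xx = np.linspace(x[0], x[-1], 500)
print(np.max(np.abs(spline(xx) - akima(xx))))  # ~ machine precision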
Comparison test for series $\sum_{n=1}^{\infty}\frac{n}{n^3 - 2n + 1}$
We need to start the summation somewhere after $n=1$, since the denominator is $0$ at $n=1$. We will show that $\sum_{n=2}^\infty \frac{n}{n^3-2n+1}$ converges. For large enough $n$, we have $n^3-2n+1> \frac{n^3}{2}$. This follows from general considerations. But more specifically, $n^3-2n+1> \frac{n^3}{2}$ for $n \ge 2$. This is because $n^3-2n\ge \frac{n^3}{2}$. To see that, note that the inequality $n^3-2n\ge \frac{n^3}{2}$ is equivalent to $\frac{n^3}{2}\ge 2n$, which is equivalent to $n^2\ge 4$. So for $n\ge 2$, we have $$\frac{n}{n^3-2n+1} < \frac{2}{n^2}.$$ And we know that $\sum_2^\infty \frac{2}{n^2}$ converges. Remark: We used a common trick. If you have any polynomial $P(x)=a_0x^n +a_1x^{n-1}+\cdots$, where $a_0\ne 0$, then for large enough $|x|$, we have $|P(x)|> \left|\frac{a_0 x^n}{2}\right|$. The dominant term is $\dots$ dominant.
RREF using mod 2 operations
After adding the first row to the second one, you get, as you wrote:$$\begin{bmatrix}1&1&1&0\\0&0&1&1\\0&0&1&1\end{bmatrix}.$$Then, after adding the second row to the first and to the third ones, you get:$$\begin{bmatrix}1&1&0&1\\0&0&1&1\\0&0&0&0\end{bmatrix}.$$And this matrix is in RREF.
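If you want to check such computations mechanically, here is a small sketch of row reduction over GF(2) (the starting matrix is reconstructed from the first step quoted above):

import numpy as np

def rref_mod2(M):
    # Gaussian elimination over GF(2): addition is XOR (i.e. mod 2).
    M = np.array(M, dtype=int) % 2
    pivot = 0
    for col in range(M.shape[1]):
        rows = [r for r in range(pivot, M.shape[0]) if M[r, col] == 1]
        if not rows:
            continue
        M[[pivot, rows[0]]] = M[[rows[0], pivot]]     # swap pivot into place
        for r in range(M.shape[0]):                   # clear the column
            if r != pivot and M[r, col] == 1:
                M[r] = (M[r] + M[pivot]) % 2
        pivot += 1
    return M

print(rref_mod2([[1, 1, 1, 0],
                 [1, 1, 0, 1],
                 [0, 0, 1, 1]]))   # recovers the RREF above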
Is this another valid delta-epsilon proof?
This is basically correct, though to get an actual proof you would need to flesh out the logical relationships between all your inequalities to show that $|x-9|<6\epsilon-\epsilon^2$ really does imply $|\sqrt{x}-3|<\epsilon$. Note though that to prove the limit you need $\delta>0$, so this only works as a value of $\delta$ if $\epsilon<6$. As long as $\epsilon\leq 3$, this is indeed the largest $\delta$ you can use. You can't use any larger $\delta$, since if you did, then $x=(3-\epsilon)^2$ would satisfy $0<|x-9|<\delta$ but $|\sqrt{x}-3|=\epsilon\not<\epsilon$. (The assumption that $\epsilon\leq 3$ is needed here so that $\sqrt{x}=3-\epsilon$ rather than $\epsilon-3$. If $\epsilon>3$, then it is actually impossible to have $\sqrt{x}\leq3-\epsilon$, and so the best possible $\delta$ will be $\delta=(3+\epsilon)^2-9=6\epsilon+\epsilon^2$.)
Zero set of the convolution of two integrable functions.
Recall that the support of a function is defined as $$supp(f)=\overline{\{x:f(x)\neq0\}}.$$ Assuming that the integral defining the convolution exists, it can be shown that $$supp(f*g)\subset \overline{supp(f)+supp(g)}.$$ So for the zero sets, $$supp(f*g)^c \supset \big(\overline{supp(f)+supp(g)}\big)^c.$$ Since $Z_f=f^{-1}(\{0\})$ and $supp(f)=\overline{Z_f^{\,c}}$, we have $$\big(\overline{Z_{f*g}^c}\big)^c\supset \big(\overline{\overline{Z_{f}^c}+\overline{Z_{g}^c}}\big)^c.$$
different conditional probabilities and their interpretation
"The father with three sons of green or brown eyes" problem can be modeled by the following set of elementary events: $$\Omega=\{(g,g,g),(g,g,b),(g,b,g),(g,b,b),(b,g,g),(b,g,b),(b,b,g),(b,b,b)\}$$ where the $i^{\text{th}}$ components of the triplets denote the eye color of the $i^{\text{th}}$ son. What abou the probabilities assigned to these elementary events? If we assume that the eye color of the brothers is independent and that color green is of probability $\frac14$ and then color brown is of probability $\frac34$ then we have the following probabilities: $$P:\left\{\frac1{4^3},\frac3{4^3},\frac3{4^3},\frac9{4^3},\frac3{4^3},\frac9{4^3},\frac9{4^3},\frac{27}{4^3}\right\}$$ in the order of the elementary events listed above. 1 "Given that I know [at least] one son has green eyes, the probability of at least two having green eyes is" a conditional probability of the following form: $$P(\{(g,g,g),(g,g,b),(g,b,g),(b,g,g)\}\mid \{(g,b,b),(b,g,b),(b,b,g),(g,g,g),(g,g,b),(g,b,g),(b,g,g)\})=$$ $$=\frac{P(\{(g,g,g),(g,g,b),(g,b,g),(b,g,g)\})}{P(\{(g,b,b),(b,g,b),(b,b,g),(g,g,g),(g,g,b),(g,b,g),(b,g,g)\})}=$$ $$=\frac{\frac1{4^3}+\frac3{4^3}+\frac3{4^3}+\frac3{4^3}}{1-\frac{27}{4^3}}=\frac{10}{37}.$$ That is, the first result is OK. 2 "Given that I know that the first son has green eyes, the probability of two or more having green eyes" is a conditional probability of the following form $$P(\{(g,g,g),(g,g,b),(g,b,g),(b,g,g)\}\mid \{(g,g,g),(g,g,b),(g,b,g),(g,b,b)\})=$$ $$=\frac{P(\{(g,g,g),(g,g,b),(g,b,g),(b,g,g)\}\cap\{(g,g,g),(g,g,b),(g,b,g),(g,b,b)\})}{P(\{(g,g,g),(g,g,b),(g,b,g),(g,b,b)\})}=$$ $$=\frac{P((g,g,g),(g,g,b),(g,b,g))}{P(\{(g,g,g),(g,g,b),(g,b,g),(g,b,b)\})}=$$ $$=\frac{\frac1{4^3}+\frac3{4^3}+\frac3{4^3}}{\frac1{4^3}+\frac3{4^3}+\frac3{4^3}+\frac9{4^3}}=\frac7{16}.$$ That is, the second result is OK as well.
Does redundancy removal in linear programming follow a distributive property?
The answer is negative; consider $$\min\{x_1 + x_2 : x \in A, x \in B\}$$ with $$A = \{x : x_1 \geq 1-0.25 x_2, x_1 \geq 4-4x_2, x_1 \geq 0.7\}$$ $$B = \{x : x_1 \geq 10 - 0.5 x_2, x \geq 0 \}.$$ In $A$, the last constraint is redundant since the optimum is at $(0.8,0.8)$. However, for the full problem the optimal solution is $(0.7,18.6)$, which depends on the last constraint in $A$.
A better general definition of a predicate
Very very generally, a predicate is something that expects zero or more objects as inputs and produces a truth value as output. Now, of course you have to specify what exactly that means. In particular, in first-order logic here are two possible definitions of predicates (but you cannot choose both!):

1. Simply a (well-formed) formula. The inputs are the free variables, and the output is the truth value of the formula (in a given model).

2. A function $f$ in the meta-system based on a formula $φ$ with numbered blanks that, when given terms as inputs, produces the formula obtained by substituting each blank numbered $k$ by the $k$-th input term. Under this definition a predicate applied to terms can still have free variables.

Also, there are things called predicate symbols, which are just symbols that can be used in a formula by writing it with the appropriate number of terms after it, usually placed in brackets. Of course, for every predicate symbol there is a corresponding predicate (no matter which definition above you choose) that 'behaves' the same way. But predicate symbols are not the same as predicates. However, for any first-order theory we can add in one predicate symbol for every predicate, like this: for each $k$-input predicate $P$, let $p$ be a new predicate symbol and add the axiom $\forall x_{1..k}\ ( P(x_{1..k}) \leftrightarrow p(x_{1..k}) )$. This can be proven to give a conservative extension of the original theory, meaning that any sentence over the original theory can be proven iff it can be proven over the extended theory. This is in fact how we formally justify definitions! For example over the theory of $PA$ we can let $even$ be a new predicate symbol such that: $$\forall n\ ( even(n) \leftrightarrow \exists k\ ( n=k+k ) ).$$ Since this is a conservative extension, we can now work in the extended theory and use $even$ anywhere we like without worrying that we can prove 'more' than we could before. A similar technique can be used for formulae that 'express functions', but that's for another time!
Non-Homeomorphicity of Compact surfaces
The key here is to abelianize (which essentially means look at $H_1$ and is exactly the surjection onto $\mathbb{Z}^{2g}$ you were talking about). Note that the product of commutators for the orientable surface when abelianized yields a trivial relation because $$a_1b_1a_1^{-1}b_1^{-1} \mapsto a_1 + b_1 - a_1 - b_1 = 0.$$ So the abelianization of $\pi_1(\Sigma_g) = H_1(\Sigma_g) = \mathbb{Z}^{2g}$. So if $\Sigma_g \not \cong \Sigma_{g'}$ then $\pi_1(\Sigma_g) \not\cong \pi_1(\Sigma_{g'})$. Now let us consider what happens when we abelianize $\pi_1(S_g)$. The relation $c_0^2...c_g^2 = 1$ turns into $2c_0 +...+ 2c_g = 0$, which I can write as $2(c_0 + ... + c_g) = 0$. So there is a non-zero element with $2$-torsion in this abelianization, which is not true in the abelianization of $\pi_1(\Sigma_g)$. So $\pi_1(\Sigma_g) \not \cong \pi_1(S_{g'})$. Note that we can rewrite the presentation for the abelianized group now by replacing $c_g$ with $c_0 + ... + c_g$. Basically, instead of saying the abelianization of $\pi_1(S_g)$ is the free abelian group with generators $c_0,...,c_g$ and relator $2(c_0+...+c_g) = 0$, we can instead write it as the free abelian group on $d_0,...,d_g$ with relator $2d_g = 0$, where $d_i = c_i$ for $i < g$ and $d_g = c_0+...+c_g$. This shows us that the abelianization of $\pi_1(S_g) = H_1(S_g) = \mathbb{Z}^{g} \oplus \mathbb{Z}_2$. Thus if $S_g \not \cong S_{g'}$ then $\pi_1(S_g) \not \cong \pi_1(S_{g'})$. Edit: I wasn't sure if you're familiar with homology so if not, ignore any mention of $H_1$. Also, if you're not familiar with abelianization, I can rewrite this to be more explicit!
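One way to double-check the abelianizations is via the Smith normal form of the relation matrix; a sketch with sympy (for, say, $g=3$):

from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

g = 3
R = Matrix([[2] * (g + 1)])   # abelianized relation 2(c_0 + ... + c_g) = 0
print(smith_normal_form(R, domain=ZZ))
# [2, 0, 0, 0]: cokernel Z^g + Z/2Z, which has 2-torsion, unlike Z^{2g}.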
fitting the content of an elliptical region in an image into the equivalent stretched circle
Let $f$ be the image in the ellipse, $g$ the image in the circle, and $\phi$ the coordinate mapping from the ellipse to the circle. Define the set of values you want to resample to in the circle and call them $(a_i,b_i)$, for $i=1, \dots, N$. Map the circle locations to the equivalent positions in the ellipse: $\phi^{-1}(a_i,b_i)\rightarrow (x_i,y_i)$. Now, you already have image data for a set of $(x,y)$ pairs in the ellipse, so let $f(x,y)$ be the value of the image at the point $(x,y)$. All you have to do is interpolate the known image values $f(x,y)$ onto the locations given by $(x_i,y_i)$, that is, $g(a_i,b_i) = f(x_i,y_i)$. Here is an example in MATLAB, though since it starts with circular coins and deforms an ellipse to a circle, it looks like we are deforming circles to ellipses. I hope that makes sense. I couldn't find a built-in image of an ellipse, so I used this one.

% Test image and its sample locations.
f = double( imread( 'coins.png' ) );
x = ( -1 : 2 / ( size( f, 2 ) - 1 ) : 1 );
y = ( -1 : 2 / ( size( f, 1 ) - 1 ) : 1 )';

% Define the output sample locations. We go bigger
% than the input so that we can see the ellipse
% deformed into a circle.
a = (-2:0.01:2);
b = (-2:0.01:2)';
[ a_mesh, b_mesh ] = meshgrid( a, b );

% The parameters of the ellipse that is mapped
% to the unit circle.
e_x = 1;
e_y = 1/2;

% Now we map the so-called circle coordinates
% to the ellipse coordinates.
theta = atan2( b_mesh, a_mesh );
r = reshape( sqrt( sum( a_mesh(:).^2 + b_mesh(:).^2, 2 ) ), size( a_mesh ) );
x_i = e_x * r .* cos( theta );
y_i = e_y * r .* sin( theta );

% Perform the interpolation.
g = interp2( x, y, f, x_i, y_i, 'linear' );

figure;
title( 'Circle Image' );
imagesc( a, b, g, [0 255] );
colormap( gray );
axis( 'equal' );

figure;
title( 'Ellipse Image' );
imagesc( x, y, f, [0 255] );
colormap( gray );
axis( 'equal' );
find the solution basis to $ty'-(t+2)y=-2t^2-2t$
We have $$ty'-(t+2)y=-2t^2-2t.$$ First we solve the homogeneous equation $$ty'-(t+2)y=0,$$ which gives us $$y_h(t)=C\,t^2 e^t.$$ After this you will get a particular solution with the ansatz $y_p=mt+n$: substituting and comparing coefficients gives $m=2$ and $n=0$, so $$y_p=2t.$$
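As a sanity check, sympy's dsolve recovers the same general solution (a sketch):

from sympy import Function, Eq, dsolve, symbols

t = symbols('t')
y = Function('y')
ode = Eq(t * y(t).diff(t) - (t + 2) * y(t), -2 * t**2 - 2 * t)
print(dsolve(ode, y(t)))   # y(t) = C1*t**2*exp(t) + 2*t (up to rearrangement)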
Is this function of semi-definite matrices convex?
The function $X \mapsto \langle X,D \rangle$ is linear, hence convex. Same for $X \mapsto \langle X,E \rangle$. The function $t \mapsto -c \sqrt{t}$ is convex for $t \ge 0$, hence $f$ is convex.