Every invertible matrix can be written as the exponential of one other matrix I'm looking for a proof of this claim: "every invertible matrix can be written as the exponential of another matrix".
I'm not familiar yet with logarithms of matrices, so I wonder if a proof exists, without them. I'll be happy with any proof anyways. I hope someone can help.
|
I assume you are talking about complex $n\times n$ matrices. This is not true in general within real square matrices.
A simple proof goes by functional calculus. If $A$ is invertible, you can find a branch of the complex logarithm on some slit plane $\mathbb{C}\setminus e^{i\theta_0}[0,+\infty)$ whose domain contains the spectrum of $A$. Then by the holomorphic functional calculus, you can define $B:=\log A$, and it satisfies $e^B=A$.
Notes:
1) There is a formula that says $\det e^B=e^{\mbox{trace}\;B}$ (easy proof by Jordan normal form, or by density of diagonalizable matrices). Therefore the range of the exponential over $M_n(\mathbb{C})$ is exactly $GL_n(\mathbb{C})$, the group of invertible matrices.
2) For diagonalizable matrices $A$, it is very easy to find a log. Take $P$ invertible such that $A=PDP^{-1}$ with $D=\mbox{diag}\{\lambda_1,\ldots,\lambda_n\}$. If $A$ is invertible, then every $\lambda_j$ is nonzero so we can find $\mu_j$ such that $\lambda_j=e^{\mu_j}$. Then the matrix $B:=P\mbox{diag}\{\mu_1,\ldots,\mu_n\}P^{-1}$ satisfies $e^B=A$.
3) If $\|A-I_n\|<1$, we can define explicitly a log with the power series of $\log (1+z)$ by setting $\log A:=\log(I_n+(A-I_n))=\sum_{k\geq 1}(-1)^{k+1}(A-I_n)^k/k.$
4) For a real matrix $B$, the formula above shows that $\det e^B>0$. So the matrices with a nonpositive determinant don't have a real logarithm. The converse is not true in general. A sufficient condition is that $A$ has no negative eigenvalue. For a necessary and sufficient condition, one needs to consider the Jordan decomposition of $A$.
5) And precisely, the Jordan decomposition of $A$ is a concrete way to get a log. Indeed, for a block $A=\lambda I+N=\lambda(I+\lambda^{-1}N)$ with $\lambda\neq 0$ and $N$ nilpotent, take $\mu$ such that $\lambda=e^\mu$ and set $B:=\mu I+\log(I+\lambda^{-1}N)=\mu I+\sum_{k\geq 1}(-1)^{k+1}\lambda^{-k}N^k/k$, and note that this series has actually finitely many nonzero terms since $N$ is nilpotent. Do this on each block, and you get your log for the Jordan form of $A$. It only remains to go back to $A$ by similarity.
6) Finally, here are two examples using the above:
$$
\log\left( \matrix{5&1&0\\0&5&1\\0&0&5}\right)=\left(\matrix{\log 5&\frac{1}{5}&-\frac{1}{50}\\ 0&\log 5&\frac{1}{5}\\0&0&\log 5} \right)
$$
and
$$
\log\left(\matrix{-1&0\\0&1} \right)=\left(\matrix{i\pi&0\\0&0} \right)
$$
are two possible choices for the log of these matrices.
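Notes 2) and 5) lend themselves to a numerical sanity check. The sketch below (assuming NumPy and SciPy are available; the example matrices are illustrative choices) builds a log via eigendecomposition for a diagonalizable matrix, and writes down the Jordan-block log $\mu I+\log(I+N/5)$ for the first example above, then verifies both with `expm`:

```python
import numpy as np
from scipy.linalg import expm

# Note 2: log of an invertible diagonalizable matrix via eigendecomposition.
A = np.array([[5.0, 1.0], [0.0, 4.0]])
lam, P = np.linalg.eig(A)
B = P @ np.diag(np.log(lam.astype(complex))) @ np.linalg.inv(P)
assert np.allclose(expm(B), A)

# Note 5: log of the 3x3 Jordan block with eigenvalue 5, from
# mu*I + log(I + N/5) where the log series terminates (N is nilpotent):
# N/5 - (N/5)^2/2 puts 1/5 on the first superdiagonal and -1/50 above it.
J = np.array([[5.0, 1, 0], [0, 5, 1], [0, 0, 5]])
L = np.array([[np.log(5), 1/5, -1/50],
              [0.0,       np.log(5), 1/5],
              [0.0,       0.0,       np.log(5)]])
assert np.allclose(expm(L), J)
```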
|
How to combine Bézier curves to a surface? My aim is to smooth the terrain in a video game. Therefore I contrived an algorithm that makes use of Bézier curves of different orders. But this algorithm is defined in a two dimensional space for now.
To shift it into the third dimension I need to somehow combine the Bézier curves. Given two curves in each two dimensions, how can I combine them to build a surface?
I thought about something like a curve of a curve. Or maybe I could multiply them somehow. How can I combine them to cause the wanted behavior? Is there a common approach?
In the image you can see the input, on the left, and the output, on the right, of the algorithm that works in a two dimensional space. For the real application there is a three dimensional input.
The algorithm relies only on the adjacent tiles. Given these, it will define the control points of the mentioned Bézier curve. Additionally it marks one side of the curve as inside the terrain and the other as outside.
|
I would not use Bezier curves for this. Too much work to find the end-points and you end up with a big clumsy polynomial.
I would build a linear least squares problem minimizing the gradient (smoothing the slopes of hills).
First let's split each pixel into $6\times 6$ which will give the new smoothed resolution (just an example, you can pick any size you would like).
Now the optimization problem $${\bf v_o} = \arg\min_{\bf v} \{\|{\bf D_x(v+d)}\|_F^2+\|{\bf D_y(v+d)}\|_F^2 + \epsilon\|{\bf v}\|_F^2\}$$
where $\bf d$ is the initial blocky, pixelated surface, $\bf v$ is the vector of smoothing changes you want to apply, and $\bf D_x, D_y$ are difference operators along the two axes.
Already after 2 iterations of a very fast iterative solver we can get results like this:
After clarification from the OP, I realized it is closer to this problem (but in 3D).
Now the contours (level sets) to this smoothed binary function can be used to create an interpolative effect:
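A minimal sketch of this least-squares smoothing, assuming NumPy; the grid size, the value of `eps`, and the blocky test surface are illustrative choices, not anything from the original post. `Dx`/`Dy` are forward-difference operators built with Kronecker products, and the optimum solves the normal equations in one linear solve:

```python
import numpy as np

def smooth(d, eps=1e-3):
    """Solve min_v ||Dx(v+d)||^2 + ||Dy(v+d)||^2 + eps*||v||^2 on an n x n grid.
    d is the flattened blocky surface; returns the smoothed surface v + d."""
    n = int(np.sqrt(d.size))
    # 1D forward-difference operator, lifted to 2D via Kronecker products
    D1 = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
    Dx = np.kron(np.eye(n), D1)   # differences along one grid axis
    Dy = np.kron(D1, np.eye(n))   # differences along the other axis
    G = Dx.T @ Dx + Dy.T @ Dy
    # normal equations: (G + eps*I) v = -G d
    v = np.linalg.solve(G + eps * np.eye(n * n), -G @ d)
    return d + v

def energy(u, n):
    U = u.reshape(n, n)
    return np.sum(np.diff(U, axis=0)**2) + np.sum(np.diff(U, axis=1)**2)

# blocky step surface: left half at height 0, right half at height 1
n = 8
d = np.kron(np.array([[0.0, 1.0]]), np.ones((n, n // 2))).ravel()
s = smooth(d)
assert energy(s, n) <= energy(d, n)  # smoothing reduced the gradient energy
```

The assertion is guaranteed by the optimization itself: the objective at the minimizer is at most its value at $\mathbf v = 0$.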
|
Why this function is not uniformly continuous? Let a function $f: \mathbb R \rightarrow \mathbb R$ be such that $f(n)=n^2$ for $n \in \mathbb N$.
Why is $f$ not uniformly continuous?
Thanks
|
Suppose it is, i.e. for every $\varepsilon>0$ there is some $\delta>0$ such that if $|x-y| \leq \delta$ then $|f(x)-f(y)| \leq \varepsilon$. Any interval $[n,n+1]$ can be broken into at most $\lceil \frac{1}{\delta} \rceil$ intervals of length $<\delta$ using some partition $x_0=n<x_1<\ldots<x_k=n+1$. Using the triangle inequality, $|f(n)-f(n+1)| \leq |f(n)-f(x_1)|+|f(x_1)-f(x_2)|+\ldots+|f(x_{k-1})-f(n+1)| \leq \lceil \frac{1}{\delta} \rceil \varepsilon$. But this can't hold for all $n$, because $|f(n)-f(n+1)|=1+2n$ grows larger than any bound.
|
How to evaluate $\int^\infty_{-\infty} \frac{dx}{4x^2+4x+5}$? I need help in my calculus homework guys, I can't find a way to integrate this, I tried use partial fractions or u-substitutions but it didn't work.
$$\int^\infty_{-\infty} \frac{dx}{4x^2+4x+5}$$
Thanks much for the help!
|
*Manipulate the denominator to get $(2x+1)^2 + 4 = (2x+1)^2 + 2^2$.
*Let $u = 2x+1 \implies du = 2 dx \implies dx = \frac 12 du$,
*$\displaystyle \frac 12 \int_{-\infty}^\infty \dfrac{du}{u^2 + (2)^2} $
*then use the standard antiderivative (which comes from the trig substitution $u = a\tan\theta$):
$$ \int\frac{du}{{u^2 + a^2}} = \frac{1}{a} \arctan \left(\frac{u}{a}\right)+C $$
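Carrying the steps above to the end gives $\frac12\cdot\frac12\left[\arctan\frac u2\right]_{-\infty}^{\infty}=\frac\pi4$. A quick numerical check, assuming SciPy is available:

```python
from math import pi, inf
from scipy.integrate import quad

# integrate 1/(4x^2 + 4x + 5) over the whole real line; expect pi/4
val, err = quad(lambda x: 1.0 / (4*x**2 + 4*x + 5), -inf, inf)
assert abs(val - pi/4) < 1e-8
```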
|
Bernstein Polynomials and Expected Value The first equation in this paper
http://www.emis.de/journals/BAMV/conten/vol10/jopalagzyl.pdf
is:
$$\displaystyle B_nf(x)=\sum_{i=0}^{n}\binom{n}{i}x^i(1-x)^{n-i}f\left(\frac{i}{n}\right)=\mathbb E f\left(\frac{S_{n,x}}{n}\right)$$
where $f$ is a Lipschitz continuous real function defined on $[0,1]$, and $S_{n,x}$ is a binomial random variable with parameters $n$ and $x$.
How is this equation proven?
|
$$\mathbb E\left(g(S_{n,x})\right)=\sum_{i}\mathbb P\left(S_{n,x}=i\right)g(i)$$
where:
$$g(S_{n,x})=f\left(\frac{S_{n,x}}{n}\right)$$
Therefore we have:
$$\mathbb E\left(f\left(\frac{S_{n,x}}{n}\right)\right)=\sum_{i=0}^{n}\mathbb P(S_{n,x}=i)f\left(\frac{i}{n}\right)$$
But $$\mathbb P(S_{n,x}=i)=\binom{n}{i}x^i(1-x)^{n-i}$$
Therefore
$$\sum_{i=0}^{n}\binom{n}{i}x^i(1-x)^{n-i}f\left(\frac{i}{n}\right)=\mathbb E\left(f\left(\frac{S_{n,x}}{n}\right)\right)$$ which was the original statement.
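The identity is easy to check numerically for a concrete $n$, $x$, and $f$ (the choices below are illustrative; SciPy's frozen binomial distribution supplies the expectation):

```python
from math import comb
from scipy.stats import binom

n, x = 20, 0.3
f = lambda u: abs(u - 0.5)          # any Lipschitz function on [0, 1]

# left-hand side: the Bernstein polynomial B_n f(x)
bernstein = sum(comb(n, i) * x**i * (1 - x)**(n - i) * f(i / n)
                for i in range(n + 1))

# right-hand side: E f(S_{n,x}/n) with S_{n,x} ~ Binomial(n, x)
expectation = binom(n, x).expect(lambda k: f(k / n))

assert abs(bernstein - expectation) < 1e-10
```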
|
Need to prove the sequence $a_n=(1+\frac{1}{n})^n$ converges by proving it is a Cauchy sequence
I am trying to prove that the sequence $a_n=(1+\frac{1}{n})^n$ converges by proving that it is a Cauchy sequence.
I don't get very far, see: for $\epsilon>0$ there must exist $N$ such that $|a_m-a_n|<\epsilon$, for $ m,n>N$
$$|a_m-a_n|=\bigg|\bigg(1+\frac{1}{m}\bigg)^m-\bigg(1+\frac{1}{n}\bigg)^n\bigg|\leq \bigg|\bigg(1+\frac{1}{m}\bigg)^m\bigg|+\bigg|\bigg(1+\frac{1}{n}\bigg)^n\bigg|\leq\bigg(1+\frac{1}{m}\bigg)^m+\bigg(1+\frac{1}{n}\bigg)^n\leq \quad?$$
I know I am supposed to keep going, but I just can't figure out the next step. Can anyone offer me a hint please? Or if there is another question that has been answered (I couldn't find any) I would gladly look at it.
Thanks so much!
|
We have the following inequalities:
$\left(1+\dfrac{1}{n}\right)^n = 1 + 1 + \dfrac{1}{2!}\left(1-\dfrac{1}{n}\right)+\dfrac{1}{3!}\left(1-\dfrac{1}{n}\right)\left(1-\dfrac{2}{n}\right) + \ldots \leq 2 + \dfrac{1}{2} + \dfrac{1}{2^2} + \dots =3$
Similarly,
$
\begin{align*}
\left(1-\dfrac{1}{n^2}\right)^n &= 1 - {n \choose 1}\frac{1}{n^2} + {n \choose 2}\frac{1}{n^4} + \dots\\
&= 1 - \frac{1}{n} + \dfrac{1}{2!n^2}\left(1-\dfrac{1}{n}\right) - \dfrac{1}{3!n^3}\left(1-\dfrac{1}{n}\right)\left(1-\dfrac{2}{n}\right) + \ldots
\end{align*}
$
So,
$$
| \left(1-\dfrac{1}{n^2}\right)^{n} -\left(1-\frac{1}{n} \right)| \leq \dfrac{1}{2n^2} + \dfrac{1}{2^2n^2} + \dfrac{1}{2^3n^2} + \ldots = \dfrac{1}{n^2} .
$$
Now,
$
\begin{align*}
\left(1+\frac{1}{n+1}\right)^{n+1} - \left(1+\frac{1}{n}\right)^n &= \left(1+\frac{1}{n}-\frac{1}{n(n+1)}\right)^{n+1}-\left(1+\frac{1}{n}\right)^n \\
&=\left(1+\frac{1}{n}\right)^{n+1}\left\{ \left( 1- \frac{1}{(n+1)^2}\right)^{n+1} - \frac{n}{n+1}\right\}\\
&= \left(1+\frac{1}{n}\right)^{n+1}(1 - \frac{1}{n+1} + \text{O}(\frac{1}{n^2}) - \frac{n}{n+1})\\
& = \left(1+\frac{1}{n}\right)^{n+1}\text{O}(\frac{1}{n^2}) \\
&= \text{O}(\frac{1}{n^2}) \text{ (since $(1+1/n)^{n+1}$ is bounded) }.
\end{align*}
$
So, letting $a_k = (1+1/k)^k$ we have,
$|a_{k+1}-a_k| \leq C/k^2$ for some $C$ and hence,
$\sum_{ k \geq n } | a_{k+1} - a_k | \to 0$ as $n \to \infty$.
Since $|a_n - a_m| \leq \sum_{ k \geq \min\{m,n\}} |a_{k+1} - a_k|$, given $\epsilon > 0$ we may choose $N$ such that $\sum_{ k \geq N} |a_k - a_{k+1}| < \epsilon$, and then $|a_n - a_m| < \epsilon $ for all $n,m \geq N$.
|
Does there exist a bijective mapping $f: \Delta\to\mathbb{C} $ Does there exist a bijective mapping $f: \Delta\to\mathbb{C} $?
Yet I have not found such example. Is it false (why?),please help me.
$ \Delta $ is the unit disk in $\mathbb {C}$.
|
I would do this radius by radius. So in case we were talking about the open disk, all we’d need to do is get a nice map from $[0,1\rangle$ onto $[0,\infty\rangle$, like $x/(1-x)$. In case we were talking about the closed disk, you need a (discontinuous) map from $[0,1]$ onto $[0,1\rangle$, and follow it by the map you chose before. For this discontinuous map, one might send $1/n$ to $1/(n+1)$ for all $n\ge1$.
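For the open-disk case, the radius-by-radius recipe gives the explicit bijection $z \mapsto z/(1-|z|)$ with inverse $w \mapsto w/(1+|w|)$; a small round-trip check (the sample points are arbitrary):

```python
# z -> z/(1-|z|) maps the open unit disk onto C, radius by radius;
# its inverse is w -> w/(1+|w|).
def to_plane(z):
    return z / (1 - abs(z))

def to_disk(w):
    return w / (1 + abs(w))

for z in [0.3 + 0.4j, -0.9j, 0.99, -0.5 + 0.1j]:
    assert abs(z) < 1
    assert abs(to_disk(to_plane(z)) - z) < 1e-12
```

This handles only the open disk; the closed disk needs the additional (discontinuous) radial map described above.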
|
Expected value of a minimum
Under a group insurance policy, an insurer agrees to pay 100% of the medical bills incurred during the year by employees of a company, up to a max of $1$ million dollars. The total bills incurred, $X$, has pdf
$$
f_X(x) =
\begin{cases}
\frac{x(4-x)}{9}, & \text{for } 0 < x < 3\\
0& \text{else}
\end{cases}
$$
where $x$ is measured in millions. Calculate the total amount, in millions of dollars, the insurer would expect to pay under this policy.
So I was able to obtain part of the solution, which was
$$
E(\min(X,1)) = \int_0^1 x\cdot \frac{x(4-x)}{9} dx \tag1
$$
However, the solution has $(1)$, plus
$$
E(\min(X,1)) = \int_1^3 1\cdot \frac{x(4-x)}{9} dx \tag2
$$
What I don't understand is if the problem explicitly states that they agree to pay up to $1$ million, why would you even have to bother with $(2)$?
|
For amounts greater than one million, they still pay out a million.
Let $x$ be the total bills in millions. For $x \in [0,1]$ the payout is $x$, for $x \in [1,\infty)$, the payout is $1$. Hence the payout as a function of $x$ is $p(x) = \min(x,1)$, and you wish to compute $Ep$.
\begin{eqnarray}
Ep &=& \int_0^\infty p(x) f_X(x) dx \\
&=& \int_0^1 x f_X(x) dx + \int_1^3 1 f_X(x)dx + \int_3^\infty 1 f_X(x)dx \\
&=& \frac{13}{108} + \frac{22}{27}+ 0 \\
&=& \frac{101}{108}
\end{eqnarray}
|
Evaluate the Integral using Contour Integration (Theorem of Residues) $$
J(a,b)=\int_{0}^{\infty }\frac{\sin(b x)}{\sinh(a x)} dx
$$
This integral is difficult because contour integrals normally cannot be solved with a $\sin(x)$ term in the numerator, because of singularity issues between the 1st and 2nd quadrants in the upper half circle of the contour.
I've assumed:
$$
\sin(bx)=e^{ibz}
$$
since it effectively IS the sine term in the upper half circle.
I've also used the following substitutions:
$$
x=z ; z=re^{i\theta} ; z=\cos(\theta)+i\sin(\theta)
$$
From what I understand, the x range has to include all values somehow.
When I plug in the substitutions and the Euler forms of sin and sinh, the integral becomes:
$$
2\int_{-\infty }^{\infty }\frac{e^{ibx}}{e^{ax}-e^{-ax}}dx
$$
Have I screwed up the subs? Did I reduce incorrectly? Not sure what to go from here. If anyone could help shed some light it would be very much appreciated.
|
You can change the integration $\int_{0}^{\infty}$ to $\frac{1}{2}\int_{-\infty}^{\infty}$, since the integrand is even. Replacing $\sin(bx)$ by $e^{ibx}$ is justified because the $\cos$ part vanishes by oddness.
Rewrite the integrand as
$$
\frac{e^{ibx}}{e^{ax}-e^{-ax}}=\frac{\exp\{(a+ib)x\}}{\exp(2ax)-1}
$$
Take the contour $[-R,R]$, $[R,R+\frac{i\pi}{a}]$, $[R+\frac{i\pi}{a},-R+\frac{i\pi}{a}]$, $[-R+\frac{i\pi}{a}, -R]$. (We will take $R\rightarrow\infty$).
However, the integrand has singularities at $z=0$ and $z=\frac{i\pi}{a}$, so detour around those points by taking an upper half circle of small radius $\epsilon$ at $z=0$, and a lower half circle of small radius $\epsilon$ at $z=\frac{i\pi}{a}$. Then the contour has no singularities inside. (We will take $\epsilon\rightarrow 0$.)
Note that the integrals on the vertical sides and the small part of the horizontal lines around $0$ and $\frac{i\pi}{a}$ will vanish with $R\rightarrow\infty$ and $\epsilon\rightarrow 0$.
Denote the integral by $I=\int_{-\infty}^{\infty}\frac{e^{ibx}}{e^{ax}-e^{-ax}}dx$. Then by Residue Theorem, we have
$$
I-I\exp\{(a+ib)\frac{i\pi}{a}\}-\frac{1}{2}2\pi i \frac{1}{2a}-\frac{1}{2}2\pi i\frac{1}{2a}\exp\{(a+ib)\frac{i\pi}{a}\}=0
$$
We solve this for $I$, it now follows that
$$
J(a,b)=\int_{0}^{\infty }\frac{\sin(b x)}{\sinh(a x)} dx=-iI=\frac{\pi}{2a}\frac{1-\exp(-\frac{b}{a}\pi)}{1+\exp(-\frac{b}{a}\pi)}
$$
and this RHS is the integral that we wanted in the beginning.
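The closed form (which equals $\frac{\pi}{2a}\tanh\frac{b\pi}{2a}$) agrees with direct numerical integration; a sketch assuming SciPy, with the upper limit truncated at 50 since the integrand decays like $e^{-ax}$:

```python
from math import sinh, sin, exp, pi
from scipy.integrate import quad

def J_closed(a, b):
    r = exp(-b * pi / a)
    return (pi / (2 * a)) * (1 - r) / (1 + r)

def J_numeric(a, b):
    # the integrand has a removable singularity at 0 with limit b/a
    f = lambda x: b / a if x == 0.0 else sin(b * x) / sinh(a * x)
    val, _ = quad(f, 0, 50, limit=200)
    return val

for a, b in [(1, 1), (2, 1), (1.5, 0.7)]:
    assert abs(J_numeric(a, b) - J_closed(a, b)) < 1e-6
```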
|
irreducibility of polynomials with integer coefficients Consider the polynomial
$$p(x)=x^9+18x^8+132x^7+501x^6+1011x^5+933x^4+269x^3+906x^2+2529x+1733$$
Is there a way to prove irreducibility of $p(x)$ in $\mathbb{Q}[x]$ other than asking PARI/GP?
|
This polynomial has the element $\alpha^2+\beta$ described in this question as a root. My answer to that question implies among other things that the minimal polynomial of that element is of degree 9, so this polynomial has to be irreducible.
|
Solvability of $S_3\times S_3$ I know that the direct product of two solvable groups are solvable. The group $S_3$ is solvable, so $S_3\times S_3$ is solvable. But how am I going to establish the subnormal series of $S_3\times S_3$?or is there any simpler way to show its solvability?
Thanks.
|
Perhaps this is one of those cases where you understand things better by looking at a more general setting.
Let $G, H$ be soluble groups, and let $G.H$ be any extension of $G$ by $H$. Then $G.H$ is soluble.
Start with a subnormal series with abelian factors that goes from $\{1\}$ to $G$. Then continue with a subnormal series with abelian factors of $H$, or to be precise, with the preimages of the terms of such a series under the epimorphism $G.H \to H$.
In your case $G.H = G \times H$. So you simply start with the required subnormal series for $G$, and then from $G$ you continue with $G N$, with $N$ in the required subnormal series for $H$.
|
Value of $\sum\limits^{\infty}_{n=1}\frac{\ln n}{n^{1/2}\cdot 2^n}$ Here is a series:
$$\displaystyle \sum^{\infty}_{n=1}\dfrac{\ln n}{n^{\frac12}\cdot 2^n}$$
It is convergent by the ratio test (d'Alembert's criterion). Can we find the sum of this series?
|
Consider
$$f(s):=\sum_{n=1}^\infty \frac {\left(\frac 12\right)^n}{n^s}=\operatorname{Li}_s\left(\frac 12\right)$$
with $\operatorname{Li}$ the polylogarithm then (since $\,n^{-s}=e^{-s\ln(n)}$) :
$$f'(s)=\frac d{ds}\operatorname{Li}_s\left(\frac 12\right)=-\sum_{n=1}^\infty \frac {\ln(n)}{n^s}\left(\frac 12\right)^n$$
giving minus your answer for $s=\frac 12$.
You may use the integrals defining the polylogarithm to get alternative formulations but don't hope much simpler expressions...
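The relation $S=-f'(\frac12)$ can be checked numerically with partial sums and a central finite difference for the $s$-derivative (truncation at 300 terms is far below the difference-quotient error):

```python
from math import log, sqrt

N = 300  # the 2^-300 tail is negligible

# the series in question
S = sum(log(n) / (sqrt(n) * 2**n) for n in range(1, N))

# f(s) = Li_s(1/2) as a partial sum, and a central difference for f'(1/2)
f = lambda s: sum(0.5**n / n**s for n in range(1, N))
h = 1e-5
deriv = (f(0.5 + h) - f(0.5 - h)) / (2 * h)

assert abs(S + deriv) < 1e-6   # S = -f'(1/2)
```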
|
Are all $n\times n$ invertible matrices change-of-coordinates matrices in $\mathbb{R}^n$? More precisely, I'm trying to prove the following problem:
Assume that $\text{Span}\{\vec{v}_{1},\dots,\vec{v}_{k}\}=\mathbb{R}^{n}$
and that A
is an invertible matrix. Prove that $\text{Span}\{A\vec{v}_{1},\dots,A\vec{v}_{k}\}=\mathbb{R}^{n}$.
but I think I need the lemma in the title. I find it difficult to prove without large amounts of handwaving. Is the proposition in the title true, and if so, how to prove it?
|
Actually, the opposite is true: the problem in your question is half of the proof of the proposition in the title, namely, that $\{Av_1,\dotsc,Av_k\}$ is a spanning set whenever $\{v_1,\dotsc,v_k\}$ is. So, let $x \in \mathbb{R}^n$. Then, of course, $A^{-1}x \in \mathbb{R}^n$, so $A^{-1}x = \sum_{i=1}^k \alpha_i v_i$ for some $\alpha_i \in \mathbb{R}$ -- note that I'm only using the fact that $\{v_1,\dotsc,v_k\}$ is a spanning set, and am assuming absolutely nothing concerning linear independence. Since $x = A(A^{-1}x)$, what follows?
|
Variation of Pythagorean triplets: $x^2+y^2 = z^3$ I need to prove that the equation $x^2 + y^2 = z^3$ has infinitely many solutions for positive $x, y$ and $z$.
I got as far as $4^3 = 8^2$, but that seems to be of no help.
Can some one help me with it?
|
the equation:
$X^2+Y^2=Z^3$
Has the solutions:
$X=2k^6+8tk^5+2(7t^2+8qt-9q^2)k^4+16(t^3+2qt^2-tq^2-2q^3)k^3+$
$+2(7t^4+12qt^3+6q^2t^2-28tq^3-9q^4)k^2+8(t^5+2qt^4-2q^3t^2-5tq^4)k+$
$+2(q^6-4tq^5-5q^4t^2-5q^2t^4+4qt^5+t^6)$
.................................................................................................................................................
$Y=2k^6+4(3q+t)k^5+2(9q^2+16qt+t^2)k^4+32qt(2q+t)k^3+$
$+2(-9q^4+20tq^3+30q^2t^2+12qt^3-t^4)k^2+4(-3q^5-tq^4+10q^3t^2+6q^2t^3+5qt^4-t^5)k-$
$-2(q^6+4tq^5-5q^4t^2-5q^2t^4-4qt^5+t^6)$
.................................................................................................................................................
$Z=2k^4+4(q+t)k^3+4(q+t)^2k^2+4(q^3+tq^2+qt^2+t^3)k+2(q^2+t^2)^2$
Here $q,t,k$ are arbitrary integers (of any sign). After substituting numbers, divide the result by the greatest common divisor to obtain primitive solutions.
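The parametrization can be checked as an exact integer identity; the code below is a direct transcription of the formulas above (any transcription slip would make the assertion fail):

```python
def XYZ(q, t, k):
    """The claimed parametrization of X^2 + Y^2 = Z^3."""
    X = (2*k**6 + 8*t*k**5 + 2*(7*t**2 + 8*q*t - 9*q**2)*k**4
         + 16*(t**3 + 2*q*t**2 - t*q**2 - 2*q**3)*k**3
         + 2*(7*t**4 + 12*q*t**3 + 6*q**2*t**2 - 28*t*q**3 - 9*q**4)*k**2
         + 8*(t**5 + 2*q*t**4 - 2*q**3*t**2 - 5*t*q**4)*k
         + 2*(q**6 - 4*t*q**5 - 5*q**4*t**2 - 5*q**2*t**4 + 4*q*t**5 + t**6))
    Y = (2*k**6 + 4*(3*q + t)*k**5 + 2*(9*q**2 + 16*q*t + t**2)*k**4
         + 32*q*t*(2*q + t)*k**3
         + 2*(-9*q**4 + 20*t*q**3 + 30*q**2*t**2 + 12*q*t**3 - t**4)*k**2
         + 4*(-3*q**5 - t*q**4 + 10*q**3*t**2 + 6*q**2*t**3 + 5*q*t**4 - t**5)*k
         - 2*(q**6 + 4*t*q**5 - 5*q**4*t**2 - 5*q**2*t**4 - 4*q*t**5 + t**6))
    Z = (2*k**4 + 4*(q + t)*k**3 + 4*(q + t)**2*k**2
         + 4*(q**3 + t*q**2 + q*t**2 + t**3)*k + 2*(q**2 + t**2)**2)
    return X, Y, Z

# exact integer check over a small box of parameters
for q in range(-3, 4):
    for t in range(-3, 4):
        for k in range(-3, 4):
            X, Y, Z = XYZ(q, t, k)
            assert X*X + Y*Y == Z**3
```

Note that $Z = 2\big((k^2+sk)^2 + (sk+m)^2\big)$ with $s=q+t$, $m=q^2+t^2$, so $Z$ is always nonnegative.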
|
How do I show $\sin^2(x+y)−\sin^2(x−y)≡\sin(2x)\sin(2y)$? I really don't know where I'm going wrong, I use the sum to product formula but always end up far from $\sin(2x)\sin(2y)$. Any help is appreciated, thanks.
|
Just expand, note that $(a+b)^2-(a-b)^2 = 4 ab$.
Expand $\sin (x+y), \sin (x-y)$ in the usual way. Let $a = \sin x \cos y, b= \cos x \sin y$.
Then $\sin^2(x+y)−\sin^2(x−y)= 4 \sin x \cos y \cos x \sin y$.
Then note that $\sin 2 t = 2 \sin t \cos t$ to finish.
Note that the only trigonometric identity used here is $\sin (s+t) = \sin s \cos t + \sin t \cos s$.
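A spot-check of the identity at a few arbitrary points (plain `math`, nothing else):

```python
from math import sin

# sin^2(x+y) - sin^2(x-y) = sin(2x) sin(2y) at arbitrary points
for x, y in [(0.3, 1.1), (-2.0, 0.7), (5.0, -3.2)]:
    lhs = sin(x + y)**2 - sin(x - y)**2
    rhs = sin(2*x) * sin(2*y)
    assert abs(lhs - rhs) < 1e-12
```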
|
Functions question help me? a) We have the function $f(x,y)=x-y+1.$ Find the values of $f(x,y)$ in the points of the parabola $y=x^2$ and build the graph $F(x)=f(x,x^2)$ .
So, some points of the parabola are $(0;0), (1;1), (2;4)$. I replace these in $f(x,y)$ and I have $f(x,y)=1,1,-1\dots$. The graph $f(x,x^2)$ must have the points $(0;1),(1;1)$ and $(2;-1)\,$ , right?
b)We have the function $f(x,y) =\sqrt x + g[\sqrt x-1]$.Find $g(x)$ and $f(x,y)$ if $f(x,1)=x$?
I dont even know where to start here :/
|
(a)
*To find $F(x)$, substitute $y = x^2$ into $f(x, y) = x - y + 1$, which gives
$$y = F(x) = f(x, x^2) = 1 + x - x^2\tag{1}$$
*Note that this function $F(x)$ is precisely $f(x, x^2)$. Find enough points in this equation to plot it. (The points that satisfy $(1)$ will be points on $f(x,y)$ which satisfy $y = x^2$.) You will find that $\;\;F(x) = 1 + x - x^2\;\;$ is also a parabola. (See image below.) We can manipulate $F(x)$ to learn where the vertex of the parabola is located: write $F(x): (y - \frac 54) = -(x - \frac 12)^2$, so the vertex is located at $(\frac 12, \frac 54)$. The negative sign indicates that the parabola opens downward.
$\quad F(x) = 1 + x - x^2:$
|
Prove that $B$ is a basis of the space $V$. Let $V,W$ be nonzero spaces over a field $F$ and suppose that a set $B =\lbrace v_1, . . . , v_n \rbrace \subset V$ has the following property:
For any vectors $w_1, . . . ,w_n \in W$, there exists a unique linear transformation $T : V \rightarrow W$ such that $T(v_i) = w_i$ for all $i = 1, . . . , n$.
Prove that $B$ is a basis of the space $V$.
Can anyone help me with this? I have no idea how to prove this.
|
Choose $w \in W$, $w \neq 0$. Define $T_k$ by $T_k(v_i) = \delta_{ik} w$. By assumption $T_k$ exists.
We want to show that $v_1,...,v_n$ are linearly independent. So suppose $\sum_i \alpha_i v_i = 0$. Then $T_k(\sum_i \alpha_i v_i) = \alpha_k w = 0$. Hence $\alpha_k = 0$.
Now we need to show that $v_1,...,v_n$ spans $V$. Define the transformation $Z(v_i) = 0$. By assumption, $Z$ is the unique linear transformation mapping all $v_i$ to zero. It follows that $V = \operatorname{sp}\{v_1,...,v_n\}$.
To see this, suppose $x \notin \operatorname{sp}\{v_1,...,v_n\}$. Then we can extend $Z$ to $\operatorname{sp}\{x,v_1,...,v_n\}$: Define $Z_+(v_i) = 0$, $Z_+(x) = w$, $Z_-(v_i) = 0$, $Z_-(x) = -w$. However, by uniqueness, we must have $Z = Z_+ = Z_-$, which is a contradiction. Hence no such $x$ exists.
|
Find the complete solution to the simultaneous congruence. I'm having trouble understanding the steps involved to do this question so any step by step reasoning in solving the solution would help me study for my exam.
Thanks so much!
$$x\equiv 6 \pmod{14}$$
$$x\equiv 24 \pmod{29}$$
|
Applying Easy CRT (below), noting that $\rm\displaystyle\, \frac{-18}{29}\equiv \frac{-4}{1}\ \ (mod\ 14),\ $ quickly yields
$$\begin{array}{ll}\rm x\equiv \ \ 6\ \ (mod\ 14)\\ \rm x\equiv 24\ \ (mod\ 29)\end{array}\rm \!\iff\! x\equiv 24\! +\! 29 \left[\frac{-18}{29}\, mod\ 14\right]\!\equiv 24\!+\!29[\!-4]\equiv -92\equiv 314\,\ (mod\ 406) $$
Theorem (Easy CRT) $\rm\ \ $ If $\rm\ m,n\:$ are coprime integers then $\rm\ n^{-1}\ $ exists $\rm\ (mod\ m)\ \ $ and
$\rm\displaystyle\quad \begin{eqnarray}\rm x&\equiv&\rm\ a\ \ (mod\ m) \\
\rm x&\equiv&\rm\ b\ \ (mod\ n)\end{eqnarray} \! \iff x\ \equiv\ b + n\ \bigg[\frac{a\!-\!b}{n}\ mod\ m\:\bigg]\ \ (mod\ mn)$
Proof $\rm\ (\Leftarrow)\ \ \ mod\ n\!:\,\ x\equiv b + n\ [\cdots]\equiv b,\ $ and $\rm\,\ mod\ m\!:\,\ x\equiv b + (a\!-\!b)\ n/n\: \equiv\: a\:.$
$\rm\ (\Rightarrow)\ \ $ The solution is unique $\rm\ (mod\ mn)\ $ since if $\rm\ x',x\ $ are solutions then $\rm\ x'\equiv x\ $ mod $\rm\:m,n\:$ therefore $\rm\ m,n\ |\ x'-x\ \Rightarrow\ mn\ |\ x'-x\ \ $ since $\rm\ \:m,n\:$ coprime $\rm\:\Rightarrow\ lcm(m,n) = mn\:.\ \ $ QED
Remark $\ $ I chose $\rm\: n,m = 29,14\ $ (vs. $\rm\, 14,29)\:$ since then $\rm\:n \equiv 1\,\ (mod\ m),\:$ making completely trivial the computation of $\rm\,\ n^{-1}\ mod\ m\,\ $ in the bracketed term in the formula.
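Easy CRT is a one-liner in code; the sketch below uses Python's three-argument `pow` for the modular inverse and checks the worked example (it reduces the bracket mod $m$, so the representative differs from the $-4$ used above, but the result mod $mn$ is the same):

```python
def easy_crt(a, m, b, n):
    """Solve x = a (mod m), x = b (mod n) for coprime m, n via Easy CRT:
    x = b + n * [ (a - b)/n mod m ]  (mod mn)."""
    return (b + n * (((a - b) * pow(n, -1, m)) % m)) % (m * n)

x = easy_crt(6, 14, 24, 29)
assert x % 14 == 6 and x % 29 == 24
assert x == 314
```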
|
Proof involving modulus and CRT Let m,n be natural numbers where gcd(m,n) = 1. Suppose x is an integer which satisfies
x ≡ m (mod n)
x ≡ n (mod m)
Prove that x ≡ m+n (mod mn).
I know that since gcd(m,n)=1 means they are relatively prime so then given x, gcd(x,n)=m and gcd(x,m)=n. I have trouble getting to the next steps in proving x ≡ m+n (mod mn). Which is gcd(x, mn)=m+n
|
From the Chinese Remainder Theorem (uniqueness part) you know that, since $m$ and $n$ are relatively prime, the system of congruences has a unique solution modulo $mn$.
So we only need to check that $m+n$ works. For that, we only need to verify that $m+n\equiv n\pmod{m}$, and that $m+n\equiv m\pmod{n}$. That is very easy!
|
Convert segment of parabola to quadratic bezier curve How do I convert a segment of parabola to a cubic Bezier curve?
The parabola segment is given as a polynomial with two x values for the edges.
My target is to convert a quadratic piecewise polynomial to a Bezier path (a set of concatenated Bezier curves).
|
You can do this in two steps, first convert the parabola segment to a quadratic Bezier curve (with a single control point), then convert it to a cubic Bezier curve (with two control points).
Let $f(x)=Ax^2+Bx+C$ be the parabola and let $x_1$ and $x_2$ be the edges of the segment on which the parabola is defined.
Then $P_1=(x_1,f(x_1))$ and $P_2=(x_2,f(x_2))$ are the Bezier curve start and end points
and $C=(\frac{x_1+x_2}{2},f(x_1)+f'(x_1)\cdot \frac{x_2-x_1}{2})$ is the control point for the quadratic Bezier curve.
Now you can convert this quadratic Bezier curve to a cubic Bezier curve by defining the two control points as:
$C_1=\frac{2}{3}C+\frac{1}{3}P_1$ and
$C_2=\frac{2}{3}C+\frac{1}{3}P_2$.
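The two steps above fit in a few lines; since a quadratic Bezier with that control point traces the parabola exactly, and degree elevation is exact, the cubic curve should match $f$ everywhere (coefficients and interval below are arbitrary test values):

```python
def parabola_to_cubic_bezier(A, B, C, x1, x2):
    """Convert the segment of f(x) = A x^2 + B x + C on [x1, x2] to a cubic Bezier."""
    f = lambda x: A*x*x + B*x + C
    df = lambda x: 2*A*x + B
    P1 = (x1, f(x1))
    P2 = (x2, f(x2))
    Q = ((x1 + x2) / 2, f(x1) + df(x1) * (x2 - x1) / 2)  # quadratic control point
    # degree elevation: quadratic -> cubic
    C1 = tuple(2/3 * q + 1/3 * p for q, p in zip(Q, P1))
    C2 = tuple(2/3 * q + 1/3 * p for q, p in zip(Q, P2))
    return P1, C1, C2, P2

def cubic_point(P1, C1, C2, P2, t):
    s = 1 - t
    return tuple(s**3*a + 3*s*s*t*b + 3*s*t*t*c + t**3*d
                 for a, b, c, d in zip(P1, C1, C2, P2))

# the curve should trace the parabola exactly
A, B, C, x1, x2 = 0.5, -1.0, 2.0, -1.0, 3.0
ctrl = parabola_to_cubic_bezier(A, B, C, x1, x2)
for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
    x, y = cubic_point(*ctrl, t)
    assert abs(y - (A*x*x + B*x + C)) < 1e-12
```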
|
Finding the derivatives of sin(x) and cos(x) We all know the following (hopefully):
$$\sin(x)=\sum^{\infty}_{n=0}(-1)^n \frac{x^{2n+1}}{(2n+1)!}\ , \ x\in \mathbb{R}$$
$$\cos(x)=\sum^{\infty}_{n=0}(-1)^n \frac{x^{2n}}{(2n)!}\ , \ x\in \mathbb{R}$$
But how do we find the derivatives of $\sin(x)$ and $\cos(x)$ by using the definition of a derivative and the definitions above?
Like I should start by doing:
$$\lim_{h\to 0}\frac{\sin(x+h)-\sin(x)}{h}$$
But after that no clue at all. Help appreciated!
|
The radius of convergence of the power series is infinity, so you can interchange summation and differentiation.
$\frac{d}{dx}\sin x = \frac{d}{dx} \sum^{\infty}_{n=0}(-1)^n \frac{x^{2n+1}}{(2n+1)!} = \sum^{\infty}_{n=0}(-1)^n \frac{d}{dx} \frac{x^{2n+1}}{(2n+1)!} = \sum^{\infty}_{n=0}(-1)^n \frac{x^{2n}}{(2n)!} = \cos x$
If you wish to use the definition, you can do:
$\frac{\sin(x+h)-\sin x}{h} = \frac{\sum^{\infty}_{n=0}(-1)^n \frac{(x+h)^{2n+1}}{(2n+1)!}-\sum^{\infty}_{n=0}(-1)^n \frac{x^{2n+1}}{(2n+1)!}}{h} = \sum^{\infty}_{n=0}(-1)^n \frac{1}{(2n+1)!}\frac{(x+h)^{(2n+1)}-x^{(2n+1)}}{h}$
Then $\lim_{h \to 0} \frac{(x+h)^{(2n+1)}-x^{(2n+1)}}{h} = (2n+1)x^{2n}$ (using the binomial theorem).
(Note $(x+h)^p-x^p = \sum_{k=1}^p \binom{p}{k}x^{p-k}h^k = p x^{p-1}h + \sum_{k=2}^p \binom{p}{k}x^{p-k}h^k$. Dividing across by $h$ and taking limits as $h \to 0$ shows that $\frac{d}{dx} x^p = p x^{p-1}$.)
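Termwise differentiation of the partial sums is easy to see numerically: the differentiated sine series reproduces the cosine (truncation at 30 terms is more than enough for moderate $x$):

```python
from math import factorial, cos

def sin_series_derivative(x, terms=30):
    # differentiate the partial sum of the sine series term by term:
    # d/dx x^(2n+1)/(2n+1)! = x^(2n)/(2n)!
    return sum((-1)**n * (2*n + 1) * x**(2*n) / factorial(2*n + 1)
               for n in range(terms))

for x in [0.0, 1.0, -2.5]:
    assert abs(sin_series_derivative(x) - cos(x)) < 1e-12
```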
|
Determinant of the transpose of elementary matrices Is there a 'nice' proof to show that $\det(E^T) = \det(E)$ where $E$ is an elementary matrix?
Clearly it's true for the elementary matrix representing a row being multiplied by a constant, because then $E^T = E$ as it is diagonal.
I was thinking for the "row-addition" type, it's clearly true because if $E_1$ is a matrix representing row-addition then it is either an upper/lower triangular matrix and so $\det(E_1)$ is equal to the product of the diagonals. If $E_1$ is an upper/lower triangular matrix, then $E_1^T$ is a lower/upper triangular matrix and so $\det(E_1^T) = \det(E_1)$ as the diagonal entries remain the same when the matrix is transposed.
How about for the "row-switching" matrix where rows $i$ and $j$ have been swapped on the identity matrix? Can we use the linearity of the rows in a matrix somehow?
Thanks for any help!
|
You can use the fact that switching two rows or columns of a matrix changes the sign of the determinant. Switching two rows of $E$ makes it diagonal, then switch the corresponding columns and you have $E^T$
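All three cases are easy to confirm numerically (assuming NumPy; the size and entries are arbitrary):

```python
import numpy as np

n = 4
# the three types of elementary matrices
E_scale = np.eye(n); E_scale[2, 2] = 7.0    # multiply row 2 by 7
E_add = np.eye(n);   E_add[3, 1] = -2.5     # add -2.5 * row 1 to row 3
E_swap = np.eye(n)[[1, 0, 2, 3]]            # swap rows 0 and 1

for E in (E_scale, E_add, E_swap):
    assert np.isclose(np.linalg.det(E), np.linalg.det(E.T))
assert np.isclose(np.linalg.det(E_swap), -1.0)  # a swap flips the sign once
```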
|
degree of the extension of the normal closure So I am trying to do the following problem: if $K/F$ is an extension of degree $p$ (a prime), and if $L/F$ is a normal closure of $K/F$, then $[L:K]$ is not divisible by $p$.
This is what I tried. If $K/F$ is separable, then the primitive element theorem holds and we get $K=F(\alpha)$. Let $f(x)$ be the minimal polynomial of $\alpha$ over $F$. We know that $deg(f)=p$. If $K$ contains all the roots of $f$, then it is normal, and we are done. If not, then there is a root $\beta$ that when we add it we generate an extension of degree at most $p-1$. If we have all the roots then we are done, otherwise we keep adding roots and at each step the maximum degree of the extension we obtain goes down, so we have that $[L:K]$ is a divisor of $(p-1)!$, so it is not divisible by $p$ since $p$ is a prime.
I am not quite sure how to solve this problem when we don't know that $K/F$ is separable. Any thoughts?
|
If $[K:F]=p$, then for any $\alpha\in K\setminus F$, we get $K=F(\alpha)$.
Let $f(x)$ be the minimal polynomial of $\alpha$ over $F$; then the normal closure $L/F$ of $K/F$ is the splitting field of $f(x)$ over $K$. If we let $f(x)=g(x)(x-\alpha)$, then $L$ is the splitting field of $g(x)$ over $K$. But $\deg(g)=p-1$, so $[L:K]$ divides $(p-1)!$,
and hence is not divisible by $p$ since $p$ is a prime.
|
halfspaces question How do I find the supporting halfspace inequality to epigraph of
$$f(x) = \frac{x^2}{|x|+1}$$
at point $(1,0.5)$
|
For $x>0$, we have
$$
f(x)=\frac{x^2}{x+1}\quad\Rightarrow\quad f'(x)=\frac{x^2+2x}{(x+1)^2}.
$$
Hence $f'(1)=\frac{3}{4}$ and an equation of the tangent to the graph of $f$ at $(1,f(1))$ is
$$y=f'(1)(x-1)+f(1)=\frac{3}{4}(x-1)+\frac{1}{2}=\frac{3}{4}x-\frac{1}{4}.$$
An inequality defining the halfspace above this tangent is therefore
$$
y\geq \frac{3}{4}x-\frac{1}{4}.
$$
See here for a picture of the graph and the tangent.
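The supporting-halfspace property can be spot-checked on a grid: the graph of $f$ lies on or above the tangent line everywhere, with equality at the point of support (the grid range is an arbitrary choice):

```python
# f(x) = x^2 / (|x| + 1); the tangent at (1, 1/2) is y = (3/4)x - 1/4.
f = lambda x: x*x / (abs(x) + 1)
g = lambda x: 0.75 * x - 0.25

xs = [i / 100 for i in range(-500, 501)]
assert all(f(x) >= g(x) - 1e-12 for x in xs)   # graph stays above the tangent
assert abs(f(1.0) - g(1.0)) < 1e-12            # equality at x = 1
```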
|
How to prove this function is quasi-convex/concave? this is the function:
$$\displaystyle f(a,b) = \frac{b^2}{4(1+a)}$$
|
For quasiconvexity you have to consider, for each $\alpha\in \mathbb R$, the sublevel set
$$\{(a,b)\in \mathbb R^{2}: f(a,b)\leq \alpha\}.
$$
If this set is convex for every $\alpha \in \mathbb R$, you have quasiconvexity.
On the domain where $1+a>0$, the condition $f(a,b)\leq\alpha$ is equivalent to
$$b^{2}\leq 4\alpha(1+a).$$
If you draw this set in $\mathbb R^{2}$ for a fixed $\alpha\in \mathbb R$, this should give you a clue about quasiconvexity...
|
$\int \frac{dx}{x\log(x)}$ I think I'm having a bad day. I was just trying to use integration by parts on the following integral:
$$\int \frac{dx}{x\log(x)}$$
Which yields
$$\int \frac{dx}{x\log(x)} = 1 + \int \frac{dx}{x\log(x)}$$
Now, if I were to subtract
$$\int \frac{dx}{x\log(x)}$$
from both sides, it would seem that $0 = 1$. What is going on here?
Aside: I do know the integral evaluates to
$$\log(\log(x))$$
plus a constant.
|
In a general case we have
$$\int\frac{f'(x)}{f(x)}dx=\log|f(x)|+C,$$
and in our case, take $f(x)=\log(x)$ so $f'(x)=\frac{1}{x}$ to find the desired result.
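The resolution of the "paradox" is that both sides are antiderivative families, equal only up to a constant. The stated antiderivative itself is easy to verify with a central difference (sample points are arbitrary):

```python
from math import log

# check numerically that d/dx log(log x) = 1/(x log x)
F = lambda x: log(log(x))
h = 1e-6
for x in [2.0, 3.0, 10.0]:
    numeric = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(numeric - 1 / (x * log(x))) < 1e-8
```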
|
2D Partial Integration I have a (probably very simple problem): I try to find the variational form of a PDE, at one time we have to partially integrate:
$\int_{\Omega_j} v \frac{\partial}{\partial x}E d(x,y)$ where $v$ is our test function and $E$ is the function we are trying to find. We have $v, E: \mathbb{R}^2\rightarrow \mathbb{R}$.
Our approach was the partial integration: $\int_{\Omega_j} v \frac{\partial}{\partial x}E d(x,y) = - \int_{\Omega_j} E \frac{\partial}{\partial x} v d(x, y) + \int_{\partial\Omega_j} v E ds$
But I don't think that this makes sense, as we usually need the normal for the boundary integral; how can we introduce a normal when our functions are not vector-valued?
I hope you understand my problem
|
For the following, I refer to Holland, Applied Analysis by the Hilbert Space Method, Sec. 7.2. Let $D$ be a simply connected region in the plane, and $f$, $P$, and $Q$ be functions that map $D \rightarrow \mathbb{R}$. Then
$$\iint_D f \left ( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right ) \,dx\,dy = \int_{\partial D} f (P dx + Q dy) - \iint_D \left( \frac{\partial f}{\partial x} Q - \frac{\partial f}{\partial y} P \right )\,dx\,dy $$
This is a form of 2D integration by parts and is derived using differential forms. This applies to your problem, in that $f=v$, $P=0$, and $Q=E$. Your integration by parts then takes the form
$$\iint_{\Omega_j} v \frac{\partial E}{\partial x} \,dx\,dy = \int_{\partial \Omega_j} v \, E \, dy - \iint_{\Omega_j} E \frac{\partial v}{\partial x} \,dx\,dy$$
The important observation here is that the lack of a function that has a derivative with respect to $y$ in the original integral produces a boundary integral that is integrated over the $y$ direction only.
|
Intersecting set systems and Erdos-Ko-Rado Theorem Suppose you have an $n$-element set, where $n$ is finite, and you want to make an intersecting family of $r$-subsets of this set. Each subset has to intersect each other subset.
We may assume $r$ is not larger than $n/2$, because that would make the problem trivial, as any two $r$-subsets would intersect!
The Erdos-Ko-Rado theorem says that the way to make your intersecting system the largest is to choose an element and simply take the set of all $r$-sets containing that element. This family has size $\binom{n-1}{r-1}$. This type of family is sometimes called a "star".
This is not necessarily larger than every other method. If $n$ is even and $r=n/2$ you could take the family of all sets excluding a given element. That would give $\binom{n-1}{r-1}$.
Question
Suppose $n$ is even and $r=n/2$. Suppose we move to an $(n+1)$-set and $r=n/2$. The "star" method now gives a larger intersecting family. This is obvious, you have one more element to choose from, and the formula is given by Pascal's identity.
How do you get a larger intersecting system from the "exclusion" method? It's not obvious what to do, because when I try to make the system larger I always end up turning it into a star.
|
If I understand your question correctly, you can't do better. Indeed from the proof of Erdos-Ko-Rado you can deduce that only the stars have size equal to $\binom{n-1}{r-1}$, when $2r<n$. In the exceptional case $r=n/2$ you can take one of every pair of complementary sets $A,A^c$.
EDIT: The intended question seems to be (equivalent to) the following. How large can an intersecting family of $r$-sets be if we stipulate that it is not a star?
The answer in this case was given by Hilton and Milner (see A. J. W. Hilton and E. C. Milner. Some Intersection Theorems For Systems of Finite Sets Q J Math (1967) 18(1): 369-384 doi:10.1093/qmath/18.1.369).
The largest such family is obtained by fixing an element $1$, an $r$-set $A_1 = \{2,\dots,r+1\}$, and requiring every further set to contain $1$ and to intersect $A_1$. Thus the maximum size is whatever the size of this family is.
Since you seem to be particularly interested in the case of $n$ odd and $r=\lfloor n/2\rfloor$, let's do the calculation in that case. The family you propose ("exclude two points") has size $\binom{n-2}{r} = \binom{n-2}{r-1}$, while the family that Hilton and Milner propose has size $$1 + \binom{n-1}{r-1} - \binom{n-r-1}{r-1} = \binom{n-1}{r-1} - r + 1 = \binom{n-2}{r-1} + \binom{2r-1}{r-2} - r + 1.$$
Sorry, they win. :)
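The size comparison above is easy to sanity-check for small odd $n$; the following sketch (the helper names are my own) uses `math.comb`:

```python
from math import comb

def hilton_milner(n, r):
    # 1 (for A_1 itself) + r-sets containing the fixed element and meeting A_1
    return 1 + comb(n - 1, r - 1) - comb(n - r - 1, r - 1)

def exclude_two(n, r):
    # all r-subsets avoiding two fixed points of the n-set
    return comb(n - 2, r)

for n in range(7, 26, 2):                      # n odd, r = floor(n/2)
    r = n // 2
    hm, ex = hilton_milner(n, r), exclude_two(n, r)
    assert hm == comb(n - 1, r - 1) - r + 1    # the simplification used above
    assert hm > ex                             # "they win"
```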
|
Help me to prove this integration Where the method used should be using complex analysis.
$$\int_{c}\frac{d\theta}{(p+\cos\theta)^2}=\frac{2\pi p}{(p^2-1)\sqrt{p^2-1}};c:\left|z\right|=1$$
thanks in advance
|
Hint:
$$\cos \theta = \frac {z + z^{-1}} 2$$
Plug this in and refer to classic tools in complex analysis, such as the Cauchy formula or the residue theorem.
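Before doing the contour computation, it can be reassuring to check the stated closed form numerically for a few sample values $p>1$ (a sanity check only, not the requested complex-analysis proof; the function names are my own):

```python
import math

def integral_numeric(p, n=4096):
    # trapezoid rule on [0, 2*pi]; for a smooth periodic integrand this
    # converges extremely fast, and the endpoints coincide by periodicity
    h = 2 * math.pi / n
    return h * sum(1.0 / (p + math.cos(k * h)) ** 2 for k in range(n))

def closed_form(p):
    return 2 * math.pi * p / (p * p - 1) ** 1.5

for p in (1.5, 2.0, 3.0):
    assert abs(integral_numeric(p) - closed_form(p)) < 1e-9
```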
|
$e^{\large\frac{-t}{5}}(1-\frac{t}{5})=0$ solve for t I'm given the following equation and I have to solve for $t$. This is actually the derivative of a function, set equal to zero:
$$f'(t) = e^\frac{-t}{5}(1-\frac{t}{5})=0$$
I will admit I'm just stuck and I'm looking for how to solve this efficiently.
steps $1,\;2$ - rewrote the equation and distributed:
$$\frac{1}{e^\frac{t}{5}}(1-\frac{t}{5}) \iff \frac{1-\frac{t}{5}}{e^\frac{t}{5}} $$
steps $3, \;4$ - common denominator of 5, multiply by reciprocal of denominator:
$$ \frac{\frac{5-t}{5}}{e^\frac{t}{5}}\iff\frac{5-t}{5}\cdot\frac{1}{e^\frac{t}{5}} = \frac{5-t}{5e^\frac{t}{5}} $$
step 5 - this is where I'm stuck:
$$f'(t) = \frac{5-t}{5e^\frac{t}{5}}=0$$
How do i go from here? And am I even doing this correctly? Any help would be greatly appreciated.
Miguel
|
$e^{\dfrac{-t}{5}}\neq0$, since the exponential of a real number is always strictly positive.
Therefore, the other factor has to be zero.
Now you can solve it !
|
Prove $|Im(z)|\le |\cos (z)|$ for $z\in \mathbb{C}$. Let $z\in \mathbb{C}$ i.e. $z=x+iy$. Show that $|Im(z)|\le |\cos (z)|$.
My hand wavy hint was to consider $\cos (z)=\cos (x+iy)=\cos (x)\cosh (y)+i\sin (x)\sinh(y)$ then do "stuff".
Then I have $|\cos (z)|=|Re(z)+iIm(z)|$ and the result will be obvious.
Thanks in advance.
I am missing something trivial I know.
|
$$\cos(z) = \cos(x) \cosh(y) + i \sin(x) \sinh(y) $$
Taking the norm squared:
$$ \cos^2(x) \cosh^2(y) + \sin^2(x) \sinh^2(y)$$
We are left with:
$$ \cos^2(x) \left(\frac{1}{2}\cosh(2y) + \frac{1}{2}\right) + \sin^2(x) \left(\frac{1}{2}\cosh(2y) - \frac{1}{2}\right)$$
Simplifying, we get:
$$ \frac{1}{2} \left(\cosh(2y) + \cos(2x) \right)$$
Since $\cos(2x)\geq -1$, this expression is smallest when $\cos{2x} = -1$, so we might as well suppose that. Our goal is to show $|$Im$(z)|^2$ is at most this quantity.
That is,
$$ \begin{align} & & y^2 & \leq \frac{1}{2} \cosh{2y} - \frac{1}{2} \\ \iff & & y^2 &\leq (\sinh y)^2 \\ \iff & & y & \leq \sinh y \ \ \ \ \ \ \ \ \forall y\geq0\end{align}$$
A quick computation of the derivative shows that $\frac{d}{dy} y = 1$ but $\frac{d}{dy} \sinh y = \cosh{y}$. If we want to see that $\cosh y \geq 1$, we can differentiate it again and see that $\sinh y \geq 0$.
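A quick numerical spot check of the inequality $|\operatorname{Im}(z)|\le|\cos(z)|$ on a grid of points (illustrative only, not a proof):

```python
import cmath

# check |Im z| <= |cos z| on a grid of points z = x + iy
for i in range(-20, 21):
    for j in range(-20, 21):
        z = complex(i / 4, j / 4)
        assert abs(z.imag) <= abs(cmath.cos(z)) + 1e-12
```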
|
What is the property that allows $5^{2x+1} = 5^2$ to become $2x+1 = 2$? What is the property that allows $5^{2x+1} = 5^2$ to become $2x+1 = 2$? We just covered this in class, but the teacher didn't explain why we're allowed to do it.
|
$5^{(2x+1)} = 5^2$
Multiplying by $1/5^2$ on both sides we get,
$\frac{5^{(2x+1)}}{5^2} = 1$
$\Rightarrow 5^{(2x+1)-2} = 1$
Taking log to the base 5 on both sides we get $2x+1-2=0$.
|
Prove $\sqrt{a} + \sqrt{b} + \sqrt{c} \ge ab + bc + ca$ Let $a,b,c$ are non-negative numbers, such that $a+b+c = 3$.
Prove that $\sqrt{a} + \sqrt{b} + \sqrt{c} \ge ab + bc + ca$
Here's my idea:
$\sqrt{a} + \sqrt{b} + \sqrt{c} \ge ab + bc + ca$
$2(\sqrt{a} + \sqrt{b} + \sqrt{c}) \ge 2(ab + bc + ca)$
$2(\sqrt{a} + \sqrt{b} + \sqrt{c}) - 2(ab + bc + ca) \ge 0$
$2(\sqrt{a} + \sqrt{b} + \sqrt{c}) - ((a+b+c)^2 - (a^2 + b^2 + c^2)) \ge 0$
$2(\sqrt{a} + \sqrt{b} + \sqrt{c}) + (a^2 + b^2 + c^2) - (a+b+c)^2 \ge 0$
$2(\sqrt{a} + \sqrt{b} + \sqrt{c}) + (a^2 + b^2 + c^2) \ge (a+b+c)^2$
$2(\sqrt{a} + \sqrt{b} + \sqrt{c}) + (a^2 + b^2 + c^2) \ge 3^2 = 9$
And I'm stuck here.
I need to prove that:
$2(\sqrt{a} + \sqrt{b} + \sqrt{c}) + (a^2 + b^2 + c^2) \ge (a+b+c)^2$ or
$2(\sqrt{a} + \sqrt{b} + \sqrt{c}) + (a^2 + b^2 + c^2) \ge 3(a+b+c)$, because $a+b+c = 3$
In the first case using Cauchy-Schwarz Inequality I prove that:
$(a^2 + b^2 + c^2)(1+1+1) \ge (a+b+c)^2$
$3(a^2 + b^2 + c^2) \ge (a+b+c)^2$
Now I need to prove that:
$2(\sqrt{a} + \sqrt{b} + \sqrt{c}) + (a^2 + b^2 + c^2) \ge 3(a^2 + b^2 + c^2)$
$2(\sqrt{a} + \sqrt{b} + \sqrt{c}) \ge 2(a^2 + b^2 + c^2)$
$\sqrt{a} + \sqrt{b} + \sqrt{c} \ge a^2 + b^2 + c^2$
I don't know how to continue.
In the second case I tried proving:
$2(\sqrt{a} + \sqrt{b} + \sqrt{c}) \ge 2(a+b+c)$ and
$a^2 + b^2 + c^2 \ge a+b+c$
Using Cauchy-Schwarz Inequality I proved:
$(a^2 + b^2 + c^2)(1+1+1) \ge (a+b+c)^2$
$(a^2 + b^2 + c^2)(a+b+c) \ge (a+b+c)^2$
$a^2 + b^2 + c^2 \ge a+b+c$
But I can't find a way to prove that $2(\sqrt{a} + \sqrt{b} + \sqrt{c}) \ge 2(a+b+c)$
So please help me with this problem.
P.S
My initial idea, which is proving:
$2(\sqrt{a} + \sqrt{b} + \sqrt{c}) + (a^2 + b^2 + c^2) \ge 3^2 = 9$
maybe isn't the right way to prove this inequality.
|
I will use the following lemma (the proof below):
$$2x \geq x^2(3-x^2)\ \ \ \ \text{ for any }\ x \geq 0. \tag{$\clubsuit$}$$
Start by multiplying our inequality by two
$$2\sqrt{a} +2\sqrt{b} + 2\sqrt{c} \geq 2ab +2bc +2ca, \tag{$\spadesuit$}$$
and observe that
$$2ab + 2bc + 2ca = a(b+c) + b(c+a) + c(a+b) = a(3-a) + b(3-b) + c(3-c)$$
and thus $(\spadesuit)$ is equivalent to
$$2\sqrt{a} +2\sqrt{b} + 2\sqrt{c} \geq a(3-a) + b(3-b) + c(3-c)$$
which can be obtained by summing up three applications of $(\clubsuit)$ for $x$ equal to $\sqrt{a}$, $\sqrt{b}$ and $\sqrt{c}$ respectively:
\begin{align}
2\sqrt{a} &\geq a(3-a), \\
2\sqrt{b} &\geq b(3-b), \\
2\sqrt{c} &\geq c(3-c). \\
\end{align}
$$\tag*{$\square$}$$
The lemma
$$2x \geq x^2(3-x^2) \tag{$\clubsuit$}$$
is true for any $x \geq 0$ (and also any $x \leq -2$) because
$$2x - x^2(3-x^2) = (x-1)^2x(x+2)$$
is a polynomial with roots at $0$ and $-2$, a double root at $1$ and a positive coefficient at the largest degree, $x^4$.
I hope this helps ;-)
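Both the lemma and the original inequality can be spot-checked over random admissible triples (a sketch using only the standard library; the uniform-cut construction is just one convenient way to sample $a+b+c=3$):

```python
import math, random

random.seed(0)

def lemma(x):
    # 2x >= x^2 (3 - x^2) for x >= 0, up to floating-point slack
    return 2 * x >= x * x * (3 - x * x) - 1e-12

for _ in range(10_000):
    # random nonnegative (a, b, c) with a + b + c = 3
    cuts = sorted(random.uniform(0, 3) for _ in range(2))
    a, b, c = cuts[0], cuts[1] - cuts[0], 3 - cuts[1]
    assert lemma(math.sqrt(a)) and lemma(math.sqrt(b)) and lemma(math.sqrt(c))
    assert math.sqrt(a) + math.sqrt(b) + math.sqrt(c) >= a*b + b*c + c*a - 1e-9
```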
|
Nth number of continued fraction Given a real number $r$ and a non-negative integer $n$, is there a way to accurately find the $n^{th}$ term of its continued fraction (with the integer part being the $0^{th}$ term)? If this cannot be done for all $r$, what are some specific ones for which it can, like $\pi$ or $e$? I already know how to do this for square roots.
|
You can do it recursively: $$\eqalign{f_0(r) &= \lfloor r \rfloor\cr
f_{n+1}(r) &= f_n\left( \frac{1}{r - \lfloor r \rfloor}\right)\cr}$$
Of course this may require numerical calculations with very high precision.
Actually, if $r$ is a rational number but you don't know it, no finite precision
numerical calculation will suffice.
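The recursion can be coded directly (the function name is my own); as noted, floating-point arithmetic limits how many terms are trustworthy:

```python
import math

def cf_terms(r, count):
    """First `count` continued-fraction terms of r, in float arithmetic."""
    terms = []
    for _ in range(count):
        a = math.floor(r)
        terms.append(a)
        frac = r - a
        if frac == 0:           # rational input (or precision exhausted)
            break
        r = 1.0 / frac
    return terms

# the well-known expansion of pi: [3; 7, 15, 1, 292, ...]
assert cf_terms(math.pi, 5) == [3, 7, 15, 1, 292]
```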
|
After $t$ hours, the hour hand was where the minute hand had been, and vice versa
On Saturday, Jimmy started painting his toy helicopter between 9:00 a.m. and 10:00 a.m. When he finished between 10:00 a.m. and 11:00 a.m. on the same morning, the hour hand was exactly where the minute hand had been when he started, and the minute hand was exactly where the hour hand had been when he started. Jimmy spent $t$ hours painting. Determine the value of $t$.
Let $h$ represent the initial position of the hour hand.
Let $m$ represent the initial position of the minute hand.
Note that in this solution position will be represented in "minutes". For example, if the hour hand was initially at $9$, its position would be $45$.
$$45\le h \le 50$$
$$50 \le m \le 55$$
$$\implies 0 \le (m-h) \le 10$$
Time can be represented as the number of minutes passed since 12 a.m. (for example 1 am = 60, 2 am = 120 etc.)
Then:
$$60(\frac{h}{5}) + m + t = 60 (\frac{m}{5}) + h$$
$$\implies 12 h + m + t = 12m + h$$
$$\implies t = 11 m - 11 h$$
$$\implies t = 11(m-h)$$
$$0 \le t \le 120$$
$$\implies 0 \le 11(m-h) \le 120$$
$$\implies 0 \le m -h \le \frac{120}{11}$$
That was as far as I got. Could someone point me on the right path to complete the above solution? (if possible). I am not simply looking for a solution but instead a way to complete the above solution. Any help is appreciated, thank you.
|
I'm not sure what you mean by "complete the above solution". The above attempt is actually pretty far from actually finding out what $m$ and $h$ are. However, you do get $t$ as a function of $m$ and $h$, which will be needed to compute the time elapsed.
To actually solve this problem, you have to make use of the fact that $m$ and $h$ encode the same information. If I know the exact position of the hour hand, then I know the exact time because the minute hand's information is encoded in the position of the hour hand between the two integer hours.
At the beginning, we have
$$ \frac{h-45}{5} = \frac{m}{60} $$
Then, at the end, we have
$$ \frac{m-50}{5} = \frac{h}{60} $$
Now, it's just a matter of solving a system of two equations/unknowns: http://www.wolframalpha.com/input/?i=h-45+%3D+m%2F12,+m-50+%3D+h%2F12
Plugging in these values for $h$ and $m$ into $t = 11(m-h)$ yields approximately $t=$ 50.8 minutes, or $t = 0.846$ hours.
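The two-equation system is simple enough to solve by hand-coded elimination rather than deferring to Wolfram; a small sketch:

```python
# h - 45 = m/12  and  m - 50 = h/12.
# Substituting m = 12*h - 540 from the first equation into the second:
#   12*h - 540 - 50 = h/12   =>   h*(12 - 1/12) = 590   =>   h = 590*12/143
h = 590 * 12 / 143
m = 12 * h - 540
t_minutes = 11 * (m - h)

assert abs(h - 49.5105) < 1e-3
assert abs(m - 54.1259) < 1e-3
assert abs(t_minutes / 60 - 0.846) < 1e-3   # about 50.8 minutes
```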
|
Help solving $\sqrt[3]{x^3+3x^2-6x-8}>x+1$ I'm working through a problem set of inequalities where we've been moving all terms to one side and factoring, so you end up with a product of factors compared to zero. Then by creating a sign diagram we've determined which intervals satisfy the inequality.
This one, however, has me stumped:
$$\sqrt[3]{x^3+3x^2-6x-8}>x+1$$
I managed to do some factoring, and think I might be on the right path, though I'm not totally sure. Here's what I have so far,
$$\sqrt[3]{x^3+3x^2-6x-8}-(x+1)>0\\
\sqrt[3]{(x-2)(x+4)(x+1)}-(x+1)>0\\
\sqrt[3]{(x-2)(x+4)}\sqrt[3]{(x+1)}-(x+1)>0\\
\sqrt[3]{(x-2)(x+4)}(x+1)^{1/3}-(x+1)>0\\
(x+1)^{1/3}\left(\sqrt[3]{(x-2)(x+4)}-(x+1)^{2/3}\right)>0$$
Is there a way to factor it further (the stuff in the big parentheses), or is there another approach I'm missing?
Update: As you all pointed out below, this was the wrong approach.
This is a lot easier than you are making it.
How very true :) I wasn't sure how cubing both sides would affect the inequality. Now things are a good deal clearer. Thanks for all your help!
|
Hint: you may want to remove the root first, by noticing that $f(y)=y^3$ is a monotonic increasing function.
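Following the hint: cubing both sides gives $x^3+3x^2-6x-8>(x+1)^3$, which simplifies to $-9x>9$, i.e. $x<-1$. A quick numerical check (with a sign-aware real cube root, since Python's `** (1/3)` misbehaves for negative bases):

```python
def cbrt(v):
    # real cube root that also handles negative inputs
    return -((-v) ** (1 / 3)) if v < 0 else v ** (1 / 3)

def holds(x):
    return cbrt(x**3 + 3 * x**2 - 6 * x - 8) > x + 1

# cubing both sides predicts: the inequality holds exactly when x < -1
for k in range(-60, 61):
    x = k / 10
    if abs(x + 1) > 1e-9:       # skip the boundary point itself
        assert holds(x) == (x < -1)
```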
|
Trignometry - Cosine Formulae c = 21
b = 19
B= $65^o$
solve A with cosine formulae
$a^2+21^2-19^2=2a(21)\cos 65^\circ$
yields a simple quadratic equation in the variable $a$,
but $\Delta=(-2(21)\cos 65^\circ)^2-4(21^2-19^2) < 0$ implies the triangle has no solution?
How to make sense of that? Why does this happen and in what situation? Please give a range, if any, thanks.
|
Here is an investigation without directly using sine law:
From A draw a perpendicular line to BC. Let the intersection be H
BH is $AB\cos(65^\circ)$, which is nearly $8.87$.
By the Pythagorean theorem, AH is nearly $19.032$.
AC is $19$. But now we have arrived at a contradiction: the hypotenuse AC of right triangle AHC is shorter than its leg AH, which immediately implies $\sin(C)>1$.
|
If $E$ is generated by $E_1$ and $E_2$, then $[E:F]\leq [E_1:F][E_2:F]$? Suppose $K/F$ is an extension field, and $E_1$ and $E_2$ are two intermediate subfields of finite degree. Let $E$ be the subfield of $K$ generated by $E_1$ and $E_2$. I'm trying to prove that
$$ [E:F]\leq[E_1:F][E_2:F].$$
Since $E_1$ and $E_2$ are finite extensions, I suppose they have bases $\{a_1,\dots,a_n\}$ and $\{b_1,\dots,b_m\}$, respectively. If $E_1=F$ or $E_2=F$, then the inequality is actually equality, so I suppose both are proper extension fields. I think $E=F(a_1,\dots,a_n,b_1,\dots,b_m)$. Since $\{a_1,\dots,a_n,b_1,\dots,b_m\}$ is a spanning set for $E$ over $F$,
$$[E:F]\leq n+m\leq nm=[E_1:F][E_2:F]$$
since $m,n>1$.
Is this sufficient? I'm weirded out since the problem did not ask to show $[E:F]\leq [E_1:F]+[E_2:F]$ which I feel will generally be a better upper bound.
|
The sum of the degrees in general is not going to be an upper bound. Consider $K = \Bbb{Q}(\sqrt[3]{2},e^{2\pi i/3})$. This is a degree $6$ extension of $\Bbb{Q}$. Take $E_1 = \Bbb{Q}(\sqrt[3]{2})$ and $E_2 = \Bbb{Q}(e^{2 \pi i/3})$. Then
$$[E_1 : \Bbb{Q}] + [E_2:\Bbb{Q} ] = 3 + 2 = 5$$
but $E = K$, which has degree $6$ over $\Bbb{Q}$.
|
Eigenvalues of $A$ Let $A$ be a $3\times 3$ matrix with real entries. If $A$ commutes with all $3\times 3$ matrices with real entries, then how many distinct real eigenvalues does $A$ have?
please give some hint.
thank you for your time.
|
If $A$ commutes with $B$ then $$[A,B] = AB - BA = 0$$
$$AB=BA$$
$$ABB^{-1}=BAB^{-1}$$
$$A=BAB^{-1}$$
So whenever $B$ is invertible, conjugating by $B$ leaves $A$ unchanged. Taking $B$ to be a diagonal matrix with distinct diagonal entries forces $A$ to be diagonal; taking $B$ to be the permutation matrices then forces the diagonal entries of $A$ to be equal. Hence $A=\lambda I$ for some $\lambda\in\mathbb{R}$, and $A$ has exactly one distinct real eigenvalue.
|
Balls, Bags, Partitions, and Permutations We have $n$ distinct colored balls and $m$ similar bags (with the condition $n \geq m$). In how many ways can we place these $n$ balls into the given $m$ bags?
My Attempt: For the moment, if we assume all the balls are of same color - we are counting partitions of $n$ into at most $m$ parts( since each bag looks same - it is not a composition of $n$, just a partition ).
But, given that each ball is unique, for every partition of $n$ ----> $\lambda : \{\lambda_1,\lambda_2, \cdots \lambda_m\}$, we've $\binom{n}{\lambda_1} * \binom{n-\lambda_1}{\lambda_2}*\binom{n-\lambda_1-\lambda_2}{\lambda_3}* \cdots * \binom{\lambda_{m-1}+\lambda_{m}}{\lambda_{m-1}}*\binom{\lambda_m}{\lambda_m}$ ways of arranging the balls into $m$ given baskets. We need to find this out for every partition of $n$ into $m$ parts.
Am i doing it right?
I'm wondering whether there is any closed form formula for this( i know that enumerating the number of partitions of $n$ is also complex. But, i don't know, the stated problem seemed simple at the first look. But, it isn't? Am i right? ).
P.S. I searched for this question thinking that somebody might have asked it earlier. But, couldn't find it. Please point me there if you find it.
|
This is counting the maps from an $n$-set (the balls) to an $m$-set (the bags), up to permutations of the $m$-set (as the bags are similar). This is just one of the problems of the twelvefold way, and looking at the appropriate section you find that the result can only be expressed as a sum of Stirling numbers of the second kind, namely $\sum_{k=0}^m\genfrac\{\}0{}nk$. The $k$ in fact represents the number of nonempty bags here.
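The sum of Stirling numbers can be computed with the standard recurrence $S(n,k)=k\,S(n-1,k)+S(n-1,k-1)$; a small sketch (function names are my own):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    # partitions of an n-set into exactly k nonempty blocks
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def placements(n, m):
    # distinct balls into at most m identical bags (empty bags allowed)
    return sum(stirling2(n, k) for k in range(m + 1))

assert placements(4, 2) == 8    # S(4,1) + S(4,2) = 1 + 7
assert placements(3, 3) == 5    # the Bell number B_3
```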
|
Easy volumes question Note: I am trying to prepare; I already have the answer to this question but I don't understand it.
It says volume of earth used for making the embankment $= \pi (R^2 - r^2)h$.
But I don't understand: what is $R$? What is $r$? The whole thing solves out to $28\pi$, but I don't understand where the two values $R$ and $r$ come from and what they are.
The question is: A well of diameter 3 m is dug 14 m deep. The earth taken out has been spread evenly all around it in the shape of a circular ring of width 4 m. Find the height of the embankment.
I think that width refers to the diameter but I am not sure about that either; it would be really helpful if someone explained the answer to me properly, my guide book sure doesn't D:
|
The well has radius $r=\frac32$ (its diameter is $3$ m), so the total volume of earth taken out is $r^2\pi\cdot h=\left(\frac32\right)^2\pi\cdot14=31.5\pi$. The embankment is a ring with inner radius $r=\frac32$ and outer radius $R=\frac32+4=\frac{11}2$ (the width $4$ is added to the radius), so its volume is $(R^2-r^2)\pi\cdot H=\left(\left(\frac{11}2\right)^2-\left(\frac32\right)^2\right)\pi H=28\pi H$, where you need to find $H$. Compare the two, and express $H$: $H=\frac{31.5}{28}=1.125$ m.
|
Counting Couples A group of m men and w women randomly sit in a single row at a theater. If a man and woman are seated next to each other, they form a "couple." "Couples" can overlap, which means one person can be a member of two "couples."
Question: What is the expected number of couples?
Comment:
I have a hard time with word problems that deal with "expectations".
|
Continuing the computation we can calculate $E[C(C-1)]$.
We have $$\left.\left(\left(\frac{d}{dz}\right)^2 (P+Q)\right)\right|_{z=1} =
2\,{\frac {uv \left( -2\,uv-u+{u}^{2}+{v}^{2}-v \right) }{ \left( v-1+u \right) ^{3}}}.$$
After a straightforward calculation this transforms into
$$ E[C(C-1)] =
{\frac {2\,mw \,\left( 2\,mw-w-m \right) }{ \left( m+w-1 \right) \left( m+w \right) }}.$$
|
How can I prove that $\|Ah\| \le \|A\| \|h\|$ for a linear operator $A$? On http://www.proofwiki.org/wiki/Definition:Norm_(Linear_Transformation) , it is stated that $||Ah|| \leq ||A||||h||$ where $A$ is an operator.
Is this a theorem of some sort? If so, how can it be proved? I've been trying to gather more information about this to no avail.
Help greatly appreciated!
|
By definition:
$$
\|A\|=\sup_{\|x\|\leq 1}\|Ax\|=\sup_{\|x\|= 1}\|Ax\|=\sup_{x\neq 0}\frac{\|Ax\|}{\|x\|}.$$
In particular,
$$
\frac{\|Ax\|}{\|x\|}\leq \|A\|\qquad\forall x\neq 0\quad\Rightarrow\quad\|Ax\|\leq \|A\|\|x\|\qquad \forall x.
$$
Note that $\|A\|$ can alternatively be defined as the least $M\geq 0$ such that $\|Ax\|\leq M\|x\|$ for all $x$. When such an $M$ does not exist, one has $\|A\|=+\infty$ and one says that $A$ is unbounded.
|
How to prove inverse of Matrix If you have an invertible upper triangular matrix $M$, how can you prove that $M^{-1}$ is also an upper triangular matrix? I already tried many things but don't know how to prove this. Please help!
|
The cofactor of any element above the diagonal is zero. Reason: the matrix whose determinant is that cofactor will always be an upper triangular matrix with determinant zero, because either its last row is zero, or its first column is zero, or one of its diagonal elements is zero; this can easily be verified. Since the $(i,j)$ entry of the inverse is the cofactor of the $(j,i)$ entry divided by $\det M$, this implies that the entries below the diagonal of the inverse of this matrix will all be zero.
|
How can I calculate or think about the large number 32768^1049088? I decided to ask myself how many different images my laptop's screen could display. I came up with (number of colors)^(number of pixels) so assuming 32768 colors I'm trying to get my head around the number, but I have a feeling it's too big to actually calculate.
Am I right that it's too big to calculate? If not, then how? If so then how would you approach grasping the magnitude?
Update: I realized a simpler way to get the same number is 2^(number of bits of video RAM) or "all the possible configurations of video RAM" - correct me if I'm wrong.
|
Your original number is
$2^{15\cdot 1049088}
=10^{15\cdot 1049088\,\log_{10}2}
< 10^{4.8\times 10^6}
$
which is certainly computable
since it has fewer than
4,800,000 digits.
The new, larger number is
$2^{24\cdot 1049088}
< 10^{7.6\times 10^6}
$
which is still computable
since it has fewer than
7,600,000 digits.
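Using the full pixel count $1049088$ from the question, the exact numbers of decimal digits follow from logarithms (Python's big integers could even write the numbers out in full):

```python
import math

def digit_count(base, exponent):
    # number of decimal digits of base**exponent
    return math.floor(exponent * math.log10(base)) + 1

assert 4_700_000 < digit_count(32768, 1049088) < 4_800_000   # ~4.74 million digits
assert 7_500_000 < digit_count(2, 24 * 1049088) < 7_600_000  # ~7.58 million digits
```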
|
Analytic Geometry question (high school level) I was asked to find the focus and directrix of the given equation: $y=x^2 -4$. This is what I have so far:
Let $F = (0, -\frac{p}{2})$ be the focus, $D = (x, \frac{p}{2})$ and $P = (x,y)$ which reduces to $x^2 = 2py$ for $p>0$. Now I have $x^2 = 2p(x^2 - 4)$ resulting in $ x^2 = \frac{-8p}{(1-2p)}$ I have no clue how to find the focus. I just know that it will be at $(0, -4+\frac{p}{2})$
Can I get help from some college math major? I went to the tutoring center at my high school but no one there understands what I'm talking about.
|
I seem to remember the focal distance $p$ satisfies $4ap=1$ where the equation for the parabola is $y = ax^2 + bx + c$. Here $a=1$, so $p=\frac14$: your focus will be $\frac14$ above your vertex $(0,-4)$, that is, at $\left(0,-\frac{15}4\right)$, and the directrix will be the horizontal line $\frac14$ below your vertex, $y=-\frac{17}4$.
|
Generating function with quadratic coefficients. $h_k=2k^2+2k+1$. I need the generating function $$G(x)=h_0+h_1x+\dots+h_kx^k+\dots$$ I do not have to simplify this, yet I'd really like to know how Wolfram computed this sum as $$\frac{x(2x^2-2x+5)}{(1-x)^3}$$ when $|x|<1$. Rewrite Wolfram's answer as $$x(2x^2-2x+5)(1+x+x^2+\dots)^3=x(2x^2-2x+5)(1+2x+3x^2+\dots)(1+x+x^2+\dots),$$ but how would this give $G$?
|
$$\sum_{k=0}^{\infty} x^k = \frac{1}{1-x}$$
$$\sum_{k=0}^{\infty}k x^k = \frac{x}{(1-x)^2}$$
$$\sum_{k=0}^{\infty}k^2 x^k = \frac{x(1+x)}{(1-x)^3}$$
$$G(x) = \sum_{k=0}^{\infty} (2 k^2+2 k+1) x^k$$
Combine the above expressions as defined by $G(x)$ and you should reproduce Wolfram.
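Numerically comparing partial sums against the three closed forms at a sample $|x|<1$ makes the combination concrete (purely illustrative; the helper name is my own):

```python
def partial_sum(coef, x, terms=200):
    return sum(coef(k) * x**k for k in range(terms))

x = 0.5
geo  = partial_sum(lambda k: 1, x)
lin  = partial_sum(lambda k: k, x)
quad = partial_sum(lambda k: k * k, x)

assert abs(geo - 1 / (1 - x)) < 1e-12
assert abs(lin - x / (1 - x) ** 2) < 1e-12
assert abs(quad - x * (1 + x) / (1 - x) ** 3) < 1e-12

# G(x) is the corresponding combination 2*quad + 2*lin + geo
G = partial_sum(lambda k: 2 * k * k + 2 * k + 1, x)
assert abs(G - (2 * quad + 2 * lin + geo)) < 1e-12
```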
|
Is the sum or product of idempotent matrices idempotent? If you have two idempotent matrices $A$ and $B$, is $A+B$ an idempotent matrix?
Also, is $AB$ an idempotent Matrix?
If both are true, Can I see the proof? I am completley lost in how to prove both cases.
Thanks!
|
Let $e_1$, $e_2 \in \mathbb{R}^n$ be linearly independent unit vectors with $c := \left\langle e_1,e_2\right\rangle \neq 0$, viewed as column vectors. For $i=1$, $2$, let $P_i := e_i e_i^T \in M_n(\mathbb{R})$ be the orthogonal projection onto $\mathbb{R}e_i$. Thus, $P_1$ and $P_2$ are idempotents with
$$
P_1 e_1 = e_1, \quad P_1 e_2 = c e_1, \quad P_2 e_1 = c e_2, \quad P_2 e_2 = e_2.
$$
Then:
*
*On the one hand,
$$
(P_1 + P_2)e_1 = e_1 + c e_2,
$$
and on the other hand,
$$
(P_1 + P_2)^2 e_1 = (P_1+P_2)(e_1+ce_2) = (1+c^2)e_1 + 2c e_2,
$$
so that $(P_1+P_2)^2 e_1 \neq (P_1+P_2)e_1$, and hence $(P_1+P_2)^2 \neq P_1+P_2$.
*On the one hand,
$$
P_1 P_2 e_1 = P_1 (c e_2) = c^2 e_1,
$$
and on the other hand,
$$
(P_1 P_2)^2 e_1 = P_1 P_2 (c^2 e_1) = c^4 e_1,
$$
so that since $0 < |c| < 1$, $(P_1 P_2)^2 e_1 \neq P_1 P_2 e_1$, and hence $(P_1 P_2)^2 \neq P_1 P_2$.
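A concrete numerical instance of this construction in $\mathbb{R}^2$, with $e_1=(1,0)$ and $e_2=(1/\sqrt2,\,1/\sqrt2)$ (so $c=1/\sqrt2$), using plain $2\times2$ arithmetic:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

def proj(e):
    # orthogonal projection e e^T onto the line spanned by the unit vector e
    return [[e[i] * e[j] for j in range(2)] for i in range(2)]

s = 1 / math.sqrt(2)
P1, P2 = proj((1.0, 0.0)), proj((s, s))
assert close(matmul(P1, P1), P1) and close(matmul(P2, P2), P2)  # idempotent

S = [[P1[i][j] + P2[i][j] for j in range(2)] for i in range(2)]
assert not close(matmul(S, S), S)        # the sum is not idempotent

Pr = matmul(P1, P2)
assert not close(matmul(Pr, Pr), Pr)     # the product is not idempotent
```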
|
A question about topology regarding its conditions A question from a rookie.
As we know, $(X, T)$ is a topological space, on the following conditions,
*
*The union of a family of $T$-sets, belongs to $T$;
*The intersection of a FINITE family of $T$-sets, belongs to $T$;
*The empty set and the whole $X$ belongs to $T$.
So when the condition 2 is changed into:
2'. The intersection of a family of $T$-sets, belongs to $T$;
can anyone give a legitimate topological space as a counterexample to condition 2'?
Thank you.
|
The intersection of intervals $(-1/n,\ 1/n)$ for $n\in\mathbb N$ is only $\{0\}$ which is not open in $\mathbb R$ with the euclidean topology.
|
Calculate:$\lim_{x \rightarrow (-1)^{+}}\left(\frac{\sqrt{\pi}-\sqrt{\cos^{-1}x}}{\sqrt{x+1}} \right)$ How to calculate the following without using L'Hospital's rule?
$$\lim_{x \rightarrow (-1)^{+}}\left(\frac{\sqrt{\pi}-\sqrt{\cos^{-1}x}}{\sqrt{x+1}} \right)$$
|
Let $\sqrt{\arccos(x)} = t$. We then have $x = \cos(t^2)$. Since $x \to (-1)^+$, we have $t^2 \to \pi^-$. Hence, we have
$$\lim_{x \to (-1)^+} \dfrac{\sqrt{\pi} - \sqrt{\arccos(x)}}{\sqrt{1+x}} = \overbrace{\lim_{t \to \sqrt{\pi}^-} \dfrac{\sqrt{\pi} - t}{\sqrt{1+\cos(t^2)}}}^{t = \sqrt{\arccos(x)}} = \underbrace{\lim_{y \to 0^+} \dfrac{y}{\sqrt{1+\cos((\sqrt{\pi}-y)^2)}}}_{y = \sqrt{\pi}-t}$$
$$1+\cos((\sqrt{\pi}-y)^2) = 1+\cos(\pi -2 \sqrt{\pi}y + y^2) = 1-\cos(2 \sqrt{\pi}y - y^2) = 2 \sin^2 \left(\sqrt{\pi} y - \dfrac{y^2}2\right)$$
Hence,
\begin{align}
\lim_{y \to 0^+} \dfrac{y}{\sqrt{1+\cos((\sqrt{\pi}-y)^2)}} & = \dfrac1{\sqrt2} \lim_{y \to 0^+} \dfrac{y}{\sin \left(\sqrt{\pi}y - \dfrac{y^2}2\right)}\\
& = \dfrac1{\sqrt2} \lim_{y \to 0^+} \dfrac{\left(\sqrt{\pi}y - \dfrac{y^2}2\right)}{\sin \left(\sqrt{\pi}y - \dfrac{y^2}2\right)} \dfrac{y}{\left(\sqrt{\pi}y - \dfrac{y^2}2\right)} = \dfrac1{\sqrt{2 \pi}}
\end{align}
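A floating-point check of the value $1/\sqrt{2\pi}\approx 0.3989$ near the endpoint (only a few digits of agreement should be expected for small $\varepsilon$):

```python
import math

def f(x):
    return (math.sqrt(math.pi) - math.sqrt(math.acos(x))) / math.sqrt(1 + x)

for eps in (1e-6, 1e-8):
    assert abs(f(-1 + eps) - 1 / math.sqrt(2 * math.pi)) < 1e-3
```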
|
Calculating line integral I'm working on this problem:
Calculate integral
\begin{equation}
\int_C\frac{z\arctan(z)}{\sqrt{1+z^2}}\,dz + (y-z^3)\,dx - (2x+z^3)\,dy,
\end{equation}
where the contour $C$ is defined by equations
$$
\sqrt{1-x^2-y^2}=z, \quad 4x^2+9y^2 = 1.
$$
Seems to me that I know the solution, but I have feeling that I could lost something. Would you help me to clarify this.
First, it is easy to parametrize the contour: $x=\frac{1}{2}\cos\varphi$, $y=\frac{1}{3}\sin\varphi$, $z=\sqrt{1-\frac{\cos^2\varphi}{4}-\frac{\sin^2\varphi}{9}}$ and $\varphi$ goes from $0$ to $2\pi$. So we will have the integral $\int_0^{2\pi}F(\varphi)\,d\varphi$, where the function $F(\varphi)$ is quite complicated.
But I thought about another method. The contour is symmetric and it would provide some simplifications:
When the variable $z$ goes up and down (on the contour), the values of the function $\frac{z\arctan(z)}{\sqrt{1+z^2}}$ are the same at such up-and-down points, while $dz$ changes sign, so that term cancels. So I can write
$$
\int_C\dots = \int_C (y-z^3)\,dx - (2x+z^3)\,dy.
$$
The same I can conclude for variable $x$ and function $z^3$ and for variable $y$ and function $z^3$. So
$$
\int_C\dots = \int_C y\,dx - (2x+z^3)\,dy = \int_C y\,dx - 2x\,dy.
$$
After that it is much easier to compute the integral using parameterization.
$$
\int_C y\,dx - 2x\,dy = \int_0^{2\pi}-\frac{1}{6}\sin^2\varphi -2\frac{1}{6}\cos^2\varphi \, d\varphi = -\frac{1}{6}\int_0^{2\pi}1+\cos^2\varphi\,d\varphi = -\frac{\pi}{2}
$$
So the answer is $-\frac{\pi}{2}$.
|
It's absolutely fine to exploit the symmetries in the given problem. But we need a clear cut argument. Observing that a "variable goes up and down" doesn't suffice.
Relevant are the following symmetries in the parametrization of $C$:
$$x(\phi+\pi)=-x(\phi),\quad y(\phi+\pi)=-y(\phi),\quad z(\phi)=z(-\phi)=z(\phi+\pi)\ .$$
This implies
$$\dot x(\phi+\pi)=-\dot x(\phi),\quad \dot y(\phi+\pi)=-\dot y(\phi),\quad \dot z(-\phi)=-\dot z(\phi)\ .$$
When computing $W:=\int_C \bigl(P\ dx+Q\ dy+ R\ dz\bigr)$ for $P$, $Q$, $R$ as in your question we therefore immediately get
$$\eqalign{\int_C P\ dx&=\int_0^{2\pi}(y(\phi)-z^3(\phi))\dot x(\phi)\ d\phi=\int_0^{2\pi}y(\phi)\ \dot x(\phi)\ d\phi=-{1\over6}\int_0^{2\pi}\sin^2\phi\ d\phi=-{\pi\over6},\cr
\int_C Q\ dy&=\int_0^{2\pi}(-2x(\phi)-z^3(\phi))\dot y(\phi)\ d\phi=-2
\int_0^{2\pi}x(\phi)\ \dot y(\phi)\ d\phi=\ldots=-{\pi\over3},\cr
\int_C R\ dz&=\int_{-\pi}^\pi \tilde R(\phi)\dot z(\phi)\ d\phi=0\cr}$$
(where we have used that $\tilde R(-\phi)=\tilde R(\phi)$). It follows that $W=-{\pi\over2}$.
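The value $W=-\pi/2$ can also be corroborated by brute-force quadrature of the full integrand in the parametrization from the question (trapezoid rule, which converges very fast for smooth periodic integrands; $\dot z$ is computed from $z^2=1-x^2-y^2$):

```python
import math

def integrand(phi):
    x, y = math.cos(phi) / 2, math.sin(phi) / 3
    z2 = 1.0 - x * x - y * y
    z = math.sqrt(z2)
    dx, dy = -math.sin(phi) / 2, math.cos(phi) / 3
    # d(z^2)/dphi = (5/18) sin(phi) cos(phi), so dz/dphi = (5/36) sin cos / z
    dz = (5 / 36) * math.sin(phi) * math.cos(phi) / z
    P = y - z**3
    Q = -(2 * x + z**3)
    R = z * math.atan(z) / math.sqrt(1 + z2)
    return P * dx + Q * dy + R * dz

n = 4096
h = 2 * math.pi / n
W = h * sum(integrand(k * h) for k in range(n))
assert abs(W - (-math.pi / 2)) < 1e-10
```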
|
How fast can you determine if vectors are linearly independent? Let us suppose you have $m$ real-valued vectors of length $n$ where $n \geq m$.
How fast can you determine if they are linearly independent?
In the case where $m = n$ one way to determine independence would be to compute the determinant of the matrix whose rows are the vectors. I tried some googling and found that the best known algorithm to compute the determinant of a square matrix with $n$ rows runs in $O \left ( n^{2.373} \right )$. That puts an upper bound on the case where $m = n$. But computing the determinant seems like an overkill. Furthermore it does not solve the case where $n > m$.
Is there a better algorithm? What is the known theoretical lower bound on the complexity of such an algorithm?
|
Please use the following steps:
*
*Arrange the vectors as the columns of a matrix, one vector per column.
*The columns of a matrix are always linearly dependent if the number of columns exceeds the number of rows.
*When the number of rows is at least the number of columns (here $n \geq m$), the vectors are linearly independent if and only if elementary row operations can reduce the matrix to one with a pivot in every column, i.e. its rank equals the number of vectors; otherwise they are linearly dependent.
So the only thing required is a fast way to perform elementary row operations, i.e. Gaussian elimination, which for $m$ vectors of length $n$ takes $O(nm^2)$ arithmetic operations and avoids computing a determinant.
If someone thinks this answer is wrong, please prove it by giving some counterexample matrices.
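In practice "row-reduce and look for a pivot in every column" is a rank computation; a minimal Gaussian-elimination sketch with partial pivoting (tolerance-based, since floating point cannot decide exact rank):

```python
def rank(rows, tol=1e-10):
    # Gaussian elimination with partial pivoting; `rows` is a list of lists
    A = [list(r) for r in rows]
    nrows, ncols = len(A), len(A[0])
    r = 0
    for c in range(ncols):
        if r == nrows:
            break
        pivot = max(range(r, nrows), key=lambda i: abs(A[i][c]))
        if abs(A[pivot][c]) < tol:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        for i in range(r + 1, nrows):
            f = A[i][c] / A[r][c]
            for j in range(c, ncols):
                A[i][j] -= f * A[r][j]
        r += 1
    return r

def independent(vectors):
    # m vectors of length n are independent iff the m x n matrix has rank m
    return rank(vectors) == len(vectors)

assert independent([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
assert not independent([[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]])
```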
|
How to calculate the number of pieces in the border of a puzzle? Is there any way to calculate how many border-pieces a puzzle has, without knowing its width-height ratio? I guess it's not even possible, but I am trying to be sure about it.
Thanks for your help!
BTW you might want to know that the puzzle has 3000 pieces.
|
Obviously, $w\cdot h=3000$, and there are $2w+h-2+h-2=2w+2h-4$ border pieces. Since $3000=2^3\cdot 3\cdot 5^3$, possibilities are \begin{eqnarray}(w,h)&\in&\{(1,3000),(2,1500),(3,1000),(4,750),(5,600),(6,500),\\&&\hphantom{\{}(8,375),(10,300),(12,250),(15,200),(20,150),(24,125)\\ &&\hphantom{\{}(25,120),(30,100),(40,75),(50,60),(h,w)\},\end{eqnarray}
Considering this, your puzzle is probably $50\cdot60$ (I've never seen a puzzle with a side ratio more extreme than $2:1$), so there are $216$ border pieces. This is only $\frac{216\cdot100\%}{3000}=7.2\%$ of the puzzle pieces, which fits standards.
|
Find the necessary and sufficient conditions for all $ 41 \mid \underbrace{11\ldots 1}_{n}$, $n\in N$. Find a necessary and sufficient condition on $n\in \mathbb{N}$ for $ 41 \mid \underbrace{11\ldots 1}_{n}$. And, if $\underbrace{11\ldots 1}_{n} = 41\times p$,
then $p$ is a prime number.
Find all possible values of $n$ satisfying the condition.
|
The first sentence is asking that $41|\frac {10^n-1}9$. This is just the length of the repeat of $\frac 1{41}$. The second statement forces $n$ to be the minimum value. Without it, any multiple of $n$ would work, but $p$ would not be prime.
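Both claims are easy to verify by brute force for small $n$ (with $R_n=(10^n-1)/9$ the $n$-th repunit):

```python
def repunit(n):
    return (10**n - 1) // 9

# 41 divides R_n exactly when 5 | n (the period of 1/41 is 5)
divisible = [n for n in range(1, 41) if repunit(n) % 41 == 0]
assert divisible == [5, 10, 15, 20, 25, 30, 35, 40]

# the minimal case: R_5 = 41 * 271, and 271 is prime
p = repunit(5) // 41
assert p == 271
assert all(p % d for d in range(2, int(p**0.5) + 1))
```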
|
totally ordered group Suppose we have a nontrivial totally ordered group. Does this group have a maximum element?
A totally ordered group is a totally ordered structure $(G,\circ,\leq)$ such that $(G,\circ)$ is a group. I couldn't find a more exact definition.
|
I assume you want the ordering to be compatible with the group operation, such that if $a \geq b$ and $c\geq d$ then $ac\geq bd$.
In this case, the group cannot have a maximal element, which we can see as follows: Assume $g$ is such a maximal element and let $h\in G$ with $h\geq 1$.
Now we have that $g\geq g$ and $h\geq 1$ so $gh\geq g$ which by maximality would mean $gh = g$ so $h = 1$.
But if $G$ is not trivial, it has an element $h$ with $h\geq 1$ and $h\neq 1$ (if instead $h\leq 1$, replace $h$ by $h^{-1}\geq 1$), which gives us our contradiction.
|
volume of "$n$-hedron" In $\mathbb{R}^n$, why does the "$n$-hedron" $|x_1|+|x_2|+\dots+|x_n| \le 1$ have volume $\cfrac{2^n}{n!}$? I came across this fact in some of Minkowski's proofs in the field of geometry of numbers.
Thank you.
|
The $2^n$ comes because you have that many copies (one per orthant) of the simplex $x_i \ge 0$, $x_1+\cdots+x_n \le 1$.
The $n!$ comes from integrating up the volume. The area of the right triangle is $\frac 12$, the volume of the tetrahedron is $\frac 12 \cdot \frac 13$ and so on.
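A Monte Carlo sanity check of the formula for $n=3$ (a sketch; sample size and seed are arbitrary choices):

```python
# The fraction of the cube [-1,1]^3 whose points satisfy
# |x|+|y|+|z| <= 1 should be (2^3/3!) / 2^3 = 1/6.
import random
from math import factorial

random.seed(0)
n, N, hits = 3, 200_000, 0
for _ in range(N):
    if sum(abs(random.uniform(-1, 1)) for _ in range(n)) <= 1:
        hits += 1
estimate = hits / N
exact = (2**n / factorial(n)) / 2**n   # = 1/6
print(estimate, exact)
```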
|
Find a polynomial as $2836x^2-12724x+16129$ I found a polynomial function with integer coefficients: $f(x)=2836x^2-12724x+16129$
and $f(0)=127^2,f(1)=79^2,f(2)=45^2,f(3)=59^2,f(4)=103^2,f(5)=153^2.$
My question is: can we find a polynomial function with integer coefficients, called $f(x)$, which has no multiple roots, such that $f(0),f(1),f(2),f(3),\ldots,f(k)$ are distinct square numbers? ($k>5$ is a given integer)
Thanks all.
PS: I'm sorry, guys. I left out a very important condition: $f(x)$ should be a quadratic function $f(x)=ax^2+bx+c$. ($a,b,c$ are integers and $b^2-4ac\neq0$)
So the Lagrange interpolation method does not work.
I wonder is there always such a quadratic polynomial when $k$ is arbitrarily large?
|
One such quadratic
$$p(t)=-4980t^2+32100t+2809$$
$p(0)=53^2,p(1)=173^2,p(2)=217^2,p(3)=233^2,p(4)=227^2,p(5)=197^2,p(6)=127^2$
Source : Polynomials E.J Barbeau
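A quick verification that the quoted quadratic really takes the claimed square values:

```python
# Verify that p(t) is a perfect square for t = 0, ..., 6, with the
# claimed square roots 53, 173, 217, 233, 227, 197, 127.
from math import isqrt

def p(t):
    return -4980 * t**2 + 32100 * t + 2809

values = [p(t) for t in range(7)]
roots = [isqrt(v) for v in values]
print(roots)   # [53, 173, 217, 233, 227, 197, 127]
```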
|
inscribed angles on circle
That's basically the problem. I keep getting $\theta=90-\phi/2$. But I have a feeling it's not right. What I did was draw line segments $BD$ and $AC$. From there you get four triangles. I labeled the intersection of $BD$ and $AC$ as point $P$. From exterior angles I got my answer.
|
One way would be to let $E$ be the center of the circle. A standard result in geometry tells you that $\angle AEC=2\theta$. The two sides $AE$ and $CE$ are of equal lengths, there are right angles at $A$ and $C$, and the sides $AD$ and $CD$ are also of equal lengths. So the triangle $EAD$ is a right triangle congruent to $ECD$. One of the angles in that right triangle is $\theta$, so the other is $90^\circ-\theta$. Therefore what you're looking for is $180^\circ-2\theta$.
|
Surface integral over ellipsoid I have a problem with this surface integral:
$$
\iint\limits_S \sqrt{\frac{x^2}{a^4}+\frac{y^2}{b^4}+\frac{z^2}{c^4}}\,dS,
$$
where
$$
S = \{(x,y,z)\in\mathbb{R}^3: \frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{z^2}{c^2}= 1\}.
$$
|
Let the ellipsoid $S$ be given by
$${\bf x}(\theta,\phi)=(a\cos\theta\cos\phi,b\cos\theta\sin\phi,c\sin\theta)\ .$$
Then for all points $(x,y,z)\in S$ one has
$$Q^2:={x^2\over a^4}+{y^2\over b^4}+{z^2\over c^4}={1\over a^2b^2c^2}\left(\cos^2\theta(b^2c^2\cos^2\phi+a^2c^2\sin^2\phi)+a^2b^2\sin^2\theta\right)\ .$$
On the other hand
$${\rm d}S=|{\bf x}_\theta\times{\bf x}_\phi|\>{\rm d}(\theta,\phi)\ ,$$
and one computes
$$\eqalign{|{\bf x}_\theta\times{\bf x}_\phi|^2&=\cos^4\theta(b^2c^2\cos^2\phi+a^2c^2\sin^2\phi)+a^2b^2\cos^2\theta\sin^2\theta\cr
&=\cos^2\theta\ a^2b^2c^2\ Q^2\ .\cr}$$
It follows that your integral ($=:J$) is given by
$$\eqalign{J&=\int\nolimits_{\hat S} Q\ |{\bf x}_\theta\times{\bf x}_\phi|\>{\rm d}(\theta,\phi)=\int\nolimits_{\hat S}abc\ Q^2\ \cos\theta\ {\rm d}(\theta,\phi) \cr &={1\over abc}\int\nolimits_{\hat S}\cos\theta\left(\cos^2\theta(b^2c^2\cos^2\phi+a^2c^2\sin^2\phi)+a^2b^2\sin^2\theta\right)\ {\rm d}(\theta,\phi)\ ,\cr}$$
where $\hat S=[-{\pi\over2},{\pi\over2}]\times[0,2\pi]$. Using
$$\int_{-\pi/2}^{\pi/2}\cos^3\theta\ d\theta={4\over3},\quad \int_{-\pi/2}^{\pi/2}\cos\theta\sin^2\theta\ d\theta={2\over3},\quad \int_0^{2\pi}\cos^2\phi\ d\phi=\int_0^{2\pi}\sin^2\phi\ d\phi=\pi$$
we finally obtain
$$J={4\pi\over3}\left({ab\over c}+{bc\over a}+{ca\over b}\right)\ .$$
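A numerical cross-check of this closed form against the parametrized integrand (a sketch; the semi-axes and grid size are arbitrary choices):

```python
# Midpoint rule on [-pi/2, pi/2] x [0, 2pi] for the integrand
# cos(th)/(abc) * [cos^2(th)(b^2c^2 cos^2(ph) + a^2c^2 sin^2(ph))
#                  + a^2b^2 sin^2(th)],
# compared with (4*pi/3)(ab/c + bc/a + ca/b).
from math import pi, sin, cos

a, b, c = 1.0, 2.0, 3.0
N = 400
J = 0.0
for i in range(N):
    th = -pi / 2 + (i + 0.5) * pi / N
    for j in range(N):
        ph = (j + 0.5) * 2 * pi / N
        J += (cos(th) / (a * b * c)
              * (cos(th)**2 * (b*b*c*c*cos(ph)**2 + a*a*c*c*sin(ph)**2)
                 + a*a*b*b*sin(th)**2)) * (pi / N) * (2 * pi / N)
closed = 4 * pi / 3 * (a * b / c + b * c / a + c * a / b)
print(J, closed)   # both near 34.208
```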
|
How can complex polynomials be represented? I know that real polynomials (polynomials with real coefficients) are sometimes graphed on a 3D complex space ($x=a, y=b, z=f(a+bi)$), but how are polynomials like $(1+2i)x^2+(3+4i)x+7$ represented?
|
Plotting $P:\mathbb{C}\rightarrow\mathbb{C}$ in a Cartesian coordinate system gives a 4D image, because the input can be represented by the coordinates $(a,b)$, where $z=a+bi$, and so can the output.
So the points (coordinates) of the graph are of the kind
$$(a,b,\operatorname{Re}(P(a+bi)),\operatorname{Im}(P(a+bi)))$$
and they belong to the set $\mathbb{R}^4$ (the set of 4-tuples of reals).
I think it is impossible for any human to view a 4D plot on paper, but if you do a 3D plot with an animation, the timeline is your 4th dimension, and you can see how the 3D plot changes as the time (the 4th variable) changes.
I hope that is correct. At least this is how I represent 4D objects in my mind.
|
Integrating $x/(x-2)$ from $0$ to $5$ How would one go about integrating the following?
$$\int_0^5 \frac{x}{x-2} dx$$
It seems like you need to use long division, split it up into two integrals, and the use limits. I'm not quite sure about the limits part.
|
Yes, exactly, you do want to use "long division"...
Note, dividing the numerator by the denominator gives you:
$$\int_0^5 {x\over{x-2}} \mathrm{d}x = \int_0^5 \left(1 + \frac 2{x-2}\right) \mathrm{d}x$$
Now simply split the integral into the sum of two integrals:
$$\int_0^5 \left(1 + \frac 2{x-2}\right) \mathrm{d}x \quad= \quad\int_0^5 \,\mathrm{d}x \;\; + \;\; 2\int_0^5 \frac 1{x-2} \,\mathrm{d}x$$
The problem, of course, is what happens with the limits of integration in the second integral: if $u = x-2$ then the limits of integration become $\big|_{-2}^3$, and the interval of integration contains $u = 0$ (that is, $x = 2$), where the integrand $\frac1u$ has a vertical asymptote. The improper integral, evaluated as $u \to 0$, does not converge, and so the sum of the integrals does not converge.
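A numerical illustration of the divergence (a sketch; the cut-off values are arbitrary):

```python
# The antiderivative of x/(x-2) is x + 2*ln|x - 2| (away from x = 2);
# the piece from 0 up to 2 - eps goes to minus infinity as eps -> 0.
from math import log

def F(x):
    return x + 2 * log(abs(x - 2))

pieces = [F(2 - eps) - F(0) for eps in (1e-1, 1e-3, 1e-6)]
print(pieces)   # increasingly negative
```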
|
If $2x = y^{\frac{1}{m}} + y^{-\frac{1}{m}}\ (x\ge1)$ then prove that $(x^2-1)y''+xy' = m^{2}y$ How do I prove the following?
If $2x = y^{\frac{1}{m}} + y^{-\frac{1}{m}},\ (x\ge1)$, then prove that $(x^2-1)y''+xy' = m^{2}y$.
|
Let $y=e^{mu}$. Then $2x=e^u+e^{-u}$, that is, $x=\cosh u$. Note that
$$y'=mu'e^{mu}=mu'y.$$
But $x=\cosh u$, so $1=u'\sinh u$, and therefore
$$u'=\frac{1}{\sinh u}=\frac{1}{\sqrt{\cosh^2 u-1}}=\frac{1}{\sqrt{x^2-1}}.$$
It follows that
$$y'=mu'y=\frac{my}{\sqrt{x^2-1}},\quad\text{and therefore}\quad \sqrt{x^2-1}\,y'=my.$$
Differentiate again. We get
$$y''\sqrt{x^2-1}+\frac{xy'}{\sqrt{x^2-1}}=my'.$$
Multiply by $\sqrt{x^2-1}$. We get
$$(x^2-1)y'' +xy' =my'\sqrt{x^2-1}.$$
But the right-hand side is just $m^2y$.
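A numerical spot check of the identity (a sketch; the choices of $x$, $m$, and the step size are mine). For $x\ge1$ one explicit branch is $y(x)=(x+\sqrt{x^2-1})^m$, which satisfies $y^{1/m}+y^{-1/m}=2x$:

```python
# Central-difference check that (x^2 - 1) y'' + x y' = m^2 y for
# y(x) = (x + sqrt(x^2 - 1))^m at a sample point.
from math import sqrt

m, x, h = 3, 2.0, 1e-4

def y(t):
    return (t + sqrt(t * t - 1))**m

y0 = y(x)
y1 = (y(x + h) - y(x - h)) / (2 * h)         # approximate y'
y2 = (y(x + h) - 2 * y0 + y(x - h)) / h**2   # approximate y''
lhs = (x * x - 1) * y2 + x * y1
print(lhs, m * m * y0)   # both near 9 * y(2)
```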
|
Complex numbers and trig identities: $\cos(3\theta) + i \sin(3\theta)$ Using the equality rule $a + bi = c + di$ and trigonometric identities, how do I show that
$$\cos^3(\theta) - 3\sin^2(\theta)\ \cos(\theta) + 3i\ \sin(\theta)\ \cos^2(\theta) - i\ \sin^3(\theta)=
\cos(3\theta) + i\ \sin(3\theta)$$
Apparently it's easy but I can't see what trig identities to substitute
PLEASE HELP!
|
Note that $$(\cos(t)+i\sin(t))^n=(\cos(nt)+i\sin(nt)),~~n\in\mathbb Z$$ and $(a+b)^3=a^3+3a^2b+3ab^2+b^3,~~~(a-b)^3=a^3-3a^2b+3ab^2-b^3$.
|
trigonometric function integration I have this integral: $$ \int \dfrac{\sin(2x)}{1+\sin^2(x)}\,dx$$ and I need a hint for solving it. I tried using trigonometric identities and let $$u=\sin(x)$$ but I got $$\int ... =\int \dfrac{2u}{1+u^2}\, du$$ which I don't know how to solve. I also tried letting $$u=\tan\left(\frac{x}{2}\right)$$ but that leads to $$\int ...=\int \frac{8t(1-t^2)}{(1+t^2)(t^4+6t^2+1)} dt$$ which again I can't solve. I'll be glad for help.
|
Hint: $\displaystyle (\log u(x))'=\frac{u'(x)}{u(x)}$ (with the implied assumption that $u(x)>0$ for all $x$ in the domain of $u$).
You should note, however, that $\displaystyle (\log |u(x)|)'=\frac{u'(x)}{u(x)}$ for all $x\in \operatorname{dom}(u)$ such that $u(x)\neq 0$. Also $1+u^2>0$.
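Following the hint with $u(x)=1+\sin^2 x$ gives the antiderivative $\log(1+\sin^2 x)$; here is a quick derivative check at a few points (a sketch):

```python
# Verify numerically that d/dx log(1 + sin^2 x) equals the integrand
# sin(2x)/(1 + sin^2 x), using central differences.
from math import log, sin

def F(x):
    return log(1 + sin(x)**2)

def f(x):
    return sin(2 * x) / (1 + sin(x)**2)

h = 1e-6
for x in (0.3, 1.0, 2.5):
    print(x, (F(x + h) - F(x - h)) / (2 * h), f(x))
```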
|
General topological space $2$. 1. Let $A\subset X$ be a closed set of a topological space $X$. Let $B \subset A$ be a subset of $A$. prove that $B$ is closed as a subset of $A$, if and only if $B$ is closed as a subset of $X$.
What I have done is this: if $B$ is closed in $A$, then $B$ should be of the form $A\cap C$, where $C$ is closed in $X$ (I don't know whether this is true or not). Anyway, then $A$ and $C$ are both closed in $X$, hence $B=A\cap C$ is also closed in $X$. I'm not sure whether this argument works, and I don't know how to solve the problem properly.
2. If we omit the assumption that $A$ is closed, show that previous exercise is false.
What I have done: if I let $X=\mathbb R$ and $A=(0,1)\cup(2,3)$, then although the interval $B=(0,\frac{1}{2}]$ is a subset of $A$ and closed in $A$, it is not closed in $X$. But I don't know about the opposite direction.
Please help me.
|
The first proof is correct, minus the remarks about stupidity. It could use a slight rewording, but the idea is correct.
The second example is also correct, you are supposed to find a non-closed $A$ and $B\subseteq A$ which is closed in $A$ but not closed in $X$. You can do with a simpler example, though.
|
Find a closed form of the series $\sum_{n=0}^{\infty} n^2x^n$ The question I've been given is this:
Using both sides of this equation:
$$\frac{1}{1-x} = \sum_{n=0}^{\infty}x^n$$
Find an expression for $$\sum_{n=0}^{\infty} n^2x^n$$
Then use that to find an expression for
$$\sum_{n=0}^{\infty}\frac{n^2}{2^n}$$
This is as close as I've gotten:
\begin{align*}
\frac{1}{1-x} & = \sum_{n=0}^{\infty} x^n \\
\frac{-2}{(x-1)^3} & = \frac{d^2}{dx^2} \sum_{n=0}^{\infty} x^n \\
\frac{-2}{(x-1)^3} & = \sum_{n=2}^{\infty} n(n-1)x^{n-2} \\
\frac{-2x(x+1)}{(x-1)^3} & = \sum_{n=0}^{\infty} n(n-1)\frac{x^n}{x}(x+1) \\
\frac{-2x(x+1)}{(x-1)^3} & = \sum_{n=0}^{\infty} (n^2x + n^2 - nx - n)\frac{x^n}{x} \\
\frac{-2x(x+1)}{(x-1)^3} & = \sum_{n=0}^{\infty} n^2x^n + n^2\frac{x^n}{x} - nx^n - n\frac{x^n}{x} \\
\end{align*}
Any help is appreciated, thanks :)
|
You've got $\sum_{n\geq 0} n(n-1)x^n$, modulo multiplication by $x^2$. Differentiate just once your initial power series and you'll be able to find $\sum_{n\geq 0} nx^n$. Then take the sum of $\sum_{n\geq 0} n(n-1)x^n$ and $\sum_{n\geq 0} nx^n$. What are the coefficients?
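Carrying out the suggested steps gives the closed form $\sum_{n\ge0} n^2x^n = \frac{x(1+x)}{(1-x)^3}$; here is a numerical check, including the value $\sum n^2/2^n = 6$ (a sketch):

```python
# Compare the closed form x(1+x)/(1-x)^3 with partial sums of
# sum n^2 x^n, then evaluate at x = 1/2.
def closed(x):
    return x * (1 + x) / (1 - x)**3

def partial(x, terms=500):
    return sum(n * n * x**n for n in range(terms))

for x in (0.1, 0.5, 0.9):
    print(x, partial(x), closed(x))
print(closed(0.5))   # 6.0, i.e. sum n^2/2^n = 6
```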
|
Is this convergent or does it diverge to infinity? Solve or give some hints.
$\lim_{n\to\infty}\dfrac {C_n^{F_n}}{F_n^{C_n}}$,
where $C_n=\dfrac {(2n)!}{n!(n+1)!}$ is the n-th Catalan number and $F_n=2^{2^n}+1$ is the n-th Fermat number.
|
First note that $C_n/F_n\to0$: approximate $(n+1)!\simeq n\cdot n!$ and use Stirling's approximation,
$$\frac{C_n}{F_n}\simeq\frac{\ \dfrac{\sqrt{4\pi n}\,(2n/e)^{2n}}{n\left(\sqrt{2\pi n}\,(n/e)^{n}\right)^{2}}\ }{2^{2^n}}=\frac{4^n}{\sqrt{\pi}\,n^{3/2}\,2^{2^n}}=e^{\,n\ln 4-\frac32\ln n-\frac12\ln\pi-2^n\ln 2}\ .$$
As $n$ goes to infinity the exponent goes to negative infinity, hence this ratio goes to zero. (I took $F_n\simeq2^{2^n}$ and similar approximations.)
Since you know $C_n$ grows much slower than $F_n$, you can show that for sequences like these, with $a_n/b_n\to0$ fast enough as $n\to\infty$, the quotient $a_n^{b_n}/b_n^{a_n}$ goes to infinity as $n$ goes to infinity.
To show this, proceed as above: $$L'=\lim_{n\rightarrow\infty} e^{\,b_n\ln(a_n)-a_n\ln(b_n)}\ ;$$ since for any sequence (or function) $\ln n$ grows slower than $n$, you can conclude that the above limit is infinity.
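The exponent $F_n\ln C_n - C_n\ln F_n$ can be evaluated for small $n$ with exact integer $C_n$ and $F_n$; it is positive and grows fast, consistent with divergence to $+\infty$ (a sketch):

```python
# log(C_n^{F_n} / F_n^{C_n}) = F_n*ln(C_n) - C_n*ln(F_n) for n = 2..6.
from math import comb, log

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def fermat(n):
    return 2**(2**n) + 1

logs = [fermat(n) * log(catalan(n)) - catalan(n) * log(fermat(n))
        for n in range(2, 7)]
print(logs)   # positive and increasing
```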
|
Can a countable set of parabolas cover the unit square in the plane? Can a countable set of parabolas cover the unit square in the plane? My intuition tells me the answer is no, since it can't be covered by countably many horizontal lines (by the uncountability of $[0, 1]$). Help would be appreciated.
|
An approach: Let the parabolas be $P_1,P_2,\dots$. Then by thickening each parabola slightly, we can make the area covered by the thickened parabola $P_i^\ast$ less than $\frac{\epsilon}{2^i}$, where $\epsilon$ is any preassigned positive number, say $\epsilon=1/2$. Then the total area covered by the thickened parabolas is $\lt 1$.
Remark: There are various ways to say that a set is "small." Cardinality is one such way. Measure is another.
|
Prove that every continuous function from the sphere to the real numbers has points $x$, $-x$ such that $f(x)=f(-x)$ I have not taken a topology course yet. This is just a question that my undergrad research professor left us to think about. She hinted that I could use a theorem from calculus.
So I reviewed all the theorems in calculus, and I found that the Intermediate Value Theorem might be helpful(?)... since it has some generalizations in topology.
But I still don't know how to get started. If you could give me some hints or similar examples, that would be really helpful.
Thanks!
|
You may look up the Borsuk–Ulam theorem to get the details.
|
Is the set of non finitely describable real numbers closed under addition and squaring? Is the set of non finitely describable real numbers closed under addition and squaring? If so, can someone give a proof? Thanks.
|
Hmm... if by non-finitely-describable you mean "one can't construct a finite description (as e.g. a Turing machine)", then they aren't closed under addition: pick such a number $a$ and let $b = 2 - a$. Then $b$ is also non-describable (if it weren't, $a = 2 - b$ could be described), but $a + b = 2$ clearly is describable.
For squares I have no clue.
|
Minimum ceiling height to move a closet to upright position I brought a closet today. It has dimension $a\times b\times c$ where $c$ is the height and $a\leq b \leq c$. To assemble it, I have to lay it out on the ground, then move it to upright position. I realized if I just move it in the way in this picture, then it would require the height of the ceiling to be $\sqrt{c^2+a^2}$.
Is this height required if we consider all possible ways to move the closet? Or is there a smart way to use less height?
This is an image from IKEA's instructions that come with the closet.
|
I have two solutions to this problem.
Intuitive solution
Intuitively, it seems to me that the greatest distance across the box would be the diagonal, which can be calculated according to the Pythagorean theorem:
$$h = \sqrt {l^2 + w^2}$$
If you'd like a more rigorous solution, read on.
Calculus solution
Treat this as an optimization problem.
For a box of width $w$ and length $l$ (because depth doesn't really matter in this problem), the height when the box is upright, assuming $l > w$, is
$$h=l$$
If the box is rotated at an angle $\theta$ to the horizontal, then we have two components of the height: the height $h_1$ of the short side, $w$, and the height $h_2$ of the long side, $l$. Using polar coordinates, we have
$$h_1 = w \sin \theta\\
h_2 = l \cos \theta$$
Thus, the total height is
$$h = h_1 + h_2 = w \sin\theta + l \cos \theta$$
This intuitively makes sense: for small $\theta$ (close to upright), $w \sin \theta \approx 0$, and $l \cos \theta \approx l$, so $h \approx l$ (and similarly for large $\theta$).
The maximum height required means we need to maximize $h$. Take the derivative:
$$\frac {dh}{d\theta} = \frac d{d\theta} (w \sin \theta + l \cos \theta) = w \cos \theta - l \sin \theta$$
When the derivative is zero, we may be at either an extremum or a point of inflection. We need to find all of these on the interval $(0, \frac \pi 2)$ (because we don't care about anything outside of a standard 90° rotation).
So, we have
$$0 = \frac {dh}{d\theta} = w \cos \theta - l \sin \theta\\
w \cos \theta = l \sin \theta$$
And, because of our interval $(0, \frac \pi 2)$, we can guarantee that $\cos \theta \neq 0$, so
$$\frac w l = \frac {\sin \theta} {\cos \theta} = \tan \theta\\
\theta = \arctan \left (\frac w l \right )$$
Now that we know that the maximum is at $\theta = \arctan \left (\frac w l \right)$, we can plug that back into our equation to get
$$h = w \sin \theta + l \cos \theta\\
h = w \sin \left ( \arctan \left (\frac w l \right ) \right ) + l \cos \left ( \arctan \left (\frac w l \right ) \right ) = \sqrt{w^2 + l^2}$$
That's my 2¢.
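A quick grid-search check of the optimization (a sketch; $w$ and $l$ are arbitrary choices):

```python
# The maximum of h(theta) = w*sin(theta) + l*cos(theta) on [0, pi/2]
# should be sqrt(w^2 + l^2), attained at theta = arctan(w/l).
from math import sin, cos, atan, sqrt, pi

w, l = 0.6, 2.0

def h(theta):
    return w * sin(theta) + l * cos(theta)

best = max(h(i * (pi / 2) / 10000) for i in range(10001))
print(best, sqrt(w * w + l * l), h(atan(w / l)))
```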
|
Prove that such an inverse is unique Given $z$ is a non zero complex number, we define a new complex number $z^{-1}$ , called $z$ inverse to have the property that $z\cdot z^{-1} = 1$
$z^{-1}$ is also often written as $1/z$
|
Whenever you need to prove the uniqueness of an element that holds some property, you can begin your proof by assuming the existence of at least two such elements that hold this property, say $x$ and $y$, and showing that under this assumption, it turns out $x = y$, necessarily.
In this case, the property we'll check is "being an inverse of $z$": We'll use the definition of $z$-inverse (the inverse of $z$): it is "any" element $z'$ such that $$z' z = zz' = 1\tag{1}$$ (We won't denote any such element by $z^{-1}$ yet, because we have to first rule out the possibility that such an element is not unique.)
So suppose $z \neq 0 \in \mathbb C$ has two inverses, $x, y,\;\;x\neq y$. We won't call either of them $z^{-1}$ at this point because we are assuming they are distinct, and that they are both inverses satisfying $(1)$.
Then we use the definition of an inverse element, $(1)$, which must hold for both $x, y$. Then
*
*Since $y$ is an inverse of $z$, we must have, by $(1)$ that $\color{blue}{\bf yz} = zy = \color{blue}{\bf 1}$, and
*Since $x$ is an inverse of $z$, we must have that $xz = \color{blue}{\bf zx = 1}$, again, so it satisfies the definition of an inverse element given in $(1)$.
This means that $${\bf{x}} = \color{blue}{\bf 1} \cdot x = \color{blue}{\bf(yz)}x = y(zx) =\;y\color{blue}{\bf (zx)} = y \cdot \color{blue}{\bf 1} = {\bf {y}}$$
Hence, $$\text{Therefore,}\quad x \;=\; y \;= \;z^{-1},$$
and thus, there really is only one multiplicative inverse of $z;\;$ that is, the inverse of a given complex $z$ must be unique, and we denote it by $\,z^{-1}.$
|
$x$ is rational, $\frac{x}{2}$ is rational, and $3x-1$ is rational are equivalent How do we prove that the three statements below about the real number $x$ are equivalent?
(i) $\displaystyle x$ is rational
(ii) $\displaystyle \frac{x}{2}$ is rational
(iii) $\displaystyle 3x-1$ is rational
|
It is enough to prove that $$(i) \implies (ii) \implies (iii) \implies (i)$$
$1$. $(i) \implies (ii)$. Let $x = \dfrac{p}q$, where $p,q \in\mathbb{Z}$. We then have $\dfrac{x}2 = \dfrac{p}{2q}$ and we have $p,2q \in \mathbb{Z}$. Hence, $$(i) \implies (ii)$$
$2$. $(ii) \implies (iii)$. Let $\dfrac{x}2 = \dfrac{p}q$, where $p,q \in \mathbb{Z}$. This gives $x = \dfrac{2p}q$, which in-turn gives $$3x-1 = \dfrac{6p}q - 1 = \dfrac{6p-q}q$$ Since $p,q \in \mathbb{Z}$, we have $q,6p-q \in \mathbb{Z}$. Hence, $$(ii) \implies (iii)$$
$3$. $(iii) \implies (i)$. Let $3x-1 = \dfrac{p}q$, where $p,q \in \mathbb{Z}$. This gives $$3x = \dfrac{p}q + 1 \implies x = \dfrac{p+q}{3q}$$
Since $p,q \in \mathbb{Z}$, we have $3q,p+q \in \mathbb{Z}$. Hence, $$(iii) \implies (i)$$
|
If $U_0 = 0$ and $U_n=\sqrt{U_{n-1}+(1/2)^{n-1}}$, then $U_n < U_{n-1}+(1/2)^n$ for $n > 2$ Letting $$U_n=\sqrt{U_{n-1}+(1/2)^{n-1}}$$ where $U_0=0$, prove that:
$$U_n < U_{n-1}+(1/2)^n$$ where $n>2$
|
Here is a useful factoid:
For every $x\geqslant1$ and $y\gt0$, $\sqrt{x+2y}\lt x+y$.
Now, apply this to your setting. First note that $U_n\geqslant1$ implies $U_{n+1}\geqslant1$. Since $U_1=1$, this proves that $U_n\geqslant1$ for every $n\geqslant1$. Then, choosing $n\geqslant2$, $x=U_{n-1}$ and $y=1/2^n$, the factoid yields $U_n\lt U_{n-1}+1/2^n$, as desired.
Finally the result holds for every $n\geqslant2$ (and not only $n\gt2$).
Can you prove the factoid above?
|
Mathematical way of determining whether a number is an integer I'm developing a computer program, and I've run into a mathematical problem. This isn't specific to any programming language, so it isn't really appropriate to ask on stackoverflow. Is there any way to determine whether a number is an integer using a mathematical function, from which a boolean response is given.
For example:
let x equal 159
let y equal 12.5
f(x) returns 1 and f(y) returns 0
Please get back to me if you can. If it isn't possible, is there a similar way to determine whether a number is odd or even?
EDIT:
I found a solution to the problem thanks to Karolis Juodelė. I'll use a floor function to round the number down, and then subtract the output from the original number. If the difference is zero, then the function returns 1; otherwise it returns 0.
I just need to make sure that floor is a purely mathematical function. Does anyone know?
Thanks
|
Since no one has answered with this debatable solution, I will post it. $$f(x) := \begin{cases}1 \qquad x \in \mathbb{Z}\\
0 \qquad x \in \mathbb{R} \setminus \mathbb{Z}\end{cases}$$ is a perfectly fine function. Even shorter would be $\chi_{\mathbb{Z}}$ defined on $\mathbb{R}$.
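The floor-based construction described in the question's edit can be sketched like this (the even/odd indicator addresses the follow-up question):

```python
# f(x) = 1 when x equals floor(x), else 0; plus a parity indicator.
from math import floor

def f(x):
    return 1 if x - floor(x) == 0 else 0

def is_even(n):
    return 1 if n % 2 == 0 else 0

print(f(159), f(12.5))            # 1 0
print(is_even(159), is_even(12))  # 0 1
```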
|
Stock behaviour probability I found this question in a financial mathematics course exam, could anyone please help with a solution and some explanation? Thanks in advance :)
A stock has beta of $2.0$ and stock specific daily volatility of $0.02$.
Suppose that yesterday’s closing price was $100$ and today the market
goes up by $1$%. What’s the probability of today’s closing price being
at least $103$?
|
Assuming normality, volatility being specified as a standard deviation, and assuming a risk-free rate $r_f$ of zero, the following reasoning could be applied:
1) From the CAPM we see that the expected return of the stock is $E(r)=r_f+\beta\,(r_m-r_f)$; here $E(r)=0+2.0\cdot 0.01=0.02$.
2) From a casual definition of beta we know that it relates stock-specific volatility to total volatility ($\sigma_{\text{total}}=\beta\cdot\sigma_{\text{specific}}$); here $\sigma_{\text{total}}=2\cdot 0.02=0.04$.
3) Since we have data on returns only, we transfer the value of interest ($103$) into return space: $r^*=(103-100)/100=0.03$.
4) Now we are looking for the probability that a return of at least $r^*$ occurs, $P(X\ge 0.03)$, where $X\sim N(0.02,\,0.04)$ (mean, standard deviation).
5) In R you could write 1-pnorm(.03, mean=.02, sd=.04), where 1-pnorm(...) is necessary because pnorm() returns $P(X\le x)$ of the cumulative distribution function (CDF) and you are interested in $P(X\ge 0.03)$ - more pnorm() hints.
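The same computation can be done with Python's error function instead of R's pnorm (a sketch following the steps above):

```python
# P(X >= 0.03) for X ~ N(mean 0.02, sd 0.04).
from math import erfc, sqrt

mean, sd, r = 0.02, 0.04, 0.03
z = (r - mean) / sd              # 0.25
p = 0.5 * erfc(z / sqrt(2))      # = 1 - CDF at z
print(p)                         # about 0.40
```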
|
Prove $ax^2+bx+c=0$ has no rational roots if $a,b,c$ are odd If $a,b,c$ are odd, how can we prove that $ax^2+bx+c=0$ has no rational roots?
I was unable to proceed beyond this: Roots are $x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$
and rational numbers are of the form $\frac pq$.
|
Hint $\ $ By the Rational Root Test, any rational root is integral, hence it follows by
Theorem Parity Root Test $\ $ A polynomial $\rm\:f(x)\:$ with integer coefficients
has no integer roots if its constant coefficient and coefficient sum are both odd.
Proof $\ $ The test verifies that $\rm\ f(0) \equiv 1\equiv f(1)\ \ (mod\ 2)\:,\ $ i.e.
that $\rm\:f(x)\:$ has no roots modulo $2$, hence it has no integer roots. $\ $ QED
This test extends to many other rings which have a "sense of parity", i.e. an image $\cong \Bbb Z/2,\:$ for example, various algebraic number rings such as the Gaussian integers.
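A brute-force illustration of the parity root test for small odd coefficients (a sketch; the search range is my own choice, and suffices because any integer root must divide $c$, so $|root|\le 9$ here):

```python
# For odd a, b, c, the quadratic a*x^2 + b*x + c has no integer roots,
# since f(0) = c and f(1) = a + b + c are both odd.
def has_integer_root(a, b, c, bound=10):
    return any(a * x * x + b * x + c == 0 for x in range(-bound, bound + 1))

odd = range(-9, 10, 2)
found = any(has_integer_root(a, b, c) for a in odd for b in odd for c in odd)
print(found)   # False
```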
|
Uniform convergence of $f_n(x)=x^n(1-x)$
I need to show that $f_n(x)=x^n(1-x)\rightarrow0 $ is uniformly in $[0,1]$ (i.e. $\forall\epsilon>0\,\exists N\in\mathbb{N}\,\forall n>N: \|f_n-f\|<\epsilon$)
I tried to find the maximum of $f_n$, because:
$$\|f_n-f\|=\sup_{[0,1]}|f_n(x)-f(x)|=\max\{f_n(x)\}.$$
So if we investigate the maximum value of $f_n(x)$, we get:
$$f_n'(x)=0\Rightarrow x_\max=\dfrac{n}{n+1}.$$
Therefore $\|f_n\|=f_n\left(\frac{n}{n+1}\right)=\frac{n^n}{(n+1)^{n+1}}$. And here I get stuck. How can I show $\|f_n\|<\epsilon$?
|
The sequence $(f_n)$ is pointwise decreasing, $[0, 1]$ is compact, and the limit (the zero function) is continuous, so the result follows immediately from Dini's theorem. (Alternatively, to finish your direct approach: $\frac{n^n}{(n+1)^{n+1}}=\frac1{n+1}\left(\frac{n}{n+1}\right)^n<\frac1{n+1}\to0$.)
|
Biased coin hypothesis
Let's assume, we threw a coin $110$ times and in $85$ tosses it was head. What is the probability that the coin is biased towards head?
We can use a chi-squared test to check whether the coin is biased, but this test only tells us that the coin is biased towards heads or tails, and there seems to be no one-tailed chi-squared test.
The same problem seems to appear when using z-test approach.
What is the correct way to solve this problem?
|
Let's assume that you have a fair coin, $p=.5$. You can approximate a binomial distribution with a normal distribution. In this case we'd use a normal distribution with mean $110p=55$ and standard deviation $\sqrt{110p(1-p)}\approx5.244$. So getting 85 heads is a $(85-55)/5.244\approx5.72$ standard deviation event. And looking this value up on a table (if your table goes out that far, lol) you can see that the probability of getting 85 heads or more is about $5.3\times10^{-9}$. An extremely unlikely event. Note that this is the probability of an outcome at least this extreme under the assumption that the coin is fair, not literally the probability that the coin is fair.
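Both the normal approximation and the exact binomial tail can be checked directly (a sketch):

```python
# P(at least 85 heads in 110 fair tosses): normal tail via erfc, and
# the exact binomial tail.
from math import comb, erfc, sqrt

n, k = 110, 85
mu, sd = n * 0.5, sqrt(n * 0.25)
z = (k - mu) / sd
p_normal = 0.5 * erfc(z / sqrt(2))
p_exact = sum(comb(n, j) for j in range(k, n + 1)) / 2**n
print(z, p_normal, p_exact)   # z near 5.72; both tails of order 1e-8
```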
|
Integration theory for Banach-valued functions I am actually studying integration theory for vector-valued functions in a general Banach space, defining the integral with Riemann's sums.
Everything seems to work exactly as in the finite dimensional case:
Let X be a Banach space, $f,g \colon I = [a,b] \to X$, $\alpha$, $\beta \in \mathbb{R}$ then:
$\int_I \alpha f + \beta g = \alpha \int_I f + \beta \int_I g$, $\|\int_I f\| \le \int_I \|f\|$, etc...
The fundamental theorem of calculus holds.
If $f_n$ are continuous and uniformly convergent to $f$ it is also true that $\lim_n \int_I f_n = \int_I f$.
My question is: is there any property that hold only in the finite dimensional case? Is it possible to generalize the construction of the integral as Lebesgue did? If so, does it make sense?
Thank you for your help and suggestions
|
You might want to have a look to the Bochner-Lebesgue spaces. They are an appropriate generalization to the Banach-space-valued case. Many properties translate directly from the scalar case (Lebesgue theorem of dominated convergence, Lebesgue's differentiation theorem).
Introductions can be found in the rather old book by Yosida (Functional Analysis) or in Diestel & Uhl (Vector Measures). The latter also considers different (weaker) definitions of integrals.
|
Solving partial differential equation using laplace transform with time and space variation I have a equation like this:
$\dfrac{\partial y}{\partial t} = -A\dfrac{\partial y}{\partial x}+ B \dfrac{\partial^2y}{\partial x^2}$
with the following I.C
$y(x,0)=0$
and boundary conditions $y(0,t)=1$ and $y(\infty , t)=0$
I tried to solve the problem as follows:
Taking Laplace transform on both sides,
$\mathcal{L}(\dfrac{\partial y}{\partial t}) = - A \mathcal{L}(\dfrac{\partial y}{\partial x})+B \mathcal{L}(\dfrac{\partial^2 y}{\partial x^2})$
Now, on the L.H.S we have,
$sY-y(x,0)=sY$
$\mathcal{L}(\dfrac{\partial^2 y}{\partial x^2}) =\displaystyle \int e^{-st} \dfrac{\partial^2 y}{\partial x^2} dt$
Exchanging the order of integration and differentiation
$\displaystyle\mathcal{L}(\frac{\partial^2 y}{\partial x^2}) =\frac{\partial^2}{\partial x^2} \int e^{-st} y(x,t) dt$
$\displaystyle\mathcal{L}(\frac{\partial^2 y}{\partial x^2}) =\frac{\partial^2}{\partial x^2}\mathcal{L}(y) = \frac{\partial^2Y}{\partial x^2} $
$\displaystyle\mathcal{L}\frac{\partial y}{\partial x} = \frac{\partial Y}{\partial x}$
Now, combing L.H.S and R.H.S, we have,
$\displaystyle sY = - A \frac{\partial Y}{\partial x} + B \frac{\partial^2Y}{\partial x^2}$
The above equation falls into one of three cases:
If $b^2 - 4ac > 0 $ let $r_1=\frac{-b-\sqrt{b^2-4ac}}{2a}$ and $r_2 = \frac{-b+\sqrt{b^2-4ac}}{2a}$
The general solution is $\displaystyle y(x) = C_1e^{r_1x}+C_2 e^{r_2x}$
if $b^2 - 4ac = 0 $, then the general solution is given by
$ y(x)=C_1e^{-\frac{bx}{2a}}+C_2xe^-{\frac{bx}{2a}}$
if $b^2 - 4ac <0$ , then the general solution is given by
$y(x) = C_1e^{\frac{-bx}{2a}}\cos(wx) + C_2 e^{\frac{-bx}{2a}}\sin(wx)$
Since $A$ and $B$ are always positive in my problem, the first case seems to be the appropriate one.
Now, from this point I am stuck and couldn't properly use the boundary conditions.
If anyone could offer any help that would be great.
"Solution added"
The solution of the problem is
$y(x,t)= \dfrac {y_0}{2} \left[\exp\left(\dfrac {Ax}{B}\right)\operatorname{erfc}\left(\dfrac{x+At}{2\sqrt{Bt}}\right) + \operatorname{erfc}\left(\dfrac{x-At}{2\sqrt{Bt}}\right)\right]$, where $y_0=1$ here.
|
Here is a simpler procedure that does not use the Laplace transform.
Note that this PDE is separable.
Let $y(x,t)=X(x)T(t)$ ,
Then $X(x)T'(t)=-AX'(x)T(t)+BX''(x)T(t)$
$X(x)T'(t)=(BX''(x)-AX'(x))T(t)$
$\dfrac{T'(t)}{T(t)}=\dfrac{BX''(x)-AX'(x)}{X(x)}=\dfrac{4B^2s^2-A^2}{4B}$
$\begin{cases}\dfrac{T'(t)}{T(t)}=\dfrac{4B^2s^2-A^2}{4B}\\BX''(x)-AX'(x)-\dfrac{4B^2s^2-A^2}{4B}X(x)=0\end{cases}$
$\begin{cases}T(t)=c_3(s)e^\frac{t(4B^2s^2-A^2)}{4B}\\X(x)=\begin{cases}c_1(s)e^\frac{Ax}{2B}\sinh xs+c_2(s)e^\frac{Ax}{2B}\cosh xs&\text{when}~s\neq0\\c_1xe^\frac{Ax}{2B}+c_2e^\frac{Ax}{2B}&\text{when}~s=0\end{cases}\end{cases}$
$\therefore y(x,t)=\int_{-\infty}^\infty C_1(s)e^\frac{2Ax+t(4B^2s^2-A^2)}{4B}\sinh xs~ds+\int_{-\infty}^\infty C_2(s)e^\frac{2Ax+t(4B^2s^2-A^2)}{4B}\cosh xs~ds$
$y(0,t)=1$ :
$\int_{-\infty}^\infty C_2(s)e^\frac{t(4B^2s^2-A^2)}{4B}~ds=1$
$C_2(s)=\dfrac{1}{2}\left(\delta\left(s-\dfrac{A}{2B}\right)+\delta\left(s+\dfrac{A}{2B}\right)\right)$
$\therefore y(x,t)=\int_{-\infty}^\infty C_1(s)e^\frac{2Ax+t(4B^2s^2-A^2)}{4B}\sinh xs~ds+\int_{-\infty}^\infty\dfrac{1}{2}\left(\delta\left(s-\dfrac{A}{2B}\right)+\delta\left(s+\dfrac{A}{2B}\right)\right)e^\frac{2Ax+t(4B^2s^2-A^2)}{4B}\cosh xs~ds=\int_{-\infty}^\infty C_1(s)e^\frac{2Ax+t(4B^2s^2-A^2)}{4B}\sinh xs~ds+e^\frac{Ax}{2B}\cosh\dfrac{Ax}{2B}$
|
Summation/Sigma notation There are lots of variants in the notation for summation. For example, $$\sum_{k=1}^{n} f(k), \qquad \sum_{p \text{ prime}} \frac{1}{p}, \qquad \sum_{\sigma \in S_n} (\operatorname{sgn} \sigma) a_{1 , \sigma(1)} \ldots a_{n , \sigma(n)}, \qquad \sum_{d \mid n} \mu(d).$$
What exactly is a summation? How do we define it? Is there a notation that generalizes all of the above, so that each of the above summations is a variant of the general notation? Are there any books that discuss this matter?
It seems that summation is a pretty self-evident concept, and I have yet to find a discussion of it in a textbook.
|
Except for the case of the upper and lower limit, all the other summations are really just sums of the form $$\sum_{P(i)} f(i)$$
Where $P$ is a unary predicate in the "language of mathematics", and $f(i)$ is some function which returns a value that we can sum. In the case of the sum of prime reciprocals, $P(i)$ states that $i$ is a prime number and $f(i)=\frac1i$. In the sum over permutations, $P(i)$ was $i\in S_n$, and $f(i)$ was that summand term. And so on.
|
Evaluate the integral $\int_0^{\infty} \left(\frac{\log x \arctan x}{x}\right)^2 \ dx$ Some rumours say that this integral can be evaluated in a straightforward way. But rumours are sometimes just rumours. Could you confirm or refute this?
$$
\int_0^{\infty}\left[\frac{\log\left(x\right)\arctan\left(x\right)}{x}\right]^{2}
\,{\rm d}x
$$
EDIT
W|A says the integral evaluates to $0$, but this is not true.
Then how do I actually compute it?
|
Related problems: (I), (II), (III). Denoting our integral by $J$ and recalling the Mellin transform
$$ F(s)=\int_{0}^{\infty}x^{s-1} f(x)\,dx \implies F''(s)=\int_{0}^{\infty}x^{s-1} \ln(x)^2\,f(x)\,dx.$$
Taking $f(x)=\arctan(x)^2$, the Mellin transform of $f(x)$ is
$$ \frac{1}{2}\,{\frac {\pi \, \left( \gamma+2\,\ln\left( 2 \right) +\psi
\left( \frac{1}{2}+\frac{s}{2} \right)\right) }{s\sin \left( \frac{\pi \,s}{2}
\right)}}-\frac{1}{2}\,{\frac {{\pi }^{2}}{s\cos\left( \frac{\pi \,s}{2}
\right) }},$$
where $\psi(x)=\frac{d}{dx}\ln \Gamma(x)$ is the digamma function. Thus $J$ can be calculated directly as
$$ J= \lim_{s\to -1} F''(s) = \frac{1}{12}\,\pi \, \left( 3\,{\pi }^{2}\ln \left( 2 \right) -{\pi }^{2}+24
\,\ln \left( 2 \right) -3\,\zeta \left( 3 \right) \right)\approx 6.200200824 .$$
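A numerical confirmation of the closed form (a sketch; the quadrature scheme and grid size are my own choices, and $\zeta(3)$ is hardcoded since Python's math module has no zeta):

```python
# Midpoint rule on (0,1] plus the substitution x -> 1/t for [1, inf),
# compared with the closed form above.
from math import log, atan, pi

def g(x):
    return (log(x) * atan(x) / x)**2

N = 100_000
h = 1.0 / N
J = 0.0
for i in range(N):
    t = (i + 0.5) * h
    J += h * (g(t) + g(1.0 / t) / (t * t))   # (0,1] piece + [1,inf) piece

zeta3 = 1.2020569031595943
closed = pi / 12 * (3 * pi**2 * log(2) - pi**2 + 24 * log(2) - 3 * zeta3)
print(J, closed)   # both about 6.2002
```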
|
Definition of principal ideal This is a pretty basic question about principal ideals - on page 197 of Katznelson's A (Terse) Introduction to Linear Algebra, it says:
Assume that $\mathcal{R}$ has an identity element. For $g\in \mathcal{R}$, the set $I_g = \{ag:a\in\mathcal{R}\}$ is a left ideal in $\mathcal{R}$, and is clearly the smallest (left) ideal that contains $g$.
Ideals of the form $I_g$ are called principal left ideals...
(Note $\mathcal{R}$ is a ring).
Why is the assumption that $\mathcal{R}$ has an identity element important?
|
Because if $\mathcal{R}$ has an identity, then $I_{g}$ is the smallest left ideal containing $g$. Without an identity, it might be that $g \notin I_{g}$.
For instance if $\mathcal{R} = 2 \mathbf{Z}$, then $I_{2} =\{a \cdot 2:a\in 2 \mathbf{Z} \} = 4 \mathbf{Z}$ does not contain $2$.
(Thanks Cocopuffs for pointing out an earlier mistake.)
|
Adding a surjection $\omega \to \omega$ by Levy forcing I'm trying to understand the Levy collapse, working through Kanamori's 'The Higher Infinite'. He introduces the Levy forcing $\text{Col}(\lambda, S)$ for $S \subseteq \text{On}$ to be the set of all partial functions $p: \lambda \times S \to S$ such that $|p| < \lambda$ and $(\forall \langle \alpha, \xi \rangle \in \text{dom}(p))(p(\alpha,\xi) = 0 \vee p(\alpha, \xi) \in \alpha)$.
In the generic extension, we introduce surjections $\lambda \to \alpha$ for all $\alpha \in S$.
However, the example Kanamori gives is $\text{Col}(\omega,\{\omega\})$, which he says is equivalent to adding a Cohen real. I can see how in the generic extension, we add a new surjection $\omega \to \omega$, but I don't see how this gives a new subset of $\omega$.
Thanks for your help.
|
A new surjection is a subset of $\omega\times\omega$. We have a very nice way to encode $\omega\times\omega$ into $\omega$; so nice that it is in the ground model. If the function is generic, so must be its encoded result: otherwise, by applying a ground-model function to a set in the ground model, we would end up with a function... not in the ground model!
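One standard choice for that encoding is the Cantor pairing function; a quick illustrative sketch (not from Kanamori):

```python
def pair(m, n):
    # Cantor pairing: a bijection between omega x omega and omega.
    return (m + n) * (m + n + 1) // 2 + n

def unpair(k):
    # Recover (m, n) from k: first find w = m + n, then read off n.
    w = int(((8 * k + 1) ** 0.5 - 1) / 2)
    while (w + 1) * (w + 2) // 2 <= k:  # guard against float rounding
        w += 1
    n = k - w * (w + 1) // 2
    return (w - n, n)

# Spot-check that the two maps are mutually inverse on an initial segment.
assert all(unpair(pair(m, n)) == (m, n) for m in range(100) for n in range(100))
assert [pair(*unpair(k)) for k in range(1000)] == list(range(1000))
```

Since the bijection is absolute, the encoded image of a generic surjection is a generic subset of $\omega$.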
The fact that this forcing is equivalent to Cohen forcing is easy to see by a cardinality argument: it is a nontrivial countable forcing, so it has the same Boolean completion as Cohen forcing, and therefore the two are equivalent.
|
Number of spanning trees in random graph Let $G$ be a graph in $G(n, p)$ (Erdős–Rényi model). What is the (expected) number of different spanning trees of $G$?
|
There are $n^{n-2}$ trees on $n$ labelled vertices. The probability that all $n-1$ edges in a given tree are in the graph is $p^{n-1}$. So the expected number of spanning
trees is $p^{n-1} n^{n-2}$.
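For small $n$ the formula can be checked exactly by summing over all $2^{\binom{n}{2}}$ labelled graphs and counting each one's spanning trees with the Matrix–Tree theorem; an illustrative sketch for $n=4$, $p=\tfrac12$:

```python
from fractions import Fraction
from itertools import combinations

n = 4
edges = list(combinations(range(n), 2))  # the 6 possible edges of K4
p = Fraction(1, 2)

def spanning_trees(edge_set):
    # Matrix-Tree theorem: the number of spanning trees is any cofactor of
    # the Laplacian; take the determinant after deleting the last row/column.
    L = [[0] * n for _ in range(n)]
    for u, v in edge_set:
        L[u][u] += 1
        L[v][v] += 1
        L[u][v] -= 1
        L[v][u] -= 1
    M = [[Fraction(L[i][j]) for j in range(n - 1)] for i in range(n - 1)]
    det = Fraction(1)
    for col in range(n - 1):  # exact Gaussian elimination
        piv = next((r for r in range(col, n - 1) if M[r][col] != 0), None)
        if piv is None:
            return 0
        if piv != col:
            M[col], M[piv] = M[piv], M[col]
            det = -det
        det *= M[col][col]
        for r in range(col + 1, n - 1):
            f = M[r][col] / M[col][col]
            for c in range(col, n - 1):
                M[r][c] -= f * M[col][c]
    return int(det)

# Expected spanning-tree count: average over all edge subsets, weighted by
# the probability p^k (1-p)^(m-k) of that subset appearing in G(n, p).
expected = Fraction(0)
for k in range(len(edges) + 1):
    for subset in combinations(edges, k):
        expected += p**k * (1 - p)**(len(edges) - k) * spanning_trees(subset)

assert expected == p**(n - 1) * n**(n - 2)  # (1/2)^3 * 4^2 = 2
```

The exact expectation comes out to $2$, matching $p^{n-1}n^{n-2}$.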
|
Factoring 3 Dimensional Polynomials? How do you factor a system of polynomials into its roots, the way one can factor a single-variable polynomial into its roots?
Example
$$x^2 + y^2 = 14$$
$$xy = 1$$
We note that we can find the 4 solutions via the quadratic formula and substitution, and that the solutions can be separated into $2$ groups of $2$ such that each group lies on one of the lines $x + y + 4 = 0$ and $x + y - 4 = 0$. Note that:
$$(x + y + 4)(x + y - 4) = 0$$
$$xy = 1$$
Is also an equivalent system.
How do I factor the bottom half?
Ideally, if the $g_i$ are linear expressions, then my system should be
$$g_1 * g_2 = 0$$
$$g_3 * g_4 = 0$$
such that the solutions to any of the subsystems of this system are solutions to the system itself (note there are $4$ viable subsystems).
Help?
|
For the "example" you list, here are some suggestions: Given
$$x^2 + y^2 = 14\tag{1}$$
$$xy = 1 \iff y = \frac 1x\tag{2}$$
Substitute $y = \dfrac{1}{x}$ into equation $(1)$. Then solve for the roots of the resulting equation in one variable.
$$x^2 + y^2 = 14\tag{a}$$
$$\iff x^2 + \left(\frac 1x\right)^2 = 14\tag{b}$$
$$\implies x^4 - 14x^2 + 1 = 0\tag{c}$$
Try letting $z = x^2$ and solve the resulting quadratic:
$$z^2 - 14z + 1 = 0\tag{d}$$ Then for each root $z_1, z_2$, there will be two roots $x = \pm\sqrt{z_i}$ solving $(c)$: $2 \times 2 = 4$ roots in all.
Notice that by incorporating the equation (2) into equation step $(b)$, we effectively end up finding all solutions that satisfy both equations $(1), (2)$: solutions for which the two given equations coincide.
See Lubin's post for some additional "generalizations": ways to approach problems such as this.
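A quick numerical check of the worked example: the four roots of $(c)$, paired with $y = 1/x$, do all land on one of the two lines $x+y=\pm 4$ mentioned in the question. This is just an illustrative sketch:

```python
import math

# Roots of z^2 - 14z + 1 = 0 are z = 7 +/- 4*sqrt(3); each gives x = +/- sqrt(z).
roots_z = (7 + 4 * math.sqrt(3), 7 - 4 * math.sqrt(3))

solutions = []
for z in roots_z:
    for x in (math.sqrt(z), -math.sqrt(z)):
        y = 1 / x                              # enforce xy = 1
        assert abs(x * x + y * y - 14) < 1e-9  # x^2 + y^2 = 14 holds
        solutions.append((x, y))

# Every solution lies on one of the two lines x + y = 4 or x + y = -4.
assert all(min(abs(x + y - 4), abs(x + y + 4)) < 1e-9 for x, y in solutions)
```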
|
Generating random numbers with skewed distribution I want to generate random numbers with skewed distribution. But I have only following information about distribution from the paper :
skewed distribution where the value is 1 with probability 0.9 and 46 with probability 0.1. The distribution has mean $5.5$.
I don't know how to generate random numbers with this information and I don't know what is the distribution function.
I'm using jdistlib for random number generation. If anyone has experience using this library can help me with this library for skewed distribution random number generation?
|
What copperhat is hinting at is the following algorithm:
import random

def sample():
    u = random.random()          # u uniformly distributed in [0, 1)
    return 1 if u < 0.9 else 46  # 1 with probability 0.9, 46 with probability 0.1
(sorry, would be a mess as a comment).
In general, if you have a continuous distribution with cumulative distribution function $c(x)$, to generate the respective numbers get $u$ as above; $c^{-1}(u)$ will have the required distribution. One can do the same with discrete distributions, essentially searching for where $u$ lies in the cumulative distribution.
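Here is a minimal sketch of that inverse-CDF trick for a continuous case, the exponential distribution, where $c(x)=1-e^{-\lambda x}$ inverts in closed form (the function name is made up for illustration):

```python
import math
import random

def sample_exponential(lam, rng=random):
    # Inverse-CDF method: if U is uniform on [0, 1), then c^{-1}(U)
    # follows the target distribution.  Here c^{-1}(u) = -ln(1 - u) / lam.
    u = rng.random()
    return -math.log(1.0 - u) / lam

random.seed(42)
samples = [sample_exponential(2.0) for _ in range(200000)]
mean = sum(samples) / len(samples)
assert abs(mean - 0.5) < 0.01  # Exp(2) has mean 1/2
```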
An extensive treatment of generating random numbers (including non-uniform ones) is in Knuth's "Seminumerical Algorithms" (volume 2 of "The Art of Computer Programming"). Warning, heavy math involved.
|
Why is a full circle 360° degrees? What's the reason we agreed to setting the number of degrees of a full circle to 360? Does that make any more sense than 100, 1000 or any other number? Is there any logic involved in that particular number?
|
As it has been replied here - on Wonder Quest (webarchive link):
The Sumerians watched the Sun, Moon, and the five visible planets
(Mercury, Venus, Mars, Jupiter, and Saturn), primarily for omens. They
did not try to understand the motions physically. They did, however,
notice the circular track of the Sun's annual path across the sky and
knew that it took about 360 days to complete one year's circuit.
Consequently, they divided the circular path into 360 degrees to track
each day's passage of the Sun's whole journey. This probably happened
about 2400 BC.
That's how we got a 360 degree circle. Around 1500 BC, Egyptians
divided the day into 24 hours, though the hours varied with the
seasons originally. Greek astronomers made the hours equal. About 300
to 100 BC, the Babylonians subdivided the hour into base-60 fractions:
60 minutes in an hour and 60 seconds in a minute. The base 60 of their
number system lives on in our time and angle divisions.
A 100-degree circle makes sense for base-10 people like ourselves.
But the base-60 Babylonians came up with 360 degrees and we cling to
their ways, 4,400 years later.
Then, there's also this discussion on Math Forum:
In 1936, a tablet was excavated some 200 miles from Babylon. Here one
should make the interjection that the Sumerians were first to make one
of man's greatest inventions, namely, writing; through written
communication, knowledge could be passed from one person to others,
and from one generation to the next and future ones. They impressed
their cuneiform (wedge-shaped) script on soft clay tablets with a
stylus, and the tablets were then hardened in the sun. The mentioned
tablet, whose translation was partially published only in 1950, is
devoted to various geometrical figures, and states that the ratio of
the perimeter of a regular hexagon to the circumference of the
circumscribed circle equals a number which in modern notation is given
by $ \frac{57}{60} + \frac{36}{60^2} $
(the Babylonians used the sexagesimal system, i.e., their
base was 60 rather than 10).
The Babylonians knew, of course, that the perimeter of a hexagon is
exactly equal to six times the radius of the circumscribed circle, in
fact that was evidently the reason why they chose to divide the circle
into 360 degrees (and we are still burdened with that figure to this
day). The tablet, therefore, gives ... $\pi = \frac{25}{8} = 3.125$.
|
Describe a PDA that accepts all strings over $\{a, b\}$ that have as many $a$’s as $b$’s. I'm having my exam in a few days and I would like help with this:
Describe a PDA that accepts all strings over $\{ a, b \}$ that have as many $a$’s as $b$’s.
|
Hint: Use the stack as an indication of how many more of one symbol have been so far read from the string than the other. (Also ensure that the stack never contains both $\mathtt{a}$s and $\mathtt{b}$s at the same time.)
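The hint can be simulated directly; a sketch in Python, where the stack only ever holds copies of whichever symbol is currently in surplus:

```python
def accepts(s):
    # Push the current symbol if the stack is empty or holds the same
    # symbol; otherwise pop (one 'a' cancels one 'b').
    stack = []
    for ch in s:
        if not stack or stack[-1] == ch:
            stack.append(ch)
        else:
            stack.pop()
    return not stack  # accept iff equally many a's and b's were read

assert accepts("") and accepts("ab") and accepts("abba") and accepts("aabb")
assert not accepts("aab") and not accepts("b")
```

The stack height at any point equals the absolute difference between the counts of $\mathtt{a}$s and $\mathtt{b}$s read so far, which is exactly the invariant the hint describes.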
|
Are there countably or uncountably many infinite subsets of the positive even integers? Let $S$ be the set of all infinite subsets of $\mathbb N$ that consist only of even numbers.
Is $S$ countable or uncountable?
I know that the set $F$ of all finite subsets of $\mathbb N$ is countable, but from that I am not able to deduce that $S$ is uncountable, since it looks hard to find a bijection between $S$ and $P(\mathbb N)\setminus F$. At the moment I also cannot find a bijection between $S$ and $[0,1]$ to show that $S$ is uncountable, nor a bijection between $S$ and $\mathbb N$, or $S$ and $\mathbb Q$, to show that it is countable. So I am wondering: is there some clever way to determine the cardinality of $S$ while avoiding bijectivity arguments?
So can you help me?
|
Notice that by dividing by two, you get all infinite subsets of $\mathbb{N}$. Now to make a bijection from $]0,1]$ to this set, write real numbers in base two, and for each real take the set of positions of the $1$s in the binary expansion.
You have to write numbers of the form $\frac{n}{2^p}$ with infinitely many $1$ digits (they have two binary expansions, one finite and one infinite). Otherwise, the image of such a real under this map would not fall into the set of infinite subsets (it would have only finitely many $1$s).
|
Number Matrix and Probability Say you're playing a game with a friend. Let's call it 1 in 8. You're seeing who can predict the next three quarter flips in a row. One player flips the quarter three times and HTT comes up. He now has to stick with that as his sequence to win. The other player gets to pick any sequence of three, like HHH. Now you only get seven more sequences and the game is over. Does anyone have the advantage, or is it still 50/50?
|
If each set of three is compared with each player's goal, the game is fair. Each player has a $\frac 18$ chance to win each round and there is a $\frac 34$ chance the round will be a draw. The chance of seven draws in a row is $(\frac 34)^7\approx 0.1335$, so each player wins with probability about $0.4333$. If I picked TTT and could win when the next single flip were T (because of the two T's that have already come up), I would have the advantage.
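Under this model (independent rounds of three flips, at most seven more rounds, each player winning a round with probability $\frac18$), the numbers above can be checked directly:

```python
p_draw = 3 / 4                       # chance neither triple comes up in a round
p_no_winner = p_draw ** 7            # seven straight draws ends the game
p_each_wins = (1 - p_no_winner) / 2  # by symmetry between the two players

assert abs(p_no_winner - 0.1335) < 1e-3
assert abs(p_each_wins - 0.4333) < 1e-3
```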
|
approximation of the law of cosines from the spherical case to the planar case We know that for a plane triangle the cosine rule is $\cos C=\frac{a^2+b^2-c^2}{2ab}$, and for a spherical triangle it is $ \cos C=\frac{\cos c - \cos a \cos b} {\sin a\sin b}$. Suppose $a,b,c<\epsilon$, where $a,b,c$ are the sides of a spherical triangle, and $$\left|\frac{a^2 +b^2-c^2}{2ab}- \frac{\cos c - \cos a \cos b} {\sin a\sin b}\right|<K\epsilon^m.$$
Could anyone tell me what $m$ and $K$ will be?
|
Note that for $x$ close to $0$,
$$1-\frac{x^2}{2!} \le \cos x\le 1-\frac{x^2}{2!}+\frac{x^4}{4!}$$
and
$$x-\frac{x^3}{3!} \le \sin x\le x.$$
(We used the Maclaurin series expansion of $\cos x$ and $\sin x$.)
Using these facts on the small angles $a$, $b$, and $c$, we can estimate your difference.
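These bounds suggest the difference shrinks like $\epsilon^2$ (i.e. $m=2$ with a modest constant $K$); a quick numerical experiment with $a=b=c=\epsilon$ is consistent with that guess, though it is only a sketch, not a proof:

```python
import math

def planar(a, b, c):
    # Planar cosine rule for cos(C).
    return (a * a + b * b - c * c) / (2 * a * b)

def spherical(a, b, c):
    # Spherical cosine rule for cos(C).
    return (math.cos(c) - math.cos(a) * math.cos(b)) / (math.sin(a) * math.sin(b))

for eps in (0.1, 0.01, 0.001):
    diff = abs(planar(eps, eps, eps) - spherical(eps, eps, eps))
    # For the equilateral case the ratio diff / eps^2 stays near 1/8,
    # consistent with a bound K * eps^2.
    assert 0.05 < diff / eps**2 < 0.25
```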
|
Glue Together smooth functions Let's say that $f(x)$ is a $C^{1}$ function defined on a closed interval $I\subset \mathbb{R}^{+}$ and $g(x)\equiv c$ ($c$ a constant) on an open interval $J\subset \mathbb{R}^{+}$, where $\overline{J}\cap I\neq \emptyset$. Is there a way to "glue" together those two functions in such a way that they connect smoothly?
|
If $\overline J \cap I \neq \emptyset$, clearly not. If there is an open interval between $I$ and $J$, then yes: you can interpolate with a polynomial of degree $3$.
|
Why cannot the permutation $f^{-1}(1,2,3,5)f$ be even Please help me prove that if $f\in S_6$ is an arbitrary permutation, then the permutation $f^{-1}(1,2,3,5)f$ cannot be an even permutation.
I am sure there is a small thing I am missing. Thank you.
|
I think you can do the problem if you know that:
if $f$ is even then so is $f^{-1}$, and if $f$ is odd then so is $f^{-1}$.
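The underlying fact, that conjugating the odd $4$-cycle $(1\,2\,3\,5)$ by any $f \in S_6$ gives another odd permutation, can also be verified exhaustively; an illustrative sketch using zero-based indexing:

```python
from itertools import permutations

def parity(p):
    # p is a tuple mapping i -> p[i].  Parity = (-1)^(n - number of cycles).
    n = len(p)
    seen = [False] * n
    cycles = 0
    for i in range(n):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = p[j]
    return (-1) ** (n - cycles)

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

# sigma is the 4-cycle (1 2 3 5), i.e. (0 1 2 4) on the points 0..5.
sigma = (1, 2, 4, 3, 0, 5)
assert parity(sigma) == -1  # a 4-cycle is odd

for f in permutations(range(6)):
    conj = compose(inverse(f), compose(sigma, f))
    assert parity(conj) == -1  # no conjugate of sigma is even
```

Conjugation preserves cycle type, so every conjugate is again a $4$-cycle and hence odd.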
|
Change of basis matrix to convert standard basis to another basis
Consider the basis $B=\left\{\begin{pmatrix} -1 \\ 1 \\0 \end{pmatrix}\begin{pmatrix} -1 \\ 0 \\1 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \\1 \end{pmatrix} \right\}$ for $\mathbb{R}^3$.
A) Find the change of basis matrix for converting from the standard basis to the basis B.
I have never done anything like this and the only examples I can find online basically tell me how to do the change of basis for "change-of-coordinates matrix from B to C".
B) Write the vector $\begin{pmatrix} 1 \\ 0 \\0 \end{pmatrix}$ in B-coordinates.
Obviously I can't do this if I can't complete part A.
Can someone either give me a hint, or preferably guide me towards an example of this type of problem?
The only thing I can think to do is take an augmented matrix $[B\ E]$ (note: $E$ in this case is the standard basis, because I don't know the correct notation) and row reduce until $B$ becomes the identity matrix. This is basically finding the inverse, so I doubt this is correct.
|
By definition, the change of basis matrix contains the coordinates of the new basis with respect to the old basis as its columns. So by definition $B$ is the change of basis matrix.
The key to the solution is the equation $v = Bv'$, where $v$ has coordinates in the old basis and $v'$ has coordinates in the new basis (the new basis vectors are the columns of $B$).
Suppose we know that in the old basis $v$ has coordinates $(1,0,0)$ (as a column), which is by the way just an old basis vector, and we want to know $v'$ (the old basis vector's coordinates in terms of the new basis); then from the above equation we get
$$B^{-1}v = B^{-1}Bv' \Rightarrow B^{-1}v = v'$$
As a side note, sometimes we want to ask how the change of basis matrix $B$ acts if we look at it as a linear transformation; that is, given a vector $v$ in the old basis, $v=(v_1,\dots,v_n)$, what is the vector $Bv$? In general it is a vector whose $i$-th coordinate is $b_{i1}v_1+\dots+b_{in}v_n$ (the dot product of the $i$-th row of $B$ with $v$). But in particular, if we consider $v$ to be an old basis vector having coordinates $(0,\dots,1,\dots,0)$ (coordinates with respect to the old basis), where the $1$ is in the $j$-th position, then we get $Bv = (b_{1j},\dots,b_{nj})$, which is the $j$-th column of $B$, i.e. the $j$-th vector of the new basis. Thus we may say that $B$, viewed as a linear transformation, takes the old basis to the new basis.
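As a concrete follow-up, part B amounts to solving $Bc = e_1$ for the basis given in the question; a sketch with exact arithmetic (the helper name is made up):

```python
from fractions import Fraction

# Columns of B are the basis vectors from the question.
B = [[-1, -1, 1],
     [ 1,  0, 1],
     [ 0,  1, 1]]
v = [1, 0, 0]  # the standard basis vector e1

def solve(A, b):
    # Gauss-Jordan elimination with exact fractions: returns c with A c = b.
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [x - M[r][col] * y for x, y in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

c = solve(B, v)
print(c)  # the B-coordinates of (1, 0, 0) are -1/3, -1/3, 1/3
```

One can check back: $-\tfrac13(-1,1,0) - \tfrac13(-1,0,1) + \tfrac13(1,1,1) = (1,0,0)$.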
|
Showing the sum over primes is equal to an integral First, note that $$\vartheta(x) = \sum_{p \leq x} \log p.$$ I am trying to show that $$\vartheta(x) = \pi(x)\log(x)-\int_2^x\frac{\pi(u)}{u}du$$ by summation by parts. The theorem of partial summation is
Let $f$ be a continuous and differentiable function. Suppose $\{a_n\}_{n\geq 1} \subseteq \mathbb{R}$. Set $A(t) =\sum_{n\leq t}a_n$. Then $$\sum_{n\leq x} a_n f(n) = A(x)f(x) - \int_1^x A(t)f'(t) dt$$
My proof is as follows (it seems simple enough). Let $f(n) = \log(p)$ if n is a prime. Clearly $f(n)$ is continuous and diffrentiable. Set $$a_n = \begin{cases} 1 & \text{if } n = prime \\ 0 & \text{otherwise}\end{cases}$$
Then by summation by parts we have $$\sum_{n\leq x}\log(p)\cdot a_n = A(x)\log(x) - \int_1^x\frac{A(u)}{u}du$$ where $A(t) = \sum_{p \leq t} 1 = \pi(t)$
Is this sufficient?
|
It looks reasonably good, but one thing you need to be clearer about is the definition of $f(n)$. Currently "$f(n) = \log(p)$ when $n$ is a prime" is not an adequate definition, since it fails to define, for instance, $f(2.5)$.
It sounds like you might be defining $f(n) = 0$ when $n$ is not a prime; this would not be very usable since it results in a discontinuous function. On the other hand, $f(x) = \log x$ (already hinted at when you implicitly determined $f'(u) = 1/u$) works just fine for the argument.
Another minor nitpick: you need to justify the switch between $\int_1^x$ in one equation and $\int_2^x$ in the other. This isn't hard because if you think about the definition, $A(t)$ happens to be $0$ for all $t < 2$.
|
Finite automaton that recognizes the empty language $\emptyset$ Since the language $L = \emptyset$ is regular, there must be a finite automaton that recognizes it. However, I'm not exactly sure how one would be constructed. I feel like the answer is trivial. Can someone help me out?
|
You have only one state $s$, which is initial but not accepting, with loops $s \overset{\alpha}{\rightarrow} s$ for every letter $\alpha \in \Sigma$ (with a non-deterministic automaton you can even skip the loops, i.e. the transition relation would be empty).
I hope this helps ;-)
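In code the automaton is almost a tautology; a quick sketch:

```python
ACCEPTING = set()          # no state is accepting

def accepts(word):
    state = "s"            # the unique (initial) state
    for _ in word:
        state = "s"        # self-loop on every input letter
    return state in ACCEPTING

# The automaton accepts no string at all, so its language is the empty set.
assert not accepts("") and not accepts("a") and not accepts("abba")
```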
|
Subgroups of the group of all roots of unity. Let $G=\mathbb{C}^*$ and let $\mu$ be the subgroup of roots of unity in $\mathbb{C}^*$. Show that any finitely generated subgroup of $\mu$ is cyclic. Show that $\mu$ is not finitely generated and find a non-trivial subgroup of $\mu$ which is not finitely generated.
I can see that a subgroup of $\mu$ is cyclic due to the nature of complex numbers and De Moivre's theorem. The second part of this question confuses me though, since by the same logic it should follow a similar procedure. Perhaps it has something to do with the "finite" nature of the subgroup in comparison to $\mu$. Any assistance would be helpful!
Thank you in advance.
|
The first part should probably read "Show that any f.g. subgroup of $\,\mu\,$ is cyclic and finite...". From here it follows at once that $\,\mu\,$ cannot be f.g., as it isn't finite.
For a non-trivial, non-f.g. subgroup, think of the roots of unity of order $\,p^n\,\,,\,\,p\,$ a prime, with running exponent $\,n\in\Bbb N\,$ ...
|
Continuous exponential growth and misleading rate terminology I'm learning about continuous growth and looking at examples of Continuously Compounded Interest in finance and Uninhibited Growth in biology. While I've gotten a handle on the math, I'm finding some of the terminology counterintuitive. The best way to explain would be through an example.
A culture of cells is grown in a laboratory. The initial population is 12,000 cells. The number of cells, $N$, in thousands, after $t$ days is,
$N(t)=12e^{0.86t}$, which we can interpret as an $86\%$ daily growth rate for the cells.
I understand the mechanism by which $0.86$ affects the growth rate, but it seems a misnomer to say there's an "$86\%$ daily growth rate" for the cells, as that makes it sound like the population will grow by $86\%$ in a day, when it actually grows by about $136\%$ since the growth is occurring continuously.
Is it just that we have to sacrifice accuracy for succinctness?
|
The instantaneous growth rate is $0.86$ per day, in the sense that $N(t)$ is the solution to $\frac {dN}{dt}=0.86N$. You are correct that the compounding makes the increase in one day about $136\%$, since $e^{0.86}\approx 2.36$.
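The two rates can be reconciled in one line: a continuous rate of $0.86$ per day compounds to $e^{0.86}-1\approx 136\%$ of actual growth in a day.

```python
import math

r = 0.86                        # instantaneous (continuous) daily rate
daily_growth = math.exp(r) - 1  # actual fractional growth over one day

assert 1.35 < daily_growth < 1.37  # about 136%, as the question observes
```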
|