Need help with $\int \dfrac{2x}{4x^2+1}\,dx$ We want $$\int \dfrac{2x}{4x^2+1}\,dx$$
I only know that $\ln(4x^2 + 1)$ would have to be in the mix, but what am I supposed to do with the $2x$ in the numerator?
|
Again, as in your past question, there's a general case here: if $\,f\,$ is differentiable then
$$\int\frac{f'(x)}{f(x)}dx=\log|f(x)|+K$$
Here, we have
$$\frac{2x}{4x^2+1}=\frac14\frac{(4x^2+1)'}{4x^2+1}\ldots$$
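As a quick sanity check (a sketch assuming SymPy is available), the computer algebra result matches the pattern above:

```python
# Symbolic check of the hinted antiderivative: (1/4) * log(4x^2 + 1).
from sympy import symbols, integrate

x = symbols('x')
print(integrate(2*x / (4*x**2 + 1), x))   # log(4*x**2 + 1)/4
```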
|
Integral defined as a limit using regular partitions Definition. Given a function $f$ defined on $[a,b]$, let $$\xi_k \in [x_{k-1},x_k],\quad k=1,\ldots,n$$ where $$ x_k=a+k\frac{b-a}n, \quad k=0,\ldots,n \; .$$
One says that $f$ is integrable on $[a,b]$ if the limit $$\lim_{n\to\infty}\frac{b-a}n\sum_{k=1}^n f(\xi_k)$$
exists and is independent of the $\xi_k$.
I seek a proof of the:
Theorem. If $a<c<b$ and $f$ is integrable on $[a,c]$ and $[c,b]$, then $f$ is integrable on $[a,b].$
|
HINT:
Take two cases:
1. When $c$ is a tag of a sub-interval $[x_{k},x_{k+1}]$ of $\dot{P}$, where $\dot{P}$ is your tagged partition $\{(I_{i},t_{i})\}_{i=1}^{n}$ with $I_{i}=[x_{i},x_{i+1}]$.
2. When $c$ is an end-point of a sub-interval of $\dot{P}$.
|
Name of this "cut 'n slide" fractal?
Can you identify this fractal (if in fact it has a name) based either upon its look or on the method of its generation? It's created in this short video.
It looks similar to a dragon fractal, but I don't think they are the same. Help, please?
|
That is the twindragon. It is a two-dimensional self-similar set; that is, it is composed of two smaller copies of itself, scaled down by the factor $\sqrt{2}$, as shown here:
Using this self-similarity, one can construct a tiling of the plane with fractal boundary. Analysis of the fractal dimension of the boundary is also possible, but it's a bit harder.
It's all truly great fun!
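For readers who want to see the set, here is a hedged sketch (not from this answer): assuming the common description of the twindragon as the set of complex numbers representable in base $-1+i$ with digits $\{0,1\}$, a chaos-game iteration plots an approximation.

```python
# Chaos game over the two contractions z -> (z + d)/(-1 + i), d in {0, 1};
# each map has ratio 1/sqrt(2), and two copies give dimension 2.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
z = 0 + 0j
pts = np.empty(200000, dtype=complex)
for n in range(pts.size):
    d = rng.integers(0, 2)          # random digit 0 or 1
    z = (z + d) / (-1 + 1j)
    pts[n] = z
plt.scatter(pts.real, pts.imag, s=0.05)
plt.gca().set_aspect("equal")
plt.show()
```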
|
How to evaluate double integrals over a region? Evaluate the double integral $\iint_D(1/x)dA$, where D is the region bounded by the circles $x^2+y^2=1$ and $x^2+y^2=2x$
Alright so first I converted to polar coordinates:
$$ x^2 + y^2 = 1 \ \Rightarrow \ r = 1, \qquad x^2 + y^2 = 2x \ \Rightarrow \ r^2 = 2r\cos\theta \ \Rightarrow \ r = 2\cos\theta. $$
Points of intersection:
$2\cos\theta = 1 \ \Rightarrow \ \theta = \pm\pi/3$,
$2\cos\theta > 1$ for $\theta \in (-\pi/3, \pi/3)$.
So,
$$ \iint_D \frac{1}{x}\, dA = \int_{-\pi/3}^{\pi/3} \int_1^{2\cos\theta} \frac{1}{r\cos\theta}\, r\, dr\, d\theta $$
$$ = \int_{-\pi/3}^{\pi/3} \int_1^{2\cos\theta} \sec\theta\, dr\, d\theta = \int_{-\pi/3}^{\pi/3} (2\cos\theta - 1)\sec\theta\, d\theta $$
$$ = 2\int_0^{\pi/3} (2 - \sec\theta)\, d\theta, $$
(since the integrand is even)
$$ = 2\left(2\theta - \ln|\sec\theta + \tan\theta|\right)\Big|_0^{\pi/3} = \frac{4\pi}{3} - 2\ln(2 + \sqrt{3}). $$
I'm not sure this is right. Could someone look over it?
|
Of course, you can do the problem by using polar coordinates. If I've understood correctly, you want to find the right limits for the double integral. I made a plot of the region as follows:
The red colored part is our $D$. So:
$$1 \le r \le 2\cos\theta, \qquad -\frac{\pi}{3} \le \theta \le \frac{\pi}{3}$$
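A numerical cross-check of the OP's value (a sketch assuming SciPy; in `dblquad` the inner variable is $r$):

```python
# Integrate sec(theta) over r in [1, 2cos(theta)], theta in [-pi/3, pi/3];
# the Jacobian r cancels the 1/r from 1/x = 1/(r cos(theta)).
import numpy as np
from scipy.integrate import dblquad

val, _ = dblquad(lambda r, th: 1.0/np.cos(th),
                 -np.pi/3, np.pi/3,
                 lambda th: 1.0, lambda th: 2.0*np.cos(th))
print(val, 4*np.pi/3 - 2*np.log(2 + np.sqrt(3)))   # both about 1.5549
```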
|
putting a complex structure on a graph I am studying Riemann Surfaces, and an example that comes up in two of my references, as a preamble to smooth affine plane curves, is the following:
Let $D$ be a domain in the complex plane, and let $g$ be holomorphic on $D$; giving the graph the subspace topology, and letting the charts be open subsets of the graph with maps given by projection, we get an atlas and hence the graph admits a complex structure. Clearly on any overlap the transition function will be identity so that is all good; my confusion is, why did $g$ have to be holomorphic to begin with? Couldn't we have done the exact same thing with a continuous function? For that matter, couldn't we take any function at all, and let the atlas of the graph consist of one chart, namely the whole set, with the projection map? I think the issue, at least for the more extreme second example, would be that the structure would not be compatible with the subspace topology, but I don't see what the issue is with $g$ just being a continuous function.
Thank you for any insight.
|
Suppose $g$ is a continuous complex-valued function on $D$. Then the set $\Omega=\{(z,g(z))\in\mathbb C^2: z\in D\}$, which gets the subspace topology from $\mathbb C^2$, is homeomorphic to $D$ via $z\mapsto (z,g(z))$. By declaring this homeomorphism to be an isomorphism of complex structures, we can make $\Omega$ a complex manifold. No problem at all.
By construction, $\Omega$ is an embedded submanifold of $\mathbb C^2$ in the sense of topological manifolds. But in general it is not a complex submanifold of $\mathbb C^2$, because the inclusion map $\Omega\to \mathbb C^2$ is not holomorphic. Indeed, due to our definition of the complex structure on $\Omega$, the inclusion map is holomorphic if and only if the map $D\to\mathbb C^2$ defined by $z\mapsto (z,g(z))$ is holomorphic. The latter happens precisely when $g$ is a holomorphic function.
|
Ideal of smooth function on a manifold vanishing at a point I'm trying to prove the following lemma: let $M$ be a smooth manifold and consider the algebra $C^{\infty}(M)$ of smooth functions $f\colon M \to \mathbb{R}$. Given $x_0 \in M$, consider the ideals
$$\mathfrak{m}_{x_0} := \{f\in C^{\infty}(M) : f(x_0)=0\},$$
$$\mathfrak{I}_{x_0} := \{f\in C^{\infty}(M) : f(x_0)=0, df(x_0)=0\}.$$
Then $\mathfrak{I}_{x_0} = \mathfrak{m}^2_{x_0}$, i.e. any function $f$ vanishing at $x_0$ together with its derivatives can be written as
$$f=\sum_kg_kh_k, \quad g_k, h_k \in \mathfrak{m}_{x_0}.$$
I have no idea about how to prove the inclusion $\mathfrak{I}_{x_0} \subseteq \mathfrak{m}^2_{x_0}$.
|
For each $i \in \{1,2,\dots,n\}$ (working in local coordinates centered at $x_0$), let $g_i(x_1, \dots, x_n)=\int_0^1\frac{\partial f}{\partial x_i}(tx_1, \dots, tx_n)\,dt$; it is easy to verify, by integrating $\frac{d}{dt}f(tx_1,\dots,tx_n)$ over $[0,1]$, that $f=\sum_{i=1}^nx_ig_i$. Since $df(x_0)=0$, each $g_i$ vanishes at the origin, so $g_i\in\mathfrak{m}_{x_0}$ and hence $f\in\mathfrak{m}^2_{x_0}$.
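A symbolic sketch of this argument on a sample function (the example $f = x^2 y + y^3$ is hypothetical, chosen to vanish to second order at the origin):

```python
# Verify f = x*g_x + y*g_y with g_i(x, y) = Integral_0^1 (df/dx_i)(tx, ty) dt,
# and that each g_i vanishes at the origin.
from sympy import symbols, integrate, diff, simplify

x, y, t = symbols('x y t')
f = x**2*y + y**3

g = [integrate(diff(f, v).subs({x: t*x, y: t*y}), (t, 0, 1)) for v in (x, y)]
print(simplify(x*g[0] + y*g[1] - f))        # 0, so f = x*g_x + y*g_y
print([gi.subs({x: 0, y: 0}) for gi in g])  # [0, 0]: each g_i lies in m_0
```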
|
I found out that $p^n$ only has the factors $\{p^n, p^{n-1}, \ldots, p^0=1\}$; is there a reason why? So I've known this for a while, and only finally thought to ask about it: any prime number $p$ to a power $n$ has the factors $\{p^n,\ p^{n-1},\ \dots,\ p^1,\ p^0 = 1\}$
So, e.g., $5^4 = 625$, its factors are:
$$
\{625 = 5^4,\ 125 = 5^3,\ 25 = 5^2,\ 5 = 5^1,\ 1 = 5^0\}
$$
Now, my best guess is that it's related to its prime factorisation, $5*5*5*5$, but other than that, I have no idea.
So my question is, why does a prime number raised to a power $n$ have only the factors of $p^{n-1}$ and so on like above? No, I'm not talking about prime factorisation, I'm talking about normal factors (like 12's factors are 1, 2, 3, 4, 6, 12) and why $p^n$ doesn't have any other (normal) factors other than $p^{n-1} \ldots$
|
Let $ab=p^n$. Consider the prime factorization of the two terms on the left hand side. If any prime other than $p$ appears on the left, say $q$, then it appears as an overall factor and so we construct a prime factorization of $ab$ that contains a $q$. But then the right hand side has only $p$ as prime factors. Since the two prime factorizations are the same, and factorizations are unique, this is impossible. Hence $a,b$ are formed entirely of $p$s in their prime decomposition.
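A quick enumeration (Python sketch) for the example $p=5$, $n=4$ from the question:

```python
# The divisors of 625 are exactly the powers of 5.
p, n = 5, 4
N = p**n
print([d for d in range(1, N + 1) if N % d == 0])   # [1, 5, 25, 125, 625]
```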
|
property of equality The property of equality says:
"The equals sign in an equation is like a scale: both sides, left and right, must be the same in order for the scale to stay in balance and the equation to be true."
So for example in the following equation, I want to isolate the x variable. So I cross-multiply both sides by 3/5:
5/3x = 55
x = 3/5*55
What I did to one side, I had to to do the other.
However, take a look at the following problem:
y - 10/3 = -5/6(x + 2)
y = -5/6x - 10/6 + 10/3
So for we just use distributive property of multiplication to distribute -5/6 to the quantity of x + 2. Then since we isolate y, we add 10/3 to both sides.
However, now in order to add the two fractions, we find the least common denominator is 6, so we multiply 10/3 by 2:
y = -5/6x - 10/6 + 20/6.
Here is my question. Why can we multiply a term on one side by 2, without having to do it on the other side?
After writing this question here, I'm now thinking that because 10/3 is equal to 20/6 we really didn't actually add anything new to the one side, and that's why we didn't have to add it to the other side.
|
You did not multiply it by two; you multiplied it by $\frac{2}{2}=1$ instead.
|
What am I missing here? It's probably a silly question, but I'm missing something. If $x'= Ax$ and $A$ is a linear operator on $\mathbb{R}^n$, then $x'_i = \sum_j a_{ij} x_j$ with $[A]_{ij} =a_{ij} = \frac{\partial x'_i}{\partial x_j}$; therefore $\frac{\partial}{\partial x_i'} = \sum_j \frac{\partial x_j}{\partial x'_i} \frac{\partial}{\partial x_j}$. However, $\frac{\partial}{\partial x_i'} = \frac{\partial}{\partial \sum_j a_{ij} x_j} = \sum_j a_{ij} \frac{\partial}{\partial x_j} = \sum_j \frac{\partial x'_i}{\partial x_j} \frac{\partial}{\partial x_j}$! What's wrong here?
Thanks in advance.
|
When you wrote $\frac{\partial}{\partial x_i'} = \frac{\partial}{\partial \sum_j a_{ij} x_j} = \sum_j a_{ij} \frac{\partial}{\partial x_j}$, the $a_{ij}$ somehow climbed from the denominator to the numerator.
|
Solving elementary row operations So I am faced with the following:
$$
\begin{align}
x_1 + 4x_2 - 2x_3 +8x_4 &=12\\
x_2 - 7x_3 +2x_4 &=-4\\
5x_3 -x_4 &=7\\
x_3 +3x_4 &=-5
\end{align}$$
How should I approach this problem? In other words, what is the next elementary row operation that I should attempt in order to solve it? I know how to do it with 3 equations by using the augmented matrix method, but this got me a little confused.
|
HINT:
Use elimination/substitution or cross-multiplication to solve for $x_3,x_4$ from the last two simultaneous equations.
Putting the values of $x_3,x_4$ in the second equation, you will get $x_2$.
Putting the values of $x_2,x_3,x_4$ in the first equation, you will get $x_1$.
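For comparison, a direct numerical solve (a sketch assuming NumPy) agrees with the back-substitution described in the hint:

```python
# Solve the 4x4 system; expected solution (x1, x2, x3, x4) = (2, 7, 1, -2).
import numpy as np

A = np.array([[1, 4, -2, 8],
              [0, 1, -7, 2],
              [0, 0, 5, -1],
              [0, 0, 1, 3]], dtype=float)
b = np.array([12, -4, 7, -5], dtype=float)
print(np.linalg.solve(A, b))   # [ 2.  7.  1. -2.]
```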
|
Show that Total Orders does not have the finite model property I am not sure whether my answer to this problem is correct. I would be grateful if anyone could correct my mistakes or help me to find the correct solutions.
The problem:
Show that Total Orders does not have the finite model property by finding a sentence A which
is refuted only in models with an infinite domain.
Just for reference purpose only:
A theory T has the finite model property if and only if whenever $T\nvdash A$ there is a model $\mathcal{M}$ with a finite domain, such that $\mathcal{M}$ satisfies the theory $T$ but $A$ does not hold in $\mathcal{M}$.
The theory of Total Orders (in the language with quantifiers, propositional
connectives, identity and one new binary relation symbol '<') defined as the set of consequences of the following three formulas:
1. $(\forall x)\neg(x<x)$
2. $(\forall x)(\forall y)(\forall z)((x<y\wedge y<z)\supset x<z)$
3. $(\forall x)(\forall y)(x<y\vee x=y\vee y<x)$
My answer is quite simple, but it is so simple that I doubt whether it is correct. I am trying to say that the statement "there is a least element" is refuted only in models with an infinite domain, for example the integers. My sentence A is $(\exists y)(\forall x)(y<x\vee y=x)$.
I am not sure whether I am correct. Please correct me if I am wrong and please say so if there is any better answer.
Many thanks in advance!
|
Looks like a good candidate to me. As you say, this clearly holds in every finite model of our theory, but infinite counterexamples exist, like $\mathbb{Z}$.
|
Limit of $\lim\limits_{x\to 0}\left(\frac{1+x\sin(2x)-\cos(x)}{\sin^2(x)}\right)$ I want to evaluate this limit and I faced with one issue.
For this post, I write $L'$ to mark an application of L'Hôpital's rule.
$$\lim\limits_{x\to 0}\left(\frac{1+x\sin(2x)-\cos(x)}{\sin^2(x)}\right)$$
Solution One:
$$\lim\limits_{x\to 0}\left(\frac{1+x\sin(2x)-\cos(x)}{\sin^2(x)}\right)=\frac{0}{0}L`=\frac{\sin(2x)+2x\cos(2x)+\sin(x)}{\sin(2x)}$$
at this step I decided to evaluate each fraction so I get :
$$\lim\limits_{x\to 0}\frac{\sin(2x)}{\sin(2x)}+\frac{2x\cos(2x)}{\sin(2x)}+\frac{\sin(x)}{\sin(2x)} = \frac{3}{2}$$
Solution Two:
$$\lim\limits_{x\to 0}\left(\frac{1+x\sin(2x)-\cos(x)}{\sin^2(x)}\right)\frac{0}{0}L`=\frac{\sin(2x)+2x\cos(2x)+\sin(x)}{\sin(2x)}=\frac{0}{0}L`$$
$$\frac{2\cos(2x)+2\cos(2x)-4x\sin(2x)+\cos(x)}{2\cos(2x)}=\frac{5}{2}$$
I would like to get some idea where I did wrong, Thanks.
|
As mentioned, your first solution is incorrect. The reason is that $$\lim_{x\to0}\frac{2x\cos(2x)}{\sin(2x)}\neq 0.$$ You can apply L'Hôpital again:
$$\lim_{x\to0}\frac{2x\cos(2x)}{\sin(2x)}=\lim_{x\to0}\frac{2\cos(2x)-4x\sin(2x)}{2\cos(2x)}=\lim_{x\to0}\left(1-2x\tan(2x)\right)=1-0=1.$$ So now, having evaluated this limit, in the first solution I'd write after $$\lim\limits_{x\to 0}\left(\frac{1+x\sin(2x)-\cos(x)}{\sin^2(x)}\right)=\lim\limits_{x\to 0}\frac{\sin(2x)}{\sin(2x)}+\frac{2x\cos(2x)}{\sin(2x)}+\frac{\sin(x)}{\sin(2x)}$$ that the limit equals $$1+1+\frac 1 2=\frac 5 2,$$ and that's the correct answer.
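A symbolic confirmation (SymPy sketch) that $5/2$ is the correct value:

```python
# Evaluate the original limit directly.
from sympy import symbols, sin, cos, limit

x = symbols('x')
print(limit((1 + x*sin(2*x) - cos(x)) / sin(x)**2, x, 0))   # 5/2
```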
|
Proof for Lemma about convex hull I have to prove a Lemma: "For the set B of all convex combinations of arbitrary finite number of points from set A, $co (A)=B$"
I started by showing $B\subset co(A)$ first.
$B$ contains all convex combinations of arbitrary finite number of points from A.
Let $x=\alpha_1 x_1 +\dots+\alpha_n x_n$ be a convex combination of $x_1,\dots,x_n\in A$; then $x\in B$.
Let $C$ be any convex set that contains A.
Now I know that $x_1,...,x_n\in C$ and, since $C$ is convex, it contains all convex combinations of arbitrary finite number of its points (and points from A), so $x\in C$.
Thus, $B\subset C$
I also know that $C=co(C)$ because $C$ is convex.
Also, $A\subset C \rightarrow co(A)\subset co(C) \rightarrow co(A)\subset C$
So I have $B\subset C$ and $co(A)\subset C$.
How can I conclude from this that $B\subset co(A)$?
What about another direction, $co(A)\subset B$? Just to prove that $B$ is convex?
|
The conclusion of your question is not correct. You are right in whatever you have done. The set $B$ may not even be convex, so how can it be equal to co($A$)?
|
Problem related to GCD I was solving a question on GCD. The question was calculate to the value of $$\gcd(n,m)$$
where $$n = a+b$$$$m = (a+b)^2 - 2^k(ab)$$
$$\gcd(a,b)=1$$
Till now I have solved that when $n$ is odd, the $\gcd(n,m)=1$.
So I would like to get a hint or direction to proceed for the case when $n$ is even.
|
Key idea: employ the Euclidean Algorithm (EA): $\ (a\!+\!b,\,x) = (a\!+\!b,\ x \bmod (a\!+\!b))$, and Euclid's Lemma (EL): $\ (a,\,b\,x) = (a,\,x)\ $ if $\ (a,b)=1$.
So if $f \in \Bbb Z[x,y]$, then $(a\!+\!b,\, f(a,b)) \overset{(EA)}{=} (a\!+\!b,\,f(-b,b))$, by $a\equiv -b\pmod{a\!+\!b}$. Hence
$$(a\!+\!b,\, (a\!+\!b)^2 - abc) = (a\!+\!b,\, 0^2+b^2c)\overset{(EL)}{=}(a\!+\!b,\,c), \quad\text{since } (a\!+\!b,\,b) = (a,b)=1.$$ In your problem $c=2^k$, so $\gcd(n,m)=\gcd(a+b,\,2^k)$.
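A randomized check of the conclusion (Python sketch, with $c=2^k$ as in the question):

```python
# Check gcd(n, m) = gcd(a+b, 2^k) for random coprime a, b.
import random
from math import gcd

for _ in range(10000):
    a, b, k = random.randint(1, 10**6), random.randint(1, 10**6), random.randint(1, 20)
    if gcd(a, b) != 1:
        continue
    n, m = a + b, (a + b)**2 - 2**k * a * b
    assert gcd(n, m) == gcd(a + b, 2**k)
print("checked")
```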
|
Sum of exponential of normal random variables Suppose
$X_i \sim N(0,1)$ (independent, identical normal distributions)
Then by Law of large number,
$$
\sqrt{1-\delta}\, \frac{1}{n}\sum_{i=1}^n e^{\frac{\delta}{2}X_i^2} \rightarrow \sqrt{1-\delta} \int e^{\frac{\delta}{2}x^2}\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}dx =1
$$
However, according to simulations, this approximation doesn't seem to work when $\delta$ is close to one. Is that true, or do I just need to run larger samples? Thanks!
Update (6/6): As sos440 mentioned, there's a typo and now fixed.
|
Note that
$$ \Bbb{E} \exp \left\{ \tfrac{1}{2}\delta X_{i}^{2} \right\} = \frac{1}{\sqrt{1-\delta}}
\quad \text{and} \quad
\Bbb{V} \exp \left\{ \tfrac{1}{2}\delta X_{i}^{2} \right\} = \frac{1}{\sqrt{1-2\delta}} - \frac{1}{1-\delta}, $$
which is finite only for $\delta < \tfrac{1}{2}$; for $\tfrac{1}{2}\le\delta<1$ the summands have infinite variance, which explains the very slow convergence you observe in simulations when $\delta$ is close to one.
Then the right form of the (strong) law of large numbers would be
$$ \frac{\sqrt{1-\delta}}{n} \sum_{i=1}^{n} e^{\frac{1}{2}\delta X_{i}^{2}} \xrightarrow{n\to\infty} 1 \quad \text{a.s.} $$
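A small simulation (NumPy sketch, arbitrary seed) illustrating both the convergence and the slow behavior for $\delta$ near one:

```python
# For delta = 0.3 the normalized average is close to 1; for delta = 0.9 the
# summands are heavy-tailed and the printed value varies a lot run to run.
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(10**6)
for delta in (0.3, 0.9):
    avg = np.sqrt(1 - delta) * np.mean(np.exp(0.5 * delta * x**2))
    print(delta, avg)
```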
|
How does the Hahn-Banach theorem imply the existence of a weak solution? I came across the following question when I read chapter 17 of Hörmander's book "The Analysis of Linear Partial Differential Operators", and the theorem is
Let $a_{jk}(x)$ be Lipschitz continuous in an open set $X\subset\mathbb{R}^n$, $a_{jk}=a_{kj}$, and assume that $(\Re a_{jk}(x))$ is positive definite. Then
$$
\sum_{j,k} D_j(a_{jk}D_ku)=f
$$
has a solution $u\in H_{(2)}^{loc}(X)$ for every $f\in L_{loc}^2(X)$
The author then says that if we can show that
$$
|(f,\phi)|\leq \|M \cdot\sum_{j,k} D_j(\bar{a_{jk}}D_k\phi) \|_{L^2}, \quad \phi\in C_c^{\infty}(X)
$$
for some positive continuous function $M$, then by the Hahn-Banach theorem there exists some $g\in L^2$ such that
$$
(f,\phi)=\left(g,M\cdot\sum_{j,k} D_j(\bar{a_{jk}}D_k\phi)\right)
$$
which implies that the weak solution is $u=Mg$. What confuses me is how the Hahn-Banach theorem is used here to show the existence of $g$.
Thanks for your help
|
Define the functional:
$$k(M Lw)=\int_{X} fw$$
where $L$ is the differential operator.
This is a bounded functional thanks to the estimate you are assuming; notice also that you use the estimate to check that $k$ is well defined.
Then, thanks to the Hahn-Banach theorem, you can extend the domain of this functional to the whole of $L^{2}$, as nullUser mentioned.
Finally, you use the Riesz representation theorem to obtain the solution.
|
Proving that the $n$th derivative satisfies $(x^n\ln x)^{(n)} = n!(\ln x+1+\frac12+\cdots+\frac1n)$ Question:
Prove that $(x^n\ln x)^{(n)} = n!(\ln x+1+\frac 12 + ... + \frac 1n)$
What I tried:
Using Leibnitz's theorem, with $f=x^n$ and $g=\ln x$.
So
$$f^{(j)}=n(n-1)\cdots(n-j+1)\,x^{n-j} , \qquad g^{(j)}=(-1)^{j-1} \dfrac{(j-1)!}{x^{j}}$$
But somehow I get stuck on the way...
|
Hint: Try using induction. Suppose $(x^n\ln x)^{(n)} = n!\left(\ln x+\frac{1}{1}+\cdots\frac{1}{n}\right)$, then
$$\begin{align}{}
(x^{n+1}\ln x)^{(n+1)} & = \left(\frac{\mathrm{d}}{\mathrm{d}x}\left[x^{n+1} \ln x\right]\right)^{(n)} \\
&= \left((n+1)x^n\ln x + x^n\right)^{(n)} \\
&= (n+1)(x^n\ln x)^{(n)} + (x^n)^{(n)} \\
&= \ldots
\end{align}$$
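A symbolic check of the target formula for small $n$ (SymPy sketch; `harmonic(n)` is $1+\frac12+\cdots+\frac1n$):

```python
# Verify (x^n ln x)^(n) = n!(ln x + H_n) for n = 1..5.
from sympy import symbols, ln, diff, factorial, harmonic, simplify

x = symbols('x', positive=True)
for n in range(1, 6):
    lhs = diff(x**n * ln(x), x, n)
    rhs = factorial(n) * (ln(x) + harmonic(n))
    assert simplify(lhs - rhs) == 0
print("verified for n = 1..5")
```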
|
Barbalat's lemma for Stability Analysis Good day,
We have:
Lyapunov-Like Lemma: If a scalar function $V(t, x)$ satisfies the
following conditions:
* $V(t,x)$ is lower bounded
* $\dot{V}(t,x)$ is negative semi-definite
* $\dot{V}(t,x)$ is uniformly continuous in time
then $\dot{V}(t,x) \to 0$ as $t \to \infty $.
Now if we have the following system:
$\dot{e} = -e + \theta w(t) \\
\dot{\theta} = -e w(t)$
and assume that $w(t)$ is a bounded function, then we can select the following Lyapunov function:
$V(x,t) = e^2 + \theta^2$
Taking the time derivative:
$\dot{V}(x,t) = -2e^2 \leq 0$
Taking the time derivative again:
$\ddot{V}(x,t) = -4e(-e+\theta w)$
Now condition (3) is satisfied when $e$ and $\theta$ are bounded (boundedness of $\ddot{V}$ implies uniform continuity of $\dot{V}$), but how can I be sure that these two variables are indeed bounded? Should I perform some other test, or...?
|
Since $\dot{V} = -2e^2 \leq 0$, from the Lyapunov stability theory, one concludes that the system states $(e,\theta)$ are bounded.
Observe above that $\dot{V}$ is a function of only one state ($e$); if it were a function of the two states $(e,\theta)$ and $\dot{V} < 0$ (except in $e=\theta = 0$, in that case $V=0$), one would conclude asymptotic stability.
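A numerical sketch (assumptions: $w(t)=\sin t$ and arbitrary initial conditions, both mine, not the OP's) illustrating the conclusion:

```python
# V = e^2 + theta^2 never increases, the states stay bounded, and e(t) -> 0,
# while theta(t) need not converge to zero.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    e, theta = y
    w = np.sin(t)                      # any bounded w(t) works here
    return [-e + theta * w, -e * w]

sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 2.0], max_step=0.01)
e, theta = sol.y
V = e**2 + theta**2
print("V(0) =", V[0], " V(T) =", V[-1])           # V is non-increasing
print("max |e|, |theta|:", np.abs(e).max(), np.abs(theta).max())
print("|e| near the end:", np.abs(e[-5:]).max())  # e(t) -> 0
```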
|
Why is this true? $(\exists x)(P(x) \Rightarrow (\forall y) P(y))$ Why is this true?
$(\exists x)(P(x) \Rightarrow (\forall y) P(y))$
|
In classical logic the following equivalence is logically valid:
$$
\exists x (\varphi\Rightarrow\psi)\Longleftrightarrow(\forall x\varphi\Rightarrow\psi)
$$
providing that $x$ is a variable not free in $\psi$. So the formula in question is logically equivalent to $\forall xP(x)\Rightarrow\forall yP(y)$.
Looking at the problem from a slightly different perspective: either (i) all objects in the domain of discourse have property $P$, i.e. $\forall y P(y)$ is true, or (ii) there is $a$ in the domain for which $P$ fails, i.e. $\neg P(a)$ is true. In case (i), $P(x)\Rightarrow\forall y P(y)$ must be true, so $\exists x(P(x)\Rightarrow\forall y P(y))$ is true. In case (ii), $P(a)\Rightarrow\forall y P(y)$ must be true, therefore the sentence in question must be true as well.
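The finite-model case can also be checked mechanically; here is a brute-force Python sketch over all interpretations of $P$ on small domains:

```python
# For every assignment of P over a small domain, some x satisfies P(x) => forall y P(y).
from itertools import product

for size in range(1, 6):
    dom = range(size)
    for bits in product([False, True], repeat=size):
        P = dict(zip(dom, bits))
        holds = any((not P[x]) or all(P[y] for y in dom) for x in dom)
        assert holds
print("holds in every model with domain size 1..5")
```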
|
Finding the root of a degree $5$ polynomial $\textbf{Question}$: which of the following $\textbf{cannot}$ be a root of a polynomial in $x$ of the form $9x^5+ax^3+b$, where $a$ and $b$ are integers?
A) $-9$
B) $-5$
C) $\dfrac{1}{4}$
D) $\dfrac{1}{3}$
E) $9$
I thought about this question for a bit now and can anyone provide any hints because I have no clue how to begin to eliminate the choices?
Thank you very much in advance.
|
Use the rational root theorem, and note that the denominator of one of the options given does not divide $9$...
|
At least one vertex of a tetrahedron projects to the interior of the opposite triangle How can I give a fast proof of the following fact:
Given four points in $\mathbb{R}^3$ not contained in a plane, we can choose one such that its projection to the plane passing through the others is in the triangle generated by the three other points.
Thanks in advance.
|
Here is a graphical supplement (that I cannot place into a comment) to the excellent answer by @achille hui .
I have taken the case $\eta=0.4$, with normals in red.
The (complicated) name of this polyhedron is "tetragonal disphenoid" (https://en.wikipedia.org/wiki/Disphenoid).
|
If $X,Y$ are independent and exponentially distributed, what is the distribution of $X/(X+Y)$? I've been crushing my head with this exercise. I know how to get the distribution of a ratio of exponential variables and of the sum of them, but I can't piece everything together.
The exercise goes as this:
If $X,Y$ are independent exponentially distributed with $\beta = 1$ (parameter of the exponential distribution $= 1$), then what is the distribution of $X/(X+Y)$?
Any ideas?
Thanks a lot.
|
In other words, for each $a \in (0,1)$, you want to compute $P\left(\frac{X}{X+Y} < a \right)$.
Outline: Find the joint density of $(X,Y)$, and integrate it over the subset of the plane $\left\{ (x,y) : \frac{x}{x+y} < a \right\}$.
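A Monte Carlo sketch of the probability in question; empirically $P(X/(X+Y)<a)\approx a$, suggesting the ratio is uniform on $(0,1)$ (consistent with what the outlined integral gives):

```python
# Empirical CDF of R = X/(X+Y) at a few values of a.
import numpy as np

rng = np.random.default_rng(0)
X = rng.exponential(1.0, size=10**6)
Y = rng.exponential(1.0, size=10**6)
R = X / (X + Y)
for a in (0.1, 0.25, 0.5, 0.9):
    print(a, (R < a).mean())
```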
|
Let $R$ be a ring with $1$. a nonzero proper ideal $I$ of $R$ is a maximal ideal iff the $R/I$ is a simple ring. Let $R$ be a ring with $1$. Prove that a nonzero proper ideal $I$ of $R$ is a maximal ideal if and only if the quotient ring $R/I$ is a simple ring.
My attempt:-
$I$ is maximal $\iff$ $R/I$ is a field. $\iff$ $R/I$ has no non-trivial ideals $\iff$ $R/I$ is simple.
Is it correct?
|
You won't necessarily get a field in the quotient without commutativity, but you have a decent notion, nonetheless.
The rightmost equivalence is just the definition of simple ring.
If $I$ isn't maximal, then there is a proper ideal $J$ of $R$ with $I\subsetneq J$. Show that $J/I$ is a non-trivial ideal of $R/I$. Thus, simpleness of $R/I$ implies maximality of $I$.
On the other hand, suppose that $R/I$ isn't simple, so that there is a non-trivial ideal $\overline J$ of $R/I$. Let $J$ be the preimage of $\overline J$ under the quotient map $R\to R/I$, and show that $J$ is a proper ideal of $R$ containing $I$, so that $I$ is not maximal.
|
When is a group algebra (semigroup algebra) an Artinian algebra?
When is a group algebra (semigroup algebra) an Artinian algebra?
We know that an Artinian algebra is an algebra that satisfies the descending chain condition on ideals. I think that a group algebra (semigroup algebra) is an Artinian algebra if the group algebra (semigroup algebra) satisfies the descending chain condition on ideals. Are there other equivalent conditions that determine when a group algebra (semigroup algebra) is an Artinian algebra? Thank you very much.
|
A result of E. Zelmanov (Zel'manov), Semigroup algebras with identities,
(Russian) Sib. Mat. Zh. 18, 787-798 (1977):
Assume that $kS$ is right Artinian. Then $S$ is a finite semigroup. The converse holds if $S$ is a monoid.
See this assertion in Jan Okniński, Semigroup Algebras, Pure and Applied Mathematics 138 (1990), p. 172, Th. 23.
|
Why the natural density of the set $\{ n \mid n \equiv n_{0}\bmod{m}\}$ is $\frac{1}{m}$. The natural density of a set $S$ is defined by $\displaystyle\lim_{x \to{+}\infty}{\frac{\left | \{ n\le x \mid n\in S \} \right |}{x}}$.
This is maybe a silly question, but I got confused by this definition. I really need to understand why the natural density of the set $\{ n \mid n \equiv n_{0}\bmod{m}\}$ is $\dfrac{1}{m}$.
Thanks!
|
I think you should add the condition $n\geq 0$ here.
Let $n_{0}=pm+k$, where $0\leq k< m$. We let $x\to\infty$; write $x=rm+t$, where $0\leq t< m$.
Then $$\frac {\left|\{ n\le x \,:\, n\equiv n_{0} \!\!\pmod m\}\right|}{x}=\begin{cases}\dfrac{r+1}{rm+t} & \text{if } t\geq k,\\[2mm] \dfrac{r}{rm+t} & \text{if } t<k.\end{cases}$$
In either case, as $x\to \infty$, the fraction tends to $\frac {1}{m}$.
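An empirical check (Python sketch with the hypothetical values $m=7$, $n_0=3$):

```python
# The proportion of n <= x with n = n0 (mod m) approaches 1/m.
m, n0 = 7, 3
for x in (10**3, 10**5, 10**6):
    count = sum(1 for n in range(1, x + 1) if n % m == n0 % m)
    print(x, count / x)   # tends to 1/7 = 0.142857...
```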
|
2 questions regarding solutions for $\sqrt{a+b} - (a-b)^2 = 0$ Here's two questions derived from the following question:
$\quad\begin{matrix}
\text{Is there more than one solution to the following statement?} \\
\!\sqrt{a+b} - (a-b)^2 = 0
\end{matrix}$
$\color{Blue}{(1)\!\!:\;}$How would one (dis)prove this? I.e. In what ways could one effectively determine whether an equation has more than one solution; More specifically, this one?
$\color{Blue}{(2)\!\!:\;}$Is it possible to determine this with(out) a valid solution as a sort of reference?
Cheers!
|
If you are looking for real solutions, then note that $a+b$ and $a-b$ are just arbitrary numbers, with $a + b \ge 0$. This is because the system
$$
\begin{cases}
u = a + b\\
v = a - b
\end{cases}
$$
has a unique solution
\begin{cases}
a = \frac{u+v}{2}\\
\\
b = \frac{u-v}{2}.
\end{cases}
In the variables $u,v$ the general solution is $u = v^{4}$, which translates to
$$
a = \frac{v^{4}+v}{2}, \qquad
b = \frac{v^{4}-v}{2},
$$
as noted by Peter Košinár. Note that $a+b = u = v^{4}$ is always non-negative, as requested.
If it's integer solutions you're looking for, you get the same solutions (for $v$ an integer), as $v$ and $v^{4}$ have the same parity.
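A symbolic check of the parametric family (SymPy sketch, with $v\ge 0$ so that $\sqrt{v^4}$ simplifies to $v^2$):

```python
# With a = (v^4 + v)/2 and b = (v^4 - v)/2, the original expression vanishes.
from sympy import symbols, sqrt, simplify

v = symbols('v', nonnegative=True)
a = (v**4 + v) / 2
b = (v**4 - v) / 2
print(simplify(sqrt(a + b) - (a - b)**2))   # 0
```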
|
For $0<q_n<1$ with $q_n\to q<1$, show that $n^k q_n^n\to 0$. I have problems with computing the following limit:
Given a sequence $0<q_n<1$ such that $\lim\limits_{n\to\infty} q_n =q < 1$, prove that for a fixed $k \in \mathbb N$, $\lim\limits_{n\to\infty} n^k q_n^n= 0$.
I know how to prove this, but I can't do it without using L'Hôpital's Rule. Does someone have an elementary proof?
|
Note that $$\frac{(n+1)^k}{n^k}=1+kn^{-1}+{k\choose2}n^{-2}+\ldots+n^{-k}\to1$$
as $n\to\infty$, hence for any $s$ with $1<s<\frac1q$ (possible because $q<1$) we can find $a$ such that $n^k<a\cdot s^n$ for all $n$.
Select $r$ with $q<r<\frac1s$ (possible because $s<\frac1q$). Then
For almost all $n$, we have $q_n<r$, hence
$$n^kq_n^n<n^kr^n<a(rs)^n.$$
Since $0<rs<1$, the claim follows.
|
A group that has a finite number of subgroups is finite I have to show that a group that has a finite number of subgroups is finite. For starters, I'm not sure why this is true. I was thinking: what if I have 2 subgroups, one that is infinite and another that might or might not be finite? That would mean the group isn't finite. Or is my reasoning wrong?
|
Consider only the cyclic subgroups. None of them can be infinite, because an infinite cyclic group has infinitely many subgroups. So every cyclic subgroup is finite, and the group is the finite set-theoretic union of these finite cyclic subgroups.
|
Estimation of the number of prime numbers in a $b^x$ to $b^{x + 1}$ interval This is a question I have put to myself a long time ago, although only now am I posting it. The thing is, though there is an infinity of prime numbers, they become more and more scarce the further you go.
So back then, I decided to make an inefficient program (the inefficiency was not intended, I just wanted to do it quickly, I took more than 10 minutes to get the numbers below, and got them now from a sheet of paper, not the program) to count primes between bases of different numbers.
These are the numbers I got (below, the first row gives the exponents $x$; I used a logarithmic scale; the entry in column $n$ counts the primes in the interval $[b^n,\, b^{n+1})$):
exponent:  0   1   2    3     4     5   6    7    8     9    10   11   12
base 2:    0   2   2    2     5     7   13   23   43    75   137  255  464
base 3:    1   3   13   13    31    76  198  520  1380  3741
base 10:   4   21  143  1061  8363
I made three histograms from this data (one for each base, with the respective logarithmic scales both on the $x$ and $y$ axes) and drew a line over them, that seemed like a linear function (you can try it yourselves, or if you prefer, insert these into some program like Excel, Geogebra, etc.).
My question is: are these lines really tending (as the base and/or as x grows) to linear or even any kind of function describable by a closed form expression?
|
The prime number theorem is what you need. A rough statement is that if $\pi(x)$ is the number of primes $p \leq x$, then
$$
\pi(x) \sim \frac{x}{\ln(x)}
$$
Here "$\sim$" denotes "is asymptotically equal to".
A corollary of the prime number theorem is that, for $1\ll y\ll x$, $\pi(x)-\pi(x-y) \sim y/\ln(x)$. So yes, the number of primes starts to thin out for larger $x$; in fact, their density drops logarithmically.
To address your specific question, the PNT implies:
\begin{align}
\pi(b^{x+1}) - \pi(b^x) &\sim \frac{b^{x+1}}{\ln( b^{x+1} )} - \frac{b^x}{\ln( b^x )},
\end{align}
where
\begin{align}
\frac{b^{x+1}}{\ln( b^{x+1} )} - \frac{b^x}{\ln( b^x )} &= \frac{b^{x+1}}{(x+1)\ln(b)} - \frac{b^x}{x\ln(b)}\\
&=\frac{b^x}{\ln(b)}\left( \frac{b}{x+1} - \frac{1}{x} \right)\\
&=\frac{b^x}{\ln(b)}\left( \frac{ bx-(x+1) }{x(x+1)} \right)\\
&=\frac{b^x}{\ln(b)}\left( \frac{ x(b-1)-1 }{x(x+1)} \right)
\end{align}
For $x\gg 1$, we can neglect '$-1$' next to $x(b-1)$ in the numerator and $1$ next to $x$ in the denominator, so that:
\begin{align}
\frac{b^{x+1}}{\ln( b^{x+1} )} - \frac{b^x}{\ln( b^x )} &\approx \frac{b^x}{\ln(b)}\left( \frac{ x(b-1) }{x^2} \right)\\
&=\frac{b^x(b-1)}{x\ln(b)},
\end{align}
so that
$$
\pi(b^{x+1}) - \pi(b^x) \sim \frac{b^x(b-1)}{x\ln(b)}.
$$
As I'm writing this, I see that this is exactly the same as the answer that @Charles gave.
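A check of this estimate against the base-2 row of the table in the question (a sketch assuming SymPy's `primepi` for exact prime counts):

```python
# Compare pi(b^(x+1)) - pi(b^x) with b^x (b-1)/(x ln b).
from math import log
from sympy import primepi

b = 2
for x in (8, 10, 12):
    actual = primepi(b**(x + 1)) - primepi(b**x)
    estimate = b**x * (b - 1) / (x * log(b))
    print(x, actual, round(estimate, 1))
# x = 12 gives actual 464 (matching the table) vs estimate about 492
```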
|
Graph with no even cycles has each edge in at most one cycle? As the title says, I am trying to show that if $G$ is a finite simple graph with no even cycles, then each edge is in at most one cycle.
I'm trying to do this by contradiction: let $e$ be an edge of $G$, and for contradiction suppose that $e$ was in two distinct cycles $C_1$ and $C_2$ of $G$. By assumption, $C_1$ and $C_2$ must have odd length. Now I would like to somehow patch $C_1$ and $C_2$ together to obtain a cycle of even length, but I'm not sure how to do so. If $C_1$ and $C_2$ only overlap in $e$, or perhaps in a single path containing $e$, then this can be done, but I can't see how to make this patching work when $C_1$ and $C_2$ overlap in disjoint paths.
Any help is appreciated!
|
Here is the rough idea:
Suppose $C_1$ and $C_2$ overlap in at least two disjoint paths. If we follow $C_1$ along the end of one path to the beginning of the next path, and then follow $C_2$ back to the end of the first path, we obtain a cycle $C_3$. Since this cycle must have odd length, the parity of the two parts must be different. This means I can change the parity of the length of $C_1$ by following the $C_2$ part of $C_3$ instead of the $C_1$ part. This is a contradiction, as $C_1$ has odd length.
Explicitly, let $a$ be the last vertex of one path contained in both $C_1$ and $C_2$, and let $b$ be the first vertex of the next path contained in both $C_1$ and $C_2$. Let $C_3$ be the cycle obtained by following $C_1$ from $a$ to $b$, and then following $C_2$ back to $a$. Since $C_3$ has odd length, the parity of the length along $C_1$ from $a$ to $b$ must be different from the parity of the length of $C_2$ from $a$ to $b$. It is this difference in parity that allows us to modify $C_1$. That is, let $C_1'$ be the cycle that agrees with $C_1$ except for the path from $a$ to $b$, where it agrees with $C_2$. Then $C_1'$ will have even length.
|
Given basis spanning the vector space I am learning Linear algebra nowadays. I had a small doubt and I know it's an easy one. But still I am not able to get it. Recently I came across a statement saying "((1,2),(3,5)) is a basis of $ F^2 $ ".
According to the statement a linear combination of the vectors in the list,i.e., $a(1,2)+b(3,5)$ (where a and b belong to F of course) must span the vector space $F^2$. I wanted to know how we can get all the vectors in the vector space using linear combination of these two given vectors here (if the two vectors were ((1,0),(0,1)) which is the standard basis, it would have been fine).
But how can we get all the points in the vector space $F^2$ using $((1,2),(3,5))$? Suppose I want to get the vector $(1,1)$: I can't think of a way to get it using a linear combination of the given two vectors.
Kindly help me with this. I know it's an easy one, but I still could not work it out and had to ask you guys. Thanks.
|
Note that if you can get to $(1,0)$ and $(0,1)$ you can get anywhere. So, can you find linear combinations $a(1,2) + b(3,5)$ that get you to these two vectors? Note that you can expand the equation, say, $a(1,2) + b(3,5) = (1,0)$ into two equations with two unknowns by looking at each coordinate separately.
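Carrying out that computation numerically (NumPy sketch; the basis vectors are the matrix columns):

```python
# Solve a*(1,2) + b*(3,5) = (1,1).
import numpy as np

M = np.array([[1.0, 3.0], [2.0, 5.0]])
coeffs = np.linalg.solve(M, np.array([1.0, 1.0]))
print(coeffs)   # [-2.  1.], i.e. (1,1) = -2*(1,2) + 1*(3,5)
```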
|
Why does this summation of ones give this answer? I saw this in a book and I don't understand it.
Suppose we have nonnegative integers $0 = k_0<k_1<...<k_m$ - why is it that
$$\sum\limits_{j=k_i+1}^{k_{i+1}}1=k_{i+1}-k_i?$$
|
Because $1+1+\cdots+1=n$ if there are $n$ ones.
So $j$ going from $k_i+1$ to $k_{i+1}$ is the same as going from $1$ to $k_{i+1}-k_i$ since the summand doesn't depend on $j$. There are $k_{i+1}-k_i$ ones in the list.
|
CINEMA : Mathematicians I know that a similar question has been asked about mathematics documentaries in general, but I would like some recommendations on films specifically about various mathematicians (male and or female).
What would be nice is if you'd recommend something about not just the famous ones but also the not so famous ones.
Note$_1$: If you happen to find a film on a transgender mathematician, well that's just great too. I'm a very progressive person with a very open brain.
Note$_2$: Throughout my life I've seen most of the mainstream films, but it'd be nice to hear from the world what things I may have missed.
Note$_3$: If you have the ability to search Nets outside of the limitations of Google's Nets, then maybe you'll find some foreign films or something, as, for example, in Switzerland.
|
The real man portrayed in A Beautiful Mind was not only an economist but a mathematician who published original discoveries in math. There is also a recent film bio of Alan Turing. Sorry, I forget the title.
|
the following inequality is true, but I can't prove it The inequality $$\sum_{k=1}^{2d}\left(1-\frac{1}{2d+2-k}\right)\frac{d^k}{k!}>e^d\left(1-\frac{1}{d}\right)$$ holds for all integers $d\geq 1$. I used a computer to verify it for $d\leq 50$ and found it to be true, but I can't prove it. Thanks for your answer.
|
Sorry I didn't check this sooner: the problem was cross-posted at
mathoverflow and I eventually was able to give a version of David Speyer's
analysis that makes it feasible to compute many more coefficients of the
asymptotic expansion (I reached the $d^{-13}$ term, and could go further),
and later to give error bounds good enough to prove the inequality for $d \geq 14$,
at which point the previous numerical computations complete the proof.
See
https://mathoverflow.net/questions/133028/the-following-inequality-is-truebut-i-cant-prove-it/133123#133123
|
limit of evaluated automorphisms in a Banach algebra Let $\mathcal{A}=\operatorname{M}_k(\mathbb{R})$ be the Banach algebra of $k\times k$ real matrices and let $(U_n)_{n\in\mathbb{N}}\subset\operatorname{GL}_k(\mathbb{R})$ be a sequence of invertible elements such that $U_n\to 0$ as $n\to\infty$. Define $\sigma_n\in\operatorname{Aut}(\mathcal{A})$ via $X\mapsto U_nXU_n^{-1}$. Suppose I have a sequence $(W_n)_{n\in\mathbb{N}}\subset\mathcal{A}$ such that $W_n\to W\in\mathcal{A}$ as $n\to\infty$. I would like to determine $\lim_{n\to\infty}\sigma_n(W_n)$.
My question is how can I approach such a problem? It looks like something that should have a general answer (for $\mathcal{A}$ not necessarily finite-dimensional Banach algebra over the reals) in the theory of operator algebras, but I have a rather poor background there. If it is something relatively easy, I would rather appreciate a hint or reference than a full answer, so I can work it out further on my own (I am just trying to get back on the math track after some time of troubles).
Thanks in advance for any help!
|
This limit need not exist. For example, let's work in $M_2(\mathbb R)$.
If
$$
U_n=
\left(
\begin{array}{cc}
\frac{1}{n} & 0 \\
0 & \frac{1}{n^2}
\end{array}
\right),
$$
then $\Vert U_n \Vert \to 0$ as $n \to \infty$, and
$$
U_n^{-1}=
\left(
\begin{array}{cc}
{n} & 0 \\
0 & {n^2}
\end{array}
\right).
$$
If we now let
$$
W_n=
\left(
\begin{array}{cc}
0 & \frac{1}{\sqrt{n}} \\
0 & 0
\end{array}
\right),
$$
then $W_n \to 0$ as $n \to \infty$, but
$$
\sigma_n (W_n) = U_n W_n U_n^{-1}=
\left(
\begin{array}{cc}
0 & \sqrt{n} \\
0 & 0
\end{array}
\right),
$$
which does not converge as $n \to \infty$.
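A numerical illustration of the blow-up (NumPy sketch):

```python
# The operator norm of sigma_n(W_n) is sqrt(n), even though ||W_n|| -> 0.
import numpy as np

for n in (100, 10000):
    U = np.diag([1.0/n, 1.0/n**2])
    W = np.array([[0.0, 1.0/np.sqrt(n)], [0.0, 0.0]])
    S = U @ W @ np.linalg.inv(U)
    print(n, np.linalg.norm(S, 2))   # prints 10.0, then 100.0
```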
|
Is this notation for Stokes' theorem? I'm trying to figure out what $\iint_R \nabla\times\vec{F}\cdot d\textbf{S}$ means. I have a feeling that it has something to do with the classical Stokes' theorem. The Stokes' theorem that I have says
$$
\int\limits_C W_{\vec{F}} = \iint\limits_S \Phi_{\nabla\times\vec{F}}
$$
where $\vec{F}$ is a vector field, $W_{\vec{F}}$ is the work form of $\vec{F}$, and $\Phi_{\nabla\times\vec{F}}$ is the flux form of the curl of $\vec{F}$. Is the notation in question the same as the RHS of the above equation?
|
It seems to me that the integrals $$\int\limits_C W_{\vec{F}}~~~~\text{and}~~~~\oint_{\mathfrak{C}}\vec{F}\cdot d\textbf{r}$$ have the same meaning. I don't know the notation $ \Phi_{\nabla\times\vec{F}}$, but if it means $$\textbf{curl F}\cdot \hat{\textbf{N}} ~dS$$ then your answer is yes.
|
How to prove $(a-b)^3 + (b-c)^3 + (c-a)^3 -3(a-b)(b-c)(c-a) = 0$ without calculations I read somewhere that I can prove this identity below with abstract algebra in a simpler and faster way without any calculations, is that true or am I wrong?
$$(a-b)^3 + (b-c)^3 + (c-a)^3 -3(a-b)(b-c)(c-a) = 0$$
Thanks
|
To Prove:
$$(a-b)^3 + (b-c)^3 + (c-a)^3 =3(a-b)(b-c)(c-a)$$
we know, $x^3 + y^3 = (x + y)(x^2 - xy + y^2)$
so, $$(a-b)^3 + (b-c)^3 = (a -c)((a-b)^2 - (a-b)(b-c) + (b-c)^2)$$
now, $$(a-b)^3 + (b-c)^3 + (c-a)^3 = (a -c)\left((a-b)^2 - (a-b)(b-c) + (b-c)^2\right) + (c-a)^3 = (c-a)\left(-(a-b)^2 + (a-b)(b-c)- (b-c)^2 +(c-a)^2\right)$$
now, $(c-a)^2 - (a-b)^2 = (c-a+a-b)(c-a-a+b) = (c-b)(c-2a+b)$
the expression becomes,
$$(c-a)((c-b)(c-2a+b) + (b-c)(a-2b+c)) = (c-a)(b-c)(-c+2a-b+a-2b+c)=3(c-a)(b-c)(a-b)$$
Hence proved
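A one-line symbolic verification (SymPy sketch):

```python
# Expanding the identity gives zero.
from sympy import symbols, expand

a, b, c = symbols('a b c')
expr = (a - b)**3 + (b - c)**3 + (c - a)**3 - 3*(a - b)*(b - c)*(c - a)
print(expand(expr))   # 0
```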
|
Is there a dimension which extends to negative or even irrational numbers? Just elaborating on the question:
We all use to natural numbers as dimensions: 1 stands for a length, 2 for area, 3 for volume and so on. Hausdorff–Besicovitch extends dimensions to any positive real number.
So my question: is there any dimension which extends this notion further (negative numbers or even irrational numbers)? If so, what are the examples, how it can be used?
|
The infinite lattice is a fractal of negative dimension: if you scale the infinite lattice on a line by 2x, it becomes 2x less dense, so 2 scaled lattices compose one non-scaled lattice. If you take a lattice on a plane, scaling by 2x makes it 4x less dense, so that 4 scaled lattices compose one non-scaled, etc.
|
Period of derivative is the period of the original function Let $f:I\to\mathbb R$ be a differentiable and periodic function with prime/minimum period $T$ (it is $T$-periodic) that is, $f(x+T) = f(x)$ for all $x\in I$. It is clear that
$$
f'(x) = \lim_{h\to 0}\frac{f(x+h)-f(x)}{h} = \lim_{h\to 0} \frac{f(x+T+h) - f(x+T)}{h} = f'(x+T),
$$
but how to prove that $f'$ has the same prime/minimum period $T$? I suppose that there exist $\tilde T < T$ such that $f'(x+\tilde T) = f'(x)$ for all $x\in I$ but can't find the way to get a contradiction.
|
One solution is to note that $f(x)$ has an associated Fourier series, and since the derivative of a sinusoid of any frequency is another sinusoid of the same frequency, the Fourier series of the derivative has all the same nonconstant sinusoidal terms as the original (each coefficient is scaled by a nonzero factor; only the constant term disappears, and a constant does not affect the period).
Thus, the derivative must have the same frequency as the original function.
|
Help me trace the following proposition In a paper an author proved the following proposition.
Please help me trace the proof of the following proposition.
Proposition: Let $f$ be a homeomorphism of a connected topological manifold $M$ with fixed point set $F$. Then either $(1)$
$f$ is invariant on each component of $M-F$, or $(2)$ there are exactly two components and $f$ interchanges them.
and after that he said:
In the case of $(2)$, the above argument shows that $F$ cannot contain an open set and
hence $\dim F\leq (\dim M) -1$; since $F$ separates $M$, we have $\dim F = (\dim M) -1$.
G. Bredon has shown that if $M$ is also orientable
then any involution with an odd codimensional fixed point set must reverse the
orientation; hence we obtain
Let $f$ be an orientation-preserving homeomorphism of an orientable
manifold $M$; then $f$ is invariant on each component of $M-F$.
Can you tell me what $\dim F$ means here? Is $F$ always a submanifold under the above conditions?
And how can we deduce that $\dim F = n-1$?
|
For the first question, here is a sketch that $F$ is a submanifold under the assumption that the group $G$ acting on $M$ is finite:
Consider the map $M \to \prod_{g \in G} M,\ m \mapsto (gm)_{g \in G}$. This is smooth and should be a local homeomorphism, hence it is regular. The diagonal $\{(m,\ldots,m) \mid m \in M\}$ of the product is a submanifold, hence its preimage is a submanifold of $M$, and the preimage is exactly the fixed point set.
For the second question, we have that $M$ is connected but $M - F$, with $F$ being a submanifold, is not connected. Intuitively it is clear that a submanifold dividing a manifold into connected components must have codimension 1, but I cannot think of a proof right now. Maybe one could work with path-connectedness?
|
Find Gross from Net and Percentage I would like to know if a simple calculation exists that would allow me to determine how much money I need to gross to receive a certain net amount. For example, if my tax rate were 30%, and my goal was to take home $700, I would need to have a gross salary of $1000.
|
Suppose your tax rate is $r$, written in percent. If you want your net to be $N$, then we want a gross of:
$$G=\frac{100N}{100-r}$$
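A direct transcription of the formula (Python sketch with hypothetical names):

```python
# Gross needed for a target net at a given tax rate in percent.
def gross(net, rate_percent):
    return 100 * net / (100 - rate_percent)

print(gross(700, 30))   # 1000.0
```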
|
Ratio between trigonometric sums: $\sum_{n=1}^{44} \cos n^\circ/\sum_{n=1}^{44} \sin n^\circ$ What is the value of this trigonometric sum ratio: $$\frac{\displaystyle\sum_{n=1}^{44} \cos n^\circ}{\displaystyle \sum_{n=1}^{44} \sin n^\circ} = \quad ?$$
The answer is given as $$\frac{\displaystyle\sum_{n=1}^{44} \cos n^\circ}{\displaystyle \sum_{n=1}^{44} \sin n^\circ} \approx \displaystyle \frac{\displaystyle\int_{0}^{45}\cos n^\circ dn}{\displaystyle\int_{0}^{45}\sin n^\circ dn} = \sqrt{2}+1$$
Using the fact $$\displaystyle \sum_{n = 1}^{44}\cos\left(\frac{\pi}{180}\cdot n\right)\approx\int_0^{45}\cos\left(\frac{\pi}{180}\cdot x\right)\, dx $$
My question is that I did not understand the last line of this solution.
Please explain it to me in detail. Thanks.
|
The last line in the argument you give could say
$$
\sum_{n=1}^{44} \cos\left(\frac{\pi}{180}n\right)\,\Delta n \approx \int_1^{44} \cos n^\circ\, dn.
$$
Thus the Riemann sum approximates the integral. The value of $\Delta n$ in this case is $1$, and if it were anything but $1$, it would still cancel from the numerator and the denominator.
Maybe what you didn't follow is that $n^\circ = n\cdot\dfrac{\pi}{180}\text{ radians}$?
The identity is ultimately reducible to the known tangent half-angle formula
$$
\frac{\sin\alpha+\sin\beta}{\cos\alpha+\cos\beta}=\tan\frac{\alpha+\beta}{2}
$$
and the rule of algebra that says that if
$$
\frac a b=\frac c d,
$$
then this common value is equal to
$$
\frac{a+c}{b+d}.
$$
Just iterate that a bunch of times, until you're done.
Thus
$$
\frac{\sin1^\circ+\sin44^\circ}{\cos1^\circ+\cos44^\circ} = \tan 22.5^\circ
$$
and
$$
\frac{\sin2^\circ+\sin43^\circ}{\cos2^\circ+\cos43^\circ} = \tan 22.5^\circ
$$
so
$$
\frac{\sin1^\circ+\sin2^\circ+\sin43^\circ+\sin44^\circ}{\cos1^\circ+\cos2^\circ+\cos43^\circ+\cos44^\circ} = \tan 22.5^\circ
$$
and so on.
Now let's look at $\tan 22.5^\circ$. If $\alpha=0$ then the tangent half-angle formula given above becomes
$$
\frac{\sin\beta}{1+\cos\beta}=\tan\frac\beta2.
$$
So
$$
\tan\frac{45^\circ}{2} = \frac{\sin45^\circ}{1+\cos45^\circ} = \frac{\sqrt{2}/2}{1+(\sqrt{2}/2)} = \frac{\sqrt{2}}{2+\sqrt{2}} = \frac{1}{\sqrt{2}+1}.
$$
In the last step we divided the top and bottom by $\sqrt{2}$.
What you have is the reciprocal of this.
Postscript four years later: In my answer I explained why the answer that was "given" was right, but I forgot to mention that $$ \frac{\displaystyle\sum_{n=1}^{44} \cos n^\circ}{\displaystyle \sum_{n=1}^{44} \sin n^\circ} = \sqrt{2}+1 \text{ exactly, not just approximately.} $$ The reason why the equality is exact is in my answer, but the explicit statement that it is exact is not.
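A numerical check of the exact value (Python sketch):

```python
# The ratio of the two sums equals sqrt(2) + 1 up to rounding error.
import math

num = sum(math.cos(math.radians(n)) for n in range(1, 45))
den = sum(math.sin(math.radians(n)) for n in range(1, 45))
print(num / den, math.sqrt(2) + 1)
```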
|
Algebra simplification in mathematical induction. I was proving some mathematical induction problems and came across an algebraic expression, as follows:
$$\frac{k(k+1)(2k+1)}{6} + (k + 1)^2$$
The final answer is supposed to be:
$$\frac{(k+1)(k+2)(2k+3)}{6}$$
I walked through every possible expansion; I combined like terms, simplified, and factored, but never arrived at the answer.
Could someone explain the steps?
|
First, let's write the expression as a sum of fractions with a common denominator.
$$\dfrac{k(k+1)(2k+1)}{6} + (k + 1)^2 = \dfrac{k(k+1)(2k+1)}{6} + \dfrac{6(k+1)^2}{6}\tag{1}$$
Now expand $6(k+1)^2 = 6k^2 + 12k + 6\tag{2}$ and expand
$k(k+1)(2k+1) = k(2k^2 + 3k + 1) = 2k^3 + 3k^2 + k\tag{3}$
So now, $(1)$ becomes $$\dfrac{k(k+1)(2k+1)}{6} + \dfrac{6(k+1)^2}{6} = \dfrac{(2k^3 + 3k^2 + k) + (6k^2 +12 k + 6)}{6} $$ $$= \dfrac{\color{blue}{\bf 2k^3 + 9k^2 +13k + 6}}{6}\tag{4}$$
We can factor the numerator in $(4)$, or we can expand the numerator of our "goal"...
$$\frac{(k+1)(k+2)(2k+3)}{6} = \dfrac{\color{blue}{\bf 2k^3 + 9k^2 + 13k + 6}}{6}\tag{goal}$$
|
What leads us to believe that 2+2 is equal to 4? My professor of Epistemological Basis of Modern Science discipline was questioning about what we consider knowledge and what makes us believe or not in it's reliability.
To test us, he asked us to write down our justifications for why we accept as true that 2 plus 2 is equal to 4. Everybody, including me, answered that we believe it because we can prove it: I can take 2 beans and 2 more beans, and in the end I will have 4 beans. But the professor told us: "And if all the beans in the universe disappear?", and of course he can extend this to any object we choose for the proof. What he was trying to show us is that the logical-mathematical universe is independent of our universe.
I was pretty delighted with this question and I want to go deeper. I have already read about the Peano axioms and the Zermelo-Fraenkel axioms, although I think the answer that I am looking for can't be explained by an axiom.
It is a complicated question for me, very confusing, but try to understand: what I want is the background process, the gears of addition. For instance, you can say that $a+0=a$ and then say $a+1 = a+S(0) = S(a+0) = S(a)$, although that doesn't show what the addition operation itself is. Can addition be represented graphically? Like rows that rotate, or lines that join?
Summarizing, I think my question is: How can I understand addition, not only learn how to do it, not just reproduce what teachers had taught to me like a machine. How can I make a mental construct of this mathematical operation?
|
I've always liked this approach, that a naming precedes a counting.
|
Probability puzzle - the 3 cannons (Apologies if this is the wrong venue to ask such a question, but I don't understand how to arrive at a solution to this math puzzle).
Three cannons are fighting each other.
Cannon A hits 1/2 of the time. Cannon B hits 1/3 the time. Cannon C hits 1/6 of the time.
Each cannon fires at the current "best" cannon. So B and C will start shooting at A, while A will shoot at B, the next best. Cannons die when they get hit.
Which cannon has the highest probability of survival? Why?
Clarification: B and C will start shooting at A.
|
A has the greatest chance of survival.
Consider the three possible scenarios for the first round:
On the first trial, define $a$ as the probability that A gets knocked out, $b$ is the probability that B gets knocked out, and $c$ is the probability that C gets knocked out.
Since both B and C are firing at A, the probability of A getting knocked out is:
$$a=\frac{1}{3}+\frac{1}{6}=\frac{1}{2}$$
Only A is firing at B, so the probability of B getting knocked out is:
$$b=\frac{1}{2}$$
No one is firing at C, so there is no chance of C being knocked out in the first round:
$$c=0$$
The probability of A or B being knocked out first is therefore even.
Now on the second round, there are one of two possibilities: A and C are left to duel, or B and C are left to duel.
Between A and C, the probability of A defeating C is $\frac{\frac{1}{2}}{\frac{1}{2}+\frac{1}{6}}=0.75$, and the probability of C defeating A is $0.25$.
Between B and C, the probability of B defeating C is $\frac{\frac{1}{3}}{\frac{1}{3}+\frac{1}{6}}=\frac{2}{3}$, and the probability of C defeating B is $\frac{1}{3}$.
Finally, assess the total probability of victory for each cannon, using the rule of product:
Probability of A winning:
$0.75*0.5=0.375$
Probability of B winning:
$\frac{2}{3}*0.5=\frac{1}{3}$
Probability of C winning:
$0.25*0.5+\frac{1}{3}*0.5\approx0.2917$
So it's close, but A has the greatest chance of survival.
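A short script reproducing the arithmetic of this answer's simplified model (the sequential-elimination and "first to hit wins the duel" assumptions are this answer's, not a full simulation of simultaneous fire):

```python
# Survival probabilities under the two-stage model described above.
pA_hit, pB_hit, pC_hit = 1/2, 1/3, 1/6

p_B_dies_first = 1/2                       # A fires at B
p_A_dies_first = 1/3 + 1/6                 # B and C fire at A

p_A_beats_C = pA_hit / (pA_hit + pC_hit)   # 0.75
p_B_beats_C = pB_hit / (pB_hit + pC_hit)   # 2/3

pA = p_B_dies_first * p_A_beats_C
pB = p_A_dies_first * p_B_beats_C
pC = p_B_dies_first * (1 - p_A_beats_C) + p_A_dies_first * (1 - p_B_beats_C)
print(pA, pB, pC)   # 0.375, 0.333..., 0.291...
```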
|
Stirling Binomial Polynomial Let $\{\cdot\}$ denote Stirling Numbers of the second kind. Let $(\cdot)$ denote the usual binomial coefficients. It is known that $$\sum_{j=k}^n {n\choose j} \left\{\begin{matrix} j \\ k \end{matrix}\right\} = \left\{\begin{matrix} n+1 \\ k+1 \end{matrix}\right\}.$$ Note: The indexes for $j$ aren't really needed since the terms are zero when $j>n$ or $j<k$.
How do I calculate $$\sum_{j=k}^n 4^j{n\choose j} \left\{\begin{matrix} j \\ k \end{matrix}\right\} = ?$$ I have been trying to think of this sum as some special polynomial (maybe a Bell polynomial of some kind) that has been evaluated at 4.
I have little knowledge of Stirling Numbers in the context of polynomials. Any help would be appreciated; even a reference to a comprehensive book on Stirling Numbers and polynomials.
|
It appears we can give another derivation of the closed form by @vadim123 for the
sum $$q_n = \sum_{j=k}^n m^j {n\choose j} {j \brace k}$$
using the bivariate generating function of the Stirling numbers of the
second kind. This computation illustrates generating function techniques as presented in Wilf's generatingfunctionology as well as the technique of annihilating coefficient extractors.
Recall the species for set partitions which is
$$\mathfrak{P}(\mathcal{U} \mathfrak{P}_{\ge 1}(\mathcal{Z}))$$
which gives the generating function
$$G(z, u) = \exp(u(\exp(z)-1)).$$
Introduce the generating function
$$Q(z) = \sum_{n\ge k} q_n \frac{z^n}{n!}.$$
We thus have
$$Q(z) = \sum_{n\ge k} \frac{z^n}{n!}
\sum_{j=k}^n m^j {n\choose j} {j \brace k}.$$
Substitute $G(z, u)$ into the sum to get
$$Q(z) = \sum_{n\ge k} \frac{z^n}{n!}
\sum_{j=k}^n m^j {n\choose j}
j! [z^j] \frac{(\exp(z)-1)^k}{k!}
\\ = \sum_{j\ge k} m^j \left([z^j] \frac{(\exp(z)-1)^k}{k!}\right)
\sum_{n\ge j} j! \frac{z^n}{n!} {n\choose j}
\\ = \sum_{j\ge k} m^j \left([z^j] \frac{(\exp(z)-1)^k}{k!}\right)
\sum_{n\ge j} \frac{z^n}{(n-j)!}
\\= \sum_{j\ge k} m^j \left([z^j] \frac{(\exp(z)-1)^k}{k!}\right)
z^j \sum_{n\ge j} \frac{z^{n-j}}{(n-j)!}
\\ = \exp(z)
\sum_{j\ge k} m^j z^j \left([z^j] \frac{(\exp(z)-1)^k}{k!}\right).$$
Observe that the sum annihilates the coefficient extractor, producing
$$Q(z) = \exp(z)\frac{(\exp(mz)-1)^k}{k!}.$$
Extracting coefficients from $Q(z)$ we get
$$q_n = \frac{n!}{k!}
[z^n] \exp(z)
\sum_{q=0}^k {k\choose q} (-1)^{k-q} \exp(mqz)
\\ = \frac{n!}{k!}
[z^n] \sum_{q=0}^k {k\choose q} (-1)^{k-q} \exp((mq+1)z)
= \frac{n!}{k!}
\sum_{q=0}^k {k\choose q} (-1)^{k-q} \frac{(mq+1)^n}{n!}
\\ = \frac{1}{k!}
\sum_{q=0}^k {k\choose q} (-1)^{k-q} (mq+1)^n.$$
Note that when $m=1$ $Q(z)$ becomes
$$\exp(z)\frac{(\exp(z)-1)^k}{k!}
= \frac{(\exp(z)-1)^{k+1}}{k!}
+ \frac{(\exp(z)-1)^k}{k!}$$
so that
$$n!\,[z^n] Q(z) = (k+1){n\brace k+1} + {n\brace k}
= {n+1\brace k+1},$$
which can also be derived using a very simple combinatorial argument.
Addendum.
Here is another derivation of the formula for $Q(z).$
Observe that when we multiply two exponential generating functions of
the sequences $\{a_n\}$ and $\{b_n\}$ we get that
$$ A(z) B(z) = \sum_{n\ge 0} a_n \frac{z^n}{n!}
\sum_{n\ge 0} b_n \frac{z^n}{n!}
= \sum_{n\ge 0}
\sum_{k=0}^n \frac{1}{k!}\frac{1}{(n-k)!} a_k b_{n-k} z^n\\
= \sum_{n\ge 0}
\sum_{k=0}^n \frac{n!}{k!(n-k)!} a_k b_{n-k} \frac{z^n}{n!}
= \sum_{n\ge 0}
\left(\sum_{k=0}^n {n\choose k} a_k b_{n-k}\right)\frac{z^n}{n!}$$
i.e. the product of the two generating functions is the generating
function of $$\sum_{k=0}^n {n\choose k} a_k b_{n-k}.$$
(I have included this derivation in several of my posts.)
Now in the present case we have
$$A(z) = \sum_{j\ge k} {j\brace k} m^j \frac{z^j}{j!}
\quad\text{and}\quad
B(z) = \sum_{j\ge 0} \frac{z^j}{j!} = \exp(z).$$
Evidently $A(z)$ is just the exponential generating function for set
partitions into $k$ sets evaluated at $mz,$ so we get
$$A(z) = \frac{(\exp(mz)-1)^k}{k!}$$
and with $Q(z) = A(z) B(z)$ the formula for $Q(z)$ follows.
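A sketch verifying the closed form against the original sum (assuming SymPy's `stirling(n, k)`, which defaults to the second kind):

```python
# Compare the direct sum with the derived closed form for m = 4.
from sympy import binomial, factorial
from sympy.functions.combinatorial.numbers import stirling

def direct(n, k, m):
    return sum(m**j * binomial(n, j) * stirling(j, k) for j in range(k, n + 1))

def closed(n, k, m):
    return sum(binomial(k, q) * (-1)**(k - q) * (m*q + 1)**n
               for q in range(k + 1)) / factorial(k)

for n in range(1, 8):
    for k in range(1, n + 1):
        assert direct(n, k, 4) == closed(n, k, 4)
print("verified for m = 4, n < 8")
```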
|
Initial and terminal objects in $\textbf{FinVect}_R$ I am teaching myself category theory and I am having difficulties identifying the initial and terminal object of the category of $\textbf{FinVect}_R$.
I was thinking that because it is finite vectors then the initial and terminal should be the same object ( since they are finite, we can operate in those vectors until we reached that last one since it is finite).
Any help will be greatly appreciated.
|
I believe the terminal and initial object are both the zero-dimensional vector space $0$.
There is one map from $0$ to any other vector space $V$, since we must send $0$ to $0_V$. This follows from the definition of a linear map, and linear maps are the morphisms in this category. So $0$ is the initial object.
Similarly, there is exactly one morphism from any vector space $V$ to $0$: the map sending all elements to $0$. So $0$ satisfies the definition of terminal object.
Note that this argument is independent of the base field. So you could consider the category of vector spaces over $\mathbb C$, for example, and the answer would be the same.
|
Which are functions of bounded variations? Let $f, g : [0, 1] \to \mathbb{R}$ be defined as follows:
$f(x) = x^2 \sin (1/x)$ if $x \neq 0$, $f(0)=0$
$g(x) = \sqrt{x} \sin (1/x)$ if $x \neq 0$, $g(0) = 0$.
Which of these are functions of bounded variation? Is every polynomial on a compact interval of BV?
Could anyone just tell me what the main criterion is for seeing whether a function is of BV? A bounded derivative?
|
Yes, a bounded derivative implies BV. I explained this in your older question "which condition says that $f$ is necessarily bounded variation". Since $f$ has a bounded derivative, it is in BV.
A function with unbounded derivative could also be in BV, for example $\sqrt{x}$ on $[0,1]$ is BV because it's monotone. More generally, a function with finitely many maxima and minima on an interval is BV.
But $g$ has an unbounded derivative and infinitely many maxima and minima on the interval. In such a situation you should look at the peaks and troughs of its graph and try to estimate the sum of differences $\sum |\Delta f_i|$ between them. It is not necessary to precisely locate the maxima and minima. The fact that
$$g((\pi/2+2\pi n)^{-1})=(\pi/2+2\pi n)^{-1/2},\quad g((3\pi/2+2\pi n)^{-1})=-(3\pi/2+2\pi n)^{-1/2}$$
gives you enough information about $g$ to conclude it is not BV.
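A numerical sketch of the divergence along exactly these points, where $\sin(1/x)$ alternates between $\pm 1$:

```python
# At x_k = 1/(pi/2 + k*pi) we have g(x_k) = (-1)^k * sqrt(x_k); the variation
# partial sums over these points grow without bound (roughly like sqrt(N)).
import numpy as np

k = np.arange(0, 100000)
xk = 1.0 / (np.pi/2 + k*np.pi)
g = np.sqrt(xk) * np.sin(1.0/xk)
partial_variation = np.abs(np.diff(g)).sum()
print(partial_variation)   # keeps growing as more points are added
```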
|
Evaluate $\int_0^\infty \frac{\log(1+x^3)}{(1+x^2)^2}dx$ and $\int_0^\infty \frac{\log(1+x^4)}{(1+x^2)^2}dx$ Background: Evaluation of $\int_0^\infty \frac{\log(1+x^2)}{(1+x^2)^2}dx$
We can prove using the Beta-Function identity that
$$\int_0^\infty \frac{1}{(1+x^2)^\lambda}dx=\frac{\sqrt{\pi}}{2}\frac{\Gamma \left(\lambda-\frac{1}{2} \right)}{\Gamma(\lambda)} \quad \lambda>\frac{1}{2}$$
Differentiating the above equation with respect to $\lambda$, we obtain an expression involving the Digamma Function $\psi_0(z)$.
$$\int_0^\infty \frac{\log(1+x^2)}{(1+x^2)^\lambda}dx = \frac{\sqrt{\pi}}{2}\frac{\Gamma \left(\lambda-\frac{1}{2} \right)}{\Gamma(\lambda)} \left(\psi_0(\lambda)-\psi_0 \left( \lambda-\frac{1}{2}\right) \right)$$
Putting $\lambda=2$, we get
$$\int_0^\infty \frac{\log(1+x^2)}{(1+x^2)^2}dx = -\frac{\pi}{4}+\frac{\pi}{2}\log(2)$$
Question:
But, does anybody know how to evaluate $\displaystyle \int_0^\infty \frac{\log(1+x^3)}{(1+x^2)^2}dx$ and $\displaystyle \int_0^\infty \frac{\log(1+x^4)}{(1+x^2)^2}dx$?
Mathematica gives the values
*
*$\displaystyle \int_0^\infty \frac{\log(1+x^3)}{(1+x^2)^2}dx = -\frac{G}{6}+\pi \left(-\frac{3}{8}+\frac{1}{8}\log(2)+\frac{1}{3}\log \left(2+\sqrt{3} \right) \right)$
*$\displaystyle \int_0^\infty \frac{\log(1+x^4)}{(1+x^2)^2}dx = -\frac{\pi}{2}+\frac{\pi \log \left( 6+4\sqrt{2}\right)}{4}$
Here, $G$ denotes the Catalan's Constant.
Initially, my approach was to find closed forms for
$$\int_0^\infty \frac{1}{(1+x^2)^2(1+x^3)^\lambda}dx \ \ , \int_0^\infty \frac{1}{(1+x^2)^2(1+x^4)^\lambda}dx$$
and then differentiate them with respect to $\lambda$ but it didn't prove to be of any help.
Please help me prove these two results.
|
I hope it is not too late. Define
\begin{eqnarray}
I(a)=\int_0^\infty\frac{\log(1+ax^4)}{(1+x^2)^2}dx.
\end{eqnarray}
Then
\begin{eqnarray}
I'(a)&=&\int_0^\infty \frac{x^4}{(1+ax^4)(1+x^2)^2}dx\\
&=&\frac{1}{(1+a)^2}\int_0^\infty\left(-\frac{2}{1+x^2}+\frac{1+a}{(1+x^2)^2}+\frac{1-a+2ax^2}{1+a x^4}\right)dx\\
&=&\frac{1}{(1+a)^2}\left(-\pi+\frac{1}{4}(1+a)\pi+\frac{(1-a)\pi}{2\sqrt2a^{1/4}}+\frac{\pi a^{1/4}}{\sqrt2}\right)\\
&=&\frac{\pi}{4(1+a)^2}\left(a-3+\frac{\sqrt2(1-a)}{a^{1/4}}+2\sqrt2 a^{1/4}\right).
\end{eqnarray}
and hence
\begin{eqnarray}
I(1)&=&\int_0^1\frac{\pi}{4(1+a)^2}\left(a-3+\frac{\sqrt2(1-a)}{a^{1/4}}+2\sqrt2 a^{1/4}\right)da\\
&=&-\frac{\pi}{2}+\frac{\pi}{4}\log(6+4\sqrt2).
\end{eqnarray}
For the other integral, we can do the same thing to define
$$ J(a)=\int_0^\infty\frac{\log(1+ax^3)}{(1+x^2)^2}dx. $$
The calculation is similar and more complicated and here I omit the detail.
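As a sanity check on the result for $I(1)$ (a sketch, assuming SciPy is installed):

```python
import numpy as np
from scipy.integrate import quad

# Numerically integrate log(1 + x^4)/(1 + x^2)^2 over (0, inf) and compare
# with the closed form -pi/2 + (pi/4) log(6 + 4 sqrt(2)).
val, _ = quad(lambda x: np.log(1 + x**4) / (1 + x**2)**2, 0, np.inf)
closed = -np.pi/2 + (np.pi/4) * np.log(6 + 4*np.sqrt(2))
print(val, closed)   # both are approximately 0.358
```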
|
The principal ideal $(x(x^2+1))$ equals its radical. Let $\mathbb R$ be the reals and $\mathbb R[x]$ be the polynomial ring in one variable with real coefficients. Let $I$ be the principal ideal $(x(x^2+1))$. I want to prove that the ideal of the ideal's variety is not the same as its radical, that is, $I(V(I))\neq\text{rad}(I)$. I've reduced this to proving that $I=\text{rad}(I)$. How can I go about that?
|
This $x(x^2 + 1)$ is the factorisation of that polynomial into irreducibles (over $\mathbb{R}$). Once you have such a factorisation, the radical is the factorisation with no powers. So it is radical.
EDIT — I am referencing this:
Let $f \in k[x]$ be a polynomial, and suppose that $f = f_1^{\alpha_1} \cdots f_n^{\alpha_n}$ is the factorisation of $f$ into irreducibles. Then $\sqrt{(f)} = (f_1 \ldots f_n)$.
One inclusion should be clear, and the other can be seen by appealing to the uniqueness of factorisation in $k[x]$.
|
If a graph has $n$ vertices and $n$ edges, must it contain a cycle? How can one prove that a graph with $n$ vertices and $n$ edges must contain a cycle?
|
Here's is an approach which does not use induction:
Let $G$ be a graph with $n$ vertices and $n$ edges.
Keep removing vertices of degree $1$ from $G$ until no such removal is possible, and let $G'$ denote the resulting graph. Each removal deletes exactly $1$ vertex and $1$ edge, so $G'$ always has equally many vertices as edges. In particular $G'$ cannot be empty: otherwise, before the last removal we'd have a graph with $1$ vertex and $1$ edge, which is impossible in a simple graph.
Therefore the minimum degree in $G'$ is at least $2$, which implies that $G'$ has a cycle.
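The argument translates directly into a procedure. Here is an illustrative sketch in Python (the graph representation and function name are my own; the code also drops isolated vertices, which does not affect whether a cycle exists):

```python
from collections import defaultdict

def has_cycle(edges):
    """Trim vertices of degree <= 1; a nonempty core remains iff a cycle exists."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if len(adj[v]) <= 1:           # degree-0 or degree-1 vertex
                for u in adj[v]:
                    adj[u].discard(v)
                del adj[v]
                changed = True
    return bool(adj)                        # nonempty core <=> cycle

print(has_cycle([(0, 1), (1, 2), (2, 0), (2, 3)]))   # True: 4 vertices, 4 edges
```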
|
How many samples are required to estimate the frequency of occurrence of an output (out of 8 different outputs)? I have $N$ marbles and to each of them corresponds a 1 or 2 or 3 or ... or 8.(i.e., there's 8 different kinds of marbles)
How many samples are required to estimate the frequency of occurrence of each kind (within a given confidence interval and confidence level)?
( If the answer is lengthy, a hint or a link to an online reference suffices.)
|
This is quite straightforward in socio-economic statistics. An example in health care is given here. There is also a general calculator.
Hope this helps.
|
Find the area: $(\frac xa+\frac yb)^2 = \frac xa-\frac yb , y=0 , b>a$ Find the area: $$\left(\frac xa+\frac yb\right)^2 = \frac xa-\frac yb,$$ $ y=0 , b>a$
I work in generalized polar coordinates:
$x=a\cdot r\cdot \cos(\phi)\;\;,$
$y=b\cdot r\cdot \sin(\phi)$
Then I get the equation but don't know what to do with it, because $a$ and $b$ disappear.
What are the conditions $y=0,\ b>a$ for? How do I define the limits of integration?
|
Your shape for $a=1$, $b=2$ is as below
It is much easier if you would use line parametrization rather than polar coordinates. Let $y=m\,x$ then
$$\bigg(\frac xa+\frac{mx}b\bigg)^2=\frac xa-\frac {mx}b\Rightarrow x(m)=\frac{1/a-m/b}{\big(1/a+m/b\big)^2}$$
and
$$y(m)=m\,x(m)=m\frac{1/a-m/b}{\big(1/a+m/b\big)^2}$$
To find the limits you must determine where $y(m)$ becomes zero due to boundary of $y=0$
$$y(m)=m\frac{1/a-m/b}{\big(1/a+m/b\big)^2}\Rightarrow m=0\text{ and }m=b/a$$
Since $x(m)$ is zero as $m=b/a$ the integration will be from $b/a$ to $0$. Therefore
$$A=\int_{b/a}^0ydx=\int_{b/a}^0y(m)\frac{dx}{dm}dm=\int_{b/a}^0m\frac{1/a-m/b}{\big(1/a+m/b\big)^2}\frac{a^2b(a\,m-3b)}{(am+b)^3}dm=\frac{ab}{12}$$
|
when Fourier transform function in $\mathbb C$? The Fourier transform of a function $f\in\mathscr L^1(\mathbb R)$ is
$$\widehat f\colon\mathbb R\rightarrow\mathbb C, x\mapsto\int_{-\infty}^\infty f(t)\exp(-ixt)\,\textrm{d}t$$
When is this genuinely a complex-valued function? In most calculations you end up with real-valued functions. When is the result complex-valued?
Add: I know there are results like $\frac{e^{ait}-e^ {-ait}}{2i}=\sin(at)$ multiplied by 'anything', but I am asking for a function which you cannot write as a function in $\mathbb R$.
|
There is a basic fact about the Fourier transform on the Schwartz space: for $f\in \mathcal{S}(\mathbb{R})$ we have $\widehat{f'}(x) = ix\widehat{f}(x)$. Thus, if $\widehat{f}$ is real-valued and not identically zero, then $\widehat{f'}$ takes non-real values.
|
distributing z different objects among k people almost evenly We have z objects (all different), and we want to distribute them among k people ( k < = z ) so that the distribution is almost even.
i.e. the difference between the number of articles given to the person with maximum articles, and the one with minimum articles is at most 1.
We need to find the total number of ways in which this can be done
for example if there are 5 objects and 3 people the number of such ways should be 90.
I am not sure how we get this value.
|
Each person will get either $\lfloor \frac{z}{k}\rfloor$ or $\lceil \frac{z}{k}\rceil$ objects. (These are the floor and ceiling functions.) For $z=5$, $k=3$ the group sizes must be $2,2,1$ in some order: choose who gets the single object ($3$ ways), then choose the objects ($\binom{5}{2}\binom{3}{2}=30$ ways), giving $3\cdot 30=90$.
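For small cases this count can be confirmed by brute force. A quick sketch in Python (the function name is mine):

```python
from itertools import product

def count_almost_even(z, k):
    lo, hi = z // k, -(-z // k)                  # floor(z/k) and ceil(z/k)
    count = 0
    for owner in product(range(k), repeat=z):    # owner[i] = person of object i
        sizes = [owner.count(p) for p in range(k)]
        if all(s in (lo, hi) for s in sizes):
            count += 1
    return count

print(count_almost_even(5, 3))   # 90, matching the example
```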
|
Is it possible to derive all the other boolean functions by taking other primitives different of $NAND$? I was reading the TECS book (The Elements of Computing Systems), in the book, we start to build the other logical gates with a single primitive logical gate, the $NAND$ gate. With it, we could easily make the $NOT$ gate, then the $AND$ gate and then the $OR$ gate.
With the $NOT$, $AND$ and $OR$ gates, we could express any truth table through cannonical representation.
The book has the table given below, I was wondering if we could do the same by taking any other boolean function as primitive, I'm quite sure that it's not possible to do with both constant $0$ nor constant $1$. What about the others?
|
As you noted, it's impossible for the constants to generate all of the boolean functions; the gates which are functions of a single input also can't do it, for equally-obvious reasons (it's impossible to generate the boolean function $f(x,y)=y$ from either $f_1(x,y)=x$ or $f_2(x,y)=\bar{x}$).
OR and AND can't do it either, for a slightly more complicated reason — it's impossible to generate nothing from something! (More specifically, neither one of them is able to represent negation, and in particular neither one can represent a constant function — this is because any function composed of only ORs and ANDs will always return 0 when its inputs are all-0s and will return 1 when its inputs are all-1s.)
XOR is a bit more complicated: the constant 0 is easy to generate ($x\oplus x$), but every composition of XORs computes a parity (linear) function of its inputs, so the 'parity preserving' nature of XOR makes it impossible to represent $x\wedge y$ with XOR alone.
In fact, it can be shown that NOR and NAND are the only two-input boolean functions which suffice in and of themselves; for more details, have a look at the Wikipedia page on functional completeness.
|
Given $\frac {a\cdot y}{b\cdot x} = \frac CD$, find $y$. That's a pretty easy one... I have the following equality : $\dfrac {a\cdot y}{b\cdot x} = \dfrac CD$ and I want to leave $y$ alone so I move "$b\cdot x$" to the other side
$$a\cdot y= \dfrac {(C\cdot b\cdot x)}{D}$$
and then "$a$"
$$y=\dfrac {\dfrac{(C\cdot b\cdot x)}D} a.$$
Where is my mistake? I should be getting $y= \dfrac {(b\cdot C\cdot x)}{(a\cdot D)}$.
I know that the mistake I am making is something very stupid, but can't work it out. Any help? Cheers!
|
No mistake was made. Observe that:
$$
y=\dfrac{\left(\dfrac{Cbx}{D}\right)}{a}=\dfrac{Cbx}{D} \div a = \dfrac{Cbx}{D} \times \dfrac{1}{a}=\dfrac{Cbx}{Da}=\dfrac{bCx}{aD}
$$
as desired.
|
Representation problem: I don't understand the setting of the question! (From Serre's book) Ex 2.8 of Serre's book "Linear Representations of Finite Groups" says: Let $\rho:G\to \mathrm{GL}(V)$ be a representation ($G$ finite and $V$ complex, finite dimensional) and $V=W_1\oplus W_1 \oplus \dotsb \oplus W_2 \oplus \dotsb \oplus W_2\oplus \dotsb \oplus W_k$ be its explicit decomposition into irreducible subrepresentations. We know $W_i$ is only determined up to isomorphism, but $V_i := W_i\oplus \dotsb \oplus W_i$ is uniquely determined.
The question asks: Let $H_i$ be the vector space of linear mappings $h:W_i\to V_i$ such that $\rho_s h = h\rho_s$ for all $s\in G$. Show that $\dim H_i = \dim V_i / \dim W_i$ etc...
But I don't even understand what $\rho_s h = h\rho_s$ means in this case: to make sense of $h\rho_s$, don't we need to first fix some decomposition $W_i\oplus \dotsb \oplus W_i$, and consider $\rho$ restricted to one of these $W_i$? Is that what the question wants? But then how do I make sense of $\rho_s h$?
|
I'm not sure I understand what is worrying you, but each $W_{i}$ is a $G$-submodule, so for any $w \in W_{i}$, $w\rho_{s} \in W_{i}.$ Then we can apply the map $h,$ so $(w\rho_{s})h$ is an element of $V_{i}.$ On the other hand, $wh \in V_{i}.$ We know that $V_{i}$ is also a $G$-submodule, so $(wh)\rho_{s}$ is an element of $V_{i}.$ The question asks you to consider those maps $h$ such that these two resulting elements of $V_{i}$ coincide for each $w \in W_{i}$ and each $s \in G.$ (I am assuming that $\rho_{s}$ means the linear transformation associated to $s \in G$ by the representation $\rho$.)
Clarification: Suppose that $V_{i}$ is a direct sum of $n_{i}$ isomorphic irreducible submodules. Strictly speaking, these could be labelled $W_{i_{1}},\ldots ,W_{i_{n_{i}}}.$ The intention is to look at maps from $W_{i_{1}}$ to $V_{i},$ but the resulting dimensions would be the same if any $W_{i_{j}}$ was used.
|
Projections open but not closed I often read that projections are open but generally not closed. Unfortunately I do not have a counterexample for "not closed" available. Does anybody of you guys have one?
|
The example which is mentioned by David Mitra shows that the projections are not closed:
The projection $p: \mathbb R^2 \rightarrow \mathbb R$ of the plane $ \mathbb R^2 $ onto the $x$-axis is not closed. Indeed, the set $\color{red}{F}=\{(x,y)\in \mathbb R^2 : xy=1\}$ is closed in $\mathbb R^2 $ and yet its image $\color{blue}{p(F)}= \mathbb R \setminus \{0\}$ is not closed in $\mathbb R$.
|
Logic Circuits And Equations Issue - Multiply Binary Number By 3 I am trying to build a logic circuit that multiplies any 4 digit binary number by 3.
I know that multiplying/dividing a number by 2 shifts the digits left/right, but what do I do to multiply by 3?
*
*How to extract the equations that multiply the digits?
*Do I need to use a full adder?
I would like to get some advice. Thanks!
EDIT
I would like to get comments about my circuit for multiplying a 4-digit binary number by 3.
|
The easiest way I see is to note that $3n=2n+n$: copy $n$, shift the copy one bit to the left, and add it back to $n$.
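Here is a bit-level sketch of that idea (my own Python emulation, not a circuit netlist): the two addends are $n$ and $n$ shifted left by one, and each bit position is combined with the standard full-adder equations, exactly as a ripple of full adders would do in hardware.

```python
def times_three(bits):
    """bits[0] is the least significant bit of n; returns the bits of 3n."""
    a = bits + [0, 0]          # n, padded to the width of the result
    b = [0] + bits + [0]       # n shifted one position to the left (= 2n)
    out, carry = [], 0
    for x, y in zip(a, b):     # one full adder per bit position
        out.append(x ^ y ^ carry)
        carry = (x & y) | (carry & (x ^ y))
    return out

print(times_three([0, 0, 1, 0]))   # 4 -> 12: [0, 0, 1, 1, 0, 0]
```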
|
Recursive Sequence Tree Problem (Original Research in the Field of Comp. Sci) This question appears also in https://cstheory.stackexchange.com/questions/17953/recursive-sequence-tree-problem-original-research-in-the-field-of-comp-sci. I was told that cross-posting in this particular situation could be approved, since the question can be viewed from many angles.
I am a researcher in the field of computer science. In my research I have the following problem, which I have been thinking for quite a while now.
I think the problem is best explained through an example, so first assume this kind of a tree structure:
1, 2, 3, 4, 5, 6, 7, 8
/ \
6, 8, 10, 12 -4, -4, -4, -4
/ \ / \
   16, 20      -4, -4      -8, -8      0, 0
/ \ / \ / \ / \
36 -4 -8 0 -16 0 0 0
The root of the tree is always some sequence $s = (s_0, ..., s_{N-1})$ where $N = 2^p$ for some $p \in \mathbb{N}, p>2$. Please note that I am looking for a general solution to this, not just for sequences of the form $1, 2, ..., 2^p$. As you can see, the tree is defined in a recursive manner: the left node is given by $left(k)=root(k)+root(\frac{N}{2}+k), \quad 0 \leq k < \frac{N}{2}$
and the right node by
$right(k)=root(k)-root(\frac{N}{2}+k), \quad 0 \leq k < \frac{N}{2}$
So, for example, (6 = 1+5, 8 = 2+6, 10 = 3+7, 12 = 4+8) and (-4 = 1-5, -4 = 2-6, -4 = 3-7, -4 = 4-8) would give the second level of the tree.
I am only interested in the lowest level of the tree, i.e., the sequence (36, -4, -8, 0, -16, 0, 0, 0). If I compute the tree recursively, the computational complexity will be $O(N \log N)$. That is a little slow for the purpose of the algorithm. Is it possible to calculate the last level in linear time?
If a linear-time algorithm is possible, and you find it, I will add you as an author to the paper the algorithm will appear in. The problem constitutes about 1/10 of the idea/content in the paper.
If a linear-time algorithm is not possible, I will probably need to reconsider other parts of the paper, and leave this out entirely. In such a case I can still acknowledge your efforts in the acknowledgements. (Or, if the solution is a contribution from many people, I could credit the whole math SE community.)
|
This is more of a comment, but it's too big for the comment block. An interesting note on Kaya's matrix $\mathbf{M}$: I believe that it can be defined recursively for any value of $p$. (I should note here that this is my belief. I have yet to prove it...)
That is, let $\mathbf{M}_p$ be the matrix for the value of $p$ (here, let's remove the bound on $p\gt2$).
Let $\mathbf{M}_1 = \begin{pmatrix}1 & 1 \\ 1 & -1\end{pmatrix}$.
Then $\mathbf{M}_n = \begin{pmatrix} \mathbf{M}_{n-1} & \mathbf{M}_{n-1} \\ \mathbf{M}_{n-1} & -\mathbf{M}_{n-1}\end{pmatrix}$.
Ah Ha! Thanks to some searches based off of Suresh Venkat's answer, I found that this matrix is called the Walsh Matrix. Multiplying this matrix by a column vector of your first sequence provides a column vector of the bottom sequence.
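To make this concrete, here is a minimal sketch (assuming NumPy) of the recursive construction, checked against the example tree from the question. Multiplying by the full matrix costs $O(N^2)$ operations, while the tree recursion in the question is exactly the $O(N\log N)$ fast Walsh–Hadamard transform.

```python
import numpy as np

def walsh(p):
    M = np.array([[1]])
    for _ in range(p):                      # M_n from M_{n-1}, as above
        M = np.block([[M, M], [M, -M]])
    return M

print(walsh(3) @ np.arange(1, 9))           # [ 36  -4  -8   0 -16   0   0   0]
```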
As a side note, this makes an almost fractal-like pattern when colored. :)
The above is for $p=4$.
EDIT: I'm almost sure I've seen a graphic similar to the one above before. If someone recognizes something similar, that would be great...
|
Is the set $\{(x, y) : 3x^2 − 2y^ 2 + 3y = 1\}$ connected? Is the set $\{(x, y)\in\mathbb{R}^2 : 3x^2 − 2y^ 2 + 3y = 1\}$ connected?
I have checked that it is a hyperbola, hence disconnected; am I right?
|
Your set $S$ contains the points $(0,{1\over2})$ and $(0,1)$, but does not contain any points on the line $y={3\over4}$. Therefore $$S\cap\{(x,y)|y<{3\over4}\},\quad S\cap\{(x,y)|y>{3\over4}\}$$
is a partition of $S$ into two nonempty disjoint subsets, each (relatively) open in $S$. This shows that $S$ is not connected.
|
Finding a counterexample; quotient maps and subspaces Let $X$ and $Y$ be two topological spaces and $p: X\to Y$ be a quotient map.
If $A$ is a subspace of $X$, then the map $q:A\to p(A)$ obtained by restricting $p$ need not be a quotient map. Could you give me an example when $q$ is not a quotient map?
|
M.B. has given an example that shows that the restriction of a quotient map to an open subspace need not be a quotient map. Here is an example showing that the restriction to a saturated set also need not. Restrictions to an open saturated set however always works.
Let $X=[0,2],\ \ B=(0,1],\ \ A=\{0\}\cup(1,2]$ with the euclidean topology. Then the identification $q:X\to X/A$ is a quotient map, but $q:B\to q(B)$ is not.
Here is why: As Ronald Brown wrote, $q:B\to q(B)$ is a quotient map iff each subset of $B$ which is saturated and open in $B$ is the intersection of $B$ with a saturated open set in $X$. Now $U=\left(\frac12,1\right]$ is open and saturated in $B=(0,1]$, but if it were the intersection of $B$ and an open saturated $V$, then this $V$ would intersect $A$ and, since it is saturated, it would contain $0$, and then again by openness it had to contain $[0,\epsilon)$ for some $\epsilon>0$. So the intersection of $V$ and $B$ can never be $U$.
|
$\vec{u}+\vec{v}-\vec{w},\;\vec{u}-\vec{v}+\vec{w},\;-\vec{u}+\vec{v}+\vec{w} $ are linearly independent if and only if $\vec{u},\vec{v},\vec{w}$ are I'm consufed: how can I prove that $$\vec{u} + \vec{v} - \vec{w} , \qquad \vec{u} - \vec{v} + \vec{w},\qquad - \vec{u} + \vec{v} + \vec{w} $$ are linearly independent vectors if, and only if $\vec{u}$, $\vec{v}$ and $\vec{w}$ are linearly independent?
Ps: sorry for my poor English!
|
Call the three vectors $a, b, c$ then you find $2u=a+b, 2v=a+c, 2w=b+c$
If there is a linear dependence in $a, b, c$ the equations translate into a (nontrivial) dependence in $u, v, w$ and vice versa - the two sets of vectors are related by an invertible linear transformation.
|
describe the domain of a function $f(x, y)$ Describe the domain of the function $f(x,y) = \ln(7 - x - y)$. I have the answer narrowed down but I am not sure if it would be $\{(x,y) \mid y ≤ -x + 7\}$ or $\{(x,y) \mid y < -x + 7\}$ please help me.
|
Nice job on the inequality part of things; the only confusion you seem to have is with respect to whether to include the case $y = 7 - x$. But note that $$f(x, 7 - x) = \ln \left[7 - x -(7 - x)\right] = \ln 0$$ but $\;\ln (0)\;$ is not defined, hence $y = 7 - x$ cannot be included in the domain!
So we want the domain to be one with the strict inequality:
$$\{(x,y) | y < -x + 7\}$$
|
Prove: $D_{8n} \not\cong D_{4n} \times Z_2$.
Prove $D_{8n} \not\cong D_{4n} \times Z_2$.
My trial:
I tried to show that $D_{16}$ is not isomorphic to $D_8 \times Z_2$ by making a contradiction as follows:
Suppose $D_{4n}$ is isomorphic to $D_{2n} \times Z_2$, so $D_{8}$ is isomorphic to $D_{4} \times Z_2$. If $D_{16}$ is isomorphic to $D_{8} \times Z_2 $, then $D_{16}$ is isomorphic to $D_{4} \times Z_2 \times Z_2 $, but there is not Dihedral group of order $4$ so $D_4$ is not a group and so $D_{16}\not\cong D_8\times Z_2$, which gives us a contradiction. Hence, $D_{16}$ is not isomorphic to $D_{8} \times Z_2$.
I found a counterexample for the statement, so it's not true in general, or at least it's not true in this case.
__
Does this proof make sense or is it mathematically wrong?
|
$D_{8n}$ has an element of order $4n$, but the maximal order of an element in $D_{4n} \times \mathbb{Z}_2$ is $2n$.
|
Conditional Expected Value and distribution question
The distribution of loss due to fire damage to a warehouse is:
$$
\begin{array}{r|l}
\text{Amount of Loss (X)} & \text{Probability}\\
\hline
0 & 0.900 \\
500 & 0.060 \\
1,000 & 0.030\\
10,000 & 0.008 \\
50,000 & 0.001\\
100,000 & 0.001 \\
\end{array}
$$
Given that a loss is greater than zero, calculate the expected amount of the loss.
My approach is to apply the definition of expected value:
$$E[X \mid X>0]=\sum\limits_{x_i}x_i \cdot p(x_i)=500 \cdot 0.060 + 1,000 \cdot 0.030 + \cdots + 100,000 \cdot 0.001=290$$
I am off by a factor of 10--The answer is 2,900. I am following the definition of expected value, does anyone know why I am off by a factor of $1/10$?
Should I be doing this instead???
$E[X \mid X>0] = \sum\limits_{x_i > 0} x_i \cdot \cfrac{\Pr[X = x_i]}{\Pr(X > 0)}$
Thanks.
|
You completely missed the word "given". That means you want a conditional probability given the event cited. In other words, your second option is right.
For example the conditional probability that $X=500$ given that $X>0$ is
$$
\Pr(X=500\mid X>0) = \frac{\Pr(X=500\ \&\ X>0)}{\Pr(X>0)} = \frac{\Pr(X=500)}{\Pr(X>0)} = \frac{0.060}{0.1} = 0.6.
$$
|
Help with a trig-substitution integral I'm in the chapter of trigonometric substitution for integrating different functions. I'm having a bit of trouble even starting this homework question:
$$\int \frac{(x^2+3x+4)\,dx} {\sqrt{x^2-4x}}$$
|
In order to make a proper substitution in integral calculus, the function that you are substituting must have a unique inverse function. However, there is a case where the derivative is present and you can make what I refer to as a "virtual substitution". This is not exactly the case here; we have to do other algebraic manipulations. Trigonometric functions such as sine and cosine and their variants have infinitely many inverse branches; the inverse trigonometric functions (i.e. arcsine, arccosine, etc.) have a unique inverse function, and thus are fine. For example, if I made the substitution $y = \sin x$ (where $-1\le y\le 1$), then $x = (-1)^n \cdot \arcsin y + n\pi$ ($n \in \mathbb Z$): this does not work without restricting the domain. If anyone disagrees with my statement, please prove that the substitution is proper. Also, in my opinion, turning a rational/algebraic function into a transcendental function is ridiculous. There are very elementary ways to approach this integral; a good book to read on many of these methods is Calculus Made Easy by Silvanus P. Thompson.
|
A recursive formula for $a_n$ = $\int_0^{\pi/2} \sin^{2n}(x)dx$, namely $a_n = \frac{2n-1}{2n} a_{n-1}$ Where does the $\frac{2n-1}{2n}$ come from?
I've tried using integration by parts and got $\int \sin^{2n}(x)dx = \frac {\cos^3 x}{3} +\cos x +C$, which doesn't have any connection with $\frac{2n-1}{2n}$.
Here's my derivation of $\int \sin^{2n}(x)dx = \frac {\cos^3 x}{3} +\cos x +C$:
$\int\sin^{2n+1}x\,dx=\int(1-\cos^2x)\sin x\,dx=\int -(1-u^2)du=\int(u^2-1)du=\frac{u^3}{3}+u+C=\frac{\cos^3x}{3}+\cos x +C$
where
$u=\cos x$;$du=-\sin x dx$
Credits to Xiang, Z. for the question.
|
Given the identity $$\int \sin^n x dx = - \frac{\sin^{n-1} x \cos x}{n}+\frac{n-1}{n}\int \sin^{n-2} xdx$$ plugging in $2n$ yields $$\int \sin^{2n} x dx = - \frac{\sin^{2n-1} x \cos x}{2n}+\frac{2n-1}{2n}\int \sin^{2n-2} xdx$$
Since
$$\int_0^{\pi/2} \sin^{2n} x dx = - \frac{\sin^{2n-1} x \cos x}{2n}|_0^{\pi/2}+\frac{2n-1}{2n}\int_0^{\pi/2} \sin^{2n-2} xdx$$ and $\frac{\sin^{2n-1} x \cos x}{2n}|_0^{\pi/2}=0$ for $n \ge 1$, we get $$\int_0^{\pi/2} \sin^{2n} x dx = \frac{2n-1}{2n}\int_0^{\pi/2} \sin^{2n-2} xdx$$
(We only care about $n \ge 1$ because in the original question, $a_0=\frac{\pi}{2}$ is given and only integer values of n with $n \ge 1$ need to satisfy $a_n=\frac{2n-1}{2n}a_{n-1}$.)
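A quick numerical check of the recursion (a sketch, assuming SciPy):

```python
import numpy as np
from scipy.integrate import quad

def a(n):
    # a_n = integral of sin(x)^(2n) over [0, pi/2]
    return quad(lambda x: np.sin(x)**(2*n), 0, np.pi/2)[0]

for n in range(1, 5):
    print(a(n), (2*n - 1) / (2*n) * a(n - 1))   # the two columns should match
```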
|
Has $X\times X$ the following property? Let $X$ be a topological space that satisfies the following condition at each point $x$:
For every open set $U$ containing $x$, there exists an open set $V$ with compact boundary such that $x\in V\subseteq U$.
Does $X\times X$ also have that property?
|
Attempted proof: For every open set $W$ of $X \times X$ and each point $(x_1,x_2)\in W$, there exist open sets $U_1$ and $U_2$ of $X$ such that $(x_1,x_2)\in U_1 \times U_2 \subset W.$ Then, by hypothesis, there exist open sets $V_1$ and $V_2$ with compact boundaries such that $V_1\subseteq U_1$ and $V_2\subseteq U_2$.
Next we would like the boundary of $V_1 \times V_2$ to be compact. In fact, $Fr (V_1 \times V_2)= \overline{V_1 \times V_2} \setminus (V_1 \times V_2)=\overline{V_1} \times \overline{V_2} \setminus (V_1 \times V_2)=((\overline{V_1}\setminus V_1)\times \overline{V_2}) \cup (\overline{V_1 }\times (\overline{V_2}\setminus V_2))=(Fr(V_1) \times \overline{V_2}) \cup (\overline{V_1 }\times Fr(V_2))$.
However, we cannot be sure that $(\overline{V_1}\setminus V_1)\times \overline{V_2}$ or $\overline{V_1 }\times (\overline{V_2}\setminus V_2)$ is compact unless $\overline{V_1}$ and $\overline{V_2}$ are compact.
|
If $f^2$ is differentiable, how pathological can $f$ be? Apologies for what's probably a dumb question from the perspective of someone who paid better attention in real analysis class.
Let $I \subseteq \mathbb{R}$ be an interval and $f : I \to \mathbb{R}$ be a continuous function such that $f^2$ is differentiable. It follows by elementary calculus that $f$ is differentiable wherever it is nonzero. However, considering for instance $f(x) = |x|$ shows that $f$ is not necessarily differentiable at its zeroes.
Can the situation with $f$ be any worse than a countable set of isolated singularities looking like the one that $f(x) = |x|$ has at the origin?
|
To expand on TZakrevskiy's answer, we can use one of the intermediate lemmas from the proof of Whitney extension theorem.
Theorem (Existence of regularised distance) Let $E$ be an arbitrary closed set in $\mathbb{R}^d$. There exists a function $f$, continuous on $\mathbb{R}^d$, and smooth on $\mathbb{R}^d\setminus E$, a large constant $C$, and a family of large constants $B_\alpha$ ($C$ and $B_\alpha$ being independent of the choice of the function $f$) such that
*
*$C^{-1} f(x) \leq \mathrm{dist}(x,E)\leq Cf(x)$
*$|\partial^\alpha f(x)| \leq B_\alpha~ \mathrm{dist}(x,E)^{1 - |\alpha|}$ for any multi-index $\alpha$.
(See, for example, Chapter VI of Stein's Singular Integrals and Differentiability Properties of Functions.)
Property 1 ensures that if $x\in \partial E$ the boundary, $f$ is not differentiable at $x$. On the other hand, it also ensures that $f^2$ is differentiable on $E$. Property 2, in particular, guarantees that $f^2$ is differentiable away from $E$.
So we obtain
Corollary Let $E\subset \mathbb{R}^d$ be an arbitrary closed set with empty interior, then there exists a function $f$ such that $f^2$ is differentiable on $\mathbb{R}^d$, $f$ vanishes precisely on $E$, and $f$ is not differentiable on $E$.
|
How to show that if $A$ is compact, $d(x,A)= d(x,a)$ for some $a \in A$? I really think I have no talents in topology. This is a part of a problem from Topology by Munkres:
Show that if $A$ is compact, $d(x,A)= d(x,a)$ for some $a \in A$.
I always have the feeling that it is easy to understand the problem emotionally but hard to express it in math language. I am a student in Economics and I DO LOVE MATH. I really want to learn math well, could anyone give me some advice. Thanks so much!
|
Let $f : A \longrightarrow \mathbb{R}$ be defined by $a \mapsto d(x, a)$, where $\mathbb{R}$ carries the order topology induced by the relation $<$.
For every open interval $(b, c)$ in $\mathbb{R}$, $f^{-1}((b, c)) = \{a \in A \mid d(x, a) > b\} \cap \{a \in A \mid d(x, a) < c\}$, an open set. Therefore $f$ is continuous.
(Munkres) Theorem 27.4: Let $f : X \longrightarrow Y$ be continuous, where $Y$ is an ordered set in the order topology. If $X$ is compact, then there exist points $c$ and $d$ in $X$ such that $f(c) \leq f(x) \leq f(d)$ for every $x \in X$.
By Theorem 27.4, $\exists\, r \in A$ with $d(x, r) = \inf\{d(x, a) \mid a \in A\}$.
Therefore $d(x, A) = d(x, r)$ for some $r \in A$.
|
How $\frac{1}{n}\sum_{i=1}^n X_i^2 - \bar X^2 = \frac{\sum_{i=1}^n (X_i - \bar X)^2}{n}$ How
$\frac{1}{n}\sum_{i=1}^n X_i^2 - \bar X^2 = \frac{\sum_{i=1}^n (X_i - \bar X)^2}{n}$
I have tried to do that by the following procedure:
$\frac{1}{n}\sum_{i=1}^n X_i^2 - \bar X^2$
=$\frac{1}{n}(\sum_{i=1}^n X_i^2 - n\bar X^2)$
=$\frac{1}{n}(\sum_{i=1}^n X_i^2 - \sum_{i=1}^n\bar X^2)$
=$\frac{1}{n} \sum_{i=1}^n (X_i^2 - \bar X^2)$
Then I got stuck.
|
I think it is cleaner to expand the right-hand side. We have
$$(X_i-\bar{X})^2=X_i^2-2X_i\bar{X}+(\bar{X})^2.$$
Sum over all $i$, noting that $2\sum X_i\bar{X}=2n\bar{X}\bar{X}=2n(\bar{X})^2$ and $\sum (\bar{X})^2=n(\bar{X})^2$.
There is some cancellation. Now divide by $n$ and we get the left-hand side.
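If you want a numerical sanity check of the identity, here is a sketch assuming NumPy:

```python
import numpy as np

X = np.random.default_rng(0).normal(size=10)
lhs = (X**2).mean() - X.mean()**2
rhs = ((X - X.mean())**2).mean()
print(lhs, rhs)   # equal up to floating-point rounding
```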
|
lim sup and sup inequality Is it true that for a sequence of functions $f_n$
$$\limsup_{n \rightarrow \infty }f_n \leq \sup_{n} f_n$$
I tried to search for this result, but I couldn't find it, so maybe my understanding is wrong and this does not hold.
|
The inequality
$$\limsup_{n\to\infty}a_n\leq\sup_{n\in\mathbb{N}}a_n$$
holds for any real numbers $a_n$, because the definition of $\limsup$ is
$$\limsup_{n\to\infty}a_n:=\lim_{m\to\infty}\left(\sup_{n\geq m}a_n\right)$$
and for any $m\in\mathbb{N}$, we have
$$\left(\sup_{n\geq m}a_n\right)\leq\sup_{n\in\mathbb{N}}a_n$$
(if the numbers $a_1,\ldots,a_{m-1}$ are less than or equal to the supremum of the others, both sides are equal, and if not, then the right side is larger). Therefore
$$\limsup_{n\to\infty}f_n(x)\leq \sup_{n\in\mathbb{N}}f_n(x)$$
holds for any real number $x$, which is precisely what is meant by the statement
$$\limsup_{n\to\infty}f_n\leq \sup_{n\in\mathbb{N}}f_n.$$
|
recurrence relations for proportional division I am looking for a solution to the following recurrence relation:
$$ D(x,1) = x $$
$$ D(x,n) = \min_{k=1,\ldots,n-1}{D\left({{x(k-a)} \over n},k\right)} \ \ \ \ [n>1] $$
Where $a$ is a constant, $0 \leq a \leq 1$. Also, assume $x \geq 0$.
This formula can be interpreted as describing a process of dividing a value of $x$ among $n$ people: if there is a single person ($n=1$), then he gets all of the value $x$, and if there are more people, they divide $x$ proportionally, with some loss ($a$) in the division process.
For $a=0$ (no loss), it is easy to prove by induction that:
$$ D(x,n) = {x \over n}$$
For $a=1$, it is also easy to see that:
$$ D(x,n) = 0$$
But I have no idea how to solve for general $a$.
Additionally, it is interesting that small modifications to the formula lead to entirely different results. For example, starting the iteration from $k=2$ instead of $k=1$:
$$ D(x,2) = x/2 $$
$$ D(x,n) = \min_{k=2,\ldots,n-1}{D\left({{x(k-a)} \over n},k\right)} \ \ \ \ [n>2] $$
For $a=0$ we get the same solution, but for $a=1$ the solution is:
$$ D(x,n) = {x \over n(n-1)}$$
Again I have no idea how to solve for general $a$.
I created a spreadsheet with some examples, but could not find the pattern.
Is there a systematic way (without guessing) to arrive at a solution?
|
For $x>0$ the answer is $$D(x,n)=\frac{x(1-a)(2-a)\ldots (n-1-a)}{n!}$$
Proof: induction on $n$. Indeed,
$$D(x,n)=\min_{k=1,\ldots,n-1}\frac{x(k-a)(1-a)\ldots (k-1-a)}{k!n}$$
$$=\frac{x}{n}\min_{k=1,\ldots,n-1}\frac{(1-a)\ldots (k-a)}{k!}$$
and it remains to prove
$$\frac{(1-a)\ldots (k-a)}{k!}\geq\frac{(1-a)\ldots (k+1-a)}{(k+1)!},$$
i.e. that $\frac{k+1-a}{k+1}\leq 1$, which is evident since $a\geq0$; so the minimum is attained at $k=n-1$, completing the induction.
For $x=0$ the claim is trivial.
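One can also check the closed form against the recurrence directly. A sketch in Python (plain recursion is fine for the small $n$ used here; `math.prod` needs Python 3.8+):

```python
from math import factorial, prod

def D(x, n, a):
    # the recurrence from the question
    if n == 1:
        return x
    return min(D(x * (k - a) / n, k, a) for k in range(1, n))

def closed(x, n, a):
    # x * (1-a)(2-a)...(n-1-a) / n!
    return x * prod(j - a for j in range(1, n)) / factorial(n)

x, a = 1.0, 0.3
for n in range(1, 8):
    print(n, D(x, n, a), closed(x, n, a))   # the two columns agree
```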
|
How to find out X in a trinomial How can I find out what X equals in this?
$$x^2 - 2x - 3 = 117$$
How would I get started? I'm truly stuck.
|
Hint: 1. $$ax^2 +bx +c=0 \to D=b^2-4ac\ge0 $$$$\ \to\begin{cases}
\color{green}{x_1=\frac{-b+\sqrt{D}}{2a}} \\
\color{red}{x_2=\frac{-b-\sqrt{D}}{2a}} \\
\end{cases} $$$$$$ 2. $$x^2 +bx +c=0\iff\left(x+\frac{b}{2}\right)^2=\frac{b^2}{4}-c\ge0,\quad \text{then } x=\pm\sqrt{\frac{b^2}{4}-c}-\frac{b}{2}$$
3. find $x_1$ and $x_2$ by solving the following system
\begin{cases}
x_1+x_2=\frac{-b}{a} \\
x_1x_2=\frac{c}{a} \\
\end{cases}
For your equation, first move $117$ to the left-hand side to get $x^2-2x-120=0$, i.e. $a=1$, $b=-2$, $c=-120$.
|
Limit of two variables, proposed solution check: $\lim_{(x,y)\to(0,0)}\frac{xy}{\sqrt{x^2+y^2}}$ Does this solution make sense,
The limit in question:
$$
\lim_{(x,y)\to(0,0)}\frac{xy}{\sqrt{x^2+y^2}}
$$
My solution is this:
Suppose, $$ \sqrt{x^2+y^2} < \delta $$ therefore $$xy<\delta^2$$ So by the Squeeze Theorem the limit exists since $$\frac{xy}{\sqrt{x^2+y^2}}<\frac{\delta^2}{\delta}=\delta$$
Is this sufficient?
|
Here's a more direct solution.
We know $|x|,|y|\le\sqrt{x^2+y^2}$, so if $\sqrt{x^2+y^2}<\delta$, then
$$\left|\frac{xy}{\sqrt{x^2+y^2}}\right|\le\frac{\big(\sqrt{x^2+y^2}\big)^2}{\sqrt{x^2+y^2}}=\sqrt{x^2+y^2}<\delta.$$
|
Telephone Number Checksum Problem I am having difficulty solving this problem. Could someone please help me? Thanks
"The telephone numbers in town run from 00000 to 99999; a common error in dialling on a
standard keypad is to punch in a digit horizontally adjacent to the intended one. So on a
standard dialling keypad, 4 could erroneously be entered as 5 (but not as 1, 2, 7, or 8). No
other kinds of errors are made.
It has been decided that a sixth digit will be added to each phone number $abcde$. There
are three different proposals for the choice of $X$:
Code 1: $a + b + c + d +e + X$ $\equiv 0\pmod{2}$
Code 2: $6a + 5b + 4c + 3d + 2e + X$ $\equiv 0\pmod{6}$
Code 3: $6a + 5b + 4c + 3d + 2e + X$ $\equiv 0\pmod{10}$
Out of the three codes given, choose one that can detect a horizontal error and one that cannot detect a horizontal error.
"
|
Code 1 can detect horizontal errors. Let $c$ be the intended digit and $c'$ the mistakenly entered digit (where the digit sits doesn't matter, because the digits are simply added). Any two horizontally adjacent digits consist of one odd and one even number; that is, $c-c'\equiv 1 \pmod 2$. It follows that the sum of the new digits will be
$$
a+b+c'+d+e+X\equiv\\
a+b+c+d+e+X+(c'-c)\equiv\\
0+(c'-c)\equiv\\
1 \pmod 2
$$
Thus, any wrong number will fail the test.
Code 2 cannot detect horizontal errors. Consider, for example, the valid number $123232$ (its weighted sum is $42\equiv0\pmod 6$). The number $223232$ will also pass the test (weighted sum $48$) even though a horizontal error was made in the first digit. In general, this code fails to detect errors in the first digit: changing $a$ by $\pm1$ changes the weighted sum by $\pm 6\equiv 0\pmod 6$.
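Both claims are easy to confirm exhaustively. A sketch in Python (the adjacency table encodes horizontal neighbours on a standard keypad, with $0$ having none; that layout is an assumption from the problem statement):

```python
from itertools import product

ADJ = {0: [], 1: [2], 2: [1, 3], 3: [2], 4: [5], 5: [4, 6],
       6: [5], 7: [8], 8: [7, 9], 9: [8]}

def check1(d):   # d = (a, b, c, d, e, X)
    return sum(d) % 2 == 0

def check2(d):
    return (6*d[0] + 5*d[1] + 4*d[2] + 3*d[3] + 2*d[4] + d[5]) % 6 == 0

def undetected(check):
    misses = 0
    for num in product(range(10), repeat=5):
        X = next(x for x in range(10) if check(num + (x,)))  # valid check digit
        for i in range(5):
            for wrong in ADJ[num[i]]:
                if check(num[:i] + (wrong,) + num[i+1:] + (X,)):
                    misses += 1
    return misses

print(undetected(check1), undetected(check2))   # 0 for Code 1, > 0 for Code 2
```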
|
Probability to open a door using all the keys I have 1 door and 10 keys, exactly one of which opens the door. What is the probability that I only open the door on the last try, if I try the keys one at a time, discarding each key that fails?
|
Solution 1:
Hint: How many permutations are there for the order of the 10 keys?
Hint: How many permutations are there, where the last key is the correct key?
Solution 2:
Hint: The probability that the last key is the correct key, is the same as the probability that the nth key is the correct key. Hence ...
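If a numerical confirmation helps, here is a tiny simulation sketch in Python:

```python
import random

trials, hits = 100_000, 0
for _ in range(trials):
    keys = list(range(10))
    random.shuffle(keys)        # key 0 is the one that opens the door
    if keys[-1] == 0:           # door opens only on the 10th (last) attempt
        hits += 1
print(hits / trials)            # close to 1/10
```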
|
Why is $\log z$ continuous for $x\leq 0$ rather than $x\geq 0$? Explain why Log (principal branch) is only continuous on $$\mathbb{C} \setminus\{x + 0i: x\leq0\}$$ is the question.
However, I can't see why this is. Shouldn't it be $x \geq 0$ instead?
Thanks.
|
The important thing here is that the exponential function is periodic with period $2\pi i$, meaning that $e^{z+2\pi i}=e^z$ for all $z\in\Bbb C$.
If you imagine any stripe $S_a:=\{x+yi\mid y\in [a,a+2\pi)\}$ for any $a\in\Bbb R$, then we get that $z\mapsto e^z$ is actually one-to-one, restricted to $S_a$, (and maps onto $\Bbb C\setminus\{0\}$). A branch of $\log$ then, is basically the inverse of this $\exp|_{S_a}$ (more precisely, the inverse of $\exp|_{{\rm int\,}S_a}$).
Observe that if a sequence $z_n\to x+ai\,$ and another $\,w_n\to x+(a+2\pi)i$ within the stripe $S_a$, then $\lim e^{z_n} = \lim e^{w_n}=:Z$. So, which value of the logarithm should be assigned to $Z$? Is it $x+ai$ on one edge of the stripe, or is it $x+(a+2\pi)i$ on the other edge?
Since we want the logarithm to be continuous, we have to take the interior of the stripe (removing both edges, ${\rm int\,} S_a=\{x+yi\mid y\in (a,a+2\pi)\}$), else by the above, on the border, by continuity we would have
$$x+ai=\log(e^{x+ai})=\log(e^{x+(a+2\pi)i})=x+(a+2\pi)i\,.$$
(For the principal branch, $a=-\pi$ is taken.)
|
Where's the error in this $2=1$ fake proof? I'm reading Spivak's Calculus:
2 What's wrong with the following "proof"? Let $x=y$. Then
$$x^2=xy\tag{1}$$
$$x^2-y^2=xy-y^2\tag{2}$$
$$(x+y)(x-y)=y(x-y)\tag{3}$$
$$x+y=y\tag{4}$$
$$2y=y\tag{5}$$
$$2=1\tag{6}$$
I guess the problem is in $(3)$, it seems he tried to divide both sides by $(x-y)$. The operation would be acceptable in an example such as:
$$12x=12\tag{1}$$
$$\frac{12x}{12}=\frac{12}{12}\tag{2}$$
$$x=1\tag{3}$$
I'm lost at what should be causing this, my naive exploration in the nature of both examples came to the following: In the case of $12x=12$, we have an imbalance: We have $x$ in only one side then operations and dividing both sides by $12$ make sense.
Also, In $\color{red}{12}\color{green}{x}=12$ we have a $\color{red}{coefficient}$ and a $\color{green}{variable}$, the nature of those seems to differ from the nature of
$$\color{green}{(x+y)}\color{red}{(x-y)}=y(x-y)$$
It's like: It's okay to do the thing in $12x=12$, but for doing it on $(x+y)(x-y)=y(x-y)$ we need first to simplify $(x+y)(x-y)$ to $x^2-y^2$.
|
We have $x = y$, so $x - y = 0$.
EDIT: I think I should say more. I'll go through each step:
$x = y \tag{0}$
This is our premise that $x$ and $y$ are equal.
$$x^2=xy\tag{1}$$
Note that $x^2 = xx = xy$ by $(0)$. So completely valid.
$$x^2-y^2=xy-y^2\tag{2}$$
Now we're adding $-y^2$ to both sides of $(1)$, so this is completely valid; since $x=y$, both sides in fact equal $0$, but nothing is wrong here yet.
$$(x+y)(x-y)=y(x-y)\tag{3}$$
$$x+y=y\tag{4}$$
Step $(3)$ is just basic factoring, and it is around here where things begin to go wrong. For $(4)$ to be a valid consequence of $(3)$, I would need $x - y \neq 0$ as otherwise, we would be dividing by $0$. However, this is in fact what we've done as $x=y$ implies that $x - y =0$. So $(3)-(4)$ is where things go wrong.
$$2y=y\tag{5}$$
$$2=1\tag{6}$$
As a consequence of not being careful, we end up with gibberish.
Hope this clarifies more!
|
Is this Expectation finite? How do I prove that
$\int_{0}^{+\infty}\text{exp}(-x)\cdot\text{log}(1+\frac{1}{x})dx$
is finite? (if it is)
I tried through simulation and it seems finite for large intervals. But I don't know how to prove it analytically because I don't know the closed form integral of this product.
I am actually taking expectation over exponential distribution.
Thank you
|
If we write $$\int_{0}^{+\infty}=\int_0^1+\int_1^{\infty}$$ then, with $f(x)=\exp(-x)\log\left(1+\frac{1}{x}\right)$, we see that $$\lim_{x\to 0^+} x^{1/2}f(x)=0<\infty\Longrightarrow\int_0^1f(x)dx~~\text{is convergent}$$ and $$\lim_{x\to +\infty} x^{2}f(x)=0<\infty\Longrightarrow\int_1^{\infty}f(x)dx~~\text{is convergent}$$
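For reassurance, the integral can also be evaluated numerically (a sketch, assuming SciPy; the printed value is for illustration only and is not claimed above):

```python
import numpy as np
from scipy.integrate import quad

# The log singularity at 0 is integrable, so the default quadrature suffices.
val, err = quad(lambda x: np.exp(-x) * np.log(1 + 1/x), 0, np.inf)
print(val, err)   # a finite value with a small error estimate
```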
|
If $\gcd(a,b)= 1$ and $a$ divides $bc$ then $a$ divides $c\ $ [Euclid's Lemma] Well I thought this is obvious. since $\gcd(a,b)=1$, then we have that $a\not\mid b$ AND $a\mid bc$. This implies that $a$ divides $c$. But apparently this is wrong. Help explain why this way is wrong please.
The question says: given two relatively prime numbers $a$ and $b$ such that $a$ divides the product $bc$, prove that $a$ divides $c$. How is this not obvious? Explain to me how my "proof" is not correct.
|
It seems you have in mind a proof that uses prime factorizations, i.e. the prime factors of $\,a\,$ cannot occur in $\,b\:$ so they must occur in $\,c.\,$ You should write out this argument very carefully, so that it is clear how it depends crucially on the existence and uniqueness of prime factorizations, i.e. FTA = Fundamental Theorem of Arithmetic, i.e. $\Bbb Z$ is a UFD = Unique Factorization Domain.
Besides the proof by FTA/UFD one can give more general proofs, e.g. using gcd laws (most notably distributive). Below is one, contrasted with its Bezout special case.
$$\begin{eqnarray} a\mid ac,bc\, &\Rightarrow&\, a\mid (ac,\ \ \ \ bc)\stackrel{\color{#c00}{\rm DL}} = \ (a,\ b)\ c\ = c\quad\text{by the gcd Distributive Law }\color{#c00}{\rm (DL)} \\
a\mid ac,bc\, &\Rightarrow&\, a\mid uac\!+\!vbc = (\color{#0a0}{ua\!+\!vb})c\stackrel{\rm\color{#c00}{B\,I}} = c\quad\text{by Bezout's Identity }\color{#c00}{\rm (B\,I)} \end{eqnarray}$$
since, by Bezout, $\,\exists\,u,v\in\Bbb Z\,$ such that $\,\color{#0a0}{ua+vb} = (a,b)\,\ (= 1\,$ by hypothesis). Notice that the Bezout proof is a special case of the proof using the distributive law. Essentially it replaces the gcd in the prior proof by its linear (Bezout) representation, which has the effect of trading off the distributive law for gcds with the distributive law for integers. However, this comes at a cost of loss of generality. The former proof works more generally in domains having gcds there are not necessarily of such linear (Bezout) form, e.g. $\,\Bbb Q[x,y].\,$ The first proof also works more generally in gcd domains where prime factorizations needn't exist, e.g. the ring of all algebraic integers.
See this answer for a few proofs of the fundamental gcd distributive law, and see this answer, which shows how the above gcd/Bezout proof extends analogously to ideals.
Remark $ $ This form of Euclid's Lemma can fail if unique factorization fails, e.g. let $\,R\subset \Bbb Q[x]\,$ be those polynomials whose coefficient of $\,x\,$ is $\,0.\,$ So $\,x\not\in R.\,$ One easily checks $\,R\,$ is closed under all ring operations, so $\,R\,$ is a subring of $\,\Bbb Q[x].\,$ Here $\,(x^2)^3 = (x^3)^2\,$ is a nonunique factorization into irreducibles $\,x^2,x^3,\,$ which yields a failure of the above form of Euclid's Lemma, namely $\ (x^2,\color{#C00}{x^3}) = 1,\ \ {x^2}\mid \color{#c00}{x^3}\color{#0a0}{ x^3},\ $ but $\ x^2\nmid \color{#0a0}{x^3},\,$ by $\,x^3/x^2 = x\not\in R,\, $ and $\,x^2\mid x^6\,$ by $\,x^6/x^2 = x^4\in R.\ $ It should prove enlightening to examine why your argument for integers breaks down in this polynomial ring. This example shows that the proof for integers must employ some special property of integers that is not enjoyed by all domains. Here that property is unique factorization, or an equivalent, e.g. that if a prime divides a product then it divides some factor.
|
how to determine the following set is countable or not? How to determine whether or not these two sets are countable?
The set A of all functions $f$ from $\mathbb{Z}_{+}$ to $\mathbb{Z}_{+}$.
The set B of all functions $f$ from $\mathbb{Z}_{+}$ to $\mathbb{Z}_{+}$ that are eventually 1.
The first one is easier to determine, since the set of functions from $\mathbb{N}$ to $\{0,1\}$ is uncountable. But how do I handle the second one? Thanks in advance!
|
Let $B_n$ be the set of functions $f\colon \mathbb Z_+\to\mathbb Z_+$ with $f(x)\le n$ for all $x$ and $f(x)=1$ for $x>n$.
Then $$B=\bigcup_{n\in\mathbb N}B_n$$
and $|B_n|=n^n$, so $B$ is a countable union of finite sets, hence countable.
|
The derivative of a linear transformation Suppose $m > 1$. Let $f: \mathbb{R}^n \rightarrow \mathbb{R}^m$ be a smooth map. Consider $f + Ax$ for $A \in \mathrm{Mat}_{m\times n}, x \in \mathbb{R}^n$. Define $F: \mathbb{R}^n \times \mathrm{Mat}_{m\times n} \rightarrow \mathrm{Mat}_{m\times n}$ by $F(x,A) = df_x + A$.
So what is $dF_x$?
(A) Is it $dF(x,A) = d(df_x + A) = d f_x + A$? And therefore,
$$dF(x,A) =\left( \begin{array}{ccc}
\frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_m}{\partial x_1} \\
\vdots & & \vdots \\
\frac{\partial f_1}{\partial x_n} & \cdots & \frac{\partial f_m}{\partial x_n}\end{array} \right) +
\left( \begin{array}{ccc}
a_{11} & \cdots & a_{1m} \\
\vdots & & \vdots \\
a_{n1} & \cdots & a_{nm}\end{array} \right)$$
(B) Or should it be $dF(x,A) = d(df_x + A) = d^2 f_x + I$? And therefore,
$$dF(x,A) =\left( \begin{array}{ccc}
\frac{\partial^2 f_1}{\partial x_1^2} & \cdots & \frac{\partial^2 f_m}{\partial x_1^2} \\
\vdots & & \vdots \\
\frac{\partial^2 f_1}{\partial x_n^2} & \cdots & \frac{\partial^2 f_m}{\partial x_n^2}\end{array} \right) +
\left( \begin{array}{ccc}
1 & \cdots & 0 \\
\vdots & & \vdots \\
0 & \cdots & 1\end{array} \right)$$
Does this look right? Thank you very much.
|
Since, by your definition, $F$ is a matrix-valued function, $DF$ would be a rank-3 tensor with elements
$$
(DF)_{i,j,k} = \frac{\partial^2 f_i(x)}{\partial x_j \partial x_k}
$$
Some authors also define matrix-by-vector and matrix-by-matrix derivatives differently by considering $m \times n$ matrices as vectors in $\mathbb{R}^{mn}$ and "stacking" the resulting partial derivatives.
See this paper for more details.
|
Does $((x-1)! \bmod x) - (x-1) \equiv 0\implies \text{isPrime}(x)$ Does $$((x-1)! \bmod x) - (x-1) = 0$$
imply that $x$ is prime?
|
I want to add that this is the easy direction of Wilson's theorem. I have often seen Wilson's Theorem stated without the "if and only if" because this second direction is so easy to prove:
Proof: If $x > 1$, $x \ne 4$ were not prime, then among $2,\dots,x-1$ we could find two distinct factors whose product is divisible by $x$ (write $x=ab$ with $1<a<b<x$; if no such unequal factorisation exists then $x=p^2$ with $p>2$, and we may take $p$ and $2p$ instead), and thus we would have $(x - 1)! \equiv 0 \pmod x$, contradicting the statement. On the other hand if $x = 4$, then $(x - 1)! = 6 \equiv 2 \not \equiv -1$, so in any case $x$ must be prime.
At one point $4$ was my favorite integer for precisely the reason that it was the only exception to $(x - 1)! \equiv 0 \text{ or } -1 \pmod x$.
|
Linear equation System What are the solutions of the following system:
$ 14x_1 + 35x_2 - 7x_3 - 63x_4 = 0 $
$ -10x_1 - 25x_2 + 5x_3 + 45x_4 = 0 $
$ 26x_1 + 65x_2 - 13x_3 - 117x_4 = 0 $
*
*4 unknowns (n), 3 equations
$ Ax=0: $
$ \begin{pmatrix} 14 & 35 & -7 & -63 & 0 \\ -10 & -25 & 5 & 45 & 0 \\ 26 & 65 & -13 & -117& 0 \end{pmatrix} $
Is there really more than one solution because of:
*
*$\operatorname{rank}(A) = \operatorname{rank}(A') = 1 < n $
What irritates me:
http://www.wolframalpha.com/input/?i=LinearSolve%5B%7B%7B14%2C35%2C-7%2C-63%7D%2C%7B-10%2C-25%2C5%2C45%7D%2C%7B26%2C65%2C-13%2C-117%7D%7D%2C%7B0%2C0%2C0%7D%5D
Row reduced form:
$ \begin{pmatrix} 1 & 2.5 & -0.5 & -4.5 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} $
How to find a solution set ?
|
Yes, there are infinitely many real solutions. Since there are more unknowns than equations, this system is called underdetermined. Underdetermined systems can have either no solutions or infinitely many solutions. Trivially the zero vector solves the equation:$$Ax=0$$
This is sufficient to give that the system must have infinitely many solutions. To find these solutions, it suffices to find the row reduced echelon form of the augmented matrix for the system. As you have already noted, the augmented matrix is:
\begin{pmatrix} 14 & 35 & -7 & -63 & 0 \\ -10 & -25 & 5 & 45 & 0 \\ 26 & 65 & -13 & -117& 0 \end{pmatrix}
Row reducing this we obtain:
\begin{pmatrix} 1 & \frac{5}{2} & \frac{-1}{2} & \frac{-9}{2} & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0& 0 \end{pmatrix}
This corresponds to the equation $$x_1+\frac{5}{2}x_2-\frac{1}{2}x_3-\frac{9}{2}x_4=0$$
You can then express this solution set with
$$x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}=\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \frac{2}{9}x_1 +\frac{5}{9}x_2 -\frac{1}{9}x_3 \end{pmatrix}$$
As you have already noted, the rank(A) = 1, giving you $n-1=3$ free parameters. That is, you can supply any real values for $x_1,x_2,x_3$ and $x_4$ will be given as above. The choice to let $x_1,x_2,x_3$ be free parameters here is completely arbitrary. One could also freely choose $x_1,x_2,x_4$ and have $x_3$ be determined by solving for $x_3$ in the above equation.
The wikipedia article on underdetermined systems has some more details and explanation than what I've provided: http://en.wikipedia.org/wiki/Underdetermined_system
Row reduced echelon forms can be computed using Wolfram Alpha by entering "row reduce" following by the matrix. If you're interested, Gauss-Jordan elimination is a pretty good method for calculating reduced-row echelon forms by hand.
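They can also be computed in a few lines (a sketch, assuming SymPy):

```python
from sympy import Matrix

A = Matrix([[14, 35, -7, -63],
            [-10, -25, 5, 45],
            [26, 65, -13, -117]])
R, pivots = A.rref()
print(R)              # first row: 1, 5/2, -1/2, -9/2; the other rows vanish
print(len(pivots))    # rank 1, so 4 - 1 = 3 free parameters
print(A.nullspace())  # three vectors spanning the solution set
```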
|
Showing that $\{x\in\mathbb R^n: \|x\|=\pi\}\cup\{0\}$ is not connected I do have problems with connected sets so I got the following exercise:
$X:=\{x\in\mathbb{R}^n: \|x\|=\pi\}\cup\{0\}\subset\mathbb{R}^n$. Why is $X$ not connected?
My attempt: I have to find disjoint open sets $U,V\ne\emptyset$ such that $U\cup V=X$ .
Let $U=\{x\in\mathbb{R}^n: \|x\|=\pi\}$ and $V=\{0\}$. Then $V$ is relative open since $$V=\{x\in\mathbb{R}^n:\|x\|<1\}\cap X$$ and $\{x\in\mathbb{R}^n:\|x\|<1\}$ is open in $\mathbb{R}^n$.
Is this right? And why is $U$ open?
|
A more fancy approach: it will suffice to say that the singleton $\{0\}$ is a clopen (=closed and open) set. It is closed, because one point sets are always closed. It is open, because it is $X \cap \{ x \in \mathbb{R}^n \ : \ || x || < 1 \}$. (For the same reason your $U$ is open in $X$: it equals $X \cap \{ x \in \mathbb{R}^n : \|x\| > 1 \}$.)
|
find $\frac{ax+b}{x+c}$ in partial fractions $$y=\frac{ax+b}{x+c}$$ find a,b and c given that there are asymptotes at $x=-1$ and $y=-2$ and the curve passes through (3,0)
I know that c=1 but I dont know how to find a and b?
I thought you expressed y in partial fraction so that you end up with something like $y=Ax+B+\frac{C}{x+D}$
|
The line $x =-1$ is an asymptote; in that case we must have:
$$\lim_{x\to -1}\frac{ax+b}{x+c}=\pm\infty$$
I've written $\pm$ because any of them is valid. This is just abuse of notation to make this a little faster. For this to happen we need a zero in the denominator at $x=-1$ since this is just a rational function. In that case we want $c=1$.
The line $y = -2$ is an asymptote, so that we have:
$$\lim_{x\to +\infty}\frac{ax+b}{x+c}=-2$$
In that case, our strategy is to calculate the limit and equal it to this value. To compute the limit we do as follows:
$$\lim_{x\to +\infty}\frac{ax+b}{x+c}=\lim_{x\to\infty}\frac{a+\frac{b}{x}}{1+\frac{c}{x}}=a$$
This implies that $a = -2$. Finally, since the curve passes through $(3,0)$ we have (I'll already plug in $a$ and $c$):
$$\frac{-2\cdot3+b}{3+1}=0$$
This implies that $b=6$. So the function you want is $f : \Bbb R \setminus \{-1\} \to \Bbb R$ given by:
$$f(x)=\frac{-2x+6}{x+1}$$
|
In linear logic sequent calculus, can $\Gamma \vdash \Delta$ and $\Sigma \vdash \Pi$ be combined to get $\Gamma, \Sigma \vdash \Delta, \Pi$? Linear logic is a certain variant of sequent calculus that does not generally allow contraction and weakening. Sequent calculus does admit the cut rule: given contexts $\Gamma$, $\Sigma$, $\Delta$, and $\Pi$, and a proposition $A$, we can make the inference
$$\frac{\Gamma \vdash A, \Delta \qquad \Sigma, A \vdash \Pi}{\Gamma, \Sigma \vdash \Delta, \Pi}.$$
So what I'm wondering is if it's also possible to derive a "cut rule with no $A$":
$$\frac{\Gamma \vdash \Delta \qquad \Sigma \vdash \Pi}{\Gamma, \Sigma \vdash \Delta, \Pi}.$$
|
The rule you suggest is called the "mix" rule, and it is not derivable from the standard rules of linear logic. Actually, what I know is that it's not derivable in multiplicative linear logic; I can't imagine that additive or exponential rules would matter, but I don't actually know that they don't. Hyland and Ong constructed a game semantics for multiplicative linear logic that violates the mix rule.
|
Calculating $\int_{\pi/2}^{\pi}\frac{x\sin{x}}{5-4\cos{x}}\,\mathrm dx$
Calculate the following integral:$$\int_{\pi/2}^{\pi}\frac{x\sin{x}}{5-4\cos{x}}\,\mathrm dx$$
I can calculate the integral on $[0,\pi]$, but I want to know how to do it on $[\frac{\pi}{2},\pi]$.
|
$$\begin{align}\int_{\pi/2}^\pi\frac{x\sin x}{5-4\cos x}dx&=\pi\left(\frac{\ln3}2-\frac{\ln2}4-\frac{\ln5}8\right)-\frac12\operatorname{Ti}_2\left(\frac12\right)\\&=\pi\left(\frac{\ln3}2-\frac{\ln2}4-\frac{\ln5}8\right)-\frac12\Im\,\chi_2\left(\frac{\sqrt{-1}}2\right),\end{align}$$
where $\operatorname{Ti}_2(z)$ is the inverse tangent integral and $\Im\,\chi_\nu(z)$ is the imaginary part of the Legendre chi function.
Hint:
Use the following Fourier series and integrate termwise:
$$\frac{\sin x}{5-4\cos x}=\sum_{n=1}^\infty\frac{\sin n x}{2^{n+1}}.$$
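The series itself is easy to verify numerically (a sketch, assuming NumPy; the sample point $x=0.7$ is arbitrary):

```python
import numpy as np

x = 0.7
lhs = np.sin(x) / (5 - 4*np.cos(x))
rhs = sum(np.sin(n*x) / 2**(n + 1) for n in range(1, 60))  # fast convergence
print(lhs, rhs)   # agree to machine precision
```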
|
What are the main relationships between exclusive OR / logical biconditional? Let $\mathbb{B} = \{0,1\}$ denote the Boolean domain.
Its well known that both exclusive OR and logical biconditional make $\mathbb{B}$ into an Abelian group (in the former case the identity is $0$, in the latter the identity is $1$).
Furthermore, I was playing around and noticed that these two operations 'associate' over each other, in the sense that $(x \leftrightarrow y) \oplus z$ is equivalent to $x \leftrightarrow (y \oplus z).$
This is easily seen via the following chain of equivalences.
*
*$(x \leftrightarrow y) \oplus z$
*$(x \leftrightarrow y) \leftrightarrow \neg z$
*$x \leftrightarrow (y \leftrightarrow \neg z)$
*$x \leftrightarrow (y \oplus z)$
Anyway, my question is, what are the major connections between the operations of negation, biconditional, and exclusive OR? Furthermore, does $(\mathbb{B},\leftrightarrow,\oplus,\neg)$ form any familiar structure? I know that the binary operations don't distribute over each other, so it's not a ring.
|
You probably already know this, but the immediate connection between them is $(x\oplus y) \leftrightarrow \neg(x \leftrightarrow y)$. Then the exclusive OR reduces trivially to the biconditional, and vice versa.
|
Why differentiating a function is similar to differentiating its log function? $f(6;p)=\binom{25}{6}p^6(1-p)^6,\quad 0\le p\le1$
I have three questions:
$(1)$ To find the relative maxima, the process is to set the derivative of the function with respect to $p$ equal to $0$ and solve the resulting equation for $p$.
For easier computation we take the log of the function, set the derivative of the log with respect to $p$ equal to $0$, and solve the resulting equation for $p$.
$\bullet$ "My question is: why does setting $\frac{d}{dp}f(6;p)=0$ give the same solutions as setting $\frac{d}{dp}\log f(6;p)=0$?"
(2) After the step(1) i found that $p=\frac{6}{25}$ is the only root.
But on the book, which i am reading, is written that $p=0,1,\frac{6}{25}$ are the roots.
$\bullet$"How p=0,1 can be root? The denominator $p(1-p)$ becomes $0$ when i do cross multiplication."
(3)To check which value of $p$ maximize $f(6;p)$ , we take the second derivative of the function with respect to $p$ and set $p=0,1,\frac{6}{25}$ respectively. If the second derivative is less than $0$, then $p=$ that value maximize the function .
for $p=\frac{6}{25}$ the second derivative is $-125.56<0$ , so $ p=\frac{6}{25} $ gives maximum.
I also got that for $p=0,1$ the second derivative is $-\infty$, so $p=0,1$ also give a maximum.
But the book comments that $p=0,1$ give a minimum.
$\bullet$"How $p=0,1$ give a minimum?"
|
There is a typo in the post, presumably you mean $p^6(1-p)^{19}$.
Since $\log$ is an increasing function, finding the maximum of $p^6(1-p)^{19}$ and finding the maximum of its logarithm are equivalent problems. There is a bit of a question mark about the endpoints $p=0$ and $p=1$, but the maximum is clearly not there, so we are examining the logarithm in the interval $(0,1)$, where there is no issue.
The logarithm is $6\log p +19\log(1-p)$, which has derivative $\frac{6}{p}-\frac{19}{1-p}$.
This derivative is not $0$ at $p=0$ or $p=1$, in fact it is undefined at these places.
It is true that the derivative of $p^6(1-p)^{19}$ is $0$ at $p=0$ and $p=1$, in addition to $p=\frac{6}{25}$, for the derivative is $(p^6)(-19)(1-p)^{18}+(1-p)^{19}(6)(p^5)$. This clearly vanishes at $p=0$ and $p=1$. In fact, by taking out the common factor $p^5(1-p)^{18}$, we can quickly find where the derivative is $0$ without doing the detour through logarithms.
The second derivative is not really suitable for testing what happens at the endpoints. I assume these are probabilities, so $p$ is restricted to the interval $[0,1]$. Our original expression is obviously $0$ at the endpoints, and positive if $0\lt p\lt 1$, so it is clear that we have an absolute minimum at $0$ and at $1$.
Actually, I would not use the second derivative test at all. Our derivative simplifies to $\frac{6-25p}{p(1-p)}$. The denominator is positive for all $p$ in the interval $(0,1)$. Looking at the numerator, we can see it is positive up to $p=\frac{6}{25}$ and then negative. So the logarithm is increasing in the interval $(0,\frac{6}{25}]$, and then decreasing, so reaches a maximum at $p=\frac{6}{25}$.
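A quick numerical illustration of all of the above (a sketch, assuming NumPy): the likelihood vanishes at both endpoints and peaks at $p = 6/25$.

```python
import numpy as np

p = np.linspace(0, 1, 100001)
L = p**6 * (1 - p)**19          # binomial coefficient omitted (constant factor)
print(p[np.argmax(L)])          # 0.24 = 6/25
print(L[0], L[-1])              # 0.0 at both endpoints: absolute minima
```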
|
Basis for $L^2(0,T;H)$ Given a basis $b_i$ for the separable Hilbert space $H$, what is the basis for $L^2(0,T;H)$? Could it be $\{a_jb_i : j, i \in \mathbb{N}\}$ where $a_j$ is the basis for $L^2(0,T)$?
|
You are not far from the correct result. The desired basis is the family of functions
$\{f_{i,j}:i,j\in\mathbb{N}\}$ defined as
$$
f_{i,j}(t)=a_j(t)\,b_i
$$
The deep reason for this is the following: since we have an identification
$$
L_2((0,T), H)\cong L_2(0,T)\otimes_2 H
$$
it is enough to study bases of the Hilbert tensor product of Hilbert spaces. It is known that for Hilbert spaces $K$, $H$ with orthonormal bases $\{e_i:i\in I\}$ and $\{f_j:j\in J\}$ respectively, the family
$$
\{e_i\otimes_2 f_j : i\in I,\ j\in J\}
$$
is an orthonormal basis of $K\otimes_2 H$.
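As a minimal numeric illustration (a sketch under assumptions not in the answer: $T=1$, $H=\mathbb{R}^2$ with its standard basis $b_i$, and the orthonormal sine basis $a_j(t)=\sqrt{2}\sin(j\pi t)$ of $L^2(0,1)$), one can check orthonormality of the products directly:

```python
import numpy as np

T = 1.0
t = np.linspace(0.0, T, 20001)
dt = t[1] - t[0]

def a(j):
    # Orthonormal sine basis of L^2(0, T).
    return np.sqrt(2.0 / T) * np.sin(j * np.pi * t / T)

b = np.eye(2)  # standard basis of H = R^2

def inner(i, j, k, l):
    # <f_{i,j}, f_{k,l}> = (integral of a_j * a_l dt) * <b_i, b_k>_H
    return np.sum(a(j) * a(l)) * dt * (b[i] @ b[k])

# Prints 1 when (i, j) == (k, l) and 0 otherwise, up to quadrature error.
idx = [(0, 1), (0, 2), (1, 1), (1, 2)]
for (i, j) in idx:
    print([round(inner(i, j, k, l), 4) for (k, l) in idx])
```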
|
Prove that a function f is continuous (1) $f:\mathbb{R} \rightarrow \mathbb{R}$ such that
$$f(x) =
\begin{cases}
x \sin(\ln(|x|))& \text{if $x\neq0$} \\
0 & \text{if $x=0$} \\
\end{cases}$$
Is $f$ continuous on $\mathbb{R}$?
I want to use the fact that the composition of two continuous functions is continuous:
$$f:I \rightarrow J (\subset \mathbb{R})$$
$$g:J \rightarrow \mathbb{R}$$
$$g \circ f:I \rightarrow \mathbb{R}, \quad x \mapsto g(f(x))$$
1) For $f=\ln(|x|)$:
"Since $\ln x$ is defined as an integral, by the Fundamental Theorem of Calculus it is differentiable and its derivative is the integrand $\frac{1}{x}$. As every differentiable function is continuous, $\ln x$ is continuous."
So $f=\ln(|x|)$ is continuous on $I = (0, \infty)$, and by the symmetry of $|x|$ on all of $\mathbb{R}\setminus\{0\}$.
2) For $g= \sin(x)$:
Given $\epsilon > 0$, choose $\delta = \epsilon$. Then for $x \in J$ with $|x - x_0| < \delta$, $x_0 \in \mathbb{R}$,
$$|g(x) - g(x_0)| = |\sin(x)-\sin(x_0)| \leq |x - x_0| < \delta = \epsilon,$$
using the standard estimate $|\sin(x)-\sin(x_0)| \leq |x-x_0|$, which follows from $|\sin(t)| \leq |t|$ and the sum-to-product formula.
So $g$ is continuous on $\mathbb{R}$.
3) Because $x \mapsto x$ is also continuous on $\mathbb{R}$,
$x \sin(\ln(|x|))$ is continuous for $x \neq 0$.
Is my proof correct?
Are there shorter ways to get this result?
|
For all $x\in (-\infty,0)\cup(0,+\infty)$ the function is continuous, since it is a composition of continuous functions (I think it is necessary to show this in the task; as mentioned in the comments, the real problem is $x=0$).
By definition, a function is continuous at $x_0$ if $$\lim_{x\rightarrow x_0}f(x)=f(x_0)$$
In this case:
$$\lim_{x\rightarrow 0}x\sin(\ln(|x|))=0$$
because $\lim_{x\rightarrow0}x=0$ and $|\sin(\ln(|x|))|\leq1$ (sine is bounded), so $|x\sin(\ln(|x|))|\leq|x|\rightarrow 0$.
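A quick numeric illustration of the squeeze $|x\sin(\ln|x|)|\leq|x|$ (just a sketch; the bound above is the actual argument):

```python
import math

# Evaluate f(x) = x * sin(ln|x|) at points approaching 0.
for k in range(1, 8):
    x = 10.0 ** (-k)
    fx = x * math.sin(math.log(abs(x)))
    print(f"x = {x:.0e}   f(x) = {fx: .3e}   bound |x| = {x:.0e}")
```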
|
Counter example of upper semicontinuity of fiber dimension in classical algebraic geometry We know that if $f : X\to Y$ is a morphism between two irreducible affine varieties over an algebraically closed field $k$, then the function that assigns to each point of $X$ the dimension of the fiber it belongs to is upper semicontinuous on $X$.
Does anyone know of a simple counterexample when $X$ is not irreducible anymore (but remains an algebraic set over $k$, i.e a finitely generated $k$-algebra) ?
Edit: to avoid ambiguity about the definition of upper semicontinuity, it means here that for all $n\geq 0$, the set of $x\in X$ such that $\dim(f^{-1}(f(x))) \geq n$ is closed in $X$.
It seems to me it is not so obvious to find a counterexample, since in fact the set of $x\in X$ such that the dimension of the irreducible component of $f^{-1}(f(x) )$ in $X$ that contains $x$ is $\geq n$ is always closed even when $X$ is not irreducible.
|
I got my answer on MO, here: mathoverflow.net/questions/133567/…
|
Distinguishable telephone poles being painted Each of n (distinguishable) telephone poles is painted red, white, blue or yellow. An odd number are painted blue and an even number yellow. In how many ways can this be done?
Can someone give me a hint on how to approach this problem?
|
Consider the generating function given by
$( R + W + B + Y )^n$
Without restriction, the sum of all coefficients would give the number of ways to paint the distinguishable posts in any of the 4 colors. We substitute $R=W=B=Y = 1$ to find this sum, and it is (unsurprisingly) $4^n$.
Since there are no restrictions on $R$ and $W$, we may replace them with $1$, and still consider the coefficients.
If we have the restriction that we are only interested in cases where the degree of $B$ is odd (ignore $Y$ for now), then since
$ \frac{1^k - (-1)^k}{2} = \begin{cases} 1 & k \equiv 1 \pmod{2} \\ 0 & k \equiv 0 \pmod{2} \\ \end{cases}$
the sum of the coefficients when the degree of $B$ is odd, is just the sum of the coefficients of $ \frac{ (R + W + 1 + Y) ^n - ( R + W + (-1) + Y) ^n} { 2} $.
Substituting in $R=W=Y=1$, we get that the number of ways is
$$ \frac{ (1 + 1 + 1 + 1)^n - (1 + 1 + (-1) +1)^n}{2} = \frac {4^n - 2^n} {2}$$
Now, how do we add in the restriction that the degree of $Y$ is even? Observe that since
$ \frac{1^k + (-1)^k}{2} = \begin{cases} 1 & k \equiv 0 \pmod{2} \\ 0 & k \equiv 1 \pmod{2} \\ \end{cases}$
the sum of the coefficients when the degree of $B$ is odd and the degree of $Y$ is even is just the sum of the coefficients of
$$ \frac{ \frac{ (R + W + 1 + 1) ^n - ( R + W + (-1) + 1) ^n} { 2} + \frac{ (R + W + 1 + (-1)) ^n - ( R + W + (-1) + (-1)) ^n} { 2} } { 2} $$
Now substituting in $R=W=1$, we get $\frac{ 4^n - 2 ^n + 2^n - 0^n } { 4} = 4^{n-1}$
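A brute-force count for small $n$ (a sketch; the pole labels and color letters are of course arbitrary) agrees with the closed form $4^{n-1}$:

```python
from itertools import product

# Count colorings of n distinguishable poles with an odd number of B
# and an even number of Y, and compare with 4^(n-1).
for n in range(1, 8):
    count = sum(
        1
        for colors in product("RWBY", repeat=n)
        if colors.count("B") % 2 == 1 and colors.count("Y") % 2 == 0
    )
    print(n, count, 4 ** (n - 1))
```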
|
Are there any integer solutions to $\gcd(\sigma(n), \sigma(n^2)) = 1$ other than for prime $n$? A good day to everyone!
Are there any integer solutions to $\gcd(\sigma(n), \sigma(n^2)) = 1$ other than for prime $n$ (where $\sigma = \sigma_1$ is the sum-of-divisors function)?
Note that, if $n = p$ for prime $p$ then
$$\sigma(p) = p + 1$$
$$\sigma(p^2) = p^2 + p + 1 = p(p + 1) + 1.$$
These two equations can be put together into one as
$$\sigma(p^2) = p\sigma(p) + 1,$$
from which it follows that
$$\sigma(p^2) + (-p)\cdot\sigma(p) = 1.$$
The last equation implies that $\gcd(\sigma(p), \sigma(p^2)) = 1$.
I now attempt to show that prime powers also satisfy the number-theoretic equation in this question.
If $n = q^k$ for $q$ prime, then
$$\sigma(q^{2k}) = \frac{q^{2k + 1} - 1}{q - 1} = \frac{q^{2k + 1} - q^{k + 1}}{q - 1} + \frac{q^{k + 1} - 1}{q - 1} = \frac{q^{k + 1}(q^k - 1)}{q - 1} + \sigma(q^k).$$
Re-writing the last equation, we get
$$(q - 1)\left(\sigma(q^{2k}) - \sigma(q^k)\right) = q^{k + 1}(q^k - 1).$$
Since $\gcd(q - 1, q) = 1$, then we have
$$q^{k + 1} \mid \left(\sigma(q^{2k}) - \sigma(q^k)\right).$$
But we also have
$$\sigma(q^{2k}) - \sigma(q^k) = q^{k + 1} + q^{k + 2} + \ldots + q^{2k} \equiv 0 \pmod {q^{k + 1}}.$$
Alas, this is where I get stuck. (I know of no method that can help me express $1$ as a linear combination of $\sigma(q^{2k})$ and $\sigma(q^k)$, from everything that I've written so far.)
Anybody else here have any ideas?
Thank you!
|
Let $n=pq$ where $p,q$ are two distinct primes.
Then
$$\sigma(n)=(p+1)(q+1)$$
$$\sigma(n^2)=(1+p+p^2)(1+q+q^2)$$
For example, for $p=2$, $q=3$ (so $n=6$):
$$\sigma(6)=12$$
$$\sigma(6^2)=7 \cdot 13$$
and $\gcd(12,\, 7\cdot 13)=1$.
Note that $p+1$ and $p^2+p+1$ are always coprime (since $p^2+p+1=p(p+1)+1$), and likewise for $q$. So for $n=pq$ we have $\gcd(\sigma(n), \sigma(n^2))=1$ exactly when the cross factors are coprime, i.e.
$$\gcd(p+1,q^2+q+1)=\gcd(q+1,p^2+p+1)=1 \,.$$
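A quick brute-force search (a sketch using SymPy) shows that many composite $n$ satisfy the gcd condition, prime powers included:

```python
from math import gcd
from sympy import divisor_sigma, isprime

# Composite n < 100 with gcd(sigma(n), sigma(n^2)) = 1.
hits = [n for n in range(2, 100)
        if not isprime(n)
        and gcd(int(divisor_sigma(n)), int(divisor_sigma(n**2))) == 1]
print(hits)  # begins 4, 6, 8, 9, 10, 12, 15, 16, ...
```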
|
Smooth Pac-Man Curve? Idle curiosity and a basic understanding of the last example here led me to this polar curve: $$r(\theta) = \exp\left(10\frac{|2\theta|-1-||2\theta|-1|}{|2\theta|}\right)\qquad\theta\in(-\pi,\pi]$$ which Wolfram Alpha shows to look like this:
The curve is not defined at $\theta=0$, but we can augment with $r(0)=0$. If we do, then despite appearances, the curve is smooth at $\theta=0$. It is also smooth at the back where two arcs meet. However it is not differentiable at the mouth corners.
Again out of idle curiosity, can someone propose a polar equation that produces a smooth-everywhere Pac-Man on $(-\pi,\pi]$? No piece-wise definitions please, but absolute value is OK.
|
Not a very good one: $r(\theta) = e^{-\dfrac{1}{20 \theta^2}}$
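A quick plot of this suggestion (a sketch assuming matplotlib; the value at $\theta=0$ is taken as the limit $0$):

```python
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(-np.pi, np.pi, 2001)
theta = theta[theta != 0.0]            # avoid division by zero; r -> 0 there
r = np.exp(-1.0 / (20.0 * theta**2))   # the proposed smooth Pac-Man

ax = plt.subplot(projection="polar")
ax.plot(theta, r)
plt.show()
```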
|
Need explanation of passage about Lebesgue/Bochner space From a book:
Let $V$ be Banach and $g \in L^2(0,T;V')$. For every $v \in V$, it holds that
$$\langle g(t), v \rangle_{V',V} = 0\tag{1}$$
for almost every $t \in [0,T]$.
What I don't understand is the following:
This is equivalent to $$\langle g(t), v(t) \rangle_{V',V} = 0\tag{2}$$ for all $v \in L^2(0,T;V)$ and for almost every $t \in [0,T]$.
OK, so if $v \in L^2(0,T;V)$, then $v(t) \in V$, so (2) follows from (1). How about the reverse? Also, is my reasoning really right? I am worried about the "for almost every $t$" part of these statements; it confuses me whether I am thinking correctly.
Edit for the bounty: as Tomas' comment below, is the null set where (1) and (2) are non-zero the same for every $v$? If not, is this a problem? More details would be appreciated.
|
For $(2)$ implies $(1)$, consider the function $v\in L^2(0,T;V)$ defined by $$v(t)=w, \quad \forall\ t\in [0,T],$$
where $w\in V$ is fixed. Hence you have by $(2)$ that $$\langle g(t),v(t)\rangle=\langle g(t),w\rangle=0$$
for almost every $t$. By varying $w$, you can conclude, since the null set in $(1)$ is allowed to depend on $v$. (As for the bounty question: the null sets are in general different for different $w$. If $V$ is separable, one can take the union of the null sets over a countable dense subset of $V$; off this single null set, $\langle g(t), v\rangle = 0$ for every $v\in V$ by density, i.e. $g(t)=0$ for almost every $t$.)
|